contenox CLI Reference
contenox is the local AI agent CLI. It runs the Contenox chain engine entirely on your machine.
Global Flags
Persistent flags on the root command (also shown under Global Flags on subcommands). Run contenox --help for the full list.
| Flag | Description |
|---|---|
| `--model <name>` | Default model name (built-in default: `qwen2.5:7b`; override the stored default via `contenox config set default-model`) |
| `--provider <type>` | Provider override: `ollama`, `openai`, `vllm`, `gemini`, `vertex-google`, `vertex-anthropic`, `vertex-meta`, `vertex-mistralai` |
| `--db <path>` | SQLite DB path (see `contenox --help` for the default; typically the project's `.contenox/local.db` or `~/.contenox/local.db`, depending on layout) |
| `--data-dir <path>` | Override the `.contenox` data directory (skips the walk-up search; also sets the default DB location to `<path>/local.db`) |
| `--timeout` | Maximum execution time per invocation (default: 5m) |
| `--context` | Context-length hint for the tokenizer |
| `--ollama` | Ollama base URL (default: `http://127.0.0.1:11434`) |
| `--no-delete-models` | Do not delete undeclared Ollama models (default: true for the CLI) |
| `--chain <path>` | Chain JSON for injected `run`/`chat` where applicable |
| `--input <value>` | Input string or `@file` (for `chat` and bare `run`) |
| `--trace` | Print structured operation telemetry to stderr |
| `--steps` | Print execution steps after the result |
| `--think` | Print the model's reasoning trace to stderr (thinking models) |
| `--raw` | Print the full structured output (e.g. the entire chat JSON) |
| `--shell` | Enable the `local_shell` hook (trusted environments only) |
| `--local-exec-allowed-dir <dir>` | Restrict `local_shell` to a directory |
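The `@file` form of `--input` reads the payload from a file instead of using the literal string. Conceptually (a sketch, not the CLI's actual implementation; `parse_input` is a hypothetical helper), the resolution looks like:

```shell
# Sketch: how an "@file" input value can be resolved in shell.
# parse_input is a hypothetical helper, not part of contenox.
parse_input() {
  case "$1" in
    @*) cat "${1#@}" ;;        # "@notes.txt" -> contents of notes.txt
    *)  printf '%s' "$1" ;;    # anything else passes through literally
  esac
}
```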
Subcommands
contenox (bare — stateless run)
If the first token is not a reserved subcommand (chat, init, run, plan, beam, …), the CLI implicitly prepends `run`. This is stateless: no chat session is created or used.
The default chain file is `<resolved .contenox>/default-run-chain.json`, where `.contenox` is found by walking up from the current working directory (same rules as `contenox init`). It is not read from `~/.contenox/` — global state lives in `~/.contenox/local.db`, but chain JSON files are project-local. If no `--chain` is set and the default file is missing, `contenox run` errors with a hint to run `contenox init` or pass `--chain`.
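The walk-up search described above can be sketched in shell. `find_contenox` is a hypothetical helper that mirrors the stated rules (nearest `.contenox` directory at or above the current directory); it is not the CLI's actual code:

```shell
# Sketch: walk up from the current directory until a .contenox
# directory is found; fail if the filesystem root is reached.
find_contenox() {
  dir="$(pwd)"
  while :; do
    if [ -d "$dir/.contenox" ]; then
      printf '%s/.contenox\n' "$dir"
      return 0
    fi
    [ "$dir" = "/" ] && return 1   # reached root: not found
    dir="$(dirname "$dir")"
  done
}
```

On failure, the real CLI prints the hint mentioned above rather than returning a bare exit code.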
contenox "what can you do?"
echo "summarise README.md" | contenox
contenox --shell "list files here"
contenox --local-exec-allowed-dir . "summarise the README"
| Flag | Description |
|---|---|
| `--shell` | Enable the `local_shell` hook (opt-in; the command policy is defined in the chain) |
| `--local-exec-allowed-dir <dir>` | Restrict `local_fs` tools to this directory |
| `--hitl` | Enable human-in-the-loop approval. Tool calls matching the active policy pause for y/n approval in the terminal. The active policy is configured in the workspace under Admin → HITL Policies. |
contenox chat
Sends a message to the active chat session and prints the response. History is persisted across invocations in SQLite.
contenox chat "what can you do?"
echo "summarise README.md" | contenox chat
contenox chat --shell "list files here"
| Flag | Description |
|---|---|
| `--trim N` | Send only the last N messages of session history to the model (0 = all) |
| `--last N` | Print the last N user/assistant turns after the reply (0 = only the new reply) |
| `--shell` | Enable the `local_shell` hook |
| `--local-exec-allowed-dir <dir>` | Restrict `local_fs` tools to this directory |
| `--hitl` | Enable HITL approval prompts (active policy set in workspace HITL Policies) |
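A sketch of the `--trim` semantics, treating the session history as one message per line (`trim_history` is a hypothetical illustration, assuming 0 means "send everything"):

```shell
# Sketch: keep only the last N messages; N=0 keeps all of them.
# trim_history is illustrative, not part of contenox.
trim_history() {
  file="$1"; n="$2"
  if [ "$n" -eq 0 ]; then
    cat "$file"                # 0 = send the full history
  else
    tail -n "$n" "$file"       # otherwise only the last N messages
  fi
}
```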
contenox session
Manage named chat sessions. Each session maintains its own conversation history.
contenox session list # list all sessions (* = active)
contenox session new [name] # create a session (becomes active)
contenox session switch <name> # switch to a different session
contenox session show # show active session's history
contenox session show <name> # show any session by name
contenox session show --tail 10 # show last 10 messages
contenox session show --head 5 # show first 5 messages
contenox session show default --tail 6 # tail a non-active session
contenox session delete <name> # delete session and all messages
contenox run
Executes a chain non-interactively. No session history.
contenox run --chain .contenox/chain-nws.json --input-type chat "how is the weather?"
contenox run --chain .contenox/my-chain.json --shell "refactor main.go"
- `--chain <path>`: Optional if `<resolved .contenox>/default-run-chain.json` exists; otherwise required.
- `--input-type <type>`: `string` (default), `chat`, `json`, `int`, `float`, `bool` — see `contenox run --help`.
- `--shell`: Enable shell execution for this invocation (use only in trusted environments).
- `--think` / `--trace` / `--steps`: Global flags (see the table above).
contenox plan
Autonomous multi-step execution using a separate "planner" model that directs an "executor" model. For a conceptual overview (what gets stored, planner vs executor, typical workflow), see Execution Plans.
contenox plan new "analyze main.go, find the bug, and write a fix to patch.diff"
contenox plan list # list all plans (* = active)
contenox plan show # show active plan's steps and status
contenox plan next # execute next pending step
contenox plan next --shell # execute next step with shell access enabled
contenox plan next --auto # run all pending steps automatically
contenox plan retry <ordinal> # reset a failed step back to pending
contenox plan skip <ordinal> # mark a step done without running it
contenox plan replan # ask planner to revise remaining steps
contenox plan delete <name> # delete a plan by name
contenox plan clean # delete all completed/archived plans
plan next flags:
| Flag | Description |
|---|---|
| `--auto` | Continue executing steps automatically until the plan is done or a step fails |
| `--shell` | Enable the `local_shell` hook for this step (required for shell-based tasks) |
| `--gate` | Use the gated executor: after each tool round, a small model scores whether to continue. Aborts on bad/corrupt tool output. Adds latency and cost. |
| `--hitl` | Pause before each write/shell tool call and require y/n approval in the terminal |
Global flags --trace, --steps, and --think apply to plan commands that execute chains.
contenox plan explore
Runs the read-only explorer chain (`chain-plan-explorer.json`) against the workspace and persists a RepoContext on the active plan. The RepoContext captures languages, entry points, build/test commands, key files, and conventions, then injects them as `{{var:repo_context}}` into every subsequent step's prompt — so each step sees concrete paths instead of cold-exploring the repo from scratch.
contenox plan explore # explore for the active plan
contenox plan new --explore "..." # explore as part of plan creation
The explorer is read-only by contract: only `local_fs` and other read-only hooks are allowlisted.
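For illustration only, a persisted RepoContext might look something like the fragment below. The field names and every value are invented for this example, not the documented schema; whatever shape it actually takes, its rendered form is what `{{var:repo_context}}` expands to in each step's prompt.

```json
{
  "languages": ["Go"],
  "entry_points": ["cmd/contenox/main.go"],
  "build_commands": ["go build ./..."],
  "test_commands": ["go test ./..."],
  "key_files": ["core/chainengine/engine.go"],
  "conventions": "errors wrapped with fmt.Errorf; table-driven tests"
}
```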
contenox doctor
Prints local LLM setup readiness — same evaluation as the workspace GET /api/setup-status.
contenox doctor
contenox doctor --json # machine-readable output
contenox doctor --skip-cycle # faster; skips backend sync (status may be stale)
| Flag | Description |
|---|---|
| `--json` | Print results as JSON instead of human-readable text |
| `--skip-cycle` | Skip syncing backends before the check (faster, but may show stale status) |
contenox model
Manage models in the local Model Registry — a name-to-URL index of GGUF files that can be downloaded for local inference. See Local Models (GGUF) for a full walkthrough.
contenox model registry-list
List all curated and user-added registry entries. Does not require a running backend.
contenox model registry-list
contenox model pull
Download a curated or custom GGUF model to ~/.contenox/models/<name>/model.gguf.
contenox model pull qwen3-4b # curated model
contenox model pull my-model --url https://huggingface.co/org/repo/resolve/main/model.gguf
After downloading, register the local backend once:
contenox backend add local --type local --url ~/.contenox/models/
| Flag | Description |
|---|---|
| `--url` | Direct GGUF download URL (requires a model name as the first positional argument) |
contenox model add
Register a custom model entry in the local registry without downloading.
contenox model add my-model --url https://huggingface.co/org/repo/resolve/main/model.gguf
contenox model add my-model --url https://... --size 4500000000
| Flag | Description |
|---|---|
| `--url` | Source URL (required) |
| `--size` | File size in bytes (optional, informational) |
contenox model show
Display registry details for a model.
contenox model show qwen3-4b
contenox model rm
Remove a user-added registry entry by name. Curated entries cannot be removed.
contenox model rm my-model
contenox model list
List models currently available from all configured backends (live query, requires at least one backend).
contenox model list
contenox model set-context
Override the context window size for a specific model name. Useful when a backend reports a different (or no) context size than the model actually supports.
contenox model set-context qwen2.5:7b --context 32k
contenox model set-context gpt-5-mini --context 128k
contenox model set-context gemini-3.1-pro-preview --context 1m
| Flag | Description |
|---|---|
| `--context` | Context window size: a bare integer or shorthand (`12k`, `128k`, `1m`). Required. |
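The shorthand presumably multiplies by binary units. The sketch below assumes `k` = 1024 and `m` = 1024 × 1024; `ctx_tokens` is a hypothetical helper, and the CLI's actual multiplier may differ:

```shell
# Sketch: expand context-size shorthand into a token count.
# Assumes k = 1024 and m = 1024*1024; the real CLI may differ.
ctx_tokens() {
  case "$1" in
    *k) echo $(( ${1%k} * 1024 )) ;;
    *m) echo $(( ${1%m} * 1024 * 1024 )) ;;
    *)  echo "$1" ;;   # bare integer passes through unchanged
  esac
}
```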
contenox beam
Starts the Contenox runtime as an HTTP server and serves the Contenox workspace. It accepts the same environment variables as the standalone server; pass `--tenant` to set the tenant ID.
contenox beam
contenox beam --tenant 96ed1c59-ffc1-4545-b3c3-191079c68d79
Use contenox chat, contenox plan, contenox session, contenox hook, and contenox mcp from the terminal for shell-native workflows.
contenox hook
Manage remote OpenAPI hooks. See Remote Hooks and Hook Allowlist Patterns.
contenox hook add <name> --url <url>
contenox hook add <name> --url <url> --header "Authorization: Bearer $TOKEN" --inject "tenant_id=acme"
contenox hook list
contenox hook show <name>
contenox hook update <name> --header <...> --inject <...>
contenox hook remove <name>
| Flag | Description |
|---|---|
| `--url` | Base URL of the OpenAPI service (required) |
| `--header` | HTTP header to inject on every call, e.g. `"Authorization: Bearer $TOKEN"` (repeatable) |
| `--inject` | Tool-call argument to inject and hide from the model, e.g. `"tenant_id=acme"` (repeatable) |
| `--timeout` | Request timeout in milliseconds (default: 10000) |
contenox init
Initializes a new .contenox/ directory with default chain files and HITL policy presets. Pass a provider name to pre-configure the default model and provider.
contenox init # defaults to ollama
contenox init gemini # pre-configure for Gemini
contenox init openai # pre-configure for OpenAI
contenox init --force # overwrite existing files
| Flag | Description |
|---|---|
| `-f, --force` | Overwrite existing `.contenox/` files |
After init, register a backend:
contenox backend add local --type ollama
contenox config set default-model qwen2.5:7b
# Or use Ollama Cloud instead:
contenox backend add ollama-cloud --type ollama --url https://ollama.com/api --api-key-env OLLAMA_API_KEY
contenox backend
Register and manage LLM backend endpoints.
contenox backend add local --type ollama
contenox backend add ollama-cloud --type ollama --url https://ollama.com/api --api-key-env OLLAMA_API_KEY
contenox backend add openai --type openai --api-key-env OPENAI_API_KEY
contenox backend add gemini --type gemini --api-key-env GEMINI_API_KEY
contenox backend add myvllm --type vllm --url http://gpu-host:8000
contenox backend add vertex --type vertex-google \
--url "https://us-central1-aiplatform.googleapis.com/v1/projects/YOUR_PROJECT_ID/locations/us-central1"
contenox backend list
contenox backend show openai
contenox backend remove myvllm
| Flag | Description |
|---|---|
| `--type` | Backend type: `ollama`, `openai`, `gemini`, `vllm`, `local`, `vertex-google`, `vertex-anthropic`, `vertex-meta`, `vertex-mistralai` |
| `--url` | Base URL (auto-inferred for `openai`/`gemini`; required for `vllm` and all `vertex-*` types) |
| `--api-key-env` | Environment variable holding the API key (preferred) |
| `--api-key` | API key literal (avoid; use `--api-key-env`) |
contenox config
Manage persistent CLI defaults stored in SQLite.
contenox config set default-model qwen2.5:7b
contenox config set default-provider ollama
contenox config set default-chain .contenox/default-chain.json
contenox config set hitl-policy-name hitl-policy-strict.json
contenox config get default-model
contenox config list
Valid keys: default-model, default-provider, default-chain, hitl-policy-name.
contenox mcp
Register and manage MCP (Model Context Protocol) servers.
# Shorthand: name + URL (transport defaults to http)
contenox mcp add notion https://mcp.notion.com/mcp --auth-type oauth
# Stdio transport (local process)
contenox mcp add myserver --transport stdio --command npx \
--args "-y,@modelcontextprotocol/server-filesystem,/tmp"
# SSE transport (remote) with bearer auth
contenox mcp add remote --transport sse --url https://mcp.example.com/sse \
--auth-type bearer --auth-env MCP_TOKEN
# Inject hidden params into every tool call (model never sees them)
contenox mcp add myserver --transport http --url http://localhost:8090 \
--header "X-Tenant: acme" \
--inject "tenant_id=acme" --inject "env=production"
contenox mcp list
contenox mcp show myserver
contenox mcp update myserver --inject "tenant_id=newvalue"
contenox mcp remove myserver
| Flag | Description |
|---|---|
| `[url]` | URL as a second positional argument — sets `--url` and defaults `--transport` to `http` |
| `--transport` | Server transport: `stdio`, `sse`, `http` |
| `--command` | Command to execute (`stdio` only) |
| `--args` | Comma-separated command arguments |
| `--url` | Remote endpoint URL (`sse`, `http`) |
| `--auth-type` | Authentication type (e.g. `bearer`) |
| `--auth-env` | Environment variable holding the auth token (preferred) |
| `--auth-token` | Auth token literal (avoid; use `--auth-env`) |
| `--header` | Additional HTTP header for SSE/HTTP connections, e.g. `"X-Tenant: acme"` (repeatable) |
| `--inject` | Tool-call argument to inject and hide from the model, e.g. `"tenant_id=acme"` (repeatable) |
Note
`mcp update --header` and `mcp update --inject` each replace the entire corresponding map. Pass all required values in a single `update` call.
contenox version
Prints the current binary version and exits.
contenox version
Environment variables
| Variable | Description |
|---|---|
| `CONTENOX_HITL_ENABLED=true` | Equivalent to passing `--hitl` on every invocation. Useful for making HITL permanent in a shell session or CI environment. |
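A sketch of how a wrapper script might honor the variable; `hitl_default` is a hypothetical helper, and treating only the literal string `true` as enabled is an assumption:

```shell
# Sketch: derive an effective --hitl default from the environment.
# hitl_default is illustrative, not part of contenox.
hitl_default() {
  if [ "${CONTENOX_HITL_ENABLED:-}" = "true" ]; then
    echo "--hitl"   # caller appends this to the contenox invocation
  fi
}
```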