contenox CLI Reference

contenox is the local AI agent CLI. It runs the Contenox chain engine entirely on your machine.

Global Flags

Persistent flags on the root command (also shown under Global Flags on subcommands). Run contenox --help for the full list.

  • --model <name>: Default model name (default in tree: qwen2.5:7b; override the stored default via contenox config set default-model)
  • --provider <type>: Provider override: ollama, openai, vllm, gemini, vertex-google, vertex-anthropic, vertex-meta, vertex-mistralai
  • --db <path>: SQLite DB path (defaults via contenox --help; often project .contenox/local.db or ~/.contenox/local.db depending on layout)
  • --data-dir <path>: Override the .contenox data directory path (skips the walk-up search; also sets the default DB location to <path>/local.db)
  • --timeout: Max execution time per invocation (default 5m)
  • --context: Context length hint for the tokenizer
  • --ollama: Ollama base URL (default http://127.0.0.1:11434)
  • --no-delete-models: Do not delete undeclared Ollama models (default true for the CLI)
  • --chain <path>: Chain JSON for the injected run / chat chain where applicable
  • --input <value>: Input string or @file (chat and bare run paths)
  • --trace: Print structured operation telemetry on stderr
  • --steps: Print execution steps after the result
  • --think: Print the model's reasoning trace to stderr (thinking models)
  • --raw: Print the full structured output (e.g. the entire chat JSON)
  • --shell: Enable the local_shell hook (trusted environments only)
  • --local-exec-allowed-dir <dir>: Restrict local_shell to a directory

Subcommands

contenox (bare — stateless run)

If the first token is not a reserved subcommand (chat, init, run, plan, beam, …), the CLI implicitly prepends run. This path is stateless: no chat session is created or updated.

The default chain file is <resolved .contenox>/default-run-chain.json, where .contenox is found by walking up from the current working directory (same rules as contenox init). It is not read from ~/.contenox/ — global state lives in ~/.contenox/local.db, but chain JSON files are project-local. If no --chain is set and the default file is missing, contenox run errors with a hint to run contenox init or pass --chain.
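As an illustration, the walk-up search behaves roughly like the shell function below. This is a minimal sketch under the assumption that the search stops at the first ancestor directory containing .contenox, as contenox init does; the real resolution logic is internal to the CLI.

```shell
# Sketch of the .contenox walk-up search (assumption: stop at the first
# ancestor directory that contains .contenox, mirroring contenox init).
find_contenox() {
  dir=$1
  while [ -n "$dir" ] && [ "$dir" != "/" ]; do
    if [ -d "$dir/.contenox" ]; then
      printf '%s\n' "$dir/.contenox"
      return 0
    fi
    dir=$(dirname "$dir")
  done
  return 1  # not found: contenox run errors with a hint to run contenox init
}
```

Passing --data-dir skips this search entirely and uses the given directory directly.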

contenox "what can you do?"
echo "summarise README.md" | contenox
contenox --shell "list files here"
contenox --local-exec-allowed-dir . "summarise the README"
  • --shell: Enable local_shell hook (opt-in; command policy is defined in the chain)
  • --local-exec-allowed-dir <dir>: Restrict local_fs tools to this directory
  • --hitl: Enable human-in-the-loop approval. Tool calls matching the active policy pause for y/n approval in the terminal. The active policy is configured in the workspace under Admin → HITL Policies.

contenox chat

Sends a message to the active chat session and prints the response. History is persisted across invocations in SQLite.

contenox chat "what can you do?"
echo "summarise README.md" | contenox chat
contenox chat --shell "list files here"
  • --trim N: Only send the last N messages from session history to the model (0 = all)
  • --last N: Print the last N user/assistant turns after the reply (0 = only the new reply)
  • --shell: Enable local_shell hook
  • --local-exec-allowed-dir <dir>: Restrict local_fs tools to this directory
  • --hitl: Enable HITL approval prompts (active policy set in workspace HITL Policies)

contenox session

Manage named chat sessions. Each session maintains its own conversation history.

contenox session list                    # list all sessions (* = active)
contenox session new [name]             # create a session (becomes active)
contenox session switch <name>          # switch to a different session
contenox session show                   # show active session's history
contenox session show <name>            # show any session by name
contenox session show --tail 10         # show last 10 messages
contenox session show --head 5          # show first 5 messages
contenox session show default --tail 6  # tail a non-active session
contenox session delete <name>          # delete session and all messages

contenox run

Executes a chain non-interactively. No session history.

contenox run --chain .contenox/chain-nws.json --input-type chat "how is the weather?"
contenox run --chain .contenox/my-chain.json --shell "refactor main.go"
  • --chain <path>: Optional if <resolved .contenox>/default-run-chain.json exists; otherwise required.
  • --input-type <type>: string (default), chat, json, int, float, bool — see contenox run --help.
  • --shell: Enable shell execution for this invocation (use only in trusted environments).
  • --think / --trace / --steps: Global flags (see table above).

contenox plan

Autonomous multi-step execution using a separate "planner" model that directs an "executor" model. For a conceptual overview (what gets stored, planner vs executor, typical workflow), see Execution Plans.

contenox plan new "analyze main.go, find the bug, and write a fix to patch.diff"
contenox plan list              # list all plans (* = active)
contenox plan show              # show active plan's steps and status
contenox plan next              # execute next pending step
contenox plan next --shell      # execute next step with shell access enabled
contenox plan next --auto       # run all pending steps automatically
contenox plan retry <ordinal>   # reset a failed step back to pending
contenox plan skip  <ordinal>   # mark a step done without running it
contenox plan replan            # ask planner to revise remaining steps
contenox plan delete <name>     # delete a plan by name
contenox plan clean             # delete all completed/archived plans

plan next flags:

  • --auto: Continue executing steps automatically until the plan is done or a step fails
  • --shell: Enable local_shell hook for this step (required for shell-based tasks)
  • --gate: Use the gated executor: after each tool round a small model scores whether to continue. Aborts on bad/corrupt tool output. Adds extra latency/cost.
  • --hitl: Pause before each write/shell tool call and require y/n approval in the terminal

Global flags --trace, --steps, and --think apply to plan commands that execute chains.
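Conceptually, plan next --auto is a loop that keeps executing pending steps and stops at the first failure. The sketch below is illustrative only; step execution is internal to the CLI, and each argument here stands in for one plan step.

```shell
# Conceptual sketch of the --auto loop: execute pending steps in order and
# stop at the first failure. Each argument simulates one plan step.
auto_loop() {
  for step in "$@"; do
    $step || { echo "step failed: $step" >&2; return 1; }
  done
  echo "plan done"
}
```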

contenox plan explore

Runs the read-only explorer chain (chain-plan-explorer.json) against the workspace and persists a RepoContext on the active plan. The RepoContext captures languages, entry points, build/test commands, key files, and conventions, then injects them as {{var:repo_context}} into every subsequent step's prompt — so each step sees concrete paths instead of cold-exploring the repo from scratch.
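For illustration, a step prompt that consumes the stored context might look like the fragment below. The field name is hypothetical and not taken from the actual chain schema; only the {{var:repo_context}} placeholder is documented behavior.

```json
{
  "prompt": "Repository context:\n{{var:repo_context}}\n\nNow carry out the step."
}
```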

contenox plan explore                # explore for the active plan
contenox plan new --explore "..."    # explore as part of plan creation

The explorer is read-only by contract: only local_fs and other read-only hooks are allowlisted.

contenox doctor

Prints local LLM setup readiness — same evaluation as the workspace GET /api/setup-status.

contenox doctor
contenox doctor --json          # machine-readable output
contenox doctor --skip-cycle    # faster; skips backend sync (status may be stale)
  • --json: Print results as JSON instead of human-readable text
  • --skip-cycle: Skip syncing backends before the check (faster but may show stale status)

contenox model

Manage models in the local Model Registry — a name-to-URL index of GGUF files that can be downloaded for local inference. See Local Models (GGUF) for a full walkthrough.

contenox model registry-list

List all curated and user-added registry entries. Does not require a running backend.

contenox model registry-list

contenox model pull

Download a curated or custom GGUF model to ~/.contenox/models/<name>/model.gguf.

contenox model pull qwen3-4b                                         # curated model
contenox model pull my-model --url https://huggingface.co/org/repo/resolve/main/model.gguf

After downloading, register the local backend once:

contenox backend add local --type local --url ~/.contenox/models/
  • --url: Direct GGUF download URL (requires a model name as the first positional argument)

contenox model add

Register a custom model entry in the local registry without downloading.

contenox model add my-model --url https://huggingface.co/org/repo/resolve/main/model.gguf
contenox model add my-model --url https://... --size 4500000000
  • --url: Source URL (required)
  • --size: File size in bytes (optional, informational)

contenox model show

Display registry details for a model.

contenox model show qwen3-4b

contenox model rm

Remove a user-added registry entry by name. Curated entries cannot be removed.

contenox model rm my-model

contenox model list

List models currently available from all configured backends (live query, requires at least one backend).

contenox model list

contenox model set-context

Override the context window size for a specific model name. Useful when a backend reports a different (or no) context size than the model actually supports.

contenox model set-context qwen2.5:7b           --context 32k
contenox model set-context gpt-5-mini           --context 128k
contenox model set-context gemini-3.1-pro-preview --context 1m
  • --context: Context window size: bare integer or shorthand (12k, 128k, 1m). Required.
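The shorthand can be read as a multiplier suffix. Below is a minimal sketch assuming binary multiples (k = 1024, m = 1024*1024); the CLI's actual choice of binary versus decimal multipliers is not documented here, so treat the numbers as an assumption.

```shell
# Sketch of context-size shorthand expansion (assumption: k = 1024 and
# m = 1024*1024; the real CLI may use decimal multipliers instead).
parse_context() {
  case $1 in
    *k) echo $(( ${1%k} * 1024 )) ;;
    *m) echo $(( ${1%m} * 1024 * 1024 )) ;;
    *)  echo "$1" ;;
  esac
}
```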

contenox beam

Starts the Contenox runtime as an HTTP server and serves the Contenox workspace. Same environment variables as the standalone server; optional --tenant for the tenant ID.

contenox beam
contenox beam --tenant 96ed1c59-ffc1-4545-b3c3-191079c68d79

Use contenox chat, contenox plan, contenox session, contenox hook, and contenox mcp from the terminal for shell-native workflows.

contenox hook

Manage remote OpenAPI hooks. See Remote Hooks and Hook Allowlist Patterns.

contenox hook add <name> --url <url>
contenox hook add <name> --url <url> --header "Authorization: Bearer $TOKEN" --inject "tenant_id=acme"
contenox hook list
contenox hook show <name>
contenox hook update <name> --header <...> --inject <...>
contenox hook remove <name>
  • --url: Base URL of the OpenAPI service (required)
  • --header: HTTP header to inject on every call, e.g. "Authorization: Bearer $TOKEN" (repeatable)
  • --inject: Tool call argument to inject and hide from the model, e.g. "tenant_id=acme" (repeatable)
  • --timeout: Request timeout in milliseconds (default: 10000)

contenox init

Initializes a new .contenox/ directory with default chain files and HITL policy presets. Pass a provider name to pre-configure the default model and provider.

contenox init                    # defaults to ollama
contenox init gemini             # pre-configure for Gemini
contenox init openai             # pre-configure for OpenAI
contenox init --force            # overwrite existing files
  • -f, --force: Overwrite existing .contenox/ files

After init, register a backend:

contenox backend add local --type ollama
contenox config set default-model qwen2.5:7b
# Or use Ollama Cloud instead:
contenox backend add ollama-cloud --type ollama --url https://ollama.com/api --api-key-env OLLAMA_API_KEY

contenox backend

Register and manage LLM backend endpoints.

contenox backend add local        --type ollama
contenox backend add ollama-cloud --type ollama --url https://ollama.com/api --api-key-env OLLAMA_API_KEY
contenox backend add openai       --type openai  --api-key-env OPENAI_API_KEY
contenox backend add gemini       --type gemini  --api-key-env GEMINI_API_KEY
contenox backend add myvllm       --type vllm    --url http://gpu-host:8000
contenox backend add vertex       --type vertex-google \
  --url "https://us-central1-aiplatform.googleapis.com/v1/projects/YOUR_PROJECT_ID/locations/us-central1"

contenox backend list
contenox backend show openai
contenox backend remove myvllm
  • --type: Backend type: ollama, openai, gemini, vllm, local, vertex-google, vertex-anthropic, vertex-meta, vertex-mistralai
  • --url: Base URL (auto-inferred for openai/gemini; required for vllm and all vertex-* types)
  • --api-key-env: Environment variable holding the API key (preferred)
  • --api-key: API key literal (avoid — use --api-key-env)

contenox config

Manage persistent CLI defaults stored in SQLite.

contenox config set default-model    qwen2.5:7b
contenox config set default-provider ollama
contenox config set default-chain    .contenox/default-chain.json
contenox config set hitl-policy-name hitl-policy-strict.json

contenox config get default-model
contenox config list

Valid keys: default-model, default-provider, default-chain, hitl-policy-name.

contenox mcp

Register and manage MCP (Model Context Protocol) servers.

# Shorthand: name + URL (transport defaults to http)
contenox mcp add notion https://mcp.notion.com/mcp --auth-type oauth

# Stdio transport (local process)
contenox mcp add myserver --transport stdio --command npx \
  --args "-y,@modelcontextprotocol/server-filesystem,/tmp"

# SSE transport (remote) with bearer auth
contenox mcp add remote --transport sse --url https://mcp.example.com/sse \
  --auth-type bearer --auth-env MCP_TOKEN

# Inject hidden params into every tool call (model never sees them)
contenox mcp add myserver --transport http --url http://localhost:8090 \
  --header "X-Tenant: acme" \
  --inject "tenant_id=acme" --inject "env=production"

contenox mcp list
contenox mcp show myserver
contenox mcp update myserver --inject "tenant_id=newvalue"
contenox mcp remove myserver
  • [url]: URL as a second positional arg — sets --url and defaults --transport to http
  • --transport: Server transport: stdio, sse, http
  • --command: Command to execute (stdio only)
  • --args: Comma-separated command arguments
  • --url: Remote endpoint URL (sse, http)
  • --auth-type: Authentication type (e.g. bearer)
  • --auth-env: Environment variable holding the auth token (preferred)
  • --auth-token: Auth token literal (avoid — use --auth-env)
  • --header: Additional HTTP header for SSE/HTTP connections, e.g. "X-Tenant: acme" (repeatable)
  • --inject: Tool call argument to inject and hide from the model, e.g. "tenant_id=acme" (repeatable)

Note

mcp update --header and mcp update --inject each replace the entire corresponding map. Pass all required values in a single update call.

contenox version

Prints the current binary version and exits.

contenox version

Environment variables

  • CONTENOX_HITL_ENABLED=true: Equivalent to passing --hitl on every invocation. Useful for making HITL permanent in a shell session or CI environment.
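For example, to keep approval prompts on for the rest of a shell session (assuming the value "true" is what the CLI checks for):

```shell
# Same effect as adding --hitl to every contenox call in this session.
export CONTENOX_HITL_ENABLED=true
```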