contenox CLI Reference
contenox is the local AI agent CLI. It runs the Contenox chain engine entirely on your machine.
Global Flags
Persistent flags on the root command (also shown under Global Flags on subcommands). Run contenox --help for the full list.
| Flag | Description |
|---|---|
| `--model <name>` | Default model name (built-in default: `qwen2.5:7b`; override the stored default with `contenox config set default-model`) |
| `--provider <type>` | Provider override: `ollama`, `openai`, `vllm`, `gemini` |
| `--db <path>` | SQLite DB path (see `contenox --help` for the default; typically the project's `.contenox/local.db` or `~/.contenox/local.db`, depending on layout) |
| `--data-dir <path>` | Override the `.contenox` data directory (skips the walk-up search; also sets the default DB location to `<path>/local.db`) |
| `--timeout` | Maximum execution time per invocation (default `5m`) |
| `--context` | Context-length hint for the tokenizer |
| `--ollama` | Ollama base URL (default `http://127.0.0.1:11434`) |
| `--no-delete-models` | Do not delete undeclared Ollama models (default `true` for the CLI) |
| `--chain <path>` | Chain JSON for injected `run` / `chat`, when applicable |
| `--input <value>` | Input string or `@file` (`chat` / bare `run` paths) |
| `--trace` | Structured operation telemetry on stderr |
| `--steps` | Print execution steps after the result |
| `--think` | Print the model's reasoning trace to stderr (thinking models) |
| `--raw` | Print full structured output (e.g. the entire chat JSON) |
| `--shell` | Enable the `local_shell` hook (trusted environments only) |
| `--local-exec-allowed-dir <dir>` | Restrict `local_shell` to a directory |
Subcommands
contenox (bare — stateless run)
If the first token is not a reserved subcommand (chat, init, run, plan, beam, …), the CLI prepends run. That is stateless: no chat session.
The default chain file is <resolved .contenox>/default-run-chain.json, where .contenox is found by walking up from the current working directory (same rules as contenox init). It is not read from ~/.contenox/ — global state lives in ~/.contenox/local.db, but chain JSON files are project-local. If no --chain is set and the default file is missing, contenox run errors with a hint to run contenox init or pass --chain.
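The walk-up search can be sketched in plain shell (a hypothetical reimplementation for illustration; `find_contenox` is not part of the CLI, and the actual resolution logic may differ):

```shell
#!/bin/sh
# Walk up from a starting directory until a .contenox directory is found,
# mirroring how the CLI resolves its project-local data directory.
find_contenox() {
  dir=$1
  while [ -n "$dir" ] && [ "$dir" != "/" ]; do
    if [ -d "$dir/.contenox" ]; then
      printf '%s\n' "$dir/.contenox"
      return 0
    fi
    dir=$(dirname "$dir")   # move one level up
  done
  return 1  # nothing found; the real CLI errors and hints at `contenox init`
}

# Example: resolve from the current working directory
find_contenox "$PWD" || echo "no .contenox found"
```

Note that a `.contenox` directly under `/` would not be found by this sketch; the CLI's own search rules apply.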
contenox "what can you do?"
echo "summarise README.md" | contenox
contenox --shell "list files here"
contenox --local-exec-allowed-dir . "summarise the README"
| Flag | Description |
|---|---|
| `--shell` | Enable the `local_shell` hook (opt-in; command policy is defined in the chain) |
| `--local-exec-allowed-dir <dir>` | Restrict `local_fs` tools to this directory |
contenox chat
Sends a message to the active chat session and prints the response. History is persisted across invocations in SQLite.
contenox chat "what can you do?"
echo "summarise README.md" | contenox chat
contenox chat --shell "list files here"
| Flag | Description |
|---|---|
| `--trim N` | Send only the last N messages from session history to the model (0 = all) |
| `--last N` | Print the last N user/assistant turns after the reply (0 = only the new reply) |
| `--shell` | Enable the `local_shell` hook |
| `--local-exec-allowed-dir <dir>` | Restrict `local_fs` tools to this directory |
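As an illustration, the history flags can be combined to keep prompts small on long-running sessions (the flag values here are arbitrary):

```shell
# Send only the last 8 history messages to the model, then print the
# last 2 user/assistant turns after the reply for context.
contenox chat --trim 8 --last 2 "continue where we left off"
```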
contenox session
Manage named chat sessions. Each session maintains its own conversation history.
contenox session list # list all sessions (* = active)
contenox session new [name] # create a session (becomes active)
contenox session switch <name> # switch to a different session
contenox session show # show active session's history
contenox session show <name> # show any session by name
contenox session show --tail 10 # show last 10 messages
contenox session show --head 5 # show first 5 messages
contenox session show default --tail 6 # tail a non-active session
contenox session delete <name> # delete session and all messages
contenox run
Executes a chain non-interactively. No session history.
contenox run --chain .contenox/chain-nws.json --input-type chat "how is the weather?"
contenox run --chain .contenox/my-chain.json --shell "refactor main.go"
- `--chain <path>`: optional if `<resolved .contenox>/default-run-chain.json` exists; otherwise required.
- `--input-type <type>`: `string` (default), `chat`, `json`, `int`, `float`, `bool`; see `contenox run --help`.
- `--shell`: enable shell execution for this invocation (use only in trusted environments).
- `--think` / `--trace` / `--steps`: global flags (see the table above).
contenox plan
Autonomous multi-step execution using a separate "planner" model that directs an "executor" model. For a conceptual overview (what gets stored, planner vs executor, typical workflow), see Execution Plans.
contenox plan new "analyze main.go, find the bug, and write a fix to patch.diff"
contenox plan next # execute next pending step
contenox plan next --shell # execute next step with shell access enabled
contenox plan next --auto # run all pending steps automatically
- `plan next` flags: `--auto` (run until done or failure), `--shell` (enable `local_shell` for that step).
- Global flags such as `--trace`, `--steps`, and `--think` apply to the underlying engine for plan commands that execute chains (see `contenox plan --help` and the root Global Flags).
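For example, the global flags compose with autonomous execution; this combination (chosen for illustration) runs all remaining steps while surfacing telemetry and the executed steps:

```shell
# Run every pending step, emitting telemetry on stderr and printing
# the execution steps after each result.
contenox plan next --auto --trace --steps
```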
contenox doctor
Prints local LLM setup readiness — same evaluation as Beam GET /api/setup-status.
contenox doctor
contenox model
List, add, or remove models in the local registry (live model list from configured backends).
contenox model list
contenox model --help
contenox beam
Starts the Contenox runtime as an HTTP server and serves the Beam web app. Same environment variables as the standalone server; optional --tenant for the tenant ID.
contenox beam
contenox beam --tenant 96ed1c59-ffc1-4545-b3c3-191079c68d79
Use contenox chat, contenox plan, contenox session, contenox hook, and contenox mcp from the terminal for shell-native workflows.
contenox hook
Manage remote OpenAPI hooks. See Remote Hooks and Hook Allowlist Patterns.
contenox hook add <name> --url <url>
contenox hook add <name> --url <url> --header "Authorization: Bearer $TOKEN" --inject "tenant_id=acme"
contenox hook list
contenox hook show <name>
contenox hook update <name> --header <...> --inject <...>
contenox hook remove <name>
| Flag | Description |
|---|---|
| `--url` | Base URL of the OpenAPI service (required) |
| `--header` | HTTP header to inject on every call, e.g. `"Authorization: Bearer $TOKEN"` (repeatable) |
| `--inject` | Tool-call argument to inject and hide from the model, e.g. `"tenant_id=acme"` (repeatable) |
| `--timeout` | Request timeout in milliseconds (default: 10000) |
contenox init
Initializes a new .contenox/ directory with default chain files.
$ contenox init
Created .contenox/default-chain.json
Created .contenox/default-run-chain.json
Done.
After init, register a backend:
contenox backend add local --type ollama
contenox config set default-model qwen2.5:7b
# Or use Ollama Cloud instead:
contenox backend add ollama-cloud --type ollama --url https://ollama.com/api --api-key-env OLLAMA_API_KEY
contenox backend
Register and manage LLM backend endpoints.
contenox backend add local --type ollama
contenox backend add ollama-cloud --type ollama --url https://ollama.com/api --api-key-env OLLAMA_API_KEY
contenox backend add openai --type openai --api-key-env OPENAI_API_KEY
contenox backend add gemini --type gemini --api-key-env GEMINI_API_KEY
contenox backend add myvllm --type vllm --url http://gpu-host:8000
contenox backend list
contenox backend show openai
contenox backend remove myvllm
| Flag | Description |
|---|---|
| `--type` | Backend type: `ollama`, `openai`, `gemini`, `vllm` |
| `--url` | Base URL (auto-inferred for `openai`/`gemini`; set `https://ollama.com/api` for Ollama Cloud) |
| `--api-key-env` | Environment variable holding the API key (preferred) |
| `--api-key` | API key literal (avoid; use `--api-key-env`) |
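A typical `--api-key-env` workflow looks like this (the variable name follows the examples above; the key value is a placeholder):

```shell
# Export the key in your shell (or shell profile), then pass the
# variable's *name* so the literal key never appears on the command line.
export OPENAI_API_KEY="..."
contenox backend add openai --type openai --api-key-env OPENAI_API_KEY
```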
contenox config
Manage persistent CLI defaults stored in SQLite.
contenox config set default-model qwen2.5:7b
contenox config set default-provider ollama
contenox config set default-chain .contenox/default-chain.json
contenox config get default-model
contenox config list
contenox mcp
Register and manage MCP (Model Context Protocol) servers.
# Shorthand: name + URL (transport defaults to http)
contenox mcp add notion https://mcp.notion.com/mcp --auth-type oauth
# Stdio transport (local process)
contenox mcp add myserver --transport stdio --command npx \
--args "-y,@modelcontextprotocol/server-filesystem,/tmp"
# SSE transport (remote) with bearer auth
contenox mcp add remote --transport sse --url https://mcp.example.com/sse \
--auth-type bearer --auth-env MCP_TOKEN
# Inject hidden params into every tool call (model never sees them)
contenox mcp add myserver --transport http --url http://localhost:8090 \
--header "X-Tenant: acme" \
--inject "tenant_id=acme" --inject "env=production"
contenox mcp list
contenox mcp show myserver
contenox mcp update myserver --inject "tenant_id=newvalue"
contenox mcp remove myserver
| Flag | Description |
|---|---|
| `[url]` | URL as a second positional argument; sets `--url` and defaults `--transport` to `http` |
| `--transport` | Server transport: `stdio`, `sse`, `http` |
| `--command` | Command to execute (`stdio` only) |
| `--args` | Comma-separated command arguments |
| `--url` | Remote endpoint URL (`sse`, `http`) |
| `--auth-type` | Authentication type (e.g. `bearer`) |
| `--auth-env` | Environment variable holding the auth token (preferred) |
| `--auth-token` | Auth token literal (avoid; use `--auth-env`) |
| `--header` | Additional HTTP header for SSE/HTTP connections, e.g. `"X-Tenant: acme"` (repeatable) |
| `--inject` | Tool-call argument to inject and hide from the model, e.g. `"tenant_id=acme"` (repeatable) |
Note
mcp update --header and mcp update --inject each replace the entire corresponding map. Pass all required values in a single update call.
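Because the maps are replaced wholesale, an update that passes only one `--inject` silently drops the others. Using the injected values from the earlier `myserver` example:

```shell
# Drops env=production: only tenant_id survives the update
contenox mcp update myserver --inject "tenant_id=newvalue"

# Correct: restate the full map in a single call
contenox mcp update myserver --inject "tenant_id=newvalue" --inject "env=production"
```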