contenox CLI Reference

contenox is the local AI agent CLI. It runs the Contenox chain engine entirely on your machine.

Global Flags

Persistent flags on the root command (also shown under Global Flags on subcommands). Run contenox --help for the full list.

  • --model <name>: Default model name (default in tree: qwen2.5:7b; override the KV default via contenox config set default-model)
  • --provider <type>: Provider override: ollama, openai, vllm, gemini
  • --db <path>: SQLite DB path (defaults via contenox --help; often project .contenox/local.db or ~/.contenox/local.db depending on layout)
  • --data-dir <path>: Override the .contenox data directory path (skips the walk-up search; also sets the default DB location to <path>/local.db)
  • --timeout: Max execution time per invocation (default 5m)
  • --context: Context length hint for the tokenizer
  • --ollama: Ollama base URL (default http://127.0.0.1:11434)
  • --no-delete-models: Do not delete undeclared Ollama models (default true for CLI)
  • --chain <path>: Chain JSON for injected run / chat when applicable
  • --input <value>: Input string or @file (chat / bare run paths)
  • --trace: Structured operation telemetry on stderr
  • --steps: Print execution steps after the result
  • --think: Print model reasoning trace to stderr (thinking models)
  • --raw: Print full structured output (e.g. entire chat JSON)
  • --shell: Enable local_shell hook (trusted environments only)
  • --local-exec-allowed-dir <dir>: Restrict local_shell to a directory

Subcommands

contenox (bare — stateless run)

If the first token is not a reserved subcommand (chat, init, run, plan, beam, …), the CLI prepends run. This form is stateless: no chat session is created or updated.

The default chain file is <resolved .contenox>/default-run-chain.json, where .contenox is found by walking up from the current working directory (same rules as contenox init). It is not read from ~/.contenox/ — global state lives in ~/.contenox/local.db, but chain JSON files are project-local. If no --chain is set and the default file is missing, contenox run errors with a hint to run contenox init or pass --chain.
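The walk-up search can be pictured as a small sketch. This is illustrative only (not the actual implementation): start in the working directory and ascend toward the filesystem root until a .contenox directory appears.

```shell
# Illustrative sketch of the walk-up search for a .contenox directory.
find_contenox() {
  dir="$1"
  while :; do
    if [ -d "$dir/.contenox" ]; then
      printf '%s\n' "$dir/.contenox"
      return 0
    fi
    [ "$dir" = "/" ] && return 1   # reached the root without finding one
    dir="$(dirname "$dir")"
  done
}

# Demo: a project root two levels above the starting directory.
mkdir -p /tmp/demo-project/.contenox /tmp/demo-project/src/pkg
find_contenox /tmp/demo-project/src/pkg   # prints /tmp/demo-project/.contenox
```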

contenox "what can you do?"
echo "summarise README.md" | contenox
contenox --shell "list files here"
contenox --local-exec-allowed-dir . "summarise the README"
  • --shell: Enable local_shell hook (opt-in; command policy is defined in the chain)
  • --local-exec-allowed-dir <dir>: Restrict local_fs tools to this directory

contenox chat

Sends a message to the active chat session and prints the response. History is persisted across invocations in SQLite.

contenox chat "what can you do?"
echo "summarise README.md" | contenox chat
contenox chat --shell "list files here"
  • --trim N: Only send last N messages from session history to the model (0 = all)
  • --last N: Print last N user/assistant turns after the reply (0 = only new reply)
  • --shell: Enable local_shell hook
  • --local-exec-allowed-dir <dir>: Restrict local_fs tools to this directory
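The --trim semantics can be read as "keep only the most recent N entries", analogous to tail -n N over the message history. A plain-shell illustration (not contenox itself):

```shell
# --trim 2 over a four-turn history would keep only the last two turns,
# like `tail -n 2` over the message list.
printf '%s\n' turn1 turn2 turn3 turn4 | tail -n 2
# prints:
# turn3
# turn4
```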

contenox session

Manage named chat sessions. Each session maintains its own conversation history.

contenox session list                    # list all sessions (* = active)
contenox session new [name]             # create a session (becomes active)
contenox session switch <name>          # switch to a different session
contenox session show                   # show active session's history
contenox session show <name>            # show any session by name
contenox session show --tail 10         # show last 10 messages
contenox session show --head 5          # show first 5 messages
contenox session show default --tail 6  # tail a non-active session
contenox session delete <name>          # delete session and all messages

contenox run

Executes a chain non-interactively. No session history.

contenox run --chain .contenox/chain-nws.json --input-type chat "how is the weather?"
contenox run --chain .contenox/my-chain.json --shell "refactor main.go"
  • --chain <path>: Optional if <resolved .contenox>/default-run-chain.json exists; otherwise required.
  • --input-type <type>: string (default), chat, json, int, float, bool — see contenox run --help.
  • --shell: Enable shell execution for this invocation (use only in trusted environments).
  • --think / --trace / --steps: Global flags (see table above).
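The @file form of --input reads input from a file instead of taking the value literally. A minimal plain-shell sketch of that convention (assumed behavior, not the actual implementation):

```shell
# Sketch of the assumed @file convention: a leading '@' means
# "read input from this file", otherwise the value is used verbatim.
resolve_input() {
  case "$1" in
    @*) cat "${1#@}" ;;          # strip the '@' and read the file
    *)  printf '%s\n' "$1" ;;    # literal string
  esac
}

printf 'summarise the README' > /tmp/prompt.txt
resolve_input @/tmp/prompt.txt   # prints: summarise the README
resolve_input "hello"            # prints: hello
```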

contenox plan

Autonomous multi-step execution using a separate "planner" model that directs an "executor" model. For a conceptual overview (what gets stored, planner vs executor, typical workflow), see Execution Plans.

contenox plan new "analyze main.go, find the bug, and write a fix to patch.diff"
contenox plan next          # execute next pending step
contenox plan next --shell  # execute next step with shell access enabled
contenox plan next --auto   # run all pending steps automatically
  • plan next flags: --auto (run until done or failure), --shell (enable local_shell for that step).
  • Global flags such as --trace, --steps, and --think apply to the underlying engine for plan commands that execute chains (see contenox plan --help and root Global Flags).

contenox doctor

Prints local LLM setup readiness — same evaluation as Beam GET /api/setup-status.

contenox doctor

contenox model

List, add, or remove models in the local registry (live model list from configured backends).

contenox model list
contenox model --help

contenox beam

Starts the Contenox runtime as an HTTP server and serves the Beam web app. Same environment variables as the standalone server; optional --tenant for the tenant ID.

contenox beam
contenox beam --tenant 96ed1c59-ffc1-4545-b3c3-191079c68d79

Use contenox chat, contenox plan, contenox session, contenox hook, and contenox mcp from the terminal for shell-native workflows.

contenox hook

Manage remote OpenAPI hooks. See Remote Hooks and Hook Allowlist Patterns.

contenox hook add <name> --url <url>
contenox hook add <name> --url <url> --header "Authorization: Bearer $TOKEN" --inject "tenant_id=acme"
contenox hook list
contenox hook show <name>
contenox hook update <name> --header <...> --inject <...>
contenox hook remove <name>
  • --url: Base URL of the OpenAPI service (required)
  • --header: HTTP header to inject on every call, e.g. "Authorization: Bearer $TOKEN" (repeatable)
  • --inject: Tool call argument to inject and hide from the model, e.g. "tenant_id=acme" (repeatable)
  • --timeout: Request timeout in milliseconds (default: 10000)
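When a --header value references an environment variable, shell quoting determines whether it expands before contenox stores it. This is ordinary shell behavior, shown here in isolation:

```shell
TOKEN=secret123
# Double quotes: the shell expands $TOKEN before the CLI ever sees it.
echo "Authorization: Bearer $TOKEN"    # prints: Authorization: Bearer secret123
# Single quotes: the literal string "$TOKEN" is passed through unexpanded.
echo 'Authorization: Bearer $TOKEN'    # prints: Authorization: Bearer $TOKEN
```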

contenox init

Initializes a new .contenox/ directory with default chain files.

$ contenox init
  Created .contenox/default-chain.json
  Created .contenox/default-run-chain.json
Done.

After init, register a backend:

contenox backend add local --type ollama
contenox config set default-model qwen2.5:7b
# Or use Ollama Cloud instead:
contenox backend add ollama-cloud --type ollama --url https://ollama.com/api --api-key-env OLLAMA_API_KEY

contenox backend

Register and manage LLM backend endpoints.

contenox backend add local   --type ollama
contenox backend add ollama-cloud --type ollama --url https://ollama.com/api --api-key-env OLLAMA_API_KEY
contenox backend add openai  --type openai  --api-key-env OPENAI_API_KEY
contenox backend add gemini  --type gemini  --api-key-env GEMINI_API_KEY
contenox backend add myvllm --type vllm    --url http://gpu-host:8000

contenox backend list
contenox backend show openai
contenox backend remove myvllm
  • --type: Backend type: ollama, openai, gemini, vllm
  • --url: Base URL (auto-inferred for openai/gemini; set https://ollama.com/api for Ollama Cloud)
  • --api-key-env: Environment variable holding the API key (preferred)
  • --api-key: API key literal (avoid — use --api-key-env)
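Since --api-key-env names a variable rather than embedding the key, it can help to fail fast when the variable is missing. A pre-flight check in plain shell (the demo value stands in for a real key from your environment):

```shell
# Demo value; in practice the key comes from your real environment.
OPENAI_API_KEY="sk-demo"
# ':' is a no-op; the ${VAR:?msg} expansion aborts the script with msg
# when VAR is unset or empty, so this line fails fast if the key is missing.
: "${OPENAI_API_KEY:?OPENAI_API_KEY is not set}"
echo "key present"
```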

contenox config

Manage persistent CLI defaults stored in SQLite.

contenox config set default-model    qwen2.5:7b
contenox config set default-provider ollama
contenox config set default-chain    .contenox/default-chain.json

contenox config get default-model
contenox config list

contenox mcp

Register and manage MCP (Model Context Protocol) servers.

# Shorthand: name + URL (transport defaults to http)
contenox mcp add notion https://mcp.notion.com/mcp --auth-type oauth

# Stdio transport (local process)
contenox mcp add myserver --transport stdio --command npx \
  --args "-y,@modelcontextprotocol/server-filesystem,/tmp"

# SSE transport (remote) with bearer auth
contenox mcp add remote --transport sse --url https://mcp.example.com/sse \
  --auth-type bearer --auth-env MCP_TOKEN

# Inject hidden params into every tool call (model never sees them)
contenox mcp add myserver --transport http --url http://localhost:8090 \
  --header "X-Tenant: acme" \
  --inject "tenant_id=acme" --inject "env=production"

contenox mcp list
contenox mcp show myserver
contenox mcp update myserver --inject "tenant_id=newvalue"
contenox mcp remove myserver
  • [url]: URL as a second positional arg — sets --url and defaults --transport to http
  • --transport: Server transport: stdio, sse, http
  • --command: Command to execute (stdio only)
  • --args: Comma-separated command arguments
  • --url: Remote endpoint URL (sse, http)
  • --auth-type: Authentication type (e.g. bearer)
  • --auth-env: Environment variable holding auth token (preferred)
  • --auth-token: Auth token literal (avoid — use --auth-env)
  • --header: Additional HTTP header for SSE/HTTP connections, e.g. "X-Tenant: acme" (repeatable)
  • --inject: Tool call argument to inject and hide from the model, e.g. "tenant_id=acme" (repeatable)
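--args takes a single comma-separated string; each comma-delimited piece presumably becomes one argv entry for the stdio command. An illustrative split in plain shell:

```shell
# Split a comma-separated --args value into one entry per line,
# mirroring how it would map onto the child process's argv.
args="-y,@modelcontextprotocol/server-filesystem,/tmp"
printf '%s\n' "$args" | tr ',' '\n'
# prints:
# -y
# @modelcontextprotocol/server-filesystem
# /tmp
```

A consequence of this format is that individual arguments cannot themselves contain commas.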

Note

mcp update --header and mcp update --inject each replace the entire corresponding map. Pass all required values in a single update call.