Local AI agent CLI
AI workflows at your fingertips.
Contenox turns natural language into persistent, step-by-step plans and executes them
on your machine using real shell and filesystem tools.
No cloud required. No daemon. Just a single binary.
Most AI CLI tools are one-shot: ask a question, get a reply, done. Contenox is different because it's built on a typed execution engine — not a prompt loop.
Copilot CLI suggests a command. Cursor autocompletes code. Mistral CLI hits the cloud API. None of them remember what happened, can retry a failed step, or execute a 5-part plan autonomously while you grab a coffee.
Plans survive reboots. Each step result is saved to SQLite. You can pause mid-plan, inspect what happened, retry a specific step, or let the model replan from the current state. It's a workflow engine you run from a terminal.
Contenox runs a typed state machine: branching logic, retry policies, multi-model routing, tool call dispatch, token budget management. The CLI surface is simple. What's underneath isn't.
contenox plan next runs one step and pauses. You review. You decide. --auto unlocks full autonomy — only when you say so. This isn't a safety feature bolted on. It's the design.
Workflows are JSON files you own. Swap models per task. Add external hooks for anything that has an HTTP API. Run fully offline with Ollama. Or use OpenAI, Gemini, or any OpenAI-compatible endpoint.
Single binary. SQLite. No daemon, no Docker, no NATS, no Postgres. Drop it on any machine and it works. The Runtime API server is available for team and production use — but the CLI needs nothing.
One command. Move the binary. Done.
Then scaffold your config and run your first task:
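With the binary on your PATH, a first run could look like the sketch below. The commands shown (contenox init, plain contenox chat) come from this page; the task text is just an example.

```shell
contenox init                 # scaffolds .contenox/config.yaml
contenox "what changed in this repo since yesterday?"
```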
Cloud providers (OpenAI, Gemini, vLLM) also work — set your API key in .contenox/config.yaml after contenox init.
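For example, a cloud provider entry in .contenox/config.yaml might look like this sketch. Every key name here is an assumption for illustration — check the file contenox init scaffolds for the real schema.

```yaml
# illustrative only; field names are assumptions, not the actual schema
provider: openai
api_key: ${OPENAI_API_KEY}
model: gpt-4o
```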
Three modes. One binary.
contenox plan — Autonomous task execution
Describe a goal. The LLM generates a plan. Steps are saved to SQLite.
Execute one step at a time with plan next, or go full-auto with
plan next --auto. Retry, skip, or replan any step at any time.
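A typical session, using only the commands named above (the goal text is just an example):

```shell
contenox plan "upgrade the project to Go 1.22 and fix the build"
contenox plan next            # executes one step, then pauses for review
contenox plan next --auto     # runs the remaining steps unattended
```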
contenox — Interactive chat
Natural language → shell commands → response. Chat history persists across
sessions. Use your own chain with --chain, or use the default.
contenox exec — Scriptable chains
Run any chain with any input type, stateless. Pipe-friendly. Perfect for CI scripting, batch jobs, or connecting chains together.
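A pipe-driven invocation might look like the line below. The chain file name is hypothetical, and the --chain flag is borrowed from chat mode — the exact exec flags may differ.

```shell
# hypothetical chain file; exact flag names may differ
git diff | contenox exec --chain review.json
```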
Chains are JSON. Define tasks, handlers, branching conditions, tool hooks, and model configs. Swap models per step. Add any HTTP endpoint as a tool. The engine handles the rest.
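To make that concrete, here is a hedged sketch of what a two-task chain could look like. Every field name is an assumption for illustration only; consult the scaffolded examples for the real schema.

```json
{
  "tasks": [
    {
      "id": "summarize",
      "handler": "llm",
      "model": "llama3",
      "prompt": "Summarize: {{input}}",
      "next": "notify"
    },
    {
      "id": "notify",
      "handler": "hook",
      "url": "https://example.internal/webhook"
    }
  ]
}
```

The idea matches the text above: each task names a handler and a model, and any HTTP endpoint can be wired in as a hook step.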
One binary. Works on your laptop, your server, or air-gapped.
Questions: hello@contenox.com