Local AI agent CLI

Describe a goal.
Get it done.

AI workflows at your fingertips.

Contenox turns natural language into persistent, step-by-step plans and executes them on your machine using real shell and filesystem tools.
No cloud required. No daemon. Just a single binary.

$ contenox plan new "install a git pre-commit hook that blocks commits when go build fails"
Creating plan "install-a-git-pre-commit-a3f9e12b" — 5 steps

$ contenox plan next --auto
Step 1: Install necessary tools...
Step 2: Create .git/hooks/pre-commit...
Step 3: Write the build-check script...
Step 4: Write bash content to hook file...
Step 5: chmod +x .git/hooks/pre-commit...

# The model wrote that hook. On your machine.
  • 🏠 Fully local with Ollama
  • 💾 Plans persist across reboots
  • 🔍 Review before every step
  • 🔌 OpenAI, Gemini, vLLM too

Not another chatbot wrapper

Most AI CLI tools are one-shot: ask a question, get a reply, done. Contenox is different because it's built on a typed execution engine — not a prompt loop.

Other tools

One-shot, stateless, forgotten

Copilot CLI suggests a command. Cursor autocompletes code. Mistral CLI hits the cloud API. None of them can remember what happened, retry a failed step, or execute a five-step plan autonomously while you grab a coffee.

Contenox

Persistent plans, real execution

Plans survive reboots. Each step result is saved to SQLite. You can pause mid-plan, inspect what happened, retry a specific step, or let the model replan from the current state. It's a workflow engine you run from a terminal.

Real execution engine under the hood

Contenox runs a typed state machine: branching logic, retry policies, multi-model routing, tool call dispatch, token budget management. The CLI surface is simple. What's underneath isn't.

Human-in-the-loop by default

contenox plan next runs one step and pauses. You review. You decide. --auto unlocks full autonomy — only when you say so. This isn't a safety feature bolted on. It's the design.

Your chains, your models

Workflows are JSON files you own. Swap models per task. Add external hooks for anything that has an HTTP API. Run fully offline with Ollama. Or use OpenAI, Gemini, or any OpenAI-compatible endpoint.

Zero infrastructure

Single binary. SQLite. No daemon, no Docker, no NATS, no Postgres. Drop it on any machine and it works. The Runtime API server is available for team and production use — but the CLI needs nothing.

Install

A few commands. Move the binary into your PATH. Done.

# Linux: download the latest release
TAG=$(curl -sL https://api.github.com/repos/contenox/contenox/releases/latest | grep '"tag_name"' | cut -d'"' -f4)
ARCH=$(uname -m | sed 's/x86_64/amd64/;s/aarch64/arm64/')
curl -sL "https://github.com/contenox/contenox/releases/download/${TAG}/contenox-${TAG}-linux-${ARCH}" -o contenox
chmod +x contenox && sudo mv contenox /usr/local/bin/contenox
contenox --version

# macOS: download the latest release
TAG=$(curl -sL https://api.github.com/repos/contenox/contenox/releases/latest | grep '"tag_name"' | cut -d'"' -f4)
ARCH=$(uname -m | sed 's/x86_64/amd64/')   # Apple Silicon already reports arm64
curl -sL "https://github.com/contenox/contenox/releases/download/${TAG}/contenox-${TAG}-darwin-${ARCH}" -o contenox
chmod +x contenox && sudo mv contenox /usr/local/bin/contenox
contenox --version

# Or build from source
git clone https://github.com/contenox/contenox
cd contenox
go build -o contenox ./cmd/contenox
sudo mv contenox /usr/local/bin/contenox

Then scaffold your config and run your first task:

# Local model (recommended): start the Ollama server, then pull a model
ollama serve &   # skip this if Ollama already runs as a service
ollama pull qwen2.5:7b

$ contenox init
Created .contenox/config.yaml and .contenox/default-chain.json

$ contenox "what is in my home directory?"

Cloud providers (OpenAI, Gemini, vLLM) also work — set your API key in .contenox/config.yaml after contenox init.
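A minimal sketch of what that edit might look like. The key names below (provider, model, api_key) are illustrative assumptions, not the documented schema; treat the file that contenox init generates as the authoritative reference.

```yaml
# .contenox/config.yaml (illustrative sketch; real keys may differ,
# check the file generated by `contenox init`)
provider: ollama          # assumed values: ollama, openai, gemini, vllm
model: qwen2.5:7b
api_key: ""               # only needed for cloud providers
```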

How it works

Three modes. One binary.

contenox plan — Autonomous task execution

Describe a goal. The LLM generates a plan. Steps are saved to SQLite.
Execute one step at a time with plan next, or go full-auto with plan next --auto. Retry, skip, or replan any step at any time.
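A typical session, using only the commands shown above (tool output elided):

```shell
contenox plan new "add a Makefile target that runs go vet and go test"
contenox plan next          # run step 1, then pause for review
contenox plan next          # looks good: run step 2
contenox plan next --auto   # finish the remaining steps unattended
```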

contenox — Interactive chat

Natural language → shell commands → response. Chat history persists across sessions. Use your own chain with --chain, or use the default.

contenox exec — Scriptable chains

Run any chain with any input type, stateless. Pipe-friendly. Perfect for CI scripting, batch jobs, or connecting chains together.
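For instance, a pipe-driven invocation in CI might look like the sketch below. The --chain flag is borrowed here by analogy with chat mode, and the chain filename is made up; check contenox exec --help for the actual flags.

```shell
# Hypothetical: summarize the latest diff with a custom chain
# (flag name and chain path are assumptions)
git diff HEAD~1 | contenox exec --chain .contenox/summarize-chain.json
```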

Bring your own workflow

Chains are JSON. Define tasks, handlers, branching conditions, tool hooks, and model configs. Swap models per step. Add any HTTP endpoint as a tool. The engine handles the rest.
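Roughly, a chain file could look like this sketch. Every field name here (tasks, handler, model, on_success, hook) is an illustrative assumption rather than the documented schema; the generated .contenox/default-chain.json is the real reference.

```json
{
  "tasks": [
    {
      "id": "summarize",
      "handler": "llm",
      "model": "qwen2.5:7b",
      "on_success": "notify"
    },
    {
      "id": "notify",
      "handler": "hook",
      "hook": { "url": "https://example.internal/notify" }
    }
  ]
}
```

The idea the sketch illustrates: each task names a handler and a model, and branching conditions decide which task runs next.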

Ready to try it?

One binary. Works on your laptop, your server, or air-gapped.

Questions: hello@contenox.com