Quickstart
Get a running AI agent in under 5 minutes.
Prerequisites
You need at least one LLM backend. Choose based on your setup:
- Ollama (local, no API key): install from ollama.com, then pull any model:
  ollama pull qwen2.5:7b
- Ollama Cloud: get an API key at ollama.com/settings/keys, then:
  export OLLAMA_API_KEY=your-key
- Google Gemini (cloud, no GPU required): get a free API key at aistudio.google.com/apikey, then:
  export GEMINI_API_KEY=your-key
- OpenAI / vLLM: any OpenAI-compatible endpoint works — wire it in step 2.
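Before moving on, a small pre-flight sketch can tell you which of the backends above this machine could use right now. It only checks for the `ollama` binary and the environment variable names listed above; nothing in it is a contenox command, and `check_backends` is a hypothetical helper name.

```shell
# Report which documented backends are usable on this machine.
check_backends() {
  ready=0
  if command -v ollama >/dev/null 2>&1; then
    echo "ollama binary found (local Ollama)"
    ready=1
  fi
  if [ -n "${OLLAMA_API_KEY:-}" ]; then echo "OLLAMA_API_KEY set (Ollama Cloud)"; ready=1; fi
  if [ -n "${GEMINI_API_KEY:-}" ]; then echo "GEMINI_API_KEY set (Gemini)"; ready=1; fi
  if [ -n "${OPENAI_API_KEY:-}" ]; then echo "OPENAI_API_KEY set (OpenAI)"; ready=1; fi
  if [ "$ready" -eq 0 ]; then echo "no backend detected -- set one up before step 2"; fi
}
check_backends
```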
1. Install Contenox
Linux:
TAG=$(curl -sL https://api.github.com/repos/contenox/contenox/releases/latest | grep '"tag_name"' | cut -d'"' -f4)
ARCH=$(uname -m | sed 's/x86_64/amd64/;s/aarch64/arm64/')
curl -sL "https://github.com/contenox/contenox/releases/download/${TAG}/contenox-${TAG}-linux-${ARCH}" -o contenox
chmod +x contenox && sudo mv contenox /usr/local/bin/contenox
contenox --version
macOS:
TAG=$(curl -sL https://api.github.com/repos/contenox/contenox/releases/latest | grep '"tag_name"' | cut -d'"' -f4)
ARCH=$(uname -m | sed 's/x86_64/amd64/')
curl -sL "https://github.com/contenox/contenox/releases/download/${TAG}/contenox-${TAG}-darwin-${ARCH}" -o contenox
chmod +x contenox && sudo mv contenox /usr/local/bin/contenox
contenox --version
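The ARCH lines in the install commands simply normalize `uname -m` output to the architecture names used in the release filenames. The same mapping, written out as a small function for clarity (a sketch for illustration, not part of the official install script):

```shell
# Map `uname -m` output to the arch suffix used in release filenames.
map_arch() {
  case "$1" in
    x86_64)        echo amd64 ;;
    aarch64|arm64) echo arm64 ;;
    *) echo "unsupported architecture: $1" >&2; return 1 ;;
  esac
}

map_arch x86_64   # → amd64
map_arch aarch64  # → arm64
```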
2. Register a backend
Note
Global vs local scope: Backends and config are global (stored in ~/.contenox/local.db and available from any directory). Chain files (.contenox/) are local to each project — like .git/. You register a backend once; you run contenox init once per project.
Choose your provider:
Ollama (local)
contenox backend add local --type ollama
contenox config set default-model qwen2.5:7b
Ollama Cloud
contenox backend add ollama-cloud --type ollama --url https://ollama.com/api --api-key-env OLLAMA_API_KEY
contenox model list
contenox config set default-model <name-from-contenox-model-list>
contenox config set default-provider ollama
Google Gemini (cloud, no GPU)
contenox backend add gemini --type gemini --api-key-env GEMINI_API_KEY
contenox config set default-model gemini-2.5-flash
contenox config set default-provider gemini
OpenAI
contenox backend add openai --type openai --api-key-env OPENAI_API_KEY
contenox config set default-model gpt-4o-mini
contenox config set default-provider openai
Important
For cloud providers (Ollama Cloud, Gemini, OpenAI) you must set both default-model and default-provider. Without default-provider, commands will fail with a confusing embedder error.
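A defensive sketch in plain POSIX shell: before registering a cloud backend, fail fast if the variable you pass via --api-key-env is empty, since an empty key otherwise surfaces much later as a provider error. `require_env` is a hypothetical helper, not a contenox command; GEMINI_API_KEY is just the example variable from this page.

```shell
# Verify the env var named by --api-key-env is non-empty before registering.
require_env() {
  eval "_val=\${$1:-}"
  if [ -z "$_val" ]; then
    echo "error: $1 is not set or empty" >&2
    return 1
  fi
}

require_env GEMINI_API_KEY || echo "export GEMINI_API_KEY before 'contenox backend add gemini ...'"
```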
3. Initialize a project
mkdir my-agent && cd my-agent
contenox init
This scaffolds a .contenox/ directory in the current folder:
.contenox/
├── default-chain.json ← persistent chat chain
└── default-run-chain.json ← stateless run chain
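The chain files are plain JSON, so any JSON tooling can sanity-check them after edits. An optional sketch using only standard python3 (no contenox involved), assuming it runs from the project root; `check_chains` is a hypothetical helper name:

```shell
# Validate every scaffolded chain file as JSON; prints "ok: <file>" per file.
check_chains() {
  for f in .contenox/*.json; do
    [ -e "$f" ] || { echo "no chain files found"; return 0; }
    if python3 -m json.tool "$f" >/dev/null 2>&1; then
      echo "ok: $f"
    else
      echo "invalid JSON: $f" >&2
      return 1
    fi
  done
}
check_chains
```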
4. Run your first query
contenox "what is the capital of France?"
# → Paris.
contenox without a subcommand runs statelessly via .contenox/default-run-chain.json — no session history is saved.
Add --shell to give the model access to your local shell:
contenox "list Go files in this directory sorted by size" --shell
5. Use Beam (web UI)
contenox beam starts the HTTP server and serves the Beam web app — the same Contenox runtime the CLI uses, with chains, admin, hooks, and MCP available in the browser.
contenox beam
Open http://127.0.0.1:8081 (or the URL printed in the log). The login page is at /login.
Note
Default sign-in: username admin, password admin. This is the built-in local credential for Beam.
Use contenox chat, contenox plan, and contenox session when you prefer the terminal.
6. Persistent chat
For a simple conversation with session history:
contenox chat "hello, what can you help me with?"
contenox chat "tell me more" # continues the same session
Tip
Each contenox chat call appends to the current session. Start a fresh context with:
contenox session new
contenox chat "fresh start"
See Session management for listing, switching, and deleting sessions.
7. Add tools via hooks
Register any OpenAPI-compatible HTTP endpoint as a tool:
contenox hook add nws --url https://api.weather.gov --timeout 15000
contenox hook show nws # lists discovered tools
Then run with hooks enabled using the default run chain:
contenox "Use the nws hook to tell me how many active weather alerts there are right now." --shell=false
The default run chain has "hooks": ["*"] — all registered hooks are available to the model automatically.
Important
"hooks": ["*"] gives the model access to every registered tool. For sensitive environments list only what the task needs — e.g. "hooks": ["nws"]. This is Contenox's per-invocation tool policy: the model can only call what you explicitly grant, in this specific run.
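To scope access down, edit the hooks key in .contenox/default-run-chain.json. Only the hooks key is documented on this page; the fragment below assumes the rest of the scaffolded file stays untouched (it is not a complete chain file):

```json
{
  "hooks": ["nws"]
}
```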
See Hooks reference for access control and AI governance patterns.
Next steps
- Core Concepts — chains, tasks, hooks, sessions
- MCP guide — connect Notion, Linear, and other MCP servers
- CLI reference — all flags and subcommands