Introduction

Contenox is a local AI agent that turns what you type into real work: it runs shell commands, edits files, executes multi-step plans, and can call optional browser/API tools, without shipping your code to someone else’s cloud unless you choose a cloud model.

You describe goals in plain language (or pipe data from git, curl, or anything else). Contenox runs a task chain — a JSON graph of steps the engine executes — so behaviour stays inspectable and repeatable, not buried inside a black-box chat.

How it works

User input
    │
    ▼
┌─────────────────────┐
│   Task Chain (JSON) │  ← you define this
│  task → task → …   │
└─────────────────────┘
    │
    ▼
Model (local Ollama / Ollama Cloud / OpenAI / vLLM / Gemini)
    │
    ├─ tool call? → Hook (local shell, remote API, MCP server)
    │                    │
    └─ text reply ←──────┘

Each task has a handler (what it does), an optional LLM config (which model, which hooks), and a transition (where to go next). The chain engine drives the loop — the model doesn't.
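A chain with those three parts might look like the following sketch. The field names (`tasks`, `id`, `handler`, `llm`, `hooks`, `transition`) are illustrative assumptions for this walkthrough, not necessarily Contenox's actual schema:

```json
{
  "tasks": [
    {
      "id": "summarize",
      "handler": "llm",
      "llm": { "model": "ollama/llama3", "hooks": ["shell"] },
      "transition": { "next": "report" }
    },
    {
      "id": "report",
      "handler": "print",
      "transition": { "next": null }
    }
  ]
}
```

Read top to bottom: the engine runs `summarize` (an LLM step allowed to call the `shell` hook), then follows its transition to `report`, whose `null` transition ends the chain.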

How you interact with it

Mode              Command                                        When to use
Stateless run     contenox "…"                                   One-shot queries, piping, CI/CD
Persistent chat   contenox chat "…"                              Conversations with session history
Plans             contenox plan new "…" then contenox plan next  Multi-step goals stored in SQLite — pause, retry, or run --auto
Beam              contenox beam                                  Web UI and HTTP API on the same runtime as the CLI

For a minimal walkthrough, see the Quickstart.

Design philosophy

  • UNIX composability — standard stdin/stdout so you can pipe into jq, grep, or scripts
  • Explicit tool grants — the model only calls tools listed in the chain's hooks config; nothing runs in the background without your consent
  • Vendor-agnostic — swap local Ollama, Ollama Cloud, OpenAI, vLLM, or Gemini by changing one config line
  • Headless-friendly — runs unattended in CI/CD pipelines, cron jobs, or git hooks
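The tool-grant and vendor-swap points above can be pictured with a hypothetical config fragment; the field names here (`provider`, `model`, `hooks`) are assumptions for illustration, not a documented schema:

```json
{
  "llm": { "provider": "ollama", "model": "llama3" },
  "hooks": ["shell"]
}
```

Under these assumptions, changing `"provider": "ollama"` to, say, `"provider": "openai"` is the one-line vendor swap, and the `hooks` array is the explicit grant list: a tool absent from it is never callable by the model.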

Next steps