Configuration

Contenox stores all configuration in SQLite (.contenox/local.db in a project, or ~/.contenox/local.db globally). There is no YAML file; backends and defaults are registered with CLI commands.
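Because everything lives in a single SQLite file, you can inspect the raw store with the standard sqlite3 shell if you are curious. A minimal sketch (the table layout is an internal detail and may change between versions):

```
# List the tables in the local configuration database.
# Schema names are internal and not part of the CLI contract.
sqlite3 .contenox/local.db ".tables"
```

Treat this as read-only inspection; always modify configuration through the contenox CLI.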

Register a backend

# Local Ollama (base URL inferred automatically)
contenox backend add local --type ollama

# Ollama Cloud
contenox backend add ollama-cloud --type ollama --url https://ollama.com/api --api-key-env OLLAMA_API_KEY

# OpenAI (base URL inferred)
contenox backend add openai --type openai --api-key-env OPENAI_API_KEY

# Google Gemini
contenox backend add gemini --type gemini --api-key-env GEMINI_API_KEY

# Self-hosted vLLM or compatible endpoint
contenox backend add myvllm --type vllm --url http://gpu-host:8000

Set persistent defaults

contenox config set default-model    qwen2.5:7b
contenox config set default-provider ollama
contenox config set default-chain    .contenox/default-chain.json

contenox config list   # review current settings

Manage backends

contenox backend list
contenox backend show openai
contenox backend remove myvllm

Supported providers

  --type   Notes
  ollama   Local: run ollama serve first. Hosted: use --url https://ollama.com/api --api-key-env OLLAMA_API_KEY.
  openai   Use --api-key-env OPENAI_API_KEY. Base URL inferred.
  gemini   Use --api-key-env GEMINI_API_KEY. Base URL inferred.
  vllm     Self-hosted OpenAI-compatible endpoint. Requires --url.

Database location

Contenox resolves the database path in this order:

  1. --db <path> flag
  2. --data-dir <path> — if set, uses <path>/local.db
  3. .contenox/local.db found by walking up from the current working directory
  4. ~/.contenox/local.db (global fallback)

The --data-dir flag also controls where chain files, plans, and VFS data are stored. This is useful for running isolated instances (e.g. integration tests) without affecting your real .contenox/ directory.
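For example, a throwaway instance for a test run can point all state at a temporary directory (the path here is illustrative):

```
# All state (local.db, chain files, plans, VFS data) goes under
# /tmp/contenox-test; your real .contenox/ directory is untouched.
contenox --data-dir /tmp/contenox-test backend add local --type ollama
contenox --data-dir /tmp/contenox-test backend list
```

Deleting the temporary directory afterwards removes the instance completely.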