Configuration

Contenox stores all configuration in SQLite (.contenox/local.db, or ~/.contenox/local.db globally). There is no YAML file — register backends and set defaults using CLI commands.
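Because all state lives in a single SQLite file, backing up or resetting the configuration is a plain file operation. A minimal sketch (using a throwaway directory standing in for $HOME, so a real installation is untouched):

```shell
# Back up, then reset, a Contenox database. The paths mirror the global
# default location; the throwaway directory keeps this side-effect free.
HOME_DIR=$(mktemp -d)
mkdir -p "$HOME_DIR/.contenox"
: > "$HOME_DIR/.contenox/local.db"    # stand-in for the real database

cp "$HOME_DIR/.contenox/local.db" "$HOME_DIR/.contenox/local.db.bak"  # back up
rm "$HOME_DIR/.contenox/local.db"                                     # reset to a clean slate
```

Restoring is the reverse copy; no migration or export step is needed.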

Register a backend

# Local Ollama (base URL inferred automatically)
contenox backend add local --type ollama

# Ollama Cloud
contenox backend add ollama-cloud --type ollama --url https://ollama.com/api --api-key-env OLLAMA_API_KEY

# OpenAI (base URL inferred)
contenox backend add openai --type openai --api-key-env OPENAI_API_KEY

# Google Gemini
contenox backend add gemini --type gemini --api-key-env GEMINI_API_KEY

# Self-hosted vLLM or compatible endpoint
contenox backend add myvllm --type vllm --url http://gpu-host:8000

# Vertex AI — --url is required (include project and region)
# Option A: service account JSON (works everywhere, including the workspace)
export VERTEX_SA_JSON=$(cat /path/to/service-account.json)
contenox backend add vertex --type vertex-google \
  --url "https://us-central1-aiplatform.googleapis.com/v1/projects/YOUR_PROJECT_ID/locations/us-central1" \
  --api-key-env VERTEX_SA_JSON
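The Vertex base URL always follows the same pattern, so building it from variables avoids copy-paste mistakes. PROJECT_ID and REGION below are placeholders for your own values:

```shell
# Construct the Vertex AI base URL from a project ID and region.
PROJECT_ID="my-project"    # placeholder: your GCP project ID
REGION="us-central1"       # placeholder: your Vertex region
VERTEX_URL="https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/${REGION}"
echo "$VERTEX_URL"
```

Pass "$VERTEX_URL" as the --url value for any of the vertex-* provider types.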

# Option B: Application Default Credentials (CLI only)
# Both gcloud steps are required — set-quota-project is not optional
gcloud auth application-default login
gcloud auth application-default set-quota-project YOUR_PROJECT_ID
contenox backend add vertex --type vertex-google \
  --url "https://us-central1-aiplatform.googleapis.com/v1/projects/YOUR_PROJECT_ID/locations/us-central1"

Set persistent defaults

contenox config set default-model    qwen2.5:7b
contenox config set default-provider ollama
contenox config set default-chain    .contenox/default-chain.json
contenox config set hitl-policy-name hitl-policy-strict.json

contenox config list   # review current settings
Key                Description
default-model      Model name used when --model is not passed
default-provider   Provider type used when --provider is not passed
default-chain      Path to the default chain file
hitl-policy-name   Active HITL policy file name (e.g. hitl-policy-strict.json); empty means hitl-policy-default.json

Manage backends

contenox backend list
contenox backend show openai
contenox backend remove myvllm

Supported providers

--type             Notes
ollama             Local: run ollama serve first. Hosted: use --url https://ollama.com/api --api-key-env OLLAMA_API_KEY.
openai             Use --api-key-env OPENAI_API_KEY. Base URL inferred.
gemini             Use --api-key-env GEMINI_API_KEY. Base URL inferred.
vllm               Self-hosted OpenAI-compatible endpoint. Requires --url.
vertex-google      Vertex AI, Google models (Gemini). Requires --url with project and region. Auth: service account JSON via --api-key-env, or ADC (no flag needed if gcloud auth application-default login is configured).
vertex-anthropic   Vertex AI, Anthropic models (Claude). Same auth and URL pattern as vertex-google.
vertex-meta        Vertex AI, Meta models (Llama). Same auth and URL pattern as vertex-google.
vertex-mistralai   Vertex AI, Mistral models. Same auth and URL pattern as vertex-google.

Database location

Contenox resolves the database path in this order:

  1. --db <path> flag
  2. --data-dir <path> — if set, uses <path>/local.db
  3. .contenox/local.db found by walking up from the current working directory
  4. ~/.contenox/local.db (global fallback)
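The resolution order above can be sketched as a small shell function. This is a hypothetical helper for illustration only, not part of the contenox CLI:

```shell
# Resolve the database path the way the order above describes:
# explicit --db value, then --data-dir, then the nearest
# .contenox/local.db walking up from the working directory,
# then the global fallback under $HOME.
resolve_db() {
  db_flag="$1"; data_dir="$2"
  if [ -n "$db_flag" ]; then echo "$db_flag"; return; fi
  if [ -n "$data_dir" ]; then echo "$data_dir/local.db"; return; fi
  dir="$PWD"
  while [ "$dir" != "/" ]; do
    if [ -f "$dir/.contenox/local.db" ]; then
      echo "$dir/.contenox/local.db"; return
    fi
    dir=$(dirname "$dir")    # walk one directory up
  done
  echo "$HOME/.contenox/local.db"
}
```

Note that the first match wins: an explicit flag always beats directory discovery, and the global file is used only when nothing else applies.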

The --data-dir flag also controls where chain files, plans, and VFS data are stored. This is useful for running isolated instances (e.g. integration tests) without affecting your real .contenox/ directory.