# Configuration
The Contenox runtime is configured via `.contenox/config.yaml`.
When you run `vibe init`, a default configuration is generated. `vibe` looks for this directory in your current working directory, then walks up to the filesystem root, and finally checks your home directory (`~/.contenox/`).
## Default config.yaml

```yaml
version: 1
default_provider: ollama
default_model: qwen2.5:7b

providers:
  ollama:
    base_url: http://localhost:11434
  openai:
    api_key: ""
```

## Settings
| Key | Description | Example |
|---|---|---|
| `default_provider` | The fallback provider if a chain doesn't specify one | `ollama` |
| `default_model` | The fallback model if a chain doesn't specify one | `gpt-4o` |
| `providers.<name>` | Provider-specific connection settings | see below |
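The fallback behavior described in the table can be sketched like this. `resolve_model` and the shape of the `chain_step` dict are hypothetical illustrations, not the runtime's actual API:

```python
def resolve_model(chain_step: dict, config: dict) -> tuple[str, str]:
    """A chain step's own provider/model take precedence; otherwise the
    config's default_provider and default_model apply as fallbacks."""
    provider = chain_step.get("provider", config["default_provider"])
    model = chain_step.get("model", config["default_model"])
    return provider, model
```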
## Supported Providers
Contenox uses a unified translation layer, meaning you can swap providers per-task in your chains without changing the prompt format or tool schemas.
- `ollama`: Requires `base_url` (usually `http://localhost:11434`).
- `openai`: Requires `api_key` (or uses `OPENAI_API_KEY` from the environment).
- `vllm`: Exposes an OpenAI-compatible endpoint. Requires `base_url`.
- `gemini`: Requires `api_key`.
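A config enabling several providers at once might look like the sketch below, so chains can pick any of them per-task. The `vllm` port and the empty `api_key` placeholders are assumptions for illustration, not documented defaults:

```yaml
version: 1
default_provider: ollama
default_model: qwen2.5:7b

providers:
  ollama:
    base_url: http://localhost:11434
  openai:
    api_key: ""          # empty: falls back to OPENAI_API_KEY from the environment
  vllm:
    base_url: http://localhost:8000   # assumed port; use your vLLM server's address
  gemini:
    api_key: ""
```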
