contenox

Your sovereign, open-source runtime for reliable AI workflows.

Orchestrate LLMs, external tools, and business logic as deterministic state machines—fully self-hosted, extensible, and vendor-agnostic.

AI Workflows as State Machines

contenox executes AI workflows as explicit, observable state machines. Every step—LLM call, data fetch, or external action—is a state with defined inputs, outputs, and transitions.
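The execution model above can be sketched in plain Go. This is a conceptual illustration only: the `Step` type, the `Run` loop, and the state names are invented for this sketch and are not the contenox API.

```go
package main

import "fmt"

// Step is one state in the machine: it consumes an input, produces an
// output, and names the next state to transition to.
type Step func(input string) (output string, next string)

// Run executes steps deterministically from `start` until a step
// transitions to "done". Every hop is an explicit, loggable transition.
func Run(steps map[string]Step, start, input string) string {
	state, data := start, input
	for state != "done" {
		step, ok := steps[state]
		if !ok {
			return data // unknown state: stop rather than guess
		}
		data, state = step(data)
	}
	return data
}

func main() {
	steps := map[string]Step{
		"fetch":     func(in string) (string, string) { return in + " -> fetched", "summarize" },
		"summarize": func(in string) (string, string) { return in + " -> summarized", "done" },
	}
	fmt.Println(Run(steps, "fetch", "doc")) // doc -> fetched -> summarized
}
```

In contenox the equivalent graph is declared in YAML or via the API rather than written as code, but the semantics are the same: no hidden control flow between states.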

🔌 Hooks: Connect to Anything

Extend your agents with hooks—remote services that integrate external APIs, databases, or tools. Register any OpenAPI-compatible endpoint, and contenox auto-generates callable LLM tools.

⚙️ Declarative & Observable

Define workflows in YAML or via API. Every transition, model call, and hook execution is logged, traced, and replayable—no black-box agents.

☁️ Self-Hosted & Portable

Run anywhere: on-prem, air-gapped, or cloud. Built with Go, containerized, and designed for Kubernetes. Full data sovereignty.

🔄 Multi-Model Orchestration

Route requests dynamically across Ollama, vLLM, OpenAI, or custom backends. Define fallbacks, affinity groups, and resolution policies per task.
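The fallback idea can be sketched as ordinary Go: try backends in priority order, return the first success. The `Backend` interface and `Generate` signature are invented for this sketch; in contenox, routing and fallbacks are configured declaratively per task, not hand-coded.

```go
package main

import (
	"errors"
	"fmt"
)

// Backend is anything that can serve a generation request.
type Backend interface {
	Name() string
	Generate(prompt string) (string, error)
}

// RouteWithFallback tries each backend in priority order and returns
// the first successful response.
func RouteWithFallback(backends []Backend, prompt string) (string, error) {
	for _, b := range backends {
		if out, err := b.Generate(prompt); err == nil {
			return out, nil
		}
	}
	return "", errors.New("all backends failed")
}

// stub is a fake backend for demonstration.
type stub struct {
	name string
	fail bool
}

func (s stub) Name() string { return s.name }

func (s stub) Generate(prompt string) (string, error) {
	if s.fail {
		return "", errors.New(s.name + " is down")
	}
	return s.name + ":" + prompt, nil
}

func main() {
	// The primary backend is down; the request falls through to the next.
	backends := []Backend{stub{"ollama", true}, stub{"vllm", false}}
	out, _ := RouteWithFallback(backends, "hello")
	fmt.Println(out) // vllm:hello
}
```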

100% Open Source. No Paywalls.

After extensive internal use, we’re open-sourcing everything—runtime, hooks, observability, and tooling—under Apache 2.0.

If you’re tired of proprietary “AI platforms” that hide critical logic behind APIs, contenox gives you full control.

Star the Repo & Get Started

Try It in 2 Minutes

Runs a full AI stack: LLMs, vector DB, tokenizer, and orchestration engine.


    git clone https://github.com/contenox/runtime.git
    cd runtime
    docker compose up -d
    ./scripts/bootstrap.sh nomic-embed-text:latest phi3:3.8b phi3:3.8b