
contenox – Build AI Agents & Chat-Driven UIs with Low Code

A modular platform for building context-aware agents, semantic search, and LLM-powered automation – all driven by conversational interfaces and a powerful DSL.

Core Capabilities

💬 Conversational UI Engine

Replace buttons, menus, and wizards with natural language interactions driven by chat and model-suggested commands.

✨ Unlimited AI Agent Capabilities

Let your AI agents take action and connect with anything. Imagine an agent that can research the latest news and interpret its impact on the stock market, generate personalized content, or even manage physical devices. Our system allows for seamless integration with virtually any service or function your business needs.

🚀 Scalable Runtime Architecture

contenox isn't just a frontend – it's a scalable backend for AI agents that rethinks existing frameworks and builds its own foundations where needed.

⚙️ Configurable Agent Logic (Soon)

Build workflows in YAML – no backend code. Mix LLM prompts, hooks, and logic in a single flow. Change behavior without redeploying.
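
As a taste of the DSL (complete flows appear in the examples further down), here is a minimal single-task sketch; the greeting task itself is just an illustration:

- id: greet
  description: Return a static greeting
  type: raw_string
  prompt_template: "Response: Hello from contenox!"
  transition:
    branches:
      - operator: default
        goto: end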

πŸ” Semantic Search & RAG

Query documents using natural language. Embeddings + vector search power semantic understanding and retrieval-augmented generation.
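
A retrieval step can slot into a flow like any other task and feed a downstream model task. This is a sketch only: the semantic_search hook name and its args are hypothetical placeholders, not shipped hooks:

- id: retrieve_context
  description: Fetch relevant document chunks for the user query
  type: hook
  hook:
    type: semantic_search   # hypothetical hook name, for illustration only
    args:
      query: "{{ .input }}"
  transition:
    branches:
      - operator: default
        goto: answer_with_context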

🎨 Visual Behavior Builder (Soon)

Drag-and-drop your chains, hooks, and transitions – no YAML hand-editing required. Build and preview your agent flows in a visual UI.

πŸ”„ LLM Orchestration & Fallbacks

Our Model Resolver routes requests to the optimal backend, while the Model Provider executes LLM calls via Ollama and vLLM. Perfectly suited for air-gapped Zero Trust deployments.
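
In the DSL, routing preferences show up as ordered model and provider lists on a task's execute_config (see the full examples below). A sketch, assuming the resolver tries entries in order; the fallback model name is chosen for illustration:

- id: execute_with_fallback
  description: Prefer a local Ollama model, fall back to vLLM
  type: model_execution
  execute_config:
    models:
      - llama3      # first choice
      - mistral     # fallback model, name for illustration only
    providers:
      - ollama
      - vllm
  input_var: input
  transition:
    branches:
      - operator: default
        goto: end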

πŸ“Š Tracing, Metrics and Observability

contenox is built for full system transparency at its core.

☁️ Self-Hostable & Cloud-Native

Designed for full control over your data and deployment. Containerized and built with Kubernetes in mind for seamless integration into cloud-native environments.

🇪🇺 GDPR & AI Act Ready

Designed with GDPR, AI Act, and enterprise compliance standards in mind. Our architecture ensures full transparency and control over data flows.

🔒 LLM-Driven Moderation

Tasks can analyze user input before it reaches a model, and model output can be checked before it reaches users.
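
Input-side moderation appears as the moderate task in the full chat example below; an output-side check can reuse the same parse_number pattern. A sketch, where the name of the variable holding the model output is an assumption:

- id: moderate_output
  description: Score the model response before it reaches the user
  type: parse_number
  prompt_template: "Classify response safety (0=safe, 10=unsafe): {{.output}}"
  input_var: output   # assumed name of the variable holding the model output
  transition:
    branches:
      - operator: ">"
        when: "4"
        goto: reject_request
      - operator: default
        goto: persist_messages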

Why I Built Contenox

When I started experimenting with chat-based automation, I assumed there would already be a framework or platform that let me define intelligent agents, control workflows through natural language, and plug into my systems without reinventing the wheel or handing over my data.

But I quickly realized: existing tools were either too experimental, too limited, or locked into proprietary ecosystems. None gave me full control over deployment, data flow, or agent behavior without writing tons of glue code. So I did what I do best – build a product for enterprise-ready technology infrastructure.

That's why I built contenox – not just to scratch my own itch, but to create a flexible, self-hostable runtime that lets me sleep at night when it's deployed publicly, and where chat isn't just a conversation: it replaces the need to log into dozens of UIs and hunt through nested dropdowns.

If you've ever wanted to build a system where your team can truly control your environment through conversation, contenox is for you.

– Alexander Ertli, Founder

Why contenox?

⛓️ Modular Agent Behavior

Define agent behavior through flexible configurations – easily add new features without code changes. Combine AI prompts, decision rules, and custom actions into a single, intuitive structure.

🔌 Intelligent Hook System

Empower your AI agents to perform real-world tasks and integrate with any system. contenox allows your agent to suggest and execute commands – from generating articles and analyzing market data to controlling drones or sending an email.

🔌 Connect MCP Servers as Hooks (Soon)

Allow any compatible MCP server to extend contenox's capabilities via the Model Context Protocol.

🧠 Context-Aware Chat Sessions

Maintain multi-turn conversations with full memory of past interactions. Supports LLMs, document grounding, and role-based message history.

🌐 Omni-Channel Chat APIs (Soon)

Deploy agents to Slack, Discord, or any React-based chat UI. Integrate your conversational workflows wherever your users are.

🚀 Chat-First Runtime Architecture

contenox is more than just an interface – it's a scalable distributed system with a high-performance engine for AI actions, task management, and consistent state handling.

🔒 User & Access Control

Robust multi-user support with granular permissions, managed by a custom access control system for secure, enterprise-grade deployments.

🚀 Scalable Microservice Architecture

Built with independent Go and Python services, enabling parallel development, flexible scaling, and resilient deployments for complex AI workflows.

🚦 Intelligent LLM Routing & Management

Our LLM Resolver dynamically selects the optimal model backend using scoring and routing policies, ensuring efficient and performant AI interactions.

⚡ Asynchronous Job Processing

Dedicated Python workers handle background tasks like document parsing and chunking, ensuring smooth, scalable data ingestion.

🌐 Comprehensive Data Persistence

Leverages PostgreSQL for core data, Vald for high-performance vector search, and Valkey for distributed caching, ensuring robust and scalable storage.

🛡️ Secure Backend-for-Frontend (BFF)

Our BFF pattern securely manages API requests and authentication, protecting the UI from direct exposure and enhancing overall system security.

🧪 Comprehensive Testing & QA

contenox is covered end-to-end with unit tests and behavior-driven integration tests across all layers and features, ensuring stability as the platform evolves.

πŸ” DSL Inspector

All inputs and outputs of every state change and routing decision made through the contenox DSL are recorded by the DSL Inspector into a comprehensive stack trace.

Build Production-Ready Agents in Minutes

Define complex AI behaviors with declarative YAML configurations. No boilerplate - just ship working agents.

🔒 Complete Chat with Moderation

Full chat flow with input validation, command routing, and error handling

- id: append_user_message
  description: Append user message to chat history
  type: hook
  hook:
    type: append_user_message
    args:
      subject_id: "{{ .subject_id }}"
  transition:
    branches:
      - operator: default
        goto: mux_input

- id: mux_input
  description: Check for commands like /echo using Mux
  type: hook
  hook:
    type: command_router
    args:
      subject_id: "{{ .subject_id }}"
  transition:
    branches:
      - operator: equals
        when: "echo"
        goto: persist_messages
      - operator: default
        goto: moderate

- id: moderate
  description: Moderate the input
  type: parse_number
  prompt_template: "Classify input safety (0=safe, 10=spam): {{.input}}"
  input_var: input
  transition:
    branches:
      - operator: ">"
        when: "4"
        goto: reject_request
      - operator: default
        goto: execute_chat_model

- id: reject_request
  description: Reject the request
  type: raw_string
  prompt_template: "Response: Input rejected for safety reasons"
  transition:
    branches:
      - operator: default
        goto: end

- id: execute_chat_model
  description: Run inference using selected LLM
  type: model_execution
  system_instruction: "You're a helpful assistant..."
  execute_config:
    models:
      - gemini-2.5-flash
      - llama3
    providers:
      - gemini
  input_var: input
  transition:
    branches:
      - operator: default
        goto: persist_messages

- id: persist_messages
  description: Persist the conversation
  type: hook
  hook:
    type: persist_messages
    args:
      subject_id: "{{ .subject_id }}"
  transition:
    branches:
      - operator: default
        goto: end

🤝 OpenAI-Compatible API

Drop-in replacement for OpenAI API endpoints

- id: convert_request
  description: Convert OpenAI request to internal format
  type: hook
  hook:
    type: openai_to_internal
  transition:
    branches:
      - operator: default
        goto: execute_model

- id: execute_model
  description: Run inference using selected LLM
  type: model_execution
  execute_config:
    models:
      - "{{ .model }}"
    providers:
      - openai
      - vllm
  transition:
    branches:
      - operator: default
        goto: convert_response

- id: convert_response
  description: Convert internal result to OpenAI format
  type: hook
  hook:
    type: internal_to_openai
    args:
      model: "{{ .model }}"
  transition:
    branches:
      - operator: default
        goto: end

# Python usage (tenant_id in the URL is a placeholder):
from openai import OpenAI

client = OpenAI(
    base_url="https://tenant_id.contenox.com/v1",  # the client appends /chat/completions itself
    api_key="YOUR_API_KEY",  # authentication depends on your deployment
)
response = client.chat.completions.create(
    model="gemini-2.5-flash",
    messages=[{"role": "user", "content": "Hello!"}],
)

Get in Touch

contenox is currently in active development.
Want to learn more or collaborate?

Contact Us