About

Why you shouldn't rent your ability to ship

Most people think AI tools are just unreliable.
That's not the real problem.
The real problem is that your ability to work now depends on systems you don't control.

What breaks

This isn't a prompt problem. It isn't a UI problem.
It's what happens when your workflow:

  • resets itself
  • depends on uptime you don't control
  • lives in systems you can't inspect

It didn't start as a theory

It started when AI became my default way to get ideas into production — and the cracks showed immediately.

"Can't reply." "Try again later." "Out of tokens."

Workflows that should have been repeatable weren't. Every time I needed them again, I had to rebuild the session from scratch — manually reconstructing context just to do the same job.

And it got worse. One model wasn't enough. Then two. Then four or five providers just to stay effective.

At the same time, it became obvious what this actually costs to run — and how much money gets burned to make it feel cheap right now. Which raises the question: what happens when that flips?

Underneath it all: when your work lives in a system you can't audit, you're trusting a boundary you can't see. It should be the other way around. Plans, context, and execution traces should stay yours; which provider handles inference should be just a config line.
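To make that concrete, here is a sketch of what "just a config line" could look like. Everything below is a hypothetical illustration — the keys, values, and provider names are not Contenox's actual schema:

```yaml
# Hypothetical example, not Contenox's real config format.
# The point: switching inference providers is one edit, not a rewrite.
inference:
  provider: ollama              # swap for another backend here
  endpoint: http://localhost:11434
  model: llama3
```

The plan, context, and traces live with you; only this block names who runs the model.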

This isn't a tradeoff.
It's dependency.

Frustration is what you argue about in public. Fear is what you can't unthink once you've felt the boundary give.

The breaking point

When something breaks in production, you don't need suggestions. You need execution — on the machine that's actually failing, under the constraints that matter.

The wake-up call was a production incident. I needed something in the spirit of the big chat assistants — but safe to drop onto a live host full of credentials, next to a database under SLA, while the clock was ticking. The usual suspects didn't fit: not a web chat tab, not a cloud black box, not a brittle chain of one-off automations. In a textbook world that outage shouldn't have required what it required; in the real world I was still running kubectl by hand like it was half a decade ago.

I needed a system that speaks Unix, runs where I run, executes exactly what was planned, and stays under human oversight — a fire extinguisher, not a toy. So I rebuilt Contenox around that bar.

Once the CLI felt solid enough for stakes like that, the everyday uses piled up: onboard to a new app, pull screenshots, wrangle user docs, automate the boring glue — the same engine, lower drama. You don't have to live my incident to benefit: if you want execution you control, real shells, and plans you own, Contenox is for that too.

The question

Why do I need permission to do my own job?

What Contenox is for

Contenox is built around a simple idea: your workflow should stay on your side. Your context shouldn't disappear. Your ability to ship shouldn't depend on someone else's uptime or pricing model — because if it does, you don't own your workflow. You're renting your ability to work.

Not another chat. A system that executes with plans you can trace, rules you set, and backends you choose.

This isn't about AI tools.
It's about not losing control of your ability to work.

This is what happens when execution moves off your machine.
Contenox moves it back.

Today

The engine stays Apache 2.0.

You own it.

We build the business around that — not the other way around.

Contenox works with design partners and enterprise teams where deeper integration and operated infrastructure matter; the open-source path stays free forever.

Alexander Ertli

Building in Hamburg

Questions or collaboration? Open an issue on GitHub or see Pricing for partner programs.