# Handlers
Every task has a handler field that determines what it does. This page documents all available handlers and which fields are valid for each.
## Handler types
| Handler | What it does |
|---|---|
| chat_completion | Send messages to an LLM, receive a text/tool-call reply |
| execute_tool_calls | Execute the tool calls from the previous LLM reply |
| hook | Call a specific named hook tool directly (no LLM involved) |
| prompt_to_string | Render a Go template with task variables, output as a string |
| prompt_to_int | Send the prompt to the LLM and parse the reply as an integer |
| raise_error | Immediately halt the chain with an error message |
| noop | Pass input through unchanged |
## chat_completion
Sends the current input to the LLM and waits for a reply. If the model calls a tool, the transition evaluates to "tool-call".
Key fields:
| Field | Required | Description |
|---|---|---|
| system_instruction | No | System prompt (supports macros) |
| execute_config.model | Yes | Model name, e.g. qwen2.5:7b |
| execute_config.provider | Yes | ollama, openai, vllm, gemini, vertex-google, vertex-anthropic, vertex-meta, vertex-mistralai |
| execute_config.hooks | No | Hook allowlist: [] = none, ["*"] = all, ["a","b"] = only the named hooks, ["*","!x"] = all except x. Absent = all (backward compatibility). |
| execute_config.hide_tools | No | Tools to suppress from the model |
| execute_config.temperature | No | Sampling temperature (0–1) |
| execute_config.think | No | Reasoning effort level: "low", "medium", "high", or "false". Supported by Ollama (v0.17.5+), Gemini 2.5+, vLLM, and OpenAI o-series models. |
| execute_config.shift | No | Boolean. If true, slides the context window by dropping old messages instead of erroring at the token limit. |
| execute_config.truncate | No | Boolean. If true, truncates the initial prompt instead of sliding the context window (Ollama-specific). |
| execute_config.models | No | Array of fallback model IDs, tried in order when the primary model is unavailable. |
| execute_config.providers | No | Array of fallback provider types, paired index-for-index with models. |
| execute_config.compact_policy | No | Mid-run history compaction settings — see compact_policy below. |
| execute_config.retry_policy | No | LLM-call retry and model-fallback settings — see retry_policy below. |
Transition values:
"tool-call"— model issued one or more tool calls"stop"— model replied with text and stopped"length"— reply was truncated at token limit
### compact_policy
When the chat history approaches token_limit, the engine summarises older messages in place rather than erroring or sliding the window.
| Field | Type | Default | Description |
|---|---|---|---|
| trigger_fraction | float | 0.85 | Fraction of token_limit that triggers compaction |
| keep_recent | int | 10 | Number of trailing messages preserved verbatim |
| model | string | caller's model | LLM used to produce the summary |
| provider | string | caller's provider | Provider for the compaction call |
| max_failures | int | 3 | Consecutive failures before compaction is disabled for the run |
| min_replaced_messages | int | 4 | Minimum number of messages that must be replaced for compaction to be worthwhile |
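As a sketch, an execute_config that compacts earlier than the default and keeps fewer recent messages might look like this (all values and model names are illustrative):

```json
"execute_config": {
  "model": "qwen2.5:7b",
  "provider": "ollama",
  "compact_policy": {
    "trigger_fraction": 0.8,
    "keep_recent": 6,
    "model": "qwen2.5:3b",
    "provider": "ollama"
  }
}
```

Pointing compact_policy at a smaller model keeps the summarisation calls cheap while the main model handles the conversation.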
### retry_policy
Controls automatic retries on transient LLM errors and optional model swapping after repeated failures.
| Field | Type | Default | Description |
|---|---|---|---|
| max_attempts | int | 1 | Total attempts, including the first (0 or 1 disables retry) |
| initial_backoff | duration | "500ms" | Wait before the second attempt; doubled on each retry |
| max_backoff | duration | — | Cap on the exponential backoff |
| jitter | float | 0 | Fraction (0–1) of the backoff added as random noise |
| rate_limit_min_wait | duration | — | Minimum wait when the provider returns a rate-limit error |
| fallback_model_id | string | — | Alternate model ID to switch to after fallback_after consecutive failures |
| fallback_after | int | — | Failure count that triggers the model swap |
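For instance, a sketch of a policy that makes up to three attempts with jittered exponential backoff and swaps to a fallback model after two consecutive failures (the values and fallback model ID are illustrative):

```json
"execute_config": {
  "model": "qwen2.5:7b",
  "provider": "ollama",
  "retry_policy": {
    "max_attempts": 3,
    "initial_backoff": "500ms",
    "max_backoff": "5s",
    "jitter": 0.2,
    "fallback_model_id": "qwen2.5:3b",
    "fallback_after": 2
  }
}
```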
Full task example:
```json
{
  "id": "chat",
  "handler": "chat_completion",
  "system_instruction": "You are a helpful assistant. Today is {{now:2006-01-02}}.",
  "execute_config": {
    "model": "{{var:model}}",
    "provider": "{{var:provider}}",
    "hooks": ["nws"]
  },
  "transition": {
    "branches": [
      { "operator": "equals", "when": "tool-call", "goto": "run_tools" },
      { "operator": "default", "when": "", "goto": "end" }
    ]
  }
}
```
## execute_tool_calls
Executes the tool calls emitted by the previous chat_completion task, appends the results to the chat history, and loops back.
Key fields:
| Field | Required | Description |
|---|---|---|
| input_var | Yes | ID of the chat_completion task whose output to use |
Example:
```json
{
  "id": "run_tools",
  "handler": "execute_tool_calls",
  "input_var": "chat",
  "transition": {
    "branches": [
      { "operator": "default", "when": "", "goto": "chat" }
    ]
  }
}
```
## hook
Calls a specific tool on a named hook directly — no LLM involved. Use for deterministic side effects (e.g. writing a file, calling a fixed API endpoint).
Key fields:
| Field | Required | Description |
|---|---|---|
| hook.name | Yes | Registered hook name (e.g. local_shell) |
| hook.tool_name | Yes | Tool/operation to call on that hook |
| hook.args | No | Static arguments passed to the tool |
| output_template | No | Go text/template string rendered against the hook's JSON response. Template variables are the response fields (e.g. {{.exit_code}}). The output is stored as a string. |
Example:
```json
{
  "id": "write_file",
  "handler": "hook",
  "hook": {
    "name": "local_fs",
    "tool_name": "write_file",
    "args": { "path": "/tmp/output.txt" }
  }
}
```
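A second sketch showing output_template, assuming a hook named local_shell whose JSON response contains exit_code and stdout fields (the tool name and response fields here are illustrative, not a documented contract):

```json
{
  "id": "run_cmd",
  "handler": "hook",
  "hook": {
    "name": "local_shell",
    "tool_name": "run",
    "args": { "command": "uname -a" }
  },
  "output_template": "exit code {{.exit_code}}: {{.stdout}}"
}
```

The rendered string becomes the task's output, so later tasks can reference it in their templates.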
## prompt_to_string
Renders a Go template string using accumulated task output variables. Useful for building prompts that combine outputs from multiple previous tasks.
Key fields:
| Field | Required | Description |
|---|---|---|
| prompt_template | Yes | Go template string, variables via {{.task_id}} |
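A sketch that combines the outputs of two earlier tasks into one prompt (the task IDs chat and run_cmd and the goto target are illustrative):

```json
{
  "id": "build_prompt",
  "handler": "prompt_to_string",
  "prompt_template": "Summarise this reply:\n{{.chat}}\n\nCommand result:\n{{.run_cmd}}",
  "transition": {
    "branches": [
      { "operator": "default", "when": "", "goto": "summarise" }
    ]
  }
}
```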
## noop
Passes input through to the next task unchanged. Useful as an explicit routing node.
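A sketch of a noop used purely as a routing node (IDs are illustrative):

```json
{
  "id": "route",
  "handler": "noop",
  "transition": {
    "branches": [
      { "operator": "default", "when": "", "goto": "next_step" }
    ]
  }
}
```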
## prompt_to_int
Sends the prompt to the LLM and parses the reply as an integer. The eval string is the decimal representation of the result (e.g. "42"). Use with the in_range or equals operators in branches.
Key fields:
| Field | Required | Description |
|---|---|---|
| system_instruction | Yes | Prompt that instructs the model to reply with a number |
| execute_config.model / provider | Yes | Model and provider to use |
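A sketch that asks for a score and branches on the parsed integer via its decimal eval string (the model, prompt, and branch targets are illustrative):

```json
{
  "id": "score",
  "handler": "prompt_to_int",
  "system_instruction": "Rate the input from 0 to 10. Reply with the number only.",
  "execute_config": {
    "model": "qwen2.5:7b",
    "provider": "ollama"
  },
  "transition": {
    "branches": [
      { "operator": "equals", "when": "10", "goto": "publish" },
      { "operator": "default", "when": "", "goto": "revise" }
    ]
  }
}
```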
## raise_error
Immediately halts the chain with the input string as the error message. Use as a terminal error branch — no transition needed.
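A minimal sketch (the task ID is illustrative); route to it from an error branch, for example a chat_completion branch matching the "length" transition value, and the string it receives as input becomes the error message:

```json
{
  "id": "fail",
  "handler": "raise_error"
}
```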