Build LLM agents that write and execute programs. SubAgents combine the reasoning power of LLMs with the computational precision of a sandboxed interpreter.
## Quick Start

```elixir
# Conceptual example - see the Getting Started guide for runnable code
{:ok, step} = PtcRunner.SubAgent.run(
  "What's the total value of orders over $100?",
  tools: %{"get_orders" => &MyApp.Orders.list/0},
  signature: "{total :float}",
  llm: my_llm
)

step.return.total #=> 2450.00
```

**Try it yourself:** the Getting Started guide includes fully runnable examples you can copy-paste.
The SubAgent doesn't answer directly - it writes a program that computes the answer:

```clojure
(->> (tool/get_orders)
     (filter #(> (:amount %) 100))
     (sum-by :amount))
```

This is **Programmatic Tool Calling**: instead of the LLM being the computer, it programs the computer.
## Why PtcRunner?
**LLMs as programmers, not computers.** Most agent frameworks treat LLMs as the runtime. PtcRunner inverts this: LLMs generate programs that execute deterministically in a sandbox. Tool results stay in memory — the LLM explores data through code, exposing only relevant findings. This scales to thousands of items without context limits and eliminates hallucinated counts.

**Best suited for:** document analysis (agentic RAG), log analysis, data aggregation, multi-source joins — any task where raw data volume would overwhelm an LLM's context window.
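To illustrate "exploring data through code": the sketch below assumes a hypothetical `tool/get_logs` tool. The full log list stays in the sandbox; only the final small result is surfaced.

```clojure
;; Illustrative sketch - tool/get_logs is a hypothetical tool.
;; The raw logs never enter the LLM's context; the program filters
;; in the sandbox and exposes only three error messages.
(->> (tool/get_logs)
     (filter #(= (:level %) "error"))
     (map :message)
     (take 3))
```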
## Key Features
- Two execution modes: PTC-Lisp for multi-turn agentic workflows with tools, or JSON mode for single-turn structured output via Mustache templates
- Signatures: Type contracts (`{sentiment :string, score :float}`) that validate outputs and drive auto-retry on mismatch
- Context firewall: `_`-prefixed fields stay in BEAM memory, hidden from LLM prompts
- Transactional memory: `def` persists data across turns without bloating context
- Composable SubAgents: Nest agents as tools with isolated state and turn budgets
- Recursive agents (RLM): Agents call themselves via `:self` tools to subdivide large inputs
- Ad-hoc LLM queries: `llm-query` calls an LLM from within PTC-Lisp with signature-validated responses
- Observable: Telemetry spans for every turn, LLM call, and tool call with parent-child correlation. JSONL trace logs with Chrome DevTools flame-chart export for debugging multi-agent flows (interactive Livebook)
- BEAM-native: Parallel tool calling (`pmap`/`pcalls`), process isolation with timeout and heap limits, fault tolerance
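A minimal sketch of the transactional-memory idea described above, assuming a persisted name can be referenced in later turns (the `tool/get_orders` tool is taken from the Quick Start; see the Core Concepts guide for exact semantics):

```clojure
;; Turn 1: fetch once and persist the result under a name with def
(def orders (tool/get_orders))

;; A later turn can reference the stored value without re-fetching
;; and without replaying the raw data through the LLM's context
(count orders)
```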
## Examples
**Parallel tool calling** - fetch data concurrently:

```clojure
;; LLM generates this - executes in parallel automatically
(let [[user orders stats] (pcalls #(tool/get_user {:id data/user_id})
                                  #(tool/get_orders {:id data/user_id})
                                  #(tool/get_stats {:id data/user_id}))]
  {:user user :order_count (count orders) :stats stats})
```

**Context firewall** - keep large data out of LLM prompts:
```elixir
# The LLM sees:  %{summary: "Found 3 urgent emails"}
# Elixir gets:   %{summary: "...", _email_ids: [101, 102, 103]}
signature: "{summary :string, _email_ids [:int]}"
```

**Ad-hoc LLM judgment from code** - the LLM writes programs that call other LLMs, with typed responses and parallel execution:
```clojure
;; LLM generates this - each llm-query runs in parallel via pmap
(pmap (fn [item]
        (tool/llm-query {:prompt "Rate urgency: {{desc}}"
                         :signature "{urgent :bool, reason :string}"
                         :desc (:description item)}))
      data/items)
```

The agent decides what to ask and how to structure the response — at runtime, from within the generated program. Enable with `llm_query: true`. See the LLM Agent Livebook for a full example.
**Compile SubAgents** - the LLM writes the orchestration logic once; it then executes deterministically:

```elixir
# Orchestrator with SubAgentTools + pure Elixir functions
{:ok, compiled} = SubAgent.compile(orchestrator, llm: my_llm)
# LLM generated: (loop [joke initial, i 1] (if (tool/check ...) (return ...) (recur ...)))

# Execute with zero orchestration cost - only child SubAgents call the LLM
compiled.execute.(%{topic: "cats"}, llm: my_llm)
```

See the Joke Workflow Livebook for a complete example.
## Meta Planner
The meta planner decomposes a mission into a dependency graph of tasks, assigns each to a specialized SubAgent, and executes them in parallel phases. The Trace Viewer provides interactive visualization of the full execution — from the high-level DAG down to individual agent turns with thinking, programs, and tool output.

```shell
mix ptc.viewer --trace-dir path/to/traces
```
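A hypothetical sketch of the mission flow described above — the module and function names here are illustrative only, not the documented API; see the PageIndex example for real MetaPlanner usage:

```elixir
# Hypothetical sketch - names are illustrative, not the documented API.
# The planner decomposes the mission into a task DAG, assigns each task
# to a specialized SubAgent, and runs independent phases in parallel.
{:ok, result} =
  PtcRunner.MetaPlanner.run(
    "Summarize Q3 order revenue and flag anomalies",
    tools: my_tools,
    llm: my_llm
  )
```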
## Installation

```elixir
def deps do
  [{:ptc_runner, "~> 0.7.0"}]
end
```

## Documentation
### Guides
- Getting Started - Build your first SubAgent
- Core Concepts - Context, memory, and the firewall convention
- Patterns - Chaining, orchestration, and composition
- Testing - Mocking LLMs and integration testing
- Troubleshooting - Common issues and solutions
### Reference
- Signature Syntax - Input/output type contracts
- PTC-Lisp Specification - The language SubAgents write
- Benchmark Evaluation - LLM accuracy by model
### Interactive
- `mix ptc.repl` - Interactive REPL for testing PTC-Lisp expressions
- Playground Livebook - Try PTC-Lisp interactively
- LLM Agent Livebook - Build an agent end-to-end
- Examples - Runnable example applications including PageIndex (agentic RAG over PDFs using MetaPlanner)
## Low-Level API
For direct program execution without the agentic loop:
```elixir
{:ok, step} = PtcRunner.Lisp.run(
  "(->> data/items (filter :active) (count))",
  context: %{items: items}
)

step.return #=> 3
```

Programs run in isolated BEAM processes with resource limits (1s timeout, 10MB heap). See the `PtcRunner.Lisp` module docs for options.
## License
MIT