Getting Started with SubAgents

This guide walks you through your first SubAgent - from a minimal example to understanding the core execution model.

Prerequisites

  • Elixir 1.15+
  • An LLM provider (OpenRouter, Anthropic, OpenAI, etc.)

The Simplest SubAgent

{:ok, step} = PtcRunner.SubAgent.run(
  "How many r's are in raspberry?",
  llm: my_llm
)

step.return  #=> 3

That's it. No tools, no signature, no validation - just a prompt and an LLM.

Why This Matters

The SubAgent doesn't answer directly - it writes a program that computes the answer:

(count (filter #(= % "r") (seq "raspberry")))

This is the core insight of PTC (Programmatic Tool Calling): instead of asking the LLM to be the computer, ask it to program the computer. The LLM reasons and generates code; the actual computation runs in a sandboxed interpreter where results are deterministic.
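For intuition, the generated program above is equivalent to this plain Elixir:

```elixir
# The same deterministic computation the generated PTC-Lisp performs:
# split "raspberry" into graphemes and count the "r"s.
"raspberry"
|> String.graphemes()
|> Enum.count(&(&1 == "r"))
#=> 3
```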

With Context

Pass data to the prompt using {{placeholders}}:

{:ok, step} = PtcRunner.SubAgent.run(
  "Summarize in one sentence: {{text}}",
  context: %{text: "Long article about climate change..."},
  llm: my_llm
)

step.return  #=> "Climate change poses significant global challenges..."
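Conceptually, placeholder substitution is plain string interpolation. A minimal sketch, illustrative only - the library's actual templating also supports full Mustache syntax:

```elixir
# Minimal sketch of {{placeholder}} substitution - not the library's
# implementation, just the idea: replace {{key}} with context[key].
render = fn template, context ->
  Regex.replace(~r/\{\{(\w+)\}\}/, template, fn _match, key ->
    to_string(Map.get(context, String.to_atom(key), ""))
  end)
end

render.("Summarize in one sentence: {{text}}", %{text: "Long article..."})
#=> "Summarize in one sentence: Long article..."
```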

With Type Validation

Add a signature to validate the output structure:

{:ok, step} = PtcRunner.SubAgent.run(
  "Rate this review sentiment",
  context: %{review: "Great product, love it!"},
  signature: "{sentiment :string, score :float}",
  llm: my_llm
)

step.return["sentiment"]  #=> "positive"
step.return["score"]      #=> 0.95

Text Mode (Simpler Alternative)

For tasks that don't need PTC-Lisp, use output: :text. Behavior is auto-detected from whether tools are provided and from the return type:

{:ok, step} = PtcRunner.SubAgent.run(
  "Extract the person's name and age from: {{text}}",
  context: %{text: "John is 25 years old"},
  output: :text,
  signature: "(text :string) -> {name :string, age :int}",
  llm: my_llm
)

step.return["name"]  #=> "John"
step.return["age"]   #=> 25

With a complex return type and no tools, the LLM returns structured JSON directly. With no signature or a :string return type, it returns raw text. Use it when you need structured output but not computation.

Text mode supports full Mustache templating including sections for lists:

# Iterate over list data with {{#section}}...{{/section}}
SubAgent.new(
  prompt: "Summarize these items: {{#items}}{{name}}, {{/items}}",
  output: :text,
  signature: "(items [{name :string}]) -> {summary :string}"
)

Constraints: Signature is optional. Tools are optional. Compression and firewall fields are not supported.

See Text Mode Guide for Mustache syntax, validation rules, tool calling, and examples.

Text Mode with Tools (For Smaller LLMs)

For smaller or faster LLMs that can use native tool calling but can't generate PTC-Lisp, use output: :text with tools:

{:ok, step} = PtcRunner.SubAgent.run(
  "What is 17 + 25? Use the add tool.",
  output: :text,
  signature: "() -> {result :int}",
  tools: %{
    "add" => {fn args -> args["a"] + args["b"] end,
              signature: "(a :int, b :int) -> :int",
              description: "Add two numbers"}
  },
  llm: my_llm
)

step.return["result"]  #=> 42

Text mode auto-detects tool calling when tools are provided. Tool signatures are converted to JSON Schema and passed through the LLM provider's native tool calling API. The LLM calls tools, ptc_runner executes them, and the loop continues until the LLM returns a final answer. If a complex return type is specified, the answer is validated as JSON against the signature; if there is no signature, or the return type is :string, the raw text answer is returned.
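The loop described above can be sketched in plain Elixir. This is a conceptual illustration, not the library's code; run_llm stands in for the provider call and returns either a tool call or a final answer:

```elixir
# Conceptual sketch of the text-mode tool loop (assumption: simplified
# message shapes; the real library uses the provider's native API).
defmodule ToolLoopSketch do
  def loop(messages, tools, run_llm) do
    case run_llm.(messages) do
      {:tool_call, name, args} ->
        # Execute the requested tool and feed the result back.
        fun = Map.fetch!(tools, name)
        result = fun.(args)
        loop(messages ++ [{:tool_result, name, result}], tools, run_llm)

      {:answer, text} ->
        # Content without tool calls ends the loop.
        text
    end
  end
end

# A stub LLM: asks for one add, then answers with the result it saw.
stub_llm = fn messages ->
  case Enum.find(messages, &match?({:tool_result, _, _}, &1)) do
    nil -> {:tool_call, "add", %{"a" => 17, "b" => 25}}
    {:tool_result, _, result} -> {:answer, to_string(result)}
  end
end

tools = %{"add" => fn args -> args["a"] + args["b"] end}
ToolLoopSketch.loop([], tools, stub_llm)
#=> "42"
```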

Constraints: No memory persistence between turns.

See Text Mode Guide for multi-tool scenarios, limits, and error handling.

Adding Tools

Tools let the agent call functions to gather information:

{:ok, step} = PtcRunner.SubAgent.run(
  "What is the most expensive product?",
  signature: "{name :string, price :float}",
  tools: %{"list_products" => &MyApp.Products.list/0},
  llm: my_llm
)

step.return["name"]   #=> "Widget Pro"
step.return["price"]  #=> 299.99

With tools, the SubAgent enters an agentic loop - it calls tools and reasons until it has enough information to return.

Execution Behavior

| Mode | Condition | Behavior |
|------|-----------|----------|
| Single-shot | max_turns: 1 and no tools | One LLM call, expression returned directly |
| Loop (PTC-Lisp) | Tools or max_turns > 1 | Multiple turns until (return ...) or (fail ...) |
| Loop (Text) | output: :text with tools | LLM calls tools via native API, returns final text or JSON |

In single-shot mode, the LLM's expression is evaluated and returned directly. In PTC-Lisp loop mode, the agent must explicitly call return or fail to complete. In text mode with tools, the loop ends when the LLM returns content without tool calls.

Common Pitfall: If your agent produces correct results but keeps looping until max_turns_exceeded, it's likely in loop mode without calling return. Either set max_turns: 1 for single-shot execution, or ensure your prompt guides the LLM to call (return ...) when done.

Validation Retries with retry_turns

By default, if return value validation fails, the agent stops with an error. To enable automatic recovery, use the retry_turns option to give agents a limited budget for retrying after validation failures:

{:ok, step} = PtcRunner.SubAgent.run(
  "Extract and return user data",
  signature: "{name :string, age :int}",
  retry_turns: 3,  # Budget for 3 retry attempts if validation fails
  llm: my_llm
)

When validation fails and retries are available:

  1. The agent enters retry mode with the original error message and guidance
  2. The LLM sees feedback like "Retry 1 of 3" to understand how many attempts remain
  3. The agent must call (return new_value) to complete
  4. If validation passes on a retry, the run completes successfully
  5. If retries are exhausted, the agent returns an error

The retry_turns option uses a unified budget model alongside max_turns:

  • Work turns (max_turns): Used for normal execution with tools available
  • Retry turns (retry_turns): Used only after validation failures, with no tools

This separation lets agents safely explore solutions during work turns, then recover from validation errors during retry turns without consuming the main work budget.

Note: Single-shot agents with retry_turns > 0 use compression to collapse previous failed attempts, preventing context window inflation during retries. For multi-turn agents, providing a signature enables validation of each (return ...) call.

Debugging Execution

To see what the agent is doing, use PtcRunner.SubAgent.Debug.print_trace/2:

{:ok, step} = SubAgent.run(prompt, llm: my_llm)
PtcRunner.SubAgent.Debug.print_trace(step)

For more detail, include raw LLM output (reasoning) or the actual messages sent:

# Include LLM reasoning/commentary
PtcRunner.SubAgent.Debug.print_trace(step, raw: true)

# Show full messages sent to LLM
PtcRunner.SubAgent.Debug.print_trace(step, messages: true)

This is essential for identifying why a model might be failing or ignoring tool instructions.

More options: See Observability for compression, telemetry, and production tips.

Signatures (Optional)

Signatures define a contract for inputs and outputs:

# Output only
signature: "{name :string, price :float}"

# With inputs (for reusable agents)
signature: "(query :string) -> [{id :int, title :string}]"

When provided, signatures:

  • Validate return data (agent retries on mismatch)
  • Document expected shape to the LLM
  • Give your Elixir code predictable types

See Signature Syntax for full syntax.

Providing an LLM

Add {:req_llm, "~> 1.2"} to your deps for the built-in adapter:

llm = PtcRunner.LLM.callback("openrouter:anthropic/claude-haiku-4.5")
{:ok, step} = PtcRunner.SubAgent.run("What is 2 + 2?", llm: llm)

Or supply any callback function directly:

llm = fn %{system: system, messages: messages} ->
  # Call your LLM provider here
  {:ok, response_text}
end
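For tests, a deterministic stub following the same contract is enough. A minimal sketch, assuming the callback shape shown above:

```elixir
# A canned-response stub LLM for tests (assumes the callback contract
# above: receives %{system: ..., messages: ...}, returns {:ok, text}).
stub_llm = fn %{system: _system, messages: _messages} ->
  {:ok, "(return 4)"}
end

stub_llm.(%{system: "You are a calculator.", messages: []})
#=> {:ok, "(return 4)"}
```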

See LLM Setup for provider configuration, streaming, custom adapters, and framework integration (Req, LangChain, Bumblebee).

Defining Tools

Tools are functions the SubAgent can call. Provide them as a map:

tools = %{
  "list_products" => &MyApp.Products.list/0,
  "get_product" => &MyApp.Products.get/1,
  "search" => fn %{query: q, limit: l} -> MyApp.search(q, l) end
}

Auto-Extraction from @spec and @doc

Tool signatures and descriptions are auto-extracted when available:

# In your module
@doc "Search for items matching the query string"
@spec search(String.t(), integer()) :: [map()]
def search(query, limit), do: ...

# Auto-extracted:
#   signature: "(query :string, limit :int) -> [:map]"
#   description: "Search for items matching the query string"
tools = %{"search" => &MyApp.search/2}

Explicit Signatures

For functions without specs, provide a signature explicitly:

tools = %{
  "search" => {&MyApp.search/2, "(query :string, limit :int) -> [{id :int}]"}
}

For production tools, add descriptions and explicit signatures using keyword list format:

tools = %{
  "search" => {&MyApp.search/2,
    signature: "(query :string, limit :int?) -> [{id :int, title :string}]",
    description: "Search for items matching query. Returns up to limit results (default 10)."
  }
}

Result Caching

For tools with stable, pure outputs (same inputs always produce the same result), enable cache: true to avoid redundant calls across turns:

tools = %{
  "get-config" => {&MyApp.get_config/1,
    signature: "(key :string) -> :any",
    cache: true
  }
}

Cached results persist across turns within a single SubAgent.run/2 call. Only successful results are cached - errors are never stored. Do not enable caching for tools that read mutable state that other tools in the session can modify.
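The caching behavior can be pictured as argument-keyed memoization scoped to one run. A simplified sketch, not the library's code:

```elixir
# Argument-keyed memoization, scoped to one run (illustrative only).
defmodule ToolCacheSketch do
  # Returns {result, updated_cache}; the tool runs only on a cache miss.
  def call_cached(cache, fun, args) do
    case Map.fetch(cache, args) do
      {:ok, result} ->
        {result, cache}

      :error ->
        result = fun.(args)
        {result, Map.put(cache, args, result)}
    end
  end
end

# Prove the tool runs once: count invocations with an Erlang counter.
calls = :counters.new(1, [])

get_config = fn key ->
  :counters.add(calls, 1, 1)
  String.upcase(key)
end

{r1, cache} = ToolCacheSketch.call_cached(%{}, get_config, "db_host")
{r2, _cache} = ToolCacheSketch.call_cached(cache, get_config, "db_host")
# r1 == r2 == "DB_HOST", and the underlying function ran exactly once.
```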

See PtcRunner.Tool for all supported tool formats.

Builtin LLM Queries

Enable llm_query: true to let the agent make ad-hoc LLM calls from PTC-Lisp without defining separate tools:

{:ok, step} = PtcRunner.SubAgent.run(
  "Classify each item by urgency",
  signature: "(items [:map]) -> {urgent [:map]}",
  llm_query: true,
  llm: my_llm,
  context: %{items: items}
)

The agent can call tool/llm-query with a prompt and optional signature for classification, judgment, or extraction tasks. See Composition Patterns for details.

Builtin Tools

Use builtin_tools to enable utility tool families without defining them yourself:

{:ok, step} = PtcRunner.SubAgent.run(
  "Find lines mentioning 'error' in the log",
  builtin_tools: [:grep],
  llm: my_llm,
  context: %{log: log_text}
)

The :grep family adds tool/grep and tool/grep-n (line-numbered variant). Multiple families can be combined by listing them all in builtin_tools. User-defined tools with the same name take precedence.

Text mode note: In text mode (output: :text), tool names with hyphens are automatically sanitized to underscores for the LLM provider API (e.g., grep-n becomes grep_n). The mapping is handled transparently.
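The sanitization itself is a simple character swap:

```elixir
# Hyphens are invalid in some provider tool-name schemas, so text mode
# swaps them for underscores before sending tool definitions.
String.replace("grep-n", "-", "_")
#=> "grep_n"
```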

Agent as Data

For reusable agents, create the struct separately:

# Define once
product_finder = PtcRunner.SubAgent.new(
  prompt: "Find the most expensive product",
  signature: "{name :string, price :float}",
  tools: product_tools,
  max_turns: 5
)

# Execute with runtime params
{:ok, step} = PtcRunner.SubAgent.run(product_finder, llm: my_llm)

This separation enables testing, composition, and reuse.

SubAgents also support fields for documentation (description, field_descriptions, context_descriptions), output formatting (format_options, float_precision), and memory limits (memory_limit, memory_strategy). See PtcRunner.SubAgent.new/1 for all options.

The Firewall Convention

Fields prefixed with _ are firewalled - available to your Elixir code and the agent's programs, but hidden from LLM prompt history:

signature: "{summary :string, count :int, _email_ids [:int]}"

This keeps parent agent context lean while preserving full data access. See Core Concepts for details.

State Persistence

Use def to store values that persist across turns within a single run:

(def cache result)   ; store
cache                ; access as plain symbol

Use defn to define reusable functions:

(defn expensive? [item] (> (:price item) 1000))
(filter expensive? data/items)
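For comparison, the defn example above corresponds to this plain Elixir, using hypothetical sample items:

```elixir
# Elixir equivalent of (defn expensive? ...) plus the filter call;
# the item data here is made up for illustration.
items = [
  %{name: "Widget Pro", price: 1500.0},
  %{name: "Widget Lite", price: 200.0}
]

expensive? = fn item -> item.price > 1000 end

Enum.filter(items, expensive?)
#=> [%{name: "Widget Pro", price: 1500.0}]
```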

State is scoped per-agent and hidden from prompts. See Core Concepts for details.

Multi-Turn Chat

For chat applications where conversation history persists across calls, use chat/3:

agent = PtcRunner.SubAgent.new(
  prompt: "placeholder",
  output: :text,
  system_prompt: "You are a helpful assistant."
)

# First turn
{:ok, reply, messages} = PtcRunner.SubAgent.chat(agent, "Hello!", llm: my_llm)

# Second turn — pass messages back to continue the conversation
{:ok, reply2, messages2} = PtcRunner.SubAgent.chat(
  agent, "Tell me more",
  llm: my_llm, messages: messages
)

chat/3 forces output: :text and automatically threads conversation history. The system prompt is managed by the agent struct — you don't need to include it in the messages list.

Streaming works via on_chunk:

{:ok, reply, messages} = PtcRunner.SubAgent.chat(agent, "Hello!",
  llm: my_llm,
  on_chunk: fn %{delta: text} -> IO.write(text) end
)

See Phoenix Streaming for a full LiveView integration recipe.

See Also