Troubleshooting SubAgents
Common issues and solutions when working with SubAgents.
Agent Loops Until max_turns_exceeded
Symptom: Agent produces correct intermediate results but never returns, hitting max_turns_exceeded.
Cause: The agent is running in loop mode but never calls `return` to complete.
Solutions:
For single-shot tasks, set `max_turns: 1`:

```elixir
PtcRunner.SubAgent.run(prompt,
  max_turns: 1,  # Single expression, no explicit return needed
  llm: llm
)
```

For agentic tasks, ensure your prompt guides the LLM to call `return`:

```elixir
prompt = """
Find the most expensive product.
When done, call (return {:name "...", :price ...})
"""
```

Check the trace to see what the agent is doing:

```elixir
{:error, step} = SubAgent.run(prompt, debug: true, llm: llm)
SubAgent.Debug.print_trace(step.trace)
```
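If the task genuinely needs several turns, you can also raise the limit instead of forcing a single shot. A minimal sketch; the default and maximum limits aren't stated in this guide, so pick a value that fits the task:

```elixir
# Allow more turns for a genuinely multi-step task while still
# prompting the agent to finish with (return ...)
PtcRunner.SubAgent.run(prompt,
  max_turns: 5,
  llm: llm
)
```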
Validation Errors (Wrong Return Type)
Symptom: {:error, step} with step.fail.reason == :validation_error.
Cause: The agent's return value doesn't match the signature.
Solutions:
Check the signature syntax:

```elixir
# Output only
signature: "{name :string, price :float}"

# With optional fields
signature: "{name :string, price :float?}"

# Arrays
signature: "[{id :int, name :string}]"
```

Make the signature more lenient if the LLM struggles:

```elixir
# Instead of strict types
signature: "{count :int}"

# Allow any value (validate in Elixir)
signature: "{count :any}"
```

Inspect what the agent returned:

```elixir
{:error, step} = SubAgent.run(prompt, debug: true, llm: llm)
IO.inspect(step.fail, label: "Validation error")
```
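At the call site you can branch on the failure reason. A minimal sketch, assuming the step in the error tuple is a map or struct whose `fail` field can be pattern matched (as the symptom above suggests):

```elixir
case SubAgent.run(prompt, signature: "{count :int}", llm: llm) do
  {:ok, step} ->
    {:ok, step}

  {:error, %{fail: %{reason: :validation_error}} = step} ->
    # Log the mismatch and let the caller fall back instead of crashing
    IO.inspect(step.fail, label: "Validation error")
    {:error, :unexpected_shape}

  {:error, step} ->
    {:error, step}
end
```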
Tool Not Being Called
Symptom: Agent answers from "knowledge" instead of calling the provided tool.
Cause: The LLM doesn't understand when or how to use the tool.
Solutions:
Add a clear description:

```elixir
tools = %{
  "get_products" =>
    {&MyApp.Products.list/0,
     description: "Returns all products with name, price, and category fields."}
}
```

Be explicit in the prompt:

```elixir
prompt = "Use the get_products tool to find the most expensive item."
```

Verify the tool appears in the system prompt. You can preview the prompt before running:

```elixir
preview = SubAgent.preview_prompt(agent, context: %{})
IO.puts(preview.system)  # Should list available tools
```

Or inspect it in the trace of an executed agent (requires `debug: true`):

```elixir
SubAgent.Debug.print_trace(step, messages: true)
```
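Putting the pieces above together, a run that passes the described tool map along with the explicit prompt might look like this (option names are the ones used elsewhere in this guide; that they all combine in one call is an assumption):

```elixir
# Combine a descriptive tool map with an explicit prompt
{:ok, step} =
  SubAgent.run("Use the get_products tool to find the most expensive item.",
    tools: tools,
    signature: "{name :string, price :float}",
    llm: llm
  )
```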
Context Too Large
Symptom: LLM responses are slow, expensive, or truncated.
Cause: Too much data in context or return values.
Solutions:
Use the firewall convention for large data:

```elixir
# _ids hidden from LLM prompts but available to programs
signature: "{summary :string, _ids [:int]}"
```

Set prompt limits:

```elixir
PtcRunner.SubAgent.run(prompt,
  prompt_limit: %{list: 3, string: 500},  # Truncate in prompts
  llm: llm
)
```

Process in stages - fetch data in one agent, analyze in another:

```elixir
{:ok, step1} = SubAgent.run("Fetch relevant data", tools: fetch_tools, ...)
{:ok, step2} = SubAgent.run("Analyze this data", context: step1, ...)
```
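These options can also be combined. A sketch that keeps bulky IDs out of the prompt while truncating what the LLM does see (the `order_tools` tool map is illustrative):

```elixir
# Hypothetical combined call: firewall field (_ids) plus prompt truncation
{:ok, step} =
  PtcRunner.SubAgent.run("Summarize the matching orders",
    signature: "{summary :string, _ids [:int]}",
    prompt_limit: %{list: 3, string: 500},
    tools: order_tools,  # assumed: a tool map defined elsewhere
    llm: llm
  )
```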
LLM Returns Prose Instead of Code
Symptom: The LLM explains what it would do instead of writing PTC-Lisp. You may see MaxTurnsExceeded errors with empty traces and no programs generated.
Cause: System prompt not being sent, model confusion, or using wrong code fence format.
Solutions:
Enable debug mode to see exactly what the LLM is receiving and returning:

```elixir
{:error, step} = SubAgent.run(prompt, debug: true, llm: llm)

# Show full LLM messages including the system prompt
SubAgent.Debug.print_trace(step, messages: true)
```

With `messages: true`, you'll see the System Prompt (containing instructions and tool definitions), the actual LLM response, and what feedback was sent back. This is essential for verifying that the instructions and tool definitions are correctly formatted and sent to the LLM.

Ensure your LLM callback includes the system prompt:

```elixir
llm = fn %{system: system, messages: messages} ->
  # system MUST be included - it contains PTC-Lisp instructions
  full_messages = [%{role: :system, content: system} | messages]
  call_llm(full_messages)
end
```

Preview the prompt to verify it contains PTC-Lisp instructions:

```elixir
preview = SubAgent.preview_prompt(agent, context: %{})
String.contains?(preview.system, "PTC-Lisp")
#=> true
```

Try a different model - some models follow PTC-Lisp instructions better than others. See Benchmark Evaluation for model comparisons.
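Coming back to the callback requirement above: if you are unsure what a complete `llm` callback looks like, here is a rough sketch against an OpenAI-compatible chat endpoint using Req. The endpoint, model name, and the assumption that the callback should return the assistant's text are illustrative, not part of the SubAgent API shown in this guide; adapt them to your provider and to the return shape your runner expects.

```elixir
# Illustrative only: wire the %{system: ..., messages: ...} callback to an
# OpenAI-compatible API. Model, URL, and the returned value's shape are
# assumptions; check your provider and the SubAgent docs for the contract.
llm = fn %{system: system, messages: messages} ->
  body = %{
    model: "gpt-4o-mini",
    messages: [%{role: :system, content: system} | messages]
  }

  resp =
    Req.post!("https://api.openai.com/v1/chat/completions",
      json: body,
      auth: {:bearer, System.fetch_env!("OPENAI_API_KEY")}
    )

  # Assumed: return the assistant's text so the runner can parse the PTC-Lisp
  get_in(resp.body, ["choices", Access.at(0), "message", "content"])
end
```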
Viewing Token Usage
To see token consumption for debugging or optimization:
```elixir
{:ok, step} = SubAgent.run(prompt, llm: llm)
SubAgent.Debug.print_trace(step, usage: true)
```

Output:

```
┌─ Usage ──────────────────────────────────────────────────┐
│ Input tokens: 3,107
│ Output tokens: 368
│ Total tokens: 3,475
│ System prompt: 2,329 (est.)
│ Duration: 1,234ms
│ Turns: 1
└──────────────────────────────────────────────────────────┘
```

Options can be combined: `print_trace(step, messages: true, usage: true)`.
Viewing Println Output
When debugging multi-turn agents, println output appears in the trace under "Output:":
```elixir
{:ok, step} = SubAgent.run(prompt, llm: llm, debug: true)
SubAgent.Debug.print_trace(step)
```

Output:

```
┌─ Turn 1 ────────────────────────────────────────────────┐
│ Program:
│ (def results (ctx/search {:q "test"}))
│ (println "Found:" (count results))
│ results
│ Output:
│ Found: 42
│ Result:
│ [{:id 1, :name "..."}, ...]
└──────────────────────────────────────────────────────────┘
```

If you don't see "Output:" in the trace, either no println was called or the LLM didn't use it. The prompt (lisp-addon-multi_turn.md) documents that only println output is shown in feedback; expression results are not displayed.
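If you want progress output between turns, it can help to ask for it explicitly. A small sketch of such a prompt (the wording and the `search_tools` tool map are illustrative):

```elixir
# search_tools is an assumed tool map; the prompt wording is illustrative
prompt = """
Search for test records and summarize them.
After each tool call, use (println ...) to report how many results you have so far.
"""

{:ok, step} = SubAgent.run(prompt, tools: search_tools, llm: llm, debug: true)
SubAgent.Debug.print_trace(step)
```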
Parse Errors in Generated Code
Symptom: {:error, {:parse_error, ...}} from the sandbox.
Cause: LLM generated invalid PTC-Lisp syntax.
Solutions:
Check common mistakes (these are fed back to the LLM automatically):
- Missing operator: `(where :status "active")` should be `(where :status = "active")`
- Lists instead of vectors: `'(1 2 3)` should be `[1 2 3]`
- Missing else branch: `(if cond then)` should be `(if cond then nil)`
Enable debug mode to see raw LLM output:
```elixir
{:error, step} = SubAgent.run(prompt, debug: true, llm: llm)
SubAgent.Debug.print_trace(step.trace)
```

The agent retries automatically - parse errors are shown to the LLM for correction. If it keeps failing, the prompt or model may need adjustment.
Tool Errors
Symptom: step.fail.reason == :tool_error.
Cause: Your tool function raised an exception or returned {:error, ...}.
Solutions:
Return `{:error, reason}` for expected failures:

```elixir
def get_user(%{id: id}) do
  case Repo.get(User, id) do
    nil -> {:error, "User #{id} not found"}
    user -> user
  end
end
```

Let unexpected errors crash - they'll be logged and the agent will see a generic error.

Test tools in isolation before using with SubAgents:

```elixir
MyApp.Tools.get_user(%{id: 123})  # Test directly
```
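A small ExUnit sketch of such an isolation test, assuming a `MyApp.Tools.get_user/1` that wraps the function above (a real test may also need your Ecto sandbox setup):

```elixir
defmodule MyApp.ToolsTest do
  use ExUnit.Case, async: true

  # Assumes MyApp.Tools.get_user/1 wraps the get_user/1 shown above;
  # a real test may also need your Repo/Ecto sandbox configuration.
  test "returns an error tuple for a missing user" do
    assert {:error, _reason} = MyApp.Tools.get_user(%{id: -1})
  end
end
```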
State Not Persisting
Symptom: A stored value returns nil in subsequent turns.
Cause: The program didn't use def to store the value.
Solutions:
Use `def` to persist values:

```clojure
;; This persists cached-data for later access
(def cached-data (ctx/fetch-data {}))
```

Store and return different values:

```clojure
;; Persists cached-data, returns a summary
(do
  (def cached-data (ctx/fetch-data {}))
  (str "Stored " (count cached-data) " items"))
```

Access stored values as plain symbols:

```clojure
;; Access previously stored value
cached-data
```
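If the agent keeps recomputing instead of storing, you can also nudge it from the prompt side. A sketch (the wording and the `data_tools` tool map are illustrative):

```elixir
# data_tools is an assumed tool map; the prompt wording is illustrative
prompt = """
Fetch the full dataset once and store it with (def data ...).
In later turns, reuse the stored data instead of fetching again.
When done, call (return {:count ...}).
"""

{:ok, step} = SubAgent.run(prompt, tools: data_tools, llm: llm)
```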
See Core Concepts for the full state persistence documentation.
See Also
- Getting Started - Basic SubAgent usage
- Core Concepts - Context, memory, error handling
- Testing - Mock LLMs and debug strategies
- `PtcRunner.SubAgent` - API reference