LLM-powered tools for classification, evaluation, and judgment.
LLMTool allows you to create tools that use an LLM to make decisions or generate structured outputs. The tool is configured with a prompt template and signature that defines its inputs and outputs.
Use Cases
LLMTool is ideal for:
- Classification - Categorize inputs (sentiment, priority, type)
- Evaluation - Score quality, relevance, urgency
- Judgment - Make yes/no decisions with reasoning
- Extraction - Pull structured data from text
For complex multi-step tasks, use SubAgent.as_tool/2 instead.
LLM Inheritance
The :llm option controls which LLM is used:
| Value | Behavior |
|---|---|
| `:caller` (default) | Inherit from calling agent |
| `:haiku`, `:sonnet` | Specific model via registry |
| `fn input -> result end` | Custom LLM function |
The :caller atom is only valid for LLMTool and explicitly signals
"use whatever LLM the calling agent is using."
Execution
Three Output Modes
- Text mode (default) — LLM returns JSON, validated against the signature's return type.
- Template mode — LLM returns JSON (per :json_signature), then :response_template renders a PTC-Lisp expression with Mustache placeholders filled from the JSON. The template runs in a no-tools sandbox. Best for turning simple LLM judgments (booleans, numbers) into typed Lisp values (keywords, expressions).
- Agent mode — when :tools is provided, the tool runs as a multi-turn agent rather than making a single LLM call.
Template mode fields:
- :json_signature — Signature for the internal JSON call (falls back to :signature)
- :response_template — PTC-Lisp string with {{placeholder}} references to JSON fields
Safety note: response_template injects raw JSON values into PTC-Lisp source.
This is safe for structural primitives (booleans, keywords, numbers). Avoid string
interpolation where quotes could break Lisp parsing.
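The safety concern above can be made concrete with a minimal Mustache-style substitution, a sketch only; the library's actual renderer may differ:

```elixir
# Minimal sketch of Mustache-style substitution into a PTC-Lisp template.
# Illustrative only; not the library's actual renderer.
render = fn template, fields ->
  Regex.replace(~r/\{\{(\w+)\}\}/, template, fn _match, key ->
    to_string(Map.fetch!(fields, String.to_atom(key)))
  end)
end

# Structural primitives substitute safely:
render.("(if {{urgent}} :escalate :archive)", %{urgent: true})
# => "(if true :escalate :archive)"

# A string value containing a quote lands unescaped in the Lisp source:
render.(~s[(str "{{name}}")], %{name: ~s(a"b)})
# => ~s[(str "a"b")]  -- unbalanced quotes, broken Lisp
```

This is why the note recommends restricting templates to booleans, numbers, and keywords.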
Examples
iex> PtcRunner.SubAgent.LLMTool.new(
...> prompt: "Is {{email}} urgent for {{tier}} customer?",
...> signature: "(email :string, tier :string) -> {urgent :bool, reason :string}"
...> )
%PtcRunner.SubAgent.LLMTool{
prompt: "Is {{email}} urgent for {{tier}} customer?",
signature: "(email :string, tier :string) -> {urgent :bool, reason :string}",
llm: :caller,
description: nil,
tools: nil,
response_template: nil,
json_signature: nil
}
iex> PtcRunner.SubAgent.LLMTool.new(
...> prompt: "Classify {{text}}",
...> signature: "(text :string) -> {category :string}",
...> llm: :haiku,
...> description: "Classifies text into categories"
...> )
%PtcRunner.SubAgent.LLMTool{
prompt: "Classify {{text}}",
signature: "(text :string) -> {category :string}",
llm: :haiku,
description: "Classifies text into categories",
tools: nil,
response_template: nil,
json_signature: nil
}
Summary
Types
@type t() :: %PtcRunner.SubAgent.LLMTool{
        description: String.t() | nil,
        json_signature: String.t() | nil,
        llm: :caller | atom() | function() | nil,
        prompt: String.t(),
        response_template: String.t() | nil,
        signature: String.t(),
        tools: map() | nil,
        validator: (map() -> :ok | {:error, String.t()}) | nil
      }
Functions
Create a new LLMTool with validation.
Options
- :prompt (required) - Template with {{placeholder}} references
- :signature (required) - Contract (inputs validated against placeholders)
- :llm - :caller (default), atom (registry lookup), or function
- :description - For schema generation
- :tools - If provided, runs as a multi-turn agent
- :response_template - PTC-Lisp template with {{placeholder}} for JSON fields
- :json_signature - Signature for the internal JSON call (falls back to :signature)
- :validator - Function (args -> :ok | {:error, msg}) to validate inputs before execution
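A :validator function matching the documented shape, args -> :ok | {:error, msg}, might look like the following sketch; the field it checks (:email) is an illustrative assumption:

```elixir
# Sketch of the documented :validator shape: args -> :ok | {:error, msg}.
# The :email field checked here is a hypothetical example input.
validator = fn args ->
  case args do
    %{email: email} when is_binary(email) and email != "" -> :ok
    _ -> {:error, "email must be a non-empty string"}
  end
end

validator.(%{email: "vip@example.com"})
# => :ok
validator.(%{email: ""})
# => {:error, "email must be a non-empty string"}
```

Validation like this runs before execution, so a failing input never reaches the LLM.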
Examples
iex> PtcRunner.SubAgent.LLMTool.new(prompt: "Hello {{name}}", signature: "(name :string) -> :string")
%PtcRunner.SubAgent.LLMTool{prompt: "Hello {{name}}", signature: "(name :string) -> :string", llm: :caller, description: nil, tools: nil, response_template: nil, json_signature: nil}
iex> PtcRunner.SubAgent.LLMTool.new(prompt: "Hi", signature: ":string")
%PtcRunner.SubAgent.LLMTool{prompt: "Hi", signature: ":string", llm: :caller, description: nil, tools: nil, response_template: nil, json_signature: nil}