A generic action implementation that returns structured output from an LLM matching the action's return type. Uses ReqLLM for structured output generation, with model specifications given as strings.
## Example

```elixir
action :analyze_sentiment, :atom do
  constraints one_of: [:positive, :negative]

  description """
  Analyzes the sentiment of a given piece of text to determine if it is overall positive or negative.
  """

  argument :text, :string do
    allow_nil? false
    description "The text for analysis."
  end

  run prompt("openai:gpt-4o",
        prompt: {"You are a sentiment analyzer", "Analyze: <%= @input.arguments.text %>"}
      )
end
```

## Model Specification
The first argument to `prompt/2` is a model specification string in the format `"provider:model-name"`. Valid model strings can be browsed at https://llmdb.xyz.

Examples: `"openai:gpt-4o"`, `"anthropic:claude-haiku-4-5"`, `"openai:gpt-4o-mini"`.
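As a small illustration (not part of the library's API), a model spec is simply the provider and model name joined by a colon, so it can be pulled apart like this:

```elixir
# Illustration only: split a model spec string into its provider and
# model-name parts. parts: 2 splits on the first colon only, so any
# further colons inside the model name are preserved.
[provider, model] = String.split("openai:gpt-4o", ":", parts: 2)
# provider == "openai", model == "gpt-4o"
```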
## Options

- `:prompt` - A custom prompt. Supports multiple formats; see the Prompt Formats section below.
- `:req_llm` - Override the ReqLLM module (useful for testing with mocks).
- `:req_llm_opts` - Additional ReqLLM request options passed through to generation and tool loops.
- `:transform_flow` - ReqLLM-native flow customization hook (`fn flow_state, context -> flow_state end`).
- `:tools` - `false`, `true`, or a list of tool names, to allow tool-calling in the action.
- `:extra_tools` - Additional arbitrary `ReqLLM.Tool`s to expose during tool-calling.
- `:max_iterations` - Maximum number of tool-loop iterations. Defaults to `:infinity` for prompt actions.
- `:verbose?` - When `true`, logs tool-loop lifecycle events with `Logger.debug/1`.
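For illustration, a sketch combining several of these options in one `run` declaration (the tool names `:search` and `:fetch` are hypothetical and assumed to be defined elsewhere):

```elixir
run prompt("openai:gpt-4o-mini",
      tools: [:search, :fetch],       # hypothetical tool names defined elsewhere
      max_iterations: 5,              # bound the tool loop instead of :infinity
      verbose?: true,                 # log tool-loop lifecycle via Logger.debug/1
      req_llm_opts: [temperature: 0]  # assumption: forwarded to ReqLLM requests
    )
```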
## Behavior Notes

- Tool-loop failures are returned as action errors with loop reason details.
- Unconstrained `:map` return types use a permissive map schema (`type: object`).
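To illustrate the second note, a minimal sketch of an action with an unconstrained `:map` return type (the action and argument names here are hypothetical):

```elixir
action :extract_metadata, :map do
  description "Extracts arbitrary metadata from the text as a JSON object."

  argument :text, :string, allow_nil?: false

  # No constraints on the :map return, so the model is given a
  # permissive object schema (type: object).
  run prompt("openai:gpt-4o-mini")
end
```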
## Prompt Formats

By default, the prompt is generated from the action and input descriptions. You can provide your own prompt via the `:prompt` option, which supports multiple formats.

### Supported Formats

- String (EEx template): `"Analyze this: <%= @input.arguments.text %>"`
- `{system, user}` tuple: `{"You are an expert", "Analyze: <%= @input.arguments.text %>"}`
- `ReqLLM.Context`: pass a context directly (the canonical format)
- List of messages: maps with `role`/`content` keys, `ReqLLM.Message` structs, or a mix of both
- Function returning any of the above: `fn input, context -> ... end`
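To make the EEx string format concrete, here is a minimal sketch of how such a template renders, using plain `EEx.eval_string/3` with a bare map standing in for the real `Ash.ActionInput`:

```elixir
# Plain EEx evaluation; the library renders templates against the real
# action input, but the substitution mechanics are the same.
template = "Analyze this: <%= @input.arguments.text %>"

EEx.eval_string(template, assigns: [input: %{arguments: %{text: "I love it"}}])
# => "Analyze this: I love it"
```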
### Using ReqLLM.Context (Recommended)

```elixir
import ReqLLM.Context

run prompt("openai:gpt-4o",
      prompt: fn input, _ctx ->
        ReqLLM.Context.new([
          system("You are an OCR expert"),
          user([
            ReqLLM.Message.ContentPart.text("Extract text from this image"),
            ReqLLM.Message.ContentPart.image_url(input.arguments.image_url)
          ])
        ])
      end
    )
```

### Legacy Map Format
For convenience, loose maps with `role`/`content` keys are also supported:

```elixir
[
  %{role: "system", content: "You are an OCR expert"},
  %{role: "user", content: "Extract text: <%= @input.arguments.text %>"}
]
```

The default prompt template is:
```elixir
{"You are responsible for performing the `<%= @input.action.name %>` action.\n\n<%= if @input.action.description do %>\n# Description\n<%= @input.action.description %>\n<% end %>\n\n## Inputs\n<%= for argument <- @input.action.arguments do %>\n- <%= argument.name %><%= if argument.description do %>: <%= argument.description %>\n<% end %>\n<% end %>\n",
 "# Action Inputs\n\n<%= for argument <- @input.action.arguments,\n  {:ok, value} = Ash.ActionInput.fetch_argument(@input, argument.name),\n  {:ok, value} = Ash.Type.dump_to_embedded(argument.type, value, argument.constraints) do %>\n  - <%= argument.name %>: <%= Jason.encode!(value) %>\n<% end %>\n"}
```
## Summary

### Functions

- `run/3` - Callback implementation for `Ash.Resource.Actions.Implementation.run/3`.