# `AshAi.Actions.Prompt`
[🔗](https://github.com/ash-project/ash_ai/blob/v0.6.1/lib/ash_ai/actions/prompt.ex#L5)

A generic action implementation that returns structured output from an LLM, matching the action's return type.

Uses ReqLLM for structured output generation, with models specified as strings.

## Example

```elixir
action :analyze_sentiment, :atom do
  constraints one_of: [:positive, :negative]

  description """
  Analyzes the sentiment of a given piece of text to determine if it is overall positive or negative.
  """

  argument :text, :string do
    allow_nil? false
    description "The text for analysis."
  end

  run prompt("openai:gpt-4o",
    prompt: {"You are a sentiment analyzer", "Analyze: <%= @input.arguments.text %>"}
  )
end
```

## Model Specification

The first argument to `prompt/2` is a model specification string in the format `"provider:model-name"`.
Valid model strings can be browsed at https://llmdb.xyz.
Examples: `"openai:gpt-4o"`, `"anthropic:claude-haiku-4-5"`, `"openai:gpt-4o-mini"`.

## Options

- `:prompt` - A custom prompt. Supports multiple formats; see the Prompt Formats section below.
- `:req_llm` - Override the ReqLLM module (useful for testing with mocks).
- `:req_llm_opts` - Additional ReqLLM request options passed through to generation and tool loops.
- `:transform_flow` - ReqLLM-native flow customization hook (`fn flow_state, context -> flow_state end`).
- `:tools` - `false`, `true`, or a list of tool names; enables tool-calling in the action.
- `:extra_tools` - Additional arbitrary `ReqLLM.Tool`s to expose during tool-calling.
- `:max_iterations` - Maximum tool-loop iterations. Defaults to `:infinity` for prompt actions.
- `:verbose?` - When true, logs tool-loop lifecycle events with `Logger.debug/1`.
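
Putting several of these options together, here is a hedged sketch of a tool-calling action. The action name, tool name, and argument are illustrative, not part of the library:

```elixir
action :answer_question, :string do
  argument :question, :string, allow_nil?: false

  run prompt("anthropic:claude-haiku-4-5",
    # Expose only the named tool during the tool-calling loop
    tools: [:search_documents],
    # Cap the tool loop instead of relying on the :infinity default
    max_iterations: 5,
    # Log tool-loop lifecycle events via Logger.debug/1
    verbose?: true
  )
end
```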

## Behavior Notes

- Tool-loop failures are returned as action errors with loop reason details.
- Unconstrained `:map` return types use a permissive map schema (`type: object`).
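
For example, an action with an unconstrained `:map` return (a sketch; the action and argument names are illustrative) validates the LLM output against a permissive object schema rather than a fixed set of keys:

```elixir
action :extract_metadata, :map do
  description "Extracts arbitrary metadata from the given text as a map."

  argument :text, :string, allow_nil?: false

  run prompt("openai:gpt-4o-mini")
end
```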

## Prompt Formats

By default, the prompt is generated from the action and input descriptions. You can provide your own prompt
via the `:prompt` option, which supports multiple formats:

### Supported Formats

1. **String (EEx template)**: `"Analyze this: <%= @input.arguments.text %>"`
2. **{System, User} tuple**: `{"You are an expert", "Analyze: <%= @input.arguments.text %>"}`
3. **ReqLLM.Context**: Pass a context directly (canonical format)
4. **List of messages**: Maps with role/content, ReqLLM.Message structs, or mixed
5. **Function returning any of the above**: `fn input, context -> ... end`
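
As a sketch of format 5, a function receives the action input and context and may return any of the other formats, here a `{system, user}` tuple built at runtime:

```elixir
run prompt("openai:gpt-4o",
  prompt: fn input, _context ->
    # Returning a {system, user} tuple; plain interpolation is used
    # here since the prompt is built in code rather than via EEx
    {"You are an expert summarizer",
     "Summarize: #{input.arguments.text}"}
  end
)
```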

### Using ReqLLM.Context (Recommended)

```elixir
import ReqLLM.Context

run prompt("openai:gpt-4o",
  prompt: fn input, _ctx ->
    ReqLLM.Context.new([
      system("You are an OCR expert"),
      user([
        ReqLLM.Message.ContentPart.text("Extract text from this image"),
        ReqLLM.Message.ContentPart.image_url(input.arguments.image_url)
      ])
    ])
  end
)
```

### Legacy Map Format

For convenience, loose maps with role/content keys are also supported:

```elixir
[
  %{role: "system", content: "You are an OCR expert"},
  %{role: "user", content: "Extract text: <%= @input.arguments.text %>"}
]
```
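
Such a list can be passed directly via the `:prompt` option, for example:

```elixir
run prompt("openai:gpt-4o",
  prompt: [
    %{role: "system", content: "You are an OCR expert"},
    %{role: "user", content: "Extract text: <%= @input.arguments.text %>"}
  ]
)
```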

The default prompt template is:

```elixir
{"You are responsible for performing the `<%= @input.action.name %>` action.\n\n<%= if @input.action.description do %>\n# Description\n<%= @input.action.description %>\n<% end %>\n\n## Inputs\n<%= for argument <- @input.action.arguments do %>\n- <%= argument.name %><%= if argument.description do %>: <%= argument.description %>\n<% end %>\n<% end %>\n",
 "# Action Inputs\n\n<%= for argument <- @input.action.arguments,\n    {:ok, value} = Ash.ActionInput.fetch_argument(@input, argument.name),\n    {:ok, value} = Ash.Type.dump_to_embedded(argument.type, value, argument.constraints) do %>\n  - <%= argument.name %>: <%= Jason.encode!(value) %>\n<% end %>\n"}
```

# `run`

---

*Consult [api-reference.md](api-reference.md) for complete listing*
