AshAi.Actions.Prompt (ash_ai v0.2.2)

A generic action implementation that returns structured outputs from an LLM matching the action's return type.

Typically used via prompt/2, for example:

action :analyze_sentiment, :atom do
  constraints one_of: [:positive, :negative]

  description """
  Analyzes the sentiment of a given piece of text to determine if it is overall positive or negative.

  Does not consider swear words as inherently negative.
  """

  argument :text, :string do
    allow_nil? false
    description "The text for analysis."
  end

  run prompt(
    LangChain.ChatModels.ChatOpenAI.new!(%{model: "gpt-4o"}),
    # setting `tools: true` allows it to use all exposed tools in your app
    tools: true
    # alternatively you can restrict it to only a set of tools
    # tools: [:list, :of, :tool, :names]
    # provide an optional prompt, which is an EEx template
    # prompt: "Analyze the sentiment of the following text: <%= @input.arguments.text %>"
  )
end
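
Once defined, the action is invoked like any other Ash generic action. A minimal sketch, assuming a hypothetical MyApp.Blog.Post resource that defines the action above:

sentiment =
  MyApp.Blog.Post
  |> Ash.ActionInput.for_action(:analyze_sentiment, %{text: "I love this library!"})
  |> Ash.run_action!()

# sentiment is one of the atoms allowed by the constraint, e.g. :positive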

The first argument to prompt/2 is the LangChain model. It can also be a 2-arity function, which will be invoked with the input and the context; this is useful for dynamically selecting the model.

Dynamic Configuration (using 2-arity function)

For runtime configuration (like using environment variables), pass a function as the first argument to prompt/2:

run prompt(
  fn _input, _context ->
    LangChain.ChatModels.ChatOpenAI.new!(%{
      model: "gpt-4o",
      # this can also be configured in application config, see langchain docs for more.
      api_key: System.get_env("OPENAI_API_KEY"),
      endpoint: System.get_env("OPENAI_ENDPOINT")
    })
  end,
  tools: false
)

This function will be executed just before the prompt is sent to the LLM.
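
Because the function receives the action input, it can also be used to select the model dynamically. A minimal sketch, assuming the :text argument from the example above (the length threshold and model names are illustrative):

run prompt(
  fn input, _context ->
    # pick a larger model for long inputs; the threshold and models are assumptions
    if String.length(input.arguments.text) > 4_000 do
      LangChain.ChatModels.ChatOpenAI.new!(%{model: "gpt-4o"})
    else
      LangChain.ChatModels.ChatOpenAI.new!(%{model: "gpt-4o-mini"})
    end
  end,
  tools: false
)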

Options

  • :tools: A list of tool names to expose to the agent call, or true to expose all tools in your app (see the sketch after this list).
  • :verbose?: Set to true for more output to be logged.
  • :prompt: A custom prompt as an EEx template. See the prompt section below.
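
A sketch combining these options; the tool names are hypothetical:

run prompt(
  LangChain.ChatModels.ChatOpenAI.new!(%{model: "gpt-4o"}),
  # expose only these two tools to the agent call
  tools: [:list_tickets, :get_customer],
  # log more detail about the exchange with the LLM
  verbose?: true
)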

Prompt

The prompt by default is generated using the action and input descriptions. You can provide your own prompt via the prompt option; it can reference @input and @context.

The prompt can be a string or a tuple of two strings. The first string is the system prompt and the second string is the user message. If no user message is provided, the user message will be "perform the action". Both are treated as EEx templates.
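
For example, a custom system/user prompt pair might look like the following sketch (the wording is illustrative):

run prompt(
  LangChain.ChatModels.ChatOpenAI.new!(%{model: "gpt-4o"}),
  # the first element is the system prompt, the second is the user message;
  # both are EEx templates with access to @input and @context
  prompt: {
    "You are a sentiment analyst. Classify the given text as positive or negative.",
    "Analyze the sentiment of the following text: <%= @input.arguments.text %>"
  }
)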

We have found that this "3rd party" style of description writing, paired with the default prompt format, is a good starting point for LLMs that are meant to accomplish a task. With this in mind, when refining your prompt, first try using the action description to describe the desired outcome or operating basis of the action, as well as how the LLM is meant to use the inputs. State these passively, as facts. For example, above we wrote "Does not consider swear words as inherently negative" instead of instructing the LLM with "Do not consider swear words as inherently negative".

You are of course free to use any prompting pattern you prefer, but the pattern above leaves you with a great description of your actual logic, acting both as documentation and as instructions to the LLM that executes the action.

The default prompt template is:

{"You are responsible for performing the `<%= @input.action.name %>` action.\n\n<%= if @input.action.description do %>\n# Description\n<%= @input.action.description %>\n<% end %>\n\n## Inputs\n<%= for argument <- @input.action.arguments do %>\n- <%= argument.name %><%= if argument.description do %>: <%= argument.description %>\n<% end %>\n<% end %>\n",
 "# Action Inputs\n\n<%= for argument <- @input.action.arguments,\n    {:ok, value} = Ash.ActionInput.fetch_argument(@input, argument.name),\n    {:ok, value} = Ash.Type.dump_to_embedded(argument.type, value, argument.constraints) do %>\n  - <%= argument.name %>: <%= Jason.encode!(value) %>\n<% end %>\n"}

Summary

Functions

run(input, opts, context)

Callback implementation for Ash.Resource.Actions.Implementation.run/3.