Execute prompt modules against LLM adapters.
A prompt module implements the `LangchainPrompt.Prompt` behaviour and
encapsulates everything for a specific AI task: which model to use
(`set_profile/1`), what to say (`generate_system_prompt/1` and
`generate_user_prompt/1`), and how to interpret the result
(`post_process/2`).
## Minimal example
```elixir
defmodule MyApp.Prompts.Summarise do
  @behaviour LangchainPrompt.Prompt

  alias LangchainPrompt.Profile
  alias LangchainPrompt.Adapters.Langchain

  @impl true
  def set_profile(_assigns) do
    %Profile{
      adapter: Langchain,
      opts: %{
        chat_module: LangChain.ChatModels.ChatOpenAI,
        model: "gpt-4o-mini"
      }
    }
  end

  @impl true
  def generate_system_prompt(_assigns), do: "You are a concise summariser."

  @impl true
  def generate_user_prompt(%{text: text}), do: "Summarise: #{text}"

  @impl true
  def post_process(_assigns, %LangchainPrompt.Message{content: content}),
    do: {:ok, content}
end

{:ok, summary} = LangchainPrompt.execute(MyApp.Prompts.Summarise, %{text: "..."})
```

## Message history
Pass prior turns as the third argument to enable conversational prompts:
```elixir
history = [
  %LangchainPrompt.Message{role: :user, content: "Hello"},
  %LangchainPrompt.Message{role: :assistant, content: "Hi there!"}
]

LangchainPrompt.execute(MyPrompt, assigns, history)
```

## Attachments
Pass a list of `LangchainPrompt.Attachment` structs to send files alongside
the user prompt:
```elixir
attachments = [LangchainPrompt.Attachment.from_file!("/tmp/menu.jpg")]

LangchainPrompt.execute(MyPrompt, assigns, [], attachments)
```
## Functions
```elixir
@spec execute(module(), map() | struct(), [LangchainPrompt.Message.t()], [LangchainPrompt.Attachment.t()]) ::
        {:ok, any()} | {:error, any()}
```

Executes a prompt module and returns `{:ok, result}` or `{:error, reason}`.
Error reasons are tagged tuples:

* `{:adapter_failure, reason}` - the adapter returned an error
* `{:post_processing_failure, reason}` - `post_process/2` returned an error
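Callers can branch on these tags to degrade gracefully. A minimal sketch, assuming `Logger` is available and that a placeholder string is an acceptable fallback for the calling code:

```elixir
require Logger

case LangchainPrompt.execute(MyApp.Prompts.Summarise, %{text: "long article..."}) do
  {:ok, summary} ->
    summary

  {:error, {:adapter_failure, reason}} ->
    # The underlying chat-model call failed (network, auth, rate limit, ...).
    Logger.error("adapter failure: #{inspect(reason)}")
    "(summary unavailable)"

  {:error, {:post_processing_failure, reason}} ->
    # post_process/2 rejected the model's reply.
    Logger.error("post-processing failure: #{inspect(reason)}")
    "(summary unavailable)"
end
```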