Jido.AI.Prompt
Introduction
The Jido.AI.Prompt module provides a structured approach to managing conversations with Large Language Models (LLMs). Instead of working with simple strings, it enables developers to create, version, manipulate, and render sophisticated prompts with dynamic content substitution.
Core Concepts
Understanding the Prompt Architecture
The Jido.AI.Prompt module is built around a central struct with these key components:
typedstruct do
  field(:id, String.t(), default: Jido.Util.generate_id())
  field(:version, non_neg_integer(), default: 1)
  field(:history, list(map()), default: [])
  field(:messages, list(MessageItem.t()), default: [])
  field(:params, map(), default: %{})
  field(:metadata, map(), default: %{})
end
- Messages: The core content of the prompt, each with a role (system, user, assistant)
- Parameters: Values that can be interpolated into templated messages
- Versioning: Built-in tracking of prompt changes with history and rollback capability
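A quick way to see these pieces together is to inspect a freshly created prompt (the field values shown here are illustrative):
prompt = Jido.AI.Prompt.new(:user, "Hello")
prompt.version   # => 1
prompt.history   # => []
prompt.messages  # => a single MessageItem with role :user and content "Hello"
prompt.params    # => %{}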
Getting Started
Creating Basic Prompts
alias Jido.AI.Prompt
# Simple prompt with a single message
prompt = Prompt.new(:user, "How do I use Elixir's pattern matching?")
# Multiple messages
complex_prompt = Prompt.new(%{
  messages: [
    %{role: :system, content: "You are a programming assistant"},
    %{role: :user, content: "Explain pattern matching in Elixir"}
  ]
})
Rendering Prompts for LLM Submission
To convert your prompt into a format suitable for LLM API calls:
# Get a list of message maps
messages = Prompt.render(prompt)
# => [%{role: :user, content: "How do I use Elixir's pattern matching?"}]
# For debugging or text-based APIs
text_format = Prompt.to_text(prompt)
# => "[user] How do I use Elixir's pattern matching?"
Working with Templates
The module's true power emerges when using templates for dynamic content generation.
Template-Based Messages
# Create a prompt with EEx templates
template_prompt = Prompt.new(%{
  messages: [
    %{role: :system, content: "You are a <%= @assistant_type %>", engine: :eex},
    %{role: :user, content: "Help me with <%= @topic %>", engine: :eex}
  ],
  params: %{
    assistant_type: "programming assistant",
    topic: "recursion"
  }
})
# Render with default parameters
messages = Prompt.render(template_prompt)
# Override parameters during rendering
messages = Prompt.render(template_prompt, %{topic: "list comprehensions"})
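With the default params above, only :topic is replaced by the override, so the rendered output should look like:
# => [
#   %{role: :system, content: "You are a programming assistant"},
#   %{role: :user, content: "Help me with list comprehensions"}
# ]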
Configuring LLM Options
The Prompt struct can store LLM-specific options that are used when making requests to AI providers.
Setting Options
# Create a prompt with options
prompt = Prompt.new(:user, "Generate creative text")
|> Prompt.with_temperature(0.9)
|> Prompt.with_max_tokens(2000)
|> Prompt.with_top_p(0.95)
|> Prompt.with_stop(["END", "STOP"])
# Setting multiple options at once
prompt = Prompt.new(:user, "Generate a story")
|> Prompt.with_options([
temperature: 0.8,
max_tokens: 2500,
timeout: 60000
])
# When rendering with options
complete_params = Prompt.render_with_options(prompt)
# => %{
#   messages: [%{role: :user, content: "Generate a story"}],
#   temperature: 0.8,
#   max_tokens: 2500,
#   timeout: 60000
# }
Options Precedence in Actions
When using prompts with Jido.AI actions, the options in the prompt serve as defaults but can be overridden:
# Create a prompt with options
prompt = Prompt.new(:user, "Generate text")
|> Prompt.with_temperature(0.7)
|> Prompt.with_max_tokens(1000)
# Use with an action - prompt options are used as defaults
{:ok, result} = Jido.AI.Actions.Langchain.run(%{
  model: model,
  prompt: prompt
  # temperature will be 0.7 from the prompt
  # max_tokens will be 1000 from the prompt
}, context)
# Override prompt options with explicit parameters
{:ok, result} = Jido.AI.Actions.Langchain.run(%{
  model: model,
  prompt: prompt,
  temperature: 0.9 # Overrides the 0.7 from the prompt
  # max_tokens will still be 1000 from the prompt
}, context)
This behavior works consistently across all Jido.AI actions (Langchain, Instructor, OpenaiEx).
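Conceptually, the precedence works like a map merge in which explicit parameters win over prompt-level defaults. A minimal sketch of the rule (not the actions' actual internals):
prompt_opts = Prompt.render_with_options(prompt)
# => %{messages: [...], temperature: 0.7, max_tokens: 1000}
merged = Map.merge(prompt_opts, %{temperature: 0.9})
merged.temperature  # => 0.9 (explicit parameter wins)
merged.max_tokens   # => 1000 (prompt default retained)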
Structured Output Schema
The prompt can also include a schema for validating structured outputs:
schema = NimbleOptions.new!([
  name: [type: :string, required: true],
  age: [type: :integer, required: true],
  interests: [type: {:list, :string}, required: false]
])
prompt =
  Prompt.new(:user, "Generate a person profile")
  |> Prompt.with_output_schema(schema)
  |> Prompt.with_temperature(0.3) # More deterministic for structured data
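The same schema can then validate the structured data the LLM returns. NimbleOptions validates keyword lists, so a decoded map would need converting first; a sketch, assuming decoding happens elsewhere:
output = [name: "Ada Lovelace", age: 36, interests: ["mathematics"]]
case NimbleOptions.validate(output, schema) do
  {:ok, validated} ->
    validated
  {:error, %NimbleOptions.ValidationError{} = error} ->
    Logger.error("Schema validation failed: #{Exception.message(error)}")
end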
Template Engines
The module supports different template engines:
# EEx (Embedded Elixir) - default
eex_message = %{
  role: :user,
  content: "My name is <%= @name %>, I need help with <%= @topic %>",
  engine: :eex
}
# Liquid templates
liquid_message = %{
  role: :user,
  content: "My name is {{ name }}, I need help with {{ topic }}",
  engine: :liquid
}
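Both engines read their values from the prompt's params map; only the template syntax differs. For example, rendering the Liquid message (assuming Liquid variables resolve from the same params):
prompt = Prompt.new(%{
  messages: [liquid_message],
  params: %{name: "Ada", topic: "GenServers"}
})
Prompt.render(prompt)
# => [%{role: :user, content: "My name is Ada, I need help with GenServers"}]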
Building Conversations
Adding Messages
# Start with a system message
prompt = Prompt.new(:system, "You are a helpful assistant")
# Add a user question
prompt = Prompt.add_message(prompt, :user, "How does pattern matching work?")
# Add an assistant response
prompt = Prompt.add_message(prompt, :assistant, "Pattern matching in Elixir allows...")
# Add a follow-up question
prompt = Prompt.add_message(prompt, :user, "Can you show an example?")
Message Roles and Validation
The module enforces rules about message roles:
- Only one system message is allowed
- If present, the system message must be the first message
# This works - system message first
valid_prompt = Prompt.new(%{
  messages: [
    %{role: :system, content: "You are an assistant"},
    %{role: :user, content: "Hello"}
  ]
})
# This raises an error - system message not first
# invalid_prompt = Prompt.new(%{
#   messages: [
#     %{role: :user, content: "Hello"},
#     %{role: :system, content: "You are an assistant"}
#   ]
# })
Versioning and History
Creating New Versions
# Start with a basic prompt
prompt = Prompt.new(:user, "Initial question")
# Create version 2 with an additional message
v2 = Prompt.new_version(prompt, fn p ->
  Prompt.add_message(p, :assistant, "Initial response")
end)
# Create version 3
v3 = Prompt.new_version(v2, fn p ->
  Prompt.add_message(p, :user, "Follow-up question")
end)
Managing Versions
# List all versions
versions = Prompt.list_versions(v3) # [3, 2, 1]
# Retrieve a specific version
{:ok, original} = Prompt.get_version(v3, 1)
# Compare versions
{:ok, diff} = Prompt.compare_versions(v3, 3, 1)
# => %{added_messages: [...], removed_messages: [...]}
Advanced Usage Patterns
Parameter Substitution with Logic
template = """
<%= if @advanced_mode do %>
You are an expert-level <%= @domain %> consultant. Use technical terminology and provide in-depth explanations.
<% else %>
You are a helpful <%= @domain %> assistant. Explain concepts simply and avoid technical jargon.
<% end %>
"""
prompt = Prompt.new(%{
  messages: [
    %{role: :system, content: template, engine: :eex}
  ],
  params: %{
    advanced_mode: false,
    domain: "machine learning"
  }
})
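Because render-time params override the defaults, the same template can serve both personas:
# Default params: the simpler assistant persona
Prompt.render(prompt)
# Override at render time: the expert consultant persona
Prompt.render(prompt, %{advanced_mode: true})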
Creating Reusable Templates
For common prompt patterns, leverage the Template module:
alias Jido.AI.Prompt.Template
# Use a template to build the prompt. `code_review_template` is assumed to be
# a Template created elsewhere with @language, @code, and @focus_areas slots.
prompt = Prompt.new(%{
  messages: [
    %{role: :system, content: "You are a code reviewer"},
    %{
      role: :user,
      content:
        Template.format(code_review_template, %{
          language: "elixir",
          code: "defmodule Math do\n def add(a, b), do: a + b\nend",
          focus_areas: ["Performance", "Readability", "Error handling"]
        }),
      engine: :none
    }
  ]
})
Integration with AI Actions
Jido.AI includes action modules that use these prompts for LLM interactions:
# Create a prompt
prompt = Prompt.new(:user, "What is the Elixir programming language?")
# Use with ChatResponse action
{:ok, result} = Jido.AI.Actions.Instructor.ChatResponse.run(%{
  model: %Jido.AI.Model{provider: :anthropic, model: "claude-3-haiku-20240307"},
  prompt: prompt,
  temperature: 0.7
}, %{})
# Response is in result.response
IO.puts(result.response)
Error Handling and Validation
Validating Prompt Options
case Prompt.validate_prompt_opts(user_input) do
  {:ok, validated_prompt} ->
    # Use the validated prompt
    messages = Prompt.render(validated_prompt)
    # Call the LLM with messages

  {:error, reason} ->
    # Handle the validation error
    Logger.error("Invalid prompt: #{reason}")
end
Template Rendering Errors
Handle potential rendering errors when working with templates:
try do
  messages = Prompt.render(template_prompt)
  # Use the rendered messages
rescue
  e in Jido.AI.Error ->
    # Handle template rendering errors
    Logger.error("Failed to render prompt: #{Exception.message(e)}")
end
Best Practices
Separate Structure from Content
- Use templates to isolate prompt structure from variable content
- Create reusable prompt patterns for common use cases
Leverage Role-Based Messaging
- Use system messages for overall instruction
- Use user messages for specific queries
- Use assistant messages to provide context from previous responses
Manage Complexity with Versioning
- Use the built-in versioning for complex, evolving prompts
- Compare versions when debugging unexpected LLM behaviors
Validate and Sanitize Inputs
- Use sanitize_inputs to prevent template injection when working with user inputs
- Validate inputs before rendering templates
Progressive Enhancement
- Start with simple prompts and gradually add complexity
- Test prompt variations to optimize LLM responses
Example Workflow: Implementing a Chain-of-Thought
defmodule ChainOfThoughtPrompt do
  alias Jido.AI.Prompt
  alias Jido.AI.Model
  alias Jido.AI.Actions.Instructor.ChatResponse

  def solve_problem(problem_statement) do
    # Create a base prompt with the system instruction
    prompt = Prompt.new(:system, """
    You are a problem-solving assistant that uses step-by-step reasoning.
    Always break down problems into clear steps before providing the final answer.
    """)

    # Add the user's problem
    prompt = Prompt.add_message(prompt, :user, problem_statement)

    # Get the initial response with reasoning
    {:ok, init_result} = ChatResponse.run(%{
      model: %Model{provider: :anthropic, model: "claude-3-haiku-20240307"},
      prompt: prompt
    }, %{})

    # Add the response to the conversation
    prompt = Prompt.add_message(prompt, :assistant, init_result.response)

    # Add a follow-up to verify the solution
    prompt = Prompt.add_message(prompt, :user, """
    Thank you for the step-by-step solution.
    Can you check your work and ensure the final answer is correct?
    """)

    # Get the verification response
    {:ok, final_result} = ChatResponse.run(%{
      model: %Model{provider: :anthropic, model: "claude-3-haiku-20240307"},
      prompt: prompt
    }, %{})

    # Return the complete conversation and the final response
    %{
      conversation: Prompt.to_text(prompt),
      final_answer: final_result.response
    }
  end
end
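Using the module is then a single call (the problem statement is illustrative):
result = ChainOfThoughtPrompt.solve_problem("""
A train travels 120 miles in 2 hours, then 60 miles in the next 1.5 hours.
What is its average speed for the whole trip?
""")
IO.puts(result.final_answer)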
Conclusion
The Jido.AI.Prompt module provides a powerful foundation for building sophisticated LLM interactions in Elixir. By leveraging its structured approach to prompt management, templates, and version control, developers can create robust, maintainable, and dynamic LLM-powered applications.
By mastering these techniques, you'll be able to create prompt systems that adapt to changing requirements, maintain context across complex conversations, and deliver consistent, high-quality interactions with large language models.