Manages conversational state for AI-assisted query generation.
Stores message history to enable multi-turn conversations where users can refine queries, fix errors, and have back-and-forth dialogue with the AI.
Data Structure
%{
  messages: [
    %{role: :system, content: "...", timestamp: ~U[...]},
    %{role: :user, content: "Show active users", timestamp: ~U[...]},
    %{role: :assistant, content: "I'll generate...", sql: "SELECT...", timestamp: ~U[...]},
    %{role: :error, content: "column 'status' not found", sql: "...", timestamp: ~U[...]},
    %{role: :user, content: "Fix the error", timestamp: ~U[...]}
  ],
  schema_context: %{tables_analyzed: ["users", "orders"]},
  generation_count: 3,
  started_at: ~U[...],
  last_activity: ~U[...]
}

Usage
conversation = Conversation.new()
conversation = Conversation.add_user_message(conversation, "Show active users")
conversation = Conversation.add_assistant_response(conversation, "Here's your query:", "SELECT * FROM users")
conversation = Conversation.add_query_result(conversation, {:error, "column 'status' does not exist"})
# Check if we should auto-retry based on error
Conversation.should_auto_retry?(conversation)
# => true
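Putting the pieces above together, a full generation turn might look like the sketch below. The `generate/1` LLM call and `run_query/1` executor are hypothetical stand-ins, not part of this module; only the `Conversation` functions are from the documented API.

```elixir
conversation = Conversation.new()
conversation = Conversation.add_user_message(conversation, "Show active users")

# Build the LLM input from history, generate, then record the outcome.
messages = Conversation.build_context_messages(conversation, system_prompt)
{explanation, sql} = generate(messages)  # hypothetical LLM call
conversation = Conversation.add_assistant_response(conversation, explanation, sql)
conversation = Conversation.add_query_result(conversation, run_query(sql))

# If the query failed, queue a fix-up turn for the next generation.
conversation =
  if Conversation.should_auto_retry?(conversation) do
    Conversation.add_user_message(conversation, "Fix the error")
  else
    conversation
  end

# Keep the history bounded for long sessions.
conversation = Conversation.prune_messages(conversation, 10)
```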
Summary
Functions
Add an assistant response to the conversation.
Add query execution result to the conversation for context.
Add a user message to the conversation.
Build context prompt for the LLM from conversation history.
Initialize a new empty conversation.
Prune old messages to manage token usage.
Determine if the conversation should auto-retry based on the last message.
Update schema context with tables analyzed during generation.
Types
@type message() :: %{
        role: :system | :user | :assistant | :error,
        content: String.t(),
        sql: String.t() | nil,
        timestamp: DateTime.t()
      }
@type t() :: %{
        messages: [message()],
        schema_context: map(),
        generation_count: non_neg_integer(),
        started_at: DateTime.t(),
        last_activity: DateTime.t()
      }
Functions
Add an assistant response to the conversation.
Parameters
conversation - Current conversation state
content - Assistant's explanation or message
sql - Generated SQL query
Examples
iex> conversation = Conversation.new()
iex> conversation = Conversation.add_assistant_response(conversation, "Here's your query:", "SELECT * FROM users")
iex> message = List.last(conversation.messages)
iex> message.role
:assistant
iex> message.sql
"SELECT * FROM users"
Add query execution result to the conversation for context.
Tracks whether the last query succeeded or failed. Failed queries include error details that help the AI understand what went wrong and fix it.
Parameters
conversation - Current conversation state
result - Query execution result tuple: {:ok, result} or {:error, error}
Examples
iex> conversation = Conversation.new()
iex> conversation = Conversation.add_assistant_response(conversation, "Query:", "SELECT status FROM users")
iex> conversation = Conversation.add_query_result(conversation, {:error, "column 'status' does not exist"})
iex> message = List.last(conversation.messages)
iex> message.role
:error
Add a user message to the conversation.
Parameters
conversation - Current conversation state
content - User's message content
Examples
iex> conversation = Conversation.new()
iex> conversation = Conversation.add_user_message(conversation, "Show active users")
iex> List.last(conversation.messages).role
:user
Build context prompt for the LLM from conversation history.
Formats the conversation history into a message list that can be sent to the LLM for context-aware generation.
Parameters
conversation - Current conversation state
system_prompt - Base system prompt with database schema info
Returns
List of messages formatted for LLM consumption with roles and content.
Examples
iex> conversation = Conversation.new() |> Conversation.add_user_message("Show users")
iex> messages = Conversation.build_context_messages(conversation, "You are a SQL generator")
iex> Enum.map(messages, & &1.role)
[:system, :user]
@spec new() :: t()
Initialize a new empty conversation.
Examples
iex> conversation = Conversation.new()
iex> conversation.messages
[]
iex> conversation.generation_count
0
@spec prune_messages(t(), non_neg_integer()) :: t()
Prune old messages to manage token usage.
Keeps the most recent N messages while always preserving the first system message. This prevents token count from growing unbounded in long conversations.
Parameters
conversation - Current conversation state
keep_last - Number of recent messages to keep (default: 10)
Examples
iex> conversation = Conversation.new()
iex> conversation = Conversation.add_user_message(conversation, "Message 1")
iex> conversation = Conversation.add_user_message(conversation, "Message 2")
iex> pruned = Conversation.prune_messages(conversation, 1)
iex> length(pruned.messages)
1
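The system-message preservation described above can be sketched as a standalone rule: keep the last `keep_last` messages, and prepend the first `:system` message if pruning would otherwise drop it. This is an illustrative reimplementation of the documented behavior, not the module's actual code.

```elixir
defmodule PruneSketch do
  # Keep the most recent `keep_last` messages, always retaining the
  # first :system message (if one exists) for LLM context.
  def prune(messages, keep_last) do
    recent = Enum.take(messages, -keep_last)

    case Enum.find(messages, &(&1.role == :system)) do
      nil -> recent
      system -> if system in recent, do: recent, else: [system | recent]
    end
  end
end
```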
Determine if the conversation should auto-retry based on the last message.
Returns true if the last message is an error, indicating that the AI should automatically attempt to fix the issue.
Examples
iex> conversation = Conversation.new()
iex> Conversation.should_auto_retry?(conversation)
false
iex> conversation = Conversation.new()
iex> conversation = Conversation.add_assistant_response(conversation, "Query:", "SELECT 1")
iex> conversation = Conversation.add_query_result(conversation, {:error, "syntax error"})
iex> Conversation.should_auto_retry?(conversation)
true
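A caller would typically bound auto-retries to avoid looping on an unfixable error. The sketch below assumes a caller-supplied `regenerate` function (a hypothetical LLM round-trip that appends a new assistant response and query result); only `should_auto_retry?/1` is from this module.

```elixir
defmodule RetrySketch do
  @max_attempts 3

  def maybe_retry(conversation, regenerate, attempts \\ 0)

  def maybe_retry(conversation, regenerate, attempts) when attempts < @max_attempts do
    if Conversation.should_auto_retry?(conversation) do
      # regenerate.() is a hypothetical LLM call that retries the query
      # and records the new result in the conversation.
      conversation |> regenerate.() |> maybe_retry(regenerate, attempts + 1)
    else
      conversation
    end
  end

  def maybe_retry(conversation, _regenerate, _attempts), do: conversation
end
```

Capping attempts keeps a persistently failing query from consuming tokens indefinitely; after the cap, the error message stays in the history so the user can intervene.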
Update schema context with tables analyzed during generation.
Tracks which tables the AI has examined to provide context for future queries in the conversation.
Parameters
conversation - Current conversation state
table_names - List of table names that were analyzed
Examples
iex> conversation = Conversation.new()
iex> conversation = Conversation.update_schema_context(conversation, ["users", "orders"])
iex> conversation.schema_context.tables_analyzed
["users", "orders"]