Mentor (mentor v0.1.1)
The Mentor module facilitates interactions with Large Language Models (LLMs) by managing conversation state, configuring adapters, and validating responses against specified schemas.
Features
- Initiate and manage chat sessions with various LLM adapters.
- Configure session parameters, including retry limits and debugging options.
- Validate LLM responses against predefined schemas to ensure data integrity. Supported schemas include Ecto schemas, structs, raw maps, NimbleOptions schemas, and Peri schemas.
Note
For now, until v0.1.0, only Ecto schemas are supported.
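A minimal end-to-end sketch of the pipeline described above, assuming the OpenAI adapter and a hypothetical UserProfile Ecto schema (the schema module, its fields, and the message content are illustrative, not part of the library):

```elixir
# Hypothetical Ecto schema; module name and fields are illustrative.
defmodule UserProfile do
  use Ecto.Schema

  @primary_key false
  embedded_schema do
    field :name, :string
    field :age, :integer
  end
end

# Start the pipeline, append a user message, and complete the interaction.
{:ok, profile} =
  Mentor.start_chat_with!(Mentor.LLM.Adapters.OpenAI,
    schema: UserProfile,
    adapter_config: [model: "gpt-4", api_key: System.get_env("OPENAI_API_KEY")]
  )
  |> Mentor.append_message(%{role: "user", content: "Alice is 29 years old."})
  |> Mentor.complete()
```

On success, profile would be a validated %UserProfile{} struct; on validation failure the library retries up to max_retries before returning an error tuple.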
Summary
Functions
Adds a new message to the conversation history.
Completes the interaction by sending the accumulated messages to the LLM adapter and processing the response.
Same as complete/1, but raises an exception if it fails.
Configures the LLM adapter with the given options.
Sets the maximum number of retries for validation failures.
Overwrites the initial prompt for the LLM session.
Starts a new interaction pipeline based on a schema.
Types
@type t() :: %Mentor{
        __schema__: Mentor.Schema.t(),
        adapter: module(),
        config: keyword(),
        debug: boolean(),
        http_client: module(),
        initial_prompt: String.t(),
        json_schema: map() | nil,
        max_retries: integer(),
        messages: [message()]
      }
Represents the state of a Mentor session.
Fields
- :__schema__ - The schema module or map defining the expected data structure.
- :json_schema - The JSON schema map derived from the schema, used for validation.
- :adapter - The LLM adapter module responsible for handling interactions.
- :initial_prompt - The initial system prompt guiding the LLM's behavior.
- :messages - A list of messages exchanged in the session.
- :config - Configuration options for the adapter.
- :max_retries - The maximum number of retries allowed for validation failures.
- :debug - A boolean flag indicating whether debugging is enabled.
- :http_client - The HTTP client implementing the Mentor.HTTPClient.Adapter behaviour, used to dispatch HTTP requests to the LLM adapter.
Functions
Adds a new message to the conversation history.
Parameters
- mentor - The current Mentor struct.
- message - A map representing the message to be added, typically containing:
  - :role - The role of the message sender (e.g., "user", "assistant", "system", "developer").
  - :content - The content of the message (e.g., a raw string).
Returns
- An updated Mentor struct with the new message appended to the messages list.
Examples
iex> mentor = %Mentor{}
iex> message = %{role: "user", content: "Hello, assistant!"}
iex> Mentor.append_message(mentor, message)
%Mentor{messages: [%{role: "user", content: "Hello, assistant!"}]}
@spec complete(t()) :: {:ok, Mentor.Schema.t()} | {:error, term()}
Completes the interaction by sending the accumulated messages to the LLM adapter and processing the response.
Parameters
- mentor - The current Mentor struct.
Returns
- {:ok, result} on successful completion, where result is the validated and processed response.
- {:error, reason} on failure, with reason indicating the cause of the error.
Examples
iex> mentor = %Mentor{adapter: Mentor.LLM.Adapters.OpenAI, __schema__: MySchema, config: [model: "gpt-4"]}
iex> Mentor.complete(mentor)
{:ok, %MySchema{}}
iex> mentor = %Mentor{adapter: nil, __schema__: MySchema}
iex> Mentor.complete(mentor)
{:error, :adapter_not_configured}
Same as complete/1, but raises an exception if it fails.
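A usage sketch, assuming the bang-convention name complete!/1 and the same illustrative adapter, schema, and config shown in the complete/1 examples:

```elixir
# Returns the validated result directly, or raises on failure.
mentor = %Mentor{
  adapter: Mentor.LLM.Adapters.OpenAI,
  __schema__: MySchema,
  config: [model: "gpt-4"]
}

%MySchema{} = Mentor.complete!(mentor)
```

Prefer the bang variant inside pipelines or scripts where a failed completion should halt execution rather than be pattern-matched.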
Configures the LLM adapter with the given options.
Parameters
- mentor - The current Mentor struct.
- config - A keyword list of configuration options for the adapter.
Returns
- An updated Mentor struct with the merged adapter configuration.
Examples
iex> mentor = %Mentor{config: [model: "gpt-3.5"]}
iex> new_config = [temperature: 0.7]
iex> Mentor.configure_adapter(mentor, new_config)
%Mentor{config: [model: "gpt-3.5", temperature: 0.7]}
Sets the maximum number of retries for validation failures.
Parameters
- mentor - The current Mentor struct.
- max - An integer specifying the maximum number of retries.
Returns
- An updated Mentor struct with the new max_retries value.
Examples
iex> mentor = %Mentor{max_retries: 3}
iex> Mentor.define_max_retries(mentor, 5)
%Mentor{max_retries: 5}
Overwrites the initial prompt for the LLM session.
Parameters
- mentor - The current Mentor struct.
- initial_prompt - A string containing the new initial prompt.
Returns
- An updated Mentor struct with the initial prompt overwritten.
Examples
iex> mentor = %Mentor{}
iex> new_prompt = "You are a helpful assistant."
iex> Mentor.overwrite_initial_prompt(mentor, new_prompt)
%Mentor{initial_prompt: "You are a helpful assistant."}
@spec start_chat_with!(module(), config) :: t()
      when config: [option],
           option:
             {:max_retries, integer()}
             | {:schema, Mentor.Schema.t()}
             | {:adapter_config, keyword()}
             | {:http_client, module()}
Starts a new interaction pipeline based on a schema.
Parameters
- adapter - The LLM adapter module to handle interactions (e.g., Mentor.LLM.Adapters.OpenAI).
- opts - A keyword list of options:
  - :schema - The schema module or map defining the expected data structure, required.
  - :adapter_config - Configuration options specific to the adapter, required.
  - :max_retries (optional) - The maximum number of retries for validation failures (default: 3).
Examples
iex> config = [model: "gpt-4", api_key: System.get_env("OPENAI_API_KEY")]
iex> Mentor.start_chat_with!(Mentor.LLM.Adapters.OpenAI, schema: MySchema, adapter_config: config)
%Mentor{}
iex> Mentor.start_chat_with!(UnknownLLMAdapter, schema: MySchema)
** (RuntimeError) UnknownLLMAdapter should implement the Mentor.LLM.Adapter behaviour.
iex> Mentor.start_chat_with!(Mentor.LLM.Adapters.OpenAI, schema: nil)
** (RuntimeError) nil should be a valid schema