Mentor (mentor v0.2.2)
The `Mentor` module facilitates interactions with Large Language Models (LLMs) by managing conversation state, configuring adapters, and validating responses against specified schemas.
Features
- Initiate and manage chat sessions with various LLM adapters.
- Configure session parameters, including retry limits and debugging options.
- Validate LLM responses against predefined schemas to ensure data integrity. Supported schemas include `Ecto` schemas, structs, raw maps, `NimbleOptions`, and `Peri` schemas.
Note
For now, until v0.1.0, only `Ecto` schemas are supported.
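A minimal session can be sketched as follows. This is a sketch, not a complete program: it assumes the OpenAI adapter has an API key configured elsewhere, and `MySchema` is a hypothetical `Ecto` schema standing in for your own.

```elixir
# Sketch of a typical mentor pipeline. Assumes the OpenAI adapter is
# already configured (e.g. with an API key) and MySchema is an Ecto schema.
{:ok, result} =
  Mentor.start_chat_with!(Mentor.LLM.Adapters.OpenAI, schema: MySchema, max_retries: 2)
  |> Mentor.configure_adapter(model: "gpt-4")
  |> Mentor.append_message(%{role: "user", content: "Describe a red apple"})
  |> Mentor.complete()
```

On success, `result` is a `MySchema` struct validated against the schema's fields.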
Backoff calculation
If a structured output request fails, `mentor` supports retries, defaulting to a maximum of 3 attempts; this limit can be overwritten. By default, `mentor` also applies an exponential backoff before executing the next retry attempt. The backoff is calculated with the following formula:
min(max_backoff, (base_backoff * 2) ^ retry_count)
Example:
| Attempt | Base Backoff | Max Backoff | Sleep time |
|---|---|---|---|
| 1 | 10 | 30000 | 20 |
| 2 | 10 | 30000 | 400 |
| 3 | 10 | 30000 | 8000 |
| 4 | 10 | 30000 | 30000 |
| 5 | 10 | 30000 | 30000 |
The default values are 1s for `base_backoff` and 5s for `max_backoff`. Both can be overwritten using the `configure_backoff/2` function.
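The formula above can be reproduced in plain Elixir. `BackoffSketch` is a hypothetical module name used only for illustration, not part of mentor's API; the values match the illustrative table above (base backoff 10 ms, max backoff 30000 ms):

```elixir
defmodule BackoffSketch do
  # Sketch of the documented backoff formula:
  #   min(max_backoff, (base_backoff * 2) ^ retry_count)
  # Times are in milliseconds.
  def sleep_time(retry_count, base_backoff, max_backoff) do
    min(max_backoff, Integer.pow(base_backoff * 2, retry_count))
  end
end

BackoffSketch.sleep_time(1, 10, 30_000)
#=> 20
BackoffSketch.sleep_time(4, 10, 30_000)
#=> 30000 (capped at max_backoff)
```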
Summary
Functions
Adds a new message to the conversation history.
Completes the interaction by sending the accumulated messages to the LLM adapter and processing the response.
Same as `complete/1`, but raises an exception if it fails.
Configures the LLM adapter with the given options.
Configures the exponential backoff values to be used on retry attempts in case of failed requests.
Configures the underlying HTTP client used to make requests, with the given options.
Sets the maximum number of retries for validation failures.
Overwrites the initial prompt for the LLM session.
Starts a new interaction pipeline based on a schema.
Types
@type t() :: %Mentor{
        __schema__: Mentor.Schema.t() | nil,
        adapter: module(),
        config: keyword(),
        debug: boolean(),
        http_client: module(),
        http_config: keyword(),
        initial_prompt: String.t(),
        json_schema: map() | nil,
        max_retries: integer(),
        messages: [message()],
        timeout: [max_backoff: non_neg_integer(), base_backoff: non_neg_integer()]
      }
Represents the state of a Mentor session.
Fields
- `:__schema__` - The schema module or map defining the expected data structure.
- `:json_schema` - The JSON schema map derived from the schema, used for validation.
- `:adapter` - The LLM adapter module responsible for handling interactions.
- `:initial_prompt` - The initial system prompt guiding the LLM's behavior.
- `:messages` - A list of messages exchanged in the session.
- `:config` - Configuration options for the adapter.
- `:max_retries` - The maximum number of retries allowed for validation failures.
- `:debug` - A boolean flag indicating whether debugging is enabled.
- `:http_client` - The HTTP client implementing the `Mentor.HTTPClient.Adapter` behaviour, used to dispatch HTTP requests to the LLM adapter.
- `:timeout` - Configures how the backoff should be applied on retries. See the "Backoff calculation" section above.
  - `:max_backoff` - The maximum backoff value in ms, default: 5s.
  - `:base_backoff` - The base backoff value in ms, default: 1s.
Functions
Adds a new message to the conversation history.
Parameters
- `mentor` - The current `Mentor` struct.
- `message` - A map representing the message to be added, typically containing:
  - `:role` - The role of the message sender (e.g. "user", "assistant", "system", "developer").
  - `:content` - The content of the message (e.g. a raw string).
Returns
- An updated `Mentor` struct with the new message appended to the `messages` list.
Examples
iex> mentor = %Mentor{}
iex> message = %{role: "user", content: "Hello, assistant!"}
iex> Mentor.append_message(mentor, message)
%Mentor{messages: [%{role: "user", content: "Hello, assistant!"}]}
@spec complete(t()) :: {:ok, Mentor.Schema.t()} | {:error, term()}
Completes the interaction by sending the accumulated messages to the LLM adapter and processing the response.
Parameters
- `mentor` - The current `Mentor` struct.
Returns
- `{:ok, result}` on successful completion, where `result` is the validated and processed response.
- `{:error, reason}` on failure, with `reason` indicating the cause of the error.
Examples
iex> mentor = %Mentor{adapter: Mentor.LLM.Adapters.OpenAI, __schema__: MySchema, config: [model: "gpt-4"]}
iex> Mentor.complete(mentor)
{:ok, %MySchema{}}
iex> mentor = %Mentor{adapter: nil, __schema__: MySchema}
iex> Mentor.complete(mentor)
{:error, :adapter_not_configured}
Same as `complete/1`, but raises an exception if it fails.
Configures the LLM adapter with the given options.
Parameters
- `mentor` - The current `Mentor` struct.
- `config` - A keyword list of configuration options for the adapter.
Returns
- An updated `Mentor` struct with the merged adapter configuration.
Examples
iex> mentor = %Mentor{config: [model: "gpt-3.5"]}
iex> new_config = [temperature: 0.7]
iex> Mentor.configure_adapter(mentor, new_config)
%Mentor{config: [model: "gpt-3.5", temperature: 0.7]}
Configures the exponential backoff values to be used on retry attempts in case of failed requests.
Parameters
- `mentor` - The current `Mentor` struct.
- `config` - A keyword list of configuration options for the backoff:
  - `:max_backoff` - the maximum time the backoff can wait, in ms; defaults to 5s.
  - `:base_backoff` - the base time the backoff can wait, in ms; defaults to 1s.
Returns
- An updated `Mentor` struct with the merged backoff configuration.
Examples
iex> mentor = %Mentor{config: [model: "gpt-3.5"]}
iex> backoff = [max_backoff: to_timeout(second: 10)]
iex> Mentor.configure_backoff(mentor, backoff)
%Mentor{config: [model: "gpt-3.5"], timeout: [max_backoff: 10_000, base_backoff: 1_000]}
Configures the underlying HTTP client used to make requests, with the given options.
Parameters
- `mentor` - The current `Mentor` struct.
- `http_client` - The underlying HTTP client to use; it needs to implement the `Mentor.HTTPClient.Adapter` behaviour. Defaults to `Mentor.HTTPClient.Finch`.
- `config` - A keyword list of configuration options for the underlying HTTP client.
Returns
- An updated `Mentor` struct with the chosen HTTP client and its config.
Examples
iex> mentor = %Mentor{http_client: Mentor.HTTPClient.Finch, http_config: []}
iex> config = [request_timeout: 50_000]
iex> Mentor.configure_http_client(mentor, MyReqAdapter, config)
%Mentor{http_client: MyReqAdapter, http_config: ^config}
iex> mentor = %Mentor{http_client: Mentor.HTTPClient.Finch, http_config: []}
iex> config = [request_timeout: 50_000]
iex> Mentor.configure_http_client(mentor, config)
%Mentor{http_client: Mentor.HTTPClient.Finch, http_config: ^config}
iex> mentor = %Mentor{http_client: Mentor.HTTPClient.Finch, http_config: []}
iex> Mentor.configure_http_client(mentor, MyReqAdapter)
%Mentor{http_client: MyReqAdapter, http_config: []}
Sets the maximum number of retries for validation failures.
Parameters
- `mentor` - The current `Mentor` struct.
- `max` - An integer specifying the maximum number of retries.
Returns
- An updated `Mentor` struct with the new `max_retries` value.
Examples
iex> mentor = %Mentor{max_retries: 3}
iex> Mentor.define_max_retries(mentor, 5)
%Mentor{max_retries: 5}
Overwrites the initial prompt for the LLM session.
Parameters
- `mentor` - The current `Mentor` struct.
- `initial_prompt` - A string containing the new initial prompt.
Returns
- An updated `Mentor` struct with the initial prompt overwritten.
Examples
iex> mentor = %Mentor{}
iex> new_prompt = "You are a helpful assistant."
iex> Mentor.overwrite_initial_prompt(mentor, new_prompt)
%Mentor{initial_prompt: "You are a helpful assistant."}
@spec start_chat_with!(module(), config) :: t() when config: [option], option: {:max_retries, integer()} | {:schema, Mentor.Schema.t()}
Starts a new interaction pipeline based on a schema.
Parameters
- `adapter` - The LLM adapter module to handle interactions (e.g. `Mentor.LLM.Adapters.OpenAI`).
- `opts` - A keyword list of options:
  - `:schema` - The schema module or map defining the expected data structure. Required.
  - `:max_retries` (optional) - The maximum number of retries for validation failures (default: 3).
Examples
iex> Mentor.start_chat_with!(Mentor.LLM.Adapters.OpenAI, schema: MySchema)
%Mentor{}
iex> Mentor.start_chat_with!(UnknownLLMAdapter, schema: MySchema)
** (RuntimeError) UnknownLLMAdapter should implement the Mentor.LLM.Adapter behaviour.
iex> Mentor.start_chat_with!(Mentor.LLM.Adapters.OpenAI, schema: nil)
** (RuntimeError) nil should be a valid schema