Mentor (mentor v0.2.2)

View Source

The Mentor module facilitates interactions with Large Language Models (LLMs) by managing conversation state, configuring adapters, and validating responses against specified schemas.

Features

  • Initiate and manage chat sessions with various LLM adapters.
  • Configure session parameters, including retry limits and debugging options.
  • Validate LLM responses against predefined schemas to ensure data integrity. Supported schemas include Ecto schemas, structs, raw maps, NimbleOptions, and Peri schemas.

Note

For now (until v0.1.0), only Ecto schemas are supported.
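A typical session chains the functions documented below. This is a hypothetical sketch: `MySchema` is an assumed Ecto schema, and the adapter config keys (`model`, `api_key`) are illustrative values, not guaranteed option names.

```elixir
# Hypothetical end-to-end usage; MySchema and the adapter options are assumptions.
mentor =
  Mentor.start_chat_with!(Mentor.LLM.Adapters.OpenAI, schema: MySchema, max_retries: 2)
  |> Mentor.configure_adapter(model: "gpt-4o", api_key: System.get_env("OPENAI_API_KEY"))
  |> Mentor.append_message(%{role: "user", content: "Describe a user named Ada."})

# Sends the accumulated messages and validates the response against MySchema.
{:ok, result} = Mentor.complete(mentor)
```

`complete/1` returns `{:ok, result}` with the validated struct, or `{:error, reason}` on failure.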

Backoff calculation

If a structured output request fails, mentor supports retries, defaulting to a maximum of 3 attempts, which can be overwritten.

By default, mentor also applies an exponential backoff before executing the next retry attempt. The backoff is calculated with this formula:

min(max_backoff, (base_backoff * 2) ^ retry_count)

Example:

| Attempt | Base Backoff | Max Backoff | Sleep time |
|---------|--------------|-------------|------------|
| 1       | 10           | 30000       | 20         |
| 2       | 10           | 30000       | 400        |
| 3       | 10           | 30000       | 8000       |
| 4       | 10           | 30000       | 30000      |
| 5       | 10           | 30000       | 30000      |

The table above uses a base_backoff of 10 ms and a max_backoff of 30000 ms; the library defaults are a base_backoff of 1s (1_000 ms) and a max_backoff of 5s (5_000 ms).
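The formula and the table above can be sketched as a small Elixir function (values in milliseconds; the module name is illustrative, not part of Mentor's API):

```elixir
defmodule BackoffExample do
  # Sleep time in ms for a given retry attempt, per the formula above:
  # min(max_backoff, (base_backoff * 2) ^ retry_count)
  def sleep_time(retry_count, base_backoff \\ 10, max_backoff \\ 30_000) do
    min(max_backoff, Integer.pow(base_backoff * 2, retry_count))
  end
end

BackoffExample.sleep_time(1) # => 20
BackoffExample.sleep_time(3) # => 8000
BackoffExample.sleep_time(4) # => 30000 (capped at max_backoff)
```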

Those backoff values can be overwritten with the configure_backoff/2 function.
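For intuition, a retry loop combining the max-retries limit with this backoff might look like the following. This is a self-contained sketch, not Mentor's actual implementation; the module and function names are illustrative.

```elixir
defmodule RetrySketch do
  # Retries `fun` (which returns {:ok, _} or {:error, _}) up to `max_retries`
  # times, sleeping per the exponential backoff formula between attempts.
  def with_retries(fun, max_retries, base_backoff \\ 10, max_backoff \\ 30_000) do
    attempt(fun, 1, max_retries, base_backoff, max_backoff)
  end

  defp attempt(fun, n, max_retries, base, max) do
    case fun.() do
      {:ok, _} = ok -> ok
      {:error, _} = err when n > max_retries -> err
      {:error, _} ->
        Process.sleep(min(max, Integer.pow(base * 2, n)))
        attempt(fun, n + 1, max_retries, base, max)
    end
  end
end
```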

Summary

Types

t()

Represents the state of a Mentor session.

Functions

append_message(mentor, message)

Adds a new message to the conversation history.

complete(mentor)

Completes the interaction by sending the accumulated messages to the LLM adapter and processing the response.

complete!(mentor)

Same as complete/1, but raises an exception if it fails.

configure_adapter(mentor, config)

Configures the LLM adapter with the given options.

configure_backoff(mentor, config)

Configures the exponential backoff values to be used on retry attempts in case of failed requests.

configure_http_client(mentor, client \\ Finch, config \\ [])

Configures the underlying HTTP client used to make requests, with the given options.

define_max_retries(mentor, max)

Sets the maximum number of retries for validation failures.

overwrite_initial_prompt(mentor, initial_prompt \\ "")

Overwrites the initial prompt for the LLM session.

start_chat_with!(adapter, opts)

Starts a new interaction pipeline based on a schema.

Types

message()

@type message() :: %{role: String.t(), content: term()}

t()

@type t() :: %Mentor{
  __schema__: Mentor.Schema.t() | nil,
  adapter: module(),
  config: keyword(),
  debug: boolean(),
  http_client: module(),
  http_config: keyword(),
  initial_prompt: String.t(),
  json_schema: map() | nil,
  max_retries: integer(),
  messages: [message()],
  timeout: [max_backoff: non_neg_integer(), base_backoff: non_neg_integer()]
}

Represents the state of a Mentor session.

Fields

  • :__schema__ - The schema module or map defining the expected data structure.
  • :json_schema - The JSON schema map derived from the schema, used for validation.
  • :adapter - The LLM adapter module responsible for handling interactions.
  • :initial_prompt - The initial system prompt guiding the LLM's behavior.
  • :messages - A list of messages exchanged in the session.
  • :config - Configuration options for the adapter.
  • :max_retries - The maximum number of retries allowed for validation failures.
  • :debug - A boolean flag indicating whether debugging is enabled.
  • :http_client - The HTTP client used to dispatch requests to the LLM adapter; it must implement the Mentor.HTTPClient.Adapter behaviour.
  • :timeout - Configures how the backoff is applied between retries; see the Backoff calculation section for details.
    • :max_backoff - The maximum backoff value in ms, default: 5s.
    • :base_backoff - The base backoff value in ms, default: 1s.

Functions

append_message(mentor, message)

@spec append_message(t(), map()) :: t()

Adds a new message to the conversation history.

Parameters

  • mentor - The current Mentor struct.
  • message - A map representing the message to be added, typically containing:
    • :role - The role of the message sender (e.g., "user", "assistant", "system", "developer").
    • :content - The content of the message (e.g. a raw string).

Returns

  • An updated Mentor struct with the new message appended to the messages list.

Examples

iex> mentor = %Mentor{}
iex> message = %{role: "user", content: "Hello, assistant!"}
iex> Mentor.append_message(mentor, message)
%Mentor{messages: [%{role: "user", content: "Hello, assistant!"}]}

complete(mentor)

@spec complete(t()) :: {:ok, Mentor.Schema.t()} | {:error, term()}

Completes the interaction by sending the accumulated messages to the LLM adapter and processing the response.

Parameters

  • mentor - The current Mentor struct.

Returns

  • {:ok, result} on successful completion, where result is the validated and processed response.
  • {:error, reason} on failure, with reason indicating the cause of the error.

Examples

iex> mentor = %Mentor{adapter: Mentor.LLM.Adapters.OpenAI, __schema__: MySchema, config: [model: "gpt-4"]}
iex> Mentor.complete(mentor)
{:ok, %MySchema{}}

iex> mentor = %Mentor{adapter: nil, __schema__: MySchema}
iex> Mentor.complete(mentor)
{:error, :adapter_not_configured}

complete!(mentor)

Same as complete/1, but raises an exception if it fails.

configure_adapter(mentor, config)

@spec configure_adapter(
  t(),
  keyword()
) :: t()

Configures the LLM adapter with the given options.

Parameters

  • mentor - The current Mentor struct.
  • config - A keyword list of configuration options for the adapter.

Returns

  • An updated Mentor struct with the merged adapter configuration.

Examples

iex> mentor = %Mentor{config: [model: "gpt-3.5"]}
iex> new_config = [temperature: 0.7]
iex> Mentor.configure_adapter(mentor, new_config)
%Mentor{config: [model: "gpt-3.5", temperature: 0.7]}

configure_backoff(mentor, config)

@spec configure_backoff(
  t(),
  keyword()
) :: t()

Configures the exponential backoff values to be used on retry attempts in case of failed requests.

Parameters

  • mentor - The current Mentor struct.
  • config - A keyword list of configuration options for the backoff.
    • :max_backoff - the maximum time the backoff can wait, in ms; defaults to 5s (5_000 ms).
    • :base_backoff - the base backoff value, in ms; defaults to 1s (1_000 ms).

Returns

  • An updated Mentor struct with the merged backoff configuration.

Examples

iex> mentor = %Mentor{config: [model: "gpt-3.5"]}
iex> backoff = [max_backoff: to_timeout(second: 10)]
iex> Mentor.configure_backoff(mentor, backoff)
%Mentor{config: [model: "gpt-3.5"], timeout: [max_backoff: 10_000, base_backoff: 1_000]}

configure_http_client(mentor, client \\ Finch, config \\ [])

@spec configure_http_client(t(), http_client :: module(), config :: keyword()) :: t()

Configures the underlying HTTP client used to make requests, with the given options.

Parameters

  • mentor - The current Mentor struct.
  • http_client - The underlying HTTP client to use; it needs to implement the Mentor.HTTPClient.Adapter behaviour and defaults to Mentor.HTTPClient.Finch.
  • config - A keyword list of configuration options for the underlying HTTP client.

Returns

  • An updated Mentor struct with the chosen HTTP client and its config.

Examples

iex> mentor = %Mentor{http_client: Mentor.HTTPClient.Finch, http_config: []}
iex> config = [request_timeout: 50_000]
iex> Mentor.configure_http_client(mentor, MyReqAdapter, config)
%Mentor{http_client: MyReqAdapter, http_config: ^config}

iex> mentor = %Mentor{http_client: Mentor.HTTPClient.Finch, http_config: []}
iex> config = [request_timeout: 50_000]
iex> Mentor.configure_http_client(mentor, config)
%Mentor{http_client: Mentor.HTTPClient.Finch, http_config: ^config}

iex> mentor = %Mentor{http_client: Mentor.HTTPClient.Finch, http_config: []}
iex> Mentor.configure_http_client(mentor, MyReqAdapter)
%Mentor{http_client: MyReqAdapter, http_config: []}

define_max_retries(mentor, max)

@spec define_max_retries(t(), integer()) :: t()

Sets the maximum number of retries for validation failures.

Parameters

  • mentor - The current Mentor struct.
  • max - An integer specifying the maximum number of retries.

Returns

  • An updated Mentor struct with the new max_retries value.

Examples

iex> mentor = %Mentor{max_retries: 3}
iex> Mentor.define_max_retries(mentor, 5)
%Mentor{max_retries: 5}

is_llm_adapter(llm)

(macro)

overwrite_initial_prompt(mentor, initial_prompt \\ "")

@spec overwrite_initial_prompt(t(), String.t()) :: t()

Overwrites the initial prompt for the LLM session.

Parameters

  • mentor - The current Mentor struct.
  • initial_prompt - A string containing the new initial prompt.

Returns

  • An updated Mentor struct with the initial prompt overwritten.

Examples

iex> mentor = %Mentor{}
iex> new_prompt = "You are a helpful assistant."
iex> Mentor.overwrite_initial_prompt(mentor, new_prompt)
%Mentor{initial_prompt: "You are a helpful assistant."}

start_chat_with!(adapter, opts)

@spec start_chat_with!(module(), config) :: t()
when config: [option],
     option: {:max_retries, integer()} | {:schema, Mentor.Schema.t()}

Starts a new interaction pipeline based on a schema.

Parameters

  • adapter - The LLM adapter module to handle interactions (e.g., Mentor.LLM.Adapters.OpenAI).
  • opts - A keyword list of options:
    • :schema - The schema module or map defining the expected data structure, required.
    • :max_retries (optional) - The maximum number of retries for validation failures (default: 3).

Examples

iex> Mentor.start_chat_with!(Mentor.LLM.Adapters.OpenAI, schema: MySchema)
%Mentor{}

iex> Mentor.start_chat_with!(UnknownLLMAdapter, schema: MySchema)
** (RuntimeError) UnknownLLMAdapter should implement the Mentor.LLM.Adapter behaviour.

iex> Mentor.start_chat_with!(Mentor.LLM.Adapters.OpenAI, schema: nil)
** (RuntimeError) nil should be a valid schema