InstructorLite (instructor_lite v0.3.0)

Main building blocks of InstructorLite.

Key Concepts

Structured prompting can differ quite a bit between LLMs, and InstructorLite does only the bare minimum to abstract this complexity. This means usage can vary significantly depending on the adapter, so make sure to consult the adapter documentation to learn the details.

There are two key arguments used throughout this module. Understanding what they are will make your life a lot easier.

  • params - an adapter-specific map containing values that are eventually sent to the LLM. Put simply, this is the body that will be posted to the API endpoint. Your prompt, model name, and optional parameters like temperature all likely belong here.
  • opts - a list of options that shape the behavior of InstructorLite itself. Options may include things like which schema to cast the response to, the HTTP client to use, the API key, optional headers, the HTTP timeout, etc.
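To illustrate the split, a call might separate its arguments like this. This is a sketch, not a canonical invocation: the model name and params keys are adapter-specific, so consult your adapter's documentation for what it actually expects.

```elixir
# params: the request body posted to the LLM API endpoint
params = %{
  model: "gpt-4o-mini",
  messages: [%{role: "user", content: "Extract: Jane is 30 years old"}]
}

# opts: options consumed by InstructorLite itself
opts = [
  response_model: %{name: :string, age: :integer},
  adapter: InstructorLite.Adapters.OpenAI,
  max_retries: 2,
  adapter_context: [api_key: System.fetch_env!("OPENAI_API_KEY")]
]

InstructorLite.instruct(params, opts)
```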

Shared options

Most functions in this module accept a list of options.

Summary

Types

Options passed to instructor functions.

Functions

Perform instruction session from start to finish.

Prepare a prompt that can later be sent to the LLM.

Types

opts()

@type opts() :: [
  response_model: atom() | Ecto.Changeset.types(),
  adapter: atom(),
  max_retries: non_neg_integer(),
  validate_changeset: (Ecto.Changeset.t(), opts() -> Ecto.Changeset.t()),
  notes: String.t(),
  json_schema: map(),
  adapter_context: term(),
  extra: term()
]

Options passed to instructor functions.

Functions

consume_response(response, params, opts)

@spec consume_response(
  InstructorLite.Adapter.response(),
  InstructorLite.Adapter.params(),
  opts()
) ::
  {:ok, Ecto.Schema.t()}
  | {:error, Ecto.Changeset.t(), InstructorLite.Adapter.params()}
  | {:error, any()}
  | {:error, reason :: atom(), any()}

Triage a raw LLM response.

Attempts to cast the raw response from InstructorLite.Adapter.send_request/2 and either returns an object or an invalid changeset along with a new prompt that can be used for a retry.

This function calls the InstructorLite.Instruction.validate_changeset/2 callback, unless the validate_changeset option is overridden in opts.
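The lower-level functions can also be composed by hand. The sketch below shows where consume_response/3 fits between sending a request and retrying; the single-retry wiring is illustrative and not the library's exact implementation, and error handling is elided.

```elixir
opts = [
  response_model: %{name: :string, age: :integer},
  adapter: InstructorLite.Adapters.OpenAI,
  adapter_context: [api_key: System.fetch_env!("OPENAI_API_KEY")]
]

params =
  InstructorLite.prepare_prompt(
    %{messages: [%{role: "user", content: "John Doe is forty two years old"}]},
    opts
  )

with {:ok, response} <- InstructorLite.Adapters.OpenAI.send_request(params, opts) do
  case InstructorLite.consume_response(response, params, opts) do
    {:ok, result} ->
      {:ok, result}

    {:error, _changeset, retry_params} ->
      # retry_params already embeds the validation errors as a follow-up prompt
      with {:ok, retry_response} <-
             InstructorLite.Adapters.OpenAI.send_request(retry_params, opts) do
        InstructorLite.consume_response(retry_response, retry_params, opts)
      end

    other ->
      other
  end
end
```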

instruct(params, opts)

@spec instruct(InstructorLite.Adapter.params(), opts()) ::
  {:ok, Ecto.Schema.t()}
  | {:error, Ecto.Changeset.t()}
  | {:error, any()}
  | {:error, atom(), any()}

Perform instruction session from start to finish.

This function glues together all other functions and adds retries on top.

Examples

Basic Example

iex> InstructorLite.instruct(%{
    messages: [
      %{role: "user", content: "John Doe is fourty two years old"}
    ]
  },
  response_model: %{name: :string, age: :integer},
  adapter: InstructorLite.Adapters.OpenAI,
  adapter_context: [api_key: Application.fetch_env!(:instructor_lite, :openai_key)]
)
{:ok, %{name: "John Doe", age: 42}}

Using max_retries

defmodule Rhymes do
  use Ecto.Schema
  use InstructorLite.Instruction
  
  @primary_key false
  embedded_schema do
    field(:word, :string)
    field(:rhymes, {:array, :string})
  end
  
  @impl true
  def validate_changeset(changeset, _opts) do
    Ecto.Changeset.validate_length(changeset, :rhymes, is: 3)
  end
end

InstructorLite.instruct(%{
    messages: [
      %{role: "user", content: "Take the last word from the following line and add some rhymes to it
Even though you broke my heart"}
    ]
  },
  response_model: Rhymes,
  max_retries: 1,
  adapter_context: [api_key: Application.fetch_env!(:instructor_lite, :openai_key)]
)
{:ok, %Rhymes{word: "heart", rhymes: ["part", "start", "dart"]}}

prepare_prompt(params, opts)

Prepare a prompt that can later be sent to the LLM.

The prompt is added to params, so you need to cooperate with the adapter to know what else you can provide there.

This function calls the InstructorLite.Instruction.notes/0 and InstructorLite.Instruction.json_schema/0 callbacks for response_model. Both can be overridden with the corresponding options in opts.
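As a sketch, preparing a prompt without sending it might look like the following. The exact shape of the prepared params is adapter-specific, so the comment describes the general idea rather than a guaranteed structure.

```elixir
opts = [
  response_model: %{name: :string, age: :integer},
  adapter: InstructorLite.Adapters.OpenAI
]

params =
  InstructorLite.prepare_prompt(
    %{messages: [%{role: "user", content: "John Doe is forty two years old"}]},
    opts
  )

# params now carries the original messages plus the adapter-specific
# structured-output instructions (e.g. a JSON schema derived from
# response_model), ready to pass to the adapter's send_request/2.
```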