InstructorLite (instructor_lite v0.3.0)
Main building blocks of InstructorLite.
Key Concepts
Structured prompting can look quite different depending on the LLM, and InstructorLite does only the bare minimum to abstract this complexity. As a result, usage varies from adapter to adapter, so make sure to consult the adapter documentation to learn the details.
There are two key arguments used throughout this module. Understanding what they are will make your life a lot easier.
params - an adapter-specific map that contains values eventually sent to the LLM. More simply, this is the body that will be posted to the API endpoint. Your prompt, model name, and optional parameters like temperature all likely belong here.
opts - a list of options that shape the behavior of InstructorLite itself. Options may include things like which schema to cast the response to, the HTTP client to use, the API key, optional headers, HTTP timeout, etc.
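To make the distinction concrete, here is a minimal sketch of an instruct/2 call with the two arguments separated; the API key is a placeholder:

# `params` is the request body the adapter will post to the API endpoint.
params = %{
  messages: [
    %{role: "user", content: "John Doe is forty two years old"}
  ]
}

# `opts` configures InstructorLite itself: target schema, adapter, adapter options.
opts = [
  response_model: %{name: :string, age: :integer},
  adapter: InstructorLite.Adapters.OpenAI,
  adapter_context: [api_key: "YOUR_API_KEY"] # placeholder
]

InstructorLite.instruct(params, opts)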
Shared options
Most functions in this module accept a list of options.
:response_model - Required. A module implementing the InstructorLite.Instruction behaviour, an Ecto schema, or a schemaless Ecto definition.
:adapter (atom/0) - A module implementing the InstructorLite.Adapter behaviour. The default value is InstructorLite.Adapters.OpenAI.
:max_retries (non_neg_integer/0) - How many additional attempts to make if changeset validation fails. The default value is 0.
:validate_changeset (function of arity 2) - Override function to be called instead of the response_model.validate_changeset/2 callback.
:notes (String.t/0) - Additional notes about the schema that might be used by an adapter.
:json_schema (map/0) - JSON schema to use instead of calling the response_model.json_schema/0 callback or generating it at runtime using the InstructorLite.JSONSchema module.
:adapter_context (term/0) - Options used by adapter callbacks. See adapter docs for schema.
:extra (term/0) - Any arbitrary term for ad-hoc usage, for example in the InstructorLite.Instruction.validate_changeset/2 callback.
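For example, the :validate_changeset override can attach an ad-hoc validation to a schemaless response model without defining a full InstructorLite.Instruction module. A minimal sketch; the prompt and API key are placeholders:

InstructorLite.instruct(%{
    messages: [
      %{role: "user", content: "Name three primary colors"}
    ]
  },
  response_model: %{colors: {:array, :string}},
  validate_changeset: fn changeset, _opts ->
    # Trigger a retry unless the model returned exactly three colors.
    Ecto.Changeset.validate_length(changeset, :colors, is: 3)
  end,
  max_retries: 1,
  adapter_context: [api_key: "YOUR_API_KEY"] # placeholder
)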
Summary
Functions
Triage raw LLM response
Perform instruction session from start to finish.
Prepare a prompt that can later be sent to the LLM
Types
@type opts() :: [
  response_model: atom() | Ecto.Changeset.types(),
  adapter: atom(),
  max_retries: non_neg_integer(),
  validate_changeset: (Ecto.Changeset.t(), opts() -> Ecto.Changeset.t()),
  notes: String.t(),
  json_schema: map(),
  adapter_context: term(),
  extra: term()
]
Options passed to instructor functions.
Functions
@spec consume_response(
  InstructorLite.Adapter.response(),
  InstructorLite.Adapter.params(),
  opts()
) ::
  {:ok, Ecto.Schema.t()}
  | {:error, Ecto.Changeset.t(), InstructorLite.Adapter.params()}
  | {:error, any()}
  | {:error, reason :: atom(), any()}
Triage raw LLM response
Attempts to cast the raw response from InstructorLite.Adapter.send_request/2 and either returns an object or an invalid changeset together with a new prompt that can be used for a retry.
This function will call the InstructorLite.Instruction.validate_changeset/2 callback, unless the validate_changeset option is overridden in opts.
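For illustration, the cycle that instruct/2 automates could be wired up by hand roughly like this. A sketch, assuming the OpenAI adapter's send_request/2 accepts the same opts list; the API key is a placeholder:

opts = [
  response_model: %{name: :string, age: :integer},
  adapter: InstructorLite.Adapters.OpenAI,
  adapter_context: [api_key: "YOUR_API_KEY"] # placeholder
]

# Enrich the request body with schema and instructions for the model.
params = InstructorLite.prepare_prompt(%{
    messages: [%{role: "user", content: "John Doe is forty two years old"}]
  }, opts)

with {:ok, response} <- InstructorLite.Adapters.OpenAI.send_request(params, opts) do
  # Returns {:ok, result}, or an invalid changeset plus new params for a retry.
  InstructorLite.consume_response(response, params, opts)
end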
@spec instruct(InstructorLite.Adapter.params(), opts()) ::
  {:ok, Ecto.Schema.t()}
  | {:error, Ecto.Changeset.t()}
  | {:error, any()}
  | {:error, atom(), any()}
Perform instruction session from start to finish.
This function glues together all other functions and adds retries on top.
Examples
Basic Example
iex> InstructorLite.instruct(%{
    messages: [
      %{role: "user", content: "John Doe is forty two years old"}
    ]
  },
  response_model: %{name: :string, age: :integer},
  adapter: InstructorLite.Adapters.OpenAI,
  adapter_context: [api_key: Application.fetch_env!(:instructor_lite, :openai_key)]
)
{:ok, %{name: "John Doe", age: 42}}
iex> InstructorLite.instruct(%{
    messages: [
      %{role: "user", content: "John Doe is forty two years old"}
    ]
  },
  response_model: %{name: :string, age: :integer},
  adapter: InstructorLite.Adapters.Anthropic,
  adapter_context: [api_key: Application.fetch_env!(:instructor_lite, :anthropic_key)]
)
{:ok, %{name: "John Doe", age: 42}}
iex> InstructorLite.instruct(%{
    prompt: "John Doe is forty two years old"
  },
  response_model: %{name: :string, age: :integer},
  adapter: InstructorLite.Adapters.Llamacpp,
  adapter_context: [url: Application.fetch_env!(:instructor_lite, :llamacpp_url)]
)
{:ok, %{name: "John Doe", age: 42}}
iex> InstructorLite.instruct(%{
    contents: [
      %{
        role: "user",
        parts: [%{text: "John Doe is forty two years old"}]
      }
    ]
  },
  response_model: %{name: :string, age: :integer},
  json_schema: %{
    type: "object",
    required: [:age, :name],
    properties: %{name: %{type: "string"}, age: %{type: "integer"}}
  },
  adapter: InstructorLite.Adapters.Gemini,
  adapter_context: [
    api_key: Application.fetch_env!(:instructor_lite, :gemini_key)
  ]
)
{:ok, %{name: "John Doe", age: 42}}
Using max_retries
defmodule Rhymes do
  use Ecto.Schema
  use InstructorLite.Instruction

  @primary_key false
  embedded_schema do
    field(:word, :string)
    field(:rhymes, {:array, :string})
  end

  @impl true
  def validate_changeset(changeset, _opts) do
    Ecto.Changeset.validate_length(changeset, :rhymes, is: 3)
  end
end
InstructorLite.instruct(%{
    messages: [
      %{role: "user", content: "Take the last word from the following line and add some rhymes to it
      Even though you broke my heart"}
    ]
  },
  response_model: Rhymes,
  max_retries: 1,
  adapter_context: [api_key: Application.fetch_env!(:instructor_lite, :openai_key)]
)
{:ok, %Rhymes{word: "heart", rhymes: ["part", "start", "dart"]}}
@spec prepare_prompt(InstructorLite.Adapter.params(), opts()) :: InstructorLite.Adapter.params()
Prepare a prompt that can later be sent to the LLM
The prompt is added to params, so you need to cooperate with the adapter to know what you can provide there.
The function will call the InstructorLite.Instruction.notes/0 and InstructorLite.Instruction.json_schema/0 callbacks for response_model. Both can be overridden with the corresponding options in opts.
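A minimal sketch of inspecting the prepared params before sending them, assuming the default OpenAI adapter:

params = InstructorLite.prepare_prompt(%{
    messages: [%{role: "user", content: "John Doe is forty two years old"}]
  },
  response_model: %{name: :string, age: :integer}
)

# `params` now carries the adapter's structured-output instructions on top of
# the original map and can be passed to send_request/2.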