Structured prompting for LLMs. InstructorLite is a fork, spiritual successor, and almost entire rewrite of the instructor_ex library.
InstructorLite provides basic building blocks to embed LLMs into your application. It uses Ecto schemas to make sure LLM output has a predictable shape and plays nicely with deterministic application logic. For an example of what can be built with InstructorLite, check out Handwave.
Why Lite
InstructorLite is designed to be:
- Lean. It does so little it makes you question if you should just write your own version!
- Composable. Almost everything it does can be overridden or extended.
- Magic-free. It doesn't hide complexity behind one-line function calls, but does its best to provide you with enough information to understand what's going on.
InstructorLite is tested to be compatible with the following providers: OpenAI, Anthropic, Gemini, and any Chat Completions-compatible API, such as Grok.
Features
InstructorLite can be boiled down to these features:
- It provides a very simple function for generating a JSON schema from an Ecto schema (see the sketch after this list).
- It facilitates generating prompts, calling LLMs, casting and validating responses, including retrying prompts when validation fails.
- It holds knowledge of major LLM providers' API interfaces with adapters.
Any of the features above can be used independently.
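As an illustration of the first point, a JSON schema can be generated from an Ecto schema on its own, independently of the rest of the pipeline. The snippet below is only a sketch: the `InstructorLite.JSONSchema.from_ecto_schema/1` call and the shape of the returned map are assumptions, so check the module documentation for the exact interface. It refers to the UserInfo schema defined in the Usage section below.

```elixir
# Sketch only: assumes an InstructorLite.JSONSchema.from_ecto_schema/1 helper;
# see the library docs for the actual function name and return shape.
json_schema = InstructorLite.JSONSchema.from_ecto_schema(UserInfo)
# => a plain map, e.g. %{type: "object", required: [...], properties: %{...}}
```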
Usage
Define an instruction, which is a normal Ecto schema with an extra `use InstructorLite.Instruction` call.
```elixir
defmodule UserInfo do
  use Ecto.Schema
  use InstructorLite.Instruction

  @primary_key false
  embedded_schema do
    field(:name, :string)
    field(:age, :integer)
  end
end
```

Now let's use `InstructorLite.instruct/2` to fill the schema from unstructured text:
OpenAI
```elixir
iex> InstructorLite.instruct(%{
    input: [
      %{role: "user", content: "John Doe is forty-two years old"}
    ]
  },
  response_model: UserInfo,
  adapter_context: [api_key: Application.fetch_env!(:instructor_lite, :openai_key)]
)
{:ok, %UserInfo{name: "John Doe", age: 42}}
```

Anthropic
```elixir
iex> InstructorLite.instruct(%{
    messages: [
      %{role: "user", content: "John Doe is forty-two years old"}
    ]
  },
  response_model: UserInfo,
  adapter: InstructorLite.Adapters.Anthropic,
  adapter_context: [api_key: Application.fetch_env!(:instructor_lite, :anthropic_key)]
)
{:ok, %UserInfo{name: "John Doe", age: 42}}
```

Llamacpp
```elixir
iex> InstructorLite.instruct(%{
    prompt: "John Doe is forty-two years old"
  },
  response_model: UserInfo,
  adapter: InstructorLite.Adapters.Llamacpp,
  adapter_context: [url: Application.fetch_env!(:instructor_lite, :llamacpp_url)]
)
{:ok, %UserInfo{name: "John Doe", age: 42}}
```

Gemini
```elixir
iex> InstructorLite.instruct(%{
    contents: [
      %{
        role: "user",
        parts: [%{text: "John Doe is forty-two years old"}]
      }
    ]
  },
  response_model: UserInfo,
  json_schema: %{
    type: "object",
    required: [:age, :name],
    properties: %{name: %{type: "string"}, age: %{type: "integer"}}
  },
  adapter: InstructorLite.Adapters.Gemini,
  adapter_context: [
    api_key: Application.fetch_env!(:instructor_lite, :gemini_key)
  ]
)
{:ok, %UserInfo{name: "John Doe", age: 42}}
```

Grok
The Grok API is compatible with the OpenAI Chat Completions endpoint, so we can use the `ChatCompletionsCompatible` adapter with Grok's URL and model name:
```elixir
iex> InstructorLite.instruct(%{
    model: "grok-3-latest",
    messages: [
      %{role: "user", content: "John Doe is forty-two years old"}
    ]
  },
  response_model: UserInfo,
  adapter: InstructorLite.Adapters.ChatCompletionsCompatible,
  adapter_context: [
    url: "https://api.x.ai/v1/chat/completions",
    api_key: Application.fetch_env!(:instructor_lite, :grok_key)
  ]
)
{:ok, %UserInfo{name: "John Doe", age: 42}}
```

Configuration
InstructorLite does not access the application environment for configuration options like adapter or API key. Instead, they're passed as options when needed. Note that different adapters may require different options, so make sure to check their documentation.
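If you prefer to keep configuration in one place, a common pattern is to wrap the call in a module of your own that reads whatever configuration source you like and passes the options through. The sketch below is hypothetical: the module, config keys, and the `max_retries` option name are assumptions (re-prompting on failed validation is a library feature, but check the docs for the exact option).

```elixir
# Hypothetical wrapper module: InstructorLite itself never reads the
# application environment, so code like this decides where options come from.
defmodule MyApp.AI do
  def extract(input, response_model) do
    InstructorLite.instruct(
      %{input: input},
      response_model: response_model,
      # Option name assumed; re-prompting on failed validation is a library
      # feature, but verify the exact option and default in the docs.
      max_retries: 1,
      adapter_context: [api_key: Application.fetch_env!(:my_app, :openai_key)]
    )
  end
end
```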
Approach to development
InstructorLite is hand-written by a human and all external contributions are vetted by a human. And said human is committed to keeping it this way for the foreseeable future. This comes with both advantages and drawbacks. The library may be prone to silly human errors and poor judgement, but at the same time it likely won't explode in complexity overnight or undergo a full rewrite every couple of months. Tune your expectations accordingly!
Non-goals
InstructorLite very explicitly doesn't pursue the following goals:
- Response streaming. Streaming is good UX for cases when LLM output is relayed to users, but it doesn't make much sense in an application environment, where structured outputs are usually used.
- Unified interface. We acknowledge that LLM providers can be very different and trying to fit them under the same roof brings a ton of unnecessary complexity. Instead, InstructorLite aims to make it simple for developers to understand these differences and grapple with them.
Installation
In your `mix.exs`, add `:instructor_lite` to your list of dependencies:
```elixir
def deps do
  [
    {:instructor_lite, "~> 1.2.0"}
  ]
end
```

Optionally, include the Req HTTP client (used by default) and Jason (for Elixir older than 1.18):
```elixir
def deps do
  [
    {:req, "~> 0.5 or ~> 1.0"},
    {:jason, "~> 1.4"}
  ]
end
```