Puck (Puck v0.2.11)

Puck - An AI framework for Elixir.

Quick Start

# Create a client (requires :req_llm dep)
client = Puck.Client.new({Puck.Backends.ReqLLM, "anthropic:claude-sonnet-4-5"},
  system_prompt: "You are a helpful assistant."
)

# Simple call
{:ok, response, _ctx} = Puck.call(client, "Hello!")

# Multi-turn conversation
context = Puck.Context.new()
{:ok, response, context} = Puck.call(client, "Hello!", context)
{:ok, response, context} = Puck.call(client, "Follow-up question", context)

# Stream responses
{:ok, stream, _ctx} = Puck.stream(client, "Tell me a story")
Enum.each(stream, fn chunk -> IO.write(chunk.content) end)

Core Concepts

  • Puck.Client - Configuration struct for an LLM client (backend, system prompt, hooks)
  • Puck.Context - Conversation history and metadata
  • Puck.Content - Multi-modal content (text, images, files, audio, video)
  • Puck.Message - Individual message in a conversation
  • Puck.Backend - Behaviour for LLM backend implementations
  • Puck.Hooks - Behaviour for lifecycle event hooks
  • Puck.Response - Normalized response struct with content, finish_reason, usage
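
A minimal sketch of how these pieces fit together. The fields content, finish_reason, and usage come from the Puck.Response description above; treating the reply as inspectable plain data is an assumption for text-only calls:

# Build a client and an empty conversation context
client = Puck.Client.new({Puck.Backends.ReqLLM, "anthropic:claude-sonnet-4-5"})
context = Puck.Context.new()

# Each call yields a normalized Puck.Response plus the updated context
{:ok, response, context} = Puck.call(client, "Hello!", context)
IO.inspect(response.content)       # the reply content
IO.inspect(response.finish_reason) # why generation stopped
IO.inspect(response.usage)         # token usage metadata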

Optional Packages

  • :req_llm - Multi-provider LLM backend (enables Puck.Backends.ReqLLM)
  • :solid - Prompt templates with Liquid syntax (enables Puck.Prompt.Solid)
  • :telemetry - Telemetry integration for observability
  • :zoi - Schema validation for structured outputs
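
None of these are required; add the ones you need alongside :puck in mix.exs. A sketch of the deps list (the version requirements below are assumptions, check Hex for current releases):

# mix.exs
defp deps do
  [
    {:puck, "~> 0.2"},
    # Optional: multi-provider backend, Liquid templates, telemetry, schemas
    {:req_llm, "~> 1.0"},
    {:solid, "~> 1.0"},
    {:telemetry, "~> 1.0"},
    {:zoi, "~> 0.1"}
  ]
end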

Summary

Functions

call(client, content)

Calls an LLM and returns the response.

Returns

  • {:ok, response, context} on success
  • {:error, reason} on failure

Examples

# Simple call
client = Puck.Client.new({Puck.Backends.ReqLLM, "anthropic:claude-sonnet-4-5"})
{:ok, response, _ctx} = Puck.call(client, "Hello!")

# With system prompt
client = Puck.Client.new({Puck.Backends.ReqLLM, "anthropic:claude-sonnet-4-5"},
  system_prompt: "You are a translator."
)
{:ok, response, _ctx} = Puck.call(client, "Translate to Spanish")

# Multi-turn conversation
context = Puck.Context.new()
{:ok, response, context} = Puck.call(client, "Hello!", context)
{:ok, response, context} = Puck.call(client, "Follow-up question", context)
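
Because the function returns a tagged tuple, failures can be handled with an ordinary case; the shape of reason is backend-specific, so the handling below is only a sketch:

case Puck.call(client, "Hello!") do
  {:ok, response, _ctx} -> IO.inspect(response.content)
  {:error, reason} -> IO.warn("LLM call failed: #{inspect(reason)}")
end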

call(client, content, context, opts \\ [])

Same as call/2, but threads an existing conversation context and accepts additional options.

stream(client, content)

Streams an LLM response.

Returns

  • {:ok, stream, context} where stream is an Enumerable of chunks
  • {:error, reason} on failure

Examples

client = Puck.Client.new({Puck.Backends.ReqLLM, "anthropic:claude-sonnet-4-5"})
{:ok, stream, _ctx} = Puck.stream(client, "Tell me a story")
Enum.each(stream, fn chunk -> IO.write(chunk.content) end)
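
Since the stream is a plain Enumerable, it composes with Enum and Stream. For example, collecting every chunk into a single string (assuming text chunks, as above):

{:ok, stream, _ctx} = Puck.stream(client, "Tell me a story")
story = Enum.map_join(stream, & &1.content)
IO.puts(story)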

stream(client, content, context, opts \\ [])

Same as stream/2, but threads an existing conversation context and accepts additional options.