Backend implementation using the ReqLLM library.
ReqLLM provides a unified interface to multiple LLM providers.
## Model Specification
The model must include the provider in `"provider:model"` format:

```elixir
model: "anthropic:claude-sonnet-4-5"
```

## Configuration
Backend options are passed in the client tuple and forwarded to ReqLLM:
- `:temperature` - Sampling temperature (0.0 to 2.0)
- `:max_tokens` - Maximum tokens in the response
- `:top_p` - Nucleus sampling parameter
- `:stop` - Stop sequences
## Examples
```elixir
# Basic usage
client = Puck.Client.new({Puck.Backends.ReqLLM, "anthropic:claude-sonnet-4-5"})

# With options
client =
  Puck.Client.new(
    {Puck.Backends.ReqLLM, model: "anthropic:claude-sonnet-4-5", temperature: 0.7}
  )
```

See the ReqLLM documentation for supported providers and additional options.