Nexlm (Nexlm v0.1.15)
A unified interface for interacting with various Large Language Model (LLM) providers in Elixir.
Nexlm abstracts away provider-specific implementations behind a clean, consistent API, enabling easy integration with LLM services such as OpenAI's GPT, Anthropic's Claude, and Google's Gemini.
Features
- Single, unified API for multiple LLM providers
- Support for text and multimodal (image) inputs
- Built-in validation and error handling
- Configurable request parameters
- Provider-agnostic message format
- Caching support for reduced costs
Provider Support
Currently supported providers:
- OpenAI (GPT-4, GPT-3.5)
- Anthropic (Claude)
- Google (Gemini)
Model Names
Model names must be prefixed with the provider name:
"anthropic/claude-3-haiku-20240307""openai/gpt-4""google/gemini-pro"
Basic Usage
Simple Text Completion
messages = [%{
"role" => "user",
"content" => "What is the capital of France?"
}]
{:ok, response} = Nexlm.complete("anthropic/claude-3-haiku-20240307", messages)
# => {:ok, %{role: "assistant", content: "The capital of France is Paris."}}With System Message
messages = [
%{
"role" => "system",
"content" => "You are a mathematician who only responds with numbers"
},
%{
"role" => "user",
"content" => "What is five plus five?"
}
]
{:ok, response} = Nexlm.complete("openai/gpt-4", messages, temperature: 0.7)
# => {:ok, %{role: "assistant", content: "10"}}Image Analysis
image_data = File.read!("image.jpg") |> Base.encode64()
messages = [
%{
"role" => "user",
"content" => [
%{"type" => "text", "text" => "What's in this image?"},
%{
"type" => "image",
"mime_type" => "image/jpeg",
"data" => image_data,
"cache" => true # Enable caching for this content
}
]
}
]
{:ok, response} = Nexlm.complete(
"google/gemini-pro-vision",
messages,
max_tokens: 100
)
Configuration
Configure provider API keys in your application's runtime configuration:
# config/runtime.exs
config :nexlm, Nexlm.Providers.OpenAI,
api_key: System.get_env("OPENAI_API_KEY")
config :nexlm, Nexlm.Providers.Anthropic,
api_key: System.get_env("ANTHROPIC_API_KEY")
config :nexlm, Nexlm.Providers.Google,
api_key: System.get_env("GOOGLE_API_KEY")Debug Logging
Enable detailed debug logging to see request/response details:
# In configuration
config :nexlm, :debug, true
# Or via environment variable
export NEXLM_DEBUG=true
Debug logs include:
- Provider and model information
- Complete HTTP requests (headers, body) with sensitive data redacted
- Complete HTTP responses (status, headers, body)
- Message transformations and validation steps
- Request timing information
Example debug output:
[debug] [Nexlm] Starting request for model: anthropic/claude-3-haiku-20240307
[debug] [Nexlm] Provider: anthropic
[debug] [Nexlm] Request: POST https://api.anthropic.com/v1/messages
[debug] [Nexlm] Headers: %{"x-api-key" => "[REDACTED]", ...}
[debug] [Nexlm] Body: %{model: "claude-3-haiku-20240307", messages: [...]}
[debug] [Nexlm] Response: 200 OK (342ms)
[debug] [Nexlm] Response Body: %{content: [...], role: "assistant"}
Error Handling
The library provides structured error handling:
case Nexlm.complete(model, messages, opts) do
{:ok, response} ->
handle_success(response)
{:error, %Nexlm.Error{type: :network_error}} ->
retry_request()
{:error, %Nexlm.Error{type: :provider_error, message: msg, details: details}} ->
status = Map.get(details, :status, "n/a")
Logger.error("Provider error (status #{status}): #{msg}")
handle_provider_error(status)
{:error, %Nexlm.Error{type: :authentication_error}} ->
refresh_credentials()
{:error, error} ->
Logger.error("Unexpected error: #{inspect(error)}")
handle_generic_error()
end
Message Format
Simple Text Message
%{
"role" => "user", # "user", "assistant", or "system"
"content" => "Hello, world!"
}
Message with Image
%{
"role" => "user",
"content" => [
%{"type" => "text", "text" => "What's in this image?"},
%{
"type" => "image",
"mime_type" => "image/jpeg",
"data" => "base64_encoded_data",
"cache" => true # Optional caching flag
}
]
}
System Message
%{
"role" => "system",
"content" => "You are a helpful assistant"
}
Content Caching
Nexlm supports provider-level message caching through content item configuration:
# Image with caching enabled
%{
"type" => "image",
"mime_type" => "image/jpeg",
"data" => "base64_data",
"cache" => true # Enable caching
}
Currently supported by:
- Anthropic (via cache_control in content items)
- Other providers may add support in future updates
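As a concrete sketch, a full request that marks a large image for caching might look like the following (the model name and file path are illustrative; on Anthropic the cache flag is translated to cache_control):
image_data = File.read!("diagram.png") |> Base.encode64()

messages = [
  %{
    "role" => "user",
    "content" => [
      %{"type" => "text", "text" => "Summarize this diagram"},
      %{
        "type" => "image",
        "mime_type" => "image/png",
        "data" => image_data,
        "cache" => true # Marks this item for provider-level caching
      }
    ]
  }
]

{:ok, response} = Nexlm.complete("anthropic/claude-3-haiku-20240307", messages)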
Summary
Types
A content item in a message, used for multimodal inputs.
A message that can be sent to an LLM provider.
Functions
Sends a request to an LLM provider and returns the response.
Types
@type content_item() :: %{
  type: String.t(),
  text: String.t() | nil,
  mime_type: String.t() | nil,
  data: String.t() | nil,
  cache: boolean() | nil
}
A content item in a message, used for multimodal inputs.
Fields:
- type: The type of content ("text" or "image")
- text: The text content for text type items
- mime_type: The MIME type for image content (e.g., "image/jpeg")
- data: Base64 encoded image data
- cache: Whether this content should be cached by the provider
@type message() ::
  %{role: String.t(), content: String.t() | [content_item()]}
  | %{
      role: String.t(),
      tool_call_id: String.t(),
      tool_calls: [map()],
      content: map()
    }
A message that can be sent to an LLM provider.
Fields:
- role: The role of the message sender ("user", "assistant", or "system")
- content: The content of the message, either a string or a list of content items
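The second shape in the message() type covers tool-call messages, which are not exemplified elsewhere in these docs. The following is a hypothetical instance only: the field names come from the type above, but the role and field values are illustrative, not a documented wire format.
# Hypothetical tool-result message; shape follows message(), values are illustrative
%{
  "role" => "tool",                 # role value is an assumption
  "tool_call_id" => "call_123",     # echoes the ID of an earlier tool call
  "tool_calls" => [],               # list of tool-call maps, per the type
  "content" => %{"result" => "42"}  # in this shape, content is a map
}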
Functions
@spec complete(String.t(), [message()], keyword()) :: {:ok, message()} | {:error, Nexlm.Error.t()}
Sends a request to an LLM provider and returns the response.
This is the main entry point for interacting with LLM providers. It handles:
- Message validation
- Provider selection and configuration
- Request formatting
- Error handling
- Response parsing
Arguments
- model - String in the format "provider/model-name" (e.g., "anthropic/claude-3")
- messages - List of message maps with :role and :content keys
- opts - Optional keyword list of settings
Options
- :temperature - Float between 0 and 1 (default: 0.0)
- :max_tokens - Maximum tokens in response
- :top_p - Float between 0 and 1
- :receive_timeout - Timeout in milliseconds (default: 300_000)
- :retry_count - Number of retry attempts (default: 3)
- :retry_delay - Delay between retries in milliseconds (default: 1000)
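For instance, the timeout and retry options can be combined with sampling settings in a single call (values chosen for illustration):
{:ok, response} =
  Nexlm.complete(
    "openai/gpt-4",
    messages,
    temperature: 0.2,
    max_tokens: 500,
    receive_timeout: 60_000,  # 60s instead of the 300_000 ms default
    retry_count: 5,           # up to 5 retry attempts (default: 3)
    retry_delay: 2_000        # 2s between attempts (default: 1000 ms)
  )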
Examples
Simple text completion:
messages = [%{"role" => "user", "content" => "What's 2+2?"}]
{:ok, response} = Nexlm.complete("anthropic/claude-3-haiku-20240307", messages)
# => {:ok, %{role: "assistant", content: "4"}}With system message and temperature:
messages = [
%{"role" => "system", "content" => "Respond like a pirate"},
%{"role" => "user", "content" => "Hello"}
]
{:ok, response} = Nexlm.complete("openai/gpt-4", messages, temperature: 0.7)
# => {:ok, %{role: "assistant", content: "Arr, ahoy there matey!"}}With image analysis:
messages = [
%{
"role" => "user",
"content" => [
%{"type" => "text", "text" => "Describe this image:"},
%{
"type" => "image",
"mime_type" => "image/jpeg",
"data" => "base64_data",
"cache" => true
}
]
}
]
{:ok, response} = Nexlm.complete(
"google/gemini-pro-vision",
messages,
max_tokens: 200
)
Returns
Returns either:
- {:ok, message} - Success response with the assistant's message
- {:error, error} - Error tuple with a Nexlm.Error struct
Error Handling
Possible error types:
- :validation_error - Invalid message format or content
- :provider_error - Provider-specific API errors
- :network_error - Transport or connectivity failure
- :authentication_error - Provider rejected supplied credentials
- :configuration_error - Invalid configuration
For :provider_error results, the details map includes the provider's HTTP
status code when available (under the :status key).