# Nexlm.Providers.OpenAI (Nexlm v0.1.5)
Provider implementation for OpenAI's Chat Completion API.
## Model Names
Models should be prefixed with `"openai/"`, for example:
- `"openai/gpt-4"`
- `"openai/gpt-4-vision-preview"`
- `"openai/gpt-3.5-turbo"`
## Message Formats
Supports the following message types:
- Text messages: Simple string content
- System messages: Special instructions for model behavior
- Image messages: Base64 encoded images or URLs (converted to data URLs)
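The three message shapes above can be sketched as plain Elixir maps. This is a sketch following the examples later on this page; `base64_encoded_data` is a placeholder, not real image data:

```elixir
# Text message: content is a plain string
text_msg = %{"role" => "user", "content" => "Summarize this document"}

# System message: special instructions that steer model behavior
system_msg = %{"role" => "system", "content" => "You are a terse assistant."}

# Image message: content is a list of typed parts; the provider
# converts base64-encoded data into an OpenAI data URL internally
image_msg = %{
  "role" => "user",
  "content" => [
    %{"type" => "text", "text" => "Describe this image"},
    %{"type" => "image", "mime_type" => "image/png", "data" => "base64_encoded_data"}
  ]
}
```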
## Configuration
Required:
- API key in runtime config: `config :nexlm, Nexlm.Providers.OpenAI, api_key: "key"`
- Model name in request
Optional:
- temperature: Float between 0 and 1 (default: 0.0)
- max_tokens: Integer for response length limit
- top_p: Float between 0 and 1 for nucleus sampling
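Putting the required and optional pieces together, a minimal configuration sketch might look like the following. The env var name is illustrative, and passing the optional keys alongside `:model` in `init/1` is an assumption based on the option list above:

```elixir
# config/runtime.exs — only api_key is required (illustrative env var name)
import Config

config :nexlm, Nexlm.Providers.OpenAI,
  api_key: System.fetch_env!("OPENAI_API_KEY")
```

At call time, the optional sampling controls would then accompany the model name:

```elixir
# Assumed shape: optional keys passed with the required :model
config = OpenAI.init(model: "openai/gpt-4", temperature: 0.2, max_tokens: 256)
```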
## Examples
```elixir
# Simple text completion
config = OpenAI.init(model: "openai/gpt-4")
messages = [%{"role" => "user", "content" => "Hello"}]
{:ok, response} = OpenAI.call(config, messages)
```
```elixir
# Vision API with an image
messages = [
  %{
    "role" => "user",
    "content" => [
      %{"type" => "text", "text" => "What's in this image?"},
      %{
        "type" => "image",
        "mime_type" => "image/jpeg",
        "data" => "base64_encoded_data"
      }
    ]
  }
]

config = OpenAI.init(model: "openai/gpt-4o-mini")
{:ok, response} = OpenAI.call(config, messages)
```