Nexlm.Providers.OpenAI (Nexlm v0.1.15)

Provider implementation for OpenAI's Chat Completion API.

Model Names

Models should be prefixed with "openai/", for example:

  • "openai/gpt-5" (reasoning model)
  • "openai/gpt-4"
  • "openai/gpt-4-vision-preview"
  • "openai/gpt-3.5-turbo"
  • "openai/o1" (reasoning model)
  • "openai/o1-preview" (reasoning model)

Message Formats

Supports the following message types:

  • Text messages: Simple string content
  • System messages: Special instructions for model behavior
  • Image messages: Base64 encoded images or URLs (converted to data URLs)
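The three message types above can be sketched as Elixir maps; this is an illustrative shape only, assuming the system message uses the same `"role"`/`"content"` map layout as the text and image examples later on this page:

```elixir
messages = [
  # System message: special instructions for model behavior
  %{"role" => "system", "content" => "You are a concise assistant."},
  # Text message: simple string content
  %{"role" => "user", "content" => "Hello"},
  # Image message: base64-encoded data plus a MIME type
  %{
    "role" => "user",
    "content" => [
      %{"type" => "text", "text" => "Describe this image"},
      %{"type" => "image", "mime_type" => "image/png", "data" => "base64_encoded_data"}
    ]
  }
]
```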

Configuration

Required:

  • API key in runtime config (:nexlm, Nexlm.Providers.OpenAI, api_key: "key")
  • Model name in request

Optional:

  • temperature: Float between 0 and 1 (not supported by reasoning models like GPT-5, o1)
  • max_tokens: Integer for response length limit (default: 4000)
  • top_p: Float between 0 and 1 for nucleus sampling
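Putting the required and optional settings together, a minimal runtime configuration might look like the following sketch; reading the key from an `OPENAI_API_KEY` environment variable is an assumption for illustration, not a requirement of the library:

```elixir
# config/runtime.exs
import Config

# Required: API key under the provider module, as described above
config :nexlm, Nexlm.Providers.OpenAI,
  api_key: System.get_env("OPENAI_API_KEY")
```

Optional request settings such as temperature, max_tokens, and top_p are passed per request rather than in this application config.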

Examples

# Simple text completion
config = OpenAI.init(model: "openai/gpt-4")
messages = [%{"role" => "user", "content" => "Hello"}]
{:ok, response} = OpenAI.call(config, messages)

# Vision API with image
config = OpenAI.init(model: "openai/gpt-4o-mini")
messages = [
  %{
    "role" => "user",
    "content" => [
      %{"type" => "text", "text" => "What's in this image?"},
      %{
        "type" => "image",
        "mime_type" => "image/jpeg",
        "data" => "base64_encoded_data"
      }
    ]
  }
]
{:ok, response} = OpenAI.call(config, messages)

# GPT-5 reasoning model (no temperature support)
config = OpenAI.init(model: "openai/gpt-5")
messages = [%{"role" => "user", "content" => "Solve this step by step: What is 15% of 240?"}]
{:ok, response} = OpenAI.call(config, messages)