Nous.Providers.Gemini (nous v0.13.3)


Google Gemini provider implementation.

Supports Gemini models via the Google AI Generative Language API.

Usage

# Basic usage
{:ok, response} = Nous.Providers.Gemini.chat(%{
  model: "gemini-2.0-flash-exp",
  contents: [%{"role" => "user", "parts" => [%{"text" => "Hello"}]}]
})

# With system instruction
{:ok, response} = Nous.Providers.Gemini.chat(%{
  model: "gemini-2.0-flash-exp",
  systemInstruction: %{"parts" => [%{"text" => "You are helpful."}]},
  contents: [%{"role" => "user", "parts" => [%{"text" => "Hello"}]}],
  generationConfig: %{"temperature" => 0.7, "maxOutputTokens" => 1024}
})

# Streaming
{:ok, stream} = Nous.Providers.Gemini.chat_stream(params)
Enum.each(stream, fn event -> IO.inspect(event) end)

Configuration

# In config.exs
config :nous, :gemini,
  api_key: "AIza...",
  base_url: "https://generativelanguage.googleapis.com/v1beta"  # optional

Note on Authentication

Unlike OpenAI and Anthropic, which authenticate via HTTP headers, Gemini passes the API key as a query parameter: ?key=API_KEY
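
The contrast can be sketched as follows. These helpers are illustrative only (not part of this module's public API); only the authentication mechanism is taken from the note above:

```elixir
defmodule AuthSketch do
  # Gemini: the API key travels in the query string.
  def gemini_url(base_url, model, api_key) do
    "#{base_url}/models/#{model}:generateContent?key=#{URI.encode_www_form(api_key)}"
  end

  # OpenAI/Anthropic style: the key travels in a request header.
  def bearer_headers(api_key) do
    [{"authorization", "Bearer #{api_key}"}]
  end
end
```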

Summary

Functions

api_key(opts \\ [])

Get the API key from options, environment, or application config.

base_url(opts \\ [])

Get the base URL from options, application config, or default.

count_tokens(messages)

Count tokens in messages (rough estimate).

request(model, messages, settings)

High-level request with message conversion, telemetry, and error wrapping.

request_stream(model, messages, settings)

High-level streaming request with message conversion and telemetry.

Functions

api_key(opts \\ [])

@spec api_key(keyword()) :: String.t() | nil

Get the API key from options, environment, or application config.

Lookup order:

  1. :api_key option passed directly
  2. Environment variable (GOOGLE_AI_API_KEY)
  3. Application config: config :nous, :gemini, api_key: "..."
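
The documented lookup order can be sketched like this. This is a hypothetical implementation for illustration, not the module's actual source:

```elixir
# Each `||` falls through to the next source when the previous one is nil.
def api_key(opts \\ []) do
  opts[:api_key] ||
    System.get_env("GOOGLE_AI_API_KEY") ||
    Application.get_env(:nous, :gemini, [])[:api_key]
end
```

Note that an empty string would short-circuit the fallback chain, so callers should take care not to set a blank key in any of the three sources.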

base_url(opts \\ [])

@spec base_url(keyword()) :: String.t()

Get the base URL from options, application config, or default.

Lookup order:

  1. :base_url option passed directly
  2. Application config: config :nous, :gemini, base_url: "..."
  3. Default: https://generativelanguage.googleapis.com/v1beta
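
As with api_key/1, the lookup order can be sketched as a fallback chain. A hypothetical illustration, not the actual implementation:

```elixir
@default_base_url "https://generativelanguage.googleapis.com/v1beta"

# Explicit option wins, then application config, then the documented default.
def base_url(opts \\ []) do
  opts[:base_url] ||
    Application.get_env(:nous, :gemini, [])[:base_url] ||
    @default_base_url
end
```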

count_tokens(messages)

@spec count_tokens(list()) :: integer()

Count tokens in messages (rough estimate).

Override this in your provider for more accurate counting.
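
A rough estimate of this kind is often a characters-per-token heuristic. The sketch below assumes a ~4 characters-per-token ratio and a simple message shape; both are assumptions for illustration, not the library's actual logic:

```elixir
# Crude approximation: total characters across message contents divided by 4.
# Message shape (%{content: binary}) is assumed, not taken from the library.
def count_tokens(messages) do
  messages
  |> Enum.map(fn %{content: content} -> div(String.length(content), 4) end)
  |> Enum.sum()
end
```

For exact counts, the Gemini API exposes a countTokens endpoint, which a provider override could call instead.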

request(model, messages, settings)

High-level request with message conversion, telemetry, and error wrapping.

Default implementation that:

  1. Converts messages to provider format
  2. Builds request params
  3. Calls chat/2
  4. Parses response
  5. Emits telemetry events
  6. Wraps errors
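
The six steps above can be sketched as a single pipeline. The helper names (convert_messages/1, build_params/3, parse_response/1, wrap_error/1) and the telemetry event name are hypothetical placeholders, not the module's actual internals:

```elixir
def request(model, messages, settings) do
  meta = %{provider: :gemini, model: model}

  # :telemetry.span/3 emits start/stop (or exception) events around the call.
  :telemetry.span([:nous, :request], meta, fn ->
    params = build_params(model, convert_messages(messages), settings)

    result =
      case chat(params) do
        {:ok, response} -> {:ok, parse_response(response)}
        {:error, reason} -> {:error, wrap_error(reason)}
      end

    {result, meta}
  end)
end
```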

request_stream(model, messages, settings)

High-level streaming request with message conversion and telemetry.
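
A possible call site, assuming a message shape of %{role: atom, content: binary} and a settings map (neither is verified against the library):

```elixir
# Hypothetical usage sketch; argument shapes are assumptions.
{:ok, stream} =
  Nous.Providers.Gemini.request_stream(
    "gemini-2.0-flash-exp",
    [%{role: :user, content: "Hello"}],
    %{temperature: 0.7}
  )

Enum.each(stream, &IO.inspect/1)
```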