Gemini (GeminiEx v0.2.0)
Gemini Elixir Client
A comprehensive Elixir client for Google's Gemini AI API with dual authentication support, advanced streaming capabilities, type safety, and built-in telemetry.
Features
- 🔐 Dual Authentication: Seamless support for both Gemini API keys and Vertex AI OAuth/Service Accounts
- ⚡ Advanced Streaming: Production-grade Server-Sent Events streaming with real-time processing
- 🛡️ Type Safety: Complete type definitions with runtime validation
- 📊 Built-in Telemetry: Comprehensive observability and metrics out of the box
- 💬 Chat Sessions: Multi-turn conversation management with state persistence
- 🎭 Multimodal: Full support for text, image, audio, and video content
- 🚀 Production Ready: Robust error handling, retry logic, and performance optimizations
Quick Start
Installation
Add to your mix.exs:
def deps do
  [
    {:gemini, "~> 0.2.0"}
  ]
end
Basic Configuration
Configure your API key in config/runtime.exs:
import Config
config :gemini,
  api_key: System.get_env("GEMINI_API_KEY")
Or set the environment variable:
export GEMINI_API_KEY="your_api_key_here"
Simple Usage
# Basic text generation
{:ok, response} = Gemini.generate("Tell me about Elixir programming")
{:ok, text} = Gemini.extract_text(response)
IO.puts(text)
# With options
{:ok, response} = Gemini.generate("Explain quantum computing", [
  model: Gemini.Config.get_model(:flash_2_0_lite),
  temperature: 0.7,
  max_output_tokens: 1000
])
Streaming
# Start a managed streaming session (returns a stream ID)
{:ok, stream_id} = Gemini.start_stream("Write a long story")

# Subscribe the current process to streaming events
:ok = Gemini.subscribe_stream(stream_id)
Authentication
This client supports two authentication methods:
1. Gemini API Key (Simple)
Best for development and simple applications:
# Environment variable (recommended)
export GEMINI_API_KEY="your_api_key"
# Application config
config :gemini, api_key: "your_api_key"
# Per-request override
Gemini.generate("Hello", api_key: "specific_key")
2. Vertex AI (Production)
Best for production Google Cloud applications:
# Service Account JSON file
export VERTEX_SERVICE_ACCOUNT="/path/to/service-account.json"
export VERTEX_PROJECT_ID="your-gcp-project"
export VERTEX_LOCATION="us-central1"
# Application config
config :gemini, :auth,
  type: :vertex_ai,
  credentials: %{
    service_account_key: System.get_env("VERTEX_SERVICE_ACCOUNT"),
    project_id: System.get_env("VERTEX_PROJECT_ID"),
    location: "us-central1"
  }
Error Handling
The client provides detailed error information with recovery suggestions:
case Gemini.generate("Hello world") do
  {:ok, response} ->
    {:ok, text} = Gemini.extract_text(response)
    IO.puts(text)

  {:error, %Gemini.Error{type: :rate_limit} = error} ->
    IO.puts("Rate limited. Retry after: #{error.retry_after}")

  {:error, %Gemini.Error{type: :authentication} = error} ->
    IO.puts("Auth error: #{error.message}")

  {:error, error} ->
    IO.puts("Unexpected error: #{inspect(error)}")
end
Advanced Features
Multimodal Content
content = [
  %{type: "text", text: "What's in this image?"},
  %{type: "image", source: %{type: "base64", data: base64_image}}
]
{:ok, response} = Gemini.generate(content)
Model Management
# List available models
{:ok, models} = Gemini.list_models()
# Get model details
{:ok, model_info} = Gemini.get_model(Gemini.Config.get_model(:flash_2_0_lite))
# Count tokens
{:ok, token_count} = Gemini.count_tokens("Your text", model: Gemini.Config.get_model(:flash_2_0_lite))
This module provides backward-compatible access to the Gemini API while routing requests through the unified coordinator for maximum flexibility and performance.
Summary
Functions
Start a new chat session.
Configure authentication for the client.
Count tokens in the given content.
Extract text from a GenerateContentResponse or raw streaming data.
Generate content using the configured authentication.
Generate content with automatic tool execution.
Get information about a specific model.
Get stream status.
List available models.
Check if a model exists.
Send a message in a chat session.
Start the streaming manager (for compatibility).
Start a managed streaming session.
Generate content with streaming response (synchronous collection).
Start a streaming session with automatic tool execution.
Subscribe to streaming events.
Generate text content and return only the text.
Types
@type options() :: [
  model: String.t(),
  generation_config: Gemini.Types.GenerationConfig.t() | nil,
  safety_settings: [Gemini.Types.SafetySetting.t()],
  system_instruction: Gemini.Types.Content.t() | String.t() | nil,
  tools: [map()],
  tool_config: map() | nil,
  api_key: String.t(),
  auth: :gemini | :vertex_ai,
  temperature: float(),
  max_output_tokens: non_neg_integer(),
  top_p: float(),
  top_k: non_neg_integer()
]
Options for content generation and related API calls.
:model - Model name (string, defaults to configured default model)
:generation_config - GenerationConfig struct (Gemini.Types.GenerationConfig.t())
:safety_settings - List of SafetySetting structs ([Gemini.Types.SafetySetting.t()])
:system_instruction - System instruction as Content struct or string (Gemini.Types.Content.t() | String.t() | nil)
:tools - List of tool definitions ([map()])
:tool_config - Tool configuration (map() | nil)
:api_key - Override API key (string)
:auth - Authentication strategy (:gemini | :vertex_ai)
:temperature - Generation temperature (float, 0.0-1.0)
:max_output_tokens - Maximum tokens to generate (non_neg_integer)
:top_p - Top-p sampling parameter (float)
:top_k - Top-k sampling parameter (non_neg_integer)
Functions
@spec chat(options()) :: {:ok, Gemini.Chat.t()}
Start a new chat session.
See Gemini.options/0
for available options.
Configure authentication for the client.
Examples
# Gemini API
Gemini.configure(:gemini, %{api_key: "your_api_key"})
# Vertex AI
Gemini.configure(:vertex_ai, %{
  service_account_key: "/path/to/key.json",
  project_id: "your-project",
  location: "us-central1"
})
@spec count_tokens(String.t() | [Gemini.Types.Content.t()], options()) :: {:ok, map()} | {:error, Gemini.Error.t()}
Count tokens in the given content.
See Gemini.options/0
for available options.
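For example, to budget a prompt before dispatching it. The exact key names inside the returned map are not specified on this page, so the result is only inspected here:

```elixir
# Count tokens for a prompt against a specific model
{:ok, token_info} =
  Gemini.count_tokens("Explain OTP supervision trees",
    model: Gemini.Config.get_model(:flash_2_0_lite)
  )

# token_info is a map of token counts; inspect it to see the exact keys
IO.inspect(token_info)
```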
@spec extract_text(Gemini.Types.Response.GenerateContentResponse.t() | map()) :: {:ok, String.t()} | {:error, String.t()}
Extract text from a GenerateContentResponse or raw streaming data.
@spec generate(String.t() | [Gemini.Types.Content.t()], options()) :: {:ok, Gemini.Types.Response.GenerateContentResponse.t()} | {:error, Gemini.Error.t()}
Generate content using the configured authentication.
See Gemini.options/0
for available options.
@spec generate_content_with_auto_tools( String.t() | [Gemini.Types.Content.t()], options() ) :: {:ok, Gemini.Types.Response.GenerateContentResponse.t()} | {:error, Gemini.Error.t()}
Generate content with automatic tool execution.
This function provides a seamless, Python-SDK-like experience by automatically handling the tool-calling loop. When the model returns function calls, they are executed automatically and the conversation continues until a final text response is received.
Parameters
contents: String prompt or list of Content structs
opts: Standard generation options plus:
:turn_limit - Maximum number of tool-calling turns (default: 10)
:tools - List of tool declarations (required for tool calling)
:tool_config - Tool configuration (optional)
Examples
# Register a tool first
{:ok, declaration} = Altar.ADM.new_function_declaration(%{
  name: "get_weather",
  description: "Gets weather for a location",
  parameters: %{
    type: "object",
    properties: %{location: %{type: "string"}},
    required: ["location"]
  }
})
:ok = Gemini.Tools.register(declaration, &MyApp.get_weather/1)

# Use automatic tool execution
{:ok, response} = Gemini.generate_content_with_auto_tools(
  "What's the weather in San Francisco?",
  tools: [declaration],
  model: "gemini-2.0-flash-lite"
)
Returns
{:ok, GenerateContentResponse.t()} - Final text response after all tool calls
{:error, term()} - Error during generation or tool execution
@spec get_model(String.t()) :: {:ok, map()} | {:error, Gemini.Error.t()}
Get information about a specific model.
@spec get_stream_status(String.t()) :: {:ok, map()} | {:error, Gemini.Error.t()}
Get stream status.
@spec list_models(options()) :: {:ok, map()} | {:error, Gemini.Error.t()}
List available models.
See Gemini.options/0
for available options.
Check if a model exists.
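No spec is shown for this function, so the name model_exists?/1 and the {:ok, boolean()} return shape below are assumptions; verify against the generated API reference:

```elixir
# Hypothetical call shape for the model-existence check
case Gemini.model_exists?("gemini-2.0-flash-lite") do
  {:ok, true} -> IO.puts("model available")
  {:ok, false} -> IO.puts("model not found")
  {:error, error} -> IO.puts("lookup failed: #{inspect(error)}")
end
```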
@spec send_message(Gemini.Chat.t(), String.t()) :: {:ok, Gemini.Types.Response.GenerateContentResponse.t(), Gemini.Chat.t()} | {:error, Gemini.Error.t()}
Send a message in a chat session.
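A minimal multi-turn sketch combining chat/1 and send_message/2. Per the spec above, send_message/2 returns an updated chat struct, which is what carries the conversation history into the next turn:

```elixir
{:ok, chat} = Gemini.chat(model: "gemini-2.0-flash-lite")

{:ok, response, chat} = Gemini.send_message(chat, "My name is Ada.")
{:ok, text} = Gemini.extract_text(response)
IO.puts(text)

# Rebinding `chat` threads the accumulated state into the next turn
{:ok, response, _chat} = Gemini.send_message(chat, "What is my name?")
{:ok, text} = Gemini.extract_text(response)
IO.puts(text)
```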
Start the streaming manager (for compatibility).
@spec start_stream(String.t() | [Gemini.Types.Content.t()], options()) :: {:ok, String.t()} | {:error, Gemini.Error.t()}
Start a managed streaming session.
See Gemini.options/0
for available options.
@spec stream_generate(String.t() | [Gemini.Types.Content.t()], options()) :: {:ok, [map()]} | {:error, Gemini.Error.t()}
Generate content with streaming response (synchronous collection).
See Gemini.options/0
for available options.
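Because stream_generate/2 collects all chunks before returning ({:ok, [map()]}), and extract_text/1 accepts raw streaming data, the full text can be reassembled after the call completes:

```elixir
{:ok, chunks} = Gemini.stream_generate("Write a haiku about OTP")

chunks
|> Enum.map(fn chunk ->
  case Gemini.extract_text(chunk) do
    {:ok, text} -> text
    {:error, _reason} -> ""
  end
end)
|> Enum.join()
|> IO.puts()
```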
@spec stream_generate_with_auto_tools( String.t() | [Gemini.Types.Content.t()], options() ) :: {:ok, String.t()} | {:error, Gemini.Error.t()}
Start a streaming session with automatic tool execution.
This function provides streaming support for the automatic tool-calling loop. When the model returns function calls, they are executed automatically and the conversation continues until a final text response is streamed to the subscriber.
Parameters
contents: String prompt or list of Content structs
opts: Standard generation options plus:
:turn_limit - Maximum number of tool-calling turns (default: 10)
:tools - List of tool declarations (required for tool calling)
:tool_config - Tool configuration (optional)
Examples
# Register a tool first
{:ok, declaration} = Altar.ADM.new_function_declaration(%{
  name: "get_weather",
  description: "Gets weather for a location",
  parameters: %{
    type: "object",
    properties: %{location: %{type: "string"}},
    required: ["location"]
  }
})
:ok = Gemini.Tools.register(declaration, &MyApp.get_weather/1)

# Start streaming with automatic tool execution
{:ok, stream_id} = Gemini.stream_generate_with_auto_tools(
  "What's the weather in San Francisco?",
  tools: [declaration],
  model: "gemini-2.0-flash-lite"
)
# Subscribe to receive only the final text response
:ok = Gemini.subscribe_stream(stream_id)
Returns
{:ok, stream_id} - Stream started successfully
{:error, term()} - Error during stream setup
@spec subscribe_stream(String.t()) :: :ok | {:error, Gemini.Error.t()}
Subscribe to streaming events.
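After subscribing, events are delivered as messages to the calling process. The message tags below ({:stream_event, ...}, {:stream_complete, ...}) are illustrative assumptions, not taken from this page; check the streaming manager docs for the actual shapes:

```elixir
{:ok, stream_id} = Gemini.start_stream("Tell me a story")
:ok = Gemini.subscribe_stream(stream_id)

# Receive loop over assumed message shapes; adjust to the manager's real events
loop = fn loop ->
  receive do
    {:stream_event, ^stream_id, event} ->
      with {:ok, text} <- Gemini.extract_text(event), do: IO.write(text)
      loop.(loop)

    {:stream_complete, ^stream_id} ->
      IO.puts("\ndone")
  after
    30_000 -> IO.puts("timed out waiting for stream events")
  end
end

loop.(loop)
```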
@spec text(String.t() | [Gemini.Types.Content.t()], options()) :: {:ok, String.t()} | {:error, Gemini.Error.t()}
Generate text content and return only the text.
See Gemini.options/0
for available options.
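text/2 is the one-call shortcut for generate/2 followed by extract_text/1:

```elixir
{:ok, text} = Gemini.text("Summarize OTP in one sentence")
IO.puts(text)
```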