# `Gemini`
[🔗](https://github.com/nshkrdotcom/gemini_ex/blob/v0.11.0/lib/gemini.ex#L1)

# Gemini Elixir Client

A comprehensive Elixir client for Google's Gemini AI API with dual authentication support,
advanced streaming capabilities, type safety, and built-in telemetry.

## Features

- **🔐 Dual Authentication**: Seamless support for both Gemini API keys and Vertex AI OAuth/Service Accounts
- **⚡ Advanced Streaming**: Production-grade Server-Sent Events streaming with real-time processing
- **🛡️ Type Safety**: Complete type definitions with runtime validation
- **📊 Built-in Telemetry**: Comprehensive observability and metrics out of the box
- **💬 Chat Sessions**: Multi-turn conversation management with state persistence
- **🎭 Multimodal**: Full support for text, image, audio, and video content
- **🚀 Production Ready**: Robust error handling, retry logic, and performance optimizations

## Quick Start

### Installation

Add to your `mix.exs`:

```elixir
def deps do
  [
    {:gemini, "~> 0.11.0"}
  ]
end
```

### Basic Configuration

Configure your API key in `config/runtime.exs`:

```elixir
import Config

config :gemini,
  api_key: System.get_env("GEMINI_API_KEY")
```

Or set the environment variable:

```bash
export GEMINI_API_KEY="your_api_key_here"
```

### Simple Usage

```elixir
# Basic text generation
{:ok, response} = Gemini.generate("Tell me about Elixir programming")
{:ok, text} = Gemini.extract_text(response)
IO.puts(text)

# With options
{:ok, response} = Gemini.generate("Explain quantum computing", [
  model: Gemini.Config.get_model(:flash_lite_latest),
  temperature: 0.7,
  max_output_tokens: 1000
])
```

### Streaming

```elixir
# Start a managed streaming session and subscribe to its events
{:ok, stream_id} = Gemini.start_stream("Write a long story")
:ok = Gemini.subscribe_stream(stream_id)

# Or collect all streamed chunks synchronously
{:ok, chunks} = Gemini.stream_generate("Write a long story")
```

## Authentication

This client supports two authentication methods:

### 1. Gemini API Key (Simple)

Best for development and simple applications:

```bash
# Environment variable (recommended)
export GEMINI_API_KEY="your_api_key"
```

```elixir
# Application config
config :gemini, api_key: "your_api_key"

# Per-request override
Gemini.generate("Hello", api_key: "specific_key")
```

### 2. Vertex AI (Production)

Best for production Google Cloud applications:

```bash
# Service account JSON file
export VERTEX_SERVICE_ACCOUNT="/path/to/service-account.json"
export VERTEX_PROJECT_ID="your-gcp-project"
export VERTEX_LOCATION="us-central1"
```

```elixir
# Application config
config :gemini, :auth,
  type: :vertex_ai,
  credentials: %{
    service_account_key: System.get_env("VERTEX_SERVICE_ACCOUNT"),
    project_id: System.get_env("VERTEX_PROJECT_ID"),
    location: "us-central1"
  }
```

## Error Handling

The client provides detailed error information with recovery suggestions:

```elixir
case Gemini.generate("Hello world") do
  {:ok, response} ->
    {:ok, text} = Gemini.extract_text(response)

  {:error, %Gemini.Error{type: :rate_limit} = error} ->
    IO.puts("Rate limited. Retry after: #{error.retry_after}")

  {:error, %Gemini.Error{type: :authentication} = error} ->
    IO.puts("Auth error: #{error.message}")

  {:error, error} ->
    IO.puts("Unexpected error: #{inspect(error)}")
end
```

## Advanced Features

### Multimodal Content

```elixir
content = [
  %{type: "text", text: "What's in this image?"},
  %{type: "image", source: %{type: "base64", data: base64_image}}
]

{:ok, response} = Gemini.generate(content)
```

### Model Management

```elixir
# List available models
{:ok, models} = Gemini.list_models()

# Get model details
{:ok, model_info} = Gemini.get_model(Gemini.Config.get_model(:flash_lite_latest))

# Count tokens
{:ok, token_count} = Gemini.count_tokens("Your text", model: Gemini.Config.get_model(:flash_lite_latest))
```

This module provides backward-compatible access to the Gemini API while routing
requests through the unified coordinator for maximum flexibility and performance.

# `options`
[🔗](https://github.com/nshkrdotcom/gemini_ex/blob/v0.11.0/lib/gemini.ex#L187)

```elixir
@type options() :: [
  model: String.t(),
  generation_config: Gemini.Types.GenerationConfig.t() | nil,
  safety_settings: [Gemini.Types.SafetySetting.t()],
  system_instruction: Gemini.Types.Content.t() | String.t() | nil,
  tools: [map()],
  tool_config: map() | nil,
  api_key: String.t(),
  auth: :gemini | :vertex_ai,
  temperature: float(),
  max_output_tokens: non_neg_integer(),
  top_p: float(),
  top_k: non_neg_integer()
]
```

Options for content generation and related API calls.

- `:model` - Model name (string, defaults to configured default model)
- `:generation_config` - GenerationConfig struct (`Gemini.Types.GenerationConfig.t()`)
- `:safety_settings` - List of SafetySetting structs (`[Gemini.Types.SafetySetting.t()]`)
- `:system_instruction` - System instruction as Content struct or string (`Gemini.Types.Content.t() | String.t() | nil`)
- `:tools` - List of tool definitions (`[map()]`)
- `:tool_config` - Tool configuration (`map() | nil`)
- `:api_key` - Override API key (string)
- `:auth` - Authentication strategy (`:gemini | :vertex_ai`)
- `:temperature` - Generation temperature (float, 0.0-1.0)
- `:max_output_tokens` - Maximum tokens to generate (non_neg_integer)
- `:top_p` - Top-p sampling parameter (float)
- `:top_k` - Top-k sampling parameter (non_neg_integer)

# `async_batch_embed_contents`
[🔗](https://github.com/nshkrdotcom/gemini_ex/blob/v0.11.0/lib/gemini.ex#L732)

```elixir
@spec async_batch_embed_contents([String.t()], options()) ::
  {:ok, map()} | {:error, Gemini.Error.t()}
```

Submit an asynchronous batch embedding job for production-scale generation.

Processes large batches at a 50% cost saving compared to the interactive API.

See `t:Gemini.options/0` for available options.

## Examples

    {:ok, batch} = Gemini.async_batch_embed_contents(
      ["Text 1", "Text 2", "Text 3"],
      display_name: "My Batch",
      task_type: :retrieval_document
    )

# `await_batch_completion`
[🔗](https://github.com/nshkrdotcom/gemini_ex/blob/v0.11.0/lib/gemini.ex#L776)

```elixir
@spec await_batch_completion(String.t(), options()) :: {:ok, map()} | {:error, term()}
```

Poll until the batch completes, with a configurable polling interval and timeout.

## Examples

    {:ok, completed} = Gemini.await_batch_completion(
      batch.name,
      poll_interval: 10_000,
      timeout: 600_000
    )

# `batch_embed_contents`
[🔗](https://github.com/nshkrdotcom/gemini_ex/blob/v0.11.0/lib/gemini.ex#L712)

```elixir
@spec batch_embed_contents([String.t()], options()) ::
  {:ok, map()} | {:error, Gemini.Error.t()}
```

Generate embeddings for multiple texts in a single batch request.

See `t:Gemini.options/0` for available options.

## Examples

    {:ok, response} = Gemini.batch_embed_contents([
      "What is AI?",
      "How does ML work?"
    ])

# `chat`
[🔗](https://github.com/nshkrdotcom/gemini_ex/blob/v0.11.0/lib/gemini.ex#L330)

```elixir
@spec chat(options()) :: {:ok, Gemini.Chat.t()}
```

Start a new chat session.

See `t:Gemini.options/0` for available options.
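
A chat session pairs with `send_message/2` below. A minimal sketch, grounded in the specs of `chat/1` and `send_message/2` (the model name is illustrative):

```elixir
{:ok, chat} = Gemini.chat(model: "gemini-flash-lite-latest")

{:ok, response, chat} = Gemini.send_message(chat, "Hi, my name is Ada.")
{:ok, text} = Gemini.extract_text(response)
IO.puts(text)

# The updated chat struct carries the conversation history into the next turn
{:ok, response, _chat} = Gemini.send_message(chat, "What's my name?")
```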

# `configure`
[🔗](https://github.com/nshkrdotcom/gemini_ex/blob/v0.11.0/lib/gemini.ex#L218)

```elixir
@spec configure(atom(), map()) :: :ok
```

Configure authentication for the client.

## Examples

    # Gemini API
    Gemini.configure(:gemini, %{api_key: "your_api_key"})

    # Vertex AI
    Gemini.configure(:vertex_ai, %{
      service_account_key: "/path/to/key.json",
      project_id: "your-project",
      location: "us-central1"
    })

# `count_tokens`
[🔗](https://github.com/nshkrdotcom/gemini_ex/blob/v0.11.0/lib/gemini.ex#L320)

```elixir
@spec count_tokens(String.t() | [Gemini.Types.Content.t()], options()) ::
  {:ok, map()} | {:error, Gemini.Error.t()}
```

Count tokens in the given content.

See `t:Gemini.options/0` for available options.
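
## Examples

A minimal sketch; the spec only guarantees `{:ok, map()}`, so the exact key holding the count (e.g. a total-token field) is an assumption, not confirmed by this reference:

```elixir
{:ok, result} = Gemini.count_tokens("How many tokens is this sentence?")
# Inspect the returned map to find the token count field
IO.inspect(result)
```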

# `create_cache`
[🔗](https://github.com/nshkrdotcom/gemini_ex/blob/v0.11.0/lib/gemini.ex#L252)

```elixir
@spec create_cache(
  [Gemini.Types.Content.t()] | [map()] | String.t(),
  keyword()
) :: {:ok, map()} | {:error, term()}
```

Create a cached content resource for reuse across requests.
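
A hypothetical lifecycle sketch tying together the cache functions below (`get_cache/2`, `list_caches/1`, `update_cache/2`, `delete_cache/2`). The `:model` and `:ttl` option names and the `name` field on the returned map are assumptions, not confirmed by this reference:

```elixir
# Assumed option names (:model, :ttl) for illustration only
{:ok, cache} = Gemini.create_cache(
  "A large system prompt or document to reuse across requests...",
  model: "gemini-flash-lite-latest",
  ttl: "3600s"
)

{:ok, _info} = Gemini.get_cache(cache.name)
{:ok, _caches} = Gemini.list_caches()
:ok = Gemini.delete_cache(cache.name)
```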

# `delete_cache`
[🔗](https://github.com/nshkrdotcom/gemini_ex/blob/v0.11.0/lib/gemini.ex#L268)

```elixir
@spec delete_cache(
  String.t(),
  keyword()
) :: :ok | {:error, term()}
```

Delete cached content.

# `embed_content`
[🔗](https://github.com/nshkrdotcom/gemini_ex/blob/v0.11.0/lib/gemini.ex#L695)

```elixir
@spec embed_content(String.t(), options()) ::
  {:ok, map()} | {:error, Gemini.Error.t()}
```

Generate an embedding for the given text content.

See `t:Gemini.options/0` for available options.

## Examples

    {:ok, response} = Gemini.embed_content("What is AI?")
    {:ok, values} = EmbedContentResponse.get_values(response)

# `extract_text`
[🔗](https://github.com/nshkrdotcom/gemini_ex/blob/v0.11.0/lib/gemini.ex#L507)

```elixir
@spec extract_text(Gemini.Types.Response.GenerateContentResponse.t() | map()) ::
  {:ok, String.t()} | {:error, String.t()}
```

Extract text from a GenerateContentResponse or raw streaming data.

This function searches through all parts in the response to find text content,
which is important for Gemini 2.5+ models that may include thought parts before
text parts in the response.
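
A minimal sketch, following the spec's `{:ok, String.t()} | {:error, String.t()}` return:

```elixir
{:ok, response} = Gemini.generate("Say hello in one word")

case Gemini.extract_text(response) do
  {:ok, text} -> IO.puts(text)
  {:error, reason} -> IO.puts("No text in response: #{reason}")
end
```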

# `extract_thought_signatures`
[🔗](https://github.com/nshkrdotcom/gemini_ex/blob/v0.11.0/lib/gemini.ex#L571)

```elixir
@spec extract_thought_signatures(
  Gemini.Types.Response.GenerateContentResponse.t()
  | nil
) :: [
  String.t()
]
```

Extract thought signatures from a GenerateContentResponse.

Gemini 3 models return `thought_signature` fields on parts that must be
echoed back in subsequent turns to maintain reasoning context.

## Parameters
- `response`: GenerateContentResponse struct

## Returns
- List of thought signature strings found in the response

## Examples

    {:ok, response} = Gemini.generate("Complex question", model: "gemini-3.1-pro-preview")
    signatures = Gemini.extract_thought_signatures(response)
    # => ["sig_abc123", "sig_def456"]

# `generate`
[🔗](https://github.com/nshkrdotcom/gemini_ex/blob/v0.11.0/lib/gemini.ex#L230)

```elixir
@spec generate(String.t() | [Gemini.Types.Content.t()], options()) ::
  {:ok, Gemini.Types.Response.GenerateContentResponse.t()}
  | {:error, Gemini.Error.t()}
```

Generate content using the configured authentication.

See `t:Gemini.options/0` for available options.

# `generate_content_with_auto_tools`
[🔗](https://github.com/nshkrdotcom/gemini_ex/blob/v0.11.0/lib/gemini.ex#L481)

```elixir
@spec generate_content_with_auto_tools(
  String.t() | [Gemini.Types.Content.t()],
  options()
) ::
  {:ok, Gemini.Types.Response.GenerateContentResponse.t()}
  | {:error, Gemini.Error.t()}
```

Generate content with automatic tool execution.

This function provides a seamless, Python-SDK-like experience by automatically
handling the tool-calling loop. When the model returns function calls, they are
executed automatically and the conversation continues until a final text response
is received.

## Parameters
- `contents`: String prompt or list of Content structs
- `opts`: Standard generation options plus:
  - `:turn_limit` - Maximum number of tool-calling turns (default: 10)
  - `:tools` - List of tool declarations (required for tool calling)
  - `:tool_config` - Tool configuration (optional)

## Examples

    # Register a tool first
    {:ok, declaration} = Altar.ADM.new_function_declaration(%{
      name: "get_weather",
      description: "Gets weather for a location",
      parameters: %{
        type: "object",
        properties: %{location: %{type: "string"}},
        required: ["location"]
      }
    })
    :ok = Gemini.Tools.register(declaration, &MyApp.get_weather/1)

    # Use automatic tool execution
    {:ok, response} = Gemini.generate_content_with_auto_tools(
      "What's the weather in San Francisco?",
      tools: [declaration],
      model: "gemini-flash-lite-latest"
    )

## Returns
- `{:ok, GenerateContentResponse.t()}`: Final text response after all tool calls
- `{:error, term()}`: Error during generation or tool execution

# `get_batch_embeddings`
[🔗](https://github.com/nshkrdotcom/gemini_ex/blob/v0.11.0/lib/gemini.ex#L760)

```elixir
@spec get_batch_embeddings(map()) :: {:ok, [map()]} | {:error, String.t()}
```

Retrieve embeddings from a completed batch job.

## Examples

    {:ok, batch} = Gemini.get_batch_status(batch_id)
    if batch.state == :completed do
      {:ok, embeddings} = Gemini.get_batch_embeddings(batch)
    end

# `get_batch_status`
[🔗](https://github.com/nshkrdotcom/gemini_ex/blob/v0.11.0/lib/gemini.ex#L745)

```elixir
@spec get_batch_status(String.t(), options()) ::
  {:ok, map()} | {:error, Gemini.Error.t()}
```

Get the current status of an async batch embedding job.

## Examples

    {:ok, batch} = Gemini.get_batch_status("batches/abc123")
    IO.puts("State: #{batch.state}")

# `get_cache`
[🔗](https://github.com/nshkrdotcom/gemini_ex/blob/v0.11.0/lib/gemini.ex#L260)

```elixir
@spec get_cache(
  String.t(),
  keyword()
) :: {:ok, map()} | {:error, term()}
```

Get a cached content by name.

# `get_model`
[🔗](https://github.com/nshkrdotcom/gemini_ex/blob/v0.11.0/lib/gemini.ex#L284)

```elixir
@spec get_model(String.t()) :: {:ok, map()} | {:error, Gemini.Error.t()}
```

Get information about a specific model.

# `get_stream_status`
[🔗](https://github.com/nshkrdotcom/gemini_ex/blob/v0.11.0/lib/gemini.ex#L435)

```elixir
@spec get_stream_status(String.t()) :: {:ok, map()} | {:error, Gemini.Error.t()}
```

Get stream status.

# `list_caches`
[🔗](https://github.com/nshkrdotcom/gemini_ex/blob/v0.11.0/lib/gemini.ex#L256)

```elixir
@spec list_caches(keyword()) :: {:ok, map()} | {:error, term()}
```

List cached contents.

# `list_models`
[🔗](https://github.com/nshkrdotcom/gemini_ex/blob/v0.11.0/lib/gemini.ex#L276)

```elixir
@spec list_models(options()) :: {:ok, map()} | {:error, Gemini.Error.t()}
```

List available models.

See `t:Gemini.options/0` for available options.

# `model_exists?`
[🔗](https://github.com/nshkrdotcom/gemini_ex/blob/v0.11.0/lib/gemini.ex#L675)

```elixir
@spec model_exists?(String.t()) :: {:ok, boolean()}
```

Check if a model exists.
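
A minimal sketch; note that per the spec this returns `{:ok, boolean()}` rather than an `{:error, _}` tuple:

```elixir
{:ok, exists?} = Gemini.model_exists?("gemini-flash-lite-latest")
if exists?, do: IO.puts("Model available")
```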

# `predict`
[🔗](https://github.com/nshkrdotcom/gemini_ex/blob/v0.11.0/lib/gemini.ex#L297)

```elixir
@spec predict(String.t(), list(), options()) ::
  {:ok, map()} | {:error, Gemini.Error.t()}
```

Perform a prediction request on a model.

Generic prediction endpoint used by specialized APIs (Imagen, Veo).
For most use cases, prefer `Gemini.APIs.Images` or `Gemini.APIs.Videos`.

See `t:Gemini.options/0` for available options.

# `predict_long_running`
[🔗](https://github.com/nshkrdotcom/gemini_ex/blob/v0.11.0/lib/gemini.ex#L310)

```elixir
@spec predict_long_running(String.t(), list(), options()) ::
  {:ok, map()} | {:error, Gemini.Error.t()}
```

Perform a long-running prediction request on a model.

Returns an Operation for asynchronous processing. Used for tasks
like video generation that take significant time.

See `t:Gemini.options/0` for available options.

# `send_message`
[🔗](https://github.com/nshkrdotcom/gemini_ex/blob/v0.11.0/lib/gemini.ex#L339)

```elixir
@spec send_message(Gemini.Chat.t(), String.t()) ::
  {:ok, Gemini.Types.Response.GenerateContentResponse.t(), Gemini.Chat.t()}
  | {:error, Gemini.Error.t()}
```

Send a message in a chat session.

# `start_link`
[🔗](https://github.com/nshkrdotcom/gemini_ex/blob/v0.11.0/lib/gemini.ex#L804)

```elixir
@spec start_link() :: {:ok, pid()} | {:error, term()}
```

Start the streaming manager (for compatibility).

# `start_stream`
[🔗](https://github.com/nshkrdotcom/gemini_ex/blob/v0.11.0/lib/gemini.ex#L369)

```elixir
@spec start_stream(String.t() | [Gemini.Types.Content.t()], options()) ::
  {:ok, String.t()} | {:error, Gemini.Error.t()}
```

Start a managed streaming session.

See `t:Gemini.options/0` for available options.
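
A minimal sketch combining `start_stream/2` with `subscribe_stream/1` below. The shape of the messages delivered to the subscriber is not specified in this reference, so the receive clause matches loosely:

```elixir
{:ok, stream_id} = Gemini.start_stream("Write a haiku about OTP")
:ok = Gemini.subscribe_stream(stream_id)

# Payload shape is an assumption-free loose match; inspect to discover it
receive do
  msg -> IO.inspect(msg, label: "stream event")
after
  30_000 -> IO.puts("timed out waiting for stream events")
end
```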

# `stream_generate`
[🔗](https://github.com/nshkrdotcom/gemini_ex/blob/v0.11.0/lib/gemini.ex#L787)

```elixir
@spec stream_generate(String.t() | [Gemini.Types.Content.t()], options()) ::
  {:ok, [map()]} | {:error, Gemini.Error.t()}
```

Generate content with streaming response (synchronous collection).

See `t:Gemini.options/0` for available options.

# `stream_generate_with_auto_tools`
[🔗](https://github.com/nshkrdotcom/gemini_ex/blob/v0.11.0/lib/gemini.ex#L417)

```elixir
@spec stream_generate_with_auto_tools(
  String.t() | [Gemini.Types.Content.t()],
  options()
) ::
  {:ok, String.t()} | {:error, Gemini.Error.t()}
```

Start a streaming session with automatic tool execution.

This function provides streaming support for the automatic tool-calling loop.
When the model returns function calls, they are executed automatically and the
conversation continues until a final text response is streamed to the subscriber.

## Parameters
- `contents`: String prompt or list of Content structs
- `opts`: Standard generation options plus:
  - `:turn_limit` - Maximum number of tool-calling turns (default: 10)
  - `:tools` - List of tool declarations (required for tool calling)
  - `:tool_config` - Tool configuration (optional)

## Examples

    # Register a tool first
    {:ok, declaration} = Altar.ADM.new_function_declaration(%{
      name: "get_weather",
      description: "Gets weather for a location",
      parameters: %{
        type: "object",
        properties: %{location: %{type: "string"}},
        required: ["location"]
      }
    })
    :ok = Gemini.Tools.register(declaration, &MyApp.get_weather/1)

    # Start streaming with automatic tool execution
    {:ok, stream_id} = Gemini.stream_generate_with_auto_tools(
      "What's the weather in San Francisco?",
      tools: [declaration],
      model: "gemini-flash-lite-latest"
    )

    # Subscribe to receive only the final text response
    :ok = Gemini.subscribe_stream(stream_id)

## Returns
- `{:ok, stream_id}`: Stream started successfully
- `{:error, term()}`: Error during stream setup

# `subscribe_stream`
[🔗](https://github.com/nshkrdotcom/gemini_ex/blob/v0.11.0/lib/gemini.ex#L427)

```elixir
@spec subscribe_stream(String.t()) :: :ok | {:error, Gemini.Error.t()}
```

Subscribe to streaming events.

# `text`
[🔗](https://github.com/nshkrdotcom/gemini_ex/blob/v0.11.0/lib/gemini.ex#L240)

```elixir
@spec text(String.t() | [Gemini.Types.Content.t()], options()) ::
  {:ok, String.t()} | {:error, Gemini.Error.t()}
```

Generate text content and return only the text.

See `t:Gemini.options/0` for available options.
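
A one-line convenience over `generate/2` plus `extract_text/1`, per the spec:

```elixir
{:ok, text} = Gemini.text("Summarize OTP supervision in one sentence")
IO.puts(text)
```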

# `update_cache`
[🔗](https://github.com/nshkrdotcom/gemini_ex/blob/v0.11.0/lib/gemini.ex#L264)

```elixir
@spec update_cache(
  String.t(),
  keyword()
) :: {:ok, map()} | {:error, term()}
```

Update cached content TTL/expiry.

---

*Consult [api-reference.md](api-reference.md) for complete listing*
