Gemini.APIs.Coordinator (GeminiEx v0.3.0)
Coordinates API calls across different authentication strategies and endpoints.
Provides a unified interface that routes requests to either the Gemini API or Vertex AI based on configuration.
This module acts as the main entry point for all Gemini API operations, automatically handling authentication strategy selection and request routing.
Features
- Unified API for content generation across auth strategies
- Automatic auth strategy selection based on configuration
- Per-request auth strategy override capability
- Consistent error handling and response format
- Support for both streaming and non-streaming operations
- Model listing and token counting functionality
Usage
# Use default auth strategy
{:ok, response} = Coordinator.generate_content("Hello world")
# Override auth strategy for specific request
{:ok, response} = Coordinator.generate_content("Hello world", auth: :vertex_ai)
# Start streaming with specific auth
{:ok, stream_id} = Coordinator.stream_generate_content("Tell me a story", auth: :gemini)
See Gemini.options/0 in Gemini for the canonical list of options.
Summary
Functions
Generate embeddings for multiple text inputs in a single batch request.
Count tokens in the given input.
Generate an embedding for the given text content.
Extract text content from a GenerateContentResponse.
Generate content using the specified model and input.
Get information about a specific model.
List available models for the specified authentication strategy.
Stop a streaming content generation.
Stream content generation with real-time response chunks.
Get the status of a streaming content generation.
Subscribe to a streaming content generation.
Unsubscribe from a streaming content generation.
Functions
@spec batch_embed_contents([String.t()], Gemini.options()) :: api_result(Gemini.Types.Response.BatchEmbedContentsResponse.t())
Generate embeddings for multiple text inputs in a single batch request.
More efficient than individual requests when embedding multiple texts.
See Gemini.options/0 for available options.
Parameters
- `texts`: List of text strings to embed
- `opts`: Options including model, auth strategy, and embedding-specific parameters
Options
Same as embed_content/2, applied to all texts in the batch.
Examples
# Batch embedding
{:ok, response} = Coordinator.batch_embed_contents([
"What is AI?",
"How does machine learning work?",
"Explain neural networks"
])
{:ok, all_values} = BatchEmbedContentsResponse.get_all_values(response)
# With task type
{:ok, response} = Coordinator.batch_embed_contents(
["Doc 1 content", "Doc 2 content", "Doc 3 content"],
task_type: :retrieval_document,
output_dimensionality: 256
)
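Batch results come back in the same order as the input texts, so they can be paired with their vectors by position. A minimal sketch (the vectors below are illustrative stand-ins for real model output, not actual embeddings):

```elixir
texts = ["What is AI?", "How does machine learning work?"]

# Stand-in for BatchEmbedContentsResponse.get_all_values(response):
vectors = [[0.12, -0.03, 0.88], [0.25, 0.41, -0.10]]

# Pair each text with its embedding, preserving input order.
embeddings = texts |> Enum.zip(vectors) |> Map.new()
```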
@spec count_tokens(String.t() | Gemini.Types.Request.GenerateContentRequest.t(), Gemini.options()) :: api_result(%{total_tokens: integer()})
Count tokens in the given input.
See Gemini.options/0 for available options.
Parameters
- `input`: String or GenerateContentRequest to count tokens for
- `opts`: Options including model and auth strategy
Options
- `:model`: Model to use for token counting (defaults to the configured default model)
- `:auth`: Authentication strategy (`:gemini` or `:vertex_ai`)
Examples
{:ok, count} = Coordinator.count_tokens("Hello world")
{:ok, count} = Coordinator.count_tokens("Complex text", model: "gemini-2.5-pro", auth: :vertex_ai)
@spec embed_content(String.t(), Gemini.options()) :: api_result(Gemini.Types.Response.EmbedContentResponse.t())
Generate an embedding for the given text content.
Uses the Gemini embedding models to convert text into a numerical vector representation that can be used for similarity comparison, clustering, and retrieval tasks.
See Gemini.options/0 for available options.
Parameters
- `text`: String content to embed
- `opts`: Options including model, auth strategy, and embedding-specific parameters
Options
- `:model`: Embedding model to use (default: "text-embedding-004")
- `:auth`: Authentication strategy (`:gemini` or `:vertex_ai`)
- `:task_type`: Optional task type for optimized embeddings:
  - `:retrieval_query` - Text is a search query
  - `:retrieval_document` - Text is a document being searched
  - `:semantic_similarity` - For semantic similarity tasks
  - `:classification` - For classification tasks
  - `:clustering` - For clustering tasks
  - `:question_answering` - For Q&A tasks
  - `:fact_verification` - For fact verification
  - `:code_retrieval_query` - For code retrieval
- `:title`: Optional title (only for the `:retrieval_document` task type)
- `:output_dimensionality`: Optional dimension reduction for newer models
Examples
# Simple embedding
{:ok, response} = Coordinator.embed_content("What is the meaning of life?")
{:ok, values} = EmbedContentResponse.get_values(response)
# With task type for retrieval
{:ok, response} = Coordinator.embed_content(
"This is a document about AI",
task_type: :retrieval_document,
title: "AI Overview"
)
# With specific model and dimensionality
{:ok, response} = Coordinator.embed_content(
"Query text",
model: "text-embedding-004",
task_type: :retrieval_query,
output_dimensionality: 256
)
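Since EmbedContentResponse.get_values/1 returns a plain list of floats, downstream similarity math is ordinary Elixir. A minimal cosine-similarity sketch (the `Similarity` module and the vectors are illustrative, not part of the library):

```elixir
defmodule Similarity do
  @doc "Cosine similarity between two equal-length float vectors."
  def cosine(a, b) do
    dot = a |> Enum.zip(b) |> Enum.map(fn {x, y} -> x * y end) |> Enum.sum()
    norm = fn v -> v |> Enum.map(&(&1 * &1)) |> Enum.sum() |> :math.sqrt() end
    dot / (norm.(a) * norm.(b))
  end
end

# Identical vectors score 1.0; orthogonal vectors score 0.0.
Similarity.cosine([1.0, 0.0], [1.0, 0.0])
```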
@spec extract_text(Gemini.Types.Response.GenerateContentResponse.t()) :: {:ok, String.t()} | {:error, term()}
Extract text content from a GenerateContentResponse.
Examples
{:ok, response} = Coordinator.generate_content("Hello")
{:ok, text} = Coordinator.extract_text(response)
@spec generate_content(String.t() | [Gemini.Types.Content.t()] | Gemini.Types.Request.GenerateContentRequest.t(), Gemini.options()) :: api_result(Gemini.Types.Response.GenerateContentResponse.t())
Generate content using the specified model and input.
See Gemini.options/0 for available options.
Parameters
- `input`: String prompt or GenerateContentRequest struct
- `opts`: Options including model, auth strategy, and generation config
Examples
# Simple text generation
{:ok, response} = Coordinator.generate_content("What is AI?")
# With specific model and auth
{:ok, response} = Coordinator.generate_content(
"Explain quantum computing",
model: Gemini.Config.get_model(:flash_2_0_lite),
auth: :vertex_ai,
temperature: 0.7
)
# Using request struct
request = %GenerateContentRequest{...}
{:ok, response} = Coordinator.generate_content(request)
@spec get_model(String.t(), Gemini.options()) :: api_result(map())
Get information about a specific model.
See Gemini.options/0 for available options.
Parameters
- `model_name`: Name of the model to retrieve
- `opts`: Options including auth strategy
Examples
{:ok, model} = Coordinator.get_model(Gemini.Config.get_model(:flash_2_0_lite))
{:ok, model} = Coordinator.get_model("gemini-2.5-pro", auth: :vertex_ai)
@spec list_models(Gemini.options()) :: api_result(Gemini.Types.Response.ListModelsResponse.t())
List available models for the specified authentication strategy.
See Gemini.options/0 for available options.
Parameters
- `opts`: Options including auth strategy and pagination
Options
- `:auth`: Authentication strategy (`:gemini` or `:vertex_ai`)
- `:page_size`: Number of models per page
- `:page_token`: Pagination token for the next page
Examples
# List models with default auth
{:ok, models_response} = Coordinator.list_models()
# List models with specific auth strategy
{:ok, models_response} = Coordinator.list_models(auth: :vertex_ai)
# With pagination
{:ok, models_response} = Coordinator.list_models(
auth: :gemini,
page_size: 50,
page_token: "next_page_token"
)
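The pagination options compose naturally into a loop that collects every page. A sketch with the fetch function injected so it can stand in for list_models/1; the `:models` and `:next_page_token` field names are assumptions about the response shape, so adjust the match to the actual ListModelsResponse struct:

```elixir
defmodule ModelPager do
  # `fetch` is any 1-arity function taking a keyword list and returning
  # {:ok, %{models: models, next_page_token: token_or_nil}} | {:error, term}.
  def all_models(fetch, opts \\ []) do
    case fetch.(opts) do
      {:ok, %{models: models, next_page_token: nil}} ->
        {:ok, models}

      {:ok, %{models: models, next_page_token: token}} ->
        # Recurse with the token to fetch the next page, then concatenate.
        with {:ok, rest} <- all_models(fetch, Keyword.put(opts, :page_token, token)) do
          {:ok, models ++ rest}
        end

      {:error, _} = error ->
        error
    end
  end
end
```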
Stop a streaming content generation.
@spec stream_generate_content(String.t() | Gemini.Types.Request.GenerateContentRequest.t(), Gemini.options()) :: api_result(String.t())
Stream content generation with real-time response chunks.
See Gemini.options/0 for available options.
Parameters
- `input`: String prompt or GenerateContentRequest struct
- `opts`: Options including model, auth strategy, and generation config
Returns
- `{:ok, stream_id}`: Stream started successfully
- `{:error, reason}`: Failed to start stream
After starting the stream, subscribe to receive events:
{:ok, stream_id} = Coordinator.stream_generate_content("Tell me a story")
:ok = Coordinator.subscribe_stream(stream_id)
# Handle incoming messages
receive do
{:stream_event, ^stream_id, event} ->
IO.inspect(event, label: "Stream Event")
{:stream_complete, ^stream_id} ->
IO.puts("Stream completed")
{:stream_error, ^stream_id, stream_error} ->
IO.puts("Stream error: #{inspect(stream_error)}")
end
Examples
# Basic streaming
{:ok, stream_id} = Coordinator.stream_generate_content("Write a poem")
# With specific configuration
{:ok, stream_id} = Coordinator.stream_generate_content(
"Explain machine learning",
model: Gemini.Config.get_model(:flash_2_0_lite),
auth: :gemini,
temperature: 0.8,
max_output_tokens: 1000
)
Get the status of a streaming content generation.
Subscribe to a streaming content generation.
Parameters
- `stream_id`: ID of the stream to subscribe to
- `subscriber_pid`: Process to receive stream events (defaults to the current process)
Examples
{:ok, stream_id} = Coordinator.stream_generate_content("Hello")
:ok = Coordinator.subscribe_stream(stream_id)
# Subscribe another process
:ok = Coordinator.subscribe_stream(stream_id, target_pid)
Unsubscribe from a streaming content generation.