LlmComposer.Providers.Google (llm_composer v0.13.0)
Provider implementation for Google
This provider supports Google's Generative AI API and the Vertex AI platform, including function calling, streaming responses, structured outputs, and automatic function execution.
Dependencies
For Google AI API
No additional dependencies required.
For Vertex AI
- Goth: Required for OAuth 2.0 authentication with Google Cloud Platform
Add to your mix.exs:

{:goth, "~> 1.3"}
Provider Options
The third argument of run/3 accepts the following options in provider_opts:
Required Options
- :model - The Gemini model to use (e.g., "gemini-2.5-flash")
Authentication Options
- :api_key - Google API key (overrides application config; Google AI API only)
- :vertex - Vertex AI configuration map (see Vertex AI section below)
- :goth - Name of the Goth process for Vertex AI authentication (overrides application config)
Request Options
- :stream_response - Boolean to enable streaming responses (default: false)
- :request_params - Map of additional request parameters to merge with the request body
- :functions - List of function definitions for tool calling
Response Format Options
- :response_schema - Map defining a structured output schema for JSON responses
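To illustrate, the options above can be combined in one provider_opts keyword list. This is a sketch, not taken from the library's docs: the schema below assumes Gemini's OpenAPI-style response schema format, and the city/population fields are invented for the example:

```elixir
# Hypothetical provider_opts combining the options listed above.
# The schema shape assumes Gemini's OpenAPI-style responseSchema;
# verify the exact format against llm_composer's documentation.
opts = [
  model: "gemini-2.5-flash",
  stream_response: false,
  response_schema: %{
    type: "object",
    properties: %{
      city: %{type: "string"},
      population: %{type: "integer"}
    },
    required: ["city", "population"]
  }
]
```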
Vertex AI Configuration
To use Vertex AI instead of the standard Google AI API, provide a :vertex map with:
Required Vertex Fields
- :project_id - Your Google Cloud project ID
- :location_id - The location/region for your Vertex AI endpoint (e.g., "us-central1", "global")
Optional Vertex Fields
- :api_endpoint - Custom API endpoint (overrides the default regional endpoint)
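For instance, a Vertex configuration overriding the default regional endpoint might look like the following sketch (the project ID and endpoint hostname are placeholders, not real values):

```elixir
# Sketch of a :vertex map using the optional :api_endpoint override.
# All identifiers below are placeholders for illustration.
opts = [
  model: "gemini-2.5-flash",
  goth: MyApp.Goth,                # name of your running Goth process
  vertex: %{
    project_id: "my-gcp-project",  # placeholder GCP project ID
    location_id: "us-central1",
    # Optional: route requests through a custom endpoint instead of
    # the default regional one.
    api_endpoint: "my-private-endpoint.example.com"
  }
]
```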
Examples
Basic Google AI API Usage
opts = [
model: "gemini-2.5-flash",
api_key: "your-api-key"
]
Vertex AI Usage with Goth Setup
First, set up Goth in your application. This example shows manual Goth setup:
# Read service account credentials
google_json = File.read!(Path.expand("~/path/to/service-account.json"))
credentials = Jason.decode!(google_json)
source = {:service_account, credentials}
# Configure an HTTP client for Goth (optional; since llm_composer already uses Tesla, you can reuse it here)
http_client = fn opts ->
client = Tesla.client([{Tesla.Middleware.Retry, delay: 500, max_retries: 2}])
Tesla.request(client, opts)
end
# Start Goth process
{:ok, _pid} = Goth.start_link([
source: source,
http_client: http_client,
name: MyApp.Goth
])
# Configure LlmComposer to use your Goth process
Application.put_env(:llm_composer, :google, goth: MyApp.Goth)
# Provider options
opts = [
model: "gemini-2.5-flash",
goth: MyApp.Goth,
vertex: %{
project_id: "my-gcp-project",
location_id: "global"
}
]
Vertex AI with Supervision Tree
For production applications, add Goth to your supervision tree:
# In your application.ex
def start(_type, _args) do
google_json = File.read!(Application.get_env(:my_app, :google_credentials_path))
credentials = Jason.decode!(google_json)
children = [
# Other children...
{Goth, name: MyApp.Goth, source: {:service_account, credentials}},
]
opts = [strategy: :one_for_one, name: MyApp.Supervisor]
Supervisor.start_link(children, opts)
end
# Configure in config.exs
config :llm_composer, :google, goth: MyApp.Goth
Google AI API
Set your API key in application config:
config :llm_composer, :google, api_key: "your-google-ai-api-key"
Or pass it directly in options:
opts = [model: "gemini-pro", api_key: "your-key"]
Vertex AI with Goth
Vertex AI requires OAuth 2.0 authentication handled by Goth. You need:
- Service Account: Create a service account in Google Cloud Console with appropriate permissions
- Credentials File: Download the JSON credentials file for your service account
- Goth Process: Start a Goth process with your service account credentials
- Configuration: Configure LlmComposer to use your Goth process name
Service Account Permissions
Your service account needs the following IAM roles:
- Vertex AI User or Vertex AI Service Agent
- Service Account Token Creator (if using impersonation)
Goth Configuration Options
Configure the Goth process name in your application config:
config :llm_composer, :google, goth: MyApp.Goth
Or pass it directly in provider options:
opts = [
model: "gemini-pro",
goth: MyApp.Goth,
vertex: %{project_id: "my-project", location_id: "global"}
]
Error Handling
The provider returns:
- {:ok, response} on successful requests
- {:error, :model_not_provided} when the model is not specified
- {:error, reason} for API errors, network issues, or Goth authentication failures
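These shapes can be handled with an ordinary case expression. A minimal sketch, assuming `result` already holds the return value of a provider call (the handler and log message are ours, not part of the library):

```elixir
require Logger

# `result` is assumed to be the return value of a provider call.
case result do
  {:ok, response} ->
    # Use the successful response; handle_response/1 is a
    # hypothetical function in your own application.
    handle_response(response)

  {:error, :model_not_provided} ->
    # The :model option was missing from provider_opts.
    raise ArgumentError, ~s(set the :model option, e.g. "gemini-2.5-flash")

  {:error, reason} ->
    # API error, network issue, or Goth authentication failure.
    Logger.error("Google provider request failed: #{inspect(reason)}")
    {:error, reason}
end
```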
Supported Features
- ✅ Basic chat completion
- ✅ Streaming responses
- ✅ Function/tool calling
- ✅ Auto function execution
- ✅ Structured outputs (JSON schema)
- ✅ System instructions
- ✅ Vertex AI platform support
Notes
- When using Vertex AI, the base URL construction differs from standard Google AI API
- Streaming is not compatible with Tesla retries
- Function declarations are wrapped in Google's expected format automatically
- Request parameters in :request_params are merged with the final request body
- Goth handles token refresh automatically for Vertex AI authentication
- Ensure your service account has proper permissions for Vertex AI access
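To illustrate the :functions option mentioned above, here is a hedged sketch of a function definition as a plain map. The exact struct llm_composer expects is not shown on this page, so treat the shape below (name, description, and JSON-schema-style parameters, a common convention for tool calling) as an assumption to verify against the library's docs:

```elixir
# Assumed shape for a :functions entry; verify against llm_composer's
# documentation. The function name and fields are invented examples.
weather_fn = %{
  name: "get_weather",
  description: "Look up the current weather for a city",
  parameters: %{
    type: "object",
    properties: %{
      city: %{type: "string", description: "City name"}
    },
    required: ["city"]
  }
}

opts = [
  model: "gemini-2.5-flash",
  functions: [weather_fn]
]
```

Per the note above, the provider wraps these declarations in Google's expected format automatically.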
Reference: https://ai.google.dev/api/generate-content