ReqLLM.Providers.GoogleVertex (ReqLLM v1.0.0)
Google Vertex AI provider implementation.
Supports Vertex AI's unified API for accessing multiple AI models including:
- Anthropic Claude models (claude-haiku-4-5, claude-sonnet-4-5, claude-opus-4-1)
- Google Gemini models (gemini-2.0-flash, gemini-2.5-flash, gemini-2.5-pro)
- And more as Google adds them
Authentication
Vertex AI uses Google Cloud OAuth2 authentication with service accounts.
Service Account (Recommended)
# Option 1: Environment variables
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"
export GOOGLE_CLOUD_PROJECT="your-project-id"
export GOOGLE_CLOUD_REGION="us-central1"
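With these variables set, the provider can resolve credentials from the environment, so a call needs only the model spec and prompt (a minimal sketch, reusing a model from the list above):

{:ok, response} =
  ReqLLM.generate_text("google-vertex:gemini-2.0-flash", "Hello")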
# Option 2: Pass directly in options
ReqLLM.generate_text(
"google-vertex:claude-haiku-4-5@20251001",
"Hello",
provider_options: [
service_account_json: "/path/to/service-account.json",
project_id: "your-project-id",
region: "us-central1"
]
)
Examples
# Simple text generation with Claude on Vertex
{:ok, response} = ReqLLM.generate_text(
"google-vertex:claude-haiku-4-5@20251001",
"Hello!"
)
# Streaming
{:ok, response} = ReqLLM.stream_text(
"google-vertex:claude-haiku-4-5@20251001",
"Tell me a story"
)
Extending for New Models
To add support for a new model family:
- Add the model family to @model_families
- Implement the formatter module (e.g., ReqLLM.Providers.GoogleVertex.Gemini)
- The formatter needs these callbacks (sketched below):
  - format_request/3 - Convert ReqLLM context to provider format
  - parse_response/2 - Convert provider response to ReqLLM format
  - extract_usage/2 - Extract usage information
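A minimal formatter skeleton is sketched below. Only the callback names and arities come from the list above; the module name, argument names, and return shapes are illustrative assumptions rather than the library's actual contract.

defmodule ReqLLM.Providers.GoogleVertex.SomeFamily do
  @moduledoc false
  # Illustrative sketch only: argument names and return shapes are assumptions,
  # not the library's actual contract.

  # Convert the ReqLLM context into the provider's request format.
  def format_request(context, model, opts) do
    %{"model" => model, "messages" => context, "options" => Map.new(opts)}
  end

  # Convert the provider's response body into ReqLLM's response shape.
  def parse_response(body, _model) do
    {:ok, body}
  end

  # Pull usage/token accounting out of the response body.
  def extract_usage(body, _model) do
    {:ok, Map.get(body, "usage", %{})}
  end
end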
Functions

attach/3
Default implementation of attach/3. Sets up Bearer token authentication and standard pipeline steps.

attach_stream/4
Default implementation of attach_stream/4. Builds complete streaming requests using OpenAI-compatible format.

decode_response/1
Default implementation of decode_response/1. Handles success/error responses with standard ReqLLM.Response creation.

decode_stream_event/2
Default implementation of decode_stream_event/2. Decodes SSE events using OpenAI-compatible format.

default_env_key/0
Callback implementation for ReqLLM.Provider.default_env_key/0.

encode_body/1
Default implementation of encode_body/1. Encodes request body using OpenAI-compatible format for chat and embedding operations.

extract_usage/2
Default implementation of extract_usage/2. Extracts usage data from standard usage field in response body.

prepare_request/4
Default implementation of prepare_request/4. Handles :chat, :object, and :embedding operations using OpenAI-compatible patterns.

translate_options/3
Default implementation of translate_options/3. Pass-through implementation that returns options unchanged.