ReqLLM.Providers.GoogleVertex.Anthropic (ReqLLM v1.0.0)
Anthropic model family support for Google Vertex AI.
Handles Claude models (Claude 3.5 Haiku, Claude 3.5 Sonnet, Claude Opus, etc.) on Google Vertex AI.
This module acts as a thin adapter between Vertex AI's GCP infrastructure and Anthropic's native message format. It delegates to the native Anthropic modules for all format conversion.
Prompt Caching Support
Full Anthropic prompt caching is supported. Enable it with the anthropic_prompt_cache: true option.
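As a sketch of enabling prompt caching through this adapter (the model string and prompt are illustrative; check your Vertex AI project for the exact Claude model identifiers available to you):

```elixir
# Illustrative model string -- actual Vertex AI model names may differ.
model = "google_vertex:claude-3-5-sonnet"

# anthropic_prompt_cache: true asks the adapter to apply Anthropic's
# prompt-caching behavior, so large repeated prefixes (e.g. a long
# system prompt) can be cached server-side across requests.
{:ok, response} =
  ReqLLM.generate_text(model, "Summarize the key obligations in this contract.",
    anthropic_prompt_cache: true
  )
```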
Extended Thinking Support
Extended thinking (reasoning) is available on models that support it.
Enable it with the reasoning_effort: "low" | "medium" | "high" option.
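A minimal usage sketch, assuming a reasoning-capable Claude model (the model string below is illustrative):

```elixir
# reasoning_effort is translated by this adapter into Anthropic's
# native thinking configuration before the request is sent.
{:ok, response} =
  ReqLLM.generate_text(
    "google_vertex:claude-3-7-sonnet",
    "Prove that the square root of 2 is irrational.",
    reasoning_effort: "high"
  )
```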
Summary
Functions
Extracts usage metadata from the response body.
Formats a ReqLLM context into Anthropic request format for Vertex AI.
Cleans up thinking config if incompatible with other options.
Parses Anthropic response from Vertex AI into ReqLLM format.
Pre-validates and transforms options for Claude models on Vertex AI. Handles reasoning_effort/reasoning_token_budget translation to thinking config.
Functions
Extracts usage metadata from the response body.
Delegates to the native Anthropic provider.
Formats a ReqLLM context into Anthropic request format for Vertex AI.
Delegates to the native Anthropic.Context module. Vertex AI uses the native Anthropic Messages API format directly.
For :object operations, creates a synthetic "structured_output" tool to leverage Claude's tool-calling for structured JSON output.
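To illustrate the :object path (the schema shape and model string here are illustrative assumptions, not taken from this module's docs):

```elixir
# A simple keyword schema describing the desired JSON object.
schema = [
  name: [type: :string, required: true],
  age: [type: :pos_integer, required: true]
]

{:ok, response} =
  ReqLLM.generate_object(
    "google_vertex:claude-3-5-sonnet",
    "Extract the person from: 'Jane is 42 years old.'",
    schema
  )

# Internally, the adapter registers a synthetic "structured_output" tool,
# steers Claude into calling it, and returns the tool-call arguments as
# the structured JSON object.
```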
Cleans up thinking config if incompatible with other options.
Delegates to shared PlatformReasoning module.
See ReqLLM.Providers.Anthropic.PlatformReasoning.maybe_clean_thinking_after_translation/2.
Parses Anthropic response from Vertex AI into ReqLLM format.
Delegates to the native Anthropic.Response module.
For :object operations, extracts the structured output from the tool call.
Pre-validates and transforms options for Claude models on Vertex AI. Handles translation of reasoning_effort/reasoning_token_budget into Anthropic's thinking config.
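A sketch of what the effort-to-thinking translation amounts to. The Anthropic Messages API accepts a thinking parameter of the form %{type: "enabled", budget_tokens: n}; the specific budget numbers below are illustrative assumptions, not the values ReqLLM actually uses:

```elixir
# Hypothetical helper, for illustration only.
def thinking_config(opts) do
  case Keyword.get(opts, :reasoning_token_budget) do
    nil ->
      # Derive a token budget from the requested effort level.
      budget =
        case Keyword.fetch!(opts, :reasoning_effort) do
          "low" -> 1_024
          "medium" -> 8_192
          "high" -> 16_384
        end

      %{type: "enabled", budget_tokens: budget}

    budget when is_integer(budget) ->
      # An explicit token budget takes precedence over reasoning_effort.
      %{type: "enabled", budget_tokens: budget}
  end
end
```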