ReqLLM.Providers.Anthropic (ReqLLM v1.0.0)


Provider implementation for Anthropic Claude models.

Supports Claude 3 and Claude 3.5 models, including:

  • claude-3-5-sonnet-20241022
  • claude-3-5-haiku-20241022
  • claude-3-opus-20240229

Key Differences from OpenAI

  • Uses /v1/messages endpoint instead of /chat/completions
  • Different authentication: x-api-key header instead of Authorization: Bearer
  • Different message format with content blocks
  • Different response structure with top-level role and content
  • System prompt is passed as a top-level system field, not as a message in the messages array
  • Tool calls use different format with content blocks
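
To make the format difference concrete, here is a sketch of the request body the /v1/messages endpoint expects, contrasted with OpenAI's shape. The exact maps ReqLLM builds internally may differ; this follows Anthropic's public Messages API:

```elixir
# Anthropic /v1/messages body: system is a top-level field,
# and message content is a list of typed content blocks.
anthropic_body = %{
  "model" => "claude-3-5-sonnet-20241022",
  "max_tokens" => 1024,
  "system" => "You are a helpful assistant.",
  "messages" => [
    %{"role" => "user", "content" => [%{"type" => "text", "text" => "Hello!"}]}
  ]
}

# OpenAI /chat/completions body for comparison: the system prompt
# lives inside the messages array, and content is a plain string.
openai_body = %{
  "model" => "gpt-4o",
  "messages" => [
    %{"role" => "system", "content" => "You are a helpful assistant."},
    %{"role" => "user", "content" => "Hello!"}
  ]
}
```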

Usage

iex> ReqLLM.generate_text("anthropic:claude-3-5-sonnet-20241022", "Hello!")
{:ok, response}
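
The returned value is a ReqLLM.Response struct. A sketch of extracting the generated text, assuming a top-level ReqLLM.text/1 helper (check the ReqLLM docs for the exact accessor in your version):

```elixir
{:ok, response} =
  ReqLLM.generate_text("anthropic:claude-3-5-sonnet-20241022", "Hello!")

# Hypothetical accessor; the actual helper name may differ.
text = ReqLLM.text(response)
```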

Summary

Functions

attach(request, model_input, user_opts)
Default implementation of attach/3.

attach_stream(model, context, opts, finch_name)
Default implementation of attach_stream/4.

decode_response(request_response)
Default implementation of decode_response/1.

decode_stream_event(event, model)
Default implementation of decode_stream_event/2.

encode_body(request)
Default implementation of encode_body/1.

extract_usage(body, model)
Default implementation of extract_usage/2.

map_reasoning_effort_to_budget(arg1)
Maps reasoning effort levels to token budgets.

prepare_request(operation, model_spec, input, opts)
Default implementation of prepare_request/4.

tool_to_anthropic_format(tool)
Convert a ReqLLM.Tool to Anthropic's tool format.

translate_options(operation, model, opts)
Default implementation of translate_options/3.

Functions

attach(request, model_input, user_opts)

Default implementation of attach/3.

Sets up x-api-key header authentication and standard pipeline steps.

attach_stream(model, context, opts, finch_name)

Default implementation of attach_stream/4.

Builds complete streaming requests using OpenAI-compatible format.

decode_response(request_response)

Default implementation of decode_response/1.

Handles success/error responses with standard ReqLLM.Response creation.

decode_stream_event(event, model)

Default implementation of decode_stream_event/2.

Decodes SSE events using OpenAI-compatible format.

default_base_url()

default_env_key()

Callback implementation for ReqLLM.Provider.default_env_key/0.

default_provider_opts()

encode_body(request)

Default implementation of encode_body/1.

Encodes request body using OpenAI-compatible format for chat and embedding operations.

extract_usage(body, model)

Default implementation of extract_usage/2.

Extracts usage data from standard usage field in response body.

map_reasoning_effort_to_budget(arg1)

Maps reasoning effort levels to token budgets.

This is the canonical source of truth for Anthropic reasoning effort mappings, used by all providers hosting Anthropic models.

  • :low → 1,024 tokens
  • :medium → 2,048 tokens
  • :high → 4,096 tokens

Examples

iex> ReqLLM.Providers.Anthropic.map_reasoning_effort_to_budget(:low)
1024

iex> ReqLLM.Providers.Anthropic.map_reasoning_effort_to_budget("medium")
2048
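
The documented mapping can be sketched as a set of function clauses. This is an illustrative sketch, not ReqLLM's actual implementation; any implementation must agree with the table above, with string inputs normalized to atoms (as the "medium" example shows):

```elixir
# Sketch only; the real clause heads in ReqLLM may differ.
def map_reasoning_effort_to_budget(effort) when is_binary(effort) do
  effort |> String.to_existing_atom() |> map_reasoning_effort_to_budget()
end

def map_reasoning_effort_to_budget(:low), do: 1_024
def map_reasoning_effort_to_budget(:medium), do: 2_048
def map_reasoning_effort_to_budget(:high), do: 4_096
```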

metadata()

prepare_request(operation, model_spec, input, opts)

Default implementation of prepare_request/4.

Handles :chat, :object, and :embedding operations using OpenAI-compatible patterns.

provider_extended_generation_schema()

provider_id()

provider_schema()

supported_provider_options()

tool_to_anthropic_format(tool)

Convert a ReqLLM.Tool to Anthropic's tool format.

This is made public so that Bedrock and Vertex formatters can reuse it.
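
Anthropic's tool definitions use name, description, and a JSON-Schema input_schema (unlike OpenAI's nested function object). A hedged sketch of the output shape, assuming the tool struct carries name, description, and a parameter schema (field names here are illustrative, not ReqLLM's actual struct keys):

```elixir
%{
  "name" => tool.name,
  "description" => tool.description,
  # JSON Schema describing the tool's arguments
  "input_schema" => %{
    "type" => "object",
    "properties" => %{"location" => %{"type" => "string"}},
    "required" => ["location"]
  }
}
```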

translate_options(operation, model, opts)

Default implementation of translate_options/3.

Pass-through implementation that returns options unchanged.