ReqLLM.Providers.Alibaba (ReqLLM v1.9.0)

Alibaba Cloud Bailian (DashScope) provider – international endpoint.

OpenAI-compatible Chat Completions API for Qwen family models via DashScope.

Implementation

Uses built-in OpenAI-style encoding/decoding defaults with DashScope-specific extensions for search, thinking/reasoning, and vision parameters.

DashScope-Specific Extensions

Beyond standard OpenAI parameters, DashScope supports provider-specific options as top-level body keys:

  • enable_search - Enable internet search integration
  • search_options - Search configuration (strategy, source citation)
  • enable_thinking - Activate deep thinking mode for hybrid reasoning
  • thinking_budget - Maximum token budget for the thinking process
  • top_k - Candidate token pool size for sampling
  • repetition_penalty - Penalise repeated tokens
  • enable_code_interpreter - Activate code execution
  • vl_high_resolution_images - Increase vision input pixel limit
  • incremental_output - Streaming: send incremental chunks only

See provider_schema/0 for the complete DashScope-specific schema and ReqLLM.Provider.Options for inherited OpenAI parameters.
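As a sketch, the extensions above are passed through the `provider_options` keyword and encoded as top-level body keys. The values below are illustrative, not recommendations:

```elixir
# Hypothetical values; top_k and repetition_penalty are forwarded
# to DashScope as top-level request body keys.
ReqLLM.generate_text("alibaba:qwen-plus", "Write a haiku about autumn.",
  provider_options: [top_k: 40, repetition_penalty: 1.1]
)
```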

Configuration

# Add to .env file (automatically loaded)
DASHSCOPE_API_KEY=your-api-key
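If you prefer to set the key at runtime instead of a .env file, ReqLLM's key store can be used. This is a sketch; it assumes `ReqLLM.put_key/2` accepts the provider's env key as a snake_case atom:

```elixir
# Assumption: the key name mirrors DASHSCOPE_API_KEY
ReqLLM.put_key(:dashscope_api_key, "your-api-key")
```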

Examples

# Basic usage
ReqLLM.generate_text("alibaba:qwen-plus", "Hello!")

# With search enabled
ReqLLM.generate_text("alibaba:qwen-plus", "What happened today?",
  provider_options: [enable_search: true]
)

# With thinking mode
ReqLLM.generate_text("alibaba:qwen-plus", "Solve this step by step",
  provider_options: [enable_thinking: true, thinking_budget: 4096]
)
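Streaming works the same way; the `incremental_output` extension asks DashScope to send only the new tokens in each chunk. A sketch, assuming `ReqLLM.stream_text/3` mirrors the `generate_text/3` signature:

```elixir
# Assumption: stream_text/3 accepts the same provider_options keyword.
{:ok, response} =
  ReqLLM.stream_text("alibaba:qwen-plus", "Tell me a story",
    provider_options: [incremental_output: true]
  )
```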

Summary

Functions

attach(request, model_input, user_opts)

Default implementation of attach/3.

attach_stream(model, context, opts, finch_name)

Default implementation of attach_stream/4.

build_body(request)

Default implementation of build_body/1.

decode_response(request_response)

Default implementation of decode_response/1.

decode_stream_event(event, model)

Default implementation of decode_stream_event/2.

encode_body(request)

Default implementation of encode_body/1.

extract_usage(body, model)

Default implementation of extract_usage/2.

prepare_request(operation, model_spec, input, opts)

Default implementation of prepare_request/4.

translate_options(operation, model, opts)

Default implementation of translate_options/3.

Functions

attach(request, model_input, user_opts)

Default implementation of attach/3.

Sets up Bearer token authentication and standard pipeline steps.

attach_stream(model, context, opts, finch_name)

Default implementation of attach_stream/4.

Builds complete streaming requests using OpenAI-compatible format.

base_url()

build_body(request)

Default implementation of build_body/1.

Builds request body using OpenAI-compatible format for chat and embedding operations.

decode_response(request_response)

Default implementation of decode_response/1.

Handles success/error responses with standard ReqLLM.Response creation.

decode_stream_event(event, model)

Default implementation of decode_stream_event/2.

Decodes SSE events using OpenAI-compatible format.

default_base_url()

default_env_key()

Callback implementation for ReqLLM.Provider.default_env_key/0.

encode_body(request)

Default implementation of encode_body/1.

Encodes request body using OpenAI-compatible format for chat and embedding operations.

extract_usage(body, model)

Default implementation of extract_usage/2.

Extracts usage data from standard usage field in response body.

prepare_request(operation, model_spec, input, opts)

Default implementation of prepare_request/4.

Handles :chat, :object, and :embedding operations using OpenAI-compatible patterns.

provider_extended_generation_schema()

provider_id()

provider_schema()

supported_provider_options()

translate_options(operation, model, opts)

Default implementation of translate_options/3.

Pass-through implementation that returns options unchanged.