Nous.Providers.Mistral (nous v0.13.3)
Mistral AI provider implementation.
Uses the OpenAI-compatible API with Mistral-specific extensions:
- `reasoning_mode` - Enable reasoning mode for complex tasks
- `prediction_mode` - Enable prediction mode
- `safe_prompt` - Enable safe prompt filtering
Configuration
Set your API key via environment variable:
export MISTRAL_API_KEY="your-mistral-api-key-here"

Or in config:
config :nous, :mistral,
  api_key: "your-mistral-api-key-here"

Usage
# Via Model.parse
model = Nous.Model.parse("mistral:mistral-large-latest")
# Direct provider usage
{:ok, response} = Nous.Providers.Mistral.chat(%{
"model" => "mistral-large-latest",
"messages" => [%{"role" => "user", "content" => "Hello"}]
})
# With reasoning mode
{:ok, response} = Nous.Providers.Mistral.chat(%{
"model" => "mistral-large-latest",
"messages" => messages,
"reasoning_mode" => true
})
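The Mistral-specific options listed above are passed the same way. For example, assuming `safe_prompt` is accepted as a top-level request key like `reasoning_mode` (a sketch based on the option list, not a verified call):

```elixir
# With safe prompt filtering (assumed to follow the same pattern
# as reasoning_mode above)
{:ok, response} = Nous.Providers.Mistral.chat(%{
  "model" => "mistral-large-latest",
  "messages" => messages,
  "safe_prompt" => true
})
```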
Summary
Functions
Get the API key from options, environment, or application config.
Get the base URL from options, application config, or default.
Count tokens in messages (rough estimate).
High-level request with message conversion, telemetry, and error wrapping.
High-level streaming request with message conversion and telemetry.
Functions
Get the API key from options, environment, or application config.
Lookup order:
- `:api_key` option passed directly
- Environment variable (MISTRAL_API_KEY)
- Application config: config :nous, :mistral, api_key: "..."
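The lookup order above can be sketched as a simple fallback chain. This is a hypothetical implementation, not the library's actual code; the function name is illustrative:

```elixir
# Sketch: resolve the API key from opts, then the environment,
# then application config (names are assumptions).
def api_key(opts \\ []) do
  opts[:api_key] ||
    System.get_env("MISTRAL_API_KEY") ||
    Application.get_env(:nous, :mistral, [])[:api_key]
end
```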
Get the base URL from options, application config, or default.
Lookup order:
- `:base_url` option passed directly
- Application config: config :nous, :mistral, base_url: "..."
- Default: https://api.mistral.ai/v1
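The same fallback pattern applies to the base URL. A minimal sketch, assuming the function takes an options keyword list (the name and signature are illustrative):

```elixir
# Sketch: resolve the base URL from opts, then application config,
# then the documented default.
def base_url(opts \\ []) do
  opts[:base_url] ||
    Application.get_env(:nous, :mistral, [])[:base_url] ||
    "https://api.mistral.ai/v1"
end
```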
Count tokens in messages (rough estimate).
Override this in your provider for more accurate counting.
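A rough estimate like this is commonly implemented as a character-count heuristic. The sketch below assumes ~4 characters per token, a common rule of thumb; the actual heuristic used by the library is not specified here:

```elixir
# Sketch of a rough token estimate: total content length divided
# by an assumed 4 characters per token.
def count_tokens(messages) do
  messages
  |> Enum.map(fn %{"content" => content} -> String.length(content) end)
  |> Enum.sum()
  |> div(4)
end
```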
High-level request with message conversion, telemetry, and error wrapping.
Default implementation that:
- Converts messages to provider format
- Builds request params
- Calls chat/2
- Parses response
- Emits telemetry events
- Wraps errors
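The steps above can be sketched as a `with` pipeline. All helper names here (`convert_messages/1`, `build_params/2`, `parse_response/1`, and the telemetry/error wrappers) are hypothetical, chosen only to mirror the documented steps:

```elixir
# Hypothetical sketch of the default request flow:
# convert -> build params -> chat/2 -> parse -> telemetry / error wrapping.
def request(messages, opts) do
  with {:ok, converted} <- convert_messages(messages),
       params <- build_params(converted, opts),
       {:ok, raw} <- chat(params, opts),
       {:ok, response} <- parse_response(raw) do
    emit_telemetry(:stop, response)
    {:ok, response}
  else
    {:error, reason} ->
      emit_telemetry(:exception, reason)
      {:error, wrap_error(reason)}
  end
end
```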
High-level streaming request with message conversion and telemetry.