Nous.Providers.Mistral (nous v0.13.3)


Mistral AI provider implementation.

Uses the OpenAI-compatible API with Mistral-specific extensions:

  • reasoning_mode - Enable reasoning mode for complex tasks
  • prediction_mode - Enable prediction mode (predicted outputs)
  • safe_prompt - Enable safe prompt filtering

Configuration

Set your API key via environment variable:

export MISTRAL_API_KEY="your-mistral-api-key-here"

Or in config:

config :nous, :mistral,
  api_key: "your-mistral-api-key-here"

Usage

# Via Model.parse
model = Nous.Model.parse("mistral:mistral-large-latest")

# Direct provider usage
{:ok, response} = Nous.Providers.Mistral.chat(%{
  "model" => "mistral-large-latest",
  "messages" => [%{"role" => "user", "content" => "Hello"}]
})

# With reasoning mode
{:ok, response} = Nous.Providers.Mistral.chat(%{
  "model" => "mistral-large-latest",
  "messages" => messages,
  "reasoning_mode" => true
})
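
# With safe prompt filtering (assuming "safe_prompt" is passed through
# like the other extension flags listed above)
{:ok, response} = Nous.Providers.Mistral.chat(%{
  "model" => "mistral-large-latest",
  "messages" => messages,
  "safe_prompt" => true
})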

Summary

Functions

api_key(opts \\ [])
Get the API key from options, environment, or application config.

base_url(opts \\ [])
Get the base URL from options, application config, or default.

count_tokens(messages)
Count tokens in messages (rough estimate).

request(model, messages, settings)
High-level request with message conversion, telemetry, and error wrapping.

request_stream(model, messages, settings)
High-level streaming request with message conversion and telemetry.

Functions

api_key(opts \\ [])

@spec api_key(keyword()) :: String.t() | nil

Get the API key from options, environment, or application config.

Lookup order:

  1. :api_key option passed directly
  2. Environment variable (MISTRAL_API_KEY)
  3. Application config: config :nous, :mistral, api_key: "..."
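
A minimal sketch of that resolution chain (illustrative only; the actual implementation may differ):

# Falls through option -> environment -> application config.
opts[:api_key] ||
  System.get_env("MISTRAL_API_KEY") ||
  Application.get_env(:nous, :mistral, [])[:api_key]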

base_url(opts \\ [])

@spec base_url(keyword()) :: String.t()

Get the base URL from options, application config, or default.

Lookup order:

  1. :base_url option passed directly
  2. Application config: config :nous, :mistral, base_url: "..."
  3. Default: https://api.mistral.ai/v1
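
For example (the override URL is a placeholder):

iex> Nous.Providers.Mistral.base_url()
"https://api.mistral.ai/v1"

iex> Nous.Providers.Mistral.base_url(base_url: "https://example.test/v1")
"https://example.test/v1"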

count_tokens(messages)

@spec count_tokens(list()) :: integer()

Count tokens in messages (rough estimate).

Override this in your provider for more accurate counting.
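
One common rough heuristic is about four characters per token; a sketch of such an estimate (the library's actual formula is not documented here):

# Assumes string-keyed message maps, as in the Usage examples above.
messages
|> Enum.map(& &1["content"])
|> Enum.join(" ")
|> String.length()
|> div(4)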

request(model, messages, settings)

High-level request with message conversion, telemetry, and error wrapping.

Default implementation that:

  1. Converts messages to provider format
  2. Builds request params
  3. Calls chat/2
  4. Parses response
  5. Emits telemetry events
  6. Wraps errors
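
Putting the steps together, a call might look like this (the shape of the settings argument is an assumption; an empty map stands in here):

model = Nous.Model.parse("mistral:mistral-large-latest")
messages = [%{"role" => "user", "content" => "Hello"}]
{:ok, response} = Nous.Providers.Mistral.request(model, messages, %{})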

request_stream(model, messages, settings)

High-level streaming request with message conversion and telemetry.
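
A hypothetical call, assuming the success value is an enumerable of chunks:

# The chunk shape is an assumption; IO.inspect/1 prints whatever arrives.
{:ok, stream} = Nous.Providers.Mistral.request_stream(model, messages, %{})
Enum.each(stream, &IO.inspect/1)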