# `GenAI.Provider.Gemini`

This module implements the GenAI provider for Google's Gemini API.

# `chat`

# `config_key`

Returns the config key under which this inference provider's application config is stored within the `:genai` entry.

# `default_encoder`

# `effective_settings`

Obtains a map of effective settings, combining `settings`, `model_settings`, `provider_settings`, `config_settings`, etc.

# `endpoint`

Prepares the endpoint and HTTP method for the inference call.

# `headers`

Prepares the request headers for the inference call.

# `models`

Retrieves a list of available Gemini models.

This function calls the Gemini API to retrieve the available models and returns them as a list of `GenAI.Model` structs.
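A minimal usage sketch. The zero-arity call and the returned shape are assumptions based on the description above, not a documented signature:

```elixir
# Hypothetical sketch: list available Gemini models via the provider.
# Arity and return shape are assumptions; consult the actual typespec.
{:ok, models} = GenAI.Provider.Gemini.models()

# Each entry is expected to be a `GenAI.Model` struct.
Enum.each(models, &IO.inspect/1)
```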

# `request_body`

Prepares the request body passed to the inference call.

# `run`

Builds and runs an inference thread.

# `standardize_model`

# `stream`

Builds and runs an inference thread in streaming mode.
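A sketch contrasting `run` with `stream`. The argument names and arities below are assumptions for illustration only; the headings above do not document the actual signatures:

```elixir
# Hypothetical sketch: synchronous vs. streaming inference.
# `thread` and `settings` are placeholder arguments; the real
# signatures may differ.
{:ok, response} = GenAI.Provider.Gemini.run(thread, settings)

# In streaming mode, results arrive incrementally rather than
# as a single completed response.
{:ok, stream} = GenAI.Provider.Gemini.stream(thread, settings)
```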

---

*Consult [api-reference.md](api-reference.md) for the complete listing.*
