gleamstral


A Gleam client library for the Mistral AI API, providing type-safe access to Mistral’s powerful language models, embeddings, and agents.

Overview

Gleamstral enables Gleam applications to integrate seamlessly with Mistral AI’s capabilities, including chat completions (with vision support), the Agent API, embeddings generation, and tool/function calling.

Further documentation can be found at https://hexdocs.pm/gleamstral.

Installation

gleam add gleamstral

Getting an API key

You can get an API key for free from Mistral La Plateforme.
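Rather than hardcoding the key in your source, you can read it from an environment variable at startup. A minimal sketch, assuming the third-party envoy package (gleam add envoy) and a MISTRAL_API_KEY variable; both the package choice and the variable name are assumptions, not part of gleamstral:

```gleam
import envoy
import gleamstral/client

pub fn make_client() {
  // Fails loudly if MISTRAL_API_KEY is not set in the environment
  let assert Ok(api_key) = envoy.get("MISTRAL_API_KEY")
  client.new(api_key)
}
```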

Quick Example

import gleam/io
import gleam/list
import gleamstral/client
import gleamstral/chat/chat
import gleamstral/message
import gleamstral/model

pub fn main() {
  // Create a Mistral client with your API key
  let client = client.new("your-mistral-api-key")
  
  // Set up a chat client with a temperature of 0.7
  let chat_client = chat.new(client) |> chat.with_temperature(0.7)

  // Define the list of messages to send to the model
  let messages = [
    message.UserMessage(message.TextContent("Briefly explain what Gleam is")),
  ]

  // Send the request and get a response from the model
  let assert Ok(response) =
    chat_client
    |> chat.complete(model.MistralSmall, messages)

  let assert Ok(choice) = list.first(response.choices)
  let assert message.AssistantMessage(content, _, _) = choice.message

  io.println("Response: " <> content) // "Gleam is a very cool language [...]"
}

Key Features

Chat Completions with Vision

Gleamstral supports multimodal inputs, allowing you to send both text and images to vision-enabled models like Pixtral:

// Create a message with an image
let messages = [
  message.UserMessage(
    message.MultiContent([
      message.Text("What's in this image?"),
      message.ImageUrl("https://gleam.run/images/lucy/lucy.svg")
    ])
  )
]

// Use a Pixtral model for image analysis
let assert Ok(response) =
  chat.new(client) |> chat.complete(model.PixtralLarge, messages)

// Get the first choice from the response
let assert Ok(choice) = list.first(response.choices)
let assert message.AssistantMessage(content, _, _) = choice.message

io.println("Response: " <> content) // "This is a picture of Lucy, the cute star"

Agent API

Access Mistral’s Agent API to utilize pre-configured agents for specific tasks:

// Get your agent ID from the Mistral console
let agent_id = "your-agent-id"

// Call the agent with your agent ID and messages
let assert Ok(response) =
  agent.new(client) |> agent.complete(agent_id, messages)

Embeddings Generation

Generate vector embeddings for text to enable semantic search, clustering, or other vector operations:

// Generate embeddings for a text input
embeddings.new(client)
|> embeddings.create(model.MistralEmbed, ["Your text to embed"])

Tool/Function Calling

Define tools that the model can use to call functions in your application:

// Define a tool
let weather_tool = tool.new(
  "get_weather",
  "Get the current weather in a location",
  // Define the parameters schema
  [#("location", String, True, "The city and country, e.g. Seoul, South Korea")]
)

// Create a chat client with the tool and send the request
let assert Ok(response) =
  chat.new(client)
|> chat.with_tools([weather_tool])
|> chat.complete(model.MistralSmall, messages)

Examples

The examples/ directory contains several ready-to-use examples demonstrating the library’s capabilities:

To run any example:

cd examples
gleam run -m example_name  # e.g., gleam run -m agent

Note: You’ll need to set your Mistral API key in a .env file or as an environment variable.
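For example, a .env file could contain a single line like the following (the variable name MISTRAL_API_KEY is an assumption; check the source of the example you are running for the exact name it reads):

```
MISTRAL_API_KEY=your-mistral-api-key
```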

Roadmap

Contributing

Contributions are welcome! Please open an issue or submit a pull request.

License

This project is licensed under the MIT License.
