Mentor.LLM.Adapters.Gemini (mentor v0.2.3)


An adapter for integrating Google's Gemini language models with the Mentor framework.

This module implements the Mentor.LLM.Adapter behaviour, enabling communication between Mentor and the Google Generative AI API. It handles sending prompts and receiving responses, ensuring compatibility with Mentor's expected data structures.

Options

  • :url (String.t/0) - Base API endpoint to use for sending requests. The default value is "https://generativelanguage.googleapis.com/v1beta/models".

  • :api_key (String.t/0) - Required. Google Generative AI API key.

  • :model - Required. The Gemini model to query. Known models are: ["gemini-2.0-pro", "gemini-2.0-pro-latest", "gemini-2.0-pro-vision", "gemini-2.0-flash", "gemini-2.0-flash-lite", "gemini-2.0-flash-latest", "gemini-2.0-flash-vision", "gemini-1.5-pro", "gemini-1.5-pro-latest", "gemini-1.5-flash"].

  • :temperature (float/0) - What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. The default value is 1.0.

  • :http_options (keyword/0) - The default value is [].
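For reference, here is a sketch of a configuration that sets every option above explicitly. Only :api_key and :model are required; the values shown are illustrative, and the Usage section below shows the minimal setup.

# Every documented option spelled out; omitted keys fall back to the
# defaults listed above.
config = [
  url: "https://generativelanguage.googleapis.com/v1beta/models",
  api_key: System.get_env("GEMINI_API_KEY"),
  model: "gemini-2.0-flash",
  temperature: 0.2,
  http_options: []
]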

Usage

To use this adapter, configure your Mentor instance with the appropriate options:

config = [
  api_key: System.get_env("GEMINI_API_KEY"),
  model: "gemini-2.0-pro"
]

mentor = Mentor.start_chat_with!(Mentor.LLM.Adapters.Gemini, adapter_config: config)
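From there, messages can be appended and the chat completed. A minimal sketch, continuing from the mentor built above and calling this adapter's complete/1 directly (its return shape is described under Considerations below):

result =
  mentor
  |> Mentor.append_message(%{role: "user", content: "What is the capital of Portugal?"})
  |> Mentor.LLM.Adapters.Gemini.complete()

# result is {:ok, response} on success or {:error, reason} on failure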

Multimodal Support

The adapter supports multimodal inputs using the Gemini API format directly:

mentor
|> Mentor.append_message(%{
  role: "user",
  content: [
    %{
      type: "text",
      text: "Extract information from this image."
    },
    %{
      type: "image_base64",
      data: "base64_encoded_data",
      mime_type: "image/jpeg"
    }
  ]
})

The adapter passes these formats directly to Gemini with minimal transformation, allowing you to use any content format supported by the Gemini API.
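For instance, a local image can be read and encoded with the standard library before being attached. A minimal sketch; the file path is hypothetical:

# Base64-encode an image from disk for the :data field shown above.
encoded = "photo.jpg" |> File.read!() |> Base.encode64()

mentor
|> Mentor.append_message(%{
  role: "user",
  content: [
    %{type: "text", text: "Extract information from this image."},
    %{type: "image_base64", data: encoded, mime_type: "image/jpeg"}
  ]
})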

Considerations

  • API Key Security: Ensure your Google API key is stored securely and not exposed in your codebase.
  • Model Availability: Verify that the specified model is available and suitable for your use case. Refer to Google's official documentation for the most up-to-date list of models and their capabilities.
  • Vision Models: For image processing, use vision-capable models.
  • Error Handling: The complete/1 function returns {:ok, response} on success or {:error, reason} on failure. Implement appropriate error handling in your application to manage these scenarios, as in the sketch below.
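A minimal error-handling sketch, reusing the mentor built in the Usage section and calling the adapter directly:

case Mentor.LLM.Adapters.Gemini.complete(mentor) do
  {:ok, response} ->
    # Happy path: use the structured response
    response

  {:error, reason} ->
    # Failure path: log, retry, or surface the error as appropriate
    IO.inspect(reason, label: "Gemini completion failed")
    :error
end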

Summary

Functions

complete!(mentor)

Same as complete/1 but raises an exception in case of error.

Functions

complete!(mentor)

Same as complete/1 but raises an exception in case of error.
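For example, a short sketch reusing the mentor from the Usage section:

# Returns the response directly on success and raises on failure.
response = Mentor.LLM.Adapters.Gemini.complete!(mentor)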