LLMDB.Sources.Google (LLM DB v2026.3.0)


Remote source for Google Gemini models (https://generativelanguage.googleapis.com/v1beta/models).

  • pull/1 fetches data from the Google Gemini API and caches it locally
  • load/1 reads from the cached file (no network call)

Options

  • :url - API endpoint (default: "https://generativelanguage.googleapis.com/v1beta/models")
  • :api_key - Google API key (required, or set GOOGLE_API_KEY or GEMINI_API_KEY env var)
  • :page_size - Items per page (1-1000, default: 1000 to fetch all)
  • :req_opts - Additional Req options for testing
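
A minimal sketch of passing these options explicitly, assuming pull/1 accepts a map of options the same way load/1 does (the option names are from the list above; everything else here is illustrative):

```elixir
# Sketch: pull remote model data with an explicit API key and page size.
# If :api_key is omitted, the GOOGLE_API_KEY or GEMINI_API_KEY env var is used.
{:ok, data} =
  LLMDB.Sources.Google.pull(%{
    api_key: System.fetch_env!("GOOGLE_API_KEY"),
    page_size: 1000
  })
```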

Configuration

The cache directory can be configured in application config:

config :llm_db,
  google_cache_dir: "priv/llm_db/remote"

Default: "priv/llm_db/remote"

Usage

# Pull remote data and cache (requires API key)
mix llm_db.pull --source google

# Load from cache
{:ok, data} = Google.load(%{})

Summary

Functions

Transforms Google Gemini API response to canonical Zoi format.

Functions

transform(content)

Transforms Google Gemini API response to canonical Zoi format.

Input Format (Google)

{
  "models": [
    {
      "name": "models/gemini-2.0-flash-exp",
      "baseModelId": "models/gemini-2.0-flash",
      "version": "001",
      "displayName": "Gemini 2.0 Flash",
      "description": "...",
      "inputTokenLimit": 1048576,
      "outputTokenLimit": 8192,
      "supportedGenerationMethods": ["generateContent"],
      "thinking": false
    }
  ]
}

Output Format (Canonical Zoi)

%{
  "google" => %{
    id: :google,
    name: "Google",
    models: [
      %{
        id: "gemini-2.0-flash-exp",
        provider: :google,
        name: "Gemini 2.0 Flash",
        limits: %{
          context: 1048576,
          output: 8192
        },
        extra: %{
          base_model_id: "models/gemini-2.0-flash",
          version: "001",
          description: "...",
          supported_generation_methods: ["generateContent"],
          thinking: false
        }
      }
    ]
  }
}
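
The mapping above can be exercised directly. A hedged sketch, assuming transform/1 takes the decoded API response map and returns the canonical map as documented (the input values mirror the Google format shown):

```elixir
# Sample decoded response in the Google format documented above.
content = %{
  "models" => [
    %{
      "name" => "models/gemini-2.0-flash-exp",
      "displayName" => "Gemini 2.0 Flash",
      "inputTokenLimit" => 1_048_576,
      "outputTokenLimit" => 8192,
      "supportedGenerationMethods" => ["generateContent"]
    }
  ]
}

# Per the output format above, models are keyed under "google" with the
# "models/" prefix stripped from each model id.
%{"google" => %{models: [model | _]}} = LLMDB.Sources.Google.transform(content)
model.id              #=> "gemini-2.0-flash-exp"
model.limits.context  #=> 1048576
```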