# `LangChain.ChatModels.ChatGoogleAI`
[🔗](https://github.com/brainlid/langchain/blob/v0.6.2/lib/chat_models/chat_google_ai.ex#L1)

Parses and validates inputs for making a request to the Google AI Chat API.

Converts response into more specialized `LangChain` data structures.

**NOTE:** The GoogleAI service is unique in how it reports TokenUsage
information. So far, it's the only API that returns TokenUsage with each
returned delta, incrementing the generated token count as the deltas arrive.
Other services return the total TokenUsage data once at the end. This chat
model fires the callback each time usage data is received.
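For example, a chain-level callback handler can observe those per-delta usage
reports. This is a minimal sketch, assuming the standard `on_llm_token_usage`
handler key and that the handler receives the chain and a
`LangChain.TokenUsage` struct:

```elixir
alias LangChain.Chains.LLMChain
alias LangChain.ChatModels.ChatGoogleAI
alias LangChain.TokenUsage

# Assumed handler shape: for GoogleAI this fires on every received delta,
# rather than once at the end of the run as with other services.
handler = %{
  on_llm_token_usage: fn _chain, %TokenUsage{} = usage ->
    IO.inspect(usage, label: "TOKEN USAGE")
  end
}

model = ChatGoogleAI.new!(%{model: "gemini-2.0-flash", stream: true})

chain =
  %{llm: model}
  |> LLMChain.new!()
  |> LLMChain.add_callback(handler)
```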

**Google Search Integration**

Starting with Gemini 2.0, this module supports Google Search as a native tool,
allowing the model to automatically search the web for recent information to ground
its responses and improve factuality. Check out the [Google AI Documentation](https://ai.google.dev/gemini-api/docs/grounding?lang=rest)
for more information.

Example Usage:

```elixir
alias LangChain.ChatModels.ChatGoogleAI
alias LangChain.Chains.LLMChain
alias LangChain.Message
alias LangChain.NativeTool

model = ChatGoogleAI.new!(%{temperature: 0, stream: false, model: "gemini-2.0-flash"})

{:ok, updated_chain} =
  %{llm: model, verbose: false, stream: false}
  |> LLMChain.new!()
  |> LLMChain.add_message(
    Message.new_user!("What is the current Google stock price?")
  )
  |> LLMChain.add_tools(NativeTool.new!(%{name: "google_search", configuration: %{}}))
  |> LLMChain.run()
```

The above call will return the current Google stock price.

When `google_search` is used, the model will also return grounding information in the `metadata` attribute of the assistant message.
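A sketch of inspecting that grounding information, continuing from the example above. The key names are assumptions based on the shape of the raw Google API response and may differ in practice:

```elixir
# `last_message` is the assistant message produced by the run above.
# The metadata keys below are assumed, not confirmed.
updated_chain.last_message.metadata
#=> %{"groundingMetadata" => %{"webSearchQueries" => [...], ...}}
```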

# `t`
[🔗](https://github.com/brainlid/langchain/blob/v0.6.2/lib/chat_models/chat_google_ai.ex#L138)

```elixir
@type t() :: %LangChain.ChatModels.ChatGoogleAI{
  api_key: term(),
  api_version: term(),
  callbacks: term(),
  endpoint: term(),
  json_response: term(),
  json_schema: term(),
  model: term(),
  receive_timeout: term(),
  req_config: term(),
  safety_settings: term(),
  stream: term(),
  temperature: term(),
  thinking_config: term(),
  top_k: term(),
  top_p: term(),
  verbose_api: term()
}
```

# `call`
[🔗](https://github.com/brainlid/langchain/blob/v0.6.2/lib/chat_models/chat_google_ai.ex#L502)

Calls the Google AI API, passing the ChatGoogleAI struct with configuration, plus
either a simple message or a list of messages to act as the prompt.

Optionally pass in a list of tools available to the LLM for requesting
execution in response.

Optionally pass in a callback function that can be executed as data is
received from the API.

**NOTE:** This function *can* be used directly, but the primary interface
should be through `LangChain.Chains.LLMChain`. The `ChatGoogleAI` module is more focused on
translating the `LangChain` data structures to and from the Google AI API.

Another benefit of using `LangChain.Chains.LLMChain` is that it combines the
storage of messages, adding tools, adding custom context that should be
passed to tools, and automatically applying `LangChain.MessageDelta`
structs as they are received, then converting those to the full
`LangChain.Message` once fully complete.
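If you do call it directly, a minimal sketch looks like this. The result shape
is an assumption: typically a list of assistant messages when not streaming,
or deltas when streaming:

```elixir
alias LangChain.ChatModels.ChatGoogleAI
alias LangChain.Message

model = ChatGoogleAI.new!(%{model: "gemini-2.0-flash", stream: false})

# The third argument is the (optional) list of tools; empty here.
{:ok, result} = ChatGoogleAI.call(model, [Message.new_user!("Hello!")], [])
```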

# `do_process_response`
[🔗](https://github.com/brainlid/langchain/blob/v0.6.2/lib/chat_models/chat_google_ai.ex#L739)

# `for_api`
[🔗](https://github.com/brainlid/langchain/blob/v0.6.2/lib/chat_models/chat_google_ai.ex#L198)

# `get_message_contents`
[🔗](https://github.com/brainlid/langchain/blob/v0.6.2/lib/chat_models/chat_google_ai.ex#L939)

```elixir
@spec get_message_contents(LangChain.MessageDelta.t() | LangChain.Message.t()) :: [
  %{required(String.t()) => any()}
]
```

Return the content parts for the message.
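A sketch of the expected shape, assuming plain text content maps to a Google-style `"text"` part (the exact part keys are an assumption):

```elixir
alias LangChain.Message
alias LangChain.ChatModels.ChatGoogleAI

ChatGoogleAI.get_message_contents(Message.new_user!("Hello!"))
#=> [%{"text" => "Hello!"}]  (assumed shape)
```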

# `new`
[🔗](https://github.com/brainlid/langchain/blob/v0.6.2/lib/chat_models/chat_google_ai.ex#L172)

```elixir
@spec new(attrs :: map()) :: {:ok, t()} | {:error, Ecto.Changeset.t()}
```

Set up a ChatGoogleAI client configuration.
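For example, using attribute names from the `t()` fields above:

```elixir
{:ok, model} = ChatGoogleAI.new(%{model: "gemini-2.0-flash", temperature: 0.7})

# Invalid attributes return the changeset instead of a model.
{:error, %Ecto.Changeset{}} = ChatGoogleAI.new(%{temperature: "hot"})
```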

# `new!`
[🔗](https://github.com/brainlid/langchain/blob/v0.6.2/lib/chat_models/chat_google_ai.ex#L183)

```elixir
@spec new!(attrs :: map()) :: t() | no_return()
```

Set up a ChatGoogleAI client configuration and return it, or raise an error if invalid.
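For example:

```elixir
# Returns the struct directly; raises on invalid attributes
# (per the `no_return()` in the spec).
model = ChatGoogleAI.new!(%{model: "gemini-2.0-flash", stream: true})
```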

# `restore_from_map`
[🔗](https://github.com/brainlid/langchain/blob/v0.6.2/lib/chat_models/chat_google_ai.ex#L1007)

Restores the model from a serialized config map. See `serialize_config/1`.

# `retry_on_fallback?`
[🔗](https://github.com/brainlid/langchain/blob/v0.6.2/lib/chat_models/chat_google_ai.ex#L971)

```elixir
@spec retry_on_fallback?(LangChain.LangChainError.t()) :: boolean()
```

Determine if an error should be retried. If `true`, a fallback LLM may be
used. If `false`, the error is understood to be more fundamental to the
request rather than a service issue, and the request should not be retried
or fall back to another service.
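A hedged sketch, assuming a transient error type string such as `"rate_limited"` is considered retryable; the actual retryable set is defined by the implementation, not by this example:

```elixir
alias LangChain.LangChainError
alias LangChain.ChatModels.ChatGoogleAI

# Hypothetical error values for illustration only.
error = LangChainError.exception(type: "rate_limited", message: "quota exceeded")
ChatGoogleAI.retry_on_fallback?(error)
#=> true (assumed: transient service issues are retryable)
```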

# `serialize_config`
[🔗](https://github.com/brainlid/langchain/blob/v0.6.2/lib/chat_models/chat_google_ai.ex#L982)

```elixir
@spec serialize_config(t()) :: %{required(String.t()) => any()}
```

Generate a config map that can later restore the model's configuration.
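A round-trip sketch with `restore_from_map/1`, assuming it returns an `{:ok, model}` tuple:

```elixir
model = ChatGoogleAI.new!(%{model: "gemini-2.0-flash", temperature: 0.0})

# Serialize to a plain map (e.g. for persistence)...
config = ChatGoogleAI.serialize_config(model)

# ...and later rebuild an equivalent model from it.
{:ok, restored} = ChatGoogleAI.restore_from_map(config)
```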

---

*Consult [api-reference.md](api-reference.md) for complete listing*
