# `LangChain.ChatModels.ChatVertexAI`
[🔗](https://github.com/brainlid/langchain/blob/v0.6.2/lib/chat_models/chat_vertex_ai.ex#L1)

Parses and validates inputs for making a request to the Google Vertex AI Chat API.

Converts response into more specialized `LangChain` data structures.

Example Usage:

```elixir
alias LangChain.Chains.LLMChain
alias LangChain.Message
alias LangChain.Message.ContentPart
alias LangChain.ChatModels.ChatVertexAI

config = %{
  model: "gemini-2.0-flash",
  api_key: ..., # Vertex AI requires a gcloud auth token https://cloud.google.com/vertex-ai/generative-ai/docs/start/quickstarts/quickstart-multimodal#rest
  temperature: 1.0,
  top_p: 0.8,
  receive_timeout: ...
}

model = ChatVertexAI.new!(config)

%{llm: model, verbose: false, stream: false}
|> LLMChain.new!()
|> LLMChain.add_message(
  Message.new_user!([
    ContentPart.new!(%{type: :text, content: "Analyse the provided file and share a summary"}),
    ContentPart.new!(%{
      type: :file_url,
      content: ...,
      options: [media: ...]
    })
  ])
)
|> LLMChain.run()
```

The above call returns a summary of the media content.

# `t`
[🔗](https://github.com/brainlid/langchain/blob/v0.6.2/lib/chat_models/chat_vertex_ai.ex#L121)

```elixir
@type t() :: %LangChain.ChatModels.ChatVertexAI{
  api_key: term(),
  callbacks: term(),
  endpoint: term(),
  json_response: term(),
  json_schema: term(),
  model: term(),
  receive_timeout: term(),
  req_config: term(),
  stream: term(),
  temperature: term(),
  thinking_config: term(),
  top_k: term(),
  top_p: term(),
  verbose_api: term()
}
```

# `call`
[🔗](https://github.com/brainlid/langchain/blob/v0.6.2/lib/chat_models/chat_vertex_ai.ex#L382)

Calls the Google Vertex AI API, passing the ChatVertexAI struct with configuration
plus either a simple message or a list of messages to act as the prompt.

Optionally pass in a list of tools available to the LLM for requesting
execution in response.

**NOTE:** This function *can* be used directly, but the primary interface
should be through `LangChain.Chains.LLMChain`. The `ChatVertexAI` module is
more focused on translating the `LangChain` data structures to and from the
Vertex AI API.

Another benefit of using `LangChain.Chains.LLMChain` is that it combines the
storage of messages, adding tools, adding custom context that should be passed
to tools, and automatically applying `LangChain.MessageDelta` structs as they
are received, then converting those to a full `LangChain.Message` once
complete.
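
As a brief illustration (not the recommended path), a direct call might look like the following sketch. The configuration mirrors the module example above, the env var name for the auth token is an assumption, and the exact shape of the success value depends on the model settings (e.g. streaming).

```elixir
alias LangChain.ChatModels.ChatVertexAI
alias LangChain.Message

# Same style of configuration as the module example above; depending on your
# deployment an :endpoint may also need to be set.
model =
  ChatVertexAI.new!(%{
    model: "gemini-2.0-flash",
    api_key: System.fetch_env!("VERTEX_AI_AUTH_TOKEN"),
    temperature: 1.0
  })

# Second argument is the prompt (a message or list of messages), third is the
# optional list of tools the LLM may request to execute.
case ChatVertexAI.call(model, [Message.new_user!("Say hello in one sentence.")], []) do
  {:ok, result} -> result
  {:error, error} -> error
end
```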

# `complete_final_delta`
[🔗](https://github.com/brainlid/langchain/blob/v0.6.2/lib/chat_models/chat_vertex_ai.ex#L542)

# `do_process_response`
[🔗](https://github.com/brainlid/langchain/blob/v0.6.2/lib/chat_models/chat_vertex_ai.ex#L546)

# `for_api`
[🔗](https://github.com/brainlid/langchain/blob/v0.6.2/lib/chat_models/chat_vertex_ai.ex#L178)

# `get_message_contents`
[🔗](https://github.com/brainlid/langchain/blob/v0.6.2/lib/chat_models/chat_vertex_ai.ex#L778)

```elixir
@spec get_message_contents(LangChain.MessageDelta.t() | LangChain.Message.t()) :: [
  %{required(String.t()) => any()}
]
```

Return the content parts for the message.
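
A small sketch of the intent: the message's content is converted into the list-of-maps shape the Vertex AI API expects. The result shown in the comment is illustrative; the exact keys follow the wire format.

```elixir
alias LangChain.ChatModels.ChatVertexAI
alias LangChain.Message

message = Message.new_user!("Hello there")

ChatVertexAI.get_message_contents(message)
# => a list of content-part maps with string keys, e.g. [%{"text" => "Hello there"}]
```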

# `new`
[🔗](https://github.com/brainlid/langchain/blob/v0.6.2/lib/chat_models/chat_vertex_ai.ex#L152)

```elixir
@spec new(attrs :: map()) :: {:ok, t()} | {:error, Ecto.Changeset.t()}
```

Setup a ChatVertexAI client configuration.
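
A brief sketch of handling both return values; the attribute values are placeholders.

```elixir
alias LangChain.ChatModels.ChatVertexAI

case ChatVertexAI.new(%{model: "gemini-2.0-flash", temperature: 0.7}) do
  {:ok, %ChatVertexAI{} = model} ->
    model

  {:error, %Ecto.Changeset{} = changeset} ->
    # Inspect changeset.errors to see which attributes failed validation.
    changeset
end
```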

# `new!`
[🔗](https://github.com/brainlid/langchain/blob/v0.6.2/lib/chat_models/chat_vertex_ai.ex#L163)

```elixir
@spec new!(attrs :: map()) :: t() | no_return()
```

Setup a ChatVertexAI client configuration and return it or raise an error if invalid.
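
Same as `new/1` but raising on invalid attributes, as used in the module example above; a quick sketch:

```elixir
alias LangChain.ChatModels.ChatVertexAI

model = ChatVertexAI.new!(%{model: "gemini-2.0-flash", top_p: 0.8})

# Invalid attributes raise an error instead of returning an {:error, changeset} tuple.
```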

# `restore_from_map`
[🔗](https://github.com/brainlid/langchain/blob/v0.6.2/lib/chat_models/chat_vertex_ai.ex#L857)

Restores the model from the config.

# `retry_on_fallback?`
[🔗](https://github.com/brainlid/langchain/blob/v0.6.2/lib/chat_models/chat_vertex_ai.ex#L823)

```elixir
@spec retry_on_fallback?(LangChain.LangChainError.t()) :: boolean()
```

Determine if an error should be retried. If `true`, a fallback LLM may be
used. If `false`, the error is understood to be a problem with the request
itself rather than a service issue, and it should not be retried or fall back
to another service.
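
A sketch of how the check might be used when deciding whether to swap in a fallback model after a failed request; the `MyApp.FallbackPolicy` module name is hypothetical.

```elixir
defmodule MyApp.FallbackPolicy do
  alias LangChain.ChatModels.ChatVertexAI
  alias LangChain.LangChainError

  # Decide what to do with an error returned by a failed request.
  def handle_error(%LangChainError{} = error) do
    if ChatVertexAI.retry_on_fallback?(error) do
      # Likely a transient service issue; a fallback LLM may succeed.
      {:retry_with_fallback, error}
    else
      # The request itself is the problem; retrying elsewhere won't help.
      {:give_up, error}
    end
  end
end
```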

# `serialize_config`
[🔗](https://github.com/brainlid/langchain/blob/v0.6.2/lib/chat_models/chat_vertex_ai.ex#L834)

```elixir
@spec serialize_config(t()) :: %{required(String.t()) => any()}
```

Generate a config map that can later restore the model's configuration.
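
A sketch of the round trip with `restore_from_map/1` (documented above), assuming the usual `{:ok, struct}` success tuple; useful when persisting a model's settings.

```elixir
alias LangChain.ChatModels.ChatVertexAI

model = ChatVertexAI.new!(%{model: "gemini-2.0-flash", temperature: 0.5})

# A plain map with string keys that can be stored, e.g. as JSON...
config_map = ChatVertexAI.serialize_config(model)

# ...and later turned back into a working struct.
{:ok, restored} = ChatVertexAI.restore_from_map(config_map)
```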

---

*Consult [api-reference.md](api-reference.md) for the complete listing*
