Geminix.V1beta.GenerateMessageRequest (geminix v0.2.0)

Request to generate a message response from the model.

Fields:

  • :candidate_count (integer/0) - Optional. The number of generated response messages to return. This value must be in the range [1, 8], inclusive. If unset, it defaults to 1.
  • :prompt (Geminix.V1beta.MessagePrompt.t/0) - Required. The structured textual input given to the model as a prompt. Given a prompt, the model will return what it predicts is the next message in the discussion.
  • :temperature (number/0) - Optional. Controls the randomness of the output. Values can range over [0.0,1.0], inclusive. A value closer to 1.0 will produce responses that are more varied, while a value closer to 0.0 will typically result in less surprising responses from the model.
  • :top_k (integer/0) - Optional. The maximum number of tokens to consider when sampling. The model uses combined Top-k and nucleus sampling. Top-k sampling considers the set of top_k most probable tokens.
  • :top_p (number/0) - Optional. The maximum cumulative probability of tokens to consider when sampling. The model uses combined Top-k and nucleus sampling. Nucleus sampling considers the smallest set of tokens whose probability sum is at least top_p.
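
A request can be built by populating the struct directly. The sketch below is illustrative only: it assumes that Geminix.V1beta.MessagePrompt carries the conversation under a :messages field with author/content entries, which is not documented on this page.

```elixir
# A hedged sketch, not a verified call into the library. The shape of
# MessagePrompt (:messages, author/content) is an assumption here.
request = %Geminix.V1beta.GenerateMessageRequest{
  prompt: %Geminix.V1beta.MessagePrompt{
    messages: [%{author: "user", content: "Suggest a name for a cat cafe."}]
  },
  # Ask for two alternative replies instead of the default single candidate.
  candidate_count: 2,
  # A low temperature keeps output close to the model's most likely tokens.
  temperature: 0.2,
  # Combined sampling: consider at most the 40 most probable tokens (top_k),
  # restricted to the smallest set whose cumulative probability is >= 0.95 (top_p).
  top_k: 40,
  top_p: 0.95
}
```

Note that top_k and top_p interact: the model applies both constraints together, so tightening either one narrows the candidate token set.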

Summary

Types

t()

@type t() :: %Geminix.V1beta.GenerateMessageRequest{
  __meta__: term(),
  candidate_count: integer(),
  prompt: Geminix.V1beta.MessagePrompt.t(),
  temperature: number(),
  top_k: integer(),
  top_p: number()
}

Functions

from_map(schema \\ %__MODULE__{}, map)

@spec from_map(t(), map()) :: {:ok, t()} | {:error, Ecto.Changeset.t()}

Create a Geminix.V1beta.GenerateMessageRequest.t/0 from a map returned by the Gemini API.

Depending on the concrete API call, this function may need to be applied not to the full response body but to the relevant sub-map within it.
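
As a hedged sketch of the pattern above, the call below applies from_map/2 to a hand-written map rather than a real API response; the string-keyed shape and the field names are assumptions based on the fields listed on this page, not a verified payload.

```elixir
# Hypothetical input map; in practice this would be extracted from the
# relevant part of a decoded Gemini API response body.
map = %{
  "prompt" => %{"messages" => [%{"author" => "user", "content" => "Hi"}]},
  "temperature" => 0.7,
  "candidate_count" => 1
}

case Geminix.V1beta.GenerateMessageRequest.from_map(map) do
  {:ok, %Geminix.V1beta.GenerateMessageRequest{} = request} ->
    # Validation succeeded; use the typed struct.
    request

  {:error, %Ecto.Changeset{} = changeset} ->
    # Validation failed; the changeset describes which fields were invalid.
    changeset.errors
end
```

Because the spec returns {:ok, t()} | {:error, Ecto.Changeset.t()}, callers should pattern-match on both outcomes rather than assume success.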