Geminix.V1beta.GenerateTextRequest (geminix v0.2.0)
Request to generate a text completion response from the model.
Fields:
:candidate_count (integer/0) - Optional. Number of generated responses to return. This value must be between [1, 8], inclusive. If unset, it defaults to 1.

:max_output_tokens (integer/0) - Optional. The maximum number of tokens to include in a candidate. If unset, this defaults to the output_token_limit specified in the Model specification.

:prompt (Geminix.V1beta.TextPrompt.t/0) - Required. The free-form input text given to the model as a prompt. Given a prompt, the model generates a TextCompletion response it predicts as the completion of the input text.

:safety_settings (list of Geminix.V1beta.SafetySetting.t/0) - Optional. A list of unique SafetySetting instances for blocking unsafe content, enforced on the GenerateTextRequest.prompt and GenerateTextResponse.candidates. There should not be more than one setting for each SafetyCategory type. The API blocks any prompts and responses that fail to meet the thresholds set by these settings. This list overrides the default settings for each SafetyCategory specified in safety_settings. If no SafetySetting is provided for a given SafetyCategory, the API uses the default safety setting for that category. The harm categories HARM_CATEGORY_DEROGATORY, HARM_CATEGORY_TOXICITY, HARM_CATEGORY_VIOLENCE, HARM_CATEGORY_SEXUAL, HARM_CATEGORY_MEDICAL, and HARM_CATEGORY_DANGEROUS are supported by the text service.

:stop_sequences (list of binary/0) - The set of character sequences (up to 5) that will stop output generation. If specified, the API stops at the first appearance of a stop sequence. The stop sequence will not be included in the response.

:temperature (number/0) - Optional. Controls the randomness of the output. Note: the default value varies by model; see the Model.temperature attribute of the Model returned by the getModel function. Values can range from [0.0, 1.0], inclusive. A value closer to 1.0 produces responses that are more varied and creative, while a value closer to 0.0 typically results in more straightforward responses from the model.

:top_k (integer/0) - Optional. The maximum number of tokens to consider when sampling. The model uses combined top-k and nucleus sampling. Top-k sampling considers the set of the top_k most probable tokens. Defaults to 40. Note: the default value varies by model; see the Model.top_k attribute of the Model returned by the getModel function.

:top_p (number/0) - Optional. The maximum cumulative probability of tokens to consider when sampling. The model uses combined top-k and nucleus sampling. Tokens are sorted by their assigned probabilities so that only the most likely tokens are considered. Top-k sampling directly limits the maximum number of tokens to consider, while nucleus sampling limits the number of tokens based on cumulative probability. Note: the default value varies by model; see the Model.top_p attribute of the Model returned by the getModel function.
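The fields above can be set directly on the struct. A minimal sketch, assuming the Geminix.V1beta modules are compiled into your project (the prompt text and parameter values are illustrative):

```elixir
# Building a GenerateTextRequest by hand. Values respect the documented
# constraints: candidate_count in [1, 8], temperature in [0.0, 1.0],
# at most 5 stop sequences.
request = %Geminix.V1beta.GenerateTextRequest{
  prompt: %Geminix.V1beta.TextPrompt{text: "Write a haiku about autumn."},
  candidate_count: 2,
  max_output_tokens: 256,
  temperature: 0.7,
  top_k: 40,
  top_p: 0.95,
  stop_sequences: ["\n\n"]
}
```

Unset optional fields fall back to the model-specific defaults described above.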
Summary
Functions
Create a Geminix.V1beta.GenerateTextRequest.t/0 from a map returned
by the Gemini API.
Types
@type t() :: %Geminix.V1beta.GenerateTextRequest{
        __meta__: term(),
        candidate_count: integer(),
        max_output_tokens: integer(),
        prompt: Geminix.V1beta.TextPrompt.t(),
        safety_settings: [Geminix.V1beta.SafetySetting.t()],
        stop_sequences: [binary()],
        temperature: number(),
        top_k: integer(),
        top_p: number()
      }
Functions
@spec from_map(t(), map()) :: {:ok, t()} | {:error, Ecto.Changeset.t()}
Create a Geminix.V1beta.GenerateTextRequest.t/0 from a map returned
by the Gemini API.
Depending on the specific API call, this function may need to be applied not to the full response body but to the relevant part of the map within it.
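A minimal usage sketch for from_map/2, given its spec above. The map below is illustrative, and the exact key format (string keys as shown versus atom keys, camelCase versus snake_case) depends on how the library casts params; adjust to match your decoded payload:

```elixir
# Decoding a plain map (e.g. parsed JSON) into a GenerateTextRequest.
# Per the spec, the first argument is a struct to cast into, and the
# result is {:ok, t()} or {:error, Ecto.Changeset.t()}.
params = %{
  "prompt" => %{"text" => "Summarize the following article:"},
  "temperature" => 0.2,
  "candidate_count" => 1
}

result =
  Geminix.V1beta.GenerateTextRequest.from_map(
    %Geminix.V1beta.GenerateTextRequest{},
    params
  )

case result do
  {:ok, request} -> request
  {:error, %Ecto.Changeset{} = changeset} -> changeset.errors
end
```

The {:error, changeset} branch surfaces validation failures (for example, a candidate_count outside [1, 8]) via the usual Ecto.Changeset API.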