API Reference geminix v0.2.0

Modules

Documentation for Geminix.

Config options for Geminix functions.

Request for an AsyncBatchEmbedContent operation.

Identifier for the source contributing to this attribution.

Batch request to get embeddings from the model for a list of prompts.

The response to a BatchEmbedContentsRequest.

Batch request to get a text embedding from the model.

The response to an EmbedTextRequest.

Request for a BatchGenerateContent operation.

Stats about the batch.

Raw media bytes. Text should not be sent as raw bytes, use the 'text' field.

Content that has been preprocessed and can be used in subsequent requests to GenerativeService. Cached content can only be used with the model it was created for.

Metadata on the usage of the cached content.

A response candidate generated from the model.

Parameters for telling the service how to chunk the file.

A collection of source attributions for a piece of content.

A citation to a source for a portion of a specific response.

Tool that executes code generated by the model, and automatically returns the result to the model. See also ExecutableCode and CodeExecutionResult which are only generated when using this tool.

Result of executing the ExecutableCode. Only generated when using the CodeExecution, and always follows a part containing the ExecutableCode.

Computer Use tool type.

Filter condition applicable to a single key.

The base structured datatype containing multi-part content of a message. A Content includes a role field designating the producer of the Content and a parts field containing multi-part data that contains the content of the message turn.
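
As a rough sketch, a Content object on the wire is a JSON object with a role and a list of parts (camelCase field names assumed from the REST wire format):

```json
{
  "role": "user",
  "parts": [
    { "text": "What is the weather like today?" }
  ]
}
```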

A list of floats representing an embedding.

Content filtering metadata associated with processing a single request. ContentFilter contains a reason and an optional supporting string. The reason may be unspecified.

A Corpus is a collection of Documents. A project can create up to 10 corpora.

Counts the number of tokens in the prompt sent to a model. Models may tokenize text differently, so each model may return a different token_count.

A response from CountMessageTokens. It returns the model's token_count for the prompt.

Counts the number of tokens in the prompt sent to a model. Models may tokenize text differently, so each model may return a different token_count.

A response from CountTextTokens. It returns the model's token_count for the prompt.

Counts the number of tokens in the prompt sent to a model. Models may tokenize text differently, so each model may return a different token_count.

A response from CountTokens. It returns the model's token_count for the prompt.

Request for CreateFile.

Response for CreateFile.

TODO: no high-level description

User provided metadata stored as key-value pairs.

Dataset for training or validation.

A Document is a collection of Chunks.

Response for DownloadFile.

Describes the options to customize dynamic retrieval.

A resource representing a batch of EmbedContent requests.

The output of a batch request. This is returned in the AsyncBatchEmbedContentResponse or the EmbedContentBatch.output field.

Stats about the batch.

Request containing the Content for the model to embed.

The response to an EmbedContentRequest.

Request to get a text embedding from the model.

The response to an EmbedTextRequest.

A list of floats representing the embedding.

A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); }

An input/output example used to instruct the Model. It demonstrates how the model should respond or format its response.

Code generated by the model that is meant to be executed, and the result returned to the model. Only generated when using the CodeExecution tool, in which the code will be automatically executed, and a corresponding CodeExecutionResult will also be generated.

A file uploaded to the API.

URI based data.

The FileSearch tool that retrieves knowledge from Semantic Retrieval corpora. Files are imported to Semantic Retrieval corpora using the ImportFile API.

A FileSearchStore is a collection of Documents.

A predicted FunctionCall returned from the model that contains a string representing the FunctionDeclaration.name with the arguments and their values.

Configuration for specifying function calling behavior.

Structured representation of a function declaration as defined by the OpenAPI 3.0 specification. Included in this declaration are the function name and parameters. This FunctionDeclaration is a representation of a block of code that can be used as a Tool by the model and executed by the client.
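
An illustrative sketch of a function declaration in JSON; the function name, description, and parameter names here are hypothetical, while the schema keys follow the OpenAPI-style subset described above:

```json
{
  "name": "get_current_weather",
  "description": "Returns the current weather for a city.",
  "parameters": {
    "type": "OBJECT",
    "properties": {
      "city": { "type": "STRING", "description": "City name, e.g. Paris" }
    },
    "required": ["city"]
  }
}
```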

The result output from a FunctionCall, containing a string representing the FunctionDeclaration.name and a structured JSON object with any output from the function; this is used as context for the model. It should contain the result of a FunctionCall made based on model prediction.

Raw media bytes for function response. Text should not be sent as raw bytes, use the 'FunctionResponse.response' field.

A datatype containing media that is part of a FunctionResponse message. A FunctionResponsePart consists of data which has an associated datatype. A FunctionResponsePart can only contain one of the accepted types in FunctionResponsePart.data. A FunctionResponsePart must have a fixed IANA MIME type identifying the type and subtype of the media if the inline_data field is filled with raw bytes.

Request to generate a grounded answer from the Model.

Response from the model for a grounded answer.

A resource representing a batch of GenerateContent requests.

The output of a batch request. This is returned in the BatchGenerateContentResponse or the GenerateContentBatch.output field.

Request to generate a completion from the model.

Response from the model supporting multiple candidate responses. Safety ratings and content filtering are reported both for the prompt in GenerateContentResponse.prompt_feedback and for each candidate in finish_reason and in safety_ratings. The API:

- returns either all requested candidates or none of them
- returns no candidates at all only if there was something wrong with the prompt (check prompt_feedback)
- reports feedback on each candidate in finish_reason and safety_ratings
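
A minimal sketch of the response shape, assuming the REST API's camelCase JSON field names; the category and enum values shown are illustrative:

```json
{
  "candidates": [
    {
      "content": { "role": "model", "parts": [{ "text": "..." }] },
      "finishReason": "STOP",
      "safetyRatings": [
        { "category": "HARM_CATEGORY_HARASSMENT", "probability": "NEGLIGIBLE" }
      ]
    }
  ],
  "promptFeedback": {
    "safetyRatings": [
      { "category": "HARM_CATEGORY_HARASSMENT", "probability": "NEGLIGIBLE" }
    ]
  }
}
```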

Request to generate a message response from the model.

The response from the model. This includes candidate messages and conversation history in the form of chronologically-ordered messages.

Request to generate a text completion response from the model.

The response from the model, including candidate completions.

A file generated on behalf of a user.

Configuration options for model generation and outputs. Not all parameters are configurable for every model.
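
A hedged sketch of a generation config; the specific parameters shown (temperature, topP, maxOutputTokens) are common knobs, but not every model supports all of them:

```json
{
  "temperature": 0.7,
  "topP": 0.95,
  "maxOutputTokens": 1024
}
```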

The GoogleMaps Tool that provides geospatial context for the user's query.

GoogleSearch tool type. Tool to support Google Search in the model. Powered by Google.

Tool to retrieve public web data for grounding, powered by Google.

Attribution for a source that contributed to an answer.

Metadata returned to client when grounding is enabled.

Passage included inline with a grounding configuration.

Identifier for a part within a GroundingPassage.

A repeated list of passages.

Hyperparameters controlling the tuning process. Read more at https://ai.google.dev/docs/model_tuning_guidance

Config for image generation features.

Request for ImportFile to import a File API file into a FileSearchStore.

The request to be processed in the batch.

The requests to be processed in the batch if provided as part of the batch creation request.

The response to a single request in the batch.

The responses to the requests in the batch.

The request to be processed in the batch.

The requests to be processed in the batch if provided as part of the batch creation request.

The response to a single request in the batch.

The responses to the requests in the batch.

Configures the input to the batch request.

Configures the input to the batch request.

Feedback related to the input data used to answer the question, as opposed to the model-generated response to the question.

Represents a time interval, encoded as a Timestamp start (inclusive) and a Timestamp end (exclusive). The start must be less than or equal to the end. When the start equals the end, the interval is empty (matches no time). When both start and end are unspecified, the interval matches any time.
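
An illustrative interval covering one day, with an inclusive start and exclusive end; the RFC 3339 timestamp fields (startTime, endTime) are assumed from the usual JSON encoding of this type:

```json
{
  "startTime": "2024-01-01T00:00:00Z",
  "endTime": "2024-01-02T00:00:00Z"
}
```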

An object that represents a latitude/longitude pair. This is expressed as a pair of doubles to represent degrees latitude and degrees longitude. Unless specified otherwise, this object must conform to the WGS84 standard. Values must be within normalized ranges.

Response with CachedContents list.

Response from ListCorpora containing a paginated list of Corpora. The results are sorted by ascending corpus.create_time.

Response from ListDocuments containing a paginated list of Documents. The Documents are sorted by ascending document.create_time.

Response from ListFileSearchStores containing a paginated list of FileSearchStores. The results are sorted by ascending file_search_store.create_time.

Response for ListFiles.

Response for ListGeneratedFiles.

Response from ListModel containing a paginated list of Models.

The response message for Operations.ListOperations.

Response from ListPermissions containing a paginated list of permissions.

Response from ListTunedModels containing a paginated list of Models.

Candidate for the logprobs token and score.

A grounding chunk from Google Maps. A Maps chunk corresponds to a single place.

An MCPServer is a server that implements the MCP protocol and can be called by the model to perform actions.

Media resolution for the input media.

The base unit of structured text. A Message includes an author and the content of the Message. The author is used to tag messages when they are fed to the model as text.

All of the structured input text passed to the model as a prompt. A MessagePrompt contains a structured set of fields that provide context for the conversation, examples of user input/model output message pairs that prime the model to respond in different ways, and the conversation history or list of messages representing the alternating turns of the conversation between the user and the model.

User-provided filter to limit retrieval based on Chunk- or Document-level metadata values. Example (genre = drama OR genre = action): key = "document.custom_metadata.genre", conditions = [{string_value = "drama", operation = EQUAL}, {string_value = "action", operation = EQUAL}].
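
The genre example above, rendered as a JSON wire sketch (camelCase field names assumed from the REST encoding):

```json
{
  "key": "document.custom_metadata.genre",
  "conditions": [
    { "stringValue": "drama", "operation": "EQUAL" },
    { "stringValue": "action", "operation": "EQUAL" }
  ]
}
```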

Represents token counting info for a single modality.

Information about a Generative Language Model.

The status of the underlying model. This is used to indicate the stage of the underlying model and the retirement time if applicable.

The configuration for the multi-speaker setup.

This resource represents a long-running operation that is the result of a network API call.

A datatype containing media that is part of a multi-part Content message. A Part consists of data which has an associated datatype. A Part can only contain one of the accepted types in Part.data. A Part must have a fixed IANA MIME type identifying the type and subtype of the media if the inline_data field is filled with raw bytes.

Permission resource grants a user, group, or the rest of the world access to a PaLM API resource (e.g. a tuned model or corpus). A role is a collection of permitted operations that allows users to perform specific actions on PaLM API resources. To make them available to users, groups, or service accounts, you assign roles. When you assign a role, you grant the permissions that the role contains. There are three concentric roles; each role is a superset of the previous role's permitted operations:

- reader can use the resource (e.g. tuned model, corpus) for inference
- writer has reader's permissions and can additionally edit and share
- owner has writer's permissions and can additionally delete
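
A hedged sketch of what a single permission might look like as JSON; the field names and the placeholder email are assumptions for illustration:

```json
{
  "granteeType": "USER",
  "emailAddress": "user@example.com",
  "role": "READER"
}
```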

Collection of sources that provide answers about the features of a given place in Google Maps. Each PlaceAnswerSources message corresponds to a specific place in Google Maps. The Google Maps tool uses these sources to answer questions about features of the place (e.g. "does Bar Foo have Wifi?" or "is Foo Bar wheelchair accessible?"). Currently only review snippets are supported as sources.

The configuration for the prebuilt speaker to use.

Request message for [PredictionService.PredictLongRunning].

Request message for PredictionService.Predict.

Response message for [PredictionService.Predict].

A set of the feedback metadata for the prompt specified in GenerateContentRequest.content.

Request for RegisterFiles.

Response for RegisterFiles.

Retrieval config.

Metadata related to retrieval in the grounding flow.

Chunk from context retrieved by the file search tool.

Encapsulates a snippet of a user review that answers a question about the features of a specific place in Google Maps.

Safety feedback for an entire request. This field is populated if content in the input and/or response is blocked due to safety settings. SafetyFeedback may not exist for every HarmCategory. Each SafetyFeedback will return the safety settings used by the request as well as the lowest HarmProbability that should be allowed in order to return a result.

Safety rating for a piece of content. The safety rating contains the category of harm and the harm probability level in that category for a piece of content. Content is classified for safety across a number of harm categories and the probability of the harm classification is included here.

Safety setting, affecting the safety-blocking behavior. Passing a safety setting for a category changes the allowed probability that content is blocked.
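
An illustrative safety setting; the category and threshold enum values shown are examples, not an exhaustive or guaranteed set:

```json
{
  "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
  "threshold": "BLOCK_ONLY_HIGH"
}
```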

The Schema object allows the definition of input and output data types. These types can be objects, but also primitives and arrays. Represents a select subset of an OpenAPI 3.0 schema object.
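
A small sketch of a Schema describing an object with one required string field and one optional integer field, using the OpenAPI-style subset described above (property names are hypothetical):

```json
{
  "type": "OBJECT",
  "properties": {
    "title": { "type": "STRING" },
    "rating": { "type": "INTEGER" }
  },
  "required": ["title"]
}
```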

Google search entry point.

Identifier for a Chunk retrieved via Semantic Retriever specified in the GenerateAnswerRequest using SemanticRetrieverConfig.

Configuration for retrieving grounding content from a Corpus or Document created using the Semantic Retriever API.

The configuration for a single speaker in a multi speaker setup.

The speech generation config.

The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each Status message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide.
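
A minimal sketch of a Status payload with the three pieces of data named above (the code and message shown correspond to a generic invalid-argument error):

```json
{
  "code": 3,
  "message": "Request contains an invalid argument.",
  "details": []
}
```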

A transport that can stream HTTP requests and responses.

User provided string values assigned to a single metadata key.

Output text returned from a model.

Text given to the model as a prompt. The model will use this TextPrompt to generate a text completion.

Config for thinking features.

Tool details that the model may use to generate a response. A Tool is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside the knowledge and scope of the model.

The Tool configuration containing parameters for specifying Tool use in the request.
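
A hedged sketch of a tool configuration that constrains the model to calling one named function; the function name is hypothetical and the field names are assumed from the REST wire format:

```json
{
  "functionCallingConfig": {
    "mode": "ANY",
    "allowedFunctionNames": ["get_current_weather"]
  }
}
```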

Candidates with top log probabilities at each decoding step.

Request to transfer the ownership of the tuned model.

Response from TransferOwnership.

A fine-tuned model created using ModelService.CreateTunedModel.

Tuned model as a source for training a new model.

A single example for tuning.

A set of tuning examples. Can be training or validation data.

Record for a single tuning step.

Tuning tasks that create tuned models.

Request for UploadToFileSearchStore.

Tool to support URL context retrieval.

Metadata related to the URL context retrieval tool.

Context of a single URL retrieval.

Metadata on the generation request's token usage.

Metadata for a video File.

Metadata describing the input video content.

The configuration for the voice to use.

Chunk from the web.

Configuration for a whitespace chunking algorithm (whitespace-delimited).