API Reference google_api_ai_platform v0.13.0
Modules
API client metadata for GoogleApi.AIPlatform.V1.
API calls for all endpoints tagged Datasets.
API calls for all endpoints tagged Endpoints.
API calls for all endpoints tagged Projects.
API calls for all endpoints tagged Publishers.
Handle Tesla connections for GoogleApi.AIPlatform.V1.
Generate video response.
RAI scores returned for a generated image.
Attributes

- detectedLabels (type: list(GoogleApi.AIPlatform.V1.Model.CloudAiLargeModelsVisionRaiInfoDetectedLabels.t), default: nil) - The list of detected labels for different RAI categories.
- modelName (type: String.t, default: nil) - The model name used to index into the RaiFilterConfig map. Will either be one of the imagegeneration@002-006, imagen-3.0-... API endpoint names, or an internal name used for mapping to a different filter config (genselfie, ai_watermark) than its API endpoint.
- raiCategories (type: list(String.t), default: nil) - List of RAI categories' information to return.
- scores (type: list(number()), default: nil) - List of RAI scores mapping to the RAI categories. Rounded to 1 decimal place.
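For orientation, here is a minimal Elixir sketch of reading these fields (the parent module name CloudAiLargeModelsVisionRaiInfo is inferred from the sibling module referenced above, and all values are hypothetical):

    alias GoogleApi.AIPlatform.V1.Model.CloudAiLargeModelsVisionRaiInfo

    # scores[i] corresponds to raiCategories[i], rounded to 1 decimal place.
    rai_info = %CloudAiLargeModelsVisionRaiInfo{
      modelName: "imagegeneration@006",
      raiCategories: ["violence", "hate"],
      scores: [0.1, 0.0]
    }

    Enum.zip(rai_info.raiCategories, rai_info.scores)
    # => [{"violence", 0.1}, {"hate", 0.0}]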
Filters returning a list of detected labels, scores, and bounding boxes.
An integer bounding box of original pixels of the image for the detected labels.
The properties for a detected entity from the rai signal.
Attributes

- namedBoundingBoxes (type: list(GoogleApi.AIPlatform.V1.Model.CloudAiLargeModelsVisionNamedBoundingBox.t), default: nil) - Class labels and coordinates of the bounding boxes that failed the semantic filtering.
- passedSemanticFilter (type: boolean(), default: nil) - This response is added when the semantic filter config is turned on in EditConfig. It reports whether this image passed the semantic filter. If passed_semantic_filter is false, the bounding box information will be populated for the user to check what caused the semantic filter to fail.
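A hedged sketch of how a caller might branch on this response (a plain map stands in for the struct; field names are taken from the list above):

    resp = %{passedSemanticFilter: false, namedBoundingBoxes: []}

    if resp.passedSemanticFilter do
      :ok
    else
      # When the filter fails, the bounding boxes explain what caused it.
      resp.namedBoundingBoxes
    end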
Create API error message for Vertex Pipeline.
Message that represents an arbitrary HTTP body. It should only be used for payload formats that can't be represented as JSON, such as raw binary or an HTML page. This message can be used both in streaming and non-streaming API methods, in the request as well as the response. It can be used as a top-level request field, which is convenient if one wants to extract parameters from either the URL or HTTP template into the request fields and also wants access to the raw HTTP body. Example: message GetResourceRequest { // A unique request id. string request_id = 1; // The raw HTTP body is bound to this field. google.api.HttpBody http_body = 2; } service ResourceService { rpc GetResource(GetResourceRequest) returns (google.api.HttpBody); rpc UpdateResource(google.api.HttpBody) returns (google.protobuf.Empty); } Example with streaming methods: service CaldavService { rpc GetCalendar(stream google.api.HttpBody) returns (stream google.api.HttpBody); rpc UpdateCalendar(stream google.api.HttpBody) returns (stream google.api.HttpBody); } Use of this type only changes how the request and response bodies are handled; all other features will continue to work unchanged.
Parameters that configure the active learning pipeline. Active learning will label the data incrementally by several iterations. For every iteration, it will select a batch of data based on the sampling strategy.
Request message for MetadataService.AddContextArtifactsAndExecutions.
Response message for MetadataService.AddContextArtifactsAndExecutions.
Request message for MetadataService.AddContextChildren.
Response message for MetadataService.AddContextChildren.
Request message for MetadataService.AddExecutionEvents.
Response message for MetadataService.AddExecutionEvents.
Request message for VizierService.AddTrialMeasurement.
Used to assign a specific AnnotationSpec to a particular area of a DataItem or to the whole DataItem.
Identifies a concept with which DataItems may be annotated.
Instance of a general artifact.
Metadata information for NotebookService.AssignNotebookRuntime.
Request message for NotebookService.AssignNotebookRuntime.
Attribution that explains a particular prediction output.
A description of resources that to a large degree are decided by Vertex AI and require only a modest additional configuration. Each Model supporting these resources documents its specific guidelines.
The metric specification that defines the target resource utilization (CPU utilization, accelerator's duty cycle, and so on) for calculating the desired replica count.
The storage details for Avro input content.
Request message for PipelineService.BatchCancelPipelineJobs.
Details of operations that perform batch create Features.
Request message for FeaturestoreService.BatchCreateFeatures.
Response message for FeaturestoreService.BatchCreateFeatures.
Request message for TensorboardService.BatchCreateTensorboardRuns.
Response message for TensorboardService.BatchCreateTensorboardRuns.
Request message for TensorboardService.BatchCreateTensorboardTimeSeries.
Response message for TensorboardService.BatchCreateTensorboardTimeSeries.
A description of resources that are used for performing batch operations, are dedicated to a Model, and need manual configuration.
Request message for PipelineService.BatchDeletePipelineJobs.
Request message for ModelService.BatchImportEvaluatedAnnotations
Response message for ModelService.BatchImportEvaluatedAnnotations
Request message for ModelService.BatchImportModelEvaluationSlices
Response message for ModelService.BatchImportModelEvaluationSlices
Runtime operation information for MigrationService.BatchMigrateResources.
Represents a partial result in batch migration operation for one MigrateResourceRequest.
Request message for MigrationService.BatchMigrateResources.
Response message for MigrationService.BatchMigrateResources.
A job that uses a Model to produce predictions on multiple input instances. If predictions for a significant portion of the instances fail, the job may finish without attempting predictions for all remaining instances.
Configures the input to BatchPredictionJob. See Model.supported_input_storage_formats for Model's supported input formats, and how instances should be expressed via any of them.
Configuration defining how to transform batch prediction input instances to the instances that the Model accepts.
Configures the output of BatchPredictionJob. See Model.supported_output_storage_formats for supported output formats, and how predictions are expressed via any of them.
Further describes this job's output. Supplements output_config.
Details of operations that batch read Feature values.
Request message for FeaturestoreService.BatchReadFeatureValues.
Selects Features of an EntityType to read values of and specifies read settings.
Describes pass-through fields in the read_instance source.
Response message for FeaturestoreService.BatchReadFeatureValues.
Response message for TensorboardService.BatchReadTensorboardTimeSeriesData.
The BigQuery location for the output content.
The BigQuery location for the input content.
Input for bleu metric.
Spec for bleu instance.
Bleu metric value for an instance.
Results for bleu metric.
Spec for bleu score metric - calculates the precision of n-grams in the prediction as compared to the reference - returns a score ranging between 0 and 1.
Content blob. It's preferred to send as text directly rather than raw bytes.
Config for blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383
A list of boolean values.
Request message for JobService.CancelBatchPredictionJob.
Request message for JobService.CancelCustomJob.
Request message for JobService.CancelDataLabelingJob.
Request message for JobService.CancelHyperparameterTuningJob.
Request message for JobService.CancelNasJob.
Request message for PipelineService.CancelPipelineJob.
Request message for PipelineService.CancelTrainingPipeline.
Request message for GenAiTuningService.CancelTuningJob.
A response candidate generated from the model.
This message will be placed in the metadata field of a google.longrunning.Operation associated with a CheckTrialEarlyStoppingState request.
Request message for VizierService.CheckTrialEarlyStoppingState.
Response message for VizierService.CheckTrialEarlyStoppingState.
Source attributions for content.
A collection of source attributions for a piece of content.
Input for coherence metric.
Spec for coherence instance.
Spec for coherence result.
Spec for coherence score metric.
Request message for VizierService.CompleteTrial.
Success and error statistics of processing multiple entities (for example, DataItems or structured data rows) in batch.
Request message for ComputeTokens RPC call.
Response message for ComputeTokens RPC call.
The Container Registry location for the container image.
The spec of a Container.
The base structured datatype containing multi-part content of a message. A Content includes a role field designating the producer of the Content and a parts field containing multi-part data that contains the content of the message turn.
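As a rough sketch, a single user turn could be assembled like this (module names follow the GoogleCloudAiplatformV1* naming used elsewhere on this page; the text field on Part is an assumption for illustration):

    alias GoogleApi.AIPlatform.V1.Model.{
      GoogleCloudAiplatformV1Content,
      GoogleCloudAiplatformV1Part
    }

    # role names the producer of the turn; parts carries the multi-part data.
    content = %GoogleCloudAiplatformV1Content{
      role: "user",
      parts: [%GoogleCloudAiplatformV1Part{text: "Hello"}]
    }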
Instance of a general context.
Details of ModelService.CopyModel operation.
Request message for ModelService.CopyModel.
Response message of ModelService.CopyModel operation.
Request message for PredictionService.CountTokens.
Response message for PredictionService.CountTokens.
Runtime operation information for DatasetService.CreateDataset.
Runtime operation information for DatasetService.CreateDatasetVersion.
Runtime operation information for CreateDeploymentResourcePool method.
Request message for CreateDeploymentResourcePool method.
Runtime operation information for EndpointService.CreateEndpoint.
Details of operations that perform create EntityType.
Details of operations that perform create FeatureGroup.
Details of operations that perform create FeatureOnlineStore.
Details of operations that perform create Feature.
Request message for FeaturestoreService.CreateFeature. Request message for FeatureRegistryService.CreateFeature.
Details of operations that perform create FeatureView.
Details of operations that perform create Featurestore.
Runtime operation information for IndexEndpointService.CreateIndexEndpoint.
Runtime operation information for IndexService.CreateIndex.
Details of operations that perform MetadataService.CreateMetadataStore.
Metadata information for NotebookService.CreateNotebookExecutionJob.
Metadata information for NotebookService.CreateNotebookRuntimeTemplate.
Details of operations that perform create PersistentResource.
Request message for PipelineService.CreatePipelineJob.
Details of operations that perform create FeatureGroup.
Runtime operation information for SpecialistPoolService.CreateSpecialistPool.
Details of operations that perform create Tensorboard.
Request message for TensorboardService.CreateTensorboardRun.
Request message for TensorboardService.CreateTensorboardTimeSeries.
The storage details for CSV output content.
The storage details for CSV input content.
Represents a job that runs custom workloads such as a Docker container or a Python package. A CustomJob can have multiple worker pools and each worker pool can have its own machine and input spec. A CustomJob will be cleaned up once the job enters terminal state (failed or succeeded).
Represents the spec of a CustomJob.
A piece of data in a Dataset. Could be an image, a video, a document or plain text.
A container for a single DataItem and Annotations on it.
DataLabelingJob is used to trigger a human labeling job on unlabeled data from the following Dataset
A collection of DataItems and Annotations on them.
Describes the dataset version.
A description of resources that are dedicated to a DeployedModel, and that need a higher degree of manual configuration.
Details of operations that delete Feature values.
Request message for FeaturestoreService.DeleteFeatureValues.
Message to select entity. If an entity id is selected, all the feature values corresponding to the entity id will be deleted, including the entityId.
Message to select time range and feature. Values of the selected feature generated within an inclusive time range will be deleted. Using this option permanently deletes the feature values from the specified feature IDs within the specified time range. This might include data from the online storage. If you want to retain any deleted historical data in the online storage, you must re-ingest it.
Response message for FeaturestoreService.DeleteFeatureValues.
Response message if the request uses the SelectEntity option.
Response message if the request uses the SelectTimeRangeAndFeature option.
Details of operations that perform MetadataService.DeleteMetadataStore.
Details of operations that perform deletes of any entities.
Runtime operation information for IndexEndpointService.DeployIndex.
Request message for IndexEndpointService.DeployIndex.
Response message for IndexEndpointService.DeployIndex.
Runtime operation information for EndpointService.DeployModel.
Request message for EndpointService.DeployModel.
Response message for EndpointService.DeployModel.
A deployment of an Index. IndexEndpoints contain one or more DeployedIndexes.
Used to set up the auth on the DeployedIndex's private endpoint.
Configuration for an authentication provider, including support for JSON Web Token (JWT).
Points to a DeployedIndex.
A deployment of a Model. Endpoints contain one or more DeployedModels.
Points to a DeployedModel.
A description of resources that can be shared by multiple DeployedModels, whose underlying specification consists of a DedicatedResources.
Request message for PredictionService.DirectPredict.
Response message for PredictionService.DirectPredict.
Request message for PredictionService.DirectRawPredict.
Response message for PredictionService.DirectRawPredict.
Represents the spec of disk options.
A list of double values.
Represents a customer-managed encryption key spec that can be applied to a top-level resource.
Models are deployed into it, and afterwards Endpoint is called to obtain predictions and explanations.
Selector for entityId. Getting ids from the given source.
An entity type is a type of object in a system that needs to be modeled and about which information needs to be stored. For example, driver is an entity type, and driver0 is an instance of the entity type driver.
Represents an environment variable present in a Container or Python Module.
Model error analysis for each annotation.
Attributed items for a given annotation, typically representing neighbors from the training sets constrained by the query type.
Request message for EvaluationService.EvaluateInstances.
Response message for EvaluationService.EvaluateInstances.
True positive, false positive, or false negative. EvaluatedAnnotation is only available under ModelEvaluationSlice with slice of annotationSpec dimension.
Explanation result of the prediction produced by the Model.
An edge describing the relationship between an Artifact and an Execution in a lineage graph.
Input for exact match metric.
Spec for exact match instance.
Exact match metric value for an instance.
Results for exact match metric.
Spec for exact match metric - returns 1 if prediction and reference exactly matches, otherwise 0.
Example-based explainability that returns the nearest neighbors from the provided dataset.
The Cloud Storage input instances.
Overrides for example-based explanations.
Restrictions namespace for example-based explanations overrides.
Instance of a general execution.
Request message for PredictionService.Explain.
Response message for PredictionService.Explain.
Explanation of a prediction (provided in PredictResponse.predictions) produced by the Model on a given instance.
Metadata describing the Model's input and output for explanation.
Metadata of the input of a feature. Fields other than InputMetadata.input_baselines are applicable only for Models that are using Vertex AI-provided images for Tensorflow.
Domain details of the input feature value. Provides numeric information about the feature, such as its range (min, max). If the feature has been pre-processed, for example with z-scoring, then it provides information about how to recover the original feature. For example, if the input feature is an image and it has been pre-processed to obtain 0-mean and stddev = 1 values, then original_mean, and original_stddev refer to the mean and stddev of the original feature (e.g. image tensor) from which input feature (with mean = 0 and stddev = 1) was obtained.
Visualization configurations for image explanation.
Metadata of the prediction output to be explained.
The ExplanationMetadata entries that can be overridden at online explanation time.
The input metadata entries to be overridden.
Parameters to configure explaining for Model's predictions.
Specification of Model explanation.
The ExplanationSpec entries that can be overridden at online explanation time.
Describes what part of the Dataset is to be exported, the destination of the export and how to export.
Runtime operation information for DatasetService.ExportData.
Request message for DatasetService.ExportData.
Response message for DatasetService.ExportData.
Details of operations that export Feature values.
Request message for FeaturestoreService.ExportFeatureValues.
Describes exporting all historical Feature values of all entities of the EntityType between [start_time, end_time].
Describes exporting the latest Feature values of all entities of the EntityType between [start_time, snapshot_time].
Response message for FeaturestoreService.ExportFeatureValues.
Assigns input data to training, validation, and test sets based on the given filters; data pieces not matched by any filter are ignored. Currently only supported for Datasets containing DataItems. If any of the filters in this message are to match nothing, then they can be set as '-' (the minus sign). Supported only for unstructured Datasets.
Assigns the input data to training, validation, and test sets as per the given fractions. Any of training_fraction, validation_fraction and test_fraction may optionally be provided; together they must sum to at most 1. If the provided fractions sum to less than 1, the remainder is assigned to sets as decided by Vertex AI. If none of the fractions are set, by default roughly 80% of data is used for training, 10% for validation, and 10% for test.
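For instance, a split satisfying the constraint above (a plain map used as a sketch; keys mirror the snake_case field names):

    split = %{training_fraction: 0.8, validation_fraction: 0.1, test_fraction: 0.1}

    # The provided fractions may sum to at most 1; here they sum to exactly 1.
    split.training_fraction + split.validation_fraction + split.test_fraction
    # => 1.0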
Details of ModelService.ExportModel operation.
Further describes the output of the ExportModel. Supplements ExportModelRequest.OutputConfig.
Request message for ModelService.ExportModel.
Output configuration for the Model export.
Response message of ModelService.ExportModel operation.
Request message for TensorboardService.ExportTensorboardTimeSeriesData.
Response message for TensorboardService.ExportTensorboardTimeSeriesData.
Feature Metadata information. For example, color is a feature that describes an apple.
Vertex AI Feature Group.
Input source type for BigQuery Tables and Views.
A list of historical SnapshotAnalysis or ImportFeaturesAnalysis stats requested by user, sorted by FeatureStatsAnomaly.start_time descending.
Noise sigma by features. Noise sigma represents the standard deviation of the gaussian kernel that will be used to add noise to interpolated inputs prior to computing gradients.
Noise sigma for a single feature.
Vertex AI Feature Online Store provides a centralized repository for serving ML features and embedding indexes at low latency. The Feature Online Store is a top-level container.
Attributes

- cpuUtilizationTarget (type: integer(), default: nil) - Optional. A percentage of the cluster's CPU capacity. Can be from 10% to 80%. When a cluster's CPU utilization exceeds the target that you have set, Bigtable immediately adds nodes to the cluster. When CPU utilization is substantially lower than the target, Bigtable removes nodes. If not set, defaults to 50%.
- maxNodeCount (type: integer(), default: nil) - Required. The maximum number of nodes to scale up to. Must be greater than or equal to min_node_count, and less than or equal to 10 times min_node_count.
- minNodeCount (type: integer(), default: nil) - Required. The minimum number of nodes to scale down to. Must be greater than or equal to 1.
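A quick sanity-check sketch of the documented constraints (a plain map for illustration; values are hypothetical):

    scaling = %{cpuUtilizationTarget: 50, minNodeCount: 2, maxNodeCount: 10}

    # min >= 1, max <= 10 * min, and the CPU target lies within 10..80.
    scaling.minNodeCount >= 1 and
      scaling.maxNodeCount <= 10 * scaling.minNodeCount and
      scaling.cpuUtilizationTarget in 10..80
    # => true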
The dedicated serving endpoint for this FeatureOnlineStore. Only needs to be set when you choose the Optimized storage type. The public endpoint is provisioned by default.
Optimized storage type
Selector for Features of an EntityType.
Stats and Anomaly generated at specific timestamp for specific Feature. The start_time and end_time are used to define the time range of the dataset that current stats belongs to, e.g. prediction traffic is bucketed into prediction datasets by time window. If the Dataset is not defined by time window, start_time = end_time. Timestamp of the stats and anomalies always refers to end_time. Raw stats and anomalies are stored in stats_uri or anomaly_uri in the tensorflow defined protos. Field data_stats contains almost identical information with the raw stats in Vertex AI defined proto, for UI to display.
Value for a feature.
A destination location for Feature values and format.
Container for list of values.
Metadata of feature value.
FeatureView is a representation of the values that the FeatureOnlineStore will serve based on its syncConfig.
Lookup key for a feature view.
ID that is composed of several parts (columns).
A Feature Registry source for features that need to be synced to Online Store.
Features belonging to a single feature group that will be synced to Online Store.
Configuration for vector indexing.
Configuration options for using brute force search.
Configuration options for the tree-AH algorithm.
FeatureViewSync is a representation of a sync operation that copies data from the data source to the Feature View in the Online Store.
Configuration for Sync. Only one option is set.
Summary from the Sync job. For continuous syncs, the summary is updated periodically. For batch syncs, it gets updated on completion of the sync.
Vertex AI Feature Store provides a centralized repository for organizing, storing, and serving ML features. The Featurestore is a top-level container for your features and their values.
Configuration of how features in Featurestore are monitored.
Configuration of the Featurestore's ImportFeature Analysis Based Monitoring. This type of analysis generates statistics for values of each Feature imported by every ImportFeatureValues operation.
Configuration of the Featurestore's Snapshot Analysis Based Monitoring. This type of analysis generates statistics for each Feature based on a snapshot of the latest feature value of each entity every monitoring_interval.
The config for Featurestore Monitoring threshold.
OnlineServingConfig specifies the details for provisioning online serving resources.
Online serving scaling configuration. If min_node_count and max_node_count are set to the same value, the cluster will be configured with a fixed number of nodes (no auto-scaling).
Request message for FeatureOnlineStoreService.FetchFeatureValues. All the features under the requested feature view will be returned.
Response message for FeatureOnlineStoreService.FetchFeatureValues
Response structure in the format of key (feature name) and (feature) value pair.
Feature name & value pair.
URI based data.
Assigns input data to training, validation, and test sets based on the given filters; data pieces not matched by any filter are ignored. Currently only supported for Datasets containing DataItems. If any of the filters in this message are to match nothing, then they can be set as '-' (the minus sign). Supported only for unstructured Datasets.
The request message for MatchService.FindNeighbors.
A query to find a number of the nearest neighbors (most similar vectors) of a vector.
Parameters for RRF algorithm that combines search results.
The response message for MatchService.FindNeighbors.
Nearest neighbors for one query.
A neighbor of the query vector.
Input for fluency metric.
Spec for fluency instance.
Spec for fluency result.
Spec for fluency score metric.
Assigns the input data to training, validation, and test sets as per the given fractions. Any of training_fraction, validation_fraction and test_fraction may optionally be provided; together they must sum to at most 1. If the provided fractions sum to less than 1, the remainder is assigned to sets as decided by Vertex AI. If none of the fractions are set, by default roughly 80% of data is used for training, 10% for validation, and 10% for test.
Input for fulfillment metric.
Spec for fulfillment instance.
Spec for fulfillment result.
Spec for fulfillment metric.
A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values.
Function calling config.
Structured representation of a function declaration as defined by the OpenAPI 3.0 specification. Included in this declaration are the function name and parameters. This FunctionDeclaration is a representation of a block of code that can be used as a Tool by the model and executed by the client.
The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function; it is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction.
The Google Cloud Storage location where the output is to be written to.
The Google Cloud Storage location for the input content.
Request message for [PredictionService.GenerateContent].
Response message for [PredictionService.GenerateContent].
Content filter results for a prompt sent in the request.
Usage metadata about response(s).
Generation config.
The configuration for routing the request to a specific model.
When automated routing is specified, the routing will be determined by the pretrained routing model and customer provided model routing preference.
When manual routing is set, the specified model will be used directly.
Generic Metadata shared by all operations.
Contains information about the source of the models generated from Generative AI Studio.
Tool to retrieve public web data for grounding, powered by Google.
Input for groundedness metric.
Spec for groundedness instance.
Spec for groundedness result.
Spec for groundedness metric.
Grounding chunk.
Chunk from context retrieved by the retrieval tools.
Chunk from the web.
Metadata returned to client when grounding is enabled.
Grounding support.
Represents a HyperparameterTuningJob. A HyperparameterTuningJob has a Study specification and multiple CustomJobs with identical CustomJob specification.
Matcher for Features of an EntityType by Feature ID.
Describes the location from where we import data into a Dataset, together with the labels that will be applied to the DataItems and the Annotations.
Runtime operation information for DatasetService.ImportData.
Request message for DatasetService.ImportData.
Response message for DatasetService.ImportData.
Details of operations that perform import Feature values.
Request message for FeaturestoreService.ImportFeatureValues.
Defines the Feature value(s) to import.
Response message for FeaturestoreService.ImportFeatureValues.
Request message for ModelService.ImportModelEvaluation
A representation of a collection of database items organized in a way that allows for approximate nearest neighbor (a.k.a. ANN) search algorithms.
A datapoint of Index.
Crowding tag is a constraint on a neighbor list produced by nearest neighbor search requiring that no more than some value k' of the k neighbors returned have the same value of crowding_attribute.
This field allows restricts to be based on numeric comparisons rather than categorical tokens.
Restriction of a datapoint which describes its attributes (tokens) from each of several attribute categories (namespaces).
Feature embedding vector for sparse index. An array of numbers whose values are located in the specified dimensions.
Indexes are deployed into it. An IndexEndpoint can have multiple DeployedIndexes.
IndexPrivateEndpoints proto is used to provide paths for users to send requests via private endpoints (e.g. private service access, private service connect). To send a request via private service access, use match_grpc_address. To send a request via private service connect, use service_attachment.
Stats of the Index.
Specifies Vertex AI owned input data to be used for training, and possibly evaluating, the Model.
A list of int64 values.
An attribution method that computes the Aumann-Shapley value taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365
Contains information about the Large Model.
A subgraph of the overall lineage graph. Event edges connect Artifact and Execution nodes.
Response message for DatasetService.ListAnnotations.
Response message for MetadataService.ListArtifacts.
Response message for JobService.ListBatchPredictionJobs
Response message for MetadataService.ListContexts.
Response message for JobService.ListCustomJobs
Response message for DatasetService.ListDataItems.
Response message for JobService.ListDataLabelingJobs.
Response message for DatasetService.ListDatasetVersions.
Response message for DatasetService.ListDatasets.
Response message for ListDeploymentResourcePools method.
Response message for EndpointService.ListEndpoints.
Response message for FeaturestoreService.ListEntityTypes.
Response message for MetadataService.ListExecutions.
Response message for FeatureRegistryService.ListFeatureGroups.
Response message for FeatureOnlineStoreAdminService.ListFeatureOnlineStores.
Response message for FeatureOnlineStoreAdminService.ListFeatureViewSyncs.
Response message for FeatureOnlineStoreAdminService.ListFeatureViews.
Response message for FeaturestoreService.ListFeatures. Response message for FeatureRegistryService.ListFeatures.
Response message for FeaturestoreService.ListFeaturestores.
Response message for JobService.ListHyperparameterTuningJobs
Response message for IndexEndpointService.ListIndexEndpoints.
Response message for IndexService.ListIndexes.
Response message for MetadataService.ListMetadataSchemas.
Response message for MetadataService.ListMetadataStores.
Response message for JobService.ListModelDeploymentMonitoringJobs.
Response message for ModelService.ListModelEvaluationSlices.
Response message for ModelService.ListModelEvaluations.
Response message for ModelService.ListModelVersions
Response message for ModelService.ListModels
Response message for JobService.ListNasJobs
Response message for JobService.ListNasTrialDetails
Response message for [NotebookService.CreateNotebookExecutionJob]
Response message for NotebookService.ListNotebookRuntimeTemplates.
Response message for NotebookService.ListNotebookRuntimes.
Request message for VizierService.ListOptimalTrials.
Response message for VizierService.ListOptimalTrials.
Response message for PersistentResourceService.ListPersistentResources
Response message for PipelineService.ListPipelineJobs
Response message for DatasetService.ListSavedQueries.
Response message for ScheduleService.ListSchedules
Response message for SpecialistPoolService.ListSpecialistPools.
Response message for VizierService.ListStudies.
Response message for TensorboardService.ListTensorboardExperiments.
Response message for TensorboardService.ListTensorboardRuns.
Response message for TensorboardService.ListTensorboardTimeSeries.
Response message for TensorboardService.ListTensorboards.
Response message for PipelineService.ListTrainingPipelines
Response message for VizierService.ListTrials.
Response message for GenAiTuningService.ListTuningJobs
Request message for VizierService.LookupStudy.
Specification of a single machine.
Manual batch tuning parameters.
A message representing a Measurement of a Trial. A Measurement contains the Metrics obtained by executing a Trial using suggested hyperparameter values.
A message representing a metric in the measurement.
Request message for ModelService.MergeVersionAliases.
Instance of a general MetadataSchema.
Instance of a metadata store. Contains a set of metadata that can be queried.
Represents Dataplex integration settings.
Represents state information for a MetadataStore.
Represents one resource that exists in automl.googleapis.com, datalabeling.googleapis.com or ml.googleapis.com.
Represents one Dataset in automl.googleapis.com.
Represents one Model in automl.googleapis.com.
Represents one Dataset in datalabeling.googleapis.com.
Represents one AnnotatedDataset in datalabeling.googleapis.com.
Represents one model Version in ml.googleapis.com.
Config of migrating one resource from automl.googleapis.com, datalabeling.googleapis.com and ml.googleapis.com to Vertex AI.
Config for migrating Dataset in automl.googleapis.com to Vertex AI's Dataset.
Config for migrating Model in automl.googleapis.com to Vertex AI's Model.
Config for migrating Dataset in datalabeling.googleapis.com to Vertex AI's Dataset.
Config for migrating AnnotatedDataset in datalabeling.googleapis.com to Vertex AI's SavedQuery.
Config for migrating version in ml.googleapis.com to Vertex AI's Model.
Describes a successfully migrated resource.
A trained machine learning Model.
User input field to specify the base model source. Currently it only supports specifying Model Garden models and Genie models.
Specification of a container for serving predictions. Some fields in this message correspond to fields in the Kubernetes Container v1 core specification.
Stats of the data used to train or evaluate the Model.
ModelDeploymentMonitoringBigQueryTable specifies the BigQuery table name as well as some information of the logs stored in this table.
Represents a job that runs periodically to monitor the deployed models in an endpoint. It will analyze the logged training & prediction data to detect any abnormal behaviors.
All metadata of most recent monitoring pipelines.
ModelDeploymentMonitoringObjectiveConfig contains the pair of deployed_model_id to ModelMonitoringObjectiveConfig.
The config for scheduling monitoring job.
A collection of metrics calculated by comparing Model's predictions on all of the test data against annotations from the test data.
Attributes

- explanationSpec (type: GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1ExplanationSpec.t, default: nil) - Explanation spec details.
- explanationType (type: String.t, default: nil) - Explanation type. For AutoML Image Classification models, possible values are: image-integrated-gradients, image-xrai.
A collection of metrics calculated by comparing Model's predictions on a slice of the test data against ground truth annotations.
Definition of a slice.
Specification for how the data should be sliced.
A range of values for slice(s). low is inclusive, high is exclusive.
Specification message containing the config for this SliceSpec. When kind is selected as value and/or range, only a single slice will be computed. When all_values is present, a separate slice will be computed for each possible label/value for the corresponding key in config. Examples, with feature zip_code with values 12345, 23334, 88888 and feature country with values "US", "Canada", "Mexico" in the dataset: Example 1: { "zip_code": { "value": { "float_value": 12345.0 } } } A single slice for any data with zip_code 12345 in the dataset. Example 2: { "zip_code": { "range": { "low": 12345, "high": 20000 } } } A single slice containing data where the zip_codes are between 12345 and 20000. For this example, data with the zip_code of 12345 will be in this slice. Example 3: { "zip_code": { "range": { "low": 10000, "high": 20000 } }, "country": { "value": { "string_value": "US" } } } A single slice containing data where the zip_codes are between 10000 and 20000 and the country is "US". For this example, data with the zip_code of 12345 and country "US" will be in this slice. Example 4: { "country": {"all_values": { "value": true } } } Three slices are computed, one for each unique country in the dataset. Example 5: { "country": { "all_values": { "value": true } }, "zip_code": { "value": { "float_value": 12345.0 } } } Three slices are computed, one for each unique country in the dataset where the zip_code is also 12345. For this example, data with zip_code 12345 and country "US" will be in one slice, zip_code 12345 and country "Canada" in another slice, and zip_code 12345 and country "Mexico" in another slice, totaling 3 slices.
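Example 3 above, written out as a plain Elixir map (a sketch only; in practice the generated struct modules would carry these values):

    config = %{
      "zip_code" => %{"range" => %{"low" => 10_000, "high" => 20_000}},
      "country" => %{"value" => %{"string_value" => "US"}}
    }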
Single value that supports strings and floats.
Aggregated explanation metrics for a Model over a set of instances.
Represents export format supported by the Model. All formats export to Google Cloud Storage.
Contains information about the source of the models generated from Model Garden.
The alert config for model monitoring.
The config for email alert.
The objective configuration for model monitoring, including the information needed to detect anomalies for one particular model.
The config for integrating with Vertex Explainable AI. Only applicable if the Model has explanation_spec populated.
Output from BatchPredictionJob for Model Monitoring baseline dataset, which can be used to generate baseline attribution scores.
The config for Prediction data drift detection.
Training Dataset information.
The config for Training & Prediction data skew detection. It specifies the training dataset sources and the skew detection parameters.
Statistics and anomalies generated by Model Monitoring.
Historical Stats (and Anomalies) for a specific Feature.
Contains information about the original Model if this Model is a copy.
Detail description of the source information of the model.
Runtime operation information for IndexEndpointService.MutateDeployedIndex.
Response message for IndexEndpointService.MutateDeployedIndex.
Runtime operation information for EndpointService.MutateDeployedModel.
Request message for EndpointService.MutateDeployedModel.
Response message for EndpointService.MutateDeployedModel.
Represents a Neural Architecture Search (NAS) job.
Represents a uCAIP NasJob output.
The output of a multi-trial Neural Architecture Search (NAS) job.
Represents the spec of a NasJob.
The spec of multi-trial Neural Architecture Search (NAS).
Represents a metric to optimize.
Represent spec for search trials.
Represent spec for train trials.
Represents a uCAIP NasJob trial.
Represents a NasTrial's details along with its parameters. If there is a corresponding train NasTrial, the train NasTrial is also returned.
A query to find a number of similar entities.
The embedding vector.
Numeric filter is used to search a subset of the entities by using boolean rules on numeric columns. For example: Database Point 0: {name: "a" value_int: 42} {name: "b" value_float: 1.0} Database Point 1: {name: "a" value_int: 10} {name: "b" value_float: 2.0} Database Point 2: {name: "a" value_int: -1} {name: "b" value_float: 3.0} Query: {name: "a" value_int: 12 operator: LESS} // Matches Point 1, 2 {name: "b" value_float: 2.0 operator: EQUAL} // Matches Point 1
Parameters that can be overridden in each query to tune query latency and recall.
String filter is used to search a subset of the entities by using boolean rules on string columns. For example: if a query specifies string filter with 'name = color, allow_tokens = {red, blue}, deny_tokens = {purple}', then that query will match entities that are red or blue, but if those points are also purple, they will be excluded even if they are red/blue. Only string filter is supported for now; numeric filter will be supported in the near future.
Runtime operation metadata with regard to Matching Engine Index.
Attributes

- invalidRecordCount (type: String.t, default: nil) - Number of records in this file we skipped due to validation errors.
- invalidSparseRecordCount (type: String.t, default: nil) - Number of sparse records in this file we skipped due to validation errors.
- partialErrors (type: list(GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1NearestNeighborSearchOperationMetadataRecordError.t), default: nil) - Detailed information on the partial failures encountered for those invalid records that couldn't be parsed. Up to 50 partial errors will be reported.
- sourceGcsUri (type: String.t, default: nil) - Cloud Storage URI pointing to the original file in the user's bucket.
- validRecordCount (type: String.t, default: nil) - Number of records in this file that were successfully processed.
- validSparseRecordCount (type: String.t, default: nil) - Number of sparse records in this file that were successfully processed.
Attributes

- embeddingId (type: String.t, default: nil) - Empty if the embedding id failed to parse.
- errorMessage (type: String.t, default: nil) - A human-readable message shown to the user to help them fix the error. Note that this message may change from time to time; your code should check against error_type as the source of truth.
- errorType (type: String.t, default: nil) - The error type of this record.
- rawRecord (type: String.t, default: nil) - The original content of this record.
- sourceGcsUri (type: String.t, default: nil) - Cloud Storage URI pointing to the original file in the user's bucket.
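A hedged sketch of summarizing these stats after an operation completes (a plain map stands in for the structs above; counts are hypothetical and arrive as strings):

    stats = %{validRecordCount: "98", invalidRecordCount: "2", partialErrors: []}

    String.to_integer(stats.invalidRecordCount)
    # => 2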
Nearest neighbors for one query.
A neighbor of the query vector.
Neighbors for example-based explanations.
Represents a mount configuration for Network File System (NFS) to mount.
The euc configuration of NotebookRuntimeTemplate.
NotebookExecutionJob represents an instance of a notebook execution.
The Dataform Repository containing the input notebook.
The content of the input notebook in ipynb format.
The Cloud Storage uri for the input notebook.
The idle shutdown configuration of NotebookRuntimeTemplate, which contains the idle_timeout as required field.
A runtime is a virtual machine allocated to a particular user for a particular Notebook file on a temporary basis, with lifetime limited to 24 hours.
A template that specifies runtime configurations such as machine type, runtime version, network configurations, etc. Multiple runtimes can be created from a runtime template.
Points to a NotebookRuntimeTemplateRef.
Input for pairwise metric.
Pairwise metric instance. Usually one instance corresponds to one row in an evaluation dataset.
Spec for pairwise metric result.
Spec for pairwise metric.
Input for pairwise question answering quality metric.
Spec for pairwise question answering quality instance.
Spec for pairwise question answering quality result.
Spec for pairwise question answering quality score metric.
Input for pairwise summarization quality metric.
Spec for pairwise summarization quality instance.
Spec for pairwise summarization quality result.
Spec for pairwise summarization quality score metric.
A datatype containing media that is part of a multi-part Content message. A Part consists of data which has an associated datatype. A Part can only contain one of the accepted types in Part.data. A Part must have a fixed IANA MIME type identifying the type and subtype of the media if the inline_data or file_data field is filled with raw bytes.
Request message for JobService.PauseModelDeploymentMonitoringJob.
Request message for ScheduleService.PauseSchedule.
Represents the spec of persistent disk options.
Represents long-lasting resources that are dedicated to users to run custom workloads. A PersistentResource can have multiple node pools and each node pool can have its own machine spec.
An instance of a machine learning PipelineJob.
The runtime detail of PipelineJob.
The runtime config of a PipelineJob.
The type of an input artifact.
The runtime detail of a task execution.
A list of artifact metadata.
A single record of the task status.
The runtime detail of a pipeline executor.
The detail of a container execution. It contains the job names of the lifecycle of a container execution.
The detailed info for a custom job executor.
Pipeline template metadata if PipelineJob.template_uri is from supported template registry. Currently, the only supported registry is Artifact Registry.
Input for pointwise metric.
Pointwise metric instance. Usually one instance corresponds to one row in an evaluation dataset.
Spec for pointwise metric result.
Spec for pointwise metric.
Represents a network port in a container.
Assigns input data to training, validation, and test sets based on the value of a provided key. Supported only for tabular Datasets.
Request message for PredictionService.Predict.
Configuration for logging request-response to a BigQuery table.
Response message for PredictionService.Predict.
Contains the schemata used in Model's predictions and explanations via PredictionService.Predict, PredictionService.Explain and BatchPredictionJob.
Preset configuration for example-based explanations
PrivateEndpoints proto is used to provide paths for users to send requests privately. To send a request via private service access, use predict_http_uri, explain_http_uri or health_http_uri. To send a request via private service connect, use service_attachment.
Represents configuration for private service connect.
Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic.
ExecAction specifies a command to execute.
PscAutomatedEndpoints defines the output of the forwarding rule automatically created by each PscAutomationConfig.
A Model Garden Publisher Model.
Actions that can be taken on this Publisher Model.
Model metadata that is needed for UploadModel or DeployModel/CreateEndpoint requests.
Metadata information about the deployment for managing deployment config.
Configurations for PublisherModel GKE deployment
Multiple setups to deploy the PublisherModel.
Open fine tuning pipelines.
Open notebooks.
The regional resource name or the URI. Key is region, e.g., us-central1, europe-west2, global, etc.
Rest API docs.
A named piece of documentation.
Reference to a resource.
Details of operations that perform MetadataService.PurgeArtifacts.
Request message for MetadataService.PurgeArtifacts.
Response message for MetadataService.PurgeArtifacts.
Details of operations that perform MetadataService.PurgeContexts.
Request message for MetadataService.PurgeContexts.
Response message for MetadataService.PurgeContexts.
Details of operations that perform MetadataService.PurgeExecutions.
Request message for MetadataService.PurgeExecutions.
Response message for MetadataService.PurgeExecutions.
The spec of a Python packaged code.
Response message for QueryDeployedModels method.
Input for question answering correctness metric.
Spec for question answering correctness instance.
Spec for question answering correctness result.
Spec for question answering correctness metric.
Input for question answering helpfulness metric.
Spec for question answering helpfulness instance.
Spec for question answering helpfulness result.
Spec for question answering helpfulness metric.
Input for question answering quality metric.
Spec for question answering quality instance.
Spec for question answering quality result.
Spec for question answering quality score metric.
Input for question answering relevance metric.
Spec for question answering relevance instance.
Spec for question answering relevance result.
Spec for question answering relevance metric.
Request message for PredictionService.RawPredict.
Configuration for the Ray OSS Logs.
Configuration for the Ray metrics.
Configuration information for the Ray cluster. For experimental launch, Ray cluster creation and Persistent cluster creation are 1:1 mapping: We will provision all the nodes within the Persistent cluster as Ray nodes.
Request message for FeaturestoreOnlineServingService.ReadFeatureValues.
Response message for FeaturestoreOnlineServingService.ReadFeatureValues.
Entity view with Feature values.
Container to hold value(s), successive in time, for one Feature from the request.
Metadata for requested Features.
Response header with metadata for the requested ReadFeatureValuesRequest.entity_type and Features.
The request message for MatchService.ReadIndexDatapoints.
The response message for MatchService.ReadIndexDatapoints.
Response message for TensorboardService.ReadTensorboardBlobData.
Response message for TensorboardService.ReadTensorboardSize.
Response message for TensorboardService.ReadTensorboardTimeSeriesData.
Response message for TensorboardService.ReadTensorboardUsage.
Per-month usage data.
Per-user usage data.
Details of operations that perform reboot PersistentResource.
Request message for PersistentResourceService.RebootPersistentResource.
Request message for MetadataService.DeleteContextChildrenRequest.
Response message for MetadataService.RemoveContextChildren.
Request message for IndexService.RemoveDatapoints
Response message for IndexService.RemoveDatapoints
A ReservationAffinity can be used to configure a Vertex AI resource (e.g., a DeployedModel) to draw its Compute Engine resources from a Shared Reservation, or exclusively from on-demand capacity.
Represents the spec of a group of resources of the same type, for example machine type, disk, and accelerators, in a PersistentResource.
The min/max number of replicas allowed if autoscaling is enabled.
Persistent Cluster runtime information as output
Configuration for the runtime on a PersistentResource instance, including but not limited to: Service accounts used to run the workloads. Whether to make it a dedicated Ray Cluster.
Statistics information about resource consumption.
Runtime operation information for DatasetService.RestoreDatasetVersion.
Request message for JobService.ResumeModelDeploymentMonitoringJob.
Request message for ScheduleService.ResumeSchedule.
Defines a retrieval tool that the model can call to access external knowledge.
Input for rouge metric.
Spec for rouge instance.
Rouge metric value for an instance.
Results for rouge metric.
Spec for rouge score metric - calculates the recall of n-grams in prediction as compared to reference - returns a score ranging between 0 and 1.
Input for safety metric.
Spec for safety instance.
Safety rating corresponding to the generated content.
Spec for safety result.
Safety settings.
Spec for safety metric.
Active learning data sampling config. For every active learning labeling iteration, it will select a batch of data based on the sampling strategy.
An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features.
Sampling Strategy for logging, can be for both training and prediction dataset.
Requests are randomly selected.
A SavedQuery is a view of the dataset. It references a subset of annotations by problem type and filters.
One point viewable on a scalar metric plot.
An instance of a Schedule periodically schedules runs to make API calls based on a user-specified time specification and API request type.
Status of a scheduled run.
All parameters related to queuing and scheduling of custom jobs.
Schema is used to define the format of input/output data. Represents a select subset of an OpenAPI 3.0 schema object. More fields may be added in the future as needed.
An entry of mapping between color and AnnotationSpec. The mapping is used in segmentation mask.
Annotation details specific to image object detection.
Annotation details specific to image classification.
Payload of Image DataItem.
The metadata of Datasets that contain Image DataItems.
Annotation details specific to image segmentation.
The mask based segmentation annotation.
Represents a polygon in an image.
Represents a polyline in an image.
Bounding box matching model metrics for a single intersection-over-union threshold and multiple label match confidence thresholds.
Metrics for a single confidence threshold.
Metrics for classification evaluation results.
Attributes

- confidenceThreshold (type: number(), default: nil) - Metrics are computed with an assumption that the Model never returns predictions with score lower than this value.
- confusionMatrix (type: GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SchemaModelevaluationMetricsConfusionMatrix.t, default: nil) - Confusion matrix of the evaluation for this confidence_threshold.
- f1Score (type: number(), default: nil) - The harmonic mean of recall and precision. For summary metrics, it computes the micro-averaged F1 score.
- f1ScoreAt1 (type: number(), default: nil) - The harmonic mean of recallAt1 and precisionAt1.
- f1ScoreMacro (type: number(), default: nil) - Macro-averaged F1 Score.
- f1ScoreMicro (type: number(), default: nil) - Micro-averaged F1 Score.
- falseNegativeCount (type: String.t, default: nil) - The number of ground truth labels that are not matched by a Model created label.
- falsePositiveCount (type: String.t, default: nil) - The number of Model created labels that do not match a ground truth label.
- falsePositiveRate (type: number(), default: nil) - False Positive Rate for the given confidence threshold.
- falsePositiveRateAt1 (type: number(), default: nil) - The False Positive Rate when only considering the label that has the highest prediction score and not below the confidence threshold for each DataItem.
- maxPredictions (type: integer(), default: nil) - Metrics are computed with an assumption that the Model always returns at most this many predictions (ordered by their score, in descending order), but they all still need to meet the confidenceThreshold.
- precision (type: number(), default: nil) - Precision for the given confidence threshold.
- precisionAt1 (type: number(), default: nil) - The precision when only considering the label that has the highest prediction score and not below the confidence threshold for each DataItem.
- recall (type: number(), default: nil) - Recall (True Positive Rate) for the given confidence threshold.
- recallAt1 (type: number(), default: nil) - The Recall (True Positive Rate) when only considering the label that has the highest prediction score and not below the confidence threshold for each DataItem.
- trueNegativeCount (type: String.t, default: nil) - The number of labels that were not created by the Model and, had they been created, would not have matched a ground truth label.
- truePositiveCount (type: String.t, default: nil) - The number of Model created labels that match a ground truth label.
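As a worked note on the f1Score fields above, the harmonic mean of precision and recall can be computed like so (illustrative values):

    f1 = fn precision, recall -> 2 * precision * recall / (precision + recall) end

    f1.(1.0, 0.5)
    # => 0.6666666666666666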
Attributes

- annotationSpecs (type: list(GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SchemaModelevaluationMetricsConfusionMatrixAnnotationSpecRef.t), default: nil) - AnnotationSpecs used in the confusion matrix. For AutoML Text Extraction, a special negative AnnotationSpec with empty id and displayName of "NULL" will be added as the last element.
- rows (type: list(list(any())), default: nil) - Rows in the confusion matrix. The number of rows is equal to the size of annotationSpecs. rows[i][j] is the number of DataItems that have ground truth of annotationSpecs[i] and are predicted as annotationSpecs[j] by the Model being evaluated. For Text Extraction, when annotationSpecs[i] is the last element in annotationSpecs, i.e. the special negative AnnotationSpec, rows[i][j] is the number of predicted entities of annotationSpecs[j] that are not labeled as any of the ground truth AnnotationSpecs. When annotationSpecs[j] is the special negative AnnotationSpec, rows[i][j] is the number of entities that have ground truth of annotationSpecs[i] but are not predicted as an entity by the Model. The value of the last cell, i.e. rows[i][j] where i == j and annotationSpecs[i] is the special negative AnnotationSpec, is always 0.
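To make the rows[i][j] indexing concrete, a toy two-label matrix (hypothetical counts):

    # annotationSpecs: ["cat", "dog"]
    rows = [
      [40, 2],  # ground truth "cat": 40 predicted as "cat", 2 as "dog"
      [5, 53]   # ground truth "dog": 5 predicted as "cat", 53 as "dog"
    ]

    # rows[0][1]: DataItems labeled "cat" but predicted as "dog".
    rows |> Enum.at(0) |> Enum.at(1)
    # => 2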
Metrics for forecasting evaluation results.
Entry for the Quantiles loss type optimization objective.
Metrics for image object detection evaluation results.
Metrics for image segmentation evaluation results.
Attributes

- confidenceThreshold (type: number(), default: nil) - Metrics are computed with an assumption that the model never returns predictions with score lower than this value.
- confusionMatrix (type: GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SchemaModelevaluationMetricsConfusionMatrix.t, default: nil) - Confusion matrix for the given confidence threshold.
- diceScoreCoefficient (type: number(), default: nil) - DSC or the F1 score: the harmonic mean of recall and precision.
- iouScore (type: number(), default: nil) - The intersection-over-union score. The measure of overlap of the annotation's category mask with the ground truth category mask on the DataItem.
- precision (type: number(), default: nil) - Precision for the given confidence threshold.
- recall (type: number(), default: nil) - Recall (True Positive Rate) for the given confidence threshold.
Metrics for general pairwise text generation evaluation results.
Metrics for regression evaluation results.
Metrics for text extraction evaluation results.
Attributes
-
confidenceThreshold
(type:number()
, default:nil
) - Metrics are computed with an assumption that the Model never returns predictions with score lower than this value. -
f1Score
(type:number()
, default:nil
) - The harmonic mean of recall and precision. -
precision
(type:number()
, default:nil
) - Precision for the given confidence threshold. -
recall
(type:number()
, default:nil
) - Recall (True Positive Rate) for the given confidence threshold.
Model evaluation metrics for text sentiment problems.
UNIMPLEMENTED. Track matching model metrics for a single track match threshold and multiple label match confidence thresholds.
Metrics for a single confidence threshold.
The Evaluation metrics given a specific precision_window_length.
Metrics for a single confidence threshold.
Model evaluation metrics for video action recognition.
Model evaluation metrics for video object tracking problems. Evaluates prediction quality of both labeled bounding boxes and labeled tracks (i.e. series of bounding boxes sharing same label and instance ID).
Prediction input format for Image Classification.
Prediction input format for Image Object Detection.
Prediction input format for Image Segmentation.
Prediction input format for Text Classification.
Prediction input format for Text Extraction.
Prediction input format for Text Sentiment.
Prediction input format for Video Action Recognition.
Prediction input format for Video Classification.
Prediction input format for Video Object Tracking.
The configuration for grounding checking.
Single source entry for the grounding checking.
Prediction model parameters for Image Classification.
Prediction model parameters for Image Object Detection.
Prediction model parameters for Image Segmentation.
Prediction model parameters for Video Action Recognition.
Prediction model parameters for Video Classification.
Prediction model parameters for Video Object Tracking.
Prediction output format for Image and Text Classification.
Prediction output format for Image Object Detection.
Prediction output format for Image Segmentation.
Prediction output format for Tabular Classification.
Prediction output format for Tabular Regression.
Prediction output format for Text Extraction.
Prediction output format for Text Sentiment.
Attributes
-
attributeColumns
(type:list(String.t)
, default:nil
) - -
attributeWeights
(type:list(number())
, default:nil
) - -
contextColumns
(type:list(String.t)
, default:nil
) - -
contextWeights
(type:list(number())
, default:nil
) - TFT feature importance values. Each pair for {context/horizon/attribute} should have the same shape since the weight corresponds to the column names. -
horizonColumns
(type:list(String.t)
, default:nil
) - -
horizonWeights
(type:list(number())
, default:nil
) -
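Because each weight list is positionally aligned with its column-name list, per-column importances can be recovered by zipping. A minimal sketch, assuming the struct keys mirror the camelCase field names documented above:

```elixir
defmodule TftImportance do
  # Pair each context column with its TFT importance weight; the two
  # lists have the same shape per the field description above.
  def by_column(%{contextColumns: cols, contextWeights: weights}) do
    Enum.zip(cols, weights) |> Map.new()
  end
end
```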
Prediction output format for Time Series Forecasting.
Prediction output format for Video Action Recognition.
Prediction output format for Video Classification.
Prediction output format for Video Object Tracking.
The fields xMin
, xMax
, yMin
, and yMax
refer to a bounding box, i.e. the rectangle over the video frame pinpointing the found AnnotationSpec. The coordinates are relative to the frame size, and the point 0,0 is in the top left of the frame.
Represents a line of JSONL in the batch prediction output file.
The metadata of Datasets that contain tables data.
Attributes
-
uri
(type:list(String.t)
, default:nil
) - Cloud Storage URI of one or more files. Only CSV files are supported. The first line of the CSV file is used as the header. If there are multiple files, the header is the first line of the lexicographically first file; the other files must either contain the exact same header or omit the header.
The tables Dataset's data source. The Dataset doesn't store the data directly, but only pointer(s) to its data.
Annotation details specific to text classification.
Payload of Text DataItem.
The metadata of Datasets that contain Text DataItems.
Annotation details specific to text extraction.
The metadata of Datasets that contain Text Prompt data.
The text segment inside of DataItem.
Annotation details specific to text sentiment.
The metadata of SavedQuery contains TextSentiment Annotations.
A time period inside of a DataItem that has a time dimension (e.g. video).
The metadata of Datasets that contain time series data.
Attributes
-
uri
(type:list(String.t)
, default:nil
) - Cloud Storage URI of one or more files. Only CSV files are supported. The first line of the CSV file is used as the header. If there are multiple files, the header is the first line of the lexicographically first file; the other files must either contain the exact same header or omit the header.
The time series Dataset's data source. The Dataset doesn't store the data directly, but only pointer(s) to its data.
A TrainingJob that trains and uploads an AutoML Forecasting Model.
Attributes
-
additionalExperiments
(type:list(String.t)
, default:nil
) - Additional experiment flags for the time series forecasting training. -
availableAtForecastColumns
(type:list(String.t)
, default:nil
) - Names of columns that are available and provided when a forecast is requested. These columns contain information for the given entity (identified by the time_series_identifier_column column) that is known at forecast. For example, predicted weather for a specific day. -
contextWindow
(type:String.t
, default:nil
) - The amount of time into the past training and prediction data is used for model training and prediction respectively. Expressed in number of units defined by the data_granularity
field. -
dataGranularity
(type:GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlForecastingInputsGranularity.t
, default:nil
) - Expected difference in time granularity between rows in the data. -
enableProbabilisticInference
(type:boolean()
, default:nil
) - If probabilistic inference is enabled, the model will fit a distribution that captures the uncertainty of a prediction. At inference time, the predictive distribution is used to make a point prediction that minimizes the optimization objective. For example, the mean of a predictive distribution is the point prediction that minimizes RMSE loss. If quantiles are specified, then the quantiles of the distribution are also returned. The optimization objective cannot be minimize-quantile-loss. -
exportEvaluatedDataItemsConfig
(type:GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SchemaTrainingjobDefinitionExportEvaluatedDataItemsConfig.t
, default:nil
) - Configuration for exporting test set predictions to a BigQuery table. If this configuration is absent, then the export is not performed. -
forecastHorizon
(type:String.t
, default:nil
) - The amount of time into the future for which forecasted values for the target are returned. Expressed in number of units defined by the data_granularity
field. -
hierarchyConfig
(type:GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SchemaTrainingjobDefinitionHierarchyConfig.t
, default:nil
) - Configuration that defines the hierarchical relationship of time series and parameters for hierarchical forecasting strategies. -
holidayRegions
(type:list(String.t)
, default:nil
) - The geographical region based on which the holiday effect is applied in modeling by adding a holiday categorical array feature that includes all holidays matching the date. This option is only allowed when data_granularity is day. By default, holiday effect modeling is disabled. To turn it on, specify the holiday region using this option. -
optimizationObjective
(type:String.t
, default:nil
) - Objective function the model is optimizing towards. The training process creates a model that optimizes the value of the objective function over the validation set. The supported optimization objectives: * "minimize-rmse" (default) - Minimize root-mean-squared error (RMSE). * "minimize-mae" - Minimize mean-absolute error (MAE). * "minimize-rmsle" - Minimize root-mean-squared log error (RMSLE). * "minimize-rmspe" - Minimize root-mean-squared percentage error (RMSPE). * "minimize-wape-mae" - Minimize the combination of weighted absolute percentage error (WAPE) and mean-absolute error (MAE). * "minimize-quantile-loss" - Minimize the quantile loss at the quantiles defined in quantiles. * "minimize-mape" - Minimize the mean absolute percentage error. -
quantiles
(type:list(float())
, default:nil
) - Quantiles to use for the minimize-quantile-loss optimization_objective, or for probabilistic inference. Up to 5 quantiles are allowed, with values between 0 and 1, exclusive. Required if the value of optimization_objective is minimize-quantile-loss. Represents the percent quantiles to use for that objective. Quantiles must be unique. -
targetColumn
(type:String.t
, default:nil
) - The name of the column that the Model is to predict values for. This column must be unavailable at forecast. -
timeColumn
(type:String.t
, default:nil
) - The name of the column that identifies time order in the time series. This column must be available at forecast. -
timeSeriesAttributeColumns
(type:list(String.t)
, default:nil
) - Column names that should be used as attribute columns. The value of these columns does not vary as a function of time. For example, store ID or item color. -
timeSeriesIdentifierColumn
(type:String.t
, default:nil
) - The name of the column that identifies the time series. -
trainBudgetMilliNodeHours
(type:String.t
, default:nil
) - Required. The train budget of creating this model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. The training cost of the model will not exceed this budget. The final cost will be attempted to be close to the budget, though it may end up being (even) noticeably smaller, at the backend's discretion. This may especially happen when further model training ceases to provide any improvements. If the budget is set to a value known to be insufficient to train a model for the given dataset, the training won't be attempted and will error. The train budget must be between 1,000 and 72,000 milli node hours, inclusive. -
transformations
(type:list(GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlForecastingInputsTransformation.t)
, default:nil
) - Each transformation will apply a transform function to the given input column, and the result will be used for training. When creating a transformation for a BigQuery Struct column, the column should be flattened using "." as the delimiter. -
unavailableAtForecastColumns
(type:list(String.t)
, default:nil
) - Names of columns that are unavailable when a forecast is requested. These columns contain information for the given entity (identified by the time_series_identifier_column) that is unknown before the forecast. For example, actual weather on a given day. -
validationOptions
(type:String.t
, default:nil
) - Validation options for the data validation component. The available options are: "fail-pipeline" (default) - validate the data and fail the pipeline if validation fails. "ignore-validation" - ignore the results of the validation and continue. -
weightColumn
(type:String.t
, default:nil
) - Column name that should be used as the weight column. Higher values in this column give more importance to the row during model training. The column must have numeric values between 0 and 10000 inclusively; 0 means the row is ignored for training. If weight column field is not set, then all rows are assumed to have equal weight of 1. -
windowConfig
(type:GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SchemaTrainingjobDefinitionWindowConfig.t
, default:nil
) - Config containing strategy for generating sliding windows.
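As a rough illustration of how these fields fit together, the sketch below builds inputs for a quantile-loss forecasting job. The column names are hypothetical placeholders, and the struct keys are assumed to mirror the camelCase field names documented above:

```elixir
alias GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlForecastingInputs

# Hypothetical columns; int64 fields are encoded as strings in this client.
inputs = %GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlForecastingInputs{
  targetColumn: "sales",                      # must be unavailable at forecast
  timeColumn: "date",
  timeSeriesIdentifierColumn: "store_id",
  contextWindow: "30",                        # 30 data_granularity units of history
  forecastHorizon: "14",                      # predict 14 units ahead
  optimizationObjective: "minimize-quantile-loss",
  quantiles: [0.1, 0.5, 0.9],                 # required for minimize-quantile-loss
  trainBudgetMilliNodeHours: "8000"           # within the documented 1,000..72,000 range
}
```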
A duration of time expressed in time granularity units.
Attributes
-
auto
(type:GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlForecastingInputsTransformationAutoTransformation.t
, default:nil
) - -
categorical
(type:GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlForecastingInputsTransformationCategoricalTransformation.t
, default:nil
) - -
numeric
(type:GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlForecastingInputsTransformationNumericTransformation.t
, default:nil
) - -
text
(type:GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlForecastingInputsTransformationTextTransformation.t
, default:nil
) - -
timestamp
(type:GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlForecastingInputsTransformationTimestampTransformation.t
, default:nil
) -
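Each Transformation wraps exactly one transformation type; the other one-of fields stay nil. A minimal sketch, where "price" is a hypothetical column and columnName is assumed to be the sub-transformation's key for the target column:

```elixir
alias GoogleApi.AIPlatform.V1.Model.{
  GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlForecastingInputsTransformation,
  GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlForecastingInputsTransformationNumericTransformation
}

# Set exactly one of auto/categorical/numeric/text/timestamp.
transformation = %GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlForecastingInputsTransformation{
  numeric: %GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlForecastingInputsTransformationNumericTransformation{
    columnName: "price"
  }
}
```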
The training pipeline will infer the proper transformation based on the statistics of the dataset.
The training pipeline will perform the following transformation functions: * The categorical string as is--no change to case, punctuation, spelling, tense, and so on. * Convert the category name to a dictionary lookup index and generate an embedding for each index. * Categories that appear less than 5 times in the training dataset are treated as the "unknown" category. The "unknown" category gets its own special lookup index and resulting embedding.
The training pipeline will perform the following transformation functions: * The value converted to float32. * The z_score of the value. * log(value+1) when the value is greater than or equal to 0. Otherwise, this transformation is not applied and the value is considered a missing value. * z_score of log(value+1) when the value is greater than or equal to 0. Otherwise, this transformation is not applied and the value is considered a missing value. * A boolean value that indicates whether the value is valid.
The training pipeline will perform the following transformation functions: * The text as is--no change to case, punctuation, spelling, tense, and so on. * Convert the category name to a dictionary lookup index and generate an embedding for each index.
The training pipeline will perform the following transformation functions: * Apply the transformation functions for Numerical columns. * Determine the year, month, day, and weekday. Treat each value from the timestamp as a Categorical column. * Invalid numerical values (for example, values that fall outside of a typical timestamp range, or are extreme values) receive no special treatment and are not removed.
Model metadata specific to AutoML Forecasting.
A TrainingJob that trains and uploads an AutoML Image Classification Model.
Attributes
-
baseModelId
(type:String.t
, default:nil
) - The ID of the base model. If it is specified, the new model will be trained based on the base model. Otherwise, the new model will be trained from scratch. The base model must be in the same Project and Location as the new Model to train, and have the same modelType. -
budgetMilliNodeHours
(type:String.t
, default:nil
) - The training budget of creating this model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. The actual metadata.costMilliNodeHours will be equal to or less than this value. If further model training ceases to provide any improvements, it will stop without using the full budget and the metadata.successfulStopReason will be model-converged. Note: node_hour = actual_hour * number_of_nodes_involved. For modelType cloud (default), the budget must be between 8,000 and 800,000 milli node hours, inclusive. The default value is 192,000, which represents one day in wall time, considering 8 nodes are used. For model types mobile-tf-low-latency-1, mobile-tf-versatile-1, mobile-tf-high-accuracy-1, the training budget must be between 1,000 and 100,000 milli node hours, inclusive. The default value is 24,000, which represents one day in wall time on a single node. -
disableEarlyStopping
(type:boolean()
, default:nil
) - Use the entire training budget. This disables the early stopping feature. When false the early stopping feature is enabled, which means that AutoML Image Classification might stop training before the entire training budget has been used. -
modelType
(type:String.t
, default:nil
) - -
multiLabel
(type:boolean()
, default:nil
) - If false, a single-label (multi-class) Model will be trained (i.e. assuming that for each image just up to one annotation may be applicable). If true, a multi-label Model will be trained (i.e. assuming that for each image multiple annotations may be applicable). -
tunableParameter
(type:GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutomlImageTrainingTunableParameter.t
, default:nil
) - Trainer type for Vision TrainRequest. -
uptrainBaseModelId
(type:String.t
, default:nil
) - The ID of the base model for upTraining. If it is specified, the new model will be upTrained based on the base model for upTraining. Otherwise, the new model will be trained from scratch. The base model for upTraining must be in the same Project and Location as the new Model to train, and have the same modelType.
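An illustrative set of inputs for this training job, assuming this block corresponds to the generated AutoMlImageClassificationInputs module and that struct keys mirror the camelCase field names above; all values are placeholders:

```elixir
alias GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlImageClassificationInputs

inputs = %GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlImageClassificationInputs{
  multiLabel: false,                  # single-label (multi-class) model
  budgetMilliNodeHours: "192000",     # documented default for the cloud model type
  disableEarlyStopping: false         # allow training to stop once converged
}
```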
Attributes
-
costMilliNodeHours
(type:String.t
, default:nil
) - The actual training cost of creating this model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. Guaranteed to not exceed inputs.budgetMilliNodeHours. -
successfulStopReason
(type:String.t
, default:nil
) - For successful job completions, this is the reason why the job has finished.
A TrainingJob that trains and uploads an AutoML Image Object Detection Model.
Attributes
-
budgetMilliNodeHours
(type:String.t
, default:nil
) - The training budget of creating this model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. The actual metadata.costMilliNodeHours will be equal to or less than this value. If further model training ceases to provide any improvements, it will stop without using the full budget and the metadata.successfulStopReason will be model-converged. Note: node_hour = actual_hour * number_of_nodes_involved. For modelType cloud (default), the budget must be between 20,000 and 900,000 milli node hours, inclusive. The default value is 216,000, which represents one day in wall time, considering 9 nodes are used. For model types mobile-tf-low-latency-1, mobile-tf-versatile-1, mobile-tf-high-accuracy-1, the training budget must be between 1,000 and 100,000 milli node hours, inclusive. The default value is 24,000, which represents one day in wall time on a single node. -
disableEarlyStopping
(type:boolean()
, default:nil
) - Use the entire training budget. This disables the early stopping feature. When false the early stopping feature is enabled, which means that AutoML Image Object Detection might stop training before the entire training budget has been used. -
modelType
(type:String.t
, default:nil
) - -
tunableParameter
(type:GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutomlImageTrainingTunableParameter.t
, default:nil
) - Trainer type for Vision TrainRequest. -
uptrainBaseModelId
(type:String.t
, default:nil
) - The ID of the base model for upTraining. If it is specified, the new model will be upTrained based on the base model for upTraining. Otherwise, the new model will be trained from scratch. The base model for upTraining must be in the same Project and Location as the new Model to train, and have the same modelType.
Attributes
-
costMilliNodeHours
(type:String.t
, default:nil
) - The actual training cost of creating this model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. Guaranteed to not exceed inputs.budgetMilliNodeHours. -
successfulStopReason
(type:String.t
, default:nil
) - For successful job completions, this is the reason why the job has finished.
A TrainingJob that trains and uploads an AutoML Image Segmentation Model.
Attributes
-
baseModelId
(type:String.t
, default:nil
) - The ID of the base model. If it is specified, the new model will be trained based on the base model. Otherwise, the new model will be trained from scratch. The base model must be in the same Project and Location as the new Model to train, and have the same modelType. -
budgetMilliNodeHours
(type:String.t
, default:nil
) - The training budget of creating this model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. The actual metadata.costMilliNodeHours will be equal to or less than this value. If further model training ceases to provide any improvements, it will stop without using the full budget and the metadata.successfulStopReason will be model-converged. Note: node_hour = actual_hour * number_of_nodes_involved, or actual_wall_clock_hours = train_budget_milli_node_hours / (number_of_nodes_involved * 1000). For modelType cloud-high-accuracy-1 (default), the budget must be between 20,000 and 2,000,000 milli node hours, inclusive. The default value is 192,000, which represents one day in wall time (1,000 milli node hours * 24 hours * 8 nodes). -
modelType
(type:String.t
, default:nil
) -
Attributes
-
costMilliNodeHours
(type:String.t
, default:nil
) - The actual training cost of creating this model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. Guaranteed to not exceed inputs.budgetMilliNodeHours. -
successfulStopReason
(type:String.t
, default:nil
) - For successful job completions, this is the reason why the job has finished.
A TrainingJob that trains and uploads an AutoML Tables Model.
Attributes
-
additionalExperiments
(type:list(String.t)
, default:nil
) - Additional experiment flags for the Tables training pipeline. -
disableEarlyStopping
(type:boolean()
, default:nil
) - Use the entire training budget. This disables the early stopping feature. By default, the early stopping feature is enabled, which means that AutoML Tables might stop training before the entire training budget has been used. -
exportEvaluatedDataItemsConfig
(type:GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SchemaTrainingjobDefinitionExportEvaluatedDataItemsConfig.t
, default:nil
) - Configuration for exporting test set predictions to a BigQuery table. If this configuration is absent, then the export is not performed. -
optimizationObjective
(type:String.t
, default:nil
) - Objective function the model is optimizing towards. The training process creates a model that maximizes/minimizes the value of the objective function over the validation set. The supported optimization objectives depend on the prediction type. If the field is not set, a default objective function is used. classification (binary): * "maximize-au-roc" (default) - Maximize the area under the receiver operating characteristic (ROC) curve. * "minimize-log-loss" - Minimize log loss. * "maximize-au-prc" - Maximize the area under the precision-recall curve. * "maximize-precision-at-recall" - Maximize precision for a specified recall value. * "maximize-recall-at-precision" - Maximize recall for a specified precision value. classification (multi-class): * "minimize-log-loss" (default) - Minimize log loss. regression: * "minimize-rmse" (default) - Minimize root-mean-squared error (RMSE). * "minimize-mae" - Minimize mean-absolute error (MAE). * "minimize-rmsle" - Minimize root-mean-squared log error (RMSLE). -
optimizationObjectivePrecisionValue
(type:number()
, default:nil
) - Required when optimization_objective is "maximize-recall-at-precision". Must be between 0 and 1, inclusive. -
optimizationObjectiveRecallValue
(type:number()
, default:nil
) - Required when optimization_objective is "maximize-precision-at-recall". Must be between 0 and 1, inclusive. -
predictionType
(type:String.t
, default:nil
) - The type of prediction the Model is to produce. * "classification" - Predict one out of multiple target values for each row. * "regression" - Predict a value based on its relation to other values. This type is available only to columns that contain semantically numeric values, i.e. integers or floating point numbers, even if stored as e.g. strings. -
targetColumn
(type:String.t
, default:nil
) - The column name of the target column that the model is to predict. -
trainBudgetMilliNodeHours
(type:String.t
, default:nil
) - Required. The train budget of creating this model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. The training cost of the model will not exceed this budget. The final cost will be attempted to be close to the budget, though it may end up being (even) noticeably smaller, at the backend's discretion. This may especially happen when further model training ceases to provide any improvements. If the budget is set to a value known to be insufficient to train a model for the given dataset, the training won't be attempted and will error. The train budget must be between 1,000 and 72,000 milli node hours, inclusive. -
transformations
(type:list(GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTablesInputsTransformation.t)
, default:nil
) - Each transformation will apply a transform function to the given input column, and the result will be used for training. When creating a transformation for a BigQuery Struct column, the column should be flattened using "." as the delimiter. -
weightColumnName
(type:String.t
, default:nil
) - Column name that should be used as the weight column. Higher values in this column give more importance to the row during model training. The column must have numeric values between 0 and 10000 inclusively; 0 means the row is ignored for training. If weight column field is not set, then all rows are assumed to have equal weight of 1.
Attributes
-
auto
(type:GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTablesInputsTransformationAutoTransformation.t
, default:nil
) - -
categorical
(type:GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTablesInputsTransformationCategoricalTransformation.t
, default:nil
) - -
numeric
(type:GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTablesInputsTransformationNumericTransformation.t
, default:nil
) - -
repeatedCategorical
(type:GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTablesInputsTransformationCategoricalArrayTransformation.t
, default:nil
) - -
repeatedNumeric
(type:GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTablesInputsTransformationNumericArrayTransformation.t
, default:nil
) - -
repeatedText
(type:GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTablesInputsTransformationTextArrayTransformation.t
, default:nil
) - -
text
(type:GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTablesInputsTransformationTextTransformation.t
, default:nil
) - -
timestamp
(type:GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTablesInputsTransformationTimestampTransformation.t
, default:nil
) -
The training pipeline will infer the proper transformation based on the statistics of the dataset.
Treats the column as a categorical array and performs the following transformation functions: * For each element in the array, convert the category name to a dictionary lookup index and generate an embedding for each index. * Combine the embeddings of all elements into a single embedding using the mean. * Empty arrays are treated as an embedding of zeroes.
The training pipeline will perform the following transformation functions: * The categorical string as is--no change to case, punctuation, spelling, tense, and so on. * Convert the category name to a dictionary lookup index and generate an embedding for each index. * Categories that appear less than 5 times in the training dataset are treated as the "unknown" category. The "unknown" category gets its own special lookup index and resulting embedding.
Treats the column as a numerical array and performs the following transformation functions: * All transformations for Numerical types are applied to the average of all elements. * The average of empty arrays is treated as zero.
The training pipeline will perform the following transformation functions: * The value converted to float32. * The z_score of the value. * log(value+1) when the value is greater than or equal to 0. Otherwise, this transformation is not applied and the value is considered a missing value. * z_score of log(value+1) when the value is greater than or equal to 0. Otherwise, this transformation is not applied and the value is considered a missing value. * A boolean value that indicates whether the value is valid.
Treats the column as a text array and performs the following transformation functions: * Concatenate all text values in the array into a single text value using a space (" ") as a delimiter, and then treat the result as a single text value. Apply the transformations for Text columns. * Empty arrays are treated as empty text.
The training pipeline will perform the following transformation functions: * The text as is--no change to case, punctuation, spelling, tense, and so on. * Tokenize text to words. Convert each word to a dictionary lookup index and generate an embedding for each index. Combine the embedding of all elements into a single embedding using the mean. * Tokenization is based on unicode script boundaries. * Missing values get their own lookup index and resulting embedding. * Stop-words receive no special treatment and are not removed.
The training pipeline will perform the following transformation functions: * Apply the transformation functions for Numerical columns. * Determine the year, month, day, and weekday. Treat each value from the timestamp as a Categorical column. * Invalid numerical values (for example, values that fall outside of a typical timestamp range, or are extreme values) receive no special treatment and are not removed.
Model metadata specific to AutoML Tables.
A TrainingJob that trains and uploads an AutoML Text Classification Model.
A TrainingJob that trains and uploads an AutoML Text Extraction Model.
A TrainingJob that trains and uploads an AutoML Text Sentiment Model.
Attributes
-
sentimentMax
(type:integer()
, default:nil
) - A sentiment is expressed as an integer ordinal, where higher value means a more positive sentiment. The range of sentiments that will be used is between 0 and sentimentMax (inclusive on both ends), and all the values in the range must be represented in the dataset before a model can be created. Only the Annotations with this sentimentMax will be used for training. sentimentMax value must be between 1 and 10 (inclusive).
A TrainingJob that trains and uploads an AutoML Video Action Recognition Model.
A TrainingJob that trains and uploads an AutoML Video Classification Model.
A TrainingJob that trains and uploads an AutoML Video ObjectTracking Model.
A wrapper class which contains the tunable parameters in an AutoML Image training job.
A TrainingJob that trains a custom code Model.
Configuration for exporting test set predictions to a BigQuery table.
Configuration that defines the hierarchical relationship of time series and parameters for hierarchical forecasting strategies.
Attributes
-
backingHyperparameterTuningJob
(type:String.t
, default:nil
) - The resource name of the HyperparameterTuningJob that has been created to carry out this HyperparameterTuning task. -
bestTrialBackingCustomJob
(type:String.t
, default:nil
) - The resource name of the CustomJob that has been created to run the best Trial of this HyperparameterTuning task.
Attributes
-
maxFailedTrialCount
(type:integer()
, default:nil
) - The number of failed Trials that need to be seen before failing the HyperparameterTuningJob. If set to 0, Vertex AI decides how many Trials must fail before the whole job fails. -
maxTrialCount
(type:integer()
, default:nil
) - The desired total number of Trials. -
parallelTrialCount
(type:integer()
, default:nil
) - The desired number of Trials to run in parallel. -
studySpec
(type:GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1StudySpec.t
, default:nil
) - Study configuration of the HyperparameterTuningJob. -
trialJobSpec
(type:GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1CustomJobSpec.t
, default:nil
) - The spec of a trial job. The same spec applies to the CustomJobs created in all the trials.
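A minimal sketch of the trial-count fields above, shown as a plain map for brevity; studySpec and trialJobSpec are omitted because their shapes are defined by other models in this reference:

```elixir
# Run 20 Trials total, 4 at a time; 0 lets Vertex AI pick the
# failure threshold before the whole job fails.
hyperparameter_tuning_task = %{
  maxTrialCount: 20,
  parallelTrialCount: 4,
  maxFailedTrialCount: 0
}
```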
A TrainingJob that tunes hyperparameters of a custom code Model.
A TrainingJob that trains and uploads an AutoML Forecasting Model.
Attributes
-
additionalExperiments
(type:list(String.t)
, default:nil
) - Additional experiment flags for the time series forecasting training. -
availableAtForecastColumns
(type:list(String.t)
, default:nil
) - Names of columns that are available and provided when a forecast is requested. These columns contain information for the given entity (identified by the time_series_identifier_column column) that is known at forecast. For example, predicted weather for a specific day. -
contextWindow
(type:String.t
, default:nil
) - The amount of time into the past training and prediction data is used for model training and prediction respectively. Expressed in number of units defined by the data_granularity
field. -
dataGranularity
(type:GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SchemaTrainingjobDefinitionSeq2SeqPlusForecastingInputsGranularity.t
, default:nil
) - Expected difference in time granularity between rows in the data. -
exportEvaluatedDataItemsConfig
(type:GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SchemaTrainingjobDefinitionExportEvaluatedDataItemsConfig.t
, default:nil
) - Configuration for exporting test set predictions to a BigQuery table. If this configuration is absent, then the export is not performed. -
forecastHorizon
(type:String.t
, default:nil
) - The amount of time into the future for which forecasted values for the target are returned. Expressed in number of units defined by the data_granularity
field. -
hierarchyConfig
(type:GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SchemaTrainingjobDefinitionHierarchyConfig.t
, default:nil
) - Configuration that defines the hierarchical relationship of time series and parameters for hierarchical forecasting strategies. -
holidayRegions
(type:list(String.t)
, default:nil
) - The geographical region based on which the holiday effect is applied in modeling by adding a holiday categorical array feature that includes all holidays matching the date. This option is only allowed when data_granularity is day. By default, holiday effect modeling is disabled. To turn it on, specify the holiday region using this option. -
optimizationObjective
(type:String.t
, default:nil
) - Objective function the model is optimizing towards. The training process creates a model that optimizes the value of the objective function over the validation set. The supported optimization objectives: * "minimize-rmse" (default) - Minimize root-mean-squared error (RMSE). * "minimize-mae" - Minimize mean-absolute error (MAE). * "minimize-rmsle" - Minimize root-mean-squared log error (RMSLE). * "minimize-rmspe" - Minimize root-mean-squared percentage error (RMSPE). * "minimize-wape-mae" - Minimize the combination of weighted absolute percentage error (WAPE) and mean-absolute error (MAE). * "minimize-quantile-loss" - Minimize the quantile loss at the quantiles defined in quantiles. * "minimize-mape" - Minimize the mean absolute percentage error. -
quantiles
(type:list(float())
, default:nil
) - Quantiles to use for the minimize-quantile-loss optimization_objective. Up to 5 quantiles are allowed, with values between 0 and 1, exclusive. Required if the value of optimization_objective is minimize-quantile-loss. Represents the percent quantiles to use for that objective. Quantiles must be unique. -
targetColumn
(type:String.t
, default:nil
) - The name of the column that the Model is to predict values for. This column must be unavailable at forecast. -
timeColumn
(type:String.t
, default:nil
) - The name of the column that identifies time order in the time series. This column must be available at forecast. -
timeSeriesAttributeColumns
(type:list(String.t)
, default:nil
) - Column names that should be used as attribute columns. The value of these columns does not vary as a function of time. For example, store ID or item color. -
timeSeriesIdentifierColumn
(type:String.t
, default:nil
) - The name of the column that identifies the time series. -
trainBudgetMilliNodeHours
(type:String.t
, default:nil
) - Required. The train budget of creating this model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. The training cost of the model will not exceed this budget. The final cost will be attempted to be close to the budget, though it may end up being (even) noticeably smaller, at the backend's discretion. This may especially happen when further model training ceases to provide any improvements. If the budget is set to a value known to be insufficient to train a model for the given dataset, the training won't be attempted and will error. The train budget must be between 1,000 and 72,000 milli node hours, inclusive. -
transformations
(type:list(GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SchemaTrainingjobDefinitionSeq2SeqPlusForecastingInputsTransformation.t)
, default:nil
) - Each transformation will apply a transform function to the given input column, and the result will be used for training. When creating a transformation for a BigQuery Struct column, the column should be flattened using "." as the delimiter. -
unavailableAtForecastColumns
(type:list(String.t)
, default:nil
) - Names of columns that are unavailable when a forecast is requested. These columns contain information for the given entity (identified by the time_series_identifier_column) that is unknown before the forecast. For example, actual weather on a given day. -
validationOptions
(type:String.t
, default:nil
) - Validation options for the data validation component. The available options are: "fail-pipeline" (default) - validate the data and fail the pipeline if validation fails. "ignore-validation" - ignore the results of the validation and continue. -
weightColumn
(type:String.t
, default:nil
) - Column name that should be used as the weight column. Higher values in this column give more importance to the row during model training. The column must have numeric values between 0 and 10000 inclusively; 0 means the row is ignored for training. If weight column field is not set, then all rows are assumed to have equal weight of 1. This column must be available at forecast. -
windowConfig
(type:GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SchemaTrainingjobDefinitionWindowConfig.t
, default:nil
) - Config containing strategy for generating sliding windows.
A duration of time expressed in time granularity units.
Attributes
-
auto
(type:GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SchemaTrainingjobDefinitionSeq2SeqPlusForecastingInputsTransformationAutoTransformation.t
, default:nil
) - -
categorical
(type:GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SchemaTrainingjobDefinitionSeq2SeqPlusForecastingInputsTransformationCategoricalTransformation.t
, default:nil
) - -
numeric
(type:GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SchemaTrainingjobDefinitionSeq2SeqPlusForecastingInputsTransformationNumericTransformation.t
, default:nil
) - -
text
(type:GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SchemaTrainingjobDefinitionSeq2SeqPlusForecastingInputsTransformationTextTransformation.t
, default:nil
) - -
timestamp
(type:GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SchemaTrainingjobDefinitionSeq2SeqPlusForecastingInputsTransformationTimestampTransformation.t
, default:nil
) -
The training pipeline will infer the proper transformation based on the statistics of the dataset.
The training pipeline will perform the following transformation functions: * The categorical string as is--no change to case, punctuation, spelling, tense, and so on. * Convert the category name to a dictionary lookup index and generate an embedding for each index. * Categories that appear less than 5 times in the training dataset are treated as the "unknown" category. The "unknown" category gets its own special lookup index and resulting embedding.
The training pipeline will perform the following transformation functions: * The value converted to float32. * The z_score of the value. * log(value+1) when the value is greater than or equal to 0. Otherwise, this transformation is not applied and the value is considered a missing value. * z_score of log(value+1) when the value is greater than or equal to 0. Otherwise, this transformation is not applied and the value is considered a missing value.
The training pipeline will perform the following transformation functions: * The text as is--no change to case, punctuation, spelling, tense, and so on. * Convert the category name to a dictionary lookup index and generate an embedding for each index.
The training pipeline will perform the following transformation functions: * Apply the transformation functions for Numerical columns. * Determine the year, month, day, and weekday. Treat each value from the timestamp as a Categorical column. * Invalid numerical values (for example, values that fall outside of a typical timestamp range, or are extreme values) receive no special treatment and are not removed.
Model metadata specific to Seq2Seq Plus Forecasting.
A TrainingJob that trains and uploads an AutoML Forecasting Model.
Attributes
-
additionalExperiments
(type:list(String.t)
, default:nil
) - Additional experiment flags for the time series forecasting training. -
availableAtForecastColumns
(type:list(String.t)
, default:nil
) - Names of columns that are available and provided when a forecast is requested. These columns contain information for the given entity (identified by the time_series_identifier_column column) that is known at forecast. For example, predicted weather for a specific day. -
contextWindow
(type:String.t
, default:nil
) - The amount of time into the past training and prediction data is used for model training and prediction respectively. Expressed in number of units defined by the data_granularity
field. -
dataGranularity
(type:GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SchemaTrainingjobDefinitionTftForecastingInputsGranularity.t
, default:nil
) - Expected difference in time granularity between rows in the data. -
exportEvaluatedDataItemsConfig
(type:GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SchemaTrainingjobDefinitionExportEvaluatedDataItemsConfig.t
, default:nil
) - Configuration for exporting test set predictions to a BigQuery table. If this configuration is absent, then the export is not performed. -
forecastHorizon
(type:String.t
, default:nil
) - The amount of time into the future for which forecasted values for the target are returned. Expressed in number of units defined by the data_granularity
field. -
hierarchyConfig
(type:GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SchemaTrainingjobDefinitionHierarchyConfig.t
, default:nil
) - Configuration that defines the hierarchical relationship of time series and parameters for hierarchical forecasting strategies. -
holidayRegions
(type:list(String.t)
, default:nil
) - The geographical region based on which the holiday effect is applied in modeling by adding a holiday categorical array feature that includes all holidays matching the date. This option is only allowed when data_granularity is day. By default, holiday effect modeling is disabled. To turn it on, specify the holiday region using this option. -
optimizationObjective
(type:String.t
, default:nil
) - Objective function the model is optimizing towards. The training process creates a model that optimizes the value of the objective function over the validation set. The supported optimization objectives: * "minimize-rmse" (default) - Minimize root-mean-squared error (RMSE). * "minimize-mae" - Minimize mean-absolute error (MAE). * "minimize-rmsle" - Minimize root-mean-squared log error (RMSLE). * "minimize-rmspe" - Minimize root-mean-squared percentage error (RMSPE). * "minimize-wape-mae" - Minimize the combination of weighted absolute percentage error (WAPE) and mean-absolute error (MAE). * "minimize-quantile-loss" - Minimize the quantile loss at the quantiles defined in quantiles. * "minimize-mape" - Minimize the mean absolute percentage error. -
quantiles
(type:list(float())
, default:nil
) - Quantiles to use for the minimize-quantile-loss optimization_objective. Up to 5 quantiles are allowed, with values between 0 and 1, exclusive. Required if the value of optimization_objective is minimize-quantile-loss. Represents the percent quantiles to use for that objective. Quantiles must be unique. -
targetColumn
(type:String.t
, default:nil
) - The name of the column that the Model is to predict values for. This column must be unavailable at forecast. -
timeColumn
(type:String.t
, default:nil
) - The name of the column that identifies time order in the time series. This column must be available at forecast. -
timeSeriesAttributeColumns
(type:list(String.t)
, default:nil
) - Column names that should be used as attribute columns. The value of these columns does not vary as a function of time. For example, store ID or item color. -
timeSeriesIdentifierColumn
(type:String.t
, default:nil
) - The name of the column that identifies the time series. -
trainBudgetMilliNodeHours
(type:String.t
, default:nil
) - Required. The train budget of creating this model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. The training cost of the model will not exceed this budget. The final cost will be attempted to be close to the budget, though it may end up being (even) noticeably smaller, at the backend's discretion. This may especially happen when further model training ceases to provide any improvements. If the budget is set to a value known to be insufficient to train a model for the given dataset, the training won't be attempted and will error. The train budget must be between 1,000 and 72,000 milli node hours, inclusive. -
transformations
(type:list(GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SchemaTrainingjobDefinitionTftForecastingInputsTransformation.t)
, default:nil
) - Each transformation will apply a transform function to the given input column, and the result will be used for training. When creating a transformation for a BigQuery Struct column, the column should be flattened using "." as the delimiter. -
unavailableAtForecastColumns
(type:list(String.t)
, default:nil
) - Names of columns that are unavailable when a forecast is requested. These columns contain information for the given entity (identified by the time_series_identifier_column) that is unknown before the forecast. For example, actual weather on a given day. -
validationOptions
(type:String.t
, default:nil
) - Validation options for the data validation component. The available options are: "fail-pipeline" (default) - validate the data and fail the pipeline if validation fails. "ignore-validation" - ignore the results of the validation and continue. -
weightColumn
(type:String.t
, default:nil
) - Column name that should be used as the weight column. Higher values in this column give more importance to the row during model training. The column must have numeric values between 0 and 10000 inclusively; 0 means the row is ignored for training. If weight column field is not set, then all rows are assumed to have equal weight of 1. This column must be available at forecast. -
windowConfig
(type:GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SchemaTrainingjobDefinitionWindowConfig.t
, default:nil
) - Config containing strategy for generating sliding windows.
A duration of time expressed in time granularity units.
Attributes
-
auto
(type:GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SchemaTrainingjobDefinitionTftForecastingInputsTransformationAutoTransformation.t
, default:nil
) - -
categorical
(type:GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SchemaTrainingjobDefinitionTftForecastingInputsTransformationCategoricalTransformation.t
, default:nil
) - -
numeric
(type:GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SchemaTrainingjobDefinitionTftForecastingInputsTransformationNumericTransformation.t
, default:nil
) - -
text
(type:GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SchemaTrainingjobDefinitionTftForecastingInputsTransformationTextTransformation.t
, default:nil
) - -
timestamp
(type:GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SchemaTrainingjobDefinitionTftForecastingInputsTransformationTimestampTransformation.t
, default:nil
) -
The training pipeline will infer the proper transformation based on the statistics of the dataset.
The training pipeline will perform the following transformation functions: * The categorical string as is--no change to case, punctuation, spelling, tense, and so on. * Convert the category name to a dictionary lookup index and generate an embedding for each index. * Categories that appear less than 5 times in the training dataset are treated as the "unknown" category. The "unknown" category gets its own special lookup index and resulting embedding.
The training pipeline will perform the following transformation functions: * The value converted to float32. * The z_score of the value. * log(value+1) when the value is greater than or equal to 0. Otherwise, this transformation is not applied and the value is considered a missing value. * z_score of log(value+1) when the value is greater than or equal to 0. Otherwise, this transformation is not applied and the value is considered a missing value.
The training pipeline will perform the following transformation functions: * The text as is--no change to case, punctuation, spelling, tense, and so on. * Convert the category name to a dictionary lookup index and generate an embedding for each index.
The training pipeline will perform the following transformation functions: * Apply the transformation functions for Numerical columns. * Determine the year, month, day, and weekday. Treat each value from the timestamp as a Categorical column. * Invalid numerical values (for example, values that fall outside of a typical timestamp range, or are extreme values) receive no special treatment and are not removed.
Model metadata specific to TFT Forecasting.
Config that contains the strategy used to generate sliding windows in time series training. A window is a series of rows that comprise the context up to the time of prediction, and the horizon following. The corresponding row for each window marks the start of the forecast horizon. Each window is used as an input example for training/evaluation.
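A minimal sketch of a sliding-window strategy, shown as a plain map with a field name taken from the WindowConfig model; the stride value is a hypothetical choice, and int64 fields are encoded as strings in this client:

```elixir
# Emit one training window every 10 rows of the time series.
window_config = %{strideLength: "10"}
```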
A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
Annotation details specific to video action recognition.
Annotation details specific to video classification.
Payload of Video DataItem.
The metadata of Datasets that contain Video DataItems.
Annotation details specific to video object tracking.
Response message for DatasetService.SearchDataItems.
Google search entry point.
Response message for FeaturestoreService.SearchFeatures.
Request message for MigrationService.SearchMigratableResources.
Response message for MigrationService.SearchMigratableResources.
Request message for JobService.SearchModelDeploymentMonitoringStatsAnomalies.
Stats requested for specific objective.
Response message for JobService.SearchModelDeploymentMonitoringStatsAnomalies.
The request message for FeatureOnlineStoreService.SearchNearestEntities.
Response message for FeatureOnlineStoreService.SearchNearestEntities.
Segment of the content.
Configuration for the use of custom service account to run the workloads.
A set of Shielded Instance options. See Images using supported Shielded VM features.
Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf
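An illustrative SmoothGrad configuration, assuming the struct keys mirror the camelCase field names of the underlying SmoothGradConfig message; the values are placeholders:

```elixir
alias GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1SmoothGradConfig

# Average gradients over 50 noisy copies of the input, each with
# Gaussian noise of standard deviation 0.1 added.
config = %GoogleCloudAiplatformV1SmoothGradConfig{
  noiseSigma: 0.1,
  noisySampleCount: 50
}
```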
SpecialistPool represents customers' own workforce to work on their data labeling jobs. It includes a group of specialist managers and workers. Managers are responsible for managing the workers in this pool as well as customers' data labeling jobs associated with this pool. Customers create a specialist pool as well as start data labeling jobs on Cloud; managers and workers handle the jobs using the CrowdCompute console.
Metadata information for NotebookService.StartNotebookRuntime.
Request message for NotebookService.StartNotebookRuntime.
Request message for VizierService.StopTrial.
Assigns input data to the training, validation, and test sets so that the distribution of values found in the categorical column (as specified by the key
field) is mirrored within each split. The fraction values determine the relative sizes of the splits. For example, if the specified column has three values, with 50% of the rows having value "A", 25% value "B", and 25% value "C", and the split fractions are specified as 80/10/10, then the training set will constitute 80% of the training data, with about 50% of the training set rows having the value "A" for the specified column, about 25% having the value "B", and about 25% having the value "C". Only the top 500 occurring values are used; any values not in the top 500 values are randomly assigned to a split. If less than three rows contain a specific value, those rows are randomly assigned. Supported only for tabular Datasets.
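A minimal sketch of the 80/10/10 example above, assuming the generated StratifiedSplit struct keys mirror the camelCase field names; "category" is a hypothetical column:

```elixir
alias GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1StratifiedSplit

# 80/10/10 split stratified on the distribution of the "category" column.
split = %GoogleCloudAiplatformV1StratifiedSplit{
  key: "category",
  trainingFraction: 0.8,
  validationFraction: 0.1,
  testFraction: 0.1
}
```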
Request message for PredictionService.StreamRawPredict.
Request message for PredictionService.StreamingPredict. The first message must contain endpoint field and optionally input. The subsequent messages must contain input.
Response message for PredictionService.StreamingPredict.
Request message for FeaturestoreOnlineServingService.StreamingFeatureValuesRead.
A list of string values.
One field of a Struct (or object) type feature value.
Struct (or object) type feature value.
A message representing a Study.
Represents specification of a Study.
Configuration for ConvexAutomatedStoppingSpec. When there are enough completed trials (configured by min_measurement_count), for pending trials with enough measurements and steps, the policy first computes an overestimate of the objective value at max_num_steps according to the slope of the incomplete objective value curve. No prediction can be made if the curve is completely flat. If the overestimation is worse than the best objective value of the completed trials, this pending trial will be early-stopped, but a last measurement will be added to the pending trial with max_num_steps and predicted objective value from the autoregression model.
The decay curve automated stopping rule builds a Gaussian Process Regressor to predict the final objective value of a Trial based on the already completed Trials and the intermediate measurements of the current Trial. Early stopping is requested for the current Trial if there is very low probability to exceed the optimal value found so far.
The median automated stopping rule stops a pending Trial if the Trial's best objective_value is strictly below the median 'performance' of all completed Trials reported up to the Trial's last measurement. Currently, 'performance' refers to the running average of the objective values reported by the Trial in each measurement.
Represents a metric to optimize.
Used in safe optimization to specify threshold levels and risk tolerance.
Represents a single parameter to optimize.
Value specification for a parameter in CATEGORICAL
type.
Represents a parameter spec with condition from its parent parameter.
Represents the spec to match categorical values from parent parameter.
Represents the spec to match discrete values from parent parameter.
Represents the spec to match integer values from parent parameter.
Value specification for a parameter in DISCRETE
type.
Value specification for a parameter in DOUBLE
type.
Value specification for a parameter in INTEGER
type.
The configuration (stopping conditions) for automated stopping of a Study. Conditions include trial budgets, time budgets, and convergence detection.
Time-based Constraint for Study
Details of operations that perform Trials suggestion.
Request message for VizierService.SuggestTrials.
Response message for VizierService.SuggestTrials.
Input for summarization helpfulness metric.
Spec for summarization helpfulness instance.
Spec for summarization helpfulness result.
Spec for summarization helpfulness score metric.
Input for summarization quality metric.
Spec for summarization quality instance.
Spec for summarization quality result.
Spec for summarization quality score metric.
Input for summarization verbosity metric.
Spec for summarization verbosity instance.
Spec for summarization verbosity result.
Spec for summarization verbosity score metric.
Hyperparameters for SFT.
Tuning data statistics for Supervised Tuning.
Dataset distribution for Supervised Tuning.
Dataset bucket used to create a histogram for the distribution given a population of values.
Tuning Spec for Supervised Tuning.
Request message for FeatureOnlineStoreAdminService.SyncFeatureView.
Response message for FeatureOnlineStoreAdminService.SyncFeatureView.
The storage details for TFRecord output content.
A tensor value type.
Tensorboard is a physical database that stores users' training metrics. A default Tensorboard is provided in each region of a Google Cloud project. If needed, users can also create extra Tensorboards in their projects.
One blob (e.g., image, graph) viewable on a blob metric plot.
One point viewable on a blob metric plot; mostly a wrapper message to work around the fact that repeated fields can't be used directly within oneof fields.
A TensorboardExperiment is a group of TensorboardRuns in a Tensorboard, typically the results of a training job run.
A TensorboardRun maps to a specific execution of a training job with a given set of hyperparameter values, model definition, dataset, etc.
One point viewable on a tensor metric plot.
A TensorboardTimeSeries maps to time series produced in training runs.
Describes metadata for a TensorboardTimeSeries.
The config for feature monitoring threshold.
All the data stored in a TensorboardTimeSeries.
A TensorboardTimeSeries data point.
Assigns input data to training, validation, and test sets based on the provided timestamps. The youngest data pieces are assigned to the training set, the next to the validation set, and the oldest to the test set. Supported only for tabular Datasets.
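A minimal Elixir sketch of that assignment (names hypothetical; time_fun should return a value with meaningful term ordering, such as a Unix timestamp):

    # Sort by timestamp, youngest first, then hand out the splits in
    # the order the description gives: youngest to training, next to
    # validation, oldest to test.
    defmodule TimestampSplit do
      def split(rows, time_fun, {train_frac, val_frac, _test_frac}) do
        sorted = Enum.sort_by(rows, time_fun, :desc)
        n = length(sorted)
        {training, rest} = Enum.split(sorted, round(n * train_frac))
        {validation, test} = Enum.split(rest, round(n * val_frac))
        %{training: training, validation: validation, test: test}
      end
    end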
Tokens info with a list of tokens and the corresponding list of token ids.
Tool details that the model may use to generate a response. A Tool is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside the knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g., FunctionDeclaration, Retrieval, or GoogleSearchRetrieval).
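A minimal sketch of that one-capability-per-Tool constraint, assuming this package's generated GoogleCloudAiplatformV1Tool and GoogleCloudAiplatformV1FunctionDeclaration modules (the function name and description are hypothetical):

    # A Tool carrying exactly one kind of capability, here a single
    # function declaration the model may choose to call.
    tool = %GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1Tool{
      functionDeclarations: [
        %GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1FunctionDeclaration{
          name: "get_weather",
          description: "Look up the current weather for a city"
        }
      ]
    }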
Input for tool call valid metric.
Spec for tool call valid instance.
Tool call valid metric value for an instance.
Results for tool call valid metric.
Spec for tool call valid metric.
Tool config. This config is shared for all tools provided in the request.
Input for tool name match metric.
Spec for tool name match instance.
Tool name match metric value for an instance.
Results for tool name match metric.
Spec for tool name match metric.
Input for tool parameter key value match metric.
Spec for tool parameter key value match instance.
Tool parameter key value match metric value for an instance.
Results for tool parameter key value match metric.
Spec for tool parameter key value match metric.
Input for tool parameter key match metric.
Spec for tool parameter key match instance.
Tool parameter key match metric value for an instance.
Results for tool parameter key match metric.
Spec for tool parameter key match metric.
CMLE training config. For every active learning labeling iteration, the system will train a machine learning model on CMLE. The trained model will be used by the data sampling algorithm to select DataItems.
The TrainingPipeline orchestrates tasks associated with training a Model. It always executes the training task, and optionally may also export data from Vertex AI's Dataset which becomes the training input, upload the Model to Vertex AI, and evaluate the Model.
A message representing a Trial. A Trial contains a unique set of Parameters that has been or will be evaluated, along with the objective metrics obtained by running the Trial.
Attributes
- description (type: String.t, default: nil) - A human-readable field which can store a description of this context. This will become part of the resulting Trial's description field.
- parameters (type: list(GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1TrialParameter.t), default: nil) - If/when a Trial is generated or selected from this Context, its Parameters will match any parameters specified here. (I.e., if this context specifies parameter name 'a' with int_value 3, then a resulting Trial will have int_value 3 for its parameter named 'a'.) Note that we first attempt to match existing REQUESTED Trials with contexts, and if there are no matches, we generate suggestions in the subspace defined by the parameters specified here. NOTE: a Context without any Parameters matches the entire feasible search space.
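For illustration, a minimal sketch built from these attributes, assuming they belong to the generated TrialContext model in this package (the description, parameter name, and value are hypothetical):

    context = %GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1TrialContext{
      description: "warm-start near a known-good configuration",
      parameters: [
        # Any Trial generated from this context will have value 3 for
        # its parameter named "a".
        %GoogleApi.AIPlatform.V1.Model.GoogleCloudAiplatformV1TrialParameter{
          parameterId: "a",
          value: 3
        }
      ]
    }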
A message representing a parameter to be tuned.
The Model Registry Model and Online Prediction Endpoint associated with this TuningJob.
The tuning data statistic values for TuningJob.
Represents a TuningJob that runs with Google owned models.
Runtime operation information for IndexEndpointService.UndeployIndex.
Request message for IndexEndpointService.UndeployIndex.
Response message for IndexEndpointService.UndeployIndex.
Runtime operation information for EndpointService.UndeployModel.
Request message for EndpointService.UndeployModel.
Response message for EndpointService.UndeployModel.
Contains model information necessary to perform batch prediction without requiring a full model import.
Runtime operation information for UpdateDeploymentResourcePool method.
Runtime operation information for ModelService.UpdateExplanationDataset.
Request message for ModelService.UpdateExplanationDataset.
Response message of ModelService.UpdateExplanationDataset operation.
Details of operations that perform update FeatureGroup.
Details of operations that perform update FeatureOnlineStore.
Details of operations that perform update Feature.
Details of operations that perform update FeatureView.
Details of operations that perform update Featurestore.
Runtime operation information for IndexService.UpdateIndex.
Runtime operation information for JobService.UpdateModelDeploymentMonitoringJob.
Details of operations that perform update PersistentResource.
Runtime operation metadata for SpecialistPoolService.UpdateSpecialistPool.
Details of operations that perform update Tensorboard.
Metadata information for NotebookService.UpgradeNotebookRuntime.
Request message for NotebookService.UpgradeNotebookRuntime.
Details of ModelService.UploadModel operation.
Request message for ModelService.UploadModel.
Response message of ModelService.UploadModel operation.
Request message for IndexService.UpsertDatapoints.
Response message for IndexService.UpsertDatapoints.
References an API call. It contains more information about the long-running operation and Jobs that are triggered by the API call.
Value is the value of the field.
Retrieve from Vertex AI Search datastore for grounding. See https://cloud.google.com/products/agent-builder
Retrieve from Vertex RAG Store for grounding.
The definition of the Rag resource.
Metadata describing the input video content.
Represents the spec of a worker pool in a job.
Contains Feature values to be written for a specific entity.
Request message for FeaturestoreOnlineServingService.WriteFeatureValues.
Response message for FeaturestoreOnlineServingService.WriteFeatureValues.
Request message for TensorboardService.WriteTensorboardExperimentData.
Response message for TensorboardService.WriteTensorboardExperimentData.
Request message for TensorboardService.WriteTensorboardRunData.
Response message for TensorboardService.WriteTensorboardRunData.
An explanation method that redistributes Integrated Gradients attributions to segmented regions, taking advantage of the model's fully differentiable structure. Refer to https://arxiv.org/abs/1906.02825 for more details. Supported only by image Models.
The response message for Locations.ListLocations.
A resource that represents a Google Cloud location.
Associates members, or principals, with a role.
An Identity and Access Management (IAM) policy, which specifies access controls for Google Cloud resources. A Policy is a collection of bindings. A binding binds one or more members, or principals, to a single role. Principals can be user accounts, service accounts, Google groups, and domains (such as G Suite). A role is a named list of permissions; each role can be an IAM predefined role or a user-created custom role. For some types of Google Cloud resources, a binding can also specify a condition, which is a logical expression that allows access to a resource only if the expression evaluates to true. A condition can add constraints based on attributes of the request, the resource, or both. To learn which resources support conditions in their IAM policies, see the IAM documentation. JSON example: { "bindings": [ { "role": "roles/resourcemanager.organizationAdmin", "members": [ "user:mike@example.com", "group:admins@example.com", "domain:google.com", "serviceAccount:my-project-id@appspot.gserviceaccount.com" ] }, { "role": "roles/resourcemanager.organizationViewer", "members": [ "user:eve@example.com" ], "condition": { "title": "expirable access", "description": "Does not grant access after Sep 2020", "expression": "request.time < timestamp('2020-10-01T00:00:00.000Z')" } } ], "etag": "BwWWja0YfJA=", "version": 3 }
YAML example: bindings: - members: - user:mike@example.com - group:admins@example.com - domain:google.com - serviceAccount:my-project-id@appspot.gserviceaccount.com role: roles/resourcemanager.organizationAdmin - members: - user:eve@example.com role: roles/resourcemanager.organizationViewer condition: title: expirable access description: Does not grant access after Sep 2020 expression: request.time < timestamp('2020-10-01T00:00:00.000Z') etag: BwWWja0YfJA= version: 3
For a description of IAM and its features, see the IAM documentation.
Request message for the SetIamPolicy method.
Response message for the TestIamPermissions method.
The response message for Operations.ListOperations.
This resource represents a long-running operation that is the result of a network API call.
A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); }
The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each Status message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide.
Represents a color in the RGBA color space. This representation is designed for simplicity of conversion to and from color representations in various languages, rather than for compactness. For example, the fields of this representation can be trivially provided to the constructor of java.awt.Color in Java; it can also be trivially provided to UIColor's +colorWithRed:green:blue:alpha method in iOS; and, with just a little work, it can be easily formatted into a CSS rgba() string in JavaScript. This reference page doesn't have information about the absolute color space that should be used to interpret the RGB value (for example, sRGB, Adobe RGB, DCI-P3, or BT.2020). By default, applications should assume the sRGB color space. When color equality needs to be decided, implementations, unless documented otherwise, treat two colors as equal if all their red, green, blue, and alpha values each differ by at most 1e-5. Example (Java): import com.google.type.Color; // ... public static java.awt.Color fromProto(Color protocolor) { float alpha = protocolor.hasAlpha() ? protocolor.getAlpha().getValue() : 1.0f; return new java.awt.Color( protocolor.getRed(), protocolor.getGreen(), protocolor.getBlue(), alpha); } public static Color toProto(java.awt.Color color) { float red = (float) color.getRed(); float green = (float) color.getGreen(); float blue = (float) color.getBlue(); float denominator = 255.0f; Color.Builder resultBuilder = Color .newBuilder() .setRed(red / denominator) .setGreen(green / denominator) .setBlue(blue / denominator); int alpha = color.getAlpha(); if (alpha != 255) { resultBuilder.setAlpha( FloatValue .newBuilder() .setValue(((float) alpha) / denominator) .build()); } return resultBuilder.build(); } // ... Example (iOS / Obj-C): // ... static UIColor* fromProto(Color* protocolor) { float red = [protocolor red]; float green = [protocolor green]; float blue = [protocolor blue]; FloatValue* alpha_wrapper = [protocolor alpha]; float alpha = 1.0; if (alpha_wrapper != nil) { alpha = [alpha_wrapper value]; } return [UIColor colorWithRed:red green:green blue:blue alpha:alpha]; } static Color* toProto(UIColor* color) { CGFloat red, green, blue, alpha; if (![color getRed:&red green:&green blue:&blue alpha:&alpha]) { return nil; } Color* result = [[Color alloc] init]; [result setRed:red]; [result setGreen:green]; [result setBlue:blue]; if (alpha <= 0.9999) { [result setAlpha:floatWrapperWithValue(alpha)]; } [result autorelease]; return result; } // ... Example (JavaScript): // ... var protoToCssColor = function(rgb_color) { var redFrac = rgb_color.red || 0.0; var greenFrac = rgb_color.green || 0.0; var blueFrac = rgb_color.blue || 0.0; var red = Math.floor(redFrac * 255); var green = Math.floor(greenFrac * 255); var blue = Math.floor(blueFrac * 255); if (!('alpha' in rgb_color)) { return rgbToCssColor(red, green, blue); } var alphaFrac = rgb_color.alpha.value || 0.0; var rgbParams = [red, green, blue].join(','); return ['rgba(', rgbParams, ',', alphaFrac, ')'].join(''); }; var rgbToCssColor = function(red, green, blue) { var rgbNumber = new Number((red << 16) | (green << 8) | blue); var hexString = rgbNumber.toString(16); var missingZeros = 6 - hexString.length; var resultBuilder = ['#']; for (var i = 0; i < missingZeros; i++) { resultBuilder.push('0'); } resultBuilder.push(hexString); return resultBuilder.join(''); }; // ...
Represents a whole or partial calendar date, such as a birthday. The time of day and time zone are either specified elsewhere or are insignificant. The date is relative to the Gregorian Calendar. This can represent one of the following: a full date, with non-zero year, month, and day values; a month and day, with a zero year (for example, an anniversary); a year on its own, with a zero month and a zero day; or a year and month, with a zero day (for example, a credit card expiration date). Related types: google.type.TimeOfDay, google.type.DateTime, google.protobuf.Timestamp.
Represents a textual expression in the Common Expression Language (CEL) syntax. CEL is a C-like expression language. The syntax and semantics of CEL are documented at https://github.com/google/cel-spec. Example (Comparison): title: "Summary size limit" description: "Determines if a summary is less than 100 chars" expression: "document.summary.size() < 100" Example (Equality): title: "Requestor is owner" description: "Determines if requestor is the document owner" expression: "document.owner == request.auth.claims.email" Example (Logic): title: "Public documents" description: "Determine whether the document should be publicly visible" expression: "document.type != 'private' && document.type != 'internal'" Example (Data Manipulation): title: "Notification string" description: "Create a notification string with a timestamp." expression: "'New message received at ' + string(document.create_time)" The exact variables and functions that may be referenced within an expression are determined by the service that evaluates it. See the service documentation for additional information.
Represents a time interval, encoded as a Timestamp start (inclusive) and a Timestamp end (exclusive). The start must be less than or equal to the end. When the start equals the end, the interval is empty (matches no time). When both start and end are unspecified, the interval matches any time.
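A minimal Elixir sketch of these semantics (names hypothetical):

    defmodule Interval do
      # Start is inclusive, end is exclusive; a nil endpoint is open,
      # and start == end matches no time at all.
      def contains?(%{start_time: s, end_time: e}, t) do
        (s == nil or DateTime.compare(t, s) != :lt) and
          (e == nil or DateTime.compare(t, e) == :lt)
      end
    end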
Represents an amount of money with its currency type.