API Reference google_api_dataplex v0.20.0
Modules
API client metadata for GoogleApi.Dataplex.V1.
API calls for all endpoints tagged Organizations.
API calls for all endpoints tagged Projects.
Handle Tesla connections for GoogleApi.Dataplex.V1.
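The sketch below is a minimal, hedged example of wiring these pieces together: fetching a token with Goth, building a connection, and listing lakes in one location. The Goth process name (MyApp.Goth), the project and location strings, and the exact generated function name are assumptions; check GoogleApi.Dataplex.V1.Api.Projects for the real signatures.

```elixir
# Minimal sketch, assuming a Goth process named MyApp.Goth is already running
# and that the project/location below exist. Function names follow the
# generator's convention; verify them in GoogleApi.Dataplex.V1.Api.Projects.
{:ok, %{token: token}} = Goth.fetch(MyApp.Goth)

conn = GoogleApi.Dataplex.V1.Connection.new(token)

{:ok, response} =
  GoogleApi.Dataplex.V1.Api.Projects.dataplex_projects_locations_lakes_list(
    conn,
    "projects/my-project/locations/us-central1"
  )

# The ListLakesResponse carries the lakes for this page (and a nextPageToken).
Enum.each(response.lakes || [], &IO.puts(&1.name))
```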
A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); }
Action represents an issue requiring administrator action for resolution.
Failed to apply security policy to the managed resource(s) under a lake, zone or an asset. For a lake or zone resource, one or more underlying assets has a failure applying security policy to the associated managed resource.
Action details for incompatible schemas detected by discovery.
Action details for invalid or unsupported data files detected by discovery.
Action details for invalid data arrangement.
Action details for invalid or unsupported partitions detected by discovery.
Action details for absence of data detected by discovery.
Action details for resource references in assets that cannot be located.
Action details for unauthorized resource issues raised to indicate that the service account associated with the lake instance is not authorized to access or manage the resource associated with an asset.
An aspect is a single piece of metadata describing an entry.
Information related to the source system of the aspect.
AspectType is a template for creating Aspects, and represents the JSON-schema for a given Entry, for example, BigQuery Table Schema.
Authorization for an AspectType.
MetadataTemplate definition for an AspectType.
Definition of the annotations of a field.
Definition of the constraints of a field.
Definition of an enum value, to be used for enum fields.
An asset represents a cloud resource that is being managed within a lake as a member of a zone.
Settings to manage the metadata discovery and publishing for an asset.
Describes CSV and similar semi-structured data formats.
Describes JSON data format.
Status of discovery for an asset.
The aggregated data statistics for the asset reported by discovery.
Identifies the cloud resource that is referenced by this asset.
Status of the resource referenced by an asset.
Security policy status of the asset. Data security policy, i.e., readers, writers & owners, should be specified in the lake/zone/asset IAM policy.
Aggregated status of the underlying assets of a lake or zone.
Cancel task jobs.
Cancel metadata job request.
Content represents a user-visible notebook or a SQL script.
Configuration for Notebook content.
Configuration for the SQL script content.
DataAccessSpec holds the access control configuration to be enforced on data stored within resources (e.g., rows or columns in BigQuery tables). When associated with data, the data is only accessible to principals explicitly granted access through the DataAccessSpec. Principals with access to the containing resource are not implicitly granted access.
Denotes one dataAttribute in a dataTaxonomy, for example, PII. DataAttribute resources can be defined in a hierarchy. A single dataAttribute resource can contain specs of multiple types, for example a PII attribute with a ResourceAccessSpec (readers: foo@bar.com) and a DataAccessSpec (readers: bar@foo.com).
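A minimal sketch of the PII example above, assuming the generated struct and field names (resourceAccessSpec, dataAccessSpec, readers) match the Model modules in this package; verify them before use.

```elixir
alias GoogleApi.Dataplex.V1.Model.{
  GoogleCloudDataplexV1DataAttribute,
  GoogleCloudDataplexV1ResourceAccessSpec,
  GoogleCloudDataplexV1DataAccessSpec
}

# PII attribute: readers of the containing resource vs. readers of the data.
pii = %GoogleCloudDataplexV1DataAttribute{
  description: "Personally identifiable information",
  resourceAccessSpec: %GoogleCloudDataplexV1ResourceAccessSpec{readers: ["foo@bar.com"]},
  dataAccessSpec: %GoogleCloudDataplexV1DataAccessSpec{readers: ["bar@foo.com"]}
}
```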
DataAttributeBinding represents the binding of attributes to resources, for example, binding the 'CustomerInfo' entity with the 'PII' attribute.
Represents a subresource of the given resource, and associated bindings with it. Currently supported subresources are column and partition schema fields within a table.
The output of a data discovery scan.
Describes BigQuery publishing configurations.
Spec for a data discovery scan.
Describes BigQuery publishing configurations.
Configurations related to Cloud Storage as the data source.
Describes CSV and similar semi-structured data formats.
Describes JSON data format.
DataProfileResult defines the output of DataProfileScan. Each field of the table will have a field-type-specific profile result.
The result of post scan actions of DataProfileScan job.
The result of BigQuery export post scan action.
Contains name, type, mode, and field-type-specific profile information.
A field within a table.
The profile information for each field type.
The profile information for a double type field.
The profile information for an integer type field.
The profile information for a string type field.
Top N non-null values in the scanned data.
DataProfileScan related setting.
The configuration of post scan actions of DataProfileScan job.
The configuration of BigQuery export post scan action.
The specification for fields to include or exclude in data profile scan.
DataQualityColumnResult provides a more detailed, per-column view of the results.
A dimension captures data quality intent about a defined subset of the rules specified.
DataQualityDimensionResult provides a more detailed, per-dimension view of the results.
The output of a DataQualityScan.
The result of post scan actions of DataQualityScan job.
The result of BigQuery export post scan action.
A rule captures data quality intent about a data source.
Evaluates whether each column value is null.
Evaluates whether each column value lies within a specified range.
Evaluates whether each column value matches a specified regex.
DataQualityRuleResult provides a more detailed, per-rule view of the results.
Evaluates whether each row passes the specified condition. The SQL expression needs to use BigQuery standard SQL syntax and should produce a boolean value per row as the result. Example: col1 >= 0 AND col2 < 10
Evaluates whether each column value is contained by a specified set.
A SQL statement that is evaluated to return rows that match an invalid state. If any rows are returned, this rule fails. The SQL statement must use BigQuery standard SQL syntax, and must not contain any semicolons. You can use the data reference parameter ${data()} to reference the source table with all of its precondition filters applied. Examples of precondition filters include row filters, incremental data filters, and sampling. For more information, see Data reference parameter (https://cloud.google.com/dataplex/docs/auto-data-quality-overview#data-reference-parameter). Example: SELECT * FROM ${data()} WHERE price < 0
Evaluates whether the column aggregate statistic lies within a specified range.
Evaluates whether the provided expression is true. The SQL expression needs to use BigQuery standard SQL syntax and should produce a scalar boolean result. Example: MIN(col1) >= 0
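As a hedged sketch, the SQL-based rules described above might be built with the generated model structs as follows; the module names and fields (sqlExpression, sqlStatement, dimension) mirror the REST schema and should be confirmed against the Model modules.

```elixir
alias GoogleApi.Dataplex.V1.Model.{
  GoogleCloudDataplexV1DataQualityRule,
  GoogleCloudDataplexV1DataQualityRuleRowConditionExpectation,
  GoogleCloudDataplexV1DataQualityRuleSqlAssertion
}

# Row condition: every scanned row must satisfy the boolean expression.
row_rule = %GoogleCloudDataplexV1DataQualityRule{
  dimension: "VALIDITY",
  rowConditionExpectation: %GoogleCloudDataplexV1DataQualityRuleRowConditionExpectation{
    sqlExpression: "col1 >= 0 AND col2 < 10"
  }
}

# SQL assertion: the statement must return zero rows for the rule to pass.
assertion_rule = %GoogleCloudDataplexV1DataQualityRule{
  dimension: "VALIDITY",
  sqlAssertion: %GoogleCloudDataplexV1DataQualityRuleSqlAssertion{
    sqlStatement: "SELECT * FROM ${data()} WHERE price < 0"
  }
}
```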
Evaluates whether the column has duplicates.
Information about the result of a data quality rule for a data quality scan. The monitored resource is 'DataScan'.
DataQualityScan related setting.
The configuration of post scan actions of DataQualityScan.
The configuration of BigQuery export post scan action.
This trigger is triggered whenever a scan job run ends, regardless of the result.
This trigger is triggered when the scan job itself fails, regardless of the result.
The configuration of notification report post scan action.
The individuals or groups who are designated to receive notifications upon triggers.
This trigger is triggered when the DQ score in the job result is less than a specified input score.
Represents a user-visible job which provides the insights for the related data source. For example: Data Quality: generates queries based on the rules and runs against the data to get data quality check results. Data Profile: analyzes the data in table(s) and generates insights about the structure, content and relationships (such as null percent, cardinality, min/max/mean, etc.).
These messages contain information about the execution of a DataScan. The monitored resource is 'DataScan'.
Applied configs for data profile type data scan job.
Data profile result for data scan job.
Applied configs for data quality type data scan job.
Data quality result for data scan job.
Post scan actions result for data scan job.
The result of BigQuery export post scan action.
DataScan execution settings.
Status of the data scan execution.
A DataScanJob represents an instance of DataScan execution.
The data source for DataScan.
DataTaxonomy represents a set of hierarchical DataAttribute resources, grouped with a common theme. For example, 'SensitiveDataTaxonomy' can have attributes to manage PII data. It is defined at the project level.
The payload associated with Discovery data processing.
Details about the action.
Details about configuration events.
Details about the entity.
Details about the partition.
Details about the published table.
Represents tables and fileset metadata contained within a zone.
Provides compatibility information for various metadata stores.
Provides compatibility information for a specific metadata store.
An entry is a representation of a data resource that can be described by various metadata.
An Entry Group represents a logical grouping of one or more Entries.
Information related to the source system of the data resource that is represented by the entry.
Information about individual items in the hierarchy that is associated with the data resource.
Entry Type is a template for creating Entries.
Authorization for an Entry Type.
Environment represents a user-visible compute infrastructure for analytics within a lake.
URI Endpoints to access sessions associated with the Environment.
Configuration for the underlying infrastructure used to run workloads.
Compute resources associated with the analyze interactive workloads.
Software Runtime Configuration to run Analyze.
Configuration for sessions created for this environment.
Status of sessions created for this environment.
Request details for generating data quality rule recommendations.
Response details for data quality rule recommendations.
Payload associated with Governance related log events.
Information about Entity resource that the log event is associated with.
An object that describes the values that you want to set for an entry and its attached aspects when you import metadata. Used when you run a metadata import job. See CreateMetadataJob. You provide a collection of import items in a metadata import file. For more information about how to create a metadata import file, see Metadata import file (https://cloud.google.com/dataplex/docs/import-metadata#metadata-import-file).
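A hedged sketch of what a single import item could look like as one line of a metadata import file. Plain maps are used to sidestep encoder details; the keys mirror the GoogleCloudDataplexV1ImportItem model, the resource names, aspect key, and update mask are placeholders, and Jason is assumed to be available for JSON encoding. Consult the linked documentation for the exact key and mask formats.

```elixir
# Hypothetical import item; keys mirror GoogleCloudDataplexV1ImportItem and
# values are placeholders (see the metadata import file documentation).
item = %{
  "entry" => %{
    "name" =>
      "projects/my-project/locations/us-central1/entryGroups/my-group/entries/my-entry",
    "entryType" => "projects/my-project/locations/us-central1/entryTypes/my-type"
  },
  "updateMask" => "aspects",
  "aspectKeys" => ["my-project.us-central1.my-aspect-type"]
}

# Written as one JSON object per line of the import file.
File.write!("import_file.jsonl", Jason.encode!(item) <> "\n")
```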
A job represents an instance of a task.
The payload associated with Job logs that contains events describing jobs that have run within a Lake.
A lake is a centralized repository for managing enterprise data across the organization distributed across many cloud projects, and stored in a variety of storage services such as Google Cloud Storage and BigQuery. The resources attached to a lake are referred to as managed resources. Data within these managed resources can be structured or unstructured. A lake provides data admins with tools to organize, secure and manage their data at scale, and provides data scientists and data engineers an integrated experience to easily search, discover, analyze and transform data and associated metadata.
Settings to manage association of Dataproc Metastore with a lake.
Status of Lake and Dataproc Metastore service instance association.
List actions response.
List AspectTypes response.
List assets response.
List content response.
List DataAttributeBindings response.
List DataAttributes response.
List DataScanJobs response.
List dataScans response.
List DataTaxonomies response.
List metadata entities response.
List Entries response.
List entry groups response.
List EntryTypes response.
List environments response.
List jobs response.
List lakes response.
List metadata jobs response.
List metadata partitions response.
List sessions response.
List tasks response.
List zones response.
A metadata job resource.
Results from a metadata import job.
Job specification for a metadata import job.
A boundary on the scope of impact that the metadata import job can have.
Metadata job status.
Represents the metadata of a long-running operation.
Represents partition metadata contained within entity instances.
ResourceAccessSpec holds the access control configuration to be enforced on the resources, for example, Cloud Storage bucket, BigQuery dataset, BigQuery table.
Run DataScan Request.
Run DataScan Response.
Attributes
- args (type: map(), default: nil) - Optional. Execution spec arguments. If the map is left empty, the task will run with the existing execution spec args from the task definition. If the map contains an entry with a new key, it will be added to the existing set of args. If the map contains an entry with an existing arg key in the task definition, the task will run with the new arg value for that entry. Clearing an existing arg requires the arg value to be explicitly set to a hyphen "-". The arg value cannot be empty.
- labels (type: map(), default: nil) - Optional. User-defined labels for the task. If the map is left empty, the task will run with the existing labels from the task definition. If the map contains an entry with a new key, it will be added to the existing set of labels. If the map contains an entry with an existing label key in the task definition, the task will run with the new label value for that entry. Clearing an existing label requires the label value to be explicitly set to a hyphen "-". The label value cannot be empty.
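These attributes belong to the RunTaskRequest model. A hedged sketch of building and sending such a request follows; the Goth process name, task name, arg/label values, and the exact generated function name are assumptions to verify against GoogleApi.Dataplex.V1.Api.Projects.

```elixir
alias GoogleApi.Dataplex.V1.Model.GoogleCloudDataplexV1RunTaskRequest

# Token and connection as in the Connection sketch near the top of this page.
{:ok, %{token: token}} = Goth.fetch(MyApp.Goth)
conn = GoogleApi.Dataplex.V1.Connection.new(token)

request = %GoogleCloudDataplexV1RunTaskRequest{
  # Override one arg and clear another (clearing requires the value "-").
  args: %{"input_path" => "gs://my-bucket/data", "legacy_flag" => "-"},
  labels: %{"triggered-by" => "manual-run"}
}

{:ok, response} =
  GoogleApi.Dataplex.V1.Api.Projects.dataplex_projects_locations_lakes_tasks_run(
    conn,
    "projects/my-project/locations/us-central1/lakes/my-lake/tasks/my-task",
    body: request
  )

# The RunTaskResponse is expected to carry the started job.
IO.inspect(response.job)
```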
The data scanned during processing (e.g., in an incremental DataScan).
A data range denoted by a pair of start/end values of a field.
Schema information describing the structure and layout of the data.
Represents a key field within the entity's partition structure. You could have up to 20 partition fields, but only the first 10 partitions have filtering ability due to performance considerations. Note: Partition fields are immutable.
Represents a column field within a table schema.
Attributes
- nextPageToken (type: String.t, default: nil) - Token to retrieve the next page of results, or empty if there are no more results in the list.
- results (type: list(GoogleApi.Dataplex.V1.Model.GoogleCloudDataplexV1SearchEntriesResult.t), default: nil) - The results matching the search query.
- totalSize (type: integer(), default: nil) - The estimated total number of matching entries. This number isn't guaranteed to be accurate.
- unreachable (type: list(String.t), default: nil) - Locations that the service couldn't reach. Search results don't include data from these locations.
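A hedged sketch of paging through this response by following nextPageToken until it is empty; the generated function name and the optional parameters (:query, :pageToken) are assumptions to check against GoogleApi.Dataplex.V1.Api.Projects.

```elixir
defmodule SearchAllEntries do
  @moduledoc false
  alias GoogleApi.Dataplex.V1.Api.Projects

  # Accumulates results across pages by following nextPageToken.
  def run(conn, location_name, query, page_token \\ nil, acc \\ []) do
    {:ok, resp} =
      Projects.dataplex_projects_locations_search_entries(
        conn,
        location_name,
        query: query,
        pageToken: page_token
      )

    acc = acc ++ (resp.results || [])

    case resp.nextPageToken do
      token when token in [nil, ""] -> acc
      token -> run(conn, location_name, query, token, acc)
    end
  end
end
```

A call might look like SearchAllEntries.run(conn, "projects/my-project/locations/global", "orders"), with conn built as in the Connection sketch near the top of this page; the scope path and query string are placeholders.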
A single result of a SearchEntries request.
Snippets for the entry, containing HTML-style highlighting for matched tokens, to be used in the UI.
Represents an active analyze session running for a user.
These messages contain information about sessions within an environment. The monitored resource is 'Environment'.
Execution details of the query.
Describes the access mechanism of the data within its storage location.
Describes the format of the data within its storage location.
Describes CSV and similar semi-structured data formats.
Describes Iceberg data format.
Describes JSON data format.
A task represents a user-visible job.
Execution related settings, like retry and service_account.
Status of the task execution (e.g. Jobs).
Configuration for the underlying infrastructure used to run workloads.
Batch compute resources associated with the task.
Container Image Runtime Configuration used with Batch execution.
Cloud VPC Network used to run the infrastructure.
Config for running scheduled notebooks.
User-specified config for running a Spark task.
Task scheduling and trigger settings.
DataScan scheduling and trigger settings.
The scan runs once via RunDataScan API.
The scan is scheduled to run periodically.
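A hedged sketch of configuring a scheduled trigger inside a DataScan execution spec; the struct names and the cron string are assumptions based on the generated models and the scheduling documentation.

```elixir
alias GoogleApi.Dataplex.V1.Model.{
  GoogleCloudDataplexV1DataScanExecutionSpec,
  GoogleCloudDataplexV1Trigger,
  GoogleCloudDataplexV1TriggerSchedule
}

# Run the scan every day at 02:00 (cron semantics per the DataScan docs).
execution_spec = %GoogleCloudDataplexV1DataScanExecutionSpec{
  trigger: %GoogleCloudDataplexV1Trigger{
    schedule: %GoogleCloudDataplexV1TriggerSchedule{cron: "0 2 * * *"}
  }
}
```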
A zone represents a logical group of related assets within a lake. A zone can be used to map to organizational structure or represent stages of data readiness from raw to curated. It provides managing behavior that is shared or inherited by all contained assets.
Settings to manage the metadata discovery and publishing in a zone.
Describes CSV and similar semi-structured data formats.
Describes JSON data format.
Settings for resources attached as assets within a zone.
The response message for Locations.ListLocations.
A resource that represents a Google Cloud location.
Specifies the audit configuration for a service. The configuration determines which permission types are logged, and what identities, if any, are exempted from logging. An AuditConfig must have one or more AuditLogConfigs. If there are AuditConfigs for both allServices and a specific service, the union of the two AuditConfigs is used for that service: the log_types specified in each AuditConfig are enabled, and the exempted_members in each AuditLogConfig are exempted. Example Policy with multiple AuditConfigs: { "audit_configs": [ { "service": "allServices", "audit_log_configs": [ { "log_type": "DATA_READ", "exempted_members": [ "user:jose@example.com" ] }, { "log_type": "DATA_WRITE" }, { "log_type": "ADMIN_READ" } ] }, { "service": "sampleservice.googleapis.com", "audit_log_configs": [ { "log_type": "DATA_READ" }, { "log_type": "DATA_WRITE", "exempted_members": [ "user:aliya@example.com" ] } ] } ] } For sampleservice, this policy enables DATA_READ, DATA_WRITE and ADMIN_READ logging. It also exempts jose@example.com from DATA_READ logging, and aliya@example.com from DATA_WRITE logging.
Provides the configuration for logging a type of permissions. Example: { "audit_log_configs": [ { "log_type": "DATA_READ", "exempted_members": [ "user:jose@example.com" ] }, { "log_type": "DATA_WRITE" } ] } This enables 'DATA_READ' and 'DATA_WRITE' logging, while exempting jose@example.com from DATA_READ logging.
Associates members, or principals, with a role.
An Identity and Access Management (IAM) policy, which specifies access controls for Google Cloud resources. A Policy is a collection of bindings. A binding binds one or more members, or principals, to a single role. Principals can be user accounts, service accounts, Google groups, and domains (such as G Suite). A role is a named list of permissions; each role can be an IAM predefined role or a user-created custom role. For some types of Google Cloud resources, a binding can also specify a condition, which is a logical expression that allows access to a resource only if the expression evaluates to true. A condition can add constraints based on attributes of the request, the resource, or both. To learn which resources support conditions in their IAM policies, see the IAM documentation (https://cloud.google.com/iam/help/conditions/resource-policies). JSON example: { "bindings": [ { "role": "roles/resourcemanager.organizationAdmin", "members": [ "user:mike@example.com", "group:admins@example.com", "domain:google.com", "serviceAccount:my-project-id@appspot.gserviceaccount.com" ] }, { "role": "roles/resourcemanager.organizationViewer", "members": [ "user:eve@example.com" ], "condition": { "title": "expirable access", "description": "Does not grant access after Sep 2020", "expression": "request.time < timestamp('2020-10-01T00:00:00.000Z')" } } ], "etag": "BwWWja0YfJA=", "version": 3 } YAML example: bindings: - members: - user:mike@example.com - group:admins@example.com - domain:google.com - serviceAccount:my-project-id@appspot.gserviceaccount.com role: roles/resourcemanager.organizationAdmin - members: - user:eve@example.com role: roles/resourcemanager.organizationViewer condition: title: expirable access description: Does not grant access after Sep 2020 expression: request.time < timestamp('2020-10-01T00:00:00.000Z') etag: BwWWja0YfJA= version: 3 For a description of IAM and its features, see the IAM documentation (https://cloud.google.com/iam/docs/).
Request message for SetIamPolicy method.
Request message for TestIamPermissions method.
Response message for TestIamPermissions method.
The request message for Operations.CancelOperation.
The response message for Operations.ListOperations.
This resource represents a long-running operation that is the result of a network API call.
The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC (https://github.com/grpc). Each Status message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide (https://cloud.google.com/apis/design/errors).
Represents a textual expression in the Common Expression Language (CEL) syntax. CEL is a C-like expression language. The syntax and semantics of CEL are documented at https://github.com/google/cel-spec. Example (Comparison): title: "Summary size limit" description: "Determines if a summary is less than 100 chars" expression: "document.summary.size() < 100" Example (Equality): title: "Requestor is owner" description: "Determines if requestor is the document owner" expression: "document.owner == request.auth.claims.email" Example (Logic): title: "Public documents" description: "Determine whether the document should be publicly visible" expression: "document.type != 'private' && document.type != 'internal'" Example (Data Manipulation): title: "Notification string" description: "Create a notification string with a timestamp." expression: "'New message received at ' + string(document.create_time)" The exact variables and functions that may be referenced within an expression are determined by the service that evaluates it. See the service documentation for additional information.