API Reference google_api_document_ai v0.21.0

Modules

API client metadata for GoogleApi.DocumentAI.V1beta2.

API calls for all endpoints tagged Projects.

Handle Tesla connections for GoogleApi.DocumentAI.V1beta2.
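
A connection is typically built from an OAuth2 access token, for example one fetched with Goth. A minimal sketch, assuming a Goth server named MyApp.Goth is already running and that the usual Connection.new/1 constructor of the generated google_api clients applies here:

    # Fetch an access token (the Goth server name is an assumption for illustration).
    {:ok, token} = Goth.fetch(MyApp.Goth)

    # Build an authenticated Tesla connection for the Document AI v1beta2 client.
    conn = GoogleApi.DocumentAI.V1beta2.Connection.new(token.token)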

The long running operation metadata for delete processor method.

The long running operation metadata for delete processor version method.

The long running operation metadata for deploy processor version method.

The long running operation metadata for disable processor method.

Response message for the disable processor method. Intentionally empty proto for adding fields in future.

The long running operation metadata for enable processor method.

Response message for the enable processor method. Intentionally empty proto for adding fields in future.

The long running operation metadata for set default processor version method.

The metadata that represents a processor version being created.

The dataset validation information. This includes any and all errors with documents and the dataset.

The long running operation metadata for the undeploy processor version method.

The long running operation metadata for updating the human review configuration.

The long running operation metadata for batch process method.

Response message for batch process document method.

The status of human review on a processed document.

The long running operation metadata for review document method.

Response to a batch document processing request. This is returned in the LRO Operation after the operation is complete.

A bounding polygon for the detected image annotation.

Document represents the canonical document resource in Document Understanding AI. It is an interchange format that provides insights into documents and allows for collaboration between users and Document Understanding AI to iterate and optimize for quality.

A phrase in the text that is a known entity type, such as a person, an organization, or location.

Referencing the visual context of the entity in the Document.pages. Page anchors can be cross-page, consist of multiple bounding polygons and optionally reference specific layout element types.

Represents a weak reference to a page element within a document.

A block has a set of lines (collected into paragraphs) that have a common line-spacing and orientation.

A collection of tokens that a human would perceive as a line. Does not cross column boundaries, can be horizontal, vertical, etc.

Representation of a transformation matrix, intended to be compatible with the OpenCV matrix format used for image manipulation.

A collection of lines that a human would perceive as a paragraph.

A table representation similar to HTML table structure.

Detected non-text visual elements on the page, e.g. checkbox, signature, etc.

Structure to identify provenance relationships between annotations in different revisions.

Structure for referencing parent provenances. When an element replaces one or more other elements, parent references identify the elements that are replaced.

Contains past or forward revisions of this document.

For a large document, sharding may be performed to produce several document shards. Each document shard contains this field to detail which shard it is.

Annotation for common text style attributes. This adheres to CSS conventions as much as possible.

A text segment in the Document.text. The indices may be out of bounds, which indicates that the text extends into another document shard for large sharded documents. See ShardInfo.text_offset.

This message is used for text changes, also known as OCR corrections.

The Google Cloud Storage location where the output file will be written to.

The Google Cloud Storage location where the input file will be read from.

A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.

Contains metadata for the BatchProcessDocuments operation.

A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.

Parameters to control AutoML model prediction behavior.

Request to batch process documents as an asynchronous operation. The output is written to Cloud Storage as JSON in the [Document] format.
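
A rough sketch of how such a request might be assembled from this library's generated model structs; the module and field names follow the generator's usual conventions (camelCase struct keys mirroring the JSON field names) and, like the gs:// URIs, are assumptions rather than verified identifiers:

    alias GoogleApi.DocumentAI.V1beta2.Model, as: M

    batch_request = %M.GoogleCloudDocumentaiV1beta2BatchProcessDocumentsRequest{
      requests: [
        %M.GoogleCloudDocumentaiV1beta2ProcessDocumentRequest{
          # Source PDF in Cloud Storage (hypothetical URI).
          inputConfig: %M.GoogleCloudDocumentaiV1beta2InputConfig{
            gcsSource: %M.GoogleCloudDocumentaiV1beta2GcsSource{uri: "gs://my-bucket/in/invoice.pdf"},
            mimeType: "application/pdf"
          },
          # Destination prefix for the JSON Document shards (hypothetical URI).
          outputConfig: %M.GoogleCloudDocumentaiV1beta2OutputConfig{
            gcsDestination: %M.GoogleCloudDocumentaiV1beta2GcsDestination{uri: "gs://my-bucket/out/"},
            pagesPerShard: 20
          }
        }
      ]
    }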

Response to a batch document processing request. This is returned in the LRO Operation after the operation is complete.

A bounding polygon for the detected image annotation.

Document represents the canonical document resource in Document Understanding AI. It is an interchange format that provides insights into documents and allows for collaboration between users and Document Understanding AI to iterate and optimize for quality.

A phrase in the text that is a known entity type, such as a person, an organization, or location.

Label attaches schema information and/or other metadata to segments within a Document. Multiple Labels on a single field can denote either different labels, different instances of the same label created at different times, or some combination of both.

Referencing the visual context of the entity in the Document.pages. Page anchors can be cross-page, consist of multiple bounding polygons and optionally reference specific layout element types.

Represents a weak reference to a page element within a document.

A block has a set of lines (collected into paragraphs) that have a common line-spacing and orientation.

A collection of tokens that a human would perceive as a line. Does not cross column boundaries, can be horizontal, vertical, etc.

Representation of a transformation matrix, intended to be compatible with the OpenCV matrix format used for image manipulation.

A collection of lines that a human would perceive as a paragraph.

A table representation similar to HTML table structure.

Detected non-text visual elements on the page, e.g. checkbox, signature, etc.

Structure to identify provenance relationships between annotations in different revisions.

Structure for referencing parent provenances. When an element replaces one or more other elements, parent references identify the elements that are replaced.

Contains past or forward revisions of this document.

For a large document, sharding may be performed to produce several document shards. Each document shard contains this field to detail which shard it is.

Annotation for common text style attributes. This adheres to CSS conventions as much as possible.

A text segment in the Document.text. The indices may be out of bounds, which indicates that the text extends into another document shard for large sharded documents. See ShardInfo.text_offset.

This message is used for text changes, also known as OCR corrections.

The Google Cloud Storage location where the output file will be written to.

The Google Cloud Storage location where the input file will be read from.

A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.

Parameters to control Optical Character Recognition (OCR) behavior.

Contains metadata for the BatchProcessDocuments operation.

A hint for a table bounding box on the page for table parsing.

A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.

The long running operation metadata for batch process method.

The long running operation metadata for delete processor method.

The long running operation metadata for disable processor method.

Response message for the disable processor method. Intentionally empty proto for adding fields in future.

The long running operation metadata for enable processor method.

Response message for the enable processor method. Intentionally empty proto for adding fields in future.

The status of human review on a processed document.

The long running operation metadata for review document method.

This resource represents a long-running operation that is the result of a network API call.

A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); } The JSON representation for Empty is an empty JSON object {}.

The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each Status message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide.

Represents a color in the RGBA color space. This representation is designed for simplicity of conversion to/from color representations in various languages over compactness. For example, the fields of this representation can be trivially provided to the constructor of java.awt.Color in Java; it can also be trivially provided to UIColor's +colorWithRed:green:blue:alpha method in iOS; and, with just a little work, it can be easily formatted into a CSS rgba() string in JavaScript. This reference page doesn't carry information about the absolute color space that should be used to interpret the RGB value (e.g. sRGB, Adobe RGB, DCI-P3, BT.2020, etc.). By default, applications should assume the sRGB color space. When color equality needs to be decided, implementations, unless documented otherwise, treat two colors as equal if all their red, green, blue, and alpha values each differ by at most 1e-5. Example (Java): import com.google.type.Color; // ... public static java.awt.Color fromProto(Color protocolor) { float alpha = protocolor.hasAlpha() ? protocolor.getAlpha().getValue() : 1.0f; return new java.awt.Color( protocolor.getRed(), protocolor.getGreen(), protocolor.getBlue(), alpha); } public static Color toProto(java.awt.Color color) { float red = (float) color.getRed(); float green = (float) color.getGreen(); float blue = (float) color.getBlue(); float denominator = 255.0f; Color.Builder resultBuilder = Color .newBuilder() .setRed(red / denominator) .setGreen(green / denominator) .setBlue(blue / denominator); int alpha = color.getAlpha(); if (alpha != 255) { resultBuilder.setAlpha( FloatValue .newBuilder() .setValue(((float) alpha) / denominator) .build()); } return resultBuilder.build(); } // ... Example (iOS / Obj-C): // ... static UIColor fromProto(Color protocolor) { float red = [protocolor red]; float green = [protocolor green]; float blue = [protocolor blue]; FloatValue alpha_wrapper = [protocolor alpha]; float alpha = 1.0; if (alpha_wrapper != nil) { alpha = [alpha_wrapper value]; } return [UIColor colorWithRed:red green:green blue:blue alpha:alpha]; } static Color toProto(UIColor color) { CGFloat red, green, blue, alpha; if (![color getRed:&red green:&green blue:&blue alpha:&alpha]) { return nil; } Color result = [[Color alloc] init]; [result setRed:red]; [result setGreen:green]; [result setBlue:blue]; if (alpha <= 0.9999) { [result setAlpha:floatWrapperWithValue(alpha)]; } [result autorelease]; return result; } // ... Example (JavaScript): // ... var protoToCssColor = function(rgb_color) { var redFrac = rgb_color.red || 0.0; var greenFrac = rgb_color.green || 0.0; var blueFrac = rgb_color.blue || 0.0; var red = Math.floor(redFrac * 255); var green = Math.floor(greenFrac * 255); var blue = Math.floor(blueFrac * 255); if (!('alpha' in rgb_color)) { return rgbToCssColor(red, green, blue); } var alphaFrac = rgb_color.alpha.value || 0.0; var rgbParams = [red, green, blue].join(','); return ['rgba(', rgbParams, ',', alphaFrac, ')'].join(''); }; var rgbToCssColor = function(red, green, blue) { var rgbNumber = new Number((red << 16) | (green << 8) | blue); var hexString = rgbNumber.toString(16); var missingZeros = 6 - hexString.length; var resultBuilder = ['#']; for (var i = 0; i < missingZeros; i++) { resultBuilder.push('0'); } resultBuilder.push(hexString); return resultBuilder.join(''); }; // ...

Represents a whole or partial calendar date, such as a birthday. The time of day and time zone are either specified elsewhere or are insignificant. The date is relative to the Gregorian Calendar. This can represent one of the following: a full date, with non-zero year, month, and day values; a month and day, with a zero year, such as an anniversary; a year on its own, with zero month and day values; or a year and month, with a zero day, such as a credit card expiration date. Related types are google.type.TimeOfDay and google.protobuf.Timestamp.

Represents civil time (or occasionally physical time). This type can represent a civil time in one of a few possible ways: when utc_offset is set and time_zone is unset, a civil time on a calendar day with a particular offset from UTC; when time_zone is set and utc_offset is unset, a civil time on a calendar day in a particular time zone; when neither time_zone nor utc_offset is set, a civil time on a calendar day in local time. The date is relative to the Proleptic Gregorian Calendar. If year is 0, the DateTime is considered not to have a specific year. month and day must have valid, non-zero values. This type may also be used to represent a physical time if all the date and time fields are set and either case of the time_offset oneof is set. Consider using the Timestamp message for physical time instead. If your use case also needs to store the user's timezone, that can be done in another field. This type is more flexible than some applications may want. Make sure to document and validate your application's limitations.

Represents an amount of money with its currency type.

Represents a postal address, e.g. for postal delivery or payments addresses. Given a postal address, a postal service can deliver items to a premise, P.O. Box or similar. It is not intended to model geographical locations (roads, towns, mountains). In typical usage an address would be created via user input or from importing existing data, depending on the type of process. Advice on address input / editing: use an i18n-ready address widget such as https://github.com/google/libaddressinput; users should not be presented with UI elements for input or editing of fields outside countries where that field is used. For more guidance on how to use this schema, please see: https://support.google.com/business/answer/6397478

API client metadata for GoogleApi.DocumentAI.V1beta3.

API calls for all endpoints tagged Projects.

Handle Tesla connections for GoogleApi.DocumentAI.V1beta3.

The long running operation metadata for delete processor method.

The long running operation metadata for delete processor version method.

The long running operation metadata for deploy processor version method.

The long running operation metadata for disable processor method.

Response message for the disable processor method. Intentionally empty proto for adding fields in future.

The long running operation metadata for enable processor method.

Response message for the enable processor method. Intentionally empty proto for adding fields in future.

The long running operation metadata for set default processor version method.

The metadata that represents a processor version being created.

The dataset validation information. This includes any and all errors with documents and the dataset.

The long running operation metadata for the undeploy processor version method.

The long running operation metadata for updating the human review configuration.

The long running operation metadata for batch process method.

Response message for batch process document method.

The status of human review on a processed document.

The long running operation metadata for review document method.

Response to a batch document processing request. This is returned in the LRO Operation after the operation is complete.

A bounding polygon for the detected image annotation.

Document represents the canonical document resource in Document Understanding AI. It is an interchange format that provides insights into documents and allows for collaboration between users and Document Understanding AI to iterate and optimize for quality.

A phrase in the text that is a known entity type, such as a person, an organization, or location.

Referencing the visual context of the entity in the Document.pages. Page anchors can be cross-page, consist of multiple bounding polygons and optionally reference specific layout element types.

Represents a weak reference to a page element within a document.

A block has a set of lines (collected into paragraphs) that have a common line-spacing and orientation.

A collection of tokens that a human would perceive as a line. Does not cross column boundaries, can be horizontal, vertical, etc.

Representation of a transformation matrix, intended to be compatible with the OpenCV matrix format used for image manipulation.

A collection of lines that a human would perceive as a paragraph.

A table representation similar to HTML table structure.

Detected non-text visual elements on the page, e.g. checkbox, signature, etc.

Structure to identify provenance relationships between annotations in different revisions.

Structure for referencing parent provenances. When an element replaces one or more other elements, parent references identify the elements that are replaced.

Contains past or forward revisions of this document.

For a large document, sharding may be performed to produce several document shards. Each document shard contains this field to detail which shard it is.

Annotation for common text style attributes. This adheres to CSS conventions as much as possible.

A text segment in the Document.text. The indices may be out of bounds, which indicates that the text extends into another document shard for large sharded documents. See ShardInfo.text_offset.

This message is used for text changes, also known as OCR corrections.

The Google Cloud Storage location where the output file will be written to.

The Google Cloud Storage location where the input file will be read from.

A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.

Contains metadata for the BatchProcessDocuments operation.

A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.

Response to a batch document processing request. This is returned in the LRO Operation after the operation is complete.

A bounding polygon for the detected image annotation.

Document represents the canonical document resource in Document Understanding AI. It is an interchange format that provides insights into documents and allows for collaboration between users and Document Understanding AI to iterate and optimize for quality.

A phrase in the text that is a known entity type, such as a person, an organization, or location.

Label attaches schema information and/or other metadata to segments within a Document. Multiple Labels on a single field can denote either different labels, different instances of the same label created at different times, or some combination of both.

Referencing the visual context of the entity in the Document.pages. Page anchors can be cross-page, consist of multiple bounding polygons and optionally reference specific layout element types.

Represents a weak reference to a page element within a document.

A block has a set of lines (collected into paragraphs) that have a common line-spacing and orientation.

A collection of tokens that a human would perceive as a line. Does not cross column boundaries, can be horizontal, vertical, etc.

Representation of a transformation matrix, intended to be compatible with the OpenCV matrix format used for image manipulation.

A collection of lines that a human would perceive as a paragraph.

A table representation similar to HTML table structure.

Detected non-text visual elements on the page, e.g. checkbox, signature, etc.

Structure to identify provenance relationships between annotations in different revisions.

Structure for referencing parent provenances. When an element replaces one or more other elements, parent references identify the elements that are replaced.

Contains past or forward revisions of this document.

For a large document, sharding may be performed to produce several document shards. Each document shard contains this field to detail which shard it is.

Annotation for common text style attributes. This adheres to CSS conventions as much as possible.

A text segment in the Document.text. The indices may be out of bounds, which indicates that the text extends into another document shard for large sharded documents. See ShardInfo.text_offset.

This message is used for text changes, also known as OCR corrections.

The Google Cloud Storage location where the output file will be written to.

The Google Cloud Storage location where the input file will be read from.

A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.

Contains metadata for the BatchProcessDocuments operation.

A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.

The common config to specify a set of documents used as input.

The long running operation metadata for batch process method.

A bounding polygon for the detected image annotation.

The long running operation metadata for delete processor method.

The long running operation metadata for disable processor method.

Response message for the disable processor method. Intentionally empty proto for adding fields in future.

Document represents the canonical document resource in Document Understanding AI. It is an interchange format that provides insights into documents and allows for collaboration between users and Document Understanding AI to iterate and optimize for quality.

A phrase in the text that is a known entity type, such as a person, an organization, or location.

Config that controls the output of documents. All documents will be written as a JSON file.

Referencing the visual context of the entity in the Document.pages. Page anchors can be cross-page, consist of multiple bounding polygons and optionally reference specific layout element types.

Represents a weak reference to a page element within a document.

A block has a set of lines (collected into paragraphs) that have a common line-spacing and orientation.

A collection of tokens that a human would perceive as a line. Does not cross column boundaries, can be horizontal, vertical, etc.

Representation of a transformation matrix, intended to be compatible with the OpenCV matrix format used for image manipulation.

A collection of lines that a human would perceive as a paragraph.

A table representation similar to HTML table structure.

Detected non-text visual elements on the page, e.g. checkbox, signature, etc.

Structure to identify provenance relationships between annotations in different revisions.

Structure for referencing parent provenances. When an element replaces one or more other elements, parent references identify the elements that are replaced.

Contains past or forward revisions of this document.

For a large document, sharding may be performed to produce several document shards. Each document shard contains this field to detail which shard it is.

Annotation for common text style attributes. This adheres to CSS conventions as much as possible.

A text segment in the Document.text. The indices may be out of bounds, which indicates that the text extends into another document shard for large sharded documents. See ShardInfo.text_offset.

This message is used for text changes, also known as OCR corrections.

The long running operation metadata for enable processor method.

Response message for the enable processor method. Intentionally empty proto for adding fields in future.

Specifies all documents on Cloud Storage with a common prefix.
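
Taken together, the batch input config, the document output config, and this Cloud Storage prefix describe where a v1beta3 batch job reads its documents from and where it writes its JSON output. A minimal sketch of composing them with the library's generated structs; the module and field names follow the generator's usual conventions and, like the gs:// prefixes, are assumptions:

    alias GoogleApi.DocumentAI.V1beta3.Model, as: M

    input = %M.GoogleCloudDocumentaiV1beta3BatchDocumentsInputConfig{
      # Process every document stored under this (hypothetical) prefix.
      gcsPrefix: %M.GoogleCloudDocumentaiV1beta3GcsPrefix{gcsUriPrefix: "gs://my-bucket/incoming/"}
    }

    output = %M.GoogleCloudDocumentaiV1beta3DocumentOutputConfig{
      # Write each processed Document as JSON under this (hypothetical) prefix.
      gcsOutputConfig: %M.GoogleCloudDocumentaiV1beta3DocumentOutputConfigGcsOutputConfig{
        gcsUri: "gs://my-bucket/processed/"
      }
    }

    batch_request = %M.GoogleCloudDocumentaiV1beta3BatchProcessRequest{
      inputDocuments: input,
      documentOutputConfig: output
    }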

The status of human review on a processed document.

A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.

Response message for the process document method.

The first-class citizen for Document AI. Each processor defines how to extract structural information from a document.

A processor type is responsible for performing a certain document understanding task on a certain type of document. All processor types are created by the documentai service internally. Users will only list available processor types via the UI. For different users (projects), the available processor types may differ, since access to some types is exposed via EAP whitelisting. We make the ProcessorType a resource under location so that we have a unified API and keep open the possibility that the UI will load different available processor types from different regions. But for alpha, the behavior is that the user always gets the union of all available processor types across all regions, no matter which regionalized endpoint is called; the 'available_locations' field then shows the regions in which a processor type is available. For example, users can call either the 'US' or 'EU' endpoint to fetch processor types. The response will include an 'invoice parsing' processor whose 'available_locations' field contains only 'US', so the user can create an 'invoice parsing' processor under the location 'US'; such an attempt under the location 'EU' will fail. Next ID: 7.

The location information about where the processor is available.

Payload message of raw document content (bytes).
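
For synchronous processing, this raw-document payload can carry the file bytes directly instead of pointing at Cloud Storage. A sketch of filling it from a local file; the field names, the base64 encoding expected by the JSON transport, and the file path are assumptions:

    alias GoogleApi.DocumentAI.V1beta3.Model, as: M

    raw = %M.GoogleCloudDocumentaiV1beta3RawDocument{
      # The JSON transport represents proto bytes fields as base64 strings.
      content: Base.encode64(File.read!("invoice.pdf")),
      mimeType: "application/pdf"
    }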

The long running operation metadata for review document method.

Request message for review document method. Next Id: 6.

The schema defines the output of the processed document by a processor.

EntityType is the wrapper of a label of the corresponding model with detailed attributes and limitations for entity-based processors. Multiple types can also compose a dependency tree to represent nested types.

A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.

The response message for Locations.ListLocations.

A resource that represents Google Cloud Platform location.

The response message for Operations.ListOperations.

This resource represents a long-running operation that is the result of a network API call.

A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); } The JSON representation for Empty is an empty JSON object {}.

The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each Status message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide.

Represents a color in the RGBA color space. This representation is designed for simplicity of conversion to/from color representations in various languages over compactness. For example, the fields of this representation can be trivially provided to the constructor of java.awt.Color in Java; it can also be trivially provided to UIColor's +colorWithRed:green:blue:alpha method in iOS; and, with just a little work, it can be easily formatted into a CSS rgba() string in JavaScript. This reference page doesn't carry information about the absolute color space that should be used to interpret the RGB value (e.g. sRGB, Adobe RGB, DCI-P3, BT.2020, etc.). By default, applications should assume the sRGB color space. When color equality needs to be decided, implementations, unless documented otherwise, treat two colors as equal if all their red, green, blue, and alpha values each differ by at most 1e-5. Example (Java): import com.google.type.Color; // ... public static java.awt.Color fromProto(Color protocolor) { float alpha = protocolor.hasAlpha() ? protocolor.getAlpha().getValue() : 1.0f; return new java.awt.Color( protocolor.getRed(), protocolor.getGreen(), protocolor.getBlue(), alpha); } public static Color toProto(java.awt.Color color) { float red = (float) color.getRed(); float green = (float) color.getGreen(); float blue = (float) color.getBlue(); float denominator = 255.0f; Color.Builder resultBuilder = Color .newBuilder() .setRed(red / denominator) .setGreen(green / denominator) .setBlue(blue / denominator); int alpha = color.getAlpha(); if (alpha != 255) { resultBuilder.setAlpha( FloatValue .newBuilder() .setValue(((float) alpha) / denominator) .build()); } return resultBuilder.build(); } // ... Example (iOS / Obj-C): // ... static UIColor fromProto(Color protocolor) { float red = [protocolor red]; float green = [protocolor green]; float blue = [protocolor blue]; FloatValue alpha_wrapper = [protocolor alpha]; float alpha = 1.0; if (alpha_wrapper != nil) { alpha = [alpha_wrapper value]; } return [UIColor colorWithRed:red green:green blue:blue alpha:alpha]; } static Color toProto(UIColor color) { CGFloat red, green, blue, alpha; if (![color getRed:&red green:&green blue:&blue alpha:&alpha]) { return nil; } Color result = [[Color alloc] init]; [result setRed:red]; [result setGreen:green]; [result setBlue:blue]; if (alpha <= 0.9999) { [result setAlpha:floatWrapperWithValue(alpha)]; } [result autorelease]; return result; } // ... Example (JavaScript): // ... var protoToCssColor = function(rgb_color) { var redFrac = rgb_color.red || 0.0; var greenFrac = rgb_color.green || 0.0; var blueFrac = rgb_color.blue || 0.0; var red = Math.floor(redFrac * 255); var green = Math.floor(greenFrac * 255); var blue = Math.floor(blueFrac * 255); if (!('alpha' in rgb_color)) { return rgbToCssColor(red, green, blue); } var alphaFrac = rgb_color.alpha.value || 0.0; var rgbParams = [red, green, blue].join(','); return ['rgba(', rgbParams, ',', alphaFrac, ')'].join(''); }; var rgbToCssColor = function(red, green, blue) { var rgbNumber = new Number((red << 16) | (green << 8) | blue); var hexString = rgbNumber.toString(16); var missingZeros = 6 - hexString.length; var resultBuilder = ['#']; for (var i = 0; i < missingZeros; i++) { resultBuilder.push('0'); } resultBuilder.push(hexString); return resultBuilder.join(''); }; // ...

Represents a whole or partial calendar date, such as a birthday. The time of day and time zone are either specified elsewhere or are insignificant. The date is relative to the Gregorian Calendar. This can represent one of the following: a full date, with non-zero year, month, and day values; a month and day, with a zero year, such as an anniversary; a year on its own, with zero month and day values; or a year and month, with a zero day, such as a credit card expiration date. Related types are google.type.TimeOfDay and google.protobuf.Timestamp.

Represents civil time (or occasionally physical time). This type can represent a civil time in one of a few possible ways: when utc_offset is set and time_zone is unset, a civil time on a calendar day with a particular offset from UTC; when time_zone is set and utc_offset is unset, a civil time on a calendar day in a particular time zone; when neither time_zone nor utc_offset is set, a civil time on a calendar day in local time. The date is relative to the Proleptic Gregorian Calendar. If year is 0, the DateTime is considered not to have a specific year. month and day must have valid, non-zero values. This type may also be used to represent a physical time if all the date and time fields are set and either case of the time_offset oneof is set. Consider using the Timestamp message for physical time instead. If your use case also needs to store the user's timezone, that can be done in another field. This type is more flexible than some applications may want. Make sure to document and validate your application's limitations.

Represents an amount of money with its currency type.

Represents a postal address, e.g. for postal delivery or payments addresses. Given a postal address, a postal service can deliver items to a premise, P.O. Box or similar. It is not intended to model geographical locations (roads, towns, mountains). In typical usage an address would be created via user input or from importing existing data, depending on the type of process. Advice on address input / editing: use an i18n-ready address widget such as https://github.com/google/libaddressinput; users should not be presented with UI elements for input or editing of fields outside countries where that field is used. For more guidance on how to use this schema, please see: https://support.google.com/business/answer/6397478