API Reference google_api_speech v0.28.0

Modules

API client metadata for GoogleApi.Speech.V1.

API calls for all endpoints tagged Operations.

API calls for all endpoints tagged Projects.

API calls for all endpoints tagged Speech.

Handle Tesla connections for GoogleApi.Speech.V1.

Attributes

  • abnfStrings (type: list(String.t), default: nil) - All declarations and rules of an ABNF grammar broken up into multiple strings that will end up concatenated.

An item of the class.

Message sent by the client for the CreateCustomClass method.

Message sent by the client for the CreatePhraseSet method.

A set of words or phrases that represents a common concept likely to appear in your audio, for example a list of passenger ship names. CustomClass items can be substituted into placeholders that you set in PhraseSet phrases.

A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); }

A single replacement configuration.

Message returned to the client by the ListCustomClasses method.

The response message for Operations.ListOperations.

Message returned to the client by the ListPhraseSet method.

Describes the progress of a long-running LongRunningRecognize call. It is included in the metadata field of the Operation returned by the GetOperation call of the google::longrunning::Operations service.

The top-level message sent by the client for the LongRunningRecognize method.

The only message returned to the client by the LongRunningRecognize method. It contains the result as zero or more sequential SpeechRecognitionResult messages. It is included in the result.response field of the Operation returned by the GetOperation call of the google::longrunning::Operations service.

This resource represents a long-running operation that is the result of a network API call.

A phrase containing words and phrase "hints" so that the speech recognition is more likely to recognize them. This can be used to improve the accuracy for specific words and phrases, for example, if specific commands are typically spoken by the user. This can also be used to add additional words to the vocabulary of the recognizer. See usage limits. List items can also include pre-built or custom classes containing groups of words that represent common concepts that occur in natural language. For example, rather than providing a phrase hint for every month of the year (e.g. "i was born in january", "i was born in february", ...), using the pre-built $MONTH class improves the likelihood of correctly transcribing audio that includes months (e.g. "i was born in $month"). To refer to pre-built classes, use the class's symbol prepended with `$` (e.g. `$MONTH`). To refer to custom classes that were defined inline in the request, set the class's `custom_class_id` to a string unique to all class resources and inline classes, then use the class's id wrapped in `${...}` (e.g. "${my-months}"). To refer to custom class resources, use the class's id wrapped in `${}` (e.g. `${my-months}`). Speech-to-Text supports three locations: global, us (US North America), and eu (Europe). If you are calling the speech.googleapis.com endpoint, use the global location. To specify a region, use a regional endpoint with a matching us or eu location value.
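To make the class-reference syntax above concrete, here is a sketch of the speech adaptation payload, written as a plain Elixir map mirroring the v1 REST JSON rather than the library's generated structs. The custom class id `my-months` is a hypothetical example, and the exact field names are assumptions based on the v1 REST surface.

```elixir
# Sketch: phrase hints referencing the pre-built $MONTH class and a
# hypothetical inline custom class, addressed as ${my-months}.
adaptation = %{
  "customClasses" => [
    %{
      # Hypothetical id; must be unique across class resources and
      # inline classes in the request.
      "customClassId" => "my-months",
      "items" => [%{"value" => "january"}, %{"value" => "february"}]
    }
  ],
  "phraseSets" => [
    %{
      "phrases" => [
        %{"value" => "i was born in $MONTH"},
        %{"value" => "i was born in ${my-months}"}
      ]
    }
  ]
}
```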

Provides "hints" to the speech recognizer to favor specific words and phrases in the results.

Contains audio data in the encoding specified in the RecognitionConfig. Either content or uri must be supplied. Supplying both or neither returns google.rpc.Code.INVALID_ARGUMENT. See content limits.
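Because content and uri are mutually exclusive, a request supplies exactly one of them. A minimal sketch as Elixir maps mirroring the REST JSON (the bucket and object names are hypothetical):

```elixir
# Inline audio: raw bytes are base64-encoded into "content".
inline_audio = %{"content" => Base.encode64(<<0, 1, 2, 3>>)}

# Cloud Storage audio: a gs:// URI in "uri" (hypothetical bucket/object).
gcs_audio = %{"uri" => "gs://my-bucket/audio.flac"}

# Supplying both keys, or neither, returns google.rpc.Code.INVALID_ARGUMENT,
# so each RecognitionAudio carries exactly one of them.
```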

Provides information to the recognizer that specifies how to process the request.

Description of audio data to be recognized.

The top-level message sent by the client for the Recognize method.

The only message returned to the client by the Recognize method. It contains the result as zero or more sequential SpeechRecognitionResult messages.
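A Recognize request pairs a RecognitionConfig with a RecognitionAudio. As a sketch, here is that top-level body as an Elixir map mirroring the v1 REST JSON; the encoding, sample rate, and storage URI are illustrative assumptions, and with the generated client such a body would typically be passed as the request struct rather than a raw map.

```elixir
# Sketch of a Recognize request body: how to process (config) plus
# what to process (audio).
request = %{
  "config" => %{
    "encoding" => "LINEAR16",
    "sampleRateHertz" => 16_000,
    "languageCode" => "en-US"
  },
  # Hypothetical Cloud Storage object holding the audio.
  "audio" => %{"uri" => "gs://my-bucket/audio.raw"}
}
```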

Config to enable speaker diarization.
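As a sketch, a diarization config gives the recognizer a speaker-count range to work with; the field names below follow the v1 REST surface and the counts are illustrative assumptions:

```elixir
# Sketch of a speaker diarization config: enable diarization and
# bound the number of distinct speakers the recognizer may infer.
diarization = %{
  "enableSpeakerDiarization" => true,
  "minSpeakerCount" => 2,
  "maxSpeakerCount" => 6
}
```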

Speech adaptation configuration.

Information on speech adaptation use in results.

Provides "hints" to the speech recognizer to favor specific words and phrases in the results.

Alternative hypotheses (a.k.a. n-best list).

A speech recognition result corresponding to a portion of the audio.

The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each Status message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide.

Transcription normalization configuration. Use transcription normalization to automatically replace parts of the transcript with phrases of your choosing. For StreamingRecognize, this normalization only applies to stable partial transcripts (stability > 0.8) and final transcripts.
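As a sketch, a normalization config is a list of single replacement configurations; the entry fields below (search, replace, case sensitivity) are assumptions based on the v1 REST surface, and the values are illustrative:

```elixir
# Sketch of transcription normalization: each entry rewrites one
# matched phrase in the transcript.
normalization = %{
  "entries" => [
    %{"search" => "goog", "replace" => "Google", "caseSensitive" => false}
  ]
}
```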

Specifies an optional destination for the recognition results.

Word-specific information for recognized words.