API Reference google_api_dialogflow v0.66.2

Modules

API client metadata for GoogleApi.Dialogflow.V2.

API calls for all endpoints tagged Projects.

Handle Tesla connections for GoogleApi.Dialogflow.V2.

Represents the natural speech audio to be processed.

Metadata returned for the TestCases.BatchRunTestCases long running operation.

The response message for TestCases.BatchRunTestCases.

Represents a result from running a test case in an agent environment.

One interaction between a human and virtual agent. The human provides some input and the virtual agent provides a response.

Metadata associated with the long running operation for Versions.CreateVersion.

An event handler specifies an event that can be handled during a session. When the specified event happens, the following actions are taken in order: * If there is a trigger_fulfillment associated with the event, it will be called. * If there is a target_page associated with the event, the session will transition into the specified page. * If there is a target_flow associated with the event, the session will transition into the specified flow.
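The ordered dispatch above can be sketched over a plain JSON-style payload. This is an illustrative sketch only; `apply_event_handler` and the session dict are hypothetical, not part of the client library, and the target_page/target_flow mutual exclusion is assumed from the field descriptions.

```python
def apply_event_handler(handler, session, call_fulfillment):
    """Apply an event handler's actions in the documented order."""
    # 1. The trigger_fulfillment, if present, is called first.
    if handler.get("triggerFulfillment"):
        call_fulfillment(handler["triggerFulfillment"])
    # 2. Then the session transitions to target_page, if one is set...
    if handler.get("targetPage"):
        session["currentPage"] = handler["targetPage"]
    # 3. ...otherwise to target_flow, if one is set (assumed mutually exclusive).
    elif handler.get("targetFlow"):
        session["currentFlow"] = handler["targetFlow"]
    return session
```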

Metadata returned for the TestCases.ExportTestCases long running operation. This message currently has no fields.

The response message for TestCases.ExportTestCases.

A form is a data model that groups related parameters that can be collected from the user. The process in which the agent prompts the user and collects parameter values from the user is called form filling. A form can be added to a page. When form filling is done, the filled parameters will be written to the session.

Configuration for how the filling of a parameter should be handled.

A fulfillment can do one or more of the following actions at the same time: * Generate rich message responses. * Set parameter values. * Call the webhook. Fulfillments can be called at various stages in the Page or Form lifecycle. For example, when a DetectIntentRequest drives a session to enter a new page, the page's entry fulfillment can add a static response to the QueryResult in the returning DetectIntentResponse, call the webhook (for example, to load user data from a database), or both.

A list of cascading if-else conditions. Cases are mutually exclusive. The first one with a matching condition is selected; all the rest are ignored.

Each case has a Boolean condition. When it is evaluated to be True, the corresponding messages will be selected and evaluated recursively.

The list of messages or conditional cases to activate for this case.
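The first-match rule for cascading cases can be sketched as below. `select_case` and the condition-evaluator callback are hypothetical helpers for illustration; the condition strings are invented examples, not real session expressions.

```python
def select_case(cases, evaluate):
    """Return the content of the first case whose condition evaluates true."""
    for case in cases:
        if evaluate(case["condition"]):
            return case["caseContent"]  # first matching case wins
    return None  # no condition matched; all remaining cases are ignored

cases = [
    {"condition": '$session.params.size = "small"', "caseContent": ["small reply"]},
    {"condition": "true", "caseContent": ["default reply"]},
]
```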

Metadata in google::longrunning::Operation for Knowledge operations.

Metadata returned for the TestCases.ImportTestCases long running operation.

The response message for TestCases.ImportTestCases.

Instructs the speech recognizer on how to process the audio content.

An intent represents a user's intent to interact with a conversational agent. You can provide information for the Dialogflow API to use to match user input to an intent by adding training phrases (i.e., examples of user input) to your intent.

Represents the intent to trigger programmatically rather than as a result of natural language processing.

Represents an example that the agent is trained on to identify the intent.

A Dialogflow CX conversation (session) can be described and visualized as a state machine. The states of a CX session are represented by pages. For each flow, you define many pages, where your combined pages can handle a complete conversation on the topics the flow is designed for. At any given moment, exactly one page is the current page, the current page is considered active, and the flow associated with that page is considered active. Every flow has a special start page. When a flow initially becomes active, the start page becomes the current page. For each conversational turn, the current page will either stay the same or transition to another page. You configure each page to collect information from the end-user that is relevant for the conversational state represented by the page. For more information, see the Page guide.

Represents page information communicated to and from the webhook.

Represents the query input. It can contain one of: 1. A conversational query in the form of text. 2. An intent query that specifies which intent to trigger. 3. Natural language speech audio to be processed. 4. An event to be triggered.
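The "one of" constraint described above can be mirrored for a plain JSON payload. This is a hypothetical validation sketch; the field names (text, intent, audio, event) are assumed from the description, and `validate_query_input` is not part of the client library.

```python
# Exactly one input kind may be set on a QueryInput (assumed field names).
ONEOF_FIELDS = {"text", "intent", "audio", "event"}

def validate_query_input(query_input):
    present = ONEOF_FIELDS & query_input.keys()
    if len(present) != 1:
        raise ValueError(f"expected exactly one input kind, got {sorted(present)}")
    return query_input

query_input = validate_query_input(
    {"languageCode": "en", "text": {"text": "book a table for two"}}
)
```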

Represents a response message that can be returned by a conversational agent. Response messages are also used for output audio synthesis. The approach is as follows: * If at least one OutputAudioText response is present, then all OutputAudioText responses are linearly concatenated, and the result is used for output audio synthesis. If the OutputAudioText responses are a mixture of text and SSML, then the concatenated result is treated as SSML; otherwise, the result is treated as either text or SSML as appropriate. The agent designer should ideally use either text or SSML consistently throughout the bot design. * Otherwise, all Text responses are linearly concatenated, and the result is used for output audio synthesis. This approach allows for more sophisticated user experience scenarios, where the text displayed to the user may differ from what is heard.
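The selection rule above (OutputAudioText wins over Text) can be sketched as follows. This is a simplified, hypothetical reading of the rule over JSON-style payloads; the real SSML/text mixing behavior is more involved, and `concat_for_audio` is not a library function.

```python
def concat_for_audio(responses):
    """Pick the text used for output audio synthesis, per the documented rule."""
    # If any OutputAudioText responses are present, only those are concatenated.
    oat = [r["outputAudioText"] for r in responses if "outputAudioText" in r]
    if oat:
        return " ".join(seg.get("text") or seg.get("ssml", "") for seg in oat)
    # Otherwise all Text responses are linearly concatenated.
    texts = [r["text"] for r in responses if "text" in r]
    return " ".join(t for text_msg in texts for t in text_msg.get("text", []))
```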

Indicates that the conversation succeeded, i.e., the bot handled the issue that the customer talked to it about. Dialogflow only uses this to determine which conversations should be counted as successful and doesn't process the metadata in this message in any way. Note that Dialogflow also considers conversations that get to the conversation end page as successful even if they don't return ConversationSuccess. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates that the conversation succeeded. * In a webhook response when you determine that you handled the customer issue.

Indicates that interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only and not supposed to be defined by the user.

Indicates that the conversation should be handed off to a live agent. Dialogflow only uses this to determine which conversations were handed off to a human agent for measurement purposes. What else to do with this signal is up to you and your handoff procedures. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates something went extremely wrong in the conversation. * In a webhook response when you determine that the customer issue can only be handled by a human.

Represents an audio message that is composed of both segments synthesized from the Dialogflow agent prompts and ones hosted externally at the specified URIs. The external URIs are specified via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user.

A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message.

Specifies an audio clip to be played by the client as part of the response.

Metadata returned for the Environments.RunContinuousTest long running operation.

The response message for Environments.RunContinuousTest.

Metadata returned for the TestCases.RunTestCase long running operation. This message currently has no fields.

The response message for TestCases.RunTestCase.

Represents session information communicated to and from the webhook.

Represents a result from running a test case in an agent environment.

Represents configurations for a test case.

The description of differences between original and replayed agent output.

Represents the natural language text to be processed.

A transition route specifies an intent that can be matched and/or a data condition that can be evaluated during a session. When a specified transition is matched, the following actions are taken in order: * If there is a trigger_fulfillment associated with the transition, it will be called. * If there is a target_page associated with the transition, the session will transition into the specified page. * If there is a target_flow associated with the transition, the session will transition into the specified flow.

The request message for a webhook call. The request is sent as a JSON object and the field names are presented in camelCase.
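Since the proto field names are snake_case but the webhook request JSON uses camelCase, a small converter clarifies the mapping. `to_camel` is an illustrative helper, not part of the client library.

```python
def to_camel(name):
    """Convert a proto-style snake_case field name to its JSON camelCase form."""
    head, *rest = name.split("_")
    return head + "".join(part.capitalize() for part in rest)

to_camel("fulfillment_info")  # → "fulfillmentInfo"
```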

Represents fulfillment information communicated to the webhook.

Represents intent information communicated to the webhook.

Represents the natural speech audio to be processed.

Metadata returned for the TestCases.BatchRunTestCases long running operation.

Represents a result from running a test case in an agent environment.

One interaction between a human and virtual agent. The human provides some input and the virtual agent provides a response.

Metadata associated with the long running operation for Versions.CreateVersion.

An event handler specifies an event that can be handled during a session. When the specified event happens, the following actions are taken in order: * If there is a trigger_fulfillment associated with the event, it will be called. * If there is a target_page associated with the event, the session will transition into the specified page. * If there is a target_flow associated with the event, the session will transition into the specified flow.

Metadata returned for the TestCases.ExportTestCases long running operation. This message currently has no fields.

A form is a data model that groups related parameters that can be collected from the user. The process in which the agent prompts the user and collects parameter values from the user is called form filling. A form can be added to a page. When form filling is done, the filled parameters will be written to the session.

Configuration for how the filling of a parameter should be handled.

A fulfillment can do one or more of the following actions at the same time: * Generate rich message responses. * Set parameter values. * Call the webhook. Fulfillments can be called at various stages in the Page or Form lifecycle. For example, when a DetectIntentRequest drives a session to enter a new page, the page's entry fulfillment can add a static response to the QueryResult in the returning DetectIntentResponse, call the webhook (for example, to load user data from a database), or both.

A list of cascading if-else conditions. Cases are mutually exclusive. The first one with a matching condition is selected; all the rest are ignored.

Each case has a Boolean condition. When it is evaluated to be True, the corresponding messages will be selected and evaluated recursively.

The list of messages or conditional cases to activate for this case.

Metadata in google::longrunning::Operation for Knowledge operations.

Metadata returned for the TestCases.ImportTestCases long running operation.

Instructs the speech recognizer on how to process the audio content.

An intent represents a user's intent to interact with a conversational agent. You can provide information for the Dialogflow API to use to match user input to an intent by adding training phrases (i.e., examples of user input) to your intent.

Represents the intent to trigger programmatically rather than as a result of natural language processing.

Represents an example that the agent is trained on to identify the intent.

A Dialogflow CX conversation (session) can be described and visualized as a state machine. The states of a CX session are represented by pages. For each flow, you define many pages, where your combined pages can handle a complete conversation on the topics the flow is designed for. At any given moment, exactly one page is the current page, the current page is considered active, and the flow associated with that page is considered active. Every flow has a special start page. When a flow initially becomes active, the start page becomes the current page. For each conversational turn, the current page will either stay the same or transition to another page. You configure each page to collect information from the end-user that is relevant for the conversational state represented by the page. For more information, see the Page guide.

Represents page information communicated to and from the webhook.

Represents the query input. It can contain one of: 1. A conversational query in the form of text. 2. An intent query that specifies which intent to trigger. 3. Natural language speech audio to be processed. 4. An event to be triggered.

Represents a response message that can be returned by a conversational agent. Response messages are also used for output audio synthesis. The approach is as follows: * If at least one OutputAudioText response is present, then all OutputAudioText responses are linearly concatenated, and the result is used for output audio synthesis. If the OutputAudioText responses are a mixture of text and SSML, then the concatenated result is treated as SSML; otherwise, the result is treated as either text or SSML as appropriate. The agent designer should ideally use either text or SSML consistently throughout the bot design. * Otherwise, all Text responses are linearly concatenated, and the result is used for output audio synthesis. This approach allows for more sophisticated user experience scenarios, where the text displayed to the user may differ from what is heard.

Indicates that the conversation succeeded, i.e., the bot handled the issue that the customer talked to it about. Dialogflow only uses this to determine which conversations should be counted as successful and doesn't process the metadata in this message in any way. Note that Dialogflow also considers conversations that get to the conversation end page as successful even if they don't return ConversationSuccess. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates that the conversation succeeded. * In a webhook response when you determine that you handled the customer issue.

Indicates that interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only and not supposed to be defined by the user.

Indicates that the conversation should be handed off to a live agent. Dialogflow only uses this to determine which conversations were handed off to a human agent for measurement purposes. What else to do with this signal is up to you and your handoff procedures. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates something went extremely wrong in the conversation. * In a webhook response when you determine that the customer issue can only be handled by a human.

Represents an audio message that is composed of both segments synthesized from the Dialogflow agent prompts and ones hosted externally at the specified URIs. The external URIs are specified via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user.

A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message.

Specifies an audio clip to be played by the client as part of the response.

Metadata returned for the Environments.RunContinuousTest long running operation.

The response message for Environments.RunContinuousTest.

Metadata returned for the TestCases.RunTestCase long running operation. This message currently has no fields.

Represents session information communicated to and from the webhook.

Represents a result from running a test case in an agent environment.

The description of differences between original and replayed agent output.

Represents the natural language text to be processed.

A transition route specifies an intent that can be matched and/or a data condition that can be evaluated during a session. When a specified transition is matched, the following actions are taken in order: * If there is a trigger_fulfillment associated with the transition, it will be called. * If there is a target_page associated with the transition, the session will transition into the specified page. * If there is a target_flow associated with the transition, the session will transition into the specified flow.

The request message for a webhook call. The request is sent as a JSON object and the field names are presented in camelCase.

Represents fulfillment information communicated to the webhook.

Represents intent information communicated to the webhook.

A Dialogflow agent is a virtual agent that handles conversations with your end-users. It is a natural language understanding module that understands the nuances of human language. Dialogflow translates end-user text or audio during a conversation to structured data that your apps and services can understand. You design and build a Dialogflow agent to handle the types of conversations required for your system. For more information about agents, see the Agent guide.

Represents a record of a human agent assist answer.

The request message for Participants.AnalyzeContent.

The response message for Participants.AnalyzeContent.

Represents a part of a message possibly annotated with an entity. The part can be an entity or purely a part of the message between two entities or message start/end.

Represents feedback the customer has about the quality & correctness of a certain answer in a conversation.

Answer records are records to manage answer history and feedback for Dialogflow. Currently, an answer record includes: - human agent assistant article suggestion - human agent assistant FAQ article. It doesn't include: - DetectIntent intent matching - DetectIntent knowledge. Answer records are not related to the conversation history in the Dialogflow Console. A record is generated even when the end-user disables conversation history in the console. Records are created when there's a human agent assistant suggestion generated. A typical workflow for customers to provide feedback on an answer is: 1. For human agent assistant, customers get suggestions via the ListSuggestions API. Together with the answers, AnswerRecord.name is returned to the customers. 2. The customer uses the AnswerRecord.name to call the UpdateAnswerRecord method to send feedback about a specific answer that they believe is wrong.

Defines the Automated Agent to connect to a conversation.

Represents a response from an automated agent.

The request message for EntityTypes.BatchCreateEntities.

The request message for EntityTypes.BatchDeleteEntities.

The request message for EntityTypes.BatchDeleteEntityTypes.

The request message for Intents.BatchDeleteIntents.

The request message for EntityTypes.BatchUpdateEntities.

The request message for EntityTypes.BatchUpdateEntityTypes.

The response message for EntityTypes.BatchUpdateEntityTypes.

Attributes

  • intentBatchInline (type: GoogleApi.Dialogflow.V2.Model.GoogleCloudDialogflowV2IntentBatch.t, default: nil) - The collection of intents to update or create.
  • intentBatchUri (type: String.t, default: nil) - The URI to a Google Cloud Storage file containing intents to update or create. The file format can either be a serialized proto (of IntentBatch type) or JSON object. Note: The URI must start with "gs://".
  • intentView (type: String.t, default: nil) - Optional. The resource view to apply to the returned intent.
  • languageCode (type: String.t, default: nil) - Optional. The language used to access language-specific data. If not specified, the agent's default language is used. For more information, see Multilingual intent and entity data.
  • updateMask (type: String.t, default: nil) - Optional. The mask to control which fields get updated.
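The attributes above are mutually constrained: the intent source should be supplied as either intentBatchInline or intentBatchUri (assumed to be a one-of pair, per the field descriptions), and a URI must use the gs:// scheme. A hypothetical validation sketch; `validate_batch_update_body` is not part of the client library.

```python
def validate_batch_update_body(body):
    """Check the documented constraints on a BatchUpdateIntents request body."""
    sources = [k for k in ("intentBatchInline", "intentBatchUri") if body.get(k)]
    if len(sources) != 1:
        raise ValueError("provide exactly one of intentBatchInline or intentBatchUri")
    uri = body.get("intentBatchUri")
    if uri and not uri.startswith("gs://"):
        raise ValueError('intentBatchUri must start with "gs://"')
    return body
```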

The response message for Intents.BatchUpdateIntents.

The request message for Conversations.CompleteConversation.

Dialogflow contexts are similar to natural language context. If a person says to you "they are orange", you need context in order to understand what "they" is referring to. Similarly, for Dialogflow to handle an end-user expression like that, it needs to be provided with context in order to correctly match an intent. Using contexts, you can control the flow of a conversation. You can configure contexts for an intent by setting input and output contexts, which are identified by string names. When an intent is matched, any configured output contexts for that intent become active. While any contexts are active, Dialogflow is more likely to match intents that are configured with input contexts that correspond to the currently active contexts. For more information about context, see the Contexts guide.

Represents a conversation. A conversation is an interaction between an agent, including live agents and Dialogflow agents, and a support customer. Conversations can include phone calls and text-based chat sessions.

Represents a notification sent to Pub/Sub subscribers for conversation lifecycle events.

Represents a phone number for telephony integration. It allows for connecting a particular conversation over telephony.

Defines the services to connect to incoming Dialogflow conversations.

The message returned from the DetectIntent method.

A knowledge document to be used by a KnowledgeBase. For more information, see the knowledge base guide. Note: The projects.agent.knowledgeBases.documents resource is deprecated; only use projects.knowledgeBases.documents.

The message in the response that indicates the parameters of DTMF.

Each intent parameter has a type, called the entity type, which dictates exactly how data from an end-user expression is extracted. Dialogflow provides predefined system entities that can match many common types of data. For example, there are system entities for matching dates, times, colors, email addresses, and so on. You can also create your own custom entities for matching custom data. For example, you could define a vegetable entity that can match the types of vegetables available for purchase with a grocery store agent. For more information, see the Entity guide.

This message is a wrapper around a collection of entity types.

An entity entry for an associated entity type.

You can create multiple versions of your agent and publish them to separate environments. When you edit an agent, you are editing the draft agent. At any point, you can save the draft agent as an agent version, which is an immutable snapshot of your agent. When you save the draft agent, it is published to the default environment. When you create agent versions, you can publish them to custom environments. You can create a variety of custom environments for testing, development, production, and so on. For more information, see the versions and environments guide.

The response message for Environments.GetEnvironmentHistory.

Events allow for matching intents by event name instead of the natural language input. For instance, input `` can trigger a personalized welcome response. The parameter name may be used by the agent in the response: "Hello #welcome_event.name! What can I do for you today?".

Attributes

  • languageCode (type: String.t, default: nil) - Required. The language of this query. See Language Support (https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.
  • name (type: String.t, default: nil) - Required. The unique identifier of the event.
  • parameters (type: map(), default: nil) - The collection of parameters associated with the event. Depending on your protocol or client library language, this is a map, associative array, symbol table, dictionary, or JSON object composed of a collection of (MapKey, MapValue) pairs: - MapKey type: string - MapKey value: parameter name - MapValue type: - If parameter's entity type is a composite entity: map - Else: depending on parameter value type, could be one of string, number, boolean, null, list or map - MapValue value: - If parameter's entity type is a composite entity: map from composite entity property names to property values - Else: parameter value
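The parameters shape described above can be illustrated with a plain JSON-style payload. All values here are invented examples; the composite-entity parameter (appointment) is a hypothetical illustration of the map-of-property-names case.

```python
event_input = {
    "name": "welcome_event",      # Required: the unique event identifier
    "languageCode": "en-US",      # Required: the language of this query
    "parameters": {
        # Simple parameter: plain value
        "name": "Sam",
        # Composite-entity parameter: map from property names to values
        "appointment": {"date": "2024-07-01", "time": "10:00"},
    },
}
```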

Represents answer from "frequently asked questions".

By default, your agent responds to a matched intent with a static response. As an alternative, you can provide a more dynamic response by using fulfillment. When you enable fulfillment for an intent, Dialogflow responds to that intent by calling a service that you define. For example, if an end-user wants to schedule a haircut on Friday, your service can check your database and respond to the end-user with availability information for Friday. For more information, see the fulfillment guide.

Whether fulfillment is enabled for the specific feature.

Represents configuration for a generic web service. Dialogflow supports two mechanisms for authentication: - Basic authentication with username and password. - Authentication with additional authentication headers. More information can be found at: https://cloud.google.com/dialogflow/docs/fulfillment-configure.

Defines the Human Agent Assist to connect to a conversation.

Custom conversation models used in the Agent Assist feature. Supported features: ARTICLE_SUGGESTION, SMART_COMPOSE, SMART_REPLY.

Configuration for analyses to run on each conversation message.

Settings that determine how to filter recent conversation context when generating suggestions.

Represents a notification sent to Cloud Pub/Sub subscribers for human agent assistant events in a specific conversation.

Defines the hand off to a live agent, typically on which external agent service provider to connect to a conversation. Currently, this feature is not generally available; please contact Google to get access.

Instructs the speech recognizer how to process the audio content.

An intent categorizes an end-user's intention for one conversation turn. For each agent, you define many intents, where your combined intents can handle a complete conversation. When an end-user writes or says something, referred to as an end-user expression or end-user input, Dialogflow matches the end-user input to the best intent in your agent. Matching an intent is also known as intent classification. For more information, see the intent guide.

This message is a wrapper around a collection of intents.

Represents a single followup intent in the chain.

A rich response message. Corresponds to the intent Response field in the Dialogflow console. For more information, see Rich response messages.

The basic card message. Useful for displaying information.

The button object that appears at the bottom of a card.

The card for presenting a carousel of options to select from.

The suggestion chip message that allows the user to jump out to the app or website associated with this agent.

The card for presenting a list of options to select from.

Additional info about the select item for when it is triggered in a dialog.

The simple response message containing speech or text.

The collection of simple response candidates. This message in QueryResult.fulfillment_messages and WebhookResponse.fulfillment_messages should contain only one SimpleResponse.

The suggestion chip message that the user can tap to quickly post a reply to the conversation.

Represents an example that the agent is trained on.

A knowledge base represents a collection of knowledge documents that you provide to Dialogflow. Your knowledge documents contain information that may be useful during conversations with end-users. Some Dialogflow features use knowledge bases when looking for a response to an end-user input. For more information, see the knowledge base guide. Note: The projects.agent.knowledgeBases resource is deprecated; only use projects.knowledgeBases.

Metadata in google::longrunning::Operation for Knowledge operations.

Response message for AnswerRecords.ListAnswerRecords.

The response message for Contexts.ListContexts.

The response message for ConversationProfiles.ListConversationProfiles.

The response message for Conversations.ListConversations.

The response message for EntityTypes.ListEntityTypes.

The response message for Environments.ListEnvironments.

The response message for Intents.ListIntents.

Response message for KnowledgeBases.ListKnowledgeBases.

The response message for Conversations.ListMessages.

The response message for Participants.ListParticipants.

The response message for SessionEntityTypes.ListSessionEntityTypes.

The response message for Versions.ListVersions.

Defines logging behavior for conversation lifecycle events.

Represents a message posted into a conversation.

Represents the result of annotation for the message.

Represents the contents of the original request that was passed to the [Streaming]DetectIntent call.

Represents the natural language speech audio to be played to the end user.

Instructs the speech synthesizer on how to generate the output audio content. If this audio config is supplied in a request, it overrides all existing text-to-speech settings applied to the agent.

Represents a conversation participant (human agent, virtual agent, end-user).

Represents the query input. It can contain either: 1. An audio config which instructs the speech recognizer how to process the speech audio. 2. A conversational query in the form of text. 3. An event that specifies which intent to trigger.

Represents the parameters of the conversational query.

Represents the result of conversational query or event processing.

The sentiment, such as positive/negative feeling or association, for a unit of analysis, such as the query text.

Configures the types of sentiment analysis to perform.

The result of sentiment analysis. Sentiment analysis inspects user input and identifies the prevailing subjective opinion, especially to determine a user's attitude as positive, negative, or neutral. For Participants.DetectIntent, it needs to be configured in DetectIntentRequest.query_params. For Participants.StreamingDetectIntent, it needs to be configured in StreamingDetectIntentRequest.query_params. And for Participants.AnalyzeContent and Participants.StreamingAnalyzeContent, it needs to be configured in ConversationProfile.human_agent_assistant_config.

A session represents a conversation between a Dialogflow agent and an end-user. You can create special entities, called session entities, during a session. Session entities can extend or replace custom entity types and only exist during the session that they were created for. All session data, including session entities, is stored by Dialogflow for 20 minutes. For more information, see the session entity guide.

Hints for the speech recognizer to help with recognition in a specific conversation state.

Configures speech transcription for ConversationProfile.

The request message for Participants.SuggestArticles.

The response message for Participants.SuggestArticles.

The request message for Participants.SuggestFaqAnswers.

The response message for Participants.SuggestFaqAnswers.

The type of Human Agent Assistant API suggestion to perform, and the maximum number of results to return for that type. Multiple Feature objects can be specified in the features list.

One response of a different type of suggestion response, used in the response of Participants.AnalyzeContent and Participants.StreamingAnalyzeContent, as well as HumanAgentAssistantEvent.

Configuration of how speech should be synthesized.

Represents the natural language text to be processed.

Instructs the speech synthesizer on how to generate the output audio content.

You can create multiple versions of your agent and publish them to separate environments. When you edit an agent, you are editing the draft agent. At any point, you can save the draft agent as an agent version, which is an immutable snapshot of your agent. When you save the draft agent, it is published to the default environment. When you create agent versions, you can publish them to custom environments. You can create a variety of custom environments for testing, development, production, and so on. For more information, see the versions and environments guide.

Description of which voice to use for speech synthesis.

The response message for a webhook call. This response is validated by the Dialogflow server. If validation fails, an error will be returned in the QueryResult.diagnostic_info field. Setting JSON fields to an empty value with the wrong type is a common error. To avoid this error: - Use "" for empty strings - Use {} or null for empty objects - Use [] or null for empty arrays For more information, see the Protocol Buffers Language Guide.
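As a hypothetical illustration of the empty-value rules above, the following Python sketch builds a webhook response body (the field names follow the Dialogflow WebhookResponse schema; the surrounding structure is illustrative only, not a complete response):

```python
import json

# Illustrates the empty-value rules described above: each field is set to the
# empty value matching its declared type, never to a value of the wrong type.
response = {
    "fulfillmentText": "",   # empty string: use "", not null or 0
    "payload": {},           # empty object: {} (or null) is valid
    "outputContexts": [],    # empty array: [] (or null) is valid
}

body = json.dumps(response)
print(body)
```

Sending, say, `"payload": ""` instead of `{}` is exactly the type mismatch the server-side validation rejects.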

Represents a part of a message possibly annotated with an entity. The part can be an entity or purely a part of the message between two entities or message start/end.

The response message for EntityTypes.BatchUpdateEntityTypes.

Dialogflow contexts are similar to natural language context. If a person says to you "they are orange", you need context in order to understand what "they" is referring to. Similarly, for Dialogflow to handle an end-user expression like that, it needs to be provided with context in order to correctly match an intent. Using contexts, you can control the flow of a conversation. You can configure contexts for an intent by setting input and output contexts, which are identified by string names. When an intent is matched, any configured output contexts for that intent become active. While any contexts are active, Dialogflow is more likely to match intents that are configured with input contexts that correspond to the currently active contexts. For more information about context, see the Contexts guide.

Represents a notification sent to Pub/Sub subscribers for conversation lifecycle events.

Each intent parameter has a type, called the entity type, which dictates exactly how data from an end-user expression is extracted. Dialogflow provides predefined system entities that can match many common types of data. For example, there are system entities for matching dates, times, colors, email addresses, and so on. You can also create your own custom entities for matching custom data. For example, you could define a vegetable entity that can match the types of vegetables available for purchase with a grocery store agent. For more information, see the Entity guide.

Events allow for matching intents by event name instead of the natural language input. For instance, input `<event: { name: "welcome_event", parameters: { name: "Sam" } }>` can trigger a personalized welcome response. The parameter `name` may be used by the agent in the response: `"Hello #welcome_event.name! What can I do for you today?"`.

## Attributes

*   `languageCode` (*type:* `String.t`, *default:* `nil`) - Required. The language of this query. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.
*   `name` (*type:* `String.t`, *default:* `nil`) - Required. The unique identifier of the event.
*   `parameters` (*type:* `map()`, *default:* `nil`) - The collection of parameters associated with the event. Depending on your protocol or client library language, this is a map, associative array, symbol table, dictionary, or JSON object composed of a collection of (MapKey, MapValue) pairs: - MapKey type: string - MapKey value: parameter name - MapValue type: - If parameter's entity type is a composite entity: map - Else: depending on parameter value type, could be one of string, number, boolean, null, list or map - MapValue value: - If parameter's entity type is a composite entity: map from composite entity property names to property values - Else: parameter value
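To make the parameter map shape above concrete, here is a hypothetical event payload sketched in Python (the `appointment` composite parameter and its property names are invented for illustration):

```python
# MapKey is the parameter name; MapValue is a scalar for simple entity types,
# or a nested map for composite entity types, as described above.
event = {
    "name": "welcome_event",
    "languageCode": "en-US",
    "parameters": {
        "name": "Sam",                # simple entity type: plain string value
        "appointment": {              # composite entity type: map from
            "date": "2024-05-01",     # property names to property values
            "time": "10:30",
        },
    },
}

# The agent response template "#welcome_event.name" would resolve like this:
greeting = "Hello " + event["parameters"]["name"] + "!"
print(greeting)  # Hello Sam!
```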

Represents an answer from "frequently asked questions".

Output only. Represents a notification sent to Pub/Sub subscribers for agent assistant events in a specific conversation.

An intent categorizes an end-user's intention for one conversation turn. For each agent, you define many intents, where your combined intents can handle a complete conversation. When an end-user writes or says something, referred to as an end-user expression or end-user input, Dialogflow matches the end-user input to the best intent in your agent. Matching an intent is also known as intent classification. For more information, see the intent guide.

Corresponds to the Response field in the Dialogflow console.

The basic card message. Useful for displaying information.

The button object that appears at the bottom of a card.

The card for presenting a carousel of options to select from.

The suggestion chip message that allows the user to jump out to the app or website associated with this agent.

The card for presenting a list of options to select from.

Rich Business Messaging (RBM) media displayed in cards. The following media types are currently supported: Image types: image/jpeg, image/jpg, image/gif, image/png. Video types: video/h263, video/m4v, video/mp4, video/mpeg, video/mpeg4, video/webm.

Carousel Rich Business Messaging (RBM) rich card. Rich cards allow you to respond to users with more vivid content, e.g. with media and suggestions. If you want to show a single card with more control over the layout, please use RbmStandaloneCard instead.

Standalone Rich Business Messaging (RBM) rich card. Rich cards allow you to respond to users with more vivid content, e.g. with media and suggestions. You can group multiple rich cards into one using RbmCarouselCard but carousel cards will give you less control over the card layout.

Rich Business Messaging (RBM) suggested client-side action that the user can choose from the card.

Opens the user's default dialer app with the specified phone number but does not dial automatically.

Opens the user's default web browser app to the specified URI. If the user has an app installed that is registered as the default handler for the URL, then this app will be opened instead, and its icon will be used in the suggested action UI.

Opens the device's location chooser so the user can pick a location to send back to the agent.

Rich Business Messaging (RBM) suggested reply that the user can click instead of typing in their own response.

Rich Business Messaging (RBM) suggestion. Suggestions allow the user to easily select/click a predefined response or perform an action (like opening a web URI).

Rich Business Messaging (RBM) text response with suggestions.

Additional info about the select item for when it is triggered in a dialog.

The simple response message containing speech or text.

The collection of simple response candidates. This message in QueryResult.fulfillment_messages and WebhookResponse.fulfillment_messages should contain only one SimpleResponse.

The suggestion chip message that the user can tap to quickly post a reply to the conversation.

Synthesizes speech and plays back the synthesized audio to the caller in Telephony Gateway. Telephony Gateway takes the synthesizer settings from DetectIntentResponse.output_audio_config which can either be set at request-level or can come from the agent-level synthesizer config.

Represents an example that the agent is trained on.

Represents the result of querying a Knowledge base.

Metadata in google::longrunning::Operation for Knowledge operations.

Represents a message posted into a conversation.

Represents the result of annotation for the message.

Represents the contents of the original request that was passed to the [Streaming]DetectIntent call.

Represents the result of conversational query or event processing.

The sentiment, such as positive/negative feeling or association, for a unit of analysis, such as the query text.

The result of sentiment analysis. Sentiment analysis inspects user input and identifies the prevailing subjective opinion, especially to determine a user's attitude as positive, negative, or neutral. For Participants.DetectIntent, it needs to be configured in DetectIntentRequest.query_params. For Participants.StreamingDetectIntent, it needs to be configured in StreamingDetectIntentRequest.query_params. And for Participants.AnalyzeContent and Participants.StreamingAnalyzeContent, it needs to be configured in ConversationProfile.human_agent_assistant_config.

A session represents a conversation between a Dialogflow agent and an end-user. You can create special entities, called session entities, during a session. Session entities can extend or replace custom entity types and only exist during the session that they were created for. All session data, including session entities, is stored by Dialogflow for 20 minutes. For more information, see the session entity guide.

The response message for Participants.SuggestArticles.

The request message for Participants.SuggestFaqAnswers.

The response message for Participants.SuggestSmartReplies.

One suggestion response of a given type, used in the response of Participants.AnalyzeContent and Participants.StreamingAnalyzeContent, as well as in HumanAgentAssistantEvent.

The response message for a webhook call. This response is validated by the Dialogflow server. If validation fails, an error will be returned in the QueryResult.diagnostic_info field. Setting JSON fields to an empty value with the wrong type is a common error. To avoid this error: - Use "" for empty strings - Use {} or null for empty objects - Use [] or null for empty arrays For more information, see the Protocol Buffers Language Guide.

Metadata in google::longrunning::Operation for Knowledge operations.

The response message for Locations.ListLocations.

A resource that represents a Google Cloud Platform location.

The response message for Operations.ListOperations.

This resource represents a long-running operation that is the result of a network API call.

A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); } The JSON representation for Empty is an empty JSON object {}.

The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each Status message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide.
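A minimal sketch of the three-part Status shape described above, in Python (the code value and message text are invented for illustration; canonical codes are defined in the google.rpc.Code enum):

```python
# A Status message carries an error code, a developer-facing message,
# and a list of detail messages.
status = {
    "code": 3,  # e.g. INVALID_ARGUMENT in the google.rpc.Code enum
    "message": "Intent display_name must not be empty.",
    "details": [],  # optional structured detail messages
}

print(status["code"], status["message"])
```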

An object that represents a latitude/longitude pair. This is expressed as a pair of doubles to represent degrees latitude and degrees longitude. Unless specified otherwise, this object must conform to the WGS84 standard. Values must be within normalized ranges.
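The normalized ranges mentioned above can be checked with a small helper; this is a hypothetical validation sketch, not part of the library's API:

```python
# Normalized WGS84 ranges: latitude in [-90.0, +90.0],
# longitude in [-180.0, +180.0].
def is_normalized_lat_lng(latitude: float, longitude: float) -> bool:
    return -90.0 <= latitude <= 90.0 and -180.0 <= longitude <= 180.0

print(is_normalized_lat_lng(37.42, -122.08))  # a valid pair: True
print(is_normalized_lat_lng(100.0, 0.0))      # latitude out of range: False
```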