API Reference google_api_dialogflow v0.88.3

Modules

API client metadata for GoogleApi.Dialogflow.V2.

API calls for all endpoints tagged Projects.

Handle Tesla connections for GoogleApi.Dialogflow.V2.
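
The typical workflow is to build a Tesla connection and pass it as the first argument to the functions in the Projects API module. A minimal sketch, assuming the access token is fetched with Goth (the `MyApp.Goth` name is hypothetical):

```elixir
alias GoogleApi.Dialogflow.V2.Connection

# Fetch an OAuth2 access token (Goth is one common way to do this).
{:ok, %{token: token}} = Goth.fetch(MyApp.Goth)

# Build a Tesla client with the bearer token attached; pass `conn` as the
# first argument to the generated functions in GoogleApi.Dialogflow.V2.Api.Projects.
conn = Connection.new(token)
```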

Hierarchical advanced settings for agent/flow/page/fulfillment/parameter. Settings exposed at a lower level override the settings exposed at a higher level. Overriding occurs at the sub-setting level. For example, the playback_interruption_settings at the fulfillment level only override the playback_interruption_settings at the agent level, leaving other settings at the agent level unchanged. DTMF settings do not override each other; DTMF settings set at different levels define DTMF detections running in parallel. Hierarchy: Agent->Flow->Page->Fulfillment/Parameter.

Define behaviors for DTMF (dual tone multi frequency).

Represents the natural speech audio to be processed.

Configuration of the barge-in behavior. Barge-in instructs the API to return a detected utterance at a proper time while the client is playing back the response audio from a previous request. When the client sees the utterance, it should stop the playback and immediately prepare to receive the responses for the current request. Barge-in handling requires the client to start streaming audio input as soon as it starts playing back the audio from the previous response. The playback is modeled into two phases: a no-barge-in phase, which comes first and during which speech detection should not be carried out, and a barge-in phase, which follows the no-barge-in phase and during which the API starts speech detection and may inform the client that an utterance has been detected. Note that a no-speech event is not expected in this phase. The client provides this configuration in terms of the durations of those two phases; the durations are measured from the start of the input audio. A no-speech event is a response with END_OF_UTTERANCE without any transcript following up.
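
For illustration, a barge-in configuration is just two durations measured from the start of the input audio. A sketch of the JSON shape as a plain Elixir map, assuming the BargeInConfig fields noBargeInDuration and totalDuration, with illustrative values:

```elixir
# Suppress speech detection for the first 5 seconds of playback, then allow
# barge-in for the rest of a 30-second window.
barge_in_config = %{
  "noBargeInDuration" => "5s",
  "totalDuration" => "30s"
}
```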

Metadata returned for the TestCases.BatchRunTestCases long running operation.

The response message for TestCases.BatchRunTestCases.

Represents a result from running a test case in an agent environment.

This message is used to hold all the Conversation Signals data, which will be converted to JSON and exported to BigQuery.

One interaction between a human and virtual agent. The human provides some input and the virtual agent provides a response.

Metadata associated with the long running operation for Versions.CreateVersion.

A data store connection. It represents a data store in Discovery Engine and the type of the contents it contains.

Metadata returned for the Environments.DeployFlow long running operation.

The response message for Environments.DeployFlow.

Represents an environment for an agent. You can create multiple versions of your agent and publish them to separate environments. When you edit an agent, you are editing the draft agent. At any point, you can save the draft agent as an agent version, which is an immutable snapshot of your agent. When you save the draft agent, it is published to the default environment. When you create agent versions, you can publish them to custom environments. You can create a variety of custom environments for testing, development, production, etc.

An event handler specifies an event that can be handled during a session. When the specified event happens, the following actions are taken in order: if there is a trigger_fulfillment associated with the event, it will be called; if there is a target_page associated with the event, the session will transition into the specified page; if there is a target_flow associated with the event, the session will transition into the specified flow.

Metadata returned for the EntityTypes.ExportEntityTypes long running operation.

The response message for EntityTypes.ExportEntityTypes.

Metadata returned for the Intents.ExportIntents long running operation.

Metadata returned for the TestCases.ExportTestCases long running operation. This message currently has no fields.

The response message for TestCases.ExportTestCases.

A form is a data model that groups related parameters that can be collected from the user. The process in which the agent prompts the user and collects parameter values from the user is called form filling. A form can be added to a page. When form filling is done, the filled parameters will be written to the session.

Configuration for how the filling of a parameter should be handled.

A fulfillment can do one or more of the following actions at the same time: generate rich message responses, set parameter values, or call the webhook. Fulfillments can be called at various stages in the Page or Form lifecycle. For example, when a DetectIntentRequest drives a session to enter a new page, the page's entry fulfillment can add a static response to the QueryResult in the returning DetectIntentResponse, call the webhook (for example, to load user data from a database), or both.

A list of cascading if-else conditions. Cases are mutually exclusive. The first one with a matching condition is selected, all the rest ignored.

Each case has a Boolean condition. When it is evaluated to be True, the corresponding messages will be selected and evaluated recursively.

The list of messages or conditional cases to activate for this case.

Google Cloud Storage location for a Dialogflow operation that writes or exports objects (e.g. exported agent or transcripts) outside of Dialogflow.

Metadata returned for the EntityTypes.ImportEntityTypes long running operation.

The response message for EntityTypes.ImportEntityTypes.

Conflicting resources detected during the import process. Only filled when REPORT_CONFLICT is set in the request and there are conflicts in the display names.

Metadata returned for the Intents.ImportIntents long running operation.

Conflicting resources detected during the import process. Only filled when REPORT_CONFLICT is set in the request and there are conflicts in the display names.

Metadata returned for the TestCases.ImportTestCases long running operation.

The response message for TestCases.ImportTestCases.

Inline destination for a Dialogflow operation that writes or exports objects (e.g. intents) outside of Dialogflow.

Instructs the speech recognizer on how to process the audio content.

An intent represents a user's intent to interact with a conversational agent. You can provide information for the Dialogflow API to use to match user input to an intent by adding training phrases (i.e., examples of user input) to your intent.

Represents the intent to trigger programmatically rather than as a result of natural language processing.

Represents an example that the agent is trained on to identify the intent.

The Knowledge Connector settings for this page or flow. This includes information such as the attached Knowledge Bases, and the way to execute fulfillment.

Represents the language information of the request.

A Dialogflow CX conversation (session) can be described and visualized as a state machine. The states of a CX session are represented by pages. For each flow, you define many pages, where your combined pages can handle a complete conversation on the topics the flow is designed for. At any given moment, exactly one page is the current page, the current page is considered active, and the flow associated with that page is considered active. Every flow has a special start page. When a flow initially becomes active, the start page becomes the current page. For each conversational turn, the current page will either stay the same or transition to another page. You configure each page to collect information from the end-user that is relevant for the conversational state represented by the page. For more information, see the Page guide.

Represents page information communicated to and from the webhook.

Represents the query input. It can contain one of: 1. A conversational query in the form of text. 2. An intent query that specifies which intent to trigger. 3. Natural language speech audio to be processed. 4. An event to be triggered. 5. DTMF digits to invoke an intent and fill in parameter value. 6. The results of a tool executed by the client.
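
As a sketch, a text query input can be written as a plain map mirroring the JSON shape; only one of the input kinds listed above may be set at a time, and the values here are illustrative:

```elixir
query_input = %{
  # Exactly one of: text | intent | audio | event | dtmf | toolCallResult.
  "text" => %{"text" => "I'd like to book a table for two"},
  "languageCode" => "en"
}
```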

Represents a response message that can be returned by a conversational agent. Response messages are also used for output audio synthesis. The approach is as follows: if at least one OutputAudioText response is present, then all OutputAudioText responses are linearly concatenated, and the result is used for output audio synthesis. If the OutputAudioText responses are a mixture of text and SSML, then the concatenated result is treated as SSML; otherwise, the result is treated as either text or SSML as appropriate. The agent designer should ideally use either text or SSML consistently throughout the bot design. Otherwise, all Text responses are linearly concatenated, and the result is used for output audio synthesis. This approach allows for more sophisticated user experience scenarios, where the text displayed to the user may differ from what is heard.

Indicates that the conversation succeeded, i.e., the bot handled the issue that the customer talked to it about. Dialogflow only uses this to determine which conversations should be counted as successful and doesn't process the metadata in this message in any way. Note that Dialogflow also considers conversations that get to the conversation end page as successful even if they don't return ConversationSuccess. You may set this, for example: in the entry_fulfillment of a Page, if entering the page indicates that the conversation succeeded; or in a webhook response, when you determine that you handled the customer issue.

Indicates that interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only and not supposed to be defined by the user.

Represents info card response. If the response contains generative knowledge prediction, Dialogflow will return a payload with Infobot Messenger compatible info card. Otherwise, the info card response is skipped.

Indicates that the conversation should be handed off to a live agent. Dialogflow only uses this to determine which conversations were handed off to a human agent for measurement purposes. What else to do with this signal is up to you and your handoff procedures. You may set this, for example: in the entry_fulfillment of a Page, if entering the page indicates something went extremely wrong in the conversation; or in a webhook response, when you determine that the customer issue can only be handled by a human.

Represents an audio message that is composed of both segments synthesized from the Dialogflow agent prompts and ones hosted externally at the specified URIs. The external URIs are specified via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user.

A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message.

Specifies an audio clip to be played by the client as part of the response.

Represents the signal that tells the client to transfer the phone call connected to the agent to a third-party endpoint.

Metadata returned for the Environments.RunContinuousTest long running operation.

The response message for Environments.RunContinuousTest.

Metadata returned for the TestCases.RunTestCase long running operation. This message currently has no fields.

The response message for TestCases.RunTestCase.

Represents session information communicated to and from the webhook.

Represents a result from running a test case in an agent environment.

Represents configurations for a test case.

The description of differences between original and replayed agent output.

Represents the natural language text to be processed.

A transition route specifies an intent that can be matched and/or a data condition that can be evaluated during a session. When a specified transition is matched, the following actions are taken in order: if there is a trigger_fulfillment associated with the transition, it will be called; if there is a target_page associated with the transition, the session will transition into the specified page; if there is a target_flow associated with the transition, the session will transition into the specified flow.

Collection of all signals that were extracted for a single turn of the conversation.

Webhooks host the developer's business logic. During a session, webhooks allow the developer to use the data extracted by Dialogflow's natural language processing to generate dynamic responses, validate collected data, or trigger actions on the backend.

Represents configuration for a generic web service.

Represents configuration of OAuth client credential flow for 3rd party API authentication.

The request message for a webhook call. The request is sent as a JSON object and the field names are presented in camelCase. You may see undocumented fields in an actual request. These fields are used internally by Dialogflow and should be ignored.

Represents fulfillment information communicated to the webhook.

Represents intent information communicated to the webhook.

Hierarchical advanced settings for agent/flow/page/fulfillment/parameter. Settings exposed at a lower level override the settings exposed at a higher level. Overriding occurs at the sub-setting level. For example, the playback_interruption_settings at the fulfillment level only override the playback_interruption_settings at the agent level, leaving other settings at the agent level unchanged. DTMF settings do not override each other; DTMF settings set at different levels define DTMF detections running in parallel. Hierarchy: Agent->Flow->Page->Fulfillment/Parameter.

Represents the natural speech audio to be processed.

Configuration of the barge-in behavior. Barge-in instructs the API to return a detected utterance at a proper time while the client is playing back the response audio from a previous request. When the client sees the utterance, it should stop the playback and immediately prepare to receive the responses for the current request. Barge-in handling requires the client to start streaming audio input as soon as it starts playing back the audio from the previous response. The playback is modeled into two phases: a no-barge-in phase, which comes first and during which speech detection should not be carried out, and a barge-in phase, which follows the no-barge-in phase and during which the API starts speech detection and may inform the client that an utterance has been detected. Note that a no-speech event is not expected in this phase. The client provides this configuration in terms of the durations of those two phases; the durations are measured from the start of the input audio. A no-speech event is a response with END_OF_UTTERANCE without any transcript following up.

Metadata returned for the TestCases.BatchRunTestCases long running operation.

Represents a result from running a test case in an agent environment.

This message is used to hold all the Conversation Signals data, which will be converted to JSON and exported to BigQuery.

One interaction between a human and virtual agent. The human provides some input and the virtual agent provides a response.

Metadata associated with the long running operation for Versions.CreateVersion.

A data store connection. It represents a data store in Discovery Engine and the type of the contents it contains.

Metadata returned for the Environments.DeployFlow long running operation.

The response message for Environments.DeployFlow.

Represents an environment for an agent. You can create multiple versions of your agent and publish them to separate environments. When you edit an agent, you are editing the draft agent. At any point, you can save the draft agent as an agent version, which is an immutable snapshot of your agent. When you save the draft agent, it is published to the default environment. When you create agent versions, you can publish them to custom environments. You can create a variety of custom environments for testing, development, production, etc.

An event handler specifies an event that can be handled during a session. When the specified event happens, the following actions are taken in order: if there is a trigger_fulfillment associated with the event, it will be called; if there is a target_page associated with the event, the session will transition into the specified page; if there is a target_flow associated with the event, the session will transition into the specified flow.

Metadata returned for the EntityTypes.ExportEntityTypes long running operation.

The response message for EntityTypes.ExportEntityTypes.

Metadata returned for the Intents.ExportIntents long running operation.

Metadata returned for the TestCases.ExportTestCases long running operation. This message currently has no fields.

A form is a data model that groups related parameters that can be collected from the user. The process in which the agent prompts the user and collects parameter values from the user is called form filling. A form can be added to a page. When form filling is done, the filled parameters will be written to the session.

Configuration for how the filling of a parameter should be handled.

A fulfillment can do one or more of the following actions at the same time: generate rich message responses, set parameter values, or call the webhook. Fulfillments can be called at various stages in the Page or Form lifecycle. For example, when a DetectIntentRequest drives a session to enter a new page, the page's entry fulfillment can add a static response to the QueryResult in the returning DetectIntentResponse, call the webhook (for example, to load user data from a database), or both.

A list of cascading if-else conditions. Cases are mutually exclusive. The first one with a matching condition is selected, all the rest ignored.

Each case has a Boolean condition. When it is evaluated to be True, the corresponding messages will be selected and evaluated recursively.

The list of messages or conditional cases to activate for this case.

Google Cloud Storage location for a Dialogflow operation that writes or exports objects (e.g. exported agent or transcripts) outside of Dialogflow.

Metadata returned for the EntityTypes.ImportEntityTypes long running operation.

The response message for EntityTypes.ImportEntityTypes.

Conflicting resources detected during the import process. Only filled when REPORT_CONFLICT is set in the request and there are conflicts in the display names.

Metadata returned for the Intents.ImportIntents long running operation.

Conflicting resources detected during the import process. Only filled when REPORT_CONFLICT is set in the request and there are conflicts in the display names.

Metadata returned for the TestCases.ImportTestCases long running operation.

Inline destination for a Dialogflow operation that writes or exports objects (e.g. intents) outside of Dialogflow.

Instructs the speech recognizer on how to process the audio content.

An intent represents a user's intent to interact with a conversational agent. You can provide information for the Dialogflow API to use to match user input to an intent by adding training phrases (i.e., examples of user input) to your intent.

Represents the intent to trigger programmatically rather than as a result of natural language processing.

Represents an example that the agent is trained on to identify the intent.

The Knowledge Connector settings for this page or flow. This includes information such as the attached Knowledge Bases, and the way to execute fulfillment.

Represents the language information of the request.

A Dialogflow CX conversation (session) can be described and visualized as a state machine. The states of a CX session are represented by pages. For each flow, you define many pages, where your combined pages can handle a complete conversation on the topics the flow is designed for. At any given moment, exactly one page is the current page, the current page is considered active, and the flow associated with that page is considered active. Every flow has a special start page. When a flow initially becomes active, the start page becomes the current page. For each conversational turn, the current page will either stay the same or transition to another page. You configure each page to collect information from the end-user that is relevant for the conversational state represented by the page. For more information, see the Page guide.

Represents page information communicated to and from the webhook.

Represents the query input. It can contain one of: 1. A conversational query in the form of text. 2. An intent query that specifies which intent to trigger. 3. Natural language speech audio to be processed. 4. An event to be triggered. 5. DTMF digits to invoke an intent and fill in parameter value. 6. The results of a tool executed by the client.

Represents a response message that can be returned by a conversational agent. Response messages are also used for output audio synthesis. The approach is as follows: if at least one OutputAudioText response is present, then all OutputAudioText responses are linearly concatenated, and the result is used for output audio synthesis. If the OutputAudioText responses are a mixture of text and SSML, then the concatenated result is treated as SSML; otherwise, the result is treated as either text or SSML as appropriate. The agent designer should ideally use either text or SSML consistently throughout the bot design. Otherwise, all Text responses are linearly concatenated, and the result is used for output audio synthesis. This approach allows for more sophisticated user experience scenarios, where the text displayed to the user may differ from what is heard.

Indicates that the conversation succeeded, i.e., the bot handled the issue that the customer talked to it about. Dialogflow only uses this to determine which conversations should be counted as successful and doesn't process the metadata in this message in any way. Note that Dialogflow also considers conversations that get to the conversation end page as successful even if they don't return ConversationSuccess. You may set this, for example: in the entry_fulfillment of a Page, if entering the page indicates that the conversation succeeded; or in a webhook response, when you determine that you handled the customer issue.

Indicates that interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only and not supposed to be defined by the user.

Represents info card response. If the response contains generative knowledge prediction, Dialogflow will return a payload with Infobot Messenger compatible info card. Otherwise, the info card response is skipped.

Indicates that the conversation should be handed off to a live agent. Dialogflow only uses this to determine which conversations were handed off to a human agent for measurement purposes. What else to do with this signal is up to you and your handoff procedures. You may set this, for example: in the entry_fulfillment of a Page, if entering the page indicates something went extremely wrong in the conversation; or in a webhook response, when you determine that the customer issue can only be handled by a human.

Represents an audio message that is composed of both segments synthesized from the Dialogflow agent prompts and ones hosted externally at the specified URIs. The external URIs are specified via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user.

A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message.

Specifies an audio clip to be played by the client as part of the response.

Represents the signal that tells the client to transfer the phone call connected to the agent to a third-party endpoint.

Metadata returned for the Environments.RunContinuousTest long running operation.

The response message for Environments.RunContinuousTest.

Metadata returned for the TestCases.RunTestCase long running operation. This message currently has no fields.

Represents session information communicated to and from the webhook.

Represents a result from running a test case in an agent environment.

The description of differences between original and replayed agent output.

Represents the natural language text to be processed.

Represents a call of a specific tool's action with the specified inputs.

The result of calling a tool's action that has been executed by the client.

A transition route specifies an intent that can be matched and/or a data condition that can be evaluated during a session. When a specified transition is matched, the following actions are taken in order: if there is a trigger_fulfillment associated with the transition, it will be called; if there is a target_page associated with the transition, the session will transition into the specified page; if there is a target_flow associated with the transition, the session will transition into the specified flow.

Collection of all signals that were extracted for a single turn of the conversation.

Webhooks host the developer's business logic. During a session, webhooks allow the developer to use the data extracted by Dialogflow's natural language processing to generate dynamic responses, validate collected data, or trigger actions on the backend.

Represents configuration of OAuth client credential flow for 3rd party API authentication.

The request message for a webhook call. The request is sent as a JSON object and the field names are presented in camelCase. You may see undocumented fields in an actual request. These fields are used internally by Dialogflow and should be ignored.

Represents fulfillment information communicated to the webhook.

Represents intent information communicated to the webhook.

A Dialogflow agent is a virtual agent that handles conversations with your end-users. It is a natural language understanding module that understands the nuances of human language. Dialogflow translates end-user text or audio during a conversation to structured data that your apps and services can understand. You design and build a Dialogflow agent to handle the types of conversations required for your system. For more information about agents, see the Agent guide.

Represents a record of a human agent assist answer.

The request message for Participants.AnalyzeContent.

The response message for Participants.AnalyzeContent.

Represents a part of a message possibly annotated with an entity. The part can be an entity or purely a part of the message between two entities or message start/end.

Represents feedback the customer has about the quality & correctness of a certain answer in a conversation.

Answer records are records to manage answer history and feedback for Dialogflow. Currently, an answer record includes: - human agent assistant article suggestion - human agent assistant FAQ article. It doesn't include: - DetectIntent intent matching - DetectIntent knowledge. Answer records are not related to the conversation history in the Dialogflow Console. A record is generated even when the end-user disables conversation history in the console. Records are created when there's a human agent assistant suggestion generated. A typical workflow for customers to provide feedback on an answer is: 1. For human agent assistant, customers get suggestions via the ListSuggestions API. Together with the answers, the AnswerRecord.name values are returned to the customers. 2. The customer uses the AnswerRecord.name to call the AnswerRecords.UpdateAnswerRecord method to send feedback about a specific answer that they believe is wrong.
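
As a sketch of step 2, the update sent through AnswerRecords.UpdateAnswerRecord carries an AnswerFeedback; the map below mirrors the JSON shape with illustrative values (the update mask would name only the feedback field):

```elixir
answer_record_patch = %{
  "answerFeedback" => %{
    # The suggestion was shown and clicked, but judged incorrect.
    "clicked" => true,
    "correctnessLevel" => "NOT_CORRECT"
  }
}
```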

Represents the parameters of human assist query.

Defines the Automated Agent to connect to a conversation.

Represents a response from an automated agent.

The request message for EntityTypes.BatchCreateEntities.

The request message for EntityTypes.BatchDeleteEntities.

The request message for EntityTypes.BatchDeleteEntityTypes.

The request message for Intents.BatchDeleteIntents.

The request message for EntityTypes.BatchUpdateEntities.

The request message for EntityTypes.BatchUpdateEntityTypes.

The response message for EntityTypes.BatchUpdateEntityTypes.

Attributes

  • intentBatchInline (type: GoogleApi.Dialogflow.V2.Model.GoogleCloudDialogflowV2IntentBatch.t, default: nil) - The collection of intents to update or create.
  • intentBatchUri (type: String.t, default: nil) - The URI to a Google Cloud Storage file containing intents to update or create. The file format can be either a serialized proto (of IntentBatch type) or a JSON object. Note: The URI must start with "gs://".
  • intentView (type: String.t, default: nil) - Optional. The resource view to apply to the returned intent.
  • languageCode (type: String.t, default: nil) - Optional. The language used to access language-specific data. If not specified, the agent's default language is used. For more information, see Multilingual intent and entity data.
  • updateMask (type: String.t, default: nil) - Optional. The mask to control which fields get updated.
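
Putting these attributes together, a request that points at a Cloud Storage file of intents could look like the sketch below; the request struct name is assumed to follow the package's usual Model naming, and the bucket path and update mask are illustrative:

```elixir
alias GoogleApi.Dialogflow.V2.Model.GoogleCloudDialogflowV2BatchUpdateIntentsRequest

request = %GoogleCloudDialogflowV2BatchUpdateIntentsRequest{
  # The URI must start with "gs://".
  intentBatchUri: "gs://example-bucket/intents.json",
  languageCode: "en",
  intentView: "INTENT_VIEW_FULL",
  updateMask: "displayName,trainingPhrases"
}
```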

The response message for Intents.BatchUpdateIntents.

Metadata for a ConversationProfiles.ClearSuggestionFeatureConfig operation.

The request message for ConversationProfiles.ClearSuggestionFeatureConfig.

The request message for Conversations.CompleteConversation.

Dialogflow contexts are similar to natural language context. If a person says to you "they are orange", you need context in order to understand what "they" is referring to. Similarly, for Dialogflow to handle an end-user expression like that, it needs to be provided with context in order to correctly match an intent. Using contexts, you can control the flow of a conversation. You can configure contexts for an intent by setting input and output contexts, which are identified by string names. When an intent is matched, any configured output contexts for that intent become active. While any contexts are active, Dialogflow is more likely to match intents that are configured with input contexts that correspond to the currently active contexts. For more information about context, see the Contexts guide.

Represents a conversation. A conversation is an interaction between an agent, including live agents and Dialogflow agents, and a support customer. Conversations can include phone calls and text-based chat sessions.

Context of the conversation, including transcripts.

Represents a conversation dataset that a user imports raw data into. The data inside ConversationDataset cannot be changed after ImportConversationData finishes (and calling ImportConversationData on a dataset that already has data is not allowed).

Represents a notification sent to Pub/Sub subscribers for conversation lifecycle events.

Represents evaluation result of a conversation model.

Represents a phone number for telephony integration. It allows for connecting a particular conversation over telephony.

Defines the services to connect to incoming Dialogflow conversations.

Metadata for a ConversationModels.CreateConversationModelEvaluation operation.

The request message for ConversationModels.CreateConversationModelEvaluation.

Metadata for a ConversationModels.CreateConversationModel operation.

Metadata for a ConversationModels.DeleteConversationModel operation.

Metadata for a ConversationModels.DeployConversationModel operation.

The request message for ConversationModels.DeployConversationModel.

The message returned from the DetectIntent method.

A knowledge document to be used by a KnowledgeBase. For more information, see the knowledge base guide. Note: The projects.agent.knowledgeBases.documents resource is deprecated; only use projects.knowledgeBases.documents.

The message in the response that indicates the parameters of DTMF.

A customer-managed encryption key specification that can be applied to all created resources (e.g. Conversation).

Each intent parameter has a type, called the entity type, which dictates exactly how data from an end-user expression is extracted. Dialogflow provides predefined system entities that can match many common types of data. For example, there are system entities for matching dates, times, colors, email addresses, and so on. You can also create your own custom entities for matching custom data. For example, you could define a vegetable entity that can match the types of vegetables available for purchase with a grocery store agent. For more information, see the Entity guide.

This message is a wrapper around a collection of entity types.

An entity entry for an associated entity type.

You can create multiple versions of your agent and publish them to separate environments. When you edit an agent, you are editing the draft agent. At any point, you can save the draft agent as an agent version, which is an immutable snapshot of your agent. When you save the draft agent, it is published to the default environment. When you create agent versions, you can publish them to custom environments. You can create a variety of custom environments for: - testing - development - production - etc. For more information, see the versions and environments guide.

The response message for Environments.GetEnvironmentHistory.

Smart compose specific configuration for evaluation job.

Smart reply specific configuration for evaluation job.

Events allow for matching intents by event name instead of the natural language input. For instance, input `<event: { name: "welcome_event", parameters: { name: "Sam" } }>` can trigger a personalized welcome response. The parameter `name` may be used by the agent in the response: `"Hello #welcome_event.name! What can I do for you today?"`.

Attributes

  • languageCode (type: String.t, default: nil) - Required. The language of this query. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language. This field is ignored when used in the context of a WebhookResponse.followup_event_input field, because the language was already defined in the originating detect intent request.
  • name (type: String.t, default: nil) - Required. The unique identifier of the event.
  • parameters (type: map(), default: nil) - The collection of parameters associated with the event. Depending on your protocol or client library language, this is a map, associative array, symbol table, dictionary, or JSON object composed of a collection of (MapKey, MapValue) pairs: MapKey type: string. MapKey value: parameter name. MapValue type: if the parameter's entity type is a composite entity then use map; otherwise, depending on the parameter value type, it could be one of string, number, boolean, null, list or map. MapValue value: if the parameter's entity type is a composite entity then use a map from composite entity property names to property values; otherwise, use the parameter value.
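
As a sketch, an event input that triggers the welcome event above with one parameter, written as a plain map mirroring the JSON shape:

```elixir
event_input = %{
  "name" => "welcome_event",
  "languageCode" => "en",
  "parameters" => %{"name" => "Sam"}
}
```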

Metadata related to the Export Data Operations (e.g. ExportDocument).

Represents an answer from "frequently asked questions".

Providing examples in the generator (i.e. building a few-shot generator) helps convey the desired format of the LLM response.

By default, your agent responds to a matched intent with a static response. As an alternative, you can provide a more dynamic response by using fulfillment. When you enable fulfillment for an intent, Dialogflow responds to that intent by calling a service that you define. For example, if an end-user wants to schedule a haircut on Friday, your service can check your database and respond to the end-user with availability information for Friday. For more information, see the fulfillment guide.

Whether fulfillment is enabled for the specific feature.

Represents configuration for a generic web service. Dialogflow supports two mechanisms for authentication: - Basic authentication with username and password. - Authentication with additional authentication headers. More information can be found at: https://cloud.google.com/dialogflow/docs/fulfillment-configure.

Google Cloud Storage location for the output.

Google Cloud Storage location for the inputs.

The request message for Conversations.GenerateStatelessSuggestion.

The response message for Conversations.GenerateStatelessSuggestion.

The request message for Conversations.GenerateStatelessSummary.

The minimum amount of information required to generate a Summary without having a Conversation resource created.

The response message for Conversations.GenerateStatelessSummary.

Defines the Human Agent Assist to connect to a conversation.

Custom conversation models used in the agent assist feature. Supported features: ARTICLE_SUGGESTION, SMART_COMPOSE, SMART_REPLY, CONVERSATION_SUMMARIZATION.

Configuration for analyses to run on each conversation message.

Settings that determine how to filter recent conversation context when generating suggestions.

Custom sections to return when requesting a summary of a conversation. This is only supported when baseline_model_version == '2.0'. Supported features: CONVERSATION_SUMMARIZATION, CONVERSATION_SUMMARIZATION_VOICE.

Represents a notification sent to Cloud Pub/Sub subscribers for human agent assistant events in a specific conversation.

Defines the hand off to a live agent, typically on which external agent service provider to connect to a conversation. Currently, this feature is not generally available; please contact Google to get access.

Metadata for a ConversationDatasets.ImportConversationData operation.

Response used for ConversationDatasets.ImportConversationData long running operation.

The request message for ConversationDatasets.ImportConversationData.

Metadata for initializing a location-level encryption specification.

The request to initialize a location-level encryption specification.

Instructs the speech recognizer how to process the audio content.

Represents the configuration of importing a set of conversation files in Google Cloud Storage.

InputDataset used to create a model or run an evaluation.

An intent categorizes an end-user's intention for one conversation turn. For each agent, you define many intents, where your combined intents can handle a complete conversation. When an end-user writes or says something, referred to as an end-user expression or end-user input, Dialogflow matches the end-user input to the best intent in your agent. Matching an intent is also known as intent classification. For more information, see the intent guide.

This message is a wrapper around a collection of intents.

Represents a single followup intent in the chain.

A rich response message. Corresponds to the intent Response field in the Dialogflow console. For more information, see Rich response messages.

The basic card message. Useful for displaying information.

The button object that appears at the bottom of a card.

The card for presenting a carousel of options to select from.

The suggestion chip message that allows the user to jump out to the app or website associated with this agent.

The card for presenting a list of options to select from.

Additional info about the select item for when it is triggered in a dialog.

The simple response message containing speech or text.

The collection of simple response candidates. This message in QueryResult.fulfillment_messages and WebhookResponse.fulfillment_messages should contain only one SimpleResponse.

The suggestion chip message that the user can tap to quickly post a reply to the conversation.

Represents an example that the agent is trained on.

Represents an answer from Knowledge. Currently supports FAQ and Generative answers.

A knowledge base represents a collection of knowledge documents that you provide to Dialogflow. Your knowledge documents contain information that may be useful during conversations with end-users. Some Dialogflow features use knowledge bases when looking for a response to an end-user input. For more information, see the knowledge base guide. Note: The projects.agent.knowledgeBases resource is deprecated; only use projects.knowledgeBases.

Metadata in google::longrunning::Operation for Knowledge operations.

Response message for AnswerRecords.ListAnswerRecords.

The response message for Contexts.ListContexts.

The response message for ConversationDatasets.ListConversationDatasets.

The response message for ConversationModels.ListConversationModelEvaluations.

The response message for ConversationModels.ListConversationModels.

The response message for ConversationProfiles.ListConversationProfiles.

The response message for Conversations.ListConversations.

The response message for EntityTypes.ListEntityTypes.

The response message for Environments.ListEnvironments.

The response message for Intents.ListIntents.

Response message for KnowledgeBases.ListKnowledgeBases.

The response message for Conversations.ListMessages.

The response message for Participants.ListParticipants.

The response message for SessionEntityTypes.ListSessionEntityTypes.

The response message for Versions.ListVersions.

Defines logging behavior for conversation lifecycle events.

Represents a message posted into a conversation.

Represents the result of annotation for the message.

Represents a message entry of a conversation.

Represents the contents of the original request that was passed to the [Streaming]DetectIntent call.

Represents the natural language speech audio to be played to the end user.

Instructs the speech synthesizer on how to generate the output audio content. If this audio config is supplied in a request, it overrides all existing text-to-speech settings applied to the agent.

Represents a conversation participant (human agent, virtual agent, end-user).

Represents the query input. It can contain either: 1. An audio config which instructs the speech recognizer how to process the speech audio. 2. A conversational query in the form of text. 3. An event that specifies which intent to trigger.

Represents the parameters of the conversational query.

Represents the result of conversational query or event processing.

The request message for Conversations.SearchKnowledge.

Configuration specific to search queries with data stores.

Boost specification to boost certain documents. A copy of google.cloud.discoveryengine.v1main.BoostSpec, field documentation is available at https://cloud.google.com/generative-ai-app-builder/docs/reference/rest/v1alpha/BoostSpec

Specification for custom ranking based on customer specified attribute value. It provides more controls for customized ranking than the simple (condition, boost) combination above.

The control points used to define the curve. The curve defined through these control points can only be monotonically increasing or decreasing (constant values are acceptable).

The response message for Conversations.SearchKnowledge.

The sentiment, such as positive/negative feeling or association, for a unit of analysis, such as the query text. See: https://cloud.google.com/natural-language/docs/basics#interpreting_sentiment_analysis_values for how to interpret the result.

Configures the types of sentiment analysis to perform.

The result of sentiment analysis. Sentiment analysis inspects user input and identifies the prevailing subjective opinion, especially to determine a user's attitude as positive, negative, or neutral. For DetectIntent, it needs to be configured in DetectIntentRequest.query_params. For StreamingDetectIntent, it needs to be configured in StreamingDetectIntentRequest.query_params. And for Participants.AnalyzeContent and Participants.StreamingAnalyzeContent, it needs to be configured in ConversationProfile.human_agent_assistant_config.

A session represents a conversation between a Dialogflow agent and an end-user. You can create special entities, called session entities, during a session. Session entities can extend or replace custom entity types and only exist during the session that they were created for. All session data, including session entities, is stored by Dialogflow for 20 minutes. For more information, see the session entity guide.

Metadata for a ConversationProfiles.SetSuggestionFeatureConfig operation.

The request message for ConversationProfiles.SetSuggestionFeatureConfig.

The evaluation metrics for smart reply model.

Evaluation metrics when retrieving n smart replies with the model.

Hints for the speech recognizer to help with recognition in a specific conversation state.

Configures speech transcription for ConversationProfile.

The request message for Participants.SuggestArticles.

The response message for Participants.SuggestArticles.

The request message for Conversations.SuggestConversationSummary.

The response message for Conversations.SuggestConversationSummary.

The request message for Participants.SuggestFaqAnswers.

The response message for Participants.SuggestFaqAnswers.

The request message for Participants.SuggestKnowledgeAssist.

The response message for Participants.SuggestKnowledgeAssist.

The request message for Participants.SuggestSmartReplies.

The response message for Participants.SuggestSmartReplies.

The type of Human Agent Assistant API suggestion to perform, and the maximum number of results to return for that type. Multiple Feature objects can be specified in the features list.

One response of a different type of suggestion response, used in the response of Participants.AnalyzeContent and Participants.StreamingAnalyzeContent, as well as HumanAgentAssistantEvent.

Summarization context that customer can configure.

Configuration of how speech should be synthesized.

Represents the natural language text to be processed.

Instructs the speech synthesizer on how to generate the output audio content.

Metadata for a ConversationModels.UndeployConversationModel operation.

The request message for ConversationModels.UndeployConversationModel.

You can create multiple versions of your agent and publish them to separate environments. When you edit an agent, you are editing the draft agent. At any point, you can save the draft agent as an agent version, which is an immutable snapshot of your agent. When you save the draft agent, it is published to the default environment. When you create agent versions, you can publish them to custom environments. You can create a variety of custom environments for: - testing - development - production - etc. For more information, see the versions and environments guide.

Description of which voice to use for speech synthesis.

The response message for a webhook call. This response is validated by the Dialogflow server. If validation fails, an error will be returned in the QueryResult.diagnostic_info field. Setting JSON fields to an empty value with the wrong type is a common error. To avoid this error: - Use "" for empty strings - Use {} or null for empty objects - Use [] or null for empty arrays For more information, see the Protocol Buffers Language Guide.
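
To make the empty-value rule concrete, here is a sketch of a webhook response body, written as an Elixir map that encodes to valid JSON and uses the correct empty values:

```elixir
webhook_response = %{
  # Empty string rather than a value of the wrong type.
  "fulfillmentText" => "",
  # Empty array, or omit the field entirely.
  "fulfillmentMessages" => [],
  # Empty object, or omit the field entirely.
  "payload" => %{},
  "outputContexts" => []
}
```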

Represents a part of a message possibly annotated with an entity. The part can be an entity or purely a part of the message between two entities or message start/end.

The response message for EntityTypes.BatchUpdateEntityTypes.

Metadata for a ConversationProfile.ClearSuggestionFeatureConfig operation.

Dialogflow contexts are similar to natural language context. If a person says to you "they are orange", you need context in order to understand what "they" is referring to. Similarly, for Dialogflow to handle an end-user expression like that, it needs to be provided with context in order to correctly match an intent. Using contexts, you can control the flow of a conversation. You can configure contexts for an intent by setting input and output contexts, which are identified by string names. When an intent is matched, any configured output contexts for that intent become active. While any contexts are active, Dialogflow is more likely to match intents that are configured with input contexts that correspond to the currently active contexts. For more information about context, see the Contexts guide.

Represents a notification sent to Pub/Sub subscribers for conversation lifecycle events.

A customer-managed encryption key specification that can be applied to all created resources (e.g. Conversation).

Each intent parameter has a type, called the entity type, which dictates exactly how data from an end-user expression is extracted. Dialogflow provides predefined system entities that can match many common types of data. For example, there are system entities for matching dates, times, colors, email addresses, and so on. You can also create your own custom entities for matching custom data. For example, you could define a vegetable entity that can match the types of vegetables available for purchase with a grocery store agent. For more information, see the Entity guide.

Events allow for matching intents by event name instead of the natural language input. For instance, input `<event: { name: "welcome_event", parameters: { name: "Sam" } }>` can trigger a personalized welcome response. The parameter `name` may be used by the agent in the response: `"Hello #welcome_event.name! What can I do for you today?"`.

Attributes

  • languageCode (type: String.t, default: nil) - Required. The language of this query. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language. This field is ignored when used in the context of a WebhookResponse.followup_event_input field, because the language was already defined in the originating detect intent request.
  • name (type: String.t, default: nil) - Required. The unique identifier of the event.
  • parameters (type: map(), default: nil) - The collection of parameters associated with the event. Depending on your protocol or client library language, this is a map, associative array, symbol table, dictionary, or JSON object composed of a collection of (MapKey, MapValue) pairs: MapKey type: string. MapKey value: parameter name. MapValue type: if the parameter's entity type is a composite entity then use map; otherwise, depending on the parameter value type, it could be one of string, number, boolean, null, list or map. MapValue value: if the parameter's entity type is a composite entity then use a map from composite entity property names to property values; otherwise, use the parameter value.

Metadata related to the Export Data Operations (e.g. ExportDocument).

Represents an answer from "frequently asked questions".

Google Cloud Storage location for the output.

Output only. Represents a notification sent to Pub/Sub subscribers for agent assistant events in a specific conversation.

Metadata for initializing a location-level encryption specification.

The request to initialize a location-level encryption specification.

An intent categorizes an end-user's intention for one conversation turn. For each agent, you define many intents, where your combined intents can handle a complete conversation. When an end-user writes or says something, referred to as an end-user expression or end-user input, Dialogflow matches the end-user input to the best intent in your agent. Matching an intent is also known as intent classification. For more information, see the intent guide.

Corresponds to the Response field in the Dialogflow console.

The basic card message. Useful for displaying information.

The button object that appears at the bottom of a card.

The card for presenting a carousel of options to select from.

The suggestion chip message that allows the user to jump out to the app or website associated with this agent.

The card for presenting a list of options to select from.

Rich Business Messaging (RBM) Media displayed in Cards. The following media types are currently supported: Image types: image/jpeg, image/jpg, image/gif, image/png. Video types: video/h263, video/m4v, video/mp4, video/mpeg, video/mpeg4, video/webm.

Carousel Rich Business Messaging (RBM) rich card. Rich cards allow you to respond to users with more vivid content, e.g. with media and suggestions. If you want to show a single card with more control over the layout, please use RbmStandaloneCard instead.

Standalone Rich Business Messaging (RBM) rich card. Rich cards allow you to respond to users with more vivid content, e.g. with media and suggestions. You can group multiple rich cards into one using RbmCarouselCard but carousel cards will give you less control over the card layout.

Rich Business Messaging (RBM) suggested client-side action that the user can choose from the card.

Opens the user's default dialer app with the specified phone number but does not dial automatically.

Opens the user's default web browser app to the specified URI. If the user has an app installed that is registered as the default handler for the URL, then this app will be opened instead, and its icon will be used in the suggested action UI.

Opens the device's location chooser so the user can pick a location to send back to the agent.

Rich Business Messaging (RBM) suggested reply that the user can click instead of typing in their own response.

Rich Business Messaging (RBM) suggestion. Suggestions allow the user to easily select/click a predefined response or perform an action (like opening a web URI).

Rich Business Messaging (RBM) text response with suggestions.

Additional info about the select item for when it is triggered in a dialog.

The simple response message containing speech or text.

The collection of simple response candidates. This message in QueryResult.fulfillment_messages and WebhookResponse.fulfillment_messages should contain only one SimpleResponse.

The suggestion chip message that the user can tap to quickly post a reply to the conversation.

Synthesizes speech and plays back the synthesized audio to the caller in Telephony Gateway. Telephony Gateway takes the synthesizer settings from DetectIntentResponse.output_audio_config which can either be set at request-level or can come from the agent-level synthesizer config.

Represents an example that the agent is trained on.

Represents the result of querying a Knowledge base.

Represents an answer from Knowledge. Currently supports FAQ and Generative answers.

Metadata in google::longrunning::Operation for Knowledge operations.

Represents a message posted into a conversation.

Represents the result of annotation for the message.

Represents the contents of the original request that was passed to the [Streaming]DetectIntent call.

Represents the result of conversational query or event processing.

Indicates that interaction with the Dialogflow agent has ended.

Indicates that the conversation should be handed off to a human agent. Dialogflow only uses this to determine which conversations were handed off to a human agent for measurement purposes. What else to do with this signal is up to you and your handoff procedures. You may set this, for example: in the entry fulfillment of a CX Page, if entering the page indicates something went extremely wrong in the conversation; or in a webhook response, when you determine that the customer issue can only be handled by a human.

Represents an audio message that is composed of both segments synthesized from the Dialogflow agent prompts and ones hosted externally at the specified URIs.

Represents the signal that tells the client to transfer the phone call connected to the agent to a third-party endpoint.

The sentiment, such as positive/negative feeling or association, for a unit of analysis, such as the query text. See: https://cloud.google.com/natural-language/docs/basics#interpreting_sentiment_analysis_values for how to interpret the result.

The result of sentiment analysis. Sentiment analysis inspects user input and identifies the prevailing subjective opinion, especially to determine a user's attitude as positive, negative, or neutral. For Participants.DetectIntent, it needs to be configured in DetectIntentRequest.query_params. For Participants.StreamingDetectIntent, it needs to be configured in StreamingDetectIntentRequest.query_params. And for Participants.AnalyzeContent and Participants.StreamingAnalyzeContent, it needs to be configured in ConversationProfile.human_agent_assistant_config.

A session represents a conversation between a Dialogflow agent and an end-user. You can create special entities, called session entities, during a session. Session entities can extend or replace custom entity types and only exist during the session that they were created for. All session data, including session entities, is stored by Dialogflow for 20 minutes. For more information, see the session entity guide.

Metadata for a ConversationProfile.SetSuggestionFeatureConfig operation.

The response message for Participants.SuggestArticles.

The response message for Participants.SuggestDialogflowAssists.

The request message for Participants.SuggestFaqAnswers.

The response message for Participants.SuggestKnowledgeAssist.

The response message for Participants.SuggestSmartReplies.

One response of a given type of suggestion, used in the response of Participants.AnalyzeContent and Participants.StreamingAnalyzeContent, as well as HumanAgentAssistantEvent.

The response message for a webhook call. This response is validated by the Dialogflow server. If validation fails, an error will be returned in the QueryResult.diagnostic_info field. Setting JSON fields to an empty value with the wrong type is a common error. To avoid this error: use "" for empty strings; use {} or null for empty objects; use [] or null for empty arrays. For more information, see the Protocol Buffers Language Guide.
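
As a minimal sketch, the empty-value rules above can be followed by building the response as a plain map and encoding it to JSON (assuming the Jason library is available; the field names shown are illustrative examples, not the full response schema):

```elixir
# Illustrative webhook response body with correctly typed "empty" values.
# Field names are examples only; consult the module's field list for the schema.
response = %{
  "fulfillmentText" => "",       # empty string, not nil
  "payload" => %{},              # empty object ({} or null both work)
  "fulfillmentMessages" => []    # empty array ([] or null both work)
}

Jason.encode!(response)
# => a JSON string in which "", {} and [] are preserved as-is
```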

This message is used to hold all the Conversation Signals data, which will be converted to JSON and exported to BigQuery.

Collection of all signals that were extracted for a single turn of the conversation.

The response message for Locations.ListLocations.

A resource that represents a Google Cloud location.

The response message for Operations.ListOperations.

This resource represents a long-running operation that is the result of a network API call.

A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); }

The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each Status message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide.
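
For illustration, a decoded error payload of this shape might look like the sketch below; only the three field names come from the error model described above, and the values are invented (Jason is assumed to be available).

```elixir
# A google.rpc.Status-style payload decoded from JSON. Values are invented.
status_json = ~s({"code": 3, "message": "Invalid query input", "details": []})

{:ok, status} = Jason.decode(status_json)
status["code"]    # => 3
status["message"] # => "Invalid query input"
```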

An object that represents a latitude/longitude pair. This is expressed as a pair of doubles to represent degrees latitude and degrees longitude. Unless specified otherwise, this object must conform to the WGS84 standard. Values must be within normalized ranges.
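
A small sketch of checking the normalized ranges mentioned above (latitude in [-90.0, 90.0], longitude in [-180.0, 180.0]); the atom keys are an illustrative choice, not the module's struct fields:

```elixir
defmodule LatLngExample do
  # Returns true when a latitude/longitude pair is within normalized ranges.
  def valid?(%{latitude: lat, longitude: lng}) do
    lat >= -90.0 and lat <= 90.0 and lng >= -180.0 and lng <= 180.0
  end
end

LatLngExample.valid?(%{latitude: 48.8566, longitude: 2.3522})
# => true
```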

API client metadata for GoogleApi.Dialogflow.V3.

API calls for all endpoints tagged Projects.

Handle Tesla connections for GoogleApi.Dialogflow.V3.

Hierarchical advanced settings for agent/flow/page/fulfillment/parameter. Settings exposed at a lower level override the settings exposed at a higher level. Overriding occurs at the sub-setting level. For example, the playback_interruption_settings at the fulfillment level only override the playback_interruption_settings at the agent level, leaving other settings at the agent level unchanged. DTMF settings do not override each other. DTMF settings set at different levels define DTMF detections running in parallel. Hierarchy: Agent->Flow->Page->Fulfillment/Parameter.

Define behaviors for DTMF (dual tone multi frequency).

Agents are best described as Natural Language Understanding (NLU) modules that transform user requests into actionable data. You can include agents in your app, product, or service to determine user intent and respond to the user in a natural way. After you create an agent, you can add Intents, Entity Types, Flows, Fulfillments, Webhooks, TransitionRouteGroups and so on to manage the conversation flows.

Settings for connecting to Git repository for an agent.

The response message for Agents.GetAgentValidationResult.

Stores information about feedback provided by users about a response.

Stores extra information about why users provided thumbs down rating.

Represents the natural speech audio to be processed.

Configuration of the barge-in behavior. Barge-in instructs the API to return a detected utterance at a proper time while the client is playing back the response audio from a previous request. When the client sees the utterance, it should stop the playback and immediately get ready for receiving the responses for the current request. The barge-in handling requires the client to start streaming audio input as soon as it starts playing back the audio from the previous response. The playback is modeled into two phases: No barge-in phase: which goes first and during which speech detection should not be carried out. Barge-in phase: which follows the no barge-in phase and during which the API starts speech detection and may inform the client that an utterance has been detected. Note that no-speech event is not expected in this phase. The client provides this configuration in terms of the durations of those two phases. The durations are measured in terms of the audio length from the start of the input audio. No-speech event is a response with END_OF_UTTERANCE without any transcript following up.
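
As a sketch of the two-phase model described above, such a configuration could be expressed as request JSON along the following lines. The noBargeInDuration/totalDuration field names and the duration format are assumptions for illustration; check the module's field list for the actual schema.

```elixir
# Hypothetical barge-in configuration: suppress speech detection for the first
# 5 seconds of playback, then allow barge-in until 30 seconds of input audio
# have elapsed. Field names and values are illustrative assumptions.
barge_in_config = %{
  "noBargeInDuration" => "5s",
  "totalDuration" => "30s"
}
```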

The request message for TestCases.BatchDeleteTestCases.

Metadata returned for the TestCases.BatchRunTestCases long running operation.

The request message for TestCases.BatchRunTestCases.

The response message for TestCases.BatchRunTestCases.

Boost specification to boost certain documents. A copy of google.cloud.discoveryengine.v1main.BoostSpec, field documentation is available at https://cloud.google.com/generative-ai-app-builder/docs/reference/rest/v1alpha/BoostSpec

Specification for custom ranking based on customer specified attribute value. It provides more controls for customized ranking than the simple (condition, boost) combination above.

The control points used to define the curve. The curve defined through these control points can only be monotonically increasing or decreasing (constant values are acceptable).

The response message for TestCases.CalculateCoverage.

Changelogs represents a change made to a given agent.

The request message for Versions.CompareVersions.

The response message for Versions.CompareVersions.

Represents a result from running a test case in an agent environment.

This message is used to hold all the Conversation Signals data, which will be converted to JSON and exported to BigQuery.

One interaction between a human and virtual agent. The human provides some input and the virtual agent provides a response.

Metadata associated with the long running operation for Versions.CreateVersion.

A data store connection. It represents a data store in Discovery Engine and the type of the contents it contains.

Data store connection feature output signals. Might be only partially filled if processing stops before the final answer. Reasons for this can be, but are not limited to: empty UCS search results, positive RAI check outcome, grounding failure, ...

Metadata returned for the Environments.DeployFlow long running operation.

The request message for Environments.DeployFlow.

The response message for Environments.DeployFlow.

Represents a deployment in an environment. A deployment happens when a flow version is configured to be active in the environment. You can configure running pre-deployment steps, e.g. running validation test cases, experiment auto-rollout, etc.

The message returned from the DetectIntent method.

Entities are extracted from user input and represent parameters that are meaningful to your application. For example, a date range, a proper name such as a geographic location or landmark, and so on. Entities represent actionable data for your application. When you define an entity, you can also include synonyms that all map to that entity. For example, "soft drink", "soda", "pop", and so on. There are three types of entities: System - entities that are defined by the Dialogflow API for common data types such as date, time, currency, and so on. A system entity is represented by the EntityType type. Custom - entities that are defined by you that represent actionable data that is meaningful to your application. For example, you could define a pizza.sauce entity for red or white pizza sauce, a pizza.cheese entity for the different types of cheese on a pizza, a pizza.topping entity for different toppings, and so on. A custom entity is represented by the EntityType type. User - entities that are built for an individual user such as favorites, preferences, playlists, and so on. A user entity is represented by the SessionEntityType type. For more information about entity types, see the Dialogflow documentation.

An entity entry for an associated entity type.
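
For illustration, such an entry could carry the synonym example from the entity description above as a plain map; the value/synonyms field names are assumptions for illustration.

```elixir
# A custom entity entry where several synonyms all map to one canonical value.
# Field names are illustrative assumptions.
entity_entry = %{
  "value" => "soft drink",
  "synonyms" => ["soft drink", "soda", "pop"]
}
```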

An excluded entity phrase that should not be matched.

Represents an environment for an agent. You can create multiple versions of your agent and publish them to separate environments. When you edit an agent, you are editing the draft agent. At any point, you can save the draft agent as an agent version, which is an immutable snapshot of your agent. When you save the draft agent, it is published to the default environment. When you create agent versions, you can publish them to custom environments. You can create a variety of custom environments for testing, development, production, etc.

An event handler specifies an event that can be handled during a session. When the specified event happens, the following actions are taken in order: If there is a trigger_fulfillment associated with the event, it will be called. If there is a target_page associated with the event, the session will transition into the specified page. If there is a target_flow associated with the event, the session will transition into the specified flow.

Represents an experiment in an environment.

The inference result which includes an objective metric to optimize and the confidence interval.

A confidence interval is a range of possible values for the experiment objective you are trying to measure.

Metadata returned for the EntityTypes.ExportEntityTypes long running operation.

The request message for EntityTypes.ExportEntityTypes.

The response message for EntityTypes.ExportEntityTypes.

Metadata returned for the Intents.ExportIntents long running operation.

Metadata returned for the TestCases.ExportTestCases long running operation. This message currently has no fields.

The request message for TestCases.ExportTestCases.

The response message for TestCases.ExportTestCases.

Flows represent the conversation flows when you build your chatbot agent. A flow consists of many pages connected by the transition routes. Conversations always start with the built-in Start Flow (with an all-0 ID). Transition routes can direct the conversation session from the current flow (parent flow) to another flow (sub flow). When the sub flow is finished, Dialogflow will bring the session back to the parent flow, where the sub flow was started. Usually, when a transition route is followed by a matched intent, the intent will be "consumed". This means the intent won't activate more transition routes. However, when the followed transition route moves the conversation session into a different flow, the matched intent can be carried over and consumed in the target flow.

The flow import strategy used for resource conflict resolution associated with an ImportFlowRequest.

The response message for Flows.GetFlowValidationResult.

A form is a data model that groups related parameters that can be collected from the user. The process in which the agent prompts the user and collects parameter values from the user is called form filling. A form can be added to a page. When form filling is done, the filled parameters will be written to the session.

Configuration for how the filling of a parameter should be handled.

A fulfillment can do one or more of the following actions at the same time: generate rich message responses, set parameter values, or call the webhook. Fulfillments can be called at various stages in the Page or Form lifecycle. For example, when a DetectIntentRequest drives a session to enter a new page, the page's entry fulfillment can add a static response to the QueryResult in the returning DetectIntentResponse, call the webhook (for example, to load user data from a database), or both.

A list of cascading if-else conditions. Cases are mutually exclusive. The first one with a matching condition is selected, all the rest ignored.

Each case has a Boolean condition. When it is evaluated to be True, the corresponding messages will be selected and evaluated recursively.

The list of messages or conditional cases to activate for this case.

Google Cloud Storage location for a Dialogflow operation that writes or exports objects (e.g. exported agent or transcripts) outside of Dialogflow.

Settings for the knowledge connector. These parameters fill in the placeholders of an LLM prompt template along the lines of "You are …. You are a helpful and verbose … at …, …. Your task is to help humans on ….", where each placeholder is resolved from these settings.

Generators contain a prompt to be sent to the LLM model to generate text. The prompt can contain parameters which will be resolved before calling the model. It can optionally contain banned phrases to ensure the model responses are safe.

Parameters to be passed to the LLM. If not set, default values will be used.

Represents a custom placeholder in the prompt text.

Metadata returned for the EntityTypes.ImportEntityTypes long running operation.

The request message for EntityTypes.ImportEntityTypes.

The response message for EntityTypes.ImportEntityTypes.

Conflicting resources detected during the import process. Only filled when REPORT_CONFLICT is set in the request and there are conflicts in the display names.

Metadata returned for the Intents.ImportIntents long running operation.

Conflicting resources detected during the import process. Only filled when REPORT_CONFLICT is set in the request and there are conflicts in the display names.

Metadata returned for the TestCases.ImportTestCases long running operation.

The request message for TestCases.ImportTestCases.

The response message for TestCases.ImportTestCases.

Inline destination for a Dialogflow operation that writes or exports objects (e.g. intents) outside of Dialogflow.

Inline source for a Dialogflow operation that reads or imports objects (e.g. intents) into Dialogflow.

Instructs the speech recognizer on how to process the audio content.

An intent represents a user's intent to interact with a conversational agent. You can provide information for the Dialogflow API to use to match user input to an intent by adding training phrases (i.e., examples of user input) to your intent.

Intent coverage represents the percentage of all possible intents in the agent that are triggered in any of a parent's test cases.

Represents the intent to trigger programmatically rather than as a result of natural language processing.

Represents an example that the agent is trained on to identify the intent.

The Knowledge Connector settings for this page or flow. This includes information such as the attached Knowledge Bases, and the way to execute fulfillment.

Represents the language information of the request.

The response message for Changelogs.ListChangelogs.

The response message for Environments.ListTestCaseResults.

The response message for Deployments.ListDeployments.

The response message for EntityTypes.ListEntityTypes.

The response message for Environments.ListEnvironments.

The response message for Experiments.ListExperiments.

The response message for Generators.ListGenerators.

The response message for SecuritySettings.ListSecuritySettings.

The response message for SessionEntityTypes.ListSessionEntityTypes.

The response message for TestCases.ListTestCaseResults.

The response message for TestCases.ListTestCases.

The response message for TransitionRouteGroups.ListTransitionRouteGroups.

The response message for Versions.ListVersions.

The response message for Webhooks.ListWebhooks.

The response message for Environments.LookupEnvironmentHistory.

Represents one match result of MatchIntent.

Instructs the speech synthesizer how to generate the output audio content.

A Dialogflow CX conversation (session) can be described and visualized as a state machine. The states of a CX session are represented by pages. For each flow, you define many pages, where your combined pages can handle a complete conversation on the topics the flow is designed for. At any given moment, exactly one page is the current page, the current page is considered active, and the flow associated with that page is considered active. Every flow has a special start page. When a flow initially becomes active, the start page becomes the current page. For each conversational turn, the current page will either stay the same or transition to another page. You configure each page to collect information from the end-user that is relevant for the conversational state represented by the page. For more information, see the Page guide.

Represents page information communicated to and from the webhook.

Text input which can be used for prompt or banned phrases.

Represents the query input. It can contain one of: 1. A conversational query in the form of text. 2. An intent query that specifies which intent to trigger. 3. Natural language speech audio to be processed. 4. An event to be triggered. 5. DTMF digits to invoke an intent and fill in parameter value. 6. The results of a tool executed by the client.
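
A minimal sketch of the first variant (a conversational text query) as plain request JSON; the nested text/languageCode layout is an assumption for illustration, and exactly one of the variants listed above may be set per request.

```elixir
# Illustrative text query input. Only one input variant may be populated.
# Field layout is an assumption for illustration.
query_input = %{
  "text" => %{"text" => "I'd like to book a flight"},
  "languageCode" => "en"
}
```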

Represents the parameters of a conversational query.

Represents the result of a conversational query.

Represents a response message that can be returned by a conversational agent. Response messages are also used for output audio synthesis. The approach is as follows: If at least one OutputAudioText response is present, then all OutputAudioText responses are linearly concatenated, and the result is used for output audio synthesis. If the OutputAudioText responses are a mixture of text and SSML, then the concatenated result is treated as SSML; otherwise, the result is treated as either text or SSML as appropriate. The agent designer should ideally use either text or SSML consistently throughout the bot design. Otherwise, all Text responses are linearly concatenated, and the result is used for output audio synthesis. This approach allows for more sophisticated user experience scenarios, where the text displayed to the user may differ from what is heard.

Indicates that the conversation succeeded, i.e., the bot handled the issue that the customer talked to it about. Dialogflow only uses this to determine which conversations should be counted as successful and doesn't process the metadata in this message in any way. Note that Dialogflow also considers conversations that get to the conversation end page as successful even if they don't return ConversationSuccess. You may set this, for example: In the entry_fulfillment of a Page if entering the page indicates that the conversation succeeded. In a webhook response when you determine that you handled the customer issue.

Indicates that interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only and not supposed to be defined by the user.

Represents an info card response. If the response contains generative knowledge prediction, Dialogflow will return a payload with an Infobot Messenger compatible info card. Otherwise, the info card response is skipped.

Indicates that the conversation should be handed off to a live agent. Dialogflow only uses this to determine which conversations were handed off to a human agent for measurement purposes. What else to do with this signal is up to you and your handoff procedures. You may set this, for example: In the entry_fulfillment of a Page if entering the page indicates something went extremely wrong in the conversation. In a webhook response when you determine that the customer issue can only be handled by a human.

Represents an audio message that is composed of both segments synthesized from the Dialogflow agent prompts and ones hosted externally at the specified URIs. The external URIs are specified via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user.

A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message.

Specifies an audio clip to be played by the client as part of the response.

Represents the signal that tells the client to transfer the phone call connected to the agent to a third-party endpoint.

A single rollout step with specified traffic allocation.

Metadata returned for the Environments.RunContinuousTest long running operation.

The request message for Environments.RunContinuousTest.

The response message for Environments.RunContinuousTest.

Metadata returned for the TestCases.RunTestCase long running operation. This message currently has no fields.

The request message for TestCases.RunTestCase.

The response message for TestCases.RunTestCase.

Text input which can be used for prompt or banned phrases.

Search configuration for UCS search queries.

Represents the settings related to security issues, such as data redaction and data retention. It may take hours for updates on the settings to propagate to all the related components and take effect.

The result of sentiment analysis. Sentiment analysis inspects user input and identifies the prevailing subjective opinion, especially to determine a user's attitude as positive, negative, or neutral.

Session entity types are referred to as User entity types and are entities that are built for an individual user such as favorites, preferences, playlists, and so on. You can redefine a session entity type at the session level to extend or replace a custom entity type at the user session level (we refer to the entity types defined at the agent level as "custom entity types"). Note: session entity types apply to all queries, regardless of the language. For more information about entity types, see the Dialogflow documentation.

Represents session information communicated to and from the webhook.

The request message for Experiments.StartExperiment.

The request message for Experiments.StopExperiment.

Configuration of how speech should be synthesized.
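
A hedged sketch of what such a configuration might contain; every field name here (speakingRate, pitch, volumeGainDb, voice.name) is an assumption for illustration, as is the voice value.

```elixir
# Hypothetical speech synthesis configuration. All field names and values
# are illustrative assumptions; consult the module's field list for the schema.
synthesize_speech_config = %{
  "speakingRate" => 1.0,
  "pitch" => 0.0,
  "volumeGainDb" => 0.0,
  "voice" => %{"name" => "en-US-Standard-A"}
}
```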

Represents a result from running a test case in an agent environment.

Represents configurations for a test case.

The description of differences between original and replayed agent output.

Represents the natural language text to be processed.

Transition coverage represents the percentage of all possible page transitions (page-level transition routes and event handlers, excluding transition route groups) present within any of a parent's test cases.

A transition route specifies an intent that can be matched and/or a data condition that can be evaluated during a session. When a specified transition is matched, the following actions are taken in order: If there is a trigger_fulfillment associated with the transition, it will be called. If there is a target_page associated with the transition, the session will transition into the specified page. If there is a target_flow associated with the transition, the session will transition into the specified flow.

A TransitionRouteGroup represents a group of TransitionRoutes to be used by a Page.

Transition route group coverage represents the percentage of all possible transition routes present within any of a parent's test cases. The results are grouped by the transition route group.

Collection of all signals that were extracted for a single turn of the conversation.

A single flow version with specified traffic allocation.

Description of which voice to use for speech synthesis.

Webhooks host the developer's business logic. During a session, webhooks allow the developer to use the data extracted by Dialogflow's natural language processing to generate dynamic responses, validate collected data, or trigger actions on the backend.

Represents configuration for a generic web service.

Represents configuration of OAuth client credential flow for 3rd party API authentication.

The request message for a webhook call. The request is sent as a JSON object and the field names will be presented in camelCase. You may see undocumented fields in an actual request. These fields are used internally by Dialogflow and should be ignored.
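
As a sketch, a webhook endpoint can decode the JSON body and read only the camelCase keys it cares about, ignoring everything else; the fulfillmentInfo/tag key names are assumptions used for illustration, and Jason is assumed to be available.

```elixir
defmodule WebhookExample do
  # Extract a fulfillment tag from a raw webhook request body, ignoring
  # any undocumented fields. Key names are illustrative assumptions.
  def fulfillment_tag(body) when is_binary(body) do
    body
    |> Jason.decode!()
    |> get_in(["fulfillmentInfo", "tag"])
  end
end
```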

Represents fulfillment information communicated to the webhook.

Represents intent information communicated to the webhook.

Hierarchical advanced settings for agent/flow/page/fulfillment/parameter. Settings exposed at a lower level override the settings exposed at a higher level. Overriding occurs at the sub-setting level. For example, the playback_interruption_settings at the fulfillment level only override the playback_interruption_settings at the agent level, leaving other settings at the agent level unchanged. DTMF settings do not override each other. DTMF settings set at different levels define DTMF detections running in parallel. Hierarchy: Agent->Flow->Page->Fulfillment/Parameter.

Represents the natural speech audio to be processed.

Configuration of the barge-in behavior. Barge-in instructs the API to return a detected utterance at a proper time while the client is playing back the response audio from a previous request. When the client sees the utterance, it should stop the playback and immediately get ready for receiving the responses for the current request. The barge-in handling requires the client to start streaming audio input as soon as it starts playing back the audio from the previous response. The playback is modeled into two phases: No barge-in phase: which goes first and during which speech detection should not be carried out. Barge-in phase: which follows the no barge-in phase and during which the API starts speech detection and may inform the client that an utterance has been detected. Note that no-speech event is not expected in this phase. The client provides this configuration in terms of the durations of those two phases. The durations are measured in terms of the audio length from the start of the input audio. No-speech event is a response with END_OF_UTTERANCE without any transcript following up.

Metadata returned for the TestCases.BatchRunTestCases long running operation.

Represents a result from running a test case in an agent environment.

This message is used to hold all the Conversation Signals data, which will be converted to JSON and exported to BigQuery.

One interaction between a human and virtual agent. The human provides some input and the virtual agent provides a response.

Metadata associated with the long running operation for Versions.CreateVersion.

A data store connection. It represents a data store in Discovery Engine and the type of the contents it contains.

Metadata returned for the Environments.DeployFlow long running operation.

The response message for Environments.DeployFlow.

Represents an environment for an agent. You can create multiple versions of your agent and publish them to separate environments. When you edit an agent, you are editing the draft agent. At any point, you can save the draft agent as an agent version, which is an immutable snapshot of your agent. When you save the draft agent, it is published to the default environment. When you create agent versions, you can publish them to custom environments. You can create a variety of custom environments for testing, development, production, etc.

An event handler specifies an event that can be handled during a session. When the specified event happens, the following actions are taken in order: If there is a trigger_fulfillment associated with the event, it will be called. If there is a target_page associated with the event, the session will transition into the specified page. If there is a target_flow associated with the event, the session will transition into the specified flow.

Metadata returned for the EntityTypes.ExportEntityTypes long running operation.

The response message for EntityTypes.ExportEntityTypes.

Metadata returned for the Intents.ExportIntents long running operation.

Metadata returned for the TestCases.ExportTestCases long running operation. This message currently has no fields.

A form is a data model that groups related parameters that can be collected from the user. The process in which the agent prompts the user and collects parameter values from the user is called form filling. A form can be added to a page. When form filling is done, the filled parameters will be written to the session.

Configuration for how the filling of a parameter should be handled.

A fulfillment can do one or more of the following actions at the same time: generate rich message responses, set parameter values, or call the webhook. Fulfillments can be called at various stages in the Page or Form lifecycle. For example, when a DetectIntentRequest drives a session to enter a new page, the page's entry fulfillment can add a static response to the QueryResult in the returning DetectIntentResponse, call the webhook (for example, to load user data from a database), or both.

A list of cascading if-else conditions. Cases are mutually exclusive. The first one with a matching condition is selected, all the rest ignored.

Each case has a Boolean condition. When it is evaluated to be True, the corresponding messages will be selected and evaluated recursively.

The list of messages or conditional cases to activate for this case.

Google Cloud Storage location for a Dialogflow operation that writes or exports objects (e.g. exported agent or transcripts) outside of Dialogflow.

Metadata returned for the EntityTypes.ImportEntityTypes long running operation.

The response message for EntityTypes.ImportEntityTypes.

Conflicting resources detected during the import process. Only filled when REPORT_CONFLICT is set in the request and there are conflicts in the display names.

Metadata returned for the Intents.ImportIntents long running operation.

Conflicting resources detected during the import process. Only filled when REPORT_CONFLICT is set in the request and there are conflicts in the display names.

Metadata returned for the TestCases.ImportTestCases long running operation.

Inline destination for a Dialogflow operation that writes or exports objects (e.g. intents) outside of Dialogflow.

Instructs the speech recognizer on how to process the audio content.

An intent represents a user's intent to interact with a conversational agent. You can provide information for the Dialogflow API to use to match user input to an intent by adding training phrases (i.e., examples of user input) to your intent.

Represents the intent to trigger programmatically rather than as a result of natural language processing.

Represents an example that the agent is trained on to identify the intent.

The Knowledge Connector settings for this page or flow. This includes information such as the attached Knowledge Bases, and the way to execute fulfillment.

Represents the language information of the request.

A Dialogflow CX conversation (session) can be described and visualized as a state machine. The states of a CX session are represented by pages. For each flow, you define many pages, where your combined pages can handle a complete conversation on the topics the flow is designed for. At any given moment, exactly one page is the current page, the current page is considered active, and the flow associated with that page is considered active. Every flow has a special start page. When a flow initially becomes active, the start page becomes the current page. For each conversational turn, the current page will either stay the same or transition to another page. You configure each page to collect information from the end-user that is relevant for the conversational state represented by the page. For more information, see the Page guide.

Represents page information communicated to and from the webhook.

Represents the query input. It can contain one of: 1. A conversational query in the form of text. 2. An intent query that specifies which intent to trigger. 3. Natural language speech audio to be processed. 4. An event to be triggered. 5. DTMF digits to invoke an intent and fill in parameter value. 6. The results of a tool executed by the client.

Represents a response message that can be returned by a conversational agent. Response messages are also used for output audio synthesis. The approach is as follows: If at least one OutputAudioText response is present, then all OutputAudioText responses are linearly concatenated, and the result is used for output audio synthesis. If the OutputAudioText responses are a mixture of text and SSML, then the concatenated result is treated as SSML; otherwise, the result is treated as either text or SSML as appropriate. The agent designer should ideally use either text or SSML consistently throughout the bot design. Otherwise, all Text responses are linearly concatenated, and the result is used for output audio synthesis. This approach allows for more sophisticated user experience scenarios, where the text displayed to the user may differ from what is heard.

Indicates that the conversation succeeded, i.e., the bot handled the issue that the customer talked to it about. Dialogflow only uses this to determine which conversations should be counted as successful and doesn't process the metadata in this message in any way. Note that Dialogflow also considers conversations that get to the conversation end page as successful even if they don't return ConversationSuccess. You may set this, for example: In the entry_fulfillment of a Page if entering the page indicates that the conversation succeeded. In a webhook response when you determine that you handled the customer issue.

Indicates that interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only and not supposed to be defined by the user.

Represents an info card response. If the response contains generative knowledge prediction, Dialogflow will return a payload with an Infobot Messenger compatible info card. Otherwise, the info card response is skipped.

Indicates that the conversation should be handed off to a live agent. Dialogflow only uses this to determine which conversations were handed off to a human agent for measurement purposes. What else to do with this signal is up to you and your handoff procedures. You may set this, for example: In the entry_fulfillment of a Page if entering the page indicates something went extremely wrong in the conversation. In a webhook response when you determine that the customer issue can only be handled by a human.

Represents an audio message that is composed of both segments synthesized from the Dialogflow agent prompts and ones hosted externally at the specified URIs. The external URIs are specified via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user.

A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message.

Specifies an audio clip to be played by the client as part of the response.

Represents the signal that tells the client to transfer the phone call connected to the agent to a third-party endpoint.

Metadata returned for the Environments.RunContinuousTest long running operation.

The response message for Environments.RunContinuousTest.

Metadata returned for the TestCases.RunTestCase long running operation. This message currently has no fields.

Represents session information communicated to and from the webhook.

Represents a result from running a test case in an agent environment.

The description of differences between original and replayed agent output.

Represents the natural language text to be processed.

Represents a call of a specific tool's action with the specified inputs.

The result of calling a tool's action that has been executed by the client.

A transition route specifies an intent that can be matched and/or a data condition that can be evaluated during a session. When a specified transition is matched, the following actions are taken in order: If there is a trigger_fulfillment associated with the transition, it will be called. If there is a target_page associated with the transition, the session will transition into the specified page. If there is a target_flow associated with the transition, the session will transition into the specified flow.

Collection of all signals that were extracted for a single turn of the conversation.

Webhooks host the developer's business logic. During a session, webhooks allow the developer to use the data extracted by Dialogflow's natural language processing to generate dynamic responses, validate collected data, or trigger actions on the backend.

Represents configuration of OAuth client credential flow for 3rd party API authentication.

The request message for a webhook call. The request is sent as a JSON object and the field names will be presented in camelCase. You may see undocumented fields in an actual request. These fields are used internally by Dialogflow and should be ignored.

Represents fulfillment information communicated to the webhook.

Represents intent information communicated to the webhook.

Represents a part of a message possibly annotated with an entity. The part can be an entity or purely a part of the message between two entities or message start/end.

The response message for EntityTypes.BatchUpdateEntityTypes.

The response message for Intents.BatchUpdateIntents.

Metadata for a ConversationProfiles.ClearSuggestionFeatureConfig operation.

Dialogflow contexts are similar to natural language context. If a person says to you "they are orange", you need context in order to understand what "they" is referring to. Similarly, for Dialogflow to handle an end-user expression like that, it needs to be provided with context in order to correctly match an intent. Using contexts, you can control the flow of a conversation. You can configure contexts for an intent by setting input and output contexts, which are identified by string names. When an intent is matched, any configured output contexts for that intent become active. While any contexts are active, Dialogflow is more likely to match intents that are configured with input contexts that correspond to the currently active contexts. For more information about context, see the Contexts guide.

Represents a notification sent to Pub/Sub subscribers for conversation lifecycle events.

Metadata for a ConversationModels.CreateConversationModelEvaluation operation.

Metadata for a ConversationModels.CreateConversationModel operation.

Metadata for a ConversationModels.DeleteConversationModel operation.

Metadata for a ConversationModels.DeployConversationModel operation.

A customer-managed encryption key specification that can be applied to all created resources (e.g. Conversation).

Each intent parameter has a type, called the entity type, which dictates exactly how data from an end-user expression is extracted. Dialogflow provides predefined system entities that can match many common types of data. For example, there are system entities for matching dates, times, colors, email addresses, and so on. You can also create your own custom entities for matching custom data. For example, you could define a vegetable entity that can match the types of vegetables available for purchase with a grocery store agent. For more information, see the Entity guide.

An entity entry for an associated entity type.

Events allow for matching intents by event name instead of the natural language input. For instance, an input event can trigger a personalized welcome response. The parameter `name` may be used by the agent in the response: `"Hello #welcome_event.name! What can I do for you today?"`.

## Attributes

*   `languageCode` (*type:* `String.t`, *default:* `nil`) - Required. The language of this query. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language. This field is ignored when used in the context of a WebhookResponse.followup_event_input field, because the language was already defined in the originating detect intent request.
*   `name` (*type:* `String.t`, *default:* `nil`) - Required. The unique identifier of the event.
*   `parameters` (*type:* `map()`, *default:* `nil`) - The collection of parameters associated with the event. Depending on your protocol or client library language, this is a map, associative array, symbol table, dictionary, or JSON object composed of a collection of (MapKey, MapValue) pairs: MapKey type: string; MapKey value: parameter name; MapValue type: if the parameter's entity type is a composite entity then use map, otherwise, depending on the parameter value type, it could be one of string, number, boolean, null, list or map; MapValue value: if the parameter's entity type is a composite entity then use a map from composite entity property names to property values, otherwise, use the parameter value.
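
For example, a hypothetical event input could be built as a plain map before being sent with a query; the event name echoes the #welcome_event.name reference above, and the parameter value is invented for illustration.

```elixir
# Illustrative event input; the parameter value "Sam" is invented.
event_input = %{
  "name" => "welcome_event",
  "languageCode" => "en",
  "parameters" => %{"name" => "Sam"}
}
```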

Metadata related to the Export Data Operations (e.g. ExportDocument).

Represents an answer from "frequently asked questions".

Google Cloud Storage location for the output.

Represents a notification sent to Cloud Pub/Sub subscribers for human agent assistant events in a specific conversation.

Metadata for a ConversationDatasets.ImportConversationData operation.

Response used for ConversationDatasets.ImportConversationData long running operation.

Metadata for initializing a location-level encryption specification.

The request to initialize a location-level encryption specification.

InputDataset used to create a model or run an evaluation.

An intent categorizes an end-user's intention for one conversation turn. For each agent, you define many intents, where your combined intents can handle a complete conversation. When an end-user writes or says something, referred to as an end-user expression or end-user input, Dialogflow matches the end-user input to the best intent in your agent. Matching an intent is also known as intent classification. For more information, see the intent guide.

Represents a single followup intent in the chain.

A rich response message. Corresponds to the intent Response field in the Dialogflow console. For more information, see Rich response messages.

The basic card message. Useful for displaying information.

The button object that appears at the bottom of a card.

The card for presenting a carousel of options to select from.

The suggestion chip message that allows the user to jump out to the app or website associated with this agent.

The card for presenting a list of options to select from.

Additional info about the select item for when it is triggered in a dialog.

The simple response message containing speech or text.

The collection of simple response candidates. This message in QueryResult.fulfillment_messages and WebhookResponse.fulfillment_messages should contain only one SimpleResponse.

The suggestion chip message that the user can tap to quickly post a reply to the conversation.

Represents an example that the agent is trained on.

Represents an answer from Knowledge. Currently supports FAQ and Generative answers.

Metadata in google::longrunning::Operation for Knowledge operations.

Represents a message posted into a conversation.

Represents the result of annotation for the message.

Represents the contents of the original request that was passed to the [Streaming]DetectIntent call.

Represents the result of conversational query or event processing.

The sentiment, such as positive/negative feeling or association, for a unit of analysis, such as the query text. See: https://cloud.google.com/natural-language/docs/basics#interpreting_sentiment_analysis_values for how to interpret the result.

The result of sentiment analysis. Sentiment analysis inspects user input and identifies the prevailing subjective opinion, especially to determine a user's attitude as positive, negative, or neutral. For DetectIntent, it needs to be configured in DetectIntentRequest.query_params. For StreamingDetectIntent, it needs to be configured in StreamingDetectIntentRequest.query_params. And for Participants.AnalyzeContent and Participants.StreamingAnalyzeContent, it needs to be configured in ConversationProfile.human_agent_assistant_config.

A session represents a conversation between a Dialogflow agent and an end-user. You can create special entities, called session entities, during a session. Session entities can extend or replace custom entity types and only exist during the session that they were created for. All session data, including session entities, is stored by Dialogflow for 20 minutes. For more information, see the session entity guide.

Metadata for a ConversationProfiles.SetSuggestionFeatureConfig operation.

The response message for Participants.SuggestArticles.

The request message for Participants.SuggestFaqAnswers.

The response message for Participants.SuggestKnowledgeAssist.

The response message for Participants.SuggestSmartReplies.

One response of a given type of suggestion, used in the response of Participants.AnalyzeContent and Participants.StreamingAnalyzeContent, as well as HumanAgentAssistantEvent.

Metadata for a ConversationModels.UndeployConversationModel operation.

The response message for a webhook call. This response is validated by the Dialogflow server. If validation fails, an error will be returned in the QueryResult.diagnostic_info field. Setting JSON fields to an empty value with the wrong type is a common error. To avoid this error: use "" for empty strings; use {} or null for empty objects; use [] or null for empty arrays. For more information, see the Protocol Buffers Language Guide.

Represents a part of a message possibly annotated with an entity. The part can be an entity or purely a part of the message between two entities or message start/end.

The response message for EntityTypes.BatchUpdateEntityTypes.

Metadata for a ConversationProfile.ClearSuggestionFeatureConfig operation.

Dialogflow contexts are similar to natural language context. If a person says to you "they are orange", you need context in order to understand what "they" is referring to. Similarly, for Dialogflow to handle an end-user expression like that, it needs to be provided with context in order to correctly match an intent. Using contexts, you can control the flow of a conversation. You can configure contexts for an intent by setting input and output contexts, which are identified by string names. When an intent is matched, any configured output contexts for that intent become active. While any contexts are active, Dialogflow is more likely to match intents that are configured with input contexts that correspond to the currently active contexts. For more information about context, see the Contexts guide.

Represents a notification sent to Pub/Sub subscribers for conversation lifecycle events.

A customer-managed encryption key specification that can be applied to all created resources (e.g. Conversation).

Each intent parameter has a type, called the entity type, which dictates exactly how data from an end-user expression is extracted. Dialogflow provides predefined system entities that can match many common types of data. For example, there are system entities for matching dates, times, colors, email addresses, and so on. You can also create your own custom entities for matching custom data. For example, you could define a vegetable entity that can match the types of vegetables available for purchase with a grocery store agent. For more information, see the Entity guide.

Events allow for matching intents by event name instead of the natural language input. For instance, an input event can trigger a personalized welcome response. The parameter `name` may be used by the agent in the response: `"Hello #welcome_event.name! What can I do for you today?"`.

## Attributes

*   `languageCode` (*type:* `String.t`, *default:* `nil`) - Required. The language of this query. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language. This field is ignored when used in the context of a WebhookResponse.followup_event_input field, because the language was already defined in the originating detect intent request.
*   `name` (*type:* `String.t`, *default:* `nil`) - Required. The unique identifier of the event.
*   `parameters` (*type:* `map()`, *default:* `nil`) - The collection of parameters associated with the event. Depending on your protocol or client library language, this is a map, associative array, symbol table, dictionary, or JSON object composed of a collection of (MapKey, MapValue) pairs: MapKey type: string; MapKey value: parameter name; MapValue type: if the parameter's entity type is a composite entity then use map, otherwise, depending on the parameter value type, it could be one of string, number, boolean, null, list or map; MapValue value: if the parameter's entity type is a composite entity then use a map from composite entity property names to property values, otherwise, use the parameter value.

Metadata related to the Export Data Operations (e.g. ExportDocument).

Represents an answer from "frequently asked questions".

Google Cloud Storage location for the output.

Output only. Represents a notification sent to Pub/Sub subscribers for agent assistant events in a specific conversation.

Metadata for initializing a location-level encryption specification.

The request to initialize a location-level encryption specification.

An intent categorizes an end-user's intention for one conversation turn. For each agent, you define many intents, where your combined intents can handle a complete conversation. When an end-user writes or says something, referred to as an end-user expression or end-user input, Dialogflow matches the end-user input to the best intent in your agent. Matching an intent is also known as intent classification. For more information, see the intent guide.

Corresponds to the Response field in the Dialogflow console.

The basic card message. Useful for displaying information.

The button object that appears at the bottom of a card.

The card for presenting a carousel of options to select from.

The suggestion chip message that allows the user to jump out to the app or website associated with this agent.

The card for presenting a list of options to select from.

Rich Business Messaging (RBM) media displayed in cards. The following media types are currently supported: Image types: image/jpeg, image/jpg, image/gif, image/png. Video types: video/h263, video/m4v, video/mp4, video/mpeg, video/mpeg4, video/webm.

Carousel Rich Business Messaging (RBM) rich card. Rich cards allow you to respond to users with more vivid content, e.g. with media and suggestions. If you want to show a single card with more control over the layout, please use RbmStandaloneCard instead.

Standalone Rich Business Messaging (RBM) rich card. Rich cards allow you to respond to users with more vivid content, e.g. with media and suggestions. You can group multiple rich cards into one using RbmCarouselCard but carousel cards will give you less control over the card layout.

Rich Business Messaging (RBM) suggested client-side action that the user can choose from the card.

Opens the user's default dialer app with the specified phone number but does not dial automatically.

Opens the user's default web browser app to the specified URI. If the user has an app installed that is registered as the default handler for the URL, then this app will be opened instead, and its icon will be used in the suggested action UI.

Opens the device's location chooser so the user can pick a location to send back to the agent.

Rich Business Messaging (RBM) suggested reply that the user can click instead of typing in their own response.

Rich Business Messaging (RBM) suggestion. Suggestions allow the user to easily select/click a predefined response or perform an action (like opening a web URI).

Rich Business Messaging (RBM) text response with suggestions.

Additional info about the select item for when it is triggered in a dialog.

The simple response message containing speech or text.

The collection of simple response candidates. This message in QueryResult.fulfillment_messages and WebhookResponse.fulfillment_messages should contain only one SimpleResponse.

The suggestion chip message that the user can tap to quickly post a reply to the conversation.

Synthesizes speech and plays back the synthesized audio to the caller in Telephony Gateway. Telephony Gateway takes the synthesizer settings from DetectIntentResponse.output_audio_config which can either be set at request-level or can come from the agent-level synthesizer config.

Represents an example that the agent is trained on.

Represents the result of querying a Knowledge base.

Represents an answer from Knowledge. Currently supports FAQ and Generative answers.

Metadata in google::longrunning::Operation for Knowledge operations.

Represents a message posted into a conversation.

Represents the result of annotation for the message.

Represents the contents of the original request that was passed to the [Streaming]DetectIntent call.

Represents the result of conversational query or event processing.

Indicates that interaction with the Dialogflow agent has ended.

Indicates that the conversation should be handed off to a human agent. Dialogflow only uses this to determine which conversations were handed off to a human agent for measurement purposes. What else to do with this signal is up to you and your handoff procedures. You may set this, for example: In the entry fulfillment of a CX Page if entering the page indicates something went extremely wrong in the conversation. In a webhook response when you determine that the customer issue can only be handled by a human.

Represents an audio message that is composed of both segments synthesized from the Dialogflow agent prompts and ones hosted externally at the specified URIs.

Represents the signal that tells the client to transfer the phone call connected to the agent to a third-party endpoint.

The sentiment, such as positive/negative feeling or association, for a unit of analysis, such as the query text. See: https://cloud.google.com/natural-language/docs/basics#interpreting_sentiment_analysis_values for how to interpret the result.

The result of sentiment analysis. Sentiment analysis inspects user input and identifies the prevailing subjective opinion, especially to determine a user's attitude as positive, negative, or neutral. For Participants.DetectIntent, it needs to be configured in DetectIntentRequest.query_params. For Participants.StreamingDetectIntent, it needs to be configured in StreamingDetectIntentRequest.query_params. And for Participants.AnalyzeContent and Participants.StreamingAnalyzeContent, it needs to be configured in ConversationProfile.human_agent_assistant_config.

A session represents a conversation between a Dialogflow agent and an end-user. You can create special entities, called session entities, during a session. Session entities can extend or replace custom entity types and only exist during the session that they were created for. All session data, including session entities, is stored by Dialogflow for 20 minutes. For more information, see the session entity guide.

Metadata for a ConversationProfile.SetSuggestionFeatureConfig operation.

The response message for Participants.SuggestArticles.

The response message for Participants.SuggestDialogflowAssists.

The request message for Participants.SuggestFaqAnswers.

The response message for Participants.SuggestKnowledgeAssist.

The response message for Participants.SuggestSmartReplies.

One response of a given type of suggestion, used in the response of Participants.AnalyzeContent and Participants.StreamingAnalyzeContent, as well as HumanAgentAssistantEvent.

The response message for a webhook call. This response is validated by the Dialogflow server. If validation fails, an error will be returned in the QueryResult.diagnostic_info field. Setting JSON fields to an empty value with the wrong type is a common error. To avoid this error: use "" for empty strings; use {} or null for empty objects; use [] or null for empty arrays. For more information, see the Protocol Buffers Language Guide.

This message is used to hold all the Conversation Signals data, which will be converted to JSON and exported to BigQuery.

Collection of all signals that were extracted for a single turn of the conversation.

The response message for Locations.ListLocations.

A resource that represents a Google Cloud location.

The response message for Operations.ListOperations.

This resource represents a long-running operation that is the result of a network API call.

A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); }

The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each Status message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide.

An object that represents a latitude/longitude pair. This is expressed as a pair of doubles to represent degrees latitude and degrees longitude. Unless specified otherwise, this object must conform to the WGS84 standard. Values must be within normalized ranges.