SpeechSense dialogs
A dialog is a SpeechSense object. There are two types of dialogs:
- Audio: The agent's voice conversation with a customer, recorded via the contact center PBX. As soon as you upload a conversation's audio to SpeechSense, it automatically recognizes agent and customer speech.
- Chat: The customer's text chat with an agent or bot. For chats, you need to manually specify message authors before uploading a chat to SpeechSense.
There are two dialog directions:
- Outgoing: Initiated by the agent.
- Incoming: Initiated by the customer.
Analyze dialogs in SpeechSense to evaluate the agents' performance. There are two ways to work with dialogs:
- In the dialog list, find the one you need and view its detailed info.
- Build a report on dialogs.
Detailed info about a dialog
You can get the following information for each dialog:
- Metadata, e.g., full names of the agent and customer, call or message date, and dialog language. The metadata list is defined in the connection.
- Conversation audio (only for audio).
- Conversation contents.
- YandexGPT API analysis.
Dialog contents
On the dialog page, see the Dialog tab for the dialog contents:
- For audio: Text transcript of the dialog, automatically generated by Yandex SpeechKit.
- For chats: Text messages.
You can search for a text fragment in the audio transcript or chat messages, in either the customer's or the agent's channel. The search returns exact matches only. The found fragments are highlighted in yellow.
The text is automatically tagged with agent and customer tags. These indicate, for example, whether the agent greeted the customer or whether the customer was in a good mood.
YandexGPT analysis
On the dialog page, see the Analysis by YandexGPT tab for the autogenerated summary based on semantic analysis of the dialog. The summary has the following sections:
- Analysis: Answers to questions that help evaluate the agent's performance and the customer's behavior during the conversation.
- Summary: Reasons for having the conversation and its outcomes. This section also includes information on the evaluation criteria, e.g., the participants' emotions or objections during the conversation.
- Reasons: Why the dialog took place. Examples:
  - Incoming contact: The customer's service is down.
  - Outgoing contact: The agent is advertising a service subscription.
- Subject: What the customer and agent discussed.
- Outcomes: What the conversation led to. The outcomes are described for each of the listed reasons. Examples:
  - Incoming contact: The agent helped resolve the customer's issue.
  - Outgoing contact: The customer purchased the subscription.
To get more accurate analysis results in the future, evaluate them on the Analysis by YandexGPT tab for each dialog. YandexGPT learns from your feedback.
When building a report, you can use a neuroparameter: a dialog characteristic that SpeechSense checks for in each dialog via semantic analysis. There are two ways to use neuroparameters:
- As filters for dialogs. For example, you can build a report only for dialogs on a specific topic.
- As an evaluation parameter (only for Evaluation form reports). For example, you can view dialogs with a certain outcome as a proportion of all the dialogs included in the report.
See this guide to learn how you can configure neuroparameters.
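To illustrate the evaluation-parameter use, here is a minimal sketch of how the share of dialogs with a given boolean neuroparameter could be computed. This is not the SpeechSense API; the record shape and the `purchased_subscription` field name are hypothetical.

```python
# Hypothetical sketch: the share of dialogs where a boolean neuroparameter
# (e.g., "the customer purchased the subscription") holds. Record shape and
# field names are illustrative, not the SpeechSense API.

def neuroparameter_share(dialogs, param):
    """Return the fraction of dialogs whose neuroparameter `param` is true."""
    if not dialogs:
        return 0.0
    hits = sum(1 for d in dialogs if d.get(param) is True)
    return hits / len(dialogs)

dialogs = [
    {"id": 1, "purchased_subscription": True},
    {"id": 2, "purchased_subscription": False},
    {"id": 3, "purchased_subscription": True},
]
share = neuroparameter_share(dialogs, "purchased_subscription")  # 2 of 3 dialogs
```

In an Evaluation form report, SpeechSense presents this kind of proportion for you; the sketch only shows the underlying idea.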
Dialog filtering
Filters define the conditions for searching through dialogs.
There are the following types of filters:
- Agent: Agent data.
- Customer: Customer data.
- Bot (only for chats): Bot data.
- Speech statistics (only for audio): Agent and customer speech quality criteria, e.g., speech rate, mutual interruptions, etc.
- General metadata: Data about the conversation audio or text chat.
- Customer tags and Agent tags: Classifiers applied to conversation audio recognition results or text messages. You can learn more about tags here.
- YandexGPT analysis: Agent’s performance criteria and customer’s behavioral characteristics during the dialog, such as whether the agent was polite, whether the customer acted in a rude manner, etc.
For each filter, you can specify one or more filtering conditions. These can be of four types:
- Date: Select a date range from the calendar.
- Text: Enter a text string. The search returns exact matches only.
- Number: Specify a range of numbers. You can specify both range boundaries or just one of them. To find a particular value, specify it as both the top and bottom boundary. The boundary values are included in the filtering range.
- Boolean: Yes or No.
You can use multiple filters at the same time to find the dialogs satisfying all the conditions you specified.
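The condition semantics above (inclusive number boundaries, exact text matching, all conditions combined with AND) can be sketched as a simple matching function. This is a hypothetical illustration, not SpeechSense code; all field names are assumptions.

```python
# Hypothetical sketch of evaluating the four condition types against a
# dialog's metadata. Field names and the condition format are illustrative.
from datetime import date

def matches(dialog, conditions):
    """True if the dialog satisfies every filtering condition (logical AND)."""
    for field, cond in conditions.items():
        value = dialog[field]
        kind = cond["type"]
        if kind == "date":            # date range selected from the calendar
            ok = cond["from"] <= value <= cond["to"]
        elif kind == "text":          # exact match, no fuzzy search
            ok = cond["value"] in value
        elif kind == "number":        # both boundaries are inclusive
            lo = cond.get("min", float("-inf"))
            hi = cond.get("max", float("inf"))
            ok = lo <= value <= hi
        elif kind == "boolean":       # Yes or No
            ok = value is cond["value"]
        else:
            ok = False
        if not ok:
            return False
    return True

dialog = {
    "date": date(2024, 5, 20),
    "topic": "subscription renewal",
    "speech_rate": 120,
    "agent_greeted": True,
}
conditions = {
    "date": {"type": "date", "from": date(2024, 5, 1), "to": date(2024, 5, 31)},
    "topic": {"type": "text", "value": "subscription"},
    "speech_rate": {"type": "number", "min": 100, "max": 120},  # 120 is included
    "agent_greeted": {"type": "boolean", "value": True},
}
print(matches(dialog, conditions))  # True
```

Note how a single value is found with a number condition: setting `min` and `max` to the same value matches exactly that value, since both boundaries are inclusive.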
Related dialogs
In some CRM systems, chats may be grouped by task. For example, you can group together all chats with a customer who has contacted support multiple times with the same request. When uploading data to SpeechSense, you can specify the additional ticket_id parameter to group such chats into related dialogs.
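Conceptually, the grouping works like the following sketch. Only the ticket_id field comes from the source; the record shape and function are hypothetical, not the SpeechSense upload API.

```python
# Hypothetical sketch: grouping uploaded chats into related dialogs by the
# optional ticket_id field. Chats without a ticket_id remain standalone.
from collections import defaultdict

def group_related(chats):
    """Group chat records sharing the same ticket_id."""
    groups = defaultdict(list)
    singles = []
    for chat in chats:
        ticket = chat.get("ticket_id")
        if ticket is None:
            singles.append([chat])    # no ticket_id: a standalone dialog
        else:
            groups[ticket].append(chat)
    return list(groups.values()) + singles

chats = [
    {"chat_id": "a1", "ticket_id": "T-42"},  # same ticket...
    {"chat_id": "a2", "ticket_id": "T-42"},  # ...so grouped together
    {"chat_id": "b1"},                       # standalone chat
]
related = group_related(chats)  # one group of two chats, one single chat
```

In SpeechSense itself, the grouping happens server-side at upload time; you only need to pass the same ticket_id for chats that belong to the same task.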
In each of the related chats, you will see the following:
- Chat metadata at the top of the page.
- Contents of all related chats on the Dialog tab. It shows a tag hierarchy for each individual chat. You can use text search within a single chat.
- Chat summaries autogenerated by the YandexGPT API based on semantic analysis, on the Analysis by YandexGPT tab.