SpeechSense dialogs
A dialog is a SpeechSense object. There are two types of dialogs:
- Audio: An agent's voice conversation with a customer, recorded through the contact center PBX. As soon as you upload a conversation's audio to SpeechSense, it automatically recognizes the agent's and customer's speech.
- Chat: A customer's text chat with an agent or bot. For chats, you need to manually specify message authors before uploading the chat to SpeechSense.
There are two dialog directions:
- Outgoing: Initiated by the agent.
- Incoming: Initiated by the customer.
Analyze dialogs in SpeechSense to evaluate the agents' performance. There are two ways to work with dialogs:
- In the dialog list, find the one you need and view its detailed info.
- Build a report on dialogs.
Detailed info about a dialog
You can get the following information for each dialog:
- Metadata, such as the full names of the agent and customer, the call or message date, and the dialog language. The metadata list is defined in the connection.
- Conversation audio (only for audio).
- Conversation contents.
- YandexGPT API analysis.
Dialog contents
On the dialog page, see the Dialog tab for the dialog contents:
- For audio: Text transcript of the dialog, automatically generated by Yandex SpeechKit.
- For chats: Text messages.
You can search for a text fragment in an audio transcript or in chat messages, in either the customer's or the agent's channel. The search returns exact matches only; found fragments are highlighted in yellow.
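The search behavior described above, exact substring matching within a single channel, can be sketched as follows. The message structure and field names here are illustrative assumptions, not the SpeechSense API:

```python
# Illustrative sketch of exact-match search within one channel.
# The message format (role, text) is an assumption for this example only.

def search_channel(messages, query, channel):
    """Return messages from the given channel that contain `query` verbatim."""
    return [
        m for m in messages
        if m["role"] == channel and query in m["text"]  # exact match only
    ]

messages = [
    {"role": "agent", "text": "Hello, how can I help you?"},
    {"role": "customer", "text": "My internet is down."},
    {"role": "agent", "text": "Let me check your internet connection."},
]

# Search only the agent's channel for the exact fragment "internet".
hits = search_channel(messages, "internet", "agent")
```

Because matching is exact, a query like "inter net" would return nothing here, even though the words appear in the text.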
The text is automatically tagged with agent and customer tags. These indicate things like whether the agent greeted the customer, whether the customer was in a good mood, etc.
YandexGPT analysis
Warning
Neuroreports will be discontinued starting February 24, 2025. Use semantic tags instead.
On the dialog page, the YandexGPT analysis tab shows an autogenerated summary of the dialog based on its semantic analysis. The summary has the following sections:
- Analysis: Answers to questions that help evaluate the agent's performance and the customer's behavior during the conversation.
- Summary: The reasons for the conversation and its outcomes. This section also includes information on the evaluation criteria, e.g., the participants' emotions or objections during the conversation.
- Reasons: Why this dialog took place. Examples of reasons:
- Incoming contact: Customer has a service down.
- Outgoing contact: Agent is advertising a service subscription.
- Subject: What the customer and agent discussed.
- Outcomes: What the conversation led to. Outcomes are described for each of the listed reasons. Examples:
- Incoming contact: Agent helped to resolve the customer’s issue.
- Outgoing contact: Customer purchased the subscription.
To get more accurate analysis results in the future, rate them on the YandexGPT analysis tab for each dialog: YandexGPT learns from your feedback.
When generating a report, you can use a semantic attribute together with a search query. SpeechSense will analyze the dialog against the specified conditions. There are two ways you can use semantic attributes:
- As filters for dialogs. For example, you can create a report only for dialogs on a given topic.
- As an evaluation parameter (only for Evaluation form reports). For example, you can view dialogs with a certain outcome as a proportion of all the dialogs included in the report.
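The two uses of semantic attributes above can be sketched in code. Everything here, the dialog structure and attribute names, is an illustrative assumption rather than the SpeechSense data model:

```python
# Illustrative sketch: a semantic attribute as a dialog filter and as an
# evaluation parameter. Field names are assumptions for this example only.

dialogs = [
    {"topic": "billing", "outcome_resolved": True},
    {"topic": "billing", "outcome_resolved": False},
    {"topic": "delivery", "outcome_resolved": True},
    {"topic": "billing", "outcome_resolved": True},
]

# As a filter: build the report only from dialogs on a given topic.
billing = [d for d in dialogs if d["topic"] == "billing"]

# As an evaluation parameter: the share of filtered dialogs with a
# certain outcome, relative to all dialogs included in the report.
share = sum(d["outcome_resolved"] for d in billing) / len(billing)
```

With the sample data, two of the three billing dialogs were resolved, so the evaluation parameter comes out to roughly 0.67.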
For information on setting up semantic attributes, see this guide.
Dialog filtering
Filters define the conditions for searching through dialogs.
The following filter types are available:
- Agent: Agent data.
- Customer: Customer data.
- Bot (only for chats): Bot data.
- Speech statistics (only for audio): Agent and customer speech quality criteria, e.g., speech rate, mutual interruptions, etc.
- General metadata: Data about the conversation audio or text chat.
- Customer tags and Agent tags: Classifiers applied to conversation audio recognition results or text messages. To learn more about tags, see Concepts.
- YandexGPT analysis: Agent’s performance criteria and customer’s behavioral characteristics during the dialog, such as whether the agent was polite, whether the customer acted in a rude manner, etc.
For each filter, you can specify one or more filtering conditions. These can be of four types:
- Date: Select a date range from the calendar.
- Text: Enter a line of text. The search will only return exact matches.
- Number: Specify a range of numbers. You can specify either both range boundaries or just one of them. To find a particular value, specify it as both the top and bottom boundary. The boundary values are included in the filtering range.
- Boolean: Yes or No.
You can use multiple filters at the same time. They are combined with a logical AND to find the dialogs that satisfy all the specified conditions.
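The four condition types and the AND combination can be sketched as follows. The dialog fields and condition encodings are hypothetical, chosen only to illustrate the filtering logic:

```python
from datetime import date

# Illustrative sketch of combining filter conditions with a logical AND.
# Field names and condition encodings are assumptions, not the SpeechSense API.

def matches(dialog, conditions):
    """A dialog matches only if every condition holds (logical AND)."""
    return all(cond(dialog) for cond in conditions)

conditions = [
    # Date: a calendar range.
    lambda d: date(2024, 1, 1) <= d["date"] <= date(2024, 12, 31),
    # Text: exact matches only.
    lambda d: "refund" in d["transcript"],
    # Number: both boundaries are included; equal boundaries select one value.
    lambda d: 60 <= d["duration_sec"] <= 600,
    # Boolean: Yes or No.
    lambda d: d["agent_polite"] is True,
]

dialog = {
    "date": date(2024, 3, 5),
    "transcript": "The customer asked for a refund.",
    "duration_sec": 240,
    "agent_polite": True,
}

ok = matches(dialog, conditions)
```

Note that a dialog failing any single condition is excluded, which is exactly the AND semantics described above.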
Related dialogs
In some CRM systems, chats may be grouped by task. For example, you can group together all chats with a customer who has contacted support multiple times with the same request. When uploading data into SpeechSense, you can specify the additional ticket_id parameter to group such chats into related dialogs.
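The grouping effect of the ticket_id parameter can be sketched as a simple group-by. The chat structure below is an assumption made for illustration; only the ticket_id name comes from the text above:

```python
from collections import defaultdict

# Illustrative sketch: grouping uploaded chats into related dialogs by the
# optional ticket_id parameter. The chat structure is an assumption.

chats = [
    {"chat_id": 1, "ticket_id": "T-100"},
    {"chat_id": 2, "ticket_id": "T-100"},  # repeated contact, same request
    {"chat_id": 3, "ticket_id": "T-200"},
]

related = defaultdict(list)
for chat in chats:
    related[chat["ticket_id"]].append(chat["chat_id"])
```

Chats sharing a ticket_id end up in one group, which is how repeated contacts about the same request become a single set of related dialogs.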
In each of the related chats, you will see the following:
- Chat metadata at the top of the page.
- Contents of all related chats on the Dialog tab. The tab shows a tag hierarchy for each individual chat. You can use text search within a single chat.
- Chat summaries autogenerated by YandexGPT API based on semantic analysis, on the YandexGPT analysis tab.