Speech recognition (STT)
Incorrect stress and pronunciation
Submit a request and attach examples so that the developers can make fixes in upcoming releases of the speech synthesis model.
Poor speech recognition quality at 8kHz
If the issue is systematic (affects tens of percent of all speech recognition requests), submit a request and attach examples for analysis. The more examples you send, the more likely the developers are to find the bug.
Feedback form on speech recognition quality
If you have any issues, please contact support.
Two channels were recognized as one / How to recognize each channel separately
You can recognize multi-channel audio files only using asynchronous recognition.
Check the format of your recording:
- For LPCM, set the config.specification.audioChannelCount parameter to 2.
- Do not specify this parameter for MP3 and OggOpus, since the number of channels is already stated in the file. The file will be automatically split into the appropriate number of recordings.
In the response, the recognized text of each channel is marked with the channelTag parameter.
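For example, here is a minimal sketch of an asynchronous recognition request for a two-channel LPCM recording sent to the API v2 REST endpoint. The IAM token, bucket, and file names are placeholders:

```python
import requests

IAM_TOKEN = "<IAM token>"  # placeholder

body = {
    "config": {
        "specification": {
            "languageCode": "ru-RU",
            "audioEncoding": "LINEAR16_PCM",
            "sampleRateHertz": 8000,
            "audioChannelCount": 2,  # required for LPCM; omit for MP3 and OggOpus
        }
    },
    # Link to the audio file uploaded to Object Storage (placeholder)
    "audio": {"uri": "https://storage.yandexcloud.net/<bucket>/<file>"},
}

resp = requests.post(
    "https://transcribe.api.cloud.yandex.net/speech/stt/v2/longRunningRecognize",
    headers={"Authorization": f"Bearer {IAM_TOKEN}"},
    json=body,
)
resp.raise_for_status()
print(resp.json()["id"])  # operation ID to poll for the recognition result
```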
Can I recognize two or more voices and split the result by speaker?
You can recognize multi-channel audio files only using asynchronous recognition.
Recognition does not split the text by voice; however, you can record each voice on a separate channel, and the recognized text in the response will be separated with the channelTag parameter.
You can specify the number of channels in a request using the config.specification.audioChannelCount parameter.
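Once the asynchronous operation completes, the per-channel text can be collected from the operation result. A minimal sketch (the operation ID is whatever longRunningRecognize returned; the snippet assumes the operation has already finished):

```python
from collections import defaultdict

import requests

IAM_TOKEN = "<IAM token>"        # placeholder
OPERATION_ID = "<operation ID>"  # returned by longRunningRecognize

resp = requests.get(
    f"https://operation.api.cloud.yandex.net/operations/{OPERATION_ID}",
    headers={"Authorization": f"Bearer {IAM_TOKEN}"},
)
resp.raise_for_status()

# Each recognized chunk carries a channelTag; group the text by channel.
by_channel = defaultdict(list)
for chunk in resp.json()["response"]["chunks"]:
    by_channel[chunk["channelTag"]].append(chunk["alternatives"][0]["text"])

for tag, phrases in sorted(by_channel.items()):
    print(f"Channel {tag}: {' '.join(phrases)}")
```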
Incomplete audio recognition
If you recognize streaming audio, try using different API versions: API v1 or API v3.
To recognize an audio file, try different models.
The file doesn't exceed the limit, but an error occurs during recognition
If the file is multi-channel, take into account the total recording time of all channels. For the full list of limitations, see Quotas and limits in SpeechKit.
Internal Server Error
Make sure the format specified in the request matches the actual file format. If the error persists, send us examples of the audio files that cannot be recognized.
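As a quick local check (this script is illustrative, not part of the API), you can inspect the file header to see whether the container matches the format declared in the request. Note that raw LPCM has no header to inspect:

```python
def sniff_format(path: str) -> str:
    """Report the audio container based on the file's magic bytes."""
    with open(path, "rb") as f:
        head = f.read(12)
    if head.startswith(b"OggS"):
        return "OggOpus (Ogg container)"
    if head.startswith(b"ID3") or head[:2] in (b"\xff\xfb", b"\xff\xf3", b"\xff\xf2"):
        return "MP3"
    if head.startswith(b"RIFF") and head[8:12] == b"WAVE":
        return "WAV (LPCM with a RIFF header)"
    return "unknown (possibly raw headerless LPCM)"

print(sniff_format("audio.bin"))  # the file name is a placeholder
```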
When is a response sent during recognition?
In synchronous and asynchronous recognition, the response is sent once, after the request has been processed.
In streaming recognition mode, you can configure the server behavior. By default, the server returns a response only after the received utterance is fully recognized. You can use the partialResults parameter to set up recognition so that the server also returns intermediate recognition results.
Intermediate results allow you to quickly respond to the recognized speech without waiting for the end of the utterance.
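For example, here is a sketch of streaming recognition over gRPC with intermediate results enabled. It assumes the Python stubs generated from the public API v2 protos (shipped in the yandexcloud package); the file name and chunk size are illustrative:

```python
import grpc

from yandex.cloud.ai.stt.v2 import stt_service_pb2, stt_service_pb2_grpc

IAM_TOKEN = "<IAM token>"  # placeholder

def requests_gen():
    # First message: the recognition settings, with intermediate results enabled.
    spec = stt_service_pb2.RecognitionSpec(
        language_code="ru-RU",
        audio_encoding=stt_service_pb2.RecognitionSpec.LINEAR16_PCM,
        sample_rate_hertz=8000,
        partial_results=True,  # the server will also return intermediate results
    )
    yield stt_service_pb2.StreamingRecognitionRequest(
        config=stt_service_pb2.RecognitionConfig(specification=spec)
    )
    # Subsequent messages: the audio itself, sent in small chunks.
    with open("speech.pcm", "rb") as f:
        for data in iter(lambda: f.read(4096), b""):
            yield stt_service_pb2.StreamingRecognitionRequest(audio_content=data)

channel = grpc.secure_channel("stt.api.cloud.yandex.net:443", grpc.ssl_channel_credentials())
stub = stt_service_pb2_grpc.SttServiceStub(channel)
responses = stub.StreamingRecognize(
    requests_gen(), metadata=(("authorization", f"Bearer {IAM_TOKEN}"),)
)
for r in responses:
    # final=False marks an intermediate hypothesis, final=True a finished utterance.
    print(r.chunks[0].final, r.chunks[0].alternatives[0].text)
```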
Where can I find an example of audio file recognition?
For SpeechKit usage examples, see Tutorials. To recognize pre-recorded audio files, use asynchronous recognition.
Where can I find an example of microphone speech recognition?
See the example of streaming recognition of speech recorded from a microphone.
Can I use POST for streaming recognition?
Streaming recognition uses gRPC and is not supported by the REST API, so you cannot use the POST method.
A streaming recognition session is interrupted or terminated
When using the API v2 for streaming recognition, the service expects audio data; if no data arrives within 5 seconds, the session is terminated. You cannot change this timeout in the API v2.
Streaming recognition runs in real time. You can send "silence" for recognition so that the service does not terminate the connection.
We recommend using the API v3 for streaming recognition. The API v3 features a special message type for sending "silence", so you will not have to simulate it yourself in your audio recording.
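Here is a sketch of an API v3 streaming session that reports silence explicitly. It assumes the Python stubs generated from the public API v3 protos (shipped in the yandexcloud package); the audio format, file name, and durations are illustrative:

```python
import grpc

from yandex.cloud.ai.stt.v3 import stt_pb2, stt_service_pb2_grpc

IAM_TOKEN = "<IAM token>"  # placeholder

def requests_gen():
    # First message: session options (raw 8 kHz mono LPCM here is an assumption).
    yield stt_pb2.StreamingRequest(
        session_options=stt_pb2.StreamingOptions(
            recognition_model=stt_pb2.RecognitionModelOptions(
                audio_format=stt_pb2.AudioFormatOptions(
                    raw_audio=stt_pb2.RawAudio(
                        audio_encoding=stt_pb2.RawAudio.LINEAR16_PCM,
                        sample_rate_hertz=8000,
                        audio_channel_count=1,
                    )
                )
            )
        )
    )
    # Audio is streamed in chunks...
    with open("speech.pcm", "rb") as f:
        for data in iter(lambda: f.read(4096), b""):
            yield stt_pb2.StreamingRequest(chunk=stt_pb2.AudioChunk(data=data))
    # ...and when there is no audio to send, report silence explicitly
    # instead of synthesizing zero-filled samples.
    yield stt_pb2.StreamingRequest(silence_chunk=stt_pb2.SilenceChunk(duration_ms=2000))

channel = grpc.secure_channel("stt.api.cloud.yandex.net:443", grpc.ssl_channel_credentials())
stub = stt_service_pb2_grpc.RecognizerStub(channel)
responses = stub.RecognizeStreaming(
    requests_gen(), metadata=(("authorization", f"Bearer {IAM_TOKEN}"),)
)
for response in responses:
    print(response)
```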
How does the service figure out the end of an utterance and the duration of a recognition session?
The end of an utterance is determined automatically by the "silence" after the utterance. For more information about end-of-utterance detection, see Detecting the end of utterance.
The maximum session duration for streaming recognition is 5 minutes.
What should I do if SpeechKit stops listening before a conversation is over or, conversely, takes too long to detect that it has ended?
Interruptions or delays during streaming recognition may occur due to detecting the end of utterance (EOU). For EOU setup recommendations, see Detecting the end of utterance.
Error: OutOfRange desc = Exceeded maximum allowed stream duration
This error means that the maximum allowed duration of a recognition session has been exceeded. In this case, you need to reopen the session.
For streaming recognition, the maximum session duration is 5 minutes. This is a technical limitation of the Yandex Cloud architecture and cannot be changed.
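A sketch of client-side reconnect logic for this case (the open_stream callable is hypothetical; it should start a new streaming session and resume sending audio from where the previous session stopped):

```python
import grpc

def recognize_with_reconnect(open_stream):
    """Yield responses, transparently reopening the session on OUT_OF_RANGE."""
    while True:
        try:
            for response in open_stream():  # starts a new streaming session
                yield response
            return  # the stream ended normally
        except grpc.RpcError as err:
            # The 5-minute session limit surfaces as an OUT_OF_RANGE status.
            if err.code() == grpc.StatusCode.OUT_OF_RANGE:
                continue  # reopen the session and keep going
            raise
```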
What goes into the usage cost?
For usage cost calculation examples, pricing rules, and effective prices, see SpeechKit pricing policy.