© 2025 Direct Cursus Technology L.L.C.


Tokens

Written by
Yandex Cloud
Updated at December 12, 2025
  • Example
    • Tokenizing text for YandexGPT Pro

Neural networks work with text by representing words and sentences as tokens: logical fragments, or frequently used character sequences, that commonly occur in a natural language. Tokens help neural networks detect patterns in text and process natural language.

Each model uses its own tokenizer for text processing, so the same text yields a different number of tokens for each model. When working with models through OpenAI-compatible APIs, each model response returns the number of tokens used in the usage field. To estimate how many tokens a text contains before sending it, use the selected model's tokenizer.
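For instance, token counts can be read from the usage field of an OpenAI-compatible chat completion response. A minimal sketch, assuming the standard OpenAI response shape (the response dictionary below is fabricated for illustration, not real model output):

```python
# Reading token usage from an OpenAI-compatible chat completion response.
# The sample response below is fabricated for illustration only.

def extract_usage(response: dict) -> tuple[int, int, int]:
    """Return (prompt_tokens, completion_tokens, total_tokens)."""
    usage = response["usage"]
    return (
        usage["prompt_tokens"],
        usage["completion_tokens"],
        usage["total_tokens"],
    )

sample_response = {
    "choices": [{"message": {"role": "assistant", "content": "Hello!"}}],
    "usage": {"prompt_tokens": 12, "completion_tokens": 3, "total_tokens": 15},
}

prompt_tokens, completion_tokens, total_tokens = extract_usage(sample_response)
print(prompt_tokens, completion_tokens, total_tokens)  # 12 3 15
```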

Yandex models use a tokenizer specifically tailored to Russian-language texts, which yields a higher average number of characters per token and lower text processing costs. You can estimate the token count of your text for Yandex models free of charge using the dedicated Tokenizer methods or the Yandex Cloud ML SDK.

To use the tokenizer in AI Studio, you need the ai.languageModels.user role or higher for the folder.

Example

Note

The examples below only serve to illustrate the concept of tokens and do not indicate the actual number of tokens you may spend in your tasks. We estimated the number of tokens in the texts with online tokenizers and then calculated the averages.

  • Text in Russian: Управление генеративными моделями осуществляется с помощью промптов. Эффективный промпт должен содержать контекст запроса (инструкцию) для модели и непосредственно задание, которое модель должна выполнить, учитывая переданный контекст. Чем конкретнее составлен промпт, тем более точными будут результаты работы модели.\n Кроме промпта на результаты генерации моделей будут влиять и другие параметры запроса. Используйте AI Playground, доступный в консоли управления, чтобы протестировать ваши запросы.
    Number of characters in the text: 501.

                                           YandexGPT Pro   Qwen3 235B   gpt-oss-120b
    Number of tokens in the text                      96          139            109
    Average number of characters per token           5.2          3.6            4.6
  • Text in English: Generative models are managed using prompts. A good prompt should contain the context of your request to the model (instruction) and the actual task the model should complete based on the provided context. The more specific your prompt, the more accurate will be the results returned by the model.\n Apart from the prompt, other request parameters will impact the model's output too. Use Foundation Models Playground available from the management console to test your requests.
    Number of characters in the text: 477.

                                           Alice AI LLM   Qwen3 235B   gpt-oss-120b
    Number of tokens in the text                     89           87             87
    Average number of characters per token         5.36         5.48           5.48
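The averages in the tables above are simply the character count divided by the token count, e.g. for the Russian text tokenized with YandexGPT Pro:

```python
# Average characters per token = number of characters / number of tokens.
# Figures taken from the Russian-text example above (YandexGPT Pro column).
num_characters = 501
num_tokens = 96
average = num_characters / num_tokens
print(round(average, 1))  # 5.2
```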

Tokenizing text for YandexGPT Pro

  1. Create a file named tbody.json with the request parameters:

    {
      "modelUri": "gpt://<folder_ID>/yandexgpt",
      "text": "Generative models are managed using prompts. A good prompt should contain the context of your request to the model (instruction) and the actual task the model should complete based on the provided context. The more specific your prompt, the more accurate will be the results returned by the model.\n Apart from the prompt, other request parameters will impact the model's output too. Use Foundation Models Playground available from the management console to test your requests."
    }
    

    Where <folder_ID> is the ID of the Yandex Cloud folder for which your account has the ai.languageModels.user role or higher.

  2. Send a request to the model:

    export IAM_TOKEN=<IAM_token>
    curl --request POST \
      --header "Authorization: Bearer ${IAM_TOKEN}" \
      --data "@tbody.json" \
      "https://llm.api.cloud.yandex.net/foundationModels/v1/tokenize"
    

    Where:

    • <IAM_token>: Value of the IAM token you got for your account.
    • tbody.json: JSON file with the request parameters.
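    The same request can also be assembled in Python. The sketch below only builds the URL, headers, and body matching the curl call above; the IAM token and folder ID values are placeholders, and actually sending the request (e.g. with the requests library) is left as a comment:

```python
# Build the pieces of the tokenize request shown above with curl.
# The IAM token and folder ID values here are placeholders.

TOKENIZE_URL = "https://llm.api.cloud.yandex.net/foundationModels/v1/tokenize"

def build_tokenize_request(iam_token: str, folder_id: str, text: str):
    """Return the (url, headers, body) triple for the tokenize endpoint."""
    headers = {"Authorization": f"Bearer {iam_token}"}
    body = {"modelUri": f"gpt://{folder_id}/yandexgpt", "text": text}
    return TOKENIZE_URL, headers, body

url, headers, body = build_tokenize_request("<IAM_token>", "<folder_ID>", "Hello")
# To send: requests.post(url, headers=headers, json=body).json()
print(body["modelUri"])  # gpt://<folder_ID>/yandexgpt
```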
Result
{"tokens":
    [{"id":"1","text":"\u003cs\u003e","special":true},
    {"id":"6010","text":"▁Gener","special":false},
    {"id":"1748","text":"ative","special":false},
    {"id":"7789","text":"▁models","special":false},
    {"id":"642","text":"▁are","special":false},
    {"id":"15994","text":"▁managed","special":false},
    {"id":"1772","text":"▁using","special":false},
    {"id":"80536","text":"▁prompts","special":false},
    {"id":"125820","text":".","special":false},
    {"id":"379","text":"▁A","special":false},
    {"id":"1967","text":"▁good","special":false},
    {"id":"19099","text":"▁prompt","special":false},
    {"id":"1696","text":"▁should","special":false},
    {"id":"11195","text":"▁contain","special":false},
    {"id":"292","text":"▁the","special":false},
    {"id":"7210","text":"▁context","special":false},
    {"id":"346","text":"▁of","special":false},
    {"id":"736","text":"▁your","special":false},
    {"id":"4104","text":"▁request","special":false},
    {"id":"342","text":"▁to","special":false},
    {"id":"292","text":"▁the","special":false},
    {"id":"2718","text":"▁model","special":false},
    {"id":"355","text":"▁(","special":false},
    {"id":"105793","text":"instruction","special":false},
    {"id":"125855","text":")","special":false},
    {"id":"353","text":"▁and","special":false},
    {"id":"292","text":"▁the","special":false},
    {"id":"9944","text":"▁actual","special":false},
    {"id":"7430","text":"▁task","special":false},
    {"id":"292","text":"▁the","special":false},
    {"id":"2718","text":"▁model","special":false},
    {"id":"1696","text":"▁should","special":false},
    {"id":"7052","text":"▁complete","special":false},
    {"id":"4078","text":"▁based","special":false},
    {"id":"447","text":"▁on","special":false},
    {"id":"292","text":"▁the","special":false},
    {"id":"6645","text":"▁provided","special":false},
    {"id":"7210","text":"▁context","special":false},
    {"id":"125820","text":".","special":false},
    {"id":"671","text":"▁The","special":false},
    {"id":"1002","text":"▁more","special":false},
    {"id":"4864","text":"▁specific","special":false},
    {"id":"736","text":"▁your","special":false},
    {"id":"19099","text":"▁prompt","special":false},
    {"id":"125827","text":",","special":false},
    {"id":"292","text":"▁the","special":false},
    {"id":"1002","text":"▁more","special":false},
    {"id":"16452","text":"▁accurate","special":false},
    {"id":"912","text":"▁will","special":false},
    {"id":"460","text":"▁be","special":false},
    {"id":"292","text":"▁the","special":false},
    {"id":"4168","text":"▁results","special":false},
    {"id":"13462","text":"▁returned","special":false},
    {"id":"711","text":"▁by","special":false},
    {"id":"292","text":"▁the","special":false},
    {"id":"2718","text":"▁model","special":false},
    {"id":"125820","text":".","special":false},
    {"id":"3","text":"[NL]","special":true},
    {"id":"29083","text":"▁Apart","special":false},
    {"id":"728","text":"▁from","special":false},
    {"id":"292","text":"▁the","special":false},
    {"id":"19099","text":"▁prompt","special":false},
    {"id":"125827","text":",","special":false},
    {"id":"1303","text":"▁other","special":false},
    {"id":"4104","text":"▁request","special":false},
    {"id":"9513","text":"▁parameters","special":false},
    {"id":"912","text":"▁will","special":false},
    {"id":"8209","text":"▁impact","special":false},
    {"id":"292","text":"▁the","special":false},
    {"id":"2718","text":"▁model","special":false},
    {"id":"125886","text":"'","special":false},
    {"id":"125811","text":"s","special":false},
    {"id":"5925","text":"▁output","special":false},
    {"id":"2778","text":"▁too","special":false},
    {"id":"125820","text":".","special":false},
    {"id":"7597","text":"▁Use","special":false},
    {"id":"12469","text":"▁Foundation","special":false},
    {"id":"27947","text":"▁Models","special":false},
    {"id":"118637","text":"▁Playground","special":false},
    {"id":"2871","text":"▁available","special":false},
    {"id":"728","text":"▁from","special":false},
    {"id":"292","text":"▁the","special":false},
    {"id":"7690","text":"▁management","special":false},
    {"id":"15302","text":"▁console","special":false},
    {"id":"342","text":"▁to","special":false},
    {"id":"2217","text":"▁test","special":false},
    {"id":"736","text":"▁your","special":false},
    {"id":"14379","text":"▁requests","special":false},
    {"id":"125820","text":".","special":false}],
"modelVersion":"23.10.2024"
}
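A response like the one above can then be post-processed to count tokens, for example separating out service tokens (those with "special": true). The snippet below uses a trimmed-down copy of the result purely for illustration:

```python
# Count tokens in a tokenize response, separating out service tokens.
# `response` is a trimmed-down copy of the result above, for illustration.

response = {
    "tokens": [
        {"id": "1", "text": "<s>", "special": True},
        {"id": "6010", "text": "\u2581Gener", "special": False},
        {"id": "1748", "text": "ative", "special": False},
    ],
    "modelVersion": "23.10.2024",
}

all_tokens = response["tokens"]
text_tokens = [t for t in all_tokens if not t["special"]]
print(len(all_tokens), len(text_tokens))  # 3 2
```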

© 2025 Direct Cursus Technology L.L.C.