Estimating prompt size in tokens
Neural networks work with text by representing words and sentences as tokens.
Foundation Models uses its own tokenizer for text processing. To calculate the number of tokens in a text or prompt for a YandexGPT model, use the Tokenize method of the text generation API or Yandex Cloud ML SDK.
The token count of the same text may vary from one model to the next.
Getting started
To use the examples:
- To run the Yandex Cloud ML SDK example:

  - Create a service account and assign the `ai.languageModels.user` role to it.
  - Get the service account API key and save it.

    The following examples use API key authentication. Yandex Cloud ML SDK also supports IAM token and OAuth token authentication. For more information, see Authentication in Yandex Cloud ML SDK.

  - Use the pip package manager to install the ML SDK library:

    ```bash
    pip install yandex-cloud-ml-sdk
    ```

- To run the cURL example:

  - Get API authentication credentials as described in Authentication with the Yandex Foundation Models API.
  - Install cURL.
Calculating prompt size
The examples below estimate the size, in tokens, of a prompt to a YandexGPT model.
- Create a file named `tokenize.py` and paste the following code into it:

  ```python
  #!/usr/bin/env python3

  from __future__ import annotations

  from yandex_cloud_ml_sdk import YCloudML

  messages = "Generative models are managed using prompts. A good prompt should contain the context of your request to the model (instruction) and the actual task the model should complete based on the provided context. The more specific your prompt, the more accurate will be the results returned by the model."


  def main():
      sdk = YCloudML(
          folder_id="<folder_ID>",
          auth="<API_key>",
      )

      model = sdk.models.completions("yandexgpt")
      result = model.tokenize(messages)

      for token in result:
          print(token)


  if __name__ == "__main__":
      main()
  ```
  Where:

  Note: As input data for a request, Yandex Cloud ML SDK can accept a string, a dictionary, an object of the `TextMessage` class, or an array containing any combination of these data types. For more information, see Yandex Cloud ML SDK usage. A short sketch after this example shows the dictionary form.

  - `messages`: Message text.
  - `<folder_ID>`: ID of the folder in which the service account was created.
  - `<API_key>`: Service account API key you got earlier, required for authentication in the API.

    The examples use API key authentication. Yandex Cloud ML SDK also supports IAM token and OAuth token authentication. For more information, see Authentication in Yandex Cloud ML SDK.

  - `model`: Model version value. For more information, see Accessing models.
- Run the created file:

  ```bash
  python3 tokenize.py
  ```
  The result is a list of tokens obtained by the tokenizer:

  ```json
  {"tokens": [{"id":"1","text":"\u003cs\u003e","special":true}, {"id":"6010","text":"▁Gener","special":false}, {"id":"1748","text":"ative","special":false}, {"id":"7789","text":"▁models","special":false}, {"id":"642","text":"▁are","special":false}, {"id":"15994","text":"▁managed","special":false}, {"id":"1772","text":"▁using","special":false}, {"id":"80536","text":"▁prompts","special":false}, {"id":"125820","text":".","special":false}, {"id":"379","text":"▁A","special":false}, {"id":"1967","text":"▁good","special":false}, {"id":"19099","text":"▁prompt","special":false}, {"id":"1696","text":"▁should","special":false}, {"id":"11195","text":"▁contain","special":false}, {"id":"292","text":"▁the","special":false}, {"id":"7210","text":"▁context","special":false}, {"id":"346","text":"▁of","special":false}, {"id":"736","text":"▁your","special":false}, {"id":"4104","text":"▁request","special":false}, {"id":"342","text":"▁to","special":false}, {"id":"292","text":"▁the","special":false}, {"id":"2718","text":"▁model","special":false}, {"id":"355","text":"▁(","special":false}, {"id":"105793","text":"instruction","special":false}, {"id":"125855","text":")","special":false}, {"id":"353","text":"▁and","special":false}, {"id":"292","text":"▁the","special":false}, {"id":"9944","text":"▁actual","special":false}, {"id":"7430","text":"▁task","special":false}, {"id":"292","text":"▁the","special":false}, {"id":"2718","text":"▁model","special":false}, {"id":"1696","text":"▁should","special":false}, {"id":"7052","text":"▁complete","special":false}, {"id":"4078","text":"▁based","special":false}, {"id":"447","text":"▁on","special":false}, {"id":"292","text":"▁the","special":false}, {"id":"6645","text":"▁provided","special":false}, {"id":"7210","text":"▁context","special":false}, {"id":"125820","text":".","special":false}, {"id":"671","text":"▁The","special":false}, {"id":"1002","text":"▁more","special":false}, {"id":"4864","text":"▁specific","special":false}, {"id":"736","text":"▁your","special":false}, {"id":"19099","text":"▁prompt","special":false}, {"id":"125827","text":",","special":false}, {"id":"292","text":"▁the","special":false}, {"id":"1002","text":"▁more","special":false}, {"id":"16452","text":"▁accurate","special":false}, {"id":"912","text":"▁will","special":false}, {"id":"460","text":"▁be","special":false}, {"id":"292","text":"▁the","special":false}, {"id":"4168","text":"▁results","special":false}, {"id":"13462","text":"▁returned","special":false}, {"id":"711","text":"▁by","special":false}, {"id":"292","text":"▁the","special":false}, {"id":"2718","text":"▁model","special":false}, {"id":"125820","text":".","special":false}], "modelVersion":"23.10.2024" }
  ```
- Create a file named `tbody.json` with the request parameters:

  ```json
  {
    "modelUri": "gpt://<folder_ID>/yandexgpt",
    "text": "Управление генеративными моделями осуществляется с помощью промтов. Эффективный промт должен содержать контекст запроса (инструкцию) для модели и непосредственно задание, которое модель должна выполнить, учитывая переданный контекст. Чем конкретнее составлен промт, тем более точными будут результаты работы модели."
  }
  ```
  Where `<folder_ID>` is the ID of the Yandex Cloud folder for which your account has the `ai.languageModels.user` role or higher.
- Send a request to the model:

  ```bash
  export IAM_TOKEN=<IAM_token>
  curl --request POST \
    --header "Authorization: Bearer ${IAM_TOKEN}" \
    --data "@tbody.json" \
    "https://llm.api.cloud.yandex.net/foundationModels/v1/tokenize"
  ```
  Where:

  - `<IAM_token>`: Value of the IAM token you got for your account.
  - `tbody.json`: JSON file with the request parameters.
  The result is a list of tokens obtained by the tokenizer:

  ```json
  { "tokens": [ { "id": "1", "text": "<s>", "special": true }, { "id": "19078", "text": "▁Управление", "special": false }, { "id": "10810", "text": "▁генера", "special": false }, { "id": "26991", "text": "тивными", "special": false }, { "id": "77514", "text": "▁моделями", "special": false }, { "id": "10578", "text": "▁осуществляется", "special": false }, { "id": "277", "text": "▁с", "special": false }, { "id": "4390", "text": "▁помощью", "special": false }, { "id": "68740", "text": "▁пром", "special": false }, { "id": "769", "text": "тов", "special": false }, { "id": "125820", "text": ".", "special": false }, { "id": "43429", "text": "▁Эффек", "special": false }, { "id": "7146", "text": "тивный", "special": false }, { "id": "68740", "text": "▁пром", "special": false }, { "id": "125810", "text": "т", "special": false }, { "id": "4923", "text": "▁должен", "special": false }, { "id": "29443", "text": "▁содержать", "special": false }, { "id": "24719", "text": "▁контек", "special": false }, { "id": "269", "text": "ст", "special": false }, { "id": "43640", "text": "▁запроса", "special": false }, { "id": "355", "text": "▁(", "special": false }, { "id": "98434", "text": "инструк", "special": false }, { "id": "1511", "text": "цию", "special": false }, { "id": "125855", "text": ")", "special": false }, { "id": "571", "text": "▁для", "special": false }, { "id": "6234", "text": "▁модели", "special": false }, { "id": "286", "text": "▁и", "special": false }, { "id": "15616", "text": "▁непосредственно", "special": false }, { "id": "19633", "text": "▁задание", "special": false }, { "id": "125827", "text": ",", "special": false }, { "id": "6050", "text": "▁которое", "special": false }, { "id": "7549", "text": "▁модель", "special": false }, { "id": "7160", "text": "▁должна", "special": false }, { "id": "18879", "text": "▁выполнить", "special": false }, { "id": "125827", "text": ",", "special": false }, { "id": "31323", "text": "▁учитывая", "special": false }, { "id": "818", "text": "▁пере", "special": false }, { "id": "56857", "text": "данный", "special": false }, { "id": "24719", "text": "▁контек", "special": false }, { "id": "269", "text": "ст", "special": false }, { "id": "125820", "text": ".", "special": false }, { "id": "10500", "text": "▁Чем", "special": false }, { "id": "8504", "text": "▁конкре", "special": false }, { "id": "93886", "text": "тнее", "special": false }, { "id": "73199", "text": "▁составлен", "special": false }, { "id": "68740", "text": "▁пром", "special": false }, { "id": "125810", "text": "т", "special": false }, { "id": "125827", "text": ",", "special": false }, { "id": "1819", "text": "▁тем", "special": false }, { "id": "1800", "text": "▁более", "special": false }, { "id": "470", "text": "▁то", "special": false }, { "id": "10969", "text": "чными", "special": false }, { "id": "3315", "text": "▁будут", "special": false }, { "id": "11306", "text": "▁результаты", "special": false }, { "id": "1630", "text": "▁работы", "special": false }, { "id": "6234", "text": "▁модели", "special": false }, { "id": "125820", "text": ".", "special": false } ], "modelVersion": "23.10.2024" }
  ```
See also
- Tokens
- Text generation overview
- Examples of working with ML SDK on GitHub