Getting started with YandexGPT API
In this section, you will learn how to use the YandexGPT neural network to generate text in synchronous mode without adding context. For other examples, see Guides on how to use YandexGPT API.
For information about YandexGPT API pricing, see Yandex Foundation Models pricing policy.
Getting started
To get started in Yandex Cloud:
- Log in to the management console. If not signed up yet, navigate to the management console and follow the instructions.
- In Yandex Cloud Billing, make sure you have a billing account linked and that its status is `ACTIVE` or `TRIAL_ACTIVE`. If you do not have a billing account yet, create one.
- If you do not have a folder yet, create one.
You can start working from the management console right away.
To run sample requests using the API, install cURL.
To work with the YandexGPT API, you need to get authenticated using your account:
- Get an IAM token: see the guide for a Yandex account or federated account.
- Get the ID of the folder for which your account has the `ai.languageModels.user` role or higher.
- When accessing the YandexGPT API, provide the received parameters in each request:
  - Specify the IAM token in the `Authorization` header.
  - Specify the folder ID in the `x-folder-id` header.

  ```
  Authorization: Bearer <IAM_token>
  x-folder-id: <folder_ID>
  ```
For information about other API authentication methods, see Authentication with the Yandex Foundation Models API.
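As a quick sketch, the two headers above can be assembled in Python like this (the `auth_headers` helper name is ours, not part of the API):

```python
def auth_headers(iam_token: str, folder_id: str) -> dict:
    """Build the two headers every YandexGPT API request needs."""
    return {
        "Authorization": f"Bearer {iam_token}",
        "x-folder-id": folder_id,
    }

# Placeholders stand in for the real IAM token and folder ID.
headers = auth_headers("<IAM_token>", "<folder_ID>")
```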
Generate the text
Note
To improve the quality of the responses you get, YandexGPT API logs user prompts. Do not use sensitive information and personal data in your prompts.
- In the management console, select the folder for which your account has the `ai.languageModels.user` role or higher.
- In the list of services, select Foundation Models.
- In the left-hand panel, select YandexGPT Prompt mode.
- In the Temperature field, enter a value between `0` and `1` to set the variability of the model's responses. The higher the value, the less deterministic the result.
- Under Instructions, describe the request context.
- Under Request, describe your request to the model.
- Click View answer. The answer will be shown in the right-hand part of the screen.
- Create a file with the request body, e.g., `prompt.json`:

  ```json
  {
    "modelUri": "gpt://<folder_ID>/yandexgpt-lite",
    "completionOptions": {
      "stream": false,
      "temperature": 0.6,
      "maxTokens": "2000"
    },
    "messages": [
      {
        "role": "system",
        "text": "Find and correct errors in the text."
      },
      {
        "role": "user",
        "text": "Laminate flooring is sutiable for instalation in the kitchen or in a child's room. It withsatnds moisturre and mechanical dammage thanks to a proctive layer of melamine films 0.2 mm thick and a wax-treated interlocking systme."
      }
    ]
  }
  ```
Where:
- `modelUri`: ID of the model to generate the response. The parameter contains the ID of a Yandex Cloud folder or the ID of a model fine-tuned in DataSphere.
- `completionOptions`: Request configuration options:
  - `stream`: Enables streaming of partially generated text. It may take either the `true` or `false` value.
  - `temperature`: With a higher temperature, you get a more creative and randomized response from the model. This parameter accepts values between `0` and `1`, inclusive. The default value is `0.3`.
  - `maxTokens`: Sets a limit on the model's output in tokens. The maximum number of tokens per generation depends on the model. For more information, see Quotas and limits in Yandex Foundation Models.
- `messages`: List of messages that set the context for the model:
  - `role`: Message sender's role:
    - `user`: Used to send user messages to the model.
    - `system`: Used to set request context and define the model's behavior.
    - `assistant`: Used for responses generated by the model. In chat mode, the model's responses tagged with the `assistant` role are included in the message to save the conversation context. Do not send user messages with this role.
  - `text`: Text content of the message.
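To illustrate how these roles combine in chat mode, here is a minimal Python sketch (the helper names `start_chat` and `add_turn` are ours, not part of the API) that preserves the conversation context by recording each `assistant` reply before the next `user` turn:

```python
def start_chat(system_text: str) -> list[dict]:
    # The system message sets the context and the model's behavior.
    return [{"role": "system", "text": system_text}]

def add_turn(messages: list[dict], user_text: str, assistant_text: str) -> None:
    # Append the user message, then the model's reply, so the next
    # request carries the whole conversation as context.
    messages.append({"role": "user", "text": user_text})
    messages.append({"role": "assistant", "text": assistant_text})

chat = start_chat("Find and correct errors in the text.")
add_turn(chat, "Laminate flooring is sutiable...", "Laminate flooring is suitable...")
# The next user message is appended to this list before sending the request.
```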
Use the completion method to send a request to the neural network in the following command:
  ```bash
  export FOLDER_ID=<folder_ID>
  export IAM_TOKEN=<IAM_token>
  curl \
    --request POST \
    --header "Content-Type: application/json" \
    --header "Authorization: Bearer ${IAM_TOKEN}" \
    --header "x-folder-id: ${FOLDER_ID}" \
    --data "@prompt.json" \
    "https://llm.api.cloud.yandex.net/foundationModels/v1/completion"
  ```
Where:
- `FOLDER_ID`: ID of the folder for which your account has the `ai.languageModels.user` role or higher.
- `IAM_TOKEN`: IAM token you got before you started.
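The same request can be sketched in Python with only the standard library. This is a hedged equivalent of the curl call above (the helper names `build_request` and `send_completion` are ours; error handling is omitted):

```python
import json
import urllib.request

API_URL = "https://llm.api.cloud.yandex.net/foundationModels/v1/completion"

def build_request(folder_id: str, iam_token: str, body: dict) -> urllib.request.Request:
    """Assemble a completion request with the same headers as the curl call."""
    return urllib.request.Request(
        API_URL,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {iam_token}",
            "x-folder-id": folder_id,
        },
        method="POST",
    )

def send_completion(req: urllib.request.Request) -> dict:
    # Performs the actual network call; requires valid credentials.
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example usage (needs real credentials and prompt.json from the previous step):
# with open("prompt.json", encoding="utf-8") as f:
#     print(send_completion(build_request("<folder_ID>", "<IAM_token>", json.load(f))))
```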
In response, the service will return the generated text:
```json
{
  "result": {
    "alternatives": [
      {
        "message": {
          "role": "assistant",
          "text": "Laminate flooring is suitable for installation in the kitchen or in a child's room. It withstands moisture and mechanical damage thanks to a protective layer of melamine films 0.2 mm thick and a wax-treated interlocking system."
        },
        "status": "ALTERNATIVE_STATUS_TRUNCATED_FINAL"
      }
    ],
    "usage": {
      "inputTextTokens": "67",
      "completionTokens": "50",
      "totalTokens": "117"
    },
    "modelVersion": "06.12.2023"
  }
}
```
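A minimal sketch of pulling the generated text out of such a response (the key names are taken from the example above; the `extract_text` helper is ours):

```python
def extract_text(response: dict) -> str:
    # The generated text is in the first alternative's assistant message.
    return response["result"]["alternatives"][0]["message"]["text"]

# A trimmed-down copy of the sample response shown above.
sample = {
    "result": {
        "alternatives": [
            {
                "message": {"role": "assistant", "text": "Laminate flooring is suitable..."},
                "status": "ALTERNATIVE_STATUS_TRUNCATED_FINAL",
            }
        ],
        "usage": {"inputTextTokens": "67", "completionTokens": "50", "totalTokens": "117"},
        "modelVersion": "06.12.2023",
    }
}

print(extract_text(sample))  # -> Laminate flooring is suitable...
```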