Fine-tuning an embedding model
Model tuning based on the LoRA method is at the Preview stage.
This example shows how to fine-tune an embedding model based on the LoRA method in Foundation Models. Links to other examples are available in the See also section.
Getting started
To use the examples:
You can start working from the management console right away.
- Create a service account and assign the ai.editor role to it.
- Get the service account API key and save it.
The following examples use API key authentication. Yandex Cloud ML SDK also supports IAM token and OAuth token authentication. For more information, see Authentication in Yandex Cloud ML SDK.
- Use the pip package manager to install the ML SDK library:

  ```bash
  pip install yandex-cloud-ml-sdk
  ```
- Get API authentication credentials as described in Authentication with the Yandex Foundation Models API.
- To use the examples, install cURL.
- Install gRPCurl.
- (Optional) Install the jq JSON stream processor.
- Get an IAM token used for authentication in the API.
Note
The IAM token has a short lifetime: no more than 12 hours.
Prepare data
- Prepare data in the required format. To fine-tune an embedding model, use datasets of TextEmbeddingPairParams pairs or TextEmbeddingTripletParams triplets.
- Create a dataset using any method of your choice; one SDK-based option is sketched after this list. In the management console, you can also create a dataset later when creating the tuning.
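If you create the dataset from code, the ML SDK can upload a local JSON Lines file as a dataset draft. The sketch below is a minimal example under a few assumptions: it relies on the draft_from_path and upload_deferred dataset methods of yandex-cloud-ml-sdk, and the task type and file path are placeholders you need to replace with the embedding task type and record format described in the dataset documentation.

```python
#!/usr/bin/env python3

from yandex_cloud_ml_sdk import YCloudML


def main():
    sdk = YCloudML(
        folder_id="<folder_ID>",
        auth="<API_key>",
    )

    # Draft a dataset from a local JSON Lines file with embedding pairs or triplets.
    # <embeddings_task_type> is a placeholder: use the dataset task type that matches
    # your pair or triplet data (see the dataset documentation for the exact value).
    dataset_draft = sdk.datasets.draft_from_path(
        task_type="<embeddings_task_type>",
        path="<path_to_jsonlines_file>",
        upload_format="jsonlines",
        name="embeddings-tuning-dataset",
    )

    # Upload the file and wait until the dataset is validated and becomes READY.
    operation = dataset_draft.upload_deferred()
    dataset = operation.wait()
    print(f"Created dataset {dataset.id}")


if __name__ == "__main__":
    main()
```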
Start tuning
- In the management console, select the folder for which your account has the ai.playground.user, ai.datasets.user, and ai.models.editor roles or higher.
- From the list of services, select Foundation Models.
- In the left-hand panel, click Fine-tuning models.
- Click Fine-tune model.
- Enter a name and description for the dataset. The naming requirements are as follows:
  - It must be from 2 to 63 characters long.
  - It may contain lowercase Latin letters, numbers, and hyphens.
  - It must start with a letter and cannot end with a hyphen.
- Optionally, add or delete tuning labels. You can use them to group resources logically.
- In the Task field, select Embedding.
- Select the Embeddings type that matches the prepared dataset.
- In the Model field, select the model you need.
- In the Dataset field, click Add.
- In the window that opens, go to the Select from existing tab and select the dataset you created earlier.
- Optionally, click Advanced settings to configure additional fine-tuning parameters.
- Click Start fine-tuning.
- Create a file named start-tuning.py and add the following code to it:

  ```python
  #!/usr/bin/env python3

  from __future__ import annotations

  import uuid

  from yandex_cloud_ml_sdk import YCloudML


  def main():
      sdk = YCloudML(
          folder_id="<folder_ID>",
          auth="<API_key>",
      )

      # Viewing the list of valid datasets
      for dataset in sdk.datasets.list(status="READY", name_pattern="completions"):
          print(f"List of existing datasets {dataset=}")

      # Setting the tuning dataset and the base model
      train_dataset = sdk.datasets.get("<dataset_ID>")
      base_model = sdk.models.text_embeddings("yandexgpt-lite")

      # Tuning type that matches the prepared dataset: pairs or triplets
      tune_type = "<tuning_type>"

      # Starting the tuning
      # Tuning can last up to several hours
      tuning_task = base_model.tune_deferred(
          train_dataset,
          name=str(uuid.uuid4()),
          embeddings_tune_type=tune_type,
      )
      tuned_model = tuning_task.wait()
      print(f"Resulting {tuned_model}")


  if __name__ == "__main__":
      main()
  ```
  Where:

  - <folder_ID>: ID of the folder the service account was created in.
  - <API_key>: Service account API key you got earlier, required for authentication in the API.

    The following examples use API key authentication. Yandex Cloud ML SDK also supports IAM token and OAuth token authentication. For more information, see Authentication in Yandex Cloud ML SDK.

  - <dataset_ID>: ID of the dataset for fine-tuning.
  - <tuning_type>: Embedding tuning type matching the prepared dataset of pairs or triplets; see the SDK reference for the supported values.
- Run the created file:

  ```bash
  python3 start-tuning.py
  ```

  Model tuning may take up to one day depending on the size of the dataset and the system load.

  Use the fine-tuned model's URI you got (the uri field value) when accessing the model.
- Fine-tuning metrics are available in TensorBoard format. You can open the downloaded file, for example, in a Yandex DataSphere project:

  ```python
  # get_metrics_url() returns a link to the metrics file in TensorBoard format;
  # download_tensorboard() stands for your own helper that saves the file.
  metrics_url = tuned_model.get_metrics_url()
  download_tensorboard(metrics_url)
  ```

  One way to save the file the link points to is sketched below.
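As an illustration only, here is a minimal way to save the metrics file, continuing from the tuned_model object in the script above. It assumes the link returned by get_metrics_url() is a direct download URL and uses an arbitrary local file name.

```python
import urllib.request

# Assumption: metrics_url is a direct download link to the TensorBoard metrics file.
metrics_url = tuned_model.get_metrics_url()
local_path, _ = urllib.request.urlretrieve(metrics_url, "tuning_metrics.tfevents")
print(f"Saved TensorBoard metrics to {local_path}")
```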
- Start tuning.

  With a pair dataset:

  ```bash
  grpcurl \
    -H "Authorization: Bearer <IAM_token>" \
    -d @ \
    llm.api.cloud.yandex.net:443 yandex.cloud.ai.tuning.v1.TuningService/Tune <<EOM
  {
    "base_model_uri": "emb://<folder_ID>/yandexgpt-lite/latest",
    "train_datasets": [{"dataset_id": "<dataset_ID>", "weight": 1.0}],
    "name": "train-embeddings",
    "text_embedding_pair_params": {}
  }
  EOM
  ```

  With a triplet dataset:

  ```bash
  grpcurl \
    -H "Authorization: Bearer <IAM_token>" \
    -d @ \
    llm.api.cloud.yandex.net:443 yandex.cloud.ai.tuning.v1.TuningService/Tune <<EOM
  {
    "base_model_uri": "emb://<folder_ID>/yandexgpt-lite/latest",
    "train_datasets": [{"dataset_id": "<dataset_ID>", "weight": 1.0}],
    "name": "train-embeddings",
    "text_embedding_triplet_params": {}
  }
  EOM
  ```

  Where:

  - <IAM_token>: IAM token of the service account you got before you started.
  - <folder_ID>: ID of the folder you are fine-tuning the model in.
  - <dataset_ID>: Dataset ID you saved in the previous step.

  Result:

  ```json
  {
    "id": "ftnlljf53kil********",
    "createdAt": "2025-04-20T11:17:33Z",
    "modifiedAt": "2025-04-20T11:17:33Z",
    "metadata": {
      "@type": "type.googleapis.com/yandex.cloud.ai.tuning.v1.TuningMetadata"
    }
  }
  ```

  You will get the Operation object in response. Save the operation id you get in the response.
- Model tuning may take up to one day depending on the dataset size and the system load. To check whether the fine-tuning is complete, request the operation status (a Python polling sketch is provided after these steps):

  ```bash
  grpcurl \
    -H "Authorization: Bearer <IAM_token>" \
    -d '{"operation_id": "<operation_ID>"}' \
    llm.api.cloud.yandex.net:443 yandex.cloud.operation.OperationService/Get
  ```

  Where:

  - <IAM_token>: IAM token of the service account you got before you started.
  - <operation_ID>: Model fine-tuning operation ID you got in the previous step.

  If the fine-tuning process is over, the Operation object will contain the tuned model's URI in the targetModelUri field:

  ```json
  {
    "id": "ftnlljf53kil********",
    "createdAt": "2025-04-20T11:17:33Z",
    "modifiedAt": "2025-04-20T11:25:40Z",
    "done": true,
    "metadata": {
      "@type": "type.googleapis.com/yandex.cloud.ai.tuning.v1.TuningMetadata",
      "status": "COMPLETED",
      "tuningTaskId": "ftnlljf53kil********"
    },
    "response": {
      "@type": "type.googleapis.com/yandex.cloud.ai.tuning.v1.TuningResponse",
      "status": "COMPLETED",
      "targetModelUri": "emb://b1gt6g8ht345********/yandexgpt-lite/latest@tamr2nc6pev5e********",
      "tuningTaskId": "ftnlljf53kil********"
    }
  }
  ```

  Use the fine-tuned model's URI you got (the targetModelUri field value) when accessing the model.
- Fine-tuning metrics are available in TensorBoard format. Get the link to download the file:

  ```bash
  grpcurl \
    -H "Authorization: Bearer <IAM_token>" \
    -d '{"task_id": "<job_ID>"}' \
    llm.api.cloud.yandex.net:443 yandex.cloud.ai.tuning.v1.TuningService/GetMetricsUrl
  ```

  You can open the downloaded file, for example, in a Yandex DataSphere project.
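If you prefer to check the tuning status from code rather than rerunning the grpcurl request above, the following sketch is one possible approach. It assumes the public Operation service REST endpoint at operation.api.cloud.yandex.net and an IAM token exported in an IAM_TOKEN environment variable; adjust both to your environment.

```python
#!/usr/bin/env python3

import json
import os
import time
import urllib.request

OPERATION_ID = "<operation_ID>"
# Assumption: the Operation service REST endpoint for getting an operation by ID.
OPERATION_URL = f"https://operation.api.cloud.yandex.net/operations/{OPERATION_ID}"


def get_operation(iam_token: str) -> dict:
    request = urllib.request.Request(
        OPERATION_URL,
        headers={"Authorization": f"Bearer {iam_token}"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)


def main():
    # Assumption: the IAM token is stored in the IAM_TOKEN environment variable.
    # An IAM token lives no more than 12 hours, while tuning may take up to a day,
    # so refresh the token if you poll for a long time.
    iam_token = os.environ["IAM_TOKEN"]
    while True:
        operation = get_operation(iam_token)
        if operation.get("done"):
            # When tuning has finished, the tuned model URI is in response.targetModelUri.
            print(operation.get("response", {}).get("targetModelUri"))
            break
        time.sleep(60)


if __name__ == "__main__":
    main()
```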
Accessing a fine-tuned model
Once the model is fine-tuned, save its URI. It consists of the base model URI followed by @ and a tuning suffix, for example, emb://b1gt6g8ht345********/yandexgpt-lite/latest@tamr2nc6pev5e********. Use it as a custom embedding model where needed: for example, you can specify it as model_uri when building a search index, or pass it to the ML SDK as sketched below.
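As a hedged illustration of the last point, the tuned model URI can be passed to the ML SDK in place of a base model name. The sketch below assumes that sdk.models.text_embeddings() accepts a full emb:// URI and that run() returns the embedding as a sequence of floats.

```python
#!/usr/bin/env python3

from yandex_cloud_ml_sdk import YCloudML


def main():
    sdk = YCloudML(
        folder_id="<folder_ID>",
        auth="<API_key>",
    )

    # Address the fine-tuned model by its full URI: base model URI plus tuning suffix.
    model = sdk.models.text_embeddings(
        "emb://<folder_ID>/yandexgpt-lite/latest@<tuning_suffix>"
    )

    # Compute the embedding vector for a query string.
    embedding = model.run("test query")
    print(len(embedding))


if __name__ == "__main__":
    main()
```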
See also
- Model tuning
- Fine-tuning a text generation model
- Fine-tuning a text classification model
- For more SDK examples, see our GitHub repository.