© 2025 Direct Cursus Technology L.L.C.

In this article:

  • Supported variable types
  • Information about models as a resource
  • Use cases

Models

Written by
Yandex Cloud
Updated at May 15, 2025

While you work in Yandex DataSphere, the VM's memory stores the interpreter state along with computation and training results. You can save these results to a separate resource called a model.

In DataSphere, there are two types of models available:

  • Models trained in projects.
  • Foundation models tuned with the fine-tuning method.

Once created, a model is available within its project. Like any other resource, a model can be published to a community for use in other projects. To do this, you need at least the Editor role in the project and the Developer role in the community you are publishing to. You can grant access on the Access tab of the model view page. A resource shared with a community appears on the community page under Community resources.

Supported variable types

You can create a model from variables of the library types supported by serialzy. The table below lists the supported data and variable types.

| Library          | Types                                                  | Data format    |
|------------------|--------------------------------------------------------|----------------|
| CatBoost         | CatBoostRegressor, CatBoostClassifier, CatBoostRanker  | cbm            |
| CatBoost         | Pool                                                   | quantized pool |
| Tensorflow.Keras | Sequential, Model with subclasses                      | tf_keras       |
| Tensorflow       | Checkpoint, Module with subclasses                     | tf_pure        |
| LightGBM         | LGBMClassifier, LGBMRegressor, LGBMRanker              | lgbm           |
| XGBoost          | XGBClassifier, XGBRegressor, XGBRanker                 | xgb            |
| Torch            | Module with subclasses                                 | pt             |
| ONNX             | ModelProto                                             | onnx           |

Information about models as a resource

All information about models created in a project is available under Resources and in the JupyterLab right-hand menu in the Models tab.

The following information is stored about each model:

  • Name.
  • Name of the notebook the model was created in.
  • Name of the variable the model was created from.
  • Model size in bytes.
  • Name of the user who created the model.
  • Model creation date in UTC, e.g., July 18, 2023, 14:23.

To view model details, click its name in the project's model list.
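The metadata fields listed above could be modeled as a small record type. This is only an illustrative sketch: the `ModelInfo` class below is hypothetical and not a DataSphere SDK object.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical record mirroring the model metadata fields listed above;
# not an actual DataSphere API object.
@dataclass(frozen=True)
class ModelInfo:
    name: str
    notebook_name: str   # notebook the model was created in
    variable_name: str   # variable the model was created from
    size_bytes: int
    created_by: str
    created_at: datetime  # stored in UTC

    def created_at_display(self) -> str:
        """Render the creation date as in the UI, e.g. 'July 18, 2023, 14:23'."""
        return self.created_at.strftime("%B %d, %Y, %H:%M")
```

For example, `ModelInfo("clf", "train.ipynb", "model", 1024, "alice", datetime(2023, 7, 18, 14, 23, tzinfo=timezone.utc))` renders its creation date as "July 18, 2023, 14:23".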

Use cases

  • How to create, upload, and delete a model
  • Image generation using the Stable Diffusion model
  • Deploying a service based on an ONNX model
