
Docker images in jobs

Written by Yandex Cloud. Updated on October 11, 2024.

By default, DataSphere Jobs runs jobs on the nvidia/cuda:11.8.0-cudnn8-runtime-ubuntu22.04 public image, which comes with the Conda package manager, Python 3.10, and a number of additional packages preinstalled. This image is stored in the DataSphere cache, so jobs that use the default environment start faster.
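
For example, a minimal job configuration that relies on the default image simply omits the docker key under env (the name and cmd values below are placeholders, not taken from this article):

```yaml
# config.yaml — a minimal sketch; name and cmd are illustrative
name: my-training-job
cmd: python3 train.py
# no env.docker key: the job runs on the cached default
# nvidia/cuda:11.8.0-cudnn8-runtime-ubuntu22.04 image
```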

You can also run jobs on a different Docker image by specifying it under env in the job configuration file. The image can be one of the following:

  • DataSphere system image

    env:
      docker: system-python-3-10   # Python 3.10 system image
    
  • Custom Docker image available in the job project

    env:
      docker: <Docker_image_ID>  # ID expressed as b1gxxxxxxxxxxxxxxxxx
    

    Warning

    When using a project Docker image, the job runtime environment will not include libraries installed in the notebook.

  • External image

    You can use any image registry you prefer (Yandex Container Registry, Docker Hub, private Docker registries, etc.) by specifying a username and password for accessing the image.

    env:
      docker:
        image: <image_path>
        username: <username>
        password:
          secret-id: <project_secret_ID>
    

    Where:

    • <image_path>: Full path to the image in a container registry, e.g., cr.yandex/b1g**********/myenv:0.1.
    • <username>: Username for accessing your registry. For Yandex Container Registry authentication, use a service account and an authorized key.
    • <project_secret_ID>: ID of the secret with a password. The secret must be created in a DataSphere project.

    If you are using a public image, you do not need to specify authentication credentials:

    env:
      docker:
        image: ubuntu:focal
    
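
To recap, the env.docker key takes three forms: absent (the default image), a string (a system image name or project image ID), or a mapping (an external image with optional credentials). The sketch below distinguishes the three forms for illustration only; the function is hypothetical and not part of any DataSphere API:

```python
def docker_spec_kind(env: dict) -> str:
    """Classify the docker setting in a job's env section.

    Returns one of:
      'default'  - no docker key: the cached nvidia/cuda default image
      'named'    - a string: a system image name or a project image ID
      'external' - a mapping: an external image, optionally with credentials
    """
    docker = env.get("docker")
    if docker is None:
        return "default"
    if isinstance(docker, str):
        return "named"
    return "external"


# The three forms shown in the examples above:
print(docker_spec_kind({}))                                     # default
print(docker_spec_kind({"docker": "system-python-3-10"}))       # named
print(docker_spec_kind({"docker": {"image": "ubuntu:focal"}}))  # external
```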

See also

  • DataSphere Jobs
  • DataSphere CLI
  • Job runtime environment
  • Running jobs in DataSphere Jobs
  • GitHub repository with job run examples

© 2025 Direct Cursus Technology L.L.C.