# Docker images in jobs
By default, DataSphere Jobs runs jobs in the public `nvidia/cuda:11.8.0-cudnn8-runtime-ubuntu22.04` image with the Conda package manager, Python 3.10, and a set of additional packages pre-installed. This image is stored in the DataSphere cache, so jobs that use the default environment start faster.
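As a point of reference, a minimal `config.yaml` along the following lines runs a job in this default image; the job name and entry point are placeholders, and `python: auto` (automatic reproduction of the local Python environment) is shown as one common option rather than a required setting:

```yaml
# With no docker key under env, the job runs in the default cached image.
name: my-job          # hypothetical job name
cmd: python3 main.py  # hypothetical entry point
env:
  python: auto        # reproduce the local Python environment automatically
```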
You can also use a different Docker image to run jobs by specifying it under `env` in the job configuration file. This can be:
- DataSphere system image:

  ```yaml
  env:
    docker: system-python-3-10  # Python 3.10 system image
  ```
- Custom Docker image available in the job project:

  ```yaml
  env:
    docker: <Docker_image_ID>  # ID in the b1gxxxxxxxxxxxxxxxxx format
  ```

  **Warning:** When using a project Docker image, the job runtime environment will not include the libraries installed in the notebook.
- External image:

  You can use any image registry you prefer (Yandex Container Registry, Docker Hub, Docker private registries, etc.) by specifying the username and password for accessing the image:

  ```yaml
  env:
    docker:
      image: <image_path>
      username: <username>
      password:
        secret-id: <project_secret_ID>
  ```
  Where:

  - `<image_path>`: Full path to the image in the container registry, e.g., `cr.yandex/b1g**********/myenv:0.1`.
  - `<username>`: Username for accessing your registry. For Yandex Container Registry authentication, use a service account and an authorized key, as in the sketch after this list.
  - `<project_secret_ID>`: ID of the secret containing the password. The secret must be created in the DataSphere project.
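  For example, Yandex Container Registry accepts authorized-key authentication with the fixed username `json_key` and the contents of the key file as the password. A minimal sketch, assuming the project secret stores the contents of the service account's `key.json`:

  ```yaml
  env:
    docker:
      image: cr.yandex/<registry_ID>/myenv:0.1  # path in Yandex Container Registry
      username: json_key                        # fixed username for authorized-key auth
      password:
        secret-id: <project_secret_ID>          # secret holding the key.json contents
  ```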
  If you are using a public image, you do not need to specify authentication credentials:

  ```yaml
  env:
    docker:
      image: ubuntu:focal
  ```
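Putting it together, a complete job configuration for a public image could look like the sketch below; the job name, entry point, and overall field layout are assumptions along the lines of the earlier example rather than a definitive reference:

```yaml
name: ubuntu-job          # hypothetical job name
cmd: bash run.sh          # hypothetical command executed inside the container
env:
  docker:
    image: ubuntu:focal   # public image, so no credentials are required
```

You can then start the job with the DataSphere CLI, e.g., `datasphere project job execute -p <project_ID> -c config.yaml`.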
## See also

- DataSphere Jobs
- DataSphere CLI
- Job runtime environment
- Running jobs in DataSphere Jobs
- GitHub repository with job run examples