
In this article:

  • Getting started
  • Checking GPU performance
    • Checking a connection using TensorFlow
    • Checking a connection using nvidia-smi
  • Writing GPU utilization statistics while training a model
  • Example of writing GPU utilization statistics

Checking GPU load

Written by
Yandex Cloud
Updated at August 15, 2025

Yandex DataSphere supports computing resource configurations with GPUs.

You can check the GPU performance, load, and resource utilization statistics using TensorFlow or nvidia-smi.

Getting started

Open the DataSphere project:

  1. Select the project in your community or on the DataSphere home page in the Recent projects tab.

  2. Click Open project in JupyterLab and wait for the loading to complete.
  3. Open the notebook tab.

Checking GPU performance

Checking a connection using TensorFlow

  1. Select the GPU configuration you need. In our example, we use the g1.1 configuration.

  2. Enter the following code in a cell:

    import tensorflow as tf
    
    
    tf.config.list_physical_devices('GPU')
    
  3. Run the cell.

  4. This will output a list of all GPUs available to the notebook.

Checking a connection using nvidia-smi

  1. Select the GPU configuration you need. In our example, we use the g1.1 configuration.

  2. Enter the following code in a cell:

    #!:bash
    nvidia-smi
    
  3. Run the cell.

  4. This will output GPU status details.
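Besides the interactive table, nvidia-smi can emit machine-readable output via its standard --query-gpu and --format=csv options. A minimal sketch of parsing that CSV from a notebook (parse_gpu_query is a hypothetical helper; the sample string stands in for real nvidia-smi output so the sketch runs without a GPU):

```python
import csv
import io

# Hypothetical helper: parse rows produced by
#   nvidia-smi --query-gpu=utilization.gpu,memory.used --format=csv,noheader,nounits
# Each row becomes (utilization_percent, memory_used_mib).
def parse_gpu_query(csv_text):
    rows = []
    for util, mem in csv.reader(io.StringIO(csv_text)):
        rows.append((int(util.strip()), int(mem.strip())))
    return rows

# Sample output for one GPU; on a GPU VM you would obtain it with
# subprocess.check_output(["nvidia-smi", "--query-gpu=utilization.gpu,memory.used",
#                          "--format=csv,noheader,nounits"], text=True).
sample = "23, 1024\n"
print(parse_gpu_query(sample))  # [(23, 1024)]
```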

Writing GPU utilization statistics while training a model

  1. Enter the following code in a cell:

    import subprocess

    # Start nvidia-smi dmon in the background and log its output to a file.
    with open("stdout.txt", "wb") as out:
        proc = subprocess.Popen(["nvidia-smi", "dmon"], stdout=out, stderr=subprocess.STDOUT)

        <GPU_utilization_code>

        proc.terminate()
        proc.kill()
    

    The code uses the nvidia-smi dmon command, which collects GPU performance statistics every second.

  2. Run the cell.

  3. As a result, the stdout.txt file with detailed GPU statistics will appear in the model directory.
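Note that if the code between Popen and terminate raises an exception, the dmon process keeps running in the background. A sketch of a safer pattern using try/finally (run_with_monitor is a hypothetical helper; the demo substitutes a portable command for nvidia-smi dmon so it runs on machines without a GPU):

```python
import subprocess
import sys
import time

def run_with_monitor(workload, monitor_cmd, log_path="stdout.txt"):
    # Start the monitor in the background, redirecting its output to a file.
    with open(log_path, "wb") as out:
        proc = subprocess.Popen(monitor_cmd, stdout=out, stderr=subprocess.STDOUT)
        try:
            workload()                # e.g. your model training code
        finally:
            proc.terminate()          # stop the monitor even if the workload raised
            proc.wait()

# Stand-in demo: a short sleep as the workload, a one-line printer as the monitor.
# On a GPU VM, monitor_cmd would be ["nvidia-smi", "dmon"].
run_with_monitor(lambda: time.sleep(0.5),
                 [sys.executable, "-c", "print('monitoring')"])
```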

Example of writing GPU utilization statistics

Use a small model trained on MNIST to test GPU configurations. When running the code on the g1.1 and g2.1 configurations, the model utilizes 18% to 25% of GPU resources. You can view the data in the sm column of the stdout.txt file.

  1. Enter the following code in a cell:

    import subprocess
    import tensorflow as tf

    # Load and normalize the MNIST dataset.
    mnist = tf.keras.datasets.mnist
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0

    def create_model():
        return tf.keras.models.Sequential([
            tf.keras.layers.Flatten(input_shape=(28, 28)),
            tf.keras.layers.Dense(512, activation='relu'),
            tf.keras.layers.Dropout(0.2),
            tf.keras.layers.Dense(10, activation='softmax')
        ])

    # Collect GPU statistics with nvidia-smi dmon while the model trains.
    with open("stdout.txt", "wb") as out:
        proc = subprocess.Popen(["nvidia-smi", "dmon"], stdout=out, stderr=subprocess.STDOUT)

        model = create_model()
        model.compile(optimizer='adam',
                      loss='sparse_categorical_crossentropy',
                      metrics=['accuracy'])
        model.fit(x=x_train,
                  y=y_train,
                  epochs=5,
                  validation_data=(x_test, y_test))

        proc.terminate()
        proc.kill()
    
  2. Run the cell.

  3. As a result, the stdout.txt file with detailed GPU statistics will appear in the model directory.
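To summarize the collected statistics, you can average the sm column of the dmon output with a short script. A sketch, assuming the standard nvidia-smi dmon layout where header lines start with # (mean_sm_utilization is a hypothetical helper):

```python
# Hypothetical helper: average the "sm" (compute utilization, %) column of
# nvidia-smi dmon output such as the stdout.txt file written above.
def mean_sm_utilization(dmon_text):
    sm_index, values = None, []
    for line in dmon_text.splitlines():
        tokens = line.split()
        if not tokens:
            continue
        if tokens[0] == "#":
            # The first header line names the columns: "# gpu pwr gtemp mtemp sm ..."
            if "sm" in tokens:
                sm_index = tokens.index("sm") - 1  # data rows have no leading "#"
            continue
        if sm_index is not None and tokens[sm_index] != "-":
            values.append(float(tokens[sm_index]))
    return sum(values) / len(values) if values else 0.0

# Sample dmon output for one GPU; on a real run, read it from stdout.txt.
sample = """\
# gpu    pwr  gtemp  mtemp     sm    mem    enc    dec   mclk   pclk
# Idx      W      C      C      %      %      %      %    MHz    MHz
    0     43     35      -     20     10      0      0    715   1480
    0     44     36      -     24     11      0      0    715   1480"""
print(mean_sm_utilization(sample))  # 22.0
```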

© 2025 Direct Cursus Technology L.L.C.