Yandex project
© 2025 Yandex.Cloud LLC
Yandex Compute Cloud

In this article:

  • Disk and file storage performance
  • Testing disk performance
  • Test examples
  • Throttling

Read and write operations

Written by
Yandex Cloud
Updated at April 10, 2025

There are some technical restrictions on reads and writes that apply to disks and file storages. The restrictions apply both to the entire disk or storage and to each individual disk space allocation unit. The allocation unit size depends on the disk or storage type.

The maximum read and write operation parameters are as follows:

  • Maximum IOPS: Maximum number of read and write operations per second.
  • Maximum bandwidth: Total number of bytes that can be read or written per second.

The actual IOPS value depends on the disk or storage configuration, total bandwidth, and the size of the request in bytes. The provided IOPS value is calculated using the following formula:

IOPS = min(Max IOPS, Max bandwidth / Request size)

Where:

  • Max IOPS: Maximum IOPS value for the disk or storage.
  • Max bandwidth: Maximum bandwidth value for the disk or storage.
  • Request size: Size of a single read or write request in bytes.

Read and write operations utilize the same disk resource: the more read operations you perform, the fewer write operations you can perform, and vice versa. The total number of read and write operations per second is determined by this formula:

Total IOPS = α × WriteIOPS + (1 − α) × ReadIOPS

Where:

  • α: Share of write operations out of the total number of read and write operations per second. Possible values: α∈[0,1].
  • WriteIOPS: Write IOPS value obtained using the formula for the actual IOPS value.
  • ReadIOPS: Read IOPS value obtained using the formula for the actual IOPS value.

For more information about maximum possible IOPS and bandwidth values, see Quotas and limits.
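As a minimal sketch, the two formulas above can be expressed directly in code (the disk limits used here are illustrative, not actual quota values):

```python
def provided_iops(max_iops: float, max_bandwidth: float, request_size: int) -> float:
    # Actual IOPS is capped both by the IOPS limit and by the
    # bandwidth limit divided by the request size in bytes.
    return min(max_iops, max_bandwidth / request_size)

def total_iops(alpha: float, write_iops: float, read_iops: float) -> float:
    # alpha is the share of writes among all operations, 0 <= alpha <= 1.
    return alpha * write_iops + (1 - alpha) * read_iops

# Hypothetical disk: 40,000 IOPS cap, 450 MiB/s bandwidth cap.
iops_4k = provided_iops(40_000, 450 * 2**20, 4096)  # 4 KB requests: IOPS-bound
mix = total_iops(0.5, 20_000, 40_000)               # even read/write mix
```

With 4 KB requests the bandwidth cap would allow ~115,000 operations per second, so the 40,000 IOPS limit applies; with 4 MB requests the bandwidth cap dominates instead.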

Disk and file storage performance

The maximum IOPS values are achieved when performing reads and writes that are 4 KB in size. Network SSDs and file storage have much higher IOPS for read operations and process requests faster than HDDs.

For maximum bandwidth, we recommend 4 MB reads and writes.

Disk or storage performance depends on its size: with more allocation units, you get higher IOPS and bandwidth values.
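A sketch of how size maps to allocation units, assuming a 32 GB unit (the size this article later quotes for network-ssd disks; other disk types use different unit sizes):

```python
def allocation_units(size_gb: int, unit_gb: int = 32) -> int:
    # Number of whole allocation units in a disk of the given size.
    # Assumption: only full increments of the unit size count, and
    # more units mean higher IOPS and bandwidth caps.
    return size_gb // unit_gb

allocation_units(93)  # a 93 GB disk spans only 2 full 32 GB units
```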

Smaller HDDs have a performance boosting mechanism that lets them operate on a par with 1 TB disks during peak load periods. By operating at the basic performance level for 12 hours, a smaller HDD accumulates operation credits, which are spent automatically when the load increases, e.g., when the VM starts. A small HDD can run in boosted mode for about 30 minutes a day; the accumulated credits can be spent all at once or in several shorter intervals. This feature is not available for HDD storages.
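A rough model of this credit mechanism, using only the figures quoted above (about 30 minutes of boost accumulated over 12 hours at base load, capped per day); the linear accrual schedule is an assumption:

```python
def boost_minutes_available(hours_at_base_load: float) -> float:
    # Assumed linear accrual: ~30 min of boost per 12 h at base load,
    # capped at the documented ~30 min/day.
    return min(hours_at_base_load * (30 / 12), 30.0)

boost_minutes_available(12)  # -> 30.0
boost_minutes_available(6)   # -> 15.0
```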

Testing disk performance

You can test the performance of your network disks with fio (Flexible I/O Tester):

  1. Attach the disk to the VM.

  2. Install fio on your VM instance.

    Sample command for Ubuntu:

    sudo apt-get update && sudo apt-get install fio -y
    
  3. Start fio and run the following:

    sudo fio \
    --name=<job_name> \
    --filename=<path_to_mount_point>/testfile.bin \
    --filesize=1G \
    --direct=1 \
    --rw=write \
    --bs=4k \
    --ioengine=libaio \
    --iodepth=64 \
    --runtime=120 \
    --numjobs=8 \
    --time_based \
    --group_reporting \
    --eta-newline=1
    

    Where:

    • --name: Arbitrary job name.

    • --filename: Path to the mount point of the disk whose performance you want to test.

      Alert

      When testing write operations, do not use disk ID (e.g., /dev/vdb) as the --filename parameter value. This may cause you to lose all data on the disk.

    • --direct: Flag that toggles I/O buffering; 1 bypasses the OS page cache (O_DIRECT), 0 uses buffered I/O.

    • --rw: Load template. The possible values are as follows:

      • read: Sequential reads.
      • write: Sequential writes.
      • rw: Sequential reads and writes.
      • randrw: Random reads and writes.
      • randwrite: Random writes.
      • randread: Random reads.
    • --bs: Read and write block size. To get better results, specify a value that is equal to the disk block size or less.

    • --iodepth: I/O queue depth per job, i.e., the number of requests kept in flight.

    • --runtime: Test duration in seconds.

    • --numjobs: Number of read and write jobs.
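As a quick sanity check on fio output, bandwidth and IOPS are related through the block size (bandwidth ≈ IOPS × block size); a small helper, assuming MiB/s as the output unit:

```python
def bandwidth_mib_s(iops: float, block_size_bytes: int) -> float:
    # Bandwidth = IOPS x block size, converted to MiB/s.
    return iops * block_size_bytes / 2**20

# E.g., ~39.7k IOPS at bs=4k corresponds to ~155 MiB/s, matching the
# sequential-write sample output shown in this article.
bandwidth_mib_s(39_700, 4096)
```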

Test examples

Test IOPS for sequential writes

sudo fio \
--name=writeio \
--filename=<path_to_mount_point>/testfile.bin \
--filesize=1G \
--direct=1 \
--rw=write \
--bs=4k \
--ioengine=libaio \
--iodepth=96 \
--runtime=120 \
--numjobs=4 \
--time_based \
--group_reporting \
--eta-newline=1

Result:

---
  write: IOPS=39.7k, BW=155MiB/s (162MB/s)(5112MiB/33001msec); 0 zone resets
    slat (usec): min=2, max=19776, avg= 5.25, stdev=47.15
    clat (usec): min=874, max=5035.1k, avg=9677.38, stdev=40976.63
     lat (usec): min=889, max=5035.1k, avg=9682.81, stdev=40976.66
---

Test IOPS for random writes

sudo fio \
--name=randwrite \
--filename=<path_to_mount_point>/testfile.bin \
--filesize=1G \
--direct=1 \
--rw=randwrite \
--bs=4k \
--ioengine=libaio \
--iodepth=96 \
--runtime=120 \
--numjobs=1 \
--time_based \
--group_reporting \
--eta-newline=1

Result:

---
write: IOPS=9596, BW=37.5MiB/s (39.3MB/s)(4499MiB/120011msec); 0 zone resets
    slat (usec): min=2, max=338, avg= 5.21, stdev= 4.52
    clat (usec): min=680, max=161320, avg=9996.54, stdev=10695.67
     lat (usec): min=698, max=161323, avg=10001.94, stdev=10695.77
---

Test throughput for sequential writes

sudo fio \
--name=writebw \
--filename=<path_to_mount_point>/testfile.bin \
--filesize=1G \
--direct=1 \
--rw=write \
--bs=4M \
--ioengine=libaio \
--iodepth=32 \
--runtime=120 \
--numjobs=1 \
--time_based \
--group_reporting \
--eta-newline=1

Result:

---
   write: IOPS=112, BW=449MiB/s (471MB/s)(52.8GiB/120237msec); 0 zone resets
    slat (usec): min=166, max=270963, avg=8814.82, stdev=10995.16
    clat (msec): min=58, max=661, avg=276.06, stdev=28.21
     lat (msec): min=60, max=679, avg=284.88, stdev=27.91
---

Test IOPS for sequential reads

sudo fio \
--name=readio \
--filename=<path_to_mount_point>/testfile.bin \
--filesize=1G \
--direct=1 \
--rw=read \
--bs=4k \
--ioengine=libaio \
--iodepth=128 \
--runtime=120 \
--numjobs=8 \
--time_based \
--group_reporting \
--eta-newline=1

Result:

---
  read: IOPS=62.2k, BW=243MiB/s (255MB/s)(28.5GiB/120008msec)
    slat (usec): min=2, max=123901, avg= 6.88, stdev=151.96
    clat (usec): min=859, max=168609, avg=16450.99, stdev=8226.23
     lat (usec): min=877, max=168611, avg=16458.07, stdev=8229.16
---

Test read throughput

sudo fio \
--name=readbw \
--filename=<path_to_mount_point>/testfile.bin \
--filesize=1G \
--direct=1 \
--rw=read \
--bs=4M \
--ioengine=libaio \
--iodepth=32 \
--runtime=120 \
--numjobs=1 \
--time_based \
--group_reporting \
--eta-newline=1

Result:

---
  read: IOPS=112, BW=449MiB/s (470MB/s)(52.7GiB/120227msec)
    slat (usec): min=85, max=177850, avg=8878.47, stdev=9824.19
    clat (msec): min=50, max=4898, avg=276.36, stdev=45.16
     lat (msec): min=52, max=4898, avg=285.24, stdev=44.94
---

Test IOPS for random reads

sudo fio \
--name=randread \
--filename=<path_to_mount_point>/testfile.bin \
--filesize=1G \
--direct=1 \
--rw=randread \
--bs=4k \
--ioengine=libaio \
--iodepth=16 \
--runtime=120 \
--numjobs=8 \
--time_based \
--group_reporting \
--eta-newline=1

Result:

---
 read: IOPS=17.0k, BW=66.4MiB/s (69.6MB/s)(7966MiB/120006msec)
    slat (usec): min=2, max=114, avg= 9.05, stdev= 5.36
    clat (usec): min=172, max=251507, avg=7519.93, stdev=6463.84
     lat (usec): min=179, max=251511, avg=7529.25, stdev=6464.41
---

Throttling

If a VM exceeds disk limits at any time, this will trigger throttling.

Throttling is a mechanism that forcibly limits disk performance. When throttled, disk operations are suspended and the disk operation wait time (iowait) increases. Since all read and write operations are processed in a single thread (vCPU), overloading the system disk may also cause network problems. This applies to both VMs and physical servers.

For example, let's assume a write limit of 300 IOPS. The limit is split into 10 parts and applied once every 100 ms: 300 / 10 = 30 write requests are allowed per 100 ms window. If you send 60 requests within a single 100 ms window, only the first 30 will be processed; the rest will be enqueued and processed during the next 100 ms window. If write requests arrive in bursts rather than evenly, throttling may therefore cause significant delays.
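A minimal model of this windowed limiting (illustrative only; the real scheduler's behavior may differ in detail):

```python
def schedule(arrivals: list[int], iops_limit: int, window_ms: int = 100) -> list[int]:
    # arrivals[i] is the number of requests submitted during the i-th
    # 100 ms window; excess requests queue for later windows.
    per_window = iops_limit * window_ms // 1000  # 300 IOPS -> 30 per window
    queued, processed = 0, []
    for n in arrivals:
        queued += n
        done = min(queued, per_window)
        processed.append(done)
        queued -= done
    return processed

schedule([60, 0, 0], 300)  # -> [30, 30, 0]: the burst spills into the next window
```

An evenly spread load (at or below 30 requests per window here) passes through without queuing, which is why bursty workloads see the extra latency.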

Disk performance depends on its size. To improve the overall performance of the disk subsystem, use VMs with SSD network storage (network-ssd). Every increment of 32 GB increases the number of allocation units and, consequently, the performance.

You can select the storage type only when creating a VM. However, you can take a disk snapshot and create a new VM from that snapshot with a network-ssd disk.
