Storage in Managed Service for Apache Kafka®

Written by Yandex Cloud
Updated on March 28, 2025
  • Selecting the disk type during cluster creation
  • Minimum storage size
  • Maximum log segment size
  • Disk space management
    • Recovering a cluster from read-only mode
    • Automatic increase of storage size

Managed Service for Apache Kafka® lets you use network and local storage drives for your clusters. Network drives are based on network block storage: virtual disks in the Yandex Cloud infrastructure. Local disks are physically located on the broker servers.

When creating a cluster, you can select the following disk types for data storage:

  • Network HDDs (network-hdd): Most cost-effective option for clusters that do not require high read/write performance.

  • Network SSDs (network-ssd): Balanced solution. Such disks are slower than local SSD storage, but, unlike local disks, they ensure data integrity if Yandex Cloud hardware fails.

  • Non-replicated SSDs (network-ssd-nonreplicated): Network disks with enhanced performance achieved by eliminating redundancy.

    The storage size can only be increased in 93 GB increments.

  • Ultra high-speed network SSDs with three replicas (network-ssd-io-m3): Network disks with the same performance characteristics as non-replicated ones. This disk type provides redundancy.

    The storage size can only be increased in 93 GB increments.

    Access to high-performance SSDs is available on request. Contact support or your account manager.

  • Local SSDs (local-ssd): Disks with the best performance.

    The size of such storage can only be increased:

    • For Intel Cascade Lake: In 100 GB increments.

    • For Intel Ice Lake: In 368 GB increments.

    Note

    For clusters with hosts residing in the ru-central1-d availability zone, local SSD storage is not available if using the Intel Cascade Lake platform.

Selecting the disk type during cluster creation

The number of broker hosts you can create together with an Apache Kafka® cluster depends on the selected disk type:

  • You can only create a cluster with three or more broker hosts when using the following disk types:

    • Local SSDs (local-ssd)
    • Non-replicated SSDs (network-ssd-nonreplicated)

    This cluster will be fault-tolerant only if all the conditions are met.

  • You can add any number of broker hosts within the current quota when using the following disk types:

    • Network HDDs (network-hdd)
    • Network SSDs (network-ssd)
    • Ultra high-speed network SSDs with three replicas (network-ssd-io-m3)

For more information about limits on the number of broker hosts per cluster, see Quotas and limits.
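
As a rough sanity check of the broker-count rule above and the size increments from the previous section, the sketch below validates a planned configuration. It is only an illustration built from the values in this article, not part of the service API; the function and its names are hypothetical.

```python
# Illustrative sketch (not part of the service API): checks a planned cluster
# configuration against the storage rules described in this article.

# Storage size increments in GB, keyed by (disk type, platform).
SIZE_INCREMENT_GB = {
    ("network-ssd-nonreplicated", None): 93,
    ("network-ssd-io-m3", None): 93,
    ("local-ssd", "Intel Cascade Lake"): 100,
    ("local-ssd", "Intel Ice Lake"): 368,
}

# Disk types that require a cluster of three or more broker hosts.
REQUIRE_THREE_BROKERS = {"local-ssd", "network-ssd-nonreplicated"}


def validate_storage(disk_type, size_gb, broker_hosts, platform=None):
    """Return a list of problems with the planned configuration (empty if OK)."""
    problems = []
    increment = SIZE_INCREMENT_GB.get((disk_type, platform))
    if increment and size_gb % increment != 0:
        problems.append(f"{disk_type}: size must be a multiple of {increment} GB")
    if disk_type in REQUIRE_THREE_BROKERS and broker_hosts < 3:
        problems.append(f"{disk_type}: the cluster needs at least 3 broker hosts")
    return problems


print(validate_storage("network-ssd-nonreplicated", 186, 3))    # [] -- valid
print(validate_storage("local-ssd", 400, 1, "Intel Ice Lake"))  # two problems
```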

Minimum storage size

In order to work, each topic requires space in broker host storage. The amount of such space depends on the replication factor and the number of partitions. If there is not enough available storage space, you will not be able to create a new topic.

Tip

You can always increase the storage size up to the current quota.

You can calculate the minimum storage size for all topics using the formula below:

2 × maximum log segment size × number of partitions in cluster × replication factor.

If topic partitions are evenly distributed, divide the value calculated with this formula by the number of broker hosts.

Maximum log segment size

At least two log segments are required for each replica of a topic partition. You can set the maximum size of such a segment:

  • At the topic level using the Segment bytes setting.
  • Globally at the cluster level using the Log segment bytes setting.

Thus, the minimum storage size for all topics is: 2 × maximum log segment size × number of partitions in cluster × replication factor. If the cluster partitions are evenly distributed, you can divide the resulting value by the number of brokers to determine the required storage size per broker.

By default, the segment size is 1 GB.
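
For example, with the default 1 GB segment size, a replication factor of 3, and 12 partitions in total (the last two values are assumed for illustration), the formula gives 2 × 1 GB × 12 × 3 = 72 GB for the whole cluster, or 24 GB per broker if the partitions are evenly distributed across three brokers. The sketch below reproduces this calculation:

```python
# Minimum storage estimate using the formula from this article:
# 2 x maximum log segment size x number of partitions x replication factor.
# The partition count, replication factor, and broker count are assumptions.

SEGMENT_SIZE_GB = 1      # default maximum log segment size
PARTITIONS = 12          # total number of partitions in the cluster (assumed)
REPLICATION_FACTOR = 3   # assumed
BROKER_HOSTS = 3         # assumed, with evenly distributed partitions

min_storage_gb = 2 * SEGMENT_SIZE_GB * PARTITIONS * REPLICATION_FACTOR
per_broker_gb = min_storage_gb / BROKER_HOSTS

print(f"Minimum storage for all topics: {min_storage_gb} GB")  # 72 GB
print(f"Minimum storage per broker: {per_broker_gb:.0f} GB")   # 24 GB
```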

Disk space management

As soon as Apache Kafka® logs take up 97% of the storage capacity, the host automatically enters read-only mode and the Managed Service for Apache Kafka® cluster blocks write requests to topics.

You can monitor storage utilization on cluster hosts by setting up alerts in Yandex Monitoring.
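
As a simple illustration of the 97% threshold described above, the sketch below computes how close a broker host is to read-only mode from its used and total storage bytes. The example values are assumptions; in practice, take the actual numbers from the disk usage metrics listed in the Yandex Monitoring metrics reference.

```python
# Illustrative check against the 97% read-only threshold described above.
# The input values are assumptions; in practice, take them from the cluster's
# disk usage metrics in Yandex Monitoring.

READ_ONLY_THRESHOLD = 0.97  # hosts switch to read-only at 97% utilization

def storage_utilization(used_bytes: int, total_bytes: int) -> float:
    """Fraction of the broker host's storage that is in use."""
    return used_bytes / total_bytes

# Assumed example: 870 GB used on a 1,000 GB disk.
usage = storage_utilization(870 * 1024**3, 1000 * 1024**3)
if usage >= READ_ONLY_THRESHOLD:
    print(f"Host is in read-only mode ({usage:.0%} used)")
else:
    print(f"{usage:.0%} used, {READ_ONLY_THRESHOLD - usage:.0%} of capacity left before read-only mode")
```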

Recovering a cluster from read-only mode

Use one of these methods:

  • Increase the storage size so that the used space falls below the threshold. The Managed Service for Apache Kafka® cluster will then automatically exit read-only mode.
  • Set up automatic increase of storage size.

Automatic increase of storage size

Automatic increase of storage size prevents situations where the disk runs out of free space and the host switches to read-only mode. The storage size increases upon reaching the specified threshold percentage of the total capacity. There are two thresholds:

  • Scheduled increase threshold: When reached, the storage size increases during the next maintenance window.
  • Immediate increase threshold: When reached, the storage size increases immediately.

You can use either one or both thresholds. If you set both, make sure the immediate increase threshold is higher than the scheduled one.

When the specified threshold is reached, the storage size increases by an amount that depends on the disk type:

  • For network HDDs and SSDs, by the higher of the two values: 20 GB or 20% of the current disk size.

  • For non-replicated SSDs, by 93 GB.

  • For local SSDs, depending on the cluster platform:

    • Intel Cascade Lake, by 100 GB.
    • Intel Ice Lake, by 368 GB.

Each time a threshold is reached again, the storage size is increased automatically until it reaches the specified maximum. To allow further increases after that, raise the maximum storage size manually.
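
The sketch below models the increase rules listed above using the increments and the cap from this article. It only illustrates the documented behavior and is not the service's implementation:

```python
# Models the automatic storage increase rules described in this article.
# Illustration only; not the service's actual implementation.

def next_storage_size_gb(current_gb, max_gb, disk_type, platform=None):
    """Storage size after one automatic increase, capped at the configured maximum."""
    if disk_type in ("network-hdd", "network-ssd"):
        step = max(20, round(current_gb * 0.20))  # 20 GB or 20%, whichever is higher
    elif disk_type == "network-ssd-nonreplicated":
        step = 93
    elif disk_type == "local-ssd":
        step = 100 if platform == "Intel Cascade Lake" else 368  # Intel Ice Lake
    else:
        raise ValueError(f"increase rule not listed for disk type: {disk_type}")
    return min(current_gb + step, max_gb)

print(next_storage_size_gb(50, 500, "network-ssd"))                 # 70  (20 GB > 20%)
print(next_storage_size_gb(200, 500, "network-ssd"))                # 240 (20% > 20 GB)
print(next_storage_size_gb(465, 500, "network-ssd-nonreplicated"))  # 500 (capped at the maximum)
```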

You can configure automatic increase of storage size when creating or updating a cluster. If you set the scheduled increase threshold, you also need to configure the maintenance window schedule.

Warning

  • You cannot decrease the storage size.
  • While resizing the storage, cluster hosts will be unavailable.
