Upgrading a Managed Service for Apache Kafka® cluster to migrate from ZooKeeper to KRaft

Written by
Yandex Cloud
Updated at August 20, 2025
  • Required paid resources
  • Update the cluster version
  • Migrate the cluster to KRaft
  • Delete the resources you created

Managed Service for Apache Kafka® multi-host clusters running version 3.5 or lower use ZooKeeper to manage metadata. Starting with version 3.6, Apache Kafka® uses KRaft as its main metadata synchronization protocol, and ZooKeeper support will be discontinued in Apache Kafka® 4.0. You can migrate clusters with ZooKeeper hosts to the KRaft protocol.

To switch to KRaft in a ZooKeeper cluster:

  1. Update the cluster version.
  2. Migrate the cluster to KRaft.

If you no longer need the resources you created, delete them.

Required paid resources

The support cost includes:

  • Managed Service for Apache Kafka® cluster fee: covers the computing resources allocated to hosts (including KRaft hosts) and disk space (see Managed Service for Apache Kafka® pricing).
  • Fee for using public IP addresses for cluster hosts (see Virtual Private Cloud pricing).

Update the cluster version

Update Apache Kafka® in your cluster with ZooKeeper to version 3.9 one version at a time, without skipping any: 3.5 → 3.6 → 3.7 → 3.8 → 3.9. If your cluster runs a version lower than 3.5, first update it to 3.5.

Management console
CLI
Terraform
REST API
gRPC API
  1. Navigate to the folder dashboard and select Managed Service for Kafka.
  2. In the row with your cluster, click the ellipsis icon, then select Edit.
  3. In the Version field, select version 3.6.
  4. Click Save.
  5. Repeat the steps for the remaining Apache Kafka® versions in the given order.

If you do not have the Yandex Cloud CLI installed yet, install and initialize it.

By default, the CLI uses the folder specified when creating the profile. To change the default folder, use the yc config set folder-id <folder_ID> command. You can also set a different folder for any specific command using the --folder-name or --folder-id parameter.

  1. Start the Apache Kafka® update in your cluster with the following command:

    yc managed-kafka cluster update <cluster_name_or_ID> \
       --version=3.6
    
  2. Repeat the command for the remaining versions in the given order, as in the sketch below.
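
If you script the update, here is a minimal bash sketch that walks the versions in order. It assumes a placeholder cluster name my-kafka-cluster and the default synchronous behavior of yc (each call returns after the update operation completes):

    # Upgrade one version at a time; never skip a version
    for version in 3.6 3.7 3.8 3.9; do
        yc managed-kafka cluster update my-kafka-cluster \
           --version="$version"
    done

    # Check the resulting cluster version
    yc managed-kafka cluster get my-kafka-cluster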

  1. Open the current Terraform configuration file that defines your infrastructure.

  2. In the config section of the Managed Service for Apache Kafka® cluster, set 3.6 in the version field as the new Apache Kafka® version:

    resource "yandex_mdb_kafka_cluster" "<cluster_name>" {
      ...
      config {
        version = "3.6"
      }
    }
    
  3. Make sure the settings are correct.

    1. In the command line, navigate to the directory that contains the current Terraform configuration files defining the infrastructure.

    2. Run this command:

      terraform validate
      

      Terraform will show any errors found in your configuration files.

  4. Confirm updating the resources.

    1. Run this command to view the planned changes:

      terraform plan
      

      If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

    2. If everything looks correct, apply the changes:

      1. Run this command:

        terraform apply
        
      2. Confirm updating the resources.

      3. Wait for the operation to complete.

  5. Repeat the steps for the remaining Apache Kafka® versions in the given order.

  1. Get an IAM token for API authentication and put it into the environment variable:

    export IAM_TOKEN="<IAM_token>"
    
  2. Use the Cluster.update method and send the following request, e.g., via cURL:

    curl \
        --request PATCH \
        --header "Authorization: Bearer $IAM_TOKEN" \
        --header "Content-Type: application/json" \
        --url 'https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters/<cluster_ID>' \
        --data '{
                  "updateMask": "configSpec.version",
                  "configSpec": {
                    "version": "3.6"
                  }
                }'
    

    Where:

    • updateMask: List of parameters to update as a single string, separated by commas.

      Here only one parameter is specified: configSpec.version.

    • configSpec.version: Apache Kafka® version.

    You can request the cluster ID with the list of clusters in the folder.

  3. View the server response to make sure the request was successful.

  4. Repeat the steps for the remaining Apache Kafka® versions in the given order.
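
To look up the cluster ID, you can also call the Cluster.list REST method directly. A minimal cURL sketch, assuming <folder_ID> is the ID of the folder hosting the cluster:

    curl \
        --request GET \
        --header "Authorization: Bearer $IAM_TOKEN" \
        --url 'https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters?folderId=<folder_ID>'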

  1. Get an IAM token for API authentication and put it into the environment variable:

    export IAM_TOKEN="<IAM_token>"
    
  2. Use the ClusterService/Update call and send the following request, e.g., via gRPCurl:

    grpcurl \
        -format json \
        -import-path ~/cloudapi/ \
        -import-path ~/cloudapi/third_party/googleapis/ \
        -proto ~/cloudapi/yandex/cloud/mdb/kafka/v1/cluster_service.proto \
        -rpc-header "Authorization: Bearer $IAM_TOKEN" \
        -d '{
              "cluster_id": "<cluster_ID>",
              "update_mask": {
                "paths": [
                  "config_spec.version"
                ]
              },
              "config_spec": {
                "version": "3.6"
              }
            }' \
        mdb.api.cloud.yandex.net:443 \
        yandex.cloud.mdb.kafka.v1.ClusterService.Update
    

    Where:

    • update_mask: List of parameters to update as an array of paths[] strings.

      Here only one parameter is specified: config_spec.version.

    • config_spec.version: Apache Kafka® version.

    You can request the cluster ID with the list of clusters in the folder.

  3. View the server response to make sure the request was successful.

  4. Repeat the steps for the remaining Apache Kafka® versions in the given order.
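
Similarly, you can look up the cluster ID over gRPC with the ClusterService/List call. A minimal gRPCurl sketch, assuming <folder_ID> is the ID of the folder hosting the cluster:

    grpcurl \
        -format json \
        -import-path ~/cloudapi/ \
        -import-path ~/cloudapi/third_party/googleapis/ \
        -proto ~/cloudapi/yandex/cloud/mdb/kafka/v1/cluster_service.proto \
        -rpc-header "Authorization: Bearer $IAM_TOKEN" \
        -d '{"folder_id": "<folder_ID>"}' \
        mdb.api.cloud.yandex.net:443 \
        yandex.cloud.mdb.kafka.v1.ClusterService.List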

Migrate the cluster to KRaft

To migrate a Managed Service for Apache Kafka® cluster with ZooKeeper hosts to the KRaft protocol, configure resources for the KRaft controllers.

Management console
CLI
Terraform
REST API
gRPC API
  1. Navigate to the folder dashboard and select Managed Service for Kafka.
  2. Click the cluster name.
  3. At the top of the screen, click Migrate.
  4. Select the platform, host type, and host class for the KRaft controllers.
  5. Click Save.
  6. Wait for the migration to complete.

Run this command to start the cluster migration:

yc managed-kafka cluster update <cluster_name_or_ID> \
   --controller-resource-preset "<KRaft_host_class>" \
   --controller-disk-size <storage_size> \
   --controller-disk-type <disk_type>

Where:

  • --controller-resource-preset: KRaft host class.
  • --controller-disk-size: Storage size for the KRaft hosts.
  • --controller-disk-type: Disk type of the KRaft hosts.

Note

For KRaft controllers:

  • Only the network-ssd and network-ssd-nonreplicated disk types are available.
  • The Intel Broadwell platform is not available.

To find out the cluster name or ID, get a list of clusters in the folder.
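
For reference, a filled-in migration call. The cluster name and host class below are placeholders; choose values that match your workload (disk size here is in GB, on a network-ssd disk):

    yc managed-kafka cluster update my-kafka-cluster \
       --controller-resource-preset "s2.micro" \
       --controller-disk-size 10 \
       --controller-disk-type network-ssd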

  1. Open the current Terraform configuration file that defines your infrastructure.

  2. Delete the config.zookeeper section of the Managed Service for Apache Kafka® cluster.

  3. Add the config.kraft section with the KRaft controller resource description:

    resource "yandex_mdb_kafka_cluster" "<cluster_name>" {
      ...
      config {
        ...
        kraft {
          resources {
            disk_size          = <storage_size_in_GB>
            disk_type_id       = "<disk_type>"
            resource_preset_id = "<KRaft_host_class>"
          }
        }
      }
    }
    

    Where:

    • kraft.resources.resource_preset_id: KRaft host class.
    • kraft.resources.disk_type_id: Disk type of KRaft hosts.

    Note

    For KRaft controllers:

    • Only the network-ssd and network-ssd-nonreplicated disk types are available.
    • The Intel Broadwell platform is not available.
  4. Make sure the settings are correct.

    1. In the command line, navigate to the directory that contains the current Terraform configuration files defining the infrastructure.

    2. Run this command:

      terraform validate
      

      Terraform will show any errors found in your configuration files.

  5. Confirm updating the resources.

    1. Run this command to view the planned changes:

      terraform plan
      

      If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

    2. If everything looks correct, apply the changes:

      1. Run this command:

        terraform apply
        
      2. Confirm updating the resources.

      3. Wait for the operation to complete.

  1. Get an IAM token for API authentication and put it into the environment variable:

    export IAM_TOKEN="<IAM_token>"
    
  2. Use the Cluster.update method and send the following request, e.g., via cURL:

    curl \
        --request PATCH \
        --header "Authorization: Bearer $IAM_TOKEN" \
        --header "Content-Type: application/json" \
        --url 'https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters/<cluster_ID>' \
        --data '{
                  "updateMask": "configSpec.kraft.resources.resourcePresetId,configSpec.kraft.resources.diskSize,configSpec.kraft.resources.diskTypeId",
                  "configSpec": {
                    "kraft": {
                      "resources": {
                        "resourcePresetId": "<KRaft_host_class>",
                        "diskSize": "<storage_size_in_bytes>",
                        "diskTypeId": "<disk_type>"
                      }
                    }
                  }
                }'
    

    Where:

    • updateMask: List of parameters to update as a single string, separated by commas.

      Here you need to specify all parameters of the resources you want to add: configSpec.kraft.resources.resourcePresetId, configSpec.kraft.resources.diskSize, configSpec.kraft.resources.diskTypeId.

    • configSpec.kraft: KRaft controller configuration:

      • resources.resourcePresetId: Host class ID. You can request the list of available host classes with their IDs using the ResourcePreset.list method.
      • resources.diskSize: Disk size in bytes (see the conversion sketch after these steps).
      • resources.diskTypeId: Disk type.

      Note

      For KRaft controllers:

      • Only the network-ssd and network-ssd-nonreplicated disk types are available.
      • The Intel Broadwell platform is not available.

    You can request the cluster ID with the list of clusters in the folder.

  3. View the server response to make sure the request was successful.
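
Keep in mind that diskSize is set in bytes, not gigabytes. A quick bash sketch for the conversion, here for a 10 GB disk:

    # 10 GB in bytes: 10 * 1024^3 = 10737418240
    echo $((10 * 1024 ** 3))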

  1. Get an IAM token for API authentication and put it into the environment variable:

    export IAM_TOKEN="<IAM_token>"
    
  2. Use the ClusterService/Update call and send the following request, e.g., via gRPCurl:

    grpcurl \
        -format json \
        -import-path ~/cloudapi/ \
        -import-path ~/cloudapi/third_party/googleapis/ \
        -proto ~/cloudapi/yandex/cloud/mdb/kafka/v1/cluster_service.proto \
        -rpc-header "Authorization: Bearer $IAM_TOKEN" \
        -d '{
              "cluster_id": "<cluster_ID>",
              "update_mask": {
                "paths": [
                  "config_spec.kraft.resources.resource_preset_id",
                  "config_spec.kraft.resources.disk_size",
                  "config_spec.kraft.resources.disk_type_id"
                ]
              },
              "config_spec": {
                "kraft": {
                  "resources": {
                    "resource_preset_id": "<KRaft_host_class>",
                    "disk_size": "<storage_size_in_bytes>",
                    "disk_type_id": "<disk_type>"
                  }
                }
              }
            }' \
        mdb.api.cloud.yandex.net:443 \
        yandex.cloud.mdb.kafka.v1.ClusterService.Update
    

    Where:

    • update_mask: List of parameters to update as an array of paths[] strings.

      Here you need to specify all parameters of the resources you want to add: config_spec.kraft.resources.resource_preset_id, config_spec.kraft.resources.disk_size, config_spec.kraft.resources.disk_type_id.

    • config_spec.kraft: KRaft controller configuration:

      • resources.resource_preset_id: Host class ID. You can request the list of available host classes with their IDs using the ResourcePreset.list method.
      • resources.disk_size: Disk size in bytes.
      • resources.disk_type_id: Disk type.

      Note

      For KRaft controllers:

      • Only the network-ssd and network-ssd-nonreplicated disk types are available.
      • The Intel Broadwell platform is not available.

    You can request the cluster ID with the list of clusters in the folder.

  3. View the server response to make sure the request was successful.

Delete the resources you created

Some resources are not free of charge. To avoid paying for them, delete the resources you no longer need:

Manually
Terraform

Delete the Managed Service for Apache Kafka® cluster.
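
If you work from the CLI, a minimal sketch for the same step (the cluster name is a placeholder):

    yc managed-kafka cluster delete my-kafka-cluster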

  1. In the terminal window, go to the directory containing the infrastructure plan.

    Warning

    Make sure the directory has no Terraform manifests with the resources you want to keep. Terraform deletes all resources that were created using the manifests in the current directory.

  2. Delete resources:

    1. Run this command:

      terraform destroy
      
    2. Confirm deleting the resources and wait for the operation to complete.

    All the resources described in the Terraform manifests will be deleted.
