Upgrading a Managed Service for Apache Kafka® cluster to migrate from ZooKeeper to KRaft
Managed Service for Apache Kafka® multi-host clusters running version 3.5 or lower use ZooKeeper to manage metadata. Starting with version 3.6, Apache Kafka® uses KRaft as its main metadata synchronization protocol, and ZooKeeper support will be discontinued in Apache Kafka® 4.0. You can migrate clusters with ZooKeeper hosts to the KRaft protocol.
To switch to KRaft in a ZooKeeper cluster:
- Upgrade the cluster version.
- Migrate the cluster to KRaft.

If you no longer need the resources you created, delete them.
Required paid resources
The support cost for this solution includes:
- Managed Service for Apache Kafka® cluster fee, which covers the use of computing resources allocated to hosts (including KRaft hosts) and disk space (see Managed Service for Apache Kafka® pricing).
- Fee for public IP addresses assigned to cluster hosts (see Virtual Private Cloud pricing).
Upgrade the cluster version
Upgrade your Apache Kafka® cluster with ZooKeeper to version 3.9 step by step, without skipping any versions, in the following order: 3.5 → 3.6 → 3.7 → 3.8 → 3.9. If your cluster's version is lower than 3.5, first upgrade the cluster to 3.5.
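The no-skip rule can be expressed as a small helper that, given the current version, lists the versions you still need to apply in order (a sketch using the versions listed above):

```shell
#!/bin/sh
# List the upgrade steps remaining after the given version; the cluster
# must pass through each of them in order, without skipping any.
upgrade_path() {
  start="$1"; found=0; path=""
  for v in 3.5 3.6 3.7 3.8 3.9; do
    if [ "$found" -eq 1 ]; then
      path="${path:+$path }$v"
    fi
    if [ "$v" = "$start" ]; then found=1; fi
  done
  echo "$path"
}

upgrade_path 3.6   # prints: 3.7 3.8 3.9
```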
- Navigate to the folder dashboard and select Managed Service for Kafka.
- In the cluster row, click ⋮ and select Edit.
- In the Version field, select 3.6.
- Click Save.
- Repeat these steps for the remaining Apache Kafka® versions in the given order.
If you do not have the Yandex Cloud CLI installed yet, install and initialize it.
By default, the CLI uses the folder specified when creating the profile. To change the default folder, use the `yc config set folder-id <folder_ID>` command. You can also set a different folder for any specific command using the `--folder-name` or `--folder-id` parameter.
- Start the Apache Kafka® upgrade for your cluster with the following command:

  ```bash
  yc managed-kafka cluster update <cluster_name_or_ID> \
    --version=3.6
  ```

- Repeat the command for the remaining versions in the given order.
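The update-and-repeat loop above can be sketched as follows. The `echo` keeps this a dry run that only prints the commands; remove it to execute them for real (assumes the yc CLI is installed and authenticated):

```shell
#!/bin/sh
# Dry run: print the sequence of yc commands for a stepwise upgrade.
# Remove "echo" to actually run each update (each step can take a while
# and must finish before the next one starts).
CLUSTER="<cluster_name_or_ID>"   # placeholder: your cluster name or ID
for v in 3.6 3.7 3.8 3.9; do
  echo yc managed-kafka cluster update "$CLUSTER" --version="$v"
done
```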
- Open the current Terraform configuration file describing your infrastructure.

- In the `config` section of the Managed Service for Apache Kafka® cluster, specify `3.6` as the new Apache Kafka® version in the `version` field:

  ```hcl
  resource "yandex_mdb_kafka_cluster" "<cluster_name>" {
    ...
    config {
      version = "3.6"
    }
  }
  ```
- Make sure the settings are correct.

  - In the command line, navigate to the directory that contains the current Terraform configuration files defining the infrastructure.

  - Run this command:

    ```bash
    terraform validate
    ```

    Terraform will show any errors found in your configuration files.

- Confirm updating the resources.

  - Run this command to view the planned changes:

    ```bash
    terraform plan
    ```

    If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

  - If everything looks correct, apply the changes:

    - Run this command:

      ```bash
      terraform apply
      ```

    - Confirm updating the resources.

    - Wait for the operation to complete.
Repeat the steps for the remaining Apache Kafka® versions in the given order.
- Get an IAM token for API authentication and put it into an environment variable:

  ```bash
  export IAM_TOKEN="<IAM_token>"
  ```

- Call the Cluster.update method, e.g., via the following cURL request:

  ```bash
  curl \
    --request PATCH \
    --header "Authorization: Bearer $IAM_TOKEN" \
    --header "Content-Type: application/json" \
    --url 'https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters/<cluster_ID>' \
    --data '{
              "updateMask": "configSpec.version",
              "configSpec": {
                "version": "3.6"
              }
            }'
  ```

  Where:

  - `updateMask`: Comma-separated string of settings you want to update. Here, we only specified a single setting, `configSpec.version`.
  - `configSpec.version`: Apache Kafka® version.

  You can get the cluster ID with the list of clusters in the folder.

- Check the server response to make sure your request was successful.

- Repeat the steps for the remaining Apache Kafka® versions in the given order.
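Whichever interface you use, the update call returns an Operation object. A minimal way to check a saved response for failure is to look for an `error` field (the JSON below is an illustrative stub, not real API output):

```shell
#!/bin/sh
# Sketch: treat a response containing an "error" field as a failure.
# A real Operation also carries "id", "done", and metadata fields;
# a full check would parse the JSON properly instead of grepping.
check_response() {
  if printf '%s' "$1" | grep -q '"error"'; then
    echo "request failed"
    return 1
  fi
  echo "request accepted"
}

check_response '{"id":"op-stub","done":false}'   # prints: request accepted
```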
- Get an IAM token for API authentication and put it into an environment variable:

  ```bash
  export IAM_TOKEN="<IAM_token>"
  ```

- Call the ClusterService/Update method, e.g., via the following gRPCurl request:

  ```bash
  grpcurl \
    -format json \
    -import-path ~/cloudapi/ \
    -import-path ~/cloudapi/third_party/googleapis/ \
    -proto ~/cloudapi/yandex/cloud/mdb/kafka/v1/cluster_service.proto \
    -rpc-header "Authorization: Bearer $IAM_TOKEN" \
    -d '{
          "cluster_id": "<cluster_ID>",
          "update_mask": {
            "paths": [ "config_spec.version" ]
          },
          "config_spec": {
            "version": "3.6"
          }
        }' \
    mdb.api.cloud.yandex.net:443 \
    yandex.cloud.mdb.kafka.v1.ClusterService.Update
  ```

  Where:

  - `update_mask`: List of settings to update as an array of strings (`paths[]`). Here, we only specified a single setting, `config_spec.version`.
  - `config_spec.version`: Apache Kafka® version.

  You can get the cluster ID with the list of clusters in the folder.

- Check the server response to make sure your request was successful.

- Repeat the steps for the remaining Apache Kafka® versions in the given order.
Migrate the cluster to KRaft
To migrate a Managed Service for Apache Kafka® cluster with ZooKeeper hosts to the KRaft protocol, configure resources for the KRaft controllers.
- Navigate to the folder dashboard and select Managed Service for Kafka.
- Click the name of your cluster.
- At the top of the screen, click Migrate.
- Select the platform, host type, and host class for the KRaft controllers.
- Click Save.
- Wait for the migration to complete.
Run this command to start the cluster migration:

```bash
yc managed-kafka cluster update <cluster_name_or_ID> \
  --controller-resource-preset "<KRaft_host_class>" \
  --controller-disk-size <storage_size> \
  --controller-disk-type <disk_type>
```

Where:

- `--controller-resource-preset`: KRaft host class.
- `--controller-disk-size`: Disk size of KRaft hosts.
- `--controller-disk-type`: Disk type of KRaft hosts.
Note
For KRaft controllers:
- Only the `network-ssd` and `network-ssd-nonreplicated` disk types are available.
- The Intel Broadwell platform is not available.
To find out the cluster name or ID, get the list of clusters in the folder.
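Assembled with concrete values, the call might look like this. The host class `s2.micro`, the 32 GB size, and the cluster name are illustrative placeholders, not recommendations; `echo` keeps it a dry run:

```shell
#!/bin/sh
# Dry run: build and print the migration command without executing it.
# "s2.micro", 32 (assumed to be in GB, as in the Terraform disk_size
# field), and "my-kafka-cluster" are illustrative placeholders.
cmd='yc managed-kafka cluster update my-kafka-cluster \
  --controller-resource-preset "s2.micro" \
  --controller-disk-size 32 \
  --controller-disk-type network-ssd'
echo "$cmd"
```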
- Open the current Terraform configuration file describing your infrastructure.

- Delete the `config.zookeeper` section for the Managed Service for Apache Kafka® cluster.

- Add the `config.kraft` section with the KRaft controller resource description:

  ```hcl
  resource "yandex_mdb_kafka_cluster" "<cluster_name>" {
    ...
    config {
      ...
      kraft {
        resources {
          disk_size          = <storage_size_in_GB>
          disk_type_id       = "<disk_type>"
          resource_preset_id = "<KRaft_host_class>"
        }
      }
    }
  }
  ```

  Where:

  - `kraft.resources.resource_preset_id`: KRaft host class.
  - `kraft.resources.disk_size`: Disk size of KRaft hosts, in GB.
  - `kraft.resources.disk_type_id`: Disk type of KRaft hosts.

  Note

  For KRaft controllers:

  - Only the `network-ssd` and `network-ssd-nonreplicated` disk types are available.
  - The Intel Broadwell platform is not available.
- Make sure the settings are correct.

  - In the command line, navigate to the directory that contains the current Terraform configuration files defining the infrastructure.

  - Run this command:

    ```bash
    terraform validate
    ```

    Terraform will show any errors found in your configuration files.

- Confirm updating the resources.

  - Run this command to view the planned changes:

    ```bash
    terraform plan
    ```

    If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

  - If everything looks correct, apply the changes:

    - Run this command:

      ```bash
      terraform apply
      ```

    - Confirm updating the resources.

    - Wait for the operation to complete.
- Get an IAM token for API authentication and put it into an environment variable:

  ```bash
  export IAM_TOKEN="<IAM_token>"
  ```

- Call the Cluster.update method, e.g., via the following cURL request:

  ```bash
  curl \
    --request PATCH \
    --header "Authorization: Bearer $IAM_TOKEN" \
    --header "Content-Type: application/json" \
    --url 'https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters/<cluster_ID>' \
    --data '{
              "updateMask": "configSpec.kraft.resources.resourcePresetId,configSpec.kraft.resources.diskSize,configSpec.kraft.resources.diskTypeId",
              "configSpec": {
                "kraft": {
                  "resources": {
                    "resourcePresetId": "<KRaft_host_class>",
                    "diskSize": "<storage_size_in_bytes>",
                    "diskTypeId": "<disk_type>"
                  }
                }
              }
            }'
  ```

  Where:

  - `updateMask`: Comma-separated string of settings you want to update. Here, you need to specify all settings of the resources you want to add: `configSpec.kraft.resources.resourcePresetId`, `configSpec.kraft.resources.diskSize`, and `configSpec.kraft.resources.diskTypeId`.
  - `configSpec.kraft`: KRaft controller configuration:
    - `resources.resourcePresetId`: Host class ID. You can get the list of available host classes with their IDs using the ResourcePreset.list method.
    - `resources.diskSize`: Disk size, in bytes.
    - `resources.diskTypeId`: Disk type.

  Note

  For KRaft controllers:

  - Only the `network-ssd` and `network-ssd-nonreplicated` disk types are available.
  - The Intel Broadwell platform is not available.

  You can get the cluster ID with the list of clusters in the folder.

- Check the server response to make sure your request was successful.
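Note the unit mismatch across interfaces: Terraform's `disk_size` takes GB, while the REST and gRPC `diskSize` fields take bytes. A quick conversion, assuming binary units (1 GB = 1024³ bytes):

```shell
#!/bin/sh
# Convert a size in GB (binary: 1 GB = 1024^3 bytes) to the byte value
# expected by the diskSize field. Binary units are an assumption here;
# check the value your cluster actually reports if unsure.
gb_to_bytes() {
  echo $(( $1 * 1024 * 1024 * 1024 ))
}

gb_to_bytes 32   # prints: 34359738368
```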
- Get an IAM token for API authentication and put it into an environment variable:

  ```bash
  export IAM_TOKEN="<IAM_token>"
  ```

- Call the ClusterService/Update method, e.g., via the following gRPCurl request:

  ```bash
  grpcurl \
    -format json \
    -import-path ~/cloudapi/ \
    -import-path ~/cloudapi/third_party/googleapis/ \
    -proto ~/cloudapi/yandex/cloud/mdb/kafka/v1/cluster_service.proto \
    -rpc-header "Authorization: Bearer $IAM_TOKEN" \
    -d '{
          "cluster_id": "<cluster_ID>",
          "update_mask": {
            "paths": [
              "config_spec.kraft.resources.resource_preset_id",
              "config_spec.kraft.resources.disk_size",
              "config_spec.kraft.resources.disk_type_id"
            ]
          },
          "config_spec": {
            "kraft": {
              "resources": {
                "resource_preset_id": "<KRaft_host_class>",
                "disk_size": "<storage_size_in_bytes>",
                "disk_type_id": "<disk_type>"
              }
            }
          }
        }' \
    mdb.api.cloud.yandex.net:443 \
    yandex.cloud.mdb.kafka.v1.ClusterService.Update
  ```

  Where:

  - `update_mask`: List of settings to update as an array of strings (`paths[]`). Here, you need to specify all settings of the resources you want to add: `config_spec.kraft.resources.resource_preset_id`, `config_spec.kraft.resources.disk_size`, and `config_spec.kraft.resources.disk_type_id`.
  - `config_spec.kraft`: KRaft controller configuration:
    - `resources.resource_preset_id`: Host class ID. You can get the list of available host classes with their IDs using the ResourcePreset.list method.
    - `resources.disk_size`: Disk size, in bytes.
    - `resources.disk_type_id`: Disk type.

  Note

  For KRaft controllers:

  - Only the `network-ssd` and `network-ssd-nonreplicated` disk types are available.
  - The Intel Broadwell platform is not available.

  You can get the cluster ID with the list of clusters in the folder.

- Check the server response to make sure your request was successful.
Delete the resources you created
Some resources are not free of charge. Delete the resources you no longer need to avoid paying for them:
- In the terminal window, go to the directory containing the infrastructure plan.

  Warning

  Make sure the directory has no Terraform manifests with the resources you want to keep. Terraform deletes all resources that were created using the manifests in the current directory.

- Delete resources:

  - Run this command:

    ```bash
    terraform destroy
    ```

  - Confirm deleting the resources and wait for the operation to complete.

  All the resources described in the Terraform manifests will be deleted.