Updating Apache Kafka® cluster settings
After creating a Managed Service for Apache Kafka® cluster, you can:
- Change the cluster name and description
- Change the class and number of broker hosts
- Change the ZooKeeper host class
- Change security group and public access settings
- Change additional cluster settings
- Change Apache Kafka® settings
- Move a cluster to another folder
Learn more about other cluster updates:
- Upgrading Apache Kafka® version.
- Managing disk space in a Managed Service for Apache Kafka® cluster.
- Migrating Apache Kafka® cluster hosts to a different availability zone.
Changing the cluster name and description
- Go to the folder page and select Managed Service for Kafka.
- In the cluster row, click ⋮, then select Edit.
- Under Basic parameters, enter a new name and description for the cluster.
- Click Save.
If you do not have the Yandex Cloud command line interface yet, install and initialize it.
The folder specified in the CLI profile is used by default. You can specify a different folder using the --folder-name or --folder-id parameter.
To change the name and description of a cluster:
- View a description of the update cluster CLI command:
  yc managed-kafka cluster update --help
- Specify a new name and description in the cluster update command:
  yc managed-kafka cluster update <cluster_name_or_ID> \
    --new-name <new_cluster_name> \
    --description <new_cluster_description>
To find out the cluster name or ID, get a list of clusters in the folder.
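For example, a cluster hypothetically named my-kafka could be renamed as follows. The cluster name, new name, and description below are placeholders, not values from your environment:

yc managed-kafka cluster update my-kafka \
  --new-name kafka-prod \
  --description "Production Apache Kafka cluster"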
Alert
Do not change the cluster name using Terraform. This will delete the existing cluster and create a new one.
To update the cluster description:
- Open the current Terraform configuration file with an infrastructure plan.
  For more information about creating this file, see Creating clusters.
- In the Managed Service for Apache Kafka® cluster description, change the description parameter value:
  resource "yandex_mdb_kafka_cluster" "<cluster_name>" {
    name        = "<cluster_name>"
    description = "<new_cluster_description>"
    ...
  }
- Make sure the settings are correct.
  - Using the command line, navigate to the folder that contains the up-to-date Terraform configuration files with an infrastructure plan.
  - Run the command:
    terraform validate
    If there are errors in the configuration files, Terraform will point to them.
- Confirm updating the resources.
  - Run the command to view planned changes:
    terraform plan
    If the resource configuration descriptions are correct, the terminal will display a list of the resources to modify and their parameters. This is a test step. No resources are updated.
  - If you are happy with the planned changes, apply them:
    - Run the command:
      terraform apply
    - Confirm the update of resources.
    - Wait for the operation to complete.
For more information, see the Terraform provider documentation.
Time limits
The Terraform provider limits the amount of time for all Managed Service for Apache Kafka® cluster operations to complete to 60 minutes.
Operations exceeding the set timeout are interrupted.
How do I change these limits?
Add the timeouts block to the cluster description, for example:
resource "yandex_mdb_redis_cluster" "<cluster_name>" {
...
timeouts {
create = "1h30m" # 1 hour 30 minutes
update = "2h" # 2 hours
delete = "30m" # 30 minutes
}
}
To change the cluster name and description, use the update REST API method for the Cluster resource or the ClusterService/Update gRPC API call and provide the following in the request:
- Cluster ID in the clusterId parameter. To find out the cluster ID, get a list of clusters in the folder.
- New cluster name in the name parameter.
- New cluster description in the description parameter.
- List of fields to update (in this case, name and description) in the updateMask parameter.
Warning
This API method resets to their default values all parameters of the object being modified that were not explicitly passed in the request. To avoid this, list the settings you want to change in the updateMask parameter as a single comma-separated string.
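For illustration, such a request might look as follows with curl. This is only a sketch: it assumes the public mdb.api.cloud.yandex.net endpoint, that the update method is invoked with an HTTP PATCH, and that an IAM token is exported as IAM_TOKEN; check the API reference for the exact request format.

# Update the cluster name and description; replace the placeholders before running.
curl \
  --request PATCH \
  --header "Authorization: Bearer $IAM_TOKEN" \
  --header "Content-Type: application/json" \
  --url 'https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters/<cluster_ID>' \
  --data '{
    "updateMask": "name,description",
    "name": "<new_cluster_name>",
    "description": "<new_cluster_description>"
  }'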
Changing the broker host class and number
You can increase the number of broker hosts if the following conditions are met:
- The cluster uses Apache Kafka® 3.5 or lower. Clusters running Apache Kafka® 3.6 or higher use the Apache Kafka® Raft protocol; therefore, such clusters always have three Apache Kafka® hosts.
- The cluster contains at least two broker hosts in different availability zones.
You cannot decrease the number of broker hosts. To meet the cluster fault tolerance conditions, you need at least three broker hosts.
When changing the broker host class:
- A cluster with a single broker host will be unavailable for a few minutes, and topic connections will be terminated.
- In a cluster with multiple broker hosts, the hosts will be stopped and updated one at a time. Stopped hosts will be unavailable for a few minutes.
We recommend changing the broker host class only when there is no active workload on the cluster.
To change the class and number of hosts:
- Go to the folder page and select Managed Service for Kafka.
- In the cluster row, click ⋮, then select Edit.
- Change the required settings:
  - To edit the broker host class, select a new Host class.
  - Change Number of brokers in zone.
- Click Save.
If you do not have the Yandex Cloud command line interface yet, install and initialize it.
The folder specified in the CLI profile is used by default. You can specify a different folder using the --folder-name or --folder-id parameter.
To change the class and number of hosts:
- Get information about the cluster:
  yc managed-kafka cluster list
  yc managed-kafka cluster get <cluster_name_or_ID>
- View a description of the update cluster CLI command:
  yc managed-kafka cluster update --help
- To increase the number of broker hosts, run this command:
  yc managed-kafka cluster update <cluster_name_or_ID> \
    --brokers-count <number>
- To change the broker host class, run this command:
  yc managed-kafka cluster update <cluster_name_or_ID> \
    --resource-preset <host_class>
To find out the cluster name or ID, get a list of clusters in the folder.
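For example, assuming a cluster hypothetically named my-kafka and s2.medium as a host class available in your cloud (both are placeholders), the two updates could look like this:

# Increase the broker count, then switch the broker host class.
yc managed-kafka cluster update my-kafka --brokers-count 2
yc managed-kafka cluster update my-kafka --resource-preset s2.medium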
- Open the current Terraform configuration file with an infrastructure plan.
  For more information about creating this file, see Creating clusters.
- In the Managed Service for Apache Kafka® cluster description, change the brokers_count parameter to increase the number of broker hosts:
  resource "yandex_mdb_kafka_cluster" "<cluster_name>" {
    config {
      brokers_count = <number_of_broker_hosts>
      ...
    }
    ...
  }
- In the Managed Service for Apache Kafka® cluster description, edit the value of the resource_preset_id parameter under kafka.resources to specify a new broker host class:
  resource "yandex_mdb_kafka_cluster" "<cluster_name>" {
    ...
    kafka {
      resources {
        resource_preset_id = "<broker_host_class>"
        ...
      }
    }
  }
- Make sure the settings are correct.
  - Using the command line, navigate to the folder that contains the up-to-date Terraform configuration files with an infrastructure plan.
  - Run the command:
    terraform validate
    If there are errors in the configuration files, Terraform will point to them.
- Confirm updating the resources.
  - Run the command to view planned changes:
    terraform plan
    If the resource configuration descriptions are correct, the terminal will display a list of the resources to modify and their parameters. This is a test step. No resources are updated.
  - If you are happy with the planned changes, apply them:
    - Run the command:
      terraform apply
    - Confirm the update of resources.
    - Wait for the operation to complete.
For more information, see the Terraform provider documentation.
Time limits
The Terraform provider limits the amount of time for all Managed Service for Apache Kafka® cluster operations to complete to 60 minutes.
Operations exceeding the set timeout are interrupted.
How do I change these limits?
Add the timeouts block to the cluster description, for example:
resource "yandex_mdb_kafka_cluster" "<cluster_name>" {
...
timeouts {
create = "1h30m" # 1 hour 30 minutes
update = "2h" # 2 hours
delete = "30m" # 30 minutes
}
}
To change the class and number of broker hosts, use the update REST API method for the Cluster resource or the ClusterService/Update gRPC API call and provide the following in the request:
- Cluster ID in the clusterId parameter. To find out the cluster ID, get a list of clusters in the folder.
- Broker host class in the configSpec.kafka.resources.resourcePresetId parameter.
- Number of broker hosts in the configSpec.brokersCount parameter.
- List of settings to update in the updateMask parameter.
Warning
This API method resets to their default values all parameters of the object being modified that were not explicitly passed in the request. To avoid this, list the settings you want to change in the updateMask parameter as a single comma-separated string.
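For illustration, the same update might be sent with curl as sketched below. It assumes the public mdb.api.cloud.yandex.net endpoint, an HTTP PATCH for the update method, and an IAM token exported as IAM_TOKEN; replace the placeholders and check the API reference for the exact request format.

# Change the broker host class and the number of broker hosts.
curl \
  --request PATCH \
  --header "Authorization: Bearer $IAM_TOKEN" \
  --header "Content-Type: application/json" \
  --url 'https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters/<cluster_ID>' \
  --data '{
    "updateMask": "configSpec.kafka.resources.resourcePresetId,configSpec.brokersCount",
    "configSpec": {
      "kafka": { "resources": { "resourcePresetId": "<host_class>" } },
      "brokersCount": "<number_of_broker_hosts>"
    }
  }'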
Changing the ZooKeeper host class
Note
The ZooKeeper host class is used only in clusters with Apache Kafka® 3.5 or lower.
- Go to the folder page and select Managed Service for Kafka.
- In the cluster row, click ⋮, then select Edit.
- Select a new ZooKeeper host class.
- Click Save.
If you do not have the Yandex Cloud command line interface yet, install and initialize it.
The folder specified in the CLI profile is used by default. You can specify a different folder using the --folder-name or --folder-id parameter.
To change the class of ZooKeeper hosts:
- Get information about the cluster:
  yc managed-kafka cluster list
  yc managed-kafka cluster get <cluster_name_or_ID>
- View a description of the update cluster CLI command:
  yc managed-kafka cluster update --help
- To change the ZooKeeper host class, run this command:
  yc managed-kafka cluster update <cluster_name_or_ID> \
    --zookeeper-resource-preset <host_class>
To find out the cluster name or ID, get a list of clusters in the folder.
- Open the current Terraform configuration file with an infrastructure plan.
  For more information about creating this file, see Creating clusters.
- In the Managed Service for Apache Kafka® cluster description, edit the value of the resource_preset_id parameter under zookeeper.resources to specify a new ZooKeeper host class:
  resource "yandex_mdb_kafka_cluster" "<cluster_name>" {
    ...
    zookeeper {
      resources {
        resource_preset_id = "<ZooKeeper_host_class>"
        ...
      }
    }
  }
- Make sure the settings are correct.
  - Using the command line, navigate to the folder that contains the up-to-date Terraform configuration files with an infrastructure plan.
  - Run the command:
    terraform validate
    If there are errors in the configuration files, Terraform will point to them.
- Confirm updating the resources.
  - Run the command to view planned changes:
    terraform plan
    If the resource configuration descriptions are correct, the terminal will display a list of the resources to modify and their parameters. This is a test step. No resources are updated.
  - If you are happy with the planned changes, apply them:
    - Run the command:
      terraform apply
    - Confirm the update of resources.
    - Wait for the operation to complete.
For more information, see the Terraform provider documentation.
Time limits
The Terraform provider limits the amount of time for all Managed Service for Apache Kafka® cluster operations to complete to 60 minutes.
Operations exceeding the set timeout are interrupted.
How do I change these limits?
Add the timeouts block to the cluster description, for example:
resource "yandex_mdb_kafka_cluster" "<cluster_name>" {
...
timeouts {
create = "1h30m" # 1 hour 30 minutes
update = "2h" # 2 hours
delete = "30m" # 30 minutes
}
}
To change the class of ZooKeeper hosts, use the update REST API method for the Cluster resource or the ClusterService/Update gRPC API call and provide the following in the request:
- Cluster ID in the clusterId parameter. To find out the cluster ID, get a list of clusters in the folder.
- ZooKeeper host class in the configSpec.zookeeper.resources.resourcePresetId parameter.
- List of settings to update in the updateMask parameter.
Warning
This API method resets to their default values all parameters of the object being modified that were not explicitly passed in the request. To avoid this, list the settings you want to change in the updateMask parameter as a single comma-separated string.
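As a sketch, the request could be sent with curl as shown below. It assumes the public mdb.api.cloud.yandex.net endpoint, an HTTP PATCH for the update method, and an IAM token exported as IAM_TOKEN; replace the placeholder values and verify the request format against the API reference.

# Change the ZooKeeper host class only.
curl \
  --request PATCH \
  --header "Authorization: Bearer $IAM_TOKEN" \
  --header "Content-Type: application/json" \
  --url 'https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters/<cluster_ID>' \
  --data '{
    "updateMask": "configSpec.zookeeper.resources.resourcePresetId",
    "configSpec": {
      "zookeeper": { "resources": { "resourcePresetId": "<ZooKeeper_host_class>" } }
    }
  }'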
Changing security group and public access settings
- Go to the folder page and select Managed Service for Kafka.
- In the cluster row, click ⋮, then select Edit.
- Under Network settings, select security groups for cluster network traffic.
- Enable or disable public access to the cluster via the Public access option.
- Click Save.
Restart the cluster for the new public access settings to take effect.
If you do not have the Yandex Cloud command line interface yet, install and initialize it.
The folder specified in the CLI profile is used by default. You can specify a different folder using the --folder-name or --folder-id parameter.
To edit the list of security groups for your cluster:
- View a description of the update cluster CLI command:
  yc managed-kafka cluster update --help
- Specify the security groups and public access settings in the update cluster command:
  yc managed-kafka cluster update <cluster_name_or_ID> \
    --security-group-ids <list_of_security_groups> \
    --assign-public-ip=<public_access>
  Where:
  - --security-group-ids: List of cluster security group IDs.
  - --assign-public-ip: Public access to the cluster, true or false.
To find out the cluster name or ID, get a list of clusters in the folder.
Restart the cluster for the new public access settings to take effect.
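For example, with a cluster hypothetically named my-kafka and a made-up security group ID (both are placeholders for your own values), enabling public access could look like this:

# Attach one security group and enable public access.
yc managed-kafka cluster update my-kafka \
  --security-group-ids enp1234567890example \
  --assign-public-ip=true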
- Open the current Terraform configuration file with an infrastructure plan.
  For more information about creating this file, see Creating clusters.
- Change the values of the security_group_ids and assign_public_ip parameters in the cluster description:
  resource "yandex_mdb_kafka_cluster" "<cluster_name>" {
    ...
    security_group_ids = [ <list_of_security_groups> ]
    ...
    config {
      assign_public_ip = "<public_access>"
      ...
    }
  }
  Where:
  - security_group_ids: List of cluster security group IDs.
  - assign_public_ip: Public access to the cluster, true or false.
- Make sure the settings are correct.
  - Using the command line, navigate to the folder that contains the up-to-date Terraform configuration files with an infrastructure plan.
  - Run the command:
    terraform validate
    If there are errors in the configuration files, Terraform will point to them.
- Confirm updating the resources.
  - Run the command to view planned changes:
    terraform plan
    If the resource configuration descriptions are correct, the terminal will display a list of the resources to modify and their parameters. This is a test step. No resources are updated.
  - If you are happy with the planned changes, apply them:
    - Run the command:
      terraform apply
    - Confirm the update of resources.
    - Wait for the operation to complete.
Restart the cluster for the new public access settings to take effect.
For more information, see the Terraform provider documentation.
Time limits
The Terraform provider limits the amount of time for all Managed Service for Apache Kafka® cluster operations to complete to 60 minutes.
Operations exceeding the set timeout are interrupted.
How do I change these limits?
Add the timeouts block to the cluster description, for example:
resource "yandex_mdb_kafka_cluster" "<cluster_name>" {
...
timeouts {
create = "1h30m" # 1 hour 30 minutes
update = "2h" # 2 hours
delete = "30m" # 30 minutes
}
}
To change security group and public access settings, use the update REST API method for the Cluster resource or the ClusterService/Update gRPC API call and provide the following in the request:
- Cluster ID in the clusterId parameter. To find out the cluster ID, get a list of clusters in the folder.
- List of security group IDs in the securityGroupIds parameter.
- Public access settings in the configSpec.assignPublicIp parameter.
- List of settings to update in the updateMask parameter.
Warning
This API method resets to their default values all parameters of the object being modified that were not explicitly passed in the request. To avoid this, list the settings you want to change in the updateMask parameter as a single comma-separated string.
Restart the cluster for the new public access settings to take effect.
You may need to additionally set up security groups to connect to the cluster.
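For illustration, the request could be sent with curl as sketched below. It assumes the public mdb.api.cloud.yandex.net endpoint, an HTTP PATCH for the update method, and an IAM token exported as IAM_TOKEN; the security group ID is a placeholder, and the request format should be verified against the API reference.

# Attach a security group and enable public access; restart the cluster afterwards.
curl \
  --request PATCH \
  --header "Authorization: Bearer $IAM_TOKEN" \
  --header "Content-Type: application/json" \
  --url 'https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters/<cluster_ID>' \
  --data '{
    "updateMask": "securityGroupIds,configSpec.assignPublicIp",
    "securityGroupIds": ["<security_group_ID>"],
    "configSpec": { "assignPublicIp": true }
  }'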
Changing additional cluster settings
- Go to the folder page and select Managed Service for Kafka.
- In the cluster row, click ⋮, then select Edit.
- Change additional cluster settings:
  - Maintenance window: maintenance window settings:
    - To enable maintenance at any time, select arbitrary (default).
    - To specify the preferred maintenance start time, select by schedule and specify the desired day of the week and UTC hour. For example, you can choose a time when the cluster is least loaded.
    Maintenance operations are carried out on both enabled and disabled clusters. They may include updating the DBMS, applying patches, and so on.
  - Deletion protection: Manages protection of the cluster, its databases, and users against accidental deletion.
    Cluster deletion protection will not prevent a manual connection to the cluster to delete data.
  - To manage data schemas using Managed Schema Registry, enable the Schema registry setting.
    Warning
    Once enabled, the Schema registry setting cannot be disabled.
  - To allow sending requests to the Apache Kafka® API, enable Kafka Rest API.
    It is implemented based on the Karapace open-source tool. The Karapace API is compatible with the Confluent REST Proxy API with only minor exceptions.
    Warning
    You cannot disable Kafka Rest API once it is enabled.
- Click Save.
If you do not have the Yandex Cloud command line interface yet, install and initialize it.
The folder specified in the CLI profile is used by default. You can specify a different folder using the --folder-name or --folder-id parameter.
To change additional cluster settings:
- View a description of the update cluster CLI command:
  yc managed-kafka cluster update --help
- Run the following command with a list of settings to update:
  yc managed-kafka cluster update <cluster_name_or_ID> \
    --maintenance-window type=<maintenance_type>,day=<day_of_week>,hour=<hour> \
    --deletion-protection=<deletion_protection> \
    --schema-registry=<data_schema_management>
  You can change the following settings:
  - --maintenance-window: Maintenance window settings (including for disabled clusters), where type is the maintenance type:
    - anytime (default): Any time.
    - weekly: On a schedule. If setting this value, specify the day of week and the hour:
      - day: Day of week in DDD format: MON, TUE, WED, THU, FRI, SAT, or SUN.
      - hour: Hour (UTC) in HH format: 1 to 24.
  - --deletion-protection: Protection of the cluster, its databases, and users against accidental deletion, true or false.
    Cluster deletion protection will not prevent a manual connection to a cluster to delete data.
  - --schema-registry: Enable this option to manage data schemas using Managed Schema Registry.
    Warning
    Once enabled, the Schema registry setting cannot be disabled.
To find out the cluster name or ID, get a list of clusters in the folder.
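For example, assuming a cluster hypothetically named my-kafka (a placeholder for your own cluster), scheduling maintenance for Saturdays at 12:00 UTC and turning on deletion protection could look like this:

# Set a weekly maintenance window and enable deletion protection.
yc managed-kafka cluster update my-kafka \
  --maintenance-window type=weekly,day=SAT,hour=12 \
  --deletion-protection=true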
- Open the current Terraform configuration file with an infrastructure plan.
  For more information about creating this file, see Creating clusters.
- To set up the maintenance window (for disabled clusters as well), add the maintenance_window section to the cluster description:
  resource "yandex_mdb_kafka_cluster" "<cluster_name>" {
    ...
    maintenance_window {
      type = <maintenance_type>
      day  = <day_of_week>
      hour = <hour>
    }
    ...
  }
  Where:
  - type: Maintenance type. The possible values include:
    - anytime: Anytime.
    - weekly: By schedule.
  - day: Day of the week for the weekly type in DDD format, e.g., MON.
  - hour: Hour of the day for the weekly type in HH format, e.g., 21.
- To enable cluster protection against accidental deletion by a user of your cloud, add the deletion_protection field set to true to your cluster description:
  resource "yandex_mdb_kafka_cluster" "<cluster_name>" {
    ...
    deletion_protection = <deletion_protection>
  }
  Cluster deletion protection will not prevent a manual connection to a cluster to delete data.
- To enable data schema management using Managed Schema Registry, add the config.schema_registry field set to true to the cluster description:
  resource "yandex_mdb_kafka_cluster" "<cluster_name>" {
    ...
    config {
      ...
      schema_registry = <data_schema_management>
    }
  }
  Warning
  Once enabled, the Schema registry setting cannot be disabled.
- Make sure the settings are correct.
  - Using the command line, navigate to the folder that contains the up-to-date Terraform configuration files with an infrastructure plan.
  - Run the command:
    terraform validate
    If there are errors in the configuration files, Terraform will point to them.
- Confirm updating the resources.
  - Run the command to view planned changes:
    terraform plan
    If the resource configuration descriptions are correct, the terminal will display a list of the resources to modify and their parameters. This is a test step. No resources are updated.
  - If you are happy with the planned changes, apply them:
    - Run the command:
      terraform apply
    - Confirm the update of resources.
    - Wait for the operation to complete.
For more information, see the Terraform provider documentation.
Time limits
The Terraform provider limits the amount of time for all Managed Service for Apache Kafka® cluster operations to complete to 60 minutes.
Operations exceeding the set timeout are interrupted.
How do I change these limits?
Add the timeouts block to the cluster description, for example:
resource "yandex_mdb_kafka_cluster" "<cluster_name>" {
...
timeouts {
create = "1h30m" # 1 hour 30 minutes
update = "2h" # 2 hours
delete = "30m" # 30 minutes
}
}
To change additional cluster settings, use the update REST API method for the Cluster resource or the ClusterService/Update gRPC API call and provide the following in the request:
- Cluster ID in the clusterId parameter. To find out the cluster ID, get a list of clusters in the folder.
- Maintenance window settings (including for disabled clusters) in the maintenanceWindow parameter.
- Cluster deletion protection settings in the deletionProtection parameter.
  Cluster deletion protection will not prevent a manual connection to a cluster to delete data.
- Settings for data schema management using Managed Schema Registry in the configSpec.schemaRegistry parameter.
  Warning
  Once enabled, the Schema registry setting cannot be disabled.
- List of cluster configuration fields to update in the updateMask parameter.
Warning
This API method resets to their default values all parameters of the object being modified that were not explicitly passed in the request. To avoid this, list the settings you want to change in the updateMask parameter as a single comma-separated string.
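For illustration, a request that enables deletion protection and the schema registry could look as follows with curl. This is only a sketch: it assumes the public mdb.api.cloud.yandex.net endpoint, an HTTP PATCH for the update method, and an IAM token exported as IAM_TOKEN; remember that the schema registry cannot be disabled once enabled, and verify the request format against the API reference.

# Enable deletion protection and Managed Schema Registry.
curl \
  --request PATCH \
  --header "Authorization: Bearer $IAM_TOKEN" \
  --header "Content-Type: application/json" \
  --url 'https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters/<cluster_ID>' \
  --data '{
    "updateMask": "deletionProtection,configSpec.schemaRegistry",
    "deletionProtection": true,
    "configSpec": { "schemaRegistry": true }
  }'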
Changing Apache Kafka® settings
- Go to the folder page and select Managed Service for Kafka.
- In the cluster row, click ⋮, then select Edit.
- Under Kafka Settings, click Settings.
  For more information, see Apache Kafka® settings.
- Click Save.
If you do not have the Yandex Cloud command line interface yet, install and initialize it.
The folder specified in the CLI profile is used by default. You can specify a different folder using the --folder-name or --folder-id parameter.
To update the Apache Kafka® settings:
- View a description of the CLI update cluster settings command:
  yc managed-kafka cluster update --help
- Change the Apache Kafka® settings in the cluster update command (the example below does not list all possible parameters):
  yc managed-kafka cluster update <cluster_name_or_ID> \
    --compression-type <compression_type> \
    --log-flush-interval-messages <number_of_messages_in_log> \
    --log-flush-interval-ms <maximum_time_to_store_messages>
  Where:
  - --log-flush-interval-messages: Number of messages in the log to trigger flushing to disk.
  - --log-flush-interval-ms: Maximum time a message can be stored in memory before flushing to disk.
To find out the cluster name or ID, get a list of clusters in the folder.
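For example, assuming a cluster hypothetically named my-kafka and that gzip is an accepted value for --compression-type in your CLI version (both are assumptions; the numeric values are arbitrary examples), the command could look like this:

# Apply a compression codec and log flush thresholds to the cluster.
yc managed-kafka cluster update my-kafka \
  --compression-type gzip \
  --log-flush-interval-messages 100000 \
  --log-flush-interval-ms 60000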
- Open the current Terraform configuration file with an infrastructure plan.
  For more information about creating this file, see Creating clusters.
- In the Managed Service for Apache Kafka® cluster description, modify the values of the parameters in the kafka.kafka_config section (the example below does not list all possible settings):
  resource "yandex_mdb_kafka_cluster" "<cluster_name>" {
    ...
    config {
      kafka {
        ...
        kafka_config {
          compression_type            = "<compression_type>"
          log_flush_interval_messages = <maximum_number_of_messages_in_memory>
          ...
        }
      }
    }
  }
- Make sure the settings are correct.
  - Using the command line, navigate to the folder that contains the up-to-date Terraform configuration files with an infrastructure plan.
  - Run the command:
    terraform validate
    If there are errors in the configuration files, Terraform will point to them.
- Confirm updating the resources.
  - Run the command to view planned changes:
    terraform plan
    If the resource configuration descriptions are correct, the terminal will display a list of the resources to modify and their parameters. This is a test step. No resources are updated.
  - If you are happy with the planned changes, apply them:
    - Run the command:
      terraform apply
    - Confirm the update of resources.
    - Wait for the operation to complete.
For more information, see the Terraform provider documentation.
Time limits
The Terraform provider limits the amount of time for all Managed Service for Apache Kafka® cluster operations to complete to 60 minutes.
Operations exceeding the set timeout are interrupted.
How do I change these limits?
Add the timeouts block to the cluster description, for example:
resource "yandex_mdb_kafka_cluster" "<cluster_name>" {
...
timeouts {
create = "1h30m" # 1 hour 30 minutes
update = "2h" # 2 hours
delete = "30m" # 30 minutes
}
}
To change Apache Kafka® settings, use the update REST API method for the Cluster resource or the ClusterService/Update gRPC API call and provide the following in the request:
- Cluster ID in the clusterId parameter. To find out the cluster ID, get a list of clusters in the folder.
- New values of the Apache Kafka® settings in:
  - configSpec.kafka.kafkaConfig_2_8: If you are using Apache Kafka® 2.8.
  - configSpec.kafka.kafkaConfig_3: If you are using Apache Kafka® 3.x.
- List of settings to update in the updateMask parameter.
Warning
This API method resets to their default values all parameters of the object being modified that were not explicitly passed in the request. To avoid this, list the settings you want to change in the updateMask parameter as a single comma-separated string.
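As a rough sketch, such a request might look as follows with curl for an Apache Kafka® 3.x cluster. The endpoint, the HTTP PATCH method, the IAM_TOKEN variable, and the camelCase field names inside kafkaConfig_3 (compressionType, logFlushIntervalMs) are assumptions based on the parameter names used elsewhere in this article; verify them against the API reference before use.

# Update two Apache Kafka settings via configSpec.kafka.kafkaConfig_3.
curl \
  --request PATCH \
  --header "Authorization: Bearer $IAM_TOKEN" \
  --header "Content-Type: application/json" \
  --url 'https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters/<cluster_ID>' \
  --data '{
    "updateMask": "configSpec.kafka.kafkaConfig_3",
    "configSpec": {
      "kafka": {
        "kafkaConfig_3": {
          "compressionType": "<compression_type>",
          "logFlushIntervalMs": "<maximum_time_to_store_messages>"
        }
      }
    }
  }'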
Moving a cluster to another folder
If you do not have the Yandex Cloud command line interface yet, install and initialize it.
The folder specified in the CLI profile is used by default. You can specify a different folder using the --folder-name or --folder-id parameter.
To move a cluster:
- View a description of the CLI move cluster command:
  yc managed-kafka cluster move --help
- Specify the destination folder in the move cluster command:
  yc managed-kafka cluster move <cluster_name_or_ID> \
    --destination-folder-name=<destination_folder_name>
To find out the cluster name or ID, get a list of clusters in the folder.
To move a cluster, use the move REST API method for the Cluster resource or the ClusterService/Move gRPC API call and provide the following in the request:
- Cluster ID in the clusterId parameter. To find out the cluster ID, get a list of clusters in the folder.
- ID of the destination folder in the destinationFolderId parameter.
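For illustration, the move request could be sent with curl as sketched below. The mdb.api.cloud.yandex.net endpoint, the HTTP POST method, the :move path suffix, and the IAM_TOKEN variable are assumptions based on the usual Yandex Cloud API pattern; check the API reference for the exact request format.

# Move the cluster to another folder.
curl \
  --request POST \
  --header "Authorization: Bearer $IAM_TOKEN" \
  --header "Content-Type: application/json" \
  --url 'https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters/<cluster_ID>:move' \
  --data '{
    "destinationFolderId": "<destination_folder_ID>"
  }'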