Creating an Apache Kafka® cluster
A Managed Service for Apache Kafka® cluster consists of one or more broker hosts that hold topics and their partitions. Producers and consumers work with these topics by connecting to the cluster hosts.
Note
The available disk types depend on the selected host class.
Differences in configurations of clusters with ZooKeeper and clusters that use the Apache Kafka® Raft protocol
Different Apache Kafka® versions use different tools to store cluster metadata, state, and configuration:
- Versions 3.5 and lower support ZooKeeper.
- Versions 3.6 through 3.9 support both ZooKeeper and Apache Kafka® Raft.
- Versions 4.0 and higher support Apache Kafka® Raft only.
Hosts with ZooKeeper
If you select Apache Kafka® version 3.5 or lower, only ZooKeeper is supported.
If you create a cluster with more than one host, three dedicated ZooKeeper hosts will be added to the cluster.
Hosts with KRaft
If you select Apache Kafka® version 3.6 or higher, the KRaft protocol is also supported.
The KRaft protocol is available in one of the following modes:
- KRaft (combined mode): One Apache Kafka® host accommodates both a broker and a KRaft metadata controller. The cluster always gets exactly three Apache Kafka® hosts, in one of these configurations:
  - Three hosts in the same availability zone.
  - One host in each of three availability zones.
  You cannot set the number of broker hosts manually.
- KRaft (on separate hosts): Brokers and KRaft metadata controllers run on separate hosts. When you create a multi-host cluster, three dedicated KRaft hosts are added to it.
  You set the number of broker hosts manually.
  You cannot delete KRaft hosts: their number is fixed.
For more information about the differences in cluster configurations with ZooKeeper and KRaft, see Resource relationships in Managed Service for Apache Kafka®.
Getting started
- Calculate the minimum storage size for topics.
- Assign the following roles to your Yandex Cloud account:
- managed-kafka.editor or higher: To create a cluster.
- vpc.user: To use the cluster network.
- kms.keys.user: To manage disk encryption.
If you specify security group IDs when creating a Managed Service for Apache Kafka® cluster, you may also need to configure security groups to connect to the cluster.
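If you prefer the CLI, a minimal sketch of assigning these roles is shown below. The folder ID and user account ID are placeholders you need to replace, and your account may already have some of the required roles.

```
# Assign the roles needed to create and operate a cluster.
# <folder_ID> and <user_account_ID> are placeholders, not real values.
for role in managed-kafka.editor vpc.user kms.keys.user; do
  yc resource-manager folder add-access-binding <folder_ID> \
    --role "$role" \
    --subject userAccount:<user_account_ID>
done
```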
Creating a cluster with ZooKeeper
Warning
When creating a cluster with ZooKeeper, do not specify the KRaft settings.
To create a Managed Service for Apache Kafka® cluster:
- In the management console, go to the appropriate folder.
- Go to Managed Service for Kafka.
- Click Create cluster.
- Under Basic parameters:
  - Enter a name and description for the Managed Service for Apache Kafka® cluster. The cluster name must be unique within the folder.
  - Select the environment where you want to create the cluster (you cannot change the environment once the cluster is created):
    - PRODUCTION: For stable versions of your apps.
    - PRESTABLE: For testing purposes. The prestable environment is similar to the production environment and likewise covered by an SLA, but it is the first to get new features, improvements, and bug fixes. In the prestable environment, you can test new versions for compatibility with your application.
  - Select the Apache Kafka® version.
- Under Host class, select the platform, host type, and host class.
  The host class defines the technical specifications of the VMs the Apache Kafka® nodes are deployed on. All available options are listed under Host classes.
  When you change the host class for a Managed Service for Apache Kafka® cluster, the specifications of all existing instances change as well.
- Under Storage:
  - Select the disk type.
    Warning
    You cannot change the disk type after you create a cluster.
    The selected type determines the increments in which you can change the disk size:
    - Network HDD and SSD storage: In increments of 1 GB.
    - Local SSD storage:
      - For Intel Cascade Lake: In increments of 100 GB.
      - For Intel Ice Lake: In increments of 368 GB.
    - Non-replicated SSD storage: In increments of 93 GB.
  - Select the storage size to use for data.
- Under Automatic increase of storage size, set the storage utilization thresholds that will trigger storage expansion when reached:
  - In the Increase size field, select one or both thresholds:
    - In the maintenance window when full at more than: Scheduled increase threshold. When reached, the storage size increases during the next maintenance window.
    - Immediately when full at more than: Immediate increase threshold. When reached, the storage size increases immediately.
  - Specify a threshold value as a percentage of the total storage size. If you select both thresholds, make sure the immediate increase threshold is higher than the scheduled one.
  - Set Maximum storage size.
- Under Network settings:
  - Select one or more availability zones to place your Apache Kafka® broker hosts in.
    Warning
    If you create a Managed Service for Apache Kafka® cluster with a single availability zone, you will not be able to increase the number of zones and broker hosts later.
  - Select the network.
  - Select subnets in each availability zone for this network. To create a new subnet, click Create next to the availability zone in question.
    Note
    For an Apache Kafka® cluster with multiple broker hosts, specify subnets in each availability zone even if you plan to place broker hosts only in some of them. You need these subnets to deploy three ZooKeeper hosts, one per availability zone. For more information, see Resource relationships.
  - Select security groups for the Managed Service for Apache Kafka® cluster's network traffic.
  - To enable internet access to broker hosts, select Public access. In this case, you can only connect to them over SSL. For more information, see Connecting to topics in a cluster.
- Under Hosts:
  - Specify the number of Apache Kafka® broker hosts to place in each of the selected availability zones.
    When selecting the number of hosts, consider the following:
    - If you add more than one host to the cluster, the system automatically adds three ZooKeeper hosts.
    - You need at least two hosts to enable replication in a Managed Service for Apache Kafka® cluster.
    - High availability of a Managed Service for Apache Kafka® cluster depends on meeting specific conditions.
  - Select ZooKeeper (on separate hosts) as the coordination service.
- If you specified more than one broker host, under ZooKeeper host class, specify the properties of the ZooKeeper hosts to place in each of the selected availability zones.
- Specify additional Managed Service for Apache Kafka® cluster settings, if required:
  - Maintenance window: Maintenance window settings:
    - To enable maintenance at any time, select arbitrary (default).
    - To specify the preferred maintenance start time, select by schedule and specify the desired day of the week and UTC hour. For example, you can choose a time when the cluster is least loaded.
    Maintenance operations are carried out on both enabled and disabled clusters. They may include updating the DBMS, applying patches, and so on.
  - Deletion protection: Manages cluster protection against accidental deletion.
    Even with cluster deletion protection enabled, one can still delete a user or topic, or connect manually and delete the data.
  - Disk encryption: Enable this setting to encrypt the disks with a custom KMS key.
    - To create a new key, click Create.
    - To use a key you created earlier, select it in the KMS key field.
    To learn more about disk encryption, see Storage.
    Warning
    You can enable disk encryption only when creating a cluster.
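    If you want to prepare an encryption key in advance, you can also create one with the CLI. This is only a sketch: the key name (kafka-disk-key) and the algorithm value are example assumptions to adapt to your setup.

    ```
    # Create a symmetric KMS key for disk encryption (example name and algorithm).
    yc kms symmetric-key create \
      --name kafka-disk-key \
      --default-algorithm aes-256
    ```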
  - Schema registry: Enable this setting to manage data schemas using Managed Schema Registry.
    Warning
    You cannot disable data schema management using Managed Schema Registry after connecting it.
  - Kafka Rest API: Enable this setting to allow sending requests to the Apache Kafka® API.
    It is implemented based on the Karapace open-source tool. The Karapace API is compatible with the Confluent REST Proxy API with only minor exceptions.
    Warning
    You cannot disable Kafka Rest API once it is enabled.
  - Kafka UI: Enable this setting to use the Apache Kafka® web UI.
- Configure the Apache Kafka® settings, if required.
- Click Create.
- Wait until the Managed Service for Apache Kafka® cluster is ready: its status on the Managed Service for Apache Kafka® dashboard will change to Running, and its state to Alive. This may take some time.
If you do not have the Yandex Cloud CLI installed yet, install and initialize it.
By default, the CLI uses the folder specified when creating the profile. To change the default folder, use the yc config set folder-id <folder_ID> command. You can also set a different folder for any specific command using the --folder-name or --folder-id parameter.
To create a Managed Service for Apache Kafka® cluster:
- View the description of the CLI command to create a Managed Service for Apache Kafka® cluster:

  ```
  yc managed-kafka cluster create --help
  ```

- Specify the Managed Service for Apache Kafka® cluster parameters in the create command (not all parameters are given in the example; a complete sample invocation is shown after these steps):

  ```
  yc managed-kafka cluster create \
    --name <cluster_name> \
    --environment <environment> \
    --version <version> \
    --schema-registry \
    --network-name <network_name> \
    --subnet-ids <subnet_IDs> \
    --zone-ids <availability_zones> \
    --brokers-count <number_of_broker_hosts_in_zone> \
    --resource-preset <host_class> \
    --disk-type <network-hdd|network-ssd|network-ssd-nonreplicated|local-ssd> \
    --disk-size <storage_size_in_GB> \
    --assign-public-ip <enable_public_access_to_cluster> \
    --security-group-ids <list_of_security_group_IDs> \
    --deletion-protection \
    --kafka-ui-enabled <true_or_false> \
    --disk-encryption-key-id <KMS_key_ID>
  ```

  Where:
  - `--environment`: Cluster environment, `prestable` or `production`.
  - `--version`: Apache Kafka® version: 3.6, 3.7, 3.8, or 3.9 (version 4.0 and higher supports KRaft only). Additionally, provide the ZooKeeper host configuration.
  - `--schema-registry`: Manage data schemas using Managed Schema Registry.
    Warning
    You cannot disable data schema management using Managed Schema Registry after connecting it.
  - `--zone-ids` and `--brokers-count`: Availability zones and number of broker hosts per zone.
  - `--resource-preset`: Host class.
  - `--disk-type`: Disk type.
    Warning
    You cannot change the disk type after you create a cluster.
  - `--deletion-protection`: Cluster protection from accidental deletion, `true` or `false`.
    Note
    Even with cluster deletion protection enabled, one can still delete a user or topic, or connect manually and delete the data.
  - `--kafka-ui-enabled`: Defines whether to use Kafka UI for Apache Kafka®, `true` or `false`.
  - `--disk-encryption-key-id`: ID of the custom KMS key. To encrypt the disks, provide the KMS key ID in this parameter. To learn more about disk encryption, see Storage.
    Warning
    You can enable disk encryption only when creating a cluster.

  Tip
  You can also configure the Apache Kafka® settings here, if required.
- To use ZooKeeper in your cluster, provide the ZooKeeper host configuration:

  ```
  yc managed-kafka cluster create \
    ...
    --zookeeper-resource-preset <host_class> \
    --zookeeper-disk-size <storage_size_in_GB> \
    --zookeeper-disk-type <network-hdd|network-ssd|network-ssd-nonreplicated|local-ssd> \
    ...
  ```

  Where:
  - `--zookeeper-resource-preset`: ZooKeeper host class.
  - `--zookeeper-disk-size`: Storage size.
  - `--zookeeper-disk-type`: ZooKeeper disk type.
- To set up a maintenance window (including for disabled Managed Service for Apache Kafka® clusters), provide the required value in the `--maintenance-window` parameter when creating your cluster:

  ```
  yc managed-kafka cluster create \
    ...
    --maintenance-window type=<maintenance_type>,day=<day_of_week>,hour=<hour> \
    ...
  ```

  Where `type` is the maintenance type:
  - `anytime`: At any time (default).
  - `weekly`: On a schedule. For this value, also specify the following:
    - `day`: Day of week, i.e., `MON`, `TUE`, `WED`, `THU`, `FRI`, `SAT`, or `SUN`.
    - `hour`: Hour of day (UTC), from `1` to `24`.
- To prevent the cluster disk space from running out, create a cluster that will increase the storage size automatically:

  ```
  yc managed-kafka cluster create \
    ...
    --disk-size-autoscaling disk-size-limit=<maximum_storage_size_in_bytes>,planned-usage-threshold=<scheduled_increase_percentage>,emergency-usage-threshold=<immediate_increase_percentage> \
    ...
  ```

  Where:
  - `planned-usage-threshold`: Storage utilization percentage to trigger a storage increase in the next maintenance window.
    Use a percentage value between `0` and `100`. The default value is `0` (automatic increase is disabled). If you set this parameter, configure the maintenance schedule.
  - `emergency-usage-threshold`: Storage utilization percentage to trigger an immediate storage increase.
    Use a percentage value between `0` and `100`. The default value is `0` (automatic increase is disabled). This parameter value must be greater than or equal to `planned-usage-threshold`.
  - `disk-size-limit`: Maximum storage size, in bytes, that can be set when utilization reaches one of the specified percentages.
    If the value is `0`, automatic increase of storage size is disabled.

  Warning
  - You cannot reduce the storage size.
  - When using local disks (`local-ssd`), cluster hosts will be unavailable while the storage is being resized.
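For reference, here is what a complete invocation might look like for a cluster with one broker host in each of three availability zones. This is a sketch: the cluster name, version 3.8, the s2.micro host class, and the disk sizes are hypothetical example values, and the placeholders in angle brackets must be replaced with your own IDs.

```
yc managed-kafka cluster create \
  --name kafka-zk-demo \
  --environment production \
  --version 3.8 \
  --network-name <network_name> \
  --subnet-ids <subnet_ID_1>,<subnet_ID_2>,<subnet_ID_3> \
  --zone-ids ru-central1-a,ru-central1-b,ru-central1-d \
  --brokers-count 1 \
  --resource-preset s2.micro \
  --disk-type network-ssd \
  --disk-size 100 \
  --zookeeper-resource-preset s2.micro \
  --zookeeper-disk-type network-ssd \
  --zookeeper-disk-size 10
```

Since this cluster has more than one broker host, three dedicated ZooKeeper hosts are added to it, which is why the `--zookeeper-*` flags are provided.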
With Terraform

Terraform is distributed under the Business Source License.
For more information about the provider resources, see the relevant documentation on the Terraform website.
If you do not have Terraform yet, install it and configure the Yandex Cloud provider.
To create a Managed Service for Apache Kafka® cluster:
- In the configuration file, describe the resources you are creating:
  - Managed Service for Apache Kafka® cluster: Description of the cluster and its hosts. You can also configure the Apache Kafka® settings here, if required.
  - Network: Description of the cloud network where the cluster will be located. If you already have a suitable network, you don't have to describe it again.
  - Subnets: Description of the subnets to connect the cluster hosts to. If you already have suitable subnets, you don't have to describe them again.

  Here is a configuration file example:

  ```
  resource "yandex_mdb_kafka_cluster" "<cluster_name>" {
    environment         = "<environment>"
    name                = "<cluster_name>"
    network_id          = "<network_ID>"
    subnet_ids          = ["<list_of_subnet_IDs>"]
    security_group_ids  = ["<list_of_cluster_security_group_IDs>"]
    deletion_protection = <protect_cluster_against_deletion>

    config {
      version          = "<version>"
      zones            = ["<availability_zones>"]
      brokers_count    = <number_of_broker_hosts>
      assign_public_ip = "<enable_public_access_to_cluster>"
      schema_registry  = "<enable_data_schema_management>"

      kafka_ui {
        enabled = <use_Kafka_UI>
      }

      kafka {
        resources {
          disk_size          = <storage_size_in_GB>
          disk_type_id       = "<disk_type>"
          resource_preset_id = "<host_class>"
        }
        kafka_config {}
      }
    }
  }

  resource "yandex_vpc_network" "<network_name>" {
    name = "<network_name>"
  }

  resource "yandex_vpc_subnet" "<subnet_name>" {
    name           = "<subnet_name>"
    zone           = "<availability_zone>"
    network_id     = "<network_ID>"
    v4_cidr_blocks = ["<range>"]
  }
  ```

  Where:
  - `environment`: Cluster environment, `PRESTABLE` or `PRODUCTION`.
  - `version`: Apache Kafka® version: 3.6, 3.7, 3.8, or 3.9 (version 4.0 and higher supports KRaft only). Additionally, provide the ZooKeeper host configuration.
  - `zones` and `brokers_count`: Availability zones and number of broker hosts per zone.
  - `deletion_protection`: Cluster protection against accidental deletion, `true` or `false`.
    Note
    Even with cluster deletion protection enabled, one can still delete a user or topic, or connect manually and delete the data.
  - `disk_encryption_key_id`: Disk encryption with a custom KMS key. Provide the KMS key ID to encrypt the disks. To learn more about disk encryption, see Storage.
  - `assign_public_ip`: Public access to the cluster, `true` or `false`.
  - `schema_registry`: Manage data schemas using Managed Schema Registry, `true` or `false`. The default value is `false`.
    Warning
    You cannot disable data schema management using Managed Schema Registry after connecting it.
  - `kafka_ui`: Defines whether to use Kafka UI for Apache Kafka®, `true` or `false`. The default value is `false`.
  To use ZooKeeper in the cluster, add the `zookeeper` section to the cluster description:

  ```
  resource "yandex_mdb_kafka_cluster" "<cluster_name>" {
    ...
    zookeeper {
      resources {
        resource_preset_id = "<host_class>"
        disk_type_id       = "<disk_type>"
        disk_size          = <storage_size_in_GB>
      }
    }
  }
  ```

  To set up the maintenance window (for disabled clusters as well), add the `maintenance_window` section to the cluster description:

  ```
  resource "yandex_mdb_kafka_cluster" "<cluster_name>" {
    ...
    maintenance_window {
      type = <maintenance_type>
      day  = <day_of_week>
      hour = <hour>
    }
    ...
  }
  ```

  Where:
  - `type`: Maintenance type. The possible values include:
    - `ANYTIME`: Anytime.
    - `WEEKLY`: On a schedule.
  - `day`: Day of week for the `WEEKLY` type, i.e., `MON`, `TUE`, `WED`, `THU`, `FRI`, `SAT`, or `SUN`.
  - `hour`: UTC hour for the `WEEKLY` type, from `1` to `24`.

  To encrypt disks with a custom KMS key, add the `disk_encryption_key_id` parameter:

  ```
  resource "yandex_mdb_kafka_cluster" "<cluster_name>" {
    ...
    disk_encryption_key_id = <KMS_key_ID>
    ...
  }
  ```

  To learn more about disk encryption, see Storage.
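  Putting the pieces together, a complete configuration sketch for a cluster with one broker per zone and ZooKeeper could look as follows. All names, the version, the s2.micro host class, and the CIDR ranges are hypothetical example values.

  ```
  resource "yandex_mdb_kafka_cluster" "kafka_zk_demo" {
    name        = "kafka-zk-demo"
    environment = "PRODUCTION"
    network_id  = yandex_vpc_network.kafka_net.id
    subnet_ids = [
      yandex_vpc_subnet.kafka_subnet_a.id,
      yandex_vpc_subnet.kafka_subnet_b.id,
      yandex_vpc_subnet.kafka_subnet_d.id,
    ]

    config {
      version       = "3.8"
      zones         = ["ru-central1-a", "ru-central1-b", "ru-central1-d"]
      brokers_count = 1

      kafka {
        resources {
          resource_preset_id = "s2.micro"
          disk_type_id       = "network-ssd"
          disk_size          = 100
        }
        kafka_config {}
      }

      # With more than one broker host, the cluster also needs ZooKeeper hosts.
      zookeeper {
        resources {
          resource_preset_id = "s2.micro"
          disk_type_id       = "network-ssd"
          disk_size          = 10
        }
      }
    }
  }

  resource "yandex_vpc_network" "kafka_net" {
    name = "kafka-net"
  }

  # One subnet per availability zone, as required for the ZooKeeper hosts.
  resource "yandex_vpc_subnet" "kafka_subnet_a" {
    name           = "kafka-subnet-a"
    zone           = "ru-central1-a"
    network_id     = yandex_vpc_network.kafka_net.id
    v4_cidr_blocks = ["10.1.0.0/24"]
  }

  resource "yandex_vpc_subnet" "kafka_subnet_b" {
    name           = "kafka-subnet-b"
    zone           = "ru-central1-b"
    network_id     = yandex_vpc_network.kafka_net.id
    v4_cidr_blocks = ["10.2.0.0/24"]
  }

  resource "yandex_vpc_subnet" "kafka_subnet_d" {
    name           = "kafka-subnet-d"
    zone           = "ru-central1-d"
    network_id     = yandex_vpc_network.kafka_net.id
    v4_cidr_blocks = ["10.3.0.0/24"]
  }
  ```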
- Make sure the settings are correct.
  - In the command line, navigate to the directory that contains the current Terraform configuration files defining the infrastructure.
  - Run this command:

    ```
    terraform validate
    ```

    Terraform will show any errors found in your configuration files.
- Create a Managed Service for Apache Kafka® cluster.
  - Run this command to view the planned changes:

    ```
    terraform plan
    ```

    If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.
  - If everything looks correct, apply the changes:
    - Run this command:

      ```
      terraform apply
      ```

    - Confirm updating the resources.
    - Wait for the operation to complete.
- This will create all the resources you need in the specified folder, and the terminal will display the FQDNs of the Managed Service for Apache Kafka® cluster hosts. You can check the new resources and their configuration using the management console.
- For more information, see this Terraform provider guide.
Time limits
The Terraform provider limits all Managed Service for Apache Kafka® cluster operations to 60 minutes.
Operations exceeding this timeout are interrupted.
How do I change these limits?
Add the `timeouts` block to the cluster description, for example:

```
resource "yandex_mdb_kafka_cluster" "<cluster_name>" {
  ...
  timeouts {
    create = "1h30m" # 1 hour 30 minutes
    update = "2h"    # 2 hours
    delete = "30m"   # 30 minutes
  }
}
```
- Get an IAM token for API authentication and put it into an environment variable:

  ```
  export IAM_TOKEN="<IAM_token>"
  ```

- Call the Cluster.create method, e.g., via the following cURL request:
  - Create a file named `body.json` and paste the following code into it:

    Note
    This example does not use all available parameters.

    ```
    {
      "folderId": "<folder_ID>",
      "name": "<cluster_name>",
      "environment": "<environment>",
      "networkId": "<network_ID>",
      "securityGroupIds": [
        "<security_group_1_ID>",
        "<security_group_2_ID>",
        ...
        "<security_group_N_ID>"
      ],
      "configSpec": {
        "version": "<Apache Kafka®_version>",
        "kafka": {
          "resources": {
            "resourcePresetId": "<Apache Kafka®_host_class>",
            "diskSize": "<storage_size_in_bytes>",
            "diskTypeId": "<disk_type>"
          }
        },
        "zookeeper": {
          "resources": {
            "resourcePresetId": "<ZooKeeper_host_class>",
            "diskSize": "<storage_size_in_bytes>",
            "diskTypeId": "<disk_type>"
          }
        },
        "zoneId": [ <list_of_availability_zones> ],
        "brokersCount": "<number_of_brokers_in_zone>",
        "assignPublicIp": <enable_public_access_to_cluster>,
        "schemaRegistry": <enable_data_schema_management>,
        "restApiConfig": {
          "enabled": <enable_sending_requests_to_Apache Kafka®_API>
        },
        "diskSizeAutoscaling": { <automatic_storage_expansion_parameters> },
        "kafkaUiConfig": {
          "enabled": <use_Kafka_UI>
        }
      },
      "topicSpecs": [
        {
          "name": "<topic_name>",
          "partitions": "<number_of_partitions>",
          "replicationFactor": "<replication_factor>"
        },
        { <similar_settings_for_topic_2> },
        { ... },
        { <similar_settings_for_topic_N> }
      ],
      "userSpecs": [
        {
          "name": "<username>",
          "password": "<user_password>",
          "permissions": [
            {
              "topicName": "<topic_name>",
              "role": "<user's_role>"
            }
          ]
        },
        { <similar_settings_for_user_2> },
        { ... },
        { <similar_settings_for_user_N> }
      ],
      "maintenanceWindow": {
        "anytime": {},
        "weeklyMaintenanceWindow": {
          "day": "<day_of_week>",
          "hour": "<hour_UTC>"
        }
      },
      "deletionProtection": <protect_cluster_against_deletion>,
      "diskEncryptionKeyId": "<KMS_key_ID>"
    }
    ```

    Where:
    - `name`: Cluster name.
    - `environment`: Cluster environment, `PRODUCTION` or `PRESTABLE`.
    - `networkId`: ID of the network where the cluster will be deployed.
    - `securityGroupIds`: Security group IDs as an array of strings. Each string is a security group ID.
    - `configSpec`: Cluster configuration:
      - `version`: Apache Kafka® version: 3.6, 3.7, 3.8, or 3.9 (version 4.0 and higher supports KRaft only). Additionally, provide the ZooKeeper host configuration.
      - `kafka`: Apache Kafka® configuration:
        - `resources.resourcePresetId`: Host class ID. You can get the list of available host classes with their IDs by calling the ResourcePreset.list method.
        - `resources.diskSize`: Disk size, in bytes.
        - `resources.diskTypeId`: Disk type.
      - `zookeeper`: ZooKeeper configuration:
        - `resources.resourcePresetId`: Host class ID. You can get the list of available host classes with their IDs by calling the ResourcePreset.list method.
        - `resources.diskSize`: Disk size, in bytes.
        - `resources.diskTypeId`: Disk type.
      - `zoneId` and `brokersCount`: Availability zones and number of broker hosts per zone.
      - `assignPublicIp`: Access to broker hosts from the internet, `true` or `false`.
      - `schemaRegistry`: Manage data schemas using Managed Schema Registry, `true` or `false`. The default value is `false`. You will not be able to edit this setting once you create the Managed Service for Apache Kafka® cluster.
      - `restApiConfig`: Apache Kafka® REST API configuration. To enable sending Apache Kafka® REST API requests, specify `enabled: true`.
      - `diskSizeAutoscaling`: Storage utilization thresholds (as a percentage of the total storage size) that will trigger storage expansion when reached:
        - `plannedUsageThreshold`: Storage utilization percentage to trigger a storage increase during the next maintenance window.
          Use a percentage value between `0` and `100`. The default value is `0` (automatic increase is disabled). If you set this parameter, configure the maintenance window schedule.
        - `emergencyUsageThreshold`: Storage utilization percentage to trigger an immediate storage increase.
          Use a percentage value between `0` and `100`. The default value is `0` (automatic increase is disabled). This parameter value must be greater than or equal to `plannedUsageThreshold`.
        - `diskSizeLimit`: Maximum storage size, in bytes, that can be set when utilization reaches one of the specified percentages.
      - `kafkaUiConfig`: Use Kafka UI. For access to Kafka UI, specify `enabled: true`.
    - `topicSpecs`: Topic settings as an array of elements, one per topic. Each element has the following structure:
      - `name`: Topic name.
        Note
        Use the Apache Kafka® Admin API if you need to create a topic whose name starts with `_`. You cannot create such a topic using the Yandex Cloud interfaces.
      - `partitions`: Number of partitions.
      - `replicationFactor`: Replication factor.
    - `userSpecs`: User settings as an array of elements, one per user. Each element has the following structure:
      - `name`: Username. It may contain Latin letters, numbers, hyphens, and underscores, and must start with a letter or underscore.
      - `password`: User password. The password must be from 8 to 128 characters long.
      - `permissions`: List of topics the user must have access to.
        The list is arranged as an array of elements. Each element is for a separate topic and has the following structure:
        - `topicName`: Topic name or name template:
          - `*` to allow access to any topics.
          - Full topic name to allow access to a specific topic.
          - `<prefix>*` to grant access to topics whose names start with the prefix. Let's assume you have topics named `topic_a1`, `topic_a2`, and `a3`. If you put `topic*`, access will be granted to `topic_a1` and `topic_a2`. To include all the cluster's topics, use the `*` mask.
        - `role`: User's role, `ACCESS_ROLE_CONSUMER`, `ACCESS_ROLE_PRODUCER`, or `ACCESS_ROLE_ADMIN`. The `ACCESS_ROLE_ADMIN` role is only available if all topics are selected (`topicName: "*"`).
        - `allowHosts`: (Optional) List of IP addresses the user is allowed to access the topic from.
    - `maintenanceWindow`: Maintenance window settings, including for stopped clusters. Select one of these options:
      - `anytime`: Any time (default).
      - `weeklyMaintenanceWindow`: On schedule:
        - `day`: Day of week in `DDD` format, i.e., `MON`, `TUE`, `WED`, `THU`, `FRI`, `SAT`, or `SUN`.
        - `hour`: Time of day (UTC) in `HH` format, from `1` to `24`.
    - `deletionProtection`: Cluster protection against accidental deletion, `true` or `false`. The default value is `false`.
      Note
      Even with cluster deletion protection enabled, one can still delete a user or topic, or connect manually and delete the data.
    - `diskEncryptionKeyId`: ID of the custom KMS key. Provide the KMS key ID to encrypt the disks. To learn more about disk encryption, see Storage.
      Warning
      You can enable disk encryption only when creating a cluster.

    You can get the folder ID with the list of folders in the cloud.
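    For instance, a minimal body.json for a cluster with one broker per zone in three availability zones might look like this. This is a sketch: the name, version, s2.micro host class, and disk sizes (32 GB and 10 GB, expressed in bytes) are hypothetical example values, and the IDs in angle brackets must be replaced with your own.

    ```
    {
      "folderId": "<folder_ID>",
      "name": "kafka-zk-demo",
      "environment": "PRODUCTION",
      "networkId": "<network_ID>",
      "configSpec": {
        "version": "3.8",
        "kafka": {
          "resources": {
            "resourcePresetId": "s2.micro",
            "diskSize": "34359738368",
            "diskTypeId": "network-ssd"
          }
        },
        "zookeeper": {
          "resources": {
            "resourcePresetId": "s2.micro",
            "diskSize": "10737418240",
            "diskTypeId": "network-ssd"
          }
        },
        "zoneId": ["ru-central1-a", "ru-central1-b", "ru-central1-d"],
        "brokersCount": "1"
      }
    }
    ```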
  - Run this request:

    ```
    curl \
      --request POST \
      --header "Authorization: Bearer $IAM_TOKEN" \
      --header "Content-Type: application/json" \
      --url 'https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters' \
      --data '@body.json'
    ```
- View the server response to make sure your request was successful.
- Get an IAM token for API authentication and put it into an environment variable:

  ```
  export IAM_TOKEN="<IAM_token>"
  ```

- Clone the cloudapi repository:

  ```
  cd ~/ && git clone --depth=1 https://github.com/yandex-cloud/cloudapi
  ```

  Below, we assume that the repository contents reside in the `~/cloudapi/` directory.

- Call the ClusterService/Create method, e.g., via the following gRPCurl request:
  - Create a file named `body.json` and paste the following code into it:

    Note
    This example does not use all available parameters.

    ```
    {
      "folder_id": "<folder_ID>",
      "name": "<cluster_name>",
      "environment": "<environment>",
      "network_id": "<network_ID>",
      "security_group_ids": [
        "<security_group_1_ID>",
        "<security_group_2_ID>",
        ...
        "<security_group_N_ID>"
      ],
      "config_spec": {
        "version": "<Apache Kafka®_version>",
        "kafka": {
          "resources": {
            "resource_preset_id": "<Apache Kafka®_host_class>",
            "disk_size": "<storage_size_in_bytes>",
            "disk_type_id": "<disk_type>"
          }
        },
        "zookeeper": {
          "resources": {
            "resource_preset_id": "<ZooKeeper_host_class>",
            "disk_size": "<storage_size_in_bytes>",
            "disk_type_id": "<disk_type>"
          }
        },
        "zone_id": [ <list_of_availability_zones> ],
        "brokers_count": { "value": "<number_of_brokers_in_zone>" },
        "assign_public_ip": <enable_public_access_to_cluster>,
        "schema_registry": <enable_data_schema_management>,
        "rest_api_config": {
          "enabled": <enable_sending_requests_to_Apache Kafka®_API>
        },
        "disk_size_autoscaling": { <automatic_storage_size_increase_parameters> },
        "kafka_ui_config": {
          "enabled": <use_Kafka_UI>
        }
      },
      "topic_specs": [
        {
          "name": "<topic_name>",
          "partitions": { "value": "<number_of_partitions>" },
          "replication_factor": { "value": "<replication_factor>" }
        },
        { <similar_settings_for_topic_2> },
        { ... },
        { <similar_settings_for_topic_N> }
      ],
      "user_specs": [
        {
          "name": "<username>",
          "password": "<user_password>",
          "permissions": [
            {
              "topic_name": "<topic_name>",
              "role": "<user's_role>"
            }
          ]
        },
        { <similar_settings_for_user_2> },
        { ... },
        { <similar_settings_for_user_N> }
      ],
      "maintenance_window": {
        "anytime": {},
        "weekly_maintenance_window": {
          "day": "<day_of_week>",
          "hour": "<hour_UTC>"
        }
      },
      "deletion_protection": <protect_cluster_against_deletion>,
      "disk_encryption_key_id": "<KMS_key_ID>"
    }
    ```

    Where:
    - `name`: Cluster name.
    - `environment`: Cluster environment, `PRODUCTION` or `PRESTABLE`.
    - `network_id`: ID of the network where the cluster will be deployed.
    - `security_group_ids`: Security group IDs as an array of strings. Each string is a security group ID.
    - `config_spec`: Cluster configuration:
      - `version`: Apache Kafka® version: 3.6, 3.7, 3.8, or 3.9 (version 4.0 and higher supports KRaft only). Additionally, provide the ZooKeeper host configuration.
      - `kafka`: Apache Kafka® configuration:
        - `resources.resource_preset_id`: Host class ID. You can get the list of available host classes with their IDs by calling the ResourcePreset.list method.
        - `resources.disk_size`: Disk size, in bytes.
        - `resources.disk_type_id`: Disk type.
      - `zookeeper`: ZooKeeper configuration:
        - `resources.resource_preset_id`: Host class ID. You can get the list of available host classes with their IDs by calling the ResourcePreset.list method.
        - `resources.disk_size`: Disk size, in bytes.
        - `resources.disk_type_id`: Disk type.
      - `zone_id` and `brokers_count`: Availability zones and number of broker hosts per zone (the count is provided as an object with the `value` field).
      - `assign_public_ip`: Access to broker hosts from the internet, `true` or `false`.
      - `schema_registry`: Manage data schemas using Managed Schema Registry, `true` or `false`. The default value is `false`. You will not be able to edit this setting once you create the Managed Service for Apache Kafka® cluster.
      - `rest_api_config`: Apache Kafka® REST API configuration. To enable sending Apache Kafka® REST API requests, specify `enabled: true`.
      - `disk_size_autoscaling`: To prevent the cluster disk space from running out, set the storage utilization thresholds (as a percentage of the total storage size) that will trigger storage expansion when reached:
        - `planned_usage_threshold`: Storage utilization percentage to trigger a storage increase during the next maintenance window.
          Use a percentage value between `0` and `100`. The default value is `0` (automatic increase is disabled). If you set this parameter, configure the maintenance window schedule.
        - `emergency_usage_threshold`: Storage utilization percentage to trigger an immediate storage increase.
          Use a percentage value between `0` and `100`. The default value is `0` (automatic increase is disabled). This parameter value must be greater than or equal to `planned_usage_threshold`.
        - `disk_size_limit`: Maximum storage size, in bytes, that can be set when utilization reaches one of the specified percentages.
      - `kafka_ui_config`: Use Kafka UI. For access to Kafka UI, specify `enabled: true`.
    - `topic_specs`: Topic settings as an array of elements, one per topic. Each element has the following structure:
      - `name`: Topic name.
        Note
        Use the Apache Kafka® Admin API if you need to create a topic whose name starts with `_`. You cannot create such a topic using the Yandex Cloud interfaces.
      - `partitions`: Number of partitions, provided as an object with the `value` field.
      - `replication_factor`: Replication factor, provided as an object with the `value` field.
    - `user_specs`: User settings as an array of elements, one per user. Each element has the following structure:
      - `name`: Username. It may contain Latin letters, numbers, hyphens, and underscores, and must start with a letter or underscore.
      - `password`: User password. The password must be from 8 to 128 characters long.
      - `permissions`: List of topics the user must have access to.
        The list is arranged as an array of elements. Each element is for a separate topic and has the following structure:
        - `topic_name`: Topic name or name template:
          - `*` to allow access to any topics.
          - Full topic name to allow access to a specific topic.
          - `<prefix>*` to grant access to topics whose names start with the prefix. Let's assume you have topics named `topic_a1`, `topic_a2`, and `a3`. If you put `topic*`, access will be granted to `topic_a1` and `topic_a2`. To include all the cluster's topics, use the `*` mask.
        - `role`: User's role, `ACCESS_ROLE_CONSUMER`, `ACCESS_ROLE_PRODUCER`, or `ACCESS_ROLE_ADMIN`. The `ACCESS_ROLE_ADMIN` role is only available if all topics are selected (`topic_name: "*"`).
        - `allow_hosts`: (Optional) List of IP addresses the user is allowed to access the topic from, as an array of elements.
    - `maintenance_window`: Maintenance window settings, including for stopped clusters. Select one of these options:
      - `anytime`: Any time (default).
      - `weekly_maintenance_window`: On schedule:
        - `day`: Day of week in `DDD` format, i.e., `MON`, `TUE`, `WED`, `THU`, `FRI`, `SAT`, or `SUN`.
        - `hour`: Time of day (UTC) in `HH` format, from `1` to `24`.
    - `deletion_protection`: Cluster protection against accidental deletion, `true` or `false`. The default value is `false`.
      Note
      Even with cluster deletion protection enabled, one can still delete a user or topic, or connect manually and delete the data.
    - `disk_encryption_key_id`: ID of the custom KMS key. Provide the KMS key ID to encrypt the disks. To learn more about disk encryption, see Storage.
      Warning
      You can enable disk encryption only when creating a cluster.

    You can get the folder ID with the list of folders in the cloud.
  - Run this request:

    ```
    grpcurl \
      -format json \
      -import-path ~/cloudapi/ \
      -import-path ~/cloudapi/third_party/googleapis/ \
      -proto ~/cloudapi/yandex/cloud/mdb/kafka/v1/cluster_service.proto \
      -rpc-header "Authorization: Bearer $IAM_TOKEN" \
      -d @ \
      mdb.api.cloud.yandex.net:443 \
      yandex.cloud.mdb.kafka.v1.ClusterService.Create \
      < body.json
    ```
- View the server response to make sure your request was successful.
Creating a cluster with KRaft
Warning
When creating a cluster with KRaft, do not specify the ZooKeeper settings.
To create a Managed Service for Apache Kafka® cluster:
- In the management console, go to the appropriate folder.
- Go to Managed Service for Kafka.
- Click Create cluster.
- Under Basic parameters:
  - Enter a name and description for the Managed Service for Apache Kafka® cluster. The cluster name must be unique within the folder.
  - Select the environment where you want to create the cluster (you cannot change the environment once the cluster is created):
    - PRODUCTION: For stable versions of your apps.
    - PRESTABLE: For testing purposes. The prestable environment is similar to the production environment and likewise covered by an SLA, but it is the first to get new features, improvements, and bug fixes. In the prestable environment, you can test new versions for compatibility with your application.
  - Select Apache Kafka® 3.6 or higher.
- Under Host class, select the platform, host type, and host class.
  The host class defines the technical specifications of the VMs the Apache Kafka® nodes are deployed on. All available options are listed under Host classes.
  When you change the host class for a Managed Service for Apache Kafka® cluster, the specifications of all existing instances change as well.
- Under Storage:
  - Select the disk type.
    Warning
    You cannot change the disk type after you create a cluster.
    The selected type determines the increments in which you can change the disk size:
    - Network HDD and SSD storage: In increments of 1 GB.
    - Local SSD storage:
      - For Intel Cascade Lake: In increments of 100 GB.
      - For Intel Ice Lake: In increments of 368 GB.
    - Non-replicated SSD storage: In increments of 93 GB.
  - Select the storage size to use for data.
- Under Automatic increase of storage size, set the storage utilization thresholds that will trigger storage expansion when reached:
  - In the Increase size field, select one or both thresholds:
    - In the maintenance window when full at more than: Scheduled increase threshold. When reached, the storage size increases during the next maintenance window.
    - Immediately when full at more than: Immediate increase threshold. When reached, the storage size increases immediately.
  - Specify a threshold value as a percentage of the total storage size. If you select both thresholds, make sure the immediate increase threshold is higher than the scheduled one.
  - Set Maximum storage size.
- Under Network settings:
  - Select one or more availability zones to place your Apache Kafka® broker hosts in.
    Warning
    If you create a Managed Service for Apache Kafka® cluster with a single availability zone, you will not be able to increase the number of zones and broker hosts later.
  - Select the network.
  - Select subnets in each availability zone for this network. To create a new subnet, click Create next to the availability zone in question.
    Note
    For an Apache Kafka® cluster with multiple broker hosts, specify subnets in each availability zone even if you plan to place broker hosts only in some of them. You need these subnets to deploy three KRaft hosts, one per availability zone. For more information, see Resource relationships.
  - Select security groups for the Managed Service for Apache Kafka® cluster's network traffic.
  - To enable internet access to broker hosts, select Public access. In this case, you can only connect to them over SSL. For more information, see Connecting to topics in a cluster.
- Under Hosts:
  - Specify the number of Apache Kafka® broker hosts to place in each of the selected availability zones.
    When selecting the number of hosts, consider the following:
    - You cannot set the number of broker hosts manually if using KRaft (combined mode) as the coordination service.
    - You need to set the number of broker hosts manually if using KRaft (on separate hosts) as the coordination service. A new multi-host cluster will automatically get three dedicated KRaft hosts.
    - You need at least two hosts to enable replication in a Managed Service for Apache Kafka® cluster.
    - High availability of a Managed Service for Apache Kafka® cluster depends on meeting specific conditions.
  - Under Coordination service, select one of these options:
    - KRaft (on separate hosts): The broker and the KRaft metadata controller are on separate hosts.
    - KRaft (combined mode): One Apache Kafka® host accommodates both a broker and a KRaft metadata controller.
      You can create a cluster either in one or in three availability zones:
      - One availability zone: Three broker hosts.
      - Three availability zones: One broker host in each availability zone.
      You cannot set the number of broker hosts manually.
- If you specified more than one broker host, under KRaft host class, specify the properties of the KRaft hosts to place in each of the selected availability zones.
- Specify additional Managed Service for Apache Kafka® cluster settings, if required:
  - Maintenance window: Maintenance window settings:
    - To enable maintenance at any time, select arbitrary (default).
    - To specify the preferred maintenance start time, select by schedule and specify the desired day of the week and UTC hour. For example, you can choose a time when the cluster is least loaded.
    Maintenance operations are carried out on both enabled and disabled clusters. They may include updating the DBMS, applying patches, and so on.
  - Deletion protection: Manages cluster protection against accidental deletion.
    Even with cluster deletion protection enabled, one can still delete a user or topic, or connect manually and delete the data.
  - Disk encryption: Enable this setting to encrypt the disks with a custom KMS key.
    - To create a new key, click Create.
    - To use a key you created earlier, select it in the KMS key field.
    To learn more about disk encryption, see Storage.
    Warning
    You can enable disk encryption only when creating a cluster.
  - Schema registry: Enable this setting to manage data schemas using Managed Schema Registry.
    Warning
    You cannot disable data schema management using Managed Schema Registry after connecting it.
  - Kafka Rest API: Enable this setting to allow sending requests to the Apache Kafka® API.
    It is implemented based on the Karapace open-source tool. The Karapace API is compatible with the Confluent REST Proxy API with only minor exceptions.
    Warning
    You cannot disable Kafka Rest API once it is enabled.
  - Kafka UI: Enable this setting to use the Apache Kafka® web UI.
- Configure the Apache Kafka® settings, if required.
- Click Create.
- Wait until the Managed Service for Apache Kafka® cluster is ready: its status on the Managed Service for Apache Kafka® dashboard will change to Running, and its state to Alive. This may take some time.
If you do not have the Yandex Cloud CLI installed yet, install and initialize it.
By default, the CLI uses the folder specified when creating the profile. To change the default folder, use the yc config set folder-id <folder_ID> command. You can also set a different folder for any specific command using the --folder-name or --folder-id parameter.
To create a Managed Service for Apache Kafka® cluster:
- View the description of the CLI command to create a Managed Service for Apache Kafka® cluster:

  ```
  yc managed-kafka cluster create --help
  ```

- Specify the Managed Service for Apache Kafka® cluster parameters in the create command (not all parameters are given in the example; a complete sample invocation is shown after these steps):

  ```
  yc managed-kafka cluster create \
    --name <cluster_name> \
    --environment <environment> \
    --version <version> \
    --schema-registry \
    --network-name <network_name> \
    --subnet-ids <subnet_IDs> \
    --resource-preset <host_class> \
    --disk-type <network-hdd|network-ssd|network-ssd-nonreplicated|local-ssd> \
    --disk-size <storage_size_in_GB> \
    --assign-public-ip <enable_public_access_to_cluster> \
    --security-group-ids <list_of_security_group_IDs> \
    --deletion-protection \
    --kafka-ui-enabled <true_or_false> \
    --disk-encryption-key-id <KMS_key_ID>
  ```

  Where:
  - `--environment`: Cluster environment, `prestable` or `production`.
  - `--version`: Apache Kafka® version. Specify 3.6 or higher.
  - `--schema-registry`: Manage data schemas using Managed Schema Registry.
    Warning
    You cannot disable data schema management using Managed Schema Registry after connecting it.
  - `--zone-ids` and `--brokers-count`: Availability zones and number of broker hosts per zone.
    If you are creating a cluster with KRaft (combined mode), specify one of the available configurations:
    - `--zone-ids=ru-central1-a,ru-central1-b,ru-central1-d --brokers-count=1`: Three availability zones with one broker host per zone.
    - `--zone-ids=<one_availability_zone> --brokers-count=3`: One availability zone with three broker hosts.
  - `--resource-preset`: Host class.
  - `--disk-type`: Disk type.
    Warning
    You cannot change the disk type after you create a cluster.
  - `--deletion-protection`: Cluster protection from accidental deletion, `true` or `false`.
    Note
    Even with cluster deletion protection enabled, one can still delete a user or topic, or connect manually and delete the data.
  - `--kafka-ui-enabled`: Defines whether to use Kafka UI for Apache Kafka®, `true` or `false`.
  - `--disk-encryption-key-id`: ID of the custom KMS key. To encrypt the disks, provide the KMS key ID in this parameter. To learn more about disk encryption, see Storage.
    Warning
    You can enable disk encryption only when creating a cluster.

  Tip
  You can also configure the Apache Kafka® settings here, if required.
- To use KRaft (on separate hosts), provide the KRaft host configuration:

  ```
  yc managed-kafka cluster create \
    ...
    --controller-resource-preset <host_class> \
    --controller-disk-size <storage_size_in_GB> \
    --controller-disk-type <network-hdd|network-ssd|network-ssd-nonreplicated|local-ssd> \
    ...
  ```

  Where:
  - `--controller-resource-preset`: KRaft host class.
  - `--controller-disk-size`: Storage size.
  - `--controller-disk-type`: KRaft disk type.
- To set up a maintenance window (including for disabled Managed Service for Apache Kafka® clusters), provide the required value in the `--maintenance-window` parameter when creating your cluster:

  ```
  yc managed-kafka cluster create \
    ...
    --maintenance-window type=<maintenance_type>,day=<day_of_week>,hour=<hour> \
    ...
  ```

  Where `type` is the maintenance type:
  - `anytime`: At any time (default).
  - `weekly`: On a schedule. For this value, also specify the following:
    - `day`: Day of week, i.e., `MON`, `TUE`, `WED`, `THU`, `FRI`, `SAT`, or `SUN`.
    - `hour`: Hour of day (UTC), from `1` to `24`.
- To prevent the cluster disk space from running out, create a cluster that will increase the storage size automatically:

  ```
  yc managed-kafka cluster create \
    ...
    --disk-size-autoscaling disk-size-limit=<maximum_storage_size_in_bytes>,planned-usage-threshold=<scheduled_increase_percentage>,emergency-usage-threshold=<immediate_increase_percentage> \
    ...
  ```

  Where:
  - `planned-usage-threshold`: Storage utilization percentage to trigger a storage increase in the next maintenance window.
    Use a percentage value between `0` and `100`. The default value is `0` (automatic increase is disabled). If you set this parameter, configure the maintenance schedule.
  - `emergency-usage-threshold`: Storage utilization percentage to trigger an immediate storage increase.
    Use a percentage value between `0` and `100`. The default value is `0` (automatic increase is disabled). This parameter value must be greater than or equal to `planned-usage-threshold`.
  - `disk-size-limit`: Maximum storage size, in bytes, that can be set when utilization reaches one of the specified percentages.
    If the value is `0`, automatic increase of storage size is disabled.

  Warning
  - You cannot reduce the storage size.
  - When using local disks (`local-ssd`), cluster hosts will be unavailable while the storage is being resized.
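For reference, here is a complete sketch of a KRaft (combined mode) invocation: three availability zones with one broker host per zone and no `--controller-*` flags, since in combined mode the brokers accommodate the metadata controllers themselves. The cluster name, version 3.9, and the s2.micro host class are hypothetical example values.

```
yc managed-kafka cluster create \
  --name kafka-kraft-demo \
  --environment production \
  --version 3.9 \
  --network-name <network_name> \
  --subnet-ids <subnet_ID_1>,<subnet_ID_2>,<subnet_ID_3> \
  --zone-ids ru-central1-a,ru-central1-b,ru-central1-d \
  --brokers-count 1 \
  --resource-preset s2.micro \
  --disk-type network-ssd \
  --disk-size 100
```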
With Terraform

Terraform is distributed under the Business Source License.
For more information about the provider resources, see the relevant documentation on the Terraform website.
If you do not have Terraform yet, install it and configure the Yandex Cloud provider.
To create a Managed Service for Apache Kafka® cluster:
- In the configuration file, describe the resources you are creating:
  - Managed Service for Apache Kafka® cluster: Description of the cluster and its hosts. You can also configure the Apache Kafka® settings here, if required.
  - Network: Description of the cloud network where the cluster will be located. If you already have a suitable network, you don't have to describe it again.
  - Subnets: Description of the subnets to connect the cluster hosts to. If you already have suitable subnets, you don't have to describe them again.

  Here is a configuration file example:

  ```
  resource "yandex_mdb_kafka_cluster" "<cluster_name>" {
    environment         = "<environment>"
    name                = "<cluster_name>"
    network_id          = "<network_ID>"
    subnet_ids          = ["<list_of_subnet_IDs>"]
    security_group_ids  = ["<list_of_cluster_security_group_IDs>"]
    deletion_protection = <protect_cluster_against_deletion>

    config {
      version          = "<version>"
      zones            = ["<availability_zones>"]
      brokers_count    = <number_of_broker_hosts>
      assign_public_ip = "<enable_public_access_to_cluster>"
      schema_registry  = "<enable_data_schema_management>"

      kafka_ui {
        enabled = <use_Kafka_UI>
      }

      kafka {
        resources {
          disk_size          = <storage_size_in_GB>
          disk_type_id       = "<disk_type>"
          resource_preset_id = "<host_class>"
        }
        kafka_config {}
      }
    }
  }

  resource "yandex_vpc_network" "<network_name>" {
    name = "<network_name>"
  }

  resource "yandex_vpc_subnet" "<subnet_name>" {
    name           = "<subnet_name>"
    zone           = "<availability_zone>"
    network_id     = "<network_ID>"
    v4_cidr_blocks = ["<range>"]
  }
  ```

  Where:
  - `environment`: Cluster environment, `PRESTABLE` or `PRODUCTION`.
  - `version`: Apache Kafka® version. Specify 3.6 or higher.
  - `zones` and `brokers_count`: Availability zones and number of broker hosts per zone.
    If you are creating a cluster with KRaft (combined mode), specify one of the available configurations:
    - `zones = ["ru-central1-a","ru-central1-b","ru-central1-d"]` and `brokers_count = 1`: Three availability zones with one broker host per zone.
    - `zones = ["<one_availability_zone>"]` and `brokers_count = 3`: One availability zone with three broker hosts.
  - `deletion_protection`: Cluster protection against accidental deletion, `true` or `false`.
    Note
    Even with cluster deletion protection enabled, one can still delete a user or topic, or connect manually and delete the data.
  - `assign_public_ip`: Public access to the cluster, `true` or `false`.
  - `schema_registry`: Manage data schemas using Managed Schema Registry, `true` or `false`. The default value is `false`.
    Warning
    You cannot disable data schema management using Managed Schema Registry after connecting it.
  - `kafka_ui`: Defines whether to use Kafka UI for Apache Kafka®, `true` or `false`. The default value is `false`.
  To use KRaft (on separate hosts), add the `kraft` section to the cluster description:

  ```
  resource "yandex_mdb_kafka_cluster" "<cluster_name>" {
    ...
    kraft {
      resources {
        resource_preset_id = "<host_class>"
        disk_type_id       = "<disk_type>"
        disk_size          = <storage_size_in_GB>
      }
    }
  }
  ```

  To set up the maintenance window (for disabled clusters as well), add the `maintenance_window` section to the cluster description:

  ```
  resource "yandex_mdb_kafka_cluster" "<cluster_name>" {
    ...
    maintenance_window {
      type = <maintenance_type>
      day  = <day_of_week>
      hour = <hour>
    }
    ...
  }
  ```

  Where:
  - `type`: Maintenance type. The possible values include:
    - `ANYTIME`: Anytime.
    - `WEEKLY`: On a schedule.
  - `day`: Day of week for the `WEEKLY` type, i.e., `MON`, `TUE`, `WED`, `THU`, `FRI`, `SAT`, or `SUN`.
  - `hour`: UTC hour for the `WEEKLY` type, from `1` to `24`.

  To encrypt disks with a custom KMS key, add the `disk_encryption_key_id` parameter:

  ```
  resource "yandex_mdb_kafka_cluster" "<cluster_name>" {
    ...
    disk_encryption_key_id = <KMS_key_ID>
    ...
  }
  ```

  To learn more about disk encryption, see Storage.
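  As a complete sketch, a KRaft (combined mode) cluster omits the `kraft` section entirely and uses one of the two allowed zone layouts. The names, version, and s2.micro host class below are hypothetical example values; the network and subnet resources are assumed to be declared as in the configuration file example above.

  ```
  resource "yandex_mdb_kafka_cluster" "kafka_kraft_demo" {
    name        = "kafka-kraft-demo"
    environment = "PRODUCTION"
    network_id  = yandex_vpc_network.kafka_net.id
    subnet_ids = [
      yandex_vpc_subnet.kafka_subnet_a.id,
      yandex_vpc_subnet.kafka_subnet_b.id,
      yandex_vpc_subnet.kafka_subnet_d.id,
    ]

    config {
      # Combined mode: no kraft section; three zones with one broker each.
      version       = "3.9"
      zones         = ["ru-central1-a", "ru-central1-b", "ru-central1-d"]
      brokers_count = 1

      kafka {
        resources {
          resource_preset_id = "s2.micro"
          disk_type_id       = "network-ssd"
          disk_size          = 100
        }
        kafka_config {}
      }
    }
  }
  ```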
- Make sure the settings are correct.
  - In the command line, navigate to the directory that contains the current Terraform configuration files defining the infrastructure.
  - Run this command:

    ```
    terraform validate
    ```

    Terraform will show any errors found in your configuration files.
- Create a Managed Service for Apache Kafka® cluster.
  - Run this command to view the planned changes:

    ```
    terraform plan
    ```

    If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.
  - If everything looks correct, apply the changes:
    - Run this command:

      ```
      terraform apply
      ```

    - Confirm updating the resources.
    - Wait for the operation to complete.
- This will create all the resources you need in the specified folder, and the terminal will display the FQDNs of the Managed Service for Apache Kafka® cluster hosts. You can check the new resources and their configuration using the management console.
- For more information, see this Terraform provider guide.
Time limits
The Terraform provider limits all Managed Service for Apache Kafka® cluster operations to 60 minutes.
Operations exceeding this timeout are interrupted.
How do I change these limits?
Add the `timeouts` block to the cluster description, for example:

```
resource "yandex_mdb_kafka_cluster" "<cluster_name>" {
  ...
  timeouts {
    create = "1h30m" # 1 hour 30 minutes
    update = "2h"    # 2 hours
    delete = "30m"   # 30 minutes
  }
}
```
- Get an IAM token for API authentication and put it into an environment variable:

  ```
  export IAM_TOKEN="<IAM_token>"
  ```

- Call the Cluster.create method, e.g., via the following cURL request:
  - Create a file named `body.json` and paste the following code into it:

    Note
    This example does not use all available parameters.

    ```
    {
      "folderId": "<folder_ID>",
      "name": "<cluster_name>",
      "environment": "<environment>",
      "networkId": "<network_ID>",
      "securityGroupIds": [
        "<security_group_1_ID>",
        "<security_group_2_ID>",
        ...
        "<security_group_N_ID>"
      ],
      "configSpec": {
        "version": "<Apache Kafka®_version>",
        "kafka": {
          "resources": {
            "resourcePresetId": "<Apache Kafka®_host_class>",
            "diskSize": "<storage_size_in_bytes>",
            "diskTypeId": "<disk_type>"
          }
        },
        "kraft": {
          "resources": {
            "resourcePresetId": "<KRaft_host_class>",
            "diskSize": "<storage_size_in_bytes>",
            "diskTypeId": "<disk_type>"
          }
        },
        "zoneId": [ <list_of_availability_zones> ],
        "brokersCount": "<number_of_brokers_in_zone>",
        "assignPublicIp": <enable_public_access_to_cluster>,
        "schemaRegistry": <enable_data_schema_management>,
        "restApiConfig": {
          "enabled": <enable_sending_requests_to_Apache Kafka®_API>
        },
        "diskSizeAutoscaling": { <automatic_storage_expansion_parameters> },
        "kafkaUiConfig": {
          "enabled": <use_Kafka_UI>
        }
      },
      "topicSpecs": [
        {
          "name": "<topic_name>",
          "partitions": "<number_of_partitions>",
          "replicationFactor": "<replication_factor>"
        },
        { <similar_settings_for_topic_2> },
        { ... },
        { <similar_settings_for_topic_N> }
      ],
      "userSpecs": [
        {
          "name": "<username>",
          "password": "<user_password>",
          "permissions": [
            {
              "topicName": "<topic_name>",
              "role": "<user's_role>"
            }
          ]
        },
        { <similar_settings_for_user_2> },
        { ... },
        { <similar_settings_for_user_N> }
      ],
      "maintenanceWindow": {
        "anytime": {},
        "weeklyMaintenanceWindow": {
          "day": "<day_of_week>",
          "hour": "<hour_UTC>"
        }
      },
      "deletionProtection": <protect_cluster_against_deletion>,
      "diskEncryptionKeyId": "<KMS_key_ID>"
    }
    ```

    Where:
    - `name`: Cluster name.
    - `environment`: Cluster environment, `PRODUCTION` or `PRESTABLE`.
    - `networkId`: ID of the network where the cluster will be deployed.
    - `securityGroupIds`: Security group IDs as an array of strings. Each string is a security group ID.
    - `configSpec`: Cluster configuration:
      - `version`: Apache Kafka® version. Specify 3.6 or higher.
      - `kafka`: Apache Kafka® configuration:
        - `resources.resourcePresetId`: Host class ID. You can get the list of available host classes with their IDs by calling the ResourcePreset.list method.
        - `resources.diskSize`: Disk size, in bytes.
        - `resources.diskTypeId`: Disk type.
      - `kraft`: KRaft configuration:
        - `resources.resourcePresetId`: Host class ID. You can get the list of available host classes with their IDs by calling the ResourcePreset.list method.
        - `resources.diskSize`: Disk size, in bytes.
        - `resources.diskTypeId`: Disk type.
        Warning
        If you are creating a KRaft (combined mode) cluster, do not provide the KRaft host configuration.
      - `zoneId` and `brokersCount`: Availability zones and number of broker hosts per zone.
        If you are creating a cluster with KRaft (combined mode), specify one of the available configurations:
        - `"zoneId": ["ru-central1-a","ru-central1-b","ru-central1-d"], "brokersCount": "1"`: Three availability zones with one broker host per zone.
        - `"zoneId": ["<one_availability_zone>"], "brokersCount": "3"`: One availability zone with three broker hosts.
      - `assignPublicIp`: Access to broker hosts from the internet, `true` or `false`.
      - `schemaRegistry`: Manage data schemas using Managed Schema Registry, `true` or `false`. The default value is `false`. You will not be able to edit this setting once you create the Managed Service for Apache Kafka® cluster.
      - `restApiConfig`: Apache Kafka® REST API configuration. To enable sending Apache Kafka® REST API requests, specify `enabled: true`.
      - `diskSizeAutoscaling`: Storage utilization thresholds (as a percentage of the total storage size) that will trigger storage expansion when reached:
        - `plannedUsageThreshold`: Storage utilization percentage to trigger a storage increase during the next maintenance window.
          Use a percentage value between `0` and `100`. The default value is `0` (automatic increase is disabled). If you set this parameter, configure the maintenance window schedule.
        - `emergencyUsageThreshold`: Storage utilization percentage to trigger an immediate storage increase.
          Use a percentage value between `0` and `100`. The default value is `0` (automatic increase is disabled). This parameter value must be greater than or equal to `plannedUsageThreshold`.
        - `diskSizeLimit`: Maximum storage size, in bytes, that can be set when utilization reaches one of the specified percentages.
      - `kafkaUiConfig`: Use Kafka UI. For access to Kafka UI, specify `enabled: true`.
    - `topicSpecs`: Topic settings as an array of elements, one per topic. Each element has the following structure:
      - `name`: Topic name.
        Note
        Use the Apache Kafka® Admin API if you need to create a topic whose name starts with `_`. You cannot create such a topic using the Yandex Cloud interfaces.
      - `partitions`: Number of partitions.
      - `replicationFactor`: Replication factor.
    - `userSpecs`: User settings as an array of elements, one per user. Each element has the following structure:
      - `name`: Username. It may contain Latin letters, numbers, hyphens, and underscores, and must start with a letter or underscore.
      - `password`: User password. The password must be from 8 to 128 characters long.
      - `permissions`: List of topics the user must have access to.
        The list is arranged as an array of elements. Each element is for a separate topic and has the following structure:
        - `topicName`: Topic name or name template:
          - `*` to allow access to any topics.
          - Full topic name to allow access to a specific topic.
          - `<prefix>*` to grant access to topics whose names start with the prefix. Let's assume you have topics named `topic_a1`, `topic_a2`, and `a3`. If you put `topic*`, access will be granted to `topic_a1` and `topic_a2`. To include all the cluster's topics, use the `*` mask.
        - `role`: User's role, `ACCESS_ROLE_CONSUMER`, `ACCESS_ROLE_PRODUCER`, or `ACCESS_ROLE_ADMIN`. The `ACCESS_ROLE_ADMIN` role is only available if all topics are selected (`topicName: "*"`).
        - `allowHosts`: (Optional) List of IP addresses the user is allowed to access the topic from.
    - `maintenanceWindow`: Maintenance window settings, including for stopped clusters. Select one of these options:
      - `anytime`: At any time (default).
      - `weeklyMaintenanceWindow`: On schedule:
        - `day`: Day of week in `DDD` format, i.e., `MON`, `TUE`, `WED`, `THU`, `FRI`, `SAT`, or `SUN`.
        - `hour`: Time of day (UTC) in `HH` format, from `1` to `24`.
    - `deletionProtection`: Cluster protection against accidental deletion, `true` or `false`. The default value is `false`.
      Note
      Even with cluster deletion protection enabled, one can still delete a user or topic, or connect manually and delete the data.
    - `diskEncryptionKeyId`: ID of the custom KMS key. Provide the KMS key ID to encrypt the disks. To learn more about disk encryption, see Storage.
      Warning
      You can enable disk encryption only when creating a cluster.

    You can get the folder ID with the list of folders in the cloud.
-
-
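For reference, below is a minimal sketch of a completed `body.json`. It is not taken from a live deployment: all IDs, names, and sizes are hypothetical placeholders, and only a subset of the parameters described above is used. Since `plannedUsageThreshold` is set, the sketch also configures a weekly maintenance window, as required above.

```json
{
  "folderId": "b1g0000000000example",
  "name": "mykf-combined",
  "environment": "PRODUCTION",
  "networkId": "enp0000000000example",
  "configSpec": {
    "version": "3.9",
    "kafka": {
      "resources": {
        "resourcePresetId": "s2.micro",
        "diskSize": "34359738368",
        "diskTypeId": "network-ssd"
      }
    },
    "zoneId": ["ru-central1-a", "ru-central1-b", "ru-central1-d"],
    "brokersCount": "1",
    "assignPublicIp": false,
    "diskSizeAutoscaling": {
      "plannedUsageThreshold": "80",
      "emergencyUsageThreshold": "90",
      "diskSizeLimit": "68719476736"
    }
  },
  "topicSpecs": [
    {
      "name": "events",
      "partitions": "12",
      "replicationFactor": "3"
    }
  ],
  "userSpecs": [
    {
      "name": "producer_app",
      "password": "<strong_password>",
      "permissions": [
        {
          "topicName": "events*",
          "role": "ACCESS_ROLE_PRODUCER"
        }
      ]
    }
  ],
  "maintenanceWindow": {
    "weeklyMaintenanceWindow": {
      "day": "MON",
      "hour": "3"
    }
  },
  "deletionProtection": true
}
```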
- Run this request:

  ```bash
  curl \
      --request POST \
      --header "Authorization: Bearer $IAM_TOKEN" \
      --header "Content-Type: application/json" \
      --url 'https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters' \
      --data '@body.json'
  ```
- View the server response to make sure your request was successful.
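As an optional sanity check, you can save the response from the request above and extract the new cluster ID from the returned Operation object. The `.metadata.clusterId` path below is an assumption based on the typical shape of create-cluster operation metadata, and `jq` must be installed:

```bash
# Run the same create request, but keep the Operation object it returns.
RESPONSE=$(curl \
    --silent \
    --request POST \
    --header "Authorization: Bearer $IAM_TOKEN" \
    --header "Content-Type: application/json" \
    --url 'https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters' \
    --data '@body.json')

# Print the ID of the cluster being created (assumed field path).
echo "$RESPONSE" | jq -r '.metadata.clusterId'
```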
- Get an IAM token for API authentication and put it into an environment variable:

  ```bash
  export IAM_TOKEN="<IAM_token>"
  ```
- Clone the cloudapi repository:

  ```bash
  cd ~/ && git clone --depth=1 https://github.com/yandex-cloud/cloudapi
  ```

  Below, we assume that the repository contents reside in the `~/cloudapi/` directory.
- Call the ClusterService/Create method, e.g., via the following gRPCurl request:

  Create a file named `body.json` and paste the following code into it:

  Note

  This example does not use all available parameters.

  ```json
  {
    "folder_id": "<folder_ID>",
    "name": "<cluster_name>",
    "environment": "<environment>",
    "network_id": "<network_ID>",
    "security_group_ids": [
      "<security_group_1_ID>",
      "<security_group_2_ID>",
      ...
      "<security_group_N_ID>"
    ],
    "config_spec": {
      "version": "<Apache Kafka®_version>",
      "kafka": {
        "resources": {
          "resource_preset_id": "<Apache Kafka®_host_class>",
          "disk_size": "<storage_size_in_bytes>",
          "disk_type_id": "<disk_type>"
        }
      },
      "kraft": {
        "resources": {
          "resource_preset_id": "<KRaft_host_class>",
          "disk_size": "<storage_size_in_bytes>",
          "disk_type_id": "<disk_type>"
        }
      },
      "zone_id": [ <list_of_availability_zones> ],
      "brokers_count": { "value": "<number_of_brokers_in_zone>" },
      "assign_public_ip": <enable_public_access_to_cluster>,
      "schema_registry": <enable_data_schema_management>,
      "rest_api_config": {
        "enabled": <enable_sending_requests_to_Apache Kafka®_API>
      },
      "disk_size_autoscaling": { <automatic_storage_size_increase_parameters> },
      "kafka_ui_config": {
        "enabled": <use_Kafka_UI>
      }
    },
    "topic_specs": [
      {
        "name": "<topic_name>",
        "partitions": { "value": "<number_of_partitions>" },
        "replication_factor": { "value": "<replication_factor>" }
      },
      { <similar_settings_for_topic_2> },
      { ... },
      { <similar_settings_for_topic_N> }
    ],
    "user_specs": [
      {
        "name": "<username>",
        "password": "<user_password>",
        "permissions": [
          {
            "topic_name": "<topic_name>",
            "role": "<user's_role>"
          }
        ]
      },
      { <similar_settings_for_user_2> },
      { ... },
      { <similar_settings_for_user_N> }
    ],
    "maintenance_window": {
      "anytime": {},
      "weekly_maintenance_window": {
        "day": "<day_of_week>",
        "hour": "<hour_UTC>"
      }
    },
    "deletion_protection": <protect_cluster_against_deletion>,
    "disk_encryption_key_id": "<KMS_key_ID>"
  }
  ```

Where:
- `name`: Cluster name.

- `environment`: Cluster environment, `PRODUCTION` or `PRESTABLE`.

- `network_id`: ID of the network where the cluster will be deployed.

- `security_group_ids`: Security group IDs as an array of strings. Each string is a security group ID.

- `config_spec`: Cluster configuration:

  - `version`: Apache Kafka® version. Specify 3.6 or higher.

  - `kafka`: Apache Kafka® configuration:

    - `resources.resource_preset_id`: Host class ID. You can get the list of available host classes with their IDs by calling the ResourcePreset.list method.
    - `resources.disk_size`: Disk size, in bytes.
    - `resources.disk_type_id`: Disk type.

  - `kraft`: KRaft configuration:

    - `resources.resource_preset_id`: Host class ID. You can get the list of available host classes with their IDs by calling the ResourcePreset.list method.
    - `resources.disk_size`: Disk size, in bytes.
    - `resources.disk_type_id`: Disk type.

    Warning

    If you are creating a KRaft (combined mode) cluster, do not provide the KRaft host configuration (see the minimal combined-mode example after this list).

  - `zone_id` and `brokers_count`: Availability zones and number of broker hosts (provided as an object with the `value` field) per zone.

    If you are creating a cluster with KRaft (combined mode), specify one of the available configurations:

    - `"zone_id": ["ru-central1-a","ru-central1-b","ru-central1-d"], "brokers_count": {"value":"1"}`: Three availability zones with one broker host per zone.
    - `"zone_id": ["<one_availability_zone>"], "brokers_count": {"value":"3"}`: One availability zone with three broker hosts.

  - `assign_public_ip`: Access to broker hosts from the internet, `true` or `false`.

  - `schema_registry`: Manage data schemas using Managed Schema Registry, `true` or `false`. The default value is `false`. You will not be able to edit this setting once you create the Managed Service for Apache Kafka® cluster.

  - `rest_api_config`: Apache Kafka® REST API configuration. To allow sending requests to the Apache Kafka® REST API, specify `enabled: true`.

  - `disk_size_autoscaling`: To prevent the cluster disk space from running out, set the storage utilization thresholds (as a percentage of the total storage size) that trigger storage expansion when reached:

    - `planned_usage_threshold`: Storage utilization percentage that triggers a storage increase during the next maintenance window.

      Use a percentage value between `0` and `100`. The default value is `0` (automatic increase is disabled).

      If you set this parameter, configure the maintenance window schedule.

    - `emergency_usage_threshold`: Storage utilization percentage that triggers an immediate storage increase.

      Use a percentage value between `0` and `100`. The default value is `0` (automatic increase is disabled). This parameter value must be greater than or equal to `planned_usage_threshold`.

    - `disk_size_limit`: Maximum storage size, in bytes, that can be set when utilization reaches one of the specified thresholds.

  - `kafka_ui_config`: Use Kafka UI. To allow access to Kafka UI, specify `enabled: true`.

- `topic_specs`: Topic settings as an array of elements, one per topic. Each element has the following structure:

  - `name`: Topic name.

    Note

    Use the Apache Kafka® Admin API if you need to create a topic whose name starts with `_`. You cannot create such a topic using the Yandex Cloud interfaces.

  - `partitions`: Number of partitions, provided as an object with the `value` field.
  - `replication_factor`: Replication factor, provided as an object with the `value` field.

- `user_specs`: User settings as an array of elements, one per user. Each element has the following structure:

  - `name`: Username. It may contain Latin letters, numbers, hyphens, and underscores, and must start with a letter or underscore.
  - `password`: User password. The password must be from 8 to 128 characters long.
  - `permissions`: List of topics the user must have access to.

    The list is arranged as an array of elements. Each element is for a separate topic and has the following structure:

    - `topic_name`: Topic name or name template:
      - `*` to allow access to any topics.
      - Full topic name to allow access to a specific topic.
      - `<prefix>*` to grant access to topics whose names start with the prefix. Let's assume you have topics named `topic_a1`, `topic_a2`, and `a3`. If you put `topic*`, access will be granted to `topic_a1` and `topic_a2`. To include all the cluster's topics, use the `*` mask.
    - `role`: User's role: `ACCESS_ROLE_CONSUMER`, `ACCESS_ROLE_PRODUCER`, or `ACCESS_ROLE_ADMIN`. The `ACCESS_ROLE_ADMIN` role is only available if all topics are selected (`topic_name: "*"`).
    - `allow_hosts`: (Optional) List of IP addresses the user is allowed to access the topic from, as an array of elements.

- `maintenance_window`: Maintenance window settings, including for stopped clusters. Select one of these options:

  - `anytime`: At any time (default).
  - `weekly_maintenance_window`: On schedule:
    - `day`: Day of week in `DDD` format: `MON`, `TUE`, `WED`, `THU`, `FRI`, `SAT`, or `SUN`.
    - `hour`: Time of day (UTC) in `HH` format, from `1` to `24`.

- `deletion_protection`: Cluster protection against accidental deletion, `true` or `false`. The default value is `false`.

  Note

  Even with cluster deletion protection enabled, one can still delete a user or topic, or connect manually and delete the data.

- `disk_encryption_key_id`: ID of the custom KMS key. Provide the KMS key ID to encrypt the cluster disks. To learn more about disk encryption, see Storage.

  Warning

  You can enable disk encryption only when creating a cluster.

You can get the folder ID with the list of folders in the cloud.
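To make the combined-mode warning above concrete, here is a minimal sketch of a `body.json` for a KRaft (combined mode) cluster: it specifies three availability zones with one broker host per zone and deliberately contains no `kraft` section. All IDs in it are hypothetical placeholders.

```json
{
  "folder_id": "b1g0000000000example",
  "name": "kafka-kraft-combined",
  "environment": "PRODUCTION",
  "network_id": "enp0000000000example",
  "config_spec": {
    "version": "3.9",
    "kafka": {
      "resources": {
        "resource_preset_id": "s2.micro",
        "disk_size": "10737418240",
        "disk_type_id": "network-ssd"
      }
    },
    "zone_id": ["ru-central1-a", "ru-central1-b", "ru-central1-d"],
    "brokers_count": { "value": "1" }
  }
}
```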
- Run this request:

  ```bash
  grpcurl \
      -format json \
      -import-path ~/cloudapi/ \
      -import-path ~/cloudapi/third_party/googleapis/ \
      -proto ~/cloudapi/yandex/cloud/mdb/kafka/v1/cluster_service.proto \
      -rpc-header "Authorization: Bearer $IAM_TOKEN" \
      -d @ \
      mdb.api.cloud.yandex.net:443 \
      yandex.cloud.mdb.kafka.v1.ClusterService.Create \
      < body.json
  ```
- View the server response to make sure your request was successful.
Creating a cluster copy
You can create an Apache Kafka® cluster with the settings of another one created earlier. To do this, import the original Apache Kafka® cluster configuration to Terraform. Then you can either create an identical copy or use the imported configuration as the baseline and modify it as needed. The import feature is useful when you need to replicate an Apache Kafka® cluster with multiple settings.
To create an Apache Kafka® cluster copy:
- If you do not have Terraform yet, install it.

- Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.

- Configure and initialize a provider. There is no need to create a provider configuration file manually: you can download it.

- Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.

- In the same working directory, place a `.tf` file with the following contents:

  ```hcl
  resource "yandex_mdb_kafka_cluster" "old" { }
  ```
- Save the ID of the original Apache Kafka® cluster to an environment variable:

  ```bash
  export KAFKA_CLUSTER_ID=<cluster_ID>
  ```

  You can get the ID with the list of clusters in the folder.
- Import the original Apache Kafka® cluster settings to the Terraform configuration:

  ```bash
  terraform import yandex_mdb_kafka_cluster.old ${KAFKA_CLUSTER_ID}
  ```

- Get the imported configuration:

  ```bash
  terraform show
  ```

- Copy it from the terminal and paste it into the `.tf` file.

- Place the file in the new `imported-cluster` directory.

- Edit the copied configuration so that you can create a new cluster from it (see the sketch after this list):

  - Specify the new cluster name in the `resource` string and the `name` parameter.
  - Delete `created_at`, `health`, `host`, `id`, and `status`.
  - Add the `subnet_ids` argument with the list of subnet IDs for each availability zone.
  - If the `maintenance_window` section contains `type = "ANYTIME"`, delete the `hour` setting.
  - Optionally, make further changes if you need a customized configuration.
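As an illustration, a cleaned-up configuration might start like this. The resource name, cluster name, and IDs below are hypothetical; your imported file will contain your own values and, most likely, more settings:

```hcl
resource "yandex_mdb_kafka_cluster" "new" {
  # created_at, health, host, id, and status from the imported
  # configuration are removed: Terraform computes them itself.
  name               = "kafka-copy"
  environment        = "PRODUCTION"
  network_id         = "enp0000000000example"
  # subnet_ids is added manually, one subnet per availability zone in use.
  subnet_ids         = ["e9b0000000000example"]
  security_group_ids = ["enp0000000000example"]

  config {
    version          = "3.9"
    brokers_count    = 1
    zones            = ["ru-central1-a"]
    assign_public_ip = false

    kafka {
      resources {
        resource_preset_id = "s2.micro"
        disk_type_id       = "network-ssd"
        disk_size          = 10
      }
      kafka_config {}
    }
  }
}
```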
- Get the authentication credentials in the `imported-cluster` directory.

- In the same directory, configure and initialize the provider. There is no need to create a provider configuration file manually: you can download it.

- Place the configuration file in the `imported-cluster` directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.
- Make sure the Terraform configuration files are correct:

  ```bash
  terraform validate
  ```

  Terraform will display any configuration errors detected in your files.
- Create the required infrastructure:

  - Run this command to view the planned changes:

    ```bash
    terraform plan
    ```

    If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

  - If everything looks correct, apply the changes:

    - Run this command:

      ```bash
      terraform apply
      ```

    - Confirm updating the resources.
    - Wait for the operation to complete.

  All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.
Time limits
The Terraform provider sets a 60-minute limit for all Managed Service for Apache Kafka® cluster operations to complete.
Operations that exceed this timeout are interrupted.
How do I change these limits?
Add the `timeouts` block to the cluster description, for example:
resource "yandex_mdb_kafka_cluster" "<cluster_name>" {
...
timeouts {
create = "1h30m" # 1 hour 30 minutes
update = "2h" # 2 hours
delete = "30m" # 30 minutes
}
}
Examples
Creating a single-host cluster
Create a Managed Service for Apache Kafka® cluster with the following test specifications:
- Name: `mykf`.
- Environment: `production`.
- Apache Kafka® version: 3.9.
- Network: `default`.
- Subnet ID: `b0rcctk2rvtr8efcch64`.
- Security group: `enp6saqnq4ie244g67sb`.
- Broker host class: `s2.micro` (one host), availability zone: `ru-central1-a`.
- Network SSD storage (`network-ssd`): 10 GB.
- Public access: Enabled.
- Deletion protection: Enabled.
Run this command:
yc managed-kafka cluster create \
--name mykf \
--environment production \
--version 3.9 \
--network-name default \
--subnet-ids b0rcctk2rvtr8efcch64 \
--zone-ids ru-central1-a \
--brokers-count 1 \
--resource-preset s2.micro \
--disk-size 10 \
--disk-type network-ssd \
--assign-public-ip \
--security-group-ids enp6saqnq4ie244g67sb \
--deletion-protection
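When the command completes, you can, for example, request the cluster details to confirm it was created with the expected settings:

```bash
# Show the new cluster's settings; the status field should eventually
# report that the cluster is up and running.
yc managed-kafka cluster get mykf
```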
Create a Managed Service for Apache Kafka® cluster with the following test specifications:
- Cloud ID: `b1gq90dgh25bebiu75o`.
- Folder ID: `b1gia87mbaomkfvsleds`.
- Name: `mykf`.
- Environment: `PRODUCTION`.
- Apache Kafka® version: 3.9.
- New network: `mynet`, subnet: `mysubnet`, address range: `10.5.0.0/24`.
- Security group: `mykf-sg`, allowing internet connections to the Managed Service for Apache Kafka® cluster on port `9091`.
- Broker host class: `s2.micro` (one host), availability zone: `ru-central1-a`.
- Network SSD storage (`network-ssd`): 10 GB.
- Public access: Enabled.
- Deletion protection: Enabled.
The configuration file for this Managed Service for Apache Kafka® cluster is as follows:
resource "yandex_mdb_kafka_cluster" "mykf" {
environment = "PRODUCTION"
name = "mykf"
network_id = yandex_vpc_network.mynet.id
subnet_ids = [ yandex_vpc_subnet.mysubnet.id ]
security_group_ids = [ yandex_vpc_security_group.mykf-sg.id ]
deletion_protection = true
config {
assign_public_ip = true
brokers_count = 1
version = "3.9"
kafka {
resources {
disk_size = 10
disk_type_id = "network-ssd"
resource_preset_id = "s2.micro"
}
kafka_config {}
}
zones = [
"ru-central1-a"
]
}
}
resource "yandex_vpc_network" "mynet" {
name = "mynet"
}
resource "yandex_vpc_subnet" "mysubnet" {
name = "mysubnet"
zone = "ru-central1-a"
network_id = yandex_vpc_network.mynet.id
v4_cidr_blocks = ["10.5.0.0/24"]
}
resource "yandex_vpc_security_group" "mykf-sg" {
name = "mykf-sg"
network_id = yandex_vpc_network.mynet.id
ingress {
description = "Kafka"
port = 9091
protocol = "TCP"
v4_cidr_blocks = [ "0.0.0.0/0" ]
}
}
Creating a cluster with KRaft in combined mode
In this example, we use a configuration with three availability zones and one broker host in each. In combined mode, each broker host also runs a KRaft metadata controller, so the command and configuration below contain no separate controller settings.
Create a Managed Service for Apache Kafka® cluster with the following test specifications:
- Name: `kafka-kraft`.
- Environment: `production`.
- Apache Kafka® version: `3.9`.
- Network ID: `enpc6eqfhmj2********`.
- Subnet IDs:
  - `e9bhbia2scnk********`
  - `e2lfqbm5nt9r********`
  - `fl8beqmjckv8********`
- One broker host in each of the following availability zones:
  - `ru-central1-a`
  - `ru-central1-b`
  - `ru-central1-d`
- Host class: `s2.micro`.
- Network HDD storage (`network-hdd`): `10` GB.
- Public access: Enabled.
- Security group: `enp68jq81uun********`.
Run this command:
yc managed-kafka cluster create \
--name kafka-kraft \
--environment production \
--version 3.9 \
--network-id enpc6eqfhmj2******** \
--subnet-ids e9bhbia2scnk********,e2lfqbm5nt9r********,fl8beqmjckv8******** \
--zone-ids ru-central1-a,ru-central1-b,ru-central1-d \
--brokers-count 1 \
--resource-preset s2.micro \
--disk-size 10 \
--disk-type network-hdd \
--assign-public-ip \
--security-group-ids enp68jq81uun********
Create a Managed Service for Apache Kafka® cluster with the following test specifications:
- Name: `kafka-kraft`.
- Environment: `production`.
- Apache Kafka® version: `3.9`.
- New network: `kafka-net`, subnets in each availability zone:
  - `kafka-subnet-a` with the `10.1.0.0/24` address range.
  - `kafka-subnet-b` with the `10.2.0.0/24` address range.
  - `kafka-subnet-d` with the `10.3.0.0/24` address range.
- One broker host in each of the following availability zones:
  - `ru-central1-a`
  - `ru-central1-b`
  - `ru-central1-d`
- Host class: `s2.micro`.
- Network HDD storage (`network-hdd`): `10` GB.
- Public access: Enabled.
- Security group: `kafka-sg`, allowing all incoming and outgoing traffic.
The configuration file for this Managed Service for Apache Kafka® cluster is as follows:
resource "yandex_mdb_kafka_cluster" "kafka-kraft" {
name = "kafka-kraft"
environment = "PRODUCTION"
network_id = yandex_vpc_network.kafka-net.id
subnet_ids = [yandex_vpc_subnet.kafka-subnet-a.id, yandex_vpc_subnet.kafka-subnet-b.id, yandex_vpc_subnet.kafka-subnet-d.id]
security_group_ids = [yandex_vpc_security_group.kafka-sg.id]
config {
version = "3.9"
brokers_count = 1
zones = ["ru-central1-a", "ru-central1-b", "ru-central1-d"]
assign_public_ip = true
kafka {
resources {
disk_size = 10
disk_type_id = "network-hdd"
resource_preset_id = "s2.micro"
}
kafka_config {}
}
}
}
resource "yandex_vpc_network" "kafka-net" {
name = "kafka-net"
}
resource "yandex_vpc_subnet" "kafka-subnet-a" {
name = "kafka-subnet-a"
zone = "ru-central1-a"
network_id = yandex_vpc_network.kafka-net.id
v4_cidr_blocks = ["10.1.0.0/24"]
}
resource "yandex_vpc_subnet" "kafka-subnet-b" {
name = "kafka-subnet-b"
zone = "ru-central1-b"
network_id = yandex_vpc_network.kafka-net.id
v4_cidr_blocks = ["10.2.0.0/24"]
}
resource "yandex_vpc_subnet" "kafka-subnet-d" {
name = "kafka-subnet-d"
zone = "ru-central1-d"
network_id = yandex_vpc_network.kafka-net.id
v4_cidr_blocks = ["10.3.0.0/24"]
}
resource "yandex_vpc_security_group" "kafka-sg" {
name = "kafka-sg"
network_id = yandex_vpc_network.kafka-net.id
ingress {
protocol = "ANY"
v4_cidr_blocks = ["0.0.0.0/0"]
}
egress {
protocol = "ANY"
v4_cidr_blocks = ["0.0.0.0/0"]
}
}
Creating a cluster with KRaft on separate hosts (multi-host cluster)
Create a Managed Service for Apache Kafka® cluster with the following test specifications:
- Name: `kafka-kraft-mh`.
- Environment: `production`.
- Apache Kafka® version: `3.9`.
- Network ID: `enpc6eqfhmj2********`.
- Subnet IDs:
  - `e9bhbia2scnk********`
  - `e2lfqbm5nt9r********`
  - `fl8beqmjckv8********`
- Hosts: Two broker hosts in the `ru-central1-a` availability zone.
- Host class: `s2.micro`.
- Network HDD storage (`network-hdd`): `10` GB.
- KRaft controller host class: `s2.micro`.
- KRaft controller network SSD storage (`network-ssd`): `10` GB.
- Public access: Enabled.
- Security group: `enp68jq81uun********`.
Run this command:
yc managed-kafka cluster create \
--name kafka-kraft-mh \
--environment production \
--version 3.9 \
--network-id enpc6eqfhmj2******** \
--subnet-ids e9bhbia2scnk********,e2lfqbm5nt9r********,fl8beqmjckv8******** \
--zone-ids ru-central1-a \
--brokers-count 2 \
--resource-preset s2.micro \
--disk-size 10 \
--disk-type network-hdd \
--controller-resource-preset s2.micro \
--controller-disk-size 10 \
--controller-disk-type network-ssd \
--assign-public-ip \
--security-group-ids enp68jq81uun********
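To confirm the topology, you can, for example, list the cluster hosts; with KRaft on separate hosts, you should see the two broker hosts plus three dedicated KRaft controller hosts:

```bash
# List every host in the cluster together with its role and zone.
yc managed-kafka cluster list-hosts kafka-kraft-mh
```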
Create a Managed Service for Apache Kafka® cluster with the following test specifications:
- Name: `kafka-kraft-mh`.
- Environment: `production`.
- Apache Kafka® version: `3.9`.
- New network: `kafka-net`, subnets in each availability zone:
  - `kafka-subnet-a` with the `10.1.0.0/24` address range.
  - `kafka-subnet-b` with the `10.2.0.0/24` address range.
  - `kafka-subnet-d` with the `10.3.0.0/24` address range.
- Hosts: Two broker hosts in the `ru-central1-a` availability zone.
- Host class: `s2.micro`.
- Network HDD storage (`network-hdd`): `10` GB.
- KRaft controller host class: `s2.micro`.
- KRaft controller network SSD storage (`network-ssd`): `10` GB.
- Public access: Enabled.
- Security group: `kafka-sg`, allowing all incoming and outgoing traffic.
The configuration file for this Managed Service for Apache Kafka® cluster is as follows:
resource "yandex_mdb_kafka_cluster" "kafka-kraft-mh" {
name = "kafka-kraft-mh"
environment = "PRODUCTION"
network_id = yandex_vpc_network.kafka-net.id
subnet_ids = [yandex_vpc_subnet.kafka-subnet-a.id,yandex_vpc_subnet.kafka-subnet-b.id,yandex_vpc_subnet.kafka-subnet-d.id]
security_group_ids = [yandex_vpc_security_group.kafka-sg.id]
config {
version = "3.9"
brokers_count = 2
zones = ["ru-central1-a"]
assign_public_ip = true
kafka {
resources {
disk_size = 10
disk_type_id = "network-hdd"
resource_preset_id = "s2.micro"
}
kafka_config {}
}
kraft {
resources {
resource_preset_id = "s2.micro"
disk_type_id = "network-ssd"
disk_size = 10
}
}
}
}
resource "yandex_vpc_network" "kafka-net" {
name = "kafka-net"
}
resource "yandex_vpc_subnet" "kafka-subnet-a" {
name = "kafka-subnet-a"
zone = "ru-central1-a"
network_id = yandex_vpc_network.kafka-net.id
v4_cidr_blocks = ["10.1.0.0/24"]
}
resource "yandex_vpc_subnet" "kafka-subnet-b" {
name = "kafka-subnet-b"
zone = "ru-central1-b"
network_id = yandex_vpc_network.kafka-net.id
v4_cidr_blocks = ["10.2.0.0/24"]
}
resource "yandex_vpc_subnet" "kafka-subnet-d" {
name = "kafka-subnet-d"
zone = "ru-central1-d"
network_id = yandex_vpc_network.kafka-net.id
v4_cidr_blocks = ["10.3.0.0/24"]
}
resource "yandex_vpc_security_group" "kafka-sg" {
name = "kafka-sg"
network_id = yandex_vpc_network.kafka-net.id
ingress {
protocol = "ANY"
v4_cidr_blocks = ["0.0.0.0/0"]
}
egress {
protocol = "ANY"
v4_cidr_blocks = ["0.0.0.0/0"]
}
}
Creating a cluster with ZooKeeper on separate hosts (multi-host cluster)
Create a Managed Service for Apache Kafka® cluster with the following test specifications:
- Name: `kafka-zk-mh`.
- Environment: `production`.
- Apache Kafka® version: `3.9`.
- Network ID: `enpc6eqfhmj2********`.
- Subnet IDs:
  - `e9bhbia2scnk********`
  - `e2lfqbm5nt9r********`
  - `fl8beqmjckv8********`
- Hosts: Two broker hosts in the `ru-central1-a` availability zone.
- Host class: `s2.micro`.
- Network HDD storage (`network-hdd`): `10` GB.
- ZooKeeper host class: `s2.micro`.
- ZooKeeper network SSD storage (`network-ssd`): `10` GB.
- Public access: Enabled.
- Security group: `enp68jq81uun********`.
Run this command:
yc managed-kafka cluster create \
--name kafka-zk-mh \
--environment production \
--version 3.9 \
--network-id enpc6eqfhmj2******** \
--subnet-ids e9bhbia2scnk********,e2lfqbm5nt9r********,fl8beqmjckv8******** \
--zone-ids ru-central1-a \
--brokers-count 2 \
--resource-preset s2.micro \
--disk-size 10 \
--disk-type network-hdd \
--zookeeper-resource-preset s2.micro \
--zookeeper-disk-size 10 \
--zookeeper-disk-type network-ssd \
--assign-public-ip \
--security-group-ids enp68jq81uun********
Create a Managed Service for Apache Kafka® cluster with the following test specifications:
- Name: `kafka-zk-mh`.
- Environment: `production`.
- Apache Kafka® version: `3.9`.
- New network: `kafka-net`, subnets in each availability zone:
  - `kafka-subnet-a` with the `10.1.0.0/24` address range.
  - `kafka-subnet-b` with the `10.2.0.0/24` address range.
  - `kafka-subnet-d` with the `10.3.0.0/24` address range.
- Hosts: Two broker hosts in the `ru-central1-a` availability zone.
- Host class: `s2.micro`.
- Network HDD storage (`network-hdd`): `10` GB.
- ZooKeeper host class: `s2.micro`.
- ZooKeeper network SSD storage (`network-ssd`): `10` GB.
- Public access: Enabled.
- Security group: `kafka-sg`, allowing all incoming and outgoing traffic.
The configuration file for this Managed Service for Apache Kafka® cluster is as follows:
resource "yandex_mdb_kafka_cluster" "kafka-zk-mh" {
name = "kafka-zk-mh"
environment = "PRODUCTION"
network_id = yandex_vpc_network.kafka-net.id
subnet_ids = [yandex_vpc_subnet.kafka-subnet-a.id,yandex_vpc_subnet.kafka-subnet-b.id,yandex_vpc_subnet.kafka-subnet-d.id]
security_group_ids = [yandex_vpc_security_group.kafka-sg.id]
config {
version = "3.9"
brokers_count = 2
zones = ["ru-central1-a"]
assign_public_ip = true
kafka {
resources {
disk_size = 10
disk_type_id = "network-hdd"
resource_preset_id = "s2.micro"
}
kafka_config {}
}
zookeeper {
resources {
resource_preset_id = "s2.micro"
disk_type_id = "network-ssd"
disk_size = 10
}
}
}
}
resource "yandex_vpc_network" "kafka-net" {
name = "kafka-net"
}
resource "yandex_vpc_subnet" "kafka-subnet-a" {
name = "kafka-subnet-a"
zone = "ru-central1-a"
network_id = yandex_vpc_network.kafka-net.id
v4_cidr_blocks = ["10.1.0.0/24"]
}
resource "yandex_vpc_subnet" "kafka-subnet-b" {
name = "kafka-subnet-b"
zone = "ru-central1-b"
network_id = yandex_vpc_network.kafka-net.id
v4_cidr_blocks = ["10.2.0.0/24"]
}
resource "yandex_vpc_subnet" "kafka-subnet-d" {
name = "kafka-subnet-d"
zone = "ru-central1-d"
network_id = yandex_vpc_network.kafka-net.id
v4_cidr_blocks = ["10.3.0.0/24"]
}
resource "yandex_vpc_security_group" "kafka-sg" {
name = "kafka-sg"
network_id = yandex_vpc_network.kafka-net.id
ingress {
protocol = "ANY"
v4_cidr_blocks = ["0.0.0.0/0"]
}
egress {
protocol = "ANY"
v4_cidr_blocks = ["0.0.0.0/0"]
}
}