Creating an Apache Kafka® cluster

Written by
Yandex Cloud
Updated at May 5, 2025
  • Differences in cluster configuration with ZooKeeper and Apache Kafka® Raft protocol
  • Getting started
  • Creating a cluster
  • Creating a cluster copy
  • Examples
    • Creating a single-host cluster

A Managed Service for Apache Kafka® cluster is one or more broker hosts where topics and their partitions are located. Producers and consumers can work with these topics by connecting to Managed Service for Apache Kafka® cluster hosts.

Note

  • The number of broker hosts you can create together with a Managed Service for Apache Kafka® cluster depends on the selected disk type and host class.
  • Available disk types depend on the selected host class.

Note

Starting March 1, 2025, support for Apache Kafka® 2.8, 3.0, 3.1, 3.2, and 3.3 is discontinued. You cannot create a cluster with these versions.

Differences in cluster configuration with ZooKeeper and Apache Kafka® Raft protocol

If you create a cluster with Apache Kafka® 3.5 and more than one host, three dedicated ZooKeeper hosts will be added to the cluster.

Clusters with Apache Kafka® 3.6 or higher support Apache Kafka® Raft (KRaft for short). It is used instead of ZooKeeper to store metadata.

When creating a cluster with the KRaft protocol, the following configuration restrictions apply:

  • You can create a cluster with only one or three availability zones.
  • Limited number of broker hosts:
    • If you select one availability zone, you can create one or three broker hosts.
    • If you select three availability zones, you can only create one broker host.

For more information about the differences in cluster configurations with ZooKeeper and KRaft, see Resource relationships in Managed Service for Apache Kafka®.

Getting started

  1. Calculate the minimum storage size for topics.
  2. Assign the vpc.user role and the managed-kafka.editor role or higher to your Yandex Cloud account.

If you specify security group IDs when creating a Managed Service for Apache Kafka® cluster, you may also need to configure security groups to connect to the cluster.
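
If you prefer the CLI, a minimal sketch of the role assignment might look like this (the folder name and the user account ID are placeholders; repeat the command with --role vpc.user):

yc resource-manager folder add-access-binding <folder_name_or_ID> \
   --role managed-kafka.editor \
   --subject userAccount:<user_account_ID>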

Creating a cluster

Management console
CLI
Terraform
REST API
gRPC API

To create a Managed Service for Apache Kafka® cluster:

  1. In the management console, go to the appropriate folder.

  2. In the list of services, select Managed Service for Kafka.

  3. Click Create cluster.

  4. Under Basic parameters:

    1. Enter a name and description for the Managed Service for Apache Kafka® cluster. The Managed Service for Apache Kafka® cluster name must be unique within the folder.
    2. Select the environment where you want to create the Managed Service for Apache Kafka® cluster (you cannot change the environment once the cluster is created):
      • PRODUCTION: For stable versions of your apps.
      • PRESTABLE: For testing purposes. The prestable environment is similar to the production environment and likewise covered by the SLA, but it is the first to get new functionalities, improvements, and bug fixes. In the prestable environment, you can test compatibility of new versions with your application.
    3. Select the Apache Kafka® version.
  5. Under Host class, select the platform, host type, and host class.

    The host class defines the technical specifications of VMs the Apache Kafka® nodes are deployed on. All available options are listed under Host classes.

    When you change the host class for a Managed Service for Apache Kafka® cluster, the specifications of all existing instances also change.

  6. Under Storage:

    • Select the disk type.

      Warning

      You cannot change disk type after you create a cluster.

      The selected type determines the increments in which you can change your disk size:

      • Network HDD and SSD storage: In increments of 1 GB.
      • Local SSD storage:
        • For Intel Cascade Lake: In increments of 100 GB.
        • For Intel Ice Lake: In increments of 368 GB.
      • Non-replicated SSD storage: In increments of 93 GB.


    • Select the storage size to use for data.

  7. Under Automatic increase of storage size, set the storage utilization thresholds that will trigger an increase in storage size when reached:

    1. In the Increase size field, select one or both thresholds:
      • In the maintenance window when full at more than: Scheduled increase threshold. When reached, the storage size increases during the next maintenance window.
      • Immediately when full at more than: Immediate increase threshold. When reached, the storage size increases immediately.
    2. Specify a threshold value (as a percentage of the total storage size). If you select both thresholds, make sure the immediate increase threshold is higher than the scheduled one.
    3. Set Maximum storage size.
  8. Under Network settings:

    1. Select one or more availability zones to place your Apache Kafka® broker hosts in.
      If you create a Managed Service for Apache Kafka® cluster with one availability zone, you will not be able to increase the number of zones and broker hosts later on.
      For clusters with Apache Kafka® version 3.6 and higher, you can select only one or three availability zones.

    2. Select a network.

    3. Select subnets in each availability zone for this network. To create a new subnet, click Create next to the availability zone in question.

      Note

      For a cluster with Apache Kafka® 3.5 and multiple broker hosts, specify subnets in each availability zone even if you plan to place broker hosts only in some of them. These subnets are required to host three ZooKeeper hosts, one in each availability zone. For more information, see Resource relationships in the service.

    4. Select security groups for the Managed Service for Apache Kafka® cluster's network traffic.

    5. To access broker hosts from the internet, select Public access. In this case, you can only connect to them over an SSL connection. For more information, see Connecting to topics in a cluster.

  9. Under Hosts:

    1. Specify the number of Apache Kafka® broker hosts to be located in each of the selected availability zones.

      When choosing the number of hosts, keep in mind that:

      • In Apache Kafka® versions 3.6 and higher, the number of broker hosts depends on the selected availability zones:

        • One availability zone: one or three broker hosts. To use three broker hosts, enable the Combined mode setting.
        • Three availability zones: one broker host.

        You cannot set the number of broker hosts manually.

      • Replication is possible if there are at least two hosts per Managed Service for Apache Kafka® cluster.

      • If you selected local-ssd or network-ssd-nonreplicated under Storage, you need to add at least three hosts to the Managed Service for Apache Kafka® cluster.

      • There are conditions to be satisfied for a fault-tolerant Managed Service for Apache Kafka® cluster.

      • If you add more than one host to a cluster with Apache Kafka® 3.5, three ZooKeeper hosts will be automatically added as well.

    2. Optionally, select groups of dedicated hosts to host the Managed Service for Apache Kafka® cluster.

      Alert

      You cannot edit this setting after you create a cluster. The use of dedicated hosts significantly affects cluster pricing.

  10. If you are creating a cluster with version 3.5 and have specified more than one broker host, under ZooKeeper host class, specify the characteristics of the ZooKeeper hosts to place in each of the selected availability zones.

  11. Configure additional Managed Service for Apache Kafka® cluster settings, if required:

    • Maintenance window: Maintenance window settings:

      • To enable maintenance at any time, select arbitrary (default).
      • To specify the preferred maintenance start time, select by schedule and specify the desired day of the week and UTC hour. For example, you can choose a time when the cluster is least loaded.

      Maintenance operations are carried out both on enabled and disabled clusters. They may include updating the DBMS, applying patches, and so on.

    • Deletion protection: Manages cluster protection against accidental deletion.

      Even with cluster deletion protection enabled, one can still delete a user or topic or connect manually and delete the data.

    • To manage data schemas using Managed Schema Registry, enable the Schema registry setting.

      Warning

      You cannot disable data schema management using Managed Schema Registry after connecting it.

    • To allow sending requests to the Apache Kafka® API, enable Kafka Rest API.

      It is implemented based on the Karapace open-source tool. The Karapace API is compatible with the Confluent REST Proxy API with only minor exceptions.

      Warning

      You cannot disable Kafka Rest API once it is enabled.

  12. Configure the Apache Kafka® settings, if required.

  13. Click Create.

  14. Wait until the Managed Service for Apache Kafka® cluster is ready: its status on the Managed Service for Apache Kafka® dashboard will change to Running, and its state, to Alive. This may take some time.

If you do not have the Yandex Cloud CLI yet, install and initialize it.

The folder specified when creating the CLI profile is used by default. To change the default folder, use the yc config set folder-id <folder_ID> command. You can specify a different folder using the --folder-name or --folder-id parameter.

Warning

When creating a cluster with KRaft, do not specify the ZooKeeper settings.

To create a Managed Service for Apache Kafka® cluster:

  1. See the description of the CLI command for creating a Managed Service for Apache Kafka® cluster:

    yc managed-kafka cluster create --help
    
  2. Specify the Managed Service for Apache Kafka® cluster parameters in the create command (not all parameters are given in the example):

    yc managed-kafka cluster create \
       --name <cluster_name> \
       --environment <environment> \
       --version <version> \
       --schema-registry \
       --network-name <network_name> \
       --subnet-ids <subnet_IDs> \
       --zone-ids <availability_zones> \
       --brokers-count <number_of_broker_hosts_in_zone> \
       --resource-preset <host_class> \
       --disk-type <network-hdd|network-ssd|network-ssd-nonreplicated|local-ssd> \
       --disk-size <storage_size_in_GB> \
       --assign-public-ip <public_access> \
       --security-group-ids <list_of_security_group_IDs> \
       --deletion-protection
    

    Where:

    • --environment: Cluster environment, prestable or production.

    • --version: Apache Kafka® version, 3.5 or 3.6.

    • --schema-registry: Manage data schemas using Managed Schema Registry.

      Warning

      You cannot disable data schema management using Managed Schema Registry after connecting it.

    • --zone-ids and --brokers-count: Availability zones and number of broker hosts per zone.

      For clusters with Apache Kafka® version 3.6 and higher, only the following configurations are available:

      • --zone-ids=ru-central1-a,ru-central1-b,ru-central1-d --brokers-count=1
      • --zone-ids=<one_availability_zone> --brokers-count=1
      • --zone-ids=<one_availability_zone> --brokers-count=3
    • --resource-preset: Host class.

    • --disk-type: Disk type.

      Warning

      You cannot change disk type after you create a cluster.

    • --deletion-protection: Cluster protection from accidental deletion, true or false.

      Even with cluster deletion protection enabled, one can still delete a user or topic or connect manually and delete the data.

    Tip

    You can also configure the Apache Kafka® settings here, if required.

  3. To set up a maintenance window (including for disabled Managed Service for Apache Kafka® clusters), provide the required value in the --maintenance-window parameter when creating your cluster:

    yc managed-kafka cluster create \
       ...
       --maintenance-window type=<maintenance_type>,`
                           `day=<day_of_week>,`
                           `hour=<hour> \
    

    Where type is the maintenance type:

    • anytime (default): Any time.
    • weekly: On a schedule. If setting this value, specify the day of week and the hour:
      • day: Day of week in DDD format: MON, TUE, WED, THU, FRI, SAT, or SUN.
      • hour: Hour (UTC) in HH format: 1 to 24.
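
    For example, to schedule maintenance for Mondays at 03:00 UTC (the day and hour here are purely illustrative):

    yc managed-kafka cluster create \
       ...
       --maintenance-window type=weekly,day=MON,hour=3
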
  4. To prevent the cluster disk space from running out, create a cluster that will increase the storage space automatically.

    yc managed-kafka cluster create \
       ...
       --disk-size-autoscaling disk-size-limit=<maximum_storage_size_in_bytes>,`
                              `planned-usage-threshold=<scheduled_increase_percentage>,`
                              `emergency-usage-threshold=<immediate_increase_percentage>
    

    Where:

    • planned-usage-threshold: Storage utilization percentage to trigger a storage increase in the next maintenance window.

      Use a percentage value between 0 and 100. The default value is 0 (automatic increase is disabled).

      If you set this parameter, configure the maintenance schedule.

    • emergency-usage-threshold: Storage utilization percentage to trigger an immediate storage increase.

      Use a percentage value between 0 and 100. The default value is 0 (automatic increase is disabled). This parameter value must be greater than or equal to planned-usage-threshold.

    • disk-size-limit: Maximum storage size, in bytes, that can be set when utilization reaches one of the specified percentages.

      If the value is 0, automatic increase of storage size will be disabled.

    Warning

    • You cannot decrease the storage size.
    • While resizing the storage, cluster hosts will be unavailable.
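
    For example, to let the storage grow to at most 53687091200 bytes (50 GB), on schedule at 70% utilization and immediately at 90% (all values here are illustrative):

    yc managed-kafka cluster create \
       ...
       --disk-size-autoscaling disk-size-limit=53687091200,`
                              `planned-usage-threshold=70,`
                              `emergency-usage-threshold=90
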
  5. To create a Managed Service for Apache Kafka® cluster based on dedicated host groups, specify their IDs as a comma-separated list in the --host-group-ids parameter when creating the cluster:

    yc managed-kafka cluster create \
       ...
       --host-group-ids <dedicated_host_group_IDs>
    

    Alert

    You cannot edit this setting after you create a cluster. The use of dedicated hosts significantly affects cluster pricing.

With Terraform, you can quickly create a cloud infrastructure in Yandex Cloud and manage it using configuration files. These files store the infrastructure description written in HashiCorp Configuration Language (HCL). If you change the configuration files, Terraform automatically detects which part of your configuration is already deployed, and what should be added or removed.

Terraform is distributed under the Business Source License. The Yandex Cloud provider for Terraform is distributed under the MPL-2.0 license.

For more information about the provider resources, see the documentation on the Terraform website or mirror website.

If you do not have Terraform yet, install it and configure its Yandex Cloud provider.

Warning

When creating a cluster with KRaft, do not specify the ZooKeeper settings.

To create a Managed Service for Apache Kafka® cluster:

  1. In the configuration file, describe the resources you are creating:

    • Managed Service for Apache Kafka® cluster: Description of a cluster and its hosts. You can also configure the Apache Kafka® settings here, if required.

    • Network: Description of the cloud network where a cluster will be located. If you already have a suitable network, you don't have to describe it again.

    • Subnets: Description of the subnets to connect the cluster hosts to. If you already have suitable subnets, you don't have to describe them again.

    Here is an example of the configuration file structure:

    resource "yandex_mdb_kafka_cluster" "<cluster_name>" {
      environment         = "<environment>"
      name                = "<cluster_name>"
      network_id          = "<network_ID>"
      subnet_ids          = ["<list_of_subnet_IDs>"]
      security_group_ids  = ["<list_of_cluster_security_group_IDs>"]
      deletion_protection = <cluster_deletion_protection>
    
      config {
        version          = "<version>"
        zones            = ["<availability_zones>"]
        brokers_count    = <number_of_broker_hosts>
        assign_public_ip = "<public_access>"
        schema_registry  = "<data_schema_management>"
        kafka {
          resources {
            disk_size          = <storage_size_in_GB>
            disk_type_id       = "<disk_type>"
            resource_preset_id = "<host_class>"
          }
          kafka_config {}
        }
      }
    }
    
    resource "yandex_vpc_network" "<network_name>" {
      name = "<network_name>"
    }
    
    resource "yandex_vpc_subnet" "<subnet_name>" {
      name           = "<subnet_name>"
      zone           = "<availability_zone>"
      network_id     = "<network_ID>"
      v4_cidr_blocks = ["<range>"]
    }
    

    Where:

    • environment: Cluster environment, PRESTABLE or PRODUCTION.

    • version: Apache Kafka® version, 3.5 or 3.6.

    • zones and brokers_count: Availability zones and number of broker hosts per zone.

      If you are creating a cluster with Apache Kafka® version 3.6 or higher, specify one of the available configurations:

      • zones = ["ru-central1-a","ru-central1-b","ru-central1-d"] brokers_count = 1
      • zones = ["<one_availability_zone>"] brokers_count = 1
      • zones = ["<one_availability_zone>"] brokers_count = 3
    • deletion_protection: Cluster protection from accidental deletion, true or false.

      Even with cluster deletion protection enabled, one can still delete a user or topic or connect manually and delete the data.

    • assign_public_ip: Public access to the cluster, true or false.

    • schema_registry: Manage data schemas using Managed Schema Registry, true or false. The default value is false.

      Warning

      You cannot disable data schema management using Managed Schema Registry after connecting it.

    To set up the maintenance window (for disabled clusters as well), add the maintenance_window section to the cluster description:

    resource "yandex_mdb_kafka_cluster" "<cluster_name>" {
      ...
      maintenance_window {
        type = <maintenance_type>
        day  = <day_of_week>
        hour = <hour>
      }
      ...
    }
    

    Where:

    • type: Maintenance type. The possible values include:
      • anytime: Anytime.
      • weekly: By schedule.
    • day: Day of the week for the weekly type in DDD format, e.g., MON.
    • hour: Hour of the day for the weekly type in the HH format, e.g., 21.
  2. Make sure the settings are correct.

    1. In the command line, navigate to the directory that contains the current Terraform configuration files defining the infrastructure.

    2. Run this command:

      terraform validate
      

      Terraform will show any errors found in your configuration files.

  3. Create a Managed Service for Apache Kafka® cluster.

    1. Run this command to view the planned changes:

      terraform plan
      

      If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

    2. If everything looks correct, apply the changes:

      1. Run this command:

        terraform apply
        
      2. Confirm updating the resources.

      3. Wait for the operation to complete.

    After this, all required resources will be created in the specified folder, and the FQDNs of the Managed Service for Apache Kafka® cluster hosts will be displayed in the terminal. You can check the new resources and their configuration in the management console.

For more information, see the Terraform provider documentation.
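
You can also review the created resources from the command line using standard Terraform tooling:

terraform show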

Time limits

The Terraform provider sets a 60-minute limit for all Managed Service for Apache Kafka® cluster operations to complete.

Operations exceeding the set timeout are interrupted.

How do I change these limits?

Add the timeouts block to the cluster description, for example:

resource "yandex_mdb_kafka_cluster" "<cluster_name>" {
  ...
  timeouts {
    create = "1h30m" # 1 hour 30 minutes
    update = "2h"    # 2 hours
    delete = "30m"   # 30 minutes
  }
}

Warning

When creating a cluster with KRaft, do not specify the ZooKeeper settings.

  1. Get an IAM token for API authentication and put it into the environment variable:

    export IAM_TOKEN="<IAM_token>"
    
  2. Use the Cluster.create method and send the following request, e.g., via cURL:

    1. Create a file named body.json and add the following contents to it:

      Note

      This example does not use all available parameters.

      {
        "folderId": "<folder_ID>",
        "name": "<cluster_name>",
        "environment": "<environment>",
        "networkId": "<network_ID>",
        "securityGroupIds": [
          "<security_group_1_ID>",
          "<security_group_2_ID>",
          ...
          "<security_group_N_ID>"
        ],
        "configSpec": {
          "version": "<Apache Kafka®_version>",
          "kafka": {
            "resources": {
              "resourcePresetId": "<Apache Kafka®_host_class>",
              "diskSize": "<storage_size_in_bytes>",
              "diskTypeId": "<disk_type>"
            }
          },
          "zookeeper": {
            "resources": {
              "resourcePresetId": "<ZooKeeper_host_class> ",
              "diskSize": "<storage_size_in_bytes>",
              "diskTypeId": "<disk_type>"                   
            }
          },
          "zoneId": [
            <list_of_availability_zones>
          ],
          "brokersCount": "<number_of_brokers_in_zone>",
          "assignPublicIp": <public_access:_true_or_false>,
          "schemaRegistry": <data_schema_management:_true_or_false>,
          "restApiConfig": {
            "enabled": <send_requests_to_Apache Kafka®_API:_true_or_false>
          },
          "diskSizeAutoscaling": {
            <automatic_storage_size_increase_parameters>
          }
        },
        "topicSpecs": [
          {
            "name": "<topic_name>",
            "partitions": "<number_of_partitions>",
            "replicationFactor": "<replication_factor>"
          },
          { <similar_list_of_settings_for_topic_2> },
          { ... },
          { <similar_list_of_settings_for_topic_N> }
        ],
        "userSpecs": [
          {
            "name": "<username>",
            "password": "<user_password>",
            "permissions": [
              {
                "topicName": "<topic_name>",
                "role": "<user's_role>"
              }
            ]
          },
          { <similar_configuration_for_user_2> },
          { ... },
          { <similar_configuration_for_user_N> }
        ],
        "maintenanceWindow": {
          "anytime": {},
          "weeklyMaintenanceWindow": {
            "day": "<day_of_week>",
            "hour": "<hour_UTC>"
          }
        },
        "deletionProtection": <cluster_deletion_protection:_true_or_false>
      }
      

      Where:

      • name: Cluster name.

      • environment: Cluster environment, PRODUCTION or PRESTABLE.

      • networkId: ID of the network the cluster will be in.

      • securityGroupIds: Security group IDs as an array of strings. Each string is a security group ID.

      • configSpec: Cluster configuration:

        • version: Apache Kafka® version, 3.5 or 3.6.

        • kafka: Apache Kafka® configuration:

          • resources.resourcePresetId: Host class ID. You can request the list of available host classes with their IDs using the ResourcePreset.list method.
          • resources.diskSize: Disk size in bytes.
          • resources.diskTypeId: Disk type.
        • zookeeper: ZooKeeper configuration.

          • resources.resourcePresetId: Host class ID. You can request the list of available host classes with their IDs using the ResourcePreset.list method.
          • resources.diskSize: Disk size in bytes.
          • resources.diskTypeId: Disk type.
        • zoneId and brokersCount: Availability zones and number of broker hosts per zone.

          If you are creating a cluster with Apache Kafka® version 3.6 or higher, specify one of the available configurations:

          • "zoneId": ["ru-central1-a","ru-central1-b","ru-central1-d"], "brokersCount": "1"
          • "zoneId": ["<one_availability_zone>"], "brokersCount": "1"
          • "zoneId": ["<one_availability_zone>"], "brokersCount": "3"
        • assignPublicIp: Internet access to the broker hosts, true or false.

        • schemaRegistry: Manage data schemas using Managed Schema Registry, true or false. The default value is false. You will not be able to edit this setting once you create a Managed Service for Apache Kafka® cluster.

        • restApiConfig: Apache Kafka® REST API configuration. To allow sending requests to the Apache Kafka® REST API, set enabled: true.

        • diskSizeAutoscaling: Set the storage utilization thresholds (as a percentage of the total storage size), that will trigger an increase in storage size when reached:

          • plannedUsageThreshold: Storage utilization percentage to trigger a storage increase during the next maintenance window.

            Use a percentage value between 0 and 100. The default value is 0 (automatic increase is disabled).

            If you set this parameter, configure the maintenance window schedule.

          • emergencyUsageThreshold: Storage utilization percentage to trigger an immediate storage increase.

            Use a percentage value between 0 and 100. The default value is 0 (automatic increase is disabled). This parameter value must be greater than or equal to plannedUsageThreshold.

          • diskSizeLimit: Maximum storage size, in bytes, that can be set when utilization reaches one of the specified percentages.

      • topicSpecs: The topic settings as an array of elements. Each element is for a separate topic and has the following structure:

        • name: Topic name.

          Note

          Use the Apache Kafka® Admin API if you need to create a topic that starts with _. You cannot create such a topic using the Yandex Cloud interfaces.

        • partitions: Number of partitions.

        • replicationFactor: Replication factor.

      • userSpecs: User settings as an array of elements, one for each user. Each element has the following structure:

        • name: Username. It may contain Latin letters, numbers, hyphens, and underscores, and must start with a letter or underscore.

        • password: User password. The password must be from 8 to 128 characters long.

        • permissions: List of topics the user must have access to.

          The list is arranged as an array of elements. Each element is for a separate topic and has the following structure:

          • topicName: Topic name or name template:
            • * to allow access to any topics.
            • Full topic name to allow access to a specific topic.
            • <prefix>* to grant access to topics whose names start with the prefix. Let's assume you have topics named topic_a1, topic_a2, and a3. If you put topic*, access will be granted to topic_a1 and topic_a2. To include all the cluster's topics, use the * mask.
          • role: User’s role, ACCESS_ROLE_CONSUMER, ACCESS_ROLE_PRODUCER, or ACCESS_ROLE_ADMIN. The ACCESS_ROLE_ADMIN role is only available if all topics are selected (topicName: "*").
          • allowHosts: (Optional) List of IP addresses the user is allowed to access the topic from.
      • maintenanceWindow: Maintenance window settings (including for disabled clusters). Select one of the options:

        • anytime: At any time (default).
        • weeklyMaintenanceWindow: On schedule:
          • day: Day of week in DDD format: MON, TUE, WED, THU, FRI, SAT, or SUN.
          • hour: Hour of day (UTC) in HH format, from 1 to 24.
      • deletionProtection: Cluster protection from accidental deletion, true or false. The default value is false.

        Even with cluster deletion protection enabled, one can still delete a user or topic or connect manually and delete the data.

      To create a Managed Service for Apache Kafka® cluster based on dedicated host groups, provide a list of host group IDs in the hostGroupIds parameter.

      Alert

      You cannot edit this setting after you create a cluster. The use of dedicated hosts significantly affects cluster pricing.

      You can request the folder ID with the list of folders in the cloud.
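
      For example, with the Yandex Cloud CLI installed and configured, you can list the folders and their IDs:

      yc resource-manager folder list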

    2. Run this request:

      curl \
        --request POST \
        --header "Authorization: Bearer $IAM_TOKEN" \
        --header "Content-Type: application/json" \
        --url 'https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters' \
        --data '@body.json'
      
  3. View the server response to make sure the request was successful.
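
    A successful response contains an Operation object. As a sketch, you can poll that operation using the ID from the response; this assumes the standard Yandex Cloud operations endpoint:

    curl \
        --request GET \
        --header "Authorization: Bearer $IAM_TOKEN" \
        --url 'https://operation.api.cloud.yandex.net/operations/<operation_ID>'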

Warning

When creating a cluster with KRaft, do not specify the ZooKeeper settings.

  1. Get an IAM token for API authentication and put it into the environment variable:

    export IAM_TOKEN="<IAM_token>"
    
  2. Clone the cloudapi repository:

    cd ~/ && git clone --depth=1 https://github.com/yandex-cloud/cloudapi
    

    Below, we assume the repository contents are stored in the ~/cloudapi/ directory.

  3. Use the ClusterService/Create call and send the following request, e.g., via gRPCurl:

    1. Create a file named body.json and add the following contents to it:

      Note

      This example does not use all available parameters.

      {
        "folder_id": "<folder_ID>",
        "name": "<cluster_name>",
        "environment": "<environment>",
        "network_id": "<network_ID>",
        "security_group_ids": [
          "<security_group_1_ID>",
          "<security_group_2_ID>",
          ...
          "<security_group_N_ID>"
        ],
        "config_spec": {
          "version": "<Apache Kafka®_version>",
          "kafka": {
            "resources": {
              "resource_preset_id": "<Apache Kafka®_host_class>",
              "disk_size": "<storage_size_in_bytes>",
              "disk_type_id": "<disk_type>"
            }
          },
          "zookeeper": {
            "resources": {
              "resource_preset_id": "<ZooKeeper_host_class> ",
              "disk_size": "<storage_size_in_bytes>",
              "disk_type_id": "<disk_type>"                   
            }
          },
          "zone_id": [
            <list_of_availability_zones>
          ],
          "brokers_count": {
            "value": "<number_of_brokers_in_zone>"
          },
          "assign_public_ip": <public_access:_true_or_false>,
          "schema_registry": <data_schema_management:_true_or_false>,
          "rest_api_config": {
            "enabled": <send_requests_to_Apache Kafka®_API:_true_or_false>
          },
          "disk_size_autoscaling": {
            <automatic_storage_size_increase_parameters>
          }
        },
        "topic_specs": [
          {
            "name": "<topic_name>",
            "partitions": {
              "value": "<number_of_partitions>"
            },
            "replication_factor": {
              "value": "<replication_factor>"
            }
          },
          { <similar_list_of_settings_for_topic_2> },
          { ... },
          { <similar_list_of_settings_for_topic_N> }
        ],
        "user_specs": [
          {
            "name": "<username>",
            "password": "<user_password>",
            "permissions": [
              {
                "topic_name": "<topic_name>",
                "role": "<user's_role>"
              }
            ]
          },
          { <similar_configuration_for_user_2> },
          { ... },
          { <similar_configuration_for_user_N> }
        ],
        "maintenance_window": {
          "anytime": {},
          "weekly_maintenance_window": {
            "day": "<day_of_week>",
            "hour": "<hour_UTC>"
          }
        },
        "deletion_protection": <cluster_deletion_protection:_true_or_false>
      }
      

      Where:

      • name: Cluster name.

      • environment: Cluster environment, PRODUCTION or PRESTABLE.

      • network_id: ID of the network the cluster will be in.

      • security_group_ids: Security group IDs as an array of strings. Each string is a security group ID.

      • config_spec: Cluster configuration:

        • version: Apache Kafka® version, 3.5 or 3.6.

        • kafka: Apache Kafka® configuration:

          • resources.resource_preset_id: Host class ID. You can request the list of available host classes with their IDs using the ResourcePreset.list call.
          • resources.disk_size: Disk size in bytes.
          • resources.disk_type_id: Disk type.
        • zookeeper: ZooKeeper configuration.

          • resources.resource_preset_id: Host class ID. You can request the list of available host classes with their IDs using the ResourcePreset.list call.
          • resources.disk_size: Disk size in bytes.
          • resources.disk_type_id: Disk type.
        • zone_id and brokers_count: Availability zones and number of broker hosts per zone (this number is provided as an object with the value field).

          If you are creating a cluster with Apache Kafka® version 3.6 or higher, specify one of the available configurations:

          • "zone_id": ["ru-central1-a","ru-central1-b","ru-central1-d"], "brokers_count": {"value":"1"}
          • "zone_id": ["<one_availability_zone>"], "brokers_count": {"value":"1"}
          • "zone_id": ["<one_availability_zone>"], "brokers_count": {"value":"3"}
        • assign_public_ip: Internet access to the broker hosts, true or false.

        • schema_registry: Manage data schemas using Managed Schema Registry, true or false. The default value is false. You will not be able to edit this setting once you create a Managed Service for Apache Kafka® cluster.

        • rest_api_config: Apache Kafka® REST API configuration. To allow sending requests to the Apache Kafka® REST API, set enabled: true.

        • disk_size_autoscaling: To prevent the cluster disk space from running out, set the storage utilization thresholds (as a percentage of the total storage size) that will trigger an increase in storage size when reached:

          • planned_usage_threshold: Storage utilization percentage to trigger a storage increase during the next maintenance window.

            Use a percentage value between 0 and 100. The default value is 0 (automatic increase is disabled).

            If you set this parameter, configure the maintenance window schedule.

          • emergency_usage_threshold: Storage utilization percentage to trigger an immediate storage increase.

            Use a percentage value between 0 and 100. The default value is 0 (automatic increase is disabled). This parameter value must be greater than or equal to planned_usage_threshold.

          • disk_size_limit: Maximum storage size, in bytes, that can be set when utilization reaches one of the specified percentages.

      • topic_specs: Topic settings as an array of elements. Each element is for a separate topic and has the following structure:

        • name: Topic name.

          Note

          Use the Apache Kafka® Admin API if you need to create a topic that starts with _. You cannot create such a topic using the Yandex Cloud interfaces.

        • partitions: Number of partitions, provided as an object with the value field.

        • replication_factor: Replication factor, provided as an object with the value field.

      • user_specs: User settings as an array of elements, one for each user. Each element has the following structure:

        • name: Username. It may contain Latin letters, numbers, hyphens, and underscores, and must start with a letter or underscore.

        • password: User password. The password must be from 8 to 128 characters long.

        • permissions: List of topics the user must have access to.

          The list is arranged as an array of elements. Each element is for a separate topic and has the following structure:

          • topic_name: Topic name or name template:
            • * to allow access to any topics.
            • Full topic name to allow access to a specific topic.
            • <prefix>* to grant access to topics whose names start with the prefix. Let's assume you have topics named topic_a1, topic_a2, and a3. If you put topic*, access will be granted to topic_a1 and topic_a2. To include all the cluster's topics, use the * mask.
          • role: User’s role, ACCESS_ROLE_CONSUMER, ACCESS_ROLE_PRODUCER, or ACCESS_ROLE_ADMIN. The ACCESS_ROLE_ADMIN role is only available if all topics are selected (topic_name: "*").
          • allow_hosts: (Optional) List of IP addresses the user is allowed to access the topic from, as an array of elements.
      • maintenance_window: Maintenance window settings (including for disabled clusters). Select one of these options:

        • anytime: At any time (default).
        • weekly_maintenance_window: On schedule:
          • day: Day of week in DDD format: MON, TUE, WED, THU, FRI, SAT, or SUN.
          • hour: Hour of day (UTC) in HH format, from 1 to 24.
      • deletion_protection: Cluster protection from accidental deletion, true or false. The default value is false.

        Even with cluster deletion protection enabled, one can still delete a user or topic or connect manually and delete the data.

      To create a Managed Service for Apache Kafka® cluster based on dedicated host groups, provide a list of host group IDs in the host_group_ids parameter.

      Alert

      You cannot edit this setting after you create a cluster. The use of dedicated hosts significantly affects cluster pricing.

      You can request the folder ID with the list of folders in the cloud.

    2. Run this request:

      grpcurl \
          -format json \
          -import-path ~/cloudapi/ \
          -import-path ~/cloudapi/third_party/googleapis/ \
          -proto ~/cloudapi/yandex/cloud/mdb/kafka/v1/cluster_service.proto \
          -rpc-header "Authorization: Bearer $IAM_TOKEN" \
          -d @ < body.json \
          mdb.api.cloud.yandex.net:443 \
          yandex.cloud.mdb.kafka.v1.ClusterService.Create
      
  4. View the server response to make sure the request was successful.

To make sure the cluster created with Apache Kafka® version 3.6 or higher uses the KRaft protocol, get information about the cluster hosts:

Management console
CLI
REST API
gRPC API
  1. In the management console, go to the relevant folder.
  2. In the list of services, select Managed Service for Kafka.
  3. Click the name of the cluster you need and select the Hosts tab.

If you do not have the Yandex Cloud CLI yet, install and initialize it.

The folder specified when creating the CLI profile is used by default. To change the default folder, use the yc config set folder-id <folder_ID> command. You can specify a different folder using the --folder-name or --folder-id parameter.

To get a list of cluster hosts, run the command:

yc managed-kafka cluster list-hosts <cluster_name_or_ID>

You can request the cluster ID and name with a list of clusters in the folder.
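
For example:

yc managed-kafka cluster list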

  1. Get an IAM token for API authentication and put it into the environment variable:

    export IAM_TOKEN="<IAM_token>"
    
  2. Use the Cluster.listHosts method and send the following request, e.g., via cURL:

    curl \
        --request GET \
        --header "Authorization: Bearer $IAM_TOKEN" \
        --url 'https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters/<cluster_ID>/hosts'
    

    You can get the cluster ID with a list of clusters in the folder.

  3. View the server response to make sure the request was successful.

  1. Get an IAM token for API authentication and put it into the environment variable:

    export IAM_TOKEN="<IAM_token>"
    
  2. Clone the cloudapi repository:

    cd ~/ && git clone --depth=1 https://github.com/yandex-cloud/cloudapi
    

    Below, we assume the repository contents are stored in the ~/cloudapi/ directory.

  3. Use the ClusterService/ListHosts call and send the following request, e.g., via gRPCurl:

    grpcurl \
        -format json \
        -import-path ~/cloudapi/ \
        -import-path ~/cloudapi/third_party/googleapis/ \
        -proto ~/cloudapi/yandex/cloud/mdb/kafka/v1/cluster_service.proto \
        -rpc-header "Authorization: Bearer $IAM_TOKEN" \
        -d '{
                "cluster_id": "<cluster_ID>"
            }' \
        mdb.api.cloud.yandex.net:443 \
        yandex.cloud.mdb.kafka.v1.ClusterService.ListHosts
    

    You can get the cluster ID with a list of clusters in the folder.

  4. View the server response to make sure the request was successful.

If there are no ZooKeeper hosts, it means the cluster uses KRaft.
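
As a quick CLI check, you can also filter the JSON output with jq. This sketch assumes each host entry exposes a role field (KAFKA or ZOOKEEPER) and that jq is installed:

yc managed-kafka cluster list-hosts <cluster_name_or_ID> --format json | jq 'map(.role) | unique'

If the result contains only "KAFKA", the cluster uses KRaft.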

Creating a cluster copy

You can create an Apache Kafka® cluster using the settings of another one created earlier. To do so, you need to import the configuration of the source Apache Kafka® cluster to Terraform. This way you can either create an identical copy or use the imported configuration as the baseline and modify it as needed. Importing a configuration is a good idea when the source Apache Kafka® cluster has a lot of settings and you need to create a similar one.

To create an Apache Kafka® cluster copy:

Terraform
  1. If you do not have Terraform yet, install it.

  2. Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.

  3. Configure and initialize a provider. There is no need to create a provider configuration file manually; you can download it.

  4. Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.

  5. In the same working directory, place a .tf file with the following contents:

    resource "yandex_mdb_kafka_cluster" "old" { }
    
  6. Write the ID of the initial Apache Kafka® cluster to the environment variable:

    export KAFKA_CLUSTER_ID=<cluster_ID>
    

    You can request the ID with the list of clusters in the folder.

  7. Import the settings of the initial Apache Kafka® cluster into the Terraform configuration:

    terraform import yandex_mdb_kafka_cluster.old ${KAFKA_CLUSTER_ID}
    
  8. Get the imported configuration:

    terraform show
    
  9. Copy it from the terminal and paste it into the .tf file.

  10. Place the file in the new imported-cluster directory.

  11. Modify the copied configuration so that you can create a new cluster from it:

    • Specify the new cluster name in the resource string and the name parameter.
    • Delete created_at, health, host, id, and status.
    • Add the subnet_ids parameter with the list of subnet IDs for each availability zone.
    • If the maintenance_window section has type = "ANYTIME", delete the hour parameter.
    • Optionally, make further changes if you need to customize the configuration.
  12. Get the authentication credentials in the imported-cluster directory.

  13. In the same directory, configure and initialize a provider. There is no need to create a provider configuration file manually; you can download it.

  14. Place the configuration file in the imported-cluster directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.

  15. Check that the Terraform configuration files are correct:

    terraform validate
    

    If there are any errors in the configuration files, Terraform will point them out.

  16. Create the required infrastructure:

    1. Run this command to view the planned changes:

      terraform plan
      

      If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

    2. If everything looks correct, apply the changes:

      1. Run this command:

        terraform apply
        
      2. Confirm updating the resources.

      3. Wait for the operation to complete.

    All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.

Time limits

The Terraform provider sets a 60-minute limit for all Managed Service for Apache Kafka® cluster operations to complete.

Operations exceeding the set timeout are interrupted.

How do I change these limits?

Add the timeouts block to the cluster description, for example:

resource "yandex_mdb_kafka_cluster" "<cluster_name>" {
  ...
  timeouts {
    create = "1h30m" # 1 hour 30 minutes
    update = "2h"    # 2 hours
    delete = "30m"   # 30 minutes
  }
}

Examples

Creating a single-host cluster

CLI
Terraform

Create a Managed Service for Apache Kafka® cluster with the following test specifications:

  • Name: mykf.
  • Environment: production.
  • Apache Kafka® version: 3.5.
  • Network: default.
  • Subnet ID: b0rcctk2rvtr8efcch64.
  • Security group: enp6saqnq4ie244g67sb.
  • Host class: s2.micro, availability zone: ru-central1-a.
  • With one broker host.
  • Network SSD storage (network-ssd): 10 GB.
  • Public access: Allowed.
  • Deletion protection: Enabled.

Run the following command:

yc managed-kafka cluster create \
   --name mykf \
   --environment production \
   --version 3.5 \
   --network-name default \
   --subnet-ids b0rcctk2rvtr8efcch64 \
   --zone-ids ru-central1-a \
   --brokers-count 1 \
   --resource-preset s2.micro \
   --disk-size 10 \
   --disk-type network-ssd \
   --assign-public-ip \
   --security-group-ids enp6saqnq4ie244g67sb \
   --deletion-protection
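
After the cluster status changes to Running, you can verify its settings (this assumes the CLI profile points at the folder where the cluster was created):

yc managed-kafka cluster get mykf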

Create a Managed Service for Apache Kafka® cluster with the following test specifications:

  • Cloud ID: b1gq90dgh25bebiu75o.

  • Folder ID: b1gia87mbaomkfvsleds.

  • Name: mykf.

  • Environment: PRODUCTION.

  • Apache Kafka® version: 3.5.

  • New network: mynet, subnet: mysubnet.

  • Security group: mykf-sg (allow ingress connections to the Managed Service for Apache Kafka® cluster on port 9091).

  • Host class: s2.micro, availability zone: ru-central1-a.

  • With one broker host.

  • Network SSD storage (network-ssd): 10 GB.

  • Public access: Allowed.

  • Deletion protection: Enabled.

The configuration file for this Managed Service for Apache Kafka® cluster is as follows:

resource "yandex_mdb_kafka_cluster" "mykf" {
  environment         = "PRODUCTION"
  name                = "mykf"
  network_id          = yandex_vpc_network.mynet.id
  subnet_ids          = [ yandex_vpc_subnet.mysubnet.id ]
  security_group_ids  = [ yandex_vpc_security_group.mykf-sg.id ]
  deletion_protection = true

  config {
    assign_public_ip = true
    brokers_count    = 1
    version          = "3.5"
    kafka {
      resources {
        disk_size          = 10
        disk_type_id       = "network-ssd"
        resource_preset_id = "s2.micro"
      }
      kafka_config {}
    }

    zones = [
      "ru-central1-a"
    ]
  }
}

resource "yandex_vpc_network" "mynet" {
  name = "mynet"
}

resource "yandex_vpc_subnet" "mysubnet" {
  name           = "mysubnet"
  zone           = "ru-central1-a"
  network_id     = yandex_vpc_network.mynet.id
  v4_cidr_blocks = ["10.5.0.0/24"]
}

resource "yandex_vpc_security_group" "mykf-sg" {
  name       = "mykf-sg"
  network_id = yandex_vpc_network.mynet.id

  ingress {
    description    = "Kafka"
    port           = 9091
    protocol       = "TCP"
    v4_cidr_blocks = [ "0.0.0.0/0" ]
  }
}
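
Once you apply this configuration with terraform apply, you can also check the created broker host from the command line (assuming the Yandex Cloud CLI is configured for the same folder):

yc managed-kafka cluster list-hosts mykf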
