Migrating a database from a third-party Apache Kafka® cluster to Yandex Managed Service for Apache Kafka®

Written by
Yandex Cloud
Updated at May 8, 2026
  • Migrating data using Yandex Managed Service for Apache Kafka® Connector
    • Getting started
    • Prepare the source cluster
    • Create a target cluster and a connector
    • Check the target cluster topic for data
    • Delete the resources you created
  • Migrating data via MirrorMaker
    • Getting started
    • Set up your infrastructure
    • Configure the source cluster and VM
    • Configure MirrorMaker
    • Start replication
    • Check the target cluster topic for data
    • Delete the resources you created

There are two ways to migrate topics from an Apache Kafka® source cluster to a Managed Service for Apache Kafka® target cluster:

  • Using the built-in Yandex Managed Service for Apache Kafka® MirrorMaker connector.

    This method is easy to configure and does not require creating an intermediate VM.

  • Using MirrorMaker 2.0.

    This method requires setting up the utility manually on an intermediate virtual machine. Use it only if migrating data with the built-in MirrorMaker connector is not possible for some reason.

Both methods are also suitable for migrating a single-host Managed Service for Apache Kafka® cluster to a different availability zone.

Migrating data using Yandex Managed Service for Apache Kafka® Connector

To transfer data using Yandex Managed Service for Apache Kafka® Connector:

  1. Prepare the source cluster.
  2. Create a target cluster and a connector.
  3. Check the target cluster topic for data.

If you no longer need the resources you created, delete them.

Getting started

Sign up for Yandex Cloud and create a billing account:

  1. Navigate to the management console and log in to Yandex Cloud or create a new account.
  2. On the Yandex Cloud Billing page, make sure you have a billing account linked and it has the ACTIVE or TRIAL_ACTIVE status. If you do not have a billing account, create one and link a cloud to it.

If you have an active billing account, you can create or select a folder for your infrastructure on the cloud page.

Learn more about clouds and folders here.

Required paid resources
  • Managed Service for Apache Kafka® cluster, which includes the use of computing resources allocated to hosts, storage and backup size (see Managed Service for Apache Kafka® pricing).
  • Public IP addresses if public access is enabled for cluster hosts (see Virtual Private Cloud pricing).

Prepare the source cluster

  1. Create the admin-source user and assign them the ACCESS_ROLE_ADMIN role for all topics (*).
  2. Make sure the source cluster’s network settings allow cluster connections from the internet.
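
If your source is a self-managed Apache Kafka® cluster with SASL/SCRAM authentication, the equivalent of the user setup step might look as follows. This is only a sketch under those assumptions: the bootstrap address and password are placeholders, and your cluster's security and authorizer configuration may differ.

    # Sketch: create SCRAM-SHA-512 credentials for the admin-source user
    # (assumes SASL/SCRAM is enabled on the source cluster)
    bin/kafka-configs.sh --bootstrap-server localhost:9092 \
        --alter --add-config 'SCRAM-SHA-512=[password=<password>]' \
        --entity-type users --entity-name admin-source

    # Sketch: allow the user to perform all operations on all topics
    # (requires an ACL authorizer to be configured)
    bin/kafka-acls.sh --bootstrap-server localhost:9092 \
        --add --allow-principal User:admin-source \
        --operation All --topic '*'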

Create a target cluster and a connector

Manually
Terraform
  1. Create a target cluster.

  2. Set up the target cluster:

    1. Create a user named admin-cloud.
    2. Create a topic with any configuration. You will only need it to configure user access to topics.
    3. Assign the admin-cloud user the ACCESS_ROLE_ADMIN role for all topics (*).
    4. Enable the Auto create topics enable property.
    5. Configure security groups to connect to the target cluster.
  3. For the target cluster, create a connector of the MirrorMaker type, configured as follows:

    • Topics: List of topics to migrate. You can also specify a regular expression for selecting topics. To migrate all topics, specify .*.

    • Under Source cluster, specify the parameters for connecting to the source cluster:

      • Alias: Source cluster prefix in the connector settings. The default value is source. Topics in the target cluster will be created with the specified prefix.

      • Bootstrap servers: Comma-separated list of FQDNs of the source cluster broker hosts with port numbers, for example:

        FQDN_1:9091,FQDN_2:9091,...,FQDN_N:9091
        
      • SASL mechanism: Authentication mechanism for username and password validation, SCRAM-SHA-512.

      • SASL username and SASL password: Username and password of the previously created admin-source user.

      • Security protocol: Select the connection protocol for the connector:

        • SASL_PLAINTEXT: For connecting to the source cluster without SSL.
        • SASL_SSL: For SSL connections to the source cluster.
    • Under Target cluster, select Use this cluster.

  1. If you do not have Terraform yet, install it.

  2. Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.

  3. Configure and initialize a provider. There is no need to create a provider configuration file manually; you can download it instead.

  4. Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.

  5. Download the kafka-mirrormaker-connector.tf configuration file to the same working directory.

    This file describes:

    • Network.
    • Subnet.
    • Default security group and inbound internet rules for the cluster.
    • Managed Service for Apache Kafka® target cluster with Auto create topics enable set to true.
    • admin-cloud admin user for the target cluster.
    • MirrorMaker connector for the target cluster.
  6. In the kafka-mirrormaker-connector.tf file, specify the following (a sketch of the resulting connector resource is shown after this procedure):

    • Usernames and passwords of the source and target cluster admin users.
    • FQDNs of the source cluster broker hosts.
    • Source and target cluster aliases.
    • Filter pattern for topics to migrate.
    • Apache Kafka® version.
  7. Validate your Terraform configuration files using this command:

    terraform validate
    

    Terraform will display any configuration errors detected in your files.

  8. Create the required infrastructure:

    1. Run this command to view the planned changes:

      terraform plan
      

      If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

    2. If everything looks correct, apply the changes:

      1. Run this command:

        terraform apply
        
      2. Confirm updating the resources.

      3. Wait for the operation to complete.

    All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.

Note

Once created, the connector is automatically activated and data transfer starts.
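
For reference, the connector described by kafka-mirrormaker-connector.tf might look roughly like the sketch below. The layout follows the yandex_mdb_kafka_connector resource of the Yandex Terraform provider; the resource names, cluster reference, topics filter, and credentials are illustrative assumptions, so rely on the downloaded file and the provider documentation for the actual definition.

    # Sketch of a MirrorMaker connector resource (illustrative values only).
    resource "yandex_mdb_kafka_connector" "replication" {
      cluster_id = yandex_mdb_kafka_cluster.kafka-cluster.id
      name       = "replication"
      tasks_max  = 3

      connector_config_mirrormaker {
        topics             = ".*"   # filter pattern for topics to migrate
        replication_factor = 1

        source_cluster {
          alias = "source"
          external_cluster {
            bootstrap_servers = "<FQDN_1>:9091,<FQDN_2>:9091"
            sasl_username     = "admin-source"
            sasl_password     = "<password>"
            sasl_mechanism    = "SCRAM-SHA-512"
            security_protocol = "SASL_SSL"
          }
        }

        target_cluster {
          alias = "cloud"
          this_cluster {}
        }
      }
    }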

Check the target cluster topic for data

  1. In the management console, open the target cluster.

  2. Make sure the migrated topic is displayed on the Topics tab.

    A prefix (source by default) will be added to the topic name. For example, a topic named mytopic will be moved to the target cluster as source.mytopic.

  3. Connect to the target cluster topic using kafkacat, adding the prefix to the source cluster topic name (a sketch of the command is shown after this list).

  4. Make sure the console displays messages from the source cluster topic.
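
A minimal kafkacat invocation for this check might look as follows. The broker FQDN, topic name, and password are placeholders, and the command assumes an SSL connection using the Yandex Cloud CA certificate downloaded to the path shown.

    # Sketch: consume from the replicated topic over SASL_SSL
    kafkacat -C \
        -b <target_cluster_broker_FQDN>:9091 \
        -t source.mytopic \
        -X security.protocol=SASL_SSL \
        -X sasl.mechanisms=SCRAM-SHA-512 \
        -X sasl.username=admin-cloud \
        -X sasl.password=<password> \
        -X ssl.ca.location=/usr/local/share/ca-certificates/Yandex/YandexInternalRootCA.crt \
        -Z -K: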

Delete the resources you created

Some resources are not free of charge. Delete the resources you no longer need to avoid paying for them:

Manually
Terraform

Delete the Yandex Managed Service for Apache Kafka® cluster. The connector will be deleted together with the cluster.

  1. In the terminal window, go to the directory containing the infrastructure plan.

    Warning

    Make sure the directory has no Terraform manifests with the resources you want to keep. Terraform deletes all resources that were created using the manifests in the current directory.

  2. Delete resources:

    1. Run this command:

      terraform destroy
      
    2. Confirm deleting the resources and wait for the operation to complete.

    All the resources described in the Terraform manifests will be deleted.

Migrating data via MirrorMaker

To transfer data using MirrorMaker:

  1. Set up your infrastructure.
  2. Configure the source cluster and VM.
  3. Configure MirrorMaker.
  4. Start replication.
  5. Check the target cluster topic for data.

If you no longer need the resources you created, delete them.

Getting started

Sign up for Yandex Cloud and create a billing account:

  1. Navigate to the management console and log in to Yandex Cloud or create a new account.
  2. On the Yandex Cloud Billing page, make sure you have a billing account linked and it has the ACTIVE or TRIAL_ACTIVE status. If you do not have a billing account, create one and link a cloud to it.

If you have an active billing account, you can create or select a folder for your infrastructure on the cloud page.

Learn more about clouds and folders here.

Required paid resources
  • Managed Service for Apache Kafka® cluster, which includes the use of computing resources allocated to hosts, storage and backup size (see Managed Service for Apache Kafka® pricing).
  • Public IP addresses if public access is enabled for cluster hosts (see Virtual Private Cloud pricing).
  • VM instance: use of computing resources, storage, public IP address, and OS (see Compute Cloud pricing).

Set up your infrastructure

Manually
Terraform
  1. Create a Managed Service for Apache Kafka® target cluster.

  2. Set up the target cluster:

    • Create a user named admin-cloud.
    • Create a topic with any configuration. You will only need it to configure user access to topics.
    • Assign the admin-cloud user the ACCESS_ROLE_ADMIN role for all topics (*).
    • Enable the Auto create topics enable property.
    • Configure security groups to connect to the target cluster.
  3. Create a new Linux VM for MirrorMaker in the same network as the target cluster.

    To connect to the VM via the internet:

    • Enable public access when creating the VM.
    • Make sure the VM's security group allows internet connections.
  1. If you do not have Terraform yet, install it.

  2. Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.

  3. Configure and initialize a provider. There is no need to create a provider configuration file manually; you can download it instead.

  4. Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.

  5. Download the kafka-mirror-maker.tf configuration file to the same working directory.

    This file describes:

    • Network.
    • Subnet.
    • Default security group and inbound internet rules for your cluster and VM.
    • Managed Service for Apache Kafka® cluster with Auto create topics enable set to true.
    • Apache Kafka® administrator user named admin-cloud with the ACCESS_ROLE_ADMIN role for all cluster topics.
    • Virtual machine with public internet access.
  6. In kafka-mirror-maker.tf, specify the following:

    • Managed Service for Apache Kafka® cluster name.
    • Apache Kafka® admin user password.
    • Public Ubuntu image ID (non-GPU), e.g., Ubuntu 24.04 LTS.
    • Username and path to the public key for VM access.
  7. Validate your Terraform configuration files using this command:

    terraform validate
    

    Terraform will display any configuration errors detected in your files.

  8. Create the required infrastructure:

    1. Run this command to view the planned changes:

      terraform plan
      

      If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

    2. If everything looks correct, apply the changes:

      1. Run this command:

        terraform apply
        
      2. Confirm updating the resources.

      3. Wait for the operation to complete.

    All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.
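
For reference, the MirrorMaker VM described by kafka-mirror-maker.tf might look roughly like the sketch below. The layout follows the yandex_compute_instance resource of the Yandex Terraform provider; the names, sizes, subnet reference, and key path are illustrative assumptions, so rely on the downloaded file for the actual definition.

    # Sketch of a MirrorMaker VM definition (illustrative values only).
    resource "yandex_compute_instance" "mirror-maker-vm" {
      name        = "mirror-maker-vm"
      platform_id = "standard-v3"

      resources {
        cores  = 2
        memory = 4
      }

      boot_disk {
        initialize_params {
          image_id = "<public_Ubuntu_image_ID>"
        }
      }

      network_interface {
        subnet_id = yandex_vpc_subnet.subnet-a.id
        nat       = true   # public IP for internet access
      }

      metadata = {
        ssh-keys = "<username>:${file("<path_to_public_key>")}"
      }
    }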

Configure the source cluster and VM

  1. Prepare the source cluster:

    1. In the source cluster, create the admin-source user and assign them the ACCESS_ROLE_ADMIN role for all topics (*).
    2. Enable the Auto create topics enable setting.
  2. Connect to the VM over SSH.

    1. Install the JDK:

      sudo apt update && sudo apt install --yes default-jdk
      
    2. Download and unpack the Apache Kafka® archive with the same version as installed on the target cluster, e.g., for version 3.9:

      wget https://archive.apache.org/dist/kafka/3.9.0/kafka_2.12-3.9.0.tgz && \
      tar -xvf kafka_2.12-3.9.0.tgz
      
    3. Download an SSL certificate for connecting to the Managed Service for Apache Kafka® cluster:

      sudo mkdir -p /usr/local/share/ca-certificates/Yandex && \
      sudo wget "https://storage.yandexcloud.net/cloud-certs/CA.pem" \
          --output-document /usr/local/share/ca-certificates/Yandex/YandexInternalRootCA.crt && \
      sudo chmod 0655 /usr/local/share/ca-certificates/Yandex/YandexInternalRootCA.crt
      
    4. Install kafkacat:

      sudo apt update && sudo apt install --yes kafkacat
      
  3. If required for MirrorMaker to connect to the source and target clusters, configure a firewall and security groups.

Configure MirrorMaker

  1. Connect to the MirrorMaker VM over SSH.

  2. In the home directory, create a folder named mirror-maker to store Java Keystore certificates and MirrorMaker configuration files:

    mkdir --parents /home/<home_directory>/mirror-maker
    
  3. Choose a password of at least 6 characters for a certificate store, create the store, and add the SSL certificate for cluster connection:

    sudo keytool --noprompt -importcert -alias YandexCA \
       -file /usr/local/share/ca-certificates/Yandex/YandexInternalRootCA.crt \
       -keystore /home/<home_directory>/mirror-maker/keystore \
       -storepass <certificate_store_password>
    
  4. Create a MirrorMaker configuration file named mm2.properties in the mirror-maker folder:

    # Kafka clusters
    clusters=source, cloud
    source.bootstrap.servers=<source_cluster_broker_FQDN>:9091
    cloud.bootstrap.servers=<target_cluster_broker_1_FQDN>:9091, ..., <target_cluster_broker_N_FQDN>:9091
    
    # Source and target cluster settings
    source->cloud.enabled=true
    cloud->source.enabled=false
    source.cluster.alias=source
    cloud.cluster.alias=cloud
    
    # Internal topics settings
    source.config.storage.replication.factor=<R>
    source.status.storage.replication.factor=<R>
    source.offset.storage.replication.factor=<R>
    source.offsets.topic.replication.factor=<R>
    source.errors.deadletterqueue.topic.replication.factor=<R>
    source.offset-syncs.topic.replication.factor=<R>
    source.heartbeats.topic.replication.factor=<R>
    source.checkpoints.topic.replication.factor=<R>
    source.transaction.state.log.replication.factor=<R>
    cloud.config.storage.replication.factor=<R>
    cloud.status.storage.replication.factor=<R>
    cloud.offset.storage.replication.factor=<R>
    cloud.offsets.topic.replication.factor=<R>
    cloud.errors.deadletterqueue.topic.replication.factor=<R>
    cloud.offset-syncs.topic.replication.factor=<R>
    cloud.heartbeats.topic.replication.factor=<R>
    cloud.checkpoints.topic.replication.factor=<R>
    cloud.transaction.state.log.replication.factor=<R>
    
    # Topics
    topics=.*
    groups=.*
    topics.blacklist=.*[\-\.]internal, .*\.replica, __consumer_offsets
    groups.blacklist=console-consumer-.*, connect-.*, __.*
    replication.factor=<M>
    refresh.topics.enable=true
    sync.topic.configs.enabled=true
    refresh.topics.interval.seconds=10
    
    # Tasks
    tasks.max=<T>
    
    # Source cluster authentication parameters. Comment out if no authentication required
    source.client.id=mm2_consumer_test
    source.group.id=mm2_consumer_group
    source.security.protocol=SASL_PLAINTEXT
    source.sasl.mechanism=SCRAM-SHA-512
    source.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin-source" password="<password>";
    
    # Target cluster authentication parameters
    cloud.client.id=mm2_producer_test
    cloud.group.id=mm2_producer_group
    cloud.ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
    cloud.ssl.truststore.location=/home/<home_directory>/mirror-maker/keystore
    cloud.ssl.truststore.password=<certificate_store_password>
    cloud.ssl.protocol=TLS
    cloud.security.protocol=SASL_SSL
    cloud.sasl.mechanism=SCRAM-SHA-512
    cloud.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin-cloud" password="<password>";
    
    # Enable heartbeats and checkpoints
    source->cloud.emit.heartbeats.enabled=true
    source->cloud.emit.checkpoints.enabled=true
    

    MirrorMaker configuration notes:

    • It performs one-way replication (source->cloud.enabled = true, cloud->source.enabled = false).
    • In the topics parameter, list the topics you want to migrate. You can also specify a regular expression for selecting topics. To migrate all topics, specify .*. This configuration replicates all topics.
    • Topic names in the target cluster match those in the source cluster, except that MirrorMaker adds the source cluster alias as a prefix (see Check the target cluster topic for data below).
    • <R> stands for the replication factor for MirrorMaker service topics. Its value should not exceed the lesser of the broker counts in the source and target clusters.
    • <M> stands for the default replication factor defined for topics in the target cluster.
    • <T> stands for the number of concurrent MirrorMaker processes. To distribute replication load evenly, we recommend a value of at least 2. For more information, see this Apache Kafka® guide.

    You can get the FQDNs of the Managed Service for Apache Kafka® broker hosts from the list of hosts in the cluster.
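
Before starting replication, you may want to confirm that the VM can reach both clusters with the credentials from mm2.properties. A sketch using kafkacat metadata listing is shown below; the broker FQDNs and passwords are placeholders, and the source cluster is assumed to use SASL_PLAINTEXT, as in the configuration above.

    # Sketch: list source cluster metadata (SASL_PLAINTEXT assumed)
    kafkacat -L -b <source_cluster_broker_FQDN>:9091 \
        -X security.protocol=SASL_PLAINTEXT \
        -X sasl.mechanisms=SCRAM-SHA-512 \
        -X sasl.username=admin-source -X sasl.password=<password>

    # Sketch: list target cluster metadata over SASL_SSL
    kafkacat -L -b <target_cluster_broker_FQDN>:9091 \
        -X security.protocol=SASL_SSL \
        -X sasl.mechanisms=SCRAM-SHA-512 \
        -X sasl.username=admin-cloud -X sasl.password=<password> \
        -X ssl.ca.location=/usr/local/share/ca-certificates/Yandex/YandexInternalRootCA.crt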

Start replication

  1. Connect to the MirrorMaker VM over SSH.

  2. Run MirrorMaker on the VM as follows:

    <Apache_Kafka_installation_path>/bin/connect-mirror-maker.sh /home/<home_directory>/mirror-maker/mm2.properties
    
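The command above runs MirrorMaker in the foreground, so replication may stop when the SSH session ends. One option, sketched below, is to run it in the background and keep its output in a log file; the paths reuse the placeholders from the previous steps.

    # Sketch: run MirrorMaker in the background and keep a log
    nohup <Apache_Kafka_installation_path>/bin/connect-mirror-maker.sh \
        /home/<home_directory>/mirror-maker/mm2.properties \
        > /home/<home_directory>/mirror-maker/mm2.log 2>&1 &

    # Follow the log to watch replication progress
    tail -f /home/<home_directory>/mirror-maker/mm2.log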

Check the target cluster topic for data

  1. In the management console, open the target cluster.

  2. Make sure the migrated topic is displayed on the Topics tab.

    A prefix (source by default) will be added to the topic name. For example, a topic named mytopic will be moved to the target cluster as source.mytopic.

  3. Connect to the target cluster topic using kafkacat, adding the prefix to the source cluster topic name.

  4. Make sure the console displays messages from the source cluster topic.
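
To generate fresh data for this check, you might produce a test message to the source cluster topic and watch it appear in the prefixed target topic. A sketch is shown below; the broker FQDN, topic name, and password are placeholders, and SASL_PLAINTEXT on the source is assumed, as in the MirrorMaker configuration.

    # Sketch: send a test message to the source topic
    echo "test message" | kafkacat -P \
        -b <source_cluster_broker_FQDN>:9091 \
        -t mytopic \
        -X security.protocol=SASL_PLAINTEXT \
        -X sasl.mechanisms=SCRAM-SHA-512 \
        -X sasl.username=admin-source -X sasl.password=<password>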

To learn more about MirrorMaker 2.0, see this Apache Kafka® article.

Delete the resources you created

Some resources are not free of charge. Delete the resources you no longer need to avoid paying for them:

Manually
Terraform
  • Delete the Yandex Managed Service for Apache Kafka® cluster.
  • Delete the VM.
  • If you reserved public static IP addresses, release and delete them.
  1. In the terminal window, go to the directory containing the infrastructure plan.

    Warning

    Make sure the directory has no Terraform manifests with the resources you want to keep. Terraform deletes all resources that were created using the manifests in the current directory.

  2. Delete resources:

    1. Run this command:

      terraform destroy
      
    2. Confirm deleting the resources and wait for the operation to complete.

    All the resources described in the Terraform manifests will be deleted.
