Delivering data from Yandex Managed Service for MySQL® to Yandex Managed Service for Apache Kafka® using Debezium

Written by Yandex Cloud
Updated at April 25, 2025
  • Required paid resources
  • Getting started
  • Preparing the source cluster
  • Configure Debezium
  • Prepare the target cluster
  • Start Debezium
  • Check the health of Debezium
  • Delete the resources you created

You can track data changes in Managed Service for MySQL® and send them to Managed Service for Apache Kafka® using Change Data Capture (CDC).

In this article, you will learn how to create a virtual machine in Yandex Cloud and set up Debezium, software used for CDC.

Required paid resources

The cost of supporting this solution includes:

  • Managed Service for Apache Kafka® cluster fee: Using computing resources allocated to hosts (including ZooKeeper hosts) and disk space (see Apache Kafka® pricing).
  • Managed Service for MySQL® cluster fee: Using computing resources allocated to hosts and disk space (see MySQL® pricing).
  • Fee for using public IP addresses for cluster hosts (see Virtual Private Cloud pricing).
  • VM fee: Using computing resources, storage, and a public IP address (see Compute Cloud pricing).

Getting started

  1. Create a Managed Service for MySQL® source cluster with the following settings:

    • Hosts: Publicly available
    • Database: db1
    • User: user1
  2. Create a Managed Service for Apache Kafka® target cluster in any suitable configuration with publicly available hosts.

  3. Create a virtual machine with Ubuntu 20.04 and a public IP address.

  4. If you are using security groups, configure them to allow connections to the clusters both from the internet and from the VM you created. Also allow SSH connections to this VM from the internet:

    • Configuring Managed Service for Apache Kafka® cluster security groups.
    • Configuring Managed Service for MySQL® cluster security groups.
  5. Connect to the virtual machine over SSH and perform the preliminary setup:

    1. Install the dependencies:

      sudo apt update && \
          sudo apt install kafkacat openjdk-17-jre mysql-client --yes
      

      You will use kafkacat to check connectivity to the Managed Service for Apache Kafka® target cluster over SSL in a later step.

    2. Create a folder for Apache Kafka®:

      sudo mkdir -p /opt/kafka/
      
    3. Download the archive with the Apache Kafka® executable files and unpack it into this folder. For example, to download and unpack Apache Kafka® 3.0, run:

      wget https://archive.apache.org/dist/kafka/3.0.0/kafka_2.13-3.0.0.tgz && \
      sudo tar xf kafka_2.13-3.0.0.tgz --strip 1 --directory /opt/kafka/
      

      You can check the current Apache Kafka® version on the project downloads page.

    4. Install certificates on the VM and check the availability of the clusters (a connectivity-check sketch follows at the end of this step):

      • Managed Service for Apache Kafka® (use kafkacat)
      • Managed Service for MySQL® (use mysql)
    5. Create a folder that will store the files required for the operation of the Debezium connector:

      sudo mkdir -p /etc/debezium/plugins/
      
    6. Add the SSL certificate to the Java Key Store (JKS) so that the Debezium connector can connect to the Managed Service for Apache Kafka® broker hosts. For added security, protect the store with a password at least 6 characters long via the -storepass parameter:

      sudo keytool \
          -importcert \
          -alias YandexCA -file /usr/local/share/ca-certificates/Yandex/YandexInternalRootCA.crt \
          -keystore /etc/debezium/keystore.jks \
          -storepass <JKS_password> \
          --noprompt
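
      A sketch of the connectivity checks from step 4, assuming the Yandex CA certificate is installed at the path below and using a Kafka user with read permissions (for example, the debezium user created later in this tutorial):

      # Check the Managed Service for MySQL® source cluster over SSL:
      mysql --host=c-<cluster_ID>.rw.mdb.yandexcloud.net \
            --port=3306 \
            --ssl-ca=/usr/local/share/ca-certificates/Yandex/YandexInternalRootCA.crt \
            --ssl-mode=REQUIRED \
            --user=user1 --password \
            db1

      # Check the Managed Service for Apache Kafka® target cluster by listing its metadata:
      kafkacat -L \
          -b <broker_host_FQDN>:9091 \
          -X security.protocol=SASL_SSL \
          -X sasl.mechanisms=SCRAM-SHA-512 \
          -X sasl.username=<Kafka_user> \
          -X sasl.password=<password> \
          -X ssl.ca.location=/usr/local/share/ca-certificates/Yandex/YandexInternalRootCA.crt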
      

Preparing the source cluster

  1. Assign the REPLICATION CLIENT and REPLICATION SLAVE global privileges to user1 (a plain-SQL sketch of these grants appears at the end of this section).

  2. Connect to the db1 database under user1.

  3. Add test data to the database. In this example, a simple table with information from car sensors is used.

    1. Create a table:

      CREATE TABLE measurements (
        `device_id` VARCHAR(32) PRIMARY KEY NOT NULL,
        `datetime` TIMESTAMP NOT NULL,
        `latitude` REAL NOT NULL,
        `longitude` REAL NOT NULL,
        `altitude` REAL NOT NULL,
        `speed` REAL NOT NULL,
        `battery_voltage` REAL,
        `cabin_temperature` REAL NOT NULL,
        `fuel_level` REAL
      );
      
    2. Populate the table with data:

      INSERT INTO measurements VALUES
        ('iv9a94th6rzt********', '2020-06-05 17:27:00', 55.70329032, 37.65472196,  427.5,    0, 23.5, 17, NULL),
        ('rhibbh3y08qm********', '2020-06-06 09:49:54', 55.71294467, 37.66542005, 429.13, 55.5, NULL, 18, 32),
        ('iv9a94th678t********', '2020-06-07 15:00:10', 55.70985913, 37.62141918,  417.0, 15.7, 10.3, 17, NULL);
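
  For reference, a sketch of what step 1 looks like in plain SQL on a self-managed MySQL® server; in Managed Service for MySQL®, global privileges are normally assigned through the management console or CLI rather than with GRANT:

    -- Hypothetical equivalent of step 1; run by an administrative user.
    -- The host part of the account defaults to '%' when omitted.
    GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'user1';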
      

Configure Debezium

  1. Connect to the virtual machine over SSH.

  2. Download an up-to-date Debezium connector and unpack it to the /etc/debezium/plugins/ directory.

    You can check the current connector version on the project page. The commands for version 1.9.4.Final are below.

    VERSION="1.9.4.Final"
    wget https://repo1.maven.org/maven2/io/debezium/debezium-connector-mysql/${VERSION}/debezium-connector-mysql-${VERSION}-plugin.tar.gz && \
    sudo tar -xzvf debezium-connector-mysql-${VERSION}-plugin.tar.gz -C /etc/debezium/plugins/
    
  3. Create a file named /etc/debezium/mdb-connector.conf with Debezium connector settings for connecting to the source cluster:

    name=debezium-mmy
    connector.class=io.debezium.connector.mysql.MySqlConnector
    database.hostname=c-<cluster_ID>.rw.mdb.yandexcloud.net
    database.port=3306
    database.user=user1
    database.password=<user1_password>
    database.dbname=db1
    database.server.name=mmy
    database.ssl.mode=required
    table.include.list=db1.measurements
    heartbeat.interval.ms=15000
    heartbeat.topics.prefix=__debezium-heartbeat
    
    snapshot.mode=never
    include.schema.changes=false
    database.history.kafka.topic=dbhistory.mmy
    database.history.kafka.bootstrap.servers=<broker_host_1_FQDN>:9091,...,<broker_host_N_FQDN>:9091
    
    # Producer settings
    database.history.producer.ssl.truststore.location=/etc/debezium/keystore.jks
    database.history.producer.ssl.truststore.password=<JKS_password>
    database.history.producer.sasl.mechanism=SCRAM-SHA-512
    database.history.producer.security.protocol=SASL_SSL
    database.history.producer.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="debezium" password="<debezium_user_password>";
    
    # Consumer settings
    database.history.consumer.ssl.truststore.location=/etc/debezium/keystore.jks
    database.history.consumer.ssl.truststore.password=<JKS_password>
    database.history.consumer.sasl.mechanism=SCRAM-SHA-512
    database.history.consumer.security.protocol=SASL_SSL
    database.history.consumer.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="debezium" password="<debezium_user_password>";
    

    Where:

    • name: Logical name of the Debezium connector. Used for the connector's internal needs.

    • database.hostname: Special FQDN for connection to the source cluster's master host.

      You can get the cluster ID from the list of clusters in your folder.

    • database.user: MySQL® user name.

    • database.dbname: MySQL® database name.

    • database.server.name: Name of the database server that Debezium will use when choosing a topic for sending messages.

    • table.include.list: Names of tables for Debezium to track changes in. Specify full names that include the database name (db1). Debezium will use values from this field when selecting a topic for sending messages.

    • heartbeat.interval.ms and heartbeat.topics.prefix: Heartbeat settings required for Debezium.

    • database.history.kafka.topic: Name of the service topic the connector uses to send notifications about changes to the data schema in the source cluster.

Prepare the target cluster

  1. Create a topic to store data from the source cluster:

    • Name: mmy.db1.measurements.

      Data topic names follow the <server_name>.<database_name>.<table_name> convention.

      According to the Debezium configuration file:

      • The mmy server name is specified in the database.server.name parameter.
      • The db1 database name is specified together with the measurements table name in the table.include.list parameter.

    If you need to track data changes in multiple tables, create a separate topic for each one of them.

  2. Create a service topic to track the connector status:

    • Name: __debezium-heartbeat.mmy.

      Service topic names follow the <prefix_for_heartbeat>.<server_name> convention.

      According to the Debezium configuration file:

      • The __debezium-heartbeat prefix is specified in the heartbeat.topics.prefix parameter.
      • The mmy server name is specified in the database.server.name parameter.
    • Cleanup policy: Compact.

    If you need data from multiple source clusters, create a separate service topic for each of them.

  3. Create a service topic to track changes to the data format schema:

    • Name: dbhistory.mmy
    • Cleanup policy: Delete
    • Number of partitions: 1
  4. Create a user named debezium.

  5. Grant debezium the ACCESS_ROLE_CONSUMER and ACCESS_ROLE_PRODUCER permissions for the topics you created.
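
If you prefer the CLI to the management console, the topics and the user above could also be created with the yc tool. A rough sketch; treat the exact flag names and permission syntax as assumptions and check them against yc managed-kafka --help:

    yc managed-kafka topic create mmy.db1.measurements \
        --cluster-name <cluster_name> --partitions 1 --replication-factor 1

    yc managed-kafka topic create __debezium-heartbeat.mmy \
        --cluster-name <cluster_name> --partitions 1 --replication-factor 1 \
        --cleanup-policy compact

    yc managed-kafka topic create dbhistory.mmy \
        --cluster-name <cluster_name> --partitions 1 --replication-factor 1 \
        --cleanup-policy delete

    yc managed-kafka user create debezium \
        --cluster-name <cluster_name> --password <debezium_user_password> \
        --permission topic=mmy.db1.measurements,role=producer \
        --permission topic=mmy.db1.measurements,role=consumer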

Start Debezium

  1. Create a file with Debezium worker settings:

    /etc/debezium/worker.conf

    # AdminAPI connect properties
    bootstrap.servers=<broker_host_1_FQDN>:9091,...,<broker_host_N_FQDN>:9091
    sasl.mechanism=SCRAM-SHA-512
    security.protocol=SASL_SSL
    ssl.truststore.location=/etc/debezium/keystore.jks
    ssl.truststore.password=<JKS_password>
    sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="debezium" password="<debezium_user_password>";
    
    # Producer connect properties
    producer.sasl.mechanism=SCRAM-SHA-512
    producer.security.protocol=SASL_SSL
    producer.ssl.truststore.location=/etc/debezium/keystore.jks
    producer.ssl.truststore.password=<JKS_password>
    producer.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="debezium" password="<debezium_user_password>";
    
    # Worker properties
    plugin.path=/etc/debezium/plugins/
    key.converter=org.apache.kafka.connect.json.JsonConverter
    value.converter=org.apache.kafka.connect.json.JsonConverter
    key.converter.schemas.enable=true
    value.converter.schemas.enable=true
    offset.storage.file.filename=/etc/debezium/worker.offset
    
  2. In a separate terminal, start the connector:

    sudo /opt/kafka/bin/connect-standalone.sh \
        /etc/debezium/worker.conf \
        /etc/debezium/mdb-connector.conf
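
    The command above runs the worker in the foreground. To keep it running after you close the SSH session, one option is to wrap it in a systemd unit; a minimal hypothetical sketch (the unit name is an assumption, the paths are the ones used in this tutorial):

    # /etc/systemd/system/debezium.service
    [Unit]
    Description=Debezium standalone Kafka Connect worker
    After=network.target

    [Service]
    ExecStart=/opt/kafka/bin/connect-standalone.sh /etc/debezium/worker.conf /etc/debezium/mdb-connector.conf
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

    Then enable and start it:

    sudo systemctl daemon-reload && sudo systemctl enable --now debezium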
    

Check the health of Debezium

  1. In a separate terminal, run the kafkacat utility in consumer mode:

    kafkacat \
        -C \
        -b <broker_host_1_FQDN>:9091,...,<broker_host_N_FQDN>:9091 \
        -t mmy.db1.measurements \
        -X security.protocol=SASL_SSL \
        -X sasl.mechanisms=SCRAM-SHA-512 \
        -X sasl.username=debezium \
        -X sasl.password=<password> \
        -X ssl.ca.location=/usr/local/share/ca-certificates/Yandex/YandexInternalRootCA.crt \
        -Z \
        -K:
    

    The output will contain the data format schema of the db1.measurements table and information about the rows added earlier.

    Example of the message fragment
    {
    "schema": {
        ...
    },
    "payload": {
        "before": null,
        "after": {
            "device_id": "iv9a94th6rzt********",
            "datetime": 1591378020000000,
            "latitude": 55.70329,
            "longitude": 37.65472,
            "altitude": 427.5,
            "speed": 0.0,
            "battery_voltage": 23.5,
            "cabin_temperature": 17.0,
            "fuel_level": null
        },
        "source": {
            "version": "1.8.1.Final",
            "connector": "mysql",
            "name": "mmy",
            "ts_ms": 1628245046882,
            "snapshot": "true",
            "db": "db1",
            "sequence": "[null,\"4328525512\"]",
            "table": "measurements",
            "txId": 8861,
            "lsn": 4328525328,
            "xmin": null
        },
        "op": "r",
        "ts_ms": 1628245046893,
        "transaction": null
      }
    }
    
  2. Connect to the source cluster and add another row to the measurements table:

    INSERT INTO measurements VALUES ('iv7b74th678t********', '2020-06-08 17:45:00', 53.70987913, 36.62549834, 378.0, 20.5, 5.3, 20, NULL);
    
  3. Make sure the terminal running kafkacat displays details about the added row.
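
    By analogy with the snapshot messages above, the new row should arrive as a create event. A sketch of roughly what the payload fragment could look like (values taken from the INSERT statement; REAL columns may be rounded, and the schema and source metadata are omitted here):

    {
    "payload": {
        "before": null,
        "after": {
            "device_id": "iv7b74th678t********",
            "datetime": 1591638300000000,
            "latitude": 53.70988,
            "longitude": 36.6255,
            "altitude": 378.0,
            "speed": 20.5,
            "battery_voltage": 5.3,
            "cabin_temperature": 20.0,
            "fuel_level": null
        },
        "op": "c"
      }
    }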

Delete the resources you created

Delete the resources you no longer need to avoid paying for them:

  1. Delete the virtual machine.

    If you reserved a public static IP address for the virtual machine, release and delete it.

  2. Delete the clusters:

    • Managed Service for Apache Kafka®.
    • Managed Service for MySQL®.
