Delivering data to Yandex Managed Service for MySQL® using Yandex Data Transfer

Written by Yandex Cloud
Updated on November 24, 2025
  • Required paid resources
  • Getting started
  • Prepare the test data
  • Set up and activate the transfer
  • Test the transfer
  • Delete the resources you created

A Managed Service for MySQL® cluster can ingest data from Apache Kafka® topics in real time.

To run data delivery:

  1. Prepare the test data.
  2. Set up and activate the transfer.
  3. Test your transfer.

If you no longer need the resources you created, delete them.

Required paid resources

  • Managed Service for Apache Kafka® cluster: computing resources allocated to hosts, size of storage and backups (see Managed Service for Apache Kafka® pricing).
  • Managed Service for MySQL® cluster: computing resources allocated to hosts, size of storage and backups (see Managed Service for MySQL® pricing).
  • Public IP addresses if public access is enabled for cluster hosts (see Virtual Private Cloud pricing).
  • Each transfer: use of computing resources and number of transferred data rows (see Data Transfer pricing).

Getting started

  1. Set up your data pipeline infrastructure:

    Manually
    Terraform
    1. Create a Managed Service for Apache Kafka® source cluster with your preferred configuration. Enable public access to the cluster during creation so you can connect to it from your local machine. Connections from within the Yandex Cloud network are enabled by default.

    2. In the source cluster, create a topic named sensors.

    3. In the source cluster, create a user named mkf-user with the ACCESS_ROLE_PRODUCER and ACCESS_ROLE_CONSUMER permissions for the new topic.

    4. Create a Managed Service for MySQL® target cluster with the following settings:

      • Database name: db1.
      • Username: mmy-user.
      • In the same availability zone as the source cluster.
      • To connect to the cluster from the user's local machine instead of from within the Yandex Cloud network, enable public access to the cluster hosts.
    5. To connect to the cluster from the user's local machine, configure security groups:

      • Managed Service for Apache Kafka®.
      • Managed Service for MySQL®.
    1. If you do not have Terraform yet, install it.

    2. Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.
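
      With the Yandex Cloud CLI configured, you can, for example, export the credentials through the environment variables the yandex provider reads. A minimal sketch (YC_TOKEN, YC_CLOUD_ID, and YC_FOLDER_ID are the variable names the provider documents):

      # Issue a short-lived IAM token and expose it, along with the cloud
      # and folder IDs, for the yandex Terraform provider to pick up.
      export YC_TOKEN=$(yc iam create-token)
      export YC_CLOUD_ID=$(yc config get cloud-id)
      export YC_FOLDER_ID=$(yc config get folder-id)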

    3. Configure and initialize a provider. You do not need to create a provider configuration file manually; you can download it.

    4. Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.
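
      Once the configuration file is in place, initialize the working directory so Terraform downloads the yandex provider plugin:

      # Run from the working directory containing the configuration files.
      terraform init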

    5. Download the data-transfer-mkf-mmy.tf configuration file to the same working directory.

      This file describes:

      • Network.
      • Subnet.
      • Security group and rules allowing inbound connections to the Managed Service for Apache Kafka® and Managed Service for MySQL® clusters.
      • Managed Service for Apache Kafka® source cluster.
      • Apache Kafka® topic named sensors.
      • Apache Kafka® user named mkf-user with the ACCESS_ROLE_PRODUCER and ACCESS_ROLE_CONSUMER access permissions to the sensors topic.
      • Managed Service for MySQL® target cluster with a database named db1 and a user named mmy-user.
      • Target endpoint.
      • Transfer.
    6. In the data-transfer-mkf-mmy.tf file, specify these variables:

      • source_kf_version: Apache Kafka® version in the source cluster.
      • source_user_password: mkf-user password in the source cluster.
      • target_mysql_version: MySQL® version in the target cluster.
      • target_user_password: mmy-user password in the target cluster.
      • transfer_enabled: Set to 0 so that no transfer or target endpoint is created before you create the source endpoint manually.
    7. Make sure the Terraform configuration files are correct using this command:

      terraform validate
      

      Terraform will show any errors found in your configuration files.

    8. Create the required infrastructure:

      1. Run this command to view the planned changes:

        terraform plan
        

        If you described the configuration correctly, the terminal will display a list of the resources to create or update and their parameters. This is a verification step that does not apply changes to your resources.

      2. If everything looks correct, apply the changes:

        1. Run this command:

          terraform apply
          
        2. Confirm updating the resources.

        3. Wait for the operation to complete.

      All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.

  2. Install the following tools:

    • kafkacat: For reading from and writing to Apache Kafka® topics.

      sudo apt update && sudo apt install --yes kafkacat
      

      Make sure you can use it to connect to the Managed Service for Apache Kafka® source cluster over SSL; a sample connectivity check follows these installation steps.

    • jq: For stream processing of JSON files.

      sudo apt update && sudo apt install --yes jq
      
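
    Before you continue, you can check that the security groups admit connections from your machine and that kafkacat can reach the source cluster over SSL. A minimal sketch, assuming the default ports (9091 for Apache Kafka® over SASL_SSL, 3306 for MySQL®) and the CA certificate path used later in this tutorial:

      # Port reachability check (requires netcat); both should succeed.
      nc -zv <broker_host_FQDN> 9091
      nc -zv <MySQL_host_FQDN> 3306

      # List cluster metadata over SASL_SSL to confirm kafkacat connectivity.
      kafkacat -L \
         -b <broker_host_FQDN>:9091 \
         -X security.protocol=SASL_SSL \
         -X sasl.mechanisms=SCRAM-SHA-512 \
         -X sasl.username="mkf-user" \
         -X sasl.password="<user_password_in_source_cluster>" \
         -X ssl.ca.location=/usr/local/share/ca-certificates/Yandex/YandexInternalRootCA.crt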

Prepare the test data

Let's assume the Apache Kafka® sensors topic in the source cluster receives data from car sensors in JSON format.

Create a local sample.json file with the following test data:

sample.json
{
    "device_id": "iv9a94th6rzt********",
    "datetime": "2020-06-05 17:27:00",
    "latitude": 55.70329032,
    "longitude": 37.65472196,
    "altitude": 427.5,
    "speed": 0,
    "battery_voltage": 23.5,
    "cabin_temperature": 17,
    "fuel_level": null
}
{
    "device_id": "rhibbh3y08qm********",
    "datetime": "2020-06-06 09:49:54",
    "latitude": 55.71294467,
    "longitude": 37.66542005,
    "altitude": 429.13,
    "speed": 55.5,
    "battery_voltage": null,
    "cabin_temperature": 18,
    "fuel_level": 32
}
{
    "device_id": "iv9a94th6rzt********",
    "datetime": "2020-06-07 15:00:10",
    "latitude": 55.70985913,
    "longitude": 37.62141918,
    "altitude": 417.0,
    "speed": 15.7,
    "battery_voltage": 10.3,
    "cabin_temperature": 17,
    "fuel_level": null
}
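
Note that sample.json is a stream of JSON objects rather than an array. The jq -rc filter used later in this tutorial compacts each object onto a single line, so kafkacat produces one Kafka message per record. You can preview the result locally:

  # Each output line becomes one Kafka message when piped to kafkacat -P.
  jq -rc . sample.json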

Set up and activate the transfer

  1. Create an endpoint for the Apache Kafka® source:

    Endpoint parameters:

    • Connection settings:

      • Connection type: Managed Service for Apache Kafka cluster.

        • Managed Service for Apache Kafka cluster: Select the source cluster from the list.

        • Authentication: SASL.

          • Username: mkf-user.
          • Password: Enter the user password.
      • Topic full name: sensors.

    • Advanced settings → Conversion rules:

      • Conversion rules: json.
        • Data scheme: JSON specification.

          Insert the data schema in JSON format:

          [
              {
                  "name": "device_id",
                  "type": "utf8",
                  "key": true
              },
              {
                  "name": "datetime",
                  "type": "utf8"
              },
              {
                  "name": "latitude",
                  "type": "double"
              },
              {
                  "name": "longitude",
                  "type": "double"
              },
              {
                  "name": "altitude",
                  "type": "double"
              },
              {
                  "name": "speed",
                  "type": "double"
              },
              {
                  "name": "battery_voltage",
                  "type": "double"
              },
              {
                  "name": "cabin_temperature",
                  "type": "uint16"
              },
              {
                  "name": "fuel_level",
                  "type": "uint16"
              }
          ]
          
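          A malformed schema is a common cause of endpoint validation errors. If you keep a local copy of the specification above (say, in a hypothetical schema.json file), you can sanity-check it with the jq tool installed earlier:

            # Exits non-zero and prints a parse error if the JSON is malformed.
            jq empty schema.json && echo "schema.json is valid JSON"
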
  2. Create a target endpoint and set up the transfer:

    Manually
    Terraform
    1. Create an endpoint for the MySQL® target:

      • Endpoint parameters → Connection settings:

        • Connection type: Managed Service for MySQL cluster.

          • Managed Service for MySQL cluster: Select the target cluster from the list.
        • Database: db1.

        • User: mmy-user.

        • Password: Enter the user password.

    2. Create a transfer of the Replication type that will use the new endpoints.

    3. Activate the transfer and wait for its status to change to Replicating.
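
      If you prefer the CLI to the management console for this step, the yc CLI also provides a datatransfer command group. A sketch, assuming these subcommands are available in your CLI version:

      # Assumption: these transfer subcommands exist in your yc CLI version.
      yc datatransfer transfer activate <transfer_ID>
      yc datatransfer transfer get <transfer_ID>    # wait for the Replicating status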

    1. In the data-transfer-mkf-mmy.tf file, specify the following variables:

      • source_endpoint_id: Source endpoint ID.
      • transfer_enabled: Set to 1 to create a target endpoint and a transfer.
    2. Make sure the Terraform configuration files are correct using this command:

      terraform validate
      

      Terraform will show any errors found in your configuration files.

    3. Create the required infrastructure:

      1. Run this command to view the planned changes:

        terraform plan
        

        If you described the configuration correctly, the terminal will display a list of the resources to create or update and their parameters. This is a verification step that does not apply changes to your resources.

      2. If everything looks correct, apply the changes:

        1. Run this command:

          terraform apply
          
        2. Confirm updating the resources.

        3. Wait for the operation to complete.

    4. The transfer will activate automatically upon creation. Wait for its status to change to Replicating.

Test the transfer

Make sure the data from the topic in the source Managed Service for Apache Kafka® cluster is being moved to the Managed Service for MySQL® cluster:

  1. Send data from sample.json to the Managed Service for Apache Kafka® sensors topic using jq and kafkacat:

    jq -rc . sample.json | kafkacat -P \
       -b <broker_host_FQDN>:9091 \
       -t sensors \
       -k key \
       -X security.protocol=SASL_SSL \
       -X sasl.mechanisms=SCRAM-SHA-512 \
       -X sasl.username="mkf-user" \
       -X sasl.password="<user_password_in_source_cluster>" \
       -X ssl.ca.location=/usr/local/share/ca-certificates/Yandex/YandexInternalRootCA.crt -Z
    

    To learn more about setting up an SSL certificate and using kafkacat, see Connecting to an Apache Kafka® cluster from applications.

  2. Check that the Managed Service for MySQL® cluster's sensors table contains the data that was sent:

    1. Connect to the Managed Service for MySQL® cluster.

    2. Get the contents of the sensors table using the query below:

      SELECT * FROM sensors;
      
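      Both steps can also be combined into a single command with the mysql client. A minimal sketch, assuming public access to the cluster hosts and the same Yandex CA certificate path used for kafkacat above:

      mysql --host=<MySQL_host_FQDN> \
            --port=3306 \
            --ssl-ca=/usr/local/share/ca-certificates/Yandex/YandexInternalRootCA.crt \
            --user=mmy-user \
            --password \
            db1 -e "SELECT * FROM sensors;"

      The three records sent from sample.json should appear as rows in the output.
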

Delete the resources you created

Note

Before deleting the resources, deactivate the transfer.
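
You can deactivate it in the management console or, assuming your yc CLI version includes the datatransfer command group, from the command line:

  # Assumption: this subcommand exists in your yc CLI version.
  yc datatransfer transfer deactivate <transfer_ID>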

Some resources are not free of charge. To avoid paying for them, delete the resources you no longer need:

  1. Delete the transfer.
  2. Delete the source and target endpoints.

Delete the other resources depending on how you created them:

Manually
Terraform
  • Delete the Managed Service for Apache Kafka® cluster.
  • Delete the Managed Service for MySQL® cluster.
  1. In the terminal window, go to the directory containing the infrastructure plan.

    Warning

    Make sure the directory has no Terraform manifests with the resources you want to keep. Terraform deletes all resources that were created using the manifests in the current directory.

  2. Delete resources:

    1. Run this command:

      terraform destroy
      
    2. Confirm deleting the resources and wait for the operation to complete.

    All the resources described in the Terraform manifests will be deleted.
