Delivering data from a Data Streams queue to Managed Service for Apache Kafka®

Written by
Yandex Cloud
Updated on January 15, 2026
  • Required paid resources
  • Getting started
  • Create a data stream in Data Streams
  • Set up and activate the transfer
  • Test your transfer
  • Delete the resources you created

With Data Transfer, you can deliver data from a stream in Data Streams to a Managed Service for Apache Kafka® cluster.

To transfer data:

  1. Set up a data stream in Data Streams.
  2. Set up and activate the transfer.
  3. Test your transfer.

If you no longer need the resources you created, delete them.

Required paid resources

The support cost for this setup includes:

  • Managed Service for YDB database (see Managed Service for YDB pricing). The cost depends on deployment mode:

    • In serverless mode, you pay for data operations and storage volume, including stored backups.
    • In dedicated instance mode, you pay for the use of computing resources allocated to the database, storage size, and backups.
  • Data Streams (see Data Streams pricing). The cost depends on the pricing model:

    • Based on allocated resources: You pay a fixed hourly rate for the provisioned throughput limit and message retention period, plus a charge per unit of data actually written.
    • On-demand: You pay for the read and write operations performed, the amount of data read and written, and the storage actually used by messages still within their retention period.
  • Managed Service for Apache Kafka® cluster: Computing resources allocated to hosts, storage and backup size (see Managed Service for Apache Kafka® pricing).

  • Public IP addresses if public access is enabled for cluster hosts (see Virtual Private Cloud pricing).

  • Each transfer: Use of computing resources and number of transferred data rows (see Data Transfer pricing).

Getting started

Set up your data delivery infrastructure:

Manually
Terraform
  1. Create a Managed Service for YDB database with your preferred configuration.

  2. Create a Managed Service for Apache Kafka® cluster in any suitable configuration with publicly available hosts.

  3. In the Managed Service for Apache Kafka® cluster, create a topic named sensors.

  4. In the Managed Service for Apache Kafka® cluster, create a user named mkf-user with the ACCESS_ROLE_PRODUCER and ACCESS_ROLE_CONSUMER permissions for the new topic.
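
    If you prefer the command line for steps 3 and 4, a minimal sketch using the YC CLI is shown below. The cluster name is a placeholder, and the --permission syntax for user creation is an assumption; verify the current flags with yc managed-kafka user create --help.

      # Create the "sensors" topic (cluster name is a placeholder).
      yc managed-kafka topic create sensors \
        --cluster-name <cluster_name> \
        --partitions 1 \
        --replication-factor 1

      # Create the mkf-user user; the --permission syntax below is an assumption.
      yc managed-kafka user create mkf-user \
        --cluster-name <cluster_name> \
        --password <user_password> \
        --permission topic=sensors,role=producer \
        --permission topic=sensors,role=consumer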

  1. If you do not have Terraform yet, install it.

  2. Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.

  3. Configure and initialize a provider. You do not need to create the provider configuration file manually: you can download it.

  4. Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.
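
    A minimal working provider configuration is sketched below. It assumes you authenticate through the YC_TOKEN, YC_CLOUD_ID, and YC_FOLDER_ID environment variables, and the zone value is a placeholder:

      # provider.tf — minimal provider configuration (sketch)
      terraform {
        required_providers {
          yandex = {
            source = "yandex-cloud/yandex"
          }
        }
      }

      provider "yandex" {
        zone = "ru-central1-a" # Placeholder: set your default availability zone
      }

    After saving the file, run terraform init in the working directory to initialize the provider.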

  5. Download the yds-to-kafka.tf configuration file to the same working directory.

    This file describes:

    • Network.
    • Subnet.
    • Security group and rules required to connect to a Managed Service for Apache Kafka® cluster.
    • Managed Service for YDB database.
    • Managed Service for Apache Kafka® cluster.
    • Managed Service for Apache Kafka® topic named sensors.
    • Managed Service for Apache Kafka® user with the ACCESS_ROLE_PRODUCER and ACCESS_ROLE_CONSUMER access permissions for the sensors topic.
    • Transfer.
  6. In yds-to-kafka.tf, specify the following settings:

    • mkf_version: Apache Kafka® cluster version.
    • ydb_name: Managed Service for YDB database name.
    • mkf_user_name: Managed Service for Apache Kafka® cluster user name.
    • mkf_user_password: Managed Service for Apache Kafka® cluster user password.
    • transfer_enabled: Set to 0 to ensure that no transfer is created until you create endpoints manually.
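
    If you prefer not to edit the file in place, Terraform also reads values from TF_VAR_* environment variables, assuming yds-to-kafka.tf declares these settings as Terraform variables. A sketch with placeholder values:

      export TF_VAR_mkf_version="3.5"              # Placeholder Apache Kafka® version
      export TF_VAR_ydb_name="ydb1"                # Placeholder database name
      export TF_VAR_mkf_user_name="mkf-user"
      export TF_VAR_mkf_user_password="<password>"
      export TF_VAR_transfer_enabled=0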
  7. Validate your Terraform configuration files using this command:

    terraform validate
    

    Terraform will display any configuration errors detected in your files.

  8. Create the required infrastructure:

    1. Run this command to view the planned changes:

      terraform plan
      

      If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

    2. If everything looks correct, apply the changes:

      1. Run this command:

        terraform apply
        
      2. Confirm updating the resources.

      3. Wait for the operation to complete.

    All the required resources will be created in the specified folder. You can check that the resources are available, and review their settings, in the management console.
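
    You can also verify the new resources from the command line, assuming the YC CLI is installed and configured:

      yc managed-kafka cluster list
      yc ydb database list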

Create a data stream in Data Streams

Create a data stream in Data Streams.
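
Data Streams exposes an Amazon Kinesis Data Streams-compatible API, so, as an alternative to the management console, you can create the stream with the AWS CLI. This is a sketch: the folder ID, database ID, stream name, and shard count are placeholders, and it assumes the AWS CLI is configured with a static access key for a service account.

  aws kinesis create-stream \
    --endpoint-url https://yds.serverless.yandexcloud.net \
    --stream-name /ru-central1/<folder_ID>/<database_ID>/sensors-stream \
    --shard-count 1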

Set up and activate the transfer

  1. Create a Data Streams source endpoint.

    • Database type: Yandex Data Streams.

    • Endpoint parameters:

      • Connection settings:

        • Database: Select the Managed Service for YDB database from the list.
        • Stream: Specify the name of the stream in Data Streams.
        • Service account: Select or create a service account with the yds.editor role.
      • Advanced settings:

        • Conversion rules: JSON.
        • Data scheme: JSON specification:

        Fill in the data schema:

            [
                {
                    "name": "device_id",
                    "type": "string"
                },
                {
                    "name": "datetime",
                    "type": "datetime"
                },
                {
                    "name": "latitude",
                    "type": "double"
                },
                {
                    "name": "longitude",
                    "type": "double"
                },
                {
                    "name": "altitude",
                    "type": "double"
                },
                {
                    "name": "speed",
                    "type": "double"
                },
                {
                    "name": "battery_voltage",
                    "type": "any"
                },
                {
                    "name": "cabin_temperature",
                    "type": "double"
                },
                {
                    "name": "fuel_level",
                    "type": "any"
                }
            ]
        
  2. Create a target endpoint in Managed Service for Apache Kafka®.

    • Database type: Kafka.

    • Endpoint parameters:

      • Connection settings:

        • Connection type: Select Managed Service for Apache Kafka cluster.
        • Managed Service for Apache Kafka cluster: Select your Managed Service for Apache Kafka® cluster from the list.
        • Authentication: Select SASL.
        • Username: Enter the Managed Service for Apache Kafka® cluster user name.
        • Password: Enter the Managed Service for Apache Kafka® cluster user password.
        • Topic: Select Topic full name.
        • Topic full name: Enter a name for the topic in the Managed Service for Apache Kafka® cluster.
  3. Create a transfer:

    Manually
    Terraform
    1. Create a Replication-type transfer configured to use the new endpoints.
    2. Activate the transfer.
    1. In the yds-to-kafka.tf file, specify the values of the following variables:

      • source_endpoint_id: Source endpoint ID.
      • target_endpoint_id: Target endpoint ID.
      • transfer_enabled: 1 to create a transfer.
    2. Validate your Terraform configuration files using this command:

      terraform validate
      

      Terraform will display any configuration errors detected in your files.

    3. Create the required infrastructure:

      1. Run this command to view the planned changes:

        terraform plan
        

        If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

      2. If everything looks correct, apply the changes:

        1. Run this command:

          terraform apply
          
        2. Confirm updating the resources.

        3. Wait for the operation to complete.

      The transfer will be activated automatically.
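
      You can check the transfer state from the command line as well (the transfer ID is a placeholder):

        yc datatransfer transfer get <transfer_ID>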

Test your transfer

  1. Wait for the transfer status to change to Replicating.

  2. Send test data to the stream in Data Streams:

    {
        "device_id":"iv9a94th6rzt********",
        "datetime":"2020-06-05T17:27:00",
        "latitude":"55.70329032",
        "longitude":"37.65472196",
        "altitude":"427.5",
        "speed":"0",
        "battery_voltage":"23.5",
        "cabin_temperature":"17",
        "fuel_level":null
    }
    
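
    One way to send this message is through the Kinesis-compatible API with the AWS CLI v2 (a sketch: the stream path is a placeholder, and the AWS CLI must be configured with a service account static access key):

      aws kinesis put-record \
        --endpoint-url https://yds.serverless.yandexcloud.net \
        --stream-name /ru-central1/<folder_ID>/<database_ID>/sensors-stream \
        --cli-binary-format raw-in-base64-out \
        --partition-key 1 \
        --data '{"device_id":"iv9a94th6rzt********","datetime":"2020-06-05T17:27:00","latitude":"55.70329032","longitude":"37.65472196","altitude":"427.5","speed":"0","battery_voltage":"23.5","cabin_temperature":"17","fuel_level":null}'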
  3. Make sure the data from the stream has been transferred to the sensors topic in the Managed Service for Apache Kafka® cluster:

    1. Get an SSL certificate to connect to the Managed Service for Apache Kafka® cluster.
    2. Install kafkacat.
    3. Run the command for receiving messages from the topic; a sketch is shown below.
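
    A consumer invocation might look like the following (a sketch: the broker FQDN, password, and certificate path are placeholders; port 9091 is the SASL_SSL port of Managed Service for Apache Kafka® clusters):

      kafkacat -C \
        -b <broker_FQDN>:9091 \
        -t sensors \
        -X security.protocol=SASL_SSL \
        -X sasl.mechanisms=SCRAM-SHA-512 \
        -X sasl.username=mkf-user \
        -X sasl.password=<user_password> \
        -X ssl.ca.location=<path_to_SSL_certificate> \
        -Z -K: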

Delete the resources you created

Note

Before deleting the resources, deactivate the transfer.

To avoid paying for resources you no longer need, delete them:

  1. Delete the transfer.

  2. Delete the source and target endpoints.

  3. If you created a service account when creating the source endpoint, delete it.

  4. Delete the remaining resources using the same method you used to create them:

    Manually
    Terraform
    1. Delete the Managed Service for Apache Kafka® cluster.
    2. Delete the Managed Service for YDB database.
    1. In the terminal window, go to the directory containing the infrastructure plan.

      Warning

      Make sure the directory has no Terraform manifests with the resources you want to keep. Terraform deletes all resources that were created using the manifests in the current directory.

    2. Delete resources:

      1. Run this command:

        terraform destroy
        
      2. Confirm deleting the resources and wait for the operation to complete.

      All the resources described in the Terraform manifests will be deleted.
