Delivering data from a Data Streams queue to Managed Service for Apache Kafka® using Yandex Data Transfer
With Data Transfer, you can deliver data from a stream in Data Streams to a Managed Service for Apache Kafka® cluster.
To transfer data:

- Create a data stream in Data Streams.
- Set up and activate the transfer.
- Test your transfer.

If you no longer need the resources you created, delete them.
Required paid resources
The support cost includes:
- Managed Service for Apache Kafka® cluster fee: Using computing resources allocated to hosts (including ZooKeeper hosts) and disk space (see Apache Kafka® pricing).
- Fee for using public IP addresses for cluster hosts (see Virtual Private Cloud pricing).
- Managed Service for YDB database fee. The charge depends on the usage mode:
  - For the serverless mode, you pay for data operations and the amount of stored data.
  - For the dedicated instance mode, you pay for the use of computing resources, dedicated DBs, and disk space.

  Learn more about the Managed Service for YDB pricing plans here.
- Data Streams fee, which depends on the pricing mode:
  - Provisioned capacity pricing mode: You pay for the number of write units and the resources allocated for data streaming.
  - On-demand pricing mode:
    - If the DB operates in serverless mode, the data stream is charged according to the YDB serverless mode pricing policy.
    - If the DB operates in dedicated instance mode, the data stream is not charged separately (you only pay for the DB, see the pricing policy).

  Learn more about the Data Streams pricing plans here.
- Transfer fee: Using computing resources and the number of transferred data rows (see Data Transfer pricing).
Getting started
Set up your data transfer infrastructure:
Manually

- Create a Managed Service for YDB database in any suitable configuration.
- Create a Managed Service for Apache Kafka® cluster in any suitable configuration with publicly available hosts.
- In the Managed Service for Apache Kafka® cluster, create a topic named `sensors`.
- In the Managed Service for Apache Kafka® cluster, create a user named `mkf-user` with the `ACCESS_ROLE_PRODUCER` and `ACCESS_ROLE_CONSUMER` permissions for the new topic. You can also create the topic and user from the CLI, as sketched below.
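  A sketch of the CLI variant, assuming a cluster named mkf-cluster and the current yc managed-kafka syntax (verify the permission flags with `yc managed-kafka user create --help`):

  ```bash
  # Create the topic (3 partitions and replication factor 1 are illustrative values).
  yc managed-kafka topic create sensors \
    --cluster-name mkf-cluster \
    --partitions 3 \
    --replication-factor 1

  # Create the user and grant it producer and consumer access to the topic.
  yc managed-kafka user create mkf-user \
    --cluster-name mkf-cluster \
    --password <user password> \
    --permission topic=sensors,role=producer \
    --permission topic=sensors,role=consumer
  ```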
Terraform

- If you do not have Terraform yet, install it.
- Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file, as shown below.
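  For token-based authentication, the yandex-cloud/yandex provider reads these environment variables (a minimal sketch; fill in your own values):

  ```bash
  export YC_TOKEN=<OAuth or IAM token>
  export YC_CLOUD_ID=<cloud ID>
  export YC_FOLDER_ID=<folder ID>
  ```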
- Configure and initialize a provider. There is no need to create a provider configuration file manually; you can download it.
- Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file. A minimal configuration is sketched below.
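  A minimal sketch of the working directory setup, assuming the official yandex-cloud/yandex provider and an illustrative default zone:

  ```bash
  # Write a minimal provider configuration (the downloadable file from the
  # documentation is more complete).
  cat > provider.tf <<'EOF'
  terraform {
    required_providers {
      yandex = {
        source = "yandex-cloud/yandex"
      }
    }
  }

  provider "yandex" {
    zone = "ru-central1-a"
  }
  EOF

  # Initialize the working directory and download the provider plugin.
  terraform init
  ```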
- Download the `yds-to-kafka.tf` configuration file to the same working directory. This file describes:
  - Network.
  - Subnet.
  - Security group and rules required to connect to a Managed Service for Apache Kafka® cluster.
  - Managed Service for YDB database.
  - Managed Service for Apache Kafka® cluster.
  - Managed Service for Apache Kafka® topic named `sensors`.
  - Managed Service for Apache Kafka® user with the `ACCESS_ROLE_PRODUCER` and `ACCESS_ROLE_CONSUMER` access permissions for the `sensors` topic.
  - Transfer.
- In `yds-to-kafka.tf`, specify the following settings:

  - `mkf_version`: Apache Kafka® cluster version.
  - `ydb_name`: Managed Service for YDB database name.
  - `mkf_user_name`: Username in the Managed Service for Apache Kafka® cluster.
  - `mkf_user_password`: User password in the Managed Service for Apache Kafka® cluster.
  - `transfer_enabled`: Set to `0` to ensure that no transfer is created until you create the endpoints manually.
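  If the downloaded file declares these as Terraform input variables rather than values you edit in place (an assumption; check the file), you can also supply them on the command line:

  ```bash
  # Illustrative values; transfer_enabled=0 postpones transfer creation.
  terraform plan \
    -var='mkf_version=3.5' \
    -var='ydb_name=ydb1' \
    -var='mkf_user_name=mkf-user' \
    -var='mkf_user_password=<user password>' \
    -var='transfer_enabled=0'
  ```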
- Make sure the Terraform configuration files are correct using this command:

      terraform validate

  Terraform will show any errors found in your configuration files.
- Create the required infrastructure:

  - Run this command to view the planned changes:

        terraform plan

    If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

  - If everything looks correct, apply the changes:

    - Run this command:

          terraform apply

    - Confirm updating the resources.
    - Wait for the operation to complete.

  All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.
Create a data stream in Data Streams
Create a data stream in Data Streams.
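Data Streams also exposes an Amazon Kinesis-compatible HTTP API, so as an alternative to the management console you can create the stream with the AWS CLI; a sketch, assuming the serverless endpoint and the full stream path format from the Data Streams docs (the folder and database IDs are placeholders):

```bash
aws kinesis create-stream \
  --endpoint-url https://yds.serverless.yandexcloud.net \
  --stream-name "/ru-central1/<folder ID>/<database ID>/sensors-stream" \
  --shard-count 1
```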
Set up and activate the transfer
- Create a Data Streams source endpoint.

  - Database type: `Yandex Data Streams`.
  - Endpoint parameters:

    - Connection settings:
      - Database: Select the Managed Service for YDB database from the list.
      - Stream: Specify the name of the stream in Data Streams.
      - Service account: Select or create a service account with the `yds.editor` role. You can also create one from the CLI, as sketched after this list.
    - Advanced settings:
      - Conversion rules: `JSON`.
      - Data scheme: `JSON specification`.

        Fill in the data schema:

            [
                { "name": "device_id", "type": "string" },
                { "name": "datetime", "type": "datetime" },
                { "name": "latitude", "type": "double" },
                { "name": "longitude", "type": "double" },
                { "name": "altitude", "type": "double" },
                { "name": "speed", "type": "double" },
                { "name": "battery_voltage", "type": "any" },
                { "name": "cabin_temperature", "type": "double" },
                { "name": "fuel_level", "type": "any" }
            ]
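  If you prefer to create the service account from the CLI, a sketch (the name yds-sa and the placeholders are illustrative):

  ```bash
  # Create a service account.
  yc iam service-account create --name yds-sa

  # Grant it the yds.editor role on the folder.
  yc resource-manager folder add-access-binding <folder ID> \
    --role yds.editor \
    --subject serviceAccount:<service account ID>
  ```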
- Create a target endpoint in Managed Service for Apache Kafka®.

  - Database type: `Kafka`.
  - Endpoint parameters:

    - Connection settings:
      - Connection type: Select `Managed Service for Apache Kafka cluster`.
      - Managed Service for Apache Kafka cluster: Select a Managed Service for Apache Kafka® cluster from the list.
      - Authentication: Select `SASL`.
      - Username: Enter the name of the Managed Service for Apache Kafka® cluster user (`mkf-user`).
      - Password: Enter the password of the Managed Service for Apache Kafka® cluster user.
      - Topic: Select `Topic full name`.
      - Topic full name: Enter the name of the topic in the Managed Service for Apache Kafka® cluster (`sensors`).
- Create a transfer:

  Manually

  - Create a transfer of the Replication type that will use the created endpoints.
  - Activate your transfer.
  Terraform

  - In the `yds-to-kafka.tf` file, specify the following variables:

    - `source_endpoint_id`: Source endpoint ID.
    - `target_endpoint_id`: Target endpoint ID.
    - `transfer_enabled`: Set to `1` to create a transfer.
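    As in the setup step, if the file declares these as Terraform input variables (an assumption; check the file), the values can also be passed on the command line:

    ```bash
    terraform apply \
      -var='source_endpoint_id=<source endpoint ID>' \
      -var='target_endpoint_id=<target endpoint ID>' \
      -var='transfer_enabled=1'
    ```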
  - Make sure the Terraform configuration files are correct using this command:

        terraform validate

    Terraform will show any errors found in your configuration files.

  - Create the required infrastructure:

    - Run this command to view the planned changes:

          terraform plan

      If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

    - If everything looks correct, apply the changes:

      - Run this command:

            terraform apply

      - Confirm updating the resources.
      - Wait for the operation to complete.

    The transfer will be activated automatically.
Test your transfer
- Wait for the transfer status to change to Replicating.
- Send test data to the stream in Data Streams:

      {
          "device_id": "iv9a94th6rzt********",
          "datetime": "2020-06-05T17:27:00",
          "latitude": "55.70329032",
          "longitude": "37.65472196",
          "altitude": "427.5",
          "speed": "0",
          "battery_voltage": "23.5",
          "cabin_temperature": "17",
          "fuel_level": null
      }
- Make sure the data has moved to the `sensors` topic in the Managed Service for Apache Kafka® cluster:

  - Get an SSL certificate to connect to the Managed Service for Apache Kafka® cluster.
  - Install kafkacat.
  - Run the command for receiving messages from the topic; a sample invocation is sketched below.
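  A sketch of the consumer command, assuming SASL over SSL on port 9091, the SCRAM-SHA-512 mechanism, and the certificate path used in the cluster connection docs:

  ```bash
  # Read messages from the sensors topic as mkf-user.
  kafkacat -C \
    -b <broker FQDN>:9091 \
    -t sensors \
    -X security.protocol=SASL_SSL \
    -X sasl.mechanisms=SCRAM-SHA-512 \
    -X sasl.username=mkf-user \
    -X sasl.password=<mkf-user password> \
    -X ssl.ca.location=/usr/local/share/ca-certificates/Yandex/YandexInternalRootCA.crt \
    -K:
  ```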
Delete the resources you created
Note
Before deleting the resources you created, deactivate the transfer.
Some resources are not free of charge. To avoid unnecessary charges, delete the resources you no longer need:
- Delete the transfer.
- Delete the source and target endpoints.
- If you created a service account when creating the source endpoint, delete it.
Delete the other resources depending on how you created them:

Terraform

- In the terminal window, go to the directory containing the infrastructure plan.
Warning
Make sure the directory has no Terraform manifests with the resources you want to keep. Terraform deletes all resources that were created using the manifests in the current directory.
- Delete resources:

  - Run this command:

        terraform destroy

  - Confirm deleting the resources and wait for the operation to complete.

  All the resources described in the Terraform manifests will be deleted.