Data delivery in ksqlDB

Written by Yandex Cloud
Updated on April 25, 2025

In this article:

  • Required paid resources
  • Getting started
  • Set up Apache Kafka® integration for the ksqlDB database
  • Review the format of the data coming from Managed Service for Apache Kafka®
  • Create a table in ksqlDB to capture the data stream from the Apache Kafka® topic
  • Get test data from the Managed Service for Apache Kafka® cluster
  • Write the test data to ksqlDB
  • Check for records in Apache Kafka® topics
  • Delete the resources you created

ksqlDB is a database designed for stream processing of messages from Apache Kafka® topics. Working with message streams in ksqlDB is similar to working with tables in a regular database. A ksqlDB table is automatically updated with data from a topic, and the data you add to a ksqlDB table is sent to the corresponding Apache Kafka® topic. You can learn more in the ksqlDB documentation.

To set up data delivery from Managed Service for Apache Kafka® to ksqlDB:

  1. Set up Apache Kafka® integration for the ksqlDB database.
  2. Review the format of the data coming from Managed Service for Apache Kafka®.
  3. Create a table in ksqlDB to capture the data stream from the Apache Kafka® topic.
  4. Get test data from the Managed Service for Apache Kafka® cluster.
  5. Write the test data to ksqlDB.
  6. Check that the test data is present in the Apache Kafka® topic.

If you no longer need the resources you created, delete them.

Required paid resources

The support cost includes:

  • Managed Service for Apache Kafka® cluster fee: Using computing resources allocated to hosts (including ZooKeeper hosts) and disk space (see Apache Kafka® pricing).
  • Fee for using public IP addresses if public access is enabled for cluster hosts (see Virtual Private Cloud pricing).

Getting started

  1. Create a Managed Service for Apache Kafka® cluster in any suitable configuration.

    • If the ksqlDB server is hosted on the internet, create a Managed Service for Apache Kafka® cluster with public access.

    • If the ksqlDB server is hosted in Yandex Cloud, create a Managed Service for Apache Kafka® cluster on the same cloud network as ksqlDB.

  2. Create topics in a Managed Service for Apache Kafka® cluster:

    1. Service topic named _confluent-ksql-default__command_topic set up as follows:
      • Replication factor: 1
      • Number of partitions: 1
      • Log cleanup policy: Delete
      • Log segment lifetime, ms: -1
      • Minimum number of in-sync replicas: 1
    2. Service topic named default_ksql_processing_log to write ksqlDB logs to. Use any settings.
    3. Data storage topic named locations. Use any settings.
  3. Create a user named ksql and assign them the ACCESS_ROLE_ADMIN role for all topics.

  4. Check that you can connect to the ksqlDB server.

  5. Install the kafkacat utility on the ksqlDB server and check that you can use it to connect to a Managed Service for Apache Kafka® cluster over SSL.

  6. Install the jq utility for processing JSON streams on the ksqlDB server; an installation and connectivity check is sketched right after this list.
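
The exact installation commands depend on the operating system of the ksqlDB server. Below is a minimal sketch for a Debian or Ubuntu host with the Yandex Cloud CLI configured; the broker FQDN and password are placeholders, and the cluster's CA certificate is assumed to be already downloaded to the path used throughout this tutorial.

    # Install kafkacat and jq (package names may differ on other distributions).
    sudo apt-get update && sudo apt-get install -y kafkacat jq

    # Assumption: the Yandex Cloud CLI provides a list-hosts subcommand for
    # Managed Service for Apache Kafka® clusters to look up broker FQDNs;
    # check `yc managed-kafka cluster --help` for your CLI version.
    yc managed-kafka cluster list-hosts <cluster_name>

    # Check SASL_SSL connectivity by requesting cluster metadata as the ksql user.
    kafkacat -L \
       -b <broker_FQDN>:9091 \
       -X security.protocol=SASL_SSL \
       -X sasl.mechanisms=SCRAM-SHA-512 \
       -X sasl.username=ksql \
       -X sasl.password="<ksql_user_password>" \
       -X ssl.ca.location=/usr/local/share/ca-certificates/Yandex/YandexInternalRootCA.crt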

Set up Apache Kafka® integration for the ksqlDB database

  1. Connect to the ksqlDB server.

  2. Add the server's SSL certificate to the Java trusted certificate store (Java Key Store) so that ksqlDB can use this certificate for secure connections to the cluster hosts. Set a password in the -storepass parameter for additional storage protection:

    cd /etc/ksqldb && \
    sudo keytool -importcert -alias YandexCA -file /usr/local/share/ca-certificates/Yandex/YandexInternalRootCA.crt \
    -keystore ssl -storepass <certificate_store_password> \
    -noprompt
    
  3. In the /etc/ksqldb/ksql-server.properties ksqlDB configuration file, specify the credentials for authentication in the Managed Service for Apache Kafka® cluster:

    bootstrap.servers=<broker_FQDN_1>:9091,...,<broker_FQDN_N>:9091
    sasl.mechanism=SCRAM-SHA-512
    security.protocol=SASL_SSL
    ssl.truststore.location=/etc/ksqldb/ssl
    ssl.truststore.password=<certificate_store_password>
    sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="ksql" password="<ksql_user_password>";
    

    For info on how to get a broker host's FQDN, see this guide.

    You can get the cluster name from the list of clusters in your folder.

  4. In the ksqlDB logging configuration file named /etc/ksqldb/log4j.properties, configure logging to a Managed Service for Apache Kafka® cluster topic:

    log4j.appender.kafka_appender=org.apache.kafka.log4jappender.KafkaLog4jAppender
    log4j.appender.kafka_appender.layout=io.confluent.common.logging.log4j.StructuredJsonLayout
    log4j.appender.kafka_appender.BrokerList=<broker_FQDN_1>:9091,...,<broker_FQDN_N>:9091
    log4j.appender.kafka_appender.Topic=default_ksql_processing_log
    log4j.logger.io.confluent.ksql=INFO,kafka_appender
    
    log4j.appender.kafka_appender.clientJaasConf=org.apache.kafka.common.security.scram.ScramLoginModule required username="ksql" password="<ksql_user_password>";
    log4j.appender.kafka_appender.SecurityProtocol=SASL_SSL
    log4j.appender.kafka_appender.SaslMechanism=SCRAM-SHA-512
    log4j.appender.kafka_appender.SslTruststoreLocation=/etc/ksqldb/ssl
    log4j.appender.kafka_appender.SslTruststorePassword=<certificate_store_password>
    
  5. Restart the ksqlDB service with the command below:

    sudo systemctl restart confluent-ksqldb.service
    
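Optionally, verify that the integration took effect. This is a minimal sketch assuming the defaults used above: the keystore at /etc/ksqldb/ssl and the ksqlDB REST interface on port 8088 of the local host.

    # List the keystore contents to confirm the YandexCA certificate was imported.
    sudo keytool -list -keystore /etc/ksqldb/ssl -storepass <certificate_store_password>

    # Confirm the ksqlDB service restarted and its REST endpoint responds.
    sudo systemctl status confluent-ksqldb.service --no-pager
    curl -s http://localhost:8088/info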

Review the format of the data coming from Managed Service for Apache Kafka®

How the Managed Service for Apache Kafka® data stream is processed depends on the format of the Apache Kafka® messages.

In the example, geodata is written to the locations Apache Kafka® topic in JSON format:

  • profileId: ID
  • latitude: Latitude
  • longitude: Longitude

This data will be transmitted as Apache Kafka® messages. Each message will contain a JSON object as a string in the following format:

{"profileId": "c2309eec", "latitude": 37.7877, "longitude": -122.4205}

ksqlDB stores the values of the corresponding parameters from Apache Kafka® messages in a three-column table.
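
If you want to sanity-check the field names before creating the ksqlDB stream, you can extract them from a sample message with jq (installed during setup). This is an optional check, not a required step:

    # Print the three fields ksqlDB will map to table columns; a missing field appears as null.
    echo '{"profileId": "c2309eec", "latitude": 37.7877, "longitude": -122.4205}' \
      | jq '{profileId, latitude, longitude}'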

Next, we are going to configure the fields of a ksqlDB data stream table.

Create a table in ksqlDB to capture the data stream from the Apache Kafka® topic

Create a table in ksqlDB for writing data from the Apache Kafka® topic. The table structure matches the format of the data from Managed Service for Apache Kafka®:

  1. Connect to the ksqlDB server.

  2. Run the ksql client using this command:

    ksql http://0.0.0.0:8088
    
  3. Run this request:

    CREATE STREAM riderLocations 
    (
      profileId VARCHAR,
      latitude DOUBLE,
      longitude DOUBLE
    ) WITH 
    (
      kafka_topic='locations', 
      value_format='json', 
      partitions=<number_of_partitions_in_the_locations_topic>
    );
    

    This data stream table will be automatically populated with messages from the locations topic of the Managed Service for Apache Kafka® cluster. ksqlDB reads the messages using the ksql user's credentials. A quick way to confirm the stream was registered from outside the ksql session is shown below.

    For more information about creating a data stream table in the ksqlDB engine, see the ksqlDB documentation.

  4. Run this request:

    SELECT * FROM riderLocations WHERE 
             GEO_DISTANCE(latitude, longitude, 37.4133, -122.1162) <= 5 
             EMIT CHANGES;
    

    The query waits for real-time data to appear in the table.
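
    Since this query keeps the ksql session busy, you can confirm from a second terminal that the stream was registered by querying the ksqlDB REST API. A minimal sketch, assuming the server listens on port 8088 of the local host:

    # List registered streams via the ksqlDB REST API.
    curl -s -X POST http://localhost:8088/ksql \
      -H "Content-Type: application/vnd.ksql.v1+json" \
      -d '{"ksql": "SHOW STREAMS;", "streamsProperties": {}}' | jq .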

Get test data from the Managed Service for Apache Kafka® cluster

  1. Connect to the ksqlDB server.

  2. Create a file named sample.json with the following test data:

    {
      "profileId": "c2309eec", 
      "latitude": 37.7877,
      "longitude": -122.4205
    }
    
    {
      "profileId": "4ab5cbad", 
      "latitude": 37.3952,
      "longitude": -122.0813
    }
    
    {
      "profileId": "4a7c7b41", 
      "latitude": 37.4049,
      "longitude": -122.0822
    }   
    
  3. Send a file named sample.json to the Managed Service for Apache Kafka® cluster's locations topic using jq and kafkacat:

    jq -rc . sample.json | kafkacat -P \
       -b <broker_FQDN_1>:9091,...,<broker_FQDN_N>:9091 \
       -t locations \
       -X security.protocol=SASL_SSL \
       -X sasl.mechanisms=SCRAM-SHA-512 \
       -X sasl.username=ksql \
       -X sasl.password="<ksql_user_password>" \
       -X ssl.ca.location=/usr/local/share/ca-certificates/Yandex/YandexInternalRootCA.crt -Z
    

    The information is sent using the ksql user. To learn more about setting up an SSL certificate and working with kafkacat, see Connecting to an Apache Kafka® cluster from applications.

  4. Make sure that the ksql session where the SELECT query is running displays the data sent to the topic:

    +--------------------------+--------------------------+------------------------+
    |PROFILEID                 |LATITUDE                  |LONGITUDE               |
    +--------------------------+--------------------------+------------------------+
    |4ab5cbad                  |37.3952                   |-122.0813               | 
    |4a7c7b41                  |37.4049                   |-122.0822               |
    

The data is read using the ksql user.

Write the test data to ksqlDB

  1. Connect to the ksqlDB server.

  2. Run the ksql client using this command:

    ksql http://0.0.0.0:8088
    
  3. Insert the test data into the riderLocations table:

    INSERT INTO riderLocations (profileId, latitude, longitude) VALUES ('18f4ea86', 37.3903, -122.0643);
    INSERT INTO riderLocations (profileId, latitude, longitude) VALUES ('8b6eae59', 37.3944, -122.0813);
    INSERT INTO riderLocations (profileId, latitude, longitude) VALUES ('4ddad000', 37.7857, -122.4011);
    

    This data is sent synchronously to the locations Apache Kafka® topic using the ksql user.

Check for records in Apache Kafka® topics

  1. Check messages in the Managed Service for Apache Kafka® cluster's locations topic using kafkacat and the ksql user:

    kafkacat -C \
     -b <broker_FQDN_1>:9091,...,<broker_FQDN_N>:9091 \
     -t locations \
     -X security.protocol=SASL_SSL \
     -X sasl.mechanisms=SCRAM-SHA-512 \
     -X sasl.username=ksql \
     -X sasl.password="<ksql_user_password>" \
     -X ssl.ca.location=/usr/local/share/ca-certificates/Yandex/YandexInternalRootCA.crt -Z -K:
    
  2. Make sure that the console displays the messages you inserted into the table.

  3. Check messages in the Managed Service for Apache Kafka® cluster's default_ksql_processing_log topic using kafkacat and the ksql user:

    kafkacat -C \
     -b <broker_FQDN_1>:9091,...,<broker_FQDN_N>:9091 \
     -t default_ksql_processing_log \
     -X security.protocol=SASL_SSL \
     -X sasl.mechanisms=SCRAM-SHA-512 \
     -X sasl.username=ksql \
     -X sasl.password="<ksql_user_password>" \
     -X ssl.ca.location=/usr/local/share/ca-certificates/Yandex/YandexInternalRootCA.crt -Z -K:
    
  4. Make sure the console displays ksqlDB log records.

Delete the resources you created

Some resources are not free of charge. To avoid paying for them, delete the resources you no longer need:

  • Delete the virtual machine.
  • If you reserved a public static IP for your virtual machine, delete it.
  • Delete the Managed Service for Apache Kafka® cluster.
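
If you prefer the command line, the same cleanup can be done with the Yandex Cloud CLI. This is a minimal sketch with placeholder resource names; the exact subcommands and flags may vary between CLI versions, so check `yc <service> --help` before running them.

    # Delete the Managed Service for Apache Kafka® cluster.
    yc managed-kafka cluster delete <cluster_name>

    # Delete the virtual machine hosting the ksqlDB server.
    yc compute instance delete <vm_name>

    # Release the reserved static public IP address, if any.
    yc vpc address delete <address_name>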
