
PostgreSQL change data capture and delivery to Yandex Data Streams

Written by
Yandex Cloud
Updated on January 15, 2026
  • Required paid resources
  • Getting started
  • Set up your transfer
  • Activate the transfer
  • Test replication
  • Delete the resources you created

You can track data changes in a Managed Service for PostgreSQL source cluster and send them to a Data Streams target cluster using change data capture (CDC).
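Data Transfer can serialize CDC events for queue targets in a Debezium-compatible JSON format. As an illustration of the general shape of such an event (field names follow the Debezium convention; the exact payload your transfer emits may differ), here is a minimal sketch in Python:

```python
import json

# Illustration only: a Debezium-style CDC event envelope. Data Transfer
# can serialize CDC changes into a Debezium-compatible JSON format, but
# the exact payload your transfer emits may differ from this sketch.
event = {
    "op": "u",  # c = insert (create), u = update, d = delete
    "source": {"db": "db1", "table": "measurements"},
    "before": {"device_id": "iv9a94th6rzt********", "speed": 0.0},
    "after": {"device_id": "iv9a94th6rzt********", "speed": 12.5},
}

message = json.dumps(event)    # what would travel through the stream
decoded = json.loads(message)  # what a consumer would reconstruct
print(decoded["op"], decoded["after"]["speed"])
```

An insert event carries only `after`, and a delete event only `before`; consumers typically switch on `op` to decide how to apply the change.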

To set up CDC using Data Transfer:

  1. Set up your transfer.
  2. Activate the transfer.
  3. Test replication.

If you no longer need the resources you created, delete them.

Required paid resources

  • Managed Service for PostgreSQL cluster: Computing resources allocated to hosts, storage and backup size (see Managed Service for PostgreSQL pricing).

  • Public IP addresses if public access is enabled for cluster hosts (see Virtual Private Cloud pricing).

  • Managed Service for YDB database (see Managed Service for YDB pricing). The cost depends on the deployment mode:

    • In serverless mode, you pay for data operations and storage volume, including stored backups.
    • In dedicated instance mode, you pay for the use of computing resources allocated to the database, storage size, and backups.
  • Data Streams (see Data Streams pricing). The cost depends on the pricing model:

    • Based on allocated resources: You pay a fixed hourly rate for the provisioned throughput limit and the message retention period, plus a charge for each unit of data actually written.
    • On-demand: You pay for the read and write operations performed, the volume of data read and written, and the storage actually used by messages still within their retention period.

Getting started

Set up the infrastructure:

Manually
Terraform
  1. Create a Managed Service for PostgreSQL source cluster using any suitable configuration with publicly accessible hosts. Specify the following settings:

    • DB name: db1
    • Username: pg-user
  2. Configure security groups, ensuring they allow cluster connections.

  3. Grant the mdb_replication role to pg-user.

  4. Create a Managed Service for YDB database ydb-example with your preferred configuration.

  5. Create a service account yds-sa with the yds.editor role. The transfer will use it to access Data Streams.

  1. If you do not have Terraform yet, install it.

  2. Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.

  3. Configure and initialize the provider. You do not need to create the provider configuration file manually: you can download it.

  4. Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.

  5. Download the postgresql-yds.tf configuration file to your current working directory.

    This file describes:

    • Network.
    • Subnet.
    • Security group required for cluster access.
    • Managed Service for PostgreSQL source cluster.
    • Managed Service for YDB database.
    • Service account that will be used to access Data Streams.
    • Source endpoint.
    • Transfer.
  6. In the postgresql-yds.tf file, specify the PostgreSQL user password.

  7. Validate your Terraform configuration files using this command:

    terraform validate
    

    Terraform will display any configuration errors detected in your files.

  8. Create the required infrastructure:

    1. Run this command to view the planned changes:

      terraform plan
      

      If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

    2. If everything looks correct, apply the changes:

      1. Run this command:

        terraform apply
        
      2. Confirm updating the resources.

      3. Wait for the operation to complete.

    All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.
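Whichever method you used, you can sanity-check connectivity to the new source cluster before continuing. Below is a minimal sketch assembling a libpq-style connection string, assuming the standard Managed Service for PostgreSQL conventions (the special read-write FQDN `c-<cluster_id>.rw.mdb.yandexcloud.net` and port 6432); `<cluster_id>` is a placeholder for your real cluster ID:

```python
# Sketch: assemble a libpq-style connection string for the new cluster.
# Assumptions: the special read-write FQDN
# c-<cluster_id>.rw.mdb.yandexcloud.net and port 6432 follow Managed
# Service for PostgreSQL conventions; <cluster_id> is a placeholder.
cluster_id = "<cluster_id>"
dsn = (
    f"host=c-{cluster_id}.rw.mdb.yandexcloud.net "
    "port=6432 dbname=db1 user=pg-user sslmode=verify-full"
)
print(dsn)  # pass to psql, e.g.: psql "<dsn>"
```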

Set up your transfer

  1. Create a data stream mpg-stream in Data Streams.

  2. Connect to the Managed Service for PostgreSQL cluster. In the db1 database, create a table named measurements and populate it with data:

    CREATE TABLE measurements (
        device_id varchar(200) NOT NULL,
        datetime timestamp NOT NULL,
        latitude real NOT NULL,
        longitude real NOT NULL,
        altitude real NOT NULL,
        speed real NOT NULL,
        battery_voltage real,
        cabin_temperature real NOT NULL,
        fuel_level real,
        PRIMARY KEY (device_id)
    );
    INSERT INTO measurements VALUES
        ('iv9a94th6rzt********', '2022-06-05 17:27:00', 55.70329032, 37.65472196,  427.5,    0, 23.5, 17, NULL),
        ('rhibbh3y08qm********', '2022-06-06 09:49:54', 55.71294467, 37.66542005, 429.13, 55.5, NULL, 18, 32);
    
  3. Create a Data Streams target endpoint with the following settings:

    • Database: ydb-example
    • Stream: mpg-stream
    • Service account: yds-sa
  4. Create a source endpoint and set up the transfer:

    Manually
    Terraform
    1. Create a PostgreSQL-type source endpoint and configure it using the following settings:

      • Installation type: Managed Service for PostgreSQL cluster.
      • Managed Service for PostgreSQL cluster: <PostgreSQL_source_cluster_name> from the drop-down list.
      • Database: db1.
      • User: pg-user.
      • Password: pg-user password.
    2. Create a Replication-type transfer configured to use the new endpoints.

    1. In the postgresql-yds.tf file, specify the following variables:

      • yds_endpoint_id: Target endpoint ID.
      • transfer_enabled: 1 to create a transfer.
    2. Validate your Terraform configuration files using this command:

      terraform validate
      

      Terraform will display any configuration errors detected in your files.

    3. Create the required infrastructure:

      1. Run this command to view the planned changes:

        terraform plan
        

        If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

      2. If everything looks correct, apply the changes:

        1. Run this command:

          terraform apply
          
        2. Confirm updating the resources.

        3. Wait for the operation to complete.
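Once replication starts, each row of the measurements table becomes a message in mpg-stream. As a rough illustration only (not the transfer's exact wire format), one of the source rows could be serialized like this, with the timestamp encoded as epoch microseconds per the Debezium MicroTimestamp convention:

```python
import json
from datetime import datetime, timezone

# Illustration only: how one measurements row might look serialized into
# a stream message. The column-to-field mapping mirrors the table above;
# encoding the timestamp as epoch microseconds follows the Debezium
# MicroTimestamp convention, not necessarily the transfer's wire format.
row = {
    "device_id": "iv9a94th6rzt********",
    "datetime": datetime(2022, 6, 5, 17, 27, tzinfo=timezone.utc),
    "latitude": 55.70329032,
    "longitude": 37.65472196,
    "altitude": 427.5,
    "speed": 0.0,
    "battery_voltage": 23.5,
    "cabin_temperature": 17.0,
    "fuel_level": None,
}

# Replace the datetime object with epoch microseconds, then serialize.
payload = dict(row, datetime=int(row["datetime"].timestamp() * 1_000_000))
message = json.dumps(payload)
print(message)
```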

Activate the transfer

  1. Activate the transfer and wait for its status to change to Replicating.

  2. Make sure the data from the source has been moved to the Data Streams stream:

    Management console
    AWS CLI
    1. In the management console, select Data Streams.
    2. Select the target stream from the list and open the Data viewer tab.
    3. Make sure shard-000000 now contains messages with the source table rows. To examine the contents of a message, click it.
    1. Install the AWS CLI.

    2. Configure the environment for Data Streams.

    3. Read the stream data using:

      • AWS CLI.
      • AWS SDK.
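When reading with the AWS CLI, `aws kinesis get-records` returns each message with its payload base64-encoded in the Data field. A small stdlib sketch for decoding such records (the sample response below is fabricated for illustration; a real one carries the transfer's actual payload):

```python
import base64
import json

# Sketch: records returned by `aws kinesis get-records` carry their
# payload base64-encoded in the "Data" field. The sample response below
# is fabricated for illustration only.
sample_response = {
    "Records": [
        {
            "SequenceNumber": "0",
            "Data": base64.b64encode(
                b'{"device_id": "iv9a94th6rzt********"}'
            ).decode(),
        },
    ],
}

decoded = [
    json.loads(base64.b64decode(record["Data"]))
    for record in sample_response["Records"]
]
print(decoded[0]["device_id"])
```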

Test replication

  1. Connect to the source cluster.

  2. Add a new row to the measurements table:

    INSERT INTO measurements VALUES
        ('ad02l5ck6sdt********', '2022-06-05 17:27:00', 55.70329032, 37.65472196,  427.5,    0, 23.5, 19, 45);
    
  3. Verify that the new row has appeared in the data stream.
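This check can also be scripted: collect the primary keys of the source rows and verify that each one appears among the decoded stream messages. A minimal sketch with stand-in data (in practice, fill `source_keys` from a SELECT on the measurements table and `stream_keys` from the decoded messages):

```python
# Sketch: automate the replication check by comparing primary keys.
# Both sets are stand-in data: fill source_keys from a SELECT on the
# measurements table and stream_keys from the decoded stream messages.
source_keys = {
    "iv9a94th6rzt********",
    "rhibbh3y08qm********",
    "ad02l5ck6sdt********",  # the row inserted in step 2
}
stream_keys = {
    "iv9a94th6rzt********",
    "rhibbh3y08qm********",
    "ad02l5ck6sdt********",
}

missing = source_keys - stream_keys
print("replication ok" if not missing else f"missing: {sorted(missing)}")
```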

Delete the resources you created

Note

Before deleting the created resources, deactivate the transfer.

To stop incurring charges for resources you no longer need, delete them:

  1. Delete the transfer.

  2. Delete the target endpoint.

  3. Delete the Data Streams stream.

  4. Delete the remaining resources using the same method you used to create them:

    Manually
    Terraform
    1. Delete the source endpoint.
    2. Delete the Managed Service for PostgreSQL cluster.
    3. Delete the Managed Service for YDB database.
    1. In the terminal window, go to the directory containing the infrastructure plan.

      Warning

      Make sure the directory has no Terraform manifests with the resources you want to keep. Terraform deletes all resources that were created using the manifests in the current directory.

    2. Delete resources:

      1. Run this command:

        terraform destroy
        
      2. Confirm deleting the resources and wait for the operation to complete.

      All the resources described in the Terraform manifests will be deleted.

© 2026 Direct Cursus Technology L.L.C.