Asynchronously replicating data from PostgreSQL to ClickHouse®

Written by
Yandex Cloud
Updated at May 5, 2025
  • Required paid resources
  • Getting started
  • Set up your transfer
  • Activate the transfer
  • Test the replication process
  • Select the data from ClickHouse®
  • Delete the resources you created

You can migrate a database from PostgreSQL to ClickHouse® using Yandex Data Transfer. Proceed as follows:

  1. Set up your transfer.
  2. Activate the transfer.
  3. Test the replication process.
  4. Select the data from the target.

If you no longer need the resources you created, delete them.

Required paid resources

The cost of supporting this solution includes:

  • Managed Service for PostgreSQL cluster fee: Using computing resources allocated to hosts and disk space (see Managed Service for PostgreSQL pricing).
  • Managed Service for ClickHouse® cluster fee: Using computing resources allocated to hosts (including ZooKeeper hosts) and disk space (see Managed Service for ClickHouse® pricing).
  • Fee for using public IP addresses for cluster hosts (see Virtual Private Cloud pricing).
  • Transfer fee: Using computing resources and the number of transferred data rows (see Data Transfer pricing).

Getting started

For clarity, we will create all required resources in Yandex Cloud. Set up your infrastructure:

Manually
Terraform
  1. Create a source Managed Service for PostgreSQL cluster in any applicable configuration with publicly available hosts and the following settings:

    • DB name: db1.
    • Username: pg-user.
    • Password: <source_password>.
  2. Create a Managed Service for ClickHouse® target cluster in any applicable configuration with publicly available hosts and the following settings:

    • Number of ClickHouse® hosts: At least two, which is required to enable replication in the cluster.
    • DB name: db1.
    • Username: ch-user.
    • Password: <target_password>.
  3. If you are using security groups in clusters, make sure they are set up correctly and allow connecting to the clusters:

    • Managed Service for ClickHouse®.
    • Managed Service for PostgreSQL.
  4. Grant the mdb_replication role to pg-user in the Managed Service for PostgreSQL cluster.

  1. If you do not have Terraform yet, install it.

  2. Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.

  3. Configure and initialize a provider. You do not need to create a provider configuration file manually; you can download it.

  4. Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.

  5. Download the postgresql-to-clickhouse.tf configuration file to the same working directory.

    This file describes:

    • Networks.
    • Subnets.
    • Security groups for making cluster connections.
    • Managed Service for PostgreSQL source cluster.
    • Managed Service for ClickHouse® target cluster.
    • Source endpoint.
    • Target endpoint.
    • Transfer.
  6. In the postgresql-to-clickhouse.tf file, specify the passwords of the PostgreSQL and ClickHouse® admin users.

  7. Make sure the Terraform configuration files are correct using this command:

    terraform validate
    

    If there are any errors in the configuration files, Terraform will point them out.

  8. Create the required infrastructure:

    1. Run this command to view the planned changes:

      terraform plan
      

      If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

    2. If everything looks correct, apply the changes:

      1. Run this command:

        terraform apply
        
      2. Confirm updating the resources.

      3. Wait for the operation to complete.

    All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.

Set up your transfer

  1. Connect to the Managed Service for PostgreSQL cluster.

  2. Create a table named x_tab in the db1 database and populate it with data:

    CREATE TABLE x_tab
    (
        id NUMERIC PRIMARY KEY,
        name CHAR(5)
    );
    CREATE INDEX ON x_tab (id);
    INSERT INTO x_tab (id, name) VALUES
      (40, 'User1'),
      (41, 'User2'),
      (42, 'User3'),
      (43, 'User4'),
      (44, 'User5');
    
  3. Create a transfer:

    Manually
    Terraform
    1. Create a source endpoint of the PostgreSQL type and specify the cluster connection parameters in it:

      • Installation type: Managed Service for PostgreSQL cluster.
      • Managed Service for PostgreSQL cluster: <source_PostgreSQL_cluster_name> from the drop-down list.
      • Database: db1.
      • User: pg-user.
      • Password: <source_password>.
    2. Create a target endpoint of the ClickHouse type and specify the cluster connection settings in it:

      • Connection type: Managed cluster.
      • Managed cluster: <target_ClickHouse®_cluster_name> from the drop-down list.
      • Database: db1.
      • User: ch-user.
      • Password: <target_password>.
      • Cleanup policy: DROP.
    3. Create a transfer of the Snapshot and replication type that will use the created endpoints.

    1. In the postgresql-to-clickhouse.tf file, set the transfer_enabled parameter to 1.

    2. Make sure the Terraform configuration files are correct using this command:

      terraform validate
      

      If there are any errors in the configuration files, Terraform will point them out.

    3. Create the required infrastructure:

      1. Run this command to view the planned changes:

        terraform plan
        

        If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

      2. If everything looks correct, apply the changes:

        1. Run this command:

          terraform apply
          
        2. Confirm updating the resources.

        3. Wait for the operation to complete.

Activate the transfer

  1. Activate the transfer and wait until its status switches to Replicating.

  2. To check that the transfer has moved the replicated data to the target, connect to the target Yandex Managed Service for ClickHouse® cluster and make sure that the x_tab table in db1 contains the same columns as the x_tab table in the source database, plus two timestamp columns, __data_transfer_commit_time and __data_transfer_delete_time:

    SELECT * FROM db1.x_tab WHERE id = 41;
    
    ┌─id─┬──name──┬─__data_transfer_commit_time─┬─__data_transfer_delete_time─┐
    │ 41 │  User2 │         1633417594957267000 │                           0 │
    └────┴────────┴─────────────────────────────┴─────────────────────────────┘
    
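The __data_transfer_commit_time value shown above is a Unix timestamp with nanosecond precision. As an aside (not part of the tutorial itself), a short Python sketch converts the sample value from the output above into a readable UTC time:

```python
from datetime import datetime, timezone

# __data_transfer_commit_time stores a Unix timestamp in nanoseconds.
commit_time_ns = 1633417594957267000

# Integer-divide down to whole seconds, then build an aware UTC datetime.
commit_dt = datetime.fromtimestamp(commit_time_ns // 1_000_000_000, tz=timezone.utc)

print(commit_dt.isoformat())  # 2021-10-05T07:06:34+00:00
```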

Test the replication process

  1. Connect to the source cluster.

  2. In the x_tab table of the source PostgreSQL database, delete the row with id 41 and update the row with id 42:

    DELETE FROM db1.public.x_tab WHERE id = 41;
    UPDATE db1.public.x_tab SET name = 'Key3' WHERE id = 42;
    
  3. Check the changes in the x_tab table on the ClickHouse® target:

    SELECT * FROM db1.x_tab WHERE (id >= 41) AND (id <= 42);
    
    ┌─id─┬──name──┬─__data_transfer_commit_time─┬─__data_transfer_delete_time─┐
    │ 41 │  User2 │         1633417594957267000 │         1675417594957267000 │
    │ 42 │  Key3  │         1675417594957267000 │                           0 │
    │ 42 │  User3 │         1633417594957268000 │         1675417594957267000 │
    └────┴────────┴─────────────────────────────┴─────────────────────────────┘
    

Select the data from ClickHouse®

With replication enabled, the ClickHouse® target stores transferred tables using the ReplicatedReplacingMergeTree and ReplacingMergeTree engines. The following columns are added automatically to each table:

  • __data_transfer_commit_time: Time when the row was updated to this value, in TIMESTAMP format.

  • __data_transfer_delete_time: Row deletion time, in TIMESTAMP format, if the row was deleted in the source. If the row was not deleted, the value is set to 0.

    The __data_transfer_commit_time column is required for the ReplicatedReplacingMergeTree engine to work: it serves as the version column. If a record is deleted or updated, a new row with a new value in this column is inserted, so a query by a single primary key may return multiple records with different __data_transfer_commit_time values.

While the transfer has the Replicating status, data can still be added to or deleted from the source. To ensure the standard behavior of SQL commands, where a primary key points to a single record, add a clause filtering by __data_transfer_delete_time to queries against tables transferred to ClickHouse®. Here is an example of a query to the x_tab table:

SELECT * FROM x_tab FINAL
WHERE __data_transfer_delete_time = 0;

To simplify the SELECT queries, create a view with filtering by __data_transfer_delete_time and use it for querying. Here is an example of a query to the x_tab table:

CREATE VIEW x_tab_view AS SELECT * FROM x_tab FINAL
WHERE __data_transfer_delete_time = 0;
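To see why the delete-time filter matters, here is a small Python sketch (an illustration, not part of the tutorial) that mimics what FINAL does for a ReplacingMergeTree table on the rows from the replication test above: keep only the version with the greatest __data_transfer_commit_time per primary key, then drop rows whose __data_transfer_delete_time is nonzero:

```python
# Rows as they appear on the ClickHouse® target before merging:
# (id, name, __data_transfer_commit_time, __data_transfer_delete_time)
rows = [
    (41, "User2", 1633417594957267000, 1675417594957267000),  # deleted in the source
    (42, "Key3",  1675417594957267000, 0),                    # updated value
    (42, "User3", 1633417594957268000, 1675417594957267000),  # superseded version
]

# FINAL keeps, for each primary key, the row with the greatest version
# (here, the greatest __data_transfer_commit_time).
latest = {}
for id_, name, commit_time, delete_time in rows:
    if id_ not in latest or commit_time > latest[id_][1]:
        latest[id_] = (name, commit_time, delete_time)

# Filtering by __data_transfer_delete_time = 0 drops rows deleted in the source.
visible = {id_: name
           for id_, (name, commit_time, delete_time) in latest.items()
           if delete_time == 0}

print(visible)  # {42: 'Key3'}
```

As in the SQL query above, only the current, non-deleted version of each row survives: id 41 was deleted in the source, and id 42 resolves to its latest value.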

Note

Using the FINAL keyword in queries makes them much less efficient. Avoid it when working with large tables whenever possible.

Delete the resources you created

Some resources are not free of charge. To avoid paying for them, delete the resources you no longer need:

  • Make sure the transfer has the Completed status and delete it.

  • Delete the endpoints and clusters:

    Manually
    Terraform
    • Both the source and target endpoints.
    • Managed Service for PostgreSQL.
    • Managed Service for ClickHouse®.
    1. In the terminal window, go to the directory containing the infrastructure plan.

      Warning

      Make sure the directory has no Terraform manifests with the resources you want to keep. Terraform deletes all resources that were created using the manifests in the current directory.

    2. Delete resources:

      1. Run this command:

        terraform destroy
        
      2. Confirm deleting the resources and wait for the operation to complete.

      All the resources described in the Terraform manifests will be deleted.

ClickHouse® is a registered trademark of ClickHouse, Inc.

Yandex project
© 2025 Yandex.Cloud LLC