Loading data from PostgreSQL to a ClickHouse® data mart

Written by
Yandex Cloud
Updated at January 15, 2026
  • Required paid resources
  • Getting started
  • Set up and activate the transfer
  • Test the replication process
  • Query data in ClickHouse®
  • Delete the resources you created

You can migrate a database from PostgreSQL to ClickHouse® using Yandex Data Transfer. Proceed as follows:

  1. Set up and activate the transfer.
  2. Test the replication process.
  3. Query data in the target system.

If you no longer need the resources you created, delete them.

Required paid resources

  • Managed Service for PostgreSQL cluster: Computing resources allocated to hosts, storage and backup size (see Managed Service for PostgreSQL pricing).
  • Managed Service for ClickHouse® cluster: Computing resources allocated to hosts, storage and backup size (see Managed Service for ClickHouse® pricing).
  • Public IP addresses if public access is enabled for cluster hosts (see Virtual Private Cloud pricing).
  • Each transfer: Use of computing resources and number of transferred data rows (see Data Transfer pricing).

Getting started

In our example, we will create all required resources in Yandex Cloud. Set up the infrastructure:

Manually
Terraform
  1. Create a source Managed Service for PostgreSQL cluster using any suitable configuration with publicly accessible hosts. Specify the following settings:

    • DB name: db1.
    • Username: pg-user.
    • Password: <source_password>.
  2. Create a Managed Service for ClickHouse® target cluster using any suitable configuration with publicly accessible hosts. Specify the following settings:

    • Number of ClickHouse® hosts: Minimum of 2 to enable replication within the cluster.
    • DB name: db1.
    • Username: ch-user.
    • Password: <target_password>.
  3. If using security groups, make sure they are configured correctly and allow inbound connections to the clusters.

    • Managed Service for ClickHouse®.
    • Managed Service for PostgreSQL.
  4. Grant the mdb_replication role to pg-user in the Managed Service for PostgreSQL cluster.
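
    Optionally, you can also check that logical replication is available on the source, since the Snapshot and replication transfer type relies on it. On Managed Service for PostgreSQL clusters, wal_level is normally preset to logical; the following query, run after connecting to db1 as pg-user, is an optional check rather than a required setup step:

    -- Should return "logical"; replication-based transfers depend on it.
    SHOW wal_level;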

  1. If you do not have Terraform yet, install it.

  2. Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.

  3. Configure and initialize a provider. There is no need to create a provider configuration file manually; you can download one instead.

  4. Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.

  5. Download the postgresql-to-clickhouse.tf configuration file to your current working directory.

    This file describes:

    • Networks.
    • Subnets.
    • Security groups for cluster connectivity.
    • Managed Service for PostgreSQL source cluster.
    • Managed Service for ClickHouse® target cluster.
    • Source endpoint.
    • Target endpoint.
    • Transfer.
  6. In the postgresql-to-clickhouse.tf file, specify admin passwords for PostgreSQL and ClickHouse®.

  7. Validate your Terraform configuration files using this command:

    terraform validate
    

    Terraform will display any configuration errors detected in your files.

  8. Create the required infrastructure:

    1. Run this command to view the planned changes:

      terraform plan
      

      If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

    2. If everything looks correct, apply the changes:

      1. Run this command:

        terraform apply
        
      2. Confirm updating the resources.

      3. Wait for the operation to complete.

    All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.

Set up and activate the transfer

  1. Connect to the Managed Service for PostgreSQL cluster.

  2. In the db1 database, create a table named x_tab and populate it with data:

    CREATE TABLE x_tab
    (
        id NUMERIC PRIMARY KEY,
        name CHAR(5)
    );
    CREATE INDEX ON x_tab (id);
    INSERT INTO x_tab (id, name) VALUES
      (40, 'User1'),
      (41, 'User2'),
      (42, 'User3'),
      (43, 'User4'),
      (44, 'User5');
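
    To double-check the initial dataset before creating the transfer, you can read the table back. This optional query assumes you are still connected to db1:

    -- Should return the five rows inserted above, in id order.
    SELECT id, name FROM x_tab ORDER BY id;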
    
  3. Create a transfer:

    Manually
    Terraform
    1. Create a PostgreSQL-type source endpoint and configure it using the following settings:

      • Installation type: Managed Service for PostgreSQL cluster.
      • Managed Service for PostgreSQL cluster: Select <source_PostgreSQL_cluster_name> from the drop-down list.
      • Database: db1.
      • User: pg-user.
      • Password: <user_password>.
    2. Create a ClickHouse-type target endpoint and specify its cluster connection settings:

      • Connection type: Managed cluster.
      • Managed cluster: Select <target_ClickHouse®_cluster_name> from the drop-down list.
      • Database: db1.
      • User: ch-user.
      • Password: <user_password>.
      • Cleanup policy: DROP.
    3. Create a Snapshot and replication-type transfer, configure it to use the previously created endpoints, then activate it.

    1. In the postgresql-to-clickhouse.tf file, set the transfer_enabled variable to 1.

    2. Validate your Terraform configuration files using this command:

      terraform validate
      

      Terraform will display any configuration errors detected in your files.

    3. Create the required infrastructure:

      1. Run this command to view the planned changes:

        terraform plan
        

        If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

      2. If everything looks correct, apply the changes:

        1. Run this command:

          terraform apply
          
        2. Confirm updating the resources.

        3. Wait for the operation to complete.

    All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.

    The transfer will activate automatically upon creation.

Test the replication process

  1. Wait for the transfer status to change to Replicating.

  2. To verify that the data has been replicated to the target, connect to the target Managed Service for ClickHouse® cluster. Make sure the x_tab table in db1 contains all the columns of the source table, plus the __data_transfer_commit_time and __data_transfer_delete_time timestamp columns:

    SELECT * FROM db1.x_tab WHERE id = 41;
    
    ┌─id─┬─name──┬─__data_transfer_commit_time─┬─__data_transfer_delete_time─┐
    │ 41 │ User2 │         1633417594957267000 │                           0 │
    └────┴───────┴─────────────────────────────┴─────────────────────────────┘
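
    You can also inspect the column list on the ClickHouse® side to confirm that the service columns were added. This optional check uses the standard ClickHouse® DESCRIBE statement:

    -- Lists the source columns plus __data_transfer_commit_time
    -- and __data_transfer_delete_time.
    DESCRIBE TABLE db1.x_tab;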
    
  3. Connect to the source cluster.

  4. In the x_tab table of the source PostgreSQL database, delete the row with ID = 41 and update the row with ID = 42:

    DELETE FROM db1.public.x_tab WHERE id = 41;
    UPDATE db1.public.x_tab SET name = 'Key3' WHERE id = 42;
    
  5. Make sure the changes have been applied to the x_tab table on the ClickHouse® target:

    SELECT * FROM db1.x_tab WHERE (id >= 41) AND (id <= 42);
    
    ┌─id─┬─name──┬─__data_transfer_commit_time─┬─__data_transfer_delete_time─┐
    │ 41 │ User2 │         1633417594957267000 │         1675417594957267000 │
    │ 42 │ Key3  │         1675417594957267000 │                           0 │
    │ 42 │ User3 │         1633417594957268000 │         1675417594957267000 │
    └────┴───────┴─────────────────────────────┴─────────────────────────────┘
    

Query data in ClickHouse®

ClickHouse® targets store transferred tables using the ReplacingMergeTree engine, or ReplicatedReplacingMergeTree when in-cluster replication is enabled. The following columns are automatically added to each table:

  • __data_transfer_commit_time: Time the row was updated to this value, in TIMESTAMP format.

  • __data_transfer_delete_time: Time the row was deleted from the source, in TIMESTAMP format. A value of 0 indicates that the row is still active.

    The __data_transfer_commit_time column is essential for the ReplicatedReplacingMergeTree engine: when a record is deleted or updated, a new row with a new value in this column is inserted, so querying by the primary key alone returns several records with different __data_transfer_commit_time values.

Data can be added to or deleted from the source while the transfer is in the Replicating status. To get standard SQL behavior, where a primary key query returns a single record, filter by the __data_transfer_delete_time column when querying tables transferred to ClickHouse®. For example, to query the x_tab table, use the following syntax:

SELECT * FROM x_tab FINAL
WHERE __data_transfer_delete_time = 0;

To simplify the SELECT queries, create a view filtering rows by __data_transfer_delete_time. Use this view for all your queries. For example, to query the x_tab table, use the following syntax:

CREATE VIEW x_tab_view AS SELECT * FROM x_tab FINAL
WHERE __data_transfer_delete_time = 0;
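
Once the view exists, you can query it like a regular table. For example, this query (shown here as an illustration) returns at most one active row for id 42:

-- Returns the current version of the row with id = 42,
-- or no rows if that id has been deleted on the source.
SELECT * FROM x_tab_view WHERE id = 42;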

Note

Using the FINAL keyword reduces query performance, so avoid it whenever possible, especially on large tables.
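
If FINAL is too expensive for your workload, a common ClickHouse® pattern is to pick the latest version of each row with the argMax aggregate function instead. The sketch below is based on the x_tab layout from this tutorial and is not part of Data Transfer itself:

-- For each id, take the name from the row with the highest commit time,
-- and keep only keys whose latest version has not been deleted.
SELECT
    id,
    argMax(name, __data_transfer_commit_time) AS name
FROM x_tab
GROUP BY id
HAVING argMax(__data_transfer_delete_time, __data_transfer_commit_time) = 0;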

Delete the resources you created

Delete the resources you no longer need to avoid being charged for them:

  1. Make sure the transfer is in the Completed status, then delete the transfer.

  2. Delete the other resources using the same method you used to create them:

    Manually
    Terraform
    1. Delete both the source and target endpoints.
    2. Delete the Managed Service for PostgreSQL cluster.
    3. Delete the Managed Service for ClickHouse® cluster.
    1. In the terminal window, go to the directory containing the infrastructure plan.

      Warning

      Make sure the directory has no Terraform manifests with the resources you want to keep. Terraform deletes all resources that were created using the manifests in the current directory.

    2. Delete resources:

      1. Run this command:

        terraform destroy
        
      2. Confirm deleting the resources and wait for the operation to complete.

      All the resources described in the Terraform manifests will be deleted.

ClickHouse® is a registered trademark of ClickHouse, Inc.
