Transferring data to a ClickHouse® target endpoint

Written by
Yandex Cloud
Improved by
Alexey K.
Updated on April 24, 2025
  • Scenarios for transferring data to ClickHouse®
  • Configuring the data source
  • Preparing the target database
  • Configuring the ClickHouse® target endpoint
    • Managed Service for ClickHouse® cluster
    • Custom installation
    • Additional settings
  • Tips for configuring endpoints
  • Troubleshooting data transfer issues
    • New tables cannot be added
    • Data is not transferred
    • Unsupported date range
    • Lack of resources or increasing data latency
    • Data blocks limit exceeded

Yandex Data Transfer enables you to migrate data to a ClickHouse® database and implement various data transfer, processing, and transformation scenarios. To implement a transfer:

  1. Explore possible data transfer scenarios.
  2. Configure one of the supported data sources.
  3. Prepare the ClickHouse® database for the transfer.
  4. Configure the target endpoint in Yandex Data Transfer.
  5. Create a transfer and start it.
  6. Perform required operations with the database and control the transfer.
  7. In case of any issues, use ready-made solutions to resolve them.

Scenarios for transferring data to ClickHouse®

  1. Migration: Moving data from one storage to another. This often means moving a database from an obsolete on-premises installation to a managed cloud one.

    • Migrating a ClickHouse® cluster.
    • Redistributing data across shards.
    • Copying data from Managed Service for OpenSearch to Managed Service for ClickHouse® using Yandex Data Transfer.
  2. Data delivery is the process of delivering arbitrary data to target storage. It includes retrieving data from a queue, deserializing it, and transforming it to the target storage format.

    • Delivering data from Apache Kafka® to ClickHouse®.
    • Delivering data from YDS to ClickHouse®.
  3. Uploading data to data marts is a process of transferring prepared data to storage for subsequent visualization.

    • Loading Greenplum® data to ClickHouse®.

    • Loading MySQL® data to ClickHouse®.

    • Loading Yandex Metrica data to ClickHouse®.

    • Loading Yandex Direct data to ClickHouse®.

    • Loading PostgreSQL data to ClickHouse®.

    • Loading data from Object Storage to ClickHouse®.

    • Loading data from YDB to the ClickHouse® data mart.

For a detailed description of possible Yandex Data Transfer scenarios, see Tutorials.

Configuring the data source

Configure one of the supported data sources:

  • PostgreSQL
  • MySQL®
  • ClickHouse®
  • Greenplum®
  • Apache Kafka®
  • Airbyte®
  • Yandex Metrica
  • YDS
  • Yandex Object Storage
  • Oracle
  • Elasticsearch
  • OpenSearch

For a complete list of supported sources and targets in Yandex Data Transfer, see Available transfers.

Note

ClickHouse® has date range restrictions. If the source database contains unsupported dates, this may result in an error and stop the transfer.

Preparing the target database

Managed Service for ClickHouse®
ClickHouse®
  1. Create a target database.

    If you need to transfer multiple databases, create a separate transfer for each one of them.

  2. Create a user with access to the target database.

    Once started, the transfer will connect to the target on behalf of this user.

  3. If user management via SQL is enabled in the cluster, grant the new user the following permissions:

    GRANT CLUSTER ON *.* TO <username>
    GRANT SELECT, INSERT, ALTER DELETE, CREATE TABLE, CREATE VIEW, DROP TABLE, TRUNCATE, dictGet ON <DB_name>.* TO <username>
    GRANT SELECT(macro, substitution) ON system.macros TO <username>
    

    If user management via SQL is disabled, permissions are assigned via the management console and CLI.

  4. Create a security group and configure it.

  5. Assign the created security group to the Managed Service for ClickHouse® cluster.

  1. If you are not planning to use Cloud Interconnect or VPN to connect to an external cluster, make the cluster accessible from the internet from the IP addresses used by Data Transfer.

    For details on linking your network up with external resources, see this concept.

  2. Create a target database. Its name must be the same as the source database name. If you need to transfer multiple databases, create a separate transfer for each one of them.

  3. Create a user with access to the target database.

    Once started, the transfer will connect to the target on behalf of this user.

  4. Grant the new user the following permissions:

    GRANT CLUSTER ON *.* TO <username>
    GRANT SELECT, INSERT, ALTER DELETE, CREATE TABLE, CREATE VIEW, DROP TABLE, TRUNCATE, dictGet ON <DB_name>.* TO <username>
    GRANT SELECT(macro, substitution) ON system.macros TO <username>
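
Taken together, the preparation steps above can be sketched in SQL. This is a hedged example: the db1 database, transfer_user account, and password are hypothetical placeholders, not values prescribed by Data Transfer.

```sql
-- Hypothetical names: db1 is the target database, transfer_user is the account
-- the transfer will connect as. Replace both with your own values.
CREATE DATABASE IF NOT EXISTS db1;
CREATE USER transfer_user IDENTIFIED WITH sha256_password BY '<strong_password>';

-- The permissions listed above:
GRANT CLUSTER ON *.* TO transfer_user;
GRANT SELECT, INSERT, ALTER DELETE, CREATE TABLE, CREATE VIEW, DROP TABLE,
    TRUNCATE, dictGet ON db1.* TO transfer_user;
GRANT SELECT(macro, substitution) ON system.macros TO transfer_user;
```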
    

Configuring the ClickHouse® target endpoint

When creating or updating an endpoint, you can define:

  • Yandex Managed Service for ClickHouse® cluster connection or custom installation settings, including those based on Yandex Compute Cloud VMs. These are required parameters.
  • Additional parameters.

See also tips for configuring an endpoint when delivering data to ClickHouse® from queues.

Managed Service for ClickHouse® cluster

Warning

To create or edit an endpoint of a managed database, you need the managed-clickhouse.viewer role or the primitive viewer role for the folder hosting the cluster of this managed database.

Connect to the database by specifying the cluster ID in Yandex Cloud.

Management console
CLI
Terraform
API
  • Managed cluster: From the list, select the name of the cluster you want to connect to.

  • Shard group: Specify the shard group to transfer the data to. If this value is not set, the data will go to all shards.

  • User: Specify the username that Data Transfer will use to connect to the database.

  • Password: Enter the user's password to the database.

  • Database: Specify the name of the database in the selected cluster.

  • Security groups: Select the cloud network to host the endpoint and security groups for network traffic. This will allow you to apply the specified security group rules to the VMs and clusters in the selected network without changing their settings. For more information, see Networking in Yandex Data Transfer.

    Make sure the selected security groups are configured.

  • Endpoint type: clickhouse-target.
  • --cluster-id: ID of the cluster you need to connect to.

  • --cluster-name: Shard group to transfer the data to. If this parameter is not set, data will go to all shards.

  • --database: Database name.

  • --user: Username that Data Transfer will use to connect to the database.

  • --security-group: Security groups for network traffic, whose rules will apply to VMs and clusters without changing their settings. For more information, see Networking in Yandex Data Transfer.

    Make sure the specified security groups are configured.

  • To set a user password to access the DB, use one of the following parameters:

    • --raw-password: Password as text.

    • --password-file: The path to the password file.

  • Endpoint type: clickhouse_target.
  • connection.connection_options.mdb_cluster_id: ID of the cluster to connect to.

  • clickhouse_cluster_name: Shard group to transfer the data to. If this parameter is not set, data will go to all shards.

  • subnet_id: ID of the subnet the cluster is in. The transfer will use this subnet to access the cluster. If the ID is not specified, the cluster must be accessible from the internet.

    If the value in this field is specified for both endpoints, both subnets must be hosted in the same availability zone.

  • security_groups: Security groups for network traffic.

    Security group rules apply to a transfer. They allow opening up network access from the transfer VM to the cluster. For more information, see Networking in Yandex Data Transfer.

    Security groups and the subnet_id subnet, if the latter is specified, must belong to the same network as the cluster.

    Note

    In Terraform, it is not required to specify a network for security groups.

    Make sure the specified security groups are configured.

  • connection.connection_options.database: Database name.

  • connection.connection_options.user: Username that Data Transfer will use to connect to the database.

  • connection.connection_options.password.raw: Password in text form.

Here is the configuration file example:

resource "yandex_datatransfer_endpoint" "<endpoint_name_in_Terraform>" {
  name = "<endpoint_name>"
  settings {
    clickhouse_target {
      clickhouse_cluster_name="<shard_group>"
      security_groups = ["<list_of_security_group_IDs>"]
      subnet_id       = "<subnet_ID>"
      connection {
        connection_options {
          mdb_cluster_id = "<cluster_ID>"
          database       = "<name_of_database_to_migrate>"
          user           = "<username_for_connection>"
          password {
            raw = "<user_password>"
          }
        }
      }
      <additional_endpoint_settings>
    }
  }
}

For more information, see the Terraform provider documentation.

  • securityGroups: Security groups for network traffic, whose rules will apply to VMs and clusters without changing their settings. For more information, see Networking in Yandex Data Transfer.

    Make sure the specified security groups are configured.

  • mdbClusterId: ID of the cluster you need to connect to.

  • clickhouseClusterName: Shard group to transfer the data to. If this parameter is not set, the data will go to all shards.

  • database: Database name.

  • user: Username that Data Transfer will use to connect to the database.

  • password.raw: Database user password (in text form).

Custom installation

Connecting to the database with explicitly specified network addresses and ports.

Management console
CLI
Terraform
API
  • Shards

    • Shard: Specify a string that will allow the service to distinguish shards from each other. If sharding is disabled in your custom installation, specify any name.
    • Hosts: Specify FQDNs or IP addresses of the hosts in the shard.
  • HTTP port: Set the number of the port that Data Transfer will use for the connection.

    When connecting via the HTTP port:

    • For optional fields, default values are used (if any).
    • Recording complex types is supported (such as array and tuple).
  • Native port: Set the number of the native port that Data Transfer will use for the connection.

  • SSL: Enable if the cluster supports only encrypted connections.

  • CA certificate: If the transmitted data has to be encrypted, e.g., to meet PCI DSS requirements, upload the certificate file or add its contents as text.

    Warning

    If no certificate is added, the transfer may fail with an error.

  • Subnet ID: Select or create a subnet in the required availability zone. The transfer will use this subnet to access the cluster.

    If the value in this field is specified for both endpoints, both subnets must be hosted in the same availability zone.

  • User: Specify the username that Data Transfer will use to connect to the database.

  • Password: Enter the user's password to the database.

  • Database: Specify the name of the database in the selected cluster.

  • Security groups: Select the cloud network to host the endpoint and security groups for network traffic.

    Thus, you will be able to apply the specified security group rules to the VMs and clusters in the selected network without changing the settings of these VMs and clusters. For more information, see Networking in Yandex Data Transfer.

  • Endpoint type: clickhouse-target.
  • --cluster-name: Name of the cluster to transfer the data to.

  • --host: List of IP addresses or FQDNs of hosts to connect to, in {shard_name}:{host_IP_address_or_FQDN} format. If sharding is disabled in your custom installation, specify any shard name.

  • --http-port: Port number Data Transfer will use for HTTP connections.

  • --native-port: Port number Data Transfer will use for connections to the ClickHouse® native interface.

  • --ca-certificate: CA certificate if the data to transfer must be encrypted to comply with PCI DSS requirements.

    Warning

    If no certificate is added, the transfer may fail with an error.

  • --subnet-id: ID of the subnet the host is in. The transfer will use that subnet to access the host.

  • --database: Database name.

  • --user: Username that Data Transfer will use to connect to the database.

  • --security-group: Security groups for network traffic, whose rules will apply to VMs and clusters without changing their settings. For more information, see Networking in Yandex Data Transfer.

  • To set a user password to access the DB, use one of the following parameters:

    • --raw-password: Password as text.

    • --password-file: The path to the password file.

  • Endpoint type: clickhouse_target.
  • Shard settings:

    • connection.connection_options.on_premise.shards.name: Shard name that the service will use to distinguish shards from each other. If sharding is disabled in your custom installation, specify any name.
    • connection.connection_options.on_premise.shards.hosts: specify the FQDNs or IP addresses of the hosts in the shard.
  • connection.connection_options.on_premise.http_port: Port number that Data Transfer will use for HTTP connections.

  • connection.connection_options.on_premise.native_port: Port number that Data Transfer will use for connections to the ClickHouse® native interface.

  • connection.connection_options.on_premise.tls_mode.enabled.ca_certificate: CA certificate if the data to transfer must be encrypted, e.g., to comply with the PCI DSS requirements.

    Warning

    If no certificate is added, the transfer may fail with an error.

  • clickhouse_cluster_name: Name of the cluster to transfer the data to.
  • subnet_id: ID of the subnet the cluster is in. The transfer will use this subnet to access the cluster. If the ID is not specified, the cluster must be accessible from the internet.

    If the value in this field is specified for both endpoints, both subnets must be hosted in the same availability zone.

  • security_groups: Security groups for network traffic.

    Security group rules apply to a transfer. They allow opening up network access from the transfer VM to the VM with the database. For more information, see Networking in Yandex Data Transfer.

    Security groups must belong to the same network as the subnet_id subnet, if the latter is specified.

    Note

    In Terraform, it is not required to specify a network for security groups.

  • connection.connection_options.database: Database name.

  • connection.connection_options.user: Username that Data Transfer will use to connect to the database.

  • connection.connection_options.password.raw: Password in text form.

Here is the configuration file example:

resource "yandex_datatransfer_endpoint" "<endpoint_name_in_Terraform>" {
  name = "<endpoint_name>"
  settings {
    clickhouse_target {
      clickhouse_cluster_name="<cluster_name>"
      security_groups = ["<list_of_security_group_IDs>"]
      subnet_id       = "<subnet_ID>"
      connection {
        connection_options {
          on_premise {
            http_port   = "<HTTP_connection_port>"
            native_port = "<port_for_native_interface_connection>"
            shards {
              name  = "<shard_name>"
              hosts = ["<list_of_IP_addresses_and_FQDNs_of_shard_hosts>"]
            }
            tls_mode {
              enabled {
                ca_certificate = "<certificate_in_PEM_format>"
              }
            }
          }
          database = "<name_of_database_to_migrate>"
          user     = "<username_for_connection>"
          password {
            raw = "<user_password>"
          }
        }
      }
      <additional_endpoint_settings>
    }
  }
}

For more information, see the Terraform provider documentation.

  • onPremise: Database connection parameters:
    • shards: Shard settings:

      • name: Shard name the service will use to distinguish shards one from another. If sharding is disabled in your custom installation, specify any name.
      • hosts: Specify FQDNs or IP addresses of the hosts in the shard.
    • httpPort: Port number Data Transfer will use for HTTP connections.

    • nativePort: Port number Data Transfer will use for connections to the ClickHouse® native interface.

    • tlsMode: Parameters for encrypting the data to transfer, if required, e.g., for compliance with the PCI DSS requirements.

      • disabled: Disabled.
      • enabled: Enabled.
        • caCertificate: CA certificate.

          Warning

          If no certificate is added, the transfer may fail with an error.

    • subnetId: ID of the subnet the host is in. The transfer will use that subnet to access the host.

  • clickhouseClusterName: Name of the cluster to transfer the data to.
  • securityGroups: Security groups for network traffic, whose rules will apply to VMs and clusters without changing their settings. For more information, see Networking in Yandex Data Transfer.

  • database: Database name.

  • user: Username that Data Transfer will use to connect to the database.

  • password.raw: Database user password (in text form).

Additional settings

Management console
CLI
Terraform
API
  • Cleanup policy: Select a way to clean up data in the target database before the transfer:

    • Don't cleanup: Select this option only for replication without data copying.

    • Drop: Completely delete the tables included in the transfer (default).

      Use this option to always transfer the latest version of the table schema to the target database from the source whenever the transfer is activated.

    • Truncate: Delete only the data from the tables included in the transfer but keep the schema.

      Use this option if the schema in the target database differs from the one that would have been transferred from the source during the transfer.

  • Sharding settings: Specify the settings for sharding:

    • No sharding: No sharding is used.

    • Sharding by column value: Name of the table column to shard the data by. Uniform distribution between shards will depend on the hash of this column value. Specify the name of the column to be sharded in the appropriate field.

      For sharding by specific column values, specify them in the Mapping field. This field defines the mapping between the column and shard index values (the sequential number of the shard in the name-sorted list of shards), to enable sharding by specific data values.

    • Sharding by transfer ID: Data will be distributed between shards based on the transfer ID value. The transfer will ignore the Mapping setting and will only shard the data based on the transfer ID.

    • Uniform random sharding: Data will be randomly distributed between shards. Each shard will contain approximately the same amount of data.

  • Rename tables: Specify the settings for renaming tables during a transfer, if required.

  • Flush interval: Specify the delay with which the data should arrive at the target cluster. Increase the value in this field if ClickHouse® fails to merge data parts.

  • --alt-name: Rules for renaming the source database tables when transferring them to the target database. The values are specified in <source_table_name>:<target_table_name> format.

  • Data sharding settings:

    • --shard-by-column-hash: Name of the table column to shard the data by. Uniform distribution between shards will depend on the hash of this column value.

    • --custom-sharding-column-name: Name of the table column to shard the data by. Data sharding is based on the column values specified by the --custom-sharding-mapping-string setting.

    • --custom-sharding-mapping-string: Mapping of the values from the column specified in the --custom-sharding-column-name setting and shards. The setting values are specified in <column_value>:<shard_name> format.

    • --shard-by-transfer-id: Data will be distributed between shards based on the transfer ID value. The parameter contains no value.

    You can only specify one of the sharding options:

    • --shard-by-column-hash
    • --custom-sharding-column-name and --custom-sharding-mapping-string
    • --shard-by-transfer-id
  • cleanup_policy: Way to clean up data in the target database before the transfer:

    • CLICKHOUSE_CLEANUP_POLICY_DISABLED: Do not clean up (default).

      Select this option only for replication without data copying.

    • CLICKHOUSE_CLEANUP_POLICY_DROP: Completely delete the tables included in the transfer.

      Use this option to always transfer the latest version of the table schema to the target database from the source whenever the transfer is activated.

    • CLICKHOUSE_CLEANUP_POLICY_TRUNCATE: Delete only the data from the tables included in the transfer but keep the schema.

      Use this option if the schema in the target database differs from the one that would have been transferred from the source during the transfer.

  • alt_names: Rules for renaming the source database tables when transferring them to the target database.

    • from_name: Source table name.
    • to_name: Target table name.
  • Data sharding settings:

    • sharding.column_value_hash.column_name: Name of the table column to shard the data by. Uniform distribution between shards will depend on the hash of this column value.

    • sharding.transfer_id: Data is distributed between shards based on the transfer ID value. The transfer_id section contains no parameters.

    • sharding.custom_mapping: Sharding by column value:

      • column_name: Name of the table column to shard the data by.

      • mapping: Mapping of column values and shards:

        • column_value.string_value: Column value.
        • shard_name: Shard name.
    • sharding.round_robin: Data will be randomly distributed between shards. Each shard will contain approximately the same amount of data. The round_robin section contains no parameters.

    You can only specify one of the sharding options: sharding.column_value_hash.column_name, sharding.transfer_id, sharding.custom_mapping, or sharding.round_robin. If no sharding option is specified, all data will be transferred to a single shard.

  • altNames: Rules for renaming the source database tables when transferring them to the target database.

    • fromName: Source table name.
    • toName: Target table name.
  • cleanupPolicy: Way to clean up data in the target database before the transfer:

    • CLICKHOUSE_CLEANUP_POLICY_DISABLED: Do not clean up (default).

      Select this option only for replication without data copying.

    • CLICKHOUSE_CLEANUP_POLICY_DROP: Completely delete the tables included in the transfer.

      Use this option to always transfer the latest version of the table schema to the target database from the source whenever the transfer is activated.

    • CLICKHOUSE_CLEANUP_POLICY_TRUNCATE: Delete only the data from the tables included in the transfer but keep the schema.

      Use this option if the schema in the target database differs from the one that would have been transferred from the source during the transfer.

  • sharding: Settings for data sharding. You can only specify one of the sharding options:

    • columnValueHash.columnName: Name of the table column to shard the data by. Uniform distribution between shards will depend on the hash of this column value.

    • customMapping: Sharding by column value:

      • columnName: Name of the table column to shard the data by.

      • mapping: Mapping of column values and shards:

        • columnValue.stringValue: Column value.
        • shardName: Shard name.
    • transferId: Data will be distributed between shards based on the transfer ID value. The parameter contains no value.

    • roundRobin: Data will be randomly distributed between shards. Each shard will contain approximately the same amount of data. The parameter contains no value.

    If no sharding option is specified, all data will be transferred to a single shard.

After configuring the data source and target, create and start the transfer.

Tips for configuring endpoints

To accelerate the delivery of large volumes of data to ClickHouse® from queues associated with Data Streams or Managed Service for Apache Kafka®, configure endpoints as follows:

Management console
CLI
Terraform
API
  • If the target ClickHouse® cluster has sharding enabled and the data is migrated into a sharded table, write the data into an underlying table based on the ReplicatedMergeTree engine, not a distributed table (Distributed engine). In the target, select the migrated data from the distributed table. To redefine the write table, specify it in the target settings: Rename tables → Target table name.
  • If in the source you selected JSON in Conversion rules → Data format, then you should specify UTF-8 instead of STRING for string types in the data schema.
  • If you select Add a column for missing keys, your data transfers may slow down.
  • If you need to migrate multiple topics, in the Rename tables target setting, specify the same ClickHouse® table name for all topic names of the source.
  • If the target ClickHouse® cluster has sharding enabled and the data is migrated into a sharded table, write the data into an underlying table based on the ReplicatedMergeTree engine, not a distributed table (Distributed engine). In the target, select the migrated data from the distributed table. To redefine the write table, specify it in the --alt-name setting for the target.
  • If you need to migrate multiple topics, in the --alt-name attribute of the target endpoint, specify the same target ClickHouse® table name for all topics of the source.
  • If the target ClickHouse® cluster has sharding enabled and the data is migrated into a sharded table, write the data into an underlying table based on the ReplicatedMergeTree engine, not a distributed table (Distributed engine). In the target, select the migrated data from the distributed table. To redefine the write table, specify it in the alt_names.to_name setting for the target.
  • If in the source you selected JSON in parser.json_parser:
    • You should specify UTF-8 instead of STRING for string types in the parser.json_parser.data_schema data schema.
    • The parser.json_parser.add_rest_column=true attribute may slow down your transfer.
  • If you need to migrate multiple topics, in the alt_names attribute of the target endpoint, specify the same ClickHouse® table name in alt_names.to_name for all topics in alt_names.from_name.
  • If the target ClickHouse® cluster has sharding enabled and the data is migrated into a sharded table, write the data into an underlying table based on the ReplicatedMergeTree engine, not a distributed table (Distributed engine). In the target, select the migrated data from the distributed table. To redefine the write table, specify it in the altNames.toName setting for the target.
  • If in the source you selected JSON in parser.jsonParser:
    • You should specify UTF-8 instead of STRING for string types in the parser.jsonParser.dataSchema data schema.
    • The parser.jsonParser.addRestColumn=true parameter may slow down your transfer.
  • If you need to migrate multiple topics, in the altNames parameter of the target endpoint, specify the same ClickHouse® table name in altNames.toName for all topics in altNames.fromName.
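
The sharded-cluster tip above can be illustrated with a table pair like the following. This is a hedged sketch: the db1 database, events and events_local tables, columns, and sharding key are hypothetical, and the ZooKeeper path and macros follow common ClickHouse® conventions rather than anything Data Transfer requires.

```sql
-- Underlying replicated table: point the transfer at this one
-- (Rename tables → Target table name, or alt_names.to_name).
CREATE TABLE db1.events_local ON CLUSTER '{cluster}'
(
    id      UInt64,
    payload String
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/events_local', '{replica}')
ORDER BY id;

-- Distributed table: read the migrated data from this one on the target.
CREATE TABLE db1.events ON CLUSTER '{cluster}' AS db1.events_local
ENGINE = Distributed('{cluster}', 'db1', 'events_local', cityHash64(id));
```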

Troubleshooting data transfer issues

  • New tables cannot be added.
  • Data is not transferred.
  • Unsupported date range.
  • Lack of resources or increasing data latency.
  • Data blocks limit exceeded.

For more troubleshooting tips, see Troubleshooting.

New tables cannot be added

No new tables are added to Snapshot and increment transfers.

Solution:

  1. Create tables in the target database manually. For the transfer to work, do the following when creating a table:

    1. Add the transfer service fields to it:

      __data_transfer_commit_time timestamp,
      __data_transfer_delete_time timestamp
      
    2. Use ReplacingMergeTree:

      ENGINE = ReplacingMergeTree
      
  2. Create a separate transfer of the Snapshot and increment type and add only new tables to the list of objects to transfer. Deactivating the original Snapshot and increment transfer is not required. Activate the new transfer, and once it switches to the Replicating status, deactivate it.

    To add other tables, put them into the list of objects to transfer in the created separate transfer (replacing other objects in that list), reactivate it, and, once it switches to the Replicating status, deactivate it.

    Note

    Since two transfers were simultaneously migrating data, the new tables on the target will contain duplicate records. Run SELECT * FROM <table_name> FINAL to hide duplicate records, or OPTIMIZE TABLE <table_name> FINAL to delete them.
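
Step 1 above might produce a table like the following sketch; the db1 database, new_table name, and the id and value columns are hypothetical placeholders.

```sql
-- Manually created target table with the transfer service fields
-- and the ReplacingMergeTree engine.
CREATE TABLE db1.new_table
(
    id                          UInt64,
    value                       String,
    __data_transfer_commit_time timestamp,
    __data_transfer_delete_time timestamp
)
ENGINE = ReplacingMergeTree
ORDER BY id;

-- Hide duplicate records at query time, or merge them away for good:
SELECT * FROM db1.new_table FINAL;
OPTIMIZE TABLE db1.new_table FINAL;
```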

Data is not transferred

An attempt to transfer data from a ClickHouse® source fails with this error:


Syntax error: failed at position 25 ('-'): <error_details>. Expected one of: token, Dot, UUID, alias, AS, identifier, FINAL, SAMPLE, INTO OUTFILE, FORMAT, SETTINGS, end of query

Solution:

Yandex Data Transfer cannot transfer a database whose name contains a hyphen. If possible, rename the database.

Unsupported date range

If the migrated data contains dates outside the supported ranges, ClickHouse® returns the following error:

TYPE_ERROR [target]: failed to run (abstract1 source): failed to push items from 0 to 1 in batch:
Push failed: failed to push 1 rows to ClickHouse shard 0:
ClickHouse Push failed: Unable to exec changeItem: clickhouse:
dateTime <field_name> must be between 1900-01-01 00:00:00 and 2262-04-11 23:47:16

Supported date ranges in ClickHouse®:

  • For the DateTime64 type fields: 1900-01-01 to 2299-12-31. For more information, see the ClickHouse® documentation.
  • For the DateTime type fields: 1970-01-01 to 2106-02-07. For more information, see the ClickHouse® documentation.

Solution: use one of the following options:

  • Convert all dates in the source DB to a range supported by ClickHouse®.
  • In the source endpoint parameters, exclude the table with incorrect dates from the transfer.
  • In the transfer parameters, specify the Convert values to string transformer. This will change the field type during the transfer.
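
For example, before activating the transfer you can look for out-of-range values on the source. This is a hedged sketch for a ClickHouse® source; the exact query depends on your source DBMS, and db1.src_table and the dt column are hypothetical names.

```sql
-- Count rows whose dates fall outside the DateTime64-supported range.
SELECT count()
FROM db1.src_table
WHERE dt < toDateTime64('1900-01-01 00:00:00', 0)
   OR dt > toDateTime64('2299-12-31 23:59:59', 0);
```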

Lack of resources or increasing data latency

You may encounter the following problems when migrating data to a ClickHouse® target:

  1. Transfer fails with an error. Error message:

    pod instance restarted
    
  2. Transfer state monitoring indicates an increasing data latency (a time difference between when the records appear in the target and when they appear in the source).

Possible cause:

The write interval specified in the target endpoint settings is too large, which leads to a lack of RAM (OOM) on the transfer VM.

Solution:

In the management console, set the value of the Flush interval target endpoint setting to 10 seconds or less.

In addition, if your transfer type is Snapshot, reactivate it. Transfers of the other types will restart automatically.

Data blocks limit exceeded

When migrating data to a ClickHouse® target, the transfer is interrupted due to an error. Error message:

ERROR Unable to Activate ...
unable to upload tables: unable to upload data objects: unable upload part <table name> ():
unable to start *clickhouse.HTTPSource event source: failed to push events to destination:
unable to push http batch: <table name>: failed: INSERT INTO ...

Additionally, you can also get this error:

pod instance restarted

These errors occur when an insert into the ClickHouse® target affects more data blocks than the max_partitions_per_insert_block setting allows.

Solution: Increase the max_partitions_per_insert_block parameter for the account the transfer uses to connect to the target. For the Managed Service for ClickHouse® target, you can change this parameter in user settings. For a ClickHouse® custom installation, you can create a settings profile and assign it to the account:

CREATE SETTINGS PROFILE max_partitions
SETTINGS max_partitions_per_insert_block = <setting_value>

ALTER USER <username> PROFILE 'max_partitions'

ClickHouse® is a registered trademark of ClickHouse, Inc.

© 2025 Direct Cursus Technology L.L.C.