Sharding tables in Yandex Managed Service for ClickHouse®

Written by Yandex Cloud
Updated on May 5, 2025
  • Required paid resources
  • Getting started
    • Prepare the infrastructure
    • Set up clickhouse-client
  • Create tables with data
    • Classic sharding
    • Group-based sharding
    • Advanced group-based sharding
  • Test the tables
  • Delete the resources you created

Sharding provides a number of benefits when dealing with high query rates and massive datasets. It works by creating a distributed table that routes queries to the underlying tables. You can access data in sharded tables both directly and through the distributed table.

There are three approaches to sharding:

  • The classic approach, where the distributed table uses all shards in the cluster.
  • The group-based approach, where some shards are grouped together.
  • The advanced group-based approach, where shards are divided into two groups: one for the distributed table and the other for the underlying tables.

Below are examples of sharding setup for all three approaches.

For more information, see Sharding in Managed Service for ClickHouse®.

To set up sharding:

  1. Create tables with data.
  2. Test the tables.

If you no longer need the resources you created, delete them.

Required paid resources

The support cost includes:

  • Managed Service for ClickHouse® cluster fee: Using computing resources allocated to hosts (including ZooKeeper hosts) and disk space (see Managed Service for ClickHouse® pricing).
  • Fee for using public IP addresses if public access is enabled for cluster hosts (see Virtual Private Cloud pricing).

Getting started

Prepare the infrastructure

Manually
Terraform
  1. Create a Managed Service for ClickHouse® cluster:

    • Cluster name: chcluster.

    • Disk type: Select the required disk type.

      The disk type determines the minimum number of hosts per shard:

      • Two hosts, if you select local SSDs (local-ssd).
      • Three hosts, if you select network non-replicated SSDs (network-ssd-nonreplicated).

      Additional hosts for these disk types are required for fault tolerance.

      To learn more, see Disk types in Managed Service for ClickHouse®.

    • DB name: tutorial.

    Cluster hosts must be accessible from the internet.

  2. Create two additional shards named shard2 and shard3.

  3. Add three ZooKeeper hosts to the cluster.

  4. Create shard groups. Their number depends on the sharding type:

    • Group-based sharding requires one shard group named sgroup, which includes shard1 and shard2.
    • Advanced group-based sharding requires two groups:
      • sgroup includes shard1 and shard2.
      • sgroup_data includes shard3.

    No shard groups are needed for classic sharding.

  5. If using security groups, configure them so that you can connect to the cluster from the internet.

  1. If you do not have Terraform yet, install it.

  2. Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.

  3. Configure and initialize a provider. There is no need to create a provider configuration file manually: you can download it.

  4. Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.

  5. In the same working directory, download the configuration file for one of the sharding examples described below:

    • simple-sharding.tf: Classic sharding.
    • sharding-with-groups.tf: Group-based sharding.
    • advanced-sharding-with-groups.tf: Advanced group-based sharding.

    Each file describes the following:

    • Network.
    • Subnet.
    • Default security group and rules required to connect to the cluster from the internet.
    • Managed Service for ClickHouse® cluster with relevant hosts and shards.
  6. In the configuration file, specify the username and password to access the Managed Service for ClickHouse® cluster.

  7. Make sure the Terraform configuration files are correct using this command:

    terraform validate
    

    If there are any errors in the configuration files, Terraform will point them out.

  8. Create the required infrastructure:

    1. Run this command to view the planned changes:

      terraform plan
      

      If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

    2. If everything looks correct, apply the changes:

      1. Run this command:

        terraform apply
        
      2. Confirm updating the resources.

      3. Wait for the operation to complete.

    All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.

Set up clickhouse-client

Install and configure clickhouse-client to connect to your database.
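
As a quick sanity check after setup, you can run a trivial query using the same connection flags as the examples later in this tutorial (the host FQDN and credentials are placeholders):

    clickhouse-client \
       --host "<FQDN_of_any_ClickHouse_host>" \
       --secure \
       --port 9440 \
       --user "<username>" \
       --password "<user_password>" \
       --query "SELECT version()"

If the connection is configured correctly, the command prints the server version.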

Create tables with data

Let's assume you need to enable sharding for the hits_v1 table. The text of the table creation query depends on the sharding approach you selected.

For the table structure to use in place of <table_structure>, see the ClickHouse® documentation.

Whichever sharding approach you use, you can send SELECT and INSERT queries to the distributed table you created, and they will be processed according to the specified configuration.

The sharding key in the examples is a random number rand().
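
rand() distributes rows evenly but ignores their content, so related rows may land on different shards. If your workload mostly filters by a specific column, a deterministic sharding key keeps related rows together. As a hypothetical variant (not part of this tutorial), the classic-sharding distributed table below could instead be created with a per-user key:

    CREATE TABLE tutorial.hits_v1_distributed ON CLUSTER '{cluster}' AS tutorial.hits_v1
    ENGINE = Distributed('{cluster}', tutorial, hits_v1, intHash32(UserID))

With this key, all rows sharing a UserID are routed to the same shard.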

Classic sharding

In this example, the distributed table to be created based on hits_v1 uses all the shards of the chcluster cluster: shard1, shard2, and shard3.

Before operating the distributed table:

  1. Connect to the tutorial database.

  2. Create a MergeTree table named hits_v1 that will reside on all the cluster's hosts:

    CREATE TABLE tutorial.hits_v1 ON CLUSTER '{cluster}' ( <table_structure> )
    ENGINE = MergeTree()
    PARTITION BY toYYYYMM(EventDate)
    ORDER BY (CounterID, EventDate, intHash32(UserID))
    SAMPLE BY intHash32(UserID)
    SETTINGS index_granularity = 8192
    
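To check that the hits_v1 table was created on every host, you can query system.tables across the cluster with the clusterAllReplicas table function (a verification sketch; replace <cluster_ID> with your cluster ID):

    SELECT hostName() AS host, name
    FROM clusterAllReplicas('<cluster_ID>', system.tables)
    WHERE database = 'tutorial'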

To create a distributed table named hits_v1_distributed in the cluster:

  1. Connect to the tutorial database.

  2. Create a Distributed table:

    CREATE TABLE tutorial.hits_v1_distributed ON CLUSTER '{cluster}' AS tutorial.hits_v1
    ENGINE = Distributed('{cluster}', tutorial, hits_v1, rand())
    

    Here, instead of explicitly specifying the table structure, you can use the AS tutorial.hits_v1 expression because the hits_v1_distributed and hits_v1 tables are on the same cluster hosts.

    When creating a Distributed table, use chcluster as the cluster ID. You can find it in the list of clusters in your folder.

    Tip

    Instead of the cluster ID, you can use the {cluster} macro: when the query runs, the ID of the cluster where the CREATE TABLE operation is executed is substituted automatically.
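
    To find the cluster ID to use here, you can also list the cluster definitions the server knows about by querying the system.clusters table:

      SELECT cluster, shard_num, host_name FROM system.clusters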

Group-based sharding

In this example:

  • A single shard group named sgroup is used.
  • The distributed table and the underlying table named hits_v1 are in the same sgroup shard group of the cluster.

Before operating the distributed table:

  1. Connect to the tutorial database.

  2. Create a MergeTree table named hits_v1 that resides on all hosts of the cluster's sgroup shard group:

    CREATE TABLE tutorial.hits_v1 ON CLUSTER sgroup ( <table_structure> )
    ENGINE = MergeTree()
    PARTITION BY toYYYYMM(EventDate)
    ORDER BY (CounterID, EventDate, intHash32(UserID))
    SAMPLE BY intHash32(UserID)
    SETTINGS index_granularity = 8192
    

To create a distributed table named tutorial.hits_v1_distributed in the cluster:

  1. Connect to the tutorial database.

  2. Create a Distributed table:

    CREATE TABLE tutorial.hits_v1_distributed ON CLUSTER sgroup AS tutorial.hits_v1
    ENGINE = Distributed(sgroup, tutorial, hits_v1, rand())
    

    Here, instead of explicitly specifying the table structure, you can use the AS tutorial.hits_v1 expression because the hits_v1_distributed and hits_v1 tables use the same shard and run on the same hosts.
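
    Since the Distributed engine references the shard group sgroup as if it were a cluster, you can verify which shards the group spans with a query like this (a sketch, assuming the shard group was created as described in the prerequisites):

      SELECT cluster, shard_num, host_name FROM system.clusters WHERE cluster = 'sgroup'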

Advanced group-based sharding

In this example:

  • Two shard groups are used: sgroup and sgroup_data.
  • The distributed table is in the shard group named sgroup.
  • The hits_v1 underlying table is in the shard group named sgroup_data.

Before operating the distributed table:

  1. Connect to the tutorial database.

  2. Create a ReplicatedMergeTree table named hits_v1 that resides on all hosts of the cluster's sgroup_data shard group:

    CREATE TABLE tutorial.hits_v1 ON CLUSTER sgroup_data ( <table_structure> )
    ENGINE = ReplicatedMergeTree('/tables/{shard}/hits_v1', '{replica}')
    PARTITION BY toYYYYMM(EventDate)
    ORDER BY (CounterID, EventDate, intHash32(UserID))
    SAMPLE BY intHash32(UserID)
    SETTINGS index_granularity = 8192
    

    The ReplicatedMergeTree engine is used for fault tolerance.
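
    The '/tables/{shard}/hits_v1' path and '{replica}' argument rely on server-side macros defined for each host, which give every replica a unique ZooKeeper path. To see the values they resolve to on the host you are connected to, you can query system.macros:

      SELECT * FROM system.macros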

To create a distributed table named tutorial.hits_v1_distributed in the cluster:

  1. Connect to the tutorial database.

  2. Create a Distributed table:

    CREATE TABLE tutorial.hits_v1_distributed ON CLUSTER sgroup ( <table_structure> )
    ENGINE = Distributed(sgroup_data, tutorial, hits_v1, rand())
    

    Here you must explicitly specify the table structure because hits_v1_distributed and hits_v1 use different shards and are on different hosts.

Test the tables

To test your new distributed table named tutorial.hits_v1_distributed:

  1. Load the hits_v1 test dataset:

    curl https://storage.yandexcloud.net/doc-files/managed-clickhouse/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv
    
  2. Populate the table with test data:

    clickhouse-client \
       --host "<FQDN_of_any_host_with_distributed_table>" \
       --secure \
       --port 9440 \
       --user "<username>" \
       --password "<user_password>" \
       --database "tutorial" \
       --query "INSERT INTO tutorial.hits_v1_distributed FORMAT TSV" \
       --max_insert_block_size=100000 < hits_v1.tsv
    

    To find out the host names, request a list of ClickHouse® hosts in the cluster.

  3. Run one or more test queries against this table. For example, you can find out the number of rows in it:

    SELECT count() FROM tutorial.hits_v1_distributed
    

    Result:

    8873898
    
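    To see how the rows were distributed, you can additionally group by the _shard_num virtual column exposed by the Distributed engine (a quick sanity check, not part of the original tutorial):

      SELECT _shard_num, count() AS rows
      FROM tutorial.hits_v1_distributed
      GROUP BY _shard_num
      ORDER BY _shard_num

    With the rand() sharding key, the counts should be roughly equal across shards.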

Delete the resources you created

Delete the resources you no longer need to avoid paying for them:

Manually
Terraform
  1. Delete the Managed Service for ClickHouse® cluster.
  2. If static public IP addresses were used for cluster access, release and delete them.
  1. In the terminal window, go to the directory containing the infrastructure plan.

    Warning

    Make sure the directory has no Terraform manifests with the resources you want to keep. Terraform deletes all resources that were created using the manifests in the current directory.

  2. Delete resources:

    1. Run this command:

      terraform destroy
      
    2. Confirm deleting the resources and wait for the operation to complete.

    All the resources described in the Terraform manifests will be deleted.

ClickHouse® is a registered trademark of ClickHouse, Inc.
