Sharding tables in ClickHouse®
Sharding provides a number of benefits for coping with a high query rate and large data sets. It works by creating a distributed table that routes queries to underlying tables. You can access data in sharded tables both directly and through the distributed table.
There are three approaches to sharding:
- Classic approach, where the distributed table uses all shards in the cluster.
- Regular group-based approach, where some shards are combined into a group.
- Advanced group-based approach, where shards are split into two groups: one group is created for the distributed table and another group is created for underlying tables.
Below are examples of sharding setup for each of the three approaches.
For more information, see Sharding in Managed Service for ClickHouse®.
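For example, once the tables from this tutorial exist, the same count can be obtained either from one shard's local table or through the distributed table that fans the query out to every shard. This is only a sketch using the `tutorial.hits_v1` and `tutorial.hits_v1_distributed` names created later on this page:

```sql
-- Direct access: counts only the rows stored on the shard you are connected to.
SELECT count() FROM tutorial.hits_v1;

-- Access through the distributed table: the query is routed to all shards
-- and the per-shard results are combined.
SELECT count() FROM tutorial.hits_v1_distributed;
```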
To set up sharding:
- Set up clickhouse-client.
- Create tables with data.
- Test the tables.

If you no longer need the resources you created, delete them.
Getting started
Prepare the infrastructure
- Create a Managed Service for ClickHouse® cluster:
  - Cluster name: `chcluster`.
  - Disk type: Select the required disk type.

    The disk type determines the minimum number of hosts per shard:
    - Two hosts, if you select local SSDs (`local-ssd`).
    - Three hosts, if you select non-replicated SSDs (`network-ssd-nonreplicated`).

    Additional hosts for these disk types are required for fault tolerance. For more information, see Disk types in Managed Service for ClickHouse®.
  - DB name: `tutorial`.

  Cluster hosts must be available online.
- Create two additional shards named `shard2` and `shard3`.
- Create shard groups. Their number depends on the sharding type:
  - Group-based sharding requires one shard group named `sgroup`, which includes `shard1` and `shard2`.
  - Advanced group-based sharding requires two groups: `sgroup` includes `shard1` and `shard2`; `sgroup_data` includes `shard3`.

  No shard groups are needed for classic sharding.
- If you are using security groups, configure them so that you can connect to the cluster from the internet.
- If you do not have Terraform yet, install it.
- Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.
- Configure and initialize a provider. There is no need to create a provider configuration file manually: you can download it.
- Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.
- In the same working directory, download the configuration file for one of the sharding examples described below:
  - `simple-sharding.tf`: Classic sharding
  - `sharding-with-groups.tf`: Group-based sharding
  - `advanced-sharding-with-groups.tf`: Advanced group-based sharding

  Each file describes:
  - Network.
  - Subnet.
  - Default security group and rules required to connect to the cluster from the internet.
  - Managed Service for ClickHouse® cluster with relevant hosts and shards.
- In the configuration file, specify the username and password to access the Managed Service for ClickHouse® cluster.
- Check that the Terraform configuration files are correct using this command:

      terraform validate

  If there are any errors in the configuration files, Terraform will point them out.
- Create the required infrastructure:
  - Run the command to view planned changes:

        terraform plan

    If the resource configuration descriptions are correct, the terminal will display a list of the resources to modify and their parameters. This is a test step. No resources are updated.
  - If you are happy with the planned changes, apply them:
    - Run the command:

          terraform apply

    - Confirm the update of resources.
    - Wait for the operation to complete.

  All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.
Set up clickhouse-client
Install and configure clickhouse-client to connect to your database.
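After connecting, you can optionally confirm that the client works and inspect how the cluster is sharded. The sketch below assumes the standard `system.clusters` system table; in Managed Service for ClickHouse®, shard groups you created are also expected to show up there as separate cluster names:

```sql
-- Confirm that the connection works.
SELECT version();

-- List the shards and hosts visible from this host.
SELECT cluster, shard_num, replica_num, host_name
FROM system.clusters;
```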
Create tables with data
Let's assume you need to enable sharding for the `hits_v1` table. For the table structure to substitute for `<table_structure>`, see the ClickHouse® documentation.

Once you enable sharding by any of the methods, you can send `SELECT` and `INSERT` queries to the distributed table you created, and they will be processed according to the specified configuration.

The sharding key in the examples is a random number, `rand()`.
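If you need related rows to land on the same shard (for example, all hits of one user), you could swap the random key for a deterministic expression. The statement below is only an illustrative variation of the classic setup, not part of this tutorial:

```sql
-- Hypothetical variation: sharding by user instead of randomly, so that all rows
-- with the same UserID end up on the same shard.
CREATE TABLE tutorial.hits_v1_distributed ON CLUSTER '{cluster}'
AS tutorial.hits_v1
ENGINE = Distributed('{cluster}', tutorial, hits_v1, intHash32(UserID));
```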
Classic sharding
In this example, the distributed table that will be created based on `hits_v1` uses all the shards of the `chcluster` cluster: `shard1`, `shard2`, and `shard3`.
Before operating a distributed table:
- Connect to the `tutorial` database.
- Create a MergeTree table named `hits_v1` that will reside on all the cluster's hosts:

      CREATE TABLE tutorial.hits_v1 ON CLUSTER '{cluster}'
      (
          <table_structure>
      )
      ENGINE = MergeTree()
      PARTITION BY toYYYYMM(EventDate)
      ORDER BY (CounterID, EventDate, intHash32(UserID))
      SAMPLE BY intHash32(UserID)
      SETTINGS index_granularity = 8192
To create a distributed table named `hits_v1_distributed` in the cluster:

- Connect to the `tutorial` database.
- Create a Distributed table:

      CREATE TABLE tutorial.hits_v1_distributed ON CLUSTER '{cluster}'
      AS tutorial.hits_v1
      ENGINE = Distributed('{cluster}', tutorial, hits_v1, rand())
  Here, instead of explicitly specifying the table structure, you can use the `AS tutorial.hits_v1` expression because the `hits_v1_distributed` and `hits_v1` tables are on the same hosts in the cluster.

  When creating a Distributed table, use `chcluster` as the cluster ID. You can get it with a list of clusters in the folder.

  Tip

  Instead of the cluster ID, you can use the `{cluster}` macro: when executing the query, the ID of the cluster the `CREATE TABLE` operation is being executed in will be substituted automatically.
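If you want to check what the macros expand to on a given host, you can query the standard `system.macros` table. This is an optional check, not a required tutorial step:

```sql
-- Macro values defined on the host you are connected to,
-- typically including cluster, shard, and replica.
SELECT macro, substitution FROM system.macros;
```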
Sharding using shard groups
In this example:
- One shard group named `sgroup` is used.
- A distributed table and the underlying table named `hits_v1` are in the same cluster shard group named `sgroup`.
Before operating a distributed table:
- Connect to the `tutorial` database.
- Create a MergeTree table named `hits_v1` that uses all the cluster's `sgroup` shard group hosts:

      CREATE TABLE tutorial.hits_v1 ON CLUSTER sgroup
      (
          <table_structure>
      )
      ENGINE = MergeTree()
      PARTITION BY toYYYYMM(EventDate)
      ORDER BY (CounterID, EventDate, intHash32(UserID))
      SAMPLE BY intHash32(UserID)
      SETTINGS index_granularity = 8192
To create a distributed table named `tutorial.hits_v1_distributed` in the cluster:
- Connect to the `tutorial` database.
- Create a Distributed table:

      CREATE TABLE tutorial.hits_v1_distributed ON CLUSTER sgroup
      AS tutorial.hits_v1
      ENGINE = Distributed(sgroup, tutorial, hits_v1, rand())
  Here, instead of explicitly specifying the table structure, you can use the `AS tutorial.hits_v1` expression because the `hits_v1_distributed` and `hits_v1` tables use the same shard and run on the same hosts.
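To see which hosts the `sgroup` shard group actually covers before running queries, you can inspect the standard `system.clusters` table (an optional check):

```sql
-- Hosts that belong to the sgroup shard group; in this example both
-- hits_v1 and hits_v1_distributed live on these hosts.
SELECT shard_num, replica_num, host_name
FROM system.clusters
WHERE cluster = 'sgroup';
```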
Advanced sharding using shard groups
In this example:
- Two shard groups are used: `sgroup` and `sgroup_data`.
- The distributed table is in the shard group named `sgroup`.
- The `hits_v1` underlying table is in the shard group named `sgroup_data`.
Before operating a distributed table:
- Connect to the `tutorial` database.
- Create a ReplicatedMergeTree table named `hits_v1` that uses all the cluster's `sgroup_data` shard group hosts:

      CREATE TABLE tutorial.hits_v1 ON CLUSTER sgroup_data
      (
          <table_structure>
      )
      ENGINE = ReplicatedMergeTree('/tables/{shard}/hits_v1', '{replica}')
      PARTITION BY toYYYYMM(EventDate)
      ORDER BY (CounterID, EventDate, intHash32(UserID))
      SAMPLE BY intHash32(UserID)
      SETTINGS index_granularity = 8192
The ReplicatedMergeTree engine ensures fault tolerance.
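To verify that replication is healthy for the table, you can query the standard `system.replicas` table on one of the `sgroup_data` hosts; this optional check is not part of the original procedure:

```sql
-- Replication status of hits_v1 on this host: when all replicas are healthy,
-- active_replicas equals total_replicas.
SELECT database, table, total_replicas, active_replicas
FROM system.replicas
WHERE database = 'tutorial' AND table = 'hits_v1';
```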
To create a distributed table named `tutorial.hits_v1_distributed` in the cluster:
- Connect to the `tutorial` database.
- Create a Distributed table:

      CREATE TABLE tutorial.hits_v1_distributed ON CLUSTER sgroup
      (
          <table_structure>
      )
      ENGINE = Distributed(sgroup_data, tutorial, hits_v1, rand())
  Here you must explicitly specify the table structure because `hits_v1_distributed` and `hits_v1` use different shards and are on different hosts.
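Once data has been loaded through `tutorial.hits_v1_distributed` (see the next section), you can confirm that it is physically stored on the `sgroup_data` hosts rather than in `sgroup`. The sketch below uses the standard `cluster` table function:

```sql
-- Row counts per sgroup_data host: data written through the distributed
-- table in sgroup is stored on these hosts.
SELECT hostName() AS host, count() AS row_count
FROM cluster('sgroup_data', tutorial, hits_v1)
GROUP BY host;
```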
Test the tables
To test your new distributed table named `tutorial.hits_v1_distributed`:
- Load the `hits_v1` test dataset:

      curl https://storage.yandexcloud.net/doc-files/managed-clickhouse/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv
- Populate the table with test data:

      clickhouse-client \
          --host "<FQDN_of_any_host_with_distributed_table>" \
          --secure \
          --port 9440 \
          --user "<username>" \
          --password "<user_password>" \
          --database "tutorial" \
          --query "INSERT INTO tutorial.hits_v1_distributed FORMAT TSV" \
          --max_insert_block_size=100000 < hits_v1.tsv

  To find out the host names, request a list of ClickHouse® hosts in the cluster.
- Run one or more test queries to this table. For example, you can find out the number of rows in it:

      SELECT count() FROM tutorial.hits_v1_distributed

  Result:

      8873898
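Optionally, you can also check how the loaded rows were spread across shards. For the classic and regular group-based setups, the standard `_shard_num` virtual column of a Distributed table can be used for this (a sketch, not part of the original test):

```sql
-- Per-shard row counts: with rand() as the sharding key, the counts
-- should be roughly equal across shards.
SELECT _shard_num, count() AS row_count
FROM tutorial.hits_v1_distributed
GROUP BY _shard_num;
```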
Delete the resources you created
Delete the resources you no longer need to avoid paying for them:
- Delete the Managed Service for ClickHouse® cluster.
- If static public IP addresses were used for cluster access, release and delete them.
- In the terminal window, go to the directory containing the infrastructure plan.

  Warning

  Make sure the directory has no Terraform manifests with the resources you want to keep. Terraform deletes all resources that were created using the manifests in the current directory.
- Delete resources:
  - Run this command:

        terraform destroy

  - Confirm deleting the resources and wait for the operation to complete.

  All the resources described in the Terraform manifests will be deleted.
ClickHouse® is a registered trademark of ClickHouse, Inc.