Sharding tables in ClickHouse®
Sharding provides a number of benefits for coping with a high query rate and large data sets. It works by creating a distributed table that routes queries to underlying tables. You can access data in sharded tables both directly and through the distributed table.
There are three approaches to sharding:
- Classic approach, when the distributed table uses all shards in the cluster.
- Regular group-based approach, when some shards are combined into a group.
- Advanced group-based approach, when shards are split into two groups: one group is created for the distributed table and another group is created for underlying tables.
Below are examples of sharding setup for each of the three approaches.
For more information, see Sharding in Managed Service for ClickHouse®.
To set up sharding:
- Set up clickhouse-client.
- Create tables with data.
- Test the tables.

If you no longer need the resources you created, delete them.
Getting started
Prepare the infrastructure
Manually
- Create a Managed Service for ClickHouse® cluster:
  - Cluster name: `cluster`.
  - Disk type: select the required disk type.

    The disk type determines the minimum number of hosts per shard:
    - Two hosts, if you select local SSD disks (`local-ssd`).
    - Three hosts, if you select non-replicated SSD disks (`network-ssd-nonreplicated`).

    Additional hosts for these disk types are required for fault tolerance. For more information, see Disk types in Managed Service for ClickHouse®.
  - DB name: `tutorial`.

  Cluster hosts must be available online.
- Create two additional shards named `shard2` and `shard3`.
- Create shard groups. Their number depends on the sharding type:
  - Regular group-based sharding requires one shard group named `sgroup`, which includes the `shard1` and `shard2` shards.
  - Advanced group-based sharding requires two groups: `sgroup` includes `shard1` and `shard2`; `sgroup_data` includes `shard3`.

  No shard groups are needed for classic sharding.
- If using security groups, configure them so that you can connect to the cluster from the internet.
Terraform
- If you do not have Terraform yet, install it.
- Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.
- Configure and initialize a provider. There is no need to create a provider configuration file manually: you can download it.
- Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.
- In the same working directory, download the configuration file for one of the sharding examples described below:
  - `simple-sharding.tf`: classic sharding.
  - `sharding-with-groups.tf`: group-based sharding.
  - `advanced-sharding-with-groups.tf`: advanced group-based sharding.

  Each file describes:
  - A network.
  - A subnet.
  - A default security group and the rules required to connect to the cluster from the internet.
  - A Managed Service for ClickHouse® cluster with the relevant hosts and shards.
- In the configuration file, specify the username and password that will be used to access the Managed Service for ClickHouse® cluster.
- Make sure the Terraform configuration files are correct using this command:

  ```bash
  terraform validate
  ```

  If there are any errors in the configuration files, Terraform will point them out.
- Create the required infrastructure:
  - Run the command to view the planned changes:

    ```bash
    terraform plan
    ```

    If the resource configuration descriptions are correct, the terminal will display a list of the resources to create and their parameters. This is a test step: no resources are created.
  - If you are happy with the planned changes, apply them:
    - Run the command:

      ```bash
      terraform apply
      ```

    - Confirm the creation of the resources.
    - Wait for the operation to complete.

  All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.
Set up clickhouse-client
Install and configure clickhouse-client to connect to your database.
Create tables with data
For example, you need to enable sharding for the `hits_v1` table. The text of the table creation query depends on the sharding approach you selected.
For the table structure to use in place of `<table_structure>`, see the ClickHouse® documentation.
When you enable sharding by any of these methods, you can send `SELECT` and `INSERT` queries to the created distributed table, and they will be processed according to the specified configuration.
The sharding key in the examples is a random number, `rand()`.
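The sharding key does not have to be random: any expression that returns an integer can be used. As a minimal sketch (not part of this tutorial's files; the table name `hits_v1_by_user` is made up), a deterministic key such as `intHash64(UserID)` would keep all rows of a given user on the same shard:

```sql
-- Sketch only: a variant of the classic distributed table shown below,
-- with a deterministic sharding key instead of rand().
CREATE TABLE tutorial.hits_v1_by_user ON CLUSTER '{cluster}'
AS tutorial.hits_v1
ENGINE = Distributed('{cluster}', tutorial, hits_v1, intHash64(UserID))
```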
Classic sharding
In this example, the distributed table created based on `hits_v1` uses all the shards (`shard1`, `shard2`, and `shard3`) of the `chcluster` cluster.
Before operating a distributed table:
- Connect to the `tutorial` database.
- Create a `MergeTree` table named `hits_v1`, which will run on all cluster hosts:

  ```sql
  CREATE TABLE tutorial.hits_v1 ON CLUSTER '{cluster}' (
      <table_structure>
  )
  ENGINE = MergeTree()
  PARTITION BY toYYYYMM(EventDate)
  ORDER BY (CounterID, EventDate, intHash32(UserID))
  SAMPLE BY intHash32(UserID)
  SETTINGS index_granularity = 8192
  ```
To create the `hits_v1_distributed` distributed table in the cluster:
- Connect to the `tutorial` database.
- Create a table on the `Distributed` engine:

  ```sql
  CREATE TABLE tutorial.hits_v1_distributed ON CLUSTER '{cluster}'
  AS tutorial.hits_v1
  ENGINE = Distributed('{cluster}', tutorial, hits_v1, rand())
  ```

  In this case, instead of explicitly specifying the table structure, you can use the `AS tutorial.hits_v1` expression because the `hits_v1_distributed` and `hits_v1` tables run on the same hosts in the cluster.

  When creating a table on the `Distributed` engine, use `chcluster` as the cluster ID. You can get it with a list of clusters in the folder.

  Tip: instead of the cluster ID, you can use the `{cluster}` macro: when executing the query, the ID of the cluster where the `CREATE TABLE` operation is running will be picked up automatically.
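For example, here is a sketch of the same statement with an explicit cluster ID instead of the `{cluster}` macro; `<cluster_ID>` is a placeholder, so substitute the ID from the list of clusters in your folder:

```sql
-- Sketch: identical to the statement above, but with the cluster ID written out.
CREATE TABLE tutorial.hits_v1_distributed ON CLUSTER '<cluster_ID>'
AS tutorial.hits_v1
ENGINE = Distributed('<cluster_ID>', tutorial, hits_v1, rand())
```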
Sharding using shard groups
In this example:
- One `sgroup` shard group is used.
- A distributed table and the `hits_v1` underlying table are in the same `sgroup` shard group in the cluster.
Before operating a distributed table:
- Connect to the `tutorial` database.
- Create a `MergeTree` table named `hits_v1`, which will use all of the hosts of the `sgroup` shard group in the cluster:

  ```sql
  CREATE TABLE tutorial.hits_v1 ON CLUSTER sgroup (
      <table_structure>
  )
  ENGINE = MergeTree()
  PARTITION BY toYYYYMM(EventDate)
  ORDER BY (CounterID, EventDate, intHash32(UserID))
  SAMPLE BY intHash32(UserID)
  SETTINGS index_granularity = 8192
  ```
To create the `tutorial.hits_v1_distributed` distributed table in the cluster:
- Connect to the `tutorial` database.
- Create a table on the `Distributed` engine:

  ```sql
  CREATE TABLE tutorial.hits_v1_distributed ON CLUSTER sgroup
  AS tutorial.hits_v1
  ENGINE = Distributed(sgroup, tutorial, hits_v1, rand())
  ```

  In this case, instead of explicitly specifying the table structure, you can use the `AS tutorial.hits_v1` expression because the `hits_v1_distributed` and `hits_v1` tables use the same shards and run on the same hosts in the cluster.
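Because the shard group is addressed like a cluster in these queries, you can check which hosts `sgroup` resolves to by querying the standard `system.clusters` table; a minimal sketch:

```sql
-- Lists the shards and hosts that the sgroup group maps to on this server.
SELECT cluster, shard_num, host_name
FROM system.clusters
WHERE cluster = 'sgroup'
```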
Advanced sharding using shard groups
In this example:
- Two shard groups are used: `sgroup` and `sgroup_data`.
- The distributed table is located in the `sgroup` shard group.
- The `hits_v1` underlying table is in the `sgroup_data` shard group.
Before operating a distributed table:
- Connect to the `tutorial` database.
- Create a `ReplicatedMergeTree` table named `hits_v1`, which will use all of the hosts of the `sgroup_data` shard group in the cluster:

  ```sql
  CREATE TABLE tutorial.hits_v1 ON CLUSTER sgroup_data (
      <table_structure>
  )
  ENGINE = ReplicatedMergeTree('/tables/{shard}/hits_v1', '{replica}')
  PARTITION BY toYYYYMM(EventDate)
  ORDER BY (CounterID, EventDate, intHash32(UserID))
  SAMPLE BY intHash32(UserID)
  SETTINGS index_granularity = 8192
  ```

  The `ReplicatedMergeTree` engine ensures fault tolerance.
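To verify that replication is active for the new table, you could query the standard `system.replicas` table on any host of the `sgroup_data` group; a sketch, assuming the table was created as shown above:

```sql
-- Each replica of tutorial.hits_v1 should be listed here with is_readonly = 0.
SELECT database, table, zookeeper_path, replica_name, is_readonly
FROM system.replicas
WHERE database = 'tutorial' AND table = 'hits_v1'
```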
To create the `tutorial.hits_v1_distributed` distributed table in the cluster:
- Connect to the `tutorial` database.
- Create a table on the `Distributed` engine:

  ```sql
  CREATE TABLE tutorial.hits_v1_distributed ON CLUSTER sgroup (
      <table_structure>
  )
  ENGINE = Distributed(sgroup_data, tutorial, hits_v1, rand())
  ```

  In this case, you must specify the table structure explicitly: you cannot use the `AS tutorial.hits_v1` shorthand because the `hits_v1_distributed` and `hits_v1` tables use different shards and run on different hosts.
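If you want to double-check which shard group the distributed table routes queries to, you can inspect its definition; a sketch:

```sql
-- The engine clause in the output should reference the underlying group:
-- ENGINE = Distributed(sgroup_data, tutorial, hits_v1, rand())
SHOW CREATE TABLE tutorial.hits_v1_distributed
```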
Test the tables
To test your new distributed table named `tutorial.hits_v1_distributed`:
- Load the `hits_v1` test dataset:

  ```bash
  curl https://storage.yandexcloud.net/doc-files/managed-clickhouse/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv
  ```

- Populate the table with test data:

  ```bash
  clickhouse-client \
      --host "<FQDN_of_any_host_with_a_distributed_table>" \
      --secure \
      --port 9440 \
      --user "<username>" \
      --password "<user_password>" \
      --database "tutorial" \
      --query "INSERT INTO tutorial.hits_v1_distributed FORMAT TSV" \
      --max_insert_block_size=100000 < hits_v1.tsv
  ```

  To find out the host names, request a list of ClickHouse® hosts in the cluster.

- Run one or more test queries to this table. For example, you can find out the number of rows in it:

  ```sql
  SELECT count() FROM tutorial.hits_v1_distributed
  ```

  Result:

  8873898
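To see how the rows were distributed across shards, you can also connect to an individual host and count the rows of the underlying `hits_v1` table directly; a sketch (the per-host counts will vary, since the sharding key is `rand()`):

```sql
-- Run on a single shard host: returns only the rows stored on that shard.
SELECT count() FROM tutorial.hits_v1
```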
Delete the resources you created
Delete the resources you no longer need to avoid paying for them:
- Delete the Managed Service for ClickHouse® cluster.
- If static public IP addresses were used for cluster access, release and delete them.
To delete the infrastructure created with Terraform:
- In the terminal window, go to the directory containing the infrastructure plan.
- Delete the configuration file (`simple-sharding.tf`, `sharding-with-groups.tf`, or `advanced-sharding-with-groups.tf`).
- Make sure the Terraform configuration files are correct using this command:

  ```bash
  terraform validate
  ```

  If there are any errors in the configuration files, Terraform will point them out.
- Run the command to view the planned changes:

  ```bash
  terraform plan
  ```

  If the resource configuration descriptions are correct, the terminal will display a list of the resources to delete and their parameters. This is a test step: no resources are deleted.
- If you are happy with the planned changes, apply them:
  - Run the command:

    ```bash
    terraform apply
    ```

  - Confirm the deletion of the resources.
  - Wait for the operation to complete.

All the resources described in the configuration file will be deleted.
ClickHouse® is a registered trademark of ClickHouse, Inc.