Using hybrid storage in Yandex Managed Service for ClickHouse®

Written by Yandex Cloud
Updated on September 30, 2025
  • Required paid resources
  • Getting started
    • Set up the infrastructure
    • Set up the command line tools
    • Optionally, explore the test dataset
  • Create a table
  • Populate the table with data
  • Check data placement within a cluster
  • Run a test query
  • Monitor volume of data in Object Storage (optional step)
  • Delete the resources you created

Hybrid storage enables you to store frequently used data on the Managed Service for ClickHouse® cluster's network drives while keeping rarely used data in Yandex Object Storage. When you enable hybrid storage, a bucket in Object Storage is automatically created and connected to ClickHouse®. Automatic data movement between these storage tiers is only supported for MergeTree tables. For more information, see Managed Service for ClickHouse® storage.
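Once the cluster is created and you are connected to it (both covered below), the two tiers are visible as two disks: network storage as default and the Object Storage tier as object_storage. A quick way to list them, using the standard system.disks table:

SELECT
    name,
    type,
    formatReadableSize(free_space) AS free,
    formatReadableSize(total_space) AS total
FROM system.disks

For the object_storage disk, the reported capacity is typically a nominal, effectively unlimited value, since the Object Storage tier is not bounded by a fixed disk size.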

To use hybrid storage:

  1. Create a table.
  2. Populate the table with data.
  3. Check data placement within a cluster.
  4. Run a test query.

If you no longer need the resources you created, delete them.

Required paid resources

The support cost for this solution includes:

  • Managed Service for ClickHouse® cluster fee: Covers the use of computational resources allocated to hosts (including ZooKeeper hosts) and disk space (see Managed Service for ClickHouse® pricing).
  • Fee for public IP addresses if public access is enabled for cluster hosts (see Virtual Private Cloud pricing).

Getting started

Set up the infrastructure

Manually
Using Terraform
  1. Create a Managed Service for ClickHouse® cluster:

    • Disk type: Standard (network-hdd), fast (network-ssd), or non-replicated (network-ssd-nonreplicated) network disks.

    • Size: At least 15 GB.

    • User management via SQL: Disabled.

    • DB name: tutorial.

    • Hybrid storage: Enabled.

  2. Configure access permissions to allow read and write queries to your database.

  1. If you do not have Terraform yet, install it.

  2. Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.

  3. Configure and initialize a provider. There is no need to create a provider configuration file manually; you can download it.

  4. Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.

  5. Download the clickhouse-hybrid-storage.tf configuration file to your current working directory.

    This file describes:

    • Network.
    • Subnet.
    • Default security group and rules enabling inbound cluster connections.
    • Managed Service for ClickHouse® cluster with hybrid storage enabled.
  6. In the clickhouse-hybrid-storage.tf file, specify the Managed Service for ClickHouse® cluster access username and password (for reference, an illustrative excerpt of such a configuration is shown after this procedure).

  7. Validate your Terraform configuration files using this command:

    terraform validate
    

    Terraform will display any configuration errors detected in your files.

  8. Create the required infrastructure:

    1. Run this command to view the planned changes:

      terraform plan
      

      If you described the configuration correctly, the terminal will display a list of the resources to create and their parameters. This is a verification step that does not apply changes to your resources.

    2. If everything looks correct, apply the changes:

      1. Run this command:

        terraform apply
        
      2. Confirm creating the resources.

      3. Wait for the operation to complete.

    All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.
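For reference, hybrid storage in the downloaded configuration is controlled by the cloud_storage block of the yandex provider's yandex_mdb_clickhouse_cluster resource. The sketch below is illustrative rather than the exact contents of clickhouse-hybrid-storage.tf; the resource names, zone, and preset are placeholders:

resource "yandex_mdb_clickhouse_cluster" "tutorial" {
  name        = "clickhouse-hybrid-storage"
  environment = "PRODUCTION"
  network_id  = yandex_vpc_network.tutorial.id

  clickhouse {
    resources {
      resource_preset_id = "s2.micro"  # placeholder preset
      disk_type_id       = "network-ssd"
      disk_size          = 15          # GB, the tutorial minimum
    }
  }

  # Enables hybrid storage: cold MergeTree parts can move to Object Storage.
  cloud_storage {
    enabled = true
  }

  database {
    name = "tutorial"
  }

  user {
    name     = "<username>"
    password = "<password>"
  }

  host {
    type      = "CLICKHOUSE"
    zone      = "ru-central1-a"  # placeholder zone
    subnet_id = yandex_vpc_subnet.tutorial.id
  }
}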

Set up the command line tools

  1. Install curl and unxz:

    apt-get update && apt-get install curl xz-utils
    
  2. Set up clickhouse-client and use it to connect to the database.
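
    For reference, a connection command typically looks like this; these are the same flags used for the data import later in this tutorial (substitute your cluster host FQDN and username):

    clickhouse-client \
        --host <ClickHouse®_host_FQDN> \
        --secure \
        --user <username> \
        --database tutorial \
        --port 9440 \
        --ask-password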

Optionally, explore the test dataset

To demonstrate the hybrid storage operation, we will use anonymized hit data (hits_v1) from Yandex Metrica. This dataset contains nearly 9 million hits recorded from March 17 to March 23, 2014.

Upon creation, the tutorial.hits_v1 table will be configured to place all recent data (starting from March 21, 2014) in network storage, while moving older data (from March 17 to March 20, 2014) to object storage.

Create a table

Create the tutorial.hits_v1 table configured for hybrid storage by running the following SQL query. Replace the <schema> placeholder with a list of column definitions from this ClickHouse® article:

CREATE TABLE tutorial.hits_v1
(
   <schema>
)
ENGINE = MergeTree()
PARTITION BY EventDate
ORDER BY (CounterID, EventDate, intHash32(UserID))
SAMPLE BY intHash32(UserID)
TTL EventDate + toIntervalDay(dateDiff('day', toDate('2014-03-20'), now())) TO DISK 'object_storage'
SETTINGS index_granularity = 8192
Query with full schema
CREATE TABLE tutorial.hits_v1
(
    `WatchID` UInt64,
    `JavaEnable` UInt8,
    `Title` String,
    `GoodEvent` Int16,
    `EventTime` DateTime,
    `EventDate` Date,
    `CounterID` UInt32,
    `ClientIP` UInt32,
    `ClientIP6` FixedString(16),
    `RegionID` UInt32,
    `UserID` UInt64,
    `CounterClass` Int8,
    `OS` UInt8,
    `UserAgent` UInt8,
    `URL` String,
    `Referer` String,
    `URLDomain` String,
    `RefererDomain` String,
    `Refresh` UInt8,
    `IsRobot` UInt8,
    `RefererCategories` Array(UInt16),
    `URLCategories` Array(UInt16),
    `URLRegions` Array(UInt32),
    `RefererRegions` Array(UInt32),
    `ResolutionWidth` UInt16,
    `ResolutionHeight` UInt16,
    `ResolutionDepth` UInt8,
    `FlashMajor` UInt8,
    `FlashMinor` UInt8,
    `FlashMinor2` String,
    `NetMajor` UInt8,
    `NetMinor` UInt8,
    `UserAgentMajor` UInt16,
    `UserAgentMinor` FixedString(2),
    `CookieEnable` UInt8,
    `JavascriptEnable` UInt8,
    `IsMobile` UInt8,
    `MobilePhone` UInt8,
    `MobilePhoneModel` String,
    `Params` String,
    `IPNetworkID` UInt32,
    `TraficSourceID` Int8,
    `SearchEngineID` UInt16,
    `SearchPhrase` String,
    `AdvEngineID` UInt8,
    `IsArtifical` UInt8,
    `WindowClientWidth` UInt16,
    `WindowClientHeight` UInt16,
    `ClientTimeZone` Int16,
    `ClientEventTime` DateTime,
    `SilverlightVersion1` UInt8,
    `SilverlightVersion2` UInt8,
    `SilverlightVersion3` UInt32,
    `SilverlightVersion4` UInt16,
    `PageCharset` String,
    `CodeVersion` UInt32,
    `IsLink` UInt8,
    `IsDownload` UInt8,
    `IsNotBounce` UInt8,
    `FUniqID` UInt64,
    `HID` UInt32,
    `IsOldCounter` UInt8,
    `IsEvent` UInt8,
    `IsParameter` UInt8,
    `DontCountHits` UInt8,
    `WithHash` UInt8,
    `HitColor` FixedString(1),
    `UTCEventTime` DateTime,
    `Age` UInt8,
    `Sex` UInt8,
    `Income` UInt8,
    `Interests` UInt16,
    `Robotness` UInt8,
    `GeneralInterests` Array(UInt16),
    `RemoteIP` UInt32,
    `RemoteIP6` FixedString(16),
    `WindowName` Int32,
    `OpenerName` Int32,
    `HistoryLength` Int16,
    `BrowserLanguage` FixedString(2),
    `BrowserCountry` FixedString(2),
    `SocialNetwork` String,
    `SocialAction` String,
    `HTTPError` UInt16,
    `SendTiming` Int32,
    `DNSTiming` Int32,
    `ConnectTiming` Int32,
    `ResponseStartTiming` Int32,
    `ResponseEndTiming` Int32,
    `FetchTiming` Int32,
    `RedirectTiming` Int32,
    `DOMInteractiveTiming` Int32,
    `DOMContentLoadedTiming` Int32,
    `DOMCompleteTiming` Int32,
    `LoadEventStartTiming` Int32,
    `LoadEventEndTiming` Int32,
    `NSToDOMContentLoadedTiming` Int32,
    `FirstPaintTiming` Int32,
    `RedirectCount` Int8,
    `SocialSourceNetworkID` UInt8,
    `SocialSourcePage` String,
    `ParamPrice` Int64,
    `ParamOrderID` String,
    `ParamCurrency` FixedString(3),
    `ParamCurrencyID` UInt16,
    `GoalsReached` Array(UInt32),
    `OpenstatServiceName` String,
    `OpenstatCampaignID` String,
    `OpenstatAdID` String,
    `OpenstatSourceID` String,
    `UTMSource` String,
    `UTMMedium` String,
    `UTMCampaign` String,
    `UTMContent` String,
    `UTMTerm` String,
    `FromTag` String,
    `HasGCLID` UInt8,
    `RefererHash` UInt64,
    `URLHash` UInt64,
    `CLID` UInt32,
    `YCLID` UInt64,
    `ShareService` String,
    `ShareURL` String,
    `ShareTitle` String,
    `ParsedParams` Nested(
        Key1 String,
        Key2 String,
        Key3 String,
        Key4 String,
        Key5 String,
        ValueDouble Float64),
    `IslandID` FixedString(16),
    `RequestNum` UInt32,
    `RequestTry` UInt8
)
ENGINE = MergeTree()
PARTITION BY EventDate
ORDER BY (CounterID, EventDate, intHash32(UserID))
SAMPLE BY intHash32(UserID)
TTL EventDate + toIntervalDay(dateDiff('day', toDate('2014-03-20'), now())) TO DISK 'object_storage'
SETTINGS index_granularity = 8192

Note

This table will use the default storage policy.

The TTL ... expression defines a policy for managing aging data (a worked example follows this list):

  1. TTL sets the lifetime of a table row. In our example, it is the number of days from March 20, 2014, to the current date.
  2. The system manages table data based on the EventDate value:
    • Records where the elapsed time (in days) since EventDate is less than the TTL value are retained on the network disk storage.
    • Records where the elapsed time since EventDate meets or exceeds the TTL value are moved to the object storage, as per the TO DISK 'object_storage' policy.
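
As a worked check of this logic, you can evaluate the TTL expression for a sample date directly in the client (ClickHouse® allows reusing a column alias within the same SELECT list):

SELECT
    toDate('2014-03-18') AS event_date,
    event_date + toIntervalDay(dateDiff('day', toDate('2014-03-20'), now())) AS expires_at,
    expires_at <= now() AS moved_to_object_storage

Here expires_at evaluates to two days before the current date, so moved_to_object_storage returns 1. For EventDate values of March 21, 2014 or later, the expiration falls in the future and the rows stay on network disks.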

While specifying TTL for hybrid storage is optional, doing so gives you explicit control over what data is moved to Object Storage. Without a defined TTL, data is only moved to object storage when network disk space is full. For more information, see Managed Service for ClickHouse® storage.

Note

The TTL expression in our example is complex because the test dataset we use requires splitting historical, static data across various storage tiers. For frequently updated tables, you can use a simple TTL expression, such as EventDate + INTERVAL 5 DAY, which moves data to object storage after a five-day period.
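
For example, switching an existing table to such a rolling policy could look like the sketch below; note that modifying TTL may trigger background moves of existing parts:

ALTER TABLE tutorial.hits_v1
    MODIFY TTL EventDate + INTERVAL 5 DAY TO DISK 'object_storage'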

Data is moved between network disk storage and object storage in parts, rather than per row. For optimal performance, align your TTL expression with your partitioning key to ensure that all rows within a part share the same expiration time. This prevents situations where a data part contains a mix of rows destined for different storage tiers, which can block expired data from moving to object storage. At a minimum, the TTL expression should use the same columns as the partitioning key, e.g., EventDate in the example above.

For more details on configuring TTL, see the ClickHouse® guides.

Populate the table with data

  1. Disconnect from the database.

  2. Download the test dataset:

    curl https://storage.yandexcloud.net/doc-files/managed-clickhouse/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv
    

    The downloaded dataset size is about 10 GB.

  3. Load this dataset into ClickHouse® using clickhouse-client:

    clickhouse-client \
        --host <ClickHouse®_host_FQDN> \
        --secure \
        --user <username> \
        --database tutorial \
        --port 9440 \
        --ask-password \
        --query "INSERT INTO tutorial.hits_v1 FORMAT TSV" \
        --max_insert_block_size=100000 < hits_v1.tsv
    

    You can get the host FQDN from the cluster’s host list.

  4. Wait for the operation to complete, as importing the data may take some time.

For more information, see this ClickHouse® article.
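
Once you reconnect to the database in the next section, you can optionally verify the import by counting the rows. The per-tier sums shown in the next section add up to 8,873,898:

SELECT count() FROM tutorial.hits_v1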

Check data placement within a cluster

  1. Connect to the database.

  2. Check the storage location of the table’s data:

    SELECT
        table,
        partition,
        name,
        rows,
        disk_name
    FROM system.parts
    WHERE active AND (table = 'hits_v1') AND (database = 'tutorial')
    

    Partitions with EventDate values older than the TTL threshold (March 20, 2014) should reside on the object_storage disk, while all other partitions should reside on the default disk:

    ┌─table───┬─partition──┬─name───────────────┬───rows─┬─disk_name──────┐
    │ hits_v1 │ 2014-03-17 │ 20140317_6_80_2    │ 571657 │ object_storage │
    │ hits_v1 │ 2014-03-17 │ 20140317_86_125_1  │ 287545 │ object_storage │
    │ ...                                                                 │
    │ hits_v1 │ 2014-03-20 │ 20140320_109_145_1 │ 250484 │ object_storage │
    │ hits_v1 │ 2014-03-20 │ 20140320_149_200_1 │ 420081 │ object_storage │
    │ hits_v1 │ 2014-03-21 │ 20140321_3_57_1    │ 612616 │ default        │
    │ hits_v1 │ 2014-03-21 │ 20140321_65_65_0   │  53382 │ default        │
    │ ...                                                                 │
    │ hits_v1 │ 2014-03-23 │ 20140323_191_191_0 │  11145 │ default        │
    │ hits_v1 │ 2014-03-23 │ 20140323_197_197_0 │  98910 │ default        │
    └─────────┴────────────┴────────────────────┴────────┴────────────────┘
    
  3. Check the number of rows stored on each storage tier:

    SELECT
     sum(rows),
     disk_name
    FROM system.parts
    WHERE active AND (database = 'tutorial') AND (table = 'hits_v1')
    GROUP BY disk_name
    

    The result will show the distribution of table rows across storage tiers:

    ┌─sum(rows)─┬─disk_name──────┐
    │   2711246 │ default        │
    │   6162652 │ object_storage │
    └───────────┴────────────────┘
    

As you can see, the SQL results confirm the successful distribution of data across hybrid storage tiers.
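
To complement the row counts with sizes, you can total the on-disk bytes per tier; this is a quick check using the bytes_on_disk column of system.parts:

SELECT
    disk_name,
    formatReadableSize(sum(bytes_on_disk)) AS size
FROM system.parts
WHERE active AND (database = 'tutorial') AND (table = 'hits_v1')
GROUP BY disk_name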

Run a test query

Run a test query against the tutorial.hits_v1 table, accessing data from multiple storage tiers simultaneously:

SELECT
    URLDomain AS Domain,
    AVG(SendTiming) AS AvgSendTiming
FROM tutorial.hits_v1
WHERE (EventDate >= '2014-03-19') AND (EventDate <= '2014-03-22')
GROUP BY Domain
ORDER BY AvgSendTiming DESC
LIMIT 10

Result:

┌─Domain──────────────────────────────┬──────AvgSendTiming─┐
│ realty.ru.msn.com.travel            │ 101166.85714285714 │
│ podbor.ru.msn.com.uazbukatusprosima │  76429.16666666667 │
│ club.metalia-woman                  │ 64872.333333333336 │
│ avito.rusfootki                     │              51099 │
│ papas.drimmirkvart                  │ 50325.642857142855 │
│ apps.oyunuoyna.com.uazbukadelight   │ 32761.666666666668 │
│ voyeurhit                           │          31501.625 │
│ yandex.ru.com.travesti.net          │            31427.5 │
│ sozcu.com.ua.alm.slands             │              29439 │
│ hasters.ru                          │ 18365.666666666668 │
└─────────────────────────────────────┴────────────────────┘

As the result shows, the user interacts with a single logical table. ClickHouse® handles the complexity of querying data across different storage tiers automatically.
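
To confirm that this date range indeed spans both tiers, you can list the disks holding the partitions the query reads. In system.parts, the partition value for this table is a string such as 2014-03-21, as seen in the output above:

SELECT DISTINCT disk_name
FROM system.parts
WHERE active AND (database = 'tutorial') AND (table = 'hits_v1')
    AND (partition >= '2014-03-19') AND (partition <= '2014-03-22')

With the TTL boundary at March 20, 2014, this query should return both object_storage and default.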

Monitor volume of data in Object Storage (optional step)

To monitor the amount of space MergeTree table parts occupy in Object Storage, use the ch_s3_disk_parts_size metric in Yandex Monitoring.

  1. In the management console, select Monitoring.

  2. Navigate to the Metric Explorer section.

  3. Run this query:

    "ch_s3_disk_parts_size"{service="managed-clickhouse", resource_type="cluster", node="by_host", resource_id="<cluster_ID>", subcluster_name="clickhouse_subcluster"}
    

Delete the resources you created

Delete the resources you no longer need to avoid paying for them:

Manually
Terraform

Delete the Managed Service for ClickHouse® cluster.

  1. In the terminal window, go to the directory containing the infrastructure plan.

    Warning

    Make sure the directory has no Terraform manifests with the resources you want to keep. Terraform deletes all resources that were created using the manifests in the current directory.

  2. Delete resources:

    1. Run this command:

      terraform destroy
      
    2. Confirm deleting the resources and wait for the operation to complete.

    All the resources described in the Terraform manifests will be deleted.

ClickHouse® is a registered trademark of ClickHouse, Inc.
