Creating a ClickHouse® cluster

Written by
Yandex Cloud
Updated at December 10, 2025
  • Roles for creating a cluster
  • Creating a cluster
  • Creating a cluster copy
  • Examples
    • Creating a single-host cluster
    • Creating a multi-host cluster

A ClickHouse® cluster consists of one or more database hosts with configurable replication across them.

Roles for creating a cluster

To create a Managed Service for ClickHouse® cluster, you need the vpc.user and managed-clickhouse.editor roles or higher.

To attach your service account to a cluster, e.g., to use Yandex Object Storage, make sure your Yandex Cloud account has the iam.serviceAccounts.user role or higher.

For more information about assigning roles, see this Yandex Identity and Access Management guide.
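
For example, you can assign the required roles at the folder level with the CLI. This is a minimal sketch: the folder, the user ID, and the choice of subject are placeholders you should adapt to your setup.

# Grant the managed-clickhouse.editor role to a user in the folder that will host the cluster.
yc resource-manager folder add-access-binding <folder_name_or_ID> \
  --role managed-clickhouse.editor \
  --subject userAccount:<user_ID>

# Repeat the command with --role vpc.user to grant the second required role.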

Creating a cluster

  • The available disk types depend on the selected host class.

  • The number of hosts you can create in a ClickHouse® cluster depends on the selected disk type and host class.

  • When using ClickHouse® Keeper, a cluster must consist of three or more hosts. You do not need separate hosts to run ClickHouse® Keeper. You can only create this kind of cluster using the Yandex Cloud CLI or API.

  • When using ZooKeeper, a cluster may consist of two or more hosts. Three additional ZooKeeper hosts will be added to the cluster automatically.

    The minimum number of cores per ZooKeeper host depends on the total number of cores on ClickHouse® hosts. To learn more, see Replication.

    Warning

    ZooKeeper hosts are counted towards the cloud resource quota and the cluster cost.

Database connections to the cluster are managed by Connection Manager. When you create a cluster, the following are created automatically:

  • A Connection Manager connection containing the database connection details.

  • A Yandex Lockbox secret that stores the database owner's password. Storing passwords in Yandex Lockbox keeps them secure.

A connection and a secret will also be created for each new database user. To view all connections, select the Connections tab on the cluster page.

You need the connection-manager.viewer role to view connection info. You can use Connection Manager to configure access to connections.

Connection Manager and the secrets you create in it are free of charge.

Management console
CLI
Terraform
REST API
gRPC API

To create a Managed Service for ClickHouse® cluster:

  1. In the management console, select the folder where you want to create a database cluster.

  2. Select Managed Service for ClickHouse.

  3. Click Create cluster.

  4. Specify the cluster name in the Cluster name field. It must be unique within the folder.

  5. Select the environment where you want to create your cluster (you cannot change the environment once the cluster is created):

    • PRODUCTION: For stable versions of your applications.
    • PRESTABLE: For testing purposes. The prestable environment is similar to the production environment and likewise covered by an SLA, but it is the first to get new features, improvements, and bug fixes. In the prestable environment, you can test new versions for compatibility with your application.
  6. In the Version drop-down list, select the ClickHouse® version the Managed Service for ClickHouse® cluster will use. For most clusters, we recommend selecting the latest LTS version.

  7. If you plan to use data from an Object Storage bucket with restricted access, select a service account from the drop-down list or create a new one. For more information about setting up a service account, see Configuring access to Object Storage.

  8. Under Resources:

    • Select the platform, VM type, and host class. The latter determines the technical specifications of the VMs the database hosts will be deployed on. All available options are listed under Host classes. When you change the host class for a cluster, the specifications of all existing instances also change.

    • Select the disk type.

      Warning

      You cannot change disk type after you create a cluster.

      The selected type determines the increments in which you can change your disk size:

      • Network HDD and SSD storage: In increments of 1 GB.
      • Local SSD storage:
        • For Intel Broadwell and Intel Cascade Lake: In increments of 100 GB.
        • For Intel Ice Lake: In increments of 368 GB.
      • Non-replicated SSDs and ultra high-speed network SSDs with three replicas: In increments of 93 GB.
    • Select the size of your data and backup disk. For more information on how backups take up storage space, see Backups.

    • Optionally, configure the automatic storage expansion for a ClickHouse® subcluster:

      • In the Increase size field, select one or both thresholds:
        • In the maintenance window when full at more than: Scheduled expansion threshold. When reached, the storage expands during the next maintenance window.

          For a scheduled expansion, you need to set up a maintenance window schedule.

        • Immediately when full at more than: Immediate expansion threshold. When reached, the storage expands immediately.

      • Specify a threshold value (as a percentage of the total storage size). If you select both thresholds, make sure the immediate expansion threshold is not less than the scheduled one.
      • Set Maximum storage size.

      The automatic storage expansion settings for a ClickHouse® subcluster apply to all existing shards within the subcluster. If you add a new shard, it will use the settings of the oldest shard.

    • Optionally, select Encrypted disk to encrypt the disk with a custom KMS key.

      • To create a new key, click Create.

      • To use the key you created earlier, select it in the KMS key field.

      To learn more about disk encryption, see Storage.

  9. Under ZooKeeper host class:

    • Optionally, configure the automatic storage expansion for a ZooKeeper subcluster:

      • In the Increase size field, select one or both thresholds:
        • In the maintenance window when full at more than: Scheduled expansion threshold. When reached, the storage expands during the next maintenance window.

          For a scheduled expansion, you need to set up a maintenance window schedule.

        • Immediately when full at more than: Immediate expansion threshold. When reached, the storage expands immediately.

      • Specify a threshold value (as a percentage of the total storage size). If you select both thresholds, make sure the immediate expansion threshold is not less than the scheduled one.
      • Set Maximum storage size.
  10. Under DBMS settings:

    • If you want to manage cluster users via SQL, select Enabled from the drop-down list in the User management via SQL field and enter the admin password. This disables user management through other interfaces.

      Otherwise, select Disabled.

    • If you want to manage databases via SQL, select Enabled from the drop-down list in the Managing databases via SQL field. This disables database management through other interfaces. This field is inactive if user management via SQL is disabled.

      Otherwise, select Disabled.

      Warning

      You cannot disable settings for user or database management via SQL once they are enabled. You can enable them as required later when reconfiguring your cluster.

    • Specify a username.

      The username may contain Latin letters, numbers, hyphens, and underscores but must begin with a letter or underscore. The name may be up to 32 characters long.

    • Specify a user password:

      • Enter manually: Select this option to set your own password. It must be from 8 to 128 characters long.

      • Generate: Select this option to generate a password using Connection Manager.

      To view the password after creating a cluster, select the Users tab and click View password for the relevant user. This will open the page of the Yandex Lockbox secret containing the password. To view passwords, you need the lockbox.payloadViewer role.

    • Specify a DB name. The database name may contain Latin letters, numbers, and underscores. It may be up to 63 characters long. You cannot create a database named default.

    • Select the database engine:

      • Atomic (default) supports non-blocking DROP TABLE and RENAME TABLE operations and atomic EXCHANGE TABLES operations.

      • Replicated supports table metadata replication across all database replicas. The set of tables and their schemas will be the same for all replicas.

        It is only available in replicated clusters.

      You set the engine when creating a database and cannot change it for this database.

    • Enable hybrid storage for the cluster, if required.

      Warning

      You cannot disable this option.

    • Configure the DBMS, if required. You can do it later.

      Using the Yandex Cloud interfaces, you can manage a limited number of settings. Using SQL queries, you can apply ClickHouse® settings at the query level.

  11. Under Network settings, select the cloud network to host your cluster and security groups for cluster network traffic. You may need to set up security groups to be able to connect to the cluster.

  12. Under Hosts, select the parameters of the database hosts created along with the cluster. To change the host settings, click the icon next to the host number:

    • Availability zone: Select the availability zone.

    • Subnet: Specify the subnet in the selected availability zone.

    • Public access: Allow access to the host from the internet.

      Warning

      For a more secure cluster with public host access enabled, use only trusted IP addresses or subnets in the cluster's security group rules. Learn more in Configuring security groups.

    To add hosts to your cluster, click Add host.

  13. Specify the cluster service settings, if required:

    • Backup start time (UTC): Time interval during which the cluster backup starts. Time is specified in 24-hour UTC format. The default time is 22:00 - 23:00 UTC.

    • Retention period for automatic backups, days: Retention period for automatic backups, in days. Backups are automatically deleted once their retention period expires. The default is 7 days. For more information, see Backups.

      Changing the retention period affects both new and existing automatic backups. For example, if the original retention period was 7 days and a backup has 1 day of its lifetime remaining, increasing the retention period to 9 days extends that backup's remaining lifetime to 3 days.

    • Maintenance window: Maintenance window settings:

      • To enable maintenance at any time, select arbitrary (default).
      • To specify the preferred maintenance start time, select by schedule and specify the desired day of the week and UTC hour. For example, you can choose a time when the cluster is least loaded.

      Maintenance operations are carried out on both running and stopped clusters. They may include updating the DBMS, applying patches, and so on.

    • DataLens access: This option enables you to analyze cluster data in Yandex DataLens.

    • WebSQL access: This option enables you to run SQL queries against cluster databases from the Yandex Cloud management console using Yandex WebSQL.

    • Access from Metrica and AppMetrica: This option enables you to import data from AppMetrica to a cluster.

    • Serverless access: Enable this option to allow cluster access from Yandex Cloud Functions. For more information about setting up access, see this Cloud Functions guide.

    • Yandex Query access: Enable this option to allow cluster access from Yandex Query. This feature is at the Preview stage.

    • Deletion protection: Manages cluster protection against accidental deletion.

      Even with cluster deletion protection enabled, one can still delete a user or database or connect manually and delete the database contents.

  14. Click Create cluster.

If you do not have the Yandex Cloud CLI installed yet, install and initialize it.

By default, the CLI uses the folder specified when creating the profile. To change the default folder, use the yc config set folder-id <folder_ID> command. You can also set a different folder for any specific command using the --folder-name or --folder-id parameter.

To create a Managed Service for ClickHouse® cluster:

  1. Check whether the folder has any subnets for the cluster hosts:

    yc vpc subnet list
    

    If your folder has no subnets, create them in VPC.

  2. View the description of the CLI command for creating a cluster:

    yc managed-clickhouse cluster create --help
    
  3. In this command, specify the cluster properties (our example does not use all available parameters):

    yc managed-clickhouse cluster create \
      --name <cluster_name> \
      --environment <environment> \
      --network-name <network_name> \
      --host type=<host_type>,`
           `zone-id=<availability_zone>,`
           `subnet-id=<subnet_ID>,`
           `assign-public-ip=<public_access_to_host> \
      --clickhouse-resource-preset <host_class> \
      --clickhouse-disk-type <network-hdd|network-ssd|network-ssd-nonreplicated|local-ssd> \
      --clickhouse-disk-size <storage_size_in_GB> \
      --user name=<username>,password=<user_password> \
      --database name=<DB_name> \
      --security-group-ids <list_of_security_group_IDs> \
      --websql-access=<true_or_false> \
      --deletion-protection
    

    You need to specify the subnet-id if the selected availability zone has two or more subnets.

    Where:

    • --environment: Cluster environment, prestable or production.

    • --host: Host settings:

      • type: Host type, clickhouse or zookeeper.

      • zone-id: Availability zone.

      • assign-public-ip: Internet access to the host via a public IP address, true or false.

        Warning

        For a more secure cluster with public host access enabled, use only trusted IP addresses or subnets in the cluster's security group rules. Learn more in Configuring security groups.

    • --clickhouse-disk-type: Disk type.

      Warning

      You cannot change disk type after you create a cluster.

    • --user: Contains the ClickHouse® user name and password.

      The username may contain Latin letters, numbers, hyphens, and underscores but must begin with a letter or underscore. The name may be up to 32 characters long.

      The password must be from 8 to 128 characters long.

      Note

      You can also generate a password using Connection Manager. To do this, edit the command, specifying user properties as follows:

        --user name=<username>,generate-password=true
      

      To view the password, select your cluster in the management console, navigate to the Users tab, and click View password for the relevant user. This will open the page of the Yandex Lockbox secret containing the password. To view passwords, you need the lockbox.payloadViewer role.

    • --websql-access: Enables SQL queries against cluster databases from the Yandex Cloud management console using Yandex WebSQL. The default value is false.

    • --deletion-protection: Cluster protection from accidental deletion, true or false.

      Even with cluster deletion protection enabled, one can still delete a user or database or connect manually and delete the database contents.
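
    For illustration, here is a filled-in variant of the command above. All values are samples, not defaults or recommendations; replace them with your own:

    yc managed-clickhouse cluster create \
      --name mych \
      --environment production \
      --network-name default \
      --host type=clickhouse,zone-id=ru-central1-a,subnet-id=<subnet_ID>,assign-public-ip=false \
      --clickhouse-resource-preset s2.micro \
      --clickhouse-disk-type network-ssd \
      --clickhouse-disk-size 32 \
      --user name=user1,password=<user_password> \
      --database name=db1 \
      --websql-access=true \
      --deletion-protection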

    You can manage cluster users and databases via SQL.

    Warning

    You cannot disable settings for user or database management via SQL once they are enabled. You can enable them as required later when reconfiguring your cluster.

    1. To enable user management via SQL:

      • Set --enable-sql-user-management to true.
      • Set a password for admin in the --admin-password parameter.
      yc managed-clickhouse cluster create \
        ...
        --enable-sql-user-management true \
        --admin-password "<admin_password>"
      
    2. To enable database management via SQL:

      • Set --enable-sql-user-management and --enable-sql-database-management to true.
      • Set a password for admin in the --admin-password parameter.
      yc managed-clickhouse cluster create \
        ...
        --enable-sql-user-management true \
        --enable-sql-database-management true \
        --admin-password "<admin_password>"
      
    3. To encrypt the disk with a custom KMS key, provide --disk-encryption-key-id <KMS_key_ID>.

      To learn more about disk encryption, see Storage.

    4. To allow access to the cluster from Yandex Cloud Functions, provide the --serverless-access parameter. For more information about setting up access, see this Cloud Functions guide.

    5. To allow access to the cluster from Yandex Query, provide the --yandexquery-access=true parameter. This feature is at the Preview stage.

    6. To enable ClickHouse® Keeper in your cluster, set --embedded-keeper to true.

      yc managed-clickhouse cluster create \
        ...
        --embedded-keeper true
      

      Alert

      You cannot disable ClickHouse® Keeper after creating a cluster. You will not be able to use ZooKeeper hosts either.

    7. To configure hybrid storage:

      • Set --cloud-storage to true to enable hybrid storage.

        Note

        Once hybrid storage is enabled, you cannot disable it.

      • Provide the hybrid storage settings in the relevant parameters:

        • --cloud-storage-data-cache: Enables caching files in the cluster storage. The default value is true (enabled).
        • --cloud-storage-data-cache-max-size: Sets the maximum cache size, in bytes, allocated in the cluster storage. The default value is 1073741824 (1 GB).
        • --cloud-storage-move-factor: Sets the minimum percentage of free space in the cluster storage. If your free space percentage is below this value, the data will be moved to Yandex Object Storage. The minimum value is 0, the maximum value is 1, and the default value is 0.01.
        • --cloud-storage-prefer-not-to-merge: Disables merging of data parts in cluster and object storages. To disable merging, set to true or provide this setting without a value. To leave merging enabled, set to false or do not provide this setting in your CLI command when creating a cluster.
      yc managed-clickhouse cluster create \
         ...
         --cloud-storage=true \
         --cloud-storage-data-cache=<file_storage> \
         --cloud-storage-data-cache-max-size=<memory_size_in_bytes> \
         --cloud-storage-move-factor=<share_of_free_space> \
         --cloud-storage-prefer-not-to-merge=<merging_data_parts>
        ...
      

      Where:

      • --cloud-storage-data-cache: Set to store files in a cluster storage, true or false.
      • --cloud-storage-prefer-not-to-merge: Disables merging of data parts in a cluster and object storage, true or false.
    8. To set up automatic storage expansion for ClickHouse® and ZooKeeper subclusters, use the --disk-size-autoscaling flag:

      yc managed-clickhouse cluster create \
        ...
        --disk-size-autoscaling clickhouse-disk-size-limit=<maximum_storage_size_in_GB>,`
                               `clickhouse-planned-usage-threshold=<threshold_for_scheduled_increase_in_percent>,`
                               `clickhouse-emergency-usage-threshold=<threshold_for_immediate_increase_in_percent>,`
                               `zookeeper-disk-size-limit=<maximum_storage_size_in_GB>,`
                               `zookeeper-planned-usage-threshold=<threshold_for_scheduled_increase_in_percent>,`
                               `zookeeper-emergency-usage-threshold=<threshold_for_immediate_increase_in_percent>
        ...
      

      Where --disk-size-autoscaling defines the automatic storage expansion settings:

      • clickhouse-disk-size-limit: Maximum storage size for a ClickHouse® subcluster, in GB.

      • clickhouse-planned-usage-threshold: ClickHouse® subcluster storage utilization threshold to trigger a storage expansion during the next maintenance window, in percent. The default value is 0 (automatic expansion disabled).

        The valid values range from 0 to 100.

      • clickhouse-emergency-usage-threshold: ClickHouse® subcluster storage utilization threshold to trigger an immediate storage expansion, in percent. The default value is 0 (automatic expansion disabled).

        The valid values range from 0 to 100.

      • zookeeper-disk-size-limit: Maximum storage size for a ZooKeeper subcluster, in GB.

      • zookeeper-planned-usage-threshold: ZooKeeper subcluster storage utilization threshold to trigger a storage expansion during the next maintenance window, in percent. The default value is 0 (automatic expansion disabled).

        The valid values range from 0 to 100.

      • zookeeper-emergency-usage-threshold: ZooKeeper subcluster storage utilization threshold to trigger an immediate storage expansion, in percent. The default value is 0 (automatic expansion disabled).

        The valid values range from 0 to 100.

      Warning

      • If you specify both thresholds for a ClickHouse® subcluster, clickhouse-emergency-usage-threshold must not be less than clickhouse-planned-usage-threshold.

      • If you specify both thresholds for a ZooKeeper subcluster, zookeeper-emergency-usage-threshold must not be less than zookeeper-planned-usage-threshold.

      • When using clickhouse-planned-usage-threshold and zookeeper-planned-usage-threshold, make sure to set up a maintenance window.

      Autoscaling settings configured for a ClickHouse® subcluster apply to all existing shards within the subcluster. If you add a new shard, it will use the settings of the oldest shard. These values are not saved in the subcluster configuration and are not displayed in the yc managed-clickhouse cluster get command output.

      To view information about a specific shard, including its autoscaling settings, use this command:

      yc managed-clickhouse shards get <shard_name> --cluster-id <cluster_ID>
      

      You can get the cluster ID with the list of clusters in the folder.

      You can get the shard name with the list of shards in the cluster.

    9. To set a maintenance window, use the --maintenance-window flag:

      yc managed-clickhouse cluster create \
        ...
        --maintenance-window type=<maintenance_type>,`
                            `hour=<hour>,`
                            `day=<day_of_week>
        ...
      

      Where --maintenance-window defines the maintenance window settings:

      • type: Maintenance window type. Valid values:

        • anytime: Any time (default).
        • weekly: On schedule. To use this value, you need to provide hour and day.
      • hour: Time of day (UTC). The valid values range from 1 to 24.

      • day: Day of week. The valid values are MON, TUE, WED, THU, FRI, SAT, and SUN.
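
      For example, to schedule maintenance for Saturdays at 10:00 UTC (the day and hour here are purely illustrative):

      yc managed-clickhouse cluster create \
        ...
        --maintenance-window type=weekly,hour=10,day=SAT
        ...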

With Terraform, you can quickly create a cloud infrastructure in Yandex Cloud and manage it using configuration files. These files store the infrastructure description written in HashiCorp Configuration Language (HCL). If you change the configuration files, Terraform automatically detects which part of your configuration is already deployed, and what should be added or removed.

Terraform is distributed under the Business Source License. The Yandex Cloud provider for Terraform is distributed under the MPL-2.0 license.

For more information about the provider resources, see the relevant documentation on the Terraform website or its mirror.

To create a Managed Service for ClickHouse® cluster:

  1. In the command line, navigate to the directory that will contain the Terraform configuration files with the infrastructure plan. If there is no such directory, create one.

  2. If you do not have Terraform yet, install it and configure the Yandex Cloud provider.

  3. Create a configuration file describing the cloud network and subnets.

    • Network: Description of the cloud network to host the cluster. If you already have a network in place, you do not need to describe it again.
    • Subnets: Description of the subnets to connect the cluster hosts to. If you already have subnets in place, you do not need to describe them again.

    Below is an example of a configuration file describing a single-subnet cloud network:

    resource "yandex_vpc_network" "<network_name_in_Terraform>" { name = "<network_name>" }
    
    resource "yandex_vpc_subnet" "<subnet_name_in_Terraform>" {
      name           = "<subnet_name>"
      zone           = "<availability_zone>"
      network_id     = yandex_vpc_network.<network_name_in_Terraform>.id
      v4_cidr_blocks = ["<subnet>"]
    }
    
  4. Create a configuration file describing the cluster resources to create:

    • Database cluster: Description of the cluster and its hosts. Optionally, here:

      • Specify the DBMS server-level settings. You can also provide them later.

      • Enable cluster protection against accidental deletion.

        Even with cluster deletion protection enabled, one can still delete a user or database or connect manually and delete the database contents.

    • Database: Cluster database description.

    • User: Cluster user description. Optionally, specify the DBMS user-level settings here. You can also provide them later.

      Using the Yandex Cloud interfaces, you can manage a limited number of settings. Using SQL queries, you can apply ClickHouse® settings at the query level.

    Below is an example structure of a configuration file describing a single-host cluster:

    resource "yandex_mdb_clickhouse_cluster" "<cluster_name>" {
      name                = "<cluster_name>"
      environment         = "<environment>"
      network_id          = yandex_vpc_network.<network_name_in_Terraform>.id
      security_group_ids  = ["<list_of_security_group_IDs>"]
      deletion_protection = <cluster_deletion_protection>
    
      clickhouse {
        resources {
          resource_preset_id = "<host_class>"
          disk_type_id       = "<disk_type>"
          disk_size          = <storage_size_in_GB>
        }
      }
    
      host {
        type             = "CLICKHOUSE"
        zone             = "<availability_zone>"
        subnet_id        = yandex_vpc_subnet.<subnet_name_in_Terraform>.id
        assign_public_ip = <public_access_to_host>
      }
    
      lifecycle {
        ignore_changes = [database, user]
      }
    }
    
    resource "yandex_mdb_clickhouse_database" "<DB_name>" {
      cluster_id = yandex_mdb_clickhouse_cluster.<cluster_name>.id
      name       = "<DB_name>"
    }
    
    resource "yandex_mdb_clickhouse_user" "<username>" {
      cluster_id = yandex_mdb_clickhouse_cluster.<cluster_name>.id
      name       = "<username>"
      password   = "<user_password>"
      permission {
        database_name = yandex_mdb_clickhouse_database.<DB_name>.name
      }
      settings {
        <parameter_1_name> = <value_1>
        <parameter_2_name> = <value_2>
        ...
      }
    }
    

    Where:

    • deletion_protection: Cluster protection against accidental deletion, true or false.

    • assign_public_ip: Public access to the host, true or false.

      Warning

      For a more secure cluster with public host access enabled, use only trusted IP addresses or subnets in the cluster's security group rules. Learn more in Configuring security groups.

    • lifecycle.ignore_changes: Eliminates resource conflicts in operations with users and databases created through individual resources.

    For a user, specify the following:

    • name and password: ClickHouse® username and password, respectively.

      The username may contain Latin letters, numbers, hyphens, and underscores but must begin with a letter or underscore. The name may be up to 32 characters long.

      The password must be from 8 to 128 characters long.

      Note

      You can also generate a password using Connection Manager. To do this, specify generate_password = true instead of password = "<user_password>".

      To view the password, select your cluster in the management console, navigate to the Users tab, and click View password for the relevant user. This will open the page of the Yandex Lockbox secret containing the password. To view passwords, you need the lockbox.payloadViewer role.

    1. To enable access from other services and allow running SQL queries from the management console using Yandex WebSQL, add the access section with the settings you need:

      resource "yandex_mdb_clickhouse_cluster" "<cluster_name>" {
        ...
        access {
          data_lens    = <access_from_DataLens>
          metrika      = <access_from_Metrica_and_AppMetrica>
          serverless   = <access_from_Cloud_Functions>
          yandex_query = <access_from_Yandex_Query>
          web_sql      = <run_SQL_queries_from_management_console>
        }
        ...
      }
      

      Where:

      • data_lens: Access from DataLens, true or false.

      • metrika: Access from Yandex Metrica and AppMetrica, true or false.

      • serverless: Access from Cloud Functions, true or false.

      • yandex_query: Access from Yandex Query, true or false.

      • web_sql: Running SQL queries from the management console, true or false.

    2. You can manage cluster users and databases via SQL.

      Warning

      You cannot disable settings for user or database management via SQL once they are enabled. You can enable them as required later when reconfiguring your cluster.

      • To enable user management via SQL, add the sql_user_management field set to true as well as the admin_password field with the admin password to the cluster description:

        resource "yandex_mdb_clickhouse_cluster" "<cluster_name>" {
          name                = "<cluster_name>"
          ...
          admin_password      = "<admin_password>"
          sql_user_management = true
          ...
        }
        
      • To enable database management via SQL, add the sql_user_management and sql_database_management fields set to true as well as the admin_password field with the admin password to the cluster description:

        resource "yandex_mdb_clickhouse_cluster" "<cluster_name>" {
          name                    = "<cluster_name>"
          ...
          admin_password          = "<admin_password>"
          sql_database_management = true
          sql_user_management     = true
          ...
        }
        
    3. To encrypt the disk with a custom KMS key, add the disk_encryption_key_id parameter:

      resource "yandex_mdb_clickhouse_cluster" "<cluster_name>" {
        ...
        disk_encryption_key_id = "<KMS_key_ID>"
        ...
      }
      

      To learn more about disk encryption, see Storage.

    For more information about the resources you can create with Terraform, see this provider guide.

  5. Make sure the Terraform configuration files are correct:

    1. In the command line, navigate to the directory that contains the current Terraform configuration files defining the infrastructure.

    2. Run this command:

      terraform validate
      

      Terraform will show any errors found in your configuration files.

  6. Create a cluster:

    1. Run this command to view the planned changes:

      terraform plan
      

      If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

    2. If everything looks correct, apply the changes:

      1. Run this command:

        terraform apply
        
      2. Confirm updating the resources.

      3. Wait for the operation to complete.

    All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.
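
    You can also check the new cluster from the CLI, e.g.:

    yc managed-clickhouse cluster list
    yc managed-clickhouse cluster get <cluster_name>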

Timeouts

The Terraform provider sets the following timeouts for Managed Service for ClickHouse® cluster operations:

  • Creating a cluster, including by restoring from a backup: 60 minutes.
  • Updating a cluster: 90 minutes.
  • Deleting a cluster: 30 minutes.

Operations exceeding the timeout are aborted.

How do I change these limits?

Add a timeouts section to the cluster description, e.g.:

resource "yandex_mdb_clickhouse_cluster" "<cluster_name>" {
  ...
  timeouts {
    create = "1h30m" # 1 hour 30 minutes
    update = "2h"    # 2 hours
    delete = "30m"   # 30 minutes
  }
}
  1. Get an IAM token for API authentication and put it in an environment variable:

    export IAM_TOKEN="<IAM_token>"
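    
    # Alternatively (a sketch, assuming the yc CLI is installed and configured),
    # you can obtain the token directly from the CLI:
    export IAM_TOKEN=$(yc iam create-token)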
    
  2. Call the Cluster.Create method, e.g., via the following cURL request:

    1. Create a file named body.json and paste the following code into it:

      Note

      This example does not use all available parameters.

      {
        "folderId": "<folder_ID>",
        "name": "<cluster_name>",
        "environment": "<environment>",
        "networkId": "<network_ID>",
        "securityGroupIds": [
          "<security_group_1_ID>",
          "<security_group_2_ID>",
          ...
          "<security_group_N_ID>"
        ],
        "configSpec": {
          "version": "<ClickHouse®_version>",
          "embeddedKeeper": <use_ClickHouse®_Keeper>,
          "clickhouse": {
            "resources": {
              "resourcePresetId": "<ClickHouse®_host_class>",
              "diskSize": "<storage_size_in_bytes>",
              "diskTypeId": "<disk_type>"
            },
            "diskSizeAutoscaling": {
              "plannedUsageThreshold": "<threshold_for_scheduled_increase_in_percent>",
              "emergencyUsageThreshold": "<threshold_for_immediate_increase_in_percent>",
              "diskSizeLimit": "<maximum_storage_size_in_bytes>"
            }
          },
          "zookeeper": {
            "resources": {
              "resourcePresetId": "<ZooKeeper_host_class>",
              "diskSize": "<storage_size_in_bytes>",
              "diskTypeId": "<disk_type>"
            },
            "diskSizeAutoscaling": {
              "plannedUsageThreshold": "<threshold_for_scheduled_increase_in_percent>",
              "emergencyUsageThreshold": "<threshold_for_immediate_increase_in_percent>",
              "diskSizeLimit": "<maximum_storage_size_in_bytes>"
            }
          },
          "access": {
            "dataLens": <access_from_DataLens>,
            "webSql": <run_SQL_queries_from_management_console>,
            "metrika": <access_from_Metrica_and_AppMetrica>,
            "serverless": <access_from_Cloud_Functions>,
            "dataTransfer": <access_from_Data_Transfer>,
            "yandexQuery": <access_from_Yandex_Query>
          },
          "cloudStorage": {
            "enabled": <hybrid_storage_use>,
            "moveFactor": "<share_of_free_space>",
            "dataCacheEnabled": <temporary_file_storage>,
            "dataCacheMaxSize": "<maximum_memory_for_file_storage>",
            "preferNotToMerge": <disabling_data_part_merging>
          },
          "adminPassword": "<admin_password>",
          "sqlUserManagement": <user_management_via_SQL>,
          "sqlDatabaseManagement": <database_management_via_SQL>
        },
        "databaseSpecs": [
          {
            "name": "<DB_name>",
            "engine": "<database_engine>"
          },
          { <similar_settings_for_database_2> },
          { ... },
          { <similar_settings_for_database_N> }
        ],
        "userSpecs": [
          {
            "name": "<username>",
            "password": "<user_password>",
            "permissions": [
              {
                "databaseName": "<DB_name>"
              }
            ]
          },
          { <similar_settings_for_user_2> },
          { ... },
          { <similar_settings_for_user_N> }
        ],
        "hostSpecs": [
          {
            "zoneId": "<availability_zone>",
            "type": "<host_type>",
            "subnetId": "<subnet_ID>",
            "assignPublicIp": <public_access_to_host>,
            "shardName": "<shard_name>"
          },
          { <similar_settings_for_host_2> },
          { ... },
          { <similar_settings_for_host_N> }
        ],
        "deletionProtection": <cluster_deletion_protection>,
        "maintenanceWindow": {
          "weeklyMaintenanceWindow": {
            "day": "<day_of_week>",
            "hour": "<hour>"
          }
        }
      }
      

      Where:

      • name: Cluster name.

      • environment: Cluster environment, PRODUCTION or PRESTABLE.

      • networkId: ID of the network to host the cluster.

      • securityGroupIds: Security group IDs as an array of strings. Each string is a security group ID.

      • configSpec: Cluster configuration:

        • version: ClickHouse® version, 24.8, 25.3, 25.4, 25.5, or 25.6.

        • embeddedKeeper: Using ClickHouse® Keeper instead of ZooKeeper, true or false.

          This setting determines how replication will be managed in a multi-host ClickHouse® cluster:

          • If true, ClickHouse® Keeper will manage the replication.

            Alert

            You cannot disable ClickHouse® Keeper after creating a cluster. You will not be able to use ZooKeeper hosts either.

          • If not specified or false, ZooKeeper will manage the replication and query distribution.

        • clickhouse: ClickHouse® configuration:

          • resources.resourcePresetId: Host class ID. You can get the list of available host classes with their IDs using the ResourcePreset.list method.

          • resources.diskSize: Disk size, in bytes.

          • resources.diskTypeId: Disk type.

          • diskSizeAutoscaling: Automatic storage expansion settings for a ClickHouse® subcluster:

            • plannedUsageThreshold: ClickHouse® subcluster storage utilization threshold to trigger a storage expansion during the next maintenance window, in percent. The default value is 0 (automatic expansion disabled).

              The valid values range from 0 to 100.

            • emergencyUsageThreshold: ClickHouse® subcluster storage utilization threshold to trigger an immediate storage expansion, in percent. The default value is 0 (automatic expansion disabled).

              The valid values range from 0 to 100.

            • diskSizeLimit: Maximum storage size for a ClickHouse® subcluster, in bytes.

            Warning

            • If you specify both thresholds, emergencyUsageThreshold must not be less than plannedUsageThreshold.

            • When using plannedUsageThreshold, make sure to set up a maintenance window.

            Autoscaling settings configured for a ClickHouse® subcluster apply to all existing shards within the subcluster. If you add a new shard, it will use the settings of the oldest shard. These values are not saved in the subcluster configuration.

            To view information about a specific shard, including autoscaling settings, use the Cluster.GetShard method and provide the cluster ID and shard name in the request.

            You can get the cluster ID with the list of clusters in the folder.

            You can get the shard name with the list of shards in the cluster.

        • zookeeper: ZooKeeper configuration:

          Warning

          If you enabled ClickHouse® Keeper by setting embeddedKeeper: true, the ZooKeeper configuration in configSpec will not be applied.

          • resources.resourcePresetId: Host class ID. You can get the list of available host classes with their IDs using the ResourcePreset.list method.

          • resources.diskSize: Disk size, in bytes.

          • resources.diskTypeId: Disk type.

          • diskSizeAutoscaling: Automatic storage expansion settings for a ZooKeeper subcluster:

            • plannedUsageThreshold: ZooKeeper subcluster storage utilization threshold to trigger a storage expansion during the next maintenance window, in percent. The default value is 0 (automatic expansion disabled).

              The valid values range from 0 to 100.

            • emergencyUsageThreshold: ZooKeeper subcluster storage utilization threshold to trigger an immediate storage expansion, in percent. The default value is 0 (automatic expansion disabled).

              The valid values range from 0 to 100.

            • diskSizeLimit: Maximum storage size for a ZooKeeper subcluster, in bytes.

            Warning

            • If you specify both thresholds, emergencyUsageThreshold must not be less than plannedUsageThreshold.

            • When using plannedUsageThreshold, make sure to set up a maintenance window.

        • access: Settings enabling cluster access from other services and running SQL queries from the management console using Yandex WebSQL:

          • dataLens: Enable access from DataLens, true or false. The default value is false. For more information about setting up a connection, see Connecting from DataLens.

          • webSql: Enable SQL queries against cluster databases from the Yandex Cloud management console using Yandex WebSQL, true or false. The default value is false.

          • metrika: Enable data import from AppMetrica to your cluster, true or false. The default value is false.

          • serverless: Enable access to the cluster from Yandex Cloud Functions, true or false. The default value is false. For more information about setting up access, see this Cloud Functions guide.

          • dataTransfer: Enable access to the cluster from Yandex Data Transfer in Serverless mode, true or false. The default value is false.

            This enables connections to Yandex Data Transfer running in Kubernetes over a special network, which speeds up operations such as transfer launch and deactivation.

          • yandexQuery: Enable access to the cluster from Yandex Query, true or false. This feature is at the Preview stage. The default value is false.

        • cloudStorage: Hybrid storage settings:

          • enabled: Enable hybrid storage in the cluster if it is disabled, true or false. The default value is false (disabled).

            Note

            Once hybrid storage is enabled, you cannot disable it.

          • moveFactor: Minimum percentage of free space in the cluster storage. If your free space percentage is below this value, the data will be moved to Yandex Object Storage.

            The minimum value is 0, the maximum value is 1, and the default value is 0.01.

          • dataCacheEnabled: Enable caching files in the cluster storage, true or false.

            The default value is true (enabled).

          • dataCacheMaxSize: Maximum cache size, in bytes, allocated in the cluster storage.

            The default value is 1073741824 (1 GB).

          • preferNotToMerge: Disable merging of data parts in the cluster and object storage, true or false.

            To disable merging, set to true. To leave merging enabled, set to false.

        • sql... and adminPassword: Group of settings for user and database management via SQL:

          • adminPassword: admin password.
          • sqlUserManagement: User management via SQL, true or false.
          • sqlDatabaseManagement: Database management via SQL, true or false. For that, you also need to enable user management via SQL.

          Warning

          You cannot disable settings for user or database management via SQL once they are enabled. You can enable them as required later when reconfiguring your cluster.

      • databaseSpecs: Database settings as an array of elements, one per database. Each element has the following structure:

        • name: Database name.

        • engine: Database engine. The possible values are:

          • DATABASE_ENGINE_ATOMIC (default): Atomic engine; supports non-blocking DROP TABLE and RENAME TABLE queries, and atomic EXCHANGE TABLES queries.

          • DATABASE_ENGINE_REPLICATED: Replicated engine; supports table metadata replication across all database replicas. The set of tables and their schemas will be the same for all replicas.

            It is only available in replicated clusters.

          • DATABASE_ENGINE_UNSPECIFIED: This value will set the default engine, i.e., DATABASE_ENGINE_ATOMIC.

          You set the engine when creating a database and cannot change it for this database.

      • userSpecs: User settings as an array of elements, one per user. Each element has the following structure:

        • name: Username. It may contain Latin letters, numbers, hyphens, and underscores, and must start with a letter or underscore. The name may be up to 32 characters long.

        • password: User password. The password must be from 8 to 128 characters long.

          You can also generate a password using Connection Manager. To do this, specify "generatePassword": true instead of "password": "<user_password>".

          To view the password, select your cluster in the management console, navigate to the Users tab, and click View password for the relevant user. This will open the page of the Yandex Lockbox secret containing the password. To view passwords, you need the lockbox.payloadViewer role.

        • permissions: List of databases the user should have access to.

          The list appears as an array of databaseName parameters. Each parameter contains the name of a separate database.

      • hostSpecs: Cluster host settings as an array of elements, one per host. Each element has the following structure:

        • type: Host type, CLICKHOUSE or ZOOKEEPER.

          If you enabled ClickHouse® Keeper by setting embeddedKeeper: true, specify only ClickHouse® host settings in hostSpecs.

        • zoneId: Availability zone.

        • subnetId: Subnet ID.

        • shardName: Shard name. This setting is only relevant for CLICKHOUSE hosts.

        • assignPublicIp: Internet access to the host via a public IP address, true or false.

          Warning

          For a more secure cluster with public host access enabled, use only trusted IP addresses or subnets in the cluster's security group rules. Learn more in Configuring security groups.

        If you are creating a multi-host cluster without using ClickHouse® Keeper, the following rules apply to ZooKeeper hosts:

        • If the cluster cloud network has subnets in each availability zone, and ZooKeeper host settings are not specified, then ZooKeeper hosts will automatically be added, one per subnet.

        • If only some availability zones in the cluster network have subnets, specify the ZooKeeper host settings explicitly.

      • deletionProtection: Cluster protection against accidental deletion, true or false. The default value is false.

        Even with cluster deletion protection enabled, one can still delete a user or database or connect manually and delete the database contents.

      • maintenanceWindow: Maintenance window settings:

        • weeklyMaintenanceWindow.day: Day of week. The valid values are MON, TUE, WED, THU, FRI, SAT, and SUN.
        • weeklyMaintenanceWindow.hour: Time of day (UTC). The valid values range from 1 to 24.

      You can get the folder ID with the list of folders in the cloud.

    2. Run this query:

      curl \
        --request POST \
        --header "Authorization: Bearer $IAM_TOKEN" \
        --header "Content-Type: application/json" \
        --url 'https://mdb.api.cloud.yandex.net/managed-clickhouse/v1/clusters' \
        --data '@body.json'
      
  3. View the server response to make sure your request was successful.
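
    The response to a successful request contains an Operation object. As a sketch (assuming the standard Yandex Cloud Operation API endpoint), you can poll the operation status until the cluster is created:

    curl \
      --request GET \
      --header "Authorization: Bearer $IAM_TOKEN" \
      --url 'https://operation.api.cloud.yandex.net/operations/<operation_ID>'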

  1. Get an IAM token for API authentication and put it in an environment variable:

    export IAM_TOKEN="<IAM_token>"
    
  2. Clone the cloudapi repository:

    cd ~/ && git clone --depth=1 https://github.com/yandex-cloud/cloudapi
    

    Below, we assume the repository contents are stored in the ~/cloudapi/ directory.

  3. Call the ClusterService.Create method, e.g., via the following gRPCurl request:

    1. Create a file named body.json and paste the following code into it:

      Note

      This example does not use all available parameters.

      {
        "folder_id": "<folder_ID>",
        "name": "<cluster_name>",
        "environment": "<environment>",
        "network_id": "<network_ID>",
        "security_group_ids": [
          "<security_group_1_ID>",
          "<security_group_2_ID>",
          ...
          "<security_group_N_ID>"
        ],
        "config_spec": {
          "version": "<ClickHouse®_version>",
          "embedded_keeper": <use_ClickHouse® Keeper>,
          "clickhouse": {
            "resources": {
              "resource_preset_id": "<ClickHouse®_host_class>",
              "disk_size": "<storage_size_in_bytes>",
              "disk_type_id": "<disk_type>"
            },
            "disk_size_autoscaling": {
              "planned_usage_threshold": "<threshold_for_scheduled_increase_in_percent>",
              "emergency_usage_threshold": "<threshold_for_immediate_increase_in_percent>",
              "disk_size_limit": "<maximum_storage_size_in_bytes>"
            }
          },
          "zookeeper": {
            "resources": {
              "resource_preset_id": "<ZooKeeper_host_class>",
              "disk_size": "<storage_size_in_bytes>",
              "disk_type_id": "<disk_type>"
            },
            "disk_size_autoscaling": {
              "planned_usage_threshold": "<threshold_for_scheduled_increase_in_percent>",
              "emergency_usage_threshold": "<threshold_for_immediate_increase_in_percent>",
              "disk_size_limit": "<maximum_storage_size_in_bytes>"
            }
          },
          "access": {
            "data_lens": <access_from_DataLens>,
            "web_sql": <run_SQL_queries_from_management_console>,
            "metrika": <access_from_Metrica_and_AppMetrica>,
            "serverless": <access_from_Cloud_Functions>,
            "data_transfer": <access_from_Data_Transfer>,
            "yandex_query": <access_from_Yandex_Query>
          },
          "cloud_storage": {
            "enabled": <hybrid_storage_use>,
            "move_factor": "<share_of_free_space>",
            "data_cache_enabled": <temporary_file_storage>,
            "data_cache_max_size": "<maximum_memory_for_file_storage>",
            "prefer_not_to_merge": <disabling_data_part_merging>
          },
          "admin_password": "<admin_password>",
          "sql_user_management": <user_management_via_SQL>,
          "sql_database_management": <database_management_via_SQL>
        },
        "database_specs": [
          {
            "name": "<DB_name>",
            "engine": "<database_engine>"
          },
          { <similar_settings_for_database_2> },
          { ... },
          { <similar_settings_for_database_N> }
        ],
        "user_specs": [
          {
            "name": "<username>",
            "password": "<user_password>",
            "permissions": [
              {
                "database_name": "<DB_name>"
              }
            ]
          },
          { <similar_settings_for_user_2> },
          { ... },
          { <similar_settings_for_user_N> }
        ],
        "host_specs": [
          {
            "zone_id": "<availability_zone>",
            "type": "<host_type>",
            "subnet_id": "<subnet_ID>",
            "assign_public_ip": <public_access_to_host>,
            "shard_name": "<shard_name>"
          },
          { <similar_settings_for_host_2> },
          { ... },
          { <similar_settings_for_host_N> }
        ],
        "deletion_protection": <cluster_deletion_protection>,
        "maintenance_window": {
          "weekly_maintenance_window": {
            "day": "<day_of_week>",
            "hour": "<hour>"
          }
        }
      }

      Where:

      • name: Cluster name.

      • environment: Cluster environment, PRODUCTION or PRESTABLE.

      • network_id: ID of the network to host the cluster.

      • security_group_ids: Security group IDs as an array of strings. Each string is a security group ID.

      • config_spec: Cluster configuration:

        • version: ClickHouse® version, 24.8, 25.3, 25.4, 25.5, or 25.6.

        • embedded_keeper: Using ClickHouse® Keeper instead of ZooKeeper, true or false.

          This setting determines how replication will be managed in a multi-host ClickHouse® cluster:

          • If true, ClickHouse® Keeper will manage the replication.

            Alert

            You cannot disable ClickHouse® Keeper after creating a cluster. You will not be able to use ZooKeeper hosts either.

          • If not specified or false, ZooKeeper will manage the replication and query distribution.

        • clickhouse: ClickHouse® configuration:

          • resources.resource_preset_id: Host class ID. You can get the list of available host classes with their IDs using the ResourcePreset.list method.

          • resources.disk_size: Disk size, in bytes.

          • resources.disk_type_id: Disk type.

          • disk_size_autoscaling: Automatic storage expansion settings for a ClickHouse® subcluster:

            • planned_usage_threshold: ClickHouse® subcluster storage utilization threshold to trigger a storage expansion during the next maintenance window, in percent. The default value is 0 (automatic expansion disabled).

              The valid values range from 0 to 100.

            • emergency_usage_threshold: ClickHouse® subcluster storage utilization threshold to trigger an immediate storage expansion, in percent. The default value is 0 (automatic expansion disabled).

              The valid values range from 0 to 100.

            • disk_size_limit: Maximum storage size for a ClickHouse® subcluster, in bytes.

            Warning

            • If you specify both thresholds, emergency_usage_threshold must not be less than planned_usage_threshold.

            • When using planned_usage_threshold, make sure to set up a maintenance window.

            Autoscaling settings configured for a ClickHouse® subcluster apply to all existing shards within the subcluster. If you add a new shard, it will use the settings of the oldest shard. These values are not saved in the subcluster configuration.

            To view information about a specific shard, including its autoscaling settings, use the ClusterService.GetShard method and provide the cluster ID and shard name in the request.

            You can get the cluster ID with the list of clusters in the folder.

            You can get the shard name with the list of shards in the cluster.

        • zookeeper: ZooKeeper configuration:

          Warning

          If you enabled ClickHouse® Keeper by setting embedded_keeper: true, the ZooKeeper configuration in config_spec will not be applied.

          • resources.resource_preset_id: Host class ID. You can get the list of available host classes with their IDs using the ResourcePreset.list method.

          • resources.disk_size: Disk size, in bytes.

          • resources.disk_type_id: Disk type.

          • disk_size_autoscaling: Automatic storage expansion settings for a ZooKeeper subcluster:

            • planned_usage_threshold: ZooKeeper subcluster storage utilization threshold to trigger a storage expansion during the next maintenance window, in percent. The default value is 0 (automatic expansion disabled).

              The valid values range from 0 to 100.

            • emergency_usage_threshold: ZooKeeper subcluster storage utilization threshold to trigger an immediate storage expansion, in percent. The default value is 0 (automatic expansion disabled).

              The valid values range from 0 to 100.

            • disk_size_limit: Maximum storage size for a ZooKeeper subcluster, in bytes.

            Warning

            • If you specify both thresholds, emergency_usage_threshold must not be less than planned_usage_threshold.

            • When using planned_usage_threshold, make sure to set up a maintenance window.

        • access: Settings enabling cluster access from other services and running SQL queries from the management console using Yandex WebSQL:

          • data_lens: Enable access from DataLens, true or false. The default value is false. For more information about setting up a connection, see Connecting from DataLens.

          • web_sql: Enable SQL queries against cluster databases from the Yandex Cloud management console using Yandex WebSQL, true or false. The default value is false.

          • metrika: Enable data import from AppMetrica to your cluster, true or false. The default value is false.

          • serverless: Enable access to the cluster from Yandex Cloud Functions, true or false. The default value is false. For more information about setting up access, see this Cloud Functions guide.

          • data_transfer: Enable access to the cluster from Yandex Data Transfer in Serverless mode, true or false. The default value is false.

            This enables connecting to Yandex Data Transfer running in Kubernetes over a special network, which makes operations such as transfer launch and deactivation run faster.

          • yandex_query: Enable access to the cluster from Yandex Query, true or false. This feature is at the Preview stage. The default value is false.

        • cloud_storage: Hybrid storage settings:

          • enabled: Enable hybrid storage in the cluster if it is disabled, true or false. The default value is false (disabled).

            Note

            Once hybrid storage is enabled, you cannot disable it.

          • move_factor: Minimum percentage of free space in the cluster storage. If your free space percentage is below this value, the data will be moved to Yandex Object Storage.

            The minimum value is 0, the maximum value is 1, and the default value is 0.01.

          • data_cache_enabled: Enable caching files in the cluster storage, true or false.

            The default value is true (enabled).

          • data_cache_max_size: Maximum cache size, in bytes, allocated in the cluster storage.

            The default value is 1073741824 (1 GB).

          • prefer_not_to_merge: Disable merging of data parts in the cluster and object storage, true or false.

            To disable merging, set to true. To leave merging enabled, set to false.

        • sql_user_management, sql_database_management, and admin_password: Group of settings for user and database management via SQL:

          • admin_password: admin password.
          • sql_user_management: User management via SQL, true or false.
          • sql_database_management: Database management via SQL, true or false. For that, you also need to enable user management via SQL.

          Warning

          You cannot disable settings for user or database management via SQL once they are enabled. You can enable them as required later when reconfiguring your cluster.

      • database_specs: Database settings as an array of elements, one per database. Each element has the following structure:

        • name: Database name.

        • engine: Database engine. The possible values are:

          • DATABASE_ENGINE_ATOMIC (default): Atomic engine; supports non-blocking DROP TABLE and RENAME TABLE queries, and atomic EXCHANGE TABLES queries.

          • DATABASE_ENGINE_REPLICATED: Replicated engine; supports table metadata replication across all database replicas. The set of tables and their schemas will be the same for all replicas.

            It is only available in replicated clusters.

          • DATABASE_ENGINE_UNSPECIFIED: This value will set the default engine, i.e., DATABASE_ENGINE_ATOMIC.

          You set the engine when creating a database and cannot change it afterwards.

      • user_specs: User settings as an array of elements, one per user. Each element has the following structure:

        • name: Username. It may contain Latin letters, numbers, hyphens, and underscores, and must start with a letter or underscore. The name may be up to 32 characters long.

        • password: User password. The password must be from 8 to 128 characters long.

          You can also generate a password using Connection Manager. To do this, specify "generate_password": true instead of "password": "<user_password>".

          To view the password, select your cluster in the management console, navigate to the Users tab, and click View password for the relevant user. This will open the page of the Yandex Lockbox secret containing the password. To view passwords, you need the lockbox.payloadViewer role.

        • permissions: List of databases the user should have access to.

          The list appears as an array of database_name parameters. Each parameter contains the name of a separate database.

      • host_specs: Cluster host settings as an array of elements, one per host. Each element has the following structure:

        • type: Host type, CLICKHOUSE or ZOOKEEPER.

          If you enabled ClickHouse® Keeper by setting embedded_keeper: true, specify only ClickHouse® host settings in host_specs.

        • zone_id: Availability zone.

        • subnet_id: Subnet ID.

        • shard_name: Shard name. This setting is only relevant for CLICKHOUSE hosts.

        • assign_public_ip: Internet access to the host via a public IP address, true or false.

          Warning

          For a more secure cluster with public host access enabled, use only trusted IP addresses or subnets in the cluster's security group rules. Learn more in Configuring security groups.

        If you are creating a multi-host cluster without using ClickHouse® Keeper, the following rules apply to ZooKeeper hosts:

        • If the cluster cloud network has subnets in each availability zone, and ZooKeeper host settings are not specified, then ZooKeeper hosts will automatically be added, one per subnet.

        • If only some availability zones in the cluster network have subnets, specify the ZooKeeper host settings explicitly.

      • deletion_protection: Cluster protection against accidental deletion, true or false. The default value is false.

        Even with cluster deletion protection enabled, individual users and databases can still be deleted, and anyone who connects to the cluster manually can delete the database contents.

      • maintenance_window: Maintenance window settings:

        • weekly_maintenance_window.day: Day of week. The valid values are MON, TUE, WED, THU, FRI, SAT, and SUN.
        • weekly_maintenance_window.hour: Time of day (UTC). The valid values range from 1 to 24.

      You can get the folder ID with the list of folders in the cloud.

    2. Run the request:

      grpcurl \
        -format json \
        -import-path ~/cloudapi/ \
        -import-path ~/cloudapi/third_party/googleapis/ \
        -proto ~/cloudapi/yandex/cloud/mdb/clickhouse/v1/cluster_service.proto \
        -rpc-header "Authorization: Bearer $IAM_TOKEN" \
        -d @ \
        mdb.api.cloud.yandex.net:443 \
        yandex.cloud.mdb.clickhouse.v1.ClusterService.Create \
        < body.json
      
  4. View the server response to make sure your request was successful.

Warning

If you specified security group IDs when creating a cluster, you may also need to set up security groups to connect to the cluster.

Creating a cluster copy

You can create a ClickHouse® cluster with the same settings as one created earlier. To do this, import the source cluster's configuration into Terraform. You can then either create an identical copy or use the imported configuration as a baseline and modify it as needed. Importing a configuration is especially useful when the source cluster has many settings and you need to create a similar one.

To create a ClickHouse® cluster copy:

Terraform
  1. If you do not have Terraform yet, install it.

  2. Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.

  3. Configure and initialize a provider. There is no need to create a provider configuration file manually: you can download it.

  4. Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.

  5. In the same working directory, place a .tf file with the following contents:

    resource "yandex_mdb_clickhouse_cluster" "old" { }
    
  6. Save the ID of the ClickHouse® source cluster to an environment variable:

    export CLICKHOUSE_CLUSTER_ID=<cluster_ID>
    

    You can get the ID with the list of clusters in the folder.
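
    For example, assuming the yc CLI is installed and configured for the folder containing the source cluster, you can list the clusters and their IDs like this:

    yc managed-clickhouse cluster list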

  7. Import the ClickHouse® source cluster settings to the Terraform configuration:

    terraform import yandex_mdb_clickhouse_cluster.old ${CLICKHOUSE_CLUSTER_ID}
    
  8. Get the imported configuration:

    terraform show
    
  9. Copy it from the terminal and paste it into the .tf file.

  10. Place the file in the new imported-cluster directory.

  11. Edit the copied configuration so that you can create a new cluster from it (a rough sketch of the result follows this list):

    • Specify the new cluster name in the resource string and the name parameter.
    • Delete the created_at, health, id, and status parameters.
    • In the host sections, delete fqdn.
    • Under clickhouse.config.merge_tree, if the max_bytes_to_merge_at_max_space_in_pool, max_parts_in_total, and number_of_free_entries_in_pool_to_execute_mutation parameters are set to 0, delete them.
    • Under clickhouse.config.kafka, set sasl_password or delete this parameter.
    • Under clickhouse.config.rabbitmq, set password or delete this parameter.
    • If the maintenance_window section has type = "ANYTIME", delete the hour parameter.
    • Delete all user sections (if any). You can add database users with a separate yandex_mdb_clickhouse_user resource.
    • Optionally, make further changes if you need a customized configuration.
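
    After these edits, the imported resource might look roughly like the sketch below. The resource label new, the placeholder values, and the abbreviated settings are illustrative only; your actual configuration will reflect the imported source cluster.

    resource "yandex_mdb_clickhouse_cluster" "new" {
      name        = "<new_cluster_name>"
      environment = "PRODUCTION"
      network_id  = "<network_ID>"

      clickhouse {
        resources {
          resource_preset_id = "s2.micro"
          disk_type_id       = "network-ssd"
          disk_size          = 32
        }
      }

      host {
        type      = "CLICKHOUSE"
        zone      = "ru-central1-a"
        subnet_id = "<subnet_ID>"
        # fqdn removed from the imported host block
      }

      ...
    }
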
  12. Get the authentication credentials in the imported-cluster directory.

  13. In the same directory, configure and initialize the provider. There is no need to create a provider configuration file manually, as you can download it.

  14. Place the configuration file in the imported-cluster directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.

  15. Make sure the Terraform configuration files are correct:

    terraform validate
    

    Terraform will show any errors found in your configuration files.

  16. Create the required infrastructure:

    1. Run this command to view the planned changes:

      terraform plan
      

      If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

    2. If everything looks correct, apply the changes:

      1. Run this command:

        terraform apply
        
      2. Confirm updating the resources.

      3. Wait for the operation to complete.

    All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.

Timeouts

The Terraform provider sets the following timeouts for Managed Service for ClickHouse® cluster operations:

  • Creating a cluster, including by restoring from a backup: 60 minutes.
  • Updating a cluster: 90 minutes.
  • Deleting a cluster: 30 minutes.

Operations exceeding the timeout are aborted.

How do I change these limits?

Add a timeouts section to the cluster description, e.g.:

resource "yandex_mdb_clickhouse_cluster" "<cluster_name>" {
  ...
  timeouts {
    create = "1h30m" # 1 hour 30 minutes
    update = "2h"    # 2 hours
    delete = "30m"   # 30 minutes
  }
}

Examples

Creating a single-host cluster

CLI
Terraform

To create a cluster with a single host, provide a single --host parameter.

Create a Managed Service for ClickHouse® cluster with the following test specifications:

  • Name: mych.
  • Environment: production.
  • Network: default.
  • Security group: enp6saqnq4ie244g67sb.
  • One ClickHouse® s2.micro host in the b0rcctk2rvtr******** subnet, in the ru-central1-a availability zone.
  • ClickHouse® Keeper.
  • Network SSD storage (network-ssd): 20 GB.
  • User: user1, password: user1user1.
  • Database: db1.
  • Cluster protection against accidental deletion: Enabled.

Run this command:

yc managed-clickhouse cluster create \
  --name mych \
  --environment=production \
  --network-name default \
  --clickhouse-resource-preset s2.micro \
  --host type=clickhouse,zone-id=ru-central1-a,subnet-id=b0rcctk2rvtr******** \
  --embedded-keeper true \
  --clickhouse-disk-size 20 \
  --clickhouse-disk-type network-ssd \
  --user name=user1,password=user1user1 \
  --database name=db1 \
  --security-group-ids enp6saqnq4ie244g67sb \
  --deletion-protection

Create a Managed Service for ClickHouse® cluster and its network with the following test specifications:

  • Name: mych.

  • Environment: PRESTABLE.

  • Cloud ID: b1gq90dgh25bebiu75o.

  • Folder ID: b1gia87mbaomkfvsleds.

  • New cloud network: cluster-net.

  • New default security group: cluster-sg (in the cluster-net network). It must allow connections to any cluster host from any network (including the internet) on ports 8443 and 9440.

  • One s2.micro host in the new cluster-subnet-ru-central1-a subnet.

    Subnet parameters:

    • Address range: 172.16.1.0/24.
    • Network: cluster-net.
    • Availability zone: ru-central1-a.
  • Network SSD storage (network-ssd): 32 GB.

  • Database name: db1.

  • User: user1, password: user1user1.

The configuration files for this cluster are as follows:

  1. Configuration file with a description of the provider settings:

    provider.tf

    terraform {
      required_providers {
        yandex = {
          source = "yandex-cloud/yandex"
        }
      }
    }
    
    provider "yandex" {
      token     = "<OAuth_or_static_key_of_service_account>"
      cloud_id  = "b1gq90dgh25bebiu75o"
      folder_id = "b1gia87mbaomkfvsleds"
    }
    

    To get an OAuth token or a static access key, see the Yandex Identity and Access Management instructions.

  2. Configuration file with a description of the cloud network and subnet:

    networks.tf
    resource "yandex_vpc_network" "cluster-net" { name = "cluster-net" }
    
    resource "yandex_vpc_subnet" "cluster-subnet-a" {
      name           = "cluster-subnet-ru-central1-a"
      zone           = "ru-central1-a"
      network_id     = yandex_vpc_network.cluster-net.id
      v4_cidr_blocks = ["172.16.1.0/24"]
    }
    
  3. Configuration file with a description of the security group:

    security-groups.tf
    resource "yandex_vpc_default_security_group" "cluster-sg" {
      network_id = yandex_vpc_network.cluster-net.id
    
      ingress {
        description    = "HTTPS (secure)"
        port           = 8443
        protocol       = "TCP"
        v4_cidr_blocks = ["0.0.0.0/0"]
      }
    
      ingress {
        description    = "clickhouse-client (secure)"
        port           = 9440
        protocol       = "TCP"
        v4_cidr_blocks = ["0.0.0.0/0"]
      }
    
      egress {
        description    = "Allow all egress cluster traffic"
        protocol       = "TCP"
        v4_cidr_blocks = ["0.0.0.0/0"]
      }
    }
    
  4. Configuration file with a description of the cluster and its host:

    cluster.tf
    resource "yandex_mdb_clickhouse_cluster" "mych" {
      name               = "mych"
      environment        = "PRESTABLE"
      network_id         = yandex_vpc_network.cluster-net.id
      security_group_ids = [yandex_vpc_default_security_group.cluster-sg.id]
    
      clickhouse {
        resources {
          resource_preset_id = "s2.micro"
          disk_type_id       = "network-ssd"
          disk_size          = 32
        }
      }
    
      host {
        type      = "CLICKHOUSE"
        zone      = "ru-central1-a"
        subnet_id = yandex_vpc_subnet.cluster-subnet-a.id
      }
    
      lifecycle {
        ignore_changes = [database, user]
      }
    }
    
    resource "yandex_mdb_clickhouse_database" "db1" {
      cluster_id = yandex_mdb_clickhouse_cluster.mych.id
      name       = "db1"
    }
    
    resource "yandex_mdb_clickhouse_user" "user1" {
      cluster_id = yandex_mdb_clickhouse_cluster.mych.id
      name       = "user1"
      password   = "user1user1"
      permission {
        database_name = yandex_mdb_clickhouse_database.db1.name
      }
    }
    

Creating a multi-host cluster

Terraform

Create a Managed Service for ClickHouse® cluster with the following test specifications:

  • Name: mych.

  • Environment: PRESTABLE.

  • Cloud ID: b1gq90dgh25bebiu75o.

  • Folder ID: b1gia87mbaomkfvsleds.

  • New cloud network: cluster-net.

  • Three ClickHouse® s2.micro hosts and three ZooKeeper b2.medium hosts (for replication).

    One host of each class will be added to the new subnets:

    • cluster-subnet-ru-central1-a: 172.16.1.0/24, availability zone: ru-central1-a.
    • cluster-subnet-ru-central1-b: 172.16.2.0/24, availability zone: ru-central1-b.
    • cluster-subnet-ru-central1-d: 172.16.3.0/24, availability zone: ru-central1-d.

    These subnets will belong to the cluster-net network.

  • New default security group: cluster-sg (in the cluster-net network). It must allow connections to any cluster host from any network (including the internet) on ports 8443 and 9440.

  • Network SSD storage (network-ssd) for each of the ClickHouse® cluster hosts: 32 GB.

  • Network SSD storage (network-ssd) for each of the ZooKeeper cluster hosts: 10 GB.

  • Database name: db1.

  • User: user1, password: user1user1.

The configuration files for this cluster are as follows:

  1. Configuration file with a description of the provider settings:

    provider.tf

    terraform {
      required_providers {
        yandex = {
          source = "yandex-cloud/yandex"
        }
      }
    }
    
    provider "yandex" {
      token     = "<OAuth_or_static_key_of_service_account>"
      cloud_id  = "b1gq90dgh25bebiu75o"
      folder_id = "b1gia87mbaomkfvsleds"
    }
    

    To get an OAuth token or a static access key, see the Yandex Identity and Access Management instructions.

  2. Configuration file with a description of the cloud network and subnets:

    networks.tf
    resource "yandex_vpc_network" "cluster-net" { name = "cluster-net" }
    
    resource "yandex_vpc_subnet" "cluster-subnet-a" {
      name           = "cluster-subnet-ru-central1-a"
      zone           = "ru-central1-a"
      network_id     = yandex_vpc_network.cluster-net.id
      v4_cidr_blocks = ["172.16.1.0/24"]
    }
    
    resource "yandex_vpc_subnet" "cluster-subnet-b" {
      name           = "cluster-subnet-ru-central1-b"
      zone           = "ru-central1-b"
      network_id     = yandex_vpc_network.cluster-net.id
      v4_cidr_blocks = ["172.16.2.0/24"]
    }
    
    resource "yandex_vpc_subnet" "cluster-subnet-d" {
      name           = "cluster-subnet-ru-central1-d"
      zone           = "ru-central1-d"
      network_id     = yandex_vpc_network.cluster-net.id
      v4_cidr_blocks = ["172.16.3.0/24"]
    }
    
  3. Configuration file with a description of the security group:

    security-groups.tf
    resource "yandex_vpc_default_security_group" "cluster-sg" {
      network_id = yandex_vpc_network.cluster-net.id
    
      ingress {
        description    = "HTTPS (secure)"
        port           = 8443
        protocol       = "TCP"
        v4_cidr_blocks = ["0.0.0.0/0"]
      }
    
      ingress {
        description    = "clickhouse-client (secure)"
        port           = 9440
        protocol       = "TCP"
        v4_cidr_blocks = ["0.0.0.0/0"]
      }
    
      egress {
        description    = "Allow all egress cluster traffic"
        protocol       = "TCP"
        v4_cidr_blocks = ["0.0.0.0/0"]
      }
    }
    
  4. Configuration file with a description of the cluster and its hosts:

    cluster.tf
    resource "yandex_mdb_clickhouse_cluster" "mych" {
      name               = "mych"
      environment        = "PRESTABLE"
      network_id         = yandex_vpc_network.cluster-net.id
      security_group_ids = [yandex_vpc_default_security_group.cluster-sg.id]
    
      clickhouse {
        resources {
          resource_preset_id = "s2.micro"
          disk_type_id       = "network-ssd"
          disk_size          = 32
        }
      }
    
      host {
        type      = "CLICKHOUSE"
        zone      = "ru-central1-a"
        subnet_id = yandex_vpc_subnet.cluster-subnet-a.id
      }
    
      host {
        type      = "CLICKHOUSE"
        zone      = "ru-central1-b"
        subnet_id = yandex_vpc_subnet.cluster-subnet-b.id
      }
    
      host {
        type      = "CLICKHOUSE"
        zone      = "ru-central1-d"
        subnet_id = yandex_vpc_subnet.cluster-subnet-d.id
      }
    
      zookeeper {
        resources {
          resource_preset_id = "b2.medium"
          disk_type_id       = "network-ssd"
          disk_size          = 10
        }
      }
    
      host {
        type      = "ZOOKEEPER"
        zone      = "ru-central1-a"
        subnet_id = yandex_vpc_subnet.cluster-subnet-a.id
      }
    
      host {
        type      = "ZOOKEEPER"
        zone      = "ru-central1-b"
        subnet_id = yandex_vpc_subnet.cluster-subnet-b.id
      }
    
      host {
        type      = "ZOOKEEPER"
        zone      = "ru-central1-d"
        subnet_id = yandex_vpc_subnet.cluster-subnet-d.id
      }
    
      lifecycle {
        ignore_changes = [database, user]
      }
    }
    
    resource "yandex_mdb_clickhouse_database" "db1" {
      cluster_id = yandex_mdb_clickhouse_cluster.mych.id
      name       = "db1"
    }
    
    resource "yandex_mdb_clickhouse_user" "user1" {
      cluster_id = yandex_mdb_clickhouse_cluster.mych.id
      name       = "user1"
      password   = "user1user1"
      permission {
        database_name = yandex_mdb_clickhouse_database.db1.name
      }
    }
    

ClickHouse® is a registered trademark of ClickHouse, Inc.
