

Creating a Managed Service for ClickHouse® cluster

Written by
Yandex Cloud
Updated on April 8, 2026
  • Using the CLI
  • Using the management console
  • Connecting to a cluster
    • clickhouse-client
    • HTTP interface
  • Getting addresses for connection
    • Getting an FQDN via the CLI
    • Getting an FQDN via the management console

If you have a project, you can create a ClickHouse® cluster in it.

Using the CLI

  1. If the project does not exist yet, create it: kubectl create namespace <project_name>.

  2. Create the ClickhouseCluster resource file, e.g., using the touch clickhousecluster.yaml command.

  3. Open the file and paste the configuration below into it:

    Minimum configuration
    Configuration with backup (stackland-storage)
    Backup to S3 (type: s3)
    apiVersion: clickhouse.stackland.yandex.cloud/v1alpha1
    kind: ClickhouseCluster
    metadata:
      labels:
        app.kubernetes.io/name: ch-stackland-operator
        app.kubernetes.io/managed-by: kustomize
      name: ch-sample-min
    spec:
      clickhouse:
        service: ClusterIP # Service type for the whole cluster (None, ClusterIP, or LoadBalancer). The default type is ClusterIP.
        shards:
          - id: shard-1
            # service: None # Service type for the shard (None, ClusterIP, or LoadBalancer). The default type is None (nothing is created).
        storage:
          size: 1Gi
        resources:
          requests:
            cpu: "500m"
            memory: "1Gi"
          limits:
            cpu: "1"
            memory: "2Gi"
      keeper:
        storage:
          size: 1Gi
    

    This option explicitly sets the storage type, spec.backup.storage.type: stackland-storage, meaning the operator creates the bucket and access keys for you. You only need to create a superuser secret. Optionally, set a schedule in spec.backup.schedule (in CRON expression format) and fill out the spec.backup.retention section to limit the number of backups and their retention period in S3.

    apiVersion: v1
    kind: Secret
    metadata:
      name: ch-sample-superuser
    type: Opaque
    stringData:
      password: "your_password"
      username: "your_username"
    ---
    apiVersion: clickhouse.stackland.yandex.cloud/v1alpha1
    kind: ClickhouseCluster
    metadata:
      labels:
        app.kubernetes.io/name: ch-stackland-operator
        app.kubernetes.io/managed-by: kustomize
      name: ch-sample-full
    spec:
      clickhouse:
        version: "25.3"
        service: ClusterIP # Service type for the entire cluster (None, ClusterIP, or LoadBalancer)
        shards:
          - id: "shard-1"
            weight: 1
            service: LoadBalancer # Service type for the shard (None, ClusterIP, or LoadBalancer)
            settings:
            instances: 2
            storage:
    #          storageClass: "your-storage-class"
              size: 2Gi
            resources:
              requests:
                cpu: "500m"
                memory: "1Gi"
              limits:
                cpu: "1"
                memory: "2Gi"
          - id: "shard-2"
            weight: 2
            service: None # No endpoint is created for this shard.
            settings:
            instances: 1
            storage:
    #          storageClass: "your-storage-class"
              size: 2Gi
            resources:
              requests:
                cpu: "500m"
                memory: "1Gi"
              limits:
                cpu: "1"
                memory: "2Gi"
        storage:
    #      storageClass: "your-storage-class"
          size: 2Gi
        resources:
          requests:
            cpu: "500m"
            memory: "1Gi"
          limits:
            cpu: "1"
            memory: "2Gi"
        enableSuperuserAccess: true
        superuserSecretRef:
          name: "ch-sample-superuser"
      keeper:
        instances: 3
        storage:
    #      storageClass: "your-storage-class"
          size: 1Gi
        resources:
          requests:
            cpu: "500m"
            memory: "1Gi"
          limits:
            cpu: "1"
            memory: "2Gi"
      backup:
        storage:
          type: stackland-storage
        # schedule: "0 0 * * * *"
        retention:
          ignoreForManualBackups: true
          minBackupsToKeep: 5
          deleteBackupsAfter: 7d
    
    

    This option explicitly sets the storage type, spec.backup.storage.type: s3, for an S3-compatible bucket. Create a secret with the access credentials for the bucket and a superuser secret. Substitute the bucket name and endpoint into the example; optionally, set a schedule in spec.backup.schedule and fill out the spec.backup.retention section to limit the number of backups and their retention period in S3.

    apiVersion: v1
    kind: Secret
    metadata:
      name: ch-sample-s3-credentials
    type: Opaque
    stringData:
      accessKey: "your_key"
      secret: "your_secret"
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: ch-sample-superuser
    type: Opaque
    stringData:
      password: "your_password"
      username: "your_username"
    ---
    apiVersion: clickhouse.stackland.yandex.cloud/v1alpha1
    kind: ClickhouseCluster
    metadata:
      labels:
        app.kubernetes.io/name: ch-stackland-operator
        app.kubernetes.io/managed-by: kustomize
      name: ch-sample-full-s3
    spec:
      clickhouse:
        version: "25.3"
        service: ClusterIP # Service type for the whole cluster (`None`, `ClusterIP`, or `LoadBalancer`)
        shards:
          - id: "shard-1"
            weight: 1
            service: LoadBalancer # Service type for the shard (`None`, `ClusterIP`, or `LoadBalancer`)
            settings:
            instances: 2
            storage:
    #          storageClass: "your-storage-class"
              size: 2Gi
            resources:
              requests:
                cpu: "500m"
                memory: "1Gi"
              limits:
                cpu: "1"
                memory: "2Gi"
          - id: "shard-2"
            weight: 2
            service: None # For this shard, no endpoint is created
            settings:
            instances: 1
            storage:
    #          storageClass: "your-storage-class"
              size: 2Gi
            resources:
              requests:
                cpu: "500m"
                memory: "1Gi"
              limits:
                cpu: "1"
                memory: "2Gi"
        storage:
    #      storageClass: "your-storage-class"
          size: 2Gi
        resources:
          requests:
            cpu: "500m"
            memory: "1Gi"
          limits:
            cpu: "1"
            memory: "2Gi"
        enableSuperuserAccess: true
        superuserSecretRef:
          name: "ch-sample-superuser"
      keeper:
        instances: 3
        storage:
    #      storageClass: "your-storage-class"
          size: 1Gi
        resources:
          requests:
            cpu: "500m"
            memory: "1Gi"
          limits:
            cpu: "1"
            memory: "2Gi"
      backup:
        storage:
          s3:
            bucket: on-prem-quantum
            endpointUrl: "https://storage.yandexcloud.net"
            backupsToKeepRemote: 14
            region: "ru-central1"
            forcePathStyle: false
    #        storageClass: "STANDARD"
            credentialsSecretRef:
              name: ch-sample-s3-credentials
              accessKeyIdPath: accessKey
              secretAccessKeyPath: secret
        # schedule: "0 0 * * * *"
        deltaMaxSteps: 5
    
  4. Apply the manifest: kubectl apply -f clickhousecluster.yaml -n <project_name>. Optionally, you can specify the project name in the metadata.namespace resource property and omit it from the command.
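The namespace note above can be sketched as follows; the project and cluster names (my-project, ch-sample-min) are placeholders, and the manifest is truncated to its metadata:

```shell
# Sketch (hypothetical names): bake the project into the manifest so that
# "kubectl apply" no longer needs the -n flag. Only the metadata stub is shown.
cat > clickhousecluster.yaml <<'EOF'
apiVersion: clickhouse.stackland.yandex.cloud/v1alpha1
kind: ClickhouseCluster
metadata:
  name: ch-sample-min
  namespace: my-project
EOF
# With namespace set in metadata, this is enough:
#   kubectl apply -f clickhousecluster.yaml
grep 'namespace:' clickhousecluster.yaml
```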

Using the management console

  1. If you have not opened a project yet, select one.

  2. In the left-hand menu, select ClickHouse® Clusters.

  3. Click Create.

  4. Fill out the fields as follows:

    Basic parameters

    • Cluster name: Cluster name. Only use lowercase letters, numbers, and hyphens.
    • Version: ClickHouse® version. Select from the list of available versions.
    • Cluster service type: Service type for accessing the entire cluster. Available values: ClusterIP (access only within the cluster, default) or LoadBalancer (access from outside).

    Storage

    • Storage class: stackland-nvme, stackland-ssd, stackland-hdd, stackland-other. Learn more about storage classes in Disk subsystem.
    • Storage size: Size of the disk used to store data. Once created, the disk size can only be increased.

    Settings (drop-down section)

    Resources

    • Requested CPU: Guaranteed amount of computing resources.
    • Requested memory: Guaranteed amount of RAM.
    • CPU limit: Maximum amount of computing resources.
    • Memory limit: Maximum amount of RAM.

    Shards

    List of cluster shards. By default, a single shard, shard-1, is created. To add more shards, click Add shard.

    For each shard, you can configure:

    • Shard ID: Shard ID.
    • Shard weight: Shard weight for data distribution.
    • Number of replicas: Number of replicas in the shard.
    • Shard service type: Service type for access to the shard. Available values: Do not create service (no endpoint is created, default), ClusterIP (access only within the cluster), or LoadBalancer (access from outside).

    Superuser

    • Allow access: Switch to allow creating a superuser.
    • Name: Superuser name for access to the database.
    • Password: Superuser password. You can generate it automatically by clicking Generate.

    ClickHouse® Keeper

    • Number of Keeper instances: Number of ClickHouse® Keeper instances for fault tolerance.
    • Keeper host class: Resource configuration for Keeper instances (storage, CPU, memory).

    Backup configuration

    • Enable automatic backups: Switch to enable automatic backups to the S3 bucket.
  5. Click Create.

Done. The cluster has now appeared in the ClickHouse® Clusters list.

Connecting to a cluster

To connect to a cluster, use the host FQDN in <cluster_name>.<project_name>.svc.<cluster_domain> format.
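As a sketch, with hypothetical values ch-sample, my-project, and example.com, the FQDN is assembled like this:

```shell
# Hypothetical values; substitute your own cluster, project, and cluster domain.
cluster_name="ch-sample"
project_name="my-project"
cluster_domain="example.com"

# <cluster_name>.<project_name>.svc.<cluster_domain>
fqdn="${cluster_name}.${project_name}.svc.${cluster_domain}"
echo "$fqdn"   # ch-sample.my-project.svc.example.com
```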

clickhouse-client

Install the client.

sudo apt update && sudo apt install --yes clickhouse-client

Connect to the cluster:

clickhouse-client --host <cluster_name>.<project_name>.svc.<cluster_domain> \
                  --user <username> \
                  --database <DB_name> \
                  --port 9000 \
                  --ask-password

HTTP interface

Run the following request via HTTP:

curl --header "X-ClickHouse-User: <username>" \
     --header "X-ClickHouse-Key: <password>" \
     'http://<cluster_name>.<project_name>.svc.<cluster_domain>:8123/?database=<database_name>&query=SELECT%20version()'
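The SQL in the URL must be URL-encoded (the %20 above is an encoded space). A minimal sketch, handling spaces only and using the hypothetical FQDN ch-sample.my-project.svc.example.com:

```shell
# Encode spaces in the query before embedding it in the URL.
# A real encoder must also handle characters such as + and &.
query="SELECT version()"
encoded=$(printf '%s' "$query" | sed 's/ /%20/g')
url="http://ch-sample.my-project.svc.example.com:8123/?database=default&query=${encoded}"
echo "$url"
```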

Note

For the first connection, use the clickhouse database and the superuser name specified when creating the cluster.

Getting addresses for connection

After you create a cluster, you can get addresses (FQDNs) for connection to the cluster and individual shards.

Getting an FQDN via the CLI

Run this command:

kubectl get clickhousecluster <cluster_name> -n <project_name> -o jsonpath='{.status.clusterStatus.fqdns}'

The result contains the following:

  • cluster.internal: Internal FQDN for connecting to the entire cluster from other pods in Kubernetes.
  • cluster.external: External FQDN for external connections to the cluster (only available if spec.clickhouse.service is set to LoadBalancer).
  • shards[].serviceFqdn.internal: Internal FQDN for connecting to a specific shard.
  • shards[].serviceFqdn.external: External FQDN for external connections to a shard (only available if spec.clickhouse.shards[].service is set to LoadBalancer).

Result example:

{
  "cluster": {
    "internal": "ch-sample.my-project.svc.example.com",
    "external": "ch-sample.svc.example.com"
  },
  "shards": [
    {
      "id": "shard-1",
      "serviceFqdn": {
        "internal": "ch-sample-shard-1.my-project.svc.example.com",
        "external": "ch-sample-shard-1.svc.example.com"
      }
    }
  ]
}
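If jq is not available, a field such as cluster.internal can be pulled out of JSON like the example above with sed; the literal status string here is a stand-in for the kubectl output:

```shell
# Extract the cluster-internal FQDN from the status JSON (sketch; in practice,
# pipe the output of the kubectl command above instead of this literal).
status='{"cluster":{"internal":"ch-sample.my-project.svc.example.com"}}'
internal=$(printf '%s' "$status" | sed -n 's/.*"internal":"\([^"]*\)".*/\1/p')
echo "$internal"   # ch-sample.my-project.svc.example.com
```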

Getting an FQDN via the management console

  1. Open your project.
  2. In the left-hand menu, select ClickHouse® Clusters.
  3. Select a cluster.
  4. On the Overview tab, the Cluster overview and shard sections show the addresses for connection.

Note

Internal FQDNs have the <resource_name>.<project_name>.svc.<cluster_domain> format and are only available within the Kubernetes cluster.

External FQDNs are created automatically for LoadBalancer type services and are available from outside the cluster. Learn more about DNS here.

© 2026 Direct Cursus Technology L.L.C.