
Yandex Managed Service for Kubernetes DNS cluster autoscaling based on cluster size

Written by
Yandex Cloud
Updated at May 5, 2025
  • Required paid resources
  • Getting started
  • Configure kube-dns-autoscaler
    • Make sure that the app is up and running
    • Define the scaling parameters
    • Changing the configuration
  • Test scaling
    • Resize the Managed Service for Kubernetes cluster
    • Check the changes in the number of CoreDNS replicas
    • Set up reduction of the number of Managed Service for Kubernetes nodes
  • Disable scaling
  • Delete the resources you created

Managed Service for Kubernetes supports DNS autoscaling. The Managed Service for Kubernetes cluster hosts the kube-dns-autoscaler app which adjusts the number of CoreDNS replicas depending on:

  • Number of Managed Service for Kubernetes cluster nodes.
  • Number of vCPUs in the Managed Service for Kubernetes cluster.

The number of replicas is calculated using the formulas described below.

To automate DNS scaling:

  1. Configure kube-dns-autoscaler.
  2. Test scaling.

If you no longer need automatic scaling, disable it.

If you no longer need the resources you created, delete them.

Required paid resources

The support cost includes:

  • Fee for the Managed Service for Kubernetes cluster: using the master and outgoing traffic (see Managed Service for Kubernetes pricing).
  • Fee for each VM (cluster nodes, VM for cluster management without public access): using computing resources, operating system, and storage (see Compute Cloud pricing).
  • Fee for the public IP address for the cluster nodes (see Virtual Private Cloud pricing).

Getting started

  1. Create Managed Service for Kubernetes resources:

    Manually
    Terraform
    1. Create security groups for the Managed Service for Kubernetes cluster and its node groups.

      Warning

      The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.

    2. Create a Managed Service for Kubernetes cluster. When creating a cluster, specify the preconfigured security groups.

      For Yandex Cloud internal network usage, your cluster does not need a public IP address. To enable internet access to your cluster, assign it a public IP address.

    3. Create a node group. To enable internet access for your node group (e.g., for Docker image pulls), assign it a public IP address. Specify the preconfigured security groups.

    1. If you do not have Terraform yet, install it.

    2. Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.

    3. Configure and initialize a provider. You do not need to create a provider configuration file manually; you can download it instead.

    4. Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.

    5. Download the k8s-cluster.tf configuration file of the Managed Service for Kubernetes cluster to the same working directory. This file describes:

      • Network.

      • Subnet.

      • Managed Service for Kubernetes cluster.

      • Managed Service for Kubernetes node group.

      • Service account required to create the Managed Service for Kubernetes cluster and node group.

      • Security groups which contain rules required for the Managed Service for Kubernetes cluster and its node groups.

        Warning

        The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.

    6. Specify the folder ID in the configuration file.

    7. Make sure the Terraform configuration files are correct using this command:

      terraform validate
      

      If there are any errors in the configuration files, Terraform will point them out.

    8. Create the required infrastructure:

      1. Run this command to view the planned changes:

        terraform plan
        

        If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

      2. If everything looks correct, apply the changes:

        1. Run this command:

          terraform apply
          
        2. Confirm updating the resources.

        3. Wait for the operation to complete.

      All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.

  2. Install kubectl and configure it to work with the new cluster.

    If a cluster has no public IP address assigned and kubectl is configured via the cluster's private IP address, run kubectl commands on a Yandex Cloud VM that is in the same network as the cluster.

Configure kube-dns-autoscaler

Make sure that the app is up and running

Check Deployment in the kube-system namespace:

kubectl get deployment --namespace=kube-system

Result:

NAME                 READY  UP-TO-DATE  AVAILABLE  AGE
...
kube-dns-autoscaler  1/1    1           1          52m

Define the scaling parameters

The kube-dns-autoscaler pod regularly polls the Kubernetes server for the number of Managed Service for Kubernetes cluster nodes and cores. Based on this data, the number of CoreDNS replicas is calculated.

Two types of calculation are possible:

  • Linear mode.
  • Ladder mode (a step function).

For more information about these calculations, see the cluster-proportional-autoscaler documentation.

In this example, we consider the linear mode in which the calculation follows this formula:

replicas = max( ceil( cores * 1/coresPerReplica ) , ceil( nodes * 1/nodesPerReplica ) )

Where:

  • coresPerReplica: Configuration parameter indicating the number of CoreDNS replicas per vCPU of the Managed Service for Kubernetes cluster.
  • nodesPerReplica: Configuration parameter indicating the number of CoreDNS replicas per node of the Managed Service for Kubernetes cluster.
  • cores: Actual number of vCPUs in the Managed Service for Kubernetes cluster.
  • nodes: Actual number of nodes in the Managed Service for Kubernetes cluster.
  • ceil: Function that rounds a decimal to an integer.
  • max: Function that returns the maximum of the two values.

The optional preventSinglePointFailure parameter is relevant for multi-node Managed Service for Kubernetes clusters. If true, the minimum number of DNS replicas is two.

You can also define the min and max configuration parameters that set the minimum and maximum number of CoreDNS replicas in the Managed Service for Kubernetes cluster:

replicas = min(replicas, max)
replicas = max(replicas, min)

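The linear-mode formula and the clamps above can be sketched in Python. This is an illustrative calculation only, not part of kube-dns-autoscaler itself; the parameter names mirror its configuration keys, and preventSinglePointFailure is applied before the min/max clamps:

```python
import math

def coredns_replicas(cores, nodes, cores_per_replica, nodes_per_replica,
                     prevent_single_point_failure=False,
                     min_replicas=None, max_replicas=None):
    # Linear mode: take the larger of the per-core and per-node estimates.
    replicas = max(math.ceil(cores / cores_per_replica),
                   math.ceil(nodes / nodes_per_replica))
    # With preventSinglePointFailure, multi-node clusters get at least 2 replicas.
    if prevent_single_point_failure and nodes > 1:
        replicas = max(replicas, 2)
    # Optional clamps: replicas = min(replicas, max); replicas = max(replicas, min).
    if max_replicas is not None:
        replicas = min(replicas, max_replicas)
    if min_replicas is not None:
        replicas = max(replicas, min_replicas)
    return replicas

# 3 nodes and 12 vCPUs with the default parameters (coresPerReplica: 256,
# nodesPerReplica: 16, preventSinglePointFailure: true).
print(coredns_replicas(12, 3, 256, 16, prevent_single_point_failure=True))  # 2
```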

Changing the configuration

  1. Check the current settings.

    In this example, we are creating a Managed Service for Kubernetes node group named node-group-1 with the following parameters:

    • Number of Managed Service for Kubernetes nodes: 3
    • Number of vCPUs: 12

    By default, linear mode and the following scaling parameters are used:

    • coresPerReplica: 256
    • nodesPerReplica: 16
    • preventSinglePointFailure: true
    replicas = max( ceil( 12 * 1/256 ), ceil( 3 * 1/16 ) ) = 1
    

    As preventSinglePointFailure is set to true, the number of CoreDNS replicas will be two.

    To get the coredns pod data, run this command:

    kubectl get pods -n kube-system
    

    Result:

    NAME                      READY  STATUS   RESTARTS  AGE
    ...
    coredns-7c********-4dmjl  1/1    Running  0         128m
    coredns-7c********-n7qsv  1/1    Running  0         134m
    
  2. Set new parameters.

    Change the configuration as follows:

    • coresPerReplica: 4
    • nodesPerReplica: 2
    • preventSinglePointFailure: true
    replicas = max( ceil( 12 * 1/4 ), ceil( 3 * 1/2 ) ) = 3
    

    To deliver the parameters to kube-dns-autoscaler, edit the relevant ConfigMap using this command:

    kubectl edit configmap kube-dns-autoscaler --namespace=kube-system
    

    This will open a text editor with a kube-dns-autoscaler configuration. Change the line with the following parameters:

    linear: '{"coresPerReplica":4,"nodesPerReplica":2,"preventSinglePointFailure":true}'
    

    Save your changes to see the operation output:

    configmap/kube-dns-autoscaler edited
    

    kube-dns-autoscaler will upload this configuration and scale the DNS service based on the new parameters.

Test scaling

Resize the Managed Service for Kubernetes cluster

Create a second Managed Service for Kubernetes node group using this command:

yc managed-kubernetes node-group create \
  --name node-group-2 \
  --cluster-name dns-autoscaler \
  --location zone=ru-central1-a \
  --public-ip \
  --fixed-size 2 \
  --cores 4 \
  --core-fraction 5

Result:

done (2m43s)
...

Now the Managed Service for Kubernetes cluster has 5 nodes with 20 vCPUs. Calculate the number of replicas:

replicas = max( ceil( 20 * 1/4 ), ceil( 5 * 1/2 ) ) = 5
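This arithmetic can be checked with a quick sketch; the node and vCPU counts are taken from the two node groups above (3 nodes with 4 vCPUs each, plus 2 nodes with 4 vCPUs each):

```python
import math

# node-group-1: 3 nodes x 4 vCPUs; node-group-2: 2 nodes x 4 vCPUs.
nodes = 3 + 2
cores = 3 * 4 + 2 * 4
# Linear mode with coresPerReplica: 4 and nodesPerReplica: 2.
replicas = max(math.ceil(cores / 4), math.ceil(nodes / 2))
print(nodes, cores, replicas)  # 5 20 5
```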

Check the changes in the number of CoreDNS replicas

Run this command:

kubectl get pods -n kube-system

Result:

NAME                      READY  STATUS   RESTARTS  AGE
...
coredns-7c********-7l8mc  1/1    Running  0         3m30s
coredns-7c********-n7qsv  1/1    Running  0         3h20m
coredns-7c********-pv9cv  1/1    Running  0         3m40s
coredns-7c********-r2lss  1/1    Running  0         49m
coredns-7c********-s5jgz  1/1    Running  0         57m

Set up reduction of the number of Managed Service for Kubernetes nodes

By default, Cluster Autoscaler does not reduce the number of nodes in a Managed Service for Kubernetes node group with autoscaling if these nodes contain pods from the kube-system namespace managed by the Deployment, ReplicaSet, or StatefulSet app replication controllers, e.g., CoreDNS pods. In this case, the number of Managed Service for Kubernetes nodes per group cannot be less than the number of CoreDNS pods.

To allow the number of Managed Service for Kubernetes nodes to decrease, configure a PodDisruptionBudget object for the CoreDNS pods; it guarantees that at least two CoreDNS pods stay available while the remaining pods can be evicted when nodes are drained:

kubectl create poddisruptionbudget <pdb_name> \
  --namespace=kube-system \
  --selector k8s-app=kube-dns \
  --min-available=2

Result:

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: <pdb_name>
spec:
  minAvailable: 2
  selector:
    matchLabels:
      k8s-app: kube-dns
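With this PodDisruptionBudget in place, the number of CoreDNS pods that may be evicted at any moment equals the number of healthy pods minus minAvailable. A small sketch of that rule (standard Kubernetes PDB semantics, not specific to this tutorial):

```python
def allowed_disruptions(healthy_pods, min_available):
    # A PDB with minAvailable permits evictions only while at least
    # min_available pods stay up; never returns a negative count.
    return max(healthy_pods - min_available, 0)

print(allowed_disruptions(5, 2))  # 3: three of five CoreDNS pods may be evicted
print(allowed_disruptions(2, 2))  # 0: no evictions allowed
```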

Disable scaling

Set the number of replicas in the kube-dns-autoscaler application's Deployment to zero:

kubectl scale deployment --replicas=0 kube-dns-autoscaler --namespace=kube-system

Result:

deployment.apps/kube-dns-autoscaler scaled

Check the result with this command:

kubectl get deployment --namespace=kube-system

Result:

NAME                 READY  UP-TO-DATE  AVAILABLE  AGE
...
kube-dns-autoscaler  0/0    0           0          3h53m

Delete the resources you created

Delete the resources you no longer need to avoid paying for them:

Manually
Terraform

Delete the Managed Service for Kubernetes cluster.

  1. In the terminal window, go to the directory containing the infrastructure plan.

    Warning

    Make sure the directory has no Terraform manifests with the resources you want to keep. Terraform deletes all resources that were created using the manifests in the current directory.

  2. Delete resources:

    1. Run this command:

      terraform destroy
      
    2. Confirm deleting the resources and wait for the operation to complete.

    All the resources described in the Terraform manifests will be deleted.

Yandex project
© 2025 Yandex.Cloud LLC