DNS autoscaling based on the cluster size
Managed Service for Kubernetes supports DNS autoscaling. kube-dns-autoscaler running in a Managed Service for Kubernetes cluster adjusts the number of CoreDNS replicas based on:
- Number of Managed Service for Kubernetes cluster nodes.
- Number of vCPUs in the Managed Service for Kubernetes cluster.
The number of replicas is calculated using the formulas described below.
To automate DNS scaling:
1. Configure kube-dns-autoscaler.
2. Test scaling.

If you no longer need automatic scaling, disable it.
If you no longer need the resources you created, delete them.
Required paid resources
The support cost for this solution includes:
- Fee for using the master and outgoing traffic in a Managed Service for Kubernetes cluster (see Managed Service for Kubernetes pricing).
- Fee for each VM (cluster nodes and management VMs without public access) which covers the use of computing resources, operating system, and storage (see Compute Cloud pricing).
- Fee for a public IP address for cluster nodes (see Virtual Private Cloud pricing).
Getting started
- Create Managed Service for Kubernetes resources:

  Manually

  1. Create security groups for the Managed Service for Kubernetes cluster and its node groups.

     Warning

     The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.

  2. Create a Managed Service for Kubernetes cluster. When creating the cluster, specify the preconfigured security groups.

     If you only use the cluster within the Yandex Cloud internal network, it does not need a public IP address. To enable internet access to the cluster, assign it a public IP address.

  3. Create a node group. To enable internet access for the node group (e.g., for pulling Docker images), assign it a public IP address. Specify the preconfigured security groups.
  Terraform

  1. If you do not have Terraform yet, install it.

  2. Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.

  3. Configure and initialize a provider. You do not need to create a provider configuration file manually: you can download it.

  4. Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.

  5. Download the k8s-cluster.tf configuration file for the Managed Service for Kubernetes cluster to the same working directory. This file describes:

     - The Managed Service for Kubernetes cluster.
     - The Managed Service for Kubernetes node group.
     - The service account required to create the Managed Service for Kubernetes cluster and node group.
     - The security groups with the rules required for the Managed Service for Kubernetes cluster and its node groups.

     Warning

     The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.

  6. Specify the folder ID in the configuration file.

  7. Make sure the Terraform configuration files are correct using this command:

     terraform validate

     Terraform will show any errors found in your configuration files.

  8. Create the required infrastructure:

     1. Run this command to view the planned changes:

        terraform plan

        If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

     2. If everything looks correct, apply the changes:

        1. Run this command:

           terraform apply

        2. Confirm updating the resources.

        3. Wait for the operation to complete.

     All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.
- Install kubectl and configure it to work with the new cluster.

  If the cluster has no public IP address assigned and kubectl is configured via the cluster's private IP address, run kubectl commands on a Yandex Cloud VM that is in the same network as the cluster.
Configure kube-dns-autoscaler
Make sure the app is up and running
Check that the kube-dns-autoscaler Deployment is present in the kube-system namespace:
kubectl get deployment --namespace=kube-system
Result:
NAME READY UP-TO-DATE AVAILABLE AGE
...
kube-dns-autoscaler 1/1 1 1 52m
Define the scaling parameters
The kube-dns-autoscaler pod regularly polls the Kubernetes server for the number of Managed Service for Kubernetes cluster nodes and cores. Based on this data, it calculates the number of CoreDNS replicas.
There are two calculation modes:
- Linear
- Ladder (step function)
For more information about the calculation modes, see the cluster-proportional-autoscaler page on GitHub.
This example uses the linear mode which calculates the number of replicas by the following formula:
replicas = max( ceil( cores * 1/coresPerReplica ) , ceil( nodes * 1/nodesPerReplica ) )
Where:
- coresPerReplica: Configuration parameter setting the number of cluster vCPUs per CoreDNS replica.
- nodesPerReplica: Configuration parameter setting the number of cluster nodes per CoreDNS replica.
- cores: Actual number of vCPUs in the Managed Service for Kubernetes cluster.
- nodes: Actual number of nodes in the Managed Service for Kubernetes cluster.
- ceil: Function that rounds a decimal up to the nearest integer.
- max: Function that returns the greater of the two values.
The optional preventSinglePointFailure parameter is relevant for multi-node Managed Service for Kubernetes clusters. If true, the minimum number of DNS replicas is two.
You can also define the min and max configuration parameters that set the minimum and maximum number of CoreDNS replicas in the Managed Service for Kubernetes cluster:
replicas = min(replicas, max)
replicas = max(replicas, min)
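The linear-mode calculation above, including the preventSinglePointFailure and min/max adjustments, can be sketched in Python. This is a hypothetical helper for illustration; kube-dns-autoscaler implements this logic itself:

```python
import math

def coredns_replicas(cores, nodes, coresPerReplica, nodesPerReplica,
                     prevent_single_point_failure=True,
                     min_replicas=None, max_replicas=None):
    # Linear mode: take the larger of the per-core and per-node estimates,
    # each rounded up to the nearest integer.
    replicas = max(math.ceil(cores / coresPerReplica),
                   math.ceil(nodes / nodesPerReplica))
    # preventSinglePointFailure: keep at least two replicas on multi-node clusters.
    if prevent_single_point_failure and nodes > 1:
        replicas = max(replicas, 2)
    # Optional caps: replicas = min(replicas, max); replicas = max(replicas, min).
    if max_replicas is not None:
        replicas = min(replicas, max_replicas)
    if min_replicas is not None:
        replicas = max(replicas, min_replicas)
    return replicas

# Values from this tutorial:
print(coredns_replicas(12, 3, 256, 16))  # 2 (formula gives 1, raised by preventSinglePointFailure)
print(coredns_replicas(12, 3, 4, 2))     # 3
print(coredns_replicas(20, 5, 4, 2))     # 5
```

The three calls reproduce the replica counts computed later in this tutorial for the default parameters, the updated parameters, and the enlarged cluster.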
Change the configuration
- Check the current settings.

  In this example, we have a Managed Service for Kubernetes node group named node-group-1 with the following settings:

  - Number of Managed Service for Kubernetes nodes: 3
  - Number of vCPUs: 12

  By default, the linear mode and the following scaling parameters are used:

  - coresPerReplica: 256
  - nodesPerReplica: 16
  - preventSinglePointFailure: true

  This gives:

  replicas = max( ceil( 12 * 1/256 ), ceil( 3 * 1/16 ) ) = 1

  As preventSinglePointFailure is set to true, the number of CoreDNS replicas will be two.

  To get the coredns pod details, run this command:

  kubectl get pods -n kube-system

  Result:

  NAME                      READY  STATUS   RESTARTS  AGE
  ...
  coredns-7c********-4dmjl  1/1    Running  0        128m
  coredns-7c********-n7qsv  1/1    Running  0        134m
- Set new parameter values.

  Change the configuration as follows:

  - coresPerReplica: 4
  - nodesPerReplica: 2
  - preventSinglePointFailure: true

  This gives:

  replicas = max( ceil( 12 * 1/4 ), ceil( 3 * 1/2 ) ) = 3

  To deliver the parameters to kube-dns-autoscaler, edit the relevant ConfigMap using this command:

  kubectl edit configmap kube-dns-autoscaler --namespace=kube-system

  This will open the kube-dns-autoscaler configuration in a text editor. Edit the parameter line as follows:

  linear: '{"coresPerReplica":4,"nodesPerReplica":2,"preventSinglePointFailure":true}'

  Save your changes. You will see the result of the operation on the screen:

  configmap/kube-dns-autoscaler edited

  kube-dns-autoscaler will load this configuration and scale the DNS service based on the new parameters.
Test scaling
Resize the Managed Service for Kubernetes cluster
Create another Managed Service for Kubernetes node group using this command:
yc managed-kubernetes node-group create \
--name node-group-2 \
--cluster-name dns-autoscaler \
--location zone=ru-central1-a \
--public-ip \
--fixed-size 2 \
--cores 4 \
--core-fraction 5
Result:
done (2m43s)
...
Now the Managed Service for Kubernetes cluster has five nodes with 20 vCPUs. Calculate the number of replicas:
replicas = max( ceil( 20 * 1/4 ), ceil( 5 * 1/2 ) ) = 5
Check the changes in the number of CoreDNS replicas
Run this command:
kubectl get pods -n kube-system
Result:
NAME READY STATUS RESTARTS AGE
...
coredns-7c********-7l8mc 1/1 Running 0 3m30s
coredns-7c********-n7qsv 1/1 Running 0 3h20m
coredns-7c********-pv9cv 1/1 Running 0 3m40s
coredns-7c********-r2lss 1/1 Running 0 49m
coredns-7c********-s5jgz 1/1 Running 0 57m
Set up Managed Service for Kubernetes node downscaling
By default, Cluster Autoscaler does not scale down nodes in an autoscaling Managed Service for Kubernetes node group if those nodes run pods from the kube-system namespace managed by a Deployment.

To enable Managed Service for Kubernetes node downscaling, configure a PodDisruptionBudget:
kubectl create poddisruptionbudget <pdb_name> \
--namespace=kube-system \
--selector k8s-app=kube-dns \
--min-available=2
Result:
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: <pdb_name>
spec:
  minAvailable: 2
  selector:
    matchLabels:
      k8s-app: kube-dns
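The effect of minAvailable on voluntary evictions can be sketched as follows. This is a simplified model for illustration, not the actual Kubernetes eviction API:

```python
def allowed_disruptions(healthy_pods: int, min_available: int) -> int:
    """Voluntary evictions a minAvailable-style PodDisruptionBudget permits right now."""
    return max(0, healthy_pods - min_available)

# With 3 CoreDNS replicas and minAvailable=2, one pod may be evicted at a time,
# so Cluster Autoscaler can drain a node without dropping DNS below 2 replicas.
print(allowed_disruptions(3, 2))  # 1
# With only 2 healthy replicas left, no further voluntary evictions are allowed.
print(allowed_disruptions(2, 2))  # 0
```

This is why the PodDisruptionBudget unblocks node downscaling: Cluster Autoscaler may now evict CoreDNS pods, but never so many at once that fewer than two remain available.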
Disable scaling
Set the number of replicas in the kube-dns-autoscaler Deployment to zero:
kubectl scale deployment --replicas=0 kube-dns-autoscaler --namespace=kube-system
Result:
deployment.apps/kube-dns-autoscaler scaled
Check the result with this command:
kubectl get deployment --namespace=kube-system
Result:
NAME READY UP-TO-DATE AVAILABLE AGE
...
kube-dns-autoscaler 0/0 0 0 3h53m
Delete the resources you created
Delete the resources you no longer need to avoid paying for them:
- In the terminal window, go to the directory containing the infrastructure plan.

  Warning

  Make sure the directory has no Terraform manifests with the resources you want to keep. Terraform deletes all resources that were created using the manifests in the current directory.

- Delete the resources:

  1. Run this command:

     terraform destroy

  2. Confirm deleting the resources and wait for the operation to complete.

  All the resources described in the Terraform manifests will be deleted.