Automatic DNS scaling by cluster size
Managed Service for Kubernetes supports automatic DNS scaling. The Managed Service for Kubernetes cluster runs the kube-dns-autoscaler app, which tunes the number of CoreDNS replicas depending on:
- The number of Managed Service for Kubernetes cluster nodes.
- The number of vCPUs in the Managed Service for Kubernetes cluster.
The number of replicas is calculated using the formulas described below.
To automate DNS scaling:
1. Configure kube-dns-autoscaler.
2. Test scaling.
If you no longer need automatic scaling, disable it.
If you no longer need the resources you created, delete them.
Getting started
- Create Managed Service for Kubernetes resources:

  Manually

  1. Create security groups for the Managed Service for Kubernetes cluster and its node groups.

     Warning
     The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.

  2. Create a Managed Service for Kubernetes cluster. When creating it, specify the security groups prepared in advance.
     If you intend to use your cluster within the Yandex Cloud network, there is no need to allocate a public IP address to it. To allow connections from outside the network, assign a public IP address to the cluster.

  3. Create a node group. Allocate it a public IP address to provide internet access and allow pulling Docker images and components. Specify the security groups prepared in advance.
  Terraform

  1. If you do not have Terraform yet, install it.

  2. Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.

  3. Configure and initialize a provider. There is no need to create a provider configuration file manually: you can download it.

  4. Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.

  5. Download the k8s-cluster.tf configuration file of the Managed Service for Kubernetes cluster to the same working directory. The file describes:

     - The Managed Service for Kubernetes cluster.
     - The Managed Service for Kubernetes node group.
     - The service account required to create the Managed Service for Kubernetes cluster and node group.
     - The security groups which contain the rules required for the Managed Service for Kubernetes cluster and its node groups.

     Warning
     The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.

  6. Specify the folder ID in the configuration file.

  7. Make sure the Terraform configuration files are correct using this command:

     terraform validate

     If there are any errors in the configuration files, Terraform will point them out.

  8. Create the required infrastructure:

     1. Run the command to view the planned changes:

        terraform plan

        If the resource configuration descriptions are correct, the terminal will display a list of the resources to modify and their parameters. This is a test step. No resources are updated.

     2. If you are happy with the planned changes, apply them:

        1. Run the command:

           terraform apply

        2. Confirm the update of resources.

        3. Wait for the operation to complete.

     All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.
- Install kubectl and configure it to work with the created cluster.
  If the cluster has no public IP address assigned and kubectl is configured via the cluster's private IP address, run kubectl commands on a Yandex Cloud VM that is in the same network as the cluster.
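For example, kubectl credentials can be added with the Yandex Cloud CLI. A minimal sketch, assuming the cluster is named dns-autoscaler (the name used later in this tutorial) and has a public IP address:

yc managed-kubernetes cluster get-credentials dns-autoscaler --external

The command adds the cluster context to the local kubeconfig file. When connecting over the cluster's private IP address from inside the network, use the --internal flag instead of --external.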
Configure kube-dns-autoscaler
Make sure that the app is up and running
Check the Deployment in the kube-system namespace:
kubectl get deployment --namespace=kube-system
Result:
NAME READY UP-TO-DATE AVAILABLE AGE
...
kube-dns-autoscaler 1/1 1 1 52m
Define the scaling parameters
The kube-dns-autoscaler pod regularly polls the Kubernetes server for the number of Managed Service for Kubernetes cluster nodes and cores. Based on this data, the number of CoreDNS replicas is calculated.
Two types of calculation are possible:
- Linear mode.
- Ladder mode (a step function).
For more information about the calculation, see the cluster-proportional-autoscaler documentation.
In this example, we use the linear mode, where calculations follow this formula:
replicas = max( ceil( cores * 1/coresPerReplica ) , ceil( nodes * 1/nodesPerReplica ) )
Where:
- coresPerReplica: Configuration parameter setting the number of Managed Service for Kubernetes cluster vCPUs per CoreDNS replica.
- nodesPerReplica: Configuration parameter setting the number of Managed Service for Kubernetes cluster nodes per CoreDNS replica.
- cores: Actual number of vCPUs in the Managed Service for Kubernetes cluster.
- nodes: Actual number of nodes in the Managed Service for Kubernetes cluster.
- ceil: Function that rounds a decimal number up to an integer.
- max: Function that returns the larger of two values.
The additional preventSinglePointFailure parameter is relevant for multi-node Managed Service for Kubernetes clusters. If it is set to true, the minimum number of DNS replicas is two.
You can also define the min and max configuration parameters that set the minimum and maximum number of CoreDNS replicas in the Managed Service for Kubernetes cluster:
replicas = min(replicas, max)
replicas = max(replicas, min)
For more information about these parameters, see the cluster-proportional-autoscaler documentation.
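For example, min and max are specified in the same linear configuration string as the other parameters. A minimal sketch with hypothetical values:

linear: '{"coresPerReplica":4,"nodesPerReplica":2,"min":2,"max":10,"preventSinglePointFailure":true}'

With this configuration, the autoscaler never runs fewer than 2 or more than 10 CoreDNS replicas, whatever the formula returns.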
Change the configuration
1. Check the current settings.

   In this example, the Managed Service for Kubernetes node group node-group-1 was created with the following parameters:
   - Number of Managed Service for Kubernetes nodes: 3
   - Number of vCPU cores: 12

   By default, the linear mode and the following scaling parameters are used:
   - coresPerReplica: 256
   - nodesPerReplica: 16
   - preventSinglePointFailure: true

   replicas = max( ceil( 12 * 1/256 ), ceil( 3 * 1/16 ) ) = 1

   The preventSinglePointFailure parameter is true, meaning the number of CoreDNS replicas is two.

   To get the coredns pod data, run this command:

   kubectl get pods -n kube-system

   Result:

   NAME                       READY   STATUS    RESTARTS   AGE
   ...
   coredns-7c********-4dmjl   1/1     Running   0          128m
   coredns-7c********-n7qsv   1/1     Running   0          134m
2. Set new parameters.

   Change the configuration as follows:
   - coresPerReplica: 4
   - nodesPerReplica: 2
   - preventSinglePointFailure: true

   replicas = max( ceil( 12 * 1/4 ), ceil( 3 * 1/2 ) ) = 3

   To deliver the parameters to the kube-dns-autoscaler application, edit the appropriate ConfigMap using this command:

   kubectl edit configmap kube-dns-autoscaler --namespace=kube-system

   Once a text editor with the kube-dns-autoscaler configuration opens, change the line with the parameters as follows:

   linear: '{"coresPerReplica":4,"nodesPerReplica":2,"preventSinglePointFailure":true}'

   Save your changes. You will see the operation result:

   configmap/kube-dns-autoscaler edited

   The kube-dns-autoscaler application will load the new configuration and scale the DNS service according to the new parameters.
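To check that the new parameters took effect, you can print the ConfigMap and list the CoreDNS pods by label. A sketch using standard kubectl commands; the k8s-app=kube-dns label is the same one used for the PodDisruptionBudget later in this tutorial:

kubectl get configmap kube-dns-autoscaler --namespace=kube-system -o yaml
kubectl get pods --namespace=kube-system -l k8s-app=kube-dns

The first command shows the stored linear configuration; the second should list three coredns pods once the autoscaler has applied the new parameters.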
Test scaling
Resize the Managed Service for Kubernetes cluster
Create a second Managed Service for Kubernetes node group using this command:
yc managed-kubernetes node-group create \
--name node-group-2 \
--cluster-name dns-autoscaler \
--location zone=ru-central1-a \
--public-ip \
--fixed-size 2 \
--cores 4 \
--core-fraction 5
Result:
done (2m43s)
...
Now the Managed Service for Kubernetes cluster has 5 nodes with 20 vCPUs. Calculate the number of replicas:
replicas = max( ceil( 20 * 1/4 ), ceil( 5 * 1/2 ) ) = 5
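To confirm the new cluster size before checking the DNS replicas, you can list the nodes with a standard kubectl command:

kubectl get nodes

With both node groups running, the output should show five nodes.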
Check the changes in the number of CoreDNS replicas
Run this command:
kubectl get pods -n kube-system
Result:
NAME READY STATUS RESTARTS AGE
...
coredns-7c********-7l8mc 1/1 Running 0 3m30s
coredns-7c********-n7qsv 1/1 Running 0 3h20m
coredns-7c********-pv9cv 1/1 Running 0 3m40s
coredns-7c********-r2lss 1/1 Running 0 49m
coredns-7c********-s5jgz 1/1 Running 0 57m
Set up the reducing of the number of Managed Service for Kubernetes nodes
By default, Cluster Autoscaler does not reduce the number of nodes in a Managed Service for Kubernetes node group with autoscaling if these nodes contain pods from the kube-system namespace that are managed by a Deployment.
To allow the number of Managed Service for Kubernetes nodes to decrease, configure a PodDisruptionBudget for the CoreDNS pods:
kubectl create poddisruptionbudget <pdb_name> \
--namespace=kube-system \
--selector k8s-app=kube-dns \
--min-available=2
Result:
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
name: <pdb_name>
spec:
minAvailable: 2
selector:
matchLabels:
k8s-app: kube-dns
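To make sure the policy is in place, list the PodDisruptionBudget objects in the kube-system namespace with a standard kubectl command:

kubectl get pdb --namespace=kube-system

The ALLOWED DISRUPTIONS column shows how many CoreDNS pods may be evicted at a time during node scale-down.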
Disable scaling
Reset the number of replicas in the kube-dns-autoscaler application Deployment to zero:
kubectl scale deployment --replicas=0 kube-dns-autoscaler --namespace=kube-system
Result:
deployment.apps/kube-dns-autoscaler scaled
Check the result with this command:
kubectl get deployment --namespace=kube-system
Result:
NAME READY UP-TO-DATE AVAILABLE AGE
...
kube-dns-autoscaler 0/0 0 0 3h53m
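With the autoscaler disabled, the number of CoreDNS replicas is no longer adjusted automatically. If you need a different fixed number of replicas, set it manually; a sketch assuming the CoreDNS Deployment is named coredns, as the pod names above suggest:

kubectl scale deployment coredns --replicas=2 --namespace=kube-system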
Delete the resources you created
Delete the resources you no longer need to avoid paying for them:
To delete the infrastructure created with Terraform:

1. In the terminal window, go to the directory containing the infrastructure plan.

2. Delete the k8s-cluster.tf configuration file.

3. Make sure the Terraform configuration files are correct using this command:

   terraform validate

   If there are any errors in the configuration files, Terraform will point them out.

4. Run the command to view the planned changes:

   terraform plan

   If the resource configuration descriptions are correct, the terminal will display a list of the resources to modify and their parameters. This is a test step. No resources are updated.

5. If you are happy with the planned changes, apply them:

   1. Run the command:

      terraform apply

   2. Confirm the update of resources.

   3. Wait for the operation to complete.

All the resources described in the k8s-cluster.tf configuration file will be deleted.
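If you created the Managed Service for Kubernetes resources manually, you can delete them with the Yandex Cloud CLI instead; a sketch assuming the resource names used in this tutorial:

yc managed-kubernetes node-group delete node-group-1
yc managed-kubernetes node-group delete node-group-2
yc managed-kubernetes cluster delete dns-autoscaler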