Configuring autoscaling
Managed Service for Kubernetes has three autoscaling methods available:
- Cluster autoscaling
- Horizontal pod autoscaling
- Vertical pod autoscaling
Getting started
- Create a Managed Service for Kubernetes cluster with any suitable configuration.
- Install kubectl and configure it to work with the new cluster.
Configuring cluster autoscaling
Warning
You can only enable autoscaling of this type when creating a Managed Service for Kubernetes node group.
To create an autoscaling Managed Service for Kubernetes node group:
Create a Managed Service for Kubernetes node group with the following parameters:
- Scaling type: Automatic.
- Minimum number of nodes: Number of Managed Service for Kubernetes nodes to remain in the group at the minimum workload.
- Maximum number of nodes: Maximum number of Managed Service for Kubernetes nodes allowed in the group.
- Initial number of nodes: Number of Managed Service for Kubernetes nodes to create together with the group. This number must be between the minimum and the maximum number of nodes in the group.
If you do not have the Yandex Cloud CLI installed yet, install and initialize it.
By default, the CLI uses the folder specified when creating the profile. To change the default folder, use the yc config set folder-id <folder_ID> command. You can also set a different folder for any specific command using the --folder-name or --folder-id parameter.
- Check the command to create a Managed Service for Kubernetes node group:

  ```
  yc managed-kubernetes node-group create --help
  ```

- Create an autoscaling Managed Service for Kubernetes node group:

  ```
  yc managed-kubernetes node-group create \
    ...
    --auto-scale min=<minimum_number_of_nodes>,max=<maximum_number_of_nodes>,initial=<initial_number_of_nodes>
  ```
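The three values only make sense when min ≤ initial ≤ max. As a rough pre-flight sketch in pure shell (`check_bounds` is a hypothetical helper, not part of the yc CLI):

```shell
# Hypothetical pre-flight check: autoscaling bounds must satisfy
# min <= initial <= max before they are passed to --auto-scale.
check_bounds() {
  local min=$1 initial=$2 max=$3
  if [ "$min" -le "$initial" ] && [ "$initial" -le "$max" ]; then
    echo "ok: min=$min initial=$initial max=$max"
  else
    echo "error: require min <= initial <= max" >&2
    return 1
  fi
}

check_bounds 1 3 6   # ok: min=1 initial=3 max=6
# check_bounds 3 1 6 would print an error and return a nonzero status.
```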
With Terraform, you can quickly create a cloud infrastructure in Yandex Cloud and manage it using configuration files. These files store the infrastructure description written in HashiCorp Configuration Language (HCL). If you change the configuration files, Terraform automatically detects which part of your configuration is already deployed, and what should be added or removed.

Terraform is distributed under the Business Source License. The Yandex Cloud provider for Terraform is distributed under the MPL-2.0 license.

For more information about the provider resources, see the relevant documentation on the Terraform website or its mirror.

If you do not have Terraform yet, install it and configure the Yandex Cloud provider.
- Open the current Terraform configuration file describing the node group.

  For more information about creating this file, see Creating a node group.

- Add a description of the new node group and specify the autoscaling settings under `scale_policy.auto_scale`:

  ```hcl
  resource "yandex_kubernetes_node_group" "<node_group_name>" {
    ...
    scale_policy {
      auto_scale {
        min     = <minimum_number_of_nodes_in_group>
        max     = <maximum_number_of_nodes_in_group>
        initial = <initial_number_of_nodes_in_group>
      }
    }
  }
  ```
- Make sure the configuration files are correct.

  - In the command line, navigate to the directory that contains the current Terraform configuration files defining the infrastructure.

  - Run this command:

    ```
    terraform validate
    ```

    Terraform will show any errors found in your configuration files.

- Confirm updating the resources.

  - Run this command to view the planned changes:

    ```
    terraform plan
    ```

    If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

  - If everything looks correct, apply the changes:

    - Run this command:

      ```
      terraform apply
      ```

    - Confirm updating the resources.

    - Wait for the operation to complete.
Timeouts
The Terraform provider sets time limits for operations with Managed Service for Kubernetes cluster node groups:
- Creating and editing: 60 minutes.
- Deleting: 20 minutes.
Operations exceeding these limits will be interrupted.

How do I modify these limits?

Add the `timeouts` section to the cluster node group description, e.g.:

```hcl
resource "yandex_kubernetes_node_group" "<node_group_name>" {
  ...
  timeouts {
    create = "1h30m"
    update = "1h30m"
    delete = "60m"
  }
}
```
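The timeout values are Go-style duration strings. As a quick sketch of how they relate to the default limits (`to_minutes` is a hypothetical helper for illustration only; Terraform parses these values itself):

```shell
# Convert a Go-style duration with h/m units to minutes (hypothetical
# helper; handles only the h and m units used in the example above).
to_minutes() {
  local d=$1 h=0 m=0
  case $d in *h*) h=${d%%h*}; d=${d#*h} ;; esac
  case $d in *m)  m=${d%m} ;; esac
  echo $(( h * 60 + m ))
}

to_minutes "1h30m"   # 90  -- extends the 60-minute create/update default
to_minutes "60m"     # 60  -- matches the default create/update limit
```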
Cluster Autoscaler is managed on the Managed Service for Kubernetes side.
Learn more about Cluster Autoscaler in Cluster autoscaling. You can find the default parameters in this Kubernetes guide.
See also Questions and answers about node group autoscaling in Managed Service for Kubernetes.
Configuring horizontal pod autoscaling
- Create a Horizontal Pod Autoscaler for your application, for example:

  ```
  kubectl autoscale deployment/<application_name> --cpu-percent=50 --min=1 --max=3
  ```

  Where:
  - `--cpu-percent`: Target vCPU utilization of the Managed Service for Kubernetes pods, as a percentage.
  - `--min`: Minimum number of Managed Service for Kubernetes pods.
  - `--max`: Maximum number of Managed Service for Kubernetes pods.

- Check the Horizontal Pod Autoscaler status:

  ```
  kubectl describe hpa/<application_name>
  ```
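For the example above, the Horizontal Pod Autoscaler's scaling decision follows the standard formula desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric), clamped to the `--min`/`--max` bounds. A rough sketch in pure shell arithmetic (`desired_replicas` is a hypothetical helper; the metric values are made up):

```shell
# desired = ceil(current_replicas * current_cpu / target_cpu),
# clamped to [min, max]; integer ceiling via (a + b - 1) / b.
desired_replicas() {
  local current=$1 cpu=$2 target=$3 min=$4 max=$5
  local d=$(( (current * cpu + target - 1) / target ))
  if [ "$d" -lt "$min" ]; then d=$min; fi
  if [ "$d" -gt "$max" ]; then d=$max; fi
  echo "$d"
}

desired_replicas 1 80 50 1 3    # ceil(1.6) = 2
desired_replicas 2 150 50 1 3   # ceil(6.0) = 6, clamped to --max=3
```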
Learn more about Horizontal Pod Autoscaler in Horizontal pod autoscaling.
Configuring vertical pod autoscaling
- Install the Vertical Pod Autoscaler from this repository:

  ```
  cd /tmp && \
  git clone https://github.com/kubernetes/autoscaler.git && \
  cd autoscaler/vertical-pod-autoscaler/hack && \
  ./vpa-up.sh
  ```

- Create a configuration file named `vpa.yaml` for your application:

  ```yaml
  apiVersion: autoscaling.k8s.io/v1
  kind: VerticalPodAutoscaler
  metadata:
    name: <application_name>
  spec:
    targetRef:
      apiVersion: "apps/v1"
      kind: Deployment
      name: <application_name>
    updatePolicy:
      updateMode: "<VPA_update_mode>"
  ```

  Where `updateMode` is the Vertical Pod Autoscaler operation mode: `Auto` or `Off`.

- Create a Vertical Pod Autoscaler for your application:

  ```
  kubectl apply -f vpa.yaml
  ```

- Check the Vertical Pod Autoscaler status:

  ```
  kubectl describe vpa <application_name>
  ```
Learn more about Vertical Pod Autoscaler in Vertical pod autoscaling.
Deleting Terminated pods
Sometimes during autoscaling, Managed Service for Kubernetes node pods are not removed and stay in the Terminated state. This happens because the Pod garbage collector (PodGC) does not remove them in time: it only cleans up terminated pods once their number exceeds a threshold.
You can remove terminated Managed Service for Kubernetes pods:
Manually
Run this command:

```
kubectl get pods --all-namespaces | grep -i Terminated \
  | awk '{print $1, $2}' | xargs -n2 kubectl delete pod -n
```
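To see what this pipeline actually hands to `kubectl delete pod`, here is a dry run on fabricated `kubectl get pods --all-namespaces` output (the namespaces and pod names are invented for illustration):

```shell
# Fabricated `kubectl get pods --all-namespaces` output.
sample='NAMESPACE     NAME       READY   STATUS       RESTARTS   AGE
default       web-1      0/1     Terminated   0          5m
kube-system   dns-abc    1/1     Running      0          9m
apps          worker-2   0/1     Terminated   0          2m'

# grep keeps the Terminated rows; awk extracts "<namespace> <pod>" pairs,
# which xargs -n2 would pass to `kubectl delete pod -n <namespace> <pod>`.
printf '%s\n' "$sample" | grep -i Terminated | awk '{print $1, $2}'
# default web-1
# apps worker-2
```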
Automatically using a CronJob
To remove terminated Managed Service for Kubernetes pods automatically, set up a CronJob that deletes them on a schedule. If you no longer need the CronJob, delete it.
Setting up automatic deletion in a CronJob
- Create a file named `cronjob.yaml` with a specification for the CronJob and resources to run it:

  ```yaml
  ---
  apiVersion: batch/v1
  kind: CronJob
  metadata:
    name: terminated-pod-cleaner
  spec:
    schedule: "*/5 * * * *"
    jobTemplate:
      spec:
        template:
          spec:
            serviceAccountName: terminated-pod-cleaner
            containers:
            - name: terminated-pod-cleaner
              image: bitnamilegacy/kubectl
              imagePullPolicy: IfNotPresent
              command: ["/bin/sh", "-c"]
              args: ["kubectl get pods --all-namespaces | grep -i Terminated | awk '{print $1, $2}' | xargs --no-run-if-empty -n2 kubectl delete pod -n"]
            restartPolicy: Never
  ---
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: terminated-pod-cleaner
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: terminated-pod-cleaner
  rules:
    - apiGroups: [""]
      resources:
        - pods
      verbs: [list, delete]
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: terminated-pod-cleaner
  subjects:
    - kind: ServiceAccount
      name: terminated-pod-cleaner
      namespace: default
  roleRef:
    kind: ClusterRole
    name: terminated-pod-cleaner
    apiGroup: rbac.authorization.k8s.io
  ```

  The `schedule: "*/5 * * * *"` line defines a schedule in cron format: the job runs every 5 minutes. Change the interval if needed.
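Note that the job's pipeline adds `--no-run-if-empty` (a GNU xargs flag): without it, xargs runs its command once even on empty input, so the scheduled `kubectl delete pod` call would fail whenever there is nothing to clean up. A quick illustration, with `echo delete` standing in for `kubectl delete pod` and invented namespace/pod pairs:

```shell
# Empty input: --no-run-if-empty suppresses the command run entirely.
printf '' | xargs --no-run-if-empty -n2 echo delete   # prints nothing

# Non-empty input: each "<namespace> <pod>" pair becomes one invocation.
printf 'ns-a pod-1\nns-b pod-2\n' | xargs --no-run-if-empty -n2 echo delete
# delete ns-a pod-1
# delete ns-b pod-2
```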
- Create the CronJob and its resources:

  ```
  kubectl create -f cronjob.yaml
  ```

  Result:

  ```
  cronjob.batch/terminated-pod-cleaner created
  serviceaccount/terminated-pod-cleaner created
  clusterrole.rbac.authorization.k8s.io/terminated-pod-cleaner created
  clusterrolebinding.rbac.authorization.k8s.io/terminated-pod-cleaner created
  ```

- Make sure the CronJob has been created:

  ```
  kubectl get cronjob terminated-pod-cleaner
  ```

  Result:

  ```
  NAME                     SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
  terminated-pod-cleaner   */5 * * * *   False     0        <none>          4s
  ```

  After the interval specified in `SCHEDULE`, a time value will appear in the `LAST SCHEDULE` column. This means the job ran at that time, whether it completed successfully or failed.
Checking the results of CronJob jobs
- Get a list of jobs:

  ```
  kubectl get jobs
  ```

  Result:

  ```
  NAME         COMPLETIONS   DURATION   AGE
  <job_name>   1/1           4s         2m1s
  ...
  ```

- Get the name of the Managed Service for Kubernetes pod that ran the job:

  ```
  kubectl get pods --selector=job-name=<job_name> --output=jsonpath={.items[*].metadata.name}
  ```

- View the Managed Service for Kubernetes pod logs:

  ```
  kubectl logs <pod_name>
  ```

  The log will include a list of removed Managed Service for Kubernetes pods. If the log is empty, there were no Managed Service for Kubernetes Terminated pods when the job ran.
Deleting the CronJob
To delete the CronJob and its resources, run this command:
```
kubectl delete cronjob terminated-pod-cleaner && \
kubectl delete serviceaccount terminated-pod-cleaner && \
kubectl delete clusterrole terminated-pod-cleaner && \
kubectl delete clusterrolebinding terminated-pod-cleaner
```