Automatic scaling
Automatic scaling adjusts the size of a node group, the number of pods, or the amount of resources allocated to each pod, depending on the resource needs of the pods running on the group's nodes. Autoscaling is available as of Kubernetes version 1.15.
In a Managed Service for Kubernetes cluster, three types of automatic scaling are available:
- Cluster autoscaling (Cluster Autoscaler). Managed Service for Kubernetes monitors the load on the nodes and updates the number of nodes within specified limits as required.
- Horizontal pod scaling (Horizontal Pod Autoscaler). Kubernetes dynamically changes the number of pod replicas of a workload depending on the load.
- Vertical pod scaling (Vertical Pod Autoscaler). When load increases, Kubernetes allocates additional resources to each pod within established limits.
You can use several types of automatic scaling in the same cluster. However, using Horizontal Pod Autoscaler and Vertical Pod Autoscaler together is not recommended.
Cluster autoscaling
Cluster Autoscaler automatically modifies the number of nodes in a group depending on the load.
Warning
You can only place an autoscaling node group in one availability zone.
When creating a node group, select an automatic scaling type and set the minimum, maximum, and initial number of nodes in the group. Kubernetes will periodically check the pod status and node load on the nodes, adjusting the group size as required:
- If pods cannot be scheduled because the existing nodes lack vCPU or RAM, the number of nodes in the group will gradually increase up to the specified maximum size.
- If the load on the nodes is low and all pods can be scheduled on fewer nodes, the number of nodes in the group will gradually decrease down to the specified minimum size. If a node's pods cannot be evicted within the specified period of time (7 minutes), the node is forced to stop. The waiting time cannot be changed.
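Cluster Autoscaler bases these decisions on the vCPU and RAM requests declared by pods. As a rough sketch (the workload name and image below are hypothetical), if the replicas of a Deployment like this one cannot all fit on the existing nodes, the group grows toward its maximum size; once they fit on fewer nodes again, the group shrinks toward the minimum:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app                # hypothetical workload name
spec:
  replicas: 6
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: app
          image: registry.example.com/web-app:1.0   # placeholder image
          resources:
            requests:
              cpu: 500m        # the scheduler reserves 0.5 vCPU per pod
              memory: 512Mi    # and 512 MiB of RAM per pod
```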
Note
When setting the maximum group size, take the current limits and quotas into account.
Cluster Autoscaler activation is only available when creating a node group. Cluster Autoscaler is managed on the Managed Service for Kubernetes side.
For more information, see the Kubernetes documentation.
See also Questions and answers about node group autoscaling in Managed Service for Kubernetes.
Horizontal pod autoscaling
When using horizontal pod scaling, Kubernetes changes the number of pod replicas depending on the vCPU load.
When creating a Horizontal Pod Autoscaler, specify the following parameters:
- Desired average percentage vCPU load for each pod.
- Minimum and maximum number of pod replicas.
Horizontal pod autoscaling is available for the following controllers:
- Deployment
- StatefulSet
- ReplicaSet
- ReplicationController
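For example, a HorizontalPodAutoscaler manifest along the following lines sets the parameters listed above for a Deployment (the resource names are hypothetical; the autoscaling/v2 API is assumed to be available in your cluster version):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa            # hypothetical name
spec:
  scaleTargetRef:              # controller whose replica count is managed
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 2               # minimum number of pod replicas
  maxReplicas: 10              # maximum number of pod replicas
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60   # desired average vCPU load, in percent
```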
You can learn more about Horizontal Pod Autoscaler in the Kubernetes documentation.
Vertical pod autoscaling
Kubernetes uses the limits parameters to restrict the resources allocated to each application. A pod that exceeds its vCPU limit is throttled; a pod that exceeds its RAM limit is stopped.
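Limits are declared per container in the pod specification; a minimal sketch with hypothetical names and values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: limited-app            # hypothetical pod name
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0   # placeholder image
      resources:
        requests:
          cpu: 250m
          memory: 256Mi
        limits:
          cpu: 500m            # exceeding this causes CPU throttling
          memory: 512Mi        # exceeding this stops the pod (OOM kill)
```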
If required, Vertical Pod Autoscaler allocates additional vCPU and RAM resources to pods.
When creating a Vertical Pod Autoscaler, set the update mode (updateMode) in its specification:
updateMode: "Off"for Vertical Pod Autoscaler to provide recommendations on managing pod resources without modifying them.updateMode: "Initial", for Vertical Pod Autoscaler only sets resource requests when Pods are first created. It does not update resources for already running Pods, even if recommendations change over time.updateMode: "Recreate", for Vertical Pod Autoscaler actively manages Pod resources by evicting Pods when their current resource requests differ significantly from recommendations.updateMode: "InPlaceOrRecreate", for Vertical Pod Autoscaler attempts to update resource requests and limits without restarting the Pod when possible. If in-place updates are not supported, the Pod is recreated in the same way as inRecreatemode.
You can learn more about Vertical Pod Autoscaler in the Kubernetes documentation.