Automatic scaling
Automatic scaling adjusts the size of a node group, the number of pods, or the amount of resources allocated to each pod, based on the resource requests of the pods running on the group's nodes. Autoscaling is available starting with Kubernetes version 1.15.
In a Managed Service for Kubernetes cluster, three types of automatic scaling are available:
- Cluster autoscaling (Cluster Autoscaler). Managed Service for Kubernetes monitors the load on the nodes and modifies the number of nodes within specified limits as required.
- Horizontal pod scaling (Horizontal Pod Autoscaler). Kubernetes dynamically changes the number of pod replicas depending on the load.
- Vertical pod scaling (Vertical Pod Autoscaler). When load increases, Kubernetes allocates additional resources to each pod within established limits.
You can use several types of automatic scaling in the same cluster. However, using Horizontal Pod Autoscaler and Vertical Pod Autoscaler together is not recommended.
Cluster autoscaling
Cluster Autoscaler automatically modifies the number of nodes in a group depending on the load.
When creating a node group, select the automatic scaling type and set the minimum, maximum, and initial number of nodes in the group. Kubernetes periodically checks the pod status and the load on the nodes and adjusts the group size as needed:
- If pods cannot be scheduled because the existing nodes lack vCPUs or RAM, the number of nodes in the group gradually increases up to the specified maximum size (see the example below).
- If the load on the nodes is low and all pods can be scheduled on fewer nodes, the number of nodes in the group gradually decreases to the specified minimum size. If a node's pods cannot be evicted within the waiting period (7 minutes), the node is stopped forcibly. The waiting time cannot be changed.
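Cluster Autoscaler bases its decisions on the vCPU and RAM requests declared in pod specifications rather than on actual usage. Below is a minimal sketch of a Deployment whose requests would be taken into account when deciding whether the group needs more nodes; the names, image, and values are illustrative.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app                    # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: registry.example.com/demo-app:1.0   # placeholder image
        resources:
          requests:                 # the scheduler and Cluster Autoscaler work with these requests
            cpu: 500m
            memory: 256Mi

If these requests cannot be satisfied by the existing nodes, the pods remain in the Pending state and the group is scaled up.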
Note
When calculating the maximum group size, take the current limits and quotas into account.
You can only enable Cluster Autoscaler when creating a node group. Cluster Autoscaler is managed on the Managed Service for Kubernetes side.
For more information, see the Kubernetes documentation.
See also Questions and answers about node group autoscaling in Managed Service for Kubernetes.
Horizontal pod autoscaling
When using horizontal pod scaling, Kubernetes changes the number of pods depending on vCPU load.
When creating a Horizontal Pod Autoscaler, specify the following parameters (see the sample manifest after this list):
- The target average vCPU utilization, as a percentage, for each pod.
- The minimum and maximum number of pod replicas.
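A minimal sketch of a HorizontalPodAutoscaler manifest with these parameters, using the standard autoscaling/v2 API (the target workload name and the values are illustrative):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: demo-app-hpa                # illustrative name
spec:
  scaleTargetRef:                   # the controller whose replica count is scaled
    apiVersion: apps/v1
    kind: Deployment
    name: demo-app
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50      # target average vCPU utilization, %

The same object can also be created imperatively with kubectl autoscale deployment demo-app --cpu-percent=50 --min=1 --max=10.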
Horizontal pod autoscaling is available for controllers that support the scale subresource, such as Deployment, StatefulSet, ReplicaSet, and ReplicationController.
You can learn more about Horizontal Pod Autoscaler in the Kubernetes documentation.
Vertical pod autoscaling
Kubernetes restricts the resources allocated to each application using the limits parameters. A pod that exceeds its vCPU limit is throttled (the processor skips clock cycles for it), and a pod that exceeds its RAM limit is stopped.
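A minimal sketch of how these limits are declared in a pod specification (the names, image, and values are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: demo-app                    # illustrative name
spec:
  containers:
  - name: demo-app
    image: registry.example.com/demo-app:1.0   # placeholder image
    resources:
      requests:                     # resources guaranteed to the container
        cpu: 250m
        memory: 128Mi
      limits:                       # hard ceilings enforced by Kubernetes
        cpu: 500m                   # exceeding this leads to CPU throttling
        memory: 256Mi               # exceeding this gets the container stopped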
If required, Vertical Pod Autoscaler allocates additional vCPU and RAM resources to pods.
When creating a Vertical Pod Autoscaler, set the autoscaling option in the specification (see the sample specification below):
- updateMode: "Auto" for Vertical Pod Autoscaler to manage pod resources automatically.
- updateMode: "Off" for Vertical Pod Autoscaler to provide recommendations on managing pod resources without modifying them.
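A minimal sketch of a VerticalPodAutoscaler specification; it assumes the VerticalPodAutoscaler custom resource from the Kubernetes autoscaler project (API group autoscaling.k8s.io/v1), and the target workload name and resource bounds are illustrative:

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: demo-app-vpa                # illustrative name
spec:
  targetRef:                        # the controller whose pods are analyzed
    apiVersion: apps/v1
    kind: Deployment
    name: demo-app
  updatePolicy:
    updateMode: "Auto"              # use "Off" to only get recommendations
  resourcePolicy:
    containerPolicies:
    - containerName: "*"            # applies to all containers in the pod
      maxAllowed:                   # upper bound for allocated resources
        cpu: 1
        memory: 512Mi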
You can learn more about Vertical Pod Autoscaler in the Kubernetes documentation.