Questions and answers about node group autoscaling in Managed Service for Kubernetes
- In an autoscaling group, the number of nodes never scales down to one, even when there is no load
- Why does the node group fail to scale down after a pod deletion?
Why does my cluster have N nodes and is not scaling down?
Autoscaling does not stop nodes running pods that cannot be evicted. The scaling barriers include:
- Pods whose eviction is limited with PodDisruptionBudget (see the example below).
- Pods in the kube-system namespace:
  - Those not created under the DaemonSet controller.
  - Those without a PodDisruptionBudget, or those whose eviction is limited with PodDisruptionBudget.
- Pods that were not created under a replication controller (ReplicaSet, Deployment, or StatefulSet).
- Pods with local-storage.
- Pods that cannot be evicted anywhere due to limitations, e.g., a lack of resources or no nodes matching the affinity or anti-affinity selectors.
- Pods with an annotation that prohibits eviction: "cluster-autoscaler.kubernetes.io/safe-to-evict": "false".
Note

You can evict kube-system pods, pods with local-storage, and pods without a replication controller. To do this, set the "safe-to-evict": "true" annotation:

kubectl annotate pod <pod_name> cluster-autoscaler.kubernetes.io/safe-to-evict=true
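For reference, the first barrier above, eviction limited by a PodDisruptionBudget, could come from a manifest like this minimal sketch; the name and label selector are hypothetical and would match your own workload:

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb          # hypothetical name
spec:
  minAvailable: 1           # keep at least one replica running at all times
  selector:
    matchLabels:
      app: my-app           # hypothetical label of the protected pods

With minAvailable: 1 and a single replica, the pod can never be evicted, so Cluster Autoscaler will not remove the node it runs on.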
Other possible causes include:
- The node group has already reached its minimum size.
- The node has been idle for less than 10 minutes.
- The node group was scaled up within the last 10 minutes.
- There was an unsuccessful attempt to scale down the node group within the last 3 minutes.
- There was an unsuccessful attempt to stop a certain node. In this case, the next attempt occurs in 5 minutes.
- The node has an annotation that prohibits stopping it on scale-down: "cluster-autoscaler.kubernetes.io/scale-down-disabled": "true". You can add or remove this annotation using kubectl.

  Check the node for the annotation:

  kubectl describe node <node_name> | grep scale-down-disabled

  Result:

  Annotations: cluster-autoscaler.kubernetes.io/scale-down-disabled: true

  Set the annotation:

  kubectl annotate node <node_name> cluster-autoscaler.kubernetes.io/scale-down-disabled=true

  To remove the annotation, run the kubectl command with a trailing -:

  kubectl annotate node <node_name> cluster-autoscaler.kubernetes.io/scale-down-disabled-
In an autoscaling group, the number of nodes never scales down to one, even when there is no load
In a Managed Service for Kubernetes cluster, the kube-dns-autoscaler app decides on the number of CoreDNS replicas. If the preventSinglePointFailure parameter in the kube-dns-autoscaler configuration is set to true and there is more than one node in the group, the minimum number of CoreDNS replicas is two. In this case, Cluster Autoscaler cannot scale the number of nodes in the cluster down below the number of CoreDNS pods.
Learn more about DNS scaling based on the cluster size in the Kubernetes documentation.
Solution:
- Disable the protection setting that limits the minimum number of CoreDNS replicas to two. To do this, set the preventSinglePointFailure parameter to false in the kube-dns-autoscaler ConfigMap (see the sketch after this list).
- Enable kube-dns-autoscaler pod eviction by adding the safe-to-evict annotation to its Deployment:

  kubectl patch deployment kube-dns-autoscaler -n kube-system \
    --type merge \
    -p '{"spec":{"template":{"metadata":{"annotations":{"cluster-autoscaler.kubernetes.io/safe-to-evict":"true"}}}}}'
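To illustrate the first step: open the ConfigMap for editing with

kubectl edit configmap kube-dns-autoscaler -n kube-system

Here is a sketch of what it might contain, assuming the commonly used linear scaling mode; the data key and the numeric values depend on how autoscaling is configured in your cluster:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns-autoscaler
  namespace: kube-system
data:
  # the "linear" key and the values below are illustrative; yours may differ
  linear: |-
    {
      "coresPerReplica": 256,
      "nodesPerReplica": 16,
      "preventSinglePointFailure": false
    }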
Why does the node group fail to scale down after a pod deletion?
Scale-down is not immediate: if the node is underloaded, it is removed after 10 minutes of being idle.
Why does autoscaling fail to trigger even though the number of nodes is below the minimum or exceeds the maximum?
Autoscaling will not violate the preset limits, but Managed Service for Kubernetes does not explicitly control the limits. Upscaling is triggered only if there are pods in the unschedulable status.
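To check whether there are pods the scheduler cannot place, you can list pods stuck in the Pending phase and inspect their events; a minimal sketch:

# pods that have not been scheduled yet
kubectl get pods --all-namespaces --field-selector status.phase=Pending

# the Events section shows FailedScheduling messages with the reason
kubectl describe pod <pod_name> -n <namespace>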
Why do Terminated pods remain in my cluster?
This happens because the Pod garbage collector (PodGC) does not clean up terminated pods immediately: it removes them only once their number exceeds the threshold set by the kube-controller-manager terminated-pod-gc-threshold flag.
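Until PodGC gets to them, you can delete such pods manually; a minimal sketch, assuming the terminated pods are in the Failed phase:

# list pods reported as Terminated
kubectl get pods --all-namespaces | grep Terminated

# delete Failed pods in all namespaces
kubectl delete pods --all-namespaces --field-selector status.phase=Failed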
To get answers to other questions about autoscaling, see the Kubernetes documentation.
Is Horizontal Pod Autoscaler supported?
Yes, Managed Service for Kubernetes supports horizontal pod autoscaling.
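For example, a minimal sketch, assuming a hypothetical Deployment named my-app whose containers have CPU requests set:

kubectl autoscale deployment my-app --cpu-percent=50 --min=1 --max=10

This creates a HorizontalPodAutoscaler that scales the Deployment between 1 and 10 replicas, targeting about 50% average CPU utilization.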