
Questions and answers about node group autoscaling in Managed Service for Kubernetes

Written by Yandex Cloud
Updated on September 26, 2025
  • Why does my cluster have N nodes and not scale down?

  • In an autoscaling group, the number of nodes never scales down to one, even when there is no load

  • Why does the node group fail to scale down after a pod deletion?

  • Why does autoscaling fail to trigger even though the number of nodes is below the minimum or exceeds the maximum?

  • Why do Terminated pods remain in my cluster?

  • Is Horizontal Pod Autoscaler supported?

Why does my cluster have N nodes and not scale down?

Autoscaling does not stop nodes running pods that cannot be evicted. Eviction is blocked for:

  • Pods whose eviction is limited with PodDisruptionBudget (see the check after this list).
  • Pods in the kube-system namespace:
    • Those not created under the DaemonSet controller.
    • Those without a PodDisruptionBudget, or those whose PodDisruptionBudget limits eviction.
  • Pods that were not created under a replication controller (ReplicaSet, Deployment, or StatefulSet).
  • Pods with local-storage.
  • Pods that cannot be rescheduled anywhere else, for example, due to insufficient resources or a lack of nodes matching the affinity or anti-affinity selectors.
  • Pods with an annotation that prohibits eviction: "cluster-autoscaler.kubernetes.io/safe-to-evict": "false".
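
For example, to check whether a PodDisruptionBudget is blocking eviction, you can list the PDB objects across namespaces (a quick diagnostic sketch; an ALLOWED DISRUPTIONS value of 0 means the covered pods cannot be evicted):

kubectl get pdb --all-namespaces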

Note

You can evict kube-system pods, pods with local-storage, and pods without a replication controller. To do this, set the "safe-to-evict": "true" annotation:

kubectl annotate pod <pod_name> cluster-autoscaler.kubernetes.io/safe-to-evict=true

Other possible causes include:

  • The node group has already reached its minimum size.

  • The node has been idle for less than 10 minutes.

  • During the last 10 minutes, the node group has been scaled up.

  • During the last 3 minutes, there was an unsuccessful attempt to scale down the node group.

  • There was an unsuccessful attempt to stop a certain node. In this case, the next attempt occurs in 5 minutes.

  • The node has an annotation that prevents its removal during scale-down: "cluster-autoscaler.kubernetes.io/scale-down-disabled": "true". You can add or remove this annotation using kubectl.

    Check for the annotation on the node:

    kubectl describe node <node_name> | grep scale-down-disabled
    

    Result:

    Annotations:        cluster-autoscaler.kubernetes.io/scale-down-disabled: true
    

    Set the annotation:

    kubectl annotate node <node_name> cluster-autoscaler.kubernetes.io/scale-down-disabled=true
    

    To remove the annotation, run the same kubectl command with a trailing -:

    kubectl annotate node <node_name> cluster-autoscaler.kubernetes.io/scale-down-disabled-
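
If none of these causes applies, you can inspect the events recorded for the node; depending on the setup, the autoscaler may have logged why the node was skipped during scale-down (a diagnostic sketch):

kubectl get events --all-namespaces --field-selector involvedObject.kind=Node,involvedObject.name=<node_name>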
    

In an autoscaling group, the number of nodes never scales down to one, even when there is no load

In a Managed Service for Kubernetes cluster, the kube-dns-autoscaler app controls the number of CoreDNS replicas. If the preventSinglePointFailure parameter in the kube-dns-autoscaler configuration is set to true and the group has more than one node, the minimum number of CoreDNS replicas is two. In that case, Cluster Autoscaler cannot reduce the number of nodes in the cluster below the number of CoreDNS pods.

Learn more about DNS scaling based on cluster size in the Kubernetes documentation.
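
To confirm this is what is happening, you can check the current number of CoreDNS replicas and the autoscaler settings (a diagnostic sketch; coredns is the usual Deployment name, but it may differ in your cluster):

kubectl get deployment coredns -n kube-system
kubectl get configmap kube-dns-autoscaler -n kube-system -o yaml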

Solution:

  1. Disable the protection setting that limits the minimum number of CoreDNS replicas to two. To do this, set the preventSinglePointFailure parameter to false in the kube-dns-autoscaler ConfigMap, as shown in the sketch after this list.

  2. Enable eviction of the kube-dns-autoscaler pod by adding the safe-to-evict annotation to its Deployment:

    kubectl patch deployment kube-dns-autoscaler -n kube-system \
      --type merge \
      -p '{"spec":{"template":{"metadata":{"annotations":{"cluster-autoscaler.kubernetes.io/safe-to-evict":"true"}}}}}'
    

Why does the node group fail to scale down after a pod deletion?

Scale-down is not immediate: once the node becomes underutilized, it is removed only after 10 minutes.
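
To estimate whether a node counts as underutilized, you can check how much of its capacity is actually requested by pods; the autoscaler looks at requested resources, not actual usage. A diagnostic sketch (adjust the number of context lines as needed):

kubectl describe node <node_name> | grep -A 7 "Allocated resources"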

Why does autoscaling fail to trigger even though the number of nodes is below the minimum or exceeds the maximum?

Autoscaling will not violate the preset limits, but Managed Service for Kubernetes does not explicitly enforce them. Scaling up is only triggered when there are pods in an unschedulable status.
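
To check whether there are pods that would trigger a scale-up, you can list pods stuck in the Pending phase (a diagnostic sketch; a FailedScheduling event on such a pod means it is unschedulable):

kubectl get pods --all-namespaces --field-selector status.phase=Pending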

Why do Terminated pods remain in my cluster?

This happens because the Pod garbage collector (PodGC) fails to delete these pods during autoscaling. For more information, see Deleting Terminated pods.
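
As a workaround, you can delete such pods manually. A minimal sketch, assuming the leftover pods are in the Failed phase:

kubectl delete pods --all-namespaces --field-selector status.phase=Failed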

To get answers to other questions about autoscaling, see the Kubernetes documentation.

Is Horizontal Pod Autoscaler supported?

Yes, Managed Service for Kubernetes supports horizontal pod autoscaling.
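
For example, a minimal Horizontal Pod Autoscaler for a hypothetical Deployment named my-app (the name and the thresholds are placeholders):

kubectl autoscale deployment my-app --cpu-percent=50 --min=1 --max=10

This keeps between 1 and 10 replicas, scaling on an average CPU utilization target of 50%.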
