
Questions and answers about node group autoscaling in Managed Service for Kubernetes

Written by
Yandex Cloud
Updated at November 21, 2025
  • Why does my cluster have N nodes and is not scaling down?

  • In an autoscaling group, the number of nodes never scales down to one, even when there is no load

  • Why does the node group fail to scale down after the pod deletion?

  • Why does autoscaling fail to trigger even though the number of nodes is below the minimum or above the maximum?

  • Why do Terminated pods remain in my cluster?

  • Is Horizontal Pod Autoscaler supported?

Why does my cluster have N nodes and is not scaling down?

Autoscaling does not stop nodes running pods that cannot be evicted. The following pods prevent scale-down:

  • Pods with a PodDisruptionBudget that restricts their eviction (see the example after this list).
  • Pods in the kube-system namespace:
    • Those not managed by a DaemonSet controller.
    • Those without a PodDisruptionBudget or those with a PodDisruptionBudget restricting their eviction.
  • Pods not managed by a replication controller, such as ReplicaSet, Deployment, or StatefulSet.
  • Pods with local-storage.
  • Pods that cannot be scheduled anywhere due to restrictions, e.g., due to insufficient resources or lack of nodes matching the affinity or anti-affinity selectors.
  • Pods annotated with "cluster-autoscaler.kubernetes.io/safe-to-evict": "false".
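
For illustration, here is a minimal PodDisruptionBudget of the kind the first item refers to (the myapp names are hypothetical): while evicting a pod would leave fewer ready replicas than minAvailable, eviction is denied, and the autoscaler cannot drain the node hosting it.

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: myapp-pdb
spec:
  minAvailable: 2        # eviction is denied if it would leave fewer than 2 ready pods
  selector:
    matchLabels:
      app: myapp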

Note

You can evict kube-system pods, pods with local-storage, and pods without a replication controller. To do this, set "safe-to-evict": "true":

kubectl annotate pod <pod_name> cluster-autoscaler.kubernetes.io/safe-to-evict=true

Other possible causes include (a quick utilization check is shown after this list):

  • The node group has already reached its minimum size.

  • The node has been idle for less than 10 minutes.

  • The node group was scaled up in the last 10 minutes.

  • There was a failed attempt to scale down the node group in the last three minutes.

  • There was an unsuccessful attempt to stop a certain node. In this case, the next attempt occurs in 5 minutes.

  • The node is annotated to prevent it from being stopped during downscaling: "cluster-autoscaler.kubernetes.io/scale-down-disabled": "true". You can add or remove the annotation using kubectl.

    Check the node for annotations:

    kubectl describe node <node_name> | grep scale-down-disabled
    

    Result:

    Annotations:        cluster-autoscaler.kubernetes.io/scale-down-disabled: true
    

    Set the annotation:

    kubectl annotate node <node_name> cluster-autoscaler.kubernetes.io/scale-down-disabled=true
    

    To remove the annotation, run the same kubectl command with a trailing -:

    kubectl annotate node <node_name> cluster-autoscaler.kubernetes.io/scale-down-disabled-
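
More generally, to check whether a node is considered underutilized, look at the resource requests allocated on it: the autoscaler evaluates pod requests, not live usage. One quick way (a rough check, not an official diagnostic):

kubectl describe node <node_name> | grep -A 7 "Allocated resources"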
    

In an autoscaling group, the number of nodes never scales down to one, even when there is no load

In a Managed Service for Kubernetes cluster, the kube-dns-autoscaler app decides on the number of CoreDNS replicas. If the preventSinglePointFailure parameter in the kube-dns-autoscaler configuration is set to true and there is more than one node in the group, the minimum number of CoreDNS replicas is two. In this case, the Cluster Autoscaler cannot scale down the number of nodes in the cluster below that of CoreDNS pods.

Learn more about DNS scaling based on cluster size in the Kubernetes documentation.

Solution:

  1. Disable the protection setting that limits the minimum number of CoreDNS replicas to two. To do this, set the preventSinglePointFailure parameter to false in the kube-dns-autoscaler ConfigMap (see the sketch after this list).

  2. Enable eviction of the kube-dns-autoscaler pod by adding the safe-to-evict annotation to its Deployment:

    kubectl patch deployment kube-dns-autoscaler -n kube-system \
      --type merge \
      -p '{"spec":{"template":{"metadata":{"annotations":{"cluster-autoscaler.kubernetes.io/safe-to-evict":"true"}}}}}'
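
As a sketch of step 1: the preventSinglePointFailure flag lives in the linear key of the kube-dns-autoscaler ConfigMap, assuming the autoscaler runs in linear mode with the default parameters. Check your actual values first with kubectl get configmap kube-dns-autoscaler -n kube-system -o yaml, then patch, for example:

kubectl patch configmap kube-dns-autoscaler -n kube-system \
  --type merge \
  -p '{"data":{"linear":"{\"coresPerReplica\":256,\"nodesPerReplica\":16,\"preventSinglePointFailure\":false}"}}'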
    

Why does the node group fail to scale down after the pod deletion?

Deleting a pod does not trigger an immediate scale-down: the autoscaler removes an underutilized node only after it has remained underutilized for 10 minutes.

Why does autoscaling fail to trigger even though the number of nodes is below the minimum or above the maximum?

Autoscaling will not violate the preset limits, but Managed Service for Kubernetes does not actively enforce them: if the node count is already below the minimum or above the maximum, the service will not correct it on its own. Upscaling is triggered only by unschedulable pods.
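
To see whether there are such pods, list the pods stuck in the Pending phase (a rough check: Pending also covers pods that are merely waiting to be scheduled):

kubectl get pods --all-namespaces --field-selector status.phase=Pending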

Why do Terminated pods remain in my cluster?

This happens because the Pod garbage collector (PodGC) fails to clean up these pods in time during autoscaling. For more information, see Deleting terminated pods.
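
If such pods pile up, you can remove them manually. For example, the following deletes all pods in the Failed phase across all namespaces, so run it with care:

kubectl delete pods --all-namespaces --field-selector status.phase=Failed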

To get answers to other questions about autoscaling, see Kubernetes FAQ.

Is Horizontal Pod Autoscaler supported?

Yes, Managed Service for Kubernetes supports Horizontal Pod Autoscaler.
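
For reference, here is a minimal HorizontalPodAutoscaler manifest using the standard autoscaling/v2 API; the myapp Deployment name and the thresholds are placeholders:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas when average CPU usage exceeds 70% of requests

Apply it with kubectl apply -f and track its state with kubectl get hpa.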
