Yandex project
© 2025 Yandex.Cloud LLC
Yandex Managed Service for Kubernetes

Questions and answers about node group autoscaling in Managed Service for Kubernetes

Written by
Yandex Cloud
Updated at February 14, 2024

Why are there N nodes in my cluster now, but the cluster is not scaling down?

Autoscaling does not stop nodes running pods that cannot be evicted. Eviction is blocked for:

  • Pods whose eviction is limited with PodDisruptionBudget.
  • Pods in the kube-system namespace:
    • That were not created under the DaemonSet controller.
    • That have no PodDisruptionBudget, or whose PodDisruptionBudget does not allow eviction.
  • Pods that were not created under a replication controller (ReplicaSet, Deployment, or StatefulSet).
  • Pods with local storage.
  • Pods that cannot be evicted anywhere due to limitations. For example, due to lack of resources or lack of nodes matching the affinity or anti-affinity selectors.
  • Pods with an annotation that disables eviction: "cluster-autoscaler.kubernetes.io/safe-to-evict": "false".
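For example, a PodDisruptionBudget that requires a minimum number of available replicas blocks the eviction, and therefore the scale-down, of any node whose removal would breach that minimum. A minimal sketch (the name, label, and replica count are hypothetical):

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb        # hypothetical name
spec:
  minAvailable: 2         # eviction is denied if it would leave fewer than 2 pods running
  selector:
    matchLabels:
      app: my-app         # hypothetical label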

Note

Pods in the kube-system namespace, pods with local storage, and pods not managed by a replication controller can still be evicted if you explicitly allow it. To do this, set the "safe-to-evict": "true" annotation:

kubectl annotate pod <pod_name> cluster-autoscaler.kubernetes.io/safe-to-evict=true
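The same annotation can also be set declaratively in the pod manifest, so it persists across pod restarts; a sketch with hypothetical names:

apiVersion: v1
kind: Pod
metadata:
  name: cache-pod                                            # hypothetical name
  annotations:
    cluster-autoscaler.kubernetes.io/safe-to-evict: "true"   # allow the autoscaler to evict this pod
spec:
  containers:
    - name: cache
      image: redis:7                                         # hypothetical image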

Other possible causes include:

  • The node group has already reached its minimum size.

  • The node is idle for less than 10 minutes.

  • During the last 10 minutes, the node group has been scaled up.

  • During the last 3 minutes, there was an unsuccessful attempt to scale down the node group.

  • There was an unsuccessful attempt to stop a certain node. In this case, the next attempt occurs in 5 minutes.

  • The node has an annotation that prohibits stopping it on scale-down: "cluster-autoscaler.kubernetes.io/scale-down-disabled": "true". You can add or remove an annotation using kubectl.

    Check whether the node has the annotation:

    kubectl describe node <node_name> | grep scale-down-disabled
    

    Result:

    Annotations:        cluster-autoscaler.kubernetes.io/scale-down-disabled: true
    

    Set the annotation:

    kubectl annotate node <node_name> cluster-autoscaler.kubernetes.io/scale-down-disabled=true
    

    Remove the annotation by running kubectl annotate with a trailing - on the annotation key:

    kubectl annotate node <node_name> cluster-autoscaler.kubernetes.io/scale-down-disabled-
    

Why does the node group fail to scale down after the pod deletion?

Deleting pods does not trigger an immediate scale-down: once a node becomes underutilized, the autoscaler removes it only after about 10 minutes.

Why isn't autoscaling performed even when the number of nodes gets less than the minimum or greater than the maximum?

Autoscaling will not violate the preset limits, but Managed Service for Kubernetes does not actively enforce those limits on the current node count. Scale-up is triggered only when there are pods in the unschedulable status.
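As an illustration, a pod whose resource requests cannot fit on any existing node stays Pending as unschedulable, and only then does the autoscaler add a node. All values below are hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: big-request              # hypothetical name
spec:
  containers:
    - name: app
      image: nginx:1.25          # hypothetical image
      resources:
        requests:
          cpu: "6"               # does not fit on a 4-vCPU node, so the pod stays unschedulable
          memory: 8Gi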

Why do Terminated pods remain in my cluster?

This happens because the Pod garbage collector (PodGC) fails to delete these pods during autoscaling. For more information, see Deleting Terminated pods.
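Until PodGC catches up, you can remove such pods manually. For example, assuming cluster access, this deletes all pods in the Failed phase across all namespaces:

kubectl delete pods --field-selector=status.phase=Failed --all-namespaces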

To get answers to other questions about autoscaling, see the Kubernetes documentation.
