© 2025 Direct Cursus Technology L.L.C.


Number of nodes in the group fails to decrease

Written by
Yandex Cloud
Updated at November 14, 2024
  • Issue description
  • Solution

Issue description

The number of nodes in the group does not decrease in the Yandex Managed Service for Kubernetes cluster.

Solution

Yandex Managed Service for Kubernetes uses the Kubernetes cluster-autoscaler to scale node groups automatically. It works as follows: you specify the minimum and maximum size of the node group, and cluster-autoscaler periodically checks the state of the pods and nodes.

If the load on the nodes is low enough that all pods can be scheduled on fewer nodes, the number of nodes in the group will gradually decrease to the specified minimum size.

To do this, cluster-autoscaler periodically checks node utilization and, if the pods can be rescheduled onto other nodes without overloading them, drains the node and stops it.

For a node to be drained and removed, the following conditions must be met:

  • The node's utilization is below 50%.
  • The pods on this node do not use local storage.
  • affinity, anti-affinity, nodeSelector, and topologySpreadConstraints rules do not prevent the pods from being moved.
  • The pods are managed by a controller (Deployment, StatefulSet).
  • No PodDisruptionBudget will be violated by the node's removal.
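If a specific pod should not block scale-down (or, conversely, should always block it), cluster-autoscaler honors the `cluster-autoscaler.kubernetes.io/safe-to-evict` pod annotation. A minimal sketch, where the pod name, container name, and image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod        # placeholder name
  annotations:
    # "true" lets cluster-autoscaler evict this pod even if it would
    # otherwise block scale-down (for example, a pod with local storage);
    # "false" prevents eviction and keeps the node from being removed.
    cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
spec:
  containers:
    - name: app            # placeholder container
      image: registry.example.com/app:latest
```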

You can also find the relevant node manually and inspect its pods (including pods in the kube-system namespace), deleting them manually if needed.
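The manual check above can be sketched with kubectl; the node name is a placeholder:

```shell
# Show the node's allocated resources, taints, and conditions that
# may prevent it from being drained.
kubectl describe node <node_name>

# List all pods running on that node, including kube-system pods.
kubectl get pods --all-namespaces --field-selector spec.nodeName=<node_name>

# Check whether evicting any of those pods would violate a PodDisruptionBudget.
kubectl get pdb --all-namespaces
```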

For details, see our documentation and the official cluster-autoscaler documentation.

We also recommend enabling the writing of master logs to your log group:

yc k8s cluster update <cluster_id> --master-logging enabled=true,log-group-id=<log_group_id>,cluster-autoscaler-enabled=true,kube-apiserver-enabled=true

In these logs, you can find the reason why the node group is not scaling down.
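Once master logging is enabled, you can read the cluster-autoscaler entries from the log group with the yc CLI. A sketch, assuming the `yc logging read` command and its `--group-id` and `--since` flags as described in the yc CLI reference:

```shell
# Read the last hour of entries from the log group that receives
# the master logs, including cluster-autoscaler messages.
yc logging read --group-id=<log_group_id> --since=1h
```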
