Nodes in a Managed Service for Kubernetes group not scaling down

Written by
Yandex Cloud
Updated at December 17, 2025
  • Issue description
  • Solution
  • If the issue persists

Issue description

Nodes in a node group of your Managed Service for Kubernetes cluster do not scale down.

Solution

Managed Service for Kubernetes uses Cluster Autoscaler to automatically scale node groups. It works as follows: you specify the minimum and maximum size of the node group, and Cluster Autoscaler regularly checks the state of pods and nodes in the cluster.

If the workload on the nodes is low and all pods can be placed on fewer nodes in the group, the number of nodes in the group gradually decreases to the specified minimum.

Cluster Autoscaler periodically checks the load on each node and, if its pods can be safely rescheduled to other nodes without overloading them, drains and shuts down that node.
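
For reference, automatic scaling is configured on the node group itself. Below is a minimal sketch of enabling it with the yc CLI; the `--auto-scale` flag syntax and the placeholder values are assumptions, so check `yc managed-kubernetes node-group update --help` for the exact options:

yc managed-kubernetes node-group update <node_group_name_or_id> --auto-scale min=1,max=3,initial=1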

For the autoscaler to drain a node, make sure the following conditions are met:

  • The node load is below 50%. To check the load level, you can use the yc managed-kubernetes cluster list-nodes $CLUSTER_ID command, where $CLUSTER_ID is the Managed Service for Kubernetes cluster ID.
  • The pods on this node do not have local storage.
  • There are no affinity, anti-affinity, nodeSelector, or topology spread constraints preventing pod relocation.
  • The pods are managed by a controller, e.g., Deployment or StatefulSet.
  • The PodDisruptionBudget will remain within its limit after the node is deleted.

You can manually find the node in question and check its pods, including those from the kube-system namespace. Delete them manually, if required.
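
For example, you can list all pods scheduled on a specific node and inspect the node itself with standard kubectl commands (the node name below is a placeholder):

kubectl get pods --all-namespaces --field-selector spec.nodeName=<node_name> -o wide
kubectl describe node <node_name>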

You can also set up the descheduler to delete the pods you no longer need. For details, see our autoscaling FAQs and the official Cluster Autoscaler documentation.
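
As an illustration, the descheduler can be installed from its upstream Helm chart; the repository URL and chart name below come from the official descheduler project and are not specific to Managed Service for Kubernetes:

helm repo add descheduler https://kubernetes-sigs.github.io/descheduler/
helm repo update
helm install descheduler descheduler/descheduler --namespace kube-system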

We recommend that you enable master logging in your log group:

yc k8s cluster update <cluster_id> --master-logging enabled=true,log-group-id=<log_group_id>,cluster-autoscaler-enabled=true,kube-apiserver-enabled=true

The logs will help you identify the cause of the failed downscale.
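
For example, once master logging is enabled, you can read recent records from the log group with the yc CLI; the command below is a sketch, and the available flags may differ depending on your CLI version:

yc logging read --group-id <log_group_id> --since 1h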

If the issue persists

If the above actions did not help, create a support ticket. Provide the following information in your ticket:

  1. Managed Service for Kubernetes cluster ID.
  2. Approximate date and time of Cluster Autoscaler errors.
  3. YAML specification of the pod controller, such as Deployment or StatefulSet (see the example command after this list).
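
To export the controller specification, you can use kubectl; the controller name and namespace below are placeholders:

kubectl get deployment <deployment_name> -n <namespace> -o yaml > deployment.yaml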
