Yandex Managed Service for Kubernetes

Evicting pods from nodes

Written by Yandex Cloud
Updated at January 29, 2026

When you update a node group, pods are evicted from the old node and migrated to a new one. To make sure eviction does not affect the availability of the services your applications provide in the Kubernetes cluster, configure a PodDisruptionBudget Kubernetes API object for your application's pods.

The PodDisruptionBudget object is defined by these three fields:

  • .spec.selector: Label selector that defines the set of pods the policy applies to. This is a required field.
  • .spec.minAvailable: Minimum number of pods from the set that must remain available after eviction. You can specify it as an absolute number or a percentage.
  • .spec.maxUnavailable: Maximum number of pods from the set that may become unavailable after eviction. You can specify it as an absolute number or a percentage.

If you do not define the PodDisruptionBudget policy, all pods will be evicted at once, which may disrupt your application.
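
Below is a minimal PodDisruptionBudget manifest as an illustrative sketch; the my-app-pdb name and the app: my-app label are placeholders for your own workload:

  apiVersion: policy/v1
  kind: PodDisruptionBudget
  metadata:
    name: my-app-pdb
  spec:
    # Keep at least two pods of the selected set available during voluntary disruptions.
    # Specify either minAvailable or maxUnavailable (not both); either field
    # accepts an absolute number or a percentage such as "50%".
    minAvailable: 2
    selector:
      matchLabels:
        app: my-app

Apply the manifest with kubectl apply -f and check its status with kubectl get pdb.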

Warning

  • Pods can only be evicted if they were created by an application replication controller, such as a ReplicaSet, Deployment, or StatefulSet. If a pod was created without a controller, it will be lost during the update.
  • Persistent volumes (PersistentVolumes) used by pods managed by the StatefulSet controller can only be moved between nodes within a single availability zone.

Specifics for evicting pods from nodes:

  • Configure the PodDisruptionBudget policy so that it prohibits evicting too many pods at once but allows evicting at least one pod.

  • Pod eviction is subject to the node stop timeout (7 minutes). The node will be stopped even if not all pods have been evicted within this time.

  • When you scale down a node group, pods are evicted from a node and the node is then deleted. Nodes without pods are drained and deleted first. If a group has a fixed number of nodes, a random node will be deleted.

  • You can manually drain the nodes you no longer need in an autoscaling node group. To do this, before scaling it down:

    1. Disable scheduling of new pods on the relevant node with the kubectl cordon command.
    2. Evict the pods from the node using kubectl drain.

    When the group size is reduced, the drained node will be deleted first (see the command sketch after this list).

  • The nodes to be drained and stopped are marked as Unschedulable so that no new pods are scheduled on them.

  • Nodes in the group are drained one at a time.

  • Nodes are not drained when you delete a node group. If pods on the nodes being deleted receive requests, these requests will not be processed until Kubernetes marks the nodes as unhealthy and recreates the pods on the remaining running nodes. To avoid this, set the node group size to zero, wait for the operation to complete, and then delete the node group.
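
For the manual drain scenario from the list above, the command sequence could look like this; <node-name> is a placeholder, and the drain flags shown assume a reasonably recent kubectl version:

  # Mark the node as Unschedulable so that no new pods are scheduled on it
  kubectl cordon <node-name>

  # Evict the pods; DaemonSet-managed pods are skipped and emptyDir data is discarded
  kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data

  # The cordoned node is shown with the SchedulingDisabled status
  kubectl get nodes

Once the drain completes, reduce the group size; the drained node will be deleted first.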
