Evicting pods from a node
When you update a node group, pods are evicted from the old nodes and migrated to the new ones. To make sure that eviction does not affect the availability of the services provided by your applications in the Kubernetes cluster, configure the Kubernetes API PodDisruptionBudget object.
The PodDisruptionBudget object is defined by three fields:
- .spec.selector: Kubernetes label selector that marks the set of pods the policy applies to. This is a required field.
- .spec.minAvailable: Minimum number of pods from the set that must remain available after eviction. You can specify it as a percentage.
- .spec.maxUnavailable: Maximum number of pods from the set that may be unavailable after eviction. You can specify it as a percentage.
If you do not define the PodDisruptionBudget policy, pods may be evicted without regard to application availability, so all pods of a service can become unavailable at the same time.
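As a minimal sketch, a PodDisruptionBudget manifest might look like the example below. The name my-app-pdb and the label app: my-app are assumptions used for illustration; replace the selector labels with those of your own pods. Note that you can set only one of .spec.minAvailable and .spec.maxUnavailable in a single PodDisruptionBudget.

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb          # hypothetical name, used here for illustration
spec:
  # Keep at least half of the matching pods available during voluntary evictions,
  # such as those triggered by a node group update.
  minAvailable: "50%"
  selector:
    matchLabels:
      app: my-app           # must match the labels of the pods you want to protect
```

After applying the manifest with kubectl apply, you can check how many disruptions are currently allowed for the set of pods with kubectl get pdb.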
Warning
- A pod can only be evicted if it was created by an application replication controller: ReplicaSet, Deployment, or StatefulSet. If a pod was created without a controller, it will be lost during the update.
- Persistent volumes (PersistentVolume objects) used by pods managed by the StatefulSet controller can only be moved between nodes within a single availability zone.
Specifics for evicting pods from nodes:
- Configure the PodDisruptionBudget policy so that it is impossible to evict too many pods at once, but always possible to evict at least one pod.
- Pod eviction is subject to the node stop timeout (7 minutes). The node is stopped even if not all pods are evicted within that time.
- When you downsize a node group, in order to evict pods and then delete nodes, the nodes without pods are drained and deleted first. You can also manually drain nodes you no longer need using the kubectl drain command (see the example after this list).
- The nodes to be drained and stopped are marked Unschedulable. This prevents new pods from being scheduled on them.
- Nodes in the group are drained one at a time.
- Nodes are not drained when a node group is deleted. If requests are sent to pods on the deleted nodes, they will not be processed until Kubernetes diagnoses the nodes as unhealthy and re-creates the pods on the running nodes. To avoid this, set the node group size to zero, wait for the operation to complete, and then delete the node group.
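The following is a sketch of the manual draining mentioned above; <node-name> is a placeholder for the node you no longer need.

```bash
# Mark the node unschedulable and evict its pods; evictions respect any
# PodDisruptionBudget limits. DaemonSet pods are skipped, and data in
# emptyDir volumes is deleted together with the pods.
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data

# Check that the node is now marked as unschedulable (SchedulingDisabled).
kubectl get node <node-name>
```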