Evicting pods from nodes
When you update a node group, pods are evicted from the old node and migrated to the new one. To make sure eviction does not affect the availability of the services your applications provide in the Kubernetes cluster, configure a Kubernetes `PodDisruptionBudget` policy.
The PodDisruptionBudget object is defined by these three fields:

- `.spec.selector`: Kubernetes label selector that identifies the set of pods the budget applies to. This is a required field.
- `.spec.minAvailable`: Minimum number of pods from the set that must remain available after an eviction. You can specify it as an absolute number or a percentage.
- `.spec.maxUnavailable`: Maximum number of pods from the set that may become unavailable after an eviction. You can specify it as an absolute number or a percentage.
If you do not define a PodDisruptionBudget policy, all pods from the set may be evicted at once, which can make the service they provide unavailable during the update.
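For example, a minimal PodDisruptionBudget that keeps at least two pods of a hypothetical `my-app` Deployment available during evictions might look like this (the name and labels below are placeholders):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb          # placeholder name
spec:
  minAvailable: 2           # at least 2 matching pods must stay available
  selector:
    matchLabels:
      app: my-app           # must match the labels of the target pods
```

Apply it with `kubectl apply -f pdb.yaml` and check its status with `kubectl get pdb my-app-pdb`. Note that you can set either `minAvailable` or `maxUnavailable` in a single budget, but not both.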
Warning

- Pods can only be evicted if they were created by an application replication controller: `ReplicaSet`, `Deployment`, or `StatefulSet`. If a pod was created without a controller, it will be lost during the update.
- Persistent volumes (`PersistentVolumes`) used by pods managed by the `StatefulSet` controller can only be moved between nodes within a single availability zone.
Specifics of evicting pods from nodes:

- Configure the `PodDisruptionBudget` policy to prohibit eviction of too many pods at once while allowing eviction of at least one pod.
- Pod eviction is subject to the node stop timeout (7 minutes). The node will be stopped even if not all pods have been evicted within this period.
- When you scale down a node group, pods are evicted and nodes are deleted; nodes without pods are drained and deleted first. If a group has a fixed number of nodes, a random node will be deleted.
- You can manually drain nodes you no longer need in an autoscaling node group. To do this, before scaling the group down:

  - Disable the scheduling of new pods on the relevant node with the `kubectl cordon` command.
  - Evict the pods from the node using `kubectl drain`.

  When reducing the group size, the drained node will be deleted first.
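The two manual steps above can be sketched as the following kubectl commands (the node name is a placeholder; run them against your own cluster):

```bash
# Mark the node unschedulable so no new pods are placed on it
kubectl cordon <node-name>

# Evict the pods; DaemonSet-managed pods are skipped,
# and data in emptyDir volumes is discarded
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
```

`kubectl drain` respects any `PodDisruptionBudget` you have configured, so it may evict pods gradually or refuse evictions that would violate the budget.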
- The nodes to be drained and stopped are marked `Unschedulable`. This prevents new pods from being created on them.
- Nodes in the group are drained one at a time.
- Nodes are not drained when you delete a node group. If pods on the nodes being deleted receive requests, the requests will not be processed until Kubernetes marks the nodes as unhealthy and recreates the pods on the running nodes. To avoid this, reduce the node group size to zero, wait for the operation to complete, and then delete the node group.