© 2026 Direct Cursus Technology L.L.C.

Node group deployment policy in Managed Service for Kubernetes

Written by
Yandex Cloud
Updated at March 17, 2026

When modifying a node group (including during a Kubernetes version update), you may need to stop, reboot, or delete the group's nodes. While this happens, the group enters the Reconciling status, and the affected nodes become unavailable.

With a deployment policy, you can control the number of available nodes in the group during such operations. You set the policy with a pair of parameters, max_expansion and max_unavailable, which you configure when creating or modifying a node group.

  • max_expansion — Maximum number of nodes by which the group can be expanded while it is being modified or updated.

    These nodes are not temporary: they are created with the new parameters specified for the group, such as the new computing resources or Kubernetes version.

    The minimum value is 0 (node group expansion prohibited), the maximum value is 100, and the default value is 3. If node group expansion is prohibited, you must allow the group to have unavailable nodes.

  • max_unavailable — Maximum number of nodes that may be unavailable while the group is being modified or updated.

    The minimum value is 0 (the group must not have unavailable nodes), the maximum value is 100, and the default value is 0. If the group must not have any unavailable nodes, you must allow expanding it.

The max_expansion and max_unavailable parameters are interrelated, and at least one of them must have a non-zero value.
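
The constraints above can be sketched as a small validation helper. This is illustrative Python only; the function name is hypothetical and not part of any Yandex Cloud SDK:

```python
def validate_deploy_policy(max_expansion: int, max_unavailable: int) -> None:
    """Check a deploy policy against the documented constraints.

    Both parameters must lie in the range 0..100, and at least one of
    them must be non-zero; otherwise an update could neither expand the
    group nor take any node offline.
    """
    for name, value in (("max_expansion", max_expansion),
                        ("max_unavailable", max_unavailable)):
        if not 0 <= value <= 100:
            raise ValueError(f"{name} must be between 0 and 100, got {value}")
    if max_expansion == 0 and max_unavailable == 0:
        raise ValueError("max_expansion and max_unavailable cannot both be 0")

# The documented defaults (max_expansion=3, max_unavailable=0) pass:
validate_deploy_policy(3, 0)
```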

When you modify or update a node group, the cluster follows the deployment policy specified for it. The cluster's behavior will vary depending on how the policy is configured:

  • Policy with max_unavailable > 0 and max_expansion = 0.

    This policy prohibits expanding the node group in the course of the operation (max_expansion = 0).

    Modifying or updating the node group will be performed by sequentially executing the operation for max_unavailable nodes at a time, until it is completed for all nodes in the group. Since the selected nodes become unavailable during the operation, the cluster first tries to migrate their workload to the remaining nodes in the group.

    Warning

    If the workload cannot be migrated to the remaining nodes due to insufficient computing resources on those nodes, the operation will be forcibly performed for the selected nodes.

    This may lead to complete or partial unavailability of your applications in the cluster until the operation is fully completed for the whole node group.

    Example

    You have a node group set up as follows:

    • Scaling type: Fixed.
    • Number of nodes: 5.
    • max_expansion: 0.
    • max_unavailable: 2.

    If you modify the node group in this configuration:

    1. The workload from two nodes will be migrated to the remaining three nodes.
    2. The two nodes without workload will enter the Reconciling status, get updated, rebooted, and then return to the Running status.
    3. The workload from the next two un-updated nodes will be migrated to the two updated nodes and one un-updated node.
    4. The two un-updated nodes without workload will enter the Reconciling status, get updated, rebooted, and then return to the Running status.
    5. The workload from the last un-updated node will be migrated to the four updated ones.
    6. The last un-updated node without workload will enter the Reconciling status, get updated, rebooted, and then return to the Running status.
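
    The batching in the walkthrough above can be sketched in a few lines of illustrative Python; `update_batches` is a hypothetical helper, not an API of the service:

    ```python
    def update_batches(node_count: int, max_unavailable: int) -> list[int]:
        """Sizes of the sequential update batches when max_expansion = 0.

        At most max_unavailable nodes are taken offline at a time; their
        workload is first migrated to the remaining nodes of the group.
        """
        batches = []
        remaining = node_count
        while remaining > 0:
            batch = min(max_unavailable, remaining)
            batches.append(batch)
            remaining -= batch
        return batches

    # Five fixed nodes with max_unavailable = 2: nodes are updated in
    # batches of 2, 2, and 1, matching the steps above.
    print(update_batches(5, 2))  # [2, 2, 1]
    ```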
  • Policy with max_expansion > 0 and max_unavailable = 0.

    This policy makes sure you have no unavailable nodes during the update process (max_unavailable = 0).

    Modifying or updating the node group will be performed by expanding it by up to max_expansion new nodes at a time. These new nodes immediately get the modified configuration or updated Kubernetes version. The new nodes then take over the workload from the outdated ones, which are deleted afterwards. This process repeats until all nodes with the outdated configuration are replaced with new ones. Note that migrating the workload consumes computing resources on the nodes involved.

    If you use this kind of deployment policy, make sure your cloud has enough resources to expand the group before modifying it. Increase your quotas if needed.

    Warning

    The operation for the node group may slow down or stop entirely if there are not enough resources for the expansion.

    When expanding your node group, you pay for the nodes you create. For more information, see Managed Service for Kubernetes pricing policy.

    Example

    You have a node group set up as follows:

    • Scaling type: Fixed.
    • Number of nodes: 5.
    • max_expansion: 2.
    • max_unavailable: 0.

    If you modify the node group in this configuration:

    1. Two new nodes with the updated configuration will be created.
    2. After the new nodes enter the Running status, the workload from the two un-updated nodes will be migrated to the new ones, and the two nodes without workload will be deleted.
    3. Two more new nodes with the updated configuration will be created.
    4. After the new nodes enter the Running status, the workload from the two un-updated nodes will be migrated to the new ones, and the two nodes without workload will be deleted.
    5. One more new node with the updated configuration will be created.
    6. After the new node enters the Running status, the workload from the last un-updated node will be migrated to the new one, and the node without workload will be deleted.
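
    Since the group temporarily grows during a surge-only update, you can estimate the peak node count to check against your quotas. This is an illustrative Python sketch; `surge_peak_size` is a hypothetical helper, not an API of the service:

    ```python
    def surge_peak_size(node_count: int, max_expansion: int) -> int:
        """Peak node count during a surge-only update (max_unavailable = 0).

        Each round creates up to max_expansion new nodes before the same
        number of outdated nodes is drained and deleted, so the group
        temporarily grows by at most max_expansion nodes (and never by
        more nodes than the group contains).
        """
        return node_count + min(max_expansion, node_count)

    # Five fixed nodes with max_expansion = 2: up to 7 nodes exist at
    # once, and you are billed for the extra nodes while they run.
    print(surge_peak_size(5, 2))  # 7
    ```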
  • Policy with max_expansion > 0 and max_unavailable > 0.

    This policy is a combination of the policies described above.

    Modifying or updating the node group will be performed by sequentially executing the operation for max_unavailable nodes at a time, until it is completed for all nodes in the group. Since the selected nodes become unavailable during the operation, the cluster first tries to migrate their workload to the remaining nodes in the group. Note that migrating the workload consumes computing resources on the nodes involved.

    The node group will be expanded by max_expansion nodes to be able to handle the workload from the nodes that are being updated. The group's expansion and the operation for the nodes take place simultaneously.

    When using this policy, you should monitor both the available computing resources of the nodes and the quotas and resources of your cloud.

    Example

    You have a node group set up as follows:

    • Scaling type: Fixed.
    • Number of nodes: 5.
    • max_expansion: 2.
    • max_unavailable: 2.

    If you modify the node group in this configuration:

    1. Two new nodes with the updated configuration will start to be created. At the same time, the workload from the two un-updated nodes will start to be migrated to the remaining three un-updated nodes.
    2. The new nodes will enter the Running status and start getting workload from the migrated nodes.
    3. The two nodes without workload will enter the Reconciling status, get updated, rebooted, and then return to the Running status.
    4. The workload from the two un-updated nodes will be migrated to the four updated nodes and one un-updated node.
    5. One node without workload will enter the Reconciling status, get updated, rebooted, and then return to the Running status. The other one will be deleted.
    6. The workload from the remaining un-updated node will be migrated to the five updated nodes. With that done, this node will be deleted.

    The behavior may slightly differ from the description depending on which comes first: pod migration from the un-updated node or the new/rebooted nodes getting the Running status; however, ultimately, the group will enter the required state.

See also

  • Configuring a deployment policy
  • Node group autoscaling in Managed Service for Kubernetes
  • Updating Kubernetes
