
Moving an instance group with a network load balancer to a different availability zone

Written by
Yandex Cloud
Updated on May 29, 2025

To move an instance group with a network load balancer created using Yandex Network Load Balancer:

  1. Create a subnet in the availability zone you want to move your instance group to.
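    If you manage your infrastructure with Terraform, step 1 can be sketched as a yandex_vpc_subnet resource. This is a minimal example: the resource name, subnet name, network ID, and CIDR range are placeholders to replace with your own values.

    ```hcl
    # Subnet in the target availability zone (all values are placeholders).
    resource "yandex_vpc_subnet" "new_zone_subnet" {
      name           = "subnet-in-new-zone"
      zone           = "<new_availability_zone>"
      network_id     = "<network_ID>"
      v4_cidr_blocks = ["10.10.0.0/24"]
    }
    ```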

  2. Add the group instances to the new availability zone:

    Management console
    CLI
    Terraform
    API
    1. In the management console, open the folder containing the instance group you need.

    2. Select Compute Cloud.

    3. In the left-hand panel, select Instance groups.

    4. Select the instance group to update.

    5. In the top-right corner, click Edit.

    6. Under Allocation, add the availability zone you want to move the instance group to.

    7. If your instance group is a manually scaled one, under Scaling, specify a group size sufficient to place instances in all the selected availability zones.

      You will be able to reset the number of instances back to the initial one after all the instances in the group are moved to the new availability zone and deleted from the previous one.

    8. If your instance group is an autoscaling one and has the OPPORTUNISTIC shutdown strategy, change the strategy to PROACTIVE in the Stop VMs by strategy field.

      You will be able to reset the shutdown strategy back to OPPORTUNISTIC after all the instances in the group are moved to the new availability zone and deleted from the previous one.

    9. Click Save.

    If you do not have the Yandex Cloud CLI yet, install and initialize it.

    By default, the CLI uses the folder specified when creating the profile. To change the default folder, use the yc config set folder-id <folder_ID> command. You can also set a different folder for any specific command using the --folder-name or --folder-id parameter.

    1. Open the instance group specification file and edit the instance template:

      • Under allocation_policy, add a new availability zone.

      • Under network_interface_specs, add the ID of the previously created subnet.

      • If your instance group is a manually scaled one, under scale_policy, specify a group size sufficient to place instances in all the selected availability zones.

        You will be able to reset the number of instances back to the initial one after all the instances in the group are moved to the new availability zone and deleted from the previous one.

      • If your instance group is an autoscaling one and has the OPPORTUNISTIC shutdown strategy, change the strategy to PROACTIVE under deploy_policy.

        You will be able to reset the shutdown strategy back to OPPORTUNISTIC after all the instances in the group are moved to the new availability zone and deleted from the previous one.

    2. See the description of the CLI command for updating an instance group:

      yc compute instance-group update --help
      
    3. Get a list of all instance groups in the default folder:

      yc compute instance-group list
      

      Result:

      +----------------------+---------------------------------+--------+--------+
      |          ID          |              NAME               | STATUS |  SIZE  |
      +----------------------+---------------------------------+--------+--------+
      | cl15sjqilrei******** | first-fixed-group-with-balancer | ACTIVE |      3 |
      | cl19s7dmihgm******** | test-group                      | ACTIVE |      2 |
      +----------------------+---------------------------------+--------+--------+
      
    4. Update the instance group:

      yc compute instance-group update \
        --id <instance_group_ID> \
        --file <instance_specification_file>
      

      Where:

      • --id: Instance group ID.
      • --file: Path to the instance group specification file.

      Result:

      id: cl15sjqilrei********
      ...
      allocation_policy:
        zones:
        - zone_id: <previous_availability_zone>
        - zone_id: <new_availability_zone>
      ...
      
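    Taken together, the spec-file edits described in step 1 might produce an excerpt like the one below. The section names match those mentioned above; the angle-bracket values are placeholders for your own zone names, network ID, subnet IDs, and group size:

    ```yaml
    # Excerpt from the instance group specification file (placeholder values).
    allocation_policy:
      zones:
        - zone_id: <previous_availability_zone>
        - zone_id: <new_availability_zone>
    instance_template:
      network_interface_specs:
        - network_id: <network_ID>
          subnet_ids:
            - <ID_of_subnet_in_previous_availability_zone>
            - <ID_of_subnet_in_new_availability_zone>
    scale_policy:
      fixed_scale:
        size: <number_of_instances_in_group>
    ```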

    If you do not have Terraform yet, install it and configure the Yandex Cloud provider.

    1. Open the Terraform configuration file for the instance group. Specify the new availability zone under allocation_policy and the ID of the previously created subnet under network_interface.

      ...
      network_interface {
        subnet_ids = [
          "<ID_of_subnet_in_previous_availability_zone>",
          "<ID_of_subnet_in_new_availability_zone>"
        ]
      }
      ...
      allocation_policy {
        zones = [
          "<previous_availability_zone>",
          "<new_availability_zone>"
        ]
      }
      ...
      

      Where:

      • zones: Availability zones to host the instance group, both the new and previous ones.
      • subnet_ids: IDs of subnets in the availability zones to host the instance group.

      If your instance group is a manually scaled one, under scale_policy, specify a group size sufficient to place instances in all the selected availability zones.

      ...
      scale_policy {
        fixed_scale {
          size = <number_of_instances_in_group>
        }
      }
      ...
      

      You will be able to reset the number of instances back to the initial one after all the instances in the group are moved to the new availability zone and deleted from the previous one.

      If your instance group is an autoscaling one and has the OPPORTUNISTIC shutdown strategy, change the strategy to PROACTIVE:

      ...
      deploy_policy {
        strategy = "proactive" 
      }
      ...
      

      You will be able to reset the shutdown strategy back to OPPORTUNISTIC after all the instances in the group are moved to the new availability zone and deleted from the previous one.

      For more information about resource parameters in Terraform, see the relevant provider documentation.

    2. Apply the changes:

      1. In the terminal, go to the folder where you edited the configuration file.

      2. Make sure the configuration file is correct using this command:

        terraform validate
        

        If the configuration is correct, you will get this message:

        Success! The configuration is valid.
        
      3. Run this command:

        terraform plan
        

        The terminal will display a list of resources with their properties. No changes will be made at this step. If the configuration contains any errors, Terraform will point them out.

      4. Apply the changes:

        terraform apply
        
      5. Type yes and press Enter to confirm the changes.

      This will add the new availability zone for your instance group. You can check the updates using the management console or this CLI command:

      yc compute instance-group get <instance_group_name>
      

    Use the update REST API method for the InstanceGroup resource or the InstanceGroupService/Update gRPC API call.

    If your instance group is a manually scaled one, specify a group size sufficient to place instances in all the selected availability zones. You will be able to reset the number of instances back to the initial one after all the instances in the group are moved to the new availability zone and deleted from the previous one.

    If your instance group is an autoscaling one and has the OPPORTUNISTIC shutdown strategy, change the strategy to PROACTIVE. You will be able to reset the shutdown strategy back to OPPORTUNISTIC after all the instances in the group are moved to the new availability zone and deleted from the previous one.
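    As a rough sketch (not a complete request), the update request body could extend allocationPolicy with both availability zones, with the fields to change listed in updateMask; verify the exact request shape against the API reference before use:

    ```json
    {
      "updateMask": "allocationPolicy.zones",
      "allocationPolicy": {
        "zones": [
          { "zoneId": "<previous_availability_zone>" },
          { "zoneId": "<new_availability_zone>" }
        ]
      }
    }
    ```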

    Wait until the instances appear in the new availability zone and get the Running Actual status.

  3. Based on the load balancer type, follow these steps:

    • External load balancer (EXTERNAL):

      1. Wait until the resources of the target group in the new availability zone pass a health check and get the HEALTHY status. See Checking target health statuses.

        After that, the load balancer will start routing traffic through the new availability zone. This may take up to two minutes. See Achieving routing convergence in the availability zone.

      2. Delete the group instances from the previous availability zone:

        Management console
        CLI
        Terraform
        API
        1. In the management console, open the folder containing the instance group you need.
        2. Select Compute Cloud.
        3. In the left-hand panel, select Instance groups.
        4. Select the instance group to update.
        5. In the top-right corner, click Edit.
        6. Under Allocation, deselect the previous availability zone.
        7. Click Save.
        1. Open the instance group specification file and edit the instance template:

          • Delete the previous availability zone from the allocation_policy section.
          • Delete the ID of the subnet in the previous availability zone from the network_interface_specs section.
        2. Update the instance group:

          yc compute instance-group update \
            --id <instance_group_ID> \
            --file <instance_specification_file>
          

          Where:

          • --id: Instance group ID.
          • --file: Path to the instance group specification file.

          Result:

          id: cl15sjqilrei********
          ...
          allocation_policy:
            zones:
            - zone_id: <new_availability_zone>
          ...
          
        1. Open the Terraform configuration file for the instance group. Delete the previous availability zone from the allocation_policy section and the ID of the subnet in the previous availability zone from the network_interface section:

          ...
          network_interface {
            subnet_ids = ["<ID_of_subnet_in_new_availability_zone>"]
          }
          ...
          allocation_policy {
            zones = ["<new_availability_zone>"]
          }
          ...
          

          Where:

          • zones: Availability zone to move the instance group to. You can specify multiple availability zones.
          • subnet_ids: ID of the subnet in the availability zone you want to move your instance group to.

          For more information about resource parameters in Terraform, see the relevant provider documentation.

        2. Apply the changes:

          1. In the terminal, go to the folder where you edited the configuration file.

          2. Make sure the configuration file is correct using this command:

            terraform validate
            

            If the configuration is correct, you will get this message:

            Success! The configuration is valid.
            
          3. Run this command:

            terraform plan
            

            The terminal will display a list of resources with their properties. No changes will be made at this step. If the configuration contains any errors, Terraform will point them out.

          4. Apply the changes:

            terraform apply
            
          5. Type yes and press Enter to confirm the changes.

          The group instances will be deleted from the previous availability zone. You can check the updates using the management console or this CLI command:

          yc compute instance-group get <instance_group_name>
          

        Use the update REST API method for the InstanceGroup resource or the InstanceGroupService/Update gRPC API call.

    • Internal load balancer (INTERNAL):

      1. Move the resources requiring access to the internal load balancer to the previously created subnet.

      2. Switch to a new listener:

        Management console
        CLI
        Terraform
        API
        1. In the management console, select the folder containing the load balancer.

        2. In the list of services, select Network Load Balancer.

        3. Click the name of the load balancer in question.

        4. Under Listeners, click the ellipsis icon next to the listener and select Remove listener.

        5. At the top right, click Create listener and create a new listener.

          When creating a new listener, select a subnet in the availability zone you want to move the load balancer to.

        6. Click Add.

        1. See the description of the CLI command for deleting a listener:

          yc load-balancer network-load-balancer remove-listener --help
          
        2. Get a list of all network load balancers in the default folder:

          yc load-balancer network-load-balancer list
          

          Result:

          +----------------------+---------------+-------------+----------+----------------+------------------------+----------+
          |          ID          |     NAME      |  REGION ID  |   TYPE   | LISTENER COUNT | ATTACHED TARGET GROUPS |  STATUS  |
          +----------------------+---------------+-------------+----------+----------------+------------------------+----------+
          | enp2btm6uvdr******** | nlb-34aa5-db1 | ru-central1 | INTERNAL |              0 |                        |  ACTIVE  |
          | enpvg9o73hqh******** | test-balancer | ru-central1 | EXTERNAL |              0 |                        |  ACTIVE  |
          +----------------------+---------------+-------------+----------+----------------+------------------------+----------+
          
        3. Get the listener name:

          yc load-balancer network-load-balancer get <load_balancer_name>
          

          Result:

          id: enp2btm6uvdr********
          folder_id: b1gmit33ngp3********
          ...
          listeners:
            - name: listener-980ee-af3
              address: 172.17.0.36
          
        4. Delete the listener:

          yc load-balancer network-load-balancer remove-listener <load_balancer_name> \
            --listener-name <listener_name>
          

          Where --listener-name is the name of the listener to delete.

          Result:

          done (1s)
          id: enpvg9o73hqh********
          folder_id: b1gmit33ngp3********
          created_at: "2023-08-09T13:44:57Z"
          name: nlb-34aa5-db1
          region_id: ru-central1
          status: INACTIVE
          type: INTERNAL
          
        5. See the description of the CLI command for adding a listener:

          yc load-balancer network-load-balancer add-listener --help
          
        6. Add a listener:

          yc load-balancer network-load-balancer add-listener <load_balancer_name> \
            --listener name=<listener_name>,`
                       `port=<port>,`
                       `target-port=<target_port>,`
                       `internal-subnet-id=<subnet_ID>
          

          Where:

          • name: Listener name.
          • port: Port the load balancer will listen for incoming traffic on.
          • target-port: Target port the load balancer will route traffic to.
          • internal-subnet-id: ID of the subnet in the availability zone you want to move your load balancer to.

          Result:

          done (1s)
          id: enp2btm6uvdr********
          folder_id: b1gmit33ngp3********
          created_at: "2023-08-09T08:37:03Z"
          name: nlb-34aa5-db1
          region_id: ru-central1
          status: ACTIVE
          type: INTERNAL
          listeners:
            - name: new-listener
              address: 10.0.0.16
              port: "22"
              protocol: TCP
              target_port: "22"
              subnet_id: e2lgp8o00g06********
              ip_version: IPV4
          
        1. Open the Terraform file that contains the load balancer configuration and edit the name and subnet_id field values under listener:

          listener { 
            name = "<new_listener_name>" 
            port = 80 
            target_port = 81 
            protocol = "tcp" 
            internal_address_spec { 
              subnet_id = "<ID_of_subnet_in_target_availability_zone>" 
              ip_version = "ipv4" 
            } 
          }
          

          Where:

          • name: Listener name.
          • port: Port the load balancer will listen for incoming traffic on.
          • target_port: Target port the load balancer will route traffic to.
          • subnet_id: ID of the subnet in the availability zone you want to move your instance group to.

          For more information about resource parameters in Terraform, see the relevant provider documentation.

        2. Apply the changes:

          1. In the terminal, go to the folder where you edited the configuration file.

          2. Make sure the configuration file is correct using this command:

            terraform validate
            

            If the configuration is correct, you will get this message:

            Success! The configuration is valid.
            
          3. Run this command:

            terraform plan
            

            The terminal will display a list of resources with their properties. No changes will be made at this step. If the configuration contains any errors, Terraform will point them out.

          4. Apply the changes:

            terraform apply
            
          5. Type yes and press Enter to confirm the changes.

          This will add the new listener in the subnet of the new availability zone. You can check the new listener using the management console.

        1. To remove a network load balancer's listener, use the removeListener REST API method for the NetworkLoadBalancer resource or the NetworkLoadBalancerService/RemoveListener gRPC API call, and provide the following in your request:

          • Load balancer ID in the networkLoadBalancerId parameter.
          • Name of the listener in the listenerName parameter.

          You can get the load balancer ID with a list of network load balancers in the folder, and the listener name with network load balancer details.
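          For example, the removeListener request body could be as small as this (the load balancer ID goes into the request path, not the body):

          ```json
          {
            "listenerName": "<listener_name>"
          }
          ```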

        2. To add a network load balancer's listener, use the addListener API method for the NetworkLoadBalancer resource or the NetworkLoadBalancerService/AddListener gRPC API call, and provide the following in your request:

          {
            "listenerSpec": {
              "name": "<listener_name>",
              "port": "<incoming_port>",
              "targetPort": "<target_port>",
              "internalAddressSpec": {
                "subnetId": "<subnet_ID>"
              }
            }
          }
          

          Where:

          • name: Listener name.
          • port: Port the load balancer will listen for incoming traffic on.
          • targetPort: Target port the load balancer will route traffic to.
          • subnetId: ID of the subnet in the availability zone you want to move your load balancer to.

        Warning

        Your listener IP address will change. Make sure to specify the new listener IP address in the settings of the resources sending traffic through the load balancer.

      3. Delete the group instances from the previous availability zone:

        Management console
        CLI
        Terraform
        API
        1. In the management console, open the folder containing the instance group you need.
        2. Select Compute Cloud.
        3. In the left-hand panel, select Instance groups.
        4. Select the instance group to update.
        5. In the top-right corner, click Edit.
        6. Under Allocation, deselect the previous availability zone.
        7. Click Save.
        1. Open the instance group specification file and edit the instance template:

          • Delete the previous availability zone from the allocation_policy section.
          • Delete the ID of the subnet in the previous availability zone from the network_interface_specs section.
        2. Update the instance group:

          yc compute instance-group update \
            --id <instance_group_ID> \
            --file <instance_specification_file>
          

          Where:

          • --id: Instance group ID.
          • --file: Path to the instance group specification file.

          Result:

          id: cl15sjqilrei********
          ...
          allocation_policy:
            zones:
            - zone_id: <new_availability_zone>
          ...
          
        1. Open the Terraform configuration file for the instance group. Delete the previous availability zone from the allocation_policy section and the ID of the subnet in the previous availability zone from the network_interface section:

          ...
          network_interface {
            subnet_ids = ["<ID_of_subnet_in_new_availability_zone>"]
          }
          ...
          allocation_policy {
            zones = ["<new_availability_zone>"]
          }
          ...
          

          Where:

          • zones: Availability zone to move the instance group to. You can specify multiple availability zones.
          • subnet_ids: ID of the subnet in the availability zone you want to move your instance group to.

          For more information about resource parameters in Terraform, see the relevant provider documentation.

        2. Apply the changes:

          1. In the terminal, go to the folder where you edited the configuration file.

          2. Make sure the configuration file is correct using this command:

            terraform validate
            

            If the configuration is correct, you will get this message:

            Success! The configuration is valid.
            
          3. Run this command:

            terraform plan
            

            The terminal will display a list of resources with their properties. No changes will be made at this step. If the configuration contains any errors, Terraform will point them out.

          4. Apply the changes:

            terraform apply
            
          5. Type yes and press Enter to confirm the changes.

          The group instances will be deleted from the previous availability zone. You can check the updates using the management console or this CLI command:

          yc compute instance-group get <instance_group_name>
          

        Use the update REST API method for the InstanceGroup resource or the InstanceGroupService/Update gRPC API call.

      4. Make sure the subnet in the previous availability zone has no resources left.

      5. Delete the subnet in the previous availability zone.

© 2025 Direct Cursus Technology L.L.C.