Moving an instance group with a network load balancer to a different availability zone
To move an instance group with a network load balancer created using Yandex Network Load Balancer:

- Create a subnet in the availability zone you want to move your instance group to.
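  If you prefer to do this step from the CLI, a minimal sketch might look like the command below; the subnet name, network ID, and IP range are placeholders you need to replace with your own values:

  ```bash
  yc vpc subnet create \
    --name subnet-new-zone \
    --zone <new_availability_zone> \
    --network-id <network_ID> \
    --range 10.1.0.0/24
  ```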
- Add the group instances to the new availability zone:

  Management console:

  - In the management console, open the folder containing the instance group you need.
  - Go to Compute Cloud.
  - In the left-hand panel, select Instance groups.
  - Select the instance group to update.
  - In the top-right corner, click Edit.
  - Under Allocation, add the availability zone you want to move the instance group to.
  - If your instance group is a manually scaled one, under Scaling, specify a group size sufficient to place instances in all the selected availability zones.

    You will be able to reset the number of instances back to the initial one after all the instances in the group are moved to the new availability zone and deleted from the previous one.
  - If your instance group is an autoscaling one and has the `OPPORTUNISTIC` shutdown strategy, change the strategy to `PROACTIVE` in the Stop VMs by strategy field.

    You will be able to reset the shutdown strategy back to `OPPORTUNISTIC` after all the instances in the group are moved to the new availability zone and deleted from the previous one.
  - Click Save.
  CLI:

  If you do not have the Yandex Cloud CLI installed yet, install and initialize it.

  By default, the CLI uses the folder specified when creating the profile. To change the default folder, use the `yc config set folder-id <folder_ID>` command. You can also set a different folder for any specific command using the `--folder-name` or `--folder-id` parameter.

  - Open the instance group specification file and edit the instance template:

    - Under `allocation_policy`, add a new availability zone.
    - Under `network_interface_specs`, add the ID of the previously created subnet.
    - If your instance group is a manually scaled one, under `scale_policy`, specify a group size sufficient to place instances in all the selected availability zones.

      You will be able to reset the number of instances back to the initial one after all the instances in the group are moved to the new availability zone and deleted from the previous one.
    - If your instance group is an autoscaling one and has the `OPPORTUNISTIC` shutdown strategy, change the strategy to `PROACTIVE` under `deploy_policy`.

      You will be able to reset the shutdown strategy back to `OPPORTUNISTIC` after all the instances in the group are moved to the new availability zone and deleted from the previous one.
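    The exact contents of the specification file depend on how your group was created. As a rough sketch (the section names follow the steps above, all values are placeholders, and other required fields are omitted), the edited parts might look like this:

    ```yaml
    # Fragment of an instance group specification file (sketch, not a complete spec)
    allocation_policy:
      zones:
        - zone_id: <previous_availability_zone>
        - zone_id: <new_availability_zone>
    scale_policy:
      fixed_scale:
        size: 4                 # enough instances for all selected zones
    deploy_policy:
      strategy: PROACTIVE       # change only if the group autoscaled with OPPORTUNISTIC
    instance_template:
      network_interface_specs:
        - network_id: <network_ID>
          subnet_ids:
            - <ID_of_subnet_in_previous_availability_zone>
            - <ID_of_subnet_in_new_availability_zone>
    ```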
  - See the description of the CLI command for updating an instance group:

    ```bash
    yc compute instance-group update --help
    ```

  - Get a list of all instance groups in the default folder:

    ```bash
    yc compute instance-group list
    ```

    Result:

    ```text
    +----------------------+---------------------------------+--------+------+
    |          ID          |              NAME               | STATUS | SIZE |
    +----------------------+---------------------------------+--------+------+
    | cl15sjqilrei******** | first-fixed-group-with-balancer | ACTIVE |    3 |
    | cl19s7dmihgm******** | test-group                      | ACTIVE |    2 |
    +----------------------+---------------------------------+--------+------+
    ```

  - Update the instance group:

    ```bash
    yc compute instance-group update \
      --id <instance_group_ID> \
      --file <instance_specification_file>
    ```

    Where:

    - `--id`: Instance group ID.
    - `--file`: Path to the instance group specification file.

    Result:

    ```text
    id: cl15sjqilrei********
    ...
    allocation_policy:
      zones:
        - zone_id: <previous_availability_zone>
        - zone_id: <new_availability_zone>
    ...
    ```
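  To confirm that instances have been created in the new availability zone, you can additionally list the group's instances. This is an optional check, and the group name below is a placeholder:

  ```bash
  yc compute instance-group list-instances <instance_group_name>
  ```

  The output includes each instance's status, so you can see when the new instances are up and running.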
  Terraform:

  If you do not have Terraform yet, install it and configure the Yandex Cloud provider.

  - Open the Terraform configuration file for the instance group. Specify the new availability zone under `allocation_policy` and the ID of the previously created subnet under `network_interface`:

    ```hcl
    ...
    network_interface {
      subnet_ids = [
        "<ID_of_subnet_in_previous_availability_zone>",
        "<ID_of_subnet_in_new_availability_zone>"
      ]
    }
    ...
    allocation_policy {
      zones = [
        "<previous_availability_zone>",
        "<new_availability_zone>"
      ]
    }
    ...
    ```

    Where:

    - `zones`: Availability zones to host the instance group, both the new and previous ones.
    - `subnet_ids`: IDs of subnets in the availability zones to host the instance group.

    If your instance group is a manually scaled one, under `scale_policy`, specify a group size sufficient to place instances in all the selected availability zones:

    ```hcl
    ...
    scale_policy {
      fixed_scale {
        size = <number_of_instances_in_group>
      }
    }
    ...
    ```

    You will be able to reset the number of instances back to the initial one after all the instances in the group are moved to the new availability zone and deleted from the previous one.

    If your instance group is an autoscaling one and has the `OPPORTUNISTIC` shutdown strategy, change the strategy to `PROACTIVE`:

    ```hcl
    ...
    deploy_policy {
      strategy = "proactive"
    }
    ...
    ```

    You will be able to reset the shutdown strategy back to `OPPORTUNISTIC` after all the instances in the group are moved to the new availability zone and deleted from the previous one.

    For more information about resource parameters in Terraform, see the relevant provider documentation.
  - Apply the changes:

    - In the terminal, go to the directory where you edited the configuration file.
    - Make sure the configuration file is correct using this command:

      ```bash
      terraform validate
      ```

      If the configuration is correct, you will get this message:

      ```text
      Success! The configuration is valid.
      ```

    - Run this command:

      ```bash
      terraform plan
      ```

      You will see a detailed list of resources. No changes will be made at this step. If the configuration contains any errors, Terraform will show them.
    - Apply the changes:

      ```bash
      terraform apply
      ```

    - Type `yes` and press Enter to confirm the changes.

  This will add the new availability zone for your instance group. You can check the updates using the management console or this CLI command:

  ```bash
  yc compute instance-group get <instance_group_name>
  ```
  API:

  Use the update REST API method for the InstanceGroup resource or the InstanceGroupService/Update gRPC API call.

  If your instance group is a manually scaled one, specify a group size sufficient to place instances in all the selected availability zones. You will be able to reset the number of instances back to the initial one after all the instances in the group are moved to the new availability zone and deleted from the previous one.

  If your instance group is an autoscaling one and has the `OPPORTUNISTIC` shutdown strategy, change the strategy to `PROACTIVE`. You will be able to reset the shutdown strategy back to `OPPORTUNISTIC` after all the instances in the group are moved to the new availability zone and deleted from the previous one.

  Wait until the instances appear in the new availability zone and get the `Running Actual` status.
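  When calling the REST method, the request body mirrors the allocation policy fields shown in the CLI output above. The fragment below is only a hedged sketch: the camelCase field names and the `updateMask` value are assumptions, so check them against the Instance Groups API reference before use:

  ```json
  {
    "updateMask": "allocationPolicy",
    "allocationPolicy": {
      "zones": [
        { "zoneId": "<previous_availability_zone>" },
        { "zoneId": "<new_availability_zone>" }
      ]
    }
  }
  ```

  The instance template's network interface settings also need the ID of the new subnet, in the same way as in the CLI specification file.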
- Based on the load balancer type, follow these steps:

  - External load balancer (`EXTERNAL`):

    - Wait until the resources of the target group in the new availability zone pass a health check and get the `HEALTHY` status. See Checking target health statuses.

      After that, the load balancer will start routing traffic through the new availability zone. This may take up to two minutes. See Achieving routing convergence in the availability zone.
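      One way to watch the health statuses from the CLI is the `target-states` command; this is a sketch, and the load balancer name and target group ID are placeholders:

      ```bash
      yc load-balancer network-load-balancer target-states <load_balancer_name> \
        --target-group-id <target_group_ID>
      ```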
    - Delete the group instances from the previous availability zone:

      Management console:

      - In the management console, open the folder containing the instance group you need.
      - Go to Compute Cloud.
      - In the left-hand panel, select Instance groups.
      - Select the instance group to update.
      - In the top-right corner, click Edit.
      - Under Allocation, deselect the previous availability zone.
      - Click Save.

      CLI:

      - Open the instance group specification file and edit the instance template:

        - Delete the previous availability zone from the `allocation_policy` section.
        - Delete the ID of the subnet in the previous availability zone from the `network_interface_specs` section.

      - Update the instance group:

        ```bash
        yc compute instance-group update \
          --id <instance_group_ID> \
          --file <instance_specification_file>
        ```

        Where:

        - `--id`: Instance group ID.
        - `--file`: Path to the instance group specification file.

        Result:

        ```text
        id: cl15sjqilrei********
        ...
        allocation_policy:
          zones:
            - zone_id: <new_availability_zone>
        ...
        ```
      Terraform:

      - Open the Terraform configuration file for the instance group. Delete the previous availability zone from the `allocation_policy` section and the ID of the subnet in the previous availability zone from the `network_interface` section:

        ```hcl
        ...
        network_interface {
          subnet_ids = ["<ID_of_subnet_in_new_availability_zone>"]
        }
        ...
        allocation_policy {
          zones = ["<new_availability_zone>"]
        }
        ...
        ```

        Where:

        - `zones`: Availability zone to move the instance group to. You can specify multiple availability zones.
        - `subnet_ids`: ID of the subnet in the availability zone you want to move your instance group to.

        For more information about resource parameters in Terraform, see the relevant provider documentation.

      - Apply the changes:

        - In the terminal, go to the directory where you edited the configuration file.
        - Make sure the configuration file is correct using this command:

          ```bash
          terraform validate
          ```

          If the configuration is correct, you will get this message:

          ```text
          Success! The configuration is valid.
          ```

        - Run this command:

          ```bash
          terraform plan
          ```

          You will see a detailed list of resources. No changes will be made at this step. If the configuration contains any errors, Terraform will show them.
        - Apply the changes:

          ```bash
          terraform apply
          ```

        - Type `yes` and press Enter to confirm the changes.

      The group instances will be deleted from the previous availability zone. You can check the updates using the management console or this CLI command:

      ```bash
      yc compute instance-group get <instance_group_name>
      ```

      API:

      Use the update REST API method for the InstanceGroup resource or the InstanceGroupService/Update gRPC API call.
  - Internal load balancer (`INTERNAL`):

    - Move the resources requiring access to the internal load balancer to the previously created subnet.
    - Switch to a new listener:

      Management console:
      - In the management console, select the folder with your load balancer.
      - Go to Network Load Balancer.
      - Click your load balancer's name.
      - Under Listeners, open the listener's action menu and select Remove listener.
      - At the top right, click Create listener and create a new listener.

        When creating the new listener, select a subnet in the availability zone you want to move the load balancer to.
      - Click Add.
      CLI:

      - See the description of the CLI command for deleting a listener:

        ```bash
        yc load-balancer network-load-balancer remove-listener --help
        ```

      - Get a list of all network load balancers in the default folder:

        ```bash
        yc load-balancer network-load-balancer list
        ```

        Result:

        ```text
        +----------------------+---------------+-------------+----------+----------------+------------------------+--------+
        |          ID          |     NAME      |  REGION ID  |   TYPE   | LISTENER COUNT | ATTACHED TARGET GROUPS | STATUS |
        +----------------------+---------------+-------------+----------+----------------+------------------------+--------+
        | enp2btm6uvdr******** | nlb-34aa5-db1 | ru-central1 | INTERNAL |              0 |                        | ACTIVE |
        | enpvg9o73hqh******** | test-balancer | ru-central1 | EXTERNAL |              0 |                        | ACTIVE |
        +----------------------+---------------+-------------+----------+----------------+------------------------+--------+
        ```

      - Get the listener name:

        ```bash
        yc load-balancer network-load-balancer get <load_balancer_name>
        ```

        Result:

        ```text
        id: enp2btm6uvdr********
        folder_id: b1gmit33ngp3********
        ...
        listeners:
          - name: listener-980ee-af3
            address: 172.17.0.36
        ```

      - Delete the listener:

        ```bash
        yc load-balancer network-load-balancer remove-listener <load_balancer_name> \
          --listener-name <listener_name>
        ```

        Where `--listener-name` is the name of the listener to delete.

        Result:

        ```text
        done (1s)
        id: enpvg9o73hqh********
        folder_id: b1gmit33ngp3********
        created_at: "2023-08-09T13:44:57Z"
        name: nlb-34aa5-db1
        region_id: ru-central1
        status: INACTIVE
        type: INTERNAL
        ```

      - See the description of the CLI command for adding a listener:

        ```bash
        yc load-balancer network-load-balancer add-listener --help
        ```

      - Add a listener:

        ```bash
        yc load-balancer network-load-balancer add-listener <load_balancer_name> \
          --listener name=<listener_name>,`
            `port=<port>,`
            `target-port=<target_port>,`
            `internal-subnet-id=<subnet_ID>
        ```

        Where:

        - `name`: Listener name.
        - `port`: Port the load balancer will listen to incoming traffic on.
        - `target-port`: Target port the load balancer will route traffic to.
        - `internal-subnet-id`: ID of the subnet in the availability zone you want to move your load balancer to.

        Result:

        ```text
        done (1s)
        id: enp2btm6uvdr********
        folder_id: b1gmit33ngp3********
        created_at: "2023-08-09T08:37:03Z"
        name: nlb-34aa5-db1
        region_id: ru-central1
        status: ACTIVE
        type: INTERNAL
        listeners:
          - name: new-listener
            address: 10.0.0.16
            port: "22"
            protocol: TCP
            target_port: "22"
            subnet_id: e2lgp8o00g06********
            ip_version: IPV4
        ```
      Terraform:

      - Open the Terraform file that contains the load balancer configuration and edit the `name` and `subnet_id` field values under `listener`:

        ```hcl
        listener {
          name        = "<new_listener_name>"
          port        = 80
          target_port = 81
          protocol    = "tcp"
          internal_address_spec {
            subnet_id  = "<ID_of_subnet_in_target_availability_zone>"
            ip_version = "ipv4"
          }
        }
        ```

        Where:

        - `name`: Listener name.
        - `port`: Port on which the load balancer will listen to incoming traffic.
        - `target_port`: Target port the load balancer will route traffic to.
        - `subnet_id`: ID of the subnet in the availability zone you want to move your load balancer to.

        For more information about resource parameters in Terraform, see the relevant provider documentation.
      - Apply the changes:

        - In the terminal, go to the directory where you edited the configuration file.
        - Make sure the configuration file is correct using this command:

          ```bash
          terraform validate
          ```

          If the configuration is correct, you will get this message:

          ```text
          Success! The configuration is valid.
          ```

        - Run this command:

          ```bash
          terraform plan
          ```

          You will see a detailed list of resources. No changes will be made at this step. If the configuration contains any errors, Terraform will show them.
        - Apply the changes:

          ```bash
          terraform apply
          ```

        - Type `yes` and press Enter to confirm the changes.

      This will add the new listener to the new availability zone. You can check the new listener using the management console.
      API:

      - To remove a network load balancer listener, use the removeListener REST API method for the NetworkLoadBalancer resource or the NetworkLoadBalancerService/RemoveListener gRPC API call, providing the following in your request:

        - Load balancer ID in the `networkLoadBalancerId` parameter.
        - Name of the listener in the `listenerName` parameter.

        You can get the load balancer ID with a list of network load balancers in the folder, and the listener name with network load balancer details.

      - To add a network load balancer listener, use the addListener REST API method for the NetworkLoadBalancer resource or the NetworkLoadBalancerService/AddListener gRPC API call, providing the following in your request:

        ```json
        {
          "listenerSpec": {
            "name": "<listener_name>",
            "port": "<incoming_port>",
            "targetPort": "<target_port>",
            "internalAddressSpec": {
              "subnetId": "<subnet_ID>"
            }
          }
        }
        ```

        Where:

        - `name`: Listener name.
        - `port`: Port on which the load balancer will listen to incoming traffic.
        - `targetPort`: Target port the load balancer will route traffic to.
        - `subnetId`: ID of the subnet in the availability zone you want to move your load balancer to.
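        If you call the REST method directly over HTTP, the request might look roughly like the sketch below. The endpoint path and the use of an IAM token are assumptions based on general Yandex Cloud REST conventions; verify the exact method URL in the API reference before using it:

        ```bash
        # Sketch: add a listener via the REST API (endpoint path is an assumption, values are placeholders)
        curl -X POST \
          -H "Authorization: Bearer $(yc iam create-token)" \
          -H "Content-Type: application/json" \
          -d '{
                "listenerSpec": {
                  "name": "<listener_name>",
                  "port": "<incoming_port>",
                  "targetPort": "<target_port>",
                  "internalAddressSpec": { "subnetId": "<subnet_ID>" }
                }
              }' \
          "https://load-balancer.api.cloud.yandex.net/load-balancer/v1/networkLoadBalancers/<load_balancer_ID>:addListener"
        ```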
      Warning

      Your listener IP address will change. Make sure to specify the new listener IP address in the settings of the resources sending traffic through the load balancer.
    - Delete the group instances from the previous availability zone:

      Management console:

      - In the management console, open the folder containing the instance group you need.
      - Go to Compute Cloud.
      - In the left-hand panel, select Instance groups.
      - Select the instance group to update.
      - In the top-right corner, click Edit.
      - Under Allocation, deselect the previous availability zone.
      - Click Save.

      CLI:

      - Open the instance group specification file and edit the instance template:

        - Delete the previous availability zone from the `allocation_policy` section.
        - Delete the ID of the subnet in the previous availability zone from the `network_interface_specs` section.

      - Update the instance group:

        ```bash
        yc compute instance-group update \
          --id <instance_group_ID> \
          --file <instance_specification_file>
        ```

        Where:

        - `--id`: Instance group ID.
        - `--file`: Path to the instance group specification file.

        Result:

        ```text
        id: cl15sjqilrei********
        ...
        allocation_policy:
          zones:
            - zone_id: <new_availability_zone>
        ...
        ```
      Terraform:

      - Open the Terraform configuration file for the instance group. Delete the previous availability zone from the `allocation_policy` section and the ID of the subnet in the previous availability zone from the `network_interface` section:

        ```hcl
        ...
        network_interface {
          subnet_ids = ["<ID_of_subnet_in_new_availability_zone>"]
        }
        ...
        allocation_policy {
          zones = ["<new_availability_zone>"]
        }
        ...
        ```

        Where:

        - `zones`: Availability zone to move the instance group to. You can specify multiple availability zones.
        - `subnet_ids`: ID of the subnet in the availability zone you want to move your instance group to.

        For more information about resource parameters in Terraform, see the relevant provider documentation.

      - Apply the changes:

        - In the terminal, go to the directory where you edited the configuration file.
        - Make sure the configuration file is correct using this command:

          ```bash
          terraform validate
          ```

          If the configuration is correct, you will get this message:

          ```text
          Success! The configuration is valid.
          ```

        - Run this command:

          ```bash
          terraform plan
          ```

          You will see a detailed list of resources. No changes will be made at this step. If the configuration contains any errors, Terraform will show them.
        - Apply the changes:

          ```bash
          terraform apply
          ```

        - Type `yes` and press Enter to confirm the changes.

      The group instances will be deleted from the previous availability zone. You can check the updates using the management console or this CLI command:

      ```bash
      yc compute instance-group get <instance_group_name>
      ```

      API:

      Use the update REST API method for the InstanceGroup resource or the InstanceGroupService/Update gRPC API call.
- Make sure the subnet in the previous availability zone has no resources left.

- Delete the subnet in the previous availability zone.
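  From the CLI, this last step can look like the minimal sketch below; the subnet name is a placeholder. The deletion fails if any resources are still using the subnet, which also serves as a check for the previous step:

  ```bash
  yc vpc subnet delete <subnet_name_in_previous_availability_zone>
  ```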