Moving an instance group with a network load balancer to a different availability zone
Note

We are gradually deprecating the ru-central1-c availability zone. For more information about development plans for availability zones and migration options, see this Yandex Cloud blog post.
To move an instance group with a Yandex Network Load Balancer network load balancer:
- Create a subnet in the availability zone where you want to move your instance group (see the example command below).
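  A minimal sketch of creating the subnet with the CLI; the flags are standard `yc vpc subnet create` options, but the names and address range below are placeholders to replace with your own values:

  ```bash
  # Create a subnet in the target availability zone.
  yc vpc subnet create \
    --name <new_subnet_name> \
    --zone <new_availability_zone> \
    --network-id <network_ID> \
    --range 10.1.0.0/24
  ```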
- Add the group instances to the new availability zone:
  **Management console**

  - In the management console, open the folder containing the instance group you need.
  - Select Compute Cloud.
  - In the left-hand panel, select Instance groups.
  - Select the instance group to update.
  - In the top-right corner, click Edit.
  - Under Allocation, add the availability zone where you want to move the instance group.
  - If your instance group is manually scaled, under Scaling, specify a group size that will be sufficient for placing instances in all the selected availability zones. You will be able to reset the number of instances back to the initial one after all the instances in the group are moved to the new availability zone and deleted from the old one.
  - If your instance group is autoscaling and has the `OPPORTUNISTIC` shutdown strategy, change the strategy to `PROACTIVE` in the Stop VMs by strategy field. You will be able to reset the shutdown strategy back to `OPPORTUNISTIC` after all the instances in the group are moved to the new availability zone and deleted from the old one.
  - Click Save.
  **CLI**

  If you do not have the Yandex Cloud command line interface yet, install and initialize it.

  The folder specified in the CLI profile is used by default. You can specify a different folder using the `--folder-name` or `--folder-id` parameter.

  - Open the instance group specification file and edit the VM template (see the sample spec fragments after this list):
    - Under `allocation_policy`, add a new availability zone.
    - Add the ID of the previously created subnet in the `network_interface_specs` section.
    - If your instance group is manually scaled, under `scale_policy`, specify a group size that will be sufficient for placing instances in all the selected availability zones. You will be able to reset the number of instances back to the initial one after all the instances in the group are moved to the new availability zone and deleted from the old one.
    - If your instance group is autoscaling and has the `OPPORTUNISTIC` shutdown strategy, change the strategy to `PROACTIVE` in the `deploy_policy` section. You will be able to reset the shutdown strategy back to `OPPORTUNISTIC` after all the instances in the group are moved to the new availability zone and deleted from the old one.
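    The relevant fragments of the specification file might look like this. This is a sketch only: it assumes a YAML spec and abbreviates the nesting (for example, it assumes `network_interface_specs` sits in the instance template), so keep your file's actual structure:

    ```yaml
    # Illustrative fragments only; placeholders in angle brackets.
    allocation_policy:
      zones:
        - zone_id: <old_availability_zone>
        - zone_id: <new_availability_zone>       # newly added zone
    instance_template:
      network_interface_specs:
        - network_id: <network_ID>
          subnet_ids:
            - <subnet_ID_in_the_old_availability_zone>
            - <subnet_ID_in_the_new_availability_zone>   # previously created subnet
    scale_policy:
      fixed_scale:
        size: <number_of_instances>              # enough for all selected zones
    deploy_policy:
      strategy: PROACTIVE                        # if the group autoscaled with OPPORTUNISTIC
    ```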
  - View a description of the CLI command to update an instance group:

    ```bash
    yc compute instance-group update --help
    ```
  - Get a list of all instance groups in the default folder:

    ```bash
    yc compute instance-group list
    ```

    Result:

    ```text
    +----------------------+---------------------------------+--------+--------+
    |          ID          |              NAME               | STATUS |  SIZE  |
    +----------------------+---------------------------------+--------+--------+
    | cl15sjqilrei******** | first-fixed-group-with-balancer | ACTIVE | 3      |
    | cl19s7dmihgm******** | test-group                      | ACTIVE | 2      |
    +----------------------+---------------------------------+--------+--------+
    ```
  - Update the instance group:

    ```bash
    yc compute instance-group update \
      --id <instance_group_ID> \
      --file <instance_group_specification_file>
    ```

    Where:

    - `--id`: Instance group ID.
    - `--file`: Path to the instance group specification file.

    Result:

    ```text
    id: cl15sjqilrei********
    ...
    allocation_policy:
      zones:
        - zone_id: <old_availability_zone>
        - zone_id: <new_availability_zone>
    ...
    ```
  **Terraform**

  If you do not have Terraform yet, install it and configure the Yandex Cloud provider.

  - Open the Terraform configuration file for the instance group. Under `allocation_policy`, specify the new availability zone; in the `network_interface` section, specify the ID of the previously created subnet:

    ```hcl
    ...
    network_interface {
      subnet_ids = [
        "<subnet_ID_in_the_old_availability_zone>",
        "<subnet_ID_in_the_new_availability_zone>"
      ]
    }
    ...
    allocation_policy {
      zones = [
        "<old_availability_zone>",
        "<new_availability_zone>"
      ]
    }
    ...
    ```
    Where:

    - `zones`: Availability zones to place the instance group in (both the new and the old one).
    - `subnet_ids`: IDs of the subnets in the availability zones to place the instance group in.
    If your instance group is manually scaled, under `scale_policy`, specify a group size that will be sufficient for placing instances in all the selected availability zones:

    ```hcl
    ...
    scale_policy {
      fixed_scale {
        size = <number_of_instances_per_group>
      }
    }
    ...
    ```

    You will be able to reset the number of instances back to the initial one after all the instances in the group are moved to the new availability zone and deleted from the old one.
    If your instance group is autoscaling and has the `OPPORTUNISTIC` shutdown strategy, change the strategy to `PROACTIVE`:

    ```hcl
    ...
    deploy_policy {
      strategy = "proactive"
    }
    ...
    ```

    You will be able to reset the shutdown strategy back to `OPPORTUNISTIC` after all the instances in the group are moved to the new availability zone and deleted from the old one.
    For more information about resource parameters in Terraform, see the provider documentation.

  - Apply the changes:
    - In the terminal, change to the folder where you edited the configuration file.

    - Make sure the configuration file is correct using the command:

      ```bash
      terraform validate
      ```

      If the configuration is correct, the following message is returned:

      ```text
      Success! The configuration is valid.
      ```

    - Run the command:

      ```bash
      terraform plan
      ```

      The terminal will display a list of resources with parameters. No changes are made at this step. If the configuration contains errors, Terraform will point them out.

    - Apply the configuration changes:

      ```bash
      terraform apply
      ```

    - Confirm the changes: type `yes` in the terminal and press Enter.
  This will add a new availability zone for the instance group. You can check the update using the management console or this CLI command:

  ```bash
  yc compute instance-group get <instance_group_name>
  ```
  **API**

  Use the update REST API method for the InstanceGroup resource or the InstanceGroupService/Update gRPC API call.
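  The request sketch below is illustrative only: the endpoint and field names follow the general Compute Cloud REST API conventions (a `PATCH` on the instance group resource with an `updateMask`), so verify them against the API reference before use:

  ```bash
  # Hypothetical sketch: add a zone to the group's allocation policy.
  curl \
    --request PATCH \
    --header "Authorization: Bearer <IAM_token>" \
    --header "Content-Type: application/json" \
    --data '{
      "updateMask": "allocationPolicy.zones",
      "allocationPolicy": {
        "zones": [
          {"zoneId": "<old_availability_zone>"},
          {"zoneId": "<new_availability_zone>"}
        ]
      }
    }' \
    https://compute.api.cloud.yandex.net/compute/v1/instanceGroups/<instance_group_ID>
  ```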
  If your instance group is manually scaled, specify a group size that will be sufficient for placing instances in all the selected availability zones. You will be able to reset the number of instances back to the initial one after all the instances in the group are moved to the new availability zone and deleted from the old one.
  If your instance group is autoscaling and has the `OPPORTUNISTIC` shutdown strategy, change the strategy to `PROACTIVE`. You will be able to reset the shutdown strategy back to `OPPORTUNISTIC` after all the instances in the group are moved to the new availability zone and deleted from the old one.

  Wait until the instances appear in the new availability zone and switch to the `Running Actual` status.
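  You can track this from the CLI, for example, by listing the group's instances (a sketch; `list-instances` is a standard `yc compute instance-group` subcommand):

  ```bash
  # Shows each instance's availability zone and status.
  yc compute instance-group list-instances <instance_group_ID>
  ```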
- Depending on the load balancer type, follow these steps:

  - External load balancer (`EXTERNAL` type):

    - Wait until the resources of the target group in the new availability zone pass a health check and switch to the `HEALTHY` status. See Checking target health statuses; a CLI sketch follows this step. After this, the load balancer will start routing traffic through the new availability zone. This may take up to two minutes. See Achieving routing convergence in the availability zone.
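      One way to check target statuses from the CLI; a sketch that assumes the `target-states` subcommand, so verify the exact flags with `--help`:

      ```bash
      # Show health states of the targets attached to the balancer.
      yc load-balancer network-load-balancer target-states <balancer_name> \
        --target-group-id <target_group_ID>
      ```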
    - Delete the group instances from the previous availability zone:

      **Management console**

      - In the management console, open the folder containing the instance group you need.
      - Select Compute Cloud.
      - In the left-hand panel, select Instance groups.
      - Select the instance group to update.
      - In the top-right corner, click Edit.
      - Under Allocation, disable the old availability zone.
      - Click Save.
      **CLI**

      - Open the instance group specification file and edit the VM template (see the sample fragments after this list):

        - Delete the old availability zone in the `allocation_policy` section.
        - Remove the subnet ID in the old availability zone from the `network_interface_specs` section.
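        After the edit, the relevant fragments might look like this; a sketch under the same assumptions as the specification fragments shown earlier:

        ```yaml
        allocation_policy:
          zones:
            - zone_id: <new_availability_zone>    # old zone removed
        instance_template:
          network_interface_specs:
            - network_id: <network_ID>
              subnet_ids:
                - <subnet_ID_in_the_new_availability_zone>   # old subnet ID removed
        ```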
      - Update the instance group:

        ```bash
        yc compute instance-group update \
          --id <instance_group_ID> \
          --file <instance_group_specification_file>
        ```

        Where:

        - `--id`: Instance group ID.
        - `--file`: Path to the instance group specification file.

        Result:

        ```text
        id: cl15sjqilrei********
        ...
        allocation_policy:
          zones:
            - zone_id: <new_availability_zone>
        ...
        ```
      **Terraform**

      - Open the Terraform configuration file for the instance group. Under `allocation_policy`, remove the old availability zone; also remove the ID of the subnet in the old availability zone from the `network_interface` section:

        ```hcl
        ...
        network_interface {
          subnet_ids = ["<subnet_ID_in_the_new_availability_zone>"]
        }
        ...
        allocation_policy {
          zones = ["<new_availability_zone>"]
        }
        ...
        ```

        Where:

        - `zones`: Availability zone to move the instance group to. You can specify multiple availability zones.
        - `subnet_ids`: ID of the subnet in the availability zone where you want to move your instance group.
        For more information about resource parameters in Terraform, see the provider documentation.

      - Apply the changes:
        - In the terminal, change to the folder where you edited the configuration file.

        - Make sure the configuration file is correct using the command:

          ```bash
          terraform validate
          ```

          If the configuration is correct, the following message is returned:

          ```text
          Success! The configuration is valid.
          ```

        - Run the command:

          ```bash
          terraform plan
          ```

          The terminal will display a list of resources with parameters. No changes are made at this step. If the configuration contains errors, Terraform will point them out.

        - Apply the configuration changes:

          ```bash
          terraform apply
          ```

        - Confirm the changes: type `yes` in the terminal and press Enter.
        The group instances will be deleted from the old availability zone. You can check the update using the management console or this CLI command:

        ```bash
        yc compute instance-group get <instance_group_name>
        ```
      **API**

      Use the update REST API method for the InstanceGroup resource or the InstanceGroupService/Update gRPC API call.
  - Internal load balancer (`INTERNAL` type):

    - Move the resources that need access to the internal load balancer to the previously created subnet.
    - Switch to a new listener:

      **Management console**

      - In the management console, select the folder containing the load balancer.
      - In the list of services, select Network Load Balancer.
      - Click the name of the load balancer you need.
      - Under Listeners, click the ⋮ icon and select Remove listener.
      - At the top right, click Add listener and create a new listener. When creating the new listener, select a subnet in the availability zone to which you want to migrate your balancer.
      - Click Add.
      **CLI**

      - View the description of the CLI command for deleting a listener:

        ```bash
        yc load-balancer network-load-balancer remove-listener --help
        ```

      - Get a list of all network load balancers in the default folder:

        ```bash
        yc load-balancer network-load-balancer list
        ```

        Result:

        ```text
        +----------------------+---------------+-----------------+----------+----------------+------------------------+----------+
        |          ID          |     NAME      |    REGION ID    |   TYPE   | LISTENER COUNT | ATTACHED TARGET GROUPS |  STATUS  |
        +----------------------+---------------+-----------------+----------+----------------+------------------------+----------+
        | enp2btm6uvdr******** | nlb-34aa5-db1 | ru-central1     | INTERNAL | 0              |                        | ACTIVE   |
        | enpvg9o73hqh******** | test-balancer | ru-central1     | EXTERNAL | 0              |                        | ACTIVE   |
        +----------------------+---------------+-----------------+----------+----------------+------------------------+----------+
        ```
      - Get the listener name:

        ```bash
        yc load-balancer network-load-balancer get <balancer_name>
        ```

        Result:

        ```text
        id: enp2btm6uvdr********
        folder_id: b1gmit33ngp3********
        ...
        listeners:
          - name: listener-980ee-af3
            address: 172.17.0.36
        ```
      - Delete the listener:

        ```bash
        yc load-balancer network-load-balancer remove-listener <balancer_name> \
          --listener-name <listener_name>
        ```

        Where `--listener-name` is the name of the listener you need to delete.

        Result:

        ```text
        done (1s)
        id: enpvg9o73hqh********
        folder_id: b1gmit33ngp3********
        created_at: "2023-08-09T13:44:57Z"
        name: nlb-34aa5-db1
        region_id: ru-central1
        status: INACTIVE
        type: INTERNAL
        ```
      - View a description of the CLI command for adding a listener:

        ```bash
        yc load-balancer network-load-balancer add-listener --help
        ```

      - Add a listener:

        ```bash
        yc load-balancer network-load-balancer add-listener <balancer_name> \
          --listener name=<listener_name>,port=<port>,target-port=<target_port>,internal-subnet-id=<subnet_ID>
        ```
        Where:

        - `name`: Listener name.
        - `port`: Port where the load balancer will accept incoming traffic.
        - `target-port`: Target port where the balancer will send traffic.
        - `internal-subnet-id`: ID of the subnet in the availability zone where you want to move your balancer.

        Result:

        ```text
        done (1s)
        id: enp2btm6uvdr********
        folder_id: b1gmit33ngp3********
        created_at: "2023-08-09T08:37:03Z"
        name: nlb-34aa5-db1
        region_id: ru-central1
        status: ACTIVE
        type: INTERNAL
        listeners:
          - name: new-listener
            address: 10.0.0.16
            port: "22"
            protocol: TCP
            target_port: "22"
            subnet_id: e2lgp8o00g06********
            ip_version: IPV4
        ```
      **Terraform**

      - Open the Terraform file that contains the balancer configuration and change the `name` and `subnet_id` field values in the `listener` section:

        ```hcl
        listener {
          name        = "<new_listener_name>"
          port        = 80
          target_port = 81
          protocol    = "tcp"
          internal_address_spec {
            subnet_id  = "<subnet_ID_in_target_availability_zone>"
            ip_version = "ipv4"
          }
        }
        ```
        Where:

        - `name`: Listener name.
        - `port`: Port where the load balancer will accept incoming traffic.
        - `target_port`: Target port where the balancer will send traffic.
        - `subnet_id`: ID of the subnet in the availability zone where you want to move your balancer.
        For more information about resource parameters in Terraform, see the provider documentation.

      - Apply the changes:
        - In the terminal, change to the folder where you edited the configuration file.

        - Make sure the configuration file is correct using the command:

          ```bash
          terraform validate
          ```

          If the configuration is correct, the following message is returned:

          ```text
          Success! The configuration is valid.
          ```

        - Run the command:

          ```bash
          terraform plan
          ```

          The terminal will display a list of resources with parameters. No changes are made at this step. If the configuration contains errors, Terraform will point them out.

        - Apply the configuration changes:

          ```bash
          terraform apply
          ```

        - Confirm the changes: type `yes` in the terminal and press Enter.
        This will add your new listener to the new availability zone. You can use the management console to check whether the listener was created properly.
      **API**

      To remove a network load balancer's listener, use the removeListener REST API method for the NetworkLoadBalancer resource or the NetworkLoadBalancerService/RemoveListener gRPC API call, and provide the following in the request:

      - Load balancer ID in the `networkLoadBalancerId` parameter.
      - Listener name in the `listenerName` parameter.

      You can get the load balancer ID with the list of network load balancers in the folder and the listener name with the network load balancer details.
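      A hypothetical request sketch; the `:removeListener` action path follows the general Yandex Cloud REST API pattern, so check the API reference for the exact endpoint:

      ```bash
      # Sketch: remove a listener by name (placeholders in angle brackets).
      curl \
        --request POST \
        --header "Authorization: Bearer <IAM_token>" \
        --header "Content-Type: application/json" \
        --data '{"listenerName": "<listener_name>"}' \
        https://load-balancer.api.cloud.yandex.net/load-balancer/v1/networkLoadBalancers/<load_balancer_ID>:removeListener
      ```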
      To add a network load balancer listener, use the addListener REST API method for the NetworkLoadBalancer resource or the NetworkLoadBalancerService/AddListener gRPC API call, and provide the following in your request:

      ```json
      {
        "listenerSpec": {
          "name": "<listener_name>",
          "port": "<incoming_port>",
          "targetPort": "<target_port>",
          "internalAddressSpec": {
            "subnetId": "<subnet_ID>"
          }
        }
      }
      ```
      Where:

      - `name`: Listener name.
      - `port`: Port where the load balancer will accept incoming traffic.
      - `targetPort`: Target port where the balancer will send traffic.
      - `subnetId`: ID of the subnet in the availability zone where you want to move your balancer.
      Warning

      Your listener IP address will change. Make sure to specify the new listener IP address in the settings of the resources the balancer receives traffic from.

    - Delete the group instances from the previous availability zone:
      **Management console**

      - In the management console, open the folder containing the instance group you need.
      - Select Compute Cloud.
      - In the left-hand panel, select Instance groups.
      - Select the instance group to update.
      - In the top-right corner, click Edit.
      - Under Allocation, disable the old availability zone.
      - Click Save.
      **CLI**

      - Open the instance group specification file and edit the VM template:

        - Delete the old availability zone in the `allocation_policy` section.
        - Remove the subnet ID in the old availability zone from the `network_interface_specs` section.
      - Update the instance group:

        ```bash
        yc compute instance-group update \
          --id <instance_group_ID> \
          --file <instance_group_specification_file>
        ```

        Where:

        - `--id`: Instance group ID.
        - `--file`: Path to the instance group specification file.

        Result:

        ```text
        id: cl15sjqilrei********
        ...
        allocation_policy:
          zones:
            - zone_id: <new_availability_zone>
        ...
        ```
      **Terraform**

      - Open the Terraform configuration file for the instance group. Under `allocation_policy`, remove the old availability zone; also remove the ID of the subnet in the old availability zone from the `network_interface` section:

        ```hcl
        ...
        network_interface {
          subnet_ids = ["<subnet_ID_in_the_new_availability_zone>"]
        }
        ...
        allocation_policy {
          zones = ["<new_availability_zone>"]
        }
        ...
        ```

        Where:

        - `zones`: Availability zone to move the instance group to. You can specify multiple availability zones.
        - `subnet_ids`: ID of the subnet in the availability zone where you want to move your instance group.
        For more information about resource parameters in Terraform, see the provider documentation.

      - Apply the changes:
        - In the terminal, change to the folder where you edited the configuration file.

        - Make sure the configuration file is correct using the command:

          ```bash
          terraform validate
          ```

          If the configuration is correct, the following message is returned:

          ```text
          Success! The configuration is valid.
          ```

        - Run the command:

          ```bash
          terraform plan
          ```

          The terminal will display a list of resources with parameters. No changes are made at this step. If the configuration contains errors, Terraform will point them out.

        - Apply the configuration changes:

          ```bash
          terraform apply
          ```

        - Confirm the changes: type `yes` in the terminal and press Enter.
        The group instances will be deleted from the old availability zone. You can check the update using the management console or this CLI command:

        ```bash
        yc compute instance-group get <instance_group_name>
        ```
      **API**

      Use the update REST API method for the InstanceGroup resource or the InstanceGroupService/Update gRPC API call.

- Make sure the subnet in the previous availability zone has no resources left.
- Delete the subnet in the previous availability zone.
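  For example, with the CLI (`yc vpc subnet delete` is the standard command; the subnet name below is a placeholder):

  ```bash
  # Delete the now-empty subnet in the old availability zone.
  yc vpc subnet delete <old_subnet_name>
  ```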