# Moving an instance group with an L7 load balancer to a different availability zone
> **Note**
>
> We are gradually deprecating the `ru-central1-c` availability zone. For more information about development plans for availability zones and migration options, see this Yandex Cloud blog post.
To move an instance group with a [Yandex Application Load Balancer](../../../application-load-balancer/concepts/application-load-balancer.md) L7 load balancer:
- Create a subnet in the availability zone where you want to move your instance group.
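
  For example, you can create the subnet with the CLI. This is only a minimal sketch: the subnet name, network ID, and CIDR range below are placeholders, not values from this guide.

  ```bash
  # Sketch: create a subnet in the target availability zone (replace all placeholders).
  yc vpc subnet create \
    --name <new_subnet_name> \
    --zone <new_availability_zone> \
    --network-id <network_ID> \
    --range 10.10.0.0/24
  ```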
- Enable traffic for the L7 load balancer in the new availability zone:

  **Management console**

  - In the management console, select the folder containing the load balancer.
  - Select **Application Load Balancer**.
  - In the line with the load balancer, click **⋮** and select **Edit**.
  - In the window that opens, under **Allocation**, enable traffic in the availability zone you want to move the instance group to.
  - Click **Save**.

  **CLI**

  If you do not have the Yandex Cloud command line interface yet, install and initialize it.

  The folder specified in the CLI profile is used by default. You can specify a different folder using the `--folder-name` or `--folder-id` parameter.
  - See the description of the CLI command to enable load balancer traffic:

    ```bash
    yc application-load-balancer load-balancer enable-traffic --help
    ```
  - Get a list of all L7 load balancers in the default folder:

    ```bash
    yc application-load-balancer load-balancer list
    ```

    Result:

    ```text
    +----------------------+-------------+-------------+----------------+--------+
    |          ID          |    NAME     |  REGION ID  | LISTENER COUNT | STATUS |
    +----------------------+-------------+-------------+----------------+--------+
    | ds732hi8pn9n******** | sample-alb1 | ru-central1 |              1 | ACTIVE |
    | f3da23i86n2v******** | sample-alb2 | ru-central1 |              1 | ACTIVE |
    +----------------------+-------------+-------------+----------------+--------+
    ```
  - Enable traffic:

    ```bash
    yc application-load-balancer load-balancer enable-traffic <load_balancer_name> \
      --zone <availability_zone>
    ```

    Where `--zone` is the availability zone where you want to move your instance group.

    Result:

    ```yaml
    id: ds7pmslal3km********
    name: sample-alb1
    folder_id: b1gmit33ngp3********
    status: ACTIVE
    region_id: ru-central1
    network_id: enpn46stivv8********
    allocation_policy:
      locations:
        - zone_id: ru-central1-a
          subnet_id: e9bavnqlbiuk********
          disable_traffic: true
        - zone_id: ru-central1-b
          subnet_id: e2lgp8o00g06********
        - zone_id: ru-central1-d
          subnet_id: b0cv501fvp13********
    log_group_id: ckgah4eo2j0r********
    security_group_ids:
      - enpdjc5bitmj********
    created_at: "2023-08-09T08:34:24.887765763Z"
    log_options: {}
    ```

  **Terraform**

  If you do not have Terraform yet, install it and configure the Yandex Cloud provider.

  - Open the Terraform configuration file for the L7 load balancer and, under `allocation_policy`, add a `location` block with the new availability zone and the ID of the previously created subnet:

    ```hcl
    ...
    allocation_policy {
      location {
        zone_id   = "<previous_availability_zone>"
        subnet_id = "<subnet_ID_in_previous_availability_zone>"
      }

      location {
        zone_id   = "<new_availability_zone>"
        subnet_id = "<subnet_ID_in_new_availability_zone>"
      }
    }
    ...
    ```
    Where:

    - `zone_id`: Availability zones where the L7 load balancer will receive traffic.
    - `subnet_id`: IDs of the subnets in the availability zones.

    For more information about resource parameters in Terraform, see the provider documentation.
  - Apply the changes:

    - In the terminal, change to the folder where you edited the configuration file.

    - Make sure the configuration file is correct using the command:

      ```bash
      terraform validate
      ```

      If the configuration is correct, the following message is returned:

      ```text
      Success! The configuration is valid.
      ```

    - Run the command:

      ```bash
      terraform plan
      ```

      The terminal will display a list of resources with parameters. No changes are made at this step. If the configuration contains errors, Terraform will point them out.

    - Apply the configuration changes:

      ```bash
      terraform apply
      ```

    - Confirm the changes: type `yes` in the terminal and press Enter.
  The L7 load balancer will start receiving traffic in the new availability zone. You can check this using the management console or this CLI command:

  ```bash
  yc alb load-balancer get <L7_load_balancer_name>
  ```

  **API**

  Use the update REST API method for the LoadBalancer resource or the LoadBalancerService/Update gRPC API call.
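
  If you go the REST route, the request might look roughly like the sketch below. This is not taken from this guide: the endpoint URL and the `updateMask`/`allocationPolicy` field names are assumptions inferred from the CLI output above, so verify them against the Application Load Balancer API reference before use.

  ```bash
  # Hypothetical sketch: enable traffic in the new zone by patching allocationPolicy.
  # The endpoint and body fields are assumptions; check the API reference.
  curl -X PATCH \
    -H "Authorization: Bearer $(yc iam create-token)" \
    -H "Content-Type: application/json" \
    -d '{
          "updateMask": "allocationPolicy",
          "allocationPolicy": {
            "locations": [
              { "zoneId": "<previous_availability_zone>", "subnetId": "<subnet_ID_in_previous_availability_zone>" },
              { "zoneId": "<new_availability_zone>", "subnetId": "<subnet_ID_in_new_availability_zone>" }
            ]
          }
        }' \
    "https://alb.api.cloud.yandex.net/apploadbalancer/v1/loadBalancers/<load_balancer_ID>"
  ```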
- Add the group instances to the new availability zone:

  **Management console**

  - In the management console, open the folder containing the instance group you need.
  - Select **Compute Cloud**.
  - In the left-hand panel, select **Instance groups**.
  - Select the instance group to update.
  - In the top-right corner, click **Edit**.
  - Under **Allocation**, add the availability zone where you want to move the instance group.
  - If your instance group is a manually scaled one, under **Scaling**, specify a group size sufficient to place instances in all the selected availability zones.

    You will be able to reset the number of instances back to the initial one after all the instances in the group are moved to the new availability zone and deleted from the old one.

  - If your instance group is an autoscaled one and has the `OPPORTUNISTIC` shutdown strategy, change the strategy to `PROACTIVE` in the **Stop VMs by strategy** field.

    You will be able to reset the shutdown strategy back to `OPPORTUNISTIC` after all the instances in the group are moved to the new availability zone and deleted from the old one.

  - Click **Save**.

  **CLI**

  If you do not have the Yandex Cloud command line interface yet, install and initialize it.

  The folder specified in the CLI profile is used by default. You can specify a different folder using the `--folder-name` or `--folder-id` parameter.
  - Open the instance group specification file and edit the VM template; a combined sketch of these edits follows this list:

    - Under `allocation_policy`, add a new availability zone.
    - Under `network_interface_specs`, add the ID of the previously created subnet.
    - If your instance group is a manually scaled one, under `scale_policy`, specify a group size sufficient to place instances in all the selected availability zones.

      You will be able to reset the number of instances back to the initial one after all the instances in the group are moved to the new availability zone and deleted from the old one.

    - If your instance group is an autoscaled one and has the `OPPORTUNISTIC` shutdown strategy, change the strategy to `PROACTIVE` under `deploy_policy`.

      You will be able to reset the shutdown strategy back to `OPPORTUNISTIC` after all the instances in the group are moved to the new availability zone and deleted from the old one.
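
    Put together, the edits might look like the fragment below. This is only a sketch: the field names are the ones referenced above, but your specification file contains many other fields and its exact layout may differ, so merge these edits into your own file rather than copying the fragment as is.

    ```yaml
    # Sketch of the changed fragments of the instance group specification file.
    allocation_policy:
      zones:
        - zone_id: <old_availability_zone>
        - zone_id: <new_availability_zone>

    instance_template:
      network_interface_specs:
        - subnet_ids:
            - <subnet_ID_in_old_availability_zone>
            - <subnet_ID_in_new_availability_zone>

    # For a manually scaled group: a size sufficient for all the selected zones.
    scale_policy:
      fixed_scale:
        size: <number_of_VMs_in_group>

    # For an autoscaled group with the OPPORTUNISTIC shutdown strategy.
    deploy_policy:
      strategy: PROACTIVE
    ```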
  - View a description of the CLI command to update an instance group:

    ```bash
    yc compute instance-group update --help
    ```
  - Get a list of all instance groups in the default folder:

    ```bash
    yc compute instance-group list
    ```

    Result:

    ```text
    +----------------------+---------------------------------+--------+------+
    |          ID          |              NAME               | STATUS | SIZE |
    +----------------------+---------------------------------+--------+------+
    | cl15sjqilrei******** | first-fixed-group-with-balancer | ACTIVE |    3 |
    | cl19s7dmihgm******** | test-group                      | ACTIVE |    2 |
    +----------------------+---------------------------------+--------+------+
    ```
  - Update the instance group:

    ```bash
    yc compute instance-group update \
      --id <instance_group_ID> \
      --file <VM_specification_file>
    ```

    Where:

    - `--id`: Instance group ID.
    - `--file`: Path to the instance group specification file.

    Result:

    ```yaml
    id: cl15sjqilrei********
    ...
    allocation_policy:
      zones:
        - zone_id: <old_availability_zone>
        - zone_id: <new_availability_zone>
    ...
    ```

  **Terraform**

  If you do not have Terraform yet, install it and configure the Yandex Cloud provider.

  - Open the Terraform configuration file for the instance group. Under `allocation_policy`, specify the new availability zone; under `network_interface`, specify the ID of the previously created subnet:

    ```hcl
    ...
    network_interface {
      subnet_ids = [
        "<subnet_ID_in_old_availability_zone>",
        "<subnet_ID_in_new_availability_zone>"
      ]
    }
    ...
    allocation_policy {
      zones = [
        "<old_availability_zone>",
        "<new_availability_zone>"
      ]
    }
    ...
    ```
    Where:

    - `zones`: Availability zones the instance group will reside in (new and old).
    - `subnet_ids`: IDs of subnets in the availability zones the instance group will reside in.
    If your instance group is a manually scaled one, under `scale_policy`, specify a group size sufficient to place instances in all the selected availability zones:

    ```hcl
    ...
    scale_policy {
      fixed_scale {
        size = <number_of_VMs_in_group>
      }
    }
    ...
    ```

    You will be able to reset the number of instances back to the initial one after all the instances in the group are moved to the new availability zone and deleted from the old one.
    If your instance group is an autoscaled one and has the `OPPORTUNISTIC` shutdown strategy, change the strategy to `PROACTIVE`:

    ```hcl
    ...
    deploy_policy {
      strategy = "proactive"
    }
    ...
    ```

    You will be able to reset the shutdown strategy back to `OPPORTUNISTIC` after all the instances in the group are moved to the new availability zone and deleted from the old one.
    For more information about resource parameters in Terraform, see the provider documentation.
  - Apply the changes:

    - In the terminal, change to the folder where you edited the configuration file.

    - Make sure the configuration file is correct using the command:

      ```bash
      terraform validate
      ```

      If the configuration is correct, the following message is returned:

      ```text
      Success! The configuration is valid.
      ```

    - Run the command:

      ```bash
      terraform plan
      ```

      The terminal will display a list of resources with parameters. No changes are made at this step. If the configuration contains errors, Terraform will point them out.

    - Apply the configuration changes:

      ```bash
      terraform apply
      ```

    - Confirm the changes: type `yes` in the terminal and press Enter.
  This will add a new availability zone for the instance group. You can check the update using the management console or this CLI command:

  ```bash
  yc compute instance-group get <VM_group_name>
  ```

  **API**

  Use the update REST API method for the InstanceGroup resource or the InstanceGroupService/Update gRPC API call.

  If your instance group is a manually scaled one, specify a group size sufficient to place instances in all the selected availability zones. You will be able to reset the number of instances back to the initial one after all the instances in the group are moved to the new availability zone and deleted from the old one.

  If your instance group is an autoscaled one and has the `OPPORTUNISTIC` shutdown strategy, change the strategy to `PROACTIVE`. You will be able to reset the shutdown strategy back to `OPPORTUNISTIC` after all the instances in the group are moved to the new availability zone and deleted from the old one.

  Wait until the instances appear in the new availability zone and switch to the `Running Actual` status.
- Delete the group instances from the previous availability zone:

  **Management console**

  - In the management console, open the folder containing the instance group you need.
  - Select **Compute Cloud**.
  - In the left-hand panel, select **Instance groups**.
  - Select the instance group to update.
  - In the top-right corner, click **Edit**.
  - Under **Allocation**, disable the old availability zone.
  - Click **Save**.

  **CLI**

  - Open the instance group specification file and edit the VM template; a sketch of the trimmed fragment follows this list:

    - Delete the old availability zone from the `allocation_policy` section.
    - Remove the subnet ID in the old availability zone from the `network_interface_specs` section.
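
    After the removal, the corresponding fragments of the specification file might look as follows. As with the earlier sketch, the field names are the ones referenced above, but the exact layout of your file may differ, so treat this as an illustration rather than a complete specification.

    ```yaml
    # Sketch of the trimmed fragments of the instance group specification file.
    allocation_policy:
      zones:
        - zone_id: <new_availability_zone>

    instance_template:
      network_interface_specs:
        - subnet_ids:
            - <subnet_ID_in_new_availability_zone>
    ```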
  - Update the instance group:

    ```bash
    yc compute instance-group update \
      --id <instance_group_ID> \
      --file <VM_specification_file>
    ```

    Where:

    - `--id`: Instance group ID.
    - `--file`: Path to the instance group specification file.

    Result:

    ```yaml
    id: cl15sjqilrei********
    ...
    allocation_policy:
      zones:
        - zone_id: <new_availability_zone>
    ...
    ```

  **Terraform**

  - Open the Terraform configuration file for the instance group. Delete the old availability zone from the `allocation_policy` section and remove the subnet ID in the old availability zone from the `network_interface` section:

    ```hcl
    ...
    network_interface {
      subnet_ids = ["<subnet_ID_in_new_availability_zone>"]
    }
    ...
    allocation_policy {
      zones = ["<new_availability_zone>"]
    }
    ...
    ```
    Where:

    - `zones`: Availability zone to move the instance group to. You can specify multiple availability zones.
    - `subnet_ids`: ID of the subnet in the availability zone you want to move your instance group to.

    For more information about resource parameters in Terraform, see the provider documentation.
  - Apply the changes:

    - In the terminal, change to the folder where you edited the configuration file.

    - Make sure the configuration file is correct using the command:

      ```bash
      terraform validate
      ```

      If the configuration is correct, the following message is returned:

      ```text
      Success! The configuration is valid.
      ```

    - Run the command:

      ```bash
      terraform plan
      ```

      The terminal will display a list of resources with parameters. No changes are made at this step. If the configuration contains errors, Terraform will point them out.

    - Apply the configuration changes:

      ```bash
      terraform apply
      ```

    - Confirm the changes: type `yes` in the terminal and press Enter.
  The group instances will be deleted from the old availability zone. You can check the update using the management console or this CLI command:

  ```bash
  yc compute instance-group get <VM_group_name>
  ```

  **API**

  Use the update REST API method for the InstanceGroup resource or the InstanceGroupService/Update gRPC API call.