Moving an instance group with an L7 load balancer to a different availability zone
To move an instance group with a Yandex Application Load Balancer L7 load balancer:
- Create a subnet in the availability zone you want to move your VM instance group to.
- Enable traffic for the L7 load balancer in the new availability zone:
Management console
- In the management console, select the folder containing the load balancer.
- Select Application Load Balancer.
- In the line with the load balancer, click ⋮ and select Edit.
- In the window that opens, under Allocation, enable traffic in the availability zone you want to move your instance group to.
- Click Save.
CLI
If you do not have the Yandex Cloud CLI yet, install and initialize it.

The folder specified in the CLI profile is used by default. You can specify a different folder with the --folder-name or --folder-id parameter.

- See the description of the CLI command for enabling load balancer traffic:

  ```
  yc application-load-balancer load-balancer enable-traffic --help
  ```
- Get a list of all L7 load balancers in the default folder:

  ```
  yc application-load-balancer load-balancer list
  ```

  Result:

  ```
  +----------------------+-------------+-------------+----------------+--------+
  | ID                   | NAME        | REGION ID   | LISTENER COUNT | STATUS |
  +----------------------+-------------+-------------+----------------+--------+
  | ds732hi8pn9n******** | sample-alb1 | ru-central1 | 1              | ACTIVE |
  | f3da23i86n2v******** | sample-alb2 | ru-central1 | 1              | ACTIVE |
  +----------------------+-------------+-------------+----------------+--------+
  ```
- Enable traffic:

  ```
  yc application-load-balancer load-balancer enable-traffic <load_balancer_name> \
    --zone <availability_zone>
  ```

  Where --zone is the availability zone you want to move your instance group to.

  Result:

  ```
  id: ds7pmslal3km********
  name: sample-alb1
  folder_id: b1gmit33ngp3********
  status: ACTIVE
  region_id: ru-central1
  network_id: enpn46stivv8********
  allocation_policy:
    locations:
      - zone_id: ru-central1-a
        subnet_id: e9bavnqlbiuk********
        disable_traffic: true
      - zone_id: ru-central1-b
        subnet_id: e2lgp8o00g06********
      - zone_id: ru-central1-d
        subnet_id: b0cv501fvp13********
  log_group_id: ckgah4eo2j0r********
  security_group_ids:
    - enpdjc5bitmj********
  created_at: "2023-08-09T08:34:24.887765763Z"
  log_options: {}
  ```
Terraform
If you do not have Terraform yet, install it and configure the Yandex Cloud provider.
- Open the Terraform configuration file for the L7 load balancer and, under allocation_policy, add a location with the new availability zone and the ID of the subnet you created earlier:

  ```
  ...
  allocation_policy {
    location {
      zone_id   = "<previous_availability_zone>"
      subnet_id = "<ID_of_subnet_in_previous_availability_zone>"
    }
    location {
      zone_id   = "<new_availability_zone>"
      subnet_id = "<ID_of_subnet_in_new_availability_zone>"
    }
  }
  ...
  ```
  Where:
  - zone_id: Availability zones where the L7 load balancer will accept traffic.
  - subnet_id: IDs of the subnets in those availability zones.

  For more information about resource parameters in Terraform, see the provider documentation.
- Apply the changes:
  - In the terminal, change to the folder where you edited the configuration file.
  - Make sure the configuration file is correct using this command:

    ```
    terraform validate
    ```

    If the configuration is correct, you will get this message:

    ```
    Success! The configuration is valid.
    ```

  - Run this command:

    ```
    terraform plan
    ```

    The terminal will display a list of resources with parameters. No changes will be made at this step. If the configuration contains errors, Terraform will point them out.
  - Apply the configuration changes:

    ```
    terraform apply
    ```

  - Confirm the changes: type yes in the terminal and press Enter.
Incoming traffic will now be routed to the new availability zone through the L7 load balancer. You can check this in the management console or with this CLI command:

```
yc alb load-balancer get <L7_load_balancer_name>
```
API
Use the update REST API method for the LoadBalancer resource or the LoadBalancerService/Update gRPC API call.
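As an illustration only, the allocation-policy part of such an update request body might look like the sketch below. The field names mirror the allocation_policy structure shown in the CLI result earlier in this section; check the exact endpoint, field casing, and update mask syntax against the Application Load Balancer API reference before using it.

```json
{
  "updateMask": "allocationPolicy",
  "allocationPolicy": {
    "locations": [
      {
        "zoneId": "<previous_availability_zone>",
        "subnetId": "<ID_of_subnet_in_previous_availability_zone>"
      },
      {
        "zoneId": "<new_availability_zone>",
        "subnetId": "<ID_of_subnet_in_new_availability_zone>"
      }
    ]
  }
}
```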
- Add the group instances to the new availability zone:
Management console
- In the management console, open the folder containing the instance group you need.
- Select Compute Cloud.
- In the left-hand panel, select Instance groups.
- Select the instance group to update.
- In the top-right corner, click Edit.
- Under Allocation, add the availability zone you want to move the instance group to.
- If your instance group is manually scaled, under Scaling, specify a group size sufficient to place instances in all the selected availability zones. You will be able to reset the number of instances back to the initial one after all the instances in the group are moved to the new availability zone and deleted from the previous one.
- If your instance group is autoscaling and uses the OPPORTUNISTIC shutdown strategy, change the strategy to PROACTIVE in the Stop VMs by strategy field. You will be able to reset the shutdown strategy back to OPPORTUNISTIC after all the instances in the group are moved to the new availability zone and deleted from the previous one.
- Click Save.
CLI
If you do not have the Yandex Cloud CLI yet, install and initialize it.

The folder specified in the CLI profile is used by default. You can specify a different folder with the --folder-name or --folder-id parameter.
- Open the instance group specification file and edit the instance template:
  - Under allocation_policy, add the new availability zone.
  - Under network_interface_specs, add the ID of the previously created subnet.
  - If your instance group is manually scaled, under scale_policy, specify a group size sufficient to place instances in all the selected availability zones. You will be able to reset the number of instances back to the initial one after all the instances in the group are moved to the new availability zone and deleted from the previous one.
  - If your instance group is autoscaling and uses the OPPORTUNISTIC shutdown strategy, change the strategy to PROACTIVE under deploy_policy. You will be able to reset the shutdown strategy back to OPPORTUNISTIC after all the instances in the group are moved to the new availability zone and deleted from the previous one.
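Taken together, the edits above might look like the following fragment of the specification file. This is a sketch with placeholder values: allocation_policy, network_interface_specs, scale_policy, and deploy_policy are the sections named above, while the nesting of network_interface_specs under instance_template and the network_id field are assumptions to be checked against your existing specification.

```yaml
allocation_policy:
  zones:
    - zone_id: <previous_availability_zone>
    - zone_id: <new_availability_zone>            # new zone added
instance_template:
  network_interface_specs:
    - network_id: <network_ID>
      subnet_ids:
        - <ID_of_subnet_in_previous_availability_zone>
        - <ID_of_subnet_in_new_availability_zone> # new subnet added
scale_policy:
  fixed_scale:
    size: <number_of_instances_in_group>          # enough instances for all zones
deploy_policy:
  strategy: PROACTIVE                             # changed from OPPORTUNISTIC
```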
- See the description of the CLI command for updating an instance group:

  ```
  yc compute instance-group update --help
  ```
- Get a list of all instance groups in the default folder:

  ```
  yc compute instance-group list
  ```

  Result:

  ```
  +----------------------+---------------------------------+--------+------+
  | ID                   | NAME                            | STATUS | SIZE |
  +----------------------+---------------------------------+--------+------+
  | cl15sjqilrei******** | first-fixed-group-with-balancer | ACTIVE | 3    |
  | cl19s7dmihgm******** | test-group                      | ACTIVE | 2    |
  +----------------------+---------------------------------+--------+------+
  ```
- Update the instance group:

  ```
  yc compute instance-group update \
    --id <instance_group_ID> \
    --file <instance_specification_file>
  ```

  Where:
  - --id: Instance group ID.
  - --file: Path to the instance group specification file.

  Result:

  ```
  id: cl15sjqilrei********
  ...
  allocation_policy:
    zones:
      - zone_id: <previous_availability_zone>
      - zone_id: <new_availability_zone>
  ...
  ```
Terraform
If you do not have Terraform yet, install it and configure the Yandex Cloud provider.
- Open the Terraform configuration file for the instance group. Specify the new availability zone under allocation_policy and the ID of the previously created subnet under network_interface:

  ```
  ...
  network_interface {
    subnet_ids = [
      "<ID_of_subnet_in_previous_availability_zone>",
      "<ID_of_subnet_in_new_availability_zone>"
    ]
  }
  ...
  allocation_policy {
    zones = [
      "<previous_availability_zone>",
      "<new_availability_zone>"
    ]
  }
  ...
  ```

  Where:
  - zones: Availability zones to host the instance group, both the new and the previous one.
  - subnet_ids: IDs of the subnets in those availability zones.
  If your instance group is manually scaled, under scale_policy, specify a group size sufficient to place instances in all the selected availability zones:

  ```
  ...
  scale_policy {
    fixed_scale {
      size = <number_of_instances_in_group>
    }
  }
  ...
  ```

  You will be able to reset the number of instances back to the initial one after all the instances in the group are moved to the new availability zone and deleted from the previous one.
  If your instance group is autoscaling and uses the OPPORTUNISTIC shutdown strategy, change the strategy to PROACTIVE:

  ```
  ...
  deploy_policy {
    strategy = "proactive"
  }
  ...
  ```

  You will be able to reset the shutdown strategy back to OPPORTUNISTIC after all the instances in the group are moved to the new availability zone and deleted from the previous one.

  For more information about resource parameters in Terraform, see the provider documentation.
- Apply the changes:
  - In the terminal, change to the folder where you edited the configuration file.
  - Make sure the configuration file is correct using this command:

    ```
    terraform validate
    ```

    If the configuration is correct, you will get this message:

    ```
    Success! The configuration is valid.
    ```

  - Run this command:

    ```
    terraform plan
    ```

    The terminal will display a list of resources with parameters. No changes will be made at this step. If the configuration contains errors, Terraform will point them out.
  - Apply the configuration changes:

    ```
    terraform apply
    ```

  - Confirm the changes: type yes in the terminal and press Enter.
This will add the new availability zone for your instance group. You can check the updates in the management console or with this CLI command:

```
yc compute instance-group get <instance_group_name>
```
API
Use the update REST API method for the InstanceGroup resource or the InstanceGroupService/Update gRPC API call.

If your instance group is manually scaled, specify a group size sufficient to place instances in all the selected availability zones. You will be able to reset the number of instances back to the initial one after all the instances in the group are moved to the new availability zone and deleted from the previous one.

If your instance group is autoscaling and uses the OPPORTUNISTIC shutdown strategy, change the strategy to PROACTIVE. You will be able to reset the shutdown strategy back to OPPORTUNISTIC after all the instances in the group are moved to the new availability zone and deleted from the previous one.

Wait until the instances appear in the new availability zone and reach the Running Actual status.
- Delete the group instances from the previous availability zone:
Management console
- In the management console, open the folder containing the instance group you need.
- Select Compute Cloud.
- In the left-hand panel, select Instance groups.
- Select the instance group to update.
- In the top-right corner, click Edit.
- Under Allocation, deselect the previous availability zone.
- Click Save.
CLI
- Open the instance group specification file and edit the instance template:
  - Delete the previous availability zone from the allocation_policy section.
  - Delete the ID of the subnet in the previous availability zone from the network_interface_specs section.
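After these deletions, the corresponding specification fragment would be reduced to something like the sketch below (placeholder values; surrounding fields omitted, and the nesting of network_interface_specs under instance_template is an assumption to be checked against your file):

```yaml
allocation_policy:
  zones:
    - zone_id: <new_availability_zone>              # previous zone removed
instance_template:
  network_interface_specs:
    - network_id: <network_ID>
      subnet_ids:
        - <ID_of_subnet_in_new_availability_zone>   # previous subnet removed
```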
- Update the instance group:

  ```
  yc compute instance-group update \
    --id <instance_group_ID> \
    --file <instance_specification_file>
  ```

  Where:
  - --id: Instance group ID.
  - --file: Path to the instance group specification file.

  Result:

  ```
  id: cl15sjqilrei********
  ...
  allocation_policy:
    zones:
      - zone_id: <new_availability_zone>
  ...
  ```
Terraform
- Open the Terraform configuration file for the instance group. Delete the previous availability zone from the allocation_policy section and the ID of its subnet from the network_interface section:

  ```
  ...
  network_interface {
    subnet_ids = ["<ID_of_subnet_in_new_availability_zone>"]
  }
  ...
  allocation_policy {
    zones = ["<new_availability_zone>"]
  }
  ...
  ```

  Where:
  - zones: Availability zone to move the instance group to. You can specify multiple availability zones.
  - subnet_ids: ID of the subnet in the availability zone you want to move your instance group to.

  For more information about resource parameters in Terraform, see the provider documentation.
- Apply the changes:
  - In the terminal, change to the folder where you edited the configuration file.
  - Make sure the configuration file is correct using this command:

    ```
    terraform validate
    ```

    If the configuration is correct, you will get this message:

    ```
    Success! The configuration is valid.
    ```

  - Run this command:

    ```
    terraform plan
    ```

    The terminal will display a list of resources with parameters. No changes will be made at this step. If the configuration contains errors, Terraform will point them out.
  - Apply the configuration changes:

    ```
    terraform apply
    ```

  - Confirm the changes: type yes in the terminal and press Enter.
The group instances will be deleted from the previous availability zone. You can check the updates in the management console or with this CLI command:

```
yc compute instance-group get <instance_group_name>
```
API
Use the update REST API method for the InstanceGroup resource or the InstanceGroupService/Update gRPC API call.