Scheduled scaling of VM groups using Terraform
To set up scheduled scaling for a VM group using Terraform:
- Prepare your cloud.
- Create an infrastructure.
- Test instance group scaling.
If you no longer need the resources you created, delete them.
Prepare your cloud
Sign up for Yandex Cloud and create a billing account:
- Go to the management console and log in to Yandex Cloud or create an account if you do not have one yet.
- On the Yandex Cloud Billing page, make sure you have a billing account linked and it has the ACTIVE or TRIAL_ACTIVE status. If you do not have a billing account, create one.
If you have an active billing account, you can go to the cloud page to create or select a folder to work in.
Learn more about clouds and folders.
Required paid resources
The infrastructure support costs include:
- Fee for disks and continuously running VMs (see Compute Cloud pricing).
- Function calls, computing resources allocated to executing the function, and outgoing traffic (see Cloud Functions pricing).
Create an infrastructure
With Terraform
Terraform is distributed under the Business Source License.
For more information about the provider resources, see the documentation on the Terraform website.
Create an infrastructure using Terraform:
-
Install Terraform and specify the source for installing the Yandex Cloud provider (see Configure a provider, step 1).
-
Prepare files with the infrastructure description:
Ready-made configuration
-
Clone the repository with configuration files.
git clone https://github.com/yandex-cloud-examples/yc-vm-group-scheduled-scaling
-
Go to the directory with the repository. Make sure it contains the following files:
- vm-scale-scheduled.tf: New infrastructure configuration.
- vm-scale-scheduled.auto.tfvars: User data file.
- vm-scale-scheduled-function.zip: Archive with the Cloud Functions function code.
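For example (assuming the repository was cloned into the default yc-vm-group-scheduled-scaling directory):
cd yc-vm-group-scheduled-scaling
ls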
Manually
-
Create a folder for the files.
-
In the folder, create:
-
The vm-scale-scheduled.tf configuration file:
# Declaring variables for confidential parameters
variable "folder_id" {
  type = string
}

variable "username" {
  type = string
}

variable "ssh_key_path" {
  type = string
}

# Configuring a provider
terraform {
  required_providers {
    yandex = {
      source  = "yandex-cloud/yandex"
      version = ">= 0.47.0"
    }
  }
}

provider "yandex" {
  folder_id = var.folder_id
}

# Creating a service account and assigning roles to it
resource "yandex_iam_service_account" "vm-scale-scheduled-sa" {
  name = "vm-scale-scheduled-sa"
}

resource "yandex_resourcemanager_folder_iam_member" "vm-scale-scheduled-sa-role-compute" {
  folder_id = var.folder_id
  role      = "compute.admin"
  member    = "serviceAccount:${yandex_iam_service_account.vm-scale-scheduled-sa.id}"
}

resource "yandex_resourcemanager_folder_iam_member" "vm-scale-scheduled-sa-role-iam" {
  folder_id = var.folder_id
  role      = "iam.serviceAccounts.user"
  member    = "serviceAccount:${yandex_iam_service_account.vm-scale-scheduled-sa.id}"
}

resource "yandex_resourcemanager_folder_iam_member" "vm-scale-scheduled-sa-role-functions" {
  folder_id = var.folder_id
  role      = "functions.functionInvoker"
  member    = "serviceAccount:${yandex_iam_service_account.vm-scale-scheduled-sa.id}"
}

# Creating a cloud network and subnets
resource "yandex_vpc_network" "vm-scale-scheduled-network" {
  name = "vm-scale-scheduled-network"
}

resource "yandex_vpc_subnet" "vm-scale-scheduled-subnet-a" {
  name           = "vm-scale-scheduled-subnet-a"
  zone           = "ru-central1-a"
  v4_cidr_blocks = ["192.168.1.0/24"]
  network_id     = yandex_vpc_network.vm-scale-scheduled-network.id
}

resource "yandex_vpc_subnet" "vm-scale-scheduled-subnet-b" {
  name           = "vm-scale-scheduled-subnet-b"
  zone           = "ru-central1-b"
  v4_cidr_blocks = ["192.168.2.0/24"]
  network_id     = yandex_vpc_network.vm-scale-scheduled-network.id
}

# Creating an image
resource "yandex_compute_image" "vm-scale-scheduled-image" {
  source_family = "ubuntu-2004-lts"
}

# Creating an instance group
resource "yandex_compute_instance_group" "vm-scale-scheduled-ig" {
  name               = "vm-scale-scheduled-ig"
  service_account_id = yandex_iam_service_account.vm-scale-scheduled-sa.id

  allocation_policy {
    zones = [
      "ru-central1-a",
      "ru-central1-b"
    ]
  }

  instance_template {
    boot_disk {
      mode = "READ_WRITE"
      initialize_params {
        image_id = yandex_compute_image.vm-scale-scheduled-image.id
        size     = 15
      }
    }
    platform_id = "standard-v3"
    resources {
      cores         = 2
      core_fraction = 20
      memory        = 2
    }
    network_interface {
      network_id = yandex_vpc_network.vm-scale-scheduled-network.id
      subnet_ids = [
        yandex_vpc_subnet.vm-scale-scheduled-subnet-a.id,
        yandex_vpc_subnet.vm-scale-scheduled-subnet-b.id
      ]
    }
    metadata = {
      user-data = "#cloud-config\nusers:\n  - name: ${var.username}\n    groups: sudo\n    shell: /bin/bash\n    sudo: 'ALL=(ALL) NOPASSWD:ALL'\n    ssh_authorized_keys:\n      - ${file("${var.ssh_key_path}")}"
    }
  }

  scale_policy {
    fixed_scale {
      size = 2
    }
  }

  deploy_policy {
    max_unavailable = 2
    max_creating    = 2
    max_expansion   = 2
    max_deleting    = 2
  }

  depends_on = [
    yandex_resourcemanager_folder_iam_member.vm-scale-scheduled-sa-role-compute,
    yandex_resourcemanager_folder_iam_member.vm-scale-scheduled-sa-role-iam
  ]
}

# Creating a function
resource "yandex_function" "vm-scale-scheduled-function" {
  name               = "vm-scale-scheduled-function"
  runtime            = "bash"
  user_hash          = "function-v1"
  entrypoint         = "handler.sh"
  content {
    zip_filename = "vm-scale-scheduled-function.zip"
  }
  execution_timeout  = "60"
  memory             = "128"
  service_account_id = yandex_iam_service_account.vm-scale-scheduled-sa.id
  environment = {
    IG_NAME      = yandex_compute_instance_group.vm-scale-scheduled-ig.name
    IG_BASE_SIZE = "2"
    FOLDER_ID    = var.folder_id
  }
  depends_on = [
    yandex_resourcemanager_folder_iam_member.vm-scale-scheduled-sa-role-functions
  ]
}

# Creating a trigger
resource "yandex_function_trigger" "vm-scale-scheduled-trigger" {
  name = "vm-scale-scheduled-trigger"
  timer {
    cron_expression = "*/2 * * * ? *"
  }
  function {
    id                 = yandex_function.vm-scale-scheduled-function.id
    tag                = "$latest"
    service_account_id = yandex_iam_service_account.vm-scale-scheduled-sa.id
  }
  depends_on = [
    yandex_resourcemanager_folder_iam_member.vm-scale-scheduled-sa-role-functions
  ]
}
-
The vm-scale-scheduled.auto.tfvars user data file:
folder_id    = "<folder_ID>"
username     = "<VM_user_name>"
ssh_key_path = "<path_to_public_SSH_key>"
-
The handler.sh file with the Cloud Functions function code:
# Get ID and current size of the instance group
IG_SPEC=$(yc compute instance-group get --name $IG_NAME --folder-id $FOLDER_ID --format json)
IG_ID=$(jq -r ".id" <<< $IG_SPEC)
IG_SIZE=$(jq -r ".scale_policy.fixed_scale.size" <<< $IG_SPEC)

# Calculate new size for the instance group
if [ $IG_SIZE = $IG_BASE_SIZE ]; then
    IG_SIZE="$(($IG_BASE_SIZE + 1))"
else
    IG_SIZE=$IG_BASE_SIZE
fi

# Update the instance group
yc compute instance-group update --id $IG_ID --scale-policy-fixed-scale-size $IG_SIZE
-
In the folder, create the vm-scale-scheduled-function.zip archive containing the handler.sh file.
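For example, on Linux or macOS you can build the archive with the standard zip utility (a minimal sketch, assuming the command is run from the folder that contains handler.sh):
# Package the function code into the archive expected by the Terraform configuration.
zip vm-scale-scheduled-function.zip handler.sh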
For more information about the parameters of the resources used here (yandex_iam_service_account, yandex_resourcemanager_folder_iam_member, yandex_vpc_network, yandex_vpc_subnet, yandex_compute_image, yandex_compute_instance_group, yandex_function, and yandex_function_trigger), see the Terraform provider documentation.
-
In the vm-scale-scheduled.auto.tfvars file, set the following user-defined properties:
- folder_id: ID of the folder to create the resources in.
- username: Name of the user to create on the VM.
- ssh_key_path: Path to the file with the public SSH key used to authenticate the user on the VM. You can create a key pair by following this guide or as sketched right after this list.
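If you do not have an SSH key pair yet, you can generate one, for example, with ssh-keygen (a sketch; the key type and file paths are up to you):
# Creates ~/.ssh/id_ed25519 and ~/.ssh/id_ed25519.pub by default.
ssh-keygen -t ed25519
Then set ssh_key_path to the path of the .pub file.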
-
Create resources:
-
In the terminal, change to the folder where you edited the configuration file.
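If the working directory has not been initialized yet, you may also need to download the provider plugins first (a standard Terraform step, not specific to this tutorial):
terraform init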
-
Make sure the configuration file is correct using the command:
terraform validate
If the configuration is correct, the following message is returned:
Success! The configuration is valid.
-
Run the command:
terraform plan
The terminal will display a list of resources with parameters. No changes are made at this step. If the configuration contains errors, Terraform will point them out.
-
Apply the configuration changes:
terraform apply
-
Confirm the changes: type yes in the terminal and press Enter.
-
After creating the infrastructure, test instance group scaling.
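You do not have to wait for the timer to fire: as a quick check, you can invoke the function once manually with the CLI (a sketch; it assumes the yc CLI is configured and your account is allowed to invoke the function), and the group size should change shortly afterwards:
yc serverless function invoke vm-scale-scheduled-function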
Test instance group scaling
- In the management console, select example-folder.
- In the list of services, select Compute Cloud.
- In the left-hand panel, select Instance groups.
- Select the vm-scale-scheduled-ig group.
- Under VM states, make sure the number of instances changes every two minutes: it increases from 2 to 3, then decreases from 3 to 2, and so on. You can also check whether the instance group has been updated by opening the Operations tab.
You can also check the scaling from the CLI. Run the following command several times:
yc compute instance-group get vm-scale-scheduled-ig \
--folder-name example-folder
Result:
id: cl1l0ljqbmkp********
folder_id: b1g9hv2loamq********
created_at: "2022-03-28T13:24:20.693Z"
...
managed_instances_state:
target_size: "2"
running_actual_count: "2"
...
The value of the target_size field for the group should change from 2 to 3 and back.
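To watch the size change over time without re-running the command by hand, you can poll the group in a small shell loop (a minimal sketch; the jq filter assumes the same JSON field layout as in handler.sh above):
# Print the target size every 60 seconds; stop with Ctrl+C.
while true; do
  yc compute instance-group get vm-scale-scheduled-ig \
    --folder-name example-folder \
    --format json | jq -r '.managed_instances_state.target_size'
  sleep 60
done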
You can also get information about the vm-scale-scheduled-ig instance group multiple times using the get REST API method for the InstanceGroup resource or the InstanceGroupService/Get gRPC API call. The value of the target_size field for the group should change from 2 to 3 and back.
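One possible way to make the REST call from the command line is sketched below; the endpoint path follows the Compute Cloud API reference for InstanceGroup and is an assumption here, so check the reference before relying on it:
# Get an IAM token and request the instance group over REST (endpoint assumed).
IAM_TOKEN=$(yc iam create-token)
IG_ID=$(yc compute instance-group get vm-scale-scheduled-ig --folder-name example-folder --format json | jq -r '.id')
curl -H "Authorization: Bearer ${IAM_TOKEN}" \
  "https://compute.api.cloud.yandex.net/compute/v1/instanceGroups/${IG_ID}"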
How to delete the resources you created
To stop paying for the resources you created:
-
Open the vm-scale-scheduled.tf configuration file and delete the description of the new infrastructure from it.
-
Apply the changes:
-
In the terminal, change to the folder where you edited the configuration file.
-
Make sure the configuration file is correct using the command:
terraform validate
If the configuration is correct, the following message is returned:
Success! The configuration is valid.
-
Run the command:
terraform plan
The terminal will display a list of resources with parameters. No changes are made at this step. If the configuration contains errors, Terraform will point them out.
-
Apply the configuration changes:
terraform apply
-
Confirm the changes: type yes in the terminal and press Enter.
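Alternatively, if you no longer need anything described in the configuration, you can delete all the created resources in one step with the standard Terraform command (not specific to this tutorial); confirm the deletion by typing yes when prompted:
terraform destroy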