Creating an instance group connected to a file storage
One of the ways to handle stateful workloads is saving the application state to a file storage independent of the instance group.
To create an instance group that will automatically connect a common file storage to each of its instances:
- By default, all operations in Instance Groups are performed on behalf of a service account. If you don't have a service account, create one.
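For example, a service account can be created and granted the required role with the yc CLI; the name and the IDs below are only placeholders:

```
# Create a service account for Instance Groups (example name)
yc iam service-account create --name ig-sa

# Grant the service account the editor role on the folder
yc resource-manager folder add-access-binding <folder_ID> \
  --role editor \
  --subject serviceAccount:<service_account_ID>
```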
- If you do not have a file storage, create one.
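For example, a file storage can be created with the yc CLI; the name, size (GB), and availability zone below are placeholders, and you can check the exact flags with `yc compute filesystem create --help`:

```
yc compute filesystem create \
  --name sample-fs \
  --zone ru-central1-a \
  --size 10
```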
- Create an instance group:
**CLI**

If you do not have the Yandex Cloud command line interface yet, install and initialize it.
The folder specified in the CLI profile is used by default. You can specify a different folder using the `--folder-name` or `--folder-id` parameter.

- View the description of the CLI command to create an instance group:

```
yc compute instance-group create --help
```
- Check whether the folder contains any networks:

```
yc vpc network list
```
If there are none, create a network.
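For example, a network and a subnet can be created as follows; the names and the CIDR block are illustrative:

```
yc vpc network create --name network1

yc vpc subnet create \
  --name subnet1 \
  --zone ru-central1-a \
  --network-name network1 \
  --range 192.168.10.0/24
```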
- Select one of the Yandex Cloud Marketplace public images, e.g., Ubuntu 22.04 LTS.

To get a list of available images using the CLI, run this command:

```
yc compute image list --folder-id standard-images
```
Result:
```
+----------------------+-------------------------------------+--------------------------+----------------------+--------+
|          ID          |                NAME                 |          FAMILY          |     PRODUCT IDS      | STATUS |
+----------------------+-------------------------------------+--------------------------+----------------------+--------+
...
| fdvk34al8k5n******** | centos-7-1549279494                 | centos-7                 | dqni65lfhvv2******** | READY  |
| fdv7ooobjfl3******** | windows-2016-gvlk-1548913814        | windows-2016-gvlk        | dqnnc72gj2is******** | READY  |
| fdv4f5kv5cvf******** | ubuntu-1604-lts-1549457823          | ubuntu-1604-lts          | dqnnb6dc7640******** | READY  |
...
+----------------------+-------------------------------------+--------------------------+----------------------+--------+
```
- Prepare a file with the YAML specification of the instance group and give it a name, e.g., `specification.yaml`.

To connect the file storage to VMs in the instance group, add the following to the specification:

- In the `instance_template` field, a nested `filesystem_specs` field containing the description of the file storage:

```
instance_template:
  ...
  filesystem_specs:
    - mode: READ_WRITE
      device_name: <VM_device_name>
      filesystem_id: <file_storage_ID>
```
Where:

- `mode`: File storage access mode, `READ_WRITE` (read and write).
- `device_name`: Device name for connecting the file storage to the VM, e.g., `sample-fs`. The name may contain lowercase Latin letters, numbers, and hyphens. The first character must be a letter, the last character cannot be a hyphen, and the name may be up to 63 characters long.
- `filesystem_id`: File storage ID. You can view the ID in the management console or using the `yc compute filesystem list` CLI command.
- In the `#cloud-config` section of the `instance_template.metadata.user-data` field, commands for mounting the file storage to the VM:

```
instance_template:
  ...
  metadata:
    user-data: |-
      #cloud-config
      ...
      runcmd:
        - mkdir <VM_mount_point>
        - mount -t virtiofs <VM_device_name> <VM_mount_point>
        - echo "<VM_device_name> <VM_mount_point> virtiofs rw 0 0" | tee -a /etc/fstab
```
Where:

- `<VM_mount_point>`: VM directory to mount the connected file storage to, e.g., `/mnt/vfs0`.
- `<VM_device_name>`: Device name for connecting the file storage to the VM. The value must match the one specified earlier in the `instance_template.filesystem_specs.device_name` field.
YAML specification example:
```
name: my-vm-group-with-fs
service_account_id: ajegtlf2q28a********
description: "This instance group was created using a YAML configuration file."
instance_template:
  platform_id: standard-v3
  resources_spec:
    memory: 2g
    cores: 2
  boot_disk_spec:
    mode: READ_WRITE
    disk_spec:
      image_id: fd8dlvgiatiqd8tt2qke
      type_id: network-hdd
      size: 32g
  network_interface_specs:
    - network_id: enp9mji1m7b3********
      primary_v4_address_spec: { one_to_one_nat_spec: { ip_version: IPV4 } }
      security_group_ids:
        - enpuatgvejtn********
  filesystem_specs:
    - mode: READ_WRITE
      device_name: sample-fs
      filesystem_id: epdccsrlalon********
  metadata:
    user-data: |-
      #cloud-config
      datasource:
        Ec2:
          strict_id: false
      ssh_pwauth: no
      users:
        - name: my-user
          sudo: ALL=(ALL) NOPASSWD:ALL
          shell: /bin/bash
          ssh_authorized_keys:
            - <public_SSH_key>
      runcmd:
        - mkdir /mnt/vfs0
        - mount -t virtiofs sample-fs /mnt/vfs0
        - echo "sample-fs /mnt/vfs0 virtiofs rw 0 0" | tee -a /etc/fstab
deploy_policy:
  max_unavailable: 1
  max_expansion: 0
scale_policy:
  fixed_scale:
    size: 2
allocation_policy:
  zones:
    - zone_id: ru-central1-a
      instance_tags_pool:
        - first
        - second
```
This example shows a specification for creating a fixed-size instance group with a file storage connected to the instances.
For more information about the instance group specification parameters, see Specification of an instance group in YAML format.
- Create an instance group in the default folder:

```
yc compute instance-group create --file specification.yaml
```
This command creates a group of two similar instances with the following configuration:
- Name: `my-vm-group-with-fs`.
- OS: Ubuntu 22.04 LTS.
- Availability zone: `ru-central1-a`.
- vCPUs: 2; RAM: 2 GB.
- Network HDD: 32 GB.
- Connected to a file storage. The file storage will be mounted to the `/mnt/vfs0` directory of the group VMs.
- Each VM of the group will have a public IP address assigned. This way, you can easily connect to the group VM over SSH when checking the result.
If you create an instance group without public IP addresses, you can still connect to a group VM over SSH by specifying its internal IP address or FQDN instead of the public IP address. However, you can only make such a connection from another virtual machine that has a public IP address and is located in the same Yandex Cloud cloud network as the group VM.
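To check the result from the terminal, you can list the VMs of the group together with their IP addresses, e.g. (see `yc compute instance-group list-instances --help` for the exact syntax):

```
yc compute instance-group list-instances my-vm-group-with-fs
```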
**Terraform**

If you don't have Terraform, install it and configure the Yandex Cloud provider.
- In the configuration file, describe the parameters of the resources you want to create:

```
resource "yandex_iam_service_account" "ig-sa" {
  name        = "ig-sa"
  description = "Service account for managing an instance group."
}

resource "yandex_resourcemanager_folder_iam_member" "editor" {
  folder_id = "<folder_ID>"
  role      = "editor"
  member    = "serviceAccount:${yandex_iam_service_account.ig-sa.id}"
  depends_on = [
    yandex_iam_service_account.ig-sa,
  ]
}

resource "yandex_compute_instance_group" "ig-1" {
  name                = "fixed-ig"
  folder_id           = "<folder_ID>"
  service_account_id  = "${yandex_iam_service_account.ig-sa.id}"
  deletion_protection = "<deletion_protection>"
  depends_on          = [yandex_resourcemanager_folder_iam_member.editor]

  instance_template {
    platform_id = "standard-v3"
    resources {
      memory = <RAM_size_GB>
      cores  = <number_of_vCPU_cores>
    }
    boot_disk {
      mode = "READ_WRITE"
      initialize_params {
        image_id = "<image_ID>"
      }
    }
    filesystem {
      mode          = "READ_WRITE"
      device_name   = "<VM_device_name>"
      filesystem_id = "<file_storage_ID>"
    }
    network_interface {
      network_id         = "${yandex_vpc_network.network-1.id}"
      subnet_ids         = ["${yandex_vpc_subnet.subnet-1.id}"]
      security_group_ids = ["<list_of_security_group_IDs>"]
      nat                = true
    }
    metadata = {
      user-data = "#cloud-config\ndatasource:\n Ec2:\n  strict_id: false\nssh_pwauth: no\nusers:\n- name: <VM_user_name>\n  sudo: ALL=(ALL) NOPASSWD:ALL\n  shell: /bin/bash\n  ssh_authorized_keys:\n  - <public_SSH_key>\nruncmd:\n- mkdir <VM_mount_point>\n- mount -t virtiofs <VM_device_name> <VM_mount_point>\n- echo \"<VM_device_name> <VM_mount_point> virtiofs rw 0 0\" | tee -a /etc/fstab"
    }
  }

  scale_policy {
    fixed_scale {
      size = <number_of_VMs_in_group>
    }
  }

  allocation_policy {
    zones = ["ru-central1-a"]
  }

  deploy_policy {
    max_unavailable = 1
    max_expansion   = 0
  }
}

resource "yandex_vpc_network" "network-1" {
  name = "network1"
}

resource "yandex_vpc_subnet" "subnet-1" {
  name           = "subnet1"
  zone           = "ru-central1-a"
  network_id     = "${yandex_vpc_network.network-1.id}"
  v4_cidr_blocks = ["192.168.10.0/24"]
}
```
Where:
- `yandex_iam_service_account`: Service account description. All operations in Instance Groups are performed on behalf of the service account. You cannot delete a service account while it is linked to an instance group.
- `yandex_resourcemanager_folder_iam_member`: Description of access permissions to the folder the service account belongs to. To be able to create, update, and delete VM instances in the instance group, assign the `editor` role to the service account.
- `yandex_compute_instance_group`: Description of the instance group.

  - General information about the VM group:

    - `name`: VM group name.
    - `folder_id`: Folder ID.
    - `service_account_id`: Service account ID.
    - `deletion_protection`: Instance group protection against deletion, `true` or `false`. You cannot delete an instance group with this option enabled. The default value is `false`.
  - VM template:

    - `platform_id`: Platform.
    - `resources`: Number of vCPU cores and RAM available to the VM. The values must match the selected platform.
    - `boot_disk`: Boot disk settings.

      - `mode`: Disk access mode, `READ_ONLY` or `READ_WRITE`.
      - `image_id`: ID of the selected image. You can get the image ID from the list of public images.
    - `filesystem`: File storage settings.

      - `mode`: File storage access mode, `READ_WRITE` (read and write).
      - `device_name`: Device name for connecting the file storage to the VM, e.g., `sample-fs`. The name may contain lowercase Latin letters, numbers, and hyphens. The first character must be a letter, the last character cannot be a hyphen, and the name may be up to 63 characters long.
      - `filesystem_id`: File storage ID. You can view the ID in the management console or using the `yc compute filesystem list` CLI command.
    - `network_interface`: Network configuration. Specify the IDs of your network, subnet, and security groups.
    - `metadata`: You need to provide the following in the metadata:

      - VM user name and public key to enable this user to access the VM via SSH.
      - VM mount point for the file storage, i.e., VM directory to mount the connected file storage to, e.g., `/mnt/vfs0`.
      - VM device name, i.e., device name for connecting the file storage to the VM. The value must match the one specified earlier in the `device_name` field of the `filesystem` section.

      For more information, see VM metadata.
  - Policies:

    - `deploy_policy`: Deployment policy for instances in the group.
    - `scale_policy`: Scaling policy for instances in the group.
    - `allocation_policy`: Policy for allocating VM instances across availability zones and regions.
- `yandex_vpc_network`: Description of the cloud network.
- `yandex_vpc_subnet`: Description of the subnet the instance group will connect to.

Note

If you already have suitable resources, such as a service account, cloud network, and subnet, you do not need to describe them again. Use their names and IDs in the appropriate parameters.

For more information about the resources you can create with Terraform, see the provider documentation.
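If you need to look up the IDs referenced in the configuration, such as the image ID and the file storage ID, you can use the yc CLI:

```
# Image ID for boot_disk.initialize_params.image_id
yc compute image list --folder-id standard-images

# File storage ID for filesystem.filesystem_id
yc compute filesystem list
```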
- Create resources:

- In the terminal, change to the folder where you edited the configuration file.

- Make sure the configuration file is correct using the command:

```
terraform validate
```

If the configuration is correct, the following message is returned:

```
Success! The configuration is valid.
```
- Run the command:

```
terraform plan
```
The terminal will display a list of resources with parameters. No changes are made at this step. If the configuration contains errors, Terraform will point them out.
- Apply the configuration changes:

```
terraform apply
```

- Confirm the changes: type `yes` in the terminal and press Enter.
All the resources you need will then be created in the specified folder. You can check the new resources and their settings using the management console.

Each VM of the group will have a public IP address assigned. This way, you can easily connect to the group VM over SSH when checking the result.
If you create an instance group without public IP addresses, you can still connect to a group VM over SSH by specifying its internal IP address or FQDN instead of the public IP address. However, you can only make such a connection from another virtual machine that has a public IP address and is located in the same Yandex Cloud cloud network as the group VM.
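You can also check the created group from the terminal, e.g. (the group name `fixed-ig` comes from the configuration above; see `yc compute instance-group list-instances --help` for the exact syntax):

```
yc compute instance-group list
yc compute instance-group list-instances fixed-ig
```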
**API**

Use the create REST API method for the InstanceGroup resource or the InstanceGroupService/Create gRPC API call.
- Make sure the file storage is connected to VMs in the instance group. To do so, connect to a VM via SSH and navigate to the directory that you specified as the mount point.
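For example, assuming the group VM's public IP address is `<VM_public_IP>`, the user name matches the one specified in the metadata, and the storage was mounted to `/mnt/vfs0`:

```
# Connect to one of the group VMs
ssh <VM_user_name>@<VM_public_IP>

# On the VM, check that the virtiofs file storage is mounted at the mount point
findmnt /mnt/vfs0
df -hT /mnt/vfs0
```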