Deploying the GlusterFS parallel file system in high availability mode
Use this tutorial to create an infrastructure made up of three segments sharing a common GlusterFS file system. Placing storage disks in three different availability zones will ensure high availability and fault tolerance of your file system.
To configure a file system with high availability:
- Prepare your cloud.
- Configure the CLI profile.
- Prepare an environment for deploying the resources.
- Deploy your resources.
- Install and configure GlusterFS.
- Test the solution for availability and fault tolerance.
If you no longer need the resources you created, delete them.
Prepare your cloud
Sign up for Yandex Cloud and create a billing account:
- Go to the management console and log in to Yandex Cloud or create an account if you do not have one yet.
- On the Yandex Cloud Billing page, make sure you have a billing account linked and it has the ACTIVE or TRIAL_ACTIVE status. If you do not have a billing account, create one.
If you have an active billing account, you can go to the cloud page and create or select a folder for your infrastructure to operate in.
Learn more about clouds and folders.
Required paid resources
The infrastructure support costs include:
- Fee for continuously running VMs and disks (see Yandex Compute Cloud pricing).
- Fee for using public IP addresses and outgoing traffic (see Yandex Virtual Private Cloud pricing).
Configure the CLI profile
- If you do not have the Yandex Cloud command line interface yet, install it and sign in as a user.
- Create a service account:

  Management console
  - In the management console, select the folder where you want to create a service account.
  - In the list of services, select Identity and Access Management.
  - Click Create service account.
  - Enter a name for the service account, e.g., sa-glusterfs.
  - Click Create.

  CLI
  The folder specified in the CLI profile is used by default. You can specify a different folder using the --folder-name or --folder-id parameter.
  Run the command below to create a service account, specifying sa-glusterfs as its name:

  yc iam service-account create --name sa-glusterfs

  Where name is the service account name.

  Result:

  id: ajehr0to1g8b********
  folder_id: b1gv87ssvu49********
  created_at: "2023-06-20T09:03:11.665153755Z"
  name: sa-glusterfs

  API
  To create a service account, use the ServiceAccountService/Create gRPC API call or the create REST API method for the ServiceAccount resource.
- Assign the service account the administrator role for the folder:

  Management console
  - On the management console home page, select a folder.
  - Go to the Access bindings tab.
  - Find the sa-glusterfs account in the list and click the options icon in its row.
  - Click Edit roles.
  - Click Add role in the dialog that opens and select the admin role.

  CLI
  Run this command:

  yc resource-manager folder add-access-binding <folder_ID> \
    --role admin \
    --subject serviceAccount:<service_account_ID>

  API
  To assign the service account a role for the folder, use the setAccessBindings REST API method for the Folder resource or the FolderService/SetAccessBindings gRPC API call.
- Set up the CLI profile to run operations on behalf of the service account:

  CLI
  - Create an authorized key for the service account and save it to a file:

    yc iam key create \
      --service-account-id <service_account_ID> \
      --folder-id <ID_of_folder_with_service_account> \
      --output key.json

    Where:
    - service-account-id: Service account ID.
    - folder-id: ID of the folder in which the service account was created.
    - output: Name of the file with the authorized key.

    Result:

    id: aje8nn871qo4********
    service_account_id: ajehr0to1g8b********
    created_at: "2023-06-20T09:16:43.479156798Z"
    key_algorithm: RSA_2048

  - Create a CLI profile to run operations on behalf of the service account:

    yc config profile create sa-glusterfs

    Result:

    Profile 'sa-glusterfs' created and activated

  - Set the profile configuration:

    yc config set service-account-key key.json
    yc config set cloud-id <cloud_ID>
    yc config set folder-id <folder_ID>

    Where:
    - service-account-key: File with the service account authorized key.
    - cloud-id: Cloud ID.
    - folder-id: Folder ID.

  - Add the credentials to the environment variables:

    export YC_TOKEN=$(yc iam create-token)
    export YC_CLOUD_ID=$(yc config get cloud-id)
    export YC_FOLDER_ID=$(yc config get folder-id)
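Optionally, verify that the profile is active and the service account can access the target folder. This is a minimal sanity check using the yc CLI and the environment variables set above:

# Show the active profile settings (service account key, cloud ID, folder ID)
yc config list

# Confirm the service account can read the folder it will deploy into
yc resource-manager folder get --id "$YC_FOLDER_ID"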
Prepare an environment for deploying the resources
- Create an SSH key pair:
ssh-keygen -t ed25519
We recommend leaving the key file name unchanged.
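If you accepted the defaults, the key pair is saved as ~/.ssh/id_ed25519 and ~/.ssh/id_ed25519.pub; you can print the public key to confirm it was created:

cat ~/.ssh/id_ed25519.pub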
- Clone the yandex-cloud-examples/yc-distributed-ha-storage-with-glusterfs GitHub repository and go to the yc-distributed-ha-storage-with-glusterfs folder:
git clone https://github.com/yandex-cloud-examples/yc-distributed-ha-storage-with-glusterfs.git
cd ./yc-distributed-ha-storage-with-glusterfs
- Edit the variables.tf file, specifying the parameters of the resources you are deploying:

Warning
The values set in the file result in deploying a resource-intensive infrastructure.
To deploy the resources within your available quotas, use the values below or change them according to your specific needs.

- In disk_size, change default to 30.
- In client_cpu_count, change default to 2.
- In storage_cpu_count, change default to 2.
- If you specified a non-default name when creating the SSH key pair, in local_pubkey_path, change default to <path_to_public_SSH_key>.
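As a quick check before planning, you can print the defaults you just edited. This assumes the variables are declared as standard Terraform variable blocks with a default argument, as the variable names above suggest:

grep -A 3 -E 'variable "(disk_size|client_cpu_count|storage_cpu_count|local_pubkey_path)"' variables.tf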
Deploy your resources
- Initialize Terraform:
terraform init
- Check the Terraform file configuration:
terraform validate
- Check the list of cloud resources you are about to create:
terraform plan
- Create resources:
terraform apply -auto-approve
- Wait until a process completion message appears:
Outputs:

connect_line = "ssh storage@158.160.108.137"
public_ip = "158.160.108.137"
This will create three VMs for hosting client code (client01, client02, and client03) in the folder, as well as three VMs for distributed data storage (gluster01, gluster02, and gluster03) linked to the client VMs and placed in three different availability zones.
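Optionally, confirm that all six VMs are up before you continue. This uses the yc CLI; the names below match the ones this tutorial creates:

# Expect client01-03 and gluster01-03 in the RUNNING state
yc compute instance list --folder-id "$YC_FOLDER_ID"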
Install and configure GlusterFS
- Connect to the client01 VM using the command from the process completion output:
ssh storage@158.160.108.137
- Switch to the root superuser mode:
sudo -i
- Install ClusterShell:
dnf install epel-release -y
dnf install clustershell -y
echo 'ssh_options: -oStrictHostKeyChecking=no' >> /etc/clustershell/clush.conf
- Create the configuration files:
cat > /etc/clustershell/groups.conf <<EOF
[Main]
default: cluster
confdir: /etc/clustershell/groups.conf.d $CFGDIR/groups.conf.d
autodir: /etc/clustershell/groups.d $CFGDIR/groups.d
EOF
cat > /etc/clustershell/groups.d/cluster.yaml <<EOF
cluster:
    all: '@clients,@gluster'
    clients: 'client[01-03]'
    gluster: 'gluster[01-03]'
EOF
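You can verify that ClusterShell picks up these groups before running any remote commands. This is a minimal check, assuming the clustershell tools installed above are on PATH:

# Each group should expand to the matching VM names
nodeset -f @clients
nodeset -f @gluster
nodeset -f @all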
- Install GlusterFS:
clush -w @all hostname # check and auto add fingerprints
clush -w @all dnf install centos-release-gluster -y
clush -w @all dnf --enablerepo=powertools install glusterfs-server -y
clush -w @gluster mkfs.xfs -f -i size=512 /dev/vdb
clush -w @gluster mkdir -p /bricks/brick1
clush -w @gluster "echo '/dev/vdb /bricks/brick1 xfs defaults 1 2' >> /etc/fstab"
clush -w @gluster "mount -a && mount"
- Restart GlusterFS:
clush -w @gluster systemctl enable glusterd
clush -w @gluster systemctl restart glusterd
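As an optional check, make sure the daemon is active on every storage node before forming the pool:

clush -w @gluster systemctl is-active glusterd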
- Add the gluster02 and gluster03 VMs to the trusted storage pool (this also checks that they are reachable):
clush -w gluster01 gluster peer probe gluster02
clush -w gluster01 gluster peer probe gluster03
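To confirm the pool was formed, list the peers from gluster01; both probed nodes should be reported as connected:

clush -w gluster01 gluster peer status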
- Create a vol0 folder on each data storage VM and configure availability and fault tolerance by creating the regional-volume shared folder (a dispersed volume that tolerates the loss of one brick):
clush -w @gluster mkdir -p /bricks/brick1/vol0
clush -w gluster01 gluster volume create regional-volume disperse 3 redundancy 1 gluster01:/bricks/brick1/vol0 gluster02:/bricks/brick1/vol0 gluster03:/bricks/brick1/vol0
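You can review the layout of the new volume before tuning it; the output should list the Disperse type and the three bricks created above:

clush -w gluster01 gluster volume info regional-volume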
- Configure additional performance settings:
clush -w gluster01 gluster volume set regional-volume client.event-threads 8
clush -w gluster01 gluster volume set regional-volume server.event-threads 8
clush -w gluster01 gluster volume set regional-volume cluster.shd-max-threads 8
clush -w gluster01 gluster volume set regional-volume performance.read-ahead-page-count 16
clush -w gluster01 gluster volume set regional-volume performance.client-io-threads on
clush -w gluster01 gluster volume set regional-volume performance.quick-read off
clush -w gluster01 gluster volume set regional-volume performance.parallel-readdir on
clush -w gluster01 gluster volume set regional-volume performance.io-thread-count 32
clush -w gluster01 gluster volume set regional-volume performance.cache-size 1GB
clush -w gluster01 gluster volume set regional-volume server.allow-insecure on
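If you want to double-check that the options were applied, you can query the volume settings (gluster volume get is available in current GlusterFS releases; the grep filter below is just an example):

clush -w gluster01 "gluster volume get regional-volume all | grep -E 'event-threads|io-thread-count|cache-size'"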
- Mount the regional-volume shared folder on the client VMs:
clush -w gluster01 gluster volume start regional-volume
clush -w @clients mount -t glusterfs gluster01:/regional-volume /mnt/
Test the solution for availability and fault tolerance
- Check the status of the regional-volume shared folder:
clush -w gluster01 gluster volume status
Result:
gluster01: Status of volume: regional-volume
gluster01: Gluster process                             TCP Port  RDMA Port  Online  Pid
gluster01: ------------------------------------------------------------------------------
gluster01: Brick gluster01:/bricks/brick1/vol0         54660     0          Y       1374
gluster01: Brick gluster02:/bricks/brick1/vol0         58127     0          Y       7716
gluster01: Brick gluster03:/bricks/brick1/vol0         53346     0          Y       7733
gluster01: Self-heal Daemon on localhost               N/A       N/A        Y       1396
gluster01: Self-heal Daemon on gluster02               N/A       N/A        Y       7738
gluster01: Self-heal Daemon on gluster03               N/A       N/A        Y       7755
gluster01:
gluster01: Task Status of Volume regional-volume
gluster01: ------------------------------------------------------------------------------
gluster01: There are no active volume tasks
gluster01:
- Create a text file:
cat > /mnt/test.txt <<EOF
Hello, GlusterFS!
EOF
- Make sure the file is available on all three client VMs:
clush -w @clients sha256sum /mnt/test.txt
Result:
client01: 5fd9c031531c39f2568a8af5512803fad053baf3fe9eef2a03ed2a6f0a884c85 /mnt/test.txt
client02: 5fd9c031531c39f2568a8af5512803fad053baf3fe9eef2a03ed2a6f0a884c85 /mnt/test.txt
client03: 5fd9c031531c39f2568a8af5512803fad053baf3fe9eef2a03ed2a6f0a884c85 /mnt/test.txt
- Shut down one of the storage VMs, e.g., gluster02:

  Management console
  - In the management console, select the folder the VM belongs to.
  - Select Compute Cloud.
  - Select the gluster02 VM from the list, click the options icon in its row, and select Stop.
  - In the window that opens, click Stop.

  CLI
  - See the description of the CLI command to stop a VM:

    yc compute instance stop --help

  - Stop the VM:

    yc compute instance stop gluster02

  API
  Use the stop REST API method for the Instance resource or the InstanceService/Stop gRPC API call.
- Make sure that the VM is shut down:
clush -w gluster01 gluster volume status
Result:
gluster01: Status of volume: regional-volume
gluster01: Gluster process                             TCP Port  RDMA Port  Online  Pid
gluster01: ------------------------------------------------------------------------------
gluster01: Brick gluster01:/bricks/brick1/vol0         54660     0          Y       1374
gluster01: Brick gluster03:/bricks/brick1/vol0         53346     0          Y       7733
gluster01: Self-heal Daemon on localhost               N/A       N/A        Y       1396
gluster01: Self-heal Daemon on gluster03               N/A       N/A        Y       7755
gluster01:
gluster01: Task Status of Volume regional-volume
gluster01: ------------------------------------------------------------------------------
gluster01: There are no active volume tasks
gluster01:
- Make sure that the file is still available on all three client VMs:
clush -w @clients sha256sum /mnt/test.txt
Result:
client01: 5fd9c031531c39f2568a8af5512803fad053baf3fe9eef2a03ed2a6f0a884c85 /mnt/test.txt
client02: 5fd9c031531c39f2568a8af5512803fad053baf3fe9eef2a03ed2a6f0a884c85 /mnt/test.txt
client03: 5fd9c031531c39f2568a8af5512803fad053baf3fe9eef2a03ed2a6f0a884c85 /mnt/test.txt
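When you are done testing, you can optionally bring the stopped node back using the start counterpart of the stop command above. After the VM boots, the brick rejoins the volume and the self-heal daemon should sync any changes made while it was offline:

# Run from the machine where the yc CLI profile is configured
yc compute instance start gluster02

# Then, from client01, confirm the gluster02 brick is online again
clush -w gluster01 gluster volume status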
How to delete the resources you created
To stop paying for the resources created, delete them:
terraform destroy -auto-approve