Deploying GlusterFS in high availability mode

Written by
Yandex Cloud
Updated at May 7, 2025
  • Get your cloud ready
    • Required paid resources
  • Configure the CLI profile
  • Set up your resource environment
  • Deploy your resources
  • Install and configure GlusterFS
  • Test the availability and fault tolerance of the solution
  • How to delete the resources you created

GlusterFS is a parallel, distributed, and scalable file system. Scaled horizontally, it provides the cloud with aggregate throughput of tens of GB/s and hundreds of thousands of IOPS.

Use this tutorial to create an infrastructure made up of three segments sharing a common GlusterFS file system. Placing storage disks in three different availability zones will ensure the high availability and fault tolerance of your file system.

To configure a high availability file system:

  1. Get your cloud ready.
  2. Configure the CLI profile.
  3. Set up an environment for deploying the resources.
  4. Deploy your resources.
  5. Install and configure GlusterFS.
  6. Test the solution’s availability and fault tolerance.

If you no longer need the resources you created, delete them.

Get your cloud ready

Sign up for Yandex Cloud and create a billing account:

  1. Navigate to the management console and log in to Yandex Cloud or register a new account.
  2. On the Yandex Cloud Billing page, make sure you have a billing account linked and it has the ACTIVE or TRIAL_ACTIVE status. If you do not have a billing account, create one and link a cloud to it.

If you have an active billing account, you can navigate to the cloud page to create or select a folder for your infrastructure to operate in.

Learn more about clouds and folders.

Required paid resources

The cost of supporting this infrastructure includes:

  • Fee for continuously running VMs and disks (see Yandex Compute Cloud pricing).
  • Fee for using public IP addresses and outbound traffic (see Yandex Virtual Private Cloud pricing).

Configure the CLI profile

  1. If you do not have the Yandex Cloud CLI yet, install it and authenticate by following the instructions.

  2. Create a service account:

    Management console
    CLI
    API
    1. In the management console, select the folder where you want to create a service account.
    2. In the list of services, select Identity and Access Management.
    3. Click Create service account.
    4. Specify the service account name, e.g., sa-glusterfs.
    5. Click Create.

    The folder specified when creating the CLI profile is used by default. To change the default folder, use the yc config set folder-id <folder_ID> command. You can specify a different folder using the --folder-name or --folder-id parameter.

    Run the command below to create a service account, specifying sa-glusterfs as its name:

    yc iam service-account create --name sa-glusterfs
    

    Where name is the service account name.

    Result:

    id: ajehr0to1g8b********
    folder_id: b1gv87ssvu49********
    created_at: "2023-06-20T09:03:11.665153755Z"
    name: sa-glusterfs
    

    To create a service account, use the ServiceAccountService/Create gRPC API call or the create REST API method for the ServiceAccount resource.

  3. Assign the administrator role for the folder to the service account:

    Management console
    CLI
    API
    1. On the management console home page, select a folder.
    2. Go to the Access bindings tab.
    3. Find the sa-glusterfs account in the list and click the options icon in its row.
    4. Click Edit roles.
    5. Click Add role in the dialog that opens and select the admin role.

    Run this command:

    yc resource-manager folder add-access-binding <folder_ID> \
       --role admin \
       --subject serviceAccount:<service_account_ID>
    

    To assign a role for a folder to a service account, use the setAccessBindings REST API method for the ServiceAccount resource or the ServiceAccountService/SetAccessBindings gRPC API call.

  4. Set up the CLI profile to run operations on behalf of the service account:

    CLI
    1. Create an authorized key for the service account and save it to a file:

      yc iam key create \
      --service-account-id <service_account_ID> \
      --folder-id <ID_of_folder_with_service_account> \
      --output key.json
      

      Where:

      • service-account-id: Service account ID.
      • folder-id: Service account folder ID.
      • output: Authorized key file name.

      Result:

      id: aje8nn871qo4********
      service_account_id: ajehr0to1g8b********
      created_at: "2023-06-20T09:16:43.479156798Z"
      key_algorithm: RSA_2048
      
    2. Create a CLI profile to run operations on behalf of the service account:

      yc config profile create sa-glusterfs
      

      Result:

      Profile 'sa-glusterfs' created and activated
      
    3. Configure the profile:

      yc config set service-account-key key.json
      yc config set cloud-id <cloud_ID>
      yc config set folder-id <folder_ID>
      

      Where:

      • service-account-key: Authorized key file name.
      • cloud-id: Cloud ID.
      • folder-id: Folder ID.
    4. Export your credentials to environment variables:

      export YC_TOKEN=$(yc iam create-token)
      export YC_CLOUD_ID=$(yc config get cloud-id)
      export YC_FOLDER_ID=$(yc config get folder-id)
      
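Optionally, verify the configuration before moving on. These are standard yc commands; replace <folder_ID> with the ID of your folder:

yc config profile list                                        # the sa-glusterfs profile should be marked as active
yc config list                                                # shows service-account-key, cloud-id, and folder-id
yc resource-manager folder list-access-bindings <folder_ID>   # the service account should be listed with the admin role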

Set up your resource environment

  1. Create an SSH key pair:

    ssh-keygen -t ed25519
    

    We recommend using the default key file name.

  2. Install Terraform.

  3. Clone the yandex-cloud-examples/yc-distributed-ha-storage-with-glusterfs GitHub repository and go to the yc-distributed-ha-storage-with-glusterfs folder:

    git clone https://github.com/yandex-cloud-examples/yc-distributed-ha-storage-with-glusterfs.git
    cd ./yc-distributed-ha-storage-with-glusterfs
    
  4. Edit the variables.tf file, specifying the parameters of the resources you are deploying:

    Warning

    The values set in this file deploy a resource-intensive infrastructure.
    To stay within your available quotas, use the values below or adjust them to your needs.

    1. In disk_size, change default to 30.
    2. In client_cpu_count, change default to 2.
    3. In storage_cpu_count, change default to 2.
    4. If you used a non-default name when creating the SSH key pair, change default to <public_SSH_key_path> under local_pubkey_path.
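
    As an optional sanity check, you can preview the edited defaults. This assumes variables.tf declares these parameters as standard variable blocks named disk_size, client_cpu_count, storage_cpu_count, and local_pubkey_path, as listed above:

    grep -A 3 -E 'variable "(disk_size|client_cpu_count|storage_cpu_count|local_pubkey_path)"' variables.tf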

Deploy your resources

  1. Initialize Terraform:
    terraform init
    
  2. Check the Terraform file configuration:
    terraform validate
    
  3. Preview the list of new cloud resources:
    terraform plan
    
  4. Create the resources:
    terraform apply -auto-approve
    
  5. Wait for the notification that the operation has completed:
    Outputs:
    
    connect_line = "ssh storage@158.160.108.137"
    public_ip = "158.160.108.137"
    

This will create three VMs for hosting client code (client01, client02, and client03) in the folder, as well as three VMs for distributed data storage (gluster01, gluster02, and gluster03) linked to the client VMs and placed in three different availability zones.
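
If you need the connection details again later, you can re-read them from the Terraform state at any time. These are standard terraform output commands; the output names match those shown above:

terraform output connect_line
terraform output -raw public_ip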

Install and configure GlusterFS

  1. Connect to the client01 VM using the connect_line command from the Terraform output:

    ssh storage@158.160.108.137
    
  2. Switch to the root user:

    sudo -i
    
  3. Install ClusterShell:

    dnf install epel-release -y
    dnf install clustershell -y
    echo 'ssh_options: -oStrictHostKeyChecking=no' >> /etc/clustershell/clush.conf
    
  4. Create the configuration files:

    cat > /etc/clustershell/groups.conf <<EOF
    [Main]
    default: cluster
    confdir: /etc/clustershell/groups.conf.d $CFGDIR/groups.conf.d
    autodir: /etc/clustershell/groups.d $CFGDIR/groups.d
    EOF
    
    cat > /etc/clustershell/groups.d/cluster.yaml <<EOF
    cluster:
        all: '@clients,@gluster'
        clients: 'client[01-03]'
        gluster: 'gluster[01-03]'
    EOF 
    
  5. Install GlusterFS:

    clush -w @all hostname # check and auto add fingerprints
    clush -w @all dnf install centos-release-gluster -y
    clush -w @all dnf --enablerepo=powertools install glusterfs-server -y
    clush -w @gluster mkfs.xfs -f -i size=512 /dev/vdb
    clush -w @gluster mkdir -p /bricks/brick1
    clush -w @gluster "echo '/dev/vdb /bricks/brick1 xfs defaults 1 2' >> /etc/fstab"
    clush -w @gluster "mount -a && mount"
    
  6. Restart GlusterFS:

    clush -w @gluster systemctl enable glusterd
    clush -w @gluster systemctl restart glusterd
    
  7. Probe gluster02 and gluster03 to check that they are available and add them to the trusted storage pool:

    clush -w gluster01 gluster peer probe gluster02
    clush -w gluster01 gluster peer probe gluster03
    
  8. Create a vol0 folder on each data storage VM and configure availability and fault tolerance by creating the regional-volume shared folder:

    clush -w @gluster mkdir -p /bricks/brick1/vol0
    clush -w gluster01 gluster volume create regional-volume disperse 3 redundancy 1 gluster01:/bricks/brick1/vol0 gluster02:/bricks/brick1/vol0 gluster03:/bricks/brick1/vol0
    
  9. Apply additional performance settings:

    clush -w gluster01 gluster volume set regional-volume client.event-threads 8
    clush -w gluster01 gluster volume set regional-volume server.event-threads 8
    clush -w gluster01 gluster volume set regional-volume cluster.shd-max-threads 8
    clush -w gluster01 gluster volume set regional-volume performance.read-ahead-page-count 16
    clush -w gluster01 gluster volume set regional-volume performance.client-io-threads on
    clush -w gluster01 gluster volume set regional-volume performance.quick-read off 
    clush -w gluster01 gluster volume set regional-volume performance.parallel-readdir on 
    clush -w gluster01 gluster volume set regional-volume performance.io-thread-count 32
    clush -w gluster01 gluster volume set regional-volume performance.cache-size 1GB
    clush -w gluster01 gluster volume set regional-volume server.allow-insecure on
    
  10. Mount the regional-volume shared folder on the client VMs (a note on making this mount persistent follows this list):

    clush -w gluster01 gluster volume start regional-volume
    clush -w @clients mount -t glusterfs gluster01:/regional-volume /mnt/
    
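
A dispersed volume created with disperse 3 redundancy 1 stores erasure-coded data across the three bricks and stays available if any one of them fails. Before testing, you can optionally confirm the cluster state from client01 using the same clush and gluster commands as above:

clush -w gluster01 gluster peer status                    # both probed peers should be in the "Peer in Cluster" state
clush -w gluster01 gluster volume info regional-volume    # Type: Disperse, three bricks
clush -w @gluster df -h /bricks/brick1                    # the XFS bricks should be mounted on every storage VM
clush -w @clients df -h /mnt                              # the shared folder should be mounted on every client VM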

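The mount from the last step does not persist across client reboots. Below is a minimal sketch of a persistent mount, assuming you also want the clients to fall back to the other storage nodes when gluster01 is unavailable (backup-volfile-servers is a standard GlusterFS mount option; adjust it to your needs):

# hypothetical fstab entry for every client; it only takes effect after a reboot, since /mnt is already mounted
clush -w @clients "echo 'gluster01:/regional-volume /mnt glusterfs defaults,_netdev,backup-volfile-servers=gluster02:gluster03 0 0' >> /etc/fstab"
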
Test the availability and fault tolerance of the solution

  1. Check the status of the regional-volume shared folder:

    clush -w gluster01 gluster volume status
    

    Result:

    gluster01: Status of volume: regional-volume
    gluster01: Gluster process                             TCP Port  RDMA Port  Online  Pid
    gluster01: ------------------------------------------------------------------------------
    gluster01: Brick gluster01:/bricks/brick1/vol0         54660     0          Y       1374
    gluster01: Brick gluster02:/bricks/brick1/vol0         58127     0          Y       7716
    gluster01: Brick gluster03:/bricks/brick1/vol0         53346     0          Y       7733
    gluster01: Self-heal Daemon on localhost               N/A       N/A        Y       1396
    gluster01: Self-heal Daemon on gluster02               N/A       N/A        Y       7738
    gluster01: Self-heal Daemon on gluster03               N/A       N/A        Y       7755
    gluster01:
    gluster01: Task Status of Volume regional-volume
    gluster01: ------------------------------------------------------------------------------
    gluster01: There are no active volume tasks
    gluster01:
    
  2. Create a text file:

    cat > /mnt/test.txt <<EOF
    Hello, GlusterFS!
    EOF
    
  3. Make sure the file is available on all three client VMs:

    clush -w @clients sha256sum /mnt/test.txt
    

    Result:

    client01: 5fd9c031531c39f2568a8af5512803fad053baf3fe9eef2a03ed2a6f0a884c85  /mnt/test.txt
    client02: 5fd9c031531c39f2568a8af5512803fad053baf3fe9eef2a03ed2a6f0a884c85  /mnt/test.txt
    client03: 5fd9c031531c39f2568a8af5512803fad053baf3fe9eef2a03ed2a6f0a884c85  /mnt/test.txt
    
  4. Shut down one of the storage VMs, e.g., gluster02:

    Management console
    CLI
    API
    1. In the management console, select the folder this VM belongs to.
    2. Select Compute Cloud.
    3. Select the gluster02 VM from the list, click the options icon, and select Stop.
    4. In the window that opens, click Stop.
    1. See the description of the CLI command for stopping a VM:

      yc compute instance stop --help
      
    2. Stop the VM:

      yc compute instance stop gluster02
      

    Use the stop REST API method for the Instance resource or the InstanceService/Stop gRPC API call.

  5. Make sure the VM is shut down and its brick is no longer listed in the volume status:

    clush -w gluster01 gluster volume status
    

    Result:

    gluster01: Status of volume: regional-volume
    gluster01: Gluster process                             TCP Port  RDMA Port  Online  Pid
    gluster01: ------------------------------------------------------------------------------
    gluster01: Brick gluster01:/bricks/brick1/vol0         54660     0          Y       1374
    gluster01: Brick gluster03:/bricks/brick1/vol0         53346     0          Y       7733
    gluster01: Self-heal Daemon on localhost               N/A       N/A        Y       1396
    gluster01: Self-heal Daemon on gluster03               N/A       N/A        Y       7755
    gluster01:
    gluster01: Task Status of Volume regional-volume
    gluster01: ------------------------------------------------------------------------------
    gluster01: There are no active volume tasks
    gluster01:
    
  6. Make sure the file is still available on all three client VMs:

    clush -w @clients sha256sum /mnt/test.txt
    

    Result:

    client01: 5fd9c031531c39f2568a8af5512803fad053baf3fe9eef2a03ed2a6f0a884c85  /mnt/test.txt
    client02: 5fd9c031531c39f2568a8af5512803fad053baf3fe9eef2a03ed2a6f0a884c85  /mnt/test.txt
    client03: 5fd9c031531c39f2568a8af5512803fad053baf3fe9eef2a03ed2a6f0a884c85  /mnt/test.txt
    
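
If you plan to keep using the infrastructure after the test, start gluster02 again and let the volume heal. This is a hedged sketch using standard yc and gluster commands; self-heal normally completes on its own:

yc compute instance start gluster02                            # run on the machine where the CLI profile is configured
clush -w gluster01 gluster volume heal regional-volume info    # run on client01; the number of pending entries should drop to zero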

How to delete the resources you created

To stop paying for the resources created, delete them:

terraform destroy -auto-approve
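
Optionally, make sure no compute or network resources are left in the folder. These are standard yc commands run with the same CLI profile:

yc compute instance list
yc vpc network list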
