Backing up to Object Storage with GeeseFS

Written by
Yandex Cloud
Updated at January 29, 2026
  • Getting started
    • Required paid resources
  • Create a bucket
  • Create a service account
  • Create a static access key
  • Set up your environment
    • Install GeeseFS
    • Get authenticated in GeeseFS
  • Mount a bucket
  • Synchronize the local folder with the bucket
    • Manual synchronization
    • Automatic synchronization
  • How to delete the resources you created

In this tutorial, you will configure backup of local files to Yandex Object Storage with GeeseFS.

GeeseFS lets you mount a bucket as a regular folder, so you can back up data with familiar copy and synchronization tools. A backup then amounts to synchronizing the local folder with the bucket as if they were two directories, one of which happens to live in the cloud. Synchronization tools such as rsync or robocopy keep this efficient by transferring only new and modified files.

To configure backup using GeeseFS:

  1. Get your cloud ready.
  2. Create a bucket.
  3. Create a service account.
  4. Create a static access key.
  5. Set up your environment.
  6. Mount your bucket.
  7. Synchronize the local folder with the bucket.

If you no longer need the resources you created, delete them.

Getting started

Sign up for Yandex Cloud and create a billing account:

  1. Navigate to the management console and log in to Yandex Cloud or create a new account.
  2. On the Yandex Cloud Billing page, make sure you have a billing account linked and that its status is ACTIVE or TRIAL_ACTIVE. If you do not have a billing account, create one and link a cloud to it.

If you have an active billing account, you can navigate to the cloud page to create or select a folder for your infrastructure.

Learn more about clouds and folders here.

Required paid resources

The bucket support cost includes the fee for bucket data storage and data operations (see Yandex Object Storage pricing).

Create a bucket

Note

To protect your backups from accidental file deletion, enable S3 bucket versioning. This way, deleted or overwritten files will be saved as previous versions you can restore if needed. For more information about S3 bucket versioning, see this guide.

Without versioning, you will not be able to restore files after they are deleted from the bucket, even if they were backed up there earlier.

Management console
AWS CLI
API
  1. In the management console, navigate to the relevant folder.
  2. Select Object Storage.
  3. Click Create bucket.
  4. Enter a name for the bucket according to the naming requirements.
  5. In the Read objects, Read object list, and Read settings fields, select With authorization.
  6. Click Create bucket.
  1. If you do not have the AWS CLI yet, install and configure it.

  2. Create a bucket by entering its name following the naming requirements:

    aws --endpoint-url=https://storage.yandexcloud.net \
      s3 mb s3://<bucket_name>
    

    Result:

    make_bucket: backup-bucket
    

Use the create REST API method for the Bucket resource, the BucketService/Create gRPC API call, or the create S3 API method.

Create a service account

Create a service account to be used for backups.

Management console
CLI
API
  1. In the management console, select Identity and Access Management.
  2. Click Create service account.
  3. In the Name field, specify sa-backup-to-s3.
  4. Click Add role and select the storage.editor role.
  5. Click Create.

If you do not have the Yandex Cloud CLI installed yet, install and initialize it.

By default, the CLI uses the folder specified when creating the profile. To change the default folder, use the yc config set folder-id <folder_ID> command. You can also set a different folder for any specific command using the --folder-name or --folder-id parameter.

  1. Create a service account:

    yc iam service-account create --name sa-backup-to-s3 \
      --folder-name <folder_name>
    

    Result:

    id: ajeab0cnib1p********
    folder_id: b0g12ga82bcv********
    created_at: "2025-10-03T09:44:35.989446Z"
    name: sa-backup-to-s3
    
  2. Assign the storage.editor role for the folder to the service account:

    yc resource-manager folder add-access-binding <folder_name> \
      --service-account-name sa-backup-to-s3 \
      --role storage.editor
    

    Result:

    effective_deltas:
      - action: ADD
        access_binding:
          role_id: storage.editor
          subject:
            id: ajeab0cnib1p********
            type: serviceAccount
    
  1. Create a service account named sa-backup-to-s3. To do this, use the create REST API method for the ServiceAccount resource or the ServiceAccountService/Create gRPC API call.
  2. Assign the storage.editor role for the current folder to the service account. To do this, use the setAccessBindings REST API method for the Folder resource or the FolderService/SetAccessBindings gRPC API call.

Note

To work with objects in an encrypted bucket, a user or service account must have the following roles for the encryption key in addition to the storage.configurer role:

  • kms.keys.encrypter: To read the key, encrypt and upload objects.
  • kms.keys.decrypter: To read the key, decrypt and download objects.
  • kms.keys.encrypterDecrypter: This role includes the kms.keys.encrypter and kms.keys.decrypter permissions.

For more information, see Yandex Key Management Service service roles.

Create a static access key

Management console
CLI
API
  1. In the management console, select Identity and Access Management.

  2. In the left-hand panel, select Service accounts.

  3. Select the sa-backup-to-s3 service account.

  4. In the top panel, click Create new key and select Create static access key.

  5. Enter a description for the key and click Create.

  6. Save the ID and secret key for later when you are mounting the bucket.

    Alert

    After you close this dialog, the key value will no longer be available.

  1. Run this command:

    yc iam access-key create \
      --service-account-name sa-backup-to-s3
    

    Where --service-account-name is the name of the service account you are creating the key for.

    Result:

    access_key:
      id: aje726ab18go********
      service_account_id: ajecikmc374i********
      created_at: "2024-11-28T14:16:44.936656476Z"
      key_id: YCAJEOmgIxyYa54LY********
    secret: YCMiEYFqczmjJQ2XCHMOenrp1s1-yva1********
    
  2. Save the ID (key_id) and secret key (secret) for later when you are mounting the bucket.

To create an access key, use the create REST API method for the AccessKey resource or the AccessKeyService/Create gRPC API call.

Save the ID (key_id) and secret key (secret) for later when you are mounting the bucket.

Set up your environment

Install GeeseFS

Debian/Ubuntu
CentOS
macOS
Windows
  1. Make sure the FUSE utilities are installed in the distribution:

    apt list --installed | grep fuse
    

    Warning

    Many Linux distributions have the utilities for working with FUSE pre-installed by default. Reinstalling or deleting them may lead to OS failures.

  2. If the FUSE utilities are not installed, run this command:

    sudo apt-get install fuse
    
  3. Download and install GeeseFS:

    wget https://github.com/yandex-cloud/geesefs/releases/latest/download/geesefs-linux-amd64
    chmod a+x geesefs-linux-amd64
    sudo cp geesefs-linux-amd64 /usr/bin/geesefs
    
  1. Make sure the FUSE utilities are installed in the distribution:

    yum list installed | grep fuse
    

    Warning

    Many Linux distributions have the utilities for working with FUSE pre-installed by default. Reinstalling or deleting them may lead to OS failures.

  2. If the FUSE utilities are not installed, run this command:

    sudo yum install fuse
    
  3. Download and install GeeseFS:

    wget https://github.com/yandex-cloud/geesefs/releases/latest/download/geesefs-linux-amd64
    chmod a+x geesefs-linux-amd64
    sudo cp geesefs-linux-amd64 /usr/bin/geesefs
    
  1. Install the macFUSE package.

  2. Enable support for third-party kernel extensions. This step is only required the first time you use macFUSE on an Apple Silicon Mac.

  3. Allow loading the macFUSE kernel extension (Apple Silicon and Intel Mac).

    For more information on installing macFUSE, see this installation guide in the macFUSE GitHub repository.

  4. Download and install GeeseFS:

    platform='arm64'
    if [[ $(uname -m) == 'x86_64' ]]; then platform='amd64'; fi
    wget https://github.com/yandex-cloud/geesefs/releases/latest/download/geesefs-mac-$platform
    chmod a+x geesefs-mac-$platform
    sudo cp geesefs-mac-$platform /usr/local/bin/geesefs
    
  1. Download and install WinFSP.

  2. Download the geesefs-win-x64.exe file.

  3. Rename geesefs-win-x64.exe to geesefs.exe for convenience.

  4. Create a folder named geesefs and move the geesefs.exe file there.

  5. Add geesefs to the PATH variable:

    1. Click Start and type Change system environment variables in the Windows search bar.
    2. Click Environment Variables... at the bottom right.
    3. In the window that opens, find the PATH parameter and click Edit.
    4. Add your folder path to the list.
    5. Click OK.

You can also build GeeseFS yourself using its source code. For more information, see this guide in the GeeseFS repository on GitHub.

Get authenticated in GeeseFS

GeeseFS uses the static access key for Object Storage that you created earlier. You can provide it in one of the following ways:

Linux/macOS
Windows
  • Using the credentials file, which you need to put into the ~/.aws/ folder:

    1. Create a directory:

      mkdir ~/.aws
      
    2. Create a file named credentials with the following contents:

      [default]
      aws_access_key_id = <key_ID>
      aws_secret_access_key = <secret_key>
      

      If the key file is located elsewhere, specify its path in the --shared-config parameter when mounting the bucket:

      geesefs \
        --shared-config <path_to_key_file> \
        <bucket_name> <mount_point>
      

      The key file must have the same structure as ~/.aws/credentials.

  • Using environment variables:

    export AWS_ACCESS_KEY_ID=<key_ID>
    export AWS_SECRET_ACCESS_KEY=<secret_key>
    

Note

You can run the geesefs command with superuser privileges (sudo). In this case, make sure to send information about the key either in the --shared-config parameter or using environment variables.
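The credentials file described above can also be created non-interactively. Here is a minimal sketch; it writes to a scratch directory rather than the real ~/.aws/ so nothing on your machine is overwritten, and the key values are placeholders you would replace with your actual static key:

```shell
# Create an AWS-style credentials file with a heredoc.
# Demo uses a temporary directory; in practice the target is ~/.aws/credentials.
aws_dir="$(mktemp -d)/.aws"
mkdir -p "$aws_dir"
cat > "$aws_dir/credentials" <<'EOF'
[default]
aws_access_key_id = <key_ID>
aws_secret_access_key = <secret_key>
EOF
# Restrict access: the file contains a secret.
chmod 600 "$aws_dir/credentials"
cat "$aws_dir/credentials"
```

To write the real file, replace `$aws_dir` with `~/.aws` and substitute your key ID and secret.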

  • Using the credentials file, which you need to put into the C:\Users\<current_user>\.aws\ folder:

    [default]
    aws_access_key_id = <key_ID>
    aws_secret_access_key = <secret_key>
    

    If the key file is located elsewhere, specify its path in the --shared-config parameter when mounting the bucket:

    geesefs ^
      --shared-config <path_to_key_file> ^
      <bucket_name> <mount_point>
    

    The key file must have the same structure as ~/.aws/credentials.

    Specify an existing folder as the mount point.

  • Using environment variables:

    set AWS_ACCESS_KEY_ID=<key_ID>
    set AWS_SECRET_ACCESS_KEY=<secret_key>
    

When using GeeseFS on a Compute Cloud VM that has a linked service account, you can enable simplified authentication that does not require a static access key. To do this, use the --iam parameter when mounting the bucket.

Mount a bucket

Select the folder or disk where you want to mount the bucket. Make sure you have sufficient permissions to perform this operation.

When mounting a bucket, you can also configure GeeseFS settings for system performance and object access permissions. To view the list of options and their descriptions, run geesefs --help.

  • For one-time bucket mounting:

    Linux/macOS
    Windows
    1. Create a folder for mounting:

      mkdir <mount_point>
      
    2. Mount the bucket:

      geesefs <bucket_name> <mount_point>
      

      Specify an existing folder as the mount point.

    Mount the bucket:

    geesefs <bucket_name> <mount_point>
    

    As the mount point, specify the name of the new folder that will be created when you mount the bucket. You cannot specify the name of an existing folder.

    Result:

    2025/10/06 21:14:27.488504 main.INFO File system has been successfully mounted.
    The service geesefs has been started.
    
  • To automatically mount a bucket at system startup:

    Linux
    macOS
    Windows
    1. Create a folder for automatic mounting:

      mkdir <mount_point>
      
    2. Open /etc/fuse.conf:

      sudo nano /etc/fuse.conf
      
    3. Add the following line to it:

      user_allow_other
      
    4. Open /etc/fstab:

      sudo nano /etc/fstab
      
    5. Add the following line to the /etc/fstab file:

      <bucket_name>    /home/<username>/<mount_point>    fuse.geesefs    _netdev,allow_other,--file-mode=0666,--dir-mode=0777,--shared-config=/home/<username>/.aws/credentials    0   0
      

      If you had created the .aws/credentials file for the root user, you do not need to specify the --shared-config parameter.

      Note

      For the bucket to be mounted correctly, provide the full absolute path to the mount point and to the key file without ~, e.g., /home/user/.

    6. Reboot your PC and check that the bucket has been mounted to the specified folder.

    To disable automounting, remove the line with the bucket name from the /etc/fstab file.

    1. Create a folder for automatic mounting:

      mkdir <mount_point>
      
    2. Create a file named com.geesefs.automount.plist with the autorun agent configuration:

      nano /Users/<username>/Library/LaunchAgents/com.geesefs.automount.plist
      
    3. Set the agent configuration by specifying the name of the bucket and the absolute path to the mount point:

      <?xml version="1.0" encoding="UTF-8"?>
      <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
      <plist version="1.0">
      <dict>
          <key>Label</key>
          <string>com.geesefs.automount</string>
          <key>ProgramArguments</key>
          <array>
              <string>/usr/local/bin/geesefs</string>
              <string><bucket_name></string>
              <string><absolute_path_to_mount_point></string>
          </array>
          <key>RunAtLoad</key>
          <true/>
          <key>KeepAlive</key>
          <dict>
              <key>NetworkState</key>
              <true/>
          </dict>
      </dict>
      </plist>
      

      Note

      Specify an existing empty folder as the mount point.

      For the bucket to be mounted correctly, provide the full absolute path to the mount point and to the key file without ~, e.g., /home/user/.

    4. Enable the agent you created:

      launchctl load /Users/<username>/Library/LaunchAgents/com.geesefs.automount.plist
      
    5. Reboot your PC and check that the bucket has been mounted to the specified folder.

    To disable agent autorun, use this command:

    launchctl unload /Users/<username>/Library/LaunchAgents/com.geesefs.automount.plist
    

    Create a Windows service that will automatically run at system startup:

    1. Run CMD as an administrator.

    2. Run this command:

      sc create <service_name> ^
        binPath= "<command_for_mounting>" ^
        DisplayName= "<service_name>" ^
        type= own ^
        start= auto
      

      Where binPath is the path to the geesefs.exe file with the required mounting parameters. Here is an example: C:\geesefs\geesefs.exe <bucket_name> <mount_point>. As the mount point, specify the name of the new folder that will be created when you mount the bucket. You cannot specify the name of an existing folder.

      Result:

      [SC] CreateService: Success
      
    3. Click Start and start typing Services in the Windows search bar. Run the Services application as an administrator.

    4. In the window that opens, find the service you created earlier, right-click it, and select Properties.

    5. On the Log on tab, select This account and specify your Windows account name and password.

      If necessary, click Browse → Advanced → Search to find the user you need on the computer.

    6. Click OK.

    To delete the created service, open CMD as an administrator and run the following command:

    sc delete <service_name>
    

    Result:

    [SC] DeleteService: Success
    

Synchronize the local folder with the bucket

As the final backup configuration step, set up manual or automatic synchronization between the local folder and the bucket.

Manual synchronization

Linux
Windows

For a one-off synchronization, run this command:

rsync -av \
  --no-owner \
  --no-group \
  --no-perms \
  --no-times \
  --delete \
  <local_folder_path>/ \
  <mount_folder_path>/

Where --delete is a flag to delete files from the bucket when they are deleted from the local folder.

Note

Specify the full absolute path to folders without using ~, e.g., /home/user/.

This command copies all contents from your local folder to the bucket using the folder mounted with GeeseFS. It only moves new and modified files.

The GeeseFS folder is not a proper POSIX-compliant file system, so ownership, permissions, and timestamps are not copied.

For a one-off synchronization, use the command line (CMD) to run the following:

robocopy "<local_folder_path>" "<mount_folder_path>" /MIR

Where /MIR indicates full folder synchronization, including deletion of missing files.

Note

Specify the full absolute path to folders without using ~, e.g., /home/user/.

Result:

-------------------------------------------------------------------------------
   ROBOCOPY     ::     Robust File Copy for Windows
-------------------------------------------------------------------------------

     Start : October 6, 2025, 21:16:36
    Source : C:\Users\username\geesefs\local\
    Target : C:\Users\username\geesefs\mount\

     Files : *.*

Parameters : *.* /S /E /DCOPY:DA /COPY:DAT /PURGE /MIR /R:1000000 /W:30

------------------------------------------------------------------------------

                           1    C:\Users\username\geesefs\local\
100%        New file                13793        image.PNG

------------------------------------------------------------------------------

                Total    Copied   Skipped  Mismatch    FAILED    Extras
  Folders :         1         0         1         0         0         0
    Files :         1         1         0         0         0         0
    Bytes :     13.4 k    13.4 k         0         0         0         0
     Time :    0:00:00   0:00:00                       0:00:00   0:00:00

    Speed :         13793000 Bytes/sec.
    Speed :            789.241 MB/min.
    Ended : October 6, 2025, 21:16:36

Tip

To avoid running the command manually each time, you can create a BAT file:

  1. Open Notepad and add the following contents:

    @echo off
    robocopy "<local_folder_path>" "<mount_folder_path>" /MIR
    
  2. Save the file, e.g., as sync_to_s3.bat.

  3. To synchronize folders, run the BAT file.

Automatic synchronization

Linux
Windows

To automatically synchronize your local folder with the GeeseFS folder:

  1. Make sure the user who will schedule the cron job has access to both folders.

  2. Open the current user's scheduler file by running this command:

    crontab -e
    
  3. Add a line to the file to trigger autosync, e.g., every 10 minutes:

    */10 * * * * rsync -av --no-owner --no-group --no-perms --no-times --delete <local_folder_path>/ <mount_folder_path>/ --log-file=<log_file_path>
    

    Where:

    • --delete: Flag to delete files from the bucket when they are deleted from the local folder.
    • --log-file: Optional parameter for writing logs. Specify the full path.

    Note

    Specify the full absolute path to folders without using ~, e.g., /home/user/.

The job will run at the specified frequency and synchronize the folders.
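The `*/10 * * * *` prefix is standard cron schedule syntax, so other intervals drop in without changing the rsync part of the line. A couple of hedged examples (the paths are the same placeholders as above):

```shell
# Daily at 02:00
0 2 * * * rsync -av --no-owner --no-group --no-perms --no-times --delete <local_folder_path>/ <mount_folder_path>/

# Every Sunday at 03:30
30 3 * * 0 rsync -av --no-owner --no-group --no-perms --no-times --delete <local_folder_path>/ <mount_folder_path>/
```

The five fields are minute, hour, day of month, month, and day of week; a less frequent schedule reduces the number of data operations billed on the bucket.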

The command in the cron job copies all contents from your local folder to the bucket using the folder mounted with GeeseFS. It only moves new and modified files.

The GeeseFS folder is not a proper POSIX-compliant file system, so ownership, permissions, and timestamps are not copied.

For automatic synchronization, set up a task in Task Scheduler:

  1. Open the Windows Task Scheduler:

    • Start Menu → Task Scheduler.
    • Or start it in Run → taskschd.msc.
  2. Click Create task....

  3. In the Actions tab, add a new action by specifying the absolute path to the executable script, e.g., a BAT file, under Program or script.

  4. In the Triggers tab, add a schedule.

  5. Click OK.

How to delete the resources you created

To stop paying for the resources you created:

  1. Delete the objects from the bucket.
  2. Delete the bucket.

© 2026 Direct Cursus Technology L.L.C.