Automatically copying objects from one Object Storage bucket to another

Written by
Yandex Cloud
Updated at May 7, 2025
  • Get your cloud ready
    • Required paid resources
  • Create service accounts
  • Create a static key
  • Create a secret
  • Create Object Storage buckets
  • Prepare a ZIP archive with the function code
  • Create a function
  • Create a trigger
  • Test the function
  • How to delete the resources you created

Configure automatic object copying from one Object Storage bucket to another. Objects are copied by a Cloud Functions function that a trigger invokes whenever a new object is added to the main bucket.

To set up object copying:

  1. Get your cloud ready.
  2. Create service accounts.
  3. Create a static key.
  4. Create a Yandex Lockbox secret.
  5. Create Yandex Object Storage buckets.
  6. Prepare a ZIP archive with the function code.
  7. Create a Yandex Cloud Functions function.
  8. Create a trigger.
  9. Test the function.

If you no longer need the resources you created, delete them.

Get your cloud ready

Sign up for Yandex Cloud and create a billing account:

  1. Navigate to the management console and log in to Yandex Cloud or register a new account.
  2. On the Yandex Cloud Billing page, make sure you have a billing account linked and it has the ACTIVE or TRIAL_ACTIVE status. If you do not have a billing account, create one and link a cloud to it.

If you have an active billing account, you can navigate to the cloud page to create or select a folder for your infrastructure to operate in.

Learn more about clouds and folders.

Required paid resources

The cost of resources includes:

  • Fee for storing data in a bucket (see Yandex Object Storage pricing).
  • Fee for the number of function calls, computing resources allocated to executing the function, and outgoing traffic (see Yandex Cloud Functions pricing).
  • Fee for storing secrets (see Yandex Lockbox pricing).

Create service accounts

Create two service accounts: s3-copy-fn with the storage.uploader, storage.viewer, and lockbox.payloadViewer roles to operate the function, and s3-copy-trigger with the functions.functionInvoker role to invoke it.

Management console
Yandex Cloud CLI
Terraform
API
  1. In the management console, select the folder where you want to create a service account.
  2. From the list of services, select Identity and Access Management.
  3. Click Create service account.
  4. Enter a name for the service account: s3-copy-fn.
  5. Click Add role and select the storage.uploader, storage.viewer, and lockbox.payloadViewer roles.
  6. Click Create.
  7. Repeat the previous steps and create a service account named s3-copy-trigger with the functions.functionInvoker role. This service account will invoke the function.

If you do not have the Yandex Cloud CLI yet, install and initialize it.

The folder specified when creating the CLI profile is used by default. To change the default folder, use the yc config set folder-id <folder_ID> command. You can specify a different folder using the --folder-name or --folder-id parameter.

  1. Create the s3-copy-fn service account:

    yc iam service-account create --name s3-copy-fn
    

    Result:

    id: nfersamh4sjq********
    folder_id: b1gc1t4cb638********
    created_at: "2023-03-21T10:36:29.726397755Z"
    name: s3-copy-fn
    

    Save the id of the s3-copy-fn service account and the folder where you created it (folder_id).

  2. Assign the storage.uploader, storage.viewer, and lockbox.payloadViewer roles to the service account:

    yc resource-manager folder add-access-binding <folder_ID> \
      --role storage.uploader \
      --subject serviceAccount:<service_account_ID>
    
    yc resource-manager folder add-access-binding <folder_ID> \
      --role storage.viewer \
      --subject serviceAccount:<service_account_ID>
    
    yc resource-manager folder add-access-binding <folder_ID> \
      --role lockbox.payloadViewer \
      --subject serviceAccount:<service_account_ID>
    

    Result:

    done (1s)
    
  3. Create the s3-copy-trigger service account:

    yc iam service-account create --name s3-copy-trigger
    

    Save the IDs of the s3-copy-trigger service account (id) and the folder where you created it (folder_id).

  4. Assign the functions.functionInvoker role for the folder to the service account:

    yc resource-manager folder add-access-binding <folder_ID> \
      --role functions.functionInvoker \
      --subject serviceAccount:<service_account_ID>
    

If you do not have Terraform yet, install it and configure its Yandex Cloud provider.

  1. In the configuration file, describe the service account parameters:

    // Service account to operate the function
    resource "yandex_iam_service_account" "s3-copy-fn" {
      name        = "s3-copy-fn"
      folder_id   = "<folder_ID>"
    }
    
    resource "yandex_resourcemanager_folder_iam_member" "uploader" {
      folder_id = "<folder_ID>"
      role      = "storage.uploader"
      member    = "serviceAccount:${yandex_iam_service_account.s3-copy-fn.id}"
    }
    
    resource "yandex_resourcemanager_folder_iam_member" "viewer" {
      folder_id = "<folder_ID>"
      role      = "storage.viewer"
      member    = "serviceAccount:${yandex_iam_service_account.s3-copy-fn.id}"
    }
    
    resource "yandex_resourcemanager_folder_iam_member" "payloadViewer" {
      folder_id = "<folder_ID>"
      role      = "lockbox.payloadViewer"
      member    = "serviceAccount:${yandex_iam_service_account.s3-copy-fn.id}"
    }
    
    // Service account to invoke the function
    resource "yandex_iam_service_account" "s3-copy-trigger" {
      name        = "s3-copy-trigger"
      folder_id   = "<folder_ID>"
    }
    
    resource "yandex_resourcemanager_folder_iam_member" "functionInvoker" {
      folder_id = "<folder_ID>"
      role      = "functions.functionInvoker"
      member    = "serviceAccount:${yandex_iam_service_account.s3-copy-trigger.id}"
    }
    

    Where:

    • name: Service account name. This is a required parameter.
    • folder_id: Folder ID. This is an optional parameter; by default, the value specified in the provider settings is used.
    • role: Role to assign.

    For more information about the yandex_iam_service_account resource parameters in Terraform, see the relevant Terraform documentation.

  2. Make sure the configuration files are correct.

    1. In the command line, navigate to the directory where you created the configuration file.

    2. Run a check using this command:

      terraform plan
      

    If you described the configuration correctly, the terminal will display information about the service account. If the configuration contains any errors, Terraform will point them out.

  3. Deploy the cloud resources.

    1. If the configuration does not contain any errors, run this command:

      terraform apply
      
    2. Confirm creating the service accounts by typing yes in the terminal and pressing Enter.

      This will create the service accounts. You can check the new service accounts using the management console or this CLI command:

      yc iam service-account list
      

To create a service account, use the create REST API method for the ServiceAccount resource or the ServiceAccountService/Create gRPC API call.

To assign the roles for the folder to the service account, use the setAccessBindings method for the ServiceAccount resource or the ServiceAccountService/SetAccessBindings gRPC API call.
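
For reference, a minimal curl sketch of the service account REST call is shown below; the endpoint, request fields, and the use of yc iam create-token for authorization are assumptions to verify against the API reference:

curl -X POST \
  -H "Authorization: Bearer $(yc iam create-token)" \
  -H "Content-Type: application/json" \
  -d '{"folderId": "<folder_ID>", "name": "s3-copy-fn"}' \
  https://iam.api.cloud.yandex.net/iam/v1/serviceAccounts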

Create a static key

Create a static access key for the s3-copy-fn service account.

Management console
Yandex Cloud CLI
Terraform
API
  1. In the management console, select the folder with the service account.
  2. From the list of services, select Identity and Access Management.
  3. In the left-hand panel, select Service accounts and then, the s3-copy-fn service account.
  4. In the top panel, click Create new key.
  5. Select Create static access key.
  6. Specify the key description and click Create.
  7. Save the ID and secret key.
  1. Run this command:

    yc iam access-key create --service-account-name s3-copy-fn
    

    Result:

    access_key:
      id: aje6t3vsbj8l********
      service_account_id: ajepg0mjt06s********
      created_at: "2023-03-21T14:37:51Z"
      key_id: 0n8X6WY6S24********
    secret: JyTRFdqw8t1kh2-OJNz4JX5ZTz9Dj1rI********
    
  2. Save the ID (key_id) and secret key (secret). You will not be able to get the secret key again.

  1. In the configuration file, describe the key parameters:

    resource "yandex_iam_service_account_static_access_key" "sa-static-key" {
      service_account_id = "<service_account_ID>"
    }
    

    Where service_account_id is the s3-copy-fn service account ID.

    For more information about the yandex_iam_service_account_static_access_key resource parameters in Terraform, see the relevant Terraform documentation.

  2. Make sure the configuration files are correct.

    1. In the command line, navigate to the directory where you created the configuration file.

    2. Run a check using this command:

      terraform plan
      

    If you described the configuration correctly, the terminal will display a list of the resources being created and their parameters. If the configuration contains any errors, Terraform will point them out.

  3. Deploy the cloud resources.

    1. If the configuration does not contain any errors, run this command:

      terraform apply
      
    2. Confirm creating the static access key by typing yes in the terminal and pressing Enter.

      If there were any errors when creating the key, Terraform will point them out.
      If the key was created successfully, Terraform will store it in its state without displaying it to the user. The terminal will only show the ID of the new key.

      You can check the new service account key in the management console or using the CLI command:

      yc iam access-key list --service-account-name=s3-copy-fn
      

To create an access key, use the create REST API method for the AccessKey resource or the AccessKeyService/Create gRPC API call.
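
If you plan to run the AWS CLI commands used later in this tutorial, you can also store the static key in a local AWS CLI profile. This is a minimal sketch; ru-central1 is the default Yandex Cloud region:

aws configure set aws_access_key_id <key_ID>
aws configure set aws_secret_access_key <secret_key>
aws configure set region ru-central1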

Create a secret

Create a Yandex Lockbox secret to store your static access key.

Management console
Yandex Cloud CLI
Terraform
API
  1. In the management console, select the folder where you want to create a secret.

  2. From the list of services, select Lockbox.

  3. Click Create secret.

  4. In the Name field, specify the secret name: s3-static-key.

  5. Under Secret data:

    1. Select the Custom secret type.

    2. Add the key ID value:

      • In the Key field, specify: key_id.
      • In the Value field, specify the key ID you got earlier.
    3. Click Add key/value.

    4. Add the secret key value:

      • In the Key field, specify: secret.
      • In the Value field, specify the secret key value you got earlier.
  6. Click Create.

To create a secret, run this command:

yc lockbox secret create --name s3-static-key \
  --payload "[{'key': 'key_id', 'text_value': '<key_ID>'},{'key': 'secret', 'text_value': '<private_key_value>'}]"

Result:

id: e6q2ad0j9b55********
folder_id: b1gktjk2rg49********
created_at: "2021-11-08T19:23:00.383Z"
name: s3-static-key
status: ACTIVE
current_version:
  id: g6q4fn3b6okj********
  secret_id: e6e2ei4u9b55********
  created_at: "2023-03-21T19:23:00.383Z"
  status: ACTIVE
  payload_entry_keys:
    - key_id
    - secret
  1. In the configuration file, describe the secret parameters:

    resource "yandex_lockbox_secret" "my_secret" {
      name = "s3-static-key"
    }
    
    resource "yandex_lockbox_secret_version" "my_version" {
      secret_id = yandex_lockbox_secret.my_secret.id
      entries {
        key        = "key_id"
        text_value = "<key_ID>"
      }
      entries {
        key        = "secret"
        text_value = "<private_key_value>"
      }
    }
    

    Where:

    • name: Secret name
    • key: Key name
    • text_value: Key value

    Note

    We recommend using the yandex_lockbox_secret_version_hashed resource: it stores values in the Terraform state in hashed form. The yandex_lockbox_secret_version resource is still supported.

    For more information about yandex_lockbox_secret_version_hashed, see the relevant provider documentation.

    Learn more about the properties of Terraform resources in the Terraform documentation:

    • yandex_lockbox_secret
    • yandex_lockbox_secret_version.
  2. Make sure the configuration files are correct.

    1. In the command line, navigate to the directory where you created the configuration file.

    2. Run a check using this command:

      terraform plan
      

    If you described the configuration correctly, the terminal will display a list of the resources being created and their parameters. If the configuration contains any errors, Terraform will point them out.

  3. Deploy the cloud resources.

    1. If the configuration does not contain any errors, run this command:

      terraform apply
      
    2. Confirm creating the secret by typing yes in the terminal and pressing Enter.

To create a secret, use the create REST API method for the Secret resource or the SecretService/Create gRPC API call.

Create Object Storage buckets

Create two buckets: a main one to store files and a backup one to copy the main bucket's files to.

Management console
AWS CLI
Terraform
API
  1. In the management console, select the folder where you want to create buckets.

  2. From the list of services, select Object Storage.

  3. Create the main bucket:

    1. Click Create bucket.
    2. In the Name field, enter a name for the main bucket.
    3. In the Object read access, Object listing access, and Read access to settings fields, select Restricted.
    4. Click Create bucket.
  4. Similarly, create the backup bucket.

If you do not have the AWS CLI yet, install and configure it.

Create the main and backup buckets:

aws --endpoint-url https://storage.yandexcloud.net \
  s3 mb s3://<main_bucket_name>

aws --endpoint-url https://storage.yandexcloud.net \
  s3 mb s3://<backup_bucket_name>

Result:

make_bucket: <main_bucket_name>
make_bucket: <backup_bucket_name>

Note

Terraform uses a service account to interact with Object Storage. Assign the required role, e.g., storage.admin, to this service account for the folder where you are going to create resources.

  1. Describe the parameters for creating a service account and access key in the configuration file:

    ...
    // Creating a service account
    resource "yandex_iam_service_account" "sa" {
      name = "<service_account_name>"
    }
    
    // Assigning a role to a service account
    resource "yandex_resourcemanager_folder_iam_member" "sa-admin" {
      folder_id = "<folder_ID>"
      role      = "storage.admin"
      member    = "serviceAccount:${yandex_iam_service_account.sa.id}"
    }
    
    // Creating a static access key
    resource "yandex_iam_service_account_static_access_key" "sa-static-key" {
      service_account_id = yandex_iam_service_account.sa.id
      description        = "static access key for object storage"
    }
    
  2. In the configuration file, describe the properties of the main and backup buckets:

    resource "yandex_storage_bucket" "main-bucket" {
      access_key = yandex_iam_service_account_static_access_key.sa-static-key.access_key
      secret_key = yandex_iam_service_account_static_access_key.sa-static-key.secret_key
      bucket     = "<main_bucket_name>"
    }
    
    resource "yandex_storage_bucket" "reserve-bucket" {
      access_key = yandex_iam_service_account_static_access_key.sa-static-key.access_key
      secret_key = yandex_iam_service_account_static_access_key.sa-static-key.secret_key
      bucket     = "<backup_bucket_name>"
    }
    

    For more information about the yandex_storage_bucket resource, see the Terraform documentation.

  3. Make sure the configuration files are correct.

    1. In the command line, navigate to the directory where you created the configuration file.

    2. Run a check using this command:

      terraform plan
      

    If you described the configuration correctly, the terminal will display a list of the resources being created and their parameters. If the configuration contains any errors, Terraform will point them out.

  4. Deploy the cloud resources.

    1. If the configuration does not contain any errors, run this command:

      terraform apply
      
    2. Confirm creating the buckets by typing yes in the terminal and pressing Enter.

To create a bucket, use the create REST API method for the Bucket resource or the BucketService/Create gRPC API call.

Prepare a ZIP archive with the function code

  1. Save the following code to a file named handler.sh:

    set -e
    (
      # Read the trigger event from stdin and process every message in it
      cat | jq -c '.messages[]' | while read message;
      do
        # Bucket and object key of the newly uploaded object
        SRC_BUCKET=$(echo "$message" | jq -r .details.bucket_id)
        SRC_OBJECT=$(echo "$message" | jq -r .details.object_id)
        # Copy the object to the backup bucket
        aws --endpoint-url="$S3_ENDPOINT" s3 cp "s3://$SRC_BUCKET/$SRC_OBJECT" "s3://$DST_BUCKET/$SRC_OBJECT"
      done;
    ) 1>&2
    
  2. Add the handler.sh file to the handler-sh.zip archive.
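
    For example, on Linux or macOS you can create the archive with the zip utility:

    zip handler-sh.zip handler.sh

    Optionally, you can smoke-test the handler locally before deploying it. The sample event below is an assumption based on the fields the handler reads, and the command requires jq and AWS CLI credentials with access to both buckets:

    echo '{"messages":[{"details":{"bucket_id":"<main_bucket_name>","object_id":"example.txt"}}]}' \
      | S3_ENDPOINT=https://storage.yandexcloud.net DST_BUCKET=<backup_bucket_name> bash handler.sh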

Create a function

Create a function that will copy a new object to the backup bucket once you add it to the main one.

Management console
Yandex Cloud CLI
Terraform
API
  1. In the management console, select the folder where you want to create a function.

  2. From the list of services, select Cloud Functions.

  3. Create a function:

    1. Click Create function.
    2. Specify the function name: copy-function.
    3. Click Create.
  4. Create a function version:

    1. Select the Bash runtime environment, disable the Add files with code examples option, and click Continue.

    2. Specify the ZIP archive upload method and select the handler-sh.zip archive created in the previous step.

    3. Specify the entry point: handler.sh.

    4. Under Parameters, specify:

      • Timeout: 600

      • Memory: 128 MB

      • Service account: s3-copy-fn

      • Environment variables:

        • S3_ENDPOINT: https://storage.yandexcloud.net
        • DST_BUCKET: Name of the backup bucket to copy files to
      • Lockbox secrets:

        • AWS_ACCESS_KEY_ID: Select the s3-static-key secret, its latest version, and the key_id key.
        • AWS_SECRET_ACCESS_KEY: Select the s3-static-key secret, its latest version, and the secret key.
    5. Click Save changes.

  1. Create a function named copy-function:

    yc serverless function create --name=copy-function
    

    Result:

    id: b09bhaokchn9********
    folder_id: <folder_ID>
    created_at: "2024-10-21T20:40:03.451Z"
    name: copy-function
    http_invoke_url: https://functions.yandexcloud.net/b09bhaokchn9********
    status: ACTIVE
    
  2. Create a version of copy-function:

    yc serverless function version create \
      --function-name copy-function \
      --memory=128m \
      --execution-timeout=600s \
      --runtime=bash \
      --entrypoint=handler.sh \
      --service-account-id=<service_account_ID> \
      --environment DST_BUCKET=<backup_bucket_name> \
      --environment S3_ENDPOINT=https://storage.yandexcloud.net \
      --secret name=s3-static-key,key=key_id,environment-variable=AWS_ACCESS_KEY_ID \
      --secret name=s3-static-key,key=secret,environment-variable=AWS_SECRET_ACCESS_KEY \
      --source-path=./handler-sh.zip
    

    Where:

    • --function-name: Name of the function.
    • --memory: Amount of RAM.
    • --execution-timeout: Maximum function running time before the timeout is exceeded.
    • --runtime: Runtime environment.
    • --entrypoint: Entry point.
    • --service-account-id: s3-copy-fn service account ID.
    • --environment: Environment variables.
    • --secret: Secret with parts of the static access key.
    • --source-path: Path to the handler-sh.zip archive.

    Result:

    done (1s)
    id: d4e394pt4nhf********
    function_id: d4efnkn79m7n********
    created_at: "2024-10-21T20:41:01.345Z"
    runtime: bash
    entrypoint: handler.sh
    resources:
      memory: "134217728"
    execution_timeout: 600s
    service_account_id: ajelprpohp7r********
    image_size: "4096"
    status: ACTIVE
    tags:
      - $latest
    environment:
      DST_BUCKET: <backup_bucket_name>
      S3_ENDPOINT: https://storage.yandexcloud.net
    secrets:
      - id: e6qo2oprlmgn********
        version_id: e6q6i1qt0ae8********
        key: key_id
        environment_variable: AWS_ACCESS_KEY_ID
      - id: e6qo2oprlmgn********
        version_id: e6q6i1qt0ae8********
        key: secret
        environment_variable: AWS_SECRET_ACCESS_KEY
    log_options:
      folder_id: b1g681qpemb4********
    concurrency: "1"
    
  1. In the configuration file, describe the function parameters:

    resource "yandex_function" "copy-function" {
      name               = "copy-function"
      user_hash          = "first function"
      runtime            = "bash"
      entrypoint         = "handler.sh"
      memory             = "128"
      execution_timeout  = "600"
      service_account_id = "<service_account_ID>"
      environment = {
        DST_BUCKET  = "<backup_bucket_name>"
        S3_ENDPOINT = "https://storage.yandexcloud.net"
      }
      secrets {
        id                   = "<secret_ID>"
        version_id           = "<secret_version_ID>"
        key                  = "key_id"
        environment_variable = "AWS_ACCESS_KEY_ID"
      }
      secrets {
        id                   = "<secret_ID>"
        version_id           = "<secret_version_ID>"
        key                  = "secret"
        environment_variable = "AWS_SECRET_ACCESS_KEY"
      }
      content {
        zip_filename = "./handler-sh.zip"
      }
    }
    

    Where:

    • name: Function name.
    • user_hash: Any string to identify the function version.
    • runtime: Function runtime environment.
    • entrypoint: Entry point.
    • memory: Amount of memory allocated for the function, in MB.
    • execution_timeout: Function running timeout.
    • service_account_id: s3-copy-fn service account ID.
    • environment: Environment variables.
    • secrets: Secret with parts of the static access key.
    • content: Path to the handler-sh.zip archive with the function source code.

    For more information about the yandex_function resource parameters, see the relevant Terraform documentation.

  2. Make sure the configuration files are correct.

    1. In the command line, navigate to the directory where you created the configuration file.

    2. Run a check using this command:

      terraform plan
      

    If you described the configuration correctly, the terminal will display a list of the resources being created and their parameters. If the configuration contains any errors, Terraform will point them out.

  3. Deploy the cloud resources.

    1. If the configuration does not contain any errors, run this command:

      terraform apply
      
    2. Confirm creating the function by typing yes in the terminal and pressing Enter.

To create a function, use the create REST API method for the Function resource or the FunctionService/Create gRPC API call.

To create a function version, use the createVersion REST API method for the Function resource or the FunctionService/CreateVersion gRPC API call.

Create a trigger

Create a trigger for Object Storage that will invoke copy-function when you create a new object in the main bucket.

Management console
Yandex Cloud CLI
Terraform
API
  1. In the management console, select the folder where you want to create a trigger.

  2. From the list of services, select Cloud Functions.

  3. In the left-hand panel, select Triggers.

  4. Click Create trigger.

  5. Under Basic settings:

    • Specify a name for the trigger: bucket-to-bucket-copying.
    • In the Type field, select Object Storage.
    • In the Launched resource field, select Function.
  6. Under Object Storage settings:

    • In the Bucket field, select the main bucket.
    • In the Event types field, select Create object.
  7. Under Function settings:

    • In the Function field, select copy-function.
    • In the Service account field, select the s3-copy-trigger service account.
  8. Click Create trigger.

Run this command:

yc serverless trigger create object-storage \
  --name bucket-to-bucket-copying \
  --bucket-id <main_bucket_name> \
  --events 'create-object' \
  --invoke-function-name copy-function \
  --invoke-function-service-account-name s3-copy-trigger

Where:

  • --name: Trigger name.
  • --bucket-id: Name of the main bucket.
  • --events: Events activating the trigger.
  • --invoke-function-name: Name of the function being invoked.
  • --invoke-function-service-account-name: Name of the service account to use for invoking the function.

Result:

id: a1s92agr8mpg********
folder_id: b1g88tflru0e********
created_at: "2024-10-21T21:04:01.866959640Z"
name: bucket-to-bucket-copying
rule:
  object_storage:
    event_type:
      - OBJECT_STORAGE_EVENT_TYPE_CREATE_OBJECT
    bucket_id: <main_bucket_name>
    batch_settings:
      size: "1"
      cutoff: 1s
    invoke_function:
      function_id: d4eofc7n0m03********
      function_tag: $latest
      service_account_id: aje3932acd0c********
status: ACTIVE
  1. In the configuration file, describe the trigger parameters:

    resource "yandex_function_trigger" "my_trigger" {
      name        = "bucket-to-bucket-copying"
      object_storage {
          bucket_id = "<main_bucket_name>"
          create    = true
      }
      function {
        id                 = "<function_ID>"
        service_account_id = "<service_account_ID>"
      }
    }
    

    Where:

    • name: Trigger name.
    • object_storage: Storage parameters:
      • bucket_id: Name of the main bucket.
      • create: The trigger will invoke the function when a new object is created in the bucket.
    • function: Settings for the function which the trigger will activate:
      • id: copy-function ID.
      • service_account_id: s3-copy-trigger service account ID.

    For more information about resource parameters in Terraform, see the relevant Terraform documentation.

  2. Make sure the configuration files are correct.

    1. In the command line, navigate to the directory where you created the configuration file.

    2. Run a check using this command:

      terraform plan
      

    If you described the configuration correctly, the terminal will display a list of the resources being created and their parameters. If the configuration contains any errors, Terraform will point them out.

  3. Deploy the cloud resources.

    1. If the configuration does not contain any errors, run this command:

      terraform apply
      
    2. Confirm creating the trigger by typing yes in the terminal and pressing Enter.

To create a trigger for Object Storage, use the create method for the Trigger resource or the TriggerService/Create gRPC API call.

Test the function

Management console
  1. In the management console, navigate to the folder where the main bucket is located.
  2. From the list of services, select Object Storage.
  3. Click the name of the main bucket.
  4. In the top-right corner, click Upload.
  5. In the window that opens, select the required files and click Open.
  6. The management console will display all objects selected for upload. Click Upload.
  7. Refresh the page.
  8. Navigate to the backup bucket and make sure it contains the files you added.
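
Alternatively, if you configured the AWS CLI, you can run the test from the command line; the file name here is just an example:

aws --endpoint-url https://storage.yandexcloud.net \
  s3 cp ./example.txt s3://<main_bucket_name>/example.txt

aws --endpoint-url https://storage.yandexcloud.net \
  s3 ls s3://<backup_bucket_name>/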

How to delete the resources you created

To stop paying for the resources you created:

  1. Delete the objects from the buckets.
  2. Delete the buckets.
  3. Delete the bucket-to-bucket-copying trigger.
  4. Delete copy-function.
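
If you created these resources from the command line, a cleanup sketch using the names from this tutorial might look like this (aws s3 rb --force deletes the objects together with the bucket):

aws --endpoint-url https://storage.yandexcloud.net \
  s3 rb s3://<main_bucket_name> --force

aws --endpoint-url https://storage.yandexcloud.net \
  s3 rb s3://<backup_bucket_name> --force

yc serverless trigger delete --name bucket-to-bucket-copying

yc serverless function delete --name copy-function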
