

Loading data from Object Storage to Yandex MPP Analytics for PostgreSQL using Yandex Data Transfer

Written by
Yandex Cloud
Updated at November 6, 2025
  • Required paid resources
  • Getting started
  • Prepare the test data
  • Create a database in the target cluster
  • Prepare and activate your transfer
  • Test the transfer
    • Test the copy process
    • Test the replication process
  • Delete the resources you created

Note

The functionality for loading data from Object Storage in Data Transfer is at the Preview stage. To get access, contact support or your account manager.

You can use Data Transfer to migrate data from Object Storage to a table in Yandex MPP Analytics for PostgreSQL. To do this:

  1. Prepare the test data.
  2. Create a database in the target cluster.
  3. Prepare and activate your transfer.
  4. Test your transfer.

If you no longer need the resources you created, delete them.

Required paid resources

The support cost for this solution includes:

  • Object Storage bucket fee covering data storage and data operations (see Object Storage pricing).
  • Yandex MPP Analytics for PostgreSQL cluster fee: use of computing resources allocated to hosts and disk space (see Yandex MPP Analytics for PostgreSQL pricing).
  • Fee for public IP address assignment on cluster hosts (see Virtual Private Cloud pricing).
  • Per-transfer fee: use of computing resources and number of transferred data rows (see Data Transfer pricing).

Getting started

  1. Set up the infrastructure:

    Manually:
    1. Create a target Yandex MPP Analytics for PostgreSQL cluster in any suitable configuration with publicly available hosts and the following settings:

      • Username: user1.
      • Password: <user_password>.
    2. If using security groups in your cluster, make sure they are configured correctly and allow connecting to the cluster.

    3. Create an Object Storage bucket.

    4. Create a service account named storage-viewer with the storage.viewer role. The transfer will use it to access the bucket.

    5. Create a static access key for the storage-viewer service account.

    Using Terraform:

    1. If you do not have Terraform yet, install it.

    2. Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.

    3. Configure and initialize a provider. There is no need to create a provider configuration file manually; you can download it.

    4. Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.

    5. Download the object-storage-to-greenplum.tf configuration file to the same working directory.

      This file describes:

      • Network.
      • Subnet.
      • Cluster access security group.
      • Service account for bucket operations, e.g., creation and access.
      • Yandex Lockbox secret for the service account static key required to configure the source endpoint.
      • Source Object Storage bucket.
      • Yandex MPP Analytics for PostgreSQL target cluster.
      • Transfer.
    6. In the object-storage-to-greenplum.tf file, specify the values of the following variables:

      • folder_id: Cloud folder ID, same as in the provider settings.
      • bucket_name: Bucket name consistent with the naming conventions.
      • gp_version: Greenplum® version.
      • gp_password: Greenplum® user password.
    7. Make sure the Terraform configuration files are correct using this command:

      terraform validate
      

      Terraform will display any configuration errors detected in your files.

    8. Create the required infrastructure:

      1. Run this command to view the planned changes:

        terraform plan
        

        If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

      2. If everything looks correct, apply the changes:

        1. Run this command:

          terraform apply
          
        2. Confirm updating the resources.

        3. Wait for the operation to complete.

      All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.

  2. Enable the Access from Data Transfer option in the target cluster.

Prepare the test data

  1. Prepare two CSV files with test data:

    • demo_data1.csv:

      1,Anna
      2,Robert
      3,Umar
      4,Algul
      5,Viktor
      
    • demo_data2.csv:

      6,Maria
      7,Alex
      
  2. Upload the demo_data1.csv file to the Object Storage bucket.
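
The two test files above can also be generated with a short script. The upload step is shown only in comments as a hedged sketch: the bucket name and credentials are placeholders, and boto3 (or any other S3-compatible client) must be installed separately.

```python
import csv
from pathlib import Path

# Test rows from this tutorial: demo_data1.csv is uploaded before activating
# the transfer; demo_data2.csv is uploaded later to test replication.
FILES = {
    "demo_data1.csv": [(1, "Anna"), (2, "Robert"), (3, "Umar"),
                       (4, "Algul"), (5, "Viktor")],
    "demo_data2.csv": [(6, "Maria"), (7, "Alex")],
}

def write_test_files(directory: str = ".") -> list[str]:
    """Write the comma-delimited test files and return their paths."""
    paths = []
    for name, rows in FILES.items():
        path = Path(directory) / name
        with path.open("w", newline="") as f:
            csv.writer(f).writerows(rows)
        paths.append(str(path))
    return paths

# Uploading could then be done with any S3-compatible client, e.g. boto3
# (hypothetical placeholders for the bucket name and static key):
#
#   s3 = boto3.session.Session(
#       aws_access_key_id="<key_ID>",
#       aws_secret_access_key="<secret_key>",
#   ).client("s3", endpoint_url="https://storage.yandexcloud.net")
#   s3.upload_file("demo_data1.csv", "<bucket_name>", "demo_data1.csv")

if __name__ == "__main__":
    write_test_files()
```
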

Create a database in the target cluster

  1. Connect to the auxiliary postgres database in the Yandex MPP Analytics for PostgreSQL target cluster as user1.

  2. Create a database named db1:

    CREATE DATABASE db1;
    

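Because Greenplum® speaks the PostgreSQL wire protocol, any libpq-compatible client can run this step. Below is a minimal sketch of the connection parameters, assuming the standard PostgreSQL port and TLS-enforced managed hosts; the host FQDN and password are placeholders, and psycopg2 must be installed separately.

```python
def connection_params(host: str, password: str, dbname: str = "postgres",
                      user: str = "user1", port: int = 5432) -> dict:
    """libpq-style parameters for the cluster's coordinator host.
    sslmode=verify-full reflects the TLS requirement of managed clusters;
    the port value is an assumption (standard PostgreSQL port)."""
    return {
        "host": host, "port": port, "dbname": dbname,
        "user": user, "password": password, "sslmode": "verify-full",
    }

# Hypothetical usage (requires psycopg2 and a reachable host FQDN):
#
#   import psycopg2
#   conn = psycopg2.connect(**connection_params("<host_FQDN>", "<user_password>"))
#   conn.autocommit = True  # CREATE DATABASE cannot run inside a transaction
#   conn.cursor().execute("CREATE DATABASE db1")
```
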
Prepare and activate your transfer

  1. Create an Object Storage-type source endpoint with the following settings:

    • Database type: Object Storage.

    • Bucket: Object Storage bucket name.

    • Access Key ID: Public component of the service account’s static key. If you created your infrastructure using Terraform, copy the key value from the Yandex Lockbox secret.

    • Secret Access Key: Service account’s secret access key. If you created your infrastructure using Terraform, copy the key value from the Yandex Lockbox secret.

    • Endpoint: https://storage.yandexcloud.net.

    • Region: ru-central1.

    • Data format: CSV.

    • Delimiter: Comma (,).

    • Table: table1.

    • Result table schema: Select Manual and specify the following field names and data types:

      • Id: Int64
      • Name: UTF8

    Leave the default values for the other properties.

  2. Create a target endpoint of the Greenplum® type and specify the cluster connection settings in it:

    • Connection type: Managed Service for Greenplum cluster.
    • Managed Service for Greenplum cluster: <target_Greenplum®_cluster_name> from the drop-down list.
    • Database: db1.
    • User: user1.
    • Password: <user_password>.
  3. Create and activate your transfer:

    Manually:
    1. Create a transfer of the Snapshot and replication type that will use the new endpoints.

    2. Activate the transfer and wait for its status to change to Replicating.

    Using Terraform:

    1. In the object-storage-to-greenplum.tf file, specify these variables:

      • source_endpoint_id: Source endpoint ID.
      • target_endpoint_id: Target endpoint ID.
      • transfer_enabled: 1 to create a transfer.
    2. Make sure the Terraform configuration files are correct using this command:

      terraform validate
      

      Terraform will display any configuration errors detected in your files.

    3. Create the required infrastructure:

      1. Run this command to view the planned changes:

        terraform plan
        

        If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

      2. If everything looks correct, apply the changes:

        1. Run this command:

          terraform apply
          
        2. Confirm updating the resources.

        3. Wait for the operation to complete.

    4. The transfer will activate automatically upon creation. Wait for its status to change to Replicating.

Test the transfer

To verify that the transfer is operational, test the copy and replication processes.

Test the copy process

  1. Connect to db1 in your target Yandex MPP Analytics for PostgreSQL cluster.

  2. Run this query:

    SELECT * FROM public.table1;
    
    Response example
      __file_name  | __row_index | Id |  Name
    ---------------+-------------+----+--------
    demo_data1.csv |           1 |  1 | Anna
    demo_data1.csv |           2 |  2 | Robert
    demo_data1.csv |           3 |  3 | Umar
    demo_data1.csv |           4 |  4 | Algul
    demo_data1.csv |           5 |  5 | Viktor
    

Test the replication process

  1. Upload the demo_data2.csv file to the Object Storage bucket.

  2. Make sure the data from demo_data2.csv has been added to the target database:

    1. Connect to db1 in the Yandex MPP Analytics for PostgreSQL target cluster.

    2. Run this query:

      SELECT * FROM public.table1;
      
      Response example
        __file_name  | __row_index | Id |  Name
      ---------------+-------------+----+--------
      demo_data1.csv |           1 |  1 | Anna
      demo_data1.csv |           2 |  2 | Robert
      demo_data1.csv |           3 |  3 | Umar
      demo_data1.csv |           4 |  4 | Algul
      demo_data1.csv |           5 |  5 | Viktor
      demo_data2.csv |           1 |  6 | Maria
      demo_data2.csv |           2 |  7 | Alex
      

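The responses above show two service columns that Data Transfer adds to each transferred row: __file_name (the source object key) and __row_index (a 1-based row number that restarts for every file). A minimal model of that behavior, built from this tutorial's test data (an illustration of the column semantics, not Data Transfer's actual implementation):

```python
def with_service_columns(file_name, rows):
    """Tag each (Id, Name) row with __file_name and a per-file,
    1-based __row_index, as shown in the query responses above."""
    return [(file_name, index, *row) for index, row in enumerate(rows, start=1)]

# After replication, table1 holds the rows of both uploaded files:
table1 = (
    with_service_columns("demo_data1.csv",
                         [(1, "Anna"), (2, "Robert"), (3, "Umar"),
                          (4, "Algul"), (5, "Viktor")])
    + with_service_columns("demo_data2.csv", [(6, "Maria"), (7, "Alex")])
)
```

Note that __row_index restarts at 1 for demo_data2.csv, which is why the Id column, not the service columns, is the reliable business key in the target table.
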
Delete the resources you created

Some resources are not free of charge. To avoid paying for them, delete the resources you no longer need:

  • Transfer

  • Source endpoint

  • Target endpoint

  • Objects from the bucket

  • Delete the other resources using the same method you used to create them:

    Manually:
    • Yandex MPP Analytics for PostgreSQL cluster.
    • Object Storage bucket.

    Using Terraform:

    1. In the terminal window, go to the directory containing the infrastructure plan.

      Warning

      Make sure the directory has no Terraform manifests with the resources you want to keep. Terraform deletes all resources that were created using the manifests in the current directory.

    2. Delete resources:

      1. Run this command:

        terraform destroy
        
      2. Confirm deleting the resources and wait for the operation to complete.

      All the resources described in the Terraform manifests will be deleted.

© 2025 Direct Cursus Technology L.L.C.