Importing data from Yandex Managed Service for PostgreSQL to Yandex Data Processing using Sqoop

Written by Yandex Cloud
Updated at May 7, 2025
  • Getting started
    • Manually
    • Using Terraform
  • Preparing the source cluster
  • Importing the database
  • Verifying the import
  • Deleting the created resources

The Sqoop utility allows you to import databases into a Yandex Data Processing cluster. Depending on the cluster configuration, you can import data to:

  • Yandex Object Storage bucket
  • HDFS directory
  • Apache Hive
  • Apache HBase

To use Sqoop to import the source cluster databases to the Yandex Data Processing target cluster:

  1. Prepare the source cluster.
  2. Run the import.
  3. Check the import for correctness.

If you no longer need the resources you created, delete them.

Note

Sqoop is not supported for Yandex Data Processing clusters of version 2.0 and higher. Use Apache Spark™ features instead; a minimal Spark sketch follows this note.
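
A minimal PySpark sketch of such an import, assuming the PostgreSQL JDBC driver is available to the job (for example, via --packages org.postgresql:postgresql:<version> at submit time) and reusing the db1, user1, and persons names from this tutorial:

    # pg_to_s3.py: a rough Spark analog of the Sqoop import shown below.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("pg-to-s3-import").getOrCreate()

    df = (spark.read.format("jdbc")
        .option("url", "jdbc:postgresql://c-<cluster_ID>.rw.mdb.yandexcloud.net:6432/db1")
        .option("driver", "org.postgresql.Driver")
        .option("dbtable", "persons")
        .option("user", "user1")
        .option("password", "<password>")
        # Parallelize the read by the age column, like Sqoop's --split-by:
        .option("partitionColumn", "age")
        .option("lowerBound", "0")
        .option("upperBound", "100")
        .option("numPartitions", "4")
        .load())

    # Write the result to the Object Storage bucket.
    df.write.csv("s3a://<bucket_name>/import-directory")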

Getting started

Note

Place the clusters and the VM instance in the same cloud network.

  1. Create a cloud network.
  2. Create a subnet in the ru-central1-c availability zone.
  3. Set up a NAT gateway for the new subnet: this is a prerequisite for the Yandex Data Processing cluster. A CLI sketch of these three steps follows this list.
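
If you prefer the CLI, these three steps can also be done with yc. A rough sketch under the assumption that the flag names below match your CLI version (check yc vpc <subcommand> --help); all names, the CIDR, and the IDs are placeholders:

    # Cloud network and a subnet in ru-central1-c:
    yc vpc network create --name dataproc-network
    yc vpc subnet create --name dataproc-subnet \
        --zone ru-central1-c \
        --network-name dataproc-network \
        --range 10.1.0.0/24

    # NAT gateway plus a route table that sends egress traffic through it:
    yc vpc gateway create --name dataproc-nat
    yc vpc route-table create --name dataproc-rt \
        --network-name dataproc-network \
        --route destination=0.0.0.0/0,gateway-id=<gateway_ID>

    # Attach the route table to the subnet:
    yc vpc subnet update dataproc-subnet --route-table-name dataproc-rt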

You can create other resources manually or using Terraform.

Manually

  1. Create a Managed Service for PostgreSQL cluster in any suitable configuration with the following settings:

    • DB name: db1
    • Username: user1
  2. To import data to an Object Storage bucket:

    1. Create a bucket with restricted access.

    2. Create a service account with the following roles (a CLI sketch for creating the account and assigning the roles follows this list):

      • dataproc.agent
      • dataproc.provisioner
      • monitoring.viewer
      • storage.viewer
      • storage.uploader
    3. Grant this service account read and write permissions for this bucket.

  3. Create a Yandex Data Processing cluster in any suitable configuration.

    Specify the settings for the storage to import the data to:

    Object Storage:

    • Service account: Name of the previously created service account.
    • Bucket name: Name of the bucket you created earlier.
    • Services: Sqoop.

    HDFS directory:

    • Services: HBase, HDFS, Sqoop, Yarn, Zookeeper.

    Apache Hive:

    • Services: HDFS, Hive, Mapreduce, Sqoop, Yarn.
    • Properties: hive:hive.execution.engine key with the mr value.

    Apache HBase:

    • Services: HBase, HDFS, Sqoop, Yarn, Zookeeper.
  4. Create a virtual machine for connecting to Managed Service for PostgreSQL and Yandex Data Processing clusters.

  5. If you are using security groups for your clusters and VM instance, configure them to allow connecting:

    • To the VM instance and Yandex Data Processing cluster.
    • To the Managed Service for PostgreSQL cluster.
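
For illustration, step 2 above (the service account and its roles) can also be done with the CLI. A sketch with placeholder IDs; grant the bucket read and write permissions themselves in the management console:

    # Create the service account for the Yandex Data Processing cluster:
    yc iam service-account create --name dataproc-sa

    # Assign the required roles at the folder level:
    for role in dataproc.agent dataproc.provisioner monitoring.viewer \
                storage.viewer storage.uploader; do
        yc resource-manager folder add-access-binding <folder_ID> \
            --role "$role" \
            --subject serviceAccount:<service_account_ID>
    done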

Using Terraform

  1. If you do not have Terraform yet, install it.

  2. Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.

  3. Configure and initialize a provider. There is no need to create the provider configuration file manually: you can download it. For reference, a minimal version is sketched below.
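
    A minimal provider configuration file looks roughly like this (the downloadable file may differ in details):

      terraform {
        required_providers {
          yandex = {
            source = "yandex-cloud/yandex"
          }
        }
        required_version = ">= 0.13"
      }

      provider "yandex" {
        zone = "ru-central1-c"
      }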

  4. Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.

  5. Download the clusters-postgresql-data-proc-and-vm.tf configuration file and save it to the same working directory.

    This file describes:

    • Security groups for the clusters and VM.
    • Service account for the Yandex Data Processing cluster.
    • Object Storage bucket.
    • Managed Service for PostgreSQL cluster.
    • Yandex Data Processing cluster.
    • Virtual machine with public internet access.
  6. Specify the infrastructure settings in the clusters-postgresql-data-proc-and-vm.tf configuration file under locals (a filled-in sketch follows this list):

    • folder_id: ID of the folder to create resources in.

    • network_id: ID of the cloud network you created earlier.

    • subnet_id: ID of the subnet you created earlier.

    • storage_sa_id: ID of the service account to use for creating a bucket in Object Storage.

    • data_proc_sa: Name of the Yandex Data Processing cluster service account. The name must be unique within the folder.

    • pg_cluster_version: PostgreSQL version of the Managed Service for PostgreSQL cluster.

    • pg_cluster_password: Password for user1 in the Managed Service for PostgreSQL database named db1.

    • vm_image_id: ID of a public Ubuntu image without GPU, e.g., Ubuntu 20.04 LTS.

    • vm_username and vm_public_key: Username and absolute path to the public SSH key for accessing the virtual machine. By default, the specified username is ignored in the Ubuntu 20.04 LTS image; a user named ubuntu is created instead. Use it to connect to the VM.

    • bucket_name: Bucket name in Object Storage. The name must be unique across all of Object Storage.

    • dp_public_key: Absolute path to the public SSH key for the Yandex Data Processing cluster.

      For an SSH connection to the hosts of a Yandex Data Processing cluster of version 1.x, use the root username.
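
    A sketch of a filled-in locals block (the values are placeholders and examples to replace with your own):

      locals {
        folder_id           = "<folder_ID>"
        network_id          = "<network_ID>"
        subnet_id           = "<subnet_ID>"
        storage_sa_id       = "<service_account_ID>"
        data_proc_sa        = "dataproc-sa"
        pg_cluster_version  = "14"
        pg_cluster_password = "<password>"
        vm_image_id         = "<image_ID>"
        vm_username         = "ubuntu"
        vm_public_key       = "/home/<user>/.ssh/id_ed25519.pub"
        bucket_name         = "<unique_bucket_name>"
        dp_public_key       = "/home/<user>/.ssh/id_ed25519.pub"
      }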

  7. Make sure the Terraform configuration files are correct using this command:

    terraform validate
    

    If there are any errors in the configuration files, Terraform will point them out.

  8. Create the required infrastructure:

    1. Run this command to view the planned changes:

      terraform plan
      

      If you described the configuration correctly, the terminal will display a list of the resources to be created and their parameters. This is a verification step: no changes are applied to your resources.

    2. If everything looks correct, apply the changes:

      1. Run this command:

        terraform apply
        
      2. Confirm creating the resources.

      3. Wait for the operation to complete.

    All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.

Preparing the source cluster

  1. Connect to the Managed Service for PostgreSQL cluster's database named db1 as user1 (a sample psql command is shown after these steps).

  2. Add test data to the database. The example uses a simple table with people's names and ages:

    1. Create a table:

      CREATE TABLE persons (
          Name VARCHAR(30) NOT NULL,
          Age INTEGER DEFAULT 0,
          PRIMARY KEY (Name)
      );
      
    2. Populate the table with data:

      INSERT INTO persons (Name, Age) VALUES
          ('Anna', 19),
          ('Michael', 65),
          ('Fred', 28),
          ('Alsou', 50),
          ('Max', 27),
          ('John', 34),
          ('Dmitry', 42),
          ('Oleg', 19),
          ('Alina', 20),
          ('Maria', 28);
      
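The connection in step 1 can be made, for example, with psql from the VM you created. A sketch using the host name format from the import examples below; sslmode=verify-full assumes the Yandex Cloud SSL certificate is installed on the VM:

    psql "host=c-c9qgcd6lplrs********.rw.mdb.yandexcloud.net port=6432 sslmode=verify-full dbname=db1 user=user1 target_session_attrs=read-write"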

Importing the database

To parallelize the import, Sqoop can split the imported data not only by the table's primary key but also by any other column. In this example, the data is split by the age column.
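
Under the hood, Sqoop determines the split boundaries by running a bounding query of roughly the following form against the source table and then dividing the resulting range evenly among the mappers:

    SELECT MIN(age), MAX(age) FROM persons;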

Let's assume that:

  • FQDN of the Yandex Data Processing subcluster host for data storage: rc1c-dataproc-d-vfw6fa8x********.mdb.yandexcloud.net.
  • Bucket name in Object Storage: <bucket_name>.
  • Directory names in Object Storage and HDFS: import-directory.
  • Apache Hive database name: db-hive.
  • Name of the Apache HBase column family: family1.
  • Names of the HBase and Hive tables: import-table.
  • Managed Service for PostgreSQL cluster ID: c9qgcd6lplrs********.
Object Storage:

  1. Complete all prerequisite steps.

  2. Run this command:

    sqoop import "-Dorg.apache.sqoop.splitter.allow_text_splitter=true" \
        --connect "jdbc:postgresql://c-c9qgcd6lplrs********.rw.mdb.yandexcloud.net:6432/db1" \
        --username "user1" \
        --P \
        --table "persons" \
        --target-dir "s3a://<bucket_name>/import-directory" \
        --split-by "age"
    
HDFS directory:

  1. Complete all prerequisite steps.

  2. Run this command:

    sqoop import "-Dorg.apache.sqoop.splitter.allow_text_splitter=true" \
        --connect "jdbc:postgresql://c-c9qgcd6lplrs********.rw.mdb.yandexcloud.net:6432/db1" \
        --username "user1" \
        --table "persons" \
        --target-dir "import-directory" \
        --P \
        --split-by "age"
    
Apache Hive:

  1. Complete all prerequisite steps.

  2. Run this command:

    sqoop import "-Dorg.apache.sqoop.splitter.allow_text_splitter=true" \
        --connect "jdbc:postgresql://c-c9qgcd6lplrs********.rw.mdb.yandexcloud.net:6432/db1" \
        --username "user1" \
        --P \
        --table "persons" \
        --hive-import \
        --create-hive-table \
        --hive-database "db-hive" \
        --hive-table "import-table" \
        --split-by "age"
    
Apache HBase:

  1. Complete all prerequisite steps.

  2. Run this command:

    sqoop import "-Dorg.apache.sqoop.splitter.allow_text_splitter=true" \
        --connect "jdbc:postgresql://c-c9qgcd6lplrs********.rw.mdb.yandexcloud.net:6432/db1" \
        --username "user1" \
        --P \
        --table "persons" \
        --hbase-create-table \
        --column-family "family1" \
        --hbase-table "import-table" \
        --split-by "age"
    

Verifying the import

If the import was successful, you will see the contents of the persons table.

Object Storage:

Download the files with the import results from the bucket.
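
For example, with the AWS CLI configured for Object Storage (a sketch; it assumes a static access key for an account with read access to the bucket):

    aws --endpoint-url=https://storage.yandexcloud.net \
        s3 cp s3://<bucket_name>/import-directory/ ./import-directory/ --recursive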

HDFS directory:

  1. Connect over SSH to the Yandex Data Processing subcluster host where the data is stored.

  2. Run this command:

    hdfs dfs -cat /user/root/import-directory/*
    
Apache Hive:

  1. Connect over SSH to the Yandex Data Processing subcluster host where the data is stored.

  2. Run this command:

    # Backticks quote the hyphenated table name for Hive:
    hive -e "SELECT * FROM \`import-table\`;"
    
Apache HBase:

  1. Connect over SSH to the Yandex Data Processing subcluster host where the data is stored.

  2. Run this command:

    echo -e "scan 'import-table'" | hbase shell -n
    

Deleting the created resources

Some resources are not free of charge. To avoid paying for them, delete the resources you no longer need:

To delete the resources created manually:
  1. Delete the VM.

  2. If you reserved a public static IP address for the virtual machine, release and delete it.

  3. Delete the clusters:

    • Managed Service for PostgreSQL.
    • Yandex Data Processing.
  4. If you created an Object Storage bucket, delete it.

  5. Delete the subnet.

  6. Delete the route table.

  7. Delete the NAT gateway.

  8. Delete the cloud network.

  9. Delete the service account.

To delete an infrastructure created with Terraform:

  1. In the terminal window, go to the directory containing the infrastructure plan.

    Warning

    Make sure the directory has no Terraform manifests with the resources you want to keep. Terraform deletes all resources that were created using the manifests in the current directory.

  2. Delete resources:

    1. Run this command:

      terraform destroy
      
    2. Confirm deleting the resources and wait for the operation to complete.

    All the resources described in the Terraform manifests will be deleted.

Then delete the resources you created manually:

  1. Subnet
  2. Route table
  3. NAT gateway
  4. Cloud network
