Transferring metadata between Yandex Data Processing clusters using Apache Hive™ Metastore
You can transfer metadata between Yandex Data Processing clusters running the Hive DBMS: first export the metadata from one cluster, then import it into another using Apache Hive™ Metastore.
To transfer metadata between Yandex Data Processing clusters:
- Create a test table.
- Export data.
- Connect Yandex Data Processing to Apache Hive™ Metastore.
- Import data.
- Check the result.
If you no longer need the resources you created, delete them.
Warning
If you want to configure an access policy for a bucket and connect to it from an Apache Hive™ Metastore cluster, you will need some additional infrastructure setup. For more information, see this guide.
Note
Apache Hive™ Metastore is at the Preview stage.
Required paid resources
The infrastructure support cost includes:
- Fee for the Yandex Data Processing cluster computing resources and storage (see Yandex Data Processing pricing).
- Fee for the Apache Hive™ Metastore cluster computing resources (see Yandex MetaData Hub pricing).
- Fee for data storage and operations in a bucket (see Yandex Object Storage pricing).
- Fee for NAT gateway usage and outbound traffic (see Yandex Virtual Private Cloud pricing).
Getting started
Set up the infrastructure:
- Create a service account named `dataproc-s3-sa` and assign it the `dataproc.agent`, `dataproc.provisioner`, `managed-metastore.integrationProvider`, and `storage.uploader` roles.
- In Yandex Object Storage, create a bucket named `dataproc-bucket`. Grant the `READ and WRITE` permission for this bucket to the service account.
- Create a cloud network named `dataproc-network`.
- In this network, create a subnet named `dataproc-subnet`.
- Set up a NAT gateway for the subnet you created.
- Create a security group named `dataproc-security-group` with the following rules:

  | Target service for the rule | Rule purpose | Rule settings |
  | --- | --- | --- |
  | Yandex Data Processing | For incoming service traffic | Port range: `0-65535`; protocol: `Any`; source: `Security group`; security group: `Self` |
  | Yandex Data Processing | For incoming traffic, to allow access to NTP servers for time syncing | Port range: `123`; protocol: `UDP`; source: `CIDR`; CIDR blocks: `0.0.0.0/0` |
  | Yandex Data Processing | For incoming traffic, to connect from the internet via SSH to subcluster hosts with public access | Port range: `22`; protocol: `TCP`; source: `CIDR`; CIDR blocks: `0.0.0.0/0` |
  | Apache Hive™ Metastore | For incoming client traffic | Port range: `30000-32767`; protocol: `Any`; source: `CIDR`; CIDR blocks: `0.0.0.0/0` |
  | Apache Hive™ Metastore | For incoming load balancer traffic | Port range: `10256`; protocol: `Any`; source: `Load balancer health checks` |
  | Yandex Data Processing | For outgoing service traffic | Port range: `0-65535`; protocol: `Any`; destination: `Security group`; security group: `Self` |
  | Yandex Data Processing | For outgoing HTTPS traffic | Port range: `443`; protocol: `TCP`; destination: `CIDR`; CIDR blocks: `0.0.0.0/0` |
  | Yandex Data Processing | For outgoing traffic, to allow access to NTP servers for time syncing | Port range: `123`; protocol: `UDP`; destination: `CIDR`; CIDR blocks: `0.0.0.0/0` |
  | Yandex Data Processing | For outgoing traffic, to allow Yandex Data Processing cluster connections to Apache Hive™ Metastore | Port range: `9083`; protocol: `Any`; destination: `CIDR`; CIDR blocks: `0.0.0.0/0` |
- Create two Yandex Data Processing clusters named `dataproc-source` and `dataproc-target` with the following settings:

  - Environment: `PRODUCTION`.
  - Services: `HDFS`, `HIVE`, `SPARK`, `YARN`, `ZEPPELIN`.
  - Service account: `dataproc-s3-sa`.
  - Availability zone: Zone where `dataproc-subnet` resides.
  - Properties: `spark:spark.sql.hive.metastore.sharedPrefixes` with the `com.amazonaws,ru.yandex.cloud` value. It is required for PySpark jobs and integration with Apache Hive™ Metastore.
  - Bucket name: `dataproc-bucket`.
  - Network: `dataproc-network`.
  - Security groups: `dataproc-security-group`.
  - UI Proxy: Enabled.
  - Subnet for the Yandex Data Processing subclusters: `dataproc-subnet`.
  - Public access for the master host: Enabled.
- If you do not have Terraform yet, install it.
- Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.
- Configure and initialize a provider. There is no need to create a provider configuration file manually: you can download it.
- Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.
- Download the metastore-import.tf configuration file to the same working directory.

  This file describes:

  - Network.
  - NAT gateway and route table required for Yandex Data Processing.
  - Subnet.
  - Security group for Yandex Data Processing and Apache Hive™ Metastore.
  - Service account for the Yandex Data Processing cluster.
  - Service account required to create an Object Storage bucket.
  - Static access key to create a Yandex Object Storage bucket.
  - Bucket.
  - Two Yandex Data Processing clusters.
- Specify the following in `metastore-import.tf`:

  - `folder_id`: Cloud folder ID, same as in the provider settings.
  - `dp_ssh_key`: Absolute path to the public key for the Yandex Data Processing clusters. Learn more about connecting to a Yandex Data Processing host over SSH here.
- Validate your Terraform configuration files using this command:

  ```bash
  terraform validate
  ```

  Terraform will display any configuration errors detected in your files.
- Create the required infrastructure:

  - Run this command to view the planned changes:

    ```bash
    terraform plan
    ```

    If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

  - If everything looks correct, apply the changes:

    - Run this command:

      ```bash
      terraform apply
      ```

    - Confirm updating the resources.
    - Wait for the operation to complete.

  All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.
Create a test table
In the `dataproc-source` cluster, create a test table named `countries`:

- Navigate to the folder dashboard and select Yandex Data Processing.
- Open the `dataproc-source` cluster page.
- Click the Zeppelin Web UI link under UI Proxy.
- Select Notebook, then select Create new note.
- In the window that opens, specify the name for the note and click Create.
- To run a PySpark job, paste a Python script into the input line:

  ```python
  %pyspark
  from pyspark.sql.types import *

  schema = StructType([
      StructField('Name', StringType(), True),
      StructField('Capital', StringType(), True),
      StructField('Area', IntegerType(), True),
      StructField('Population', IntegerType(), True)
  ])
  df = spark.createDataFrame([
      ('Australia', 'Canberra', 7686850, 19731984),
      ('Austria', 'Vienna', 83855, 7700000)
  ], schema)
  df.write.mode("overwrite").option("path", "s3a://dataproc-bucket/countries").saveAsTable("countries")
  ```

- Click Run all paragraphs and wait until the job is complete.
- Replace the Python code in the input line with this SQL query:

  ```sql
  %sql
  SELECT * FROM countries;
  ```

- Click Run all paragraphs.

  Result:

  | Name      | Capital  | Area    | Population |
  | --------- | -------- | ------- | ---------- |
  | Australia | Canberra | 7686850 | 19731984   |
  | Austria   | Vienna   | 83855   | 7700000    |
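As an optional check from your local machine, you can confirm that Spark wrote the table files under the `countries` prefix in the bucket. This is a sketch that assumes the AWS CLI is configured with a static access key for Object Storage's S3-compatible endpoint; the actual listing command is commented out because it needs those credentials:

```shell
# Compose the table path Spark wrote to (names from this tutorial).
BUCKET=dataproc-bucket
TABLE_PREFIX=countries
echo "s3://${BUCKET}/${TABLE_PREFIX}/"
# With credentials configured, list the table files:
# aws --endpoint-url=https://storage.yandexcloud.net s3 ls "s3://${BUCKET}/${TABLE_PREFIX}/"
```

If the PySpark job succeeded, the listing shows the Parquet files backing the `countries` table.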
Export data
To transfer metadata from one Yandex Data Processing cluster to another, back up the metadata in the `dataproc-source` cluster using `pg_dump`:
- Use SSH to connect to the `dataproc-source` cluster's master host:

  ```bash
  ssh ubuntu@<master_host_FQDN>
  ```

- Create a backup and save it to the `metastore_dump.sql` file:

  ```bash
  pg_dump --data-only --schema public postgres://hive:hive-p2ssw0rd@localhost/metastore > metastore_dump.sql
  ```

- Disconnect from the master host.
- Download the `metastore_dump.sql` file to your local current directory:

  ```bash
  scp ubuntu@<master_host_FQDN>:metastore_dump.sql .
  ```

- Upload the `metastore_dump.sql` file to the `dataproc-bucket` bucket.
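The upload can be done in the management console or from the command line. A sketch assuming the AWS CLI is configured with a static access key that has write access to the bucket (Object Storage exposes an S3-compatible API at `storage.yandexcloud.net`); the upload command itself is commented out because it needs those credentials:

```shell
# Compose the destination path for the metastore dump (bucket name from this tutorial).
BUCKET=dataproc-bucket
DUMP_FILE=metastore_dump.sql
echo "s3://${BUCKET}/${DUMP_FILE}"
# With credentials configured, upload the dump:
# aws --endpoint-url=https://storage.yandexcloud.net s3 cp "$DUMP_FILE" "s3://${BUCKET}/${DUMP_FILE}"
```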
Connect Yandex Data Processing to Apache Hive™ Metastore
- Create an Apache Hive™ Metastore cluster with the following parameters:

  - Service account: `dataproc-s3-sa`.
  - Network: `dataproc-network`.
  - Subnet: `dataproc-subnet`.
  - Security groups: `dataproc-security-group`.

- Add to the `dataproc-target` cluster settings the `spark:spark.hive.metastore.uris` property with the following value: `thrift://<Apache Hive™ Metastore_cluster_IP_address>:9083`.

  To find out the Apache Hive™ Metastore cluster IP address, select Yandex MetaData Hub in the management console and then select the Metastore page in the left-hand panel. Copy the IP address column value for the cluster in question.
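The property value has to follow the `thrift://<host>:<port>` format exactly, with `9083` being the standard Hive Metastore Thrift port. A small sketch that composes the string from a placeholder IP address (`192.0.2.10` is from the documentation range; substitute your cluster's real IP):

```shell
# Placeholder Metastore IP; replace with the value copied from the console.
METASTORE_IP=192.0.2.10
echo "spark:spark.hive.metastore.uris=thrift://${METASTORE_IP}:9083"
```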
Import data
- Open the Apache Hive™ Metastore cluster page.
- Click Import.
- In the window that opens, specify the `dataproc-bucket` bucket and the `metastore_dump.sql` file.
- Click Import.
- Wait for the import to complete. You can check the import status on the Apache Hive™ Metastore cluster page under Operations.
Check the result
- Open the `dataproc-target` cluster page.
- Click the Zeppelin Web UI link under UI Proxy.
- Select Notebook, then select Create new note.
- In the window that opens, specify the name for the note and click Create.
- Run the following SQL query:

  ```sql
  %sql
  SELECT * FROM countries;
  ```

- Click Run all paragraphs.

  Result:

  | Name      | Capital  | Area    | Population |
  | --------- | -------- | ------- | ---------- |
  | Australia | Canberra | 7686850 | 19731984   |
  | Austria   | Vienna   | 83855   | 7700000    |
The metadata from the `dataproc-source` cluster was successfully imported into the `dataproc-target` cluster.
Delete the resources you created
Some resources are not free of charge. Delete the resources you no longer need to avoid paying for them:
- Delete the objects from the bucket.
- Delete other resources depending on how they were created: manually, or using Terraform. If you used Terraform:

  - In the terminal window, go to the directory containing the infrastructure plan.

    Warning

    Make sure the directory has no Terraform manifests describing resources you want to keep. Terraform deletes all resources that were created using the manifests in the current directory.

  - Delete the resources:

    - Run this command:

      ```bash
      terraform destroy
      ```

    - Confirm deleting the resources and wait for the operation to complete.

  All the resources described in the Terraform manifests will be deleted.
Apache® and Apache Hive™ are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries.