Migrating an HDFS Yandex Data Processing cluster to a different availability zone
Subclusters of each Yandex Data Processing cluster reside in the same cloud network and availability zone. You can migrate a cluster to a different availability zone. The migration process depends on the cluster type:
- The following describes how to migrate HDFS clusters.
- For information on migrating lightweight clusters, check this guide.
Note
The Intel Broadwell platform is not available for clusters with hosts residing in the ru-central1-d availability zone.
To migrate an HDFS cluster:
- Create a cluster via import in Terraform.
- Copy the data to the new cluster.
- Delete the initial cluster.
To get started, create a subnet in the availability zone to which you are migrating the cluster.
Required paid resources
The support cost includes the fee for the Yandex Data Processing clusters (see Yandex Data Processing pricing).
Create a cluster via import in Terraform
To create a Yandex Data Processing cluster in a different availability zone with the same configuration as the initial cluster, import the initial cluster's configuration into Terraform:
- If you do not have Terraform yet, install it.
- Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.
- Configure and initialize a provider. There is no need to create a provider configuration file manually; you can download it.
- Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.
- In the same working directory, place a `.tf` file with the following contents:

  ```hcl
  resource "yandex_dataproc_cluster" "old" { }
  ```

- Write the initial cluster ID to the environment variable:

  ```shell
  export DATAPROC_CLUSTER_ID=<cluster_ID>
  ```

  You can get the ID with the list of clusters in the folder.
- Import the initial cluster settings into the Terraform configuration:

  ```shell
  terraform import yandex_dataproc_cluster.old ${DATAPROC_CLUSTER_ID}
  ```

- Get the imported configuration:

  ```shell
  terraform show
  ```

- Copy it from the terminal and paste it into the `.tf` file.
- Place the file in the new `imported-cluster` directory.
- Edit the copied configuration so that you can create a new cluster from it:
  - Specify the new cluster name in the `resource` string and the `name` parameter.
  - Delete the `created_at`, `host_group_ids`, `id`, and `subcluster_spec.id` parameters.
  - Change the availability zone in the `zone_id` parameter.
  - In the `subnet_id` parameters of the `subcluster_spec` sections, specify the ID of the new subnet created in the required availability zone.
  - Change the SSH key format in the `ssh_public_keys` parameter. Initial format:

    ```hcl
    ssh_public_keys = [
      <<-EOT
        <key>
      EOT,
    ]
    ```

    Required format:

    ```hcl
    ssh_public_keys = [
      "<key>"
    ]
    ```
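  After these edits, the resource might look like the minimal sketch below. All names and IDs here are hypothetical placeholders, and most of the parameters imported via `terraform show` are omitted for brevity; keep them as imported:

  ```hcl
  # Hypothetical sketch of the edited configuration, not a complete resource.
  resource "yandex_dataproc_cluster" "new" {
    name    = "dataproc-migrated"    # new cluster name
    zone_id = "ru-central1-d"        # target availability zone

    # created_at, host_group_ids, id, and subcluster_spec.id were deleted

    cluster_config {
      subcluster_spec {
        name      = "main"
        subnet_id = "<new_subnet_ID>"  # subnet created in the target zone
        # ...other imported subcluster parameters...
      }
      # ...other imported configuration...
    }

    ssh_public_keys = [
      "<key>"
    ]
  }
  ```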
- Get the authentication credentials in the `imported-cluster` directory.
- In the same directory, configure and initialize a provider. There is no need to create a provider configuration file manually; you can download it.
- Place the configuration file in the `imported-cluster` directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.
- Make sure the Terraform configuration files are correct using this command:

  ```shell
  terraform validate
  ```

  Terraform will display any configuration errors detected in your files.
- Create the required infrastructure:
  - Run this command to view the planned changes:

    ```shell
    terraform plan
    ```

    If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.
  - If everything looks correct, apply the changes:
    - Run this command:

      ```shell
      terraform apply
      ```

    - Confirm updating the resources.
    - Wait for the operation to complete.

  All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.
Copy the data to the new cluster
- Make sure no operations or jobs are being performed on the HDFS files and directories you want to copy.

  To see a list of running operations and jobs:
  - In the management console, select Yandex Data Processing.
  - Click the initial cluster name and select the Operations tab, then select Jobs.

  Note

  Do not run any operations or jobs modifying the HDFS files and directories you are copying until the migration is completed.
- Connect via SSH to the master host of the initial cluster.
- Get a list of directories and files to copy to the new cluster:

  ```shell
  hdfs dfs -ls /
  ```

  You can specify the directory you need instead of `/`.
- To test copying data to the new Yandex Data Processing cluster, create test directories:

  ```shell
  hdfs dfs -mkdir /user/foo && \
  hdfs dfs -mkdir /user/test
  ```

  In the example below, only the `/user/foo` and `/user/test` test directories are copied.
- Connect via SSH to the master host of the new cluster.
- Create a file named `srclist`:

  ```shell
  nano srclist
  ```
- Add to it a list of directories to migrate:

  ```
  hdfs://<initial_cluster_FQDN>:8020/user/foo
  hdfs://<initial_cluster_FQDN>:8020/user/test
  ```

  In the file, specify the FQDN of the master host of the initial cluster. Learn how to get an FQDN in this tutorial.
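  If you prefer to generate the file rather than edit it by hand, you can use a small shell sketch like the one below. The FQDN is a made-up placeholder; substitute the master host FQDN of your initial cluster:

  ```shell
  # Hypothetical FQDN; replace with the initial cluster's master host FQDN
  FQDN="rc1a-dataproc-m-example.mdb.yandexcloud.net"

  # Write one HDFS URI per line for each directory to migrate
  printf 'hdfs://%s:8020/%s\n' "$FQDN" "user/foo" "$FQDN" "user/test" > srclist

  cat srclist
  ```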
- Place the `srclist` file in the `/user` HDFS directory:

  ```shell
  hdfs dfs -put srclist /user
  ```
- Create a directory to copy the data to. In this example, it is the `copy` directory nested in `/user`:

  ```shell
  hdfs dfs -mkdir /user/copy
  ```
- Copy the data between clusters using DistCp:

  ```shell
  hadoop distcp -f hdfs://<new_cluster_FQDN>:8020/user/srclist \
      hdfs://<new_cluster_FQDN>:8020/user/copy
  ```

  In the command, specify the FQDN of the master host of the new cluster.

  As a result, all directories and files specified in `srclist` will be copied to the `/user/copy` directory.

  If you need to copy a large volume of data, use the `-m <maximum_simultaneous_copies>` flag in the command to limit network bandwidth consumption. For more information, see the DistCp documentation.

  You can check the data volume you copy in the HDFS web interface. To open it:
  - In the management console, select Yandex Data Processing.
  - Click the initial cluster name.
  - On its page, in the UI Proxy section, click the HDFS Namenode UI link.

  The DFS Used field shows the initial cluster's data volume in HDFS.
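  For example, a hedged sketch of a run capped at ten simultaneous copies (the FQDN placeholder is the same as in the command above, and `10` is an arbitrary cap, not a recommended value):

  ```shell
  hadoop distcp -m 10 -f hdfs://<new_cluster_FQDN>:8020/user/srclist \
      hdfs://<new_cluster_FQDN>:8020/user/copy
  ```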
- Make sure the data is copied:

  ```shell
  hdfs dfs -ls /user/copy
  ```
This way, you can copy all the data you need. To do this, specify the required directories and files in `srclist`.
Delete the initial cluster
Learn how to do this in this tutorial.