Migrating an HDFS Yandex Data Processing cluster to a different availability zone
Subclusters of each Yandex Data Processing cluster reside in the same cloud network and availability zone. You can migrate a cluster to a different availability zone. The migration process depends on the cluster type:
- The following describes how to migrate HDFS clusters.
- For information on migrating lightweight clusters, follow the tutorial.
Note
The Intel Broadwell platform is not available for clusters with hosts residing in the ru-central1-d availability zone.
To migrate an HDFS cluster:
- Create a cluster via import in Terraform.
- Copy the data to the new cluster.
- Delete the initial cluster.
Before you begin, create a subnet in the availability zone to which you are migrating the cluster.
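If you use the yc CLI, the subnet can be created with a single command. This is a sketch with placeholder values; substitute your own subnet name, network ID, and address range:

```
# Create a subnet in the target availability zone (all values are placeholders).
yc vpc subnet create \
  --name dataproc-migration-subnet \
  --zone ru-central1-d \
  --network-id <network_ID> \
  --range 10.10.0.0/24
```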
Create a cluster via import in Terraform
To create a Yandex Data Processing cluster in a different availability zone with the same configuration as the initial cluster, import the initial cluster's configuration into Terraform:
- If you do not have Terraform yet, install it.
- Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.
- Configure and initialize a provider. There is no need to create a provider configuration file manually: you can download it.
- Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.
- In the same working directory, place a `.tf` file with the following contents:

  ```
  resource "yandex_dataproc_cluster" "old" { }
  ```
- Write the initial cluster ID to an environment variable:

  ```
  export DATAPROC_CLUSTER_ID=<cluster_ID>
  ```

  You can request the ID with the list of clusters in the folder.
- Import the initial cluster settings into the Terraform configuration:

  ```
  terraform import yandex_dataproc_cluster.old ${DATAPROC_CLUSTER_ID}
  ```
- Get the imported configuration:

  ```
  terraform show
  ```

- Copy it from the terminal and paste it into the `.tf` file.
- Place the file in the new `imported-cluster` directory.
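Rather than copy-pasting the output by hand, the imported configuration can be written to the file directly. A sketch, assuming the file is named `cluster.tf` (any `.tf` name works):

```
# Save the imported configuration without ANSI color codes.
mkdir -p imported-cluster
terraform show -no-color > imported-cluster/cluster.tf
```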
- Modify the copied configuration so that you can create a new cluster from it:
  - Specify the new cluster name in the `resource` string and the `name` parameter.
  - Delete the `created_at`, `host_group_ids`, `id`, and `subcluster_spec.id` parameters.
  - Change the availability zone in the `zone_id` parameter.
  - In the `subnet_id` parameters of the `subcluster_spec` sections, specify the ID of the new subnet created in the required availability zone.
  - Change the SSH key format in the `ssh_public_keys` parameter. Source format:

    ```
    ssh_public_keys = [
      <<-EOT
        <key>
      EOT,
    ]
    ```

    Required format:

    ```
    ssh_public_keys = [
      "<key>"
    ]
    ```
- Get the authentication credentials in the `imported-cluster` directory.
- In the same directory, configure and initialize a provider. There is no need to create a provider configuration file manually: you can download it.
- Place the configuration file in the `imported-cluster` directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.
- Make sure the Terraform configuration files are correct using this command:

  ```
  terraform validate
  ```

  If there are any errors in the configuration files, Terraform will point them out.
- Create the required infrastructure:
  - Run the command to view the planned changes:

    ```
    terraform plan
    ```

    If the resource configuration descriptions are correct, the terminal will display a list of the resources to create and their parameters. This is a test step; no resources are created.
  - If you are happy with the planned changes, apply them:
    - Run the command:

      ```
      terraform apply
      ```

    - Confirm the creation of resources.
    - Wait for the operation to complete.

  All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.
Copy the data to the new cluster
- Make sure no operations or jobs are being performed on the HDFS files and directories you want to copy. To see a list of running operations and jobs:
  - In the management console, select Yandex Data Processing.
  - Click the initial cluster name and select the Operations tab, then select Jobs.

  Note

  Until you have completed the migration, do not run any operations or jobs modifying the HDFS files and directories you are copying.
- Connect via SSH to the master host of the initial cluster.
- Get a list of directories and files to be copied to the new cluster:

  ```
  hdfs dfs -ls /
  ```

  Instead of the `/` symbol, you can specify the directory you need.
- To test copying data to the new Yandex Data Processing cluster, create test directories:

  ```
  hdfs dfs -mkdir /user/foo && \
  hdfs dfs -mkdir /user/test
  ```

  In the example below, only the `/user/foo` and `/user/test` test directories are copied for demonstration purposes.
- Connect via SSH to the master host of the new cluster.
- Create a file named `srclist`:

  ```
  nano srclist
  ```

- Add to it a list of the directories intended for migration:

  ```
  hdfs://<initial_cluster_FQDN>:8020/user/foo
  hdfs://<initial_cluster_FQDN>:8020/user/test
  ```

  In the file, specify the FQDN of the master host of the initial cluster. For information on how to obtain an FQDN, read the tutorial.
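The same file can also be generated non-interactively with a here-document instead of an editor. A sketch, where the FQDN value is a placeholder you must replace with your initial cluster's master host FQDN:

```shell
# Master host FQDN of the initial cluster (placeholder value).
SRC_FQDN="<initial_cluster_FQDN>"

# Write one HDFS source path per line into srclist.
cat > srclist <<EOF
hdfs://${SRC_FQDN}:8020/user/foo
hdfs://${SRC_FQDN}:8020/user/test
EOF

# Show the result.
cat srclist
```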
- Put the `srclist` file into the `/user` HDFS directory:

  ```
  hdfs dfs -put srclist /user
  ```

- Create a directory to copy the data to. In this example, it is the `copy` directory nested in `/user`:

  ```
  hdfs dfs -mkdir /user/copy
  ```
- Copy the data between clusters using DistCp:

  ```
  hadoop distcp -f hdfs://<new_cluster_FQDN>:8020/user/srclist \
  hdfs://<new_cluster_FQDN>:8020/user/copy
  ```

  In the command, specify the FQDN of the master host of the new cluster.

  As a result, all the directories and files specified in `srclist` will be copied to the `/user/copy` directory.

  If copying a large volume of data, use the `-m <maximum_simultaneous_copies>` flag in the command to limit network bandwidth consumption. For more information, see the DistCp documentation.

  You can view the volume of data you are copying in the HDFS web interface. To open it:
  - In the management console, select Yandex Data Processing.
  - Click the initial cluster name.
  - On its page, in the UI Proxy section, click the HDFS Namenode UI link.

  The DFS Used field states the initial cluster's data volume in HDFS.
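For example, a DistCp run limited to 10 simultaneous copies might look like the following sketch; the map count and the FQDN are placeholders:

```
# -m caps the number of simultaneous map tasks DistCp launches.
hadoop distcp -m 10 -f hdfs://<new_cluster_FQDN>:8020/user/srclist \
    hdfs://<new_cluster_FQDN>:8020/user/copy
```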
- Make sure the data is copied:

  ```
  hdfs dfs -ls /user/copy
  ```

  This way you can copy all the data you need. To do this, specify the required directories and files in `srclist`.
Delete the initial cluster
To do it, follow this guide.
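If you work with the yc CLI, the initial cluster can also be deleted by the ID saved earlier in the DATAPROC_CLUSTER_ID variable. A sketch:

```
# Delete the initial cluster by ID; this operation is irreversible.
yc dataproc cluster delete ${DATAPROC_CLUSTER_ID}
```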