Migrating an HDFS Yandex Data Processing cluster to a different availability zone

Written by Yandex Cloud
Updated on July 4, 2025

In this article:

  • Required paid resources
  • Create a cluster via import in Terraform
  • Copy the data to the new cluster
  • Delete the initial cluster

Subclusters of each Yandex Data Processing cluster reside in the same cloud network and availability zone. You can migrate a cluster to a different availability zone. The migration process depends on the cluster type:

  • For HDFS clusters, follow the steps below.
  • For lightweight clusters, check this guide.

Note

The Intel Broadwell platform is not available for clusters with hosts residing in the ru-central1-d availability zone.

To migrate an HDFS cluster:

  1. Create a cluster via import in Terraform.
  2. Copy the data to the new cluster.
  3. Delete the initial cluster.

To get started, create a subnet in the availability zone to which you are migrating the cluster.
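
For example, you can create the subnet with the YC CLI (the subnet name and CIDR range below are placeholders):

    yc vpc subnet create \
      --name dataproc-subnet-new \
      --zone <target_availability_zone> \
      --network-id <network_ID> \
      --range 10.10.0.0/24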

Required paid resources

The cost of this solution includes the fee for the Yandex Data Processing clusters (see Yandex Data Processing pricing).

Create a cluster via import in Terraform

To create a Yandex Data Processing cluster in a different availability zone with the same configuration as the initial cluster, import the initial cluster's configuration into Terraform:

  1. If you do not have Terraform yet, install it.

  2. Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.
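
    For example, you can export the credentials as the environment variables recognized by the Yandex Cloud Terraform provider (one common approach; adjust it to your authentication method):

    export YC_TOKEN=$(yc iam create-token)
    export YC_CLOUD_ID=<cloud_ID>
    export YC_FOLDER_ID=<folder_ID>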

  3. Configure and initialize a provider. There is no need to create a provider configuration file manually: you can download it.

  4. Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.
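
    A minimal provider configuration file might look like this (the default zone value is just an example):

    terraform {
      required_providers {
        yandex = {
          source = "yandex-cloud/yandex"
        }
      }
    }

    provider "yandex" {
      zone = "ru-central1-a"
    }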

  5. In the same working directory, place a .tf file with the following contents:

    resource "yandex_dataproc_cluster" "old" { }
    
  6. Write the initial cluster ID to the environment variable:

    export DATAPROC_CLUSTER_ID=<cluster_ID>
    

    You can get the ID with the list of clusters in the folder.
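
    For example, you can list the folder's clusters with the YC CLI:

    yc dataproc cluster list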

  7. Import the initial cluster settings into the Terraform configuration:

    terraform import yandex_dataproc_cluster.old ${DATAPROC_CLUSTER_ID}
    
  8. Get the imported configuration:

    terraform show
    
  9. Copy it from the terminal and paste it into the .tf file.

  10. Place the file in the new imported-cluster directory.
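
    Alternatively, you can write the rendered configuration straight to the file (the file name here is just an example):

    terraform show -no-color > imported-cluster/cluster.tf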

  11. Modify the copied configuration so that you can create a new cluster from it:

    • Specify the new cluster name in the resource declaration and in the name parameter.

    • Delete the created_at, host_group_ids, id, and subcluster_spec.id parameters.

    • Change the availability zone in the zone_id parameter.

    • In the subnet_id parameters of the subcluster_spec sections, specify the ID of the new subnet created in the required availability zone.

    • Change the SSH key format in the ssh_public_keys parameter. Initial format:

      ssh_public_keys = [
        <<-EOT
          <key>
        EOT,
      ]
      

      Required format:

      ssh_public_keys = [
        "<key>"
      ]
      
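    After these edits, the beginning of the modified configuration might look like this (a sketch with placeholder values; keep the remaining parameters as imported):

    resource "yandex_dataproc_cluster" "new" {
      name    = "<new_cluster_name>"
      zone_id = "<new_availability_zone>"
      # ... other imported parameters ...

      cluster_config {
        # ...
        subcluster_spec {
          # ...
          subnet_id = "<new_subnet_ID>"
        }
      }
    }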
  12. Get the authentication credentials in the imported-cluster directory.

  13. In the same directory, configure and initialize a provider. There is no need to create a provider configuration file manually: you can download it.

  14. Place the configuration file in the imported-cluster directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.

  15. Make sure the Terraform configuration files are correct using this command:

    terraform validate
    

    Terraform will show any errors found in your configuration files.

  16. Create the required infrastructure:

    1. Run this command to view the planned changes:

      terraform plan
      

      If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

    2. If everything looks correct, apply the changes:

      1. Run this command:

        terraform apply
        
      2. Confirm updating the resources.

      3. Wait for the operation to complete.

    All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.

Copy the data to the new cluster

  1. Make sure no operations or jobs are being performed on the HDFS files and directories you want to copy.

    To see a list of running operations and jobs:

    1. In the management console, select Yandex Data Processing.
    2. Click the initial cluster name and select the Operations tab, then select Jobs.

    Note

    Do not run any operations or jobs modifying the HDFS files and directories you are copying until the migration is completed.

  2. Connect via SSH to the master host of the initial cluster.

  3. Get a list of directories and files to copy to the new cluster:

    hdfs dfs -ls /
    

    You can specify the directory you need instead of /.

  4. To test copying data to the new Yandex Data Processing cluster, create test directories:

    hdfs dfs -mkdir /user/foo && \
    hdfs dfs -mkdir /user/test
    

    In the example below, only the /user/foo and /user/test test directories are copied.

  5. Connect via SSH to the master host of the new cluster.

  6. Create a file named srclist:

    nano srclist
    
  7. Add to it a list of directories to migrate:

    hdfs://<initial_cluster_FQDN>:8020/user/foo
    hdfs://<initial_cluster_FQDN>:8020/user/test
    

    In the file, specify the FQDN of the master host of the initial cluster. Learn how to get an FQDN in this tutorial.
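
    For reference, you can also print the FQDN directly on the initial cluster's master host with a standard command:

    hostname -f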

  8. Upload the srclist file to the /user HDFS directory:

    hdfs dfs -put srclist /user
    
  9. Create a directory to copy the data to. In this example, it is the copy directory nested in /user:

    hdfs dfs -mkdir /user/copy
    
  10. Copy the data between clusters using DistCp:

    hadoop distcp -f hdfs://<new_cluster_FQDN>:8020/user/srclist \
    hdfs://<new_cluster_FQDN>:8020/user/copy
    

    In the command, specify the FQDN of the master host of the new cluster.

    As a result, all directories and files specified in the srclist will be copied to the /user/copy directory.

    If you need to copy a large volume of data, use the -m <maximum_simultaneous_copies> flag to limit the number of simultaneous copies (map tasks) and thus cap network bandwidth consumption. For more information, see the DistCp documentation.
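
    For example, to run at most 10 simultaneous copies (the value is illustrative):

    hadoop distcp -m 10 -f hdfs://<new_cluster_FQDN>:8020/user/srclist \
    hdfs://<new_cluster_FQDN>:8020/user/copy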

    You can check the data volume you copy in the HDFS web interface. To open it:

    1. In the management console, select Yandex Data Processing.
    2. Click the initial cluster name.
    3. On its page, in the UI Proxy section, click the HDFS Namenode UI link.

    The DFS Used field shows the initial cluster's data volume in HDFS.
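
    Alternatively, you can get the same figure on the initial cluster's master host with a standard HDFS command:

    hdfs dfsadmin -report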

  11. Make sure the data is copied:

    hdfs dfs -ls /user/copy
    

This way you can copy all the data you need. To do this, specify the required directories and files in srclist.
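
To compare the copied data volume against the source, you can check the size of the target directory with a standard HDFS command:

    hdfs dfs -du -s -h /user/copy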

Delete the initial cluster

Learn how to do this in this tutorial.
