Migrating a Yandex Managed Service for PostgreSQL cluster to a different version using Yandex Data Transfer

Written by Yandex Cloud
Updated on March 5, 2026

In this article:

  • Required paid resources
  • Prepare the source cluster
  • Prepare the target cluster
  • Prepare and activate the transfers
  • Switch to the new cluster
  • Check the data transfer
  • Delete the resources you created

You can migrate a production database under load from a Managed Service for PostgreSQL cluster to a cluster running a higher version. This tutorial covers migrating from version 13 directly to version 17, without stepping through the intermediate versions (13 → 14 → 15 → 16 → 17).

To transfer data:

  1. Prepare the source cluster.
  2. Prepare the target cluster.
  3. Prepare and activate the transfers.
  4. Switch to the new cluster.
  5. Check the data transfer.

If you no longer need the resources you created, delete them.

Required paid resources

  • PostgreSQL clusters: computing resources allocated to hosts, storage, and backup size (see PostgreSQL pricing).
  • Public IP addresses, if public access is enabled for cluster hosts (see Virtual Private Cloud pricing).
  • Each transfer: computing resources and the number of transferred data rows (see Data Transfer pricing).

Prepare the source cluster

  1. Prepare the source database for migration as per this guide.

  2. Estimate your database workload. If it exceeds 10,000 writes per second, plan several transfers.

    1. Identify the high-workload tables, e.g., with the statistics query shown below.
    2. Distribute the tables between several transfers.
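
    One way to identify write-heavy tables is PostgreSQL's cumulative statistics view pg_stat_user_tables. A minimal sketch (the counters accumulate since the last statistics reset, so run the query twice some seconds apart and compare the results to estimate a per-second rate):

      SELECT schemaname, relname,
             n_tup_ins + n_tup_upd + n_tup_del AS write_ops
      FROM pg_stat_user_tables
      ORDER BY write_ops DESC
      LIMIT 20;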

Prepare the target cluster

  1. Create a Managed Service for PostgreSQL target cluster:

    Manually
    Using Terraform

    Create a Managed Service for PostgreSQL target cluster with the same configuration as the source cluster and with the following settings:

    • Cluster version: 17.
    • Database name: db1.
    • Username: user1.

    If you intend to connect to the cluster from the internet, enable public access to the cluster hosts.

    1. If you do not have Terraform yet, install it.

    2. Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.

    3. Configure and initialize a provider. There is no need to create a provider configuration file manually: you can download it.

    4. Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.

    5. In your current working directory, create a .tf file with the following contents:

      resource "yandex_mdb_postgresql_cluster" "old" { }
      
    6. Write the PostgreSQL version 13 cluster ID to an environment variable:

      export POSTGRESQL_CLUSTER_ID=<cluster_ID>
      

      You can get the cluster ID with the list of clusters in the folder.
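
      For example, assuming the YC CLI is installed and configured, you can list the clusters in the folder together with their IDs:

      yc managed-postgresql cluster list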

    7. Import the PostgreSQL version 13 cluster settings into the Terraform configuration:

      terraform import yandex_mdb_postgresql_cluster.old ${POSTGRESQL_CLUSTER_ID}
      
    8. Get the imported configuration:

      terraform show
      
    9. Copy it from the terminal and paste it into the .tf file.

    10. Place the file in the new imported-cluster directory.

    11. Edit the copied configuration so that you can create a new cluster from it (a sketch of the result follows this list):

      • Specify the new cluster name in the resource string and in the name argument.
      • Under config, set version to 17.
      • Delete created_at, health, id, and status.
      • In the host sections, delete the fqdn, role, and priority arguments.
      • If the disk_size_autoscaling section has disk_size_limit = 0, delete this section.
      • If the maintenance_window section contains type = "ANYTIME", delete the hour argument.
      • Optionally, make further changes if you need to customize the configuration.
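
      After these edits, the cluster resource might look like the following minimal sketch (all values are placeholders; your imported configuration will keep the actual settings of the source cluster):

      resource "yandex_mdb_postgresql_cluster" "new" {
        name        = "<new_cluster_name>"
        environment = "PRODUCTION"
        network_id  = "<network_ID>"

        config {
          version = "17"
          resources {
            resource_preset_id = "<host_class>"
            disk_type_id       = "<disk_type>"
            disk_size          = <disk_size_GB>
          }
        }

        host {
          zone      = "<availability_zone>"
          subnet_id = "<subnet_ID>"
        }
      }
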
    12. Add to the file a resource to create a user named user1:

      resource "yandex_mdb_postgresql_user" "user1" {
        cluster_id = yandex_mdb_postgresql_cluster.<cluster_name>.id
        name       = "user1"
        password   = "<user_password>"
      }
      

      Where <cluster_name> is the new cluster name specified in the yandex_mdb_postgresql_cluster resource.

    13. Add to the file a resource to create the database:

      resource "yandex_mdb_postgresql_database" "db1" {
        cluster_id = yandex_mdb_postgresql_cluster.<cluster_name>.id
        name       = "db1"
        owner      = yandex_mdb_postgresql_user.user1.name
        depends_on = [yandex_mdb_postgresql_user.user1]
      }
      

      Where <cluster_name> is the new cluster name specified in the yandex_mdb_postgresql_cluster resource.

    14. Get the authentication credentials in the imported-cluster directory.

    15. In the same directory, configure and initialize the provider. Download the provider configuration file rather than creating it manually.

    16. Place the configuration file in the imported-cluster directory and specify the parameter values. If you have not set the authentication credentials as environment variables, specify them in the configuration file.

    17. Validate your Terraform configuration:

      terraform validate
      

      Terraform will display any configuration errors detected in your files.

    18. Create the required infrastructure:

      1. Run this command to view the planned changes:

        terraform plan
        

        If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

      2. If everything looks correct, apply the changes:

        1. Run this command:

          terraform apply
          
        2. Confirm updating the resources.

        3. Wait for the operation to complete.

      All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.

    Timeouts

    The Terraform provider sets the following timeouts for Managed Service for PostgreSQL cluster operations:

    • Creating a cluster, including restoration from a backup: 30 minutes.
    • Updating a cluster: 60 minutes.
    • Deleting a cluster: 15 minutes.

    Operations exceeding the timeout are aborted.

    How can I change these timeouts?

    Add a timeouts section to the cluster description, e.g.:

    resource "yandex_mdb_postgresql_cluster" "<cluster_name>" {
      ...
      timeouts {
        create = "1h30m" # 1 hour 30 minutes
        update = "2h"    # 2 hours
        delete = "30m"   # 30 minutes
      }
    }
    
  2. If you use security groups, make sure they are configured correctly and allow connections to your cluster, e.g., as in the sketch below.
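
    A minimal sketch of a security group that allows incoming connections to PostgreSQL (assuming the default Managed Service for PostgreSQL port 6432; the network ID and allowed CIDR are placeholders):

    resource "yandex_vpc_security_group" "pg_access" {
      name       = "pg-access"
      network_id = "<network_ID>"

      ingress {
        description    = "Allow connections to PostgreSQL"
        protocol       = "TCP"
        port           = 6432
        v4_cidr_blocks = ["<allowed_CIDR>"]
      }
    }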

  3. Prepare the target database for migration as per this guide.

Prepare and activate the transfers

Manually
Using Terraform
  1. Create a source endpoint for each scheduled transfer and specify the endpoint parameters:

    • Database type: PostgreSQL.
    • Connection type: Manual setup.
    • Installation type: Managed Service for PostgreSQL cluster.
    • Managed DB cluster: <source_cluster_name> from the drop-down list.
    • Database: <source_cluster_database_name>.
    • Username: <username>.
    • Password: <password>.
    • List of included tables: For each endpoint, list the tables assigned to that transfer in your distribution plan.

    Under Schema transfer, make sure After data migration is set for foreign keys and indexes. With this setting, foreign keys and indexes are transferred at the transfer deactivation stage.

  2. Create a target endpoint for each planned transfer and specify endpoint parameters:

    • Database type: PostgreSQL.
    • Connection type: Manual setup.
    • Installation type: Managed Service for PostgreSQL cluster.
    • Managed DB cluster: db1 from the drop-down list.
    • Username: user1.
    • Password: <password>.
  3. Create transfers of the Snapshot and replication type that will use the created endpoints.

    To speed up the copying of large tables (over 100 GB), configure parallel copying for the transfer by specifying the required numbers of workers and streams.

    The table will be split into the specified number of parts that will be copied in parallel.

  4. Activate the transfers.

  1. In the imported-cluster folder, open the Terraform configuration file describing your infrastructure.

  2. Add to the file a resource to create the source endpoint.

    resource "yandex_datatransfer_endpoint" "<endpoint_name>" {
      name = "<endpoint_name>"
        settings {
          postgres_source {
            connection {
              mdb_cluster_id = "<source_cluster_ID>"
            }
            database = "<DB_name>"
            user     = "<username>"
            password {
              raw = "<password>"
            }
            include_tables = ["<schema>.<table_1>", ... , "<schema>.<table_N>"]
            object_transfer_settings {
              fk_constraint = "AFTER_DATA"
              index         = "AFTER_DATA"
            }
          }
        }
    }
    

    If you have scheduled multiple transfers, add a separate endpoint for each one. For each source endpoint, set include_tables to the tables assigned to that transfer in your distribution plan.

    The object_transfer_settings section specifies the schema transfer parameters: with AFTER_DATA, foreign keys and indexes are migrated after the data, at the transfer deactivation stage.

  3. Add to the file a resource to create the target endpoint.

    resource "yandex_datatransfer_endpoint" "<endpoint_name>" {
      name = "<endpoint_name>"
        settings {
          postgres_target {
            connection {
              mdb_cluster_id = yandex_mdb_postgresql_cluster.<cluster_name>.id
            }
            database = "db1"
            user     = "user1"
            password {
              raw = "<password>"
            }
          }
        }
    }
    

    Where <cluster_name> is the cluster name specified in the yandex_mdb_postgresql_cluster resource.

    If you have scheduled multiple transfers, add a separate endpoint for each one.

  4. Add to the file a resource to create a transfer that will use your new endpoints.

    resource "yandex_datatransfer_transfer" "<transfer_name>" {
      name      = "<transfer_name>"
      source_id = yandex_datatransfer_endpoint.<source_endpoint_name>.id
      target_id = yandex_datatransfer_endpoint.<target_endpoint_name>.id
      type      = "SNAPSHOT_AND_INCREMENT"
      runtime {
        yc_runtime {
          upload_shard_params {
            job_count     = <number_of_workers>
            process_count = <number_of_streams>
          }
        }
      }
    }
    

    Where:

    • source_id: Source endpoint link.

    • target_id: Target endpoint link.

    • type: Transfer type; SNAPSHOT_AND_INCREMENT corresponds to Snapshot and replication.

    • runtime.yc_runtime.upload_shard_params: Parallel copy settings. This option speeds up the copying of large tables (over 100 GB).

      • job_count: Number of workers.
      • process_count: Number of streams.

    If you have distributed your tables between several endpoint pairs, create a separate transfer for each pair.

    Your transfers will be started automatically as soon as they are created.
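
    Note that Terraform creates the new endpoints and transfers only when you apply the updated configuration, using the same cycle as before:

      terraform validate
      terraform plan
      terraform apply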

Switch to the new cluster

  1. Wait for the transfer status to change to Replicating.

  2. Remove the writing load from the source cluster.

  3. On the transfer monitoring page, wait for the Maximum data transfer delay metric to reach zero for each transfer. This indicates that the target cluster now contains all changes made in the source cluster after the data copy completed.

  4. Switch the workload to the target cluster.

  5. Deactivate the transfers and wait for their status to change to Stopped.

    During deactivation, foreign keys and indexes are created; this may take a while. The larger your database, the longer deactivation takes.

Check the data transfer

  1. Connect to the db1 database in the Managed Service for PostgreSQL target cluster.
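
    For example, with psql over SSL (a sketch following the standard Managed Service for PostgreSQL connection guide; the special FQDN below points to the cluster's current master host, and the SSL certificate must be installed beforehand):

      psql "host=c-<cluster_ID>.rw.mdb.yandexcloud.net \
            port=6432 \
            sslmode=verify-full \
            dbname=db1 \
            user=user1"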

  2. Run this query to make sure the tables have appeared in the db1 database:

    SELECT schemaname AS schema, tablename AS table_name
    FROM pg_tables
    WHERE schemaname NOT IN ('pg_catalog', 'information_schema') 
      AND tablename NOT LIKE 'pg\_%'
    ORDER BY schemaname, tablename;
    

    The query will return a list of all non-system tables.
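
    To verify further, you can compare per-table row counts between the source and target clusters, e.g. (the schema and table names are placeholders):

      SELECT count(*) FROM <schema>.<table_name>;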

Delete the resources you created

Some resources are not free of charge. Delete the resources you no longer need to avoid paying for them:

Manually
Using Terraform
  1. Delete the transfer.
  2. Delete the endpoints.
  3. Delete the Managed Service for PostgreSQL 17 cluster.
  1. In the terminal window, go to the directory containing the infrastructure plan.

    Warning

    Make sure the directory has no Terraform manifests with the resources you want to keep. Terraform deletes all resources that were created using the manifests in the current directory.

  2. Delete resources:

    1. Run this command:

      terraform destroy
      
    2. Confirm deleting the resources and wait for the operation to complete.

    All the resources described in the Terraform manifests will be deleted.
