Migrating a database from a third-party PostgreSQL cluster to Managed Service for PostgreSQL

Written by Yandex Cloud
Updated on January 27, 2026
  • Migrating data via Yandex Data Transfer
    • Required paid resources
    • Transfer the data
  • Migrating data via logical replication
    • Required paid resources
    • Getting started
    • Configure the source cluster
    • Export the database schema in the source cluster
    • Restore the database schema in the target cluster
    • Create a publication and a subscription
    • Migrate PostgreSQL sequences once the replication is complete
    • Delete the subscription and transfer the workload
    • Delete the resources you created
  • Migrating data using a logical dump (backup and restore)
    • Required paid resources
    • Getting started
    • Create a database dump
    • Optionally, upload the dump to a Yandex Cloud VM
    • Restore data from the dump into the target cluster
    • Delete the resources you created

Note

You can learn about data migration to a third-party PostgreSQL cluster in Migrating databases from Managed Service for PostgreSQL.

You can migrate data from a third-party source cluster to a Managed Service for PostgreSQL target cluster using the following three methods:

  • Migrating data via Yandex Data Transfer.

    This method enables you to:

    • Eliminate the need for an intermediate VM or public internet access to your Managed Service for PostgreSQL target cluster.
    • Migrate the entire database with zero downtime.
    • Migrate from older PostgreSQL versions to newer ones, e.g., upgrading the cluster’s PostgreSQL version from 15 to 16.

    To use this method, enable public access to the source cluster.

    For more information, see Problems addressed by Yandex Data Transfer.

  • Migrating data via logical replication.

    Logical replication uses the subscription mechanism, allowing you to migrate data to the target cluster with minimal downtime.

    Use this method only if data migration via Yandex Data Transfer is impossible.

  • Migrating data using a logical dump (backup and restore).

    A logical dump is a file containing a sequence of commands allowing you to restore a database to its saved state. You can create a logical dump using pg_dump. To ensure the logical dump is complete, switch the source cluster to read-only mode before creating the dump.

    Use this method only if data migration using other methods is impossible.

Warning

Users are not transferred automatically to a Managed Service for PostgreSQL cluster. You need to create them in the new cluster from scratch.
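
For example, you can recreate a user in the target cluster with the yc CLI. This is a sketch only: the username, cluster name, and password are placeholders, and the exact flags may differ across CLI versions.

yc managed-postgresql user create <username> \
  --cluster-name=<target_cluster_name> \
  --password=<password>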

Migrating data via Yandex Data Transfer

Required paid resources

  • Managed Service for PostgreSQL cluster: Computing resources allocated to hosts, storage and backup size (see Managed Service for PostgreSQL pricing).
  • Public IP addresses if public access is enabled for cluster hosts (see Virtual Private Cloud pricing).
  • Each transfer: Use of computing resources and number of transferred data rows (see Data Transfer pricing).

Transfer the data

  1. Prepare the source cluster.

  2. Set up the infrastructure:

    Manually
    Terraform
    1. Create a Managed Service for PostgreSQL target cluster with your preferred configuration. For this operation, the following requirements apply:

      • The PostgreSQL version must be the same as or higher than the version in the source cluster. Migration to an earlier PostgreSQL version is not supported.
      • When creating a cluster, specify the same database name as in the source cluster.
      • Enable the same PostgreSQL extensions as in the source cluster.
    2. Prepare the target cluster.

    3. Create a source endpoint with the following settings:

      • Database type: PostgreSQL
      • Endpoint parameters → Connection settings: Custom installation

      Configure the source cluster connection settings.

    4. Create a target endpoint with the following settings:

      • Database type: PostgreSQL
      • Endpoint parameters → Connection settings: Managed Service for PostgreSQL cluster

      Specify the target cluster ID.

    5. Create a transfer of the Snapshot and increment type configured to use the new endpoints.

    6. Activate the transfer.

      Warning

      Do not make any data schema changes in the source or target cluster during the transfer. For more information, see Working with databases during transfer.

    1. If you do not have Terraform yet, install it.

    2. Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.

    3. Configure and initialize a provider. You do not need to create a provider configuration file manually; you can download it.

    4. Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.

    5. Download the data-transfer-pgsql-mpg.tf configuration file to your current working directory.

      This file describes:

      • Network.
      • Subnet.
      • Security group and the rule allowing cluster connections.
      • Managed Service for PostgreSQL target cluster.
      • Source endpoint.
      • Target endpoint.
      • Transfer.
    6. In the data-transfer-pgsql-mpg.tf file, specify the following:

      • Source endpoint settings.

      • pg-extensions: List of PostgreSQL extensions in the source cluster.

      • Target endpoint settings inherited from the target cluster configuration:

        • target_pgsql_version: PostgreSQL version. This must be the same as or higher than the version in the source cluster.
        • target_user and target_password: Database owner username and password.
    7. Validate your Terraform configuration files using this command:

      terraform validate
      

      Terraform will display any configuration errors detected in your files.

    8. Create the required infrastructure:

      1. Run this command to view the planned changes:

        terraform plan
        

        If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

      2. If everything looks correct, apply the changes:

        1. Run this command:

          terraform apply
          
        2. Confirm updating the resources.

        3. Wait for the operation to complete.

      All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.

      The transfer will activate automatically upon creation.

  3. Wait for the transfer status to change to Replicating.

  4. Switch the source cluster to read-only.

  5. On the transfer monitoring page, wait until the Maximum data transfer delay value drops to zero. This means that all changes made in the source cluster after the initial data copy have been transferred to the target cluster.

  6. Deactivate the transfer and wait for its status to change to Stopped.

    For more information about transfer statuses, see Transfer lifecycle.

  7. Switch the workload to the target cluster.

  8. To reduce the consumption of resources you do not need, delete them:

    Manually created resources
    Resources created with Terraform
    1. Delete the Managed Service for PostgreSQL cluster.
    2. Ensure the transfer is stopped, then delete it.
    3. Delete the source and target endpoints.
    1. In the terminal window, go to the directory containing the infrastructure plan.

      Warning

      Make sure the directory has no Terraform manifests with the resources you want to keep. Terraform deletes all resources that were created using the manifests in the current directory.

    2. Delete resources:

      1. Run this command:

        terraform destroy
        
      2. Confirm deleting the resources and wait for the operation to complete.

      All the resources described in the Terraform manifests will be deleted.

Migrating data via logical replication

Logical replication is supported in PostgreSQL starting from version 10. Logical replication supports both migrations within the same PostgreSQL version and upgrades to higher ones.

In Managed Service for PostgreSQL clusters, subscriptions can be used by the database owner (a user created alongside the cluster) and users with the mdb_admin role for that cluster.
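
For example, the database owner can allow another user to manage subscriptions by granting the role (a sketch; migration_user is a placeholder username):

GRANT mdb_admin TO migration_user;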

Migration stages:

  1. Configure the source cluster.
  2. Export the database schema in the source cluster.
  3. Restore the database schema in the target cluster.
  4. Create a PostgreSQL publication and subscription.
  5. Migrate the PostgreSQL sequences once the replication is complete.
  6. Disable replication and transfer the workload.

If you no longer need the resources you created, delete them.

Required paid resources

The support cost for this solution includes:

  • Managed Service for PostgreSQL cluster fee: Covers the use of computing resources allocated to hosts and disk storage (see Managed Service for PostgreSQL pricing).
  • Fee for public IP addresses if public access is enabled for cluster hosts (see Virtual Private Cloud pricing).

Getting started

Create the required resources:

Manually
Terraform

Create a target Managed Service for PostgreSQL cluster with your preferred configuration. For this operation, the following requirements apply:

  • The PostgreSQL version must be the same as or higher than the version in the source cluster. Migration to an earlier PostgreSQL version is not supported.
  • When creating a cluster, specify the same database name as in the source cluster.
  • Enable the same PostgreSQL extensions as in the source cluster.
  1. If you do not have Terraform yet, install it.

  2. Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.

  3. Configure and initialize a provider. You do not need to create a provider configuration file manually; you can download it.

  4. Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.

  5. Download the data-migration-pgsql-mpg.tf configuration file to your current working directory.

    This file describes:

    • Network.
    • Subnet.
    • Security group and the rule allowing inbound cluster connections.
    • Managed Service for PostgreSQL cluster with public internet access.
  6. In the data-migration-pgsql-mpg.tf file, specify the following:

    • target_db_name: Database name.

    • pg-extensions: List of PostgreSQL extensions in the source cluster.

    • Target cluster settings:

      • target_pgsql_version: PostgreSQL version. This must be the same as or higher than the version in the source cluster.
      • target_user and target_password: Database owner username and password.
  7. Validate your Terraform configuration files using this command:

    terraform validate
    

    Terraform will display any configuration errors detected in your files.

  8. Create the required infrastructure:

    1. Run this command to view the planned changes:

      terraform plan
      

      If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

    2. If everything looks correct, apply the changes:

      1. Run this command:

        terraform apply
        
      2. Confirm updating the resources.

      3. Wait for the operation to complete.

    All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.

Configure the source cluster

  1. Make changes to the source cluster’s configuration and authentication settings. To do this, edit the postgresql.conf and pg_hba.conf files. On Debian and Ubuntu distributions, these files reside in the /etc/postgresql/<PostgreSQL_version>/main/ directory by default:

    1. Set the maximum number of user connections. To do this, update the max_connections setting in postgresql.conf:

      max_connections = <number_of_connections>
      

      Where <number_of_connections> is the maximum number of connections. This value must be equal to or greater than N + 1, where N is the number of all possible connections to your PostgreSQL server.

      In the N + 1 formula, the 1 represents an additional connection reserved for the logical replication subscription. If you plan to use multiple subscriptions, change this value accordingly.

      You can check the current number of connections using the pg_stat_activity system table:

      SELECT count(*) FROM pg_stat_activity;
      
    2. Configure the write-ahead log (WAL) level by setting wal_level to logical in postgresql.conf:

      wal_level = logical
      
    3. Optionally, configure SSL to enable data encryption and compression. To turn SSL on, specify the following in postgresql.conf:

      ssl = on
      
    4. Enable cluster access by updating the listen_addresses setting in postgresql.conf. For example, to make the source cluster accept connection requests from all IP addresses, specify the following:

      listen_addresses = '*'
      
    5. Configure authentication in the pg_hba.conf file:

      SSL
      Without SSL
      hostssl         all            all             <connection_IP_address>      md5
      hostssl         replication    all             <connection_IP_address>      md5
      
      host         all            all             <connection_IP_address>      md5
      host         replication    all             <connection_IP_address>      md5
      

      Where <connection_IP_address> can be either a specific IP address or an IP address range. For example, to allow access from the Yandex Cloud network, you can specify all public IP addresses in Yandex Cloud.

  2. If the source cluster uses a firewall, configure it to allow inbound connections from the required IP addresses.

  3. Restart PostgreSQL for the settings to apply:

    sudo systemctl restart postgresql
    
  4. Check the PostgreSQL status after restart:

    sudo systemctl status postgresql
    
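  5. Optionally, verify that the new settings are in effect. A quick check; run these in a psql session on the source cluster:

    SHOW wal_level;
    SHOW max_connections;
    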

Export the database schema in the source cluster

Using pg_dump, create a file containing the database schema that you will deploy in the target cluster.

pg_dump -h <IP_address_or_FQDN_for_master_host_of_source_cluster> \
        -U <username> \
        -p <port> \
        --schema-only \
        --no-privileges \
        --no-subscriptions \
        -d <DB_name> \
        -Fd -f /tmp/db_dump

This export command excludes all privilege and role data to prevent conflicts with the target Yandex Cloud database configuration. If you need additional database users, create them.
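
Before restoring the dump, you can inspect its contents. A small check on the directory-format dump created above; --list prints the dump's table of contents without restoring anything:

pg_restore --list /tmp/db_dump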

Restore the database schema in the target cluster

Use pg_restore to restore the database schema in the target cluster:

pg_restore -h <IP_address_or_FQDN_for_master_host_of_target_cluster> \
           -U <username> \
           -p 6432 \
           -Fd -v \
           --single-transaction \
           -s --no-privileges \
           -d <DB_name> /tmp/db_dump

Create a publication and a subscription

For logical replication to work, create a publication (a group of logically replicated tables) on the source cluster and a subscription (a description of the connection to another database) on the target cluster.

  1. On the source cluster, create a publication for all tables in the database. When migrating multiple databases, you need to create a separate publication for each one.

    Note

    Creating a publication for all tables requires superuser privileges, while creating one for specific tables does not. To learn more about creating publications, see this PostgreSQL article.

    Query:

    CREATE PUBLICATION p_data_migration FOR ALL TABLES;
    
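    As the Note above mentions, publications for specific tables do not require superuser privileges. A sketch of that variant, where table1 and table2 are placeholder table names:

    CREATE PUBLICATION p_data_migration FOR TABLE table1, table2;
    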
  2. On the target Managed Service for PostgreSQL cluster host, create a subscription with the connection string pointing to the source publication. To learn more about creating subscriptions, see this PostgreSQL article.

    Query with SSL:

    CREATE SUBSCRIPTION s_data_migration CONNECTION 'host=<source_cluster_address> port=<port> user=<username> sslmode=verify-full dbname=<DB_name>' PUBLICATION p_data_migration;
    

    Query without SSL:

    CREATE SUBSCRIPTION s_data_migration CONNECTION 'host=<source_cluster_address> port=<port> user=<username> sslmode=disable dbname=<DB_name>' PUBLICATION p_data_migration;
    

    Tip

    By default, the CREATE SUBSCRIPTION statement also creates a replication slot. To attach a subscription to an existing replication slot and avoid creating a new one, add create_slot = false to the statement.
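
    A sketch of that variant, assuming a replication slot named my_slot already exists on the source:

    CREATE SUBSCRIPTION s_data_migration CONNECTION 'host=<source_cluster_address> port=<port> user=<username> sslmode=verify-full dbname=<DB_name>' PUBLICATION p_data_migration WITH (create_slot = false, slot_name = 'my_slot');
    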

  3. To get the per-table replication status, check the pg_subscription_rel system catalog. For the general replication status, check pg_stat_subscription on the target cluster and pg_stat_replication on the source cluster.

    SELECT * FROM pg_subscription_rel;
    

    First, inspect the srsubstate field. A value of r in this field indicates that the initial synchronization is complete and the databases are ready for replication.
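
    To make the output easier to read, you can resolve table names and show only tables that are not yet in the ready state (a sketch using standard catalog casts):

    SELECT srrelid::regclass AS table_name, srsubstate
    FROM pg_subscription_rel
    WHERE srsubstate <> 'r';
    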

Migrate PostgreSQL sequences once the replication is complete

To complete synchronization between the source cluster and the target cluster:

  1. Set the source cluster to read-only mode.
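
    One way to do this is shown below (a sketch; <DB_name> is your database, the setting affects only new sessions, and clients can still override it explicitly, so stop the workload first):

    ALTER DATABASE <DB_name> SET default_transaction_read_only = on;
    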

  2. In the source cluster, create a database dump that includes sequences:

    pg_dump -h <IP_address_or_FQDN_for_master_host_of_source_cluster> \
            -U <username> \
            -p <port> \
            -d <DB_name> \
            --data-only -t '*.*_seq' > /tmp/seq-data.sql
    

    Note the *.*_seq pattern we use. If your source database has sequences that do not match this pattern, use a different pattern to export them.

    To learn more about patterns, see this PostgreSQL article.
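
    To list the sequences actually present in the source database and check them against the pattern, you can query the pg_sequences view (available since PostgreSQL 10):

    SELECT schemaname, sequencename FROM pg_sequences ORDER BY 1, 2;
    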

  3. Restore the dump containing sequences into your target cluster:

    psql -h <IP_address_or_FQDN_for_master_host_of_target_cluster> \
         -U <username> \
         -p 6432 \
         -d <DB_name> \
         < /tmp/seq-data.sql
    

Delete the subscription and transfer the workload

  1. Delete the subscription in the target cluster:

    DROP SUBSCRIPTION s_data_migration;
    
  2. Switch the workload to the target cluster.

Delete the resources you created

To avoid paying for the resources you no longer need, delete them:

Manually
Terraform

Delete the Managed Service for PostgreSQL cluster.

  1. In the terminal window, go to the directory containing the infrastructure plan.

    Warning

    Make sure the directory has no Terraform manifests with the resources you want to keep. Terraform deletes all resources that were created using the manifests in the current directory.

  2. Delete resources:

    1. Run this command:

      terraform destroy
      
    2. Confirm deleting the resources and wait for the operation to complete.

    All the resources described in the Terraform manifests will be deleted.

Migrating data using a logical dump (backup and restore)

In the source cluster, use pg_dump to create a dump of the database you want to migrate. Use pg_restore to restore the dump in the target cluster.

Note

To use pg_restore, you may need to install the pg_repack database extension.

Migration stages:

  1. Create a dump of the database you want to migrate.
  2. Optionally, create a Yandex Cloud VM and upload your database dump to it.
  3. Restore the data from the dump into the target cluster.

If you no longer need the resources you created, delete them.

Required paid resources

The support cost for this solution includes:

  • Managed Service for PostgreSQL cluster fee: Covers the use of computing resources allocated to hosts and disk storage (see Managed Service for PostgreSQL pricing).
  • Fee for using public IP addresses if public access is enabled for cluster hosts (see Virtual Private Cloud pricing).
  • VM fee: Covers the use of computing resources, the OS, and the storage (see Compute Cloud pricing).
  • Fee for a public IP address assigned to your VM (see Virtual Private Cloud pricing).

Getting started

Create the required resources:

Manually
Terraform
  1. Create a target Managed Service for PostgreSQL cluster with your preferred configuration. The following target cluster settings must match the corresponding settings in the source cluster:

    • PostgreSQL version.

    • Username.

      Note

      You may use different usernames for the source and the target. This, however, may result in an error when restoring a dump. For more information, see Migrating and restoring a PostgreSQL cluster.

    • PostgreSQL extensions.

  2. Optionally, create a VM running Ubuntu 20.04 LTS with the following settings:

    • Disks and file storages → Size: Sufficient to store both archived and unarchived dumps.

      The recommended size is at least twice the combined size of the dump and its archive.

    • Network settings:

      • Subnet: Choose a subnet within the target cluster’s cloud network.
      • Public IP address: Select either Auto or a reserved IP address from the list.
  3. If you use security groups for the staging VM and the Managed Service for PostgreSQL cluster, configure them.

  1. If you do not have Terraform yet, install it.

  2. Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.

  3. Configure and initialize a provider. You do not need to create a provider configuration file manually; you can download it.

  4. Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.

  5. Download the data-restore-pgsql-mpg.tf configuration file to your current working directory.

    This file describes:

    • Network.
    • Subnet.
    • Security group and the rule allowing inbound cluster connections.
    • Managed Service for PostgreSQL cluster with public internet access.
    • Virtual machine with public internet access. This is an optional setting.
  6. In the data-restore-pgsql-mpg.tf file, specify the following:

    • pg-extensions: List of PostgreSQL extensions in the source cluster.

    • Target cluster settings:

      • target_pgsql_version: PostgreSQL version. This must be the same as or higher than the version in the source cluster.

      • target_db_name: Database name.

      • target_user: Database owner username. This username must match the username in the source cluster.

        Note

        You may use different usernames for the source and the target. This, however, may result in an error when restoring a dump. For more information, see Migrating and restoring a PostgreSQL cluster.

      • target_password: Database owner password.

    • Virtual machine settings (optional):

      • vm_image_id: Public Ubuntu image ID (non-GPU), e.g., for Ubuntu 20.04 LTS.
      • vm_username and vm_public_key: Username and absolute path to the public key used to access the virtual machine. By default, the Ubuntu 20.04 LTS image ignores the specified username and creates a user named ubuntu instead. Use that username to connect to the VM.
  7. Validate your Terraform configuration files using this command:

    terraform validate
    

    Terraform will display any configuration errors detected in your files.

  8. Create the required infrastructure:

    1. Run this command to view the planned changes:

      terraform plan
      

      If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

    2. If everything looks correct, apply the changes:

      1. Run this command:

        terraform apply
        
      2. Confirm updating the resources.

      3. Wait for the operation to complete.

    All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.

Create a database dump

  1. Switch the database to read-only mode.

  2. Create a dump using pg_dump. To speed up the process, run pg_dump with multiple parallel jobs: pass the number of available CPU cores in the --jobs argument (parallel dumps require the directory format, which --format=d provides):

    pg_dump --host=<IP_address_or_FQDN_for_master_host_of_source_cluster> \
            --port=<port> \
            --username=<username> \
            --jobs=<number_of_CPU_cores> \
            --format=d \
            --dbname=<DB_name> \
            --file=db_dump
    

Optionally, upload the dump to a Yandex Cloud VM

You need to transfer your data to the staging Compute Cloud VM in one of the following situations:

  • Your Managed Service for PostgreSQL cluster is not accessible from the internet.
  • Your hardware or your connection to the cluster in Yandex Cloud is unreliable.

The required amount of RAM and the number of CPU cores depend on the volume of data transferred and the required transfer speed.

To prepare your virtual machine for dump recovery:

  1. In the management console, create a new VM using an Ubuntu 20.04 image from Marketplace. The VM configuration will depend on the size of the database you want to migrate. The minimum configuration (1 CPU core, 2 GB RAM, 10 GB disk space) should be sufficient to migrate a database of up to 1 GB. Larger databases require more RAM and disk space. The disk space should be at least twice the database size.

    The VM must be in the same network and availability zone as the PostgreSQL cluster. The VM must have a public IP address to enable uploading the dump from outside Yandex Cloud.

  2. Set up the PostgreSQL apt repository.

  3. Install the PostgreSQL client and database management tools:

    sudo apt install postgresql-client-common && \
    sudo apt install postgresql-client-<PostgreSQL_version>
    
  4. Archive the dump:

    tar -cvzf db_dump.tar.gz db_dump
    
  5. Transfer the dump archive to the VM using scp or another tool of your choice:

    scp db_dump.tar.gz <VM_user_name>@<VM_public_address>:~/db_dump.tar.gz
    
  6. Connect to the VM.

  7. Unpack the dump archive:

    tar -xzf db_dump.tar.gz
    

Restore data from the dump into the target cluster

Restore the database dump with pg_restore.

The pg_restore version must match the pg_dump version. Furthermore, its major version must be at least as high as the major version of the target database.

That is, to restore a dump of a PostgreSQL 14, 15, or 16 database, use pg_restore 14, 15, or 16, respectively.
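
You can check both versions before starting. A quick sketch; the connection parameters mirror the restore command below:

pg_restore --version
psql "host=<IP_address_or_FQDN_for_master_host_of_target_cluster> port=6432 user=<username> dbname=<DB_name>" -c 'SELECT version();'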

pg_restore --host=<IP_address_or_FQDN_for_master_host_of_target_cluster> \
           --username=<username> \
           --dbname=<DB_name> \
           --port=6432 \
           --format=d \
           --verbose \
           --single-transaction \
           --no-privileges \
           db_dump

If you only need to restore a single schema, add --schema=<schema_name> to the command. Without this argument, the command will only succeed when run by the database owner.

If the recovery process fails due to insufficient permissions for creating or updating extensions, remove the --single-transaction flag from the command. In this case, the system will ignore such errors:

pg_restore: warning: errors ignored on restore: 3

Verify that the errors only affect extensions and check the integrity of the restored data.
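
For example, you can list the extensions installed in the target database and compare them with the source (a sketch; run it in psql on each cluster):

SELECT extname, extversion FROM pg_extension ORDER BY extname;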

Delete the resources you created

To avoid paying for the resources you no longer need, delete them:

Manually
Terraform
  • Delete the Yandex Managed Service for PostgreSQL cluster.
  • If you created a staging virtual machine, delete it.
  • If you reserved public static IP addresses, release and delete them.
  1. In the terminal window, go to the directory containing the infrastructure plan.

    Warning

    Make sure the directory has no Terraform manifests with the resources you want to keep. Terraform deletes all resources that were created using the manifests in the current directory.

  2. Delete resources:

    1. Run this command:

      terraform destroy
      
    2. Confirm deleting the resources and wait for the operation to complete.

    All the resources described in the Terraform manifests will be deleted.
