Migrating databases from Greenplum® to PostgreSQL
You can migrate a database from Greenplum® to a PostgreSQL cluster using Yandex Data Transfer.
To transfer a database from Greenplum® to PostgreSQL:

- Set up your transfer.
- Activate the transfer.
- Verify replication after reactivation.

If you no longer need the resources you created, delete them.
Required paid resources
The support cost for this solution includes:
- Greenplum® cluster fee: Use of computing resources allocated to hosts and disk space (see Yandex MPP Analytics for PostgreSQL pricing).
- Managed Service for PostgreSQL cluster fee: Use of computing resources allocated to hosts and disk space (see Managed Service for PostgreSQL pricing).
- Fee for public IP addresses assigned to cluster hosts (see Virtual Private Cloud pricing).
- Fee per transfer: Use of computing resources and number of transferred data rows (see Data Transfer pricing).
Getting started
In our example, we will create all the required resources in Yandex Cloud. Set up the infrastructure:

Manually

- Create a Greenplum® source cluster in any suitable configuration with the `gp-user` admin username and publicly available hosts.
- Create a Yandex Managed Service for PostgreSQL target cluster in any suitable configuration with publicly available hosts. When creating the cluster, specify:
  - Username: `pg-user`
  - DB name: `db1`
- If using security groups, make sure they are configured correctly and allow inbound connections to the clusters. A quick connectivity check follows this list.
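One way to confirm that the security groups actually admit traffic is to try reaching each cluster with `psql` (Greenplum® speaks the PostgreSQL wire protocol). This is a minimal sketch: the host FQDNs and ports are placeholders, so substitute the connection details shown for your own clusters.

```bash
# Connectivity check for the Greenplum® source cluster (placeholder host and port).
psql "host=<Greenplum_master_host_FQDN> port=5432 dbname=postgres user=gp-user sslmode=require" -c "SELECT 1;"

# Connectivity check for the Managed Service for PostgreSQL target cluster
# (placeholder host; managed PostgreSQL clusters typically listen on port 6432).
psql "host=<PostgreSQL_host_FQDN> port=6432 dbname=db1 user=pg-user sslmode=require" -c "SELECT 1;"
```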
Terraform

- If you do not have Terraform yet, install it.
- Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.
- Configure and initialize a provider. There is no need to create a provider configuration file manually: you can download it.
- Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.
- Download the `greenplum-postgresql.tf` configuration file to the same working directory.

  This file describes:

  - Networks and subnets where your clusters will be hosted.
  - Security groups to connect to clusters.
  - Greenplum® source cluster in Yandex MPP Analytics for PostgreSQL.
  - Managed Service for PostgreSQL target cluster.
  - Target endpoint.
  - Transfer.
- In the `greenplum-postgresql.tf` file, specify the admin user passwords and the Greenplum® and PostgreSQL versions.
- Run the `terraform init` command in the directory with the configuration file. This command initializes the provider specified in the configuration files and enables you to use its resources and data sources.
- Make sure the Terraform configuration files are correct using this command:

  ```bash
  terraform validate
  ```

  Terraform will show any errors found in your configuration files.
- Create the required infrastructure:

  - Run this command to view the planned changes:

    ```bash
    terraform plan
    ```

    If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

  - If everything looks correct, apply the changes:

    - Run this command:

      ```bash
      terraform apply
      ```

    - Confirm updating the resources.
    - Wait for the operation to complete.

  All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console. A condensed shell version of these steps is shown below.
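For reference, here is the Terraform workflow from this list as one shell session. The `-var` flags are an assumption: they only work if `greenplum-postgresql.tf` declares matching input variables (the names `gp_password` and `pg_password` below are hypothetical); otherwise, edit the file directly as described above.

```bash
# Initialize the provider specified in the configuration files.
terraform init

# Check the configuration files for syntax errors.
terraform validate

# Preview the planned changes. The variable names below are hypothetical;
# check which variables greenplum-postgresql.tf actually declares.
terraform plan \
  -var="gp_password=<Greenplum_admin_password>" \
  -var="pg_password=<PostgreSQL_admin_password>"

# Create the infrastructure; confirm when prompted.
terraform apply \
  -var="gp_password=<Greenplum_admin_password>" \
  -var="pg_password=<PostgreSQL_admin_password>"
```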
Set up your transfer
- Create a Greenplum®-type source endpoint and configure it using the following settings:

  - Connection type: Managed Service for Greenplum cluster
  - Managed Service for Greenplum cluster: `<Greenplum®_source_cluster_name>` from the drop-down list
  - Database: `postgres`
  - User: `gp-user`
  - Password: `<user_password>`
  - Service object schema: `public`

- Create a target endpoint and set up the transfer:
Manually

- Create a PostgreSQL-type target endpoint and specify its cluster connection settings:

  - Installation type: Yandex Managed Service for PostgreSQL cluster
  - Managed Service for PostgreSQL cluster: `<PostgreSQL_target_cluster_name>` from the drop-down list
  - Database: `db1`
  - User: `pg-user`
  - Password: `<user_password>`

- Create a transfer of the Snapshot type that will use the new endpoints.

  While real-time replication is not supported for this endpoint pair, you can configure regular copying when creating the transfer. To do this, in the Snapshot field under Transfer parameters, select Regular and specify the copy interval. The transfer will activate automatically after the specified interval.

  Warning

  Before setting up regular copying, verify that the target endpoint is configured with the `DROP` or `TRUNCATE` cleanup policy to prevent data duplication.
Terraform

- In the `greenplum-postgresql.tf` file, specify the following variables:

  - `gp_source_endpoint_id`: Source endpoint ID.
  - `transfer_enabled`: Set to `1` to create a transfer.

- Make sure the Terraform configuration files are correct using this command:

  ```bash
  terraform validate
  ```

  Terraform will show any errors found in your configuration files.

- Create the required infrastructure:

  - Run this command to view the planned changes:

    ```bash
    terraform plan
    ```

    If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

  - If everything looks correct, apply the changes (a command-line variant follows this list):

    - Run this command:

      ```bash
      terraform apply
      ```

    - Confirm updating the resources.
    - Wait for the operation to complete.
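Because `gp_source_endpoint_id` and `transfer_enabled` are Terraform variables, you can alternatively pass them on the command line instead of editing the file. A minimal sketch; the endpoint ID placeholder is yours to fill in:

```bash
# Plan and apply with the transfer variables set on the command line
# rather than in greenplum-postgresql.tf.
terraform plan \
  -var="gp_source_endpoint_id=<source_endpoint_ID>" \
  -var="transfer_enabled=1"

terraform apply \
  -var="gp_source_endpoint_id=<source_endpoint_ID>" \
  -var="transfer_enabled=1"
```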
Activate the transfer
- Connect to the Greenplum® cluster, create a table named `x_tab`, and populate it with data (a non-interactive shell variant of these steps follows this procedure):

  ```sql
  CREATE TABLE x_tab
  (
      id NUMERIC,
      name CHARACTER(5)
  );
  CREATE INDEX ON x_tab (id);
  INSERT INTO x_tab (id, name) VALUES
      (40, 'User1'),
      (41, 'User2'),
      (42, 'User3'),
      (43, 'User4'),
      (44, 'User5');
  ```
- Activate the transfer and wait for its status to change to Completed.
- To check that the data was transferred correctly, connect to the Managed Service for PostgreSQL target cluster and make sure that the columns of the `x_tab` table in the `db1` database match those of the `x_tab` table in the source database:

  ```sql
  SELECT id, name FROM db1.public.x_tab;
  ```

  ```text
  ┌─id─┬─name──┐
  │ 40 │ User1 │
  │ 41 │ User2 │
  │ 42 │ User3 │
  │ 43 │ User4 │
  │ 44 │ User5 │
  └────┴───────┘
  ```
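If you prefer to script the steps above rather than work in an interactive session, you can feed the same statements to `psql`. The host FQDNs and ports are placeholders; substitute your clusters' connection details.

```bash
# Create and populate the table on the Greenplum® source (placeholder host and port).
psql "host=<Greenplum_master_host_FQDN> port=5432 dbname=postgres user=gp-user" <<'SQL'
CREATE TABLE x_tab ( id NUMERIC, name CHARACTER(5) );
CREATE INDEX ON x_tab (id);
INSERT INTO x_tab (id, name) VALUES
    (40, 'User1'), (41, 'User2'), (42, 'User3'), (43, 'User4'), (44, 'User5');
SQL

# After the transfer completes, check the rows on the PostgreSQL target.
psql "host=<PostgreSQL_host_FQDN> port=6432 dbname=db1 user=pg-user" \
  -c "SELECT id, name FROM public.x_tab ORDER BY id;"
```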
Verify replication after reactivation
- In the target endpoint parameters, select either `DROP` or `TRUNCATE` as the cleanup policy.
- In the `x_tab` table, delete the row with ID `41` and update the one with ID `42`:

  ```sql
  DELETE FROM x_tab WHERE id = 41;
  UPDATE x_tab SET name = 'Key3' WHERE id = 42;
  ```

- Reactivate the transfer and wait for its status to change to Completed.
- Check the changes in the `x_tab` table of the PostgreSQL target:

  ```sql
  SELECT id, name FROM db1.public.x_tab;
  ```

  ```text
  ┌─id─┬─name──┐
  │ 42 │ Key3  │
  │ 40 │ User1 │
  │ 43 │ User4 │
  │ 44 │ User5 │
  └────┴───────┘
  ```
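Beyond eyeballing the rows, you can compare a content checksum on both sides; identical hashes mean the `id`/`name` contents match. This is a sketch with placeholder hosts, and it assumes your Greenplum® version supports `ORDER BY` inside aggregate calls:

```bash
# Checksum of x_tab on the Greenplum® source (placeholder host and port).
psql "host=<Greenplum_master_host_FQDN> port=5432 dbname=postgres user=gp-user" -t -c \
  "SELECT md5(string_agg(id || ':' || name, ',' ORDER BY id)) FROM public.x_tab;"

# Checksum of x_tab on the PostgreSQL target (placeholder host).
psql "host=<PostgreSQL_host_FQDN> port=6432 dbname=db1 user=pg-user" -t -c \
  "SELECT md5(string_agg(id || ':' || name, ',' ORDER BY id)) FROM public.x_tab;"
```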
Delete the resources you created
Some resources are not free of charge. Delete the resources you no longer need to avoid paying for them:
- Make sure the transfer status is Completed, then delete the transfer.
- Delete the clusters:

  Manually

  Delete the Greenplum® source cluster and the Managed Service for PostgreSQL target cluster.

  Terraform

  - In the terminal window, go to the directory containing the infrastructure plan.

    Warning

    Make sure the directory has no Terraform manifests with the resources you want to keep. Terraform deletes all resources that were created using the manifests in the current directory.

  - Delete the resources:

    - Run this command:

      ```bash
      terraform destroy
      ```

    - Confirm deleting the resources and wait for the operation to complete.

    All the resources described in the Terraform manifests will be deleted.