PostgreSQL change data capture and delivery to Yandex Data Streams
You can track data changes in a Managed Service for PostgreSQL source cluster and send them to a Data Streams target stream using change data capture (CDC).
To set up CDC using Data Transfer:
- Set up your transfer.
- Activate the transfer.
- Test replication.
If you no longer need the resources you created, delete them.
Required paid resources
- Managed Service for PostgreSQL cluster: computing resources allocated to hosts, storage, and backup size (see Managed Service for PostgreSQL pricing).
- Public IP addresses if public access is enabled for cluster hosts (see Virtual Private Cloud pricing).
- Managed Service for YDB database (see Managed Service for YDB pricing). The cost depends on the deployment mode:
  - In serverless mode, you pay for data operations and the storage volume, including stored backups.
  - In dedicated instance mode, you pay for the computing resources allocated to the database, the storage size, and backups.
- Data Streams (see Data Streams pricing). The cost depends on the pricing model:
  - Allocated resources: you pay a fixed hourly rate for the configured throughput limit and message retention period, plus a fee for each unit of data actually written.
  - On-demand: you pay for read/write operations, the amount of data read and written, and the storage actually used by messages still within their retention period.
Getting started
Set up the infrastructure:
Manually
- Create a Managed Service for PostgreSQL source cluster using any suitable configuration with publicly accessible hosts. Specify the following settings:
  - DB name: `db1`
  - Username: `pg-user`
- Configure security groups, ensuring they allow cluster connections.
- Grant the `mdb_replication` role to `pg-user`. A quick way to check the connection and this grant is sketched after this list.
- Create a Managed Service for YDB database named `ydb-example` with your preferred configuration.
- Create a service account named `yds-sa` with the `yds.editor` role. The transfer will use it to access Data Streams.
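If you want a quick way to confirm from your workstation that the cluster accepts connections and that `pg-user` holds the `mdb_replication` role, the sketch below uses `psql` and the standard `pg_has_role()` function. The host name pattern, port, and SSL mode are assumptions based on typical Managed Service for PostgreSQL connection settings, not values taken from this tutorial; substitute your own.

```bash
# A minimal connectivity and privilege check (a sketch, not the official procedure).
# Replace <cluster_ID> with your cluster ID; adjust sslmode and certificates as needed.
psql "host=c-<cluster_ID>.rw.mdb.yandexcloud.net port=6432 sslmode=require dbname=db1 user=pg-user" \
  -c "SELECT pg_has_role('pg-user', 'mdb_replication', 'member');"
# Expected output: a single column containing "t" if the role has been granted.
```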
Terraform
- If you do not have Terraform yet, install it.
- Get the authentication credentials. You can add them to environment variables (one approach is sketched after this list) or specify them later in the provider configuration file.
- Configure and initialize a provider. There is no need to create a provider configuration file manually: you can download it.
- Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.
- Download the `postgresql-yds.tf` configuration file to your current working directory. This file describes:
  - Network.
  - Subnet.
  - Security group required for cluster access.
  - Managed Service for PostgreSQL source cluster.
  - Managed Service for YDB database.
  - Service account that will be used to access Data Streams.
  - Source endpoint.
  - Transfer.
- In the `postgresql-yds.tf` file, specify the PostgreSQL user password.
- Validate your Terraform configuration files with the `terraform validate` command. Terraform will display any configuration errors detected in your files.
- Create the required infrastructure:
  - Run `terraform plan` to view the planned changes. If you described the configuration correctly, the terminal will display a list of the resources to create and their parameters. This is a verification step that does not apply changes to your resources.
  - If everything looks correct, apply the changes:
    - Run `terraform apply`.
    - Confirm creating the resources.
    - Wait for the operation to complete.

  All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.
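As a sketch of the environment-variable approach mentioned above: the Yandex Cloud Terraform provider conventionally reads its credentials from variables such as `YC_TOKEN`, `YC_CLOUD_ID`, and `YC_FOLDER_ID`. Treat the exact variable set as an assumption and check the provider documentation for your version.

```bash
# Export the provider credentials so they do not have to be stored in the
# configuration file. All values below are placeholders.
export YC_TOKEN="<OAuth_or_IAM_token>"
export YC_CLOUD_ID="<cloud_ID>"
export YC_FOLDER_ID="<folder_ID>"

# Initialize Terraform in the working directory that contains postgresql-yds.tf.
terraform init
```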
Set up your transfer
- Connect to the Managed Service for PostgreSQL cluster. In the `db1` database, create a table named `measurements` and populate it with data:

  ```sql
  CREATE TABLE measurements (
      device_id varchar(200) NOT NULL,
      datetime timestamp NOT NULL,
      latitude real NOT NULL,
      longitude real NOT NULL,
      altitude real NOT NULL,
      speed real NOT NULL,
      battery_voltage real,
      cabin_temperature real NOT NULL,
      fuel_level real,
      PRIMARY KEY (device_id)
  );

  INSERT INTO measurements VALUES
      ('iv9a94th6rzt********', '2022-06-05 17:27:00', 55.70329032, 37.65472196, 427.5, 0, 23.5, 17, NULL),
      ('rhibbh3y08qm********', '2022-06-06 09:49:54', 55.71294467, 37.66542005, 429.13, 55.5, NULL, 18, 32);
  ```
- Create a Data Streams target endpoint with the following settings:
  - Database: `ydb-example`
  - Stream: `mpg-stream`
  - Service account: `yds-sa`
- Create a source endpoint and set up the transfer:

  Manually
  - Create a PostgreSQL-type source endpoint and configure it using the following settings:
    - Installation type: Managed Service for PostgreSQL cluster.
    - Managed Service for PostgreSQL cluster: <PostgreSQL_source_cluster_name> from the drop-down list.
    - Database: `db1`.
    - User: `pg-user`.
    - Password: the `pg-user` password.
  - Create a Replication-type transfer configured to use the new endpoints.
  Terraform
  - In the `postgresql-yds.tf` file, specify the following variables:
    - `yds_endpoint_id`: target endpoint ID.
    - `transfer_enabled`: `1` to create a transfer.

    If you prefer to pass these values on the command line, see the sketch after this list.
  - Validate your Terraform configuration files with the `terraform validate` command. Terraform will display any configuration errors detected in your files.
  - Create the required infrastructure:
    - Run `terraform plan` to view the planned changes. If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.
    - If everything looks correct, apply the changes:
      - Run `terraform apply`.
      - Confirm updating the resources.
      - Wait for the operation to complete.
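If you would rather not edit `postgresql-yds.tf`, Terraform's standard `-var` flag can supply the same values at apply time. This is only a sketch: the variable names come from the step above, and the endpoint ID is a placeholder.

```bash
# Pass the transfer variables on the command line instead of editing the file.
terraform apply \
  -var="yds_endpoint_id=<target_endpoint_ID>" \
  -var="transfer_enabled=1"
```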
Activate the transfer
- Activate the transfer and wait for its status to change to Replicating.
- Make sure the data from the source has been moved to the Data Streams stream:
  Management console
  - In the management console, select Data Streams.
  - Select the target stream from the list and navigate to Data viewer.
  - Make sure `shard-000000` now contains messages with the source table rows. To take a closer look at a message, open it in the viewer.

  AWS CLI
  - Install the AWS CLI.
  - Configure the environment for Data Streams.
  - Read the stream data; one AWS CLI approach is sketched after this list.
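A minimal read of the stream with the AWS CLI might look like the sketch below. The endpoint URL and the stream path format are assumptions about the Kinesis-compatible Data Streams API, and the placeholders are not values from this tutorial; substitute your own folder ID, database ID, and stream name.

```bash
# Get an iterator for the first shard, starting from the oldest available record.
aws kinesis get-shard-iterator \
  --endpoint-url "https://yds.serverless.yandexcloud.net" \
  --stream-name "/ru-central1/<folder_ID>/<database_ID>/mpg-stream" \
  --shard-id shard-000000 \
  --shard-iterator-type TRIM_HORIZON

# Read the records using the iterator returned by the previous command.
# Note that record payloads are returned Base64-encoded.
aws kinesis get-records \
  --endpoint-url "https://yds.serverless.yandexcloud.net" \
  --shard-iterator "<iterator_from_previous_command>"
```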
Test replication
- Connect to the source cluster.
- Add a new row to the `measurements` table:

  ```sql
  INSERT INTO measurements VALUES
      ('ad02l5ck6sdt********', '2022-06-05 17:27:00', 55.70329032, 37.65472196, 427.5, 0, 23.5, 19, 45);
  ```

- Verify that the new row has appeared in the data stream; the sketch after this list shows one way to check with the AWS CLI.
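If you are checking with the AWS CLI, one way to see only the newly replicated record is to request a `LATEST` shard iterator before running the `INSERT` and read with it afterwards. The same endpoint and stream-path assumptions as in the previous sketch apply.

```bash
# 1. Before inserting the row, get an iterator positioned at the current end of the shard.
aws kinesis get-shard-iterator \
  --endpoint-url "https://yds.serverless.yandexcloud.net" \
  --stream-name "/ru-central1/<folder_ID>/<database_ID>/mpg-stream" \
  --shard-id shard-000000 \
  --shard-iterator-type LATEST

# 2. Run the INSERT on the source cluster (previous step).

# 3. Read only the records written after the iterator was issued.
aws kinesis get-records \
  --endpoint-url "https://yds.serverless.yandexcloud.net" \
  --shard-iterator "<iterator_from_step_1>"
```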
Delete the resources you created
Note
Before deleting the created resources, deactivate the transfer.
To reduce the consumption of resources you do not need, delete them:
- Delete the other resources using the same method you used to create them. If you created them with Terraform:
  - In the terminal window, go to the directory containing the infrastructure plan.

    Warning

    Make sure the directory has no Terraform manifests with resources you want to keep: Terraform deletes all resources created from the manifests in the current directory.

  - Delete the resources:
    - Run the `terraform destroy` command.
    - Confirm deleting the resources and wait for the operation to complete.

    All the resources described in the Terraform manifests will be deleted.