Migrating a database from a third-party MySQL® cluster to a Yandex Managed Service for MySQL® cluster
There are two ways to migrate data from a third-party source cluster to a Managed Service for MySQL® target cluster:
- Transferring data using Yandex Data Transfer.
  This method is easy to configure, does not require an intermediate VM, and allows you to transfer the entire database without interrupting user service. To use it, allow connections to the source cluster from the internet (see the sketch after this list for one way to do this).
  For more information, see What tasks Yandex Data Transfer is used for.
- Transferring data by creating and restoring a logical dump.
  A logical dump is a file with a set of commands that you can run one by one to restore the state of a database. To get a complete logical dump, switch the source cluster to read-only mode before creating it.
  Use this method only if, for some reason, it is not possible to migrate data using Data Transfer.
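How you allow external connections depends on where the source cluster runs. Below is a minimal sketch for a self-managed MySQL® server on Ubuntu; the user name migration_user, the configuration file path, and the grants are illustrative assumptions, so adapt them to your environment and make sure port 3306 is open in your firewall:

```bash
# Make MySQL listen on all network interfaces
# (the configuration file path may differ in your installation).
sudo sed -i 's/^bind-address.*/bind-address = 0.0.0.0/' /etc/mysql/mysql.conf.d/mysqld.cnf
sudo systemctl restart mysql

# Create a user that may connect from remote hosts and grant it access
# to the database you plan to migrate. For the Snapshot and increment
# transfer type, the user typically also needs replication privileges.
mysql --user=root --password --execute="
  CREATE USER 'migration_user'@'%' IDENTIFIED BY '<password>';
  GRANT ALL PRIVILEGES ON <DB_name>.* TO 'migration_user'@'%';
  GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'migration_user'@'%';
  FLUSH PRIVILEGES;"
```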
Transferring data using Data Transfer
To transfer a database from MySQL® to Managed Service for MySQL®:
- Start data transfer.
- Finish data transfer.
If you no longer need the resources you created, delete them.
Start data transfer
- Prepare the infrastructure and start the data transfer:

Manually
- Create a Managed Service for MySQL® target cluster in any suitable configuration. Note the following:
  - The MySQL® version must be the same as or higher than the version in the source cluster.
    Transferring data with a MySQL® major version upgrade is possible but not guaranteed. For more information, see the MySQL® documentation.
    You cannot perform migration while downgrading the MySQL® version.
  - The SQL mode must be the same as in the source cluster.
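  To check these values on the source cluster before creating the target, you can query them directly. A minimal sketch, assuming the mysql client is installed on a host that can reach the source cluster:

  ```bash
  mysql \
      --host=<source_FQDN_or_IP_address> \
      --user=<username> \
      --password \
      --execute="SELECT VERSION() AS version, @@GLOBAL.sql_mode AS sql_mode;"
  ```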
- Create a source endpoint with the following parameters:
  - Database type: MySQL.
  - Endpoint parameters → Connection settings: Custom installation.
    Specify the parameters for connecting to the source cluster.
- Create a target endpoint with the following parameters:
  - Database type: MySQL.
  - Endpoint parameters → Connection settings: Managed Service for MySQL cluster.
    Select the target cluster from the list and specify its connection settings.
- Create a transfer of the Snapshot and increment type that will use the created endpoints.
- Activate your transfer.
Warning
Do not make any changes to the data schema in the source or target clusters while the data transfer is running. For more information, see Working with databases during transfer.
Terraform
- If you do not have Terraform yet, install it.
- Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.
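  For example, the Yandex Cloud Terraform provider can pick up credentials from environment variables. A minimal sketch, assuming you authenticate with an OAuth or IAM token and already know your cloud and folder IDs:

  ```bash
  export YC_TOKEN="<OAuth_or_IAM_token>"
  export YC_CLOUD_ID="<cloud_ID>"
  export YC_FOLDER_ID="<folder_ID>"
  ```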
- Configure and initialize a provider. There is no need to create a provider configuration file manually; you can download it.
- Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.
- Download the data-transfer-mysql-mmy.tf configuration file to the same working directory.
  This file describes:
- Network.
- Subnet.
- Security group and the rule required to connect to a cluster.
- Managed Service for MySQL® target cluster.
- Source endpoint.
- Target endpoint.
- Transfer.
- Specify the following in the data-transfer-mysql-mmy.tf file:
  - Target cluster parameters, also used as target endpoint parameters:
    - target_mysql_version: MySQL® version. Must be the same as or higher than the version in the source cluster.
    - target_sql_mode: SQL mode. Must be the same as in the source cluster.
    - target_db_name: Database name.
    - target_user and target_password: Name and password of the database owner.
- Check that the Terraform configuration files are correct using this command:

  terraform validate

  If there are any errors in the configuration files, Terraform will point them out.
- Create the required infrastructure:
  - Run the following command to view the planned changes:

    terraform plan

    If the resource configuration descriptions are correct, the terminal will display a list of the resources to modify and their parameters. This is a test step; no resources are updated.
  - If you are happy with the planned changes, apply them:
    - Run the command:

      terraform apply

    - Confirm the update of resources.
    - Wait for the operation to complete.
- All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.
  Once created, your transfer will be activated automatically.
Finish data transfer
- Wait for the transfer status to change to Replicating.
- Switch the source cluster to read-only mode (see the sketch after this list) and transfer the load to the target cluster.
- On the transfer monitoring page, wait for the Maximum data transfer delay metric to drop to zero. This means that all changes made in the source cluster after the initial copy have been transferred to the target cluster.
- Deactivate the transfer and wait for its status to change to Stopped.
For more information about transfer statuses, see Transfer lifecycle.
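One way to switch a self-managed source cluster to read-only mode is to set the global read-only flags on the master host. A minimal sketch, assuming a user with administrative privileges; super_read_only is available in MySQL® 5.7 and later:

```bash
mysql \
    --host=<source_FQDN_or_IP_address> \
    --user=<username> \
    --password \
    --execute="SET GLOBAL read_only = ON; SET GLOBAL super_read_only = ON;"
```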
Delete the resources you created
Some resources are not free of charge. To avoid paying for them, delete the resources you no longer need:
Manually
- Delete the Managed Service for MySQL® cluster.
- Delete the stopped transfer.
- Delete endpoints for both the source and target.
Terraform
- In the terminal window, go to the directory containing the infrastructure plan.
Warning
Make sure the directory has no Terraform manifests with the resources you want to keep. Terraform deletes all resources that were created using the manifests in the current directory.
- Delete resources:
  - Run this command:

    terraform destroy
  - Confirm deleting the resources and wait for the operation to complete.

  All the resources described in the Terraform manifests will be deleted.
For a real-world example of MySQL® database migration using Data Transfer, see Syncing MySQL data using Yandex Data Transfer.
Transferring data by creating and restoring a logical dump
To move data to a Managed Service for MySQL® cluster, create a logical dump of the desired database and restore it to the target cluster. There are two ways to do this:
- Use the mydumper and myloader utilities. The database dump is created as a collection of files in a separate folder.
- Use mysqldump and mysql. The database dump is created as a single file.
Migration stages:
- Create a dump of the database you want to migrate.
- (Optional) Upload the dump to an intermediate virtual machine in Yandex Cloud.
  Transfer your data to an intermediate VM in Yandex Compute Cloud if:
  - Your Managed Service for MySQL® cluster is not accessible from the internet.
  - Your hardware or your connection to the cluster in Yandex Cloud is not reliable enough.
  The larger the amount of data to migrate and the higher the required migration speed, the more powerful the VM must be: more processor cores, RAM, and disk space.
- Restore the data from the dump.
If you no longer need the resources you created, delete them.
Getting started
Create the required resources:
Manually
- Create a Managed Service for MySQL® target cluster in any suitable configuration. Note the following:
  - The MySQL® version must be the same as or higher than the version in the source cluster.
    Transferring data with a MySQL® major version upgrade is possible but not guaranteed. For more information, see the MySQL® documentation.
    You cannot perform migration while downgrading the MySQL® version.
  - The SQL mode must be the same as in the source cluster.
- (Optional step) Create a VM based on Ubuntu 20.04 LTS with the following parameters:
  - Disks and file storages → Size: Sufficient to store both the archived and unarchived dumps.
    The recommended size is two to three times the total size of the dump and the dump archive (see the size-estimation sketch below).
  - Network settings:
    - Subnet: Select a subnet on the cloud network hosting the target cluster.
    - Public IP: Select Auto or one address from the list of reserved IPs.
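  To estimate the required disk size, you can check how much space the source database occupies. A minimal sketch, assuming the mysql client can reach the source cluster; the dump is usually somewhat smaller than this estimate, and the compressed archive smaller still:

  ```bash
  mysql \
      --host=<source_FQDN_or_IP_address> \
      --user=<username> \
      --password \
      --execute="SELECT table_schema AS db,
                        ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 2) AS size_gb
                 FROM information_schema.tables
                 GROUP BY table_schema;"
  ```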
- If you use security groups for the intermediate VM and the Managed Service for MySQL® cluster, configure them.
Terraform
- If you do not have Terraform yet, install it.
- Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.
- Configure and initialize a provider. There is no need to create a provider configuration file manually; you can download it.
- Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.
- Download the data-migration-mysql-mmy.tf configuration file to the same working directory.
  This file describes:
- Network.
- Subnet.
- Security group and the rule required to connect to a cluster.
- Managed Service for MySQL® cluster with public internet access.
- (Optional) Virtual machine with public internet access.
- Specify the following in the data-migration-mysql-mmy.tf file:
  - Target cluster parameters:
    - target_mysql_version: MySQL® version. Must be the same as or higher than the version in the source cluster.
    - target_sql_mode: SQL mode. Must be the same as in the source cluster.
    - target_db_name: Database name.
    - target_user and target_password: Name and password of the database owner.
  - (Optional) Virtual machine parameters:
    - vm_image_id: ID of a public Ubuntu image without GPU, e.g., for Ubuntu 20.04 LTS.
    - vm_username and vm_public_key: Username and absolute path to the public key used to access the VM. By default, the specified username is ignored in the Ubuntu 20.04 LTS image; a user named ubuntu is created instead. Use it to connect to the VM.
- Check that the Terraform configuration files are correct using this command:

  terraform validate

  If there are any errors in the configuration files, Terraform will point them out.
- Create the required infrastructure:
  - Run the following command to view the planned changes:

    terraform plan

    If the resource configuration descriptions are correct, the terminal will display a list of the resources to modify and their parameters. This is a test step; no resources are updated.
  - If you are happy with the planned changes, apply them:
    - Run the command:

      terraform apply

    - Confirm the update of resources.
    - Wait for the operation to complete.
- All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.
Creating a dump
mysqldump
- Switch the database to read-only mode to avoid losing data that may appear while the dump is being created.
- Install mysqldump on the source cluster, e.g., for Ubuntu:

  sudo apt update && sudo apt install mysql-client --yes
- Create a database dump:

  mysqldump \
      --host=<FQDN_or_IP_address> \
      --user=<username> \
      --password \
      --port=<port> \
      --set-gtid-purged=OFF \
      --quick \
      --single-transaction \
      <DB_name> > ~/db_dump.sql

  Where --host is the FQDN or IP address of the master host in the source cluster.
  If required, provide additional parameters in the dump command:
  - --events, if there are recurring events in your database.
  - --routines, if your database stores procedures and functions.
  For InnoDB tables, use the --single-transaction option to ensure data integrity.
In the dump file, change the table engine names to
InnoDB
:sed -i -e 's/MyISAM/InnoDB/g' -e 's/MEMORY/InnoDB/g' db_dump.sql
- Archive the dump:

  tar -cvzf db_dump.tar.gz ~/db_dump.sql
mydumper
- Switch the database to read-only mode to avoid losing data that may appear while the dump is being created.
- Create a directory for the dump files:

  mkdir db_dump
- Install mydumper on the source cluster, e.g., for Ubuntu:

  sudo apt update && sudo apt install mydumper --yes
- Create a database dump:

  mydumper \
      --triggers \
      --events \
      --routines \
      --outputdir=db_dump \
      --rows=10000000 \
      --threads=8 \
      --compress \
      --database=<DB_name> \
      --user=<username> \
      --ask-password \
      --host=<FQDN_or_IP_address>

  Where:
  - --triggers: Dump triggers.
  - --events: Dump events.
  - --routines: Dump stored procedures and functions.
  - --outputdir: Directory for the dump files.
  - --rows: Number of rows per table fragment. The smaller the value, the more files in the dump.
  - --threads: Number of threads to use. The recommended value is half of the server's free cores.
  - --compress: Compress the output files.
  - --host: FQDN or IP address of the master host in the source cluster.
- In the dump files, change the table engine names to InnoDB:

  sed -i -e 's/MyISAM/InnoDB/g' -e 's/MEMORY/InnoDB/g' `find db_dump -name '*-schema.sql'`
- Archive the dump:

  tar -cvzf db_dump.tar.gz ~/db_dump
(Optional) Uploading a dump to a virtual machine in Yandex Cloud
- Connect to an intermediate virtual machine over SSH.
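  For example, assuming the VM was created from the Ubuntu 20.04 LTS image (where the default user is ubuntu) and your private key is in its default location:

  ```bash
  ssh ubuntu@<VM_public_IP_address>
  ```

  If you created the VM with a different username or key path, adjust the command, e.g., ssh -i <path_to_private_key> <VM_user_name>@<VM_public_IP_address>.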
- Copy the archive containing the database dump to the intermediate virtual machine, e.g., using scp:

  scp ~/db_dump.tar.gz <VM_user_name>@<VM_public_IP_address>:~/db_dump.tar.gz
- Extract the dump from the archive:

  tar -xzf ~/db_dump.tar.gz
Restoring data
Alert
For Managed Service for MySQL® clusters, AUTOCOMMIT
This method is suitable if you used mysqldump to create the dump.
- Install the mysql utility on the host you are using to restore the dump, e.g., for Ubuntu:

  sudo apt update && sudo apt install mysql-client --yes
- Start the database restore from the dump:
  - If you are restoring the dump from a VM in Yandex Cloud:

    mysql \
        --host=c-<target_cluster_ID>.rw.mdb.yandexcloud.net \
        --user=<username> \
        --port=3306 \
        <DB_name> < ~/db_dump.sql

  - If you are restoring the dump from a host that connects to Yandex Cloud over the internet, get an SSL certificate (see the sketch below) and provide the --ssl-ca and --ssl-mode parameters in the restore command:

    mysql \
        --host=c-<target_cluster_ID>.rw.mdb.yandexcloud.net \
        --user=<username> \
        --port=3306 \
        --ssl-ca=~/.mysql/root.crt \
        --ssl-mode=VERIFY_IDENTITY \
        <DB_name> < ~/db_dump.sql
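  One way to obtain the SSL certificate is to download the Yandex Cloud CA certificate to the path referenced above. A minimal sketch, assuming the publicly documented certificate URL and the ~/.mysql/root.crt location used in the command:

  ```bash
  # Download the CA certificate and restrict access to it.
  mkdir --parents ~/.mysql && \
  wget "https://storage.yandexcloud.net/cloud-certs/CA.pem" \
       --output-document ~/.mysql/root.crt && \
  chmod 0600 ~/.mysql/root.crt
  ```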
This method is suitable if you created the dump with mydumper and are using an intermediate virtual machine to restore it.
- Install the myloader utility on the host you are using to restore the dump, e.g., for Ubuntu:

  sudo apt update && sudo apt install mydumper --yes
- Start the database restore from the dump:

  myloader \
      --host=c-<target_cluster_ID>.rw.mdb.yandexcloud.net \
      --directory=db_dump/ \
      --overwrite-tables \
      --threads=8 \
      --compress-protocol \
      --user=<username> \
      --ask-password

You can get the cluster ID from the list of clusters in the folder.
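For example, if you have the Yandex Cloud CLI (yc) installed and configured for your folder, you can list the clusters and their IDs; a minimal sketch (the management console shows the same information):

```bash
yc managed-mysql cluster list
```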
Deleting the created resources
Delete the resources you no longer need to avoid paying for them:
Manually
- Delete the Managed Service for MySQL® cluster.
- If you created an intermediate virtual machine, delete it.
- If you reserved public static IP addresses, release and delete them.
Terraform
- In the terminal window, go to the directory containing the infrastructure plan.
Warning
Make sure the directory has no Terraform manifests with the resources you want to keep. Terraform deletes all resources that were created using the manifests in the current directory.
- Delete resources:
  - Run this command:

    terraform destroy
  - Confirm deleting the resources and wait for the operation to complete.

  All the resources described in the Terraform manifests will be deleted.