Transferring data by creating and restoring a logical dump
To move data to a Managed Service for MySQL® cluster, create a logical dump of the desired database and restore it to the target cluster. There are two ways to do this:
- Use the `mydumper` and `myloader` utilities. A database dump is created as a collection of files in a separate folder.
- Use `mysqldump` and `mysql`. A database dump is created as a single file.
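Whichever method you use, the source database must stay read-only while the dump is created (see the steps below). On a self-managed MySQL® source, a minimal sketch of one way to do this, assuming a user with the SUPER privilege (host and user here are placeholders):

```shell
# Reject writes from regular clients for the duration of the dump.
# Note: read_only does not restrict users with the SUPER privilege,
# so make sure applications do not write through such accounts.
mysql --host=<FQDN_or_IP_address> --user=<username> --password \
    -e "SET GLOBAL read_only = ON;"

# Re-enable writes once the dump is created and verified:
mysql --host=<FQDN_or_IP_address> --user=<username> --password \
    -e "SET GLOBAL read_only = OFF;"
```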
Migration stages:
- Create a dump of the database you want to migrate.
- Optionally, upload the dump to an intermediate virtual machine in Yandex Cloud.
  Transfer your data to an intermediate VM in Yandex Compute Cloud if:
  - Your Managed Service for MySQL® cluster is not accessible from the internet.
  - Your hardware or connection to the cluster in Yandex Cloud is not very reliable.
  The more data you need to migrate and the faster the migration must run, the more powerful the VM has to be: more processor cores, RAM, and disk space.
- Restore the dump to the target cluster.
- If you no longer need the resources you created, delete them.
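To size the intermediate VM mentioned above, you can first estimate the volume of data to be dumped. A sketch that queries the standard `information_schema` system views on the source (connection parameters are placeholders):

```shell
mysql --host=<FQDN_or_IP_address> --user=<username> --password -e "
    SELECT table_schema,
           ROUND(SUM(data_length + index_length) / 1024 / 1024) AS size_mb
    FROM information_schema.tables
    GROUP BY table_schema;"
```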
Required paid resources
- Managed Service for MySQL® cluster: computing resources allocated to hosts, size of storage and backups (see Managed Service for MySQL® pricing).
- Public IP addresses if public access is enabled for cluster hosts (see Virtual Private Cloud pricing).
- Virtual machine, if you created one to upload the dump: use of computing resources, storage, public IP address, and OS (see Compute Cloud pricing).
Getting started
Create the required resources:
- Create a target Managed Service for MySQL® cluster with your preferred configuration. In this case, the following applies:
  - The MySQL® version must be the same as or higher than the version in the source cluster. Transferring data with a MySQL® major version upgrade is possible but not guaranteed; for more information, see this MySQL® guide. You cannot perform migration while downgrading the MySQL® version.
  - The SQL mode must be the same as in the source cluster.
- Optionally, create a VM based on Ubuntu 20.04 LTS with the following parameters:
  - Disks and file storages → Size: Sufficient to store both the archived and unarchived dumps. The recommended size is two to three times the total size of the dump and the dump archive.
  - Network settings:
    - Subnet: Select a subnet on the cloud network hosting the target cluster.
    - Public IP address: Select Auto or one address from the list of reserved IPs.
- If you use security groups for the intermediate VM and the Managed Service for MySQL® cluster, configure them.
- If you do not have Terraform yet, install it.
- Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.
- Configure and initialize a provider. There is no need to create a provider configuration file manually: you can download it.
- Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.
- Download the data-migration-mysql-mmy.tf configuration file to the same working directory. This file describes:
- Network.
- Subnet.
- Security group and the rule permitting access to the cluster.
- Managed Service for MySQL® cluster with public internet access.
- Virtual machine with public internet access (optional).
- Specify the following in data-migration-mysql-mmy.tf:
  - Target cluster parameters:
    - `target_mysql_version`: MySQL® version. Must be the same as or higher than in the source cluster.
    - `target_sql_mode`: SQL mode. It must be the same as in the source cluster.
    - `target_db_name`: Database name.
    - `target_user` and `target_password`: Database owner username and password.
  - Virtual machine parameters (optional):
    - `vm_image_id`: ID of a public Ubuntu image without GPU, e.g., for Ubuntu 20.04 LTS.
    - `vm_username` and `vm_public_key`: Username and absolute path to the public key for access to the VM. By default, the specified username is ignored in the Ubuntu 20.04 LTS image; a user named `ubuntu` is created instead. Use it to connect to the VM.
- Make sure the Terraform configuration files are correct using this command:

  ```bash
  terraform validate
  ```

  Terraform will show any errors found in your configuration files.
- Create the required infrastructure:
  - Run this command to view the planned changes:

    ```bash
    terraform plan
    ```

    If you described the configuration correctly, the terminal will display a list of the resources to create and their parameters. This is a verification step that does not apply changes to your resources.
  - If everything looks correct, apply the changes:
    - Run this command:

      ```bash
      terraform apply
      ```

    - Confirm updating the resources.
    - Wait for the operation to complete.

  All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.
Creating a dump
- Switch the database to read-only mode to avoid losing data that may appear while the dump is being created.
- Install `mysqldump` on the source cluster host, e.g., for Ubuntu:

  ```bash
  sudo apt update && sudo apt install mysql-client --yes
  ```

- Create a database dump:

  ```bash
  mysqldump \
      --host=<FQDN_or_IP_address> \
      --user=<username> \
      --password \
      --port=<port> \
      --set-gtid-purged=OFF \
      --quick \
      --single-transaction \
      <DB_name> > ~/db_dump.sql
  ```

  Where `--host` is the FQDN or IP address of the master host in the source cluster.

  If required, provide additional parameters in the dump creation command:
  - `--events`, if there are recurring events in your database.
  - `--routines`, if your database stores procedures and functions.

  For InnoDB tables, use the `--single-transaction` option to preserve data integrity.
- In the dump file, change the table engine names to InnoDB:

  ```bash
  sed -i -e 's/MyISAM/InnoDB/g' -e 's/MEMORY/InnoDB/g' ~/db_dump.sql
  ```

- Archive the dump:

  ```bash
  tar -cvzf db_dump.tar.gz ~/db_dump.sql
  ```
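To check that the dump above completed before you restore it: with comments enabled (the default), `mysqldump` writes a trailing marker at the end of a successful dump, so a quick sanity check is:

```shell
# The last line of a complete dump is a comment like:
# -- Dump completed on 2024-01-01 12:00:00
tail -n 1 ~/db_dump.sql
```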
- Switch the database to read-only mode to avoid losing data that may appear while the dump is being created.
- Create a directory for the dump files:

  ```bash
  mkdir ~/db_dump
  ```

- Install `mydumper` on the source cluster host, e.g., for Ubuntu:

  ```bash
  sudo apt update && sudo apt install mydumper --yes
  ```

- Create a database dump:

  ```bash
  mydumper \
      --triggers \
      --events \
      --routines \
      --outputdir=db_dump \
      --rows=10000000 \
      --threads=8 \
      --compress \
      --database=<DB_name> \
      --user=<username> \
      --ask-password \
      --host=<FQDN_or_IP_address>
  ```

  Where:
  - `--triggers`: Dump triggers.
  - `--events`: Dump events.
  - `--routines`: Dump stored procedures and functions.
  - `--outputdir`: Directory for dump files.
  - `--rows`: Number of rows per table fragment. The smaller the value, the more files in the dump.
  - `--threads`: Number of threads to use. The recommended value is half the server's free cores.
  - `--compress`: Compress the output files.
  - `--host`: FQDN or IP address of the master host in the source cluster.
- In the dump files, change the table engine names to InnoDB:

  ```bash
  sed -i -e 's/MyISAM/InnoDB/g' -e 's/MEMORY/InnoDB/g' `find ~/db_dump -name '*-schema.sql'`
  ```

- Archive the dump:

  ```bash
  tar -cvzf db_dump.tar.gz ~/db_dump
  ```
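Before relying on the archive, you can verify the `mydumper` output: when the dump finishes, `mydumper` writes a `metadata` file into the output directory (it records the binary log position), so its presence is a quick completeness check. A sketch; adjust the path if you created the dump directory elsewhere:

```shell
ls ~/db_dump
# The metadata file is written when mydumper finishes:
[ -f ~/db_dump/metadata ] && echo "dump completed"
```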
Uploading a dump to a virtual machine in Yandex Cloud (optional)
- Connect to the intermediate virtual machine over SSH.
- Copy the archive containing the database dump to the intermediate virtual machine, e.g., using `scp`:

  ```bash
  scp ~/db_dump.tar.gz <VM_user_name>@<VM_public_IP_address>:~/db_dump.tar.gz
  ```

- Extract the dump from the archive:

  ```bash
  tar -xzf ~/db_dump.tar.gz
  ```
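Large archives can be corrupted in transit, so comparing checksums before extracting is cheap insurance. A sketch: run the first command on the source host and the second on the VM, and make sure the two digests match.

```shell
# On the source host, before copying:
sha256sum ~/db_dump.tar.gz

# On the intermediate VM, after copying:
sha256sum ~/db_dump.tar.gz
```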
Restoring data
Alert
For Managed Service for MySQL® clusters, AUTOCOMMIT
This method is suitable if you used mysqldump to create the dump.
- Install the `mysql` utility on the host you are using to restore the dump, e.g., for Ubuntu:

  ```bash
  sudo apt update && sudo apt install mysql-client --yes
  ```

- Start the database restore from the dump:
  - If you are restoring the dump from the VM in Yandex Cloud:

    ```bash
    mysql \
        --host=c-<target_cluster_ID>.rw.mdb.yandexcloud.net \
        --user=<username> \
        --port=3306 \
        <DB_name> < ~/db_dump.sql
    ```

  - If you are restoring the dump from a host that connects to Yandex Cloud over the internet, get an SSL certificate and provide the `--ssl-ca` and `--ssl-mode` parameters in the restore command:

    ```bash
    mysql \
        --host=c-<target_cluster_ID>.rw.mdb.yandexcloud.net \
        --user=<username> \
        --port=3306 \
        --ssl-ca=~/.mysql/root.crt \
        --ssl-mode=VERIFY_IDENTITY \
        <DB_name> < ~/db_dump.sql
    ```
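One way to fetch the SSL certificate into the `~/.mysql/root.crt` path used above is sketched below. The certificate URL is an assumption based on Yandex Cloud documentation; check the current docs if the download fails.

```shell
# Download the Yandex Cloud CA certificate and restrict its permissions
mkdir -p ~/.mysql && \
wget "https://storage.yandexcloud.net/cloud-certs/CA.pem" \
    --output-document ~/.mysql/root.crt && \
chmod 0600 ~/.mysql/root.crt
```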
This method is suitable if you created the dump with mydumper and are using an intermediate virtual machine to restore it.
- Install the `myloader` utility on the host you are using to restore the dump, e.g., for Ubuntu (`myloader` ships as part of the mydumper package):

  ```bash
  sudo apt update && sudo apt install mydumper --yes
  ```

- Start the database restore from the dump:

  ```bash
  myloader \
      --host=c-<target_cluster_ID>.rw.mdb.yandexcloud.net \
      --directory=db_dump/ \
      --overwrite-tables \
      --threads=8 \
      --compress-protocol \
      --user=<username> \
      --ask-password
  ```
You can get the cluster ID from the list of clusters in the folder.
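For example, if you have the Yandex Cloud CLI installed and configured for the right folder, the cluster list (including the ID column) can be printed with:

```shell
yc managed-mysql cluster list
```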
Deleting the created resources
Delete the resources you no longer need to avoid paying for them:
- Delete the Managed Service for MySQL® cluster.
- If you created an intermediate virtual machine, delete it.
- If you reserved public static IP addresses, release and delete them.
- In the terminal window, go to the directory containing the infrastructure plan.

  Warning

  Make sure the directory has no Terraform manifests with resources you want to keep. Terraform deletes all resources that were created using the manifests in the current directory.

- Delete the resources:
  - Run this command:

    ```bash
    terraform destroy
    ```

  - Confirm deleting the resources and wait for the operation to complete.

  All the resources described in the Terraform manifests will be deleted.