How to Migrate MySQL Data Source to GCP Cloud SQL MySQL Without Downtime
With so many cloud vendors on the market, pricing, performance, features, and after-sales service vary significantly from cloud to cloud, and no single vendor is the best in every respect. Cross-cloud migration has therefore become a common choice for enterprises, and our migration target this time is Google's GCP Cloud SQL MySQL.
Let's first look at why we chose GCP
GCP Cloud SQL is a fully managed relational database service provided by Google Cloud Platform (GCP). It provides reliable and secure database services, supports elastic scaling based on load, and offers automatic failover to ensure high availability. This lets developers focus on application development without worrying about the underlying database.
In other words, you just pay, and GCP takes care of the rest. That sounds reliable enough, and since it is a Google product, the quality is presumably assured.
What are the issues with cross-cloud migration?
Cross-cloud migration sounds simple, but it is quite complex in practice. Most cloud vendors provide migration products that only move data into their own cloud, not out of it: they offer solid support for migrating databases in, but often fail to meet the need to migrate data out.
For security reasons, cloud databases usually sit in closed intranet environments. Without the support of a migration product, moving the data requires opening a public network address, which hands attackers an easy target: the database may be compromised, important data leaked, or, worse, wiped entirely, destroying years of business results in an instant.
In addition, there are a series of issues that must be considered:
- Business availability: the migration must not affect the business; in other words, the business cannot be stopped. That raises many questions: how do you migrate both existing and incremental data completely? How do you handle performance fluctuations during migration? How do you switch applications over smoothly? And so on.
- Schema initialization and DDL propagation: when the number of tables to migrate is huge and DDL operations occur on the source database during the migration, achieving an efficient and stable migration is a major challenge.
- Data quality: how do you ensure data consistency during a large-scale migration, and how do you roll back effectively to keep the business available if the cutover goes wrong?
- Damage control if the migration fails: different clouds differ in functionality and performance, and the migration is complex. If the migration fails, how to keep the business available is another key question.
Therefore, we need a tool that does not require opening public network access, is feature-complete, stable, and fast, can monitor the migration task status in real time, and can verify that the migration results are consistent.
How to solve cross-cloud migration issues?
This is where NineData's data replication comes in; it is designed specifically for the pain points above. Let's first look at some of NineData's key features:
- Support for multiple cloud vendors: NineData integrates with the private network environments of major cloud vendors, so migration does not require opening a public network address on the cloud database. For less common cloud vendors, a gateway feature likewise provides direct database access without opening the public network, keeping the migration link safe and efficient.
- Business stays online during migration: NineData provides structure migration, full data migration, and log-based CDC incremental migration (driven by the MySQL binlog). The source database keeps serving normally throughout the migration. NineData automatically completes structure and full data migration, then starts monitoring, collecting, parsing, and replicating binlog events in real time, so incremental updates on the source are continuously replicated to the target (a prerequisite check for binlog-based capture is sketched after this list). Once the task enters the incremental phase and the replication delay reaches 0, the business can run read-only verification on the target and use NineData's data comparison tool to verify consistency. After the verification passes, the business can be stopped and switched, so the downtime across the whole migration is very short.
- Strong replication performance: migration speed largely determines whether the business switch can succeed. NineData relies on log parsing, intelligent sharding, dynamic batching, data merging, and a dedicated internal data format to keep both full and incremental replication fast: currently up to 200 GB/hour for full data replication and up to 20,000 records/second for incremental replication.
- Complete migration rollback plan: cloud databases differ considerably in functionality and performance, a migration can involve substantial business changes and performance tuning, and the integration is difficult. To reduce the risk of failure, businesses usually prepare a rollback plan. NineData's CDC incremental replication collects and parses logs in real time and synchronizes the increments to the other end, so before switching the business from the source to the target you can set up a reverse replication task in NineData: data written after the cutover is synchronized back to the source in real time, keeping the source's business data complete. If the target's functionality or performance causes problems in production, the business can switch back to the source at any time, effectively avoiding a failed migration.
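As mentioned above, log-based incremental replication generally requires the source MySQL to have its binary log enabled, typically in ROW format. The following is a minimal sketch of a prerequisite check, not part of NineData itself: the host name and account are placeholders, and it assumes the pymysql driver (`pip install pymysql`) and network access to the source instance.

```python
# Minimal prerequisite check for binlog-based (CDC) replication on the source MySQL.
# Host, user, and password are placeholders; adjust to your environment.
import pymysql

conn = pymysql.connect(
    host="source-mysql.internal",  # hypothetical private-network address
    port=3306,
    user="repl_check",
    password="***",
)
try:
    with conn.cursor() as cur:
        # Standard MySQL server variables relevant to log-based replication.
        for var in ("log_bin", "binlog_format", "binlog_row_image"):
            cur.execute("SHOW VARIABLES LIKE %s", (var,))
            print(cur.fetchone())  # expect e.g. ('log_bin', 'ON'), ('binlog_format', 'ROW')
finally:
    conn.close()
```

If `log_bin` is OFF or `binlog_format` is not ROW, adjust the source instance's configuration before creating the replication task.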
With the capabilities above, NineData can handle migration between different cloud vendors with ease. Let's walk through the steps.
Step 1: Add the source and target data sources
Log in to the NineData console, go to Data Source Management > Data Source, click Create Data Source on the page, and select the required cloud vendor.
Following the page prompts, add the data source using the private network connection method, then click Create Data Source to complete the creation. Repeat this step for both the source data source and the target data source.
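The data source typically needs an account with enough privileges for both the full copy and the log-based incremental capture. The exact privilege list NineData requires is documented on its side; the sketch below only shows a typical minimal set for log-based replication tools, with hypothetical account names, passwords, and host.

```python
# Minimal sketch: create a dedicated migration account on the source MySQL.
# The privileges shown are the usual minimum for log-based replication tools;
# consult NineData's documentation for the exact list it requires.
import pymysql

statements = [
    "CREATE USER IF NOT EXISTS 'migration_user'@'%' IDENTIFIED BY 'choose-a-strong-password'",
    "GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'migration_user'@'%'",
    "FLUSH PRIVILEGES",
]

conn = pymysql.connect(host="source-mysql.internal", port=3306,
                       user="admin", password="***", autocommit=True)
try:
    with conn.cursor() as cur:
        for sql in statements:
            cur.execute(sql)
finally:
    conn.close()
```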
Step 2: Configure the synchronization link
Log in to the NineData console, click on Data Replication > Data Replication, and then click on Create Replication.
Following the page prompts, configure the replication task. To ensure that both the existing data and the incremental data on the source are migrated completely, check structure replication, full replication, and incremental replication under the replication type.
After the configuration is complete, start the task. For all the synchronization objects you configured, NineData first performs a full migration of the existing data and then switches to real-time migration of newly added incremental data on the source; every newly written record is synchronized to the target without omission. When the target catches up with the source, the task panel shows a delay of 0 seconds, meaning the data on the target is up to date.
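Once the panel reports 0 delay, a quick end-to-end smoke test can confirm that incremental changes really flow through the link. The sketch below is not a NineData feature: it writes a marker row on the source and polls the target until it appears. The table name, schema, and endpoints are placeholders, and it assumes the test table falls within the configured synchronization objects (with DDL changes replicated).

```python
# Minimal smoke test: write a marker on the source, wait for it on the target.
import time
import pymysql

MARK = f"smoke-{int(time.time())}"

src = pymysql.connect(host="source-mysql.internal", user="app", password="***",
                      database="app_db", autocommit=True)
dst = pymysql.connect(host="cloudsql-private-endpoint", user="app", password="***",
                      database="app_db", autocommit=True)
try:
    with src.cursor() as cur:
        cur.execute("CREATE TABLE IF NOT EXISTS repl_smoke (mark VARCHAR(64) PRIMARY KEY)")
        cur.execute("INSERT INTO repl_smoke (mark) VALUES (%s)", (MARK,))

    deadline = time.time() + 60  # give replication up to a minute
    while time.time() < deadline:
        try:
            with dst.cursor() as cur:
                cur.execute("SELECT 1 FROM repl_smoke WHERE mark = %s", (MARK,))
                if cur.fetchone():
                    print("marker replicated - the incremental link looks healthy")
                    break
        except pymysql.MySQLError:
            pass  # table may not have reached the target yet
        time.sleep(2)
    else:
        print("marker not seen on the target within 60s - check the task panel")
finally:
    src.close()
    dst.close()
```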
Step 3 (Optional): Verify the integrity of the synchronized data on the target end
In addition to synchronization, NineData also provides a comparison feature that checks the synchronized data between the source and the target after synchronization, ensuring the data on the target is complete.
Log in to the NineData console, click on Data Replication > Data Replication, and then click on the replication task ID created in Step 2.
Click on the Data Comparison tab to display the comparison results (if Enable Data Consistency Comparison is not checked in the task configuration of Step 2, you need to click Enable Data Comparison here).
After some time, you can click Re-compare on the page to verify the synchronization results of the latest incremental data.
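If you want an independent spot check on top of NineData's built-in comparison, a simple script can compare row counts and table checksums between the two ends. The sketch below uses hypothetical table names and endpoints; it is only meaningful while the replication delay is 0 and writes are quiet, and CHECKSUM TABLE values can legitimately differ if the two servers use different row formats, so treat row counts as the primary signal.

```python
# Minimal spot check: compare row counts and CHECKSUM TABLE for a few tables.
import pymysql

TABLES = ["app_db.orders", "app_db.customers"]  # hypothetical tables to sample

def snapshot(conn, table):
    """Return (row_count, checksum) for one table; table names are trusted here."""
    with conn.cursor() as cur:
        cur.execute(f"SELECT COUNT(*) FROM {table}")
        count = cur.fetchone()[0]
        cur.execute(f"CHECKSUM TABLE {table}")
        checksum = cur.fetchone()[1]
    return count, checksum

src = pymysql.connect(host="source-mysql.internal", user="app", password="***")
dst = pymysql.connect(host="cloudsql-private-endpoint", user="app", password="***")
try:
    for table in TABLES:
        s, d = snapshot(src, table), snapshot(dst, table)
        status = "OK" if s == d else "MISMATCH"
        print(f"{table}: source={s} target={d} -> {status}")
finally:
    src.close()
    dst.close()
```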
Step 4 (Optional): Configure task exception alerts
Since this is a long-running task, you may want the system to monitor its status in real time and notify you immediately when something goes wrong.
Log in to the NineData console, click on Data Replication > Data Replication, and then click on the replication task ID created in Step 2.
Click on Configure Alerts in the upper right corner.
Enter the Policy Name and click Save Configuration. You can use the built-in default rules, which send a text message when the task fails or the replication delay reaches 10 minutes or more, or create custom notification rules to suit your needs.
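If you also want a lag signal that is independent of the replication tool, a common technique is a heartbeat table: write a timestamp on the source and measure how stale its copy is on the target. The sketch below is generic, not a NineData API; the table and endpoints are placeholders, it assumes the heartbeat table is inside the replication scope, and the two servers' clocks must be kept in sync (e.g. via NTP) for the numbers to mean anything.

```python
# Minimal heartbeat-based lag probe, independent of the replication tool.
import time
import pymysql

src = pymysql.connect(host="source-mysql.internal", user="app", password="***",
                      database="app_db", autocommit=True)
dst = pymysql.connect(host="cloudsql-private-endpoint", user="app", password="***",
                      database="app_db", autocommit=True)
try:
    with src.cursor() as cur:
        cur.execute("CREATE TABLE IF NOT EXISTS repl_heartbeat (id INT PRIMARY KEY, ts DATETIME(3))")
    for _ in range(12):  # sample for about a minute
        with src.cursor() as cur:
            cur.execute("REPLACE INTO repl_heartbeat VALUES (1, NOW(3))")
        time.sleep(5)
        with dst.cursor() as cur:
            cur.execute("SELECT TIMESTAMPDIFF(SECOND, ts, NOW(3)) FROM repl_heartbeat WHERE id = 1")
            row = cur.fetchone()
            print(f"approximate replication lag: {row[0] if row else 'unknown'} seconds")
finally:
    src.close()
    dst.close()
```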
Summary
Through the steps above, you can migrate your business data completely from another cloud to GCP Cloud SQL MySQL. Once the incremental replication delay reaches 0, you can cut over whenever you need to and switch the business traffic to the new cloud, as sketched below.
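A common cutover pattern is to briefly freeze writes on the source so the last transactions drain to the target, confirm the task panel still shows 0 delay, and only then repoint the application. The sketch below assumes a MySQL 5.7+ source, an account allowed to change global variables, and placeholder endpoints; it is one possible sequence, not a NineData-prescribed procedure.

```python
# Minimal cutover sketch: freeze writes on the source before switching traffic.
import pymysql

src_admin = pymysql.connect(host="source-mysql.internal", user="admin",
                            password="***", autocommit=True)
try:
    with src_admin.cursor() as cur:
        cur.execute("SET GLOBAL read_only = ON")        # block ordinary writes
        cur.execute("SET GLOBAL super_read_only = ON")  # also block privileged sessions (5.7+)
    print("source frozen; wait for the replication delay to reach 0,")
    print("then point the application at the Cloud SQL instance")
finally:
    src_admin.close()
```

If anything goes wrong after the switch, the reverse replication task described earlier keeps the source complete, so you can turn read_only off again and fall back.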
If you only need GCP Cloud SQL MySQL as a multi-active node for your business, you can also keep this migration link running continuously, and NineData will keep the data on both ends consistent in real time.
So far, you have successfully achieved non-stop database migration across cloud vendors, minimizing the impact on online business.