
OceanBase MySQL Migration/Synchronization to DataHub

NineData data replication supports the efficient migration or real-time synchronization of OceanBase MySQL data to DataHub, ensuring seamless data integration and enabling data consolidation and analysis across multiple scenarios.

Prerequisites

  • The source and target data sources have been added to NineData. For instructions on how to add data sources, see Add Data Source.
  • The source database type is OceanBase MySQL.
  • The target data source type is Datahub.

Usage Limitations

  • Before synchronizing data, assess the performance of the source and target data sources and schedule the synchronization during off-peak business hours. Otherwise, full data initialization will consume read and write resources on both the source and the target, increasing database load.
  • Each table in the synchronization objects must have a primary key or unique constraint, and its column names must be unique; otherwise, the same data may be synchronized repeatedly. You can verify this on the source in advance, as shown in the sketch below.
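
If you want to verify this before creating the task, the following is a minimal sketch (not a NineData feature) that lists the tables in a source database without a primary key or unique constraint. It assumes a MySQL-compatible connection to the OceanBase MySQL tenant through pymysql; the connection parameters and database name are placeholders.

python
# Sketch: list tables in a source database that have neither a PRIMARY KEY nor a UNIQUE constraint.
# Connection parameters are placeholders; adjust them for your OceanBase MySQL tenant.
import pymysql

SOURCE_DB = "your_database"  # the database whose tables you plan to synchronize

conn = pymysql.connect(host="obproxy-host", port=2883,
                       user="user@tenant#cluster", password="***")

QUERY = """
SELECT t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
       ON c.table_schema = t.table_schema
      AND c.table_name = t.table_name
      AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
WHERE t.table_schema = %s
  AND t.table_type = 'BASE TABLE'
GROUP BY t.table_name
HAVING COUNT(c.constraint_name) = 0
"""

with conn.cursor() as cur:
    cur.execute(QUERY, (SOURCE_DB,))
    for (table_name,) in cur.fetchall():
        print(f"No primary/unique key: {table_name}")
conn.close()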

Operation Steps

Commercialization Notice

NineData’s data replication product has been commercialized. You can still use 10 replication tasks for free, with the following considerations:

  • Among the 10 replication tasks, you can include 1 task, with a specification of Micro.

  • Tasks with a status of do not count towards the 10-task limit. If you have already created 10 replication tasks and want to create more, you can terminate previous replication tasks and then create new ones.

  • When creating replication tasks, you can only select the specifications you have purchased. Specifications that have not been purchased will be grayed out and cannot be selected. If you need to purchase additional specifications, please contact us through the customer service icon at the bottom right of the page.

  1. Log in to the NineData Console.

  2. In the left navigation bar, click > .

  3. On the page, click in the top right corner.

  4. On the tab, configure the settings as shown below, then click .

    Parameter
    Description
    Enter a name for the replication task. For easier search and management, use a meaningful name. Maximum 64 characters.
    The data source containing the object to replicate. Select the data source that holds the data to be copied.
    The target data source to receive the object. Select the destination Datahub data source.
    DataHub Project: Select the target DataHub Project. Data from the source will be written into this specified project.
    Select the content to be replicated to the target data source.
    • Structure Replication: Only replicate the schema (database and table structure), not the data.
    • Full Replication: Replicate all objects and data from the source, i.e., full data replication. The switch on the right enables periodic full replication. For more information, see Periodic Full Replication.
    • Incremental Replication: After full replication, perform incremental replication based on the source's logs. The settings icon allows you to configure the incremental operation types; you can uncheck specific operation types to exclude them from incremental replication.
    (Unavailable when using )

    Optional only when the task contains or .

    The specifications of the replication task determine the replication speed: the larger the specification, the higher the replication rate. Hover over the details icon to view the replication rate and configuration details for each specification. Each specification displays the available quantity and the total number of specifications. If the available quantity is 0, it will be grayed out and cannot be selected.

    Required only when is set to .
    • : Use the task start time as the starting point for incremental replication.
    • : Select a custom start time for incremental replication. You can also specify the time zone based on your region. If the start time is earlier than the task's start time and includes DDL operations, the task may fail.
    • : If data exists in the target table during the pre-check, the task stops.
    • : If data exists, skip the conflicting data and append the rest.
  5. On the tab, configure the parameters below, then click .

    Parameter
    Description
    Select the content to replicate. You can choose to replicate everything in the source database, or to select specific objects from and click > to add them to the list.

If you need to create multiple replication tasks with the same objects, you can create a config file and import it when creating new tasks. Click in the top right, then click Download Template to download the config file template. After editing, upload it using to import the configurations in bulk. Config file details:

Parameter | Description
source_table_name | Name of the source table containing the object to replicate.
destination_table_name | Name of the target table to receive the object.
source_schema_name | Schema name of the source object.
destination_schema_name | Schema name of the target object.
source_database_name | Database name of the source object.
target_database_name | Target database name.
column_list | List of columns to be replicated.
extra_configuration | Additional configuration. You can include the following fields:
  • Column mapping: column_name, destination_column_name
  • Field value: column_value
  • Data filter: filter_condition
  • Example of extra_configuration:

    json
    {
      "column_name": "created_time",
      "destination_column_name": "migrated_time",
      "column_value": "current_timestamp()",
      "filter_condition": "id != 0"
    }
  • Refer to the downloaded template for a complete configuration file example.
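
For orientation only, the entry below gathers the fields above into a single mapping for one table; all values are hypothetical, and the downloaded template remains the authoritative format for the config file.

python
# Hypothetical example of one object-mapping entry using the documented fields.
# Values are illustrative; follow the downloaded template for the real file layout.
mapping_entry = {
    "source_database_name": "orders_db",
    "source_schema_name": "orders_db",
    "source_table_name": "orders",
    "target_database_name": "demo_project",      # DataHub Project used as the target
    "destination_schema_name": "demo_project",
    "destination_table_name": "orders_topic",
    "column_list": ["id", "created_time", "amount"],
    "extra_configuration": {
        "column_name": "created_time",
        "destination_column_name": "migrated_time",
        "column_value": "current_timestamp()",
        "filter_condition": "id != 0",
    },
}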

  6. On the tab, you can configure each table individually. Click next to a target table to configure its columns.

    If either source or target metadata has changed, click in the top right to refresh the metadata. Once done, click .

    Inside , you can also configure . Since Datahub is a queue-based system and does not support random UPDATE/DELETE operations, NineData adds the following metadata fields to every record for identification:

    Metadata Field | Value
    _record_id_ | ${nd_record_id}
    _operation_type_ | ${nd_operation_type}
    _execution_time_ | ${nd_exec_timestamp}
    _before_image_ | ${nd_before_image}
    _after_image_ | ${nd_after_image}

You may also customize metadata field names or values as needed. For more details, refer to the Appendix.
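
As an illustration of how a downstream consumer might use these metadata fields, here is a minimal sketch that replays change records into an in-memory table. It assumes the records have already been read from the DataHub topic and flattened into Python dictionaries that use the default metadata field names above; it is not NineData or DataHub SDK code.

python
# Sketch: replay NineData change records into an in-memory table keyed by _record_id_.
# Records are assumed to be plain dicts using the default metadata field names above.
table_state = {}  # _record_id_ -> latest row image (business columns only)

def apply_record(record: dict) -> None:
    record_id = record["_record_id_"]
    op = record["_operation_type_"]              # I / U / D
    is_after_image = record["_after_image_"] == "Y"

    # Business columns are everything that is not a metadata field
    # (assumes business columns do not use the _name_ pattern).
    row = {k: v for k, v in record.items()
           if not (k.startswith("_") and k.endswith("_"))}

    if op == "I":
        table_state[record_id] = row
    elif op == "U" and is_after_image:
        # An UPDATE arrives as a before-image and an after-image sharing the same
        # _record_id_; only the after-image carries the new row state.
        table_state[record_id] = row
    elif op == "D":
        table_state.pop(record_id, None)

A before-image record (operation type U with _after_image_ = N) is ignored here; keep it if you also need the old values, for example to build an audit trail.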

  7. On the tab, wait for the system to complete the pre-check. Once it passes, click .

    You can check to automatically start a data consistency comparison after the replication completes. Depending on your selection of , the timing for this comparison task is as follows:

    • Structure Replication only: starts after structure replication completes.
    • Structure Replication + Full Replication, or Full Replication only: starts after full replication completes.
    • Structure Replication + Full Replication + Incremental Replication, or Full Replication + Incremental Replication: starts once the incremental data catches up with the source and the sync delay is 0 seconds. You can click to check the sync delay on the page.


    • If pre-check fails, click in the column to troubleshoot. After fixing the issue, click to retry until the check passes.

    • If shows , you may choose to fix or ignore it based on the situation.

  8. On the page, once you see , the replication task has started. You can:

    • Click to view the task’s execution details.
    • Click to return to the task list page.

View Synchronization Results

  1. Log in to the NineData Console.

  2. Click on > in the left navigation bar.

  3. On the page, click on the of the target synchronization task. The page details are as follows.


    Number | Function | Description
    1 | Sync Delay | The data synchronization delay between the source data source and the target data source. 0 seconds means there is no delay between the two ends, at which point you can choose to switch the business to the target data source for smooth migration.
    2 | Configure Alerts | After configuring alerts, the system will notify you in the way you choose when the task fails. For more information, please refer to Operations Monitoring Introduction.
    3 | More
    • : Pause the task; only tasks with Running status are selectable.
    • : Create a new replication task with the same configuration as the current task.
    • : End tasks that are unfinished or listening (i.e., in incremental synchronization). Once the task is terminated, it cannot be restarted, so please proceed with caution. If the synchronization objects contain triggers, a trigger replication option will pop up; select as needed.
    • : Delete the task. Once the task is deleted, it cannot be recovered, so please proceed with caution.
    4 | Structure Replication (displayed in scenarios involving structure replication) | Displays the progress and detailed information of structure replication.
    • Click on on the right side of the page: View the execution logs of structure replication.
    • Click on refresh on the right side of the page: View the latest information.
    • Click on in the column on the right side of the target object in the list: View SQL playback.
    5 | Full Replication (displayed in scenarios involving full replication) | Displays the progress and detailed information of full replication.
    • Click on on the right side of the page: View various monitoring indicators during full replication. During full replication, you can also click on on the right side of the monitoring indicator page to limit the rate of writing to the target data source per second. The unit is rows/second.
    • Click on on the right side of the page: View the execution logs of full replication.
    • Click on refresh on the right side of the page: View the latest information.
    6 | Incremental Replication (displayed in scenarios involving incremental replication) | Displays various monitoring indicators of incremental replication.
    • Click on on the right side of the page: View the operations currently being executed by the current replication task, including:
      • : Replication tasks are executed in multiple threads; this shows the number of the thread currently in progress.
      • : Details of the SQL statement currently being executed by the current thread.
      • : The response time of the current thread. If this value increases, it indicates that the current thread may be stuck for some reason.
      • : The timestamp when the current thread was started.
      • : The status of the current thread.
    • Click on on the right side of the page: Limit the rate of writing to the target data source per second. The unit is rows/second.
    • Click on on the right side of the page: View the execution logs of incremental replication.
    • Click on refresh on the right side of the page: View the latest information.
    7 | Modify Object | Displays the modification records of synchronization objects.
    • Click on on the right side of the page to configure the synchronization objects.
    • Click on refresh on the right side of the page: View the latest information.
    8 | Data Comparison | Displays the comparison results between the source data source and the target data source. If you have not enabled data comparison, please click on on the page.
    • Click on on the right side of the page: Re-initiate the comparison between the current source and target data sources.
    • Click on on the right side of the page: After the comparison task starts, you can click this button to stop the comparison task immediately.
    • Click on on the right side of the page: View the execution logs of consistency comparison.
    • Click on (displayed only in data comparison): View the trend chart of RPS (records per second) comparison. Click on to view earlier records.
    • Click on details in the column on the right side of the comparison list (displayed under the tab only in the case of inconsistency): View details of the comparison between the source and target sides.
    • Click on sql in the column on the right side of the comparison list (displayed only in the case of inconsistency): Generate change SQL. You can directly copy this SQL to the target data source to execute and modify the inconsistent content.
    9 | Expand | Displays detailed information of the current replication task. Common options:
    • : Export the current task's database and table configuration, allowing for quick import when creating a new replication task. This helps rapidly establish multiple replication links with the same replication objects.
    • : Configure the alarm strategy for the current task.

Appendix 1: Data Type Mapping Table

The list below shows the default mapping from OceanBase MySQL data types to DataHub data types (source type → default DataHub mapping type), grouped by type category.

Numeric Types
  • BOOL, BOOLEAN → BOOLEAN
  • TINYINT → TINYINT
  • SMALLINT → SMALLINT
  • MEDIUMINT → MEDIUMINT
  • INT → INT
  • BIGINT → BIGINT
  • DECIMAL, DOUBLE → DECIMAL
  • FLOAT → FLOAT
  • DOUBLE → DOUBLE
  • BIT → STRING

Date Types
  • DATETIME (without time zone), TIMESTAMP (with time zone) → TIMESTAMP
  • DATE → STRING
  • TIME → STRING
  • YEAR → INT

Character Types
  • CHAR (up to 256), VARCHAR (up to 262144), BINARY (up to 256), VARBINARY (up to 1048576), ENUM, SET → STRING

Large Objects
  • TINYBLOB (up to 255), BLOB (up to 65535), MEDIUMBLOB (up to 16777215), LARGEBLOB (up to 536870910), TINYTEXT (up to 255), TEXT (up to 65535), MEDIUMTEXT (up to 16777215), LARGETEXT (up to 536870910) → STRING

JSON Type
  • JSON → STRING

Spatial Types
  • GEOMETRY, POINT, LINESTRING, POLYGON, MULTIPOINT, MULTISTRING, MULTIPOLYGON, GEOMETRYCOLLECTION → STRING
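
If you need to estimate the resulting DataHub field types in a script or test, the default mapping above can be written as a simple lookup table, as in the sketch below. The dictionary only restates the mapping above; the helper function and its fallback for unknown types are assumptions.

python
# Sketch: the default OceanBase MySQL -> DataHub type mapping listed above.
OB_MYSQL_TO_DATAHUB = {
    "BOOL": "BOOLEAN", "BOOLEAN": "BOOLEAN",
    "TINYINT": "TINYINT", "SMALLINT": "SMALLINT", "MEDIUMINT": "MEDIUMINT",
    "INT": "INT", "BIGINT": "BIGINT",
    "DECIMAL": "DECIMAL", "FLOAT": "FLOAT",
    "DOUBLE": "DOUBLE",  # the list above mentions DOUBLE under both DECIMAL and DOUBLE; DOUBLE is used here
    "BIT": "STRING",
    "DATETIME": "TIMESTAMP", "TIMESTAMP": "TIMESTAMP",
    "DATE": "STRING", "TIME": "STRING", "YEAR": "INT",
    "CHAR": "STRING", "VARCHAR": "STRING", "BINARY": "STRING", "VARBINARY": "STRING",
    "ENUM": "STRING", "SET": "STRING",
    "TINYBLOB": "STRING", "BLOB": "STRING", "MEDIUMBLOB": "STRING", "LARGEBLOB": "STRING",
    "TINYTEXT": "STRING", "TEXT": "STRING", "MEDIUMTEXT": "STRING", "LARGETEXT": "STRING",
    "JSON": "STRING",
    "GEOMETRY": "STRING", "POINT": "STRING", "LINESTRING": "STRING", "POLYGON": "STRING",
    "MULTIPOINT": "STRING", "MULTISTRING": "STRING", "MULTIPOLYGON": "STRING",
    "GEOMETRYCOLLECTION": "STRING",
}

def datahub_type(source_type: str) -> str:
    """Map a source column type such as 'varchar(64)' to its default DataHub type."""
    base = source_type.split("(")[0].strip().upper()
    return OB_MYSQL_TO_DATAHUB.get(base, "STRING")  # unknown types fall back to STRING (assumption)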

Appendix 2: Pre-Check Item List

Check Item | Check Content
Source Data Source Check | Check the gateway status of the source data source, instance reachability, and the accuracy of the username and password.
Target Data Source Check | Check the gateway status of the target data source, instance reachability, and the accuracy of the username and password.
Source Database Permission Check | Check whether the account permissions of the source database meet the requirements.
Target Database Data Existence Check | Check whether there is existing data for the objects to be replicated in the target database.

Appendix 3: System Parameter Description

To implement incremental data storage in DataHub, NineData provides a set of default system parameters and metadata fields to identify data characteristics. Here are the specific meanings and usage scenarios of the system parameters.

Parameter Name | Meaning and Usage
${nd_record_id} | The unique ID of each data record (Record). In UPDATE operations, the record_id remains the same before and after the update so that the two changes can be associated.
${nd_exec_timestamp} | The execution time of the Record operation.
${nd_database_name} | The name of the database to which the table belongs, facilitating data source differentiation.
${nd_table_name} | The name of the table corresponding to the Record, used to precisely locate change records.
${nd_operation_type} | The type of the Record change operation, with the following values:
  • I: Insert
  • U: Update
  • D: Delete
For full synchronization, all records are I.
${nd_before_image} | The before-image flag, indicating that the Record represents the state of the data before the change, i.e., data that has since been changed. Values:
  • Y: Before-image.
  • N: Not a before-image.
For example: an INSERT record is N; a DELETE record is Y; for an UPDATE, the before record is Y and the after record is N.
${nd_after_image} | The after-image flag, indicating that the Record represents the state of the data after the change, i.e., the latest state. Values:
  • Y: After-image.
  • N: Not an after-image.
For example: an INSERT record is Y; a DELETE record is N; for an UPDATE, the before record is N and the after record is Y.
${nd_datasource} | Data source information: the IP address and port number of the data source, in the format ip:port.
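
To make these semantics concrete, the sketch below shows a hypothetical pair of records produced by a single UPDATE, assuming every system parameter above is mapped to a metadata field of the same name; all values are illustrative.

python
# Hypothetical records for one UPDATE of row id=7 in db1.orders (values are illustrative).
update_before_image = {
    "nd_record_id": "0000001042",     # identical on both records, linking the pair
    "nd_operation_type": "U",
    "nd_exec_timestamp": 1718000000000,
    "nd_database_name": "db1",
    "nd_table_name": "orders",
    "nd_datasource": "10.0.0.12:2883",
    "nd_before_image": "Y",           # the row as it was before the change
    "nd_after_image": "N",
    "id": 7, "status": "pending",
}
update_after_image = {
    **update_before_image,
    "nd_before_image": "N",
    "nd_after_image": "Y",            # the row as it is after the change
    "status": "paid",
}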
