
OceanBase Oracle Migration Synchronization to Datahub

NineData data replication supports the efficient migration or real-time synchronization of OceanBase Oracle data to DataHub, ensuring seamless data integration and enabling data consolidation and analysis across multiple scenarios.

Prerequisites

  • The source and target data sources have been added to NineData. For instructions on how to add data sources, please refer to Adding Data Sources.
  • The source database type is OceanBase Oracle.
  • The target data source type is Datahub.

Usage Limitations

  • Before starting data synchronization, assess the performance of the source and target data sources and schedule the synchronization during off-peak business hours. Otherwise, the full data initialization will consume read and write resources on both data sources and increase database load.
  • Each table in the synchronization objects must have a primary key or unique constraint, and its column names must be unique; otherwise, the same data may be synchronized repeatedly.

Operation Steps


import Notice from '../ref/_ref-cost_of_repli.md';

import Type from '../ref/_ref-replic-type.md';

<Notice />

1. Log in to the [NineData Console](https://console.ninedata.cloud).

2. Click on <kv>menu.replication</kv> > <kv>menu.replication.task</kv> in the left navigation bar.

3. On the <kv>nd.replication.title</kv> page, click on <kv>nd.replication.create.copy.action</kv> in the top right corner.

4. On the <kv>nd.backup.edit.label.data.source.and.target</kv> tab, configure according to the table below and click <kv>nd.common.next</kv>.

| Parameter<div style={{width:'60pt'}}></div> | Description |
| ------------------------------------------------------------ | ------------------------------------------------------------ |
| <kv>nd.common.form.taskName</kv> | Enter the name of the data synchronization task. To facilitate subsequent search and management, please use meaningful names as much as possible. Up to 64 characters are supported. |
| <kv>nd.replication.label.source</kv> | The data source that contains the objects to be synchronized. Select the data source where the data to be replicated is located. |
| <kv>nd.replication.label.target</kv> | The data source that receives the synchronization objects. Select the target Datahub data source. |
| **Datahub Project** | Select the target Datahub Project. The data from the source data source will be written into the specified Project. |
| <kv>nd.replication.target.obj.name.rule</kv> | Select the rule for converting object names from the source to the target after migration.<ul><li><kv>nd.replication.name.all.lowercase</kv>: Regardless of the naming rules on the source side, the naming rules on the target side are all lowercase.</li><li><kv>nd.replication.stay.consistent.with.the.source</kv>: Follow the naming rules on the source side.</li><li><kv>nd.replication.name.all.uppercase</kv>: Regardless of the naming rules on the source side, the naming rules on the target side are all uppercase.</li></ul> |
| <kv>nd.replication.migrate.type</kv> | Select the content to be replicated to the target data source.<ul><li><kv>nd.replication.migrate.schema</kv>: Only synchronize the database table structure of the source data source, without synchronizing data.</li><li><kv>nd.replication.migrate.data</kv>: Synchronize all objects and data of the source data source, i.e., full data replication. The switch on the right enables periodic full replication; for more information, please refer to [Periodic Full Replication](/replication/periodic_full_replication.md).</li><li><kv>nd.replication.incremental.replication</kv>: After full synchronization is completed, perform incremental synchronization based on the logs of the source data source. The ![setting](./image/setting.svg) icon opens the configuration of incremental operation types; operation types that you uncheck will be ignored during incremental synchronization.</li></ul> |
| <kv>nd.replication.inc.size</kv> (not selectable when only <kv>nd.replication.migrate.schema</kv> is selected) | <Type /> |
| <kv>nd.replication.incremental.start.time</kv> | Required only when <kv>nd.replication.migrate.type</kv> includes <kv>nd.replication.incremental.replication</kv>.<ul><li><kv>nd.replication.start.at.startup.time</kv>: Perform incremental replication starting from the time when the current replication task starts.</li><li><kv>nd.replication.custom.time</kv>: Select the start time for incremental replication; you can choose the time zone according to your business region. If the configured time point is earlier than the start time of the current replication task and DDL operations occurred during that period, the replication task will fail.</li></ul> |
| <kv>nd.replication.target.exist.same.name.object</kv> | <ul><li><kv>nd.replication.abort.after.data.confilict</kv>: Stop the task if data is detected in the target table during the pre-inspection stage.</li><li><kv>nd.replication.ignore.data.confilict</kv>: Ignore the data in the target table if detected during the pre-inspection stage, and append other data.</li></ul> |

<span id="step5"></span>

5. On the <kv>nd.replication.create.select.source</kv> tab, configure the following parameters, and then click <kv>nd.common.next</kv>.

| Parameter<div style={{width:'60pt'}}></div> | Description |
| -------------------------------------- | ------------------------------------------------------------ |
| <kv>nd.replication.migrate.source</kv> | Select the content to be replicated. Choose <kv>nd.replication.all.objects</kv> to replicate all content from the source database, or choose <kv>nd.replication.custom.objects</kv>, select the content to be replicated in the <kv>nd.replication.source.object</kv> list, and click **>** to add it to the <kv>nd.replication.target.object</kv> list on the right. |

If you need to create multiple replication chains with the same replication objects, you can create a configuration file and import it when creating a new task. Click <kv>nd.common.import.config</kv> in the top right corner, click **Download Template** to download the configuration file template to your local machine and edit it, and then click <kv>nd.common.uploadfile</kv> to upload the configuration file for batch import. Configuration file description (an illustrative entry is sketched after the tip below):

| Parameter<div style={{width:'60pt'}}></div> | Description |
| -------------------------------------- | ------------------------------------------------------------ |
| `source_table_name` | The name of the source table that contains the objects to be synchronized. |
| `destination_table_name` | The name of the target table that receives the objects to be synchronized. |
| `source_schema_name` | The name of the source Schema that contains the objects to be synchronized. |
| `destination_schema_name` | The name of the target Schema that receives the objects to be synchronized. |
| `source_database_name` | The name of the source database that contains the objects to be synchronized. |
| `target_database_name` | The name of the target database that receives the objects to be synchronized. |
| `column_list` | The list of fields to be synchronized. |
| `extra_configuration` | Additional configuration information. You can configure the following here: <ul><li>Field mapping: `column_name`, `destination_column_name`</li><li>Field value: `column_value`</li><li>Data filtering: `filter_condition`</li></ul> |

:::tip

- An example of `extra_configuration` is as follows:

```json
{
  "column_name": "created_time",              // the source column name used for column mapping
  "destination_column_name": "migrated_time", // map the column to the target column name "migrated_time"
  "column_value": "current_timestamp()",      // change the value of the column to the current timestamp
  "filter_condition": "id != 0"               // only rows with id not equal to 0 will be synchronized
}
```

- For the overall example content of the configuration file, please refer to the downloaded template.

:::
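
For orientation only, a single object entry in the import configuration file might combine the fields above roughly as follows. The field names come from the table above, while the values, the nesting of `extra_configuration`, and the representation of `column_list` are hypothetical; the downloaded template is authoritative for the exact layout.

```json
{
  "source_schema_name": "SALES",                    // hypothetical source Schema
  "source_table_name": "ORDERS",                    // hypothetical source table
  "destination_schema_name": "sales_project",       // hypothetical target location in Datahub
  "destination_table_name": "orders",               // hypothetical target Topic name
  "column_list": ["ID", "CREATED_TIME", "AMOUNT"],  // fields to synchronize (format assumed; see the template)
  "extra_configuration": {                          // optional extras, as in the tip above
    "column_name": "CREATED_TIME",
    "destination_column_name": "migrated_time",
    "filter_condition": "id != 0"
  }
}
```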

6. On the <kv>nd.replication.create.configure.mapping</kv> tab, you can configure each table that needs to be replicated to Datahub individually. Click on the <kv>nd.basic.column.selection.config</kv> on the right side of the target table to configure each column individually. If there are updates in the source and target data sources during the configuration mapping stage, you can click the <kv>nd.replication.refresh.metadata</kv> button in the top right corner of the page to refresh the information of the source and target data sources. After the configuration is completed, click <kv>nd.common.save.and.precheck</kv>.

In <kv>nd.basic.column.selection.config</kv>, you can also configure <kv>nd.basic.system.field.config</kv>. As a queue product, Datahub does not support random UPDATE/DELETE operations, so NineData needs to add the following metadata fields to the user's Datahub to identify the characteristics of each piece of data delivered to Datahub.

| **Metadata Field Name** | **Metadata Field Value** |
| ------------------ | -------------------- |
| `_record_id_` | ${nd_record_id} |
| `_operation_type_` | ${nd_operation_type} |
| `_execution_time_` | ${nd_exec_timestamp} |
| `_before_image_` | ${nd_before_image} |
| `_after_image_` | ${nd_after_image} |

You can also add or modify metadata field names or values as needed. For detailed information on each field value, please refer to the [Appendix](#appendix4) of this article.
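
As an illustration only, an INSERT on a hypothetical source table with columns `ID` and `NAME` could be delivered to Datahub with the default metadata fields above populated roughly as follows (the payload layout depends on your Topic schema, and all values are made up):

```json
{
  "ID": 1001,                          // business column from the source table (hypothetical)
  "NAME": "alice",                     // business column from the source table (hypothetical)
  "_record_id_": "20240101000000001",  // ${nd_record_id}: unique ID of this Record (hypothetical value)
  "_operation_type_": "I",             // ${nd_operation_type}: I = Insert
  "_execution_time_": "1704067200000", // ${nd_exec_timestamp}: execution time of the operation (hypothetical)
  "_before_image_": "N",               // ${nd_before_image}: an INSERT carries no pre-image
  "_after_image_": "Y"                 // ${nd_after_image}: an INSERT carries the post-image
}
```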

7. On the <kv>nd.common.precheck</kv> tab, wait for the system to complete the pre-inspection. After the pre-inspection is passed, click <kv>nd.common.start.task</kv>.

:::tip

You can check <kv>nd.replication.start.compare.task</kv> to automatically start a data consistency comparison against the source data source after the synchronization task is completed, ensuring that data on both sides is consistent. Depending on the <kv>nd.replication.migrate.type</kv> you choose, <kv>nd.replication.start.compare.task</kv> starts at the following times:

- <kv>nd.replication.migrate.schema</kv>: Start after the structure replication is completed.

- <kv>nd.replication.migrate.schema</kv>+<kv>nd.replication.migrate.data</kv>, <kv>nd.replication.migrate.data</kv>: Start after the full replication is completed.

- <kv>nd.replication.migrate.schema</kv>+<kv>nd.replication.migrate.data</kv>+<kv>nd.replication.incremental.replication</kv>, <kv>nd.replication.incremental.replication</kv>: Start when the incremental data first becomes consistent with the source data source and <kv>nd.replication.sync.delay</kv> is 0 seconds. You can click <kv>nd.replication.more.detail</kv> to view the synchronization delay on the <kv>menu.replicationDetail</kv> page.

![sync_delay](../replication/image/sync_delay.png)

- If the pre-inspection does not pass, click <kv>nd.common.detail</kv> in the <kv>nd.common.option</kv> column on the right side of the target inspection item to investigate the cause of the failure, fix it manually, and then click <kv>nd.common.precheck.again</kv> to re-execute the pre-inspection until it passes.

- If the <kv>nd.common.check.result</kv> of an inspection item is <kv>nd.common.warning</kv>, you can fix or ignore it depending on the specific situation.

:::

8. On the <kv>nd.common.start.task</kv> page, when <kv>nd.common.start.success</kv> is displayed, the synchronization task starts running. At this point, you can perform the following operations:

* Click <kv>nd.replication.more.detail</kv> to view the execution status of each stage of the synchronization task.
* Click <kv>nd.replication.back.to.list</kv> to return to the <kv>nd.replication.title</kv> task list page.

Viewing Synchronization Results

  1. Log in to the NineData Console.

  2. Click on <kv>menu.replication</kv> > <kv>menu.replication.task</kv> in the left navigation bar.

  3. On the <kv>nd.replication.title</kv> page, open the details page of the target synchronization task. The details page is described below.


| Number | Function | Description |
| ------ | -------- | ----------- |
| 1 | Sync Delay | The data synchronization delay between the source data source and the target data source. 0 seconds means there is no delay between the two ends, at which point you can choose to switch your business to the target data source for a smooth migration. |
| 2 | Configure Alerts | After alerts are configured, the system notifies you in the way you choose when the task fails. For more information, please refer to Operations Monitoring Introduction. |
| 3 | More | <ul><li>Pause the task; only tasks in the Running status can be paused.</li><li>Create a new replication task with the same configuration as the current task.</li><li>End tasks that are unfinished or listening (i.e., in incremental synchronization). Once a task is terminated, it cannot be restarted, so please proceed with caution. If the synchronization objects contain triggers, a trigger replication option will pop up; select as needed.</li><li>Delete the task. Once a task is deleted, it cannot be recovered, so please proceed with caution.</li></ul> |
| 4 | Structure Replication (displayed in scenarios involving structure replication) | Displays the progress and detailed information of structure replication.<ul><li>Click the corresponding button on the right side of the page: view the execution logs of structure replication.</li><li>Click refresh on the right side of the page: view the latest information.</li><li>Click the corresponding button in the column on the right side of the target object in the list: view SQL playback.</li></ul> |
| 5 | Full Replication (displayed in scenarios involving full replication) | Displays the progress and detailed information of full replication.<ul><li>Click the corresponding button on the right side of the page: view the monitoring indicators during full replication. During full replication, you can also use the corresponding button on the monitoring indicators page to limit the rate of writing to the target data source per second, in rows/second.</li><li>Click the corresponding button on the right side of the page: view the execution logs of full replication.</li><li>Click refresh on the right side of the page: view the latest information.</li></ul> |
| 6 | Incremental Replication (displayed in scenarios involving incremental replication) | Displays the monitoring indicators of incremental replication.<ul><li>Click the corresponding button on the right side of the page: view the operations currently being executed by the replication task, including the number of the thread currently in progress (replication tasks run in multiple threads), the SQL statement currently being executed by that thread, the response time of the thread (if this value keeps increasing, the thread may be stuck for some reason), the timestamp when the thread was started, and the status of the thread.</li><li>Click the corresponding button on the right side of the page: limit the rate of writing to the target data source per second, in rows/second.</li><li>Click the corresponding button on the right side of the page: view the execution logs of incremental replication.</li><li>Click refresh on the right side of the page: view the latest information.</li></ul> |
| 7 | Modify Object | Displays the modification records of synchronization objects.<ul><li>Click the corresponding button on the right side of the page: configure the synchronization objects.</li><li>Click refresh on the right side of the page: view the latest information.</li></ul> |
| 8 | Data Comparison | Displays the comparison results between the source data source and the target data source. If data comparison is not enabled, click the corresponding button on the page to enable it.<ul><li>Click the corresponding button on the right side of the page: re-initiate the comparison between the current source and target data sources.</li><li>Click the corresponding button on the right side of the page: stop the comparison task immediately after it has started.</li><li>Click the corresponding button on the right side of the page: view the execution logs of the consistency comparison.</li><li>Click the corresponding button (displayed only for data comparison): view the trend chart of comparison RPS (records per second), and click it to view earlier records.</li><li>Click details in the column on the right side of the comparison list (displayed under the tab only in the case of inconsistency): view details of the comparison between the source and target sides.</li><li>Click sql in the column on the right side of the comparison list (displayed only in the case of inconsistency): generate change SQL, which you can copy to the target data source and execute to fix the inconsistent content.</li></ul> |
| 9 | Expand | Displays detailed information of the current replication task. Common options:<ul><li>Export the current task's database and table configuration so it can be imported quickly when creating a new replication task, helping you rapidly establish multiple replication links with the same replication objects.</li><li>Configure the alert policy for the current task.</li></ul> |

Appendix 1: Data Type Mapping Table

| Category | OceanBase Oracle Data Type | DataHub Data Type |
| -------- | -------------------------- | ----------------- |
| Built-in Data Types | CHAR [(size [BYTE \| CHAR])] | STRING |
|  | NCHAR [(size)] | STRING |
|  | VARCHAR2(size [BYTE \| CHAR]) | STRING |
|  | NVARCHAR2(size) | STRING |
|  | CLOB | STRING |
|  | NUMBER [(p [, s])] | DECIMAL |
|  | FLOAT [(p)] | FLOAT |
|  | DATE | DATETIME |
|  | BINARY_FLOAT | FLOAT |
|  | BINARY_DOUBLE | DOUBLE |
|  | TIMESTAMP [(fractional_seconds_precision)] | TIMESTAMP |
|  | TIMESTAMP [(fractional_seconds_precision)] WITH TIME ZONE | TIMESTAMP |
|  | TIMESTAMP [(fractional_seconds_precision)] WITH LOCAL TIME ZONE | TIMESTAMP |
|  | INTERVAL YEAR [(year_precision)] TO MONTH | STRING |
|  | INTERVAL DAY [(day_precision)] TO SECOND [(fractional_seconds_precision)] | STRING |
|  | RAW(size) | STRING |
|  | BLOB | STRING |
| Collection Data Types | REF | STRING |
|  | VARRAY | STRING |
|  | NESTED TABLE | STRING |
| Oracle Supplied Data Types | AnyData | STRING |
| Spatial Data Types | SDO_GEOMETRY | STRING |

Appendix 2: Pre-Check Item List

| Check Item | Check Content |
| ---------- | ------------- |
| Source Data Source Connection Check | Check the gateway status of the source data source, whether the instance is reachable, and the accuracy of the username and password |
| Target Data Source Connection Check | Check the gateway status of the target data source, whether the instance is reachable, and the accuracy of the username and password |
| Source Library Permission Check | Check whether the account permissions of the source database meet the requirements |
| Target Library Data Existence Check | Check whether there is data in the target database for the objects to be replicated |

Appendix 3: System Parameter Description

To achieve the storage of incremental data in DataHub, NineData provides a set of default system parameters and metadata fields to identify data characteristics. Here are the specific meanings and usage scenarios of the system parameters.

| Parameter Name | Meaning and Usage |
| -------------- | ----------------- |
| ${nd_record_id} | The unique ID of each data record (Record). In UPDATE operations, the record_id of the records before and after the update must be the same to associate the change. |
| ${nd_exec_timestamp} | The execution time of the Record operation. |
| ${nd_database_name} | The name of the database to which the table belongs, facilitating the distinction of data sources. |
| ${nd_table_name} | The name of the table corresponding to the Record, used to accurately locate the change record. |
| ${nd_operation_type} | The type of the Record change operation, with the following values:<ul><li>I: Insert</li><li>U: Update</li><li>D: Delete</li></ul>For full synchronization, all records are I. |
| ${nd_before_image} | The pre-image identifier, indicating that the Record reflects the state before the change, i.e., data that has since been changed. Values:<ul><li>Y: pre-image.</li><li>N: not a pre-image.</li></ul>For example, an INSERT record is N, a DELETE record is Y, and for an UPDATE the before record is Y and the after record is N. |
| ${nd_after_image} | The post-image identifier, indicating that the Record reflects the state after the change, i.e., the current, latest data. Values:<ul><li>Y: post-image.</li><li>N: not a post-image.</li></ul>For example, an INSERT record is Y, a DELETE record is N, and for an UPDATE the before record is N and the after record is Y. |
| ${nd_datasource} | Data source information: the IP and port of the data source, in the format `ip:port`. |
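
For example, under the rules above, a single UPDATE that changes a hypothetical `NAME` column from "alice" to "bob" would produce two Records that share the same ${nd_record_id}: the pre-image (Y/N) with the old values and the post-image (N/Y) with the new values. A purely illustrative sketch with made-up values:

```json
[
  {
    "_record_id_": "20240101000000042",  // the same ${nd_record_id} links both halves of one UPDATE
    "_operation_type_": "U",             // ${nd_operation_type}: U = Update
    "_before_image_": "Y",               // pre-image: data as it was before the change
    "_after_image_": "N",
    "NAME": "alice"                      // hypothetical business column, old value
  },
  {
    "_record_id_": "20240101000000042",
    "_operation_type_": "U",
    "_before_image_": "N",
    "_after_image_": "Y",                // post-image: data as it is after the change
    "NAME": "bob"                        // hypothetical business column, new value
  }
]
```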
