PostgreSQL Migration Synchronization to Kafka
NineData data replication supports replicating data from PostgreSQL to Kafka, allowing enterprises to achieve efficient data migration and synchronization, ensuring that the data in the Kafka system is always up-to-date.
Background Information
Kafka, as a distributed streaming platform, is widely used for real-time data transmission and processing. PostgreSQL is a popular relational database management system, commonly used for storing structured data. In many application scenarios, synchronizing data from PostgreSQL to Kafka in real-time can improve data processing speed and scalability, providing support for enterprise data analysis, log processing, etc.
Prerequisites
The source and target data sources have been added to NineData. For instructions on how to add them, please refer to Create Data Source.
The target database type is Kafka 0.10 or above.
You must have the following permissions for the source data source:
Replication Type | Permissions |
---|---|
Full Replication | CONNECT, SELECT |
Incremental Replication | SUPERUSER |

For incremental replication, open the `postgresql.conf` file and configure the following parameters. If you cannot find the location of the file, execute the `SHOW config_file;` SQL command in the psql client to view it.
- The `wal_level` parameter of the source data source must be `logical`. To confirm the current value, execute `SHOW wal_level;` in the client.
- The `wal_sender_timeout` parameter of the source data source must be set to `0`. This parameter interrupts replication connections that have been idle for longer than the specified number of milliseconds; the default is 60000 milliseconds, and setting it to 0 disables the timeout mechanism. To confirm the current value, execute `SHOW wal_sender_timeout;` in the client.
- The `max_replication_slots` parameter of the source data source must be greater than `1`. This parameter specifies the maximum number of replication slots the server can support; the default is 10.
- The `max_wal_senders` parameter of the source data source must be greater than `1`. This parameter specifies the maximum number of concurrent WAL sender connections; the default is 10. To confirm the current value, execute `SHOW max_wal_senders;` in the client.
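For reference, a minimal sketch of the relevant `postgresql.conf` lines once the above requirements are met might look like the following (the values shown for `max_replication_slots` and `max_wal_senders` are simply the defaults; note that changing `wal_level` requires a server restart):

```
# postgresql.conf -- settings required for incremental replication
wal_level = logical            # enable logical decoding
wal_sender_timeout = 0         # disable the WAL sender timeout
max_replication_slots = 10     # must be greater than 1
max_wal_senders = 10           # must be greater than 1
```

Each value can then be verified from the psql client:

```sql
SHOW wal_level;
SHOW wal_sender_timeout;
SHOW max_replication_slots;
SHOW max_wal_senders;
```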
Usage Limitations
- Before performing data synchronization, assess the performance of the source and target data sources, and perform the synchronization during off-peak business hours if possible. Otherwise, the full data initialization will occupy some of the read and write resources of the source and target data sources, increasing database load.
- Ensure that every table in the synchronization objects has a primary key or unique constraint and that column names are unique; otherwise, the same data may be synchronized repeatedly. A query for finding tables that lack such a constraint is sketched below.
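As a rough aid, a catalog query along the following lines can list tables in a given schema that lack a primary key or unique constraint (adjust the schema filter to match your synchronization objects; this sketch does not detect unique indexes created without a constraint):

```sql
-- Tables in the public schema without a PRIMARY KEY or UNIQUE constraint
SELECT n.nspname AS schema_name, c.relname AS table_name
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname = 'public'
  AND NOT EXISTS (
    SELECT 1
    FROM pg_constraint con
    WHERE con.conrelid = c.oid
      AND con.contype IN ('p', 'u')
  );
```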
Operation Steps
NineData’s data replication product has been commercialized. You can still use 10 replication tasks for free, with the following considerations:
Among the 10 replication tasks, you can include 1 task, with a specification of Micro.
Tasks with a status of do not count towards the 10-task limit. If you have already created 10 replication tasks and want to create more, you can terminate previous replication tasks and then create new ones.
When creating replication tasks, you can only select the specifications you have purchased. Specifications that have not been purchased are grayed out and cannot be selected. If you need to purchase additional specifications, please contact us through the customer service icon at the bottom right of the page.
Log in to the NineData Console.
Click on > in the left navigation bar.
On the page, click on in the top right corner.
On the tab, configure according to the table below and click .
Parameter descriptions:
- Enter the name of the data synchronization task. To facilitate later searching and management, use a meaningful name wherever possible. Up to 64 characters are supported.
- The data source where the synchronization objects are located: select the data source that contains the data to be replicated.
- The data source that receives the synchronization objects: select the target Kafka data source.
- Select the target Kafka Topic; the data from the source data source will be written into the specified Topic.
- When delivering data to a Topic, you can specify which partition of the Topic the data is delivered to (a rough illustration of the hash-based option follows this parameter list):
  - Deliver all data to the default partition 0.
  - Distribute data across partitions: the system uses the hash value of the database name and table name to calculate which partition the target data should be delivered to, ensuring that data from the same table is always delivered to the same partition.
Two replication types are supported:
- Full replication: copies all objects and data from the source data source. The switch on the right enables periodic full replication; for more information, please refer to Periodic Full Replication.
- Incremental replication: after full replication completes, incremental replication is performed based on the logs of the source data source.
(not selectable only under ) Optional only when the task contains or .
The specifications of the replication task determine the replication speed: the larger the specification, the higher the replication rate. Hover over the icon to view the replication rate and configuration details for each specification. Each specification displays the available quantity and the total number of specifications. If the available quantity is 0, it will be grayed out and cannot be selected.
Required when is only . - : Perform incremental replication based on the current replication task start time.
- : Select the start time of incremental replication; you can choose the time zone according to your business region. If the configured time point is earlier than the start of the current replication task and DDL operations occurred during that period, the replication task will fail.
- : Stop the task if data is detected in the target table during the pre-inspection phase.
- : Ignore the data if it is detected in the target table during the pre-inspection phase, and append other data.
- : Delete the data if it is detected in the target table during the pre-inspection phase, and rewrite it.
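NineData does not document the exact hashing scheme, but conceptually the hash-based partition delivery described in the parameter list above behaves like this illustrative sketch (the function name and hashing choice are assumptions, not NineData internals):

```python
import hashlib

def route_partition(database: str, table: str, num_partitions: int) -> int:
    """Illustrative only: map a (database, table) pair to a stable Kafka partition."""
    key = f"{database}.{table}".encode("utf-8")
    digest = hashlib.md5(key).hexdigest()     # stable across processes, unlike built-in hash()
    return int(digest, 16) % num_partitions   # same table always maps to the same partition

# Example: every row from orders_db.order_items lands in the same partition
print(route_partition("orders_db", "order_items", 6))
```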
On the tab, configure the following parameters, and then click .
Select the content to be replicated: you can replicate all content of the source library, or select specific objects in the list and click > to add them to the right-hand list. If you need to create multiple replication chains with the same replication objects, you can create a configuration file and import it when creating a new task: click in the top right corner, click Download Template to download the configuration file template to your local machine, edit it, and then upload the configuration file to complete the batch import. Configuration file description:

Parameter | Description |
---|---|
source_table_name | The source table name of the object to be synchronized. |
destination_table_name | The target table name that receives the object to be synchronized. |
source_schema_name | The source schema name of the object to be synchronized. |
destination_schema_name | The target schema name that receives the object to be synchronized. |
source_database_name | The source database name of the object to be synchronized. |
target_database_name | The target database name that receives the object to be synchronized. |
column_list | The list of fields to be synchronized. |
extra_configuration | Additional configuration information. The following can be configured here: field mapping (`column_name`, `destination_column_name`), field value (`column_value`), and data filtering (`filter_condition`). |
tip: An example of `extra_configuration` is as follows:

```json
{
  "column_name": "created_time",              // Specify the original column name for column-name mapping
  "destination_column_name": "migrated_time", // Map it to the target column name "migrated_time"
  "column_value": "current_timestamp()",      // Change the field's value to the current timestamp
  "filter_condition": "id != 0"               // Only rows where id is not equal to 0 will be synchronized
}
```

For the overall example content of the configuration file, please refer to the downloaded template.
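For orientation only, a single entry in such a configuration file might combine the documented fields roughly as follows; this is a hypothetical illustration, and the downloaded template remains the authoritative format (in particular, the shape of `column_list` and `extra_configuration` shown here is assumed):

```json
{
  "source_database_name": "sales_db",
  "source_schema_name": "public",
  "source_table_name": "orders",
  "target_database_name": "sales_db",
  "destination_schema_name": "public",
  "destination_table_name": "orders",
  "column_list": ["id", "created_time", "amount"],
  "extra_configuration": {
    "column_name": "created_time",
    "destination_column_name": "migrated_time",
    "filter_condition": "id != 0"
  }
}
```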
On the tab, you can individually configure each column to be replicated to Kafka; by default, all columns of the selected table are replicated. If the source or target data source changes while you are configuring the mapping, you can click the button in the top right corner of the page to refresh the source and target data source information. After the configuration is complete, click .
On the tab, wait for the system to complete the pre-inspection, and after the pre-inspection is passed, click .
tip: If the pre-inspection does not pass, click in the column on the right side of the failed inspection item, troubleshoot the cause of the failure, fix it manually, and then click to re-run the pre-inspection until it passes.
If the result of an inspection item is , it can be fixed or ignored depending on the specific situation.
On the page, it prompts , and the synchronization task starts running. At this time, you can perform the following operations:
- Click to view the execution of each stage of the synchronization task.
- Click to return to the task list page.
View Synchronization Results
Log in to NineData Console.
Click on > in the left navigation bar.
On the page, click the ID of the target synchronization task to open the page, the page description is as follows.
1. Synchronization Delay: the data synchronization delay between the source and target data sources. 0 seconds indicates no delay between the two ends, meaning that the data on the Kafka side has caught up with the source side.
2. Configure Alerts: after configuring alerts, the system notifies you through the method you choose when the task fails. For more information, please refer to Operations Monitoring Introduction.
3. More:
   - : Pause the task; only tasks with the status can be paused.
   - : End tasks that are not completed or are listening (i.e., in incremental synchronization). A terminated task cannot be restarted, so operate with caution.
   - : Delete the task. A deleted task cannot be recovered, so operate with caution.
4. Full Replication (displayed in scenarios that include full replication): shows the progress and detailed information of full replication.
   - Click on the on the right side of the page to view the monitoring metrics of the full replication process. During full replication, you can also click on the on the right side of the monitoring metrics page to limit the per-second write rate to the target data source, in MB/s.
   - Click on the on the right side of the page to view the execution logs of full replication.
   - Click on the icon on the right side of the page to view the latest information.
5. Incremental Replication (displayed in scenarios that include incremental replication): shows the monitoring metrics of incremental replication.
   - Click on the on the right side of the page to limit the per-second write rate to the target data source, in rows/second.
   - Click on the on the right side of the page to view the execution logs of incremental replication.
   - Click on the icon on the right side of the page to view the latest information.
6. Modify Object: shows the modification records of the synchronization objects.
   - Click on the on the right side of the page to configure the synchronization objects.
   - Click on the icon on the right side of the page to view the latest information.
7. Expand: shows detailed information of the current replication task, including , , , etc.
Appendix 1: Data Format Description
The data migrated from PostgreSQL to Kafka is stored in JSON format. The system splits the PostgreSQL data into multiple JSON objects, with each JSON object representing one message (Message).
- Full replication phase: the number of PostgreSQL data rows stored in a single message is determined by the `message.max.bytes` parameter. `message.max.bytes` is the maximum message size allowed in the Kafka cluster, with a default value of 1000000 bytes (about 1 MB). You can raise this value by modifying the `message.max.bytes` parameter in the Kafka configuration file, which allows more PostgreSQL data rows to be stored in each message. Note that because Kafka needs to allocate memory space for each message, setting this value too high may degrade Kafka cluster performance.
- Incremental replication phase: a single message stores one row of PostgreSQL data.
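For example, to let each full-replication message carry more rows, you could raise the broker-level limit in the Kafka broker configuration (typically `server.properties`); the 2000000-byte value below is only an illustration:

```properties
# server.properties (Kafka broker) -- restart or dynamically reconfigure the broker to apply
message.max.bytes=2000000
```

If you raise this limit, related settings such as the broker's `replica.fetch.max.bytes` and the consumers' `fetch.max.bytes` / `max.partition.fetch.bytes` generally need to be at least as large, otherwise oversized messages may fail to replicate or be consumed.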
Field | Field Type | Field Description | Field Example |
---|---|---|---|
serverId | STRING | The data source to which the message belongs, in the format <connection address:port>. | "serverId":"47.98.224.21:3307" |
id | LONG | The record ID of the message. This field increases globally and is used as the basis for detecting duplicate message consumption. | "Id":156 |
es | INT | A Unix timestamp whose meaning differs depending on the task stage. | "es":1668650385 |
ts | INT | The time when the data is delivered to Kafka, represented as a Unix timestamp. | "ts":1668651053 |
isDdl | BOOLEAN | Whether the data is DDL. | "is_ddl":true |
type | STRING | The type of the data. | "type":"INIT" |
database | STRING | The database to which the data belongs. | "database":"database_name" |
table | STRING | The table to which the data belongs. If a DDL statement targets a non-table object, this field is null. | "table":"table_name" |
mysqlType | JSON | Reserved field; can be ignored. | "mysqlType": null |
sqlType | JSON | The data type of each field in the source PostgreSQL database. | "sqlType": {"id": "NUMBER","shipping_type":"varchar2(50)"} |
pkNames | ARRAY | The names of the primary keys (Primary Key) corresponding to the records (Record) in the log. | "pkNames": ["id", "uid"] |
data | ARRAY[JSON] | The PostgreSQL data delivered to Kafka, stored as a JSON array. | "data": [{ "name": "jl", "phone": "(737)1234787", "email": "caicai@yahoo.edu", "address": "zhejiang", "country": "china" }] |
old | ARRAY[JSON] | Records the pre-change data during incremental replication from PostgreSQL to Kafka. For other operations, the value of this field is null. | "old": [{ "name": "someone", "phone": "(737)1234787", "email": "someone@example.com", "address": "somewhere", "country": "china" }] |
sql | STRING | If the current data is an incremental DDL operation, this records the corresponding SQL statement. For other operations, the value of this field is null. | "sql":"create table sbtest1(id int primary key,name varchar(20))" |
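Putting these fields together, an illustrative full-replication message assembled from the example values above might look roughly like this (the actual payload emitted by NineData may differ in field order or include additional details):

```json
{
  "serverId": "47.98.224.21:3307",
  "id": 156,
  "es": 1668650385,
  "ts": 1668651053,
  "isDdl": false,
  "type": "INIT",
  "database": "database_name",
  "table": "table_name",
  "mysqlType": null,
  "sqlType": { "id": "NUMBER", "shipping_type": "varchar2(50)" },
  "pkNames": ["id"],
  "data": [
    { "name": "jl", "phone": "(737)1234787", "email": "caicai@yahoo.edu", "address": "zhejiang", "country": "china" }
  ],
  "old": null,
  "sql": null
}
```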
Appendix 2: Pre-Check Item List
Check Item | Check Content |
---|---|
Source data source connection check | Check the gateway status of the source data source, whether the database is connectable, and verify the username and password |
Target data source connection check | Check the gateway status of the target data source, whether the database is connectable, and verify the username and password |
Source database privilege check | Check whether the account privileges of the source database meet the requirements |
Target database privilege check | Check the Kafka account's access permission to the Topic |
Target database data existence check | Check whether there is existing data in the Topic |
Check wal_level | Check whether the wal_level of the source data source is set to logical |
Check max_wal_senders | Check whether max_wal_senders meets the WAL sender requirements |
Check max_replication_slots | Check whether max_replication_slots meets the replication slot requirements |