
Data Import

NineData supports data import, which allows files containing SQL scripts or data content to be imported into the target database. The maximum supported file size is 5 GB.

Background

NineData's SQL task feature supports basic SQL script operations and provides a convenient way to execute small batches of SQL. Importing large amounts of data this way, however, takes considerable time and effort, so a more efficient tool is needed. To address this, NineData offers a data import feature that helps users easily load large amounts of data from files into the target database.

Functionality

NineData's data import functionality allows users to import files containing large amounts of data into the target database. The following sections describe its use cases, precautions, and limitations.

Use Cases

NineData's data import functionality can play an important role in the following scenarios:

  • External data source import: When an enterprise needs to obtain data from an external source, data import can load that data into the target database for subsequent processing and management. For example, an e-commerce platform that needs to import product data from a supplier's system can use data import to quickly load it into its database.
  • Data initialization: When setting up a new database or resetting the database structure, initial data must be loaded. Data import can load the initial data from files into the target database, quickly completing the initialization.
  • Large-scale data updates: Scenarios where large-scale data changes need to be applied to the database. For example:
    • Example 1: An enterprise needs to import a large amount of platform order data into their database for storage and analysis purposes.
    • Example 2: An enterprise needs to delete or merge duplicate or invalid data, or format or convert the data.
    • Example 3: An enterprise needs to update product prices, inventory, or other key business data in bulk.
    • Example 4: An enterprise needs to update the timestamps or data formats for troubleshooting purposes.

Precautions

In the commercial versions, please ensure that your monthly/yearly subscription has not expired; otherwise, you may be unable to use the Database DevOps service. You can quickly check the remaining quota and expiration date in the top right corner of the NineData console page.


Limitations

  • Non-SQL data files must be imported into a specified table, and every column in the data file must exist in the target table; otherwise, the import task will fail.
  • When importing SQL files that do not contain table structure, make sure that the table referenced by each statement in the file already exists in the target database (see the sketch after this list).
  • The current data import function does not support specification validation (pre-check), so caution should be exercised when using it.
  • The currently supported data source types are MySQL, PostgreSQL, SQL Server, Oracle, OceanBase Oracle, OceanBase MySQL, DB2, DaMeng, TiDB, GreatSQL, GaiaDB, GaiaDB-X, TDSQL MySQL.
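
The following is a minimal, hypothetical sketch of the file requirements described above, assuming a MySQL-family target. The product table and its columns are illustrative only and are not part of NineData's documentation; adapt them to your own schema.

```sql
-- Hypothetical target table, assumed to already exist in the target database.
-- A data-only SQL import file must not rely on the import task to create it.
CREATE TABLE product (
  id         BIGINT       NOT NULL PRIMARY KEY,
  name       VARCHAR(128) NOT NULL,
  price      DECIMAL(10, 2),
  updated_at DATETIME
);

-- Contents of a data-only import file (import type: SQL). Multi-row INSERT
-- statements like this are also an example of the batched statement
-- submission mentioned in the task settings in step 4.
INSERT INTO product (id, name, price, updated_at) VALUES
  (1, 'Keyboard', 199.00, '2024-01-01 10:00:00'),
  (2, 'Mouse',     99.00, '2024-01-01 10:00:00');

-- A CSV import file for the same table would carry matching column names in
-- its first row, for example:
--   id,name,price,updated_at
--   1,Keyboard,199.00,2024-01-01 10:00:00
-- Every column named in the data file must exist in the target table.
```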

Submitting a Task

Prerequisites

  • The permissions to the target database and tables have been granted. If not, please apply for them. For more information, see Permission Application and Approval.
  • The module permission has been granted (it is granted by default). If you cannot see this entry, please contact your system administrator.
  • The permission is granted. For information on how to apply for this permission, see Permission Application and Approval.

Steps

  1. Log in to the NineData console.

  2. Click > in the left navigation pane.

  3. On the page, click in the upper-right corner of the page.

  4. On the page, configure the settings according to the table below.

    Parameter | Description
    Enter a name for the data import task. This is only required in organization mode. For ease of subsequent search and management, please use a meaningful name. The name can be up to 64 characters long.
    Select the target data source and database for the data import task. When the import type is not SQL, you also need to select the specific target table for the import (for SQL Server, Oracle, OceanBase Oracle, and DB2, select the corresponding schema before choosing the target table).
    Note: If you do not have permissions for this data source or database, a prompt will appear on the page. You can click the button in the prompt box to apply for permissions. For more information, see Authorization and Approval Process.
    Select the data import type, which supports the following three types:
    • SQL: A file containing a series of SQL statements, which are imported into the target database by replaying the SQL.
    • CSV: A text file format in which each line represents a record and fields are usually separated by commas.
    • EXCEL: An Excel spreadsheet file in which each cell contains a field value.
    Click the button to select the data file you want to import into the database. You can upload a .zip compressed file that contains multiple data files to perform batch uploading.
    Note: Please ensure that the column names and column numbers in the data file are consistent with those in the target table for the import, otherwise the import may fail.
    Select the character set of the data file to import:
    • (default): The system automatically detects the character set of the data file.
    • UTF-8: A variable-length Unicode character encoding.
    • GBK: A Chinese character encoding that extends from the GB2312 encoding.
    Enter the reason for the import. This is only required in organization mode and is used for the approval process.
    When enabled, the system does not perform specification pre-checks on the task and only performs permission checks on the data source. This mode merges multiple SQL statements into one batch statement for submission, which can reduce network overhead and improve import performance. This mode is always enabled in the current version.
    The person responsible for executing the task after it has been approved, which is only required in organization mode.
    Note: The options in the list depend on the following two situations:
    • The SQL Dev policy for the current data source has been configured: The options in the list are based on the rule configured in the current policy. This rule is located on the tab. For information on how to configure development specifications, see Edit policy.
    • The current data source has not been configured with development specifications: The options in the list are users who have permission for on the current data source. For more information on authorization, see Configuring user permissions.
    Clicking on will expand the configuration options.
    MySQL, TiDB, GreatSQL
    • (optional when is CSV or EXCEL):
      • (selected by default): Choose whether the first row of the data file contains field names. Please check or uncheck based on the actual content of your import file.
      • (In CSV format only, optional and selected by default): Choose the delimiter for each field in the data file. Select based on the actual content of your import file.
      • : This option allows you to select which column(s) from the data file to import into the target table. Multiple selections are supported. When selecting multiple columns, the system will import them in the order of the selected column names. This feature is useful when the column names or the number of columns in the data file and the target table are different.
    • (optional when is CSV or EXCEL):
      • Insert (default): Insert the data into the table. If a row with the same primary key or unique key value already exists, the insertion fails and an error message is displayed.
      • Insert Ignore: Insert the data into the table. If a row with the same primary key or unique key value already exists, that row is skipped and left unchanged.
      • Replace Into: Insert the data into the table. If a row with the same primary key or unique key value already exists, it is overwritten with the new values.
      Warning: Replace Into may overwrite and modify existing data in the table. Use it with caution. (A sketch of how these three strategies behave in MySQL follows step 10 below.)
    • :
      • : Select this option to skip checking restrictions imposed by foreign key constraints.
    PostgreSQL
    • (optional when is CSV or EXCEL):
      • (selected by default): Choose whether the first row of the data file contains field names. Please check or uncheck based on the actual content of your import file.
      • (In CSV format only, optional and selected by default): Choose the delimiter for each field in the data file. Select based on the actual content of your import file.
      • : This option allows you to select which column(s) from the data file to import into the target table. Multiple selections are supported. When selecting multiple columns, the system will import them in the order of the selected column names. This feature is useful when the column names or the number of columns in the data file and the target table are different.
    • (optional when is CSV or EXCEL):
      • Insert (default): Insert the data into the table. If a row with the same primary key or unique key value already exists, the insertion fails and an error message is displayed.
      • Insert Ignore: Insert the data into the table. If a row with the same primary key or unique key value already exists, that row is skipped and left unchanged.
    SQL Server
    • (optional when is CSV or EXCEL):
      • (selected by default): Choose whether the first row of the data file contains field names. Please check or uncheck based on the actual content of your import file.
      • (In CSV format only, optional and selected by default): Choose the delimiter for each field in the data file. Select based on the actual content of your import file.
      • : This option allows you to select which column(s) from the data file to import into the target table. Multiple selections are supported. When selecting multiple columns, the system will import them in the order of the selected column names. This feature is useful when the column names or the number of columns in the data file and the target table are different.
    • (displayed when is CSV or EXCEL):
      • Insert (fixed): Insert the data into the table. If a row with the same primary key or unique key value already exists, the insertion fails and an error message is displayed.
    • :
      • : Select this option to skip checking restrictions imposed by foreign key constraints.
    Oracle|OceanBase Oracle
    (optional when is CSV or EXCEL):
    • (selected by default): Choose whether the first row of the data file is the field name. Please check or uncheck based on the actual situation of your import file.
    • (optional and selected by default, for CSV only): Choose the delimiter for each field in the data file. Please select based on the actual situation of your import file.
    • : This option allows you to select which column(s) in the data file to import into the target table, supporting multiple selections. When selecting multiple columns, the system will import according to the order of the column names chosen by the user. This feature is suitable for situations where the column names in the data file and the target table are inconsistent or the number of columns is different.
    DB2|DaMeng
    (optional when is CSV or EXCEL):
    • (selected by default): Choose whether the first row of the data file is the field name. Please check or uncheck based on the actual situation of your import file.
    • (optional and selected by default, for CSV only): Choose the delimiter for each field in the data file. Please select based on the actual situation of your import file.
    • : This option allows you to select which column(s) in the data file to import into the target table, supporting multiple selections. When selecting multiple columns, the system will import according to the order of the column names chosen by the user. This feature is suitable for situations where the column names in the data file and the target table are inconsistent or the number of columns is different.
  5. Click to automatically jump to the page.

  6. At the node, wait for the task status to change to , and then click the in the upper right corner of the page to enter the node. In the node, you can also perform the following operations:

    • Click the refresh icon in the upper right corner of the page to refresh the task status.
    • Click in the upper right corner of the page to perform the check again.
    • Click in the upper right corner of the page to directly withdraw the task or edit the task and submit it again.
    • Click in the upper right corner of the page to copy a new task with the same configuration.
    tip
    • will automatically check the import task, including a syntax error check (when the import type is SQL), an estimated row count check, and a database permission check. If any issues are found during the check, detailed information is displayed to help you locate and fix them.
    • Under , this node will be skipped automatically, the task will proceed to the node, and the task status will change to .
    • If the approval process is not configured for this data source, the task will automatically skip the and nodes and proceed to the node, and the task status will change to . You can go directly to step 9.
  7. Click on in the upper right corner of the page, select the approvers in the pop-up window, and click .

    tip
    • Depending on the configuration of the approval process, the number of approvers to be selected here may vary. Please select according to the actual situation.
    • If the feature is enabled in the current approval process, this section will display . You do not need to manually specify an approver; all personnel authorized to approve the current work order will receive approval notifications and be able to approve it.

  8. The task status will change to . Before the approval, you can perform the following operations:

    • Click the refresh icon in the upper right corner of the page to refresh the task status.
    • Click in the upper right corner of the page to directly withdraw the task or edit and submit the task again.
    • Click in the upper right corner of the page to change the approver of the task.
    • Click in the upper right corner of the page to copy a new task with the same configuration.
  9. When the task status changes to , click in the upper right corner of the page, select , and click .

    • : Execute the data import task immediately.
    • : Select a time to execute the data import task.
  10. Wait for the task status to change to .
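
As referenced in the duplicate-handling options of step 4, the following sketch shows how the Insert, Insert Ignore, and Replace Into strategies behave on a MySQL-family target. It illustrates standard MySQL statement semantics rather than NineData internals, using the hypothetical product table from the earlier sketch; the statements the import task actually generates may differ.

```sql
-- Assume the hypothetical product table already contains a row with id = 1.

-- Insert: fails with a duplicate-key error because id = 1 already exists.
INSERT INTO product (id, name, price) VALUES (1, 'Keyboard Pro', 249.00);

-- Insert Ignore: the conflicting row is skipped and the existing row is kept.
INSERT IGNORE INTO product (id, name, price) VALUES (1, 'Keyboard Pro', 249.00);

-- Replace Into: the existing row is removed and replaced with the new values;
-- columns not supplied are reset to their defaults, which is why this option
-- carries an overwrite warning.
REPLACE INTO product (id, name, price) VALUES (1, 'Keyboard Pro', 249.00);

-- The advanced option for skipping foreign key constraint checks conceptually
-- corresponds to disabling constraint checking for the import session,
-- for example in MySQL:
SET FOREIGN_KEY_CHECKS = 0;
-- ... import statements run here ...
SET FOREIGN_KEY_CHECKS = 1;
```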

Approve Tasks

Prerequisites

  • Created or joined an organization. For more information, see Manage Organizations.
  • The NineData console is in organization mode. For information on switching from personal mode to organization mode, see Switch to Organization.
  • The current data import task is in status .
  • You are the or the of the current data import task.

Procedure

  1. Log in to the NineData console.

  2. Click and then on the left navigation pane.

  3. On the page, find the target task and click the task ID or in the column on the right of the task.

    tip

    You can also hover over the notification icon at the top of the page and click the target task in the tab.

  4. On the page, review the approval request and choose , , or based on the actual situation.

    Operation | Description
    Transfer the approval request to another user.
    Approve the request. After clicking , you can optionally enter the , and click .
    Reject the request. After clicking , you also need to enter the and click .