UDTS FAQ

Which inter-regional cross-domain migrations are now supported?

Cross-domain migration with UDTS uses UDPN or dedicated lines, so any path between supported regions that has a UDPN/dedicated-line transmission route is supported. For the list of supported regions, please refer to the control panel.

What conditions need to be met for a full migration of MySQL?

Please refer to the documentation on full MySQL migration.

What conditions need to be met for an incremental migration of MySQL?

Please refer to the documentation on incremental MySQL migration.

What to do with the incremental migration error message only support ROW format binlog?

Incremental migration requires the binlog_format value of MySQL (including UDB MySQL) to be ROW. You can choose one of the following methods to change binlog_format; after the change, you must re-run the UDTS full + incremental task once.

1. Modify the configuration file (my.cnf by default), then restart MySQL

[mysqld]
...
binlog_format = ROW
binlog_row_image = FULL
...

Note: MySQL 5.5 has no binlog_row_image variable, so it does not need to be set.

2. Set it through MySQL commands

Note that when binlog_format is set through MySQL commands, connections that are already writing data keep using the original binlog_format for their writes. The new value only takes effect for those sessions after they reconnect.

FLUSH TABLES WITH READ LOCK;
FLUSH LOGS;
SET GLOBAL binlog_format = 'ROW';
-- Similarly, if it is MySQL 5.5, there is no need to set binlog_row_image
SET GLOBAL binlog_row_image = 'FULL';
FLUSH LOGS;
UNLOCK TABLES;

After the change is completed, you can disconnect the existing connections through the following commands.

-- View all current connections
> show processlist;
+-----+------+-----------------+--------+---------+------+----------+------------------+
| Id  | User | Host            | db     | Command | Time | State    | Info             |
+-----+------+-----------------+--------+---------+------+----------+------------------+
| 495 | root | 10.20.5.1:56820 | <null> | Query   | 0    | starting | show processlist |
| 497 | root | 10.20.5.1:56828 | <null> | Sleep   | 3    |          | <null>           |
+-----+------+-----------------+--------+---------+------+----------+------------------+
 
-- Disconnect all sessions with KILL; if you can identify which connections perform writes, you may kill only those
> kill 497;
 
FLUSH LOGS;

If you are using master-slave replication, you also need to execute the following commands.

Execute on the slave node:

stop slave;

Execute on the master node:

FLUSH TABLES WITH READ LOCK;
FLUSH LOGS;
SET GLOBAL binlog_format = 'ROW';
FLUSH LOGS;
UNLOCK TABLES;

Execute on the slave node:

start slave;

Note:

After running SET GLOBAL binlog_format = 'ROW';, existing connections still see the original value from show global variables like 'binlog_format'; because their session-level variable is unchanged. Disconnect and reconnect to see the new value.

3. Cloud database

If you are using a cloud database, copy the configuration file currently in use, modify the binlog_format and binlog_row_image configuration items, and then re-apply this configuration to the database.

ERROR 1292 (22007): Incorrect date value: '0000-00-00' for column Error Handling

This is caused by inconsistent sql_mode settings between the source and target MySQL (UDB MySQL) instances: the target database does not allow invalid dates such as '0000-00-00'.

You can check the sql_mode with the following commands:

select @@sql_mode;

-- or
show variables like "sql_mode";

If the current sql_mode contains NO_ZERO_IN_DATE or NO_ZERO_DATE, remove them and append ALLOW_INVALID_DATES to the sql_mode value so that such invalid dates can be inserted.

Use the following command to modify sql_mode:

SET GLOBAL sql_mode='xxxx,ALLOW_INVALID_DATES';

Here, xxxx stands for the original sql_mode value with NO_ZERO_IN_DATE and NO_ZERO_DATE removed.
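The string surgery on sql_mode can be sketched in a few lines (plain Python, illustrative only; the input value below is an example, not read from a real server):

```python
def fix_sql_mode(current_mode):
    """Drop NO_ZERO_IN_DATE / NO_ZERO_DATE from an sql_mode string
    and append ALLOW_INVALID_DATES, preserving all other flags."""
    flags = [f for f in current_mode.split(",")
             if f not in ("NO_ZERO_IN_DATE", "NO_ZERO_DATE")]
    if "ALLOW_INVALID_DATES" not in flags:
        flags.append("ALLOW_INVALID_DATES")
    return ",".join(flags)

mode = "ONLY_FULL_GROUP_BY,NO_ZERO_IN_DATE,NO_ZERO_DATE,STRICT_TRANS_TABLES"
print(fix_sql_mode(mode))
# ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,ALLOW_INVALID_DATES
```

The result is the value to put in place of xxxx,ALLOW_INVALID_DATES in the SET GLOBAL statement above.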

How to Determine if the Source and Target Databases in MySQL-MySQL Incremental Task Have Been Synchronized

When the data in the target database of the incremental task is consistent with the source database, the status of the incremental task will change from Synchronizing to Synchronized.

The Synchronized status explanation:

It means the data of the target database matches the source database within the scope specified by the migration task:

  • When the migration scope is *, Synchronized means every db in the target other than the built-in databases (sys, test, INFORMATION_SCHEMA, PERFORMANCE_SCHEMA, etc.) is consistent with the source.
  • When the migration scope lists several dbs, e.g. db1,db2, it means the data of db1 and db2 in the target database is consistent with the source.
  • Whenever the source data changes (even in a db outside the migration scope), the status changes back to Synchronizing until the data is consistent again.
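The scope rules above can be sketched as a tiny predicate (plain Python, illustrative only; the built-in database list follows the paragraph above):

```python
BUILTIN_DBS = {"sys", "test", "INFORMATION_SCHEMA", "PERFORMANCE_SCHEMA"}

def in_migration_scope(scope, db):
    """scope is '*' (every non-built-in db) or a comma-separated list
    like 'db1,db2'; returns whether `db` must be consistent for the
    task to be considered Synchronized."""
    if scope == "*":
        return db not in BUILTIN_DBS
    return db in scope.split(",")

# Note: any source-side change flips the status back to Synchronizing,
# even when the changed db is outside the scope:
print(in_migration_scope("*", "orders"))     # True
print(in_migration_scope("db1,db2", "db3"))  # False, yet a change in db3
                                             # still resets the status
```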

Error 1040: Too many connections Error Handling

The database server has reached the maximum number of connections.

show variables like '%max_connections%';
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| max_connections | 151   |
+-----------------+-------+
1 row in set (0.01 sec)

Solution:

Increase max_connections: set GLOBAL max_connections=1000;

row size too large (> 8126). Changing some columns to TEXT or BLOB or using ROW_FORMAT=DYNAMIC or ROW_FORMAT=COMPRESSED may help. In current row format, BLOB prefix of 768 bytes is stored inline Error Handling

Solution:

Execute in the target database:

set GLOBAL innodb_file_format_max = 'Barracuda';
set GLOBAL innodb_file_format = 'Barracuda';
set GLOBAL innodb_file_per_table = ON;
set GLOBAL innodb_strict_mode = OFF;

Column count doesn't match value count Error Handling

Incremental tasks migrating from Alibaba Cloud to SurferCloud have hit errors like Column count doesn't match value count: 2 (columns) vs 3 (values). The source database contains hidden primary keys, so the column list and the value list do not match.

Solution: Find tables without primary keys and add primary keys.

  • Find tables without primary keys:

    SELECT table_schema, table_name
    FROM information_schema.tables
    WHERE (table_schema, table_name) NOT IN (
            SELECT DISTINCT table_schema, table_name
            FROM information_schema.columns
            WHERE COLUMN_KEY = 'PRI')
        AND table_schema NOT IN ('sys', 'mysql', 'INFORMATION_SCHEMA', 'PERFORMANCE_SCHEMA');
  • Add primary keys to these tables without primary keys.

sync: Type:ExecSQL msg:"exec jobs failed,err:Error 1205:Lock wait timeout exceeded;try restarting transaction" Error Handling

This can happen when the target database has a relatively low specification. Simply restart the task: it automatically resumes from the last completed synchronization position (breakpoint resume is supported).

Will there be a database lock during the running of the MySQL full task? When is it released?

In default mode: while the full task is running, FTWRL (FLUSH TABLES WITH READ LOCK) is executed to ensure data consistency. If the migrated database contains non-transactional tables such as MyISAM, the lock is released only after those tables have been backed up, so the release time depends on their data size. For InnoDB tables, the lock is released immediately and a consistent-read transaction is started with START TRANSACTION WITH CONSISTENT SNAPSHOT.

  • For MyISAM tables, the table being dumped allows reading, but not writing, until the table dump is completed.
  • For InnoDB tables, the table being dumped allows reading and writing.

In Nolock mode: No locks will be applied to any database or table.

Redis migration error ERR illegal address

The likely cause is that a whitelist is enabled on the source database, but the IP of the UDTS machine is not in it. If you need the whitelist addresses, please contact technical support.

Error 3140: Invalid JSON text Error Handling

During MySQL synchronization, Error 3140: Invalid JSON text: \"The document is empty.\" at position 0 in value for column may occur because the source database did not enforce strict checks: the column is defined as NOT NULL, yet the data contains NULL values.

There are two solutions, handle according to needs:

  • Repair the data in the source database, correcting all NULL values to valid values (this usually also matches the business logic).
  • Or modify the table in the target database to allow NULL for the field. For example, for table period_progress and field total:
ALTER TABLE `period_progress` CHANGE `total` `total` JSON NULL;

MongoDB migration error error reading collection: cursor id 5707195885304103447 not found

The default validity period of a MongoDB cursor is 10 minutes. If the data cannot be processed within 10 minutes, the cursor will expire, resulting in the inability to obtain the remaining data. Solution:

  • Log into the MongoDB shell, switch to the admin database, and set the cursor timeout to 1 day.
db.runCommand( { setParameter:1 , "cursorTimeoutMillis":86400000}  )
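The 86400000 passed to cursorTimeoutMillis above is simply one day expressed in milliseconds; a quick sanity check in plain Python:

```python
# cursorTimeoutMillis takes milliseconds; one day is:
ONE_DAY_MS = 24 * 60 * 60 * 1000
print(ONE_DAY_MS)  # 86400000
```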

Error 1264: Out of range value for column 'xxx' at row 100 Error Handling

When importing data from MySQL to TiDB, you may encounter Error 1264: Out of range value for column 'xxx' at row 100. The source database did not strictly validate the data, so abnormal values could be stored; the target database, however, enforces the declared constraints on import.

For example:

CREATE TABLE `aaa` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `lot` float(10,7) NOT NULL DEFAULT '0.0000000',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin;

Suppose the source database has a table with the above structure. The lot field is float(10,7): 10 digits in total, 7 of them after the decimal point, so the largest absolute value it can represent is 999.9999999. A row with lot = -1000 therefore fails to import with Error 1264: Out of range value for column 'lot'.
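The FLOAT(M,D) range constraint can be sketched as a quick check (plain Python, illustrative only; the values come from the example above):

```python
def fits_float(value, m, d):
    """Return True if `value` fits a MySQL FLOAT(M,D) column:
    at most M digits in total, D of them after the decimal point,
    i.e. |value| < 10 ** (M - D)."""
    return abs(value) < 10 ** (m - d)

print(fits_float(999.9999999, 10, 7))  # True: at the upper edge of float(10,7)
print(fits_float(-1000, 10, 7))        # False: triggers Error 1264 on import
```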

There are two solutions, both of which require adjustments to the source database:

  • Adjust the table structure to increase the field length:

    select max(`lot`) from `aaa`;
    select min(`lot`) from `aaa`;

    Obtain the maximum and minimum values, then use alter to modify the table structure to meet the specific conditions. For example, modify the lot of table aaa to float(11,7):

    ALTER TABLE `aaa` MODIFY `lot` float(11,7) NOT NULL DEFAULT '0.0000000';
  • Correct the data to comply with the table structure restrictions:

    select `id` from `aaa` where `lot` > 999 OR `lot` < -999;

    Correct the data found in the query.

TiDB migration error Error: incorrect utf8 value f09f8c80 for column xxx

Set the tidb_skip_utf8_check value on the target database:

set global tidb_skip_utf8_check=1;

TiDB migration error Error 1071: Specified key was too long; max key length is 3072 bytes

Increase the value of the max-index-length item in the target database's TiDB configuration file, for example to 12288.

Cannot add foreign key constraint Error Handling

When migrating from MySQL to MySQL/TiDB, if you encounter the error Cannot add foreign key constraint, you need to check the CHARSET of the table structure. The CHARSET of the current table structure and its dependent foreign key table structure must be consistent.

Here is an example: when the CHARSET of the group and user tables differs, you may encounter the error Cannot add foreign key constraint:

CREATE TABLE `group` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `name` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_bin NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
 
CREATE TABLE `user` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `username` varchar(255) NOT NULL,
  `password` varchar(255) DEFAULT NULL,
  `g_id` int(11) NOT NULL,
  PRIMARY KEY (`id`),
  FOREIGN KEY (`g_id`) REFERENCES `group`(`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

How to handle this problem:

  1. In the source database, execute the following SQL to change the table's CHARSET. It is recommended to replace the old encoding with a compatible one (here, utf8 with utf8mb4) so that user and group share the same CHARSET.

ALTER TABLE `user` CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_bin;

  2. Re-run the UDTS task.

The source database is MySQL, and an error occurs during incremental synchronization: ERROR 1236 (HY000): The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires

This is because the binlog synchronization point required for incremental synchronization cannot be found in the source database. There are four possible reasons:

  1. The task is full + incremental, and the full phase took longer than the binlog expiration time, so the binlog position recorded at the end of the full phase had already been purged. In this case, increase the binlog expiration time of the source database.
  2. The source database's binlog was purged manually.
  3. The task was paused or failed, and was restarted after some time, during which the binlog expired and was purged.
  4. Incremental synchronization was run without GTID and a master-slave switchover occurred.

Solution: All of the above situations require re-execution of UDTS's full + incremental, and the problem cannot be solved by simply restarting the task.

Steps:

  • For full + incremental tasks, click Edit Task on the task details page, fill in the task information, and select Save Task. The current task progress is cleared, and the task restarts from the full phase.
  • For incremental tasks, perform a full migration again, then fill in the correct BinlogName, BinlogPos, and GTID; the task will synchronize from that binlog position after starting.

There are two ways to modify the binlog log expiration time:

  • 1 Temporary: takes effect immediately, but is lost after the database restarts.

    View the default set expiration time:

    show variables like "%expire_logs%";

    Set to keep for 15 days:

    set global expire_logs_days=15;

    Refresh the log:

    flush logs;

    View the newly generated binlog log:

    show master status\G

    Note: The above commands take effect immediately when executed. Choose the retention period carefully to avoid deleting binlogs that are still needed.

  • 2 Permanent: modify the configuration file.

    Please modify the location of the MySQL configuration file according to the actual situation:

    vim /etc/my.cnf

    Modify the configuration related to binlog expiration:

    [mysqld]
    expire_logs_days=15

    Note: After modifying the configuration file, restart MySQL for the change to persist. The unit is days; setting 0 means binlogs never expire.

The table 'xxx' is full or No space left on device Error Handling

This is usually caused by insufficient disk space on the target. Upgrade the target's disk space or clean up data in the target database.

For The table 'xxx' is full there is another special case: the table uses the MEMORY storage engine. This can be resolved by raising the following two values in the MySQL configuration file (or by converting the table to another storage engine). The default for both is 16M.

tmp_table_size = 256M
max_heap_table_size = 256M

What value is appropriate? That depends on the actual situation:

  • The memory available to the current MySQL instance is the upper limit.
  • How much data is the MEMORY-engine table expected to hold?
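A rough back-of-the-envelope sizing (plain Python; the row count, row width, and 20% overhead factor are assumed inputs you would measure for your own table, not MySQL constants):

```python
def heap_table_size_bytes(rows, avg_row_bytes, overhead_pct=20):
    """Estimate the memory a MEMORY-engine table needs.
    overhead_pct pads for MEMORY's fixed-length rows and hash
    indexes; 20% is an assumed fudge factor."""
    return rows * avg_row_bytes * (100 + overhead_pct) // 100

# e.g. 1M rows of ~200 bytes each:
est = heap_table_size_bytes(1_000_000, 200)
print(est)  # 240000000 bytes (~229 MiB), so 256M is a reasonable setting
```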

Error 1785: When @@GLOBAL.ENFORCE_GTID_CONSISTENCY = 1, updates to non-transactional tables can only be done in either autocommitted statements or single-statement transactions, and never in the same statement as updates to transactional tables Error Handling

If a single MySQL transaction writes to both transactional tables (such as InnoDB) and non-transactional tables (such as MyISAM), this error occurs when GTID is enabled on the database.

To solve this, you can choose one of the following methods:

  • In the source database, change the non-transaction engine table (usually MyISAM) to a transaction engine table (usually InnoDB).
  • In the target database, disable GTID.

Invalid default value for 'xxxx_time' Error Handling

During migration, if the target database's sql_mode contains NO_ZERO_DATE or NO_ZERO_IN_DATE, and the source data contains date values like 0000-00-00 or time values like 00:00:00, loading such values may fail.

Solution:

  • Modify the sql_mode of the target database: remove NO_ZERO_DATE and NO_ZERO_IN_DATE, and add ALLOW_INVALID_DATES. For the steps, see the ERROR 1292 entry above.
  • If STRICT_TRANS_TABLES is present in the target database's sql_mode, it can also be removed to avoid related problems, and added back after the migration completes.

error creating indexes for xx.xx: createIndex error: WiredTigerIndex::insert: key too large to index, failing 1089 Error Handling

MongoDB introduced the failIndexKeyTooLong parameter in version 2.6 and deprecated it after version 4.2; it limits the length of index keys.

Solution: Execute in the target database.

db.runCommand({
    "setParameter": 1,
    "failIndexKeyTooLong": false}
);

OOM command not allowed when used memory > 'maxmemory' or rss_memory not enough. Error Handling

When the target database is a MongoDB shard cluster, there may be insufficient memory in the target shard.

Solution: Upgrade the shard capacity of the target cluster.

ddtLog.v_HBUSer: error creating collection xx.xx: error running create command: Cannot create view when the featureCompatibilityVersion is 3.X. Error Handling

During MongoDB migration, this error indicates that the featureCompatibilityVersion values of the source and target databases differ, and the source uses features the target does not support, such as views or collation rules.

Solution: Check and adjust the content that is incompatible between the source and target databases.

The source database is Redis, error message: [PANIC] read error, please check source redis log or network

During Redis migration, this error indicates that data is being written to the source database faster than the replication output buffer can accommodate.

Solution: Execute the following command to modify the value of the client-output-buffer-limit parameter in the source database.

config set client-output-buffer-limit 'slave 536870912 0 0'

The source database is MongoDB, error message: Failed: error creating intents to dump: error creating intents for database config: error getting collections for database config: (Unauthorized) not authorized on config to execute command { listCollections: 1, filter: {}, cursor: {}, lsid: { id: UUID("12d62805-acea-472f-9862-ca27253c107e") }, $db: "config" }

During a full migration of MongoDB, the above error indicates that the source database user lacks backup permissions.

Solution: Execute the following command to modify the source database user permissions.

db.grantRolesToUser("root",[{role:"backup",db:"admin"}])