Does mysqldump handle binary data reliably?

The dumps generated by mysqldump can be trusted.

To avoid problems with encodings, binary transfers, and the like, use the --hex-blob option, which translates each byte into a hex number (for example, 'abc' becomes 0x616263). It makes the dump bigger, but it is the most compatible and safest way to preserve the data, since the dump stays pure text and there are no misinterpretations caused by special symbols when binary data ends up in a text file.
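For example, a dump taken this way (mydb is just a placeholder database name here) looks like:

mysqldump --hex-blob -uroot -p mydb > mydb_dump.sql

and binary values in the resulting INSERT statements come out as 0x... hex literals instead of raw bytes.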

You can ensure the integrity of the dump files (and speed up the transfer) by packing them into a rar or zip file. That way you can easily detect whether a file was corrupted during the transfer.
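The archive's built-in CRC already catches most damage, but an explicit checksum works too; for example, with gzip and sha256sum (mydb_dump.sql is a placeholder):

gzip mydb_dump.sql
sha256sum mydb_dump.sql.gz > mydb_dump.sql.gz.sha256
# after the transfer, on the destination host:
sha256sum -c mydb_dump.sql.gz.sha256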

When you try to load it on your server, check that you have set the following in your my.cnf server config file:

[mysqld]
max_allowed_packet=600M

or more if needed.
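You can confirm that the running server picked the value up (it is reported in bytes) with:

mysql -uroot -p -e "SHOW VARIABLES LIKE 'max_allowed_packet';"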

BTW, I just did a migration and dumped lots of binary data with mysqldump, and it worked perfectly.


No, it is not always reliable when you have binary blobs. In that case you MUST use the "--hex-blob" flag to get correct results.

Caveat from a comment below:

If you combine --hex-blob with the -T flag (one file per table), the --hex-blob flag will be silently ignored.
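If you do need one file per table, a sketch that avoids -T entirely (so --hex-blob stays effective) is to loop over the tables yourself; mydb and the credentials here are placeholders, and the loop assumes table names without spaces:

for t in $(mysql -uroot -p"PASSWORD" -N -e 'SHOW TABLES' mydb); do
  mysqldump --hex-blob -uroot -p"PASSWORD" mydb "$t" > "$t.sql"
done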

I have a case where these calls fail (importing on a different server, but both running CentOS 6 / MariaDB 10):

mysqldump --single-transaction --routines --databases myalarm -uroot -p"PASSWORD" | gzip > FILENAME.sql.gz
gunzip < FILENAME.sql.gz | mysql -p"PASSWORD" -uroot --comments

It produces a file that silently fails to import. Adding "--skip-extended-insert" gives me a file that's much easier to debug, and I find that this line is generated but can't be read back (yet no error is reported during either export or import):

INSERT INTO `panels` VALUES (1003,1,257126,141,6562,1,88891,'??\\\?ŖeV???,NULL);

Note that the terminating quote on the binary data is missing in the original.

select hex(packet_key) from panels where id=1003;
--> DE77CF5C075CE002C596176556AAF9ED

The column is binary data:

CREATE TABLE `panels` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `enabled` tinyint(1) NOT NULL DEFAULT '1',
  `serial_number` int(10) unsigned NOT NULL,
  `panel_types_id` int(11) NOT NULL,
  `all_panels_id` int(11) NOT NULL,
  `installers_id` int(11) DEFAULT NULL,
  `users_id` int(11) DEFAULT NULL,
  `packet_key` binary(16) NOT NULL,
  `user_deleted` timestamp NULL DEFAULT NULL,
  PRIMARY KEY (`id`),
  ...

So no, not only can you not necessarily trust mysqldump, you can't even rely on it to report an error when one occurs.
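Since neither the export nor the import reports an error, one way to catch this kind of silent corruption is to compare the hex of the column on the source and destination servers after the import (SOURCE_HOST and DEST_HOST are placeholders):

mysql -h SOURCE_HOST -uroot -p -e "SELECT hex(packet_key) FROM myalarm.panels WHERE id=1003"
mysql -h DEST_HOST -uroot -p -e "SELECT hex(packet_key) FROM myalarm.panels WHERE id=1003"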


An ugly workaround I used was to exclude the two afflicted tables from the mysqldump by adding options like this:

--ignore-table=myalarm.panels 

Then this bash script hack. Basically, run a SELECT that produces the INSERT values, where the NULL columns are handled and the binary column is turned into an UNHEX() call, like so:

(123,45678,UNHEX("AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"),"2014-03-17 00:00:00",NULL),

Paste it into your editor of choice to play with it if you need to.

echo "SET UNIQUE_CHECKS=0;SET FOREIGN_KEY_CHECKS=0;DELETE FROM panels;INSERT INTO panels VALUES " > all.sql
mysql -uroot -p"PASSWORD" databasename -e "SELECT CONCAT('(',id,',', enabled,',', serial_number,',', panel_types_id,',', all_panels_id,',', IFNULL(CONVERT(installers_id,CHAR(20)),'NULL'),',', IFNULL(CONVERT(users_id,CHAR(20)),'NULL'), ',UNHEX(\"',HEX(packet_key),'\"),', IF(ISNULL(user_deleted),'NULL',CONCAT('\"', user_deleted,'\"')),'),') FROM panels" >> all.sql
echo "SET UNIQUE_CHECKS=1;SET FOREIGN_KEY_CHECKS=1;" > all.sql

That gives me a file called "all.sql", which needs the final comma in the INSERT turned into a semicolon; then it can be run as above. I needed the "large import buffer" tweaks set in both the interactive mysql shell and on the command line to process that file, because it's large.
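If you'd rather script that comma fix than do it by hand, one option (assuming GNU sed) is to run this between the mysql line and the final echo of the script above, while the last VALUES row is still the last line of the file:

sed -i '$ s/,$/;/' all.sql

As for the large import buffer, on the command line that meant: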

mysql ... --max_allowed_packet=1GB
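The equivalent inside the interactive mysql shell (it needs the SUPER privilege, the value is in bytes, and it only applies to new connections) is something like:

SET GLOBAL max_allowed_packet = 1073741824;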

When I reported the bug I was eventually pointed at the "--hex-blob" flag, which does the same as my workaround but trivially from my side. Add that option and the blobs get dumped as hex; the end.
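For the failing dump above, that just means adding the one flag to the original command:

mysqldump --hex-blob --single-transaction --routines --databases myalarm -uroot -p"PASSWORD" | gzip > FILENAME.sql.gz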