Changed max_allowed_packet and still receiving 'Packet Too Large' error

The first thing I thought about was what max_allowed_packet actually controls. Here is what I found:

According to page 99 of "Understanding MySQL Internals" (ISBN 0-596-00957-7), paragraphs 1-3 explain it:

MySQL network communication code was written under the assumption that queries are always reasonably short, and therefore can be sent to and processed by the server in one chunk, which is called a packet in MySQL terminology. The server allocates the memory for a temporary buffer to store the packet, and it requests enough to fit it entirely. This architecture requires a precaution to avoid having the server run out of memory: a cap on the size of the packet, which this option accomplishes.

The code of interest in relation to this option is found in sql/net_serv.cc. Take a look at my_net_read(), then follow the call to my_real_read() and pay particular attention to net_realloc().

This variable also limits the length of a result of many string functions. See sql/field.cc and sql/item_strfunc.cc for details.
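For completeness, here is how I checked and raised the value itself. The 1G figure below is just an illustration; note that SET GLOBAL only affects new connections and does not survive a restart, so mirror whatever you pick in /etc/my.cnf:

mysql -u root -p -e "SHOW VARIABLES LIKE 'max_allowed_packet';"
mysql -u root -p -e "SET GLOBAL max_allowed_packet = 1073741824;"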

Given that definition of max_allowed_packet, I then discovered something else, from a ServerFault answer: innodb_log_file_size and innodb_log_buffer_size combined must be larger than ten times your biggest blob object if you have a lot of large ones.
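As a quick sanity check against that rule (the database, table, and column names below are placeholders for your own schema):

mysql -u root -p -e "SELECT MAX(LENGTH(my_blob_col)) AS biggest_blob FROM mydb.my_table;"
mysql -u root -p -e "SHOW VARIABLES LIKE 'innodb_log%';"

If the biggest blob comes back as, say, 200M, the two settings combined should exceed 2000M under that rule.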

Keeping these two things in mind, I would increase innodb_log_file_size in /etc/my.cnf to its maximum allowed size, 2047M. This of course requires the following:

service mysql stop
# remove the old InnoDB log files so MySQL recreates them at the new size on startup
rm -f /var/lib/mysql/ib_logfile*
service mysql start

This will accommodate any big blobs you may have in your data.
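For reference, here is a sketch of the relevant /etc/my.cnf settings after that change. The max_allowed_packet and innodb_log_buffer_size values are illustrative assumptions, not requirements; size them to your own data:

[mysqld]
max_allowed_packet     = 1G
innodb_log_file_size   = 2047M
innodb_log_buffer_size = 512M

Remember that the innodb_log_file_size change only takes effect after the stop / remove / start sequence above.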


MySQL's max_allowed_packet still has to fit within the limits of the shell that started it: does ulimit -a show that data seg size is unlimited?
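A quick way to check, run as the same user that starts mysqld:

# data segment size limit for this shell; it should print "unlimited"
ulimit -d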


For some reason, max_allowed_packet is ignored by mysqldump (by design?). The actual complement is net_buffer_length. So instead try:

mysqldump --net_buffer_length=100k -u root -p databasename > dump.sql
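If you want that to stick without typing it every time, mysqldump also reads a [mysqldump] group from /etc/my.cnf, so the equivalent (value illustrative) would be:

[mysqldump]
net_buffer_length = 100k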