MySQL: any way to import a huge (32 GB) SQL dump faster?

Percona's Vadim Tkachenko made this fine Pictorial Representation of InnoDB

[InnoDB Architecture diagram]

You definitely need to change the following

innodb_buffer_pool_size = 4G
innodb_log_buffer_size = 256M
innodb_log_file_size = 1G
innodb_write_io_threads = 16
innodb_flush_log_at_trx_commit = 0

Why these settings?

  • innodb_buffer_pool_size will cache frequently read data
  • innodb_log_buffer_size: a larger buffer reduces write I/O to the transaction logs
  • innodb_log_file_size: a larger log file reduces checkpointing and write I/O
  • innodb_write_io_threads: services write operations to .ibd files. According to the MySQL documentation on Configuring the Number of Background InnoDB I/O Threads, each thread can handle up to 256 pending I/O requests. The default is 4 for MySQL, 8 for Percona Server. Max is 64.
  • innodb_flush_log_at_trx_commit
    • In the event of a crash, both 0 and 2 can lose one second of data.
    • The tradeoff is that both 0 and 2 increase write performance.
    • I choose 0 over 2 because 0 flushes the InnoDB Log Buffer to the Transaction Logs (ib_logfile0, ib_logfile1) once per second, with or without a commit. Setting 2 flushes the InnoDB Log Buffer only on commit. There are other advantages to setting 0 mentioned by @jynus, a former Percona instructor.
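
For reference, these settings live under the [mysqld] section of my.cnf (a sketch; the file path varies by distro, commonly /etc/my.cnf or /etc/mysql/my.cnf):

[mysqld]
innodb_buffer_pool_size = 4G
innodb_log_buffer_size = 256M
innodb_log_file_size = 1G
innodb_write_io_threads = 16
innodb_flush_log_at_trx_commit = 0

Note that changing innodb_log_file_size requires a restart; MySQL 5.6.8+ resizes the redo logs automatically on startup, while older versions need the old ib_logfile* files removed after a clean shutdown.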

Restart mysql like this

service mysql restart --innodb-doublewrite=0

This disables the InnoDB Double Write Buffer
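
To confirm the setting took effect before you start loading, a quick check (a sketch):

mysql -u root -p -e "SHOW VARIABLES LIKE 'innodb_doublewrite';"

It should report OFF.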

Import your data. When done, restart mysql normally

service mysql restart

This re-enables the InnoDB Double Write Buffer
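
As for the import command itself, a minimal sketch (mydb and dump.sql are placeholder names; pv is an optional utility that only adds a progress bar):

pv dump.sql | mysql -u root -p mydb

or, without pv:

mysql -u root -p mydb < dump.sql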

Give it a Try!!!

SIDE NOTE: You should upgrade to 5.6.21 for the latest security patches.


Do you really need the entire database to be restored? If you don't, my 2c:

You can extract specific tables and do your restore in "chunks". Something like this:

zcat your-dump.gz.sql | sed -n -e '/DROP TABLE.*`TABLE_NAME`/,/UNLOCK TABLES/p' > table_name-dump.sql

I did it once and it took about 10 minutes to extract the table I needed - my full restore took 13~14 hours, with a 35GB (gzipped) dump.

The /pattern/,/pattern/p together with the -n parameter makes sed print only the slice "between the patterns" - including them.
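
Then you restore just that slice (a sketch; mydb is a placeholder schema name, and table_name-dump.sql is the file produced above):

mysql -u root -p mydb < table_name-dump.sql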

Anyway, to restore the 35GB dump I used an AWS EC2 machine (c3.8xlarge), installed Percona via yum (CentOS) and just added/changed the following lines in my.cnf:

max_allowed_packet=256M
wait_timeout=30000

I think the numbers are way too high, but they worked for my setup.
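
If you'd rather not restart the server, both variables are dynamic, so something like this should also work (a sketch; new connections pick up the new values):

mysql -u root -p -e "SET GLOBAL max_allowed_packet = 256 * 1024 * 1024; SET GLOBAL wait_timeout = 30000;"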


If the tables are MyISAM, the fastest way to import your database is to copy the files (.frm, .MYD, .MYI) directly into /var/lib/mysql/<database name>.
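
A sketch of that copy (all paths are placeholders; the server should be stopped during the copy, and the files must end up owned by the mysql user):

service mysql stop
cp /path/to/backup/mydb/*.{frm,MYD,MYI} /var/lib/mysql/mydb/
chown -R mysql:mysql /var/lib/mysql/mydb
service mysql start

This shortcut only applies to MyISAM tables; InnoDB tablespaces cannot simply be copied around this way.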

Otherwise you can try:

mysql> use database_name;
mysql> \. /path/to/file.sql

That's another way to import your data.
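
(For what it's worth, \. is the mysql client's shorthand for the source command, so source /path/to/file.sql does the same thing.)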