How can I monitor the progress of an import of a large .sql file?
If you're just importing from a dump file from the CLI on *nix, e.g.
mysql -uxxx -pxxx dbname < /sqlfile.sql
then first install Pipe Viewer (pv) on your OS, then try something like this:
pv sqlfile.sql | mysql -uxxx -pxxxx dbname
which will show a progress bar as the program runs.
It's very useful and you can also use it to get an estimate for mysqldump progress.
pv reads sqlfile.sql and passes its contents to mysql (because of the pipe operator), showing a progress bar as it goes. The nice thing is that mysql consumes the data only as fast as it can process it, so pv reflects the actual progress of the import. I have no proof of this, but that is how it appears to behave: presumably some buffering is involved, but at some point mysql stops reading more data while it is still busy processing.
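Both directions can be wrapped in small helper functions. This is a minimal sketch, assuming pv is installed and that MYSQL_USER/MYSQL_PASS are placeholder environment variables you set yourself (they are not anything standard):

```shell
# Sketch: progress bar for an import (assumes pv is installed;
# MYSQL_USER/MYSQL_PASS are placeholder variables, not standard ones).
import_with_progress() {
  local sqlfile=$1 db=$2
  # pv shows a progress bar, throughput, and ETA based on the file size.
  pv "$sqlfile" | mysql -u"$MYSQL_USER" -p"$MYSQL_PASS" "$db"
}

# For mysqldump, pv cannot know the total size up front, so pass an
# estimate (e.g. "2g", perhaps the size of the schema's data directory)
# via --size to get an ETA as well as throughput.
dump_with_progress() {
  local db=$1 outfile=$2 estimate=$3
  mysqldump -u"$MYSQL_USER" -p"$MYSQL_PASS" "$db" | pv --size "$estimate" > "$outfile"
}
```

Without `--size`, pv on a pipe can still show bytes transferred and throughput, just not a percentage.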
If you've already started the import, you can execute this command in another window to see the current size of your databases. This can be helpful if you know the total size of the .sql file you're importing.
SELECT table_schema AS "Database Name",
       SUM(data_length + index_length) / 1024 / 1024 AS "Database Size in MiB"
FROM information_schema.TABLES
GROUP BY table_schema;
Credit to: http://forums.mysql.com/read.php?108,201578,201578
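If you want to watch that number grow while the import runs in another window, the query can be wrapped in a small polling loop. A sketch, where the schema name, credentials, and 10-second interval are all placeholders:

```shell
# Sketch: print the target schema's size (in MiB) every 10 seconds.
# MYSQL_USER/MYSQL_PASS are placeholder variables; adjust to your setup.
watch_schema_size() {
  local schema=$1
  while true; do
    mysql -u"$MYSQL_USER" -p"$MYSQL_PASS" -N -e "
      SELECT ROUND(SUM(data_length + index_length) / 1024 / 1024, 1)
      FROM information_schema.TABLES
      WHERE table_schema = '$schema';"
    sleep 10
  done
}
```

Note the caveat below: for InnoDB these figures are approximate, so treat the output as a rough trend rather than an exact byte count.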
The MySQL 8.0 Reference states the following about the accuracy:
DATA_LENGTH
For MyISAM, DATA_LENGTH is the length of the data file, in bytes.
For InnoDB, DATA_LENGTH is the approximate amount of memory allocated for the clustered index, in bytes. Specifically, it is the clustered index size, in pages, multiplied by the InnoDB page size.
INDEX_LENGTH
For MyISAM, INDEX_LENGTH is the length of the index file, in bytes.
For InnoDB, INDEX_LENGTH is the approximate amount of memory allocated for non-clustered indexes, in bytes. Specifically, it is the sum of non-clustered index sizes, in pages, multiplied by the InnoDB page size.
When you execute a mysqldump of a single database, all tables are dumped in alphabetical order.
Naturally, the reload of the mysqldump into a database would also be in alphabetical order.
You could just do a SHOW PROCESSLIST; and find out the DB Connection running the mysqldump. When the dump is reloaded, the DB Connection will vanish.
If you want to know what tables are in the dumpfile, run this against foobar.sql
grep "^CREATE TABLE" foobar.sql | awk '{print $3}'
UPDATE 2012-05-02 13:53 EDT
Sorry for not noticing that there is only one table.
If the table is MyISAM, the only way to monitor is from the OS point of view. The reason? The table is write-locked throughout the reload. What do you look for? The size of the .MYD and .MYI files. Of course, you need to compare that with what the table size was before on the other DB server you imported from.
If the table is InnoDB and you have innodb_file_per_table enabled, the only way to monitor is from the OS point of view. The reason? The table is write-locked throughout the reload. What do you look for? The size of the .ibd file. Of course, you need to compare that with what the table size was before on the other DB server you imported from.
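For both file-per-table cases, the OS-side check can be sketched like this (the default datadir and the 5-second refresh are assumptions; adjust to your server):

```shell
# Sketch: watch the on-disk table files grow during the reload.
# .MYD/.MYI for MyISAM, .ibd for InnoDB with innodb_file_per_table.
# /var/lib/mysql is a common default datadir, but check your my.cnf.
watch_table_files() {
  local schema=$1 table=$2 datadir=${3:-/var/lib/mysql}
  watch -n 5 "ls -lh '$datadir/$schema/$table'.*"
}
```

Usage would look something like `watch_table_files mydb mytable`, compared against the file sizes on the source server.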
If the table is InnoDB and you have innodb_file_per_table disabled, not even the OS point of view can help.
UPDATE 2012-05-02 13:56 EDT
I addressed something like this last year : How do I get % progress for "type db.sql | mysql"
UPDATE 2012-05-02 14:09 EDT
Since a standard mysqldump write-locks the table like this:
LOCK TABLES `a` WRITE;
/*!40000 ALTER TABLE `a` DISABLE KEYS */;
INSERT INTO `a` VALUES (123),(451),(199),(0),(23);
/*!40000 ALTER TABLE `a` ENABLE KEYS */;
UNLOCK TABLES;
then there is no way to get progress from within mysql until the table lock is released.
If you can get LOCK TABLES and UNLOCK TABLES commented out of the dumpfile...
- if the table is MyISAM, SELECT COUNT(*) would work
- if the table is InnoDB, SELECT COUNT(*) would probably slow down or halt the load until the count is done
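One way to do that commenting-out, sketched with sed (`-- ` followed by a space is MySQL's line-comment syntax; the file names are placeholders):

```shell
# Sketch: comment out LOCK TABLES / UNLOCK TABLES so the table stays
# readable (and SELECT COUNT(*) possible) while the dump reloads.
# The sed "&" inserts the matched text after the "-- " comment marker.
strip_table_locks() {
  sed -e 's/^LOCK TABLES/-- &/' -e 's/^UNLOCK TABLES/-- &/' "$1"
}

# Usage (hypothetical file names):
#   strip_table_locks dump.sql > dump_nolock.sql
```

Removing the locks makes the reload somewhat slower and less atomic, so this is a trade-off for visibility, not a general recommendation.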