`ERROR 1114 (HY000) the table ... is full` with innodb_file_per_table set to autoextend
FACTS
You said you are using ext4. The ext4 file size limit is 16TB, so Sample.ibd should not be full.
You said your innodb_data_file_path is ibdata1:10M:autoextend. Thus, the ibdata1 file itself has no cap on its size other than what the OS imposes.
Why is this message coming up at all? Notice the message is "The table ... is full", not "The disk ... is full". This table-full condition is being reported from a logical standpoint. Think about InnoDB: what interactions are going on?
My guess is that InnoDB is attempting to load 93GB of data as a single transaction. Where would the "Table is Full" message emanate from? I would look at ibdata1, not in terms of its physical size (which you already ruled out), but in terms of what transaction limits are being reached.
What is inside ibdata1 when innodb_file_per_table is enabled and you load new data into MySQL?
- Data Dictionary
- Double Write Buffer
  - Safety Net to Prevent Data Corruption
  - Helps Bypass OS for Caching
- Insert Buffer (Streamlines Changes to Secondary Indexes)
- Rollback Segments
- Undo Logs
My suspicions tell me that the Undo Logs and/or Redo Logs are to blame.
What are these logs? According to the book, Chapter 10 ("Storage Engines"), page 203, paragraphs 3 and 4 say the following:
The InnoDB engine keeps two types of logs: an undo log and a redo log. The purpose of an undo log is to roll back transactions, as well as to display the older versions of the data for queries running in the transaction isolation level that requires it. The code that handles the undo log can be found in storage/innobase/buf/log/log0log.c.
The purpose of the redo log is to store the information to be used in crash recovery. It permits the recovery process to re-execute the transactions that may or may not have completed before the crash. After re-executing those transactions, the database is brought to a consistent state. The code dealing with the redo log can be found in storage/innobase/log/log0recv.c.
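As a side note, undo pressure during a big reload can be watched through the "History list length" line of SHOW ENGINE INNODB STATUS. Here is a minimal sketch of pulling that number out of the status text; the sample excerpt below is fabricated for illustration:

```python
import re

def history_list_length(status_text):
    """Extract 'History list length N' from SHOW ENGINE INNODB STATUS output."""
    m = re.search(r"History list length (\d+)", status_text)
    if m is None:
        raise ValueError("no 'History list length' line found")
    return int(m.group(1))

# Fabricated excerpt of typical status output:
sample = """
TRANSACTIONS
------------
Trx id counter 0 80157601
Purge done for trx's n:o < 0 80154573 undo n:o < 0 0
History list length 6
"""
print(history_list_length(sample))  # 6
```

Polling this while the load runs would show the undo history climbing as the single giant transaction accumulates.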
ANALYSIS
There are 1023 Undo Logs inside ibdata1 (see Rollback Segments and Undo Space). Since the undo logs keep copies of data as they appeared before the reload, all 1023 Undo Logs have reached their limit. From another perspective, all 1023 Undo Logs may be dedicated to the one transaction that loads the Sample table.
BUT WAIT...
You are probably saying, "I am loading an empty Sample table". How are Undo Logs involved? Before the Sample table was loaded with 93GB of data, it was empty. Representing every row that did not exist must take up some housekeeping space in the Undo Logs. Filling up 1023 Undo Logs seems trivial given the amount of data pouring into ibdata1. I am not the first person to suspect this:
From the MySQL 4.1 Documentation, see the note posted by Chris Calender on September 4, 2009:
Note that in 5.0 (pre-5.0.85) and in 5.1 (pre-5.1.38), you could receive the "table is full" error for an InnoDB table if InnoDB runs out of undo slots (bug #18828).
Here is the bug report for MySQL 5.0: http://bugs.mysql.com/bug.php?id=18828
SUGGESTIONS
When you create the mysqldump of the Sample table, please use --no-autocommit:

    mysqldump --no-autocommit ... mydb Sample > Sample.sql

This wraps the table's INSERT statements between SET autocommit=0; and an explicit COMMIT;. Then, reload the table.
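Concretely, the body of a dump taken with --no-autocommit brackets each table's data like this (row values elided here; only the bracketing statements matter):

```sql
SET autocommit=0;
INSERT INTO `Sample` VALUES (...),(...);
INSERT INTO `Sample` VALUES (...),(...);
COMMIT;
```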
If this does not work (you are not going to like this), do this:

    mysqldump --no-autocommit --skip-extended-insert ... mydb Sample > Sample.sql

This will make each INSERT statement carry just one row. The mysqldump will be much larger (10+ times bigger) and could take 10 to 100 times longer to reload.
In either case, this will spare the Undo Logs from being inundated.
Give it a Try!!!
UPDATE 2013-06-03 13:05 EDT
ADDITIONAL SUGGESTION
If the InnoDB system tablespace (a.k.a. ibdata1) hits a filesize limit and the Undo Logs cannot be used, you could just add another system tablespace file (ibdata2).
I encountered this very situation just two days ago. I updated my old post with what I did: see Database Design - Creating Multiple databases to avoid the headache of limit on table size.
In essence, you have to change innodb_data_file_path to accommodate a new system tablespace file. Let me explain how:
SCENARIO
On disk (ext3), my client's server had the following:
    [root@l*****]# ls -l ibd*
    -rw-rw---- 1 s-em7-mysql s-em7-mysql     362807296 Jun  2 00:15 ibdata1
    -rw-rw---- 1 s-em7-mysql s-em7-mysql 2196875759616 Jun  2 00:15 ibdata2
The setting was

    innodb_data_file_path=ibdata1:346M;ibdata2:500M:autoextend:max:10240000M

Note that ibdata2 grew to 2196875759616 bytes (2145386484K).
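Why did ibdata2 stop growing there? The filesystem was ext3, whose maximum file size at the common 4KiB block size is 2TiB; a quick arithmetic check (plain arithmetic, nothing MySQL-specific) shows ibdata2 was parked right at that ceiling, far below the 10240000M configured max:

```python
# ext3 maximum file size for the common 4KiB block size
ext3_max_file = 2 * 1024**4      # 2 TiB in bytes
ibdata2 = 2196875759616          # size reported by ls -l above

# The file had consumed ~99.9% of what ext3 can address in one file.
print(ibdata2 / ext3_max_file)
```

So the "filesize limit" being struck was the OS's, not InnoDB's, which is why adding a third file fixed it.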
I had to embed the filesize of ibdata2 into innodb_data_file_path and add ibdata3:

    innodb_data_file_path=ibdata1:346M;ibdata2:2196875759616;ibdata3:10M:autoextend
When I restarted mysqld, it worked:
    [root@l*****]# ls -l ibd*
    -rw-rw---- 1 s-em7-mysql s-em7-mysql     362807296 Jun  3 17:02 ibdata1
    -rw-rw---- 1 s-em7-mysql s-em7-mysql 2196875759616 Jun  3 17:02 ibdata2
    -rw-rw---- 1 s-em7-mysql s-em7-mysql   32315015168 Jun  3 17:02 ibdata3
In 40 hours, ibdata3 grew to 31G. MySQL was once again working.
I had the same problem and I just did one thing and it worked.
You seem to have set too low a maximum size for your innodb_data_file_path in your my.cnf configuration file. Just change the setting below:

    innodb_data_file_path = ibdata1:10M:autoextend:max:512M

Important Note: with this setting, you cannot host more than 512MB of data in all InnoDB tables combined.
You can also switch to a per-table tablespace scheme using innodb_file_per_table.
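Put together, the cap-free, per-table layout these suggestions add up to would look like this in my.cnf (illustrative values, not taken from the question):

```ini
[mysqld]
# no :max clause, so the shared tablespace can autoextend freely
innodb_data_file_path = ibdata1:10M:autoextend
# new tables get their own .ibd file instead of growing ibdata1
innodb_file_per_table = 1
```

Note that innodb_file_per_table only affects tables created after it is enabled; existing tables stay in the shared tablespace until rebuilt.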