MySQL InnoDB locks primary key on delete even in READ COMMITTED

I can see how READ_COMMITTED can cause this situation.

READ_COMMITTED allows for three things:

  • Visibility of changes committed by other transactions, including those also running at the READ_COMMITTED isolation level.
  • Non-Repeatable Reads : a transaction performing the same retrieval twice may get a different result each time.
  • Phantoms : rows may appear to a transaction where they were not visible beforehand.
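As a quick sanity check before reproducing the scenario, the session's isolation level can be confirmed and set explicitly (a small sketch; the variable is named `@@tx_isolation` in MySQL 5.x and `@@transaction_isolation` in 8.0+):

```sql
-- Confirm the current isolation level (MySQL 5.x name shown)
SELECT @@tx_isolation;

-- Switch this session to READ COMMITTED before starting the transaction
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
```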

This creates internal bookkeeping for the transaction itself, because the transaction must maintain contact with:

  • InnoDB Buffer Pool (while commit is still unflushed)
  • Table's Primary Key
  • Possibly
    • the Double Write Buffer
    • Undo Tablespace

If two distinct READ_COMMITTED transactions are accessing the same tables/rows that are being updated in the same way, be ready to expect not a table lock, but an exclusive lock within the gen_clust_index (aka Clustered Index). Given the queries from your simplified case:

  • Transaction 1

    SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
    SET AUTOCOMMIT=0;
    BEGIN;
    DELETE c FROM child c 
      INNER JOIN parent p ON p.id = c.parent_id 
    WHERE p.id = 1;
    
  • Transaction 2

    SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
    SET AUTOCOMMIT=0;
    BEGIN;
    DELETE c FROM child c 
      INNER JOIN parent p ON p.id = c.parent_id 
    WHERE p.id = 2;
    

You are locking the same location in the gen_clust_index. One may say, "but each transaction has a different primary key." Unfortunately, this is not the case in the eyes of InnoDB. It just so happens that id 1 and id 2 reside on the same page.

Look back at the information_schema.innodb_locks output you supplied in the question:

| lock_id                | lock_trx_id | lock_mode | lock_type | lock_table                        | lock_index  | lock_space | lock_page | lock_rec | lock_data |
|------------------------|-------------|-----------|-----------|-----------------------------------|-------------|------------|-----------|----------|-----------|
| '1A2973A4:0:3172298:2' | '1A2973A4'  | 'X'       | 'RECORD'  | '`deadlock_test`.`proc_warnings`' | '`PRIMARY`' | '0'        | '3172298' | '2'      | '53'      |
| '1A296F67:0:3172298:2' | '1A296F67'  | 'X'       | 'RECORD'  | '`deadlock_test`.`proc_warnings`' | '`PRIMARY`' | '0'        | '3172298' | '2'      | '53'      |

With the exception of lock_id and lock_trx_id, the rest of each lock's description is identical. Since the transactions are on a level playing field (same transaction isolation), this should indeed happen.
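When the two transactions collide like this, the blocking relationship can also be inspected directly. This is a sketch joining `information_schema.innodb_lock_waits` against `innodb_trx` (these INFORMATION_SCHEMA tables exist in MySQL 5.5/5.6 with InnoDB; column names per that version's manual):

```sql
-- Who is waiting on whom, with the queries involved
SELECT r.trx_id              AS waiting_trx,
       r.trx_mysql_thread_id AS waiting_thread,
       r.trx_query           AS waiting_query,
       b.trx_id              AS blocking_trx,
       b.trx_mysql_thread_id AS blocking_thread,
       b.trx_query           AS blocking_query
FROM information_schema.innodb_lock_waits w
JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id
JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id;
```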

Believe me, I have addressed this kind of situation before. Here are my past posts on this:

  • Nov 05, 2012 : How to analyse innodb status on deadlock in insert Query?
  • Aug 08, 2011 : Are InnoDB Deadlocks exclusive to INSERT/UPDATE/DELETE?
  • Jun 14, 2011 : Reasons for occasionally slow queries?
  • Jun 08, 2011 : Will these two queries result in a deadlock if executed in sequence?
  • Jun 06, 2011 : Trouble deciphering a deadlock in an innodb status log

NEW ANSWER (MySQL-style dynamic SQL): Ok, this one tackles the problem in the way one of the other posters described - reversing the order in which mutually incompatible exclusive locks are acquired, so that regardless of how many occur, they are held only for the least amount of time, at the end of transaction execution.

This is accomplished by separating the read part of the statement into its own SELECT statement, and dynamically generating a DELETE statement that is forced to run last by order of statement appearance, and which touches only the proc_warnings table.

A demo is available at SQL Fiddle:

The first link shows the schema with sample data, and a simple query for rows that match ivehicle_id = 2. Two rows result, as none of them have been deleted.

The second link shows the same schema and sample data, but passes the value 2 to the DeleteEntries stored procedure, telling it to delete proc_warnings entries for ivehicle_id = 2. The simple query for rows then returns no results, as they have all been successfully deleted. The demo links only demonstrate that the code deletes as intended. A user with the proper test environment can comment on whether this solves the problem of the blocked thread.

Here is the code as well for convenience:

DELIMITER $$

CREATE PROCEDURE DeleteEntries (input_vid INT)
BEGIN

    -- initialize the accumulator variables
    SET @idstring := '';
    SET @idnum    := 0;
    SET @del_stmt := '';

    -- Build a comma-separated list of matching proc_warnings ids via
    -- row-by-row user-variable accumulation; LIMIT 1 on the descending
    -- row number returns only the final, fully accumulated row.
    SELECT @idnum := @idnum + 1 AS idnum_col,
           @idstring := CONCAT(@idstring,
                               CASE WHEN CHARACTER_LENGTH(@idstring) > 0
                                    THEN ',' ELSE '' END,
                               CAST(id AS CHAR(10))) AS idstring_col
    FROM proc_warnings
    WHERE EXISTS (
        SELECT 0
        FROM day_position
        WHERE day_position.transaction_id = proc_warnings.transaction_id
          AND day_position.dirty_data = 1
          AND EXISTS (
              SELECT 0
              FROM ivehicle_days
              WHERE ivehicle_days.id = day_position.ivehicle_day_id
                AND ivehicle_days.ivehicle_id = input_vid
          )
    )
    ORDER BY idnum_col DESC
    LIMIT 1;

    IF (@idnum > 0) THEN
        -- the generated DELETE touches only proc_warnings, by primary key
        SET @del_stmt := CONCAT('DELETE FROM proc_warnings WHERE id IN (',
                                @idstring, ');');

        PREPARE del_stmt_hndl FROM @del_stmt;
        EXECUTE del_stmt_hndl;
        DEALLOCATE PREPARE del_stmt_hndl;
    END IF;
END$$

DELIMITER ;

This is the syntax to call the procedure from within a transaction:

BEGIN;
CALL DeleteEntries(2);
COMMIT;

ORIGINAL ANSWER (still think it's not too shabby): It looks like there are two issues: 1) a slow query, and 2) unexpected locking behavior.

As regards issue #1, slow queries are often resolved by the same two techniques in tandem: query statement simplification, and useful additions of or modifications to indexes. You yourself already made the connection to indexes - without them, the optimizer cannot seek to a limited set of rows to process, and each extra row scanned from each joined table multiplies the amount of work that must be done.

REVISED AFTER SEEING POST OF SCHEMA AND INDEXES: I imagine you'll get the most performance benefit for your query by making sure you have a good index configuration. You can go for somewhat better delete performance, or possibly the best delete performance, at the trade-off of larger indexes and perhaps noticeably slower insert performance on the tables to which the additional index structure is added.

SOMEWHAT BETTER:

CREATE TABLE  `day_position` (
    ...,
    KEY `day_position__id_rvrsd` (`dirty_data`, `ivehicle_day_id`)

) ;


CREATE TABLE  `ivehicle_days` (
    ...,
    KEY `ivehicle_days__vid_no_sort_index` (`ivehicle_id`)
);

REVISED HERE TOO: Since the query takes as long as it does to run, I'd leave dirty_data in the index - and I got the column order wrong when I placed it after ivehicle_day_id; it should come first.

But if I had my hands on it at this point - since there must be a good amount of data to make it take that long - I would just go for fully covering indexes, to make sure I was getting the best indexing my troubleshooting time could buy, if nothing else to rule that part of the problem out.

BEST/COVERING INDEXES:

CREATE TABLE  `day_position` (
    ...,
    KEY `day_position__id_rvrsd_trnsid_cvrng` (`dirty_data`, `ivehicle_day_id`, `transaction_id`)
) ;

CREATE TABLE  `ivehicle_days` (
    ...,
    UNIQUE KEY `ivehicle_days__vid_id_cvrng` (ivehicle_id, id)
);

CREATE TABLE  `proc_warnings` (

    ..., /* rename primary key */
    CONSTRAINT pk_proc_warnings PRIMARY KEY (id),
    UNIQUE KEY `proc_warnings__transaction_id_id_cvrng` (`transaction_id`, `id`)
);

There are two performance optimization goals sought by the last two suggested changes:

1) If the keys needed to search the next table are already present in the index results returned for the current table, we eliminate what would otherwise be a second set of index-seek-with-scan operations against the clustered index.
2) If that is not the case, there is still at least the possibility that the optimizer can select a more efficient join algorithm, since the indexes keep the required join keys in sorted order.
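To confirm the optimizer actually uses the covering indexes, one could EXPLAIN the SELECT equivalent of the delete. A sketch, assuming the covering indexes suggested above are in place and using the demo value `ivehicle_id = 2`:

```sql
EXPLAIN
SELECT pw.id
FROM proc_warnings pw
INNER JOIN day_position dp ON dp.transaction_id = pw.transaction_id
INNER JOIN ivehicle_days vd ON vd.id = dp.ivehicle_day_id
WHERE vd.ivehicle_id = 2 AND dp.dirty_data = 1;
-- Look for "Using index" in the Extra column for day_position and
-- ivehicle_days, indicating the covering indexes are being read
-- without touching the clustered index.
```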

Your query seems about as simplified as it can be (copied here in case it is edited later):

DELETE pw 
FROM proc_warnings pw 
INNER JOIN day_position dp 
    ON dp.transaction_id = pw.transaction_id 
INNER JOIN ivehicle_days vd 
    ON vd.id = dp.ivehicle_day_id 
WHERE vd.ivehicle_id=2 AND dp.dirty_data=1;

Unless, of course, there's something about the written join order that affects the way the query optimizer proceeds, in which case you could try some of the rewrite suggestions others have provided, including perhaps this one with optional index hints:

DELETE FROM proc_warnings
FORCE INDEX (`proc_warnings__transaction_id_id_cvrng`, `pk_proc_warnings`)
WHERE EXISTS (
    SELECT 0
    FROM day_position
    FORCE INDEX (`day_position__id_rvrsd_trnsid_cvrng`)  
    WHERE day_position.transaction_id = proc_warnings.transaction_id
    AND day_position.dirty_data = 1
    AND EXISTS (
        SELECT 0
        FROM ivehicle_days
        FORCE INDEX (`ivehicle_days__vid_id_cvrng`)  
        WHERE ivehicle_days.id = day_position.ivehicle_day_id
        AND ivehicle_days.ivehicle_id = ?
    )
);

As regards #2, unexpected locking behavior.

As I can see both queries wants an exclusive X lock on a row with primary key = 53. However, neither of them must delete rows from proc_warnings table. I just don't understand why the index is locked.

I guess it would be the index that's locked because the row of data to be locked is in a clustered index, i.e. the single row of data itself resides in the index.

It would be locked, because:
1) according to http://dev.mysql.com/doc/refman/5.1/en/innodb-locks-set.html

...a DELETE generally sets record locks on every index record that is scanned in the processing of the SQL statement. It does not matter whether there are WHERE conditions in the statement that would exclude the row. InnoDB does not remember the exact WHERE condition, but only knows which index ranges were scanned.

You also mentioned above:

...as for me the main feature of READ COMMITTED is how it deals with locks. It should release the index locks of non-matching rows, but it doesn't.

and provided the following reference for that:
http://dev.mysql.com/doc/refman/5.1/en/set-transaction.html#isolevel_read-committed

Which states the same as you, except that according to that same reference there is a condition upon which a lock shall be released:

Also, record locks for nonmatching rows are released after MySQL has evaluated the WHERE condition.

Which is reiterated as well at this manual page http://dev.mysql.com/doc/refman/5.1/en/innodb-record-level-locks.html

There are also other effects of using the READ COMMITTED isolation level or enabling innodb_locks_unsafe_for_binlog: Record locks for nonmatching rows are released after MySQL has evaluated the WHERE condition.

So, we're told that the WHERE condition must be evaluated before the lock can be released. Unfortunately, we're not told when the WHERE condition is evaluated, and that is probably subject to change from one optimizer plan to another. But it does tell us that lock release depends somehow on query execution performance - optimization of which, as discussed above, depends on careful writing of the statement and judicious use of indexes. It can also be improved by better table design, but that would best be left to a separate question.
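One way to watch which record locks a transaction is actually holding at a given moment is SHOW ENGINE INNODB STATUS, run from a third session while the two transactions are open (a sketch; the `innodb_status_output_locks` variable exists only in MySQL 5.6.16 and later):

```sql
-- The TRANSACTIONS section lists the lock structs each transaction holds
SHOW ENGINE INNODB STATUS\G

-- On MySQL 5.6.16+, per-record lock detail can be included in the output
SET GLOBAL innodb_status_output_locks = ON;
```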

Moreover, the index is not locked either when proc_warnings table is empty

The database can't lock records within the index if there are none.

Moreover, the index is not locked when...the day_position table contains fewer number of rows (i.e. one hundred rows).

This could mean numerous things, such as, but probably not limited to: a different execution plan due to a change in statistics, or a lock too brief to be observed due to much faster execution on a much smaller data set/join operation.


I looked at the query and the explain. I am not sure, but have a gut feeling, that the problem is the following. Let's look at the query:

DELETE pw 
FROM proc_warnings pw 
INNER JOIN day_position dp 
   ON dp.transaction_id = pw.transaction_id 
INNER JOIN ivehicle_days vd 
   ON vd.id = dp.ivehicle_day_id 
WHERE vd.ivehicle_id=? AND dp.dirty_data=1;

The equivalent SELECT is:

SELECT pw.id
FROM proc_warnings pw
INNER JOIN day_position dp
   ON dp.transaction_id = pw.transaction_id
INNER JOIN ivehicle_days vd
   ON vd.id = dp.ivehicle_day_id
WHERE vd.ivehicle_id=16 AND dp.dirty_data=1;

If you look at its explain, you will see that the execution plan starts with the proc_warnings table. That means MySQL scans the primary key of the table, checks the condition for each row, and, if it is true, deletes the row. That is, MySQL has to lock the whole primary key.

What you need is to invert the join order: first find all transaction ids with vd.ivehicle_id=16 AND dp.dirty_data=1, then join them to the proc_warnings table.

That is you will need to patch one of the indices:

ALTER TABLE `day_position`
 DROP INDEX `day_position__id`,
 ADD INDEX `day_position__id`
   USING BTREE (`ivehicle_day_id`, `dirty_data`, `transaction_id`);

and rewrite the delete query:

DELETE pw
FROM (
  SELECT DISTINCT dp.transaction_id
  FROM ivehicle_days vd
  JOIN day_position dp ON dp.ivehicle_day_id = vd.id
  WHERE vd.ivehicle_id=? AND dp.dirty_data=1
) as tr_id
JOIN proc_warnings pw ON pw.transaction_id = tr_id.transaction_id;
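With the patched index in place, EXPLAIN on the inner SELECT should show the plan starting from ivehicle_days rather than proc_warnings. A quick sanity check, not part of the fix itself, using the demo value 2 in place of the placeholder:

```sql
EXPLAIN
SELECT DISTINCT dp.transaction_id
FROM ivehicle_days vd
JOIN day_position dp ON dp.ivehicle_day_id = vd.id
WHERE vd.ivehicle_id = 2 AND dp.dirty_data = 1;
-- The first row of the plan should reference ivehicle_days, confirming
-- the join order is inverted relative to the original DELETE.
```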