How to deal with concurrent updates in databases?
Wrapping the code inside a transaction is not enough in some cases, regardless of the isolation level you define (e.g. imagine you have deployed your code onto 2 different servers in production).
Let's say you have these steps (sketched as plain SQL right after the list) and 2 concurrent threads:
1) open a transaction
2) fetch the data (SELECT creds FROM credits WHERE userid = 1;)
3) do your work (credits + amount)
4) update the data (UPDATE credits SET creds = ? WHERE userid = 1;)
5) commit
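In plain SQL, with no locking at all, that sequence looks roughly like this sketch (the literal 150 stands in for whatever value step 3 computes, and the exact BEGIN syntax varies by DBMS):
BEGIN;                                            -- (1) open a transaction
SELECT creds FROM credits WHERE userid = 1;       -- (2) fetch the data
-- (3) the application computes creds + amount, e.g. 100 + 50
UPDATE credits SET creds = 150 WHERE userid = 1;  -- (4) update the data
COMMIT;                                           -- (5) commit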
And this time line:
Time = 0; creds = 100
Time = 1; ThreadA executes (1) and creates Txn1
Time = 2; ThreadB executes (1) and creates Txn2
Time = 3; ThreadA executes (2) and fetches 100
Time = 4; ThreadB executes (2) and fetches 100
Time = 5; ThreadA executes (3) and adds 100 + 50
Time = 6; ThreadB executes (3) and adds 100 + 50
Time = 7; ThreadA executes (4) and updates creds to 150
Time = 8; ThreadB tries to execute (4) but, in the best case, the transaction (depending on the isolation level) won't allow it and you get an error
The transaction prevents you from overwriting creds with a wrong value, but it's not enough for me because I don't want any request to fail with an error.
I prefer a slower process that never fails, so I solved the problem with a "database row lock" taken at the moment I fetch the data (step 2), which prevents other threads from reading the same row until I'm done with it.
There are a few ways to do this in SQL Server, and this is one of them:
SELECT creds FROM credits WITH (UPDLOCK) WHERE userid = 1;
If I recreate the previous timeline with this improvement, you get something like this:
Time = 0; creds = 100
Time = 1; ThreadA executes (1) and creates Txn1
Time = 2; ThreadB executes (1) and creates Txn2
Time = 3; ThreadA executes (2) with lock and fetches 100
Time = 4; ThreadB tries to execute (2) but the row is locked and it has to wait...
Time = 5; ThreadA executes (3) and adds 100 + 50
Time = 6; ThreadA executes (4) and updates creds to 150
Time = 7; ThreadA executes (5) and commits the Txn1
Time = 8; ThreadB was waiting up to this point and now is able to execute (2)
with lock and fetches 150
Time = 9; ThreadB executes (3) and adds 150 + 50
Time = 10; ThreadB executes (4) and updates creds to 200
Time = 11; ThreadB executes (5) and commits the Txn2
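Putting steps 1-5 together with the lock hint, a minimal T-SQL sketch could look like this (the @creds and @amount variables are just illustrations; 50 is the amount used in the timeline):
BEGIN TRANSACTION;

DECLARE @creds int, @amount int = 50;

-- (2) fetch with an update lock: other transactions requesting the same lock wait here
SELECT @creds = creds FROM credits WITH (UPDLOCK) WHERE userid = 1;

-- (3) do your work
SET @creds = @creds + @amount;

-- (4) update the data
UPDATE credits SET creds = @creds WHERE userid = 1;

-- (5) commit, which releases the lock
COMMIT TRANSACTION;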
For MySQL InnoDB tables, this really depends on the isolation level you set.
If you are using the default, level 3 (REPEATABLE READ), then you need to lock any row that affects subsequent writes, even inside a transaction. In your example you will need:
SELECT creds FROM credits WHERE userid = 1 FOR UPDATE;
-- calculate --
UPDATE credits SET creds = 150 WHERE userid = 1;
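Note that FOR UPDATE only holds the lock for the duration of a transaction; with autocommit on, it is released as soon as the statement finishes. A minimal sketch with explicit transaction boundaries:
START TRANSACTION;
SELECT creds FROM credits WHERE userid = 1 FOR UPDATE;  -- concurrent writers and lockers wait here
-- calculate in the application
UPDATE credits SET creds = 150 WHERE userid = 1;
COMMIT;  -- releases the row lock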
If you are using level 4 (SERIALIZABLE), then a simple SELECT followed by an UPDATE is sufficient. SERIALIZABLE in InnoDB is implemented by taking a shared (read) lock on every row that you read, so if two transactions both read and then try to update the same row, one of them can hit a deadlock and be rolled back, and you should be prepared to retry it.
SELECT creds FROM credits WHERE userid = 1;
-- calculate --
UPDATE credits SET creds = 150 WHERE userid = 1;
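For this to apply, the session actually has to be running at that level; in MySQL you can set it just before starting the transaction, roughly like this:
SET SESSION TRANSACTION ISOLATION LEVEL SERIALIZABLE;
START TRANSACTION;
SELECT creds FROM credits WHERE userid = 1;   -- under SERIALIZABLE, InnoDB read-locks this row
-- calculate in the application
UPDATE credits SET creds = 150 WHERE userid = 1;
COMMIT;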
However, in this specific example, since the computation (adding credits) is simple enough to do in SQL, a single statement:
UPDATE credits SET creds = creds + 50 WHERE userid = 1;
will be equivalent to a SELECT FOR UPDATE followed by UPDATE.
Use transactions:
BEGIN WORK;
SELECT creds FROM credits WHERE userid = 1;
-- do your work
UPDATE credits SET creds = 150 WHERE userid = 1;
COMMIT;
Some important notes:
- Not all database engines support transactions. In particular, MySQL's old default storage engine, MyISAM (the default before version 5.5.5), doesn't. Use InnoDB (the newer default) if you're on MySQL.
- Transactions can abort due to reasons beyond your control. If this happens, your application must be prepared to start all over again, from the BEGIN WORK.
- You'll need to set the isolation level to SERIALIZABLE, otherwise the first SELECT doesn't stop another transaction from reading and updating the same row concurrently (transactions aren't like mutexes in programming languages). Some databases will throw an error if there are concurrent SERIALIZABLE transactions touching the same rows, and you'll have to restart the transaction.
- Some DBMSs provide SELECT ... FOR UPDATE, which will lock the rows retrieved by the SELECT until the transaction ends.
Combining transactions with SQL stored procedures can make the latter part easier to deal with; the application would just call a single stored procedure in a transaction, and re-call it if the transaction aborts.
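As a rough illustration of that idea (the procedure name add_credits and its parameters are invented for the example), a MySQL stored procedure wrapping the read-modify-write might look like this:
DELIMITER //
CREATE PROCEDURE add_credits(IN p_userid INT, IN p_amount INT)
BEGIN
  DECLARE v_creds INT;
  START TRANSACTION;
  -- lock the row so concurrent callers queue up behind this transaction
  SELECT creds INTO v_creds FROM credits WHERE userid = p_userid FOR UPDATE;
  -- "do your work" goes here; this sketch simply adds the amount
  UPDATE credits SET creds = v_creds + p_amount WHERE userid = p_userid;
  COMMIT;
END //
DELIMITER ;

The application then just calls it, and calls it again if the transaction aborts:
CALL add_credits(1, 50);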