OperationalError: database is locked
Here's what I used to manage concurrency. I was hitting one DB from 270 processes. Just increasing SQLite's timeout didn't help, but this approach, where you back off without attempting to connect for a while, seemed to work. The number of attempts (50) and the wait period (10-30 seconds) can be adjusted. I was collecting results from long-running analyses, so 10-30 seconds was fine for me, but maybe 1-3 would have worked.
import random
import sqlite3
import time

def do_query(path, q, args=None, commit=False):
    """
    do_query - Run a SQLite query, waiting for the DB if necessary.

    Args:
        path (str): path to DB file
        q (str): SQL query
        args (list): values for `?` placeholders in q
        commit (bool): whether or not to commit after running the query

    Returns:
        list of tuples: fetchall() for the query
    """
    if args is None:
        args = []
    for attempt in range(50):
        try:
            con = sqlite3.connect(path)
            cur = con.cursor()
            cur.execute(q, args)
            ans = cur.fetchall()
            if commit:
                con.commit()
            cur.close()
            con.close()
            return ans
        except sqlite3.OperationalError:
            # Back off without holding a connection open, so we don't
            # add to the contention while we wait.
            time.sleep(random.randint(10, 30))
    raise sqlite3.OperationalError("database still locked after 50 attempts")
Make sure you commit the other connections by calling con.commit().
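One way to keep those commits from being forgotten is to use the connection as a context manager, which commits on success (and rolls back on an exception), so the write lock is released as soon as the block exits. A minimal sketch with a throwaway in-memory database:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x INTEGER)")

# The connection as a context manager commits on success and rolls
# back on an exception, so the write lock is not held any longer
# than the block itself.
with con:
    con.execute("INSERT INTO t VALUES (1)")

print(con.execute("SELECT x FROM t").fetchall())  # [(1,)]
```

Note that the context manager commits the transaction but does not close the connection; close it separately when you're done.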
I had the same issue and found that killing all the Python processes solved the problem.
This is what this error means:
SQLite is meant to be a lightweight database, and thus can't support a high level of concurrency. OperationalError: database is locked errors indicate that your application is experiencing more concurrency than SQLite can handle in its default configuration. This error means that one thread or process has an exclusive lock on the database and another thread timed out waiting for the lock to be released.
Python's SQLite wrapper has a default timeout value (5 seconds) that determines how long the second thread is allowed to wait on the lock before it times out and raises the OperationalError: database is locked error.
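The lock-and-timeout behavior described above can be reproduced in a few lines. This is a sketch using a throwaway database file and a deliberately tiny timeout; the file path and table name are arbitrary:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

# First connection writes inside an open (uncommitted) transaction,
# which takes the write lock on the database file.
writer = sqlite3.connect(path)
writer.execute("CREATE TABLE t (x INTEGER)")
writer.execute("INSERT INTO t VALUES (1)")  # transaction stays open

# Second connection, with a very small busy timeout, gives up quickly.
reader = sqlite3.connect(path, timeout=0.1)
err = None
try:
    reader.execute("INSERT INTO t VALUES (2)")
except sqlite3.OperationalError as e:
    err = str(e)
print(err)  # database is locked

writer.commit()  # releasing the lock lets the second connection proceed
reader.execute("INSERT INTO t VALUES (2)")
reader.commit()
```

Once the first connection commits, the retry succeeds, which is exactly what the wait-and-retry loop in the accepted answer relies on.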
If you're getting this error, you can solve it by:
Switching to another database backend. At a certain point SQLite becomes too "lite" for real-world applications, and these sorts of concurrency errors indicate you've reached that point.
Rewriting your code to reduce concurrency and ensure that database transactions are short-lived.
Increasing the default timeout value by setting the timeout option when connecting.
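For the third option, in plain Python the busy timeout is a keyword argument to sqlite3.connect() (the default is 5 seconds). A sketch with an arbitrary 30-second value and a throwaway file path:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "app.db")  # throwaway path for the example

# timeout is how long (in seconds) this connection waits for another
# connection's lock before raising OperationalError; the default is 5.
con = sqlite3.connect(path, timeout=30)
con.execute("CREATE TABLE IF NOT EXISTS results (id INTEGER)")
con.close()
```

A longer timeout only papers over contention; if writers routinely hold the lock for longer than the timeout, the first two options are the real fix.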
Probably you have another connection in your code that is not closed or not committed, and this causes the error: you are trying to do a second execute while the database is already locked by the other connection. If you really need concurrent transactions, you need an RDBMS.