Make Django test case database visible to Celery
For your unit tests I would recommend skipping the Celery dependency; the two following links will provide you with the necessary information to start your unit tests:
- http://docs.celeryproject.org/projects/django-celery/en/2.4/cookbook/unit-testing.html
- http://docs.celeryproject.org/en/latest/userguide/testing.html

If you really want to test the Celery function calls including a queue, I'd probably set up a docker-compose with the server, worker, and queue combination and extend the custom CeleryTestRunner from the django-celery docs. But I wouldn't see a benefit from it, because the test system is probably too far away from production to be representative.
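For example, the eager mode described in those docs runs tasks synchronously in the test process, so no broker or worker is needed. A minimal sketch, assuming a hypothetical myapp.tasks.add task and the usual mysite.celery.app instance:

    from django.test import TestCase

    from mysite.celery import app
    from myapp.tasks import add  # hypothetical @shared_task


    class AddTaskEagerTest(TestCase):
        @classmethod
        def setUpClass(cls):
            super().setUpClass()
            # Run .delay()/.apply_async() inline instead of sending to a broker
            app.conf.task_always_eager = True
            # Re-raise exceptions from eager tasks instead of swallowing them
            app.conf.task_eager_propagates = True

        def test_add_runs_synchronously(self):
            self.assertEqual(add.delay(2, 3).get(), 5)

In a real suite you would save and restore these settings in tearDownClass so they don't leak into other test cases.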
This is possible by starting a Celery worker within the Django test case.
Background
Django's in-memory test database is SQLite. As the SQLite documentation on in-memory databases says, "[A]ll database connections sharing the in-memory database need to be in the same process." This means that, as long as Django uses an in-memory test database and Celery is started in a separate process, it is fundamentally impossible for Celery and Django to share a test database.
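To illustrate the constraint (this snippet is mine, not part of the original setup): a shared-cache in-memory SQLite database is visible to other connections in the same process, but a second process has no way to open it.

    import sqlite3

    # Two connections in the same process can share one in-memory database
    # via a shared-cache URI; a separate process has no way to reach it.
    uri = "file:testdb?mode=memory&cache=shared"
    a = sqlite3.connect(uri, uri=True)
    b = sqlite3.connect(uri, uri=True)

    a.execute("CREATE TABLE t (x INTEGER)")
    a.execute("INSERT INTO t VALUES (1)")
    a.commit()
    print(b.execute("SELECT x FROM t").fetchall())  # [(1,)]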
However, with celery.contrib.testing.worker.start_worker, it is possible to start a Celery worker in a separate thread within the same process. This worker can access the in-memory database.
This assumes that Celery is already set up in the usual way with the Django project.
Solution
Because Django-Celery involves some cross-thread communication, only test cases that don't run in isolated transactions will work. The test case must inherit directly from SimpleTestCase, or its Django REST framework equivalent APISimpleTestCase, and set databases to '__all__', or to just the database that the test interacts with.
The key is to start a Celery worker in the setUpClass method of the TestCase and close it in the tearDownClass method. The central function is celery.contrib.testing.worker.start_worker, which requires an instance of the current Celery app, presumably obtained from mysite.celery.app, and returns a Python context manager with __enter__ and __exit__ methods that must be called in setUpClass and tearDownClass, respectively. There is probably a way to avoid manually entering and exiting the context manager with a decorator or something, but I couldn't figure it out (one possibility is sketched after the example). Here is an example tests.py file:
    from celery.contrib.testing.worker import start_worker
    from django.test import SimpleTestCase

    from mysite.celery import app


    class BatchSimulationTestCase(SimpleTestCase):
        databases = '__all__'

        @classmethod
        def setUpClass(cls):
            super().setUpClass()

            # Start up celery worker
            cls.celery_worker = start_worker(app, perform_ping_check=False)
            cls.celery_worker.__enter__()

        @classmethod
        def tearDownClass(cls):
            super().tearDownClass()

            # Close worker
            cls.celery_worker.__exit__(None, None, None)

        def test_my_function(self):
            # my_task.delay() or something
            pass
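On the question of avoiding the manual __enter__/__exit__ calls: one option that I believe works on Python 3.11+ (my addition, not from the original answer) is unittest's enterClassContext, which registers the exit as a class cleanup automatically:

    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        # __enter__ is called for us; __exit__ is registered via addClassCleanup
        cls.enterClassContext(start_worker(app, perform_ping_check=False))

With this, the explicit tearDownClass override is no longer needed.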
For whatever reason, the testing worker tries to use a task called 'celery.ping', probably to provide better error messages in the case of worker failure. The task it is looking for is celery.contrib.testing.tasks.ping, which is not available at test time. Setting the perform_ping_check argument of start_worker to False skips the check for this and avoids the associated error.
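Alternatively, if you'd rather keep the ping check, I believe importing the ping task in your test module is enough to register it (an assumption based on how shared_task registration works, not something from the original answer):

    # Importing registers the 'celery.ping' shared task with the app,
    # so start_worker's readiness check can find it.
    from celery.contrib.testing.tasks import ping  # noqa: F401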
Now, when the tests are run, there is no need to start a separate Celery process. A Celery worker will be started in the Django test process as a separate thread. This worker can see any in-memory databases, including the default in-memory test database. To control the number of workers, there are options available in start_worker, but it appears the default is a single worker.
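For reference, a sketch of passing those options (the parameter names are from the start_worker signature in recent Celery releases, so verify against your installed version):

    # Defaults are concurrency=1 and pool='solo'; loglevel helps debugging.
    cls.celery_worker = start_worker(
        app,
        concurrency=1,
        pool='solo',
        loglevel='info',
        perform_ping_check=False,
    )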