Run tests concurrently

Using pytest-subtests as suggested by Nathaniel seems to be a viable solution. Here is how it can be done with trio: the code below runs a subtest concurrently for every function whose name starts with io_.

import pytest
import sys
import trio
import inspect
import re
import time


pytestmark = pytest.mark.trio
io_test_pattern = re.compile(r"^io_")  # anchored: names must *start* with io_


async def tests(subtests):

    def find_io_tests(subtests, ignored_names):
        # Collect every module-level io_* coroutine function, skipping
        # the names we were told to ignore (the driver itself).
        functions = inspect.getmembers(sys.modules[__name__], inspect.isfunction)
        for (f_name, function) in functions:
            if f_name in ignored_names:
                continue
            if io_test_pattern.match(f_name):
                yield (run, subtests, f_name, function)

    async def run(subtests, test_name, test_function):
        # Run each io_* function as its own subtest, so one failure
        # doesn't abort the others.
        with subtests.test(msg=test_name):
            await test_function()

    self_name = inspect.currentframe().f_code.co_name  # i.e. "tests"
    async with trio.open_nursery() as nursery:
        for io_test in find_io_tests(subtests, {self_name}):
            nursery.start_soon(*io_test)


accepted_error = 0.1  # tolerated clock drift, in seconds

async def io_test_1():
    await assert_sleep_duration_ok(1)

async def io_test_2():
    await assert_sleep_duration_ok(2)

async def io_test_3():
    await assert_sleep_duration_ok(3)

async def io_test_4():
    await assert_sleep_duration_ok(4)

async def assert_sleep_duration_ok(duration):
    start = time.time()
    await trio.sleep(duration)
    actual_duration = time.time() - start
    assert abs(actual_duration - duration) < accepted_error

Running python -m pytest -v outputs:

============================ test session starts =============================
platform darwin -- Python 3.7.0, pytest-4.6.2, py-1.8.0, pluggy-0.12.0
plugins: asyncio-0.10.0, trio-0.5.2, subtests-0.2.1
collected 1 item

tests/stripe_test.py::tests PASSED                                     [100%]
tests/stripe_test.py::tests PASSED                                     [100%]
tests/stripe_test.py::tests PASSED                                     [100%]
tests/stripe_test.py::tests PASSED                                     [100%]
tests/stripe_test.py::tests PASSED                                     [100%]

========================== 1 passed in 4.07 seconds ==========================

That's not perfect, as the percentage shown is relative only to the number of collected tests, not to the number of subtests (i.e. the io_* functions here), but it seems like a good start.

Also note that time.time() is used so that the example makes sense for both trio and asyncio; in a real trio code base, trio.current_time() should be used instead.
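For a trio-only suite, the helper above would simply swap clocks; a minimal sketch:

async def assert_sleep_duration_ok(duration):
    start = trio.current_time()  # trio's own clock instead of time.time()
    await trio.sleep(duration)
    actual_duration = trio.current_time() - start
    assert abs(actual_duration - duration) < accepted_error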

The same tests could be achieved using asyncio; you would basically have to replace three things (a full sketch follows the list):

  • pytestmark = pytest.mark.trio becomes pytestmark = pytest.mark.asyncio
  • yield (run, subtests, f_name, function) becomes yield run(subtests, f_name, function), since asyncio.gather takes awaitables rather than (function, *args) tuples
  • And finally the nursery loop should be replaced with something like:
await asyncio.gather(*find_io_tests(subtests, {self_name}))
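Putting those three changes together, the asyncio driver might look like this (a sketch, assuming pytest-asyncio and pytest-subtests are installed; the io_* functions and helpers stay as above, with trio.sleep swapped for asyncio.sleep):

import asyncio
import inspect
import re
import sys

import pytest


pytestmark = pytest.mark.asyncio
io_test_pattern = re.compile(r"^io_")


async def tests(subtests):

    def find_io_tests(subtests, ignored_names):
        functions = inspect.getmembers(sys.modules[__name__], inspect.isfunction)
        for (f_name, function) in functions:
            if f_name in ignored_names:
                continue
            if io_test_pattern.match(f_name):
                # Create the coroutine here; gather() schedules them all.
                yield run(subtests, f_name, function)

    async def run(subtests, test_name, test_function):
        with subtests.test(msg=test_name):
            await test_function()

    self_name = inspect.currentframe().f_code.co_name
    await asyncio.gather(*find_io_tests(subtests, {self_name}))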

Unfortunately, the way pytest works internally, you can't really run multiple tests at the same time under the same call to trio.run/asyncio.run/curio.run. (This is also good in some ways – it prevents state from leaking between tests, and with trio at least it lets you configure trio differently for different tests, e.g. setting one test to use the autojump clock while another test doesn't.)
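As an illustration of that last point, pytest-trio ships an autojump_clock fixture, so one test can run on virtual time while its neighbour keeps the real clock (a sketch, assuming pytest-trio is installed):

import pytest
import trio

pytestmark = pytest.mark.trio


async def test_on_virtual_time(autojump_clock):
    # With the autojump clock, time advances instantly whenever
    # everything is sleeping, so this finishes in milliseconds.
    await trio.sleep(60)


async def test_on_real_time():
    # This test still runs against the real wall clock.
    await trio.sleep(0.1)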

Definitely the simplest option is to use pytest-xdist to run tests in parallel worker processes. You can still use async internally inside each test – all these async libraries support running an independent loop in each worker.
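For example, each test below is synchronous from pytest's point of view and drives its own trio loop, so pytest-xdist can distribute them freely (a sketch with hypothetical test names; run it with something like python -m pytest -n 4 after installing pytest-xdist):

import trio


def test_first_io_path():
    async def main():
        await trio.sleep(1)

    trio.run(main)


def test_second_io_path():
    async def main():
        await trio.sleep(1)

    trio.run(main)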

If you really need to use async concurrency, then I think you'll have to write a single pytest test function, and then inside that function do your own scheduling and concurrency. If you do it this way, then from pytest's perspective there's only one test, so it won't be easy to get nice per-test output. I guess you could try using pytest-subtests?


There's https://github.com/willemt/pytest-asyncio-cooperative now.

As of today (2020-03-25), the limitations are quite steep: you have to ensure your tests don't share anything (well, technically, don't share mutable state), and you can't use mock.patch (technically: don't mock anything another test might use).
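Within those constraints, usage is small; based on the project's README at the time, tests opt in with a marker (a sketch, assuming the plugin is installed):

import asyncio

import pytest


@pytest.mark.asyncio_cooperative
async def test_a():
    await asyncio.sleep(2)


@pytest.mark.asyncio_cooperative
async def test_b():
    await asyncio.sleep(2)

Because both tests cooperate on one event loop, the run takes roughly 2 seconds rather than 4.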

You can follow the discussion at https://github.com/pytest-dev/pytest-asyncio/issues/69. I believe it is hard, but possible, to come up with a way to mark each fixture as allowing or disallowing concurrent use, and to schedule tests so that these restrictions are preserved.