How to skip the rest of tests in the class if one has failed?

The pytest -x option will stop the test run after the first failure: pytest -vs -x test_sample.py


  • If you'd like to stop the test run after N failures anywhere (not just within a particular test class), the command-line option pytest --maxfail=N is the way to go (see the example after this list): https://docs.pytest.org/en/latest/usage.html#stopping-after-the-first-or-n-failures

  • If you instead want to stop a test that consists of multiple steps as soon as any of them fails (and continue executing the other tests), put all your steps in a class, mark that class with @pytest.mark.incremental, and edit your conftest.py to include the code shown here: https://docs.pytest.org/en/latest/example/simple.html#incremental-testing-test-steps.

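For example (reusing the test_sample.py name from above; both flags are standard pytest options):

pytest -x test_sample.py      # stop the whole run at the first failure
pytest --maxfail=2            # stop the whole run after two failures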

I like the general "test-step" idea. I'd term it "incremental" testing, and it makes the most sense in functional testing scenarios, IMHO.

Here is an implementation that doesn't depend on internal details of pytest (except for the official hook extensions). Copy this into your conftest.py:

import pytest

def pytest_runtest_makereport(item, call):
    # after each test phase: if a test in an "incremental" class failed,
    # remember it on the class so the remaining tests can react to it
    if "incremental" in item.keywords:
        if call.excinfo is not None:
            parent = item.parent
            parent._previousfailed = item

def pytest_runtest_setup(item):
    # before each test: if an earlier test in the same class failed,
    # mark this one as an expected failure instead of running it
    previousfailed = getattr(item.parent, "_previousfailed", None)
    if previousfailed is not None:
        pytest.xfail("previous test failed (%s)" % previousfailed.name)

If you now have a "test_step.py" like this:

import pytest

@pytest.mark.incremental
class TestUserHandling:
    def test_login(self):
        pass
    def test_modification(self):
        assert 0
    def test_deletion(self):
        pass

then running it looks like this (using -rx to report on xfail reasons):

(1)hpk@t2:~/p/pytest/doc/en/example/teststep$ py.test -rx
============================= test session starts ==============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev17
plugins: xdist, bugzilla, cache, oejskit, cli, pep8, cov, timeout
collected 3 items

test_step.py .Fx

=================================== FAILURES ===================================
______________________ TestUserHandling.test_modification ______________________

self = <test_step.TestUserHandling instance at 0x1e0d9e0>

    def test_modification(self):
>       assert 0
E       assert 0

test_step.py:8: AssertionError
=========================== short test summary info ============================
XFAIL test_step.py::TestUserHandling::()::test_deletion
  reason: previous test failed (test_modification)
================ 1 failed, 1 passed, 1 xfailed in 0.02 seconds =================

I am using "xfail" here because skips are rather for wrong environments or missing dependencies, wrong interpreter versions.

Edit: Note that neither your example nor mine would directly work with distributed testing. For this, the pytest-xdist plugin needs to grow a way to define groups/classes to be sent wholesale to one testing slave, instead of the current mode, which usually sends the test functions of a class to different slaves.
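
For what it's worth, newer pytest-xdist releases grew exactly this kind of grouping: with --dist loadscope, all tests of a class/module are sent to the same worker, e.g.:

pytest -n 4 --dist loadscope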


It's generally bad practice to do what you are doing. Each test should be as independent as possible from the others, whereas here each test completely depends on the results of the previous ones.

Anyway, reading the docs, it seems that a feature like the one you want is not implemented (probably because it wasn't considered useful).

A workaround could be to "fail" your tests by calling a custom method which sets some condition on the class, and to mark each test with the skipif marker:

import unittest

import pytest

class MyTestCase(unittest.TestCase):
    skip_all = False

    # the string condition is re-evaluated at each test's setup,
    # so a flag set by an earlier test takes effect here
    @pytest.mark.skipif("MyTestCase.skip_all")
    def test_A(self):
        ...
        if failed:  # 'failed' stands for whatever failure check your test performs
            MyTestCase.skip_all = True

    @pytest.mark.skipif("MyTestCase.skip_all")
    def test_B(self):
        ...
        if failed:
            MyTestCase.skip_all = True

Alternatively, you can perform this check before running each test and call pytest.skip() when it applies.
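
A minimal sketch of that variant, assuming the same skip_all class attribute as above and doing the check in setUp (which runs before each test method):

import unittest

import pytest

class MyTestCase(unittest.TestCase):
    skip_all = False

    def setUp(self):
        # bail out before the test body runs if an earlier test set the flag
        if MyTestCase.skip_all:
            pytest.skip("an earlier test in this class failed")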

edit: Marking as xfail can be done in the same way, using pytest.xfail() and @pytest.mark.xfail instead.

Probably, instead of rewriting the boilerplate code for each test, you could write a decorator (this would probably require your methods to return a "flag" stating whether they failed).
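
Here is a hypothetical sketch of such a decorator. Instead of requiring a returned flag, it catches the exception a failing test raises and sets the skip_all flag used above (step_test is my own name, not a pytest API):

import functools

import pytest

def step_test(func):
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        cls = type(self)
        # skip this step if an earlier decorated step already failed
        if getattr(cls, "skip_all", False):
            pytest.skip("an earlier step in this class failed")
        try:
            return func(self, *args, **kwargs)
        except Exception:
            # remember the failure so the remaining steps are skipped
            cls.skip_all = True
            raise
    return wrapper

Each step method in the class would then simply be decorated with @step_test.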

Anyway, I'd like to point out that, as you state, if one of these tests fails, then other failing tests in the same test case should be considered false positives... but you can do this "by hand": just check the output and spot the false positives, even though this might be boring and error-prone.