
pytest-randomly

Randomness power.

Pytest plugin to randomly order tests and control random.seed.

Features

All of these features are on by default but can be disabled with flags.

  • Randomly shuffles the order of test items. This is done first at the level of modules, then at the level of test classes (if you have them), then at the level of functions. This also works with things like doctests.
  • Resets random.seed() at the start of every test to a fixed number - this defaults to time.time() from the start of your test run, but you can pass in --randomly-seed to repeat a randomness-induced failure.
  • If factory boy is installed, its random state is reset at the start of every test. This allows for repeatable use of its random 'fuzzy' features.
  • If faker is installed, its random state is reset at the start of every test. This is also for repeatable fuzzy data in tests - factory boy uses faker for lots of data. This is also done if you're using the faker pytest fixture, by defining the faker_seed fixture (docs).
  • If numpy is installed, its random state is reset at the start of every test.
  • If additional random generators are used, they can be registered under the pytest_randomly.random_seeder entry point and will have their seed reset at the start of every test. Register a function that takes the current seed value.
  • Works with pytest-xdist.

About

Randomness in testing can be quite powerful to discover hidden flaws in the tests themselves, as well as giving a little more coverage to your system.

By randomly ordering the tests, the risk of surprising inter-test dependencies is reduced - a technique used in many places, for example Google's C++ test runner googletest. Research suggests that "dependent tests do exist in practice" and a random order of test executions can effectively detect such dependencies [1]. Alternatively, a reverse order of test executions, as provided by pytest-reverse, may find fewer dependent tests but can achieve a better benefit/cost ratio.

By resetting the random seed to a repeatable number for each test, tests can create data based on random numbers and yet remain repeatable, for example factory boy's fuzzy values. This is good for ensuring that tests specify the data they need and that the tested system is not affected by any data that is filled in randomly due to not being specified.
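As an illustrative sketch of that property (plain random, no factory boy required, and not the plugin's internals): reseeding the module-level generator with the same value makes "random" data identical between runs.

```python
import random


def fuzzy_username():
    # Stand-in for a fuzzy factory value: data drawn from the module-level
    # random state that pytest-randomly reseeds before each test.
    return "".join(random.choice("abcdefgh") for _ in range(8))


random.seed(1234)            # what a seed reset before a test looks like
first_run = fuzzy_username()

random.seed(1234)            # same seed on the repeat run...
second_run = fuzzy_username()

assert first_run == second_run  # ...yields identical "random" data
```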

I have written a blog post covering the history of pytest-randomly, including how it started life as the nose plugin nose-randomly.

Additionally, I appeared on the Test and Code podcast to talk about pytest-randomly.

Installation

Install with pip:

python -m pip install pytest-randomly

Python 3.6 to 3.9 supported.


Testing a Django project? Check out my book Speed Up Your Django Tests which covers loads of best practices so you can write faster, more accurate tests.


Usage

Pytest will automatically find the plugin and use it when you run pytest. The output will start with an extra line that tells you the random seed that is being used:

$ pytest
...
platform darwin -- Python 3.7.2, pytest-4.3.1, py-1.8.0, pluggy-0.9.0
Using --randomly-seed=1553614239
...

If the tests fail due to ordering or randomly created data, you can restart them with that seed using the flag as suggested:

pytest --randomly-seed=1234

Or more conveniently, use the special value last:

pytest --randomly-seed=last

Since ordering happens first by module and then by class, you can debug an inter-test pollution failure by narrowing down which tests are run, rerunning just the failing module or class:

pytest --randomly-seed=1234 tests/module_that_failed/

You can disable behaviours you don't like with the following flags:

  • --randomly-dont-reset-seed - turn off the reset of random.seed() at the start of every test
  • --randomly-dont-reorganize - turn off the shuffling of the order of tests

The plugin appears to pytest under the name 'randomly'. To disable it altogether, you can use the -p argument, for example:

pytest -p no:randomly

Entry Point

If you're using a different randomness generator in your third-party package, you can register an entry point to be called every time pytest-randomly reseeds. Implement the entry point pytest_randomly.random_seeder, referring to a function/callable that takes one argument, the new seed (int).

For example in your setup.cfg:

[options.entry_points]
pytest_randomly.random_seeder =
    mypackage = mypackage.reseed

Then implement reseed(new_seed).

References

[1] Sai Zhang, Darioush Jalali, Jochen Wuttke, Kıvanç Muşlu, Wing Lam, Michael D. Ernst, and David Notkin. 2014. Empirically revisiting the test independence assumption. In Proceedings of the 2014 International Symposium on Software Testing and Analysis (ISSTA 2014). Association for Computing Machinery, New York, NY, USA, 385–396. doi:https://doi.org/10.1145/2610384.2610404
Comments
  • crashes with numpy and --randomly-seed=7106521602475165645


     INTERNALERROR>   File "/home/runner/work/pytest-randomly/pytest-randomly/.tox/py36/lib/python3.6/site-packages/pytest_randomly.py", line 144, in _reseed
    INTERNALERROR>     np_random.seed(seed)
    INTERNALERROR>   File "mtrand.pyx", line 243, in numpy.random.mtrand.RandomState.seed
    INTERNALERROR>   File "_mt19937.pyx", line 166, in numpy.random._mt19937.MT19937._legacy_seeding
    INTERNALERROR>   File "_mt19937.pyx", line 180, in numpy.random._mt19937.MT19937._legacy_seeding
    INTERNALERROR> ValueError: Seed must be between 0 and 2**32 - 1
    

    looks like some sort of seed truncation is needed:

    def _numpy_seed(seed):
        return seed if 0 <= seed <= 2**32-1 else random.Random(seed).getrandbits(32)
    
    opened by graingert 10
  • Unable to use random in pytest.mark.parametrize with xdist and randomly


    Packages:

    pytest-3.2.2
    xdist-1.20.0
    randomly-1.2.1
    

    Example code:

    import pytest
    import random
    
    
    def gen_param():
        a = random.random()
        b = random.random()
        c = a + b
        return a, b, c
    
    
    
    @pytest.mark.parametrize('a,b,c', [gen_param() for _ in range(10)])
    def test_sum(a, b, c):
        assert a + b == c
    

    Example result:

    Different tests were collected between gw1 and gw0. The difference is:
    --- gw1
    
    +++ gw0
    
    @@ -1,10 +1,10 @@
    
    -test_it.py::test_sum[0.21119735007187512-0.03478699051186407-0.2459843405837392]
    -test_it.py::test_sum[0.19989965451085068-0.21530345609429247-0.41520311060514314]
    -test_it.py::test_sum[0.5682066547612487-0.7243829926261657-1.2925896473874143]
    -test_it.py::test_sum[0.5138857769400398-0.9866435513079722-1.500529328248012]
    -test_it.py::test_sum[0.32391650283278506-0.39646296915151646-0.7203794719843015]
    -test_it.py::test_sum[0.9573539653252039-0.46631807929040026-1.4236720446156041]
    -test_it.py::test_sum[0.18758435224247982-0.4081118220534776-0.5956961742959574]
    -test_it.py::test_sum[0.8300722136940875-0.24370118062201607-1.0737733943161034]
    -test_it.py::test_sum[0.45416992471686735-0.5539633757267955-1.0081333004436628]
    -test_it.py::test_sum[0.6404127883887936-0.07517291369462298-0.7155857020834165]
    +test_it.py::test_sum[0.4235467615256703-0.6336556280381637-1.0572023895638338]
    +test_it.py::test_sum[0.08598091323183876-0.9197414141632071-1.0057223273950457]
    +test_it.py::test_sum[0.6499835837722387-0.08942031974171283-0.7394039035139516]
    +test_it.py::test_sum[0.5982265644051936-0.4014341639946195-0.9996607283998131]
    +test_it.py::test_sum[0.6108773740309141-0.39536962117174335-1.0062469952026576]
    +test_it.py::test_sum[0.13520942528376823-0.36746285760417974-0.502672282887948]
    +test_it.py::test_sum[0.8469134601088156-0.34936702626625926-1.196280486375075]
    +test_it.py::test_sum[0.5828050759610505-0.028386017512678552-0.611191093473729]
    +test_it.py::test_sum[0.1425962119341786-0.5579729193825124-0.700569131316691]
    +test_it.py::test_sum[0.6183292075112786-0.5376259380555282-1.1559551455668067]
    

    From what I could gather, it's possible to fix it simply by adding

    def pytest_configure(config):
        _reseed(config)
    

    to pytest_randomly.py. But I've never written a single plugin for pytest and I've read only some excerpts from the documentation, so I may be wrong.

    opened by p-himik 9
  • Different test order on different machines but --randomly-seed is the same?


    Hi,

    I have a question about how the --randomly-seed option works.

    Recently we had a test failure on our CI system, and since we use pytest-randomly I grabbed the seed value from the output on the CI job and ran the test on my machine.

    What I noticed was that despite using the same --randomly-seed value, the tests ran in a different order on my machine. And in fact the tests all passed on my machine.

    My question then is, is it expected that the test order be different on different machines for the same --randomly-seed value?

    Thanks,

    Dan

    opened by danieljacobs1 8
  • mimesis support


    Hi, I am one of the developers of mimesis and a huge fan of your library.

    I really want to provide a native integration of mimesis and pytest-randomly just like the one you have with faker and factoryboy. I really like that the reseed happens before each test, so the results are significantly better than a regular one-time seed.

    What needs to be done?

    1. From our side we have changed how random is used internally to expose a global random instance: https://github.com/lk-geimfari/mimesis/pull/471/files#diff-02fe14a63fc506efde39da2f898b5e0fR127 so it would be easy to seed it
    2. I can provide a PR with the same logic as you already use for faker and others, if that's fine

    Related: https://github.com/lk-geimfari/mimesis/issues/469 I would like to hear your opinion. Thanks!

    opened by sobolevn 8
  • 3.8.0: pytest is failing


    + /usr/bin/python3 -Bm pytest -ra -p no:randomly
    =========================================================================== test session starts ============================================================================
    platform linux -- Python 3.8.9, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
    rootdir: /home/tkloczko/rpmbuild/BUILD/pytest-randomly-3.8.0
    plugins: forked-1.3.0, shutil-1.7.0, virtualenv-1.7.0, expect-1.1.0, httpbin-1.0.0, xdist-2.2.1, flake8-1.0.7, timeout-1.4.2, betamax-0.8.1, pyfakefs-4.4.0, freezegun-0.4.2, cases-3.4.6, case-1.5.3, isort-1.3.0, aspectlib-1.5.2, asyncio-0.15.1, toolbox-0.5, xprocess-0.17.1, flaky-3.7.0, aiohttp-0.3.0, checkdocs-2.7.0, mock-3.6.1, rerunfailures-9.1.1, requests-mock-1.9.3, hypothesis-6.13.7, Faker-8.4.0, cov-2.12.1
    collected 37 items
    
    . .                                                                                                                                                                  [  2%]
    tests/test_pytest_randomly.py .........FFFFFFF.F................                                                                                                     [100%]
    
    ================================================================================= FAILURES =================================================================================
    ___________________________________________________________________________ test_files_reordered ___________________________________________________________________________
    
    ourtestdir = <Testdir local('/tmp/pytest-of-tkloczko/pytest-18/test_files_reordered0')>
    
        def test_files_reordered(ourtestdir):
            code = """
                def test_it():
                    pass
            """
            ourtestdir.makepyfile(test_a=code, test_b=code, test_c=code, test_d=code)
            args = ["-v", "--randomly-seed=15"]
    
            out = ourtestdir.runpytest(*args)
    
            out.assert_outcomes(passed=4, failed=0)
    >       assert out.outlines[8:12] == [
                "test_d.py::test_it PASSED",
                "test_c.py::test_it PASSED",
                "test_a.py::test_it PASSED",
                "test_b.py::test_it PASSED",
            ]
    E       AssertionError: assert ['', 'test_d....st_it PASSED'] == ['test_d.py::...st_it PASSED']
    E         At index 0 diff: '' != 'test_d.py::test_it PASSED'
    E         Use -v to get the full diff
    
    /home/tkloczko/rpmbuild/BUILD/pytest-randomly-3.8.0/tests/test_pytest_randomly.py:241: AssertionError
    --------------------------------------------------------------------------- Captured stdout call ---------------------------------------------------------------------------
    ============================= test session starts ==============================
    platform linux -- Python 3.8.9, pytest-6.2.4, py-1.10.0, pluggy-0.13.1 -- /usr/bin/python3
    cachedir: .pytest_cache
    Using --randomly-seed=15
    hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/tkloczko/rpmbuild/BUILD/pytest-randomly-3.8.0/.hypothesis/examples')
    rootdir: /tmp/pytest-of-tkloczko/pytest-18/test_files_reordered0, configfile: pytest.ini
    plugins: randomly-3.8.0, forked-1.3.0, shutil-1.7.0, virtualenv-1.7.0, expect-1.1.0, httpbin-1.0.0, xdist-2.2.1, flake8-1.0.7, timeout-1.4.2, betamax-0.8.1, pyfakefs-4.4.0, freezegun-0.4.2, cases-3.4.6, case-1.5.3, isort-1.3.0, aspectlib-1.5.2, asyncio-0.15.1, toolbox-0.5, xprocess-0.17.1, flaky-3.7.0, aiohttp-0.3.0, checkdocs-2.7.0, mock-3.6.1, rerunfailures-9.1.1, requests-mock-1.9.3, hypothesis-6.13.7, Faker-8.4.0, cov-2.12.1
    collecting ... collected 4 items
    
    test_d.py::test_it PASSED
    test_c.py::test_it PASSED
    test_a.py::test_it PASSED
    test_b.py::test_it PASSED
    
    ============================== 4 passed in 0.11s ===============================
    _________________________________________________________________ test_files_reordered_when_seed_not_reset _________________________________________________________________
    
    ourtestdir = <Testdir local('/tmp/pytest-of-tkloczko/pytest-18/test_files_reordered_when_seed_not_reset0')>
    
        def test_files_reordered_when_seed_not_reset(ourtestdir):
            code = """
                def test_it():
                    pass
            """
            ourtestdir.makepyfile(test_a=code, test_b=code, test_c=code, test_d=code)
            args = ["-v", "--randomly-seed=15"]
    
            args.append("--randomly-dont-reset-seed")
            out = ourtestdir.runpytest(*args)
    
            out.assert_outcomes(passed=4, failed=0)
    >       assert out.outlines[8:12] == [
                "test_d.py::test_it PASSED",
                "test_c.py::test_it PASSED",
                "test_a.py::test_it PASSED",
                "test_b.py::test_it PASSED",
            ]
    E       AssertionError: assert ['', 'test_d....st_it PASSED'] == ['test_d.py::...st_it PASSED']
    E         At index 0 diff: '' != 'test_d.py::test_it PASSED'
    E         Use -v to get the full diff
    
    /home/tkloczko/rpmbuild/BUILD/pytest-randomly-3.8.0/tests/test_pytest_randomly.py:261: AssertionError
    --------------------------------------------------------------------------- Captured stdout call ---------------------------------------------------------------------------
    ============================= test session starts ==============================
    platform linux -- Python 3.8.9, pytest-6.2.4, py-1.10.0, pluggy-0.13.1 -- /usr/bin/python3
    cachedir: .pytest_cache
    Using --randomly-seed=15
    hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/tkloczko/rpmbuild/BUILD/pytest-randomly-3.8.0/.hypothesis/examples')
    rootdir: /tmp/pytest-of-tkloczko/pytest-18/test_files_reordered_when_seed_not_reset0, configfile: pytest.ini
    plugins: randomly-3.8.0, forked-1.3.0, shutil-1.7.0, virtualenv-1.7.0, expect-1.1.0, httpbin-1.0.0, xdist-2.2.1, flake8-1.0.7, timeout-1.4.2, betamax-0.8.1, pyfakefs-4.4.0, freezegun-0.4.2, cases-3.4.6, case-1.5.3, isort-1.3.0, aspectlib-1.5.2, asyncio-0.15.1, toolbox-0.5, xprocess-0.17.1, flaky-3.7.0, aiohttp-0.3.0, checkdocs-2.7.0, mock-3.6.1, rerunfailures-9.1.1, requests-mock-1.9.3, hypothesis-6.13.7, Faker-8.4.0, cov-2.12.1
    collecting ... collected 4 items
    
    test_d.py::test_it PASSED
    test_c.py::test_it PASSED
    test_a.py::test_it PASSED
    test_b.py::test_it PASSED
    
    ============================== 4 passed in 0.11s ===============================
    __________________________________________________________________________ test_classes_reordered __________________________________________________________________________
    
    ourtestdir = <Testdir local('/tmp/pytest-of-tkloczko/pytest-18/test_classes_reordered0')>
    
        def test_classes_reordered(ourtestdir):
            ourtestdir.makepyfile(
                test_one="""
                from unittest import TestCase
    
    
                class A(TestCase):
                    def test_a(self):
                        pass
    
    
                class B(TestCase):
                    def test_b(self):
                        pass
    
    
                class C(TestCase):
                    def test_c(self):
                        pass
    
    
                class D(TestCase):
                    def test_d(self):
                        pass
                """
            )
            args = ["-v", "--randomly-seed=15"]
    
            out = ourtestdir.runpytest(*args)
    
            out.assert_outcomes(passed=4, failed=0)
    >       assert out.outlines[8:12] == [
                "test_one.py::D::test_d PASSED",
                "test_one.py::C::test_c PASSED",
                "test_one.py::A::test_a PASSED",
                "test_one.py::B::test_b PASSED",
            ]
    E       AssertionError: assert ['', 'test_on...est_a PASSED'] == ['test_one.py...est_b PASSED']
    E         At index 0 diff: '' != 'test_one.py::D::test_d PASSED'
    E         Use -v to get the full diff
    
    /home/tkloczko/rpmbuild/BUILD/pytest-randomly-3.8.0/tests/test_pytest_randomly.py:300: AssertionError
    --------------------------------------------------------------------------- Captured stdout call ---------------------------------------------------------------------------
    ============================= test session starts ==============================
    platform linux -- Python 3.8.9, pytest-6.2.4, py-1.10.0, pluggy-0.13.1 -- /usr/bin/python3
    cachedir: .pytest_cache
    Using --randomly-seed=15
    hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/tkloczko/rpmbuild/BUILD/pytest-randomly-3.8.0/.hypothesis/examples')
    rootdir: /tmp/pytest-of-tkloczko/pytest-18/test_classes_reordered0, configfile: pytest.ini
    plugins: randomly-3.8.0, forked-1.3.0, shutil-1.7.0, virtualenv-1.7.0, expect-1.1.0, httpbin-1.0.0, xdist-2.2.1, flake8-1.0.7, timeout-1.4.2, betamax-0.8.1, pyfakefs-4.4.0, freezegun-0.4.2, cases-3.4.6, case-1.5.3, isort-1.3.0, aspectlib-1.5.2, asyncio-0.15.1, toolbox-0.5, xprocess-0.17.1, flaky-3.7.0, aiohttp-0.3.0, checkdocs-2.7.0, mock-3.6.1, rerunfailures-9.1.1, requests-mock-1.9.3, hypothesis-6.13.7, Faker-8.4.0, cov-2.12.1
    collecting ... collected 4 items
    
    test_one.py::D::test_d PASSED
    test_one.py::C::test_c PASSED
    test_one.py::A::test_a PASSED
    test_one.py::B::test_b PASSED
    
    ============================== 4 passed in 0.11s ===============================
    ____________________________________________________________________ test_class_test_methods_reordered _____________________________________________________________________
    
    ourtestdir = <Testdir local('/tmp/pytest-of-tkloczko/pytest-18/test_class_test_methods_reordered0')>
    
        def test_class_test_methods_reordered(ourtestdir):
            ourtestdir.makepyfile(
                test_one="""
                from unittest import TestCase
    
                class T(TestCase):
                    def test_a(self):
                        pass
    
                    def test_b(self):
                        pass
    
                    def test_c(self):
                        pass
    
                    def test_d(self):
                        pass
                """
            )
            args = ["-v", "--randomly-seed=15"]
    
            out = ourtestdir.runpytest(*args)
    
            out.assert_outcomes(passed=4, failed=0)
    >       assert out.outlines[8:12] == [
                "test_one.py::T::test_d PASSED",
                "test_one.py::T::test_c PASSED",
                "test_one.py::T::test_a PASSED",
                "test_one.py::T::test_b PASSED",
            ]
    E       AssertionError: assert ['', 'test_on...est_a PASSED'] == ['test_one.py...est_b PASSED']
    E         At index 0 diff: '' != 'test_one.py::T::test_d PASSED'
    E         Use -v to get the full diff
    
    /home/tkloczko/rpmbuild/BUILD/pytest-randomly-3.8.0/tests/test_pytest_randomly.py:332: AssertionError
    --------------------------------------------------------------------------- Captured stdout call ---------------------------------------------------------------------------
    ============================= test session starts ==============================
    platform linux -- Python 3.8.9, pytest-6.2.4, py-1.10.0, pluggy-0.13.1 -- /usr/bin/python3
    cachedir: .pytest_cache
    Using --randomly-seed=15
    hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/tkloczko/rpmbuild/BUILD/pytest-randomly-3.8.0/.hypothesis/examples')
    rootdir: /tmp/pytest-of-tkloczko/pytest-18/test_class_test_methods_reordered0, configfile: pytest.ini
    plugins: randomly-3.8.0, forked-1.3.0, shutil-1.7.0, virtualenv-1.7.0, expect-1.1.0, httpbin-1.0.0, xdist-2.2.1, flake8-1.0.7, timeout-1.4.2, betamax-0.8.1, pyfakefs-4.4.0, freezegun-0.4.2, cases-3.4.6, case-1.5.3, isort-1.3.0, aspectlib-1.5.2, asyncio-0.15.1, toolbox-0.5, xprocess-0.17.1, flaky-3.7.0, aiohttp-0.3.0, checkdocs-2.7.0, mock-3.6.1, rerunfailures-9.1.1, requests-mock-1.9.3, hypothesis-6.13.7, Faker-8.4.0, cov-2.12.1
    collecting ... collected 4 items
    
    test_one.py::T::test_d PASSED
    test_one.py::T::test_c PASSED
    test_one.py::T::test_a PASSED
    test_one.py::T::test_b PASSED
    
    ============================== 4 passed in 0.11s ===============================
    ______________________________________________________________________ test_test_functions_reordered _______________________________________________________________________
    
    ourtestdir = <Testdir local('/tmp/pytest-of-tkloczko/pytest-18/test_test_functions_reordered0')>
    
        def test_test_functions_reordered(ourtestdir):
            ourtestdir.makepyfile(
                test_one="""
                def test_a():
                    pass
    
                def test_b():
                    pass
    
                def test_c():
                    pass
    
                def test_d():
                    pass
                """
            )
            args = ["-v", "--randomly-seed=15"]
    
            out = ourtestdir.runpytest(*args)
    
            out.assert_outcomes(passed=4, failed=0)
    >       assert out.outlines[8:12] == [
                "test_one.py::test_d PASSED",
                "test_one.py::test_c PASSED",
                "test_one.py::test_a PASSED",
                "test_one.py::test_b PASSED",
            ]
    E       AssertionError: assert ['', 'test_on...est_a PASSED'] == ['test_one.py...est_b PASSED']
    E         At index 0 diff: '' != 'test_one.py::test_d PASSED'
    E         Use -v to get the full diff
    
    /home/tkloczko/rpmbuild/BUILD/pytest-randomly-3.8.0/tests/test_pytest_randomly.py:361: AssertionError
    --------------------------------------------------------------------------- Captured stdout call ---------------------------------------------------------------------------
    ============================= test session starts ==============================
    platform linux -- Python 3.8.9, pytest-6.2.4, py-1.10.0, pluggy-0.13.1 -- /usr/bin/python3
    cachedir: .pytest_cache
    Using --randomly-seed=15
    hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/tkloczko/rpmbuild/BUILD/pytest-randomly-3.8.0/.hypothesis/examples')
    rootdir: /tmp/pytest-of-tkloczko/pytest-18/test_test_functions_reordered0, configfile: pytest.ini
    plugins: randomly-3.8.0, forked-1.3.0, shutil-1.7.0, virtualenv-1.7.0, expect-1.1.0, httpbin-1.0.0, xdist-2.2.1, flake8-1.0.7, timeout-1.4.2, betamax-0.8.1, pyfakefs-4.4.0, freezegun-0.4.2, cases-3.4.6, case-1.5.3, isort-1.3.0, aspectlib-1.5.2, asyncio-0.15.1, toolbox-0.5, xprocess-0.17.1, flaky-3.7.0, aiohttp-0.3.0, checkdocs-2.7.0, mock-3.6.1, rerunfailures-9.1.1, requests-mock-1.9.3, hypothesis-6.13.7, Faker-8.4.0, cov-2.12.1
    collecting ... collected 4 items
    
    test_one.py::test_d PASSED
    test_one.py::test_c PASSED
    test_one.py::test_a PASSED
    test_one.py::test_b PASSED
    
    ============================== 4 passed in 0.11s ===============================
    _________________________________________________________ test_test_functions_reordered_when_randomness_in_module __________________________________________________________
    
    ourtestdir = <Testdir local('/tmp/pytest-of-tkloczko/pytest-18/test_test_functions_reordered_when_randomness_in_module0')>
    
        def test_test_functions_reordered_when_randomness_in_module(ourtestdir):
            ourtestdir.makepyfile(
                test_one="""
                import random
                import time
    
                random.seed(time.time() * 100)
    
                def test_a():
                    pass
    
                def test_b():
                    pass
    
                def test_c():
                    pass
    
                def test_d():
                    pass
                """
            )
            args = ["-v", "--randomly-seed=15"]
    
            out = ourtestdir.runpytest(*args)
    
            out.assert_outcomes(passed=4, failed=0)
    >       assert out.outlines[8:12] == [
                "test_one.py::test_d PASSED",
                "test_one.py::test_c PASSED",
                "test_one.py::test_a PASSED",
                "test_one.py::test_b PASSED",
            ]
    E       AssertionError: assert ['', 'test_on...est_a PASSED'] == ['test_one.py...est_b PASSED']
    E         At index 0 diff: '' != 'test_one.py::test_d PASSED'
    E         Use -v to get the full diff
    
    /home/tkloczko/rpmbuild/BUILD/pytest-randomly-3.8.0/tests/test_pytest_randomly.py:395: AssertionError
    --------------------------------------------------------------------------- Captured stdout call ---------------------------------------------------------------------------
    ============================= test session starts ==============================
    platform linux -- Python 3.8.9, pytest-6.2.4, py-1.10.0, pluggy-0.13.1 -- /usr/bin/python3
    cachedir: .pytest_cache
    Using --randomly-seed=15
    hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/tkloczko/rpmbuild/BUILD/pytest-randomly-3.8.0/.hypothesis/examples')
    rootdir: /tmp/pytest-of-tkloczko/pytest-18/test_test_functions_reordered_when_randomness_in_module0, configfile: pytest.ini
    plugins: randomly-3.8.0, forked-1.3.0, shutil-1.7.0, virtualenv-1.7.0, expect-1.1.0, httpbin-1.0.0, xdist-2.2.1, flake8-1.0.7, timeout-1.4.2, betamax-0.8.1, pyfakefs-4.4.0, freezegun-0.4.2, cases-3.4.6, case-1.5.3, isort-1.3.0, aspectlib-1.5.2, asyncio-0.15.1, toolbox-0.5, xprocess-0.17.1, flaky-3.7.0, aiohttp-0.3.0, checkdocs-2.7.0, mock-3.6.1, rerunfailures-9.1.1, requests-mock-1.9.3, hypothesis-6.13.7, Faker-8.4.0, cov-2.12.1
    collecting ... collected 4 items
    
    test_one.py::test_d PASSED
    test_one.py::test_c PASSED
    test_one.py::test_a PASSED
    test_one.py::test_b PASSED
    
    ============================== 4 passed in 0.11s ===============================
    _________________________________________________________________________ test_doctests_reordered __________________________________________________________________________
    
    ourtestdir = <Testdir local('/tmp/pytest-of-tkloczko/pytest-18/test_doctests_reordered0')>
    
        def test_doctests_reordered(ourtestdir):
            ourtestdir.makepyfile(
                test_one="""
                def foo():
                    '''
                    >>> foo()
                    9001
                    '''
                    return 9001
    
                def bar():
                    '''
                    >>> bar()
                    9002
                    '''
                    return 9002
                """
            )
            args = ["-v", "--doctest-modules", "--randomly-seed=5"]
    
            out = ourtestdir.runpytest(*args)
            out.assert_outcomes(passed=2)
    >       assert out.outlines[8:10] == [
                "test_one.py::test_one.bar PASSED",
                "test_one.py::test_one.foo PASSED",
            ]
    E       AssertionError: assert ['', 'test_on...e.bar PASSED'] == ['test_one.py...e.foo PASSED']
    E         At index 0 diff: '' != 'test_one.py::test_one.bar PASSED'
    E         Use -v to get the full diff
    
    /home/tkloczko/rpmbuild/BUILD/pytest-randomly-3.8.0/tests/test_pytest_randomly.py:425: AssertionError
    --------------------------------------------------------------------------- Captured stdout call ---------------------------------------------------------------------------
    ============================= test session starts ==============================
    platform linux -- Python 3.8.9, pytest-6.2.4, py-1.10.0, pluggy-0.13.1 -- /usr/bin/python3
    cachedir: .pytest_cache
    Using --randomly-seed=5
    hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/tkloczko/rpmbuild/BUILD/pytest-randomly-3.8.0/.hypothesis/examples')
    rootdir: /tmp/pytest-of-tkloczko/pytest-18/test_doctests_reordered0, configfile: pytest.ini
    plugins: randomly-3.8.0, forked-1.3.0, shutil-1.7.0, virtualenv-1.7.0, expect-1.1.0, httpbin-1.0.0, xdist-2.2.1, flake8-1.0.7, timeout-1.4.2, betamax-0.8.1, pyfakefs-4.4.0, freezegun-0.4.2, cases-3.4.6, case-1.5.3, isort-1.3.0, aspectlib-1.5.2, asyncio-0.15.1, toolbox-0.5, xprocess-0.17.1, flaky-3.7.0, aiohttp-0.3.0, checkdocs-2.7.0, mock-3.6.1, rerunfailures-9.1.1, requests-mock-1.9.3, hypothesis-6.13.7, Faker-8.4.0, cov-2.12.1
    collecting ... collected 2 items
    
    test_one.py::test_one.bar PASSED
    test_one.py::test_one.foo PASSED
    
    ============================== 2 passed in 0.11s ===============================
    ___________________________________________________________________ test_doctests_in_txt_files_reordered ___________________________________________________________________
    
    ourtestdir = <Testdir local('/tmp/pytest-of-tkloczko/pytest-18/test_doctests_in_txt_files_reordered0')>
    
        def test_doctests_in_txt_files_reordered(ourtestdir):
            ourtestdir.tmpdir.join("test.txt").write(
                """\
                >>> 2 + 2
                4
                """
            )
            ourtestdir.tmpdir.join("test2.txt").write(
                """\
                >>> 2 - 2
                0
                """
            )
            args = ["-v", "--randomly-seed=1"]
    
            out = ourtestdir.runpytest(*args)
            out.assert_outcomes(passed=2)
    >       assert out.outlines[8:10] == [
                "test2.txt::test2.txt PASSED",
                "test.txt::test.txt PASSED",
            ]
    E       AssertionError: assert ['', 'test2.t...2.txt PASSED'] == ['test2.txt::...t.txt PASSED']
    E         At index 0 diff: '' != 'test2.txt::test2.txt PASSED'
    E         Use -v to get the full diff
    
    /home/tkloczko/rpmbuild/BUILD/pytest-randomly-3.8.0/tests/test_pytest_randomly.py:498: AssertionError
    --------------------------------------------------------------------------- Captured stdout call ---------------------------------------------------------------------------
    ============================= test session starts ==============================
    platform linux -- Python 3.8.9, pytest-6.2.4, py-1.10.0, pluggy-0.13.1 -- /usr/bin/python3
    cachedir: .pytest_cache
    Using --randomly-seed=1
    hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/tkloczko/rpmbuild/BUILD/pytest-randomly-3.8.0/.hypothesis/examples')
    rootdir: /tmp/pytest-of-tkloczko/pytest-18/test_doctests_in_txt_files_reordered0, configfile: pytest.ini
    plugins: randomly-3.8.0, forked-1.3.0, shutil-1.7.0, virtualenv-1.7.0, expect-1.1.0, httpbin-1.0.0, xdist-2.2.1, flake8-1.0.7, timeout-1.4.2, betamax-0.8.1, pyfakefs-4.4.0, freezegun-0.4.2, cases-3.4.6, case-1.5.3, isort-1.3.0, aspectlib-1.5.2, asyncio-0.15.1, toolbox-0.5, xprocess-0.17.1, flaky-3.7.0, aiohttp-0.3.0, checkdocs-2.7.0, mock-3.6.1, rerunfailures-9.1.1, requests-mock-1.9.3, hypothesis-6.13.7, Faker-8.4.0, cov-2.12.1
    collecting ... collected 2 items
    
    test2.txt::test2.txt PASSED
    test.txt::test.txt PASSED
    
    ============================== 2 passed in 0.10s ===============================
    ========================================================================= short test summary info ==========================================================================
    FAILED tests/test_pytest_randomly.py::test_files_reordered - AssertionError: assert ['', 'test_d....st_it PASSED'] == ['test_d.py::...st_it PASSED']
    FAILED tests/test_pytest_randomly.py::test_files_reordered_when_seed_not_reset - AssertionError: assert ['', 'test_d....st_it PASSED'] == ['test_d.py::...st_it PASSED']
    FAILED tests/test_pytest_randomly.py::test_classes_reordered - AssertionError: assert ['', 'test_on...est_a PASSED'] == ['test_one.py...est_b PASSED']
    FAILED tests/test_pytest_randomly.py::test_class_test_methods_reordered - AssertionError: assert ['', 'test_on...est_a PASSED'] == ['test_one.py...est_b PASSED']
    FAILED tests/test_pytest_randomly.py::test_test_functions_reordered - AssertionError: assert ['', 'test_on...est_a PASSED'] == ['test_one.py...est_b PASSED']
    FAILED tests/test_pytest_randomly.py::test_test_functions_reordered_when_randomness_in_module - AssertionError: assert ['', 'test_on...est_a PASSED'] == ['test_one.py......
    FAILED tests/test_pytest_randomly.py::test_doctests_reordered - AssertionError: assert ['', 'test_on...e.bar PASSED'] == ['test_one.py...e.foo PASSED']
    FAILED tests/test_pytest_randomly.py::test_doctests_in_txt_files_reordered - AssertionError: assert ['', 'test2.t...2.txt PASSED'] == ['test2.txt::...t.txt PASSED']
    ====================================================================== 8 failed, 27 passed in 38.43s =======================================================================
    
    opened by kloczek 7
  • Consider moving to the pytest-dev organization

    Consider moving to the pytest-dev organization

    Hi!

    Just stumbled on this plugin! It looks like a much better version (and maintained!) of the honorable https://github.com/klrmn/pytest-random.

    Would you like to consider moving it under the pytest-dev organization for more visibility? You can read more about this here. 👍

    opened by nicoddemus 7
  • Provide reverse order

    Provide reverse order

    As a developer, I want to execute the tests in reverse order to reveal test dependencies.

    Given that the natural order of execution is O = {t1, ..., tn}, maybe provide a command-line switch that runs all tests in reverse order, reverse(O) = {tn, ..., t1}.
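    Such a switch can be sketched as a local `conftest.py` plugin. This is a minimal illustration, not pytest-reverse's actual implementation, and the flag name `--reverse-order` is hypothetical:

    ```python
    # conftest.py -- minimal sketch of a reverse-order switch (hypothetical
    # flag name; the published pytest-reverse plugin defines its own option).

    def pytest_addoption(parser):
        parser.addoption(
            "--reverse-order",
            action="store_true",
            default=False,
            help="Execute collected tests in reverse order: reverse(O) = {tn, ..., t1}",
        )


    def pytest_collection_modifyitems(config, items):
        # Hooks must mutate the collected `items` list in place rather than
        # rebind it, so reverse() is used instead of items = items[::-1].
        if config.getoption("--reverse-order"):
            items.reverse()
    ```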

    Motivation: An empirical study [0] on the test independence assumption (i.e., the assumption that the result of a test execution does not depend on other test executions) shows that the reverse strategy can already reveal a large share of test dependencies. Compared to the best detection strategy, a reverse order can reveal more than 70% of all cases for manually created tests and more than 60% of all cases for generated tests.

    The authors of the study ultimately suggest using the randomized strategy. However, that suggestion has limitations in practice. The authors evaluated 10 reruns of random-order executions with different seeds. This can lead to a 10x increase in test costs (such as waiting times) and may not be practical. As an alternative to 10 reruns in a row, we could execute the tests in random order only once per commit and observe over time; after every 10 commits, we reach the same threshold of 10 random executions. However, in this case, the commit that introduces a flawed test may differ from the commit at which the test fails, which makes root-cause detection more expensive.

    Running the tests in reverse order is a simple and effective approach to detect simple order issues.

    (I am also unable to create a PR for this issue because I do not fully understand the logic of the conditionals in the pytest_collection_modifyitems function. It would be helpful for a reader to have some comments on why the logic flow is organized in the current way.)

    [0] Sai Zhang, Darioush Jalali, Jochen Wuttke, Kıvanç Muşlu, Wing Lam, Michael D. Ernst, and David Notkin. 2014. Empirically revisiting the test independence assumption. In Proceedings of the 2014 International Symposium on Software Testing and Analysis (ISSTA 2014). Association for Computing Machinery, New York, NY, USA, 385–396. https://doi.org/10.1145/2610384.2610404

    opened by thbde 6
Owner: pytest-dev
pytest plugin providing a function to check if pytest is running.

pytest-is-running pytest plugin providing a function to check if pytest is running. Installation Install with: python -m pip install pytest-is-running

Adam Johnson 21 Nov 1, 2022
Pytest-typechecker - Pytest plugin to test how type checkers respond to code

pytest-typechecker this is a plugin for pytest that allows you to create tests t

vivax 2 Aug 20, 2022
pytest splinter and selenium integration for anyone interested in browser interaction in tests

Splinter plugin for the pytest runner Install pytest-splinter pip install pytest-splinter Features The plugin provides a set of fixtures to use splin

pytest-dev 238 Nov 14, 2022
ApiPy was created for api testing with Python pytest framework which has also requests, assertpy and pytest-html-reporter libraries.

ApiPy was created for api testing with Python pytest framework which has also requests, assertpy and pytest-html-reporter libraries. With this f

Mustafa 1 Jul 11, 2022
a wrapper around pytest for executing tests to look for test flakiness and runtime regression

bubblewrap a wrapper around pytest for assessing flakiness and runtime regressions a cs implementations practice project How to Run: First, install de

Anna Nagy 1 Aug 5, 2021
The pytest framework makes it easy to write small tests, yet scales to support complex functional testing

The pytest framework makes it easy to write small tests, yet scales to support complex functional testing for applications and libraries. An example o

pytest-dev 9.6k Jan 2, 2023
Selects tests affected by changed files. Continous test runner when used with pytest-watch.

This is a pytest plug-in which automatically selects and re-executes only tests affected by recent changes. How is this possible in dynamic language l

Tibor Arpas 614 Dec 30, 2022
Playwright Python tool practice pytest pytest-bdd screen-play page-object allure cucumber-report

pytest-ui-automatic Playwright Python tool practice pytest pytest-bdd screen-play page-object allure cucumber-report How to run Run tests execute_test

moyu6027 11 Nov 8, 2022
Pytest-rich - Pytest + rich integration (proof of concept)

pytest-rich Leverage rich for richer test session output. This plugin is not pub

Bruno Oliveira 170 Dec 2, 2022
A command-line tool and Python library and Pytest plugin for automated testing of RESTful APIs, with a simple, concise and flexible YAML-based syntax

1.0 Release See here for details about breaking changes with the upcoming 1.0 release: https://github.com/taverntesting/tavern/issues/495 Easier API t

null 909 Dec 15, 2022
pytest plugin for distributed testing and loop-on-failures testing modes.

xdist: pytest distributed testing plugin The pytest-xdist plugin extends pytest with some unique test execution modes: test run parallelization: if yo

pytest-dev 1.1k Dec 30, 2022
a plugin for py.test that changes the default look and feel of py.test (e.g. progressbar, show tests that fail instantly)

pytest-sugar pytest-sugar is a plugin for pytest that shows failures and errors instantly and shows a progress bar. Requirements You will need the fol

Teemu 963 Dec 28, 2022
pytest plugin for manipulating test data directories and files

pytest-datadir pytest plugin for manipulating test data directories and files. Usage pytest-datadir will look up for a directory with the name of your

Gabriel Reis 191 Dec 21, 2022
pytest plugin that let you automate actions and assertions with test metrics reporting executing plain YAML files

pytest-play pytest-play is a codeless, generic, pluggable and extensible automation tool, not necessarily test automation only, based on the fantastic

pytest-dev 67 Dec 1, 2022
A Django plugin for pytest.

Welcome to pytest-django! pytest-django allows you to test your Django project/applications with the pytest testing tool. Quick start / tutorial Chang

pytest-dev 1.1k Dec 31, 2022
Coverage plugin for pytest.

Overview docs tests package This plugin produces coverage reports. Compared to just using coverage run this plugin does some extras: Subprocess suppor

pytest-dev 1.4k Dec 29, 2022
Plugin for generating HTML reports for pytest results

pytest-html pytest-html is a plugin for pytest that generates an HTML report for test results. Resources Documentation Release Notes Issue Tracker Code

pytest-dev 548 Dec 28, 2022
Mypy static type checker plugin for Pytest

pytest-mypy Mypy static type checker plugin for pytest Features Runs the mypy static type checker on your source files as part of your pytest test run

Dan Bader 218 Jan 3, 2023
A rewrite of Python's builtin doctest module (with pytest plugin integration) but without all the weirdness

The xdoctest package is a re-write of Python's builtin doctest module. It replaces the old regex-based parser with a new abstract-syntax-tree based pa

Jon Crall 174 Dec 16, 2022