Coverage plugin for pytest.

Overview

This plugin produces coverage reports. Compared to just using coverage run, this plugin does some extras:

  • Subprocess support: you can fork or run stuff in a subprocess and it will get covered without any fuss.
  • Xdist support: you can use all of pytest-xdist's features and still get coverage.
  • Consistent pytest behavior. If you run coverage run -m pytest you will have a slightly different sys.path (CWD will be in it, unlike when running pytest).

All features offered by the coverage package should work, either through pytest-cov's command line options or through coverage's config file.
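
For example, the same measurement can be requested either on the command line or through coverage's own config file. A minimal sketch, assuming a package named myproj (the package name and the branch setting are only illustrative):

pytest --cov=myproj --cov-branch tests/

# equivalent .coveragerc
[run]
branch = True
source = myproj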

  • Free software: MIT license

Installation

Install with pip:

pip install pytest-cov

For distributed testing support install pytest-xdist:

pip install pytest-xdist
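
For example (the package name is only illustrative), a run that distributes tests across the available CPUs while still collecting coverage could look like:

pytest -n auto --cov=myproj tests/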

Upgrading from ancient pytest-cov

pytest-cov 2.0 uses a new .pth file (pytest-cov.pth). You may want to manually remove the older init_cov_core.pth from site-packages, as it is not removed automatically.

Uninstalling

Uninstall with pip:

pip uninstall pytest-cov

Under certain scenarios a stray .pth file may be left around in site-packages:

  • pytest-cov 2.0 may leave a pytest-cov.pth if you installed without wheels (easy_install, setup.py install, etc.).
  • pytest-cov 1.8 or older will leave an init_cov_core.pth.
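
If you need to locate site-packages for that manual cleanup, one possible approach (a sketch, assuming a standard CPython layout) is to ask sysconfig for the purelib path:

python -c "import sysconfig; print(sysconfig.get_paths()['purelib'])"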

Usage

pytest --cov=myproj tests/

This would produce a report like:

-------------------- coverage: ... ---------------------
Name                 Stmts   Miss  Cover
----------------------------------------
myproj/__init__          2      0   100%
myproj/myproj          257     13    94%
myproj/feature4286      94      7    92%
----------------------------------------
TOTAL                  353     20    94%

Documentation

http://pytest-cov.rtfd.org/

Coverage Data File

The data file is erased at the beginning of testing to ensure clean data for each test run. If you need to combine the coverage of several test runs you can use the --cov-append option to append this coverage data to coverage data from previous test runs.

The data file is left at the end of testing so that it is possible to use normal coverage tools to examine it.
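
A sketch of such a workflow (paths and the package name are illustrative): the first run starts from a clean data file, the second appends to it, and the leftover .coverage file can then be inspected with the regular coverage command line:

pytest --cov=myproj tests/unit
pytest --cov=myproj --cov-append tests/integration
coverage report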

Limitations

For distributed testing the workers must have the pytest-cov package installed. This is needed since the plugin must be registered through setuptools so that pytest can start it on the worker.

For subprocess measurement, environment variables must make it from the main process to the subprocess. The Python used by the subprocess must have pytest-cov installed. The subprocess must do normal site initialisation so that the environment variables can be detected and coverage started.
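
As an illustration only (the file and function names are made up), a test like the following would also have the child interpreter measured, because the child inherits the environment of the main process and coverage is started during the child's normal site initialisation:

# test_child.py -- hypothetical example
import subprocess
import sys


def test_child_process_is_covered():
    # The child inherits os.environ, so the variables set by pytest-cov
    # reach it and coverage starts when the interpreter initialises.
    result = subprocess.run(
        [sys.executable, "-c", "print('hello from the child')"],
        capture_output=True,
        text=True,
        check=True,
    )
    assert "hello" in result.stdout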

Acknowledgements

Whilst this plugin has been built fresh from the ground up, it has been influenced by the work done on pytest-coverage (Ross Lawley, James Mills, Holger Krekel) and nose-cover (Jason Pellerin), which are other coverage plugins.

Ned Batchelder for coverage and its ability to combine the coverage results of parallel runs.

Holger Krekel for pytest with its distributed testing support.

Jason Pellerin for nose.

Michael Foord for unittest2.

No doubt others have contributed to these tools as well.

Comments
  • No coverage data is collected for installed package when py.test is run in the project's root directory

    If I have a package (and pytest-cov) installed in a virtualenv, and I run pytest-cov from the project's root directory (which has the package folder), then no coverage data is collected.

    opened by jck 39
  • support for multiprocessing inside a subprocess?

    I'm working on https://github.com/Pylons/hupper. It's a forking process monitor. I've tried a lot of permutations of pytest-cov and haven't found any success.

    The process hierarchy is this:

    $ py.test --cov-report=term-missing --cov=hupper --cov=tests
    |- (a) tests/test_it.py (run by py.test process)
       |- (b) tests/myapp (run by test_it tests using subprocess)
          |- (c) tests/myapp (re-run by myapp via hupper using multiprocessing)
    

    Here is an example output, but first some observations:

    1. The tests didn't pass when I ran them with --cov flags enabled. This is because of the "No data was collected" messages in the worker process (this is process c in the above diagram). There should definitely be data collected if things were working correctly.
    2. Some coverage data is missing. For example, you can see that some code was executed in tests/myapp/__main__.py; however, lines 1-10 are missing, which are at module scope. What?
    3. After the run is over I have a .coverage.alai.27676.269523 file left over. Presumably this file is not getting combined with the rest of the output, but that's just a guess.
    4. Almost nothing in the hupper library itself is covered. However it's used in both processes b and c.
    $ env/bin/py.test --cov-report=term-missing --cov=hupper --cov=tests
    ============================= test session starts ==============================
    platform darwin -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0
    rootdir: /Users/michael/work/oss/hupper, inifile: setup.cfg
    plugins: cov-2.4.0
    collected 2 items
    
    tests/test_it.py FF
    
    ---------- coverage: platform darwin, python 3.5.2-final-0 -----------
    Name                      Stmts   Miss  Cover   Missing
    -------------------------------------------------------
    hupper/__init__.py            3      3     0%   4-9
    hupper/compat.py             39     39     0%   2-62
    hupper/interfaces.py         12     12     0%   1-44
    hupper/ipc.py                10      8    20%   1-60
    hupper/polling.py            43     43     0%   1-62
    hupper/reloader.py          133    131     2%   1-204, 241-266
    hupper/watchdog.py           26     26     0%   2-40
    hupper/winapi.py             86     86     0%   1-153
    hupper/worker.py            123     70    43%   1-24, 29, 34, 42, 55-60, 74-76, 80, 90-166, 175, 179, 186-187, 191-192, 196, 200-204
    tests/__init__.py             0      0   100%
    tests/myapp/__init__.py       0      0   100%
    tests/myapp/__main__.py      45     18    60%   1-10, 23, 28-32, 35-36, 39, 57-58
    tests/test_it.py             19      2    89%   13, 25
    tests/util.py                68      0   100%
    -------------------------------------------------------
    TOTAL                       607    438    28%
    
    
    =================================== FAILURES ===================================
    _____________________ test_myapp_reloads_when_touching_ini _____________________
    
        def test_myapp_reloads_when_touching_ini():
            with util.TestApp('myapp', ['--reload']) as app:
                app.wait_for_response(interval=1)
                util.touch('myapp/foo.ini')
                app.wait_for_response()
                app.stop()
    
                assert len(app.response) == 2
    >           assert app.stderr == ''
    E           assert 'Coverage.py ... collected.\n' == ''
    E             - Coverage.py warning: No data was collected.
    
    tests/test_it.py:12: AssertionError
    ___________________ test_myapp_reloads_when_touching_pyfile ____________________
    
        def test_myapp_reloads_when_touching_pyfile():
            with util.TestApp('myapp', ['--reload']) as app:
                app.wait_for_response(interval=1)
                util.touch('myapp/__main__.py')
                app.wait_for_response()
                app.stop()
    
                assert len(app.response) == 2
    >           assert app.stderr == ''
    E           assert 'Coverage.py ... collected.\n' == ''
    E             - Coverage.py warning: No data was collected.
    
    tests/test_it.py:24: AssertionError
    =========================== 2 failed in 6.61 seconds ===========================
    

    I'd really appreciate any insights here, as I have no experience getting coverage reports in a multiprocessing/subprocess environment, but as far as I can tell my setup follows the guidelines (each process has pytest-cov installed and environment variables are propagated).

    opened by mmerickel 31
  • pytest-cov==1.8.1 + pytest-xdist does not show coverage report

    The latest pytest-cov==1.8.1 together with pytest-xdist does not show the coverage report anymore.

    pytest-cov==1.6 works

    # pip freeze | grep pytest
    pytest==2.7.1
    pytest-cache==1.0
    pytest-cov==1.8.1
    pytest-pep8==1.0.6
    pytest-random==0.2
    pytest-xdist==1.12
    
    bug 
    opened by diefans 29
  • Send '.coverage' docker output to coveralls from travis

    The pytest-cov documentation states the following:

    These three report options output to files without showing anything on the terminal:

       py.test --cov-report html
               --cov-report xml
               --cov-report annotate
               --cov=myproj tests/
    

    The output location for each of these reports can be specified. The output location for the XML report is a file, whereas the output locations for the HTML and annotated source code reports are directories:

    py.test --cov-report html:cov_html
            --cov-report xml:cov.xml
            --cov-report annotate:cov_annotate
            --cov=myproj tests/
    

    The final report option can also suppress printing to the terminal:

    py.test --cov-report= --cov=myproj tests/
    

    This mode can be especially useful on continuous integration servers, where a coverage file is needed for subsequent processing, but no local report needs to be viewed. For example, tests run on Travis-CI could produce a .coverage file for use with Coveralls.

    But how do I send the redirected report generated by --cov-report to coveralls? I am currently using this framework within a series of docker containers, which my .travis.yml spins up for unit testing. This framework generates the coverage report, which is copied from a docker container to the host for Travis CI. Then my implementation of python-coveralls, used explicitly by Travis, is responsible for sending the coverage report to coveralls:

            pytest.main([
                '--cov', '.',
                '--cov-report', 'xml:/var/machine-learning/coverage.xml',
                'test/live_server'
            ])
    

    However, it seems the xml option is an invalid format for sending to coveralls.

    Note: this is the associated issue I'm working on corresponding to the above statement.

    opened by jeff1evesque 26
  • Using the plugin stops PyCharm from hitting breakpoints.

    System Details:

    python 3.5.1
    pytest 3.0.2
    pytest-cov 2.3.1
    PyCharm Community Edition 2016.2.2, Build #PC-162.1812.1, built on August 16, 2016
    JRE: 1.8.0_76-release-b216 x86
    JVM: OpenJDK Server VM by JetBrains s.r.o

    MCVE:

    main.py

    def foo():
        x = 10
        return x
    

    test.py

    import main
    
    def test_foo():
        assert main.foo() == 10
    

    Create a new project in PyCharm with those two files and create a new test config with the following options --cov=. --cov-report term-missing -rw -vv. Set a breakpoint on any line of code and hit debug.

    Expected result: The breakpoints work.

    Actual result: The breakpoints don't work.

    If you remove the options, it hits the breakpoints fine.

    opened by morganthrapp 25
  • Super-simple implementation of pytest contexts

    Hey there! Coverage.py 5.0a5 will have a method Coverage.switch_context to set the dynamic context externally. This is a simple proof-of-concept of using that method to set the pytest test id, phase, and status as the context.

    I haven't done anything about limiting this to Coverage 5.0a5, or writing tests yet.

    opened by nedbat 23
  • Pytest-cov leaves .coverage.hostname.number.number data files if running tests against multiprocessing.Pool

    Running pytest with coverage for tests against multiprocessing.Pool generates several data files, but not all of them are cleaned up afterwards. It looks similar to #100. Pool.join() doesn't seem to change anything. Example filename: .coverage.myhost.local.7582.948066

    Here's the repository to reproduce: https://github.com/manycoding/pytest_cov_pool_datafiles_250

    Your operating system name and version: Mac OS Mojave 10.14 (18A391)

    Any details about your local setup that might be helpful in troubleshooting:

    pipenv shell; pipenv install
    

    Pipfile

    pytest = "*"
    pytest-cov = "*"
    pytest-mock = "*"
    tox-pipenv = "*"
    pytest-pythonpath = "*"
    

    Detailed steps to reproduce the bug:

    pytest --cov=src --cov-report=term-missing  tests/test_cov_pool.py
    
    opened by manycoding 22
  • AttributeError: 'Function' object has no attribute 'get_marker'

    I just started getting the following error with pytest-cov v2.6.0, pytest v4.1.0 on Python 3.4.6, PyPy, and PyPy3

    self = <pytest_cov.plugin.CovPlugin object at 0x110f365c0>, item = <Function test_x>
    
        @compat.hookwrapper
        def pytest_runtest_call(self, item):
    >       if (item.get_marker('no_cover')
                    or 'no_cover' in getattr(item, 'fixturenames', ())):
    E               AttributeError: 'Function' object has no attribute 'get_marker'
    
    ../.pyenv/versions/3.4.9/envs/venv/lib/python3.4/site-packages/pytest_cov/plugin.py:289: AttributeError
    

    How to replicate

    • Create a virtualenv with Python 3.4.9

    • pip install pytest==4.1.0 pytest-cov==2.6.0

    • Create something.py with the following contents:

    def test_x():
        assert True
    
    • Run pytest --cov=something something.py

    Pytest output

    ========================================================== test session starts ===========================================================
    platform darwin -- Python 3.4.9, pytest-4.1.0, py-1.7.0, pluggy-0.8.0 -- /Users/ross/.pyenv/versions/3.4.9/envs/venv/bin/python
    cachedir: .pytest_cache
    rootdir: /Users/ross/temp, inifile:
    plugins: cov-2.6.0
    collected 1 item
    
    something.py::test_x FAILED                                                                                                        [100%]
    
    ================================================================ FAILURES ================================================================
    _________________________________________________________________ test_x _________________________________________________________________
    
    self = <pytest_cov.plugin.CovPlugin object at 0x110f365c0>, item = <Function test_x>
    
        @compat.hookwrapper
        def pytest_runtest_call(self, item):
    >       if (item.get_marker('no_cover')
                    or 'no_cover' in getattr(item, 'fixturenames', ())):
    E               AttributeError: 'Function' object has no attribute 'get_marker'
    
    ../.pyenv/versions/3.4.9/envs/venv/lib/python3.4/site-packages/pytest_cov/plugin.py:289: AttributeError
    
    ---------- coverage: platform darwin, python 3.4.9-final-0 -----------
    Name           Stmts   Miss  Cover
    ----------------------------------
    something.py       2      1    50%
    
    ======================================================== 1 failed in 0.07 seconds ========================================================
    
    opened by rossmacarthur 21
  • Incorrect coverage report

    Using py.test --cov-config .coveragerc --cov nengo -n 6 nengo, a lot of lines that should be hit get reported as missed (like class and function definitions in a module). This might be related to #19 as the project has a conftest file importing other modules from the project.

    Using coverage run --rcfile .coveragerc --source nengo -m py.test nengo instead, a correct coverage report is generated, but this command does not support xdist.

    opened by jgosmann 21
  • Windows: coverage through multiprocessing not caught correctly

    Hi ;)

    Recently I've been trying to get the test coverage to 100% on a Windows machine, but it doesn't want to cooperate. My code under test uses an event loop driven by a concurrent.futures.ProcessPoolExecutor. Coverage is 100% on a Linux machine, but not on Windows.

    PR: https://github.com/coala/coala/pull/4475
    Build on AppVeyor: https://ci.appveyor.com/project/coala/coala/build/1.0.7774
    Same build but on Linux (Travis): https://travis-ci.org/coala/coala/builds/254572218?utm_source=github_status&utm_medium=notification
    Relevant Python modules/components under test: https://github.com/coala/coala/tree/8f4113a2f471acf4c0121199c6ebb9b29dfa84ff/coalib/core, specifically

    • https://github.com/coala/coala/blob/8f4113a2f471acf4c0121199c6ebb9b29dfa84ff/coalib/core/FileBear.py
    • https://github.com/coala/coala/blob/8f4113a2f471acf4c0121199c6ebb9b29dfa84ff/coalib/core/ProjectBear.py

    The missing lines are the ones inside def execute_task.

    setup.cfg (and yes, coverage is enabled ;D): https://github.com/coala/coala/blob/8f4113a2f471acf4c0121199c6ebb9b29dfa84ff/setup.cfg#L32

    Exchanging the ProcessPoolExecutor with a ThreadPoolExecutor does the trick and produces full coverage. However, it would be nice to have everything tested with a ProcessPoolExecutor (especially due to possible pickling issues on Windows for subprocesses).

    Unproven assumption: I have the feeling that other parts of the code that only run through a subprocess are measured properly; only those two modules under test don't work.

    opened by Makman2 20
  • coverage is wrong when running with xdist's --boxed

    I need to run my tests with xdist's --boxed feature. There is a big difference in coverage compared to running unboxed:

    • unboxed: 93.48 %
    • boxed: 26.93 %

    35% of all modules are completely uncovered. I also noticed that, despite running the tests with py.test tests/ --boxed -d -n 8 --random, the uncovered modules are always the same.

    When I run one single test module in boxed mode, coverage complains that the module to cover was not imported:

    py.test tests/unit/security/test_features.py --boxed -d -n 8 --cov bm.security
    ===================================================================================================== test session starts =====================================================================================================
    platform linux2 -- Python 2.7.5 -- pytest-2.5.0
    Tests are shuffled using seed number 358684703390.
    plugins: random, bdd, cov, capturelog, ipdb, cache, pep8, greendots, xdist
    gw0 [49] / gw1 [49] / gw2 [49] / gw3 [49] / gw4 [49] / gw5 [49] / gw6 [49] / gw7 [49]
    scheduling tests via LoadScheduling
    ................................s..Coverage.py warning: Module bm.security was never imported.
    Coverage.py warning: No data was collected.
    .......Coverage.py warning: Module bm.security was never imported.
    Coverage.py warning: No data was collected.
    .Coverage.py warning: Module bm.security was never imported.
    Coverage.py warning: No data was collected.
    .Coverage.py warning: Module bm.security was never imported.
    Coverage.py warning: No data was collected.
    .Coverage.py warning: Module bm.security was never imported.
    Coverage.py warning: No data was collected.
    .Coverage.py warning: Module bm.security was never imported.
    Coverage.py warning: No data was collected.
    .Coverage.py warning: Module bm.security was never imported.
    Coverage.py warning: No data was collected.
    .Coverage.py warning: Module bm.security was never imported.
    Coverage.py warning: No data was collected.
    .
    --------------------------------------------------------------------------------------- coverage: platform linux2, python 2.7.5-final-0 ---------------------------------------------------------------------------------------
    Name    Stmts   Miss     Cover   Missing
    ----------------------------------------
    ============================================================================================ 48 passed, 1 skipped in 3.60 seconds =============================================================================================
    
    bug 
    opened by diefans 20
  • Is it possible to tie pytest-cov to a fixed version of `coverage`?

    Looks like we're always pulling the latest version of coverage.

    Recently it was upgraded to 7.0.0 and it breaks old code (I'm using an omit rule, e.g. something**, in .coveragerc).

    I could solve it by adding the coverage lib with a fixed version to my project. But is it possible to tie it to the major version number, to avoid breaking changes?
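
    For illustration only (the bound shown is just a hypothetical example), such a pin could look like this in a requirements file:

    coverage<7.0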

    Thanks

    opened by akivamu 1
  • Source full path

    Team,

    I'm trying to set up a Python build in a project which is 'governed' by Gradle. I'm also trying to use Gradle/Maven-conventional source paths for Python files (src/main/python, src/test/python).

    In general, everything works fine and as expected, with one caveat (for me, specifically). I'm using pytest with a pytest.ini where I set the pytest-cov configuration with the following:

    [pytest]
    norecursedirs = .gradle/** gradle/** build/** out/** **/src/main/python/**
    python_files = *_spec.py
    python_functions = feature_*
    pythonpath = src/main/python
    addopts = --cov=src/main/python --cov-config=.coveragerc --cov-report xml:build/test-results/python.xml --cov-report html:build/reports/tests/python src/test/python
    

    and corresponding .coveragerc:

    [run]
    omit = **/test/**
    source = src/test/python
    

    The problem I'm having is the path to the source files in the python.xml test coverage report. It seems to prefix the src/main/python with the output of the pwd command and I'm effectively getting stuff like

    <sources>
       <source>/home/jenkins/workspace/pipeline-name-here/application/src/main/python</source>
    </sources>
    

    where /home/jenkins/workspace/pipeline-name-here/application/ is that annoying prefix I need to get rid of.

    My Gradle folder structure (w/service folders omitted) is like

    |-- Git-name-of-the-project
       |--- ansible
         |--- ansible files
       |--- application
         |--- <temp build-related folders>
         |--- src
                 |--- main
                     |--- python
                         |--- python-file.py
         |--- gradlew <etc>
         |--- .coveragerc
         |--- pytest.ini 
    

    Any suggestions?

    opened by nskmda 1
  • Should html files be counted in coverage reports?

    Please go over all the sections and search https://pytest-cov.readthedocs.io/en/latest/ or https://coverage.readthedocs.io/en/latest/ before opening the issue.

    Summary

    I use both pytest-django and pytest-cov to test my Django project. The coverage report is listing out all .html files as having 0% test coverage. This substantially lowers the project's coverage. I expect coverage to list my Python files but not any of my HTML files.

    • Are there ways to write pytests to cover Python code in HTML files?
    • Is there a way to remove all .html files from my coverage report?

    Expected vs actual result

    Reproducer

    Versions

    Output of relevant packages pip list, python --version, pytest --version etc.

    Make sure you include complete output of tox if you use it (it will show versions of various things).

    Config

    Include your tox.ini, pytest.ini, .coveragerc, setup.cfg or any relevant configuration.

    Code

    Link to your repository, gist, pastebin or just paste raw code that illustrates the issue.

    If you paste raw code make sure you quote it, eg:

    def foobar():
        pass
    

    What has been tried to solve the problem

    You should outline the things you tried to solve the problem but didn't work.

    opened by cclauss 0
  • Tests fail with coverage 6.5.0

    Summary

    test_contexts fails with the latest release of coverage == 6.5.0.

    Versions

    A tox run is visible in my fork: https://github.com/danigm/pytest-cov/actions/runs/3563407946/jobs/5986157416

    This is the output:

    =================================== FAILURES ===================================
    ____________________________ test_contexts[nodist] _____________________________
    
    pytester = <Pytester PosixPath('/tmp/pytest-of-runner/pytest-0/test_contexts0')>
    testdir = <Testdir local('/tmp/pytest-of-runner/pytest-0/test_contexts0')>
    opts = ''
    
        @pytest.mark.skipif("coverage.version_info < (5, 0)")
        @xdist_params
        def test_contexts(pytester, testdir, opts):
            with open(os.path.join(os.path.dirname(__file__), "contextful.py")) as f:
                contextful_tests = f.read()
            script = testdir.makepyfile(contextful_tests)
            result = testdir.runpytest('-v',
                                       '--cov=%s' % script.dirpath(),
                                       '--cov-context=test',
                                       script,
                                       *opts.split()
                                       )
            assert result.ret == 0
            result.stdout.fnmatch_lines([
                'test_contexts* 100%*',
            ])
        
            data = coverage.CoverageData(".coverage")
            data.read()
    >       assert data.measured_contexts() == set(EXPECTED_CONTEXTS)
    E       AssertionError: assert {'',\n 'test_contexts.py::OldStyleTests::test_03|run',\n 'test_contexts.py::OldStyleTests::test_03|setup',\n 'test_contexts.py::OldStyleTests::test_03|teardown',\n 'test_contexts.py::OldStyleTests::test_04|run',\n 'test_contexts.py::OldStyleTests::test_04|setup',\n 'test_contexts.py::OldStyleTests::test_04|teardown',\n 'test_contexts.py::test_01|run',\n 'test_contexts.py::test_01|setup',\n 'test_contexts.py::test_01|teardown',\n 'test_contexts.py::test_02|run',\n 'test_contexts.py::test_02|setup',\n 'test_contexts.py::test_02|teardown',\n 'test_contexts.py::test_05|run',\n 'test_contexts.py::test_05|setup',\n 'test_contexts.py::test_05|teardown',\n 'test_contexts.py::test_06|run',\n 'test_contexts.py::test_06|setup',\n 'test_contexts.py::test_06|teardown',\n 'test_contexts.py::test_07|run',\n 'test_contexts.py::test_07|setup',\n 'test_contexts.py::test_07|teardown',\n 'test_contexts.py::test_08|run',\n 'test_contexts.py::test_08|setup',\n 'test_contexts.py::test_08|teardown',\n 'test_contexts.py::test_09[1]|run',\n 'test_contexts.py::test_09[1]|setup',\n 'test_contexts.py::test_09[1]|teardown',\n 'test_contexts.py::test_09[2]|run',\n 'test_contexts.py::test_09[2]|setup',\n 'test_contexts.py::test_09[2]|teardown',\n 'test_contexts.py::test_09[3]|run',\n 'test_contexts.py::test_09[3]|setup',\n 'test_contexts.py::test_09[3]|teardown',\n 'test_contexts.py::test_10|run',\n 'test_contexts.py::test_10|setup',\n 'test_contexts.py::test_10|teardown',\n 'test_contexts.py::test_11[1-101]|run',\n 'test_contexts.py::test_11[1-101]|setup',\n 'test_contexts.py::test_11[1-101]|teardown',\n 'test_contexts.py::test_11[2-202]|run',\n 'test_contexts.py::test_11[2-202]|setup',\n 'test_contexts.py::test_11[2-202]|teardown',\n 'test_contexts.py::test_12[one]|run',\n 'test_contexts.py::test_12[one]|setup',\n 'test_contexts.py::test_12[one]|teardown',\n 'test_contexts.py::test_12[two]|run',\n 'test_contexts.py::test_12[two]|setup',\n 'test_contexts.py::test_12[two]|teardown',\n 'test_contexts.py::test_13[3-1]|run',\n 'test_contexts.py::test_13[3-1]|setup',\n 'test_contexts.py::test_13[3-1]|teardown',\n 'test_contexts.py::test_13[3-2]|run',\n 'test_contexts.py::test_13[3-2]|setup',\n 'test_contexts.py::test_13[3-2]|teardown',\n 'test_contexts.py::test_13[4-1]|run',\n 'test_contexts.py::test_13[4-1]|setup',\n 'test_contexts.py::test_13[4-1]|teardown',\n 'test_contexts.py::test_13[4-2]|run',\n 'test_contexts.py::test_13[4-2]|setup',\n 'test_contexts.py::test_13[4-2]|teardown'} == {'',\n 'test_contexts.py::OldStyleTests::test_03|run',\n 'test_contexts.py::OldStyleTests::test_03|setup',\n 'test_contexts.py::OldStyleTests::test_04|run',\n 'test_contexts.py::OldStyleTests::test_04|teardown',\n 'test_contexts.py::test_01|run',\n 'test_contexts.py::test_02|run',\n 'test_contexts.py::test_05|run',\n 'test_contexts.py::test_05|setup',\n 'test_contexts.py::test_06|run',\n 'test_contexts.py::test_06|setup',\n 'test_contexts.py::test_07|run',\n 'test_contexts.py::test_07|setup',\n 'test_contexts.py::test_08|run',\n 'test_contexts.py::test_09[1]|run',\n 'test_contexts.py::test_09[1]|setup',\n 'test_contexts.py::test_09[2]|run',\n 'test_contexts.py::test_09[2]|setup',\n 'test_contexts.py::test_09[3]|run',\n 'test_contexts.py::test_09[3]|setup',\n 'test_contexts.py::test_10|run',\n 'test_contexts.py::test_11[1-101]|run',\n 'test_contexts.py::test_11[2-202]|run',\n 'test_contexts.py::test_12[one]|run',\n 'test_contexts.py::test_12[two]|run',\n 'test_contexts.py::test_13[3-1]|run',\n 
'test_contexts.py::test_13[3-2]|run',\n 'test_contexts.py::test_13[4-1]|run',\n 'test_contexts.py::test_13[4-2]|run'}
    E         Extra items in the left set:
    E         'test_contexts.py::test_02|setup'
    E         'test_contexts.py::test_09[1]|teardown'
    E         'test_contexts.py::test_12[two]|setup'
    E         'test_contexts.py::test_11[1-101]|teardown'
    E         'test_contexts.py::test_13[4-2]|teardown'
    E         'test_contexts.py::test_08|setup'
    E         'test_contexts.py::test_13[4-1]|setup'
    E         'test_contexts.py::test_06|teardown'
    E         'test_contexts.py::test_11[2-202]|teardown'
    E         'test_contexts.py::test_13[3-2]|teardown'
    E         'test_contexts.py::test_01|teardown'
    E         'test_contexts.py::test_10|teardown'
    E         'test_contexts.py::test_12[two]|teardown'
    E         'test_contexts.py::test_09[3]|teardown'
    E         'test_contexts.py::OldStyleTests::test_03|teardown'
    E         'test_contexts.py::test_12[one]|setup'
    E         'test_contexts.py::test_11[1-101]|setup'
    E         'test_contexts.py::test_01|setup'
    E         'test_contexts.py::test_13[3-1]|teardown'
    E         'test_contexts.py::test_13[4-1]|teardown'
    E         'test_contexts.py::OldStyleTests::test_04|setup'
    E         'test_contexts.py::test_13[3-2]|setup'
    E         'test_contexts.py::test_09[2]|teardown'
    E         'test_contexts.py::test_10|setup'
    E         'test_contexts.py::test_07|teardown'
    E         'test_contexts.py::test_13[3-1]|setup'
    E         'test_contexts.py::test_11[2-202]|setup'
    E         'test_contexts.py::test_05|teardown'
    E         'test_contexts.py::test_08|teardown'
    E         'test_contexts.py::test_12[one]|teardown'
    E         'test_contexts.py::test_13[4-2]|setup'
    E         'test_contexts.py::test_02|teardown'
    E         Full diff:
    E           {
    E            '',
    E            'test_contexts.py::OldStyleTests::test_03|run',
    E            'test_contexts.py::OldStyleTests::test_03|setup',
    E         +  'test_contexts.py::OldStyleTests::test_03|teardown',
    E            'test_contexts.py::OldStyleTests::test_04|run',
    E         +  'test_contexts.py::OldStyleTests::test_04|setup',
    E            'test_contexts.py::OldStyleTests::test_04|teardown',
    E            'test_contexts.py::test_01|run',
    E         +  'test_contexts.py::test_01|setup',
    E         +  'test_contexts.py::test_01|teardown',
    E            'test_contexts.py::test_02|run',
    E         +  'test_contexts.py::test_02|setup',
    E         +  'test_contexts.py::test_02|teardown',
    E            'test_contexts.py::test_05|run',
    E            'test_contexts.py::test_05|setup',
    E         +  'test_contexts.py::test_05|teardown',
    E            'test_contexts.py::test_06|run',
    E            'test_contexts.py::test_06|setup',
    E         +  'test_contexts.py::test_06|teardown',
    E            'test_contexts.py::test_07|run',
    E            'test_contexts.py::test_07|setup',
    E         +  'test_contexts.py::test_07|teardown',
    E            'test_contexts.py::test_08|run',
    E         +  'test_contexts.py::test_08|setup',
    E         +  'test_contexts.py::test_08|teardown',
    E            'test_contexts.py::test_09[1]|run',
    E            'test_contexts.py::test_09[1]|setup',
    E         +  'test_contexts.py::test_09[1]|teardown',
    E            'test_contexts.py::test_09[2]|run',
    E            'test_contexts.py::test_09[2]|setup',
    E         +  'test_contexts.py::test_09[2]|teardown',
    E            'test_contexts.py::test_09[3]|run',
    E            'test_contexts.py::test_09[3]|setup',
    E         +  'test_contexts.py::test_09[3]|teardown',
    E            'test_contexts.py::test_10|run',
    E         +  'test_contexts.py::test_10|setup',
    E         +  'test_contexts.py::test_10|teardown',
    E            'test_contexts.py::test_11[1-101]|run',
    E         +  'test_contexts.py::test_11[1-101]|setup',
    E         +  'test_contexts.py::test_11[1-101]|teardown',
    E            'test_contexts.py::test_11[2-202]|run',
    E         +  'test_contexts.py::test_11[2-202]|setup',
    E         +  'test_contexts.py::test_11[2-202]|teardown',
    E            'test_contexts.py::test_12[one]|run',
    E         +  'test_contexts.py::test_12[one]|setup',
    E         +  'test_contexts.py::test_12[one]|teardown',
    E            'test_contexts.py::test_12[two]|run',
    E         +  'test_contexts.py::test_12[two]|setup',
    E         +  'test_contexts.py::test_12[two]|teardown',
    E            'test_contexts.py::test_13[3-1]|run',
    E         +  'test_contexts.py::test_13[3-1]|setup',
    E         +  'test_contexts.py::test_13[3-1]|teardown',
    E            'test_contexts.py::test_13[3-2]|run',
    E         +  'test_contexts.py::test_13[3-2]|setup',
    E         +  'test_contexts.py::test_13[3-2]|teardown',
    E            'test_contexts.py::test_13[4-1]|run',
    E         +  'test_contexts.py::test_13[4-1]|setup',
    E         +  'test_contexts.py::test_13[4-1]|teardown',
    E            'test_contexts.py::test_13[4-2]|run',
    E         +  'test_contexts.py::test_13[4-2]|setup',
    E         +  'test_contexts.py::test_13[4-2]|teardown',
    E           }
    
    /home/runner/work/pytest-cov/pytest-cov/tests/test_pytest_cov.py:1937: AssertionError
    ----------------------------- Captured stdout call -----------------------------
    running: /home/runner/work/pytest-cov/pytest-cov/.tox/py310-pytest71-xdist250-coverage65/bin/python -mpytest --basetemp=/tmp/pytest-of-runner/pytest-0/test_contexts0/runpytest-0 -v --cov=/tmp/pytest-of-runner/pytest-0/test_contexts0 --cov-context=test /tmp/pytest-of-runner/pytest-0/test_contexts0/test_contexts.py --basetemp=/tmp/pytest-of-runner/pytest-0/basetemp
         in: /tmp/pytest-of-runner/pytest-0/test_contexts0
    ============================= test session starts ==============================
    platform linux -- Python 3.10.8, pytest-7.1.2, pluggy-1.0.0 -- /home/runner/work/pytest-cov/pytest-cov/.tox/py310-pytest71-xdist250-coverage65/bin/python
    cachedir: .pytest_cache
    rootdir: /tmp/pytest-of-runner/pytest-0/test_contexts0
    plugins: forked-1.4.0, xdist-2.5.0, cov-4.0.0
    collecting ... collected 20 items
    
    test_contexts.py::test_01 PASSED                                         [  5%]
    test_contexts.py::test_02 PASSED                                         [ 10%]
    test_contexts.py::OldStyleTests::test_03 PASSED                          [ 15%]
    test_contexts.py::OldStyleTests::test_04 PASSED                          [ 20%]
    test_contexts.py::test_05 PASSED                                         [ 25%]
    test_contexts.py::test_06 PASSED                                         [ 30%]
    test_contexts.py::test_07 PASSED                                         [ 35%]
    test_contexts.py::test_08 PASSED                                         [ 40%]
    test_contexts.py::test_09[1] PASSED                                      [ 45%]
    test_contexts.py::test_09[2] PASSED                                      [ 50%]
    test_contexts.py::test_09[3] PASSED                                      [ 55%]
    test_contexts.py::test_10 PASSED                                         [ 60%]
    test_contexts.py::test_11[1-101] PASSED                                  [ 65%]
    test_contexts.py::test_11[2-202] PASSED                                  [ 70%]
    test_contexts.py::test_12[one] PASSED                                    [ 75%]
    test_contexts.py::test_12[two] PASSED                                    [ 80%]
    test_contexts.py::test_13[3-1] PASSED                                    [ 85%]
    test_contexts.py::test_13[3-2] PASSED                                    [ 90%]
    test_contexts.py::test_13[4-1] PASSED                                    [ 95%]
    test_contexts.py::test_13[4-2] PASSED                                    [100%]
    
    ---------- coverage: platform linux, python 3.10.8-final-0 -----------
    Name               Stmts   Miss  Cover
    --------------------------------------
    test_contexts.py      58      0   100%
    --------------------------------------
    TOTAL                 58      0   100%
    
    
    ============================== 20 passed in 0.14s ==============================
    _____________________________ test_contexts[xdist] _____________________________
    
    pytester = <Pytester PosixPath('/tmp/pytest-of-runner/pytest-0/test_contexts1')>
    testdir = <Testdir local('/tmp/pytest-of-runner/pytest-0/test_contexts1')>
    opts = '-n 1'
    
        @pytest.mark.skipif("coverage.version_info < (5, 0)")
        @xdist_params
        def test_contexts(pytester, testdir, opts):
            with open(os.path.join(os.path.dirname(__file__), "contextful.py")) as f:
                contextful_tests = f.read()
            script = testdir.makepyfile(contextful_tests)
            result = testdir.runpytest('-v',
                                       '--cov=%s' % script.dirpath(),
                                       '--cov-context=test',
                                       script,
                                       *opts.split()
                                       )
            assert result.ret == 0
            result.stdout.fnmatch_lines([
                'test_contexts* 100%*',
            ])
        
            data = coverage.CoverageData(".coverage")
            data.read()
    >       assert data.measured_contexts() == set(EXPECTED_CONTEXTS)
    E       AssertionError: assert {'',\n 'test_contexts.py::OldStyleTests::test_03|run',\n 'test_contexts.py::OldStyleTests::test_03|setup',\n 'test_contexts.py::OldStyleTests::test_03|teardown',\n 'test_contexts.py::OldStyleTests::test_04|run',\n 'test_contexts.py::OldStyleTests::test_04|setup',\n 'test_contexts.py::OldStyleTests::test_04|teardown',\n 'test_contexts.py::test_01|run',\n 'test_contexts.py::test_01|setup',\n 'test_contexts.py::test_01|teardown',\n 'test_contexts.py::test_02|run',\n 'test_contexts.py::test_02|setup',\n 'test_contexts.py::test_02|teardown',\n 'test_contexts.py::test_05|run',\n 'test_contexts.py::test_05|setup',\n 'test_contexts.py::test_05|teardown',\n 'test_contexts.py::test_06|run',\n 'test_contexts.py::test_06|setup',\n 'test_contexts.py::test_06|teardown',\n 'test_contexts.py::test_07|run',\n 'test_contexts.py::test_07|setup',\n 'test_contexts.py::test_07|teardown',\n 'test_contexts.py::test_08|run',\n 'test_contexts.py::test_08|setup',\n 'test_contexts.py::test_08|teardown',\n 'test_contexts.py::test_09[1]|run',\n 'test_contexts.py::test_09[1]|setup',\n 'test_contexts.py::test_09[1]|teardown',\n 'test_contexts.py::test_09[2]|run',\n 'test_contexts.py::test_09[2]|setup',\n 'test_contexts.py::test_09[2]|teardown',\n 'test_contexts.py::test_09[3]|run',\n 'test_contexts.py::test_09[3]|setup',\n 'test_contexts.py::test_09[3]|teardown',\n 'test_contexts.py::test_10|run',\n 'test_contexts.py::test_10|setup',\n 'test_contexts.py::test_10|teardown',\n 'test_contexts.py::test_11[1-101]|run',\n 'test_contexts.py::test_11[1-101]|setup',\n 'test_contexts.py::test_11[1-101]|teardown',\n 'test_contexts.py::test_11[2-202]|run',\n 'test_contexts.py::test_11[2-202]|setup',\n 'test_contexts.py::test_11[2-202]|teardown',\n 'test_contexts.py::test_12[one]|run',\n 'test_contexts.py::test_12[one]|setup',\n 'test_contexts.py::test_12[one]|teardown',\n 'test_contexts.py::test_12[two]|run',\n 'test_contexts.py::test_12[two]|setup',\n 'test_contexts.py::test_12[two]|teardown',\n 'test_contexts.py::test_13[3-1]|run',\n 'test_contexts.py::test_13[3-1]|setup',\n 'test_contexts.py::test_13[3-1]|teardown',\n 'test_contexts.py::test_13[3-2]|run',\n 'test_contexts.py::test_13[3-2]|setup',\n 'test_contexts.py::test_13[3-2]|teardown',\n 'test_contexts.py::test_13[4-1]|run',\n 'test_contexts.py::test_13[4-1]|setup',\n 'test_contexts.py::test_13[4-1]|teardown',\n 'test_contexts.py::test_13[4-2]|run',\n 'test_contexts.py::test_13[4-2]|setup',\n 'test_contexts.py::test_13[4-2]|teardown'} == {'',\n 'test_contexts.py::OldStyleTests::test_03|run',\n 'test_contexts.py::OldStyleTests::test_03|setup',\n 'test_contexts.py::OldStyleTests::test_04|run',\n 'test_contexts.py::OldStyleTests::test_04|teardown',\n 'test_contexts.py::test_01|run',\n 'test_contexts.py::test_02|run',\n 'test_contexts.py::test_05|run',\n 'test_contexts.py::test_05|setup',\n 'test_contexts.py::test_06|run',\n 'test_contexts.py::test_06|setup',\n 'test_contexts.py::test_07|run',\n 'test_contexts.py::test_07|setup',\n 'test_contexts.py::test_08|run',\n 'test_contexts.py::test_09[1]|run',\n 'test_contexts.py::test_09[1]|setup',\n 'test_contexts.py::test_09[2]|run',\n 'test_contexts.py::test_09[2]|setup',\n 'test_contexts.py::test_09[3]|run',\n 'test_contexts.py::test_09[3]|setup',\n 'test_contexts.py::test_10|run',\n 'test_contexts.py::test_11[1-101]|run',\n 'test_contexts.py::test_11[2-202]|run',\n 'test_contexts.py::test_12[one]|run',\n 'test_contexts.py::test_12[two]|run',\n 'test_contexts.py::test_13[3-1]|run',\n 
'test_contexts.py::test_13[3-2]|run',\n 'test_contexts.py::test_13[4-1]|run',\n 'test_contexts.py::test_13[4-2]|run'}
    E         Extra items in the left set:
    E         'test_contexts.py::test_02|setup'
    E         'test_contexts.py::test_09[1]|teardown'
    E         'test_contexts.py::test_12[two]|setup'
    E         'test_contexts.py::test_11[1-101]|teardown'
    E         'test_contexts.py::test_13[4-2]|teardown'
    E         'test_contexts.py::test_08|setup'
    E         'test_contexts.py::test_13[4-1]|setup'
    E         'test_contexts.py::test_06|teardown'
    E         'test_contexts.py::test_11[2-202]|teardown'
    E         'test_contexts.py::test_13[3-2]|teardown'
    E         'test_contexts.py::test_01|teardown'
    E         'test_contexts.py::test_10|teardown'
    E         'test_contexts.py::test_12[two]|teardown'
    E         'test_contexts.py::test_09[3]|teardown'
    E         'test_contexts.py::OldStyleTests::test_03|teardown'
    E         'test_contexts.py::test_12[one]|setup'
    E         'test_contexts.py::test_11[1-101]|setup'
    E         'test_contexts.py::test_01|setup'
    E         'test_contexts.py::test_13[3-1]|teardown'
    E         'test_contexts.py::test_13[4-1]|teardown'
    E         'test_contexts.py::OldStyleTests::test_04|setup'
    E         'test_contexts.py::test_13[3-2]|setup'
    E         'test_contexts.py::test_09[2]|teardown'
    E         'test_contexts.py::test_10|setup'
    E         'test_contexts.py::test_07|teardown'
    E         'test_contexts.py::test_13[3-1]|setup'
    E         'test_contexts.py::test_11[2-202]|setup'
    E         'test_contexts.py::test_05|teardown'
    E         'test_contexts.py::test_08|teardown'
    E         'test_contexts.py::test_12[one]|teardown'
    E         'test_contexts.py::test_13[4-2]|setup'
    E         'test_contexts.py::test_02|teardown'
    E         Full diff:
    E           {
    E            '',
    E            'test_contexts.py::OldStyleTests::test_03|run',
    E            'test_contexts.py::OldStyleTests::test_03|setup',
    E         +  'test_contexts.py::OldStyleTests::test_03|teardown',
    E            'test_contexts.py::OldStyleTests::test_04|run',
    E         +  'test_contexts.py::OldStyleTests::test_04|setup',
    E            'test_contexts.py::OldStyleTests::test_04|teardown',
    E            'test_contexts.py::test_01|run',
    E         +  'test_contexts.py::test_01|setup',
    E         +  'test_contexts.py::test_01|teardown',
    E            'test_contexts.py::test_02|run',
    E         +  'test_contexts.py::test_02|setup',
    E         +  'test_contexts.py::test_02|teardown',
    E            'test_contexts.py::test_05|run',
    E            'test_contexts.py::test_05|setup',
    E         +  'test_contexts.py::test_05|teardown',
    E            'test_contexts.py::test_06|run',
    E            'test_contexts.py::test_06|setup',
    E         +  'test_contexts.py::test_06|teardown',
    E            'test_contexts.py::test_07|run',
    E            'test_contexts.py::test_07|setup',
    E         +  'test_contexts.py::test_07|teardown',
    E            'test_contexts.py::test_08|run',
    E         +  'test_contexts.py::test_08|setup',
    E         +  'test_contexts.py::test_08|teardown',
    E            'test_contexts.py::test_09[1]|run',
    E            'test_contexts.py::test_09[1]|setup',
    E         +  'test_contexts.py::test_09[1]|teardown',
    E            'test_contexts.py::test_09[2]|run',
    E            'test_contexts.py::test_09[2]|setup',
    E         +  'test_contexts.py::test_09[2]|teardown',
    E            'test_contexts.py::test_09[3]|run',
    E            'test_contexts.py::test_09[3]|setup',
    E         +  'test_contexts.py::test_09[3]|teardown',
    E            'test_contexts.py::test_10|run',
    E         +  'test_contexts.py::test_10|setup',
    E         +  'test_contexts.py::test_10|teardown',
    E            'test_contexts.py::test_11[1-101]|run',
    E         +  'test_contexts.py::test_11[1-101]|setup',
    E         +  'test_contexts.py::test_11[1-101]|teardown',
    E            'test_contexts.py::test_11[2-202]|run',
    E         +  'test_contexts.py::test_11[2-202]|setup',
    E         +  'test_contexts.py::test_11[2-202]|teardown',
    E            'test_contexts.py::test_12[one]|run',
    E         +  'test_contexts.py::test_12[one]|setup',
    E         +  'test_contexts.py::test_12[one]|teardown',
    E            'test_contexts.py::test_12[two]|run',
    E         +  'test_contexts.py::test_12[two]|setup',
    E         +  'test_contexts.py::test_12[two]|teardown',
    E            'test_contexts.py::test_13[3-1]|run',
    E         +  'test_contexts.py::test_13[3-1]|setup',
    E         +  'test_contexts.py::test_13[3-1]|teardown',
    E            'test_contexts.py::test_13[3-2]|run',
    E         +  'test_contexts.py::test_13[3-2]|setup',
    E         +  'test_contexts.py::test_13[3-2]|teardown',
    E            'test_contexts.py::test_13[4-1]|run',
    E         +  'test_contexts.py::test_13[4-1]|setup',
    E         +  'test_contexts.py::test_13[4-1]|teardown',
    E            'test_contexts.py::test_13[4-2]|run',
    E         +  'test_contexts.py::test_13[4-2]|setup',
    E         +  'test_contexts.py::test_13[4-2]|teardown',
    E           }
    
    /home/runner/work/pytest-cov/pytest-cov/tests/test_pytest_cov.py:1937: AssertionError
    ----------------------------- Captured stdout call -----------------------------
    running: /home/runner/work/pytest-cov/pytest-cov/.tox/py310-pytest71-xdist250-coverage65/bin/python -mpytest --basetemp=/tmp/pytest-of-runner/pytest-0/test_contexts1/runpytest-0 -v --cov=/tmp/pytest-of-runner/pytest-0/test_contexts1 --cov-context=test /tmp/pytest-of-runner/pytest-0/test_contexts1/test_contexts.py -n 1 --basetemp=/tmp/pytest-of-runner/pytest-0/basetemp
         in: /tmp/pytest-of-runner/pytest-0/test_contexts1
    ============================= test session starts ==============================
    platform linux -- Python 3.10.8, pytest-7.1.2, pluggy-1.0.0 -- /home/runner/work/pytest-cov/pytest-cov/.tox/py310-pytest71-xdist250-coverage65/bin/python
    cachedir: .pytest_cache
    rootdir: /tmp/pytest-of-runner/pytest-0/test_contexts1
    plugins: forked-1.4.0, xdist-2.5.0, cov-4.0.0
    gw0 I
    
    [gw0] linux Python 3.10.8 cwd: /tmp/pytest-of-runner/pytest-0/test_contexts1
    
    [gw0] Python 3.10.8 (main, Oct 18 2022, 06:43:21) [GCC 9.4.0]
    gw0 [20]
    
    scheduling tests via LoadScheduling
    
    test_contexts.py::test_01 
    [gw0] [  5%] PASSED test_contexts.py::test_01 
    test_contexts.py::test_02 
    [gw0] [ 10%] PASSED test_contexts.py::test_02 
    test_contexts.py::OldStyleTests::test_03 
    [gw0] [ 15%] PASSED test_contexts.py::OldStyleTests::test_03 
    test_contexts.py::OldStyleTests::test_04 
    [gw0] [ 20%] PASSED test_contexts.py::OldStyleTests::test_04 
    test_contexts.py::test_05 
    [gw0] [ 25%] PASSED test_contexts.py::test_05 
    test_contexts.py::test_06 
    [gw0] [ 30%] PASSED test_contexts.py::test_06 
    test_contexts.py::test_07 
    [gw0] [ 35%] PASSED test_contexts.py::test_07 
    test_contexts.py::test_08 
    [gw0] [ 40%] PASSED test_contexts.py::test_08 
    test_contexts.py::test_09[1] 
    [gw0] [ 45%] PASSED test_contexts.py::test_09[1] 
    test_contexts.py::test_09[2] 
    [gw0] [ 50%] PASSED test_contexts.py::test_09[2] 
    test_contexts.py::test_09[3] 
    [gw0] [ 55%] PASSED test_contexts.py::test_09[3] 
    test_contexts.py::test_10 
    [gw0] [ 60%] PASSED test_contexts.py::test_10 
    test_contexts.py::test_11[1-101] 
    [gw0] [ 65%] PASSED test_contexts.py::test_11[1-101] 
    test_contexts.py::test_11[2-202] 
    [gw0] [ 70%] PASSED test_contexts.py::test_11[2-202] 
    test_contexts.py::test_12[one] 
    [gw0] [ 75%] PASSED test_contexts.py::test_12[one] 
    test_contexts.py::test_12[two] 
    [gw0] [ 80%] PASSED test_contexts.py::test_12[two] 
    test_contexts.py::test_13[3-1] 
    [gw0] [ 85%] PASSED test_contexts.py::test_13[3-1] 
    test_contexts.py::test_13[3-2] 
    [gw0] [ 90%] PASSED test_contexts.py::test_13[3-2] 
    test_contexts.py::test_13[4-1] 
    [gw0] [ 95%] PASSED test_contexts.py::test_13[4-1] 
    test_contexts.py::test_13[4-2] 
    [gw0] [100%] PASSED test_contexts.py::test_13[4-2] 
    
    ---------- coverage: platform linux, python 3.10.8-final-0 -----------
    Name               Stmts   Miss  Cover
    --------------------------------------
    test_contexts.py      58      0   100%
    --------------------------------------
    TOTAL                 58      0   100%
    
    
    ============================== 20 passed in 0.60s ==============================
    =============================== warnings summary ===============================
    .tox/py310-pytest71-xdist250-coverage65/lib/python3.10/site-packages/_pytest/config/__init__.py:1198
      /home/runner/work/pytest-cov/pytest-cov/.tox/py310-pytest71-xdist250-coverage65/lib/python3.10/site-packages/_pytest/config/__init__.py:1198: PytestRemovedIn8Warning: The --strict option is deprecated, use --strict-markers instead.
        self.issue_config_time_warning(
    
    -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
    =========================== short test summary info ============================
    SKIPPED [1] tests/test_pytest_cov.py:367: condition: coverage.version_info >= (6, 3)
    SKIPPED [1] tests/test_pytest_cov.py:1028: Since pytest-xdist 2.3.0 the parent sys.path is copied in the child process
    SKIPPED [3] tests/test_pytest_cov.py:1141: condition: sys.platform != "win32"
    SKIPPED [1] tests/test_pytest_cov.py:1952: condition: coverage.version_info >= (5, 0)
    FAILED tests/test_pytest_cov.py::test_contexts[nodist] - AssertionError: asse...
    FAILED tests/test_pytest_cov.py::test_contexts[xdist] - AssertionError: asser...
    ======= 2 failed, 120 passed, 6 skipped, 1 warning in 111.56s (0:01:51) ========
    ERROR: InvocationError for command /home/runner/work/pytest-cov/pytest-cov/.tox/py310-pytest71-xdist250-coverage65/bin/pytest -vv (exited with code 1)
    ___________________________________ summary ____________________________________
    ERROR:   py310-pytest71-xdist250-coverage65: commands failed
    Error: Process completed with exit code 1.
    
    opened by danigm 0
  • Add Python 3.11 and PyPy 3.9 to the testing

    • https://github.com/actions/cache/releases
    • https://github.com/actions/checkout/releases
    • https://github.com/actions/setup-python/releases
    • https://docs.python.org/3/whatsnew/3.11.html
    • https://www.pypy.org/download.html
    skip-changelog 
    opened by cclauss 4