pytest plugin for distributed testing and loop-on-failures testing modes.

xdist: pytest distributed testing plugin

The pytest-xdist plugin extends pytest with some unique test execution modes:

  • test run parallelization: if you have multiple CPUs or hosts, you can use them for a combined test run. This allows you to speed up development or to use the special resources of remote machines.
  • --looponfail: run your tests repeatedly in a subprocess. After each run pytest waits until a file in your project changes and then re-runs the previously failing tests. This is repeated until all tests pass after which again a full run is performed.
  • Multi-Platform coverage: you can specify different Python interpreters or different platforms and run tests in parallel on all of them.

Before running tests remotely, pytest efficiently "rsyncs" your program source code to the remote place. All test results are reported back and displayed to your local terminal. You may specify different Python versions and interpreters.

If you would like to know how pytest-xdist works under the covers, check out OVERVIEW.

Installation

Install the plugin with:

pip install pytest-xdist

To use psutil for detection of the number of CPUs available, install the psutil extra:

pip install pytest-xdist[psutil]

Speed up test runs by sending tests to multiple CPUs

To send tests to multiple CPUs, use the -n (or --numprocesses) option:

pytest -n NUMCPUS

Pass -n auto to use as many processes as your computer has CPU cores. This can lead to considerable speed ups, especially if your test suite takes a noticeable amount of time.

If a test crashes a worker, pytest-xdist will automatically restart that worker and report the test’s failure. You can use the --max-worker-restart option to limit the number of worker restarts that are allowed, or disable restarting altogether using --max-worker-restart 0.

By default, using --numprocesses will send pending tests to any worker that is available, without any guaranteed order. You can change the test distribution algorithm with the --dist option. It takes these values:

  • --dist no: The default algorithm, distributing one test at a time.
  • --dist loadscope: Tests are grouped by module for test functions and by class for test methods. Groups are distributed to available workers as whole units. This guarantees that all tests in a group run in the same process. This can be useful if you have expensive module-level or class-level fixtures. Grouping by class takes priority over grouping by module.
  • --dist loadfile: Tests are grouped by their containing file. Groups are distributed to available workers as whole units. This guarantees that all tests in a file run in the same worker.
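
Conceptually, both grouping modes derive a key from each test's node ID and send all tests that share a key to the same worker. The sketch below only illustrates that idea; it is not xdist's actual scheduler code, and the helper names are invented:

```python
def loadscope_key(nodeid: str) -> str:
    # --dist loadscope: drop the last "::"-separated component, so test
    # methods group by class and plain test functions group by module
    return nodeid.rsplit("::", 1)[0]


def loadfile_key(nodeid: str) -> str:
    # --dist loadfile: keep only the file portion of the node ID
    return nodeid.split("::", 1)[0]


print(loadscope_key("tests/test_auth.py::TestLogin::test_ok"))  # tests/test_auth.py::TestLogin
print(loadfile_key("tests/test_auth.py::TestLogin::test_ok"))   # tests/test_auth.py
```

Tests whose keys are equal run in the same process, which is what makes expensive module-level or class-level fixtures pay off under these modes.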

Making session-scoped fixtures execute only once

pytest-xdist is designed so that each worker process will perform its own collection and execute a subset of all tests. This means that tests in different processes requesting a high-level scoped fixture (for example session) will execute the fixture code more than once, which breaks expectations and might be undesired in certain situations.

While pytest-xdist does not have built-in support for ensuring a session-scoped fixture is executed exactly once, this can be achieved by using a lock file for inter-process communication.

The example below needs to execute the fixture session_data only once (because it is resource intensive, or needs to execute only once to define configuration options, etc.), so it uses a FileLock to produce the fixture data only once, when the first process requests the fixture; the other processes will then read the data from a file.

Here is the code:

import json

import pytest
from filelock import FileLock


@pytest.fixture(scope="session")
def session_data(tmp_path_factory, worker_id):
    if worker_id == "master":
        # not executing with multiple workers; just produce the data and let
        # pytest's fixture caching do its job
        return produce_expensive_data()

    # get the temp directory shared by all workers
    root_tmp_dir = tmp_path_factory.getbasetemp().parent

    fn = root_tmp_dir / "data.json"
    with FileLock(str(fn) + ".lock"):
        if fn.is_file():
            data = json.loads(fn.read_text())
        else:
            data = produce_expensive_data()
            fn.write_text(json.dumps(data))
    return data

The example above can also be used when a fixture needs to execute exactly once per test session, for example initializing a database service and populating initial tables.

This technique might not work for every case, but should be a starting point for many situations where executing a high-scope fixture exactly once is important.

Running tests in a Python subprocess

To instantiate a python3.5 subprocess and send tests to it, you may type:

pytest -d --tx popen//python=python3.5

This will start a subprocess which is run with the python3.5 Python interpreter, found in your system binary lookup path.

If you prefix the --tx option value like this:

--tx 3*popen//python=python3.5

then three subprocesses will be created and tests will be load-balanced across these three processes.

Running tests in a boxed subprocess

This functionality has been moved to the pytest-forked plugin, but the --boxed option is still kept for backward compatibility.

Sending tests to remote SSH accounts

Suppose you have a package mypkg which contains some tests that you can successfully run locally, and an ssh-reachable machine myhost. You can then ad-hoc distribute your tests by typing:

pytest -d --tx ssh=myhostpopen --rsyncdir mypkg mypkg

This will synchronize your mypkg package directory to a remote ssh account and then locally collect tests and send them to remote places for execution.

You can specify multiple --rsyncdir directories to be sent to the remote side.

Note

For pytest to collect and send tests correctly, you not only need to make sure all code and test directories are rsynced, but also that any test (sub)directory has an __init__.py file, because internally pytest references tests by a fully qualified Python module path. Otherwise you will get strange errors during setup of the remote side.

You can specify multiple --rsyncignore glob patterns to be ignored when files are sent to the remote side. There are also internal ignores: .*, *.pyc, *.pyo, *~. These you cannot override using the rsyncignore command-line or ini-file option(s).

Sending tests to remote Socket Servers

Download the single-module socketserver.py Python program and run it like this:

python socketserver.py

It will tell you that it starts listening on the default port. You can now on your home machine specify this new socket host with something like this:

pytest -d --tx socket=192.168.1.102:8888 --rsyncdir mypkg mypkg

Running tests on many platforms at once

The basic command to run tests on multiple platforms is:

pytest --dist=each --tx=spec1 --tx=spec2

If you specify a Windows host, an OSX host and a Linux environment, this command will send each test to all platforms and report back failures from all platforms at once. The specification strings use the xspec syntax.

Identifying the worker process during a test

New in version 1.15.

If you need to determine the identity of a worker process in a test or fixture, you may use the worker_id fixture to do so:

@pytest.fixture()
def user_account(worker_id):
    """ use a different account in each xdist worker """
    return "account_%s" % worker_id

When xdist is disabled (running with -n0 for example), worker_id will return "master".

Worker processes also have the following environment variables defined:

  • PYTEST_XDIST_WORKER: the name of the worker, e.g., "gw2".
  • PYTEST_XDIST_WORKER_COUNT: the total number of workers in this session, e.g., "4" when -n 4 is given in the command-line.
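
Code that runs outside a fixture (for example, logging setup in conftest.py) can read these variables directly. A small sketch, with illustrative function names that are not part of pytest-xdist:

```python
import os


def current_worker() -> str:
    # "gw0", "gw1", ... under xdist; fall back to "master" when xdist is off
    return os.environ.get("PYTEST_XDIST_WORKER", "master")


def worker_count() -> int:
    # total number of workers in the session, 1 when not distributing
    return int(os.environ.get("PYTEST_XDIST_WORKER_COUNT", "1"))
```

For instance, a per-worker log file could be named using current_worker() as a suffix.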

The worker_id is also stored in the TestReport, under the worker_id attribute.

Since version 2.0, the following functions are also available in the xdist module:

def is_xdist_worker(request_or_session) -> bool:
    """Return `True` if this is an xdist worker, `False` otherwise

    :param request_or_session: the `pytest` `request` or `session` object
    """

def is_xdist_master(request_or_session) -> bool:
    """Return `True` if this is the xdist master, `False` otherwise

    Note: this method also returns `False` when distribution has not been
    activated at all.

    :param request_or_session: the `pytest` `request` or `session` object
    """

def get_xdist_worker_id(request_or_session) -> str:
    """Return the id of the current worker ('gw0', 'gw1', etc) or 'master'
    if running on the 'master' node.

    If not distributing tests (for example passing `-n0` or not passing `-n` at all) also return 'master'.

    :param request_or_session: the `pytest` `request` or `session` object
    """

Uniquely identifying the current test run

New in version 1.32.

If you need to globally distinguish one test run from others in your workers, you can use the testrun_uid fixture. For instance, let's say you wanted to create a separate database for each test run:

import pytest
from posix_ipc import Semaphore, O_CREAT

@pytest.fixture(scope="session", autouse=True)
def create_unique_database(testrun_uid):
    """ create a unique database for this particular test run """
    database_url = f"psql://myapp-{testrun_uid}"

    with Semaphore(f"/{testrun_uid}-lock", flags=O_CREAT, initial_value=1):
        if not database_exists(database_url):
            create_database(database_url)

@pytest.fixture()
def db(testrun_uid):
    """ retrieve unique database """
    database_url = f"psql://myapp-{testrun_uid}"
    return database_get_instance(database_url)

Additionally, during a test run, the following environment variable is defined:

  • PYTEST_XDIST_TESTRUNUID: the unique id of the test run.

Accessing sys.argv from the master node in workers

To access the sys.argv passed to the command-line of the master node, use request.config.workerinput["mainargv"].

Specifying test exec environments in an ini file

You can use pytest's ini file configuration to avoid typing common options. You can for example make running with three subprocesses your default like this:

[pytest]
addopts = -n3

You can also add default environments like this:

[pytest]
addopts = --tx ssh=myhost//python=python3.5 --tx ssh=myhost//python=python3.6

and then just type:

pytest --dist=each

to run tests in each of the environments.

Specifying "rsync" dirs in an ini-file

In a tox.ini or setup.cfg file in your root project directory you may specify directories to include or to exclude in synchronisation:

[pytest]
rsyncdirs = . mypkg helperpkg
rsyncignore = .hg

These directory specifications are relative to the directory where the configuration file was found.

Comments
  • New scheduler for distribution of groups of related tests

    Hi, this PR is a possible fix of #18 (and its duplicate #84).

    As stated in #18, the current implementation of the LoadScheduler distributes tests without taking their relation into account. In particular, if the user is using a module-level fixture that performs a large amount of work, distribution in this manner will trigger the fixture in each node, causing a large overhead.

    In my case, we use the Topology Modular Framework to build a topology of nodes. When using the topology_docker builder, each module will start a set of Docker nodes, connect them, initialize them, configure them, and then the testing can start. When running on other builders, we have an even larger overhead for building and destroying the topology.

    With this new scheduler, the tests will be aggregated by suite (basically anything before the :: in the nodeid of a test). I called these chunks of related tests "work units". The scheduler will then distribute complete work units to the workers, thus triggering the module-level fixtures only once per worker.

    We are running a test suite of more than 150 large tests in series that is taking 20 hours. We tried running it with the xdist LoadScheduler and it took even more (30, and then we stopped it). With this change, we are able to scale the solution, and with just 4 workers we were able to reduce it to 5 hours.

    I've included an example test suite using the loadsuite scheduler in examples/loadsuite that shows 3 simple suites (alpha, beta, gamma), each taking a total of 50 seconds to complete. The results are as follows:

    • Serial: 150 seconds.
    • 3 workers: 50 seconds (one suite per worker).
    • 2 workers: 100 seconds (two suites in one worker, one suite in the other worker).
    • More than 3 workers: raises, as there is not enough work.

    This PR still requires a little bit of work, which I'm willing to do with your guidance. In particular:

    • I'm using OrderedDicts to keep track of ordering, but they aren't available on Python 2.6. There is a PyPI package for Python 2.6 and below that includes it, but I was hesitant to add it, or maybe conditionally add it for Python 2.6 only.
    • Include tests specific to the new scheduler (it currently works fine, but doesn't have regression tests).
    • Not sure how to handle the len(workers) > len(suites). Currently we are just raising a RuntimeException.
    • The changelog thingy.
    • Document this new option of scheduler.
    • Any other feedback you may want to provide.

    Thank you very much for your consideration of this PR.

    opened by carlos-jenkins 32
  • Allow custom scheduler class implementation

    Hi. I've seen several tickets where people need more control over test scheduling (see #18). I think it would be much better if xdist gave a way to implement a custom test scheduler class and pass it to the DSession object. I've come up with this:

    1. We freeze a scheduler interface.

    2. One can have one's own scheduler class implementation, available for import.

      from _pytest.runner import CollectReport
      import py
      from xdist.dsession import report_collection_diff
      
      
      class CustomScheduling:
          def __init__(self, numnodes, log=None, config=None):
              self.numnodes = numnodes
              self.log = log
              self.config = config
      
          # ... custom scheduling logic implementation
      
    3. The Python import path should be passed to xdist through the optional --dc command line parameter:

      pytest --boxed -nauto --dc=custom.CustomScheduling tests.py
      
    opened by wronglink 29
  • Slaves crash on win64 with error "Not properly terminated"

    Following on to #68 - after setting the PYTEST_DEBUG environment variable, I get the following output when I run using more than 1 process for example:

    py.test -v  -n 3 -m regression --max-slave-restart=0 [redacted]
    
    
    Slave restarting disabled
          pytest_testnodedown [hook]
              node: <SlaveController gw1>
              error: Not properly terminated
    [gw1] node down: Not properly terminated
          finish pytest_testnodedown --> [] [hook]
          pytest_runtest_logreport [hook]
              report: <TestReport 'redacted.py::Redacted::test_eta2' when='???' outcome='failed'>
            pytest_report_teststatus [hook]
                report: <TestReport 'redacted.py::Redacted::test_eta2' when='???' outcome='failed'>
            finish pytest_report_teststatus --> ('failed', 'f', 'FAILED') [hook]
    [gw1] FAILED redacted.py::Redacted::test_eta2       finish pytest_runtest_logreport --> [] [hook]
    
    

    The set of tests that fail is different upon different runs, but all failures are with the above error (Not properly terminated).

    versions:

    pytest                    2.9.1                    py27_0    
    pytest-xdist              1.13.1                   py27_0    
    
    opened by rekcahpassyla 21
  • pytest_runtest_makereport of custom conftest doesn't get called when running with xdist with -n option

    I have noticed that a registered plugin having the method pytest_runtest_makereport(item, call) doesn't get called back. However, it works without the -n option.

    opened by ssrikanta-scea 20
  • Enforce serial execution of related tests

    I think this is related to https://github.com/pytest-dev/pytest-xdist/issues/18 but didn't want to clutter the comment history if that's not the case.

    I am looking for a way to enforce certain tests, usually the tests within a specific class, to be executed in serial due to that they interfere with each other. I could use -n0 but that would be an overreach - it would apply to the entire test run, not just for a specific class for instance. Would it be possible to just have a decorator directive on the class to 1. instruct xdist to schedule the tests in the class to the same slave as well as 2. instruct them to be run in serial?

    If there are other ways of achieving this, I would be interested in that too.

    Thanks for a great product.

    opened by betamos 19
  • Pass and use original sys.argv to/with workers

    This gets used e.g. by argparse for the "prog" part.

    We could explicitly pass it through and/or set it on config._parser.prog, but that is a bit tedious just for this use case, and it looks like "simulating" the main prog here appears to not be that bad of a hack after all.

    Fixes https://github.com/pytest-dev/pytest-xdist/issues/358, Fixes #384.

    opened by blueyed 18
  • [WIP] Issue #130

    Updated serialize_report and unserialize_report to allow passing of additional exception info using ReprExceptionInfo. We could also use ExceptionChainRepr instead actually. I mostly just took a suggestion from mszpala and ran with it.

    This addresses issue #130.

    Thanks for submitting a PR, your contribution is really appreciated!

    Here's a quick checklist that should be present in PRs:

    • [x] Make sure to include reasonable tests for your change if necessary

    • [x] Add a new news fragment into the changelog folder, following these guidelines:

      • Name it $issue_id.$type for example 588.bug

      • If you don't have an issue_id change it to the PR id after creating it

      • Ensure type is one of removal, feature, bugfix, vendor, doc or trivial

      • Make sure to use full sentences with correct case and punctuation, for example:

        Fix issue with non-ascii contents in doctest text files.
        
    opened by timyhou 18
  • pytest_runtest_makereport is not giving the report result with xdist

    platform window Python 3.9.9 pytest-7.1.2 pluggy-1.0.0 plugins: xdist-2.5.0, forked-1.4.0

    I have a scenario where, after executing all the test cases with xdist, I need to collect the report with pytest_runtest_makereport, as follows:

    @pytest.hookimpl(tryfirst=True, hookwrapper=True)
    def pytest_runtest_makereport(self, item, call):
        outcome = yield
        result = outcome.get_result()
        print('makereport:', result)
        if result.when == 'call':
            item.session.results[item] = result

    But result = outcome.get_result() is not giving any result with xdist; without xdist it is working fine.

    question needs information 
    opened by gurdeepsinghiet 17
  • broadcast child shouldstop or fail Fixes pytest-dev/pytest#5655

    Thanks for submitting a PR, your contribution is really appreciated!

    Here's a quick checklist that should be present in PRs:

    • [ ] Make sure to include reasonable tests for your change if necessary

    • [ ] We use towncrier for changelog management, so please add a news file into the changelog folder following these guidelines:

      • Name it $issue_id.$type for example 588.bugfix;

      • If you don't have an issue_id change it to the PR id after creating it

      • Ensure type is one of removal, feature, bugfix, vendor, doc or trivial

      • Make sure to use full sentences with correct case and punctuation, for example:

        Fix issue with non-ascii contents in doctest text files.
        
    opened by graingert 16
  • Can not retrieve "assertion" reports from remote run (using --tx ...)

    I have a code which can retrieve assertion reports:

    • this works correctly when I run my tests locally (without --tx ...)
    • self.__testcase_context is the request object
    • the self.__logger_factory is my own logger factory (not native pytest)
    terminalreporter = self.__testcase_context.config.pluginmanager.getplugin("terminalreporter")
    if terminalreporter and terminalreporter.stats.get("failed"):
        for failed_report in terminalreporter.stats.get("failed"):
            if failed_report.location[2] == self.__testcase_context.node.name:
                self.__logger = self.__logger_factory.create_logger(
                    "Teardown", use_console_handler=False, use_file_handler=True
                )
                self.__logger.error(str(failed_report.longrepr))
    

    The above code does not work in the case of a remote run (with --tx ...); maybe there is no terminalreporter object? But I can see the needed AssertionError in my console, which I wanted to query remotely. Is there a way to get assertion reports on a remote run?

    Thanks for your answer

    question 
    opened by mitzkia 16
  • How to create individual log files for each worker

    I'm trying to figure out how individual log files can be created for each worker, in the same manner that pytest --log-file does when not using xdist.

    Since the basic pytest --log-file opens with write, not append (see https://github.com/pytest-dev/pytest/issues/3813), there does not seem to be a way of getting a complete log file for a run using xdist with multiple workers. However, even if it could be opened in append mode, the individual worker IDs would not be present, and having everything mixed together can be more difficult for troubleshooting.

    I'm hoping that there is some way of creating a log file for each worker, but I'm not really sure how to do that, if it's even possible. I tried setting up a logger in pytest_configure, but worker_id does not seem to be available at that time.

    question 
    opened by nbartos 16
  • "Pin" certain parameters to a process?

    I have a bunch of tests (all in one project) following pretty much the same pattern:

    import pytest
    
    @pytest.mark.parametrize("foo", ["a", "b"])  # pin those permutations to one process?
    @pytest.mark.parametrize("bar", ["c", "d"])
    def test_something(foo: str, bar: str):
        pass
    

    I'd like to parallelize them, but, if possible, I'd like to "pin" all permutations associated with one specific parameter to one process. In the above example, let's say foo is pinned; then one process could work through ('a', 'c') and ('a', 'd') while the other process could work through ('b', 'c') and ('b', 'd'). All variations with foo == "a" happen in one process, and all variations with foo == "b" can potentially happen in another (single) process.

    This is where I was hoping I could pin a parameter to a process. Is something like this possible or conceivable in some way, shape or form?


    For context, my tests are heavily built on top of ctypes which is, for better or for worse, not entirely stateless as I have recently discovered. I.e. if I do something stupid with it, a completely unrelated test (many tests later) might crash. The exact behavior depends on the version of CPython (and on 32 bit Wine & Windows on a DLL's calling convention), but all from at least 3.7 to 3.11 have those hidden states of some form. The only good news is that this behavior can be reproduced if all tests run in the exact same order within a single process.

    I am working on zugbruecke, a ctypes drop-in replacement that allows calling Windows DLLs from Unix-like systems or, in other words, a fancy RPC layer between a Unix process and a Wine process. The test suite can be found here. An original test looks as follows:

    @pytest.mark.parametrize("arch,conv,ctypes,dll_handle", get_context(__file__))
    def test_int_with_size_without_pointer(arch, conv, ctypes, dll_handle):
        """
        Test simple int passing with size
        """
    
        sqrt_int = dll_handle.sqrt_int
        sqrt_int.argtypes = (ctypes.c_int16,)
        sqrt_int.restype = ctypes.c_int16
    
        assert 3 == sqrt_int(9)
    

    arch can either be win32 or win64 (for 32 bit and 64 bit DLLs). conv can be cdll or windll (only relevant for 32 bit DLLs). ctypes represents my drop-in-replacement backed by different versions of CPython on top of Wine. dll_handle is just a ctypes-like handle to a DLL. The ctypes parameter would need to be pinned.

    The test suite currently has 1.6k tests running anywhere from 10 to 40 minutes (single process), depending on the hardware underneath.

    opened by s-m-e 1
  • Implement work-stealing scheduler

    The more I look at the scheduling problem, the more I think it should be solved using work stealing https://en.wikipedia.org/wiki/Work_stealing :

    • Initially, all tests are split among the workers evenly.
    • When some worker completes all of its tests, the scheduler asks the worker with max number of pending tests to give half of its pending tests back.
    • After receiving the confirmation, the scheduler sends the returned tests to the worker that ran out of tests.
    • Shutdown workers when no tests can be reallocated ("stolen").
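
    The stealing step above ("give half of its pending tests back") can be sketched in a few lines; this illustrates the proposal only, and is not xdist's scheduler code:

```python
def steal_half(pending: dict, idle_worker: str) -> None:
    """Move half of the largest pending queue to an idle worker.

    A sketch of the proposed rule only, not xdist's actual scheduler.
    `pending` maps worker name -> list of not-yet-run test ids.
    """
    victim = max(pending, key=lambda w: len(pending[w]))
    n = len(pending[victim]) // 2
    if n == 0:
        return  # nothing worth stealing: every queue has at most one test
    pending[idle_worker].extend(pending[victim][-n:])
    del pending[victim][-n:]


queues = {"gw0": ["t1", "t2", "t3", "t4"], "gw1": []}
steal_half(queues, "gw1")
print(queues)  # {'gw0': ['t1', 't2'], 'gw1': ['t3', 't4']}
```
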

    Of course, there are some tricky synchronization details - but nothing impossible to implement. This algorithm shouldn't need any parameters or assumptions about test duration to perform optimally. And the code could even turn out to be simpler than the current LoadScheduling.

    By "perform optimally" I mean:

    • In the ideal case, where all tests have the same duration: n tests are split among m workers, each worker runs n/m consecutive tests, and achieves the best reuse of fixtures possible (without additional information about the fixtures).
    • No workers are idle while there are pending tests.

    The only issue is that I don't know when I'll have enough free time to write a custom scheduler from scratch.

    I plan to work on this myself, but I'm not sure when I'll have time for it.

    Also, I would appreciate feedback on the idea from the maintainers.

    opened by amezin 2
  • -nauto isn't smart enough

    -nauto is fine for deciding how many processes you need based on processors, but it should wait and make its decision AFTER tests have been collected and filtered; if the number of tests to run is less than the number of processors, it should scale back and only start as many processes as are needed. This will probably mean that it takes a tiny bit longer to get started, but if you are using vscode and debugging tests, you won't spawn 10 processes and only use 1 of them. Maybe this behavior is only needed if there are tests specified on the command line (which would mean it wouldn't slow down full test runs).

    opened by boatcoder 1
  • Time Before Session Starts Increases Exponentially With Number Of Workers

    Hello, I'm upgrading from Python 3.8 to Python 3.10. In my previous configuration, this was the pytest set up:

    platform linux -- Python 3.8.6, pytest-3.10.1, py-1.9.0, pluggy-0.13.1
    plugins: random-order-1.0.4, random-0.2, xdist-1.20.1, forked-1.3.0
    

    It's a test suite with several thousands of tests. Using -n auto, everything was working fine on 32 workers.

    Because of the upgrade, I had to upgrade also pytest and pytest-xdist. I tested a few versions, but I get the same problem with all of them. The current set-up:

    platform linux -- Python 3.10.8, pytest-6.2.5, py-1.9.0, pluggy-0.13.1
    plugins: xdist-2.0.0, forked-1.3.0, random-0.2, random-order-1.0.4
    

    What happens is that between the pytest call and the start session message there is a gap of several minutes. Here's the gap depending on the number of workers:

    • 1 worker: 3:25
    • 2 workers: 5:48
    • 3 workers: 16:52
    • 4 workers: 17:27
    • 8 workers: 57:39

    An example of the log:

    build	28-Nov-2022 16:01:26	8
    build	28-Nov-2022 16:59:05	============================= test session starts ==============================
    build	28-Nov-2022 16:59:05	platform linux -- Python 3.10.8, pytest-6.2.5, py-1.9.0, pluggy-0.13.1 -- /usr/local/bin/python
    
    

    (8 is the number of workers, it's written by the same script that runs pytest right before it, like

    echo "8"
    pytest -v testSuite -n 8
    

    With the old version there was absolutely no time gap, while now it takes almost an hour just to start.

    Any insight into what might be the problem?

    opened by micric 5
  • [info] pytest-xdist worker crash then log something

    I have a use case where, if a worker crashes, I want to collect some info and dump it in a log file so that I can debug. I tried searching the pytest-xdist docs and Google but couldn't find anything. Do we have any hooks that can be called from conftest to dump after the worker crashes, or any other way possible? Thanks in advance for the help and the nice tool development.

    opened by jay746 0
  • DumpError: strings must be utf-8 decodable

    In pypa/distutils#183, I've stumbled onto an issue with xdist. In that issue, distutils/setuptools are moving from a stdout-based logging system to the Python logging framework. As a result, the quiet() context, which suppresses writes to sys.stdout no longer has the effect of suppressing logs.

    One of the things that setuptools is logging is a filename containing surrogate escapes. It logs the name so the user can identify which filename was failing.

    With pytest-xdist, however, the test fails with an INTERNALERROR in gateway_base.

    A minimal test is to run pytest -n auto on the following:

    def test_log_non_utf8():
        __import__('logging').getLogger().warn(
            'föö'.encode('latin-1').decode('utf-8', errors='surrogateescape')
        )
        __import__('pytest').fail('failing to emit messages')
    

    That test seems legitimate, capturing behavior that provides value to the user. The issue doesn't occur without pytest-xdist.

    My instinct is that pytest-xdist shouldn't be putting constraints on the allowed outputs for logging. Can something be done to be more lenient about legitimate non-encodeable values being passed? If the encoding is only an internal implementation detail between worker and supervisor, it should serialize any strings in a way that they're deserialized with fidelity to the original.

    opened by jaraco 2

pytest-dev 67 Dec 1, 2022
A Django plugin for pytest.

Welcome to pytest-django! pytest-django allows you to test your Django project/applications with the pytest testing tool. Quick start / tutorial Chang

pytest-dev 1.1k Dec 31, 2022
Coverage plugin for pytest.

Overview docs tests package This plugin produces coverage reports. Compared to just using coverage run this plugin does some extras: Subprocess suppor

pytest-dev 1.4k Dec 29, 2022
Plugin for generating HTML reports for pytest results

pytest-html pytest-html is a plugin for pytest that generates a HTML report for test results. Resources Documentation Release Notes Issue Tracker Code

pytest-dev 548 Dec 28, 2022
Mypy static type checker plugin for Pytest

pytest-mypy Mypy static type checker plugin for pytest Features Runs the mypy static type checker on your source files as part of your pytest test run

Dan Bader 218 Jan 3, 2023
A rewrite of Python's builtin doctest module (with pytest plugin integration) but without all the weirdness

The xdoctest package is a re-write of Python's builtin doctest module. It replaces the old regex-based parser with a new abstract-syntax-tree based pa

Jon Crall 174 Dec 16, 2022
pytest plugin for a better developer experience when working with the PyTorch test suite

pytest-pytorch What is it? pytest-pytorch is a lightweight pytest-plugin that enhances the developer experience when working with the PyTorch test sui

Quansight 39 Nov 18, 2022
A pytest plugin, that enables you to test your code that relies on a running PostgreSQL Database

This is a pytest plugin, that enables you to test your code that relies on a running PostgreSQL Database. It allows you to specify fixtures for PostgreSQL process and client.

Clearcode 252 Dec 21, 2022
A pytest plugin that enables you to test your code that relies on a running Elasticsearch search engine

pytest-elasticsearch What is this? This is a pytest plugin that enables you to test your code that relies on a running Elasticsearch search engine. It

Clearcode 65 Nov 10, 2022
This is a pytest plugin, that enables you to test your code that relies on a running MongoDB database

This is a pytest plugin, that enables you to test your code that relies on a running MongoDB database. It allows you to specify fixtures for MongoDB process and client.

Clearcode 19 Oct 21, 2022