pytest plugin for manipulating test data directories and files

Overview

pytest-datadir


Usage

pytest-datadir looks for a directory with the name of your module, or for the global 'data' folder. Let's say you have a structure like this:

.
├── data/
│   └── hello.txt
├── test_hello/
│   └── spam.txt
└── test_hello.py

You can access the contents of these files using the injected fixtures datadir (for the folder named after the test module) and shared_datadir (for the data folder):

def test_read_global(shared_datadir):
    contents = (shared_datadir / 'hello.txt').read_text()
    assert contents == 'Hello World!\n'

def test_read_module(datadir):
    contents = (datadir / 'spam.txt').read_text()
    assert contents == 'eggs\n'

pytest-datadir copies the original files to a temporary folder, so changing their contents won't affect the original data files.

Both datadir and shared_datadir fixtures are pathlib.Path objects.
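
Because the fixtures point at disposable copies, a test can freely modify the files it receives. A minimal sketch building on the structure above (the test name is just for illustration):

def test_modify_copy(datadir):
    # datadir is a temporary copy of test_hello/, so writes are isolated
    spam = datadir / 'spam.txt'
    spam.write_text('ham\n')
    assert spam.read_text() == 'ham\n'
    # the original test_hello/spam.txt still contains 'eggs\n'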

Releases

Follow these steps to make a new release:

  1. Create a new branch release-X.Y.Z from master.
  2. Update CHANGELOG.rst.
  3. Open a PR.
  4. After it is green and approved, push a new tag in the format X.Y.Z.

Travis will deploy to PyPI automatically.

Afterwards, update the recipe in conda-forge/pytest-datadir-feedstock.

License

MIT.

Comments
  • Referencing the file twice will copy it again

    I've noticed that on every call to datadir.__getitem__ the file is always "re-copied".

    This is a problem in my opinion, since my first impression of calling __getitem__ is that it only returns the path of the file, not that the file is copied again.
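
    A cache keyed by file name would avoid the repeated copies. Below is a minimal sketch of the idea, assuming a hypothetical dict-like wrapper (the fixture's real internals may differ):

    import shutil
    from pathlib import Path

    class CachingDataDir:
        """Hypothetical dict-like data dir that copies each file at most once."""

        def __init__(self, source_dir: Path, tmp_dir: Path):
            self._source_dir = source_dir
            self._tmp_dir = tmp_dir
            self._copies = {}  # file name -> path of its temporary copy

        def __getitem__(self, name: str) -> Path:
            if name not in self._copies:
                dest = self._tmp_dir / name
                shutil.copy(self._source_dir / name, dest)
                self._copies[name] = dest
            return self._copies[name]  # later lookups reuse the same copy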

    bug :beetle: 
    opened by edisongustavo 7
  • Add support for test subdirectories

    Request

    Add some sort of structure that incorporates data files in a subdirectory that aren't explicit name matches for a test file.

    Issue Description

    Often, when projects get large, you want to start using subdirectories. You don't really want to share data between all of your test modules, but you do want a subset of your test files to share the same data.

    Example Test Structure

    I want to share data files for all files in performance_stats.

    test/
    ├── data
    ├── dict_types_unit_test.py
    ├── __init__.py
    ├── performance_stats
    │   ├── 20181018_runlog.txt # Test Data
    │   ├── 20181019_runlog.txt # Test Data
    │   ├── fixtures.pyc
    │   ├── __init__.py
    │   ├── integration_tests.py
    │   ├── test_cli.py
    │   └── test_parsing.py
    ├── setz_unit_test.py
    ├── test-report.xml
    └── yaml_config_base_unit_test.py
    

    Current Work Around

    Use shared_datadir and accept a "global" namespace. Another possible workaround is sketched below.
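
    Assuming a conftest.py is allowed inside the subdirectory, a local fixture could copy a package-level data folder (a minimal sketch; the data/ location next to conftest.py is an assumption, not existing plugin behavior):

    # test/performance_stats/conftest.py -- hypothetical local fixture
    import shutil
    from pathlib import Path

    import pytest

    @pytest.fixture
    def package_datadir(tmp_path):
        # copy this package's shared data into a per-test temporary dir
        source = Path(__file__).parent / "data"
        copy = tmp_path / "data"
        shutil.copytree(source, copy)
        return copy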

    Unimportant note

    I appreciate the library a lot. I've been writing my own fixtures to do this for years. =D

    enhancement :raised_hands: 
    opened by rawrgulmuffins 5
  • Release 1.0?

    I just noticed that we set the Development Status to Production, but have not released 1.0 yet. My question is:

    @gabrielcnr do you still think the API might change? If so, we should probably change the Development Status to Alpha or Beta. If not, we should probably release 1.0 and promise backward compatibility, at least until 2.0 is released.

    opened by nicoddemus 4
  • Failed to reference from session/module scope

    Hi, it seems that the plugin fixtures' default scope is function; when I try to reference one of them from a fixture with session scope, py.test raises a ScopeMismatch error.

    My use case is:

    1. put a database schema init SQL file in the data dir
    2. add a fixture called init_db, which has session scope and recreates the database schema only once during the whole test run
    3. have the init_db fixture use shared_datadir to read the init SQL file

    Any suggestions for this use case? Or can we change the plugin fixtures' scope to session (or something else)? One possible workaround is sketched below.
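
    Assuming the shared data lives in a tests/data folder, a session-scoped copy can be built with pytest's tmp_path_factory instead of shared_datadir (init_db and schema.sql are names from this use case, not part of the plugin):

    import shutil
    from pathlib import Path

    import pytest

    @pytest.fixture(scope="session")
    def session_shared_datadir(tmp_path_factory):
        # session-scoped stand-in for shared_datadir: copy once per session
        source = Path(__file__).parent / "data"
        copy = tmp_path_factory.mktemp("session_data") / "data"
        shutil.copytree(source, copy)
        return copy

    @pytest.fixture(scope="session")
    def init_db(session_shared_datadir):
        # read the schema once per session; actual DB setup is omitted here
        return (session_shared_datadir / "schema.sql").read_text()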

    opened by bcho 3
  • A way to update files in datadir automatic

    Just a suggestion: I've always had a little bit of work updating files in the shared data dirs. Have you thought about a way to update files?

    I have sometimes used Jest with snapshots; it has a very good way to update snapshot files. Maybe we could implement something along those lines (a rough sketch follows), what do you think? https://jestjs.io/docs/en/snapshot-testing#updating-snapshots
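
    One way such a mechanism could look is sketched below, with a hypothetical --update-data command-line option (nothing like this exists in the plugin today, and original_datadir, standing for the non-copied data location, is also an assumption here):

    # conftest.py -- hypothetical sketch of a snapshot-style update flag
    import pytest

    def pytest_addoption(parser):
        parser.addoption(
            "--update-data",
            action="store_true",
            help="rewrite expected data files instead of asserting on them",
        )

    @pytest.fixture
    def check_or_update(request, original_datadir):
        # original_datadir is assumed to point at the real, non-copied files
        def check(relative_path, actual_text):
            expected = original_datadir / relative_path
            if request.config.getoption("--update-data"):
                expected.write_text(actual_text)  # regenerate expected file
            else:
                assert actual_text == expected.read_text()
        return check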

    Tks!!

    opened by alissonperez 2
  • Enable Travis

    @gabrielcnr

    I don't have access, so you will need to first enable Travis on this repository. Go to travis-ci.org, go into "My Repositories" on the top left, and flip the switch for pytest-datadir. :grin:

    I enabled it for my fork, you can see the first build here.

    opened by nicoddemus 1
  • Change license to MIT?

    Unfortunately LGPL brings a number of problems for some people/organizations:

    From pytest-dev/pytest-mock#45:

    I can't use any form of GPL at work (including LGPL). Even if it was possible, the handling considerations wouldn't be worth it for a plugin. A few months ago I had a remote worker try to borrow code from an LGPL plugin for a project, thinking "hey, it's open source, and freedom". It's not easy to convey the ramifications of creating a derivative work.

    pytest itself is MIT.

    What do you think about changing the license of this plugin to MIT?

    opened by nicoddemus 1
  • Adds global data dir for fixtures shared over different modules

    This adds a second lookup location for data files: the 'data' dir inside the tests folder. With this, tests can share the same data files across different modules while still being able to have their own files, which take precedence over the global 'data' dir. This also improves the docs with usage info and provides a tox.ini to test on py27 and py34.

    opened by mauriciosl 1
  • [pre-commit.ci] pre-commit autoupdate

    opened by pre-commit-ci[bot] 0
  • test_method_dir?

    Thank you very much for the plugin. Have you considered including a test_method_dir (or a similar name) to be able to have files for only one method (instead of the whole module)? I don't think it's a good idea to copy all the files in a module when I only want the files for one method. The directory for the method would be inside the module directory; in the example, it would be test_hello/test_a_method, so I could inject what is common to the module or only what is necessary for the method. If you give the OK, I could try to do it myself and open a PR (a rough sketch is below). Thanks.
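
    A minimal sketch of how such a fixture could work, deriving the method-level folder from pytest's built-in request fixture (test_method_dir is the name proposed above, not an existing fixture):

    import shutil
    from pathlib import Path

    import pytest

    @pytest.fixture
    def test_method_dir(request, tmp_path):
        # e.g. test_hello/test_a_method for test_a_method in test_hello.py
        module_dir = Path(request.fspath).with_suffix("")
        source = module_dir / request.node.name
        copy = tmp_path / request.node.name
        shutil.copytree(source, copy)
        return copy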

    opened by panicoenlaxbox 4
  • 1.3.1: pytest is failing

    Just the normal build, install and test cycle used when building the package from a non-root account:

    • "setup.py build"
    • "setup.py install --root </install/prefix>"
    • "pytest with PYTHONPATH pointing to setearch and sitelib inside </install/prefix>
    + PYTHONPATH=/home/tkloczko/rpmbuild/BUILDROOT/python-pytest-datadir-1.3.1-2.fc35.x86_64/usr/lib64/python3.8/site-packages:/home/tkloczko/rpmbuild/BUILDROOT/python-pytest-datadir-1.3.1-2.fc35.x86_64/usr/lib/python3.8/site-packages
    + PYTHONDONTWRITEBYTECODE=1
    + /usr/bin/pytest -ra
    =========================================================================== test session starts ============================================================================
    platform linux -- Python 3.8.9, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
    benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
    Using --randomly-seed=1916038535
    rootdir: /home/tkloczko/rpmbuild/BUILD/pytest-datadir-1.3.1
    plugins: datadir-1.3.1, forked-1.3.0, shutil-1.7.0, virtualenv-1.7.0, expect-1.1.0, httpbin-1.0.0, flake8-1.0.7, timeout-1.4.2, betamax-0.8.1, freezegun-0.4.2, case-1.5.3, isort-1.3.0, aspectlib-1.5.2, asyncio-0.15.1, toolbox-0.5, xprocess-0.17.1, aiohttp-0.3.0, checkdocs-2.7.0, mock-3.6.1, rerunfailures-9.1.1, requests-mock-1.9.3, cov-2.12.1, pyfakefs-4.5.0, cases-3.6.1, flaky-3.7.0, hypothesis-6.14.0, benchmark-3.4.1, xdist-2.3.0, pylama-7.7.1, randomly-3.8.0, Faker-8.8.2
    collected 10 items
    
    . F                                                                                                                                                                  [ 11%]
    tests/test_hello.py ......                                                                                                                                           [ 77%]
    tests/test_pathlib.py s                                                                                                                                              [ 88%]
    tests/test_nonexistent.py .                                                                                                                                          [100%]
    
    ================================================================================= FAILURES =================================================================================
    _______________________________________________________________________________ test session _______________________________________________________________________________
    
    cls = <class '_pytest.runner.CallInfo'>, func = <function call_runtest_hook.<locals>.<lambda> at 0x7fa6f5ad89d0>, when = 'call'
    reraise = (<class '_pytest.outcomes.Exit'>, <class 'KeyboardInterrupt'>)
    
        @classmethod
        def from_call(
            cls,
            func: "Callable[[], TResult]",
            when: "Literal['collect', 'setup', 'call', 'teardown']",
            reraise: Optional[
                Union[Type[BaseException], Tuple[Type[BaseException], ...]]
            ] = None,
        ) -> "CallInfo[TResult]":
            excinfo = None
            start = timing.time()
            precise_start = timing.perf_counter()
            try:
    >           result: Optional[TResult] = func()
    
    /usr/lib/python3.8/site-packages/_pytest/runner.py:311:
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
    
    >       lambda: ihook(item=item, **kwds), when=when, reraise=reraise
        )
    
    /usr/lib/python3.8/site-packages/_pytest/runner.py:255:
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
    
    self = <_HookCaller 'pytest_runtest_call'>, args = (), kwargs = {'item': <CheckdocsItem project>}, notincall = set()
    
        def __call__(self, *args, **kwargs):
            if args:
                raise TypeError("hook calling supports only keyword arguments")
            assert not self.is_historic()
            if self.spec and self.spec.argnames:
                notincall = (
                    set(self.spec.argnames) - set(["__multicall__"]) - set(kwargs.keys())
                )
                if notincall:
                    warnings.warn(
                        "Argument(s) {} which are declared in the hookspec "
                        "can not be found in this hook call".format(tuple(notincall)),
                        stacklevel=2,
                    )
    >       return self._hookexec(self, self.get_hookimpls(), kwargs)
    
    /usr/lib/python3.8/site-packages/pluggy/hooks.py:286:
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
    
    self = <_pytest.config.PytestPluginManager object at 0x7fa6fd062f10>, hook = <_HookCaller 'pytest_runtest_call'>
    methods = [<HookImpl plugin_name='runner', plugin=<module '_pytest.runner' from '/usr/lib/python3.8/site-packages/_pytest/runner...pper name='/dev/null' mode='r' encoding='UTF-8'>> _state='suspended' _in_suspended=False> _capture_fixture=None>>, ...]
    kwargs = {'item': <CheckdocsItem project>}
    
        def _hookexec(self, hook, methods, kwargs):
            # called from all hookcaller instances.
            # enable_tracing will set its own wrapping function at self._inner_hookexec
    >       return self._inner_hookexec(hook, methods, kwargs)
    
    /usr/lib/python3.8/site-packages/pluggy/manager.py:93:
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
    
    hook = <_HookCaller 'pytest_runtest_call'>
    methods = [<HookImpl plugin_name='runner', plugin=<module '_pytest.runner' from '/usr/lib/python3.8/site-packages/_pytest/runner...pper name='/dev/null' mode='r' encoding='UTF-8'>> _state='suspended' _in_suspended=False> _capture_fixture=None>>, ...]
    kwargs = {'item': <CheckdocsItem project>}
    
    >   self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(
            methods,
            kwargs,
            firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
        )
    
    /usr/lib/python3.8/site-packages/pluggy/manager.py:84:
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
    
    hook_impls = [<HookImpl plugin_name='runner', plugin=<module '_pytest.runner' from '/usr/lib/python3.8/site-packages/_pytest/runner...pper name='/dev/null' mode='r' encoding='UTF-8'>> _state='suspended' _in_suspended=False> _capture_fixture=None>>, ...]
    caller_kwargs = {'item': <CheckdocsItem project>}, firstresult = False
    
        def _multicall(hook_impls, caller_kwargs, firstresult=False):
            """Execute a call into multiple python functions/methods and return the
            result(s).
    
            ``caller_kwargs`` comes from _HookCaller.__call__().
            """
            __tracebackhide__ = True
            results = []
            excinfo = None
            try:  # run impl and wrapper setup functions in a loop
                teardowns = []
                try:
                    for hook_impl in reversed(hook_impls):
                        try:
                            args = [caller_kwargs[argname] for argname in hook_impl.argnames]
                        except KeyError:
                            for argname in hook_impl.argnames:
                                if argname not in caller_kwargs:
                                    raise HookCallError(
                                        "hook call must provide argument %r" % (argname,)
                                    )
    
                        if hook_impl.hookwrapper:
                            try:
                                gen = hook_impl.function(*args)
                                next(gen)  # first yield
                                teardowns.append(gen)
                            except StopIteration:
                                _raise_wrapfail(gen, "did not yield")
                        else:
                            res = hook_impl.function(*args)
                            if res is not None:
                                results.append(res)
                                if firstresult:  # halt further impl calls
                                    break
                except BaseException:
                    excinfo = sys.exc_info()
            finally:
                if firstresult:  # first result hooks return a single value
                    outcome = _Result(results[0] if results else None, excinfo)
                else:
                    outcome = _Result(results, excinfo)
    
                # run all wrapper post-yield blocks
                for gen in reversed(teardowns):
                    try:
                        gen.send(outcome)
                        _raise_wrapfail(gen, "has second yield")
                    except StopIteration:
                        pass
    
    >           return outcome.get_result()
    
    /usr/lib/python3.8/site-packages/pluggy/callers.py:208:
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
    
    self = <pluggy.callers._Result object at 0x7fa6f5aaea60>
    
        def get_result(self):
            """Get the result(s) for this hook call.
    
            If the hook was marked as a ``firstresult`` only a single value
            will be returned otherwise a list of results.
            """
            __tracebackhide__ = True
            if self._excinfo is None:
                return self._result
            else:
                ex = self._excinfo
                if _py3:
    >               raise ex[1].with_traceback(ex[2])
    
    /usr/lib/python3.8/site-packages/pluggy/callers.py:80:
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
    
    hook_impls = [<HookImpl plugin_name='runner', plugin=<module '_pytest.runner' from '/usr/lib/python3.8/site-packages/_pytest/runner...pper name='/dev/null' mode='r' encoding='UTF-8'>> _state='suspended' _in_suspended=False> _capture_fixture=None>>, ...]
    caller_kwargs = {'item': <CheckdocsItem project>}, firstresult = False
    
        def _multicall(hook_impls, caller_kwargs, firstresult=False):
            """Execute a call into multiple python functions/methods and return the
            result(s).
    
            ``caller_kwargs`` comes from _HookCaller.__call__().
            """
            __tracebackhide__ = True
            results = []
            excinfo = None
            try:  # run impl and wrapper setup functions in a loop
                teardowns = []
                try:
                    for hook_impl in reversed(hook_impls):
                        try:
                            args = [caller_kwargs[argname] for argname in hook_impl.argnames]
                        except KeyError:
                            for argname in hook_impl.argnames:
                                if argname not in caller_kwargs:
                                    raise HookCallError(
                                        "hook call must provide argument %r" % (argname,)
                                    )
    
                        if hook_impl.hookwrapper:
                            try:
                                gen = hook_impl.function(*args)
                                next(gen)  # first yield
                                teardowns.append(gen)
                            except StopIteration:
                                _raise_wrapfail(gen, "did not yield")
                        else:
    >                       res = hook_impl.function(*args)
    
    /usr/lib/python3.8/site-packages/pluggy/callers.py:187:
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
    
    item = <CheckdocsItem project>
    
        def pytest_runtest_call(item: Item) -> None:
            _update_current_test_var(item, "call")
            try:
                del sys.last_type
                del sys.last_value
                del sys.last_traceback
            except AttributeError:
                pass
            try:
                item.runtest()
            except Exception as e:
                # Store trace info to allow postmortem debugging
                sys.last_type = type(e)
                sys.last_value = e
                assert e.__traceback__ is not None
                # Skip *this* frame
                sys.last_traceback = e.__traceback__.tb_next
    >           raise e
    
    /usr/lib/python3.8/site-packages/_pytest/runner.py:170:
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
    
    item = <CheckdocsItem project>
    
        def pytest_runtest_call(item: Item) -> None:
            _update_current_test_var(item, "call")
            try:
                del sys.last_type
                del sys.last_value
                del sys.last_traceback
            except AttributeError:
                pass
            try:
    >           item.runtest()
    
    /usr/lib/python3.8/site-packages/_pytest/runner.py:162:
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
    
    self = <CheckdocsItem project>
    
        def runtest(self):
    >       desc = self.get_long_description()
    
    /usr/lib/python3.8/site-packages/pytest_checkdocs/__init__.py:29:
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
    
    self = <CheckdocsItem project>
    
        def get_long_description(self):
    >       return Description.from_md(ensure_clean(pep517.meta.load('.').metadata))
    
    /usr/lib/python3.8/site-packages/pytest_checkdocs/__init__.py:60:
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
    
    root = '.'
    
        def load(root):
            """
            Given a source directory (root) of a package,
            return an importlib.metadata.Distribution object
            with metadata build from that package.
            """
            root = os.path.expanduser(root)
            system = compat_system(root)
            builder = functools.partial(build, source_dir=root, system=system)
    >       path = Path(build_as_zip(builder))
    
    /usr/lib/python3.8/site-packages/pep517/meta.py:71:
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
    
    builder = functools.partial(<function build at 0x7fa6fa00baf0>, source_dir='.', system={'build-backend': 'setuptools.build_meta:__legacy__', 'requires': ['setuptools', 'wheel']})
    
        def build_as_zip(builder=build):
            with tempdir() as out_dir:
    >           builder(dest=out_dir)
    
    /usr/lib/python3.8/site-packages/pep517/meta.py:58:
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
    
    source_dir = '.', dest = '/tmp/tmps6v8iqcl', system = {'build-backend': 'setuptools.build_meta:__legacy__', 'requires': ['setuptools', 'wheel']}
    
        def build(source_dir='.', dest=None, system=None):
            system = system or load_system(source_dir)
            dest = os.path.join(source_dir, dest or 'dist')
            mkdir_p(dest)
            validate_system(system)
            hooks = Pep517HookCaller(
                source_dir, system['build-backend'], system.get('backend-path')
            )
    
            with hooks.subprocess_runner(quiet_subprocess_runner):
                with BuildEnvironment() as env:
                    env.pip_install(system['requires'])
    >               _prep_meta(hooks, env, dest)
    
    /usr/lib/python3.8/site-packages/pep517/meta.py:53:
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
    
    hooks = <pep517.wrappers.Pep517HookCaller object at 0x7fa6f5b15460>, env = <pep517.envbuild.BuildEnvironment object at 0x7fa6f5b15d30>, dest = '/tmp/tmps6v8iqcl'
    
        def _prep_meta(hooks, env, dest):
            reqs = hooks.get_requires_for_build_wheel({})
            log.info('Got build requires: %s', reqs)
    
            env.pip_install(reqs)
            log.info('Installed dynamic build dependencies')
    
            with tempdir() as td:
                log.info('Trying to build metadata in %s', td)
    >           filename = hooks.prepare_metadata_for_build_wheel(td, {})
    
    /usr/lib/python3.8/site-packages/pep517/meta.py:36:
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
    
    self = <pep517.wrappers.Pep517HookCaller object at 0x7fa6f5b15460>, metadata_directory = '/tmp/tmphqw_z7w2', config_settings = {}, _allow_fallback = True
    
        def prepare_metadata_for_build_wheel(
                self, metadata_directory, config_settings=None,
                _allow_fallback=True):
            """Prepare a ``*.dist-info`` folder with metadata for this project.
    
            Returns the name of the newly created folder.
    
            If the build backend defines a hook with this name, it will be called
            in a subprocess. If not, the backend will be asked to build a wheel,
            and the dist-info extracted from that (unless _allow_fallback is
            False).
            """
    >       return self._call_hook('prepare_metadata_for_build_wheel', {
                'metadata_directory': abspath(metadata_directory),
                'config_settings': config_settings,
                '_allow_fallback': _allow_fallback,
            })
    
    /usr/lib/python3.8/site-packages/pep517/wrappers.py:184:
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
    
    self = <pep517.wrappers.Pep517HookCaller object at 0x7fa6f5b15460>, hook_name = 'prepare_metadata_for_build_wheel'
    kwargs = {'_allow_fallback': True, 'config_settings': {}, 'metadata_directory': '/tmp/tmphqw_z7w2'}
    
        def _call_hook(self, hook_name, kwargs):
            # On Python 2, pytoml returns Unicode values (which is correct) but the
            # environment passed to check_call needs to contain string values. We
            # convert here by encoding using ASCII (the backend can only contain
            # letters, digits and _, . and : characters, and will be used as a
            # Python identifier, so non-ASCII content is wrong on Python 2 in
            # any case).
            # For backend_path, we use sys.getfilesystemencoding.
            if sys.version_info[0] == 2:
                build_backend = self.build_backend.encode('ASCII')
            else:
                build_backend = self.build_backend
            extra_environ = {'PEP517_BUILD_BACKEND': build_backend}
    
            if self.backend_path:
                backend_path = os.pathsep.join(self.backend_path)
                if sys.version_info[0] == 2:
                    backend_path = backend_path.encode(sys.getfilesystemencoding())
                extra_environ['PEP517_BACKEND_PATH'] = backend_path
    
            with tempdir() as td:
                hook_input = {'kwargs': kwargs}
                compat.write_json(hook_input, pjoin(td, 'input.json'),
                                  indent=2)
    
                # Run the hook in a subprocess
                with _in_proc_script_path() as script:
                    python = self.python_executable
    >               self._subprocess_runner(
                        [python, abspath(str(script)), hook_name, td],
                        cwd=self.source_dir,
                        extra_environ=extra_environ
                    )
    
    /usr/lib/python3.8/site-packages/pep517/wrappers.py:265:
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
    
    cmd = ['/usr/bin/python3', '/usr/lib/python3.8/site-packages/pep517/in_process/_in_process.py', 'prepare_metadata_for_build_wheel', '/tmp/tmp442xzl4j']
    cwd = '/home/tkloczko/rpmbuild/BUILD/pytest-datadir-1.3.1', extra_environ = {'PEP517_BUILD_BACKEND': 'setuptools.build_meta:__legacy__'}
    
        def quiet_subprocess_runner(cmd, cwd=None, extra_environ=None):
            """A method of calling the wrapper subprocess while suppressing output."""
            env = os.environ.copy()
            if extra_environ:
                env.update(extra_environ)
    
    >       check_output(cmd, cwd=cwd, env=env, stderr=STDOUT)
    
    /usr/lib/python3.8/site-packages/pep517/wrappers.py:75:
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
    
    timeout = None
    popenargs = (['/usr/bin/python3', '/usr/lib/python3.8/site-packages/pep517/in_process/_in_process.py', 'prepare_metadata_for_build_wheel', '/tmp/tmp442xzl4j'],)
    kwargs = {'cwd': '/home/tkloczko/rpmbuild/BUILD/pytest-datadir-1.3.1', 'env': {'AR': '/usr/bin/gcc-ar', 'BASH_FUNC_which%%': '(...sh-protection -fcf-protection -fdata-sections -ffunction-sections -flto=auto -flto-partition=none', ...}, 'stderr': -2}
    
        def check_output(*popenargs, timeout=None, **kwargs):
            r"""Run command with arguments and return its output.
    
            If the exit code was non-zero it raises a CalledProcessError.  The
            CalledProcessError object will have the return code in the returncode
            attribute and output in the output attribute.
    
            The arguments are the same as for the Popen constructor.  Example:
    
            >>> check_output(["ls", "-l", "/dev/null"])
            b'crw-rw-rw- 1 root root 1, 3 Oct 18  2007 /dev/null\n'
    
            The stdout argument is not allowed as it is used internally.
            To capture standard error in the result, use stderr=STDOUT.
    
            >>> check_output(["/bin/sh", "-c",
            ...               "ls -l non_existent_file ; exit 0"],
            ...              stderr=STDOUT)
            b'ls: non_existent_file: No such file or directory\n'
    
            There is an additional optional argument, "input", allowing you to
            pass a string to the subprocess's stdin.  If you use this argument
            you may not also use the Popen constructor's "stdin" argument, as
            it too will be used internally.  Example:
    
            >>> check_output(["sed", "-e", "s/foo/bar/"],
            ...              input=b"when in the course of fooman events\n")
            b'when in the course of barman events\n'
    
            By default, all communication is in bytes, and therefore any "input"
            should be bytes, and the return value will be bytes.  If in text mode,
            any "input" should be a string, and the return value will be a string
            decoded according to locale encoding, or by "encoding" if set. Text mode
            is triggered by setting any of text, encoding, errors or universal_newlines.
            """
            if 'stdout' in kwargs:
                raise ValueError('stdout argument not allowed, it will be overridden.')
    
            if 'input' in kwargs and kwargs['input'] is None:
                # Explicitly passing input=None was previously equivalent to passing an
                # empty string. That is maintained here for backwards compatibility.
                if kwargs.get('universal_newlines') or kwargs.get('text'):
                    empty = ''
                else:
                    empty = b''
                kwargs['input'] = empty
    
    >       return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
                       **kwargs).stdout
    
    /usr/lib64/python3.8/subprocess.py:415:
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
    
    input = None, capture_output = False, timeout = None, check = True
    popenargs = (['/usr/bin/python3', '/usr/lib/python3.8/site-packages/pep517/in_process/_in_process.py', 'prepare_metadata_for_build_wheel', '/tmp/tmp442xzl4j'],)
    kwargs = {'cwd': '/home/tkloczko/rpmbuild/BUILD/pytest-datadir-1.3.1', 'env': {'AR': '/usr/bin/gcc-ar', 'BASH_FUNC_which%%': '(...-fcf-protection -fdata-sections -ffunction-sections -flto=auto -flto-partition=none', ...}, 'stderr': -2, 'stdout': -1}
    process = <subprocess.Popen object at 0x7fa6f5aae4f0>
    stdout = b'Traceback (most recent call last):\n  File "/usr/lib/python3.8/site-packages/pep517/in_process/_in_process.py", line...ng pip, instead of https://github.com/user/proj/archive/master.zip use git+https://github.com/user/proj.git#egg=proj\n'
    stderr = None, retcode = 1
    
        def run(*popenargs,
                input=None, capture_output=False, timeout=None, check=False, **kwargs):
            """Run command with arguments and return a CompletedProcess instance.
    
            The returned instance will have attributes args, returncode, stdout and
            stderr. By default, stdout and stderr are not captured, and those attributes
            will be None. Pass stdout=PIPE and/or stderr=PIPE in order to capture them.
    
            If check is True and the exit code was non-zero, it raises a
            CalledProcessError. The CalledProcessError object will have the return code
            in the returncode attribute, and output & stderr attributes if those streams
            were captured.
    
            If timeout is given, and the process takes too long, a TimeoutExpired
            exception will be raised.
    
            There is an optional argument "input", allowing you to
            pass bytes or a string to the subprocess's stdin.  If you use this argument
            you may not also use the Popen constructor's "stdin" argument, as
            it will be used internally.
    
            By default, all communication is in bytes, and therefore any "input" should
            be bytes, and the stdout and stderr will be bytes. If in text mode, any
            "input" should be a string, and stdout and stderr will be strings decoded
            according to locale encoding, or by "encoding" if set. Text mode is
            triggered by setting any of text, encoding, errors or universal_newlines.
    
            The other arguments are the same as for the Popen constructor.
            """
            if input is not None:
                if kwargs.get('stdin') is not None:
                    raise ValueError('stdin and input arguments may not both be used.')
                kwargs['stdin'] = PIPE
    
            if capture_output:
                if kwargs.get('stdout') is not None or kwargs.get('stderr') is not None:
                    raise ValueError('stdout and stderr arguments may not be used '
                                     'with capture_output.')
                kwargs['stdout'] = PIPE
                kwargs['stderr'] = PIPE
    
            with Popen(*popenargs, **kwargs) as process:
                try:
                    stdout, stderr = process.communicate(input, timeout=timeout)
                except TimeoutExpired as exc:
                    process.kill()
                    if _mswindows:
                        # Windows accumulates the output in a single blocking
                        # read() call run on child threads, with the timeout
                        # being done in a join() on those threads.  communicate()
                        # _after_ kill() is required to collect that and add it
                        # to the exception.
                        exc.stdout, exc.stderr = process.communicate()
                    else:
                        # POSIX _communicate already populated the output so
                        # far into the TimeoutExpired exception.
                        process.wait()
                    raise
                except:  # Including KeyboardInterrupt, communicate handled that.
                    process.kill()
                    # We don't call process.wait() as .__exit__ does that for us.
                    raise
                retcode = process.poll()
                if check and retcode:
    >               raise CalledProcessError(retcode, process.args,
                                             output=stdout, stderr=stderr)
    E               subprocess.CalledProcessError: Command '['/usr/bin/python3', '/usr/lib/python3.8/site-packages/pep517/in_process/_in_process.py', 'prepare_metadata_for_build_wheel', '/tmp/tmp442xzl4j']' returned non-zero exit status 1.
    
    /usr/lib64/python3.8/subprocess.py:516: CalledProcessError
    ========================================================================= short test summary info ==========================================================================
    SKIPPED [1] tests/test_pathlib.py:4: Python 2 only
    FAILED ::project - subprocess.CalledProcessError: Command '['/usr/bin/python3', '/usr/lib/python3.8/site-packages/pep517/in_process/_in_process.py', 'prepare_metadata_fo...
    ================================================================== 1 failed, 7 passed, 1 skipped in 8.04s ==================================================================
    
    opened by kloczek 1
  • pytest error on (Windows + RAM drive) when using datadir fixture

    Pytest reports an OSError even with the trivial test case below:

    C:\Users\foobar\repo> type tests\test_datadir.py
    
    def test_datadir(shared_datadir):
        assert True
    

    Gist link for error log here.


    It happens only when the following requirements are both met:

    1. datadir or shared_datadir is actually used in the test case, even if only as a function argument. Tests not using these fixtures at all are fine.
    2. The Windows temp path is on a RAM disk, not a normal drive attached to physical storage.

    For the second point, the above test case succeeds if pytest is instructed to use a normal drive as temp, like:

    pytest --basetemp=C:\Temp -k test_datadir -s tests\
    

    As extra info, here is my RAM drive setup (if it is useful at all):

    C:\Users\foobar\repo> imdisk.exe -l -u 0
    Drive letter: M
    Image file: \BaseNamedObjects\Global\RamDyne8694d353
    Size: 12884901888 bytes (12 GB), Proxy, HDD, Modified.
    
    C:\Users\foobar\repo> echo %TEMP%
    M:\Temp
    

    The issue pytest-dev/pytest#5826 has a symptom very similar to this one, and it ultimately points to a long-unfixed Python bug in Path.resolve(). A comment in Python bug 33016 also has some insight about resolve() being unable to handle the Windows filesystem API properly.

    While it may not be pytest-datadir's responsibility to handle such a failure, how about a warning ahead of time, such as in the README, for similar use cases? Because the bug doesn't manifest unless the datadir plugin is installed and used, people may initially come here searching for a solution. Not to mention the real bug won't go away anytime soon...

    opened by abelcheung 3
  • UNC path breaks _win32_longpath()

    Due to DFS in my environment, I have tests running in a DFS path, which results in a UNC path (\\my.domain\dfs\some\path) being passed to copytree in shared_datadir(). This results in copytree failing with

    OSError: [WinError 123] The filename, directory name, or volume label syntax is incorrect: '\\\\?\\\\\\my.domain\\dfs\\bla...
    

    I traced this back to return '\\\\?\\' + os.path.normpath(path) in plugin._win32_longpath(), which afaict is not correct in the case of a UNC path (see the UNC link above). For those you need to insert an additional UNC\, so the prefix should be '\\\\?\\UNC\\' instead. UNC paths should be recognizable because they start with \\. A sketch of a corrected helper is below.
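
    Based on the rule described above, the helper could special-case UNC paths like this (an illustration of the proposed fix, not necessarily the exact patch that was merged):

    import os
    import sys

    def _win32_longpath(path):
        """Prefix a path so Windows APIs accept very long file names."""
        if sys.platform != "win32":
            return path
        normalized = os.path.normpath(path)
        if normalized.startswith("\\\\"):
            # UNC path such as \\server\share needs the \\?\UNC\ prefix
            return "\\\\?\\UNC\\" + normalized[2:]
        return "\\\\?\\" + normalized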

    opened by bilderbuchi 4
Owner
Gabriel Reis
Software developer, Python enthusiast, passionate about improving the quality of my code, and forever a learner.
pytest plugin that lets you automate actions and assertions with test metrics reporting, executing plain YAML files

pytest-play pytest-play is a codeless, generic, pluggable and extensible automation tool, not necessarily test automation only, based on the fantastic

pytest-dev 67 Dec 1, 2022
pytest plugin providing a function to check if pytest is running.

pytest-is-running pytest plugin providing a function to check if pytest is running. Installation Install with: python -m pip install pytest-is-running

Adam Johnson 21 Nov 1, 2022
A pytest plugin to run an ansible collection's unit tests with pytest.

pytest-ansible-units An experimental pytest plugin to run an ansible collection's unit tests with pytest. Description pytest-ansible-units is a pytest

Community managed Ansible repositories 9 Dec 9, 2022
pytest plugin for a better developer experience when working with the PyTorch test suite

pytest-pytorch What is it? pytest-pytorch is a lightweight pytest-plugin that enhances the developer experience when working with the PyTorch test sui

Quansight 39 Nov 18, 2022
A pytest plugin, that enables you to test your code that relies on a running PostgreSQL Database

This is a pytest plugin, that enables you to test your code that relies on a running PostgreSQL Database. It allows you to specify fixtures for PostgreSQL process and client.

Clearcode 252 Dec 21, 2022
A pytest plugin that enables you to test your code that relies on a running Elasticsearch search engine

pytest-elasticsearch What is this? This is a pytest plugin that enables you to test your code that relies on a running Elasticsearch search engine. It

Clearcode 65 Nov 10, 2022
This is a pytest plugin, that enables you to test your code that relies on a running MongoDB database

This is a pytest plugin, that enables you to test your code that relies on a running MongoDB database. It allows you to specify fixtures for MongoDB process and client.

Clearcode 19 Oct 21, 2022
ApiPy was created for api testing with Python pytest framework which has also requests, assertpy and pytest-html-reporter libraries.

ApiPy was created for api testing with Python pytest framework which has also requests, assertpy and pytest-html-reporter libraries. With this f

Mustafa 1 Jul 11, 2022
Playwright Python tool practice pytest pytest-bdd screen-play page-object allure cucumber-report

pytest-ui-automatic Playwright Python tool practice pytest pytest-bdd screen-play page-object allure cucumber-report How to run Run tests execute_test

moyu6027 11 Nov 8, 2022
Pytest-rich - Pytest + rich integration (proof of concept)

pytest-rich Leverage rich for richer test session output. This plugin is not pub

Bruno Oliveira 170 Dec 2, 2022
Selects tests affected by changed files. Continuous test runner when used with pytest-watch.

This is a pytest plug-in which automatically selects and re-executes only tests affected by recent changes. How is this possible in dynamic language l

Tibor Arpas 614 Dec 30, 2022
a plugin for py.test that changes the default look and feel of py.test (e.g. progressbar, show tests that fail instantly)

pytest-sugar pytest-sugar is a plugin for pytest that shows failures and errors instantly and shows a progress bar. Requirements You will need the fol

Teemu 963 Dec 28, 2022
A command-line tool and Python library and Pytest plugin for automated testing of RESTful APIs, with a simple, concise and flexible YAML-based syntax

1.0 Release See here for details about breaking changes with the upcoming 1.0 release: https://github.com/taverntesting/tavern/issues/495 Easier API t

null 909 Dec 15, 2022
Local continuous test runner with pytest and watchdog.

pytest-watch -- Continuous pytest runner pytest-watch a zero-config CLI tool that runs pytest, and re-runs it when a file in your project changes. It

Joe Esposito 675 Dec 23, 2022
API Test Automation with Requests and Pytest

api-testing-requests-pytest Install Make sure you have Python 3 installed on your machine. Then: 1.Install pipenv sudo apt-get install pipenv 2.Go to

Sulaiman Haque 2 Nov 21, 2021
a wrapper around pytest for executing tests to look for test flakiness and runtime regression

bubblewrap a wrapper around pytest for assessing flakiness and runtime regressions a cs implementations practice project How to Run: First, install de

Anna Nagy 1 Aug 5, 2021
pytest plugin for distributed testing and loop-on-failures testing modes.

xdist: pytest distributed testing plugin The pytest-xdist plugin extends pytest with some unique test execution modes: test run parallelization: if yo

pytest-dev 1.1k Dec 30, 2022
:game_die: Pytest plugin to randomly order tests and control random.seed

pytest-randomly Pytest plugin to randomly order tests and control random.seed. Features All of these features are on by default but can be disabled wi

pytest-dev 471 Dec 30, 2022
A set of pytest fixtures to test Flask applications

pytest-flask An extension of pytest test runner which provides a set of useful tools to simplify testing and development of the Flask extensions and a

pytest-dev 433 Dec 23, 2022