A command-line tool, Python library, and pytest plugin for automated testing of RESTful APIs, with a simple, concise, and flexible YAML-based syntax


1.0 Release

See here for details about breaking changes with the upcoming 1.0 release: https://github.com/taverntesting/tavern/issues/495

Easier API testing

Tavern is a pytest plugin, command-line tool and Python library for automated testing of APIs, with a simple, concise and flexible YAML-based syntax. It's very simple to get started, and highly customisable for complex tests. Tavern supports testing RESTful APIs as well as MQTT based APIs.

The best way to use Tavern is with pytest. Tavern comes with a pytest plugin, so all you have to do is install pytest and Tavern, write your tests in .tavern.yaml files, and run pytest. This gives you access to the whole pytest ecosystem, letting you do things like regularly run your tests against a test server and report failures, or generate HTML reports.

You can also integrate Tavern into your own test framework or continuous integration setup using the Python library, or use the command line tool, tavern-ci with bash scripts and cron jobs.

To learn more, check out the examples or the complete documentation. If you're interested in contributing to the project take a look at the GitHub repo.


First, run pip install tavern.

Then, let's create a basic test, test_minimal.tavern.yaml:

# Every test file has one or more tests...
test_name: Get some fake data from the JSON placeholder API

# ...and each test has one or more stages (e.g. an HTTP request)
stages:
  - name: Make sure we have the right ID

    # Define the request to be made...
    request:
      url: https://jsonplaceholder.typicode.com/posts/1
      method: GET

    # ...and the expected response code and body
    response:
      status_code: 200
      json:
        id: 1

This file can have any name, but if you intend to run it via the pytest plugin, only files matching test_*.tavern.yaml will be picked up.
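Stages within a test run in order, and a value extracted from one response can be saved and reused in a later stage. The sketch below extends the minimal example against the same placeholder API; the second endpoint and the saved-variable name are illustrative:

```yaml
test_name: Chain two requests against the placeholder API

stages:
  - name: Fetch the first post and save the author's user id
    request:
      url: https://jsonplaceholder.typicode.com/posts/1
      method: GET
    response:
      status_code: 200
      save:
        json:
          post_user_id: userId

  - name: Look up the user who wrote the post
    request:
      url: "https://jsonplaceholder.typicode.com/users/{post_user_id}"
      method: GET
    response:
      status_code: 200
```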

This can then be run like so:

$ pip install tavern[pytest]
$ py.test test_minimal.tavern.yaml  -v
=================================== test session starts ===================================
platform linux -- Python 3.5.2, pytest-3.4.2, py-1.5.2, pluggy-0.6.0 -- /home/taverntester/.virtualenvs/tavernexample/bin/python3
cachedir: .pytest_cache
rootdir: /home/taverntester/myproject, inifile:
plugins: tavern-0.7.2
collected 1 item

test_minimal.tavern.yaml::Get some fake data from the JSON placeholder API PASSED   [100%]

================================ 1 passed in 0.14 seconds =================================

It is strongly advised that you use Tavern with pytest - not only does it give you a lot of control over test discovery and execution, there is also a huge number of plugins to improve your development experience. If you absolutely can't use pytest for some reason, use the tavern-ci command line interface:

$ pip install tavern
$ tavern-ci --stdout test_minimal.tavern.yaml
2017-11-08 16:17:00,152 [INFO]: (tavern.core:55) Running test : Get some fake data from the JSON placeholder API
2017-11-08 16:17:00,153 [INFO]: (tavern.core:69) Running stage : Make sure we have the right ID
2017-11-08 16:17:00,239 [INFO]: (tavern.core:73) Response: '<Response [200]>' ({
  "userId": 1,
  "id": 1,
  "title": "sunt aut facere repellat provident occaecati excepturi optio reprehenderit",
  "json": "quia et suscipit\nsuscipit recusandae consequuntur expedita et cum\nreprehenderit molestiae ut ut quas totam\nnostrum rerum est autem sunt rem eveniet architecto"
2017-11-08 16:17:00,239 [INFO]: (tavern.printer:9) PASSED: Make sure we have the right ID [200]

Why not Postman, Insomnia or pyresttest etc?

Tavern is a focused tool which does one thing well: automated testing of APIs.

Postman and Insomnia are excellent tools which cover a wide range of use-cases for RESTful APIs, and indeed we use Tavern alongside Postman. However, specifically with regards to automated testing, Tavern has several advantages over Postman:

  • A full-featured Python environment for writing easily reusable custom validation functions
  • Testing of MQTT-based systems in tandem with RESTful APIs
  • Seamless integration with pytest to keep all your tests in one place
  • A simpler, less verbose and clearer testing language
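As an illustration of the first point, a reusable check is just an ordinary Python function that Tavern can invoke on a response via its external-function mechanism; because it is plain Python, it can also be unit tested on its own. The function name and the stub response below are hypothetical, with the stub standing in for the requests.Response object Tavern would pass:

```python
# Hypothetical reusable validation function; in Tavern it would be referenced
# from YAML via the external-function mechanism (verify_response_with).
def check_ids_ascending(response):
    """Assert that the ids in a JSON list response are in ascending order."""
    ids = [item["id"] for item in response.json()]
    assert ids == sorted(ids), "ids are not in ascending order: %s" % ids


# Standalone demo: a stub standing in for requests.Response.
class StubResponse:
    def json(self):
        return [{"id": 1}, {"id": 2}, {"id": 3}]


check_ids_ascending(StubResponse())
print("validation passed")
```

In a real test, a YAML stage would point at such a function by dotted name (e.g. verify_response_with: function: mymodule:check_ids_ascending, where mymodule is your own module).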

Tavern does not do many of the things Postman and Insomnia do. For example, Tavern does not have a GUI nor does it do API monitoring or mock servers. On the other hand, Tavern is free and open-source and is a more powerful tool for developers to automate tests.

pyresttest is a similar tool to Tavern for testing RESTful APIs, but it is no longer actively developed. On top of MQTT testing, Tavern has several other advantages over pyresttest which add up to a better developer experience overall:

  • Cleaner test syntax which is more intuitive, especially for non-developers
  • Validation functions are more flexible and easier to use
  • Better explanations of why a test failed

Hacking on Tavern

If you want to add a feature to Tavern or just play around with it locally, it's a good idea to first create a local development environment (this page has a good primer on working with development environments in Python). After you've created your development environment, just pip install tox and run tox to run the unit tests. If you want to run the integration tests, make sure you have Docker installed and run tox -c tox-integration.ini (bear in mind this might take a while). It's that simple!

If you want to develop things in tavern, enter your virtualenv and run pip install -r requirements.txt to install the library, any requirements, and other useful development options.

Tavern uses [black](https://github.com/ambv/black) to keep all of the code formatted consistently. There is a pre-commit hook to run black which can be enabled by running pre-commit install.

If you want to add a feature to get merged back into mainline Tavern:

  • Add the feature you want
  • Add some tests for your feature:
    • If you are adding some utility functionality such as improving verification of responses, adding some unit tests might be best. These are in the tests/unit/ folder and are written using Pytest.
    • If you are adding more advanced functionality like extra validation functions, or some functionality that directly depends on the format of the input YAML, it might also be useful to add some integration tests. At the time of writing, this is done by adding an example flask endpoint in tests/integration/server.py and a corresponding Tavern YAML test file in the same directory. This will be cleaned up a bit once we have a proper plugin system implemented.
  • Open a pull request.
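As a sketch of that workflow, a hypothetical /double endpoint added to the integration server might get a companion YAML file like the one below. The endpoint, field names, and expected body are made up for illustration, and {host} is assumed to be supplied via the test configuration:

```yaml
# test_double.tavern.yaml - hypothetical companion to a /double endpoint
# added to tests/integration/server.py
test_name: Make sure the double endpoint doubles its input

stages:
  - name: Double a number
    request:
      url: "{host}/double"
      method: POST
      json:
        number: 21
    response:
      status_code: 200
      json:
        result: 42
```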


Tavern makes use of several excellent open-source projects:


Tavern is currently maintained by

  • @michaelboulton

Issues

  • Variable Passing: int

    Hello gang!

    I believe that there might be an issue reading integers from one file when they've been included in another.

    For instance, foo.yaml:

        owner-id: 1001

    bar.yaml:

        !include foo.yaml
        owner-id: {owner-id}
        # or
        owner-id: "{owner-id:d}"

    So far, I haven't been able to get this to return an int to the calling document. If explicitly casting to an int in bar.yaml:

        !include foo.yaml
        owner-id: !!int "{owner-id:d}"

    The following error is produced: ValueError: invalid literal for int() with base 10: '{testid' because the cast is happening before the variable is returned.

    If I have an incorrect implementation somewhere, please let me know! Thanks!

    opened by BusinessFawn 14
  • Bind all MQTT Client callbacks


    Fix for #753 where I bound all the available MQTT callbacks that would be useful to a developer facing connection issues. There are still a few callbacks that could be bound, but I haven't ever needed them.

    Ran the code formatter and smoke tests on the code. Unit tests seem to have unrelated failures, and there are only calls to the logger in this. If there is a good way you want to unit test the new callbacks, let me know.

    opened by RFRIEDM-Trimble 13
  • Parametrize without combinations.


    I am trying to create a parametrized test but don't want the combinations. For example:

        key: p1
        vals:
          - a
          - b
        key: p2
        vals:
          - x
          - y

    This gives 4 tests, (a,x), (a,y), (b,x), (b,y), but I only want 2 tests: (a,x), (b,y).

    Another use case is that the pass criteria changes with each parameter. For example:

        key: input
        vals:
          - 1
          - 3
        key: double
        vals:
          - 2
          - 6

    Is that possible? thanks.
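For context, Tavern's parametrization (added as part of the pytest integration) is expressed through marks, roughly as sketched below from memory; each parametrize entry multiplies the generated tests, which is exactly the cartesian-product behaviour this issue asks to avoid. The endpoint is illustrative:

```yaml
test_name: Parametrized test (sketch)

marks:
  - parametrize:
      key: p1
      vals:
        - a
        - b
  - parametrize:
      key: p2
      vals:
        - x
        - y

stages:
  - name: Use the parametrized values
    request:
      url: "{host}/things/{p1}/{p2}"
      method: GET
    response:
      status_code: 200
```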

    opened by rvanderwall 12
  • Tavern '1.0' changes


    There are a few things that need changing and a few issues open that are hard to fix without breaking backwards compatibility. To make sure we know what is going to break in advance, I'm going to try and lay it all out here. This is almost all just related to tighter integration with pytest.

    pytest integration

    • tavern-ci will either be removed or just become an alias for py.test

    • A 'base' REST/MQTT request/response object to make it easier for plugins to verify responses

    • delay_after and delay_before will become built-in fixtures or hooks. This is a breaking API change.

    • parametrization with pytest - running the same test, but with a different formatted value (or possibly a different dict/list/value)

    • marking tests - this will just be for use with the -m flag. A list of tags applied to each test which can be used to run a certain subset of tests.

    • This is sort of covered by parametrization and fixtures, but we want to be able to add 'plugin' blocks depending on a fixture. eg https://github.com/taverntesting/tavern/issues/113 . This would probably be a breaking API change, but I doubt many people are using plugins at the moment so it shouldn't be too bad.


    • Some sort of integration with pytest fixtures - unsure how this will work now, but it will allow you to do some arbitrary python code and return a dictionary/list/value for the request/response/etc. This will probably be implemented such that we can send a value to the fixture if it's a generator, so fixtures can be written like this:

          def test_complicated_return_value():
              val = calculate_something_complicated()
              response = (yield val)
              expected = calculate_complicated_response()
              assert response.body["value"] == expected

      and referenced from the YAML:

          sent_value: !fromfixture "tavern_complicated_request_val"

    • extra hooks to allow people to run things before/after/during tests. For example, a hook called tavern_before_test which is run before every test and is called with the associated request and YamlItem.

    Either one of these should cover all use cases, but both of them being available would definitely be the best case scenario.


    • xfailing tests - this is probably going to be hidden from external use but it would be helpful to be able to write a deliberately broken test for tavern to make sure that it can be caught and reported to the user appropriately.
    • skipif with pytest - eg, skip a test if running against a certain server. This will provide similar but different functionality to marking tests. I'm not sure of a nice way to implement this other than using exec(), but that might just have to happen.


    • Pytest only has the concept of a test, not the concept of a stage like Tavern does. We don't want to have to reimplement all the possible pytest behaviour for stages as well as tests, so adding parametrization or marking to individual stages is probably not going to happen. skip/skipif should be possible. Fixtures will almost always have to be run per-stage, so there may be some rudimentary 'decoration' of stages and tests to allow for this.

    Not going to happen as part of 1.0

    • Better test reporting - this is quite a lot of work, but better error reporting that actually prints the stage that failed and why it failed, with arrows next to it like pytest does, would help a lot.

    Note that none of this is set in stone, and no schema for the above changes has been decided on. This probably isn't going to be pushed out for at least a month or two. If there are any other (minor) breaking API changes that would greatly improve the user experience, feel free to suggest them here.

    Type: Enhancement 
    opened by michaelboulton 12
  • Error loading plugin paho-mqtt


    I'm using Tavern tests to test my REST API here.

    On my Windows machine, I can run all tests without problem.

    When running my tests on a Ubuntu host or inside an Ubuntu container, I get the following error:

    E   tavern.util.exceptions.PluginLoadError: Error loading plugin paho-mqtt = tavern._plugins.mqtt.tavernhook - [Errno 20] Not a directory: '/home/stefan/venv-tng/lib/python3.6/site-packages/tavern-0.26.1-py3.6.egg/tavern/_plugins/mqtt/schema.yaml'

    I don't get this. I have installed Tavern along with all other dependencies in a virtualenv. The error happens both when using python setup.py install and python setup.py develop.

    opened by stefanbschneider 11
  • cannot install from other pip_index


    When trying to install this package with

    pip3 install -i https://devops.myURL/artifactory/api/pypi/pypi-all/simple -r requirements.txt

    with requirements.txt


    one gets the following error:

    Collecting pytest (from -r requirements.txt (line 1))
      Downloading https://devops.myURL/artifactory/api/pypi/pypi-all/packages/77/64/3a76f6fbb0f392d60c5960f2b2fbad8c2b802dada87ca6d1b99c0083a929/pytest-3.6.3-py2.py3-none-any.whl (195kB)
    Collecting tavern[pytest] (from -r requirements.txt (line 2))
      Downloading https://devops.myURL/artifactory/api/pypi/pypi-all/packages/83/9f/419752827e66422a144597a40f4d53b1805f6af57ee64006f547bf1f89a0/tavern-0.14.4.tar.gz (42kB)
        Complete output from command python setup.py egg_info:
        Download error on https://pypi.org/simple/pytest-runner/: [Errno 99] Cannot assign requested address -- Some packages may not be found!
        Couldn't find index page for 'pytest-runner' (maybe misspelled?)
        Download error on https://pypi.org/simple/: [Errno 99] Cannot assign requested address -- Some packages may not be found!
        No local packages or working download links found for pytest-runner
        Traceback (most recent call last):
          File "<string>", line 1, in <module>
          File "/tmp/pip-build-_4ygw1zi/tavern/setup.py", line 87, in <module>
            "tests": TESTS_REQUIRE
          File "/home/one/.local/lib/python3.5/site-packages/setuptools/__init__.py", line 130, in setup
          File "/home/one/.local/lib/python3.5/site-packages/setuptools/__init__.py", line 125, in _install_setup_requires
          File "/home/one/.local/lib/python3.5/site-packages/setuptools/dist.py", line 514, in fetch_build_eggs
          File "/home/one/.local/lib/python3.5/site-packages/pkg_resources/__init__.py", line 773, in resolve
          File "/home/one/.local/lib/python3.5/site-packages/pkg_resources/__init__.py", line 1056, in best_match
            return self.obtain(req, installer)
          File "/home/one/.local/lib/python3.5/site-packages/pkg_resources/__init__.py", line 1068, in obtain
            return installer(requirement)
          File "/home/one/.local/lib/python3.5/site-packages/setuptools/dist.py", line 581, in fetch_build_egg
            return cmd.easy_install(req)
          File "/home/one/.local/lib/python3.5/site-packages/setuptools/command/easy_install.py", line 670, in easy_install
            raise DistutilsError(msg)
        distutils.errors.DistutilsError: Could not find suitable distribution for Requirement.parse('pytest-runner')
    Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-_4ygw1zi/tavern/

    The installation fails because this (CI) machine only has connectivity to this registry, not to the internet.

    cc @max-wittig

    opened by gocarlos 11
  • How to get interpolated variables to stay dictionaries instead of being coerced into strings?


    Two dictionaries are loaded from fixtures in conftest.py: tmp_author_alias_a and tmp_author_alias_b.

    They are same structure with differing values:

    def tmp_author_alias_a():
        return {
            "first_name": "Patrick",
            "last_name": "Neve",
            # . . .
            "author_id": "dcfab002-d02f-4895-9557-b55f304af92d",
            "id": "ea36f093-1cae-403b-9c6e-3fe18b617221",
        }
    They appear in the Tavern test file under usefixtures:

      - usefixtures:
          - encode_web_token
          - conjure_uuid
          - conjure_uuid_2
          - tmp_author_alias_a
          - tmp_author_alias_b

    They need to be in the request json as list of dictionaries.

    Entered this way:

        - {tmp_author_alias_a}
        - {tmp_author_alias_b}

    this throws BadSchemaError: cannot define an empty value in test . . .

    This way:

        - "{tmp_author_alias_a}"
        - "{tmp_author_alias_b}"

    the list is pulled in as a node_class in dict_util.py format_keys (line 39):

        formatted = val  # node_class: ['temp_author_alias_a', 'temp_author_alias_b']
        box_vars = Box(variables)

    That is, as a list of strings. Each is then sent back through format_keys() (line 48):

        formatted = [format_keys(item, box_vars) for item in val]

    as a string into which the dict is interpolated (lines 51-53):

        elif isinstance(val, (ustr, str)):
            formatted = val.format(**box_vars)

    Is there any way to get the list of dictionaries into the request json?
    A nice poser for your afternoon diversion! ;-)
    opened by pmneve 10
  • Would like a "foreach" test item - to run the same test with multiple values (enhancement request)

    The problem I want to solve is running multiple tests with almost the same data - without creating YAML for each possible test. An example might be "list users" API and then validate that each user has a login directory (for example) in a subsequent stage.

    There are two variations of this that come to mind:

    • specify values to set in a JSON (or CSV) list in a file
    • perform a query in a previous step which provides a saved JSON list to be used in subsequent tests.

    And then walk through these values, and provide them to all the "stages" which would be nested below this for this particular subtest.

    After this "stage" was completed, test execution would continue with the next stage at the outer level.

    I hope this is clear...

    I would imagine a "foreach" tag to be used below the stages level - or below the name level. And below it would be more stages: tags and stages which are to be repeated using values specified either from a file or in previous saved JSON saved from a prior stage.

    I'm currently doing this by doing textual substitution in a template file, and generating many YAML files - one for each iteration. I'd like to just do this as part of the test - without this extra python program, and then a separate test to run this program against the template - and so on...

    Type: Enhancement 
    opened by Alan-R 10
  • [Feature Request] Detailed Reporting for Tavern Testing with the format of HTML,others etc


    Hi All,

    Is there a plan in the pipeline to add report generation (HTML or some other standard format) in the near future? It would be good to add a reporting feature capturing detailed reports: test name, stage names, status with success/failure indicators, the validation performed, and the request and response.

    As of now we are using pytest's junitxml generation and the pytest-html plugin for reporting. These options give only high-level reports; it would be good to provide details at a granular level. This feature would be very useful for wider adoption of Tavern for API testing.

    Type: Enhancement 
    opened by kvenkat88 9
  • Tavern integration issue with [tox, pytest, django]


    Hi. I want to run Tavern integration tests against the test database created in my test Django environment. I also want to start the Django server before running the tests (without typing the command and starting the server manually). Tox manages all tests. Is there some way to do this? If not, is there a way to enhance Tavern to support it? Thank you.

    PS: also I use pytest-django library.

    opened by zurek11 9
  • How to test a non-json body response


    Hi all

    I would like to check for a non-json body response. In my case the response is a simple: True or False.

    I can't find any documentation stating how to check a text body instead of a json body.

    opened by alphadijkstra 9
  • "Contain" in json response

    Is it possible to match only a few parameters in the returned JSON? I have a big JSON response and I don't want to check all fields; I would like to check only the key elements.

    opened by michaldev 1
  • mqtt: fix unexpected behaviour


    Currently mqtt_response with the unexpected key set to true fails if no message is received. This is strange, as we don't expect a message, and it should only fail if a message is received.

    Formatted stage (abridged):

        mqtt_publish:
          topic: inet6/add
        mqtt_response:
          topic: vallumd/will
          payload: !anything ''
          timeout: 5
          unexpected: true
    E   tavern.util.exceptions.TestFailError: Test 'add IPv4 IP to IPv6 ipset' failed:
        - Expected '<Tavern YAML sentinel for anything>' on topic 'vallumd/will' but no such message received

    Fix this by suppressing the error in MQTTResponse._await_response() and returning an empty dict if the unexpected key is set.

    Signed-off-by: Stijn Tintel [email protected]

    opened by stintel 2
  • Incorrect test file path printed when multiple targets are supplied


    I originally submitted this as a pytest issue but they suggested that I report it here instead.

    In each of two directories, I have one Tavern test file that will run with pytest:

    C02VF3KDHTD8:api schooler$ ls 2-disruptive/test_hardware_negative.tavern.yaml 3-destructive/test_components.tavern.yaml
    2-disruptive/test_hardware_negative.tavern.yaml	3-destructive/test_components.tavern.yaml

    When I execute them both with one invocation, the summary output shows an incorrect test file path.

    C02VF3KDHTD8:api schooler$ python3 -m pytest 2-disruptive/ 3-destructive/
    ========================================================================================================== test session starts ===========================================================================================================
    platform darwin -- Python 3.9.13, pytest-7.1.2, pluggy-1.0.0
    rootdir: <REDACTED>, configfile: pytest.ini
    plugins: tavern-1.23.3
    collected 2 items
    2-disruptive/test_hardware_negative.tavern.yaml .                                                                                                                                                                                  [ 50%]
    2-disruptive/test_components.tavern.yaml .                                                                                                                                                                                         [100%]
    =========================================================================================================== 2 passed in 5.74s ============================================================================================================

    The summary output should show 3-destructive/test_components.tavern.yaml as the last test, not 2-disruptive/test_components.tavern.yaml, which doesn't exist.

    Additional comment from the pytest thread that was closed: "I suggest to report this in the tavern repository, as I suspect there might be an issue on how they are creating the test items."

    Type: Bug Priority: Low 
    opened by schooler-hpe 2
  • Look at removing pypy tests


    Seeing as the pypy tests take a long time to run, and I doubt many people use pypy to run Tavern, since it runs several times slower than normal CPython for some reason

    opened by michaelboulton 0
  • Tavern test in 2 separate yaml files, first one containing several stages common to many specific stages in 2nd file; be able to use saved variables from the first file.


    Chaining stages across 2 separate files, for example: test_setup.tavern.yaml - used by many other tests in different/separate yaml files, and saves some variables; test_execution.tavern.yaml - depends on test_setup.tavern.yaml being executed first, and should be able to use the variables saved by test_setup.tavern.yaml.

    I have read about reusable code, but would like to have more stages in the reusable file that is included.
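For reference, Tavern does have a mechanism aimed at this kind of reuse: a stage defined in an included file can be given an id and pulled into other tests with a ref stage. The sketch below is from memory, with illustrative endpoint and variable names:

```yaml
# common.tavern.yaml - shared stage definitions
name: Common test stages

stages:
  - id: setup_stage
    name: Create the shared resource
    request:
      url: "{host}/setup"
      method: POST
    response:
      status_code: 201
      save:
        json:
          resource_id: id
```

```yaml
# test_execution.tavern.yaml - reuses the shared stage by reference
includes:
  - !include common.tavern.yaml

test_name: Use the shared setup stage and its saved variable

stages:
  - type: ref
    id: setup_stage
  - name: Use the saved variable
    request:
      url: "{host}/resource/{resource_id}"
      method: GET
    response:
      status_code: 200
```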

    opened by bharatidesai 3