Declarative HTTP Testing for Python and anything else

Gabbi

Release Notes

Gabbi is a tool for running HTTP tests where requests and responses are represented in a declarative YAML-based form. The simplest test looks like this:

tests:
- name: A test
  GET: /api/resources/id

See the docs for more details on the many features and formats for setting request headers and bodies and evaluating responses.

Gabbi is tested with Python 3.6, 3.7, 3.8, 3.9 and pypy3.

Tests can be run using unittest-style test runners, pytest, or from the command line with the gabbi-run script.
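
For example, with unittest-style discovery a small test module can build a suite from the YAML files using gabbi's driver. A minimal sketch, where the gabbits directory name and the localhost:8001 target are assumptions for illustration:

import os

from gabbi import driver

# By convention the YAML files live in a "gabbits" directory next to
# this module.
TESTS_DIR = 'gabbits'

def load_tests(loader, tests, pattern):
    # Build a test suite from each YAML file, aimed at a server
    # assumed to be listening on localhost:8001.
    test_dir = os.path.join(os.path.dirname(__file__), TESTS_DIR)
    return driver.build_tests(test_dir, loader,
                              host='localhost', port=8001)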

There is a gabbi-demo repository which provides a tutorial via its commit history. The demo builds a simple API using gabbi to facilitate test driven development.

Purpose

Gabbi works to bridge the gap between human-readable YAML files that represent HTTP requests and expected responses and the obscured realm of Python-based, object-oriented unit tests in the style of the unittest module and its derivatives.

Each YAML file represents an ordered list of HTTP requests along with the expected responses. This allows a single file to represent a process in the API being tested. For example:

  • Create a resource.
  • Retrieve a resource.
  • Delete a resource.
  • Retrieve a resource again to confirm it is gone.

At the same time it is still possible to ask gabbi to run just one request. If it is in a sequence of tests, those tests prior to it in the YAML file will be run (in order). In any single process any test will only be run once. Concurrency is handled such that one file runs in one process.

These features mean that it is possible to create tests that are useful for both humans (as tools for improving and developing APIs) and automated CI systems.

Testing and Developing Gabbi

To get started, after cloning the repository, you should install the development dependencies:

$ pip install -r requirements-dev.txt

If you prefer to keep things isolated you can create a virtual environment:

$ virtualenv gabbi-venv
$ . gabbi-venv/bin/activate
$ pip install -r requirements-dev.txt

Gabbi is set up to be developed and tested using tox (installed via requirements-dev.txt). To run the built-in tests (the YAML files are in the directories gabbi/tests/gabbits_* and loaded by the file gabbi/test_*.py), you call tox:

tox -epep8,py37

If you have the dependencies installed (or a warmed up virtualenv) you can run the tests by hand and exit on the first failure:

python -m subunit.run discover -f gabbi | subunit2pyunit

Testing can be limited to individual modules by specifying them after the tox invocation:

tox -epep8,py37 -- test_driver test_handlers

If you wish to avoid running tests that connect to internet hosts, set GABBI_SKIP_NETWORK to True.

Comments
  • Coerce JSON types into correct values for later $RESPONSE replacements

    Resolves #147: $RESPONSE replacements that contained integer or decimal values would wind up as quoted strings after substitution, e.g. {"id": 825} would later become {"id": "825"}.

    This passes tox -epep8, but with tox -epy27 the first few tests fail, unfortunately. This patch works for my use case, however; I could not POST data to particular endpoints of an API because the number values were wrapped in quotes (and the API was doing type checks on the values).

    This may not be an ideal (or eloquent) solution, but I've also tried to keep performance in mind in view of large JSON response bodies, namely that I expect the exception cases to be more common than exceptional, so I've added some additional checking to see if it's really worth parsing particular values as strings (line no. 359) and/or if it even looks like a number in the first place (line no. 376). That said, if this performance hit is not an issue, it certainly is a lot more readable without the checks.

    I also chose to do two try/excepts instead of simply using float: first we try parsing as int and then as float, as I prefer the resulting JSON to be correct, i.e. I would rather an id field that was initially an int not be cast into a float. For example, consider a response of {"id": 825} and a single try/except that used float. The value would parse, but the resulting JSON (from json.dumps) would be {"id": 825.0}. This pragmatically doesn't matter, as I'm sure most endpoints will accept a decimal value with an appended .0 as a valid integer, but I felt the semantics would be a surprise to other users of the lib and it's still possible that certain APIs might have an issue with it.
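
    A minimal sketch of the two-step coercion being described, not the patch itself:

    def coerce(value):
        # Try int first so an id like "825" stays 825 rather than
        # becoming 825.0; fall back to float, then leave the string
        # alone if neither parse succeeds.
        for converter in (int, float):
            try:
                return converter(value)
            except ValueError:
                pass
        return value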

    And thanks for all the effort you've put into the lib!

    opened by justanotherdot 20
  • Unable to make a relative Content Handler import from the command-line

    On the command line, importing a custom Response Handler using a relative path requires manipulation of the PYTHONPATH environment variable to add . to the list of paths.

    Should Gabbi allow relative imports to work out-of-the-box?

    e.g.

    gabbi-run -r foo.bar:ExampleHandler < example.yaml
    

    ... fails with, ModuleNotFoundError: No module named 'foo'.

    Updating PYTHONPATH...

    PYTHONPATH=${PYTHONPATH}:. gabbi-run -r foo.bar:ExampleHandler < example.yaml
    

    ... works.
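
    A sketch of the requested out-of-the-box behavior: gabbi-run could prepend the working directory to the import path before loading handler modules (hypothetical, not current gabbi code):

    import sys

    # Hypothetical: let "foo.bar:ExampleHandler" resolve relative to
    # the current working directory without touching PYTHONPATH. An
    # empty string in sys.path means "the current directory".
    if '' not in sys.path:
        sys.path.insert(0, '')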

    opened by scottwallacesh 17
  • Allow loading python objects from yaml

    It can be useful to write custom objects to compare values.

    For example, I need to ensure an output is equal to .NAN.

    Because .NAN == .NAN always returns false, we currently can't compare it with assert_equals().

    With the unsafe yaml loader we can register a custom method to check NAN, for example:

    import numpy
    import yaml

    class IsNAN(object):
        @classmethod
        def constructor(cls, loader, node):
            # Build an IsNAN instance for any !ISNAN node in the YAML.
            return cls()

        def __eq__(self, other):
            # Compare equal to anything that is NaN (NaN != NaN directly).
            return numpy.isnan(other)

    yaml.add_constructor(u'!ISNAN', IsNAN.constructor)
    
    opened by sileht 17
  • extra verbosity to include request/response bodies

    Currently it can be somewhat tricky to debug unexpected outcomes, as verbose: true only prints headers.

    In my case, I wanted to verify that a CSRF token was included in a form submission. The simplest way to check the request body was to start netcat and change my test's URL to http://localhost:9999.

    It would be useful if gabbi provided a way to inspect the entire data being sent over the wire.

    opened by FND 15
  • Ability to run gabbi test cases individually

    The ability to run an individual gabbi test without any of the tests preceding it in the yaml file could be useful. I created a project where I drive gabbi test cases using Robot Framework (https://github.com/dkt26111/robotframework-gabbilibrary). In order for that to work I explicitly set the prior field of the gabbi test case being run to None.
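
    A hypothetical sketch of that workaround, assuming a suite built by gabbi's driver and a prior attribute on each generated test case:

    import unittest

    def run_single(suite, test_name):
        # Walk the (possibly nested) suite, detach the matching test
        # from its predecessors, and run it on its own.
        for test in suite:
            if isinstance(test, unittest.TestSuite):
                run_single(test, test_name)
            elif test_name in test.id():
                test.prior = None  # assumed attribute linking prior tests
                unittest.TextTestRunner().run(test)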

    opened by dkt26111 14
  • Verbose misses response body

    I have got the following test spec:

    tests:
    -   name: auth
        verbose: all
        url: /api/sessions
        method: POST
        data: "asdsad"
        status: 200
    

    in which data is deliberately not valid JSON. The response results in a 400 instead of the expected 200. Verbose is set to all, but it still does not print the response body, although it detects non-empty content:

    ... #### auth ####
    > POST http://localhost:7000/api/sessions
    > user-agent: gabbi/1.40.0 (Python urllib3)
    
    < 400 Bad Request
    < content-length: 48
    < date: Wed, 19 Aug 2020 06:11:24 GMT
    
    ✗ gabbi-runner.input_auth
    
    FAIL: gabbi-runner.input_auth
            Traceback (most recent call last):
              File "/usr/lib/python3/dist-packages/gabbi/case.py", line 94, in wrapper
                func(self)
              File "/usr/lib/python3/dist-packages/gabbi/case.py", line 143, in test_request
                self._run_test()
              File "/usr/lib/python3/dist-packages/gabbi/case.py", line 550, in _run_test
                self._assert_response()
              File "/usr/lib/python3/dist-packages/gabbi/case.py", line 188, in _assert_response
                self._test_status(self.test_data['status'], self.response['status'])
              File "/usr/lib/python3/dist-packages/gabbi/case.py", line 591, in _test_status
                self.assert_in_or_print_output(observed_status, statii)
              File "/usr/lib/python3/dist-packages/gabbi/case.py", line 654, in assert_in_or_print_output
                self.assertIn(expected, iterable)
              File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 417, in assertIn
                self.assertThat(haystack, Contains(needle), message)
              File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 498, in assertThat
                raise mismatch_error
            testtools.matchers._impl.MismatchError: '400' not in ['200']
    ----------------------------------------------------------------------
    

    The expected behavior:

    • the response body is printed to stdout alongside the response headers.
    opened by avkonst 11
  • Q: Persist (across > 1 tests) value to variable based on JSON response?

    Example algorithm to be expressed in YAML:

    • Create an object, given an object name.
    • Store in "$FOO" (or similar) the UUID given to the object in the JSON response.
    • Do unrelated tests.
    • Perform a GET using "$FOO" (the UUID).

    Thanks as always!

    enhancement 
    opened by josdotso 11
  • Variable is not replaced with the previous result in a request body

    Seen from https://gabbi.readthedocs.io/en/latest/format.html#any-previous-test

    There are 2 requests:

    1. post a task, will return a taskId
    2. query the task with the taskId
    • previous test returns: {"dataSet": {"header": {"serverIp": "xxx.xxx.xxx.xxx", "version": "1.0", "errorKeys": [{"error_key" : "2-0-0"}], "errorInfo": "", "returnCode": 0}, "data": {"taskId": "3008929"}}}

    • yaml define data: taskId: $HISTORY['start live migrate a vm'].$RESPONSE['$.dataSet.data.taskId']

    • actual result: "data": { "taskId": "$.dataSet.data.taskId" }

    opened by taget 9
  • jsonhandler: allow reading yaml data from disk

    This commit aims to change the jsonhandler to be able to read data from disk if it is a yaml file.

    Note:

    • Simply replacing the loads call with yaml.safe_load is not enough due to the nature of the NaN checker requiring an unsafe load [1].

    closes #253

    [1] https://github.com/cdent/gabbi/commit/98adca65e05b7de4f1ab2bf90ab0521af5030f35

    opened by trevormccasland 9
  • pytest not working correctly!

    Hi, I have been trying gabbi to write some simple tests and had luck using gabbi-run, but I need a Jenkins report so I tried the py.test version, with the loader code looking like this:

    import os
    
    from gabbi import driver
    
    # By convention the YAML files are put in a directory named
    # "gabbits" that is in the same directory as the Python test file. 
    TESTS_DIR = 'gabbits'
    
    def test_gabbits():
        test_dir = os.path.join(os.path.dirname(__file__), TESTS_DIR)
        test_generator = driver.py_test_generator(
            test_dir, host="http://www.nfdsfdsfdsf.se", port=80)
    
        for test in test_generator:
            yield test
    

    The YAML file looks very simple:

    tests:
      - name: Do get to a faulty site
        url: /sdsdsad
        method: GET
        status: 200
    

    The problem is that the test passes even though the URL does not exist, so the test should fail with a connection refused. I have also tried a site returning 404, but the test still passes. Am I doing something wrong here?

    opened by keyhan 9
  • Add yaml-based tests for host header and sni checking

    The addition of server_hostname to the httpclient PoolManager, without sufficient testing, has revealed some issues:

    • The minimum urllib3 required is too low. server_hostname was introduced in 1.24.x
    • However, there is a bug [1] in PoolManager when mixing schemes in the same pool manager. This is being fixed so the minimum urllib3 will need to be higher still.

    Tests are added here, and the minimum value for urllib3 will be set when a release is made.

    Some of the tests are "live" meaning they require network, and can be skipped via the live test fixture if the GABBI_SKIP_NETWORK env variable is set to "true".

    [1] https://github.com/urllib3/urllib3/issues/2534

    Fixes #307 Fixes #309

    opened by cdent 8
  • gabbi doesn't support client cert yet

    Gabbi doesn't support client cert yet

    It would help if gabbi could support: gabbi-run ... --cacert /etc/kubernetes/pki/ca.crt --cert /etc/kubernetes/pki/client.crt --key /etc/kubernetes/pki/client.key ...

    opened by wu-wenxiang 4
  • Socket leak with large YAML test files

    I have a YAML file with nearly 2000 tests in it. When invoked from the command line, I run out of open file handles due to large numbers of sockets left open:

    ERROR: gabbi-runner.input_/foo/bar/__test_l__
    	[Errno 24] Too many open files
    

    By default a Linux user has 1024 file handles:

    $ ulimit -n
    1024
    

    Inspecting the open file handles:

    $ ls -l /proc/$(pgrep gabbi-run)/fd | awk '{print $NF}' | cut -f1 -d: | sort | uniq -c
          1 0
          2 /path/to/a/file.txt
          1 /path/to/another/file.yaml
       1021 socket
    
    opened by scottwallacesh 3
  • Consider per-suite pre & post executables

    Like fixtures, but a call to an external executable, for when gabbi-run is being used.

    This could be explicit, by putting something in the yaml file, or implicit off the name of the yaml file. That is:

    • if gabbit is foo.yaml
    • if foo-start and foo-end exist in the same dir and are executable

    Either way, when the start executable is called, gabbi should save, as a list, the line-separated stdout (if any) that it produced, and provide that as args (or stdin?) to foo-end.

    This would allow passing along things like the pids of started services.
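
    A hypothetical sketch of how gabbi-run might wire this up (nothing here exists in gabbi today):

    import subprocess

    def with_suite_hooks(gabbit_path, run_suite):
        # For foo.yaml: run foo-start, run the suite, then run foo-end
        # with foo-start's stdout lines passed along as arguments.
        base = gabbit_path.rsplit('.yaml', 1)[0]
        start = subprocess.run([base + '-start'], capture_output=True,
                               text=True, check=True)
        lines = start.stdout.splitlines()
        try:
            run_suite()
        finally:
            subprocess.run([base + '-end'] + lines, check=False)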

    /cc @FND for sanity check

    enhancement 
    opened by cdent 7
  • some fixtures that "capture" no longer work with the removal of testtools

    In https://github.com/cdent/gabbi/pull/279 testtools was removed.

    Fixtures in the openstack community that do output capturing rely on some "end of test" handling in testtools to dump the accumulated data. You can see this by adding a LOG.critical("hi") somewhere in (e.g.) the placement code and causing a test to fail. Dropping to gabbi <2 makes it work again.

    We're definitely not going to add testtools back in, but the test case subclass in gabbi itself may be able to handle the data gathering that's required. Some investigation is required.

    /cc @lbragstad for awareness

    opened by cdent 0
  • Faster test development with gold files

    There is a cool method to speed up development of tests. It would be great if gabbi supported it too.

    Here is the idea:

    1. a test defines that a response should be compared with a gold file (the reference to the gold file can be configurable per test)

    2. gabbi runs tests with a new flag 'generate-gold-files', which forces gabbi to capture response bodies and headers and (re-)write gold files containing the captured response data

    3. developer reviews the gold files (usually happens one by one as tests are added one by one during development)

    4. gabbi runs tests as usual

      a) if a test has a reference to a gold file, it captures the actual response output and compares it with the gold file
      b) if the actual output matches the gold file content, verification is considered passed
      c) otherwise the test fails

    This would allow me to reduce the size of my test files by at least half.
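
    A hypothetical sketch of the proposed flow; the flag name and file layout are invented for illustration:

    import json
    import os

    GOLD_DIR = 'gold'
    # Hypothetical flag: when set, (re-)write gold files instead of comparing.
    GENERATE = os.environ.get('GABBI_GENERATE_GOLD_FILES') == 'true'

    def check_gold(test_name, response_body):
        path = os.path.join(GOLD_DIR, test_name + '.json')
        actual = json.dumps(response_body, indent=2, sort_keys=True)
        if GENERATE:
            with open(path, 'w') as f:
                f.write(actual)
            return True
        with open(path) as f:
            return f.read() == actual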

    opened by avkonst 3
  • test files with - in the name can lead to failing tests when looking for content-type

    Bear with me, this is hard to explain

    Python v 3.6.9

    gabbi: 1.49.0

    A test file named device-types.yaml with a test of:

    tests:                                                                          
    - name: get only 405                                                            
      POST: /device-types                                                           
      status: 405    
    

    errors with the following when run in a unittest-style harness:

        b'Traceback (most recent call last):'
        b'  File "/home/cdent/.uhana/lib/python3.6/site-packages/gabbi/handlers/core.py", line 68, in action'
        b'    response_value = str(response[header])'
        b'  File "/home/cdent/.uhana/lib/python3.6/site-packages/urllib3/_collections.py", line 156, in __getitem__'
        b'    val = self._container[key.lower()]'
        b"KeyError: 'content-type'"
        b''
        b'During handling of the above exception, another exception occurred:'
        b''
        b'Traceback (most recent call last):'
        b'  File "/home/cdent/.uhana/lib/python3.6/site-packages/gabbi/suitemaker.py", line 96, in do_test'
        b'    return test_method(*args, **kwargs)'
        b'  File "/home/cdent/.uhana/lib/python3.6/site-packages/gabbi/case.py", line 95, in wrapper'
        b'    func(self)'
        b'  File "/home/cdent/.uhana/lib/python3.6/site-packages/gabbi/case.py", line 149, in test_request'
        b'    self._run_test()'
        b'  File "/home/cdent/.uhana/lib/python3.6/site-packages/gabbi/case.py", line 556, in _run_test'
        b'    self._assert_response()'
        b'  File "/home/cdent/.uhana/lib/python3.6/site-packages/gabbi/case.py", line 196, in _assert_response'
        b'    handler(self)'
        b'  File "/home/cdent/.uhana/lib/python3.6/site-packages/gabbi/handlers/base.py", line 54, in __call__'
        b'    self.action(test, item, value=value)'
        b'  File "/home/cdent/.uhana/lib/python3.6/site-packages/gabbi/handlers/core.py", line 72, in action'
        b'    header, response.keys()))'
        b"AssertionError: 'content-type' header not present in response: KeysView(HTTPHeaderDict({'Vary': 'Origin', 'Date': 'Tue, 24 Mar 2020 14:17:33 GMT', 'Content-Length': '0', 'status': '405', 'reason': 'Method Not Allowed'}))"
        b''
    

    However, rename the file to foo.yaml and the test works; or run device-types.yaml with gabbi-run and the tests work. Presumably this is something to do with test naming.

    So the short-term workaround is to rename the file, but this needs to be fixed because using - in filenames is idiomatic for gabbi.

    opened by cdent 1
Releases (2.3.0)
  • 2.3.0 (Sep 3, 2021)

    • For the $ENVIRON and $RESPONSE substitutions it is now possible to cast the value to a type of int, float, str, or bool.
    • The JSONHandler is now more strict about how it detects that a body content is JSON, avoiding some errors where the content-type header suggests JSON but the content cannot be decoded as such.
    • Better error message when content cannot be decoded.
    • Addition of the disable_response_handler test setting for those cases when the test author has no control over the content-type header and it is wrong.