py.test fixture for benchmarking code

Overview


A pytest fixture for benchmarking code. It will group the tests into rounds that are calibrated to the chosen timer.

See calibration and FAQ.

  • Free software: BSD 2-Clause License

Installation

pip install pytest-benchmark

Documentation

For the latest release: pytest-benchmark.readthedocs.org/en/stable.

For the master branch (may include documentation fixes): pytest-benchmark.readthedocs.io/en/latest.

Examples

But first, a prologue:

This plugin tightly integrates into pytest. To use it effectively you should know a thing or two about pytest first. Take a look at the introductory material or watch some talks.

A few notes:

  • This plugin benchmarks functions and only functions. If you want to measure a block of code or a whole program you will need to write a wrapper function.
  • In a test you can only benchmark one function. If you want to benchmark many functions, write more tests or use parametrization (http://docs.pytest.org/en/latest/parametrize.html); see the sketch after this list.
  • To run the benchmarks you simply use pytest to run your "tests". The plugin will automatically do the benchmarking and generate a result table. Run pytest --help for more details.
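
For example, a minimal sketch of the parametrization approach mentioned above (the function and the sizes are made up for illustration):

import pytest

# Hypothetical function under test -- a stand-in for your own code.
def build_sorted(size):
    return sorted(range(size, 0, -1))

# One test per input size; pytest-benchmark reports each parametrized test as its own row.
@pytest.mark.parametrize("size", [100, 1000, 10000])
def test_build_sorted(benchmark, size):
    result = benchmark(build_sorted, size)
    assert result[0] == 1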

This plugin provides a benchmark fixture. This fixture is a callable object that will benchmark any function passed to it.

Example:

import time

def something(duration=0.000001):
    """
    Function that needs some serious benchmarking.
    """
    time.sleep(duration)
    # You may return anything you want, like the result of a computation
    return 123

def test_my_stuff(benchmark):
    # benchmark something
    result = benchmark(something)

    # Extra code, to verify that the run completed correctly.
    # Sometimes you may want to check the result, fast functions
    # are no good if they return incorrect results :-)
    assert result == 123

You can also pass extra arguments:

def test_my_stuff(benchmark):
    benchmark(time.sleep, 0.02)

Or even keyword arguments:

def test_my_stuff(benchmark):
    benchmark(something, duration=0.02)

Another pattern seen in the wild, that is not recommended for micro-benchmarks (very fast code) but may be convenient:

def test_my_stuff(benchmark):
    @benchmark
    def something():  # unnecessary function call
        time.sleep(0.000001)

A better way is to just benchmark the final function:

def test_my_stuff(benchmark):
    benchmark(time.sleep, 0.000001)  # way more accurate results!

If you need fine-grained control over how the benchmark is run (for example a setup function, or exact control of iterations and rounds), there's a special mode - pedantic:

def my_special_setup():
    ...

def test_with_setup(benchmark):
    benchmark.pedantic(something, setup=my_special_setup, args=(1, 2, 3), kwargs={'foo': 'bar'}, iterations=10, rounds=100)
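
Per-test configuration is also possible through the benchmark marker (the same marker that appears in the issues quoted further down); the option values below are only illustrative:

import time

import pytest

# group/min_rounds/disable_gc/warmup are regular marker options; the values here are arbitrary.
@pytest.mark.benchmark(group="sleep", min_rounds=5, disable_gc=True, warmup=False)
def test_grouped_sleep(benchmark):
    benchmark(time.sleep, 0.000001)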

Screenshots

Normal run:

Screenshot of pytest summary

Compare mode (--benchmark-compare):

Screenshot of pytest summary in compare mode

Histogram (--benchmark-histogram):

Histogram sample

Also, it has nice tooltips.

Development

To run all the tests run:

tox

Credits

Comments
  • Elasticsearch report backend

    Hello,

    we found your plugin very useful. I wrote this PR so that we can store information about benchmarks in Elasticsearch. As we have plenty of services and want to track them in CI, this is a more suitable option for us than files. We can then also generate reports in Kibana (see the attached Kibana graph screenshot).

    I moved saving to files into FileReportBackend and created a new ElasticsearchReportBackend. The elasticsearch package is only an optional dependency. I hope I kept the spirit of your awesome plugin and did not bend it too much.

    opened by Artimi 30
  • Feature request: number=1 or max_rounds

    I realise how much effort has gone into getting a reasonable average benchmark.

    However I have just run into a use case where the unit under test must run exactly once. It's not so much a benchmark as an indicative measurement. The unit is inserting objects into a database (within a complex sequence), so runs after the first are not representative.

    A bit of an edge case I know.

    For now I'm using:

    t = timeit.timeit(sync_objects, number=1)
    assert t < 1
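
    As a side note, the pedantic mode described earlier can be pinned to a single execution; a minimal sketch, with sync_objects standing in for the function from this issue:

    def sync_objects():
        ...  # stand-in for the unit under test

    def test_sync_objects_once(benchmark):
        # rounds=1, iterations=1 and the default of zero warmup rounds run the target exactly once
        benchmark.pedantic(sync_objects, rounds=1, iterations=1)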

    opened by gavingc 21
  • Number of benchmark warmup rounds is sometimes not enough for PyPy

    See https://travis-ci.org/thedrow/drf-benchmarks/jobs/66443524#L420 for an example. I need a way to specify the minimum number of warmup rounds for a benchmark so that I'll be able to verify that the JIT has been triggered.

    measurement 
    opened by thedrow 17
  • Release a new version on pypi?

    The current version that's out there has a Python 3 syntax error (except clause) so it's broken and not installable -- might make sense to release a new one since that error has been fixed?

    opened by aldanor 16
  • Marker can't be found on py2?

    I was trying to run the lazy-object-proxy tests manually using pytest and the latest pytest-benchmark from git, and weirdly enough, it works just fine on Python 3 but doesn't work on Python 2 because pytest is unable to find the marker. The two environments are pretty much identical aside from the Python version, and the same version of pytest-benchmark is properly installed in both. Any ideas why this could happen?

    _________________________ ERROR collecting tests/test_lazy_object_proxy.py _________________________
    tests/test_lazy_object_proxy.py:1900: in <module>
        @pytest.mark.benchmark(group="prototypes")
    ../envs/py2/lib/python2.7/site-packages/_pytest/mark.py:183: in __getattr__
        self._check(name)
    ../envs/py2/lib/python2.7/site-packages/_pytest/mark.py:198: in _check
        raise AttributeError("%r not a registered marker" % (name,))
    E   AttributeError: 'benchmark' not a registered marker
    opened by aldanor 13
  • Compatibility issue with pytest 7.2: Package `py` is used but not listed as dependency

    The package py is used at least here but is not listed in the dependencies.

    Until version 7.1.x, Pytest would require py, but this has been dropped with Pytest 7.2, leading to the following error:

    INTERNALERROR> Traceback (most recent call last):
    INTERNALERROR>   File "/Users/hhip/Library/Caches/pypoetry/virtualenvs/vpype-l5n3HO8W-py3.10/lib/python3.10/site-packages/_pytest/main.py", line 266, in wrap_session
    INTERNALERROR>     config._do_configure()
    INTERNALERROR>   File "/Users/hhip/Library/Caches/pypoetry/virtualenvs/vpype-l5n3HO8W-py3.10/lib/python3.10/site-packages/_pytest/config/__init__.py", line 1037, in _do_configure
    INTERNALERROR>     self.hook.pytest_configure.call_historic(kwargs=dict(config=self))
    INTERNALERROR>   File "/Users/hhip/Library/Caches/pypoetry/virtualenvs/vpype-l5n3HO8W-py3.10/lib/python3.10/site-packages/pluggy/_hooks.py", line 277, in call_historic
    INTERNALERROR>     res = self._hookexec(self.name, self.get_hookimpls(), kwargs, False)
    INTERNALERROR>   File "/Users/hhip/Library/Caches/pypoetry/virtualenvs/vpype-l5n3HO8W-py3.10/lib/python3.10/site-packages/pluggy/_manager.py", line 80, in _hookexec
    INTERNALERROR>     return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
    INTERNALERROR>   File "/Users/hhip/Library/Caches/pypoetry/virtualenvs/vpype-l5n3HO8W-py3.10/lib/python3.10/site-packages/pluggy/_callers.py", line 60, in _multicall
    INTERNALERROR>     return outcome.get_result()
    INTERNALERROR>   File "/Users/hhip/Library/Caches/pypoetry/virtualenvs/vpype-l5n3HO8W-py3.10/lib/python3.10/site-packages/pluggy/_result.py", line 60, in get_result
    INTERNALERROR>     raise ex[1].with_traceback(ex[2])
    INTERNALERROR>   File "/Users/hhip/Library/Caches/pypoetry/virtualenvs/vpype-l5n3HO8W-py3.10/lib/python3.10/site-packages/pluggy/_callers.py", line 39, in _multicall
    INTERNALERROR>     res = hook_impl.function(*args)
    INTERNALERROR>   File "/Users/hhip/Library/Caches/pypoetry/virtualenvs/vpype-l5n3HO8W-py3.10/lib/python3.10/site-packages/pytest_benchmark/plugin.py", line 440, in pytest_configure
    INTERNALERROR>     bs = config._benchmarksession = BenchmarkSession(config)
    INTERNALERROR>   File "/Users/hhip/Library/Caches/pypoetry/virtualenvs/vpype-l5n3HO8W-py3.10/lib/python3.10/site-packages/pytest_benchmark/session.py", line 38, in __init__
    INTERNALERROR>     self.logger = Logger(level, config=config)
    INTERNALERROR>   File "/Users/hhip/Library/Caches/pypoetry/virtualenvs/vpype-l5n3HO8W-py3.10/lib/python3.10/site-packages/pytest_benchmark/logger.py", line 20, in __init__
    INTERNALERROR>     self.term = py.io.TerminalWriter(file=sys.stderr)
    INTERNALERROR> AttributeError: module 'py' has no attribute 'io'
    

    This can be fixed by manually installing py or by adding py to the project's dependencies.

    opened by abey79 10
  • help: estimate BigO for multiple functions?

    Hi, I'd like to use this package to estimate the big-O complexity of multiple functions. I wonder what the practical way to implement this is. Currently I can get the benchmark statistics for one input size, and I have to manually change the input size and run it again to get a curve of function vs. input size.

    The code is like this:

    import numpy as np

    size = 100
    x = np.random.randn(size)

    # f1, f2 and f3 are the functions under test
    def test_f1(benchmark):
        benchmark(f1)

    def test_f2(benchmark):
        benchmark(f2)

    def test_f3(benchmark):
        benchmark(f3)


    Thanks.

    opened by yitang 10
  • benchmarking side-effectful code

    I'm working on benchmarking some functions that modify their input. The input is a list of dictionaries, and the code sorts the list and also adds things to the dictionaries. As a result, the benchmarks are not entirely accurate, because the first iteration modifies the input and the subsequent iterations have much less work to do, since the input is already sorted.

    I can "fix" this problem by doing something (expensive) like a deepcopy before the benchmarked function runs; however, this will add to the running-time statistics. Any thoughts?
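
    One possible approach, sketched here with hypothetical names (sort_and_annotate stands in for the mutating function from this issue): pedantic mode's setup callable runs before each round, outside the timed region, and if it returns an (args, kwargs) tuple, those are passed to the benchmarked function.

    import copy

    def sort_and_annotate(records):
        # stand-in for the mutating function described above
        records.sort(key=lambda r: r["id"])
        for r in records:
            r["seen"] = True
        return records

    def test_sort_and_annotate(benchmark):
        original = [{"id": 3}, {"id": 1}, {"id": 2}]  # pristine input, never mutated

        def setup():
            # the deepcopy happens here, so it is not included in the measured time
            return (copy.deepcopy(original),), {}

        benchmark.pedantic(sort_and_annotate, setup=setup, rounds=50)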

    opened by vmagdin 10
  • Issue with Xdist plugin: impossible to serialize

    I have several tests written with pytest-benchmark. I also use the xdist plugin to distribute my tests over more than one process. Starting py.test with xdist gives this output:

    py.test -n 2
    ============================= test session starts =============================
    platform win32 -- Python 2.7.3 -- py-1.4.26 -- pytest-2.6.4
    plugins: benchmark, xdist
    gw0 C / gw1 I
    INTERNALERROR> Traceback (most recent call last):
    INTERNALERROR>   File "C:\Python27\lib\site-packages\_pytest\main.py", line 82, in wrap_session
    INTERNALERROR>     config.hook.pytest_sessionstart(session=session)
    INTERNALERROR>   File "C:\Python27\lib\site-packages\_pytest\core.py", line 413, in __call__
    INTERNALERROR>     return self._docall(methods, kwargs)
    INTERNALERROR>   File "C:\Python27\lib\site-packages\_pytest\core.py", line 424, in _docall
    INTERNALERROR>     res = mc.execute()
    INTERNALERROR>   File "C:\Python27\lib\site-packages\_pytest\core.py", line 315, in execute
    INTERNALERROR>     res = method(**kwargs)
    INTERNALERROR>   File "C:\Python27\lib\site-packages\xdist\dsession.py", line 480, in pytest_sessionstart
    INTERNALERROR>     nodes = self.nodemanager.setup_nodes(putevent=self.queue.put)
    INTERNALERROR>   File "C:\Python27\lib\site-packages\xdist\slavemanage.py", line 45, in setup_nodes
    INTERNALERROR>     nodes.append(self.setup_node(spec, putevent))
    INTERNALERROR>   File "C:\Python27\lib\site-packages\xdist\slavemanage.py", line 54, in setup_node
    INTERNALERROR>     node.setup()
    INTERNALERROR>   File "C:\Python27\lib\site-packages\xdist\slavemanage.py", line 223, in setup
    INTERNALERROR>     self.channel.send((self.slaveinput, args, option_dict))
    INTERNALERROR>   File "C:\Python27\lib\site-packages\execnet\gateway_base.py", line 681, in send
    INTERNALERROR>     self.gateway._send(Message.CHANNEL_DATA, self.id, dumps_internal(item))
    INTERNALERROR>   File "C:\Python27\lib\site-packages\execnet\gateway_base.py", line 1285, in dumps_internal
    INTERNALERROR>     return _Serializer().save(obj)
    INTERNALERROR>   File "C:\Python27\lib\site-packages\execnet\gateway_base.py", line 1303, in save
    INTERNALERROR>     self._save(obj)
    INTERNALERROR>   File "C:\Python27\lib\site-packages\execnet\gateway_base.py", line 1321, in _save
    INTERNALERROR>     dispatch(self, obj)
    INTERNALERROR>   File "C:\Python27\lib\site-packages\execnet\gateway_base.py", line 1402, in save_tuple
    INTERNALERROR>     self._save(item)
    INTERNALERROR>   File "C:\Python27\lib\site-packages\execnet\gateway_base.py", line 1321, in _save
    INTERNALERROR>     dispatch(self, obj)
    INTERNALERROR>   File "C:\Python27\lib\site-packages\execnet\gateway_base.py", line 1398, in save_dict
    INTERNALERROR>     self._write_setitem(key, value)
    INTERNALERROR>   File "C:\Python27\lib\site-packages\execnet\gateway_base.py", line 1392, in _write_setitem
    INTERNALERROR>     self._save(value)
    INTERNALERROR>   File "C:\Python27\lib\site-packages\execnet\gateway_base.py", line 1319, in _save
    INTERNALERROR>     raise DumpError("can't serialize %s" % (tp,))
    INTERNALERROR> DumpError: can't serialize <class 'pytest_benchmark.plugin.NameWrapper'>
    
    opened by JayZar21 9
  • machine_info['cpu'] become always unknown

    Because the dependency py-cpuinfo v6.0.0 changed its API (https://github.com/workhorsy/py-cpuinfo/pull/123/files),

    machine_info['cpu']['brand'], machine_info['cpu']['vendor_id'] and machine_info['cpu']['hardware'] now always have the value "unknown".

    Here is a quote from py-cpuinfo's README.md:

    Raw Fields

    These fields are pulled directly from the CPU and are unverified. They may contain expected results. Other times they may contain wildly unexpected results or garbage. So it would be a bad idea to rely on them.

    | key                 | Example value                              | Return Format |
    | :------------------ | :----------------------------------------- | :------------ |
    | "vendor_id_raw"     | "GenuineIntel"                             | string        |
    | "hardware_raw"      | "BCM2708"                                  | string        |
    | "brand_raw"         | "Intel(R) Core(TM) i7 CPU 870 @ 2.93GHz"   | string        |
    | "arch_string_raw"   | "x86_64"                                   | string        |

    opened by miurahr 8
  • allow pytest to default to skip

    @ionelmc Hello again!

    This allows:

    # tox.ini
    [pytest]
    addopts = --benchmark-skip
    

    and one would only need to put --benchmark-only for the small subset of invocations/envs where you actually want to benchmark, eliminating --benchmark-skip everywhere else.

    opened by ofek 8
  • HTML reports

    Feature request: Hi team, when we use pytest-benchmark with CI solutions like Jenkins, having an option to save results as HTML or to export them as an image would make it easier for end users to see the results. Currently we need to go through the console output or export to JSON (and looking through JSON is cumbersome), so an option to export to HTML would add value, I believe. I tried https://pytest-html.readthedocs.io/en/latest/user_guide.html, but it doesn't contain much info for benchmark results.

    opened by nirupbbnk 0
  • Return the first, instead of the last result in benchmark fixture

    The current behaviour is to return the last result of multiple iterations. This makes testing stateful functions difficult. For example:

    import pytest
    
    class foo():
        def __init__(self):
            self.x = 0
    
        def inc(self):
            self.x = self.x + 1
            return self.x
    
    
    @pytest.mark.benchmark
    def test_inc(benchmark):
        f = foo()
    
        res = benchmark(f.inc)
        assert res == 1
    

    The above snippet only passes with --benchmark-disable and fails with benchmarks enabled. I know it's possible to get the number of executions from the benchmark object, but the expected result might not be as easy to calculate as above.

    One alternative is:

    res = f.inc()
    assert res == 1
    if (benchmark.enabled):
       benchmark(f.inc)
    

    but that complains/warns that the benchmark fixture is unused, so the correct workaround would be:

    res = f.inc()
    assert res == 1
    if (benchmark.enabled):
       benchmark(f.inc)
    else:
       benchmark(lambda :None)
    
    opened by jvesely 1
  • skip setup function time

    Hi Team,

    I have a simple test scenario where I want to measure the time taken to add 1k entries and the time taken to delete 1k entries in the system.

    Example: the problem is that when I want to measure the time taken to delete the 1000 entries, I have to add back 1000 entries to the system every time (so I made adding entries the setup function). Deleting takes 0.5 secs but adding 1k entries takes 9.5 secs, so when I run 100 rounds and take an average it is measured as 10 secs, whereas I only want to measure the 0.5 secs. Can I skip time measurement of the setup function?

    Is there a way to skip the time taken to run the setup function while benchmarking?

    I am open to suggestions. Are there any other ways to do this?

    opened by visheshh 0
  • compare-fail with different criteria per test or group

    I have a setup where I am using --benchmark-compare-fail=mean:15%

    I would like to specify a different percentage for each group of tests, because some tests have varying performance while others are more stable. Is it possible to specify this in code, per test?

    I want to run all the tests in one go, as all tests share a large run-time-generated data set and generating it takes most of the time of running the tests. Otherwise I could have just separated the tests into different runs with different arguments.

    opened by mortalisk 7
  • Storage not relative to cwd fails

    I am trying to give the --benchmark-storage parameter a network drive path. This fails with an error: ValueError: '/network/path/.benchmarks' does not start with '/tmp/jenkins-komodo-f_scout_ci/workspace/ert-performance'

    
    INTERNALERROR> Traceback (most recent call last):
    INTERNALERROR>   File "/tmp/jenkins-komodo-f_scout_ci/workspace/ert-performance/env/lib64/python3.6/site-packages/_pytest/main.py", line 264, in wrap_session
    INTERNALERROR>     config._do_configure()
    INTERNALERROR>   File "/tmp/jenkins-komodo-f_scout_ci/workspace/ert-performance/env/lib64/python3.6/site-packages/_pytest/config/__init__.py", line 992, in _do_configure
    INTERNALERROR>     self.hook.pytest_configure.call_historic(kwargs=dict(config=self))
    INTERNALERROR>   File "/tmp/jenkins-komodo-f_scout_ci/workspace/ert-performance/env/lib64/python3.6/site-packages/pluggy/_hooks.py", line 277, in call_historic
    INTERNALERROR>     res = self._hookexec(self.name, self.get_hookimpls(), kwargs, False)
    INTERNALERROR>   File "/tmp/jenkins-komodo-f_scout_ci/workspace/ert-performance/env/lib64/python3.6/site-packages/pluggy/_manager.py", line 80, in _hookexec
    INTERNALERROR>     return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
    INTERNALERROR>   File "/tmp/jenkins-komodo-f_scout_ci/workspace/ert-performance/env/lib64/python3.6/site-packages/pluggy/_callers.py", line 60, in _multicall
    INTERNALERROR>     return outcome.get_result()
    INTERNALERROR>   File "/tmp/jenkins-komodo-f_scout_ci/workspace/ert-performance/env/lib64/python3.6/site-packages/pluggy/_result.py", line 60, in get_result
    INTERNALERROR>     raise ex[1].with_traceback(ex[2])
    INTERNALERROR>   File "/tmp/jenkins-komodo-f_scout_ci/workspace/ert-performance/env/lib64/python3.6/site-packages/pluggy/_callers.py", line 39, in _multicall
    INTERNALERROR>     res = hook_impl.function(*args)
    INTERNALERROR>   File "/tmp/jenkins-komodo-f_scout_ci/workspace/ert-performance/env/lib64/python3.6/site-packages/pytest_benchmark/plugin.py", line 441, in pytest_configure
    INTERNALERROR>     bs.handle_loading()
    INTERNALERROR>   File "/tmp/jenkins-komodo-f_scout_ci/workspace/ert-performance/env/lib64/python3.6/site-packages/pytest_benchmark/session.py", line 197, in handle_loading
    INTERNALERROR>     compared_benchmark=compared_benchmark,
    INTERNALERROR>   File "/tmp/jenkins-komodo-f_scout_ci/workspace/ert-performance/env/lib64/python3.6/site-packages/pluggy/_hooks.py", line 265, in __call__
    INTERNALERROR>     return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
    INTERNALERROR>   File "/tmp/jenkins-komodo-f_scout_ci/workspace/ert-performance/env/lib64/python3.6/site-packages/pluggy/_manager.py", line 80, in _hookexec
    INTERNALERROR>     return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
    INTERNALERROR>   File "/tmp/jenkins-komodo-f_scout_ci/workspace/ert-performance/env/lib64/python3.6/site-packages/pluggy/_callers.py", line 60, in _multicall
    INTERNALERROR>     return outcome.get_result()
    INTERNALERROR>   File "/tmp/jenkins-komodo-f_scout_ci/workspace/ert-performance/env/lib64/python3.6/site-packages/pluggy/_result.py", line 60, in get_result
    INTERNALERROR>     raise ex[1].with_traceback(ex[2])
    INTERNALERROR>   File "/tmp/jenkins-komodo-f_scout_ci/workspace/ert-performance/env/lib64/python3.6/site-packages/pluggy/_callers.py", line 39, in _multicall
    INTERNALERROR>     res = hook_impl.function(*args)
    INTERNALERROR>   File "/tmp/jenkins-komodo-f_scout_ci/workspace/ert-performance/env/lib64/python3.6/site-packages/pytest_benchmark/plugin.py", line 266, in pytest_benchmark_compare_machine_info
    INTERNALERROR>     benchmarksession.storage.location,
    INTERNALERROR>   File "/tmp/jenkins-komodo-f_scout_ci/workspace/ert-performance/env/lib64/python3.6/site-packages/pytest_benchmark/storage/file.py", line 28, in location
    INTERNALERROR>     return str(self.path.relative_to(os.getcwd()))
    INTERNALERROR>   File "/usr/lib64/python3.6/pathlib.py", line 874, in relative_to
    INTERNALERROR>     .format(str(self), str(formatted)))
    INTERNALERROR> ValueError: '/network/path/.benchmarks' does not start with '/tmp/jenkins-komodo-f_scout_ci/workspace/ert-performance'
    
    opened by mortalisk 0
  • Histogram title customization

    I noticed that the histogram title always looks something like this:

    example histogram from the docs

    Speaking of pedantic, I find "speed" with units of time surprising :) Beyond pedantry, I would expect "larger speed" to mean faster, while "larger time interval" means slower.

    Looking at the implementation, the "Speed in" prefix seems hard-coded:

    https://github.com/ionelmc/pytest-benchmark/blob/c0d0104d385a09acc9246e6ee2a6d42fae937c2d/src/pytest_benchmark/histogram.py#L100-L106

    Would it be possible to allow some customization of the prefix? I would like to change it to something like "Runtime in {0}".

    opened by adeak 0
Owner
Ionel Cristian Mărieș