Airspeed Velocity: A simple Python benchmarking tool with web-based reporting

Overview

airspeed velocity (asv) is a tool for benchmarking Python packages over their lifetime.

It is primarily designed to benchmark a single project over its lifetime using a given suite of benchmarks. The results are displayed in an interactive web frontend that requires only a basic static webserver to host.
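
For orientation, here is a minimal sketch of what a benchmark file in such a suite can look like; the class and method names are illustrative, and asv discovers timing benchmarks by the time_ prefix:

import random


class TimeSuite:
    """A minimal asv benchmark suite: methods prefixed with time_ are timed."""

    def setup(self):
        # Runs before each timed call and is excluded from the measurement.
        self.data = [random.random() for _ in range(10_000)]

    def time_sort(self):
        sorted(self.data)

    def time_sum(self):
        sum(self.data)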

See an example airspeed velocity site.

See the full documentation for more information.

The latest release can be installed from PyPI using:

pip install asv

Are you using asv? Consider adding a badge to your project's README like this:

http://img.shields.io/badge/benchmarked%20by-asv-blue.svg?style=flat

You can add it with the following Markdown:

[![asv](http://img.shields.io/badge/benchmarked%20by-asv-blue.svg?style=flat)](http://your-url-here/)

License: BSD three-clause license.

Authors: Michael Droettboom, Pauli Virtanen

Comments
  • ENH: enable specifying custom conda channels + create env via environment.yml

    • This PR adds the basic functionality to specify a list of custom conda channels using a new key in the JSON config file; the associated feature request is #405. A rough sketch of the proposed key follows this list.
    • The approach looks fairly clean to me (and works for me), but there are probably things the core devs know that I haven't thought of.
    • The way the channels are added may need slight modification: packages are installed in a single call to conda for performance reasons, and we may want to do the same for adding channels (currently I loop over the input channels, as you can see).
    • Do we need tests? For example, making sure this works when adding a single channel vs. multiple custom channels, and perhaps mocking the calls to those channels. How much of a pain would that be? Maybe there's something simpler that can be done test-wise.
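
    For illustration, a minimal sketch of what the proposal amounts to, written as a Python snippet that emits the config file; the "conda_channels" key is the name proposed in this PR, and the other values are placeholders rather than settled API:

    import json

    # Hypothetical asv.conf.json contents: "conda_channels" is the key this PR
    # proposes; the remaining entries are ordinary asv settings with dummy values.
    config = {
        "version": 1,
        "project": "myproject",
        "environment_type": "conda",
        "conda_channels": ["conda-forge", "defaults"],
    }

    with open("asv.conf.json", "w") as fh:
        json.dump(config, fh, indent=4)
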
    opened by tylerjereddy 84
  • Difference between `asv dev` and `asv run` (due to OpenMP)

    I ran into a weird issue that I cannot track down (using asv v0.2.1), so I thought I'd ask here whether it is a known or common problem.

    If I run a simple test using asv dev with a parameter that can take three different values, I get the following output:

    asv/> asv dev -b dipole_time
    · Discovering benchmarks
    · Running 1 total benchmarks (1 commits * 1 environments * 1 benchmarks)
    [  0.00%] ·· Building for existing-py_home_dtr_anaconda3_bin_python
    [  0.00%] ·· Benchmarking existing-py_home_dtr_anaconda3_bin_python
    [100.00%] ··· Running model.Dipole.time_dipole_time                                    ok
    [100.00%] ···· 
                   ====== ==========
                    loop            
                   ------ ----------
                    None   139.11ms 
                    freq   322.62ms 
                    off    139.09ms 
                   ====== ==========
    

    These times agree with the times I get if I run the tests simply with %timeit in an IPython console.

    Now if I run asv run instead of asv dev,

    asv/> asv run -b time_dipole_time
    · Fetching recent changes
    · Creating environments
    · Discovering benchmarks
    ·· Uninstalling from conda-py3.6
    ·· Building for conda-py3.6
    ·· Installing into conda-py3.6.
    · Running 1 total benchmarks (1 commits * 1 environments * 1 benchmarks)
    [  0.00%] · For empymod commit hash b804477a:
    [  0.00%] ·· Building for conda-py3.6.
    [  0.00%] ·· Benchmarking conda-py3.6
    [100.00%] ··· Running model.Dipole.time_dipole_time                          655.54ms;...
    

    the result stored in b804477a-conda-py3.6.json is:

    [snip]
        "results": {
            "model.Dipole.time_dipole_time": {
                "params": [
                    [
                        "None",
                        "'freq'",
                        "'off'"
                    ]
                ],
                "result": [
                    0.6555442148,
                    0.3023681289999999,
                    0.6842122744999999
                ]
            }
    [snip]
    

    The times vastly differ between the asv dev and the asv run results. Any idea why that could be?
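
    Since the title attributes the gap to OpenMP, one plausible explanation is a different OpenMP thread count in the conda environment created by asv run versus the existing Python used by asv dev. A hedged sketch of pinning the thread count from the benchmark module (this assumes the underlying library reads OMP_NUM_THREADS, and that it is set before the library spins up its thread pool):

    import os

    # Fix the thread count before the heavy library is imported; many OpenMP
    # runtimes read OMP_NUM_THREADS only once, at startup.
    os.environ.setdefault("OMP_NUM_THREADS", "1")


    class Dipole:
        def setup(self):
            self.n = 200_000

        def time_dipole_time(self):
            # Stand-in workload; the real suite would call the dipole model here.
            sum(i * i for i in range(self.n))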

    question 
    opened by prisae 22
  • Compare command

    This is just a very experimental idea for a 'compare' command that can be used to compare two specific revisions, e.g.:

    $ asv compare 7810d6d7 19aa5743
    · Fetching recent changes.
    
    All benchmarks:
    
      before     after    ratio
      98.30ms   68.19ms      0.69  benchmarks.AperturePhotometry.time_big_data_error_multiple_big_apertures_circ_ann_center
     139.58ms  108.97ms      0.78  benchmarks.AperturePhotometry.time_big_data_error_multiple_big_apertures_circ_ann_exact
      98.94ms   67.96ms      0.69  benchmarks.AperturePhotometry.time_big_data_error_multiple_big_apertures_circ_ann_subpixel_01
     105.35ms   73.63ms      0.70  benchmarks.AperturePhotometry.time_big_data_error_multiple_big_apertures_circ_ann_subpixel_05
     116.30ms   86.23ms      0.74  benchmarks.AperturePhotometry.time_big_data_error_multiple_big_apertures_circ_ann_subpixel_10
      48.15ms   51.00ms      1.06  benchmarks.AperturePhotometry.time_big_data_error_multiple_big_apertures_circ_center
      68.18ms   69.50ms      1.02  benchmarks.AperturePhotometry.time_big_data_error_multiple_big_apertures_circ_exact
      48.59ms   51.24ms      1.05  benchmarks.AperturePhotometry.time_big_data_error_multiple_big_apertures_circ_subpixel_01
      51.13ms   53.71ms      1.05  benchmarks.AperturePhotometry.time_big_data_error_multiple_big_apertures_circ_subpixel_05
      56.26ms   58.65ms      1.04  benchmarks.AperturePhotometry.time_big_data_error_multiple_big_apertures_circ_subpixel_10
        4.71s  231.49ms      0.05  benchmarks.AperturePhotometry.time_big_data_error_multiple_big_apertures_elli_ann_center
        1.70s     1.67s      0.98  benchmarks.AperturePhotometry.time_big_data_error_multiple_big_apertures_elli_ann_exact
        4.72s  242.42ms      0.05  benchmarks.AperturePhotometry.time_big_data_error_multiple_big_apertures_elli_ann_subpixel_01
       failed  454.41ms       n/a  benchmarks.AperturePhotometry.time_big_data_error_multiple_big_apertures_elli_ann_subpixel_05
       failed     1.13s       n/a  benchmarks.AperturePhotometry.time_big_data_error_multiple_big_apertures_elli_ann_subpixel_10
    ...
    

    Benchmarks are color-coded as green if they are faster by a given threshold, and red if they are slower. The threshold can be adjusted with the --threshold option.

    The output can be split between benchmarks that got better, didn't change, or got worse, with the --split option:

    $ asv compare 7810d6d7 19aa5743 --split -t 1.5
    · Fetching recent changes..
    
    Benchmarks that have improved:
    
      before     after    ratio
        4.71s  231.49ms      0.05  benchmarks.AperturePhotometry.time_big_data_error_multiple_big_apertures_elli_ann_center
        4.72s  242.42ms      0.05  benchmarks.AperturePhotometry.time_big_data_error_multiple_big_apertures_elli_ann_subpixel_01
       failed  454.41ms       n/a
    [snip]
       failed  159.65ms       n/a  benchmarks.AperturePhotometry.time_small_data_multiple_small_apertures_elli_subpixel_05
       failed  215.41ms       n/a  benchmarks.AperturePhotometry.time_small_data_multiple_small_apertures_elli_subpixel_10
    
    Benchmarks that have stayed the same:
    
      before     after    ratio
      98.30ms   68.19ms      0.69  benchmarks.AperturePhotometry.time_big_data_error_multiple_big_apertures_circ_ann_center
     139.58ms  108.97ms      0.78  benchmarks.AperturePhotometry.time_big_data_error_multiple_big_apertures_circ_ann_exact
      98.94ms   67.96ms      0.69
    [snip]
       1.55ms    1.52ms      0.98  benchmarks.AperturePhotometry.time_small_data_single_small_aperture_elli_subpixel_05
       1.72ms    1.58ms      0.92  benchmarks.AperturePhotometry.time_small_data_single_small_aperture_elli_subpixel_10
    
    Benchmarks that have got worse:
    
      before     after    ratio
       2.30ms    3.71ms      1.61  benchmarks.AperturePhotometry.time_big_data_single_big_aperture_elli_center
       2.31ms    3.71ms      1.61  benchmarks.AperturePhotometry.time_big_data_single_big_aperture_elli_subpixel_01
    

    I implemented this because I needed it for a specific set of benchmarks where I'm trying to compare the impact of different optimization strategies, but of course you can reject this or make suggestions for significant changes, as needed.

    opened by astrofrog 21
  • Handle benchmarks by commit revision instead of date

    Commit dates are not necessarily monotonic along the commit log, especially when working with a rebase workflow (instead of merge); this generates false positives or missed regression detections and unwanted peaks in the web UI.

    This patch introduces a revision number as a drop-in replacement for the commit date. The revision number is an incremental value representing the number of ancestor commits.

    In the web UI the distance between commits is the number of commits between them. Commits can still be displayed by date using the date selector.
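
    For reference, the revision number described above can be reproduced outside asv with git itself, since it is essentially the count of commits reachable from a given commit (modulo whether the commit itself is included). A small sketch, assuming a local git checkout:

    import subprocess


    def revision_number(commit: str, repo: str = ".") -> int:
        """Count of commits reachable from `commit`, i.e. the size of its ancestry."""
        out = subprocess.run(
            ["git", "-C", repo, "rev-list", "--count", commit],
            check=True, capture_output=True, text=True,
        )
        return int(out.stdout.strip())


    # Example: revision_number("HEAD")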

    Closes #390

    opened by philpep 19
  • Benchmark statistics

    Record more benchmark samples, and compute, display, and use statistics based on them.

    • Change default settings to record more (but shorter) samples. This changes the meaning of goal_time and makes its determination more accurate; however, there are no big changes to methodology --- there's room to improve here.
    • Do a warmup before benchmarking --- this seems to matter on CPython as well (although possibly not due to CPython itself but to some OS/CPU effects).
    • Display the spread of measurements (median +/- half of the interquartile range).
    • Estimate a confidence interval and use it to decide what is not significant in asv compare.
    • Optionally, save samples to the json files.
    • Switch to gzipped files.

    The statistical confidence estimates are a somewhat tricky point, because timing samples usually have strong autocorrelation (multimodality, stepwise changes in location, etc.), which often makes simple approaches misleading. There is some mitigation for this at the moment: asv tries to regress the timing sample time series looking for steps, and adds those to the confidence interval. Not rigorous, but probably better than nothing.

    The problem is that there is low-frequency noise in the measurement, so measuring for a couple of seconds does not give a good idea of the full distribution.
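
    As a concrete reading of the "median +/- half of interquartile range" display, a small sketch using only the standard library (the sample values are made up):

    import statistics

    samples = [0.139, 0.141, 0.138, 0.162, 0.140, 0.139, 0.144]  # seconds, made up

    median = statistics.median(samples)
    q1, _, q3 = statistics.quantiles(samples, n=4)  # quartile cut points
    spread = (q3 - q1) / 2  # half of the interquartile range

    print(f"{median * 1e3:.2f}ms ± {spread * 1e3:.2f}ms")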

    todo:

    • [x] more tests, e.g. printing and formatting is mostly untested
    • [x] do we really need to gzip?
    • [x] update documentation
    • [x] tuning for pypy
    opened by pv 17
  • ENH: Add an option to randomize the order of the hashes

    When running on a repo with e.g. 10,000 commits, this allows one to see the general trends building up slowly rather than doing each commit sequentially.

    needs-work 
    opened by astrofrog 17
  • Implement parameterized benchmarks

    Implement parameterization of benchmarks.

    Plotting can deal with numerical and categorical parameters; in the UI the user can select which parameter combinations to show and which parameter to use as the x-axis.

    Addresses gh-36.

    Sample output here: https://pv.github.io/scipy-bench/

    Some sample code here: https://github.com/pv/scipy-bench/tree/param/benchmarks/
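
    For readers who have not seen the feature, a minimal sketch of what a parameterized benchmark looks like with the params / param_names attributes introduced here (the names and values are illustrative):

    import random


    class TimeSorting:
        # One benchmark instance is generated per combination of parameter values.
        params = ([100, 10_000], ["random", "sorted"])
        param_names = ["size", "order"]

        def setup(self, size, order):
            data = list(range(size))
            if order == "random":
                random.shuffle(data)
            self.data = data

        def time_sort(self, size, order):
            sorted(self.data)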

    TODO:

    • [x] ensure asv compare and other cmdline commands deal with parameterized results
    • [x] produce text output (tables) showing results, either always or add a command line switch
    • [x] for many benchmark parameters, the selection buttons become too small; switch to vertical view (currently used for commit selection) when many choices
    • [x] check javascript performance for large numbers of commits
    • [x] it probably loads the data set many times from the server, could be refactored a bit if so
    • [x] more careful type checks for params and param_names on the Python side; the JavaScript code does not currently tolerate garbage input here (or, the JavaScript code should make these checks)
    • [x] better unit testing
    • [x] also number and repeats may need to be parameter-dependent?
    • [x] should parameterized benchmarks be considered as separate benchmarks, or as benchmarks with multiple return values as currently? Is the parameterization API elegant and suitable?
    • [x] should the API first implement benchmarks with multiple return values, and then build parameterized benchmarks on top of that?

    TODO for later (separate PR):

    • what would be a good summary plot? Right now it just averages over parameters (should this switch to geometric means?)
    opened by pv 17
  • Stagger benchmarks to smooth background noise

    When running asv continuous to compare commits A and B, all the benchmarks from A run followed by all the benchmarks from B, e.g. "A.foo, A.bar, A.baz, B.foo, B.bar, B.baz". Would it be feasible to instead run "A.foo, B.foo, A.bar, B.bar, A.baz, B.baz"?

    (In fact, because each benchmark is run multiple times, ideally I would like the staggering to be even finer-grained.)

    The thought here is that background processes make the noise auto-correlated, so running the comparisons back-to-back may give more informative ratios. (Based on amateur speculation.)
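
    To make the proposed ordering concrete, a tiny sketch of the round-robin interleaving described above (the benchmark names are placeholders):

    from itertools import chain

    a = ["A.foo", "A.bar", "A.baz"]  # benchmarks at commit A
    b = ["B.foo", "B.bar", "B.baz"]  # benchmarks at commit B

    # Staggered order: run the same benchmark back-to-back for both commits.
    staggered = list(chain.from_iterable(zip(a, b)))
    print(staggered)  # ['A.foo', 'B.foo', 'A.bar', 'B.bar', 'A.baz', 'B.baz']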

    enhancement 
    opened by jbrockmendel 15
  • asv preview not loading graphs. Lots of 404s in the console

    I am trying to use asv to visualize the performance difference between two commits. I ran the benchmarks fine (they are these benchmarks, except that I changed the repo URL to my local repo).

    But when I run asv publish and asv preview, none of the graphs load. In the console, I see a lot of errors like

    :8080/graphs/summary/cse.TimeCSE.time_cse.json?_=1501786104493 Failed to load resource: the server responded with a status of 404 (File not found)
    :8080/graphs/summary/dsolve.TimeDsolve01.time_dsolve.json?_=1501786104494 Failed to load resource: the server responded with a status of 404 (File not found)
    :8080/graphs/summary/integrate.TimeIntegration01.time_doit.json?_=1501786104495 Failed to load resource: the server responded with a status of 404 (File not found)
    :8080/graphs/summary/lambdify.TimeLambdifyCreation.time_lambdify_create.json?_=1501786104497 Failed to load resource: the server responded with a status of 404 (File not found)
    :8080/graphs/summary/large_exprs.TimeLargeExpressionOperations.peakmem_jacobian_wrt_functions.json?_=1501786104499 Failed to load resource: the server responded with a status of 404 (File not found)
    :8080/graphs/summary/integrate.TimeIntegration01.time_doit_meijerg.json?_=1501786104496 Failed to load resource: the server responded with a status of 404 (File not found)
    :8080/graphs/summary/lambdify.TimeLambdifyEvaluate.time_lambdify_evaluate.json?_=1501786104498 Failed to load resource: the server responded with a status of 404 (File not found)
    :8080/graphs/summary/large_exprs.TimeLargeExpressionOperations.peakmem_jacobian_wrt_symbols.json?_=1501786104500 Failed to load resource: the server responded with a status of 404 (File not found)
    :8080/graphs/summary/large_exprs.TimeLargeExpressionOperations.time_cse.json?_=1501786104503 Failed to load resource: the server responded with a status of 404 (File not found)
    :8080/graphs/summary/large_exprs.TimeLargeExpressionOperations.time_count_ops.json?_=1501786104502 Failed to load resource: the server responded with a status of 404 (File not found)
    :8080/graphs/summary/large_exprs.TimeLargeExpressionOperations.peakmem_subs.json?_=1501786104501 Failed to load resource: the server responded with a status of 404 (File not found)
    :8080/graphs/summary/large_exprs.TimeLargeExpressionOperations.time_free_symbols.json?_=1501786104504 Failed to load resource: the server responded with a status of 404 (File not found)
    :8080/graphs/summary/large_exprs.TimeLargeExpressionOperations.time_jacobian_wrt_functions.json?_=1501786104505 Failed to load resource: the server responded with a status of 404 (File not found)
    :8080/graphs/summary/large_exprs.TimeLargeExpressionOperations.time_jacobian_wrt_symbols.json?_=1501786104506 Failed to load resource: the server responded with a status of 404 (File not found)
    :8080/graphs/summary/logic.LogicSuite.time_dpll.json?_=1501786104509 Failed to load resource: the server responded with a status of 404 (File not found)
    :8080/graphs/summary/logic.LogicSuite.time_dpll2.json?_=1501786104510 Failed to load resource: the server responded with a status of 404 (File not found)
    :8080/graphs/summary/large_exprs.TimeLargeExpressionOperations.time_manual_jacobian_wrt_functions.json?_=1501786104507 Failed to load resource: the server responded with a status of 404 (File not found)
    :8080/graphs/summary/large_exprs.TimeLargeExpressionOperations.time_subs.json?_=1501786104508 Failed to load resource: the server responded with a status of 404 (File not found)
    :8080/graphs/summary/logic.LogicSuite.time_load_file.json?_=1501786104511 Failed to load resource: the server responded with a status of 404 (File not found)
    :8080/graphs/summary/solve.TimeMatrixArithmetic.time_dense_multiply.json?_=1501786104515 Failed to load resource: the server responded with a status of 404 (File not found)
    :8080/graphs/summary/solve.TimeMatrixOperations.time_det.json?_=1501786104516 Failed to load resource: the server responded with a status of 404 (File not found)
    :8080/graphs/summary/physics.mechanics.kane.KanesMethodMassSpringDamper.time_kanesmethod_mass_spring_damper.json?_=1501786104512 Failed to load resource: the server responded with a status of 404 (File not found)
    :8080/graphs/summary/solve.TimeMatrixOperations.time_det_berkowitz.json?_=1501786104518 Failed to load resource: the server responded with a status of 404 (File not found)
    :8080/graphs/summary/solve.TimeMatrixArithmetic.time_dense_add.json?_=1501786104514 Failed to load resource: the server responded with a status of 404 (File not found)
    :8080/graphs/summary/physics.mechanics.lagrange.LagrangesMethodMassSpringDamper.time_lagrangesmethod_mass_spring_damper.json?_=1501786104513 Failed to load resource: the server responded with a status of 404 (File not found)
    :8080/graphs/summary/solve.TimeMatrixOperations.time_det_bareiss.json?_=1501786104517 Failed to load resource: the server responded with a status of 404 (File not found)
    :8080/graphs/summary/solve.TimeMatrixOperations.time_rank.json?_=1501786104519 Failed to load resource: the server responded with a status of 404 (File not found)
    :8080/graphs/summary/solve.TimeMatrixOperations.time_rref.json?_=1501786104520 Failed to load resource: the server responded with a status of 404 (File not found)
    

    I've tried clearing the html directory and republishing and it doesn't help.

    This is with asv 0.2. I also tried using the git version, but that failed:

    $ PYTHONPATH=~/Documents/asv python -m asv publish
    Traceback (most recent call last):
      File "/Users/aaronmeurer/anaconda3/lib/python3.5/runpy.py", line 183, in _run_module_as_main
        mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
      File "/Users/aaronmeurer/anaconda3/lib/python3.5/runpy.py", line 142, in _get_module_details
        return _get_module_details(pkg_main_name, error)
      File "/Users/aaronmeurer/anaconda3/lib/python3.5/runpy.py", line 109, in _get_module_details
        __import__(pkg_name)
      File "/Users/aaronmeurer/Documents/asv/asv/__init__.py", line 17, in <module>
        from ._version import __version__, __githash__, __release__
    ImportError: No module named 'asv._version'
    
    question 
    opened by asmeurer 14
  • Revised/additional summary page

    Add a second summary page, showing a list of benchmarks (instead of a grid).

    The list shows benchmarks for only one choice of environment parameters (machine, CPU, packages, ...) at a time.

    It also shows how benchmark performance has evolved "recently"; some ideas are borrowed from http://speed.pypy.org/changes/

    There is some overlap with the Regressions display, except that the regressions view shows regressions across all branches and does not show improved results.

    Demo: https://pv.github.io/numpy-bench/#summarylist https://pv.github.io/scipy-bench/#summarylist

    • [x] what information should it actually show?
    • [x] what does "recently" mean? 1 month, 10 revisions, ...?
    • [x] what to do with benchmarks with no results?
    • [x] the param selector should deal sensibly if there are no results for some param combination
    • [x] tests, etc. polish
    • [x] a selector for the time/revision range that you want to consider (e.g. show the worst regression ever vs. the last one vs. benchmark-specific configuration in the config file)
    • [x] maybe add expandable view to show all regressions?
    • [x] the popup graphs
    opened by pv 14
  • Possible fork of asv

    pandas got a grant to work on our benchmarks, which use asv and could be improved in various ways. We'd benefit from some work on asv itself, and we've got resources to help improve it.

    It seems like asv is abandoned. I contacted @pv and @mdboom to discuss how we can move forward with asv and coordinate the resources we've got, but have had no answer after a while. If we don't hear back from them soon and don't find a better option, we're planning to fork the project. If the maintainers of the original asv have time to come back to it after we fork, we'll coordinate and see whether we can merge our changes back into this repo, or whatever works best. But in the current situation, forking seems like the only option other than leaving asv abandoned, using the current version forever, or moving to a different project.

    Opening this issue for visibility, so other users, developers and stakeholders of asv are aware, and can provide their feedback.

    opened by datapythonista 13
  • BUG: Graphs are empty for tag commit hashes

    Currently, when running asv publish with results that were produced (say, through a HASHFILE) from tag commit hashes, the tags cannot be found in the configured branches, so the graphs are essentially empty:

    asv publish
    [ 11.11%] · Loading machine info
    [ 22.22%] · Getting params, commits, tags and branches
    [ 33.33%] · Loading results.
    [ 33.33%] ·· Couldn't find 4f0a3eb8 in branches (main)
    [ 33.33%] ·· Couldn't find ef0ec786 in branches (main).
    [ 33.33%] ·· Couldn't find 7ce41185 in branches (main).
    [ 33.33%] ·· Couldn't find f6dddcb2 in branches (main)
    [ 33.33%] ·· Couldn't find 08772f91 in branches (main).
    [ 33.33%] ·· Couldn't find 21cacafb in branches (main)
    [ 33.33%] ·· Couldn't find 5726e6ce in branches (main).
    [ 33.33%] ·· Couldn't find 8cec8201 in branches (main)
    [ 33.33%] ·· Couldn't find f8021557 in branches (main).
    [ 33.33%] ·· Couldn't find 4adc87df in branches (main)
    [ 33.33%] ·· Couldn't find 6377d884 in branches (main).
    [ 33.33%] ·· Couldn't find e47cbb69 in branches (main).
    [ 33.33%] ·· Couldn't find 54c52f13 in branches (main)
    [ 33.33%] ·· Couldn't find 7d4349e3 in branches (main).
    [ 33.33%] ·· Couldn't find 1f82da74 in branches (main)
    [ 33.33%] ·· Couldn't find 5c598ed6 in branches (main).
    [ 33.33%] ·· Couldn't find de82cd94 in branches (main)
    [ 33.33%] ·· Couldn't find 754e59d5 in branches (main)
    [ 44.44%] · Detecting steps.
    [ 55.56%] · Generating graphs
    [ 66.67%] · Generating output for SummaryGrid
    [ 77.78%] · Generating output for SummaryList
    [ 88.89%] · Generating output for Regressions.
    [100.00%] · Writing index
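
    For context, a sketch of the kind of workflow that produces such results: resolving every tag to its commit hash and feeding the list to asv run via the HASHFILE form mentioned in the v0.4 release notes below (the file name here is arbitrary):

    import subprocess

    # Resolve each tag to the commit it points to and write one hash per line;
    # `asv run HASHFILE:tag_hashes.txt` can then benchmark exactly these commits.
    tags = subprocess.run(
        ["git", "tag", "--list"], check=True, capture_output=True, text=True
    ).stdout.split()

    with open("tag_hashes.txt", "w") as fh:
        for tag in tags:
            commit = subprocess.run(
                ["git", "rev-list", "-n", "1", tag],
                check=True, capture_output=True, text=True,
            ).stdout.strip()
            fh.write(commit + "\n")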
    
    opened by HaoZeke 0
  • Question: Can you return a tuple from custom tracking functions

    We are trying to use asv to monitor a few different parameters in our benchmarks, and we would like it to just "run" a few times.

    Can we return a tuple from track_ methods? How would we specify the units?
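
    Not an authoritative answer, but for reference: a track_ method is expected to return a single number, with the unit attribute describing it, so the usual workaround is to split the tuple into separate track_ methods (or separate classes when the units differ). A sketch under those assumptions, with made-up values:

    class TrackPeakMemory:
        unit = "bytes"

        def track_peak_memory(self):
            # Each track_ method returns one number; asv records it per commit.
            return 12_345_678


    class TrackCallCount:
        unit = "calls"

        def track_n_calls(self):
            return 42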

    opened by hmaarrfk 0
  • ENH show stderr / stdout if failure

    In https://github.com/pandas-dev/pandas/actions/runs/3515019336/jobs/5892108533 I got

    [ 33.27%] ··· plotting.FramePlotting.time_frame_plot                  1/9 failed
    [ 33.27%] ··· ========= =========
                     kind            
                  --------- ---------
                     line    472±0ms 
                     bar     503±0ms 
                     area    580±0ms 
                     barh    501±0ms 
                     hist    508±0ms 
                     kde     947±0ms 
                     pie     478±0ms 
                   scatter   383±0ms 
                    hexbin    failed 
                  ========= =========
    

    with no further info

    I had to clone asv, insert a breakpoint, and do

    print(result.stderr)
    

    to figure out what had caused the failure

    It'd be useful if this could be shown in the report too

    opened by MarcoGorelli 0
  • Add machine comparison list for published results

    ASV tracks performance regressions and improvements per machine and Python version. However, tracking performance across multiple configuration/optimization options for Python builds can currently only be done as separate machine configurations, and comparing machines against each other is only visible in the per-benchmark view.

    This PR provides a comparison list of machines against each other. It can also export the comparison list to Markdown format (example).

    opened by qunaibit 0
Releases (v0.5.1)
  • v0.4.2 (May 16, 2020)

  • v0.4.1 (May 30, 2019)

  • v0.4 (May 26, 2019)

    New features:

    • asv check command for a quick check of benchmark suite validity. (#782)
    • asv run HASHFILE:filename can read commit hashes to run from file or stdin (#768)
    • --set-commit-hash option to asv run, which allows recording results from runs in "existing" environments not managed by asv (#794)
    • --cpu-affinity option to asv run and others, to set CPU affinity (#769)
    • "Hide legend" option in web UI (#807)
    • pretty_source benchmark attribute for customizing source code shown (#810)
    • Record number of cores in machine information (#761)

    API Changes:

    • Default timer changed from process_time() to timeit.default_timer() to fix resolution issues on Windows. Old behavior can be restored by setting Benchmark.timer = time.process_time (#780)

    Bug Fixes:

    • Fix pip command line in install_command (#806)
    • Python 3.8 compatibility (#814)
    • Minor fixes and improvements (#759, #764, #767, #772, #779, #783, #784, #787, #790, #795, #799, #804, #812, #813, #815, #816, #817, #818, #820)
  • v0.3.1 (Oct 20, 2018)

    Minor bugfixes and improvements.

    • Use measured uncertainties to weigh step detection. (#753)
    • Detect also single-commit regressions, if significant. (#745)
    • Use proper two-sample test when raw results available. (#754)
    • Use a better regression "badness" measure. (#744)
    • Display verbose command output immediately, not when command completes. (#747)
    • Fix handling of benchmark suite import failures in forkserver and benchmark discovery. (#743, #742)
    • Fix forkserver child process handling.
    • In asv test suite, use dummy conda packages. (#738)
    • Other minor fixes (#756, #750, #749, #746)
  • v0.3 (Sep 9, 2018)

    Major release with several new features.

    New Features

    • Revised timing benchmarking. asv will display and record the median and interquartile ranges of timing measurement results. The information is also used by asv compare and asv continuous in determining which changes are significant. The asv run command has new options for collecting samples. Timing benchmarks have new benchmarking parameters for controlling how timing works, including a processes attribute for collecting data by running benchmarks in several sequential processes. The defaults are adjusted to obtain faster benchmarking. (#707, #698, #695, #689, #683, #665, #652, #575, #503, #493)
    • Interleaved benchmark running. Timing benchmarks can be run in interleaved order via asv run --interleave-processes, to obtain better sampling over long-timescale background performance variations. (#697, #694, #647)
    • Customization of build/install/uninstall commands. (#699)
    • Launching benchmarks via a fork server (on Unix-based systems). Reduces the import time overheads in launching new benchmarks. Default on Linux. (#666, #709, #730)
    • Benchmark versioning. Invalidate old benchmark results when benchmarks change, via a benchmark version attribute. User-configurable, by default based on source code. (#509)
    • Setting benchmark attributes on command line, via --attribute. (#647)
    • asv show command for displaying results on command line. (#711)
    • Support for Conda channels. (#539)
    • Provide ASV-specific environment variables to launched commands. (#624)
    • Show branch/tag names in addition to commit hashes. (#705)
    • Support for projects in repository subdirectories. (#611)
    • Way to run specific parametrized benchmarks. (#593)
    • Group benchmarks in the web benchmark grid (#557)
    • Make the web interface URL addresses more copypasteable. (#608, #605, #580)
    • Allow customizing benchmark display names (#484)
    • Don't reinstall project if it is already installed (#708)

    API Changes

    • The goal_time attribute in timing benchmarks is removed (and now ignored). See documentation on how to tune timing benchmarks now.
    • asv publish may ask you to run asv update once after upgrading, to regenerate benchmarks.json if asv run was not yet run.
    • If you are using asv plugins, check their compatibility. The internal APIs in asv are not guaranteed to be backward compatible.

    Bug Fixes

    • Fixes in 0.2.1 and 0.2.2 are also included in 0.3.
    • Make asv compare accept named commits (#704)
    • Fix asv profile --python=same (#702)
    • Make asv compare behave correctly with multiple machines/envs (#687)
    • Avoid making too long result file names (#675)
    • Fix saving profile data (#680)
    • Ignore missing branches during benchmark discovery (#674)
    • Perform benchmark discovery only when necessary (#568)
    • Fix benchmark skipping to operate on a per-environment basis (#603)
    • Allow putting asv.conf.json to benchmark suite directory (#717)
    • Miscellaneous minor fixes (#735, #734, #733, #729, #728, #727, #726, #723, #721, #719, #718, #716, #715, #714, #713, #706, #701, #691, #688, #684, #682, #660, #634, #615, #600, #573, #556)

    Other Changes and Additions

    • www: display regressions separately, one per commit (#720)
    • Internal changes. (#712, #700, #681, #663, #662, #637, #613, #606, #572)
    • CI/etc changes. (#585, #570)
    • Added internal debugging command asv.benchmarks (#685)
    • Make tests not require network connection, except with Conda (#696)
    • Drop support for end-of-lifed Python versions 2.6 & 3.2 & 3.3 (#548)
  • v0.3b1 (Aug 28, 2018)

    Major release with several new features.

    New Features

    • Revised timing benchmarking. asv will display and record the median and interquartile ranges of timing measurement results. The information is also used by asv compare and asv continuous in determining which changes are significant. The asv run command has new options for collecting samples. Timing benchmarks have new benchmarking parameters for controlling how timing works, including a processes attribute for collecting data by running benchmarks in several sequential processes. The defaults are adjusted to obtain faster benchmarking. (#707, #698, #695, #689, #683, #665, #652, #575, #503, #493)
    • Interleaved benchmark running. Timing benchmarks can be run in interleaved order via asv run --interleave-processes, to obtain better sampling over long-timescale background performance variations. (#697, #694, #647)
    • Customization of build/install/uninstall commands. (#699)
    • Launching benchmarks via a fork server (on Unix-based systems). Reduces the import time overheads in launching new benchmarks. Default on Linux. (#709, #666)
    • Benchmark versioning. Invalidate old benchmark results when benchmarks change, via a benchmark version attribute. User-configurable, by default based on source code. (#509)
    • Setting benchmark attributes on command line, via --attribute. (#647)
    • asv show command for displaying results on command line. (#711)
    • Support for Conda channels. (#539)
    • Provide ASV-specific environment variables to launched commands. (#624)
    • Show branch/tag names in addition to commit hashes. (#705)
    • Support for projects in repository subdirectories. (#611)
    • Way to run specific parametrized benchmarks. (#593)
    • Group benchmarks in the web benchmark grid (#557)
    • Make the web interface URL addresses more copypasteable. (#608, #605, #580)
    • Allow customizing benchmark display names (#484)
    • Don't reinstall project if it is already installed (#708)

    API Changes

    • The goal_time attribute in timing benchmarks is removed (and now ignored). See documentation on how to tune timing benchmarks now.
    • asv publish may ask you to run asv update once after upgrading, to regenerate benchmarks.json if asv run was not yet run.
    • If you are using asv plugins, check their compatibility. The internal APIs in asv are not guaranteed to be backward compatible.

    Bug Fixes

    • Fixes in 0.2.1 and 0.2.2 are also included in 0.3.
    • Make asv compare accept named commits (#704)
    • Fix asv profile --python=same (#702)
    • Make asv compare behave correctly with multiple machines/envs (#687)
    • Avoid making too long result file names (#675)
    • Fix saving profile data (#680)
    • Ignore missing branches during benchmark discovery (#674)
    • Perform benchmark discovery only when necessary (#568)
    • Fix benchmark skipping to operate on a per-environment basis (#603)
    • Allow putting asv.conf.json to benchmark suite directory (#717)
    • Miscellaneous minor fixes (#719, #718, #716, #715, #714, #713, #706, #701, #691, #688, #684, #682, #660, #634, #615, #600, #573, #556)

    Other Changes and Additions

    • www: display regressions separately, one per commit (#720)
    • Internal changes. (#712, #700, #681, #663, #662, #637, #613, #606, #572)
    • CI/etc changes. (#585, #570)
    • Added internal debugging command asv.benchmarks (#685)
    • Make tests not require network connection, except with Conda (#696)
    • Drop support for end-of-lifed Python versions 2.6 & 3.2 & 3.3 (#548)
  • v0.2.2 (Jul 14, 2018)

    Bugfix release with minor feature additions.

    New Features

    • Add a --no-pull option to asv publish and asv run (#592)
    • Add a --rewrite option to asv gh-pages and fix bugs (#578, #529)
    • Add a --html-dir option to asv publish (#545)
    • Add a --yes option to asv machine (#540)
    • Enable running via python -masv (#538)

    Bug Fixes

    • Fix support for mercurial >= 4.5 (#643)
    • Fix detection of git subrepositories (#642)
    • Find conda executable in the "official" way (#646)
    • Hide tracebacks in testing functions (#601)
    • Launch virtualenv in a more sensible way (#555)
    • Disable user site directory also when using conda (#553)
    • Set PIP_USER to false when running an executable (#524)
    • Set PATH for commands launched inside environments (#541)
    • os.environ can only contain bytes on Win/py2 (#528)
    • Fix hglib encoding issues on Python 3 (#508)
    • Set GIT_CEILING_DIRECTORIES for Git (#636)
    • Run pip via python -mpip to avoid shebang limits (#569)
    • Always use https URLs (#583)
    • Add a min-height on graphs to avoid a flot traceback (#596)
    • Escape label html text in plot legends (#614)
    • Disable pip build isolation in wheel_cache (#670)
    • Fixup CI, test, etc issues (#616, #552, #601, #586, #554, #549, #571, #527, #560, #565)
  • v0.2.2rc1 (Jul 9, 2018)

    Bugfix release with minor feature additions.

    New Features

    • Add a --no-pull option to asv publish and asv run (#592)
    • Add a --rewrite option to asv gh-pages and fix bugs (#578, #529)
    • Add a --html-dir option to asv publish (#545)
    • Add a --yes option to asv machine (#540)
    • Enable running via python -masv (#538)

    Bug Fixes

    • Fix support for mercurial >= 4.5 (#643)
    • Fix detection of git subrepositories (#642)
    • Find conda executable in the "official" way (#646)
    • Hide tracebacks in testing functions (#601)
    • Launch virtualenv in a more sensible way (#555)
    • Disable user site directory also when using conda (#553)
    • Set PIP_USER to false when running an executable (#524)
    • Set PATH for commands launched inside environments (#541)
    • os.environ can only contain bytes on Win/py2 (#528)
    • Fix hglib encoding issues on Python 3 (#508)
    • Set GIT_CEILING_DIRECTORIES for Git (#636)
    • Run pip via python -mpip to avoid shebang limits (#569)
    • Always use https URLs (#583)
    • Add a min-height on graphs to avoid a flot traceback (#596)
    • Escape label html text in plot legends (#614)
    • Fixup CI, test, etc issues (#616, #552, #601, #586, #554, #549, #571, #527, #560, #565)
  • v0.2.1 (Jun 22, 2017)

    Bug fixes:

    • Use process groups on Windows (#489)
    • Sanitize html filenames (#498)
    • Fix incorrect date formatting + default sort order in web ui (#504)
  • v0.2 (Oct 22, 2016)

    New Features

    • Automatic detection and listing of performance regressions. (#236)
    • Support for Windows. (#282)
    • New setup_cache method. (#277)
    • Exclude/include rules in configuration matrix. (#329)
    • Command-line option for selecting environments. (#352)
    • Possibility to include packages via pip in conda environments. (#373)
    • The pretty_name attribute can be used to change the display name of benchmarks. (#425)
    • Git submodules are supported. (#426)
    • The time when benchmarks were run is tracked. (#428)
    • New summary web page showing a list of benchmarks. (#437)
    • Atom feed for regressions. (#447)
    • PyPy support. (#452)

    API Changes

    • The parent directory of the benchmark suite is no longer inserted into sys.path. (#307)
    • Repository mirrors are no longer created for local repositories. (#314)
    • In the asv.conf.json matrix, null previously meant (undocumented) the latest version. Now it means that the package is not to be installed. (#329)
    • Previously, the setup and teardown methods were run only once even when the benchmark method was run multiple times, for example due to repeat > 1 being present in timing benchmarks. This has been changed so that they are also run multiple times. (#316)
    • The default branch for Mercurial is now default, not tip. (#394)
    • Benchmark results are now by default ordered by commit, not by date. (#429)
    • When asv run and other commands are called without specifying revisions, the default values are taken from the branches in asv.conf.json. (#430)
    • The default value for --factor in asv continuous and asv compare was changed from 2.0 to 1.1 (#469).

    Bug Fixes

    • Output will display on non-Unicode consoles. (#313, #318, #336)
    • Longer default install timeout. (#342)
    • Many other bugfixes and minor improvements.
  • v0.2rc2 (Oct 17, 2016)

    New Features

    • Automatic detection and listing of performance regressions. (#236)
    • Support for Windows. (#282)
    • New setup_cache method. (#277)
    • Exclude/include rules in configuration matrix. (#329)
    • Command-line option for selecting environments. (#352)
    • Possibility to include packages via pip in conda environments. (#373)
    • The pretty_name attribute can be used to change the display name of benchmarks. (#425)
    • Git submodules are supported. (#426)
    • The time when benchmarks were run is tracked. (#428)
    • New summary web page showing a list of benchmarks. (#437)
    • Atom feed for regressions. (#447)
    • PyPy support. (#452)

    API Changes

    • The parent directory of the benchmark suite is no longer inserted into sys.path. (#307)
    • Repository mirrors are no longer created for local repositories. (#314)
    • In the asv.conf.json matrix, null previously meant (undocumented) the latest version. Now it means that the package is not to be installed. (#329)
    • Previously, the setup and teardown methods were run only once even when the benchmark method was run multiple times, for example due to repeat > 1 being present in timing benchmarks. This has been changed so that they are also run multiple times. (#316)
    • The default branch for Mercurial is now default, not tip. (#394)
    • Benchmark results are now by default ordered by commit, not by date. (#429)
    • When asv run and other commands are called without specifying revisions, the default values are taken from the branches in asv.conf.json. (#430)
    • The default value for --factor in asv continuous and asv compare was changed from 2.0 to 1.1 (#469).

    Bug Fixes

    • Output will display on non-Unicode consoles. (#313, #318, #336)
    • Longer default install timeout. (#342)
    • Many other bugfixes and minor improvements.