Bayesian A/B testing

Overview

bayesian_testing is a small package for quick evaluation of A/B (or A/B/C/...) tests using a Bayesian approach.

The package currently supports these data inputs:

  • binary data ([0, 1, 0, ...]) - convenient for conversion-like A/B testing
  • normal data with unknown variance - convenient for normal data A/B testing
  • delta-lognormal data (lognormal data with zeros) - convenient for revenue-like A/B testing

The core evaluation metric of the approach is the Probability of Being Best (i.e. "being larger" from the data's point of view), which is calculated using simulations from the posterior distributions (given the observed data).
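
For intuition, here is a minimal sketch of how such a probability can be approximated for binary data: draw samples from each variant's Beta posterior and count how often each variant produces the largest sample. The counts and the a=b=1/2 prior below are illustrative assumptions, not the package's internals.

import numpy as np

rng = np.random.default_rng(0)

# illustrative aggregated results: (positives, totals) per variant
variants = {"A": (80, 1500), "B": (80, 1200)}

sim_count = 20000
samples = np.column_stack([
    # Beta posterior for each variant (assuming an a=b=1/2 prior)
    rng.beta(0.5 + pos, 0.5 + tot - pos, size=sim_count)
    for pos, tot in variants.values()
])

# probability of being best = share of simulations where a variant had the largest sample
best_counts = np.bincount(samples.argmax(axis=1), minlength=len(variants))
print(dict(zip(variants, best_counts / sim_count)))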

Installation

bayesian_testing can be installed using pip:

pip install bayesian_testing

Alternatively, you can clone the repository and use poetry manually:

cd bayesian_testing
pip install poetry
poetry install
poetry shell

Basic Usage

The primary features are the BinaryDataTest, NormalDataTest and DeltaLognormalDataTest classes.

In all cases, there are two methods to insert data:

  • add_variant_data - adds raw data for a variant as a list of numbers (or a numpy 1-D array)
  • add_variant_data_agg - adds aggregated variant data (this can be practical for large data, as the aggregation can be done at the database level)

Both methods for adding data allow specification of the prior distribution via optional parameters (see details in the respective docstrings). The default prior setup should be sufficient for most cases (e.g. when priors are unknown or the amount of data is large).

To get the results of the test, simply call the evaluate method, or probabs_of_being_best to return just the probabilities.

Probabilities of being best are approximated using simulations, hence evaluate can return slightly different values across runs. To stabilize the results, you can set the sim_count parameter of evaluate to a higher value (the default is 20,000), or use the seed parameter to fix them completely.
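
For example (assuming a test instance to which variants were already added):

test.evaluate(sim_count=200000, seed=52)
# or, to return just the probabilities:
test.probabs_of_being_best()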

BinaryDataTest

Class for a Bayesian A/B test for binary-like data (e.g. conversions or successes).

import numpy as np
from bayesian_testing.experiments import BinaryDataTest

# generating some random data
rng = np.random.default_rng(52)
# random 1x1500 array of 0/1 data with 5.2% probability for 1:
data_a = rng.binomial(n=1, p=0.052, size=1500)
# random 1x1200 array of 0/1 data with 6.7% probability for 1:
data_b = rng.binomial(n=1, p=0.067, size=1200)

# initialize a test
test = BinaryDataTest()

# add variant using raw data (arrays of zeros and ones):
test.add_variant_data("A", data_a)
test.add_variant_data("B", data_b)
# priors can be specified like this (default for this test is a=b=1/2):
# test.add_variant_data("B", data_b, a_prior=1, b_prior=20)

# add variant using aggregated data (same as raw data with 950 zeros and 50 ones):
test.add_variant_data_agg("C", totals=1000, positives=50)

# evaluate test
test.evaluate()
[{'variant': 'A',
  'totals': 1500,
  'positives': 80,
  'conv_rate': 0.05333,
  'prob_being_best': 0.06625},
 {'variant': 'B',
  'totals': 1200,
  'positives': 80,
  'conv_rate': 0.06667,
  'prob_being_best': 0.89005},
 {'variant': 'C',
  'totals': 1000,
  'positives': 50,
  'conv_rate': 0.05,
  'prob_being_best': 0.0437}]

NormalDataTest

Class for a Bayesian A/B test for normal data.

import numpy as np
from bayesian_testing.experiments import NormalDataTest

# generating some random data
rng = np.random.default_rng(21)
data_a = rng.normal(7.2, 2, 1000)
data_b = rng.normal(7.1, 2, 800)
data_c = rng.normal(7.0, 4, 500)

# initialize a test
test = NormalDataTest()

# add variant using raw data:
test.add_variant_data("A", data_a)
test.add_variant_data("B", data_b)
# test.add_variant_data("C", data_c)

# add variant using aggregated data:
test.add_variant_data_agg("C", len(data_c), sum(data_c), sum(np.square(data_c)))

# evaluate test
test.evaluate(sim_count=20000, seed=52)
[{'variant': 'A',
  'totals': 1000,
  'sum_values': 7294.67901,
  'avg_values': 7.29468,
  'prob_being_best': 0.1707},
 {'variant': 'B',
  'totals': 800,
  'sum_values': 5685.86168,
  'avg_values': 7.10733,
  'prob_being_best': 0.00125},
 {'variant': 'C',
  'totals': 500,
  'sum_values': 3736.91581,
  'avg_values': 7.47383,
  'prob_being_best': 0.82805}]

DeltaLognormalDataTest

Class for a Bayesian A/B test for delta-lognormal data (log-normal data with zeros). Delta-lognormal data is a typical case of revenue-per-session data, where many sessions have 0 revenue and the non-zero values are positive numbers that roughly follow a log-normal distribution. To handle such data, the calculation combines a binary Bayesian model for zero vs. non-zero "conversions" with a log-normal model for the non-zero values. A conceptual sketch of this combination follows the example below.

import numpy as np
from bayesian_testing.experiments import DeltaLognormalDataTest

test = DeltaLognormalDataTest()

data_a = [7.1, 0.3, 5.9, 0, 1.3, 0.3, 0, 0, 0, 0, 0, 1.5, 2.2, 0, 4.9, 0, 0, 0, 0, 0]
data_b = [4.0, 0, 3.3, 19.3, 18.5, 0, 0, 0, 12.9, 0, 0, 0, 0, 0, 0, 0, 0, 3.7, 0, 0]

# adding variant using raw data
test.add_variant_data("A", data_a)

# alternatively, a variant can also be added using aggregated data:
test.add_variant_data_agg(
    name="B",
    totals=len(data_b),
    positives=sum(x > 0 for x in data_b),
    sum_values=sum(data_b),
    sum_logs=sum([np.log(x) for x in data_b if x > 0]),
    sum_logs_2=sum([np.square(np.log(x)) for x in data_b if x > 0])
)

test.evaluate(seed=21)
[{'variant': 'A',
  'totals': 20,
  'positives': 8,
  'sum_values': 23.5,
  'avg_values': 1.175,
  'avg_positive_values': 2.9375,
  'prob_being_best': 0.18915},
 {'variant': 'B',
  'totals': 20,
  'positives': 6,
  'sum_values': 61.7,
  'avg_values': 3.085,
  'avg_positive_values': 10.28333,
  'prob_being_best': 0.81085}]
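
For intuition, the sketch below shows one way the two posterior components could be combined into per-session revenue samples: a Beta posterior for the zero/non-zero part and a normal-inverse-gamma posterior on the logs of the non-zero values, with the log-normal mean exp(mu + sigma^2/2). The prior parameters and the exact parameterization here are illustrative assumptions and may differ from the package's internals.

import numpy as np

rng = np.random.default_rng(21)
data = [7.1, 0.3, 5.9, 0, 1.3, 0.3, 0, 0, 0, 0, 0, 1.5, 2.2, 0, 4.9, 0, 0, 0, 0, 0]

totals = len(data)
logs = np.log([x for x in data if x > 0])
positives = logs.size
sim_count = 20000

# zero vs. non-zero part: Beta posterior on the conversion probability (a=b=1/2 prior assumed)
p = rng.beta(0.5 + positives, 0.5 + totals - positives, size=sim_count)

# non-zero part: normal-inverse-gamma posterior on the log-values (weak prior values assumed)
m0, k0, a0, b0 = 0.0, 0.01, 0.5, 0.5
ybar = logs.mean()
ss = ((logs - ybar) ** 2).sum()
k_n = k0 + positives
m_n = (k0 * m0 + positives * ybar) / k_n
a_n = a0 + positives / 2
b_n = b0 + 0.5 * ss + k0 * positives * (ybar - m0) ** 2 / (2 * k_n)

sigma2 = b_n / rng.gamma(a_n, 1.0, size=sim_count)  # inverse-gamma draws for the variance
mu = rng.normal(m_n, np.sqrt(sigma2 / k_n))         # normal draws for the mean, given sigma^2

# per-session revenue samples: conversion probability times the log-normal mean
revenue_per_session = p * np.exp(mu + sigma2 / 2)
print(round(revenue_per_session.mean(), 5))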

Development

To set up the development environment, use Poetry and pre-commit:

pip install poetry
poetry install
poetry run pre-commit install

Roadmap

Test classes to be added:

  • PoissonDataTest
  • ExponentialDataTest

Metrics to be added:

  • Expected Loss
  • Potential Value Remaining
