Probabilistic programming framework that facilitates objective model selection for time-varying parameter models.

Overview

bayesloop

[Badges: build status, documentation status, coverage status, MIT license, DOI]

Time series analysis today is an important cornerstone of quantitative science in many disciplines, including natural and life sciences as well as economics and social sciences. Across phenomena as diverse as tumor cell migration, brain activity and stock trading, these complex systems show a striking similarity: the observable data we measure – cell migration paths, neuron spike rates and stock prices – are the result of a multitude of underlying processes that act over a broad range of spatial and temporal scales. It is therefore to be expected that the statistical properties of these systems are not constant, but show stochastic or deterministic dynamics of their own. Time series models used to understand the dynamics of complex systems consequently have to account for temporal changes of the models' parameters.

bayesloop is a Python module that focuses on fitting time series models with time-varying parameters and on model selection based on Bayesian inference. Instead of relying on MCMC methods, bayesloop uses a grid-based approach to evaluate probability distributions, allowing for an efficient approximation of the marginal likelihood (evidence). The marginal likelihood represents a powerful tool for objectively comparing different models and/or optimizing the hyper-parameters of hierarchical models. To avoid the curse of dimensionality when analyzing time series models with time-varying parameters, bayesloop employs a sequential inference algorithm based on the forward-backward algorithm used in hidden Markov models. Here, the relevant parameter spaces are kept low-dimensional by processing the time series data step by step. The module covers a large class of time series models and is easily extensible.
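
The following minimal NumPy sketch illustrates the kind of grid-based, sequential updating described above; it is a conceptual example, not bayesloop's actual implementation, and the count data, rate grid and transition width are made-up values:

import numpy as np
from scipy.stats import poisson
from scipy.ndimage import gaussian_filter1d

# made-up count data and a discretized grid for the Poisson rate parameter
data = np.array([4, 5, 4, 1, 0, 2, 1, 1])
rate_grid = np.linspace(0.01, 10.0, 1000)
d_rate = rate_grid[1] - rate_grid[0]

prior = np.ones_like(rate_grid) / (rate_grid[-1] - rate_grid[0])  # flat prior
log_evidence = 0.0

for count in data:
    # transition step: let the rate drift slightly (Gaussian random walk kernel)
    prior = gaussian_filter1d(prior, sigma=0.5 / d_rate)

    # update step: multiply by the Poisson likelihood of the observed count
    posterior = prior * poisson.pmf(count, rate_grid)

    # the normalization constant of each step contributes to the model evidence
    norm = np.sum(posterior) * d_rate
    log_evidence += np.log(norm)
    prior = posterior / norm  # the posterior becomes the next step's prior

print('log marginal likelihood (evidence):', log_evidence)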

bayesloop has been successfully employed in cancer research (studying the migration paths of invasive tumor cells), financial risk assessment, climate research and accident analysis. For a detailed description of these applications, see the following articles:

Bayesian model selection for complex dynamic systems
Mark C., Metzner C., Lautscham L., Strissel P.L., Strick R. and Fabry B.
Nature Communications 9:1803 (2018)

Superstatistical analysis and modelling of heterogeneous random walks
Metzner C., Mark C., Steinwachs J., Lautscham L., Stadler F. and Fabry B.
Nature Communications 6:7516 (2015)

Features

  • infer time-varying parameters from time series data
  • compare hypotheses about parameter dynamics (model evidence)
  • create custom models based on SymPy and SciPy
  • straightforward handling of missing data points
  • predict future parameter values
  • detect change-points and structural breaks in time series data
  • apply model selection to online data streams

Getting started

For a comprehensive introduction and overview of the main features that bayesloop provides, see the documentation.

The following code provides a minimal example of an analysis carried out using bayesloop. The data here consists of the number of coal mining disasters in the UK per year from 1851 to 1962 (see this article for further information).

import bayesloop as bl
import matplotlib.pyplot as plt
import seaborn as sns

S = bl.HyperStudy()  # start new data study
S.loadExampleData()  # load data array

# observed number of disasters is modeled by Poisson distribution
L = bl.om.Poisson('rate')

# disaster rate itself may change gradually over time
T = bl.tm.GaussianRandomWalk('sigma', bl.cint(0, 1.0, 20), target='rate')

S.set(L, T)
S.fit()  # inference

# plot data together with inferred parameter evolution
plt.figure(figsize=(8, 3))

plt.subplot2grid((1, 3), (0, 0), colspan=2)
plt.xlim([1852, 1961])
plt.bar(S.rawTimestamps, S.rawData, align='center', facecolor='r', alpha=.5)
S.plot('rate')
plt.xlabel('year')

# plot hyper-parameter distribution
plt.subplot2grid((1, 3), (0, 2))
plt.xlim([0, 1])
S.plot('sigma', facecolor='g', alpha=0.7, lw=1, edgecolor='k')
plt.tight_layout()
plt.show()

Analysis plot

This analysis indicates a significant improvement of safety conditions between 1880 and 1900. Check out the documentation for further insights!
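
Because the fit also yields the marginal likelihood, competing hypotheses about the parameter dynamics can be ranked directly. The following hedged sketch compares the gradually changing rate to a static rate; it assumes that loadExampleData is also available on the basic Study class, that a static transition model is provided as bl.tm.Static(), and that the fitted log10 model evidence (as printed in bayesloop's fit output) is exposed as the attribute log10Evidence:

# hypothesis 1: the disaster rate is constant over time
M1 = bl.Study()
M1.loadExampleData()
M1.set(bl.om.Poisson('rate'), bl.tm.Static())
M1.fit()

# hypothesis 2: the disaster rate changes gradually (fixed random-walk width)
M2 = bl.Study()
M2.loadExampleData()
M2.set(bl.om.Poisson('rate'),
       bl.tm.GaussianRandomWalk('sigma', 0.5, target='rate'))
M2.fit()

# assumption: the fitted log10 evidence is available as `log10Evidence`
print('log10 Bayes factor (gradual vs. static):', M2.log10Evidence - M1.log10Evidence)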

Installation

The easiest way to install the latest release version of bayesloop is via pip:

pip install bayesloop

Alternatively, a zipped version can be downloaded here. The module is installed by calling python setup.py install.

Development version

The latest development version of bayesloop can be installed from the master branch using pip (requires git):

pip install git+https://github.com/christophmark/bayesloop

Alternatively, use this zipped version or clone the repository.

Dependencies

bayesloop is tested on Python 2.7, 3.5 and 3.6. It depends on NumPy, SciPy, SymPy, matplotlib, tqdm and dill. All except the last two are already included in the Anaconda distribution of Python. Windows users may also take advantage of pre-compiled binaries for all dependencies, which can be found at Christoph Gohlke's page.

Optional dependencies

bayesloop supports multiprocessing for computationally expensive analyses, based on the pathos module. The latest version can be obtained directly from GitHub using pip (requires git):

pip install git+https://github.com/uqfoundation/pathos

Note: Windows users need to install a C compiler before installing pathos. One possible solution for 64bit systems is to install Microsoft Visual C++ 2008 SP1 Redistributable Package (x64) and Microsoft Visual C++ Compiler for Python 2.7.

License

The MIT License (MIT)

If you have any further questions, suggestions or comments, do not hesitate to contact me: [email protected]

Comments
  • Prior distributions

    While bayesloop allows setting prior distributions at some stages of the modeling process, the main inference algorithm of the Study class still assumes a flat prior.

    Prior distributions could be implemented based on scipy.stats (a sketch follows below).

    Uninformative priors could be implemented directly in the observation models.

    enhancement 
    opened by christophmark 5
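
    A minimal sketch of what a scipy.stats-based prior might look like, using the prior keyword of an observation model as it also appears in this repository's test suite; the exponential prior and the grid values here are illustrative assumptions, not bayesloop defaults:

    import scipy.stats
    import bayesloop as bl

    # sketch: pass a scipy.stats density as the parameter prior of the
    # observation model (the `prior` keyword also appears in test_study.py)
    L = bl.om.Poisson('rate', bl.oint(0, 6, 1000),
                      prior=scipy.stats.expon(scale=1.0).pdf)
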
  • Implement raster-study

    Up to this point, only the change-point study allows determining the distribution of hyper-parameter values (the change-point distribution). A similar study could be performed with continuous hyper-parameters by choosing discrete values on a grid to approximate the distribution of those hyper-parameters. The current change-point study can probably be integrated into this more general scheme.

    enhancement 
    opened by christophmark 3
  • Simpler implementation of new observation models

    Creating new observation models requires a lot of knowledge of how bayesloop works. By creating a parent class for observation models that takes care of the grid integration and missing-value routines, we could probably create observation models from any scipy.stats object on the fly (see the sketch below).

    enhancement 
    opened by christophmark 3
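
    In later bayesloop versions, custom observation models can reportedly be built from SciPy and SymPy distributions (see the Features list above); a hedged sketch, assuming a bl.om.SciPy wrapper that takes a scipy.stats distribution followed by alternating parameter names and grids:

    import scipy.stats
    import bayesloop as bl

    # assumption: bl.om.SciPy wraps a scipy.stats distribution on-the-fly
    L = bl.om.SciPy(scipy.stats.laplace,
                    'loc',   bl.cint(-1, 1, 200),
                    'scale', bl.oint(0, 1, 200))
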
  • Bump nbconvert from 5.0.0 to 6.3.0 in /docs/source

    Bumps nbconvert from 5.0.0 to 6.3.0.


    dependencies 
    opened by dependabot[bot] 2
  • bayesloop module fails to load, Python 3.7.7 64-bit

    I wanted to try out bayesloop. I followed the installation instructions in the documentation, but the module import failed:

    Python 3.7.7 (default, May  6 2020, 11:45:54) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import bayesloop
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "D:\workspace\tools\Miniconda3\lib\site-packages\bayesloop\__init__.py", line 4, in <module>
        from .core import Study, HyperStudy, ChangepointStudy, OnlineStudy
      File "D:\workspace\tools\Miniconda3\lib\site-packages\bayesloop\core.py", line 15, in <module>
        from scipy.misc import factorial
    ImportError: cannot import name 'factorial' from 'scipy.misc' (D:\workspace\tools\Miniconda3\lib\site-packages\scipy\misc\__init__.py)
    

    I tried to resolve the problem by following https://stackoverflow.com/questions/56283294/importerror-cannot-import-name-factorial, but that didn't resolve it.

    Any idea on how to overcome this issue?

    opened by amilenovic 2
  • Replace the deprecated scipy.misc with scipy.special

    factorial and logsumexp have been moved from scipy.misc to scipy.special (see the snippet below).

    Details: https://scipy.github.io/devdocs/release.1.3.0.html#scipy-interpolate-changes

    opened by EzzEddin 2
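
    A sketch of the corresponding import change in bayesloop's modules; the try/except fallback for older SciPy versions is an assumption, not the actual patch:

    # factorial and logsumexp now live in scipy.special (newer SciPy versions
    # remove them from scipy.misc); fall back to the old location if needed
    try:
        from scipy.special import factorial, logsumexp
    except ImportError:
        from scipy.misc import factorial, logsumexp
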
  • Serial transition model

    In some applications, different transition models apply to different time segments. A concept similar to the combined transition model would therefore be desirable, one that e.g. assumes static parameters up to a certain time step t and Gaussian fluctuations afterwards (see the sketch below).

    enhancement 
    opened by christophmark 2
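
    A serial transition model is available in later bayesloop versions (see the 1.1 release notes below); this is a hedged sketch assuming a bl.tm.SerialTransitionModel that chains sub-models separated by change-points, with illustrative parameter values:

    # assumption: static rate up to a change-point, Gaussian fluctuations afterwards
    T = bl.tm.SerialTransitionModel(bl.tm.Static(),
                                    bl.tm.ChangePoint('t_change', 1890),
                                    bl.tm.GaussianRandomWalk('sigma', 0.2,
                                                             target='rate'))
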
  • Bump nbconvert from 5.0.0 to 6.5.1 in /docs/source

    Bumps nbconvert from 5.0.0 to 6.5.1.


    dependencies 
    opened by dependabot[bot] 1
  • Test failing, possibly because of changes to scipy optimize (?)

    See here:

    =================================== FAILURES ===================================
    _____________________ TestTwoParameterModel.test_optimize ______________________
    self = <test_study.TestTwoParameterModel object at 0x7fa2fc813610>
        def test_optimize(self):
            # carry out fit
            S = bl.Study()
            S.loadData(np.array([1, 2, 3, 4, 5]))
            S.setOM(bl.om.Gaussian('mean', bl.cint(0, 6, 20), 'sigma', bl.oint(0, 2, 20), prior=lambda m, s: 1/s**3))
            T = bl.tm.CombinedTransitionModel(bl.tm.GaussianRandomWalk('sigma', 1.07, target='mean'),
                                              bl.tm.RegimeSwitch('log10pMin', -3.90))
            S.setTM(T)
            S.optimize()
            # test parameter distributions
    >       np.testing.assert_allclose(S.getParameterDistributions('mean', density=False)[1][:, 5],
                                       [4.525547e-04, 1.677968e-03, 2.946498e-07, 1.499508e-08, 1.102637e-09],
                                       rtol=1e-05, err_msg='Erroneous posterior distribution values.')
    E       AssertionError: 
    E       Not equal to tolerance rtol=1e-05, atol=0
    E       Erroneous posterior distribution values.
    E       Mismatched elements: 5 / 5 (100%)
    E       Max absolute difference: 6.48028681e-08
    E       Max relative difference: 0.00072859
    E        x: array([4.525729e-04, 1.677903e-03, 2.945258e-07, 1.498415e-08,
    E              1.102384e-09])
    E        y: array([4.525547e-04, 1.677968e-03, 2.946498e-07, 1.499508e-08,
    E              1.102637e-09])
    test_study.py:330: AssertionError
    ----------------------------- Captured stdout call -----------------------------
    + Created new study.
    + Successfully imported array.
    + Observation model: Gaussian observations. Parameter(s): ['mean', 'sigma']
    + Transition model: Combined transition model. Hyper-Parameter(s): ['sigma', 'log10pMin']
    + Starting optimization...
      --> All model parameters are optimized (except change/break-points).
        + Log10-evidence: -3.47897 - Parameter values: [ 1.07 -3.9 ]
        + Log10-evidence: -3.75097 - Parameter values: [ 2.07 -3.9 ]
        + Log10-evidence: -3.48400 - Parameter values: [ 1.07 -2.9 ]
        + Log10-evidence: -6.94856 - Parameter values: [ 0.07017134 -3.91851083]
        + Log10-evidence: -4.39924 - Parameter values: [ 0.57008567 -3.90925542]
        + Log10-evidence: -3.52186 - Parameter values: [ 1.31999906 -3.90068388]
        + Log10-evidence: -3.47888 - Parameter values: [ 1.06965806 -4.02499953]
        + Log10-evidence: -3.59705 - Parameter values: [ 0.81965823 -4.02529235]
        + Log10-evidence: -3.49207 - Parameter values: [ 1.19465698 -4.02551875]
        + Log10-evidence: -3.48383 - Parameter values: [ 1.00715847 -4.02522561]
        + Log10-evidence: -3.47981 - Parameter values: [ 1.1009061  -4.02534994]
        + Log10-evidence: -3.47888 - Parameter values: [ 1.06948286 -4.04062355]
        + Log10-evidence: -3.48005 - Parameter values: [ 1.03823334 -4.04079793]
        + Log10-evidence: -3.47909 - Parameter values: [ 1.08510319 -4.04100528]
        + Log10-evidence: -3.47890 - Parameter values: [ 1.06167279 -4.04081835]
        + Log10-evidence: -3.47888 - Parameter values: [ 1.07332013 -4.04135437]
        + Log10-evidence: -3.47888 - Parameter values: [ 1.06911745 -4.04254219]
        + Log10-evidence: -3.47883 - Parameter values: [ 1.06570473 -4.0396313 ]
        + Log10-evidence: -3.47889 - Parameter values: [ 1.06187307 -4.03887159]
        + Log10-evidence: -3.47889 - Parameter values: [ 1.06666122 -4.03792841]
        + Log10-evidence: -3.47890 - Parameter values: [ 1.06608651 -4.04154675]
        + Log10-evidence: -3.47884 - Parameter values: [ 1.0647419  -4.03946811]
        + Log10-evidence: -3.47883 - Parameter values: [ 1.06578141 -4.04011352]
        + Log10-evidence: -3.47890 - Parameter values: [ 1.06602252 -4.04007519]
        + Log10-evidence: -3.47884 - Parameter values: [ 1.06529985 -4.0401943 ]
        + Log10-evidence: -3.47890 - Parameter values: [ 1.06602539 -4.04012232]
        + Log10-evidence: -3.47883 - Parameter values: [ 1.06568278 -4.04013005]
        + Log10-evidence: -3.47890 - Parameter values: [ 1.06578967 -4.04016284]
        + Log10-evidence: -3.47883 - Parameter values: [ 1.0657657  -4.04001477]
    + Finished optimization.
    + Started new fit:
        + Formatted data.
        + Set prior (function): <lambda>. Values have been re-normalized.
        + Finished forward pass.
        + Log10-evidence: -3.47883
        + Finished backward pass.
        + Computed mean parameter values.
    ----------------------------- Captured stderr call -----------------------------
      0%|          | 0/5 [00:00<?, ?it/s]
    100%|██████████| 5/5 [00:00<00:00, 6260.16it/s]
      0%|          | 0/5 [00:00<?, ?it/s]
    100%|██████████| 5/5 [00:00<00:00, 5132.53it/s]
    =============================== warnings summary ===============================
    ../../../../../../opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/bayesloop/core.py:23
      /opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/bayesloop/core.py:23: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working
        from collections import OrderedDict, Iterable
    ../../../../../../opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/bayesloop/transitionModels.py:15
      /opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/bayesloop/transitionModels.py:15: DeprecationWarning: Please use `gaussian_filter1d` from the `scipy.ndimage` namespace, the `scipy.ndimage.filters` namespace is deprecated.
        from scipy.ndimage.filters import gaussian_filter1d
    ../../../../../../opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/bayesloop/transitionModels.py:16
      /opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/bayesloop/transitionModels.py:16: DeprecationWarning: Please use `shift` from the `scipy.ndimage` namespace, the `scipy.ndimage.interpolation` namespace is deprecated.
        from scipy.ndimage.interpolation import shift
    
    opened by christophmark 1
  • Performance

    Hi,

    I am following along the example:

    for r in tqdm_notebook(logReturns):
        S.step(r)
    

    Each step takes about 1.5 seconds to process. Is this normal, or did I do something wrong? I have minute data with about 1M historical prices. Would I need to send all of them into the step function? I am new to probabilistic programming and bayesloop. Thanks for any help/guidance.

    opened by jmrichardson 1
  • Get number positivity

    I have a process that randomly produces numbers between -50 and 50. Is there any way to predict the sign (positive or negative) of the next value with at least 90% accuracy, based on a historical record?

    opened by lorenzolopez928 1
Releases(1.5.7)
  • 1.5.7(Dec 20, 2022)

  • 1.5.6(Jul 28, 2022)

    Switch from dill to cloudpickle for saving/loading Study-objects to/from file. This ensures compatibility with JupyterLite as dill cannot easily be imported in a pyodide kernel.

    Source code(tar.gz)
    Source code(zip)
  • 1.5.5(Jul 24, 2022)

  • 1.5.4(Jul 22, 2022)

  • 1.5.3(Dec 4, 2021)

  • 1.5.2(Dec 4, 2021)

  • 1.5.1(Dec 4, 2021)

  • 1.5.0(Dec 4, 2021)

    New features:

    • New transition model: Bivariate random walk

    Fixes:

    • various import fixes
    • more stability for complex transformations in the Parser module

    Development:

    • moved tests and coverage to GitHub Actions
    Source code(tar.gz)
    Source code(zip)
  • 1.4(Mar 7, 2018)

    New features:

    • New observation model: Laplace distribution
    • Hyper-parameter optimization now supports "forward-only" algorithm

    Fixes:

    • Model evidence of ChangePoint transition model depended on the chosen grid-size
    • RegimeSwitch transition model did not support integer parameter values
    • Jeffreys prior for Gaussian observation model was parametrized on variance, not standard deviation
    • SymPy observation models now support Beta function
    Source code(tar.gz)
    Source code(zip)
  • 1.3(Sep 29, 2017)

    New features:

    • Additional API functions in OnlineStudy
    • Probability Parser for arithmetic operations on inferred (hyper-)parameters
    • Custom likelihood functions (observation models) based on NumPy functions
    • Universal plot method
    • Convenience methods load, set, add, eval

    Fixes:

    • Support for besseli function in SymPy models
    • Consistent order of parameters in SymPy/SciPy models
    • Consistent order of parameters in joint-distribution plots
    • Fix to support SymPy 1.1
    • AlphaStableRandomWalk transition model
    • NotEqual transition model

    Development:

    • bayesloop now features automatic testing based on Travis CI.
    • Automatic code coverage evaluation by coveralls.io
    Source code(tar.gz)
    Source code(zip)
  • 1.2.2(Feb 12, 2017)

    Fixes

    • Hotfix for scaling of hyper-prior values in ChangepointStudy, resulting in distorted model evidence values. This bug was introduced in version 1.2.0.
    Source code(tar.gz)
    Source code(zip)
  • 1.2.1(Feb 8, 2017)

  • 1.2.0(Feb 5, 2017)

    Algorithm & API changes

    • dill module is a required dependency. Loading and saving Study instances is no longer an optional feature.
    • Major refinements to OnlineStudy. This type of analysis now behaves more like a HyperStudy and continually updates hyper-parameter distributions and transition model probabilities.
    • SymPy priors are not re-normalized. This allows defining priors with a support interval that deviates from the defined parameter grid.
    • get...Distribution() methods return probability values of (hyper-)parameters, not density values. This allows for easier post-processing of (hyper-)parameter distributions.
    Source code(tar.gz)
    Source code(zip)
  • 1.1.4(Jan 17, 2017)

  • 1.1.3(Jan 16, 2017)

    Fixes

    • fixed "dtype=object" error for likelihood array
    • hyper-parameter values were not restored after fitting (hyper-study)
    • improved verbosity settings (hyper-study and data import)
    • evaluation of average posterior distribution now numerically stable for large data sets (hyper-study)
    • hyper-study handles standard fits and simple change-point-analyses
    Source code(tar.gz)
    Source code(zip)
  • 1.1.2(Dec 22, 2016)

    Fixes

    • import of list data
    • use of custom timestamps in HyperStudy
    • lattice scaling for deterministic transition models
    • use of deterministic transition models with more than one hyper-parameter
    • prevent excessive output in the case of unsuitable parameter boundaries
    • progress bar also shown for standard fit method
    • require updated version of tqdm, fixes warning
    Source code(tar.gz)
    Source code(zip)
  • 1.1.1(Dec 13, 2016)

  • 1.1(Dec 5, 2016)

    New features

    • Change-points can be used in serial transition models
    • Added Gaussian mean model
    • New API functions for online study class
    • Added documentation for online study class
    Source code(tar.gz)
    Source code(zip)
  • 1.0(Nov 10, 2016)

Owner
Christoph Mark
Data scientist with a focus on time series analysis and complex systems, always searching for patterns in biological, financial, and business-related data.