Methods to get the probability of a changepoint in a time series.

Overview

Bayesian Changepoint Detection

Methods to get the probability of a changepoint in a time series. Both online and offline methods are available. Read the following papers to really understand the methods:

[1] Paul Fearnhead, Exact and Efficient Bayesian Inference for Multiple
Changepoint Problems, Statistics and Computing 16.2 (2006), pp. 203--213

[2] Ryan P. Adams and David J.C. MacKay, Bayesian Online Changepoint Detection,
arXiv:0710.3742 (2007)

[3] Xiang Xuan and Kevin Murphy, Modeling Changing Dependency Structure in
Multivariate Time Series, ICML (2007), pp. 1055--1062

To see it in action, have a look at the example notebook.
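A minimal usage sketch is below. The offline call mirrors the one quoted in a traceback further down this page; the online module and the constant_hazard and StudentT names follow older releases and may have moved in v0.4, and all prior/hazard values are placeholders rather than recommendations.

    import numpy as np
    from functools import partial
    import bayesian_changepoint_detection.offline_changepoint_detection as offcd
    import bayesian_changepoint_detection.online_changepoint_detection as oncd

    data = np.random.randn(200)  # toy series; replace with your own data

    # Offline detection (Fearnhead 2006); this call matches the usage shown in
    # the issues below.
    Q, P, Pcp = offcd.offline_changepoint_detection(
        data,
        partial(offcd.const_prior, l=(len(data) + 1)),
        offcd.gaussian_obs_log_likelihood,
        truncate=-40,
    )

    # Online detection (Adams & MacKay 2007); R is the run-length distribution
    # discussed in several issues below.
    R, maxes = oncd.online_changepoint_detection(
        data,
        partial(oncd.constant_hazard, 250),
        oncd.StudentT(0.1, 0.01, 1, 0),  # alpha, beta, kappa, mu (assumed order)
    )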

Comments
  • Other observation models besides Gaussian

    Other observation models besides Gaussian

    Hi. I was wondering if you had any insight into extending your code to include other emission models besides Gaussian. In particular, how about a GMM with a known number of Gaussians?

    I was going to take a stab at implementing it and submit a PR, but wanted to get your input first.

    Thanks

    Dan
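
    No GMM emission model ships with the library, but a sketch of the plug-in shape such an extension could take is below, assuming the offline likelihood interface f(data, t, s) -> log likelihood of data[t:s] that gaussian_obs_log_likelihood follows. The helper name is hypothetical, and fitting a GMM per segment with EM is a maximum-likelihood plug-in rather than the marginal likelihood a fully Bayesian treatment would use.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def gmm_obs_log_likelihood(data, t, s, n_components=2):
        """Plug-in log likelihood of the segment data[t:s] under a GMM
        (hypothetical helper mirroring the (data, t, s) signature)."""
        segment = np.asarray(data[t:s]).reshape(-1, 1)
        if len(segment) < n_components:
            n_components = 1  # too few points to fit the full mixture
        gmm = GaussianMixture(n_components=n_components).fit(segment)
        # score() returns the mean log likelihood per sample
        return gmm.score(segment) * len(segment)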

    enhancement 
    opened by mathDR 16
  • CD automation for deployment to PyPI

    CD automation for deployment to PyPI

    What is this feature about? CD for deploying the package to PyPI, using a GitHub Actions workflow.

    Closes Issues: https://github.com/hildensia/bayesian_changepoint_detection/issues/32

    Prerequisites for the owner @hildensia before merging this:

    1. Create a new API token inside the PyPI account where this project lives (https://pypi.org/).
    2. Create two repository secrets inside the GitHub project settings (steps defined here):
      A. A secret named PYPI_USERNAME with the value __token__
      B. A secret named PYPI_PROD_PASSWORD with the token value from step 1

    How to release a package? It leverages GitHub's release feature and can only be done by a project admin/maintainer (@hildensia). Right now the release is a manual action: use the Releases option provided by GitHub and supply a version tag and description. If we want to change the release strategy we can update cd.yml accordingly, but in my experience most projects follow manual releases.

    What testing was done? I have tested this pipeline by deploying the package to my Test PyPI account: https://github.com/zillow/bayesian_changepoint_detection/actions/runs/1966108484

    opened by shahsmit14 12
  • Add pyx file again

    Add pyx file again

    It was removed during a PR. Is there a good way to keep the Cython and Python versions in sync? I'm not sure which one I prefer (Python is better for debugging, Cython is faster).

    opened by hildensia 5
  • How to utilize R matrix to detect change points?

    How to utilize R matrix to detect change points?

    In the current version of the code, Nw=10; ax.plot(R[Nw,Nw:-1]) is used to exhibit the changepoints. Although it works fine, I am really confused about the reasoning behind it. I tried to plot the run length with the maximum probability at each time step, i.e. the y index of the maximum probability in each x column, but the result showed the run length keeps going up... I also went back to Adams's paper but found nothing about changepoint identification (he just stops at the R matrix)... I also tried to find Adams's MATLAB code, but it seems to have been removed...

    I am trying to use this method in my work, and I believe it's best to fully understand it before any deployment. Any help will be appreciated, thanks a lot!
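
    As an aside, assuming R is indexed as R[run_length, time] (which is how the R[Nw, Nw:-1] slice reads), two common ways to read changepoints out of it are sketched below; the MAP run length does keep growing between changepoints and only drops when one occurs, which matches the behaviour described above.

    import numpy as np

    def changepoint_views(R, Nw=10):
        """Two readings of the run-length matrix R returned by the online method
        (assumed indexed R[run_length, time])."""
        # (a) Notebook convention: probability, delayed by Nw steps, that the run
        #     length equals Nw, i.e. that a changepoint happened Nw steps earlier.
        fixed_lag = R[Nw, Nw:-1]
        # (b) Most probable run length at each time step; it grows between
        #     changepoints and drops sharply when one occurs.
        map_run_length = R.argmax(axis=0)
        return fixed_lag, map_run_length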

    opened by mike-ocean 4
  • Corrected scale and beta factor calculation

    Corrected scale and beta factor calculation

    The scale factor should be the standard deviation. There was a small bug in the betaT0 calculation; this change makes it consistent with the paper and the gaussdemo.m file.

    opened by nariox 3
  • Example notebook does not work

    Example notebook does not work

    If I click on the "example notebook" link (an nbviewer link), I get a "too many redirects" error.

    It would be nice if the example notebook were easily accessible in the repo (maybe I overlooked it...), because we don't need a live notebook / nbviewer to figure out whether the example fits our use case.

    opened by chryss 2
  • Updating parameters for bayesian online change point

    Updating parameters for bayesian online change point

    I think my question is related to this one, which was not answered and is already closed: https://github.com/hildensia/bayesian_changepoint_detection/issues/19

    In your example, you have applied the Student t-distribution as a likelihood. I understand the distribution and its parameters, but I have a question about how you set up the prior and update its parameters in the code. The code in question is:

    df = 2*self.alpha
    scale = np.sqrt(self.beta * (self.kappa+1) / (self.alpha * self.kappa))
    

    I don't understand what alpha, beta and kappa correspond to. How did you come across this expression? The paper by Adams and MacKay refers to updating sufficient statistics. Is your expression related to that? If so, how can I do that for any other distribution, let's say a Gaussian? In my comment, I refer to the following formula in the paper:

    [equation image from the Adams & MacKay paper]
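
    For reference, alpha, beta and kappa (together with mu) are the hyperparameters of a Normal-Gamma prior on the Gaussian's mean and precision, and updating them after each observation is exactly the sufficient-statistics update the paper refers to. A sketch of the standard conjugate update and the resulting Student-t predictive (matching the df/scale lines quoted above) follows; it restates textbook formulas rather than code from this repository.

    import numpy as np

    def normal_gamma_update(x, mu, kappa, alpha, beta):
        """One-step conjugate update after observing x (Gaussian with unknown
        mean and precision under a Normal-Gamma prior)."""
        mu_new    = (kappa * mu + x) / (kappa + 1)
        kappa_new = kappa + 1
        alpha_new = alpha + 0.5
        beta_new  = beta + kappa * (x - mu) ** 2 / (2 * (kappa + 1))
        return mu_new, kappa_new, alpha_new, beta_new

    def predictive_params(mu, kappa, alpha, beta):
        """Student-t posterior predictive parameters."""
        df = 2 * alpha
        loc = mu
        scale = np.sqrt(beta * (kappa + 1) / (alpha * kappa))
        return df, loc, scale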

    opened by celdorwow 2
  • Scipy Import Error on newer versions

    Scipy Import Error on newer versions

    Hi guys,

    There is an import issue if one uses newer SciPy versions.

    It would be a quick fix if you adapted the import statement in offline_changepoint_detection.py:

    try:  # SciPy >= 0.19
        from scipy.special import comb, logsumexp
    except ImportError:
        from scipy.misc import comb, logsumexp  # noqa
    
    opened by fhaselbeck 2
  • Multivariate T

    Multivariate T

    • Introduces a pluggable prior/posterior config for multivariate Gaussian data, with sensible defaults. Note that this only works for scipy > 1.6.0, where they introduced the multivariate t PDF. The library will remind you to upgrade if you have an old version.
    • Adds a test for this new configuration, as well as for the univariate one
    • Adds a "dev" and "multivariate" setup extra, meaning that you can pip install bayesian_changepoint_detection[dev] for development work (currently this installs pytest), or pip install bayesian_changepoint_detection[multivariate] (enforces that you have a new enough scipy version for this new feature)
    opened by multimeric 2
  • Why the probability exceeds one?

    Why the probability exceeds one?

    I ran the given online detection example in the notebook, and I assumed the y axis indicates the probability of a changepoint (am I right?). But the y values ranged from zero to hundreds. I am not very familiar with the math, so can anyone please explain this outcome?

    Thanks.

    opened by mike-ocean 2
  • Fix full covariance method and add example

    Fix full covariance method and add example

    This fixes the full cov method and adds an example similar to the original ipython notebook. If you prefer, I can merge them separately, but since they are related, I thought it'd be fine to merge them together.

    opened by nariox 2
  • About the conditions to use bocpd

    About the conditions to use bocpd

    Hi, nice to meet you. I want to ask a basic question: if I don't know the distribution of the data (it is not normally distributed), can I still use BOCPD? Thank you!

    opened by Codergers 0
  • Scaling of Data

    Scaling of Data

    Hi, I've noticed that the scaling of the data can have an effect on the result, but I am not sure why it would, and I can't find any reason for it in the code or references. Below are the CP probabilities for the same data with and without a constant factor; they are somewhat different.

    Are there some assumptions about the input data I am missing? Thanks

    [plots: changepoint probabilities for the scaled and unscaled data]
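
    One possible explanation (an assumption, not a maintainer answer): the prior hyperparameters such as beta and kappa encode an absolute scale, so they are not invariant to multiplying the data by a constant. Standardizing the series before detection, as sketched below, sidesteps this.

    import numpy as np

    def standardize(x):
        """Z-score the series so fixed prior hyperparameters see unit-scale data."""
        x = np.asarray(x, dtype=float)
        return (x - x.mean()) / x.std()

    # run the detection on standardize(data) instead of data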

    opened by stefan37 3
  • How to adjust the sensitivity of the BOCD algorithm?

    How to adjust the sensitivity of the BOCD algorithm?

    There is always a tradeoff between false alarms and missed alarms: when the algorithm is more sensitive, we get a higher false alarm rate and a lower missed alarm rate. My question is: is it possible to adjust the sensitivity level of this algorithm by changing the hyperparameters (e.g., alpha, beta, kappa, mu)? Thank you!
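
    As a hedged illustration (hyperparameter roles from the papers above; the call shape and names such as constant_hazard and StudentT are assumed from the examples elsewhere on this page): the hazard rate sets the a priori expected run length between changepoints, and the observation prior controls how much evidence is needed before the run-length distribution resets.

    from functools import partial
    import bayesian_changepoint_detection.online_changepoint_detection as oncd

    # Larger hazard lambda -> changepoints assumed rarer a priori -> fewer alarms.
    hazard = partial(oncd.constant_hazard, 250)   # try e.g. 50 vs. 1000

    # A tighter observation prior (e.g. smaller beta) makes small deviations look
    # surprising, raising sensitivity; these values are placeholders only.
    likelihood = oncd.StudentT(0.1, 0.01, 1, 0)   # alpha, beta, kappa, mu (assumed order)

    # Both are then passed to online_changepoint_detection as in the sketch near
    # the top of this page.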

    opened by gqffqggqf 4
  • 'FloatingPointError: underflow encountered in logaddexp'  occurs when setting np.seterr(all='raise')

    'FloatingPointError: underflow encountered in logaddexp' occurs when setting np.seterr(all='raise')

    Hi,

    I installed bayesian_changepoint_detection from this github repository.

    By (accidentally) setting np.seterr(all='raise'), I was able to trigger the following exception.

    I am not sure whether this has any relevance for further processing, but I just wanted to bring it to the attention of people working on or with this library.

    /home/user/venv/env01/bin/python3.6 /home/user/PycharmProjects/project01/snippet.py
    Use scipy logsumexp().
    Traceback (most recent call last):
      File "/home/user/PycharmProjects/project01/snippet.py", line 68, in <module>
        Q, P, Pcp = offcd.offline_changepoint_detection(data, partial(offcd.const_prior, l=(len(data) + 1)), offcd.gaussian_obs_log_likelihood, truncate=-40)
      File "/home/user/experiments/original-unforked/bayesian_changepoint_detection/bayesian_changepoint_detection/offline_changepoint_detection.py", line 98, in offline_changepoint_detection
        Q[t] = np.logaddexp(P_next_cp, P[t, n-1] + antiG)
    FloatingPointError: underflow encountered in logaddexp
    
    Process finished with exit code 1
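
    Underflow in logaddexp only flushes a vanishingly small probability contribution to zero, so it is usually harmless. If strict np.seterr(all='raise') settings are needed elsewhere, one generic NumPy-level workaround (not a change shipped by the library) is to exempt underflow just for the detection call:

    import numpy as np
    from functools import partial
    import bayesian_changepoint_detection.offline_changepoint_detection as offcd

    data = np.loadtxt("your_series.txt")  # placeholder for the series in snippet.py

    # Raise on other floating-point errors, but ignore underflow while the
    # detection runs.
    with np.errstate(all='raise', under='ignore'):
        Q, P, Pcp = offcd.offline_changepoint_detection(
            data,
            partial(offcd.const_prior, l=(len(data) + 1)),
            offcd.gaussian_obs_log_likelihood,
            truncate=-40,
        )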
    
    
    opened by alatif-alatif 0
  • Added Normal known precision, Poisson distributions + alternate hazard function

    Added Normal known precision, Poisson distributions + alternate hazard function

    For anyone who is interested, I have added Normal (known precision) and Poisson distributions in my fork below. I also tried adding another type of hazard function, which is normally distributed over time. Usage is updated in the example code as well. Find my fork here: https://github.com/kmsravindra/bayesian_changepoint_detection

    opened by kmsravindra 2
  • Confused about the R matrix interpretation

    Confused about the R matrix interpretation

    Hi,

    I am confused about the interpretation of the R matrix returned by the online detection algorithm. In the notebook example, the third plot is R[Nw,Nw:-1], which is described as "the probability at each time step for a sequence length of 0, i.e. the probability of the current time step to be a changepoint." So why do we choose the indices R[Nw,Nw:-1]? Why not R[Nw,:]?

    Also, it was mentioned as an example that R[7,3] means the probability at time step 7 taking a sequence of length 3, so does R[Nw,Nw:-1] mean that we are taking all the probabilities at time step Nw?

    Any suggestions to help me understand the output R?

    Thanks

    opened by RanaElnaggar 4
Releases(v0.4)
Owner
Johannes Kulick
Machine Learning and Robotics Scientist