Correlation Explanation Methods

Official implementation of the linear correlation explanation (linear CorEx) and temporal correlation explanation (T-CorEx) methods.

Overview

Linear CorEx

Linear CorEx searches for independent latent factors that explain all correlations between observed variables, while also biasing the model selection towards modular latent factor models: directed latent factor graphical models where each observed variable has a single latent variable as its only parent. This is useful for covariance estimation, clustering related variables, and dimensionality reduction, especially in the high-dimensional and under-sampled regime. The complete description of the method is presented in the NeurIPS 2019 paper "Fast structure learning with modular regularization" by Greg Ver Steeg, Hrayr Harutyunyan, Daniel Moyer, and Aram Galstyan. If you want to cite this paper, please use the following BibTeX entry:

@incollection{NIPS2019_9691,
title = {Fast structure learning with modular regularization},
author = {Ver Steeg, Greg and Harutyunyan, Hrayr and Moyer, Daniel and Galstyan, Aram},
booktitle = {Advances in Neural Information Processing Systems 32},
editor = {H. Wallach and H. Larochelle and A. Beygelzimer and F. d\textquotesingle Alch\'{e}-Buc and E. Fox and R. Garnett},
pages = {15567--15577},
year = {2019},
publisher = {Curran Associates, Inc.},
url = {http://papers.nips.cc/paper/9691-fast-structure-learning-with-modular-regularization.pdf}
}

Note: Greg Ver Steeg has an alternative implementation of linear CorEx, which is available at github.com/gregversteeg/LinearCorex. That implementation uses a quasi-Newton optimization method for learning the model parameters, whereas the implementation provided in this repository uses the Adam optimizer. The latter utilizes GPUs better and can converge to slightly better objective values when the input data is highly non-modular. Nevertheless, we highly encourage you to take a look at the alternative implementation.
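For a quick start, below is a minimal sketch of linear CorEx usage. It assumes that tcorex.Corex exposes the same fit / clusters / get_covariance interface as tcorex.TCorex, which is demonstrated in the Usage section further down.

import numpy as np
from tcorex import Corex

# 200 samples of 50 observed variables (replace with real data).
X = np.random.normal(size=(200, 50))

# Assumption: tcorex.Corex mirrors the TCorex constructor arguments used below.
c = Corex(nv=50, n_hidden=8, max_iter=500, verbose=1)
c.fit(X)

clusters = c.clusters()     # cluster assignment of each observed variable
sigma = c.get_covariance()  # estimated (50 x 50) covariance matrix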

T-CorEx

T-CorEx is a method for covariance estimation from temporal data. It trains a linear CorEx model for each time period, while employing two regularization techniques to enforce temporal consistency of estimates. The method is introduced in the paper "Efficient Covariance Estimation from Temporal Data" by Hrayr Harutyunyan, Daniel Moyer, Hrant Khachatrian, Greg Ver Steeg, and Aram Galstyan. If you want to cite this paper, please use the following BibTeX entry:

@article{tcorex,
  title={Efficient Covariance Estimation from Temporal Data},
  author={Harutyunyan, Hrayr and Moyer, Daniel and Khachatrian, Hrant and Steeg, Greg Ver and Galstyan, Aram},
  journal={arXiv preprint arXiv:1905.13276},
  year={2019}
}
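As a rough sketch of the temporal regularization idea (see the paper for the exact objective), the l1 regularization adds a penalty of the form

    l1 * sum_t || W_t - W_{t-1} ||_1

where W_t denotes the factor loadings of the linear CorEx model at time period t. This discourages the loadings, and hence the covariance estimates, from changing abruptly between neighboring time periods unless the data demands it.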

Both linear CorEx and T-CorEx have linear time and memory complexity with respect to the number of observed variables and can be applied to high-dimensional datasets. For example, it takes less than an hour on a modest PC to estimate the covariance structure for time series with 100K variables using T-CorEx. Both methods are implemented in PyTorch and can run on CPUs and GPUs.
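Since both models accept a PyTorch-style device argument (see the Usage section below), a common pattern is to pick the device at runtime. A small sketch:

import torch

# Use a GPU when one is available, otherwise fall back to the CPU.
device = 'cuda' if torch.cuda.is_available() else 'cpu'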

Requirements and Installation

The code is written in Python 3, but should run on Python 2 as well. The dependencies are the following:

  • numpy, scipy, tqdm, PyTorch
  • [optional] nibabel (for fMRI experiments)
  • [optional] nose (for tests)
  • [optional] sklearn, regain, TVGL, linearcorex, pandas (for running comparisons)
  • [optional] matplotlib and nilearn (for visualizations)

To install the code, run the following command:

python setup.py install

Description

Linear CorEx is implemented in the class tcorex.Corex, and T-CorEx in the class tcorex.TCorex. The complete description of the parameters of these classes can be found in the corresponding docstrings. While there are many parameters (especially for T-CorEx), in general only a couple of them need to be tuned (the others are set to their "best" values). Those parameters are:

  • m (linear CorEx and T-CorEx): The number of latent variables. Usually this is much smaller than the number of observed variables.
  • l1 (T-CorEx only): A non-negative real number specifying the coefficient of the l1 temporal regularization.
  • gamma (T-CorEx only): A real number in [0, 1] that controls the sample weights. The samples of time period t' get weight w_t(t') = gamma^|t' - t| when estimating quantities for time period t. Smaller values are suited to very dynamic time series.
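To build intuition for gamma, the following small sketch (not part of the library API) computes the weights that the samples of each time period would receive when estimating quantities for period t = 4:

import numpy as np

gamma = 0.8
nt = 10   # number of time periods
t = 4     # the period being estimated

# w_t(t') = gamma^|t' - t|: nearby periods get weight close to 1,
# distant ones are discounted geometrically.
weights = np.array([gamma ** abs(tp - t) for tp in range(nt)])
print(weights.round(3))
# [0.41  0.512 0.64  0.8   1.    0.8   0.64  0.512 0.41  0.328]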

Usage

Run the following command for a sample run of T-CorEx.

python -m examples.sample_run

The code is shown below:

from __future__ import print_function
from __future__ import absolute_import

from tcorex.experiments.data import load_modular_sudden_change
from tcorex.experiments import baselines
from tcorex import base
from tcorex import TCorex
from tcorex import covariance as cov_utils

import numpy as np
import matplotlib
matplotlib.use('agg')
from matplotlib import pyplot as plt


def main():
    nv = 32         # number of observed variables
    m = 4           # number of hidden variables
    nt = 10         # number of time periods
    train_cnt = 16  # number of training samples for each time period
    val_cnt = 4     # number of validation samples for each time period

    # Generate some data with a sudden change in the middle.
    data, ground_truth_sigma = load_modular_sudden_change(nv=nv, m=m, nt=nt, ns=(train_cnt + val_cnt))

    # Split it into train and validation.
    train_data = [X[:train_cnt] for X in data]
    val_data = [X[train_cnt:] for X in data]

    # NOTE: the load_modular_sudden_change function above creates data where the time axis
    # is already divided into time periods. If your data is not divided into time periods,
    # you can use the following procedure to do that:
    # bucketed_data, index_to_bucket = make_buckets(data, window=train_cnt + val_cnt, stride='full')
    # where the make_buckets function can be found in tcorex.experiments.data

    # The core class of this package is tcorex.TCorex.
    tc = TCorex(nt=nt,
                nv=nv,
                n_hidden=m,
                max_iter=500,
                device='cpu',  # for GPU, set 'cuda'
                l1=0.3,        # coefficient of temporal regularization term
                gamma=0.3,     # parameter that controls sample weights
                verbose=1,     # 0, 1, 2
                )

    # Fit the parameters of T-CorEx.
    tc.fit(train_data)

    # We can compute the clusters of observed variables for each time period.
    t = 8
    clusters = tc.clusters()
    print("Clusters at time period {}: {}".format(t, clusters[t]))

    # We can get an estimate of the covariance matrix for each time period.
    # When normed=True, estimates of the correlation matrices will be returned.
    covs = tc.get_covariance()

    # We can visualize the covariance matrices.
    fig, ax = plt.subplots(1, figsize=(5, 5))
    im = ax.imshow(covs[t])
    fig.colorbar(im)
    ax.set_title("Estimated covariance matrix\nat time period {}".format(t))
    fig.savefig('covariance-matrix.png')

    # It is usually useful to compute the inverse correlation matrices,
    # since these matrices can be interpreted as adjacency matrices of
    # Markov random fields.
    cors = tc.get_covariance(normed=True)
    inv_cors = [np.linalg.inv(x) for x in cors]

    # We can visualize the thresholded inverse correlation matrices.
    fig, ax = plt.subplots(1, figsize=(5, 5))
    thresholded_inv_cor = np.abs(inv_cors[t]) > 0.05
    ax.imshow(thresholded_inv_cor)
    ax.set_title("Thresholded inverse correlation\nmatrix at time period {}".format(t))
    fig.savefig('thresholded-inverse-correlation-matrix.png')

    # We can also plot the Frobenius norms of the differences between inverse
    # correlation matrices of neighboring time periods. This is helpful for
    # detecting sudden change points of the system.
    diffs = cov_utils.diffs(inv_cors)
    fig, ax = plt.subplots(1, figsize=(5, 5))
    ax.plot(diffs)
    ax.set_xlabel('t')
    ax.set_ylabel(r'$||\Sigma^{-1}_{t+1} - \Sigma^{-1}_{t}||_F$')
    ax.set_title("Frobenius norms of differences between\ninverse correlation matrices")
    fig.savefig('inv-correlation-difference-norms.png')

    # We can also do a grid search over a hyperparameter grid as follows.
    # NOTE: this can take time!
    baseline, grid = (baselines.TCorex(tcorex=TCorex, name='T-Corex'), {
        'nv': nv,
        'n_hidden': m,
        'max_iter': 500,
        'device': 'cpu',
        'l1': [0.0, 0.03, 0.3, 3.0],
        'gamma': [1e-6, 0.3, 0.5, 0.8]
    })

    best_score, best_params, best_covs, best_method, all_results = baseline.select(train_data, val_data, grid)
    tc = best_method  # the model that performed best on the validation data; it can be used as above
    base.save(tc, 'best_method.pkl')


if __name__ == '__main__':
    main()
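The base.save call above pickles the fitted model, so it can be restored in a later session. A minimal sketch, assuming the saved file is a standard pickle:

import pickle

# Restore the model saved by base.save above (assumption: it is a plain pickle).
with open('best_method.pkl', 'rb') as f:
    tc = pickle.load(f)

covs = tc.get_covariance()  # the restored model can be used as before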
Comments
  • Divide by zero error when running pytorch_tcorex

    Happens on lines 832, 839, 846.

    The code on each line is: w = 1.0 / np.power(self.gamma, np.abs(i - t)).

    When using the default value for gamma (the int 2) with a large exponent, the power overflows. A quick fix would be to force gamma to be a float. (A sketch of this overflow appears after this list.)

    opened by turambar 4
  • diffs returns two differences: differences between consecutive time periods, and differences with the first time period (t=0)

    Added diff with matrix at t=0

    In addition to showing the differences between matrices at adjacent time periods, the function also returns the difference between the matrix at time t and the matrix at time t=0. This is useful for interpreting how the matrix changes over time w.r.t. a baseline at t=0.

    opened by venkatasg 0
  • basic example - finance data

    Hi @hrayrhar !

    Amazing algorithm, I am trying to use it on a basic two-dimensional dataset. Please see my attempt below -

    from __future__ import print_function
    from __future__ import absolute_import
    
    from tcorex.experiments.data import load_modular_sudden_change
    from tcorex.experiments import baselines
    from tcorex import base
    from tcorex import TCorex
    from tcorex import covariance as cov_utils
    
    import numpy as np
    import matplotlib
    matplotlib.use('agg')
    from matplotlib import pyplot as plt
    
    import yfinance as yf
    data = yf.download("SPY GOOGL", start="2014-01-01", end="2019-04-30")
    data
    return_target=data['Close'].pct_change().dropna()
    
    nv = 2        # number of observed variables
    m = 1           # number of hidden variables
    nt = 10         # number of time periods
    train_cnt = 16  # number of training samples for each time period
    val_cnt = 4     # number of validation samples for each time period
    
    # Generate some data with a sudden change in the middle.
    #data, ground_truth_sigma = load_modular_sudden_change(nv=nv, m=m, nt=nt, ns=(train_cnt + val_cnt))
    
    data =return_target.values
    
    # Split it into train and validation.
    #train_data = [X[:train_cnt] for X in data]
    
    train_data=data
    #val_data = [X[train_cnt:] for X in data]
    
    # NOTE: the load_modular_sudden_change function above creates data where the time axis
    # is already divided into time periods. If your data is not divided into time periods
    # you can use the following procedure to do that:
    # bucketed_data, index_to_bucket = make_buckets(data, window=train_cnt + val_cnt, stride='full')
    # where the make_buckets function can be found at tcorex.experiments.data
    
    # The core method we have is the tcorex.TCorex class.
    tc = TCorex(nt=nt,
             nv=nv,
             n_hidden=m,
             max_iter=500,
             device='cpu',  # for GPU set 'cuda',
             l1=0.3,        # coefficient of temporal regularization term
             gamma=0.3,     # parameter that controls sample weights
             verbose=1,     # 0, 1, 2
             )
    
    # # Fit the parameters of T-CorEx.
    tc.fit(train_data)
    
    
    
    opened by andrewczgithub 11
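The overflow from the first comment above is easy to reproduce. A hedged sketch (NumPy int64 arithmetic wraps around, so the power reaches 0 and the division emits the reported warning):

import numpy as np

gamma = 2          # the old integer default
exponent = 64

print(np.power(gamma, exponent))        # int64 wraps around to 0
# w = 1.0 / np.power(gamma, exponent)   # -> RuntimeWarning: divide by zero

gamma = 2.0        # forcing a float avoids the wrap
print(1.0 / np.power(gamma, exponent))  # ~5.42e-20, as intended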
Releases
  • 0.1 (Dec 6, 2018)

    Things to do for the next release:

    • Refactor the code
    • Make it a pip package
    • Write better tests
    • Write a good README
    • Structure the code
    • Remove unused code