Modular Probabilistic Programming on MXNet

Overview

MXFusion

Tutorials | Documentation | Contribution Guide

MXFusion is a modular deep probabilistic programming library.

With MXFusion Modules you can use state-of-the-art inference techniques for specialized probabilistic models without needing to implement those techniques yourself. MXFusion helps you rapidly build and test new methods at scale, by focusing on the modularity of probabilistic models and their integration with modern deep learning techniques.
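As a quick taste, here is a minimal sketch of defining and fitting a simple model, loosely following the getting-started tutorial (the import paths and argument names below are assumptions and may differ between versions):

import mxnet as mx
from mxfusion import Model, Variable
from mxfusion.components.variables import PositiveTransformation
from mxfusion.components.distributions import Normal
from mxfusion.inference import GradBasedInference, MAP

# NOTE: import paths follow the 0.3.x tutorials and may differ in other versions.

# A toy model: 100 observations from a Normal with unknown mean and (positive) variance.
m = Model()
m.mu = Variable()
m.s = Variable(transformation=PositiveTransformation())
m.Y = Normal.define_variable(mean=m.mu, variance=m.s, shape=(100,))

# Fit the parameters by gradient-based MAP estimation on synthetic data.
data = mx.nd.random.normal(loc=3.0, scale=1.0, shape=(100,))
infr = GradBasedInference(inference_algorithm=MAP(model=m, observed=[m.Y]))
infr.run(Y=data, max_iter=1000)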

MXFusion uses MXNet as its computational platform to bring the power of distributed, heterogeneous computation to probabilistic modeling.

Installation

Dependencies / Prerequisites

MXFusion's primary dependencies are MXNet >= 1.3 and NetworkX >= 2.1. See requirements.

Supported Architectures / Versions

MXFusion is tested on Python 3.4+ on macOS and Linux.

Installation of MXNet

There are multiple PyPI packages of MXNet. A straightforward CPU-only installation can be done with:

pip install mxnet

For an installation with GPU or MKL support, detailed instructions can be found on the MXNet site.
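For example, the GPU and MKL builds are published as separate packages (the exact names depend on your CUDA version, so confirm the right variant on the MXNet site before installing):

pip install mxnet-cu92   # GPU build for CUDA 9.2 (example; pick the package matching your CUDA version)
pip install mxnet-mkl    # CPU build with MKL-DNN acceleration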

pip

If you just want to use MXFusion and not modify the source, you can install through pip:

pip install mxfusion

From source

To install MXFusion from source, after cloning the repository run the following from the top-level directory:

pip install .
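
If you plan to modify the source, an editable install keeps the installed package pointing at your working copy:

pip install -e .   # editable/development install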

Where to go from here?

Tutorials

Documentation

Contributions

Community

We welcome your contributions and questions and are working to build a responsive community around MXFusion. Feel free to file a GitHub issue if you find a bug or want to request a new feature.

Contributing

Have a look at our contributing guide. Thanks for your interest!

Points of contact for MXFusion are:

  • Eric Meissner (@meissnereric)
  • Zhenwen Dai (@zhenwendai)

License

MXFusion is licensed under Apache 2.0. See LICENSE.

Comments
  • LBFGS optimizer

    Issue #, if available: #75

    Description of changes: This is a draft pull request to add an LBFGS optimizer for the optimization of deterministic loss functions. I found this works much better than the current default optimizer for vanilla GPs.

    If you don't object to the approach I use here, I will take the time to add tests and tidy the code up.

    By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

    opened by marpulli 5
  • Copy over module algorithms correctly

    The module algorithms dictionary is expected to be a list containing tuples, but after cloning a module this is not the case. This PR fixes that.

    By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

    opened by marpulli 5
  • Changed transpose property to function.

    This stops the debugger from accidentally creating new variables on access.

    Description of changes: When using an interactive debugger (e.g. PyCharm), inspecting any Variable object invokes the property T which has the side effect of creating new variables that are not part of the graph. By changing this to a function .transpose() the functionality is maintained (at the slight expense of ease of use) whilst preventing this from happening.
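
    As a simplified illustration of the difference (generic Python, not MXFusion's actual Variable class), a property is evaluated whenever something lists the object's attributes, e.g. an interactive debugger, while a method only runs when explicitly called:

    # Hypothetical, simplified stand-in for a graph variable.
    class DemoVariable:
        @property
        def T(self):
            # A debugger inspecting this object evaluates the property,
            # silently creating a new object as a side effect.
            return DemoVariable()

        def transpose(self):
            # Runs only when called explicitly, so inspection has no side effects.
            return DemoVariable()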

    By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

    opened by tdiethe 5
  • GP replicate_self fixes

    This fixes #183 (but maybe you want to copy kernels in a different way, as discussed yesterday). It also fixes a few other issues with cloning this specific GP module and adds tests.

    By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

    opened by marpulli 4
  • DGP implementation

    This PR adds an implementation of DGPs.

    I have added a marginalise_conditional_distribution method to the conditional GP class; it is used by both the deep GP and the SVI GP.

    I chose not to explicitly represent the conditional p(f|u), as I couldn't work out how to represent it while keeping the marginalization as efficient.

    By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

    opened by marpulli 4
  • Move constants in InferenceParameters into ParameterDict

    Description of changes: Move MXNet-based constants out of the InferenceParameters._constants dictionary (which still exists and keeps scalar/native constants) and into the ParameterDict where the other Parameters are kept.

    This also removes the need for a separate MXNet constants file at serialization time!

    (Feel free to ignore the documentation change here; it's a remnant of the larger docs change I made in another PR, and I'll try to make sure the other one is the one that gets kept.)

    By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

    enhancement 
    opened by meissnereric 4
  • Implement common MXNet operators (dot product, diag, etc.) in MXFusion

    These should be implemented as stateless functions.

    Create a class that takes in variables and returns a function evaluation (a rough sketch follows the list below).

    • Basic - add, subtract, multiply, divide variables
    • Elementwise - square, exponentiation, log
    • Aggregation - sum, mean, prod
    • Matrix ops - dot product, diag
    • Matrix manipulation - reshape, transpose
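
    A rough sketch of the stateless-operator idea, using plain MXNet NDArrays and hypothetical names (not MXFusion's actual operator API):

    import mxnet as mx

    # Hypothetical wrapper: holds no state, just evaluates a function on its inputs.
    class Operator:
        def __init__(self, func):
            self.func = func

        def __call__(self, *inputs):
            return self.func(*inputs)

    dot = Operator(mx.nd.dot)
    diag = Operator(mx.nd.diag)

    a = mx.nd.random.normal(shape=(3, 3))
    b = dot(a, a)    # matrix product
    d = diag(a)      # main diagonal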

    enhancement 
    opened by meissnereric 4
  • Folder structure changes and renaming

    It contains the following changes:

    • Folder structure changes and renaming according to our discussion.
    • Extend Mark's multivariate normal distribution implementation to allow general broadcasting.

    @marpulli I changed the behavior of your log_pdf and kl_divergence implementation to return an array of the size of mean without the last dimension. This is consistent with our other distributions.

    By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

    opened by zhenwendai 3
  • The monthly cron build of Travis-CI on master fails.

    The monthly cron build of Travis-CI on the master branch fails because it tries to deploy again. Here is the link to a build: https://travis-ci.org/amzn/MXFusion/jobs/453598240.

    bug 
    opened by zhenwendai 3
  • Uniform and Laplace distributions

    Issue #, if available:

    Description of changes:

    Uniform and Laplace distributions + tests

    By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

    enhancement 
    opened by tdiethe 3
  • Added Bernoulli distribution + tests

    Issue #, if available: #21

    Description of changes: Added Bernoulli distribution + tests

    By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

    enhancement 
    opened by tdiethe 3
  • Switch to deep numpy as backend.

    Hi,

    I'm a member of the Deep Numpy dev team (a new MXNet frontend API with a NumPy-like interface: https://numpy.mxnet.io/index.html), and I'm currently working on the random module for Deep Numpy, which, as its name suggests, behaves exactly like NumPy, including broadcastable parameters and output shapes. For example: https://github.com/apache/incubator-mxnet/pull/15858

    Also, the sampling backend for rejection sampling has been entirely rewritten to remove the while loop from the GPU kernel. The new version, according to my profiling, performs ten times faster than nd.random on GPU, at the cost of a tiny increase in memory usage (not merged yet): https://github.com/apache/incubator-mxnet/issues/15928

    We could have some further discussion if you are interested in switching to Deep Numpy's random sampling backend, or in migrating MXFusion to the Deep Numpy API :-)

    opened by xidulu 0
  • GP Likelihood classes

    Is your feature request related to a problem? Please describe.

    The SVGP and DGP modules should be able to work with alternative likelihoods. They are both currently implemented for Gaussian likelihoods only. The ability to switch out likelihoods would make things like #126 much simpler, with less code duplication.

    Describe the solution you'd like

    Currently:

    m = Model()
    m.X = Variable()
    m.noise_var = Variable(transformation=PositiveTransformation())
    m.Y = SVGP.define_variable(m.X, kern, m.noise_var...)
    

    I'd like something more like

    m = Model()
    m.X = Variable()
    m.noise_var = Variable(transformation=PositiveTransformation())
    m.likelihood = NormalGPLikelihood(variance)
    # This could also be: 
    # m.likelihood = BernoulliGPLikelihood()
    m.Y = SVGP.define_variable(m.X, kern, m.likelihood...)
    

    These likelihoods would need to be able to compute an expectation of the form E_{p(f)}[log p(y | f)], where p(f) is Gaussian.

    class BernoulliGPLikelihood(GPLikelihood):
        # This class would also contain the transformation of the
        # latent function to the interval [0, 1], I think
        def expectation(self, mean, variance, data):
            # This method would implement the quadrature rule to do this here
            pass

    class NormalGPLikelihood(GPLikelihood):
        def expectation(self, mean, variance, data):  # think of a better name!
            # This method could use the analytical equation
            pass
    

    Describe alternatives you've considered

    Creating a new class might not be necessary; we might be able to reuse the current distribution objects directly.

    I would be happy to implement this, if we come up with a design which people are happy with first.
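
    As a concrete illustration of the quadrature route for the Bernoulli case, the expectation over a Gaussian p(f) can be approximated with Gauss-Hermite quadrature. A rough NumPy sketch with a hypothetical helper (not MXFusion code):

    import numpy as np
    from scipy.special import expit  # logistic sigmoid

    # Hypothetical helper, not part of MXFusion.
    def bernoulli_expectation(mean, variance, y, degree=20):
        # Approximate E_{N(f | mean, variance)}[log p(y | f)] for a Bernoulli
        # likelihood with a logistic link, via Gauss-Hermite quadrature.
        x, w = np.polynomial.hermite.hermgauss(degree)
        f = mean + np.sqrt(2.0 * variance) * x   # quadrature nodes mapped to N(mean, variance)
        log_p = y * np.log(expit(f)) + (1 - y) * np.log1p(-expit(f))
        return np.sum(w * log_p) / np.sqrt(np.pi)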

    opened by marpulli 0
  • Inference.run throws when passing Variables of type PARAMETER as 'constants' argument

    Describe the bug

    When passing Variables of type PARAMETER as 'constants' argument to the constructor of Inference (or child classes), this line throws during the execution of Inference's run method since the Variables that were passed as 'constants' are in self._var_trans but not in the 'kw' input argument.

    Expected behavior

    Execution does not throw and Variables passed as 'constants' argument to the Inference constructor, even if they are of type PARAMETER, are treated as constants and not optimized in training.

    Desktop:

    • OS: Ubuntu 18.04.2
    • Python version: 3.6
    • MXNet version: 1.4.1
    • MXFusion version: 0.3.1
    • MXNet context: CPU
    bug 
    opened by pabfer 2
  • Execution throws if a Kernel Variable is set to CONSTANT

    Describe the bug

    If a kernel Variable is of CONSTANT type, fetch_parameters throws here since CONSTANT Variables do not make it to params input argument.

    Expected behavior

    Execution does not throw, and kernel Variables that are of type CONSTANT are kept constant and not optimized in training.

    Desktop:

    • OS: Ubuntu 18.04.2
    • Python version: 3.6
    • MXNet version: 1.4.1
    • MXFusion version: 0.3.1
    • MXNet context: CPU
    bug 
    opened by pabfer 1
  • Unexplicit throw in MinibatchInferenceLoop if batch_size is larger than dataset size

    Describe the bug

    If batch_size for MinibatchInferenceLoop is larger than the dataset size, this line throws due to division by zero.

    Expected behavior

    Possible mitigations are:

    • Option No. 1: Default batch size to min("requested batch size", "dataset size") and issue a warning if "requested batch size" > "dataset size" (a small sketch follows this list).
    • Option No. 2: Throw with a message stating why it's throwing and how to fix the issue.
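
    A minimal version of Option No. 1 (hypothetical variable names):

    import warnings

    if batch_size > dataset_size:
        warnings.warn("batch_size (%d) is larger than the dataset size (%d); "
                      "falling back to the dataset size." % (batch_size, dataset_size))
        batch_size = dataset_size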

    Desktop:

    • OS: Ubuntu 18.04.2
    • Python version: 3.6
    • MXNet version: 1.4.1
    • MXFusion version: 0.3.1
    • MXNet context: CPU
    bug Easy 
    opened by pabfer 1
Releases (v0.3.1)
  • v0.3.1(May 30, 2019)

    • Added SVGP regression notebook.
    • Updated the VAE and GP regression notebook.
    • Removed the dependency on scikit-learn.
    • Moved the parameters of Gluon block to be controlled by MXFusion.
    • Fixed a bug in mini-batch learning.
    • Extended the SVGPRegression module to take samples as input variables.
    • Documentation and stylistic edits.
    • Merged in the PILCO changes
  • v0.3.0(Feb 20, 2019)

    The bigger changes are:

    • #133 Changing variable broadcasting for factors
    • #153 Simplify serialization to use a simple zip file

    Other than that, it's mostly bug fixes and documentation changes.

    • #144 Add details to inference and serialization documentation
    • #148 Fix a bug for SVGP regression with minibatch training
    • #81 Reduce num_samples in uniform non-mock test to 1000
    • #137 Changed transpose property to function.
    • #135 Bug fix in function evaluation
  • v0.2.1(Nov 9, 2018)

    • Add the tutorial for Gaussian process regression.
    • Fix empty operator bug.
    • Fix bug to allow the same variable for multiple inputs to a factor.
    • Add module serialization.
    • Fix the bug that causes the failure of documentation compilation.
    • Fix the bug: the inference methods of GP modules do not handle samples.
    • Update issue templates.
    • Add license headers to all files.
    • Add the getting started notebook.
    • Remove the dependency on future.
    • Update the Inference documentation.
    • Implement Dirichlet distribution.
    • Add logistic variable transformation.
    • Implement Expectation inference.
    • Fix the bugs related to dtype.
    • Validate shape of array variables.
    • Fix divide by zero error if max_iter < n_prints.
    • Add multiply kernel.
  • v0.2.0(Oct 19, 2018)

    • Improve the printing messages of the gradient loop and fix a bug in GluonFunctionEvaluation
    • Add Gamma Distribution
    • GP Modules enhancement
    • Implement Module, GPRegression, SparseGPRegression and SVGPRegression…
    • Update the README
    • Add score function for variational inference
    • Refactor the inference interface for modules
    • Add score function inference
    • Implement common MXNet operators (dot product, diag, etc.) in MXFusion
    • Add support for MXNet operators.
    • Clean up kernel function and function wrappers for copying
    • Implement the base class MXFusionFunction
Owner
Amazon