A probabilistic programming language in TensorFlow. Deep generative models, variational inference.

Overview

edward

Edward is a Python library for probabilistic modeling, inference, and criticism. It is a testbed for fast experimentation and research with probabilistic models, ranging from classical hierarchical models on small data sets to complex deep probabilistic models on large data sets. Edward fuses three fields: Bayesian statistics and machine learning, deep learning, and probabilistic programming.

It supports modeling with

  • Directed graphical models
  • Neural networks (via libraries such as tf.layers and Keras)
  • Implicit generative models
  • Bayesian nonparametrics and probabilistic programs
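
For instance, here is a minimal sketch of Bayesian linear regression written in Edward's native language (assuming an Edward 1.3+ install with its English keyword arguments):

    import edward as ed
    import tensorflow as tf
    from edward.models import Normal

    N, D = 50, 5  # number of data points, number of features
    X = tf.placeholder(tf.float32, [N, D])
    w = Normal(loc=tf.zeros(D), scale=tf.ones(D))       # prior on weights
    b = Normal(loc=tf.zeros(1), scale=tf.ones(1))       # prior on intercept
    y = Normal(loc=ed.dot(X, w) + b, scale=tf.ones(N))  # likelihood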

It supports inference with

  • Variational inference
    • Black box variational inference
    • Stochastic variational inference
    • Generative adversarial networks
    • Maximum a posteriori estimation
  • Monte Carlo
    • Gibbs sampling
    • Hamiltonian Monte Carlo
    • Stochastic gradient Langevin dynamics
  • Compositions of inference
    • Expectation-Maximization
    • Pseudo-marginal and ABC methods
    • Message passing algorithms
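
As a sketch of how these are invoked, continuing the regression model above (ed.KLqp performs variational inference with the KL(q || p) objective; X_train and y_train are assumed NumPy arrays):

    # Mean-field Gaussian approximations for w and b.
    qw = Normal(loc=tf.Variable(tf.zeros(D)),
                scale=tf.nn.softplus(tf.Variable(tf.zeros(D))))
    qb = Normal(loc=tf.Variable(tf.zeros(1)),
                scale=tf.nn.softplus(tf.Variable(tf.zeros(1))))

    # Bind latent variables to approximations and observed variables to data.
    inference = ed.KLqp({w: qw, b: qb}, data={X: X_train, y: y_train})
    inference.run(n_iter=1000)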

It supports criticism of the model and inference with

  • Point-based evaluations
  • Posterior predictive checks
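
A hedged sketch of the criticism API on the same example (ed.copy rewires the graph so the fitted approximations stand in for the priors):

    # Posterior predictive distribution.
    y_post = ed.copy(y, {w: qw, b: qb})

    # Point-based evaluation.
    print(ed.evaluate('mean_squared_error', data={X: X_train, y_post: y_train}))

    # Posterior predictive check with a user-chosen test statistic.
    print(ed.ppc(lambda xs, zs: tf.reduce_mean(xs[y_post]),
                 data={X: X_train, y_post: y_train}))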

Edward is built on top of TensorFlow, which enables features such as computational graphs, distributed training, CPU/GPU integration, automatic differentiation, and visualization with TensorBoard.

Resources

See Getting Started for how to install Edward.

Comments
  • doc/website

    Summary:

    This PR does two things:

    1. it introduces a brand-new website to replace the current one (via a custom pandoc workflow).
    2. it introduces automatic API documentation (via sphinx).

    There are a few remaining tasks before we can merge this:

    • [x] figure out what goes into deploy.sh
    • [x] finalize getting started / delving in / tutorials
    • [x] update README.md to match website and updated logo

    How to Verify:

    Look at the website and check for issues/typos.

    Look at docstrings and how they render via sphinx.

    Side Effects:

    The old website is deleted.

    Documentation:

    There is a new README.md under /website that documents the scripts that build and deploy the website.

    opened by akucukelbir 27
  • think about what it means to "default" to reparameterization gradient

    We currently default to the reparameterization gradient if the Variational class implements reparam.

    However, if the Inference class does not support reparameterization gradients (e.g., KLpq), then it doesn't matter whether the Variational class implements it or not.
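
    A minimal sketch of the guard being discussed, using hypothetical attribute and method names rather than Edward's actual internals:

      # Hypothetical dispatch: fall back to the score-function estimator unless
      # BOTH the inference algorithm and the variational family support
      # reparameterization. Neither attribute name below is Edward's real API.
      def build_loss(self):
          if self.supports_reparam and self.variational.is_reparameterized:
              return self.build_reparam_loss()  # lower-variance estimator
          return self.build_score_loss()        # general-purpose fallback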

    Code cleanup 
    opened by akucukelbir 21
  • Issues running examples

    Hi, I've had a handful of issues trying to run the mixture of Gaussians and the Bayesian linear regression examples. For reference, I installed TensorFlow through Anaconda using a separate environment and then installed Edward in that environment. I can run several of the examples, but it appears that something in the normal models is causing errors.

    For instance, when running mixture_gaussians.py, I end up with:

    (tensorflow)Orbis:Edward joppenheim$ python mixture_gaussian.py
    Traceback (most recent call last):
      File "mixture_gaussian.py", line 144, in <module>
        inference.run(n_iter=2500, n_samples=10, n_minibatch=20)
      File "/Users/joppenheim/anaconda/envs/tensorflow/lib/python2.7/site-packages/edward/inferences.py", line 184, in run
        self.initialize(*args, **kwargs)
      File "/Users/joppenheim/anaconda/envs/tensorflow/lib/python2.7/site-packages/edward/inferences.py", line 367, in initialize
        return super(MFVI, self).initialize(*args, **kwargs)
      File "/Users/joppenheim/anaconda/envs/tensorflow/lib/python2.7/site-packages/edward/inferences.py", line 243, in initialize
        loss = self.build_loss()
      File "/Users/joppenheim/anaconda/envs/tensorflow/lib/python2.7/site-packages/edward/inferences.py", line 411, in build_loss
        return self.build_score_loss()
      File "/Users/joppenheim/anaconda/envs/tensorflow/lib/python2.7/site-packages/edward/inferences.py", line 437, in build_score_loss
        p_log_prob = self.model_wrapper.log_prob(x, z)
      File "mixture_gaussian.py", line 80, in log_prob
        matrix += [tf.ones(N) * tf.log(pi[k]) +
      File "/Users/joppenheim/anaconda/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/math_ops.py", line 760, in binary_op_wrapper
        return func(x, y, name=name)
      File "/Users/joppenheim/anaconda/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/math_ops.py", line 909, in _mul_dispatch
        return gen_math_ops.mul(x, y, name=name)
      File "/Users/joppenheim/anaconda/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/gen_math_ops.py", line 1464, in mul
        result = _op_def_lib.apply_op("Mul", x=x, y=y, name=name)
      File "/Users/joppenheim/anaconda/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 703, in apply_op
        op_def=op_def)
      File "/Users/joppenheim/anaconda/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2319, in create_op
        set_shapes_for_outputs(ret)
      File "/Users/joppenheim/anaconda/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1711, in set_shapes_for_outputs
        shapes = shape_func(op)
      File "/Users/joppenheim/anaconda/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/math_ops.py", line 1809, in _BroadcastShape
        % (shape_x, shape_y))
    ValueError: Incompatible shapes for broadcasting: (20,) and (2,)

    That 20 comes from the n_minibatch parameter.

    A similar error occurred with Bayesian Linear Regression.

    Any thoughts?
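
    A common repair for this class of error is to size per-data-point tensors by the minibatch that log_prob actually receives, rather than by a dataset size fixed at graph construction; a hedged, hypothetical sketch (the example's real code differs):

      import tensorflow as tf

      def log_prob_term(x_batch, log_pi_k):
          # Size the per-point term by the batch actually received
          # (n_minibatch = 20 here), not by the full data set size N.
          M = tf.shape(x_batch)[0]
          return tf.ones([M]) * log_pi_k  # shapes now broadcast cleanly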

    opened by jnoppenheim 20
  • support for pymc3 as a modeling language?

    We want to support modeling languages which are either popular or are useful for certain tasks over the alternatives we support. With that in mind, pymc3 seems appealing for specifying large discrete latent variable models. You can't write them as a Stan program, and it could be rather annoying to code them up in raw TensorFlow or NumPy/SciPy.

    On the other hand, it's one more modeling language to maintain; pymc3 actually uses Theano as a backend, which may lead to some bugs(?); and I don't know how popular pymc3 would be as a use case over the other supported languages.

    interface language 
    opened by dustinvtran 17
  • NormalWishart-Normal model

    Hi there!

    I'm trying to implement a NormalWishart-Normal model with Edward. I think the model representation is OK, but what do you think? Here is the code:

    # -*- coding: UTF-8 -*-
    
    """
    NormalWishart-Normal Model
    Posterior inference with Edward BBVI
    [DOING]
    """
    
    import edward as ed
    import matplotlib.cm as cm
    import matplotlib.pyplot as plt
    import numpy as np
    import tensorflow as tf
    from edward.models import MultivariateNormalFull, WishartCholesky
    from scipy.stats import invwishart
    
    N = 1000
    D = 2
    
    # Data generation
    # NIW Inverse Wishart hyperparameters
    v = 3.
    W = np.array([[20., 30.], [25., 40.]])
    sigma = invwishart.rvs(v, W)
    # NIW Normal hyperparameters
    m = np.array([1., 1.])
    k = 0.8
    mu = np.random.multivariate_normal(m, sigma / k)
    xn_data = np.random.multivariate_normal(mu, sigma, N)
    plt.scatter(xn_data[:, 0], xn_data[:, 1], cmap=cm.gist_rainbow, s=5)
    plt.show()
    print('mu={}'.format(mu))
    print('sigma={}'.format(sigma))
    
    # Prior definition
    v_prior = tf.Variable(3., dtype=tf.float64, trainable=False)
    W_prior = tf.Variable(np.array([[1., 0.], [0., 1.]]),
                          dtype=tf.float64, trainable=False)
    m_prior = tf.Variable(np.array([0.5, 0.5]), dtype=tf.float64, trainable=False)
    k_prior = tf.Variable(0.6, dtype=tf.float64, trainable=False)
    
    print('***** PRIORS *****')
    print('v_prior: {}'.format(v_prior))
    print('W_prior: {}'.format(W_prior))
    print('m_prior: {}'.format(m_prior))
    print('k_prior: {}'.format(k_prior))
    
    # Posterior inference
    # Probabilistic model
    sigma = WishartCholesky(df=v_prior, scale=W_prior)
    mu = MultivariateNormalFull(m_prior, k_prior * sigma)
    xn = MultivariateNormalFull(tf.reshape(tf.tile(mu, [N]), [N, D]),
                                tf.reshape(tf.tile(sigma, [N, 1]), [N, 2, 2]))
    
    print('***** PROBABILISTIC MODEL *****')
    print('mu: {}'.format(mu))
    print('sigma: {}'.format(sigma))
    print('xn: {}'.format(xn))
    
    # Variational model
    qmu = MultivariateNormalFull(
        tf.Variable(tf.random_normal([D], dtype=tf.float64), name='v1'),
        tf.nn.softplus(
            tf.Variable(tf.random_normal([D, D], dtype=tf.float64), name='v2')))
    qsigma = WishartCholesky(
        df=tf.nn.softplus(
            tf.Variable(tf.random_normal([], dtype=tf.float64), name='v3')),
        scale=tf.nn.softplus(
            tf.Variable(tf.random_normal([D, D], dtype=tf.float64), name='v4')))
    
    print('***** VARIATIONAL MODEL *****')
    print('qmu: {}'.format(qmu))
    print('qsigma: {}'.format(qsigma))
    
    # Inference
    print('xn_data: {}'.format(xn_data.dtype))
    inference = ed.KLqp({mu: qmu, sigma: qsigma}, data={xn: xn_data})
    inference.run(n_iter=2000, n_samples=20)
    

    But it seems there is a type error:

    File "NW_normal_edward.py", line 78, in <module>
        inference.run(n_iter=2000, n_samples=20)
      File "/home/alberto/.virtualenvs/GMM/local/lib/python2.7/site-packages/edward/inferences/inference.py", line 218, in run
        self.initialize(*args, **kwargs)
      File "/home/alberto/.virtualenvs/GMM/local/lib/python2.7/site-packages/edward/inferences/klqp.py", line 66, in initialize
        return super(KLqp, self).initialize(*args, **kwargs)
      File "/home/alberto/.virtualenvs/GMM/local/lib/python2.7/site-packages/edward/inferences/variational_inference.py", line 70, in initialize
        self.loss, grads_and_vars = self.build_loss_and_gradients(var_list)
      File "/home/alberto/.virtualenvs/GMM/local/lib/python2.7/site-packages/edward/inferences/klqp.py", line 108, in build_loss_and_gradients
        return build_reparam_loss_and_gradients(self, var_list)
      File "/home/alberto/.virtualenvs/GMM/local/lib/python2.7/site-packages/edward/inferences/klqp.py", line 343, in build_reparam_loss_and_gradients
        z_copy = copy(z, dict_swap, scope=scope)
      File "/home/alberto/.virtualenvs/GMM/local/lib/python2.7/site-packages/edward/util/random_variables.py", line 176, in copy
        new_rv = rv.__class__(*args, **kwargs)
      File "/home/alberto/.virtualenvs/GMM/local/lib/python2.7/site-packages/edward/models/random_variable.py", line 62, in __init__
        super(RandomVariable, self).__init__(*args, **kwargs)
      File "/home/alberto/.virtualenvs/GMM/local/lib/python2.7/site-packages/tensorflow/contrib/distributions/python/ops/wishart.py", line 521, in __init__
        name=ns)
      File "/home/alberto/.virtualenvs/GMM/local/lib/python2.7/site-packages/tensorflow/contrib/distributions/python/ops/wishart.py", line 125, in __init__
        dtype=self._scale_operator_pd.dtype, name="dimension")
      File "/home/alberto/.virtualenvs/GMM/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 651, in convert_to_tensor
        as_ref=False)
      File "/home/alberto/.virtualenvs/GMM/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 730, in internal_convert_to_tensor
        dtype.name, ret.dtype.name))
    RuntimeError: dimension: Conversion function <function _constant_tensor_conversion_function at 0x7f35210922a8> for type <type 'object'> returned incompatible dtype: requested = float64_ref, actual = float64
    

    Do you think it is a problem with TensorFlow's WishartCholesky? Do you have an example of a model that uses a Wishart distribution in Edward?

    opened by bertini36 16
  • TypeError: First parent class declared for Empirical must be Distribution, but saw 'RandomVariable'

    Hi,

    I just installed Edward from source using the command python setup.py install. When I open Python's interactive shell and enter import edward, I get the following error:

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "build/bdist.linux-x86_64/egg/edward/__init__.py", line 5, in <module>
      File "build/bdist.linux-x86_64/egg/edward/criticisms/__init__.py", line 5, in <module>
      File "build/bdist.linux-x86_64/egg/edward/criticisms/evaluate.py", line 9, in <module>
      File "build/bdist.linux-x86_64/egg/edward/models/__init__.py", line 7, in <module>
      File "build/bdist.linux-x86_64/egg/edward/models/random_variables.py", line 14, in <module>
      File "path_to_tensorflow/tensorflow/contrib/distributions/python/ops/distribution.py", line 138, in __new__
        "Distribution, but saw '%s'" % (classname, base.__name__))
    TypeError: First parent class declared for Empirical must be Distribution, but saw 'RandomVariable'
    

    (I have installed TensorFlow 0.11rc1 from source.)

    Can you please help me solve the issue?

    opened by MahdiNazemi 16
  • Is it possible to vectorize samples in Monte Carlo estimation of ELBO gradients?

    In Monte Carlo estimation of ELBO gradients, samples are iterated in a for-loop. Is it possible to vectorize samples in the calculation?

    Here is my thought, based on my very basic understanding of Edward. Suppose a single sample is a tensor s, and an array of samples is a tensor S, which is one rank higher than s and whose zeroth dimension indexes the samples. Then

    1. many operations on a single sample s, such as linear maps and element-wise functions, generalize directly to S;
    2. slicing of s (for Rao-Blackwellization) corresponds to slicing S.

    If some operations on s cannot be generalized to S, can we postpone the "for" loop until just before such operations? That is, we sample S and try to compute with it first; only where that fails do we iterate over its elements.
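
    A hedged illustration of the idea in plain TensorFlow, nothing Edward-specific (q stands in for any variational distribution):

      import tensorflow as tf

      ds = tf.contrib.distributions
      q = ds.Normal(loc=0.0, scale=1.0)

      S = q.sample(50)                  # one tensor holding all 50 samples
      log_q = q.log_prob(S)             # log-density of every sample at once
      estimate = tf.reduce_mean(log_q)  # Monte Carlo average over dimension 0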

    I am not considering the problem that TensorFlow does not support gradients for some operators, such as gather_n and scatter_update. I also don't know how much speed-up we could get from the vectorization.

    opened by lipingliulp 15
  • [WIP] Fix for #74

    Otherwise this happens :) :

    import edward
    from edward.models import Variational
    from edward.models import Normal
    
    print Variational().layers
    Variational().add(Normal(2))
    print Variational().layers
    

    prints

    []
    [<edward.models.variationals.Normal instance at 0x117cc0680>]
    
    opened by kudkudak 15
  • AttributeError: module 'keras.backend' has no attribute 'set_session'

    Hello, I am trying to execute your example, namely this line of code:

    inference = ed.KLqp({W_0: qW_0, b_0: qb_0,
                         W_1: qW_1, b_1: qb_1}, data={y: y_train})
    

    But unfortunately I see this error:

     File "<ipython-input-4-3c0965890976>", line 2, in <module>
        W_1: qW_1, b_1: qb_1}, data={y: y_train})
    
      File "/Users/a/anaconda/lib/python3.6/site-packages/edward/inferences/klqp.py", line 59, in __init__
        super(KLqp, self).__init__(*args, **kwargs)
    
      File "/Users/a/anaconda/lib/python3.6/site-packages/edward/inferences/variational_inference.py", line 32, in __init__
        super(VariationalInference, self).__init__(*args, **kwargs)
    
      File "/Users/a/anaconda/lib/python3.6/site-packages/edward/inferences/inference.py", line 66, in __init__
        sess = get_session()
    
      File "/Users/a/anaconda/lib/python3.6/site-packages/edward/util/graphs.py", line 42, in get_session
        K.set_session(_ED_SESSION)
    
    AttributeError: module 'keras.backend' has no attribute 'set_session'
    

    Python 3.6 on Anaconda; latest Edward + Keras + TensorFlow installed through pip; macOS 10.12.6.

    Please advise, thanks.

    opened by CoderCoderCoder 14
  • Implementing LDA with Edward

    Is it possible to implement LDA with Edward? We have a large dataset and want to implement LDA in TensorFlow, and I think Edward should help. I didn't find any implementation of LDA in TensorFlow; if there are any samples, they would help. (We need to do topic modeling.)
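
    For reference, a hedged and untested sketch of the LDA generative process in Edward's native language (Edward 1.3-era API assumed; the discrete topic assignments z are what make inference hard here):

      import tensorflow as tf
      from edward.models import Categorical, Dirichlet

      D, K, V, N = 100, 10, 1000, 50      # docs, topics, vocab, words per doc
      theta = Dirichlet(tf.ones([D, K]))  # per-document topic proportions
      beta = Dirichlet(tf.ones([K, V]))   # per-topic word distributions
      # Topic assignment for each word slot in each document.
      z = Categorical(probs=tf.reshape(tf.tile(theta, [1, N]), [D, N, K]))
      # Observed words, drawn from the assigned topic's word distribution.
      w = Categorical(probs=tf.gather(beta, z))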

    opened by amit2103 14
  • Edward Restructuring

    Hello, when I try to redo the getting-started tutorial using the latest GitHub version of the code, it seems as if ed.MFVI behaves very differently. Are the current PyPI release of Edward and the bleeding-edge version different here?

    The model now doesn't learn the relationship. I'm sorry if I missed something - changes happen very fast on Edward!

    Edit: Noticed that MFVI doesn't exist in the inference.py file. Where is it right now? And is it different in its implementation from the older code?

    Second edit: Noticed the massive restructuring - but still don't know where MFVI is.

    opened by bhargavvader 14
  • Cannot import Edward model in GoogleColab

    opened by WasinV 2
  • AttributeError for TensorFlow when running the PPCA example

    When trying to run the PPCA example given, I get errors from TensorFlow.

    Running TensorFlow 1.14, I get: "AttributeError: module 'tensorflow.python.framework.op_def_registry' has no attribute 'register_op_list'"

    If I go to newer versions, it throws: "No module named 'tensorflow.contrib'"

    Ideas?

    opened by PhatBoy44 1
  • Compositions of inference

    Could someone tell me where the code and documentation for compositions of inference are? The home page of Edward says it supports message passing algorithms, which I would like to use for inference in a Bayesian network, but I could not find it in the source code or documentation.

    opened by wuziniu 0
  • flag error

    I got this error:

      TypeError                                 Traceback (most recent call last)
      <ipython-input> in <module>()
      ----> 1 tf.flags.DEFINE_string("data_dir", default="/tmp/data", help="")
            2 tf.flags.DEFINE_string("out_dir", default="/tmp/out", help="")
            3 tf.flags.DEFINE_integer("M", default=128, help="Batch size during training.")
            4 tf.flags.DEFINE_integer("d", default=10, help="Latent dimension.")
            5 tf.flags.DEFINE_integer("n_epoch", default=100, help="")

      TypeError: DEFINE_string() got an unexpected keyword argument 'default'

    opened by cymqqqq 1
Releases (1.3.5)
  • 1.3.5(Jan 22, 2018)

    • Added automatic posterior approximations in variational inference (#775).
    • Added use of tf.GraphKeys.REGULARIZATION_LOSSES to variational inference (#813).
    • Added multinomial classification metrics (#743).
    • Added utility function to assess conditional independence (#791).
    • Added custom metrics in evaluate.py (#809).
    • Minor bug fixes, including to automatic transformations (#808) and to the ratio inside ed.MetropolisHastings (#806).

    Acknowledgements

    • Thanks go to Baris Kayalibay (@bkayalibay), Christopher Lovell (@christopherlovell), David Moore (@davmre), Kris Sankaran (@krisrs1128), Manuel Haussmann (@manuelhaussmann), Matt Hoffman (@matthewdhoffman), Siddharth Agrawal (@siddharth-agrawal), William Wolf (@cavaunpeu), @gfeldman.

    We are also grateful to all who filed issues or helped resolve them, asked and answered questions, and were part of inspiring discussions.

  • 1.3.4(Sep 28, 2017)

    This release comes with several new features, alongside a significant push for better documentation, examples, and unit testing.

    • ed.KLqp's score function gradient now does more intelligent (automatic) Rao-Blackwellization for variance reduction.
    • Automated transformations are enabled for all inference algorithms that benefit from them [tutorial].
    • Added Wake-Sleep algorithm (ed.WakeSleep).
    • Many minor bug fixes.

    Examples

    Documentation & Testing

    • Sealed all undocumented functions and modules in Edward.
    • Parser and BibTeX to auto-generate API docs.
    • Added unit testing to (nearly) all Jupyter notebooks.

    Acknowledgements

    • Thanks go to Matthew Feickert (@matthewfeickert), Alp Kucukelbir (@akucukelbir), Romain Lopez (@romain-lopez), Emile Mathieu (@emilemathieu), Stephen Ra (@stephenra), Kashif Rasul (@kashif), Philippe Rémy (@philipperemy), Charles Shenton (@cshenton), Yuto Yamaguchi (@yamaguchiyuto), @evahlis, @samnolen, @seiyab.

    We are also grateful to all who filed issues or helped resolve them, asked and answered questions, and were part of inspiring discussions.

  • 1.3.3(Jun 16, 2017)

    • Edward is updated to require a TensorFlow version of at least 1.2.0rc0.
    • Miscellaneous bug fixes and revisions.

    Acknowledgements

    • Thanks go to Joshua Engelman (@jengelman), Matt Hoffman (@matthewdhoffman), Kashif Rasul (@kashif).

    We are also grateful to all who filed issues or helped resolve them, asked and answered questions, and were part of inspiring discussions.

  • 1.3.2(May 30, 2017)

    • More TensorBoard support, including default summaries. See the tutorial (#598, #654, #653).
    • A batch training tutorial is added.
    • Improved training of Wasserstein GANs via penalty (#626).
    • Fixed error in sampling for DirichletProcess (#652).
    • Miscellaneous bug fixes, documentation, and speed ups.

    Acknowledgements

    • Thanks go to Janek Berger (@janekberger), Ian Dewancker (@iandewancker), Patrick Foley (@patrickeganfoley), Nitish Joshi (@nitishjoshi25), Akshay Khatri (@akshaykhatri639), Sean Kruzel (@closedLoop), Fritz Obermeyer (@fritzo), Lyndon Ollar (@lbollar), Olivier Verdier (@olivierverdier), @KonstantinLukaschenko, @meta-inf.

    We are also grateful to all who filed issues or helped resolve them, asked and answered questions, and were part of inspiring discussions.

  • 1.3.1(Apr 30, 2017)

  • 1.3.0(Apr 27, 2017)

    Edward requires a TensorFlow version of at least 1.1.0rc0. This includes several breaking API changes:

    • All Edward random variables use English keyword arguments instead of Greek. For example, Normal(loc=0.0, scale=1.0) replaces the older syntax of Normal(mu=0.0, sigma=1.0).
    • MultivariateNormalCholesky is renamed to MultivariateNormalTriL.
    • MultivariateNormalFull is removed.
    • rv.get_batch_shape() is renamed to rv.batch_shape.
    • rv.get_event_shape() is renamed to rv.event_shape.
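
    Concretely, migrating across this release looks like:

      from edward.models import Normal

      x = Normal(mu=0.0, sigma=1.0)   # Edward < 1.3.0 (Greek keywords)
      x = Normal(loc=0.0, scale=1.0)  # Edward >= 1.3.0 (English keywords)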

    Model

    • Random variables accept an optional sample_shape argument. This lets the associated tensor represent more than a single sample (#591).
    • Added a ParamMixture random variable. It is a mixture of random variables where each component comes from the same distribution family (#592).
    • DirichletProcess has persistent states across calls to sample() (#565, #575, #583).

    Inference

    • Added conjugacy & symbolic algebra. This includes an ed.complete_conditional function (#588, #605, #613). See a Beta-Bernoulli example (also sketched after this list).
    • Added Gibbs sampling (#607). See the unsupervised learning tutorial for a demo.
    • Added BiGANInference for adversarial feature learning (#597).
    • Inference, MonteCarlo, VariationalInference are abstract classes, preventing instantiation (#582).
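
    A hedged sketch of that Beta-Bernoulli conditional (exact argument names are an assumption, not checked against this release):

      import edward as ed
      from edward.models import Bernoulli, Beta

      pi = Beta(1.0, 1.0)
      x = Bernoulli(probs=pi, sample_shape=10)

      pi_cond = ed.complete_conditional(pi)  # a Beta, derived symbolically
      sess = ed.get_session()
      print(sess.run(pi_cond, {x: [1, 0, 1, 1, 0, 1, 1, 1, 0, 1]}))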

    Miscellaneous

    • A more informative message appears if the TensorFlow version is not supported (#572).
    • Added a shape property to random variables. It is the same as get_shape().
    • Added collections argument to random variables (#609).
    • Added ed.get_blanket to get Markov blanket of a random variable (#590).
    • ed.get_dims and ed.multivariate_rbf utility functions are removed.
    • Miscellaneous bug fixes and speed ups (e.g., #567, #596, #616).

    Acknowledgements

    • Thanks go to Robert DiPietro (@rdipietro), Alex Lewandowski (@AlexLewandowski), Konstantin Lukaschenko (@KonstantinLukaschenko), Matt Hoffman (@matthewdhoffman), Jan-Matthis Lückmann (@jan-matthis), Shubhanshu Mishra (@napsternxg), Lyndon Ollar (@lbollar), John Reid (@johnreid), @Phdntom.

    We are also grateful to all who filed issues or helped resolve them, asked and answered questions, and were part of inspiring discussions.

  • 1.2.4(Mar 20, 2017)

    • Added DirichletProcess random variable (#555).
    • Added progress bar for inference (#546).
    • Improved type support and error messages (#561, #563).
    • Miscellaneous bug fixes.

    Documentation

    • Added Edward Forum (https://discourse.edwardlib.org).
    • Added Jupyter notebook for all tutorials (#520).
    • Added tutorial on linear mixed effects models (#539).
    • Added example of probabilistic matrix factorization (#557).
    • Improved API styling and reference page (#536, #548, #549).
    • Updated website sidebar, including a community page (#533, #551).

    Acknowledgements

    • Thanks go to Mayank Agrawal (@timshell), Siddharth Agrawal (@siddharth-agrawal), Lyndon Ollar (@lbollar), Christopher Prohm (@chmp), Maja Rudolph (@mariru).

    We are also grateful to all who filed issues or helped resolve them, asked and answered questions, and were part of inspiring discussions.

  • 1.2.3(Mar 8, 2017)

    Models

    • All support is removed for model wrappers (#514, #517).
    • Direct fetching (sess.run() and eval()) is enabled for RandomVariable (#503).
    • Index, iterator, and boolean operators are overloaded for RandomVariable (#515).
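
    A quick sketch of the direct fetching and overloaded indexing described above:

      import edward as ed
      import tensorflow as tf
      from edward.models import Normal

      x = Normal(loc=tf.zeros(2), scale=tf.ones(2))
      sess = ed.get_session()
      print(sess.run(x))     # fetch a sample directly; x.eval() also works now
      print(sess.run(x[0]))  # indexing is overloaded as well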

    Inference

    • Variational inference is added for implicit probabilistic models (#491).
    • Laplace approximation uses multivariate normal approximating families (#506).
    • Removed need for manually specifying Keras session during inference (#490).
    • Recursive graphs are properly handled during inference (#500).

    Documentation & Examples

    • Probabilistic PCA tutorial is added (#499).
    • Dirichlet process with base distribution example is added (#508).
    • Bayesian logistic regression example is added (#509).

    Miscellanea

    • Dockerfile is added (#494).
    • Replace some utility functions with TensorFlow's (#504, #507).
    • A number of miscellaneous revisions and improvements (e.g., #422, #493, #495).

    Acknowledgements

    • Thanks go to Mayank Agrawal (@timshell), Paweł Biernat (@pwl), Tom Diethe (@tdiethe), Christopher Prohm (@chmp), Maja Rudolph (@mariru), @SnowMasaya.

    We are also grateful to all who filed issues or helped resolve them, asked and answered questions, and were part of inspiring discussions.

  • 1.2.2(Feb 28, 2017)

    Models

    • Operators are overloaded for RandomVariable. For example, this enables x + y (#445).
    • Keras' neural net layers can now be applied directly to RandomVariable (#483).
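
    A short sketch of both features (the Keras call assumes the standalone keras package on the TensorFlow backend):

      import tensorflow as tf
      from edward.models import Normal
      from keras.layers import Dense

      x = Normal(loc=tf.zeros([3, 2]), scale=tf.ones([3, 2]))
      y = Normal(loc=tf.zeros([3, 2]), scale=tf.ones([3, 2]))
      z = x + y                           # overloaded arithmetic on random variables
      h = Dense(8, activation='relu')(x)  # Keras layer applied to x directly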

    Inference

    • Generative adversarial networks are implemented, available as GANInference. There's a tutorial (#310).
    • Wasserstein GANs are implemented, available as WGANInference (#448).
    • Several integration tests are implemented (#487).
    • The scale factor argument for VariationalInference is generalized to be a tensor (#467).
    • Inference can now work with tf.Tensor latent variables and observed variables (#488).

    Criticism

    • A number of miscellaneous improvements are made to ed.evaluate and ed.ppc. This includes support for checking implicit models and proper Monte Carlo estimates for the posterior predictive density (#485).

    Documentation & Examples

    • Edward tutorials are reorganized in the style of a flattened list (#455).
    • Mixture density network tutorial is updated to use native modeling language (#459).
    • Mixed effects model examples are added (#461).
    • Dirichlet-Categorical example is added (#466).
    • Inverse Gamma-Normal example is added (#475).
    • Minor fixes have been made to documentation (#437, #438, #440, #441, #454).
    • Minor fixes have been made to examples (#434).

    Miscellanea

    • To support both tensorflow and tensorflow-gpu, TensorFlow is no longer an explicit dependency (#482).
    • The ed.tile utility function is removed (#484).
    • Minor fixes have been made in the code base (#433, #479, #486).

    Acknowledgements

    • Thanks go to Janek Berger (@janekberger), Nick Foti (@nfoti), Patrick Foley (@patrickeganfoley), Alp Kucukelbir (@akucukelbir), Alberto Quirós (@bertini36), Ramakrishna Vedantam (@vrama91), Robert Winslow (@rw).

    We are also grateful to all who filed issues or helped resolve them, asked and answered questions, and were part of inspiring discussions.

  • 1.2.1(Jan 30, 2017)

    • Edward is compatible with TensorFlow 1.0. This provides significantly more distribution support. In addition, Edward now requires TensorFlow 1.0.0-alpha or above (#374, #426).

    Inference

    • Stochastic gradient Hamiltonian Monte Carlo is implemented (#415).
    • Leapfrog calculation is streamlined in HMC, providing speedups in the algorithm (#414).
    • Inference now accepts int and float data types (#421).
    • Order mismatch of latent variables during MCMC updates is fixed (#413).

    Documentation & Examples

    • Rasch model example is added (#410).
    • Collapsed mixture model example is added (#350).
    • Importance weighted variational inference example is updated to use native modeling language.
    • Lots of minor improvements to code and documentation (e.g., #409, #418).

    Acknowledgements

    • Thanks go to Gökçen Eraslan (@gokceneraslan), Jeremy Kerfs (@jkerfs), Matt Hoffman (@matthewdhoffman), Nick Foti (@nfoti), Daniel Wadden (@dwadden), Shijie Wu (@shijie-wu).

    We are also grateful to all who filed issues or helped resolve them, asked and answered questions, and were part of inspiring discussions.

  • 1.2.0(Jan 16, 2017)

    Documentation

    • Website documentation and API is improved (#381, #382, #383).
    • Gitter channel is added (#400).
    • Added docstrings to random variables (#394).

    Miscellaneous

    • copy is disabled for Queue operations (#384).
    • All VariationalInference methods must use build_loss_and_gradients (#385).
    • Logging is improved for VariationalInference (#337).
    • Fixed logging issue during inference (#391).
    • Fixed copy function to work with lists of RandomVariable (#401).
    • Fixed bug with Theano NameError during inference (#395).

    Acknowledgements

    • Thanks go to Gilles Boulianne (@bouliagi), Nick Foti (@nfoti), Jeremy Kerfs (@jkerfs), Alp Kucukelbir (@akucukelbir), John Pearson (@jmxpearson), and @redst4r.

    We are also grateful to all who filed issues or helped resolve them, asked and answered questions, and were part of inspiring discussions.

  • 1.1.6(Dec 13, 2016)

    • TensorFlow v0.12.0rc0 and v0.12.0rc1 broke compatibility with Edward (see #315 for more details). For now, users are recommended to use v0.11.0.
    • A bug with KLqp using the score function gradient estimator is fixed (#373).
  • 1.1.5(Nov 16, 2016)

    Models

    • RandomVariable now accepts an optional value argument, enabling use of random variables that don't currently support sampling, such as Poisson (#326); a sketch follows this list.
    • Documentation on model compositionality is added. [Webpage]
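
    A hedged sketch of the value argument (the parameter name lam follows that era's TensorFlow distributions and is an assumption here):

      import tensorflow as tf
      from edward.models import Poisson

      # No sampler existed for Poisson at the time, so supply a value tensor.
      x = Poisson(lam=5.0, value=tf.ones(10))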

    Inference

    • Inference compositionality is added, enabling algorithms such as Expectation-Maximization and message passing (#330). [Webpage]
    • Data subsampling is added, enabling proper local and global variable scaling for stochastic optimization (#327); a sketch follows this list. [Webpage]
    • Documentation on inference classes is added. [Webpage]
    • VariationalInference has new defaults for a TensorFlow variable list as argument (#336).
    • Type and shape checking is improved during __init__ of Inference.
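
    A hedged sketch of the subsampling pattern (written with the later ed.KLqp name; N is the full data size, M the minibatch size):

      import edward as ed
      import tensorflow as tf
      from edward.models import Normal

      N, M, D = 10000, 100, 5
      X = tf.placeholder(tf.float32, [M, D])
      y_ph = tf.placeholder(tf.float32, [M])
      w = Normal(loc=tf.zeros(D), scale=tf.ones(D))
      y = Normal(loc=ed.dot(X, w), scale=tf.ones(M))
      qw = Normal(loc=tf.Variable(tf.zeros(D)),
                  scale=tf.nn.softplus(tf.Variable(tf.zeros(D))))

      inference = ed.KLqp({w: qw}, data={y: y_ph})
      # Scale the minibatch likelihood so the objective is unbiased for all N.
      inference.initialize(scale={y: float(N) / M})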

    Miscellaneous

    • Fixed an issue where a new Div node is created every Monte Carlo update (#318).
    • Travis build is now functioning properly (#324).
    • Coveralls is now functioning properly (#342).
    • tf.placeholder can now be used instead of ed.placeholder.
    • Website tutorials, documentation, and API are generally more polished.
    • Fixed an issue where computation was incorrectly shared among inferences (#348).
    • scipy is now an optional rather than mandatory dependency (#344).

    Deprecated Features

    NOTE: Several features in Edward are now deprecated (#344):

    • model wrappers, including PythonModel, PyMC3Model, and StanModel—in favor of Edward's native language;
    • the edward.stats module—in favor of random variables in edward.models;
    • MFVI—in favor of KLqp;
    • ed.placeholder—in favor of TensorFlow's tf.placeholder.

    Edward will continue to support them for one or two more versions; they will be removed in a future release.

    Acknowledgements

    • Thanks go to Alp Kucukelbir (@akucukelbir), Dawen Liang (@dawenl), John Pearson (@jmxpearson), Hayate Iso (@isohyt), Marmaduke Woodman (@maedoc), and Matthew Hoffman (@matthewdhoffman).

    We are also grateful to all who filed issues or helped resolve them, asked and answered questions, and were part of inspiring discussions.

  • 1.1.4(Nov 7, 2016)

  • 1.1.3(Oct 3, 2016)

    Models

    • New random variables and methods are added (#256, #274). For example, random variables such as Mixture, QuantizedDistribution, WishartCholesky, and methods such as survival_function().
    • Random variables and methods are now automatically generated from tf.contrib.distributions (#276). Edward random variables are minimal and adapt to the TensorFlow version.

    Inference

    • The API is generalized to enable more fine-grained control (#253, #259, #260).

    Monte Carlo

    • Significant infrastructure for Monte Carlo is added (#254, #255). This makes it easy to develop new Monte Carlo methods.
    • Metropolis-Hastings is implemented (#255).
    • Hamiltonian Monte Carlo is implemented (#269).
    • Stochastic gradient Langevin dynamics is implemented (#272).

    Variational inference

    • Black box-style methods are refactored internally (#249).

    Documentation

    • The website tutorials are placed in a directory and have clean links (#263, #264).
    • Initial progress is made on IPython notebook versions of the tutorials (#261).
    • The website API is revamped (#268). Everything is now LaTeX-sourced, and the Delving In page is moved to the frontpage of the API.

    Miscellaneous

    • Printing behavior of random variables is changed (#276).
    • edward.criticisms is its own subpackage (#258).
    • The TensorFlow dependency is now >=0.11.0rc0 (#274).

    Acknowledgements

    • Thanks go to Alp Kucukelbir (@akucukelbir), Bhargav Srinivasa (@bhargavvader), and Justin Bayer (@bayerj).

    We are also grateful to all who filed issues or helped resolve them, asked and answered questions, and were part of inspiring discussions.

  • 1.1.2(Sep 23, 2016)

    Functionality

    • A new modeling language is added, which exposes model structure to the user. This enables development of both model-specific and generic inference algorithms (#239).
    • All of inference and criticism is updated to support the new language and also be backward-compatible with the model wrappers (#239).

    Documentation

    • All of the website is updated to reflect the new modeling language (#252).
    • Several existing tutorials now use the modeling language instead of a model wrapper (#252).

    Examples

    Miscellaneous

    • The TensorFlow dependency is now >=0.10.0.
    • Momentum optimizer argument is fixed (#246).
  • 1.1.1(Aug 30, 2016)

    Functionality

    • The API for inference and criticism is changed. It is a more intuitive interface that allows for multiple sets of latent variables (#192).
    • The API for variational models is changed (#237). The user must explicitly define the parameters that he or she wishes to train; this allows for more flexibility in how to initialize and train variational parameters.
    • edward.models is refactored to incorporate all random variables in tf.contrib.distributions (#237). This speeds up computation, is more robust, and supports additional distributions and distribution methods.
    • edward.stats is refactored to have its main internals reside in tf.contrib.distributions (#238). This speeds up computation, is more robust, and supports additional distributions and distribution methods.

    Documentation

    • All of the website is updated to reflect the new API changes.
    • The contributing page is revamped.

    Examples

    Testing

    • py.test is now the testing tool of choice.
    • Code now follows all of PEP8, with the exception of two-space indenting following TensorFlow's style guide (#214, #215, #216, #217, #218, #219, #220, #221, #223, #225, #227, #228, #229, #230).
    • Travis automates checking for PEP8.
    • (minimal) TensorBoard support is added. Specifically, one can now visualize the computational graph used during inference.

    Miscellaneous

    • The TensorFlow dependency is now >=0.10.0rc0.
    • ed.__version__ displays Edward's version.
    • ed.set_seed() is more robust, checking to see if any random ops were created prior to setting the seed.
  • 1.1.0(Jul 18, 2016)

    Functionality

    • Three ways to read data are supported, ranging from storing data in memory within TensorFlow's computational graph, to manually feeding data, to reading data from files. (see #170)
    • Support for Python 3 is added.
    • The naming scheme for various attributes is made consistent. (see https://github.com/blei-lab/edward/pull/162#issuecomment-232517072)

    Documentation

    • The website is given a complete overhaul, now with getting started and delving in pages, in-depth tutorials, and an API describing the design of Edward and autogenerated doc for each function in Edward. (see #149)

    Examples

  • 1.0.9(Jul 10, 2016)

    • There is now one data object to rule them all: a Python dictionary. (see #156)
    • Distribution objects can be of arbitrary shape. For example, a 5 x 2 matrix of Normal random variables is declared with x = Normal([5, 2]). (see #138)

    Documentation

    • All of Edward is documented. (see #148)
    • Edward now follows TensorFlow style guidelines.
    • A tutorial on black box variational inference is available. (see #153)

    Miscellaneous

    • We now use the special functions and their automatic differentiation available in TensorFlow, e.g., tf.lgamma, tf.digamma, tf.lbeta.
    • Sampling via NumPy/SciPy is done using a tf.py_func wrapper, speeding up sampling and avoiding internal overhead from the previous solution. (see #160)
    • Sampling via reparameterizable distributions now follows the convention of tf.contrib.distributions. (see #161)
    • Fixed a bug where the layers object in Variational was copied at the class level (see #119)
  • 1.0.8(Jun 26, 2016)

    • distributions can now be specified with parameters, simplifying use of inference networks, alternative parameterizations, and much of the internals for developing new inference algorithms; see #126
    • TensorFlow session is now a global variable and can simply be accessed with get_session(); see #117
    • added Laplace approximation
    • added utility function to calculate the Hessian of TensorFlow tensors
  • 1.0.7(Jun 5, 2016)

  • 1.0.6(Jun 4, 2016)

    • website with revamped documentation: http://edwardlib.org. See details in #108
    • criticism of probabilistic models with ed.evaluate() and ed.ppc(). See details in #107
  • 1.0.5(May 24, 2016)

    • enabled Keras as neural network specification
    • samples in variational model can now leverage TensorFlow-based samplers and not only SciPy-based samplers
    • let user optionally specify sess when using inference
    • mean-field variational inference can now take advantage of analytically tractable KL terms for standard normal priors
    • data can additionally be a list of np.ndarrays or list of tf.placeholders
    • added mixture density network as example
    • enabled dimensions of distribution output to match with input dimensions
    • renamed log_gamma, log_beta, multivariate_log_beta to lgamma and lbeta to follow convention in TensorFlow API
    • let PointMass be a variational factor
    • fixed Multinomial variational factor
    • added continuous integration for unit tests
  • 1.0.4(May 14, 2016)

    • interface-wise, you now import models (probability models or variational models) using
    from edward.models import PythonModel, Variational, Normal
    

    By default you can also do something like ed.StanModel(model_file=model_file).

    • variational distributions now default to initializing with only one factor
  • 1.0.3(May 14, 2016)

    • generalized internals of variational distributions to use multivariate factors
    • vectorized all distributions and with unit tests
    • added additional distributions: binom, chi2, geom, lognorm, nbinom, uniform
    • vectorized log density calls in variational distributions
    • vectorized log density calls in model examples
  • 1.0.2(May 9, 2016)

  • 1.0.1(May 7, 2016)

  • 1.0.0(May 3, 2016)

    Edward is a Python library for probabilistic modeling, inference, and criticism. It enables black box inference for models with discrete and continuous latent variables, neural network parameterizations, and infinite dimensional parameter spaces. Edward serves as a fusion of three fields: Bayesian statistics and machine learning, deep learning, and probabilistic programming.

Owner

Blei Lab ("We are malleable but resistant to corrosion.")