Deep GPs built on top of TensorFlow/Keras and GPflow

GPflux

Documentation | Tutorials | API reference | Slack

What does GPflux do?

GPflux is a toolbox dedicated to Deep Gaussian processes (DGP), the hierarchical extension of Gaussian processes (GP).

GPflux uses the mathematical building blocks from GPflow and marries these with the powerful layered deep learning API provided by Keras. This combination leads to a framework that can be used for:

  • researching new (deep) Gaussian process models, and
  • building, training, evaluating and deploying (deep) Gaussian processes in a modern way — making use of the tools developed by the deep learning community.
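
A minimal sketch of what this looks like in practice (a toy single-layer example on made-up 1-D data, for illustration only; see the Tutorials for fully worked examples):

import numpy as np
import tensorflow as tf
import gpflow
import gpflux

tf.keras.backend.set_floatx("float64")

# Toy 1-D regression data (made up for illustration)
X = np.linspace(0, 1, 100)[:, None]
Y = np.sin(10 * X) + 0.1 * np.random.randn(100, 1)
num_data = len(X)

# GPflow building blocks: a kernel and a set of inducing points ...
kernel = gpflow.kernels.SquaredExponential()
inducing_variable = gpflow.inducing_variables.InducingPoints(X[::10].copy())

# ... combined into GPflux layers and a (single-layer) deep GP
gp_layer = gpflux.layers.GPLayer(
    kernel, inducing_variable, num_data=num_data, num_latent_gps=1
)
likelihood_layer = gpflux.layers.LikelihoodLayer(gpflow.likelihoods.Gaussian(0.1))
deep_gp = gpflux.models.DeepGP([gp_layer], likelihood_layer)

# The deep GP is exposed as a Keras model and trained with the usual fit loop
model = deep_gp.as_training_model()
model.compile(tf.optimizers.Adam(0.01))
model.fit({"inputs": X, "targets": Y}, epochs=100, verbose=0)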

Getting started

In the Documentation, we have multiple Tutorials showing the basic functionality of the toolbox, a benchmark implementation and a comprehensive API reference.

Install GPflux

This project assumes you are using Python 3.

For users

To install the latest (stable) release of the toolbox from PyPI, use pip:

$ pip install gpflux

For contributors

To install this project in editable mode, run the commands below from the root directory of the GPflux repository.

make install

Check that the installation was successful by running the tests:

make test

You can have a peek at the Makefile for the commands.

The Secondmind Labs Community

Getting help

Bugs, feature requests, pain points, annoying design quirks, etc: Please use GitHub issues to flag up bugs/issues/pain points, suggest new features, and discuss anything else related to the use of GPflux that in some sense involves changing the GPflux code itself. We positively welcome comments or concerns about usability, and suggestions for changes at any level of design. We aim to respond to issues promptly, but if you believe we may have forgotten about an issue, please feel free to add another comment to remind us.

Slack workspace

We have a public Secondmind Labs Slack workspace. Please use this invite link and join the #gpflux channel, whether you'd just like to ask short informal questions or want to be involved in the discussion and future development of GPflux.

Contributing

All constructive input is very much welcome. For detailed information, see the guidelines for contributors.

Maintainers

GPflux was originally created at Secondmind Labs and is now actively maintained by (in alphabetical order) Vincent Dutordoir and ST John. We are grateful to all contributors who have helped shape GPflux.

GPflux is an open source project. If you have relevant skills and are interested in contributing then please do contact us (see "The Secondmind Labs Community" section above).

We are very grateful to our Secondmind Labs colleagues, maintainers of GPflow, Trieste and Bellman, for their help with creating contributing guidelines, instructions for users and open-sourcing in general.

Citing GPflux

To cite GPflux, please reference our arXiv paper where we review the framework and describe the design. Sample BibTeX is given below:

@article{dutordoir2021gpflux,
    author = {Dutordoir, Vincent and Salimbeni, Hugh and Hambro, Eric and McLeod, John and
        Leibfried, Felix and Artemev, Artem and van der Wilk, Mark and Deisenroth, Marc P.
        and Hensman, James and John, ST},
    title = {GPflux: A library for Deep Gaussian Processes},
    year = {2021},
    journal = {arXiv:2104.05674},
    url = {https://arxiv.org/abs/2104.05674}
}

License

Apache License 2.0

Comments
  • Attempting to learn models with multidimensional inputs leads to an error.

    Thanks a lot for making this exciting project public! I'm not 100% sure if what I'm reporting is a bug or if this isn't supposed to work in GPflux, but here we go:

    Describe the bug: Attempting to learn models with multidimensional inputs leads to an error.

    To reproduce: First of all, the setup of a toy example and a GPflow SVGP-based version, which works as expected:

    import numpy as np
    import tensorflow as tf
    import matplotlib.pyplot as plt
    import gpflow
    import gpflux
    from gpflow.utilities import print_summary, set_trainable
    
    tf.keras.backend.set_floatx("float64")
    tf.get_logger().setLevel("INFO")
    
    grid = np.meshgrid(np.linspace(0, np.pi*2, 20),
                       np.linspace(0, np.pi*2, 20))
    X = np.column_stack(tuple(map(np.ravel, grid)))
    Y = (np.sin(X[:, 0]) * np.sin(X[:, 1]))[:, None]
    
    plt.contourf(grid[0], grid[1], Y.reshape(grid[0].shape))
    plt.title("DATA")
    plt.show()
    
    num_data = len(X)
    num_inducing = 10
    output_dim = Y.shape[1]
    
    kernel = (gpflow.kernels.SquaredExponential(active_dims=[0]) *
              gpflow.kernels.SquaredExponential(active_dims=[1]))
    inducing_variable = gpflow.inducing_variables.InducingPoints(
        X[np.random.choice(X.shape[0], size=num_inducing, replace=False),:].copy()
    )
    
    #---------- SVGP
    svgp = gpflow.models.SVGP(kernel, gpflow.likelihoods.Gaussian(), inducing_variable,
                              num_latent_gps=output_dim, num_data=num_data)
    set_trainable(svgp.q_mu, False)
    set_trainable(svgp.q_sqrt, False)
    variational_params = [(svgp.q_mu, svgp.q_sqrt)]
    natgrad_opt = gpflow.optimizers.NaturalGradient(gamma=0.1)
    adam_opt = tf.optimizers.Adam(0.01)
    minibatch_size = 10
    train_dataset = tf.data.Dataset.from_tensor_slices(
        (X, Y)).repeat().shuffle(num_data)
    iter_train = iter(train_dataset.batch(minibatch_size))
    objective = svgp.training_loss_closure(iter_train, compile=True)
    
    @tf.function
    def optim_step():
        natgrad_opt.minimize(objective, var_list=variational_params)
        adam_opt.minimize(objective, svgp.trainable_variables)
    
    for i in range(100):
        optim_step()
    elbo = -objective().numpy()
    print(f"it: {i} of dual-optimizer... elbo: {elbo}")
    
    
    atgrid = np.meshgrid(np.linspace(0, np.pi*2, 40),
                         np.linspace(0, np.pi*2, 40))
    atX = np.column_stack(tuple(map(np.ravel, atgrid)))
    
    mean, var = svgp.predict_f(atX)
    plt.contourf(atgrid[0], atgrid[1], mean.numpy().reshape(atgrid[0].shape))
    plt.title("SVGP")
    plt.show()
    

    And here is a single-layer DGP with GPflux:

    #---------- DEEPGP
    gp_layer = gpflux.layers.GPLayer(
        kernel, inducing_variable, num_data=num_data, num_latent_gps=output_dim
    )
    
    likelihood_layer = gpflux.layers.LikelihoodLayer(gpflow.likelihoods.Gaussian(0.1))
    
    single_layer_dgp = gpflux.models.DeepGP([gp_layer], likelihood_layer)
    model = single_layer_dgp.as_training_model()
    model.compile(tf.optimizers.Adam(0.01))
    
    log = model.fit({"inputs": X, "targets": Y}, epochs=int(100), verbose=1)
    

    which throws the following error when reaching the last line of the example:

    ValueError: in user code:
    
        venv/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:805 train_function  *
            return step_function(self, iterator)
        venv/lib/python3.7/site-packages/gpflux/layers/gp_layer.py:277 call  *
            outputs = super().call(inputs, *args, **kwargs)
        venv/lib/python3.7/site-packages/tensorflow_probability/python/layers/distribution_layer.py:252 call  **
            inputs, *args, **kwargs)
        venv/lib/python3.7/site-packages/tensorflow/python/keras/layers/core.py:917 call
            result = self.function(inputs, **kwargs)
        venv/lib/python3.7/site-packages/tensorflow_probability/python/layers/distribution_layer.py:172 _fn
            d = make_distribution_fn(*fargs, **fkwargs)
        venv/lib/python3.7/site-packages/gpflux/layers/gp_layer.py:328 _make_distribution_fn
            return tfp.distributions.MultivariateNormalDiag(loc=mean, scale_diag=tf.sqrt(cov))
        <decorator-gen-394>:2 __init__
            
        venv/lib/python3.7/site-packages/tensorflow_probability/python/distributions/distribution.py:298 wrapped_init
            default_init(self_, *args, **kwargs)
        venv/lib/python3.7/site-packages/tensorflow/python/util/deprecation.py:538 new_func
            return func(*args, **kwargs)
        venv/lib/python3.7/site-packages/tensorflow_probability/python/distributions/mvn_diag.py:252 __init__
            name=name)
        <decorator-gen-322>:2 __init__
            
        venv/lib/python3.7/site-packages/tensorflow_probability/python/distributions/distribution.py:298 wrapped_init
            default_init(self_, *args, **kwargs)
        venv/lib/python3.7/site-packages/tensorflow_probability/python/distributions/mvn_linear_operator.py:190 __init__
            loc, scale)
        venv/lib/python3.7/site-packages/tensorflow_probability/python/internal/distribution_util.py:136 shapes_from_loc_and_scale
            'of `loc` ({}).'.format(event_size_, loc_event_size_))
    
        ValueError: Event size of `scale` (1) could not be broadcast up to that of `loc` (2).
    

    Expected behaviour: I expected this not to throw an error and to produce an (at least qualitatively) similar result to the SVGP implementation, but again, I'm not sure if this expectation is justified.

    System information

    • OS: Linux, kernel 5.4.112-1
    • Python version: 3.7.5
    • GPflux version: 0.1.0 from pip
    • TensorFlow version: 2.4.1
    • GPflow version: 2.1.5
    bug 
    opened by clwgg 6
  • Conditional Density Estimation notebook

    Notebook building and fitting a deep (two-layer) latent variable model using VI. No changes to the core of GPflux are required, but careful setting of the fitting options is necessary. For example, it is important to set shuffle to False and batch_size to the number of datapoints to optimise the latent variables correctly.
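
    A sketch of the fitting options being described (illustrative only; it assumes model is the Keras training model returned by as_training_model() for the latent-variable deep GP built in the notebook, and X, Y are the training arrays):

    num_data = len(X)
    model.fit(
        {"inputs": X, "targets": Y},
        batch_size=num_data,  # a single batch containing all datapoints
        shuffle=False,        # keep the datapoint/latent-variable alignment fixed
        epochs=1000,
        verbose=0,
    )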

    opened by vdutor 3
  • Customize NatGrad Model to turn off variational parameters in hidden layers

    It seems that, in the NatGradModel class, the requirement of having a NaturalGradient optimizer for each layer reduces flexibility for the user, as it forces the variational parameters of every layer to be optimized. Even when switching the parameters off manually from outside through set_trainable, TensorFlow throws a ValueError: None values not supported. The goal is to switch off the variational parameters of all layers except the last hidden layer. Is there any way to get around this issue?

      # Set all var params in inner layers off
      var_params = [(layer.q_mu, layer.q_sqrt) for layer in dgp_model.f_layers[:-1]]
      for vv in var_params:
          set_trainable(vv[0], False)
          set_trainable(vv[1], False)
    
      # Train Last Layer with NatGrad: (NOTE: this uses the given class from gpflux and not customized NatGradModel_)
      train_mode = NatGradWrapper(dgp_model.as_training_model())
      train_mode.compile([NaturalGradient(gamma=0.01), NaturalGradient(gamma=1.0), tf.optimizers.Adam(0.001)])
      history = train_mode.fit({"inputs": Xsc, "targets": Y}, epochs=int(5000), verbose=1)
    

    I only got it to work by changing the _split_natgrad_params_and_other_vars and optimizer.setter functions. Although it works, I'm not too sure whether it is correct.

    class NatGradModel_(tf.keras.Model):
    
        @property
        def natgrad_optimizers(self) -> List[gpflow.optimizers.NaturalGradient]:
            if not hasattr(self, "_all_optimizers"):
                raise AttributeError(
                    "natgrad_optimizers accessed before optimizer being set"
                )  # pragma: no cover
            if self._all_optimizers is None:
                return None  # type: ignore
            return self._all_optimizers
    
        @property
        def optimizer(self) -> tf.optimizers.Optimizer:
    
            if not hasattr(self, "_all_optimizers"):
                raise AttributeError("optimizer accessed before being set")
            if self._all_optimizers is None:
                return None
            return self._all_optimizers
    
        @optimizer.setter
        def optimizer(self, optimizers: List[NaturalGradient]) -> None:
            # # Remove AdamOptimizer Requirement
            if optimizers is None:
                # tf.keras.Model.__init__() sets self.optimizer = None
                self._all_optimizers = None
                return
    
            if optimizers is self.optimizer:
                # Keras re-sets optimizer with itself; this should not have any effect on the state
                return
    
            self._all_optimizers = optimizers
    
        def _split_natgrad_params_and_other_vars(
            self,
        ) -> List[Tuple[Parameter, Parameter]]:
    
            # self.layers[-1] is Likelihood Layer, self.layers[-2] is Input Layer,
            # Last hidden layer is self.layers[-3]
            variational_params = [(self.layers[-3].q_mu, self.layers[-3].q_sqrt)]
    
            return variational_params
    
        def _apply_backwards_pass(self, loss: tf.Tensor, tape: tf.GradientTape) -> None:
     
            variational_params = self._split_natgrad_params_and_other_vars()
            variational_params_vars = [
                (q_mu.unconstrained_variable, q_sqrt.unconstrained_variable)
                for (q_mu, q_sqrt) in variational_params
            ]
    
            variational_params_grads = tape.gradient(loss, (variational_params_vars))
    
    
            num_natgrad_opt = len(self.natgrad_optimizers)
            num_variational = len(variational_params)
            if len(self.natgrad_optimizers) != len(variational_params):
                raise ValueError(
                    f"Model has {num_natgrad_opt} NaturalGradient optimizers, "
                    f"but {num_variational} variational distributions"
                )  # pragma: no cover
    
            for (natgrad_optimizer, (q_mu_grad, q_sqrt_grad), (q_mu, q_sqrt)) in zip(
                self.natgrad_optimizers, variational_params_grads, variational_params
            ):
                natgrad_optimizer._natgrad_apply_gradients(q_mu_grad, q_sqrt_grad, q_mu, q_sqrt)
    
    
        def train_step(self, data: Any) -> Mapping[str, Any]:
            """
            The logic for one training step. For more details of the
            implementation, see TensorFlow's documentation of how to
            `customize what happens in Model.fit
            <https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit>`_.
            """
            from tensorflow.python.keras.engine import data_adapter
    
            data = data_adapter.expand_1d(data)
            x, y, sample_weight = data_adapter.unpack_x_y_sample_weight(data)
    
            with tf.GradientTape() as tape:
                y_pred = self.__call__(x, training=True)
                loss = self.compiled_loss(y, y_pred, sample_weight, regularization_losses=self.losses)
    
            self._apply_backwards_pass(loss, tape=tape)
    
            self.compiled_metrics.update_state(y, y_pred, sample_weight)
            return {m.name: m.result() for m in self.metrics}
    
    

    The problem that I'm trying to reproduce is from https://github.com/ICL-SML/Doubly-Stochastic-DGP/blob/master/demos/using_natural_gradients.ipynb

    However, even with the same settings, I am still unable to reproduce the results.

    opened by izsahara 3
  • GPLayer's prediction seems to be too confident

    Looking at the "Hybrid Deep GP models: ..." tutorial, the GPLayer's prediction seems to be too confident, i.e. its uncertainty estimate (95% confidence level) does not cover the training data spread. Its prediction accuracy (mean), however, is very good, nearly identical to that of the neural network model obtained by removing the GPLayer.

    When I replace the GPLayer with a TFP DenseVariational layer using Gaussian priors, the prediction accuracy is not as good. However, importantly, the uncertainty estimate is very good, covering the training data spread well.

    Without a good uncertainty estimate, the GPLayer seems to add little value over the neural network model, which already provides good prediction accuracy.

    bug 
    opened by dtchang 3
  • GPLayer doesn't seem to support multiple input units

    Using the "Hybrid Deep GP models: ..." tutorial, when I changed tf.keras.layers.Dense(1, activation="linear"), to tf.keras.layers.Dense(2, activation="linear"), I got an error, same as reported in #27.

    I then set a Zero mean function, as suggested in #27 (see the code block below), but got a different error.
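
    For readability, the attempted workaround as a code block (kernel, inducing_variable, num_data and output_dim as defined in the tutorial):

    gp_layer = gpflux.layers.GPLayer(
        kernel,
        inducing_variable,
        num_data=num_data,
        num_latent_gps=output_dim,
        # As suggested in #27: use a Zero mean function instead of the default
        mean_function=gpflow.mean_functions.Zero(),
    )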

    bug 
    opened by dtchang 3
  • Update quality-check.yaml

    GPflux does not work at present with TensorFlow 2.5.0. @st-- has an open PR (#30) exploring how this could work.

    At present the develop build fails. This PR bounds the TensorFlow version from above by 2.5.0.

    opened by johnamcleod 3
  • GPflux for text classification?

    Hey, many thanks for this project! I am currently investigating GPs for binary (and one-class) classification tasks and did some first experiments using pre-trained sentence embeddings for feature representation, PCA for dimension reduction, and GPs (GPflow) for classification. It sounds promising to use a text embedding, some dense layers and a GP in an end-to-end fashion, and at first glance GPflux seems to offer this. After checking the GPflux tutorials (Hybrid Deep GP models), I am not sure how to define the inducing variables. It seems they have to cover the expected data ranges in each latent-space dimension, right? Furthermore, I am not sure whether GPflux offers variational inference for binary classification. Any comments, suggestions or links that could help to build hybrid models are appreciated. Many thanks! Kind regards, Jens
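
    A hedged sketch of the GP-classification part of such a model (illustrative only, not an official recipe: it assumes X holds the reduced embedding features and Y the 0/1 labels, uses a Bernoulli likelihood for variational binary classification, and initialises the inducing points from the data so they span its range in every dimension; Keras layers could be stacked in front as in the Hybrid Deep GP tutorial):

    import numpy as np
    import tensorflow as tf
    import gpflow
    import gpflux

    tf.keras.backend.set_floatx("float64")

    # Placeholder stand-ins for reduced sentence embeddings and binary labels
    num_data, input_dim = 500, 8
    X = np.random.randn(num_data, input_dim)
    Y = np.random.randint(0, 2, size=(num_data, 1)).astype(np.float64)

    # Inducing points initialised from a random subset of the data, so they
    # cover the observed input range in every dimension
    Z = X[np.random.choice(num_data, size=50, replace=False)].copy()

    gp_layer = gpflux.layers.GPLayer(
        gpflow.kernels.SquaredExponential(),
        gpflow.inducing_variables.InducingPoints(Z),
        num_data=num_data,
        num_latent_gps=1,
        mean_function=gpflow.mean_functions.Zero(),
    )
    # A Bernoulli likelihood turns the latent GP into a variational binary classifier
    likelihood_layer = gpflux.layers.LikelihoodLayer(gpflow.likelihoods.Bernoulli())

    model = gpflux.models.DeepGP([gp_layer], likelihood_layer).as_training_model()
    model.compile(tf.optimizers.Adam(0.01))
    model.fit({"inputs": X, "targets": Y}, epochs=100, verbose=0)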

    opened by kerstenj 2
  • Carry on using Ubuntu 20.04 for now

    ubuntu-latest has recently been updated to use Ubuntu 22.04, which seems to break our tests. While we investigate this we should continue to use the old builder.

    opened by uri-granta 1
  • Update to newer GPflow and TensorFlow.

    Update to make GPflux compatible with newer versions of GPflow, which require an additional X parameter in the likelihoods. In this PR I just pass a dummy None value as X. Alternatively we could:

    1. I don't know Keras and GPflux well, but maybe we can find a "real" value of X to use?
    2. Do we want to attempt to write code that's compatible with earlier versions of GPflow as well? I suppose we could add an if somewhere?
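
    A rough, self-contained sketch of what the dummy-X approach looks like at a likelihood call site (assumptions: GPflow >= 2.6, a Gaussian likelihood, and made-up shapes; the actual call sites live inside GPflux's likelihood layer):

    import numpy as np
    import gpflow

    likelihood = gpflow.likelihoods.Gaussian()
    f_mean = np.zeros((5, 1))   # predicted latent mean
    f_var = np.ones((5, 1))     # predicted latent variance
    targets = np.zeros((5, 1))  # observations

    # GPflow >= 2.6 likelihood methods expect the inputs X as an extra first
    # argument; here a dummy None is passed in its place, as in this PR.
    X_dummy = None
    exp_log_lik = likelihood.variational_expectations(X_dummy, f_mean, f_var, targets)
    y_mean, y_var = likelihood.predict_mean_and_var(X_dummy, f_mean, f_var)
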
    opened by jesnie 1
  • Support tensorflow 2.5 through 2.8.

    1. Dropped support for Python 3.6.
    2. Added Python 3.9 and 3.10.
    3. Added TensorFlow 2.6, 2.7 and 2.8.
    4. Updated github actions to test all of these combinations.
    5. Had to update some of the tests_requirements - this caused some reformatting.
    6. tfp.Distributions are sometimes wrapped in a _TensorCoercible - I added unwrap_dist to handle this.
    7. For some versions there are problems serialising gpflow.Parameters. I skip the relevant tests.
    8. Apparently the tags that are exported by TensorBoard change slightly with the version. Added a version check for that.
    9. Had to down-adjust coverage to 96% - presumably related to the skipped tests above.

    Note that the changes to the build system will require you/us to update the settings for which tests are required to merge.

    opened by jesnie 1
  • Place upper bound on TensorFlow (Probability) dependencies

    TensorFlow (TF) 2.6.0 introduces some breaking changes. Until these are addressed, we must ensure the versions installed are strictly less than 2.6.0 and 0.14.0 for TF and TF-Probability respectively (TF-Probability version 0.14.0 and above require TensorFlow version equal to or greater than 2.6.0).

    opened by ltiao 1
  • Sebastian.p/orth dgp

    Implementation of "Sparse Orthogonal Variational Inference for Gaussian Processes".

    Created the following folders:

    • conditionals: needs a specialized form to account for the two different GPs that have to be summed, g() and h(), as in the paper.
    • covariances: needed to compute Cvv and Cvf as in the paper. These covariances rely on the other set of inducing points.
    • posteriors.py: different conditionals are needed here.
    • conditionals.util.conditional_GP_maths: might have to be re-designed, or at least have its name changed to something more sensible.

    opened by SebastianPopescu 5
  • Fix for new GPflow heteroskedastic likelihood breaks for quadrature dependent likelihoods

    In https://github.com/secondmind-labs/GPflux/pull/84, several changes were made to accommodate the new framework in GPflow for heteroskedastic likelihoods. More precisely, no_X = None in gpflux/layers/likelihood_layer.py.

    This works well with Gaussian or Student-t likelihoods; however, it breaks when using Softmax, which uses quadrature for variational_expectations or predict_mean_and_var. Both methods require access to the shape of X, so because we are currently passing None, this results in an error.

    bug 
    opened by SebastianPopescu 0
  • Add version upper bound before adjusting to gpflow>=2.6

    Describe the bug: GPflow>=2.6 seems to change the way the likelihood p(Y|F,X) is calculated, from using only F and Y to using X, F and Y. I guess this change is incompatible with the current GPflux develop branch.

    To reproduce the behaviour:

    1. git clone https://github.com/secondmind-labs/GPflux.git
    2. pip install -e .
    3. python ./gpflux/docs/notebooks/intro.py

    An error will occur at Line 99.

    System information

    • OS: Ubuntu20.04
    • Python version: 3.9.13
    • GPflux version: develop branch f95e1cb
    • TensorFlow version: 2.8.3
    • GPflow version: 2.6.1

    Additional context: everything is OK when switching to gpflow==2.5.2.

    bug 
    opened by zjowowen 3
  • Sebastian.p/generalized rff

    The purpose of this PR is to support sampling with models

    • that can have SeparateIndependent and SharedIndependent kernels & inducing variables.
    • with Heteroskedastic likelihoods (so multiple GP heads)

    Main changes:

    • creation of a feature_decomposition_kernels folder. The idea was to structure it just like in GPflow (i.e. a multioutput subfolder). Having a separate folder for this type of kernel will prove better suited if we plan to include some other papers in the future, such as [1] Solin, Arno, and Simo Särkkä. "Hilbert space methods for reduced-rank Gaussian process regression." Statistics and Computing (2020), and [2] Borovitskiy, Viacheslav, et al. "Matérn Gaussian processes on Riemannian manifolds." Advances in Neural Information Processing Systems (2020).
    • in gpflux.layers.basis_functions.fourier_features I have added the multioutput version
    • in gpflux.sampling I have added the multioutput version
    opened by SebastianPopescu 3
Releases(v0.3.1)
  • v0.3.1(Nov 17, 2022)

    What's Changed

    • Add conditionally tensorflow-macos to setup.py by @vdutor in https://github.com/secondmind-labs/GPflux/pull/77
    • removing dependency on setting the Keras backend by @hstojic in https://github.com/secondmind-labs/GPflux/pull/79
    • Update to newer GPflow and TensorFlow. by @jesnie in https://github.com/secondmind-labs/GPflux/pull/84
    • Bump version to 0.3.1 by @uri-granta in https://github.com/secondmind-labs/GPflux/pull/86

    New Contributors

    • @hstojic made their first contribution in https://github.com/secondmind-labs/GPflux/pull/79
    • @uri-granta made their first contribution in https://github.com/secondmind-labs/GPflux/pull/86

    Full Changelog: https://github.com/secondmind-labs/GPflux/compare/v0.3.0...v0.3.1

  • v0.3.0(May 30, 2022)

    What's Changed

    • Add priors to kernel hyperparameters to loss by @vdutor in https://github.com/secondmind-labs/GPflux/pull/62
    • Restructure basis function modules by @ltiao in https://github.com/secondmind-labs/GPflux/pull/63
    • Use iv.num_inducing instead of len(iv), for compatibility with future GPflow. by @jesnie in https://github.com/secondmind-labs/GPflux/pull/66
    • adding import in init to make "fourier_features" module available by @NicolasDurrande in https://github.com/secondmind-labs/GPflux/pull/69
    • Fixing issue #70 by @sebastianober in https://github.com/secondmind-labs/GPflux/pull/71
    • Support tensorflow 2.5 through 2.8. by @jesnie in https://github.com/secondmind-labs/GPflux/pull/72
    • Pin protobuf to 3.19.0 by @vdutor in https://github.com/secondmind-labs/GPflux/pull/73

    New Contributors

    • @jesnie made their first contribution in https://github.com/secondmind-labs/GPflux/pull/66
    • @NicolasDurrande made their first contribution in https://github.com/secondmind-labs/GPflux/pull/69
    • @sebastianober made their first contribution in https://github.com/secondmind-labs/GPflux/pull/71

    Full Changelog: https://github.com/secondmind-labs/GPflux/compare/v0.2.7...v0.3.0

  • v0.2.7(Nov 8, 2021)

  • v0.2.6(Nov 8, 2021)

  • v0.2.5(Nov 8, 2021)

    What's Changed

    • hotfix version by @tensorlicious in https://github.com/secondmind-labs/GPflux/pull/59
    • fix version in setup.py by @tensorlicious in https://github.com/secondmind-labs/GPflux/pull/60

    New Contributors

    • @tensorlicious made their first contribution in https://github.com/secondmind-labs/GPflux/pull/59

    Full Changelog: https://github.com/secondmind-labs/GPflux/compare/v0.2.4...v0.2.5

  • v0.2.4(Nov 5, 2021)

    What's Changed

    • TF 2.5 compatibility by @vdutor in https://github.com/secondmind-labs/GPflux/pull/48
    • Place upper bound on TensorFlow (Probability) dependencies by @ltiao in https://github.com/secondmind-labs/GPflux/pull/52
    • Make RFF weights explicitly not trainable by @ltiao in https://github.com/secondmind-labs/GPflux/pull/51
    • Refactoring basis functions by @ltiao in https://github.com/secondmind-labs/GPflux/pull/53
    • Added support for alternative Fourier feature map by @ltiao in https://github.com/secondmind-labs/GPflux/pull/54
    • Quadrature Fourier features by @ltiao in https://github.com/secondmind-labs/GPflux/pull/56
    • Orthogonal Random Features by @ltiao in https://github.com/secondmind-labs/GPflux/pull/57

    New Contributors

    • @ltiao made their first contribution in https://github.com/secondmind-labs/GPflux/pull/52

    Full Changelog: https://github.com/secondmind-labs/GPflux/compare/v0.2.3...v0.2.4

  • v0.2.3(Aug 24, 2021)

    Release 0.2.3

    Bugfixes

    Fix PyPi upload Github Action

    Thanks to our Contributors

    This release contains contributions from (alphabetical order)

    @sebastianober, @vdutor

  • v0.2.2(Aug 24, 2021)

    Release 0.2.2

    Bugfixes

    • Fix PyPi upload Github Action (#46)

    Thanks to our Contributors

    This release contains contributions from (alphabetical order)

    @sebastianober, @vdutor

  • v0.2.1(Aug 24, 2021)

    Release 0.2.1

    Bugfixes

    • Fix PyPi upload Github Action (#45)

    Thanks to our Contributors

    This release contains contributions from (alphabetical order)

    @sebastianober, @vdutor

  • v0.2.0(Aug 24, 2021)

    Release 0.2.0

    Improvements

    • Allow for whitening in sampling methods (#26)
    • Add warning for default Identity mean function in GPLayer (#42)
    • Allow the user to specify which layers to train with NaturalGradient (#43)

    Documentation

    • Update README with link to GPflux paper (#22)
    • Clean notebook deep_gp_samples notebook (#23)
    • Fix header in efficient_sampling notebook (#24)
    • New notebook on conditional density estimation with GPflux (#40)
    • Improve plotting in gpflux_with_keras_layers (#41)

    Thanks to our Contributors

    This release contains contributions from (alphabetical order)

    @johnamcleod, @sebastianober, @st--, @vdutor
