xRBM Library

Implementation of Restricted Boltzmann Machine (RBM) and its variants in TensorFlow

Installation

Using pip:

pip install xrbm
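
After installing, a quick sanity check is simply importing the library's two sub-packages. The sketch below uses the module paths that also appear in the tutorials and in the tracebacks quoted in the comments further down (xrbm.models and xrbm.train):

    # Post-install sanity check (sketch): import the two sub-packages used
    # throughout the tutorials; module paths match the tracebacks quoted below.
    import xrbm.models   # model classes, e.g. the CRBM used in Tutorial 3
    import xrbm.train    # contrastive-divergence training utilities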

Examples

Tutorial 1: Training an RBM on MNIST Dataset

Tutorial 2: Training an RBM on MNIST Dataset - More Tricks

Tutorial 3: Training a Conditional RBM on Timeseries Data
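
The snippet below is a minimal sketch of what the Tutorial 3 workflow looks like, assembled from the code quoted in the comments further down. The CDApproximator class name, the placeholder names, and the hyperparameter values are assumptions for illustration; refer to the tutorial notebooks for the exact code.

    # Minimal sketch of a Conditional RBM training setup (TensorFlow 1.x),
    # assembled from snippets quoted in the comments below. Values and names
    # marked "assumed" may differ from the actual tutorial code.
    import tensorflow as tf
    import xrbm.models
    import xrbm.train

    num_vis, num_cond, num_hid = 4, 12, 30     # example layer sizes (assumed)
    learning_rate, momentum = 0.1, 0.9         # example hyperparameters (assumed)

    # one minibatch of visible frames and their condition (history) frames
    batch_vis_data  = tf.placeholder(tf.float32, shape=(None, num_vis),  name='vis_data')
    batch_cond_data = tf.placeholder(tf.float32, shape=(None, num_cond), name='cond_data')

    # conditional RBM, constructed as in Tutorial 3
    crbm = xrbm.models.CRBM(num_vis=num_vis,
                            num_cond=num_cond,
                            num_hid=num_hid,
                            vis_type='gaussian',
                            initializer=tf.contrib.layers.xavier_initializer(),
                            name='crbm')

    # CD-k trainer from xrbm.train (class name and exact arguments assumed)
    cdapproximator = xrbm.train.CDApproximator(learning_rate=learning_rate,
                                               momentum=momentum,
                                               k=1)
    train_op = cdapproximator.train(crbm, vis_data=batch_vis_data,
                                    in_data=[batch_cond_data])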

Documentation

https://omid.al/xRBM/

Feedback, Bugs, and Questions

For questions, feedback, and bug reports, please use the GitHub Issues.

Credits

Created by Omid Alemi

License

This code is available under the MIT license.

Comments
  • nan value

    Hi,

    I am new to neural networks. Thanks a lot for your xRBM! However, when I run rbm_mnist_simple.py, nan values appear (sometimes, not always) when I set num_hid to 2. The printed message is: Epoch 2/15|cost=nan|lr=0.100000|monentum=0.000000|sparse cost=0.000000. Would you please take a look at this issue? Thank you very much!

    regards, YC

    opened by YCWang6 1
  • Dimension Error for Binary Visible Data in CRBM

    Hi Omid,

    I followed your Tutorial 3: Training a Conditional RBM on Timeseries Data and tried changing vis_type to 'binary' in:

    crbm = xrbm.models.CRBM(num_vis=num_vis,
                            num_cond=num_cond,
                            num_hid=num_hid,
                            vis_type='gaussian',
                            initializer=tf.contrib.layers.xavier_initializer(),
                            name='crbm')
    

    Then I got a dimension error:

    ValueError                                Traceback (most recent call last)
    <ipython-input-7-205d82de8a49> in <module>()
         25                                            momentum=momentum,
         26                                            k=1)
    ---> 27 train_op           = cdapproximator.train(crbm, vis_data=batch_vis_data, in_data=[batch_cond_data])
    
    ~/anaconda3/lib/python3.6/site-packages/xrbm/train/cdk.py in train(self, model, vis_data, in_data, global_step, var_list, name)
         74 
         75         # Get the model's cost function for the training data and the reconstructed data (chain_end)
    ---> 76         cost = model.get_cost(vis_data, chain_end, in_data)
         77 
         78         # We a regularizer is set, then add the regularization terms to the cost function
    
    ~/anaconda3/lib/python3.6/site-packages/xrbm/models/crbm.py in get_cost(self, v_sample, chain_end, in_data)
        298 
        299         with tf.variable_scope('fe_cost'):
    --> 300             cost = tf.reduce_mean(self.free_energy(v_sample, cond)
        301                     - self.free_energy(chain_end, cond), reduction_indices=0)
        302         return cost
    
    ~/anaconda3/lib/python3.6/site-packages/xrbm/models/crbm.py in free_energy(self, v_sample, cond)
        327 
        328             if self.vis_type == 'binary':
    --> 329                 v = - tf.matmul(v_sample, tf.expand_dims(vbias_n_cond,1), name='bin_visible_term')
        330             elif self.vis_type == 'gaussian':
        331                 v = tf.reduce_sum(0.5 * tf.square(v_sample - vbias_n_cond), reduction_indices=1, name='gauss_visible_term')
    ...
    ...
    ValueError: Shape must be rank 2 but is rank 3 for 'fe_cost/free_energy/bin_visible_term' (op: 'MatMul') with input shapes: [?,4], [?,1,4].
    

    If we check xrbm.models.crbm.free_energy (shown below), cond has shape [None, num_cond], so vbias_n_cond = self.vbias + tf.matmul(cond, self.A) has shape [None, num_vis].

    When v = - tf.matmul(v_sample, tf.expand_dims(vbias_n_cond,1), name='bin_visible_term') is evaluated, v_sample has shape [None, num_vis] while tf.expand_dims(vbias_n_cond,1) has shape [None, 1, num_vis], which causes the ValueError: Shape must be rank 2 but is rank 3 for 'fe_cost/free_energy/bin_visible_term' (op: 'MatMul') with input shapes: [?,4], [?,1,4].

    Is it because, for binary visible data, I should not define the shape of batch_cond_data = tf.placeholder(tf.float32, shape=(None, num_cond), name='cond_data')? Or is there another possible reason? (A shape-compatible form of the binary term is sketched after this comment.)

    def free_energy(self, v_sample, cond):
            """
            Calcuates the free-energy of a given visible tensor
    
            Parameters
            ----------
            v_sample:   tensor
                the visible units tensor
            cond:       tensor
                the condition units tensor
    
            Returns
            -------
            e:  float
                the free energy
            """
            with tf.variable_scope('free_energy'):
                bottom_up = (tf.matmul(v_sample, self.W) + # visible to hidden 
                             tf.matmul(cond, self.B) + # condition to hidden
                             self.hbias) # static hidden biases
                
                vbias_n_cond = self.vbias + tf.matmul(cond, self.A)
    
                if self.vis_type == 'binary':
                    v = - tf.matmul(v_sample, tf.expand_dims(vbias_n_cond,1), name='bin_visible_term')
                elif self.vis_type == 'gaussian':
                    v = tf.reduce_sum(0.5 * tf.square(v_sample - vbias_n_cond), reduction_indices=1, name='gauss_visible_term')
    

    Thanks, Tian

    bug 
    opened by tianc2014 1
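
    A shape-compatible form of the binary visible term discussed above, sketched for illustration only (this is not xRBM's code or an official fix): since vbias_n_cond already carries one bias row per sample, the term -sum_i v_i * b_i can be computed with an elementwise product and a row-wise sum, keeping every tensor rank 2 or lower.

    # Illustrative sketch (not xRBM's code): the binary visible term of the
    # CRBM free energy, -sum_i v_i * b_i(cond), computed per sample without
    # the rank mismatch produced by the expand_dims/matmul form quoted above.
    import tensorflow as tf

    num_vis, num_cond = 4, 12                                        # example sizes
    v_sample = tf.placeholder(tf.float32, (None, num_vis), name='v_sample')
    cond     = tf.placeholder(tf.float32, (None, num_cond), name='cond')
    vbias    = tf.Variable(tf.zeros([num_vis]), name='vbias')        # static visible bias
    A        = tf.Variable(tf.zeros([num_cond, num_vis]), name='A')  # condition-to-visible weights

    # condition-dependent visible bias, one row per sample: shape [batch, num_vis]
    vbias_n_cond = vbias + tf.matmul(cond, A)

    # per-sample binary visible term: shape [batch]
    bin_visible_term = -tf.reduce_sum(v_sample * vbias_n_cond, axis=1,
                                      name='bin_visible_term')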