PyTorch implementation of the paper Deep Networks from the Principle of Rate Reduction

Overview

Deep Networks from the Principle of Rate Reduction

This repository is the official PyTorch implementation of the paper Deep Networks from the Principle of Rate Reduction (2021) by Kwan Ho Ryan Chan* (UC Berkeley), Yaodong Yu* (UC Berkeley), Chong You* (UC Berkeley), Haozhi Qi (UC Berkeley), John Wright (Columbia), and Yi Ma (UC Berkeley). For the NumPy version of ReduNet, please check out: https://github.com/ryanchankh/redunet_paper

What is ReduNet?

ReduNet is a deep neural network constructed naturally by deriving the gradients of the Maximal Coding Rate Reduction (MCR2) [1] objective. Every layer of this network can be interpreted in terms of its mathematical operations, and the network as a whole is trained in a purely feed-forward manner. In addition, by imposing shift invariance on our network, the convolutional operators can be derived using only the data and the MCR2 objective function, making our network design principled and interpretable.
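
For reference, the MCR2 objective from which the network is derived can be stated compactly as follows (notation follows [1]; see the papers for the precise definitions). For features Z \in \mathbb{R}^{d \times m} and diagonal class-membership matrices \Pi = \{\Pi_j\}_{j=1}^{k},

\Delta R(Z, \Pi, \epsilon) \;=\; \underbrace{\frac{1}{2}\log\det\!\Big(I + \frac{d}{m\epsilon^2}\, Z Z^\top\Big)}_{R(Z,\,\epsilon)} \;-\; \underbrace{\sum_{j=1}^{k} \frac{\operatorname{tr}(\Pi_j)}{2m}\,\log\det\!\Big(I + \frac{d}{\operatorname{tr}(\Pi_j)\,\epsilon^2}\, Z \Pi_j Z^\top\Big)}_{R_c(Z,\,\epsilon \,\mid\, \Pi)}

Each ReduNet layer realizes one normalized gradient-ascent step on this objective, Z_{\ell+1} \propto Z_\ell + \eta\, \partial \Delta R / \partial Z \big|_{Z_\ell}, which is why every weight in a layer has a closed-form, data-dependent interpretation.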


Figure: Weights and operations for one layer of ReduNet

[1] Yu, Yaodong, Kwan Ho Ryan Chan, Chong You, Chaobing Song, and Yi Ma. "Learning Diverse and Discriminative Representations via the Principle of Maximal Coding Rate Reduction." Advances in Neural Information Processing Systems 33 (2020).

Requirements

This codebase is written for Python 3. To install the necessary Python packages, run conda create --name redunet_official --file requirements.txt.

Demo

For a quick demonstration of ReduNet on the 2D or 3D Gaussian case, open one of the notebooks by running one of the two commands:

$ jupyter notebook ./examples/gaussian2d.ipynb
$ jupyter notebook ./examples/gaussian3d.ipynb
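
As a self-contained illustration of the kind of data these demos use (a sketch in the spirit of the 2D demo, not the notebooks' exact code): two classes are sampled near one-dimensional subspaces of R^2, and ReduNet maps them to incoherent subspaces.

import torch

# Sketch of the 2D Gaussian demo's data setup (illustrative, not the notebook's code):
# two classes of points near one-dimensional subspaces of R^2.
torch.manual_seed(0)
n_per_class = 200
directions = torch.tensor([[1.0, 0.0], [0.5, 0.866]])  # illustrative class directions
X, y = [], []
for j, d in enumerate(directions):
    t = torch.randn(n_per_class, 1)            # position along the subspace
    noise = 0.1 * torch.randn(n_per_class, 2)  # small off-subspace perturbation
    X.append(t * d + noise)
    y.append(torch.full((n_per_class,), j))
X, y = torch.cat(X), torch.cat(y)
# The notebooks then build a vector-case ReduNet on (X, y) and visualize how the
# representations are expanded across classes and compressed within each class.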

Core Usage and Design

The design of this repository aims to be easy to use and easy to integrate into your current experimental framework, as long as it uses PyTorch. The ReduNet object inherits from nn.Sequential, and ReduLayers, such as Vector, Fourier1D and Fourier2D, inherit from nn.Module. Loss functions are implemented in loss.py. Architecture and dataset options are located in the load.py file. Data objects and pre-set architectures are located in the folders dataset and architectures. Feel free to add more based on the experiments you want to run. We have provided basic experiment setups, located in train_<type>.py and evaluate_<type>.py, where <type> is the type of experiment. For utility functions, please check out functional.py or utils.py. Feel free to email us if there are any issues or suggestions.
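
To make the above concrete, here is a minimal usage sketch. The import path, the constructor arguments, and the init() call for forward construction are assumptions for illustration only; please check the redunet package and train_forward.py for the actual signatures.

import torch
from redunet import ReduNet, Vector  # import path assumed

# Hypothetical construction of a vector-case ReduNet: since ReduNet inherits from
# nn.Sequential, it is assembled from ReduLayer modules. The arguments below
# (eta, eps, lmbda, num_classes, dimensions) are illustrative, not the exact API.
net = ReduNet(*[
    Vector(eta=0.5, eps=0.1, lmbda=500, num_classes=10, dimensions=784)
    for _ in range(50)
])

X = torch.randn(1000, 784)         # flattened inputs, e.g. mnistvector
y = torch.randint(0, 10, (1000,))  # labels drive the forward construction
Z = net.init(X, y)                 # assumed API: build layer weights forward, no backprop
Z_new = net(torch.randn(8, 784))   # after construction, inference is a plain forward pass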

Example: Forward Construction

To train a ReduNet using forward construction, please check out train_forward.py; for evaluation, please check out evaluate_forward.py. For example, to train a 50-layer ReduNet on MNIST using 1000 samples per class, run:

$ python3 train_forward.py --data mnistvector --arch layers50 --samples 1000

After training, you can evaluate the trained model using evaluate_forward.py by running:

$ python3 evaluate_forward.py --model_dir ./saved_models/forward/mnistvector+layers50/samples1000 

This will evaluate using all available training and testing samples. For more training and testing options, please check out the files train_forward.py and evaluate_forward.py.

Experiments in Paper

For the code used to generate the empirical results reported in our paper, please visit our other repository: https://github.com/ryanchankh/redunet_paper

Reference

For technical details and full experimental results, please check the paper. Please consider citing our work if you find it helpful to your work:

@article{chan2020deep,
  title={Deep networks from the principle of rate reduction},
  author={Chan, Kwan Ho Ryan and Yu, Yaodong and You, Chong and Qi, Haozhi and Wright, John and Ma, Yi},
  journal={arXiv preprint arXiv:2010.14765},
  year={2020}
}

License and Contributing

  • This README is formatted based on paperswithcode.
  • Feel free to post issues via GitHub.

Contact

Please contact [email protected] and [email protected] if you have any questions about the code.

Comments
  • fail to install requirements on Windows 10 or Linux

    Running conda create --name redunet_official --file requirements.txt on Windows 10 or in GitHub Codespaces fails:

    Collecting package metadata (current_repodata.json): done
    Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
    Collecting package metadata (repodata.json): done
    Solving environment: failed
    
    PackagesNotFoundError: The following packages are not available from current channels:
    
      - lz4-c==1.9.2=h79c402e_3
      - pysocks==1.7.1=py37hecd8cb5_0
      - openssl==1.1.1k=h9ed2024_0
      - tornado==6.0.4=py37h1de35cc_1
      - libopus==1.3.1=h1de35cc_0
      - nettle==3.4.1=h3018a27_0
      - brotlipy==0.7.0=py37h9ed2024_1003
      - torchaudio==0.8.0=py37
      - ninja==1.10.1=py37h879752b_0
      - libgfortran==3.0.1=h93005f0_2
      - mkl-service==2.3.0=py37hfbe908c_0
      - libtiff==4.1.0=hcb84e12_1
      - mkl_random==1.1.1=py37h959d312_0
      - pytorch==1.8.0=py3.7_0
      - libuv==1.40.0=haf1e3a3_0
      - opencv-python==4.4.0.44=pypi_0
      - lame==3.100=h1de35cc_0
      - scikit-learn==0.23.2=py37h959d312_0
      - llvm-openmp==10.0.0=h28b9765_0
      - gettext==0.19.8.1=hb0f4f8b_2
      - chardet==4.0.0=py37hecd8cb5_1003
      - intel-openmp==2019.4=233
      - lcms2==2.11=h92f6f08_0
      - bzip2==1.0.8=h1de35cc_0
      - libffi==3.3=hb1e8313_2
      - torchvision==0.9.0=py37_cpu
      - mkl==2019.4=233
      - ca-certificates==2021.1.19=hecd8cb5_1
      - x264==1!157.20191217=h1de35cc_0
      - libedit==3.1.20191231=h1de35cc_1
      - freetype==2.10.4=ha233b18_0
      - libiconv==1.16=h1de35cc_0
      - pillow==8.0.0=py37h1a82f1a_0
      - xz==5.2.5=h1de35cc_0
      - python==3.7.9=h26836e1_0
      - scipy==1.5.2=py37h912ce22_0
      - tk==8.6.10=hb0a8c7a_0
      - gnutls==3.6.5=h91ad68e_1002
      - pandas==1.1.3=py37hb1e8313_0
      - setuptools==50.3.0=py37h0dc7051_1
      - gmp==6.1.2=hb37e062_1
      - appnope==0.1.0=py37_0
      - ncurses==6.2=h0a44026_1
      - zeromq==4.3.3=hb1e8313_3
      - sqlite==3.33.0=hffcf06c_0
      - cffi==1.14.4=py37h2125817_0
      - zstd==1.4.5=h41d2c2f_0
      - numpy==1.19.1=py37h3b9f5b6_0
      - libvpx==1.7.0=h378b8a2_0
      - numpy-base==1.19.1=py37hcfb5961_0
      - zlib==1.2.11=h1de35cc_3
      - readline==8.0=h1de35cc_0
      - ffmpeg==4.2.2=h97e5cf8_0
      - pyzmq==19.0.2=py37hb1e8313_1
      - openh264==2.1.0=hd9629dc_0
      - kiwisolver==1.2.0=py37h04f5b5a_0
      - libpng==1.6.37=ha441bb4_0
      - jpeg==9b=he5867d9_2
      - matplotlib-base==3.3.2=py37h181983e_0
      - certifi==2020.12.5=py37hecd8cb5_0
      - mkl_fft==1.2.0=py37hc64f4ea_0
      - libsodium==1.0.18=h1de35cc_0
      - cryptography==3.3.1=py37hbcfaee0_0
    
    Current channels:
    
      - https://repo.anaconda.com/pkgs/main/linux-64
      - https://repo.anaconda.com/pkgs/main/noarch
      - https://repo.anaconda.com/pkgs/r/linux-64
      - https://repo.anaconda.com/pkgs/r/noarch
      - https://conda.anaconda.org/conda-forge/linux-64
      - https://conda.anaconda.org/conda-forge/noarch
    
    To search for alternate channels that may provide the conda package you're
    looking for, navigate to
    
        https://anaconda.org
    
        and use the search bar at the top of the page.
    opened by dailing57
  • Uncertainty Estimation

    Hi,

    Very impressed by your work. I have been wondering: since ReduNet is a white-box, one should be able to write down the uncertainty of ReduNet's predictions analytically. Say in the test phase I feed an image of half apple, half orange to a ReduNet trained to classify apples and oranges; I should be able to get the prediction uncertainty for free? And in theory I should also be able to track back through every layer to see how the uncertainty propagates, right? Is uncertainty estimation on your roadmap?

    opened by xyp8023
  • Backprop training

    Hi,

    Thank you for publishing your code with the paper. It's very nice work! In Section 5.2 the authors discuss backpropagation training with ReduNet. Is the code for training with backprop published in this repo?

    Thanks, Matt

    opened by themantalope
  • memory will be very large

    If the data is even slightly larger, memory usage becomes very large. Do you have any suggestions for optimization? Any suggestions are welcome, thank you!

    opened by zdx3578