Learning recognition/segmentation models without end-to-end training. 40-60% less GPU memory footprint. Same training time. Better performance.

Overview

InfoPro-Pytorch

The Information Propagation algorithm for training deep networks with local supervision.

Update on 2021/01/25: Released pre-trained models on ImageNet and Cityscapes.

Update on 2021/01/24: Released code for image classification on CIFAR/SVHN/STL10/ImageNet and semantic segmentation on Cityscapes.

Introduction

We propose Information Propagation (InfoPro), a locally supervised deep learning algorithm, from an information-theoretic perspective. By splitting the whole deep network into multiple local modules and training each of them with a local InfoPro loss, we reduce the GPU memory footprint by 40-60% without introducing notable extra computational cost or training time, while moderately improving performance. A minimal sketch of this training scheme is given below.
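To illustrate where the memory saving comes from, here is a minimal sketch of locally supervised training with detached module boundaries. The two-module split, layer sizes, and plain cross-entropy local loss are illustrative stand-ins for the actual InfoPro loss; the key point is that h.detach() stops gradients at each boundary, so no module has to keep another module's activations alive for a full end-to-end backward pass.

import torch
import torch.nn as nn

# Two local modules, each with its own auxiliary head and optimizer
# (hypothetical sizes; the real InfoPro loss replaces the plain
# cross-entropy used here).
modules = nn.ModuleList([
    nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU()),
    nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU()),
])
heads = nn.ModuleList([
    nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10)),
    nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 10)),
])
opts = [torch.optim.SGD(list(m.parameters()) + list(g.parameters()), lr=0.1)
        for m, g in zip(modules, heads)]
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 3, 32, 32)             # dummy batch
y = torch.randint(0, 10, (8,))

h = x
for module, head, opt in zip(modules, heads, opts):
    h = module(h)                          # forward through this module only
    loss = criterion(head(h), y)           # local supervision signal
    opt.zero_grad()
    loss.backward()                        # gradients stay inside this module
    opt.step()
    h = h.detach()                         # cut the graph before the next module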

Citation

If you find this work valuable or use our code in your own research, please consider citing us with the following BibTeX entry:

@inproceedings{wang2021revisiting,
        title = {Revisiting Locally Supervised Learning: an Alternative to End-to-end Training},
       author = {Yulin Wang and Zanlin Ni and Shiji Song and Le Yang and Gao Huang},
    booktitle = {International Conference on Learning Representations (ICLR)},
         year = {2021},
          url = {https://openreview.net/forum?id=fAbkE6ant2}
}

Get Started

Please refer to the folders Experiments on CIFAR-SVHN-STL10, Experiments on ImageNet, and Semantic segmentation for the corresponding documentation.

Results

  • CIFAR & STL-10

  • ImageNet

  • Semantic Segmentation

GPU Memory Cost

In the paper, we report the minimum GPU memory required to run the InfoPro* algorithm with torch.backends.cudnn.benchmark=True (enabled for practical acceleration). Note that this figure can differ, sometimes substantially, from the usage printed by nvidia-smi. A sketch of how the two numbers can be compared is given below.
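As a rough illustration of why the two readings differ (an assumed measurement approach, not necessarily the paper's exact protocol): PyTorch's allocator statistics report the peak memory actually requested by tensors, while nvidia-smi additionally counts the CUDA context and the caching allocator's reserved-but-unused blocks.

import torch

torch.backends.cudnn.benchmark = True     # as used for the reported numbers

torch.cuda.reset_peak_memory_stats()
# ... run one or more training iterations here ...
peak_mb = torch.cuda.max_memory_allocated() / 1024 ** 2
reserved_mb = torch.cuda.max_memory_reserved() / 1024 ** 2
# nvidia-smi will typically show roughly reserved_mb plus the CUDA context
# overhead, which is why it can exceed peak_mb by a large margin.
print(f"peak allocated: {peak_mb:.1f} MiB, peak reserved: {reserved_mb:.1f} MiB")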

Contact

This repo is a re-implementation of our original code. If you have any questions, please feel free to contact the authors. Yulin Wang: [email protected].

Acknowledgments

Our semantic segmentation code is based on MMSegmentation. We highly appreciate their awesome work!

Comments
  • CVE-2007-4559 Patch

    Patching CVE-2007-4559

    Hi, we are security researchers from the Advanced Research Center at Trellix. We have begun a campaign to patch a widespread bug named CVE-2007-4559. CVE-2007-4559 is a 15-year-old bug in the Python tarfile package. By using extract() or extractall() on a tarfile object without sanitizing input, a maliciously crafted .tar file could perform a directory path traversal attack. We found at least one unsanitized extractall() in your codebase and are providing a patch for you via pull request. The patch essentially checks whether all tarfile members will be extracted safely and throws an exception otherwise (a minimal sketch of such a check is shown after this comment). We encourage you to use this patch or your own solution to secure against CVE-2007-4559. Further technical information about the vulnerability can be found in this blog post.

    If you have further questions, you may contact us through this project's lead researcher, Kasimir Schulz.

    opened by TrellixVulnTeam 0
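    The kind of check described above can be sketched as follows (illustrative code, not the actual patch submitted in the pull request): resolve every member's destination and refuse to extract if any path escapes the target directory.

    import os
    import tarfile

    def safe_extractall(tar: tarfile.TarFile, path: str = ".") -> None:
        # Verify every member resolves inside the target directory before
        # extracting, so a crafted "../" name cannot escape it.
        base = os.path.realpath(path)
        for member in tar.getmembers():
            target = os.path.realpath(os.path.join(path, member.name))
            if os.path.commonpath([base, target]) != base:
                raise RuntimeError(f"blocked path traversal: {member.name}")
        tar.extractall(path)

    with tarfile.open("archive.tar") as tar:   # hypothetical archive name
        safe_extractall(tar, "output_dir")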
  • Question about mutual information estimation

    Thanks for your awesome work! I'm very interested in the mutual information estimation used in your paper. According to Appendix G and your finding (Figure 6), you train an auxiliary classifier to estimate I(h, y), and end-to-end supervised training retains all task-relevant information. From my perspective, this suggests that we don't need to build a classifier upon the final feature map, and that deploying the classifier (trained on feature maps from many layers) on the first feature map is enough. I'm not sure I understand this correctly. Would you help me clarify this? (A sketch of this estimator is shown after this comment.)

    opened by Jason-cs18 0
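    For readers unfamiliar with this estimator, the standard variational lower bound behind it can be sketched as follows (illustrative shapes and training loop, not the repo's exact code): I(h, y) = H(y) - H(y|h), and the cross-entropy of a classifier q(y|h) trained on frozen features h upper-bounds H(y|h), yielding a lower bound on I(h, y).

    import math
    import torch
    import torch.nn as nn

    num_classes = 10
    h = torch.randn(256, 64, 8, 8)            # frozen features from some layer
    y = torch.randint(0, num_classes, (256,))

    # Auxiliary classifier q(y|h) trained on the frozen features.
    aux_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                             nn.Linear(64, num_classes))
    opt = torch.optim.Adam(aux_head.parameters(), lr=1e-3)
    for _ in range(200):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(aux_head(h), y)
        loss.backward()
        opt.step()

    with torch.no_grad():
        ce = nn.functional.cross_entropy(aux_head(h), y)   # estimates H(y|h), in nats
    mi_lower_bound = math.log(num_classes) - ce.item()     # H(y) = log K for uniform labels
    print(f"I(h, y) >= {mi_lower_bound:.3f} nats")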
  • why zero_grad() before train iter

    Hi. Thanks for the great idea and for sharing the code. I have a question: you moved runner.optimizer.zero_grad() from after_train_iter to before_train_iter; see https://github.com/blackfeather-wang/InfoPro-Pytorch/blob/a38362a2b59949a82b7451af7b95dc0b8da31a0b/Semantic%20segmentation/mmsegmentation-master/mmseg/apis/train.py#L26

    I was wondering what this change does and how it affects training and model performance. I adapted your idea to a UNet for medical segmentation but missed the aforementioned change, yet training still converged as expected. Now that I have noticed the change, I would like to know its intended effect. Thank you again. (See the sketch after this list.)

    opened by baibaidj 3
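For a standard training loop with one backward pass per iteration, the two placements discussed in the last comment are equivalent: gradients only need to be cleared before each new backward pass. A generic sketch (not mmseg's actual hook machinery) is shown below; zeroing before the iteration makes that invariant explicit and avoids accumulating stale gradients if an iteration ever skips optimizer.step().

import torch

model = torch.nn.Linear(4, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.MSELoss()
x, y = torch.randn(8, 4), torch.randn(8, 2)

for _ in range(3):
    opt.zero_grad()                  # "before_train_iter" placement
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    # Equivalently, opt.zero_grad() could go here ("after_train_iter"),
    # as long as every backward pass sees freshly cleared gradients.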