Code implementation of AC-GC: Lossy Activation Compression with Guaranteed Convergence (NeurIPS 2021).

Overview

Code For AC-GC: Lossy Activation Compression with Guaranteed Convergence

This code is intended as supplemental material for a NeurIPS 2021 submission.

DO NOT DISTRIBUTE

Setup

This code is tested on Ubuntu 20.04 with Python 3 and CUDA 10.1. Other CUDA versions can be used by modifying the CuPy version in requirements.txt, provided that CuDNN is installed (see the example after the setup commands).

# Set up environment
python3 -m venv venv
source venv/bin/activate
pip3 install -r requirements.txt
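
For example, to target CUDA 11.0 instead of CUDA 10.1, the CuPy wheel in requirements.txt can be swapped before installing. The exact pin in requirements.txt may differ; cupy-cuda101 and cupy-cuda110 are the standard CuPy wheel names and are assumed here only for illustration.

# Assumed example: swap the CUDA 10.1 CuPy wheel for the CUDA 11.0 one
sed -i 's/cupy-cuda101/cupy-cuda110/' requirements.txt
pip3 install -r requirements.txt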

Training

Configurations are provided for CIFAR10/ResNet50 in the acgc/configs folder.

source venv/bin/activate
cd acgc
./configs/rn50_baseline.sh
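
The other configurations listed under Results are assumed to have matching scripts in acgc/configs; the script names below are inferred from the configuration names, so check the folder for the exact filenames.

# Assumed script names, mirroring the configuration names in the Results table
./configs/rn50_quant_8bit.sh
./configs/rn50_autoquant.sh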

To replicate GridQuantZ results from the paper, you additionally need to:

  • Run quantz with bitwidths of 2, 4, 6, 8, 10, 12, 14, and 16 bits, and run each 5 times
  • Select the result with the lowest bitwidth whose average accuracy is no more than 0.1% below the baseline accuracy (a selection sketch follows this list)
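
A minimal sketch of that selection step, assuming you have already averaged the accuracy of the 5 runs per bitwidth yourself (the accuracy values below are illustrative, not results from this repository):

# Hypothetical averaged accuracies over 5 runs, keyed by bitwidth (illustrative values only)
avg_acc = {2: 90.1, 4: 94.2, 6: 94.7, 8: 94.9, 10: 95.0, 12: 95.10, 14: 95.12, 16: 95.13}
baseline_acc = 95.16  # rn50_baseline accuracy from the Results table below

# GridQuantZ rule: lowest bitwidth whose average accuracy is within 0.1% of the baseline
eligible = [bits for bits, acc in sorted(avg_acc.items()) if acc >= baseline_acc - 0.1]
print(eligible[0] if eligible else 'no bitwidth met the accuracy target')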

Evaluation

Evaluation on the CIFAR10 test dataset is run during training. The 'validation/main/accuracy' entry in report.txt or the log tracks test accuracy throughout training (a parsing sketch follows).
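
For reference, a minimal way to pull that entry out of the training log, assuming the log is a Chainer-style JSON LogReport file (the path below is illustrative):

import json

# Illustrative path; point this at the log produced by your run
with open('results/rn50_baseline/log') as f:
    entries = json.load(f)

# 'validation/main/accuracy' appears in entries written after each evaluation
accs = [e['validation/main/accuracy'] for e in entries if 'validation/main/accuracy' in e]
print('best test accuracy: {:.4f}'.format(max(accs)))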

Pre-trained Models

You can download pre-trained snapshots for each config from acgc/configs.

These snapshots can be loaded by resuming training from them:

python3 train_cifar_act_error.py ... --resume <snapshot_file>

Results

We have added reports and logs for each configuration under acgc/results. Each log corresponds to one of the pre-trained snapshots above.

A summarized output from these runs is:

Configuration     Best Test Acc  Average Bits  Epochs
rn50_baseline     95.16 %        N/A           300
rn50_quant_8bit   94.90 %        8.000         300
rn50_quantz_8bit  94.82 %        7.426         300
rn50_autoquant    94.73 %        7.305         300
rn50_autoquantz   94.91 %        6.694         300

Code Layout

Argument parsing and model initialization are handled in acgc/cifar.py and acgc/train_cifar_act_error.py.

Modifications to the training loop are in acgc/common/compression/compressed_momentum_sgd.py.

The baseline fixpoint implementation is in acgc/common/compression/quant.py.
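
For orientation only, a generic per-tensor fixed-point quantize/dequantize routine looks roughly like the sketch below; this is an illustrative stand-in, not the code in quant.py:

import numpy as np

def fixpoint_quantize(x, bits):
    """Illustrative per-tensor fixed-point compression: scale to the signed
    integer range for the given bitwidth, round, then dequantize to float."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(x)) / qmax + 1e-12  # avoid division by zero on all-zero tensors
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q * scale  # lossy reconstruction of the activations

# Example: compress an activation tensor to 8 bits
acts = np.random.randn(4, 16).astype(np.float32)
acts_hat = fixpoint_quantize(acts, bits=8)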

The AutoQuant implementation and error bound calculation are in acgc/common/compression/autoquant.py.
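
As a rough illustration of the idea (not the paper's bound and not the autoquant.py code), automatic bitwidth selection can be framed as choosing the smallest bitwidth whose quantization error stays under a per-tensor error budget. This sketch reuses fixpoint_quantize from the example above:

import numpy as np

def pick_bitwidth(x, error_budget, candidate_bits=(2, 4, 6, 8, 10, 12, 14, 16)):
    """Illustrative only: return the smallest bitwidth whose mean squared
    quantization error for this tensor is within the given budget."""
    for bits in candidate_bits:
        mse = np.mean((x - fixpoint_quantize(x, bits)) ** 2)
        if mse <= error_budget:
            return bits
    return candidate_bits[-1]

# Example: pick a bitwidth for one activation tensor under a fixed error budget
acts = np.random.randn(4, 16).astype(np.float32)
print(pick_bitwidth(acts, error_budget=1e-4))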

Gradient and parameter estimation are performed in acgc/common/compression/grad_approx.py.
