EdMIPS: Rethinking Differentiable Search for Mixed-Precision Neural Networks

by Zhaowei Cai and Nuno Vasconcelos.

This implementation is written by Zhaowei Cai at UC San Diego.

Introduction

EdMIPS is an efficient algorithm that searches for an optimal mixed-precision neural network directly on ImageNet, without a proxy task, under a given computation budget. It can be applied to many popular network architectures, including ResNet, GoogLeNet, and Inception-V3. More details can be found in the paper.
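The core idea, sketched minimally below, is to relax the discrete bit-width choice into softmax weights over learnable architecture parameters, so the bit allocation can be learned by gradient descent; EdMIPS composes the expected weight and the expected activation and then runs a single convolution per layer, rather than one convolution per candidate branch. The module name, toy quantizer, and candidate bit sets here are illustrative assumptions, not the repository's actual code (see models/quant_module.py for the real quantizers):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MixedQuantConv2d(nn.Module):
        """Sketch of an EdMIPS-style composite convolution. The discrete
        bit-width choice is relaxed into softmax weights over learnable
        architecture parameters ("alphas"), so it is differentiable.
        Names and the toy quantizer are illustrative, not the repo's code."""

        def __init__(self, in_ch, out_ch, kernel_size,
                     weight_bits=(1, 2, 3, 4), act_bits=(2, 3, 4), **kw):
            super().__init__()
            self.weight_bits, self.act_bits = weight_bits, act_bits
            self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, **kw)
            # One architecture logit per candidate bit width.
            self.alpha_w = nn.Parameter(torch.zeros(len(weight_bits)))
            self.alpha_a = nn.Parameter(torch.zeros(len(act_bits)))

        @staticmethod
        def quant(x, bits, signed):
            # Toy uniform quantizer with a straight-through estimator; the
            # paper uses HWGQ-style quantizers with precomputed steps instead.
            lo = -1.0 if signed else 0.0
            scale = 2 ** bits - 1
            xq = torch.round((x.clamp(lo, 1.0) - lo) * scale) / scale + lo
            return x + (xq - x).detach()

        def forward(self, x):
            pw = F.softmax(self.alpha_w, dim=0)
            pa = F.softmax(self.alpha_a, dim=0)
            # Compose the *expected* weight and the *expected* activation,
            # then run one convolution, instead of one convolution per
            # (weight-bit, activation-bit) pair as in naive DARTS-style search.
            w_eff = sum(p * self.quant(self.conv.weight, b, signed=True)
                        for p, b in zip(pw, self.weight_bits))
            x_eff = sum(p * self.quant(x, b, signed=False)
                        for p, b in zip(pa, self.act_bits))
            return F.conv2d(x_eff, w_eff, self.conv.bias,
                            self.conv.stride, self.conv.padding)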

Citation

If you use our code/model/data, please cite our paper:

@inproceedings{cai20edmips,
  author = {Zhaowei Cai and Nuno Vasconcelos},
  title = {Rethinking Differentiable Search for Mixed-Precision Neural Networks},
  booktitle = {CVPR},
  year = {2020}
}

Installation

  1. Install PyTorch and prepare the ImageNet dataset, following the official PyTorch ImageNet training code.

  2. Clone the EdMIPS repository. We'll call the directory that you cloned EdMIPS into EdMIPS_ROOT:

    git clone https://github.com/zhaoweicai/EdMIPS.git
    cd EdMIPS_ROOT/

Searching the Mixed-precision Network with EdMIPS

You can now start the EdMIPS search. Take ResNet-18 as an example:

python search.py \
  -a mixres18_w1234a234 --epochs 25 --step-epoch 10 --lr 0.1 --lra 0.01 --cd 0.00335 -j 16 \
  [your imagenet-folder with train and val folders]

Other network architectures are also available, including ResNet-50, GoogLeNet, and Inception-V3.
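Under the hood, the flags map to two learning rates and a complexity penalty: --lr drives the weight update, --lra drives the architecture-parameter update, and --cd scales an expected-complexity term added to the cross-entropy loss. A hedged sketch of what one search step plausibly does (the parameter split, the model.complexity() helper, and the optimizer choices are assumptions; see search.py for the actual logic):

    import torch
    import torch.nn.functional as F

    # model and train_loader are assumed to come from the surrounding script.
    weight_params = [p for n, p in model.named_parameters() if 'alpha' not in n]
    arch_params = [p for n, p in model.named_parameters() if 'alpha' in n]

    opt_w = torch.optim.SGD(weight_params, lr=0.1, momentum=0.9)  # --lr
    opt_a = torch.optim.Adam(arch_params, lr=0.01)                # --lra
    cd = 0.00335                                                  # --cd

    for images, targets in train_loader:
        logits = model(images)
        # model.complexity() is a hypothetical helper returning the expected
        # BitOps under the current softmax distribution over bit widths.
        loss = F.cross_entropy(logits, targets) + cd * model.complexity()
        opt_w.zero_grad()
        opt_a.zero_grad()
        loss.backward()
        opt_w.step()
        opt_a.step()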

Training the Searched Mixed-precision Network

After the EdMIPS search is finished, you will have the checkpoint arch_checkpoint.pth.tar. You can then train the classification model with the learned bit allocation:

python main.py \
  -a quantres18_cfg --epochs 95 --step-epoch 30 -j 16 \
  --ac arch_checkpoint.pth.tar \
  [your imagenet-folder with train and val folders]
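If you want to inspect the learned bit allocation before training, something along these lines should work (the 'alpha' naming is an assumption; print the checkpoint's keys to find the actual architecture-parameter names):

    import torch

    ckpt = torch.load('arch_checkpoint.pth.tar', map_location='cpu')
    state = ckpt.get('state_dict', ckpt)

    # Assumes architecture parameters carry 'alpha' in their names; the argmax
    # of each softmax picks the winning candidate bit width for that layer.
    for name, tensor in state.items():
        if 'alpha' in name:
            probs = torch.softmax(tensor.flatten(), dim=0)
            print(f'{name}: probs={[round(p, 3) for p in probs.tolist()]}, '
                  f'choice={int(probs.argmax())}')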

Results

The results are as follows:

network      | precision | bits  | --cd    | top-1/5 acc. (%) | model
-------------|-----------|-------|---------|------------------|---------
ResNet-18    | uniform   | 2.0   |         | 65.1/86.2        | download
ResNet-18    | mixed     | 1.992 | 0.00335 | 65.9/86.5        | download
ResNet-50    | uniform   | 2.0   |         | 70.6/89.8        | download
ResNet-50    | mixed     | 2.007 | 0.00015 | 72.1/90.6        | download
GoogLeNet    | uniform   | 2.0   |         | 64.8/86.3        | download
GoogLeNet    | mixed     | 1.994 | 0.00045 | 67.8/88.0        | download
Inception-V3 | uniform   | 2.0   |         | 71.0/89.9        | download
Inception-V3 | mixed     | 1.982 | 0.0015  | 72.4/90.7        | download

Disclaimer

  1. The training of EdMIPS has some variance. Tune --cd a little to get the bit allocation you want.

  2. BitOps are counted only on the quantized layers and are normalized to the bit space, as in the table above (see the sketch after this list).

  3. Since some changes have been made since the paper submission, you may see slightly lower performance (0.1~0.2 points) than reported in the paper.
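For reference, one plausible reading of that normalization (an assumption about the accounting, not a statement of the repo's exact code): a layer's BitOps are its MACs times weight bits times activation bits, and the "bits" column is the square root of total BitOps over total MACs, so that a uniform 2-bit network scores exactly 2.0:

    import math

    def normalized_bits(layers):
        """layers: (macs, weight_bits, act_bits) tuples for quantized layers.
        Returns the uniform bit width with the same total BitOps (illustrative)."""
        bitops = sum(macs * wb * ab for macs, wb, ab in layers)
        total_macs = sum(macs for macs, _, _ in layers)
        return math.sqrt(bitops / total_macs)

    # A uniform 2-bit network normalizes to exactly 2.0:
    print(normalized_bits([(1.0e9, 2, 2), (5.0e8, 2, 2)]))  # -> 2.0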

If you encounter any issues when using our code or models, please let me know.
