Post-training Quantization for Neural Networks with Provable Guarantees

Authors: Jinjie Zhang, Yixuan Zhou, and Rayan Saab

Overview

This directory contains the code needed to run GPFQ, a post-training neural-network quantization method based on a greedy path-following mechanism. One can also use it to reproduce the experimental results in our paper "Post-training Quantization for Neural Networks with Provable Guarantees". In that paper, we also prove theoretical guarantees for the proposed method: when quantizing a single-layer network, the relative square error essentially decays linearly in the number of weights, i.e., in the level of over-parametrization.
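
For intuition, here is a minimal NumPy sketch of the greedy path-following update at the heart of GPFQ, applied to a single neuron. This is an illustrative simplification with our own naming, not the repo's implementation: each weight is quantized in turn by projecting the accumulated quantization error, plus the current weight's contribution, onto the corresponding column of input features, then rounding to the nearest alphabet element.

import numpy as np

def gpfq_neuron(w, X, alphabet):
    """Greedy path-following quantization of one neuron (illustrative sketch).

    w        -- (N,) float weights of the neuron
    X        -- (m, N) feature matrix; column t holds the inputs that multiply w[t]
    alphabet -- 1-D array of allowed quantized values
    """
    q = np.zeros_like(w)
    u = np.zeros(X.shape[0])      # running quantization error in feature space
    for t in range(w.size):
        Xt = X[:, t]
        # project the accumulated error plus the current contribution onto Xt
        c = Xt @ (u + w[t] * Xt) / (Xt @ Xt)
        # greedily round the projection to the nearest alphabet element
        q[t] = alphabet[np.argmin(np.abs(alphabet - c))]
        u += (w[t] - q[t]) * Xt   # advance the error path
    return q

Quantizing a whole layer amounts to running this update over each of its neurons, with X built from the layer's input features.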

If you make use of this code or our quantization method in your work, please cite the following paper:

 @article{zhang2022posttraining,
     author = {Zhang, Jinjie and Zhou, Yixuan and Saab, Rayan},
     title = {Post-training Quantization for Neural Networks with Provable Guarantees},
     journal = {arXiv preprint arXiv:2201.11113},
     year = {2022}
   }

Note: The code is designed to work primarily with the ImageNet dataset. Due to the size of this dataset, one will likely need more computational resources than a typical local machine provides. Nevertheless, the experiments can be run, for example, on a cloud computing platform such as AWS. For our experiments, we used an m5.8xlarge EC2 instance with 300 GB of disk space.

Installing Dependencies

We assume Python 3.8.0 or newer is installed on the user's machine. In the root directory of this repo, we provide a requirements.txt file for installing the Python libraries used by our code.

To install the necessary dependencies, one can first create and activate a virtual environment:

python3 -m venv .venv
source .venv/bin/activate

The commands above create and activate a new Python virtual environment.

Then one can install the dependencies listed in requirements.txt:

pip3 install -r requirements.txt

This should install all the required dependencies of this project.

Obtaining the ImageNet Dataset

In this project, we use the ImageNet dataset; in particular, the ILSVRC-2012 version.

To obtain the ImageNet dataset, one can submit a request through the ImageNet website (https://image-net.org/).

Once the dataset is obtained, place the .tar files for the training and validation sets under the data/ILSVRC2012 directory of this repo.

Then use the following procedure to extract the ImageNet dataset:

# Extract the training data into per-class subfolders:
mkdir -p train && mv ILSVRC2012_img_train.tar train/ && cd train
tar -xvf ILSVRC2012_img_train.tar && rm -f ILSVRC2012_img_train.tar
find . -name "*.tar" | while read NAME ; do mkdir -p "${NAME%.tar}"; tar -xvf "${NAME}" -C "${NAME%.tar}"; rm -f "${NAME}"; done
cd ..
# Extract the validation data and move images to subfolders:
mkdir -p val && mv ILSVRC2012_img_val.tar val/ && cd val
tar -xvf ILSVRC2012_img_val.tar && rm -f ILSVRC2012_img_val.tar
# Sort the validation images into per-class subfolders, e.g. with the
# commonly used valprep.sh helper script.
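
Assuming the commands above are run from data/ILSVRC2012, the dataset should end up laid out roughly as follows, with one subfolder per class (the synset folder name below is just an example):

data/ILSVRC2012/train/n01440764/*.JPEG
data/ILSVRC2012/val/n01440764/*.JPEG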

Running Experiments

The implementation of the modified GPFQ in our paper is contained in quantization_scripts. Additionally, adhoc_quantization_scripts and retraining_scripts provide extra experiments; both are variants of the framework in quantization_scripts. adhoc_quantization_scripts contains heuristic modifications used to further improve the performance of GPFQ, such as bias correction (sketched below), mixed precision, and leaving the last layer unquantized. retraining_scripts implements a quantization-aware training strategy that retrains the neural network after each layer is quantized.
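
To give a flavor of one of these heuristics, here is a minimal PyTorch sketch of bias correction under our own (hypothetical) naming; it is not the repo's implementation. After a layer is quantized, its bias is shifted by the mean output discrepancy on a calibration batch, so that the quantized layer matches the full-precision layer on average:

import torch

@torch.no_grad()
def correct_bias(layer_fp, layer_q, calib_batch):
    """Shift layer_q's bias by the mean output error on a calibration batch."""
    err = layer_fp(calib_batch) - layer_q(calib_batch)
    # average over the batch (and any spatial) dimensions, keeping one
    # correction per output channel (dimension 1)
    reduce_dims = [d for d in range(err.dim()) if d != 1]
    layer_q.bias += err.mean(dim=reduce_dims)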

In this section, we give guidance on running the code in quantization_scripts; the other two counterparts, adhoc_quantization_scripts and retraining_scripts, are implemented very similarly.

  1. Before getting started, go to the root directory of the repo and run mkdir models to create a directory in which we will store the quantized models.

  2. The entry point of the project is quantization_scripts/quantize.py. Near the top of the file there is a section for setting hyperparameters, for example, the model_name parameter, the number of bits and the batch size used for quantization, the scalar of the alphabets, the probability for subsampling in CNNs, etc. (see the hypothetical sketch after this list). Note that model_name should match the model you are quantizing. After you have selected a model_name, and assuming you are still in the root directory of this repo, run mkdir models/{model_name}, where {model_name} is the Python string you provided for the model_name parameter in quantize.py. If the directory already exists, you can skip this step.

  3. Then navigate to the logs directory and run python3 init_logs.py. This will prepare a log file which is used to store the results of the experiment.

  4. Finally, navigate to the quantization_scripts directory and run python3 quantize.py to start the experiment.
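
For orientation, the following is a hypothetical sketch of what the hyperparameter section in quantization_scripts/quantize.py mentioned in step 2 might look like; the variable names and values here are illustrative guesses, and the actual file may differ:

model_name = 'resnet18'  # model to quantize; must match the models/{model_name} directory
bits = 4                 # number of bits used for the quantization alphabet
batch_size = 128         # batch size used during quantization
scalar = 1.16            # scalar controlling the radius of the alphabet
probability = 0.25       # subsampling probability for CNN layers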
