QTool: A Low-bit Quantization Toolbox for Deep Neural Networks in Computer Vision

Overview


This project provides a rich set of quantization strategies (quantization algorithms, training schedules, and empirical tricks) for converting deep neural networks into low-bit counterparts. It acts as a flexible plugin that benefits a variety of computer vision tasks, such as image classification, dense detection and segmentation, text parsing, and super resolution. Pretrained models are provided to demonstrate that the code achieves strong quantization performance.
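As a rough illustration of what the quantization algorithms offered here (for example DoReFa-style uniform quantization) do during training, the sketch below fake-quantizes a weight tensor to 2 bits with a straight-through estimator. It is a generic example, not QTool's actual API; the function names and the bit-width are chosen only for illustration.

    import torch

    def uniform_fake_quant(x, bits=2):
        # Map x in [0, 1] onto 2**bits - 1 uniform levels; the straight-through
        # estimator lets gradients pass through the rounding unchanged.
        levels = 2 ** bits - 1
        x_q = torch.round(x * levels) / levels
        return x + (x_q - x).detach()

    def dorefa_quantize_weights(w, bits=2):
        # DoReFa-style weight quantization: squash weights into [0, 1] with tanh,
        # quantize uniformly, then map back to [-1, 1].
        w_t = torch.tanh(w)
        w_01 = w_t / (2 * w_t.abs().max()) + 0.5
        return 2 * uniform_fake_quant(w_01, bits) - 1

    if __name__ == "__main__":
        w = torch.randn(4, 4, requires_grad=True)
        w_q = dorefa_quantize_weights(w, bits=2)
        w_q.sum().backward()          # gradients reach w through the STE
        print(w_q.detach().unique())  # only a handful of discrete weight values

During quantization-aware training the full-precision weights are kept and updated, while the quantized copies are used in the forward pass; exporting a truly low-bit model then only requires storing the discrete values.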

Instructions for different tasks

Update History

  • 2020.12.12 Text parsing
  • 2020.11.01 Super Resolution
  • 2020.07.08 Instance Segmentation
  • 2020.07.08 Object Detection
  • 2020.06.23 Add classification quantization

Citation

Please cite the following work if you find the project helpful.

@misc{chen2020qtool,
  author = {Peng Chen and Bohan Zhuang and Jing Liu and Chunlei Liu},
  title = {{QTool: A Low-bit Quantization Toolbox for Deep Neural Networks in Computer Vision}},
  year = {2020},
  howpublished = {\url{https://github.com/MonashAI/QTool/}},
  note = {Accessed: [Insert date here]}
}

This project includes the implementation of some of our works:

@inproceedings{chen2021aqd,
  title={{AQD}: Towards accurate quantized object detection},
  author={Chen, Peng and Liu, Jing and Zhuang, Bohan and Tan, Mingkui and Shen, Chunhua},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={104--113},
  year={2021}
}

@inproceedings{chen2021fatnn,
  title={FATNN: Fast and Accurate Ternary Neural Networks},
  author={Chen, Peng and Zhuang, Bohan and Shen, Chunhua},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  year={2021}
}

Please also cite the corresponding publications when you use specific algorithms from this toolbox.

We are integrating more of our work and other great studies into this project.

Contribute

Pull requests are appreciated, and suggestions are welcome for discussion.

License

For academic use, this project is licensed under the 2-clause BSD License. See LICENSE file. For commercial use, please contact Chunhua Shen and Peng Chen.

Comments
  • DAIA

    Thanks for your great paper on SR quantization. I have one question about the method:

    Is there any difference between DAIA and LSQ other than the initial warm-up used to initialize the step size?

    Or did you adapt LSQ specifically for the SR task, and is that how you arrived at Distribution-Aware Interval Adaptation?

    opened by qiulinzhang 3
  • EDSR-PyTorch can not be found

    I want to use the super-resolution part. When I download the quantized version of the EDSR-PyTorch project, I get: fatal: repository 'https://github.com/blueardour/EDSR-PyTorch/' not found

    opened by www132409011 2
  • Biases and BatchNorm not quantized as described in "AQD: Towards Accurate Quantized Object Detection"

    Rebasing the repo:

    Import issues from the old URL:

    ShechemKS:

    After reading the paper "AQD: Towards Accurate Quantized Object Detection", I have been using this repo to quantize an object detector. After reading the code, I realized that the convolution biases (where present) and the batch normalization layers are not quantized. However, the paper states

    We propose an Accurate Quantized object Detection (AQD) method to fully get rid of floating-point computation in each layer of the network, including convolutional layers, normalization layers and skip connections.

    Specifically, I cannot find the code that corresponds to the equations given in section 3.2.2 of the paper. Am I missing something? How does that work in the code? Am I not using the correct keywords? (I have used the default ones provided: keyword: ["debug", "dorefa", "lsq"]). The biases don't seem to be quantized either.

    Additionally, in the default configurations, the weights are quantized using the adaptive mode var-mean (i.e. the weights are normalized before being quantized, to my understanding). Is this also part of the method adopted in the paper, or should I disable this if I am to replicate those results?

    opened by blueardour 2
  • NameError: name 'task_cls' is not defined

    When I run the command "python tools.py --keyword update,raw --mf weights/det-resnet18/mf.txt --mt weights/det-resnet18/mt.txt --old weights/pytorch-resnet18/resnet18-5c106cde.pth --new weights", I encounter the NameError shown in the screenshot. How can I solve it?

    opened by smurf-1119 1
  • loss become infinite while training quant models

    Hi, when I try to train a quantized model using the config detectron2/configs/COCO-Detection/retinanet_R_18_FPN_1x-Full-SyncBN-lsq-2bit.yaml, the loss becomes NaN at iteration 390:

    -- Process 0 terminated with the following error:
    Traceback (most recent call last):
      File "/home/zhangjinhe/anaconda3/envs/torch/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
        fn(i, *args)
      File "/home/zhangjinhe/QTools/git/detectron2/detectron2/engine/launch.py", line 125, in _distributed_worker
        main_func(*args)
      File "/home/zhangjinhe/QTools/git/detectron2/tools/train_net.py", line 154, in main
        return trainer.train()
      File "/home/zhangjinhe/QTools/git/detectron2/detectron2/engine/defaults.py", line 489, in train
        super().train(self.start_iter, self.max_iter)
      File "/home/zhangjinhe/QTools/git/detectron2/detectron2/engine/train_loop.py", line 149, in train
        self.run_step()
      File "/home/zhangjinhe/QTools/git/detectron2/detectron2/engine/defaults.py", line 499, in run_step
        self._trainer.run_step()
      File "/home/zhangjinhe/QTools/git/detectron2/detectron2/engine/train_loop.py", line 289, in run_step
        self._write_metrics(loss_dict, data_time)
      File "/home/zhangjinhe/QTools/git/detectron2/detectron2/engine/train_loop.py", line 332, in _write_metrics
        f"Loss became infinite or NaN at iteration={self.iter}!\n"
    FloatingPointError: Loss became infinite or NaN at iteration=390!
    

    The command I use is python tools/train_net.py --config-file configs/COCO-Detection/retinanet_R_18_FPN_1x-Full-SyncBN-lsq-2bit.yaml --num-gpus 4 MODEL.WEIGHTS output/coco-detection/retinanet_R_18_FPN_1x-Full_BN/model_final.pth

    I changed the input_size from (640, 672, 704, 736, 768, 800) to (800,), and the checkpoint file is the result of another experiment using the config retinanet_R_18_FPN_1x-Full-BN.yaml.

    Any ideas why?

    opened by RaidenE1 1
  • AQD

    I think I found a bug in model-quantization/task_cls.py: you should add import utils, or it will cause a NameError when I try to import my own pretrained model.

    opened by RaidenE1 1
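Several of the issues above refer to LSQ (the DAIA question and the default keyword list ["debug", "dorefa", "lsq"]). For readers unfamiliar with it, the following is a minimal sketch of a learned-step-size quantizer whose step size is initialized from the tensor statistics, which is the warm-up the first issue asks about. It follows the published LSQ recipe rather than QTool's source, and every name in it is illustrative.

    import math
    import torch
    import torch.nn as nn

    def grad_scale(x, scale):
        # Keep the forward value of x but scale its gradient by `scale`.
        y = x * scale
        return (x - y).detach() + y

    def round_pass(x):
        # Round in the forward pass, identity gradient in the backward pass (STE).
        return (x.round() - x).detach() + x

    class LSQQuantizer(nn.Module):
        # Learned Step Size Quantization (Esser et al., ICLR 2020) for a signed tensor.
        def __init__(self, bits=2):
            super().__init__()
            self.q_n = -(2 ** (bits - 1))      # e.g. -2 for 2 bits
            self.q_p = 2 ** (bits - 1) - 1     # e.g. +1 for 2 bits
            self.step = nn.Parameter(torch.ones(1))
            self.initialized = False

        def forward(self, x):
            if not self.initialized:
                # Warm-up initialization from the LSQ paper: s = 2*mean(|x|)/sqrt(Q_p).
                with torch.no_grad():
                    self.step.copy_(2 * x.abs().mean() / math.sqrt(self.q_p))
                self.initialized = True
            # Scale down the step-size gradient, as recommended in the LSQ paper.
            s = grad_scale(self.step, 1.0 / math.sqrt(x.numel() * self.q_p))
            x_q = round_pass(torch.clamp(x / s, self.q_n, self.q_p))
            return x_q * s

    quantizer = LSQQuantizer(bits=2)
    w = torch.randn(64, 3, 3, 3)
    w_q = quantizer(w)   # every entry is an integer multiple of the learned step size

Whether DAIA adds more than this warm-up (for example, distribution-dependent interval adaptation specific to super resolution) is exactly what the first issue asks, and is best answered by the corresponding paper.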
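The AQD issue above also asks about the "var-mean" adaptive mode. Assuming it simply standardizes the weights before they are quantized, which is how the issue itself describes it, the preprocessing step would look roughly like the sketch below; the function name and per-tensor granularity are assumptions, not QTool's implementation.

    import torch

    def var_mean_normalize(w, eps=1e-5):
        # Standardize the weight tensor to zero mean and unit variance before it
        # is handed to a quantizer (an assumed reading of the "var-mean" mode).
        var, mean = torch.var_mean(w)
        return (w - mean) / torch.sqrt(var + eps)

A quantizer such as the LSQ sketch above would then operate on the normalized weights; whether the published AQD results rely on this normalization is the open question raised in that issue.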
Owner
Monash Green AI Lab