Degree-Quant: Quantization-Aware Training for Graph Neural Networks

Overview

This repo provides a clean re-implementation of the code associated with the paper Degree-Quant: Quantization-Aware Training for Graph Neural Networks. At the time of writing, only the core method and the Reddit-Binary experiments have been ported over. We may add the remaining experiments at a later date; extensive experiment details are, however, supplied in the appendix of the camera-ready paper. For the sake of cleanliness we also do not include the nQAT method in the codebase; see the fairseq repo if you are interested in implementing it yourself.

This code is primarily useful for downstream users who want to quickly experiment with different quantization methods applied to GNNs. You will most likely be interested in the dq folder. For each layer, you can supply a dictionary of functions that, when called, return a quantizer; see reddit_binary/gin.py for an example. This should enable you to plug in your own quantization implementations quickly, without modifying the layers we supply.
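
As a rough illustration of that pattern, here is a minimal sketch of such a dictionary. The quantizer class and the key names below are made up for illustration only; check reddit_binary/gin.py and the dq folder for the exact keys and classes the supplied layers expect.

    import torch
    import torch.nn as nn

    # Illustrative stand-in for a quantizer module; the real implementations live in the dq folder.
    class UniformFakeQuantizer(nn.Module):
        def __init__(self, bits: int = 8):
            super().__init__()
            self.bits = bits

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Simple min/max uniform fake-quantization, purely for illustration.
            qmin, qmax = 0.0, float(2 ** self.bits - 1)
            scale = (x.max() - x.min()).clamp(min=1e-8) / (qmax - qmin)
            zero_point = qmin - x.min() / scale
            q = torch.clamp(torch.round(x / scale + zero_point), qmin, qmax)
            return (q - zero_point) * scale

    # A dictionary of zero-argument factories: each entry, when called, returns a fresh quantizer.
    # The keys here ("weights", "inputs") are placeholders; see reddit_binary/gin.py for the real ones.
    layer_quantizers = {
        "weights": lambda: UniformFakeQuantizer(bits=8),
        "inputs": lambda: UniformFakeQuantizer(bits=8),
    }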

Running the runall_reddit_binary.sh script will launch the quantization experiments for Reddit-Binary. We include some output from running our code on Reddit-Binary in the runs folder.

Dependencies

This code has been tested to work with PyTorch 1.7 and PyTorch Geometric 1.6.3.
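
If you want to confirm that your environment matches, a quick check (assuming both packages import under their usual module names) is:

    import torch
    import torch_geometric

    # This repo has been tested against PyTorch 1.7 and PyTorch Geometric 1.6.3.
    print(torch.__version__, torch_geometric.__version__)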

Improving this Work

This work is by no means complete. Our study merely identified the issues that arise when quantizing GNNs, and it is likely that you can improve upon our methods in several ways:

  1. The runtime of our method is very slow. It was tolerable for producing our results, but ideally the method would be made faster. We supply a --sample_prop flag for the Reddit-Binary experiments that lets you sample tensor entries before running the percentile operation (see the sketch after this list). We make no guarantees, but it does seem to improve runtime with little noticeable change in accuracy.
  2. You may want to consider using learned step sizes; see the paper on this topic by Esser et al. at ICLR 2020 ("Learned Step Size Quantization").
  3. Robust quantization is another approach that might help; these works focus on making the network less sensitive to changes in the quantization parameter choices.
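
To illustrate point 1, the function below sketches the sampling idea behind --sample_prop: estimate the percentile from a random subsample of the tensor rather than from all of its entries. This is a sketch of the idea only, not the exact implementation in dq.

    import torch

    def sampled_percentile(x: torch.Tensor, q: float, sample_prop: float = 0.1) -> torch.Tensor:
        # Estimate the q-th quantile (q in [0, 1]) of x from a random subsample.
        # Trading a little accuracy for speed mirrors what --sample_prop enables.
        flat = x.detach().flatten()
        n = max(1, int(flat.numel() * sample_prop))
        idx = torch.randint(0, flat.numel(), (n,), device=flat.device)
        return torch.quantile(flat[idx], q)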

We expand on all of this in the appendix of the camera-ready paper.

Citing this Work

@inproceedings{
  tailor2021degreequant,
  title={Degree-Quant: Quantization-Aware Training for Graph Neural Networks},
  author={Shyam A. Tailor and Javier Fernandez-Marques and Nicholas D. Lane},
  booktitle={International Conference on Learning Representations},
  year={2021},
  url={https://openreview.net/forum?id=NSBrFgJAHg}
}
Comments
  • Train and Test splits on other datasets

    Hello, I am trying to reproduce the results for the other datasets as well. For the Cora dataset, for example, what train/test split did you use (this dataset is for node classification)? I am not aware of a formal train/test split; did you take some percentage as train and the rest as test? Any clarification about this would help a lot. Thanks!

    opened by AmeenAli 3
  • reproducing results

    Hi, I am trying to reproduce your results on the Reddit-Binary dataset using the 4-bit configuration. I run the following command:

    python reddit_binary/main.py --int4 --gc_per --lr 0.001 --DQ --low 0.1 --change 0.1 --wd 4e-5 --epochs 200 --outdir ${RUN_DIR} --path ${DATA_DIR} | tee int4_dq.txt

    However, the results I get are 86.4, while the paper says this configuration should give 81.3 (Table 3). Am I missing something? Thanks!

    opened by AmeenAli 3
  • Degree-Quant dataset transform bug

    Thank you for releasing the code! I found a bug in how transforms are composed for Degree-Quant when I customized the REDDIT example code for my own dataset:

    https://github.com/camlsys/degree-quant/blob/5f056017bd0f4b1018842fb1a083ccf6c666e0b4/reddit_binary/dataset.py#L71

    Had to change to:

    if dataset.transform is None:
        dataset.transform = dq_transform
    else:
        dataset.transform = T.Compose([dataset.transform, dq_transform])
    

    Otherwise, the transform becomes a list [None, ProbabilisticHighDegreeMask].

    opened by chaitjo 2
  • About inference time on GPU

    Hello, your paper seems to report the INT8 model's running time on the GPU in Table 4, and the time is reduced compared to FP32. Does this code show how to obtain that time?

    I have tested the "val_loss time" in this code, but the FP32 model seems to be faster than the INT8 model.

    torch.cuda.synchronize()
    s = time.time()
    val_loss = eval_loss(model, val_loader)
    torch.cuda.synchronize()
    e = time.time()
    print(e-s)
    

    Could you tell me how I should measure the time if my method is wrong?

    opened by jmliu206 1
  • Training GCN with Cora Dataset

    Thank you so much for posting the code!

    I am just facing a lot of issues trying to extend your code to GCNs. I created a gcn.py file similar to gin.py, passed the in and out channels for the GCN accordingly, and adjusted the dataset. I am now getting an error regarding the keyword "weights_low", as follows:

    Namespace(DQ=True, batch_size=128, change=0.1, epochs=200, fp32=False, gc_abs=False, gc_mom=False, gc_per=True, hidden=64, int4=False, int8=True, low=0.0, lr=0.005, lr_decay_factor=0.5, lr_decay_step_size=50, noise=1.0, num_layers=5, outdir='./run', path='./data', sample_prop=None, ste_abs=False, ste_mom=False, ste_per=False, wd=0.0002)
    Generating ProbabilisticHighDegreeMask: {'prob_mask_low': 0.0, 'prob_mask_change': 0.1}
    Traceback (most recent call last):
      File "degree/reddit_binary/main.py", line 109, in <module>
        model = GCN(
      File "/content/degree/reddit_binary/gcn.py", line 134, in __init__
        self.conv1 = gcn_layer(
      File "/content/degree/dq/multi_quant.py", line 247, in __init__
        self.reset_parameters()
      File "/content/degree/dq/multi_quant.py", line 259, in reset_parameters
        self.layer_quantizers[key] = self.layer_quant_fns[key]()
    KeyError: 'weights_low'

    Can you please help with this?

    opened by Salmaafifi98 0
  • Training on ogb-molhiv dataset

    Thanks a lot for posting the code! I'm quite new to GNNs, and I am facing errors when training the GIN model with Degree-Quant support on the OGB-MOLHIV binary classification dataset. The issue arises at the line below: https://github.com/camlsys/degree-quant/blob/257d6bcf25141f522d0dbfdd2ae9fdabb679f7e0/dq/quantization.py#L57

    The error is:

    RuntimeError: result type Float can't be cast to the desired output type Long
    

    The inv_scale and zero_point values I obtained are 15.9375 and -128.0, respectively.

    I have only changed the dataset inside the get_dataset() function. Do I need to consider anything else? I have already fixed the dataset transform bug reported in issue #1.

    Please help me with how to resolve this.

    Thanks!!

    opened by BalaDhinesh 0