FisherPruning-Pytorch

Overview

An implementation of "Group Fisher Pruning for Practical Network Compression" based on PyTorch and mmcv.
Main Functions

  • Pruning for fully convolutional structures, such as one-stage detectors (ported from the official code);

  • Pruning for networks that combine convolutional and fully-connected layers, such as Faster R-CNN and ResNet;

  • Pruning for networks that involve group convolutions, such as ResNeXt and RegNet.

Usage

Requirements

torch
torchvision
mmcv / mmcv-full
mmcls 
mmdet 

Compatibility

This code is tested with

pytorch==1.3
torchvision==0.4
cudatoolkit==10.0
mmcv-full==1.3.14
mmcls==0.16
mmdet==2.17

and

pytorch==1.8
torchvision==0.9
cudatoolkit==11.1
mmcv==1.3.16
mmcls==0.16
mmdet==2.17

Data

Download ImageNet and COCO, then extract them and organize the folders as follows:

- detection
  |- tools
  |- configs
  |- data
  |   |- coco
  |   |   |- train2017
  |   |   |- val2017
  |   |   |- test2017
  |   |   |- annotations
  |
- classification
  |- tools
  |- configs
  |- data
  |   |- imagenet
  |   |   |- train
  |   |   |- val
  |   |   |- test 
  |   |   |- meta
  |
- ...

Commands

e.g. Classification

cd classification
  1. Pruning

    # single GPU
    python tools/train.py configs/xxx_pruning.py --gpus=1
    # multi GPUs (e.g. 4 GPUs)
    python -m torch.distributed.launch --nproc_per_node=4 tools/train.py configs/xxx_pruning.py --launcher pytorch
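    For reference, a pruning config might look like the following minimal sketch. The FisherPruningHook name follows the official FisherPruning code; the base-config paths and the hook arguments (delta, interval) are assumptions that may differ in this repo.

    # configs/xxx_pruning.py -- minimal sketch; hook arguments are assumptions
    _base_ = [
        '../_base_/models/resnet50.py',          # model to prune
        '../_base_/datasets/imagenet_bs32.py',   # ImageNet pipeline
        '../_base_/schedules/imagenet_bs256.py',
        '../_base_/default_runtime.py',
    ]
    custom_hooks = [
        dict(
            type='FisherPruningHook',  # Group Fisher pruning hook
            pruning=True,              # prune in this run (False when fine-tuning)
            delta='acts',              # assumed: cost measured in activations ('acts' or 'flops')
            interval=10,               # assumed: prune one channel group every 10 iterations
        )
    ]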
  2. Fine-tune

    In the config file, set deploy_from to the path of the pruned model from step 1, and set samples_per_gpu to 256/#GPUs (e.g. 64 for 4 GPUs); a sketch follows the commands below. Then

    # single GPU
    python tools/train.py configs/xxx_finetune.py --gpus=1
    # multi GPUs (e.g. 4 GPUs)
    python -m torch.distributed.launch --nproc_per_node=4 tools/train.py configs/xxx_finetune.py --launcher pytorch
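    For example, the modified lines in configs/xxx_finetune.py might look like this minimal sketch; the checkpoint path is a placeholder, and showing deploy_from as a top-level key is an assumption based on the instruction above.

    # configs/xxx_finetune.py -- minimal sketch; the path is a placeholder
    deploy_from = 'work_dirs/xxx_pruning/pruned.pth'  # pruned model from step 1
    data = dict(samples_per_gpu=64)                   # 256 / 4 GPUs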
  3. Test

    In the config file, set load_from to the path of the fine-tuned model (a sketch follows the command below). Then

    python tools/test.py configs/xxx_finetune.py --metrics=accuracy
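    For example (a minimal sketch; the checkpoint path is a placeholder):

    # configs/xxx_finetune.py -- minimal sketch; the path is a placeholder
    load_from = 'work_dirs/xxx_finetune/latest.pth'  # fine-tuned model from step 2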

The commands for pruning and fine-tuning detection models are similar to those for classification models. Detailed instructions will be added soon.

Acknowledgments

This project builds on the official FisherPruning code.

Comments
  • runtime error in pruning resnet50

    Thank you very much for your optimization. I tried to reproduce the pruning results on classification, but it reported an error. I suspected a torch version problem, but even after switching to the same versions used in your experiment, the problem persists. Can you give any suggestions?

    2021-11-18 21:53:59,417 - mmcls - INFO - Environment info:

    sys.platform: linux
    Python: 3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]
    CUDA available: True
    GPU 0,1,2,3,4,5,6,7: GeForce RTX 3090
    CUDA_HOME: /usr/local/cuda-11.1
    NVCC: Build cuda_11.1.TC455_06.29069683_0
    GCC: gcc (GCC) 5.4.0
    PyTorch: 1.8.0+cu111
    PyTorch compiling details: PyTorch built with:

    • GCC 7.3
    • C++ Version: 201402
    • Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
    • Intel(R) MKL-DNN v1.7.0 (Git Hash 7aed236906b1f7a05c0917e5257a1af05e9ff683)
    • OpenMP 201511 (a.k.a. OpenMP 4.5)
    • NNPACK is enabled
    • CPU capability usage: AVX2
    • CUDA Runtime 11.1
    • NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86
    • CuDNN 8.0.5
    • Magma 2.5.2
    • Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.1, CUDNN_VERSION=8.0.5, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.8.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON,

    TorchVision: 0.9.0+cu111
    OpenCV: 4.5.3
    MMCV: 1.3.17
    MMCV Compiler: GCC 5.4
    MMCV CUDA Compiler: 11.1
    MMClassification: 0.15.0+729c6c1

    opened by zhaoxin111 10
  • baseline

    Hello, the repository has no baseline training config. How do you compare against the pruning results? Is the baseline trained like this?

    _base_ = [
        '../_base_/models/resnet50.py',
        '../_base_/datasets/cifar10_bs16.py',
        '../_base_/schedules/cifar10_bs128.py',
        '../_base_/default_runtime.py'
    ]
    optimizer = dict(lr=0.004)
    work_dir = "work_dirs/resnet50-baseline"
    load_from = "https://download.openmmlab.com/mmclassification/v0/resnet/resnet50_8xb32_in1k_20210831-ea4938fc.pth"

    opened by tu12306 4
  • RuntimeError: The size of tensor a (256) must match the size of tensor b (36) at non-singleton dimension 1

    Hello! I saw you tested the code with torch 1.8.

    I tried it on an RTX 3090 with torch==1.8, cudatoolkit==11.1, several different versions of mmcv-full installed with pip, and mmdet==2.17. When running on COCO in the pruning stage, it always raises

      File "tools/prune_train.py", line 195, in <module>
        main()
      File "tools/prune_train.py", line 191, in main
        meta=meta)
      File "/home/dell/programme/FisherPruning-Pytorch/mmdet/apis/train.py", line 174, in train_detector
        runner.run(data_loaders, cfg.workflow)
      File "/home/dell/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 127, in run
        epoch_runner(data_loaders[i], **kwargs)
      File "/home/dell/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 51, in train
        self.call_hook('after_train_iter')
      File "/home/dell/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/runner/base_runner.py", line 307, in call_hook
        getattr(hook, fn_name)(self)
      File "/home/dell/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/runner/hooks/optimizer.py", line 36, in after_train_iter
        runner.outputs['loss'].backward()
      File "/home/dell/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/tensor.py", line 245, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
      File "/home/dell/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/autograd/__init__.py", line 147, in backward
        allow_unreachable=True, accumulate_grad=True)  # allow_unreachable flag
      File "/home/dell/programme/FisherPruning-Pytorch/tools/fisher_pruning.py", line 385, in compute_fisher_backward_hook
        grads = feature * grad_feature
    RuntimeError: The size of tensor a (256) must match the size of tensor b (36) at non-singleton dimension 1
    

    But when I tried it on a TITAN RTX with torch==1.3, as the paper's authors suggest, this error disappears.

    Have you encountered this problem? Thanks!

    opened by YihaoChan 0
  • error when pruning mobilenet

    Firstly, thanks for your excellent work! I tried to prune MobileNet, which contains many depthwise convolutions (DWConv). To support DWConv pruning I made some small changes (see screenshot); otherwise the following error is reported (screenshot).

    When fine-tuning the pruned model, there was another error related to the DWConv (screenshots).

    opened by zhaoxin111 5