PyTorch Implementation of [1611.06440] Pruning Convolutional Neural Networks for Resource Efficient Inference

Overview

PyTorch implementation of Pruning Convolutional Neural Networks for Resource Efficient Inference (arXiv:1611.06440).

It demonstrates pruning a VGG16-based classifier trained on a small dog/cat dataset.

Pruning reduced the CPU runtime by 3x and the model size by 4x.

For more details, you can read the accompanying blog post.

At each pruning step, 512 filters are removed from the network.

Usage

This repository uses the PyTorch ImageFolder loader, so it assumes that the images are in a different directory for each category.

    Train/
        dogs/
        cats/
    Test/
        dogs/
        cats/

The images were taken from here, but you should try training this on your own data and see if it works!
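A minimal loader sketch matching this layout (the transform values are illustrative assumptions; finetune.py defines its own pipeline):

    import torch
    from torchvision import datasets, transforms

    # ImageFolder maps each subdirectory (dogs/, cats/) to a class index.
    transform = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
    ])

    train_dataset = datasets.ImageFolder("Train", transform=transform)
    train_loader = torch.utils.data.DataLoader(
        train_dataset, batch_size=32, shuffle=True, num_workers=4)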

Training: python finetune.py --train

Pruning: python finetune.py --prune

TBD

  • Change the pruning to be done in one pass. Currently each of the 512 filters is pruned sequentially:

        for layer_index, filter_index in prune_targets:
            model = prune_vgg16_conv_layer(model, layer_index, filter_index)

    This is inefficient since allocating new layers, especially fully connected layers with lots of parameters, is slow.

    In principle this can be done in a single pass, as in the sketch below.
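    A minimal sketch of that single pass, assuming a hypothetical prune_vgg16_conv_layer_multi that accepts all filter indices for one layer at once:

        from collections import defaultdict

        # Group the (layer, filter) targets so each layer is rebuilt only once.
        filters_per_layer = defaultdict(list)
        for layer_index, filter_index in prune_targets:
            filters_per_layer[layer_index].append(filter_index)

        for layer_index, filter_indices in filters_per_layer.items():
            # Hypothetical batched variant of prune_vgg16_conv_layer.
            model = prune_vgg16_conv_layer_multi(model, layer_index, filter_indices)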

  • Change prune_vgg16_conv_layer to support additional architectures. The most immediate one would be VGG with batch norm; see the sketch below.
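    For the batch-norm case, a minimal sketch of the extra step, assuming the BatchNorm2d directly follows the pruned conv layer (prune_batchnorm is a hypothetical helper, not part of this repository):

        import torch.nn as nn

        def prune_batchnorm(old_bn: nn.BatchNorm2d, filter_index: int) -> nn.BatchNorm2d:
            # Drop the BN channel matching the removed conv filter.
            keep = [i for i in range(old_bn.num_features) if i != filter_index]
            new_bn = nn.BatchNorm2d(old_bn.num_features - 1)
            new_bn.weight.data = old_bn.weight.data[keep].clone()
            new_bn.bias.data = old_bn.bias.data[keep].clone()
            new_bn.running_mean = old_bn.running_mean[keep].clone()
            new_bn.running_var = old_bn.running_var[keep].clone()
            return new_bn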

Comments
  • RuntimeError: dimension out of range (expected to be in range of [-2, 1], but got 3)

    Hi Jacob, I get this error when I run finetune.py --prune:

    Traceback (most recent call last):
      File "fine_tune.py", line 271, in <module>
        fine_tuner.prune()
      File "fine_tune.py", line 218, in prune
        prune_targets = self.get_candidates_to_prune(num_filters_to_prune_per_iteration)
      File "fine_tune.py", line 184, in get_candidates_to_prune
        self.train_epoch(rank_filters = True)
      File "fine_tune.py", line 179, in train_epoch
        self.train_batch(optimizer, batch.cuda(), label.cuda(), rank_filters)
      File "fine_tune.py", line 172, in train_batch
        self.criterion(output, Variable(label)).backward()
      File "/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.py", line 156, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
      File "/usr/local/lib/python2.7/dist-packages/torch/autograd/__init__.py", line 98, in backward
        variables, grad_variables, retain_graph)
      File "fine_tune.py", line 77, in compute_rank
        sum(dim=2).sum(dim=3)[0, :, 0, 0].data
      File "/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.py", line 476, in sum
        return Sum.apply(self, dim, keepdim)
      File "/usr/local/lib/python2.7/dist-packages/torch/autograd/_functions/reduce.py", line 21, in forward
        return input.sum(dim)
    RuntimeError: dimension out of range (expected to be in range of [-2, 1], but got 3)

    I have not been able to figure out exactly what's causing the error.
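    A plausible cause: in newer PyTorch versions, Tensor.sum(dim) drops the reduced dimension, so the chained .sum(dim=3) in compute_rank goes out of range. A hedged fix is to keep the reduced dims (taylor below is a stand-in for the activation-gradient product being reduced):

        import torch

        taylor = torch.randn(8, 64, 14, 14)  # stand-in, shape (N, C, H, W)
        # keepdim=True preserves the reduced dims so the original indexing still applies.
        values = taylor.sum(dim=2, keepdim=True).sum(dim=3, keepdim=True)[0, :, 0, 0].data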

    opened by pgadosey 18
  • [CUDA Runtime Error] Assertion `t >= 0 && t < n_classes` failed.

    OS: CentOS 7, torch 0.4.0, torchvision 0.2.1, Python 2.7

    I downloaded the dog/cat dataset from Kaggle and ran python finetune.py --train --train_path=. --test_path=. Then I get the following error:

    $ python finetune.py --train --train_path=. --test_path=.
    /home/web_server/dlpy72/dlpy/lib/python2.7/site-packages/torchvision/transforms/transforms.py:188: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
      "please use transforms.Resize instead.")
    /home/web_server/dlpy72/dlpy/lib/python2.7/site-packages/torchvision/transforms/transforms.py:563: UserWarning: The use of the transforms.RandomSizedCrop transform is deprecated, please use transforms.RandomResizedCrop instead.
      "please use transforms.RandomResizedCrop instead.")
    train data loading finished
    Epoch:  0
    THCudaCheck FAIL file=/pytorch/aten/src/THCUNN/generic/Threshold.cu line=67 error=59 : device-side assert triggered
    Traceback (most recent call last):
      File "finetune.py", line 267, in <module>
        fine_tuner.train(epoches = 20)
      File "finetune.py", line 162, in train
        self.train_epoch(optimizer)
      File "finetune.py", line 180, in train_epoch
        self.train_batch(optimizer, batch.cuda(), label.cuda(), rank_filters)
      File "finetune.py", line 175, in train_batch
        self.criterion(self.model(input), Variable(label)).backward()
      File "/home/web_server/dlpy72/dlpy/lib/python2.7/site-packages/torch/tensor.py", line 93, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph)
      File "/home/web_server/dlpy72/dlpy/lib/python2.7/site-packages/torch/autograd/__init__.py", line 89, in backward
        allow_unreachable=True)  # allow_unreachable flag
    RuntimeError: cuda runtime error (59) : device-side assert triggered at /pytorch/aten/src/THCUNN/generic/Threshold.cu:67
    /pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [0,0,0] Assertion `t >= 0 && t < n_classes` failed.
    ... (the same assertion failure repeats for threads [2,0,0] through [31,0,0])
    
    opened by oscarriddle 5
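    A hedged note on the assertion above: it fires when a target label falls outside the model's output range. With --train_path=., ImageFolder may treat every subfolder of the working directory as a class, while the VGG head here has only two outputs. A quick check:

        from torchvision import datasets

        dataset = datasets.ImageFolder(".")
        # Should be exactly the two expected classes, e.g. ['cats', 'dogs'];
        # anything more yields labels >= n_classes and triggers the assert.
        print(dataset.classes)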
  • RuntimeError: inconsistent tensor sizes

    Running on the cats and dogs dataset, your repo fails with the following error:

    my@my:~/Dropbox/x/CV/pytorch-pruning$ CUDA_VISIBLE_DEVICES=0 python finetune.py --train
    PrunningFineTuner
    ('Train folder size', 25000)
    ('Test folder size', 25000)
    fine_tuner.train()
    Epoch: 0
    Traceback (most recent call last):
      File "finetune.py", line 273, in <module>
        fine_tuner.train(epoches = 3)
      File "finetune.py", line 164, in train
        self.train_epoch(optimizer)
      File "finetune.py", line 181, in train_epoch
        for batch, label in self.train_data_loader:
      File "/home/my/anaconda2/lib/python2.7/site-packages/torch/utils/data/dataloader.py", line 212, in __next__
        return self._process_next_batch(batch)
      File "/home/my/anaconda2/lib/python2.7/site-packages/torch/utils/data/dataloader.py", line 239, in _process_next_batch
        raise batch.exc_type(batch.exc_msg)
    RuntimeError: Traceback (most recent call last):
      File "/home/my/anaconda2/lib/python2.7/site-packages/torch/utils/data/dataloader.py", line 41, in _worker_loop
        samples = collate_fn([dataset[i] for i in batch_indices])
      File "/home/my/anaconda2/lib/python2.7/site-packages/torch/utils/data/dataloader.py", line 110, in default_collate
        return [default_collate(samples) for samples in transposed]
      File "/home/my/anaconda2/lib/python2.7/site-packages/torch/utils/data/dataloader.py", line 92, in default_collate
        return torch.stack(batch, 0, out=out)
      File "/home/my/anaconda2/lib/python2.7/site-packages/torch/functional.py", line 60, in stack
        return torch.cat(inputs, dim, out=out)
    RuntimeError: inconsistent tensor sizes at /py/conda-bld/pytorch_1493676237139/work/torch/lib/TH/generic/THTensorMath.c:2559
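    A hedged note: default_collate raises this when images in a batch come out with different shapes. Forcing a fixed size and channel count in the transform pipeline usually resolves it (transforms.Resize is the current name; older torchvision releases call it transforms.Scale):

        from torchvision import transforms

        transform = transforms.Compose([
            transforms.Lambda(lambda img: img.convert("RGB")),  # uniform channel count
            transforms.Resize((224, 224)),                      # uniform spatial size
            transforms.ToTensor(),
        ])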

    opened by alphamupsiomega 3
  • Getting Error in pruning

    Hi, I'm getting the following error while running pruning. Can anyone help me with this issue? It ran one iteration out of 5 perfectly, but gives an error in the second iteration. (The error was attached as a screenshot in the original issue.)

    opened by ritesh2212 1
  • Accuracy drops from 96.46% to 58.67%

    I tried the project on Python 3.6. Here is the log; the accuracy drops significantly, which differs from your blog result, where the accuracy only dropped from 98.7% to 97.5%.

    $ python3 test_pruning.py --prune
    CHECK GPU AVAILEBLE: True
    /home/web_server/dlpy72/py3.6/lib/python3.6/site-packages/torchvision/transforms/transforms.py:156: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
      "please use transforms.Resize instead.")
    /home/web_server/dlpy72/py3.6/lib/python3.6/site-packages/torchvision/transforms/transforms.py:397: UserWarning: The use of the transforms.RandomSizedCrop transform is deprecated, please use transforms.RandomResizedCrop instead.
      "please use transforms.RandomResizedCrop instead.")
    Correct: 845, Failed: 31, Accuracy: 0.9646118721461188
    Number of prunning iterations to reduce 67% filters 5
    Ranking filters.. 
    Layers that will be prunned {28: 130, 17: 56, 26: 71, 21: 53, 0: 5, 19: 60, 10: 20, 12: 20, 7: 9, 2: 4, 24: 62, 14: 13, 5: 9}
    Prunning filters.. 
    Filters prunned 87.87878787878788%
    Correct: 838, Failed: 38, Accuracy: 0.95662100456621
    Fine tuning to recover from prunning iteration.
    Ranking filters.. 
    Layers that will be prunned {28: 110, 26: 69, 14: 17, 24: 80, 21: 60, 10: 23, 17: 64, 7: 7, 19: 52, 12: 18, 5: 5, 0: 4, 2: 3}
    Prunning filters.. 
    Filters prunned 75.75757575757575%
    Correct: 817, Failed: 59, Accuracy: 0.932648401826484
    Fine tuning to recover from prunning iteration.
    Ranking filters.. 
    Layers that will be prunned {24: 80, 21: 47, 17: 75, 14: 22, 26: 92, 2: 4, 12: 23, 19: 64, 10: 21, 28: 67, 5: 8, 7: 8, 0: 1}
    Prunning filters.. 
    Filters prunned 63.63636363636363%
    Correct: 754, Failed: 122, Accuracy: 0.860730593607306
    Fine tuning to recover from prunning iteration.
    Ranking filters.. 
    Layers that will be prunned {26: 103, 19: 98, 14: 19, 17: 54, 21: 88, 24: 63, 12: 17, 10: 16, 28: 42, 7: 2, 2: 1, 0: 6, 5: 3}
    Prunning filters.. 
    Filters prunned 51.515151515151516%
    Correct: 468, Failed: 408, Accuracy: 0.5342465753424658
    Fine tuning to recover from prunning iteration.
    Ranking filters.. 
    Layers that will be prunned {21: 91, 17: 79, 5: 17, 14: 36, 19: 68, 10: 33, 12: 32, 26: 40, 0: 10, 24: 69, 2: 5, 28: 25, 7: 7}
    Prunning filters.. 
    Filters prunned 39.39393939393939%
    Correct: 514, Failed: 362, Accuracy: 0.58675799086758
    Fine tuning to recover from prunning iteration.
    Finished. Going to fine tune the model a bit more
    
    
    opened by oscarriddle 1
  • SqueezeNet Pruning

    Has anyone tried pruning SqueezeNet using this method and this program? I have been trying to prune SqueezeNet, but the test accuracy during fine-tuning, after pruning the first set of filters, is always 0.5. Any idea what might be wrong?

    I am confused about which filter to remove after getting the filter_index from the compute_rank() method.

    Thank you!!!
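    A hedged note: filter_index from compute_rank() indexes an output channel of the conv layer at layer_index. A minimal sketch of what removing it means for a Conv2d (the sizes are illustrative):

        import torch.nn as nn

        conv = nn.Conv2d(64, 128, kernel_size=3, padding=1)
        f = 5  # filter_index: an output channel of this conv
        keep = [i for i in range(conv.out_channels) if i != f]
        new_conv = nn.Conv2d(conv.in_channels, conv.out_channels - 1,
                             kernel_size=3, padding=1)
        new_conv.weight.data = conv.weight.data[keep].clone()
        new_conv.bias.data = conv.bias.data[keep].clone()
        # The following conv must drop its f-th *input* channel as well:
        # next_conv.weight.data = next_conv.weight.data[:, keep].clone()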

    opened by Kuldeep-Attri 1
  • Pruning VGG-19 model for neural style transfer on mobile device

    I'm looking to reduce the memory requirements for this model by 80%, if possible, to perform neural-style transfer on a mobile device. Will altering this script allow for such things?

    opened by wilkinsmicawber 0
  • Is it possible to use your code directly on other network like Resnet, Inception V3

    Impressive job! I want to compress and accelerate ResNet. Is it possible to use your code directly, or does it need modification? If it needs modification, what should be modified? Thank you very much.

    opened by guoxiaolu 0
  • Running project in google colab

    Hello, I am new to GitHub and PyTorch. I do not know if my question is appropriate.

    I have cloned the project to Google Colab and run the command line as instructed (python finetune.py --train) to verify the result. But I get an error that it cannot find the directory for the train function. If anyone has had the same problem, could you help me out?

    Thank you

    opened by tdd2454 0
  • Will the pruned weight reactivated after finetuning?

    @jacobgil I find there are no constraints applied in optimizer.step(). So a pruned weight will get a gradient, and after stepping it will no longer be 0, which means it can no longer be regarded as pruned?

    Am I right? Hope for your response!

    opened by igo312 0
  • how Pruning the last conv layer affects the first linear layer of the classifier

    I trained the VGG and saved the model as a .pth file, then loaded it to prune some of its filters. After pruning, the last conv layer no longer has 512 filters; some are gone. How does pruning the last conv layer affect the first linear layer of the classifier, which is (512 * 7 * 7, 4096)? How can I prune the input weights of the classifier to match the last conv layer?
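    A hedged sketch of the matching cut in the classifier: with 7x7 feature maps, dropping conv filter f removes one contiguous block of 49 input columns from the first Linear layer, assuming the usual view(N, 512 * 7 * 7) flattening (prune_first_linear is a hypothetical helper, not part of this repository):

        import torch.nn as nn

        def prune_first_linear(old_fc: nn.Linear, filter_index: int,
                               spatial: int = 7 * 7) -> nn.Linear:
            # Input columns [f * 49, (f + 1) * 49) belong to the removed filter.
            start, end = filter_index * spatial, (filter_index + 1) * spatial
            keep = [i for i in range(old_fc.in_features) if not (start <= i < end)]
            new_fc = nn.Linear(old_fc.in_features - spatial, old_fc.out_features)
            new_fc.weight.data = old_fc.weight.data[:, keep].clone()
            new_fc.bias.data = old_fc.bias.data.clone()
            return new_fc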

    opened by Saharkakavand 0
Owner
Jacob Gildenblat
Machine learning / Computer Vision.