MXNet implementation for: Drop an Octave: Reducing Spatial Redundancy in Convolutional Neural Networks with Octave Convolution

Overview

Octave Convolution

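In brief, OctConv factorizes a layer's feature maps into a high-frequency group at full resolution and a low-frequency group stored at half the spatial resolution (a fraction alpha of the channels), and exchanges information between the two through four convolution paths (H→H, H→L, L→H, L→L). Below is a minimal sketch of one such layer; the class name and the pooling/upsampling choices follow the paper, but this is an illustrative Gluon re-implementation, not the repository's Symbol-API code.

```python
# Illustrative sketch of one octave convolution layer (not the repo's code).
from mxnet import nd
from mxnet.gluon import nn

class OctConv(nn.HybridBlock):
    def __init__(self, channels, alpha=0.25, kernel_size=3, **kwargs):
        super(OctConv, self).__init__(**kwargs)
        c_l = int(alpha * channels)  # low-frequency output channels
        c_h = channels - c_l         # high-frequency output channels
        pad = kernel_size // 2
        with self.name_scope():
            self.conv_hh = nn.Conv2D(c_h, kernel_size, padding=pad)  # H -> H
            self.conv_hl = nn.Conv2D(c_l, kernel_size, padding=pad)  # H -> L
            self.conv_lh = nn.Conv2D(c_h, kernel_size, padding=pad)  # L -> H
            self.conv_ll = nn.Conv2D(c_l, kernel_size, padding=pad)  # L -> L
            self.pool = nn.AvgPool2D(pool_size=2, strides=2)

    def hybrid_forward(self, F, x_h, x_l):
        # High-frequency output: H->H plus the L->H path upsampled 2x
        # (nearest-neighbour upsampling done by repeating along H and W).
        up = F.repeat(F.repeat(self.conv_lh(x_l), repeats=2, axis=2),
                      repeats=2, axis=3)
        y_h = self.conv_hh(x_h) + up
        # Low-frequency output: average-pooled H->L plus L->L.
        y_l = self.conv_hl(self.pool(x_h)) + self.conv_ll(x_l)
        return y_h, y_l

# Quick shape check: 64 high-freq channels at 32x32, 16 low-freq at 16x16.
net = OctConv(channels=80, alpha=0.25)
net.initialize()
y_h, y_l = net(nd.zeros((1, 64, 32, 32)), nd.zeros((1, 16, 16, 16)))
print(y_h.shape, y_l.shape)  # (1, 60, 32, 32) (1, 20, 16, 16)
```

With alpha = 0.25, a quarter of the channels are processed at half resolution, which is where the FLOPs savings shown in the tables below come from.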

ImageNet

Ablation

  • Loss: Softmax
  • Learning rate: Cosine (warm-up: 5 epochs, lr: 0.4)
  • MXNet API: Symbol API

(example training script)
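
As a reference for the schedule above, here is a minimal sketch of cosine decay with linear warm-up; the epoch-level granularity and the 120-epoch total are illustrative assumptions, not the repository's exact training code.

```python
import math

def cosine_lr(epoch, total_epochs=120, warmup_epochs=5, base_lr=0.4):
    # Linear warm-up over the first `warmup_epochs` epochs (assumed to step
    # per epoch; the actual script may update per iteration instead).
    if epoch < warmup_epochs:
        return base_lr * (epoch + 1) / warmup_epochs
    # Cosine decay from base_lr down to 0 over the remaining epochs.
    progress = (epoch - warmup_epochs) / float(total_epochs - warmup_epochs)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))
```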

| Model | baseline | alpha = 0.125 | alpha = 0.25 | alpha = 0.5 | alpha = 0.75 |
| --- | --- | --- | --- | --- | --- |
| DenseNet-121 | 75.4 / 92.7 | 76.1 / 93.0 | 75.9 / 93.1 | -- | -- |
| ResNet-26 | 73.2 / 91.3 | 75.8 / 92.6 | 76.1 / 92.6 | 75.5 / 92.5 | 74.6 / 92.1 |
| ResNet-50 | 77.0 / 93.4 | 78.2 / 93.9 | 78.0 / 93.8 | 77.4 / 93.6 | 76.7 / 93.0 |
| SE-ResNet-50 | 77.6 / 93.6 | 78.7 / 94.1 | 78.4 / 94.0 | 77.9 / 93.8 | 77.4 / 93.5 |
| ResNeXt-50 | 78.4 / 94.0 | -- | 78.8 / 94.2 | 78.4 / 94.0 | 77.5 / 93.6 |
| ResNet-101 | 78.5 / 94.1 | 79.2 / 94.4 | 79.2 / 94.4 | 78.7 / 94.1 | -- |
| ResNeXt-101 | 79.4 / 94.6 | -- | 79.6 / 94.5 | 78.9 / 94.4 | -- |
| ResNet-200 | 79.6 / 94.7 | 80.0 / 94.9 | 79.8 / 94.8 | 79.5 / 94.7 | -- |

Note:

  • Top-1 / Top-5 accuracy with a single center crop is shown in the table (see the testing script; a minimal preprocessing sketch follows these notes).
  • All residual networks in the ablation study adopt the pre-activation variant [1] for convenience.
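
For reference, a single-center-crop evaluation pipeline in Gluon typically looks like the sketch below; the 256/224 sizes and normalization constants are the common ImageNet defaults, assumed here rather than taken from the repository's testing script.

```python
from mxnet.gluon.data.vision import transforms

# Standard ImageNet evaluation preprocessing (assumed defaults).
eval_transform = transforms.Compose([
    transforms.Resize(256, keep_ratio=True),  # shorter side to 256
    transforms.CenterCrop(224),               # single 224x224 center crop
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
```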

Others

  • Learning rate: Cosine (warm-up: 5 epochs, lr: 0.4)
  • MXNet API: Gluon API
| Model | alpha | label smoothing [2] | mixup [3] | #Params | #FLOPs | Top-1 / Top-5 |
| --- | --- | --- | --- | --- | --- | --- |
| 0.75 MobileNet (v1) | .375 | | | 2.6 M | 213 M | 70.5 / 89.5 |
| 1.0 MobileNet (v1) | .5 | | | 4.2 M | 321 M | 72.5 / 90.6 |
| 1.0 MobileNet (v2) | .375 | Yes | | 3.5 M | 256 M | 72.0 / 90.7 |
| 1.125 MobileNet (v2) | .5 | Yes | | 4.2 M | 295 M | 73.0 / 91.2 |
| Oct-ResNet-152 | .125 | Yes | Yes | 60.2 M | 10.9 G | 81.4 / 95.4 |
| Oct-ResNet-152 + SE | .125 | Yes | Yes | 66.8 M | 10.9 G | 81.6 / 95.7 |
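
For reference, the label smoothing [2] and mixup [3] columns correspond to training-time regularizers along the lines of the sketch below; eps=0.1 and the Beta(0.2, 0.2) mixing distribution are assumed values, not the repository's settings.

```python
import numpy as np
from mxnet import nd

def smooth_labels(label, classes=1000, eps=0.1):
    # Smoothed one-hot targets as in [2]: (1 - eps) on the true class
    # plus a uniform eps / classes everywhere (eps=0.1 assumed).
    return nd.one_hot(label, classes,
                      on_value=1.0 - eps + eps / classes,
                      off_value=eps / classes)

def mixup_batch(data, soft_label, alpha=0.2):
    # Mix each sample with a randomly paired one, as in [3]
    # (Beta(0.2, 0.2) assumed).
    lam = float(np.random.beta(alpha, alpha))
    idx = nd.array(np.random.permutation(data.shape[0]))
    mixed_data = lam * data + (1.0 - lam) * nd.take(data, idx)
    mixed_label = lam * soft_label + (1.0 - lam) * nd.take(soft_label, idx)
    return mixed_data, mixed_label
```

Training then fits the network to the resulting soft targets with a cross-entropy loss instead of hard one-hot labels.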

Citation

@article{chen2019drop,
  title={Drop an Octave: Reducing Spatial Redundancy in Convolutional Neural Networks with Octave Convolution},
  author={Chen, Yunpeng and Fan, Haoqi and Xu, Bing and Yan, Zhicheng and Kalantidis, Yannis and Rohrbach, Marcus and Yan, Shuicheng and Feng, Jiashi},
  journal={Proceedings of the IEEE International Conference on Computer Vision},
  year={2019}
}

Third-party Implementations

  • PyTorch: https://github.com/d-li14/octconv.pytorch (training logs and ImageNet pre-trained ResNet-50 models; see the comment by @d-li14 below)

Acknowledgement

  • Thanks to MXNet, Gluon-CV and TVM!
  • Thanks to @Ldpe2G for sharing the code for calculating #FLOPs (link)
  • Thanks to Min Lin (Mila), Xin Zhao (Qihoo Inc.), and Tao Wang (NUS) for helpful discussions on the code development.

Reference

[1] He, K., et al. "Identity Mappings in Deep Residual Networks."

[2] Szegedy, C., et al. "Rethinking the Inception Architecture for Computer Vision."

[3] Zhang, H., et al. "mixup: Beyond Empirical Risk Minimization."

License

The code and the models are MIT licensed, as found in the LICENSE file.

Comments
  • Details about GPU run times in Table 2 of the paper


    Hello @cypw. First of all, thanks for your amazing research. I have a question regarding Table 2 in your paper; this is not an issue but rather a follow-up question. You reported CPU inference times of ResNet-50 for various values of alpha. I'm wondering whether you observed a similar trend of decreasing run times on GPU. I've replicated your implementation in TensorFlow; here are my inference times on a 2080 Ti for a ResNet-50 ImageNet model with image size 224x224x3:

    | alpha | GPU runtime (ms) |
    | ----- | ---------------- |
    | 0     | 13.7             |
    | 0.125 | 22.8             |
    | 0.25  | 23.1             |
    | 0.5   | 23.8             |
    | 0.75  | 22.6             |

    From the table above, the performance of octave convolution that I get is worse on GPU. Did you see similar behavior on GPU on your side? Does this suggest that the octave convolution improvement is more suitable for CPUs than for GPUs?

    Thank you !!

    opened by peri044 7
  • Third-party Implementation in PyTorch


    Just for reference, there is a PyTorch implementation of OctConv https://github.com/d-li14/octconv.pytorch, including detailed training logs and pre-trained models of ResNet-50 on the ImageNet benchmark.

    opened by d-li14 2
  • about enlarges the receptive field


    Why does processing the low-frequency information with the corresponding (low-frequency) convolutions effectively enlarge the receptive field in the original pixel space?

    opened by PJJie 0
  • Octave Transposed Convolution


    Hi, thanks for sharing the code. I am just wondering whether there is an implementation of the octave transposed convolution (octave deconvolution)? I have not found it in your code. Is there a plan to implement it? Thanks.

    opened by makbari7 0
  • How to divide low frequency and high frequency?


    First of all, I appreciate the author's open-source spirit. Great idea!

    I know that the low and high frequencies are divided according to a given proportion, but how are they divided along the channel dimension? Are the channels assigned randomly according to the proportion, or is there other pre-processing involved?

    Thank you very much.

    opened by ys-dpc 2
Owner: Meta Research