Iterative Normalization: Beyond Standardization towards Efficient Whitening

Overview

IterNorm

Code for reproducing the results in the following paper:

Iterative Normalization: Beyond Standardization towards Efficient Whitening

Lei Huang, Yi Zhou, Fan Zhu, Li Liu, Ling Shao

IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019. arXiv:1904.03441

This is the Torch implementation (the experimental results in the paper are based on this implementation). Other implementations are listed below:

1. PyTorch re-implementation

2. TensorFlow implementation by Lei Zhao.

=======================================================================

Requirements and Dependency

  • Install Torch with CUDA (for GPU).
  • Install cuDNN.
  • Install the dependency optnet by:
luarocks install optnet
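
If the installation succeeded, all three packages should load from a `th` session. A quick check (the `require` names below are the standard module names of these packages):

-- Quick sanity check: run inside the `th` REPL. All three should load
-- without errors once Torch (with CUDA), cuDNN, and optnet are installed.
require 'cutorch'   -- CUDA backend for Torch
require 'cudnn'     -- cuDNN bindings
require 'optnet'    -- memory-optimization dependency installed above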

Experiments

1. Reproduce the results of the VGG network on the CIFAR-10 dataset:

Prepare the data: download CIFAR-10, and put the data files under ./data/.

  • Run:
bash y_execute_vggE_base.sh               # basic configuration
bash y_execute_vggE_b1024.sh              # batch size of 1024
bash y_execute_vggE_b16.sh                # batch size of 16
bash y_execute_vggE_LargeLR.sh            # 10x larger learning rate
bash y_execute_vggE_IterNorm_Iter.sh      # effect of iteration number
bash y_execute_vggE_IterNorm_Group.sh     # effect of group size

Note that these scripts don't include the setups for Decorrelated Batch Normalization (DBN). To reproduce the DBN results, please follow the instructions of the DBN project and use the corresponding hyper-parameters described in the paper.
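
For reference, the iteration number varied by y_execute_vggE_IterNorm_Iter.sh is the number of Newton steps T that IterNorm uses to approximate the inverse square root Sigma^(-1/2) of the covariance matrix (T = 5 by default in the paper), and the group size controls how many channels are whitened jointly. The following Lua/Torch sketch of that core computation is for illustration only, assuming a d x m activation matrix; the repository's actual module lives in ./module/:

require 'torch'

-- Illustrative sketch of IterNorm's whitening step (not the repository's module).
-- X: d x m matrix of activations (d channels, m examples); T: Newton steps.
local function iterNormWhiten(X, T)
  local d, m = X:size(1), X:size(2)
  local mu = X:mean(2)
  local Xc = X - mu:expand(d, m)               -- center the activations
  local Sigma = torch.mm(Xc, Xc:t()):div(m)    -- covariance matrix
  local trSigma = torch.trace(Sigma)
  local SigmaN = Sigma:clone():div(trSigma)    -- trace-normalized covariance
  local P = torch.eye(d)
  for k = 1, T do
    -- Newton iteration: P <- (3*P - P^3 * SigmaN) / 2
    local P3 = torch.mm(torch.mm(P, P), P)
    P = (P * 3 - torch.mm(P3, SigmaN)):div(2)
  end
  -- P now approximates SigmaN^(-1/2), so Sigma^(-1/2) = P / sqrt(tr(Sigma));
  -- apply it to the centered activations to whiten them.
  return torch.mm(P:div(math.sqrt(trSigma)), Xc)
end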

2. Reproduce the results of Wide Residual Networks on the CIFAR-10 dataset:

Prepare the data: same as in the VGG / CIFAR-10 experiments above.

  • Run:
bash y_execute_wr.sh               

3. Reproduce the ImageNet experiments.

  • Download ImageNet and put it in /data/lei/imageNet/input_torch/ (you can also customize the path in opts_imageNet.lua).
  • Install the IterNorm module to Torch as a Lua package: go to the directory ./models/imagenet/cuSpatialDBN/ and run luarocks make cudbn-1.0-0.rockspec. (Note that the modules in ./models/imagenet/cuSpatialDBN/ are the same as in ./module/, and the luarocks installation is for convenience when training ImageNet with multiple threads.)
  • Run the scripts named `z_execute_imageNet_***`.
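
Before launching the multithreaded training, you can check that the luarocks installation worked by loading the package from a `th` session. The package name below is taken from the rockspec; the exact Lua module names it registers are defined in ./models/imagenet/cuSpatialDBN/:

-- Sanity check (illustrative): 'cudbn' follows the name in
-- cudbn-1.0-0.rockspec; adjust if the installed module is named differently.
require 'cudbn'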

This project is based on the training scripts of the Wide Residual Networks repo and Facebook's ResNet repo.

Contact

Email: [email protected]. Discussions and suggestions are welcome!
