Code for the paper "Curriculum Dropout", ICCV 2017

Overview

Curriculum Dropout

Dropout is a very effective way of regularizing neural networks. Stochastically "dropping out" units with a certain probability discourages over-specific co-adaptations of feature detectors, preventing overfitting and improving network generalization. However, we show that using a fixed dropout probability during training is a suboptimal choice. We propose a time schedule for the probability of retaining neurons in the network. This induces an adaptive regularization scheme that smoothly increases the difficulty of the optimization problem. This idea of "starting easy" and adaptively increasing the difficulty of the learning problem has its roots in curriculum learning and allows one to train better models. Indeed, we prove that our optimization strategy implements a very general curriculum scheme by gradually adding noise to both the input and the intermediate feature representations in the network architecture. The method, named Curriculum Dropout, yields better generalization.
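
For concreteness, here is a minimal sketch of such a schedule in plain Python. The exponential form (keep probability starting at 1.0 and decaying towards a target limit) follows the spirit of the paper; the names curriculum_keep_prob, keep_prob_limit and gamma are illustrative and not necessarily the exact parametrization used in the provided scripts.

    import numpy as np

    def curriculum_keep_prob(t, keep_prob_limit=0.5, gamma=1e-3):
        # Retain probability at training step t: starts at 1.0 (no dropout)
        # and decays exponentially towards keep_prob_limit, so the effective
        # regularization gets gradually stronger as training proceeds.
        return (1.0 - keep_prob_limit) * np.exp(-gamma * t) + keep_prob_limit

At step t, the scheduled value would simply be fed to the network's dropout layers in place of a fixed keep probability.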

Code

Each sub-folder (...in progress...) is named after the dataset it analyzes and comes with its own README. The provided code runs with Python 2.7 (it should also run with Python 3, but this has not been tested). For the installation of tensorflow-gpu, please refer to the official website.

The following command should install the main dependencies on most Linux (Ubuntu) machines:

sudo apt-get install python-dev python-pip && sudo pip install -r requirements.txt

Download and extract MNIST

  • The script download.sh downloads and extracts MNIST. The default storage directory is ~/mnist.
sudo chmod a+x download.sh
./download.sh

Move the mnist/ folder wherever you like (e.g. /mydata) and then tell the training scripts where to find it:

echo /mydata >> data_dir.txt
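
As a hypothetical illustration of what a training script can then do with this file (the actual loading code in the repository may differ):

    # Hypothetical sketch: read the dataset root written to data_dir.txt
    # and build the path to the extracted mnist/ folder.
    with open('data_dir.txt') as f:
        data_dir = f.read().strip()   # e.g. '/mydata'
    mnist_path = data_dir + '/mnist'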

Reference

If you use this code as part of any published research, please acknowledge the following paper:

"Curriculum Dropout"
Pietro Morerio, Jacopo Cavazza, Riccardo Volpi, René Vidal and Vittorio Murino [pdf]

@InProceedings{Morerio2017dropout,
    title={Curriculum Dropout},
    author={Morerio, Pietro and Cavazza, Jacopo and Volpi, Riccardo and Vidal, Ren\'e and Murino, Vittorio},
    booktitle = {ICCV},
    year={2017}
} 

License

This repository is released under the GNU GENERAL PUBLIC LICENSE.

