Unofficial implementation of the ImageNet, CIFAR10, and SVHN augmentation policies learned by AutoAugment, using Pillow

Overview

AutoAugment - Learning Augmentation Policies from Data

Unofficial implementation of the ImageNet, CIFAR10 and SVHN augmentation policies learned by AutoAugment, described in this Google AI blog post.

Update July 13th, 2018: Wrote a blog post about AutoAugment and Double Transfer Learning.

Tested with Python 3.6. Requires pillow>=5.0.0.

Examples of the best ImageNet Policy


Example

from PIL import Image
from autoaugment import ImageNetPolicy

image = Image.open(path)  # path: location of the image to augment
policy = ImageNetPolicy()
transformed = policy(image)

To see examples of all operations and magnitudes applied to images, take a look at AutoAugment_Exploration.ipynb.
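
For a quick look without the notebook, here is a minimal sketch (the input path example.jpg is a placeholder). Each call to the policy samples a random sub-policy, so repeated calls show different augmentations:

from PIL import Image
from autoaugment import ImageNetPolicy

image = Image.open("example.jpg")  # hypothetical input path
policy = ImageNetPolicy()
for i in range(8):
    policy(image).save("augmented_%d.jpg" % i)  # a new random sub-policy each call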

Example as a PyTorch Transform - ImageNet

from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision.datasets import ImageFolder
from autoaugment import ImageNetPolicy

data = ImageFolder(rootdir, transform=transforms.Compose(
    [transforms.RandomResizedCrop(224),
     transforms.RandomHorizontalFlip(),
     ImageNetPolicy(),
     transforms.ToTensor(),
     transforms.Normalize(...)]))
loader = DataLoader(data, ...)

Example as a PyTorch Transform - CIFAR10

from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision.datasets import ImageFolder
from autoaugment import CIFAR10Policy

data = ImageFolder(rootdir, transform=transforms.Compose(
    [transforms.RandomCrop(32, padding=4, fill=128),  # fill parameter needs torchvision installed from source
     transforms.RandomHorizontalFlip(),
     CIFAR10Policy(),
     transforms.ToTensor(),
     Cutout(n_holes=1, length=16),  # https://github.com/uoguelph-mlrg/Cutout/blob/master/util/cutout.py
     transforms.Normalize(...)]))
loader = DataLoader(data, ...)

Example as a PyTorch Transform - SVHN

from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision.datasets import ImageFolder
from autoaugment import SVHNPolicy

data = ImageFolder(rootdir, transform=transforms.Compose(
    [SVHNPolicy(),
     transforms.ToTensor(),
     Cutout(n_holes=1, length=20),  # https://github.com/uoguelph-mlrg/Cutout/blob/master/util/cutout.py
     transforms.Normalize(...)]))
loader = DataLoader(data, ...)
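
Cutout is not part of this repository; the examples above link to the original implementation. A minimal sketch following that implementation, operating on a (C, H, W) tensor, which is why it is placed after transforms.ToTensor():

import numpy as np
import torch

class Cutout(object):
    """Randomly mask out n_holes square patches of side `length` from a tensor image."""

    def __init__(self, n_holes, length):
        self.n_holes = n_holes
        self.length = length

    def __call__(self, img):
        h, w = img.size(1), img.size(2)
        mask = np.ones((h, w), np.float32)
        for _ in range(self.n_holes):
            # pick a random center; the square is clipped at the image borders
            y, x = np.random.randint(h), np.random.randint(w)
            y1, y2 = np.clip([y - self.length // 2, y + self.length // 2], 0, h)
            x1, x2 = np.clip([x - self.length // 2, x + self.length // 2], 0, w)
            mask[y1:y2, x1:x2] = 0.0
        return img * torch.from_numpy(mask).expand_as(img)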

Results with AutoAugment

Generalizable Data Augmentations

From the paper:

"Finally, we show that policies found on one task can generalize well across different models and datasets. For example, the policy found on ImageNet leads to significant improvements on a variety of FGVC datasets. Even on datasets for which fine-tuning weights pre-trained on ImageNet does not help significantly [26], e.g. Stanford Cars [27] and FGVC Aircraft [28], training with the ImageNet policy reduces test set error by 1.16% and 1.76%, respectively. This result suggests that transferring data augmentation policies offers an alternative method for transfer learning."

CIFAR10

CIFAR10 Results

CIFAR100

CIFAR100 Results

ImageNet

ImageNet Results

SVHN

SVHN Results

Fine Grained Visual Classification Datasets

FGVC Results

Comments
  • ImageNet performance?

    Hi, has anyone reproduced the performance on ImageNet with the provided AutoAugment? Here are my results with AutoAugment using the official implementation, compared to the official results; no impressive improvements were obtained. Results for ResNet-50/101/152 in terms of top-1/top-5 accuracy:

    official, without AutoAugment: 76.15/92.87, 77.37/93.56, 78.31/94.06
    mine, with AutoAugment: 75.33/92.45, 77.57/93.78, 78.51/94.07

    Update: all the above results were obtained with 90 training epochs; a longer schedule such as the 270 epochs used in the paper may help reach the reported results.

    opened by hszhao 8
  • Can't pickle local object 'SubPolicy.__init__.<locals>.<lambda>'

    Hi, I ran into a problem when running AutoAugment and couldn't find any solution by googling.

    My environment differs from the tested one. Well, I will feel lucky if it is not a complex compatibility issue.

    My environment: Python 3.7, PyTorch 1.10, Pillow 5.4.1

    File "/home/dc2-user/anaconda3/envs/kaidi_env/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 469, in __init__
        w.start()
    File "/home/dc2-user/anaconda3/envs/kaidi_env/lib/python3.7/multiprocessing/process.py", line 112, in start
        self._popen = self._Popen(self)
    File "/home/dc2-user/anaconda3/envs/kaidi_env/lib/python3.7/multiprocessing/context.py", line 223, in _Popen
        return _default_context.get_context().Process._Popen(process_obj)
    File "/home/dc2-user/anaconda3/envs/kaidi_env/lib/python3.7/multiprocessing/context.py", line 284, in _Popen
        return Popen(process_obj)
    File "/home/dc2-user/anaconda3/envs/kaidi_env/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 32, in __init__
        super().__init__(process_obj)
    File "/home/dc2-user/anaconda3/envs/kaidi_env/lib/python3.7/multiprocessing/popen_fork.py", line 20, in __init__
        self._launch(process_obj)
    File "/home/dc2-user/anaconda3/envs/kaidi_env/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 47, in _launch
        reduction.dump(process_obj, fp)
    File "/home/dc2-user/anaconda3/envs/kaidi_env/lib/python3.7/multiprocessing/reduction.py", line 60, in dump
        ForkingPickler(file, protocol).dump(obj)
    AttributeError: Can't pickle local object 'SubPolicy.__init__.<locals>.<lambda>'
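
    The root cause: SubPolicy.__init__ builds its operations as local lambdas, and Python's pickle (used when DataLoader workers start via the "spawn" method) cannot serialize locally defined functions. A workaround sketch, not from this repo: move the operations to module level so the policy object becomes picklable (the op names below are illustrative).

    from PIL import ImageOps

    # module-level functions, unlike lambdas created inside __init__, can be pickled
    def solarize_op(img, magnitude):
        return ImageOps.solarize(img, magnitude)

    def autocontrast_op(img, magnitude):
        return ImageOps.autocontrast(img)

    # hypothetical lookup used by SubPolicy.__init__ instead of a dict of local lambdas
    OPS = {"solarize": solarize_op, "autocontrast": autocontrast_op}

    Alternatively, setting num_workers=0 on the DataLoader avoids pickling entirely.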

    opened by KaiOtter 6
  • Unexpected keyword argument 'fillcolor' when running demo code

    When I test the demo AutoAugment_Exploration.ipynb, I always get the same error: "TypeError: transform() got an unexpected keyword argument 'fillcolor'". Does anyone know why this happens?
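
    A plausible cause, though not confirmed in the thread: the fillcolor keyword of PIL's Image.transform was only added in Pillow 5.0, which matches the pillow>=5.0.0 requirement noted above. A quick check of the installed version:

    import PIL
    # older Pillow releases expose PILLOW_VERSION instead of __version__
    print(getattr(PIL, "__version__", getattr(PIL, "PILLOW_VERSION", "unknown")))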

    opened by HelaiBAI97 2
  • [Bug] Missing random.choice in rotate

    Hi, thank you for the wonderful work! Just wanted to share a tiny concern. It seems to me that here the magnitude should be magnitude * random.choice([-1, 1]); otherwise the rotation will always be in one direction. https://github.com/DeepVoltaire/AutoAugment/blob/17d718251f25c0d9413bf30f91b523907924f33a/autoaugment.py#L208 Thank you.
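
    A sketch of the suggested fix, with the repo's rotate_with_fill helper reproduced for completeness: sampling a sign makes rotations go in both directions.

    import random
    from PIL import Image

    def rotate_with_fill(img, magnitude):
        # rotate on an RGBA canvas and composite over a gray fill, as in the repo
        rot = img.convert("RGBA").rotate(magnitude)
        return Image.composite(rot, Image.new("RGBA", rot.size, (128,) * 4), rot).convert(img.mode)

    def rotate_op(img, magnitude):
        # proposed change: random sign instead of always rotating the same way
        return rotate_with_fill(img, magnitude * random.choice([-1, 1]))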

    opened by LXXXXR 1
  • Magnitude normalization for `posterize` and `solarize`

    Magnitudes are in the range [1, 10], but based on https://pillow.readthedocs.io/en/3.0.x/reference/ImageOps.html#PIL.ImageOps.posterize

    solarize: valid range is [0, 256]
    posterize: valid range is [1, 8]

    Shouldn't the magnitudes be normalized in https://github.com/DeepVoltaire/AutoAugment/blob/master/autoaugment.py#L210? For example, line 211 would be

    "solarize": lambda img, magnitude: ImageOps.solarize(img, 256.0 / magnitude)

    opened by marksibrahim 1
  • Why is there no cutoff argument for autocontrast at this line?

    https://github.com/DeepVoltaire/AutoAugment/blob/17d718251f25c0d9413bf30f91b523907924f33a/autoaugment.py#L218

    Hi,

    I notice that for autocontrast the magnitude is not used, while ImageOps.autocontrast has an argument named cutoff that controls the degree of contrast. If we set cutoff=0, the original image would be returned by this function. Does this mean that ImageOps.autocontrast is not used at all, in both the AutoML search phase and the verification phase (the ImageNet experiments)?

    opened by CoinCheung 1
  • Why do you divide the magnitude of the `translate` operation by 331?

    Hi,

    Thanks a lot for kindly publishing the code base!

    https://github.com/DeepVoltaire/AutoAugment/blob/d708e2125cf71a4c08a101e55e5cc569521ffe50/autoaugment.py#L177

    I noticed that in this line the magnitude of the translate operation is divided by 331. Could you tell me why it is implemented this way?

    By the way, are the meanings of the magnitudes and their explanations in the paper on learning augmentations for object detection the same as in this paper? Could I simply modify the combination of policies in this code base to implement the augmentation method proposed there?

    I am looking forward to your guidance :)

    opened by CoinCheung 1
  • Pretrained ImageNet models?

    Hi,

    Would it be possible to release the pretrained ImageNet AutoAugment models? In particular, the ResNet-50 one would be very helpful. I am looking to use it for further downstream work.

    Thanks

    opened by rtaori 1
  • I tried your AutoAugment policies for my image-classification training (ImageNet-pretrained fine-tuning)

    Hi, @DeepVoltaire

    I tried your AutoAugment policies for my image-classification training (ImageNet-pretrained fine-tuning), but the accuracy is worse than with the default augmentation.

    What am I doing wrong? Is AutoAugment not suitable for transfer learning or fine-tuning?

    Thanks at any rate.

    opened by bemoregt 1
  • Missing one policy

    https://github.com/DeepVoltaire/AutoAugment/blob/00e838ee449b194b8cd6e392b117dad1db6d8933/autoaugment.py#L103

    Sub-policy 24, (Equalize, 0.8, 8), (Equalize, 0.6, 3), is missing.
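
    A sketch of the missing entry, following the SubPolicy signature used throughout the file (probability, operation name, and magnitude index for each of the two operations, plus the fill color); SubPolicy and fillcolor are assumed to be in scope as in the repo:

    # hypothetical addition to the policy list at the linked line
    SubPolicy(0.8, "equalize", 8, 0.6, "equalize", 3, fillcolor)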

    opened by xuyuan 1
  • Underfitting problem

    When I used AutoAugment with my custom NN architecture on the CIFAR-10 dataset, I got higher testing accuracy and lower training accuracy. I tried many things, and the underfitting problem appears only when I enable AutoAugment. Can you please explain why this is happening? Just looking for answers...

    opened by Jayan-K-Duggal 1
  • Extended the existing policies to segmentation tasks

    Added an optional constructor parameter to allow users to select the segmentation-adapted policy. Changed the default fill colors, as (0, 0, 0) is a more common choice, in my opinion.

    Thanks for your great work. Your repo has been very useful to me!

    opened by jlcsilva 1
Owner
Philip Popien
Deep Learning Engineer focused on Computer Vision applications. Effective Altruist.