Official repository for "Intriguing Properties of Vision Transformers" (2021)

Overview

Intriguing Properties of Vision Transformers

Muzammal Naseer, Kanchana Ranasinghe, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, & Ming-Hsuan Yang

Paper Link

Abstract: Vision transformers (ViT) have demonstrated impressive performance across various machine vision tasks. These models are based on multi-head self-attention mechanisms that can flexibly attend to a sequence of image patches to encode contextual cues. An important question is how such flexibility (in attending image-wide context conditioned on a given patch) can facilitate handling nuisances in natural images e.g., severe occlusions, domain shifts, spatial permutations, adversarial and natural perturbations. We systematically study this question via an extensive set of experiments encompassing three ViT families and provide comparisons with a high-performing convolutional neural network (CNN). We show and analyze the following intriguing properties of ViT: (a) Transformers are highly robust to severe occlusions, perturbations and domain shifts, e.g., retain as high as 60% top-1 accuracy on ImageNet even after randomly occluding 80% of the image content. (b) The robust performance to occlusions is not due to a bias towards local textures, and ViTs are significantly less biased towards textures compared to CNNs. When properly trained to encode shape-based features, ViTs demonstrate shape recognition capability comparable to that of human visual system, previously unmatched in the literature. (c) Using ViTs to encode shape representation leads to an interesting consequence of accurate semantic segmentation without pixel-level supervision. (d) Off-the-shelf features from a single ViT model can be combined to create a feature ensemble, leading to high accuracy rates across a range of classification datasets in both traditional and few-shot learning paradigms. We show effective features of ViTs are due to flexible and dynamic receptive fields possible via self-attention mechanisms. Our code will be publicly released.

Citation

@misc{naseer2021intriguing,
      title={Intriguing Properties of Vision Transformers}, 
      author={Muzammal Naseer and Kanchana Ranasinghe and Salman Khan and Munawar Hayat and Fahad Shahbaz Khan and Ming-Hsuan Yang},
      year={2021},
      eprint={2105.10497},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

We are in the process of cleaning our code. We will update this repo shortly. Here are the highlights of what to expect :)

  1. Pretrained ViT models trained on Stylized ImageNet (along with distilled ones). We will provide code to use these models for auto-segmentation.
  2. Training and evaluation code for our proposed off-the-shelf ensemble features.
  3. Code to evaluate any model on our proposed occlusion strategies (random, foreground, and background).
  4. Code for evaluating permutation invariance.
  5. Pretrained models to study the effect of varying patch sizes and positional encoding.
  6. Pretrained adversarial patches and code to evaluate them.
  7. Training on Stylized ImageNet.

Requirements

pip install -r requirements.txt
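Note: the pinned PyTorch 1.7 wheels require Python 3.8; running the command above under a different Python version fails with an obscure error message (see the comments below).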

Shape-Biased Models

Our shape-biased pretrained models can be downloaded from here. Code for evaluating their shape bias via auto-segmentation on the PASCAL VOC dataset can be found under scripts. Place the VOC devkit folder under data/voc or fix the paths in the scripts as appropriate.

Running segmentation evaluation on models:

./scripts/eval_segmentation.sh
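For intuition, here is a minimal sketch of the auto-segmentation idea, not the repository's exact procedure: average the final block's CLS-token attention over heads and threshold it into a foreground patch mask. It assumes you can obtain that attention tensor (DINO models, for instance, expose get_last_selfattention); the threshold value and the 14x14 grid (224x224 input, 16x16 patches) are illustrative.

import torch

def cls_attention_mask(attn, threshold=0.6):
    # attn: (batch, heads, tokens, tokens) self-attention from the final block.
    cls_to_patches = attn[:, :, 0, 1:]           # CLS attention over the 196 patch tokens
    cls_to_patches = cls_to_patches.mean(dim=1)  # average across heads
    # Keep the smallest set of patches that holds `threshold` of the attention mass.
    sorted_attn, idx = torch.sort(cls_to_patches, dim=-1, descending=True)
    cum = torch.cumsum(sorted_attn / sorted_attn.sum(dim=-1, keepdim=True), dim=-1)
    keep = cum <= threshold
    mask = torch.zeros_like(cls_to_patches, dtype=torch.bool).scatter(1, idx, keep)
    return mask.reshape(-1, 14, 14)              # one binary patch mask per image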

Visualizing segmentation for images in a given folder:

./scripts/visualize_segmentation.sh

Off-the-Shelf Classification

Training code for the off-the-shelf classification experiments is in classify_metadataset.py. Seven datasets (aircraft, CUB, DTD, fungi, GTSRB, Places365, INAT) are available by default. Set DATA_PATH in classify_md.sh to the appropriate data directory.

Run training and evaluation for a selected dataset (aircraft by default) using a selected model (DeiT-T by default):

./scripts/classify_md.sh
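As a rough illustration of the feature-ensemble idea, the sketch below concatenates class tokens taken after several blocks of a frozen ViT. Attribute names follow timm's VisionTransformer, the block indices are illustrative, and distilled DeiT variants (which carry an extra distillation token) are omitted; classify_metadataset.py implements the exact setup.

import torch

@torch.no_grad()
def ensemble_features(vit, images, block_ids=(8, 9, 10, 11)):
    # Collect the CLS token after selected blocks of a frozen ViT backbone.
    x = vit.patch_embed(images)
    cls = vit.cls_token.expand(x.shape[0], -1, -1)
    x = torch.cat([cls, x], dim=1) + vit.pos_embed
    feats = []
    for i, blk in enumerate(vit.blocks):
        x = blk(x)
        if i in block_ids:
            feats.append(vit.norm(x)[:, 0])  # CLS token after the final norm
    return torch.cat(feats, dim=-1)          # one concatenated feature per image

A linear classifier, e.g. torch.nn.Linear(len(block_ids) * vit.embed_dim, num_classes), is then trained on these frozen features.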

Occlusion Evaluation

Evaluation on ImageNet val set (change path in script) for our proposed occlusion techniques:

./scripts/evaluate_occlusion.sh
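The random variant of the proposed occlusions (PatchDrop) amounts to zeroing a random subset of non-overlapping patches; below is a minimal sketch of that variant. The foreground and background variants additionally need saliency estimates, so see the script for those.

import torch

def random_patch_drop(images, drop_ratio=0.5, patch=16):
    # Zero out a random subset of non-overlapping patch locations per image.
    b, c, h, w = images.shape
    gh, gw = h // patch, w // patch
    num_patches = gh * gw
    num_drop = int(drop_ratio * num_patches)
    out = images.clone()
    for i in range(b):
        for p in torch.randperm(num_patches)[:num_drop].tolist():
            row, col = divmod(p, gw)
            out[i, :, row * patch:(row + 1) * patch, col * patch:(col + 1) * patch] = 0
    return out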

Permutation Invariance Evaluation

Evaluation on ImageNet val set (change path in script) for the shuffle operation:

./scripts/evaluate_shuffle.sh
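The shuffle operation partitions each image into a grid of cells and permutes them randomly; a minimal sketch, assuming a square grid that evenly divides the image:

import torch

def shuffle_patches(images, grid=4):
    # Split each image into grid x grid cells and permute the cells randomly.
    b, c, h, w = images.shape
    ph, pw = h // grid, w // grid
    cells = images.reshape(b, c, grid, ph, grid, pw).permute(0, 2, 4, 1, 3, 5)
    cells = cells.reshape(b, grid * grid, c, ph, pw)
    for i in range(b):
        cells[i] = cells[i][torch.randperm(grid * grid)]
    cells = cells.reshape(b, grid, grid, c, ph, pw).permute(0, 3, 1, 4, 2, 5)
    return cells.reshape(b, c, h, w)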

Varying Patch Sizes and Positional Encoding

Pretrained models to study the effect of varying patch sizes and positional encoding:

DeiT-T Model | Top-1 (%) | Top-5 (%) | Pretrained
No Pos. Enc. | 68.3 | 89.0 | Link
Patch 22 | 68.7 | 89.0 | Link
Patch 28 | 65.2 | 86.7 | Link
Patch 32 | 63.1 | 85.3 | Link
Patch 38 | 55.2 | 78.8 | Link
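A hypothetical loading sketch for these checkpoints (the filename and state-dict layout are placeholders, and the chosen patch size must divide the input resolution, e.g. 32 for 224x224; the repository's vit_models definitions are the authoritative ones):

import torch
from timm.models.vision_transformer import VisionTransformer

# DeiT-T-sized backbone with a non-default patch size (placeholder checkpoint path).
model = VisionTransformer(img_size=224, patch_size=32, embed_dim=192, depth=12, num_heads=3)
state = torch.load("deit_t_patch32.pth", map_location="cpu")
model.load_state_dict(state.get("model", state), strict=False)
model.eval()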

References

Code borrowed from DeiT and DINO repositories.

Comments
  • Question about links of pretrained models

    Hi! First of all, thanks to the authors for the exciting work! I noticed that the checkpoint link of the pretrained 'deit_tiny_distilled_patch16_224' in vit_models/deit.py is different from that of the shape-biased model DeiT-T-SIN (distilled) given in README.md. I thought deit_tiny_distilled_patch16_224 had the same definition as DeiT-T-SIN (distilled). Do they differ in model architecture or training procedure?

    opened by ZhouqyCH 3
  • Two questions on your paper

    Hi. This is heonjin.

    Firstly, big thanks to you for your paper; it is well written and precise! I have two questions about it.

    1. Please take a look at Figure 9. In the 'no positional encoding' experiment, there is a peak at shuffle size 196 for DeiT-T-no-pos. Why is there a peak? I also wonder why accuracy decreases from shuffle size 0 to 64 for DeiT-T-no-pos.

    2. In Figure 14, on the Aircraft (few-shot) and Flower (few-shot) datasets, the CNN performs better than DeiT. Could you explain why?

    Thanks in advance.

    opened by hihunjin 2
  • Attention maps DINO Patchdrop

    Hi, thanks for the amazing paper.

    My question is about how patches are dropped from the image with the DINO model. In the code in evaluate.py on line 132, head_number = 1. I want to understand why this number was chosen (the other params used to index the attention maps seem to make sense). Wouldn't averaging the attention maps across heads give you better segmentation?

    Thanks,

    Ravi

    opened by rraju1 1
  • Support CPU when visualizing segmentations

    Most of the code to visualize segmentation is ready for both GPU and CPU, but I bumped into one place with a hard-coded .cuda() call. I changed it to .to(device) to support CPU.

    opened by cgarbin 0
  • Expand the instructions to install the PASCAL VOC dataset

    I inspected the code to understand the expected directory structure. This note in the README may help other users put the dataset in the right place from the start.

    opened by cgarbin 0
  • Add note to use Python 3.8 because of PyTorch 1.7

    PyTorch 1.7 requires Python 3.8. Refer to the discussion in https://github.com/pytorch/pytorch/issues/47354.

    I suggest adding this note to the README to help reproduce the environment, because running pip install -r requirements.txt with the wrong Python version gives an obscure error message.

    opened by cgarbin 0
  • Amazing work, but can it work on DETR?

    The ViT family shows strong robustness to RandomDrop and domain shift. I'm working on object detection these days; DETR is an end-to-end object detection method that adopts the Transformer encoder-decoder, but the backbone I use is ResNet50, and I can still observe the properties your paper mentions. I want to ask two questions: (1) Do these intriguing properties come from the encoder/decoder part? (2) What's the difference between distribution shift and domain shift (I saw "distribution shift" for the first time in your paper)?

    opened by 1184125805 0