Official repository for "On Improving Adversarial Transferability of Vision Transformers" (2021)

Overview


Muzammal Naseer, Kanchana Ranasinghe, Salman Khan, Fahad Shahbaz Khan, Fatih Porikli

arXiv: https://arxiv.org/abs/2106.04169


Abstract: Vision transformers (ViTs) process input images as sequences of patches via self-attention, a radically different architecture from convolutional neural networks (CNNs). This makes it interesting to study the adversarial feature space of ViT models and their transferability. In particular, we observe that adversarial patterns found via conventional adversarial attacks show very low black-box transferability even for large ViT models. However, we show that this phenomenon is only due to sub-optimal attack procedures that do not leverage the true representation potential of ViTs. A deep ViT is composed of multiple blocks with a consistent architecture comprising self-attention and feed-forward layers, where each block is capable of independently producing a class token. Formulating an attack using only the last class token (the conventional approach) does not directly leverage the discriminative information stored in the earlier tokens, leading to poor adversarial transferability of ViTs. Using the compositional nature of ViT models, we enhance the transferability of existing attacks by introducing two novel strategies specific to the architecture of ViT models. (i) Self-Ensemble: We propose a method to find multiple discriminative pathways by dissecting a single ViT model into an ensemble of networks. This allows explicitly utilizing class-specific information at each ViT block. (ii) Token Refinement: We then propose to refine the tokens to further enhance the discriminative capacity at each block of the ViT. Our token refinement systematically combines the class tokens with structural information preserved within the patch tokens. An adversarial attack, when applied to such refined tokens within the ensemble of classifiers found in a single vision transformer, has significantly higher transferability and thereby brings out the true generalization potential of the ViT's adversarial space.

Contents

  1. Quickstart
  2. Self-Ensemble
  3. Token Refinement Module
  4. Training TRM
  5. References
  6. Citation

Requirements

pip install -r requirements.txt

Quickstart

To directly run demo transfer attacks using the baseline, ensemble, and ensemble-with-TRM strategies, use the following script. The path to the dataset must be updated first.

./scripts/run_attack.sh

Dataset

We use a subset of the ImageNet validation set (5000 images) containing 5 random samples from each class that are correctly classified by both ResNet50 and ViT-small. This dataset is used for all experiments. The list of images is provided in data/image_list.json. In the following code, setting the path to the original ImageNet 2012 validation set is sufficient; only this subset of images will be used for evaluation.
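For reference, here is a minimal sketch of how that subset selection could be applied to a full ImageNet validation folder (the repository's dataset.py performs an equivalent filter through torchvision's ImageFolder; the snippet below is illustrative only):

import json

# data/image_list.json stores the selected images as "wnid/filename" strings,
# e.g. "n01820546/ILSVRC2012_val_00027008.JPEG".
with open("data/image_list.json", "r") as f:
    selected = set(json.load(f)["images"])

def in_subset(path: str) -> bool:
    # Keep an image only if its "wnid/filename" tail appears in the list.
    parts = path.replace("\\", "/").split("/")
    return "/".join(parts[-2:]) in selected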

Self-Ensemble Strategy

Run a transfer attack using our ensemble strategy as follows. DATA_DIR points to the root directory containing the ImageNet validation images (the original ImageNet 2012 validation set). We support the attack types FGSM, PGD, MI-FGSM, DIM, and TI by default. Note that any other attack can be applied to ViT models using the self-ensemble strategy.

python test.py \
  --test_dir "$DATA_DIR" \
  --src_model deit_tiny_patch16_224 \
  --tar_model tnt_s_patch16_224  \
  --attack_type mifgsm \
  --eps 16 \
  --index "all" \
  --batch_size 128

For other model families, the pretrained models will have to be downloaded and the paths updated in the relevant files under vit_models.
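Conceptually, the self-ensemble treats the class token produced by each block (passed through a shared classification head) as its own classifier and attacks all of them jointly, rather than only the final one. A minimal single-step sketch of that idea, assuming a hypothetical model wrapper that returns one logits tensor per block (this is not the repository's exact interface):

import torch
import torch.nn.functional as F

def self_ensemble_fgsm(model, x, y, eps=16 / 255):
    # `model(x)` is assumed to return a list of [B, num_classes] logits,
    # one entry per ViT block (each block's class token through a shared head).
    x_adv = x.clone().detach().requires_grad_(True)
    loss = sum(F.cross_entropy(logits, y) for logits in model(x_adv))
    loss.backward()
    # Single FGSM step on the summed per-block loss, clipped to the valid pixel range.
    return torch.clamp(x_adv + eps * x_adv.grad.sign(), 0, 1).detach()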

Token Refinement Module

For the self-ensemble attack with TRM, run the following. The same attack-type options are available, and DATA_DIR must point to the data directory.

python test.py \
  --test_dir "$DATA_DIR" \
  --src_model tiny_patch16_224_hierarchical \
  --tar_model tnt_s_patch16_224  \
  --attack_type mifgsm \
  --eps 16 \
  --index "all" \
  --batch_size 128
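At a high level, token refinement fuses each block's class token with structural information carried by the patch tokens before classification, as described in the abstract above. The toy module below illustrates only that general idea and is not the repository's actual TRM implementation:

import torch
import torch.nn as nn

class ToyTokenRefinement(nn.Module):
    # Illustrative only: combine the class token with a summary of the patch tokens.
    def __init__(self, dim):
        super().__init__()
        self.patch_proj = nn.Linear(dim, dim)
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, tokens):
        # tokens: [B, 1 + N, dim]; index 0 is the class token, the rest are patch tokens.
        cls_tok, patch_tok = tokens[:, 0], tokens[:, 1:]
        patch_summary = self.patch_proj(patch_tok.mean(dim=1))
        return self.fuse(torch.cat([cls_tok, patch_summary], dim=-1))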

Pretrained TRM modules

Model  | Avg Acc Inc | Pretrained
DeiT-T | 12.43       | Link
DeiT-S | 15.21       | Link
DeiT-B | 16.70       | Link

Average accuracy increase (Avg Acc Inc) refers to the improvement in the discriminative ability of each ViT block, measured by top-1 accuracy on the ImageNet validation set using each block's output. The increase after adding TRM, averaged across all blocks, is reported.
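In other words, the reported number is the mean over blocks of the per-block top-1 gain, roughly (treating accuracies as percentages):

def avg_acc_increase(acc_with_trm, acc_without_trm):
    # Per-block top-1 accuracies, ordered by block index.
    gains = [w - wo for w, wo in zip(acc_with_trm, acc_without_trm)]
    return sum(gains) / len(gains)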

Training TRM

For training the TRM module, use the following:

./scripts/train_trm.sh

Set the experiment-name variable (EXP_NAME), which is used for logging checkpoints, and update DATA_PATH to point to the ImageNet 2012 root directory (containing the /train and /val folders). We train using a single GPU. We initialize the weights from a pre-trained model and update only the TRM weights.
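The "update only the TRM weights" step can be sketched as follows (the parameter naming and the choice of SGD here are assumptions for illustration; the actual setup lives in train_trm.py):

import torch

def build_trm_optimizer(model, lr=0.01):
    # Freeze every backbone parameter and leave only TRM parameters trainable.
    # "trm" is a hypothetical substring identifying refinement-module parameters.
    for name, param in model.named_parameters():
        param.requires_grad = "trm" in name
    trm_params = [p for p in model.parameters() if p.requires_grad]
    return torch.optim.SGD(trm_params, lr=lr, momentum=0.9)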

To use other models, replace the model name and the pretrained model path as shown below:

python -m torch.distributed.launch \
  --nproc_per_node=1 \
  --master_port="$RANDOM" \
  --use_env train_trm.py \
  --exp "$EXP_NAME" \
  --model "small_patch16_224_hierarchical" \
  --lr 0.01 \
  --batch-size 256 \
  --start-epoch 0 \
  --epochs 12 \
  --data "$DATA_PATH" \
  --pretrained "https://dl.fbaipublicfiles.com/deit/deit_small_patch16_224-cd65a155.pth" \
  --output_dir "checkpoints/$EXP_NAME"

References

Code borrowed from the DeiT repository and the TIMM library. We thank them for their wonderful codebases.

Citation

If you find our work, this repository, or the pretrained transformers with refined tokens useful, please consider giving a star and a citation.

@misc{naseer2021improving,
      title={On Improving Adversarial Transferability of Vision Transformers}, 
      author={Muzammal Naseer and Kanchana Ranasinghe and Salman Khan and Fahad Shahbaz Khan and Fatih Porikli},
      year={2021},
      eprint={2106.04169},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
Comments
  • ImageNet dataset cannot be loaded

    I tested the code (run_attack.sh) and found that I cannot load the ImageNet dataset. I dug into it and found that it may be because, in dataset.py, in class AdvImageNet, self.image_list is a set loaded from the predefined data/image_list.json, so an element string in it looks like this: n01820546/ILSVRC2012_val_00027008.JPEG. However, the is_valid_file function used in the super().__init__ call keeps only the last 38 characters of the image file path, e.g. ILSVRC2012_val_00027008.JPEG, to check whether it is listed in self.image_list. Thus, the function will always return False, as there is no class folder in that string, and no image will be loaded.

    A simple workaround works (at least in my testing):

    import json

    import torchvision


    class AdvImageNet(torchvision.datasets.ImageFolder):

        def __init__(self, image_list="data/image_list.json", *args, **kwargs):
            # Keep only the file names, dropping the "wnid/" class-folder prefix.
            self.image_list = list(json.load(open(image_list, "r"))["images"])
            for i in range(len(self.image_list)):
                self.image_list[i] = self.image_list[i].split('/')[1]
            super(AdvImageNet, self).__init__(
                is_valid_file=self.is_valid_file, *args, **kwargs)

        def is_valid_file(self, x: str) -> bool:
            # Compare the tail of the image path against the allowed file names.
            return x[-38:] in self.image_list

    Another possibility is that the ImageNet directory structure expected by this repo is different from mine:

      val/ <-- designated as DATA_DIR in run_attack.sh
        n01820546/
          ILSVRC2012_val_00027008.JPEG
    

    In this case, could you specify how the dataset should be structured? Thank you!

    opened by HigasaOR