Beyond ImageNet Attack: Towards Crafting Adversarial Examples for Black-box Domains (ICLR'2022)
This is the PyTorch code for our paper "Beyond ImageNet Attack: Towards Crafting Adversarial Examples for Black-box Domains". In this paper, with knowledge of the ImageNet domain only, we propose the Beyond ImageNet Attack (BIA) to investigate adversarial transferability towards black-box domains (unknown classification tasks).
Requirements
- Python 3.7
- PyTorch 1.8.0
- torchvision 0.9.0
- numpy 1.20.2
- scipy 1.7.0
- pandas 1.3.0
- opencv-python 4.5.2.54
- joblib 0.14.1
- Pillow 6.1
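If you prefer installing the pinned versions with pip, a one-line install along these lines should work (the package names are assumed to match PyPI; adjust the torch/torchvision builds for your CUDA version):
pip install torch==1.8.0 torchvision==0.9.0 numpy==1.20.2 scipy==1.7.0 pandas==1.3.0 opencv-python==4.5.2.54 joblib==0.14.1 Pillow==6.1.0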
Dataset
- Download the ImageNet training dataset.
  - ImageNet Training Set.
- Download the testing datasets.
  - ImageNet Validation Set. We also provide a precomputed ImageNet validation dataset here (download it to the `imagenet` folder and then run `convert.py`; thanks @aaron-xichen).
  - CUB-200-2011
  - Stanford Cars
  - FGVC Aircraft
  - CIFAR-10, CIFAR-100, STL-10 and SVHN can be automatically downloaded via `torchvision.datasets` (see the sketch below).
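The torchvision benchmarks can be fetched automatically; a minimal sketch, assuming a `./data` download root (an illustrative path, not one prescribed by this repo):

```python
import torchvision.datasets as datasets
import torchvision.transforms as transforms

# './data' is an illustrative download root, not a path used by this repo.
transform = transforms.ToTensor()
cifar10  = datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)
cifar100 = datasets.CIFAR100(root='./data', train=False, download=True, transform=transform)
stl10    = datasets.STL10(root='./data', split='test', download=True, transform=transform)
svhn     = datasets.SVHN(root='./data', split='test', download=True, transform=transform)
```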
Note: after downloading CUB-200-2011, Stanford Cars and FGVC Aircraft, you should set `self.rawdata_root` (DCL_finegrained/config.py, lines 59-75) to your saved path.
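For reference, that edit might look like the sketch below; the paths are placeholders for wherever you saved the datasets, and the surrounding code in `DCL_finegrained/config.py` may differ:

```python
# Illustrative edit in DCL_finegrained/config.py (paths are placeholders for your own)
self.rawdata_root = '/your/path/to/CUB_200_2011'          # for CUB-200-2011
# self.rawdata_root = '/your/path/to/stanford_cars'       # for Stanford Cars
# self.rawdata_root = '/your/path/to/fgvc-aircraft-2013b' # for FGVC Aircraft
```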
Target model
The checkpoints of the target models should be put into the `model` folder.
- CUB-200-2011, Stanford Cars and FGVC Aircraft checkpoints can be downloaded from here.
- CIFAR-10, CIFAR-100, STL-10 and SVHN can be automatically downloaded.
- ImageNet pre-trained models are available via torchvision.
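These ImageNet models can be instantiated directly; a minimal sketch using the `pretrained=True` interface of torchvision 0.9.0 (weights are downloaded and cached automatically):

```python
import torchvision.models as models

# ImageNet pre-trained models; torchvision downloads and caches the weights.
vgg16       = models.vgg16(pretrained=True).eval()
vgg19       = models.vgg19(pretrained=True).eval()
resnet152   = models.resnet152(pretrained=True).eval()
densenet169 = models.densenet169(pretrained=True).eval()
```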
Pretrained-Generators
Adversarial generators are trained against the following four ImageNet pre-trained models.
- VGG19
- VGG16
- ResNet152
- DenseNet169
After training finishes, the resulting generator is saved to the `saved_models` folder. You can also download our pretrained generators from here.
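Loading a downloaded generator checkpoint might look roughly like the sketch below; the `GeneratorResnet` class and the checkpoint filename are assumptions (the architecture style follows @Muzammal-Naseer's generator codebase), so check this repo for the actual names:

```python
import torch
from generators import GeneratorResnet  # hypothetical import; verify the module name in this repo

# Filename is illustrative; pick the checkpoint matching your surrogate model.
netG = GeneratorResnet()
netG.load_state_dict(torch.load('saved_models/netG_vgg16.pth', map_location='cpu'))
netG.eval()
```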
Train
Train the generator using vanilla BIA (RN: False, DA: False)
python train.py --model_type vgg16 --train_dir your_imagenet_path --RN False --DA False
`your_imagenet_path` is the path where you downloaded the ImageNet training set.
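The same flags should toggle the RN and DA variants (assuming `--RN`/`--DA` accept True the same way they accept False):
python train.py --model_type vgg16 --train_dir your_imagenet_path --RN True --DA False
python train.py --model_type vgg16 --train_dir your_imagenet_path --RN False --DA True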
Evaluation
Evaluate the performance of vanilla BIA (RN: False, DA: False)
python eval.py --model_type vgg16 --RN False --DA False
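Conceptually, evaluation crafts adversarial examples with the trained generator and measures the accuracy drop on each black-box domain. A heavily simplified sketch of that crafting step follows; the l_inf budget and projection details are illustrative, not the repo's exact settings:

```python
import torch

def craft_adversarial(netG, x, eps=10 / 255):
    """Illustrative generator-based attack step (not the repo's eval.py)."""
    adv = netG(x)                                      # generator proposes an adversarial image
    adv = torch.min(torch.max(adv, x - eps), x + eps)  # project into the l_inf ball around x
    return torch.clamp(adv, 0.0, 1.0)                  # keep pixel values in the valid range
```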
Citing this work
If you find this work useful in your research, please consider citing:
@inproceedings{Zhang2022BIA,
author = {Qilong Zhang and
Xiaodan Li and
Yuefeng Chen and
Jingkuan Song and
Lianli Gao and
Yuan He and
Hui Xue},
title = {Beyond ImageNet Attack: Towards Crafting Adversarial Examples for Black-box Domains},
booktitle = {International Conference on Learning Representations},
year = {2022}
}
Acknowledgements
Thanks to @aaron-xichen and @Muzammal-Naseer for sharing their code.