Adversarial Attacks are Reversible via Natural Supervision

Overview

Code for the ICCV 2021 paper "Adversarial Attacks Are Reversible With Natural Supervision". The key idea: adversarial perturbations also damage the natural, intrinsic structure of an image, and that structure can serve as a repair signal. At test time, before classification, the defense finds a small reverse perturbation that minimizes a self-supervised contrastive loss on the input, which largely undoes the attack.
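
A minimal sketch of the test-time reversal loop, assuming an ssl_loss_fn that computes the contrastive loss of the trained SSL branch over random augmentations of its input (the function name and hyperparameter values here are illustrative, not the repository's exact implementation):

import torch

def reverse_attack(x_adv, ssl_loss_fn, epsilon=8/255, steps=40, step_size=1/255):
    # r is the reverse perturbation; driving the self-supervised loss down
    # restores the image's natural structure and counteracts the attack.
    r = torch.zeros_like(x_adv, requires_grad=True)
    for _ in range(steps):
        loss = ssl_loss_fn(x_adv + r)
        grad, = torch.autograd.grad(loss, r)
        with torch.no_grad():
            r -= step_size * grad.sign()               # signed descent on the SSL loss
            r.clamp_(-epsilon, epsilon)                # keep the correction small (L-inf ball)
            r.copy_((x_adv + r).clamp(0, 1) - x_adv)   # keep the repaired image valid
    return (x_adv + r).detach()

The repaired image x_adv + r is then passed to the robust classifier as usual.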

Citation

@InProceedings{Mao_2021_ICCV,
    author    = {Mao, Chengzhi and Chiquier, Mia and Wang, Hao and Yang, Junfeng and Vondrick, Carl},
    title     = {Adversarial Attacks Are Reversible With Natural Supervision},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {661-671}
}

Setup

  • Create the environment from the environment.yml file: conda env create -f environment.yml
  • Activate the environment: conda activate myenv

CIFAR-10 Experiment

  • Choose the normalization function that matches your backbone in cifar10_defense.py L23-26 (see the sketch after this list for what the two choices look like).

  • cifar10_defense.py handles both training the SSL branch and testing the reversal defense: omit --eval_only to train the SSL branch, and pass it to evaluate the defense.
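
A sketch of the two normalization choices (function names and channel statistics here are illustrative; see cifar10_defense.py L23-26 for the actual code):

import torch

# Commonly used CIFAR-10 channel statistics (illustrative values).
CIFAR10_MEAN = torch.tensor([0.4914, 0.4822, 0.4465]).view(3, 1, 1)
CIFAR10_STD = torch.tensor([0.2471, 0.2435, 0.2616]).view(3, 1, 1)

def normalize_meanstd(x):
    # per-channel mean/std normalization, for backbones trained this way
    return (x - CIFAR10_MEAN.to(x.device)) / CIFAR10_STD.to(x.device)

def normalize_identity(x):
    # no rescaling: some checkpoints (e.g. Carmon et al.'s) expect raw [0, 1] inputs
    return x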

Example commands for running our method:

Semi-SL (Carmon et al.)

  • Do not apply mean/std normalization; Carmon et al.'s model uses raw [0, 1] inputs.

  • Download Carmon et al.'s robust backbone classifier checkpoint (cifar10_rst_adv.pt.ckpt) and our SSL model (ssl_model_130.pth).

  • Train SSL: CUDA_VISIBLE_DEVICES=0 python cifar10_defense.py --fname unlab_cifar10_srn28-10_carmon --md_path /local/rcs/mcz/2021Spring/RobPretrained/unlabeled-rob/cifar10_rst_adv.pt.ckpt --carmon. If you use our checkpoint, you can skip this step.

  • Test: CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python cifar10_defense.py --fname test --md_path /local/rcs/mcz/2021Spring/RobPretrained/unlabeled-rob/cifar10_rst_adv.pt.ckpt --carmon --eval_only --ssl_model_path /local/rcs/mcz/2021Spring/SSRobdata/unlab_cifar10_srn28-10_carmon/March1/ssl_model_130.pth

  • We offer the PGD, CW, and BIM attacks; a minimal PGD sketch is given after this list.

  • For AutoAttack, run the following: CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python cifar10_defense_rebAA.py --fname test --md_path /proj/vondrick/mcz/SSRobust/Pretrained_model/unlabeled-rob/cifar10_rst_adv.pt.ckpt --carmon --eval_only --ssl_model_path /proj/vondrick/mcz/SSRobust/Ours/unlab_cifar10_srn28-10_carmon/March1/ssl_model_130.pth --attack-iters 1 --n_views 4
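
A minimal L-inf PGD sketch for reference, a generic implementation rather than the repository's exact attack code (model is assumed to map [0, 1] images to logits; epsilon and step sizes are illustrative):

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8/255, step_size=2/255, steps=10):
    # random start inside the epsilon ball
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv + step_size * grad.sign()                        # ascend the loss
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon)  # project back to the ball
        x_adv = x_adv.clamp(0, 1)                                      # stay a valid image
    return x_adv.detach()

BIM is the same loop without the random start; CW instead optimizes a margin-based loss.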

Comments
  • Detailed command to run RO with PreRes-18

    I used the RO public checkpoint cifar10_l2_eps128.pth and trained the preResnet18SSL model with env CUDA_VISIBLE_DEVICES=0,1,2,3 python3 cifar10_defense.py --md_path=checkpoint/cifar10_l2_eps128.pth --batch-size=1024 --contrastive_bs=1024 --epochs=100 --res18, but I failed to get results similar to those in the paper (28.45 vs. 30.99 in my case, 52.40 vs. 54.59 in the paper).

    Is there any hyperparameter I need to change here? Thx!

    opened by VertexC 6
  • training or eval question

    Hi. Thank you for sharing the code! I'm having trouble with training or evaluation. Running the provided code, it stayed here for several hours, even after I reduced the contrastive batch size: clean contrastive=6.242605 adv contrastive=6.484784 contrastive bs 64

    opened by devicerxx 5
  • License?

    Thanks for sharing the code for your work! Could you add a license to it, such as the MIT or Apache licenses?

    Clear licensing makes it more straightforward to run (and therefore cite) your research. This is especially the case for researchers in industry.

    opened by shelhamer 1
  • Environment configuration problems

    OSError: [WinError 123] The filename, directory name, or volume label syntax is incorrect: '/local/rcs/mcz/spring/ssrobdata\train_ssl\2022-10-03_19:16:04da3ce48f'. After checking, the error is still reported, even though the file name appears to conform to Windows.

    opened by zhongyubunenghunl 2
Owner
Computer Vision Lab at Columbia University