Multitask Learning Strengthens Adversarial Robustness

Overview

This repository contains the code for our ECCV 2020 paper:

```bibtex
@inproceedings{mao2020multitask,
  author    = {Chengzhi Mao and
               Amogh Gupta and
               Vikram Nitin and
               Baishakhi Ray and
               Shuran Song and
               Junfeng Yang and
               Carl Vondrick},
  title     = {Multitask Learning Strengthens Adversarial Robustness},
  booktitle = {Computer Vision - {ECCV} 2020 - 16th European Conference, Glasgow,
               UK, August 23-28, 2020, Proceedings, Part {II}},
  series    = {Lecture Notes in Computer Science},
  volume    = {12347},
  pages     = {158--174},
  publisher = {Springer},
  year      = {2020},
  url       = {https://doi.org/10.1007/978-3-030-58536-5\_10},
  doi       = {10.1007/978-3-030-58536-5\_10},
}
```

Demo: robustness under multitask attacks

Download the Cityscapes dataset from the Cityscapes website.

Download the pretrained DRN-D-22 model from the DRN model zoo.

Set the paths to the data and the model in demo_mtlrobust.py.

Run the demo to see that the model's overall robustness increases as the output dimension increases, which helps explain why segmentation is inherently robust:

```bash
python demo_mtlrobust.py
```

To use the gradient-norm measurement of robustness, set get_grad=True. To instead see the actual robust accuracy of the model, set test_acc_output_dim=False.
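Below is a minimal sketch of what a gradient-norm robustness measurement can look like; the model interface and loss are illustrative assumptions, not the exact code behind get_grad=True.

```python
# Minimal sketch (illustrative, not the repo's exact API): the norm of the
# loss gradient w.r.t. the input is a common proxy for robustness; a smaller
# input-gradient norm means small input perturbations change the loss less.
import torch
import torch.nn.functional as F

def input_gradient_norm(model, images, labels):
    """Mean L2 norm of d(loss)/d(input) over a batch."""
    images = images.clone().requires_grad_(True)
    logits = model(images)
    loss = F.cross_entropy(logits, labels)  # per-pixel CE for segmentation
    grad, = torch.autograd.grad(loss, images)
    return grad.flatten(1).norm(p=2, dim=1).mean().item()
```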

Cityscapes

Data preprocessing

Run python data_resize_cityscape.py to resize the images to a smaller resolution.
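For reference, the resizing step amounts to something like the sketch below; the paths and target size are placeholders, so check data_resize_cityscape.py for the values the repo actually uses.

```python
# Hypothetical sketch of the preprocessing step: downscale every image
# under SRC into a mirrored directory tree under DST. Paths and SIZE are
# placeholders, not the repo's actual settings.
import os
from PIL import Image

SRC, DST, SIZE = "cityscapes/leftImg8bit", "cityscapes_small", (680, 340)

for root, _, files in os.walk(SRC):
    for name in files:
        if not name.endswith(".png"):
            continue
        out_dir = root.replace(SRC, DST, 1)
        os.makedirs(out_dir, exist_ok=True)
        # Note: label maps (e.g. gtFine) should instead be resized with
        # Image.NEAREST to keep class ids intact.
        Image.open(os.path.join(root, name)).resize(SIZE).save(
            os.path.join(out_dir, name))
```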

Train a robust model against single-task attacks

  1. Set the path to the data in config/drn_d_22_cityscape_config.json.

  2. Run cityscape_example.sh to train the main task together with an auxiliary task for robustness; a sketch of such a training step follows this list.
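The sketch below shows what one such training step can look like: a PGD attack is crafted against the main task only, and the network is then updated on the joint main + auxiliary loss. All names (the two-headed model, the attack budget, the L1 depth loss) are assumptions for illustration, not the exact setup of cityscape_example.sh.

```python
# Hedged sketch of multitask adversarial training against a single-task
# (main-task) attack. The model is assumed to return (main_logits, aux_out).
import torch
import torch.nn.functional as F

def pgd_main_task(model, x, y_main, eps=8/255, alpha=2/255, steps=5):
    """L-inf PGD that perturbs x to maximize only the main-task loss."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        main_logits, _ = model(x_adv)
        loss = F.cross_entropy(main_logits, y_main)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project back to the ball
    return x_adv.detach()

def train_step(model, optimizer, x, y_main, y_aux, aux_weight=1.0):
    x_adv = pgd_main_task(model, x, y_main)
    main_logits, aux_out = model(x_adv)
    loss = F.cross_entropy(main_logits, y_main) \
        + aux_weight * F.l1_loss(aux_out, y_aux)  # e.g. depth as auxiliary
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```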

Taskonomy

Data preprocessing

You can use our preprocessed data, or prepare it from scratch:

  1. Download the official raw data.

  2. Run python data_resize_taskonomy.py to resize the images to a smaller resolution.

  3. Rename each segment_semantic folder to segmentsemantic; see the sketch after this list.
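The rename in step 3 can be done with a few lines of Python; the root path below is a placeholder.

```python
# Rename every segment_semantic folder to segmentsemantic under a
# (placeholder) Taskonomy root directory.
import os

ROOT = "taskonomy_data"
for root, dirs, _ in os.walk(ROOT):
    if "segment_semantic" in dirs:
        os.rename(os.path.join(root, "segment_semantic"),
                  os.path.join(root, "segmentsemantic"))
        # keep the in-progress walk consistent with the rename
        dirs[dirs.index("segment_semantic")] = "segmentsemantic"
```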

Train a robust model against single-task attacks

  1. Set the path to the data in config/resnet18_taskonomy_config.json.

  2. Run taskonomy_example.sh to train the main task together with an auxiliary task for robustness. Different tasks use different setups; refer to our paper and the supplementary material for details.

Model evaluation

We offer pretrained models for download: a Cityscapes segmentation + depth model and a Taskonomy segmentation demo model.

After setting the paths to your downloaded models in test_cityscapes_seg.py and test_taskonomy_seg.py, run python test_cityscapes_seg.py and python test_taskonomy_seg.py to evaluate the robustness of the multitask models under single-task attacks.
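The evaluation loop is roughly the following sketch: attack each batch on the main (segmentation) task and report per-pixel accuracy on the adversarial inputs. The loader, model interface, and attack function are assumptions; the real logic lives in test_cityscapes_seg.py and test_taskonomy_seg.py.

```python
# Hedged sketch of robust-accuracy evaluation under a single-task attack.
import torch

@torch.no_grad()
def pixel_accuracy(logits, labels, ignore_index=255):
    preds = logits.argmax(dim=1)
    valid = labels != ignore_index
    return (preds[valid] == labels[valid]).float().mean().item()

def evaluate_robust(model, loader, attack_fn):
    model.eval()
    accs = []
    for images, labels in loader:
        x_adv = attack_fn(model, images, labels)  # e.g. PGD on the main task
        logits, _ = model(x_adv)                  # main-task head first
        accs.append(pixel_accuracy(logits, labels))
    return sum(accs) / len(accs)
```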

Pretrained models for the other Taskonomy tasks are coming soon.

Acknowledgement

Our code builds on DRN (https://github.com/fyu/drn/blob/master/drn.py) and the Taskonomy task-grouping code (https://github.com/tstandley/taskgrouping). We thank the authors for open-sourcing their code.
