Hierarchical-Bayesian-Defense - Towards Adversarial Robustness of Bayesian Neural Network through Hierarchical Variational Inference (OpenReview)

Overview

Towards Adversarial Robustness of Bayesian Neural Network through Hierarchical Variational Inference [paper]

The baseline of this code is the official repository for this paper. We only replace the BNN regularizer, swapping the standard ELBO for an enhanced Bayesian regularizer based on the hierarchical ELBO.
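
As a rough sketch (our notation, not taken verbatim from the paper): a standard variational BNN is trained by maximizing the ELBO, while the hierarchical variant places a hyper-prior p(\theta) over the prior parameters and adds a matching KL term:

\mathcal{L}_{\text{ELBO}} = \mathbb{E}_{q(w)}\left[\log p(\mathcal{D} \mid w)\right] - \mathrm{KL}\left(q(w) \,\Vert\, p(w)\right)

\mathcal{L}_{\text{h-ELBO}} = \mathbb{E}_{q(w,\theta)}\left[\log p(\mathcal{D} \mid w)\right] - \mathbb{E}_{q(\theta)}\left[\mathrm{KL}\left(q(w) \,\Vert\, p(w \mid \theta)\right)\right] - \mathrm{KL}\left(q(\theta) \,\Vert\, p(\theta)\right)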

Citation

If you find this work helpful, please cite it as:

@misc{lee2021towards,
  title={Towards Adversarial Robustness of Bayesian Neural Network through Hierarchical Variational Inference},
  author={Byung-Kwan Lee and Youngjoon Yu and Yong Man Ro},
  year={2021},
  url={https://openreview.net/forum?id=Cue2ZEBf12}
}

Hierarchical-Bayesian-Defense

Dataset

  • CIFAR10
  • STL10
  • CIFAR100
  • Tiny-ImageNet

Network

  • VGG16 (for CIFAR-10/CIFAR-100/Tiny-ImageNet)
  • Aaron (for STL10)
  • WideResNet (for CIFAR-10/100)

Attacks (via torchattacks)

  • PGD attack
  • EOT-PGD attack
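
For reference, a minimal sketch of how these attacks are typically instantiated with the torchattacks package (the budgets below are placeholders, not values fixed by this repo; `model`, `images`, and `labels` are assumed to exist):

import torchattacks

# PGD with an L-inf budget eps and step size alpha
pgd = torchattacks.PGD(model, eps=0.03, alpha=0.007, steps=10, random_start=True)
# EOT-PGD additionally averages gradients over eot_iter stochastic forward
# passes, which matters when attacking a stochastic model such as a BNN
eot_pgd = torchattacks.EOTPGD(model, eps=0.03, alpha=0.007, steps=10, eot_iter=10)

adv_images = eot_pgd(images, labels)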

Defense methods

  • adv: Adversarial training
  • adv_vi: Adversarial training with a Bayesian neural network
  • adv_hvi: Adversarial training with an enhanced Bayesian neural network based on the hierarchical ELBO

How to Train

1. Adversarial training

Run train_adv.sh

lr=0.01
steps=10
max_norm=0.03
data=tiny # or `cifar10`, `stl10`, `cifar100`
root=./datasets
model=vgg # vgg for `cifar10`, `cifar100`, or `tiny`; aaron for `stl10`; wide for `cifar10` or `cifar100`
model_out=./checkpoint/${data}_${model}_${max_norm}_adv
echo "Loading: " ${model_out}
CUDA_VISIBLE_DEVICES=0 python ./main_adv.py \
                        --lr ${lr} \
                        --step ${steps} \
                        --max_norm ${max_norm} \
                        --data ${data} \
                        --model ${model} \
                        --root ${root} \
                        --model_out ${model_out}.pth
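
The flags map onto standard PGD adversarial training: lr is the SGD learning rate, steps the number of PGD iterations, and max_norm the L-inf budget eps (0.03, roughly 8/255). A minimal sketch of the inner loop, as a paraphrase rather than the repo's exact code (`model`, `x`, and `y` are assumed):

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.03, alpha=0.007, steps=10):
    # projected gradient ascent on the loss inside the L-inf eps-ball around x
    x = x.detach()
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)  # project back
    return x_adv

# one adversarial training step: minimize the loss on the perturbed batch
loss = F.cross_entropy(model(pgd_attack(model, x, y)), y)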

2. Adversarial training with a BNN

Run train_adv_vi.sh

lr=0.01
steps=10
max_norm=0.03
sigma_0=0.1
init_s=0.1
data=tiny # or `cifar10`, `stl10`, `cifar100`
root=./datasets
model=vgg # vgg for `cifar10`, `cifar100`, or `tiny`; aaron for `stl10`; wide for `cifar10` or `cifar100`
model_out=./checkpoint/${data}_${model}_${max_norm}_adv_vi
echo "Loading: " ${model_out}
CUDA_VISIBLE_DEVICES=0 python3 ./main_adv_vi.py \
                        --lr ${lr} \
                        --step ${steps} \
                        --max_norm ${max_norm} \
                        --sigma_0 ${sigma_0} \
                        --init_s ${init_s} \
                        --data ${data} \
                        --model ${model} \
                        --root ${root} \
                        --model_out ${model_out}.pth
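
Here sigma_0 is the standard deviation of the Gaussian weight prior and init_s the initial posterior standard deviation (our reading of the flags; the repo's layers may be parameterized differently). A minimal reparameterized Gaussian layer for illustration:

import math
import torch
import torch.nn as nn

class BayesLinear(nn.Module):
    # mean-field posterior q(w) = N(mu, s^2) against the prior p(w) = N(0, sigma_0^2)
    def __init__(self, n_in, n_out, sigma_0=0.1, init_s=0.1):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(n_out, n_in))
        self.log_s = nn.Parameter(torch.full((n_out, n_in), math.log(init_s)))
        self.sigma_0 = sigma_0

    def forward(self, x):
        # reparameterization trick: w = mu + s * eps, eps ~ N(0, I)
        w = self.mu + self.log_s.exp() * torch.randn_like(self.mu)
        return x @ w.t()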

3. Adversarial training with an enhanced Bayesian regularizer based on the hierarchical ELBO

Run train_adv_hvi.sh

lr=0.01
steps=10
max_norm=0.03
sigma_0=0.1
init_s=0.1
data=tiny # or `cifar10`, `stl10`, `cifar100`
root=./datasets
model=vgg # vgg for `cifar10`, `cifar100`, or `tiny`; aaron for `stl10`; wide for `cifar10` or `cifar100`
model_out=./checkpoint/${data}_${model}_${max_norm}_adv_hvi
echo "Loading: " ${model_out}
CUDA_VISIBLE_DEVICES=0 python3 ./main_adv_hvi.py \
                        --lr ${lr} \
                        --step ${steps} \
                        --max_norm ${max_norm} \
                        --sigma_0 ${sigma_0} \
                        --init_s ${init_s} \
                        --data ${data} \
                        --model ${model} \
                        --root ${root} \
                        --model_out ${model_out}.pth
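
The flags are identical to train_adv_vi.sh; what changes is the regularizer. Schematically (the function and argument names below are ours, not the repo's API), the objective adds the hyper-prior KL term of the hierarchical ELBO on top of the usual weight KL:

import torch.nn.functional as F

def hierarchical_elbo_loss(logits, y, kl_weights, kl_hyper, beta=1.0):
    # data term + KL(q(w) || p(w|theta)) + KL(q(theta) || p(theta)),
    # with beta trading off data fit against the two KL regularizers
    return F.cross_entropy(logits, y) + beta * (kl_weights + kl_hyper)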

How to Test

Testing adversarial robustness

Run acc_under_attack.sh

model=vgg # vgg for `cifar10`, `cifar100`, or `tiny-imagenet`; aaron for `stl10`; wide for `cifar10` or `cifar100`
defense=adv_hvi # or `adv_vi`, `adv`
data=tiny-imagenet # or `cifar10`, `stl10`, `cifar100`
root=./datasets
n_ensemble=50
step=10
max_norm=0.03
echo "Loading" ./checkpoint/${data}_${model}_${max_norm}_${defense}.pth

CUDA_VISIBLE_DEVICES=0 python3 acc_under_attack.py \
    --model $model \
    --defense $defense \
    --data $data \
    --root $root \
    --n_ensemble $n_ensemble \
    --step $step \
    --max_norm $max_norm
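
Here n_ensemble is the number of stochastic forward passes averaged per input when measuring robust accuracy (our reading of the flag). Schematically:

import torch

@torch.no_grad()
def ensemble_predict(model, x, n_ensemble=50):
    # average the softmax output of n_ensemble stochastic passes of the BNN
    probs = torch.stack([model(x).softmax(dim=1) for _ in range(n_ensemble)])
    return probs.mean(dim=0).argmax(dim=1)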

How to check the learned parameters and KL divergence

Run check_parameters.sh

model=vgg # vgg for `cifar10`, `cifar100`, or `tiny-imagenet`; aaron for `stl10`; wide for `cifar10` or `cifar100`
defense=adv_hvi # or `adv_vi`
data=tiny-imagenet # or `cifar10`, `stl10`, `cifar100`
max_norm=0.03
echo "Loading" ./checkpoint/${data}_${model}_${max_norm}_${defense}.pth

CUDA_VISIBLE_DEVICES=0 python3 check_parameters.py \
    --model $model \
    --defense $defense \
    --data $data \
    --max_norm $max_norm
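
For a mean-field Gaussian posterior against a zero-mean Gaussian prior, the per-layer KL term this script reports has a closed form (a sketch under that assumption; the repo may parameterize the posterior differently):

import torch

def gauss_kl(mu, s, sigma_0):
    # KL( N(mu, s^2) || N(0, sigma_0^2) ), summed over all weights of a layer
    return (torch.log(sigma_0 / s) + (s**2 + mu**2) / (2 * sigma_0**2) - 0.5).sum()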

How to check uncertainty via predictive entropy

Run uncertainty.sh

model=vgg # vgg for `cifar10`, `cifar100`, or `tiny-imagenet`; aaron for `stl10`; wide for `cifar10` or `cifar100`
defense=adv_hvi # or `adv_vi`
data=tiny-imagenet # or `cifar10`, `stl10`, `cifar100`
root=./datasets
n_ensemble=50
step=10
max_norm=0.03
echo "Loading" ./checkpoint/${data}_${model}_${max_norm}_${defense}.pth

CUDA_VISIBLE_DEVICES=0 python3 uncertainty.py \
    --model $model \
    --defense $defense \
    --data $data \
    --root $root \
    --n_ensemble $n_ensemble \
    --step $step \
    --max_norm $max_norm
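
Predictive entropy here is the Shannon entropy of the ensemble-averaged class distribution, H[y|x] = -sum_c p_c log p_c, with p the mean softmax over n_ensemble stochastic passes. A minimal sketch (`model` and a batch `x` are assumed):

import torch

@torch.no_grad()
def predictive_entropy(model, x, n_ensemble=50, eps=1e-12):
    # mean softmax over stochastic forward passes, then Shannon entropy per input
    p_bar = torch.stack([model(x).softmax(dim=1) for _ in range(n_ensemble)]).mean(dim=0)
    return -(p_bar * (p_bar + eps).log()).sum(dim=1)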