Resilience from Diversity: Population-based approach to harden models against adversarial attacks

Overview

Code and pretrained models for the paper "Resilience from Diversity: Population-based approach to harden models against adversarial attacks".

Requirements

To install requirements:

pip install -r requirements.txt

Training

To train the model(s) in the paper, run the following commands depending on the experiment:

For the MNIST experiment:
python ./mnist/clm_train.py --folder <folder> --nmodel <nmodel> --alpha <alpha> --delta <delta> --pre <pre> --pref <pref> --epochs <epochs> --prse <prse> --lr <lr> --adv <adv>

For the CIFAR-10 experiment:
python ./cifar-10/clm_train.py --folder <folder> --nmodel <nmodel> --alpha <alpha> --delta <delta> --pre <pre> --pref <pref> --epochs <epochs> --prse <prse> --lr <lr> --adv <adv>
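For reference, a training run matching the settings encoded in the provided mnist/clm10-a0.1d0.1-epochs5-prse10 folder name might look roughly as follows; this is an illustrative sketch, and the values of the flags not shown (--pre, --pref, --lr, --adv) are not specified by this README:

python ./mnist/clm_train.py --folder ./mnist/clm10-a0.1d0.1-epochs5-prse10 --nmodel 10 --alpha 0.1 --delta 0.1 --epochs 5 --prse 10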

Evaluation

To evaluate the models against adversarial attacks, run the following commands depending on the experiment:

For the MNIST experiment:
python ./mnist/mra.py --attack <attack> --folder <folder> --nmodel <nmodel> --epsilon <epsilon> --testid <testid> --batch <batch>

For the CIFAR-10 experiment:
python ./cifar-10/attack.py --attack <attack> --folder <folder> --nmodel <nmodel> --epsilon <epsilon> --testid <testid> --batch <batch>

The following is the list of attacks you can test against:
- fgsm: Fast Gradient Sign Method attack
- pgd: Projected Gradient Descent attack (Linf)
- auto: AutoAttack
- mifgsm: MI-FGSM attack
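As an illustration, evaluating the ten-model MNIST population above against PGD might look like the following; the epsilon, test id, and batch size here are placeholder values, not settings from the paper:

python ./mnist/mra.py --attack pgd --folder ./mnist/clm10-a0.1d0.1-epochs5-prse10 --nmodel 10 --epsilon 0.3 --testid 0 --batch 128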

Pre-trained Models

Pretrained models are included in the mnist and cifar-10 folders.

Since GitHub limits the size of uploaded files, you can download the pretrained models through this link: https://drive.google.com/drive/folders/1Dkupi4bObIKofjKZOwOG0owsBFwfwo_5?usp=sharing

├── LICENSE
├── README.md
├── __init__.py
├── cifar-10
│   ├── clm10-a0.5d0.1-epochs150-prse10
│   ├── clm_adv4-a0.1d0.05-epochs150-prse10
│   ├── clm_train.py
│   ├── mra.py
│   ├── ulm10
│   └── ulm_adv4
├── mnist
│   ├── clm10-a0.1d0.1-epochs5-prse10
│   ├── clm_adv4-a0.01d0.005-epochs5-prse1
│   ├── clm_train.py
│   ├── mra.py
│   ├── ulm10
│   └── ulm_adv4
├── models
│   ├── lenet5.py
│   └── resnet.py
└── requirements.txt
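The sketch below shows one way the pretrained MNIST population could be loaded for inspection. It is a minimal, unverified sketch: it assumes each pretrained folder stores one standard PyTorch state-dict file per model and that models/lenet5.py exposes a LeNet5 class; the class name and the file-name pattern are assumptions, so check the repository code (e.g. mra.py) for the actual loading logic.

import torch
from models.lenet5 import LeNet5  # assumed class name; see models/lenet5.py

folder = "./mnist/clm10-a0.1d0.1-epochs5-prse10"
population = []
for i in range(10):  # --nmodel 10 for this folder
    model = LeNet5()
    # hypothetical checkpoint file name pattern
    state = torch.load(f"{folder}/model_{i}.pt", map_location="cpu")
    model.load_state_dict(state)
    model.eval()
    population.append(model)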

Contributing

MIT License
