# Resilience from Diversity: Population-based approach to harden models against adversarial attacks
## Requirements
To install requirements:
```
pip install -r requirements.txt
```
## Training
To train the model(s) in the paper, run one of the following commands, depending on the experiment.
For the MNIST experiment:
```
python ./mnist/clm_train.py --folder \
                            --nmodel \
                            --alpha \
                            --delta \
                            --pre \
                            --pref \
                            --epochs \
                            --prse \
                            --lr \
                            --adv
```
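For reference, a hypothetical MNIST training run might look like the sketch below. The flag values are only illustrative, loosely inferred from the pretrained folder name `clm10-a0.1d0.1-epochs5-prse10`; the learning rate is a guess, and the omitted flags (`--pre`, `--pref`, `--adv`) are left at whatever defaults `clm_train.py` defines, so check the script's argument parser before relying on these values.

```
# Illustrative sketch only: values inferred from the pretrained folder name
# clm10-a0.1d0.1-epochs5-prse10; --lr is a guess, and --pre/--pref/--adv are
# left at the script's defaults.
python ./mnist/clm_train.py --folder clm10-a0.1d0.1-epochs5-prse10 \
                            --nmodel 10 \
                            --alpha 0.1 \
                            --delta 0.1 \
                            --epochs 5 \
                            --prse 10 \
                            --lr 0.001
```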
For the CIFAR-10 experiment:

```
python ./cifar-10/clm_train.py --folder \
                               --nmodel \
                               --alpha \
                               --delta \
                               --pre \
                               --pref \
                               --epochs \
                               --prse \
                               --lr \
                               --adv
```
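Analogously, a hypothetical CIFAR-10 run, with values loosely inferred from the pretrained folder name `clm10-a0.5d0.1-epochs150-prse10` (again, `--lr` is a guess and the remaining flags are left at their defaults):

```
# Illustrative sketch only: values inferred from the pretrained folder name
# clm10-a0.5d0.1-epochs150-prse10; --lr is a guess.
python ./cifar-10/clm_train.py --folder clm10-a0.5d0.1-epochs150-prse10 \
                               --nmodel 10 \
                               --alpha 0.5 \
                               --delta 0.1 \
                               --epochs 150 \
                               --prse 10 \
                               --lr 0.01
```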
## Evaluation
To evaluate the models against adversarial attacks, run one of the following commands, depending on the experiment.
For the MNIST experiment:
```
python ./mnist/mra.py --attack \
                      --folder \
                      --nmodel \
                      --epsilon \
                      --testid \
                      --batch
```
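For example, a hypothetical evaluation of the 10-model MNIST population against PGD might look like this; the `--epsilon`, `--testid`, and `--batch` values are placeholders, not verified defaults:

```
# Illustrative sketch only: epsilon 0.3 is a common MNIST Linf budget, and
# --testid/--batch are placeholder values.
python ./mnist/mra.py --attack pgd \
                      --folder clm10-a0.1d0.1-epochs5-prse10 \
                      --nmodel 10 \
                      --epsilon 0.3 \
                      --testid 0 \
                      --batch 128
```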
For the CIFAR-10 experiment:

```
python ./cifar-10/attack.py --attack \
                            --folder \
                            --nmodel \
                            --epsilon \
                            --testid \
                            --batch
```
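And a hypothetical CIFAR-10 evaluation against FGSM, again with placeholder values (epsilon 8/255 ≈ 0.031 is a common CIFAR-10 Linf budget, not a documented default):

```
# Illustrative sketch only: all values are placeholders.
python ./cifar-10/attack.py --attack fgsm \
                            --folder clm10-a0.5d0.1-epochs150-prse10 \
                            --nmodel 10 \
                            --epsilon 0.031 \
                            --testid 0 \
                            --batch 128
```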
The following attacks can be passed to `--attack`:

- `fgsm`: Fast Gradient Sign Method attack
- `pgd`: Projected Gradient Descent attack (Linf)
- `auto`: AutoAttack
- `mifgsm`: MI-FGSM attack
## Pre-trained Models
Pretrained models are included in the `mnist` and `cifar-10` folders.
Since GitHub limits the size of uploaded files, you can download the pretrained models from this link: https://drive.google.com/drive/folders/1Dkupi4bObIKofjKZOwOG0owsBFwfwo_5?usp=sharing
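If you prefer fetching the Drive folder from the command line, one option is the `gdown` package (an assumption on our part; it is not listed in `requirements.txt`):

```
# Hypothetical download via gdown (not part of this repository's requirements).
pip install gdown
gdown --folder "https://drive.google.com/drive/folders/1Dkupi4bObIKofjKZOwOG0owsBFwfwo_5?usp=sharing"
```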
The repository is organized as follows:

```
├── LICENSE
├── README.md
├── __init__.py
├── cifar-10
│   ├── clm10-a0.5d0.1-epochs150-prse10
│   ├── clm_adv4-a0.1d0.05-epochs150-prse10
│   ├── clm_train.py
│   ├── mra.py
│   ├── ulm10
│   └── ulm_adv4
├── mnist
│   ├── clm10-a0.1d0.1-epochs5-prse10
│   ├── clm_adv4-a0.01d0.005-epochs5-prse1
│   ├── clm_train.py
│   ├── mra.py
│   ├── ulm10
│   └── ulm_adv4
├── models
│   ├── lenet5.py
│   └── resnet.py
└── requirements.txt
```
## Contributing

This project is released under the MIT License (see `LICENSE`).