Statistically Robust Neural Network Classification

Code to reproduce the experimental results for Statistically Robust Neural Network Classification, UAI 2021.

Experiment 6.1

To reproduce the results of Experiment 6.1, run the following from the base directory:

python run_exp_1.py

This will:

  1. Train the NN classifier on MNIST using natural and corrupted training methods, as described in the paper;
  2. Evaluate the TSRM metric on each trained NN at a number of epsilon values (a rough sketch of this evaluation is given after the list);
  3. Collate the results and produce the plot of Figure 1.
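
For orientation, the snippet below is a minimal sketch of how a Monte Carlo estimate of a TSRM-style quantity (the probability that a uniformly perturbed test input is misclassified) could be computed for a single epsilon value. It is not the repository's code: `model`, `test_loader`, and the uniform L-infinity perturbation model are assumptions standing in for whatever run_exp_1.py actually uses.

    import torch

    @torch.no_grad()
    def tsrm_monte_carlo(model, test_loader, epsilon, n_samples=100, device="cpu"):
        """Fraction of uniformly perturbed test inputs that are misclassified."""
        model.eval()
        errors, total = 0.0, 0
        for x, y in test_loader:
            x, y = x.to(device), y.to(device)
            for _ in range(n_samples):
                # One uniform draw from the L-infinity ball of radius epsilon,
                # clamped back to the valid pixel range [0, 1].
                delta = (2 * torch.rand_like(x) - 1) * epsilon
                preds = model((x + delta).clamp(0.0, 1.0)).argmax(dim=1)
                errors += (preds != y).float().sum().item()
            total += y.numel() * n_samples
        return errors / total

Sweeping a set of epsilon values, as in step 2 above, would simply call this function once per epsilon and plot the resulting curve.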

Experiment 6.2

Likewise, to reproduce the results of Experiment 6.2, run the following:

python run_exp_2.py

This will:

  1. Train the wide ResNet CNN classifier on CIFAR-10 using natural, corruption and adversarial training methods;
  2. Evaluate the trained networks on natural risk, SRR, and adversarial risk, outputting the results to a CSV file (corresponding to the results in Table 1); rough sketches of the three risk estimates are given after the list.
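
The sketch below illustrates, under assumptions, how the three quantities in Table 1 could be estimated for a single batch: natural risk as the clean misclassification rate, SRR as the average 0-1 loss over random perturbations, and adversarial risk as the error rate under an attack. The names `model` and `attack`, the uniform perturbation model, and the 0-1 loss are placeholders; the paper defines the SRR for a general loss, and the scripts may use a different attack and loss.

    import torch

    @torch.no_grad()
    def natural_risk(model, x, y):
        # Misclassification rate on the clean batch.
        return (model(x).argmax(dim=1) != y).float().mean().item()

    @torch.no_grad()
    def srr_estimate(model, x, y, epsilon, n_samples=50):
        # SRR sketch: average 0-1 loss over uniform random perturbations.
        errors = []
        for _ in range(n_samples):
            delta = (2 * torch.rand_like(x) - 1) * epsilon
            preds = model((x + delta).clamp(0.0, 1.0)).argmax(dim=1)
            errors.append((preds != y).float().mean())
        return torch.stack(errors).mean().item()

    def adversarial_risk(model, x, y, epsilon, attack):
        # Worst-case error under an attack such as PGD; `attack` is a
        # placeholder for whatever attack run_exp_2.py implements.
        x_adv = attack(model, x, y, epsilon)
        with torch.no_grad():
            return (model(x_adv).argmax(dim=1) != y).float().mean().item()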

Experiment 6.3

Likewise, to reproduce the results of Experiment 6.3, run the following:

python run_exp_3.py

This will:

  1. Train two NN classifiers on MNIST, one using natural training and one using corrupted training;
  2. Record the natural cross-entropy loss and the SRR-weighted cross-entropy loss at each epoch of training for both networks (a rough sketch of this logging step is given after the list);
  3. Produce the plot of Figure 2.
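
As a rough guide (not the repository's training loop), the helper below shows one way the two curves of step 2 could be logged at the end of each epoch: the natural cross-entropy loss on clean inputs, and an SRR-style cross-entropy loss averaged over random uniform perturbations. `model`, `loader`, and the perturbation model are assumed placeholders.

    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def epoch_losses(model, loader, epsilon, n_samples=10, device="cpu"):
        """Average natural and perturbation-averaged cross-entropy over a loader."""
        model.eval()
        nat_total, srr_total, n_batches = 0.0, 0.0, 0
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            nat_total += F.cross_entropy(model(x), y).item()
            batch_srr = 0.0
            for _ in range(n_samples):
                delta = (2 * torch.rand_like(x) - 1) * epsilon
                batch_srr += F.cross_entropy(model((x + delta).clamp(0.0, 1.0)), y).item()
            srr_total += batch_srr / n_samples
            n_batches += 1
        return nat_total / n_batches, srr_total / n_batches

Calling this once per epoch for both networks and stacking the results would give curves analogous to those plotted in Figure 2.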

Experiment in Appendix A

Likewise, to reproduce the results of the experiment in Appendix A, run the following (this must be run after Experiment 6.1, since it loads the network trained there):

python run_exp_estimation.py

This will:

  1. Load the naturally trained NN classifier on MNIST from Experiment 6.1;
  2. Evaluate the TSRM using both adaptive sampling and plain Monte Carlo sampling for this network on 100 data points from the MNIST test set (a rough sketch of the Monte Carlo baseline is given after the list);
  3. Produce the plot of Figure 3.
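
For reference, the snippet below sketches only the plain Monte Carlo baseline of step 2: a per-input estimate of the TSRM (the probability that a random perturbation of a single test image is misclassified) together with its standard error. The adaptive sampling scheme described in Appendix A is not reproduced here, and `model`, `x`, `y`, and the uniform perturbation model are assumptions rather than the repository's API.

    import math
    import torch

    @torch.no_grad()
    def per_input_tsrm_mc(model, x, y, epsilon, n_samples=1000):
        """Plain Monte Carlo estimate (and standard error) of the per-input TSRM."""
        hits = 0
        for _ in range(n_samples):
            delta = (2 * torch.rand_like(x) - 1) * epsilon
            pred = model((x + delta).clamp(0.0, 1.0).unsqueeze(0)).argmax(dim=1).item()
            hits += int(pred != y)
        p = hits / n_samples
        std_err = math.sqrt(max(p * (1.0 - p), 1e-12) / n_samples)
        return p, std_err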