# ReLU-GP Residual (RGPR)
This repository contains code for reproducing the following NeurIPS 2021 paper:

    @inproceedings{kristiadi2021infinite,
      title={An infinite-feature extension for {B}ayesian {ReLU} nets that fixes their asymptotic overconfidence},
      author={Kristiadi, Agustinus and Hein, Matthias and Hennig, Philipp},
      booktitle={NeurIPS},
      year={2021}
    }

## Contents

### General
- `eval_*.py` scripts are for running experiments.
- `aggregate_*.py` scripts are for processing experiment results, to make them paper-ready.

### RGPR-specific code
- The implementation of the double-sided cubic spline (DSCS) kernel is in `rgpr/kernel.py` (a rough sketch is given after this list).
- To apply RGPR to a BNN, see the respective prediction code (no retraining is required; a usage sketch also follows the list):
  - The `predict` function in `laplace/llla.py` for last-layer Laplace.
  - The `predict` function in `laplace/util.py` for general BNNs (with Monte Carlo sampling).
- To do a hyperparameter search for RGPR, see the `get_best_kernel_var` function in `eval_OOD.py` (the idea is sketched below).
- To generate the mean and standard deviation of an NN's activations (required by the non-asymptotic extension of RGPR), use `compute_meanstd.py` (see the hook-based sketch below).
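
For orientation, here is a minimal sketch of what a DSCS kernel evaluation can look like. The function name and the exact formula are illustrative assumptions (a two-sided variant of the classic cubic-spline kernel); the authoritative definition is in `rgpr/kernel.py` and the paper.

```python
import torch

def dscs_kernel(x1, x2):
    """Illustrative double-sided cubic-spline kernel, applied elementwise.

    Mirrors the classic cubic-spline kernel on each side of the origin and
    evaluates to zero for inputs on opposite sides. The exact definition
    used in the experiments lives in rgpr/kernel.py.
    """
    same_side = (torch.sign(x1) == torch.sign(x2)).float()
    a, b = x1.abs(), x2.abs()
    m = torch.minimum(a, b)
    return same_side * (m**3 / 3 + (a - b).abs() * m**2 / 2)
```

Under this sketch, `dscs_kernel(x, x)` grows cubically with the distance from the origin, which is the property that counteracts asymptotic overconfidence.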
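In the Monte Carlo setting, the gist of applying RGPR at prediction time is to inflate each sampled output with the (scaled) kernel variance at the test point. The snippet below is a hypothetical sketch reusing `dscs_kernel` from above; the `sample_logits` helper and the exact way the kernel enters are assumptions, and the real implementations are the `predict` functions listed above.

```python
import torch

@torch.no_grad()
def predict_with_rgpr(x, sample_logits, kernel_var, n_samples=20):
    """Hypothetical MC prediction with an RGPR residual.

    sample_logits(x) is assumed to draw one set of logits from the BNN's
    posterior; the GP residual is emulated by adding zero-mean Gaussian
    noise whose variance is the scaled DSCS kernel at (x, x).
    """
    k_xx = kernel_var * dscs_kernel(x, x).sum(dim=-1, keepdim=True)
    probs = 0.0
    for _ in range(n_samples):
        logits = sample_logits(x)
        logits = logits + k_xx.sqrt() * torch.randn_like(logits)
        probs = probs + torch.softmax(logits, dim=-1)
    return probs / n_samples
```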
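The kernel-variance search can be pictured as a simple grid search against a validation objective. The grid and the criterion below (mean OOD confidence) are assumptions for illustration; the actual procedure is `get_best_kernel_var` in `eval_OOD.py`.

```python
import numpy as np

def search_kernel_var(mean_ood_confidence, grid=np.logspace(-4, 2, 13)):
    """Hypothetical grid search: pick the kernel variance under which the
    model is least confident on held-out OOD validation data.

    mean_ood_confidence(var) is an assumed callable that runs RGPR
    prediction with the given kernel variance and returns the average
    maximum softmax probability over an OOD validation set.
    """
    scores = [mean_ood_confidence(v) for v in grid]
    return float(grid[int(np.argmin(scores))])
```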
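Finally, activation statistics of the kind `compute_meanstd.py` produces can be gathered with forward hooks; all names below, and the choice of a single layer, are illustrative.

```python
import torch

@torch.no_grad()
def activation_meanstd(model, layer, loader, device='cpu'):
    """Illustrative mean/std of one layer's activations over a dataset."""
    acts = []
    hook = layer.register_forward_hook(
        lambda module, inputs, output: acts.append(output.flatten(1).cpu())
    )
    for x, _ in loader:
        model(x.to(device))  # The hook records the layer's activations.
    hook.remove()
    acts = torch.cat(acts)
    return acts.mean(0), acts.std(0)
```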

## Running the code
- Install the dependencies (check `requirements.txt`). We use Python 3.7.
- Install BackPACK: <https://f-dangel.github.io/backpack/>.
- In `util/dataloaders.py`, change `path = ...` to your liking (see the example below).
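
For example (the surrounding code in `util/dataloaders.py` may differ; the directory is of course machine-specific):

```python
# In util/dataloaders.py: point this at the directory holding your datasets.
path = '/home/you/Datasets'
```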
Pre-trained models are available at <https://nc.mlcloud.uni-tuebingen.de/index.php/s/NknifES7G9MBrAB>.