Code to replicate the key results from Exploring the Limits of Out-of-Distribution Detection

Overview


In this repository we're collecting replications of the key experiments from the paper Exploring the Limits of Out-of-Distribution Detection by Stanislav Fort, Jie Ren, and Balaji Lakshminarayanan, published at NeurIPS 2021 (arXiv link).

The use of a large, pretrained and finetuned Vision Transformer for near-OOD detection on the CIFAR-100 vs CIFAR-10 task is demonstrated in this Colab. We showcase the standard Mahalanobis distance, the Relative Mahalanobis distance (introduced in this paper), and the baseline Maximum of Softmax Probabilities. The results you should expect from running the Colab in full (around 20 minutes on a free GPU instance) are shown below. Prior to this paper, they would have put you at the top of the leaderboard for this task.

Colab: https://github.com/stanislavfort/exploring_the_limits_of_OOD_detection/blob/main/ViT_for_strong_near_OOD_detection.ipynb

[Result plots from the Colab: Maximum over Softmax Probabilities · Standard Mahalanobis distance · Relative Mahalanobis distance]
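For reference, the two Mahalanobis-based scores can be computed from the ViT prelogits roughly as follows. This is a minimal NumPy sketch, not the Colab's implementation: feats/labels stand for in-distribution training prelogits, and all function and variable names are illustrative.

    # Minimal sketch of the standard and Relative Mahalanobis OOD scores.
    import numpy as np

    def fit_gaussians(feats, labels, num_classes):
        # Per-class means with a single shared covariance (standard Mahalanobis),
        # plus a class-agnostic "background" Gaussian used by the relative score.
        means = np.stack([feats[labels == k].mean(0) for k in range(num_classes)])
        centered = feats - means[labels]
        shared_cov = centered.T @ centered / len(feats)
        mean_0 = feats.mean(0)
        cov_0 = np.cov(feats, rowvar=False)
        # pinv is safer than inv if the covariances are ill-conditioned.
        return means, np.linalg.pinv(shared_cov), mean_0, np.linalg.pinv(cov_0)

    def ood_scores(feats, means, prec, mean_0, prec_0):
        # MD_k(z) = (z - mu_k)^T Sigma^{-1} (z - mu_k) for every class k.
        d = feats[:, None, :] - means[None, :, :]
        md_k = np.einsum('nkd,de,nke->nk', d, prec, d)
        # MD_0(z): distance to the background Gaussian fitted without labels.
        d0 = feats - mean_0
        md_0 = np.einsum('nd,de,ne->n', d0, prec_0, d0)
        maha = -md_k.min(axis=1)                        # standard Mahalanobis
        rel_maha = -(md_k - md_0[:, None]).min(axis=1)  # Relative Mahalanobis
        return maha, rel_maha

The Maximum over Softmax Probabilities baseline simply scores each sample with the largest softmax probability of its logits; AUROC between the in-distribution (CIFAR-100) and out-of-distribution (CIFAR-10) scores then yields the numbers in the plots.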
You might also like...

Official repository for the CVPR 2021 paper "Deep Stable Learning for Out-Of-Distribution Generalization".
StableNet is a deep stable learning method for out-of-distribution generalization.

Training Confidence-Calibrated Classifier for Detecting Out-of-Distribution Samples (ICLR 2018)
This project accompanies the paper of the same name.

Codebase for Amodal Segmentation through Out-of-Task and Out-of-Distribution Generalization with a Bayesian Model

Code for the ICME 2021 paper "Exploring Driving-Aware Salient Object Detection via Knowledge Transfer"
TSOD is the code for this paper; see the repository for training usage.

An official implementation of the paper "Exploring Sequence Feature Alignment for Domain Adaptive Detection Transformers"
Sequence Feature Alignment (SFA), by Wen Wang, Yang Cao, Jing Zhang, Fengxiang He, Zheng-jun Zha, Yonggang Wen, and Dacheng Tao.

Exploring Classification Equilibrium in Long-Tailed Object Detection (LOCE, ICCV 2021)

[Official] Exploring Temporal Coherence for More General Video Face Forgery Detection (FTCN, ICCV 2021)
By Yinglin Zheng, Jianmin Bao, Dong Chen, Ming Zeng, and Fang Wen.

Categorical Depth Distribution Network for Monocular 3D Object Detection
CaDDN is a monocular 3D object detection method; this repository is based on OpenPCDet.

Code for the CVPR 2021 oral paper "Exploring Data-Efficient 3D Scene Understanding with Contrastive Scene Contexts"

Comments
  • The provided colab link does not run.

    The authors said the provided Google Colab link should reproduce the results. However, when run in a default Colab environment, the notebook fails even before loading a model.

    The main error occurs in this cell:

    # Import files from repository.
    
    import sys
    if './vision_transformer' not in sys.path:
      sys.path.append('./vision_transformer')
    
    %load_ext autoreload
    %autoreload 2
    
    from vit_jax import checkpoint
    from vit_jax import models
    from vit_jax import train
    from vit_jax.configs import augreg as augreg_config
    from vit_jax.configs import models as models_config
    

    The error message says:

    ---------------------------------------------------------------------------
    ModuleNotFoundError                       Traceback (most recent call last)
    <ipython-input-8-e79c46bdde58> in <module>()
          8 get_ipython().magic('autoreload 2')
          9 
    ---> 10 from vit_jax import checkpoint
         11 from vit_jax import models
         12 from vit_jax import train
    
    /content/vision_transformer/vit_jax/checkpoint.py in <module>()
         17 
         18 from absl import logging
    ---> 19 import flax
         20 from  flax.training import checkpoints
         21 import jax.numpy as jnp
    
    ModuleNotFoundError: No module named 'flax'
    
    ---------------------------------------------------------------------------
    NOTE: If your import is failing due to a missing package, you can
    manually install dependencies using either !pip or !apt.
    
    To view examples of installing some common dependencies, click the
    "Open Examples" button below.
    ---------------------------------------------------------------------------
    

    I hope the authors can look into this issue and keep the work reproducible. (A possible workaround is sketched below.)

    opened by swyoon 0
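    A plausible fix (untested here; exact versions may matter): the vit_jax imports need flax, which the default Colab runtime does not ship with, so install it in a fresh cell before the import cell.

    # Hypothetical workaround: install the missing dependency manually,
    # as the Colab error note itself suggests.
    !pip install --quiet flax

    If further ModuleNotFoundErrors appear afterwards, the same !pip install approach should apply to those packages as well.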
  • Colab Notebook Running Issue - standalone_get_prelogits()

    When I try to run the following cell:

    cifar10_test_prelogits, cifar10_test_logits, cifar10_test_labels = standalone_get_prelogits(params, cifar10_ds_test, image_count=N_test)

    I get the error message below. Please help me resolve this issue :) (A possible workaround is sketched below.)

    ERROR MESSAGE:

    UnfilteredStackTrace: RuntimeError: UNKNOWN: Failed to determine best cudnn convolution algorithm for: %cudnn-conv = (f32[128,24,24,1024]{2,1,3,0}, u8[0]{0}) custom-call(f32[128,384,384,3]{2,1,3,0} %copy, f32[16,16,3,1024]{1,0,2,3} %copy.1), window={size=16x16 stride=16x16}, dim_labels=b01f_01io->b01f, custom_call_target="__cudnn$convForward", metadata={op_name="jit(conv_general_dilated)/jit(main)/conv_general_dilated[window_strides=(16, 16) padding=((0, 0), (0, 0)) lhs_dilation=(1, 1) rhs_dilation=(1, 1) dimension_numbers=ConvDimensionNumbers(lhs_spec=(0, 3, 1, 2), rhs_spec=(3, 2, 0, 1), out_spec=(0, 3, 1, 2)) feature_group_count=1 batch_group_count=1 lhs_shape=(128, 384, 384, 3) rhs_shape=(16, 16, 3, 1024) precision=None preferred_element_type=None]" source_file="/usr/local/lib/python3.7/dist-packages/flax/linen/linear.py" source_line=371}, backend_config="{"conv_result_scale":1,"activation_mode":"0","side_input_scale":0}"

    Original error: UNIMPLEMENTED: DNN library is not found.

    To ignore this failure and try to use a fallback algorithm (which may have suboptimal performance), use XLA_FLAGS=--xla_gpu_strict_conv_algorithm_picker=false. Please also file a bug for the root cause of failing autotuning.

    The stack trace below excludes JAX-internal frames. The preceding is the original exception that occurred, unmodified.


    The above exception was the direct cause of the following exception:

    RuntimeError                              Traceback (most recent call last)
    /usr/local/lib/python3.7/dist-packages/flax/linen/linear.py in __call__(self, inputs)
        369           dimension_numbers=dimension_numbers,
        370           feature_group_count=self.feature_group_count,
    --> 371           precision=self.precision
        372         )
        373       else:

    RuntimeError: UNKNOWN: Failed to determine best cudnn convolution algorithm for: %cudnn-conv = (f32[128,24,24,1024]{2,1,3,0}, u8[0]{0}) custom-call(f32[128,384,384,3]{2,1,3,0} %copy, f32[16,16,3,1024]{1,0,2,3} %copy.1), window={size=16x16 stride=16x16}, dim_labels=b01f_01io->b01f, custom_call_target="__cudnn$convForward", metadata={op_name="jit(conv_general_dilated)/jit(main)/conv_general_dilated[window_strides=(16, 16) padding=((0, 0), (0, 0)) lhs_dilation=(1, 1) rhs_dilation=(1, 1) dimension_numbers=ConvDimensionNumbers(lhs_spec=(0, 3, 1, 2), rhs_spec=(3, 2, 0, 1), out_spec=(0, 3, 1, 2)) feature_group_count=1 batch_group_count=1 lhs_shape=(128, 384, 384, 3) rhs_shape=(16, 16, 3, 1024) precision=None preferred_element_type=None]" source_file="/usr/local/lib/python3.7/dist-packages/flax/linen/linear.py" source_line=371}, backend_config="{"conv_result_scale":1,"activation_mode":"0","side_input_scale":0}"

    Original error: UNIMPLEMENTED: DNN library is not found.

    To ignore this failure and try to use a fallback algorithm (which may have suboptimal performance), use XLA_FLAGS=--xla_gpu_strict_conv_algorithm_picker=false. Please also file a bug for the root cause of failing autotuning.

    opened by CallumJMac 1
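    A possible workaround, taken directly from the error message above: relax XLA's strict cudnn autotuning before any JAX computation runs. Note that the underlying "DNN library is not found" error usually points to a CUDA/cuDNN mismatch in the runtime, so switching to a fresh GPU runtime may be needed as well; this sketch only applies the suggested flag.

    # Set in the very first cell, before JAX runs anything on the GPU.
    import os
    os.environ['XLA_FLAGS'] = '--xla_gpu_strict_conv_algorithm_picker=false'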
  • Question about table 5

    Thanks for the nice work.

    For the CLIP experiment that uses only the names of the in-distribution classes (the baseline), how is it implemented? What does Labels 2 '---' mean? Did you use words like 'null' or 'others' when the out-of-distribution class is uncertain?

    opened by SamitHuang 0
  • Question about Relative Mahalanobis distance

    Hi, I have a question about the Relative Mahalanobis distance, where RMD_k = MD_k - MD_0. MD_0 is the Mahalanobis distance of a sample z to a single distribution fitted to the entire training data without considering class labels. In fact, MD_0 is constant across classes for a given sample z, so RMD_k is just MD_k shifted by a constant and nothing changes. Why does RMD_k have an effect? (A clarifying note follows below.)

    opened by jingzhengli 0
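    A note on the question above: for a fixed sample, MD_0 does shift all K class distances equally, so the predicted class (the argmin over k) is unchanged; but MD_0 varies from sample to sample, so the OOD score min_k RMD_k ranks samples differently from min_k MD_k. A tiny sketch with made-up numbers:

    # Hypothetical values: two samples with the same nearest-class distance
    # MD_k but different background distances MD_0.
    md_k_a, md_0_a = 10.0, 9.0  # sample A: also far from the overall training density
    md_k_b, md_0_b = 10.0, 2.0  # sample B: close to the background density
    print(md_k_a, md_k_b)                    # 10.0 10.0 -> plain MD cannot separate A and B
    print(md_k_a - md_0_a, md_k_b - md_0_b)  # 1.0 8.0   -> RMD ranks B as more OOD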
Owner
Stanislav Fort
PhD student at Stanford | ML, AI & Physics
Code for EMNLP 2021 paper Contrastive Out-of-Distribution Detection for Pretrained Transformers.

Contra-OOD: code for the EMNLP 2021 paper. Requirements: PyTorch, Transformers, datasets.

Wenxuan Zhou 27 Oct 28, 2022
Official implementation for Likelihood Regret: An Out-of-Distribution Detection Score For Variational Auto-encoder at NeurIPS 2020

Likelihood-Regret: official implementation of the NeurIPS 2020 paper.

Xavier 33 Oct 12, 2022
Outlier Exposure with Confidence Control for Out-of-Distribution Detection

OOD-detection-using-OECC: the essential code for the paper "Outlier Exposure with Confidence Control for Out-of-Distribution Detection".

Nazim Shaikh 64 Nov 2, 2022
Principled Detection of Out-of-Distribution Examples in Neural Networks

ODIN: Out-of-Distribution Detector for Neural Networks This is a PyTorch implementation for detecting out-of-distribution examples in neural networks.

null 189 Nov 29, 2022
The Official Implementation of the ICCV-2021 Paper: Semantically Coherent Out-of-Distribution Detection.

SCOOD-UDG (ICCV 2021): the official implementation of the paper "Semantically Coherent Out-of-Distribution Detection" by Jingkang Yang et al.

Jake YANG 62 Nov 21, 2022
Learning Confidence for Out-of-Distribution Detection in Neural Networks

This repository contains the code for the paper "Learning Confidence for Out-of-Distribution Detection in Neural Networks".

null 235 Jan 5, 2023
RODD: A Self-Supervised Approach for Robust Out-of-Distribution Detection

RODD: official implementation of the CVPR 2022 Workshops paper.

Umar Khalid 17 Oct 11, 2022
Code for EMNLP'21 paper "Types of Out-of-Distribution Texts and How to Detect Them"

ood-text-emnlp: code for the EMNLP 2021 paper; fine_tune.py is used to finetune the GPT-2 model.

Udit Arora 19 Oct 28, 2022
Demos of essentia classifiers hosted on replicate.ai

essentia-replicate-demos: demos of Essentia models hosted on replicate.ai's MTG site; check the site for a complete list of available models.

Music Technology Group - Universitat Pompeu Fabra 12 Nov 14, 2022
Official PyTorch implementation of the Fishr regularization for out-of-distribution generalization

Official PyTorch implementation of the Fishr regularization ("Fishr: Invariant Gradient Variances for Out-of-distribution Generalization").

null 62 Dec 22, 2022