The official implementation of the ICCV 2021 paper: Semantically Coherent Out-of-Distribution Detection.

Overview

SCOOD-UDG (ICCV 2021)


This repository is the official implementation of the paper:

Semantically Coherent Out-of-Distribution Detection
Jingkang Yang, Haoqi Wang, Litong Feng, Xiaopeng Yan, Huabin Zheng, Wayne Zhang, Ziwei Liu
Proceedings of the IEEE International Conference on Computer Vision (ICCV 2021)

[Figure: overview of the UDG framework]

Dependencies

We use conda to manage our dependencies, and CUDA 10.1 to run our experiments.

You can specify the appropriate cudatoolkit version to install on your machine in the environment.yml file, and then run the following to create the conda environment:

conda env create -f environment.yml
conda activate scood
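
After activating the environment, you can optionally run a quick sanity check that the installed PyTorch build sees your GPU and the intended CUDA version (a small sketch, not part of the official setup):

import torch

# Prints the PyTorch version, the CUDA version it was built against,
# and whether a GPU is visible from this environment.
print(torch.__version__, torch.version.cuda, torch.cuda.is_available())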

SC-OOD Dataset

[Figure: overview of the SC-OOD benchmark]

The SC-OOD dataset introduced in the paper can be downloaded from Google Drive or OneDrive.

By default, our codebase expects the dataset in a folder named data/ at the repository root, i.e.:

├── ...
├── data
│   ├── images
│   └── imglist
├── scood
├── test.py
├── train.py
├── ...
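
Before training, you can verify that the expected layout is in place with a small check like the sketch below (the folder names come from the tree above; this helper is not part of the codebase):

import os

def check_scood_layout(data_dir: str = "data") -> None:
    # The codebase expects image files under images/ and the
    # image-list annotation files under imglist/ (see the tree above).
    for sub in ("images", "imglist"):
        path = os.path.join(data_dir, sub)
        if not os.path.isdir(path):
            raise FileNotFoundError(f"Missing expected folder: {path}")
    print(f"SC-OOD layout under '{data_dir}' looks complete.")

check_scood_layout("data")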

Training

The entry point for training is the train.py script. The hyperparameters for each experiment are specified by a .yml configuration file (examples given in configs/train/).

All experiment artifacts are saved in the directory specified by --output_dir.

python train.py \
    --config configs/train/cifar10_udg.yml \
    --data_dir data \
    --output_dir output/cifar10_udg
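
For reference, the command-line flags above map onto a config-driven setup roughly like the sketch below (argument names match the command; the actual parsing and YAML handling in train.py may differ in detail):

import argparse

import yaml  # PyYAML

parser = argparse.ArgumentParser()
parser.add_argument("--config", required=True, help=".yml file with the experiment hyperparameters")
parser.add_argument("--data_dir", default="data", help="root folder of the SC-OOD dataset")
parser.add_argument("--output_dir", default="output/run", help="where checkpoints and logs are written")
args = parser.parse_args()

# Load the experiment hyperparameters (network, optimizer, trainer, ...).
with open(args.config) as f:
    config = yaml.safe_load(f)
print(config)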

Testing

Evaluation of a trained model is performed by the test.py script, with its hyperparameters also specified by a .yml configuration file (examples given in configs/test/).

Within the configuration file, you can also specify which post-processing OOD method to use (e.g., ODIN or the energy-based OOD detector, EBO).

The evaluation results are saved to the .csv file specified by --csv_path.

python test.py \
    --config configs/test/cifar10.yml \
    --checkpoint output/cifar10_udg/best.ckpt \
    --data_dir data \
    --csv_path output/cifar10_udg/results.csv
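
For intuition, the two post-processors named above reduce to simple functions of the classifier logits. The sketch below shows only the core scoring rules and is not the repository's exact implementation (full ODIN additionally applies a small input perturbation):

import torch
import torch.nn.functional as F

def ebo_score(logits: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    # Energy-based OOD detector (EBO): negative free energy of the logits.
    # Higher scores indicate more in-distribution samples.
    return temperature * torch.logsumexp(logits / temperature, dim=1)

def odin_score(logits: torch.Tensor, temperature: float = 1000.0) -> torch.Tensor:
    # Core of ODIN: maximum softmax probability after temperature scaling.
    return F.softmax(logits / temperature, dim=1).max(dim=1).values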

Results

CIFAR-10 (+ Tiny-ImageNet) Results on ResNet18

You can run the following script (specifying the data and output directories), which performs training and testing for our main experimental results:

CIFAR-10, UDG

bash scripts/cifar10_udg.sh data_dir output_dir

We report the mean ± std results from the current codebase as follows, which match the performance reported in our original paper.

| Metrics | ODIN | EBO | OE | UDG (ours) |
| --- | --- | --- | --- | --- |
| FPR95 ↓ | 50.76 ± 3.39 | 50.70 ± 2.86 | 54.99 ± 4.06 | 39.94 ± 3.77 |
| AUROC ↑ | 82.11 ± 0.24 | 83.99 ± 1.05 | 87.48 ± 0.61 | 93.27 ± 0.64 |
| AUPR In ↑ | 73.07 ± 0.40 | 76.84 ± 1.56 | 85.75 ± 1.70 | 93.36 ± 0.56 |
| AUPR Out ↑ | 85.06 ± 0.29 | 85.44 ± 0.73 | 86.95 ± 0.28 | 91.21 ± 1.23 |
| CCR@FPRe-4 ↑ | 0.30 ± 0.04 | 0.26 ± 0.09 | 7.09 ± 0.48 | 16.36 ± 4.33 |
| CCR@FPRe-3 ↑ | 1.22 ± 0.28 | 1.46 ± 0.18 | 13.69 ± 0.78 | 32.99 ± 4.16 |
| CCR@FPRe-2 ↑ | 6.13 ± 0.72 | 8.17 ± 0.96 | 29.60 ± 5.31 | 59.14 ± 2.60 |
| CCR@FPRe-1 ↑ | 39.61 ± 0.72 | 47.57 ± 3.33 | 64.33 ± 3.44 | 81.04 ± 1.46 |
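
As a reference for the metrics above, FPR95, AUROC, and the AUPR scores can be computed from arrays of in-distribution (ID) and OOD detection scores as in the sketch below (standard practice rather than the exact evaluation code; CCR@FPRn additionally requires per-sample classification correctness):

import numpy as np
from sklearn import metrics

def ood_metrics(id_scores: np.ndarray, ood_scores: np.ndarray) -> dict:
    # Treat ID samples as the positive class and pool all detection scores.
    labels = np.concatenate([np.ones_like(id_scores), np.zeros_like(ood_scores)])
    scores = np.concatenate([id_scores, ood_scores])
    fpr, tpr, _ = metrics.roc_curve(labels, scores)
    return {
        "AUROC": metrics.roc_auc_score(labels, scores),
        # FPR at the first threshold retaining at least 95% of ID samples.
        "FPR95": fpr[np.argmax(tpr >= 0.95)],
        "AUPR_In": metrics.average_precision_score(labels, scores),
        "AUPR_Out": metrics.average_precision_score(1 - labels, -scores),
    }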

License and Acknowledgements

This project is open-sourced under the MIT license.

The codebase was refactored by Ang Yi Zhe and is maintained by Jingkang Yang and Ang Yi Zhe.

Citation

If you find our repository useful for your research, please consider citing our paper:

@InProceedings{yang2021scood,
    author = {Yang, Jingkang and Wang, Haoqi and Feng, Litong and Yan, Xiaopeng and Zheng, Huabin and Zhang, Wayne and Liu, Ziwei},
    title = {Semantically Coherent Out-of-Distribution Detection},
    booktitle = {Proceedings of the IEEE International Conference on Computer Vision},
    year = {2021}
}
