Official PyTorch implementation of "Adversarial Reciprocal Points Learning for Open Set Recognition"

Overview

This repository contains the official PyTorch implementation of the paper "Adversarial Reciprocal Points Learning for Open Set Recognition" (arXiv:2103.00953).

1. Requirements

Environments

This code currently requires the following packages; an example install command follows the list.

  • python 3.6+
  • torch 1.4+
  • torchvision 0.5+
  • CUDA 10.1+
  • scikit-learn 0.22+
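
One way to install the Python packages satisfying these constraints (CUDA must be set up separately; pin exact versions as needed, this command is an assumption rather than the authors' setup):

pip install "torch>=1.4" "torchvision>=0.5" "scikit-learn>=0.22"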

Datasets

For Tiny-ImageNet, please download the dataset and place it under ./data/tiny_imagenet; a sketch of the expected layout follows.
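
A minimal sketch of the assumed directory layout (the exact subfolder names depend on the Tiny-ImageNet release you download and are an assumption here):

data/
└── tiny_imagenet/
    └── ...   (extracted Tiny-ImageNet files)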

2. Training & Evaluation

Open Set Recognition

To train the open set recognition models from the paper, run:

python osr.py --dataset <DATASET> --loss <LOSS>

Option --loss can be one of ARPLoss/RPLoss/GCPLoss/Softmax, and --dataset is one of mnist/svhn/cifar10/cifar100/tiny_imagenet. To run ARPL+CS, append --cs to the command, as in the example below.
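
For example, to train ARPL+CS on CIFAR-10:

python osr.py --dataset cifar10 --loss ARPLoss --cs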

Out-of-Distribution Detection

To train the out-of-distribution detection models from the paper, run:

python ood.py --dataset <DATASET> --out-dataset <OOD_DATASET> --model <NETWORK> --loss <LOSS>

Option --out-dataset denotes the out-of-distribution dataset used for evaluation. --loss can be one of ARPLoss/RPLoss/GCPLoss/Softmax, --dataset is one of mnist/cifar10, and --out-dataset is one of kmnist/svhn/cifar100. To run ARPL+CS, append --cs to the command; an example follows.
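
For example, to train ARPL on CIFAR-10 with SVHN as the out-of-distribution set (the --model value here mirrors the command reported in the comments below and may differ in your checkout):

python ood.py --dataset cifar10 --out-dataset svhn --model arpl --loss ARPLoss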

Evaluation

To evaluate a trained model under the Open Set Classification Rate (OSCR) and out-of-distribution (OOD) detection settings, append --eval to the corresponding training command, as shown below.
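
For example:

python osr.py --dataset cifar10 --loss ARPLoss --eval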

3. Results

We visualize the deep features learned with Softmax/GCPL/ARPL/ARPL+CS below.

Colored triangles represent the learned reciprocal points of different known classes.
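
For intuition, here is a minimal, hypothetical PyTorch sketch of the reciprocal-point idea (one learnable point per known class, with a sample scored by its feature-space distance to each point). All names are our own and this is an illustration only, not the repository's actual ARPLoss implementation:

import torch
import torch.nn as nn
import torch.nn.functional as F

class ReciprocalPointClassifier(nn.Module):
    # Hypothetical sketch: each known class k owns a learnable reciprocal
    # point P_k in feature space. A larger distance from a sample's feature
    # to P_k is treated as higher affinity for class k, since P_k represents
    # the "otherness" of class k; distances therefore serve as logits.
    def __init__(self, num_classes: int, feat_dim: int):
        super().__init__()
        self.points = nn.Parameter(0.1 * torch.randn(num_classes, feat_dim))

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (batch, feat_dim) -> squared distances: (batch, num_classes)
        return torch.cdist(feat, self.points) ** 2

# usage sketch with random features and labels
model = ReciprocalPointClassifier(num_classes=6, feat_dim=128)
feats = torch.randn(4, 128)
logits = model(feats)                                  # shape (4, 6)
loss = F.cross_entropy(logits, torch.tensor([0, 1, 2, 3]))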

4. PKU-AIR300

A new large-scale, challenging aircraft dataset for open set recognition: Aircraft 300 (Air-300). It contains 320,000 annotated color images from 300 classes in total. Each category contains at least 100 and at most 10,000 images, which leads to a long-tailed distribution.

Citation

  • If you find our work or the code useful, please consider citing our paper:
@article{chen2021adversarial,
    title={Adversarial Reciprocal Points Learning for Open Set Recognition},
    author={Chen, Guangyao and Peng, Peixi and Wang, Xiangqian and Tian, Yonghong},
    journal={arXiv preprint arXiv:2103.00953},
    year={2021}
}
  • All publications using the Air-300 dataset should cite the paper below:
@inproceedings{chen_2020_ECCV,
    author = {Chen, Guangyao and Qiao, Limeng and Shi, Yemin and Peng, Peixi and Li, Jia and Huang, Tiejun and Pu, Shiliang and Tian, Yonghong},
    title = {Learning Open Set Network with Discriminative Reciprocal Points},
    booktitle = {The European Conference on Computer Vision (ECCV)},
    month = {August},
    year = {2020}
}
Comments
  • Replicating results from Table 3 for ARPL

    Hi, so far I have had trouble replicating the results you reported in Table 3 of the paper for the ARPL method.

    I downloaded your git repository and ran this command to train the ARPL model (I ran it three times with different output dirs to account for random initialization): python ood.py --dataset cifar10 --out-dataset svhn --model arpl --loss ARPLoss --outf log_arpl. Here is a training log from one of the runs: logs.txt.

    To evaluate, I ran this command for each trained model: python ood.py --dataset cifar10 --out-dataset svhn --model arpl --loss ARPLoss --outf log_arpl --eval

    I get the following results:

    Acc: 92.58000
           TNR    AUROC  DTACC  AUIN   AUOUT
    Bas    25.496 82.803 78.414 65.403 90.405
    Acc (%): 92.580  AUROC (%): 82.803       OSCR (%): 79.600
    
    Acc: 92.68000
           TNR    AUROC  DTACC  AUIN   AUOUT
    Bas    21.589 78.534 74.769 54.259 88.407
    Acc (%): 92.680  AUROC (%): 78.534       OSCR (%): 75.498
    
    Acc: 92.68000
           TNR    AUROC  DTACC  AUIN   AUOUT
    Bas    37.930 82.561 77.525 78.463 82.497
    Acc (%): 92.680  AUROC (%): 82.561       OSCR (%): 78.857
    

    There were no errors or warnings while running the scripts, yet all metrics are significantly below the reported numbers. Do you have any idea what the issue may be? Thank you.

    opened by vojirt 7
  • Why did the experiment select the known class as the positive sample?

    I noticed that in the experimental code, you use the known classes as the positive samples when computing the metrics. This does not seem consistent with the open-set/OOD literature: most open-set open-source code treats the unknown classes as the positive samples, e.g. the OSRCI method you compare against.

    opened by leyiweb 3
  • Questions regarding the baseline results

    Hi there! Thanks for your inspiring work and for releasing the code. I have a small question regarding the baseline results. I did not modify the code and ran it with the command python osr.py --dataset cifar10 --loss Softmax. If I understand correctly, this is the baseline method, and according to Table 1 in your paper the AUROC should be 67.7 on CIFAR10. However, the log I obtained is as follows:

    ,0,1,2,3,4
    TNR,34.0,30.625000000000004,22.624999999999996,35.175,30.500000000000004
    AUROC,86.9976125,85.59652499999999,84.34554583333333,86.77233749999999,86.72411666666667
    DTACC,80.25416666666668,79.03333333333333,78.1625,79.575,79.97916666666667
    AUIN,92.19165744328468,90.77076215295214,90.76942678194541,91.79811544137021,91.64555773527704
    AUOUT,77.31935437681796,75.15246512319506,72.03227915144676,77.35292209454035,76.84887201588833
    ACC,94.16666666666667,95.58333333333333,91.35,95.3,95.18333333333334
    OSCR,84.46526250000002,84.14960000000002,80.82762291666654,84.88179583333343,84.94574374999975
    unknown,"[0, 8, 3, 5]","[2, 3, 4, 5]","[0, 8, 2, 6]","[8, 2, 3, 5]","[8, 2, 3, 5]"
    known,"[2, 4, 1, 7, 9, 6]","[8, 6, 1, 9, 0, 7]","[1, 5, 7, 3, 9, 4]","[7, 6, 4, 9, 0, 1]","[0, 6, 4, 9, 1, 7]"
    

    And the average AUROC is about 86.09, which is significantly higher than the results reported. I'd like to know if there is anything that I haven't done properly. Thanks in advance!

    opened by Duconnor 1
  • Why assign class splits specifically in code?

    I noticed that the closed-set classes are assigned explicitly in split.py. Can the results only be reproduced with these specific class splits? I observed two phenomena in my experiments:

    1. I tried changing the class split to a random one. The performance decreases dramatically; the average AUROC over 5 runs is only about 50.
    2. I trained on the closed set with cross-entropy loss and used the softmax probabilities as scores at evaluation. I then obtained similar performance (about 74 AUROC). Did I miss something in your code?
    opened by akira-l 1
  • OSCR

    When I ran your code, I found that if I change the condition in the compute_oscr function to match the AUROC computation, the AUROC reported in the log is larger than the OSCR. While reading the code in compute_oscr that computes TP and FP at different thresholds, I noticed that TP uses index k+1 while FP does not (I understand the +1 is meant to skip the case where the largest confidence is used as the threshold): CC = s_k_target[k+1:].sum() and FP = s_u_target[k:].sum(). Should the FP index here also be k+1?

    opened by sanqudui8ban 0
  • The code for the ImageNet experiment wanted

    Could you provide the code, as well as pretrained models, for the ImageNet-1k experiment? I'm trying to evaluate ARPL on various other OOD datasets. Thanks!

    opened by xyzedd 0
Owner

Guangyao Chen, Ph.D. student @ PKU