TensorFlow 2 implementation of the paper "Learning and Evaluating Representations for Deep One-class Classification", published at ICLR 2021

Overview

Deep Representation One-class Classification (DROC).

This is not an officially supported Google product.

TensorFlow 2 implementation of the paper "Learning and Evaluating Representations for Deep One-class Classification", published as a conference paper at ICLR 2021, by Kihyuk Sohn, Chun-Liang Li, Jinsung Yoon, Minho Jin, and Tomas Pfister.

This directory contains an example of the two-stage framework for deep one-class classification: self-supervised deep representation learning from one-class data, followed by building a one-class classifier on the learned representations using generative or discriminative models.

Install

requirements.txt lists all the dependencies for this project, and an example of installing and running the project is given in run.sh.

$sh deep_representation_one_class/run.sh
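
Alternatively, the dependencies can be installed manually. A minimal sketch, assuming requirements.txt sits under deep_representation_one_class/ and that Python 3 and pip are available:

$pip install -r deep_representation_one_class/requirements.txt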

Download datasets

script/prepare_data.sh includes instructions on how to prepare the data for the CatVsDog and CelebA datasets. For the CatVsDog dataset, the data needs to be downloaded manually. Before running the script, uncomment line 2 and set DATA_DIR to the directory where the datasets should be downloaded.
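
As a sketch, assuming the command is run from the directory containing deep_representation_one_class/ and that /path/to/datasets is a placeholder for your own directory:

# in script/prepare_data.sh, uncomment line 2 and set, e.g., DATA_DIR=/path/to/datasets
$sh deep_representation_one_class/script/prepare_data.sh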

Run

The options for the experiments are specified through command-line arguments; a detailed explanation can be found in train_and_eval_loop.py. Scripts for running experiments are listed below, followed by a usage sketch:

  • Rotation prediction: script/run_rotation.sh

  • Contrastive learning: script/run_contrastive.sh

  • Contrastive learning with distribution augmentation: script/run_contrastive_da.sh
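
As a sketch, a contrastive run with distribution augmentation might be launched as below. The DATA, SEED, and CATEGORY values are placeholders that mirror the variables appearing in the scripts and in the comments further down; depending on the script, they may need to be edited inside the script rather than passed on the command line:

DATA=cifar10ood SEED=1 CATEGORY=0 sh deep_representation_one_class/script/run_contrastive_da.sh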

Evaluation

After running train_and_eval_loop.py, the evaluation results can be found in $MODEL_DIR/stats/summary.json, where MODEL_DIR is the directory passed as model_dir to train_and_eval_loop.py.
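
For a quick look at the metrics, the JSON file can be pretty-printed with Python's standard library (a sketch, assuming MODEL_DIR is set in the shell to the same value as model_dir):

$python -m json.tool "${MODEL_DIR}/stats/summary.json"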

Contacts

[email protected], [email protected], [email protected], [email protected], [email protected]

Comments
  • Question about reproducing the results

    Thanks for your great work.

    I ran your 'contrastive_da.sh' file with

    DATA=cifar10ood SEED=3 CATEGORY=0

    But I got

    { "embed.auc": 63.617005555555565, "embed.gde": 61.748266666666666, "embed.kde": 64.29441111111112, "embed.kocsvm": 63.649888888888896, "embed.locsvm": 66.37852222222223, "pool.auc": 77.99122777777778, "pool.gde": 80.53367777777778, "pool.kde": 74.89410000000001, "pool.kocsvm": 75.82452222222223, "pool.locsvm": 67.06372222222222 }.

    And with another seed (72), I got

    { "embed.auc": 50.0, "embed.gde": 49.977777777777774, "embed.kde": 50.0, "embed.kocsvm": 49.977777777777774, "embed.locsvm": 50.022222222222226, "pool.auc": 60.998122222222214, "pool.gde": 60.27448888888889, "pool.kde": 56.09522222222223, "pool.kocsvm": 55.180677777777774, "pool.locsvm": 51.08432222222222 }

    Could you let me know how to reproduce your results?! Thank you:)

    opened by e0jun 4
  • Question about the paper "CutPaste: Self-Supervised Learning for Anomaly Detection and Localization"

    Thanks for your great work. I notice that you are also the authors of the paper "CutPaste: Self-Supervised Learning for Anomaly Detection and Localization", and I can't reproduce the CutPaste (normal CutPaste) results of the ResNet-18 experiment in Section A.1.
    About the experiment setting: the backbone is ResNet-18 + MLP head (trained from scratch). For example, for the capsule category of the MVTec dataset: 256 training epochs, 219 training samples, batch size 64, so one conventional epoch needs 4 steps. The paper states, "Note that, unlike conventional definition for an epoch, we define 256 parameter update steps as one epoch", so 65536 steps in total. The other parameters I set are aligned with the paper (including learning rate, weight decay, momentum).

    (The training loss, accuracy, learning rate, and epoch curves were attached as images.)

    I'm not sure whether 65536 steps is too many, but according to the loss curve it's kind of weird. Finally, the ROCAUC is , (paper: 87.9±0.7); I think my evaluation is correct. I tried evaluating pretrained EfficientNet-B4 and B5 without training (paper, Table 3).

    So, about the ResNet-18 experiment, could you please give me some advice?

    opened by Youskrpig 4
  • Use of projection head

    If I understand the code correctly, the experiments with projection heads (e.g. script/run_contrastive_da.sh) use the projection head at test time and not just at training time. This is from looking at resnet_util.ResNet, where the output embeds appears to always come from the projection head. So, in the notation of the paper (e.g. Figure 4), the contrastive results generated by this repo are all Contrastive (DA) g ∘ f.

    So in Table 2 of the paper, where it says Contrastive (DA), does that refer to using g ∘ f, since the script script/run_contrastive_da.sh is supposed to reproduce that row?

    That's a little confusing to me, since Table 4 shows that the best results are obtained without using the projection head at test time (i.e. in the second stage).

    Is my understanding correct?

    opened by ekorman 2
  • TypeError: Expected int32 passed to parameter 'shape' of op 'ScatterNd', got [3, None, 64, 64, 3]

    Hi, I am currently having a problem trying to replicate the results for CelebA. I am running run_contrastive.sh with these settings:

    DATA=celeba
    METHOD=Contrastive
    SEED=1
    CATEGORY=Eyeglasses
    MODEL_DIR='.'
    python train_and_eval_loop.py \
      --model_dir="${MODEL_DIR}" \
      --method=${METHOD} \
      --file_path="${DATA}_${PREFIX}_s${SEED}_c${CATEGORY}" \
      --dataset=${DATA} \
      --category=${CATEGORY} \
      --seed=${SEED} \
      --root='' \
      --net_type=ResNet18 \
      --net_width=1 \
      --latent_dim=0 \
      --aug_list="cnr0.5+hflip+jitter_b0.4_c0.4_s0.4_h0.4+gray0.2+blur0.5,+" \
      --aug_list_for_test="x" \
      --input_shape="64,64,3" \
      --optim_type=sgd \
      --sched_type=cos \
      --learning_rate=0.01 \
      --momentum=0.9 \
      --weight_decay=0.0003 \
      --head_dims="512,512,512,512,512,512,512,512,128" \
      --num_epoch=2048 \
      --batch_size=32 \
      --temperature=0.2 \
      --distaug_type 1
    

    The error I get is this:

    /test/contrastive.py:87 step_fn  *
            y = self.get_target_labels(
        /test/contrastive.py:69 get_target_labels  *
            x_concat = self.cross_replica_concat(x, replica_context=replica_context)
        /test/util/train.py:792 cross_replica_concat  *
            ext_tensor = tf.scatter_nd(
        /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_array_ops.py:8856 scatter_nd  **
            "ScatterNd", indices=indices, updates=updates, shape=shape, name=name)
        /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py:479 _apply_op_helper
            repr(values), type(values).__name__, err))
    
        TypeError: Expected int32 passed to parameter 'shape' of op 'ScatterNd', got [3, None, 64, 64, 3] of type 'list' instead. Error: Expected int32, got None of type 'NoneType' instead.
    
    

    Can you help me fix this?

    opened by Leggin 2
Owner
Google Research