
UOTA: Improving Self-supervised Learning with Automated Unsupervised Outlier Arbitration

This repository is the official PyTorch implementation of UOTA (Unsupervised OuTlier Arbitration).

0 Requirements

  • Python 3.6
  • PyTorch 1.6.0
  • torchvision 0.7.0
  • CUDA 10.1
  • NVIDIA Apex with CUDA extension
  • Other dependencies: opencv-python, scipy, pandas, numpy
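
As a quick sanity check of the environment before launching the commands below (a convenience snippet, not part of the repo):

import torch
import torchvision

print("torch:", torch.__version__)              # expected 1.6.0
print("torchvision:", torchvision.__version__)  # expected 0.7.0
print("cuda:", torch.version.cuda)              # expected 10.1
assert torch.cuda.is_available(), "a CUDA-capable GPU is required"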

1 Pretraining

We release a demo for pretraining ResNet-50 on ImageNet1K with SwAV+UOTA.

1.1 SwAV+UOTA pretrain

To train SwAV+UOTA on a single node with 4 GPUs for 200 epochs, run:

DATASET_PATH="path/to/ImageNet1K/train"
EXPERIMENT_PATH="path/to/experiment"

python -m torch.distributed.launch --nproc_per_node=4 main_uota.py \
--data_path ${DATASET_PATH} \
--nmb_crops 2 6 \
--size_crops 224 96 \
--min_scale_crops 0.14 0.05 \
--max_scale_crops 1. 0.14 \
--crops_for_assign 0 1 \
--use_pil_blur true \
--epochs 200 \
--warmup_epochs 0 \
--batch_size 64 \
--base_lr 0.6 \
--final_lr 0.0006 \
--uota_tau 350. \
--epoch_uota_starts 100 \
--wd 0.000001 \
--use_fp16 true \
--dist_url "tcp://localhost:40000" \
--arch uota_r50 \
--sync_bn pytorch \
--dump_path ${EXPERIMENT_PATH}
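
The UOTA-specific options above are --uota_tau and --epoch_uota_starts. As a rough illustration only (the real logic lives in main_uota.py and may differ), one way to realize the idea of down-weighting outlier views is to score each view by its distance to the batch feature mean and turn the scores into per-view loss weights; treating --uota_tau as a softmax temperature here is an assumption:

import torch

def view_weights(features, tau=350.0):
    # features: (N, D) L2-normalized embeddings of the augmented views in a batch
    center = features.mean(dim=0, keepdim=True)                     # batch centroid
    dist = (features - center).pow(2).sum(dim=1)                    # squared distance of each view to the centroid
    weights = torch.softmax(-dist / tau, dim=0) * features.size(0)  # outlying views get smaller weights, mean weight ~ 1
    return weights

# Each view's loss term would be multiplied by its weight, and the weighting
# would only be switched on after --epoch_uota_starts epochs.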

2 Linear Evaluation

To train a linear classifier on frozen features extracted from a backbone pretrained with a self-supervised method, run:

DATASET_PATH="path/to/ImageNet1K"
EXPERIMENT_PATH="path/to/experiment"
LINCLS_PATH="path/to/lincls"

python -m torch.distributed.launch --nproc_per_node=4 eval_linear.py \
--data_path ${DATASET_PATH} \
--arch resnet50 \
--lr 1.2 \
--dump_path ${LINCLS_PATH} \
--pretrained ${EXPERIMENT_PATH}/swav_uota_r50_e200_pretrained.pth \
--batch_size 64 \
--num_classes 1000
3 Results

For a fair comparison with SwAV, we provide a SwAV+UOTA ResNet-50 model pretrained on ImageNet1K for 200 epochs, and release both the pretrained weights and the linear classifier.

method | epochs | batch size | multi-crop | ImageNet1K top-1 acc. | pretrained model | linear classifier
SwAV | 200 | 256 | 2x224 + 6x96 | 72.7 | / | /
SwAV + UOTA | 200 | 256 | 2x224 + 6x96 | 73.5 | pretrained | linear
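
To reuse the released weights outside this repo, they can be loaded into a plain torchvision ResNet-50. This is a sketch under assumptions about the checkpoint layout (a possible "module." prefix from distributed training and extra projection-head keys); adjust to the actual file:

import torch
import torchvision.models as models

model = models.resnet50()
ckpt = torch.load("swav_uota_r50_e200_pretrained.pth", map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt)                                   # some checkpoints nest the weights
state_dict = {k.replace("module.", ""): v for k, v in state_dict.items()}   # strip a DDP "module." prefix if present
missing, unexpected = model.load_state_dict(state_dict, strict=False)       # ignore projection-head / prototype keys
print("missing keys:", missing)
print("unexpected keys:", unexpected)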

4 Citation

@InProceedings{wang2021NeurIPS,
  title={Improving Self-supervised Learning with Automated Unsupervised Outlier Arbitration},
  author={Wang, Yu and Lin, Jingyang and Zou, Jingjing and Pan, Yingwei and Yao, Ting and Mei, Tao},
  booktitle={NeurIPS},
  year={2021},
}