# Asym-Siam: On the Importance of Asymmetry for Siamese Representation Learning
This is a PyTorch implementation of the Asym-Siam paper, CVPR 2022:
```
@inproceedings{wang2022asym,
  title     = {On the Importance of Asymmetry for Siamese Representation Learning},
  author    = {Xiao Wang and Haoqi Fan and Yuandong Tian and Daisuke Kihara and Xinlei Chen},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2022}
}
```
The pre-training code is built on MoCo, with additional designs described and analyzed in the paper.
The linear classification code is from SimSiam, which uses the LARS optimizer.
## Installation

- Install PyTorch and the ImageNet dataset following the official PyTorch ImageNet training code.
- Install apex for the LARS optimizer used in linear classification. If apex is hard to install, it suffices to copy its LARS code directly into your project; see the sketch after this list.
- Clone the repository:

```
git clone https://github.com/facebookresearch/asym-siam && cd asym-siam
```
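For reference, below is a minimal sketch of how a LARS-style optimizer is typically wired up once apex is available: apex ships LARC, a LARS variant that wraps a base SGD optimizer. The hyper-parameter values here are illustrative, not this repo's exact settings.

```python
# Minimal sketch: wrap SGD with apex's LARC (a LARS variant).
# Hyper-parameter values are illustrative, not this repo's settings.
import torch
import torchvision
from apex.parallel.LARC import LARC

model = torchvision.models.resnet50()
base_optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                                 momentum=0.9, weight_decay=0.0)
optimizer = LARC(optimizer=base_optimizer, trust_coefficient=0.001, clip=False)
```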
## 1 Unsupervised Training
This implementation only supports multi-gpu, DistributedDataParallel training, which is faster and simpler; single-gpu or DataParallel training is not supported.
### 1.1 Our MoCo Baseline (BN in projector MLP)
To do unsupervised pre-training of a ResNet-50 model on ImageNet on an 8-GPU machine, run:
```
python main_moco.py \
  -a resnet50 \
  --lr 0.03 \
  --batch-size 256 \
  --dist-url 'tcp://localhost:10001' --multiprocessing-distributed --world-size 1 --rank 0 \
  [your imagenet-folder with train and val folders]
```
This script uses all the default hyper-parameters described in the MoCo v2 paper; the only change is that the projector is upgraded to an MLP with a BN layer.
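For illustration, here is a minimal sketch of such a projector, assuming common MoCo v2 dimensions (the `projection_mlp` helper is hypothetical, not the repo's exact module):

```python
# Minimal sketch of a 2-layer projector MLP with a BN layer.
# Dimensions follow common MoCo v2 defaults, not necessarily this repo's.
import torch.nn as nn

def projection_mlp(in_dim=2048, hidden_dim=2048, out_dim=128):
    return nn.Sequential(
        nn.Linear(in_dim, hidden_dim),
        nn.BatchNorm1d(hidden_dim),
        nn.ReLU(inplace=True),
        nn.Linear(hidden_dim, out_dim),
    )
```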
### 1.2 MoCo + MultiCrop
```
python main_moco.py \
  -a resnet50 \
  --lr 0.03 \
  --batch-size 256 \
  --dist-url 'tcp://localhost:10001' --multiprocessing-distributed --world-size 1 --rank 0 \
  [your imagenet-folder with train and val folders] --enable-multicrop
```
Simply passing --enable-multicrop turns on asymmetric MultiCrop on the source side.
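As a rough illustration of what asymmetric MultiCrop means, the sketch below adds several low-resolution crops on the source side only. The crop count, size, and scale range are assumptions borrowed from common multi-crop recipes, not this repo's exact values.

```python
# Illustrative sketch of asymmetric MultiCrop: the source branch
# additionally receives several small low-resolution crops.
# Crop count/size/scale are assumptions, not this repo's exact values.
from torchvision import transforms

small_crop = transforms.Compose([
    transforms.RandomResizedCrop(96, scale=(0.05, 0.14)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

def source_multicrop_views(img, n_small=4):
    """Return extra small views for the source branch only."""
    return [small_crop(img) for _ in range(n_small)]
```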
### 1.3 MoCo + ScaleMix
```
python main_moco.py \
  -a resnet50 \
  --lr 0.03 \
  --batch-size 256 \
  --dist-url 'tcp://localhost:10001' --multiprocessing-distributed --world-size 1 --rank 0 \
  [your imagenet-folder with train and val folders] --enable-scalemix
```
Simply passing --enable-scalemix turns on asymmetric ScaleMix on the source side.
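ScaleMix mixes two views of the same image at different scales into one new view. Below is a hedged sketch in the spirit of CutMix between views; the box-sampling scheme is an assumption, not this repo's exact implementation.

```python
# Hedged sketch of ScaleMix: paste a random rectangular region of one
# view onto another view of the same image (CutMix-style, but between
# two augmented views). The box sampling here is an assumption.
import torch

def scalemix(view1, view2):
    # view1, view2: (C, H, W) tensors, two views of one image
    _, h, w = view1.shape
    frac = torch.empty(1).uniform_(0.0, 1.0).item()
    cut_h, cut_w = int(h * frac), int(w * frac)
    top = torch.randint(0, h - cut_h + 1, (1,)).item()
    left = torch.randint(0, w - cut_w + 1, (1,)).item()
    mixed = view1.clone()
    mixed[:, top:top + cut_h, left:left + cut_w] = \
        view2[:, top:top + cut_h, left:left + cut_w]
    return mixed
```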
### 1.4 MoCo + AsymAug
```
python main_moco.py \
  -a resnet50 \
  --lr 0.03 \
  --batch-size 256 \
  --dist-url 'tcp://localhost:10001' --multiprocessing-distributed --world-size 1 --rank 0 \
  [your imagenet-folder with train and val folders] --enable-asymm-aug
```
Simply passing --enable-asymm-aug applies stronger augmentation on the source side and weaker augmentation on the target side.
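The sketch below illustrates one plausible strong/weak pair (a MoCo v2-style recipe for the source, crop-and-flip for the target); the exact ops and parameters in this repo may differ.

```python
# Illustrative strong (source) vs. weak (target) augmentation pair.
# Ops and parameters are assumptions; this repo's recipes may differ.
from torchvision import transforms

strong_aug = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.2, 1.0)),
    transforms.RandomApply([transforms.ColorJitter(0.4, 0.4, 0.4, 0.1)], p=0.8),
    transforms.RandomGrayscale(p=0.2),
    transforms.RandomApply([transforms.GaussianBlur(23, sigma=(0.1, 2.0))], p=0.5),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

weak_aug = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.2, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
```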
### 1.5 MoCo + AsymBN
```
python main_moco.py \
  -a resnet50 \
  --lr 0.03 \
  --batch-size 256 \
  --dist-url 'tcp://localhost:10001' --multiprocessing-distributed --world-size 1 --rank 0 \
  [your imagenet-folder with train and val folders] --enable-asym-bn
```
Simply passing --enable-asym-bn turns on asymmetric BN on the target side (synchronized BN for the target encoder).
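A minimal sketch of applying synchronized BN to the target encoder only, using torch's built-in converter (whether this repo uses this exact converter is an assumption):

```python
# Minimal sketch: keep per-GPU BN in the source encoder, but convert the
# target (momentum) encoder's BN layers to SyncBatchNorm so its statistics
# are aggregated across GPUs (effective under DistributedDataParallel).
import torch.nn as nn
import torchvision

source_encoder = torchvision.models.resnet50()   # regular per-GPU BN
target_encoder = torchvision.models.resnet50()
target_encoder = nn.SyncBatchNorm.convert_sync_batchnorm(target_encoder)
```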
### 1.6 MoCo + MeanEnc
```
python main_moco.py \
  -a resnet50 \
  --lr 0.03 \
  --batch-size 256 \
  --dist-url 'tcp://localhost:10001' --multiprocessing-distributed --world-size 1 --rank 0 \
  [your imagenet-folder with train and val folders] --enable-mean-encoding
```
Simply passing --enable-mean-encoding turns on MeanEnc on the target side.
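Conceptually, MeanEnc replaces the single target feature with an average over several augmented views. A hedged sketch follows; the number of views, the normalization placement, and the `mean_encoding` helper are illustrative assumptions.

```python
# Hedged sketch of MeanEnc: average the target encoder's (normalized)
# outputs over K augmented views of the same image. K and the
# normalization placement are assumptions.
import torch
import torch.nn.functional as F

@torch.no_grad()
def mean_encoding(target_encoder, views):
    # views: list of K batches, each of shape (B, C, H, W)
    feats = [F.normalize(target_encoder(v), dim=1) for v in views]
    return F.normalize(torch.stack(feats, 0).mean(0), dim=1)
```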
## 2 Linear Classification
With a pre-trained model, to train a supervised linear classifier on frozen features/weights, run:
```
python main_lincls.py \
  -a resnet50 \
  --lars \
  --dist-url 'tcp://localhost:10001' --multiprocessing-distributed --world-size 1 --rank 0 \
  --pretrained [your checkpoint path] \
  [your imagenet-folder with train and val folders]
```
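Here "frozen features/weights" means only the final linear layer is trained. A minimal sketch of that setup, following common practice in the MoCo/SimSiam line of work (this repo's exact initialization may differ):

```python
# Minimal sketch of linear evaluation on frozen features: freeze every
# backbone parameter and train only the final fc layer. The init values
# follow common practice and may differ from this repo.
import torchvision

model = torchvision.models.resnet50()
for name, param in model.named_parameters():
    if name not in ("fc.weight", "fc.bias"):
        param.requires_grad = False
model.fc.weight.data.normal_(mean=0.0, std=0.01)
model.fc.bias.data.zero_()
```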
Linear classification results on ImageNet using this repo with 8 NVIDIA V100 GPUs:
| Method | pre-train epochs | pre-train time | top-1 acc. (%) | model | md5 |
|---|---|---|---|---|---|
| Our MoCo | 100 | 23.6h | 65.8 | download | e82ede |
| MoCo + MultiCrop | 100 | 50.8h | 69.9 | download | 892916 |
| MoCo + ScaleMix | 100 | 30.7h | 67.6 | download | 3f5d79 |
| MoCo + AsymAug | 100 | 24.0h | 67.2 | download | d94e24 |
| MoCo + AsymBN | 100 | 23.8h | 66.3 | download | 2bf912 |
| MoCo + MeanEnc | 100 | 32.2h | 67.7 | download | 599801 |
## License
This project is under the CC-BY-NC 4.0 license. See LICENSE for details.