Asymmetric metric learning for knowledge transfer

Asymmetric metric learning

This is the official code for reproducing the results of our paper:

Asymmetric metric learning for knowledge transfer, Budnik M., Avrithis Y. [arXiv]

Content

This repository provides the means to train and test all the models presented in the paper. This includes:

  1. Code to train the models with and without the teacher (asymmetric and symmetric).
  2. Code to do symmetric and asymmetric testing on rOxford and rParis datasets.
  3. Best pre-trained models (including whitening).

Dependencies

  1. Python3 (tested on version 3.6)
  2. Numpy 1.19
  3. PyTorch (tested on version 1.4.0)
  4. Datasets and base models will be downloaded automatically.

Training and testing the networks

To train a model use the following script:

python main.py [-h] [--training-dataset DATASET] [--directory EXPORT_DIR] [--no-val]
                  [--test-datasets DATASETS] [--test-whiten DATASET]
                  [--val-freq N] [--save-freq N] [--arch ARCH] [--pool POOL]
                  [--local-whitening] [--regional] [--whitening]
                  [--not-pretrained] [--loss LOSS] [--loss-margin LM] 
                  [--mode MODE] [--teacher TEACHER] [--sym]
                  [--feat-path FEAT] [--feat-val-path FEATVAL]
                  [--image-size N] [--neg-num N] [--query-size N]
                  [--pool-size N] [--gpu-id N] [--workers N] [--epochs N]
                  [--batch-size N] [--optimizer OPTIMIZER] [--lr LR]
                  [--momentum M] [--weight-decay W] [--print-freq N]
                  [--resume FILENAME] [--comment COMMENT] 
                  

Most parameters are the same as in CNN Image Retrieval in PyTorch. Here we describe the parameters added or modified in this work:
--arch - architecture of the model to be trained, in our case the student.
--mode - the training mode, which determines how the dataset is handled, e.g. whether tuples are constructed randomly or with mining, and which examples come from the teacher vs. the student. For example, with --loss set to 'contrastive', 'ts' enables standard student-teacher training (with mining), 'ts_self' trains using the Contr+ approach, and 'reg' uses regression. No mining is used with 'rand' or 'reg'. With 'std', training follows the original protocol from CNN Image Retrieval in PyTorch (the teacher model is not used).
--teacher - the teacher model (vgg16 or resnet101). Note that this parameter makes the last layer of the student match that of the teacher, so it can also be used in standard symmetric training.
--sym - a flag that indicates whether the training should be symmetric or asymmetric.
--feat-path and --feat-val-path - paths to the extracted teacher features (training and validation, respectively) used to train the student. The features can be extracted using the extract_features.py script. An example training command is shown below.
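
As a concrete illustration of how these options fit together, a hypothetical asymmetric training run of a student with a vgg16 teacher and contrastive loss could look like the following. The dataset name, student architecture string, feature paths, and export directory are placeholders, not values prescribed by the paper:

# hypothetical invocation; dataset, architecture, feature paths and export directory are placeholders
python main.py --training-dataset retrieval-SfM-120k --arch mobilenet-v2 --pool gem \
               --loss contrastive --mode ts --teacher vgg16 \
               --feat-path features/vgg16_train --feat-val-path features/vgg16_val \
               --directory exports/mobilenet-v2-gem-ts-vgg16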

To perform a symmetric test of a model that has already been trained:

python test.py [-h] (--network-path NETWORK | --network-offtheshelf NETWORK)
               [--datasets DATASETS] [--image-size N] [--multiscale MULTISCALE] 
               [--whitening WHITENING] [--teacher TEACHER]

For asymmetric testing:

python test.py [-h] (--network-path NETWORK | --network-offtheshelf NETWORK)
               [--datasets DATASETS] [--image-size N] [--multiscale MULTISCALE] 
               [--whitening WHITENING] [--teacher TEACHER] [--asym]

Examples:

Perform a symmetric test with a pre-trained model:

python test.py -npath mobilenet-v2-gem-contr-vgg16 -d 'roxford5k,rparis6k' -ms '[1, 1/2**(1/2), 1/2]' -w retrieval-SfM-120k --teacher vgg16

For an asymmetric test:

python test.py -npath mobilenet-v2-gem-contr-vgg16 -d 'roxford5k,rparis6k' -ms '[1, 1/2**(1/2), 1/2]' -w retrieval-SfM-120k --teacher vgg16 --asym

If you are interested in just the trained models, you can find the links to them in the test.py file.

Acknowledgements

This code is adapted from the excellent repository by F. Radenović: CNN Image Retrieval in PyTorch: Training and evaluating CNNs for Image Retrieval in PyTorch.
