SSR
(NeurIPS 2021) PyTorch implementation of the paper "Re-ranking for image retrieval and transductive few-shot classification"
[Paper] [Project webpage] [Video] [Slide]
The project is an extension work to SIB. If our project is helpful for your research, please consider citing :
@inproceedings{shen2021reranking,
title={Re-ranking for image retrieval and transductive few-shot classification},
author={Shen, Xi and Xiao, Yang and Hu, Shell Xu and Sbai, Othman and Aubry, Mathieu},
booktitle={Conference on Neural Information Processing Systems (NeurIPS)},
year={2021}
}
Table of Contents
1. Installation
The code is tested under a PyTorch > 1.0 + Python 3.6 environment.
Please refer to image retrieval and transductive few-shot classification to download datasets.
2. Methods and Results
SSR learns to update a similarity graph.
It decomposes the N * N similarity graph into N subgraphs, where the rows and columns of each subgraph are ordered by similarity to that subgraph's reference image.
The output of SSR is an improved similarity matrix.
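The decomposition described above can be sketched as follows. This is a hypothetical illustration (function and variable names are ours, not from the released code): for each reference image, we keep its top-k neighbours, ordered by decreasing similarity to the reference, and extract the corresponding k * k subgraph.

```python
import torch

def build_subgraphs(features, k=5):
    """Decompose an N x N similarity graph into N ordered subgraphs.

    For each reference image i, the rows/columns of its k x k subgraph
    are the top-k neighbours of i, ordered by decreasing similarity to i.
    Illustrative sketch only; the released SSR code may differ.
    """
    feats = torch.nn.functional.normalize(features, dim=1)
    sim = feats @ feats.t()                       # N x N cosine similarity
    _, order = sim.sort(dim=1, descending=True)   # neighbours per reference
    topk = order[:, :k]                           # N x k neighbour indices
    # Gather one k x k subgraph per reference image:
    # subgraphs[n, i, j] = sim[topk[n, i], topk[n, j]]
    subgraphs = sim[topk.unsqueeze(2), topk.unsqueeze(1)]  # N x k x k
    return subgraphs
```

Since cosine self-similarity is maximal, the first row and column of each subgraph correspond to the reference image itself.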
2.1 Image retrieval
2.1.1 SSR module
Rows: the subgraph reference image (red) and the query image (green);
Columns: the top retrieved images for the query (green), ordered according to their similarity to the reference image (red).
2.1.2 Results
To reproduce the results on the image retrieval datasets (rOxford5k, rParis6k), please refer to Image Retrieval.
2.2 Transductive few-shot classification
2.2.1 SSR module
We illustrate our idea with a 1-shot 2-way example:
Rows: the subgraph reference image (red) and the support set S;
Columns: the support set S and the query set Q. Both S and Q are ordered according to the reference image (red).
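The ordering described above can be sketched as follows. This is a hypothetical helper (names are ours, not from the released code): both the support set S and the query set Q are sorted by decreasing cosine similarity to the reference image before the sub-matrix is formed.

```python
import torch

def order_by_reference(ref, support, query):
    """Build the ordered similarity sub-matrix for one reference image.

    Rows = [reference; support], columns = [support; query], with both
    S and Q sorted by decreasing cosine similarity to the reference.
    Illustrative sketch only; the released SSR code may differ.
    """
    ref = torch.nn.functional.normalize(ref, dim=0)
    support = torch.nn.functional.normalize(support, dim=1)
    query = torch.nn.functional.normalize(query, dim=1)
    s_order = (support @ ref).argsort(descending=True)  # order S by sim to ref
    q_order = (query @ ref).argsort(descending=True)    # order Q by sim to ref
    rows = torch.cat([ref.unsqueeze(0), support[s_order]], dim=0)
    cols = torch.cat([support[s_order], query[q_order]], dim=0)
    return rows @ cols.t()  # (1 + |S|) x (|S| + |Q|) similarity sub-matrix
```

For a 1-shot 2-way episode (|S| = 2), the resulting sub-matrix has 3 rows (reference + 2 support images) and 2 + |Q| columns.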
2.2.2 Results
To reproduce the results on the few-shot datasets (CIFAR-FS, Mini-ImageNet, TieredImageNet), please refer to transductive few-shot classification.
3. Acknowledgement
- The implementation of k-reciprocal re-ranking is adapted from its public code.
- The implementation of few-shot training, evaluation and synthetic gradients is adapted from SIB.
4. ChangeLog
- 2021/10/29: model, evaluation and training code released
5. License
This code is distributed under an MIT license.
Note that our code depends on PyTorch and uses datasets that each have their own respective licenses, which must also be followed.