Overview

Patch-NetVLAD: Multi-Scale Fusion of Locally-Global Descriptors for Place Recognition

This repository contains code for the CVPR2021 paper "Patch-NetVLAD: Multi-Scale Fusion of Locally-Global Descriptors for Place Recognition"

The paper can be found on arXiv and in the official proceedings.

Patch-NetVLAD method diagram

License + attribution/citation

When using code within this repository, please cite the following paper in your publications:

@inproceedings{hausler2021patchnetvlad,
  title={Patch-NetVLAD: Multi-Scale Fusion of Locally-Global Descriptors for Place Recognition},
  author={Hausler, Stephen and Garg, Sourav and Xu, Ming and Milford, Michael and Fischer, Tobias},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={14141--14152},
  year={2021}
}

The code is licensed under the MIT License.

Installation

We recommend using conda (or better: mamba) to install all dependencies. If you have not yet installed conda/mamba, please download and install mambaforge.

conda create -n patchnetvlad python=3.8 numpy pytorch-gpu torchvision natsort tqdm opencv pillow scikit-learn faiss matplotlib-base -c conda-forge

conda activate patchnetvlad

We provide several pre-trained models and configuration files. The pre-trained models will be downloaded automatically into the pretrained_models folder the first time feature extraction is performed.

Alternatively, you can manually download the pre-trained models into a folder of your choice. We recommend downloading them into the pretrained_models folder (which is set up in the config files within the configs directory):

# Note: the pre-trained models will be downloaded automatically the first time feature extraction is performed
# the steps below are optional!

# You can use the download script which automatically downloads the models:
python ./download_models.py

# Manual download:
cd pretrained_models
wget -O mapillary_WPCA128.pth.tar https://cloudstor.aarnet.edu.au/plus/s/vvr0jizjti0z2LR/download
wget -O mapillary_WPCA512.pth.tar https://cloudstor.aarnet.edu.au/plus/s/DFxbGgFwh1y1wAz/download
wget -O mapillary_WPCA4096.pth.tar https://cloudstor.aarnet.edu.au/plus/s/ZgW7DMEpeS47ELI/download
wget -O pittsburgh_WPCA128.pth.tar https://cloudstor.aarnet.edu.au/plus/s/2ORvaCckitjz4Sd/download
wget -O pittsburgh_WPCA512.pth.tar https://cloudstor.aarnet.edu.au/plus/s/WKl45MoboSyB4SH/download
wget -O pittsburgh_WPCA4096.pth.tar https://cloudstor.aarnet.edu.au/plus/s/1aoTGbFjsekeKlB/download

If you want to use the shortcuts patchnetvlad-match-two, patchnetvlad-feature-match and patchnetvlad-feature-extract (which also lets you use Patch-NetVLAD in a modular way), you also need to run:

pip3 install --no-deps -e .
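
Once installed, the shortcuts can be called in place of the corresponding scripts. A minimal usage sketch, assuming the console entry points accept the same arguments as the scripts they wrap:

patchnetvlad-match-two \
  --config_path patchnetvlad/configs/performance.ini \
  --first_im_path=patchnetvlad/example_images/tokyo_query.jpg \
  --second_im_path=patchnetvlad/example_images/tokyo_db.png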

Quick start

Feature extraction

Replace performance.ini with speed.ini or storage.ini if desired, and adapt the dataset paths. Examples are given for the Pittsburgh30k dataset; simply replace pitts30k with tokyo247 or nordland for those datasets.

python feature_extract.py \
  --config_path patchnetvlad/configs/performance.ini \
  --dataset_file_path=pitts30k_imageNames_index.txt \
  --dataset_root_dir=/path/to/your/pitts/dataset \
  --output_features_dir patchnetvlad/output_features/pitts30k_index

Repeat for the query images by replacing _index with _query. Note that you have to adapt dataset_root_dir.
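
For concreteness, the corresponding query extraction is a direct substitution in the command above (adapt the paths to your setup):

python feature_extract.py \
  --config_path patchnetvlad/configs/performance.ini \
  --dataset_file_path=pitts30k_imageNames_query.txt \
  --dataset_root_dir=/path/to/your/pitts/dataset \
  --output_features_dir patchnetvlad/output_features/pitts30k_query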

Feature matching (dataset)

python feature_match.py \
  --config_path patchnetvlad/configs/performance.ini \
  --dataset_root_dir=/path/to/your/pitts/dataset \
  --query_file_path=pitts30k_imageNames_query.txt \
  --index_file_path=pitts30k_imageNames_index.txt \
  --query_input_features_dir patchnetvlad/output_features/pitts30k_query \
  --index_input_features_dir patchnetvlad/output_features/pitts30k_index \
  --ground_truth_path patchnetvlad/dataset_gt_files/pitts30k_test.npz \
  --result_save_folder patchnetvlad/results/pitts30k

Note that providing ground_truth_path is optional.

This will create three output files in the folder specified by result_save_folder:

  • recalls.txt with a plain text output (only if ground_truth_path is specified)
  • NetVLAD_predictions.txt with the top 100 reference images for each query image, obtained using "vanilla" NetVLAD, in Kapture format
  • PatchNetVLAD_predictions.txt with the top 100 reference images from above, re-ranked by Patch-NetVLAD, again in Kapture format (a parsing sketch follows)
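
If you want to post-process the predictions, they are plain text and easy to parse. A minimal sketch, assuming each non-comment line lists the query image, reference image, and score separated by commas (as in the Kapture image-pairs convention) and that a larger score indicates a better match; adapt if your file layout differs:

from collections import defaultdict

# query image -> list of (reference image, score) pairs
predictions = defaultdict(list)
with open('patchnetvlad/results/pitts30k/PatchNetVLAD_predictions.txt') as f:
    for line in f:
        line = line.strip()
        if not line or line.startswith('#'):  # skip header/comment lines
            continue
        query, reference, score = (field.strip() for field in line.split(','))
        predictions[query].append((reference, float(score)))

# print the best-scoring reference for each query
for query, matches in predictions.items():
    best_ref, best_score = max(matches, key=lambda m: m[1])
    print(query, best_ref, best_score)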

Feature matching (two files)

We provide the match_two.py script, which computes the Patch-NetVLAD features for two given images and then determines the local feature matches between them. While we provide example images, any image pair can be used:

python match_two.py \
  --config_path patchnetvlad/configs/performance.ini \
  --first_im_path=patchnetvlad/example_images/tokyo_query.jpg \
  --second_im_path=patchnetvlad/example_images/tokyo_db.png

The script prints a score as output, where a larger score indicates more similar images. It also outputs a matching figure showing the patch correspondences (after RANSAC) between the two images; the figure is saved as results/patchMatchings.png.

FAQ

Patch-NetVLAD qualitative results

How to Create New Ground Truth Files

We provide three ready-to-go ground truth files in the dataset_gt_files folder; however, for evaluation on other datasets you will need to create your own .npz ground truth files. Each .npz file stores three variables: utmQ (a NumPy array of floats), utmDb (a NumPy array of floats) and posDistThr (a scalar NumPy float).

Each element of utmQ and utmDb must correspond to the respective row of the image list file. posDistThr is the ground-truth tolerance (typically in meters).
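
For intuition, here is a sketch of how such a file is typically consumed when computing recall: a retrieved database image counts as a correct match if its UTM position lies within posDistThr of the query's UTM position. The snippet below illustrates this with scikit-learn (illustrative only; the actual evaluation lives in this repository's matching code):

import numpy as np
from sklearn.neighbors import NearestNeighbors

gt = np.load('patchnetvlad/dataset_gt_files/pitts30k_test.npz')
utmQ, utmDb = gt['utmQ'], gt['utmDb']
posDistThr = float(gt['posDistThr'])

# for each query, find all database images within the ground-truth tolerance
knn = NearestNeighbors(n_jobs=-1)
knn.fit(utmDb)
_, positives = knn.radius_neighbors(utmQ, radius=posDistThr)

# positives[i] now holds the indices of all correct database matches for query i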

The following mock example details the steps required to create a new ground truth file (a code sketch follows the list):

  1. Collect GPS data for your query and database traverses and convert it to UTM coordinates. Ensure the data is sampled at the same rate as your images.
  2. Select your own choice of posDistThr value.
  3. Save these variables using NumPy, e.g.: np.savez('dataset_gt_files/my_dataset.npz', utmQ=my_utmQ, utmDb=my_utmDb, posDistThr=my_posDistThr)
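
Putting the steps together, here is a minimal sketch; the utm package, the GPS arrays, and the my_* names are assumptions for illustration, so substitute your own data:

import numpy as np
import utm  # assumption: pip install utm

# hypothetical GPS samples (latitude, longitude), one row per image,
# ordered identically to the corresponding image list files
query_gps = np.array([[-27.4698, 153.0251], [-27.4705, 153.0260]])
db_gps = np.array([[-27.4699, 153.0252], [-27.4710, 153.0265]])

def to_utm(gps):
    # keep only easting/northing from utm.from_latlon
    return np.array([utm.from_latlon(lat, lon)[:2] for lat, lon in gps])

my_utmQ = to_utm(query_gps)
my_utmDb = to_utm(db_gps)
my_posDistThr = 25.0  # ground-truth tolerance in meters; choose per dataset

np.savez('dataset_gt_files/my_dataset.npz',
         utmQ=my_utmQ, utmDb=my_utmDb, posDistThr=my_posDistThr)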

Acknowledgements

We would like to thank Gustavo Carneiro, Niko Suenderhauf and Mark Zolotas for their valuable comments in preparing this paper. This work received funding from the Australian Government, via grant AUSMURIB000001 associated with ONR MURI grant N00014-19-1-2571. The authors acknowledge continued support from the Queensland University of Technology (QUT) through the Centre for Robotics.

Related works

Please check out this collection of related works on place recognition.

Comments
  • Fixing val.py incorrect recalls issue #43

    Made changes to val.py that should fix the indices issue that was causing low reported recalls for Mapillary validation on the CPH and SF datasets. Without PCA, the recalls with NetVLAD (with the Mapillary-trained model) are now: 0.495, 0.65, 0.718, 0.77, 0.83, 0.868.

    opened by StephenHausler 10
  • About the result on Mapillary

    Thank you for this nice code base and paper. I am trying to reproduce the result on Mapillary following the training-from-scratch instructions. However, after training for multiple weeks, the results are still extremely low.

    Is the result before PCAW supposed to be like this? Or is something wrong? I just followed the instructions:

    python train.py \
      --config_path patchnetvlad/configs/train.ini \
      --cache_path=/path/to/your/desired/cache/folder \
      --save_path=/path/to/your/desired/checkpoint/save/folder \
      --dataset_root_dir=/path/to/your/mapillary/dataset

    No error is reported. Thank you so much for your time. Looking forward to hearing from you.

    opened by Jeff-Zilence 9
  • Downloading Nordland dataset

    Hi, will you release your version of the Nordland dataset? It would be great to have the chance to download the dataset directly, as it would avoid possible inconsistencies. Also, there shouldn't be any legal issue given that it is licensed under Creative Commons :-)

    opened by gmberton 6
  • permission for loading the model [certificate verify failed]

    Hi :) I installed all packages and tried to run the match_two.py file. When it asked for permission to auto-download the pretrained models, I typed "YES", but I got an error message: [certificate verify failed].

    I am using a Windows PC; please let me know how to solve this. Thanks.

    opened by jaaaaaang 6
  • Reproduce results of NetVLAD on RobotCar Seasons v2

    @Tobias-Fischer @oravus Hi, first of all, I really appreciate your work. I'm trying to reproduce the NetVLAD results on RobotCar Seasons v2 from Table 1 and Supplementary Table 1, but encountered some problems. I have extracted the global descriptors using NetVLAD with netvlad_extract.ini. Could you share the subsequent code to make reproducing the results easier? Perhaps detailed instructions would be even better. I am looking forward to your kind response. Best regards.

    opened by MAX-OTW 5
  • question about training loss

    Hi, I really appreciate the work! Your instructions made it easy to try. But I have a question about training. I tried to retrain the network (with the original VGG16 backbone, or changing it to ResNet18, cropped appropriately while keeping most of the parameters from the pretrained weights). I found that the loss went down rapidly in the first epoch and almost reached 0.00. Could you describe how your loss changed when you trained this network? Thank you very much!

    Best regards

    opened by LyyyyyyyN 5
  • how to use pool=patchnetvlad to train the network

    Dear,

    Thank you for the great job!

    I have a question about how to use the output of patchnetvlad to train the network, given that vlad_local and vlad_global are generated at the same time. Is it OK to use the vlad_global feature to compute the triplet loss?

    Best, Qiang

    opened by zhaiqx 5
  • Question about “posDistThr” of the Nordland dataset

    Hi, thanks for your work. I have a question about posDistThr for the Nordland dataset. This repository sets the UTM coordinate of the i-th frame to (i, i) in the Nordland.npz file, which allows a consistent evaluation function for R@N across all datasets. It's great! The tolerance for the Nordland dataset is 10 frames in the Patch-NetVLAD paper. However, the distance from the first frame (0, 0) to the 11th frame (10, 10) is 10*sqrt(2) rather than 10, while posDistThr is set to 10 in the Nordland.npz file in this repository. I wrote a script to calculate R@N on Nordland with a tolerance of 2 frames; the result is consistent with setting posDistThr to 2*sqrt(2) rather than 2.

    opened by Lu-Feng 4
  • Reproduce the results of DELG in the paper

    Hi, thank you for sharing this great work. In the official paper, DELG achieves the top performance on some datasets. May I know how to reproduce those results of DELG? Do you use the pre-trained model or fine-tune it on Pitts30k/MSLS?

    opened by hellocasper 4
  • Incorrect Aachen Day-Night result

    Hi, I tried to use your pretrained pittsburgh_WPCA4096 model to test on the Aachen Day-Night dataset, but got totally incorrect results. Here is my complete process:

    1. Generate Aachen_db_path.txt and Aachen_query_path.txt.

    2. Run feature_extract.py for the db and query images using the following two commands:

    python feature_extract.py \
      --config_path patchnetvlad/configs/speed.ini \
      --dataset_file_path=Aachen_db_path.txt \
      --output_features_dir patchnetvlad/output_features/aachen_index
    
    python feature_extract.py \
      --config_path patchnetvlad/configs/speed.ini \
      --dataset_file_path=Aachen_query_path.txt \
      --output_features_dir patchnetvlad/output_features/aachen_query
    
    3. Run feature_match.py for feature matching, without providing ground_truth_path:
    python feature_match.py \
      --config_path patchnetvlad/configs/speed.ini \
      --query_file_path=Aachen_query_path.txt \
      --index_file_path=Aachen_db_path.txt \
      --query_input_features_dir patchnetvlad/output_features/aachen_query \
      --index_input_features_dir patchnetvlad/output_features/aachen_index \
      --result_save_folder patchnetvlad/results/aachen_subset
    
    4. Then process PatchNetVLAD_predictions.txt, which contains the top 100 reference images for each query image. I used the pose of the top-1 reference image as the estimated pose of each query image and submitted the result to the benchmark, but got a totally wrong result.


    I want to know what's wrong with my steps and why this outputs such unacceptable results. Is the model version that I used wrong? Have you tested your models on the Aachen dataset? If you did, could you please provide the results?

    opened by HeartbreakSurvivor 4
  • About RobotCar v2 results

    Hi, I really appreciate your work. I'm trying to reproduce your results on the RobotCar Seasons v2 dataset and encountered some problems. Which dataset split is used in Tables 1 and 2 of the main paper, test or train? Also, do these results aggregate ALL conditions, including day and night?

    I noticed that Suppl. Table 1 reports results for each condition obtained from the training split. But when I tried to summarize the results of Suppl. Table 1 using statistics on the train query set, I failed to obtain the results in Table 1 of the main paper. So I'm a bit confused about the dataset settings.

    I am always looking forward to your kind response.

    Best regards,

    opened by RuotongWANG 4
  • Can't reproduce Robotcar results

    Hi, I tried to reproduce your results on the RobotCar Seasons v2 test set by submitting to the challenge submission server. I used the released performance-focused model pre-trained on the MSLS dataset, but got incorrect results. I also tried the model pre-trained on Pitts30k, and the results are not correct either. The results on other datasets are normal. Is the model version that I used wrong? Could you possibly release the model weights that achieve the RobotCar results shown in the paper? Or could you provide the results on the test set split by condition, as in Supplementary Table 1? Thank you so much.

    Best regards,

    opened by RuotongWANG 11
  • Train on custom dataset

    Hi, I really appreciate the work you've done. I wondered if the training code will be released, so that it can be used to train on our custom dataset. Thanks.

    Best regards.

    opened by JiananZhao0224 14
Releases
  • v0.1.6(Nov 11, 2021)

    Training code

    This release adds code to train NetVLAD on the Mapillary dataset, plus PCA training of models using either the Mapillary or Pittsburgh dataset (https://github.com/QVPR/Patch-NetVLAD/pull/20 and https://github.com/QVPR/Patch-NetVLAD/pull/39).

    Bugfixes

    • Support for Multi-GPU inference (https://github.com/QVPR/Patch-NetVLAD/pull/22 - thanks @michaelschleiss)
    • Fix in keypoint positions (https://github.com/QVPR/Patch-NetVLAD/commit/6005b555cf05414afac3f3c0203e22a249d05b91)
    • Fix recalls when validating on Mapillary dataset (https://github.com/QVPR/Patch-NetVLAD/pull/44)

    Full Changelog: https://github.com/QVPR/Patch-NetVLAD/compare/v0.1.5...v0.1.6

  • v0.1.5(Jun 22, 2021)

  • v0.1.4(Jun 11, 2021)

  • v0.1.2(Jun 2, 2021)

  • v0.1.1(Jun 2, 2021)
