[TCSVT] Each Part Matters: Local Patterns Facilitate Cross-view Geo-localization
LPN
Prerequisites
- Python 3.6
- GPU Memory >= 8G
- Numpy > 1.12.1
- Pytorch 0.3+
- scipy == 1.2.1
- [Optional] apex (for float16)
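A quick way to verify that the environment matches the versions above is to print them from Python. This is a minimal sketch; the expected values in the comments simply mirror the list above.

```python
# check_env.py -- minimal environment sanity check (sketch)
import sys
import numpy
import scipy
import torch

print('Python :', sys.version.split()[0])   # expect 3.6+
print('Numpy  :', numpy.__version__)        # expect > 1.12.1
print('Scipy  :', scipy.__version__)        # expect 1.2.1
print('Pytorch:', torch.__version__)        # expect 0.3+
print('CUDA available:', torch.cuda.is_available())
if torch.cuda.is_available():
    # >= 8G of GPU memory is recommended for training
    prop = torch.cuda.get_device_properties(0)
    print('GPU memory (GB):', round(prop.total_memory / 1024 ** 3, 1))
```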
Getting started
Dataset & Preparation
Download University-1652 upon request. You may use the request template.
For CVUSA, we follow the training/test split used in OriCNN (https://github.com/Liumouliu/OriCNN).
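Before launching training, it may help to confirm that the datasets are unpacked where your scripts expect them. The snippet below is only a sketch: the root paths `./data/University-1652` and `./data/CVUSA` are assumptions, so adjust them to your local layout.

```python
import os

# Assumed dataset locations; change them to match where you unpacked the data.
university_root = './data/University-1652'
cvusa_root = './data/CVUSA'

# University-1652 provides train/ and test/ splits.
for split in ['train', 'test']:
    path = os.path.join(university_root, split)
    print(path, 'found' if os.path.isdir(path) else 'MISSING')

# CVUSA is split following OriCNN (see the link above).
print(cvusa_root, 'found' if os.path.isdir(cvusa_root) else 'MISSING')
```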
Train & Evaluation
Train & Evaluation University-1652
sh run.sh
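run.sh trains and evaluates LPN on University-1652. The core of the method is the square-ring partition of the backbone feature map, where each concentric ring is pooled into its own part-level descriptor. Below is a minimal PyTorch sketch of that partition strategy for illustration only; it is not the repository's implementation, and the function name `square_ring_pool`, the ring ordering, and the default of 4 blocks are assumptions.

```python
import torch

def square_ring_pool(feat, num_blocks=4):
    """Average-pool a feature map into concentric square-ring parts (sketch).

    feat: (B, C, H, W) backbone feature map.
    Returns: (B, C, num_blocks) tensor, one descriptor per ring,
    ordered here from the innermost square to the outermost ring.
    """
    B, C, H, W = feat.shape
    parts = []
    prev_sum = torch.zeros(B, C, device=feat.device, dtype=feat.dtype)
    prev_cnt = 0
    for i in range(1, num_blocks + 1):
        # The i-th nested square covers the central (i / num_blocks) fraction
        # of the map; the ring is this square minus the previous one.
        h0 = int(round(H / 2 * (1 - i / num_blocks)))
        w0 = int(round(W / 2 * (1 - i / num_blocks)))
        cur = feat[:, :, h0:H - h0, w0:W - w0]
        cur_sum = cur.sum(dim=(2, 3))
        cur_cnt = cur.shape[2] * cur.shape[3]
        ring_mean = (cur_sum - prev_sum) / max(cur_cnt - prev_cnt, 1)
        parts.append(ring_mean)
        prev_sum, prev_cnt = cur_sum, cur_cnt
    return torch.stack(parts, dim=2)  # (B, C, num_blocks)


# Toy usage: a fake 8x8 feature map split into 4 rings.
if __name__ == '__main__':
    x = torch.randn(2, 512, 8, 8)
    print(square_ring_pool(x).shape)  # torch.Size([2, 512, 4])
```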
Train & Evaluation CVUSA
python prepare_cvusa.py
sh run_cvusa.sh
Train & Evaluation CVACT
python prepare_cvact.py
sh run_cvact.sh
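For all three datasets, evaluation reduces to ranking gallery features against query features and reporting Recall@K. The sketch below shows this generic protocol with cosine similarity on L2-normalized features; it is an illustration of the metric, not the repository's evaluation script, and all names in it are assumptions.

```python
import torch

def recall_at_k(query, gallery, query_labels, gallery_labels, ks=(1, 5, 10)):
    """Compute Recall@K for cross-view retrieval (illustrative sketch).

    query:   (Nq, D) L2-normalized query features.
    gallery: (Ng, D) L2-normalized gallery features.
    """
    sim = query @ gallery.t()                    # cosine similarity
    ranks = sim.argsort(dim=1, descending=True)  # best match first
    hits = gallery_labels[ranks] == query_labels.unsqueeze(1)
    results = {}
    for k in ks:
        results[f'R@{k}'] = hits[:, :k].any(dim=1).float().mean().item()
    return results


# Toy usage with random features; real features come from the trained model.
if __name__ == '__main__':
    q = torch.nn.functional.normalize(torch.randn(10, 512), dim=1)
    g = torch.nn.functional.normalize(torch.randn(50, 512), dim=1)
    ql = torch.arange(10)
    gl = torch.randint(0, 10, (50,))
    print(recall_at_k(q, g, ql, gl))
```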
Citation
@article{wang2021LPN,
title={Each Part Matters: Local Patterns Facilitate Cross-view Geo-localization},
author={Wang, Tingyu and Zheng, Zhedong and Yan, Chenggang and Zhang, Jiyong and Sun, Yaoqi and Zheng, Bolun and Yang, Yi},
journal={IEEE Transactions on Circuits and Systems for Video Technology},
year={2021},
publisher={IEEE},
note={doi: \href{http://dx.doi.org/10.1109/TCSVT.2021.3061265}{10.1109/TCSVT.2021.3061265}}
}
@article{zheng2020university,
title={University-1652: A Multi-view Multi-source Benchmark for Drone-based Geo-localization},
author={Zheng, Zhedong and Wei, Yunchao and Yang, Yi},
journal={ACM Multimedia},
year={2020}
}