Rotation Robust Descriptors

Overview

RoRD

Rotation-Robust Descriptors and Orthographic Views for Local Feature Matching

Project Page | Paper link

[Figure: RoRD pipeline overview]

Evaluation and Datasets

Pretrained Models

Download the models from Google Drive (73.9 MB) and place them in the base directory.

Evaluating RoRD

You can evaluate RoRD on the provided demo images or on your own custom images; a sketch for batch-matching multiple image pairs follows the example outputs below.

  1. Dependencies can be installed in a conda or virtualenv environment by running:
    1. pip install -r requirements.txt
  2. python extractMatch.py <rgb_image1> <rgb_image2> --model_file <path to the RoRD model file>
  3. Example:
    python extractMatch.py demo/rgb/rgb1_1.jpg demo/rgb/rgb1_2.jpg --model_file models/rord.pth
  4. This should give you output like this:

[Figure: RoRD matches on the demo pair]

[Figure: SIFT matches on the demo pair, for comparison]
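For custom images, the same script can be driven in a loop. The sketch below is only an illustration: the extractMatch.py arguments are the ones shown above, while the pair list is a placeholder you would replace with your own files.

    # run_pairs.py -- minimal sketch for batch-matching image pairs with extractMatch.py.
    import subprocess

    pairs = [
        ("demo/rgb/rgb1_1.jpg", "demo/rgb/rgb1_2.jpg"),
        # ("my_images/a.jpg", "my_images/b.jpg"),  # add your own pairs here
    ]

    for img1, img2 in pairs:
        # Invoke the matching script exactly as in the example above.
        subprocess.run(
            ["python", "extractMatch.py", img1, img2,
             "--model_file", "models/rord.pth"],
            check=True,
        )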

DiverseView Dataset

Download the dataset from Google Drive (97.8 MB) and place it in the base directory (only needed if you want to evaluate on the DiverseView Dataset).

Evaluation on DiverseView Dataset

The DiverseView Dataset is a custom dataset consisting of 4 scenes whose images exhibit large camera rotations and viewpoint changes.

  1. Pose estimation on a single image pair of the DiverseView dataset:
    1. cd demo
    2. python register.py --rgb1 <path to rgb image 1> --rgb2 <path to rgb image 2> --depth1 <path to depth image 1> --depth2 <path to depth image 2> --model_rord <path to the RoRD model file>
    3. Example:
      python register.py --rgb1 rgb/rgb2_1.jpg --rgb2 rgb/rgb2_2.jpg --depth1 depth/depth2_1.png --depth2 depth/depth2_2.png --model_rord ../models/rord.pth
    4. This should give you output like the following; a sketch of how the rigid transform can be estimated from matched 3D points follows the example outputs:

[Figure: RoRD matches in perspective view]

[Figure: RoRD matches in orthographic view]
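register.py recovers the relative pose from matched keypoints back-projected to 3D with the depth images. The repository's exact implementation may differ, but the core step can be sketched as the standard Kabsch/Umeyama alignment of 3D-3D correspondences (typically wrapped in RANSAC to reject bad matches):

    import numpy as np

    def rigid_transform_3d(src, dst):
        # Estimate a 4x4 rigid transform T with dst ~ R @ src + t (Kabsch/Umeyama).
        # src, dst: (N, 3) arrays of corresponding 3D points, e.g. matched keypoints
        # back-projected using the depth images and camera intrinsics.
        src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
        H = (src - src_mean).T @ (dst - dst_mean)   # 3x3 cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                    # correct an improper rotation (reflection)
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = dst_mean - R @ src_mean
        T = np.eye(4)
        T[:3, :3], T[:3, 3] = R, t
        return T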

  2. To visualize the registered point cloud, pass the --viz3d flag (a standalone Open3D sketch follows the figure below):
    1. python register.py --rgb1 rgb/rgb2_1.jpg --rgb2 rgb/rgb2_2.jpg --depth1 depth/depth2_1.png --depth2 depth/depth2_2.png --model_rord ../models/rord.pth --viz3d

[Figure: point cloud registration using correspondences]
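To inspect a registration outside of register.py, the two clouds can be rebuilt and overlaid with Open3D. A minimal sketch, assuming a 4x4 transform T (for example one estimated as above) and generic PrimeSense intrinsics; the intrinsics actually used by the demo are defined in the repository and should be substituted.

    import numpy as np
    import open3d as o3d

    def make_cloud(rgb_path, depth_path, intrinsic):
        # Back-project an RGB-D pair into a colored point cloud.
        color = o3d.io.read_image(rgb_path)
        depth = o3d.io.read_image(depth_path)
        rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
            color, depth, convert_rgb_to_intensity=False)
        return o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsic)

    # Placeholder intrinsics; replace with the demo camera parameters.
    intrinsic = o3d.camera.PinholeCameraIntrinsic(
        o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault)

    cloud1 = make_cloud("rgb/rgb2_1.jpg", "depth/depth2_1.png", intrinsic)
    cloud2 = make_cloud("rgb/rgb2_2.jpg", "depth/depth2_2.png", intrinsic)

    T = np.eye(4)          # 4x4 transform aligning cloud2 to cloud1 (assumed given)
    cloud2.transform(T)
    o3d.visualization.draw_geometries([cloud1, cloud2])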

  3. Pose estimation on a sequence of the DiverseView dataset:
    1. cd evaluation/DiverseView/
    2. python evalRT.py --dataset <path to DiverseView dataset> --sequence <sequence name> --model_rord <path to RoRD model> --output_dir <name of output dir>
    3. Example:
      1. python evalRT.py --dataset /path/to/preprocessed/ --sequence data1 --model_rord ../../models/rord.pth --output_dir out
    4. This generates an out folder containing the predicted transformations, with matching visualizations written to out/vis like the image below; a sketch for computing pose error from such transformations follows.

[Figure: RoRD matching visualization from out/vis]
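Error computation for DiverseView is still on the TO-DO list below, but given a predicted and a ground-truth 4x4 transform the usual rotation and translation errors are straightforward to compute. The file layout of the transformations written to out/ is an assumption here; only the math is standard.

    import numpy as np

    def pose_error(T_pred, T_gt):
        # Rotation error in degrees (geodesic distance on SO(3)) and
        # translation error in the same units as the depth data.
        R_rel = T_pred[:3, :3].T @ T_gt[:3, :3]
        cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
        rot_err_deg = np.degrees(np.arccos(cos_angle))
        trans_err = np.linalg.norm(T_pred[:3, 3] - T_gt[:3, 3])
        return rot_err_deg, trans_err

    # Hypothetical usage: each file stores a 4x4 matrix as plain text.
    # T_pred = np.loadtxt("out/0001_pose.txt"); T_gt = np.loadtxt("gt/0001_pose.txt")
    # print(pose_error(T_pred, T_gt))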

Training RoRD on PhotoTourism Images

  1. Training uses rotation homographies, with initialization from D2-Net weights (download the base models as described in Pretrained Models); a sketch of such a rotation homography follows the steps below.

  2. Download the brandenburg_gate dataset used in configs/train_scenes_small.txt from here (5.3 GB) and place it in the phototourism folder.

  3. The folder structure should be:

    phototourism/
    └── brandenburg_gate
        └── dense
            ├── images
            ├── stereo
            └── sparse
    
  4. python trainPT_ipr.py --dataset_path <path_to_phototourism_folder> --init_model models/d2net.pth --plot
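The training above relies on rotation homographies: each image is warped by a random in-plane rotation, and the known warp provides correspondences between the original and rotated views for supervising rotation-robust descriptors. A minimal OpenCV sketch of such a warp (the repository's own generator may sample angles and crops differently):

    import cv2
    import numpy as np

    def random_rotation_homography(image, max_angle_deg=180.0):
        # Warp an image by a random in-plane rotation about its center and
        # return the warped image together with the 3x3 homography used.
        h, w = image.shape[:2]
        angle = np.random.uniform(-max_angle_deg, max_angle_deg)
        M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, 1.0)  # 2x3 affine
        H = np.vstack([M, [0.0, 0.0, 1.0]])                          # lift to a 3x3 homography
        warped = cv2.warpPerspective(image, H, (w, h))
        return warped, H

    # Hypothetical usage on any PhotoTourism image:
    # img = cv2.imread("phototourism/brandenburg_gate/dense/images/<image>.jpg")
    # warped, H = random_rotation_homography(img)

Because H maps pixel coordinates of the original image to the rotated view, ground-truth correspondences between the two views are known exactly.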

TO-DO

  • Provide VPR code
  • Provide combined training of RoRD + D2-Net
  • Provide code for calculating error on the DiverseView Dataset

Credits

Our base model is borrowed from D2-Net.

BibTeX

If you use this code in your project, please cite the following paper:

@misc{rord2021,
      title={RoRD: Rotation-Robust Descriptors and Orthographic Views for Local Feature Matching}, 
      author={Udit Singh Parihar and Aniket Gujarathi and Kinal Mehta and Satyajit Tourani and Sourav Garg and Michael Milford and K. Madhava Krishna},
      year={2021},
      eprint={2103.08573},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}