Learnable Motion Coherence for Correspondence Pruning

Overview

Yuan Liu, Lingjie Liu, Cheng Lin, Zhen Dong, Wenping Wang
Project Page

Any questions or discussions are welcome!

Requirements & Compilation

  1. Requirements

Required packages are listed in requirements.txt.

The code is tested with Python 3.8.5 and PyTorch 1.7.1.

  2. Compile extra modules
cd network/knn_search
python setup.py build_ext --inplace
cd ../pointnet2_ext
python setup.py build_ext --inplace
cd ../../utils/extend_utils
python build_extend_utils_cffi.py

Depending on your CUDA installation path, you may need to revise the variable cuda_version in build_extend_utils_cffi.py.
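The exact definition inside build_extend_utils_cffi.py is not reproduced here; the snippet below is only an illustrative sketch of the kind of change meant, assuming cuda_version selects a /usr/local/cuda-<version> installation directory (variable names and paths are hypothetical).

import os

cuda_version = "10.2"  # hypothetical value; set this to match your CUDA installation
cuda_home = f"/usr/local/cuda-{cuda_version}"
include_dir = os.path.join(cuda_home, "include")  # headers used when building the extension
library_dir = os.path.join(cuda_home, "lib64")    # libraries used when building the extension

assert os.path.isdir(cuda_home), f"CUDA not found at {cuda_home}; adjust cuda_version"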

Datasets & Pretrained Models

  1. Download the YFCC100M and SUN3D datasets from the OANet repository, and the ScanNet dataset from here.

  2. Download pretrained LMCNet models from here and SuperGlue/SuperPoint models from here.

  3. Unzip and arrange all files as follows.

data/
├── superpoint/
│   └── superpoint_v1.pth
├── superglue/
│   ├── superglue_indoor.pth
│   └── superglue_outdoor.pth
├── model/
│   ├── lmcnet_sift_indoor/
│   ├── lmcnet_sift_outdoor/
│   └── lmcnet_spg_indoor/
├── yfcc100m/
├── sun3d_test/
├── sun3d_train/
├── scannet_dataset/
└── scannet_train_dataset/
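Before running evaluation or training, it can help to verify that this layout is in place. The following helper is not part of the repository, just a convenience sketch using the paths listed above.

from pathlib import Path

expected = [
    "superpoint/superpoint_v1.pth",
    "superglue/superglue_indoor.pth",
    "superglue/superglue_outdoor.pth",
    "model/lmcnet_sift_indoor",
    "model/lmcnet_sift_outdoor",
    "model/lmcnet_spg_indoor",
    "yfcc100m",
    "sun3d_test",
    "sun3d_train",
    "scannet_dataset",
    "scannet_train_dataset",
]

root = Path("data")
missing = [p for p in expected if not (root / p).exists()]
print("data layout looks complete" if not missing else f"missing entries: {missing}")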

Evaluation

Evaluate on YFCC100M with SIFT descriptors and the Nearest Neighbor (NN) matcher:

python eval.py --name yfcc --cfg configs/eval/lmcnet_sift_yfcc.yaml

Evaluate on SUN3D with SIFT descriptors and the NN matcher:

python eval.py --name sun3d --cfg configs/eval/lmcnet_sift_sun3d.yaml

Evaluate on ScanNet with SuperPoint descriptors and the SuperGlue matcher:

python eval.py --name scannet --cfg configs/eval/lmcnet_spg_scannet.yaml
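The evaluation reports the AUC of the relative pose error at several angular thresholds. The exact variant computed by eval.py is not reproduced here, so treat the following as an assumption: a minimal sketch of the exact area-under-the-recall-curve computation commonly used in SuperGlue-style evaluations, at thresholds of 5, 10, and 20 degrees.

import numpy as np

def pose_auc(errors, thresholds=(5.0, 10.0, 20.0)):
    # Exact AUC of the pose-error recall curve at the given angular thresholds (degrees).
    errors = np.sort(np.asarray(errors, dtype=np.float64))
    recall = (np.arange(len(errors)) + 1) / len(errors)
    errors = np.concatenate(([0.0], errors))
    recall = np.concatenate(([0.0], recall))
    aucs = []
    for t in thresholds:
        last = np.searchsorted(errors, t)
        r = np.concatenate((recall[:last], [recall[last - 1]]))
        e = np.concatenate((errors[:last], [t]))
        aucs.append(np.trapz(r, x=e) / t)
    return aucs

# Toy example with per-image-pair pose errors in degrees.
print(pose_auc([1.0, 3.0, 7.0, 25.0]))

Note that exact AUC is generally lower than the histogram-based approximation used in some earlier evaluations, which is one common source of discrepancies when comparing numbers across papers.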

Training

  1. Generate the training dataset for training on YFCC100M with SIFT descriptors and the NN matcher.
python trainset_generate.py \
      --ext_cfg configs/detector/sift.yaml \
      --match_cfg configs/matcher/nn.yaml \
      --output data/yfcc_train_cache \
      --eig_name small_min \
      --prefix yfcc
  2. Model training.
python train_model.py --cfg configs/lmcnet/lmcnet_sift_outdoor_train.yaml

Acknowledgement

We have used code from the following repositories, and we thank the authors for sharing their code.

SuperGlue: https://github.com/magicleap/SuperGluePretrainedNetwork

OANet: https://github.com/zjhthu/OANet

KNN-CUDA: https://github.com/vincentfpgarcia/kNN-CUDA

Pointnet2.PyTorch: https://github.com/sshaoshuai/Pointnet2.PyTorch

Comments
  • Pretrained model for geometric features

    Hi, thanks for the great work!

    I find that the pretrained models are for geometric features + descriptors. Is it possible to share a model for geometric features only?

    opened by vdvchen 5
  • about evaluation setting?

    Hello, thanks for your generous sharing. We are interested in your work and your concise evaluation framework. There are two questions about the evaluation:

    1. Is there any difference between RANSACEstimator and RescaleRANSACEstimator when evaluating the filter model? In readme.md, rescale_ransac_1600 is used on YFCC and ransac_0.001 on SUN3D, while both use SIFT to extract keypoints without rescaling.
    2. We find that the AUC is consistently about 0.5%~1% lower than in the paper across different methods (pretrained model, lmc, lmc-geo, oanet, ...) on YFCC with the setting SIFT, NN, "method", rescale_ransac_1600 (as used in readme.md). Is there any wrong setting in the evaluation?
    opened by DIVE128 3
  • Question about evaluation metrics.

    Thanks for the interesting work!

    I have some questions about the evaluation results. When I compare your results with OANet's, I found that the reported AUC@5° is different from theirs. For example, in Table 2 of OANet the best AUC@5° is 39.33, while the result in Table 1 of your paper is 29.12. As far as I can see, the dataset split in your work is the same as in OANet. So is your evaluation metric different from OANet's?

    Thanks again!

    opened by FanLu97 2
  • Training Details

    Hello! Thank you so much for releasing the code for this work. It's nice work, and I have a few questions; I look forward to your reply. First, for the comparison experiments, how many steps did you train the LMC model for? Is it 500K?

    opened by CSX777 1
  • Evaluation Setting of SUN3D dataset

    Hi! Much thanks for releasing the code.

    I encountered an unexpected result while evaluating on the SUN3D dataset following the config in this repository: the AUC scores are much lower than those in the paper. I am wondering whether there is any config setting I need to change to come close to the accuracy reported in the paper.

    opened by ruby50082 1
  • Could you please help with trying a different input dataset?

    Hi! Thanks so much for releasing the code for this work. I loved reading your paper. There are so many interesting elements in this work, and I would like to try it out with my own dataset.

    Here I have the following questions regarding the input to LMCNetFilter. I am running eval.py on the ScanNet dataset.

    • First, I see that LMCNetFilter requires K0 and K1 parameters. Could you please let me know what they are?

    • As I look into pose_dataset.py, what are K_default, R, and T? It would be nice if you could explain what information the dataset needs to provide. I am sorry, I am not very familiar with the ScanNet dataset.

    • If I don't know the K0 and K1 parameters, should I assign them a 3x3 identity matrix?

    • Also, I would like to try this method with the MegaDepth dataset. Could you suggest what K0, K1, K_default, R, and T are in the MegaDepth dataset?

    opened by GabbySuwichaya 6
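For readers unfamiliar with the parameters asked about above: in two-view geometry codebases, K0 and K1 conventionally denote the 3x3 pinhole intrinsic matrices of the two cameras, and R, T the relative rotation and translation between the views; whether pose_dataset.py follows exactly this convention (and what K_default falls back to) is an assumption here, not something confirmed by the repository. A minimal sketch of building an intrinsic matrix from focal lengths and the principal point:

import numpy as np

def intrinsic_matrix(fx, fy, cx, cy):
    # 3x3 pinhole intrinsics: maps normalized camera coordinates to pixel coordinates.
    return np.array([
        [fx, 0.0, cx],
        [0.0, fy, cy],
        [0.0, 0.0, 1.0],
    ])

# Hypothetical calibration for a 640x480 image; replace with your camera's values.
K0 = intrinsic_matrix(fx=525.0, fy=525.0, cx=319.5, cy=239.5)
K1 = K0.copy()  # same camera for both images in this toy example

Substituting a 3x3 identity for unknown intrinsics treats raw pixel coordinates as if they were already normalized, so the recovered poses and error metrics would generally not be meaningful.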