Code for "PV-RAFT: Point-Voxel Correlation Fields for Scene Flow Estimation of Point Clouds", CVPR 2021

Overview

This repository contains the PyTorch implementation for the paper "PV-RAFT: Point-Voxel Correlation Fields for Scene Flow Estimation of Point Clouds" (CVPR 2021) [arXiv].

Installation

Prerequisites

  • Python 3.8
  • PyTorch 1.8
  • torch-scatter
  • CUDA 10.2
  • RTX 2080 Ti
  • tqdm, tensorboard, scipy, imageio, pypng
conda create -n pvraft python=3.8
conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch
conda install tqdm tensorboard scipy imageio
pip install pypng
pip install torch-scatter -f https://pytorch-geometric.com/whl/torch-1.8.0+cu102.html
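
After installation, a quick sanity check (a minimal sketch of my own, not a script from this repo) can confirm that PyTorch, CUDA and torch-scatter are wired together correctly:

    # Optional environment sanity check (not part of the repo).
    import torch
    import torch_scatter

    print(torch.__version__)          # expect 1.8.x
    print(torch.version.cuda)         # expect 10.2
    print(torch.cuda.is_available())  # expect True on a GPU machine
    print(torch_scatter.__version__)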

Usage

Data Preparation

We follow HPLFlowNet to prepare the FlyingThings3D and KITTI datasets. Please refer to that repo for download instructions. Make sure the project structure looks like this:

RAFT_SceneFlow/
    data/
        FlyingThings3D_subset_processed_35m/
        kitti_processed/
    data_preprocess/
    datasets/
    experiments/
    model/
    modules/
    tools/

After downloading the datasets, we need to preprocess them.

FlyingThings3D Dataset

python process_flyingthings3d_subset.py --raw_data_path=path_src/FlyingThings3D_subset --save_path=path_dst/FlyingThings3D_subset_processed_35m --only_save_near_pts

You should replace raw_data_path and save_path with your own paths.
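
To verify the preprocessing output, you can inspect a sample with a snippet like the one below. This is my own sketch: the sample path is hypothetical, and it assumes the HPLFlowNet-style layout in which each sample directory holds pc1.npy and pc2.npy point cloud files; adjust paths and file names to what the script actually wrote.

    # Hypothetical inspection of one preprocessed sample (not part of the repo).
    # Assumes the HPLFlowNet convention of pc1.npy / pc2.npy per sample directory.
    import os
    import numpy as np

    sample_dir = "data/FlyingThings3D_subset_processed_35m/train/0000000"  # hypothetical
    pc1 = np.load(os.path.join(sample_dir, "pc1.npy"))  # first frame, (N, 3) XYZ
    pc2 = np.load(os.path.join(sample_dir, "pc2.npy"))  # second frame, (N, 3) XYZ
    print(pc1.shape, pc2.shape)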

KITTI Dataset

python process_kitti.py --raw_data_path=path_src/kitti --save_path=path_dst/kitti_processed --calib_path=calib_folder_path

You should replace raw_data_path, save_path and calib_path with your own paths.

Train

python train.py --exp_path=pv_raft --batch_size=2 --gpus=0,1 --num_epochs=20 --max_points=8192 --iters=8  --root=./

where exp_path is the experiment folder name and root is the project root path. These 20 epochs take about 53 hours on two RTX 2080 Ti GPUs.

If you want to train the refined model, please add --refine and set the --weights parameter to the path of the pre-trained checkpoint. For example,

python train.py --refine --exp_path=pv_raft_finetune --batch_size=2 --gpus=0,1 --num_epochs=10 --max_points=8192 --iters=32 --root=./ --weights=./experiments/pv_raft/checkpoints/best_checkpoint.params

These 10 epochs take about 38 hours on two RTX 2080 Ti GPUs.
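
The --iters flag sets how many RAFT-style recurrent update steps are unrolled (8 for the base model, 32 for the refined one). The sketch below is my own schematic of such a loop, not the actual PV-RAFT code: a stand-in linear layer takes the place of the real GRU update block and point-voxel correlation lookup.

    # Schematic of RAFT-style iterative flow refinement (illustrative only).
    import torch
    import torch.nn as nn

    class ToyUpdate(nn.Module):
        def __init__(self, feat_dim=64):
            super().__init__()
            self.delta = nn.Linear(feat_dim + 3, 3)  # stand-in for the real update block

        def forward(self, feat, flow):
            return self.delta(torch.cat([feat, flow], dim=-1))

    def refine_flow(feat, iters=8):
        flow = torch.zeros(*feat.shape[:2], 3)       # start from zero flow
        update = ToyUpdate(feat.shape[-1])
        for _ in range(iters):
            flow = flow + update(feat, flow)         # residual update each step
        return flow

    feat = torch.randn(2, 8192, 64)                  # (batch, points, channels)
    print(refine_flow(feat, iters=8).shape)          # torch.Size([2, 8192, 3])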

Test

python test.py --dataset=KITTI --exp_path=pv_raft --gpus=1 --max_points=8192 --iters=8 --root=./ --weights=./experiments/pv_raft/checkpoints/best_checkpoint.params

where dataset should be chosen from FT3D/KITTI, and weights is the path to the checkpoint file.

If you want to test the refined model, please add --refine. For example,

python test.py --refine --dataset=KITTI --exp_path=pv_raft_finetune --gpus=1 --max_points=8192 --iters=32 --root=./ --weights=./experiments/pv_raft_finetune/checkpoints/best_checkpoint.params
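
For reference, scene flow results are usually reported as EPE3D together with threshold accuracies. The function below is my own sketch of these standard metrics (following the HPLFlowNet/FLOT conventions), not necessarily the exact code in test.py:

    # Standard scene flow metrics (my own sketch; thresholds follow the
    # conventions commonly used in HPLFlowNet/FLOT-style evaluations).
    import numpy as np

    def scene_flow_metrics(est_flow, gt_flow):
        err = np.linalg.norm(est_flow - gt_flow, axis=-1)      # per-point L2 error
        rel = err / (np.linalg.norm(gt_flow, axis=-1) + 1e-8)  # relative error
        return {
            "EPE3D": err.mean(),
            "Acc3DS": ((err < 0.05) | (rel < 0.05)).mean(),    # strict accuracy
            "Acc3DR": ((err < 0.1) | (rel < 0.1)).mean(),      # relaxed accuracy
            "Outliers": ((err > 0.3) | (rel > 0.1)).mean(),
        }

    est = np.random.randn(8192, 3).astype(np.float32)
    gt = np.random.randn(8192, 3).astype(np.float32)
    print(scene_flow_metrics(est, gt))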

Reproduce results

You can download the checkpoint of the refined model here.
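
If you need to load the checkpoint manually (e.g. to debug a state_dict mismatch), something along these lines should work. This is my own sketch and assumes the .params file is a torch.save'd state_dict, possibly wrapped in a dict:

    # Hypothetical manual checkpoint inspection (not part of the repo).
    import torch

    ckpt = torch.load("experiments/pv_raft/checkpoints/best_checkpoint.params",
                      map_location="cpu")
    # Unwrap if the weights are stored under a key such as "model"
    # (an assumption; adjust to the actual layout of the file).
    state_dict = ckpt.get("model", ckpt) if isinstance(ckpt, dict) else ckpt
    for name, tensor in list(state_dict.items())[:5]:
        print(name, tuple(tensor.shape))  # peek at the first parameter shapes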

Acknowledgement

Our code is based on FLOT. We also refer to RAFT and HPLFlowNet.

Citation

If you find our work useful in your research, please consider citing:

@inproceedings{wei2020pv,
  title={{PV-RAFT: Point-Voxel Correlation Fields for Scene Flow Estimation of Point Clouds}},
  author={Wei, Yi and Wang, Ziyi and Rao, Yongming and Lu, Jiwen and Zhou, Jie},
  booktitle={CVPR},
  year={2021}
}