(CVPR 2021) Back-tracing Representative Points for Voting-based 3D Object Detection in Point Clouds

Overview

BRNet

[Figure: BRNet overview]

Introduction

This is a release of the code of our paper Back-tracing Representative Points for Voting-based 3D Object Detection in Point Clouds, CVPR 2021.

Authors: Bowen Cheng, Lu Sheng*, Shaoshuai Shi, Ming Yang, Dong Xu (*corresponding author)

[arxiv]

In this repository, we reimplement BRNet based on mmdetection3d for easier usage.

Citation

If you find our work useful in your research, please consider citing:

@inproceedings{cheng2021brnet,
  title={Back-tracing Representative Points for Voting-based 3D Object Detection in Point Clouds},
  author={Cheng, Bowen and Sheng, Lu and Shi, Shaoshuai and Yang, Ming and Xu, Dong},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2021}
}

Installation

This repo is built on mmdetection3d (v0.11.0); please follow the getting_started.md for installation.

The code is tested under the following environment:

  • Ubuntu 16.04 LTS
  • Python 3.7.10
  • PyTorch 1.5.0
  • CUDA 10.1
  • GCC 7.3
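
For reference, a minimal setup sketch matching the environment above is given below. The mmcv-full/mmdet version pins and the wheel index URL are assumptions about what mmdetection3d v0.11.0 expects; defer to getting_started.md if they differ.

conda create -n brnet python=3.7 -y
conda activate brnet
# PyTorch 1.5.0 + torchvision 0.6.0 built against CUDA 10.1
conda install pytorch==1.5.0 torchvision==0.6.0 cudatoolkit=10.1 -c pytorch
# mmcv-full / mmdet versions are assumed; use the pins from getting_started.md if they disagree
pip install mmcv-full==1.2.4 -f https://download.openmmlab.com/mmcv/dist/cu101/torch1.5.0/index.html
pip install mmdet==2.10.0
# compile and install this repo together with its CUDA ops
pip install -v -e .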

Datasets

ScanNet

Please follow the instructions here to prepare the ScanNet data.
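
Assuming the raw ScanNet files have been placed under ./data/scannet following the mmdetection3d layout, the processed info files are typically generated with mmdetection3d's create_data.py script (the paths below are that script's defaults):

python tools/create_data.py scannet --root-path ./data/scannet --out-dir ./data/scannet --extra-tag scannet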

SUN RGB-D

Please follow the instructions here to prepare the SUN RGB-D data.
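
Likewise, assuming the SUN RGB-D data has been extracted under ./data/sunrgbd, the info files are typically generated with:

python tools/create_data.py sunrgbd --root-path ./data/sunrgbd --out-dir ./data/sunrgbd --extra-tag sunrgbd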

Download Trained Models

We provide trained models for ScanNet and SUN RGB-D, along with their per-class performance.

ScanNet V2 AP_0.25 AR_0.25 AP_0.50 AR_0.50
cabinet 0.4898 0.7634 0.2800 0.5349
bed 0.8849 0.9506 0.7915 0.8642
chair 0.9149 0.9357 0.8354 0.8604
sofa 0.9049 0.9794 0.8027 0.9278
table 0.6802 0.8486 0.6146 0.7600
door 0.5955 0.7430 0.3721 0.5418
window 0.4814 0.7092 0.2405 0.4078
bookshelf 0.5876 0.8701 0.5032 0.7532
picture 0.1716 0.3243 0.0687 0.1396
counter 0.6085 0.8846 0.3545 0.5385
desk 0.7538 0.9528 0.5481 0.7874
curtain 0.6275 0.7910 0.4126 0.5224
refrigerator 0.5467 0.9474 0.4882 0.8070
showercurtrain 0.7349 0.9643 0.5189 0.6786
toilet 0.9896 1.0000 0.9227 0.9310
sink 0.5901 0.6735 0.3521 0.4490
bathtub 0.8605 0.9355 0.8565 0.9032
garbagebin 0.4726 0.7151 0.3169 0.5170
Overall 0.6608 0.8327 0.5155 0.6624

SUN RGB-D AP_0.25 AR_0.25 AP_0.50 AR_0.50
bed 0.8633 0.9553 0.6544 0.7592
table 0.5136 0.8552 0.2981 0.5268
sofa 0.6754 0.8931 0.5830 0.7193
chair 0.7864 0.8723 0.6301 0.7137
toilet 0.8699 0.9793 0.7125 0.8345
desk 0.2929 0.8082 0.1134 0.4017
dresser 0.3237 0.7615 0.2058 0.4954
night_stand 0.5933 0.8627 0.4490 0.6588
bookshelf 0.3394 0.7199 0.1574 0.3652
bathtub 0.7505 0.8776 0.5383 0.6531
Overall 0.6008 0.8585 0.4342 0.6128

Note: Because the detection results are unstable and fluctuate within 1~2 mAP points, the results here differ slightly from those in the paper.

Training

For ScanNet V2, please run:

CUDA_VISIBLE_DEVICES=0 python tools/train.py configs/brnet/brnet_8x1_scannet-3d-18class.py --seed 42

For SUN RGB-D, please run:

CUDA_VISIBLE_DEVICES=0 python tools/train.py configs/brnet/brnet_8x1_sunrgbd-3d-10class.py --seed 42
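
To evaluate a trained checkpoint (for example the trained models provided above), the standard mmdetection3d test script should work; the checkpoint path below is a placeholder for wherever you saved the weights:

CUDA_VISIBLE_DEVICES=0 python tools/test.py configs/brnet/brnet_8x1_scannet-3d-18class.py checkpoints/brnet_8x1_scannet-3d-18class_trained.pth --eval mAP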

Demo

To test a 3D detector on point cloud data, please refer to Single modality demo and Point cloud demo in MMDetection3D docs.

Here, we provide a demo on the SUN RGB-D dataset.

CUDA_VISIBLE_DEVICES=0 python demo/pcd_demo.py sunrgbd_000094.bin demo/brnet_8x1_sunrgbd-3d-10class.py checkpoints/brnet_8x1_sunrgbd-3d-10class_trained.pth

Visualization results

ScanNet

SUN RGB-D

Acknowledgments

Our code is heavily based on mmdetection3d. Thanks to the MMDetection3D Development Team for their awesome codebase.

Comments
  • About show_result_meshlab ?

    Hi Cheng, I have read the source code, but I cannot get the result shown in your README.md. [screenshot]

    But when I run it on my machine, I get the result below: all the detected bboxes are green. Why? Obviously there are two objects (a table and a chair). [screenshot] Looking forward to your reply, thanks!

    opened by xinghuokang 12
  • KeyError: 'BRNet is not in the detector registry'

    Thank you for your good work! I am trying to run BRNet on my computer, but I get an error when I try to train it:

    Traceback (most recent call last):
      File "tools/train.py", line 213, in <module>
        main()
      File "tools/train.py", line 178, in main
        test_cfg=cfg.get('test_cfg'))
      File "/home/logic/Desktop/2021Project/mmdetection3d/mmdet3d/models/builder.py", line 48, in build_detector
        return build(cfg, DETECTORS, dict(train_cfg=train_cfg, test_cfg=test_cfg))
      File "/home/logic/Desktop/2021Project/mmdetection/mmdet/models/builder.py", line 34, in build
        return build_from_cfg(cfg, registry, default_args)
      File "/home/logic/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/utils/registry.py", line 172, in build_from_cfg
        f'{obj_type} is not in the {registry.name} registry')
    KeyError: 'BRNet is not in the detector registry'

    Could you provide any suggestion? It seems like a version problem of mmcv.

    opened by WangZhouTao 6
  • ImportError: cannot import name 'build'

    Hi, developers of BRNet. I have installed BRNet following the getting_started.md. All prerequisites and their corresponding versions meet the requirements (python==3.6.9, cuda==10.1, pytorch==1.5.0, torchvision==0.6.0, mmcv==1.3.14, mmdet==2.17.0, mmdet3d==0.17.0).

    1. When I run train.py, I get an error like this:

    Traceback (most recent call last):
      File "tools/train.py", line 212, in <module>
        main()
      File "tools/train.py", line 176, in main
        test_cfg=cfg.get('test_cfg'))
      File "/home/BRNet/mmdetection3d/mmdet3d/models/builder.py", line 58, in build_detector
        cfg, default_args=dict(train_cfg=train_cfg, test_cfg=test_cfg))
      File "/home/BRNet/mmcv/mmcv/utils/registry.py", line 212, in build
        return self.build_func(*args, **kwargs, registry=self)
      File "/home/BRNet/mmcv/mmcv/cnn/builder.py", line 27, in build_model_from_cfg
        return build_from_cfg(cfg, registry, default_args)
      File "/home/BRNet/mmcv/mmcv/utils/registry.py", line 45, in build_from_cfg
        f'{obj_type} is not in the {registry.name} registry')
    KeyError: 'BRNet is not in the models registry'

    2. And when I test the installation with "from mmdet3d.apis import init_detector, inference_detector", I get an error like this:

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/home/BRNet/mmdet3d/apis/__init__.py", line 1, in <module>
        from .inference import inference_detector, init_detector, show_result_meshlab
      File "/home/BRNet/mmdet3d/apis/inference.py", line 11, in <module>
        from mmdet3d.models import build_detector
      File "/home/BRNet/mmdet3d/models/__init__.py", line 1, in <module>
        from .backbones import *  # noqa: F401,F403
      File "/home/BRNet/mmdet3d/models/backbones/__init__.py", line 3, in <module>
        from .nostem_regnet import NoStemRegNet
      File "/home/BRNet/mmdet3d/models/backbones/nostem_regnet.py", line 2, in <module>
        from ..builder import BACKBONES
      File "/home/BRNet/mmdet3d/models/builder.py", line 3, in <module>
        from mmdet.models.builder import (BACKBONES, DETECTORS, HEADS, LOSSES, NECKS,
    ImportError: cannot import name 'build'

    Thanks for your help if you know how to install BRNet in a way that avoids these problems.

    opened by Enerald 2
  • ImportError: cannot import name 'init_detector' from 'mmdet3d.apis'

    Describe the bug: I got an error about init_detector.

    Reproduction

    1. What command or script did you run?
    CUDA_VISIBLE_DEVICES=0 python demo/pcd_demo.py sunrgbd_000094.bin demo/brnet_8x1_sunrgbd-3d-10class.py checkpoints/brnet_8x1_sunrgbd-3d-10class_trained.pth
    
    2. What dataset did you use? sunrgbd_000094.bin

    Error traceback: If applicable, paste the error traceback here.

    Traceback (most recent call last):
      File "demo/pcd_demo.py", line 3, in <module>
        from mmdet3d.apis import inference_detector, init_detector, show_result_meshlab
    ImportError: cannot import name 'init_detector' from 'mmdet3d.apis' (/.../BRNet/mmdetection3d/mmdet3d/apis/__init__.py)
    
    opened by an-dhyun 2
  • Large performance gap between single gpu and 8 gpus training

    Hi,

    Thanks for your great work. I found that there is a large performance gap between training on a single GPU and on 8 GPUs on the SUN RGB-D dataset.

    When I train the code on a single GPU following the instructions in the README.md, I get AP_0.25 / AR_0.25 / AP_0.50 / AR_0.50 = 0.5941 / 0.8539 / 0.4211 / 0.5965.

    However, when I train the code on 8 GPUs with tools/dist_train.sh, I get AP_0.25 / AR_0.25 / AP_0.50 / AR_0.50 = 0.5794 / 0.8587 / 0.3958 / 0.5704.

    There is a sudden performance drop when training on 8 GPUs.

    (1) Could you provide some instructions on how to get the normal results on 8 GPUs? For example, how to set the learning rate, batch size, and SyncBN?

    (2) Even on a single GPU, there is still a one-point gap from the reported 43.7 mAP@0.50. Is it just due to the instability? Do I need to train multiple times to get better results?

    Thanks in advance.

    Regards, Chen Yukang

    opened by yukang2017 2
  • Dear Bowen, I meet a problem in running this code. TypeError: BRNet: CAVoteHead: cfg must be a dict, but got <class 'NoneType'>. My mmdet3d version is v0.15.0.

    Traceback (most recent call last):
      File "tools/train.py", line 232, in <module>
        main()
      File "tools/train.py", line 191, in main
        test_cfg=cfg.get('test_cfg'))
      File "/home/zpx/code/代码/mmdetection3d/mmdet3d/models/builder.py", line 83, in build_model
        return build_detector(cfg, train_cfg=train_cfg, test_cfg=test_cfg)
      File "/home/zpx/code/代码/mmdetection3d/mmdet3d/models/builder.py", line 57, in build_detector
        cfg, default_args=dict(train_cfg=train_cfg, test_cfg=test_cfg))
      File "/home/zpx/anaconda3/envs/brnet/lib/python3.6/site-packages/mmcv/utils/registry.py", line 210, in build
        return self.build_func(*args, **kwargs, registry=self)
      File "/home/zpx/anaconda3/envs/brnet/lib/python3.6/site-packages/mmcv/cnn/builder.py", line 26, in build_model_from_cfg
        return build_from_cfg(cfg, registry, default_args)
      File "/home/zpx/anaconda3/envs/brnet/lib/python3.6/site-packages/mmcv/utils/registry.py", line 54, in build_from_cfg
        raise type(e)(f'{obj_cls.__name__}: {e}')
    TypeError: BRNet: CAVoteHead: cfg must be a dict, but got <class 'NoneType'>

    Have you ever met this problem? By the way, my mmdet3d version is v0.15.0, for NVIDIA 30-series GPUs.

    opened by xiyuchenji 1
  • SUN RGB-D Data Preparation

    Hello, thank you for open-sourcing your codebase.

    I was wondering whether your training data for SUN RGB-D was v1 or v2. It is mentioned in the paper that results shown are from the v1 validation set, but I wasn't sure which version you were using for training.

    opened by Divadi 1
  • how to test the models

    Is there any config file for testing, or is it just the same as for training (configs/brnet/brnet_8x1_scannet-3d-8class.py)? What is the command? What about the arguments?

    opened by MandyDongrs 3
  • ImportError: /workspaces/BRNet/mmdet3d/ops/ball_query/ball_query_ext.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _ZN6caffe28TypeMeta21_typeMetaDataInstanceIN3c107complexINS2_4HalfEEEEEPKNS_6detail12TypeMetaDataEv

    Thanks for your error report and we appreciate it a lot.

    Checklist

    1. I have searched related issues but cannot get the expected help.
    2. The bug has not been fixed in the latest version.

    Describe the bug: I tried running the demo mentioned in the README.md and got the following error:

    Traceback (most recent call last):
      File "demo/pcd_demo.py", line 3, in <module>
        from mmdet3d.apis import inference_detector, init_detector, show_result_meshlab
      File "/workspaces/BRNet/mmdet3d/apis/__init__.py", line 1, in <module>
        from .inference import inference_detector, init_detector, show_result_meshlab
      File "/workspaces/BRNet/mmdet3d/apis/inference.py", line 8, in <module>
        from mmdet3d.core import Box3DMode, show_result
      File "/workspaces/BRNet/mmdet3d/core/__init__.py", line 2, in <module>
        from .bbox import *  # noqa: F401, F403
      File "/workspaces/BRNet/mmdet3d/core/bbox/__init__.py", line 2, in <module>
        from .coders import DeltaXYZWLHRBBoxCoder
      File "/workspaces/BRNet/mmdet3d/core/bbox/coders/__init__.py", line 6, in <module>
        from .class_agnostic_bbox_coder import ClassAgnosticBBoxCoder
      File "/workspaces/BRNet/mmdet3d/core/bbox/coders/class_agnostic_bbox_coder.py", line 4, in <module>
        from mmdet3d.core.bbox.structures import rotation_3d_in_axis
      File "/workspaces/BRNet/mmdet3d/core/bbox/structures/__init__.py", line 1, in <module>
        from .base_box3d import BaseInstance3DBoxes
      File "/workspaces/BRNet/mmdet3d/core/bbox/structures/base_box3d.py", line 5, in <module>
        from mmdet3d.ops.iou3d import iou3d_cuda
      File "/workspaces/BRNet/mmdet3d/ops/__init__.py", line 5, in <module>
        from .ball_query import ball_query
      File "/workspaces/BRNet/mmdet3d/ops/ball_query/__init__.py", line 1, in <module>
        from .ball_query import ball_query
      File "/workspaces/BRNet/mmdet3d/ops/ball_query/ball_query.py", line 4, in <module>
        from . import ball_query_ext
    ImportError: /workspaces/BRNet/mmdet3d/ops/ball_query/ball_query_ext.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _ZN6caffe28TypeMeta21_typeMetaDataInstanceIN3c107complexINS2_4HalfEEEEEPKNS_6detail12TypeMetaDataEv

    Reproduction

    1. What command or script did you run?
    CUDA_VISIBLE_DEVICES=0 python demo/pcd_demo.py sunrgbd_000094.bin demo/brnet_8x1_sunrgbd-3d-10class.py checkpoints/brnet_8x1_sunrgbd-3d-10class_trained.pth
    

    I used the config file included in the repo, so I don't think the config is the cause of this problem.

    opened by fathyshalaby 1
  • Collaboration with MMDetection3D

    Hi, developers of BRNet,

    Excited to find that your project is reimplemented based on mmdet3d. It's good news that this framework benefits the community in terms of easier usage and development of research ideas.

    I believe your work could bring great insights to this field, so I am wondering whether you have the willingness to contribute an implementation to MMDetection3D. I believe it could increase the impact of your work and also leave a mark on a fair and comprehensive benchmark.

    We welcome any kind of contributions, so please feel free to ask us for help needed from our team. Thanks.

    On behalf of the MMDet3D Development Team

    Best, Tai

    opened by Tai-Wang 3