LiDAR Distillation: Bridging the Beam-Induced Domain Gap for 3D Object Detection

Paper | Model


Yi Wei, Zibu Wei, Yongming Rao, Jiaxin Li, Jiwen Lu, Jie Zhou

Introduction

In this paper, we propose LiDAR Distillation to bridge the domain gap induced by different LiDAR beams for 3D object detection. In many real-world applications, the LiDAR point clouds used by mass-produced robots and vehicles usually have fewer beams than those in large-scale public datasets. Moreover, as LiDARs are upgraded to product models with different numbers of beams, it becomes challenging to utilize the labeled data captured by previous versions' high-resolution sensors. Despite recent progress on domain-adaptive 3D detection, most methods struggle to eliminate the beam-induced domain gap.
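At a high level, the approach first generates pseudo low-beam point clouds by downsampling the beams of high-resolution scans, and then distills knowledge from a teacher trained on the dense data into a student operating on the sparse data. The snippet below is a minimal, illustrative sketch of the beam-downsampling step only; the function name is hypothetical and the beam assignment is approximated by binning elevation angles, which may differ from the implementation in this repository.

```python
import numpy as np

def downsample_beams(points, num_beams=64, keep_ratio=2):
    """Keep every `keep_ratio`-th beam of an (N, 4) point cloud [x, y, z, intensity]."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # Elevation angle of each point (approximate proxy for the emitting beam).
    elevation = np.arctan2(z, np.sqrt(x ** 2 + y ** 2))
    # Assign each point to one of `num_beams` elevation bins (approximate beam id).
    bins = np.linspace(elevation.min(), elevation.max(), num_beams + 1)
    beam_id = np.clip(np.digitize(elevation, bins) - 1, 0, num_beams - 1)
    # Keep only points whose beam id is a multiple of `keep_ratio`, e.g. 64 -> 32 beams.
    return points[beam_id % keep_ratio == 0]

# Example: turn a 64-beam scan into a pseudo 32-beam scan.
# pseudo_32_beam = downsample_beams(scan_64_beam, num_beams=64, keep_ratio=2)
```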

Model Zoo

Cross-dataset Adaptation

| model | method | AP_BEV | AP_3D |
|-------|--------|--------|-------|
| SECOND-IoU | Direct transfer | 32.91 | 17.24 |
| SECOND-IoU | ST3D | 35.92 | 20.19 |
| SECOND-IoU | Ours | 40.66 | 22.86 |
| SECOND-IoU | Ours (w/ ST3D) | 42.04 | 24.50 |
| PV-RCNN | Direct transfer | 34.50 | 21.47 |
| PV-RCNN | ST3D | 36.42 | 22.99 |
| PV-RCNN | Ours | 43.31 | 25.63 |
| PV-RCNN | Ours (w/ ST3D) | 44.08 | 26.37 |
| PointPillar | Direct transfer | 27.8 | 12.1 |
| PointPillar | ST3D | 30.6 | 15.6 |
| PointPillar | Ours | 40.23 | 19.12 |
| PointPillar | Ours (w/ ST3D) | 40.83 | 20.97 |

Results of cross-dataset adaptation from Waymo to nuScenes. The Waymo training data used in our work is from version 1.0 of the dataset.

Single-dataset Adaptation

| beams | method | AP_BEV | AP_3D |
|-------|--------|--------|-------|
| 32 | Direct transfer | 79.81 | 65.91 |
| 32 | ST3D | 71.29 | 57.57 |
| 32 | Ours | 82.22 | 70.15 |
| 32* | Direct transfer | 73.56 | 57.77 |
| 32* | ST3D | 67.08 | 53.30 |
| 32* | Ours | 79.47 | 66.96 |
| 16 | Direct transfer | 64.91 | 47.48 |
| 16 | ST3D | 57.58 | 42.40 |
| 16 | Ours | 74.32 | 59.87 |
| 16* | Direct transfer | 56.32 | 38.75 |
| 16* | ST3D | 55.63 | 37.02 |
| 16* | Ours | 70.43 | 55.24 |

Results of single-dataset adaptation on the KITTI dataset with PointPillars (moderate difficulty). For SECOND-IoU and PV-RCNN, we find that training on low-beam data easily triggers CUDA errors, which may be caused by a bug in spconv. We therefore do not provide these models, but you can still run the experiments with the provided YAML configs.
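As a rough, hypothetical example (the exact config names and paths are assumptions, not necessarily those in this repository), an OpenPCDet-style run of such a config would look like `cd tools && python train.py --cfg_file cfgs/<low_beam_config>.yaml`; see GETTING_STARTED.md for the actual commands.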

Installation

Please refer to INSTALL.md.

Getting Started

Please refer to GETTING_STARTED.md.

License

Our code is released under the Apache 2.0 license.

Acknowledgement

Our code is heavily based on OpenPCDet v0.2 and ST3D. We thank the OpenPCDet Development Team for their awesome codebase.

Citation

If you find this project useful in your research, please consider citing:

@article{wei2022lidar,
  title={LiDAR Distillation: Bridging the Beam-Induced Domain Gap for 3D Object Detection},
  author={Wei, Yi and Wei, Zibu and Rao, Yongming and Li, Jiaxin and Zhou, Jie and Lu, Jiwen},
  journal={arXiv preprint arXiv:2203.14956},
  year={2022}
}
@misc{openpcdet2020,
    title={OpenPCDet: An Open-source Toolbox for 3D Object Detection from Point Clouds},
    author={OpenPCDet Development Team},
    howpublished = {\url{https://github.com/open-mmlab/OpenPCDet}},
    year={2020}
}