MMDetection3D is an open source object detection toolbox based on PyTorch

Overview


News: We released v0.12.0 of the codebase.

In the recent nuScenes 3D detection challenge of the 5th AI Driving Olympics at NeurIPS 2020, we won the Best PKL Award, took second runner-up with a multi-modality entry, and achieved the best vision-only results. Code and models will be released soon!

Documentation: https://mmdetection3d.readthedocs.io/

Introduction

English | 简体中文

The master branch works with PyTorch 1.3+.

MMDetection3D is an open source object detection toolbox based on PyTorch, towards the next-generation platform for general 3D detection. It is part of the OpenMMLab project developed by MMLab.

demo image

Major features

  • Support for multi-modality/single-modality detectors out of the box

    It directly supports multi-modality/single-modality detectors including MVXNet, VoteNet, PointPillars, etc.

  • Support for indoor/outdoor 3D detection out of the box

    It directly supports popular indoor and outdoor 3D detection datasets, including ScanNet, SUN RGB-D, Waymo, nuScenes, Lyft, and KITTI. For the nuScenes dataset, we also support the nuImages dataset.

  • Natural integration with 2D detection

    All of the 300+ models and methods from 40+ papers, as well as the modules, supported in MMDetection can be trained or used in this codebase.

  • High efficiency

    It trains faster than other codebases. The main results are below; details can be found in benchmark.md. We compare the number of training samples processed per second (higher is better). Models not supported by a codebase are marked ×.

    Methods             | MMDetection3D | OpenPCDet | votenet | Det3D
    --------------------|---------------|-----------|---------|------
    VoteNet             | 358           | ×         | 77      | ×
    PointPillars-car    | 141           | ×         | ×       | 140
    PointPillars-3class | 107           | 44        | ×       | ×
    SECOND              | 40            | 30        | ×       | ×
    Part-A2             | 17            | 14        | ×       | ×
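As a rough reading of the table, the relative speedups follow directly from the samples-per-second numbers. A small sketch, using only the figures from the table above:

```python
# Training throughput (samples/sec) taken from the benchmark table above.
throughput = {
    "VoteNet": {"MMDetection3D": 358, "votenet": 77},
    "PointPillars-car": {"MMDetection3D": 141, "Det3D": 140},
    "PointPillars-3class": {"MMDetection3D": 107, "OpenPCDet": 44},
    "SECOND": {"MMDetection3D": 40, "OpenPCDet": 30},
    "Part-A2": {"MMDetection3D": 17, "OpenPCDet": 14},
}

# Speedup of MMDetection3D over the fastest other codebase for each method.
for method, rates in throughput.items():
    ours = rates["MMDetection3D"]
    best_other = max(v for k, v in rates.items() if k != "MMDetection3D")
    print(f"{method}: {ours / best_other:.2f}x")
```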

Like MMDetection and MMCV, MMDetection3D can also be used as a library to support different projects on top of it.

License

This project is released under the Apache 2.0 license.

Changelog

v0.12.0 was released on 1 April 2021. Please refer to changelog.md for details and release history.

Benchmark and model zoo

Supported methods and backbones are shown in the table below. Results and models are available in the model zoo.

Supported backbones:

  • PointNet (CVPR'2017)
  • PointNet++ (NeurIPS'2017)
  • RegNet (CVPR'2020)

Supported methods, with per-backbone support (ResNet, ResNeXt, SENet, PointNet++, HRNet, RegNetX, Res2Net) detailed in the model zoo:

  • SECOND
  • PointPillars
  • FreeAnchor
  • VoteNet
  • H3DNet
  • 3DSSD
  • Part-A2
  • MVXNet
  • CenterPoint
  • SSN
  • ImVoteNet

Other features

Note: All of the 300+ models and methods from 40+ papers in 2D detection supported by MMDetection can be trained or used in this codebase.

Installation

Please refer to getting_started.md for installation.

Get Started

Please see getting_started.md for the basic usage of MMDetection3D. We provide beginner guidance for quickly running with existing datasets and with customized datasets. There are also tutorials on learning configuration systems, adding new datasets, designing data pipelines, customizing models, customizing runtime settings, and the Waymo dataset.

Citation

If you find this project useful in your research, please consider citing:

@misc{mmdet3d2020,
    title={{MMDetection3D: OpenMMLab} next-generation platform for general {3D} object detection},
    author={MMDetection3D Contributors},
    howpublished = {\url{https://github.com/open-mmlab/mmdetection3d}},
    year={2020}
}

Contributing

We appreciate all contributions to improve MMDetection3D. Please refer to CONTRIBUTING.md for the contributing guideline.

Acknowledgement

MMDetection3D is an open source project contributed to by researchers and engineers from various colleges and companies. We appreciate all the contributors, as well as the users who provide valuable feedback. We hope the toolbox and benchmark can serve the growing research community by providing a flexible toolkit to reimplement existing methods and develop new 3D detectors.

Projects in OpenMMLab

  • MMCV: OpenMMLab foundational library for computer vision.
  • MMClassification: OpenMMLab image classification toolbox and benchmark.
  • MMDetection: OpenMMLab detection toolbox and benchmark.
  • MMDetection3D: OpenMMLab next-generation platform for general 3D object detection.
  • MMSegmentation: OpenMMLab semantic segmentation toolbox and benchmark.
  • MMAction2: OpenMMLab next-generation action understanding toolbox and benchmark.
  • MMTracking: OpenMMLab video perception toolbox and benchmark.
  • MMPose: OpenMMLab pose estimation toolbox and benchmark.
  • MMEditing: OpenMMLab image and video editing toolbox.
Comments
  • KeyError: "NuScenesDataset: 'infos'"

    I want to run prediction with:

    python tools/test.py configs/fcos3d/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d_finetune.py model/weights/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d_finetune_20210427_091419-35aaaad0.pth --show-dir /home/mnt_abc004-data/mq/Project/mmdetection3d-master/output/ --eval mAP

    but it always fails with KeyError: "NuScenesDataset: 'infos'". Why?
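    This error usually means the annotation pickle the config points at does not have the structure NuScenesDataset expects (a dict with an 'infos' entry, normally produced by tools/create_data.py). A minimal sketch for inspecting such a file before training; the dummy file written here is purely illustrative:

```python
import os
import pickle
import tempfile

def inspect_info_file(path):
    """Print the top-level keys of an mmdet3d-style info pickle.

    NuScenesDataset expects a dict containing 'infos' (and usually
    'metadata'); a list or a differently keyed dict triggers the KeyError.
    """
    with open(path, "rb") as f:
        data = pickle.load(f)
    if isinstance(data, dict):
        print("top-level keys:", sorted(data.keys()))
        return "infos" in data
    print("top-level type:", type(data).__name__)
    return False

# Illustration: write a well-formed dummy info file and check it.
dummy = {"infos": [], "metadata": {"version": "v1.0-mini"}}
path = os.path.join(tempfile.gettempdir(), "dummy_infos.pkl")
with open(path, "wb") as f:
    pickle.dump(dummy, f)
print(inspect_info_file(path))  # prints "True"
```

    Running this against the actual .pkl your config references quickly shows whether the file was generated with a mismatched version of the data-conversion script.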

    opened by lijain 33
  • Questions about FCOS3D and PGD model 3D boxes

    Dear authors,

    Thank you very much for the framework and models you published! I trained FCOS3D on the KITTI dataset as described in issue #865, with the same config, for 72 epochs. After training, I tested both the trained FCOS3D weights and the PGD weights, and ran into two problems:

    1. Both models detect too many 3D boxes (FCOS3D: 000008_pred, PGD: 000008_pred). PGD even produces a cluster of 3D boxes that should not be there. What causes this, and how can it be fixed?

    2. Following the official documents and git instructions, test results can only be viewed and saved one by one; batch tests cannot be completed, saved, and then reviewed. Is there a solution, or am I doing something wrong? The commands I used:

    python tools/test.py configs/fcos3d/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_kitti-mono3d.py /media/lee/DATA/project/FCOS3D/latest.pth --show --show-dir /media/lee/DATA/project/dataset/FCOS
    python tools/test.py configs/pgd/pgd_r101_caffe_fpn_gn-head_3x4_4x_kitti-mono3d.py checkpoints/pgd_r101_caffe_fpn_gn-head_3x4_4x_kitti-mono3d_20211022_102608-8a97533b.pth --show --show-dir /media/lee/DATA/project/dataset/PGD

    I look forward to your early reply! Thank you very much.
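    One common reason for an excessive number of predicted boxes is a permissive score threshold at test time. A hedged sketch of the kind of override one might try; the exact field names follow the usual mmdet3d test_cfg convention and the specific values are illustrative, so check them against the FCOS3D/PGD config actually in use:

```python
# Hypothetical config override: raise the score threshold and tighten NMS
# so that low-confidence 3D boxes are suppressed at test time.
model = dict(
    test_cfg=dict(
        score_thr=0.2,    # illustrative; higher threshold = fewer boxes kept
        nms_thr=0.05,     # illustrative stricter NMS IoU threshold
        max_per_img=20,   # illustrative cap on boxes kept per image
    ))
```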

    reimplementation 
    opened by Gewaihir 31
  • Getting "CUDA error: an illegal memory access was encountered" on MVXNet

    Continuing #336

    Describe the bug: getting "CUDA error: an illegal memory access was encountered"

    Reproduction

    1. What command or script did you run?
    python tools/train.py configs/mvxnet/dv_mvx-fpn_second_secfpn_adamw_2x8_80e_kitti-3d-3class.py
    
    2. What dataset did you use? I used a custom dataset converted to KITTI format. Training on the plain KITTI dataset works, but with the custom dataset I get this error. I have regenerated the infos for my dataset.

    Environment

    sys.platform: linux
    Python: 3.6.9 (default, Oct  8 2020, 12:12:24) [GCC 8.4.0]
    CUDA available: True
    GPU 0: TITAN X (Pascal)
    CUDA_HOME: /home/kirilly/cuda10.1
    NVCC: Cuda compilation tools, release 10.1, V10.1.105
    GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
    PyTorch: 1.5.0+cu101
    PyTorch compiling details: PyTorch built with:
      - GCC 7.3
      - C++ Version: 201402
      - Intel(R) Math Kernel Library Version 2019.0.5 Product Build 20190808 for Intel(R) 64 architecture applications
      - Intel(R) MKL-DNN v0.21.1 (Git Hash 7d2fd500bc78936d1d648ca713b901012f470dbc)
      - OpenMP 201511 (a.k.a. OpenMP 4.5)
      - NNPACK is enabled
      - CPU capability usage: AVX2
      - CUDA Runtime 10.1
      - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_37,code=compute_37
      - CuDNN 7.6.3
      - Magma 2.5.2
      - Build settings: BLAS=MKL, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_INTERNAL_THREADPOOL_IMPL -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_STATIC_DISPATCH=OFF, 
    
    TorchVision: 0.6.0+cu101
    OpenCV: 4.5.1
    MMCV: 1.2.6
    MMCV Compiler: GCC 7.5
    MMCV CUDA Compiler: 10.1
    MMDetection: 2.9.0
    MMDetection3D: 0.10.0+ac9a3e8
    

    Error traceback

    2021-03-04 08:35:26,752 - mmdet - INFO - Epoch [1][50/8195]     lr: 4.323e-04, eta: 1 day, 22:41:33, time: 0.513, data_time: 0.052, memory: 4503, loss_cls: 1.1683, loss_bbox: 2.4333, loss_dir: 0.1463, loss: 3.7479, grad_norm: 113.1708
    2021-03-04 08:35:49,141 - mmdet - INFO - Epoch [1][100/8195]    lr: 5.673e-04, eta: 1 day, 19:43:21, time: 0.448, data_time: 0.006, memory: 4523, loss_cls: 0.9347, loss_bbox: 2.0219, loss_dir: 0.1392, loss: 3.0958, grad_norm: 19.0356
    Traceback (most recent call last):
      File "tools/train.py", line 166, in <module>
        main()
      File "tools/train.py", line 162, in main
        meta=meta)
      File "/home/kirilly/v2pearl5p36/lib/python3.6/site-packages/mmdet/apis/train.py", line 150, in train_detector
        runner.run(data_loaders, cfg.workflow, cfg.total_epochs)
      File "/home/kirilly/v2pearl5p36/lib/python3.6/site-packages/mmcv/runner/epoch_based_runner.py", line 125, in run
        epoch_runner(data_loaders[i], **kwargs)
      File "/home/kirilly/v2pearl5p36/lib/python3.6/site-packages/mmcv/runner/epoch_based_runner.py", line 50, in train
        self.run_iter(data_batch, train_mode=True)
      File "/home/kirilly/v2pearl5p36/lib/python3.6/site-packages/mmcv/runner/epoch_based_runner.py", line 30, in run_iter
        **kwargs)
      File "/home/kirilly/v2pearl5p36/lib/python3.6/site-packages/mmcv/parallel/data_parallel.py", line 67, in train_step
        return self.module.train_step(*inputs[0], **kwargs[0])
      File "/home/kirilly/v2pearl5p36/lib/python3.6/site-packages/mmdet/models/detectors/base.py", line 247, in train_step
        losses = self(**data)
      File "/home/kirilly/v2pearl5p36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/kirilly/v2pearl5p36/lib/python3.6/site-packages/mmcv/runner/fp16_utils.py", line 84, in new_func
        return old_func(*args, **kwargs)
      File "/home/kirilly/git_repos/mmdetection3d/mmdet3d/models/detectors/base.py", line 59, in forward
        return self.forward_train(**kwargs)
      File "/home/kirilly/git_repos/mmdetection3d/mmdet3d/models/detectors/mvx_two_stage.py", line 274, in forward_train
        points, img=img, img_metas=img_metas)
      File "/home/kirilly/git_repos/mmdetection3d/mmdet3d/models/detectors/mvx_two_stage.py", line 208, in extract_feat
        pts_feats = self.extract_pts_feat(points, img_feats, img_metas)
      File "/home/kirilly/git_repos/mmdetection3d/mmdet3d/models/detectors/mvx_faster_rcnn.py", line 54, in extract_pts_feat
        voxels, coors, points, img_feats, img_metas)
      File "/home/kirilly/v2pearl5p36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/kirilly/v2pearl5p36/lib/python3.6/site-packages/mmcv/runner/fp16_utils.py", line 164, in new_func
        return old_func(*args, **kwargs)
      File "/home/kirilly/git_repos/mmdetection3d/mmdet3d/models/voxel_encoders/voxel_encoder.py", line 244, in forward
        voxel_mean, mean_coors = self.cluster_scatter(features, coors)
      File "/home/kirilly/v2pearl5p36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/kirilly/git_repos/mmdetection3d/mmdet3d/ops/voxel/scatter_points.py", line 113, in forward
        points[inds], coors[inds][:, 1:])
      File "/home/kirilly/git_repos/mmdetection3d/mmdet3d/ops/voxel/scatter_points.py", line 92, in forward_single
        self.point_cloud_range)
      File "/home/kirilly/git_repos/mmdetection3d/mmdet3d/ops/voxel/scatter_points.py", line 38, in forward
        coors_range)
    RuntimeError: CUDA error: an illegal memory access was encountered
    terminate called after throwing an instance of 'c10::Error'
      what():  CUDA error: an illegal memory access was encountered (insert_events at /pytorch/c10/cuda/CUDACachingAllocator.cpp:771)
    frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x46 (0x7fc48ce74536 in /home/kirilly/v2pearl5p36/lib/python3.6/site-packages/torch/lib/libc10.so)
    frame #1: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0x7ae (0x7fc48d0b7fbe in /home/kirilly/v2pearl5p36/lib/python3.6/site-packages/torch/lib/libc10_cuda.so)
    frame #2: c10::TensorImpl::release_resources() + 0x4d (0x7fc48ce64abd in /home/kirilly/v2pearl5p36/lib/python3.6/site-packages/torch/lib/libc10.so)
    frame #3: <unknown function> + 0x522fe2 (0x7fc4d4054fe2 in /home/kirilly/v2pearl5p36/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
    frame #4: <unknown function> + 0x523086 (0x7fc4d4055086 in /home/kirilly/v2pearl5p36/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
    frame #5: python() [0x54f146]
    frame #6: python() [0x588475]
    frame #7: python() [0x572b20]
    frame #8: python() [0x54ee4b]
    frame #9: python() [0x54ee4b]
    frame #10: python() [0x588948]
    frame #11: python() [0x5ad418]
    frame #12: python() [0x5ad42e]
    frame #13: python() [0x5ad42e]
    frame #14: python() [0x5ad42e]
    frame #15: python() [0x5ad42e]
    frame #16: python() [0x5ad42e]
    frame #17: python() [0x5ad42e]
    frame #18: python() [0x56b4c6]
    <omitting python frames>
    frame #24: __libc_start_main + 0xe7 (0x7fc4e5694bf7 in /lib/x86_64-linux-gnu/libc.so.6)
    
    Aborted (core dumped)
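    An illegal memory access during dynamic voxelization of a custom dataset often traces back to points lying outside the configured point_cloud_range (or NaN coordinates), which can yield out-of-bounds voxel indices. A hedged numpy sketch of the kind of sanity filter one can apply to the point cloud before voxelization; the range values below are common KITTI defaults, not necessarily yours:

```python
import numpy as np

def filter_points_to_range(points, pc_range):
    """Keep only finite points whose x/y/z fall inside pc_range.

    points:   (N, C) array whose first three columns are x, y, z.
    pc_range: [x_min, y_min, z_min, x_max, y_max, z_max].
    """
    finite = np.all(np.isfinite(points), axis=1)
    lo = np.array(pc_range[:3])
    hi = np.array(pc_range[3:])
    inside = np.all((points[:, :3] >= lo) & (points[:, :3] < hi), axis=1)
    return points[finite & inside]

# KITTI-style range used by many mmdet3d configs (check your own config).
pc_range = [0, -40, -3, 70.4, 40, 1]
pts = np.array([
    [10.0, 0.0, -1.0, 0.5],    # inside the range
    [100.0, 0.0, -1.0, 0.5],   # x out of range
    [10.0, 0.0, np.nan, 0.5],  # NaN coordinate
])
print(filter_points_to_range(pts, pc_range).shape)  # (1, 4)
```

    Running such a check over the custom dataset quickly reveals whether the converted point clouds contain out-of-range or non-finite values that plain KITTI does not.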
    
    reimplementation 
    opened by manonthegithub 27
  • numpy.ndarray size changed, may indicate binary incompatibility. Expected 88 from C header, got 80 from PyObject

    When running:

    python tools/test.py configs/votenet/votenet_8x8_scannet-3d-18class.py checkpoints/votenet_8x8_scannet-3d-18class_20200620_230238-2cea9c3a.pth --show --show-dir ./data/scannet/show_results

    Traceback (most recent call last):
      File "tools/test.py", line 12, in <module>
        from mmdet3d.apis import single_gpu_test
      File "/media/chenyan/DATA/baidu/mmdetection3d/mmdet3d/apis/__init__.py", line 1, in <module>
        from .inference import (convert_SyncBN, inference_detector,
      File "/media/chenyan/DATA/baidu/mmdetection3d/mmdet3d/apis/inference.py", line 10, in <module>
        from mmdet3d.core import (Box3DMode, DepthInstance3DBoxes,
      File "/media/chenyan/DATA/baidu/mmdetection3d/mmdet3d/core/__init__.py", line 1, in <module>
        from .anchor import *  # noqa: F401, F403
      File "/media/chenyan/DATA/baidu/mmdetection3d/mmdet3d/core/anchor/__init__.py", line 1, in <module>
        from mmdet.core.anchor import build_anchor_generator
      File "/media/chenyan/DATA/baidu/mmdetection/mmdet/core/__init__.py", line 5, in <module>
        from .mask import *  # noqa: F401, F403
      File "/media/chenyan/DATA/baidu/mmdetection/mmdet/core/mask/__init__.py", line 2, in <module>
        from .structures import BaseInstanceMasks, BitmapMasks, PolygonMasks
      File "/media/chenyan/DATA/baidu/mmdetection/mmdet/core/mask/structures.py", line 6, in <module>
        import pycocotools.mask as maskUtils
      File "/home/chenyan/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/pycocotools/mask.py", line 3, in <module>
        import pycocotools._mask as _mask
      File "pycocotools/_mask.pyx", line 1, in init pycocotools._mask
    ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 88 from C header, got 80 from PyObject

    opened by superchenyan 26
  • [Feature] Support PointRCNN RPN and RCNN module

    Motivation

    Support PointRCNN RPN and RCNN module to support the complete PointRCNN detector.

    Modification

    Support PointRCNN RPN and RCNN module.

    BC-breaking (Optional)

    Does the modification introduce changes that break the back-compatibility of the downstream repos? If so, please describe how it breaks the compatibility and how the downstream projects should modify their code to keep compatibility with this PR.

    Use cases (Optional)

    If this PR introduces a new feature, it is better to list some use cases here, and update the documentation.

    Checklist

    1. Pre-commit or other linting tools are used to fix the potential lint issues.
    2. The modification is covered by complete unit tests. If not, please add more unit test to ensure the correctness.
    3. If the modification has potential influence on downstream projects, this PR should be tested with downstream projects.
    4. The documentation has been modified accordingly, like docstring or example tutorials.
    opened by wHao-Wu 22
  • Confusion about 3DSSD results between the official and MMDet3D implementations

    Thanks for the developers' extraordinary work! I have a question about the 3DSSD evaluation results from the official implementation vs. MMDet3D. The author's released results:

    Methods   | Easy AP | Moderate AP | Hard AP | Models
    ----------|---------|-------------|---------|-------
    3DSSD     | 91.71   | 83.30       | 80.44   | model
    PointRCNN | 88.91   | 79.88       | 78.37   | model

    In MMDet3D, the result:

    Backbone       | Class | Lr schd | Mem (GB) | Inf time (fps) | mAP            | Download
    ---------------|-------|---------|----------|----------------|----------------|------------
    PointNet2SAMSG | Car   | 72e     | 4.7      |                | 78.39 (81.00)1 | model, log

    I noticed the "Experiment details on KITTI datasets" section, which explains differences from the official implementation.

    1. The official implementation is based on TensorFlow 1.4, but I doubt PyTorch itself is the reason for the poorer performance; or is there a performance gap between TensorFlow and PyTorch?
    2. There is about a two-point margin (81.0 vs. 83.3) between the two implementations. Can we come up with a way to close it?

    I also used a single 2080 Ti to train a train+val model with configs/3DSSD/3dssd_kitti-3d-car.py, changing ann_file=data_root + 'kitti_infos_train.pkl' to ann_file=data_root + 'kitti_infos_trainval.pkl' and keeping the rest of the code unchanged. When training finishes, I will evaluate on the test set and post the results here to discuss. Thanks again!
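    The train+val switch described above amounts to repointing the training annotation file at the trainval split. A minimal sketch of the relevant config fragment; the data_root path follows the standard KITTI layout in mmdet3d configs and should be adjusted to your setup:

```python
# Hypothetical excerpt of a KITTI config: train on train+val instead of train.
data_root = 'data/kitti/'
data = dict(
    train=dict(
        # was: data_root + 'kitti_infos_train.pkl'
        ann_file=data_root + 'kitti_infos_trainval.pkl',
    ))
```

    Note that a model trained on train+val can only be meaningfully evaluated on the held-out test set, since the val annotations were seen during training.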

    community discussion reimplementation 
    opened by Physu 22
  • [Fix] Fix a bug that may cause compilation failure of dynamic voxelization when using GPUs with compute capability lower than 6.x

    This PR fixes a bug that may cause compilation failure of dynamic voxelization when using GPUs with compute capability lower than 6.x.

    It also fixes imperfect kernel code that could unintentionally discard valid points when the input point count is larger than 50000 * 512 (nearly impossible in practice, though).

    opened by zhanggefan 21
  • Mixed-precision training fails for PointPillars on the Waymo dataset, producing NaN output all the time

    Hi,

    I tried training PointPillars with fp16 on the Waymo dataset but got NaN outputs almost every iteration.

    After some troubleshooting, I found that the NaNs are caused by inf values in the input point cloud tensor, in the "intensity" column. Waymo uses raw intensity values, which cover a wide range as mentioned in the following issue:

    https://github.com/waymo-research/waymo-open-dataset/issues/226#issue-714104811

    I also found that more than half of the point clouds in the Waymo dataset contain points with intensity values larger than 65504, the maximum value fp16 can hold. As a result, overflow happens here:

    https://github.com/open-mmlab/mmdetection3d/blob/1b39a483263825ac5a3e57fdf07d380adb7aa16d/mmdet3d/models/detectors/base.py#L46-L49
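    One way to avoid the overflow is to clamp (or log-scale) the intensity channel before any fp16 cast, so every value stays within fp16's representable range. A hedged numpy sketch of the idea; mmdet3d itself operates on torch tensors, and the channel index for intensity depends on your point-loading settings:

```python
import numpy as np

FP16_MAX = float(np.finfo(np.float16).max)  # 65504.0

def clamp_intensity_for_fp16(points, intensity_col=3):
    """Clamp one channel so a later fp16 cast cannot overflow to inf."""
    points = points.copy()
    np.clip(points[:, intensity_col], -FP16_MAX, FP16_MAX,
            out=points[:, intensity_col])
    return points

pts = np.array([[1.0, 2.0, 3.0, 1e6]])  # Waymo-style raw intensity
safe = clamp_intensity_for_fp16(pts)
print(np.isfinite(safe.astype(np.float16)).all())  # prints "True"
```

    An alternative with the same effect is normalizing intensity (e.g. tanh or log1p) during data preparation, which also puts the feature on a scale friendlier to training.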

    enhancement community discussion 
    opened by zhanggefan 21
  • ERROR trying to train 3DSSD

    Hello,

    I got this error when trying to execute train.py for the 3DSSD model:

    File "/home/rmoreira/.local/lib/python3.8/site-packages/mmdet3d-0.11.0-py3.8-linux-x86_64.egg/mmdet3d/ops/ball_query/ball_query.py", line 4, in <module>
        from . import ball_query_ext
    ImportError: libcudart.so.10.1: cannot open shared object file: No such file or directory
    

    I hope someone has insight into this.

    Thanks in advance!
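    An ImportError on libcudart.so.10.1 usually means the CUDA runtime directory is missing from LD_LIBRARY_PATH (e.g. something like export LD_LIBRARY_PATH=/usr/local/cuda-10.1/lib64:$LD_LIBRARY_PATH, path depending on your install), or that the extension was built against a different CUDA version than the one installed. A small stdlib sketch for checking what the dynamic loader can currently see:

```python
import ctypes.util
import os

# Where the dynamic loader is currently allowed to look for shared libraries.
print(os.environ.get("LD_LIBRARY_PATH", "<unset>"))

# None here means the loader cannot locate any CUDA runtime library at all,
# which matches the ImportError in the report above.
print(ctypes.util.find_library("cudart"))
```

    If find_library locates a cudart but of a different version (e.g. 10.2), the compiled ops need rebuilding against the installed toolkit.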

    reimplementation 
    opened by RolandoAvides 20
  • Getting "RuntimeError: CUDA error: invalid configuration argument" trying to train MVX-Net

    I am getting this error when trying to train MVX-Net with: python tools/train.py configs/mvxnet/dv_mvx-fpn_second_secfpn_adamw_2x8_80e_kitti-3d-3class.py

    Traceback (most recent call last):
      File "tools/train.py", line 166, in <module>
        main()
      File "tools/train.py", line 162, in main
        meta=meta)
      File "/home/kirilly/v2pearl5p36/lib/python3.6/site-packages/mmdet/apis/train.py", line 150, in train_detector
        runner.run(data_loaders, cfg.workflow, cfg.total_epochs)
      File "/home/kirilly/v2pearl5p36/lib/python3.6/site-packages/mmcv/runner/epoch_based_runner.py", line 125, in run
        epoch_runner(data_loaders[i], **kwargs)
      File "/home/kirilly/v2pearl5p36/lib/python3.6/site-packages/mmcv/runner/epoch_based_runner.py", line 50, in train
        self.run_iter(data_batch, train_mode=True)
      File "/home/kirilly/v2pearl5p36/lib/python3.6/site-packages/mmcv/runner/epoch_based_runner.py", line 30, in run_iter
        **kwargs)
      File "/home/kirilly/v2pearl5p36/lib/python3.6/site-packages/mmcv/parallel/data_parallel.py", line 67, in train_step
        return self.module.train_step(*inputs[0], **kwargs[0])
      File "/home/kirilly/v2pearl5p36/lib/python3.6/site-packages/mmdet/models/detectors/base.py", line 247, in train_step
        losses = self(**data)
      File "/home/kirilly/v2pearl5p36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/kirilly/v2pearl5p36/lib/python3.6/site-packages/mmcv/runner/fp16_utils.py", line 84, in new_func
        return old_func(*args, **kwargs)
      File "/home/kirilly/git_repos/mmdetection3d/mmdet3d/models/detectors/base.py", line 59, in forward
        return self.forward_train(**kwargs)
      File "/home/kirilly/git_repos/mmdetection3d/mmdet3d/models/detectors/mvx_two_stage.py", line 274, in forward_train
        points, img=img, img_metas=img_metas)
      File "/home/kirilly/git_repos/mmdetection3d/mmdet3d/models/detectors/mvx_two_stage.py", line 208, in extract_feat
        pts_feats = self.extract_pts_feat(points, img_feats, img_metas)
      File "/home/kirilly/git_repos/mmdetection3d/mmdet3d/models/detectors/mvx_faster_rcnn.py", line 54, in extract_pts_feat
        voxels, coors, points, img_feats, img_metas)
      File "/home/kirilly/v2pearl5p36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/kirilly/v2pearl5p36/lib/python3.6/site-packages/mmcv/runner/fp16_utils.py", line 164, in new_func
        return old_func(*args, **kwargs)
      File "/home/kirilly/git_repos/mmdetection3d/mmdet3d/models/voxel_encoders/voxel_encoder.py", line 274, in forward
        voxel_feats, voxel_coors = self.vfe_scatter(point_feats, coors)
      File "/home/kirilly/v2pearl5p36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/kirilly/git_repos/mmdetection3d/mmdet3d/ops/voxel/scatter_points.py", line 113, in forward
        points[inds], coors[inds][:, 1:])
      File "/home/kirilly/git_repos/mmdetection3d/mmdet3d/ops/voxel/scatter_points.py", line 92, in forward_single
        self.point_cloud_range)
      File "/home/kirilly/git_repos/mmdetection3d/mmdet3d/ops/voxel/scatter_points.py", line 38, in forward
        coors_range)
    RuntimeError: CUDA error: invalid configuration argument
    

    the environment:

    sys.platform: linux
    Python: 3.6.9 (default, Oct  8 2020, 12:12:24) [GCC 8.4.0]
    CUDA available: True
    GPU 0,1: TITAN X (Pascal)
    CUDA_HOME: /home/kirilly/cuda10.1
    NVCC: Cuda compilation tools, release 10.1, V10.1.105
    GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
    PyTorch: 1.5.0+cu101
    PyTorch compiling details: PyTorch built with:
      - GCC 7.3
      - C++ Version: 201402
      - Intel(R) Math Kernel Library Version 2019.0.5 Product Build 20190808 for Intel(R) 64 architecture applications
      - Intel(R) MKL-DNN v0.21.1 (Git Hash 7d2fd500bc78936d1d648ca713b901012f470dbc)
      - OpenMP 201511 (a.k.a. OpenMP 4.5)
      - NNPACK is enabled
      - CPU capability usage: AVX2
      - CUDA Runtime 10.1
      - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_37,code=compute_37
      - CuDNN 7.6.3
      - Magma 2.5.2
      - Build settings: BLAS=MKL, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_INTERNAL_THREADPOOL_IMPL -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_STATIC_DISPATCH=OFF, 
    
    TorchVision: 0.6.0+cu101
    OpenCV: 4.5.1
    MMCV: 1.2.6
    MMCV Compiler: GCC 7.5
    MMCV CUDA Compiler: 10.1
    MMDetection: 2.9.0
    MMDetection3D: 0.10.0+ac9a3e8
    

    Did anyone encounter this? What could be the steps for solution? Thank you a lot in advance for help!

    awaiting response 
    opened by manonthegithub 20
  • How to use a dual-GPU computer for single-GPU training

    2021-11-08 17:30:03,141 - mmdet - INFO - Set random seed to 0, deterministic: False
    2021-11-08 17:30:03,201 - mmdet - INFO - initialize MYSECOND with init_cfg {'type': 'Kaiming', 'layer': 'Conv2d'}
    2021-11-08 17:30:03,269 - mmdet - INFO - initialize MYSECONDFPN with init_cfg [{'type': 'Kaiming', 'layer': 'ConvTranspose2d'}, {'type': 'Constant', 'layer': 'NaiveSyncBatchNorm2d', 'val': 1.0}]
    2021-11-08 17:30:03,272 - mmdet - INFO - initialize Anchor3DHead with init_cfg {'type': 'Normal', 'layer': 'Conv2d', 'std': 0.01, 'override': {'type': 'Normal', 'name': 'conv_cls', 'std': 0.01, 'bias_prob': 0.01}}
    2021-11-08 17:30:03,276 - mmdet - INFO - Model:
    VoxelNet(
      (backbone): MYSECOND(
        (blocks): ModuleList(
          (0): Sequential(
            (0): Conv2d(64, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
            (1): BatchNorm2d(64, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
            (2): SiLU(inplace=True)
            (3): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (4): BatchNorm2d(64, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
            (5): SiLU(inplace=True)
            (6): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (7): BatchNorm2d(64, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
            (8): SiLU(inplace=True)
            (9): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (10): BatchNorm2d(64, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
            (11): SiLU(inplace=True)
          )
          (1): Sequential(
            (0): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
            (1): BatchNorm2d(128, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
            (2): SiLU(inplace=True)
            (3): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (4): BatchNorm2d(128, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
            (5): SiLU(inplace=True)
            (6): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (7): BatchNorm2d(128, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
            (8): SiLU(inplace=True)
            (9): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (10): BatchNorm2d(128, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
            (11): SiLU(inplace=True)
            (12): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (13): BatchNorm2d(128, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
            (14): SiLU(inplace=True)
            (15): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (16): BatchNorm2d(128, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
            (17): SiLU(inplace=True)
          )
          (2): Sequential(
            (0): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
            (1): BatchNorm2d(256, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
            (2): SiLU(inplace=True)
            (3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (4): BatchNorm2d(256, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
            (5): SiLU(inplace=True)
            (6): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (7): BatchNorm2d(256, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
            (8): SiLU(inplace=True)
            (9): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (10): BatchNorm2d(256, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
            (11): SiLU(inplace=True)
            (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (13): BatchNorm2d(256, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
            (14): SiLU(inplace=True)
            (15): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (16): BatchNorm2d(256, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
            (17): SiLU(inplace=True)
          )
        )
      )
      init_cfg={'type': 'Kaiming', 'layer': 'Conv2d'}
      (neck): MYSECONDFPN(
        (deblocks): ModuleList(
          (0): Sequential(
            (0): Upsample(scale_factor=1.0, mode=bilinear)
            (1): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1))
            (2): BatchNorm2d(128, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
            (3): SiLU(inplace=True)
          )
          (1): Sequential(
            (0): Upsample(scale_factor=2.0, mode=bilinear)
            (1): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1))
            (2): BatchNorm2d(128, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
            (3): SiLU(inplace=True)
          )
          (2): Sequential(
            (0): Upsample(scale_factor=4.0, mode=bilinear)
            (1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1))
            (2): BatchNorm2d(128, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
            (3): SiLU(inplace=True)
          )
        )
      )
      init_cfg=[{'type': 'Kaiming', 'layer': 'ConvTranspose2d'}, {'type': 'Constant', 'layer': 'NaiveSyncBatchNorm2d', 'val': 1.0}]
      (bbox_head): Anchor3DHead(
        (loss_cls): FocalLoss()
        (loss_bbox): SmoothL1Loss()
        (loss_dir): CrossEntropyLoss()
        (conv_cls): Conv2d(384, 2, kernel_size=(1, 1), stride=(1, 1))
        (conv_reg): Conv2d(384, 14, kernel_size=(1, 1), stride=(1, 1))
        (conv_dir_cls): Conv2d(384, 4, kernel_size=(1, 1), stride=(1, 1))
      )
      init_cfg={'type': 'Normal', 'layer': 'Conv2d', 'std': 0.01, 'override': {'type': 'Normal', 'name': 'conv_cls', 'std': 0.01, 'bias_prob': 0.01}}
      (voxel_layer): Voxelization(voxel_size=[0.16, 0.16, 4], point_cloud_range=[0, -39.68, -3, 69.12, 39.68, 1], max_num_points=32, max_voxels=(16000, 40000), deterministic=True)
      (voxel_encoder): PillarFeatureNet(
        (pfn_layers): ModuleList(
          (0): PFNLayer(
            (norm): BatchNorm1d(64, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
            (linear): Linear(in_features=9, out_features=64, bias=False)
          )
        )
      )
      (middle_encoder): PointPillarsScatter()
    )

    When I'm training, it gets stuck here; after waiting about 10 minutes, the following error appears:

    RuntimeError: Broken pipe

    opened by stidk 19
  • [Feature] groupfree3d objectness loss


    What's the feature?

    In groupfree3d_head.py, line 441 and line 445, the objectness loss is computed with 1 - obj_targets. Is that better than just using obj_targets?

    Any other context?

    sampling_objectness_loss = self.sampling_objectness_loss(
        sampling_obj_score,
        1 - sampling_targets.reshape(-1),
        sampling_weights.reshape(-1),
        avg_factor=batch_size)

    objectness_loss = self.objectness_loss(
        obj_score.reshape(-1, 1),
        1 - objectness_targets.reshape(-1),
        objectness_weights.reshape(-1),
        avg_factor=batch_size)
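    For intuition: with a binary objectness loss, replacing the targets t by 1 - t is equivalent to relabeling which class counts as "object"; it does not change the optimization problem, only which probability plays the positive role. A toy numeric check in plain Python (this is an illustration, not the actual FocalLoss/CrossEntropyLoss code used in groupfree3d_head.py):

```python
import math

def bce(p, t):
    """Binary cross-entropy for one predicted probability p in (0, 1)
    and one binary target t in {0, 1}."""
    return -(t * math.log(p) + (1 - t) * math.log(1 - p))

# Flipping the target (t -> 1 - t) gives the same loss as flipping the
# predicted probability (p -> 1 - p): the positive class is relabeled,
# but the objective being optimized is symmetric.
p, t = 0.8, 1
assert abs(bce(p, 1 - t) - bce(1 - p, t)) < 1e-12
```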

    opened by yehx1 1
  • [Bug]  Result is saved to /tmp/tmpmp_yh0x5/results.pkl.


    Prerequisite

    Task

    I'm using the official example scripts/configs for the officially supported tasks/models/datasets.

    Branch

    master branch https://github.com/open-mmlab/mmdetection3d

    Environment

    ubuntu20.04

    Reproduces the problem - code sample

    [>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 3624/3624, 73.3 task/s, elapsed: 49s, ETA: 0s
    Result is saved to /tmp/tmpzp7n6imw/results.pkl.
    2023-01-02 12:47:26,511 - mmdet - INFO - Car AP@0.70, 0.70, 0.70:
    bbox AP:10.1963, 7.1649, 7.1313
    bev AP:62.6201, 44.8156, 44.7846
    3d AP:46.7023, 32.4005, 32.3829
    aos AP:4.89, 3.41, 3.39
    Car AP@0.70, 0.50, 0.50:

    Reproduces the problem - command or script

    when training

    Reproduces the problem - error message

    The work-dir is set correctly, but the results are saved to a wrong temporary location.

    Additional information

    No response
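    For context, saving to a /tmp path is the usual behavior of OpenMMLab test scripts when no explicit output file is given: evaluation dumps intermediate results to a temporary directory, while checkpoints still go to work_dir. A minimal sketch of that pattern (illustrative only; the function name and signature below are not the exact mmdet3d code):

```python
import os
import pickle
import tempfile

def dump_results(results, out=None):
    """Write results to `out` if given, otherwise to a throwaway tmp dir
    (which is where paths like /tmp/tmpzp7n6imw/results.pkl come from)."""
    if out is None:
        tmpdir = tempfile.mkdtemp()
        out = os.path.join(tmpdir, 'results.pkl')
    with open(out, 'wb') as f:
        pickle.dump(results, f)
    return out

path = dump_results([{'bbox': [0, 0, 1, 1]}])
assert path.endswith('results.pkl') and os.path.exists(path)
```

Passing an explicit output path (e.g. under the work dir) keeps the results out of /tmp.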

    opened by wszhengjx 2
  • [Fix] Fix `cam_list` for `WaymoDataset`


    Thanks for your contribution and we appreciate it a lot. The following instructions would make your pull request more healthy and more easily get feedback. If you do not understand some items, don't worry, just make the pull request and seek help from maintainers.

    Motivation

    Please describe the motivation of this PR and the goal you want to achieve through this PR.

    Modification

    For `cam_list` in Waymo, the order should be `FRONT, FRONT_LEFT, FRONT_RIGHT, SIDE_LEFT, SIDE_RIGHT` instead of `FRONT, FRONT_RIGHT, FRONT_LEFT, SIDE_RIGHT, SIDE_LEFT`.
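    A minimal sketch of the corrected ordering (the list literals are illustrative; the intent is that the left cameras come before the right ones, matching the Waymo Open Dataset camera enumeration):

```python
# Corrected Waymo camera ordering described in this PR.
cam_list = ['FRONT', 'FRONT_LEFT', 'FRONT_RIGHT', 'SIDE_LEFT', 'SIDE_RIGHT']

# The previous (incorrect) ordering swapped the left/right cameras:
old_cam_list = ['FRONT', 'FRONT_RIGHT', 'FRONT_LEFT', 'SIDE_RIGHT', 'SIDE_LEFT']

# Same set of cameras, different order.
assert set(cam_list) == set(old_cam_list)
assert cam_list[1] == 'FRONT_LEFT' and cam_list[3] == 'SIDE_LEFT'
```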

    BC-breaking (Optional)

    Does the modification introduce changes that break the back-compatibility of the downstream repos? If so, please describe how it breaks the compatibility and how the downstream projects should modify their code to keep compatibility with this PR.

    Use cases (Optional)

    If this PR introduces a new feature, it is better to list some use cases here, and update the documentation.

    Checklist

    1. Pre-commit or other linting tools are used to fix the potential lint issues.
    2. The modification is covered by complete unit tests. If not, please add more unit test to ensure the correctness.
    3. If the modification has potential influence on downstream projects, this PR should be tested with downstream projects.
    4. The documentation has been modified accordingly, like docstring or example tutorials.
    opened by Xiangxu-0103 1
  • [CodeCamp #1503] Use mmeval.InstanceSeg


    Motivation

    Use mmeval.InstanceSeg

    Modification

    • mmdet3d/evaluation/functional/instance_seg_eval.py (deleted)
    • mmdet3d/evaluation/metrics/instance_seg_metric.py
    • tests/test_evaluation/test_metrics/test_instance_seg_metric.py

    BC-breaking

    Delete the original implementation of instance_seg_eval.

    Use cases

    Since mmdetection3d currently doesn't support InstanceSeg, I cannot provide a use case for now.

    opened by Pzzzzz5142 1
  • [Fix] update SECOND checkpoints link and README

    opened by ZCMax 0
  • [Fix] Fix bugs in `analyze_logs`

    opened by Xiangxu-0103 1
Releases(v1.0.0rc6)
  • v1.0.0rc6(Dec 16, 2022)

    New Features

    • Add Projects/ folder and the first example project (#2082)

    Improvements

    • Update Waymo converter to save storage space (#1759)
    • Update model link and performance of CenterPoint (#1916)

    Bug Fixes

    • Fix GPU memory occupancy problem in PointRCNN (#1928)
    • Fix sampling bug in IoUNegPiecewiseSampler (#2018)

    Contributors

    A total of 8 developers contributed to this release.

    @oyel, @zzj403, @VVsssssk, @Tai-Wang, @tpoisonooo, @JingweiZhang12, @ZCMax, @ZwwWayne

    Full Changelog: https://github.com/open-mmlab/mmdetection3d/compare/v1.0.0rc5...v1.0.0rc6

    Source code(tar.gz)
    Source code(zip)
  • v1.1.0rc2(Dec 3, 2022)

    Highlights

    • Support PV-RCNN
    • Speed up evaluation on Waymo dataset

    New Features

    • Support PV-RCNN (#1597, #2045)
    • Speed up evaluation on Waymo dataset (#2008)
    • Refactor FCAF3D into the framework of mmdet3d v1.1 (#1945)
    • Refactor S3DIS dataset into the framework of mmdet3d v1.1 (#1984)
    • Add Projects/ folder and the first example project (#2042)

    Improvements

    • Rename CLASSES and PALETTE to classes and palette respectively (#1932)
    • Update metainfo in pkl files and add categories into metainfo (#1934)
    • Show instance statistics before and after through the pipeline (#1863)
    • Add configs of DGCNN for different testing areas (#1967)
    • Remove testing utils from tests/utils/ to mmdet3d/testing/ (#2012)
    • Add typehint for code in models/layers/ (#2014)
    • Refine documentation (#1891, #1994)
    • Refine voxelization for better speed (#2062)

    Bug Fixes

    • Fix loop visualization error about point cloud (#1914)
    • Fix image conversion of Waymo to avoid information loss (#1979)
    • Fix evaluation on KITTI testset (#2005)
    • Fix sampling bug in IoUNegPiecewiseSampler (#2017)
    • Fix point cloud range in CenterPoint (#1998)
    • Fix some loading bugs and support FOV-image-based mode on Waymo dataset (#1942)
    • Fix dataset conversion utils (#1923, #2040, #1971)
    • Update metafiles in all the configs (#2006)

    Contributors

    A total of 12 developers contributed to this release.

    @vavanade, @oyel, @thinkthinking, @PeterH0323, @274869388, @cxiang26, @lianqing11, @VVsssssk, @ZCMax, @Xiangxu-0103, @JingweiZhang12, @Tai-Wang

    New Contributors

    • @PeterH0323 made their first contribution in #2065
    • @cxiang26 made their first contribution in #1965
    • @vavanade made their first contribution in #2031
    • @oyel made their first contribution in #2017
    • @thinkthinking made their first contribution in #2026
    • @274869388 made their first contribution in #1973

    Full Changelog: https://github.com/open-mmlab/mmdetection3d/compare/v1.1.0rc1...v1.1.0rc2

    Source code(tar.gz)
    Source code(zip)
  • v1.1.0rc1(Oct 17, 2022)

    Highlights

    • Support a camera-only 3D detection baseline on Waymo, MV-FCOS3D++

    New Features

    • Support a camera-only 3D detection baseline on Waymo, MV-FCOS3D++, with new evaluation metrics and transformations (#1716)
    • Refactor PointRCNN in the framework of mmdet3d v1.1 (#1819)

    Improvements

    • Add auto_scale_lr in config to support training with auto-scale learning rates (#1807)
    • Fix CI (#1813, #1865, #1877)
    • Update browse_dataset.py script (#1817)
    • Update SUN RGB-D and Lyft datasets documentation (#1833)
    • Rename convert_to_datasample to add_pred_to_datasample in detectors (#1843)
    • Update customized dataset documentation (#1845)
    • Update Det3DLocalVisualization and visualization documentation (#1857)
    • Add the code of generating cam_sync_labels for Waymo dataset (#1870)
    • Update dataset transforms typehints (#1875)

    Bug Fixes

    • Fix missing registration of models in setup_env.py (#1808)
    • Fix the data base sampler bugs when using the ground plane data (#1812)
    • Add output directory existing check during visualization (#1828)
    • Fix bugs of nuScenes dataset for monocular 3D detection (#1837)
    • Fix visualization hook to support the visualization of different data modalities (#1839)
    • Fix monocular 3D detection demo (#1864)
    • Fix the lack of num_pts_feats key in nuscenes dataset and complete docstring (#1882)

    Contributors

    A total of 10 developers contributed to this release.

    @ZwwWayne, @Tai-Wang, @lianqing11, @VVsssssk, @ZCMax, @Xiangxu-0103, @JingweiZhang12, @tpoisonooo, @ice-tong, @jshilong

    New Contributors

    • @ice-tong made their first contribution in https://github.com/open-mmlab/mmdetection3d/pull/1838

    Full Changelog: https://github.com/open-mmlab/mmdetection3d/compare/v1.1.0rc0...v1.1.0rc1

    Source code(tar.gz)
    Source code(zip)
  • v1.0.0rc5(Oct 17, 2022)

    New Features

    • Support ImVoxelNet on SUN RGB-D (#1738)

    Improvements

    • Fix the cross-codebase reference problem in metafile README (#1644)
    • Update the Chinese documentation about getting started (#1715)
    • Fix docs link and add docs link checker (#1811)

    Bug Fixes

    • Fix a visualization bug that is potentially triggered by empty prediction labels (#1725)
    • Fix point cloud segmentation visualization bug due to wrong parameter passing (#1858)
    • Fix Nan loss bug during PointRCNN training (#1874)

    Contributors

    A total of 11 developers contributed to this release.

    @ZwwWayne, @Tai-Wang, @filaPro, @VVsssssk, @ZCMax, @Xiangxu-0103, @holtvogt, @tpoisonooo, @lianqing01, @TommyZihao, @aditya9710

    New Contributors

    • @tpoisonooo made their first contribution in https://github.com/open-mmlab/mmdetection3d/pull/1614
    • @holtvogt made their first contribution in https://github.com/open-mmlab/mmdetection3d/pull/1725
    • @TommyZihao made their first contribution in https://github.com/open-mmlab/mmdetection3d/pull/1778
    • @aditya9710 made their first contribution in https://github.com/open-mmlab/mmdetection3d/pull/1889

    Full Changelog: https://github.com/open-mmlab/mmdetection3d/compare/v1.0.0rc4...v1.0.0rc5

    Source code(tar.gz)
    Source code(zip)
  • v1.1.0rc0(Sep 1, 2022)

    Changelog of v1.1

    v1.1.0rc0 (1/9/2022)

    We are excited to announce the release of MMDetection3D 1.1.0rc0. MMDet3D 1.1.0rc0 is the first version of MMDetection3D 1.1, a part of the OpenMMLab 2.0 projects. Built upon the new training engine and MMDet 3.x, MMDet3D 1.1 unifies the interfaces of dataset, models, evaluation, and visualization with faster training and testing speed. It also provides a standard data protocol for different datasets, modalities, and tasks for 3D perception. We will support more strong baselines in the future release, with our latest exploration on camera-only 3D detection from videos.

    Highlights

    1. New engines. MMDet3D 1.1 is based on MMEngine and MMDet 3.x, which provides a universal and powerful runner that allows more flexible customizations and significantly simplifies the entry points of high-level interfaces.

    2. Unified interfaces. As a part of the OpenMMLab 2.0 projects, MMDet3D 1.1 unifies and refactors the interfaces and internal logic of training, testing, datasets, models, evaluation, and visualization. All the OpenMMLab 2.0 projects share the same design in those interfaces and logic to allow the emergence of multi-task/modality algorithms.

    3. Standard data protocol for all the datasets, modalities, and tasks for 3D perception. Based on the unified base datasets inherited from MMEngine, we also design a standard data protocol that defines and unifies the common keys across different datasets, tasks, and modalities. It significantly simplifies the usage of multiple datasets and data modalities for multi-task frameworks and eases dataset customization. Please refer to the documentation of customized datasets for details.

    4. Strong baselines. We will release strong baselines of many popular models to enable fair comparisons among state-of-the-art models.

    5. More documentation and tutorials. We add a bunch of documentation and tutorials to help users get started more smoothly. Read it here.

    Breaking Changes

    MMDet3D 1.1 has undergone significant changes to have better design, higher efficiency, more flexibility, and more unified interfaces. Besides the changes of API, we briefly list the major breaking changes in this section. We will update the migration guide to provide complete details and migration instructions. Users can also refer to the compatibility documentation and API doc for more details.

    Dependencies

    • MMDet3D 1.1 runs on PyTorch>=1.6. We have deprecated the support of PyTorch 1.5 to embrace the mixed precision training and other new features since PyTorch 1.6. Some models can still run on PyTorch 1.5, but the full functionality of MMDet3D 1.1 is not guaranteed.
    • MMDet3D 1.1 relies on MMEngine to run. MMEngine is a new foundational library for training deep learning models in OpenMMLab and is widely depended on by OpenMMLab 2.0 projects. The dependencies of file IO and training are migrated from MMCV 1.x to MMEngine.
    • MMDet3D 1.1 relies on MMCV>=2.0.0rc0. Although MMCV no longer maintains the training functionalities since 2.0.0rc0, MMDet3D 1.1 relies on the data transforms, CUDA operators, and image processing interfaces in MMCV. Note that since MMCV 2.0.0rc0, the package mmcv provides pre-built CUDA operators while mmcv-lite does not, and mmcv-full has been deprecated.
    • MMDet3D 1.1 is based on MMDet 3.x, which is also a part of OpenMMLab 2.0 projects.

    Training and testing

    • MMDet3D 1.1 uses Runner in MMEngine rather than that in MMCV. The new Runner implements and unifies the building logic of dataset, model, evaluation, and visualizer. Therefore, MMDet3D 1.1 no longer relies on the building logic of those modules in mmdet3d.train.apis and tools/train.py. That code has been migrated into MMEngine. Please refer to the migration guide of Runner in MMEngine for more details.
    • The Runner in MMEngine also supports testing and validation. The testing scripts are also simplified, and they build the runner with logic similar to that in the training scripts.
    • The execution points of hooks in the new Runner have been enriched to allow more flexible customization. Please refer to the migration guide of Hook in MMEngine for more details.
    • Learning rate and momentum scheduling has been migrated from Hook to Parameter Scheduler in MMEngine. Please refer to the migration guide of Parameter Scheduler in MMEngine for more details.
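    As a framework-free illustration of what a step-decay parameter scheduler computes (function name and values are illustrative; MMEngine's actual schedulers are configured through the config system, not this API):

```python
import bisect

def multistep_lr(base_lr, gamma, milestones, epoch):
    """Step-decay schedule: multiply the learning rate by `gamma`
    each time a milestone epoch has been passed."""
    return base_lr * gamma ** bisect.bisect_right(milestones, epoch)

# lr stays at 0.01 until epoch 8, then decays by 10x at epochs 8 and 11.
schedule = [multistep_lr(0.01, 0.1, [8, 11], e) for e in range(12)]
assert schedule[0] == 0.01
assert abs(schedule[8] - 0.001) < 1e-12
assert abs(schedule[11] - 0.0001) < 1e-12
```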

    Configs

    • The Runner in MMEngine uses a different config structure to ease the understanding of the components in runner. Users can read the config example of MMDet3D 1.1 or refer to the migration guide in MMEngine for migration details.
    • The file names of configs and models are also refactored to follow the new rules unified across OpenMMLab 2.0 projects. The names of checkpoints are not updated for now as there is no BC-breaking of model weights between MMDet3D 1.1 and 1.0.x. We will progressively replace all the model weights by those trained in MMDet3D 1.1. Please refer to the user guides of config for more details.

    Dataset

    The Dataset classes implemented in MMDet3D 1.1 all inherit from Det3DDataset and Seg3DDataset, which in turn inherit from BaseDataset in MMEngine. In addition to the changes of interfaces, there are several changes to Dataset in MMDet3D 1.1.

    • All the datasets support serializing the internal data list to reduce memory usage when multiple workers are built for data loading.
    • The internal data structure in the dataset is changed to be self-contained (without losing information like class names in MMDet3D 1.0.x) while keeping simplicity.
    • Common keys across different datasets and data modalities are defined and all the info files are unified into a standard protocol.
    • The evaluation functionality of each dataset has been removed from dataset so that some specific evaluation metrics like KITTI AP can be used to evaluate the prediction on other datasets.
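    The serialization mentioned in the first bullet can be sketched in a few lines: pickle each info dict into one shared bytes buffer plus an offset table, so forked dataloader workers do not each hold a copy of a large list of Python objects. MMEngine's BaseDataset does this with numpy arrays; the class below is a simplified stand-in:

```python
import pickle

class SerializedList:
    """Store a list as a single bytes blob plus cumulative offsets."""

    def __init__(self, items):
        blobs = [pickle.dumps(x) for x in items]
        self._offsets = []
        total = 0
        for b in blobs:
            total += len(b)
            self._offsets.append(total)
        self._buffer = b''.join(blobs)  # one contiguous buffer, cheap to share

    def __len__(self):
        return len(self._offsets)

    def __getitem__(self, i):
        start = 0 if i == 0 else self._offsets[i - 1]
        return pickle.loads(self._buffer[start:self._offsets[i]])

infos = SerializedList([{'sample_idx': i} for i in range(3)])
assert len(infos) == 3 and infos[2] == {'sample_idx': 2}
```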

    Data Transforms

    The data transforms in MMDet3D 1.1 all inherit from BaseTransform in MMCV>=2.0.0rc0, which defines a new convention in OpenMMLab 2.0 projects. Besides the interface changes, there are several changes listed below:

    • The functionality of some data transforms (e.g., Resize) is decomposed into several transforms to simplify and clarify the usage.
    • The format of data dict processed by each data transform is changed according to the new data structure of dataset.
    • Some inefficient data transforms (e.g., normalization and padding) are moved into data preprocessor of model to improve data loading and training speed.
    • The same data transforms in different OpenMMLab 2.0 libraries have the same augmentation implementation and the logic given the same arguments, i.e., Resize in MMDet 3.x and MMSeg 1.x will resize the image in the exact same manner given the same arguments.
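    The convention behind these transforms is a dict-in/dict-out contract that makes pipelines composable. A toy illustration (a real transform would subclass mmcv.transforms.BaseTransform and implement its abstract transform method; the class and keys below are made up for the example):

```python
class ScalePoints:
    """Toy transform following the dict-in/dict-out pipeline contract."""

    def __init__(self, factor):
        self.factor = factor

    def __call__(self, results):
        # Read a key from the results dict, modify it, return the dict.
        results['points'] = [p * self.factor for p in results['points']]
        return results

# Transforms compose into a pipeline applied left to right.
pipeline = [ScalePoints(2.0), ScalePoints(0.5)]
results = {'points': [1.0, 2.0]}
for t in pipeline:
    results = t(results)
assert results['points'] == [1.0, 2.0]
```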

    Model

    The models in MMDet3D 1.1 all inherit from BaseModel in MMEngine, which defines a new convention for models in OpenMMLab 2.0 projects. Users can refer to the tutorial of model in MMEngine for more details. Accordingly, there are several changes as follows:

    • The model interfaces, including the input and output formats, are significantly simplified and unified following the new convention in MMDet3D 1.1. Specifically, all the input data in training and testing are packed into inputs and data_samples, where inputs contains model inputs such as a dict with a list of image tensors and the point cloud data, and data_samples contains other information of the current data sample such as ground truths, region proposals, and model predictions. In this way, different tasks in MMDet3D 1.1 can share the same input arguments, which makes the models more general and suitable for multi-task learning and some flexible training paradigms like semi-supervised learning.
    • The model has a data preprocessor module, which is used to pre-process the input data of the model. In MMDet3D 1.1, the data preprocessor usually does the necessary steps to form the input images into a batch, such as padding. It can also serve as a place for some special data augmentations or more efficient data transformations like normalization.
    • The internal logic of the model has been changed. In MMDet3D 1.0.x, the model used forward_train, forward_test, simple_test, and aug_test to deal with different forward logics. In MMDet3D 1.1 and OpenMMLab 2.0, the forward function has three modes: 'loss', 'predict', and 'tensor' for training, inference, and tracing or other purposes, respectively. The forward function calls self.loss, self.predict, and self._forward given the modes 'loss', 'predict', and 'tensor', respectively.
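    The three-mode dispatch can be sketched with a toy model (the dispatch mirrors the convention described above; the class name and the arithmetic inside are placeholders, not MMDet3D code):

```python
class ToyDetector:
    """Sketch of the 'loss' / 'predict' / 'tensor' forward modes."""

    def forward(self, inputs, data_samples=None, mode='tensor'):
        if mode == 'loss':
            return self.loss(inputs, data_samples)
        if mode == 'predict':
            return self.predict(inputs, data_samples)
        return self._forward(inputs)

    def _forward(self, inputs):
        # Raw network outputs ('tensor' mode); placeholder arithmetic.
        return [x * 2 for x in inputs]

    def loss(self, inputs, data_samples):
        # Training: return a dict of losses.
        preds = self._forward(inputs)
        return {'loss': sum(abs(p - t) for p, t in zip(preds, data_samples))}

    def predict(self, inputs, data_samples):
        # Inference: return post-processed predictions.
        return self._forward(inputs)

model = ToyDetector()
assert model.forward([1, 2], mode='tensor') == [2, 4]
assert model.forward([1, 2], [2, 4], mode='loss') == {'loss': 0}
```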

    Evaluation

    The evaluation in MMDet3D 1.0.x strictly binds with the dataset. In contrast, MMDet3D 1.1 decouples the evaluation from the dataset, so that all the detection datasets can be evaluated with KITTI AP and other metrics implemented in MMDet3D 1.1. MMDet3D 1.1 mainly implements corresponding metrics for each dataset, which are orchestrated by the Evaluator to complete the evaluation. Users can build an evaluator in MMDet3D 1.1 to conduct offline evaluation, i.e., evaluate predictions that may not be produced in MMDet3D 1.1, as long as the dataset and the predictions follow the dataset conventions. More details can be found in the tutorial in MMEngine.
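    The process/compute split behind metrics and the Evaluator can be sketched as follows (a simplified stand-in, not MMEngine's actual Evaluator API; the metric and keys are made up for the example):

```python
class ToyMetric:
    """Minimal metric: accumulate per-sample results in process(),
    then aggregate them in compute_metrics()."""

    def __init__(self):
        self.results = []

    def process(self, data_sample, prediction):
        # Offline evaluation: predictions need not come from this codebase;
        # they only have to follow the dataset's conventions.
        self.results.append(prediction == data_sample['gt'])

    def compute_metrics(self):
        return {'accuracy': sum(self.results) / len(self.results)}

metric = ToyMetric()
for gt, pred in [({'gt': 1}, 1), ({'gt': 0}, 1)]:
    metric.process(gt, pred)
assert metric.compute_metrics() == {'accuracy': 0.5}
```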

    Visualization

    The visualization functions of MMDet3D 1.0.x are removed. Instead, in OpenMMLab 2.0 projects, we use the Visualizer to visualize data. MMDet3D 1.1 implements Det3DLocalVisualizer to allow visualization of 2D and 3D data, ground truths, model predictions, feature maps, etc., at any place. It also supports sending the visualization data to external visualization backends such as TensorBoard.

    Planned changes

    We list several planned changes of MMDet3D 1.1.0rc0 so that the community can more comprehensively track the progress of MMDet3D 1.1. Feel free to create a PR, issue, or discussion if you are interested, have any suggestions or feedback, or want to participate.

    1. Test-time augmentation: supported in MMDet3D 1.0.x, it is not implemented in this version due to limited time. We will support it in the following releases with a new and simplified design.
    2. Inference interfaces: a unified inference interfaces will be supported in the future to ease the use of released models.
    3. Interfaces of useful tools that can be used in notebooks: more of the useful tools implemented in the tools directory will get Python interfaces so that they can be used from notebooks and in downstream libraries.
    4. Documentation: we will add more design docs, tutorials, and migration guidance so that the community can dive deep into our new design, participate in future development, and smoothly migrate downstream libraries to MMDet3D 1.1.
    5. Wandb visualization: MMDet 2.x has supported data visualization with WandB since v2.25.0, which has not been migrated to MMDet 3.x yet. Since WandB provides strong visualization and experiment management capabilities, a DetWandbVisualizer (and possibly a hook) is planned to fully migrate those functionalities from MMDet 2.x, and a Det3DWandbVisualizer will be supported in MMDet3D 1.1 accordingly.
    6. Support for recent features added in MMDet3D 1.0.x and our recent exploration on camera-only 3D detection from videos: we will refactor these models and support them with benchmarks and models soon.

    Contributors

    A total of 6 developers contributed to this release. Thanks @ZCMax, @jshilong, @VVsssssk, @Tai-Wang, @lianqing11, @ZwwWayne

    Source code(tar.gz)
    Source code(zip)
  • v1.0.0rc4(Aug 8, 2022)

    Highlights

    New Features

    • Support FCAF3D (#1547)
    • Add the transformation to support multi-camera 3D object detection (#1580)
    • Support lift-splat-shoot view transformer (#1598)

    Improvements

    • Remove the limitation of the maximum number of points during SUN RGB-D preprocessing (#1555)
    • Support circle CI (#1647)
    • Add mim to extras_require in setup.py (#1560, #1574)
    • Update dockerfile package version (#1697)

    Bug Fixes

    • Flip yaw angle for DepthInstance3DBoxes.overlaps (#1548, #1556)
    • Fix DGCNN configs (#1587)
    • Fix bbox head not registered bug (#1625)
    • Fix missing objects in S3DIS preprocessing (#1665)
    • Fix spconv2.0 model loading bug (#1699)

    Contributors

    A total of 9 developers contributed to this release.

    @Tai-Wang, @ZwwWayne, @filaPro, @lianqing11, @ZCMax, @HuangJunJie2017, @Xiangxu-0103, @ChonghaoSima, @VVsssssk

    New Contributors

    • @HuangJunJie2017 made their first contribution in https://github.com/open-mmlab/mmdetection3d/pull/1580
    • @ChonghaoSima made their first contribution in https://github.com/open-mmlab/mmdetection3d/pull/1614

    Full Changelog: https://github.com/open-mmlab/mmdetection3d/compare/v1.0.0rc3...v1.0.0rc4

    Source code(tar.gz)
    Source code(zip)
  • v1.0.0rc3(Jun 14, 2022)

    Highlights

    New Features

    Improvements

    • Add Chinese documentation for vision-only 3D detection (#1438)
    • Update CenterPoint pretrained models that are compatible with refactored coordinate systems (#1450)
    • Configure myst-parser to parse anchor tag in the documentation (#1488)
    • Replace markdownlint with mdformat for avoiding installing ruby (#1489)
    • Add missing gt_names when getting annotation info in Custom3DDataset (#1519)
    • Support S3DIS full ceph training (#1542)
    • Rewrite the installation and FAQ documentation (#1545)

    Bug Fixes

    • Fix the incorrect registry name when building RoI extractors (#1460)
    • Fix the potential problems caused by the registry scope update when composing pipelines (#1466) and using CocoDataset (#1536)
    • Fix the missing selection with order in the box3d_nms introduced by #1403 (#1479)
    • Update the PointPillars config to make it consistent with the log (#1486)
    • Fix heading anchor in documentation (#1490)
    • Fix the compatibility of mmcv in the dockerfile (#1508)
    • Make overwrite_spconv packaged when building whl (#1516)
    • Fix the requirement of mmcv and mmdet (#1537)
    • Update configs of PartA2 and support its compatibility with spconv 2.0 (#1538)

    Contributors

    A total of 13 developers contributed to this release.

    @Xiangxu-0103, @ZCMax, @jshilong, @filaPro, @atinfinity, @Tai-Wang, @wenbo-yu, @yi-chen-isuzu, @ZwwWayne, @wchen61, @VVsssssk, @AlexPasqua, @lianqing11

    New Contributors

    • @atinfinity made their first contribution in https://github.com/open-mmlab/mmdetection3d/pull/1508
    • @wenbo-yu made their first contribution in https://github.com/open-mmlab/mmdetection3d/pull/1337
    • @wchen61 made their first contribution in https://github.com/open-mmlab/mmdetection3d/pull/1516
    • @AlexPasqua made their first contribution in https://github.com/open-mmlab/mmdetection3d/pull/1519
    • @lianqing11 made their first contribution in https://github.com/open-mmlab/mmdetection3d/pull/1545

    Full Changelog: https://github.com/open-mmlab/mmdetection3d/compare/v1.0.0rc2...v1.0.0rc3

    Source code(tar.gz)
    Source code(zip)
  • v1.0.0rc2(May 2, 2022)

    Highlights

    • Support spconv 2.0
    • Support MinkowskiEngine with MinkResNet
    • Support training models on custom datasets with only point clouds
    • Update Registry to distinguish the scope of built functions
    • Replace mmcv.iou3d with a set of bird-eye-view (BEV) operators to unify the operations of rotated boxes

    New Features

    • Add loader arguments in the configuration files (#1388)
    • Support spconv 2.0 when the package is installed. Users can still use spconv 1.x in MMCV with CUDA 9.0 (only cost more memory) without losing the compatibility of model weights between two versions (#1421)
    • Support MinkowskiEngine with MinkResNet (#1422)

    Improvements

    • Add the documentation for model deployment (#1373, #1436)
    • Add Chinese documentation of
      • Speed benchmark (#1379)
      • LiDAR-based 3D detection (#1368)
      • LiDAR 3D segmentation (#1420)
      • Coordinate system refactoring (#1384)
    • Support training models on custom datasets with only point clouds (#1393)
    • Replace mmcv.iou3d with a set of bird-eye-view (BEV) operators to unify the operations of rotated boxes (#1403, #1418)
    • Update Registry to distinguish the scope of building functions (#1412, #1443)
    • Replace recommonmark with myst_parser for documentation rendering (#1414)

    Bug Fixes

    • Fix the show pipeline in the browse_dataset.py (#1376)
    • Fix missing init files after coordinate system refactoring (#1383)
    • Fix the incorrect yaw in the visualization caused by coordinate system refactoring (#1407)
    • Fix NaiveSyncBatchNorm1d and NaiveSyncBatchNorm2d to support non-distributed cases and more general inputs (#1435)

    Contributors

    A total of 11 developers contributed to this release.

    @ZCMax, @ZwwWayne, @Tai-Wang, @VVsssssk, @HanaRo, @JoeyforJoy, @ansonlcy, @filaPro, @jshilong, @Xiangxu-0103, @deleomike

    New Contributors

    • @HanaRo made their first contribution in https://github.com/open-mmlab/mmdetection3d/pull/1379
    • @JoeyforJoy made their first contribution in https://github.com/open-mmlab/mmdetection3d/pull/1368
    • @ansonlcy made their first contribution in https://github.com/open-mmlab/mmdetection3d/pull/1391
    • @deleomike made their first contribution in https://github.com/open-mmlab/mmdetection3d/pull/1383

    Full Changelog: https://github.com/open-mmlab/mmdetection3d/compare/v1.0.0rc1...v1.0.0rc2

  • v1.0.0rc1(Apr 6, 2022)

    Compatibility

    • We have migrated all the mmdet3d ops to mmcv, so they no longer need to be compiled when installing mmdet3d.
    • To fix the imprecise timestamp and optimize its saving method, we reformat the point cloud data during Waymo data conversion. The data conversion time is also optimized significantly by supporting parallel processing. Please re-generate KITTI format Waymo data if necessary. See more details in the compatibility documentation.
    • We update some of the model checkpoints after the refactor of coordinate systems. Please stay tuned for the release of the remaining model checkpoints.

    | | Fully Updated | Partially Updated | In Progress | No Influence |
    |--------------------|:-------------:|:-----------------:|:-----------:|:------------:|
    | SECOND | | ✓ | | |
    | PointPillars | | ✓ | | |
    | FreeAnchor | ✓ | | | |
    | VoteNet | ✓ | | | |
    | H3DNet | ✓ | | | |
    | 3DSSD | | ✓ | | |
    | Part-A2 | ✓ | | | |
    | MVXNet | ✓ | | | |
    | CenterPoint | | | ✓ | |
    | SSN | ✓ | | | |
    | ImVoteNet | ✓ | | | |
    | FCOS3D | | | | ✓ |
    | PointNet++ | | | | ✓ |
    | Group-Free-3D | | | | ✓ |
    | ImVoxelNet | ✓ | | | |
    | PAConv | | | | ✓ |
    | DGCNN | | | | ✓ |
    | SMOKE | | | | ✓ |
    | PGD | | | | ✓ |
    | MonoFlex | | | | ✓ |

    Highlights

    • Migrate all the mmdet3d ops to mmcv
    • Support a parallel Waymo data converter
    • Add ScanNet instance segmentation dataset with metrics
    • Better compatibility for Windows with CI support, op migration, and bug fixes
    • Support loading annotations from Ceph

    New Features

    • Add ScanNet instance segmentation dataset with metrics (#1230)
    • Support different random seeds for different ranks (#1321)
    • Support loading annotations from Ceph (#1325)
    • Support resuming from the latest checkpoint automatically (#1329)
    • Add windows CI (#1345)
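Supporting different random seeds for different ranks (#1321) matters because identical seeds on every GPU make all workers draw the same data augmentations. A common recipe, offsetting a base seed by the process rank, is sketched below; the exact scheme and function name are assumptions, not the mmdet3d implementation:

```python
import random

import numpy as np


def init_random_seed(seed, rank, diff_rank_seed=True):
    """Seed the Python and NumPy RNGs; optionally offset the seed by
    the distributed rank so each worker augments data differently."""
    if diff_rank_seed:
        seed = seed + rank
    random.seed(seed)
    np.random.seed(seed)
    return seed
```

With `diff_rank_seed=False` the old behavior (one shared seed) is recovered, which keeps runs reproducible across world sizes when that is preferred.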

    Improvements

    • Update the table format and OpenMMLab project orders in README.md (#1272, #1283)
    • Migrate all the mmdet3d ops to mmcv (#1240, #1286, #1290, #1333)
    • Add with_plane flag in the KITTI data conversion (#1278)
    • Update instructions and links in the documentation (#1300, #1309, #1319)
    • Support parallel Waymo dataset converter and ground truth database generator (#1327)
    • Add quick installation commands to getting_started.md (#1366)
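The parallel Waymo converter (#1327) speeds up data preparation by fanning per-segment conversion across a worker pool. A minimal sketch of that pattern (the stub converter and file names are hypothetical; the real converter does CPU-bound frame decoding, for which a process pool is the natural choice):

```python
from multiprocessing.pool import ThreadPool


def convert_one(args):
    """Convert a single raw segment. A real converter would decode
    frames and write KITTI-format point clouds and labels; this stub
    just reports what it did."""
    idx, path = args
    return f'{idx:06d}: converted {path}'


def convert_parallel(paths, num_workers=4):
    # map() preserves input order, so output indices stay aligned
    # with the file list even though work runs concurrently.
    with ThreadPool(num_workers) as pool:
        return pool.map(convert_one, list(enumerate(paths)))
```

The same structure lets the ground truth database generator run in parallel, since each segment is independent.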

    Bug Fixes

    • Update nuimages configs to use new nms config style (#1258)
    • Fix the usage of np.long for Windows compatibility (#1270)
    • Fix the incorrect indexing in BasePoints (#1274)
    • Fix the incorrect indexing in the pillar_scatter.forward_single (#1280)
    • Fix unit tests that use GPUs (#1301)
    • Fix incorrect feature dimensions in DynamicPillarFeatureNet caused by previous upgrading of PillarFeatureNet (#1302)
    • Remove the CameraPoints constraint in PointSample (#1314)
    • Fix imprecise timestamps saving of Waymo dataset (#1327)

    Contributors

    A total of 10 developers contributed to this release.

    @ZCMax, @ZwwWayne, @wHao-Wu, @Tai-Wang, @wangruohui, @zjwzcx, @Xiangxu-0103, @EdAyers, @hongye-dev, @zhanggefan

    New Contributors

    • @VVsssssk made their first contribution in https://github.com/open-mmlab/mmdetection3d/pull/1275
    • @Xiangxu-0103 made their first contribution in https://github.com/open-mmlab/mmdetection3d/pull/1300
    • @Subjectivist made their first contribution in https://github.com/open-mmlab/mmdetection3d/pull/1298
    • @EdAyers made their first contribution in https://github.com/open-mmlab/mmdetection3d/pull/1258
    • @hongye-dev made their first contribution in https://github.com/open-mmlab/mmdetection3d/pull/1280
    • @jshilong made their first contribution in https://github.com/open-mmlab/mmdetection3d/pull/1366

    Full Changelog: https://github.com/open-mmlab/mmdetection3d/compare/v1.0.0rc0...v1.0.0rc1

  • v1.0.0rc0(Mar 1, 2022)

    Compatibility

    • We refactor our three coordinate systems to make their rotation directions and origins more consistent, and further remove unnecessary hacks in different datasets and models. Therefore, please re-generate data information or convert the old version to the new one with our provided scripts. We will also provide updated checkpoints in the next version. Please refer to the compatibility documentation for more details.
    • Unify the camera keys for consistent transformation between coordinate systems on different datasets. The modification changes the key names to lidar2img, depth2img, cam2img, etc., for easier understanding. Customized codes using legacy keys may be influenced.
    • The next release will begin to move files of CUDA ops to MMCV. It will influence the way to import related functions. We will not break compatibility but will raise a warning first; please prepare to migrate.
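The coordinate system refactor above unifies rotation directions and origins across the LiDAR, camera, and depth frames. At its core, converting points between two frames is a fixed axis permutation with sign flips. A sketch of LiDAR-to-camera point conversion under the conventional KITTI-style mapping (x_cam = -y_lidar, y_cam = -z_lidar, z_cam = x_lidar) — this particular matrix is an assumption about the convention, not code from the refactor:

```python
import numpy as np

# Conventional LiDAR -> camera axis mapping (KITTI-style):
#   x_cam = -y_lidar, y_cam = -z_lidar, z_cam = x_lidar
LIDAR2CAM = np.array([[0., -1., 0.],
                      [0., 0., -1.],
                      [1., 0., 0.]])


def lidar_to_camera(points):
    """Rotate an N x 3 array of LiDAR-frame points into the camera frame."""
    return points @ LIDAR2CAM.T
```

Making every dataset agree on one such mapping is what allows the dataset- and model-specific hacks mentioned above to be removed.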

    Highlights

    • Support new monocular 3D detectors: PGD, SMOKE, MonoFlex
    • Support a new LiDAR-based detector: PointRCNN
    • Support a new backbone: DGCNN
    • Support 3D object detection on the S3DIS dataset
    • Support compilation on Windows
    • Full benchmark for PAConv on S3DIS
    • Further enhancement for documentation, especially on the Chinese documentation

    New Features

    • Support 3D object detection on the S3DIS dataset (#835)
    • Support PointRCNN (#842, #843, #856, #974, #1022, #1109, #1125)
    • Support DGCNN (#896)
    • Support PGD (#938, #940, #948, #950, #964, #1014, #1065, #1070, #1157)
    • Support SMOKE (#939, #955, #959, #975, #988, #999, #1029)
    • Support MonoFlex (#1026, #1044, #1114, #1115, #1183)
    • Support CPU Training (#1196)

    Improvements

    • Support point sampling based on distance metric (#667, #840)
    • Refactor coordinate systems (#677, #774, #803, #899, #906, #912, #968, #1001)
    • Unify camera keys in PointFusion and transformations between different systems (#791, #805)
    • Refine documentation (#792, #827, #829, #836, #849, #854, #859, #1111, #1113, #1116, #1121, #1132, #1135, #1185, #1193, #1226)
    • Add a script to support benchmark regression (#808)
    • Benchmark PAConvCUDA on S3DIS (#847)
    • Support to download pdf and epub documentation (#850)
    • Change the repeat setting in Group-Free-3D configs to reduce training epochs (#855)
    • Support KITTI AP40 evaluation metric (#927)
    • Add the mmdet3d2torchserve tool for SECOND (#977)
    • Add code-spell pre-commit hook and fix typos (#995)
    • Support the latest numba version (#1043)
    • Set a default seed to use when the random seed is not specified (#1072)
    • Distribute mix-precision models to each algorithm folder (#1074)
    • Add abstract and a representative figure for each algorithm (#1086)
    • Upgrade pre-commit hook (#1088, #1217)
    • Support augmented data and ground truth visualization (#1092)
    • Add local yaw property for CameraInstance3DBoxes (#1130)
    • Lock the required numba version to 0.53.0 (#1159)
    • Support the usage of plane information for KITTI dataset (#1162)
    • Deprecate the support for "python setup.py test" (#1164)
    • Reduce the number of multi-process threads to accelerate training (#1168)
    • Support 3D flip augmentation for semantic segmentation (#1181)
    • Update README format for each model (#1195)
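Among the improvements above, the KITTI AP40 metric (#927) replaces the older 11-point interpolated average precision with 40 recall positions, which reduces the bias of the 11-point approximation. A sketch of the interpolation, assuming the standard R40 thresholds {1/40, 2/40, ..., 1} (the function name is illustrative):

```python
import numpy as np


def kitti_ap(recalls, precisions, num_points=40):
    """Interpolated AP: mean of the max precision at each recall
    threshold. AP40 samples 40 thresholds from 1/40 to 1; the legacy
    AP11 samples 0.0, 0.1, ..., 1.0."""
    recalls = np.asarray(recalls, dtype=float)
    precisions = np.asarray(precisions, dtype=float)
    if num_points == 40:
        thresholds = np.linspace(1.0 / 40, 1.0, 40)
    else:
        thresholds = np.linspace(0.0, 1.0, 11)
    ap = 0.0
    for t in thresholds:
        mask = recalls >= t
        ap += precisions[mask].max() if mask.any() else 0.0
    return ap / len(thresholds)
```

A perfect precision-recall curve yields AP = 1.0 under either sampling, but imperfect curves are scored more finely by the 40-point version.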

    Bug Fixes

    • Fix compiling errors on Windows (#766)
    • Fix the deprecated nms setting in the ImVoteNet config (#828)
    • Use the latest wrap_fp16_model import from mmcv (#861)
    • Remove 2D annotations generation on Lyft (#867)
    • Update index files for the Chinese documentation to be consistent with the English version (#873)
    • Fix the nested list transpose in the CenterPoint head (#879)
    • Fix deprecated pretrained model loading for RegNet (#889)
    • Fix the incorrect dimension indices of rotations and testing config in the CenterPoint test time augmentation (#892)
    • Fix and improve visualization tools (#956, #1066, #1073)
    • Fix PointPillars FLOPs calculation error (#1075)
    • Fix missing dimension information in the SUN RGB-D data generation (#1120)
    • Fix incorrect anchor range settings in the PointPillars config for KITTI (#1163)
    • Fix incorrect model information in the RegNet metafile (#1184)
    • Fix bugs in non-distributed multi-gpu training and testing (#1197)
    • Fix a potential assertion error when generating corners from an empty box (#1212)
    • Upgrade bazel version according to the requirement of Waymo Devkit (#1223)

    Contributors

    A total of 12 developers contributed to this release.

    @THU17cyz, @wHao-Wu, @wangruohui, @Wuziyi616, @filaPro, @ZwwWayne, @Tai-Wang, @DCNSW, @xieenze, @robin-karlsson0, @ZCMax, @Otteri

    New Contributors

    • @Otteri made their first contribution in https://github.com/open-mmlab/mmdetection3d/pull/1070
    • @zeyu-hello made their first contribution in https://github.com/open-mmlab/mmdetection3d/pull/1225
    • @maskjp made their first contribution in https://github.com/open-mmlab/mmdetection3d/pull/1207

    Full Changelog: https://github.com/open-mmlab/mmdetection3d/compare/v0.18.1...v1.0.0rc0

  • v0.18.1(Feb 9, 2022)

    Improvements

    • Support Flip3D augmentation in semantic segmentation task (#1182)
    • Update regnet metafile (#1184)
    • Add point cloud annotation tools introduction in FAQ (#1185)
    • Add missing explanations of cam_intrinsic in the nuScenes dataset doc (#1193)

    Bug Fixes

    • Deprecate the support for "python setup.py test" (#1164)
    • Fix the rotation matrix while rotation axis=0 (#1182)
    • Fix the bug in non-distributed multi-gpu training/testing (#1197)
    • Fix a potential bug when generating corners of empty bounding boxes (#1212)

    Contributors

    A total of 4 developers contributed to this release.

    @ZwwWayne, @ZCMax, @Tai-Wang, @wHao-Wu

    Full Changelog: https://github.com/open-mmlab/mmdetection3d/compare/v0.18.0...v0.18.1

  • v0.18.0(Jan 5, 2022)

    Highlights

    • Update the required minimum version of mmdet and mmseg

    Improvements

    • Use the official markdownlint hook and add codespell hook for pre-committing (#1088)
    • Improve CI operation (#1095, #1102, #1103)
    • Use shared menu content from OpenMMLab's theme and remove duplicated contents from config (#1111)
    • Refactor the structure of documentation (#1113, #1121)
    • Update the required minimum version of mmdet and mmseg (#1147)

    Bug Fixes

    • Fix symlink failure on Windows (#1096)
    • Fix the upper bound of mmcv version in the mminstall requirements (#1104)
    • Fix API documentation compilation and mmcv build errors (#1116)
    • Fix figure links and pdf documentation compilation (#1132, #1135)

    Contributors

    A total of 4 developers contributed to this release.

    @ZwwWayne, @ZCMax, @Tai-Wang, @wHao-Wu

    Full Changelog: https://github.com/open-mmlab/mmdetection3d/compare/v0.17.3...v0.18.0

  • v0.17.3(Dec 6, 2021)

    What's Changed

    • [Fix] Update mmcv version in dockerfile by @wHao-Wu in https://github.com/open-mmlab/mmdetection3d/pull/1036
    • [Fix] Fix the memory-leak problem in init_detector by @Tai-Wang in https://github.com/open-mmlab/mmdetection3d/pull/1045
    • [Fix] Fix default show value in show_result function and a typo in waymo_data_prep by @ZCMax in https://github.com/open-mmlab/mmdetection3d/pull/1034
    • [Fix] Fix incorrect velo indexing when formatting boxes on nuScenes by @Tai-Wang in https://github.com/open-mmlab/mmdetection3d/pull/1049
    • [Enhance] Clean unnecessary custom_imports in entrypoints by @ZCMax in https://github.com/open-mmlab/mmdetection3d/pull/1068
    • [Doc] Add MMFlow into README by @ZCMax in https://github.com/open-mmlab/mmdetection3d/pull/1067
    • Explicitly setting torch.cuda.device at init_model by @aldakata in https://github.com/open-mmlab/mmdetection3d/pull/1056
    • [Fix] Fix PointPillars FLOPs calculation error for master branch by @ZCMax in https://github.com/open-mmlab/mmdetection3d/pull/1076
    • [Enhance] Add mmFewShot in README by @ZCMax in https://github.com/open-mmlab/mmdetection3d/pull/1085
    • Label visualization by @MilkClouds in https://github.com/open-mmlab/mmdetection3d/pull/1050
    • [Enhance] add mmhuman3d in readme by @ZCMax in https://github.com/open-mmlab/mmdetection3d/pull/1094
    • [Enhance] fix mmhuman3d reference by @ZCMax in https://github.com/open-mmlab/mmdetection3d/pull/1100
    • Bump to v0.17.3 by @Tai-Wang in https://github.com/open-mmlab/mmdetection3d/pull/1083

    New Contributors

    • @aldakata made their first contribution in https://github.com/open-mmlab/mmdetection3d/pull/1056
    • @MilkClouds made their first contribution in https://github.com/open-mmlab/mmdetection3d/pull/1050

    Full Changelog: https://github.com/open-mmlab/mmdetection3d/compare/v0.17.2...v0.17.3

  • v0.17.2(Nov 2, 2021)

    Improvements

    • Update Group-Free-3D and FCOS3D bibtex (#985)
    • Update the solutions for incompatibility of pycocotools in the FAQ (#993)
    • Add Chinese documentation for the KITTI (#1003) and Lyft (#1010) dataset tutorial
    • Add the H3DNet checkpoint converter for incompatible keys (#1007)

    Bug Fixes

    • Update mmdetection and mmsegmentation version in the Dockerfile (#992)
    • Fix links in the Chinese documentation (#1015)

    Contributors

    A total of 4 developers contributed to this release.

    @Tai-Wang, @wHao-Wu, @ZwwWayne, @ZCMax

    Full Changelog: https://github.com/open-mmlab/mmdetection3d/compare/v0.17.1...v0.17.2

  • v0.17.1(Oct 8, 2021)

    Highlights

    • Support a faster but non-deterministic version of hard voxelization
    • Completion of dataset tutorials and the Chinese documentation
    • Improve the aesthetics of the documentation format

    Improvements

    • Add Chinese Documentation for training on customized datasets and designing customized models (#729, #820)
    • Support a faster but non-deterministic version of hard voxelization (#904)
    • Update paper titles and code details for metafiles (#917)
    • Add a tutorial for KITTI dataset (#953)
    • Use the PyTorch Sphinx theme to improve the format of documentation (#958)
    • Use the docker to accelerate CI (#971)
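The faster hard voxelization above trades determinism for speed: hard voxelization quantizes points into fixed-size voxels with a cap on points per voxel, and in the fast GPU version the order in which threads claim voxel slots decides which points are kept. A CPU sketch of the operation (deterministic, first-come order; the grid layout and function name are illustrative):

```python
import numpy as np


def hard_voxelize(points, voxel_size, max_points=5):
    """Group N x 3 points into voxels of the given size, keeping at
    most max_points per voxel. Here excess points are dropped in input
    order; on the GPU, thread scheduling decides which points win,
    which is the source of the non-determinism."""
    coords = np.floor(points / voxel_size).astype(np.int64)
    voxels = {}
    for pt, c in zip(points, map(tuple, coords)):
        bucket = voxels.setdefault(c, [])
        if len(bucket) < max_points:
            bucket.append(pt)
    return voxels
```

For training, the non-deterministic version is usually acceptable since dropped points differ only at over-full voxels; the deterministic version remains available when exact reproducibility matters.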

    Bug Fixes

    • Fix the Sphinx version used in the documentation (#902)
    • Fix a dynamic scatter bug that discards the first voxel by mistake when all input points are valid (#915)
    • Fix the inconsistent variable names used in the unit test for voxel generator (#919)
    • Upgrade to use build_prior_generator to replace the legacy build_anchor_generator (#941)
    • Fix a minor bug caused by a too-small difference set in the FreeAnchor Head (#944)

    Contributors

    A total of 8 developers contributed to this release.

    @DCNSW, @zhanggefan, @mickeyouyou, @ZCMax, @wHao-Wu, @tojimahammatov, @xiliu8006, @Tai-Wang

    New Contributors

    • @mickeyouyou made their first contribution in https://github.com/open-mmlab/mmdetection3d/pull/920
    • @tojimahammatov made their first contribution in https://github.com/open-mmlab/mmdetection3d/pull/944

    Full Changelog: https://github.com/open-mmlab/mmdetection3d/compare/v0.17.0...v0.17.1

  • v0.17.0(Sep 2, 2021)

    Compatibility

    • Unify the camera keys for consistent transformation between coordinate systems on different datasets. The modification changes the key names to lidar2img, depth2img, cam2img, etc. for easier understanding. Customized codes using legacy keys may be influenced.
    • The next release will begin to move files of CUDA ops to MMCV. It will influence the way to import related functions. We will not break compatibility but will raise a warning first; please prepare to migrate.
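The unified camera keys make the common projection step uniform across datasets: lidar2img is a 4 x 4 matrix taking homogeneous LiDAR points to image coordinates, followed by a perspective divide. A minimal sketch of how such a key would be consumed (the helper is illustrative, not a mmdet3d function):

```python
import numpy as np


def project_lidar_to_img(points, lidar2img):
    """Project N x 3 LiDAR points to pixel coordinates with a 4 x 4
    lidar2img matrix. Returns (N x 2 pixels, N depths)."""
    n = points.shape[0]
    hom = np.hstack([points, np.ones((n, 1))])  # N x 4 homogeneous
    cam = hom @ lidar2img.T                     # N x 4 in image space
    uv = cam[:, :2] / cam[:, 2:3]               # perspective divide by depth
    return uv, cam[:, 2]
```

With every dataset exposing the same lidar2img / depth2img / cam2img keys, one projection routine serves all of them.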

    Highlights

    • Support 3D object detection on the S3DIS dataset
    • Support compilation on Windows
    • Full benchmark for PAConv on S3DIS
    • Further enhancement for documentation, especially on the Chinese documentation

    New Features

    • Support 3D object detection on the S3DIS dataset (#835)

    Improvements

    • Support point sampling based on distance metric (#667, #840)
    • Update PointFusion to support unified camera keys (#791)
    • Add Chinese documentation for customized dataset (#792), data pipeline (#827), customized runtime (#829), 3D Detection on ScanNet (#836), nuScenes (#854) and Waymo (#859)
    • Unify camera keys used in the transformation between different systems (#805)
    • Add a script to support benchmark regression (#808)
    • Benchmark PAConvCUDA on S3DIS (#847)
    • Add a tutorial for 3D detection on the Lyft dataset (#849)
    • Support to download pdf and epub documentation (#850)
    • Change the repeat setting in Group-Free-3D configs to reduce training epochs (#855)

    Bug Fixes

    • Fix compiling errors on Windows (#766)
    • Fix the deprecated NMS setting in the ImVoteNet config (#828)
    • Use the latest wrap_fp16_model import from MMCV (#861)
    • Remove 2D annotations generation on Lyft (#867)
    • Update index files for the Chinese documentation to be consistent with the English version (#873)
    • Fix the nested list transpose in the CenterPoint head (#879)
    • Fix deprecated pretrained model loading for RegNet (#889)

    Contributors

    A total of 11 developers contributed to this release.

    @THU17cyz, @wHao-Wu, @wangruohui, @Wuziyi616, @filaPro, @ZwwWayne, @Tai-Wang, @DCNSW, @xieenze, @robin-karlsson0, @ZCMax

    New Contributors

    • @wangruohui made their first contribution in https://github.com/open-mmlab/mmdetection3d/pull/766
    • @xieenze made their first contribution in https://github.com/open-mmlab/mmdetection3d/pull/872
    • @robin-karlsson0 made their first contribution in https://github.com/open-mmlab/mmdetection3d/pull/879

    Full Changelog: https://github.com/open-mmlab/mmdetection3d/compare/v0.16.0...v0.17.0

  • v0.16.0(Aug 3, 2021)

    Compatibility

    • Remove the rotation and dimension hack in the monocular 3D detection on nuScenes by applying corresponding transformation in the pre-processing and post-processing. The modification only influences nuScenes coco-style json files. Please re-run the data preparation scripts if necessary. See more details in the PR #744.
    • Add a new pre-processing module for the ScanNet dataset in order to support multi-view detectors. Please run the updated scripts to extract the RGB data and its annotations. See more details in the PR #696.

    Highlights

    • Support to use MIM with pip installation
    • Support PAConv models and benchmarks on S3DIS
    • Enhance the documentation especially on dataset tutorials

    New Features

    • Support RGB images on ScanNet for multi-view detectors (#696)
    • Support FLOPs and number of parameters calculation (#736)
    • Support to use MIM with pip installation (#782)
    • Support PAConv models and benchmarks on the S3DIS dataset (#783, #809)

    Improvements

    • Refactor Group-Free-3D to make it inherit BaseModule from MMCV (#704)
    • Modify the initialization methods of FCOS3D to be consistent with the refactored approach (#705)
    • Benchmark the Group-Free-3D models on ScanNet (#710)
    • Add Chinese Documentation for Getting Started (#725), FAQ (#730), Model Zoo (#735), Demo (#745), Quick Run (#746), Data Preparation (#787) and Configs (#788)
    • Add documentation for semantic segmentation on ScanNet and S3DIS (#743, #747, #806, #807)
    • Add a parameter max_keep_ckpts to limit the maximum number of saved Group-Free-3D checkpoints (#765)
    • Add documentation for 3D detection on SUN RGB-D and nuScenes (#770, #793)
    • Remove mmpycocotools in the Dockerfile (#785)

    Bug Fixes

    • Fix versions of OpenMMLab dependencies (#708)
    • Convert rt_mat to torch.Tensor in coordinate transformation for compatibility (#709)
    • Fix the bev_range initialization in ObjectRangeFilter according to the gt_bboxes_3d type (#717)
    • Fix Chinese documentation and incorrect doc format due to the incompatible Sphinx version (#718)
    • Fix a potential bug when setting interval == 1 in analyze_logs.py (#720)
    • Update the structure of Chinese Documentation (#722)
    • Fix FCOS3D FPN BC-Breaking caused by the code refactoring in MMDetection (#739)
    • Fix wrong in_channels when with_distance=True in the Dynamic VFE Layers (#749)
    • Fix the dimension and yaw hack of FCOS3D on nuScenes (#744, #794, #795, #818)
    • Fix the missing default bbox_mode in the show_multi_modality_result (#825)

    Contributors

    A total of 12 developers contributed to this release.

    @yinchimaoliang, @gopi231091, @filaPro, @ZwwWayne, @ZCMax, @hjin2902, @wHao-Wu, @Wuziyi616, @xiliu8006, @THU17cyz, @DCNSW, @Tai-Wang

    New Contributors

    • @gopi231091 made their first contribution in https://github.com/open-mmlab/mmdetection3d/pull/709

    Full Changelog: https://github.com/open-mmlab/mmdetection3d/compare/v0.15.0...v0.16.0

  • v0.15.0(Jul 2, 2021)

    Highlights

    • Support PAConv
    • Support monocular/multi-view 3D detector ImVoxelNet on KITTI
    • Support Transformer-based 3D detection method Group-Free-3D on ScanNet
    • Add documentation for tasks including LiDAR-based 3D detection, vision-only 3D detection and point-based 3D semantic segmentation
    • Add dataset documents like ScanNet
    • Upgrade to use MMCV-full v1.3.8

    Compatibility

    In order to fix the problem that the priority of EvalHook was too low, all hook priorities have been re-adjusted in MMCV 1.3.8, so MMDetection 2.14.0 needs to rely on the latest MMCV 1.3.8. For related information, please refer to #1120; for related issues, please refer to #5343.

    New Features

    • Support Group-Free-3D on ScanNet (#539)
    • Support PAConv modules (#598, #599)
    • Support ImVoxelNet on KITTI (#627, #654)

    Improvements

    • Add unit tests for pipeline functions LoadImageFromFileMono3D, ObjectNameFilter and ObjectRangeFilter (#615)
    • Enhance IndoorPatchPointSample (#617)
    • Refactor model initialization methods based on MMCV (#622)
    • Add Chinese docs (#629)
    • Add documentation for LiDAR-based 3D detection (#642)
    • Unify intrinsic and extrinsic matrices for all datasets (#653)
    • Add documentation for point-based 3D semantic segmentation (#663)
    • Add documentation of ScanNet for 3D detection (#664)
    • Refine docs for tutorials (#666)
    • Add documentation for vision-only 3D detection (#669)
    • Refine docs for Quick Run and Useful Tools (#686)

    Bug Fixes

    • Fix the bug of BackgroundPointsFilter using the bottom center of ground truth (#609)
    • Fix LoadMultiViewImageFromFiles to unravel stacked multi-view images to list to be consistent with DefaultFormatBundle (#611)
    • Fix the potential bug in analyze_logs when the training resumes from a checkpoint or is stopped before evaluation (#634)
    • Fix test commands in docs and make some refinements (#635)
    • Fix wrong config paths in unit tests (#641)

    Full Changelog: https://github.com/open-mmlab/mmdetection3d/compare/v0.14.0...v0.15.0

  • v0.14.0(Jun 1, 2021)

    Highlights

    • Support the point cloud segmentation method PointNet++

    New Features

    • Support PointNet++ (#479, #528, #532, #541)
    • Support RandomJitterPoints transform for point cloud segmentation (#584)
    • Support RandomDropPointsColor transform for point cloud segmentation (#585)

    Improvements

    • Move the point alignment of ScanNet from data pre-processing to pipeline (#439, #470)
    • Add compatibility document to provide detailed descriptions of BC-breaking changes (#504)
    • Add MMSegmentation installation requirement (#535)
    • Support points rotation even without bounding box in GlobalRotScaleTrans for point cloud segmentation (#540)
    • Support visualization of detection results and dataset browse for nuScenes Mono-3D dataset (#542, #582)
    • Support faster implementation of KNN (#586)
    • Support RegNetX models on Lyft dataset (#589)
    • Remove a useless parameter label_weight from segmentation datasets including Custom3DSegDataset, ScanNetSegDataset and S3DISSegDataset (#607)
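The faster KNN above (#586) accelerates a simple operation: for each query point, find the indices of its k nearest reference points. A brute-force NumPy sketch of the same semantics (the accelerated op is a CUDA kernel; this illustrative version is O(M*N)):

```python
import numpy as np


def knn(query, points, k):
    """For each of M query points (M x 3), return the indices of its
    k nearest reference points (N x 3) by squared Euclidean distance."""
    d2 = ((query[:, None, :] - points[None, :, :]) ** 2).sum(-1)  # M x N
    return np.argsort(d2, axis=1)[:, :k]
```

KNN like this underpins the grouping stages of point-based networks such as PointNet++, which is why a faster implementation pays off during training.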

    Bug Fixes

    • Fix a corrupted lidar data file in Lyft dataset in data_preparation (#546)
    • Fix evaluation bugs in nuScenes and Lyft dataset (#549)
    • Fix converting points between coordinate systems with a specific transformation matrix in coord_3d_mode.py (#556)
    • Support PointPillars models on Lyft dataset (#578)
    • Fix the bug of demo with pre-trained VoteNet model on ScanNet (#600)

    New Contributors

    • @haotian-liu made their first contribution in https://github.com/open-mmlab/mmdetection3d/pull/515
    • @JSchuurmans made their first contribution in https://github.com/open-mmlab/mmdetection3d/pull/565

    Full Changelog: https://github.com/open-mmlab/mmdetection3d/compare/v0.13.0...v0.14.0

  • v0.13.0(May 1, 2021)

    Highlights

    • Support a monocular 3D detection method FCOS3D
    • Support ScanNet and S3DIS semantic segmentation dataset
    • Enhance visualization tools for dataset browsing and demos, including support for visualizing multi-modality data and point cloud segmentation results

    New Features

    • Support ScanNet semantic segmentation dataset (#390)
    • Support monocular 3D detection on nuScenes (#392)
    • Support multi-modality visualization (#405)
    • Support nuImages visualization (#408)
    • Support monocular 3D detection on KITTI (#415)
    • Support online visualization of semantic segmentation results (#416)
    • Support ScanNet test results submission to online benchmark (#418)
    • Support S3DIS data pre-processing and dataset class (#433)
    • Support FCOS3D (#436, #442, #482, #484)
    • Support dataset browse for multiple types of datasets (#467)
    • Adding paper-with-code (PWC) metafile for each model in the model zoo (#485)

    Improvements

    • Support dataset browsing for SUNRGBD, ScanNet or KITTI points and detection results (#367)
    • Add the pipeline to load data using file client (#430)
    • Support to customize the type of runner (#437)
    • Make pipeline functions process points and masks simultaneously when sampling points (#444)
    • Add waymo unit tests (#455)
    • Split the visualization of projecting points onto image from that for only points (#480)
    • Efficient implementation of PointSegClassMapping (#489)
    • Use the new model registry from mmcv (#495)

    Bug Fixes

    • Fix PyTorch 1.8 compilation issue in the scatter_points_cuda.cu (#404)
    • Fix dynamic_scatter errors triggered by empty point input (#417)
    • Fix the bug of missing points caused by using break incorrectly in the voxelization (#423)
    • Fix the missing coord_type in the waymo dataset config (#441)
    • Fix errors in four unittest functions of configs, test_detectors.py, test_heads.py (#453)
    • Fix 3DSSD training errors and simplify configs (#462)
    • Clamp 3D votes projections to image boundaries in ImVoteNet (#463)
    • Update out-of-date names of pipelines in the config of pointpillars benchmark (#474)
    • Fix the lack of a placeholder when unpacking RPN targets in the h3d_bbox_head.py (#508)
    • Fix the incorrect value of K when creating pickle files for SUN RGB-D (#511)

    New Contributors

    • @gillbam made their first contribution in https://github.com/open-mmlab/mmdetection3d/pull/423
    • @Divadi made their first contribution in https://github.com/open-mmlab/mmdetection3d/pull/463
    • @virusapex made their first contribution in https://github.com/open-mmlab/mmdetection3d/pull/511

    Full Changelog: https://github.com/open-mmlab/mmdetection3d/compare/v0.12.0...v0.13.0

  • v0.12.0(Apr 2, 2021)

    Highlights

    • Support a new multi-modality method ImVoteNet.
    • Support PyTorch 1.7 and 1.8
    • Refactor the structure of tools and train.py/test.py

    Bug Fixes

    • Fix missing keys coord_type in database sampler config (#345)
    • Rename H3DNet configs (#349)
    • Fix CI by using ubuntu 18.04 in github workflow (#350)
    • Add assertions to avoid 4-dim points being input to points_in_boxes (#357)
    • Fix the SECOND results on Waymo in the corresponding README (#363)
    • Fix the incorrectly adopted pipeline when adding val to workflow (#370)
    • Fix a potential bug when indices used in the backwarding in ThreeNN (#377)
    • Fix a compilation error triggered by scatter_points_cuda.cu in PyTorch 1.7 (#393)

    New Features

    • Support LiDAR-based semantic segmentation metrics (#332)
    • Support ImVoteNet (#352, #384)
    • Support the KNN GPU operation (#360, #371)
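The LiDAR-based semantic segmentation metrics above typically center on mean IoU: per-class IoU is TP / (TP + FP + FN) computed from an accumulated confusion matrix, averaged over classes that appear. A minimal sketch of that computation (the function name and the choice to skip absent classes are assumptions):

```python
import numpy as np


def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union for semantic segmentation labels.

    Builds a confusion matrix cm[gt, pred], then for each class k:
    IoU_k = cm[k, k] / (row_sum_k + col_sum_k - cm[k, k]).
    Classes absent from both pred and gt are excluded from the mean.
    """
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(cm, (gt, pred), 1)  # unbuffered scatter-add
    tp = np.diag(cm)
    denom = cm.sum(axis=0) + cm.sum(axis=1) - tp
    valid = denom > 0
    return (tp[valid] / denom[valid]).mean()
```

Accumulating the confusion matrix across scans, rather than averaging per-scan IoUs, keeps the metric insensitive to how points are split between files.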

    Improvements

    • Add FAQ for common problems in the documentation (#333)
    • Refactor the structure of tools (#339)
    • Refactor train.py and test.py (#343)
    • Support demo on nuScenes (#353)
    • Add 3DSSD checkpoints (#359)
    • Update the Bibtex of CenterPoint (#368)
    • Add citation format and reference to other OpenMMLab projects in the README (#374)
    • Upgrade the mmcv version requirements (#376)
    • Add numba and numpy version requirements in FAQ (#379)
    • Avoid unnecessary for-loop execution of VFE layer creation (#389)
    • Update SUNRGBD dataset documentation to stress the requirements for training ImVoteNet (#391)
    • Modify vote head to support 3DSSD (#396)

    New Contributors

    • @tianweiy made their first contribution in https://github.com/open-mmlab/mmdetection3d/pull/368
    • @happynear made their first contribution in https://github.com/open-mmlab/mmdetection3d/pull/363
    • @zehuichen123 made their first contribution in https://github.com/open-mmlab/mmdetection3d/pull/389

    Full Changelog: https://github.com/open-mmlab/mmdetection3d/compare/v0.11.0...v0.12.0

  • v0.11.0(Mar 2, 2021)

    Highlights

    • Support more friendly visualization interfaces based on Open3D
    • Support a faster and more memory-efficient implementation of DynamicScatter
    • Refactor unit tests and details of configs

    Bug Fixes

    • Fix an unsupported bias setting in the unit test for CenterPoint head (#304)
    • Fix errors due to typos in the CenterPoint head (#308)
    • Fix a minor bug in points_in_boxes.py when tensors are not on the same device (#317)

    New Features

    • Support new visualization methods based on Open3D (#284, #323)

    Improvements

    • Refactor unit tests (#303)
    • Move the key train_cfg and test_cfg into the model configs (#307)
    • Update README with Chinese version and instructions for getting started (#310, #316)
    • Support a faster and more memory-efficient implementation of DynamicScatter (#318, #326)
    • Fix warning of deprecated usages of nonzero during training with PyTorch 1.6 (#330)

    Full Changelog: https://github.com/open-mmlab/mmdetection3d/compare/v0.10.0...v0.11.0

  • v0.10.0(Feb 1, 2021)

    Highlights

    • Preliminary release of API for SemanticKITTI dataset.
    • Documentation and demo enhancement for better user experience.
    • Fix a number of underlying minor bugs and add some corresponding important unit tests.

    Bug Fixes

    • Fix the issue of unpacking size in furthest_point_sample.py (#248)
    • Fix bugs for 3DSSD triggered by empty ground truths (#258)
    • Remove models without checkpoints in model zoo statistics of documentation (#259)
    • Fix some unclear installation instructions in getting_started.md (#269)
    • Fix relative paths/links in the documentation (#271)
    • Fix a minor bug in scatter_points_cuda.cu when num_features != 4 (#275)
    • Fix the bug about missing text files when testing on KITTI (#278)
    • Fix issues caused by inplace modification of tensors in BaseInstance3DBoxes (#283)
    • Fix log analysis for evaluation and adjust the documentation accordingly (#285)

    New Features

    • Support SemanticKITTI dataset preliminarily (#287)

    Improvements

    • Add tags to README in configurations for specifying different uses (#262)
    • Update instructions for evaluation metrics in the documentation (#265)
    • Add nuImages entry in README.md and gif demo (#266, #268)
    • Add unit test for voxelization (#275)

    New Contributors

    • @congee524 made their first contribution in https://github.com/open-mmlab/mmdetection3d/pull/259
    • @wikiwen made their first contribution in https://github.com/open-mmlab/mmdetection3d/pull/269
    • @EricWiener made their first contribution in https://github.com/open-mmlab/mmdetection3d/pull/248

    Full Changelog: https://github.com/open-mmlab/mmdetection3d/compare/v0.9.0...v0.10.0

  • v0.9.0(Jan 4, 2021)

    Highlights

    • Documentation refactoring with better structure, especially about how to implement new models and customized datasets.
    • Improve compatibility with the refactored point structure through bug fixes in ground truth sampling.

    Bug Fixes

    • Fix point structure related bugs in ground truth sampling (#211)
    • Fix loading points in ground truth sampling augmentation on nuScenes (#221)
    • Fix channel setting in the SeparateHead of CenterPoint (#228)
    • Fix evaluation for indoor 3D detection when predictions contain fewer classes (#231)
    • Remove unreachable lines in nuScenes data converter (#235)
    • Minor adjustments to the NumPy implementation of perspective projection and the prediction filtering criterion in KITTI evaluation (#241)

    Improvements

    • Documentation refactoring (#242)

    New Contributors

    • @meng-zha made their first contribution in https://github.com/open-mmlab/mmdetection3d/pull/211
    • @zhezhao1989 made their first contribution in https://github.com/open-mmlab/mmdetection3d/pull/228

    Full Changelog: https://github.com/open-mmlab/mmdetection3d/compare/v0.8.0...v0.9.0

  • v0.8.0(Nov 30, 2020)

    Highlights

    • Refactor the points structure with a clearer and more consistent implementation.
    • Support axis-aligned IoU loss for VoteNet with better performance.
    • Update and enhance SECOND benchmark on Waymo.

    New Features

    • Support axis-aligned IoU loss for VoteNet (#194)
    • Support a points structure for consistent processing of all point-related representations (#196, #204)
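    The axis-aligned IoU behind the new VoteNet loss can be sketched in NumPy as follows (our own minimal illustration; the loss itself is typically computed as 1 - IoU):

```python
import numpy as np

def axis_aligned_iou_3d(box_a, box_b):
    """IoU of two axis-aligned 3D boxes given as
    (x_min, y_min, z_min, x_max, y_max, z_max) arrays."""
    lo = np.maximum(box_a[:3], box_b[:3])  # intersection lower corner
    hi = np.minimum(box_a[3:], box_b[3:])  # intersection upper corner
    inter = np.prod(np.clip(hi - lo, 0, None))
    vol_a = np.prod(box_a[3:] - box_a[:3])
    vol_b = np.prod(box_b[3:] - box_b[:3])
    return inter / (vol_a + vol_b - inter)
```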

    Improvements

    • Enhance SECOND benchmark on Waymo with stronger baselines. (#166)
    • Add model zoo statistics and polish the documentation. (#201)
  • v0.7.0(Nov 1, 2020)

    Highlights

    • Support a new method SSN with benchmarks on nuScenes and Lyft datasets.
    • Update benchmarks for SECOND on Waymo, CenterPoint with TTA on nuScenes and models with mixed precision training on KITTI and nuScenes.
    • Support semantic segmentation on nuImages and provide HTC models with configurations and performance for reference.

    Bug Fixes

    • Fix incorrect code weights in anchor3d_head when introducing mixed precision training (#173)
    • Fix the incorrect label mapping on nuImages dataset (#155)

    New Features

    • Modify the primitive head to support the SUN RGB-D dataset setting (#136)
    • Support semantic segmentation and HTC with models for reference on nuImages dataset (#155)
    • Support SSN on nuScenes and Lyft datasets (#147, #174, #166, #182)
    • Support double flip for test time augmentation of CenterPoint with updated benchmark (#143)

    Improvements

    • Update SECOND benchmark with configurations for reference on Waymo (#166)
    • Delete checkpoints on Waymo to comply with its specific license agreement (#180)
    • Update models and instructions with mixed precision training on KITTI and nuScenes (#178)
  • v0.6.1(Oct 11, 2020)

    Highlights

    • Support mixed precision training of voxel-based methods
    • Support docker with PyTorch 1.6.0
    • Update baseline configs and results (CenterPoint on nuScenes and PointPillars on Waymo with full dataset)
    • Switch model zoo to download.openmmlab.com

    Bug Fixes

    • Fix a visualization bug in the multi-batch case (#120)
    • Fix bugs in DCN unit test (#130)
    • Fix DCN bias bug in CenterPoint (#137)
    • Fix dataset mapping in the evaluation of nuScenes mini dataset (#140)
    • Fix origin initialization in CameraInstance3DBoxes (#148, #150)
    • Correct documentation link in the getting_started.md (#159)
    • Fix model save path bug in gather_models.py (#153)
    • Fix image padding shape bug in PointFusion (#162)

    New Features

    • Support the dataset pipeline VoxelBasedPointSampler to sample multi-sweep points based on voxelization (#125)
    • Support mixed precision training of voxel-based methods (#132)
    • Support docker with PyTorch 1.6.0 (#160)

    Improvements

    • Reduce requirements for users who do not need Waymo (#121)
    • Switch model zoo to download.openmmlab.com (#126)
    • Update docs related to Waymo (#128)
    • Add version assertion in the init file (#129)
    • Add evaluation interval setting for CenterPoint (#131)
    • Add unit test for CenterPoint (#133)
    • Update PointPillars baselines on Waymo with full dataset (#142)
    • Update CenterPoint results with models and logs (#154)
  • v0.6.0(Sep 20, 2020)

    Highlights

    • Support new methods H3DNet, 3DSSD, CenterPoint.
    • Support new dataset Waymo (with PointPillars baselines) and nuImages (with Mask R-CNN and Cascade Mask R-CNN baselines).
    • Support Batch Inference
    • Support PyTorch 1.6
    • Publish the mmdet3d package to PyPI starting from v0.5.0; you can install mmdet3d via pip install mmdet3d.

    Backwards Incompatible Changes

    • Support Batch Inference (#95, #103, #116): MMDetection3D v0.6.0 migrates to support batch inference based on MMDetection >= v2.4.0. This change influences all the test APIs in MMDetection3D and downstream codebases.
    • Start to use collect environment function from MMCV (#113): MMDetection3D v0.6.0 migrates to use collect_env function in MMCV. get_compiler_version and get_compiling_cuda_version compiled in mmdet3d.ops.utils are removed. Please import these two functions from mmcv.ops.
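    The import migration described above is a one-line change (wrapped in a try/except here only so the sketch runs even without mmcv installed):

```python
# Before v0.6.0 these two helpers were compiled into mmdet3d.ops.utils;
# from v0.6.0 on, import them from MMCV instead.
try:
    from mmcv.ops import get_compiler_version, get_compiling_cuda_version
except ImportError:  # mmcv not installed in this sketch environment
    get_compiler_version = get_compiling_cuda_version = None
```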

    Bug Fixes

    • Rename CosineAnealing to CosineAnnealing (#57)
    • Fix a device-inconsistency bug in 3D IoU computation (#69)
    • Fix a minor bug in json2csv of lyft dataset (#78)
    • Add missed test data for pointnet modules (#85)
    • Fix use_valid_flag bug in CustomDataset (#106)

    New Features

    • Support nuImages dataset by converting them into coco format and release Mask R-CNN and Cascade Mask R-CNN baseline models (#91, #94)
    • Support to publish to PyPI in github-action (#17, #19, #25, #39, #40)
    • Support CBGSDataset and make it generally applicable to all the supported datasets (#75, #94)
    • Support H3DNet and release models on ScanNet dataset (#53, #58, #105)
    • Support Fusion Point Sampling used in 3DSSD (#66)
    • Add BackgroundPointsFilter to filter background points in data pipeline (#84)
    • Support pointnet2 with multi-scale grouping in backbone and refactor pointnets (#82)
    • Support dilated ball query used in 3DSSD (#96)
    • Support 3DSSD and release models on KITTI dataset (#83, #100, #104)
    • Support CenterPoint and release models on nuScenes dataset (#49, #92)
    • Support Waymo dataset and release PointPillars baseline models (#118)
    • Allow LoadPointsFromMultiSweeps to pad empty sweeps and select multiple sweeps randomly (#67)

    Improvements

    • Fix all warnings and bugs in PyTorch 1.6.0 (#70, #72)
    • Update issue templates (#43)
    • Update unit tests (#20, #24, #30)
    • Update documentation for using ply format point cloud data (#41)
    • Use points loader to load point cloud data in ground truth (GT) samplers (#87)
    • Unify version file of OpenMMLab projects by using version.py (#112)
    • Remove unnecessary data preprocessing commands of SUN RGB-D dataset (#110)