OpenPCDet Toolbox for LiDAR-based 3D Object Detection.

OpenPCDet

OpenPCDet is a clear, simple, self-contained open source project for LiDAR-based 3D object detection.

It is also the official code release of PointRCNN, Part-A^2 net, PV-RCNN, and Voxel R-CNN.

Overview

Changelog

[2021-06-08] Added support for the voxel-based 3D object detection model Voxel R-CNN

[2021-05-14] Added support for the monocular 3D object detection model CaDDN

[2020-11-27] Bug fixed: please re-prepare the validation infos of the Waymo dataset (version 1.2) if you would like to use our provided Waymo evaluation tool (see PR). Note that you do not need to re-prepare the training data or the ground-truth database.

[2020-11-10] NEW: The Waymo Open Dataset is now supported, with state-of-the-art results. Currently we provide the configs and results of SECOND, PartA2, and PV-RCNN on the Waymo Open Dataset, and more models can easily be supported by modifying their dataset configs.

[2020-08-10] Bug fixed: the provided NuScenes models have been updated to fix the loading bugs. Please re-download them if you need to use the pretrained NuScenes models.

[2020-07-30] OpenPCDet v0.3.0 is released.

[2020-07-17] Added simple visualization code and a quick demo to test with custom data.

[2020-06-24] OpenPCDet v0.2.0 is released with a brand-new structure to support more models and datasets.

[2020-03-16] OpenPCDet v0.1.0 is released.

Introduction

What does the OpenPCDet toolbox do?

Note that we have upgraded PCDet from v0.1 to v0.2 with a brand-new structure to support various datasets and models.

OpenPCDet is a general PyTorch-based codebase for 3D object detection from point clouds. It currently supports multiple state-of-the-art 3D object detection methods, with highly refactored code for both one-stage and two-stage 3D detection frameworks.

Based on the OpenPCDet toolbox, we ranked first among all LiDAR-only methods in the 3D Detection, 3D Tracking, and Domain Adaptation tracks of the Waymo Open Dataset Challenge, and the Waymo-related models will be released to OpenPCDet soon.

We are actively updating this repo, and more datasets and models will be supported soon. Contributions are also welcome.

OpenPCDet design pattern

  • Data-model separation with a unified point cloud coordinate system, for easy extension to custom datasets.

  • Unified 3D box definition: (x, y, z, dx, dy, dz, heading); see the sketch after this list.

  • Flexible and clear model structure to easily support various 3D detection models.

  • Support for various models within one framework.
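
A minimal sketch of this unified box format (an illustrative example, not code from the repo):

    import numpy as np

    # Unified OpenPCDet box: (x, y, z, dx, dy, dz, heading)
    #   (x, y, z)    -> box center in the unified LiDAR frame
    #   (dx, dy, dz) -> box size along the x, y, z axes
    #   heading      -> rotation angle around the z-axis
    gt_boxes = np.array([
        [12.4, -3.1, -0.8, 4.2, 1.8, 1.6, 0.3],  # e.g. one car
    ], dtype=np.float32)  # shape (N, 7); extra columns may append class labels, etc.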

Currently Supported Features

  • Support for both one-stage and two-stage 3D object detection frameworks
  • Support for distributed training & testing with multiple GPUs and multiple machines
  • Support for multiple heads on different scales to detect different classes
  • Support for stacked-version set abstraction to encode varying numbers of points in different scenes
  • Support for Adaptive Training Sample Selection (ATSS) for target assignment
  • Support for RoI-aware point cloud pooling & RoI-grid point cloud pooling
  • Support for GPU 3D IoU calculation and rotated NMS (see the sketch below)
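
As an illustration of the last point, the GPU 3D IoU op consumes (N, 7) boxes in the unified format. A minimal sketch, assuming pcdet is installed with its CUDA ops built (the call matches its use in the tracebacks quoted in the comments below):

    import torch
    from pcdet.ops.iou3d_nms import iou3d_nms_utils

    boxes_a = torch.rand(4, 7).cuda()  # (x, y, z, dx, dy, dz, heading)
    boxes_b = torch.rand(6, 7).cuda()
    iou3d = iou3d_nms_utils.boxes_iou3d_gpu(boxes_a, boxes_b)  # (4, 6) pairwise IoU matrix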

Model Zoo

KITTI 3D Object Detection Baselines

Selected supported methods are shown in the table below. The results are the 3D detection performance (moderate difficulty) on the val split of the KITTI dataset.

  • All models are trained with 8 GTX 1080Ti GPUs and are available for download.
  • The training time is measured with 8 TITAN XP GPUs and PyTorch 1.5.
Model               Training time  Car@R11  Pedestrian@R11  Cyclist@R11  Download
PointPillar         ~1.2 hours     77.28    52.29           62.68        model-18M
SECOND              ~1.7 hours     78.62    52.98           67.15        model-20M
SECOND-IoU          -              79.09    55.74           71.31        model
PointRCNN           ~3 hours       78.70    54.41           72.11        model-16M
PointRCNN-IoU       ~3 hours       78.75    58.32           71.34        model-16M
Part-A^2-Free       ~3.8 hours     78.72    65.99           74.29        model-226M
Part-A^2-Anchor     ~4.3 hours     79.40    60.05           69.90        model-244M
PV-RCNN             ~5 hours       83.61    57.90           70.47        model-50M
Voxel R-CNN (Car)   ~2.2 hours     84.54    -               -            model-28M
CaDDN               ~15 hours      21.38    13.02           9.76         model-774M

NuScenes 3D Object Detection Baselines

All models are trained with 8 GTX 1080Ti GPUs and are available for download.

Model                     mATE   mASE   mAOE   mAVE   mAAE   mAP    NDS    Download
PointPillar-MultiHead     33.87  26.00  32.07  28.74  20.15  44.63  58.23  model-23M
SECOND-MultiHead (CBGS)   31.15  25.51  26.64  26.26  20.46  50.59  62.29  model-35M

Waymo Open Dataset Baselines

We provide the DATA_CONFIG.SAMPLED_INTERVAL setting for the Waymo Open Dataset (WOD) to subsample frames for training and evaluation, so you can also experiment with WOD even with limited GPU resources by setting a larger DATA_CONFIG.SAMPLED_INTERVAL (for example, an interval of 5 keeps every 5th frame, i.e. ~20% of the data).

By default, all models are trained on 20% of the training samples (~32k frames) with 8 GTX 1080Ti GPUs, and each cell below reports mAP/mAPH calculated with the official Waymo evaluation metrics on the whole validation set (version 1.2).

Model             Vec_L1        Vec_L2        Ped_L1        Ped_L2        Cyc_L1        Cyc_L2
SECOND            68.03/67.44   59.57/59.04   61.14/50.33   53.00/43.56   54.66/53.31   52.67/51.37
Part-A^2-Anchor   71.82/71.29   64.33/63.82   63.15/54.96   54.24/47.11   65.23/63.92   62.61/61.35
PV-RCNN           74.06/73.38   64.99/64.38   62.66/52.68   53.80/45.14   63.32/61.71   60.72/59.18

We cannot provide the above pretrained models due to the Waymo Dataset License Agreement, but you can easily achieve similar performance by training with the default configs.

Other datasets

More datasets are on the way.

Installation

Please refer to INSTALL.md for the installation of OpenPCDet.

Quick Demo

Please refer to DEMO.md for a quick demo to test with a pretrained model and visualize the predicted results on your custom data or the original KITTI data.

Getting Started

Please refer to GETTING_STARTED.md to learn more about how to use this project.

License

OpenPCDet is released under the Apache 2.0 license.

Acknowledgement

OpenPCDet is an open-source project for LiDAR-based 3D scene perception that supports multiple LiDAR-based perception models, as shown above. Some parts of PCDet are learned from the officially released code of the supported methods above. We would like to thank the authors for their proposed methods and official implementations.

We hope that this repo could serve as a strong and flexible codebase to benefit the research community by speeding up the process of reimplementing previous works and/or developing new methods.

Citation

If you find this project useful in your research, please consider citing:

@misc{openpcdet2020,
    title={OpenPCDet: An Open-source Toolbox for 3D Object Detection from Point Clouds},
    author={OpenPCDet Development Team},
    howpublished = {\url{https://github.com/open-mmlab/OpenPCDet}},
    year={2020}
}

Contribution

You are welcome to become a member of the OpenPCDet development team by contributing to this repo, and feel free to contact us about any potential contributions.

Comments
  • Low accuracy when training on waymo dataset

    Hi, I added the recent Waymo dataset update to a previous version of OpenPCDet (roughly the June 2020 version) and trained SECOND/PartA2 on the Waymo dataset. I did not modify the preprocessing script or the model configs for Waymo.

    However, the accuracy is much lower. For SECOND, the final accuracy is as follows:

        2020-11-24 08:00:32,763 INFO Evaluation done.
        2020-11-24 09:47:11,516 INFO *************** EPOCH 0 EVALUATION *****************
        2020-11-24 09:48:05,627 INFO *************** Performance of EPOCH 0 *****************
        2020-11-24 09:48:05,627 INFO Generate label finished(sec_per_example: 0.0068 second).
        2020-11-24 09:48:05,628 INFO recall_roi_0.3: 0.000000 (0 / 296561)
        2020-11-24 09:48:05,628 INFO recall_rcnn_0.3: 0.689605 (204510 / 296561)
        2020-11-24 09:48:05,628 INFO recall_roi_0.5: 0.000000 (0 / 296561)
        2020-11-24 09:48:05,628 INFO recall_rcnn_0.5: 0.574320 (170321 / 296561)
        2020-11-24 09:48:05,628 INFO recall_roi_0.7: 0.000000 (0 / 296561)
        2020-11-24 09:48:05,628 INFO recall_rcnn_0.7: 0.316451 (93847 / 296561)
        2020-11-24 09:48:05,637 INFO Average predicted number of objects(7998 samples): 100.557
        2020-11-24 09:51:25,703 INFO
        OBJECT_TYPE_TYPE_VEHICLE_LEVEL_1/AP: 0.3911
        OBJECT_TYPE_TYPE_VEHICLE_LEVEL_1/APH: 0.3855
        OBJECT_TYPE_TYPE_VEHICLE_LEVEL_2/AP: 0.3620
        OBJECT_TYPE_TYPE_VEHICLE_LEVEL_2/APH: 0.3568
        OBJECT_TYPE_TYPE_PEDESTRIAN_LEVEL_1/AP: 0.3631
        OBJECT_TYPE_TYPE_PEDESTRIAN_LEVEL_1/APH: 0.2795
        OBJECT_TYPE_TYPE_PEDESTRIAN_LEVEL_2/AP: 0.3308
        OBJECT_TYPE_TYPE_PEDESTRIAN_LEVEL_2/APH: 0.2545
        OBJECT_TYPE_TYPE_SIGN_LEVEL_1/AP: 0.0000
        OBJECT_TYPE_TYPE_SIGN_LEVEL_1/APH: 0.0000
        OBJECT_TYPE_TYPE_SIGN_LEVEL_2/AP: 0.0000
        OBJECT_TYPE_TYPE_SIGN_LEVEL_2/APH: 0.0000
        OBJECT_TYPE_TYPE_CYCLIST_LEVEL_1/AP: 0.2761
        OBJECT_TYPE_TYPE_CYCLIST_LEVEL_1/APH: 0.2636
        OBJECT_TYPE_TYPE_CYCLIST_LEVEL_2/AP: 0.2757
        OBJECT_TYPE_TYPE_CYCLIST_LEVEL_2/APH: 0.2632

    I'd like to ask what may cause this problem.

    Thank you very much.

    invalid to be closed stale 
    opened by LZDSJTU 52
  • Problems using custom data sets

    I am trying to test PV-RCNN with my own LiDAR data instead of the KITTI data; I used Kaggle-style annotations. However, I get an error when trying to run the code. The error message is as follows:

    File "***/OpenPCDet/pcdet/datasets/innovusion/innovusion_dataset.py", line 77, in __getitem__
        data_dict = self.prepare_data(data_dict=input_dict)
      File "***/OpenPCDet/pcdet/datasets/dataset.py", line 124, in prepare_data
        'gt_boxes_mask': gt_boxes_mask
      File "***/OpenPCDet/pcdet/datasets/augmentor/data_augmentor.py", line 93, in forward
        data_dict = cur_augmentor(data_dict=data_dict)
      File "***/OpenPCDet/pcdet/datasets/augmentor/database_sampler.py", line 179, in __call__
        sampled_boxes = np.stack([x['box3d_lidar'] for x in sampled_dict], axis=0).astype(np.float32)
      File "<__array_function__ internals>", line 6, in stack
      File "***/anaconda3/envs/ml/lib/python3.7/site-packages/numpy/core/shape_base.py", line 423, in stack
        raise ValueError('need at least one array to stack')
    ValueError: need at least one array to stack
    

    I traced the code and found that it is related to data augmentation, in pcdet/datasets/augmentor/database_sampler.py:

        def __call__(self, data_dict):
            """
            Args:
                data_dict:
                    gt_boxes: (N, 7 + C) [x, y, z, dx, dy, dz, heading, ...]
    
            Returns:
    
            """
            gt_boxes = data_dict['gt_boxes']
            gt_names = data_dict['gt_names'].astype(str)
            existed_boxes = gt_boxes
            total_valid_sampled_dict = []
            for class_name, sample_group in self.sample_groups.items():
                if self.limit_whole_scene:
                    num_gt = np.sum(class_name == gt_names)
                    sample_group['sample_num'] = str(int(self.sample_class_num[class_name]) - num_gt)
                if int(sample_group['sample_num']) > 0:
                    sampled_dict = self.sample_with_fixed_number(class_name, sample_group)  ### need help
    
                    sampled_boxes = np.stack([x['box3d_lidar'] for x in sampled_dict], axis=0).astype(np.float32)
    
                    if self.sampler_cfg.get('DATABASE_WITH_FAKELIDAR', False):
                        sampled_boxes = box_utils.boxes3d_kitti_fakelidar_to_lidar(sampled_boxes)
    
                    iou1 = iou3d_nms_utils.boxes_bev_iou_cpu(sampled_boxes[:, 0:7], existed_boxes[:, 0:7])
                    iou2 = iou3d_nms_utils.boxes_bev_iou_cpu(sampled_boxes[:, 0:7], sampled_boxes[:, 0:7])
                    iou2[range(sampled_boxes.shape[0]), range(sampled_boxes.shape[0])] = 0
                    iou1 = iou1 if iou1.shape[1] > 0 else iou2
                    valid_mask = ((iou1.max(axis=1) + iou2.max(axis=1)) == 0).nonzero()[0]
                    valid_sampled_dict = [sampled_dict[x] for x in valid_mask]
                    valid_sampled_boxes = sampled_boxes[valid_mask]
    
                    existed_boxes = np.concatenate((existed_boxes, valid_sampled_boxes), axis=0)
                    total_valid_sampled_dict.extend(valid_sampled_dict)
    
            sampled_gt_boxes = existed_boxes[gt_boxes.shape[0]:, :]
            if total_valid_sampled_dict.__len__() > 0:
                data_dict = self.add_sampled_boxes_to_scene(data_dict, sampled_gt_boxes, total_valid_sampled_dict)
    
            data_dict.pop('gt_boxes_mask')
            return data_dict
    

    Then the key function is sample_with_fixed_number(self, class_name, sample_group)

        def sample_with_fixed_number(self, class_name, sample_group):
            """
            Args:
                class_name:
                sample_group:
            Returns:
    
            """
            sample_num, pointer, indices = int(sample_group['sample_num']), sample_group['pointer'], sample_group['indices']
            if pointer >= len(self.db_infos[class_name]):
                indices = np.random.permutation(len(self.db_infos[class_name]))
                pointer = 0
    
            sampled_dict = [self.db_infos[class_name][idx] for idx in indices[pointer: pointer + sample_num]]
            pointer += sample_num
            sample_group['pointer'] = pointer
            sample_group['indices'] = indices
            return sampled_dict
    

    self.db_infos is used in the code; it is loaded from the file specified by sampler_cfg.DB_INFO_PATH, but my data does not have it, so I am stuck here. What do I need to do to fix this, or is there a detailed explanation that would help me understand this code? Note: my data annotation format is as follows (a sketch of building such db_infos follows the format).

    id confidence center_x center_y center_z width length height yaw class_name
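
    For this situation, one workaround is to build the missing db_infos yourself and point sampler_cfg.DB_INFO_PATH at the resulting pickle file. A minimal sketch under assumptions: the 'box3d_lidar' key matches what __call__ above reads, while the builder, field names, and output path are hypothetical.

        import pickle
        import numpy as np

        def build_db_infos(objects):
            # objects: parsed rows of the annotation format above
            db_infos = {}
            for obj in objects:
                info = {
                    'name': obj['class_name'],
                    'path': obj['points_path'],  # cropped per-object points on disk
                    'box3d_lidar': np.array(
                        [obj['center_x'], obj['center_y'], obj['center_z'],
                         obj['length'], obj['width'], obj['height'], obj['yaw']],
                        dtype=np.float32),  # (x, y, z, dx, dy, dz, heading)
                    'num_points_in_gt': obj['num_points'],
                }
                db_infos.setdefault(info['name'], []).append(info)
            return db_infos

        # with open('custom_dbinfos_train.pkl', 'wb') as f:
        #     pickle.dump(build_db_infos(my_objects), f)  # my_objects: your parsed labels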
    

    thank you all

    help wanted needs discussion 
    opened by xixioba 41
  • Train custom data (without 2D information?)

    Hi,

    If I want to train on my own dataset, I was wondering: is it possible to label only the 3D target bounding box and class, without providing any 2D information such as camera calibration, images, and 2D rectangular boxes on images, as KITTI does?

    In short, may I only use these 3D data as input to train?

    Thanks in advance.

    question stale 
    opened by gltina 26
  • my CenterPoint config for KITTI dataset

    This config file is adapted from both the official CenterPoint code released for KITTI and the OpenPCDet version for Waymo. If anyone wants a pretrained model, please let me know. Update:

    • 2022-06-30: I've uploaded the pretrained model centerpoint@KITTI to Google Drive. It is available in the Performance and Models section of the repo.
    CLASS_NAMES: ['Car', 'Pedestrian', 'Cyclist']
    
    DATA_CONFIG:
        _BASE_CONFIG_: cfgs/dataset_configs/kitti_dataset.yaml
    
    MODEL:
        NAME: CenterPoint
    
        VFE:
            NAME: MeanVFE
    
        BACKBONE_3D:
            NAME: VoxelResBackBone8x
    
        MAP_TO_BEV:
            NAME: HeightCompression
            NUM_BEV_FEATURES: 256
    
        BACKBONE_2D:
            NAME: BaseBEVBackbone
    
            LAYER_NUMS: [5]
            LAYER_STRIDES: [1]
            NUM_FILTERS: [128]
            UPSAMPLE_STRIDES: [2]
            NUM_UPSAMPLE_FILTERS: [256]
    
        DENSE_HEAD:
            NAME: CenterHead
            CLASS_AGNOSTIC: False
    
            CLASS_NAMES_EACH_HEAD: [
                [ 'Car', 'Pedestrian', 'Cyclist' ]
            ]
    
            SHARED_CONV_CHANNEL: 64
            USE_BIAS_BEFORE_NORM: True
            NUM_HM_CONV: 2 #  heatmap
            SEPARATE_HEAD_CFG:
                HEAD_ORDER: ['center', 'center_z', 'dim', 'rot']
                HEAD_DICT: {
                    'center': {'out_channels': 2, 'num_conv': 2}, # offset
                    'center_z': {'out_channels': 1, 'num_conv': 2},
                    'dim': {'out_channels': 3, 'num_conv': 2},
                    'rot': {'out_channels': 2, 'num_conv': 2},
                }
    
            TARGET_ASSIGNER_CONFIG:
                FEATURE_MAP_STRIDE: 4
                NUM_MAX_OBJS: 500
                GAUSSIAN_OVERLAP: 0.1
                MIN_RADIUS: 2
    
            LOSS_CONFIG:
                LOSS_WEIGHTS: {
                    'cls_weight': 1.0,
                    'loc_weight': 2.0,
                    'code_weights': [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
                }
            POST_PROCESSING:
                SCORE_THRESH: 0.1
                POST_CENTER_LIMIT_RANGE: [-75.2, -75.2, -2, 75.2, 75.2, 4]
                MAX_OBJ_PER_SAMPLE: 500
                NMS_CONFIG:
                    NMS_TYPE: nms_gpu
                    NMS_THRESH: 0.7
                    NMS_PRE_MAXSIZE: 4096
                    NMS_POST_MAXSIZE: 500
    
        POST_PROCESSING:
            RECALL_THRESH_LIST: [0.3, 0.5, 0.7]
            SCORE_THRESH: 0.1
            OUTPUT_RAW_SCORE: False
    
            EVAL_METRIC: kitti
    
            NMS_CONFIG:
                MULTI_CLASSES_NMS: False
                NMS_TYPE: nms_gpu
                NMS_THRESH: 0.01
                NMS_PRE_MAXSIZE: 4096
                NMS_POST_MAXSIZE: 500
    
    
    OPTIMIZATION:
        BATCH_SIZE_PER_GPU: 4
        NUM_EPOCHS: 80
    
        OPTIMIZER: adam_onecycle
        LR: 0.003
        WEIGHT_DECAY: 0.01
        MOMENTUM: 0.9
    
        MOMS: [0.95, 0.85]
        PCT_START: 0.4
        DIV_FACTOR: 10
        DECAY_STEP_LIST: [35, 45]
        LR_DECAY: 0.1
        LR_CLIP: 0.0000001
    
        LR_WARMUP: False
        WARMUP_EPOCH: 1
    
        GRAD_NORM_CLIP: 10
    
    stale 
    opened by OuyangJunyuan 24
  • CUDA error: no kernel image is available for execution on the device

    Training PointPillars and SECOND works fine, but when I try to train PartA^2Net, I get this error message after addressing issue #70:

    Traceback (most recent call last):
      File "train.py", line 215, in <module>
        main()
      File "train.py", line 210, in main
        max_ckpt_save_num=arguments.max_ckpt_save_num)
      File "/scratch_net/hox/mhahner/repositories/PCDet/tools/train_utils/train_utils.py", line 80, in train_model
        leave_pbar=(cur_epoch + 1 == total_epochs)
      File "/scratch_net/hox/mhahner/repositories/PCDet/tools/train_utils/train_utils.py", line 36, in train_one_epoch
        loss, tb_dict, disp_dict = model_func(model, batch)
      File "/scratch_net/hox/mhahner/repositories/PCDet/pcdet/models/__init__.py", line 25, in model_func
        ret_dict, tb_dict, disp_dict = model(input_dict)
      File "/home/mhahner/scratch/apps/anaconda3/envs/spconv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
        result = self.forward(*input, **kwargs)
      File "/scratch_net/hox/mhahner/repositories/PCDet/pcdet/models/detectors/PartA2_net.py", line 112, in forward
        batch_size, voxel_centers, coords, rpn_ret_dict, input_dict
      File "/scratch_net/hox/mhahner/repositories/PCDet/pcdet/models/detectors/PartA2_net.py", line 98, in forward_rcnn
        rcnn_ret_dict = self.rcnn_net.forward(rcnn_input_dict)
      File "/scratch_net/hox/mhahner/repositories/PCDet/pcdet/models/rcnn/partA2_rcnn_net.py", line 323, in forward
        targets_dict = self.assign_targets(batch_size, rcnn_dict)
      File "/scratch_net/hox/mhahner/repositories/PCDet/pcdet/models/rcnn/partA2_rcnn_net.py", line 27, in assign_targets
        targets_dict = proposal_target_layer(rcnn_dict, roi_sampler_cfg=self.rcnn_target_config)
      File "/scratch_net/hox/mhahner/repositories/PCDet/pcdet/models/model_utils/proposal_target_layer.py", line 14, in proposal_target_layer
        sample_rois_for_rcnn(rois, gt_boxes, roi_raw_scores, roi_labels, roi_sampler_cfg)
      File "/scratch_net/hox/mhahner/repositories/PCDet/pcdet/models/model_utils/proposal_target_layer.py", line 82, in sample_rois_for_rcnn
        cur_gt[:, 0:7], cur_gt_labels)
      File "/scratch_net/hox/mhahner/repositories/PCDet/pcdet/models/model_utils/proposal_target_layer.py", line 183, in get_maxiou3d_with_same_class
        iou3d = iou3d_nms_utils.boxes_iou3d_gpu(cur_roi, cur_gt)  # (M, N)
      File "/scratch_net/hox/mhahner/repositories/PCDet/pcdet/ops/iou3d_nms/iou3d_nms_utils.py", line 47, in boxes_iou3d_gpu
        overlaps_h = torch.clamp(min_of_max - max_of_min, min=0)
    RuntimeError: CUDA error: no kernel image is available for execution on the device
    

    My conda environment looks like this:

    # packages in environment at /home/mhahner/scratch/apps/anaconda3/envs/spconv:
    #
    # Name                    Version                   Build  Channel
    _libgcc_mutex             0.1                 conda_forge    conda-forge
    _openmp_mutex             4.5                      1_llvm    conda-forge
    beautifulsoup4            4.9.1                    pypi_0    pypi
    bzip2                     1.0.8                h516909a_2    conda-forge
    ca-certificates           2020.1.1                      0
    cachetools                4.1.0                    pypi_0    pypi
    certifi                   2020.4.5.1               py37_0
    chardet                   3.0.4                    pypi_0    pypi
    cmake                     3.17.0               h28c56e5_0    conda-forge
    coloredlogs               14.0                     pypi_0    pypi
    cudatoolkit               10.1.243             h6bb024c_0
    cudatoolkit-dev           10.1.243             h516909a_3    conda-forge
    cudnn                     7.6.5                cuda10.1_0
    cycler                    0.10.0                   pypi_0    pypi
    decorator                 4.4.2                    pypi_0    pypi
    easydict                  1.9                      pypi_0    pypi
    expat                     2.2.9                he1b5a44_2    conda-forge
    google                    2.0.3                    pypi_0    pypi
    google-auth               1.15.0                   pypi_0    pypi
    google-auth-oauthlib      0.4.1                    pypi_0    pypi
    gspread                   3.6.0                    pypi_0    pypi
    httplib2                  0.18.0                   pypi_0    pypi
    humanfriendly             8.2                      pypi_0    pypi
    idna                      2.9                      pypi_0    pypi
    imagecodecs               2020.2.18                pypi_0    pypi
    imageio                   2.8.0                    pypi_0    pypi
    kiwisolver                1.2.0                    pypi_0    pypi
    krb5                      1.17.1               h2fd8d38_0    conda-forge
    ld_impl_linux-64          2.34                 h53a641e_0    conda-forge
    libblas                   3.8.0               16_openblas    conda-forge
    libcblas                  3.8.0               16_openblas    conda-forge
    libcurl                   7.69.1               hf7181ac_0    conda-forge
    libedit                   3.1.20170329      hf8c457e_1001    conda-forge
    libffi                    3.2.1             he1b5a44_1007    conda-forge
    libgcc-ng                 9.2.0                h24d8f2e_2    conda-forge
    libgfortran-ng            7.5.0                hdf63c60_6    conda-forge
    liblapack                 3.8.0               16_openblas    conda-forge
    libopenblas               0.3.9                h5ec1e0e_0    conda-forge
    libssh2                   1.9.0                hab1572f_2    conda-forge
    libstdcxx-ng              9.2.0                hdf63c60_2    conda-forge
    libuv                     1.34.0               h516909a_0    conda-forge
    llvm-openmp               10.0.0               hc9558a2_0    conda-forge
    llvmlite                  0.32.1                   pypi_0    pypi
    matplotlib                3.2.1                    pypi_0    pypi
    mkl                       2020.1                      219    conda-forge
    ncurses                   6.1               hf484d3e_1002    conda-forge
    networkx                  2.4                      pypi_0    pypi
    ninja                     1.10.0               hc9558a2_0    conda-forge
    numba                     0.49.1                   pypi_0    pypi
    numpy                     1.18.4           py37h8960a57_0    conda-forge
    oauth2client              4.1.3                    pypi_0    pypi
    oauthlib                  3.1.0                    pypi_0    pypi
    openssl                   1.1.1g               h7b6447c_0
    pillow                    7.1.2                    pypi_0    pypi
    pip                       20.1.1             pyh9f0ad1d_0    conda-forge
    protobuf                  3.12.0                   pypi_0    pypi
    pyasn1                    0.4.8                    pypi_0    pypi
    pyasn1-modules            0.2.8                    pypi_0    pypi
    pyparsing                 2.4.7                    pypi_0    pypi
    python                    3.7.6           h8356626_5_cpython    conda-forge
    python-dateutil           2.8.1                    pypi_0    pypi
    python_abi                3.7                     1_cp37m    conda-forge
    pytorch                   1.4.0           py3.7_cuda10.1.243_cudnn7.6.3_0    pytorch
    pywavelets                1.1.1                    pypi_0    pypi
    pyyaml                    5.3.1                    pypi_0    pypi
    readline                  8.0                  hf8c457e_0    conda-forge
    requests                  2.23.0                   pypi_0    pypi
    requests-oauthlib         1.3.0                    pypi_0    pypi
    rhash                     1.3.6             h14c3975_1001    conda-forge
    rsa                       4.0                      pypi_0    pypi
    scikit-image              0.17.2                   pypi_0    pypi
    scipy                     1.4.1                    pypi_0    pypi
    setuptools                46.4.0           py37hc8dfbb8_0    conda-forge
    six                       1.14.0                   pypi_0    pypi
    soupsieve                 2.0.1                    pypi_0    pypi
    spconv                    1.0                      pypi_0    pypi
    sqlite                    3.30.1               hcee41ef_0    conda-forge
    tensorboardx              2.0                      pypi_0    pypi
    tifffile                  2020.5.11                pypi_0    pypi
    tk                        8.6.10               hed695b0_0    conda-forge
    tqdm                      4.46.0                   pypi_0    pypi
    urllib3                   1.25.9                   pypi_0    pypi
    wheel                     0.34.2                     py_1    conda-forge
    xz                        5.2.5                h516909a_0    conda-forge
    zlib                      1.2.11            h516909a_1006    conda-forge
    
    opened by MartinHahner 24
  • Add support for spconv 2.0

    Hi,

    this PR addresses this issue and aims to add support for spconv v2.0+. I've updated the spconv imports, changed data_processor.py to use the new VoxelGenerator, and added support for converting older models: the weights of spconv blocks need to be transposed (see the sketch below). This way of supporting older models requires a version bump.
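
    A rough sketch of that conversion, assuming spconv 1.x stores sparse-conv weights as (kD, kH, kW, C_in, C_out) while spconv 2.x expects (C_out, kD, kH, kW, C_in); the helper name is hypothetical:

        import torch

        def convert_spconv1_weight(w: torch.Tensor) -> torch.Tensor:
            # (kD, kH, kW, C_in, C_out) -> (C_out, kD, kH, kW, C_in)
            return w.permute(4, 0, 1, 2, 3).contiguous()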

    I have tested these changes with demo.py and the performance of models remained unchanged.

    Best, Amar

    opened by acivgin1 23
  • Training using our own dataset

    Good evening. First, I would like to thank you for this project. I have a query regarding training with our own dataset: in the demo you have shown how to take our own point cloud data, save it in numpy format, and test it on a pretrained model. Is there a way we can train with our own dataset? If so, could you please guide me on how to do it? Thanks for your help.

    stale 
    opened by SaMeEr9597 21
  • Error while installing PCDet in Windows

    Hi everyone,

    I've been trying to port some neural network code from a colleague that uses OpenPCDet from Linux to Windows 10, and every time we try to execute the setup.py file, we get the same error:

    [screenshot: MicrosoftTeams-image] (Please note that this error happens even when installing PCDet standalone.)

    Is OpenPCDet compatible with Windows? If yes, any idea on this issue?

    We've tried with:

    • Cuda versions 10.2, 11.3
    • PyTorch versions 1.1, 1.10
    • spconv 2.1 (using the latest master commit of OpenPCDet that is supposed to be compatible with this version).
    • Python 3.7.12 (via Anaconda3)
    stale 
    opened by Andjii 21
  • Reproduce the released results

    Hi, I tried to train PointPillar and PV-RCNN with the default configs, only modifying the number of training epochs, but I can't reproduce the released results (with 8 RTX 2080Ti GPUs). In your released model zoo, PointPillar (moderate categories) is 77.28 / 52.29 / 62.28, but my best result is 75.52 / 44.69 / 63.91. For PV-RCNN, the released result is 83.61 / 57.90 / 70.47, and my best result is 78.89 / 52.86 / 72.21. Is the gap reasonable? Or can you provide any suggestions to finetune the model and get better performance? Thank you!

    question 
    opened by LCJHust 19
  • Failed to train on multiple gpus!

    I tried this script

    python -m torch.distributed.launch --nproc_per_node=6 train.py --launcher pytorch --cfg_file cfgs/my_models/pv_rcnn.yaml
    

    And my training logger showed that the training sequence was stuck. It seems to take forever to proceed after finishing the following lines in train.py:

        if args.ckpt is not None:
            it, start_epoch = model.load_params_with_optimizer(args.ckpt, to_cpu=dist, optimizer=optimizer, logger=logger)
            last_epoch = start_epoch + 1
        else:
            ckpt_list = glob.glob(str(ckpt_dir / '*checkpoint_epoch_*.pth'))
            if len(ckpt_list) > 0:
                ckpt_list.sort(key=os.path.getmtime)
                it, start_epoch = model.load_params_with_optimizer(
                    ckpt_list[-1], to_cpu=dist, optimizer=optimizer, logger=logger
                )
                last_epoch = start_epoch + 1
    

    No further lines are executed. Is this a bug?

    question to be closed 
    opened by NLCharles 18
  • RuntimeError: CUDA error: invalid device function

    Hi, when I run test.py, it throws the following error:

        Error!
        Traceback (most recent call last):
          File "test.py", line 200, in <module>
            main()
          File "test.py", line 196, in main
            eval_single_ckpt(model, test_loader, args, eval_output_dir, logger, epoch_id, dist_test=dist_test)
          File "test.py", line 62, in eval_single_ckpt
            eval_utils.eval_one_epoch(
          File "/home/nio/Desktop/pointpillar/tools/eval_utils/eval_utils.py", line 57, in eval_one_epoch
            pred_dicts, ret_dict = model(batch_dict)
          File "/home/nio/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 532, in __call__
            result = self.forward(*input, **kwargs)
          File "/home/nio/Desktop/pointpillar/pcdet/models/detectors/pointpillar.py", line 21, in forward
            pred_dicts, recall_dicts = self.post_processing(batch_dict)
          File "/home/nio/Desktop/pointpillar/pcdet/models/detectors/detector3d_template.py", line 261, in post_processing
            recall_dict = self.generate_recall_record(
          File "/home/nio/Desktop/pointpillar/pcdet/models/detectors/detector3d_template.py", line 298, in generate_recall_record
            iou3d_rcnn = iou3d_nms_utils.boxes_iou3d_gpu(box_preds[:, 0:7], cur_gt[:, 0:7])
          File "/home/nio/Desktop/pointpillar/pcdet/ops/iou3d_nms/iou3d_nms_utils.py", line 71, in boxes_iou3d_gpu
            overlaps_h = torch.clamp(min_of_max - max_of_min, min=0)
        RuntimeError: CUDA error: invalid device function

    looking forward to your reply...

    opened by hughlee815 18
  • what "filter_by_min_points" means?

        filter_by_min_points: ['Pedestrian.Standing:10', 'Pedestrian.Crouching:10', 'Pedestrian.Lying:10',
            'Cyclist.Moto_with_Person:10', 'Cyclist.Bike_with_Person:10', 'Cyclist.Tricycle_with_Person:10', 'Cyclist.Cart_with_Person:10',
            'Vehicle.Tiny_Car:15', 'Vehicle.Sedan:15', 'Vehicle.SUV:15', 'Vehicle.Van:15',
            'Vehicle.Bus:15', 'Vehicle.Bendy_Bus:15', 'Vehicle.Lorry:15', 'Vehicle.Truck:15',
            'Movable_Obstacle.Cone:15', 'Movable_Obstacle.Warning:15', 'Movable_Obstacle.Post:15'
        ]

    According to my understanding, it is the minimum number of points an object must contain to be kept. However, how is it generally set? This is my setup, but it doesn't work very well :(
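
    For reference, a simplified sketch of what this option does in the database sampler, assuming the standard db_infos layout with a num_points_in_gt field: ground-truth objects whose cropped point cloud has fewer points than the per-class threshold are dropped from the sampling database, so they are never pasted into training scenes.

        def filter_by_min_points(db_infos, min_gt_points_list):
            # entries look like 'Vehicle.Sedan:15'
            for name_num in min_gt_points_list:
                name, min_num = name_num.split(':')
                min_num = int(min_num)
                if min_num > 0 and name in db_infos:
                    db_infos[name] = [
                        info for info in db_infos[name]
                        if info['num_points_in_gt'] >= min_num
                    ]
            return db_infos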
    
    opened by Baron6977 0
  • custom dataset with muti-labels training by pvrcnn++

    The results are very poor; each label category has only 10-20% accuracy. I would like to ask the reason for the poor results, or whether there are other networks in OpenPCDet suitable for multi-label training. Thanks~

    opened by Baron6977 0
  • Domain Adaptation (Trained PVRCNN on Waymo, Testing on Kitti), Error: Not Updated Weight, Poor Performance

    Hey Guys,

    • I have trained PV-RCNN on Waymo with the given config file from OpenPCDet.
    • The point-feature settings of the Waymo dataset config that I used for training the above model are:

    POINT_FEATURE_ENCODING: { encoding_type: absolute_coordinates_encoding, used_feature_list: ['x', 'y', 'z', 'intensity', 'elongation'], src_feature_list: ['x', 'y', 'z', 'intensity', 'elongation'], }

    Now I am testing the performance of the above trained PV-RCNN model (with the exact config used in training) on the KITTI dataset: Waymo -> KITTI.

    Setting 1.

    • The source dataset config (which is Waymo) is:

    POINT_FEATURE_ENCODING: { encoding_type: absolute_coordinates_encoding, used_feature_list: ['x', 'y', 'z', 'intensity', 'elongation'], src_feature_list: ['x', 'y', 'z', 'intensity', 'elongation'], }

    • The target dataset config (which is KITTI) is:

    POINT_FEATURE_ENCODING: { encoding_type: absolute_coordinates_encoding, used_feature_list: ['x', 'y', 'z', 'intensity', 'timestamp'], src_feature_list: ['x', 'y', 'z', 'intensity', 'timestamp'], }

    ERROR 1, with Setting 1 (all the pretrained weights load successfully): I am getting this error (snippet):

        File "../pcdet/datasets/processor/data_processor.py", line 134, in transform_points_to_voxels
            voxel_output = self.voxel_generator.generate(points)
        File "../pcdet/datasets/processor/data_processor.py", line 54, in generate
            voxel_output = self._voxel_generator.point_to_voxel(tv.from_numpy(points))
        RuntimeError: /io/build/temp.linux-x86_64-cpython-37/spconv/build/src/csrc/sparse/all/ops_cpu3d/Point2VoxelCPU/Point2VoxelCPU_point_to_voxel_static.cc(22)
            num_features == voxels.dim(2) assert faild. your points num features doesn't equal to voxel.

    • evaluation is not working because of the above error

    Setting 2.

    • The source dataset config (which is Waymo) is:

    POINT_FEATURE_ENCODING: { encoding_type: absolute_coordinates_encoding, used_feature_list: ['x', 'y', 'z', 'intensity', 'elongation'], src_feature_list: ['x', 'y', 'z', 'intensity', 'elongation'], }

    • The target dataset config (which is KITTI) is:

    POINT_FEATURE_ENCODING: { encoding_type: absolute_coordinates_encoding, used_feature_list: ['x', 'y', 'z', 'intensity'], src_feature_list: ['x', 'y', 'z', 'intensity', 'timestamp'], }

    NO ERROR with Setting 2, but performance is very low (not all the pretrained weights load successfully; see the sketch after this list).

    • weights which are not updating are:

        INFO Not updated weight backbone_3d.conv_input.0.weight: torch.Size([16, 3, 3, 3, 4])
        INFO Not updated weight pfe.SA_rawpoints.mlps.0.0.weight: torch.Size([16, 4, 1, 1])
        INFO Not updated weight pfe.SA_rawpoints.mlps.1.0.weight: torch.Size([16, 4, 1, 1])
        INFO ==> Done (loaded 316/319)

    • evaluation is working without any error but getting very poor performance
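
    A rough illustration of why those weights are skipped (the shapes are taken from the log above; the interpretation is an assumption): the last dimension of the first sparse-conv kernel equals the number of used point features, so a checkpoint trained with 5 Waymo features cannot fill a model built for KITTI's 4 features.

        import torch

        waymo_ckpt_w = torch.empty(16, 3, 3, 3, 5)   # trained with 5 point features
        kitti_model_w = torch.empty(16, 3, 3, 3, 4)  # built with 4 point features
        assert waymo_ckpt_w.shape != kitti_model_w.shape  # -> "Not updated weight ..."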

    Questions:

    • Is there any way to solve this problem (load all the weights and complete the evaluation run) without training a new model with a changed feature list?
    • What else should I keep in mind, if I want to train a model for testing on different datasets, while testing the performance on a different dataset?

    Thanks!!

    opened by AbhishekKaushikCV 3
  • Use timm style to record log by logger

    1. Change log_train_***.txt to train_***.log so that the text can be highlighted in the IDE.
    2. Record the log following the style of pytorch-image-models:
    2023-01-05 16:34:33,418   INFO  **********************Start training nuscenes_models/cbgs_voxel0075_res3d_centerpoint(timm_style)**********************
    2023-01-05 16:34:37,748   INFO  Train:    1/3 ( 33%) [   0/323 (  0%)]  Loss: 516.7 (517.)  LR: 1.000e-04  Time cost: 00:04/21:46 [00:04/1:05:18]  Acc_iter 1           Data time: 0.87(0.87)  Forward time: 3.18(3.18)  Batch time: 4.04(4.04)
    2023-01-05 16:34:49,377   INFO  Train:    1/3 ( 33%) [  49/323 ( 15%)]  Loss: 36.77 (81.7)  LR: 1.351e-04  Time cost: 00:15/01:25 [00:15/04:48]  Acc_iter 50          Data time: 0.00(0.02)  Forward time: 0.31(0.29)  Batch time: 0.31(0.31)
    2023-01-05 16:35:00,786   INFO  Train:    1/3 ( 33%) [  99/323 ( 31%)]  Loss: 23.51 (54.8)  LR: 2.377e-04  Time cost: 00:27/01:00 [00:27/03:55]  Acc_iter 100         Data time: 0.00(0.01)  Forward time: 0.19(0.26)  Batch time: 0.20(0.27)
    2023-01-05 16:35:11,175   INFO  Train:    1/3 ( 33%) [ 149/323 ( 46%)]  Loss: 28.79 (45.1)  LR: 3.910e-04  Time cost: 00:37/00:43 [00:37/03:24]  Acc_iter 150         Data time: 0.00(0.01)  Forward time: 0.14(0.24)  Batch time: 0.15(0.25)
    2023-01-05 16:35:22,124   INFO  Train:    1/3 ( 33%) [ 199/323 ( 62%)]  Loss: 23.56 (39.8)  LR: 5.701e-04  Time cost: 00:48/00:30 [00:48/03:06]  Acc_iter 200         Data time: 0.00(0.01)  Forward time: 0.19(0.24)  Batch time: 0.20(0.24)
    2023-01-05 16:35:32,667   INFO  Train:    1/3 ( 33%) [ 249/323 ( 77%)]  Loss: 19.39 (36.5)  LR: 7.460e-04  Time cost: 00:58/00:17 [00:59/02:49]  Acc_iter 250         Data time: 0.00(0.01)  Forward time: 0.28(0.23)  Batch time: 0.29(0.24)
    2023-01-05 16:35:43,319   INFO  Train:    1/3 ( 33%) [ 299/323 ( 93%)]  Loss: 19.67 (34.2)  LR: 8.900e-04  Time cost: 01:09/00:05 [01:09/02:35]  Acc_iter 300         Data time: 0.00(0.01)  Forward time: 0.27(0.23)  Batch time: 0.27(0.23)
    2023-01-05 16:35:47,906   INFO  Train:    1/3 ( 33%) [ 322/323 (100%)]  Loss: 23.89 (33.4)  LR: 9.388e-04  Time cost: 01:14/00:00 [01:14/02:28]  Acc_iter 323         Data time: 0.00(0.01)  Forward time: 0.14(0.22)  Batch time: 0.14(0.23)
    2023-01-05 16:35:49,021   INFO  Train:    2/3 ( 67%) [   0/323 (  0%)]  Loss: 26.78 (26.8)  LR: 9.406e-04  Time cost: 00:00/04:46 [01:15/09:33]  Acc_iter 324         Data time: 0.49(0.49)  Forward time: 0.39(0.39)  Batch time: 0.89(0.89)
    2023-01-05 16:35:54,039   INFO  Train:    2/3 ( 67%) [  26/323 (  8%)]  Loss: 25.13 (23.2)  LR: 9.788e-04  Time cost: 00:05/01:04 [01:20/02:15]  Acc_iter 350         Data time: 0.00(0.02)  Forward time: 0.21(0.20)  Batch time: 0.21(0.22)
    2023-01-05 16:36:03,994   INFO  Train:    2/3 ( 67%) [  76/323 ( 24%)]  Loss: 24.75 (22.6)  LR: 9.990e-04  Time cost: 00:15/00:50 [01:30/01:57]  Acc_iter 400         Data time: 0.00(0.01)  Forward time: 0.19(0.20)  Batch time: 0.19(0.21)
    2023-01-05 16:36:13,669   INFO  Train:    2/3 ( 67%) [ 126/323 ( 39%)]  Loss: 20.90 (22.2)  LR: 9.723e-04  Time cost: 00:25/00:39 [01:40/01:44]  Acc_iter 450         Data time: 0.00(0.01)  Forward time: 0.22(0.20)  Batch time: 0.22(0.20)
    2023-01-05 16:36:23,137   INFO  Train:    2/3 ( 67%) [ 176/323 ( 54%)]  Loss: 23.41 (21.9)  LR: 9.114e-04  Time cost: 00:35/00:29 [01:49/01:32]  Acc_iter 500         Data time: 0.00(0.00)  Forward time: 0.14(0.19)  Batch time: 0.14(0.20)
    2023-01-05 16:36:32,327   INFO  Train:    2/3 ( 67%) [ 226/323 ( 70%)]  Loss: 21.03 (21.6)  LR: 8.207e-04  Time cost: 00:44/00:18 [01:58/01:21]  Acc_iter 550         Data time: 0.00(0.00)  Forward time: 0.14(0.19)  Batch time: 0.14(0.19)
    2023-01-05 16:36:41,883   INFO  Train:    2/3 ( 67%) [ 276/323 ( 85%)]  Loss: 19.83 (21.5)  LR: 7.068e-04  Time cost: 00:53/00:09 [02:08/01:11]  Acc_iter 600         Data time: 0.00(0.00)  Forward time: 0.19(0.19)  Batch time: 0.19(0.19)
    2023-01-05 16:36:50,047   INFO  Train:    2/3 ( 67%) [ 322/323 (100%)]  Loss: 18.90 (21.2)  LR: 5.886e-04  Time cost: 01:01/00:00 [02:16/01:02]  Acc_iter 646         Data time: 0.00(0.00)  Forward time: 0.15(0.19)  Batch time: 0.15(0.19)
    2023-01-05 16:36:51,106   INFO  Train:    3/3 (100%) [   0/323 (  0%)]  Loss: 21.00 (21.0)  LR: 5.859e-04  Time cost: 00:00/04:28 [02:17/04:28]  Acc_iter 647         Data time: 0.54(0.54)  Forward time: 0.29(0.29)  Batch time: 0.83(0.83)
    2023-01-05 16:36:51,666   INFO  Train:    3/3 (100%) [   3/323 (  1%)]  Loss: 20.29 (20.3)  LR: 5.780e-04  Time cost: 00:01/01:51 [02:18/01:51]  Acc_iter 650         Data time: 0.00(0.14)  Forward time: 0.19(0.21)  Batch time: 0.20(0.35)
    2023-01-05 16:37:01,307   INFO  Train:    3/3 (100%) [  53/323 ( 16%)]  Loss: 21.82 (20.0)  LR: 4.434e-04  Time cost: 00:11/00:55 [02:27/00:55]  Acc_iter 700         Data time: 0.00(0.01)  Forward time: 0.16(0.19)  Batch time: 0.16(0.20)
    2023-01-05 16:37:10,328   INFO  Train:    3/3 (100%) [ 103/323 ( 32%)]  Loss: 23.87 (19.7)  LR: 3.130e-04  Time cost: 00:20/00:42 [02:36/00:42]  Acc_iter 750         Data time: 0.00(0.01)  Forward time: 0.15(0.19)  Batch time: 0.15(0.19)
    2023-01-05 16:37:19,906   INFO  Train:    3/3 (100%) [ 153/323 ( 47%)]  Loss: 17.04 (19.6)  LR: 1.962e-04  Time cost: 00:29/00:32 [02:46/00:32]  Acc_iter 800         Data time: 0.00(0.01)  Forward time: 0.21(0.19)  Batch time: 0.21(0.19)
    2023-01-05 16:37:29,243   INFO  Train:    3/3 (100%) [ 203/323 ( 63%)]  Loss: 18.42 (19.4)  LR: 1.013e-04  Time cost: 00:38/00:22 [02:55/00:22]  Acc_iter 850         Data time: 0.00(0.00)  Forward time: 0.17(0.19)  Batch time: 0.17(0.19)
    2023-01-05 16:37:38,561   INFO  Train:    3/3 (100%) [ 253/323 ( 78%)]  Loss: 21.43 (19.3)  LR: 3.528e-05  Time cost: 00:48/00:13 [03:05/00:13]  Acc_iter 900         Data time: 0.00(0.00)  Forward time: 0.20(0.19)  Batch time: 0.20(0.19)
    2023-01-05 16:37:48,159   INFO  Train:    3/3 (100%) [ 303/323 ( 94%)]  Loss: 16.19 (19.1)  LR: 2.921e-06  Time cost: 00:57/00:03 [03:14/00:03]  Acc_iter 950         Data time: 0.00(0.00)  Forward time: 0.23(0.19)  Batch time: 0.23(0.19)
    2023-01-05 16:37:51,368   INFO  Train:    3/3 (100%) [ 322/323 (100%)]  Loss: 19.52 (19.0)  LR: 1.728e-08  Time cost: 01:01/00:00 [03:17/00:00]  Acc_iter 969         Data time: 0.00(0.00)  Forward time: 0.11(0.19)  Batch time: 0.11(0.19)
    2023-01-05 16:37:51,550   INFO  **********************End training nuscenes_models/cbgs_voxel0075_res3d_centerpoint(timm_style)**********************
    
    opened by hova88 0
  • kitti result

    Are the results on KITTI (displayed in README.md) from the last epoch (80)? I find that the R11 moderate result on some classes can be higher at earlier epochs than at the last epoch (80). It would be nice if you could tell me how to determine the final result when comparing results from different models.

    opened by DezeZhao 1