CenterPoint: 3D Object Detection and Tracking using center points in the bird's-eye view.

Overview

CenterPoint

3D Object Detection and Tracking using center points in the bird's-eye view.

Center-based 3D Object Detection and Tracking,
Tianwei Yin, Xingyi Zhou, Philipp Krähenbühl,
arXiv technical report (arXiv 2006.11275)

@inproceedings{yin2021center,
  title={Center-based 3D Object Detection and Tracking},
  author={Yin, Tianwei and Zhou, Xingyi and Kr{\"a}henb{\"u}hl, Philipp},
  booktitle={CVPR},
  year={2021},
}

This repo is a reimplementation of CenterPoint on the KITTI dataset. For nuScenes and Waymo, please refer to the original repo. We provide two configs: centerpoint.yaml for the vanilla CenterPoint model, and centerpoint_rcnn.yaml, which combines CenterPoint with PV-RCNN. Pretrained models are coming soon.
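Training and evaluation follow the standard OpenPCDet workflow; for example, python train.py --cfg_file cfgs/kitti_models/centerpoint.yaml from tools/ for single-GPU training, or bash scripts/dist_train.sh 8 --cfg_file cfgs/kitti_models/centerpoint.yaml for 8 GPUs (these are the commands reported by users in the issues below).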

Acknowledgement

Our code is based on OpenPCDet. Some util files are copied from mmdetection and mmdetection3d. Thanks to the OpenMMLab Development Team for their awesome codebases.

Comments
  • TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first

    Hi, thanks for sharing your great work.

    I'm trying to train with a single GPU.

    If I set batch size > 1, I always run into CUDA out-of-memory errors, so I set batch size = 1.

    2021-06-03 16:12:13,039 INFO Start training home/kimsuyeon/a/CenterPoint-KITTI/tools/cfgs/kitti_models/centerpoint_rcnn(default)
    epochs:   0%|          | 0/80 [00:01<?, ?it/s]
              0%|          | 0/3712 [00:00<?, ?it/s]
    Traceback (most recent call last):
      File "train.py", line 198, in <module>
        main()
      File "train.py", line 153, in main
        train_model(
      File "/home/kimsuyeon/a/CenterPoint-KITTI/tools/train_utils/train_utils.py", line 86, in train_model
        accumulated_iter = train_one_epoch(
      File "/home/kimsuyeon/a/CenterPoint-KITTI/tools/train_utils/train_utils.py", line 38, in train_one_epoch
        loss, tb_dict, disp_dict = model_func(model, batch)
      File "/home/kimsuyeon/a/CenterPoint-KITTI/pcdet/models/__init__.py", line 30, in model_func
        ret_dict, tb_dict, disp_dict = model(batch_dict)
      File "/home/kimsuyeon/a/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/kimsuyeon/a/CenterPoint-KITTI/pcdet/models/detectors/centerpoint_rcnn.py", line 11, in forward
        batch_dict = cur_module(batch_dict)
      File "/home/kimsuyeon/a/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/kimsuyeon/a/CenterPoint-KITTI/pcdet/models/dense_heads/centerpoint_head_single.py", line 77, in forward
        targets_dict = self.assign_targets(
      File "/home/kimsuyeon/a/CenterPoint-KITTI/pcdet/models/dense_heads/centerpoint_head_single.py", line 142, in assign_targets
        heatmaps = np.array(heatmaps).transpose(1, 0).tolist()
      File "/home/kimsuyeon/a/lib/python3.8/site-packages/torch/tensor.py", line 621, in __array__
        return self.numpy()
    TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.

    And I got this problem. Do you have any idea?
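    For context: the failing call np.array(heatmaps) tries to convert CUDA tensors to numpy, which is exactly what the TypeError forbids. A minimal sketch of a device-safe workaround (the same zip-based transpose used in the multi-GPU fix further down this page; the shapes are made up for illustration):

        import torch

        # heatmaps: one list per batch sample, each holding per-task CUDA tensors
        heatmaps = [[torch.rand(2, 4, 4, device='cuda') for _ in range(3)]
                    for _ in range(2)]

        # np.array(heatmaps).transpose(1, 0).tolist() fails on CUDA tensors;
        # zip(*...) does the same (batch, task) -> (task, batch) transpose in Python
        heatmaps = list(map(list, zip(*heatmaps)))
        heatmaps = [torch.stack(hms) for hms in heatmaps]  # one stacked tensor per task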

    opened by tndus5497 11
  • RuntimeError: Expected object of backend CUDA but got backend CPU for argument #2 'other'

    Hi, I followed the installation and getting-started guides to set up the environment, and this error occurred. How can I solve it? Please help me! Thanks!
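    For context: this error generally means one operand of a CUDA op still lives on the CPU. A minimal, generic sketch of the pattern and the usual fix (illustrative tensors, not tied to a specific line of this repo):

        import torch

        a = torch.rand(3, device='cuda')  # e.g. a model output on the GPU
        b = torch.rand(3)                 # e.g. a target tensor built on the CPU

        # a + b would raise the backend mismatch error on older PyTorch versions
        b = b.to(a.device)                # move the CPU tensor to the same device
        c = a + b                         # both arguments now share a backend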

    opened by bigbird11 10
  • The result of CenterPoint training on KITTI.

    Hi, I am now studying how to train CenterPoint on KITTI. My setting is training on the original KITTI dataset for 20 epochs, and what I got is as follows:

    Car AP@0.70, 0.70, 0.70:
    bbox AP: 94.7996, 88.9117, 87.8464
    bev  AP: 89.7246, 86.8356, 83.5177
    3d   AP: 86.4783, 76.2949, 72.4491
    aos  AP: 94.75, 88.76, 87.63
    Car AP_R40@0.70, 0.70, 0.70:
    bbox AP: 96.9610, 91.3264, 88.8810
    bev  AP: 93.9845, 87.7600, 85.2048
    3d   AP: 88.2733, 75.9294, 73.1899
    aos  AP: 96.92, 91.16, 88.66
    Car AP@0.70, 0.50, 0.50:
    bbox AP: 94.7996, 88.9117, 87.8464
    bev  AP: 94.9225, 89.3233, 88.8077
    3d   AP: 94.8194, 89.2578, 88.6416
    aos  AP: 94.75, 88.76, 87.63
    Car AP_R40@0.70, 0.50, 0.50:
    bbox AP: 96.9610, 91.3264, 88.8810
    bev  AP: 97.2669, 93.8379, 93.0098
    3d   AP: 97.1964, 93.4531, 91.2541
    aos  AP: 96.92, 91.16, 88.66
    Pedestrian AP@0.50, 0.50, 0.50:
    bbox AP: 66.8845, 65.2675, 62.3663
    bev  AP: 57.1777, 53.5311, 50.7458
    3d   AP: 50.1384, 48.6857, 44.5479
    aos  AP: 64.70, 62.75, 59.74
    Pedestrian AP_R40@0.50, 0.50, 0.50:
    bbox AP: 68.0557, 65.3659, 62.4463
    bev  AP: 55.0341, 52.0861, 48.6433
    3d   AP: 48.6057, 45.7464, 41.7122
    aos  AP: 65.56, 62.53, 59.48
    Pedestrian AP@0.50, 0.25, 0.25:
    bbox AP: 66.8845, 65.2675, 62.3663
    bev  AP: 74.2538, 72.4498, 69.3171
    3d   AP: 74.1821, 72.2494, 69.1611
    aos  AP: 64.70, 62.75, 59.74
    Pedestrian AP_R40@0.50, 0.25, 0.25:
    bbox AP: 68.0557, 65.3659, 62.4463
    bev  AP: 74.8110, 73.2887, 70.1591
    3d   AP: 74.7206, 73.0705, 69.9633
    aos  AP: 65.56, 62.53, 59.48
    Cyclist AP@0.50, 0.50, 0.50:
    bbox AP: 84.2834, 73.6666, 70.5905
    bev  AP: 80.0970, 66.7770, 63.7807
    3d   AP: 74.6868, 62.2825, 57.8456
    aos  AP: 84.12, 73.11, 70.07
    Cyclist AP_R40@0.50, 0.50, 0.50:
    bbox AP: 88.0991, 74.7699, 71.1338
    bev  AP: 81.6918, 67.8036, 63.8179
    3d   AP: 75.0102, 61.3894, 57.8612
    aos  AP: 87.91, 74.20, 70.57
    Cyclist AP@0.50, 0.25, 0.25:
    bbox AP: 84.2834, 73.6666, 70.5905
    bev  AP: 81.9637, 69.6221, 67.1479
    3d   AP: 81.9637, 69.6218, 67.1479
    aos  AP: 84.12, 73.11, 70.07
    Cyclist AP_R40@0.50, 0.25, 0.25:
    bbox AP: 88.0991, 74.7699, 71.1338
    bev  AP: 85.6864, 70.9843, 67.4068
    3d   AP: 85.6864, 70.9842, 67.4062
    aos  AP: 87.91, 74.20, 70.57

    My questions are:

    1. I am not sure whether these results are right. Did you get similar results?
    2. What does "Cyclist AP@0.50, 0.25, 0.25" mean? Can we modify this setting, or is it fixed because we use the KITTI dataset?

    Thanks in advance!
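    For context (based on the standard OpenPCDet KITTI evaluation, not on an answer in this thread): the three numbers after AP@ are the IoU thresholds applied to the bbox, BEV, and 3D metrics respectively, the three values in each row correspond to the easy, moderate, and hard splits, and R40 marks the 40-recall-point variant of the metric. The relaxed 0.25/0.50 thresholds are printed for every run by the bundled KITTI evaluation code, where they can also be changed.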

    opened by yaxi333 8
  • Why there is completely different source code for KITTI dataset?

    Could you please explain why you did not extend the original repository for the KITTI dataset? I'm not sure which repository to use, as I would also like to evaluate on KITTI (besides nuScenes and Waymo).

    opened by AndreasR30 5
  • one stage model

    Hi, Tianwei. Thanks for your great work. I have checked your implementation of the CenterPoint model based on PCDet in this repo. Is this version the one-stage CenterPoint? When can we get the official two-stage model, or the latest CenterPoint++ model?

    Thanks!

    opened by HuangVictorAuto 4
  • TypeError: expected str, bytes or os.PathLike object, not NoneType

    Hi, I ran this code in Colab after installing all the requirements mentioned in Install.md and following Getting_started.md: !python /content/CenterPoint-KITTI-main/OpenPCDet/tools/train.py. I am getting this error:

    Traceback (most recent call last):
      File "/content/CenterPoint-KITTI-main/OpenPCDet/tools/train.py", line 198, in <module>
        main()
      File "/content/CenterPoint-KITTI-main/OpenPCDet/tools/train.py", line 59, in main
        args, cfg = parse_config()
      File "/content/CenterPoint-KITTI-main/OpenPCDet/tools/train.py", line 48, in parse_config
        cfg_from_yaml_file(args.cfg_file, cfg)
      File "/content/CenterPoint-KITTI-main/OpenPCDet/pcdet/config.py", line 72, in cfg_from_yaml_file
        with open(cfg_file, 'r') as f:
    TypeError: expected str, bytes or os.PathLike object, not NoneType
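    For context: the traceback shows that args.cfg_file is None, i.e. train.py was started without --cfg_file, so there is no YAML to open. Passing a config should fix it, e.g. !python /content/CenterPoint-KITTI-main/OpenPCDet/tools/train.py --cfg_file cfgs/kitti_models/centerpoint.yaml (the config path is an assumption based on the repo layout). A minimal sketch of a guard that fails earlier with a clearer message (a hypothetical addition, not code from this repo):

        import argparse

        parser = argparse.ArgumentParser()
        parser.add_argument('--cfg_file', type=str, default=None,
                            help='model config, e.g. cfgs/kitti_models/centerpoint.yaml')
        args = parser.parse_args()

        if args.cfg_file is None:
            # fail fast instead of crashing later inside cfg_from_yaml_file(None, cfg)
            parser.error('--cfg_file is required')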

    opened by bhargav-inthezone 4
  • On model robustness

    I've run inference on the PandaSet (Waymo) dataset with your KITTI weights and it doesn't work well, yet in 2D, Cityscapes models can run inference on KITTI. What are the differences between different point cloud datasets (apart from the number of laser lines)?

    opened by fyzhong 3
  • How to detect bounding boxes at the back of the vehicle?

    This is a dataset/general query.

    I have trained an object detector with the centerpoint_rcnn config, and the model trained and works well. But I noticed just now that the KITTI dataset has ground-truth bbox information only for objects in the FOV of the cameras, so the cars behind the vehicle are not detected. This is a problem for me, since I am working on moving object segmentation and I want to detect vehicles behind the car as well. Any suggestion on how that can be achieved? [Screenshot from 2021-10-18 12-47-22: the cars in front of the vehicle carrying the laser scanner are detected, but a car just behind the vehicle remains undetected.]
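    For context: the KITTI 3D benchmark only labels objects visible in the camera images, and, if I read the OpenPCDet KITTI loader correctly, it additionally keeps only LiDAR points inside the camera FOV (the FOV_POINTS_ONLY flag in the dataset config). Disabling that filter cannot create labels KITTI never had, so full 360° detection needs either a dataset with surround annotations (e.g. nuScenes or Waymo) or self-annotated rear objects.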

    opened by vardeep-sandhu 3
  • ERROR: Could not find a version that satisfies the requirement spconv (from pcdet); ERROR: No matching distribution found for spconv

    When I run pip install pcdet-0.3.0+95b7309-cp37-cp37m-linux_x86_64.whl, I get the error below. My environment: PyTorch 1.7.1, CUDA 11.1, A600.

    Processing ./pcdet-0.3.0+95b7309-cp37-cp37m-linux_x86_64.whl
    Requirement already satisfied: pyyaml in /home/CN/zizhang.wu/anaconda3/envs/CaDDN/lib/python3.7/site-packages (from pcdet==0.3.0+95b7309) (5.4.1)
    Requirement already satisfied: numba in /home/CN/zizhang.wu/anaconda3/envs/CaDDN/lib/python3.7/site-packages (from pcdet==0.3.0+95b7309) (0.52.0)
    Requirement already satisfied: numpy in /home/CN/zizhang.wu/anaconda3/envs/CaDDN/lib/python3.7/site-packages (from pcdet==0.3.0+95b7309) (1.20.1)
    ERROR: Could not find a version that satisfies the requirement spconv (from pcdet)
    ERROR: No matching distribution found for spconv
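    For context: spconv 1.x was never published on PyPI under the plain name spconv (it had to be built from source), so pip cannot resolve the pcdet dependency automatically. With CUDA 11.1, the spconv 2.x wheels, which are published per CUDA version, should satisfy it, e.g. pip install spconv-cu111. This is an assumption based on the spconv 2.x release scheme; note that spconv 2.x also needs the API changes shown in the multi-GPU issue below.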

    opened by rockywind 3
  • Question about CenterPoint's model

    Hi,

    sorry to bother you. Regarding the CenterPoint model, I have the following questions:

    1. Which is the first network layer of CenterPoint? In my opinion, the first layer may be the MeanVFE; is that correct?

    2. What are the input and output of this first layer, and what is the size of its input? Could you please point out where it is defined?

    Thank you so much.
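    For reference, in OpenPCDet-style voxel pipelines MeanVFE is indeed the first module after voxelization: it has no learned weights and simply averages the raw point features inside each voxel, mapping (num_voxels, max_points_per_voxel, C) to (num_voxels, C). A minimal sketch of that operation (the shapes follow the usual OpenPCDet layout and should be treated as assumptions):

        import torch

        def mean_vfe(voxel_features: torch.Tensor, voxel_num_points: torch.Tensor) -> torch.Tensor:
            # voxel_features:   (num_voxels, max_points, C), zero-padded per voxel
            # voxel_num_points: (num_voxels,), count of valid points per voxel
            points_sum = voxel_features.sum(dim=1)
            normalizer = voxel_num_points.clamp(min=1).view(-1, 1).float()
            return points_sum / normalizer  # padded zeros do not bias the mean

        feats = torch.rand(10, 5, 4)          # 10 voxels, up to 5 points, (x, y, z, intensity)
        counts = torch.randint(1, 6, (10,))
        print(mean_vfe(feats, counts).shape)  # torch.Size([10, 4])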

    opened by ZhangYu1ing 3
  • about the test on tracking in this project

    Hello! Thanks for your project! I would like to ask whether this project includes a tracking test on the KITTI dataset; I can't find any tracking code in tools. How can I run this project to get tracking results? Looking forward to your reply!

    opened by FreemanGong 2
  • Details about the KITTI model

    Hi, thanks for your excellent work! I noticed that unlike the models on nuScenes and Waymo, the KITTI model has no 'Shared Conv' and 'Separate Heads'. The size of the feature maps for the detector head is also different.

    Could you tell me the reason for this model design? I would like to use CenterPoint on a new dataset that is similar in size to KITTI and uses the same LiDAR. However, this dataset has 8 categories and suffers severe category imbalance, as in nuScenes.

    Can you give me some advice on the structure of CenterPoint, such as whether to use 'Shared Conv' and 'Separate Heads'?

    Also, I would like to ask how to set 'SAMPLE_GROUPS' for the different categories in GT Sampling.

    Thanks in advance!

    opened by Devoe-97 0
  • Multi-GPU training result is incorrect, how to correct it?

    My training result with a single GPU is shown in the first image, but the multi-GPU training result looks like the second image.

    Can you help me figure out what's wrong in multi-GPU mode?

    By the way, in order to run the repo code, I modified some code as follows:

    1. Modified the transform_points_to_voxels function in pcdet/datasets/processor/data_processor.py to adapt the spconv API calls to spconv 2.x:
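    # NOTE: this snippet assumes the surrounding module keeps its original imports:
    # `import numpy as np`, `import torch`, and `from functools import partial`.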
    def transform_points_to_voxels(self, data_dict=None, config=None, voxel_generator=None):
            if data_dict is None:
    #            try:
    #                from spconv.utils import VoxelGeneratorV2 as VoxelGenerator
    #            except:
    #                from spconv.utils import VoxelGenerator
    #
                from spconv.pytorch.utils import PointToVoxel
                voxel_generator = PointToVoxel(
                    vsize_xyz=config.VOXEL_SIZE,
                    coors_range_xyz=self.point_cloud_range,
                    num_point_features=self.num_point_features,
                    max_num_voxels=config.MAX_NUMBER_OF_VOXELS[self.mode],
                    max_num_points_per_voxel=config.MAX_POINTS_PER_VOXEL
                )
    #            voxel_generator = VoxelGenerator(
    #                voxel_size=config.VOXEL_SIZE,
    #                point_cloud_range=self.point_cloud_range,
    #                max_num_points=config.MAX_POINTS_PER_VOXEL,
    #                max_voxels=config.MAX_NUMBER_OF_VOXELS[self.mode]
    #            )
                grid_size = (self.point_cloud_range[3:6] - self.point_cloud_range[0:3]) / np.array(config.VOXEL_SIZE)
                self.grid_size = np.round(grid_size).astype(np.int64)
                self.voxel_size = config.VOXEL_SIZE
                return partial(self.transform_points_to_voxels, voxel_generator=voxel_generator)
    
            points = data_dict['points']
            # voxel_output = voxel_generator.generate(points)
            voxel_output = voxel_generator(torch.from_numpy(points))
            if isinstance(voxel_output, dict):
                voxels, coordinates, num_points = \
                    voxel_output['voxels'], voxel_output['coordinates'], voxel_output['num_points_per_voxel']
            else:
                voxels, coordinates, num_points = voxel_output
    
            if not data_dict['use_lead_xyz']:
                voxels = voxels[..., 3:]  # remove xyz in voxels(N, 3)
    
            data_dict['voxels'] = voxels
            data_dict['voxel_coords'] = coordinates
            data_dict['voxel_num_points'] = num_points
            return data_dict
    
    2. Modified the assign_targets function in pcdet/models/dense_heads/centerpoint_head_single.py to fix the tensor conversion problem (the numpy-based transpose fails on CUDA tensors):
    def assign_targets(self, gt_boxes):
            """Generate targets.
    
            Args:
                gt_boxes: (B, M, 8) box + cls 
    
            Returns:
                tuple[list[torch.Tensor]]: Tuple of targets including \
                    the following results in order.
    
                        - list[torch.Tensor]: Heatmap scores.
                        - list[torch.Tensor]: Ground truth boxes.
                        - list[torch.Tensor]: Indexes indicating the \
                            position of the valid boxes.
                        - list[torch.Tensor]: Masks indicating which \
                            boxes are valid.
            """
            gt_bboxes_3d, gt_labels_3d = gt_boxes[..., :-1], gt_boxes[..., -1]
    
            heatmaps, anno_boxes, inds, masks = multi_apply(
                self.get_targets_single, gt_bboxes_3d, gt_labels_3d)
            # transpose heatmaps; the tensors in each task have different shapes,
            # so do the transpose with plain Python lists (np.array would try to
            # convert the CUDA tensors to numpy and fail):
            # heatmaps = np.array(heatmaps).transpose(1, 0).tolist()
            heatmaps = list(map(list, zip(*heatmaps)))
            heatmaps = [torch.stack(hms_) for hms_ in heatmaps]
            # transpose anno_boxes
            # anno_boxes = np.array(anno_boxes).transpose(1, 0).tolist()
            anno_boxes = list(map(list, zip(*anno_boxes)))
            anno_boxes = [torch.stack(anno_boxes_) for anno_boxes_ in anno_boxes]
            # transpose inds
            # inds = np.array(inds).transpose(1, 0).tolist()
            inds = list(map(list, zip(*inds)))
            inds = [torch.stack(inds_) for inds_ in inds]
            # transpose masks
            # masks = np.array(masks).transpose(1, 0).tolist()
            masks = list(map(list, zip(*masks)))
            masks = [torch.stack(masks_) for masks_ in masks]
            
            all_targets_dict = {
                'heatmaps': heatmaps,
                'anno_boxes': anno_boxes,
                'inds': inds,
                'masks': masks
            }
            
            return all_targets_dict
    

    My commands are:

    single-GPU training: python train.py --cfg_file cfgs/kitti_models/centerpoint.yaml
    multi-GPU training: bash scripts/dist_train.sh 8 --cfg_file cfgs/kitti_models/centerpoint.yaml

    opened by chenrui17 0
  • RuntimeError: Error compiling objects for extension

    When I run setup.py, there are some errors. Here is my conda environment; can someone help me solve this problem?

    pcdet              0.3.0+95b7309    dev_0
    pcre               8.45             h295c915_0
    pillow             8.3.2            pypi_0    pypi
    pip                21.2.2           py36h06a4308_0
    protobuf           3.17.3           pypi_0    pypi
    pycparser          2.21             pyhd3eb1b0_0
    pyface             7.3.0            py36h06a4308_1
    pygments           2.11.2           pyhd3eb1b0_0
    pyparsing          2.4.7            pypi_0    pypi
    pyqt               5.9.2            py36h05f1152_2
    python             3.6.13           h12debd9_1
    python-dateutil    2.8.2            pypi_0    pypi
    pytorch            1.7.0            py3.6_cuda11.0.221_cudnn8.0.3_0    pyt

    Log message:

    running develop
    running egg_info
    writing pcdet.egg-info/PKG-INFO
    writing dependency_links to pcdet.egg-info/dependency_links.txt
    writing requirements to pcdet.egg-info/requires.txt
    writing top-level names to pcdet.egg-info/top_level.txt
    reading manifest file 'pcdet.egg-info/SOURCES.txt'
    adding license file 'LICENSE'
    writing manifest file 'pcdet.egg-info/SOURCES.txt'
    running build_ext
    building 'pcdet.ops.iou3d_nms.iou3d_nms_cuda' extension
    Emitting ninja build file /home/lidar/centerpoint_workspace/CenterPoint-KITTI/build/temp.linux-x86_64-3.6/build.ninja...
    Compiling objects...
    Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
    [1/1] /usr/local/cuda/bin/nvcc -I/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/site-packages/torch/include -I/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/site-packages/torch/include/TH -I/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/lidar/anaconda3/envs/centerpoint/include/python3.6m -c -c /home/lidar/centerpoint_workspace/CenterPoint-KITTI/pcdet/ops/iou3d_nms/src/iou3d_nms_kernel.cu -o /home/lidar/centerpoint_workspace/CenterPoint-KITTI/build/temp.linux-x86_64-3.6/pcdet/ops/iou3d_nms/src/iou3d_nms_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=iou3d_nms_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_86,code=sm_86 -std=c++14
    FAILED: /home/lidar/centerpoint_workspace/CenterPoint-KITTI/build/temp.linux-x86_64-3.6/pcdet/ops/iou3d_nms/src/iou3d_nms_kernel.o
    /usr/local/cuda/bin/nvcc -I/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/site-packages/torch/include -I/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/site-packages/torch/include/TH -I/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/lidar/anaconda3/envs/centerpoint/include/python3.6m -c -c /home/lidar/centerpoint_workspace/CenterPoint-KITTI/pcdet/ops/iou3d_nms/src/iou3d_nms_kernel.cu -o /home/lidar/centerpoint_workspace/CenterPoint-KITTI/build/temp.linux-x86_64-3.6/pcdet/ops/iou3d_nms/src/iou3d_nms_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=iou3d_nms_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_86,code=sm_86 -std=c++14
    nvcc fatal : Unsupported gpu architecture 'compute_86'
    ninja: build stopped: subcommand failed.
    Traceback (most recent call last):
      File "/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 1522, in _run_ninja_build
        env=env)
      File "/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/subprocess.py", line 438, in run
        output=stdout, stderr=stderr)
    subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "setup.py", line 106, in <module>
        'src/sampling_gpu.cu',
      File "/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/site-packages/setuptools/__init__.py", line 153, in setup
        return distutils.core.setup(**attrs)
      File "/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/distutils/core.py", line 148, in setup
        dist.run_commands()
      File "/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/distutils/dist.py", line 955, in run_commands
        self.run_command(cmd)
      File "/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/distutils/dist.py", line 974, in run_command
        cmd_obj.run()
      File "/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/site-packages/setuptools/command/develop.py", line 34, in run
        self.install_for_development()
      File "/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/site-packages/setuptools/command/develop.py", line 114, in install_for_development
        self.run_command('build_ext')
      File "/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/distutils/cmd.py", line 313, in run_command
        self.distribution.run_command(command)
      File "/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/distutils/dist.py", line 974, in run_command
        cmd_obj.run()
      File "/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/site-packages/setuptools/command/build_ext.py", line 79, in run
        _build_ext.run(self)
      File "/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/distutils/command/build_ext.py", line 339, in run
        self.build_extensions()
      File "/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 653, in build_extensions
        build_ext.build_extensions(self)
      File "/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/distutils/command/build_ext.py", line 448, in build_extensions
        self._build_extensions_serial()
      File "/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/distutils/command/build_ext.py", line 473, in _build_extensions_serial
        self.build_extension(ext)
      File "/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/site-packages/setuptools/command/build_ext.py", line 202, in build_extension
        _build_ext.build_extension(self, ext)
      File "/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/distutils/command/build_ext.py", line 533, in build_extension
        depends=ext.depends)
      File "/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 482, in unix_wrap_ninja_compile
        with_cuda=with_cuda)
      File "/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 1238, in _write_ninja_file_and_compile_objects
        error_prefix='Error compiling objects for extension')
      File "/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 1538, in _run_ninja_build
        raise RuntimeError(message) from e
    RuntimeError: Error compiling objects for extension
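    For context: nvcc fatal : Unsupported gpu architecture 'compute_86' means the installed CUDA toolkit predates Ampere support; sm_86 targets require CUDA 11.1 or newer, while this environment pairs PyTorch 1.7.0 with CUDA 11.0. The usual fixes are upgrading the CUDA toolkit, or building for an older architecture the toolkit does know, e.g. TORCH_CUDA_ARCH_LIST="7.5" python setup.py develop (PTX compiled for compute_75 can still JIT on an sm_86 card). Both are suggestions based on the log, not fixes confirmed in this thread.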

    opened by up198 0
  • RuntimeError: CUDA error: out of memory

    Hello, this is the error I get when I try to train with centerpoint.yaml:

    cfg.OPTIMIZATION = edict()
    2022-04-07 11:02:52,498 INFO cfg.OPTIMIZATION.BATCH_SIZE_PER_GPU: 4
    2022-04-07 11:02:52,498 INFO cfg.OPTIMIZATION.NUM_EPOCHS: 80
    2022-04-07 11:02:52,498 INFO cfg.OPTIMIZATION.OPTIMIZER: adam_onecycle
    2022-04-07 11:02:52,498 INFO cfg.OPTIMIZATION.LR: 0.003
    2022-04-07 11:02:52,498 INFO cfg.OPTIMIZATION.WEIGHT_DECAY: 0.01
    2022-04-07 11:02:52,498 INFO cfg.OPTIMIZATION.MOMENTUM: 0.9
    2022-04-07 11:02:52,498 INFO cfg.OPTIMIZATION.MOMS: [0.95, 0.85]
    2022-04-07 11:02:52,498 INFO cfg.OPTIMIZATION.PCT_START: 0.4
    2022-04-07 11:02:52,498 INFO cfg.OPTIMIZATION.DIV_FACTOR: 10
    2022-04-07 11:02:52,498 INFO cfg.OPTIMIZATION.DECAY_STEP_LIST: [35, 45]
    2022-04-07 11:02:52,498 INFO cfg.OPTIMIZATION.LR_DECAY: 0.1
    2022-04-07 11:02:52,498 INFO cfg.OPTIMIZATION.LR_CLIP: 1e-07
    2022-04-07 11:02:52,498 INFO cfg.OPTIMIZATION.LR_WARMUP: False
    2022-04-07 11:02:52,498 INFO cfg.OPTIMIZATION.WARMUP_EPOCH: 1
    2022-04-07 11:02:52,498 INFO cfg.OPTIMIZATION.GRAD_NORM_CLIP: 10
    2022-04-07 11:02:52,498 INFO cfg.TAG: centerpoint
    2022-04-07 11:02:52,498 INFO cfg.EXP_GROUP_PATH: kitti_models
    2022-04-07 11:02:52,570 INFO Database filter by min points Car: 14357 => 13532
    2022-04-07 11:02:52,571 INFO Database filter by min points Pedestrian: 2207 => 2168
    2022-04-07 11:02:52,571 INFO Database filter by min points Cyclist: 734 => 705
    2022-04-07 11:02:52,582 INFO Database filter by difficulty Car: 13532 => 10759
    2022-04-07 11:02:52,584 INFO Database filter by difficulty Pedestrian: 2168 => 2075
    2022-04-07 11:02:52,585 INFO Database filter by difficulty Cyclist: 705 => 581
    2022-04-07 11:02:52,588 INFO Loading KITTI dataset
    2022-04-07 11:02:52,646 INFO Total samples for KITTI dataset: 3712
    Traceback (most recent call last):
      File "train.py", line 202, in <module>
        main()
      File "train.py", line 116, in main
        model = build_network(model_cfg=cfg.MODEL, num_class=len(cfg.CLASS_NAMES), dataset=train_set)
      File "../pcdet/models/__init__.py", line 18, in build_network
        model_cfg=model_cfg, num_class=num_class, dataset=dataset
      File "../pcdet/models/detectors/__init__.py", line 30, in build_detector
        model_cfg=model_cfg, num_class=num_class, dataset=dataset
      File "../pcdet/models/detectors/centerpoint.py", line 7, in __init__
        self.module_list = self.build_networks()
      File "../pcdet/models/detectors/detector3d_template.py", line 47, in build_networks
        model_info_dict=model_info_dict
      File "../pcdet/models/detectors/detector3d_template.py", line 136, in build_dense_head
        voxel_size=model_info_dict.get('voxel_size', False)
      File "../pcdet/models/dense_heads/center_head.py", line 66, in __init__
        [self.class_names.index(x) for x in cur_class_names if x in class_names]
    RuntimeError: CUDA error: out of memory

    Based on a GTX 3070, CUDA 11.1, PyTorch 1.8.2. Can you give me some suggestions?
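    For context: an out-of-memory error this early, while build_network is still constructing the model, usually means the GPU's memory is already largely occupied (worth checking with nvidia-smi); an 8 GB GTX 3070 may also simply be too small for the BATCH_SIZE_PER_GPU of 4 shown in the log, in which case lowering it (e.g. via --batch_size on train.py, or by editing OPTIMIZATION.BATCH_SIZE_PER_GPU in centerpoint.yaml) is the first thing to try. These are general suggestions, not fixes confirmed in this thread.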

    opened by BigPig117 0
  • KITTI dataset preprocessing without images

    Since this is a LiDAR-only object detection model, could we preprocess the data without the images? I think that if we don't use the 'bbox' evaluation metric, we can ignore the image-based infos, right?

    Another question: why do we only count the points inside the ground-truth boxes that lie in the FOV of the image?

    opened by DJNing 4
Owner
Tianwei Yin
Student@UT Austin
PyTorch code for the paper "FIERY: Future Instance Segmentation in Bird's-Eye view from Surround Monocular Cameras"

FIERY This is the PyTorch implementation for inference and training of the future prediction bird's-eye view network as described in: FIERY: Future In

Wayve 406 Dec 24, 2022
Export CenterPoint PonintPillars ONNX Model For TensorRT

CenterPoint-PonintPillars Pytroch model convert to ONNX and TensorRT Welcome to CenterPoint! This project is fork from tianweiy/CenterPoint. I impleme

CarkusL 149 Dec 13, 2022
Joint detection and tracking model named DEFT, or ``Detection Embeddings for Tracking.

DEFT: Detection Embeddings for Tracking DEFT: Detection Embeddings for Tracking, Mohamed Chaabane, Peter Zhang, J. Ross Beveridge, Stephen O'Hara

Mohamed Chaabane 253 Dec 18, 2022
Blender add-on: Add to Cameras menu: View → Camera, View → Add Camera, Camera → View, Previous Camera, Next Camera

Blender add-on: Camera additions In 3D view, it adds these actions to the View|Cameras menu: View → Camera : set the current camera to the 3D view Vie

German Bauer 11 Feb 8, 2022
(CVPR 2022 - oral) Multi-View Depth Estimation by Fusing Single-View Depth Probability with Multi-View Geometry

Multi-View Depth Estimation by Fusing Single-View Depth Probability with Multi-View Geometry Official implementation of the paper Multi-View Depth Est

Bae, Gwangbin 138 Dec 28, 2022
Towards End-to-end Video-based Eye Tracking

Towards End-to-end Video-based Eye Tracking The code accompanying our ECCV 2020 publication and dataset, EVE. Authors: Seonwook Park, Emre Aksan, Xuco

Seonwook Park 76 Dec 12, 2022
Object tracking and object detection is applied to track golf puts in real time and display stats/games.

Putting_Game Object tracking and object detection is applied to track golf puts in real time and display stats/games. Works best with the Perfect Prac

Max 1 Dec 29, 2021
Official PyTorch implementation of Joint Object Detection and Multi-Object Tracking with Graph Neural Networks

This is the official PyTorch implementation of our paper: "Joint Object Detection and Multi-Object Tracking with Graph Neural Networks". Our project website and video demos are here.

Richard Wang 443 Dec 6, 2022
Object Detection and Multi-Object Tracking

Object Detection and Multi-Object Tracking

Bobby Chen 1.6k Jan 4, 2023
(CVPR 2021) Back-tracing Representative Points for Voting-based 3D Object Detection in Point Clouds

BRNet Introduction This is a release of the code of our paper Back-tracing Representative Points for Voting-based 3D Object Detection in Point Clouds,

null 86 Oct 5, 2022
Python package for multiple object tracking research with focus on laboratory animals tracking.

motutils is a Python package for multiple object tracking research with focus on laboratory animals tracking. Features loads: MOTChallenge CSV, sleap

Matěj Šmíd 2 Sep 5, 2022
A motion tracking system for any arbitaray points in a video frame.

PointTracking This code is written by Majid Masoumi @ [email protected] I have used lucas kanade optical flow technique to track the points b

Dr. Majid Masoumi 1 Feb 9, 2022
Web service for facial landmark detection, head pose estimation, facial action unit recognition, and eye-gaze estimation based on OpenFace 2.0

OpenGaze: Web Service for OpenFace Facial Behaviour Analysis Toolkit Overview OpenFace is a fantastic tool intended for computer vision and machine le

Sayom Shakib 4 Nov 3, 2022
OpenFace – a state-of-the art tool intended for facial landmark detection, head pose estimation, facial action unit recognition, and eye-gaze estimation.

OpenFace 2.2.0: a facial behavior analysis toolkit Over the past few years, there has been an increased interest in automatic facial behavior analysis

Tadas Baltrusaitis 5.8k Dec 31, 2022
Object Tracking and Detection Using OpenCV

Object tracking is one such application of computer vision where an object is detected in a video, otherwise interpreted as a set of frames, and the object’s trajectory is estimated. For instance, you have a video of a baseball match, and you want to track the ball’s location constantly throughout the video.

Happy  N. Monday 4 Aug 21, 2022
This is the official implementation of 3D-CVF: Generating Joint Camera and LiDAR Features Using Cross-View Spatial Feature Fusion for 3D Object Detection, built on SECOND.

3D-CVF This is the official implementation of 3D-CVF: Generating Joint Camera and LiDAR Features Using Cross-View Spatial Feature Fusion for 3D Object

YecheolKim 97 Dec 20, 2022
Classify bird species based on their songs using SIamese Networks and 1D dilated convolutions.

The goal is to classify different birds species based on their songs/calls. Spectrograms have been extracted from the audio samples and used as features for classification.

Aditya Dutt 9 Dec 27, 2022
Flappy bird automation using Neuroevolution of Augmenting Topologies (NEAT) in Python

FlappyAI Flappy bird automation using Neuroevolution of Augmenting Topologies (NEAT) in Python Everything Used Genetic Algorithm especially NEAT conce

Eryawan Presma Y. 2 Mar 24, 2022