Export CenterPoint PointPillars ONNX Model for TensorRT

Overview

Convert the CenterPoint-PointPillars PyTorch model to ONNX and deploy it with TensorRT.

Welcome to CenterPoint! This project is a fork of tianweiy/CenterPoint. I implement some code to export the CenterPoint-PointPillars ONNX model and deploy the ONNX model using TensorRT.

Center-based 3D Object Detection and Tracking

3D Object Detection and Tracking using center points in the bird's-eye view.

Center-based 3D Object Detection and Tracking,
Tianwei Yin, Xingyi Zhou, Philipp Krähenbühl,
arXiv technical report (arXiv 2006.11275)

@article{yin2020center,
  title={Center-based 3D Object Detection and Tracking},
  author={Yin, Tianwei and Zhou, Xingyi and Kr{\"a}henb{\"u}hl, Philipp},
  journal={arXiv:2006.11275},
  year={2020},
}

NEWS

[2021-01-06] CenterPoint v1.0 is released. Without bells and whistles, we rank first among all Lidar-only methods on the Waymo Open Dataset with a single model that runs at 11 FPS. Check out CenterPoint's model zoo for Waymo and nuScenes.

[2020-12-11] 3 out of the top 4 entries in the recent NeurIPS 2020 nuScenes 3D Detection challenge used CenterPoint. Congratulations to the other participants, and please stay tuned for more updates on nuScenes and Waymo soon.

Contact

Any questions or suggestions are welcome!

Tianwei Yin [email protected] Xingyi Zhou [email protected]

Abstract

Three-dimensional objects are commonly represented as 3D boxes in a point cloud. This representation mimics the well-studied image-based 2D bounding-box detection but comes with additional challenges. Objects in a 3D world do not follow any particular orientation, and box-based detectors have difficulties enumerating all orientations or fitting an axis-aligned bounding box to rotated objects. In this paper, we instead propose to represent, detect, and track 3D objects as points. Our framework, CenterPoint, first detects centers of objects using a keypoint detector and regresses to other attributes, including 3D size, 3D orientation, and velocity. In a second stage, it refines these estimates using additional point features on the object. In CenterPoint, 3D object tracking simplifies to greedy closest-point matching. The resulting detection and tracking algorithm is simple, efficient, and effective. CenterPoint achieved state-of-the-art performance on the nuScenes benchmark for both 3D detection and tracking, with 65.5 NDS and 63.8 AMOTA for a single model. On the Waymo Open Dataset, CenterPoint outperforms all previous single-model methods by a large margin and ranks first among all Lidar-only submissions.

Highlights

  • Simple: Two-sentence method summary: we use a standard 3D point-cloud encoder with a few convolutional layers in the head to produce a bird's-eye-view heatmap and other dense regression outputs, including the offset to centers in the previous frame. Detection is simple local peak extraction with refinement (see the sketch after this list), and tracking is closest-distance matching.

  • Fast and Accurate: Our best single model achieves 71.9 mAPH on Waymo and 65.5 NDS on nuScenes while running at 11+ FPS.

  • Extensible: A simple drop-in replacement for anchor-based detectors in your novel algorithms.
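
For intuition, here is a minimal sketch of the local peak extraction idea, i.e. the standard CenterNet max-pool trick (illustrative code under stated assumptions, not this repo's exact post-processing):

    import torch
    import torch.nn.functional as F

    def extract_peaks(heatmap, k=100, kernel=3):
        # heatmap: (B, C, H, W) per-class center heatmap after sigmoid.
        # A cell is a peak if it equals the max over its 3x3 neighborhood.
        pad = (kernel - 1) // 2
        hmax = F.max_pool2d(heatmap, kernel, stride=1, padding=pad)
        peaks = heatmap * (hmax == heatmap).float()
        # Keep the top-k peaks over all classes and locations; tracking then
        # greedily matches these centers (offset by the predicted velocity)
        # to the closest detections of the previous frame.
        scores, indices = peaks.flatten(1).topk(k)
        return scores, indices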

Main results

3D detection on Waymo test set

|          | #Frame | Veh_L2 | Ped_L2 | Cyc_L2 | mAPH | FPS |
|----------|--------|--------|--------|--------|------|-----|
| VoxelNet | 1      | 71.9   | 67.0   | 68.2   | 69.0 | 13  |
| VoxelNet | 2      | 73.0   | 71.5   | 71.3   | 71.9 | 11  |

3D detection on Waymo domain adaptation test set

|          | #Frame | Veh_L2 | Ped_L2 | Cyc_L2 | mAPH | FPS |
|----------|--------|--------|--------|--------|------|-----|
| VoxelNet | 2      | 56.1   | 47.8   | 65.2   | 56.3 | 11  |

3D detection on nuScenes test set

|          | mAP ↑ | NDS ↑ | PKL ↓ | FPS ↑ |
|----------|-------|-------|-------|-------|
| VoxelNet | 58.0  | 65.5  | 0.69  | 11    |

3D tracking on Waymo test set

|          | #Frame | Veh_L2 | Ped_L2 | Cyc_L2 | MOTA | FPS |
|----------|--------|--------|--------|--------|------|-----|
| VoxelNet | 2      | 59.4   | 56.6   | 60.0   | 58.7 | 11  |

3D Tracking on nuScenes test set

|                      | AMOTA ↑ | AMOTP ↓ |
|----------------------|---------|---------|
| VoxelNet (flip test) | 63.8    | 0.555   |

All results are tested on a Titan RTX GPU with batch size 1.

Third-party resources

  • AFDet: another work inspired by CenterPoint that achieves good performance on the KITTI/Waymo datasets.
  • mmdetection3d: CenterPoint in the mmdet framework.

Use CenterPoint

Installation

Please refer to INSTALL to set up libraries needed for distributed training and sparse convolution.

First download the model (by default, centerpoint_pillar_512) and put it in work_dirs/centerpoint_pillar_512_demo.

We provide a driving sequence clip from the nuScenes dataset. Download the folder and put it in the main directory.
Then run a demo with python tools/demo.py. If set up correctly, you will see an output video like this (red boxes are ground-truth objects, blue are predictions):

Benchmark Evaluation and Training

Please refer to GETTING_START to prepare the data. Then follow the instructions there to reproduce our detection and tracking results. All detection configurations are included in configs, and we provide the scripts for all tracking experiments in tracking_scripts.

Export ONNX

I divide the PointPillars model into two parts: pfe (the PillarFeatureNet) and rpn (the RPN and CenterHead). The PointPillarsScatter module itself isn't exported; I use a ScatterND node in its place.
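
For reference, the scatter that the ScatterND node stands in for is just an indexed write of each pillar's feature vector into a dense bird's-eye-view canvas. A minimal PyTorch sketch (the 512x512 grid follows the config below; function and argument names are illustrative, not the repo's exact code):

    import torch

    def scatter_pillars(pillar_features, coords, nx=512, ny=512):
        # pillar_features: (P, 64), one feature vector per non-empty pillar.
        # coords: (P, 2) integer (y, x) BEV cell index of each pillar.
        C = pillar_features.shape[1]
        canvas = torch.zeros(C, ny * nx, dtype=pillar_features.dtype)
        flat = coords[:, 0] * nx + coords[:, 1]  # linearized cell index
        canvas[:, flat] = pillar_features.t()    # the write ScatterND expresses
        return canvas.view(1, C, ny, nx)         # dense input for rpn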

  • Install packages

    pip install onnx onnx-simplifier onnxruntime
  • Step 1. Download the trained model (latest.pth) and the nuScenes mini dataset (v1.0-mini.tar).

  • Step 2. Prepare the dataset. Please refer to docs/NUSC.md.

  • Step 3. Export pfe.onnx and rpn.onnx.

    python tool/export_pointpillars_onnx.py
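
    Conceptually the script boils down to two torch.onnx.export calls, one per sub-network. A sketch, where pfe_model and rpn_model stand for the two sub-modules and the dummy shapes match the exported model's fixed inputs (up to 30000 pillars, 20 points per pillar, 10 features per point); opset 11 is an assumption here, since ScatterND requires opset >= 11:

        import torch

        # pfe: per-pillar feature extraction on a fixed-size pillar tensor.
        dummy_pillars = torch.randn(1, 10, 30000, 20)
        torch.onnx.export(pfe_model, dummy_pillars, "onnx_model/pfe.onnx",
                          opset_version=11)

        # rpn: a 2D CNN over the scattered 512x512 BEV canvas.
        dummy_canvas = torch.randn(1, 64, 512, 512)
        torch.onnx.export(rpn_model, dummy_canvas, "onnx_model/rpn.onnx",
                          opset_version=11)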
  • Step 4. Use onnx-simplifier and the provided script to simplify pfe.onnx and rpn.onnx.

    python tool/simplify_model.py
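
    tool/simplify_model.py wraps onnx-simplifier; the equivalent standalone usage is roughly this sketch (output filenames are illustrative):

        import onnx
        from onnxsim import simplify

        for name in ["pfe", "rpn"]:
            model = onnx.load(f"onnx_model/{name}.onnx")
            # Constant-fold shape computations and remove redundant ops.
            model_simp, check = simplify(model)
            assert check, "simplified model failed the equivalence check"
            onnx.save(model_simp, f"onnx_model/{name}_sim.onnx")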
  • Step 5. Merge pfe.onnx and rpn.onnx. We use a ScatterND node to connect pfe and rpn. TensorRT doesn't support the ScatterND operator, so if you want to run CenterPoint-PointPillars with TensorRT, you can run pfe.onnx and rpn.onnx separately, as sketched below.

    python tool/merge_pfe_rpn_model.py

    All ONNX models are saved in onnx_model.
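
    For a quick correctness check, the two sub-models can be chained manually with onnxruntime, doing the scatter on the host in between. A sketch, assuming pillars and coords are the preprocessed inputs; the pfe output layout and the rpn input name are illustrative, so inspect the actual graphs (e.g. with Netron) for the real ones:

        import numpy as np
        import onnxruntime as ort

        pfe = ort.InferenceSession("onnx_model/pfe.onnx")
        rpn = ort.InferenceSession("onnx_model/rpn.onnx")

        # pillars: (1, 10, 30000, 20) float32; coords: (30000, 2) int (y, x).
        pillar_feats = pfe.run(None, {"input.1": pillars})[0]  # e.g. (1, 64, 30000)

        # Host-side scatter standing in for the ScatterND node.
        canvas = np.zeros((64, 512 * 512), dtype=np.float32)
        flat = coords[:, 0] * 512 + coords[:, 1]
        canvas[:, flat] = pillar_feats[0]
        bev = canvas.reshape(1, 64, 512, 512)

        head_outputs = rpn.run(None, {"rpn_input": bev})  # illustrative input name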

    I add an argument (export_onnx) to the config file for exporting the ONNX model:

    import logging  # needed for the RPN logger below

    model = dict(
        type="PointPillars",
        pretrained=None,
        export_onnx=True,  # for exporting the ONNX model
        reader=dict(
            type="PillarFeatureNet",
            num_filters=[64, 64],
            num_input_features=5,
            with_distance=False,
            voxel_size=(0.2, 0.2, 8),
            pc_range=(-51.2, -51.2, -5.0, 51.2, 51.2, 3.0),
            export_onnx=True,  # for exporting the ONNX model
        ),
        backbone=dict(type="PointPillarsScatter", ds_factor=1),
        neck=dict(
            type="RPN",
            layer_nums=[3, 5, 5],
            ds_layer_strides=[2, 2, 2],
            ds_num_filters=[64, 128, 256],
            us_layer_strides=[0.5, 1, 2],
            us_num_filters=[128, 128, 128],
            num_input_features=64,
            logger=logging.getLogger("RPN"),
        ),
        # ... bbox_head and the remaining fields are unchanged
    )

Centerpoint Pointpillars For TensorRT

See the README in tensorrt/samples/centerpoint.

License

CenterPoint is released under the MIT license (see LICENSE). It is developed based on a forked version of det3d. We also incorporate a large amount of code from CenterNet and CenterTrack. See the NOTICE for details. Note that both the nuScenes and Waymo datasets are under non-commercial licenses.

Acknowledgement

This project would not be possible without multiple great open-sourced codebases. We list some notable examples below.

Comments
  • Help with use of onnx model and TensorRT

    Hello @CarkusL, I haven't tested your code for exporting to an ONNX model yet, but congratulations. I tried to implement the same export to ONNX over the last few days until I realized that exporting PointPillars as a whole model is difficult because of the PillarsScatter backbone.

    Have you tried using your ONNX model in TensorRT, or what is the purpose of converting the model to ONNX in your case? In my attempts the "ScatterND" operation was not supported in TensorRT, which is why I gave up. Do you maybe have an idea how to do the same operation without Scatter, some other alternative?

    I noticed that in order to get the final results for training or inference, the functions self.bbox_head.loss and self.bbox_head.predict here https://github.com/CarkusL/CenterPoint/blob/4f2fa6d0159841a8a09c3731ce5eb849f2fe58b2/det3d/models/detectors/point_pillars.py#L56 should be adapted to the ONNX output, because originally in the PyTorch code the output of the head is a list of dictionaries, but in ONNX the outputs are quite different... Are you also working on this adaptation of the bbox_head functions to the ONNX output for further post-processing?

    opened by xavidzo 5
  • engine.cpp (1036) - Cuda Error in executeInternal: 700 (an illegal memory access was encountered)

    I created the ONNX models without any problem, and there was also nothing wrong with the compilation of the TRT sample. However, I get the output below when the sample is run.

    filePath[idx]: ../data/centerpoint//points/0106a9b8e65f4ad1867b44591aeed8b0.bin
    [12/30/2021-15:56:31] [I] [INFO] pointNum : 282378
    [12/30/2021-15:56:31] [I] PreProcess Time: 31.4462 ms
    [12/30/2021-15:56:31] [I] inferenceDuration Time: 9.84006 ms
    [12/30/2021-15:56:31] [I] PostProcessDuration Time: 1.9788 ms
    filePath[idx]: ../data/centerpoint//points/048a45dd2cf54aa5808d8ccc85731d44.bin
    [12/30/2021-15:56:31] [I] [INFO] pointNum : 278690
    [12/30/2021-15:56:31] [I] PreProcess Time: 30.7371 ms
    [12/30/2021-15:56:31] [I] inferenceDuration Time: 9.28456 ms
    [12/30/2021-15:56:31] [I] PostProcessDuration Time: 11.9031 ms
    filePath[idx]: ../data/centerpoint//points/05bc09f952ab4cf8b754405f20d503c5.bin
    [12/30/2021-15:56:31] [I] [INFO] pointNum : 243328
    [12/30/2021-15:56:31] [I] PreProcess Time: 22.8271 ms
    [12/30/2021-15:56:31] [I] inferenceDuration Time: 10.7552 ms
    [12/30/2021-15:56:31] [I] PostProcessDuration Time: 2.65588 ms
    filePath[idx]: ../data/centerpoint//points/06be0e3b665c44fa8d17d9f4770bdf9c.bin
    [12/30/2021-15:56:31] [I] [INFO] pointNum : 258553
    [12/30/2021-15:56:31] [I] PreProcess Time: 29.4288 ms
    [12/30/2021-15:56:31] [I] inferenceDuration Time: 9.25186 ms
    [12/30/2021-15:56:31] [I] PostProcessDuration Time: 7.31504 ms
    filePath[idx]: ../data/centerpoint//points/07fad91090c746ccaa1b2bdb55329e20.bin
    [12/30/2021-15:56:31] [I] [INFO] pointNum : 285130
    [12/30/2021-15:56:31] [I] PreProcess Time: 29.1292 ms
    [12/30/2021-15:56:31] [I] inferenceDuration Time: 9.2483 ms
    [12/30/2021-15:56:31] [I] PostProcessDuration Time: 8.86581 ms
    filePath[idx]: ../data/centerpoint//points/092051710b7b4d9294e98dcb3f0f7be1.bin
    [12/30/2021-15:56:31] [I] [INFO] pointNum : 285204
    [12/30/2021-15:56:31] [E] [TRT] engine.cpp (1036) - Cuda Error in executeInternal: 700 (an illegal memory access was encountered)
    [12/30/2021-15:56:31] [E] [TRT] FAILED_EXECUTION: std::exception
    [12/30/2021-15:56:31] [E] [TRT] engine.cpp (169) - Cuda Error in ~ExecutionContext: 700 (an illegal memory access was encountered)
    [12/30/2021-15:56:31] [E] [TRT] INTERNAL_ERROR: std::exception
    [12/30/2021-15:56:31] [E] [TRT] Parameter check failed at: safeContext.cpp::terminateCommonContext::216, condition: cudnnDestroy(context.cudnn) failure.
    [12/30/2021-15:56:31] [E] [TRT] Parameter check failed at: safeContext.cpp::terminateCommonContext::221, condition: cudaEventDestroy(context.start) failure.
    [12/30/2021-15:56:31] [E] [TRT] Parameter check failed at: safeContext.cpp::terminateCommonContext::226, condition: cudaEventDestroy(context.stop) failure.
    [12/30/2021-15:56:31] [E] [TRT] ../rtSafe/safeRuntime.cpp (32) - Cuda Error in free: 700 (an illegal memory access was encountered)
    terminate called after throwing an instance of 'nvinfer1::CudaError'
      what(): std::exception

    opened by OrcunCanDeniz 4
  • onnx result and torch result don't match

    Hi CarkusL, thanks a lot for your work. I used the same tools and the same config to convert the torch model to an ONNX model, and I get pfe.onnx and rpn.onnx. Before simplifying the ONNX models, I used onnxruntime to check whether the ONNX results are consistent with the torch model, but I got negative results: for both pfe and rpn, the ONNX inference results are very different from the torch results. I have not done the simplification yet. Could the simplification account for the big difference in the models' inference results?

    Do you have any idea what the problem might be?

    opened by MengWangTHU 4
  • free(): invalid next size (normal)

    I observed that the infer() function ends successfully; however, none of the parts coming after the execution of infer() can execute, because of the error free(): invalid next size (normal). When I ran valgrind:

    valgrind: m_mallocfree.c:307 (get_bszB_as_is): Assertion 'bszB_lo == bszB_hi' failed.
    valgrind: Heap block lo/hi size mismatch: lo = 24002528, hi = 3181631744.
    This is probably caused by your program erroneously writing past the end of a heap block and corrupting heap metadata. If you fix any invalid writes reported by Memcheck, this assertion failure will probably go away. Please try that before reporting this as a bug.

    opened by OrcunCanDeniz 2
  • Size mismatch happens when I use your config file xxx_export_onnx.py for training; when I set export_onnx=False in the config file, training works fine.

    (centerpoint-trt) qwe@qwe:~/project/centerpoint-trt/CenterPoint$ python -m torch.distributed.launch --nproc_per_node=4 ./tools/train.py ./configs/nusc/pp/nusc_centerpoint_pp_02voxel_two_pfn_10sweep_demo_export_onnx.py
    No Tensorflow
    No Tensorflow
    No Tensorflow
    No Tensorflow
    2021-10-22 16:18:38,149 - INFO - Distributed training: True
    2021-10-22 16:18:38,151 - INFO - torch.backends.cudnn.benchmark: False
    Use HM Bias: -2.19
    Use HM Bias: -2.19
    Use HM Bias: -2.19
    2021-10-22 16:18:38,219 - INFO - Finish RPN Initialization
    2021-10-22 16:18:38,219 - INFO - num_classes: [1, 2, 2, 1, 2, 2]
    Use HM Bias: -2.19
    2021-10-22 16:18:38,269 - INFO - Finish CenterHead Initialization
    2021-10-22 16:18:43,309 - INFO - {'car': 5, 'truck': 5, 'bus': 5, 'trailer': 5, 'construction_vehicle': 5, 'traffic_cone': 5, 'barrier': 5, 'motorcycle': 5, 'bicycle': 5, 'pedestrian': 5}
    2021-10-22 16:18:43,310 - INFO - [-1]
    2021-10-22 16:18:47,357 - INFO - load 62964 traffic_cone database infos
    2021-10-22 16:18:47,357 - INFO - load 65262 truck database infos
    2021-10-22 16:18:47,357 - INFO - load 339949 car database infos
    2021-10-22 16:18:47,357 - INFO - load 161928 pedestrian database infos
    2021-10-22 16:18:47,357 - INFO - load 26297 ignore database infos
    2021-10-22 16:18:47,358 - INFO - load 11050 construction_vehicle database infos
    2021-10-22 16:18:47,358 - INFO - load 107507 barrier database infos
    2021-10-22 16:18:47,358 - INFO - load 8846 motorcycle database infos
    2021-10-22 16:18:47,358 - INFO - load 8185 bicycle database infos
    2021-10-22 16:18:47,358 - INFO - load 12286 bus database infos
    2021-10-22 16:18:47,358 - INFO - load 19202 trailer database infos
    10
    2021-10-22 16:18:48,855 - INFO - After filter database:
    2021-10-22 16:18:48,857 - INFO - load 55823 traffic_cone database infos
    2021-10-22 16:18:48,857 - INFO - load 60428 truck database infos
    2021-10-22 16:18:48,857 - INFO - load 294575 car database infos
    2021-10-22 16:18:48,857 - INFO - load 148872 pedestrian database infos
    2021-10-22 16:18:48,857 - INFO - load 26297 ignore database infos
    2021-10-22 16:18:48,857 - INFO - load 10591 construction_vehicle database infos
    2021-10-22 16:18:48,857 - INFO - load 102093 barrier database infos
    2021-10-22 16:18:48,857 - INFO - load 8055 motorcycle database infos
    2021-10-22 16:18:48,857 - INFO - load 7533 bicycle database infos
    2021-10-22 16:18:48,857 - INFO - load 11622 bus database infos
    2021-10-22 16:18:48,857 - INFO - load 18104 trailer database infos

    2021-10-22 16:18:53,526 - INFO - Start running, host: xxx@xxx, work_dir: /home/qwe/project/centerpoint-trt/CenterPoint/work_dirs/nusc_centerpoint_pp_02voxel_two_pfn_10sweep_demo_export_onnx
    2021-10-22 16:18:53,526 - INFO - workflow: [('train', 1)], max: 20 epochs
    2021-10-22 16:19:03,373 - INFO - finding looplift candidates
    2021-10-22 16:19:04,209 - INFO - finding looplift candidates
    2021-10-22 16:19:04,322 - INFO - finding looplift candidates
    2021-10-22 16:19:04,345 - INFO - finding looplift candidates
    2021-10-22 16:19:04,520 - INFO - finding looplift candidates
    2021-10-22 16:19:04,530 - INFO - finding looplift candidates
    2021-10-22 16:19:04,562 - INFO - finding looplift candidates
    2021-10-22 16:19:04,812 - INFO - finding looplift candidates
    Traceback (most recent call last):
      File "./tools/train.py", line 137, in <module>
        main()
      File "./tools/train.py", line 132, in main
        logger=logger,
      File "/home/qwe/project/centerpoint-trt/CenterPoint/det3d/torchie/apis/train.py", line 327, in train_detector
        trainer.run(data_loaders, cfg.workflow, cfg.total_epochs, local_rank=cfg.local_rank)
      File "/home/qwe/project/centerpoint-trt/CenterPoint/det3d/torchie/trainer/trainer.py", line 543, in run
        epoch_runner(data_loaders[i], self.epoch, **kwargs)
      File "/home/qwe/project/centerpoint-trt/CenterPoint/det3d/torchie/trainer/trainer.py", line 410, in train
        self.model, data_batch, train_mode=True, **kwargs
      File "/home/qwe/project/centerpoint-trt/CenterPoint/det3d/torchie/trainer/trainer.py", line 368, in batch_processor_inline
        losses = model(example, return_loss=True)
      File "/home/qwe/anaconda3/envs/centerpoint-trt/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/qwe/anaconda3/envs/centerpoint-trt/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 376, in forward
        output = self.module(*inputs[0], **kwargs[0])
      File "/home/qwe/anaconda3/envs/centerpoint-trt/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/qwe/project/centerpoint-trt/CenterPoint/det3d/models/detectors/point_pillars.py", line 50, in forward
        x = self.extract_feat(data)
      File "/home/qwe/project/centerpoint-trt/CenterPoint/det3d/models/detectors/point_pillars.py", line 25, in extract_feat
        data["features"], data["num_voxels"], data["coors"]
      File "/home/qwe/anaconda3/envs/centerpoint-trt/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/qwe/project/centerpoint-trt/CenterPoint/det3d/models/readers/pillar_encoder.py", line 156, in forward
        features = pfn(features)
      File "/home/qwe/anaconda3/envs/centerpoint-trt/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/qwe/project/centerpoint-trt/CenterPoint/det3d/models/readers/pillar_encoder.py", line 42, in forward
        x = self.linear(inputs)
      File "/home/qwe/anaconda3/envs/centerpoint-trt/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/qwe/anaconda3/envs/centerpoint-trt/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 92, in forward
        return F.linear(input, self.weight, self.bias)
      File "/home/qwe/anaconda3/envs/centerpoint-trt/lib/python3.6/site-packages/torch/nn/functional.py", line 1408, in linear
        output = input.matmul(weight.t())
    RuntimeError: size mismatch, m1: [1683720 x 5], m2: [10 x 32] at /opt/conda/conda-bld/pytorch_1556653099582/work/aten/src/THC/generic/THCTensorMathBlas.cu:268


    Just as I mentioned above, something goes wrong when I set export_onnx=True in the config file for nuScenes training. Is there anything wrong in that code path when export_onnx=True? Thank you. Hope to receive your reply.

    opened by bennyUSTC 2
  • Fps of TensorRT implementation

    Hello, @CarkusL! It's a really interesting project, and a lot of work has been done. I'm wondering what FPS the resulting TensorRT model reaches. Have you evaluated the final FPS?

    opened by MaxLyubimov 2
  • could not find plugin ScatterND

    Hi. I can run your TensorRT code successfully following your README in tensorrt/samples, but I cannot get the ScatterND plugin when I use your code outside of the TensorRT root:

    Doc string:

    input.name():input.1
    input.name():indices_input
    [01/29/2018-00:52:33] [W] [TRT] /home/ubuntu/work/onnx-tensorrt-v7.0/onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
    [01/29/2018-00:52:33] [I] [TRT] /home/ubuntu/work/onnx-tensorrt-v7.0/ModelImporter.cpp:107: Parsing node: pfe_MatMul_0 [Conv]
    [01/29/2018-00:52:33] [I] [TRT] /home/ubuntu/work/onnx-tensorrt-v7.0/ModelImporter.cpp:129: pfe_MatMul_0 [Conv] inputs: [input.1 -> (1, 10, 30000, 20)], [48 -> (32, 10, 1, 1)],
    [01/29/2018-00:52:33] [I] [TRT] /home/ubuntu/work/onnx-tensorrt-v7.0/ModelImporter.cpp:183: pfe_MatMul_0 [Conv] outputs: [16 -> (1, 32, 30000, 20)],
    [01/29/2018-00:52:33] [I] [TRT] /home/ubuntu/work/onnx-tensorrt-v7.0/ModelImporter.cpp:107: Parsing node: pfe_BatchNormalization_2 [BatchNormalization]
    [01/29/2018-00:52:33] [I] [TRT] /home/ubuntu/work/onnx-tensorrt-v7.0/ModelImporter.cpp:129: pfe_BatchNormalization_2 [BatchNormalization] inputs: [16 -> (1, 32, 30000, 20)], [pfn_layers.0.norm.weight -> (32)], [pfn_layers.0.norm.bias -> (32)], [pfn_layers.0.norm.running_mean -> (32)], [pfn_layers.0.norm.running_var -> (32)],
    [01/29/2018-00:52:33] [I] [TRT] /home/ubuntu/work/onnx-tensorrt-v7.0/ModelImporter.cpp:183: pfe_BatchNormalization_2 [BatchNormalization] outputs: [18 -> (1, 32, 30000, 20)],
    [01/29/2018-00:52:33] [I] [TRT] /home/ubuntu/work/onnx-tensorrt-v7.0/ModelImporter.cpp:107: Parsing node: pfe_Relu_4 [Relu]
    [01/29/2018-00:52:33] [I] [TRT] /home/ubuntu/work/onnx-tensorrt-v7.0/ModelImporter.cpp:129: pfe_Relu_4 [Relu] inputs: [18 -> (1, 32, 30000, 20)],
    [01/29/2018-00:52:33] [I] [TRT] /home/ubuntu/work/onnx-tensorrt-v7.0/ModelImporter.cpp:183: pfe_Relu_4 [Relu] outputs: [20 -> (1, 32, 30000, 20)],
    [01/29/2018-00:52:33] [I] [TRT] /home/ubuntu/work/onnx-tensorrt-v7.0/ModelImporter.cpp:107: Parsing node: pfe_ReduceMax_5 [MaxPool]
    [01/29/2018-00:52:33] [I] [TRT] /home/ubuntu/work/onnx-tensorrt-v7.0/ModelImporter.cpp:129: pfe_ReduceMax_5 [MaxPool] inputs: [20 -> (1, 32, 30000, 20)],
    [01/29/2018-00:52:33] [I] [TRT] /home/ubuntu/work/onnx-tensorrt-v7.0/ModelImporter.cpp:183: pfe_ReduceMax_5 [MaxPool] outputs: [21 -> (1, 32, 30000, 1)],
    [01/29/2018-00:52:33] [I] [TRT] /home/ubuntu/work/onnx-tensorrt-v7.0/ModelImporter.cpp:107: Parsing node: pfe_Tile_16 [Tile]
    [01/29/2018-00:52:33] [I] [TRT] /home/ubuntu/work/onnx-tensorrt-v7.0/ModelImporter.cpp:129: pfe_Tile_16 [Tile] inputs: [21 -> (1, 32, 30000, 1)], [34 -> (4)],
    [01/29/2018-00:52:33] [I] [TRT] /home/ubuntu/work/onnx-tensorrt-v7.0/ModelImporter.cpp:183: pfe_Tile_16 [Tile] outputs: [38 -> (1, 32, 30000, 20)],
    [01/29/2018-00:52:33] [I] [TRT] /home/ubuntu/work/onnx-tensorrt-v7.0/ModelImporter.cpp:107: Parsing node: pfe_Concat_17 [Concat]
    [01/29/2018-00:52:33] [I] [TRT] /home/ubuntu/work/onnx-tensorrt-v7.0/ModelImporter.cpp:129: pfe_Concat_17 [Concat] inputs: [20 -> (1, 32, 30000, 20)], [38 -> (1, 32, 30000, 20)],
    [01/29/2018-00:52:33] [I] [TRT] /home/ubuntu/work/onnx-tensorrt-v7.0/ModelImporter.cpp:183: pfe_Concat_17 [Concat] outputs: [39 -> (1, 64, 30000, 20)],
    [01/29/2018-00:52:33] [I] [TRT] /home/ubuntu/work/onnx-tensorrt-v7.0/ModelImporter.cpp:107: Parsing node: pfe_MatMul_18 [Conv]
    [01/29/2018-00:52:33] [I] [TRT] /home/ubuntu/work/onnx-tensorrt-v7.0/ModelImporter.cpp:129: pfe_MatMul_18 [Conv] inputs: [39 -> (1, 64, 30000, 20)], [53 -> (64, 64, 1, 1)],
    [01/29/2018-00:52:33] [I] [TRT] /home/ubuntu/work/onnx-tensorrt-v7.0/ModelImporter.cpp:183: pfe_MatMul_18 [Conv] outputs: [41 -> (1, 64, 30000, 20)],
    [01/29/2018-00:52:33] [I] [TRT] /home/ubuntu/work/onnx-tensorrt-v7.0/ModelImporter.cpp:107: Parsing node: pfe_BatchNormalization_20 [BatchNormalization]
    [01/29/2018-00:52:33] [I] [TRT] /home/ubuntu/work/onnx-tensorrt-v7.0/ModelImporter.cpp:129: pfe_BatchNormalization_20 [BatchNormalization] inputs: [41 -> (1, 64, 30000, 20)], [pfn_layers.1.norm.weight -> (64)], [pfn_layers.1.norm.bias -> (64)], [pfn_layers.1.norm.running_mean -> (64)], [pfn_layers.1.norm.running_var -> (64)],
    [01/29/2018-00:52:33] [I] [TRT] /home/ubuntu/work/onnx-tensorrt-v7.0/ModelImporter.cpp:183: pfe_BatchNormalization_20 [BatchNormalization] outputs: [43 -> (1, 64, 30000, 20)],
    [01/29/2018-00:52:33] [I] [TRT] /home/ubuntu/work/onnx-tensorrt-v7.0/ModelImporter.cpp:107: Parsing node: pfe_Relu_22 [Relu]
    [01/29/2018-00:52:33] [I] [TRT] /home/ubuntu/work/onnx-tensorrt-v7.0/ModelImporter.cpp:129: pfe_Relu_22 [Relu] inputs: [43 -> (1, 64, 30000, 20)],
    [01/29/2018-00:52:33] [I] [TRT] /home/ubuntu/work/onnx-tensorrt-v7.0/ModelImporter.cpp:183: pfe_Relu_22 [Relu] outputs: [45 -> (1, 64, 30000, 20)],
    [01/29/2018-00:52:33] [I] [TRT] /home/ubuntu/work/onnx-tensorrt-v7.0/ModelImporter.cpp:107: Parsing node: pfe_ReduceMax_23 [MaxPool]
    [01/29/2018-00:52:33] [I] [TRT] /home/ubuntu/work/onnx-tensorrt-v7.0/ModelImporter.cpp:129: pfe_ReduceMax_23 [MaxPool] inputs: [45 -> (1, 64, 30000, 20)],
    [01/29/2018-00:52:33] [I] [TRT] /home/ubuntu/work/onnx-tensorrt-v7.0/ModelImporter.cpp:183: pfe_ReduceMax_23 [MaxPool] outputs: [46 -> (1, 64, 30000, 1)],
    [01/29/2018-00:52:33] [I] [TRT] /home/ubuntu/work/onnx-tensorrt-v7.0/ModelImporter.cpp:107: Parsing node: pfe_Squeeze_1 [Squeeze]
    [01/29/2018-00:52:33] [I] [TRT] /home/ubuntu/work/onnx-tensorrt-v7.0/ModelImporter.cpp:129: pfe_Squeeze_1 [Squeeze] inputs: [46 -> (1, 64, 30000, 1)],
    [01/29/2018-00:52:33] [I] [TRT] /home/ubuntu/work/onnx-tensorrt-v7.0/ModelImporter.cpp:183: pfe_Squeeze_1 [Squeeze] outputs: [pfe_squeeze_1 -> (1, 64, 30000)],
    [01/29/2018-00:52:33] [I] [TRT] /home/ubuntu/work/onnx-tensorrt-v7.0/ModelImporter.cpp:107: Parsing node: pfe_Transpose_1 [Transpose]
    [01/29/2018-00:52:33] [I] [TRT] /home/ubuntu/work/onnx-tensorrt-v7.0/ModelImporter.cpp:129: pfe_Transpose_1 [Transpose] inputs: [pfe_squeeze_1 -> (1, 64, 30000)],
    [01/29/2018-00:52:33] [I] [TRT] /home/ubuntu/work/onnx-tensorrt-v7.0/ModelImporter.cpp:183: pfe_Transpose_1 [Transpose] outputs: [pfe_transpose_1 -> (1, 30000, 64)],
    [01/29/2018-00:52:33] [I] [TRT] /home/ubuntu/work/onnx-tensorrt-v7.0/ModelImporter.cpp:107: Parsing node: ScatterND_1 [ScatterND]
    [01/29/2018-00:52:33] [I] [TRT] /home/ubuntu/work/onnx-tensorrt-v7.0/ModelImporter.cpp:129: ScatterND_1 [ScatterND] inputs: [scatter_data -> (1, 262144, 64)], [indices_input -> (1, 30000, 2)], [pfe_transpose_1 -> (1, 30000, 64)],
    [01/29/2018-00:52:33] [I] [TRT] /home/ubuntu/work/onnx-tensorrt-v7.0/ModelImporter.cpp:139: No importer registered for op: ScatterND. Attempting to import as plugin.
    [01/29/2018-00:52:33] [I] [TRT] /home/ubuntu/work/onnx-tensorrt-v7.0/builtin_op_importers.cpp:3762: Searching for plugin: ScatterND, plugin_version: 1, plugin_namespace:
    [01/29/2018-00:52:33] [E] [TRT] INVALID_ARGUMENT: getPluginCreator could not find plugin ScatterND version 1
    While parsing node number 187 [ScatterND]:
    ERROR: /home/ubuntu/work/onnx-tensorrt-v7.0/builtin_op_importers.cpp:3764 In function importFallbackPluginImporter:
    [8] Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"

    My environment: Jetson AGX Xavier, CUDA 10.2, TensorRT 7.x.

    Can you give me some advice? Thank you very much!

    opened by daxiongpro 0
  • Slightly different result

    The .pth file from the link was loaded and then evaluated; however, my PyTorch result is slightly different from yours.

    PyTorch results

    |                      | Yours | Mine  |
    |----------------------|-------|-------|
    | car                  | 0.886 | 0.887 |
    | truck                | 0.660 | 0.660 |
    | bus                  | 0.967 | 0.964 |
    | trailer              | 0.000 | 0.000 |
    | construction vehicle | 0.000 | 0.000 |
    | pedestrian           | 0.886 | 0.889 |
    | motorcycle           | 0.505 | 0.510 |
    | bicycle              | 0.216 | 0.223 |
    | traffic cone         | 0.023 | 0.046 |
    | barrier              | 0.000 | 0.000 |

    I think the result from the same .pth file should be the same.

    So I have 2 questions.

    1. Did you use the same .pth file from the link?
    2. Can you tell me which library/framework versions you used?

    My setting

    • GPUs: GTX 1070 ti
    • CUDA: 11.3
    • PyTorch: 1.10.0
    • torchvision: 0.11.1
    • ONNX: 1.12.0
    • spconv: 2.2.6
    • nuscenes-devkit: 1.0.5
    opened by aisaack 0
  • bbox_head Module not included in the ONNX model?

    Hi @CarkusL, I came across your repo and I am trying out something similar; it helped me a lot in understanding things. I am trying to serve the PointPillars model to an inference engine with ONNX Runtime on it, and for that I need the ONNX model. I noticed that the PyTorch model in your repo includes the bbox_head and predicts the boxes directly, which are then filtered with NMS; however, the ONNX model does not have this bbox_head included. Did I understand this correctly? Also, the inputs to the ONNX model are input.1 with shape [1, 10, 30000, 20] and indices_input with shape [1, 30000, 2]. How do I feed the input from the voxel generator to such a model? Any help would be appreciated.

    opened by niqbal996 0
  • Problem about TensorRT inference

    Thanks for your excellent work. In CenterPoint/tensorrt/samples/centerpoint/README.md, do I have to install Docker and run step 2 (because I run CenterPoint in anaconda), or just run step 1 and then steps 3 and 4? Thanks again.

    opened by XGL-github 1
  • result error using TensorRT inference

    @CarkusL Thanks for your great work. I merged pfe_sim.onnx and rpn.onnx into pointpillars_trt.onnx and ran inference with TensorRT, but the result is wrong, as shown in the image linked below. Could you help me, please?

    (https://user-images.githubusercontent.com/44578367/160119938-0f1976c9-76e9-4b3c-aadf-28492f65ec08.png)

    opened by ZHUANG-JLU 1