MonoRCNN is a monocular 3D object detection method for autonomous driving

Overview

MonoRCNN

MonoRCNN is a monocular 3D object detection method for autonomous driving, published at ICCV 2021. This project is an implementation of MonoRCNN.

Visualization

Methodology
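
In brief (a sketch of the paper's idea, not a substitute for it): under a pinhole camera, an object of physical height H whose image projection spans h pixels lies at distance

Z = f * H / h

where f is the focal length. MonoRCNN decomposes the predicted distance along this relation, regressing the object's physical height and its projected visual height with separate heads (cf. the roi_heads.dis_head.H_layer and roi_heads.dis_head.hrec_layer weights visible in the checkpoint-loading log under Comments) rather than regressing depth directly.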

Related Link

Installation

  • Python 3.6
  • PyTorch 1.5.0
  • Detectron2 0.1.3

Please use the Detectron2 included in this project. To ignore fully occluded objects during training, build.py, rpn.py, and roi_heads.py have been modified.
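
For intuition only, below is a minimal sketch of how fully occluded boxes could be filtered with stock Detectron2. It is not the repository's actual modification, and the "occluded" annotation field (with 3 marking a fully occluded box) is an assumption:

import copy

from detectron2.data import DatasetMapper, build_detection_train_loader

class IgnoreFullyOccludedMapper(DatasetMapper):
    """Drop annotations marked fully occluded before the default mapping runs."""

    def __call__(self, dataset_dict):
        dataset_dict = copy.deepcopy(dataset_dict)  # do not mutate the cached dict
        annos = dataset_dict.get("annotations", [])
        # Hypothetical field: "occluded" == 3 means the box is fully occluded.
        dataset_dict["annotations"] = [a for a in annos if a.get("occluded", 0) != 3]
        return super().__call__(dataset_dict)

# Usage sketch:
#   loader = build_detection_train_loader(cfg, mapper=IgnoreFullyOccludedMapper(cfg, is_train=True))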

Dataset Preparation

Model & Log

Organize the downloaded files as follows:

├── projects
│   ├── MonoRCNN
│   │   ├── output
│   │   │   ├── model
│   │   │   ├── log.txt
│   │   │   ├── ...

Test

cd projects/MonoRCNN
./main.py --config-file config/MonoRCNN_KITTI.yaml --num-gpus 1 --resume --eval-only

Set VISUALIZE to True to visualize 3D object detection results (saved in output/evaluation/test/visualization).
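
That is, change the VISUALIZE entry in config/MonoRCNN_KITTI.yaml to True (an assumption: the README references the key but does not show where it is defined).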

Training

cd projects/MonoRCNN
./main.py --config-file config/MonoRCNN_KITTI.yaml --num-gpus 1
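
To resume training from the latest checkpoint in output/, add --resume; main.py passes the flag through to do_train (visible in the traceback under Comments):

./main.py --config-file config/MonoRCNN_KITTI.yaml --num-gpus 1 --resume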

Citation

If you find this project useful in your research, please cite:

@inproceedings{MonoRCNN_ICCV21,
    title = {Geometry-based Distance Decomposition for Monocular 3D Object Detection},
    author = {Xuepeng Shi and Qi Ye and 
              Xiaozhi Chen and Chuangrong Chen and 
              Zhixiang Chen and Tae-Kyun Kim},
    booktitle = {ICCV},
    year = {2021},
}

Contact

[email protected]

Acknowledgement

Comments
  • Using model for video stream

    I could get the expected results by testing the model on the validation set of the KITTI dataset. How can I use MonoRCNN on my own video? (A frame-by-frame inference sketch follows these comments.)

    As MonoRCNN is built upon a Detectron2 model, some of the model parameters don't seem to be recognized by the Detectron2 API when I use demo.py with the MonoRCNN configuration. @Rock-100

    opened by harisgulzar1 3
  • Evaluation output is not correct

    I have installed the dependencies and downloaded the dataset as explained in README.md. While executing main.py as ./main.py --config-file config/MonoRCNN_KITTI.yaml --num-gpus 1 --resume --eval-only with VISUALIZE set to True, I got output results in output/evaluation/test/visualization. However, the bounding boxes in the output images are completely random and incorrect; the program doesn't seem to work correctly.

    (attached visualization image: 000002)

    I don't know how I should approach troubleshooting this issue. My environment: CUDA 11.0, Python 3.8.5, Torch 1.7.1. I installed Detectron2 from its official page.

    Edit: This part of the command-line output seems to be the problem.

    [05/09 02:15:03 d2.checkpoint.c2_model_loading]: Following weights matched with submodule backbone.bottom_up:

    | Names in Model    | Names in Checkpoint      | Shapes                                          |
    |:------------------|:-------------------------|:------------------------------------------------|
    | res2.0.conv1.*    | res2_0_branch2a_{bn_*,w} | (64,) (64,) (64,) (64,) (64,64,1,1)             |
    | res2.0.conv2.*    | res2_0_branch2b_{bn_*,w} | (64,) (64,) (64,) (64,) (64,64,3,3)             |
    | res2.0.conv3.*    | res2_0_branch2c_{bn_*,w} | (256,) (256,) (256,) (256,) (256,64,1,1)        |
    | res2.0.shortcut.* | res2_0_branch1_{bn_*,w}  | (256,) (256,) (256,) (256,) (256,64,1,1)        |
    | res2.1.conv1.*    | res2_1_branch2a_{bn_*,w} | (64,) (64,) (64,) (64,) (64,256,1,1)            |
    | res2.1.conv2.*    | res2_1_branch2b_{bn_*,w} | (64,) (64,) (64,) (64,) (64,64,3,3)             |
    | res2.1.conv3.*    | res2_1_branch2c_{bn_*,w} | (256,) (256,) (256,) (256,) (256,64,1,1)        |
    | res2.2.conv1.*    | res2_2_branch2a_{bn_*,w} | (64,) (64,) (64,) (64,) (64,256,1,1)            |
    | res2.2.conv2.*    | res2_2_branch2b_{bn_*,w} | (64,) (64,) (64,) (64,) (64,64,3,3)             |
    | res2.2.conv3.*    | res2_2_branch2c_{bn_*,w} | (256,) (256,) (256,) (256,) (256,64,1,1)        |
    | res3.0.conv1.*    | res3_0_branch2a_{bn_*,w} | (128,) (128,) (128,) (128,) (128,256,1,1)       |
    | res3.0.conv2.*    | res3_0_branch2b_{bn_*,w} | (128,) (128,) (128,) (128,) (128,128,3,3)       |
    | res3.0.conv3.*    | res3_0_branch2c_{bn_*,w} | (512,) (512,) (512,) (512,) (512,128,1,1)       |
    | res3.0.shortcut.* | res3_0_branch1_{bn_*,w}  | (512,) (512,) (512,) (512,) (512,256,1,1)       |
    | res3.1.conv1.*    | res3_1_branch2a_{bn_*,w} | (128,) (128,) (128,) (128,) (128,512,1,1)       |
    | res3.1.conv2.*    | res3_1_branch2b_{bn_*,w} | (128,) (128,) (128,) (128,) (128,128,3,3)       |
    | res3.1.conv3.*    | res3_1_branch2c_{bn_*,w} | (512,) (512,) (512,) (512,) (512,128,1,1)       |
    | res3.2.conv1.*    | res3_2_branch2a_{bn_*,w} | (128,) (128,) (128,) (128,) (128,512,1,1)       |
    | res3.2.conv2.*    | res3_2_branch2b_{bn_*,w} | (128,) (128,) (128,) (128,) (128,128,3,3)       |
    | res3.2.conv3.*    | res3_2_branch2c_{bn_*,w} | (512,) (512,) (512,) (512,) (512,128,1,1)       |
    | res3.3.conv1.*    | res3_3_branch2a_{bn_*,w} | (128,) (128,) (128,) (128,) (128,512,1,1)       |
    | res3.3.conv2.*    | res3_3_branch2b_{bn_*,w} | (128,) (128,) (128,) (128,) (128,128,3,3)       |
    | res3.3.conv3.*    | res3_3_branch2c_{bn_*,w} | (512,) (512,) (512,) (512,) (512,128,1,1)       |
    | res4.0.conv1.*    | res4_0_branch2a_{bn_*,w} | (256,) (256,) (256,) (256,) (256,512,1,1)       |
    | res4.0.conv2.*    | res4_0_branch2b_{bn_*,w} | (256,) (256,) (256,) (256,) (256,256,3,3)       |
    | res4.0.conv3.*    | res4_0_branch2c_{bn_*,w} | (1024,) (1024,) (1024,) (1024,) (1024,256,1,1)  |
    | res4.0.shortcut.* | res4_0_branch1_{bn_*,w}  | (1024,) (1024,) (1024,) (1024,) (1024,512,1,1)  |
    | res4.1.conv1.*    | res4_1_branch2a_{bn_*,w} | (256,) (256,) (256,) (256,) (256,1024,1,1)      |
    | res4.1.conv2.*    | res4_1_branch2b_{bn_*,w} | (256,) (256,) (256,) (256,) (256,256,3,3)       |
    | res4.1.conv3.*    | res4_1_branch2c_{bn_*,w} | (1024,) (1024,) (1024,) (1024,) (1024,256,1,1)  |
    | res4.2.conv1.*    | res4_2_branch2a_{bn_*,w} | (256,) (256,) (256,) (256,) (256,1024,1,1)      |
    | res4.2.conv2.*    | res4_2_branch2b_{bn_*,w} | (256,) (256,) (256,) (256,) (256,256,3,3)       |
    | res4.2.conv3.*    | res4_2_branch2c_{bn_*,w} | (1024,) (1024,) (1024,) (1024,) (1024,256,1,1)  |
    | res4.3.conv1.*    | res4_3_branch2a_{bn_*,w} | (256,) (256,) (256,) (256,) (256,1024,1,1)      |
    | res4.3.conv2.*    | res4_3_branch2b_{bn_*,w} | (256,) (256,) (256,) (256,) (256,256,3,3)       |
    | res4.3.conv3.*    | res4_3_branch2c_{bn_*,w} | (1024,) (1024,) (1024,) (1024,) (1024,256,1,1)  |
    | res4.4.conv1.*    | res4_4_branch2a_{bn_*,w} | (256,) (256,) (256,) (256,) (256,1024,1,1)      |
    | res4.4.conv2.*    | res4_4_branch2b_{bn_*,w} | (256,) (256,) (256,) (256,) (256,256,3,3)       |
    | res4.4.conv3.*    | res4_4_branch2c_{bn_*,w} | (1024,) (1024,) (1024,) (1024,) (1024,256,1,1)  |
    | res4.5.conv1.*    | res4_5_branch2a_{bn_*,w} | (256,) (256,) (256,) (256,) (256,1024,1,1)      |
    | res4.5.conv2.*    | res4_5_branch2b_{bn_*,w} | (256,) (256,) (256,) (256,) (256,256,3,3)       |
    | res4.5.conv3.*    | res4_5_branch2c_{bn_*,w} | (1024,) (1024,) (1024,) (1024,) (1024,256,1,1)  |
    | res5.0.conv1.*    | res5_0_branch2a_{bn_*,w} | (512,) (512,) (512,) (512,) (512,1024,1,1)      |
    | res5.0.conv2.*    | res5_0_branch2b_{bn_*,w} | (512,) (512,) (512,) (512,) (512,512,3,3)       |
    | res5.0.conv3.*    | res5_0_branch2c_{bn_*,w} | (2048,) (2048,) (2048,) (2048,) (2048,512,1,1)  |
    | res5.0.shortcut.* | res5_0_branch1_{bn_*,w}  | (2048,) (2048,) (2048,) (2048,) (2048,1024,1,1) |
    | res5.1.conv1.*    | res5_1_branch2a_{bn_*,w} | (512,) (512,) (512,) (512,) (512,2048,1,1)      |
    | res5.1.conv2.*    | res5_1_branch2b_{bn_*,w} | (512,) (512,) (512,) (512,) (512,512,3,3)       |
    | res5.1.conv3.*    | res5_1_branch2c_{bn_*,w} | (2048,) (2048,) (2048,) (2048,) (2048,512,1,1)  |
    | res5.2.conv1.*    | res5_2_branch2a_{bn_*,w} | (512,) (512,) (512,) (512,) (512,2048,1,1)      |
    | res5.2.conv2.*    | res5_2_branch2b_{bn_*,w} | (512,) (512,) (512,) (512,) (512,512,3,3)       |
    | res5.2.conv3.*    | res5_2_branch2c_{bn_*,w} | (2048,) (2048,) (2048,) (2048,) (2048,512,1,1)  |
    | stem.conv1.norm.* | res_conv1_bn_*           | (64,) (64,) (64,) (64,)                         |
    | stem.conv1.weight | conv1_w                  | (64, 3, 7, 7)                                   |

    WARNING [05/09 02:15:04 fvcore.common.checkpoint]: Some model parameters or buffers are not found in the checkpoint:

    backbone.bottom_up.res3.0.conv2_offset.{bias, weight}
    backbone.bottom_up.res3.1.conv2_offset.{bias, weight}
    backbone.bottom_up.res3.2.conv2_offset.{bias, weight}
    backbone.bottom_up.res3.3.conv2_offset.{bias, weight}
    backbone.bottom_up.res4.0.conv2_offset.{bias, weight}
    backbone.bottom_up.res4.1.conv2_offset.{bias, weight}
    backbone.bottom_up.res4.2.conv2_offset.{bias, weight}
    backbone.bottom_up.res4.3.conv2_offset.{bias, weight}
    backbone.bottom_up.res4.4.conv2_offset.{bias, weight}
    backbone.bottom_up.res4.5.conv2_offset.{bias, weight}
    backbone.bottom_up.res5.0.conv2_offset.{bias, weight}
    backbone.bottom_up.res5.1.conv2_offset.{bias, weight}
    backbone.bottom_up.res5.2.conv2_offset.{bias, weight}
    backbone.fpn_lateral2.{bias, weight}
    backbone.fpn_lateral3.{bias, weight}
    backbone.fpn_lateral4.{bias, weight}
    backbone.fpn_lateral5.{bias, weight}
    backbone.fpn_output2.{bias, weight}
    backbone.fpn_output3.{bias, weight}
    backbone.fpn_output4.{bias, weight}
    backbone.fpn_output5.{bias, weight}
    proposal_generator.rpn_head.anchor_deltas.{bias, weight}
    proposal_generator.rpn_head.conv.{bias, weight}
    proposal_generator.rpn_head.objectness_logits.{bias, weight}
    roi_heads.att_head.dim_layer.{bias, weight}
    roi_heads.att_head.fc1.{bias, weight}
    roi_heads.att_head.fc2.{bias, weight}
    roi_heads.att_head.kpt_layer.{bias, weight}
    roi_heads.att_head.yaw_layer.{bias, weight}
    roi_heads.box_head.fc1.{bias, weight}
    roi_heads.box_head.fc2.{bias, weight}
    roi_heads.box_predictor.bbox_pred.{bias, weight}
    roi_heads.box_predictor.cls_score.{bias, weight}
    roi_heads.dis_head.H_layer.{bias, weight}
    roi_heads.dis_head.fc1.{bias, weight}
    roi_heads.dis_head.fc2.{bias, weight}
    roi_heads.dis_head.hrec_layer.{bias, weight}

    WARNING [05/09 02:15:04 fvcore.common.checkpoint]: The checkpoint state_dict contains keys that are not used by the model:

    fc1000.{bias, weight} stem.conv1.bias

    [05/09 02:15:04 d2.MonoDet]: Loaded 3769 images in COCO format from ../KITTI/val1/KITTI_val1_val.json

    opened by harisgulzar1 3
  • RuntimeError: CUDA error: device-side assert triggered

    While starting training, the code throws the following error. I guess this is because of some mismatch between the model parameters and the training labels (reference). Has this problem been encountered before? Any insight would be helpful. (A general debugging note follows these comments.)

    [05/10 01:19:15 d2.MonoDet]: Loaded 3712 images in COCO format from ../KITTI/val1/KITTI_val1_train.json
    [05/10 01:19:22 d2.MonoDet]: Positive labels: ['Car']
    [05/10 01:19:22 d2.MonoDet]: Positive samples: 10081
    [05/10 01:19:22 d2.data.build]: Removed 645 images with no usable annotations. 3067 images left.
    [05/10 01:19:22 d2.data.common]: Serializing 3067 elements to byte tensors and concatenating them all ...
    [05/10 01:19:22 d2.data.common]: Serialized dataset takes 7.84 MiB
    [05/10 01:19:22 d2.data.build]: Using training sampler TrainingSampler
    [05/10 01:19:22 detectron2]: Starting training from iteration 0
    /pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:142: operator(): block: [0,0,0], thread: [102,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
    /pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:142: operator(): block: [0,0,0], thread: [109,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
    /pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:142: operator(): block: [0,0,0], thread: [34,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
    /pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:142: operator(): block: [0,0,0], thread: [35,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
    /pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:142: operator(): block: [0,0,0], thread: [42,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
    /pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:142: operator(): block: [0,0,0], thread: [49,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
    /pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:142: operator(): block: [0,0,0], thread: [50,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
    /pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:142: operator(): block: [0,0,0], thread: [57,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
    /pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:142: operator(): block: [0,0,0], thread: [4,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
    /pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:142: operator(): block: [0,0,0], thread: [5,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
    /pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:142: operator(): block: [0,0,0], thread: [12,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
    /pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:142: operator(): block: [0,0,0], thread: [19,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
    /pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:142: operator(): block: [0,0,0], thread: [20,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
    /pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:142: operator(): block: [0,0,0], thread: [27,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
    /pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:142: operator(): block: [0,0,0], thread: [64,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
    /pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:142: operator(): block: [0,0,0], thread: [65,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
    /pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:142: operator(): block: [0,0,0], thread: [72,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
    /pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:142: operator(): block: [0,0,0], thread: [79,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
    /pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:142: operator(): block: [0,0,0], thread: [87,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
    /pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:142: operator(): block: [0,0,0], thread: [94,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
    Traceback (most recent call last):
      File "./main.py", line 179, in <module>
        launch(
      File "/workspace/detectron2/detectron2/engine/launch.py", line 57, in launch
        main_func(*args)
      File "./main.py", line 174, in main
        do_train(cfg, model, resume=args.resume)
      File "./main.py", line 108, in do_train
        loss_dict = model(data)
      File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/workspace/detectron2/detectron2/modeling/meta_arch/rcnn.py", line 117, in forward
        proposals, proposal_losses = self.proposal_generator(images, features, gt_instances)
      File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/workspace/detectron2/detectron2/modeling/proposal_generator/rpn.py", line 425, in forward
        gt_labels, gt_boxes = self.label_and_sample_anchors(anchors, gt_instances)
      File "/opt/conda/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
        return func(*args, **kwargs)
      File "/workspace/detectron2/detectron2/modeling/proposal_generator/rpn.py", line 289, in label_and_sample_anchors
        matched_idxs, gt_labels_i = retry_if_cuda_oom(self.anchor_matcher)(match_quality_matrix)
      File "/workspace/detectron2/detectron2/utils/memory.py", line 72, in wrapped
        return func(*args, **kwargs)
      File "/workspace/detectron2/detectron2/modeling/matcher.py", line 88, in __call__
        assert torch.all(match_quality_matrix >= 0)
    RuntimeError: CUDA error: device-side assert triggered
    
    opened by harisgulzar1 1
  • When will the code be released?

    Hello, Rock-100! Congratulations on your work being accepted at ICCV 2021! I am wondering whether you will release your code now that your work has been accepted.

    opened by gujiaqivadin 1
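
On the video-stream question above: a minimal frame-by-frame sketch of the kind of loop such a use needs, for illustration only. It assumes MonoRCNN's custom config keys and meta-architecture are registered the way main.py registers them, that the trained weights sit under output/model as in the layout above, and that OpenCV is available. Note that the predicted distance depends on the camera focal length (the KITTI calibration), so an uncalibrated video will give biased depths.

import cv2
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

# Assumption: MonoRCNN's custom config keys / meta-architecture must be
# registered before this merge succeeds, the same way main.py does at setup.
cfg = get_cfg()
cfg.merge_from_file("config/MonoRCNN_KITTI.yaml")
cfg.MODEL.WEIGHTS = "output/model"  # trained checkpoint from the Model & Log section
predictor = DefaultPredictor(cfg)

cap = cv2.VideoCapture("my_video.mp4")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    outputs = predictor(frame)  # DefaultPredictor expects a BGR image
    # outputs["instances"] carries per-object predictions; the 3D attributes
    # (dimensions, yaw, distance) depend on MonoRCNN's custom heads.
cap.release()

On the device-side assert above: a standard PyTorch debugging step (not specific to MonoRCNN) is to rerun synchronously so the Python stack trace points at the kernel that actually failed:

CUDA_LAUNCH_BLOCKING=1 ./main.py --config-file config/MonoRCNN_KITTI.yaml --num-gpus 1

Since the assert fires in Detectron2's anchor matcher (torch.all(match_quality_matrix >= 0)), checking the ground-truth boxes produced by the dataset for NaN or degenerate coordinates is a reasonable first step.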