Not All Points Are Equal: Learning Highly Efficient Point-based Detectors for 3D LiDAR Point Clouds (CVPR 2022, Oral)

Overview

This is the official implementation of IA-SSD (CVPR 2022), a simple and highly efficient point-based detector for 3D LiDAR point clouds. For more details, please refer to:

Not All Points Are Equal: Learning Highly Efficient Point-based Detectors for 3D LiDAR Point Clouds
Yifan Zhang, Qingyong Hu*, Guoquan Xu, Yanxin Ma, Jianwei Wan, Yulan Guo

[Paper] [Video]

Getting Started

Installation

a. Clone this repository

git clone https://github.com/yifanzhang713/IA-SSD.git && cd IA-SSD

b. Configure the environment

We have tested this project with the following environments:

  • Ubuntu 18.04/20.04
  • Python = 3.7
  • PyTorch = 1.1
  • CUDA = 10.0
  • CMake >= 3.13
  • spconv = 1.0
    # install spconv=1.0 library
    git clone https://github.com/yifanzhang713/spconv1.0.git
    cd spconv1.0
    sudo apt-get install libboost-all-dev
    python setup.py bdist_wheel
    pip install ./dist/spconv-1.0*   # wheel file name may be different
    cd ..

You are encouraged to try higher versions of the packages above; please refer to their official GitHub repositories for more information. Note that the maximum number of frames processed in parallel during inference may decrease slightly with newer PyTorch versions, due to their larger initial GPU memory footprint.

c. Install the pcdet toolbox.

pip install -r requirements.txt
python setup.py develop
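
A quick import check helps catch a broken spconv wheel or a failed CUDA-extension build early. Below is a minimal sanity-check sketch; the pcdet.ops module path is assumed from this repo's pcdet/ops layout and may differ in your checkout:

# Sanity check: these imports fail fast if spconv or the pcdet CUDA ops
# were not built correctly. (Minimal sketch; module path assumed from
# this repo's pcdet/ops layout.)
import torch
import spconv
from pcdet.ops.pointnet2.pointnet2_batch import pointnet2_utils

print(torch.__version__, 'CUDA available:', torch.cuda.is_available())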

d. Prepare the datasets.

Download the official KITTI dataset (together with the optional road planes) and the Waymo dataset, then organize the unzipped files as follows:

IA-SSD
├── data
│   ├── kitti
│   │   ├── ImageSets
│   │   ├── training
│   │   │   ├── calib & velodyne & label_2 & image_2 & (optional: planes)
│   │   ├── testing
│   │   │   ├── calib & velodyne & image_2
│   ├── waymo
│   │   ├── ImageSets
│   │   ├── raw_data
│   │   │   ├── segment-xxxxxxxx.tfrecord
│   │   │   ├── ...
│   │   ├── waymo_processed_data_v0_5_0
│   │   │   ├── segment-xxxxxxxx/
│   │   │   ├── ...
│   │   ├── waymo_processed_data_v0_5_0_gt_database_train_sampled_1/
│   │   ├── waymo_processed_data_v0_5_0_waymo_dbinfos_train_sampled_1.pkl
│   │   ├── waymo_processed_data_v0_5_0_gt_database_train_sampled_1_global.npy (optional)
│   │   ├── waymo_processed_data_v0_5_0_infos_train.pkl (optional)
│   │   ├── waymo_processed_data_v0_5_0_infos_val.pkl (optional)
├── pcdet
├── tools

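Before generating the infos below, it is worth verifying the folder layout. A minimal sketch for the KITTI tree above (paths are relative to the repository root; adjust if your data lives elsewhere):

# Verify the KITTI folder layout sketched above before creating infos.
from pathlib import Path

kitti_root = Path('data/kitti')
required = [
    'ImageSets',
    'training/calib', 'training/velodyne', 'training/label_2', 'training/image_2',
    'testing/calib', 'testing/velodyne', 'testing/image_2',
]
for sub in required:
    assert (kitti_root / sub).is_dir(), f'missing directory: {kitti_root / sub}'
print('KITTI layout looks good')
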
Generate the data infos by running the following commands:

# KITTI dataset
python -m pcdet.datasets.kitti.kitti_dataset create_kitti_infos tools/cfgs/dataset_configs/kitti_dataset.yaml

# Waymo dataset
python -m pcdet.datasets.waymo.waymo_dataset --func create_waymo_infos \
    --cfg_file tools/cfgs/dataset_configs/waymo_dataset.yaml
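
After the commands finish, the generated info files can be inspected directly. A minimal sketch; the file name kitti_infos_train.pkl follows the upstream OpenPCDet convention and may differ if you changed the dataset config:

# Peek at the generated KITTI infos: a pickled list of per-frame dicts.
import pickle

with open('data/kitti/kitti_infos_train.pkl', 'rb') as f:  # assumed output path
    infos = pickle.load(f)
print(len(infos), 'frames; keys of first info:', list(infos[0].keys()))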

Quick Inference

We provide a pre-trained weight file so you can run inference directly:

cd tools 
# To fully utilize the GPU memory (NVIDIA RTX 2080Ti, 11GB):
python test.py --cfg_file cfgs/kitti_models/IA-SSD.yaml --batch_size 100 \
    --ckpt IA-SSD.pth --set MODEL.POST_PROCESSING.RECALL_MODE 'speed'

# To reduce the pressure on the CPU during preprocessing, a moderate batch size is recommended, e.g. 16 (over 5 batches per second on an RTX 2080Ti, i.e. more than 80 frames per second):
python test.py --cfg_file cfgs/kitti_models/IA-SSD.yaml --batch_size 16 \
    --ckpt IA-SSD.pth --set MODEL.POST_PROCESSING.RECALL_MODE 'speed' 
  • Detailed inference results can then be found here.
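
Note that reported throughput depends on batch size and on how timing is done: CUDA kernels launch asynchronously, so naive wall-clock timing without synchronization mostly measures Python-side launch overhead. Below is a generic latency-measurement sketch, not part of this repo (model and batch stand for any loaded detector and a preprocessed GPU input):

# Generic per-forward latency measurement for a CUDA model.
# torch.cuda.synchronize() is required around the timed region because
# kernel launches are asynchronous.
import time
import torch

@torch.no_grad()
def mean_latency(model, batch, n_warmup=10, n_iters=100):
    for _ in range(n_warmup):      # warm-up: allocator, cuDNN autotuning
        model(batch)
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(n_iters):
        model(batch)
    torch.cuda.synchronize()
    return (time.time() - start) / n_iters

Measured this way at batch size 1, per-frame latency is typically higher than what large-batch throughput figures suggest; see the speed discussion in the comments below.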

Training

The configuration files are in tools/cfgs/kitti_models/IA-SSD.yaml and tools/cfgs/waymo_models/IA-SSD.yaml, and the training scripts are in tools/scripts.

Train with a single GPU or multiple GPUs (e.g., on the KITTI dataset):

python train.py --cfg_file cfgs/kitti_models/IA-SSD.yaml

# or 

sh scripts/dist_train.sh ${NUM_GPUS} --cfg_file cfgs/kitti_models/IA-SSD.yaml

Evaluation

Evaluate with a single GPU or multiple GPUs (e.g., on the KITTI dataset):

python test.py --cfg_file cfgs/kitti_models/IA-SSD.yaml  --batch_size ${BATCH_SIZE} --ckpt ${PTH_FILE}

# or

sh scripts/dist_test.sh ${NUM_GPUS} \
    --cfg_file cfgs/kitti_models/IA-SSD.yaml --batch_size ${BATCH_SIZE} --ckpt ${PTH_FILE}

Experimental results

KITTI dataset

Quantitative results of different approaches on the KITTI dataset (test set):

Qualitative results of our IA-SSD on the KITTI dataset:

Waymo dataset

Quantitative results of different approaches on the Waymo dataset (validation set):

Qualitative results of our IA-SSD on the Waymo dataset:

ONCE dataset

Quantitative results of different approaches on the ONCE dataset (validation set):

Qualitative results of our IA-SSD on the ONCE dataset:

Citation

If you find this project useful in your research, please consider citing:

@inproceedings{zhang2022not,
  title={Not All Points Are Equal: Learning Highly Efficient Point-based Detectors for 3D LiDAR Point Clouds},
  author={Zhang, Yifan and Hu, Qingyong and Xu, Guoquan and Ma, Yanxin and Wan, Jianwei and Guo, Yulan},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  year={2022}
}

Acknowledgement

  • This work is built upon OpenPCDet (version 0.5), an open-source toolbox for LiDAR-based 3D scene perception. Please refer to the official GitHub repository for more information.

  • Parts of our code are adapted from the 3DSSD-pytorch-openPCDet library and the recent work SASA.

License

This project is released under the Apache 2.0 license.

Related Repos

  1. RandLA-Net: Efficient Semantic Segmentation of Large-Scale Point Clouds
  2. SensatUrban: Learning Semantics from Urban-Scale Photogrammetric Point Clouds
  3. 3D-BoNet: Learning Object Bounding Boxes for 3D Instance Segmentation on Point Clouds
  4. SpinNet: Learning a General Surface Descriptor for 3D Point Cloud Registration
  5. SQN: Weakly-Supervised Semantic Segmentation of Large-Scale 3D Point Clouds
  6. SoTA-Point-Cloud: Deep Learning for 3D Point Clouds: A Survey
Comments
  • About kitti test result

    Hi, Yifan! Thanks for your wonderful work. I evaluated your model on the KITTI val set and got the same results as in your repo. However, when I submit the test results to the KITTI website, I cannot get the results reported in your paper.

    2022-04-21 00:46:25,841 INFO *************** Performance of EPOCH no_number *****************
    2022-04-21 00:46:25,841 INFO Generate label finished(sec_per_example: 0.0177 second).
    2022-04-21 00:46:25,841 INFO recall_roi_0.3: 0.000000
    2022-04-21 00:46:25,841 INFO recall_rcnn_0.3: 0.927099
    2022-04-21 00:46:25,842 INFO recall_roi_0.5: 0.000000
    2022-04-21 00:46:25,842 INFO recall_rcnn_0.5: 0.884041
    2022-04-21 00:46:25,842 INFO recall_roi_0.7: 0.000000
    2022-04-21 00:46:25,842 INFO recall_rcnn_0.7: 0.663060
    2022-04-21 00:46:25,848 INFO Average predicted number of objects(3769 samples): 7.721
    2022-04-21 00:46:57,396 INFO Car AP@0.70, 0.70, 0.70:
    bbox AP:96.2779, 90.1569, 89.5057
    bev  AP:90.3980, 88.8603, 86.9664
    3d   AP:89.3976, 79.5625, 78.4435
    aos  AP:96.25, 90.10, 89.35
    Car AP_R40@0.70, 0.70, 0.70:
    bbox AP:97.9059, 95.3214, 92.7594
    bev  AP:94.7520, 91.3533, 88.8415
    3d   AP:91.7990, 83.3737, 80.3455
    aos  AP:97.88, 95.23, 92.57
    Car AP@0.70, 0.50, 0.50:
    bbox AP:96.2779, 90.1569, 89.5057
    bev  AP:96.3642, 90.2276, 89.7638
    3d   AP:96.3233, 90.2013, 89.6963
    aos  AP:96.25, 90.10, 89.35
    Car AP_R40@0.70, 0.50, 0.50:
    bbox AP:97.9059, 95.3214, 92.7594
    bev  AP:97.9552, 95.6085, 95.0305
    3d   AP:97.9302, 95.5291, 94.8719
    aos  AP:97.88, 95.23, 92.57
    Pedestrian AP@0.50, 0.50, 0.50:
    bbox AP:72.7975, 70.9381, 67.4165
    bev  AP:66.7870, 61.3725, 57.6589
    3d   AP:60.8069, 58.3171, 52.2580
    aos  AP:68.50, 65.93, 62.32
    Pedestrian AP_R40@0.50, 0.50, 0.50:
    bbox AP:74.5996, 70.7681, 67.1326
    bev  AP:66.4324, 61.7818, 56.7088
    3d   AP:61.6833, 56.9830, 51.8147
    aos  AP:69.51, 65.15, 61.29
    Pedestrian AP@0.50, 0.25, 0.25:
    bbox AP:72.7975, 70.9381, 67.4165
    bev  AP:81.5397, 78.5586, 73.2419
    3d   AP:81.4691, 78.5135, 73.1643
    aos  AP:68.50, 65.93, 62.32
    Pedestrian AP_R40@0.50, 0.25, 0.25:
    bbox AP:74.5996, 70.7681, 67.1326
    bev  AP:82.5906, 79.6797, 75.2987
    3d   AP:82.5212, 79.6466, 75.1587
    aos  AP:69.51, 65.15, 61.29
    Cyclist AP@0.50, 0.50, 0.50:
    bbox AP:95.5281, 78.1780, 76.5017
    bev  AP:93.1435, 74.5972, 72.0342
    3d   AP:85.1286, 71.1990, 68.6697
    aos  AP:95.43, 77.85, 76.05
    Cyclist AP_R40@0.50, 0.50, 0.50:
    bbox AP:96.6252, 81.1220, 78.3019
    bev  AP:94.3767, 75.5814, 72.4732
    3d   AP:89.8377, 71.4524, 68.1976
    aos  AP:96.52, 80.74, 77.81
    Cyclist AP@0.50, 0.25, 0.25:
    bbox AP:95.5281, 78.1780, 76.5017
    bev  AP:94.4477, 79.1653, 74.1242
    3d   AP:94.4477, 79.1653, 74.1242
    aos  AP:95.43, 77.85, 76.05
    Cyclist AP_R40@0.50, 0.25, 0.25:
    bbox AP:96.6252, 81.1220, 78.3019
    bev  AP:95.6348, 79.5295, 75.5297
    3d   AP:95.6348, 79.5295, 75.5297
    aos  AP:96.52, 80.74, 77.81

    opened by muomi 7
  • Question about centroid prediction loss

    Hi! I just read this paper; IA-SSD's performance is quite amazing! But I have some trouble understanding the centroid prediction loss. Here are my questions:

    1. What do F+ and j mean in the formula?
    2. How should I understand the second term |c_ij - c_i|? Quoting the paper: "to minimize the uncertainty of the centroid prediction".

    Thanks in advance!
    opened by DeclK 5
  • about kitti video

    Hi, this is wonderful work. I noticed that in your README you show qualitative results of IA-SSD on the KITTI dataset. How did you make the KITTI video? Is it the KITTI-360 dataset?

    opened by mc171819 3
  • Could you release the code of calculating the instance recall rate?

    @QingyongHu @yifanzhang713 Hi, thanks for your work. The following are the results I reproduced, but they differ considerably (car class) from the results in the paper. Could you release the code for calculating the instance recall rate of foreground points?

                           4096 points          ...  256 points
    in paper    Ctr-aware  98.3%  100%   97.2%  ...  97.9%  98.4%  97.2%
    mine        Ctr-aware  93.4%  99.6%  98.3%  ...  92.2%  97.3%  95.2%
    
    opened by kellen5l 2
  • Pretrained weights

    Nice work! Would you please share your other pretrained models on KITTI, Waymo and/or ONCE? I have tried your weights tools/IA-SSD.pth on KITTI data and it seems to work worse than what is shown in your qualitative results in the demo.

    Thanks!

    opened by YoushaaMurhij 2
  • KeyError: 'road_plane'

    Thanks for your fantastic work. Following your instructions, I can obtain inference results similar to yours. However, training with the command "python train.py --cfg_file cfgs/kitti_models/IA-SSD.yaml" encounters the following error. Any advice on how to fix it?

    2022-03-30 15:08:13,569 INFO Start training kitti_models/IA-SSD(default)
    epochs:   0%|          | 0/80 [00:00<?, ?it/s]
    Traceback (most recent call last):   | 0/464 [00:00<?, ?it/s]
      File "train.py", line 205, in <module>
        main()
      File "train.py", line 174, in main
        merge_all_iters_to_one_epoch=args.merge_all_iters_to_one_epoch
      File "/disk/yangle/IA-SSD/tools/train_utils/train_utils.py", line 118, in train_model
        dataloader_iter=dataloader_iter
      File "/disk/yangle/IA-SSD/tools/train_utils/train_utils.py", line 25, in train_one_epoch
        batch = next(dataloader_iter)
      File "/disk/yangle/software/anaconda3/envs/iassd/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 819, in __next__
        return self._process_data(data)
      File "/disk/yangle/software/anaconda3/envs/iassd/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 846, in _process_data
        data.reraise()
      File "/disk/yangle/software/anaconda3/envs/iassd/lib/python3.6/site-packages/torch/_utils.py", line 385, in reraise
        raise self.exc_type(msg)
    KeyError: Caught KeyError in DataLoader worker process 0.
    Original Traceback (most recent call last):
      File "/disk/yangle/software/anaconda3/envs/iassd/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
        data = fetcher.fetch(index)
      File "/disk/yangle/software/anaconda3/envs/iassd/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/disk/yangle/software/anaconda3/envs/iassd/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "../pcdet/datasets/kitti/kitti_dataset.py", line 424, in __getitem__
        data_dict = self.prepare_data(data_dict=input_dict)
      File "../pcdet/datasets/dataset.py", line 130, in prepare_data
        'gt_boxes_mask': gt_boxes_mask
      File "../pcdet/datasets/augmentor/data_augmentor.py", line 281, in forward
        data_dict = cur_augmentor(data_dict=data_dict)
      File "../pcdet/datasets/augmentor/database_sampler.py", line 245, in __call__
        data_dict = self.add_sampled_boxes_to_scene(data_dict, sampled_gt_boxes, total_valid_sampled_dict)
      File "../pcdet/datasets/augmentor/database_sampler.py", line 163, in add_sampled_boxes_to_scene
        sampled_gt_boxes, data_dict['road_plane'], data_dict['calib']
    KeyError: 'road_plane'

    opened by VividLe 2
  • Question about the algorithm speed

    Hi, Thank you for your excellent work.

    I have tested your work on the KITTI dataset and on my custom dataset and achieved results similar to yours. But I have some doubts about how the speed is calculated.

    I just ran python test.py --cfg_file cfgs/kitti_models/IA-SSD.yaml --batch_size 16 --ckpt IA-SSD.pth --set MODEL.POST_PROCESSING.RECALL_MODE 'speed' and got the following result:

    ......
    2022-03-28 16:49:37,358   INFO  *************** Performance of EPOCH no_number *****************
    2022-03-28 16:49:37,359   INFO  Generate label finished(sec_per_example: 0.0138 second).
    2022-03-28 16:49:37,359   INFO  recall_roi_0.3: 0.000000
    2022-03-28 16:49:37,359   INFO  recall_rcnn_0.3: 0.000000
    2022-03-28 16:49:37,359   INFO  recall_roi_0.5: 0.000000
    2022-03-28 16:49:37,359   INFO  recall_rcnn_0.5: 0.000000
    2022-03-28 16:49:37,359   INFO  recall_roi_0.7: 0.000000
    2022-03-28 16:49:37,359   INFO  recall_rcnn_0.7: 0.000000
    2022-03-28 16:49:37,362   INFO  Average predicted number of objects(3769 samples): 7.685
    2022-03-28 16:50:04,217   INFO  Car AP@0.70, 0.70, 0.70:
    bbox AP:96.2632, 90.1455, 89.4070
    bev  AP:90.3738, 88.7840, 87.0682
    3d   AP:89.3510, 79.5290, 78.5040
    aos  AP:96.22, 90.04, 89.22
    Car AP_R40@0.70, 0.70, 0.70:
    bbox AP:97.8414, 95.2234, 92.6498
    bev  AP:94.8053, 91.3833, 88.8623
    3d   AP:91.8502, 83.3872, 80.4255
    aos  AP:97.80, 95.08, 92.43
    

    But when I change the batch size to 1, I get sec_per_example: 0.0529 second. My machine is an NVIDIA V100S. I also used demo.py to test the speed; the average time for network inference is about 48 ms.

    I want to know how I can achieve 85 FPS with this work. Generally, when the algorithm is used in real scenarios such as autonomous driving, the batch size is always 1. It would be great if it could also reach 85 FPS with batch size = 1.

    opened by chenxyyy 2
  • Distributed Training

    I tried to train this model on the Waymo dataset with the default configs, but I am facing this error:

    epochs:   0%|          | 0/30 [00:00<?, ?it/s]
    epochs:   0%|          | 0/30 [00:00<?, ?it/s]
    epochs:   0%|          | 0/30 [00:00<?, ?it/s]
    epochs:   0%|          | 0/30 [00:00<?, ?it/s]
    epochs:   0%|          | 0/30 [00:00<?, ?it/s]
    epochs:   0%|          | 0/30 [00:00<?, ?it/s]Exception ignored in: <function DataBaseSampler.__del__ at 0x7f6d406b9560>
    Traceback (most recent call last):
      File "../pcdet/datasets/augmentor/database_sampler.py", line 60, in __del__
        if self.use_shared_memory:
    AttributeError: 'DataBaseSampler' object has no attribute 'use_shared_memory'
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/usr/lib/python3.7/multiprocessing/spawn.py", line 105, in spawn_main
        exitcode = _main(fd)
      File "/usr/lib/python3.7/multiprocessing/spawn.py", line 115, in _main
        self = reduction.pickle.load(from_parent)
    _pickle.UnpicklingError: pickle data was truncated
    Traceback (most recent call last):
      File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/home/AI/.local/lib/python3.7/site-packages/torch/distributed/launch.py", line 261, in <module>
        main()
      File "/home/AI/.local/lib/python3.7/site-packages/torch/distributed/launch.py", line 257, in main
        cmd=cmd)
    
    

    Here's dist_train.sh:

    #!/usr/bin/env bash
    set -x
    NGPUS=$1
    PY_ARGS=${@:2}
    
    while true
    do
        PORT=$(( ((RANDOM<<15)|RANDOM) % 49152 + 10000 ))
        status="$(nc -z 127.0.0.1 $PORT < /dev/null &>/dev/null; echo $?)"
        if [ "${status}" != "0" ]; then
            break;
        fi
    done
    echo $PORT
    
    python -m torch.distributed.launch --nproc_per_node=${NGPUS} --master_port $PORT train.py --launcher pytorch ${PY_ARGS}
    

    Any suggestions? Thanks

    opened by YoushaaMurhij 1
  • About the 'center_origin_cls_labels'

    Thanks for your interesting work. I am now trying to train on my own dataset, but in some cases it hits a bug, as shown below:

      File "../pcdet/models/detectors/IASSD.py", line 13, in forward
        loss, tb_dict, disp_dict = self.get_training_loss()
      File "../pcdet/models/detectors/IASSD.py", line 24, in get_training_loss
        loss_point, tb_dict = self.point_head.get_loss()
      File "../pcdet/models/dense_heads/IASSD_head.py", line 413, in get_loss
        center_loss_reg, tb_dict_3 = self.get_contextual_vote_loss()
      File "../pcdet/models/dense_heads/IASSD_head.py", line 470, in get_contextual_vote_loss
        center_origin_loss_box = torch.cat(center_origin_loss_box, dim=-1).mean()
    NotImplementedError: There were no tensor arguments to this function (e.g., you passed an empty list of Tensors), but no fallback function is registered for schema aten::_cat.  This usually means that this function requires a non-empty list of Tensors, or that you (the operator writer) forgot to register a fallback function.  Available functions are [CPU, CUDA, QuantizedCPU, BackendSelect, Python, Named, Conjugate, Negative, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, UNKNOWN_TENSOR_TYPE_ID, Autocast, Batched, VmapMode].
    

    I find that the main problem is that self.forward_ret_dict['center_origin_cls_labels'] is a zero matrix, which makes center_origin_loss_box an empty list. Could you please tell me how to solve this bug?

    opened by dk-liang 1
  • Do you have any plan to upload config for ONCE dataset?

    First of all, thank you for your awesome work.

    It seems that the config file for the ONCE dataset is missing.

    Do you have any plan to share it?

    Thanks.

    opened by frogbam 1
  • Inference speed gap in waymo dataset

    Thank you for your work. I used a single A40 to test IA-SSD on the Waymo dataset and found that the inference speed was only 2.2 FPS, while in the paper this figure is 8 FPS. How should I set the parameters to improve the inference speed?

    opened by hht1996ok 0
  • How to generate GIF images?

    Thank you for the amazing work! Your result in GIF format is very interesting and intuitive.

    I want to export the results in GIF format as you uploaded in docs. Would you mind providing the Python code used to generate the GIFs?

    Best, Dongmin Choi.

    opened by ChoiDM 0
  • pedestrian and bicycle score for 3DSSD

    Hi, I was able to reproduce 3DSSD with higher scores for pedestrian and bicycle; maybe you can update them in the paper: https://github.com/zye1996/3DSSD-torch

    opened by zye1996 0
  • bug report

    https://github.com/yifanzhang713/IA-SSD/blob/67db5159260474c9afb2e34261e1fc95f56107b5/pcdet/ops/pointnet2/pointnet2_batch/pointnet2_modules.py#L178-L182

    The radius and min_radius seem to be reversed in dilated mode.

    opened by OuyangJunyuan 0
  • about class-aware sampling

    Thanks for sharing this great work.

    I wonder why you do not directly apply topk over cls_features_max rather than over the sigmoid scores.

    cls_features_max, class_pred = cls_features_tmp.max(dim=-1)
    score_pred = torch.sigmoid(cls_features_max) # B,N
    score_picked, sample_idx = torch.topk(score_pred, npoint, dim=-1)           
    sample_idx = sample_idx.int()
    
    opened by chyohoo 2