Uncertainty-aware Semantic Segmentation of LiDAR Point Clouds for Autonomous Driving

Overview

SalsaNext: Fast, Uncertainty-aware Semantic Segmentation of LiDAR Point Clouds for Autonomous Driving

Abstract

In this paper, we introduce SalsaNext for the uncertainty-aware semantic segmentation of a full 3D LiDAR point cloud in real time. SalsaNext is the next version of SalsaNet, which has an encoder-decoder architecture where the encoder unit has a set of ResNet blocks and the decoder part combines upsampled features from the residual blocks. In contrast to SalsaNet, we introduce a new context module, replace the ResNet encoder blocks with a new residual dilated convolution stack with gradually increasing receptive fields, and add a pixel-shuffle layer in the decoder. Additionally, we switch from stride convolution to average pooling and also apply a central dropout treatment. To directly optimize the Jaccard index, we further combine the weighted cross-entropy loss with the Lovasz-Softmax loss. We finally inject a Bayesian treatment to compute the epistemic and aleatoric uncertainties for each point in the cloud. We provide a thorough quantitative evaluation on the Semantic-KITTI dataset, which demonstrates that the proposed SalsaNext outperforms other state-of-the-art semantic segmentation networks.
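
As a rough illustration of the combined objective described above (a minimal sketch, not the repository's actual implementation; the lovasz_softmax helper is assumed, e.g. the reference implementation by Berman et al.):

    import torch
    import torch.nn.functional as F

    def combined_loss(logits, labels, class_weights, lovasz_softmax):
        """Sketch of the combined objective: weighted CE + Lovasz-Softmax.

        logits:        (B, C, H, W) raw network outputs
        labels:        (B, H, W) integer class labels
        class_weights: (C,) per-class weights (e.g. inverse class frequency)
        lovasz_softmax: assumed helper implementing the Lovasz-Softmax loss
        """
        # Weighted cross-entropy counteracts class imbalance.
        wce = F.cross_entropy(logits, labels, weight=class_weights)
        # Lovasz-Softmax is a differentiable surrogate of the Jaccard index.
        ls = lovasz_softmax(F.softmax(logits, dim=1), labels)
        return wce + ls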

Examples

Example Gif

Video

Inference of Sequence 13

Semantic-KITTI Segmentation Scores

The up-to-date scores can be found on the Semantic-KITTI page.

How to use the code

First create the Anaconda environment and activate it:

    conda env create -f salsanext_cuda10.yml --name salsanext
    conda activate salsanext

To train/eval you can use the following scripts:

  • Training script (you might need to chmod +x the file)
    • We have the following options:
      • -d [String]: Path to the dataset
      • -a [String]: Path to the Architecture configuration file
      • -l [String]: Path to the main log folder
      • -n [String]: Additional name for the experiment
      • -c [String]: GPUs to use (default: no GPU)
      • -u [String]: Whether to train an Uncertainty version of SalsaNext (default false) [Experimental: the uncertainty tests done so far used a pretrained SalsaNext with Deep Uncertainty Estimation]
    • For example, if you have the dataset at /dataset, the architecture config file at /salsanext.yml, and you want to save your logs to /logs to train "salsanext" with 2 GPUs with ids 3 and 4:
      • ./train.sh -d /dataset -a /salsanext.yml -n salsanext -l /logs -c 3,4


  • Eval script (you might need to chmod +x the file)
    • We have the following options:
      • -d [String]: Path to the dataset
      • -p [String]: Path to save the label predictions
      • -m [String]: Path to the saved model
      • -s [String]: Evaluate on the validation or train split (the standard eval runs on both separately)
      • -u [String]: Whether to infer using an Uncertainty model (default false)
      • -c [Int]: Number of Monte Carlo samples to draw (default 30); see the sketch after this list
    • If you want to infer and evaluate a model that you saved to /salsanext/logs/[the desired run], infer/eval only the validation split, and save the label predictions to /pred:
      • ./eval.sh -d /dataset -p /pred -m /salsanext/logs/[the desired run] -s validation -n salsanext
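
Conceptually, the uncertainty mode repeats the forward pass -c times with dropout kept active and averages the outputs. A minimal sketch of that sampling loop (an illustration under these assumptions, not the repository's exact code):

    import torch

    @torch.no_grad()
    def mc_predict(model, scan, num_samples=30):
        # Illustrative Monte Carlo sampling: assumes the model's dropout
        # layers have been left in train mode (see the issues further below).
        probs = torch.stack([model(scan).softmax(dim=1)
                             for _ in range(num_samples)])
        return probs.mean(dim=0), probs.var(dim=0)  # prediction, epistemic spread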

Pretrained Model

SalsaNext

Disclaimer

We based our code on RangeNet++; please go show it some support!

Citation

@misc{cortinhal2020salsanext,
    title={SalsaNext: Fast, Uncertainty-aware Semantic Segmentation of LiDAR Point Clouds for Autonomous Driving},
    author={Tiago Cortinhal and George Tzelepis and Eren Erdal Aksoy},
    year={2020},
    eprint={2003.03653},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
Comments
  • Multiple errors trying to run SalsaNext

    I downloaded the trained model, named it "SalsaNextTrainedModel", and put it in "/home/arl/SalsaNext/TrainEvalResources/". Running the eval script gives:

        Opening arch config file from /home/arl/SalsaNext/TrainEvalResources/SalsaNextTrainedModel
        [Errno 20] Not a directory: '/home/arl/SalsaNext/TrainEvalResources/SalsaNextTrainedModel/arch_cfg.yaml'
        Error opening arch yaml file.

    I do not understand why it is looking for arch_cfg.yaml there. Also arch_cfg.yaml does not exist anywhere in the repo that I cloned.

    1. I have the following, not sure this is correct:

           ./eval.sh -d /home/arl/SalsaNext/TrainEvalResources/Eval-dataset -p /home/arl/SalsaNext/TrainEvalResources/SaveLabelPredictions -m /home/arl/SalsaNext/TrainEvalResources/SalsaNextTrainedModel -s valid -n salsanextExp1 -c 0

       • -d: location = folder of images to test
       • -p: folder of where to put prediction results
       • -m: folder/name of downloaded trained model
       • -s valid: eval on validation set (using -s validation as per the instructions does not work)
       • -n salsanextExp1: name of experiment
       • -c 0: required to avoid error 3 below

    2. ./infer.py: error: argument --monte-carlo/-c: invalid int value: '' — fixed by setting -c 0 in the arg list.

    3. ModuleNotFoundError: No module named 'tasks.semantic.modules.SalsaNextUncertainty' — fixed by changing SalsaNextUncertainty to SalsaNextAdf at line 20 of user.py.

    opened by jfhauris 20
  • Questions about the training process

    Hi, thanks for generously open-sourcing this brilliant project!

    I have two questions for the project:

    1. Does the pretrained model use uncertainty during training? Is it the model that reproduces the 59.5 point-wise mean-IoU in Table I of the paper?

    2. When I train the model myself, I run into a serious overfitting problem. The pictures below are my training-loss and validation-loss curves.
      Why does this problem occur? Did you also have it during training?

    [training and validation loss curves]

    I am looking forward to your reply!

    opened by iris0329 7
  • Size mismatch between pretrained model and current model?

    When I try to use the pretrained model, I get the error message below. I have not changed anything in the model, but apparently the two do not fit together. Has the model been changed? The only thing I changed in the pretrained model is the names of the modules, as described in https://github.com/Halmstad-University/SalsaNext/issues/65. Thanks a lot! [screenshot of the size-mismatch error]

    opened by finnSartoris 5
  • Pretrained model: dead link

    Hello, I'm a French student doing an internship on semantic segmentation, so I need to run your code, but the link to the pretrained model seems dead. If you have the time, could you update the link or send me another? Thank you! (Sorry if this is not the place to ask; I'm not very familiar with GitHub.)

    opened by loukabvn 4
  • Some inference questions about final mean and variance

    Based on the paper "A General Framework for Uncertainty Estimation in Deep Learning", I have questions about your inference code in user.py.

    1. Should the final prediction be the average of proj_output_r or of proj_output2? The equation from the paper is the average over T Monte Carlo predictions. https://github.com/Halmstad-University/SalsaNext/blob/cc8c75dc68d2607d16e2c82be61e7254e5b74a12/train/tasks/semantic/modules/user.py#L157

    2. Is log_var2 the sensor uncertainty? If so, based on the equation, we just need to average it. Why did you do log_var_r.var()? https://github.com/Halmstad-University/SalsaNext/blob/cc8c75dc68d2607d16e2c82be61e7254e5b74a12/train/tasks/semantic/modules/user.py#L159

    Thank you.
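
    For reference, the estimators that paper describes (predictive mean as the average over T Monte Carlo passes, epistemic uncertainty as their variance, aleatoric uncertainty as the averaged predicted sensor variance) can be sketched as follows; the tensor shapes are assumptions for illustration, not the repository's exact code:

        import torch

        def mc_moments(outputs, log_vars):
            # outputs:  (T, B, C, H, W) softmax predictions from T MC passes
            # log_vars: (T, B, C, H, W) predicted log-variances (aleatoric head)
            mean_pred = outputs.mean(dim=0)         # final prediction: MC average
            epistemic = outputs.var(dim=0)          # spread across the MC samples
            aleatoric = log_vars.exp().mean(dim=0)  # averaged predicted sensor variance
            return mean_pred, epistemic, aleatoric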

    opened by fcyeh 4
  • Set dropout to training in Monte Carlo mode during inference

    https://github.com/Halmstad-University/SalsaNext/blob/cc8c75dc68d2607d16e2c82be61e7254e5b74a12/train/tasks/semantic/modules/user.py#L152

    Should we set dropout to training mode when we do Monte Carlo sampling during inference, just like the deep_uncertainty_estimation repo does? https://github.com/uzh-rpg/deep_uncertainty_estimation/blob/0093e12f234ad20da5a97aebb57ba060c8c4ca75/eval.py#L177 Thank you.
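
    For reference, the pattern in the linked code amounts to switching only the dropout modules back to train mode after model.eval() — a minimal sketch:

        import torch.nn as nn

        def enable_mc_dropout(model: nn.Module):
            # Keep dropout stochastic at inference so Monte Carlo samples differ.
            for m in model.modules():
                if isinstance(m, (nn.Dropout, nn.Dropout2d)):
                    m.train()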

    opened by fcyeh 4
  • About segmentation and evaluation method in the test sequences of semanticKITTI dataset

    Hi, thanks for open-sourcing this great project!

    I have 3 questions about the segmentation and evaluation method in the test sequences.

    1.) The predicted values are not saved when I run the eval script. I can now run the training and eval scripts on sequences 00-10. When I ran the training script, directories such as predictions were created in the logs/2020-11-17-23:08 salsanext-cp directory. However, when the eval script was executed, the pred directory for storing the predictions was created, but the contents of pred/sequences/<sequence No.>/predictions were all empty. The run-time command is $ ./eval.sh -d ~/Dataset/SemanticKITTI/dataset -p pred -m logs2/logs/2020-11-17-23:08 salsanext-cp -s valid -c 30 -n salsanext. I don't know what the problem is, so I would appreciate it if you could tell me.

    2.) About the labels of the test sequences. I would like to run testing and evaluation on the test sequences, but I do not know how to carry out the experiment because there are no labels. Are they generated as predictions when I run some program, or are they not necessary? I would appreciate it if you could tell me how to run these scripts.

    3.) How to use visualize.py. First, is there a way to save the results? Also, as in question 2, I don't know how to run it on the unlabeled test sequences. Are the labels something that can be generated by running some program in SalsaNext, or should the option instead point to the predictions that the eval script saves with -p? It seems that visualize.py is not run on the output of SalsaNext, so I would appreciate it if you could tell me how to use it.

    I am very sorry for the amateur questions. Thank you.

    opened by AkinoriKotani 4
  • About the memory occupied by different batch_size

    Hi @TiagoCortinhal, when I tried to use your library to run my experiments, I found a very strange problem. I train the network on 4 GPUs and change the batch_size in salsanext.yml.

    When batch_size == 24, I found that the GPUs do not use much memory:

    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |===============================+======================+======================|
    |   0  Tesla P40           On   | 00000000:02:00.0 Off |                    0 |
    | N/A   47C    P0   157W / 250W |   8841MiB / 22919MiB |     75%      Default |
    +-------------------------------+----------------------+----------------------+
    |   1  Tesla P40           On   | 00000000:03:00.0 Off |                    0 |
    | N/A   50C    P0   160W / 250W |   8283MiB / 22919MiB |     77%      Default |
    +-------------------------------+----------------------+----------------------+
    |   2  Tesla P40           On   | 00000000:83:00.0 Off |                    0 |
    | N/A   47C    P0   151W / 250W |   8273MiB / 22919MiB |     56%      Default |
    +-------------------------------+----------------------+----------------------+
    |   3  Tesla P40           On   | 00000000:84:00.0 Off |                    0 |
    | N/A   46C    P0   146W / 250W |   8285MiB / 22919MiB |     79%      Default |
    +-------------------------------+----------------------+----------------------+
    

    When setting batch_size == 4, I found that the GPU utilization is nearly full even though memory usage drops:

    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |===============================+======================+======================|
    |   0  Tesla P40           On   | 00000000:02:00.0 Off |                    0 |
    | N/A   42C    P0    69W / 250W |   2103MiB / 22919MiB |     98%      Default |
    +-------------------------------+----------------------+----------------------+
    |   1  Tesla P40           On   | 00000000:03:00.0 Off |                    0 |
    | N/A   43C    P0    79W / 250W |   1981MiB / 22919MiB |     60%      Default |
    +-------------------------------+----------------------+----------------------+
    |   2  Tesla P40           On   | 00000000:83:00.0 Off |                    0 |
    | N/A   41C    P0    90W / 250W |   1993MiB / 22919MiB |     86%      Default |
    +-------------------------------+----------------------+----------------------+
    |   3  Tesla P40           On   | 00000000:84:00.0 Off |                    0 |
    | N/A   40C    P0   110W / 250W |   1981MiB / 22919MiB |     73%      Default |
    +-------------------------------+----------------------+----------------------+
    

    I hope you can clear up my confusion. Best wishes!

    opened by 123zhen123 4
  • arch_cfg.yaml not found

    infer.py tries to load arch_cfg.yaml and data_cfg.yaml. Neither file can be found.

        # open arch config file
        try:
            print("Opening arch config file from %s" % FLAGS.model)
            ARCH = yaml.safe_load(open(FLAGS.model + "/arch_cfg.yaml", 'r'))
        except Exception as e:
            print(e)
            print("Error opening arch yaml file.")
            quit()
    
        # open data config file
        try:
            print("Opening data config file from %s" % FLAGS.model)
            DATA = yaml.safe_load(open(FLAGS.model + "/data_cfg.yaml", 'r'))
        except Exception as e:
            print(e)
            print("Error opening data yaml file.")
            quit()
    
    opened by SoftwareApe 4
  • Pre-trained model Inference on CPU : Error during forward pass

    Hello,

    • My aim is to run inference without retraining.
    • I am running on an Intel CPU (i.e. no GPU).
    • I am using the pre-trained model provided in this repo.
    • I got the complete datasets from SemanticKITTI.
    • I set up everything as per the instructions provided.

    When I run the eval script with all variables, I get an error in user.py at line 226 (which I believe is doing a forward pass on the model). The error says:

    "Illegal instruction (core dumped)"

    Because of this, the evaluation script fails when checking the length of pred_names (of course, the inference never finished correctly). I concluded this with some print statements in the user.py and infer.py files.

    Any help or debug tips you can provide? Thank you in advance!

    Terminal output:

        (salanext) ~/zebra/NNs/SalsaNext/SalsaNext-master$ ./eval.sh -d dataset -p logs_preds -m saved_model -s train -c 30

        INTERFACE:
        dataset /home/mipso/zebra/NNs/SalsaNext/SalsaNext-master/dataset
        log /home/mipso/zebra/NNs/SalsaNext/SalsaNext-master/logs_preds
        model /home/mipso/zebra/NNs/SalsaNext/SalsaNext-master/saved_model
        Uncertainty False
        Monte Carlo Sampling 30
        infering train

        Opening arch config file from /home/mipso/zebra/NNs/SalsaNext/SalsaNext-master/saved_model
        Opening data config file from /home/mipso/zebra/NNs/SalsaNext/SalsaNext-master/saved_model
        train 00 train 01 train 02 train 03 train 04 train 05 train 06 train 07 train 09 train 10
        valid 08
        test 11 test 12 test 13 test 14 test 15 test 16 test 17 test 18 test 19 test 20 test 21
        model folder exists! Using model from /home/mipso/zebra/NNs/SalsaNext/SalsaNext-master/saved_model
        Sequences folder exists! Using sequences from /home/mipso/zebra/NNs/SalsaNext/SalsaNext-master/dataset/sequences
        parsing seq 00 parsing seq 01 parsing seq 02 parsing seq 03 parsing seq 04 parsing seq 05 parsing seq 06 parsing seq 07 parsing seq 09 parsing seq 10
        Using 19130 scans from sequences [0, 1, 2, 3, 4, 5, 6, 7, 9, 10]
        Sequences folder exists! Using sequences from /home/mipso/zebra/NNs/SalsaNext/SalsaNext-master/dataset/sequences
        parsing seq 08
        Using 4071 scans from sequences [8]
        Sequences folder exists! Using sequences from /home/mipso/zebra/NNs/SalsaNext/SalsaNext-master/dataset/sequences
        parsing seq 11 parsing seq 12 parsing seq 13 parsing seq 14 parsing seq 15 parsing seq 16 parsing seq 17 parsing seq 18 parsing seq 19 parsing seq 20 parsing seq 21
        Using 20351 scans from sequences [11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21]

        Cleaning point-clouds with kNN post-processing
        kNN parameters: knn: 5 search: 5 sigma: 1.0 cutoff: 1.0 nclasses: 20

        Infering in device: cpu
        *** JS INFO: Entering infer ****
        *** JS INFO: Entering infer_subset ****
        *** JS INFO: self.gpu = False
        *** JS INFO: Entering infer_subset NO uncertainty ****

        Illegal instruction (core dumped)

        finishing infering. Starting evaluating

        INTERFACE:
        Data: /home/mipso/zebra/NNs/SalsaNext/SalsaNext-master/dataset
        Predictions: /home/mipso/zebra/NNs/SalsaNext/SalsaNext-master/logs_preds
        Split: train
        Config: config/labels/semantic-kitti.yaml
        Limit: None

        Opening data config file config/labels/semantic-kitti.yaml
        Ignoring xentropy class 0 in IoU evaluation
        [IOU EVAL] IGNORE: tensor([0])
        [IOU EVAL] INCLUDE: tensor([ 1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19])
        Traceback (most recent call last):
          File "./evaluate_iou.py", line 237, in <module>
            eval(DATA["split"][FLAGS.split], splits, FLAGS.predictions)
          File "./evaluate_iou.py", line 67, in eval
            len(label_names) == len(pred_names))
        AssertionError

    opened by Mipsology 3
  • Downsampling rate

    Hi!

    From your ResBlock class, I can see that you use a constant downsampling rate of 2 by using the nn.AvgPool2d layer with kernel_size=3, stride=2 and padding=1.

    However, in your arXiv paper the first residual block downsamples the width from 2048 to 512, which indicates a downsampling rate of 4. Also, I don't understand how the last layer upsamples the feature map from 1024x64x32 to 2048x64x32, since in your code a Conv2d layer with kernel_size=(1,1) is used there.

    Is this a mistake in the visualization of the architecture?

    Thank you!
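
    The rate-2 behaviour of that pooling configuration is easy to verify with a quick standalone check (independent of the repository's code):

        import torch
        import torch.nn as nn

        pool = nn.AvgPool2d(kernel_size=3, stride=2, padding=1)
        x = torch.randn(1, 32, 64, 2048)  # (B, C, H, W) range-image features
        print(pool(x).shape)              # torch.Size([1, 32, 32, 1024]): H and W both halve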

    opened by benemer 3
  • Errors occur during training.

    Thanks to the authors for their remarkable work!

    When I start to train this network without a pretrained model, the error shown below occurs. Can someone help me? Thanks a lot!

    Command and output:

        /media/lijianguo/data_ssd/coding_test_platfrom/SalsaNext-master/train/tasks/semantic$ python3 train.py -d /media/lijianguo/data_ssd/kittidata/SemanticKITTI/data/dataset -ac /media/lijianguo/data_ssd/coding_test_platfrom/SalsaNext-master/salsanext.yml -l /media/lijianguo/data_ssd/coding_test_platfrom/SalsaNext-master/train_logs

        INTERFACE:
        dataset /media/lijianguo/data_ssd/kittidata/SemanticKITTI/data/dataset
        arch_cfg /media/lijianguo/data_ssd/coding_test_platfrom/SalsaNext-master/salsanext.yml
        data_cfg config/labels/semantic-kitti.yaml
        uncertainty False
        Total of Trainable Parameters: 6.71M
        log /media/lijianguo/data_ssd/coding_test_platfrom/SalsaNext-master/train_logs/logs/2022-4-01-18:46
        pretrained None

        Opening arch config file /media/lijianguo/data_ssd/coding_test_platfrom/SalsaNext-master/salsanext.yml
        Opening data config file config/labels/semantic-kitti.yaml
        Not creating new log file. Using pretrained directory
        No pretrained directory found.
        Copying files to None for further reference.
        unsupported operand type(s) for +: 'NoneType' and 'str'
        Error copying files, check permissions. Exiting...
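
    For context, the reported TypeError is the classic result of concatenating None with a string, which suggests the pretrained-path option defaulted to None; a minimal reproduction:

        pretrained = None                # e.g. an unset CLI flag
        target = pretrained + "/model"   # TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'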

    opened by LiXiang0021 5
  • Interpretation uncertainty values

    Thanks for open-sourcing your code! I have a question about the interpretation of the uncertainty values.

    1.) I guess log_var and proj_output should change places, as here: #34 https://github.com/Halmstad-University/SalsaNext/blob/a02fad97d646d4c132266ab79fbaea3ecfc237ed/train/tasks/semantic/modules/user.py#L154, and proj_argmax = proj_output[0].argmax(dim=0) is missing.

    2.) I evaluated the pretrained model, with the changes listed above, on sequence 08 with SalsaNext uncertainty. As output, I do not get any percentage values for the epistemic uncertainty. Is it intended that the uncertainty values are not output as percentages, since only the mean variance from the Monte Carlo sampling is calculated? https://github.com/Halmstad-University/SalsaNext/blob/a02fad97d646d4c132266ab79fbaea3ecfc237ed/train/tasks/semantic/modules/user.py#L159
    If so, how can the values be interpreted, other than that a higher value reflects a higher uncertainty? If this was not the intention, I know I have an error somewhere.

    opened by finnSartoris 0
  • results on SemanticKitti validation set

    Thanks for the great work! Since they are not in your paper, may I know your per-category results on the SemanticKITTI validation set? It would be very helpful for us to compare with your method and cite your paper.

    opened by Colin97 0
  • Algo Suggestion

    Consider using circular padding at the side edges of the image (especially if the results there are lower). This might improve results by making full use of the receptive field at the edges. Possible downside: stitching or time-alignment issues.
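
    For reference, the suggestion could be prototyped in PyTorch with circular padding applied only along the width (azimuth) axis, e.g.:

        import torch
        import torch.nn.functional as F

        x = torch.randn(1, 32, 64, 2048)             # (B, C, H, W) range image
        x = F.pad(x, (1, 1, 0, 0), mode='circular')  # wrap only the left/right edges
        print(x.shape)                               # torch.Size([1, 32, 64, 2050])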

    opened by EadMan46 0
  • Slow inference with uncertainty

    Hi,

    I tried inferring labels using the pretrained model, and it worked great without uncertainty:

        Network seq 00 scan 000000.label in 0.7649648189544678 sec
        KNN Infered seq 00 scan 000000.label in 0.0006988048553466797 sec
        Network seq 00 scan 000001.label in 0.049358367919921875 sec
        KNN Infered seq 00 scan 000001.label in 0.0001277923583984375 sec
        Network seq 00 scan 000002.label in 0.03960108757019043 sec
        KNN Infered seq 00 scan 000002.label in 0.00012493133544921875 sec
        Network seq 00 scan 000003.label in 0.03216409683227539 sec
        KNN Infered seq 00 scan 000003.label in 0.00012087821960449219 sec
        Network seq 00 scan 000004.label in 0.03187704086303711 sec
        KNN Infered seq 00 scan 000004.label in 0.00012159347534179688 sec
        Network seq 00 scan 000005.label in 0.03268098831176758 sec
        KNN Infered seq 00 scan 000005.label in 0.0001220703125 sec
        Network seq 00 scan 000006.label in 0.035898447036743164 sec
        KNN Infered seq 00 scan 000006.label in 0.0001232624053955078 sec
        Network seq 00 scan 000007.label in 0.03408312797546387 sec
        KNN Infered seq 00 scan 000007.label in 0.0001232624053955078 sec
        Network seq 00 scan 000008.label in 0.032814741134643555 sec
        KNN Infered seq 00 scan 000008.label in 0.00012636184692382812 sec
        Network seq 00 scan 000009.label in 0.0343012809753418 sec
        KNN Infered seq 00 scan 000009.label in 0.0001857280731201172 sec

    Using -u (and requiring a little fix, as mentioned here: https://github.com/Halmstad-University/SalsaNext/issues/12#issuecomment-885596970), it seems to work and generates log_var and uncert label files, but it is much slower: more than 6 seconds per scan, thus about 200 times slower, for 30 iterations.

        Infered seq 00 scan 000000.label in 7.025984764099121 sec 7.025984764099121
        Infered seq 00 scan 000001.label in 6.477065801620483 sec 6.751525282859802
        Infered seq 00 scan 000002.label in 6.456561803817749 sec 6.653204123179118
        Infered seq 00 scan 000003.label in 6.463520765304565 sec 6.60578328371048
        Infered seq 00 scan 000004.label in 6.522738695144653 sec 6.589174365997314
        Infered seq 00 scan 000005.label in 6.484813451766968 sec 6.571780880292256
        Infered seq 00 scan 000006.label in 6.5031116008758545 sec 6.56197098323277
        Infered seq 00 scan 000007.label in 6.512105464935303 sec 6.555737793445587
        Infered seq 00 scan 000008.label in 6.457853555679321 sec 6.544861767027113
        Infered seq 00 scan 000009.label in 6.514480829238892 sec 6.541823673248291
        Infered seq 00 scan 000010.label in 6.4799792766571045 sec 6.536201455376365

    I also reduced the number of iterations with -mc 10 instead of the default 30, but it still takes around 3 seconds per scan.

    Is there a particular reason for such a difference?

    opened by RaphaelLorenzo 1
  • Inconsistent parameters

    Hi

    As you mentioned in issue 17:

    For the number of parameters we used the built-in functions of PyTorch, like so: sum(p.numel() for p in model.parameters() if p.requires_grad). For the FLOPs we used this package: https://github.com/sovrasov/flops-counter.pytorch. Originally posted by @TiagoCortinhal in https://github.com/Halmstad-University/SalsaNext/issues/17#issuecomment-698228698

    Following your advice, I also calculated FLOPs and parameters myself.

    But what is strange is that the parameter count I calculated is 6.71M instead of the 6.73M in the paper. At the same time, the FLOPs match the results in the paper.

    My code is attached:

        import torch
        from ptflops import get_model_complexity_info
        # Assumes the SalsaNext model class from this repo is importable:
        from tasks.semantic.modules.SalsaNext import SalsaNext

        with torch.cuda.device(0):
            model = SalsaNext(nclasses=20)
            # Input is the 5-channel (x, y, z, range, remission) range image, 64 x 2048
            macs, params = get_model_complexity_info(model, (5, 64, 2048), as_strings=True,
                                                     print_per_layer_stat=True, verbose=True)
            print('{:<30}  {:<8}'.format('Computational complexity: ', macs))
            print('{:<30}  {:<8}'.format('Number of parameters: ', params))

        # Computational complexity:       62.84 GMac   (1 Mac = 2 FLOPs)
        # Number of parameters:           6.71 M

    Do you have any suggestions for reproducing the results in the paper?

    Best, Iris

    opened by iris0329 0