Exploring Simple 3D Multi-Object Tracking for Autonomous Driving (ICCV 2021)

Overview

Exploring Simple 3D Multi-Object Tracking for Autonomous Driving

Chenxu Luo, Xiaodong Yang, Alan Yuille
Exploring Simple 3D Multi-Object Tracking for Autonomous Driving, ICCV 2021
[Paper] [Poster] [YouTube]

Getting Started

Installation

Please refer to INSTALL for details.

Data Preparation

python ./tools/create_data.py nuscenes_data_prep --root_path=NUSCENES_TRAINVAL_DATASET_ROOT --version="v1.0-trainval" --nsweeps=10

Training

python -m torch.distributed.launch --nproc_per_node=8 ./tools/train.py examples/point_pillars/configs/nusc_all_pp_centernet_tracking.py --work_dir SAVE_DIR
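
If you have fewer than 8 GPUs, lower --nproc_per_node to match your machine. Note that this shrinks the effective batch size, so the learning rate in the config may need to be scaled down proportionally (a common heuristic, not an official recommendation of this repo). For example, with 4 GPUs:

python -m torch.distributed.launch --nproc_per_node=4 ./tools/train.py examples/point_pillars/configs/nusc_all_pp_centernet_tracking.py --work_dir SAVE_DIR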

Test

In ./model_zoo we provide our trained (pillar-based) model on nuScenes.
Note: We currently only support inference with a single GPU.

python ./tools/val_nusc_tracking.py examples/point_pillars/configs/nusc_all_pp_centernet_tracking.py --checkpoint CHECKPOINTFILE --work_dir SAVE_DIR

Citation

Please cite the following paper if this repo helps your research:

@InProceedings{Luo_2021_ICCV,
    author    = {Luo, Chenxu and Yang, Xiaodong and Yuille, Alan},
    title     = {Exploring Simple 3D Multi-Object Tracking for Autonomous Driving},
    booktitle = {International Conference on Computer Vision (ICCV)},
    year      = {2021}
}

License

Copyright (C) 2021 QCraft. All rights reserved. Licensed under the CC BY-NC-SA 4.0 (Attribution-NonCommercial-ShareAlike 4.0 International). The code is released for academic research use only. For commercial use, please contact [email protected].

Comments
  • It seems that point_pillars_tracking.py is needed

    When I train the model using the command line from the tutorial, I get this:

    Traceback (most recent call last):
      File "./tools/train.py", line 13, in <module>
        from det3d.models import build_detector
      File "/home/lz/task3/simtrack/simtrack/det3d/models/__init__.py", line 13, in <module>
        from .detectors import *  # noqa: F401,F403
      File "/home/lz/task3/simtrack/simtrack/det3d/models/detectors/__init__.py", line 3, in <module>
        from .point_pillars_tracking import PointPillarsTracking
    ModuleNotFoundError: No module named 'det3d.models.detectors.point_pillars_tracking'

    The error occurs because there is no point_pillars_tracking.py in /home/lz/task3/simtrack/simtrack/det3d/models/detectors, only a voxelnet.py whose import has been commented out. Can anybody help me?

    opened by lucksonzhen 3
  • The difference with centerpoint.

    I have one question: is the model structure of SimTrack exactly the same as CenterPoint's, with the only difference being the post-processing? Am I right, or am I missing something important?

    opened by dwy927 2
  • A100 for training simtrack but not reproduce the paper results

    Dear author, I still have some questions.

    1. We train on 4 A100 GPU cards with the rest of the parameters unchanged, but even after 200 epochs the model still does not reach the reported results. Do you have any training tips, such as adjusting the learning rate?
    2. We trained the model that ships with SimTrack for 20 epochs and then continued training from that checkpoint, and the loss keeps decreasing. Why did you stop training at that point? I sincerely look forward to your reply.
    opened by gzgzgz666 1
  • ModuleNotFoundError: No module named 'det3d.models.detectors.point_pillars_tracking'

    File "/mnt/data02/wzy/simtrack/det3d/models/detectors/init.py", line 3, in from .point_pillars_tracking import PointPillarsTracking ModuleNotFoundError: No module named 'det3d.models.detectors.point_pillars_tracking'

    i have looked through the det3d ,and don`t find it. What should i do for it ?

    opened by sourzizi 1
  • No module named 'det3d', how to install det3d?

    I followed install.md; does anyone know how to install det3d? Thanks.

    (simtrack) z@z:~/dev/simtrack$ python ./tools/val_nusc_tracking.py examples/point_pillars/configs/nusc_all_pp_centernet_tracking.py --checkpoint model_zoo/simtrack_pillar.pth  --work_dir work_dirs/
    Traceback (most recent call last):
      File "./tools/val_nusc_tracking.py", line 8, in <module>
        from det3d.datasets import build_dataloader, build_dataset
    ModuleNotFoundError: No module named 'det3d'
    
    opened by DuZzzs 1
  • About motion compensation

    Thanks for your fantastic work. I have a question from reading the code: do you do any motion compensation, i.e. undistort the point cloud? I see that you use the transform matrix between two lidar sweeps to concatenate the points, but I did not see any motion compensation applied when the two sweeps are concatenated; maybe I missed it. If it is not done, is there a distortion problem for objects with high velocity? (A sketch of typical ego-motion compensation follows this comment.)

    Looking forward to your kind reply. Thank you!

    opened by xibinyue 1
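
    For context, ego-motion compensation of a previous sweep is usually a single rigid transform per sweep. A minimal NumPy sketch (illustrative only, not this repo's code; the matrix name prev_to_curr is hypothetical):

    import numpy as np

    def compensate_sweep(prev_points: np.ndarray, prev_to_curr: np.ndarray) -> np.ndarray:
        """Map an (N, 3) point cloud from the previous lidar frame into the
        current frame using a 4x4 homogeneous transform."""
        homo = np.hstack([prev_points, np.ones((prev_points.shape[0], 1))])  # (N, 4)
        return (homo @ prev_to_curr.T)[:, :3]

    This removes the ego vehicle's own motion between sweeps; residual distortion from a fast-moving object's motion within a sweep is a separate effect.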
  • 12GB GPU memory is not enough for training with default config

    I have a GPU card with only 12 GB of video memory, and the process fails with an out-of-memory error when I run the default config. What should I change in the default config to train your model normally? (See the config sketch after this comment.)

    opened by qiyancos 1
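
    A common workaround for limited GPU memory is to reduce the per-GPU batch size in the config. A sketch of the relevant det3d-style fields (an assumption about this config's layout; the exact names and defaults in this repo may differ):

    # In examples/point_pillars/configs/nusc_all_pp_centernet_tracking.py
    data = dict(
        samples_per_gpu=2,   # lower this until training fits in 12 GB
        workers_per_gpu=4,
        # train/val dataset entries unchanged
    )

    If the effective batch size is reduced, scaling the base learning rate down by the same factor (the linear scaling rule) usually keeps training behaviour comparable.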
  • Train pointpillars model with dynamic voxelizer

    Nice work. I noticed that your DynamicPillarFeatureNet takes the raw points as input rather than the output of the voxelizer. How can I train using this pipeline? (A sketch of the idea follows this comment.)

    Thanks!

    opened by YoushaaMurhij 0
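
    For intuition, a dynamic voxelizer skips the fixed-size voxel buffer entirely: raw points are scattered into whichever pillars they fall in, and features are aggregated per occupied pillar. A minimal PyTorch sketch of that mean aggregation (illustrative only, not the repo's DynamicPillarFeatureNet):

    import torch

    def dynamic_pillar_mean(points, pc_range, voxel_size):
        # points: (N, C) float tensor with x, y in the first two channels,
        # assumed already cropped to pc_range = [x0, y0, z0, x1, y1, z1].
        x_idx = ((points[:, 0] - pc_range[0]) / voxel_size[0]).long()
        y_idx = ((points[:, 1] - pc_range[1]) / voxel_size[1]).long()
        nx = int(round((pc_range[3] - pc_range[0]) / voxel_size[0]))
        flat = y_idx * nx + x_idx                              # one id per pillar
        uniq, inv = torch.unique(flat, return_inverse=True)    # occupied pillars only
        sums = torch.zeros(len(uniq), points.shape[1]).index_add_(0, inv, points)
        cnts = torch.zeros(len(uniq)).index_add_(0, inv, torch.ones(len(points)))
        coords = torch.stack([uniq // nx, uniq % nx], dim=1)   # (row, col) grid cells
        return sums / cnts.unsqueeze(1), coords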
  • tracking_batch_hm = (batch_hm + prev_hm[task_id]) / 2.0

    Hi, author! Regarding tracking_batch_hm = (batch_hm + prev_hm[task_id]) / 2.0:

    I don't understand the actual physical meaning of tracking_batch_hm, nor why we need to compute it this way instead of directly using batch_hm or prev_hm.

    I have another question: if the displacement of an object is relatively large, its position in the previous centerness map will be far from its position in the current centerness map (in other words, the object's responses in the two maps may not intersect at all).

    So after the NMS operation, will this object be considered a new object? (See the sketch after this comment.)

    opened by JayChan-USTC 0
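
    One reading of that line, offered as interpretation rather than a confirmed answer: prev_hm is the previous frame's centerness map updated to the current frame, so averaging it with batch_hm keeps a peak strong (and tied to its existing track identity) when both maps support it, while a peak present only in batch_hm surfaces at half strength as a new-born object. A toy illustration (hypothetical shapes, not the repo's code):

    import torch

    num_classes, H, W = 10, 128, 128
    batch_hm = torch.rand(1, num_classes, H, W)  # current-frame centerness map
    prev_hm = torch.rand(1, num_classes, H, W)   # previous map, motion-updated to now

    # Peaks supported by both frames stay strong and keep their identity;
    # peaks seen only in the current frame start at half confidence.
    tracking_batch_hm = (batch_hm + prev_hm) / 2.0

    On the large-displacement question: if the motion update cannot bring the old peak near the new one, the averaged response at the new location is indeed halved, so whether the object survives as the same track depends on the motion update being accurate enough.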
  • No recurrence nuscenes experiments

    Dear authors: I really appreciate your great work, and I'm trying to use it in some of my projects. I trained your model on nuScenes for 187 epochs on 4 RTX 3090 GPUs, but the tracking result is bad, as shown in the attached picture. What could be the cause?

    Thanks for your answer!

    opened by gzgzgz666 1
  • KeyError: 'PointPillars is not in the detector registry'

    Hello, when I tried to train a model I got this error; the traceback follows:

    2022-04-24 18:16:11,886 - INFO - Distributed training: False
    2022-04-24 18:16:11,886 - INFO - torch.backends.cudnn.benchmark: False
    2022-04-24 18:16:11,900 - INFO - Backup source files to SAVE_DIR/det3d
    Traceback (most recent call last):
      File "./tools/train.py", line 146, in <module>
        main()
      File "./tools/train.py", line 119, in main
        model = build_detector(cfg.model, train_cfg=cfg.train_cfg, test_cfg=cfg.test_cfg)
      File "/simtrack/det3d/models/builder.py", line 53, in build_detector
        return build(cfg, DETECTORS, dict(train_cfg=train_cfg, test_cfg=test_cfg))
      File "/simtrack/det3d/models/builder.py", line 21, in build
        return build_from_cfg(cfg, registry, default_args)
      File "/simtrack/det3d/utils/registry.py", line 66, in build_from_cfg
        "{} is not in the {} registry".format(obj_type, registry.name)
    KeyError: 'PointPillars is not in the detector registry'
    Traceback (most recent call last):
      File "/anaconda3/envs/simtrack/lib/python3.6/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/anaconda3/envs/simtrack/lib/python3.6/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/anaconda3/envs/simtrack/lib/python3.6/site-packages/torch/distributed/launch.py", line 260, in <module>
        main()
      File "/anaconda3/envs/simtrack/lib/python3.6/site-packages/torch/distributed/launch.py", line 256, in main
        cmd=cmd)
    subprocess.CalledProcessError: Command '['/anaconda3/envs/simtrack/bin/python', '-u', './tools/train.py', '--local_rank=0', 'examples/point_pillars/configs/nusc_all_pp_centernet_tracking.py', '--work_dir', 'SAVE_DIR']' returned non-zero exit status 1.

    The command was: python -m torch.distributed.launch --nproc_per_node=1 ./tools/train.py examples/point_pillars/configs/nusc_all_pp_centernet_tracking.py --work_dir SAVE_DIR

    opened by zhaopengkang 0
  • val_nusc_tracking.py AssertionError

    I used the default model to evaluate, but triggered a warning and an AssertionError; I sincerely look forward to your answer. Also, when will you release the full version of the code?

    python ./tools/val_nusc_tracking.py examples/point_pillars/configs/nusc_all_pp_centernet_tracking.py --checkpoint model_zoo/simtrack_pillar.pth --work_dir /data/simtrack_output/

    Use HM Bias: -2.19
    ======
    Loading NuScenes tables for version v1.0-trainval...
    23 category,
    8 attribute,
    4 visibility,
    64386 instance,
    12 sensor,
    10200 calibrated_sensor,
    2631083 ego_pose,
    68 log,
    850 scene,
    34149 sample,
    2631083 sample_data,
    1166187 sample_annotation,
    4 map,
    Done loading in 135.2 seconds.

    Reverse indexing ...
    Done reverse indexing in 10.4 seconds.

    /data/simtrack/det3d/core/bbox/geometry.py:160: NumbaWarning:
    Compilation is falling back to object mode WITH looplifting enabled because Function "points_in_convex_polygon_jit" failed type inference due to: No implementation of function Function(<built-in function getitem>) found for signature:
        getitem(array(float64, 3d, C), Tuple(slice<a:b>, list(int64)<iv=None>, slice<a:b>))
    There are 22 candidate implementations:
      - Of which 20 did not match due to: Overload of function 'getitem': no match with argument(s) '(array(float64, 3d, C), Tuple(slice<a:b>, list(int64)<iv=None>, slice<a:b>))'.
      - Of which 2 did not match due to: Overload in function 'GetItemBuffer.generic' (numba/core/typing/arraydecl.py, line 166): rejected because the implementation raised NumbaTypeError: unsupported array index type list(int64)<iv=None> in Tuple(slice<a:b>, list(int64)<iv=None>, slice<a:b>).
    During: typing of intrinsic-call at /data/simtrack/det3d/core/bbox/geometry.py (179), i.e. the index [num_points_of_polygon - 1] + list(range(num_points_of_polygon - 1)) in points_in_convex_polygon_jit.

    /data/simtrack/det3d/core/bbox/geometry.py:160: NumbaWarning: Compilation is falling back to object mode WITHOUT looplifting enabled because Function "points_in_convex_polygon_jit" failed type inference due to: Cannot determine Numba type of <class 'numba.core.dispatcher.LiftedLoop'> (geometry.py, line 196, the "for i in range(num_points):" loop).

    numba/core/object_mode_passes.py:152: NumbaWarning: Function "points_in_convex_polygon_jit" was compiled in object mode without forceobj=True, but has lifted loops (geometry.py, line 171, "num_points_of_polygon = polygon.shape[1]").

    numba/core/object_mode_passes.py:162: NumbaDeprecationWarning: Fall-back from the nopython compilation path to the object mode compilation path has been detected; this is deprecated behaviour. For more information visit https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit

    Loading NuScenes tables for version v1.0-trainval...
    23 category,
    8 attribute,
    4 visibility,
    64386 instance,
    12 sensor,
    10200 calibrated_sensor,
    2631083 ego_pose,
    68 log,
    850 scene,
    34149 sample,
    2631083 sample_data,
    1166187 sample_annotation,
    4 map,
    Done loading in 35.9 seconds.

    Reverse indexing ...
    Done reverse indexing in 9.5 seconds.

    Finish generate predictions for testset, save to /data/simtrack_output/tracking_results.json

    Loading NuScenes tables for version v1.0-trainval...
    23 category,
    8 attribute,
    4 visibility,
    64386 instance,
    12 sensor,
    10200 calibrated_sensor,
    2631083 ego_pose,
    68 log,
    850 scene,
    34149 sample,
    2631083 sample_data,
    1166187 sample_annotation,
    4 map,
    Done loading in 36.0 seconds.

    Reverse indexing ...
    Done reverse indexing in 8.5 seconds.
    ======
    Initializing nuScenes tracking evaluation
    Loaded results from /data/simtrack_output/tracking_results.json. Found detections for 6019 samples.
    Loading annotations for val split from nuScenes version: v1.0-trainval
    Loaded ground truth annotations for 6019 samples.
    Filtering tracks
    => Original number of boxes: 227984
    => After distance based filtering: 190099
    => After LIDAR points based filtering: 190099
    => After bike rack filtering: 189972
    Filtering ground truth tracks
    => Original number of boxes: 142261
    => After distance based filtering: 103564
    => After LIDAR points based filtering: 93885
    => After bike rack filtering: 93875
    Accumulating metric data...
    Computing metrics for class bicycle...

    Computed thresholds:

                MOTAR   MOTP    Recall  Frames  GT      GT-Mtch GT-Miss GT-IDS  Pred    Pred-TP Pred-FP Pred-IDS
    thr_0.1681  0.000   0.278   0.507   1923    1993    971     982     40      2431    971     1420    40
    thr_0.1975  0.000   0.271   0.488   1769    1993    939     1021    33      1999    939     1027    33
    thr_0.2212  0.127   0.266   0.462   1686    1993    893     1073    27      1700    893     780     27
    thr_0.2547  0.450   0.262   0.441   1546    1993    857     1114    22      1350    857     471     22
    thr_0.2824  0.535   0.262   0.414   1514    1993    804     1167    22      1200    804     374     22
    thr_0.2922  0.551   0.260   0.395   1501    1993    766     1205    22      1132    766     344     22
    thr_0.3012  0.538   0.277   0.368   1490    1993    712     1260    21      1062    712     329     21
    thr_0.3335  0.603   0.266   0.346   1470    1993    673     1303    17      957     673     267     17
    thr_0.3816  0.741   0.267   0.316   1422    1993    617     1364    12      789     617     160     12
    thr_0.4070  0.769   0.248   0.293   1413    1993    575     1410    8       716     575     133     8
    thr_0.4231  0.781   0.241   0.276   1407    1993    544     1443    6       669     544     119     6
    thr_0.4741  0.841   0.231   0.243   1385    1993    479     1508    6       561     479     76      6
    thr_0.4873  0.837   0.223   0.221   1385    1993    435     1553    5       511     435     71      5
    thr_0.5002  0.860   0.206   0.199   1378    1993    394     1596    3       452     394     55      3
    thr_0.5331  0.926   0.202   0.183   1351    1993    363     1628    2       392     363     27      2
    thr_0.5464  0.944   0.199   0.153   1347    1993    303     1688    2       322     303     17      2
    thr_0.5668  0.940   0.206   0.134   1347    1993    266     1726    1       283     266     16      1
    thr_0.5879  0.956   0.207   0.104   1343    1993    206     1786    1       216     206     9       1

    Traceback (most recent call last): File "./tools/val_nusc_tracking.py", line 202, in tracking() File "./tools/val_nusc_tracking.py", line 148, in tracking dataset.evaluation_tracking(copy.deepcopy(predictions), output_dir=args.work_dir, testset=False) File "/data/simtrack/det3d/datasets/nuscenes/nuscenes.py", line 382, in evaluation_tracking metrics_summary = nusc_eval.main() File "/home/yangjinrong/miniconda3/envs/det3d/lib/python3.7/site-packages/nuscenes/eval/tracking/evaluate.py", line 205, in main metrics, metric_data_list = self.evaluate() File "/home/yangjinrong/miniconda3/envs/det3d/lib/python3.7/site-packages/nuscenes/eval/tracking/evaluate.py", line 135, in evaluate accumulate_class(class_name) File "/home/yangjinrong/miniconda3/envs/det3d/lib/python3.7/site-packages/nuscenes/eval/tracking/evaluate.py", line 131, in accumulate_class curr_md = curr_ev.accumulate() File "/home/yangjinrong/miniconda3/envs/det3d/lib/python3.7/site-packages/nuscenes/eval/tracking/algo.py", line 156, in accumulate assert unachieved_thresholds + duplicate_thresholds + len(thresh_metrics) == self.num_thresholds AssertionError

    opened by yancie-yjr 0
  • 8 2080ti GPU using default cfg to train,but triger CUDA out of memory

    2022-03-17 14:21:36,906 - INFO - Start running, host: yangjinrong@tracking-q5x64-32246-worker-0, work_dir: /data/simtrack_output
    2022-03-17 14:21:36,907 - INFO - workflow: [('train', 1), ('val', 1)], max: 20 epochs
    Traceback (most recent call last):
    File "./tools/train.py", line 141, in
    main()
    File "./tools/train.py", line 136, in main
    logger=logger,
    File "/data/simtrack/det3d/torchie/apis/train.py", line 206, in train_detector
    trainer.run(data_loaders, cfg.workflow, cfg.total_epochs, local_rank=cfg.local_rank)
    File "/data/simtrack/det3d/torchie/trainer/trainer.py", line 527, in run
    epoch_runner(data_loaders[i], self.epoch, **kwargs)
    File "/data/simtrack/det3d/torchie/trainer/trainer.py", line 393, in train
    self.model, data_batch, train_mode=True, **kwargs
    File "/data/simtrack/det3d/torchie/trainer/trainer.py", line 356, in batch_processor
    losses = model(example, return_loss=True)
    File "/home/yangjinrong/miniconda3/envs/det3d/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs)
    File "/home/yangjinrong/miniconda3/envs/det3d/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 511, in forward output = self.module(*inputs[0], **kwargs[0])
    File "/home/yangjinrong/miniconda3/envs/det3d/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs)
    File "/data/simtrack/det3d/models/detectors/point_pillars.py", line 48, in forward
    x = self.extract_feat(data)
    File "/data/simtrack/det3d/models/detectors/point_pillars.py", line 29, in extract_feat
    x = self.neck(x)
    File "/home/yangjinrong/miniconda3/envs/det3d/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs)
    File "/data/simtrack/det3d/models/necks/rpn.py", line 142, in forward
    ups.append(self.deblocksi - self._upsample_start_idx)
    File "/home/yangjinrong/miniconda3/envs/det3d/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs)
    File "/data/simtrack/det3d/models/utils/misc.py", line 82, in forward
    input = module(input)
    File "/home/yangjinrong/miniconda3/envs/det3d/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs)
    File "/home/yangjinrong/miniconda3/envs/det3d/lib/python3.7/site-packages/torch/nn/modules/activation.py", line 102, in forward
    return F.relu(input, inplace=self.inplace)
    File "/home/yangjinrong/miniconda3/envs/det3d/lib/python3.7/site-packages/torch/nn/functional.py", line 1119, in relu
    result = torch.relu(input)
    RuntimeError: CUDA out of memory. Tried to allocate 64.00 MiB (GPU 2; 10.76 GiB total capacity; 9.76 GiB already allocated; 47.44 MiB free; 9.88 GiB reserved in total by PyTorch) ^CProcess Process-10:
    ^CProcess Process-9:
    Process Process-9:
    Process Process-9:
    Process Process-9:
    Process Process-3:
    Process Process-2:
    Process Process-5:
    Traceback (most recent call last):
    File "/home/yangjinrong/miniconda3/envs/det3d/lib/python3.7/subprocess.py", line 1019, in wait
    Process Process-2:
    Process Process-10:
    Process Process-5:
    Process Process-10:
    Process Process-2:
    Process Process-10:
    Process Process-5:
    Process Process-1:
    Process Process-7:
    Process Process-5:
    Process Process-9:
    Process Process-5:
    Process Process-4:
    Process Process-8:
    Process Process-8:
    Process Process-8:
    Process Process-4:
    Process Process-4:
    Process Process-1:
    Process Process-3:
    Process Process-1:
    Process Process-6:
    Process Process-1:
    Process Process-7:
    Process Process-7:
    return self._wait(timeout=timeout)
    File "/home/yangjinrong/miniconda3/envs/det3d/lib/python3.7/subprocess.py", line 1653, in _wait
    (pid, sts) = self._try_wait(0)
    File "/home/yangjinrong/miniconda3/envs/det3d/lib/python3.7/subprocess.py", line 1611, in _try_wait
    (pid, sts) = os.waitpid(self.pid, wait_flags)
    KeyboardInterrupt

    During handling of the above exception, another exception occurred:
    Traceback (most recent call last):
    File "/home/yangjinrong/miniconda3/envs/det3d/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "main", mod_spec)
    File "/home/yangjinrong/miniconda3/envs/det3d/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals) File "/home/yangjinrong/miniconda3/envs/det3d/lib/python3.7/site-packages/torch/distributed/launch.py", line 261, in
    main()
    File "/home/yangjinrong/miniconda3/envs/det3d/lib/python3.7/site-packages/torch/distributed/launch.py", line 254, in main
    process.wait()
    File "/home/yangjinrong/miniconda3/envs/det3d/lib/python3.7/subprocess.py", line 1032, in wait
    self._wait(timeout=sigint_timeout) File "/home/yangjinrong/miniconda3/envs/det3d/lib/python3.7/subprocess.py", line 1647, in _wait time.sleep(delay) KeyboardInterruptq

    opened by yancie-yjr 1
  • Visualization of the result

    Dear authors: I really appreciate your great work, and I'm trying to use it in some of my projects. I am wondering how to visualize the results like the first GIF in your README. Thanks for your answer!

    opened by LeoDuhz 0
Owner
QCraft