Learning to Track with Object Permanence

Overview

A video-based MOT approach capable of tracking through full occlusions:

Learning to Track with Object Permanence,
Pavel Tokmakov, Jie Li, Wolfram Burgard, Adrien Gaidon,
ICCV 2021 (arXiv 2103.14258)

@inproceedings{tokmakov2021learning,
  title={Learning to Track with Object Permanence},
  author={Tokmakov, Pavel and Li, Jie and Burgard, Wolfram and Gaidon, Adrien},
  booktitle={ICCV},
  year={2021}
}

Abstract

Tracking by detection, the dominant approach for online multi-object tracking, alternates between localization and association steps. As a result, it strongly depends on the quality of instantaneous observations, often failing when objects are not fully visible. In contrast, tracking in humans is underlined by the notion of object permanence: once an object is recognized, we are aware of its physical existence and can approximately localize it even under full occlusions. In this work, we introduce an end-to-end trainable approach for joint object detection and tracking that is capable of such reasoning. We build on top of the recent CenterTrack architecture, which takes pairs of frames as input, and extend it to videos of arbitrary length. To this end, we augment the model with a spatio-temporal, recurrent memory module, allowing it to reason about object locations and identities in the current frame using all the previous history. It is, however, not obvious how to train such an approach. We study this question on a new, large-scale, synthetic dataset for multi-object tracking, which provides ground truth annotations for invisible objects, and propose several approaches for supervising tracking behind occlusions. Our model, trained jointly on synthetic and real data, outperforms the state of the art on KITTI and MOT17 datasets thanks to its robustness to occlusions.
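
To make the recurrent memory module concrete, here is a minimal ConvGRU cell sketch in PyTorch. It only illustrates the general idea of a spatial memory whose gates are 2D convolutions over the backbone feature map; the class name, layer sizes, and gate convention are illustrative assumptions, not the exact ConvGRU used in this repo (the actual implementation is adapted from a third-party repo, see License).

    import torch
    import torch.nn as nn

    class ConvGRUCell(nn.Module):
        """Toy convolutional GRU cell: the gates are 2D convolutions, so the
        hidden state h keeps the spatial layout of the feature map."""

        def __init__(self, in_ch, hid_ch, k=3):
            super().__init__()
            # one conv produces both the update (z) and reset (r) gates
            self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, k, padding=k // 2)
            self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=k // 2)

        def forward(self, x, h=None):
            if h is None:  # first frame of the clip: start from an empty memory
                h = x.new_zeros(x.size(0), self.cand.out_channels, x.size(2), x.size(3))
            z, r = torch.sigmoid(self.gates(torch.cat([x, h], dim=1))).chunk(2, dim=1)
            h_new = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
            return (1 - z) * h + z * h_new

    # Carrying h across frames is what lets the model keep reasoning about
    # an object that is currently occluded.
    cell = ConvGRUCell(in_ch=64, hid_ch=64)
    h = None
    for t in range(10):                    # a toy 10-frame clip
        x = torch.randn(1, 64, 28, 28)     # stand-in for backbone features
        h = cell(x, h)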

Installation

Please refer to INSTALL.md for installation instructions.

Benchmark Evaluation and Training

After installation, follow the instructions in DATA.md to set up the datasets. Then check GETTING_STARTED.md to reproduce the results in the paper. We provide scripts for all the experiments in the experiments folder.

License

PermaTrack is developed on top of CenterTrack. Both codebases are released under the MIT License. Some of the CenterTrack code comes from third parties under different licenses; please check the CenterTrack repo for details. In addition, this repo uses py-motmetrics for MOT evaluation, nuscenes-devkit for nuScenes evaluation and preprocessing, and the TAO codebase for computing Track AP. The ConvGRU implementation is adapted from this repo. See NOTICE for details. Please also note the license of each dataset: most of the datasets used in this project are under non-commercial licenses.

Comments
  • aws model

    Thanks for releasing the model. Just a quick question, since I'm new to AWS: how do I get the model from the AWS S3 bucket? The link doesn't seem to be downloadable from the AWS web page. (A download sketch follows this item.)

    opened by Di-Gu 2
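
    Not an official answer, but files in a publicly readable S3 bucket can usually be fetched programmatically; a minimal boto3 sketch, with the bucket name and key as placeholders to be taken from the release link:

        import boto3
        from botocore import UNSIGNED
        from botocore.config import Config

        # Placeholders: take <bucket> and <key> from the model URL, which has
        # the form https://<bucket>.s3.amazonaws.com/<key>. UNSIGNED skips AWS
        # credentials, which works for publicly readable buckets.
        s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))
        s3.download_file("some-bucket", "path/to/model.pth", "model.pth")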
  • Reproduce results on KITTI test

    Hi,

    Could you provide the pretrained model to reproduce the tracking results on the KITTI test set? I only found pretrained weights for tracking on the KITTI validation set.

    opened by noahcao 1
  • test runtime error

    Kindly help me fix this error (a memory-saving sketch follows this item):

    [progress lines for frames 2462-2467 omitted; all report similar per-frame timings]
    mot17_fulltrain |#### | [2468/17757]|Tot: 0:03:39 |ETA: 0:19:46 |tot 0.077s (0.089s) |load 0.000s (0.000s) |pre 0.001s (0.001s) |net 0.069s (0.082s) |dec 0.002s (0.003s) |post 0.003s (0.002s) |merge 0.000s (0.000s) |track 0.001s (0.001s)
    Traceback (most recent call last):
      File "test.py", line 195, in <module>
        prefetch_test(opt)
      File "test.py", line 109, in prefetch_test
        ret = detector.run(pre_processed_images)
      File "/home/mca/Downloads/CenterTrackP/src/lib/detector.py", line 144, in run
        images, self.pre_images, self.pre_hms, pre_inds, return_time=True, original_batch=pre_processed_images)
      File "/home/mca/Downloads/CenterTrackP/src/lib/detector.py", line 414, in process
        output, self.h = self.model.step(batch_list, self.h)
      File "/home/mca/Downloads/CenterTrackP/src/lib/model/networks/base_model.py", line 117, in step
        feats = self.imgpre2feats(x, None, torch.zeros(1))
      File "/home/mca/Downloads/CenterTrackP/src/lib/model/networks/dla.py", line 678, in imgpre2feats
        y = self.do_tensor_pass(x, pre_img, pre_hm)
      File "/home/mca/Downloads/CenterTrackP/src/lib/model/networks/dla.py", line 633, in do_tensor_pass
        x = self.dla_up(x)
      File "/home/mca/anaconda3/envs/CenterTrack_new/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/mca/Downloads/CenterTrackP/src/lib/model/networks/dla.py", line 574, in forward
        ida(layers, len(layers) - i - 2, len(layers))
      File "/home/mca/anaconda3/envs/CenterTrack_new/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/mca/Downloads/CenterTrackP/src/lib/model/networks/dla.py", line 547, in forward
        layers[i] = node(layers[i] + layers[i - 1])
      File "/home/mca/anaconda3/envs/CenterTrack_new/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/mca/Downloads/CenterTrackP/src/lib/model/networks/dla.py", line 518, in forward
        x = self.conv(x)
      File "/home/mca/anaconda3/envs/CenterTrack_new/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/mca/Downloads/CenterTrackP/src/lib/model/networks/DCNv2/dcn_v2.py", line 128, in forward
        self.deformable_groups)
      File "/home/mca/Downloads/CenterTrackP/src/lib/model/networks/DCNv2/dcn_v2.py", line 31, in forward
        ctx.deformable_groups)
    RuntimeError: CUDA out of memory. Tried to allocate 72.00 MiB (GPU 0; 7.93 GiB total capacity; 6.72 GiB already allocated; 19.31 MiB free; 6.99 GiB reserved in total by PyTorch)
    [frames #0-#21 of the accompanying C++ backtrace, through libc10, libtorch, and the DCNv2 extension (_ext.cpython-36m-x86_64-linux-gnu.so), omitted]

    opened by ssbilakeri 1
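
    A common mitigation for inference-time CUDA OOM, offered as an assumption about this codebase rather than a confirmed fix: make sure no autograd state is kept in the test loop, e.g. by wrapping the detector call from test.py in torch.no_grad() (detector and pre_processed_images are the names from the traceback above):

        import torch

        # Inference does not need gradients; skipping them frees the
        # activation memory PyTorch would otherwise keep for backprop.
        with torch.no_grad():
            ret = detector.run(pre_processed_images)

    Failing that, reducing the input resolution or freeing the GPU from other processes are the usual fallbacks; the log shows the ~8 GiB card already almost fully reserved.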
  • Error while testing the code

    When I run the test.py file I face the error below. Please help me fix it.

    RuntimeError: CUDA out of memory. Tried to allocate 72.00 MiB (GPU 0; 7.93 GiB total capacity; 6.65 GiB already allocated; 48.38 MiB free; 6.92 GiB reserved in total by PyTorch)
    [frames #0-#21 of the accompanying C++ backtrace, through libc10, libtorch, and the DCNv2 extension in /home/mca/Downloads/perm-test, omitted; it matches the backtrace in the previous issue]

    opened by ssbilakeri 1
  • Data directory missing

    The initial repository does not contain a data directory. As a result, scripts such as get_mot_17.sh execute incorrectly: the mkdir ../../data/mot17 call throws an error. (A workaround sketch follows this item.)

    opened by timmeinhardt 1
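
    Until the script is fixed, one workaround sketch (the target path is assumed from the script's mkdir call, relative to the repository root) is to create the directory tree first:

        import os

        # get_mot_17.sh runs `mkdir ../../data/mot17`, which fails when `data`
        # itself is missing; makedirs creates parents and tolerates reruns.
        os.makedirs(os.path.join("data", "mot17"), exist_ok=True)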
  • PD dataset

    I cannot find a valid email address for requesting the PD dataset. I tried to get in contact with Adrien and Pavel, but I did not receive a response. Could you provide an email address for that?

    opened by RaymondByc 1
  • How to reproduce paper results for KITTI?

    Hi, I noticed that the MOTA on the KITTI dataset is ~90 in the paper, but the pretrained model only gives 73 MOTA. How should the model be trained to reach 90 MOTA? Or is there an explanation for the gap? Thanks!

    opened by KevinZzy129 0
  • How to create results to submit?

    Hi, thanks so much for the excellent work.

    I'd like to know whether you have provided a script to create submission results. For example, if I'd like to reproduce the results on the KITTI leaderboard, how should I create the result files to submit? (A format sketch follows this item.)

    opened by noahcao 0
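
    For reference, the KITTI tracking server expects one plain-text file per sequence (0000.txt, 0001.txt, ...) with one object per line in the public devkit's field order; a minimal writer sketch under that assumption (write_kitti_seq and the tracks fields are hypothetical, not objects from this repo):

        # Fields per line: frame, track id, type, truncation, occlusion, alpha,
        # 2D bbox (left, top, right, bottom), 3D dims/location/rotation_y
        # (set to the devkit's "unused" sentinels for 2D-only results), score.
        def write_kitti_seq(path, tracks):
            with open(path, "w") as f:
                for t in tracks:
                    f.write(
                        f"{t['frame']} {t['id']} {t['type']} -1 -1 -10 "
                        f"{t['x1']:.2f} {t['y1']:.2f} {t['x2']:.2f} {t['y2']:.2f} "
                        "-1 -1 -1 -1000 -1000 -1000 -10 "
                        f"{t['score']:.4f}\n"
                    )

        write_kitti_seq("0000.txt", [
            {"frame": 0, "id": 1, "type": "Car",
             "x1": 100.0, "y1": 120.0, "x2": 180.0, "y2": 160.0, "score": 0.92},
        ])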
  • error while running OC-SORT on KITTI

    I got this bug while running "python tools/run_ocsort_public.py --hp --out_path kitti_test --dataset kitti --raw_results_path exps/permatrack_kitti_test":

        ImportError: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.26' not found (required by /home/anda/anaconda3/envs/cuong/lib/python3.10/site-packages/cv2/python-3.10/cv2.cpython-310-x86_64-linux-gnu.so)

    Could you help me solve this? Thank you so much.

    opened by cuonga1cvp 0
  • RuntimeError: cublas runtime error : the GPU program failed to execute at /pytorch/aten/src/THC/THCBlas.cu:343

    When I ran

        python test.py tracking --exp_id kitti_fulltrain --dataset kitti_tracking --dataset_version test --track_thresh 0.4 --load_model ../models/kitti_fulltrain_model_last.pth --resume --is_recurrent --gru_filter_size 3 --num_gru_layers 1 --visibility --visibility_thresh_eval 0.2 --stream_test --flip_test --trainval

    I got the following problem:

        Traceback (most recent call last):
          File "test.py", line 301, in <module>
            prefetch_test(opt)
          File "test.py", line 114, in prefetch_test
            ret = detector.run(pre_processed_images)
          File "/home/dehazing/permatrack/src/lib/detector.py", line 149, in run
            images, self.pre_images, self.pre_hms, pre_inds, return_time=True, original_batch=pre_processed_images)
          File "/home/dehazing/permatrack/src/lib/detector.py", line 419, in process
            output, self.h = self.model.step(batch_list, self.h)
          File "/home/dehazing/permatrack/src/lib/model/networks/base_model.py", line 117, in step
            feats = self.imgpre2feats(x, None, torch.zeros(1))
          File "/home/dehazing/permatrack/src/lib/model/networks/dla.py", line 678, in imgpre2feats
            y = self.do_tensor_pass(x, pre_img, pre_hm)
          File "/home/dehazing/permatrack/src/lib/model/networks/dla.py", line 633, in do_tensor_pass
            x = self.dla_up(x)
          File "/home/dehazing/anaconda3/envs/CenterTrack/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
            result = self.forward(*input, **kwargs)
          File "/home/dehazing/permatrack/src/lib/model/networks/dla.py", line 574, in forward
            ida(layers, len(layers) - i - 2, len(layers))
          File "/home/dehazing/anaconda3/envs/CenterTrack/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
            result = self.forward(*input, **kwargs)
          File "/home/dehazing/permatrack/src/lib/model/networks/dla.py", line 545, in forward
            layers[i] = upsample(project(layers[i]))
          File "/home/dehazing/anaconda3/envs/CenterTrack/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
            result = self.forward(*input, **kwargs)
          File "/home/dehazing/permatrack/src/lib/model/networks/dla.py", line 518, in forward
            x = self.conv(x)
          File "/home/dehazing/anaconda3/envs/CenterTrack/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
            result = self.forward(*input, **kwargs)
          File "/home/dehazing/permatrack/src/lib/model/networks/DCNv2/dcn_v2.py", line 128, in forward
            self.deformable_groups)
          File "/home/dehazing/permatrack/src/lib/model/networks/DCNv2/dcn_v2.py", line 31, in forward
            ctx.deformable_groups)
        RuntimeError: cublas runtime error : the GPU program failed to execute at /pytorch/aten/src/THC/THCBlas.cu:343

    My OS is Ubuntu 16.04 with CUDA 10.1, PyTorch 1.4.0, and an Nvidia Quadro P6000. Please help me! Thank you very much! (A diagnostic sketch follows this item.)

    opened by BrainPotter 0
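
    A cublas failure this deep in the stack usually points at the environment (the PyTorch build vs. the driver and GPU) rather than at the tracking code; a small diagnostic sketch, independent of this repo, to check whether cuBLAS works at all in the environment:

        import torch

        # Report the wheel's CUDA build and the visible GPU, then exercise
        # cuBLAS with a tiny matmul; if this also fails, the problem is the
        # setup, not the repo.
        print(torch.__version__, torch.version.cuda)
        print(torch.cuda.get_device_name(0), torch.cuda.get_device_capability(0))
        a = torch.randn(64, 64, device="cuda")
        print((a @ a).sum().item())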
  • There are some questions about the details of the paper

    Hi, thanks for your awesome work. I have some questions about the details of the paper: in the design of the loss, why use heatmap-like supervision for object visibility instead of nuscenes_att-like supervision?

    opened by ZZXin 0
Owner
Toyota Research Institute - Machine Learning