SiamMOT

SiamMOT is a region-based Siamese Multi-Object Tracking network that detects and associates object instances simultaneously.

SiamMOT: Siamese Multi-Object Tracking,
Bing Shuai, Andrew Berneshawi, Xinyu Li, Davide Modolo, Joseph Tighe

@inproceedings{shuai2021siammot,
  title={SiamMOT: Siamese Multi-Object Tracking},
  author={Shuai, Bing and Berneshawi, Andrew and Li, Xinyu and Modolo, Davide and Tighe, Joseph},
  booktitle={CVPR},
  year={2021}
}

Abstract

In this paper, we focus on improving online multi-object tracking (MOT). In particular, we introduce a region-based Siamese Multi-Object Tracking network, which we name SiamMOT. SiamMOT includes a motion model that estimates the instance’s movement between two frames such that detected instances are associated. To explore how the motion modelling affects its tracking capability, we present two variants of Siamese tracker, one that implicitly models motion and one that models it explicitly. We carry out extensive quantitative experiments on three different MOT datasets: MOT17, TAO-person and Caltech Roadside Pedestrians, showing the importance of motion modelling for MOT and the ability of SiamMOT to substantially outperform the state-of-the-art. Finally, SiamMOT also outperforms the winners of ACM MM’20 HiEve Grand Challenge on HiEve dataset. Moreover, SiamMOT is efficient, and it runs at 17 FPS for 720P videos on a single modern GPU.

Installation

Please refer to INSTALL.md for installation instructions.

Try SiamMOT demo

For demo purposes, we provide two tracking models -- one that tracks people (visible part) and one that jointly tracks people and vehicles (bus, car, truck, motorcycle, etc.). The person tracking model is trained on COCO-17 and CrowdHuman, while the joint model is trained on COCO-17 and VOC12. Currently, both demo models use EMM as their motion model, which performs best among the alternatives.

In order to run the demo, use the following command:

python3 demos/demo.py --demo-video PATH_TO_DEMO_VIDEO --track-class person --dump-video True

You can choose person or person_vehicel for --track-class, so that the person tracking or the person/vehicle tracking model is used accordingly.

The model will be automatically downloaded to demos/models, and the visualization of the tracking outputs is automatically saved to demos/demo_vis.

We also provide several pre-trained models in model_zoos.md that can be used for demo.

Dataset Evaluation and Training

After installation, follow the instructions in DATA.md to set up the datasets. As a sanity check, the models presented in model_zoos.md can be used for benchmark testing.

Before running training or inference, set up the configuration file properly. Then use the following command to train a model on an 8-GPU machine:

python3 -m torch.distributed.launch --nproc_per_node=8 tools/train_net.py --config-file configs/dla/DLA_34_FPN.yaml --train-dir PATH_TO_TRAIN_DIR --model-suffix MODEL_SUFFIX 

Use the following command to test a model on a single-GPU machine:

python3 tools/test_net.py --config-file configs/dla/DLA_34_FPN.yaml --output-dir PATH_TO_OUTPUT_DIR --model-file PATH_TO_MODEL_FILE --test-dataset DATASET_KEY --set val

Note: If you get the error ModuleNotFoundError: No module named 'siammot' when running from the git root, make sure your PYTHONPATH includes the current directory. You can add it by running export PYTHONPATH=.:$PYTHONPATH, or explicitly add the project to the path by replacing the '.' in the export command with the absolute path to the git root.
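Alternatively, a launcher script can put the repo root on the module search path itself before importing siammot. A minimal sketch, assuming the script sits in the git root (adjust the path handling if it lives elsewhere):

import os
import sys

# Prepend the git root so `import siammot` resolves no matter where
# the interpreter is started from.
repo_root = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, repo_root)

import siammot  # noqa: E402 -- imported after the sys.path tweak on purpose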

Multi-GPU testing will be supported in a later release.

Version

This is a preliminary version released for the Airborne Object Tracking (AOT) workshop. The current version only supports EMM as the motion model.

We will add more motion models, together with more features, in the next version. Stay tuned.

License

This project is licensed under the Apache-2.0 License.

Comments
  • Variable memory requirements

    I have noticed that the memory requirements for the model change depending on whether the training starts from a freshly initialized model or a model initialized from a checkpoint.

    I am training the model on an NVIDIA RTX 2080Ti GPU, which provides 11GB of memory. In order to start the training without running into a RuntimeError: CUDA error: out of memory exception, I need to set the number of video clips per batch to 3. More specifically, in terms of configuration settings:

    SOLVER:
      VIDEO_CLIPS_PER_BATCH: 3
    

    This produces a batch size of 6, since we have 2 random frames per clip, as given by the configuration below:

    VIDEO:
      RANDOM_FRAMES_PER_CLIP: 2
    
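    For reference, the effective batch size in frames is just the product of these two settings; a quick sketch of the arithmetic (the variable names below simply mirror the config keys):

    # Frames per iteration = clips per batch x frames per clip.
    video_clips_per_batch = 3    # SOLVER.VIDEO_CLIPS_PER_BATCH
    random_frames_per_clip = 2   # VIDEO.RANDOM_FRAMES_PER_CLIP
    print(video_clips_per_batch * random_frames_per_clip)  # 6; with 4 clips this becomes 8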

    However, if I restart the training from a previously stored checkpoint, the memory consumption decreases to such an extent that I can add one more video clip per batch without crashing due to insufficient memory capacity. More concretely, my configuration allows the following:

    SOLVER:
      VIDEO_CLIPS_PER_BATCH: 4
    

    This does not seem to influence the model performance after training.

    I have tried explicitly calling the garbage collector and emptying the CUDA cache using

    import gc
    import torch
    
    gc.collect()
    torch.cuda.empty_cache()
    

    but to no avail.
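    One way to narrow down where the extra memory goes is to log PyTorch's allocator counters around the first few iterations. A minimal sketch, assuming a CUDA device is active (the tag argument is just a label of my choosing):

    import torch

    def log_cuda_memory(tag):
        # Allocator statistics for the current CUDA device, in MiB.
        allocated = torch.cuda.memory_allocated() / 2 ** 20
        reserved = torch.cuda.memory_reserved() / 2 ** 20
        peak = torch.cuda.max_memory_allocated() / 2 ** 20
        print(f"{tag}: allocated={allocated:.0f} reserved={reserved:.0f} peak={peak:.0f} MiB")

    # e.g. call log_cuda_memory("after checkpoint load") before and after the first step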

    My question is: what do you think might be causing this sort of memory leak? I have been working on this architecture for some time, and yet I haven't found a reasonable explanation so far.

    At this point, my pipeline involves two separate configurations. First, I run the training for 100 iterations, save the checkpoint, halt the training, and then restart it with a different configuration allowing a bigger batch size, and let it train as required. It is pretty cumbersome as well as highly unprofessional. I would like to understand the underlying cause.

    Thank you for your input.

    opened by mondrasovic 8
  • How to output the video?

    Thank you for your sharing. I followed the README.md instructions. When I input the following command: python3 demos/demo.py --demo-video test.mp4 --track-class person --dump-video True

    I can't see any generated result video in the file "demos/demos_vis/".

    I have tried many commands, like these :

    ① python3 demos/demo.py --demo-video test.mp4 --track-class person --dump-video True --output-path demos/demos_vis

    ② python3 demos/demo.py --demo-video test.mp4 --track-class person --dump-video False

    ③ python3 demos/demo.py --demo-video test.mp4 --track-class person --dump-video False --output-path demos/demos_vis

    and so on.

    Any suggestions? How do I output the result video? Thank you.

    opened by TommyTang930 6
  • NaN values for loss and accuracy in training and testing.

    As per our capacity, we reduced it to 4 GPUs and kept the learning rate at the default 0.02. After 40-60 iterations we started getting NaN loss values. We reduced the rate to 0.015 and then trained. Even with this, for >200 iterations, it sometimes shows NaN loss values and sometimes runs fine. When testing with NaN loss values, we found that all the accuracy values in the output table came out to be NaN.
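    A common defensive pattern for runs like this, sketched with placeholder names (loss_dict, optimizer and model stand in for whatever the training loop actually uses): fail fast on non-finite losses, and clip gradients, which often stabilizes training at higher learning rates.

    import torch

    def step_safely(model, loss_dict, optimizer, max_norm=10.0):
        losses = sum(loss_dict.values())
        # Fail fast instead of silently training on NaN/Inf losses.
        if not torch.isfinite(losses):
            raise FloatingPointError(f"non-finite loss: {loss_dict}")
        optimizer.zero_grad()
        losses.backward()
        # Gradient clipping is a common mitigation for divergence at high LR.
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)
        optimizer.step()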


    opened by adityagupta-9900 5
  • Error when training the model

    Thanks for the great work. When I train the model on the MOT17 dataset with the following command:

    python3 -m torch.distributed.launch --nproc_per_node=2 tools/train_net.py --config-file configs/dla/DLA_34_FPN_EMM_MOT17.yaml --train-dir my_train_results/MOT17_TEST/ --model-suffix pth

    i got the error:

    Traceback (most recent call last):
      File "tools/train_net.py", line 132, in <module>
        main()
      File "tools/train_net.py", line 128, in main
        train(cfg, train_dir, args.local_rank, args.distributed, logger)
      File "tools/train_net.py", line 80, in train
        logger, tensorboard_writer
      File "./siammot/engine/trainer.py", line 51, in do_train
        result, loss_dict = model(images, targets)
      File "/home/sx/Documents/anaconda/anaconda3/envs/pt170/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/sx/Documents/anaconda/anaconda3/envs/pt170/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 619, in forward
        output = self.module(*inputs[0], **kwargs[0])
      File "/home/sx/Documents/anaconda/anaconda3/envs/pt170/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/sx/Documents/anaconda/anaconda3/envs/pt170/lib/python3.6/site-packages/apex-0.1-py3.6-linux-x86_64.egg/apex/amp/_initialize.py", line 197, in new_fwd
        **applier(kwargs, input_caster))
      File "./siammot/modelling/rcnn.py", line 47, in forward
        features = self.backbone(images.tensors)
      File "/home/sx/Documents/anaconda/anaconda3/envs/pt170/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/sx/Documents/anaconda/anaconda3/envs/pt170/lib/python3.6/site-packages/torch/nn/modules/container.py", line 117, in forward
        input = module(input)
      File "/home/sx/Documents/anaconda/anaconda3/envs/pt170/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "./siammot/modelling/backbone/dla.py", line 297, in forward
        x5 = self.level5(x4)
      File "/home/sx/Documents/anaconda/anaconda3/envs/pt170/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "./siammot/modelling/backbone/dla.py", line 231, in forward
        x1 = self.tree1(x, residual)
      File "/home/sx/Documents/anaconda/anaconda3/envs/pt170/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "./siammot/modelling/backbone/dla.py", line 54, in forward
        out += residual
    RuntimeError: The size of tensor a (47) must match the size of tensor b (46) at non-singleton dimension 3

    Can anybody help me? Thank you!
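    For what it's worth, an out += residual size mismatch like 47 vs 46 typically appears when the input's spatial size is not divisible by the backbone's total stride, so one branch of the tree gets rounded down during downsampling. A minimal sketch of the arithmetic, assuming a stride-32 backbone and maskrcnn-benchmark-style SIZE_DIVISIBILITY padding:

    import math

    def pad_to_divisible(h, w, size_divisibility=32):
        # Round spatial dims up to a multiple of the total stride so that
        # repeated stride-2 stages never disagree by one pixel.
        return (int(math.ceil(h / size_divisibility)) * size_divisibility,
                int(math.ceil(w / size_divisibility)) * size_divisibility)

    # A 1489-px-wide frame: one path sees ceil(1489/32) = 47 cells, another
    # rounds down to 46; padding to 1504 keeps the branches consistent.
    print(pad_to_divisible(1080, 1489))  # -> (1088, 1504)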

    opened by xsu-yy 5
  • errors about train_net.py

    This is my train.py parameter configuration, my data structure, and the problem I ran into (screenshots attached). I think there's something wrong here, but I do not know how to solve it. Asking for help, thank you.

    opened by yanghaibin-cool 3
  • <tuple> not callable in /data/adapters/augmentation/video_augmentation.py line 105

    In siam-mot/blob/main/siammot/data/adapters/augmentation/video_augmentation.py, the value assigned to "transform" on line 99 seems to be a tuple, but on line 105, "transform(image)", it is called like a function. Is there an error in our dataset setup, or some error in the code?
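    For illustration, the failure mode being described usually comes from a stray trailing comma; a minimal sketch with torchvision stand-ins (not the repo's actual classes):

    from torchvision import transforms

    # Buggy: the trailing comma makes `transform` a 1-element tuple, so
    # transform(image) raises TypeError: 'tuple' object is not callable.
    transform = transforms.Compose([transforms.ToTensor()]),

    # Fixed: drop the comma so `transform` is the callable itself.
    transform = transforms.Compose([transforms.ToTensor()])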

    opened by adityagupta-9900 3
  • maskrcnn-benchmark vs. Detectron2

    I have a question rather than an issue. Why did you choose to use maskrcnn-benchmark, which is no longer actively supported? Do you think SiamMOT could use Detectron2 instead? If so, could you give me some tips on how to do this?

    opened by SkalskiP 3
  • only use tracker

    Hi, excellent work! But I have some questions. If I already have bboxes for the video and I only want to use SiamMOT for tracking-by-detection, what should I do? In particular, how do I run inference with my bboxes, and how do I train SiamMOT with my bboxes and then run inference? Thank you. (Maybe issue #5 is the same problem as mine.)

    opened by YunhaoDu 3
  • why need video in demos/demo_vis?

    Hi everyone, I am new to CV and deep learning, and I use Colab as my machine. My input video is person_car.mp4, and the output video should be in demos/demo_vis, but when I run demo.py, it needs person_car.mp4 in demos/demo_vis too, which is strange. Can somebody explain? (Screenshot attached.) This is my GitHub, which stores my Colab file.

    opened by Lin1225 3
  • Test error

    Hello. I encountered this error when testing with the following command: python3 -m tools.test_net --config-file configs/dla/DLA_34_FPN_EMM_MOT17.yaml --output-dir /home/yhb/下载/track_results --model-file /home/yhb/下载/DLA-34-FPN_EMM_crowdhuman_mot17 --test-dataset /media/yhb/Data/BaiduNetdiskDownload/MOT challenge/MOT17/test

    Error: test_net.py: error: unrecognized arguments: challenge/MOT17/test

    The --test-path is the test path in MOT17 downloaded from the official website and has not been modified. What changes do I need to make to --test-path? It would be very helpful for beginners. Thank you very much!
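    For context, a likely cause sketched below: the unquoted space in "MOT challenge" splits the path into two shell arguments, so argparse reports the second half as unrecognized. Quoting the path keeps it as one argument (paths shortened here for illustration):

    import shlex

    # Unquoted: the space splits the path into two argv entries.
    print(shlex.split('--test-dataset /media/yhb/Data/MOT challenge/MOT17/test'))
    # ['--test-dataset', '/media/yhb/Data/MOT', 'challenge/MOT17/test']

    # Quoted: the whole path stays a single argument.
    print(shlex.split('--test-dataset "/media/yhb/Data/MOT challenge/MOT17/test"'))
    # ['--test-dataset', '/media/yhb/Data/MOT challenge/MOT17/test']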

    opened by yanghaibin-cool 2
  • Weights not available

    Dear authors, thanks for making your code available; I am grateful for it. Regarding the code base, I was wondering when the weights for the MOTChallenge-2017 Test (Public detection) dataset, as referenced in the model zoo readme, would be available?

    Thanks again.

    opened by GrantorShadow 2
  • Bump pillow from 9.0.1 to 9.3.0

    Bumps pillow from 9.0.1 to 9.3.0.

    Release notes

    Sourced from pillow's releases.

    9.3.0

    https://pillow.readthedocs.io/en/stable/releasenotes/9.3.0.html



    dependencies 
    opened by dependabot[bot] 0
  • Connection error on download

    I'm trying to run demos/demo.py but it returns a connection error after a while. I think the model links are not working anymore.

    Downloading: "http://dl.yf.io/dla/models/imagenet/dla34-ba72cf86.pth" to C:\Users\xxxxx/.cache\torch\hub\checkpoints\dla34-ba72cf86.pth
    Traceback (most recent call last):
      File "C:\Users\caros\anaconda3\lib\urllib\request.py", line 1346, in do_open
        h.request(req.get_method(), req.selector, req.data, headers,
      File "C:\Users\caros\anaconda3\lib\http\client.py", line 1285, in request
        self._send_request(method, url, body, headers, encode_chunked)
      File "C:\Users\caros\anaconda3\lib\http\client.py", line 1331, in _send_request
        self.endheaders(body, encode_chunked=encode_chunked)
      File "C:\Users\caros\anaconda3\lib\http\client.py", line 1280, in endheaders
        self._send_output(message_body, encode_chunked=encode_chunked)
      File "C:\Users\caros\anaconda3\lib\http\client.py", line 1040, in _send_output
        self.send(msg)
      File "C:\Users\caros\anaconda3\lib\http\client.py", line 980, in send
        self.connect()
      File "C:\Users\caros\anaconda3\lib\http\client.py", line 946, in connect
        self.sock = self._create_connection(
      File "C:\Users\caros\anaconda3\lib\socket.py", line 844, in create_connection
        raise err
      File "C:\Users\caros\anaconda3\lib\socket.py", line 832, in create_connection
        sock.connect(sa)
    TimeoutError: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
    
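    If the host is unreachable from your network, one workaround is to fetch the checkpoint by other means and drop it into torch.hub's cache, which is the folder named in the traceback above; a hedged sketch:

    import os
    import torch

    # torch.hub caches checkpoints under <hub_dir>/checkpoints.
    cache_dir = os.path.join(torch.hub.get_dir(), "checkpoints")
    os.makedirs(cache_dir, exist_ok=True)

    # If the URL is reachable this downloads into the cache; otherwise copy a
    # manually obtained dla34-ba72cf86.pth into cache_dir and re-run the demo.
    url = "http://dl.yf.io/dla/models/imagenet/dla34-ba72cf86.pth"
    state_dict = torch.hub.load_state_dict_from_url(url, model_dir=cache_dir)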
    opened by carolinasolfernandez 1
  • Out of date project? Unable to setup demo

    I was trying to set up the demo locally, but it does not work:

    root@alex-OMEN:/opt/company/siam-mot# pip3 install -r requirements.txt
    ...
    root@alex-OMEN:/opt/company/siam-mot# python3 demos/demo.py --demo-video ../long_video.mp4 --track-class  person_vehicel -dump-video True 
    Traceback (most recent call last):
      File "demos/demo.py", line 5, in <module>
        from demos.demo_inference import DemoInference
    ModuleNotFoundError: No module named 'demos'
    

    Then I copied demo.py to the root of the project and tried again:

    root@alex-OMEN:/opt/company/siam-mot# python3 demo.py --demo-video ../long_video.mp4 --track-class  person_vehicel -dump-video True 
    Traceback (most recent call last):
      File "demo.py", line 5, in <module>
        from demos.demo_inference import DemoInference
      File "/opt/company/siam-mot/demos/demo_inference.py", line 10, in <module>
        from maskrcnn_benchmark.structures.bounding_box import BoxList
    ModuleNotFoundError: No module named 'maskrcnn_benchmark'
    

    I installed maskrcnn_benchmark according to https://github.com/facebookresearch/maskrcnn-benchmark/blob/main/INSTALL.md

    And the current error is :

    root@alex-OMEN:/opt/company/siam-mot# python3 demo.py --demo-video ../long_video.mp4 --track-class  person_vehicel -dump-video True 
    Traceback (most recent call last):
      File "demo.py", line 5, in <module>
        from demos.demo_inference import DemoInference
      File "/opt/company/siam-mot/demos/demo_inference.py", line 11, in <module>
        from maskrcnn_benchmark.utils.checkpoint import DetectronCheckpointer
      File "/usr/local/lib/python3.8/dist-packages/maskrcnn_benchmark-0.1-py3.8-linux-x86_64.egg/maskrcnn_benchmark/utils/checkpoint.py", line 7, in <module>
        from maskrcnn_benchmark.utils.model_serialization import load_state_dict
      File "/usr/local/lib/python3.8/dist-packages/maskrcnn_benchmark-0.1-py3.8-linux-x86_64.egg/maskrcnn_benchmark/utils/model_serialization.py", line 7, in <module>
        from maskrcnn_benchmark.utils.imports import import_file
      File "/usr/local/lib/python3.8/dist-packages/maskrcnn_benchmark-0.1-py3.8-linux-x86_64.egg/maskrcnn_benchmark/utils/imports.py", line 4, in <module>
        if torch._six.PY3:
    AttributeError: module 'torch._six' has no attribute 'PY3'
    
    

    What am I doing wrong? Is this project still maintained?
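    For what it's worth, the last error is a known PyTorch/maskrcnn-benchmark version mismatch rather than something specific to SiamMOT: torch._six was removed from recent PyTorch releases. A hedged sketch of the usual local patch to maskrcnn_benchmark/utils/imports.py (the helper body is simplified, not the verbatim file):

    import importlib.util
    import sys

    # Replace the removed torch._six.PY3 flag with a direct interpreter check.
    PY3 = sys.version_info[0] >= 3

    if PY3:
        def import_file(module_name, file_path):
            # Load a module from an explicit file path (the Python 3 branch
            # of the original helper).
            spec = importlib.util.spec_from_file_location(module_name, file_path)
            module = importlib.util.module_from_spec(spec)
            spec.loader.exec_module(module)
            return module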

    opened by alexkutsan 1
  • How to replace the default spatial matching with a new method?

    I've run the code successfully, thank you for your great work! Now I want to do some research, for example, replacing the default spatial matching method with my newly proposed method. The question is: where can I modify the code in order to change the spatial matching method? I've read the related code (./siammot/modeling/track_head) but found nothing related to the matching part. Could you please tell me more about the spatial matching part? Maybe I missed some information.

    opened by YaoMufeng 1