Code for Multiple Instance Active Learning for Object Detection, CVPR 2021

Overview

MI-AOD

Language: 简体中文 | English

Introduction

This is the code for Multiple Instance Active Learning for Object Detection, CVPR 2021 (the PDF is temporarily unavailable).

Further introduction and figures are temporarily unavailable.

Installation

A Linux platform (ours is Ubuntu 18.04 LTS) and anaconda3 are recommended, since they make it convenient and efficient to install and manage environments and packages.

A TITAN V GPU and CUDA 10.2 with CuDNN 7.6.5 are recommended, since they can speed up model training considerably.

After anaconda3 installation, you can create a conda environment as below:

conda create -n miaod python=3.7 -y
conda activate miaod

Please refer to MMDetection v2.3.0 and its install.md for environment installation.
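
For example, the key dependencies can be installed roughly as below (a minimal sketch assuming PyTorch 1.6.0 with CUDA 10.2; the exact versions should follow install.md and your own system):

conda install pytorch=1.6.0 torchvision cudatoolkit=10.2 -c pytorch -y
pip install mmcv-full==1.0.5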

And then please clone this repository as below:

git clone https://github.com/yuantn/MI-AOD.git
cd MI-AOD

If it is too slow, you can also try downloading the repository like this:

wget https://github.com/yuantn/MI-AOD/archive/master.zip
unzip master.zip
cd MI-AOD-master

Modification in the mmcv Package

To train with two dataloaders (i.e., the labeled set dataloader and the unlabeled set dataloader mentioned in the paper) at the same time, you will need to modify the epoch_based_runner.py in the mmcv package.

Considering that this will affect all code that uses this environment, we suggest you set up a separate environment for MI-AOD (i.e., the miaod environment created above).

cp -v epoch_based_runner.py ~/anaconda3/envs/miaod/lib/python3.7/site-packages/mmcv/runner/

After that, if you have modified anything in the mmcv package (including but not limited to: updating/re-installing Python, PyTorch, mmdetection, mmcv, mmcv-full, or the conda environment), you are supposed to copy the epoch_based_runner.py provided in this repository to the mmcv directory again. (Issue #3)
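
For reference, the point of the modification is that the runner can take both dataloaders at once. Roughly, ./mmdet/apis/train.py invokes it like this (a sketch of the call only, not the full code):

# in ./mmdet/apis/train.py: the labeled and unlabeled dataloaders are passed to the modified runner together
runner.run([data_loaders_L, data_loaders_U], cfg.workflow, cfg.total_epochs)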

Datasets Preparation

Please download the VOC2007 datasets (trainval + test) and the VOC2012 datasets (trainval) from:

VOC2007 (trainval): http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar

VOC2007 (test): http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtest_06-Nov-2007.tar

VOC2012 (trainval): http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar

After that, please ensure the file directory tree is as below:

├── VOCdevkit
│   ├── VOC2007
│   │   ├── Annotations
│   │   ├── ImageSets
│   │   ├── JPEGImages
│   ├── VOC2012
│   │   ├── Annotations
│   │   ├── ImageSets
│   │   ├── JPEGImages

You may also use the following commands directly:

cd $YOUR_DATASET_PATH
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtest_06-Nov-2007.tar
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar
tar -xf VOCtrainval_06-Nov-2007.tar
tar -xf VOCtest_06-Nov-2007.tar
tar -xf VOCtrainval_11-May-2012.tar

After that, please modify the corresponding dataset directories in this repository. They are located in:

Line 1 of configs/MIAOD.py: data_root='$YOUR_DATASET_PATH/VOCdevkit/'
Line 1 of configs/_base_/voc0712.py: data_root='$YOUR_DATASET_PATH/VOCdevkit/'

Please change the $YOUR_DATASET_PATHs above to your actual dataset directory (i.e., the directory where you have put the downloaded VOC tar files).

Please use an absolute path (i.e., starting with /) rather than a relative path (i.e., starting with ./ or ../).
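
For example, if your actual dataset directory is /home/user/data (a hypothetical path), Line 1 of both files should read:

data_root='/home/user/data/VOCdevkit/'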

Training and Test

We recommend using a GPU rather than a CPU to train and test, because it will greatly shorten the time.

We also recommend using a single GPU, because multi-GPU usage may result in errors caused by the multi-processing of the dataloader.

If you use only a single GPU, you can use the script.sh file directly as below:

chmod 777 ./script.sh
./script.sh $YOUR_GPU_ID

Please change the $YOUR_GPU_ID above to your actual GPU ID number (usually a non-negative number).

Please ignore the error rm: cannot remove './log_nohup/nohup_$YOUR_GPU_ID.log': No such file or directory when you run the script.sh file for the first time.

The script.sh file will use the GPU with the ID number $YOUR_GPU_ID and PORT (30000+$YOUR_GPU_ID*100) to train and test.
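
Roughly speaking, the core of script.sh is equivalent to the following (a simplified sketch rather than the verbatim script; please refer to the actual file):

GPU_ID=$1
PORT=$((30000 + GPU_ID * 100))
# removing the old log is what fails harmlessly on the first run
rm ./log_nohup/nohup_${GPU_ID}.log
CUDA_VISIBLE_DEVICES=${GPU_ID} nohup python -m torch.distributed.launch --nproc_per_node=1 --master_port=${PORT} ./tools/train.py configs/MIAOD.py --launcher pytorch > ./log_nohup/nohup_${GPU_ID}.log 2>&1 &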

The log will not be flushed to the terminal, but will be saved and updated in the files ./log_nohup/nohup_$YOUR_GPU_ID.log and ./work_dirs/MI-AOD/$TIMESTAMP.log. These two logs are identical. You can change the directory and name of the latter log file in Line 48 of ./configs/MIAOD.py.

You can also use the other files in the directory ./work_dirs/MI-AOD/ if you like. They are as follows (a minimal loading sketch is provided below the list):

  • JSON file $TIMESTAMP.log.json

    You can load the losses and mAPs during training and test from it more conveniently than from the ./work_dirs/MI-AOD/$TIMESTAMP.log file.

  • npy file X_L_$CYCLE.npy and X_U_$CYCLE.npy

    The $CYCLE is an integer from 0 to 6, which indexes the active learning cycles.

    You can load the indexes of the labeled set and unlabeled set for each cycle from them.

    The indexes are integers from 0 to 16550 for the PASCAL VOC datasets, where 0 to 5010 are for the PASCAL VOC 2007 trainval set and 5011 to 16550 are for the PASCAL VOC 2012 trainval set.

    Example code for loading these files is in Lines 108-114 of the ./tools/train.py file (which are commented out now).

  • pth file epoch_$EPOCH.pth and latest.pth

    The $EPOCH is an integer from 0 to 2, which indexes the epochs of the last labeled set training.

    You can load the model state dictionary from them.

    Example code for loading these files is in Line 109 and Lines 143-145 of the ./tools/train.py file (which are commented out now).

  • txt file trainval_L_07.txt, trainval_U_07.txt, trainval_L_12.txt and trainval_U_12.txt in each cycle$CYCLE directory

    The $CYCLE is the same as above.

    You can load the names of JPEG images of the labeled set and unlabeled set for each cycle from them.

    "L" is for the labeled set and "U" is for the unlabeled set. "07" is for the PASCAL VOC 2007 trainval set and "12" is for the PASCAL VOC 2012 trainval set.

An example output folder is provided on Google Drive and Baidu Drive, including the log file, the last trained model, and all other files above.
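
As promised above, here is a minimal Python sketch for loading these files (the timestamp is hypothetical, and the 'mAP' and 'state_dict' keys follow the usual MMDetection log and checkpoint layout, so please check them against your own outputs):

import json
import numpy as np
import torch

work_dir = './work_dirs/MI-AOD/'

# losses and mAPs from the JSON log (one JSON object per line)
with open(work_dir + '20210315_120000.log.json') as f:
    logs = [json.loads(line) for line in f if line.strip()]
mAPs = [log['mAP'] for log in logs if 'mAP' in log]

# labeled and unlabeled set indexes for cycle 0
X_L = np.load(work_dir + 'X_L_0.npy')
X_U = np.load(work_dir + 'X_U_0.npy')

# model state dictionary from the latest checkpoint
checkpoint = torch.load(work_dir + 'latest.pth', map_location='cpu')
state_dict = checkpoint['state_dict']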

Code Structure

├── $YOUR_ANACONDA_DIRECTORY
│   ├── anaconda3
│   │   ├── envs
│   │   │   ├── miaod
│   │   │   │   ├── lib
│   │   │   │   │   ├── python3.7
│   │   │   │   │   │   ├── site-packages
│   │   │   │   │   │   │   ├── mmcv
│   │   │   │   │   │   │   │   ├── runner
│   │   │   │   │   │   │   │   │   ├── epoch_based_runner.py
│
├── ...
│
├── configs
│   ├── _base_
│   │   ├── default_runtime.py
│   │   ├── retinanet_r50_fpn.py
│   │   ├── voc0712.py
│   ├── MIAOD.py
├── log_nohup
├── mmdet
│   ├── apis
│   │   ├── __init__.py
│   │   ├── test.py
│   │   ├── train.py
│   ├── models
│   │   ├── dense_heads
│   │   │   ├── __init__.py
│   │   │   ├── MIAOD_head.py
│   │   │   ├── MIAOD_retina_head.py
│   │   │   ├── base_dense_head.py 
│   │   ├── detectors
│   │   │   ├── base.py
│   │   │   ├── single_stage.py
│   ├── utils
│   │   ├── active_datasets.py
├── tools
│   ├── train.py
├── work_dirs
│   ├── MI-AOD
├── script.sh

The code files and folders shown above are the main part of MI-AOD, while other code files and folders are created following MMDetection to avoid potential problems.

The explanation of each code file or folder is as follows:

  • epoch_based_runner.py: Code for training and test in each epoch, which can be called by ./mmdet/apis/train.py.

  • configs: Configuration folder, including running settings, model settings, dataset settings and other custom settings for active learning and MI-AOD.

    • _base_: Base configuration folder provided by MMDetection, which only needs a little modification and can then be called by ./configs/MIAOD.py.

      • default_runtime.py: Configuration code for running settings, which can be called by ./configs/MIAOD.py.

      • retinanet_r50_fpn.py: Configuration code for model training and test settings, which can be called by ./configs/MIAOD.py.

      • voc0712.py: Configuration code for PASCAL VOC dataset settings and data preprocessing, which can be called by ./configs/MIAOD.py.

    • MIAOD.py: General configuration code including most custom settings, covering active learning dataset settings, model training and test parameter settings, custom hyper-parameter settings, and log file and model saving settings, which is mainly called by ./tools/train.py. A more detailed introduction of each parameter is in the comments of this file, and a sketch of its _base_ inheritance follows this list.

  • log_nohup: Log folder for storing log output on each GPU temporarily.

  • mmdet: The core code folder for MI-AOD, including intermediate training code, object detectors, detection heads and active learning dataset establishment.

    • apis: The inner code folder of MI-AOD for training, test and uncertainty calculation.

      • __init__.py: Some function initialization in the current folder.

      • test.py: Code for testing the model and calculating uncertainty, which can be called by epoch_based_runner.py and ./tools/train.py.

      • train.py: Code for setting random seed and creating training dataloaders to prepare for the following epoch-level training, which can be called by ./tools/train.py.

    • models: The code folder with the details of the network model architecture, training loss, forward propagation in test and uncertainty calculation.

      • dense_heads: The code folder of training loss and the network model architecture, especially the well-designed head architecture.

        • __init__.py: Some function initialization in the current folder.

        • MIAOD_head.py: Code for forwarding anchor-level model output, calculating anchor-level loss, generating pseudo labels and getting bounding boxes from existing model output in more detail, which can be called by ./mmdet/models/dense_heads/base_dense_head.py and ./mmdet/models/detectors/single_stage.py.

        • MIAOD_retina_head.py: Code for building the MI-AOD model architecture, especially the well-designed head architecture, and defining the forward output, which can be called by ./mmdet/models/dense_heads/MIAOD_head.py.

        • base_dense_head.py: Code for choosing different equations to calculate loss, which can be called by ./mmdet/models/detectors/single_stage.py.

      • detectors: The code folder of the forward and backward propagation in the overall process of training, test and uncertainty calculation.

        • base.py: Code for arranging the training losses for printing and returning the loss and image information, which can be called by epoch_based_runner.py.

        • single_stage.py: Code for extracting image features, getting bounding boxes from the model output and returning the loss, which can be called by ./mmdet/models/detectors/base.py.

    • utils: The code folder for creating active learning datasets.

      • active_datasets.py: Code for creating active learning datasets, including creating the initial labeled set, creating the image name files for the labeled and unlabeled sets and updating the labeled set after each active learning cycle, which can be called by ./tools/train.py.

  • tools: The outer training and test code folder of MI-AOD.

    • train.py: Outer code for training and test of MI-AOD, including generating the PASCAL VOC datasets for active learning, loading image sets and models, Instance Uncertainty Re-weighting and Informative Image Selection in general, which can be called by ./script.sh.

  • work_dirs: Work directory storing the indexes and image names of the labeled and unlabeled sets for each cycle, all log and json outputs, and the model state dictionaries for the last 3 cycles, which are introduced in the Training and Test part above.

  • script.sh: The script to run MI-AOD on a single GPU. You can run it to train and test MI-AOD simply and directly, as mentioned in the Training and Test part above, as long as you have prepared the conda environment and the PASCAL VOC 2007+2012 datasets.
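
As noted for MIAOD.py above, such a config pulls in the three base configs following the usual MMDetection _base_ convention, roughly as below (a sketch of the inheritance mechanism only, with paths assumed relative to configs/; the actual custom parameters are in the real file):

_base_ = [
    './_base_/retinanet_r50_fpn.py',
    './_base_/voc0712.py',
    './_base_/default_runtime.py'
]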

Citation

If you find this repository useful for your publications, please consider citing our paper (the PDF is temporarily unavailable).

@inproceedings{MIAOD2021,
    author    = {Tianning Yuan and
                 Fang Wan and
                 Mengying Fu and
                 Jianzhuang Liu and
                 Songcen Xu and
                 Xiangyang Ji and
                 Qixiang Ye},
    title     = {Multiple Instance Active Learning for Object Detection},
    booktitle = {CVPR},
    year      = {2021}
}

Acknowledgement

In this repository, we reimplemented RetinaNet in PyTorch based on MMDetection.
