The Official PyTorch Implementation of DiscoBox.

Overview

NVIDIA Source Code License · Python 3.8

DiscoBox: Weakly Supervised Instance Segmentation and Semantic Correspondence from Box Supervision

Paper | Project page | Demo (YouTube) | Demo (Bilibili)

DiscoBox: Weakly Supervised Instance Segmentation and Semantic Correspondence from Box Supervision.
Shiyi Lan, Zhiding Yu, Chris Choy, Subhashree Radhakrishnan, Guilin Liu, Yuke Zhu, Larry Davis, Anima Anandkumar
International Conference on Computer Vision (ICCV) 2021

This repository contains the official PyTorch implementation of the training and evaluation code and the pretrained models for DiscoBox. DiscoBox is a state-of-the-art framework that jointly predicts high-quality instance segmentation and semantic correspondence from box annotations.

We use MMDetection v2.10.0 as the codebase.

All of our models are trained and tested using automatic mixed precision (AMP), which leverages float16 for faster computation and lower GPU memory consumption.
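For readers unfamiliar with AMP, the underlying PyTorch pattern looks roughly like the sketch below. This is a generic, toy illustration of autocast plus gradient scaling, not this repository's training loop; here AMP is enabled through the provided configs and mmcv's fp16 utilities.

# Generic PyTorch AMP pattern (toy illustration only; in this repository AMP is
# enabled through the provided configs and mmcv's fp16 utilities).
import torch

model = torch.nn.Linear(16, 4).cuda()                 # toy model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler()                  # rescales gradients to avoid fp16 underflow

x = torch.randn(8, 16, device='cuda')
target = torch.randn(8, 4, device='cuda')

optimizer.zero_grad()
with torch.cuda.amp.autocast():                       # runs eligible ops in float16
    loss = torch.nn.functional.mse_loss(model(x), target)
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()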

Installation

This implementation is based on PyTorch==1.9.0, mmcv==2.13.0, and mmdetection==2.10.0.

Please refer to get_started.md for installation.

Or you can download the docker image from our dockerhub repository.
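Either way, a quick sanity check of the environment can be run from Python; a minimal sketch (the versions printed should match the pins above and get_started.md):

# Quick environment sanity check; versions should match the pins above / get_started.md.
import torch
import mmcv
import mmdet

print('torch:', torch.__version__, '| CUDA available:', torch.cuda.is_available())
print('mmcv:', mmcv.__version__)
print('mmdet:', mmdet.__version__)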

Models

Results on COCO val 2017

Backbone         Weights   AP    AP@50  AP@75  AP@Small  AP@Medium  AP@Large
ResNet-50        download  30.7  52.6   30.6   13.3      34.1       45.6
ResNet-101-DCN   download  35.3  59.1   35.4   16.9      39.2       53.0
ResNeXt-101-DCN  download  37.3  60.4   39.1   17.8      41.1       55.4

Results on COCO test-dev

We also evaluate the models from the section Results on COCO val 2017 on COCO test-dev, using the same weights.

Backbone         Weights   AP    AP@50  AP@75  AP@Small  AP@Medium  AP@Large
ResNet-50        download  32.0  53.6   32.6   11.7      33.7       48.4
ResNet-101-DCN   download  35.8  59.8   36.4   16.9      38.7       52.1
ResNeXt-101-DCN  download  37.9  61.4   40.0   18.0      41.1       53.9
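The released weights can also be used for quick single-image inference through mmdet's high-level API. The snippet below is a minimal sketch, assuming it is run from the repository root (so the DiscoBox models are registered) and that the standard, non box-conditioned test pipeline needs only the image; the checkpoint and image paths are placeholders.

# Minimal single-image inference sketch; checkpoint and image paths are placeholders.
from mmdet.apis import init_detector, inference_detector

config_file = 'configs/discobox/discobox_solov2_r50_fpn_3x.py'
checkpoint_file = 'work_dirs/coco_r50_fpn_3x.pth'     # downloaded ResNet-50 weights (placeholder path)

model = init_detector(config_file, checkpoint_file, device='cuda:0')
result = inference_detector(model, 'path/to/image.jpg')  # placeholder image path
print(type(result))  # output structure follows the DiscoBox SOLOv2 head

For the full evaluation protocol, use the Testing commands below.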

Training

COCO

ResNet-50 (8 GPUs):

bash tools/dist_train.sh \
     configs/discobox/discobox_solov2_r50_fpn_3x.py 8

ResNet-101-DCN (8 GPUs):

bash tools/dist_train.sh \
     configs/discobox/discobox_solov2_r101_dcn_fpn_3x.py 8

ResNeXt-101-DCN (8 GPUs):

bash tools/dist_train.sh \
     configs/discobox/discobox_solov2_x101_dcn_fpn_3x.py 8

Pascal VOC 2012

ResNet-50 (4 GPUs):

bash tools/dist_train.sh \
     configs/discobox/discobox_solov2_voc_r50_fpn_6x.py 4

ResNet-101 (4 GPUs):

bash tools/dist_train.sh \
     configs/discobox/discobox_solov2_voc_r101_fpn_6x.py 4

Testing

COCO

ResNet-50 (8 GPUs):

bash tools/dist_test.sh \
     configs/discobox/discobox_solov2_r50_fpn_3x.py \
     work_dirs/coco_r50_fpn_3x.pth 8 --eval segm

ResNet-101-DCN (8 GPUs):

bash tools/dist_test.sh \
     configs/discobox/discobox_solov2_r101_dcn_fpn_3x.py \
     work_dirs/coco_r101_dcn_fpn_3x.pth 8 --eval segm

ResNeXt-101-DCN (8 GPUs):

bash tools/dist_test.sh \
     configs/discobox/discobox_solov2_x101_dcn_fpn_3x_fp16.py \
     work_dirs/coco_x101_dcn_fpn_3x.pth 8 --eval segm

Pascal VOC 2012 (COCO API)

ResNet-50 (4 GPUs):

bash tools/dist_test.sh \
     configs/discobox/discobox_solov2_voc_r50_fpn_3x_fp16.py \
     work_dirs/voc_r50_6x.pth 4 --eval segm

ResNet-101 (4 GPUs):

bash tools/dist_test.sh \
     configs/discobox/discobox_solov2_voc_r101_fpn_3x_fp16.py \
     work_dirs/voc_r101_6x.pth 4 --eval segm
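Both of the protocols above report standard COCO-style mask AP. If you want to score a dumped results file yourself, the evaluation can also be run directly with the official pycocotools; the sketch below uses placeholder paths for the annotation file and the segm results json written by tools/test.py.

# Minimal COCO-style mask (segm) evaluation with the official pycocotools.
# Both file paths are placeholders.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO('path/to/instances_val.json')              # ground-truth annotations
coco_dt = coco_gt.loadRes('path/to/results.segm.json')    # results dumped by tools/test.py
coco_eval = COCOeval(coco_gt, coco_dt, iouType='segm')
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()   # prints AP, AP@50, AP@75, AP@Small/Medium/Large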

Pascal VOC 2012 (Matlab)

Step 1: generate results

ResNet-50 (4 GPUs):

bash tools/dist_test.sh \
     configs/discobox/discobox_solov2_voc_r50_fpn_3x_fp16.py \
     work_dirs/voc_r50_6x.pth 4 \
     --format-only \
     --options "jsonfile_prefix=work_dirs/voc_r50_results.json"

ResNet-101 (4 GPUs):

bash tools/dist_test.sh \
     configs/discobox/discobox_solov2_voc_r101_fpn_3x_fp16.py \
     work_dirs/voc_r101_6x.pth 4 \
     --format-only \
     --options "jsonfile_prefix=work_dirs/voc_r101_results.json"

Step 2: format conversion

ResNet-50:

python tools/json2mat.py work_dirs/voc_r50_results.json work_dirs/voc_r50_results.mat

ResNet-101:

python tools/json2mat.py work_dirs/voc_r101_results.json work_dirs/voc_r101_results.mat

Step 3: evaluation

Please visit BBTP for the evaluation code written in Matlab.

PF-Pascal

Please visit this repository.

LICENSE

Please check the LICENSE file. DiscoBox may be used non-commercially, meaning for research or evaluation purposes only. For business inquiries, please contact [email protected].

Citation

@article{lan2021discobox,
  title={DiscoBox: Weakly Supervised Instance Segmentation and Semantic Correspondence from Box Supervision},
  author={Lan, Shiyi and Yu, Zhiding and Choy, Christopher and Radhakrishnan, Subhashree and Liu, Guilin and Zhu, Yuke and Davis, Larry S and Anandkumar, Anima},
  journal={arXiv preprint arXiv:2105.06464},
  year={2021}
}
Comments
  • KeyError: 'DiscoBoxSOLOv2 is not in the models registry' running visualization


    I am able to run the COCO (ResNeXt-101-DCN) test successfully, but when I try to run your visualization example:

    python tools/test.py configs/discobox/discobox_solov2_x101_dcn_fpn_3x.py coco_x101_dcn_fpn_3x.pth --show --show-dir discobox_vis_x101
    

    I get the following errors:

    root@mymachine:~/src/github/DiscoBox# python tools/test.py configs/discobox/discobox_solov2_x101_dcn_fpn_3x.py coco_x101_dcn_fpn_3x.pth --show --show-dir discobox_vis_x101
    /root/src/github/mmdetection/mmdet/datasets/api_wrappers/coco_api.py:20: UserWarning: mmpycocotools is deprecated. Please install official pycocotools by "pip install pycocotools"
      warnings.warn(
    loading annotations into memory...
    Done (t=0.51s)
    creating index...
    index created!
    Traceback (most recent call last):
      File "tools/test.py", line 222, in <module>
        main()
      File "tools/test.py", line 175, in main
        model = build_detector(cfg.model, test_cfg=cfg.get('test_cfg'))
      File "/root/src/github/mmdetection/mmdet/models/builder.py", line 58, in build_detector
        return DETECTORS.build(
      File "/opt/conda/lib/python3.8/site-packages/mmcv/utils/registry.py", line 212, in build
        return self.build_func(*args, **kwargs, registry=self)
      File "/opt/conda/lib/python3.8/site-packages/mmcv/cnn/builder.py", line 27, in build_model_from_cfg
        return build_from_cfg(cfg, registry, default_args)
      File "/opt/conda/lib/python3.8/site-packages/mmcv/utils/registry.py", line 44, in build_from_cfg
        raise KeyError(
    KeyError: 'DiscoBoxSOLOv2 is not in the models registry'
    

    How do I add the DiscoBox models to mmcv's registry?

    My build info: I am using your Docker container (on Ubuntu 20.04 with CUDA 11.4), with mmcv-full 1.3.17 and the latest (as of July 15, 2022) master of mmdetection.
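    A likely cause (an assumption, not something confirmed in this thread) is that an upstream mmdetection install is shadowing the DiscoBox repository's bundled mmdet package, so DiscoBoxSOLOv2 is never registered; running tools/test.py from the DiscoBox repo root, with its own mmdet importable, usually avoids this. For reference, a toy sketch of how an mmdet 2.x detector enters the registry:

    # Toy sketch (hypothetical class) of mmdet 2.x model registration; the real
    # DiscoBoxSOLOv2 is registered inside the DiscoBox fork's own mmdet package.
    from mmdet.models.builder import DETECTORS
    from mmdet.models.detectors.base import BaseDetector

    @DETECTORS.register_module()
    class ToyDetector(BaseDetector):
        """Buildable as type='ToyDetector' once this module has been imported."""
        def extract_feat(self, img): ...
        def simple_test(self, img, img_metas, **kwargs): ...
        def aug_test(self, imgs, img_metas, **kwargs): ...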

    opened by robonrrd 5
  • Question regarding box-conditioned inference


    Hi, I'm new to MMDet and DiscoBox, and I'm trying to train a model to convert bounding boxes to polygons on my own dataset. I use this command: python ./tools/train.py configs/Andy_self/Discobox/boxcond_discobox_solov2_x101_dcn_fpn_3x.py --work-dir ./Disco_work_dirs --load-from ./checkpoints/Disco_coco_x101_dcn_fpn_3x.pth and I get the following error:

    File "./tools/train.py", line 191, in main() File "./tools/train.py", line 187, in main meta=meta) File "/home/yuecao/project/terrasense/DiscoBox/mmdet/apis/train.py", line 172, in train_detector runner.run(data_loaders, cfg.workflow) File "/home/yuecao/anaconda3/envs/discobox/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 127, in run epoch_runner(data_loaders[i], **kwargs) File "/home/yuecao/anaconda3/envs/discobox/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 50, in train self.run_iter(data_batch, train_mode=True, **kwargs) File "/home/yuecao/anaconda3/envs/discobox/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 30, in run_iter **kwargs) File "/home/yuecao/anaconda3/envs/discobox/lib/python3.7/site-packages/mmcv/parallel/data_parallel.py", line 67, in train_step return self.module.train_step(*inputs[0], **kwargs[0]) File "/home/yuecao/project/terrasense/DiscoBox/mmdet/models/detectors/base.py", line 251, in train_step losses = self(**data) File "/home/yuecao/anaconda3/envs/discobox/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/home/yuecao/anaconda3/envs/discobox/lib/python3.7/site-packages/mmcv/runner/fp16_utils.py", line 128, in new_func output = old_func(*new_args, **new_kwargs) File "/home/yuecao/project/terrasense/DiscoBox/mmdet/models/detectors/single_stage_wsis.py", line 249, in forward return self.forward_train(img, img_metas, **kwargs) File "/home/yuecao/project/terrasense/DiscoBox/mmdet/models/detectors/single_stage_wsis.py", line 86, in forward_train use_ts_loss=self.avg_loss_ins<0.4) File "/home/yuecao/anaconda3/envs/discobox/lib/python3.7/site-packages/torch/cuda/amp/autocast_mode.py", line 141, in decorate_autocast return func(*args, **kwargs) TypeError: loss() got an unexpected keyword argument 'use_ts_loss'

    After checking the source code of single_stage_wsis.py, I changed the argument name to "use_loss_ts" and that error disappeared. But another error occurred:

    Traceback (most recent call last):
      File "./tools/train.py", line 191, in <module>
        main()
      File "./tools/train.py", line 187, in main
        meta=meta)
      File "/home/yuecao/project/terrasense/DiscoBox/mmdet/apis/train.py", line 172, in train_detector
        runner.run(data_loaders, cfg.workflow)
      File "/home/yuecao/anaconda3/envs/discobox/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 127, in run
        epoch_runner(data_loaders[i], **kwargs)
      File "/home/yuecao/anaconda3/envs/discobox/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 50, in train
        self.run_iter(data_batch, train_mode=True, **kwargs)
      File "/home/yuecao/anaconda3/envs/discobox/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 30, in run_iter
        **kwargs)
      File "/home/yuecao/anaconda3/envs/discobox/lib/python3.7/site-packages/mmcv/parallel/data_parallel.py", line 67, in train_step
        return self.module.train_step(*inputs[0], **kwargs[0])
      File "/home/yuecao/project/terrasense/DiscoBox/mmdet/models/detectors/base.py", line 251, in train_step
        losses = self(**data)
      File "/home/yuecao/anaconda3/envs/discobox/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home/yuecao/anaconda3/envs/discobox/lib/python3.7/site-packages/mmcv/runner/fp16_utils.py", line 128, in new_func
        output = old_func(*new_args, **new_kwargs)
      File "/home/yuecao/project/terrasense/DiscoBox/mmdet/models/detectors/single_stage_wsis.py", line 249, in forward
        return self.forward_train(img, img_metas, **kwargs)
      File "/home/yuecao/project/terrasense/DiscoBox/mmdet/models/detectors/single_stage_wsis.py", line 86, in forward_train
        use_loss_ts=self.avg_loss_ins<0.4)
      File "/home/yuecao/anaconda3/envs/discobox/lib/python3.7/site-packages/torch/cuda/amp/autocast_mode.py", line 141, in decorate_autocast
        return func(*args, **kwargs)
    TypeError: loss() missing 2 required positional arguments: 'img_metas' and 'cfg'

    Did I do anything wrong? How should I fix this problem?

    Thank you very much for your help

    opened by caincdiy 4
  • Does the code use the same hyperparameters as the paper describes?


    Thanks for great work!

    I read the code in the MeanField class, and it seems inconsistent with the paper.

    (Screenshot attachment: 2021-10-24 15:18:19)

    https://github.com/NVlabs/DiscoBox/blob/3b170414c330ca6a41af56bfaad36313383d702c/mmdet/models/dense_heads/discobox_solov2_head.py#L744-L814

    opened by ssssholmes 3
  • I met an error while training my own dataset: IndexError: index 0 is out of bounds for dimension 0 with size 0


    Thanks for your nice implementation. I tried training DiscoBox on my own dataset and met the following error. I used this config file, changing only the dataset path and class info.

    Traceback (most recent call last):
      File "tools/train.py", line 191, in <module>
        main()
      File "tools/train.py", line 180, in main
        train_detector(
      File "/workspace/mmdet/apis/train.py", line 172, in train_detector
        runner.run(data_loaders, cfg.workflow)
      File "/opt/conda/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 127, in run
        epoch_runner(data_loaders[i], **kwargs)
      File "/opt/conda/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 50, in train
        self.run_iter(data_batch, train_mode=True, **kwargs)
      File "/opt/conda/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 29, in run_iter
        outputs = self.model.train_step(data_batch, self.optimizer,
      File "/opt/conda/lib/python3.8/site-packages/mmcv/parallel/data_parallel.py", line 67, in train_step
        return self.module.train_step(*inputs[0], **kwargs[0])
      File "/workspace/mmdet/models/detectors/base.py", line 251, in train_step
        losses = self(**data)
      File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/opt/conda/lib/python3.8/site-packages/mmcv/runner/fp16_utils.py", line 128, in new_func
        output = old_func(*new_args, **new_kwargs)
      File "/workspace/mmdet/models/detectors/base.py", line 185, in forward
        return self.forward_train(img, img_metas, **kwargs)
      File "/workspace/mmdet/models/detectors/single_stage_wsis.py", line 206, in forward_train
        losses = self.bbox_head.loss(
      File "/opt/conda/lib/python3.8/site-packages/torch/cuda/amp/autocast_mode.py", line 140, in decorate_autocast
        return func(*args, **kwargs)
      File "/workspace/mmdet/models/dense_heads/discobox_solov2_head.py", line 1329, in loss
        ins_label_list, cate_label_list, ins_ind_label_list, grid_order_list = multi_apply(
      File "/workspace/mmdet/core/utils/misc.py", line 29, in multi_apply
        return tuple(map(list, zip(*map_results)))
      File "/workspace/mmdet/models/dense_heads/discobox_solov2_head.py", line 1620, in solov2_target_single
        device = gt_labels_raw[0].device
    IndexError: index 0 is out of bounds for dimension 0 with size 0
    

    This error suggests that gt_labels_raw is empty, but every image in my dataset has at least one bbox. Could you suggest some clues for solving this problem? Thanks.

    opened by miyajiyuta 2
  • Discobox evaluation failed with custom dataset


    Thanks to the authors for making the DiscoBox code public. I am trying to finetune the COCO checkpoint on my custom COCO-style dataset. Training seems to run fine, but as soon as the script gets into evaluation (during training), it gets stuck, and after about a minute's wait the process is terminated abruptly. Here is the train command I use:

    bash tools/dist_train.sh configs/discobox/custom_discobox_solov2_r50_fpn_3x.py 2
    

    Error I get -

      [>>>>>>>>>>>>>>>>>>>>>>>>>>>>>                     ] 166/279, 1.3 task/s, elapsed: 124s, ETA:    85s
    Traceback (most recent call last):
      File "/opt/conda/envs/open-mmlab/lib/python3.7/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/opt/conda/envs/open-mmlab/lib/python3.7/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/opt/conda/envs/open-mmlab/lib/python3.7/site-packages/torch/distributed/launch.py", line 261, in <module>
        main()
      File "/opt/conda/envs/open-mmlab/lib/python3.7/site-packages/torch/distributed/launch.py", line 257, in main
        cmd=cmd)
    subprocess.CalledProcessError: Command '['/opt/conda/envs/open-mmlab/bin/python', '-u', 'tools/test.py', '--local_rank=0', 'configs/discobox/custom_solov2_r50_fpn_3x.py', 'work_dirs/roboflow_data/epoch_1.pth', '--launcher', 'pytorch', '--eval', 'bbox', 'segm']' died with <Signals.SIGKILL: 9>.
    

    To further investigate this, I also tried running training on 1 GPU,

    bash tools/dist_train.sh configs/discobox/custom_discobox_solov2_r50_fpn_3x.py 1
    

    This failed too with the same error.

    Another thing I thought was worth trying was running a separate eval -

    bash tools/dist_test.sh configs/discobox/custom_solov2_r50_fpn_3x.py work_dirs/roboflow_data/epoch_1.pth 1 --eval bbox segm
    

    Again, resulted into the same error.

    Has anyone come across this error? Please help, thanks!

    opened by ameyparanjape 2
  • Results of model with 1x r50 on COCO.


    Hello, thanks for your promising work. Due to the long training time of the 3x schedule on COCO, could you provide the AP results on COCO with a 1x training schedule as a performance reference?

    opened by LiWentomng 2
  • Question about the BoxInst Mentioned in the paper


    Greetings! Thanks for releasing the code for your awesome work!

    I noticed that you list results for BoxInst in your paper. Have you tried re-implementing BoxInst in your mmdet codebase?

    opened by Unrealluver 2
  • ERROR


    ImportError: /home/user/miniconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/_ext.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _ZNK3c1010TensorImpl36is_contiguous_nondefault_policy_implENS_12MemoryFormatE

    opened by Huster-Hq 1
  • About loss_ts, loss_corr


    I ran:

    bash tools/dist_train.sh \
         configs/discobox/discobox_solov2_r50_fpn_3x.py 2

    I got the log messages below:

    2022-09-19 00:46:25,659 - mmdet - INFO - Epoch [1][50/29317] lr: 3.425e-04, eta: 35 days, 8:29:59, time: 2.894, data_time: 1.802, memory: 3714, loss_ins: 1.1359, loss_ts: 0.0000, loss_cate: 0.9793, loss_corr: 0.0000, loss: 2.1152, grad_norm: 11.1556
    2022-09-19 00:47:05,600 - mmdet - INFO - Epoch [1][100/29317] lr: 5.900e-04, eta: 22 days, 13:17:57, time: 0.799, data_time: 0.058, memory: 3714, loss_ins: 1.1685, loss_ts: 0.0000, loss_cate: 0.8634, loss_corr: 0.0000, loss: 2.0319, grad_norm: 9.6288
    2022-09-19 00:47:46,070 - mmdet - INFO - Epoch [1][150/29317] lr: 8.376e-04, eta: 18 days, 7:56:31, time: 0.809, data_time: 0.049, memory: 3714, loss_ins: 1.0841, loss_ts: 0.0000, loss_cate: 0.8087, loss_corr: 0.0000, loss: 1.8928, grad_norm: 9.4327
    2022-09-19 00:48:24,677 - mmdet - INFO - Epoch [1][200/29317] lr: 1.085e-03, eta: 16 days, 2:30:51, time: 0.772, data_time: 0.043, memory: 3714, loss_ins: 1.1373, loss_ts: 0.0000, loss_cate: 0.8160, loss_corr: 0.0000, loss: 1.9532, grad_norm: 8.7715

    loss_ts and loss_corr stay at 0.

    This seems like a problem: I do not get any loss from loss_ts or loss_corr. What should I do?

    opened by MIYU8305 1
  • About Box-conditioned inference


    I ran:

    bash tools/dist_test.sh \
         configs/discobox/boxcond_discobox_solov2_x101_dcn_fpn_3x.py \
         work_dirs/coco_x101_dcn_fpn_3x.pth 2 \
         --format-only \
         --options "jsonfile_prefix=work_dirs/coco_x101_dcn_fpn_results.json"

    I got the error below:

    Traceback (most recent call last):
      File "tools/test.py", line 226, in <module>
        main()
      File "tools/test.py", line 202, in main
        outputs = single_gpu_test(model, data_loader, args.show, args.show_dir,
      File "/home/user/workdir/mmdet/apis/test.py", line 30, in single_gpu_test
        result = model(return_loss=False, rescale=True, **data)
      File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/opt/conda/lib/python3.8/site-packages/mmcv/parallel/data_parallel.py", line 42, in forward
        return super().forward(*inputs, **kwargs)
      File "/opt/conda/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 165, in forward
        return self.module(*inputs[0], **kwargs[0])
      File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/opt/conda/lib/python3.8/site-packages/mmcv/runner/fp16_utils.py", line 128, in new_func
        output = old_func(*new_args, **new_kwargs)
      File "/home/user/workdir/mmdet/models/detectors/single_stage_wsis.py", line 251, in forward
        return self.forward_test(img, img_metas, **kwargs)
      File "/home/user/workdir/mmdet/models/detectors/base.py", line 164, in forward_test
        return self.simple_test(imgs[0], img_metas[0], **kwargs)
      File "/home/user/workdir/mmdet/models/detectors/single_stage_wsis.py", line 262, in simple_test
        results = self.bbox_head.get_seg(*seg_inputs, img=img, gt_bboxes=gt_bboxes, gt_labels=gt_labels, gt_masks=gt_masks)
    TypeError: get_seg() got an unexpected keyword argument 'gt_bboxes'

    How can I solve this error? If I change results = self.bbox_head.get_seg(*seg_inputs, img=img, gt_bboxes=gt_bboxes, gt_labels=gt_labels, gt_masks=gt_masks) to results = self.bbox_head.get_seg(*seg_inputs, img=img), the error disappears.

    opened by MIYU8305 1
  • IndexError: list index out of range when using my own COCO-format data - I hope you can help me


    I only modified the config for my own COCO-format data:

    @DATASETS.register_module()
    class CocoDataset(CustomDataset):
        CLASSES = ('air-hole', 'bite-edge', 'broken-arc', 'crack', 'hollow-bead', 'overlap',
                   'slag-inclusion', 'unfused')
        # CLASSES = ('person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus',
        #            'train', 'truck', 'boat', 'traffic light', 'fire hydrant',
        #            'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog',
        #            'horse', 'sheep', 'cow', 'elephant', 'bear', 'zebra', 'giraffe',
        #            'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee',
        #            'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat',
        #            'baseball glove', 'skateboard', 'surfboard', 'tennis racket',
        #            'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl',
        #            'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot',
        #            'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',
        #            'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop',
        #            'mouse', 'remote', 'keyboard', 'cell phone', 'microwave',
        #            'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock',
        #            'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush')
    

    The error I get:

    
    Traceback (most recent call last):
      File "tools/train.py", line 191, in <module>
        main()
      File "tools/train.py", line 187, in main
        meta=meta)
      File "/root/DiscoBox/mmdet/apis/train.py", line 172, in train_detector
        runner.run(data_loaders, cfg.workflow)
      File "/root/.local/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 125, in run
        epoch_runner(data_loaders[i], **kwargs)
      File "/root/.local/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 54, in train
        self.call_hook('after_train_epoch')
      File "/root/.local/lib/python3.7/site-packages/mmcv/runner/base_runner.py", line 307, in call_hook
        getattr(hook, fn_name)(self)
      File "/root/DiscoBox/mmdet/core/evaluation/eval_hooks.py", line 279, in after_train_epoch
        key_score = self.evaluate(runner, results)
      File "/root/DiscoBox/mmdet/core/evaluation/eval_hooks.py", line 177, in evaluate
        results, logger=runner.logger, **self.eval_kwargs)
      File "/root/DiscoBox/mmdet/datasets/coco.py", line 497, in evaluate
        cocoEval.evaluate()
      File "/root/.local/lib/python3.7/site-packages/pycocotools/cocoeval.py", line 149, in evaluate
        self._prepare()
      File "/root/.local/lib/python3.7/site-packages/pycocotools/cocoeval.py", line 110, in _prepare
        _toMask(gts, self.cocoGt)
      File "/root/.local/lib/python3.7/site-packages/pycocotools/cocoeval.py", line 95, in _toMask
        rle = coco.annToRLE(ann)
      File "/root/.local/lib/python3.7/site-packages/pycocotools/coco.py", line 497, in annToRLE
        rles = maskUtils.frPyObjects(segm, h, w)
      File "pycocotools/_mask.pyx", line 292, in pycocotools._mask.frPyObjects
    IndexError: list index out of range
    Traceback (most recent call last):
      File "/root/.local/conda/envs/py37/lib/python3.7/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/root/.local/conda/envs/py37/lib/python3.7/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/root/.local/conda/envs/py37/lib/python3.7/site-p
    opened by DaDogs 1
  • error


    one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [2, 128, 80, 88]], which is output 0 of ReluBackward0, is at version 3; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).

    opened by Huster-Hq 1