Boundary IoU API (Beta version)

Overview

Bowen Cheng, Ross Girshick, Piotr Dollár, Alexander C. Berg, Alexander Kirillov

[arXiv] [Project] [BibTeX]

This API is an experimental version of Boundary IoU for 5 datasets:

  • COCO instance segmentation
  • LVIS instance segmentation
  • Cityscapes instance segmentation
  • COCO panoptic segmentation
  • Cityscapes panoptic segmentation

To install Boundary IoU API, run:

pip install git+https://github.com/bowenc0221/boundary-iou-api.git

or

git clone git@github.com:bowenc0221/boundary-iou-api.git
cd boundary-iou-api
pip install -e .
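
After installation, a quick import check (a minimal sketch; the module paths are the same ones used in the examples below) confirms that the package is importable:

    # Drop-in replacements for pycocotools.coco.COCO and pycocotools.cocoeval.COCOeval
    from boundary_iou.coco_instance_api.coco import COCO
    from boundary_iou.coco_instance_api.cocoeval import COCOeval

    print("boundary_iou imported successfully")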

Summary of usage

We provide two ways to use this API: you can either replace your existing imports with ours or run offline evaluation on prediction files.

Replacing imports

Our Boundary IoU API supports evaluation with both Mask IoU and Boundary IoU through the same interface as the original APIs. Thus, you only need to change the imports, without worrying about breaking your existing code. A minimal end-to-end example is sketched after the list below.

  1. COCO instance segmentation
    replace

    from pycocotools.coco import COCO
    from pycocotools.cocoeval import COCOeval

    with

    from boundary_iou.coco_instance_api.coco import COCO
    from boundary_iou.coco_instance_api.cocoeval import COCOeval

    and set

    COCOeval(..., iouType="boundary")
  2. LVIS instance segmentation
    replace

    from lvis import LVISEval

    with

    from boundary_iou.lvis_instance_api.eval import LVISEval

    and set

    LVISEval(..., iou_type="boundary")
  3. Cityscapes instance segmentation
    replace

    import cityscapesscripts.evaluation.evalInstanceLevelSemanticLabeling as cityscapes_eval

    with

    import boundary_iou.cityscapes_instance_api.evalInstanceLevelSemanticLabeling as cityscapes_eval

    and set

    cityscapes_eval.args.iou_type = "boundary"
  4. COCO panoptic segmentation
    replace

    from panopticapi.evaluation import pq_compute

    with

    from boundary_iou.coco_panoptic_api.evaluation import pq_compute

    and set

    pq_compute(..., iou_type="boundary")
  5. Cityscapes panoptic segmentation
    replace

    from cityscapesscripts.evaluation.evalPanopticSemanticLabeling import evaluatePanoptic

    with

    from boundary_iou.cityscapes_panoptic_api.evalPanopticSemanticLabeling import evaluatePanoptic

    and set

    evaluatePanoptic(..., iou_type="boundary")
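
As a concrete illustration of the import-replacement workflow for COCO instance segmentation, here is a minimal sketch that drives the drop-in COCO/COCOeval classes the same way the original pycocotools API is driven (loadRes / evaluate / accumulate / summarize). The annotation and result file names are placeholders, not files shipped with this repo:

    from boundary_iou.coco_instance_api.coco import COCO
    from boundary_iou.coco_instance_api.cocoeval import COCOeval

    # Placeholder paths: a COCO-format ground-truth file and a COCO-format results file.
    gt_json = "instances_val2017.json"
    dt_json = "my_model_results.json"

    coco_gt = COCO(gt_json)
    coco_dt = coco_gt.loadRes(dt_json)

    # iouType="boundary" evaluates with Boundary IoU; iouType="segm" reproduces
    # the standard Mask IoU numbers through the same interface.
    coco_eval = COCOeval(coco_gt, coco_dt, iouType="boundary")
    coco_eval.evaluate()
    coco_eval.accumulate()
    coco_eval.summarize()

Running the script twice, once with iouType="segm" and once with iouType="boundary", gives Mask AP and Boundary AP for the same predictions.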

Offline evaluation

We also provide evaluation scripts that evaluate your prediction files offline for each dataset. The expected COCO-format results file (COCO_DT_JSON) is sketched after the command list below.

  1. COCO instance segmentation

    python ./tools/coco_instance_evaluation.py \
        --gt-json-file COCO_GT_JSON \
        --dt-json-file COCO_DT_JSON \
        --iou-type boundary
  2. LVIS instance segmentation

    python ./tools/lvis_instance_evaluation.py \
        --gt-json-file LVIS_GT_JSON \
        --dt-json-file LVIS_DT_JSON \
        --iou-type boundary
  3. Cityscapes instance segmentation

    python ./tools/cityscapes_instance_evaluation.py \
        --gt_dir GT_DIR \
        --result_dir RESULT_DIR \
        --iou-type boundary
  4. COCO panoptic segmentation

    python ./tools/coco_panoptic_evaluation.py \
        --gt_json_file PANOPTIC_GT_JSON \
        --gt_folder PANOPTIC_GT_DIR \
        --pred_json_file PANOPTIC_PRED_JSON \
        --pred_folder PANOPTIC_PRED_DIR \
        --iou-type boundary
  5. Cityscapes panoptic segmentation

    python ./tools/cityscapes_panoptic_evaluation.py \
        --gt_json_file PANOPTIC_GT_JSON \
        --gt_folder PANOPTIC_GT_DIR \
        --pred_json_file PANOPTIC_PRED_JSON \
        --pred_folder PANOPTIC_PRED_DIR \
        --iou-type boundary
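
The COCO_DT_JSON / LVIS_DT_JSON files above are standard COCO-style results files: a JSON list of per-instance predictions, each with an image_id, a category_id, an RLE segmentation, and a score. As a rough sketch (not code from this repo; the image id, category id, and mask below are made up), such a file can be written with pycocotools:

    import json
    import numpy as np
    from pycocotools import mask as mask_utils

    def to_coco_result(image_id, category_id, binary_mask, score):
        """Convert one predicted binary mask into a COCO-format result entry."""
        rle = mask_utils.encode(np.asfortranarray(binary_mask.astype(np.uint8)))
        rle["counts"] = rle["counts"].decode("utf-8")  # bytes -> str so the RLE is JSON-serializable
        return {
            "image_id": image_id,
            "category_id": category_id,
            "segmentation": rle,
            "score": float(score),
        }

    # Made-up example: a single 60x60 prediction on image 42, category 1.
    pred = np.zeros((480, 640), dtype=np.uint8)
    pred[100:160, 200:260] = 1
    results = [to_coco_result(42, 1, pred, score=0.9)]

    with open("my_model_results.json", "w") as f:  # pass this path as --dt-json-file
        json.dump(results, f)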

Citing Boundary IoU

If you find Boundary IoU helpful in your research or wish to refer to the results reported in the paper, please use the following BibTeX entry.

@inproceedings{cheng2021boundary,
  title={Boundary {IoU}: Improving Object-Centric Image Segmentation Evaluation},
  author={Bowen Cheng and Ross Girshick and Piotr Doll{\'a}r and Alexander C. Berg and Alexander Kirillov},
  booktitle={CVPR},
  year={2021}
}

Contact

If you have any questions regarding this API, please contact us at bcheng9 AT illinois.edu

Comments
  • Can you provide scripts for Cityscapes pixel-level semantic segmentation?

    I found your work very interesting and am thinking about using the Boundary IoU metric to evaluate my segmentation models. However, I can only see instance and panoptic segmentation scripts in this repo. Is the Boundary IoU metric not applicable to pixel-level segmentation?

    opened by dvssajay 2
  • Can this evaluation be used as a loss function for segmentation?

    Hello, how are you? Thanks for contributing to this project. I have a question: can this be used as a boundary loss when training a segmentation model? Thanks.

    Originally posted by @rose-jinyang in https://github.com/bowenc0221/boundary-iou-api/issues/2#issuecomment-843366616

    opened by dreamer121121 1
  • COCO mask and boundary metrics are exactly the same

    Hi, I am trying to evaluate my dataset using masks and boundaries. I am seeing that the AP values are exactly the same. My instances are approximately 60x60 pixels. Any insights?

    opened by ruthvik92 0
  • Cannot use mmdetection

    After I install the boundary-iou-api (pip install git+https://github.com/bowenc0221/boundary-iou-api.git), I cannot run the mmdetection script ./tools/./tools/dist_tra.sh.

    The error is: torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

    ./tools/train.py FAILED

    opened by zhaozhen2333 1
  • Is the ground truth mask a binary mask (only 0 or 1)?

    Hi, thanks for your excellent work. I would like to ask whether the ground truth and prediction are binary masks (containing only 0s and 1s). If not, should we apply a threshold to binarize them? If so, how do we choose the threshold value: is it 128 or 0.5, and why? Thank you!

    opened by hdjsjyl 0
  • Where is the Mask IoU you are using in the paper?

    Hi Bowen, thanks for your excellent work. I am new to segmentation tasks. Could you point me to the Mask IoU implementation used in the paper? Thank you!

    opened by hdjsjyl 0
  • Ground truth boxes for evaluation?

    Hi Bowen,

    I really like that you made a metric which focuses on the boundary quality rather than overall IOU. I was applying your method on my models and was not sure exactly how to use the ground truth boxes for evaluation (Section 6.2, Table 4). I have the intuition that instead of taking the region predicted by RPN, you use the ground truth boxes' region and apply a ROIAlign operation to get the region of interest. Am I correct?

    Also, can you suggest a clean way of doing this?

    Thanks in advance.

    Best, Hamd ul Moqeet Riaz

    opened by Moqeet 2
  • Boundary IoU for very small objects

    Congratulations on the awesome work!

    I have a doubt: how will the boundaries be generated if the objects are very small? I observed that it does not generate boundaries for, say, a very small 2 x 2 square mask.

    opened by sauradip 1