MViTs Excel at Class-agnostic Object Detection
Multi-modal Vision Transformers Excel at Class-agnostic Object Detection
Muhammad Maaz, Hanoona Rasheed, Salman Khan, Fahad Shahbaz Khan, Rao Muhammad Anwer and Ming-Hsuan Yang
Paper: https://arxiv.org/abs/2111.11430
Abstract: What constitutes an object? This has been a long-standing question in computer vision. Towards this goal, numerous learning-free and learning-based approaches have been developed to score objectness. However, they generally do not scale well across new domains and for unseen objects. In this paper, we advocate that existing methods lack a top-down supervision signal governed by human-understandable semantics. To bridge this gap, we explore recent Multi-modal Vision Transformers (MViT) that have been trained with aligned image-text pairs. Our extensive experiments across various domains and novel objects show the state-of-the-art performance of MViTs to localize generic objects in images. Based on these findings, we develop an efficient and flexible MViT architecture using multi-scale feature processing and deformable self-attention that can adaptively generate proposals given a specific language query. We show the significance of MViT proposals in a diverse range of applications including open-world object detection, salient and camouflaged object detection, and supervised and self-supervised detection tasks. Further, MViTs offer enhanced interactability with intelligible text queries.
Architecture overview of MViTs used in this work
Results
Class-agnostic OD performance of MViTs compared with a uni-modal detector (RetinaNet) on several datasets. MViTs show consistently good results across all datasets.
Enhanced Interactability: Effect of using different intuitive text queries on the MDef-DETR class-agnostic OD performance. Combining detections from multiple queries captures varying aspects of objectness.
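As an illustration of this combination step, the sketch below merges per-query detections by simple concatenation followed by non-maximum suppression. The paper's exact merging procedure is described in Section 5.1; the query strings and the (N, 5) [x1, y1, x2, y2, score] box format used here are assumptions for illustration only.

```python
# Illustrative merge of class-agnostic detections from multiple text
# queries: concatenate all per-query boxes, then suppress duplicates with
# greedy NMS. Query strings and the box format are assumptions; see
# Section 5.1 of the paper for the actual procedure.
import numpy as np

def nms(boxes, scores, iou_thr=0.5):
    """Greedy non-maximum suppression; returns indices of kept boxes."""
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + areas - inter)
        order = rest[iou <= iou_thr]
    return keep

def combine_queries(per_query_dets, iou_thr=0.5):
    """per_query_dets: {query string: (N, 5) detection array}."""
    merged = np.concatenate(list(per_query_dets.values()), axis=0)
    return merged[nms(merged[:, :4], merged[:, 4], iou_thr)]
```

For example, combine_queries({"all objects": dets_a, "all entities": dets_b}) would yield a single deduplicated set of class-agnostic boxes.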
Generalization to Rare/Novel Classes: MDef-DETR class-agnostic OD performance on rarely and frequently occurring categories in the pretraining captions. The numbers on top of the bars indicate occurrences of the corresponding category in the training dataset. The MViT achieves good recall even for classes with no or very few occurrences.
Open-world Object Detection: Effect of using class-agnostic OD proposals from MDef-DETR for pseudo-labelling of unknowns in the Open World Detector (ORE).
Pretraining for Class-aware Object Detection: Effect of using MDef-DETR proposals for pre-training of DETReg instead of Selective Search proposals.
Evaluation
The provided codebase contains pre-computed detections for all datasets, obtained with our MDef-DETR model. The directory structure is as follows:
-> README.md
-> LICENSE
-> get_eval_metrics.py
-> get_multi_dataset_eval_metrics.py
-> data
    -> voc2007
        -> combined.pkl
    -> coco
        -> combined.pkl
    -> kitti
        -> combined.pkl
    -> kitchen
        -> combined.pkl
    -> clipart
        -> combined.pkl
    -> comic
        -> combined.pkl
    -> watercolor
        -> combined.pkl
    -> dota
        -> combined.pkl
Each combined.pkl file contains the combined detections from multiple intuitive text queries for the corresponding dataset (refer to Section 5.1: Enhanced Interactability for more details).
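A combined.pkl file can be inspected directly with pickle. The sketch below is minimal and assumes the pickle holds a mapping from image identifiers to per-image detections; the actual structure should be confirmed against the file itself.

```python
# Minimal sketch of inspecting a combined.pkl file. The assumed structure
# (a mapping from image ids to detection arrays) should be verified
# against the actual file contents.
import pickle

with open("data/voc2007/combined.pkl", "rb") as f:
    detections = pickle.load(f)

print(type(detections))
if hasattr(detections, "items"):
    key, value = next(iter(detections.items()))
    print(key, type(value))  # e.g. image id -> array of boxes
```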
Download the annotations for all datasets and arrange them as shown below. Note that the script expects COCO annotations in the standard COCO format and the annotations of all other datasets in VOC format.
...
...
-> data
    -> voc2007
        -> combined.pkl
        -> Annotations
    -> coco
        -> combined.pkl
        -> instances_val2017_filt.json
    -> kitti
        -> combined.pkl
        -> Annotations
    ...
    -> kitchen
        -> combined.pkl
        -> Annotations
    -> clipart
        -> combined.pkl
        -> Annotations
    -> comic
        -> combined.pkl
        -> Annotations
    -> watercolor
        -> combined.pkl
        -> Annotations
    -> dota
        -> combined.pkl
        -> Annotations
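Before running the evaluation, it can help to sanity-check this layout. The helper below is not part of the codebase; it simply mirrors the tree above and reports any missing file.

```python
# Hypothetical helper (not part of this repository) that checks the
# expected directory layout before the metric scripts are run.
from pathlib import Path

DATASETS = ["voc2007", "coco", "kitti", "kitchen",
            "clipart", "comic", "watercolor", "dota"]

for name in DATASETS:
    root = Path("data") / name
    # COCO uses a single JSON file; all other datasets use VOC-style folders.
    ann = root / ("instances_val2017_filt.json" if name == "coco"
                  else "Annotations")
    for path in (root / "combined.pkl", ann):
        status = "ok" if path.exists() else "MISSING"
        print(f"{status:7s} {path}")
```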
Once the above-mentioned directory structure is in place, follow these steps to calculate the metrics.
- Install numpy
$ pip install numpy
- Calculate metrics
$ python get_multi_dataset_eval_metrics.py
The calculated metrics will be stored in a data.csv file in the same directory.
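Because the evaluation is class-agnostic, a predicted box counts as correct if it sufficiently overlaps any ground-truth box, regardless of category. The sketch below shows a recall metric in that style; the [x1, y1, x2, y2] box format and the matching rule are illustrative assumptions, not the exact logic of get_eval_metrics.py.

```python
# Illustrative class-agnostic recall at an IoU threshold. A ground-truth
# box is counted as detected if any predicted box overlaps it with
# IoU >= iou_thr; class labels are ignored entirely.
import numpy as np

def iou_matrix(gt, pred):
    """Pairwise IoU between (G, 4) ground-truth and (P, 4) predicted boxes."""
    x1 = np.maximum(gt[:, None, 0], pred[None, :, 0])
    y1 = np.maximum(gt[:, None, 1], pred[None, :, 1])
    x2 = np.minimum(gt[:, None, 2], pred[None, :, 2])
    y2 = np.minimum(gt[:, None, 3], pred[None, :, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_g = (gt[:, 2] - gt[:, 0]) * (gt[:, 3] - gt[:, 1])
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    return inter / (area_g[:, None] + area_p[None, :] - inter)

def recall_at_iou(gt, pred, iou_thr=0.5):
    """Fraction of ground-truth boxes covered by at least one prediction."""
    if len(gt) == 0:
        return 1.0
    if len(pred) == 0:
        return 0.0
    return float((iou_matrix(gt, pred).max(axis=1) >= iou_thr).mean())
```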
Citation
If you use our work, please consider citing:
@article{Maaz2021Multimodal,
  title={Multi-modal Transformers Excel at Class-agnostic Object Detection},
  author={Muhammad Maaz and Hanoona Rasheed and Salman Khan and Fahad Shahbaz Khan and Rao Muhammad Anwer and Ming-Hsuan Yang},
  journal={arXiv preprint arXiv:2111.11430},
  year={2021}
}
Contact
Should you have any questions, please contact [email protected] or [email protected].