
Background Activation Suppression for Weakly Supervised Object Localization

PyTorch implementation of ''Background Activation Suppression for Weakly Supervised Object Localization''. This repository contains PyTorch training code, inference code and pretrained models.

📋 Table of Contents

  1. 📎 Paper Link
  2. 💡 Abstract
  3. ✨ Motivation
  4. 📖 Method
  5. 📃 Requirements
  6. ✏️ Usage
    1. Start
    2. Download Datasets
    3. Training
    4. Inference
  7. 📊 Experimental Results
  8. ✉️ Statement
  9. 🔍 Citation

📎 Paper Link

Background Activation Suppression for Weakly Supervised Object Localization (link)

  • Authors: Pingyu Wu*, Wei Zhai*, Yang Cao
  • Institution: University of Science and Technology of China (USTC)

💡 Abstract

Weakly supervised object localization (WSOL) aims to localize the object region using only image-level labels as supervision. Recently, a new paradigm has emerged that achieves the localization task by generating a foreground prediction map (FPM). Existing FPM-based methods use cross-entropy (CE) to evaluate the foreground prediction map and to guide the learning of the generator. We argue for using the activation value to achieve more efficient learning. This is based on the experimental observation that, for a trained network, CE converges to zero when the foreground mask covers only part of the object region, while the activation value keeps increasing until the mask expands to the object boundary, which indicates that more of the object region can be learned by using the activation value. In this paper, we propose a Background Activation Suppression (BAS) method. Specifically, an Activation Map Constraint (AMC) module is designed to facilitate the learning of the generator by suppressing the background activation values. Meanwhile, by using the foreground region guidance and the area constraint, BAS can learn the whole region of the object. Furthermore, in the inference phase, we consider the prediction maps of different categories together to obtain the final localization results. Extensive experiments show that BAS achieves significant and consistent improvement over the baseline methods on the CUB-200-2011 and ILSVRC datasets.

✨ Motivation


Motivation. (A) The entropy value of the CE loss w.r.t. the foreground mask, and the foreground activation value w.r.t. the foreground mask. To illustrate the generality of this phenomenon, more examples are shown in the subfigure on the right. (B) Experimental procedure and related definitions. Implementation details of the experiment and further results are available in the Supplementary Material.

Exploratory Experiment

We introduce the implementation of the experiment, as shown in part (A) of the exploratory experiment figure below. For a given ground-truth binary mask, the activation value (Activation) and the cross-entropy (Entropy) corresponding to this mask are generated by masking the feature map. We erode and dilate the ground-truth mask with a convolution of kernel size $5n \times 5n$, obtain foreground masks of different areas by changing the value of $n$, and plot the activation value and the cross-entropy against the mask area, as shown in part (B). By inverting the foreground mask, the corresponding background activation value for each foreground mask area is generated in the same way. Part (C) shows the curves of entropy, foreground activation, and background activation as functions of the mask area. It can be noticed that both the background and foreground activation values correlate more strongly with the mask area than the entropy does. We show more examples in the Supplementary Material.


Exploratory Experiment. Examples of the entropy value of the CE loss w.r.t. the foreground mask and the foreground activation value w.r.t. the foreground mask.
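For concreteness, here is a minimal PyTorch sketch of the masking procedure described above. The names (feature_map, classifier, mask, target) and the use of repeated max-pooling to approximate the $5n \times 5n$ dilation and erosion are illustrative assumptions, not the authors' exact implementation.

import torch
import torch.nn.functional as F

def dilate(mask, n):
    # Approximate a 5n x 5n dilation by applying a 5x5 max-pool n times (assumption).
    for _ in range(n):
        mask = F.max_pool2d(mask, kernel_size=5, stride=1, padding=2)
    return mask

def erode(mask, n):
    # Erosion of the mask is dilation of its complement.
    return 1.0 - dilate(1.0 - mask, n)

@torch.no_grad()
def masked_statistics(feature_map, classifier, mask, target):
    # feature_map: (1, C, H, W) feature map of a trained network
    # classifier:  maps (masked) features to class logits, e.g. GAP + FC
    # mask:        (1, 1, H, W) binary foreground mask
    # target:      (1,) image-level class label
    fg_feat = feature_map * mask            # keep only the masked (foreground) region
    bg_feat = feature_map * (1.0 - mask)    # complementary background region
    fg_activation = fg_feat.mean().item()   # foreground activation value
    bg_activation = bg_feat.mean().item()   # background activation value
    entropy = F.cross_entropy(classifier(fg_feat), target).item()
    return fg_activation, bg_activation, entropy

Sweeping $n$ over eroded and dilated versions of the ground-truth mask and plotting these three quantities against the mask area yields curves of the kind shown in part (C).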

📖 Method


The architecture of the proposed BAS. In the training phase, the class-specific foreground prediction map $F^{fg}$ and the coupled background prediction map $F^{bg}$ are obtained by the generator and then fed into the Activation Map Constraint (AMC) module together with the feature map $F$. In the inference phase, we use Top-k to generate the final localization map.
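As a rough illustration of how these components could fit together, the sketch below assembles an AMC-style training objective. The specific loss forms and weights used here (a background activation suppression term computed as the ratio of background activation to overall activation, an area constraint on $F^{fg}$, a foreground-guidance cross-entropy term, and a standard classification term) are assumptions inferred from the description above, not the released training code.

import torch
import torch.nn.functional as F

def amc_objective(feature_map, fg_map, classifier, target, alpha=1.0, beta=1.0):
    # feature_map: (B, C, H, W) backbone feature map F (assumed post-ReLU, non-negative)
    # fg_map:      (B, 1, H, W) class-specific foreground prediction map F^fg in [0, 1]
    # classifier:  maps features to image-level class logits
    # target:      (B,) image-level labels
    bg_map = 1.0 - fg_map                                  # coupled background map F^bg

    # Background activation suppression: drive the activation captured by the
    # background map toward zero, normalized by the overall activation (assumed form).
    act_all = feature_map.mean(dim=(1, 2, 3))
    act_bg = (feature_map * bg_map).mean(dim=(1, 2, 3))
    l_bas = (act_bg / (act_all + 1e-8)).mean()

    # Area constraint: keep the foreground map from trivially covering the whole image.
    l_area = fg_map.mean()

    # Foreground region guidance: foreground-masked features should still classify correctly.
    l_frg = F.cross_entropy(classifier(feature_map * fg_map), target)

    # Standard image-level classification loss on the unmasked features.
    l_cls = F.cross_entropy(classifier(feature_map), target)

    return l_cls + l_frg + alpha * l_bas + beta * l_area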

📃 Requirements

  • python 3.6.10
  • torch 1.4.0
  • torchvision 0.5.0
  • opencv 4.5.3
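An environment matching the versions above can be set up along the following lines (the exact opencv-python wheel version is an assumption):

pip install torch==1.4.0 torchvision==0.5.0
pip install opencv-python==4.5.3.56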

✏️ Usage

Start

git clone https://github.com/wpy1999/BAS.git
cd BAS

Download Datasets
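The experiments use the CUB-200-2011 and ILSVRC datasets (see the abstract above). Download them from their official sources; the directory layout expected by the scripts can be checked in the data-loading code.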

Training

We will release our training code upon acceptance.

Inference

To test the CUB models, you can download the trained models from [ Google Drive (VGG16) ], [ Google Drive (Mobilenetv1) ], [ Google Drive (ResNet50) ], [ Google Drive (Inceptionv3) ], then run BAS_inference.py:

cd CUB
python BAS_inference.py --arch vgg

To test the ILSVRC models, you can download the trained models from [ Google Drive (VGG16) ], [ Google Drive (Mobilenetv1) ], [ Google Drive (ResNet50) ], [ Google Drive (Inceptionv3) ], then run BAS_inference.py:

cd ILSVRC
python BAS_inference.py --arch vgg
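The other pretrained backbones (MobileNetV1, ResNet50, InceptionV3) should be testable in the same way by changing the --arch argument; the accepted names can be checked in BAS_inference.py.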

📊 Experimental Results



✉️ Statement

This project is for research purposes only; please contact us for a license for commercial use. For any other questions, please contact [email protected] or [email protected].

🔍 Citation

@inproceedings{BAS,
  title={Background Activation Suppression for Weakly Supervised Object Localization},
  author={Pingyu Wu and Wei Zhai and Yang Cao},
  booktitle={xxx},
  year={2021}
}

Comments
  • bas seems not to work

    Great work! I have tried to reproduce the paper. The experiment meets expectations on the baseline (using the foreground, AC, and CLS losses). However, after adding the BAS loss, the regression drops to 0. I have tried various versions of BAS, including detaching the background map, and so on. Also, the description does not match the demo graph. Could you please share some details on how to train with the BAS loss? Thanks.

    opened by williamium3000 3
  • cannot reproduce the result in the paper

    Your work is really great. I have trained a new VGG16 model based on your public code and default parameters. However, my test result is lower than the result in the paper. I tried changing the batch size to 64; the result improved but is still lower than in the paper. I wonder whether the default parameters are right and what batch size was used for VGG16. I'd appreciate it if you could help me figure out my problem, thanks.

    opened by HZIH 1