
SALOD

Source code of our work: "Benchmarking Deep Models for Salient Object Detection".
In this work, we propose a new benchmark for SALient Object Detection (SALOD) methods.

We re-implement 14 methods using the same settings, including input size, data loader and evaluation metrics (thanks to Metrics). Optimizer hyperparameters differ across methods because of their various network structures and objective functions; we tuned the optimizer for each model individually to achieve its best performance. Some other networks are still being debugged, and contributions that improve their performance are welcome.

Properties

  1. A unified interface for new models. To develop a new network, you only need to: 1) set configs; 2) define the network; 3) define the loss function. See methods/template.
  2. We build a new dataset by collecting several prevalent datasets used in the SOD task.
  3. Easy to adopt different backbones (available backbones: ResNet-50, VGG-16, MobileNet-v2, EfficientNet-B0, GhostNet, Res2Net).
  4. Test all networks on your own device. By inputting the name of a network, you can test any available method in our benchmark. Comparisons include FPS, GFLOPs, model size and multiple effectiveness metrics (a sketch of how such statistics can be estimated follows this list).
  5. We implement a loss factory so that you can change the loss functions using command-line parameters.
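
As a reference point, parameter counts and GFLOPs similar to those reported in the tables below can be estimated with an off-the-shelf profiler such as thop (not part of this repository). This is only an illustrative sketch, not necessarily how our benchmark scripts compute these statistics:

import torch
from torchvision.models import resnet50
from thop import profile  # pip install thop

# Profile a backbone at the benchmark input size (320x320).
model = resnet50()
dummy = torch.randn(1, 3, 320, 320)
macs, params = profile(model, inputs=(dummy,))
# thop counts multiply-accumulate operations (MACs); papers often quote these as GFLOPs.
print(f'Params: {params / 1e6:.1f}M  MACs: {macs / 1e9:.1f}G')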

Available Methods:

Methods Publication Input Weight Optim. LR Epochs Paper Src Code
DHSNet CVPR2016 320^2 95M Adam 2e-5 30 openaccess Pytorch
NLDF CVPR2017 320^2 161M Adam 1e-5 30 openaccess Pytorch/TF
Amulet ICCV2017 320^2 312M Adam 1e-5 30 openaccess Pytorch
SRM ICCV2017 320^2 240M Adam 5e-5 30 openaccess Pytorch
PicaNet CVPR2018 320^2 464M SGD 1e-2 30 openaccess Pytorch
DSS TPAMI2019 320^2 525M Adam 2e-5 30 IEEE/ArXiv Pytorch
BASNet CVPR2019 320^2 374M Adam 1e-5 30 openaccess Pytorch
CPD CVPR2019 320^2 188M Adam 1e-5 30 openaccess Pytorch
PoolNet CVPR2019 320^2 267M Adam 5e-5 30 openaccess Pytorch
EGNet ICCV2019 320^2 437M Adam 5e-5 30 openaccess Pytorch
SCRN ICCV2019 320^2 100M SGD 1e-2 30 openaccess Pytorch
GCPA AAAI2020 320^2 263M SGD 1e-2 30 aaai.org Pytorch
ITSD CVPR2020 320^2 101M SGD 5e-3 30 openaccess Pytorch
MINet CVPR2020 320^2 635M SGD 1e-3 30 openaccess Pytorch
Tuning ----- ----- ------ ------ ----- ----- ----- -----
*PAGE CVPR2019 320^2 ------ ------ ----- ----- openaccess TF
*PFA CVPR2019 320^2 ------ ------ ----- ----- openaccess Pytorch
*F3Net AAAI2020 320^2 ------ ------ ----- ----- aaai.org Pytorch
*PFPN AAAI2020 320^2 ------ ------ ----- ----- aaai.org Pytorch
*LDF CVPR2020 320^2 ------ ------ ----- ----- openaccess Pytorch

Usage

# Train a model. model_name is the lower-cased method name, e.g. poolnet, egnet, gcpa, dhsnet or minet.
python3 train.py model_name --gpus=0

# Test a trained model:
python3 test.py model_name --gpus=0 --weight=path_to_weight

# Measure inference speed (FPS):
python3 test_fps.py model_name --gpus=0

# Evaluate the generated saliency maps:
python3 eval.py --pre_path=path_to_maps

Results

We report the benchmark results here.
For more results, please refer to Reproduction, Few-shot and Generalization.

Notice: please contact us if you get better results.

VGG16-based:

Methods #Param.(M) GFLOPs Tr. Time FPS max-F ave-F Fbw MAE SM EM Weight
DHSNet 15.4 52.5 7.5 69.8 .884 .815 .812 .049 .880 .893
Amulet 33.2 1362 12.5 35.1 .855 .790 .772 .061 .854 .876
NLDF 24.6 136 9.7 46.3 .886 .824 .828 .045 .881 .898
SRM 37.9 73.1 7.9 63.1 .857 .779 .769 .060 .859 .874
PicaNet 26.3 74.2 40.5* 8.8 .889 .819 .823 .046 .884 .899
DSS 62.2 99.4 11.3 30.3 .891 .827 .826 .046 .888 .899
BASNet 80.5 114.3 16.9 32.6 .906 .853 .869 .036 .899 .915
CPD 29.2 85.9 10.5 36.3 .886 .815 .792 .052 .885 .888
PoolNet 52.5 236.2 26.4 23.1 .902 .850 .852 .039 .898 .913
EGNet 101 178.8 19.2 16.3 .909 .853 .859 .037 .904 .914
SCRN 16.3 47.2 9.3 24.8 .896 .820 .822 .046 .891 .894
GCPA 42.8 197.1 17.5 29.3 .903 .836 .845 .041 .898 .907
ITSD 16.9 76.3 15.2* 30.6 .905 .820 .834 .045 .901 .896
MINet 47.8 162 21.8 23.4 .900 .839 .852 .039 .895 .909

ResNet50-based:

Methods #Param.(M) GFLOPs Tr. Time FPS max-F ave-F Fbw MAE SM EM Weight
DHSNet 24.2 13.8 3.9 49.2 .909 .830 .848 .039 .905 .905
Amulet 79.8 1093.8 6.3 35.1 .895 .822 .835 .042 .894 .900
NLDF 41.1 115.1 9.2 30.5 .903 .837 .855 .038 .898 .910
SRM 61.2 20.2 5.5 34.3 .882 .803 .812 .047 .885 .891
PicaNet 106.1 36.9 18.5* 14.8 .904 .823 .843 .041 .902 .902
DSS 134.3 35.3 6.6 27.3 .894 .821 .826 .045 .893 .898
BASNet 95.5 47.2 12.2 32.8 .917 .861 .884 .032 .909 .921
CPD 47.9 14.7 7.7 22.7 .906 .842 .836 .040 .904 .908
PoolNet 68.3 66.9 10.2 33.9 .912 .843 .861 .036 .907 .912
EGNet 111.7 222.8 25.7 10.2 .917 .851 .867 .036 .912 .914
SCRN 25.2 12.5 5.5 19.3 .910 .838 .845 .040 .906 .905
GCPA 67.1 54.3 6.8 37.8 .916 .841 .866 .035 .912 .912
ITSD 25.7 19.6 5.7 29.4 .913 .825 .842 .042 .907 .899
MINet 162.4 87 11.7 23.5 .913 .851 .871 .034 .906 .917

Create New Model

To create a new model, you can copy the template folder and modify it as you want.

cp -r ./methods/template ./methods/new_name

For more details, please refer to the Python files in the template folder.
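
For illustration only, a new method under ./methods/new_name might be organized roughly as below. The class names, config fields and function signatures here are hypothetical; the actual interface is defined by the files in methods/template.

import torch.nn as nn
import torch.nn.functional as F

# 1) Configs (hypothetical fields): optimizer, learning rate and epochs for this method.
custom_config = {'optim': 'Adam', 'lr': 2e-5, 'epoch': 30}

# 2) Network: decode multi-scale backbone features into a saliency map.
class Network(nn.Module):
    def __init__(self, encoder, feat_channels):
        super().__init__()
        self.encoder = encoder  # shared backbone supplied by the framework
        self.head = nn.Conv2d(feat_channels[-1], 1, kernel_size=3, padding=1)

    def forward(self, x):
        feats = self.encoder(x)  # list of multi-scale feature maps
        pred = self.head(feats[-1])
        return F.interpolate(pred, size=x.shape[-2:], mode='bilinear', align_corners=False)

# 3) Loss function: binary cross-entropy between prediction and ground-truth mask.
def loss_fn(pred, gt):
    return F.binary_cross_entropy_with_logits(pred, gt)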

Loss Factory

We supply a Loss Factory for an easier way to tune the loss functions. You can set the --loss and --lw parameters to use it.

Here are some examples:

loss_dict = {'b': BCE, 's': SSIM, 'i': IOU, 'd': DICE, 'e': Edge, 'c': CTLoss}

python train.py ... --loss=bd
# loss = 1 * bce_loss + 1 * dice_loss

python train.py ... --loss=bs --lw=0.3,0.7
# loss = 0.3 * bce_loss + 0.7 * ssim_loss

python train.py ... --loss=bsid --lw=0.3,0.1,0.5,0.2
# loss = 0.3 * bce_loss + 0.1 * ssim_loss + 0.5 * iou_loss + 0.2 * dice_loss
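
Conceptually, each letter in --loss selects a loss term and the matching entry in --lw gives its weight (all weights default to 1 when --lw is omitted). A minimal sketch of that combination logic, with simplified stand-in loss functions, might look like this; the repository's actual implementation may differ:

import torch
import torch.nn.functional as F

def bce_loss(pred, gt):
    return F.binary_cross_entropy_with_logits(pred, gt)

def iou_loss(pred, gt):
    pred = torch.sigmoid(pred)
    inter = (pred * gt).sum(dim=(2, 3))
    union = (pred + gt - pred * gt).sum(dim=(2, 3))
    return (1 - inter / (union + 1e-6)).mean()

def dice_loss(pred, gt):
    pred = torch.sigmoid(pred)
    inter = (pred * gt).sum(dim=(2, 3))
    return (1 - (2 * inter + 1) / (pred.sum(dim=(2, 3)) + gt.sum(dim=(2, 3)) + 1)).mean()

# Letter codes -> loss terms (only a subset is shown here).
LOSS_DICT = {'b': bce_loss, 'i': iou_loss, 'd': dice_loss}

def build_loss(loss_code, lw=None):
    """Combine the losses named by loss_code (e.g. 'bd') with weights lw (e.g. '0.3,0.7')."""
    weights = [float(w) for w in lw.split(',')] if lw else [1.0] * len(loss_code)
    assert len(weights) == len(loss_code), 'one weight per loss letter'
    terms = [(weights[i], LOSS_DICT[c]) for i, c in enumerate(loss_code)]
    def total_loss(pred, gt):
        return sum(w * fn(pred, gt) for w, fn in terms)
    return total_loss

# Example: --loss=bd --lw=0.3,0.7  ->  0.3 * bce_loss + 0.7 * dice_loss
criterion = build_loss('bd', '0.3,0.7')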