iNAS: Integral NAS for Device-Aware Salient Object Detection
Introduction
- Integral search design (jointly considers backbone/head structures and design/deployment devices).
- Covers mainstream handcrafted saliency head designs.
- SOTA performance with a large latency reduction on diverse hardware platforms.
Updates
v0.1.0 was released on 15/11/2021:
- Support training and searching on Salient Object Detection (SOD).
- Support four stages in one-shot architecture search.
- Support stand-alone model inference with JSON configuration.
- Provide off-the-shelf models and experiment logs.
Please refer to changelog.md for details and release history.
Dependencies and Installation
Dependencies
- Python >= 3.7 (Anaconda or Miniconda is recommended)
- PyTorch >= 1.7
- NVIDIA GPU + CUDA
Install from a local clone
1. Clone the repo

   ```bash
   git clone https://github.com/guyuchao/iNAS.git
   ```

2. Install dependent packages

   ```bash
   conda create -n iNAS python=3.8
   conda install -c pytorch pytorch=1.7 torchvision cudatoolkit=10.2
   pip install -r requirements.txt
   ```

3. Install iNAS

   Please run the following command in the iNAS root path to install iNAS:

   ```bash
   python setup.py develop
   ```
Dataset Preparation
Folder Structure
```
iNAS
├── iNAS
├── experiment
├── scripts
├── options
├── datasets
│   ├── saliency
│   │   ├── DUTS-TR/     # Contains both images (.jpg) and labels (.png).
│   │   ├── DUTS-TR.lst  # Specifies the image-label pairs for training or testing.
│   │   ├── ECSSD/
│   │   ├── ECSSD.lst
│   │   ├── ...
```
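The `.lst` index files above pair each image with its label mask. As a hypothetical sketch of how such an index could be generated (the exact `.lst` format iNAS expects is not documented here; this assumes one "image_path label_path" pair per line, with the label sharing the image's stem):

```python
# Hypothetical sketch: build a .lst index pairing each .jpg image with its
# .png label. The real iNAS format may differ; this is an assumption.
from pathlib import Path


def pair_lines(image_paths):
    """Map each .jpg image path to an 'image label' line (label = .png twin)."""
    return [f"{p} {Path(p).with_suffix('.png')}" for p in sorted(image_paths)]


def write_lst(dataset_dir, lst_path):
    """Scan a dataset folder for .jpg images and write the pairing index."""
    images = (str(p) for p in Path(dataset_dir).glob("*.jpg"))
    Path(lst_path).write_text("\n".join(pair_lines(images)) + "\n")
```

For example, `pair_lines(["datasets/saliency/DUTS-TR/img1.jpg"])` would yield a single line pointing the image at `img1.png` in the same folder.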
Common Image SOD Datasets
We provide a list of common salient object detection datasets.
| Name | Datasets | Short Description | Download |
| --- | --- | --- | --- |
| SOD Training | DUTS-TR | 10553 images for SOD training | Google Drive / Baidu Drive (psd: w69q) |
| SOD Testing | ECSSD | 1000 images for SOD testing | |
| | DUT-OMRON | 5168 images for SOD testing | |
| | DUTS-TE | 5019 images for SOD testing | |
| | HKU-IS | 4447 images for SOD testing | |
| | PASCAL-S | 850 images for SOD testing | |
How to Use
iNAS integrates the four main steps of one-shot neural architecture search:
- Train supernet: Provides a fast performance evaluator for searching.
- Search models: Finds a Pareto frontier based on the performance evaluator and resource evaluator.
- Convert weight/Retrain/Finetune: Promotes the searched models to their best performance. (We now support converting supernet weights to stand-alone models without retraining.)
- Deploy: Tests stand-alone models.
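The search step keeps only candidates that are not dominated in both objectives (e.g. latency and accuracy). A minimal sketch of that Pareto-frontier selection, with candidate tuples and values that are purely illustrative and not taken from iNAS itself:

```python
# Illustrative sketch: filter (latency, accuracy) candidates down to the
# Pareto frontier. Candidate format and numbers are assumptions, not iNAS APIs.
def pareto_frontier(candidates):
    """Keep candidates not dominated by any other.

    A candidate is dominated when another candidate has latency <= and
    accuracy >=, and is strictly better in at least one of the two.
    """
    frontier = []
    for lat, acc in candidates:
        dominated = any(
            (l2 <= lat and a2 >= acc) and (l2 < lat or a2 > acc)
            for l2, a2 in candidates
        )
        if not dominated:
            frontier.append((lat, acc))
    return sorted(frontier)  # fastest-first


models = [(10, 0.80), (12, 0.85), (15, 0.84), (20, 0.90)]
print(pareto_frontier(models))  # (15, 0.84) is dominated by (12, 0.85)
```

In the actual pipeline, the supernet acts as the performance evaluator and a device-specific latency model acts as the resource evaluator feeding such a selection.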
Please see Tutorial.md for the basic usage of these steps in iNAS.
Model Zoo
Pre-trained models and log examples are available in ModelZoo.md.
TODO List
- Support multi-processing search (simply using data parallelism cannot increase search speed).
- Complete documentation.
- Add some applications.
Citation
If you find this project useful in your research, please consider citing:
@inproceedings{gu2021inas,
title={iNAS: Integral NAS for Device-Aware Salient Object Detection},
author={Gu, Yu-Chao and Gao, Shang-Hua and Cao, Xu-Sheng and Du, Peng and Lu, Shao-Ping and Cheng, Ming-Ming},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={4934--4944},
year={2021}
}
License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (cc-by-nc-sa), where only non-commercial usage is allowed. For commercial usage, please contact us.
Acknowledgement
The project structure is borrowed from BasicSR, and parts of the implementation and evaluation code are borrowed from Once-For-All, BASNet and BiSeNet. Thanks to these excellent projects.
Contact
If you have any questions, please email [email protected].