NAS-FCOS: Fast Neural Architecture Search for Object Detection (CVPR 2020)

Overview

This project hosts the training and inference code, together with pretrained models, for the NAS-FCOS object detection algorithm presented in our paper:

NAS-FCOS: Fast Neural Architecture Search for Object Detection;
Ning Wang, Yang Gao, Hao Chen, Peng Wang, Zhi Tian, Chunhua Shen, Yanning Zhang;
In: Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2020.

The full paper is available at: NAS-FCOS Paper.

Updates

  • News: Accepted by CVPR 2020. (24/02/2020)
  • Uploaded the solver module to support self-training. (06/02/2020)
  • Added support for the RetinaNet detector in the NAS module (pretrained model coming soon). (06/02/2020)
  • Updated the NAS head module, config files, and pretrained model links. (07/01/2020)

Required hardware

We use 4 NVIDIA V100 GPUs.

Installation

This NAS-FCOS implementation is based on maskrcnn-benchmark, so the installation procedure is the same as for the original maskrcnn-benchmark.

Please check INSTALL.md for installation instructions. You may also want to see the original README.md of maskrcnn-benchmark.
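
For quick reference, a typical maskrcnn-benchmark-style setup looks roughly like the sketch below. This is a condensed, assumed summary: the dependency versions and exact steps in INSTALL.md take precedence, and the dataset split names and paths are illustrative (check the DATASETS entries of the config you use and the dataset catalog in the maskrcnn-benchmark code).

# inside a Python environment with a CUDA-enabled PyTorch already installed
pip install ninja yacs cython matplotlib tqdm opencv-python pycocotools

# build the project in-place (same procedure as maskrcnn-benchmark)
python setup.py build develop

# COCO is expected under datasets/coco; symlink a local copy, e.g.:
mkdir -p datasets/coco
ln -s /path/to/coco/annotations datasets/coco/annotations
ln -s /path/to/coco/train2014 datasets/coco/train2014
ln -s /path/to/coco/val2014 datasets/coco/val2014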

Train

The training command line on the COCO train split:

python -m torch.distributed.launch \
    --nproc_per_node=4 \
    --master_port=1213 \
    tools/train_net.py --config-file "configs/search/R_50_NAS_retinanet.yaml"
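
If you train with a different number of GPUs, adjust --nproc_per_node accordingly. As in maskrcnn-benchmark, individual configuration options can also be overridden by appending KEY VALUE pairs to the command line; the values below (batch size, learning rate, output directory) are illustrative assumptions, not the exact settings used in the paper:

python -m torch.distributed.launch \
    --nproc_per_node=8 \
    --master_port=1213 \
    tools/train_net.py --config-file "configs/search/R_50_NAS_retinanet.yaml" \
    SOLVER.IMS_PER_BATCH 16 SOLVER.BASE_LR 0.01 OUTPUT_DIR "training_dir/R_50_NAS_retinanet"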

Inference

The inference command line on the COCO minival split:

python -m torch.distributed.launch \
    --nproc_per_node=1 \
    tools/test_net.py --config-file "configs/search/R_50_NAS_densebox.yaml"

Please note that:

  1. If your model's name is different, please replace models/R-50-NAS.pth with your own.
  2. If you encounter an out-of-memory error, please try to reduce TEST.IMS_PER_BATCH to 1.
  3. If you want to evaluate a different model, please change --config-file to its config file (in configs/search) and MODEL.WEIGHT to its weights file (see the example command after this list).
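
For example, evaluating the provided R_50_NAS weights with a reduced test batch size would look roughly as follows (assuming the weights are saved as models/R-50-NAS.pth and the standard maskrcnn-benchmark MODEL.WEIGHT / TEST.IMS_PER_BATCH command-line overrides):

python -m torch.distributed.launch \
    --nproc_per_node=1 \
    tools/test_net.py --config-file "configs/search/R_50_NAS_densebox.yaml" \
    MODEL.WEIGHT "models/R-50-NAS.pth" TEST.IMS_PER_BATCH 1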

For your convenience, we provide the following trained models (more models are coming soon).

Model               Multi-scale training   AP (minival)   AP (test-dev)   Link       Fetch Code
Mobile_NAS          No                     32.6           33.1            download   3dm9
Mobile_NAS_head     No                     34.4           34.7            download   -
R_50_NAS            No                     38.5           38.9            download   f88u
R_50_NAS_head       No                     39.5           39.8            download   -
R_101_NAS           Yes                    42.1           42.5            download   euuz
R_101_NAS_head      Yes                    42.8           43.0            download   -
R_101_X_32x8d_NAS   Yes                    43.4           43.7            download   4cci

Attention: if the download links in the table above do not work, please use the following alternative links instead: Mobile_NAS, Mobile_NAS_head, R_50_NAS, R_50_NAS_head, R_101_NAS, R_101_NAS_head, R_101_X_32x8d_NAS.

All results are obtained with a single model and without any test-time data augmentation such as multi-scale testing or flipping.

Contributing to the project

Any pull requests or issues are welcome.

Citations

Please consider citing our paper in your publications if the project helps your research. The BibTeX reference is as follows.

@InProceedings{Wang_2020_CVPR,
    author = {Wang, Ning and Gao, Yang and Chen, Hao and Wang, Peng and Tian, Zhi and Shen, Chunhua and Zhang, Yanning},
    title = {NAS-FCOS: Fast Neural Architecture Search for Object Detection},
    booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month = {June},
    year = {2020}
}

License

For academic use, this project is licensed under the 2-clause BSD License - see the LICENSE file for details. For commercial use, please contact the authors.

Comments
  • No output when executing the FPN module and the code exited without any error.

    Hi, I have tested your code on the COCO dataset. Whether training or testing, there is no output when executing the FPN module, and the code exits without any errors. The relevant code is in single_stage_detector.py (screenshot omitted). There is no output and the code exits without any errors or warnings (screenshot omitted).

    Training script: python -m torch.distributed.launch --nproc_per_node=2 --master_port=1213 tools/train_net.py --config-file "configs/search/R_50_NAS_retinanet.yaml"
    Testing script: python -m torch.distributed.launch --nproc_per_node=1 tools/test_net.py --config-file "configs/search/R_50_NAS_retinanet.yaml"

    Have you met this problem? Could you explain the reason? Thanks!

    question 
    opened by bobzhang123 5
  • Could you please upload the pretrained model on Google Drive?

    Thank you for sharing your great works!

    I'm trying hard, but it seems almost impossible to download from Baidu for people outside China, as even creating an account on that site is difficult. So I would like to ask you to share the pretrained models on Dropbox or Google Drive.

    Thank you so much!

    opened by dedoogong 2
  • The trained model you provided is NAS FPN model, not NAS FPN + head model

    Hi, I found that the trained model you provided (R_50_NAS) is the NAS FPN model, not the NAS FPN + head model. In the paper, you say "It turns out that our NAS-FCOS model still achieves better performance (AP = 38.9 with FPN search only, and AP = 39.8 with both FPN and Head searched) than the DeformFPN-FCOS model (AP = 38.4) under this circumstance." So, is R_50_NAS the NAS-FCOS model with FPN search only?

    opened by iimmortall 1
  • Inference speed of your best model (NAS-FCOS, w/ improvement, X-64x4d-101, Box AP = 46.1)

    Hi, thanks for your work! I wonder what the GPU latency (e.g., on a TITAN Xp) of your best model (NAS-FCOS, w/ improvement, X-64x4d-101, Box AP = 46.1 on COCO test-dev) is.

    opened by Yuxin-CV 0
  • Requirement of data on device

    Hello,

    I was wondering whether it is necessary to have the dataset on the device. For example, to train a model for a resource-constrained device (say, a TX2 or a mobile phone), does the implementation require the dataset (COCO) to be present on the target device?

    Could you suggest a possible solution for a situation where we cannot load the entire dataset on a resource-constrained device?

    opened by varghesealex90 0
  • How to install this project?

    I read the installation instructions you provided, which describe the installation process of maskrcnn-benchmark. How do I install your open-source project and search for a network on my own dataset for object detection? Thank you very much for answering in your spare time.

    opened by never-to-never 0