[ICLR 2021 Oral] Rethinking Architecture Selection in Differentiable NAS

Overview

DARTS-PT

Code accompanying the ICLR 2021 paper: Rethinking Architecture Selection in Differentiable NAS
Ruochen Wang, Minhao Cheng, Xiangning Chen, Xiaocheng Tang, Cho-Jui Hsieh

Requirements

Python >= 3.7
PyTorch >= 1.5
tensorboard == 2.0.1
gpustat

Experiments on NAS-Bench-201

Dataset preparation

Download NAS-Bench-201-v1_0-e61699.pth and save it under the ./data folder.

Install NAS-Bench-201 via pip:

pip install nas-bench-201
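
To sanity-check the setup, here is a minimal sketch (not part of this repo) that loads the benchmark file with the pip-installed nas_201_api package and queries one architecture; whether the hp argument is needed depends on the installed nas_201_api version ('12' or '200'), as discussed in the comments below.

# Minimal NAS-Bench-201 sanity check (sketch, not part of this repo's code).
from nas_201_api import NASBench201API as API

# assumes the benchmark file was saved under ./data as described above
api = API('./data/NAS-Bench-201-v1_0-e61699.pth')

print(len(api))   # 15625 candidate architectures
api.show(0)       # print the stored results for the first architecture

# Any valid architecture string can be queried; newer nas_201_api releases
# additionally require hp='12' or hp='200' (the training-epoch setting).
arch_str = '|nor_conv_3x3~0|+|nor_conv_3x3~0|nor_conv_3x3~1|+|skip_connect~0|nor_conv_3x3~1|nor_conv_3x3~2|'
print(api.query_by_arch(arch_str, hp='200'))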

Running DARTS-PT on NAS-Bench-201

Supernet training

The ckpts and logs will be saved to ./experiments/nasbench201/search-{script_name}-{seed}/. For example, the ckpt dir would be ./experiments/nasbench201/search-darts-201-1/ for the command below.

bash darts-201.sh

Architecture selection (projection)

The projection script loads ckpts from experiments/nasbench201/{resume_expid}.

bash darts-proj-201.sh --resume_epoch 100 --resume_expid search-darts-201-1
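
For reference, a small sketch (not the repo's own code) of how --resume_expid and --resume_epoch relate to the supernet-training output convention above; the directory layout follows this README, and nothing is assumed about the checkpoint filenames inside it.

# Sketch: resolving --resume_expid / --resume_epoch from the naming convention
# described above (illustrative only, not part of the repo).
import os

script_name, seed = 'darts-201', 1
resume_expid = f'search-{script_name}-{seed}'              # -> search-darts-201-1
exp_dir = os.path.join('experiments', 'nasbench201', resume_expid)
resume_epoch = 100                                         # supernet epoch to resume from

assert os.path.isdir(exp_dir), f'run supernet training first: {exp_dir} not found'
print(f'projection resumes from {exp_dir} at epoch {resume_epoch}')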

Fix-alpha version (blank-pt):

bash blank-201.sh
bash blank-proj-201.sh --resume_expid search-blank-201-1

Experiments on S1-S4

Supernet training

The ckpts and logs will be saved to ./experiments/sota/{dataset}/search-{script_name}-{space_id}-{seed}/. For example, ./experiments/sota/cifar10/search-darts-sota-s3-1/ (script: darts-sota, space: s3, seed: 1).

bash darts-sota.sh --space [s1/s2/s3/s4] --dataset [cifar10/cifar100/svhn]

Architecture selection (projection)

bash darts-proj-sota.sh --space [s1/s2/s3/s4] --dataset [cifar10/cifar100/svhn] --resume_expid search-darts-sota-[s1/s2/s3/s4]-2

Fix-alpha version (blank-pt):

bash blank-sota.sh --space [s1/s2/s3/s4] --dataset [cifar10/cifar100/svhn]
bash blank-proj-201.sh --space [s1/s2/s3/s4] --dataset [cifar10/cifar100/svhn] --resume_expid search-blank-sota-[s1/s2/s3/s4]-2

Evaluation

bash eval.sh --arch [genotype_name]
bash eval-c100.sh --arch [genotype_name]
bash eval-svhn.sh --arch [genotype_name]
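
The value given to --arch is the name of a genotype entry. In DARTS-derived code bases, the usual convention is to paste the Genotype printed by the projection step into the genotypes file under a name of your choice and pass that name to the eval script; the sketch below illustrates this convention (the exact file path, e.g. sota/cnn/genotypes.py, and the variable name are assumptions, not taken verbatim from this repo).

# Illustrative genotype registration in a DARTS-style genotypes file
# (hypothetical entry name; paste the Genotype printed by projection here).
from collections import namedtuple

Genotype = namedtuple('Genotype', 'normal normal_concat reduce reduce_concat')

my_darts_pt_arch = Genotype(
    normal=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1),
            ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1)],
    normal_concat=range(2, 6),
    reduce=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1),
            ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 2)],
    reduce_concat=range(2, 6))

# DARTS-style eval scripts typically resolve the name like:
#   genotype = eval('genotypes.%s' % args.arch)   # with --arch my_darts_pt_arch

With such an entry in place, evaluation would then be run as, e.g., bash eval.sh --arch my_darts_pt_arch (hypothetical name).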

Experiments on DARTS Space

Supernet training

bash darts-sota.sh

Architecture selection (projection)

bash darts-proj-sota.sh --resume_expid search-darts-sota-s5-2

Fix-alpha version (blank-pt)

bash blank-sota.sh
bash blank-proj-201.sh --resume_expid search-blank-sota-s5-2

Evaluation

bash eval.sh --arch [genotype_name]

Citation

@inproceedings{
  ruochenwang2021dartspt,
  title={Rethinking Architecture Selection in Differentiable NAS},
  author={Ruochen Wang and Minhao Cheng and Xiangning Chen and Xiaocheng Tang and Cho-Jui Hsieh},
  booktitle={International Conference on Learning Representations (ICLR)},
  year={2021}
}
Comments
  • Argument to be given for --arch in evaluating sota.

    For evaluation experiments on S1-S4, we are supposed to run 'bash eval.sh --arch [genotype_name]'. What is genotype_name here? Suppose I got a genotype such as Genotype(normal=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1)], normal_concat=range(2, 6), reduce=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 2)], reduce_concat=range(2, 6)). How do I map this to a genotype name?

    opened by pcvishak 2
  • Which version of nas-bench-201 does the code use?

    https://github.com/ruocwang/darts-pt/blob/bf3ffdd03f9a4773fee015d5c73a8a38cd0859b7/nasbench201/train_search.py#L394

    Here the "hp" argument seems to be missing from api.query_by_arch(), according to

    https://github.com/D-X-Y/NAS-Bench-201/blob/8558547969c131f75af2725869ff1ece98e98f23/nas_201_api/api_201.py#L118

    Fixed after adding the hp argument (either '12' or '200'): result = api.query_by_arch(genotype, hp='200')

    opened by yuezhixiong 2
  • Order to run code, doubts on projection.py

    Hello! First of all, I would like to thank you both for the very clear paper and for providing the code, which does not often happen. I have a doubt about the pipeline to follow to run a complete NAS algorithm.

    As far as I understood, the first step is surely to run train_search.py with the specified parameters. After this, what should I do? I see there is a script darts-proj-sota.sh that calls the file projection.py, but it is not clear to me what it does, or to which step of NAS it corresponds. Is that step mandatory? Then, how do I build the final network to train from scratch and evaluate? I saw that after calling darts-proj-sota.sh the final output is a Genotype. I guess I need to insert that genotype in the python file genotypes.py and run train.py, but I'm not sure what is going on, whether projection.py is a mandatory step, etc.

    I hope I expressed my doubts clearly; if you could provide some more hints on a complete pipeline to follow, it would be great. Thanks a lot!

    opened by casarinsof 1
  • The accuracy on cifar10 on NAS-Bench-201

    Thanks for your work! I ran DARTS-PT on NAS-Bench-201 (cifar10) with darts-201.sh and darts-proj-201.sh, but I didn't get the high accuracy reported in the paper. The accuracy I got was 85.03. Is this result normal? If not, can you give me some advice?

    opened by zhengjian2322 1
  • About the generation of structures

    Hi Ruochen, while I was learning your code, some errors came up before a new epoch started training (see the attached screenshot). Please give me some tips about this, thank you~

    opened by Liu-pf 10
  • Performance in Subspace

    Hi, thanks for the good work. Regarding the performance in the subspaces: did you run each experiment four times with 4 random seeds and select the best to report, just as previous work did? This part is not illustrated in the paper; I just want to check whether that is right.

    opened by zerocostpt 1