NATS-Bench: Benchmarking NAS Algorithms for Architecture Topology and Size

Overview

NATS-Bench: Benchmarking NAS Algorithms for Architecture Topology and Size

Xuanyi Dong, Lu Liu, Katarzyna Musial, Bogdan Gabrys

in IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2021

Abstract: Neural architecture search (NAS) has attracted much attention and has been shown to bring tangible benefits in a large number of applications over the past few years. Network topology and network size have been regarded as two of the most important aspects of the performance of deep learning models, and the community has produced many search algorithms for both aspects of neural architectures. However, the performance gains from these search algorithms are achieved under different search spaces and training setups. This makes the overall performance of the algorithms incomparable and the improvement contributed by a sub-module of the search model unclear. In this paper, we propose NATS-Bench, a unified benchmark on searching for both topology and size, for (almost) any up-to-date NAS algorithm. NATS-Bench includes the search space of 15,625 neural cell candidates for architecture topology and 32,768 for architecture size on three datasets. We analyze the validity of our benchmark in terms of various criteria and the performance comparison of all candidates in the search space. We also show the versatility of NATS-Bench by benchmarking 13 recent state-of-the-art NAS algorithms on it. All logs and diagnostic information collected using the same setup for each candidate are provided. This facilitates a much larger community of researchers to focus on developing better NAS algorithms in a more comparable and computationally efficient environment.

You can install the NATS-Bench library with pip install nats_bench, or install it from source via python setup.py install.

If you want to re-create NATS-Bench from scratch or reproduce the benchmarked results, please use AutoDL-Projects and see these instructions.

If you have questions, please ask here or email me :)

This figure shows the main difference between NATS-Bench, NAS-Bench-101, and NAS-Bench-201. The topology search space ($\mathcal{S}_t$) in NATS-Bench is the same as in NAS-Bench-201, but we upgrade it with results from more runs for each architecture candidate, and the benchmarked NAS algorithms use better hyperparameters.

Preparation and Download

Step-1: download the raw vision datasets (you can skip this step if you neither use weight-sharing NAS nor re-create NATS-Bench).

In NATS-Bench, we (create and) use three image datasets -- CIFAR-10, CIFAR-100, and ImageNet16-120. For more details, please see Sec-3.2 in the NATS-Bench paper. To download these three datasets, please find them on Google Drive. To create the ImageNet16-120 PyTorch dataset, please use the ImageNet16 class defined in AutoDL-Projects/lib/datasets/ImageNet16:

train_data = ImageNet16(root, True , train_transform, 120)
test_data  = ImageNet16(root, False, test_transform , 120)
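
The train_transform and test_transform above are standard torchvision transforms. A minimal sketch of a possible definition is given below; the exact normalization constants for ImageNet16-120 live in AutoDL-Projects, so the Normalize step is omitted here and should be taken from that repo:

import torchvision.transforms as transforms

# A sketch only: add transforms.Normalize(mean, std) with the constants
# from AutoDL-Projects for exact reproduction.
train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomCrop(16, padding=2),  # ImageNet16-120 images are 16x16
    transforms.ToTensor(),
])
test_transform = transforms.Compose([transforms.ToTensor()])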

Step-2: download benchmark files of NATS-Bench.

The latest benchmark files of NATS-Bench can be downloaded from Google Drive. After downloading NATS-[tss/sss]-[version]-[md5sum]-simple.tar, please uncompress it with tar xvf [file_name]. We highly recommend putting the downloaded benchmark file (NATS-sss-v1_0-50262.pickle.pbz2 / NATS-tss-v1_0-3ffb9.pickle.pbz2) or the uncompressed archive (NATS-sss-v1_0-50262-simple / NATS-tss-v1_0-3ffb9-simple) into $TORCH_HOME. In this way, our API will automatically find the benchmark files, which is convenient for users. Otherwise, you need to specify the path manually when creating the benchmark instance.

The history of benchmark files is shown below; tss indicates the topology search space and sss indicates the size search space. The benchmark file is used when creating the NATS-Bench instance with fast_mode=False; the archive is used when fast_mode=True, where the archive is a directory containing 15,625 files for tss or 32,768 files for sss, each holding all the information for a specific architecture candidate. The full archive is similar to the archive, but each of its files additionally contains the trained weights. Since the full archive is very large, we use split -b 30G file_name file_name to split it into multiple 30GB chunks. To merge the chunks back into the original full archive, you can use cat file_name* > file_name.

| Date | benchmark file (tss) | archive (tss) | full archive (tss) | benchmark file (sss) | archive (sss) | full archive (sss) |
| --- | --- | --- | --- | --- | --- | --- |
| 2020.08.31 | NATS-tss-v1_0-3ffb9.pickle.pbz2 | NATS-tss-v1_0-3ffb9-simple.tar | NATS-tss-v1_0-3ffb9-full | NATS-sss-v1_0-50262.pickle.pbz2 | NATS-sss-v1_0-50262-simple.tar | NATS-sss-v1_0-50262-full |
| 2021.04.22 (Baidu-Pan) | NATS-tss-v1_0-3ffb9.pickle.pbz2 (code: 8duj) | NATS-tss-v1_0-3ffb9-simple.tar (code: tu1e) | NATS-tss-v1_0-3ffb9-full (code: ssub) | NATS-sss-v1_0-50262.pickle.pbz2 (code: za2h) | NATS-sss-v1_0-50262-simple.tar (code: e4t9) | NATS-sss-v1_0-50262-full (code: htif) |

These benchmark files (without pretrained weights) can also be downloaded from Dropbox, OneDrive or Baidu-Pan (extract code: h6pm).

For the full checkpoints in NATS-*ss-*-full, we split each file into multiple parts (NATS-*ss-*-full.tara*) since they are too large to upload; each part is about 30GB. For Baidu-Pan, which restricts the maximum size of each file, we further split each NATS-*ss-*-full.tara* into NATS-*ss-*-full.tara*-aa and NATS-*ss-*-full.tara*-ab. All splits are created with the split command.

Note: if you encounter quota-exceeded errors when downloading from Google Drive, please try to (1) log in to your personal Google account, (2) right-click and copy the files to your personal Google Drive, and (3) download them from your personal Google Drive.
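
To make the two loading modes described above concrete, here is a minimal sketch (the file names are the v1.0 ones from the table above; prepend your own path if the files are not under $TORCH_HOME):

from nats_bench import create

# fast_mode=False: load the single pickled benchmark file entirely into memory
api = create('NATS-tss-v1_0-3ffb9.pickle.pbz2', 'tss', fast_mode=False, verbose=False)

# fast_mode=True: lazily read per-architecture files from the uncompressed archive
api = create('NATS-tss-v1_0-3ffb9-simple', 'tss', fast_mode=True, verbose=False)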

Usage

See more examples at notebooks.

1. Create the benchmark instance:

from nats_bench import create
# Create the API instance for the size search space in NATS
api = create(None, 'sss', fast_mode=True, verbose=True)

# Create the API instance for the topology search space in NATS
api = create(None, 'tss', fast_mode=True, verbose=True)

2. Query the performance:

# Query the loss / accuracy / time for the 1234-th candidate architecture on CIFAR-10
# info is a dict, where you can easily figure out the meaning by key
info = api.get_more_info(1234, 'cifar10')

# Query the flops, params, latency. info is a dict.
info = api.get_cost_info(12, 'cifar10')

# Simulate the training of the 1224-th candidate:
validation_accuracy, latency, time_cost, current_total_time_cost = api.simulate_train_eval(1224, dataset='cifar10', hp='12')
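
Building on simulate_train_eval, the sketch below runs a simulated random search under a time budget; the budget value is an arbitrary example, and api.reset_time() restarts the simulated clock used by simulate_train_eval:

import random

api.reset_time()  # restart the simulated clock
best_acc, best_index, total_time = -1.0, -1, 0.0
while total_time < 20000:  # example budget in seconds
    index = random.randint(0, len(api) - 1)  # len(api) = number of candidates
    acc, latency, time_cost, total_time = api.simulate_train_eval(index, dataset='cifar10', hp='12')
    if acc > best_acc:
        best_acc, best_index = acc, index
print(best_index, best_acc)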

3. Create the instance of an architecture candidate in NATS-Bench:

# Create the instance of the 12-th candidate for CIFAR-10.
# To keep the NATS-Bench repo concise, we do not include any model-related code here, because it relies on PyTorch.
# The package of [models] is defined at https://github.com/D-X-Y/AutoDL-Projects,
#   so one needs to install and import that package first.
import xautodl
from xautodl.models import get_cell_based_tiny_net
config = api.get_net_config(12, 'cifar10')
network = get_cell_based_tiny_net(config)

# Load the pre-trained weights: params is a dict, where the key is the seed and value is the weights.
params = api.get_net_param(12, 'cifar10', None)
network.load_state_dict(next(iter(params.values())))
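
As a quick sanity check after loading the weights, one can feed a CIFAR-10-shaped batch through the restored network (a sketch assuming PyTorch is installed; the cell-based networks from AutoDL-Projects return a (features, logits) tuple):

import torch

network.eval()
with torch.no_grad():
    _, logits = network(torch.randn(1, 3, 32, 32))  # CIFAR-10 input shape
print(logits.shape)  # expect torch.Size([1, 10]) for CIFAR-10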

4. Others:

# Clear the parameters of the 12-th candidate.
api.clear_params(12)

# Reload all information of the 12-th candidate.
api.reload(index=12)

Please see api_test.py for more examples.

from nats_bench import api_test
api_test.test_nats_bench_tss('NATS-tss-v1_0-3ffb9-simple')
api_test.test_nats_bench_sss('NATS-sss-v1_0-50262-simple')

How to Re-create NATS-Bench from Scratch

You need to use the AutoDL-Projects repo to re-create NATS-Bench from scratch.

The Size Search Space

The following command will train all architecture candidates in the size search space for 90 epochs with random seed 777. If you want to use a different number of training epochs, please replace 90 with that number, such as 01 or 12.

bash ./scripts/NATS-Bench/train-shapes.sh 00000-32767 90 777

The checkpoints of all candidates are located in output/NATS-Bench-size by default.

After training these candidate architectures, please use the following command to re-organize all checkpoints into the official benchmark file.

python exps/NATS-Bench/sss-collect.py

The Topology Search Space

The following command will train all architecture candidates in the topology search space for 200 epochs with random seeds 777, 888, and 999. If you want to use a different number of training epochs, please replace 200 with that number, such as 12.

bash scripts/NATS-Bench/train-topology.sh 00000-15624 200 '777 888 999'

The checkpoints of all candidates are located in output/NATS-Bench-topology by default.

After training these candidate architectures, please use the following command to re-organize all checkpoints into the official benchmark file.

python exps/NATS-Bench/tss-collect.py

To Reproduce 13 Baseline NAS Algorithms in NATS-Bench

You need to use the AutoDL-Projects repo to run the 13 baseline NAS methods. Here is a brief introduction on how to run each algorithm (NATS-algos).

Reproduce NAS methods on the topology search space

Please use the following commands to run different NAS methods on the topology search space:

Four multi-trial based methods:
python ./exps/NATS-algos/reinforce.py       --dataset cifar100 --search_space tss --learning_rate 0.01
python ./exps/NATS-algos/regularized_ea.py  --dataset cifar100 --search_space tss --ea_cycles 200 --ea_population 10 --ea_sample_size 3
python ./exps/NATS-algos/random_wo_share.py --dataset cifar100 --search_space tss
python ./exps/NATS-algos/bohb.py            --dataset cifar100 --search_space tss --num_samples 4 --random_fraction 0.0 --bandwidth_factor 3

DARTS (first order):
python ./exps/NATS-algos/search-cell.py --dataset cifar10  --data_path $TORCH_HOME/cifar.python --algo darts-v1
python ./exps/NATS-algos/search-cell.py --dataset cifar100 --data_path $TORCH_HOME/cifar.python --algo darts-v1
python ./exps/NATS-algos/search-cell.py --dataset ImageNet16-120 --data_path $TORCH_HOME/cifar.python/ImageNet16 --algo darts-v1

DARTS (second order):
python ./exps/NATS-algos/search-cell.py --dataset cifar10  --data_path $TORCH_HOME/cifar.python --algo darts-v2
python ./exps/NATS-algos/search-cell.py --dataset cifar100 --data_path $TORCH_HOME/cifar.python --algo darts-v2
python ./exps/NATS-algos/search-cell.py --dataset ImageNet16-120 --data_path $TORCH_HOME/cifar.python/ImageNet16 --algo darts-v2

GDAS:
python ./exps/NATS-algos/search-cell.py --dataset cifar10  --data_path $TORCH_HOME/cifar.python --algo gdas
python ./exps/NATS-algos/search-cell.py --dataset cifar100 --data_path $TORCH_HOME/cifar.python --algo gdas
python ./exps/NATS-algos/search-cell.py --dataset ImageNet16-120 --data_path $TORCH_HOME/cifar.python/ImageNet16 --algo gdas

SETN:
python ./exps/NATS-algos/search-cell.py --dataset cifar10  --data_path $TORCH_HOME/cifar.python --algo setn
python ./exps/NATS-algos/search-cell.py --dataset cifar100 --data_path $TORCH_HOME/cifar.python --algo setn
python ./exps/NATS-algos/search-cell.py --dataset ImageNet16-120 --data_path $TORCH_HOME/cifar.python/ImageNet16 --algo setn

Random Search with Weight Sharing:
python ./exps/NATS-algos/search-cell.py --dataset cifar10  --data_path $TORCH_HOME/cifar.python --algo random
python ./exps/NATS-algos/search-cell.py --dataset cifar100 --data_path $TORCH_HOME/cifar.python --algo random
python ./exps/NATS-algos/search-cell.py --dataset ImageNet16-120 --data_path $TORCH_HOME/cifar.python/ImageNet16 --algo random

ENAS:
python ./exps/NATS-algos/search-cell.py --dataset cifar10  --data_path $TORCH_HOME/cifar.python --algo enas --arch_weight_decay 0 --arch_learning_rate 0.001 --arch_eps 0.001
python ./exps/NATS-algos/search-cell.py --dataset cifar100 --data_path $TORCH_HOME/cifar.python --algo enas --arch_weight_decay 0 --arch_learning_rate 0.001 --arch_eps 0.001
python ./exps/NATS-algos/search-cell.py --dataset ImageNet16-120 --data_path $TORCH_HOME/cifar.python/ImageNet16 --algo enas --arch_weight_decay 0 --arch_learning_rate 0.001 --arch_eps 0.001

Reproduce NAS methods on the size search space

Please use the following commands to run different NAS methods on the size search space:

Four multi-trial based methods:
python ./exps/NATS-algos/reinforce.py       --dataset cifar100 --search_space sss --learning_rate 0.01
python ./exps/NATS-algos/regularized_ea.py  --dataset cifar100 --search_space sss --ea_cycles 200 --ea_population 10 --ea_sample_size 3
python ./exps/NATS-algos/random_wo_share.py --dataset cifar100 --search_space sss
python ./exps/NATS-algos/bohb.py            --dataset cifar100 --search_space sss --num_samples 4 --random_fraction 0.0 --bandwidth_factor 3


Run Transformable Architecture Search (TAS), proposed in Network Pruning via Transformable Architecture Search, NeurIPS 2019:

python ./exps/NATS-algos/search-size.py --dataset cifar10  --data_path $TORCH_HOME/cifar.python --algo tas --rand_seed 777
python ./exps/NATS-algos/search-size.py --dataset cifar100 --data_path $TORCH_HOME/cifar.python --algo tas --rand_seed 777
python ./exps/NATS-algos/search-size.py --dataset ImageNet16-120 --data_path $TORCH_HOME/cifar.python/ImageNet16 --algo tas --rand_seed 777


Run the channel search strategy in FBNet-V2 -- masking + Gumbel-Softmax:

python ./exps/NATS-algos/search-size.py --dataset cifar10  --data_path $TORCH_HOME/cifar.python --algo mask_gumbel --rand_seed 777
python ./exps/NATS-algos/search-size.py --dataset cifar100 --data_path $TORCH_HOME/cifar.python --algo mask_gumbel --rand_seed 777
python ./exps/NATS-algos/search-size.py --dataset ImageNet16-120 --data_path $TORCH_HOME/cifar.python/ImageNet16 --algo mask_gumbel --rand_seed 777


Run the channel search strategy in TuNAS -- masking + sampling:

python ./exps/NATS-algos/search-size.py --dataset cifar10  --data_path $TORCH_HOME/cifar.python --algo mask_rl --arch_weight_decay 0 --rand_seed 777 --use_api 0
python ./exps/NATS-algos/search-size.py --dataset cifar100 --data_path $TORCH_HOME/cifar.python --algo mask_rl --arch_weight_decay 0 --rand_seed 777
python ./exps/NATS-algos/search-size.py --dataset ImageNet16-120 --data_path $TORCH_HOME/cifar.python/ImageNet16 --algo mask_rl --arch_weight_decay 0 --rand_seed 777

Final Discovered Architectures for Each Algorithm

The architecture index can be found by using api.query_index_by_arch(architecture_string).
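
For example, the sketch below looks up one of the strings listed in this section and queries its benchmarked performance (it assumes a 'tss' API instance created as in the Usage section above):

arch = '|skip_connect~0|+|skip_connect~0|skip_connect~1|+|skip_connect~0|skip_connect~1|skip_connect~2|'
index = api.query_index_by_arch(arch)
info = api.get_more_info(index, 'cifar10', hp='200')
print(index, info['test-accuracy'])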

The final discovered architectures on CIFAR-10:

DARTS (first order):
|skip_connect~0|+|skip_connect~0|skip_connect~1|+|skip_connect~0|skip_connect~1|skip_connect~2|
|skip_connect~0|+|skip_connect~0|skip_connect~1|+|skip_connect~0|skip_connect~1|skip_connect~2|
|skip_connect~0|+|skip_connect~0|skip_connect~1|+|skip_connect~0|skip_connect~1|skip_connect~2|

DARTS (second order):
|skip_connect~0|+|skip_connect~0|skip_connect~1|+|skip_connect~0|skip_connect~1|skip_connect~2|
|skip_connect~0|+|skip_connect~0|skip_connect~1|+|skip_connect~0|skip_connect~1|skip_connect~2|
|skip_connect~0|+|skip_connect~0|skip_connect~1|+|skip_connect~0|skip_connect~1|skip_connect~2|

GDAS:
|nor_conv_3x3~0|+|nor_conv_3x3~0|none~1|+|nor_conv_1x1~0|nor_conv_3x3~1|nor_conv_3x3~2|
|nor_conv_3x3~0|+|nor_conv_3x3~0|none~1|+|nor_conv_3x3~0|nor_conv_3x3~1|nor_conv_3x3~2|
|avg_pool_3x3~0|+|nor_conv_3x3~0|skip_connect~1|+|nor_conv_3x3~0|nor_conv_1x1~1|nor_conv_1x1~2|

The final discovered architectures on CIFAR-100:

DARTS (V1):
|none~0|+|skip_connect~0|none~1|+|skip_connect~0|nor_conv_1x1~1|none~2|
|none~0|+|skip_connect~0|none~1|+|skip_connect~0|nor_conv_1x1~1|none~2|
|skip_connect~0|+|skip_connect~0|none~1|+|skip_connect~0|nor_conv_1x1~1|nor_conv_3x3~2|

DARTS (V2):
|none~0|+|skip_connect~0|none~1|+|skip_connect~0|nor_conv_1x1~1|skip_connect~2|
|skip_connect~0|+|nor_conv_3x3~0|none~1|+|skip_connect~0|none~1|none~2|
|skip_connect~0|+|nor_conv_1x1~0|none~1|+|nor_conv_3x3~0|skip_connect~1|none~2|

GDAS:
|nor_conv_3x3~0|+|nor_conv_1x1~0|none~1|+|avg_pool_3x3~0|nor_conv_3x3~1|nor_conv_3x3~2|
|avg_pool_3x3~0|+|nor_conv_1x1~0|none~1|+|nor_conv_3x3~0|avg_pool_3x3~1|nor_conv_1x1~2|
|avg_pool_3x3~0|+|nor_conv_3x3~0|none~1|+|nor_conv_3x3~0|nor_conv_1x1~1|nor_conv_1x1~2|

The final discovered architectures on ImageNet16-120:

DARTS (V1):
|none~0|+|skip_connect~0|none~1|+|skip_connect~0|none~1|nor_conv_3x3~2|
|none~0|+|skip_connect~0|none~1|+|skip_connect~0|none~1|nor_conv_3x3~2|
|none~0|+|skip_connect~0|none~1|+|skip_connect~0|none~1|nor_conv_1x1~2|

DARTS (V2):
|none~0|+|skip_connect~0|none~1|+|skip_connect~0|none~1|skip_connect~2|

GDAS:
|none~0|+|none~0|none~1|+|nor_conv_3x3~0|none~1|none~2|
|none~0|+|none~0|none~1|+|nor_conv_3x3~0|none~1|none~2|
|none~0|+|none~0|none~1|+|nor_conv_3x3~0|none~1|none~2|

Others

We use black as the Python code formatter. Please run black . -l 120.

Citation

If you find that NATS-Bench helps your research, please consider citing it:

@article{dong2021nats,
  title   = {{NATS-Bench}: Benchmarking NAS Algorithms for Architecture Topology and Size},
  author  = {Dong, Xuanyi and Liu, Lu and Musial, Katarzyna and Gabrys, Bogdan},
  doi     = {10.1109/TPAMI.2021.3054824},
  journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)},
  year    = {2021},
  note    = {\mbox{doi}:\url{10.1109/TPAMI.2021.3054824}}
}
@inproceedings{dong2020nasbench201,
  title     = {{NAS-Bench-201}: Extending the Scope of Reproducible Neural Architecture Search},
  author    = {Dong, Xuanyi and Yang, Yi},
  booktitle = {International Conference on Learning Representations (ICLR)},
  url       = {https://openreview.net/forum?id=HJxyZkBKDr},
  year      = {2020}
}
Comments
  • How to use cifar10 in NASBench201



    • topology search space in NATS-Bench

Hi, I have some questions about how to use the val and test acc when searching on the cifar10 subset.

First, when training an architecture search algorithm, we use the val acc as the signal during the search procedure, and to evaluate the algorithm, the searched architecture's test acc is queried as the result. This works for the cifar100 and imagenet16 subsets since they offer both val and test acc (I really don't know what valtest_acc is for).

For cifar10, there are two subsets: cifar10-val and cifar10. The former offers val and test acc, so the above train-and-eval pipeline still works, but the latter only offers test acc, so when using the latter subset, I cannot build the search signal since there is no val acc.

The above information is given by api.get_more_info.

Many thanks

    opened by AlbertiPot 9
  • FLOPS data in nasbench201 are different from flops count tools like thop and ptflops


    FLOPS counts

    • about the topology search space in NATS-Bench?

Hi, the question is that when I use flops-counting tools to calculate the flops of the models in nasbench201, the results are always larger than the flops offered by the 201 dataset. For example, for arch index 0: cifar10-valid flops 15.64737, cifar100 15.65322, and imagenet16 flops 3.91948; however, the thop tool gives 16.464576, 16.470336, and 4.123712, and the ptflops tool gives 16.810634, 16.816484, and 4.210296. Both the 201 dataset and the above tools give the same params counts, though.

Are there any operations that the model does not count?

Many thanks for your excellent work and code!

    opened by AlbertiPot 6
• The details of results in the paper


Hi, I want to use NATS-Bench, but I still have some questions about the results of the benchmarked algorithms given in the paper. (1) hp=200 in tss and hp=90 in sss, right? (2) Are the results top-1, top-5, or something else? (3) Sorry to trouble you, but could you write a usage example showing how to get the correct metrics when I search for a new architecture? Thanks!

    opened by Littleyezi 6
  • change the candidate's input resolution


Hi, I would like to use your NATS-Bench for datasets other than cifar and ImageNet, with higher resolutions like 256*256. Is it possible to sample a network as you did for cifar below and then change the cells' resolution?

    import xautodl, nats_bench
    
    from nats_bench import create
    from xautodl.models import get_cell_based_tiny_net
    
    api = create(None, 'tss', fast_mode=True, verbose=True)
    
    config = api.get_net_config(12, 'cifar10')
    network = get_cell_based_tiny_net(config)
    

    #then a code to change the input resolution to the target size of 256*256

    Thanks for your response

    question 
    opened by Mshz2 5
• How to generate an architecture model with torch


    In the README.MD, there is only one example to generate an architecture at index 12 with get_cell_based_tiny_net.

    But in the codebase, I found there are many functions including:

    1. get_cell_based_tiny_net
    2. obtain_model

Which is the real model at index 12 (the one whose performance is recorded in get_more_info)?

If I want to obtain the model/architecture at index 12 whose performance is exactly the one measured and recorded in get_more_info, which method should I use?

    Thank you

    opened by NLGithubWP 4
  • The results of validation/test accuracy in NATS-Bench paper


Hi! I want to get the validation and test accuracy as in Table 4 of the "NATS-Bench: Benchmarking NAS Algorithms for Architecture Topology and Size" paper. I just want to check whether the following commands are correct:

After finishing the architecture search (I'm studying the weight-sharing approach), I get the genotype. Then, I get the arch_index via arch_index = api.query_index_by_arch('......genotype here......'). Therefore:

for CIFAR-10 validation accuracy: info = api.get_more_info(arch_index, 'cifar10-valid', hp=200)
for CIFAR-10 test accuracy: info = api.get_more_info(arch_index, 'cifar10', hp=200)
for CIFAR-100 validation/test accuracy: info = api.get_more_info(arch_index, 'cifar100', hp=200)
for ImageNet16-120 validation/test accuracy: info = api.get_more_info(arch_index, 'ImageNet16-120', hp=200)

    Following are some points I want to check:

1. Do the CIFAR-10 test accuracy results use the train + valid sets for training and the test set for testing? Thus, should I use 'cifar10' instead of 'cifar10-valid' to get the test accuracy?
2. What does valtest-accuracy mean for CIFAR-100 and ImageNet16-120?
3. I get an architecture with a validation accuracy higher than the optimal values reported in the paper for ImageNet16-120. Why?

    Great Thanks!

    question 
    opened by Tommy787576 4
• get_cost_info(hp="200") returns weird value for some models in topology search space

Describe the bug: For some architectures, get_cost_info() returns different flops/params for hp="12" and hp="200" in the topology search space. For example, the number of parameters (for CIFAR-10) of architecture |skip_connect~0|+|none~0|nor_conv_3x3~1|+|avg_pool_3x3~0|nor_conv_3x3~1|nor_conv_3x3~2| is 0.802426 for hp="12", but for hp="200" the value is 0.6403993333333333.

    I've found many other architectures in which the same behavior occurs, but some specific examples are as follows:

    • |nor_conv_1x1~0|+|nor_conv_3x3~0|skip_connect~1|+|none~0|skip_connect~1|none~2|
    • |nor_conv_3x3~0|+|skip_connect~0|skip_connect~1|+|none~0|skip_connect~1|none~2|

    To Reproduce

    from nats_bench import create
    
    arch = "|skip_connect~0|+|none~0|nor_conv_3x3~1|+|avg_pool_3x3~0|nor_conv_3x3~1|nor_conv_3x3~2|"
    api_path = "NATS-tss-v1_0-3ffb9-simple"
    search_space = "tss"
    dataset = "cifar10"
    
    api = create(api_path, search_space, fast_mode=True, verbose=False)
    idx = api.query_index_by_arch(arch)
    info_12 = api.get_cost_info(idx, dataset, hp="12")
    info_200 = api.get_cost_info(idx, dataset, hp="200")
    print(info_12)
    print(info_200)
    
    {'flops': 113.95137, 'params': 0.802426, 'latency': 0.016719988563604522, 'T-train@epoch': 21.51544686158498, 'T-train@total': 258.1853623390198, 'T-ori-test@epoch': 1.496117415882292, 'T-ori-test@total': 17.953408990587505}
    {'flops': 90.35841, 'params': 0.6403993333333333, 'latency': 0.016719988563604522, 'T-train@epoch': 21.515446861584987, 'T-train@total': 4303.0893723169975, 'T-ori-test@epoch': 1.4961174158822923, 'T-ori-test@total': 299.22348317645844}
    
    • OS: Ubuntu 20.04.3 LTS (Focal Fossa)
    • Python version: 3.8.12
    • PyTorch version: 1.8.1+cu102

Expected behavior: I think the flops/params for hp="12" and hp="200" should be the same. (From reading the paper and your implementation in AutoDL-Projects, I could not find any factor that would make a difference.)

    bug 
    opened by mzsrkeen10 4
  • How to generate a computational graph from an architecture from the size search space?



    • Is it about the topology search space in NATS-Bench? No
    • Is it about the size search space in NATS-Bench? Yes
    • Which figure or table are you referring to in the paper? N/A

    I would like to generate a computational graph from a sample from the size search space. Do you provide any functionality for that? I presume that you must have used this functionality when creating this dataset, so if it is not available within the NATS-Bench codebase, can you give some pointers as to how I can do this?

    Thanks!

    opened by ifed-ucsd 4
  • Unable to use benchmark file


I have downloaded the archive (tss) file, uncompressed it with the tar command, and then uploaded the resulting folder to Google Drive to access it from Colab with the code below:

from nats_bench import create
api = create('/content/drive/MyDrive/NATS-tss-v1_0-3ffb9-simple/', 'tss', fast_mode=True, verbose=True)

I get: FileNotFoundError: [Errno 2] No such file or directory: '/content/drive/MyDrive/NATS-tss-v1_0-3ffb9-simple//meta.pickle.pbz2.pbz2'. Why is that?

    opened by ayushi-3536 4
  • There are some problems with the accuracy of the model


Which Algorithm? Size search space, CIFAR-10.

Describe the Question: I get the accuracy of all 32,768 architecture candidates using the code below, but all of them are under 90%. The results in your paper for all methods are above 90%. How can I get the results of Table 4 in your paper?

from nats_bench import create

model_cifar10_rank = {}
api = create(None, 'sss')
for index in range(32768):
    info = api.get_more_info(index, 'cifar10')
    config = api.get_net_config(index, 'cifar10')
    model_cifar10_rank[config['channels']] = info['test-accuracy']


    question 
    opened by Trent-tangtao 4
  • Documentation for NATS Bench data format


    I downloaded the archive with the evaluations of NATS Bench and I would like to get more information about the format of the data. In particular, the data linked in this page https://pypi.org/project/nats-bench/ at the 'archive' links.

    I already have software for running my experiments, I would just like to convert the data format instead of integrating a new library, which may be more suitable for a different setting.

    Of course, I will cite the paper as suggested.

    question 
    opened by 610v4nn1 4
  • Regarding the checkpoints


Hello. I would like to know whether the checkpoint splits are independent of each other. I mean, if I download one of those 30GB splits, does it contain all the information for a specific set of architectures, or do I need to download the other checkpoints and jointly unzip them in order to use them?

Where can I find the transform classes for inference if I want to evaluate a retrieved checkpoint and reproduce the test accuracy in the model information? Thank you!

    opened by sorobedio 0
  • get_net_param returns empty dictionary


Describe the bug: When retrieving network parameters with get_net_param, I obtain an empty dict that I clearly cannot load as the state_dict of a PyTorch model.

    To Reproduce Please provide a small script to reproduce the behavior:

    from nats_bench import create
    nats_api = create("NATS-tss-v1_0-3ffb9-simple", search_space="topology", fast_mode=True, verbose=False)
    # sample random idx to simulate behavior
    random_idx = nats_api.random()
    # get corresponding architecture on CIFAR10
    random_architecture = nats_api.get_net_config(index=random_idx, dataset="cifar10")
    # retrieve network parameters according to main README.md
    random_params = nats_api.get_net_param(index=random_idx, dataset="cifar10", seed=None)
    print(random_params)
    # prints {111: None}
    print(next(iter(random_params.values())))
    # prints None
    

OS: macOS Ventura 13.0.1
Python version: 3.10.8
PyTorch version: 1.13.0

Expected behavior: random_params should be a meaningful dictionary that I could use as the params dict for a torch model.

    opened by fracapuano 1
  • Is NATS Extension of NAS_201 bench


I was working with NAS-Bench-201 earlier and am now shifting to NATS-Bench. I have the following doubts:

• Will the architecture index obtained with NAS-Bench-201 and NATS-Bench be the same for a given arch? When using both, NAS-Bench-201 gives index 12804 while NATS-Bench returns -1. Also, the NAS-Bench-201 index yields a different arch config when used as an index in NATS.
• While obtaining weights, I get an empty return when using the NAS-Bench-201 index.

    Following is my implementation. I am using benchmark file with sss.

api = create(d, 'sss', fast_mode=False, verbose=True)
index = api.query_index_by_arch(convert_naslib_to_str(best_arch))
config = api.get_net_config(index, 'cifar10')
best_arch = get_cell_based_tiny_net(config)
logger.info("Queried results ({}): {}".format(metric, best_arch))
params = api.get_net_param(index, 'cifar10', None)
best_arch.load_state_dict(next(iter(params.values())))

    opened by Mars-204 3
  • Question about ImageNet16120


I am not sure whether there is a val/test split in the ImageNet16-120 dataset. I only found files for the train set and the val set in the data you provided. But in the README, the test set mentioned is actually the val set, according to your source code.

If the val set is used for testing or is just wrongly named, did you use the whole training set for model training without splitting?

    opened by nukeyyou 1
  • Details about the data structures more_info and cost_info


Hello, first of all, thank you very much for your work in the field of NAS. Here are my questions about NATS-Bench, and I hope to get your confirmation:

1. What are the specific units for results such as 'train-all-time'? (for example, minutes?)
2. What does training cost mean specifically? In particular, what is the difference between 'train-all-time' and 'T-train@total'?
    opened by WanFG99 2