"NAS-Bench-301 and the Case for Surrogate Benchmarks for Neural Architecture Search".

Overview

NAS-Bench-301

This repository contains code for the paper "NAS-Bench-301 and the Case for Surrogate Benchmarks for Neural Architecture Search".

The surrogate models can be downloaded from figshare. This includes the models for v0.9 and v1.0 as well as the dataset used to train the surrogate models. We also provide the full training logs for all architectures, which include learning curves on the train, validation, and test sets. These can be downloaded automatically; see nasbench301/example.py.

To install all requirements (this may take a few minutes), run

$ cat requirements.txt | xargs -n 1 -L 1 pip install
$ pip install nasbench301

To install directly from GitHub, run

$ git clone https://github.com/automl/nasbench301
$ cd nasbench301
$ cat requirements.txt | xargs -n 1 -L 1 pip install
$ pip install .
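The `cat requirements.txt | xargs -n 1 -L 1 pip install` idiom above installs each requirement in its own pip invocation, so a single conflicting pin fails only that one call rather than aborting the whole batch. A sketch of what the pipeline expands to, with `echo` standing in for pip and illustrative package pins:

```shell
# Each line of requirements.txt becomes a separate `pip install <spec>` call.
# `echo` stands in for pip here so the generated commands are just printed.
printf 'torch==1.5.0\nnumpy==1.18.1\n' | xargs -n 1 -L 1 echo pip install
```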

To run the example

$ python3 nasbench301/example.py

To fit a surrogate model, run

$ python3 fit_model.py --model gnn_gin --nasbench_data PATH_TO_NB_301_DATA_ROOT --data_config_path configs/data_configs/nb_301.json  --log_dir LOG_DIR

NOTE: This codebase is still subject to change. Upcoming updates include improved versions of the surrogate models and code for all experiments from the paper; the API may also change.

Comments
  • Adapt sys paths

    import nasbench301 yields

    Exception has occurred: ModuleNotFoundError
    No module named 'surrogate_models'
    

    because the surrogate_models/ directory is not added to sys.path
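Until the packaging is fixed, one workaround (a sketch; the clone location below is an assumption) is to put the repository root, i.e. the directory containing `surrogate_models/`, on `sys.path` before importing:

```python
import os
import sys

# Assumed location of a local clone of the nasbench301 repository;
# adjust to wherever the directory containing surrogate_models/ lives.
repo_root = os.path.expanduser("~/nasbench301")

# Prepend the repo root so `import surrogate_models` can resolve.
if repo_root not in sys.path:
    sys.path.insert(0, repo_root)
```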

    opened by LMZimmer 4
  • Errors when installing dependencies on Ubuntu 18.04 LTS, python 3.8

    Hi,

    When I install the dependencies via cat requirements.txt | xargs -n 1 -L 1 pip install in a fresh conda Python 3.8 environment on Ubuntu 18.04 64-bit LTS, I get a number of errors, especially with respect to the TensorFlow version and the various geometric libraries. I am working through them, but if you have any leads please let me know :-)

    Distributor ID: Ubuntu
    Description: Ubuntu 18.04.5 LTS
    Release: 18.04
    Codename: bionic

    opened by debadeepta 2
  • What's the meaning of `NetworkSelectorDatasetInfo:darts:inputs_node_normal_3` in config file?

    Could you give a brief tutorial on the meaning of the items in the config file? What does the option for NetworkSelectorDatasetInfo:darts:inputs_node_normal_3 mean? Does the option 0_1 refer to a connection from node 0 to node 1, or does it just represent the encoding of the input node?

    opened by auroua 2
  • [Few bug fixes] Some importing/resource path mistakes

    I think NAS-Bench-301 is quite a valuable work, and we're already trying to use it. The current GitHub version seems to have some small import/resource-path mistakes. Maybe I've used it incorrectly, but here is a PR with the fixes I made while using Python 3.7.3.

    opened by walkerning 2
  • Creating models from the search space

    Could you release the code for generating the models based on a ConfigSpace Instance of the search space so that it is possible to retrain them yourself?

    While it is probably possible to get something close to what you have used based on the code in the original DARTS repo I'd much rather get identical models if possible.

    Thanks

    opened by DanielHesslow 2
  • Loosen the restriction on ConfigSpace requirements

    Hi,

    The version of ConfigSpace required by nasbench301 is pinned to exactly 0.4.12. This clashes with NASLib, which requires at least 0.4.17.

    I've raised #16 to fix this issue.

    Best Arjun

    opened by Neonkraft 1
  • Training scripts for the architectures

    Hi all,

    Many thanks for the great work!

    I am aware that the training logs of the architectures used as train data for the surrogate in NAS-Bench-301 are released, but I'm wondering whether I could access the original scripts used to train the architectures from scratch? I appreciate that the training details are mentioned in App A of the paper, but with some additional augmentations used (e.g. MixUp) compared to standard DARTS procedure, I think it'd be great if I could use the exact original training script used to construct the benchmark.

    If they are already in the repo, I'd be grateful if you could point me to the right place.

    Many thanks in advance for your help.

    Xingchen

    opened by xingchenwan 1
  • Direct pip install throws binary incompatibility error

    If I install directly with pip install nasbench301, it throws an error when I execute the example.

    Distributor ID: Ubuntu
    Description: Ubuntu 18.04.5 LTS
    Release: 18.04
    Codename: bionic

    (nasbench301) REDMOND.dedey@GCRAZGDL0459:~/nasbench301$ python nasbench301/example.py 
    Traceback (most recent call last):
      File "nasbench301/example.py", line 4, in <module>
        from ConfigSpace.read_and_write import json as cs_json
      File "/home/dedey/.local/lib/python3.8/site-packages/ConfigSpace/__init__.py", line 37, in <module>
        from ConfigSpace.configuration_space import Configuration, \
      File "ConfigSpace/configuration_space.pyx", line 39, in init ConfigSpace.configuration_space
      File "__init__.pxd", line 242, in init ConfigSpace.hyperparameters
    ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 88 from C header, got 80 from PyObject
    
    opened by debadeepta 1
  • PyPi version does not seem to work (v0.2)

    The PyPi version does not seem to include most of the source code.

    This can be confirmed by going to PyPI, looking up nasbench301 under "Download files", and then downloading and viewing the tarball.

    • https://pypi.org/project/nasbench301/#files

    I can also confirm it happens locally:

    > mkdir ~/test
    
    > cd test
    
    > python -m venv ./.myvenv
    
    > source ./.myvenv/bin/activate
    
    > pip install --no-cache-dir nasbench301
    Collecting nasbench301
      Downloading nasbench301-0.2-py3-none-any.whl (9.7 kB)
    Installing collected packages: nasbench301
    Successfully installed nasbench301-0.2
    
    > ls .myvenv/lib/python3.8/site-packages/nasbench301
    __pycache__  .  ..  api.py  example.py  __init__.py  representations.py
    
    > python -c "import nasbench301"        
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/home/eddiebergman/test/.myvenv/lib/python3.8/site-packages/nasbench301/__init__.py", line 1, in <module>
        from nasbench301.api import load_ensemble
      File "/home/eddiebergman/test/.myvenv/lib/python3.8/site-packages/nasbench301/api.py", line 6, in <module>
        from surrogate_models import utils
    ModuleNotFoundError: No module named 'surrogate_models'
    

    Only these four files are present in the installed package.

    Unfortunately I have not actually released a package to PyPI myself, so I am not sure of the fix, but I would imagine it is a relatively straightforward issue with the setup.py file.

    opened by eddiebergman 1
  • About NasBenchDataset

    I'm a beginner in deep learning, and I need an example of how to process data in this code, i.e. how to process the JSON and turn it into the Graph needed later in NASBenchDataset. Could someone please give me an example?

    opened by BaiYuanxi-dev 0
  • Made more package friendly, auto model downloader

    Hi, thanks for the awesome library!

    Note: I also added functionality to download the models automatically, saving users any extra steps; this is used in the example as well. Please see the note at the bottom.

    I did have some issues with paths throughout, and installing it as a local package is broken due to:

    • The use of relative paths (paths relative to where python is being run, rather than where the nasbench files are)
    • json config files not being included when the package is actually installed using pip
    • Imports not working because the files did not belong to a package, i.e. a folder containing an __init__.py file.
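The relative-path issue above is typically fixed by resolving bundled resources against the module's own location via `__file__` instead of the current working directory. A minimal sketch (the resource name is illustrative):

```python
import os

# Directory containing this module, independent of the user's CWD.
MODULE_DIR = os.path.dirname(os.path.abspath(__file__))

def resource_path(name):
    """Resolve a bundled resource (e.g. a JSON config) relative to the package."""
    return os.path.join(MODULE_DIR, name)
```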

    This PR essentially makes the library more package friendly by allowing the user to install the library using pip:

    # Assuming all requirements met
    git clone https://github.com/automl/nasbench301/
    pip install ./nasbench301
    python -c "import nasbench301"
    

    Fixes

    The pull request addressed these issues by:

    • Making sure all json files required for the code to run are included using a MANIFEST.in. (docs)
    • Adding __init__.py files where missing.
    • Updating to use paths relative to where the code is using __file__.
    • Updating README.md to remove the export PYTHONPATH=$PWD requirement.
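A minimal sketch of the MANIFEST.in approach described above (the include pattern is an assumption based on the JSON configs in the repo; `include_package_data=True` must also be set in `setup()` for the files to be installed):

```
# MANIFEST.in (sketch): ship the JSON config files with the package
recursive-include nasbench301 *.json
```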

    Benefits

    A user should now be able to import and use nasbench301 as a package, as they would any other third-party package downloaded from GitHub, without the export step previously mentioned in the README.md (assuming they have downloaded the nb_models for v0.9 or v1.0).

    # ~/myscript.py
    import nasbench301
    
    gnngin_model_path = 'path/to/model' 
    performance_model = nasbench301.load_ensemble(gnngin_model_path)
    
    genome = ...
    prediction = performance_model.predict(config=genome, representation='genotype')
    print(prediction)
    
    > git clone https://github.com/automl/nasbench301
    ...
    
    > pip install ./nasbench301
    ...
    
    > pip show nasbench301
    Name: nasbench301
    Version: 0.2
    Summary: A surrogate benchmark for neural architecture search
    Home-page: https://github.com/automl/nasbench301
    Author: AutoML Freiburg
    Author-email: [email protected]
    License: 3-clause BSD
    Location: /home/--omitted--/.venv/lib/python3.8/site-packages
    Requires: 
    Required-by: 
    
    > python ~/myscript.py
    22.41
    

    Test

    This was tested with my own code, which uses all 3 models. I have also tested it by running example.py.

    > python -V
    Python 3.8.6
    > pip -V
    pip 20.2.1
    

    Updated Example and Model Downloading

    To make it easier to run the example I included a script to download the models and updated the example to use it if the models do not exist.

    If the library is installed using pip install nasbench301, the downloader defaults to deleting the zip files afterwards to avoid bloating the environment it's installed into.

    opened by eddiebergman 0
  • Significance of MSE vs Kendall's Tau / Spearman's Rank Correlation

    Hi folks. I am working on designing a surrogate benchmark for some hardware specific performance metrics based on the principles suggested in your work.

    I am currently evaluating an XGB surrogate on a small dataset of 500-1000 model architectures from an MNASNet-like search space. The XGB hyperparameters are copied from your work.

    I am getting high validation/test MSE results (~ 0.4 to 0.6) but with a high Kendall's Tau (~0.92) and Spearman's rank correlation (~0.98).

    When I use the same number of models, selected randomly from the nb301_dataset (from the random-search directory) that you provide, to train the surrogate, I get low MSE (~0.16) but also low KT (~0.60) and Spearman's (~0.78).

    I'm wondering if this disparity could be due to sub-optimal hyperparameter values. Do you have any insights on what could cause such a huge difference in the predictor's performance? Furthermore, for evaluating a surrogate, do you think Kendall's Tau or Spearman's rank correlation is a better metric than MSE, or vice versa?
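The disparity described above can be reproduced with a toy example: a predictor with a constant systematic offset has a sizeable MSE yet a perfect rank correlation, which is why rank metrics are usually the more relevant choice when the surrogate is used to compare architectures rather than to predict absolute accuracy. A self-contained sketch in pure Python (all values invented for illustration):

```python
def mse(y, p):
    """Mean squared error between true values y and predictions p."""
    return sum((a - b) ** 2 for a, b in zip(y, p)) / len(y)

def spearman(y, p):
    """Spearman's rho = Pearson correlation of the ranks (no-ties case)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    ry, rp = ranks(y), ranks(p)
    n = len(y)
    my, mp = sum(ry) / n, sum(rp) / n
    cov = sum((a - my) * (b - mp) for a, b in zip(ry, rp))
    var_y = sum((a - my) ** 2 for a in ry)
    var_p = sum((b - mp) ** 2 for b in rp)
    return cov / (var_y * var_p) ** 0.5

true_acc = [91.2, 92.5, 93.1, 94.0, 94.8]
biased = [acc + 0.7 for acc in true_acc]  # constant offset: ranking preserved

print(mse(true_acc, biased))       # ~0.49 - sizeable MSE from the offset alone
print(spearman(true_acc, biased))  # 1.0 - the ranking is still perfect
```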

    opened by afzalxo 0
  • Perfect installation

    • I've already installed nasbench301 on Windows 10 successfully. It took me only a few moments to run the code without a bug. I find it easy to get started with NAS-Bench-301 using example.py.
    opened by RaoXuan-1998 0
  • Mention that requirements.txt uses libraries specifically for CUDA 10.2

    It might be worth mentioning that requirements.txt specifically installs torch libraries built for CUDA 10.2, as this took a while to debug.

    I am using this with torch 1.8.0+cu111 and it all seems to work as intended.

    > pip list | grep torch
    torch                  1.8.0+cu111
    torch-cluster          1.5.9
    torch-geometric        1.6.3
    torch-scatter          2.0.6
    torch-sparse           0.6.9
    torch-spline-conv      1.2.1
    torchaudio             0.8.0
    torchvision            0.9.0+cu111
    
    opened by eddiebergman 0
Owner
AutoML-Freiburg-Hannover