Learning Versatile Neural Architectures by Propagating Network Codes

Overview

Mingyu Ding, Yuqi Huo, Haoyu Lu, Linjie Yang, Zhe Wang, Zhiwu Lu, Jingdong Wang, Ping Luo

Introduction

This work includes:
(1) NAS-Bench-MR, a NAS benchmark built on four challenging datasets under practical training settings for learning task-transferable architectures.
(2) An efficient predictor-based algorithm Network Coding Propagation (NCP), which back-propagates the gradients of neural predictors to directly update architecture codes along desired gradient directions for various objectives.
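
As a rough illustration of (2), the sketch below performs gradient ascent on an architecture code through a frozen predictor. It is a minimal sketch under assumptions, not the repository's implementation: the toy predictor, its 26-D input size (borrowed from the Seg embedding format shown later), and the step size are all illustrative.

import torch
import torch.nn as nn

# Stand-in predictor: architecture code -> predicted metric (e.g., mIoU).
# The real predictor is trained via tools/train_predictor.py.
predictor = nn.Sequential(nn.Linear(26, 256), nn.ReLU(), nn.Linear(256, 1))
for p in predictor.parameters():
    p.requires_grad_(False)  # freeze the weights; only the code is optimized

# Start from an existing 26-D code and let gradients flow into it.
code = torch.tensor([16, 64, 1, 3, 128, 3, 3, 3, 32, 48, 2, 2, 3, 2,
                     16, 32, 16, 1, 2, 4, 1, 1, 96, 112, 48, 80],
                    dtype=torch.float32, requires_grad=True)
optimizer = torch.optim.SGD([code], lr=1.0)  # step size is an assumption

for _ in range(10):                    # a few propagation steps
    optimizer.zero_grad()
    loss = -predictor(code).sum()      # maximize the predicted metric
    loss.backward()                    # gradient w.r.t. the code, not the weights
    optimizer.step()

new_code = code.detach().round().clamp(min=1)  # snap back to a valid integer code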

This framework has been implemented and tested with Ubuntu/macOS, CUDA 9.0/10.0, Python 3, PyTorch 1.3-1.6, and NVIDIA Tesla V100 GPUs (or CPU).

Dataset

We build our benchmark on four computer vision tasks: image classification (ImageNet), semantic segmentation (Cityscapes), 3D detection (KITTI), and video recognition (HMDB51). In total, nine different settings are included, as shown in the data/*/trainval.pkl files.

Note that each .pkl file contains more than 2,500 architectures and their corresponding evaluation results under multiple metrics. The original training logs and checkpoints (including model weights and optimizer states) exceed 4 TB and will be uploaded to Google Drive; we will share the download link once the upload is complete.

Quick start

First, train the predictor:

python3 tools/train_predictor.py  # --cfg configs/seg.yaml

Then, edit architectures along the desired gradient directions:

python3 tools/ncp.py  # --cfg configs/seg.yaml

Examples

  • An example in NAS-Bench-MR (Seg):
{'mIoU': 70.57,
 'mAcc': 80.07,
 'aAcc': 95.29,
 'input_channel': [16, 64],
 # [num_branches, [num_convs], [num_channels]]
 'network_setting': [[1, [3], [128]],
  [2, [3, 3], [32, 48]],
  [2, [3, 3], [32, 48]],
  [2, [3, 3], [32, 48]],
  [3, [2, 3, 2], [16, 32, 16]],
  [3, [2, 3, 2], [16, 32, 16]],
  [4, [2, 4, 1, 1], [96, 112, 48, 80]]],
 'last_channel': 112,
 # [input_channel (x2), num_block1, num_convs1 (x1), num_channels1 (x1), ..., num_block4, num_convs4 (x4), num_channels4 (x4)]
 'embedding': [16, 64, 1, 3, 128, 3, 3, 3, 32, 48, 2, 2, 3, 2, 16, 32, 16, 1, 2, 4, 1, 1, 96, 112, 48, 80]
}
  • Load Datasets:
import pickle
exps = pickle.load(open('data/seg/trainval.pkl', 'rb'))
# Then process each item in exps (e.g., pick the best architecture, as sketched in the last example below)
  • Load Model / Get Params and Flops (based on the thop library):
import torch
from thop import profile
from models.supernet import MultiResolutionNet

# Get model using input_channel & network_setting & last_channel
model = MultiResolutionNet(input_channel=[16, 64],
                           network_setting=[[1, [3], [128]],
                                            [2, [3, 3], [32, 48]],
                                            [2, [3, 3], [32, 48]],
                                            [2, [3, 3], [32, 48]],
                                            [3, [2, 3, 2], [16, 32, 16]],
                                            [3, [2, 3, 2], [16, 32, 16]],
                                            [4, [2, 4, 1, 1], [96, 112, 48, 80]]],
                           last_channel=112)

# Get FLOPs (MACs) and parameter count
dummy_input = torch.randn(1, 3, 224, 224)
macs, params = profile(model, inputs=(dummy_input,))
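  • Pretty-print Params and Flops: thop's clever_format helper turns the raw counts into readable strings (the values in the comment are illustrative):
from thop import clever_format
macs, params = clever_format([macs, params], "%.3f")  # e.g., '4.226G', '25.557M'
print(macs, params)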
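  • Pick the best architecture: a minimal sketch, assuming exps is a list of dicts shaped like the Seg example above (the benchmark's exact container type may differ):
import pickle
from models.supernet import MultiResolutionNet

exps = pickle.load(open('data/seg/trainval.pkl', 'rb'))
best = max(exps, key=lambda item: item['mIoU'])  # record with the highest mIoU
print(best['mIoU'], best['network_setting'])
# Instantiate it exactly as in the example above
model = MultiResolutionNet(input_channel=best['input_channel'],
                           network_setting=best['network_setting'],
                           last_channel=best['last_channel'])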

Data Format

Each code in data/search_list.txt denotes an architecture, which can be loaded into our supernet as follows:

  • Code2Setting
params = '96_128-1_1_1_48-1_2_1_1_128_8-1_3_1_1_1_128_128_120-4_4_4_4_4_4_128_128_128_128-64'
embedding = [int(item) for item in params.replace('-', '_').split('_')]
# Drop the per-stage num_branches tokens (indices 3, 7, 13, 21 of the raw code);
# they are re-inserted as the constants 1-4 when the blocks are assembled below.
for idx in (21, 13, 7, 3):
    del embedding[idx]

# embedding = [96, 128, 1, 1, 48, 1, 1, 1, 128, 8, 1, 1,
#              1, 1, 128, 128, 120, 4, 4, 4, 4, 4, 128, 128,
#              128, 128, 64]
input_channels = embedding[0:2]
# Each block is [num_blocks, num_branches, num_convs..., num_channels...]
block_1 = embedding[2:3] + [1] + embedding[3:5]
block_2 = embedding[5:6] + [2] + embedding[6:10]
block_3 = embedding[10:11] + [3] + embedding[11:17]
block_4 = embedding[17:18] + [4] + embedding[18:26]
last_channels = embedding[26:27]
network_setting = []
for item in [block_1, block_2, block_3, block_4]:
    num_blocks, num_branches = item[0], item[1]
    convs, channels = item[2:2 + num_branches], item[2 + num_branches:]
    for _ in range(num_blocks):
        network_setting.append([num_branches, convs, channels])

# network_setting = [[1, [1], [48]], 
#  [2, [1, 1], [128, 8]],
#  [3, [1, 1, 1], [128, 128, 120]], 
#  [4, [4, 4, 4, 4], [128, 128, 128, 128]], 
#  [4, [4, 4, 4, 4], [128, 128, 128, 128]], 
#  [4, [4, 4, 4, 4], [128, 128, 128, 128]], 
#  [4, [4, 4, 4, 4], [128, 128, 128, 128]]]
# input_channels = [96, 128]
# last_channels = [64]
  • Setting2Code
input_channels = [str(item) for item in input_channels]
block_1 = [str(item) for item in block_1]
block_2 = [str(item) for item in block_2]
block_3 = [str(item) for item in block_3]
block_4 = [str(item) for item in block_4]
last_channels = [str(item) for item in last_channels]

params = [input_channels, block_1, block_2, block_3, block_4, last_channels]
params = ['_'.join(item) for item in params]
params = '-'.join(params)
# params == '96_128-1_1_1_48-1_2_1_1_128_8-1_3_1_1_1_128_128_120-4_4_4_4_4_4_128_128_128_128-64'
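
As a quick sanity check (assuming the Code2Setting and Setting2Code snippets above were run in the same session), re-encoding the decoded setting should reproduce the original entry from data/search_list.txt:

code = '96_128-1_1_1_48-1_2_1_1_128_8-1_3_1_1_1_128_128_120-4_4_4_4_4_4_128_128_128_128-64'
assert params == code  # Setting2Code inverts Code2Setting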

License

For academic use, this project is licensed under the 2-clause BSD License. For commercial use, please contact the author.

Comments
  • NCP Search Space

    Hello!

    I am interested in using your search space, specifically for the image segmentation task. Roughly speaking, I want to search through all valid codes and occasionally query the oracle (dataset) to get the true mean accuracy. The issue is that the dataset you provide (~2,500 architectures) is far smaller than the set of all possible codes, so there must be a pattern or set of rules for validating a code (or network). For example, if the second stage has 2 blocks, the number of channels must fall within a particular range. Thank you in advance for taking the time to read my question.

    Paniz

    opened by panizbehboudian 4