LyaNet: A Lyapunov Framework for Training Neural ODEs

Overview


Provide the model type via --config-name to train and test the models configured as in the paper.
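In general, commands follow the pattern below (a sketch inferred from the examples in this README; substitute a model config name from Models Supported and a dataset from Datasets Supported):

python sl_pipeline.py --config-name <model> +dataset=<DATASET>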

Classification Training

The code assumes the project root is the current directory.

Example commands:

python sl_pipeline.py --config-name classical +dataset=MNIST
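The other model configurations follow the same pattern; for example, to train the LyaNet variant instead (assuming the config names listed under Models Supported below):

python sl_pipeline.py --config-name lyapunov +dataset=MNIST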

TensorBoard logs are saved to run_data/tensorboards and can be viewed by running:

tensorboard --logdir ./run_data/tensorboards --reload_multifile True

Only the model with the best validation error is saved. To quickly verify the test error of this model, run the adversarial robustness script; it prints the nominal test error before performing the attack.

Adversarial Robustness

The following commands assume the current directory is robustness. Note that the model file name will differ depending on the dataset and model combination you have run; the path shown illustrates the directory structure where models are stored.

These scripts print the test error, followed by the test error under an adversarial attack. Note that adversarial testing requires significantly more compute.

L2 Adversarial Robustness Experiments

PYTHONPATH=../ python untargeted_robustness.py --config-name classical norm="2" \
+dataset=MNIST \
"+model_file='../run_data/tensorboards/d.MNIST_m.ClassicalModule(RESNET18)_b.128_lr.0.01_wd.0.0001_mepoch120._sd0/default/version_0/checkpoints/epoch=7-step=3375.ckpt'"

L Infinity Adversarial Robustness Experiments

PYTHONPATH=../ python untargeted_robustness.py --config-name classical \
norm="inf"  +dataset=MNIST \
"+model_file='../run_data/tensorboards/d.MNIST_m.ClassicalModule(RESNET18)_b.128_lr.0.01_wd.0.0001_mepoch120._sd0/default/version_0/checkpoints/epoch=7-step=3375.ckpt'"

Datasets Supported

  • MNIST
  • FashionMNIST
  • CIFAR10
  • CIFAR100

Models Supported

  • anode: data-controlled dynamics with a ResNet18 component, trained through solution differentiation
  • classical: ResNet18
  • lyapunov: data-controlled dynamics with a ResNet18 component, trained with LyaNet
  • continuous_net: ContinuousNet from [1], trained through solution differentiation
  • continuous_net_lyapunov: ContinuousNet from [1], trained with LyaNet
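
For example, training ContinuousNet with LyaNet on CIFAR10 would follow the same command pattern as above (assuming these config names are accepted by sl_pipeline.py):

python sl_pipeline.py --config-name continuous_net_lyapunov +dataset=CIFAR10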

References

  1. Continuous-in-Depth Neural Networks Code
  2. Learning by Turning: Neural Architecture Aware Optimisation Code