Physics-Aware Training (PAT) is a method to train real physical systems with backpropagation.

Overview


Physics-Aware Training (PAT) is a method to train real physical systems with backpropagation. It was introduced in Wright, Logan G. & Onodera, Tatsuhiro, et al. (2021) to train Physical Neural Networks (PNNs): neural networks whose building blocks are physical systems.

[Example 1 animation]

This repository is a PyTorch-based implementation of Physics-Aware Training. It lets users build Physical Neural Networks and automates many of the steps needed to train them with Physics-Aware Training. To use an existing physical system as a building block in a neural network, users supply a class that receives batches of input data and processes them on the physical system. After the trainable parameters are specified, the system can be trained with this code. The methodology is demonstrated on an illustrative example of simulated, nonlinear coupled pendula; in our paper, we demonstrated the method on real experiments.
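The core idea behind PAT can be sketched in a few lines of plain Python. Everything below is an illustrative stand-in, not the repository's API: the "physical system" is simulated as a digital model plus noise, the forward pass runs the physical system as-is, and the parameter update uses the gradient of the differentiable digital model.

```python
import math
import random

random.seed(0)

def physical_system(x, theta):
    # Stand-in for a real experiment: the digital model plus measurement noise.
    return math.sin(theta * x) + random.gauss(0.0, 0.01)

def digital_model_grad(x, theta):
    # Analytic d/dtheta of the digital model sin(theta * x).
    return x * math.cos(theta * x)

# Toy task: recover theta = 1.3 from two (input, target) pairs.
data = [(0.5, math.sin(1.3 * 0.5)), (1.0, math.sin(1.3 * 1.0))]
theta, lr = 2.0, 0.1
for _ in range(200):
    for x, target in data:
        y = physical_system(x, theta)           # forward: physical system
        dloss_dy = 2.0 * (y - target)           # loss = (y - target) ** 2
        theta -= lr * dloss_dy * digital_model_grad(x, theta)  # backward: model
print(theta)  # converges near 1.3 despite the noisy forward pass
```

The point of the substitution is that the exact gradient of the experiment is unavailable, but the gradient of its digital model is close enough: noise and model mismatch perturb the updates without stopping training.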

This repository also gives users access to documented reference code to implement or modify PAT.
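At the PyTorch level, the mechanism at the heart of PAT can be expressed as a custom `torch.autograd.Function` whose forward pass queries the (here, simulated) physical system and whose backward pass differentiates a digital model instead. This is a minimal sketch under that assumption; the names and toy system are illustrative, not the repository's actual classes:

```python
import torch

def physical_system(x, theta):
    # Stand-in for a real experiment: the digital model plus noise.
    with torch.no_grad():
        return torch.sin(theta * x) + 0.01 * torch.randn_like(x)

def digital_model(x, theta):
    # Differentiable digital approximation of the physical system.
    return torch.sin(theta * x)

class PATFunction(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, theta):
        ctx.save_for_backward(x, theta)
        return physical_system(x, theta)  # forward: physical system

    @staticmethod
    def backward(ctx, grad_output):
        x, theta = ctx.saved_tensors
        with torch.enable_grad():
            x_ = x.detach().requires_grad_()
            theta_ = theta.detach().requires_grad_()
            y_model = digital_model(x_, theta_)  # backward: digital model
            grads = torch.autograd.grad(y_model, (x_, theta_), grad_output)
        return grads

x = torch.linspace(0.0, 1.0, 8)
theta = torch.tensor(0.5, requires_grad=True)
loss = PATFunction.apply(x, theta).pow(2).mean()
loss.backward()
print(theta.grad)  # gradient estimated through the digital model
```

Because the substitution happens inside `backward`, the rest of the network can be ordinary PyTorch layers, and autograd composes the model-based gradient with everything upstream and downstream.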

Getting started

How to cite this code

If you use Physics-Aware Training in your research, please consider citing the following paper:

Logan G. Wright, Tatsuhiro Onodera, Martin M. Stein, Tianyu Wang, Darren T. Schachter, Zoey Hu, and Peter L. McMahon (2021). Deep physical neural networks enabled by a backpropagation algorithm for arbitrary physical systems. https://arxiv.org/abs/2104.13386

License

The code in this repository is released under the following license:

Creative Commons Attribution 4.0 International

A copy of this license is given in this repository as license.txt.

You might also like...

Private Recommender Systems: How Can Users Build Their Own Fair Recommender Systems without Log Data? (SDM 2022)

Deep Learning for Physical Processes: Integrating Prior Scientific Knowledge (official PyTorch implementation of the ICLR 2018 paper)

Radio Transformer Networks: a PyTorch implementation of the noisy autoencoder from the paper "An Introduction to Deep Learning for the Physical Layer"

Phy-Q: A Benchmark for Physical Reasoning (Cheng Xue, Vimukthini Pinto, Chathura Gamage, Ekaterina Nikonova, Peng Zhang, Jochen Renz)

RBSRICNN: Raw Burst Super-Resolution through Iterative Convolutional Neural Network (Machine Learning and the Physical Sciences workshop at NeurIPS 2021)

FCA: Learning a 3D Full-coverage Vehicle Camouflage for Multi-view Physical Adversarial Attack

A complete end-to-end demonstration in which training data collected in Unity is used to train a deep neural network to predict the pose of a cube, which is then deployed in a simulated robotic pick-and-place task

Code used to generate the results in "Train longer, generalize better: closing the generalization gap in large batch training of neural networks"

Comments
  • jupyter notebook syntax error

    An error is reported even though the code is correct (editor used: Jupyter Notebook). Code written:

    first_name = "Jack"
    print(first_name)

    Error shown:

    NameError                          Traceback (most recent call last)
          1 # Exercise 3
    ----> 2 print(first_name)

    NameError: name 'first_name' is not defined

    opened by fearless0611 3
  • Questions

    First of all, I want to thank you for the excellent documentation of the work. It's very impressive. Regarding the repo and the paper, I have several questions, listed below.

    1. Regarding the machine learning task presented in the paper, are there any specific tasks that would be most appropriate for the specific physical systems, for example, for the SHG system? Did the same system show any limitations when trained for other ML tasks?

    2. In the example code of the multilayer net for coupled pendula, I noticed that in the forward model the PNN is constructed by propagating the input through pendula 1, pendula 2, and pendula 3 using the same underlying neural networks. I was wondering why a loop that applies a single pendulum layer three times is not used in In [30]. If I wanted to simulate a relatively deeper network, could I use a loop?

    3. For the simulation of coupled pendula and other tasks, I noticed that the user needs to define a way to extract the output in order to have a meaningful interpretation of the ML result. For instance, in Coupled Pendula, the output (final layer) is selected to be the angle of the middle pendulum. May I ask if this is a tricky part of training a PNN? Does the definition of the last layer matter a lot for training? Do you have any suggestions on how to define the last layer for a given physical system?

    Thanks for your time and patience!

    opened by emilyxia 1
  • Could I establish my PNN on a simulation platform such as PSCAD/EMTDC rather than a real circuit?

    Hello! Thank you for your contributions to PNNs. I am very excited to have read this paper, which may apply to my work on fault diagnosis in power systems. I want to first build a similar PNN in simulation software like PSCAD/EMTDC. Have you ever implemented a PNN in simulation software before?

    opened by Jianhong-Gao 2
  • Documentation for the actual physical setup?

    Thank you all for this groundbreaking work! I really want to make my own DIY PNN, but I ran into some questions after reading the docs. I'm wondering if you could publish a bit more documentation about the actual building process of the PNN?

    For instance, in Example 1, the first oscillator network is composed of 196 oscillators. If I construct an audio-frequency mechanical oscillator from a commercially available speaker, just like what you did in the paper, does that mean I would need to buy 196 speakers? How would you connect them to form a "layer", and how would you couple that layer to the next one?

    I've been working with digital NNs only, and I have some experience making hobbyist breadboard circuits, but never at a scale like this. It would be a dream come true if I could make my own PNN.

    Any help / directions / references would be appreciated!

    opened by tjwangml 1
Owner
McMahon Lab
Proximal Backpropagation (ProxProp): a neural network training algorithm that takes implicit instead of explicit gradient steps (Thomas Frerix)

Paddle-Adversarial-Toolbox (PAT): a Python library for deep learning security based on PaddlePaddle (AgentMaker)

DANN: a PyTorch implementation of Unsupervised Domain Adaptation by Backpropagation, supporting the Office-31 and Office-Home datasets

Meta Learning Backpropagation And Improving It (VSML): research code for the NeurIPS 2021 publication by Kirsch & Schmidhuber (Louis Kirsch)

Physics-aware Difference Graph Networks for Sparsely-Observed Dynamics: official PyTorch implementation (USC-Melady)

Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data, ported from https://github.com/xinntao/Real-ESRGAN (Holy Wu)

EventProp (https://arxiv.org/abs/2009.08378): an unofficial PyTorch implementation of a method to compute exact gradients for Spiking Neural Networks (Pedro Savarese)

EEND-vector-clustering: code to train, run inference with, and evaluate a speaker diarization framework that integrates end-to-end neural diarization with vector clustering