Deep Distributed Control of Port-Hamiltonian Systems

Overview

De(e)pendable Distributed Control of Port-Hamiltonian Systems (DeepDisCoPH)

This repository is associated with the paper [1] and contains:

  1. The full paper manuscript.
  2. The code to reproduce numerical experiments.

Summary

By embracing the compositional properties of port-Hamiltonian (pH) systems, we characterize deep Hamiltonian control policies with built-in closed-loop stability guarantees — irrespective of the interconnection topology and the chosen neural network parameters. Furthermore, our setup enables leveraging recent results on well-behaved neural ODEs to prevent the phenomenon of vanishing gradients by design [2]. The numerical experiments described in the report and available in this repository corroborate the dependability of the proposed DeepDisCoPH architecture, while matching the performance of general neural network policies.
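
For intuition, a deep Hamiltonian layer can be viewed as one forward-Euler step of an ODE y' = J ∇H(y) with J skew-symmetric; this structure is what underlies the non-vanishing-gradient property studied in [2]. The snippet below is a minimal illustrative sketch in PyTorch: the parameterization and names are our assumptions, not the exact DeepDisCoPH implementation.

import torch
import torch.nn as nn

class HamiltonianLayer(nn.Module):
    """One forward-Euler step of y' = J * grad H(y), with
    H(y) = sum(log cosh(K y + b)) and J skew-symmetric.
    Illustrative sketch only; not the exact DeepDisCoPH architecture."""

    def __init__(self, dim, step=0.05):
        super().__init__()
        self.K = nn.Parameter(0.1 * torch.randn(dim, dim))
        self.b = nn.Parameter(torch.zeros(dim))
        A = torch.randn(dim, dim)
        self.register_buffer("J", A - A.T)  # skew-symmetric interconnection matrix
        self.step = step                    # discretization step of the neural ODE

    def forward(self, y):
        grad_H = torch.tanh(y @ self.K.T + self.b) @ self.K  # gradient of H at y
        return y + self.step * grad_H @ self.J.T              # one Euler step

# A deep Hamiltonian network is a stack of such layers
net = nn.Sequential(*[HamiltonianLayer(dim=8) for _ in range(100)])
y0 = torch.randn(4, 8)   # batch of 4 state samples
print(net(y0).shape)     # torch.Size([4, 8])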

Report

The report as well as the corresponding Appendices can be found in the docs folder.

Installation of DeepDisCoPH

The following commands install the Deep Distributed Control of Port-Hamiltonian Systems (DeepDisCoPH) package.

git clone https://github.com/DecodEPFL/DeepDisCoPH.git

cd DeepDisCoPH

python setup.py install

Basic usage

To train distributed controllers for the 12 robots in the xy-plane:

./run.py --model [MODEL]

where available values for MODEL are distributed_HDNN, distributed_HDNN_TI and distributed_MLP.
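
For example, to train the distributed H-DNN controller:

./run.py --model distributed_HDNN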

To plot the norms of the backward sensitivity matrices (BSMs) when training a distributed H-DNN as in the previous example, run:

./bsm.py --layer [LAYER]

where available values for LAYER are 1, 2, ..., 100. If LAYER=-1, it is set to N, the total number of layers. The LAYER parameter indicates the layer at which the loss function is assumed to be evaluated.
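
Conceptually, the BSM norm at a given layer measures how strongly the gradient of the loss propagates back to the state of that layer. A minimal, self-contained way to compute such per-layer gradient norms with PyTorch autograd is sketched below (hypothetical helper and stand-in model, not the repository's bsm.py):

import torch
import torch.nn as nn

def layer_gradient_norms(net, y0, loss_fn):
    """Norm of d(loss)/d(y_l) for every intermediate state y_l.
    Sketch of the backward-sensitivity idea; not the repository's bsm.py."""
    y = y0
    states = []
    for layer in net:
        y = layer(y)
        y.retain_grad()   # keep the gradient of this intermediate state
        states.append(y)
    loss_fn(y).backward()
    return [s.grad.norm().item() for s in states]

# Example with a plain feed-forward stack as a stand-in for a trained model
net = nn.Sequential(*[nn.Linear(8, 8) for _ in range(100)])
y0 = torch.randn(4, 8, requires_grad=True)
norms = layer_gradient_norms(net, y0, lambda y: (y ** 2).mean())
print(norms[0], norms[-1])   # sensitivity norms at the first and last layer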

Examples: formation control with collision avoidance

The following GIFs show the trajectories of the robots before and after training a distributed H-DNN controller. The goal is to reach the target positions within T = 5 seconds while avoiding collisions.

[GIF: robot trajectories before training] [GIF: robot trajectories after training a distributed H-DNN controller]

Training performed for t in [0,5]. Trajectories shown for t in [0,6], highlighting that the robots stay close to their desired positions when the time horizon is extended (grey background).
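
A training objective for this kind of experiment typically combines a tracking term on the target positions with a penalty that grows when two robots come closer than a safety distance. The sketch below only illustrates the idea; the safety distance, weights and exact cost are assumptions, not the loss used in the paper:

import torch

def formation_loss(positions, targets, safety_dist=0.5, weight=10.0):
    """positions, targets: [n_robots, 2] tensors in the xy-plane.
    Tracking error plus a collision-avoidance penalty; an illustrative
    sketch, not the exact cost used in the paper."""
    tracking = ((positions - targets) ** 2).sum()
    dists = torch.cdist(positions, positions)                 # pairwise robot distances
    mask = ~torch.eye(positions.shape[0], dtype=torch.bool)   # ignore self-distances
    violation = torch.clamp(safety_dist - dists[mask], min=0.0)
    return tracking + weight * (violation ** 2).sum()

pos = torch.rand(12, 2)   # 12 robots, as in the experiments
tgt = torch.rand(12, 2)
print(formation_loss(pos, tgt))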

Early stopping of the training

We verify that DeepDisCoPH controllers ensure closed-loop stability by design even during exploration. We train the DeepDisCoPH controller for 25%, 50% and 75% of the total number of iterations and report the results in the following gifs.

[GIFs: robot trajectories after 25%, 50% and 75% of training]

Training performed for t in [0,5]. Trajectories shown for t in [0,15]. The extended horizon, i.e. when t in [5,15], is shown with grey background. Partially trained distributed controllers exhibit suboptimal behavior, but never compromise closed-loop stability.

References

[1] Luca Furieri, Clara L. Galimberti, Muhammad Zakwan and Giancarlo Ferrari Trecate. "Distributed neural network control with dependability guarantees: a compositional port-Hamiltonian approach," under review.

[2] Clara L. Galimberti, Luca Furieri, Liang Xu and Giancarlo Ferrari Trecate. "Hamiltonian Deep Neural Networks Guaranteeing Non-vanishing Gradients by Design," arXiv:2105.13205, 2021.

Owner
Dependable Control and Decision group - EPFL