De(e)pendable Distributed Control of Port-Hamiltonian Systems (DeepDisCoPH)
This repository is associated with the paper [1] and contains:
- The full paper manuscript.
- The code to reproduce numerical experiments.
Summary
By embracing the compositional properties of port-Hamiltonian (pH) systems, we characterize deep Hamiltonian control policies with built-in closed-loop stability guarantees — irrespective of the interconnection topology and the chosen neural network parameters. Furthermore, our setup enables leveraging recent results on well-behaved neural ODEs to prevent the phenomenon of vanishing gradients by design [2]. The numerical experiments described in the report and available in this repository corroborate the dependability of the proposed DeepDisCoPH architecture, while matching the performance of general neural network policies.
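For intuition, the sketch below shows one way a single Hamiltonian layer of the kind studied in [2] could look: a forward-Euler step of x' = J grad H(x) with a fixed skew-symmetric interconnection matrix J. The class name, the choice H(x) = 1^T log cosh(Kx + b) and the Euler discretization are illustrative assumptions and do not reproduce the architecture implemented in this repository.

```python
import torch
import torch.nn as nn


class HamiltonianLayerSketch(nn.Module):
    """One forward-Euler step of x' = J * grad_H(x) with J skew-symmetric.

    Illustrative only: the Hamiltonian H(x) = 1^T log cosh(K x + b) and the
    discretization are assumptions, not the repository's implementation.
    """

    def __init__(self, dim: int, step: float = 0.05):
        super().__init__()
        self.K = nn.Parameter(torch.randn(dim, dim) / dim ** 0.5)
        self.b = nn.Parameter(torch.zeros(dim))
        A = torch.randn(dim, dim)
        # Fixed skew-symmetric interconnection matrix: J = A - A^T, so J^T = -J.
        self.register_buffer("J", A - A.T)
        self.step = step

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # grad_H(x) = K^T tanh(K x + b) for H(x) = 1^T log cosh(K x + b).
        grad_h = torch.tanh(x @ self.K.T + self.b) @ self.K
        # Forward-Euler step of x' = J * grad_H(x).
        return x + self.step * grad_h @ self.J.T
```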
Report
The report, as well as the corresponding Appendices, can be found in the `docs` folder.
Installation of DeepDisCoPH
The following commands install the Deep Distributed Control for Port-Hamiltonian Systems (DeepDisCoPH) package.
```
git clone https://github.com/DecodEPFL/DeepDisCoPH.git
cd DeepDisCoPH
python setup.py install
```
Basic usage
To train distributed controllers for the 12 robots in the xy-plane, run:
```
./run.py --model [MODEL]
```

where the available values for `MODEL` are `distributed_HDNN`, `distributed_HDNN_TI` and `distributed_MLP`.
To plot the norms of the backward sensitivity matrices (BSMs) when training a distributed H-DNN as in the previous example, run:

```
./bsm.py --layer [LAYER]
```

where the available values for `LAYER` are 1, 2, ..., 100. If `LAYER` is set to -1, it defaults to N, i.e. the last layer. The `LAYER` parameter indicates the layer at which the loss function is evaluated.
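As a rough illustration of what these norms measure, the sketch below evaluates the loss at the final state of a stack of layers and computes the norm of its gradient with respect to every intermediate state via PyTorch autograd, which is one way to proxy the backward sensitivity matrices of [2]. The function name and interface are hypothetical and do not correspond to the actual `bsm.py` script.

```python
import torch


def bsm_norms_sketch(layers, x0, loss_fn):
    """Hypothetical helper: norms of the loss gradient w.r.t. each layer state.

    Only a proxy for the backward sensitivity matrices discussed in [2];
    it does not reproduce the repository's bsm.py.
    """
    x = x0.detach().clone().requires_grad_(True)
    states = [x]
    # Forward pass, keeping every intermediate state in the autograd graph.
    for layer in layers:
        states.append(layer(states[-1]))
    loss = loss_fn(states[-1])
    # Gradients of the loss w.r.t. all stored states in a single backward pass.
    grads = torch.autograd.grad(loss, states)
    return [g.norm().item() for g in grads]
```

For example, `layers` could be a list of `HamiltonianLayerSketch` modules from the Summary section and `loss_fn` a simple quadratic terminal cost.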
Examples: formation control with collision avoidance
The following GIFs show the trajectories of the robots before and after training a distributed H-DNN controller. The goal is to reach the target positions within T = 5 seconds while avoiding collisions.
Training is performed for t in [0, 5]. Trajectories are shown for t in [0, 6], highlighting that the robots stay close to their desired positions when the time horizon is extended (grey background).
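To make the training objective concrete, the following sketch shows one possible loss combining a tracking term with a hinge-type collision penalty. The function name, the safety radius and the weighting are illustrative assumptions; the objective actually used in the experiments is the one defined in the paper and in the repository code.

```python
import torch


def formation_loss_sketch(traj, targets, safety_radius=0.5, collision_weight=10.0):
    """Hypothetical loss: target tracking plus a collision-avoidance penalty.

    traj:    tensor of shape (T, n_robots, 2) with xy positions over time
    targets: tensor of shape (n_robots, 2) with the desired final positions
    """
    # Tracking term: squared distance of every robot to its target, summed over time.
    tracking = ((traj - targets) ** 2).sum()
    # Pairwise distances between robots at every time step.
    diffs = traj.unsqueeze(2) - traj.unsqueeze(1)   # (T, n, n, 2)
    dists = diffs.norm(dim=-1)                      # (T, n, n)
    # Hinge penalty on distinct pairs closer than the safety radius
    # (each unordered pair is counted twice, which only rescales the penalty).
    offdiag = ~torch.eye(traj.shape[1], dtype=torch.bool)
    collision = torch.relu(safety_radius - dists[:, offdiag]).sum()
    return tracking + collision_weight * collision
```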
Early stopping of the training

We verify that DeepDisCoPH controllers ensure closed-loop stability by design, even during exploration. We train the DeepDisCoPH controller for 25%, 50% and 75% of the total number of iterations and report the results in the following GIFs.
Training is performed for t in [0, 5]. Trajectories are shown for t in [0, 15]; the extended horizon, i.e. t in [5, 15], is shown with a grey background. Partially trained distributed controllers exhibit suboptimal behavior but never compromise closed-loop stability.

References
[1] Luca Furieri, Clara L. Galimberti, Muhammad Zakwan and Giancarlo Ferrari Trecate. "Distributed neural network control with dependability guarantees: a compositional port-Hamiltonian approach," under review.
[2] Clara L. Galimberti, Luca Furieri, Liang Xu and Giancarlo Ferrari Trecate. "Hamiltonian Deep Neural Networks Guaranteeing Non-vanishing Gradients by Design," arXiv:2105.13205, 2021.