Dynamical movement primitives (DMPs), probabilistic movement primitives (ProMPs), spatially coupled bimanual DMPs.

Movement Primitives

Movement primitives are a common family of policy representations in robotics, with many different types and variations. This repository focuses mainly on imitation learning, generalization, and adaptation of movement primitives. It provides implementations in Python and Cython.

Features

  • Dynamical Movement Primitives (DMPs) for
    • positions (with fast Runge-Kutta integration)
    • Cartesian position and orientation (with fast Cython implementation)
    • Dual Cartesian position and orientation (with fast Cython implementation)
  • Coupling terms for synchronization of position and/or orientation of dual Cartesian DMPs
  • Propagation of DMP weight distribution to state space distribution
  • Probabilistic Movement Primitives (ProMPs)
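
A minimal usage sketch for a positional DMP, assuming the DMP class from movement_primitives.dmp with imitate, configure, and open_loop methods (see the API documentation for exact signatures):

import numpy as np
from movement_primitives.dmp import DMP

# demonstration: time steps and a 2D trajectory of matching length
T = np.linspace(0.0, 1.0, 101)
Y = np.column_stack((np.cos(np.pi * T), np.sin(np.pi * T)))

dmp = DMP(n_dims=2, execution_time=1.0, dt=0.01, n_weights_per_dim=10)
dmp.imitate(T, Y)

# reproduce the learned movement, optionally with a shifted goal
dmp.configure(start_y=Y[0], goal_y=Y[-1] + np.array([0.1, 0.0]))
T_replay, Y_replay = dmp.open_loop()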

API Documentation

The API documentation is available here.

Install Library

This library requires Python 3.6 or later; pip is recommended for installation. In the following instructions, we assume that the command python refers to Python 3. If you use the system's Python version, you might have to add the flag --user to any installation command.

I recommend installing the library via pip in editable mode:

python -m pip install -e .[all]

If you don't want to have all dependencies installed, just omit [all]. Alternatively, you can install dependencies with

python -m pip install -r requirements.txt

You could also just build the Cython extension with

python setup.py build_ext --inplace

or install the library with

python setup.py install

Non-public Extensions

Note that scripts from the subfolder examples/external_dependencies/ require access to git repositories (URDF files or optional dependencies) that are not publicly available.

MoCap Library

# untested: pip install git+https://git.hb.dfki.de/dfki-interaction/mocap.git
git clone git@git.hb.dfki.de:dfki-interaction/mocap.git
cd mocap
python -m pip install -e .
cd ..

Get URDFs

# RH5
git clone git@git.hb.dfki.de:models-robots/rh5_models/pybullet-only-arms-urdf.git --recursive
# RH5v2
git clone git@git.hb.dfki.de:models-robots/rh5v2_models/pybullet-urdf.git --recursive
# Kuka
git clone git@git.hb.dfki.de:models-robots/kuka_lbr.git
# Solar panel
git clone git@git.hb.dfki.de:models-objects/solar_panels.git
# RH5 Gripper
git clone git@git.hb.dfki.de:motto/abstract-urdf-gripper.git --recursive

Data

Most scripts assume that your data is located in the folder data/. You should put a symlink there that points to your actual data folder.

Build API Documentation

You can build the API documentation with pdoc3. Install pdoc3 with

pip install pdoc3

... and build the documentation from the main folder with

pdoc movement_primitives --html

It will be located at html/movement_primitives/index.html.

Test

To run the tests, some additional Python libraries are required:

python -m pip install -e .[test]

The tests are located in the folder test/ and can be executed with: python -m nose test

This command searches for all files whose names contain test and executes all functions whose names start with test_.

Contributing

To add new features or documentation, or to fix bugs, you can open a pull request. Pushing directly to the main branch is not allowed.

Examples

Conditional ProMPs

Probabilistic Movement Primitives (ProMPs) define distributions over trajectories that can be conditioned on viapoints. In this example, we plot the resulting posterior distribution after conditioning on varying start positions.

Script
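
A minimal sketch of how this conditioning might look in code; the ProMP methods condition_position and sample_trajectories used here are assumptions based on this example, so check the API documentation for the exact interface:

import numpy as np
from movement_primitives.promp import ProMP

# toy demonstrations: noisy sine trajectories in 1D
random_state = np.random.RandomState(0)
T = np.linspace(0.0, 1.0, 101)
Ts = np.tile(T, (10, 1))
Ys = np.sin(2 * np.pi * Ts)[:, :, np.newaxis] + 0.05 * random_state.randn(10, 101, 1)

promp = ProMP(n_dims=1, n_weights_per_dim=10)
promp.imitate(Ts, Ys)

# condition on a new start position and sample from the posterior
conditioned_promp = promp.condition_position(np.array([0.3]), t=0.0)
samples = conditioned_promp.sample_trajectories(T, 5, random_state)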

Potential Field of 2D DMP

A Dynamical Movement Primitive defines a potential field that superimposes several components: transformation system (goal-directed movement), forcing term (learned shape), and coupling terms (e.g., obstacle avoidance).

Script
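
To make the superposition concrete, here is a conceptual sketch (not the library's internal implementation) of the acceleration a DMP produces from these components:

import numpy as np

def dmp_acceleration(y, yd, goal_y, forcing_term, coupling_term=0.0,
                     alpha=25.0, beta=6.25, execution_time=1.0):
    """Conceptual DMP acceleration: transformation system + forcing term + coupling term."""
    # transformation system: spring-damper dynamics pulling the state toward the goal
    transformation = alpha * (beta * (goal_y - y) - execution_time * yd)
    # the forcing term encodes the learned shape, the coupling term adds e.g. obstacle avoidance
    return (transformation + forcing_term + coupling_term) / execution_time ** 2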

DMP with Final Velocity

Not all DMP formulations allow a final velocity greater than zero. In this example, we analyze the effect of different final velocities in a variation of the DMP formulation that allows setting the final velocity.

Script
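
A minimal sketch, assuming DMPWithFinalVelocity from movement_primitives.dmp shares the DMP interface and accepts a goal velocity via configure (verify against the API documentation):

import numpy as np
from movement_primitives.dmp import DMPWithFinalVelocity

T = np.linspace(0.0, 1.0, 101)
Y = np.column_stack((T, np.sin(np.pi * T)))

dmp = DMPWithFinalVelocity(n_dims=2, execution_time=1.0, dt=0.01, n_weights_per_dim=10)
dmp.imitate(T, Y)

# request a nonzero velocity at the end of the movement
dmp.configure(goal_yd=np.array([1.0, 0.0]))
T_replay, Y_replay = dmp.open_loop()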

ProMPs

The LASA Handwriting dataset learned with ProMPs. The dataset consists of 2D handwriting motions. The first and third columns of the plot show demonstrations and the second and fourth columns show the imitated ProMPs with a 1-sigma interval.

Script

Contextual ProMPs

We use a dataset of Mronga and Kirchner (2021) with 10 demonstrations each for 3 different panel widths, obtained through kinesthetic teaching. The panel width is the context over which we generalize with contextual ProMPs. Each color in the visualizations corresponds to a ProMP for a different context.

Script

This example has dependencies that are not publicly available (see Non-public Extensions above).

Dual Cartesian DMP

We offer specific dual Cartesian DMPs to control dual-arm robotic systems like humanoid robots.

Scripts: Open3D, PyBullet

This example has dependencies that are not publicly available (see Non-public Extensions above).
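
Since the demonstration data is not public, here is only a rough sketch of the expected interface, assuming DualCartesianDMP operates on a 14-dimensional state of two concatenated poses (position plus quaternion per end-effector); verify the details against the API documentation:

import numpy as np
from movement_primitives.dmp import DualCartesianDMP

# synthetic demonstration: two end-effectors translating with fixed orientation
T = np.linspace(0.0, 1.0, 101)
left_position = np.column_stack((T, np.zeros(101), np.zeros(101)))
right_position = np.column_stack((T, np.ones(101), np.zeros(101)))
orientation = np.tile(np.array([1.0, 0.0, 0.0, 0.0]), (101, 1))  # identity quaternion (w, x, y, z)
Y = np.hstack((left_position, orientation, right_position, orientation))  # shape (101, 14)

dmp = DualCartesianDMP(execution_time=1.0, dt=0.01, n_weights_per_dim=10)
dmp.imitate(T, Y)
T_replay, Y_replay = dmp.open_loop()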

Coupled Dual Cartesian DMP

We can introduce a coupling term in a dual Cartesian DMP to constrain the relative position, orientation, or pose of two end-effectors of a dual-arm robot.

Scripts: Open3D, PyBullet

This example has dependencies that are not publicly available (see Non-public Extensions above).
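
The exact coupling term classes are listed in the API documentation; conceptually, a coupling term adds a corrective acceleration that drives the relative pose toward a desired offset. A library-independent sketch for the positional part:

import numpy as np

def relative_position_coupling(left_pos, right_pos, desired_offset, gain=100.0):
    """Corrective accelerations that pull two end-effectors toward a desired relative position."""
    error = (right_pos - left_pos) - desired_offset
    # split the correction symmetrically between both arms
    return 0.5 * gain * error, -0.5 * gain * error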

Propagation of DMP Distribution to State Space

If we have a distribution over DMP parameters, we can propagate it to state space through an unscented transform.

Script

This example has dependencies that are not publicly available (see Non-public Extensions above).
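
A compact, generic sketch of the unscented transform (not necessarily the library's exact implementation), which maps a Gaussian over parameters, e.g. DMP weights, through a nonlinear function such as the weight-to-trajectory mapping:

import numpy as np

def unscented_transform(mean, cov, f, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate a Gaussian N(mean, cov) through a nonlinear function f."""
    n = len(mean)
    lam = alpha ** 2 * (n + kappa) - n
    sqrt_cov = np.linalg.cholesky((n + lam) * cov)

    # 2n + 1 sigma points around the mean
    sigma_points = np.vstack([mean, mean + sqrt_cov.T, mean - sqrt_cov.T])
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + 1.0 - alpha ** 2 + beta

    # map sigma points through f and recover mean and covariance of the output
    Y = np.array([f(p) for p in sigma_points])
    mean_y = wm @ Y
    diff = Y - mean_y
    cov_y = (wc[:, np.newaxis] * diff).T @ diff
    return mean_y, cov_y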

Funding

This library was initially developed at the Robotics Innovation Center of the German Research Center for Artificial Intelligence (DFKI GmbH) in Bremen. During this phase, the work was supported by a grant from the German Federal Ministry for Economic Affairs and Energy (BMWi, FKZ 50 RA 1701).


Comments
  • Modify the initialization of T in dmp_open_loop_quaternion() to avoid numerical rounding errors

    The original initialization of T in dmp_open_loop_quaternion() is T = [start_t]; while t < run_t: last_t = t; t += dt; T.append(t). This accumulates numerical rounding errors, e.g. for run_t = 2.99: at t = 2.07, t += dt should yield 2.08, but in practice it becomes 2.0799999999, which causes the length of Yr to become 301. Finally, I am new to GitHub, so I am sorry if I did something wrong with this repo.

    opened by CodingCatMountain 5
  • A problem with CartesianDMP due to the parameter 'dt'...

    Hi, this package is very good and really helps me learn about learning from demonstrations. But last night I found a problem with open_loop, a function of the CartesianDMP class. The problem is the length of the Python list named Yr in this function. I checked the source code and found: my Y, which is passed to cartesian_dmp.imitate(T, Y), has length 600; Yp in CartesianDMP.open_loop(), which is returned by dmp_open_loop, also has length 600, which is correct; but Yr in CartesianDMP.open_loop() has length 601. I believe the relationship between T and dt in dmp_open_loop() and dmp_open_loop_quaternion() has a problem. Please check! T in dmp_open_loop() is initialized as T = np.arange(start_t, run_t + dt, dt), while T in dmp_open_loop_quaternion() is initialized as T = [start_t] with start_t = 0.0, followed by a loop: last_t = t; t += dt; T.append(t).

    opened by CodingCatMountain 4
  • CartesianDMP object has no attribute forcing_term

    I would like to save the weights of a trained CartesianDMP. There is no overloaded function get_weights(), so I guess the one from the DMP base class should work. However, calling it raises the error in the title:

    AttributeError: 'CartesianDMP' object has no attribute 'forcing_term'

    Do you know what the issue could be here? Thanks in advance.

    opened by buschbapti 3
  • Can this repo handle periodic motion and orientation?

    Thanks for sharing. Though DMPs are widely used to encode point-to-point movements, implementing periodic DMPs for translation and orientation is still challenging. Can this repository do this? If so, would you provide an example?

    opened by HongminWu 1