My Body is a Cage: the Role of Morphology in Graph-Based Incompatible Control

Overview

My Body is a Cage: the Role of Morphology in Graph-Based Incompatible Control

ICLR 2021

OpenReview | arXiv

Vitaly Kurin, Maximilian Igl, Tim Rocktäschel, Wendelin Boehmer, Shimon Whiteson

TL;DR

Providing the morphological structure as an input graph is not a useful inductive bias in Graph-Based Incompatible Control. If we let go of the structural information, we can do better with transformers.

@inproceedings{
kurin2021my,
title={My Body is a Cage: the Role of Morphology in Graph-Based Incompatible Control},
author={Vitaly Kurin and Maximilian Igl and Tim Rockt{\"a}schel and Wendelin Boehmer and Shimon Whiteson},
booktitle={International Conference on Learning Representations},
year={2021},
url={https://openreview.net/forum?id=N3zUDGN5lO}
}

Setup

All the experiments are done in a Docker container. To build it, run ./docker_build.sh <device>, where <device> can be cpu or cu101. It will use CUDA by default.

To build and run the experiments, you need a MuJoCo license. Put it in the root folder before running docker_build.sh.
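For example, assuming docker_build.sh takes the device tag as its only argument and that the MuJoCo key file is named mjkey.txt (use whatever file name the script expects):

cp /path/to/mjkey.txt .        # place the MuJoCo license key in the repo root
./docker_build.sh cu101        # build the CUDA image (default)
./docker_build.sh cpu          # or build a CPU-only image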

Running

./docker_run <device>    # either a GPU id or cpu
cd amorpheus             # select the experiment to replicate
bash cwhh.sh             # run it on a task
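
For example, to run on GPU 0 or on CPU only (the GPU id is whatever device your machine exposes):

./docker_run 0           # run the container on GPU 0
./docker_run cpu         # or run on CPU only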

We used Sacred with a remote MongoDB for experiment management. For the release, we changed Sacred to log to local files instead. You can switch it back to MongoDB by providing credentials in modular-rl/src/main.py.
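A minimal sketch of re-enabling a MongoDB observer in Sacred (Sacred >= 0.8 API; the experiment name, URL, and credentials below are placeholders, and the actual experiment object in modular-rl/src/main.py may be set up differently):

from sacred import Experiment
from sacred.observers import MongoObserver

ex = Experiment("amorpheus")  # hypothetical experiment name

# Log runs to a remote MongoDB instead of local files.
ex.observers.append(
    MongoObserver(
        url="mongodb://user:password@your-mongo-host:27017/?authSource=admin",
        db_name="amorpheus",
    )
)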

Acknowledgement

  • The code is built on top of the SMP repository.
  • The NerveNet Walkers environments are taken and adapted from the original repo.
  • The initial transformer implementation was taken from the official PyTorch tutorial and modified thereafter.

Comments
  • Evaluation of the generalization performance?

    Hello, Vitaly.

    Thanks for your interesting work!

    I have 3 questions about how to evaluate the generalization performance in your paper.

    1. Did you evaluate the model checkpoint saved last (i.e., after training)? I guess the last checkpoint is the default in your code, but it is also possible to evaluate other checkpoints saved during training, so could you clarify?

    2. In the generalization section in the Appendix of your paper, you say:

    The numbers show the average performance of three seeds evaluated on 100 rollouts and standard error of the mean.

    How did you compute the average of three seeds on 100 rollouts? I guess you computed the average performance for each seed, and then the mean and standard error over the 3 seeds (see the short sketch below this comment). I would like to hear from you and clarify the evaluation method.

    3. Which models did you evaluate? The evaluation table contains test sets of walkers, humanoids, and cheetahs. Did you evaluate separate checkpoints for walkers, humanoids, and cheetahs, or only the single cwhh checkpoint?

    Thanks for your great work again and I'm looking forward to your reply.

    Thank you.

    opened by sunghoonhong 4
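
    A minimal sketch of the aggregation described in question 2 above, assuming a matrix of per-seed rollout returns (an illustration of that guess, not necessarily the exact evaluation code):

    import numpy as np

    returns = np.random.rand(3, 100) * 1000   # hypothetical returns: (n_seeds, n_rollouts)
    per_seed = returns.mean(axis=1)            # average return over 100 rollouts for each seed
    mean = per_seed.mean()                     # mean over the 3 seeds
    sem = per_seed.std(ddof=1) / np.sqrt(len(per_seed))  # standard error of the mean
    print(f"{mean:.1f} ± {sem:.1f}")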
  • raise error.UnregisteredEnv('No registered env with id: {}'.format(id)) gym.error.UnregisteredEnv: No registered env with id: walker_4_flipped-v0

    Hello, Vitaly.

    Thanks for your fantastic work! I am trying to run experiments with your code, but I am stuck on:

    Traceback (most recent call last):
      File "/home/carla/anaconda3/envs/mujoco/lib/python3.6/site-packages/gym/envs/registration.py", line 132, in spec
        return self.env_specs[id]
    KeyError: 'walker_4_flipped-v0'

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/home/carla/anaconda3/envs/mujoco/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
        self.run()
      File "/home/carla/anaconda3/envs/mujoco/lib/python3.6/multiprocessing/process.py", line 93, in run
        self._target(*self._args, **self._kwargs)
      File "/home/carla/baselines/baselines/common/vec_env/subproc_vec_env.py", line 15, in worker
        envs = [env_fn_wrapper() for env_fn_wrapper in env_fn_wrappers.x]
      File "/home/carla/baselines/baselines/common/vec_env/subproc_vec_env.py", line 15, in <listcomp>
        envs = [env_fn_wrapper() for env_fn_wrapper in env_fn_wrappers.x]
      File "/home/carla/amorpheus/modular-rl/src/utils.py", line 18, in helper
        e = gym.make("environments:%s-v0" % env_name)
      File "/home/carla/anaconda3/envs/mujoco/lib/python3.6/site-packages/gym/envs/registration.py", line 156, in make
        return registry.make(id, **kwargs)
      File "/home/carla/anaconda3/envs/mujoco/lib/python3.6/site-packages/gym/envs/registration.py", line 100, in make
        spec = self.spec(path)
      File "/home/carla/anaconda3/envs/mujoco/lib/python3.6/site-packages/gym/envs/registration.py", line 142, in spec
        raise error.UnregisteredEnv('No registered env with id: {}'.format(id))
    gym.error.UnregisteredEnv: No registered env with id: walker_4_flipped-v0

    Something goes wrong in gym.make(). I have tried different gym and mujoco-py versions, such as mujoco-py==2.0.2.4 and mujoco-py==1.50.1.68, but it still doesn't work. I wonder if you can give some suggestions. Thanks for your great work again and I'm looking forward to your reply.

    Thank you.

    opened by Mingle0228 7
Owner
yobi byte
Machine Learning / RL PhD student at the University of Oxford. Ex: intern at Facebook, NVIDIA; Latent Logic.