Supplementary code for the SIGGRAPH 2021 paper: Discovering Diverse Athletic Jumping Strategies

Overview

SIGGRAPH 2021: Discovering Diverse Athletic Jumping Strategies

  • project page
  • paper
  • demo video


Prerequisites

Important Notes

We suspect there is a bug in Linux gcc > 9.2 or kernel > 5.3, or that our code is somehow incompatible with them: with newer C++ compilers, our code produces large numerical errors from an unknown source. Please use an older C++ compiler or test the project on Windows.

C++ Setup

This project has C++ components. There is a CMake project inside the Kinematic folder, set up so that it can be built on both Linux and Windows. Use cmake, cmake-gui, or Visual Studio to build it. The Eigen library is required.
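A minimal build sketch on Linux, assuming a standard out-of-source CMake build from the Kinematic folder and that Eigen is installed where CMake can find it:

cd Kinematic
mkdir build && cd build
# configure; add -DCMAKE_BUILD_TYPE=Release for an optimized build
cmake ..
cmake --build .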

Python Setup

Install the Python requirements listed in requirements.txt. Exact versions shouldn't matter; you should be safe installing the latest versions of these packages.
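For example, from the project root:

pip install -r requirements.txt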

Rendering Setup

To visualize training results, please set up our simulation renderer.

  • Clone UnityKinematics and follow its build instructions. It is a flexible networking utility that sends raw simulation geometry data to Unity for rendering purposes.
  • Copy [UnityKinematics build folder]/pyUnityRenderer into this root project folder (the copy steps are sketched as shell commands after this list).
  • SimRenderer is a sample Unity project in which you can render the scenes for this project. Clone SimRenderer outside this project folder.
  • After building UnityKinematics, copy [UnityKinematics build folder]/Assets/Scripts/API to SimRenderer/Assets/Scripts. Start Unity and load the SimRenderer project; it is then ready to use.
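As a sketch, with $KIN_BUILD standing in for your [UnityKinematics build folder] and SimRenderer cloned as a sibling of this repository, the two copy steps amount to:

# copy the Python-side renderer client into this repo
cp -r "$KIN_BUILD/pyUnityRenderer" .
# copy the networking API scripts into the SimRenderer Unity project
cp -r "$KIN_BUILD/Assets/Scripts/API" ../SimRenderer/Assets/Scripts/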

Training P-VAE

We have included a pre-trained model in results/vae/models/13dim.pth. If you would like to retrain the model, run the following:

python train_pose_vae.py

This will generate the new model in results/vae/test**/test.pth. Copy the .pth file and the associated .pth.norm.npy file into results/vae/models, then update the model key in presets/default/vae/vae.yaml to use your new model.
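As a sketch, assuming the retraining run produced results/vae/test01 (a hypothetical name; substitute your actual test** directory) and that you call the new model myvae:

# hypothetical directory and model names; adjust to your run
cp results/vae/test01/test.pth results/vae/models/myvae.pth
cp results/vae/test01/test.pth.norm.npy results/vae/models/myvae.pth.norm.npy
# then point the model key in presets/default/vae/vae.yaml at the new model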

Train Run-ups

python train.py runup

Modify presets/custom/runup.yaml to change parts of the target take-off features. Refer to Appendix A in the paper for reference parameters.
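To see where these take-off feature keys live before editing, you can search the preset; the key names below are taken from a user report in the Comments section, so treat them as illustrative:

grep -n -E "max_iterations|angular_v|linear_v_z" presets/custom/runup.yaml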

After training, run

python once.py runup no_render results/runup***/checkpoint_2000.tar

to generate the take-off state file, in .npy format, which is used when training the jump controller (see Train Jumpers below).
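A quick sanity check on the generated file (a sketch: the runup* glob assumes your results directory matches that pattern, and that the state file is a plain NumPy array):

# print the path and array shape of the generated take-off state
python -c "import glob, numpy as np; p = glob.glob('results/runup*/checkpoint_2000.tar.npy')[0]; print(p, np.load(p).shape)"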

Train Jumpers

Open presets/custom/jump.yaml and set env.highjump.initial_state to the path of the generated take-off state file, e.g. results/runup***/checkpoint_2000.tar.npy. Then set env.highjump.wall_rotation to specify the wall orientation (in degrees). Refer to Appendix A in the paper for reference parameters (note that the paper uses radians). Run

python train.py jump

to start training.
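When copying wall_rotation values from Appendix A, remember to convert radians to degrees first; for example (0.79 rad is a hypothetical value, not a recommended setting):

# convert a radian value from the paper to the degrees expected by jump.yaml
python -c "import math; print(math.degrees(0.79))"  # ~45.26 degrees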

Start the provided SimRenderer (in Unity), enter play mode, then run

python evaluate.py jump results/jump***/checkpoint_***.tar

to evaluate and visualize the motion at any time. Note that env.highjump.initial_wall_height must be set to the training height at the time of the checkpoint for correct evaluation. The training height is logged both to the console and to TensorBoard. You can start TensorBoard with

python -m tensorboard.main --bind_all --port xx --logdir results/jump***/
Comments
  • Question about novelty reward term


https://github.com/arpspoof/Jump/blob/1c9c1bd5c499e24bab25eb7decaa772b60798794/env/jumper/highjump.py#L184-L199 Hi, I read your paper and became interested in this reward term. I could not find the intention of the reward term (on lines 191-199) in your paper. What is the role of this term, and what is q_bases? (Why should the minimum angle difference from the root quaternion be larger than pi/2?)

    opened by y0ngw00 2
  • there is no run-up process. Is there a problem with my parameter setting?


After 12000 training iterations, I ran evaluate.py; the rendering is different from that in the video, and there is no run-up process. Is there a problem with my parameter settings?

    opened by wp133716 0
  • Bayesian Diversity Search


    Hi,

    I found your paper very interesting!

I found the P-VAE training (train_pose_vae.py) and the PPO training for motion control (train.py). Can you tell me which part of the released code implements the Bayesian diversity search? The paper says it is implemented using GPflow, but I found no code using it.

    Thank you.

    opened by sunny-Codes 0
  • Cannot learn to jump.


I cannot reproduce the results from the paper. When training the jump policy, I always get a reward of 0.

The default values for learning the run-up policy are:

algorithm.max_iterations: 2000
experiment.env: jumper_run2
env.jumper_run2.angular_v: [-3.0, -3.0, 1.0]
env.jumper_run2.linear_v_z: -2.4

I trained the jump policy with the following parameters, which are the recommended ones for learning the Fosbury flop:

algorithm.max_iterations: 12000
experiment.env: highjump
# initial state file generated by the run-up training
env.highjump.initial_state: results/runup-2022-Feb-10-175005/checkpoint_2000.tar.npy
# wall orientation in degrees
env.highjump.wall_rotation: -0.05
# must correspond to the training height of the checkpoint
env.highjump.initial_wall_height: 0.5

    opened by fredericgo 0
  • Input layer size VAE


    Hi,

I noticed that you used a layer size of 200 for the input features, but in the end you seem to use only the joint positions. Is there something I'm missing? Do you train on a larger feature set (like velocity) to get more motion differentiation, and then use only a small portion (the joint positions), adding the offset afterwards?

    Thank you.

    opened by antho6222 0