Character Controllers using Motion VAEs

Overview

This repo is the codebase for the SIGGRAPH 2020 paper of the same title. Please find the paper and demo at our project website: https://www.cs.ubc.ca/~hyuling/projects/mvae/.

Quick Start

This library should run on Linux, Mac, or Windows.

Install Requirements

# Create and activate a virtual environment (optional but recommended)
python -m venv venv
source venv/bin/activate  # on Windows: venv\Scripts\activate

cd MotionVAEs
pip install -r requirements.txt
NOTE: On Windows, installing PyBullet requires Visual C++ 14 or higher. You can get it from https://visualstudio.microsoft.com/visual-cpp-build-tools/.

Run Pretrained Models

Run the pretrained models using the play scripts. The results are rendered in PyBullet. Use the mouse to control the camera. Press r to reset the task and g for additional controls.

cd vae_motion

# Random Walk
python play_mvae.py --vae models/posevae_c1_e6_l32.pt

# Control Tasks: {Target, Joystick, PathFollow, HumanMaze}Env-v0
python play_controller.py --dir models --env TargetEnv-v0

Train from Scratch

Train models from scratch using train scripts.

The train_mvae.py script assumes the mocap data is located at environments/mocap.npz. The original training data is not included in this repo, but it can be easily extracted from other public datasets. Please refer to our paper for more details on the input format. All training parameters can be set inside main() in the code.
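Since the data format is described only in the paper, here is a minimal sketch of packing per-frame features into an .npz file. The array key "data" and the exact feature layout (root velocity/facing plus per-joint positions, velocities, and 6D orientations) are assumptions based on the paper, not verified against the repo:

```python
import numpy as np

# Hypothetical mocap.npz layout: one row of pose features per frame.
# feature_dim assumes 3 root values plus, per joint, a 3D position,
# a 3D velocity, and a 6D rotation representation.
num_frames, num_joints = 1000, 22
feature_dim = 3 + num_joints * (3 + 3 + 6)  # 264 joint values + 3 root values

frames = np.zeros((num_frames, feature_dim), dtype=np.float32)
np.savez("mocap.npz", data=frames)

loaded = np.load("mocap.npz")["data"]
print(loaded.shape)
```

Check the shapes the repo actually expects against environments/mocap.npz before training; the key name and dimensions here are illustrative only.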

Use train_controller.py to train controllers on top of trained MVAE models. The trained model path, control task, and learning hyperparameters can be set inside main() in the code. The task names follow the same convention as above, e.g. TargetEnv-v0, JoystickEnv-v0, and so on.
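Since there is no command-line flag interface for these settings, editing main() typically means changing a handful of assignments. The snippet below is a hypothetical sketch of the kind of values involved; the actual variable names in the repo may differ:

```python
# Hypothetical configuration sketch; the repo sets these inside main()
# in train_controller.py, and the exact names are assumptions.
config = {
    "env_name": "TargetEnv-v0",                   # control task to train on
    "mvae_path": "models/posevae_c1_e6_l32.pt",   # pretrained MVAE checkpoint
    "num_parallel": 100,                          # parallel rollouts (assumption)
    "learning_rate": 3e-4,                        # PPO learning rate (assumption)
}
print(config["env_name"])
```

Swap "TargetEnv-v0" for any of the other task names (JoystickEnv-v0, PathFollowEnv-v0, HumanMazeEnv-v0) to train a different controller.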

Citation

Please cite the following paper if you find our work useful.

@article{ling2020character,
  author    = {Ling, Hung Yu and Zinno, Fabio and Cheng, George and van de Panne, Michiel},
  title     = {Character Controllers Using Motion VAEs},
  year      = {2020},
  publisher = {Association for Computing Machinery},
  volume    = {39},
  number    = {4},
  journal   = {ACM Trans. Graph.}
}
Comments
  • What are the num_parallels for?

    Hi again. I have been experimenting with the code for months. I read the paper over and over again; it seems the controller is for synthesizing motion.

    I made my own version of the environment to train characters to compete with each other, and I noticed there is a num_parallel variable which visualizes numerous characters.

    However, as written in the code, the action space dimension seems to be for a single character only, unlike other things such as root_history, which includes a num_parallel dimension.

    Does the num_parallel variable exist just for training? I am confused.

    opened by ameliacode 4
  • Question about calculating target angle in JoyStick env

    Hello, I have a question about the target angle in mocap_env.py.

    The definition of the target angle is

    target_angle = (
                torch.atan2(target_delta[:, 1], target_delta[:, 0]).unsqueeze(1)
                + self.root_facing
            )
    

    which seems to be a relative angle plus the current facing angle.

    But when calculating the reward, you take the cosine of target_angle instead of the cosine of the relative angle:

    def calc_progress_reward(self):
        _, target_angle = self.get_target_delta_and_angle()
        direction_reward = target_angle.cos().add(-1)
        speed = self.next_frame[:, [0, 1]].norm(dim=1, keepdim=True)
        speed_reward = (self.target_speed_buf - speed).abs().mul(-1)
        return (direction_reward + speed_reward).exp()
    

    Did I misunderstand something?

    opened by heyuanYao-pku 4
  • reset() takes 1 positional argument but 2 were given

    Hello, thank you for the great work!

    While running vae_motion/train_controller.py, an error shows up as below:

    Traceback (most recent call last):
      File "train_controller.py", line 240, in <module>
        main()
      File "train_controller.py", line 206, in main
        obs = env.reset(reset_indices)
    TypeError: reset() takes 1 positional argument but 2 were given

    Can I ask how it would be fixed?

    opened by soomean 4
  • A question about using the joints positions and joints orientations at the same time

    The paper mentions using the joint positions and orientations at the same time to represent a motion. However, I think either one alone is able to represent a character adequately. Also, I noticed that the code renders the character via the function set_joint_positions. I think it uses the positions of each joint, but I am not sure what role each joint's orientation plays. I would appreciate it if you could reply!

    opened by yuyujunjun 2
  • can not get good result when training from Scratch

    Thanks for your great work. I used https://github.com/ubisoft/ubisoft-laforge-animation-dataset to train from scratch. The code for reading the BVH data is based on:

    from fairmotion.data import bvh
    
    motion = bvh.load(BVH_FILENAME)
    
    positions = motion.positions(local=False)  # (frames, joints, 3)
    velocities = positions[1:] - positions[:-1]
    orientations = motion.rotations(local=False)[..., :, :2].reshape(-1, 22, 6)
    

    I only changed 22 to 21 because the new dataset has 21 joints. Here is my last epoch log: 237m 45s (- 0m 00s) (140 100.0%) | Recon: 1.618e-02 | KL: 2.287e-07 | PP: 0.000e+00. I then played it in random walk mode with your code, but the result is not normal. Can you give me some advice?

    opened by renrenzsbbb 3
  • Question about "dist_entropy" when updating PPO

    Hi, I am reading your code and have a problem with evaluate_actions when updating PPO:

    • https://github.com/electronicarts/character-motion-vaes/blob/main/algorithms/ppo.py#L95

    I notice that you compute dist_entropy along with the action and value losses, so it participates in backpropagation. Although dist_entropy has no effect in your code, since entropy_coef defaults to 0, I am still curious how it functions and why you use it (what exactly is "An ugly hack for my KFAC implementation."? :stuck_out_tongue_closed_eyes:)

    Thanks

    opened by quintus0505 2
  • Acyclic motions conditioning

    Hello,

    In the article corresponding to this repository, Section 7.4 (Acyclic Motions) mentions an idea about additionally conditioning the VAE model. It says the model can generate the next frame given a one-hot encoding of an acyclic action like a kick or a header.

    I have gone through the repository code, but I might have missed this part. As far as I can see, at least for the default model (Mixture VAE), the encoder conditions only on the given number of previous frames (https://github.com/electronicarts/character-motion-vaes/blob/main/vae_motion/models.py#L87). Did I miss this additional conditioning part, or was it not included in the current state of the repository?

    Thank you!

    opened by Gabriel-Bercaru 2
  • How to generate mocap.npz?

    Hi, how do I generate mocap.npz? It does not seem easy to me. Can you give a clue on how to generate mocap.npz from a public mocap dataset?

    The train_mvae.py script assumes the mocap data to be at environments/mocap.npz. The original training data is not included in this repo; but can be easily extracted from other public datasets.

    Thanks very much!

    BEST

    opened by Minotaur-CN 15
Owner
Electronic Arts