Bringing Characters to Life with Computer Brains in Unity

Overview

AI4Animation: Deep Learning for Character Control

This project explores the opportunities of deep learning for character animation and control as part of my Ph.D. research at the University of Edinburgh in the School of Informatics, supervised by Taku Komura. Over the last couple of years, this project has grown into a comprehensive framework for data-driven character animation, covering data processing, network training and runtime control, developed in Unity3D / TensorFlow / PyTorch. This repository demonstrates using neural networks for animating biped locomotion, quadruped locomotion, and character-scene interactions with objects and the environment, as well as sports and fighting games. Further advances on this research will continue to be added to this project.


SIGGRAPH 2021
Neural Animation Layering for Synthesizing Martial Arts Movements
Sebastian Starke, Yiwei Zhao, Fabio Zinno, Taku Komura, ACM Trans. Graph. 40, 4, Article 92.

Interactively synthesizing novel combinations and variations of character movements from different motion skills is a key problem in computer animation. In this research, we propose a deep learning framework to produce a large variety of martial arts movements in a controllable manner from raw motion capture data. Our method imitates animation layering using neural networks with the aim to overcome typical challenges when mixing, blending and editing movements from unaligned motion sources. The system can be used for offline and online motion generation alike, provides an intuitive interface to integrate with animator workflows, and is relevant for real-time applications such as computer games.

- Video - Paper -


SIGGRAPH 2020
Local Motion Phases for Learning Multi-Contact Character Movements
Sebastian Starke, Yiwei Zhao, Taku Komura, Kazi Zaman. ACM Trans. Graph. 39, 4, Article 54.

Not sure how to align complex character movements? Tired of phase labeling? Unclear how to squeeze everything into a single phase variable? Don't worry, a solution exists!

Controlling characters to perform a large variety of dynamic, fast-paced and quickly changing movements is a key challenge in character animation. In this research, we present a deep learning framework to interactively synthesize such animations in high quality, both from unstructured motion data and without any manual labeling. We introduce the concept of local motion phases, and show our system being able to produce various motion skills, such as ball dribbling and professional maneuvers in basketball plays, shooting, catching, avoidance, multiple locomotion modes as well as different character and object interactions, all generated under a unified framework.
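As a loose illustration of the concept (a simplified sketch only; the paper fits sinusoids to filtered contact signals rather than using the FFT-based estimate below, and all names here are placeholders), a local motion phase for one bone can be thought of as a small amplitude-weighted (sin, cos) feature extracted from the periodicity of that bone's contact signal:

    import numpy as np

    def local_phase_feature(contact, fps=60.0):
        """Rough sketch: estimate a local phase and amplitude for one bone from a
        window of its contact signal, and return a 2D phase vector feature.
        (Illustration only; not the paper's curve-fitting procedure.)"""
        c = np.asarray(contact, dtype=float)
        c = c - c.mean()                             # remove the DC component
        spectrum = np.fft.rfft(c)
        freqs = np.fft.rfftfreq(len(c), d=1.0 / fps)
        k = 1 + np.argmax(np.abs(spectrum[1:]))      # dominant non-DC frequency bin
        amplitude = 2.0 * np.abs(spectrum[k]) / len(c)
        phase = np.angle(spectrum[k])                # phase angle at that frequency
        # 2D local phase vector: amplitude-scaled (sin, cos) of the phase angle.
        return amplitude * np.array([np.sin(phase), np.cos(phase)]), freqs[k]

    # Dummy example: a periodic foot-contact pattern at ~1.5 Hz.
    t = np.arange(120) / 60.0
    contact = (np.sin(2.0 * np.pi * 1.5 * t) > 0.0).astype(float)
    vec, freq = local_phase_feature(contact)
    print(vec, freq)   # dominant frequency comes out at ~1.5 Hz

Each bone carries its own such phase feature, which is what allows the unaligned, multi-contact movements to be learned without a single global phase label.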

- Video - Paper - Code - Windows Demo - ReadMe -


SIGGRAPH Asia 2019
Neural State Machine for Character-Scene Interactions
Sebastian Starke+, He Zhang+, Taku Komura, Jun Saito. ACM Trans. Graph. 38, 6, Article 178.
(+Joint First Authors)

Animating characters can be an easy or difficult task - interacting with objects is one of the latter. In this research, we present the Neural State Machine, a data-driven deep learning framework for character-scene interactions. The difficulty in such animations is that they require complex planning of periodic as well as aperiodic movements to complete a given task. Creating them in a production-ready quality is not straightforward and often very time-consuming. Instead, our system can synthesize different movements and scene interactions from motion capture data, and allows the user to seamlessly control the character in real-time from simple control commands. Since our model directly learns from the geometry, the motions can naturally adapt to variations in the scene. We show that our system can generate a large variety of movements, including locomotion, sitting on chairs, carrying boxes, opening doors and avoiding obstacles, all from a single model. The model is responsive, compact and scalable, and is the first of such frameworks to handle scene interaction tasks for data-driven character animation.

- Video - Paper - Code & Demo - Mocap Data - ReadMe -


SIGGRAPH 2018
Mode-Adaptive Neural Networks for Quadruped Motion Control
He Zhang+, Sebastian Starke+, Taku Komura, Jun Saito. ACM Trans. Graph. 37, 4, Article 145.
(+Joint First Authors)

Animating characters can be a pain, especially those four-legged monsters! This year, we will be presenting our recent research on quadruped animation and character control at SIGGRAPH 2018 in Vancouver. The system can produce natural animations from real motion data using a novel neural network architecture, called Mode-Adaptive Neural Networks. Instead of optimising a fixed group of weights, the system learns to dynamically blend a group of weights into a further neural network, based on the current state of the character. As a result, the system does not require labels for the phase or locomotion gaits, but can learn from unstructured motion capture data in an end-to-end fashion.
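The core mechanism can be sketched in a few lines of PyTorch (simplified and illustrative; layer sizes, the two-layer structure and all names are placeholders rather than the released implementation): a small gating network predicts blending coefficients, and those coefficients mix several sets of expert weights into the weights actually used by the motion network.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class BlendedLinear(nn.Module):
        """Linear layer whose weights are a convex blend of K expert weight sets."""
        def __init__(self, num_experts, in_features, out_features):
            super().__init__()
            self.weight = nn.Parameter(torch.randn(num_experts, out_features, in_features) * 0.01)
            self.bias = nn.Parameter(torch.zeros(num_experts, out_features))

        def forward(self, x, blend):
            # blend: (batch, num_experts) coefficients from the gating network
            w = torch.einsum('bk,koi->boi', blend, self.weight)   # (batch, out, in)
            b = torch.einsum('bk,ko->bo', blend, self.bias)       # (batch, out)
            return torch.einsum('boi,bi->bo', w, x) + b

    class ModeAdaptiveNet(nn.Module):
        """Gating network plus blended motion network (illustrative sizes)."""
        def __init__(self, gate_in, motion_in, motion_out, num_experts=8, hidden=512):
            super().__init__()
            self.gating = nn.Sequential(
                nn.Linear(gate_in, 32), nn.ELU(),
                nn.Linear(32, num_experts),
            )
            self.layer1 = BlendedLinear(num_experts, motion_in, hidden)
            self.layer2 = BlendedLinear(num_experts, hidden, motion_out)

        def forward(self, gate_features, motion_features):
            blend = F.softmax(self.gating(gate_features), dim=-1)
            h = F.elu(self.layer1(motion_features, blend))
            return self.layer2(h, blend)

    # Example usage with dummy data and arbitrary feature sizes.
    net = ModeAdaptiveNet(gate_in=19, motion_in=480, motion_out=363)
    y = net(torch.randn(4, 19), torch.randn(4, 480))
    print(y.shape)  # torch.Size([4, 363])

In the paper, the gating input is derived from the state of the character (for example foot end-effector velocities), so the blend coefficients change as the motion changes.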

- Video - Paper - Code - Mocap Data - Windows Demo - Linux Demo - Mac Demo - ReadMe -

- Animation Authoring Tool -


SIGGRAPH 2017
Phase-Functioned Neural Networks for Character Control
Daniel Holden, Taku Komura, Jun Saito. ACM Trans. Graph. 36, 4, Article 42.

This demo builds on the original work on PFNN (Phase-Functioned Neural Networks) for character control. A demo in Unity3D using the original weights for terrain-adaptive locomotion is contained in the Assets/Demo/SIGGRAPH_2017/Original folder. Another demo on flat ground using the Adam character is contained in the Assets/Demo/SIGGRAPH_2017/Adam folder. In order to run them, you need to download the neural network weights from the link provided in the Link.txt file, extract them into the /NN folder, and store the parameters via the custom inspector button.
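For readers new to PFNN, here is a small numerical sketch of the core idea (the cubic Catmull-Rom blending of four control weight sets follows the paper's description; the layer sizes and everything else are illustrative placeholders): the network weights themselves are a cyclic function of the phase of the locomotion cycle.

    import numpy as np

    def cubic_catmull_rom(y0, y1, y2, y3, mu):
        """Cyclic cubic Catmull-Rom interpolation between control points y1 and y2."""
        return ((-0.5 * y0 + 1.5 * y1 - 1.5 * y2 + 0.5 * y3) * mu ** 3 +
                (y0 - 2.5 * y1 + 2.0 * y2 - 0.5 * y3) * mu ** 2 +
                (-0.5 * y0 + 0.5 * y2) * mu + y1)

    def phase_function(control_weights, phase):
        """Blend the control weight sets into one weight matrix for a phase in [0, 2*pi)."""
        k = len(control_weights)                      # number of control points (4 in PFNN)
        p = (phase / (2.0 * np.pi)) * k               # continuous index along the cycle
        i1 = int(p) % k
        mu = p - int(p)
        y = [control_weights[(i1 + j - 1) % k] for j in range(4)]
        return cubic_catmull_rom(*y, mu)

    # Dummy example: a single hidden layer whose weights vary with the phase.
    rng = np.random.default_rng(0)
    W_ctrl = [rng.standard_normal((512, 342)) * 0.01 for _ in range(4)]
    b_ctrl = [np.zeros(512) for _ in range(4)]

    x = rng.standard_normal(342)                      # network input for the current frame
    phase = 1.3                                       # current phase of the locomotion cycle
    W = phase_function(W_ctrl, phase)
    b = phase_function(b_ctrl, phase)
    h = np.maximum(W @ x + b, 0.0)                    # ReLU here; PFNN itself uses ELU
    print(h.shape)                                    # (512,)

In the full system, all layer weights are generated this way from the current phase before the network is evaluated each frame.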

- Video - Paper - Code (Unity) - Windows Demo - Linux Demo - Mac Demo -


Processing Pipeline

In progress. More information will be added soon.

Copyright Information

This project is intended for research and education purposes only, and is not freely available for commercial use or redistribution. The motion capture data is available only under the terms of the Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license.

Comments
  • How to train a fight animation?

    Hello, I tried to compile the Android version, but it could not be compiled successfully due to a missing library dependency. Can you provide the source code for that dependency? I have been following your paper since last year and waiting for your demo, and I recently saw that you posted it to GitHub. I am very interested in applications of AI in games, but I don't know much about neural networks, so I would like to start by learning from your demo.

    opened by hanbim520 14
  • Train failed

    I used only the single file "D1_001_KAN01_001.bvh" to train the model, to test whether the Motion Exporter output data can be trained on correctly. However, it failed: with my Motion Editor settings (screenshots attached), the character in my scene ends up upside down. Can you help me find the problem?

    opened by yh8899 9
  • How do I create my own training data?

    I have three questions about the SIGGRAPH 2017 paper:

    1. How do I create my own training data, given that I have VR devices like leg bands, hand controller, and headset?
    2. Is it possible to include hand and head input from VR devices into the model too (instead of only Up/Down/Left/Right input)?
    3. What is the training set size for the model provided in this repository?

    I will need to read more about the paper. Any advice is appreciated. Thank you!

    opened by off99555 8
  • Giving the AI spline waypoints

    How can I make the AI move the character along spline waypoints, like the one you showed at the end of your video? Also, do you have a Slack group, Gitter, or something like that?

    opened by siamaksalman 7
  • No Motion Editor found in scene in PFNN unity project

    Hi! I am new to Unity and this project, and I have run into a problem. After importing a BVH file using Data Processing/BVH Importer in the demo_adam (and demo_original) scene, I wanted to export it to Input.txt and Output.txt as training data, but the Motion Exporter just shows "No Motion Editor found in scene". I have no clue how to solve this. Is a scene.unity file missing from this project?

    opened by AndyVerne 6
  • In SIGGRAPH_2017, the character can't be controlled with the W, A, S, D keys after building and running the Unity project.

    Hello, I tried building and running the SIGGRAPH_2017 Unity project directly, without any edits, but in the built program the character does not respond to my input. One more question: how can I use the weights I trained with PFNN in the Unity project? Thank you so much. By the way, I really appreciate your work on AI4Animation. Hope you have a nice day :)

    opened by AndyVerne 6
  • Wrong Data Processing Results for MANN

    Hi, I tried to convert the raw motion capture data in MANN into the format used for neural network input, using the data processing scripts you provided in Unity (BVHImporter, MotionExporter etc.). However, the resulting Input.txt and Output.txt do not align with the ones you provided, in either the trajectory or the bone parameter fields. Training MANN with my own files also produced weird results, which suggests the generated files are wrong.

    I looked through the code and everything seems to be right. I haven't altered any of the code you released; I only clicked the "Export" button and added a Trajectory Module for each clip in the Motion Editor panel (leaving out the Style Module, since the style annotation does not affect the other parameters).

    Do you have any idea where things might go wrong?

    Thank you! :)

    opened by Fiona730 6
  • How do I try the project?

    Hello, I saw a video about this project and would like to try it, but found no instructions on how to do so.

    I assume it runs in the Unity game engine, but when I tried "Adding" the "Adam" folder on the Projects screen, it said that the path was invalid.

    Can you please provide some instructions?

    With regards, John

    opened by addeps3 5
  • Wolf not moving

    Hi! I tried to check your demo project, but after I followed your instructions and stored the network parameters, the wolf character does not move when I press the control buttons. I am new to Unity, so maybe it is something trivial. Do you have any suggestions? (Windows 10, Unity 2018.2.0f2)

    opened by zovathio 4
  • Network weights for Adam in PFNN

    Hi, thanks for your awesome work! I noticed that there are neither pretrained weights nor training data for the Adam model in PFNN. I just want to confirm whether these data are unreleased, in case I missed them somewhere.

    opened by Fiona730 3
  • Ballroom dancing dog?

    Not sure what's going on here, but I've managed to... well, mangle the dog. Or make him dance, not entirely sure. (Screenshots attached.)

    1. Make the dog fully sit (hold V for a few seconds)
    2. Release V
    3. Wait 2 seconds
    4. Tap V
    5. Keep repeating steps 3 and 4.

    Is this an issue with the network, or just the Unity/demo code?

    opened by Qix- 3
  • Siggraph Asia 2019 Motion Data Processing

    Hi Sebastian,

    Thank you for posting this amazing work!

    I am trying to reproduce the results from scratch (raw .bvh files -> asset data -> Input.txt/Output.txt for training), but the results I reproduce have some problems:

    1. The agent cannot turn left or right while running; when I press Shift+W+E/Q at the same time, the agent's motion breaks down.
    2. The agent's posture is unnatural when sitting down, and there is always an offset from the marked contact points. When I instead export your data as training data by pressing the Export Data button in the MotionExporter, the results do not have these problems.

    This is my data processing:

    1. Use the BVH Importer to import the .bvh files from MotionCapture.zip. For some .bvh files, such as Jump, RunTurn and WalkTurn, the Flip option is checked. The data is saved in Assets/MotionCapture_reproduce/;
    2. Copy the *.unity files in Assets/MotionCapture/ over the *.unity files in Assets/MotionCapture_reproduce/. I found that many actions require creating scene objects (such as Armchair, Avoid, Sit, etc.) by myself, which would be time-consuming to do manually, so I copied them directly;
    3. Click the Import option (see screenshot), and in 'public void Import()' in MotionEditor.cs I added code to copy the Modules, Sequences, Export, Framerate, MirrorAxis and Offset parameters from the data in Assets/MotionCapture/ to the data in Assets/MotionCapture_reproduce/;
    4. Use the MotionExporter in the Unity scene to export my own data (screenshot attached).

    I also noticed that there is far more data in Assets/MotionCapture/ than there are .bvh files in MotionCapture.zip, but I don't know where the extra data came from, so I only used the data in MotionCapture.zip to reproduce the results.

    So, is my data processing missing something, or which step is wrong? Where does the extra data in Assets/MotionCapture/ come from? Can you elaborate on your data processing from the raw .bvh files to the asset data?

    Need your help. Thanks a lot!

    opened by walkerwjt 0
  • Velocity details in DeepPhase

    Hello, thank you for your great work!

    Could you detail how the velocity is extracted, or point to the relevant code? It seems to be the acceleration that is extracted in the ReadMe.

    opened by YoungSeng 0
  • Is it possible to run at 30 Hz? (SIGGRAPH_2017)

    This code is included in BioAnimation_Adam.cs with a comment saying it is for a 60 Hz framerate:

    //Trajectory for 60 Hz framerate
    private const int Framerate = 60;       // framerate the trajectory is sampled at
    private const int Points = 111;         // total trajectory points: 60 past + 1 current + 50 future
    private const int PointSamples = 12;    // points actually fed to the network (every 10th point)
    private const int PastPoints = 60;      // trajectory points covering the past (1 second at 60 Hz)
    private const int FuturePoints = 50;    // trajectory points covering the future
    private const int RootPointIndex = 60;  // index of the current (root) point
    private const int PointDensity = 10;    // spacing between sampled points
    

    Is it possible to change these values to have it run at 30 Hz? If so, aside from changing the 'Framerate' variable to 30, what other values need to be changed?

    opened by benjaminnoer2112 0
  • Possible mistakes in the paper "DeepPhase"

    Hello Sebastian,

    I've sent you two emails but have not received any response. I am not sure whether there is a connection problem, so I am trying to contact you via GitHub issues instead.

    I found there might be two mistakes in the paper.

    1. In Eq. 5, the range of S should be $(-\pi, \pi)$, so I think S should be moved out of the innermost scope: $A \cdot \sin(2\pi(F \cdot \tau - S)) + B$ (Eq. 5) should become $A \cdot \sin(2\pi(F \cdot \tau) - S) + B$ (Eq. 5.1), or S should be divided by $2\pi$ before being passed into Eq. 5. I found it hard to predict an S that resembles the curve in Fig. 3 when using Eq. 5 directly. I tested Eq. 5 and Eq. 5.1 and plotted the resulting curves after training for 5 epochs (plots for Eq. 5 and Eq. 5.1 attached).

    2. In Eq. 9, the next phase is calculated by interpolating two phases and multiplying the result by $A_{t+dt}$ (screenshot of the equation attached). However, the phases $P_t$ and $P_{t+dt}$ already contain the amplitude information, so the expected prediction of $A_{t+dt}$ should be very close to one, meaning it merely rescales the interpolated result. I am not sure whether the amplitude should instead be predicted from the difference between two frames, as the frequency is?

    Besides, I am not sure how to handle the loss on the phase. I use an MSE loss between the phase predicted by Eq. 9 and the ground-truth phase, but the frequency of the predicted phase is inaccurate. Do we have to place additional losses on the amplitude and frequency? Which method do you use?
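    For reference, a minimal numeric sketch of the two parameterizations being compared (values are arbitrary; this only illustrates how the same shift S moves the curve under each form):

    import numpy as np

    A, F, B = 1.0, 1.5, 0.0          # arbitrary amplitude, frequency, offset
    S = np.pi / 2.0                  # a shift value in (-pi, pi)
    tau = np.linspace(-1.0, 1.0, 241)

    eq5  = A * np.sin(2.0 * np.pi * (F * tau - S)) + B    # shift scaled by 2*pi (Eq. 5)
    eq51 = A * np.sin(2.0 * np.pi * (F * tau) - S) + B    # shift applied directly (Eq. 5.1)

    # Effective phase offsets in radians: 2*pi*S vs. S.
    print(2.0 * np.pi * S, S)        # ~9.87 rad (>1.5 periods) vs. ~1.57 rad (a quarter period)
    print(float(np.abs(eq5 - eq51).max()))   # the two curves clearly differ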

    I will appreciate it if you correct me if I am wrong. I look forward to hearing from you.

    Best regards,

    Xiangjun Tang

    opened by yuyujunjun 1
  • Questions about "DeepPhase: Periodic Autoencoders for Learning Motion Phase Manifolds"

    Hi, Sebastian

    How are you? We are closely following your research work on applying deep learning to character animation, and I want to say it is great work! We are reading your SIGGRAPH 2022 paper "DeepPhase: Periodic Autoencoders for Learning Motion Phase Manifolds" and trying to reproduce the work, but we got stuck on some questions. I am wondering if you could help me with these detailed questions.

    1. What is the kernel size of the convolutional layers?
    2. What method did you use to initialize the weights?
    3. What validation/test loss did you achieve after finishing training?
    4. If I change the kernel size, the loss quite often becomes NaN; do you know what could be the reason for this?
    5. In the paper, does every channel connect to a unique fully connected layer? What is the activation function of the fully connected layer?
    6. Does the FFT layer have weights to learn as well? (A rough sketch of this step is included below.)
    7. The sampling time for a time window is 2 seconds, correct? And the T in "f" in formula (3) is also 2 seconds, right?

    We used your dataset from the paper "Neural State Machine for Character-Scene Interactions", but the lowest loss we could get is 0.2. We think this is too high and have not found a way to reduce it. Can you shed some light on this?

    Avoid: 18,863 frames (5.24 min); Carry: 53,094 (14.75 min); Crouch: 7,659 (2.13 min); Door: 58,479 (16.24 min); Jump: 4,511 (1.25 min); Loco: 59,859 (16.63 min); Sit: 199,472 (55.41 min); total: 401,937 frames (111 min).
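    Regarding question 6, here is a rough, assumption-laden sketch of how such an FFT parameterization step can look (kernel size, channel count and window length are placeholders, not the paper's values); in this sketch the FFT itself has no learnable parameters, and only the per-channel phase layer is learned:

    import math
    import torch
    import torch.nn as nn

    class PhaseParameterization(nn.Module):
        """Sketch: map each latent channel of a time window to (frequency, amplitude,
        offset, phase). The FFT step has no learnable parameters here; only the
        per-channel phase layer is learned."""

        def __init__(self, channels, frames, fps=60.0):
            super().__init__()
            self.frames = frames
            # Frequencies of the non-DC FFT bins for a window of `frames` samples.
            self.register_buffer('freqs', torch.fft.rfftfreq(frames, d=1.0 / fps)[1:])
            # One small learned layer per channel producing a (sin, cos) pair for the phase.
            self.phase_layer = nn.Conv1d(channels, 2 * channels, kernel_size=frames, groups=channels)

        def forward(self, latent):                                # latent: (batch, channels, frames)
            spectrum = torch.fft.rfft(latent, dim=-1)[..., 1:]    # drop the DC bin
            power = spectrum.abs() ** 2
            freq = (power * self.freqs).sum(-1) / power.sum(-1).clamp(min=1e-8)  # power-weighted mean frequency
            amp = 2.0 * power.sum(-1).sqrt() / self.frames        # rough amplitude estimate
            offset = latent.mean(-1)                              # per-channel DC offset
            sc = self.phase_layer(latent).view(latent.shape[0], -1, 2)
            phase = torch.atan2(sc[..., 0], sc[..., 1]) / (2.0 * math.pi)
            return freq, amp, offset, phase

    # Dummy usage: 5 latent channels over a 2-second window at 60 fps.
    layer = PhaseParameterization(channels=5, frames=120)
    f, a, b, s = layer(torch.randn(8, 5, 120))
    print(f.shape, a.shape, b.shape, s.shape)                     # each: torch.Size([8, 5])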

    Thanks a lot!

    opened by wengn 0