An end-to-end library for editing and rendering motion of 3D characters with deep learning [SIGGRAPH 2020]

Overview

Deep-motion-editing


This library provides fundamental and advanced functions for working with 3D character animation in deep learning, using PyTorch. The code contains end-to-end modules, from reading and editing animation files to visualizing and rendering them (using Blender).

The main deep editing operations provided here, motion retargeting and motion style transfer, are based on two works published in SIGGRAPH 2020:

Skeleton-Aware Networks for Deep Motion Retargeting: Project | Paper | Video


Unpaired Motion Style Transfer from Video to Animation: Project | Paper | Video


This library is written and maintained by Kfir Aberman, Peizhuo Li and Yijia Weng. The library is still under development.

Prerequisites

  • Linux or macOS
  • Python 3
  • CPU or NVIDIA GPU + CUDA CuDNN

Quick Start

We provide pretrained models together with demo examples that use animation files in BVH format.

Motion Retargeting

Download and extract the test dataset from Google Drive or Baidu Disk (ye1q). Then place the Mixamo directory within retargeting/datasets.

To generate the demo examples with the pretrained model, run

cd retargeting
sh demo.sh

The results will be saved in retargeting/examples.

To reproduce the quantitative results with the pretrained model, run

cd retargeting
python test.py

The retargeted demo results, which include both intra-structural and cross-structural retargeting, will be saved in retargeting/pretrained/results.

Motion Style Transfer

To generate the demo examples, simply run

sh style_transfer/demo.sh

The results will be saved in style_transfer/demo_results, where each folder contains the raw output (raw.bvh) and the output after footskate clean-up (fixed.bvh).

Train from scratch

We provide instructions for retraining our models.

Motion Retargeting

Dataset

We use the Mixamo dataset to train our model. You can download our preprocessed data from Google Drive or Baidu Disk (4rgv), then place the Mixamo directory within retargeting/datasets.

Otherwise, if you want to download the Mixamo dataset yourself or use your own dataset, please follow the instructions below. Unless specifically mentioned, all scripts should be run in the retargeting directory.

  • To download Mixamo data on your own, you can refer to this tutorial. You will need to download the motions as fbx files (skin is not required) and make a subdirectory for each character in retargeting/datasets/Mixamo. In our original implementation we downloaded 60 fps fbx files and downsampled them to 30 fps. Since training is unpaired, it is recommended to divide all motions into two equal-size sets for each group, and into equal-size sets for each character within each group. If you use your own data, make sure that your dataset consists of bvh files sharing the same T-pose, and put it in subdirectories of retargeting/datasets/Mixamo.

  • Enter the retargeting/datasets directory and run blender -b -P fbx2bvh.py to convert the fbx files to bvh files. If you already have bvh files as your dataset, please skip this step.

  • In our original implementation, we manually split three joints for skeletons in group A. If you want to follow our routine, run python datasets/split_joint.py. This step is optional.

  • Run python datasets/preprocess.py to simplify the skeleton by removing less interesting joints (e.g. fingers) and to convert the bvh files into npy files. If you use your own data, you will need to define the simplified structure in retargeting/datasets/bvh_parser.py; this information is currently hard-coded. See the comments in the source file for more details: there are four steps to make your own dataset work, and an illustrative joint list is sketched after this list.

  • Training and testing characters are hard-coded in retargeting/datasets/__init__.py. You will need to modify it if you want to use your own dataset.
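
For reference, such a simplified structure is just a hard-coded list of the joint names to keep, in the spirit of the corps_name_* lists in retargeting/datasets/bvh_parser.py. The list below is only an illustrative sketch with common Mixamo-style joint names; the exact variable names and the joints your skeleton needs may differ.

    # Illustrative only: a simplified-skeleton joint list, similar in spirit to the
    # corps_name_* lists hard-coded in retargeting/datasets/bvh_parser.py.
    corps_name_example = [
        'Hips',
        'LeftUpLeg', 'LeftLeg', 'LeftFoot', 'LeftToeBase',
        'RightUpLeg', 'RightLeg', 'RightFoot', 'RightToeBase',
        'Spine', 'Spine1', 'Spine2', 'Neck', 'Head',
        'LeftShoulder', 'LeftArm', 'LeftForeArm', 'LeftHand',
        'RightShoulder', 'RightArm', 'RightForeArm', 'RightHand',
    ]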

Train

After preparing the dataset, simply run

cd retargeting
python train.py --save_dir=./training/

It will use default hyper-parameters to train the model and save the trained model in the retargeting/training directory. More options are available in retargeting/option_parser.py. You can use TensorBoard to monitor training progress by running

tensorboard --logdir=./retargeting/training/logs/

Motion Style Transfer

Dataset

  • Download the dataset from Google Drive or Baidu Drive (zzck). The dataset consists of two parts: one is taken from the motion style transfer dataset proposed by Xia et al. and the other is our BFA dataset; both parts contain .bvh files retargeted to the standard skeleton of the CMU mocap dataset.

  • Extract the .zip files into style_transfer/data

  • Pre-process data for training:

    cd style_transfer/data_proc
    sh gen_dataset.sh

    This will produce xia.npz, bfa.npz in style_transfer/data.
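
    If you want to sanity-check the produced files, a minimal sketch (assuming they are standard NumPy .npz archives; the key names depend on the pre-processing script) is:

    import numpy as np

    # Inspect the arrays stored in the pre-processed dataset.
    data = np.load('style_transfer/data/xia.npz', allow_pickle=True)
    print(data.files)
    for key in data.files:
        arr = data[key]
        print(key, getattr(arr, 'shape', type(arr)))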

Train

After downloading the dataset, simply run

python style_transfer/train.py

Style from videos

To run our models at test time on your own videos, first use OpenPose to extract the 2D joint positions from the video, then use the resulting JSON files as described in the demo examples.
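
As a minimal sketch of what the OpenPose output looks like (this is OpenPose's own JSON format, not a loader from this library; the file name is hypothetical), the 2D joints of one frame can be read as follows:

    import json
    import numpy as np

    # OpenPose writes one *_keypoints.json file per video frame.
    with open('frame_000000_keypoints.json') as f:
        frame = json.load(f)

    person = frame['people'][0]                    # first detected person
    kpts = np.array(person['pose_keypoints_2d'])   # flat [x0, y0, c0, x1, y1, c1, ...]
    joints = kpts.reshape(-1, 3)[:, :2]            # (num_joints, 2) pixel positions
    confidence = kpts.reshape(-1, 3)[:, 2]         # per-joint detection confidence
    print(joints.shape, confidence.mean())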

Blender Visualization

We provide a simple wrapper of Blender's Python API (2.80) for rendering 3D animations.

Prerequisites

The Blender releases distributed from blender.org include a complete Python installation on all platforms, which means that any extensions you have installed in your system's Python won't appear in Blender.

To use external Python libraries, you can install new packages directly into Blender's Python distribution. Alternatively, you can replace the default Blender Python interpreter as follows:

  1. Remove the built-in Python directory: [blender_path]/2.80/python.

  2. Make a symbolic link to (or simply copy) a Python interpreter at [blender_path]/2.80/python, e.g. ln -s ~/anaconda3/envs/env_name [blender_path]/2.80/python

This interpreter should be Python 3.7.x and contain at least numpy and scipy.
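
A quick way to verify which interpreter Blender actually picks up, and whether the required packages are importable from it, is to run a short check script through Blender itself (a minimal sketch; the file name check_env.py is arbitrary):

    # check_env.py -- run with: blender -b -P check_env.py
    import sys

    print('Python:', sys.version)
    for module in ('numpy', 'scipy'):
        try:
            __import__(module)
            print(module, 'is available')
        except ImportError:
            print(module, 'is MISSING')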

Usage

Arguments

Due to the way Blender handles command-line arguments, the argument list for the script must be separated from the Python file with an extra '--', for example:

blender -P render.py -- --arg1 [ARG1] --arg2 [ARG2]

engine: "cycles" or "eevee". Please refer to the Rendering section for more details.

render: 0 or 1. If set to 1, the animation is rendered outside Blender's GUI. It is recommended to use render = 0 if you need to manually adjust the camera.

The full parameter list can be displayed with: blender -P render.py -- -h
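
Inside a Blender Python script, the part of sys.argv after '--' is typically parsed like this (a sketch of the common pattern; render.py's actual options are listed by the -h command above):

    import argparse
    import sys

    # Blender keeps the full command line in sys.argv; only the part after '--'
    # is meant for the script's own parser.
    argv = sys.argv[sys.argv.index('--') + 1:] if '--' in sys.argv else []

    parser = argparse.ArgumentParser()
    parser.add_argument('--engine', type=str, default='eevee', help="'cycles' or 'eevee'")
    parser.add_argument('--render', type=int, default=0, help='1 renders without the GUI')
    args = parser.parse_args(argv)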

Load bvh File (load_bvh.py)

To load example.bvh, run blender -P load_bvh.py. Please complete the preparation described in the Prerequisites section first.

Note that currently it uses primitive_cone with 5 vertices for limbs.

Note that Blender and bvh files use different xyz coordinate systems: in a bvh file the "height" axis is the y-axis, while in Blender it is the z-axis. load_bvh.py swaps the axes in the initialization function of the BVH_file class.

Currently all End Sites in the bvh file are discarded; this is due to the external code used in utils/.

After loading the bvh file, its height is normalized to 10.
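
For intuition, the coordinate conversion and the height normalization can be sketched as follows (one common y-up to z-up convention; the actual code in load_bvh.py may use a different sign convention):

    import numpy as np

    def bvh_to_blender(position):
        # Rotate +90 degrees about the x-axis: the BVH (x, y, z) with y as the
        # height axis becomes (x, -z, y) in Blender, where z is the height axis.
        x, y, z = position
        return np.array([x, -z, y])

    def normalize_height(joint_positions, target_height=10.0):
        # Scale all (already converted) joint positions so that the character's
        # overall height along Blender's z-axis becomes target_height.
        height = joint_positions[:, 2].max() - joint_positions[:, 2].min()
        return joint_positions * (target_height / height)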

Material, Texture, Light and Camera (scene.py)

This script adds a checkerboard floor, a camera and a "sun" light to the scene, and applies a basic color material to the character.

The floor is placed at y = 0 and should be adjusted manually if needed (this depends on the character parameters in the bvh file).
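
For reference, the basic scene ingredients can be created with Blender's standard Python API roughly as follows (a minimal sketch using stock bpy 2.80 operators, not the exact calls made by scene.py):

    import bpy

    # Floor plane at the origin, a sun light above it, and a camera looking at the scene.
    bpy.ops.mesh.primitive_plane_add(size=40, location=(0.0, 0.0, 0.0))
    bpy.ops.object.light_add(type='SUN', location=(0.0, 0.0, 10.0))
    bpy.ops.object.camera_add(location=(15.0, -15.0, 8.0), rotation=(1.2, 0.0, 0.8))
    bpy.context.scene.camera = bpy.context.object  # make the new camera the active one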

Rendering

We support the two render engines provided in Blender 2.80, Eevee and Cycles, which trade off speed against quality.

Eevee (left) is a fast, real-time render engine that provides limited quality, while Cycles (right) is a slower, unbiased, ray-tracing render engine that produces photorealistic results. Cycles also supports CUDA and OpenCL acceleration.
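
Switching between the two engines from a script comes down to setting the scene's render engine (standard Blender 2.80 property names, independent of this library):

    import bpy

    # Fast preview-quality Eevee, or slower ray-traced Cycles.
    bpy.context.scene.render.engine = 'BLENDER_EEVEE'   # or 'CYCLES'

    # When using Cycles, GPU acceleration can be requested as well.
    if bpy.context.scene.render.engine == 'CYCLES':
        bpy.context.scene.cycles.device = 'GPU'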

Skinning

Automatic Skinning

We provide a Blender script that applies "skinning" to the output skeletons. You first need to download the fbx file which corresponds to the targeted character (for example, "mousey"). Then you can get a skinned animation by simply running

blender -P blender_rendering/skinning.py -- --bvh_file [bvh file path] --fbx_file [fbx file path]

Note that the script might not work well for all fbx and bvh files. If it fails, you can try to tweak the script or follow the manual skinning guideline below.

Manual Skinning

Here we provide a "quick and dirty" guideline for applying a skin to the resulting bvh files with Blender:

  • Download the fbx file that corresponds to the retargeted character (for example, "mousey")
  • Import the fbx file to blender (uncheck the "import animation" option)
  • Merge meshes - select all the parts and merge them (ctrl+J)
  • Import the retargeted bvh file
  • Click "context" (menu bar) -> "Rest Position" (under skeleton)
  • Manually align the mesh and the skeleton (rotation + translation)
  • Select the skeleton and the mesh (the skeleton object should be highlighted)
  • Click Object -> Parent -> with automatic weights (or Ctrl+P)

Now the skeleton and the skin are bound and the animation can be rendered.

Acknowledgments

The code in the utils directory is mostly taken from Holden et al. [2016].
In addition, part of the MoCap dataset is taken from Adobe Mixamo and from the work of Xia et al.

Citation

If you use this code for your research, please cite our papers:

@article{aberman2020skeleton,
  author = {Aberman, Kfir and Li, Peizhuo and Sorkine-Hornung, Olga and Lischinski, Dani and Cohen-Or, Daniel and Chen, Baoquan},
  title = {Skeleton-Aware Networks for Deep Motion Retargeting},
  journal = {ACM Transactions on Graphics (TOG)},
  volume = {39},
  number = {4},
  pages = {62},
  year = {2020},
  publisher = {ACM}
}

and

@article{aberman2020unpaired,
  author = {Aberman, Kfir and Weng, Yijia and Lischinski, Dani and Cohen-Or, Daniel and Chen, Baoquan},
  title = {Unpaired Motion Style Transfer from Video to Animation},
  journal = {ACM Transactions on Graphics (TOG)},
  volume = {39},
  number = {4},
  pages = {64},
  year = {2020},
  publisher = {ACM}
}