Motion Reconstruction Code and Data for Skills from Videos (SFV)

Overview

This repo contains the data and the code for the motion reconstruction component of the SFV paper:

SFV: Reinforcement Learning of Physical Skills from Videos
Transactions on Graphics (Proc. ACM SIGGRAPH Asia 2018)
Xue Bin Peng, Angjoo Kanazawa, Jitendra Malik, Pieter Abbeel, Sergey Levine
University of California, Berkeley

Project Page

Teaser Image

Data

The data for the videos can be found at this link.
It contains:

  • Input videos
  • Intermediate 2D OpenPose, tracks, and HMR outputs
  • Result video of before and after motion reconstruction
  • Output of motion reconstruction in BVH format, used to train the character

See the README in the tar file for more details.
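If you want to poke at the intermediate outputs before running anything, a generic HDF5 walk will list what each file contains. A minimal sketch; the file path below is illustrative, so substitute any .h5 file from the tar:

```python
import h5py

# List every group/dataset in an HDF5 file from the data release,
# printing dataset shapes where available. The path is illustrative.
with h5py.File("demo_data/openpose_output/run.h5", "r") as f:
    f.visititems(lambda name, obj: print(name, getattr(obj, "shape", "")))
```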

Requirements

  • TensorFlow
  • SMPL
  • The same models/ directory structure as in HMR (you need the trained models and neutral_smpl_with_cocoplus_reg.pkl); see the sanity check sketched below
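As a quick sanity check before running anything, you can verify that the expected models/ files are in place. A minimal sketch; the checkpoint name is a placeholder for whichever trained HMR checkpoint you downloaded:

```python
import os

# Files the pipeline expects under models/, following the HMR convention.
required = [
    "models/neutral_smpl_with_cocoplus_reg.pkl",
    "models/model.ckpt-667589.index",  # placeholder: use your HMR checkpoint's name
]
for path in required:
    print(("ok      " if os.path.exists(path) else "MISSING ") + path)
```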

Rotation augmented models

This repo uses fine-tuned models for OpenPose and HMR with rotation augmentation. The models used can be found here: ft-OpenPose, ft-HMR

Steps to run:

  1. python -m run_openpose

  2. python -m refine_video

I recommend starting with the preprocessed data packaged at the above link and beginning from step 2 (python -m refine_video). Then run step 1 on your own video.
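Step 1 produces per-frame OpenPose JSON files. As a reference for what they contain, here is a minimal sketch of pulling the 2D keypoints out of one frame, assuming the standard OpenPose JSON format where each detected person carries a flat [x, y, confidence] array under "pose_keypoints_2d" (older OpenPose versions used "pose_keypoints"):

```python
import json
import numpy as np

def load_openpose_frame(json_path):
    """Return a (num_people, num_joints, 3) array of [x, y, confidence]
    rows from one OpenPose output frame."""
    with open(json_path) as f:
        frame = json.load(f)
    people = [
        np.asarray(p["pose_keypoints_2d"], dtype=np.float32).reshape(-1, 3)
        for p in frame["people"]
    ]
    return np.stack(people) if people else np.zeros((0, 0, 3), np.float32)
```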

Comments

Note that this repo is more of a research code demo than my other project code releases. It's also slightly dated. I'm putting it out there in case it's useful for others. You may need to fix some quirks.

Pull requests/contributions welcome!

License

This particular repo is under the BSD license, but please follow the license agreements of the tools it builds on, such as SMPL and OpenPose.

June 28, 2019.

In this repo, motion reconstruction smoothes the HMR output. We recently released the demo for Human Mesh and Motion Recovery (HMMR), which will give you smoother outputs. You can apply motion reconstruction on top of the HMMR outputs, which is a better starting point. This is probably the best combination of the tools out there today.

I'm also using 2D pose from OpenPose here, together with my own hacky tracking code. However, there are more recent tools such as AlphaPose and PoseFlow that will compute the tracklets for you. (We use these in the HMMR codebase.)

Fitting the HMMR output to DensePose output would be another simple loss function to add to the motion reconstruction objective, to get a good 3D body fit to a video.
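For intuition, such a loss could be as simple as a confidence-weighted L2 reprojection term between DensePose's dense image points and the projected mesh vertices they correspond to. A minimal numpy sketch; every name here is hypothetical, and a real version would sit inside the motion reconstruction objective and use its camera projection:

```python
import numpy as np

def densepose_loss(projected_verts, dp_points, dp_vert_ids, dp_conf):
    """Confidence-weighted L2 distance between DensePose image points and
    the 2D projections of the mesh vertices they map to.

    projected_verts: (V, 2) mesh vertices projected into the image
    dp_points:       (N, 2) DensePose points in image coordinates
    dp_vert_ids:     (N,)   index of the mesh vertex for each point
    dp_conf:         (N,)   per-point confidence weights
    """
    diff = projected_verts[dp_vert_ids] - dp_points
    return float(np.sum(dp_conf * np.sum(diff ** 2, axis=1)))
```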

All of these would be good starter projects ;)

Another practical improvement: this repo uses the OpenDR renderer to render the results, which is slow and takes up most of the run time. In HMMR we use [the PyTorch NMR](https://github.com/daniilidis-group/neural_renderer) to render the results. The same logic can be adapted here.
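A rough sketch of the swap, assuming the neural_renderer_pytorch API; the camera setup below is illustrative and would need to match HMR's weak-perspective camera:

```python
import neural_renderer as nr  # pip install neural_renderer_pytorch

# Replace the OpenDR render call with a batched NMR render.
# vertices: (B, 6890, 3) float tensor on GPU; faces: (B, 13776, 3) int tensor.
renderer = nr.Renderer(camera_mode='look_at', image_size=256)
renderer.eye = nr.get_points_from_angles(2.732, 0, 0)  # distance, elevation, azimuth

def render_silhouettes(vertices, faces):
    return renderer(vertices, faces, mode='silhouettes')  # (B, 256, 256)
```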

Citation

If you use this code for your research, please consider citing:

@article{
	2018-TOG-SFV,
	author = {Peng, Xue Bin and Kanazawa, Angjoo and Malik, Jitendra and Abbeel, Pieter and Levine, Sergey},
	title = {SFV: Reinforcement Learning of Physical Skills from Videos},
	journal = {ACM Trans. Graph.},
	volume = {37},
	number = {6},
	month = nov,
	year = {2018},
	articleno = {178},
	numpages = {14},
	publisher = {ACM},
	address = {New York, NY, USA},
	keywords = {physics-based character animation, computer vision, video imitation, reinforcement learning, motion reconstruction}
} 
@inProceedings{kanazawaHMR18,
  title={End-to-end Recovery of Human Shape and Pose},
  author = {Angjoo Kanazawa
  and Michael J. Black
  and David W. Jacobs
  and Jitendra Malik},
  booktitle={Computer Vision and Pattern Recognition (CVPR)},
  year={2018}
}
Issues
  • Where is the PRETRAINED_MODEL?

    In refiner.py, you need a pretrained model named Feb12_2100_save75_model.ckpt-667589, but the HMR release doesn't include a model by that name, so how can I get it?

    opened by Zju-George
  • How to get root position

    How do you get the root position? Not the rotation or anything else, just the root position. I saw that your BVH files have root positions, but in your code the model doesn't predict it, so how did you get it?

    opened by Zju-George
  • Is there code for the imitation learning used to train the policy?

    I haven't run these two commands yet because I haven't configured the environment:

    python -m run_openpose

    python -m refine_video

    But I guess the first one produces the JSON file storing all the keypoints, and the second one produces the BVH and h5 files storing the animation.

    The question is: how do you train the policy? Is the training code not uploaded yet? I'm a newbie at deep learning and would appreciate it if you have time to answer.

    opened by Zju-George
  • BVH file format, or how to import into Unity

    Hello,

    I wanted to ask if anybody has implemented the write2bvh function or has another way of importing the results of the project into Unity as an animation. I have successfully imported the pre-existing .bvh files provided in the data folder, and I want to make my own BVH files by running the project. How can I achieve that?

    Thank you!

    opened by andreilica
  • Update to OpenPose v1.7.0, Python 3, TensorFlow 2

    Fixed all paths and indents; removed the OpenPose scale and resolution settings so that TensorFlow will not abort; changed all Python 2 idioms to Python 3 (e.g., d.iteritems() to iter(d.items())); removed dictionary deletion during iteration (entries are now deleted afterwards); updated the OpenPose key "pos_keypoints" to "pose_keypoints_2d".

    opened by cliarie
  • Demo data bounding boxes don't exist

    When I try to run refine_video.py on the demo videos and .h5 files, I get errors like:

    !!!./demo_data/openpose_output/run_bboxes.h5 doesnt exist!!!

    What exactly am I supposed to do with the .h5 and .bvh files in the demo data for the demo to work? There is no *_bboxes.h5 file in the demo data.

    opened by wkwan
  • Integrate HMMR

    Hi,

    first of all, thanks for releasing the code.

    Do you have any suggestions on how to integrate HMMR instead of HMR with this code? Something like a todo list or steps to go through?

    opened by tfederico