
Minimal Hand

A minimal solution for hand motion capture from a single color camera at over 100 fps. Easy to use, plug and play.

[teaser image]

This project provides the core components for hand motion capture:

  1. estimating joint locations from a monocular RGB image (DetNet)
  2. estimating joint rotations from locations (IKNet)

We focus on:

  1. ease of use (all you need is a webcam)
  2. time efficiency (on a GTX 1080 Ti: 8.9 ms for DetNet, 0.9 ms for IKNet)
  3. robustness to occlusion, hand-object interaction, fast motion, and changing scale and viewpoint

Some links: [video] [paper] [supp doc] [webpage]

The author does not currently have time to prepare the training code for release; however, implementing the training part should not be difficult. Feel free to open an issue for any problems you encounter.

PyTorch Version

Here is a PyTorch version implemented by @MengHao666. I haven't personally checked it, but I believe it is worth trying. Many thanks to @MengHao666!

With Unity

Here is a project that connects this repo to Unity. It looks very cool; many thanks to @vinnik-dmitry07!

Usage

Install dependencies

Please check requirements.txt. All dependencies are available via pip and conda.

Prepare MANO hand model

  1. Download the MANO model from here and unzip it.
  2. In config.py, set OFFICIAL_MANO_PATH to the left-hand model.
  3. Run python prepare_mano.py; you will get a converted MANO model, compatible with this project, at config.HAND_MESH_MODEL_PATH (an illustrative config sketch follows this list).
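
For reference, the relevant entries in config.py might look like the following. The paths are illustrative assumptions (including the MANO_LEFT.pkl filename); point them at your own locations.

```python
# config.py -- illustrative values only; adjust paths to your setup.
# The official left-hand MANO model you downloaded and unzipped (assumed filename).
OFFICIAL_MANO_PATH = './mano_v1_2/models/MANO_LEFT.pkl'
# Where prepare_mano.py writes the converted model used by this project (assumed path).
HAND_MESH_MODEL_PATH = './model/hand_mesh/hand_mesh_model.pkl'
```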

Prepare pre-trained network models

  1. Download models from here.
  2. Put detnet.ckpt.* in model/detnet, and iknet.ckpt.* in model/iknet.
  3. Check config.py and make sure all required files are in place.

Run the demo for webcam input

  1. python app.py
  2. Put your right hand in front of the camera. The pre-trained model is for the left hand, but the input is flipped internally.
  3. Press ESC to quit.
  4. Although the model is robust to varying scales, ideally the image should be about 1.3x larger than the hand bounding box, since a good bounding box yields better accuracy. You can track the bounding box using the model's 2D predictions; see the sketch after this list.
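
As a rough illustration of such tracking, one can expand the extent of the previous frame's 2D keypoints by 1.3x and crop the next frame accordingly. This is a minimal sketch under stated assumptions (keypoints in pixel coordinates, square crops), not code from this repo:

```python
import numpy as np

def track_bbox(uv, frame_shape, scale=1.3):
    """Square crop box around the hand, `scale` times the 2D keypoint extent.

    uv: (N, 2) array of 2D keypoints in pixel coordinates (x, y).
    frame_shape: (height, width, ...) of the video frame.
    Returns (x0, y0, x1, y1), clamped to the frame.
    """
    h, w = frame_shape[:2]
    center = uv.mean(axis=0)
    half = float(np.max(uv.max(axis=0) - uv.min(axis=0))) * scale / 2
    x0 = int(max(center[0] - half, 0))
    y0 = int(max(center[1] - half, 0))
    x1 = int(min(center[0] + half, w))
    y1 = int(min(center[1] + half, h))
    return x0, y0, x1, y1
```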

We found that the model may fail on some "simple" poses. We believe this is because such poses were not present in the training data. We are working on a v2 version with further extended data to tackle this problem.

Use the models in your project

Please check wrappers.py.
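
For a quick idea of what this looks like, here is a minimal sketch modeled on app.py. The class name, input size, and return values are assumptions for illustration; verify them against wrappers.py.

```python
import cv2
from wrappers import ModelPipeline  # assumed entry point; see wrappers.py

model = ModelPipeline()  # loads the DetNet and IKNet checkpoints

frame = cv2.imread('hand.jpg')  # BGR image containing a hand
frame = cv2.flip(frame, 1)      # mirror a right hand; the pre-trained model expects a left hand
frame = cv2.cvtColor(cv2.resize(frame, (128, 128)), cv2.COLOR_BGR2RGB)
xyz, theta = model.process(frame)  # assumed outputs: 3D joint locations and joint rotations
```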

IKNet Alternative

We also provide an optimization-based IK solver here.
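
For intuition, an optimization-based IK solver fits joint rotations by minimizing the distance between the forward-kinematics output and the target joint locations. Below is a toy, self-contained illustration of the idea on a 2-bone planar chain; it is not the linked solver.

```python
import numpy as np
from scipy.optimize import least_squares

BONE_LENGTHS = (1.0, 0.8)  # toy 2-bone skeleton

def forward_kinematics(angles):
    """End-effector position of a 2-bone planar chain given joint angles."""
    a1, a2 = angles
    elbow = np.array([np.cos(a1), np.sin(a1)]) * BONE_LENGTHS[0]
    return elbow + np.array([np.cos(a1 + a2), np.sin(a1 + a2)]) * BONE_LENGTHS[1]

# Solve for the angles that place the end effector at the target location.
target = np.array([1.2, 0.9])
result = least_squares(lambda a: forward_kinematics(a) - target, x0=np.array([0.3, 0.3]))
print('angles:', result.x, 'reached:', forward_kinematics(result.x))
```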

Dataset

The detection model (DetNet) is trained on the following datasets:

  • the CMU Panoptic Dataset (CMU)
  • the Rendered Handpose Dataset (RHD)
  • the GANerated Hands Dataset (GAN)

The IK model (IKNet) is trained with the poses shipped with MANO.

Citation

This is the official implementation of the paper "Monocular Real-time Hand Shape and Motion Capture using Multi-modal Data" (CVPR 2020).

The quantitative numbers reported in the paper can be found in plot.py.

If you find the project helpful, please consider citing us:

@inproceedings{zhou2020monocular,
  title={Monocular Real-time Hand Shape and Motion Capture using Multi-modal Data},
  author={Zhou, Yuxiao and Habermann, Marc and Xu, Weipeng and Habibie, Ikhsanul and Theobalt, Christian and Xu, Feng},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  pages={0--0},
  year={2020}
}
Issues
  • About SMPL (MoSh) labels

    Hello, I have another question. There are no MoSh labels (SMPL theta and beta) in the STB, RHD, FreiHAND, etc. datasets. How do you translate 3D keypoints into a mesh (SMPL theta and beta)? Hoping for your reply, thanks.

    opened by www516717402 13
  • How to use the right-hand model

    In config.py, I set OFFICIAL_MANO_PATH to the right-hand model and ran python prepare_mano.py, which gave me a converted right-hand MANO model. But when I use it, the results are very bad. What am I doing wrong? I want to know how to use or convert the right-hand MANO model. Looking forward to your reply! Thanks a lot.

    opened by huangfuts 12
  • Questions about training IKNet

    Thank you for the great project. I have a few questions about training IKNet:

    1. When converting the original 16 rotations of MANO into 21 rotations, do W, T0, I0, M0, R0, and L0 share the rotation of W in the original MANO?
    2. I found that the joints_xyz computed from the MANO ref_pose and the transformed 21 rotation parameters (using the method in hand_mesh.py) does not equal the 'J_transformed' saved in the MANO pkl file, even after reordering the joints according to kinematics.py. When training IKNet on the MANO dataset, how did you get the ground-truth 3D joint annotations Lxyz? Is the FK(Q) calculation the same as the joint_xyz calculation in hand_mesh.py?
    opened by Gel-smile 9
  • How to mix and train the different datasets?

    The paper says that DetNet is trained on 3 datasets: the CMU Panoptic Dataset (CMU), the Rendered Handpose Dataset (RHD), and the GANerated Hands Dataset (GAN).

    Since the images of the three datasets differ from each other, could you please tell me how to preprocess the images?

    opened by LyazS 8
  • How to get beta in IKNet?

    You have done really great work!

    Reading your paper, I am a little confused about how the best beta is found in IKNet by minimizing E(beta). Is beta obtained directly by solving the function, or by using numerical methods like the Newton downhill method?

    Thank you, best wishes.

    opened by Mrsirovo 6
  • How can I use the model's output quaternions in Unity?

    Thank you for your great work! I'm trying to use the model output to animate a virtual hand in Unity. I tried setting the quaternions as Unity's localRotation, but it did not work. Could you share some insight into how I can achieve that?

    opened by wangtss 5
  • Finding the centroid of the hand

    Can we get the centroid, or is there a link in the middle from which to roughly estimate it?

    opened by RohitSingh1226 5
  • Any plans on evaluating on the FreiHAND dataset?

    I'm curious, as it seems to be one of the better publicly available datasets: not only does it include really accurate 3D poses, but they are all on real images, including challenging poses and object interactions. On top of that, it includes MANO hand-shape ground truths. I would love to see how this model performs.

    It also allows seeing how the model performs without needing alignment, since both camera intrinsics and scale are included for each image.

    I'm also curious whether FreiHAND would be a good alternative for training IKNet instead of the MoCap data, since it includes hand-shape ground truths. I'm not sure if I should open a separate issue for that to make it easier for others to find.

    opened by pablovela5620 5
  • Obtaining MoCap data from a two-hand video dataset

    Greetings, and many thanks for the great work.

    I want to use your code to extract MoCap data from a first-person RGB video dataset with a clear view of both hands during a task. Given that your model is restricted to predicting a single hand, will it consistently prefer the left hand when presented with videos that show both? If so, I suppose I could process the dataset twice, flipping it the second time, to obtain both hands' coordinates, right?

    opened by Linardos 5
  • How to do "global alignment"?

    Hi, I'm confused about another problem.

    In your paper, you said: "As in previous work, we perform a global alignment to better measure the local hand pose." How do you implement this global alignment? Is it just translating the root joint to the same location as the label (and is the label here also root-relative and normalized by the reference bone)? I got an AUC of only 0.1 using DetNet retrained on RHD.

    Could you point to the "previous work" that does a global alignment like yours? It would be even better if their code were publicly available. Thanks!

    opened by MengHao666 5