FrankMocap: A Strong and Easy-to-use Single View 3D Hand+Body Pose Estimator

Overview

FrankMocap is an easy-to-use, single-view 3D motion capture system developed by Facebook AI Research (FAIR). It provides state-of-the-art 3D pose estimation outputs for body, hand, and body+hands in a single system. The core objective of FrankMocap is to democratize 3D human pose estimation technology, enabling anyone (researchers, engineers, developers, artists, and others) to easily obtain 3D motion capture outputs from videos and images.

Btw, why the name FrankMocap? Our pipeline, which stitches together separate body and hand modules, reminds us of Frankenstein's monster!

News:

  • [2020/10/09] We have improved the OpenGL rendering speed; the body module, for example, now runs nearly twice as fast (6 fps -> 11 fps).

Key Features

  • Body Motion Capture

  • Hand Motion Capture

  • Egocentric Hand Motion Capture

  • Whole Body Motion Capture (body + hands)

Installation

A Quick Start

  • Run body motion capture

    # using a machine with a monitor to show output on screen
    python -m demo.demo_bodymocap --input_path ./sample_data/han_short.mp4 --out_dir ./mocap_output
    
    # screenless mode (e.g., a remote server)
    xvfb-run -a python -m demo.demo_bodymocap --input_path ./sample_data/han_short.mp4 --out_dir ./mocap_output
    
  • Run hand motion capture

    # using a machine with a monitor to show outputs on screen
    python -m demo.demo_handmocap --input_path ./sample_data/han_hand_short.mp4 --out_dir ./mocap_output
    
    # screenless mode  (e.g., a remote server)
    xvfb-run -a python -m demo.demo_handmocap --input_path ./sample_data/han_hand_short.mp4 --out_dir ./mocap_output
    
  • Run whole body motion capture

    # using a machine with a monitor to show outputs on screen
    python -m demo.demo_frankmocap --input_path ./sample_data/han_short.mp4 --out_dir ./mocap_output
    
    # screenless mode  (e.g., a remote server)
    xvfb-run -a python -m demo.demo_frankmocap --input_path ./sample_data/han_short.mp4 --out_dir ./mocap_output
    
  • Note:

    • The commands above use OpenGL rendering by default. If it does not work on your machine, you may try the alternative renderers (pytorch3d or openDR); see the example below.
    • See the README of each module for details.
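
    A minimal example with a non-default renderer (a sketch: it assumes the demo scripts accept a --renderer_type flag as described in the module READMEs):

      # hypothetical invocation: select pytorch3d instead of the default openGL
      python -m demo.demo_handmocap --input_path ./sample_data/han_hand_short.mp4 --out_dir ./mocap_output --renderer_type pytorch3d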

Joint Order

Body Motion Capture Module

Hand Motion Capture Module

Whole Body Motion Capture Module (Body + Hand)

License

References

  • FrankMocap is based on the following research outputs:
@article{rong2020frankmocap,
  title={FrankMocap: Fast Monocular 3D Hand and Body Motion Capture by Regression and Integration},
  author={Rong, Yu and Shiratori, Takaaki and Joo, Hanbyul},
  journal={arXiv preprint arXiv:2008.08324},
  year={2020}
}

@article{joo2020eft,
  title={Exemplar Fine-Tuning for 3D Human Pose Fitting Towards In-the-Wild 3D Human Pose Estimation},
  author={Joo, Hanbyul and Neverova, Natalia and Vedaldi, Andrea},
  journal={arXiv preprint arXiv:2004.03686},
  year={2020}
}
Comments
  • What is the order of body joints?

    I am trying to animate a humanoid avatar in Unity using your code. Is the order of body joints as follows?

    // 'Torso_Back': 1,
    //'Torso_Front': 2,
    //'RHand': 3,
    //'LHand': 4,
    //'LFoot': 5,
    //'RFoot': 6,
    
    //'R_upperLeg_back': 7,
    //'L_upperLeg_back': 8,
    //'R_upperLeg_front': 9,
    //'L_upperLeg_front': 10,
    
    //'R_lowerLeg_back': 11,
    //'L_lowerLeg_back': 12,
    //'R_lowerLeg_front': 13,
    //'L_lowerLeg_front': 14,
    
    //'L_upperArm_front': 15,
    //'R_upperArm_front': 16,
    //'L_upperArm_back': 17,
    //'R_upperArm_back': 18,
    
    //'L_lowerArm_back': 19,
    //'R_lowerArm_back': 20,
    //'L_lowerArm_front': 21,
    //'R_lowerArm_front': 22,
    
    //'RFace': 23,
    //'LFace': 24
    

    If not, could you please share the correct order?
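
    For reference only: the names quoted above resemble DensePose's 24 body-part segmentation labels rather than a skeleton joint order. The standard kinematic order that SMPL's 24x3 body pose parameters follow is listed below; this is a property of the vanilla SMPL model and is not verified against this repo's own pred_joints_3d output, which uses a larger regressed joint set:

      # Standard SMPL 24-joint kinematic order (vanilla SMPL convention;
      # not verified against this repository's output ordering).
      SMPL_JOINT_NAMES = [
          "Pelvis", "L_Hip", "R_Hip", "Spine1", "L_Knee", "R_Knee",
          "Spine2", "L_Ankle", "R_Ankle", "Spine3", "L_Foot", "R_Foot",
          "Neck", "L_Collar", "R_Collar", "Head", "L_Shoulder", "R_Shoulder",
          "L_Elbow", "R_Elbow", "L_Wrist", "R_Wrist", "L_Hand", "R_Hand",
      ]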

    opened by abhin7993 34
  • Questions about the correspondence between the hand joints and the SMPL-X model

    It can be seen from the code that there are 21 hand keypoints:

      0:  Wrist
      1:  Thumb_00   2:  Thumb_01   3:  Thumb_02   4:  Thumb_03
      5:  Index_00   6:  Index_01   7:  Index_02   8:  Index_03
      9:  Middle_00  10: Middle_01  11: Middle_02  12: Middle_03
      13: Ring_00    14: Ring_01    15: Ring_02    16: Ring_03
      17: Little_00  18: Little_01  19: Little_02  20: Little_03

    But in SMPL-X there are only 15 hand joints. Can anyone tell me how these joints correspond? Below is a comparison between my pose in SMPL-X and the actual pose (screenshots omitted).
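
    A note on the arithmetic behind the mismatch: the 21 keypoints include the wrist and the five fingertips, neither of which carries rotation parameters in SMPL-X/MANO (fingertips are end effectors; the wrist is driven by the body/global pose), which leaves exactly 15 articulated joints. A hedged sketch of that correspondence, using the keypoint order quoted above (note that MANO's own pose-parameter ordering may differ, commonly index, middle, pinky, ring, thumb):

      # Sketch: filter the 21 keypoints down to the 15 pose-parameterized
      # (articulated) joints; Wrist and the *_03 fingertips carry no
      # rotation parameters in SMPL-X / MANO.
      FINGERS = ["Thumb", "Index", "Middle", "Ring", "Little"]
      KEYPOINTS = ["Wrist"] + [f"{f}_{i:02d}" for f in FINGERS for i in range(4)]
      articulated = [k for k in KEYPOINTS if k != "Wrist" and not k.endswith("_03")]
      assert len(KEYPOINTS) == 21 and len(articulated) == 15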

    opened by liuhaorandezhanghao 16
  • Question about hand joints

    From this repo I can get the 3D body joints from pred_joints_3d (https://github.com/facebookresearch/frankmocap/blob/741f9c9cc6d7b0226755fb5749ee55f7ad80457f/bodymocap/body_mocap_api.py#L85). As @penincillin answered in issue #32, "The body pose parameters [24*3] does not record the joint positions (x, y, z). It records the relative rotation of each bone. For example, rotation of upper arm relative to shoulder." So if we want 3D body joints, we need to use pred_joints_3d.

    So my question is: how can I get 3D hand joints in the same way as the 3D body joints? I can get hand_pose_params (https://github.com/facebookresearch/frankmocap/blob/741f9c9cc6d7b0226755fb5749ee55f7ad80457f/handmocap/hand_mocap_api.py#L194), but where do I get the 3D joints for just the hands?
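
    As background to the quoted answer, the two outputs live in different representations: pose parameters are per-joint relative rotations (axis-angle), while joint positions only exist after the model's forward kinematics. A minimal illustration with hypothetical array shapes (SciPy used for the conversion):

      import numpy as np
      from scipy.spatial.transform import Rotation

      # Hypothetical body pose: 24 joints x 3 axis-angle values, i.e.
      # per-joint relative rotations, not positions.
      pred_body_pose = np.zeros((24, 3))
      pred_body_pose[18] = [0.0, 0.0, np.pi / 2]  # e.g., bend one elbow

      R = Rotation.from_rotvec(pred_body_pose).as_matrix()  # (24, 3, 3)

      # Joint positions (x, y, z) appear only after forward kinematics;
      # in this repo they are exposed separately as pred_joints_3d.
      print(R.shape)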

    opened by lisa676 15
  • 3D joints output

    Thanks for your work! I have questions about the 3D joint locations and the rotation output.

    Is the third column of pred_joints_img the depth? It contains both positive and negative values. If it is depth, where is the origin of the z axis (distance)? If the origin lies at the camera, then all depth values should have the same sign. For example:

      pred_joints_img:
      [[ 390.74615 49.098785 -149.06514 ] [ 336.53275 54.662476 -73.1041 ] [ 257.58252 97.084625 -76.1069 ] [ 237.26962 212.33102 -78.993065 ]
       [ 275.6853 305.63647 -140.10107 ] [ 414.1083 95.09487 -60.660408 ] [ 417.5486 216.00244 -63.482674 ] [ 357.40317 292.06683 -127.28589 ]
       [ 318.93048 274.93042 10.864108 ] [ 286.71722 313.6593 8.190479 ] [ 232.2674 365.68256 -152.63435 ] [ 268.9242 440.0937 12.79833 ]
       [ 343.40744 319.95975 15.447592 ] [ 404.9802 392.42572 -128.94804 ] [ 314.9046 432.92227 27.402205 ] [ 372.99844 28.811356 -157.45268 ]
       [ 396.2119 23.579697 -137.64821 ] [ 328.26425 17.475601 -139.92213 ] [ 380.43423 5.5624847 -97.695274 ] [ 311.8253 514.0408 -12.167442 ]
       [ 332.66678 505.26297 18.350056 ] [ 311.78812 418.31158 54.551823 ] [ 255.01738 516.3665 -37.44374 ] [ 228.81992 500.87808 -11.421182 ]
       [ 274.59213 434.56015 45.34534 ] [ 268.9242 440.0937 12.79833 ] [ 232.2674 365.68256 -152.63435 ] [ 260.23444 259.62225 15.009101 ]
       [ 377.86072 269.1056 25.666203 ] [ 404.9802 392.42572 -128.94804 ] [ 314.9046 432.92227 27.402205 ] [ 275.6853 305.63647 -140.10107 ]
       [ 237.26962 212.33102 -78.993065 ] [ 257.58252 97.084625 -76.1069 ] [ 414.1083 95.09487 -60.660408 ] [ 417.5486 216.00244 -63.482674 ]
       [ 357.40317 292.06683 -127.28589 ] [ 341.1523 59.68271 -78.694466 ] [ 374.2525 -37.75685 -153.31737 ] [ 318.14172 266.06372 28.740244 ]
       [ 337.04575 83.39772 -66.99801 ] [ 328.95578 151.211 -9.214244 ] [ 366.79074 43.32393 -118.1924 ] [ 364.16168 -15.888809 -135.93591 ]
       [ 390.74615 49.098785 -149.06514 ] [ 396.2119 23.579697 -137.64821 ] [ 372.99844 28.811356 -157.45268 ] [ 380.43423 5.5624847 -97.695274 ]
       [ 328.26425 17.475601 -139.92213 ]]

    Besides, regarding the output pred_body_pose, the axis-angle representation [theta_x, theta_y, theta_z] for each joint: which coordinate frame is it expressed in? (Figure omitted.) Thanks a lot!
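
    For context, a weak-perspective camera (the model referenced elsewhere in these issues) projects joints as x_img = s * X + t with no per-joint division by depth, so a z column carried alongside is typically root-relative rather than a distance from the camera, and mixed signs are expected. A hedged sketch, not this repo's actual code:

      import numpy as np

      # Weak-perspective projection sketch: xy is scaled and translated
      # onto the image; z is kept only as relative depth.
      def weak_perspective(joints_3d, scale, tx, ty):
          xy = scale * joints_3d[:, :2] + np.array([tx, ty])
          z = scale * joints_3d[:, 2]  # root-relative depth, sign preserved
          return np.concatenate([xy, z[:, None]], axis=1)

      joints = np.array([[0.0, 0.0, -0.1], [0.1, 0.2, 0.3]])  # around the root
      print(weak_perspective(joints, scale=200.0, tx=320.0, ty=240.0))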

    opened by Q-Y-Yang 12
  • Problem with export video demo_frankmocap

    Hi, thanks for all the work on this project, it is amazing. I would like to ask about some problems with the output video.

    When I watch the video exported by FrankMocap, if the person in the frame only has half of their body visible, the legs still appear, overlapping and interfering with playback. Is there a way to remove the legs, or to find out whether or not the frame contains legs? (Screenshot omitted.)
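
    One generic heuristic (an assumption, not a feature of this repo): test whether the projected lower-body joints actually land inside the frame before rendering the legs. A sketch, using SMPL-convention knee/ankle indices as placeholders:

      import numpy as np

      # Hypothetical: decide whether a half-body frame really shows legs
      # by checking that knee/ankle joints project inside the image.
      def legs_visible(joints_img, img_height, leg_joint_idx=(4, 5, 7, 8)):
          ys = np.asarray(joints_img)[list(leg_joint_idx), 1]
          return bool(np.all((ys >= 0) & (ys < img_height)))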

    opened by jalovi 12
  • Cannot install

    Hello,

    I'm a novice in this field. I tried to follow the installation guide, but I ran into problems when installing the required packages from the requirements.txt file. The error message says that it failed to build opendr, followed by a number of other errors saying that other packages cannot be installed either. From searching, it seems many others also have trouble installing opendr and using it with Python 3 (some say opendr only works with Python 2). Can you please help me?

    opened by jiuney 10
  • ModuleNotFoundError: No module named 'model'

    Hi, I've encountered this error when running:

    python -m demo.demo_handmocap --input_path ./sample_data/han_hand_short.mp4 --out_dir ./mocap_output
    

    I am running on Windows 10 with Conda. I think I've successfully built detectron2, but I'm not sure whether this is a detectron2 problem or something else.

    Traceback (most recent call last):
      File "C:\Users\anaconda3\envs\venv_frankmocap\lib\runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "C:\Users\anaconda3\envs\venv_frankmocap\lib\runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "G:\frankmocap\frankmocap\demo\demo_handmocap.py", line 16, in <module>
        from handmocap.hand_bbox_detector import HandBboxDetector
      File "G:\frankmocap\frankmocap\handmocap\hand_bbox_detector.py", line 30, in <module>
        from model.utils.config import cfg as cfgg
    ModuleNotFoundError: No module named 'model'
    

    How can I solve this? Thank you.
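
    A common cause of this pattern (hedged, since this exact setup is not verifiable here): the bundled third-party detector expects its lib directory on sys.path so that model resolves as a top-level package. A sketch of the usual workaround:

      import os
      import sys

      # Assumption: hand_object_detector, like other faster-rcnn.pytorch
      # derivatives, keeps `model` under its lib/ directory.
      sys.path.insert(0, os.path.join("detectors", "hand_object_detector", "lib"))
      # after this, `from model.utils.config import cfg` should resolve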

    opened by bycloudai 10
  • AssertionError: Failed in opening video

    I get an error after running the command below. Detectron2 and the sample data were installed successfully.

    python -m demo.demo_bodymocap --input_path ./sample_data/han_short.mp4 --out_dir ./mocap_output

    Traceback (most recent call last):
      File "C:\Anaconda\envs\frankmocap\lib\runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "C:\Anaconda\envs\frankmocap\lib\runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "M:\GITHUB\FRANKMOCAP\demo\demo_bodymocap.py", line 178, in <module>
        main()
      File "M:\GITHUB\FRANKMOCAP\demo\demo_bodymocap.py", line 174, in main
        run_body_mocap(args, body_bbox_detector, body_mocap, visualizer)
      File "M:\GITHUB\FRANKMOCAP\demo\demo_bodymocap.py", line 27, in run_body_mocap
        input_type, input_data = demo_utils.setup_input(args)
      File "M:\GITHUB\FRANKMOCAP\mocap_utils\demo_utils.py", line 105, in setup_input
        assert cap.isOpened(), f"Failed in opening video: {args.input_path}"
    AssertionError: Failed in opening video: ./sample_data/han_short.mp4
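
    A quick way to narrow this down (a generic diagnostic, not project-specific): check that the file actually exists and that OpenCV's video backend can decode it.

      import os
      import cv2

      path = "./sample_data/han_short.mp4"
      print("exists:", os.path.isfile(path))               # missing or partial download?
      print("opened:", cv2.VideoCapture(path).isOpened())  # OpenCV build without ffmpeg?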

    opened by CGMikeG 9
  • ImportError: /home/mona/research/code/frankmocap/detectors/hand_object_detector/lib/model/_C.cpython-38-x86_64-linux-gnu.so: undefined symbol: _ZN6caffe28TypeMeta21_typeMetaDataInstanceIdEEPKNS_6detail12TypeMetaDataEv

    Do you know the reason for this error and how it can be resolved?

    
    (frank) mona@goku:~/research/code/frankmocap$ python -m demo.demo_frankmocap --input_path ./sample_data/han_short.mp4 --out_dir ./mocap_output
    Traceback (most recent call last):
      File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
        return _run_code(code, main_globals, None,
      File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
        exec(code, run_globals)
      File "/home/mona/research/code/frankmocap/demo/demo_frankmocap.py", line 25, in <module>
        from handmocap.hand_bbox_detector import HandBboxDetector
      File "/home/mona/research/code/frankmocap/handmocap/hand_bbox_detector.py", line 33, in <module>
        from detectors.hand_object_detector.lib.model.roi_layers import nms # might raise segmentation fault at the end of program
      File "/home/mona/research/code/frankmocap/detectors/hand_object_detector/lib/model/roi_layers/__init__.py", line 3, in <module>
        from .nms import nms
      File "/home/mona/research/code/frankmocap/detectors/hand_object_detector/lib/model/roi_layers/nms.py", line 3, in <module>
        from model import _C
    ImportError: /home/mona/research/code/frankmocap/detectors/hand_object_detector/lib/model/_C.cpython-38-x86_64-linux-gnu.so: undefined symbol: _ZN6caffe28TypeMeta21_typeMetaDataInstanceIdEEPKNS_6detail12TypeMetaDataEv
    

    env:

    $ lsb_release -a
    LSB Version:	core-11.1.0ubuntu2-noarch:security-11.1.0ubuntu2-noarch
    Distributor ID:	Ubuntu
    Description:	Ubuntu 20.04.2 LTS
    Release:	20.04
    Codename:	focal
    
    $ python
    Python 3.8.5 (default, Jan 27 2021, 15:41:15) 
    [GCC 9.3.0] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import torch
    >>> torch.__version__
    '1.8.1+cu111'
    >>> import detectron2
    >>> detectron2.__version__
    '0.4'
    >>> from detectron2 import _C
    
    

    Also,

    $ python -m detectron2.utils.collect_env
    ----------------------  --------------------------------------------------------------------------
    sys.platform            linux
    Python                  3.8.5 (default, Jan 27 2021, 15:41:15) [GCC 9.3.0]
    numpy                   1.19.5
    detectron2              0.4 @/home/mona/venv/frank/lib/python3.8/site-packages/detectron2
    Compiler                GCC 7.3
    CUDA compiler           CUDA 11.1
    detectron2 arch flags   3.7, 5.0, 5.2, 6.0, 6.1, 7.0, 7.5, 8.0, 8.6
    DETECTRON2_ENV_MODULE   <not set>
    PyTorch                 1.8.1+cu111 @/home/mona/venv/frank/lib/python3.8/site-packages/torch
    PyTorch debug build     False
    GPU available           True
    GPU 0                   GeForce GTX 1650 Ti with Max-Q Design (arch=7.5)
    CUDA_HOME               /usr
    Pillow                  8.1.0
    torchvision             0.9.1+cu111 @/home/mona/venv/frank/lib/python3.8/site-packages/torchvision
    torchvision arch flags  3.5, 5.0, 6.0, 7.0, 7.5, 8.0, 8.6
    fvcore                  0.1.3.post20210311
    cv2                     4.5.1
    ----------------------  --------------------------------------------------------------------------
    PyTorch built with:
      - GCC 7.3
      - C++ Version: 201402
      - Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
      - Intel(R) MKL-DNN v1.7.0 (Git Hash 7aed236906b1f7a05c0917e5257a1af05e9ff683)
      - OpenMP 201511 (a.k.a. OpenMP 4.5)
      - NNPACK is enabled
      - CPU capability usage: AVX2
      - CUDA Runtime 11.1
      - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86
      - CuDNN 8.0.5
      - Magma 2.5.2
      - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.1, CUDNN_VERSION=8.0.5, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.8.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, 
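
    An undefined caffe2/TypeMeta symbol in a compiled extension usually means the _C.so was built against a different PyTorch version than the one currently installed. A hedged remedy, assuming the detector builds its extension via a setup.py, as faster-rcnn.pytorch derivatives typically do:

      # rebuild the extension against the currently installed PyTorch
      cd detectors/hand_object_detector/lib
      rm -rf build
      python setup.py build develop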
    
    
    
    opened by monacv 9
  • ModuleNotFoundError: No module named 'modules'

    When I try to use your code modules in my own test demo, I get this error (despite getting no error with the FrankMocap demo file):

    $ python demo.py 
    Traceback (most recent call last):
      File "demo.py", line 22, in <module>
        from bodymocap.body_bbox_detector import BodyPoseEstimator
      File "./frankmocap/bodymocap/body_bbox_detector.py", line 18, in <module>
        from detectors.body_pose_estimator.pose2d_models.with_mobilenet import PoseEstimationWithMobileNet
      File "./frankmocap/detectors/body_pose_estimator/pose2d_models/with_mobilenet.py", line 4, in <module>
        from modules.conv import conv, conv_dw, conv_dw_no_bn
    ModuleNotFoundError: No module named 'modules'
    

    I am not sure how to fix this.

    opened by monacv 8
  • Questions about joint rotations

    Hello everyone. We are trying to use your solution to drive a human character in Unreal Engine 4. We know the joint-angle axes are the ones shown in the figure (omitted). We would like to know whether this orientation is the same for each bone, and whether each angle written in pred_body_pose is the rotation between the previous bone and the current one on each axis, or follows a different logic. We are working on body tracking right now. In addition, if possible, could you provide an image of the skeleton shape and its points? We tried to use OpenGL commands, but with no result. Thank you in advance, and thank you for the amazing job.
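
    On the rotation question: in SMPL-style models each pred_body_pose entry is a joint's rotation relative to its parent, so a bone's global orientation is the product of its ancestors' local rotations. A hedged sketch with a hypothetical 3-joint chain:

      import numpy as np
      from scipy.spatial.transform import Rotation

      # Hypothetical chain: the parent of joint i is joint i - 1.
      local_aa = np.array([[0.0, 0.0, 0.1], [0.0, 0.2, 0.0], [0.3, 0.0, 0.0]])
      R_local = Rotation.from_rotvec(local_aa).as_matrix()

      # Global orientation accumulates down the kinematic chain.
      R_global = [R_local[0]]
      for i in range(1, len(R_local)):
          R_global.append(R_global[i - 1] @ R_local[i])
      print(np.round(R_global[-1], 3))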

    opened by sautechgroup 8
  • Some questions about the initial pose of the body and hands

    Thanks for your great work! I have learned that SMPL-X uses the pose parameters to produce the final pose from the initial pose (T-pose), and that the pose parameters for each joint are an axis-angle rotation relative to the parent joint's rotation. However, I can find no detailed information about the actual initial coordinate frame of each joint, for the body and especially for the hands. I have tried to search the original paper, but still found nothing. (Example figure of the joint coordinates omitted.) Is there any documentation about this? Thanks for your attention!
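
    One way to inspect the rest-pose (T-pose) joint coordinates directly is to run the SMPL-X model with all parameters left at zero; a sketch assuming the smplx Python package and model files at a hypothetical "models" directory:

      import torch
      import smplx

      model = smplx.create("models", model_type="smplx")  # hypothetical path
      with torch.no_grad():
          out = model()  # zero pose and shape parameters: the T-pose
      print(out.joints.shape)  # (1, J, 3) rest-pose joint locations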

    opened by henrycjh 1
  • Using multi-camera calibration for hand tracking

    Hi, thank you for your amazing work.

    Your work is already quite accurate. Can I apply multi-camera calibration to your code to make it more precise?

    If possible, can I know which parts of the code should be changed?

    opened by tjddn0145 1
  • Can the hand parameters be used to output MANO?

    Thanks for the excellent work!!!

    I have extracted the *.pkl files from handmocap. Everything looked good when I used your code to render the hand mesh. However, when I tried to output MANO, I got results like those shown in the screenshots (omitted).

    The index finger always looks weird, but the other fingers seem normal.

    It seems the parameters cannot be used to drive MANO directly. If I want to output MANO, how should I change the parameters?

    Thanks!!!
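
    A frequent source of distorted fingers when replaying regressed hand poses in MANO (a guess, not a confirmed diagnosis for this issue) is a convention mismatch: MANO by default adds a mean hand pose and may expect PCA coefficients rather than 45 axis-angle values. A sketch of loading MANO with both conventions disabled, via the smplx package and a hypothetical model path:

      import smplx

      mano = smplx.create("models", model_type="mano",
                          is_rhand=True,
                          use_pca=False,        # take 45 axis-angle values directly
                          flat_hand_mean=True)  # do not add the mean hand pose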

    opened by Juliejulie111 1
  • Question about camera parameter convergence

    I found that the camera parameters (scale, tx, ty) converge poorly with or without the camera loss. I use a weak-perspective camera in my code, in which kpt_2d = scale * kpt_3d[:, :2] + t_xy. I think the key reason is that the focal length in the FreiHAND dataset differs for each image, ranging from about 400 to 800 mm, so the scale (focal / global_z) also differs per image. Maybe the network cannot regress the scale well? I want to know how to make the camera parameters converge better. Are there any tricks for training? Thanks!
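
    One commonly used trick (a hedged suggestion, not something this repo necessarily does) is to fold the per-image focal length out of the regression target, i.e. regress a focal-normalized scale and convert back per image:

      import numpy as np

      # Sketch: normalize the scale target by each image's focal length so
      # it no longer varies with intrinsics (FreiHAND focals differ per image).
      def denormalize_scale(s_norm, focal, focal_ref=530.0):
          # focal_ref is an arbitrary reference focal length (an assumption)
          return s_norm * focal / focal_ref

      def project_weak_perspective(kpt_3d, scale, t_xy):
          return scale * kpt_3d[:, :2] + t_xy

      kpt_3d = np.random.randn(21, 3) * 0.05  # root-relative joints, metres
      uv = project_weak_perspective(kpt_3d, denormalize_scale(1.0, 800.0), np.zeros(2))
      print(uv.shape)  # (21, 2)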

    opened by lvZic 5