Demo for Real-time RGBD-based Extended Body Pose Estimation paper

Overview

Real-time RGBD-based Extended Body Pose Estimation

This repository provides a real-time demo for our paper, published at the WACV 2021 conference.

The output of our module is expressed in the SMPL-X parametric body mesh model:

The combined system runs at 30 fps on a 2080 Ti GPU and an 8-core 4 GHz CPU.
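For readers unfamiliar with SMPL-X, the sketch below illustrates the sizes of a common subset of the parameters such a model consumes (axis-angle pose vectors plus shape coefficients). The names and sizes follow the standard SMPL-X layout and are given for orientation only; the demo's actual output format may differ.

```python
# Illustrative only: typical SMPL-X parameter vector sizes (standard layout),
# not necessarily this demo's exact output format.
smplx_params = {
    "global_orient": [0.0] * 3,     # root rotation (axis-angle)
    "body_pose": [0.0] * 63,        # 21 body joints x 3
    "left_hand_pose": [0.0] * 45,   # 15 hand joints x 3
    "right_hand_pose": [0.0] * 45,
    "jaw_pose": [0.0] * 3,
    "betas": [0.0] * 10,            # body shape coefficients
    "transl": [0.0] * 3,            # global translation
}
total_dof = sum(len(v) for v in smplx_params.values())  # 172
```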


How to use

Build

  • Prerequisites: your NVIDIA driver must support CUDA 10.2; Windows and macOS are not supported.
  • Clone repo:
    • git clone https://github.com/rmbashirov/rgbd-kinect-pose.git
    • cd rgbd-kinect-pose
    • git submodule update --force --init --remote
  • Docker setup: build the Docker image (run the 2 build commands; see the docker dir)
  • Attach your Azure Kinect camera
  • Check that your Azure Kinect camera works inside the Docker container:
    • Enter the Docker container: run ./run_local.sh from the docker dir
    • Then run python -m pyk4a.viewer --vis_color --no_bt --no_depth inside the container

Download data

  • Download our data archive smplx_kinect_demo_data.tar.gz
  • Unpack it: mkdir /your/unpacked/dir, then tar -zxf smplx_kinect_demo_data.tar.gz -C /your/unpacked/dir
  • Download the hand models: see the link in the "Download models from here" line in our fork, and put them in /your/unpacked/dir/minimal_hand/model
  • To download the SMPL-X parametric body model, go to the project website, register, open the downloads section, download the SMPL-X v1.1 model, and put it in /your/unpacked/dir/pykinect/body_models/smplx
  • /your/unpacked/dir should look like this
  • Set the data_dirpath and output_dirpath variables in the config file:
    • data_dirpath is the path to /your/unpacked/dir
    • output_dirpath is used to check timings or to store result images
    • ensure both paths are visible inside the docker container by setting the VOLUMES variable here
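As a sanity check before launching, the two directory settings can be validated with a few lines of Python. This is a hypothetical helper, not code from the repo, and the paths below are placeholders.

```python
# Hypothetical sanity check (not part of the repo): verify that the config's
# directory settings exist on the host before starting the demo.
import os

def missing_dirs(cfg):
    """Return the config keys whose directory paths do not exist."""
    return [key for key, path in cfg.items() if not os.path.isdir(path)]

config = {
    "data_dirpath": "/your/unpacked/dir",   # placeholder paths
    "output_dirpath": "/your/output/dir",
}
problems = missing_dirs(config)
```

Note that passing this check on the host is not enough: both directories must also be mounted into the container via the VOLUMES variable, or they will exist on the host but not inside docker.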

Run

  • Run the demo: from the src dir, run ./run_server.sh. This enters the docker container and uses a config file in which the person's shape is loaded from an external file; our work did not focus on estimating the person's shape.

What else

Apart from our main body pose estimation contribution, you may find this repository useful for:

  • the minimal_pytorch_rasterizer python package: a CUDA non-differentiable mesh rasterization library for pytorch tensors with python bindings
  • the pyk4a python package: real-time streaming from an Azure Kinect camera; this package also works in our provided docker environment
  • the multiprocessing_pipeline python package: set up a pipeline graph of python blocks running in parallel; see usage in server.py
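The idea behind multiprocessing_pipeline can be illustrated with a minimal sketch: each stage runs concurrently and hands items downstream through a queue. This toy version uses threads for simplicity and is not the package's actual API, which is process-based.

```python
# Toy illustration of a pipeline graph (NOT the multiprocessing_pipeline API):
# each stage runs in its own thread and forwards results through queues.
import queue
import threading

def stage(fn, q_in, q_out):
    while True:
        item = q_in.get()
        if item is None:        # sentinel: shut down and propagate downstream
            q_out.put(None)
            break
        q_out.put(fn(item))

def run_pipeline(items, fns):
    qs = [queue.Queue() for _ in range(len(fns) + 1)]
    workers = [threading.Thread(target=stage, args=(f, qs[i], qs[i + 1]))
               for i, f in enumerate(fns)]
    for w in workers:
        w.start()
    for x in items:
        qs[0].put(x)
    qs[0].put(None)             # end-of-stream marker
    out = []
    while (item := qs[-1].get()) is not None:
        out.append(item)
    for w in workers:
        w.join()
    return out

result = run_pipeline([1, 2, 3], [lambda x: x + 1, lambda x: x * 10])
# result == [20, 30, 40]
```

In the real-time demo the same structure lets camera capture, pose inference, and visualization overlap instead of running sequentially.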

Citation

If you find the project helpful, please consider citing us:

@inproceedings{bashirov2021real,
  title={Real-Time RGBD-Based Extended Body Pose Estimation},
  author={Bashirov, Renat and Ianina, Anastasia and Iskakov, Karim and Kononenko, Yevgeniy and Strizhkova, Valeriya and Lempitsky, Victor and Vakhitov, Alexander},
  booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
  pages={2807--2816},
  year={2021}
}

Non-commercial use only

Comments
  • Docker env + working demo video + docker images link

    I don't know what is going on. I am just trying to test python -m pyk4a.viewer --undistort_depth --vis_depth --parallel_bt. I have connected my Kinect camera, but it still cannot run. The errors look like this: (screenshot attached)

    opened by XGenietony 13
  • After running ./run_server.sh:  no windows shows up

    1. When I ran the ./run_server.sh, there was no windows/plots displayed.
    2. I also receive an issue: "tjDestroy error"

    I was wondering if you have any suggestions to resolve the above issues? Here is the screenshot: (screenshot attached)

    BTW, when I tested "python -m pyk4a.viewer --vis_color --no_bt --no_depth", a window did show the RGB stream captured by the Kinect.

    Thank you!

    opened by vccjihao 6
  • Conversion to SMPL from SMPLX

    Hi Renat,

    I was trying to obtain the SMPLX parameters and convert them to an SMPL mesh. The only difference is that I lose the hand pose information and thus I just set the last 6 axis angle parameters in the SMPL body pose as zero. But it turns out there is a consistent shift along the y-axis. I attached the SMPL and SMPLX point cloud in this drive link https://drive.google.com/drive/folders/1JsmK6dNOJfsDImfuqyESp5sQdwilKhGU?usp=sharing.

    I extracted the intermediate results by inserting the following code in aggregate.py before visualization and ran the standard SMPL model forward function to get the SMPL vertices. BTW, I disabled the prediction for hand and face pose, but I don't think that can cause my current issue. This seems related to the global translation, but that should not differ between SMPL and SMPLX. Do you maybe have any idea why this happens? Many thanks in advance!

              smpl_dict['body_pose'] = x['body_pose']
              smpl_dict['global_rot'] = x['global_rot']
              smpl_dict['global_trans'] = x['global_trans']
              smpl_dict['smplx_verts'] = vertices.detach().cpu().numpy().squeeze()
              import pickle as pkl
              pkl.dump(smpl_dict, open('extract_from_raw.pkl', 'wb'))
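    For reference, the padding described in this comment (zeroing the two extra SMPL hand joints) can be sketched as below. The joint counts assume the standard model layouts (21 body joints in SMPL-X body_pose vs. 23 in SMPL), and the function name is ours.

```python
# Sketch of the conversion described above, assuming standard layouts:
# SMPL-X body_pose has 21 joints (63 axis-angle values); SMPL body_pose has
# 23 joints (69 values), the last two being the hand joints, which are
# zeroed here because the hand pose is dropped.
def smplx_body_pose_to_smpl(smplx_body_pose):
    assert len(smplx_body_pose) == 63
    return list(smplx_body_pose) + [0.0] * 6  # pad the 2 hand joints

smpl_body_pose = smplx_body_pose_to_smpl([0.0] * 63)
```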
    
    opened by MoyGcc 5
  • install

    Hi @rmbashirov, thanks a lot for publishing this code and the really interesting paper. I am quite new to using docker to build and run the examples.

    • Following your README.md, I installed docker and nvidia-docker and built the images: (screenshot attached)
    • I attached my Azure camera to the computer (I also checked it by running k4aviewer successfully)
    • After changing some information in run_local.sh and data_dirpath in renet.yaml, I ran sudo ./run_server.sh, but it gives me this:
    docker: Error response from daemon: OCI runtime create failed: container_linux.go:367: starting container process caused: exec: "python": executable file not found in $PATH: unknown
    

    It seems that python is not in $PATH.

    Computer infor:

    • Ubuntu 18.04
    • NVIDIA Geforce 1080Ti
    opened by legoc1986 3
  • How Can I Use Kinect2 Camera?

    Dear Sir: if I want to use my Kinect ONE (Kinect v2) camera to get the images instead of the Azure Kinect, how can I modify the code? In which files? Could you please give me some hints? Thanks, Li

    opened by lj-cug 2
  • opencv ImportError in docker.

    Thanks for your great work. I downloaded the docker image from Google Drive, but failed to run pyk4a.viewer:

    ~/humanReconstruct/rgbd-kinect-pose/docker$ ./run_local.sh
    docker@txt-Lab:/src$ python -m pyk4a.viewer --vis_color --no_bt --no_depth
    Traceback (most recent call last):
      File "/opt/conda/lib/python3.7/runpy.py", line 183, in _run_module_as_main
        mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
      File "/opt/conda/lib/python3.7/runpy.py", line 109, in _get_module_details
        __import__(pkg_name)
      File "/opt/conda/lib/python3.7/site-packages/pyk4a/__init__.py", line 2, in <module>
        from .pyk4a import *
      File "/opt/conda/lib/python3.7/site-packages/pyk4a/pyk4a.py", line 3, in <module>
        import k4a_module
    ImportError: libopencv_core.so.4.2: cannot open shared object file: No such file or directory

    any advice?

    opened by TxT1212 1
  • SMPLify-RGBD

    Hi, can you please share your code for what the paper calls 'SMPLify-RGBD', the single-view modification of the offline model fitting? I would like to use it to create a reference solution using a RealSense RGBD camera.

    Thanks, David

    opened by dStanhill 1
  • Can this be used for rgbd data from other devices?

    Thank you @rmbashirov for open-sourcing this piece of art. Can we use this with RGB-D data from other sources, like the iPhone TrueDepth camera? If so, where should we provide the camera intrinsics?

    opened by pramishp 1
  • In file folder "docker", run ./build.sh: ERROR

    ......
    E: Unable to locate package libcudnn7
    E: Unable to locate package libcudnn7-dev
    The command '/bin/sh -c apt-get -y update && apt-get install -y --no-install-recommends libcudnn7=7.6.5.32-1+cuda10.2 libcudnn7-dev=7.6.5.32-1+cuda10.2 && apt-mark hold libcudnn7 && rm -rf /var/lib/apt/lists/*' returned a non-zero code: 100

    opened by Britton-Li 0
  • Just built a docker image, but I have no firmware: Azure Kinect camera

    (screenshot attached) I have followed all the descriptions in the README.md and built the docker image from scratch, step by step. I am not sure whether a result like this is correct. Thanks!!
    opened by Yaosmart 1
  • Using google drive docker image - kinect test, transform engine init failure

    Hi @rmbashirov,

    Thanks for your support on this repo! It's a very interesting project.

    I believe my issue may be related to #5.

    I have downloaded the docker image from the google drive link and I'm attempting to test the kinect viewer. I'm receiving the following error: (screenshot attached)

    I completed the steps indicated by @xgen in #5 to expose all GPUs in the call to ./run_local.sh.

    My GPU is listed when calling nvidia-smi inside docker, however I notice the CUDA version is 11.6, not 10.2.

    Any ideas?

    opened by dr00b 1
  • Is it possible to run this model by providing depth maps and rgb video or previously computed keypoints?

    Hello,

    I am interested in using this model for a research study; however, I am unable to run it in real time since I'm using a cluster for computation. Is it possible to use this model offline, e.g. by providing depth maps and video files, or previously captured keypoints from Azure skeletal tracking, minimal_hand, and MediaPipe?

    Any advice on this would be greatly appreciated.

    opened by bs97 1
  • Unable to connect camera in docker

    @rmbashirov hello, thank you for sharing this project. I ran into the following errors during setup. Can you help me see what the problem is? (screenshot attached)

    opened by kingsunsoft 6