video_to_bvh

Overview

Convert human motion from video to .bvh with Google Colab.

Usage

1. Open video_to_bvh.ipynb in Google Colab

  1. Go to https://colab.research.google.com
  2. File > Upload notebook... > GitHub > Paste this link: https://github.com/Dene33/video_to_bvh/blob/master/video_to_bvh.ipynb
  3. Ensure that Runtime > Change runtime type is set to Python 3 with GPU

2. Initial imports, install, initializations

The second step installs all the required dependencies. Select the first code cell and press Shift+Enter. You'll see lines of executing code scroll by. Wait until it's done (1-2 minutes).

3. Upload video

  1. Select the code cell and press Shift+Enter
  2. Press the Select files button
  3. Select the video you want to process (it should contain only one person, with all body parts in frame; long videos take a long time to process)

4. Process the video

  1. Specify the desired fps at which the video will be converted to images. Lower fps = faster processing
  2. Select the code cell and press Shift+Enter

This step does all the work:

  1. Conversion of the video to images (images are required for pose estimation to work)
  2. 2D pose estimation. For each image, a corresponding .json file with the 2D joints is created, in a format similar to the output .json of the original openpose (see the sketch after this list). A fork of keras_Realtime_Multi-Person_Pose_Estimation is used.
  3. 3D pose estimation. Creates a .csv file with the 3D joint coordinates for all frames of the video. A fork of End-to-end Recovery of Human Shape and Pose is used.
  4. Conversion of the estimated .csv files to .bvh with the help of a custom script and a .blend file.
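
For illustration, here is a minimal sketch (not the notebook's actual code) of the per-image .json that step 2 produces; the format follows the openpose-style example quoted in the comments below, and the filename pattern is hypothetical:

    import json

    # Flat [x0, y0, x1, y1, ...] list of 2D joint coordinates for one image.
    keypoints_2d = [374, 460, 374, 516, 324, 518]

    # openpose-style wrapper: one entry per detected person.
    frame_json = {"people": [{"pose_keypoints_2d": keypoints_2d}]}

    with open("000001_keypoints.json", "w") as f:  # hypothetical filename
        json.dump(frame_json, f)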

5. Download .bvh

  1. Select the code cell and press Shift+Enter. The .bvh file will be saved to your PC.
  2. If you want to preview it, open Blender on your PC: File > Import > Motion Capture (.bvh), then press Alt+A to play the animation (a scripted alternative is sketched below).
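
If you prefer scripting the preview, the same import is exposed as a Blender operator; here is a sketch to run in Blender's Python console (the filename is whatever you downloaded):

    import bpy

    # Equivalent of File > Import > Motion Capture (.bvh) in the UI.
    bpy.ops.import_anim.bvh(filepath="estimated_animation.bvh")

    # Equivalent of pressing Alt+A: start animation playback.
    bpy.ops.screen.animation_play()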

6. Clear all the generated data if you want to process a new video

  1. Select the code cell and press Shift+Enter.

Comments
  • Trying to run it locally, but hit an issue

    Windows 10, Python 3.6. When running 3dpose_estimate.sh it always gets stuck right at ipdb. I changed python2 to python within 3dpose_estimate.sh. I tried adding the neutral_smpl_with_cocoplus_reg.pkl file and also moving it around to see if it would find it. Any ideas? Can I add any other info that would help? Thanks

    Processing ti10002
    Fix path to models/
    D:\Anaconda3\lib\site-packages\dask\config.py:168: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
      data = yaml.load(f.read()) or {}
    > d:\machinelearning\3dpose\hmr-master\src\config.py(26)<module>()
         25 ipdb.set_trace()
    ---> 26 SMPL_MODEL_PATH = osp.join(model_dir, 'neutral_smpl_with_cocoplus_reg.pkl')
         27 SMPL_FACE_PATH = osp.join(curr_path, '../src/tf_smpl', 'smpl_faces.npy')
    ipdb>
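
    The log above hints at why it looks stuck: src/config.py line 25 calls ipdb.set_trace(), which drops into the interactive debugger rather than hanging. Typing c at the ipdb> prompt continues execution; alternatively, the call can be commented out, roughly as in this sketch (the model_dir value is an assumption based on the log):

        import os.path as osp

        curr_path = osp.dirname(osp.abspath(__file__))
        model_dir = osp.join(curr_path, '..', 'models')  # assumed location of models/

        # import ipdb; ipdb.set_trace()  # the debugger stop seen at line 25 above

        SMPL_MODEL_PATH = osp.join(model_dir, 'neutral_smpl_with_cocoplus_reg.pkl')
        SMPL_FACE_PATH = osp.join(curr_path, '../src/tf_smpl', 'smpl_faces.npy')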

    opened by crypticsymmetry 5
  • Questions regarding 3D lifting from other 2D detectors

    Hi, great work! I was wondering if it is possible to use other 2D detectors as input? There are two possible detectors that output the 2D joints: hrnet and posenet.

    For hrnet, there is a Python implementation at https://github.com/lxy5513/hrnet, and the keypoints are returned at https://github.com/lxy5513/hrnet/blob/master/pose_estimation/demo.py#L153, where:

    preds.shape = (N, 17, 2)    # N is the number of video frames; 2 is the (x, y) coordinate
    maxvals.shape = (N, 17, 1)  # 1 is the confidence of the coordinate
    

    I think the output keypoints have to be rearranged a bit to get something similar to that of openpose, which has the format {"people": [{"pose_keypoints_2d": [374, 460, 374, 516, 324, 518, 296, 596, 336, 636, 424, 512, 446, 590, 424, 604, 340, 660, 324, 776, 308, 890, 400, 660, 402, 792, 400, 904, 364, 448, 382, 450, 348, 450, 396, 450]}]}

    For posenet, there is a Python implementation at https://github.com/rwightman/posenet-python and the keypoints are returned here

    It also has a different keypoint ordering, but it can be extended to the openpose format.
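
    A hedged sketch of such a remapping from hrnet/posenet's 17-joint COCO order to the 18-joint openpose order used above; the index table is the standard COCO-to-openpose correspondence and the synthesized neck is an approximation, so verify both against your detector:

        import json
        import numpy as np

        # openpose order: nose, neck, r-shoulder, r-elbow, r-wrist, l-shoulder,
        # l-elbow, l-wrist, r-hip, r-knee, r-ankle, l-hip, l-knee, l-ankle,
        # r-eye, l-eye, r-ear, l-ear (None marks the neck, synthesized below).
        OP_FROM_COCO = [0, None, 6, 8, 10, 5, 7, 9, 12, 14, 16, 11, 13, 15, 2, 1, 4, 3]

        def coco17_to_openpose18(pred):
            """pred: (17, 2) array of (x, y); returns a flat 36-value openpose list."""
            neck = (pred[5] + pred[6]) / 2.0  # approximate neck as mid-shoulders
            joints = [neck if i is None else pred[i] for i in OP_FROM_COCO]
            return [float(v) for xy in joints for v in xy]

        preds = np.random.rand(30, 17, 2) * 640  # stand-in for hrnet output (N, 17, 2)
        frame0 = {"people": [{"pose_keypoints_2d": coco17_to_openpose18(preds[0])}]}
        print(json.dumps(frame0))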

    You also mentioned that the 3D joints are exported in a csv format. Would it also be possible to export them to Unity for animation? Your thoughts and inputs would be appreciated.

    opened by timtensor 4
  • CSV to blend error

    Hello!

    When I execute "blender --background hmr/csv_to_bvh.blend -noaudio -P hmr/csv_to_bvh.py" to convert the .csv files to .bvh, I always get the same error on different videos:

    BVH Exported: hmr/output/bvh_animation/estimated_animation.bvh frames:251

    Error, region type 4 missing in - name:"Action", id:12
    Error, region type 4 missing in - name:"Action", id:12

    Blender quit

    It only exports 251 frames. Why is this happening? I have more than 1000 frames, but it always stops here.

    Thanks in advance

    opened by aitorgutierrez 3
  • FileNotFoundError: due to the [Choose Files] button, the upload widget is not available even after the session is rerun

    Two error messages are shown below for your information:

    1> MessageError                           Traceback (most recent call last)
    <string> in <module>()
          1 #upload video
    ----> 2 exec(open('upload_videos.py').read())

    <string> in <module>()

    2> /usr/local/lib/python3.6/dist-packages/google/colab/_message.py in read_reply_from_input(message_id, timeout_sec)
        104         reply.get('colab_msg_id') == message_id):
        105       if 'error' in reply:
    --> 106         raise MessageError(reply['error'])
        107     return reply.get('data', None)
        108

    MessageError: TypeError: Cannot read property '_uploadFiles' of undefined

    Another error message is shown after the [Upload Video] cell is rerun, as below:

    FileNotFoundError                         Traceback (most recent call last)
    <string> in <module>()
          1 #upload video
    ----> 2 exec(open('upload_videos.py').read())

    FileNotFoundError: [Errno 2] No such file or directory: 'upload_videos.py'

    It happens even though I've rerun the Upload Video cell in the current browser session, again and again.

    Please help. Thanks!
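
    For what it's worth, the upload widget comes from google.colab's files API, which only works while the browser session is attached to the runtime; after the runtime is recycled, the earlier setup cells (which create upload_videos.py) have to be re-run first. A minimal stand-in for the upload step (the actual contents of upload_videos.py are an assumption):

        from google.colab import files

        # Renders the "Choose Files" button in the cell output; raises
        # MessageError if the runtime is no longer attached to the browser session.
        uploaded = files.upload()

        for name in uploaded:
            print('Uploaded:', name)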

    opened by markchang2006 2
  • csv_joined key points type

    Hello, I have been thinking about using the key points in the final joined CSV file to animate an avatar in Unity, and I would like to know if anyone here has tried it already and whether it is feasible. Are the x, y, and z values obtained rotations or positions?

    opened by Adilrn 2
  • tensorflow.contrib.slim error in "Process the video"

    I have followed the steps to execute the code, but I keep having an issue with the processing. When executing the "Process the video" segment, it doesn't go through. This is how the execution ends:

    Traceback (most recent call last):
      File "hmr/demo.py", line 33, in <module>
        from src.RunModel import RunModel
      File "/content/hmr/src/RunModel.py", line 13, in <module>
        from .models import get_encoder_fn_separate
      File "/content/hmr/src/models.py", line 19, in <module>
        import tensorflow.contrib.slim as slim
    ImportError: No module named contrib.slim
    Done
    Read blend: /content/hmr/csv_to_bvh.blend
    [bpy.data.objects['Ankle.R'], bpy.data.objects['Knee.R'], bpy.data.objects['Hip.R'], bpy.data.objects['Hip.L'], bpy.data.objects['Knee.L'], bpy.data.objects['Ankle.L'], bpy.data.objects['Wrist.R'], bpy.data.objects['Elbow.R'], bpy.data.objects['Shoulder.R'], bpy.data.objects['Shoulder.L'], bpy.data.objects['Elbow.L'], bpy.data.objects['Wrist.L'], bpy.data.objects['Neck'], bpy.data.objects['Head'], bpy.data.objects['Nose'], bpy.data.objects['Eye.L'], bpy.data.objects['Eye.R'], bpy.data.objects['Ear.L'], bpy.data.objects['Ear.R'], bpy.data.objects['Hip.Center']]
    Traceback (most recent call last):
      File "/content/hmr/csv_to_bvh.py", line 20, in <module>
        with open(fullpath, 'r', newline='') as csvfile:
    FileNotFoundError: [Errno 2] No such file or directory: 'hmr/output/csv_joined/csv_joined.csv'
    Blender quit
    src/tcmalloc.cc:283] Attempt to free invalid pointer 0x7f135b80e400

    Can anybody help me with this? I tried using tensorflow 1.3.0, 1.11, 1.14, and finally 2.2.0. When using 1.3.0 and 1.11, I get a different error; the code doesn't even reach the point where it requires contrib.slim.

    opened by Adilrn 2
  • Instructions on running the Python version locally?

    Hi, I had some issues with running your Google Colab notebook, so I looked at your Python code: https://github.com/Dene33/hmr

    I am trying to follow the steps as mentioned in the notebook, but I failed to run a video sequence through it. Is there a guide on how to run a video and get the mesh for the above-mentioned hmr repository?
    As instructed, the pretrained model has been downloaded to the model directory. Thanks!

    opened by timtensor 2
  • IOError and FileNotFoundError

    I am getting the following errors:

    IOError: [Errno 2] No such file or directory: 'keras_Realtime_Multi-Person_Pose_Estimation/sample_images/*'
    Done
    Read blend: /content/hmr/csv_to_bvh.blend
    [bpy.data.objects['Ankle.R'], bpy.data.objects['Knee.R'], bpy.data.objects['Hip.R'], bpy.data.objects['Hip.L'], bpy.data.objects['Knee.L'], bpy.data.objects['Ankle.L'], bpy.data.objects['Wrist.R'], bpy.data.objects['Elbow.R'], bpy.data.objects['Shoulder.R'], bpy.data.objects['Shoulder.L'], bpy.data.objects['Elbow.L'], bpy.data.objects['Wrist.L'], bpy.data.objects['Neck'], bpy.data.objects['Head'], bpy.data.objects['Nose'], bpy.data.objects['Eye.L'], bpy.data.objects['Eye.R'], bpy.data.objects['Ear.L'], bpy.data.objects['Ear.R'], bpy.data.objects['Hip.Center']]
    Traceback (most recent call last):
      File "/content/hmr/csv_to_bvh.py", line 20, in <module>
        with open(fullpath, 'r', newline='') as csvfile:
    FileNotFoundError: [Errno 2] No such file or directory: 'hmr/output/csv_joined/csv_joined.csv'
    Blender quit
    src/tcmalloc.cc:283] Attempt to free invalid pointer 0x7ff0fa40e400

    opened by KevinKons 2
  • Frame limit to 250?

    Hi,

    Thanks for the awesome script.

    I have been trying to animate a larger frame sequence (approx. 1500 frames), but the resulting animation is only 250 frames long:

    'BVH Exported: ./estimated_animation.bvh frames:251'

    and there is no error. Can you guide me through fixing this?

    Thanks
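
    The 251-frame cutoff matches Blender's default scene end frame of 250, and the .bvh exporter only writes the scene's frame range. A possible fix (an assumption, not confirmed upstream) is to extend the range in csv_to_bvh.py before exporting:

        import csv
        import bpy

        # Count the frames in the joined csv (path taken from the repo's error logs).
        with open('hmr/output/csv_joined/csv_joined.csv', 'r', newline='') as csvfile:
            num_frames = sum(1 for _ in csv.reader(csvfile)) - 1  # minus the header row

        # Blender's scene range defaults to 1..250; widen it so the exporter
        # writes every frame instead of stopping at frame 250.
        bpy.context.scene.frame_start = 1
        bpy.context.scene.frame_end = num_frames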

    opened by noumanriazkhan 2
  • visualize motion with bvh file

    Hi,

    Thanks for sharing this. I successfully got the .bvh file by following the steps you documented. I want to know how to visualize the motion in Blender. Do I need to create a new human mesh file and then apply the motion to it? Besides, is there any way to convert the .bvh file to the AMC format, or can we directly output the AMC file by using hmr?

    Thanks

    opened by huiwenzhang 1
  • UnknownError: Failed to get convolution algorithm.

    Great work on this project; it looks like a really useful tool. I'm getting an error when I get to the "Process the video" portion of the code. Do you have any insight on how to go about fixing this error?

    Thanks for any help!

    Here's the output:

    ---------------------------------------------------------------------------
    UnknownError                              Traceback (most recent call last)
    <string> in <module>()
          3 #2d pose estimation. For each image creates corresponding .json file with format
          4 #similar to output .json format of openpose (https://github.com/CMU-Perceptual-Computing-Lab/openpose)
    ----> 5 exec(open('2d_pose_estimation.py').read())
          6
          7 #3d pose estimation

    <string> in <module>()

    /usr/local/lib/python3.6/dist-packages/keras/engine/training.py in predict(self, x, batch_size, verbose, steps)
       1167                                  batch_size=batch_size,
       1168                                  verbose=verbose,
    -> 1169                                  steps=steps)
       1170
       1171     def train_on_batch(self, x, y,

    /usr/local/lib/python3.6/dist-packages/keras/engine/training_arrays.py in predict_loop(model, f, ins, batch_size, verbose, steps)
        292                 ins_batch[i] = ins_batch[i].toarray()
        293
    --> 294         batch_outs = f(ins_batch)
        295         batch_outs = to_list(batch_outs)
        296         if batch_index == 0:

    /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py in __call__(self, inputs)
       2713             return self._legacy_call(inputs)
       2714
    -> 2715         return self._call(inputs)
       2716     else:
       2717         if py_any(is_tensor(x) for x in inputs):

    /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py in _call(self, inputs)
       2673         fetched = self._callable_fn(*array_vals, run_metadata=self.run_metadata)
       2674     else:
    -> 2675         fetched = self._callable_fn(*array_vals)
       2676     return fetched[:len(self.outputs)]
       2677

    /usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py in __call__(self, *args, **kwargs)
       1437       ret = tf_session.TF_SessionRunCallable(
       1438           self._session._session, self._handle, args, status,
    -> 1439           run_metadata_ptr)
       1440       if run_metadata:
       1441         proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

    /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/errors_impl.py in __exit__(self, type_arg, value_arg, traceback_arg)
        526           None, None,
        527           compat.as_text(c_api.TF_Message(self.status.status)),
    --> 528           c_api.TF_GetCode(self.status.status))
        529       # Delete the underlying status object from memory otherwise it stays alive
        530       # as there is a reference to status from this from the traceback due to

    UnknownError: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
    [[{{node conv1_1/convolution}} = Conv2D[T=DT_FLOAT, data_format="NCHW", dilations=[1, 1, 1, 1], padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](conv1_1/convolution-0-TransposeNHWCToNCHW-LayoutOptimizer, conv1_1/kernel/read)]]
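
    A workaround that often helps with this particular cuDNN initialization failure (a general TF 1.x fix, not specific to this repo) is to stop TensorFlow from grabbing all GPU memory up front, before the Keras model is built:

        import tensorflow as tf
        from keras import backend as K

        # Allocate GPU memory on demand instead of all at once, a frequent
        # cause of "Failed to get convolution algorithm" on shared GPUs.
        config = tf.ConfigProto()
        config.gpu_options.allow_growth = True
        K.set_session(tf.Session(config=config))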

    opened by mikebru 1
  • Process the video

    So I have successfully run 2D pose estimation and have gotten the .json outputs, but I am running into this issue with the 3D pose estimation step (bash hmr/3dpose_estimate.sh). Please help!

    Traceback (most recent call last):
      File "hmr/demo.py", line 32, in <module>
        import src.config
      File "/content/hmr/src/config.py", line 59, in <module>
        flags.DEFINE_string('log_dir', 'logs', 'Where to save training models')
      File "/usr/local/lib/python2.7/dist-packages/absl/flags/_defines.py", line 241, in DEFINE_string
        DEFINE(parser, name, default, help, flag_values, serializer, **args)
      File "/usr/local/lib/python2.7/dist-packages/absl/flags/_defines.py", line 82, in DEFINE
        flag_values, module_name)
      File "/usr/local/lib/python2.7/dist-packages/absl/flags/_defines.py", line 104, in DEFINE_flag
        fv[flag.name] = flag
      File "/usr/local/lib/python2.7/dist-packages/absl/flags/_flagvalues.py", line 430, in __setitem__
        raise _exceptions.DuplicateFlagError.from_flag(name, self)
    absl.flags._exceptions.DuplicateFlagError: The flag 'log_dir' is defined twice. First from absl.logging, Second from src.config. Description from first occurrence: directory to write logfiles into
    Done
    Read blend: /content/hmr/csv_to_bvh.blend
    [bpy.data.objects['Ankle.R'], bpy.data.objects['Knee.R'], bpy.data.objects['Hip.R'], bpy.data.objects['Hip.L'], bpy.data.objects['Knee.L'], bpy.data.objects['Ankle.L'], bpy.data.objects['Wrist.R'], bpy.data.objects['Elbow.R'], bpy.data.objects['Shoulder.R'], bpy.data.objects['Shoulder.L'], bpy.data.objects['Elbow.L'], bpy.data.objects['Wrist.L'], bpy.data.objects['Neck'], bpy.data.objects['Head'], bpy.data.objects['Nose'], bpy.data.objects['Eye.L'], bpy.data.objects['Eye.R'], bpy.data.objects['Ear.L'], bpy.data.objects['Ear.R'], bpy.data.objects['Hip.Center']]
    Traceback (most recent call last):
      File "/content/hmr/csv_to_bvh.py", line 20, in <module>
        with open(fullpath, 'r', newline='') as csvfile:
    FileNotFoundError: [Errno 2] No such file or directory: 'hmr/output/csv_joined/csv_joined.csv'
    Blender quit
    src/tcmalloc.cc:283] Attempt to free invalid pointer 0x7f1e9680e400
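
    The clash is between absl.logging, which already registers a log_dir flag, and the identical definition in src/config.py. One possible fix (a sketch, not the repo's own solution) is to guard or rename the flag:

        from absl import flags

        # absl.logging pre-registers 'log_dir', so only define it if absent
        # (renaming the flag in src/config.py would work as well).
        if 'log_dir' not in flags.FLAGS:
            flags.DEFINE_string('log_dir', 'logs', 'Where to save training models')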

    opened by sidul001 1
  • specify version in requirements.txt

    Wouldn't it be a good and fairly obvious idea to specify the exact versions to use while setting up the project? There are tons of errors here; please add this.

    opened by alexeybozhchenko 1
  • Distortions in feet in bvh.

    Hey @Dene33, great work! However, I had one issue. I generated a csv by other means and matched it exactly with the csv format you are using (headers, order, etc.), then converted that csv to bvh using your technique. The whole structure in the bvh is correct, but the feet in the bvh are being rotated. However, when I input the same video into your .ipynb file, the feet are correctly aligned. I don't understand why that is happening. I am attaching a link to both the CSVs, the BVHs, and the original video for your reference: https://drive.google.com/drive/folders/1UEq-8Ftrz3kyZvXQb_uoNw8_oVUFUVBM?usp=sharing If you could please help me with this.

    opened by bhumikasinghrk 0