PyTorch implementation for 3D human pose estimation

Overview

Towards 3D Human Pose Estimation in the Wild: a Weakly-supervised Approach

This repository is the PyTorch implementation for the network presented in:

Xingyi Zhou, Qixing Huang, Xiao Sun, Xiangyang Xue, Yichen Wei. Towards 3D Human Pose Estimation in the Wild: a Weakly-supervised Approach. ICCV 2017 (arXiv:1704.02447)

Note: This repository has been updated and differs from the method described in the paper. To fully reproduce the results in the paper, please check out the original Torch implementation or our PyTorch re-implementation branch (slightly worse than Torch). We also provide a clean 2D hourglass network branch.

The updates include:

  • Changed the network backbone to ResNet50 with deconvolution layers (Xiao et al., ECCV 2018). Training is now about 3x faster than with the original hourglass backbone (but with no significant performance improvement).
  • Changed the depth regression sub-network to a one-layer depth map (as described in our StarMap project).
  • Switched the Human3.6M data to the official release used in the ECCV'18 challenge.
  • Updated from Python 2.7 and PyTorch 0.1.12 to Python 3.6 and PyTorch 0.4.1.

Contact: [email protected]

Installation

The code was tested with Anaconda Python 3.6 and PyTorch v0.4.1. After installing Anaconda and PyTorch:

  1. Clone the repo:

    POSE_ROOT=/path/to/clone/pytorch-pose-hg-3d
    git clone https://github.com/xingyizhou/pytorch-pose-hg-3d $POSE_ROOT
    
  2. Install dependencies (opencv and progress):

    conda install --channel https://conda.anaconda.org/menpo opencv
    conda install --channel https://conda.anaconda.org/auto progress
    
  3. Disable cudnn for batch_norm (see issue; a coarser runtime alternative is sketched after this list):

    # PYTORCH=/path/to/pytorch
    # for pytorch v0.4.0
    sed -i "1194s/torch\.backends\.cudnn\.enabled/False/g" ${PYTORCH}/torch/nn/functional.py
    # for pytorch v0.4.1
    sed -i "1254s/torch\.backends\.cudnn\.enabled/False/g" ${PYTORCH}/torch/nn/functional.py
    
  4. Optionally, install tensorboard for visualizing training.

    pip install tensorflow
    
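As a coarser runtime alternative to the sed patch in step 3 (a sketch on our part, not the workflow this repo uses), cuDNN can also be disabled globally from Python, which avoids editing functional.py at the cost of slower convolutions everywhere:

    import torch

    # Disable cuDNN for the whole process, not just batch_norm.
    # Coarser than the sed patch above, but needs no source edits.
    torch.backends.cudnn.enabled = False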

Demo

  • Download our pre-trained model and move it to models/.
  • Run python demo.py --demo /path/to/image/or/image/folder [--gpus -1] [--load_model /path/to/model].

--gpus -1 runs in CPU mode. We provide example images in images/. When testing your own image, it is important that the person is roughly centered in the image and that most of the body parts are within the image.
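If your subject is off-center, cropping around the person first usually helps; a minimal OpenCV sketch (the bounding-box values are placeholders, not something this repository computes for you):

    import cv2

    img = cv2.imread('input.jpg')
    # Placeholder person bounding box (x, y, w, h); substitute the output
    # of your own person detector here.
    x, y, w, h = 100, 50, 300, 500
    crop = img[y:y + h, x:x + w]
    cv2.imwrite('cropped.jpg', crop)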

Benchmark Testing

To test our model on the Human3.6M dataset, run

python main.py --exp_id test --task human3d --dataset fusion_3d --load_model ../models/fusion_3d_var.pth --test --full_test

The expected result is an MPJPE of 64.55mm.
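For reference, MPJPE is the mean per-joint position error after aligning the prediction and ground truth at the root joint; a minimal sketch of the metric (root index 7 follows the spine/root convention used in this repo's evaluation code, and the exact protocol here is an assumption):

    import numpy as np

    def mpjpe(pred, gt, root=7):
        # pred, gt: (num_joints, 3) arrays in millimetres.
        pred = pred - pred[root]  # root-align the prediction
        gt = gt - gt[root]        # root-align the ground truth
        return np.linalg.norm(pred - gt, axis=1).mean()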

Training

  • Prepare the training data:

    ${POSE_ROOT}
    |-- data
    `-- |-- mpii
        `-- |-- annot
            |   |-- train.json
            |   |-- valid.json
            `-- images
                |-- 000001163.jpg
                |-- 000003072.jpg
    `-- |-- h36m
        `-- |-- ECCV18_Challenge
            |   |-- Train
            |   |-- Val
            `-- msra_cache
                `-- |-- HM36_eccv_challenge_Train_cache
                    |   |-- HM36_eccv_challenge_Train_w288xh384_keypoint_jnt_bbox_db.pkl
                    `-- HM36_eccv_challenge_Val_cache
                        |-- HM36_eccv_challenge_Val_w288xh384_keypoint_jnt_bbox_db.pkl
    
  • Stage 1: Train the 2D pose network only. model, log

    python main.py --exp_id mpii

  • Stage 2: Train on 2D and 3D data without the geometry loss (drop the learning rate at epoch 45). model, log

    python main.py --exp_id fusion_3d --task human3d --dataset fusion_3d --ratio_3d 1 --weight_3d 0.1 --load_model ../exp/mpii/model_last.pth --num_epoch 60 --lr_step 45

  • Stage 3: Train with the geometry loss. model, log

    python main.py --exp_id fusion_3d_var --task human3d --dataset fusion_3d --ratio_3d 1 --weight_3d 0.1 --weight_var 0.01 --load_model ../models/fusion_3d.pth --num_epoch 10 --lr 1e-4

Citation

@InProceedings{Zhou_2017_ICCV,
  author = {Zhou, Xingyi and Huang, Qixing and Sun, Xiao and Xue, Xiangyang and Wei, Yichen},
  title = {Towards 3D Human Pose Estimation in the Wild: A Weakly-Supervised Approach},
  booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
  month = {Oct},
  year = {2017}
}
Comments
  • Accuracy drops significantly during the last epoch of stage 1

    Hi Xingyi, after training the 2D hourglass component for 50+ epochs the accuracy is approximately 83%, but after the 60th epoch it suddenly drops to 43%.

    Here's the log: [screenshot]

    opened by FANG-Xiaolin 18
  • Error running demo on CPU (not CUDA)

    I am using the CPU, not a GPU/CUDA. I use Anaconda and Spyder to run demo.py. I got this error:

    File "C:\Users\user\Anaconda3\envs\tensorflow\lib\site-packages\torch\serialization.py", line 78, in validate_cuda_device raise RuntimeError('Attempting to deserialize object on a CUDA '

    RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location='cpu' to map your storages to the CPU.

    Can you help me?
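    As the error message says, the checkpoint can be mapped to the CPU at load time; a minimal sketch (the checkpoint path is a placeholder):

        import torch

        # Map all storages to the CPU so a CUDA-saved checkpoint loads
        # on a CPU-only machine; the path below is a placeholder.
        checkpoint = torch.load('/path/to/model.pth', map_location='cpu')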

    opened by NguyenDangBinh 16
  • Demo error - 'BatchNorm2d' object has no attribute 'track_running_stats'

    Hi,

    I'm using torch version 0.5 and I get that error at this line:

    output = model(input_var)

    The error message looks like this:

      File "src/demo.py", line 26, in main
        output = model(input_var)
      File "/home/narvis/miniconda3/envs/hpe/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/narvis/Dev/pytorch-pose-hg-3d/src/models/hg_3d.py", line 101, in forward
        x = self.bn1(x)
      File "/home/narvis/miniconda3/envs/hpe/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/narvis/miniconda3/envs/hpe/lib/python3.6/site-packages/torch/nn/modules/batchnorm.py", line 66, in forward
        self.training or not self.track_running_stats,
      File "/home/narvis/miniconda3/envs/hpe/lib/python3.6/site-packages/torch/nn/modules/module.py", line 518, in __getattr__
        type(self).__name__, name))
    AttributeError: 'BatchNorm2d' object has no attribute 'track_running_stats'
    

    I found that other repositories have similar issues when upgrading from torch 0.3 to 0.4: https://github.com/kunglab/ddnn/issues/2

    And I checked the specific commit to find out how to solve this error: https://github.com/kunglab/ddnn/commit/071c82fbff0ae86ff1da934ff725c0004c2ccc7d

    But I couldn't find the right solution yet.

    I would like to use the most recent torch version so downgrading is not really an option for me.
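    One possible workaround (an assumption based on similar reports, not an official fix) is to patch the missing attribute onto the deserialized BatchNorm layers before inference, assuming model is the network loaded from the old checkpoint:

        import torch.nn as nn

        # Models pickled with pre-0.4 PyTorch lack 'track_running_stats'
        # on their BatchNorm layers; setting it restores compatibility.
        for m in model.modules():
            if isinstance(m, nn.BatchNorm2d) and not hasattr(m, 'track_running_stats'):
                m.track_running_stats = True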

    opened by tobiascz 14
  • Testing

    From the README: Download our pre-trained model and move it to models. Run python demo.py -demo /path/to/image [-loadModel /path/to/model].

    Kindly define again how to run the testing side. I have downloaded the pre-trained model and moved it to the models folder. Then do I run demo.py, which is in the src folder? Am I right?

    opened by manza-ari 12
  • RuntimeError: The size of tensor a (27) must match the size of tensor b (26) at non-singleton dimension 3

    I put the pretrained model and a test picture in /src/, and ran python demo.py -demo test_1.png. The stderr output is:

    /home/ubuntu/anaconda3/envs/pytorch_p27/lib/python2.7/site-packages/matplotlib/__init__.py:962: UserWarning: Duplicate key in file "/home/ubuntu/.config/matplotlib/matplotlibrc", line #2
    /home/ubuntu/anaconda3/envs/pytorch_p27/lib/python2.7/site-packages/matplotlib/__init__.py:962: UserWarning: Duplicate key in file "/home/ubuntu/.config/matplotlib/matplotlibrc", line #3
    /home/ubuntu/anaconda3/envs/pytorch_p27/lib/python2.7/site-packages/torch/serialization.py:325: SourceChangeWarning: source code of class 'torch.nn.modules.conv.Conv2d' has changed. you can retrieve the original source code by accessing the object's source attribute or set torch.nn.Module.dump_patches = True and use the patch tool to revert the changes.
    (the same SourceChangeWarning is repeated for BatchNorm2d, ReLU, MaxPool2d, ModuleList, Upsample, Sequential, and Linear)
    Traceback (most recent call last):
      File "demo.py", line 31, in <module>
        main()
      File "demo.py", line 20, in main
        output = model(input_var)
      File "/home/ubuntu/anaconda3/envs/pytorch_p27/lib/python2.7/site-packages/torch/nn/modules/module.py", line 357, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/ubuntu/pytorch-pose-hg-3d/src/models/hg_3d.py", line 112, in forward
        hg = self.hourglass[i](x)
      File "/home/ubuntu/anaconda3/envs/pytorch_p27/lib/python2.7/site-packages/torch/nn/modules/module.py", line 357, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/ubuntu/pytorch-pose-hg-3d/src/models/hg_3d.py", line 46, in forward
        low2 = self.low2(low1)
      (the two frames above repeat twice more as the hourglass recurses)
      File "/home/ubuntu/anaconda3/envs/pytorch_p27/lib/python2.7/site-packages/torch/nn/modules/module.py", line 357, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/ubuntu/pytorch-pose-hg-3d/src/models/hg_3d.py", line 57, in forward
        return up1 + up2
    RuntimeError: The size of tensor a (27) must match the size of tensor b (26) at non-singleton dimension 3
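    The up1 + up2 size mismatch usually means the input resolution does not survive the hourglass's repeated downsample/upsample round trip. A minimal sketch of the usual fix, resizing the input to a fixed 256x256 resolution (an assumption based on the demo's default input size):

        import cv2

        # Resize to the fixed resolution the hourglass expects so the
        # downsample and upsample paths yield matching feature-map sizes.
        img = cv2.imread('test_1.png')
        img = cv2.resize(img, (256, 256))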

    opened by XinyuZhou-1014 9
  • Pre-processing for the MPII dataset for the 2D version

    Hi @xingyizhou ,

    I was trying out the 2D pre-trained model you provided on MPII images, and I'm getting the following error:

    Traceback (most recent call last):
      File "demo.py", line 37, in <module>
        main()
      File "demo.py", line 26, in main
        output = model(input_var)
      File "/home/udion/anaconda3/envs/DeepCV3.5/lib/python3.5/site-packages/torch/nn/modules/module.py", line 325, in __call__
        result = self.forward(*input, **kwargs)
      File "/media/udion/a2c5c487-f939-4b82-a348-86b3d1bdb024/udion_home/Projects/Uncertain_pose_estimate/src/models/hg.py", line 106, in forward
        hg = self.hourglass[i](x)
      File "/home/udion/anaconda3/envs/DeepCV3.5/lib/python3.5/site-packages/torch/nn/modules/module.py", line 325, in __call__
        result = self.forward(*input, **kwargs)
      File "/media/udion/a2c5c487-f939-4b82-a348-86b3d1bdb024/udion_home/Projects/Uncertain_pose_estimate/src/models/hg.py", line 45, in forward
        low2 = self.low2(low1)
      File "/home/udion/anaconda3/envs/DeepCV3.5/lib/python3.5/site-packages/torch/nn/modules/module.py", line 325, in __call__
        result = self.forward(*input, **kwargs)
      File "/media/udion/a2c5c487-f939-4b82-a348-86b3d1bdb024/udion_home/Projects/Uncertain_pose_estimate/src/models/hg.py", line 45, in forward
        low2 = self.low2(low1)
      File "/home/udion/anaconda3/envs/DeepCV3.5/lib/python3.5/site-packages/torch/nn/modules/module.py", line 325, in __call__
        result = self.forward(*input, **kwargs)
      File "/media/udion/a2c5c487-f939-4b82-a348-86b3d1bdb024/udion_home/Projects/Uncertain_pose_estimate/src/models/hg.py", line 45, in forward
        low2 = self.low2(low1)
      File "/home/udion/anaconda3/envs/DeepCV3.5/lib/python3.5/site-packages/torch/nn/modules/module.py", line 325, in __call__
        result = self.forward(*input, **kwargs)
      File "/media/udion/a2c5c487-f939-4b82-a348-86b3d1bdb024/udion_home/Projects/Uncertain_pose_estimate/src/models/hg.py", line 56, in forward
        return up1 + up2
    RuntimeError: The size of tensor a (15) must match the size of tensor b (14) at non-singleton dimension 2
    

    I think there is some preprocessing to be done on the MPII images; any clues?

    (P.S.: I adapted demo.py from the 3D version.)

    opened by udion 8
  • H3.6M preprocessing code

    Hi,

    I was wondering if you could share (possibly privately) the preprocessing code used to obtain the preprocessed H3.6M images and joint positions.

    Thanks!

    opened by bloodymeli 8
  • How to map from x,y in joint_2d to x',y' in joint_3d_mono?

    Hi, is there any relationship between the x,y in pts and the x,y in pts_3d_mono? The model outputs x,y in pts, but the ground truth is in pts_3d_mono, so I want to figure out whether there is a mapping between them. Thanks! https://github.com/xingyizhou/pytorch-pose-hg-3d/blob/84ad44e7a8aa15307b9a371ce85b3dee8d5ad2dc/src/datasets/h36m.py#L40-L43

    opened by Fangyh09 5
  • Terrible result during training

    Hi Xingyi, when I was training stages 2 and 3, I found that the accuracy and MPJPE were terrible. I noticed the accuracy dropped from 0.83 to 0.02 in the first epoch of stage 1! What is the possible reason for such a case?

    Here is the log (pytorch-gpu version 0.3.1): [screenshot]

    opened by liuyangwen 4
  • Two different ways to calculate the root joint

    It seems that pts_3d[7] is the spine and root joint in your code. A) However, its position is calculated in two different ways: one is the average of the left and right shoulders, pts_3d[7] = (pts_3d[12] + pts_3d[13]) / 2; the other is the average of the neck and the hip (pelvis), p[i, 7] = (p[i, 6] + p[i, 8]) / 2. I guess the latter is more reasonable, because no joint's location should be the average of the left and right shoulders. Is there some special consideration behind this? Please see h36m.py and eval.py.

    B) It seems that pts_3d[6], i.e. the hip (pelvis), is the more commonly used root joint in other work on 3D human pose estimation. So I wonder why pts_3d[7], i.e. the spine, is taken as the root joint; is it because it gives better performance in experiments?

    Thank you very much!

    opened by DeepRunner 4
  • Q: scaling in 3D

    Hi,

    I was wondering if you could please help me understand the reasoning behind this code:

    # Center the 3D pose at the root joint.
    pts_3d = pts_3d - pts_3d[self.root]

    # Sum of bone (edge) lengths of the 2D prediction (in image coordinates)
    # and of the x-y projection of the root-centered 3D pose (its own units).
    s2d, s3d = 0, 0
    for e in ref.edges:
      s2d += ((pts[e[0]] - pts[e[1]]) ** 2).sum() ** 0.5
      s3d += ((pts_3d[e[0], :2] - pts_3d[e[1], :2]) ** 2).sum() ** 0.5
    # Ratio that rescales the 3D skeleton to 2D image units.
    scale = s2d / s3d

    # Rescale the 3D pose to image units, translate x and y to the 2D root
    # location, and center the depths around half the image size.
    for j in range(ref.nJoints):
      pts_3d[j, 0] = pts_3d[j, 0] * scale + pts[self.root, 0]
      pts_3d[j, 1] = pts_3d[j, 1] * scale + pts[self.root, 1]
      pts_3d[j, 2] = pts_3d[j, 2] * scale + ref.h36mImgSize / 2

    A) If I understand correctly, the point is that all coordinates in pts_3d end up on the same order of magnitude. Am I correct? B) What is the reason for multiplying by the scale factor in the last three rows? C) Why is the root location not multiplied by the same factor? D) Why is ref.h36mImgSize / 2 the offset for the z coordinate?

    opened by bloodymeli 4
  • Demo has an error

    python demo.py --demo ../images/mpi_inf_3dhp_1456.png --load_model ../hgreg-3d.pth
    heads {'hm': 16}
    => using msra resnet 'msra_50'
    Downloading: "https://download.pytorch.org/models/resnet50-19c8e357.pth" to /home/featurize/.cache/torch/hub/checkpoints/resnet50-19c8e357.pth
    100%|██████████| 97.8M/97.8M [00:00<00:00, 111MB/s]
    => loading pretrained model https://download.pytorch.org/models/resnet50-19c8e357.pth
    Traceback (most recent call last):
      File "demo.py", line 80, in <module>
        main(opt)
      File "demo.py", line 60, in main
        model, _, _ = create_model(opt)
      File "/home/featurize/work/Pytorch-pose-hg-3d/src/lib/model.py", line 20, in create_model
        opt.load_model, map_location=lambda storage, loc: storage)
      File "/environment/miniconda3/lib/python3.7/site-packages/torch/serialization.py", line 608, in load
        return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
      File "/environment/miniconda3/lib/python3.7/site-packages/torch/serialization.py", line 787, in _legacy_load
        result = unpickler.load()
    ModuleNotFoundError: No module named 'models.hg_3d'
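    The legacy hgreg-3d.pth checkpoint was saved as a full pickled model that references the old models.hg_3d module layout, which no longer exists in the updated code tree. One possible workaround (an assumption, not an official fix) is to make the old layout importable before torch.load, using a checkout of the original code (the path below is a placeholder):

        import sys
        import torch

        # Let the unpickler resolve 'models.hg_3d' by putting a checkout
        # of the old source tree on the path (placeholder path below).
        sys.path.insert(0, '/path/to/old/pytorch-pose-hg-3d/src')
        model = torch.load('../hgreg-3d.pth', map_location='cpu')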

    opened by henbucuoshanghai 1
  • Is it possible to input 2D coordinates obtained by other methods to obtain 3D coordinates in your model?

    Hello! I'm glad to see your work; it's really great, especially for students from non-computer-science backgrounds.

    In the demo, the 2D pose estimation of this project seems less accurate than that of Baidu AI, an open platform. Is it possible to input 2D coordinates obtained by other methods into your model to obtain 3D coordinates?

    opened by polo1968 0
  • How to use the camera of my own PC?

    I want to know whether it is possible to use cv2.VideoCapture() to grab the current frame as the input to the model's network and get a correct result. I have tried but failed... Thanks.
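    A minimal capture-loop sketch (run_model is a hypothetical stand-in for whatever single-image inference demo.py performs; the 256x256 resize follows the demo's default input size):

        import cv2

        cap = cv2.VideoCapture(0)  # default PC camera
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            frame = cv2.resize(frame, (256, 256))  # match the demo input
            # run_model(frame)  # hypothetical per-frame inference call
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break
        cap.release()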

    opened by ZXin0305 0
  • Segmentation fault (core dumped)

    Hello, when I run demo.py, an error like the title (segmentation fault, core dumped) happens. Is it a problem with my computer? Thank you, I hope to get an answer from you.

    opened by ZXin0305 1
  • The model performs worse when detecting squats

    Hi, thanks for your amazing work!!! I used the fusion_3d_var.pth model to detect the pose of someone doing a squat, as in the attached image [273.png], and the pose coordinates are worse. How can I make it better? Or can we fuse the time domain to get a more accurate 3D pose? Thanks in advance.

    opened by dandingol03 0