3D Multi-Person Pose Estimation by Integrating Top-Down and Bottom-Up Networks

Introduction

This repository contains the code and models for the following paper.

Monocular 3D Multi-Person Pose Estimation by Integrating Top-Down and Bottom-Up Networks
Cheng Yu, Bo Wang, Bo Yang, Robby T. Tan
Computer Vision and Pattern Recognition, CVPR 2021.

Overview of the proposed method: see 3DMPP_framework.png in the repository root.

Updates

  • 06/18/2021: evaluation code for PCK (person-centric) and PCK_abs (camera-centric), and the pre-trained model for the MuPoTS dataset, tested and released

Installation

Dependencies

PyTorch >= 1.5
Python >= 3.6

Create a conda environment:

conda create -n 3dmpp python=3.6
conda activate 3dmpp

Install PyTorch (tested on PyTorch 1.5-1.7) for your OS and installed GPU driver by following the official install pytorch guide. For example, on Linux with CUDA 11.0:

conda install pytorch torchvision cudatoolkit=11.0 -c pytorch
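
A quick sanity check after installation (a minimal sketch; these calls exist in any recent PyTorch build) confirms that the install worked and CUDA is visible:

import torch

# Confirm the installed version and that PyTorch can see the GPU.
print(torch.__version__)            # expected 1.5.x - 1.7.x per the note above
print(torch.cuda.is_available())    # True when the CUDA toolkit/driver match
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))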

Install the dependencies:

pip install -r requirements.txt

Build the Fast Gaussian Map tool:

cd lib/fastgaus
python setup.py build_ext --inplace
cd ../..
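
If the build succeeds, python setup.py build_ext --inplace leaves a compiled extension inside lib/fastgaus (a .so file on Linux). A quick check, assuming a Linux build:

python -c "import glob; print(glob.glob('lib/fastgaus/*.so'))"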

Models and Testing Data

Pre-trained Models

Download the pre-trained model and processed human keypoint files here, and unzip the downloaded zip file into this project's root directory. Two folders should appear after unzipping: ./ckpts and ./mupots.

MuPoTS Dataset

The MuPoTS eval set is needed to reproduce the evaluation results reported in Table 3 of the main paper; it is available on the MuPoTS dataset website. Download the mupots-3d-eval.zip file, unzip it, and run get_mupots-3d.sh to download the dataset. After the download completes, a MultiPersonTestSet.zip (~5.6 GB) is available. Unzip it and move the MultiPersonTestSet folder to the root directory of the project to perform evaluation on the MuPoTS test set. You should then see the following directory structure.

${3D-Multi-Person-Pose_ROOT}
|-- ckpts              <-- the downloaded pre-trained Models
|-- lib
|-- MultiPersonTestSet <-- the newly added MuPoTS eval set
|-- mupots             <-- the downloaded processed human keypoint files
|-- util
|-- 3DMPP_framework.png
|-- calculate_mupots_btmup.py
|-- other python code, LICENSE, and README files
...
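
Before running the evaluation, it may help to verify that the layout above is in place. A minimal sketch (check_layout is a hypothetical helper, not part of the released code; the folder names come from the tree above):

import os

def check_layout(root="."):
    """Report any required top-level folder that is missing."""
    required = ["ckpts", "mupots", "MultiPersonTestSet", "lib", "util"]
    missing = [d for d in required if not os.path.isdir(os.path.join(root, d))]
    if missing:
        print("Missing folders:", ", ".join(missing))
    else:
        print("Directory layout looks complete.")

check_layout()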

Usage

MuPoTS dataset evaluation

3D Multi-Person Pose Estimation Evaluation on MuPoTS Dataset

The following table is similar to Table 3 in the main paper, which provides quantitative evaluations on the MuPoTS-3D dataset (best performance in bold). Instructions to reproduce the results (PCK and PCK_abs) are given in the next section.

| Group | Methods | PCK | PCK_abs |
|-------|---------|-----|---------|
| Person-centric (relative 3D pose) | Mehta et al., 3DV'18 | 65.0 | N/A |
| Person-centric (relative 3D pose) | Rogez et al., IEEE TPAMI'19 | 70.6 | N/A |
| Person-centric (relative 3D pose) | Mehta et al., ACM TOG'20 | 70.4 | N/A |
| Person-centric (relative 3D pose) | Cheng et al., ICCV'19 | 74.6 | N/A |
| Person-centric (relative 3D pose) | Cheng et al., AAAI'20 | 80.5 | N/A |
| Camera-centric (absolute 3D pose) | Moon et al., ICCV'19 | 82.5 | 31.8 |
| Camera-centric (absolute 3D pose) | Lin et al., ECCV'20 | 83.7 | 35.2 |
| Camera-centric (absolute 3D pose) | Zhen et al., ECCV'20 | 80.5 | 38.7 |
| Camera-centric (absolute 3D pose) | Li et al., ECCV'20 | 82.0 | 43.8 |
| Camera-centric (absolute 3D pose) | Cheng et al., AAAI'21 | 87.5 | 45.7 |
| Camera-centric (absolute 3D pose) | **Our method** | **89.6** | **48.0** |

Run evaluation on MuPoTS dataset with estimated 2D joints as input

We split the whole pipeline into several separate steps to make it clearer for users.

python calculate_mupots_topdown_pts.py
python calculate_mupots_topdown_depth.py
python calculate_mupots_btmup.py
python calculate_mupots_integrate.py

Please note that python calculate_mupots_btmup.py takes a while to run (30-40 minutes, depending on your machine).
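
To run the four steps unattended, a small Python wrapper can chain them (a sketch; check=True aborts at the first failing step):

import subprocess
import sys

# Run the pipeline stages in order; stop immediately if one fails.
steps = [
    "calculate_mupots_topdown_pts.py",
    "calculate_mupots_topdown_depth.py",
    "calculate_mupots_btmup.py",        # the slow step (30-40 minutes)
    "calculate_mupots_integrate.py",
]
for script in steps:
    print("Running", script)
    subprocess.run([sys.executable, script], check=True)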

To evaluate the person-centric 3D multi-person pose estimation:

python eval_mupots_pck.py

After running the above code, the following PCK (person-centric, pelvis-based origin) value is expected; it matches the number reported in Table 3 of the paper, PCK = 89 (in percent).

...
Seq: 18
Seq: 19
Seq: 20
PCK_MEAN: 0.8994453169938017
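
For reference, PCK counts a joint as correct when its error, after shifting both poses to a pelvis-based origin, falls below a distance threshold; MuPoTS evaluations conventionally use 150 mm. A minimal NumPy sketch of the person-centric metric (illustrative only, not the repository's evaluation code; root_idx=14 assumes the 17-joint MuPoTS order, where the pelvis is joint 14):

import numpy as np

def pck_person_centric(pred, gt, root_idx=14, thresh_mm=150.0):
    """pred, gt: (J, 3) arrays of 3D joints in millimetres.
    Both poses are made root-relative before scoring."""
    pred_rel = pred - pred[root_idx]   # root-relative prediction
    gt_rel = gt - gt[root_idx]         # root-relative ground truth
    dist = np.linalg.norm(pred_rel - gt_rel, axis=1)
    return float((dist < thresh_mm).mean())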

To evaluate camera-centric (i.e., camera coordinates) 3D multi-person pose estimation:

python eval_mupots_pck_abs.py

After running the above code, the following PCK_abs (camera-centric) value is expected; it matches the number reported in Table 3 of the paper, PCK_abs = 48 (in percent).

...
Seq: 18
Seq: 19
Seq: 20
PCK_MEAN: 0.48514110933606175
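
PCK_abs uses the same counting rule but skips the root alignment: errors are measured directly in camera coordinates, so the predicted absolute position (including depth) matters. A matching sketch under the same assumptions; see eval_mupots_pck_abs.py for the threshold actually used:

import numpy as np

def pck_camera_centric(pred, gt, thresh_mm=150.0):
    """pred, gt: (J, 3) arrays of 3D joints in camera coordinates (mm).
    No root alignment: absolute position errors are scored directly."""
    dist = np.linalg.norm(pred - gt, axis=1)
    return float((dist < thresh_mm).mean())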

License

The code is released under the MIT license. See LICENSE for details.

Citation

If this work is useful for your research, please cite our paper.

@InProceedings{Cheng_2021_CVPR,
    author    = {Cheng, Yu and Wang, Bo and Yang, Bo and Tan, Robby T.},
    title     = {Monocular 3D Multi-Person Pose Estimation by Integrating Top-Down and Bottom-Up Networks},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2021},
    pages     = {7649-7659}
}
Comments
  • Error in computing PCK_abs

    Hi, congratulations on your work and thanks for releasing your code. I think there is a bug in your evaluation pipeline, or I am missing something. When computing PCK_abs, you first do a rigid alignment of the predicted pose and the ground-truth pose, and then set the predicted root location to the ground-truth root location. You should not do that; instead, directly compute the Euclidean distance for each joint between pred_pose and gt_pose expressed in the camera coordinate system. Am I saying something wrong? Best,

    opened by fabienbaradel 4
  • Question about plotting Figure 1 in the paper

    I see that Figure 1 in the paper shows a ground grid. Could you tell me how you get the rotation relating people in camera coordinates to world coordinates? I would really appreciate your help.

    opened by zenithfang 3
  • Dropbox not working

    Hi, I really appreciate your great work. Could you please share other links to download the pre-trained model? The Dropbox link seems to be unreachable.

    opened by psuu0001 2
  • About the top-down network

    Hi,

    I found your paper very interesting. I just can't wait until the code is released, so ask here.

    The paper says that the TD network estimates all joints for all persons in a bounding box, but the GCN & TCN seem to produce one person per bounding box. How do you group or select the joints for a person in a bounding box before feeding the joint heatmaps to the GCN & TCN? Or do you feed all joint heatmaps to the GCN & TCN? (I don't think that's possible.)

    Also, the BU network uses the concatenation of joint heatmaps and the input frame as input. But how? Is the input channel count 3 (RGB) + 1 (heatmap)? There are several potential problems: the number of people in the input frame changes, which may lead to a variable number of input channels, and joint heatmaps of the same person may overlap. Could you give more details about this?

    Thank you!

    opened by hongsukchoi 2
  • Focal length

    Hi, Kudos for the great work and for releasing the code.

    You mention that you do not use camera parameters; however, in my understanding, if you can compute a ratio between the predicted root and the GT root, then you predict the absolute root, right? Or at least assume some fixed focal length? I could not find an exact focal length value in the code. Is this right?

    Thanks!

    opened by nicolasugrinovic 1
  • model LICENSE?

    Hello! Thank you so much for sharing the code and the model.

    I would like to ask: under what license are the pre-trained models released? Can I use them in a commercial project?

    Your reply is very much needed.

    opened by go-to-here 1
  • How to preprocess to get the pickle files?

    Hi, thanks for sharing your work. I have read the instructions, but can you tell me whether the p2b shape is (frame_length, x, y)? Also, from the link you provide for getting PAFs, I cannot understand how to extract them for Human3.6M, and how should the example pkl value affb = torch.ones(2,16,16) / 16 be interpreted?

    Can you share your pickle-file preprocessing code?

    opened by chunniunai220ml 0
  • 2D Multi Person Pose Estimator

    Thanks for your great work! It gave me lots of inspiration.

    I have some questions regarding your 2D Multi-Person Pose Estimator,

    (1) How did you combine COCO and MuCo for training? Don't they have different keypoint annotation formats? (2) How much AP did you achieve after training?

    Thank you!

    opened by shawnpark07 0
  • Code error

    It seems that the final absolute xyz prediction depends on the xy of the GT, which makes it incomparable with other methods that predict the root xyz and the relative xyz and add them together to obtain the final joint xyz without any information from the GT. https://github.com/3dpose/3D-Multi-Person-Pose/blob/2f95e6e36220a3410b96540c4a13f330b1124a86/eval_mupots_pck_abs.py#L86

    opened by zyc573823770 0
  • PCK_abs threshold

    Hello, First of all, thank you for publishing your code.

    I am having some trouble understanding how absolute coordinates are evaluated on MuPoTS. What threshold is used when computing PCK_abs: is it 150 mm, or the 250 mm used to compute AP_root?

    Thanks in advance for your response and your time.

    opened by amek-aymen 0
  • FileNotFoundError

    There is an error, FileNotFoundError: [Errno 2] No such file or directory: 'mupots/pred_inte/1.pkl', when I run eval_mupots_pck_abs.py. It looks like 'mupots' does not contain 'pred_inte'.

    opened by zhaoshuaitao 2