Deep Dual Consecutive Network for Human Pose Estimation (CVPR2021)

Introduction

This is the official code of Deep Dual Consecutive Network for Human Pose Estimation.

Multi-frame human pose estimation in complicated situations is challenging. Although state-of-the-art human joint detectors have demonstrated remarkable results on static images, their performance falls short when these models are applied to video sequences. Prevalent shortcomings include the failure to handle motion blur, video defocus, or pose occlusions, arising from the inability to capture the temporal dependencies among video frames. On the other hand, directly employing conventional recurrent neural networks incurs empirical difficulties in modeling spatial contexts, especially for dealing with pose occlusions. In this paper, we propose a novel multi-frame human pose estimation framework that leverages abundant temporal cues between video frames to facilitate keypoint detection. Three modular components are designed in our framework: a Pose Temporal Merger encodes keypoint spatiotemporal context to generate effective search scopes, while a Pose Residual Fusion module computes weighted pose residuals in dual directions; these are then processed by our Pose Correction Network to efficiently refine the pose estimates. Our method ranks No. 1 in the Multi-frame Person Pose Estimation Challenge on the large-scale benchmark datasets PoseTrack2017 and PoseTrack2018. We have released our code, hoping to inspire future research.
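
A minimal, hypothetical sketch of how the three modules described above could be composed for a clip of previous, current, and next frames is shown below. Every layer is a placeholder convolution chosen only to make the data flow concrete; this is not the released DCPose implementation.

    # Illustrative sketch only (placeholder layers, not the official DCPose modules).
    import torch
    import torch.nn as nn

    class PoseTemporalMerger(nn.Module):
        """Merges per-frame keypoint heatmaps into a spatiotemporal search scope."""
        def __init__(self, num_joints=17):
            super().__init__()
            self.merge = nn.Conv2d(3 * num_joints, num_joints, kernel_size=3, padding=1)

        def forward(self, h_prev, h_curr, h_next):
            return self.merge(torch.cat([h_prev, h_curr, h_next], dim=1))

    class PoseResidualFusion(nn.Module):
        """Computes weighted pose residuals in dual (forward/backward) directions."""
        def __init__(self, num_joints=17):
            super().__init__()
            self.fuse = nn.Conv2d(2 * num_joints, num_joints, kernel_size=3, padding=1)

        def forward(self, h_prev, h_curr, h_next):
            res_from_prev = h_curr - h_prev
            res_from_next = h_curr - h_next
            return self.fuse(torch.cat([res_from_prev, res_from_next], dim=1))

    class PoseCorrectionNetwork(nn.Module):
        """Refines the current-frame heatmaps from the merged context and residuals."""
        def __init__(self, num_joints=17):
            super().__init__()
            self.refine = nn.Conv2d(2 * num_joints, num_joints, kernel_size=3, padding=1)

        def forward(self, merged, residuals):
            return self.refine(torch.cat([merged, residuals], dim=1))

    # Toy forward pass on random per-frame heatmaps (batch=1, 17 joints, 96x72 maps).
    h_prev, h_curr, h_next = (torch.randn(1, 17, 96, 72) for _ in range(3))
    ptm, prf, pcn = PoseTemporalMerger(), PoseResidualFusion(), PoseCorrectionNetwork()
    refined = pcn(ptm(h_prev, h_curr, h_next), prf(h_prev, h_curr, h_next))
    print(refined.shape)  # torch.Size([1, 17, 96, 72])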

Visual Results

On PoseTrack

Comparison with SOTA method

Experiments

Results on PoseTrack 2017 validation set

| Method | Head | Shoulder | Elbow | Wrist | Hip | Knee | Ankle | Mean |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| PoseFlow | 66.7 | 73.3 | 68.3 | 61.1 | 67.5 | 67.0 | 61.3 | 66.5 |
| JointFlow | - | - | - | - | - | - | - | 69.3 |
| FastPose | 80.0 | 80.3 | 69.5 | 59.1 | 71.4 | 67.5 | 59.4 | 70.3 |
| SimpleBaseline (2018 ECCV) | 81.7 | 83.4 | 80.0 | 72.4 | 75.3 | 74.8 | 67.1 | 76.7 |
| STEmbedding | 83.8 | 81.6 | 77.1 | 70.0 | 77.4 | 74.5 | 70.8 | 77.0 |
| HRNet (2019 CVPR) | 82.1 | 83.6 | 80.4 | 73.3 | 75.5 | 75.3 | 68.5 | 77.3 |
| MDPN | 85.2 | 88.8 | 83.9 | 77.5 | 79.0 | 77.0 | 71.4 | 80.7 |
| PoseWarper (2019 NIPS) | 81.4 | 88.3 | 83.9 | 78.0 | 82.4 | 80.5 | 73.6 | 81.2 |
| DCPose | 88.0 | 88.7 | 84.1 | 78.4 | 83.0 | 81.4 | 74.2 | 82.8 |

Results on PoseTrack 2017 test set (https://posetrack.net/leaderboard.php)

| Method | Head | Shoulder | Elbow | Wrist | Hip | Knee | Ankle | Total |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| PoseFlow | 64.9 | 67.5 | 65.0 | 59.0 | 62.5 | 62.8 | 57.9 | 63.0 |
| JointFlow | - | - | - | 53.1 | - | - | 50.4 | 63.4 |
| KeyTrack | - | - | - | 71.9 | - | - | 65.0 | 74.0 |
| DetTrack | - | - | - | 69.8 | - | - | 65.9 | 74.1 |
| SimpleBaseline | 80.1 | 80.2 | 76.9 | 71.5 | 72.5 | 72.4 | 65.7 | 74.6 |
| HRNet | 80.0 | 80.2 | 76.9 | 72.0 | 73.4 | 72.5 | 67.0 | 74.9 |
| PoseWarper | 79.5 | 84.3 | 80.1 | 75.8 | 77.6 | 76.8 | 70.8 | 77.9 |
| DCPose | 84.3 | 84.9 | 80.5 | 76.1 | 77.9 | 77.1 | 71.2 | 79.2 |

Results on PoseTrack 2018 validation set

| Method | Head | Shoulder | Elbow | Wrist | Hip | Knee | Ankle | Mean |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| AlphaPose | 63.9 | 78.7 | 77.4 | 71.0 | 73.7 | 73.0 | 69.7 | 71.9 |
| MDPN | 75.4 | 81.2 | 79.0 | 74.1 | 72.4 | 73.0 | 69.9 | 75.0 |
| PoseWarper | 79.9 | 86.3 | 82.4 | 77.5 | 79.8 | 78.8 | 73.2 | 79.7 |
| DCPose | 84.0 | 86.6 | 82.7 | 78.0 | 80.4 | 79.3 | 73.8 | 80.9 |

Results on PoseTrack 2018 test set

| Method | Head | Shoulder | Elbow | Wrist | Hip | Knee | Ankle | Mean |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| AlphaPose++ | - | - | - | 66.2 | - | - | 65.0 | 67.6 |
| DetTrack | - | - | - | 69.8 | - | - | 67.1 | 73.5 |
| MDPN | - | - | - | 74.5 | - | - | 69.0 | 76.4 |
| PoseWarper | 78.9 | 84.4 | 80.9 | 76.8 | 75.6 | 77.5 | 71.8 | 78.0 |
| DCPose | 82.8 | 84.0 | 80.8 | 77.2 | 76.1 | 77.6 | 72.3 | 79.0 |

Installation & Quick Start

Check docs/installation.md for instructions on how to build DCPose from source.
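
Many of the issues reported below stem from building the bundled deform_conv CUDA extension, so before running the setup it can help to confirm which PyTorch/CUDA combination you are compiling against. This is only a suggested sanity check, not part of the official instructions:

    # Optional sanity check (not from the official docs): print the PyTorch version,
    # the CUDA version it was built with, and whether a GPU is visible.
    import torch
    print("torch:", torch.__version__)
    print("built with CUDA:", torch.version.cuda)
    print("CUDA available:", torch.cuda.is_available())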

Comments
  • Different video frames, same target?

    I would like to ask: YOLO can only detect a target in a single frame. When defining the clip i(p, c, n), the bounding box is expanded by about 25% and the same target is cropped from frames at different intervals. How do you determine the positions of the same target in different frames, and how is the crop done? (A cropping sketch follows this issue.)

    opened by LiuJiaji1999 7
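
    For readers with the same question, here is a small hypothetical sketch of the cropping described above: the detector box from the key frame is enlarged by roughly 25% and the identical region is cropped from the neighbouring frames. The helper names and sizes are illustrative and are not taken from the DCPose code.

    # Hypothetical illustration: enlarge the key-frame box by ~25% and crop the
    # same region from every frame of the (previous, current, next) clip.
    import numpy as np

    def enlarge_box(x1, y1, x2, y2, img_w, img_h, scale=1.25):
        """Enlarge a box around its center by `scale`, clipped to the image."""
        cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
        w, h = (x2 - x1) * scale, (y2 - y1) * scale
        nx1, ny1 = max(0, cx - w / 2), max(0, cy - h / 2)
        nx2, ny2 = min(img_w, cx + w / 2), min(img_h, cy + h / 2)
        return int(nx1), int(ny1), int(nx2), int(ny2)

    def crop_clip(frames, box):
        """Crop the identical enlarged region from each frame of the clip."""
        x1, y1, x2, y2 = box
        return [f[y1:y2, x1:x2] for f in frames]

    # frames: (previous, current, next) images; box comes from the current frame.
    frames = [np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(3)]
    box = enlarge_box(100, 80, 220, 400, img_w=640, img_h=480)
    prev_crop, curr_crop, next_crop = crop_clip(frames, box)
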
  • cannot import name 'deform_conv_cuda' from partially initialized module 'thirdparty.deform_conv' (most likely due to a circular import)

    Hello, I wanted to try the demo, but running video.py throws this error and I don't know how to fix it. Thanks!

    cannot import name 'deform_conv_cuda' from partially initialized module 'thirdparty.deform_conv' (most likely due to a circular import)

    Error location: File "xxx / DCPose/thirdparty/deform_conv/functions/deform_conv.py", line 5, in from .. import deform_conv_cuda

    opened by DWCTOD 6
  • How can I run the code on my own video?

    cd demo/                   
    mkdir input/
    # Put your video in the input directory
    python video.py
    

    The steps above didn't work for me. Could it be that the pretrained model (pretrained_coco_model) is not suitable? Thanks a lot!

    opened by qhdqhd 5
  • Can this run on Windows?

    I tried to run the code on Windows, but I kept getting a VC toolkit error, and the bug remained after trying the referenced solution. How can I solve it?

    build\lib.win-amd64-3.6\deform_conv_cuda.cp36-win_amd64.pyd : fatal error LNK1120: 8 unresolved externals
    error: command 'C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30037\bin\HostX86\x64\link.exe' failed with exit status 1120

    opened by LiuJiaji1999 4
  • Questions about the performance

    Thanks for your excellent work!

    I ran the code successfully and, using your pretrained model, obtained 82.8 on the PoseTrack17 validation set. However, when I train a new model with the provided code and config files, the result is only 81.6.

    Are there any other training settings I should use, or are the provided training config files not the updated version?

    opened by zhangrj91 4
  • Questions about additional data

    Thank you for your work!

    I noticed that the result of your model posted on the PoseTrack17 leaderboard uses additional COCO data.

    Could you give more details on how the additional data is used?

    Thanks!

    opened by zhangrj91 3
  • Performance questions

    Thank you for your good research, and thank you for releasing the code so quickly. I tried to run your code in the same environment, but with a single GPU the batch size is halved, and I got the following results (config line 4 edited to GPUS: (1,); my environment has 0: 3080, 1: 2080 Ti).

    PoseTrack 2017 val

    | Model | Head | Shoulder | Elbow | Wrist | Hip | Knee | Ankle | Mean |
    | :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
    | DcPose_RSN | 86.3907 | 87.718 | 83.2292 | 76.2394 | 80.1681 | 79.1894 | 71.2038 | 80.9779 |

    PoseTrack 2018 val

    | Model | Head | Shoulder | Elbow | Wrist | Hip | Knee | Ankle | Mean |
    | :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
    | DcPose_RSN | 83.8176 | 86.2703 | 81.4414 | 75.3437 | 77.1077 | 77.9727 | 72.2061 | 79.4758 |

    Could the gap from the reported performance be due to the smaller batch size?

    I would also like to replace the deform conv module with torchvision's implementation so the code runs on CUDA 11.1 (see the sketch after this issue): https://pytorch.org/vision/stable/_modules/torchvision/ops/deform_conv.html#deform_conv2d

    I also encountered an error on the PoseTrack 2017 test dataset:

    2021-04-02 14:23:13 [engine.core.function] INFO: test: [3100/5462]      Time 1.659 (1.713)      Data 0.027s (0.083s)    Accuracy 0.000 (0.006)
    2021-04-02 14:25:59 [engine.core.function] INFO: test: [3200/5462]      Time 1.659 (1.712)      Data 0.027s (0.081s)    Accuracy 0.000 (0.006)
    Traceback (most recent call last):
      File "run.py", line 33, in <module>
        main()
      File "run.py", line 29, in main
        runner.launch()
      File "/DCPose/engine/defaults/runner.py", line 63, in launch
        evaluator.exec()
      File "/DCPose/engine/defaults/evaluator.py", line 20, in exec
        self.eval()
      File "/DCPose/engine/defaults/evaluator.py", line 73, in eval
        phase=self.phase)
      File "/DCPose/engine/core/function.py", line 165, in eval
        input_x, input_sup_A, input_sup_B, target_heatmaps, target_heatmaps_weight, meta = next(self.dataloader_iter)
      File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 345, in __next__
        data = self._next_data()
      File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 856, in _next_data
        return self._process_data(data)
      File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 881, in _process_data
        data.reraise()
      File "/opt/conda/lib/python3.7/site-packages/torch/_utils.py", line 394, in reraise
        raise self.exc_type(msg)
    AttributeError: Caught AttributeError in DataLoader worker process 0.
    Original Traceback (most recent call last):
      File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
        data = fetcher.fetch(index)
      File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/DCPose/datasets/zoo/posetrack/PoseTrack.py", line 100, in __getitem__
        return self._get_spatiotemporal_window(data_item)
      File "/DCPose/datasets/zoo/posetrack/PoseTrack.py", line 166, in _get_spatiotemporal_window
        self.logger.error(error_msg)
    AttributeError: 'PoseTrack' object has no attribute 'logger'
    
    opened by HoBeom 3
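
    Regarding the torchvision replacement mentioned in this issue, a minimal sketch using torchvision.ops.DeformConv2d with a separately predicted offset map is shown below. The channel sizes are placeholders, and this is not a drop-in patch for the repository's deform_conv extension.

    # Hypothetical sketch: torchvision's deformable convolution instead of the
    # bundled CUDA extension. The offset map must have 2 * kh * kw channels.
    import torch
    from torchvision.ops import DeformConv2d

    in_ch, out_ch, k = 48, 48, 3
    offset_conv = torch.nn.Conv2d(in_ch, 2 * k * k, kernel_size=k, padding=1)
    deform_conv = DeformConv2d(in_ch, out_ch, kernel_size=k, padding=1)

    x = torch.randn(1, in_ch, 96, 72)
    offsets = offset_conv(x)      # shape (1, 18, 96, 72)
    y = deform_conv(x, offsets)   # shape (1, 48, 96, 72)
    print(y.shape)
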
  • /DCPose/thirdparty/deform_conv# python setup.py develop

    ninja: build stopped: subcommand failed.
    Traceback (most recent call last):
      File "/opt/conda/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 1673, in _run_ninja_build
        env=env)
      File "/opt/conda/lib/python3.6/subprocess.py", line 438, in run
        output=stdout, stderr=stderr)
    subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "setup.py", line 42, in <module>
        zip_safe=False)
      File "/opt/conda/lib/python3.6/site-packages/setuptools/__init__.py", line 153, in setup
        return distutils.core.setup(**attrs)
      File "/opt/conda/lib/python3.6/distutils/core.py", line 148, in setup
        dist.run_commands()
      File "/opt/conda/lib/python3.6/distutils/dist.py", line 955, in run_commands
        self.run_command(cmd)
      File "/opt/conda/lib/python3.6/distutils/dist.py", line 974, in run_command
        cmd_obj.run()
      File "/opt/conda/lib/python3.6/site-packages/setuptools/command/develop.py", line 34, in run
        self.install_for_development()
      File "/opt/conda/lib/python3.6/site-packages/setuptools/command/develop.py", line 136, in install_for_development
        self.run_command('build_ext')
      File "/opt/conda/lib/python3.6/distutils/cmd.py", line 313, in run_command
        self.distribution.run_command(command)
      File "/opt/conda/lib/python3.6/distutils/dist.py", line 974, in run_command
        cmd_obj.run()
      File "/opt/conda/lib/python3.6/site-packages/setuptools/command/build_ext.py", line 79, in run
        _build_ext.run(self)
      File "/opt/conda/lib/python3.6/site-packages/Cython/Distutils/old_build_ext.py", line 186, in run
        _build_ext.build_ext.run(self)
      File "/opt/conda/lib/python3.6/distutils/command/build_ext.py", line 339, in run
        self.build_extensions()
      File "/opt/conda/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 708, in build_extensions
        build_ext.build_extensions(self)
      File "/opt/conda/lib/python3.6/site-packages/Cython/Distutils/old_build_ext.py", line 195, in build_extensions
        _build_ext.build_ext.build_extensions(self)
      File "/opt/conda/lib/python3.6/distutils/command/build_ext.py", line 448, in build_extensions
        self._build_extensions_serial()
      File "/opt/conda/lib/python3.6/distutils/command/build_ext.py", line 473, in _build_extensions_serial
        self.build_extension(ext)
      File "/opt/conda/lib/python3.6/site-packages/setuptools/command/build_ext.py", line 196, in build_extension
        _build_ext.build_extension(self, ext)
      File "/opt/conda/lib/python3.6/distutils/command/build_ext.py", line 533, in build_extension
        depends=ext.depends)
      File "/opt/conda/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 538, in unix_wrap_ninja_compile
        with_cuda=with_cuda)
      File "/opt/conda/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 1359, in _write_ninja_file_and_compile_objects
        error_prefix='Error compiling objects for extension')
      File "/opt/conda/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 1683, in _run_ninja_build
        raise RuntimeError(message) from e
    RuntimeError: Error compiling objects for extension

    opened by KangChou 2
  • error in setup.py

    src/deform_conv_cuda.cpp:559:3: error: ‘AT_CHECK’ was not declared in this scope; did you mean ‘DCHECK’?

      559 |   AT_CHECK(input.is_contiguous(), "input tensor has to be contiguous");
          |   ^~~~~~~~
          |   DCHECK
    

    error: command '/usr/bin/gcc' failed with exit code 1

    opened by FinallyKiKi 1
  • Training Time

    Hi, I saw in the paper that you "train our model for a batch size of 32 for 20 epochs with 2 Nvidia GeForce Titan X GPUs". Could you share the specific training time for this setup? Thanks.

    opened by ZYJ-JMF 1
  • Datasets

    Thank you very much for your work. I am in China and cannot download the PoseTrack2017 and PoseTrack2018 datasets. Could you please share them with me? Or can I train directly on the COCO dataset, and if so, do I need to convert it to the PoseTrack format? Looking forward to your reply! Thanks!!

    opened by A7777-gp 1
  • PoseTrack download

    Hello, the PoseTrack official website is down and I'm unable to access the dataset. I can see that many others have faced the same issue. It would be great if you could resolve the website issue or share the data with us directly. Thank you for your time.

    opened by sandhiyaprabhakannuraj 0
  • A question about model output

    Why does the model output the 17 joints of the COCO dataset and then convert them to PoseTrack, instead of directly outputting the 15 joints of the PoseTrack dataset?

    opened by Whj-cv 0
  • Performance

    I trained a new model on PoseTrack17 and tested it on the validation set three times, but I only got 80.9 mAP each time, which is a large gap from the reported result. I used 2 x 2080 GPUs, left the other configs unchanged, and used random seeds 0, 1000, and 8000 for different initializations. Where could the problem be?

    opened by chenrxi 1
  • Error when I try to train from scratch on PoseTrack18

    The error below occurs:

    2022-04-23 06:32:46 [posetimation.zoo.DcPose.dcpose_rsn] ERROR: => please download pre-trained models first!
    Traceback (most recent call last):
      File "run.py", line 33, in <module>
        main()
      File "run.py", line 29, in main
        runner.launch()
      File "/home/awanish/DCPose-main/engine/defaults/runner.py", line 52, in launch
        trainer = DefaultTrainer(self.cfg, self.output_path_dict, PE_Name=self.args.PE_Name)
      File "/home/awanish/DCPose-main/engine/defaults/trainer.py", line 32, in __init__
        self.model = build_model(cfg, phase='train')
      File "/home/awanish/DCPose-main/posetimation/zoo/build.py", line 17, in build_model
        model_instance.init_weights()
      File "/home/awanish/DCPose-main/posetimation/zoo/DcPose/dcpose_rsn.py", line 285, in init_weights
        raise ValueError('{} is not exist!'.format(self.pretrained))
    ValueError: /home/awanish/DCPose-main/DcPose_supp_files/pretrained_models/pretrained_.pth is not exist!

    There is no file named pretrained_.pth in the supplementary files folder.

    opened by kawanish 1