Official implementation of the paper: Generating Smooth Pose Sequences for Diverse Human Motion Prediction

Related tags

Deep Learning, gsps
Overview

Generating Smooth Pose Sequences for Diverse Human Motion Prediction


This is the official implementation of the paper:

Generating Smooth Pose Sequences for Diverse Human Motion Prediction. ICCV 2021.

Wei Mao, Miaomiao Liu, Mathieu Salzmann.

[paper] [talk]

Dependencies

  • Python >= 3.8
  • PyTorch >= 1.8
  • Tensorboard

Tested with PyTorch 1.8.1.
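
A quick, optional way to confirm your environment matches these requirements (a trivial sanity check, not part of the repo):

import sys
import torch

# Check the listed requirements: Python >= 3.8, PyTorch >= 1.8 (tested with 1.8.1).
assert sys.version_info >= (3, 8), "Python >= 3.8 required"
print("PyTorch:", torch.__version__)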

Datasets

  • We follow the data preprocessing steps (DATASETS.md) of the VideoPose3D repo.
  • Given the processed dataset, we further compute the multi-modal future for each motion sequence (see the sketch below). All the data needed can be downloaded from Google Drive; place the datasets in a data folder in the root of this repo.
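
For readers re-implementing the multi-modal-future step on their own data, here is a minimal sketch of one way to build such multi-modal ground truth: futures of other sequences whose observed past ends in a similar pose are treated as additional plausible futures. The function name multimodal_futures, the 0.5 threshold, and the L2 distance on the last observed pose are illustrative assumptions, not the authors' exact procedure; the released preprocessing and data define the actual settings.

import numpy as np

def multimodal_futures(past, future, thresh=0.5):
    # past:   (N, T_p, J*3) observed sub-sequences with flattened joint coordinates
    # future: (N, T_f, J*3) the corresponding ground-truth futures
    # NOTE: threshold and metric are assumptions chosen for illustration only.
    last_pose = past[:, -1]                                               # (N, J*3)
    dist = np.linalg.norm(last_pose[:, None] - last_pose[None], axis=-1)  # (N, N) pairwise distances
    mm_gt = []
    for i in range(len(past)):
        similar = np.where(dist[i] < thresh)[0]   # sequences whose observed past ends similarly
        mm_gt.append(future[similar])             # (K_i, T_f, J*3) set of plausible futures
    return mm_gt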

Training and Evaluation

  • We provide four YAML configs in motion_pred/cfg: [dataset].yml and [dataset]_nf.yml for training the generator and the normalizing flow, respectively. These configs correspond to the pretrained models in results.
  • The training and evaluation commands are listed in the run.sh file.

Citing

If you use our code, please cite our work:

@inproceedings{mao2021generating,
  title={Generating Smooth Pose Sequences for Diverse Human Motion Prediction},
  author={Mao, Wei and Liu, Miaomiao and Salzmann, Mathieu},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={13309--13318},
  year={2021}
}

Acknowledgments

The overall code framework (data loading, training, testing, etc.) is adapted from DLow.

Licence

MIT


Comments
  • About the path "./results_pretrained/" in the evaluation code

    Thanks for your wonderful work and code! I have a question about the path "./results_pretrained/" in the evaluation code.

    In eval.py, line 484, it says:

    cp_path = './results_pretrained/h36m_linNF_pose_prior_float/models/vae_0025.p' if cfg.dataset == 'h36m' else './results_pretrained/humaneva_nf/models/vae_0025.p'

    But the provided code does not contain a "results_pretrained" folder. Do I need to train the models myself and then create my own "results_pretrained" folder to run the evaluation code?

    I would appreciate it very much if you could reply!

    opened by KVBK01 3
  • Compute the multi-modal future for each motion sequence?

    Hi, thank you very much for sharing this project.

    I'm trying to use this code on my own data, which I've formatted in the same way as the H3.6M dataset but with different action labels. Can you provide more details on the following step, so that I can replicate it on my own dataset...

    "Given the processed dataset, we further compute the multi-modal future for each motion sequence"?

    opened by jimmybuffi 0
  • Wrong index in joint_loss?

    In line 20 of joint_loss in train.py, it writes:

    parts_idx = [(np.array(p) * 3).tolist() + (np.array(p) * 3 + 1).tolist() + (np.array(p) * 3 + 2).tolist() for p in parts]

    I think the 48 joint coordinates are ordered as 16 x 3 (joint-major) rather than 3 x 16 (coordinate-major), so parts_idx should be (see the sketch after the comments):

    parts_idx = [np.asarray([[i*3, i*3+1, i*3+2] for i in p]).flatten().tolist() for p in parts]

    opened by Droliven 1
  • The naming of joints

    Hi, I'm using the 17 joints from the https://github.com/facebookresearch/VideoPose3D Human3.6M data, but I see that you only use 16 joints for valid_angle. Can you list the names of those 16 joints so that I can update the loss for my case? Thank you.

    opened by hmchuong 1
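
The joint_loss comment above hinges on how a flattened pose vector is laid out. Below is a minimal standalone sketch contrasting the two index orderings, assuming a 48-dimensional vector that stores 16 joints with 3 coordinates each; the parts split shown is a hypothetical placeholder, not the repo's actual definition.

import numpy as np

# Hypothetical body-part split over joint indices, used only to illustrate the indexing.
parts = [[0, 1, 2], [3, 4, 5]]

# Coordinate-major indexing: all x-indices of a part, then all y-indices, then all z-indices.
parts_idx_coord_major = [(np.array(p) * 3).tolist()
                         + (np.array(p) * 3 + 1).tolist()
                         + (np.array(p) * 3 + 2).tolist() for p in parts]

# Joint-major indexing: (x, y, z) of the first joint, then (x, y, z) of the next, and so on.
parts_idx_joint_major = [np.asarray([[i * 3, i * 3 + 1, i * 3 + 2] for i in p]).flatten().tolist()
                         for p in parts]

# Both expressions select the same set of coordinates per part, only in a different order,
# so a permutation-invariant loss over the selected slice is unaffected; the order matters
# only if the selected block is later reshaped or compared position-by-position.
print(sorted(parts_idx_coord_major[0]) == sorted(parts_idx_joint_major[0]))  # True
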
Owner
Wei Mao