TCMR: Beyond Static Features for Temporally Consistent 3D Human Pose and Shape from a Video
Qualitative result | Paper teaser video
---|---
Introduction
This repository is the official PyTorch implementation of Beyond Static Features for Temporally Consistent 3D Human Pose and Shape from a Video. The base code is largely borrowed from VIBE. Find more qualitative results here.
Installation
TCMR is tested on Ubuntu 16.04 with PyTorch 1.4 and Python 3.7.10. You may need sudo privileges for the installation.
```
source scripts/install_pip.sh
```
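After installation, a quick sanity check can catch environment problems early. This is a minimal sketch that only assumes the script installed PyTorch as stated above:

```python
# Minimal post-install sanity check (assumes the script installed PyTorch).
import torch

print("PyTorch version:", torch.__version__)   # expected: 1.4.x
print("CUDA available:", torch.cuda.is_available())
```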
Quick demo
- Download the pre-trained demo TCMR weights and required data with the command below, and download the SMPL layers from here (male & female) and here (neutral). Put the SMPL layers (pkl files) under `${ROOT}/data/base_data/`.

```
source scripts/get_base_data.sh
```
- Run the demo with options (e.g., render on a plain background). See more option details at the bottom of `demo.py`.
- A video overlaid with the rendered meshes will be saved in `${ROOT}/output/demo_output/`.
```
python demo.py --vid_file demo.mp4 --gpu 0
```
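To confirm the demo actually produced a rendered video, a hedged sketch like the following can help. The exact output filename is chosen by `demo.py`, so this simply globs the output directory (run from `${ROOT}`; requires `opencv-python`):

```python
# Hedged sketch: confirm the demo produced a rendered video.
# The exact output filename is set by demo.py, so we just glob the directory.
from pathlib import Path
import cv2

out_dir = Path("output/demo_output")
videos = sorted(out_dir.glob("*.mp4"))
assert videos, f"no rendered video found in {out_dir}"

cap = cv2.VideoCapture(str(videos[-1]))
n_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
cap.release()
print(f"{videos[-1].name}: {n_frames} frames rendered")
```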
Results
Here we report the performance of TCMR. See our paper for more details.
Running TCMR
Download the pre-processed data (except the InstaVariety dataset) from here. You may also download the datasets from their original sources and pre-process them yourself; refer to this. Put the SMPL layers (pkl files) under `${ROOT}/data/base_data/`.
The data directory structure should follow the hierarchy below.

```
${ROOT}
|-- data
|   |-- base_data
|   |-- preprocessed_data
|   |-- pretrained_models
```
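Before running evaluation or training, it can be worth verifying that this hierarchy is in place. A small sketch, using only the directory names shown in the tree above (file contents are not checked):

```python
# Sketch: verify the expected data hierarchy before running TCMR.
# Directory names are taken from the tree above; file contents are not checked.
from pathlib import Path

ROOT = Path(".")  # repository root
for sub in ("base_data", "preprocessed_data", "pretrained_models"):
    d = ROOT / "data" / sub
    status = "ok" if d.is_dir() else "MISSING"
    print(f"{d}: {status}")
```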
Evaluation
- Download the pre-trained TCMR weights from here.
- Run the evaluation code with a corresponding config file to reproduce the performance reported in the tables of our paper; a batch-run sketch over all three datasets follows this list.
```
# dataset: 3dpw, mpii3d, h36m
python evaluate.py --dataset 3dpw --cfg ./configs/repr_table4_3dpw_model.yaml --gpu 0
```
- You may test options such as average filtering and rendering. See the bottom lines of `${ROOT}/lib/core/config.py`.
- We checked the rendering results of TCMR on the 3DPW validation and test sets.
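To evaluate on all three datasets in one go, a hedged batch-run sketch is below. Only the `--dataset`, `--cfg`, and `--gpu` flags documented above are used; the `mpii3d` and `h36m` config filenames are assumptions based on the `repr_table4_*` naming pattern and may need adjusting to match your checkout:

```python
# Hedged batch-evaluation sketch. Only --dataset/--cfg/--gpu are documented
# above; the mpii3d/h36m config filenames below are ASSUMED to follow the
# same repr_table4_* naming pattern and may need adjusting to your checkout.
import subprocess

configs = {
    "3dpw":   "./configs/repr_table4_3dpw_model.yaml",
    "mpii3d": "./configs/repr_table4_mpii3d_model.yaml",  # assumed name
    "h36m":   "./configs/repr_table4_h36m_model.yaml",    # assumed name
}
for dataset, cfg in configs.items():
    subprocess.run(
        ["python", "evaluate.py", "--dataset", dataset, "--cfg", cfg, "--gpu", "0"],
        check=True,
    )
```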
Reproduction (Training)
- Run the training code with a corresponding config file to reproduce the performance in the tables of our paper.
```
# training outputs are saved in `experiments` directory
# mkdir experiments
python train.py --cfg ./configs/repr_table4_3dpw_model.yaml --gpu 0
```
- Evaluate the trained TCMR (either `checkpoint.pth.tar` or `model_best.pth.tar`) on a target dataset; a checkpoint-inspection sketch follows this list.
- You may test the motion discriminator introduced in VIBE by uncommenting the code marked with `exclude motion discriminator` notations.
- We have not released the NeuralAnnot SMPL annotations of Human3.6M used in our paper yet, so the performance in Table 6 may differ slightly from the paper.
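Before re-running evaluation, it can help to inspect what a saved checkpoint contains. A sketch, assuming the usual PyTorch `.pth.tar` layout (a dict of tensors and sub-dicts); the exact keys and save path written by `train.py` may differ:

```python
# Sketch: inspect a trained checkpoint before evaluating it. Assumes the
# usual PyTorch .pth.tar layout (a dict of tensors / sub-dicts); the exact
# keys and save path written by train.py may differ.
import torch

ckpt = torch.load("experiments/checkpoint.pth.tar", map_location="cpu")
if isinstance(ckpt, dict):
    for key, val in ckpt.items():
        size = len(val) if hasattr(val, "__len__") else val
        print(f"{key}: {type(val).__name__} ({size})")
```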
Reference
```
@InProceedings{choi2020beyond,
  title={Beyond Static Features for Temporally Consistent 3D Human Pose and Shape from a Video},
  author={Choi, Hongsuk and Moon, Gyeongsik and Lee, Kyoung Mu},
  booktitle={Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2021}
}
```