# Locally Aware Piecewise Transformation Fields for 3D Human Mesh Registration
This repository contains the implementation of our paper *Locally Aware Piecewise Transformation Fields for 3D Human Mesh Registration*. The code is largely based on Occupancy Networks - Learning 3D Reconstruction in Function Space.
You can find detailed usage instructions for training your own models and using pretrained models below.
If you find our code useful, please consider citing:
```
@InProceedings{PTF:CVPR:2021,
  author = {Shaofei Wang and Andreas Geiger and Siyu Tang},
  title = {Locally Aware Piecewise Transformation Fields for 3D Human Mesh Registration},
  booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
  year = {2021}
}
```
## Installation
This repository has been tested on the following platforms:
- Python 3.7, PyTorch 1.6 with CUDA 10.2 and cuDNN 7.6.5, Ubuntu 20.04
- Python 3.7, PyTorch 1.6 with CUDA 10.1 and cuDNN 7.6.4, CentOS 7.9.2009
First, make sure that you have all dependencies in place. The simplest way to do so is to use anaconda.
You can create an anaconda environment called `PTF` using
```
conda create -n PTF python=3.7
conda activate PTF
```
Second, install PyTorch 1.6 via the official PyTorch website.
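For reference, PyTorch's "previous versions" page lists the following command for PyTorch 1.6 with CUDA 10.2 (adjust the CUDA toolkit version to your setup):
```
conda install pytorch==1.6.0 torchvision==0.7.0 cudatoolkit=10.2 -c pytorch
```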
Third, install dependencies via
```
pip install -r requirements.txt
```
Fourth, manually install pytorch-scatter.
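For example (a sketch; the 2.0.x series of torch-scatter is the one compatible with PyTorch 1.6, and this source build requires a local CUDA toolkit, so consult the pytorch-scatter README for prebuilt wheels matching your exact PyTorch/CUDA combination):
```
pip install torch-scatter==2.0.5
```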
Lastly, compile the extension modules. You can do this via
```
python setup.py build_ext --inplace
```
(Optional) If you want to use the registration code under `smpl_registration/`, you need to install kaolin. Download the code from the kaolin repository, check out commit e7e513173bd4159ae45be6b3e156a3ad156a3eb9, and install it according to the instructions.
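A sketch of that process (assuming the NVIDIAGameWorks/kaolin repository and a `setup.py`-based install, per kaolin's instructions at that time):
```
git clone https://github.com/NVIDIAGameWorks/kaolin.git
cd kaolin
git checkout e7e513173bd4159ae45be6b3e156a3ad156a3eb9
python setup.py install
cd ..
```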
(Optional) If you want to train/evaluate single-view models (which correspond to the configurations in `configs/cape_sv`), you need to install OpenDR to render depth images. OpenDR requires OSMesa; on Ubuntu, you can install it with:
```
sudo apt-get install libglu1-mesa-dev freeglut3-dev mesa-common-dev libosmesa6-dev
```
For installing OSMesa on CentOS 7, please check this related issue. After installing OSMesa, install OpenDR via:
```
pip install opendr
```
## Build the dataset
To prepare the dataset for training/evaluation, you have to first download the CAPE dataset from the CAPE website.
- Download SMPL v1.0, clean up the chumpy objects inside the models using this code, rename the files, and extract them to `./body_models/smpl/` (example copy commands below). Eventually, the `./body_models` folder should have the following structure:
```
body_models
 └-- smpl
     ├-- male
     |   └-- model.pkl
     └-- female
         └-- model.pkl
```
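For example, to put the cleaned-up models in place (a sketch; the file names below assume the stock SMPL v1.0 release and are not part of this repository, so adjust them if yours differ):
```
# create the target directories and copy the cleaned-up SMPL models
mkdir -p body_models/smpl/male body_models/smpl/female
cp basicmodel_m_lbs_10_207_0_v1.0.0.pkl body_models/smpl/male/model.pkl
cp basicModel_f_lbs_10_207_0_v1.0.0.pkl body_models/smpl/female/model.pkl
```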
Besides the SMPL models, you will also need to download all the .pkl files from the IP-Net repository and put them under `./body_models/misc/`. Finally, run the following script to extract the necessary SMPL parameters used in our code:
```
python extract_smpl_parameters.py
```
The extracted SMPL parameters will be saved to `./body_models/misc/`.
- Extract the CAPE dataset to an arbitrary path, denoted as `${CAPE_ROOT}`. The extracted dataset should have the following structure:
```
${CAPE_ROOT}
 ├-- 00032
 ├-- 00096
 |   ...
 ├-- 03394
 └-- cape_release
```
- Create a `data` directory under the project directory.
- Modify the parameters in `preprocess/build_dataset.sh` accordingly (i.e. set `--dataset_path` to `${CAPE_ROOT}`) to extract training/evaluation data.
- Run `preprocess/build_dataset.sh` to preprocess the CAPE dataset; a combined sketch of these steps follows this list.
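Putting the last three steps together (a minimal sketch; edit `preprocess/build_dataset.sh` by hand rather than assuming any particular flag layout inside it):
```
mkdir -p data                     # data directory under the project root
# edit preprocess/build_dataset.sh so that --dataset_path points to ${CAPE_ROOT}
bash preprocess/build_dataset.sh  # extracts training/evaluation data
```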
## Pre-trained models
We provide pre-trained PTF and IP-Net models with two encoder resolutions, that is, 64x3 and 128x3. After downloading them, please put them under the respective directories `./out/cape` or `./out/cape_sv`.
## Generating Meshes
To generate all evaluation meshes using a trained model, use
```
python generate.py configs/cape/{config}.yaml
```
Alternatively, if you want to parallelize the generation on an HPC cluster, use:
```
python generate.py --subject-idx ${SUBJECT_IDX} --sequence-idx ${SEQUENCE_IDX} configs/cape/${config}.yaml
```
to generate meshes for the specified subject/sequence combination. A list of all subject/sequence combinations can be found in `./misc/subject_sequence.txt`.
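For example, on a Slurm cluster one could loop over all combinations (a minimal sketch; the `sbatch` usage and the hypothetical `${NUM_SUBJECTS}`/`${NUM_SEQUENCES}` counts, which should match `./misc/subject_sequence.txt`, are assumptions):
```
# submit one generation job per subject/sequence combination
for SUBJECT_IDX in $(seq 0 $((NUM_SUBJECTS - 1))); do
    for SEQUENCE_IDX in $(seq 0 $((NUM_SEQUENCES - 1))); do
        sbatch --wrap="python generate.py --subject-idx ${SUBJECT_IDX} --sequence-idx ${SEQUENCE_IDX} configs/cape/${config}.yaml"
    done
done
```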
## SMPL/SMPL+D Registration
To register SMPL/SMPL+D models to the generated meshes, use either of the following:
```
python smpl_registration/fit_SMPLD_PTFs.py --num-joints 24 --use-parts --init-pose configs/cape/${config}.yaml  # for PTF
python smpl_registration/fit_SMPLD_PTFs.py --num-joints 14 --use-parts configs/cape/${config}.yaml  # for IP-Net
```
Note that registration is very slow, taking roughly 1-2 minutes per frame. If you have access to an HPC cluster, it is advisable to parallelize over subject/sequence combinations, using the same subject/sequence input arguments as for generating meshes.
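The same pattern as in the generation sketch above applies, e.g. for PTF (again assuming a Slurm cluster and the hypothetical index variables from that sketch):
```
# submit one registration job per subject/sequence combination
sbatch --wrap="python smpl_registration/fit_SMPLD_PTFs.py --num-joints 24 --use-parts --init-pose --subject-idx ${SUBJECT_IDX} --sequence-idx ${SEQUENCE_IDX} configs/cape/${config}.yaml"
```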
## Training
Finally, to train a new network from scratch, run
```
python train.py --num_workers 8 configs/cape/${config}.yaml
```
You can monitor the training process at http://localhost:6006 using tensorboard:
```
tensorboard --logdir ${OUTPUT_DIR}/logs --port 6006
```
where you replace ${OUTPUT_DIR} with the respective output directory.
## License
We employ the MIT License for the PTF code, which covers

- `extract_smpl_parameters.py`
- `generate.py`
- `train.py`
- `setup.py`
- `im2mesh/`
- `preprocess/`
Modules not covered by our license are modified versions from IP-Net (`./smpl_registration`) and SMPL-X (`./human_body_prior`); for these parts, please consult their respective licenses and cite the respective papers.