MetaAvatar: Learning Animatable Clothed Human Models from Few Depth Images
This repository contains the implementation of our paper MetaAvatar: Learning Animatable Clothed Human Models from Few Depth Images.
You can find detailed usage instructions for training your own models and using pretrained models below.
If you find our code useful, please cite:
@InProceedings{MetaAvatar:NeurIPS:2021,
  title = {MetaAvatar: Learning Animatable Clothed Human Models from Few Depth Images},
  author = {Shaofei Wang and Marko Mihajlovic and Qianli Ma and Andreas Geiger and Siyu Tang},
  booktitle = {Advances in Neural Information Processing Systems},
  year = {2021}
}
Installation
This repository has been tested on the following platform:
- Python 3.7, PyTorch 1.7.1 with CUDA 10.2 and cuDNN 7.6.5, Ubuntu 20.04
To clone the repo, run either:
git clone --recursive https://github.com/taconite/MetaAvatar-release.git
or
git clone https://github.com/taconite/MetaAvatar-release.git
git submodule update --init --recursive
First, make sure that you have all dependencies in place. The simplest way to do so is to use anaconda.
You can create an anaconda environment called meta-avatar using
conda env create -f environment.yml
conda activate meta-avatar
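(Optional) You can quickly check that the environment matches the tested versions listed above; this check is not part of the original instructions:
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"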
(Optional) If you want to use the evaluation code under evaluation/, then you need to install kaolin. Download the code from the kaolin repository, check out commit e7e513173bd4159ae45be6b3e156a3ad156a3eb9, and install it according to its instructions.
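A minimal sketch of the kaolin setup, assuming the NVIDIAGameWorks/kaolin repository and a setup.py-based install (the install command may differ for this commit, so follow kaolin's own instructions if in doubt):
git clone https://github.com/NVIDIAGameWorks/kaolin.git
cd kaolin
git checkout e7e513173bd4159ae45be6b3e156a3ad156a3eb9
python setup.py develop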
Build the dataset
To prepare the dataset for training/fine-tuning/evaluation, you have to first download the CAPE dataset from the CAPE website.
- Download SMPL v1.0, clean up the chumpy objects inside the models using this code, rename the files, and extract them to ./body_models/smpl/ (a rename sketch is given at the end of this step). Eventually, the ./body_models folder should have the following structure:
body_models
 └-- smpl
     ├-- male
     |   └-- model.pkl
     └-- female
         └-- model.pkl
(Optional) If you want to use the evaluation code under evaluation/, then you also need to download all the .pkl files from the IP-Net repository and put them under ./body_models/misc/.
Finally, run the following script to extract necessary SMPL parameters used in our code:
python extract_smpl_parameters.py
The extracted SMPL parameters will be saved into ./body_models/misc/.
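For the renaming step above, a minimal sketch assuming the standard SMPL v1.0 file names (basicmodel_m_lbs_10_207_0_v1.0.0.pkl and basicModel_f_lbs_10_207_0_v1.0.0.pkl); adjust the source paths to wherever your chumpy-cleaned models are stored:
mkdir -p body_models/smpl/male body_models/smpl/female
cp /path/to/cleaned/basicmodel_m_lbs_10_207_0_v1.0.0.pkl body_models/smpl/male/model.pkl
cp /path/to/cleaned/basicModel_f_lbs_10_207_0_v1.0.0.pkl body_models/smpl/female/model.pkl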
- Extract the CAPE dataset to an arbitrary path, denoted as ${CAPE_ROOT}. The extracted dataset should have the following structure:
${CAPE_ROOT}
 ├-- 00032
 ├-- 00096
 |   ...
 ├-- 03394
 └-- cape_release
- Create a data directory under the project directory.
- Modify the parameters in preprocess/build_dataset.sh accordingly (i.e. set --dataset_path to ${CAPE_ROOT}) to extract training/fine-tuning/evaluation data.
- Run preprocess/build_dataset.sh to preprocess the CAPE dataset, as shown below.
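For example, after editing the script you can launch preprocessing with:
bash preprocess/build_dataset.sh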
(Optional) If you want to evaluate performance on the interpolation task, you need to process the CAPE data again in order to generate processed data at the full frame rate. Simply comment out the first command and uncomment the second command in preprocess/build_dataset.sh, then run the script again.
Pre-trained models
We provide pre-trained models, including 1) forward/backward skinning networks for full point clouds (stage 0), 2) forward/backward skinning networks for depth point clouds (stage 0), 3) a meta-learned static SDF (stage 1), and 4) a meta-learned hypernetwork (stage 2). After downloading them, please put them in their respective folders under ./out/metaavatar.
Fine-tuning from the pre-trained model
We provide a script to fine-tune subject/cloth-type-specific avatars in batch. Simply run:
bash run_fine_tuning.sh
This will conduct fine-tuning with the default setting (subject 00122 with the shortlong cloth type). You can comment/uncomment/add lines in jobs/splits to modify the data splits.
Training
To train new networks from scratch, run
python train.py --num-workers 8 configs/meta-avatar/${config}.yaml
You can train the two stage 0 models in parallel, while the stage 1 model depends on the stage 0 models and the stage 2 model depends on the stage 1 model.
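A hypothetical training order reflecting these dependencies; the config names below are placeholders, so substitute the actual YAML files under configs/meta-avatar/:
# Stage 0: the two skinning networks can be trained in parallel
python train.py --num-workers 8 configs/meta-avatar/${STAGE0_FULL_CLOUD_CONFIG}.yaml &
python train.py --num-workers 8 configs/meta-avatar/${STAGE0_DEPTH_CLOUD_CONFIG}.yaml &
wait
# Stage 1: meta-learned static SDF (requires the stage 0 models)
python train.py --num-workers 8 configs/meta-avatar/${STAGE1_CONFIG}.yaml
# Stage 2: meta-learned hypernetwork (requires the stage 1 model)
python train.py --num-workers 8 configs/meta-avatar/${STAGE2_CONFIG}.yaml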
You can monitor the training process at http://localhost:6006 using TensorBoard:
tensorboard --logdir ${OUTPUT_DIR}/logs --port 6006
where you replace ${OUTPUT_DIR} with the respective output directory.
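For example, if a model writes its outputs to a directory under ./out/metaavatar (the exact path depends on your config and is only illustrative here):
tensorboard --logdir out/metaavatar/${config}/logs --port 6006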
Evaluation
To evaluate the generated meshes, use the following script:
bash run_evaluation.sh
Again, it will conduct evaluation with the default setting (subject 00122 with the shortlong cloth type). You can comment/uncomment/add lines in jobs/splits to modify the data splits.
License
We employ the MIT License for the MetaAvatar code, which covers:
- extract_smpl_parameters.py
- run_fine_tuning.py
- train.py
- configs
- jobs/
- depth2mesh/
- preprocess/
The SIREN networks are borrowed from the official SIREN repository. Mesh extraction code is borrowed from the DeepSDF repository.
Modules not covered by our license are: