# Learning to Estimate Robust 3D Human Mesh from In-the-Wild Crowded Scenes / 3DCrowdNet

## News

## Introduction

This repo is the official **PyTorch** implementation of **Learning to Estimate Robust 3D Human Mesh from In-the-Wild Crowded Scenes (CVPR 2022)**.

## Installation
We recommend using an Anaconda virtual environment. Install PyTorch >= 1.6.0 and Python >= 3.7.3, then run `sh requirements.sh`. You also need to slightly modify the `torchgeometry` kernel code following here.
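For concreteness, a minimal setup could look like the sketch below. The environment name and the CUDA build are illustrative assumptions, not prescribed by this repo; follow the linked instructions for the exact `torchgeometry` kernel change.

```bash
# Minimal environment sketch (env name and versions are illustrative).
conda create -n 3dcrowdnet python=3.7.3 -y
conda activate 3dcrowdnet

# Install PyTorch >= 1.6.0 with a CUDA build matching your driver.
conda install pytorch=1.6.0 torchvision cudatoolkit=10.1 -c pytorch -y

# Install the remaining dependencies.
sh requirements.sh

# Finally, apply the small torchgeometry kernel change from the linked fix
# (it touches torchgeometry/core/conversions.py; see the link above for details).
```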
## Quick demo

### Preparing
- Download the pre-trained 3DCrowdNet checkpoint from here and place it under `${ROOT}/demo/`.
- Download demo inputs from here and place them under `${ROOT}/demo/input` (just unzip the demo_input.zip).
- Make the `${ROOT}/demo/output` directory.
- Get SMPL layers and VPoser according to this.
- Download `J_regressor_extra.npy` from here and place it under `${ROOT}/data/`.
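Put together, the steps above amount to something like the following sketch; `CHECKPOINT` is a placeholder for the downloaded checkpoint file, and the `unzip` target may need adjusting depending on the archive layout.

```bash
# ROOT is the repository root; CHECKPOINT is the downloaded checkpoint file
# (placeholder name -- substitute the real path).
mkdir -p ${ROOT}/demo/input ${ROOT}/demo/output ${ROOT}/data
mv ${CHECKPOINT} ${ROOT}/demo/
unzip demo_input.zip -d ${ROOT}/demo/input   # adjust -d if the zip already contains input/
mv J_regressor_extra.npy ${ROOT}/data/
```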
### Running

- Run `python demo.py --gpu 0`. You can change the input image with `--img_idx {img number}`.
- A mesh `.obj` file, a rendered mesh image, and the input 2D pose are saved under `${ROOT}/demo/`.
- The demo images and 2D poses are from CrowdPose and HigherHRNet, respectively.
- The depth order is not estimated; you can change it manually.
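A typical session might look like this (running from `${ROOT}/demo/` is an assumption, and `--img_idx 3` is just an example index):

```bash
cd ${ROOT}/demo

# Run the demo on GPU 0 with the default input image.
python demo.py --gpu 0

# Try a different demo input by its index.
python demo.py --gpu 0 --img_idx 3

# The mesh .obj, rendered mesh image, and 2D pose visualization
# are written under ${ROOT}/demo/.
```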
## Results

## Directory

Refer to here.

## Running 3DCrowdNet

First, finish the directory setting. Then refer to here to train and test 3DCrowdNet.
## Reference

```bibtex
@InProceedings{choi2022learning,
  author    = {Choi, Hongsuk and Moon, Gyeongsik and Park, JoonKyu and Lee, Kyoung Mu},
  title     = {Learning to Estimate Robust 3D Human Mesh from In-the-Wild Crowded Scenes},
  booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2022}
}
```