EMOCA: Emotion Driven Monocular Face Capture and Animation
Radek Daněček · Michael J. Black · Timo Bolkart
CVPR 2022
This repository is the official implementation of the CVPR 2022 paper EMOCA: Emotion-Driven Monocular Face Capture and Animation.
Top row: input images. Middle row: coarse shape reconstruction. Bottom row: reconstruction with detailed displacements.
EMOCA takes a single in-the-wild image as input and reconstructs a 3D face with sufficient facial expression detail to convey the emotional state of the input image. EMOCA advances the state of the art in in-the-wild monocular face reconstruction, with an emphasis on accurately capturing emotional content. The official project page is here.
EMOCA project
The training and testing scripts for EMOCA can be found in the gdl_apps/EMOCA subfolder.
Installation
Dependencies
- Clone this repo
Short version
- Run the installation script:
bash install.sh
If this ran without any errors, you now have a functioning conda environment with all the necessary packages to run the demos. If the installation script failed, go through the long version of the installation below to see what went wrong. Certain packages (especially CUDA, PyTorch, and PyTorch3D) can cause issues for some users.
Long version
- Pull the relevant submodules using:
bash pull_submodules.sh
- Set up a conda environment with one of the provided conda files. I recommend using conda-environment_py36_cu11_ubuntu.yml.
You can use mamba to create a conda environment (strongly recommended):
mamba env create python=3.6 --file conda-environment_py36_cu11_ubuntu.yml
You can also use plain conda if you want, but it will be slower:
conda env create python=3.6 --file conda-environment_py36_cu11_ubuntu.yml
Note: the environment might be missing some packages. If you find that a package is missing, just conda/mamba- or pip-install it and please notify me.
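If you are unsure whether a particular package made it into the environment, you can check from Python before installing anything by hand (a minimal sketch; the package names below are just examples):

```python
# Quick check of whether a few packages are importable in the active environment.
import importlib.util

for name in ("torch", "pytorch_lightning"):  # substitute any package you suspect is missing
    status = "found" if importlib.util.find_spec(name) is not None else "MISSING"
    print(f"{name}: {status}")
```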
- Activate the environment:
conda activate work36_cu11
- For some reason Cython glitches in the requirements file, so install it separately:
pip install Cython==0.29.14
- Install gdl using pip. I recommend using the -e option; I have not tested the installation otherwise:
pip install -e .
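To double-check that the editable install took effect, you can import gdl from a Python shell (a minimal sketch, assuming the install above finished without errors):

```python
# Confirm that the editable install of gdl is importable and resolves to this checkout.
import gdl
print(gdl.__file__)  # should point into your local clone of the repository
```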
- Verify that the previous step correctly installed PyTorch3D.
For some people the compilation fails during the requirements install but works afterwards. Try running the following separately:
pip install git+https://github.com/facebookresearch/pytorch3d.git@v0.6.0
The PyTorch3D installation (which is part of the requirements file) can unfortunately be tricky and machine specific. EMOCA was developed with PyTorch3D 0.6.0, and the previous command installs it from source (to ensure compatibility with PyTorch and CUDA). If it fails to compile, you can try to find another way to install PyTorch3D.
Note: EMOCA was developed with PyTorch 1.9.1 and PyTorch3D 0.6.0 running on CUDA toolkit 11.1.1 with cuDNN 8.0.5. If for some reason the installation of these fails on your machine (which can happen), feel free to install these dependencies another way. The most important thing is that the versions of PyTorch and PyTorch3D match. The version of CUDA is probably less important.
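As a quick sanity check that the versions line up on your machine, you can print them from Python (a minimal sketch; the exact values will differ per setup):

```python
# Print the installed PyTorch / PyTorch3D / CUDA versions to confirm they line up.
import torch
import pytorch3d

print("PyTorch:", torch.__version__)           # developed with 1.9.1
print("PyTorch3D:", pytorch3d.__version__)     # developed with 0.6.0
print("CUDA (torch build):", torch.version.cuda)
print("cuDNN:", torch.backends.cudnn.version())
print("CUDA available:", torch.cuda.is_available())
```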
Usage
- Activate the environment:
conda activate work36_cu11
- For running EMOCA examples, go to EMOCA
- For running examples of Emotion Recognition, go to EmotionRecognition
Structure
This repo has two subpackages: gdl and gdl_apps.
GDL
gdl is a library full of research code. Some things are reasonably well organized, some are not. It includes, but is not limited to, the following:
- models - a module with (larger) deep learning modules (PyTorch-based)
- layers - individual deep learning layers
- datasets - base classes and their implementations for the various datasets I had to use at some point; mostly image-based datasets with various forms of ground truth, if any
- utils - various tools
The repo is heavily based on PyTorch and PyTorch Lightning.
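If you want to see the layout for yourself, the submodules described above can be listed directly from Python (a minimal sketch, assuming gdl is installed as described in the Installation section):

```python
# List the top-level submodules of gdl (models, layers, datasets, utils, ...).
import pkgutil
import gdl

for module_info in pkgutil.iter_modules(gdl.__path__):
    print(module_info.name)
```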
GDL_APPS
gdl_apps
contains prototypes that use the GDL library. These can include scripts for training, evaluating, testing, and analyzing models from gdl and/or data for various tasks.
Look for the individual README in each sub-project.
Current projects include EMOCA and EmotionRecognition (see Usage above).
Citation
If you use this work in your publication, please cite the following publications:
@inproceedings{EMOCA:CVPR:2022,
title = {{EMOCA}: {E}motion Driven Monocular Face Capture and Animation},
author = {Danecek, Radek and Black, Michael J. and Bolkart, Timo},
booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
pages = {},
year = {2022}
}
As EMOCA builds on top of DECA and uses parts of DECA as a fixed part of the model, please also cite:
@article{DECA:Siggraph2021,
title={Learning an Animatable Detailed {3D} Face Model from In-The-Wild Images},
author={Feng, Yao and Feng, Haiwen and Black, Michael J. and Bolkart, Timo},
journal = {ACM Transactions on Graphics (ToG), Proc. SIGGRAPH},
volume = {40},
number = {8},
year = {2021},
url = {https://doi.org/10.1145/3450626.3459936}
}
License
This code and model are available for non-commercial scientific research purposes as defined in the LICENSE file. By downloading and using the code and model you agree to the terms of this license.
Acknowledgements
There are many people who deserve credit. These include, but are not limited to: Yao Feng and Haiwen Feng for their original implementation of DECA, and Antoine Toisoul and colleagues for EmoNet.