Official repository accompanying the CVPR 2022 paper EMOCA: Emotion Driven Monocular Face Capture And Animation. EMOCA takes a single image of a face as input and produces a 3D reconstruction. EMOCA sets a new standard for reconstructing highly emotional faces from in-the-wild images.

Overview

EMOCA: Emotion Driven Monocular Face Capture and Animation

Radek Daněček · Michael J. Black · Timo Bolkart

CVPR 2022

This repository is the official implementation of the CVPR 2022 paper EMOCA: Emotion-Driven Monocular Face Capture and Animation.

Top row: input images. Middle row: coarse shape reconstruction. Bottom row: reconstruction with detailed displacements.



EMOCA takes a single in-the-wild image as input and reconstructs a 3D face with sufficient facial expression detail to convey the emotional state of the input image. EMOCA advances the state of the art in in-the-wild monocular face reconstruction, with an emphasis on accurate capture of emotional content. The official project page is here.

EMOCA project

The training and testing scripts for EMOCA can be found in this subfolder:

EMOCA

Installation

Dependencies

  1. Install conda

  2. Install mamba

  3. Clone this repo

Short version

  1. Run the installation script:
bash install.sh

If this ran without any errors, you now have a functioning conda environment with all the necessary packages to run the demos. If you had issues with the installation script, go through the long version of the installation and see what went wrong. Certain packages (especially for CUDA, PyTorch and PyTorch3D) may cause issues for some users.
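To confirm that the script created the environment, you can list your conda environments; you should see work36_cu11 (the environment activated in the steps below) in the output:

conda env list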

Long version

  1. Pull the relevant submodules using:
bash pull_submodules.sh
  2. Set up a conda environment with one of the provided conda files. I recommend using conda-environment_py36_cu11_ubuntu.yml.

You can use mamba to create a conda environment (strongly recommended):

mamba env create python=3.6 --file conda-environment_py36_cu11_ubuntu.yml

but you can also use plain conda if you want (but it will be slower):

conda env create python=3.6 --file conda-environment_py36_cu11_ubuntu.yml

Note: the environment might be missing some packages. If you find that a package is missing, just conda/mamba- or pip-install it and please notify me.

  3. Activate the environment:
conda activate work36_cu11
  4. For some reason Cython is glitching in the requirements file, so install it separately:
pip install Cython==0.29.14
  5. Install gdl using pip. I recommend using the -e option; installation without it has not been tested.
pip install -e .
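As a quick sanity check that the editable install succeeded (a minimal one-liner, assuming the environment from the previous steps is active):

python -c "import gdl; print(gdl.__file__)"

This should print the path to the gdl package inside your checkout, which is what the -e option does: it installs the package in place.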
  6. Verify that the previous step correctly installed Pytorch3D.

For some users the compilation fails during the requirements install but succeeds when run afterwards. Try running the following separately:

pip install git+https://github.com/facebookresearch/pytorch3d@v0.6.0

Pytorch3D installation (which is part of the requirements file) can unfortunately be tricky and machine specific. EMOCA was developed with Pytorch3D 0.6.0, and the previous command installs it from source (to ensure its compatibility with PyTorch and CUDA). If it fails to compile, you can try to find another way to install Pytorch3D.

Note: EMOCA was developed with Pytorch 1.9.1 and Pytorch3D 0.6.0 running on CUDA toolkit 11.1.1 with cuDNN 8.0.5. If for some reason installation of these fails on your machine (which can happen), feel free to install these dependencies another way. The most important thing is that the versions of Pytorch and Pytorch3D match. The version of CUDA is probably less important.
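To check that the installed versions line up (a minimal sanity check, assuming the environment is active):

python -c "import torch, pytorch3d; print(torch.__version__, torch.version.cuda, pytorch3d.__version__)"

This should print versions matching the note above: 1.9.1, 11.1 and 0.6.0, respectively.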

Usage

  1. Activate the environment:
conda activate work36_cu11
  2. For running EMOCA examples, go to EMOCA (example commands below)

  3. For running examples of Emotion Recognition, go to EmotionRecognition
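For example, the image demos can be run with commands like the following. The commands themselves are taken verbatim from the issue reports below; the working directories under gdl_apps are an assumption based on those same reports, so adjust the paths to your checkout:

cd gdl_apps/EMOCA
python demos/test_emoca_on_images.py --input_folder assets/ --output_folder results --model_name EMOCA --save_images True --save_codes True --save_mesh True

and, for emotion recognition:

cd gdl_apps/EmotionRecognition
python demos/test_emotion_recognition_on_images.py --model_type 3dmm --model_name EMOCA-emorec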

Structure

This repo has two subpackages: gdl and gdl_apps.

GDL

gdl is a library full of research code. Some parts are well organized, others less so. It includes, but is not limited to, the following:

  • models is a module with (larger) deep learning modules (pytorch based)
  • layers contains individual deep learning layers
  • datasets contains base classes and their implementations for various datasets I had to use at some point. It's mostly image-based datasets, with various forms of ground truth (GT), if any
  • utils - various tools

The repo is heavily based on PyTorch and Pytorch Lightning.
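For orientation, the gdl layout sketched from the list above (module names as listed; the actual tree contains more):

gdl/
  models/    # (larger) deep learning modules, PyTorch based
  layers/    # individual deep learning layers
  datasets/  # dataset base classes and implementations
  utils/     # various tools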

GDL_APPS

gdl_apps contains prototypes that use the gdl library. These include scripts to train, evaluate, test, and analyze models from gdl and/or data for various tasks.

Look for individual READMEs in each sub-project.

Current projects:

  • EMOCA
  • EmotionRecognition

Citation

If you use this work in your publication, please cite the following publications:

@inproceedings{EMOCA:CVPR:2022,
  title = {{EMOCA}: {E}motion Driven Monocular Face Capture and Animation},
  author = {Danecek, Radek and Black, Michael J. and Bolkart, Timo},
  booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
  pages = {},
  year = {2022}
}

As EMOCA builds on top of DECA and uses parts of DECA as a fixed part of the model, please also cite:

@article{DECA:Siggraph2021,
  title={Learning an Animatable Detailed {3D} Face Model from In-The-Wild Images},
  author={Feng, Yao and Feng, Haiwen and Black, Michael J. and Bolkart, Timo},
  journal = {ACM Transactions on Graphics (ToG), Proc. SIGGRAPH},
  volume = {40}, 
  number = {8}, 
  year = {2021}, 
  url = {https://doi.org/10.1145/3450626.3459936} 
}

License

This code and model are available for non-commercial scientific research purposes as defined in the LICENSE file. By downloading and using the code and model you agree to the terms of this license.

Acknowledgements

There are many people who deserve credit. These include, but are not limited to: Yao Feng and Haiwen Feng for their original implementation of DECA, and Antoine Toisoul and colleagues for EmoNet.

Issues
  • Is the 'Expression Encoder' fine-tuned in the Detail Stage?

    Dear Radek Daněček,

    Good morning, I have a question about the pipeline. I read DECA and EMOCA, but I am confused about the EMOCA Detail Stage. The psi comes from the 'Expression Encoder' in EMOCA, unlike in DECA. So can you please tell me whether the 'Expression Encoder' is fine-tuned in the detail stage in EMOCA?

    opened by kjhgfdsaas 3
  • Broken detailed mesh

    When I run demos/test_emoca_on_images.py --input_folder assets/ --output_folder results --model_name EMOCA --save_images True --save_codes True --save_mesh True, it returns coarse and detailed meshes.

    In the first image you can see the coarse mesh; it looks really good, but I want to go further and get the detailed mesh. As you can see, it's broken in the second image. Currently, I'm trying to fix that problem, but if you could give me insights about a quick solution it would be great!

    opened by animtel 1
  • Using this repo to evaluate DECA as well

    Thank you for sharing your code and model with the community! Is it possible to use this repository (with some arguments, maybe?) to run only DECA, without the emotional improvements?

    opened by filby89 1
  • FLAME on the fly using a webcam

    Hi @radekd91, thank you for your great work! I adjusted the FLAME extraction-from-images code so that it can extract FLAME parameters on the fly. However, it is slow and currently reads only 7 fps. I am trying to improve the speed of detection.

    1. I profiled my code and found that _get_face_detected_torch_image and get_landmarks_from_image are the major contributors to the delay.

    Do you have any suggestions for improving the speed of detection? For example, increasing the number of workers or setting the persistent_workers flag on? Thank you!

    I use the default parameters given in test_flame_extraction_images.py

    opened by Daksitha 0
  • old_size variable usage

    Hi, just a small question: how is the old_size variable initialized? What does this mathematical expression return: old_size = (right - left + bottom - top) / 2 * 1.1

    opened by PRAN20 0
  • Wrong checkpoint paths for emotion recognition

    I tried to run python demos/test_emotion_recognition_on_images.py --model_type 3dmm --model_name EMOCA-emorec but always get an error that the model checkpoint does not exist, independent of which model I choose.

    My working directory is gdl_apps/EmotionRecognition

    For example:

    FileNotFoundError: [Errno 2] No such file or directory: '/is/cluster/work/rdanecek/emoca/finetune_deca/2021_11_09_19-05-01_5101174495546322475_ExpDECA_Affec_clone_NoRing_EmoB_F2_DeSegrend_BlackB_Aug_early/detail/checkpoints/last.ckpt'

    opened by Alpe6825 0
  • About the expression encoder

    Hello, great job. EMOCA has a dedicated expression encoder. I want to know where I can find the structure of this encoder; I can't seem to find it in the paper or the supplementary materials. Thank you for your answers.

    opened by LangR7 1
Implementation of "StrengthNet: Deep Learning-based Emotion Strength Assessment for Emotional Speech Synthesis"

StrengthNet Implementation of "StrengthNet: Deep Learning-based Emotion Strength Assessment for Emotional Speech Synthesis" https://arxiv.org/abs/2110

RuiLiu 57 Jul 7, 2022
"MST++: Multi-stage Spectral-wise Transformer for Efficient Spectral Reconstruction" (CVPRW 2022) & (Winner of NTIRE 2022 Challenge on Spectral Reconstruction from RGB)

MST++: Multi-stage Spectral-wise Transformer for Efficient Spectral Reconstruction (CVPRW 2022) Yuanhao Cai, Jing Lin, Zudi Lin, Haoqian Wang, Yulun Z

Yuanhao Cai 226 Aug 11, 2022
Implementation of CVPR'2022:Reconstructing Surfaces for Sparse Point Clouds with On-Surface Priors

Reconstructing Surfaces for Sparse Point Clouds with On-Surface Priors (CVPR 2022) Personal Web Pages | Paper | Project Page This repository contains

null 128 Aug 1, 2022
[SIGGRAPH 2022 Journal Track] AvatarCLIP: Zero-Shot Text-Driven Generation and Animation of 3D Avatars

AvatarCLIP: Zero-Shot Text-Driven Generation and Animation of 3D Avatars Fangzhou Hong1*  Mingyuan Zhang1*  Liang Pan1  Zhongang Cai1,2,3  Lei Yang2 

Fangzhou Hong 575 Aug 11, 2022
Official Pytorch implementation of "Learning to Estimate Robust 3D Human Mesh from In-the-Wild Crowded Scenes", CVPR 2022

Learning to Estimate Robust 3D Human Mesh from In-the-Wild Crowded Scenes / 3DCrowdNet News ?? 3DCrowdNet achieves the state-of-the-art accuracy on 3D

Hongsuk Choi 90 Aug 2, 2022
Official repository of the AAAI'2022 paper "Contrast and Generation Make BART a Good Dialogue Emotion Recognizer"

CoG-BART Contrast and Generation Make BART a Good Dialogue Emotion Recognizer Quick Start: To run the model on test sets of four datasets, Download th

null 32 Aug 10, 2022
This is the code for the paper "Jinkai Zheng, Xinchen Liu, Wu Liu, Lingxiao He, Chenggang Yan, Tao Mei: Gait Recognition in the Wild with Dense 3D Representations and A Benchmark. (CVPR 2022)"

Gait3D-Benchmark This is the code for the paper "Jinkai Zheng, Xinchen Liu, Wu Liu, Lingxiao He, Chenggang Yan, Tao Mei: Gait Recognition in the Wild

null 68 Aug 11, 2022
Expressive Body Capture: 3D Hands, Face, and Body from a Single Image

Expressive Body Capture: 3D Hands, Face, and Body from a Single Image [Project Page] [Paper] [Supp. Mat.] Table of Contents License Description Fittin

Vassilis Choutas 1.2k Aug 15, 2022
Accurate 3D Face Reconstruction with Weakly-Supervised Learning: From Single Image to Image Set (CVPRW 2019). A PyTorch implementation.

Accurate 3D Face Reconstruction with Weakly-Supervised Learning: From Single Image to Image Set —— PyTorch implementation This is an unofficial offici

Sicheng Xu 658 Aug 11, 2022
[CVPR'21] MonoRUn: Monocular 3D Object Detection by Reconstruction and Uncertainty Propagation

MonoRUn MonoRUn: Monocular 3D Object Detection by Reconstruction and Uncertainty Propagation. CVPR 2021. [paper] Hansheng Chen, Yuyao Huang, Wei Tian*

 同济大学智能汽车研究所综合感知研究组 ( Comprehensive Perception Research Group under Institute of Intelligent Vehicles, School of Automotive Studies, Tongji University) 88 Jul 12, 2022
Code for "NeuralRecon: Real-Time Coherent 3D Reconstruction from Monocular Video", CVPR 2021 oral

NeuralRecon: Real-Time Coherent 3D Reconstruction from Monocular Video Project Page | Paper NeuralRecon: Real-Time Coherent 3D Reconstruction from Mon

ZJU3DV 1.2k Aug 9, 2022
Code for "LASR: Learning Articulated Shape Reconstruction from a Monocular Video". CVPR 2021.

LASR Installation Build with conda conda env create -f lasr.yml conda activate lasr # install softras cd third_party/softras; python setup.py install;

Google 143 Jul 26, 2022
The 7th edition of NTIRE: New Trends in Image Restoration and Enhancement workshop will be held on June 2022 in conjunction with CVPR 2022.

NTIRE 2022 - Image Inpainting Challenge Important dates 2022.02.01: Release of train data (input and output images) and validation data (only input) 2

Andrés Romero 28 Jul 20, 2022
Repository for "Toward Practical Monocular Indoor Depth Estimation" (CVPR 2022)

Toward Practical Monocular Indoor Depth Estimation Cho-Ying Wu, Jialiang Wang, Michael Hall, Ulrich Neumann, Shuochen Su [arXiv] [project site] DistDe

Meta Research 91 Aug 14, 2022
Self-Supervised Monocular 3D Face Reconstruction by Occlusion-Aware Multi-view Geometry Consistency[ECCV 2020]

Self-Supervised Monocular 3D Face Reconstruction by Occlusion-Aware Multi-view Geometry Consistency(ECCV 2020) This is an official python implementati

null 294 Aug 9, 2022
Source code of our BMVC 2021 paper: AniFormer: Data-driven 3D Animation with Transformer

AniFormer This is the PyTorch implementation of our BMVC 2021 paper AniFormer: Data-driven 3D Animation with Transformer. Haoyu Chen, Hao Tang, Nicu S

null 7 Oct 22, 2021
An image base contains 490 images for learning (400 cars and 90 boats), and another 21 images for testingAn image base contains 490 images for learning (400 cars and 90 boats), and another 21 images for testing

SVM Données Une base d’images contient 490 images pour l’apprentissage (400 voitures et 90 bateaux), et encore 21 images pour fait des tests. Prétrait

Achraf Rahouti 3 Nov 30, 2021
Official implementation of "SinIR: Efficient General Image Manipulation with Single Image Reconstruction" (ICML 2021)

SinIR (Official Implementation) Requirements To install requirements: pip install -r requirements.txt We used Python 3.7.4 and f-strings which are in

null 46 Aug 11, 2022