DECA: Detailed Expression Capture and Animation (SIGGRAPH 2021)

Overview


Teaser: input image, aligned reconstruction, and animation with various poses and expressions.

This is the official PyTorch implementation of DECA.

DECA reconstructs a 3D head model with detailed facial geometry from a single input image. The resulting 3D head model can be easily animated. Please refer to the arXiv paper for more details.

The main features:

  • Reconstruction: produces head pose, shape, detailed face geometry, and lighting information from a single image.
  • Animation: animate the face with realistic wrinkle deformations.
  • Robustness: tested on facial images in unconstrained conditions; our method is robust to various poses, illumination conditions, and occlusions.
  • Accuracy: state-of-the-art 3D face shape reconstruction on the NoW Challenge benchmark dataset.

Getting Started

Clone the repo:

git clone https://github.com/YadiraF/DECA
cd DECA

Requirements

  • Python 3.7 (numpy, skimage, scipy, opencv)
  • PyTorch >= 1.6 (pytorch3d)
  • face-alignment (optional, for face detection)
    You can run
    pip install -r requirements.txt
    or set up a conda virtual environment by running
    bash install_conda.sh
    For visualization, we use our own rasterizer, which relies on PyTorch JIT-compiled extensions. If a compilation error occurs, you can install pytorch3d instead and set --rasterizer_type=pytorch3d when running the demos, as in the example below.
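
    For example, to run the reconstruction demo with the pytorch3d rasterizer instead of the JIT-compiled one:

    python demos/demo_reconstruct.py -i TestSamples/examples --rasterizer_type=pytorch3d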

Usage

  1. Prepare data
    a. download the FLAME model: choose FLAME 2020, unzip it, and copy 'generic_model.pkl' into ./data
    b. download the pretrained DECA model and put it into ./data (no unzipping required)
    c. (optional) follow the instructions for the Albedo model to get 'FLAME_albedo_from_BFM.npz' and put it into ./data
    The expected layout of ./data is sketched below.
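
    After these steps, ./data should look roughly like this (a sketch; the DECA model file is deca_model.tar, as echoed in the demo logs):

    data/
      generic_model.pkl           # FLAME 2020 (step a)
      deca_model.tar              # pretrained DECA model (step b)
      FLAME_albedo_from_BFM.npz   # optional albedo model (step c)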

  2. Run demos
    a. reconstruction

    python demos/demo_reconstruct.py -i TestSamples/examples --saveDepth True --saveObj True

    to visualize the predicted 2D landmarks, 3D landmarks (red marks non-visible points), coarse geometry, detailed geometry, and depth.

    You can also generate an obj file (which can be opened with MeshLab) that includes the texture extracted from the input image.

    Please run python demos/demo_reconstruct.py --help for more details.
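
    If you would rather call DECA from your own script, the demo boils down to roughly the following sketch (based on the calls in demos/demo_reconstruct.py; treat the exact argument names as assumptions and check the demo source):

    import os
    import torch
    from decalib.deca import DECA
    from decalib.datasets import datasets
    from decalib.utils.config import cfg as deca_cfg

    device = 'cuda'  # the released code assumes a CUDA-capable GPU

    # Crop and load the test images; 'fan' selects the face-alignment detector.
    testdata = datasets.TestData('TestSamples/examples', iscrop=True, face_detector='fan')
    deca = DECA(config=deca_cfg, device=device)

    savefolder = 'TestSamples/examples/results'
    for i in range(len(testdata)):
        name = testdata[i]['imagename']
        images = testdata[i]['image'].to(device)[None, ...]
        with torch.no_grad():
            codedict = deca.encode(images)           # shape, expression, pose, camera, light codes
            opdict, visdict = deca.decode(codedict)  # meshes, renderings, and visualizations
        # save_obj expects the target folder to exist (cf. the FileNotFoundError comment below)
        os.makedirs(os.path.join(savefolder, name), exist_ok=True)
        deca.save_obj(os.path.join(savefolder, name, name + '.obj'), opdict)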

    b. expression transfer

    python demos/demo_transfer.py

    Given an image, you can reconstruct its 3D face and then animate it by transferring expressions from other images. Opening the detailed mesh obj file in MeshLab, you should see something like this:

    (Thanks to Soubhik for allowing me to use his face ^_^)

    Note that you need to set '--useTex True' to get the full texture.
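
    For example (this assumes the albedo model from step 1c is in ./data):

    python demos/demo_transfer.py --useTex True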

    c. for the teaser gif (reposing and animation)

    python demos/demo_teaser.py 

    More demos coming soon.

Evaluation

DECA (ours) achieves 9% lower mean shape reconstruction error on the NoW Challenge dataset compared to the previous state-of-the-art method.
The cumulative error plot compares our approach with other recent methods (RingNet and Deng et al. have nearly identical performance, so their curves overlap). Here we use the point-to-surface distance as the error metric, following the NoW Challenge.

For more details of the evaluation, please check our arXiv paper.

Training

  1. Prepare Training Data

    a. Download image data
    In DECA, we use VGGFace2, BUPT-Balancedface and VoxCeleb2.

    b. Prepare labels
    Use FAN to predict the 68 2D landmarks for each image, and face_segmentation to get the skin mask (see the landmark sketch after step c).

    c. Modify the dataloader
    Dataloaders for the different datasets are in decalib/datasets; set the correct paths for the prepared images and labels.
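
    As a sketch of the label step above, the 68 2D landmarks can be produced with the same face-alignment API that decalib/datasets/detectors.py wraps (the image and output .npy paths here are placeholders; match them to your dataloader):

    import numpy as np
    import face_alignment
    from skimage import io

    # FAN landmark predictor, instantiated as in decalib/datasets/detectors.py
    fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, flip_input=False)

    image = io.imread('path/to/image.jpg')  # placeholder input path
    preds = fa.get_landmarks(image)         # list of (68, 2) arrays, one per detected face
    if preds is not None:
        np.save('path/to/image_landmarks.npy', preds[0])  # placeholder label path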

  2. Download the trained face recognition model
    We use the model from VGGFace2-pytorch to compute the identity loss: download resnet50_ft and put it into ./data (a sketch of the loss is given below).
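
    Conceptually, the identity loss compares face recognition embeddings of the input image and the rendered image; a minimal sketch (the embed argument stands in for resnet50_ft with its classifier head removed, and the cosine form follows the paper):

    import torch.nn.functional as F

    def identity_loss(embed, real_images, rendered_images):
        # 1 - cosine similarity between identity embeddings (a sketch)
        f_real = F.normalize(embed(real_images), dim=-1)
        f_rend = F.normalize(embed(rendered_images), dim=-1)
        return (1.0 - (f_real * f_rend).sum(dim=-1)).mean()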

  3. Start training

    Train from scratch:

    python main_train.py --cfg configs/release_version/deca_pretrain.yml 
    python main_train.py --cfg configs/release_version/deca_coarse.yml 
    python main_train.py --cfg configs/release_version/deca_detail.yml 

    In the yml files, set the correct paths for 'output_dir' and 'pretrained_modelpath' (see the excerpt below).
    You can also use the released model as the pretrained model and skip the pretraining step.
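
    For reference, the two keys mentioned above look roughly like this in the yml files (paths are placeholders; check the released configs for the full schema and exact nesting):

    # configs/release_version/deca_coarse.yml (excerpt)
    output_dir: './exps/deca_coarse'
    pretrained_modelpath: './data/deca_model.tar'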

Citation

If you find our work useful to your research, please consider citing:

@article{DECA:Siggraph2021,
  title = {Learning an Animatable Detailed {3D} Face Model from In-The-Wild Images},
  author = {Feng, Yao and Feng, Haiwen and Black, Michael J. and Bolkart, Timo},
  journal = {ACM Transactions on Graphics (Proc. SIGGRAPH)},
  volume = {40},
  number = {4},
  year = {2021},
  url = {https://doi.org/10.1145/3450626.3459936}
}

License

This code and model are available for non-commercial scientific research purposes as defined in the LICENSE file. By downloading and using the code and model you agree to the terms in the LICENSE.

Acknowledgements

For functions or scripts that are based on external sources, we acknowledge the origin individually in each file.

We would also like to thank other recent public 3D face reconstruction works that allow us to easily perform quantitative and qualitative comparisons :)
RingNet, Deep3DFaceReconstruction, Nonlinear_Face_3DMM, 3DDFA-v2, extreme_3d_faces, facescape

Comments
  • Can't get the correct detailed mesh and image


    Hi, your work is really impressive! However, after setting everything up properly and running demo_reconstruct.py, I couldn't get a detailed mesh like the one the README shows. I have seen other users' results like: [example image]

    But what I got is like: [my result image]

    Everything seems all right with the code. I tried both the standard rasterization and the pytorch3d rasterization, and I also tried with useTex (I have the required texture npz file) and without it, but in each case I got the same results as shown above. Does anyone know the reason, and could you maybe give me a hint on how to get the correct results? Thank you all; maybe this is a stupid question, but it really is driving me crazy. :(

    opened by QingXIA233 8
  • FileNotFoundError


    I used the command python demos/demo_reconstruct.py -i TestSamples/examples --saveDepth True --saveObj True but encountered a problem.

    fc.weight not available in reconstructed resnet
    fc.bias not available in reconstructed resnet
    fc.weight not available in reconstructed resnet
    fc.bias not available in reconstructed resnet
    creating the FLAME Decoder
    trained model found. load E:\毕业设计\DECA\data\deca_model.tar
    D:\Anaconda3\envs\py37\lib\site-packages\pytorch3d-0.3.0-py3.7-win-amd64.egg\pytorch3d\io\obj_io.py:476: UserWarning: Mtl file does not exist: E:\毕业设计\DECA\data\template.mtl
    D:\Anaconda3\envs\py37\lib\site-packages\torch\nn\functional.py:3121: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
    D:\Anaconda3\envs\py37\lib\site-packages\torch\nn\functional.py:3384: UserWarning: Default grid_sample and affine_grid behavior has changed to align_corners=False since 1.3.0. Please specify align_corners=True if the old behavior is desired. See the documentation of grid_sample for details.
    0%| | 0/6 [00:07<?, ?it/s]
    Traceback (most recent call last):
      File "demos/demo_reconstruct.py", line 103, in <module>
        main(parser.parse_args())
      File "demos/demo_reconstruct.py", line 58, in main
        deca.save_obj(os.path.join(savefolder, name, name + '.obj'), opdict)
      File "E:\毕业设计\DECA\decalib\deca.py", line 254, in save_obj
        normal_map=normal_map)
      File "E:\毕业设计\DECA\decalib\utils\util.py", line 97, in write_obj
        with open(obj_name, 'w') as f:
    FileNotFoundError: [Errno 2] No such file or directory: 'TestSamples/examples/results\examples\IMG_0392_inputs\examples\IMG_0392_inputs.obj'

    opened by hanssssssss 8
  • Is it possible to have a CPU-only version for this?


    Hi @YadiraF, I got the following AssertionError: Torch not compiled with CUDA enabled

    Pasting the traceback for your reference:

    Traceback (most recent call last):
      File "demos/demo_reconstruct.py", line 103, in <module>
        main(parser.parse_args())
      File "demos/demo_reconstruct.py", line 36, in main
        testdata = datasets.TestData(args.inputpath, iscrop=args.iscrop, face_detector=args.detector)
      File "...DECA/decalib/datasets/datasets.py", line 70, in __init__
        self.face_detector = detectors.FAN()
      File ".../DECA/decalib/datasets/detectors.py", line 22, in __init__
        self.model = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, flip_input=False)
      File ".../lib/python3.7/site-packages/face_alignment/api.py", line 75, in __init__
        self.face_detector = face_detector_module.FaceDetector(device=device, verbose=verbose)
      File ".../lib/python3.7/site-packages/face_alignment/detection/sfd/sfd_detector.py", line 30, in __init__
        self.face_detector.to(device)
      File ".../lib/python3.7/site-packages/torch/nn/modules/module.py", line 612, in to
        return self._apply(convert)
      File ".../lib/python3.7/site-packages/torch/nn/modules/module.py", line 359, in _apply
        module._apply(fn)
      File ".../lib/python3.7/site-packages/torch/nn/modules/module.py", line 381, in _apply
        param_applied = fn(param)
      File ".../lib/python3.7/site-packages/torch/nn/modules/module.py", line 610, in convert
        return t.to(device, dtype if t.is_floating_point() else None, non_blocking)
      File ".../lib/python3.7/site-packages/torch/cuda/__init__.py", line 166, in _lazy_init
        raise AssertionError("Torch not compiled with CUDA enabled")
    AssertionError: Torch not compiled with CUDA enabled

    Would love to have your answer soon, thanks!

    opened by AbhayKoushik 7
  • Cannot find any .npy file of the dataset or a generation method


    Thanks for your nice work!

    Could you please provide the list of training data, or tell us how to generate it, e.g. vggface2_val_list_max_normal_100_ring_5_1_serial.npy?

    Thanks a lot!

    opened by littlePrince126 6
  • Question about how to choose 4 images per identity


    I have a question regarding the training process.

    In the paper, it says that you chose 950k images from the VGGFace2 dataset. It also says that for the coarse model you only used 4 images per subject. But as far as I know, the VGGFace2 dataset has around 9k subjects (around 350 images each). That would give 9k * 4 = 36k images, which is far smaller than 950k. Am I misunderstanding something? Do you use the 950k images in the pretraining step? Or are subjects duplicated across sets of 4 images?

    I welcome any response! Thank you

    opened by wearegolden 5
  • compatibility w/ pytorch3d


    I'm using pytorch3d (==0.5.0) as the rasterizer_type because the standard one hits a compile error. However, it seems the latest commit doesn't work properly with pytorch3d.

    Error message:

    Traceback (most recent call last):
      File "demos/demo_reconstruct.py", line 126, in <module>
        main(parser.parse_args())
      File "demos/demo_reconstruct.py", line 52, in main
        opdict, visdict = deca.decode(codedict) #tensor
      File "/media/user/data/Exp/DECA/decalib/deca.py", line 229, in decode
        shape_images, _, grid, alpha_images = self.render.render_shape(verts, trans_verts, h=h, w=w, images=background, return_grid=True)
      File "/media/user/data/Exp/DECA/decalib/utils/renderer.py", line 360, in render_shape
        rendering = self.rasterizer(transformed_vertices, self.faces.expand(batch_size, -1, -1), attributes, h, w)
      File "/home/user/anaconda3/envs/decan/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
        result = self.forward(*input, **kwargs)
    TypeError: forward() takes from 3 to 4 positional arguments but 6 were given

    Environment: cudatoolkit==10.1, numpy==1.20.3, scipy==1.7.1, chumpy==0.70, scikit-image==0.18.3, opencv-python==4.5.4.58, PyYAML==5.1.1, torch==1.6.0 (for compatibility with pytorch3d), torchvision==0.7.0, face-alignment==1.3.5, yacs==0.1.8, ninja==1.10.2.2, pytorch3d==0.5.0 (pytorch3d-nightly), kornia==0.6.1

    (In addition, kornia==0.6.1 officially supports torch>=1.8.1, which clashes with pytorch3d's requirement of torch==1.6.0.)

    opened by jnwnlee 5
  • Parameter difference from Flame original model


    I want to use the output of DECA in the Blender add-on from the FLAME website. DECA gives:

    • 100 shape parameters
    • 50 expression parameters

    But the original FLAME model (used in the Blender add-on) has:

    • 300 shape parameters
    • 100 expression parameters

    Is there a way to get the original FLAME parameters from the DECA parameters? Thanks!

    opened by avartation 5
  • ninja: build stopped: subcommand failed.


    RuntimeError: Error building extension 'standard_rasterize_cuda':
    [1/3] :/usr/local/cuda-11.2/bin/nvcc -DTORCH_EXTENSION_NAME=standard_rasterize_cuda -DTORCH_API_INCLUDE_EXTENSION_H -isystem /home/seemmo/.local/lib/python3.7/site-packages/torch/include -isystem /home/seemmo/.local/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -isystem /home/seemmo/.local/lib/python3.7/site-packages/torch/include/TH -isystem /home/seemmo/.local/lib/python3.7/site-packages/torch/include/THC -isystem :/usr/local/cuda-11.2/include -isystem /usr/include/python3.7m -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_61,code=sm_61 --compiler-options '-fPIC' -std=c++14 -ccbin=$(which gcc-7) -c /home/seemmo/DECA/decalib/utils/rasterizer/standard_rasterize_cuda_kernel.cu -o standard_rasterize_cuda_kernel.cuda.o
    FAILED: standard_rasterize_cuda_kernel.cuda.o
    /bin/sh: 1: :/usr/local/cuda-11.2/bin/nvcc: not found
    [2/3] c++ -MMD -MF standard_rasterize_cuda.o.d -DTORCH_EXTENSION_NAME=standard_rasterize_cuda -DTORCH_API_INCLUDE_EXTENSION_H -isystem /home/seemmo/.local/lib/python3.7/site-packages/torch/include -isystem /home/seemmo/.local/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -isystem /home/seemmo/.local/lib/python3.7/site-packages/torch/include/TH -isystem /home/seemmo/.local/lib/python3.7/site-packages/torch/include/THC -isystem :/usr/local/cuda-11.2/include -isystem /usr/include/python3.7m -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++14 -c /home/seemmo/DECA/decalib/utils/rasterizer/standard_rasterize_cuda.cpp -o standard_rasterize_cuda.o
    FAILED: standard_rasterize_cuda.o
    In file included from /home/seemmo/.local/lib/python3.7/site-packages/torch/include/torch/csrc/Device.h:3:0,
                     from /home/seemmo/.local/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/python.h:8,
                     from /home/seemmo/.local/lib/python3.7/site-packages/torch/include/torch/extension.h:6,
                     from /home/seemmo/.local/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/torch.h:6,
                     from /home/seemmo/DECA/decalib/utils/rasterizer/standard_rasterize_cuda.cpp:1:
    /home/seemmo/.local/lib/python3.7/site-packages/torch/include/torch/csrc/python_headers.h:10:20: fatal error: Python.h: No such file or directory
    compilation terminated.
    ninja: build stopped: subcommand failed.

    Could you help me? I have tried CUDA 10.1 + cuDNN 7.6.5 + torch 1.6.0 + torchvision 0.7.0 on Ubuntu 16.04 and always get this error.

    opened by MollyHoo 4
  • ModuleNotFoundError: No module named 'iopath'


    My system is Ubuntu 20.04 LTS. I only omitted usage step 1.c. I get an error when I try to run "python demos/demo_reconstruct.py -i TestSamples/examples --saveDepth True --saveObj True".

    The message is:

    Traceback (most recent call last):
      File "demos/demo_reconstruct.py", line 25, in <module>
        from decalib.deca import DECA
      File "/home/michael/Downloads/DECA-master/decalib/deca.py", line 27, in <module>
        from .utils.renderer import SRenderY
      File "/home/michael/Downloads/DECA-master/decalib/utils/renderer.py", line 23, in <module>
        from pytorch3d.io import load_obj
      File "/home/michael/anaconda3/envs/DECA/lib/python3.7/site-packages/pytorch3d/io/__init__.py", line 4, in <module>
        from .obj_io import load_obj, load_objs_as_meshes, save_obj
      File "/home/michael/anaconda3/envs/DECA/lib/python3.7/site-packages/pytorch3d/io/obj_io.py", line 12, in <module>
        from pytorch3d.io.mtl_io import load_mtl, make_mesh_texture_atlas
      File "/home/michael/anaconda3/envs/DECA/lib/python3.7/site-packages/pytorch3d/io/mtl_io.py", line 11, in <module>
        from pytorch3d.io.utils import _open_file, _read_image
      File "/home/michael/anaconda3/envs/DECA/lib/python3.7/site-packages/pytorch3d/io/utils.py", line 10, in <module>
        from fvcore.common.file_io import PathManager
      File "/home/michael/anaconda3/envs/DECA/lib/python3.7/site-packages/fvcore/common/file_io.py", line 10, in <module>
        from iopath.common.file_io import (
    ModuleNotFoundError: No module named 'iopath'

    opened by Michaelwhite34 4
  • CUDA Version issue


    I'm trying to run demo on Google Colab, but I get the following error:

    AssertionError: The NVIDIA driver on your system is too old (found version 10010). Please update your GPU driver by downloading and installing a new version from the URL: http://www.nvidia.com/Download/index.aspx Alternatively, go to: https://pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver.

    However, when I check the version with nvidia-smi, it reports CUDA Version: 10.1, which should be fine. Is there any solution?

    opened by cedro3 4
  • Access estimated transformation of mesh


    Hi, thanks for the great work and repository.

    I am looking to access the estimated homogeneous transform that aligns the mesh from an initial frame to a frame aligned with the camera position of an input image, or, conversely, the homogeneous transform that moves the camera pose to align the mesh with the image.

    I suspect that the rotational information I need is available as an axis-angle representation in the first pose entry of the code_dict: code_dict['pose'][..., 0:3]; could you confirm this is correct? I also suspect that the translation I need is available as code_dict['cam'], but I am unsure how to turn this [scale, t_x, t_y] vector into a translation vector in R3.

    Could you provide any hints or pseudocode on how to generate a homogeneous transformation matrix from the available pose estimation?

    opened by mjvanderboon 3
  • error after downloading deca_model.tar


    I followed the Colab example. I keep getting this error: RuntimeError: storage has wrong size: expected 4379311892713323070 got 589824

    when I run: !python demos/demo_reconstruct.py -i $input_folder -s $output_folder --saveDepth True --saveObj True

    Has anyone faced this issue? I can't run the model because of this. Any help is appreciated.

    opened by ahmedfadhil 0
  • AttributeError: extract_tex in demo_transfer.py (--useTex True) & demo_teaser.py


    I have created and placed the FLAME_albedo_from_BFM.npz file in the ./data directory. However, when trying to run the last two demos, I get the following error: AttributeError: extract_tex.

    It seems as if there is a bug in the deca_cfg object passed to the DECA model constructor. Also, in demo_teaser.py the --useTex argument is missing, which should definitely be used because we need the FLAME albedo model.

    opened by GiannisPikoulis 3
  • How to generate "data/head_template.obj" with specific vertex UV coordinates?

    The number of FLAME's vertices is 5023; however, the number of aux.verts_uvs of "head_template.obj" is 5118. What is the exact correspondence between vertex indices and verts_uvs indices?

    I would appreciate it if anyone could answer the above questions!

    opened by zydmu123 0
  • How to use texture from personal image?


    I am using the --useTex argument for the full texture, but somehow it is using the mean_texture.jpg provided in the data folder. How can I use the texture from the original image? [attached image] Note that the hair color is brown, whereas in the original image it is black. Also, in your sample image the ear color fits the ears of the mesh perfectly, but for my image there is something wrong with the colors. I am a beginner in this, so if anybody has an idea how to fix it, it would be appreciated. @vabrevaya @vchoutas

    opened by RAJA-PARIKSHAT 0
  • What should be the steps for DECA-HR training


    Hey @YuliangXiu @YadiraF @HavenFeng, can you please explain what changes should be made for DECA-HR training? Or how to get an albedo map with higher resolution?

    opened by ujjawalcse 0