Python tools for 3D face: 3DMM, mesh processing (transform, camera, light, render), 3D face representations.


face3d: Python tools for processing 3D face

Introduction

This project implements some basic functions related to 3D faces.

You can use it to process mesh data, generate 3D faces from a morphable model, reconstruct a 3D face from a single image and key points, and render faces with different lightings (for more, please see the examples).

In the beginning, I wrote this project to learn 3D face reconstruction and for personal research use, so all the code was written in Python (NumPy). However, some functions (e.g. rasterization) cannot be vectorized and are too slow in pure Python, so I chose to write these core parts in C++ (without any other big libraries such as OpenCV or Eigen) and compile them with Cython for use from Python. The final version is therefore lightweight and fast.

In addition, the NumPy version is retained so that beginners can focus on the algorithms themselves in Python, and researchers can modify and verify their ideas quickly. I also try my best to add references/formulas in each function, so that you can learn the basic knowledge and understand the code.
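
If you want to switch between the two implementations, the import swap is straightforward (a minimal sketch; it assumes the two modules expose the same interface, as the Structure section below describes):

from face3d import mesh                    # Cython/C++ accelerated version
# from face3d import mesh_numpy as mesh   # pure NumPy version: slower, but easy to read and modify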

For more information and research related to 3D faces, please see 3D face papers.

Enjoy it ^_^

Structure

# Since triangle mesh is the most popular representation of 3D face, 
# the main part is mesh processing.
mesh/             # written in python and c++
|  cython/               # c++ files, use cython to compile 
|  io.py                 # read & write obj
|  vis.py                # plot mesh
|  transform.py          # transform mesh & estimate matrix
|  light.py              # add light & estimate light(to do)
|  render.py             # obj to image using rasterization render

mesh_numpy/      # the same as mesh/, with each part written in numpy
                 # slow but easy to learn and modify

# 3DMM is one of the most popular methods to generate & reconstruct 3D face.
morphable_model/
|  morphable_model.py    # morphable model class: generate & fit
|  fit.py                # estimate shape&expression parameters. 3dmm fitting.
|  load.py               # load 3dmm data
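
To make the layout above concrete, here is a minimal rendering sketch modeled on the example pipeline (1_pipeline.py). mesh.render.render_colors and mesh.vis.plot_mesh appear elsewhere on this page; the transform helpers (angle2matrix, similarity_transform, to_image) and the fields of Data/example1.mat are assumptions based on the examples, so check mesh/transform.py for the exact names and signatures.

import numpy as np
import scipy.io as sio
from face3d import mesh   # or face3d.mesh_numpy

# load an example mesh: vertices (nver, 3), per-vertex colors (nver, 3), triangles (ntri, 3)
C = sio.loadmat('Data/example1.mat')
vertices, colors, triangles = C['vertices'], C['colors'], C['triangles']
colors = colors / np.max(colors)

# world-space similarity transform: scale, rotate (pitch/yaw/roll in degrees), translate
s = 180 / (np.max(vertices[:, 1]) - np.min(vertices[:, 1]))
R = mesh.transform.angle2matrix([0, 30, 0])                                    # assumed helper
transformed = mesh.transform.similarity_transform(vertices, s, R, [0, 0, 0])   # assumed helper

# orthographic projection (fixed camera), then map to image coordinates
h = w = 256
image_vertices = mesh.transform.to_image(transformed, h, w)                    # assumed helper

# rasterize per-vertex colors into an h x w x 3 image
rendering = mesh.render.render_colors(image_vertices, triangles, colors, h, w)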

Examples:

cd ./examples

  • 3dmm. python 2_3dmm.py

    left: a random face generated by the 3dmm

    right: fitting a face to 68 key points with the 3dmm (a code sketch of this flow follows this list)

  • transform. python 3_transform.py
    left:

    fix the camera position & use orthographic projection (often used in reconstruction),

    then transform the face object: scale, change pitch angle, change yaw angle, change roll angle

    right:

    fix the object position & use perspective projection with fovy=30 (simulating real views),

    then move and rotate the camera: from far to near, down & up, left & right, rotate camera

  • light. python 4_light.py

    single point light: from left to right, from up to down, from near to far

  • image map. python 6_image_map.py

    render different attributes in image pixels: depth, pncc, uv coordinates

  • uv map. python 7_uv_map.py

    render different attributes in uv coordinates: colors (texture map), position (2d facial image & corresponding position map)
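
A rough sketch of the 3dmm example flow (generate a face, then fit it to 68 key points). The class and method names here (MorphabelModel, generate_vertices, fit, kpt_ind, n_shape_para, n_exp_para) follow the example scripts but are assumptions on this page; see morphable_model/morphabel_model.py and fit.py for the actual API.

import numpy as np
from face3d.morphable_model import MorphabelModel   # assumed import path

bfm = MorphabelModel('Data/BFM/Out/BFM.mat')         # prepared as described in Data/BFM/readme.md

# generate: sample shape & expression parameters and build the mesh
sp = np.random.rand(bfm.n_shape_para, 1) * 1e4       # parameter scale is an assumption
ep = np.random.rand(bfm.n_exp_para, 1)
vertices = bfm.generate_vertices(sp, ep)             # assumed method

# fit: recover shape & expression parameters from 68 2D key points;
# as a toy check, reuse the x, y coordinates of the generated key points as "landmarks"
X_ind = bfm.kpt_ind                                  # assumed: indices of the 68 model key points
x = vertices[X_ind, :2]
fitted_sp, fitted_ep, s, angles, t = bfm.fit(x, X_ind, max_iter=3)   # assumed signature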

Getting Started

Prerequisite

  • Python 2 or Python 3

  • Python packages:

    • numpy
    • skimage (for reading & writing images)
    • scipy (for loading .mat files)
    • matplotlib (for visualization)
    • Cython (for compiling the c++ files)

Usage

  1. Clone the repository

    git clone https://github.com/YadiraF/face3d
    cd face3d
  2. Compile the c++ files to .so for python use (skip this if you only use the numpy version; a sketch of the build script follows this list)

    cd face3d/mesh/cython
    python setup.py build_ext -i 
  3. Prepare BFM Data (ignore if you don't use 3dmm)

    see Data/BFM/readme.md

  4. Run examples

    (the examples use the cython version; you can change mesh to mesh_numpy to use the numpy version)

    cd examples
    python 1_pipeline.py 

    For beginners who want to continue research on 3D faces, I strongly recommend running the examples in order first, then reading the code in mesh_numpy and the comments at the beginning of each file. Hope this helps!

    Moreover, I am new to computer graphics, so it would be greatly appreciated if you could point out any of my incorrect statements. Thanks!
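
For reference, step 2 is a standard Cython extension build. The sketch below is illustrative only; the module and source names (mesh_core_cython, mesh_core.cpp) are taken from the compile logs in the comments further down, and the repository's real setup.py may differ.

# illustrative setup.py sketch (not the repository's actual script)
from setuptools import setup, Extension
from Cython.Build import cythonize
import numpy as np

setup(
    ext_modules=cythonize(Extension(
        'mesh_core_cython',
        sources=['mesh_core_cython.pyx', 'mesh_core.cpp'],
        language='c++',
        include_dirs=[np.get_include()],
    )),
)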

Changelog

  • 2018/10/08 change structure. add comments. add introduction. add paper collections.
  • 2018/07/15 first release
Comments
  • 3dmm fitting result does not look like the input image

    The input image with landmarks and the fitting result are in the attached screenshots (omitted here), and the result is not good.

    Is this caused by fitting the 3dmm model with 2d landmarks, or by BFM not working well for Asian faces?

    opened by zengxianyu 8
  • 3DFFA does not contain "Modelplus_nose_hole.mat" and "Modelplus_parallel.mat"

    Hi Yao, really thanks for releasing the training code. I cannot find "Modelplus_nose_hole.mat" and "Modelplus_parallel.mat" in the 3DFFA source code. Where can I get these two files?

    opened by marvin521 6
  • linux cython compile error

    I get this message when I run python setup.py build_ext -i, can someone help me out? I can't seem to find a fix for this.

    running build_ext
    skipping 'mesh_core_cython.cpp' Cython extension (up-to-date)
    building 'mesh_core_cython' extension
    gcc -pthread -B /home/tychokoster/anaconda3/envs/3dmm_cnn/compiler_compat -Wl,--sysroot=/ -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/tychokoster/anaconda3/envs/3dmm_cnn/lib/python2.7/site-packages/numpy/core/include -I/home/tychokoster/anaconda3/envs/3dmm_cnn/include/python2.7 -c mesh_core_cython.cpp -o build/temp.linux-x86_64-2.7/mesh_core_cython.o
    cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
    /home/tychokoster/anaconda3/envs/3dmm_cnn/lib/python2.7/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:15:2: warning: #warning "Using deprecated NumPy API, disable it by " "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp]
    gcc -pthread -B /home/tychokoster/anaconda3/envs/3dmm_cnn/compiler_compat -Wl,--sysroot=/ -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/tychokoster/anaconda3/envs/3dmm_cnn/lib/python2.7/site-packages/numpy/core/include -I/home/tychokoster/anaconda3/envs/3dmm_cnn/include/python2.7 -c mesh_core.cpp -o build/temp.linux-x86_64-2.7/mesh_core.o
    cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
    mesh_core.cpp: In function ‘void _write_obj_with_colors_texture(std::string, std::string, float*, int*, float*, float*, int, int, int)’:
    mesh_core.cpp:349:31: error: no matching function for call to ‘std::basic_ofstream<char>::basic_ofstream(std::string&)’
        ofstream obj_file(filename);
    /usr/include/c++/4.9/fstream:643:7: note: candidate: basic_ofstream(const char* __s, std::ios_base::openmode); no known conversion for argument 1 from ‘std::string’ to ‘const char*’
    /usr/include/c++/4.9/fstream:628:7: note: candidate: basic_ofstream(); candidate expects 0 arguments, 1 provided
    /usr/include/c++/4.9/fstream:602:11: note: candidate: basic_ofstream(const std::basic_ofstream&); no known conversion for argument 1 from ‘std::string’ to ‘const std::basic_ofstream&’
    error: command 'gcc' failed with exit status 1

    opened by TychoKoster 5
  • plot_mesh is rather slow

    Thanks for sharing your code! The mesh rendering with mpl_toolkits.mplot3d seems rather slow. The render result of Data/example1.mat from mesh.vis.plot_mesh(camera_vertices, triangles) is in the attached screenshot (omitted here).

    Are there any solutions for rendering a 3D face mesh in real time, like in Matlab?

    opened by cleardusk 5
  • error in compiling with Cython

    I encountered an error related to line 349 in mesh_core.cpp:

    mesh_core.cpp:349:31: error: no matching function for call to 'std::basic_ofstream<char>::basic_ofstream(std::string&)'

    After I changed the line to:

    ofstream obj_file(filename.c_str());

    the problem was solved.

    opened by Cogito2012 4
  • cannot import mesh_core_cython

    I have compiled /face3d/mesh/cython following readme.md and added an empty file named '__init__.py' in cython, as suggested in another issue ('ImportError: No module named cython #26'), but another problem came up: cannot find reference 'mesh_core_cython' in '__init__.py'. How can I solve it? Many thanks.

    opened by codgodtao 2
  • error: command 'gcc' failed with exit status 1

    When compiling the c++ files to .so for python use with python setup.py build_ext -i, the error occurs.

    I use python 3.6, and the default gcc version in my anaconda env is 7.3.0.

    mesh_core.cpp:349:31: error: no matching function for call to ‘std::basic_ofstream::basic_ofstream(std::__cxx11::string&)’

    opened by welbeckzhou 2
  • The Problem of Generating UV Map

    When I run the 6_generate_image_map.py script, I found that there are differences between my result and the author's result for uv_coords.jpg.

    uv_coords = face3d.morphable_model.load.load_uv_coords('Data/BFM/Out/BFM_UV.mat')
    uv_coords_image = mesh.render.render_colors(image_vertices, triangles, attribute, h, w, c=2)
    uv_coords_image = np.concatenate((np.zeros((h, w, 1)), uv_coords_image), 2)
    io.imsave('{}/uv_coords.jpg'.format(save_folder), np.squeeze(uv_coords_image))

    (color, depth, and uv_coords screenshots omitted)

    I think there are some differences between my BFM_UV.mat and the author's BFM_UV.mat. Can anyone provide a file that can be used correctly?

    opened by yqwu94 1
  • Verification of the posmap from 8_generate_posmap_300WLP.py

    Hello, I was trying to re-train PRNet and was using the provided function to generate the ground-truth position map, but I wonder if I did it correctly: when I use the plot functions from PRNet with the generated posmap, the result seems a bit inaccurate (see the sparse-alignment screenshot, omitted here). Shouldn't the results be more accurate? Or is there any other way to verify whether the posmap was properly generated?

    opened by heathentw 1
  • bad_ind problem

    Line 67 in generate.m is bad_ind = [10032, 10155, 10280]; and line 89 is tm_inner = tm_inner(:, setdiff(all_ind, bad_ind));, where all_ind is matlabarray([[0, 1, 2, ..., 107]]), so the setdiff does not work here. My Model_tri_mouth.mat file is 681 KB. Has anyone else run into this problem?

    opened by YuTingLiu 1
  • P_Affine is not an orthogonal rotation matrix

    P attained from https://github.com/YadiraF/face3d/blob/master/face3d/mesh/transform.py#L239 is not an orthogonal rotation matrix.

    Here, https://github.com/YadiraF/face3d/blob/master/face3d/morphable_model/fit.py#L254 , we only get an approximation of s, R, t (P != [sR|t]). R is not an orthogonal rotation matrix, either.

    The transformation of R to angles, https://github.com/YadiraF/face3d/blob/master/face3d/morphable_model/morphabel_model.py#L140 , is not lossless, because 3 Euler angles cannot reconstruct the matrix R if R is not an orthogonal rotation matrix.

    opened by T2hhbmEK 0
  • Environmental preparation (provided for reference)

    I encountered some problems in the process of environment configuration. The solutions are as follows:

    1. BFM

    cd examples/Data/BFM and view "readme.md"; download the files and create the corresponding folders. You can also follow #95 and put the files in the corresponding folders according to examples/data/generate.m.

    2. Cython

    pip install cython
    cd ./engineer/render/face3d/mesh/cython
    python3 setup.py build_ext --inplace
    python3 setup.py install

    opened by Luh1124 0
  • share the BFM.mat file

    I am sharing the Matlab result files for those who don't want to install Matlab. The files include BFM.mat. https://github.com/peterjiang4648/BFM_model/releases/tag/1.0

    opened by peterjiang4648 2
  • how to generate a uv position map from my own dataset

    Very nice sharing! I want to generate uv position maps to train PRNet on my own dataset, but I didn't find any instructions anywhere. Could you tell me where I can find them and what I need to prepare?

    opened by wrainbow0705 1
  • bug reported in perspective camera

    1. FOV in perspective projection: https://github.com/YadiraF/face3d/blob/2fc26906d159a11398cd3e7a9b3f16b6f8937da3/face3d/mesh/transform.py#L179 -> top = near*np.tan(fovy / 2)

    2. typo: https://github.com/YadiraF/face3d/blob/2fc26906d159a11398cd3e7a9b3f16b6f8937da3/face3d/morphable_model/morphabel_model.py#L49 -> sp = np.zeros((self.n_shape_para, 1))

    opened by jasonomad 0
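
To spell out the geometry behind the first fix above: for a symmetric viewing frustum, tan(fovy/2) = top/near, so top = near * tan(fovy/2), and right = top * aspect. A small sketch, assuming fovy is given in degrees (the function name here is just for illustration):

import numpy as np

def near_plane_extents(fovy_deg, aspect, near):
    # tan(fovy/2) = top / near  =>  top = near * tan(fovy/2)
    top = near * np.tan(np.deg2rad(fovy_deg) / 2.0)
    right = top * aspect
    return top, right
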
Owner
Yao Feng