Overview

NoW Evaluation

This is the official repository for evaluation on the NoW Benchmark Dataset. The goal of the NoW benchmark is to introduce a standard evaluation metric to measure the accuracy and robustness of 3D face reconstruction methods from a single image under variations in viewing angle, lighting, and common occlusions.

Evaluation metric

Given a single monocular image, the challenge consists of reconstructing a 3D face. Since the predicted meshes occur in different local coordinate systems, the reconstructed 3D mesh is first rigidly aligned (rotation, translation, and scaling) to the scan using a set of corresponding landmarks between the prediction and the scan. We then refine this with a rigid alignment based on the scan-to-mesh distance (the absolute distance between each scan vertex and the closest point on the mesh surface) between the ground truth scan and the reconstructed mesh, using the landmark alignment as initialization. For more details, see the NoW website or the RingNet paper.
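For intuition, below is a minimal NumPy/SciPy sketch of these two steps. It is not the repository's implementation (which relies on the MPI-IS mesh and sbody packages): it estimates a similarity transform from corresponding landmarks with the Umeyama method, then approximates the scan-to-mesh distance by the distance to the nearest mesh vertex. The official metric measures the distance to the mesh surface, not only to its vertices, so this approximation over-estimates the error on coarse meshes.

import numpy as np
from scipy.spatial import cKDTree

def umeyama_similarity(src, dst):
    """Estimate scale, rotation, translation mapping src landmarks onto dst.

    src, dst: (N, 3) arrays of corresponding landmarks.
    Returns (s, R, t) such that s * R @ src[i] + t approximately equals dst[i].
    """
    mu_src, mu_dst = src.mean(0), dst.mean(0)
    src_c, dst_c = src - mu_src, dst - mu_dst
    cov = dst_c.T @ src_c / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1          # resolve reflection ambiguity
    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / len(src)
    s = np.trace(np.diag(D) @ S) / var_src
    t = mu_dst - s * R @ mu_src
    return s, R, t

def approx_scan_to_mesh_error(scan_pts, mesh_verts):
    """Per-scan-vertex error: distance to the closest mesh *vertex* (an
    approximation of the point-to-surface distance used by the benchmark)."""
    tree = cKDTree(mesh_verts)
    dists, _ = tree.query(scan_pts)
    return dists

# Example usage with hypothetical arrays:
# s, R, t = umeyama_similarity(pred_landmarks, scan_landmarks)
# aligned_verts = (s * (R @ mesh_verts.T)).T + t
# errors = approx_scan_to_mesh_error(scan_pts, aligned_verts)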

Learning to Regress 3D Face Shape and Expression from an Image without 3D Supervision
Soubhik Sanyal, Timo Bolkart, Haiwen Feng, Michael J. Black
Computer Vision and Pattern Recognition (CVPR) 2019

Clone the repository

git clone https://github.com/soubhiksanyal/now_evaluation.git

Installation

Please set up a virtual environment:

mkdir <your_home_dir>/.virtualenvs
python3 -m venv <your_home_dir>/.virtualenvs/now_evaluation
source <your_home_dir>/.virtualenvs/now_evaluation/bin/activate

Make sure your pip version is up-to-date:

pip install -U pip

Install the requirements by using:

pip install -r requirements.txt

Install mesh processing libraries from MPI-IS/mesh within the virtual environment.

Installing Scan2Mesh distance:

Clone the flame-fitting repository and copy the required folders with the following commands:

git clone https://github.com/Rubikplayer/flame-fitting.git
cp flame-fitting/smpl_webuser now_evaluation/smpl_webuser -r
cp flame-fitting/sbody now_evaluation/sbody -r

Clone Eigen and copy it to the following folder:

git clone https://gitlab.com/libeigen/eigen.git
cp eigen now_evaluation/sbody/alignment/mesh_distance/eigen -r

Edit the file 'now_evaluation/sbody/alignment/mesh_distance/setup.py' to set EIGEN_DIR to the location of Eigen. Then compile the code with the following commands:

cd now_evaluation/sbody/alignment/mesh_distance
make
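For reference, if Eigen was copied as shown above, the EIGEN_DIR assignment in setup.py might look like the line below; the exact variable layout in that file may differ, so adjust the path to wherever you placed Eigen:

# in now_evaluation/sbody/alignment/mesh_distance/setup.py
EIGEN_DIR = './eigen'   # path assumed from the copy step above; change if Eigen lives elsewhere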

The Scan2Mesh installation follows the codebase provided by flame-fitting. Please check that repository for more detailed instructions on the Scan2Mesh installation.

Evaluation

Download the NoW Dataset and the validation set scans from the NoW website, and predict 3D faces for all validation images.

Check data setup

Before running the NoW evaluation, 1) check that the predicted meshes can be successfully loaded by the mesh loader used, by running

python check_predictions.py <predicted_mesh_path>

Running this loads the <predicted_mesh_path> mesh and exports it to ./predicted_mesh_export.obj. Please check if this file can be loaded by e.g. MeshLab or any other mesh loader, and that the resulting mesh looks like the input mesh.
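If you prefer a scripted sanity check over opening the file in a viewer, something like the following (using the third-party trimesh package, which is not part of this repository) can confirm that the exported mesh has sensible geometry:

import trimesh  # third-party package, not used by the evaluation code itself

mesh = trimesh.load('./predicted_mesh_export.obj', process=False)
print('vertices:', mesh.vertices.shape, 'faces:', mesh.faces.shape)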

2) check that the landmarks for the predicted meshes are correct by running

python check_predictions.py <predicted_mesh_path> <predicted_mesh_landmark_path> <gt_scan_path> <gt_lmk_path> 

Running this loads the <predicted_mesh_path> mesh, rigidly aligns it with the scan <gt_scan_path>, outputs the aligned mesh to ./predicted_mesh_aligned.obj, and outputs the cropped scan to ./cropped_scan.obj. Please check that the output mesh and scan are rigidly aligned by opening them jointly in e.g. MeshLab.

Error computation

To run the NoW evaluation on the validation set, run

python compute_error.py

The function metric_computation() in compute_error.py computes the error metric. You can run python compute_error.py <dataset_folder> <predicted_mesh_folder> <validation_or_test_set>. For more options, please see compute_error.py.

The predicted_mesh_folder should have a structure similar to that described on the RingNet website.

Prior to computing the point-to-surface distance, a rigid alignment between each predicted mesh and the scan is computed. The rigid alignment computation requires, for each predicted mesh, a file with the following 7 landmarks:
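The landmark definition itself is not reproduced here; see the NoW website. As a hypothetical illustration only, a 7x3 landmark file could be written with NumPy as below. File names are placeholders, and you should check compute_error.py and check_predictions.py for the file formats the scripts actually accept:

import numpy as np

# Hypothetical example: 7 predicted landmarks as a (7, 3) array of x y z
# coordinates, given in the same coordinate system as the predicted mesh.
landmarks = np.zeros((7, 3))                 # replace with your predicted landmark positions
np.save('IMG_0001_landmarks.npy', landmarks)       # binary .npy variant
np.savetxt('IMG_0001_landmarks.txt', landmarks)    # plain-text variant, one "x y z" row per landmark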

Visualization

Visualization of the reconstruction error is best done with a cumulative error curve. To generate a cumulative error plot, call generating_cumulative_error_plots() in cumulative_errors.py with the list of output files and the corresponding list of method names.
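As a rough illustration of what such a curve shows (this is not the repository's plotting code), the sketch below plots, for each method, the fraction of scan vertices whose error falls below a given threshold; the per-vertex error arrays and method names are assumed inputs:

import numpy as np
import matplotlib.pyplot as plt

def plot_cumulative_error(errors_per_method, labels, max_error_mm=7.0):
    """Plot the fraction of scan vertices with error below each threshold.

    errors_per_method: list of 1-D arrays of per-vertex distances (in mm).
    labels: list of method names, in the same order.
    """
    thresholds = np.linspace(0.0, max_error_mm, 200)
    for errors, label in zip(errors_per_method, labels):
        fraction = [(errors <= t).mean() for t in thresholds]
        plt.plot(thresholds, fraction, label=label)
    plt.xlabel('Error [mm]')
    plt.ylabel('Fraction of vertices')
    plt.legend()
    plt.show()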

Note that ground truth scans are only provided for the validation set. In order to participate in the NoW challenge, please submit the test set predictions to [email protected] as described here.

Known issues

The mesh loader used is unable to load OBJ files with vertex colors appended to the vertices, i.e. files that contain lines of the format v vx vy vz cr cg cb\n. Export the meshes without vertex colors.
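If your pipeline only exports colored OBJ files, a minimal sketch like the following can strip the colors (a plain-text rewrite, assuming the colors are appended as three extra numbers on each vertex line); file names are placeholders:

def strip_vertex_colors(in_path, out_path):
    # Rewrite "v x y z r g b" vertex lines as "v x y z"; copy all other lines unchanged.
    with open(in_path) as fin, open(out_path, 'w') as fout:
        for line in fin:
            parts = line.split()
            if parts and parts[0] == 'v' and len(parts) >= 7:
                line = ' '.join(parts[:4]) + '\n'
            fout.write(line)

strip_vertex_colors('predicted_mesh.obj', 'predicted_mesh_nocolor.obj')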

License

By using the model or the code, you acknowledge that you have read the license terms of RingNet, understand them, and agree to be bound by them. If you do not agree with these terms and conditions, you must not use the code.

Citing

This codebase was developed for evaluation of the RingNet project. When using the code or NoW evaluation results in a scientific publication, please cite

@inproceedings{RingNet:CVPR:2019,
title = {Learning to Regress 3D Face Shape and Expression from an Image without 3D Supervision},
author = {Sanyal, Soubhik and Bolkart, Timo and Feng, Haiwen and Black, Michael},
booktitle = {Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
month = jun,
year = {2019},
month_numeric = {6}
}
Comments
  • The evaluation speed is slow

    Dear authors, thank you for providing the code. My issue is that it seems quite time-consuming to run 'compute_error.py'. It took me about 1 to 2 hours to process one image. Is this normal?

    Thank you in advance for your help.

    opened by ChunLLee 11
  • rigid_scan_2_mesh_alignment(scan, mesh) very slow

    Hi, I am trying the NoW evaluation code. I generated the BFM shape model and landmarks from the validation image set and passed them to the evaluation code. It shows something like this:

    5.60e+07 | dist: 5.60e+07 | s_reg: 0.00e+00
    4.75e+07 | dist: 4.75e+07 | s_reg: 0.00e+00
    2.92e+07 | dist: 2.92e+07 | s_reg: 0.00e+00
    7.28e+06 | dist: 7.28e+06 | s_reg: 0.00e+00
    3.22e+06 | dist: 3.22e+06 | s_reg: 0.00e+00
    1.67e+06 | dist: 1.67e+06 | s_reg: 0.00e+00
    4.79e+05 | dist: 4.79e+05 | s_reg: 0.00e+00
    1.87e+05 | dist: 1.87e+05 | s_reg: 0.00e+00
    1.15e+05 | dist: 1.15e+05 | s_reg: 0.00e+00
    8.80e+04 | dist: 8.80e+04 | s_reg: 0.00e+00
    7.32e+04 | dist: 7.32e+04 | s_reg: 0.00e+00

    It takes a few minutes to print each new line of the above output, but no result is returned. Could you please help me with this?

    opened by sysuKaiYang 6
  • rigidly aligned error

    I followed the install guide and everything was correct.

    In step 2, I ran check_predictions.py to check the landmarks. The script ran and produced output, but the result was not correct, even though I think the 7 landmarks I provided were right.

    The reconstruction method I used was Deep3D (PyTorch), and the 3DMM was the BFM model.

    I have drawn the landmarks on the 2D image for comparison (the red points are the 7 landmarks):

    [images omitted]

    The result I got was not aligned:

    [images omitted]

    Then I ran the error computation script, and the resulting distance error was much larger than this method should have: [image omitted]

    So I wondered whether the scan-to-mesh version was wrong, or something else. By the way, for flame-fitting I used the master branch, and for Eigen I used version 3.4.

    opened by wengjincheng 4
  • Performing evaluation with DECA

    Hello guys, thank you for releasing the code for your amazing research! I have been trying to perform the alignment with results obtained with the DECA technique. However, I have stumbled into issues regarding the alignment with the ground-truth scans.

    We were using the landmarks available with the face model; however, they do not seem to be the landmarks that you used during the evaluation of DECA. How did you manage to perform the quantitative evaluation with DECA, and which landmarks did you use? Do you have any file with the corresponding vertices?

    opened by danperazzo 4
  • what's the meaning of output?

    When I run python check_predictions.py ./test/IMG_0045_detail.obj.obj ./test/IMG_0045.txt ./test/natural_head_rotation.000001.obj ./test/natural_head_rotation.000001_picked_points.pp the output is:

    1.56e+08 | dist: 1.56e+08 | s_reg: 0.00e+00
    1.40e+08 | dist: 1.40e+08 | s_reg: 0.00e+00
    1.13e+08 | dist: 1.13e+08 | s_reg: 0.00e+00
    1.00e+08 | dist: 1.00e+08 | s_reg: 0.00e+00
    9.64e+07 | dist: 9.64e+07 | s_reg: 0.00e+00
    9.64e+07 | dist: 9.64e+07 | s_reg: 0.00e+00

    What does this mean?

    opened by The-crucified 3
  • Does the head mesh affect the performance?

    Hi, I am trying to evaluate the performance of some reconstruction methods on NoW. I noticed that some methods (e.g., DECA) output a complete head mesh, while the GT scans only contain the face region. As I am not clear about the details of the ScanToMesh function in the code, I am not sure whether it is necessary to output only the face region mesh. Does it matter?

    opened by JiejiangWu 2
  • About the reply time of now test

    Thanks for your great work!

    I would like to ask how long it usually takes to get a reply to the email for the NoW benchmark test set evaluation. I may have some urgent needs, thank you!

    opened by VictoryLoveJessica 2
  • Installation script

    git clone https://github.com/soubhiksanyal/now_evaluation.git
    mkdir now_env
    python3 -m venv now_env
    source now_env/bin/activate
    pip install -U pip
    cd now_evaluation
    pip install -r requirements.txt
    cd ..
    sudo apt-get install libboost-dev
    git clone https://github.com/MPI-IS/mesh.git
    cd mesh
    BOOST_INCLUDE_DIRS=/usr/include/boost make all
    cd ..
    git clone https://github.com/Rubikplayer/flame-fitting.git
    cp flame-fitting/smpl_webuser now_evaluation/smpl_webuser -r
    cp flame-fitting/sbody now_evaluation/sbody -r
    git clone https://gitlab.com/libeigen/eigen.git
    cd eigen
    git checkout 3.4.0
    cd ..
    cp eigen now_evaluation/sbody/alignment/mesh_distance/eigen -r
    cd now_evaluation/sbody/alignment/mesh_distance
    make
    
    opened by AyushP123 2
  • Issues with eigen

    Hi,

    While running make in now_evaluation/sbody/alignment/mesh_distance I am getting a lot of errors related to eigen. Am I supposed to use a specific version of eigen for installation?

    opened by AyushP123 2
  • hello, about BFM model evaluation

    Thank you for your work. I want to test methods based on the BFM model, such as 3ddfa, deepfacecon, or synergynet. What is the index correspondence of the landmarks? Should I set the average expression and the expression coefficients to zero, and keep the influence of the rotation and translation matrices? Looking forward to your answers~

    opened by Luh1124 1
  • expression coefficients set to zero?

    Thanks for releasing the code for your amazing research!

    I notice that the scans only contain people with neutral expressions.

    So when I use Deep3D with BFM to reconstruct the mesh, should I set the expression coefficients to zero, given that the validation scans only contain neutral expressions? And what about the pose coefficients?

    Thank you very much!

    opened by wadesunyang 1