[TIP 2021] SADRNet: Self-Aligned Dual Face Regression Networks for Robust 3D Dense Face Alignment and Reconstruction

Overview

SADRNet

Paper link: SADRNet: Self-Aligned Dual Face Regression Networks for Robust 3D Dense Face Alignment and Reconstruction

Requirements

python                 3.6.2
matplotlib             3.1.1  
Cython                 0.29.13
numba                  0.45.1
numpy                  1.16.0   
opencv-python          4.1.1
Pillow                 6.1.0                 
pyrender               0.1.33                
scikit-image           0.15.0                
scipy                  1.3.1
torch                  1.2.0                 
torchvision            0.4.0
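
A quick, optional way to confirm your environment roughly matches the pinned versions above (just a sketch, not part of the repo; the pins matter because loading the pretrained model with PyTorch > 1.4 is reported to fail, see the comments below):

import sys
import cv2
import numpy
import torch
import torchvision

# Print the installed versions; compare them against the list above.
print('python      ', sys.version.split()[0])     # expect 3.6.x
print('numpy       ', numpy.__version__)          # expect 1.16.x
print('opencv      ', cv2.__version__)            # expect 4.1.x
print('torch       ', torch.__version__)          # expect 1.2.0
print('torchvision ', torchvision.__version__)    # expect 0.4.0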

Pretrained model

Link: https://drive.google.com/file/d/1mqdBdVzC9myTWImkevQIn-AuBrVEix18/view?usp=sharing .

Please put it under data/saved_model/SADRNv2/.

Please set the repository root (./SADRNet) as the working directory when running code in this repo.
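
If imports of project modules such as config still fail (see the "No module named 'config'" comment below), here is a minimal sketch of forcing the expected working directory and import path, assuming the checkout root contains config.py, src/ and data/ (the path below is a placeholder):

import os
import sys

REPO_ROOT = '/path/to/SADRNet'     # adjust to your checkout (hypothetical path)
os.chdir(REPO_ROOT)                # scripts use relative paths like data/saved_model/...
sys.path.insert(0, REPO_ROOT)      # so that 'import config' and the src.* imports resolve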

Predicting

  • Put images under data/example/.

  • Run src/run/predict.py.

The network takes cropped 256×256×3 face images as input.
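
Since the model expects an already-cropped face, here is a minimal preprocessing sketch (an illustration only; the bounding box, channel order and normalization are assumptions, not the repo's exact pipeline):

import cv2
import numpy as np
import torch

def load_cropped_face(path, bbox):
    """bbox = (x1, y1, x2, y2) of the face region, assumed to be known already."""
    img = cv2.imread(path)                                 # BGR, HxWx3, uint8
    x1, y1, x2, y2 = bbox
    face = cv2.resize(img[y1:y2, x1:x2], (256, 256))       # crop, then resize to 256x256
    face = face[:, :, ::-1].astype(np.float32) / 255.0     # BGR -> RGB, scale to [0, 1]
    face = np.ascontiguousarray(face)
    return torch.from_numpy(face).permute(2, 0, 1).unsqueeze(0)  # 1x3x256x256 tensor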

Training

  • Download 300W-LP and AFLW2000-3D at http://www.cbsr.ia.ac.cn/users/xiangyuzhu/projects/3ddfa/main.htm .

  • Extract them into data/packs/AFLW2000 and data/packs/300W_LP.

  • Please refer to face3d to prepare the BFM data, then move the generated files from Out/ to data/Out/.

  • Run src/run/prepare_dataset.py; this will take several hours.

  • Run src/run/train_block_data.py. Some training settings are included in config.py and src/configs. A quick check of the expected data layout is sketched below.
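
Before running the preparation and training scripts, it can help to confirm that the layout described above is in place. A small pre-flight sketch (the paths are taken from this README; the script itself is not part of the repo):

import os

# Directories the steps above expect, relative to the repository root.
expected = [
    'data/packs/300W_LP',          # extracted 300W-LP pack
    'data/packs/AFLW2000',         # extracted AFLW2000-3D pack
    'data/Out',                    # BFM files generated with face3d
    'data/saved_model/SADRNv2',    # pretrained model (used for prediction)
]
for path in expected:
    status = 'OK' if os.path.isdir(path) else 'MISSING'
    print(f'{status:8s} {path}')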

Acknowledgements

We especially thank the contributors of the face3d codebase for providing helpful code.

Comments
  • 0 data added

    I think there is something wrong with the code; it cannot read the images in the folder. After I changed the code, there is no corresponding _info.mat. What should the format of the testing images be? Thank you for your response.

    opened by laceyliao 3
  • prepare_dataset.py: What is Extra_LP? train_blocks is empty

    When I run src/run/prepare_dataset.py, I encounter an error: FileNotFoundError: [Errno 2] No such file or directory: 'data/dataset/Extra_LP/all_image_data.pkl'

    skip  data/dataset/300W_LP_crop/HELEN_Flip/HELEN_173153923_2_12
    skip  data/dataset/300W_LP_crop/HELEN_Flip/HELEN_2236814888_2_3
    skip  data/dataset/300W_LP_crop/IBUG_Flip/IBUG_image_018_5
    skip  data/dataset/300W_LP_crop/LFPW/LFPW_image_train_0328_6
    skip  data/dataset/300W_LP_crop/LFPW_Flip/LFPW_image_train_0380_3
    skip  data/dataset/300W_LP_crop/landmarks/AFW
    skip  data/dataset/300W_LP_crop/landmarks/HELEN
    skip  data/dataset/300W_LP_crop/landmarks/IBUG
    skip  data/dataset/300W_LP_crop/landmarks/LFPW
    0 data added
    saving data path list
    data path list saved
    0 data added
    saving data path list
    Traceback (most recent call last):
      File "src/run/prepare_dataset.py", line 19, in <module>
        train_dataset = make_dataset(TRAIN_DIR, 'train')
      File "./src/dataset/dataloader.py", line 217, in make_dataset
        raw_dataset.add_image_data(folder, mode)
      File "./src/dataset/dataloader.py", line 116, in add_image_data
        self.save_image_data_paths(all_data, data_dir)
      File "./src/dataset/dataloader.py", line 120, in save_image_data_paths
        ft = open(f'{data_dir}/all_image_data.pkl', 'wb')
    FileNotFoundError: [Errno 2] No such file or directory: 'data/dataset/Extra_LP/all_image_data.pkl'
    worker: 7 end 236/250  data/packs/AFLW2000/image00451.jpg
    worker: 3 end 230/15307  data/packs/300W_LP/HELEN/HELEN_2618147986_1_3.jpg
    worker: 5 end 236/15307  data/packs/300W_LP/LFPW/LFPW_image_train_0304_13.jpg
    worker: 1 end 242/15307  data/packs/300W_LP/IBUG_Flip/IBUG_image_030_3.jpg
    worker: 4 end 245/250  data/packs/AFLW2000/image02453.jpg
    worker: 0 end 239/15307  data/packs/300W_LP/HELEN/HELEN_248684423_1_12.jpg
    worker: 6 end 235/15301  data/packs/300W_LP/HELEN_Flip/HELEN_2419679570_1_5.jpg
    worker: 2 end 245/15307  data/packs/300W_LP/HELEN/HELEN_2345048760_1_10.jpg
    worker: 6 end 15168/15307  data/packs/300W_LP/LFPW/LFPW_image_train_0476_15.jpg
    worker: 0 end 15281/15301  data/packs/300W_LP/HELEN_Flip/HELEN_2466594504_1_9.jpg
    worker: 5 end 15270/15307  data/packs/300W_LP/HELEN/HELEN_2882149940_1_6.jpg
    worker: 1 end 15287/15301  data/packs/300W_LP/HELEN_Flip/HELEN_3026147764_1_11.jpg
    worker: 7 end 15272/15307  data/packs/300W_LP/HELEN/HELEN_111835766_1_16.jpg
    worker: 2 end 15259/15307  data/packs/300W_LP/LFPW/LFPW_image_train_0741_0.jpg
    worker: 3 end 15270/15307  data/packs/300W_LP/LFPW/LFPW_image_train_0051_4.jpg
    worker: 4 end 15306/15307  data/packs/300W_LP/LFPW/LFPW_image_train_0382_5.jpg
    

    My question is: what is Extra_LP? Is it a dataset? It is mentioned neither in the README nor in the paper. Where should I get Extra_LP/all_image_data.pkl?

    Another problem: after I run src/run/prepare_dataset.py, I find that SADRNet/data/dataset/train_blocks is generated but empty. Is that correct, or did I miss something?

    opened by lhyfst 3
  • run error

    When I run src/run/predict.py, I get the following error. Thanks.

    
    folders= 1
    dirs= data/example
    dirs= ['image00354', 'image00357', 'image00351', 'image00355', 'image00359', 'image00350', 'image00352']
    7dirs= []
    dirs= []
    dirs= []
    dirs= []
    dirs= []
    dirs= []
    dirs= []
    7 data added
    saving data path list
    data path list saved
    get data 7
     11 7
    Traceback (most recent call last):
      File "src/run/predict.py", line 246, in <module>
        evaluator.evaluate_example(predictor_1)
      File "src/run/predict.py", line 223, in evaluate_example
        out = predictor.model({'img': image}, 'predict')
      File "/root/anaconda/envs/ai/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "./src/model/SADRNv2.py", line 98, in forward
        x = self.block1(x)
      File "/root/anaconda/envs/ai/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "./src/model/modules.py", line 540, in forward
        out += identity
    RuntimeError: The size of tensor a (129) must match the size of tensor b (128) at non-singleton dimension 3
    
    opened by jackweiwang 3
  • No module named 'config'

    Thanks for your great work! When I run src/run/predict.py, it reports "No module named 'config'". Should I set the working directory? My current working directory is already SADRN.

    opened by haoxurt 2
  • About data augmentation for occlusion

    Thanks for your great work!!!

    During data augmentation, especially the occlusion augmentation, I find that the whole face region in an image can end up occluded because of the randomness. How do you handle these ill-posed images? Do you simply delete them, or something else?

    Hope for your reply! Thanks again!

    opened by MariaWang96 2
  • ImportError: cannot import name 'uv_triangles'

    When I run the python src/run/train_block_data.py command, the following error is reported. How can I solve it?

    note: UV_TRIANGLES_PATH = '../data/uv_data/uv_triangles.npy' exists.

    Traceback (most recent call last):
      File "src/run/train_block_data.py", line 343, in <module>
        trainer = SADRNv2Trainer()
      File "src/run/train_block_data.py", line 319, in __init__
        super(SADRNTrainer, self).__init__()
      File "src/run/train_block_data.py", line 122, in __init__
        self.model = self.get_model()
      File "src/run/train_block_data.py", line 325, in get_model
        from src.model.SADRNv2 import get_model
      File "/media/bangyanhe/disk/SADRNet-main/src/model/SADRNv2.py", line 2, in <module>
        from src.model.loss import *
      File "/media/bangyanhe/disk/SADRNet-main/src/model/loss.py", line 7, in <module>
        from src.dataset.uv_face import face_mask_np, face_mask_fix_rate, foreface_ind, uv_kpt_ind, uv_edges, uv_triangles
    ImportError: cannot import name 'uv_triangles'
    
    opened by HeBangYan 1
  • Issue for running the pre-trained model with Pytorch version > 1.4

    Hi, Thanks a lot for sharing the code and the pre-trained model of your awesome paper. Really appreciated.

    We tried to load and test the pre-trained model, but we face this error: an unexpected tensor shape for one of the building blocks, class ResBlock4(nn.Module).

    It seems that there was an update for the Conv layers after version 1.4 (https://discuss.pytorch.org/t/did-conv2d-shapes-change-between-torch-1-4-0-and-1-6-0/93859).

    I would like to ask whether you could share a new pre-trained model trained with PyTorch version > 1.4?

    Thanks, Amin

    opened by amin-jourabloo 1
  • How to extract Point Cloud?

    Thanks for sharing this project. As mentioned in the paper

    we choose the face region containing 19K points

    I am interested in extracting the point cloud and using CPD for skin displacement. Can you point me to where in the code these point clouds can be extracted?

    Thanks

    opened by noumanriazkhan 1
  • To compute RecLoss, why do you compute outer_interocular_dist? Why don't you use bbox_size as in KptNME2D directly?

    To compute RecLoss, why do you compute outer_interocular_dist? https://github.com/MCG-NJU/SADRNet/blob/a5e6fac904c66711a6e9457fc603bd7b6d348d21/src/model/loss.py#L353-L359

    Why don't you use bbox_size as in KptNME2D directly? https://github.com/MCG-NJU/SADRNet/blob/a5e6fac904c66711a6e9457fc603bd7b6d348d21/src/model/loss.py#L255-L260

    opened by lhyfst 1
  • What are the meanings of POSMAP_FIX_RATE, OFFSET_FIX_RATE, and NME.rate?

    POSMAP_FIX_RATE OFFSET_FIX_RATE: https://github.com/MCG-NJU/SADRNet/blob/8c0741713bd2ae5c66506659a5b0ae5c99cacc9a/src/configs/config_SADRN_v2_eval.py#L3-L4

    NME.rate: https://github.com/MCG-NJU/SADRNet/blob/a5e6fac904c66711a6e9457fc603bd7b6d348d21/src/model/loss.py#L67-L69

    opened by lhyfst 1
  • License

    Hey guys

    Thanks for sharing your work and releasing the code. However, officially you would need to add a license to allow people to use your code or build upon it for academic/industrial research.

    You could add the Apache-2.0 License like mmediting (https://github.com/open-mmlab/mmediting), a BSD License like pix2pixHD (https://github.com/NVIDIA/pix2pixHD), or anything else.

    Do you plan to add a License to your code?

    opened by ofirkris 1
  • osmesa error

    Traceback (most recent call last):
      File "src/run/predict.py", line 22, in <module>
        from src.visualize.render_mesh import render_uvm  # render_face_orthographic,
      File "/workspace/SADRNet/./src/visualize/render_mesh.py", line 16, in <module>
        r = pyrender.OffscreenRenderer(CROPPED_IMAGE_SIZE, CROPPED_IMAGE_SIZE)
      File "/usr/local/lib/python3.8/dist-packages/pyrender/offscreen.py", line 31, in __init__
        self._create()
      File "/usr/local/lib/python3.8/dist-packages/pyrender/offscreen.py", line 134, in _create
        self._platform.init_context()
      File "/usr/local/lib/python3.8/dist-packages/pyrender/platforms/osmesa.py", line 19, in init_context
        from OpenGL.osmesa import (
    ImportError: cannot import name 'OSMesaCreateContextAttribs' from 'OpenGL.osmesa' (/usr/local/lib/python3.8/dist-packages/OpenGL/osmesa/__init__.py)
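
    A common workaround for this kind of pyrender offscreen-rendering failure (a general pyrender tip, not an answer from the SADRNet authors) is to select the offscreen backend via an environment variable before pyrender is imported anywhere:

    # General pyrender workaround (not specific to SADRNet): choose the offscreen
    # backend before the first `import pyrender` runs.
    import os
    os.environ['PYOPENGL_PLATFORM'] = 'egl'   # or 'osmesa' if OSMesa is installed correctly

    import pyrender                           # must happen after the variable is set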

    opened by sunjunlishi 3
Owner
Multimedia Computing Group, Nanjing University