Implementation for Paper "Inverting Generative Adversarial Renderer for Face Reconstruction"

Overview

StyleGAR

TODO: add arxiv link

Implementation of Inverting Generative Adversarial Renderer for Face Reconstruction

TODO: for test

Currently, some models are being reworked with open-source resources (3DMM, landmark, segmentation) to remove dependencies on commercial models; an update will follow soon.

Usage

First align all faces according to landmarks:

python utils_face.py --lmk dlib --bfm BFM.mat --output OUTPUT_PATH DATASET_PATH

This aligns faces according to the specified landmark model; you can switch to other kinds of landmark detector.
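For reference, here is a minimal sketch of the 68-point dlib landmark pipeline that the --lmk dlib option refers to; the predictor file name and the exact hook inside utils_face.py are assumptions, not part of this repository:

    # Sketch only: the standard dlib 68-point landmark detector.
    # The predictor file below is dlib's stock model; how utils_face.py
    # wires the detector in may differ.
    import dlib
    import numpy as np

    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

    def detect_landmarks(image):
        # Return a (68, 2) array of landmark coordinates for the first face, or None.
        faces = detector(image, 1)  # upsample once to find smaller faces
        if len(faces) == 0:
            return None
        shape = predictor(image, faces[0])
        return np.array([[p.x, p.y] for p in shape.parts()])

A different detector can be swapped in as long as it returns landmarks in the layout the alignment code expects.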

Create lmdb datasets:

python prepare_data.py --out LMDB_PATH --n_worker N_WORKER --size SIZE1,SIZE2,SIZE3,... OUTPUT_PATH

This converts the images to JPEG and pre-resizes them. This implementation does not use progressive growing, but you can create datasets at multiple resolutions by passing a comma-separated list to --size, in case you want to try other resolutions later.
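For example, to build 128, 256, and 512 pixel versions of the dataset in one pass (the worker count here is only illustrative):

python prepare_data.py --out LMDB_PATH --n_worker 8 --size 128,256,512 OUTPUT_PATH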

Then you can train the model in a distributed setting:

python -m torch.distributed.launch --nproc_per_node=N_GPU --master_port=PORT train.py --batch BATCH_SIZE LMDB_PATH

train.py supports Weights & Biases logging. If you want to use it, add the --wandb argument to the command.
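For example, a run on 4 GPUs with Weights & Biases logging enabled might look like this (the GPU count, port, and batch size are illustrative, not recommendations):

python -m torch.distributed.launch --nproc_per_node=4 --master_port=29500 train.py --batch 8 --wandb LMDB_PATH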

Generate samples

python generate.py --sample N_FACES --pics N_PICS --ckpt PATH_CHECKPOINT

You should change the size argument (for example, --size 256) if you trained at a different resolution.
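For example, sampling from a model trained at 256x256 (the checkpoint path is a placeholder):

python generate.py --size 256 --sample N_FACES --pics N_PICS --ckpt PATH_CHECKPOINT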

License

Model details and custom CUDA kernel code are from the official repository: https://github.com/NVlabs/stylegan2

Code for Learned Perceptual Image Patch Similarity (LPIPS) came from https://github.com/richzhang/PerceptualSimilarity

To match FID scores more closely to the official TensorFlow implementation, I have used the FID Inception V3 implementation from https://github.com/mseitzer/pytorch-fid

This work is built on the PyTorch implementation of StyleGAN2 (https://github.com/rosinality/stylegan2-pytorch.git).

Comments
  • pretrained checkpoint for testing

    Hi Piao, thanks for your awesome work. Currently, I find that only the training code has been released. I wonder if it would be possible to release a pretrained checkpoint for testing and evaluation. Or would it be possible to send you several images to observe the performance? Looking forward to your reply.

    opened by JesseZhang92 2
  • Bug: rasterize on 'cuda:1,2,3...'

    In op/rasterize.py, when I try:

        if use_cuda:
            v = v.cuda(1)
            f = f.cuda(1)
            t = t.cuda(1)
        o = rasterize(v, t, f, 5)

    the process keeps running and nothing comes out of the print() function; if I run it under torch.distributed, I get errors like:
    CUDA error: an illegal memory access was encountered.
    
    opened by redlibo 0
  • Regarding the initialization of the class 'GeneratorWithMap'

    Hi,

    Thank you for sharing the code of the great work!

    I have been adapting the generator code for my own project, and I am concerned about the following line of code: https://github.com/WestlyPark/StyleRenderer/blob/a1a093a030716ceb24b46e2fd48d2580f7429463/model.py#L216

    I feel this line is unnecessary, as the to_rgb layers are already initialized in https://github.com/WestlyPark/StyleRenderer/blob/a1a093a030716ceb24b46e2fd48d2580f7429463/model.py#L192. If I understand it correctly, the current version of the code doubles the number of to_rgb layers, but only the first half is used in the forward pass (the generation results are still good, though).

    Best regards

    opened by yunfan0621 0
  • ['v '], ['tex'], ['tri '] are not found in BFM.mat.

    Hi, when running the code, the ['v '], ['tex'], and ['tri '] attributes of the model are not found in BFM.mat. The BFM.mat (01_MorphableModel.mat) was downloaded from the BFM2009 website: https://faces.dmi.unibas.ch/bfm/main.php?nav=1-0&id=basel_face_model
    As we all know, 01_MorphableModel.mat does not have the keys ['v '] and ['tex']. Can you share your BFM.mat, or the code to generate it?

    opened by gangeqian 3
  • About rasterize

    Thank you for your great work. But when I run the command "python utils_face.py --lmk dlib --bfm BFM.mat --output OUTPUT_PATH DATASET_PATH", I encounter an issue like the one below:

        Traceback (most recent call last):
          File "H:/stylerender/StyleRenderer/utils_face.py", line 522, in <module>
            from op import rasterize
          File "H:\stylerender\StyleRenderer\op\__init__.py", line 3, in <module>
            from .rasterize import rasterize
          File "H:\stylerender\StyleRenderer\op\rasterize.py", line 15, in <module>
            os.path.join(module_path, 'rasterize.cu')
          File "C:\Users\Administrator\anaconda3\envs\styleflow\lib\site-packages\torch\utils\cpp_extension.py", line 898, in load
            is_python_module)
          File "C:\Users\Administrator\anaconda3\envs\styleflow\lib\site-packages\torch\utils\cpp_extension.py", line 1097, in _jit_compile
            return _import_module_from_library(name, build_directory, is_python_module)
          File "C:\Users\Administrator\anaconda3\envs\styleflow\lib\site-packages\torch\utils\cpp_extension.py", line 1422, in _import_module_from_library
            file, path, description = imp.find_module(module_name, [path])
          File "C:\Users\Administrator\anaconda3\envs\styleflow\lib\imp.py", line 302, in find_module
            raise ImportError(_ERR_MSG.format(name), name=name)
        ImportError: No module named 'rasterize'

    Do I need to compile rasterize.cpp and rasterize.cu myself? Or maybe the CUDA and PyTorch versions are wrong?

    opened by Hpjhpjhs 4