Face Identity Disentanglement via Latent Space Mapping [SIGGRAPH ASIA 2020]

Description

Official Implementation of the paper Face Identity Disentanglement via Latent Space Mapping for both training and evaluation.

Face Identity Disentanglement via Latent Space Mapping
Yotam Nitzan1, Amit Bermano1, Yangyan Li2, Daniel Cohen-Or1
1Tel-Aviv University, 2Alibaba
https://arxiv.org/abs/2005.07728

Abstract: Learning disentangled representations of data is a fundamental problem in artificial intelligence. Specifically, disentangled latent representations allow generative models to control and compose the disentangled factors in the synthesis process. Current methods, however, require extensive supervision and training, or instead, noticeably compromise quality. In this paper, we present a method that learns how to represent data in a disentangled way, with minimal supervision, manifested solely using available pre-trained networks. Our key insight is to decouple the processes of disentanglement and synthesis, by employing a leading pre-trained unconditional image generator, such as StyleGAN. By learning to map into its latent space, we leverage both its state-of-the-art quality, and its rich and expressive latent space, without the burden of training it. We demonstrate our approach on the complex and high dimensional domain of human heads. We evaluate our method qualitatively and quantitatively, and exhibit its success with de-identification operations and with temporal identity coherency in image sequences. Through extensive experimentation, we show that our method successfully disentangles identity from other facial attributes, surpassing existing methods, even though they require more training and supervision.

Setup

To set up everything you need, check out the setup instructions.

Training

Preparing the Dataset

The dataset comprises StyleGAN-generated images and their W latent codes, both generated from a single StyleGAN model.

We also use real images from FFHQ to evaluate quality at test time.

The dataset is assumed to be in the following structure:

Path Description
base directory Directory for all datasets
├  real FFHQ image dataset
└  dataset_N dataset for resolution NxN
   ├  images images generated by StyleGAN
   └  ws W latent codes generated by StyleGAN

To generate the dataset_N directory, run:

cd utils
python generate_fake_data.py \
    --resolution N \
    --batch_size BATCH_SIZE \
    --output_path OUTPUT_PATH \
    --pretrained_models_path PRETRAINED_MODELS_PATH \
    --num_images NUM_IMAGES \
    --gpu GPU

This generates an image dataset in a format similar to FFHQ.
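
In essence, the script samples random z vectors, maps them through StyleGAN's mapping network to W codes, synthesizes the corresponding images, and writes both to disk. Below is a minimal sketch of that loop, assuming a Keras-style generator wrapper; model_synthesis appears in model/stylegan.py, while model_mapping and the NHWC [-1, 1] output range are assumptions, and the actual logic lives in utils/generate_fake_data.py:

import numpy as np
import tensorflow as tf
from PIL import Image

def generate_fake_dataset(stylegan_G, num_images, output_path, z_dim=512):
    """Sketch: sample images and W codes from a pretrained StyleGAN generator."""
    for i in range(num_images):
        z = tf.random.normal((1, z_dim))
        # Map z -> w (hypothetical sub-model name, mirroring model_synthesis).
        w = stylegan_G.model_mapping(z)
        img = stylegan_G.model_synthesis(w)
        # Assumes NHWC output in [-1, 1]; rescale to an 8-bit image.
        img = ((img.numpy()[0] + 1.0) * 127.5).clip(0, 255).astype(np.uint8)
        Image.fromarray(img).save(f"{output_path}/images/{i}.png")
        np.save(f"{output_path}/ws/{i}.npy", w.numpy()[0])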

Start training

To train the model as done in the paper, run:

python main.py \
    NAME \
    --resolution N \
    --pretrained_models_path PRETRAINED_MODELS_PATH \
    --dataset BASE_DATASET_DIR \
    --batch_size BATCH_SIZE \
    --cross_frequency 3 \
    --train_data_size 70000 \
    --results_dir RESULTS_DIR

Please run python main.py -h for more details.
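
For example, a hypothetical 256x256 run might look like this (the experiment name, paths, and batch size are illustrative; adjust them to your environment):

python main.py \
    my_experiment \
    --resolution 256 \
    --pretrained_models_path ./pretrained \
    --dataset ./datasets \
    --batch_size 6 \
    --cross_frequency 3 \
    --train_data_size 70000 \
    --results_dir ./results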

Inference

For convenience, there are a few inference functions, each serving a different use case. The desired function is selected by passing its name via --test_func; the positional NAME argument is a free-form name for the run.
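
The name-based dispatch is essentially a lookup of the function by its string name. A minimal self-contained sketch of the idea (illustrative only; the repo's actual wiring is in test.py, and its inference functions are methods of a class in inference.py):

class Inference:
    """Stand-in class; method names match the accepted --test_func values."""
    def infer_on_dirs(self):
        print("running infer_on_dirs")
    def infer_pairs(self):
        print("running infer_pairs")
    def interpolate(self):
        print("running interpolate")

runner = Inference()
# Resolve the requested function by name (the name would come from --test_func).
test_func = getattr(runner, "interpolate")
test_func()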

All possible combinations in dirs

Input data: two directories, one for identity inputs and another for attribute inputs.
Inference runs over all N*M (identity, attribute) combinations across the two directories.

python test.py \
    NAME \
    --pretrained_models_path PRETRAINED_MODELS_PATH \
    --load_checkpoint PATH_TO_WEIGHTS \
    --id_dir DIR_OF_IMAGES_FOR_ID \
    --attr_dir DIR_OF_IMAGES_FOR_ATTR \
    --output_dir DIR_FOR_OUTPUTS \
    --test_func infer_on_dirs

Paired data

Input data: two directories, one for identity inputs and another for attribute inputs.
The two directories are assumed to be paired: inference runs on images that share the same filename (e.g., an image named 0001.png in each directory).

python test.py \
    NAME \
    --pretrained_models_path PRETRAINED_MODELS_PATH \
    --load_checkpoint PATH_TO_WEIGHTS \
    --id_dir DIR_OF_IMAGES_FOR_ID \
    --attr_dir DIR_OF_IMAGES_FOR_ATTR \
    --output_dir DIR_FOR_OUTPUTS \
    --test_func infer_pairs

Disentangled interpolation

Interpolating attributes

Interpolating identity

Input data: a directory with any number of subdirectories. Each subdirectory contains three images, and every image filename must contain exactly one of the tokens attr or id. If a subdirectory holds two attr images and one id image, attributes are interpolated; if it holds one attr image and two id images, identity is interpolated. A hypothetical layout is shown below.
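
For example (filenames are illustrative; only the attr/id tokens in the names matter):

PARENT_DIR
├  subdir_1            (two attr images, one id image: interpolates attributes)
│  ├  attr_a.png
│  ├  attr_b.png
│  └  id_a.png
└  subdir_2            (one attr image, two id images: interpolates identity)
   ├  attr_a.png
   ├  id_a.png
   └  id_b.png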

python test.py \
    NAME \
    --pretrained_models_path PRETRAINED_MODELS_PATH \
    --load_checkpoint PATH_TO_WEIGHTS \
    --input_dir PARENT_DIR \
    --output_dir DIR_FOR_OUTPUTS \
    --test_func interpolate

Checkpoints

Our pretrained 256x256 checkpoint is also available.

Citation

If you use this code for your research, please cite our paper using:

@article{Nitzan2020FaceID,
  title={Face identity disentanglement via latent space mapping},
  author={Yotam Nitzan and Amit Bermano and Yangyan Li and Daniel Cohen-Or},
  journal={ACM Transactions on Graphics (TOG)},
  year={2020},
  volume={39},
  pages={1--14}
}

Comments
  • question about paper

    Hi, thanks for your beautiful work on ID-disentanglement. I have some questions:

    1. Why are only Eid and G fixed? Shouldn't Eattr and Elnd be fixed as well?
    2. What is the relationship between LG(adv) and LG(non-adv)? I mean the order of optimization: the paper says they are optimized separately, but whichever one is optimized first changes the result of optimizing the other. I think there should be some relationship between them.
    3. Following question 2, LD(adv) uses z, and this z is not the concatenation of the Eattr and Eid outputs, right? Because there is only one ground-truth W each time; that is, we don't have a ground truth for W.
    opened by diaodeyi 5
  • Generate 256x256 size training data

    I have encountered an issue when generating the 256x256 training dataset; however, I can generate 1024x1024 training data without any error.

    The generate_fake_data.py script failed with the following error message:

    Traceback (most recent call last):
      File "generate_fake_data.py", line 85, in <module>
        main(args)
      File "generate_fake_data.py", line 37, in main
        stylegan_G(tf.ones((1, 512)))
      File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/base_layer.py", line 1012, in __call__
        outputs = call_fn(inputs, *args, **kwargs)
      File "../model/stylegan.py", line 483, in call
        x = self.model_synthesis(x)
      File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/base_layer.py", line 998, in __call__
        input_spec.assert_input_compatibility(self.input_spec, inputs, self.name)
      File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/input_spec.py", line 274, in assert_input_compatibility
        ', found shape=' + display_shape(x.shape))
    ValueError: Input 0 is incompatible with layer G_synthesis: expected shape=(None, 14, 512), found shape=(1, 18, 512)
    
    opened by NTU-P04922004 3
  •  AttributeError: 'G' object has no attribute 'landmarks'

    !python test.py molo2 --pretrained_models_path "/content/ID-disentanglement/stylegan_G_256x256_synthesis/" \
        --load_checkpoint "/content/drive/MyDrive/checkpoint" \
        --input_dir "/content/id" \
        --output_dir "/content/out" \
        --test_func interpolate
    
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py:2281: UserWarning: `layer.add_variable` is deprecated and will be removed in a future version. Please use `layer.add_weight` method instead.
      warnings.warn('`layer.add_variable` is deprecated and '
    Wrong data format for .ipynb_checkpoints
    Traceback (most recent call last):
      File "test.py", line 45, in <module>
        main()
      File "test.py", line 41, in main
        test_func()
      File "/content/ID-disentanglement/inference.py", line 115, in interpolate
        const_img = self.G(const_img, const_img)[0]
      File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py", line 1012, in __call__
        outputs = call_fn(inputs, *args, **kwargs)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py", line 828, in __call__
        result = self._call(*args, **kwds)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py", line 871, in _call
        self._initialize(args, kwds, add_initializers_to=initializers)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py", line 726, in _initialize
        *args, **kwds))
      File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 2969, in _get_concrete_function_internal_garbage_collected
        graph_function, _ = self._maybe_define_function(args, kwargs)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 3361, in _maybe_define_function
        graph_function = self._create_graph_function(args, kwargs)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 3206, in _create_graph_function
        capture_by_value=self._capture_by_value),
      File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py", line 990, in func_graph_from_py_func
        func_outputs = python_func(*func_args, **func_kwargs)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py", line 634, in wrapped_fn
        out = weak_wrapped_fn().__wrapped__(*args, **kwds)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 3887, in bound_method_wrapper
        return wrapped_fn(*args, **kwargs)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py", line 977, in wrapper
        raise e.ag_error_metadata.to_exception(e)
    AttributeError: in user code:
    
        /content/ID-disentanglement/model/generator.py:42 call  *
            lnds = self.landmarks(x2)
    
        AttributeError: 'G' object has no attribute 'landmarks'
    
    opened by molo32 3
  • Conv2DCustomBackpropInputOp only supports NHWC

    Hello and thanks for publishing this nice repo!

    I've tried running test.py after following the setup instructions and am getting an error; I'm not sure how to solve this.

    python test.py Whut --pretrained_models_path ./modele/stylegan_G_256x256/ --gpu 0 --load_checkpoint ./checkpointuri/checkpoint/ --resolution 256 --input_dir ./imagini/ --output_dir ./rezultate/ --test_func interpolate

    Traceback (most recent call last):
      File "test.py", line 45, in <module>
        main()
      File "test.py", line 41, in main
        test_func()
      File "/home/ubuntu/gituri/ID-disentanglement/inference.py", line 115, in interpolate
        const_img = self.G(const_img, const_img)[0]
      File "/home/ubuntu/anaconda3/envs/id_disen/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 891, in __call__
        outputs = self.call(cast_inputs, *args, **kwargs)
      File "/home/ubuntu/anaconda3/envs/id_disen/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py", line 457, in __call__
        result = self._call(*args, **kwds)
      File "/home/ubuntu/anaconda3/envs/id_disen/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py", line 526, in _call
        return self._concrete_stateful_fn._filtered_call(canon_args, canon_kwds)  # pylint: disable=protected-access
      File "/home/ubuntu/anaconda3/envs/id_disen/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py", line 1141, in _filtered_call
        self.captured_inputs)
      File "/home/ubuntu/anaconda3/envs/id_disen/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py", line 1224, in _call_flat
        ctx, args, cancellation_manager=cancellation_manager)
      File "/home/ubuntu/anaconda3/envs/id_disen/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py", line 511, in call
        ctx=ctx)
      File "/home/ubuntu/anaconda3/envs/id_disen/lib/python3.7/site-packages/tensorflow_core/python/eager/execute.py", line 67, in quick_execute
        six.raise_from(core._status_to_exception(e.code, message), None)
      File "<string>", line 3, in raise_from
    tensorflow.python.framework.errors_impl.InvalidArgumentError:  Conv2DCustomBackpropInputOp only supports NHWC.
             [[node G_synthesis/G_synthesis/128x128/Conv0_up/conv2d_transpose (defined at /home/ubuntu/anaconda3/envs/id_disen/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py:1751) ]] [Op:__inference_call_25996]
    
    Function call stack:
    call
    
    

    If I try changing from NCHW to NHWC then some dimensions stop working. I've also tried passing it the --gpu 0 and --gpu 1 parameters but that doesn't have an effect.

    Is there something I am missing? Is this maybe familiar to you?

    Thanks

    opened by vlad-i 3
  • test.py

    Hello, I don't know how to run test.py. Can you tell me?

    python test.py Name --pretrained_models_path PRETRAINED_MODELS_PATH \
        --load_checkpoint PATH_TO_WEIGHTS \
        --id_dir DIR_OF_IMAGES_FOR_ID \
        --attr_dir DIR_OF_IMAGES_FOR_ATTR \
        --output_dir DIR_FOR_OUTPUTS \
        --test_func infer_on_dirs

    What should 'Name' be set to?

    opened by mzprose 2
  • Loading pretrained StyleGAN weights (h5)

    I have encountered an issue when generating the training dataset.

    I ran the generate_fake_data.py script but it failed with the following error:

    Model created.                                                                                                                             
    Traceback (most recent call last):                                                                                                   
      File "generate_fake_data.py", line 91, in <module>                                                                            
        main(args)                                                                                                                      
      File "generate_fake_data.py", line 44, in main                                                                                   
        stylegan_G.load_weights(str(stylegan_G_path))                                                                                   
      File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py", line 2200, in load_weights                       
        'Unable to load weights saved in HDF5 format into a subclassed '                                                                     
    ValueError: Unable to load weights saved in HDF5 format into a subclassed Model which has not created its variables yet. Call the Model first
    , then load the weights.      
    

    PS: My development environment is TensorFlow v2.3, and the pretrained weights I use are the FFHQ StyleGAN 256x256 weights from the setup doc.

    Any idea what has caused this issue?

    opened by NTU-P04922004 1
  • "Wrong data format for" error

    Can you explain what you mean by name, attr, and id?

    Can you illustrate the directory hierarchy required for this to work?

    I have it like this:

    ! python test.py molo2 --pretrained_models_path "/content/ID-disentanglement/pre" \
        --load_checkpoint "/content/drive/MyDrive/checkpoint" \
        --input_dir "/input/" \
        --output_dir "/out" \
        --test_func interpolate

    hierarchy:

    input folder
      id folder
          id.jpg
          id1.jpg
      attr folder
          att1.jpg

    I get this error:

    Wrong data format for id
    Wrong data format for attr

    opened by molo32 1
  • Poor Results

    I'm getting poor results running the pretrained model. For example, with both id and attr set to the attached image (id_0), I get the attached prediction (prediction_0).

    I'm running:

    python test.py Name --pretrained_models_path checkpoints --id_dir "testinput" --attr_dir "testtarget" --output_dir "results" --test_func infer_on_dirs

    The only change I made to the code was stylegan.py:121, where I changed "_convolution_op" to "convolution_op", since it seems Keras updated the function name.

    Additional info: OS - Windows 11 conda list: Name Version Build Channel absl-py 1.2.0 pypi_0 pypi astunparse 1.6.3 pypi_0 pypi bzip2 1.0.8 he774522_0 ca-certificates 2022.07.19 haa95532_0 cachetools 5.2.0 pypi_0 pypi certifi 2022.9.14 py37haa95532_0 charset-normalizer 2.1.1 pypi_0 pypi cmake 3.22.1 h9ad04ae_0 colorama 0.4.5 pypi_0 pypi cudatoolkit 11.2.2 h933977f_10 conda-forge cudnn 8.1.0.77 h3e0f4f4_0 conda-forge cycler 0.11.0 pypi_0 pypi dlib 19.24.0 pypi_0 pypi flatbuffers 2.0.7 pypi_0 pypi fonttools 4.37.3 pypi_0 pypi gast 0.4.0 pypi_0 pypi google-auth 2.11.1 pypi_0 pypi google-auth-oauthlib 0.4.6 pypi_0 pypi google-pasta 0.2.0 pypi_0 pypi grpcio 1.48.1 pypi_0 pypi h5py 3.7.0 pypi_0 pypi idna 3.4 pypi_0 pypi imageio 2.22.0 pypi_0 pypi importlib-metadata 4.12.0 pypi_0 pypi keras 2.10.0 pypi_0 pypi keras-preprocessing 1.1.2 pypi_0 pypi kiwisolver 1.4.4 pypi_0 pypi libclang 14.0.6 pypi_0 pypi libuv 1.40.0 he774522_0 lz4-c 1.9.3 h2bbff1b_1 markdown 3.4.1 pypi_0 pypi markupsafe 2.1.1 pypi_0 pypi matplotlib 3.5.3 pypi_0 pypi mtcnn 0.1.1 pypi_0 pypi networkx 2.6.3 pypi_0 pypi numpy 1.21.6 pypi_0 pypi oauthlib 3.2.1 pypi_0 pypi opencv-python 4.6.0.66 pypi_0 pypi openssl 1.1.1q h2bbff1b_0 opt-einsum 3.3.0 pypi_0 pypi packaging 21.3 pypi_0 pypi pillow 9.2.0 pypi_0 pypi pip 22.1.2 py37haa95532_0 protobuf 3.19.5 pypi_0 pypi pyasn1 0.4.8 pypi_0 pypi pyasn1-modules 0.2.8 pypi_0 pypi pyparsing 3.0.9 pypi_0 pypi python 3.7.13 h6244533_0 python-dateutil 2.8.2 pypi_0 pypi pywavelets 1.3.0 pypi_0 pypi requests 2.28.1 pypi_0 pypi requests-oauthlib 1.3.1 pypi_0 pypi rsa 4.9 pypi_0 pypi scikit-image 0.19.3 pypi_0 pypi scipy 1.7.3 pypi_0 pypi setuptools 63.4.1 py37haa95532_0 six 1.16.0 pypi_0 pypi sqlite 3.39.2 h2bbff1b_0 tensorboard 2.10.0 pypi_0 pypi tensorboard-data-server 0.6.1 pypi_0 pypi tensorboard-plugin-wit 1.8.1 pypi_0 pypi tensorflow 2.10.0 pypi_0 pypi tensorflow-addons 0.18.0 pypi_0 pypi tensorflow-estimator 2.10.0 pypi_0 pypi tensorflow-io-gcs-filesystem 0.27.0 pypi_0 pypi termcolor 2.0.1 pypi_0 pypi tifffile 2021.11.2 pypi_0 pypi tqdm 4.64.1 pypi_0 pypi typeguard 2.13.3 pypi_0 pypi typing-extensions 4.3.0 pypi_0 pypi urllib3 1.26.12 pypi_0 pypi vc 14.2 h21ff451_1 vs2015_runtime 14.27.29016 h5e58377_2 werkzeug 2.2.2 pypi_0 pypi wheel 0.37.1 pyhd3eb1b0_0 wincertstore 0.2 py37haa95532_2 wrapt 1.14.1 pypi_0 pypi xz 5.2.5 h8cc25b3_1 zipp 3.8.1 pypi_0 pypi zlib 1.2.12 h8cc25b3_3 zstd 1.5.2 h19a0ad4_0

    opened by k128 4
  • About Pretrained model

    Thanks for your sharing. I tried to train your code, but I found that there is no pretrained G. I downloaded your pretrained models from the setup files; your Google Drive has four files, but there is no stylegan_G_256x256_synthesis file.

    I have only used the StyleGAN model in PyTorch; I want to know if I made a mistake.

    opened by xuedue 4
  • Possible to run inference on wild images?

    Hey @YotamNitzan, thanks for sharing such awesome work! I want to transfer the head pose (attributes) from one source image to a target image. The two images are of different people, both in the wild. Is this possible?

    opened by amil-rp-work 0