Official implementation for "Style Transformer for Image Inversion and Editing" (CVPR 2022)

Overview

Style Transformer for Image Inversion and Editing (CVPR 2022)

https://arxiv.org/abs/2203.07932

Existing GAN inversion methods fail to provide latent codes for reliable reconstruction and flexible editing simultaneously. This paper presents a transformer-based image inversion and editing model for a pretrained StyleGAN that not only achieves lower distortion but also offers high quality and flexibility for editing. The proposed model employs a CNN encoder to provide multi-scale image features as keys and values. Meanwhile, it regards the style codes to be determined for different layers of the generator as queries. It first initializes the query tokens as learnable parameters and maps them into the $W^+$ space. Then multi-stage alternating self- and cross-attention is applied, updating the queries so that the generator inverts the input. Moreover, based on the inverted code, we investigate reference- and label-based attribute editing through a pretrained latent classifier, and achieve flexible image-to-image translation with high-quality results. Extensive experiments show better performance on both inversion and editing tasks within StyleGAN.


Our method uses a novel multi-stage style transformer in the $W^+$ space to invert images accurately. We further characterize image editing in StyleGAN as either label-based or reference-based, and use a non-linear classifier to generate the editing vector.
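
As a rough illustration of one such stage (a minimal sketch with made-up module names and shapes, not the code in this repository), the style queries first attend to each other and then to the multi-scale encoder features that act as keys and values:

import torch
import torch.nn as nn

class StyleTransformerStage(nn.Module):
    """One stage: self-attention among the style queries, then cross-attention
    from the queries to the CNN image features. Illustrative sketch only."""
    def __init__(self, dim=512, heads=4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, queries, img_tokens):
        # queries:    (B, 18, 512) style codes, one per layer of a 1024px StyleGAN2 (W+)
        # img_tokens: (B, N, 512)  flattened multi-scale features from the CNN encoder
        attn, _ = self.self_attn(queries, queries, queries)
        queries = queries + attn
        attn, _ = self.cross_attn(queries, img_tokens, img_tokens)
        queries = queries + attn
        return queries + self.mlp(queries)

# Learnable query tokens are refined over several stages, then fed to the generator.
queries = nn.Parameter(torch.randn(1, 18, 512)).expand(2, -1, -1)  # batch of 2
img_tokens = torch.randn(2, 16 * 16, 512)                          # encoder features as tokens
w_plus = StyleTransformerStage()(queries, img_tokens)              # stack several stages in practice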

Getting Started

Prerequisites

  • Ubuntu 16.04
  • NVIDIA GPU + CUDA CuDNN
  • Python 3

Pretrained Models

We provide pre-trained inversion models for the face and car domains.
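
If you want to peek inside a downloaded checkpoint before running inference, a minimal sketch is below; the filename is a placeholder, and the 'opts' entry is an assumption borrowed from pSp-style encoder checkpoints, so adjust if the released files are organized differently.

import torch

# Inspect a downloaded inversion checkpoint (placeholder filename).
ckpt = torch.load('pretrained_models/style_transformer_ffhq.pt', map_location='cpu')
print(list(ckpt.keys()))
if 'opts' in ckpt:          # assumption: pSp-style checkpoints store their training options
    print(ckpt['opts'])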

Training

Preparing Datasets

Update configs/paths_config.py with the necessary data paths and model paths for training and inference.

dataset_paths = {
    'train_data': '/path/to/train/data',
    'test_data': '/path/to/test/data',
}
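
The same file also defines the pretrained-model paths. As a hedged example (the key names below are illustrative; use the ones configs/paths_config.py actually defines):

model_paths = {
    # Illustrative entries only -- check configs/paths_config.py for the expected key names.
    'stylegan_ffhq': 'pretrained_models/stylegan2-ffhq-config-f.pt',
    'ir_se50': 'pretrained_models/model_ir_se50.pth',
}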

Preparing Generator

We use rosinality's StyleGAN2 implementation. You can download the 256px pretrained model from that project and put it in the pretrained_models/ directory.
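
As a quick sanity check of the downloaded weights, the sketch below loads them into the bundled generator; it assumes the checkpoint follows rosinality's format with a 'g_ema' entry, and the size argument should match the resolution of the model you downloaded.

import torch
from models.stylegan2.model import Generator  # rosinality-style generator bundled with this repo

# A minimal sketch, assuming a rosinality-format checkpoint with a 'g_ema' entry.
size, style_dim, n_mlp = 1024, 512, 8  # adjust size to the checkpoint's resolution
generator = Generator(size, style_dim, n_mlp)
ckpt = torch.load('pretrained_models/stylegan2-ffhq-config-f.pt', map_location='cpu')
generator.load_state_dict(ckpt['g_ema'], strict=False)
generator.eval()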

Training Inversion Model

python scripts/train.py \
--dataset_type=ffhq_encode \
--exp_dir=results/train_style_transformer \
--batch_size=8 \
--test_batch_size=8 \
--val_interval=5000 \
--save_interval=10000 \
--stylegan_weights=pretrained_models/stylegan2-ffhq-config-f.pt

Inference

python scripts/inference.py \
--exp_dir=results/infer_style_transformer \
--checkpoint_path=results/train_style_transformer/checkpoints/best_model.pt \
--data_path=/test_data \
--test_batch_size=8

Citation

If you use this code for your research, please cite:

@article{hu2022style,
  title={Style Transformer for Image Inversion and Editing},
  author={Hu, Xueqi and Huang, Qiusheng and Shi, Zhengyi and Li, Siyuan and Gao, Changxin and Sun, Li and Li, Qingli},
  journal={arXiv preprint arXiv:2203.07932},
  year={2022}
}
Comments
  • About Model Size

    Thank you for your excellent work. In Section 5.1, Implementation Details, you mention that your model is based on the pSp encoder. So why is your model lighter than pSp (as shown in Table 1)?

    opened by gzhhhere 1
  • style_transformer.py, AttributeError: 'NoneType' object has no attribute 'repeat'

    		if self.opts.start_from_latent_avg:
    			if self.opts.learn_in_w:
    				codes = codes + self.latent_avg.repeat(codes.shape[0], 1)
    			else:
    				codes = codes + self.latent_avg.repeat(codes.shape[0], 1, 1)
    

    AttributeError: 'NoneType' object has no attribute 'repeat'. How can I fix it? Does it mean that I need another model, so that self.latent_avg isn't None?


    Why did you delete some code?


    I also downloaded fused_bias_act.cpp, fused_bias_act_kernel.cu, upfirdn2d.cpp, upfirdn2d_kernel.cu, and w_norm.py from https://github.com/omertov/encoder4editing.

    You should release your final code.

    opened by JNash123 0
  • TypeError: upfirdn2d(): incompatible function arguments. The following argument types are supported:

    CUDA_VISIBLE_DEVICES=3 python scripts/train.py --dataset_type=ffhq_encode --exp_dir=results/debug --batch_size=2 --test_batch_size=2 --val_interval=2500 --save_interval=5000 --stylegan_weights=pretrained_models/stylegan2-ffhq-config-f.pt

    {'batch_size': 2, 'board_interval': 50, 'checkpoint_path': None, 'dataset_type': 'ffhq_encode', 'exp_dir': 'results/debug', 'id_lambda': 0.1, 'image_interval': 5000, 'input_nc': 3, 'l2_lambda': 1.0, 'l2_ref_lambda': 1.0, 'l2_src_lambda': 1.0, 'label_nc': 0, 'learn_in_w': False, 'learning_rate': 0.0001, 'lpips_lambda': 0.8, 'max_steps': 600000, 'moco_lambda': 0, 'optim_name': 'ranger', 'output_size': 1024, 'resize_factors': None, 'save_interval': 5000, 'start_from_latent_avg': True, 'stylegan_weights': 'pretrained_models/stylegan2-ffhq-config-f.pt', 'test_batch_size': 2, 'test_workers': 0, 'train_decoder': False, 'val_interval': 2500, 'workers': 0}

    Loading encoders weights from irse50!
    Loading decoder weights from pretrained!
    Loading ResNet ArcFace
    Loading dataset for ffhq_encode
    Number of training samples: 70000
    Number of test samples: 30000
    Traceback (most recent call last):
      File "scripts/train.py", line 35, in <module>
        main()
      File "scripts/train.py", line 31, in main
        coach.train()
      File "/home/hba/xurz/style-transformer-backup/./training/coach_invert.py", line 82, in train
        y_hat, latent = self.net.forward(x, return_latents=True)
      File "/home/hba/xurz/style-transformer-backup/./models/style_transformer.py", line 73, in forward
        images, result_latent = self.decoder([codes],
      File "/home/hba/miniconda3/envs/pytorch/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home/hba/miniconda3/envs/pytorch/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 166, in forward
        return self.module(*inputs[0], **kwargs[0])
      File "/home/hba/miniconda3/envs/pytorch/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home/hba/xurz/style-transformer-backup/./models/stylegan2/model.py", line 530, in forward
        out = conv1(out, latent[:, i], noise=noise1)
      File "/home/hba/miniconda3/envs/pytorch/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home/hba/xurz/style-transformer-backup/./models/stylegan2/model.py", line 333, in forward
        out = self.conv(input, style)
      File "/home/hba/miniconda3/envs/pytorch/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home/hba/xurz/style-transformer-backup/./models/stylegan2/model.py", line 258, in forward
        out = self.blur(out)
      File "/home/hba/miniconda3/envs/pytorch/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home/hba/xurz/style-transformer-backup/./models/stylegan2/model.py", line 85, in forward
        out = upfirdn2d(input, self.kernel, pad=self.pad)
    TypeError: upfirdn2d(): incompatible function arguments. The following argument types are supported:
        1. (arg0: at::Tensor, arg1: at::Tensor, arg2: int, arg3: int, arg4: int, arg5: int, arg6: int, arg7: int, arg8: int, arg9: int) -> at::Tensor

    Invoked with: tensor(...)  (the input activation tensor; values elided), tensor([[0.0625, 0.1875, 0.1875, 0.0625], [0.1875, 0.5625, 0.5625, 0.1875], [0.1875, 0.5625, 0.5625, 0.1875], [0.0625, 0.1875, 0.1875, 0.0625]], device='cuda:0'); kwargs: pad=(1, 1)
    
    opened by sharif-xu 2