Official code for CVPR2022 paper: Depth-Aware Generative Adversarial Network for Talking Head Video Generation

Overview

📖 Depth-Aware Generative Adversarial Network for Talking Head Video Generation (CVPR 2022)

🔥 If DaGAN is helpful in your photos/projects, please help to star it or recommend it to your friends. Thanks 🔥

[Paper]   [Project Page]   [Demo]   [Poster Video]

Fa-Ting Hong, Longhao Zhang, Li Shen, Dan Xu
The Hong Kong University of Science and Technology

Cartoon Sample

cartoon.mp4

Human Sample

celeb.mp4

Voxceleb1 Dataset

🚩 Updates

  • 🔥 🔥 May 19, 2022: The face depth model trained on VoxCeleb2 is released! (The corresponding DaGAN checkpoint will be released soon.) Click the LINK

  • 🔥 🔥 April 25, 2022: Integrated into Huggingface Spaces 🤗 using Gradio. Try out the web demo: Hugging Face Spaces (GPU version will come soon!)

  • 🔥 🔥 Added the SPADE model, which produces more natural results.

🔧 Dependencies and Installation

Installation

We now provide a clean version of DaGAN, which does not require customized CUDA extensions.

  1. Clone repo

    git clone https://github.com/harlanhong/CVPR2022-DaGAN.git
    cd CVPR2022-DaGAN
  2. Install dependent packages

    pip install -r requirements.txt
    
    ## Install the Face Alignment lib
    cd face-alignment
    pip install -r requirements.txt
    python setup.py install
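
After installation, a quick sanity check (our suggestion, not a script in the repo) is to confirm that PyTorch and the bundled face-alignment package import correctly and that a GPU is visible:

    import torch
    import face_alignment  # should resolve to the copy installed from ./face-alignment

    print('torch:', torch.__version__, '| CUDA available:', torch.cuda.is_available())
    print('face_alignment imported from:', face_alignment.__file__)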

Quick Inference

We take the paper version as an example. More models can be found here.

YAML configs

See config/vox-adv-256.yaml for a description of each parameter.

Pre-trained checkpoint

The pre-trained checkpoint of the face depth network and our DaGAN checkpoints can be found at the following link: OneDrive.

Inference! To run a demo, download a checkpoint and run the following command:

CUDA_VISIBLE_DEVICES=0 python demo.py  --config config/vox-adv-256.yaml --driving_video path/to/driving --source_image path/to/source --checkpoint path/to/checkpoint --relative --adapt_scale --kp_num 15 --generator DepthAwareGenerator 

The result will be stored in result.mp4. The driving videos and source images should be cropped before they can be used in our method. To obtain semi-automatic crop suggestions, you can run python crop-video.py --inp some_youtube_video.mp4; it will generate ffmpeg commands for the crops.
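
For reference, the released checkpoints expect 256x256 inputs. Below is a hedged sketch of the kind of loading and resizing involved, using imageio and scikit-image as is common in FOMM-style code; the repo's demo.py may differ in details, and the paths are placeholders:

    import imageio
    from skimage.transform import resize

    # Placeholder paths; use your own cropped source image and driving video.
    source_image = imageio.imread('path/to/source.png')
    reader = imageio.get_reader('path/to/driving.mp4')
    fps = reader.get_meta_data()['fps']
    driving_video = [frame for frame in reader]
    reader.close()

    # Resize everything to 256x256 and drop any alpha channel.
    source_image = resize(source_image, (256, 256))[..., :3]
    driving_video = [resize(frame, (256, 256))[..., :3] for frame in driving_video]
    print(len(driving_video), 'driving frames at', fps, 'fps')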

💻 Training

Datasets

  1. VoxCeleb. Please follow the instruction from https://github.com/AliaksandrSiarohin/video-preprocessing.

Train on VoxCeleb

To train a model on a specific dataset, run:

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --master_addr="0.0.0.0" --master_port=12348 run.py --config config/vox-adv-256.yaml --name DaGAN --rgbd --batchsize 12 --kp_num 15 --generator DepthAwareGenerator

The code will create a folder in the log directory (each run creates a new name-specific directory). Checkpoints will be saved to this folder. To check the loss values during training, see log.txt. By default the batch size is tuned to run on 8 GeForce RTX 3090 GPUs (you can obtain the best performance after about 150 epochs). You can change the batch size in the train_params section of the .yaml file.

🚩 Please use multiple GPUs to train your own model; if you use only one GPU, you will run into the in-place operation problem.

You can also monitor the training loss by running:

tensorboard --logdir log/DaGAN/log

If you kill your process in the middle of training for some reason, a zombie process may remain; you can kill it using our provided tool:

python kill_port.py PORT
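
The idea is simply to find whichever processes are still bound to the training port and terminate them. A minimal stand-in sketch (assuming psutil is installed; the repo's actual kill_port.py may be implemented differently):

    import sys
    import psutil

    port = int(sys.argv[1])
    for proc in psutil.process_iter(['pid', 'name']):
        try:
            # Kill any process that still holds a socket on the given port.
            if any(conn.laddr.port == port for conn in proc.connections(kind='inet')):
                print('killing pid', proc.pid, proc.info['name'])
                proc.kill()
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            continue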

Training on your own dataset

  1. Resize all the videos to the same size, e.g. 256x256. The videos can be '.gif' files, '.mp4' files, or folders with images. We recommend the latter: for each video, make a separate folder containing all the frames in '.png' format. This format is lossless and has better I/O performance (see the sketch after this list).

  2. Create a folder data/dataset_name with two subfolders, train and test; put the training videos in train and the testing videos in test.

  3. Create a config config/dataset_name.yaml; in dataset_params, specify the root directory as root_dir: data/dataset_name. Also adjust the number of epochs in train_params.
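
For step 1, a hedged sketch of dumping a video into a folder of 256x256 PNG frames (using imageio and scikit-image; video_to_frames is a hypothetical helper, and the paths are placeholders to adapt to your data):

    import os
    import imageio
    from skimage import img_as_ubyte
    from skimage.transform import resize

    def video_to_frames(video_path, out_dir, size=(256, 256)):
        # Write a video out as a folder of resized, lossless PNG frames.
        os.makedirs(out_dir, exist_ok=True)
        reader = imageio.get_reader(video_path)
        for i, frame in enumerate(reader):
            frame = img_as_ubyte(resize(frame, size)[..., :3])
            imageio.imwrite(os.path.join(out_dir, f'{i:07d}.png'), frame)
        reader.close()

    video_to_frames('raw_videos/some_video.mp4', 'data/dataset_name/train/some_video')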

📜 Acknowledgement

Our DaGAN implementation is inspired by FOMM. We appreciate the authors of FOMM for making their code available to the public.

📜 BibTeX

@inproceedings{hong2022depth,
  title={Depth-Aware Generative Adversarial Network for Talking Head Video Generation},
  author={Hong, Fa-Ting and Zhang, Longhao and Shen, Li and Xu, Dan},
  booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022}
}

📧 Contact

If you have any questions, please email [email protected].

Comments
  • add web demo/model to Huggingface

    add web demo/model to Huggingface

    Hi, would you be interested in adding DaGAN to Hugging Face? The Hub offers free hosting, and it would make your work more accessible and visible to the rest of the ML community. Models/datasets/spaces (web demos) can be added to a user account or organization, similar to GitHub.

    Example from other organizations: Keras: https://huggingface.co/keras-io Microsoft: https://huggingface.co/microsoft Facebook: https://huggingface.co/facebook

    Example spaces with repos: github: https://github.com/salesforce/BLIP Spaces: https://huggingface.co/spaces/salesforce/BLIP

    github: https://github.com/facebookresearch/omnivore Spaces: https://huggingface.co/spaces/akhaliq/omnivore

    and here are guides for adding spaces/models/datasets to your org

    How to add a Space: https://huggingface.co/blog/gradio-spaces how to add models: https://huggingface.co/docs/hub/adding-a-model uploading a dataset: https://huggingface.co/docs/datasets/upload_dataset.html

    Please let us know if you would be interested and if you have any questions, we can also help with the technical implementation.

    opened by AK391 18
  • crop face

    crop face

    Your work is amazing!

    But I have two questions:

    1. Is it possible to pad more borders when cropping faces? Or does it have to crop the face strictly according to the detected box?
    2. https://github.com/harlanhong/CVPR2022-DaGAN/blob/78b22edcdbb4192b81c5adf343f980b42cddfe5d/crop-video.py#L25 When -1 is used, an IndexError is raised.
    opened by Carlyx 6
  • Error No such file or directory: 'depth/models/weights_19/encoder.pth'

    Error No such file or directory: 'depth/models/weights_19/encoder.pth'

    I downloaded the pre-trained weights DaGAN_vox_adv_256.pth.tar from OneDrive and put them in a checkpoints directory. When I run the demo command with --cpu, I get the following error:

    (dagan) user@Users-MacBook-Air CVPR2022-DaGAN % python demo.py --config config/vox-adv-256.yaml --driving_video ./assets/driving.mp4 --source_image ./assets/leo.jpg --checkpoint ./checkpoints/DaGAN_vox_adv_256.pth.tar --relative --adapt_scale --kp_num 15 --generator DepthAwareGenerator --cpu                 
    Traceback (most recent call last):
      File "demo.py", line 165, in <module>
        loaded_dict_enc = torch.load('depth/models/weights_19/encoder.pth')
      File "/Users/user/miniconda3/envs/dagan/lib/python3.7/site-packages/torch/serialization.py", line 594, in load
        with _open_file_like(f, 'rb') as opened_file:
      File "/Users/user/miniconda3/envs/dagan/lib/python3.7/site-packages/torch/serialization.py", line 230, in _open_file_like
        return _open_file(name_or_buffer, mode)
      File "/Users/user/miniconda3/envs/dagan/lib/python3.7/site-packages/torch/serialization.py", line 211, in __init__
        super(_open_file, self).__init__(open(name, mode))
    FileNotFoundError: [Errno 2] No such file or directory: 'depth/models/weights_19/encoder.pth'
    

    How can I solve it? Many thanks, great job and good luck for ICLR :) !

    opened by tikitong 5
  • The generated face remains the same pose

    The generated face remains the same pose

    Thanks for your good work; however, when I tried running the demo, the generated video tends to keep the same pose as the source image, while in the paper (Figure 2) the generated results follow the driving frame's pose (this is also the case for the results in the README). Why is this the case?

    https://user-images.githubusercontent.com/29053705/165462856-da97c242-b091-4609-b122-414c4216f492.mp4

    opened by hallwaypzh 4
  • Error in running a demo version!

    Error in running a demo version!

    Hello! Thanks for openly sharing this amazing work! My research is also related to generating talking faces. I get an error when I try to run: CUDA_VISIBLE_DEVICES=0 python demo.py --config config/vox-adv-256.yaml --driving_video data/2.mp4 --source_image data/2.jpg --checkpoint depth/models/weights_19/encoder.pth --relative --adapt_scale --kp_num 15 --generator DepthAwareGenerator [screenshot attached] Can you please point out where I made a mistake while running the demo?

    opened by muxiddin19 3
  • testing error

    testing error

    when i run this command CUDA_VISIBLE_DEVICES=0 python demo.py --config config/vox-adv-256.yaml --driving_video ./example_video.mp4 --source_image ./example_image.png --checkpoint ./checkpoints/SPADE_DaGAN_vox_adv_256.pth.tar --relative --adapt_scale --kp_num 15 --generator SPADEDepthAwareGenerator --result_video results/example_out.mp4 --find_best_frame

    I got the following error:

    Traceback (most recent call last):
      File "demo.py", line 169, in <module>
        depth_encoder.load_state_dict(filtered_dict_enc)
      File "/home/miniconda3/envs/dagan/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1407, in load_state_dict
        self.__class__.__name__, "\n\t".join(error_msgs)))
    RuntimeError: Error(s) in loading state_dict for ResnetEncoder:
        size mismatch for encoder.layer1.0.conv1.weight: copying a param with shape torch.Size([64, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
        size mismatch for encoder.layer1.1.conv1.weight: copying a param with shape torch.Size([64, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
        size mismatch for encoder.layer2.0.conv1.weight: copying a param with shape torch.Size([128, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([128, 64, 3, 3]).
        size mismatch for encoder.layer2.0.downsample.0.weight: copying a param with shape torch.Size([512, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([128, 64, 1, 1]).
        size mismatch for encoder.layer2.0.downsample.1.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([128]).
        size mismatch for encoder.layer2.0.downsample.1.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([128]).
        size mismatch for encoder.layer2.0.downsample.1.running_mean: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([128]).
        size mismatch for encoder.layer2.0.downsample.1.running_var: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([128]).
        size mismatch for encoder.layer2.1.conv1.weight: copying a param with shape torch.Size([128, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
        size mismatch for encoder.layer3.0.conv1.weight: copying a param with shape torch.Size([256, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 128, 3, 3]).
        size mismatch for encoder.layer3.0.downsample.0.weight: copying a param with shape torch.Size([1024, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 128, 1, 1]).
        size mismatch for encoder.layer3.0.downsample.1.weight: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([256]).
        size mismatch for encoder.layer3.0.downsample.1.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([256]).
        size mismatch for encoder.layer3.0.downsample.1.running_mean: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([256]).
        size mismatch for encoder.layer3.0.downsample.1.running_var: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([256]).
        size mismatch for encoder.layer3.1.conv1.weight: copying a param with shape torch.Size([256, 1024, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
        size mismatch for encoder.layer4.0.conv1.weight: copying a param with shape torch.Size([512, 1024, 1, 1]) from checkpoint, the shape in current model is torch.Size([512, 256, 3, 3]).
        size mismatch for encoder.layer4.0.downsample.0.weight: copying a param with shape torch.Size([2048, 1024, 1, 1]) from checkpoint, the shape in current model is torch.Size([512, 256, 1, 1]).
        size mismatch for encoder.layer4.0.downsample.1.weight: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([512]).
        size mismatch for encoder.layer4.0.downsample.1.bias: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([512]).
        size mismatch for encoder.layer4.0.downsample.1.running_mean: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([512]).
        size mismatch for encoder.layer4.0.downsample.1.running_var: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([512]).
        size mismatch for encoder.layer4.1.conv1.weight: copying a param with shape torch.Size([512, 2048, 1, 1]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
        size mismatch for encoder.fc.weight: copying a param with shape torch.Size([1000, 2048]) from checkpoint, the shape in current model is torch.Size([1000, 512]).

    good first issue 
    opened by Ha0Tang 3
  • Fix some codes about py-feat library

    Fix some codes about py-feat library

    Hi @harlanhong !

    First, I'm very pleased to see your work, DaGAN. Thanks for your effort. The reason I opened this issue is that I want to fix your code a little bit. In your utils.py, there is some code using the py-feat library, and it causes a problem. I don't know which version of py-feat you use, but you should change the code as follows, because the latest version works this way:

    p1 = out1.facepose().values # AS-IS
    p1 = out1.facepose.values # TO-BE
    

    because the latest version of py-feat exposes facepose as a property, like this:

    @property
    def facepose(self):
        """Returns the facepose data using the columns set in fex.facepose_columns

        Returns:
            DataFrame: facepose data
        """
        return self[self.facepose_columns]
    

    Could you fix this problem for anybody who will use this code?

    opened by samsara-ku 3
  • Size of input

    Size of input

    Hello, thanks for your great work! I have a question: does your model support input resolutions higher than 256px, e.g. 512px? I see that in the code the input video and image are resized to 256px, which causes a loss of visual quality. Is there a way to use 512x512 images/videos without losing quality?

    opened by NikitaKononov 3
  • Error while training on VoxCeleb

    Error while training on VoxCeleb

    Hi, I am trying to train DaGAN on VoxCeleb. The following error is occurring.

      File "run.py", line 144, in <module>
        train(config, generator, discriminator, kp_detector, opt.checkpoint, log_dir, dataset, opt.local_rank,device,opt,writer)
      File "/home/madhav3101/gan_codes/CVPR2022-DaGAN/train.py", line 66, in train
        losses_generator, generated = generator_full(x)
      File "/home/madhav3101/env_tf/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home/madhav3101/gan_codes/CVPR2022-DaGAN/modules/model.py", line 189, in forward
        kp_driving = self.kp_extractor(driving)
      File "/home/madhav3101/env_tf/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home/madhav3101/env_tf/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 886, in forward
        output = self.module(*inputs[0], **kwargs[0])
      File "/home/madhav3101/env_tf/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home/madhav3101/gan_codes/CVPR2022-DaGAN/modules/keypoint_detector.py", line 51, in forward
        feature_map = self.predictor(x) #x bz,4,64,64
      File "/home/madhav3101/env_tf/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home/madhav3101/gan_codes/CVPR2022-DaGAN/modules/util.py", line 252, in forward
        return self.decoder(self.encoder(x))
      File "/home/madhav3101/env_tf/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home/madhav3101/gan_codes/CVPR2022-DaGAN/modules/util.py", line 178, in forward
        out = up_block(out)
      File "/home/madhav3101/env_tf/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home/madhav3101/gan_codes/CVPR2022-DaGAN/modules/util.py", line 92, in forward
        out = self.norm(out)
      File "/home/madhav3101/env_tf/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home/madhav3101/env_tf/lib/python3.7/site-packages/torch/nn/modules/batchnorm.py", line 745, in forward
        self.eps,
      File "/home/madhav3101/env_tf/lib/python3.7/site-packages/torch/nn/functional.py", line 2283, in batch_norm
        input, weight, bias, running_mean, running_var, training, momentum, eps, torch.backends.cudnn.enabled
     (function _print_stack)
    ^M  0%|          | 0/3965 [00:26<?, ?it/s]
    ^M  0%|          | 0/150 [00:26<?, ?it/s]
    
    Traceback (most recent call last):
      File "run.py", line 144, in <module>
        train(config, generator, discriminator, kp_detector, opt.checkpoint, log_dir, dataset, opt.local_rank,device,opt,writer)
      File "/home/madhav3101/gan_codes/CVPR2022-DaGAN/train.py", line 70, in train
        loss.backward()
      File "/home/madhav3101/env_tf/lib/python3.7/site-packages/torch/_tensor.py", line 307, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
      File "/home/madhav3101/env_tf/lib/python3.7/site-packages/torch/autograd/__init__.py", line 156, in backward
        allow_unreachable=True, accumulate_grad=True)  # allow_unreachable flag
    RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [32]] is at version 4; expected version 3 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
    /home/madhav3101/env_tf/lib/python3.7/site-packages/torch/distributed/launch.py:186: FutureWarning: The module torch.distributed.launch is deprecated
    and will be removed in future. Use torchrun.
    Note that --use_env is set by default in torchrun.
    If your script expects `--local_rank` argument to be set, please
    change it to read from `os.environ['LOCAL_RANK']` instead. See
    https://pytorch.org/docs/stable/distributed.html#launch-utility for
    further instructions
    
      FutureWarning,
    ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 13113) of binary: /home/madhav3101/env_tf/bin/python
    Traceback (most recent call last):
      File "/home/madhav3101/miniconda3/lib/python3.7/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/home/madhav3101/miniconda3/lib/python3.7/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/home/madhav3101/env_tf/lib/python3.7/site-packages/torch/distributed/launch.py", line 193, in <module>
        main()
      File "/home/madhav3101/env_tf/lib/python3.7/site-packages/torch/distributed/launch.py", line 189, in main
        launch(args)
      File "/home/madhav3101/env_tf/lib/python3.7/site-packages/torch/distributed/launch.py", line 174, in launch
        run(args)
      File "/home/madhav3101/env_tf/lib/python3.7/site-packages/torch/distributed/run.py", line 713, in run
        )(*cmd_args)
      File "/home/madhav3101/env_tf/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
        return launch_agent(self._config, self._entrypoint, list(args))
      File "/home/madhav3101/env_tf/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 261, in launch_agent
        failures=result.failures,
    torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
    ============================================================
    run.py FAILED
    ------------------------------------------------------------
    Failures:
      <NO_OTHER_FAILURES>
    ------------------------------------------------------------
    Root Cause (first observed failure):
    [0]:
      time      : 2022-04-25_17:30:13
      host      : gnode90.local
      rank      : 0 (local_rank: 0)
      exitcode  : 1 (pid: 13113)
      error_file: <N/A>
      traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
    ============================================================
    
    
    opened by mdv3101 3
  • real-time property of FDN

    real-time property of FDN

    Hi, DaGAN is fantastic! I'm interested in the face depth network, and I want to know the real-time performance of FDN; in other words, how much time does it take to infer a depth map for an input picture? I want to know whether it is suitable for mobile platforms.

    opened by Bruce-yu199 2
  • Missing setup.py

    Missing setup.py

    Hi,

    Thanks for this wonderful work!

    It seems that the setup.py file is missing in this new version. Is it possible for you to upload it again? Thanks a lot for the help!

    Best, Wenhua

    opened by WinnieLaugh 2
  • Error when using encoder and decoder from Vox Celeb2 with Spade checkpoint

    Error when using encoder and decoder from Vox Celeb2 with Spade checkpoint

    Hi - awesome work!

    I get the following error when using the Encoder and Decoder from the trained Vox Celeb2 checkpoint:

    Screen Shot 2022-11-15 at 12 19 55 PM

    I'm using the SPADE_DaGAN_vox_adv_256.pth.tar checkpoint, as I could not find a checkpoint that is updated with Vox Celeb2.

    Would appreciate any help!

    (Also, less importantly, where did you get the cartoon faces in your demo from? Would love to run some tests with those.)

    opened by samching 2
  • Using manually labeled keypoints in the source image

    Using manually labeled keypoints in the source image

    Hello, thank you very much for open-sourcing the code. face_alignment cannot recognize a face in my source image, so I would like to use manually labeled keypoints to accomplish this task. 1. I would like to ask about the relationship between value and jacobian in kp_source. 2. The value is a {Tensor(1,15,2)}; why 15 points? 3. The jacobian is a {Tensor(1,15,2,2)}; why is this the output? 4. How can I replace kp_source here with manually labeled keypoints? Thank you very much!!!
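
    (For readers with the same question: in FOMM-style models such as DaGAN, 'value' holds the coordinates of num_kp=15 keypoints normalized to [-1, 1], and 'jacobian' holds a 2x2 local affine transform per keypoint. Below is a hedged, unofficial sketch of building such a dict from hand-labeled pixel coordinates; manual_kp is a hypothetical helper, and identity jacobians are only a neutral default, not necessarily optimal.)

        import torch

        def manual_kp(points_px, image_size=256, num_kp=15):
            # points_px: list of (x, y) pixel coordinates, one per keypoint (hand-labeled)
            assert len(points_px) == num_kp
            value = torch.tensor(points_px, dtype=torch.float32)  # (15, 2)
            value = value / (image_size - 1) * 2 - 1              # map pixels to [-1, 1]
            jacobian = torch.eye(2).repeat(num_kp, 1, 1)          # (15, 2, 2), identity per keypoint
            return {'value': value.unsqueeze(0),                  # (1, 15, 2)
                    'jacobian': jacobian.unsqueeze(0)}            # (1, 15, 2, 2)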

    opened by hsk-yjk 2
  • evaluate_PRMSE_AUCON

    evaluate_PRMSE_AUCON

    When I run utils.py, the following errors are raised: 'Detector' object has no attribute 'detect_facepose'; 'Fex' object has no attribute 'facepose'; 'Fex' object has no attribute 'Pitch'.

    opened by Carlyx 1