[CVPR 2022] Thin-Plate Spline Motion Model for Image Animation.

Overview

Source code of the CVPR'2022 paper "Thin-Plate Spline Motion Model for Image Animation"

Paper | Supp

Example animation

Example animations on the VoxCeleb (vox) and TED-talks (ted) datasets.

PS: The paper trains the model for 100 epochs for a fair comparison. You can use more data and train for more epochs to get better performance.

Web demo for animation

  • Try the web demo for animation here: Replicate
  • Google Colab: Open In Colab

Pre-trained models

Installation

We support Python 3 (Python 3.9 is recommended). To install the dependencies, run:

pip install -r requirements.txt
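
After installation, a quick sanity check that PyTorch can see a GPU (a generic check, not part of this repo):

import torch

print(torch.__version__)           # should match the version pinned in requirements.txt
print(torch.cuda.is_available())   # True if a CUDA device is usable for training/inference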

YAML configs

There are several configuration files, one per dataset, in the config folder, named config/dataset_name.yaml.

See the description of the parameters in config/taichi-256.yaml.
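
The configs are plain YAML, so you can also inspect the parameters programmatically; a minimal sketch (the exact top-level sections are defined in each config file):

import yaml

with open('config/taichi-256.yaml') as f:
    config = yaml.safe_load(f)
print(list(config.keys()))  # top-level parameter sections of this config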

Datasets

  1. MGif. Follow Monkey-Net.

  2. TaiChiHD and VoxCeleb. Follow instructions from video-preprocessing.

  3. TED-talks. Follow instructions from MRAA.

Training

To train a model on a specific dataset, run:

CUDA_VISIBLE_DEVICES=0,1 python run.py --config config/dataset_name.yaml --device_ids 0,1

A log folder named after the timestamp will be created. Checkpoints, loss values, and reconstruction results will be saved to this folder.

Training AVD network

To train the AVD network on a specific dataset, run:

CUDA_VISIBLE_DEVICES=0 python run.py --mode train_avd --checkpoint '{checkpoint_folder}/checkpoint.pth.tar' --config config/dataset_name.yaml

Checkpoints, loss values, and reconstruction results will be saved to {checkpoint_folder}.

Evaluation on video reconstruction

To evaluate the reconstruction performance, run:

CUDA_VISIBLE_DEVICES=0 python run.py --mode reconstruction --config config/dataset_name.yaml --checkpoint '{checkpoint_folder}/checkpoint.pth.tar'

A reconstruction subfolder will be created in {checkpoint_folder}. The generated videos will be stored in this folder; they will also be stored in the png subfolder in lossless '.png' format for evaluation. To compute metrics, follow the instructions from pose-evaluation.
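
The official metrics come from pose-evaluation; purely as a rough sanity check, here is a minimal sketch of a mean L1 comparison between two folders of lossless '.png' frames (the folder paths below are assumptions, not the repo's layout):

import numpy as np
import imageio.v2 as imageio
from glob import glob

def mean_l1(dir_a, dir_b):
    # average absolute pixel difference over paired, sorted frames
    errs = []
    for fa, fb in zip(sorted(glob(dir_a + '/*.png')), sorted(glob(dir_b + '/*.png'))):
        a = imageio.imread(fa).astype(np.float64) / 255.0
        b = imageio.imread(fb).astype(np.float64) / 255.0
        errs.append(np.abs(a - b).mean())
    return float(np.mean(errs))

print(mean_l1('reconstruction/png', 'ground_truth/png'))  # hypothetical paths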

Image animation demo

  • notebook: open demo.ipynb, edit the config cell, and run it for image animation.
  • python:
CUDA_VISIBLE_DEVICES=0 python demo.py --config config/vox-256.yaml --checkpoint checkpoints/vox.pth.tar --source_image ./source.jpg --driving_video ./driving.mp4
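
For reference, the preprocessing around the model is simple; a minimal sketch of loading and resizing the inputs with imageio and scikit-image (256x256 is assumed for the vox model; this is not the repo's exact code):

import imageio.v2 as imageio
from skimage.transform import resize

source_image = imageio.imread('./source.jpg')
reader = imageio.get_reader('./driving.mp4')   # requires imageio-ffmpeg
fps = reader.get_meta_data()['fps']

# normalize everything to float images at the model's expected resolution
source_image = resize(source_image, (256, 256))[..., :3]
driving_video = [resize(frame, (256, 256))[..., :3] for frame in reader]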

Acknowledgments

The main code is based upon FOMM and MRAA.

Thanks for these excellent works!

Thanks to iperov: this work has been integrated into DeepFaceLive.

Comments
  • error when trying to train

      File "\Thin-Plate-Spline-Motion-Model\train.py", line 93, in train
        logger.log_epoch(epoch, model_save, inp=x, out=generated)
    UnboundLocalError: local variable 'x' referenced before assignment

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "C:\NEURAL\Thin-Plate-Spline-Motion-Model\run.py", line 83, in <module>
        train(config, inpainting, kp_detector, bg_predictor, dense_motion_network, opt.checkpoint, log_dir, dataset)
      File "C:\NEURAL\Thin-Plate-Spline-Motion-Model\train.py", line 93, in train
        logger.log_epoch(epoch, model_save, inp=x, out=generated)
    TypeError: __exit__() takes 1 positional argument but 4 were given

    opened by surfingnirvana 6
  • source image path is undefined

    I keep getting an error of:

    ---> 12 source_image = imageio.imread(source_image_path)


    The source image path, however, is definitely defined and correct. I have uploaded the png file directly into the assets folder and even put the path string "./assets/still.png" directly in line 12, rather than calling source_image_path, to double-check, but I still get the error.
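
    One generic way to rule out a working-directory mismatch (nothing repo-specific, just standard-library calls):

    import os

    print(os.getcwd())                           # where relative paths are resolved from
    print(os.path.exists('./assets/still.png'))  # False means the path is wrong relative to the cwd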

    opened by emmajane1313 6
  • output of dense_motion

    Maybe a stupid question.

    Is it possible to get the deformed/'animated' source image before using the inpaint module?

    What is the output of the dense motion module?

    out_dict['contribution_maps'] = contribution_maps
    out_dict['deformation'] = deformation  # Optical Flow
    out_dict['occlusion_map'] = occlusion_map  # Multi-resolution Occlusion Masks

    out_dict['deformed_source'] = deformed_source ?

    Thanks in advance
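
    For context, warping a source image with a dense flow field such as 'deformation' is typically done with torch.nn.functional.grid_sample. A minimal sketch, assuming a FOMM-style normalized (N, H, W, 2) sampling grid (not verified against this repo's exact tensor shapes):

    import torch
    import torch.nn.functional as F

    def warp_source(source, deformation):
        # source: (N, C, H, W); deformation: (N, H', W', 2) with values in [-1, 1]
        _, h_old, w_old, _ = deformation.shape
        _, _, h, w = source.shape
        if (h_old, w_old) != (h, w):
            # bring the flow field to the source resolution before sampling
            deformation = deformation.permute(0, 3, 1, 2)
            deformation = F.interpolate(deformation, size=(h, w), mode='bilinear', align_corners=True)
            deformation = deformation.permute(0, 2, 3, 1)
        return F.grid_sample(source, deformation, align_corners=True)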

    opened by instant-high 6
  • added this model to DeepFaceLive

    I added this model to DeepFaceLive https://github.com/iperov/DeepFaceLive/

    It took a lot of work to export to ONNX, because the forward PyTorch code is not graph-friendly.

    opened by iperov 3
  • Could not find a backend to open `RESULTS` with iomode `wI`.

    Seems to go fine right up until the end, when it's trying to open the RESULTS.MP4 file. I've installed via Conda, pip, and on Windows, and it always ends with that error; no RESULTS file is ever output.

    (Thin-Plate-Spline) PS F:\Thin-Plate-Spline-Motion-Model> python demo.py --config config/vox-256.yaml --checkpoint checkpoints/vox.pth.tar --source_image ./assets/mfox256.jpg --driving_video ./assets/output.mp4 --result_video RESULTS --mode standard --find_best_frame
    F:\Thin-Plate-Spline-Motion-Model\demo.py:147: DeprecationWarning: Starting with ImageIO v3 the behavior of this function will switch to that of iio.v3.imread. To keep the current behavior (and make this warning disappear) use import imageio.v2 as imageio or call imageio.v2.imread directly.
      source_image = imageio.imread(opt.source_image)
    C:\Users\cjay777xb.conda\envs\Thin-Plate-Spline\lib\site-packages\torchvision\models\_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and will be removed in 0.15, please use 'weights' instead.
      warnings.warn(
    C:\Users\cjay777xb.conda\envs\Thin-Plate-Spline\lib\site-packages\torchvision\models\_utils.py:223: UserWarning: Arguments other than a weight enum or None for 'weights' are deprecated since 0.13 and will be removed in 0.15. The current behavior is equivalent to passing weights=None.
      warnings.warn(msg)
    C:\Users\cjay777xb.conda\envs\Thin-Plate-Spline\lib\site-packages\torch\functional.py:478: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at C:\cb\pytorch_1000000000000\work\aten\src\ATen\native\TensorShape.cpp:2895.)
      return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
    221it [00:10, 21.03it/s]
    Best frame: 115
    100%|██████████| 106/106 [00:03<00:00, 34.26it/s]
    100%|██████████| 116/116 [00:02<00:00, 43.04it/s]
    Traceback (most recent call last):
      File "F:\Thin-Plate-Spline-Motion-Model\demo.py", line 178, in <module>
        imageio.mimsave(opt.result_video, [img_as_ubyte(frame) for frame in predictions], fps=fps)
      File "C:\Users\cjay777xb.conda\envs\Thin-Plate-Spline\lib\site-packages\imageio\v2.py", line 330, in mimwrite
        with imopen(uri, "wI", **imopen_args) as file:
      File "C:\Users\cjay777xb.conda\envs\Thin-Plate-Spline\lib\site-packages\imageio\core\imopen.py", line 303, in imopen
        raise err_type(err_msg)
    ValueError: Could not find a backend to open `RESULTS` with iomode `wI`.
    (Thin-Plate-Spline) PS F:\Thin-Plate-Spline-Motion-Model>

    (Thin-Plate-Spline) PS F:\Thin-Plate-Spline-Motion-Model> conda list

    packages in environment at C:\Users\cjay777xb.conda\envs\Thin-Plate-Spline:

    Name Version Build Channel

    av 9.2.0 pypi_0 pypi blas 2.116 mkl conda-forge blas-devel 3.9.0 16_win64_mkl conda-forge blosc 1.21.1 h74325e0_3 conda-forge brotli 1.0.9 h8ffe710_7 conda-forge brotli-bin 1.0.9 h8ffe710_7 conda-forge brotlipy 0.7.0 py39hb82d6ee_1004 conda-forge bzip2 1.0.8 h8ffe710_4 conda-forge ca-certificates 2019.11.28 hecc5488_0 conda-forge/label/cf202003 certifi 2022.9.24 py39haa95532_0 cfitsio 3.470 h2bbff1b_7 charls 2.2.0 h6c2663c_0 charset-normalizer 2.1.1 pyhd8ed1ab_0 conda-forge cloudpickle 2.2.0 pyhd8ed1ab_0 conda-forge colorama 0.4.5 pyhd8ed1ab_0 conda-forge contourpy 1.0.5 pypi_0 pypi cryptography 37.0.4 py39h7bc7c5c_0 conda-forge cudatoolkit 11.6.0 hc0ea762_10 conda-forge cycler 0.11.0 pyhd8ed1ab_0 conda-forge cytoolz 0.12.0 py39hb82d6ee_0 conda-forge dask-core 2022.9.2 pyhd8ed1ab_0 conda-forge ffmpeg 4.2 h6538335_0 conda-forge/label/cf202003 fonttools 4.25.0 pyhd3eb1b0_0 freetype 2.12.1 h546665d_0 conda-forge fsspec 2022.8.2 pyhd8ed1ab_0 conda-forge giflib 5.2.1 h8d14728_2 conda-forge icu 68.2 h0e60522_0 conda-forge idna 3.4 pyhd8ed1ab_0 conda-forge imagecodecs 2021.8.26 py39hc0a7faf_1 imageio 2.22.0 pyhfa7a67d_0 conda-forge imageio-ffmpeg 0.4.7 pyhd8ed1ab_0 conda-forge intel-openmp 2022.1.0 h57928b3_3787 conda-forge jpeg 9e h8ffe710_2 conda-forge kiwisolver 1.4.4 py39h2e07f2f_0 conda-forge lcms2 2.12 h2a16943_0 conda-forge lerc 3.0 h0e60522_0 conda-forge libaec 1.0.6 h39d44d4_0 conda-forge libblas 3.9.0 16_win64_mkl conda-forge libbrotlicommon 1.0.9 h8ffe710_7 conda-forge libbrotlidec 1.0.9 h8ffe710_7 conda-forge libbrotlienc 1.0.9 h8ffe710_7 conda-forge libcblas 3.9.0 16_win64_mkl conda-forge libclang 11.1.0 default_h5c34c98_1 conda-forge libdeflate 1.8 h2bbff1b_5 liblapack 3.9.0 16_win64_mkl conda-forge liblapacke 3.9.0 16_win64_mkl conda-forge libpng 1.6.37 h1d00b33_4 conda-forge libtiff 4.4.0 h8a3f274_0 libuv 1.44.2 h8ffe710_0 conda-forge libwebp-base 1.2.4 h8ffe710_0 conda-forge libxcb 1.13 hcd874cb_1004 conda-forge libzlib 1.2.12 h8ffe710_2 conda-forge libzopfli 1.0.3 h0e60522_0 conda-forge llvmlite 0.39.1 pypi_0 pypi locket 1.0.0 pyhd8ed1ab_0 conda-forge lz4-c 1.9.3 h8ffe710_1 conda-forge m2w64-gcc-libgfortran 5.3.0 6 conda-forge m2w64-gcc-libs 5.3.0 7 conda-forge m2w64-gcc-libs-core 5.3.0 7 conda-forge m2w64-gmp 6.1.0 2 conda-forge m2w64-libwinpthread-git 5.0.0.4634.697f757 2 conda-forge matplotlib 3.6.0 pypi_0 pypi mkl 2022.1.0 h6a75c08_874 conda-forge mkl-devel 2022.1.0 h57928b3_875 conda-forge mkl-include 2022.1.0 h6a75c08_874 conda-forge msys2-conda-epoch 20160418 1 conda-forge munkres 1.1.4 pyh9f0ad1d_0 conda-forge networkx 2.8.7 pyhd8ed1ab_0 conda-forge numba 0.56.2 pypi_0 pypi numpy 1.23.2 py39h1a62c8c_0 conda-forge opencv-python 4.6.0.66 pypi_0 pypi openjpeg 2.5.0 hc9384bd_1 conda-forge openssl 1.1.1q h2bbff1b_0 packaging 21.3 pyhd8ed1ab_0 conda-forge partd 1.3.0 pyhd8ed1ab_0 conda-forge pillow 9.2.0 py39hcef8f5f_2 conda-forge pip 22.2.2 py39haa95532_0 psutil 5.9.2 pypi_0 pypi pthread-stubs 0.4 hcd874cb_1001 conda-forge pyopenssl 22.0.0 pyhd8ed1ab_1 conda-forge pyparsing 3.0.9 pyhd8ed1ab_0 conda-forge pyqt 5.12.3 py39hb0d2dfa_4 conda-forge pyqt5-sip 4.19.18 pypi_0 pypi pyqtchart 5.12 pypi_0 pypi pyqtwebengine 5.12.1 pypi_0 pypi pysocks 1.7.1 pyh0701188_6 conda-forge python 3.9.13 h6244533_1 python-dateutil 2.8.2 pyhd8ed1ab_0 conda-forge python_abi 3.9 2_cp39 conda-forge pytorch 1.12.1 py3.9_cuda11.6_cudnn8_0 pytorch pytorch-mutex 1.0 cuda pytorch pywavelets 1.4.1 pypi_0 pypi pyyaml 6.0 py39hb82d6ee_4 conda-forge qt 5.12.9 h5909a2a_4 conda-forge requests 2.28.1 
pyhd8ed1ab_1 conda-forge scikit-image 0.19.3 py39h2e25243_1 conda-forge scipy 1.9.1 pypi_0 pypi setuptools 59.8.0 pypi_0 pypi six 1.16.0 pyh6c4a22f_0 conda-forge snappy 1.1.9 h82413e6_1 conda-forge sqlite 3.39.3 h2bbff1b_0 tbb 2021.5.0 h2d74725_1 conda-forge tifffile 2022.8.12 pypi_0 pypi tk 8.6.12 h8ffe710_0 conda-forge toolz 0.12.0 pyhd8ed1ab_0 conda-forge torchaudio 0.12.1 py39_cu116 pytorch torchvision 0.13.1 py39_cu116 pytorch tornado 6.2 py39hb82d6ee_0 conda-forge tqdm 4.64.1 pyhd8ed1ab_0 conda-forge typing_extensions 4.3.0 pyha770c72_0 conda-forge tzdata 2022c h04d1e81_0 urllib3 1.26.11 pyhd8ed1ab_0 conda-forge vc 14.2 h21ff451_1 vs2015_runtime 14.27.29016 h5e58377_2 wheel 0.37.1 pyhd3eb1b0_0 win_inet_pton 1.1.0 py39hcbf5309_4 conda-forge wincertstore 0.2 py39haa95532_2 xorg-libxau 1.0.9 hcd874cb_0 conda-forge xorg-libxdmcp 1.1.3 hcd874cb_0 conda-forge xz 5.2.6 h8d14728_0 conda-forge yaml 0.2.5 h8ffe710_2 conda-forge zfp 0.5.5 h0e60522_8 conda-forge zlib 1.2.12 h8ffe710_2 conda-forge zstd 1.5.2 h6255e5f_4 conda-forge (Thin-Plate-Spline) PS F:\Thin-Plate-Spline-Motion-Model>
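
    A likely cause (an assumption, not a confirmed fix): imageio.mimsave chooses its writer from the file extension, and --result_video RESULTS has none. Giving the output a video extension should let imageio select the ffmpeg backend:

    python demo.py --config config/vox-256.yaml --checkpoint checkpoints/vox.pth.tar --source_image ./assets/mfox256.jpg --driving_video ./assets/output.mp4 --result_video RESULTS.mp4 --mode standard --find_best_frame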

    opened by cjay777xb 2
  • Add Web Demo & Docker environment

    Hey @yoyo-nb ! 👋

    This pull request makes it possible to run your model inside a Docker environment, which makes it easier for other people to run it. We're using an open source tool called Cog to make this process easier.

    This also means we can make a web page where other people can try out your model! View it here: https://replicate.com/yoyo-nb/thin-plate-spline-motion-model. The docker file can be found under the tab ‘run model with docker’.

    Do claim the page so you can own it, customise the example gallery as you like, and push any future updates to the web demo; we'll feature it on our website and tweet about it too. You can find the 'Claim this model' button at the top of the page. Once the page is claimed, it will automatically be linked from the arXiv page as well (under "Demos").

    In case you're wondering who I am, I'm from Replicate, where we're trying to make machine learning reproducible. We got frustrated that we couldn't run all the really interesting ML work being done. So, we're going round implementing models we like. 😊

    opened by chenxwh 2
  • The Hugging Face and Replicate demos produce output videos of different quality (fixed)

    Hello! Since I don't know how to message you privately, please excuse me for leaving a comment here:

    Question: I ran your model demo with the same image and (template) driving video. On Hugging Face I get very poor results, e.g. a blurry face: https://huggingface.co/spaces/CVPR/Image-Animation-using-Thin-Plate-Spline-Motion-Model

    But the Replicate demo you built produces very good output (Replicate: https://replicate.com/yoyo-nb/thin-plate-spline-motion-model/versions/382ceb8a9439737020bad407dec813e150388873760ad4a5a83a2ad01b039977). Did you fine-tune some parameters on Replicate? Use a different dataset? Or is there a problem with how I am running it? I'm just an AI enthusiast with no programming background, so my description may be imprecise; I hope the question is clear. Looking forward to your reply. My email: [email protected] Best regards.

    Comparison video links attached. Colab and Hugging Face: https://drive.google.com/file/d/1zD5miR4985AhIQ_LovZ-7PbGRq3VrQGc/view?usp=share_link https://cdn.discordapp.com/attachments/1045752990079914016/1048012069494075442/colab.mp4

    Replicate: https://drive.google.com/file/d/1CphSJpPIb6XETd3hPyMQYgjyhD1hTPx1/view?usp=share_link https://cdn.discordapp.com/attachments/1045752990079914016/1048012069229838397/REPLICATED.mp4

    opened by mingweihehehe 1
  • Can eyes always face forward?

    First off, amazing work on this project!

    Is it possible to make the eyes always face forward so that it appears that they are talking straight into the camera at all times?

    opened by andrewkuo 1
  • hope you make a colab notebook

    great work guys!

    It would be great if you could make a Colab notebook for this project with the pretrained model, such that a user can upload a video with the action and an image, so that the motion can be transferred.

    opened by GeorvityLabs 1
  • full load yaml

    I kept getting the error:

    positional argument 'Loader' missing, from line 37 of demo.py. It seems that on PyYAML versions >= 5 this additional argument is required, or yaml.load should be replaced with safe_load or full_load. My fork added full_load and the error was removed. Hope this helps!
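
    A minimal sketch of the fix (PyYAML >= 5; either form below works):

    import yaml

    with open('config/vox-256.yaml') as f:
        config = yaml.load(f, Loader=yaml.FullLoader)  # pass an explicit Loader
        # or equivalently: config = yaml.full_load(f)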

    opened by emmajane1313 0
  • Can you provide Colab-ready code?

    Downloading the code, resolving dependency issues, etc. would eat a lot of time. A Google Colab notebook would be really helpful to quickly check the code and run it on the cloud.

    opened by snehitvaddi 0
  • Host vox.pth somewhere else, please!

    Every time I try to run the model, downloading vox.pth from cloud.tsinghua.edu.cn either fails or takes several minutes. It's currently downloading at 4.3 KB/s! Could you please host that file here on GitHub or somewhere else?

    opened by blobbfobb 1
  • Error when training on my dataset

    (T-P-S-M-M) C:\Users\k\Desktop\T-P-S-M-M>python run.py --config config/vox-256.yaml
    C:\Users\k\Desktop\T-P-S-M-M\run.py:38: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
      config = yaml.load(f)
    C:\anaconda3\envs\T-P-S-M-M\lib\site-packages\torchvision\models\_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
      warnings.warn(
    C:\anaconda3\envs\T-P-S-M-M\lib\site-packages\torchvision\models\_utils.py:223: UserWarning: Arguments other than a weight enum or None for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing weights=None.
      warnings.warn(msg)
    C:\anaconda3\envs\T-P-S-M-M\lib\site-packages\torch\functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at C:\cb\pytorch_1000000000000\work\aten\src\ATen\native\TensorShape.cpp:3191.)
      return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
    None
    Use predefined train-test split.
    Training...
    C:\anaconda3\envs\T-P-S-M-M\lib\site-packages\torchvision\models\_utils.py:223: UserWarning: Arguments other than a weight enum or None for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing weights=VGG19_Weights.IMAGENET1K_V1. You can also use weights=VGG19_Weights.DEFAULT to get the most up-to-date weights.
      warnings.warn(msg)
    0%| | 0/100 [00:09<?, ?it/s]
    Traceback (most recent call last):
      File "C:\Users\k\Desktop\T-P-S-M-M\run.py", line 83, in <module>
        train(config, inpainting, kp_detector, bg_predictor, dense_motion_network, opt.checkpoint, log_dir, dataset)
      File "C:\Users\k\Desktop\T-P-S-M-M\train.py", line 55, in train
        for x in dataloader:
      File "C:\anaconda3\envs\T-P-S-M-M\lib\site-packages\torch\utils\data\dataloader.py", line 628, in __next__
        data = self._next_data()
      File "C:\anaconda3\envs\T-P-S-M-M\lib\site-packages\torch\utils\data\dataloader.py", line 1333, in _next_data
        return self._process_data(data)
      File "C:\anaconda3\envs\T-P-S-M-M\lib\site-packages\torch\utils\data\dataloader.py", line 1359, in _process_data
        data.reraise()
      File "C:\anaconda3\envs\T-P-S-M-M\lib\site-packages\torch\_utils.py", line 543, in reraise
        raise exception
    ValueError: Caught ValueError in DataLoader worker process 0.
    Original Traceback (most recent call last):
      File "C:\anaconda3\envs\T-P-S-M-M\lib\site-packages\torch\utils\data\_utils\worker.py", line 302, in _worker_loop
        data = fetcher.fetch(index)
      File "C:\anaconda3\envs\T-P-S-M-M\lib\site-packages\torch\utils\data\_utils\fetch.py", line 58, in fetch
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "C:\anaconda3\envs\T-P-S-M-M\lib\site-packages\torch\utils\data\_utils\fetch.py", line 58, in <listcomp>
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "C:\Users\k\Desktop\T-P-S-M-M\frames_dataset.py", line 172, in __getitem__
        return self.dataset[idx % self.dataset.__len__()]
      File "C:\Users\k\Desktop\T-P-S-M-M\frames_dataset.py", line 109, in __getitem__
        path = np.random.choice(glob.glob(os.path.join(self.root_dir, name + '*.mp4')))
      File "mtrand.pyx", line 915, in numpy.random.mtrand.RandomState.choice
    ValueError: 'a' cannot be empty unless no samples are taken

    please tell me what to do
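
    The last frame of the traceback means glob.glob found no videos for the requested name, so np.random.choice received an empty list. A quick way to check the dataset layout (root_dir below is hypothetical; use the one from your dataset config):

    import glob
    import os

    root_dir = 'data/my-dataset/train'  # hypothetical; take root_dir from your config
    print(len(glob.glob(os.path.join(root_dir, '*.mp4'))))  # 0 means no .mp4 files match the expected layout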

    opened by kinkan59 2
  • Win10: Could not find a version that satisfies the requirement torch==1.10.0+cu113

    I am on Windows 10 with an Anaconda setup. I've tried Python 3.10, 3.9, and 3.7 (the last has other issues too, but also this one). Do you have any suggestions for how to debug this?

    (py39) Thin-Plate-Spline-Motion-Mode>pip install -r requirements.txt
    Collecting cffi==1.14.6
      Using cached cffi-1.14.6-cp39-cp39-win_amd64.whl (180 kB)
    Collecting cycler==0.10.0
      Using cached cycler-0.10.0-py2.py3-none-any.whl (6.5 kB)
    Collecting decorator==5.1.0
      Using cached decorator-5.1.0-py3-none-any.whl (9.1 kB)
    Collecting face-alignment==1.3.5
      Using cached face_alignment-1.3.5.tar.gz (27 kB)
      Preparing metadata (setup.py) ... done
    Collecting imageio==2.9.0
      Using cached imageio-2.9.0-py3-none-any.whl (3.3 MB)
    Collecting imageio-ffmpeg==0.4.5
      Using cached imageio_ffmpeg-0.4.5-py3-none-win_amd64.whl (22.6 MB)
    Collecting kiwisolver==1.3.2
      Using cached kiwisolver-1.3.2-cp39-cp39-win_amd64.whl (52 kB)
    Collecting matplotlib==3.4.3
      Using cached matplotlib-3.4.3-cp39-cp39-win_amd64.whl (7.1 MB)
    Collecting networkx==2.6.3
      Using cached networkx-2.6.3-py3-none-any.whl (1.9 MB)
    Collecting numpy==1.20.3
      Using cached numpy-1.20.3-cp39-cp39-win_amd64.whl (13.7 MB)
    Collecting pandas==1.3.3
      Using cached pandas-1.3.3-cp39-cp39-win_amd64.whl (10.2 MB)
    Collecting Pillow==8.3.2
      Using cached Pillow-8.3.2-cp39-cp39-win_amd64.whl (3.2 MB)
    Collecting pycparser==2.20
      Using cached pycparser-2.20-py2.py3-none-any.whl (112 kB)
    Collecting pyparsing==2.4.7
      Using cached pyparsing-2.4.7-py2.py3-none-any.whl (67 kB)
    Collecting python-dateutil==2.8.2
      Using cached python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB)
    Collecting pytz==2021.1
      Using cached pytz-2021.1-py2.py3-none-any.whl (510 kB)
    Collecting PyWavelets==1.1.1
      Using cached PyWavelets-1.1.1-cp39-cp39-win_amd64.whl (4.2 MB)
    Collecting PyYAML==5.4.1
      Using cached PyYAML-5.4.1-cp39-cp39-win_amd64.whl (213 kB)
    Collecting scikit-image==0.18.3
      Using cached scikit_image-0.18.3-cp39-cp39-win_amd64.whl (12.2 MB)
    Collecting scikit-learn==1.0
      Using cached scikit_learn-1.0-cp39-cp39-win_amd64.whl (7.2 MB)
    Collecting scipy==1.7.1
      Using cached scipy-1.7.1-cp39-cp39-win_amd64.whl (33.8 MB)
    Collecting six==1.16.0
      Using cached six-1.16.0-py2.py3-none-any.whl (11 kB)
    ERROR: Could not find a version that satisfies the requirement torch==1.10.0+cu113 (from versions: 1.7.1, 1.8.0, 1.8.1, 1.9.0, 1.9.1, 1.10.0, 1.10.1, 1.10.2, 1.11.0, 1.12.0, 1.12.1, 1.13.0)
    ERROR: No matching distribution found for torch==1.10.0+cu113
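
    The +cu113 builds are not on PyPI; they are served from the PyTorch wheel index, so a plausible fix (an assumption about the intended CUDA 11.3 build) is to point pip at that index:

    pip install torch==1.10.0+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html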
    
    opened by robi-bobi 2
  • Determining number of TPS

    Hello, I have a question about this sentence in the 4.3 Ablations section of the paper.

    The dimensions of FOMM and MRAA are K ∗ (2 + 4) and (K + 1) ∗ (2 + 4), while ours is K ∗ (6 + 5 ∗ 2) + 6.

    I do not understand where these numbers come from. Could you elaborate please? Thank you.

    opened by hanweikung 0
  • How to properly use the ted dataset

    Thanks for the repo and colab. I've gotten the demo to work with vox and a portrait. I'm trying to get ted to work.

    What do you think are the optimal parameters for the driving footage, for ted, vox, etc.?

    Edit 1: I used the ted checkpoint and config and made a little progress, with limbs showing, but it is still pretty messy. Perhaps I just need to match the dimensions of the source image to the footage perfectly in frame 1? Cropping the source image to match the driving video and removing the background from the source image made a tiny bit of progress, but the result is still bad.

    Edit 2: I see your comparison and tip to use taichi for full body: https://github.com/yoyo-nb/Thin-Plate-Spline-Motion-Model/issues/38 Will try taichi.

    Would you say for ted, the driving video should be cropped at chest level and above like in your examples? Could you include the single source in the assets folder for the demo gifs you made (instead of the row of gifs)? Does the background need to be close to a solid color? I grabbed ted footage that had more objects in the background, but the person stayed stationary in the center. My source images also had a lot of background noise. In general the output just has the center pushed out where the ted talker is, and it wobbles around a bit but there's no limb or facial recognition.

    When using ted, I left the vox config the same because I'm unclear on how to modify it. Would you say I need to use a ted-specific config and go through the parameters? I'll start looking now just in case.

    opened by GuruVirus 0