Code for the Motion Representations for Articulated Animation paper

Overview

Motion Representations for Articulated Animation

This repository contains the source code for the CVPR'2021 paper Motion Representations for Articulated Animation by Aliaksandr Siarohin, Oliver Woodford, Jian Ren, Menglei Chai and Sergey Tulyakov.

For more qualitative examples, visit our project page.

Example animation

Here is an example of several images produced by our method. The first column shows the driving video. In the remaining columns, the top image is animated using motions extracted from the driving video.

Screenshot

Installation

We support Python 3. To install the dependencies, run:

pip install -r requirements.txt
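
For example, inside a fresh virtual environment (optional; this is just one common way to keep the pinned dependencies isolated):

python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt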

YAML configs

There are several configuration files, one per dataset, in the config folder, named config/dataset_name.yaml. See config/dataset.yaml for a description of each parameter.

See config/vox256.yaml for a description of the parameters. The configurations are adjusted to run on a single V100 GPU; training on a 256x256 dataset takes approximately 2 days.
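
As a rough illustration only (not copied from the repository; the train_params grouping is an assumption, while root_dir, id_sampling, num_epochs, num_repeats and batch_size are parameters referred to elsewhere on this page), a dataset config is laid out roughly like this:

dataset_params:
  root_dir: data/dataset_name   # folder containing the train/ and test/ splits
  id_sampling: False            # disable identity-based video sampling
train_params:                   # grouping name is illustrative
  num_epochs: 100
  num_repeats: 150              # how often the dataset is repeated per epoch
  batch_size: 8                 # reduce if training does not fit in GPU memory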

Pre-trained checkpoints

Checkpoints can be found in the checkpoints folder. Since the checkpoints are large, we store them with git lfs. Either run git lfs pull or download the checkpoints manually from GitHub.
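
For example, assuming git-lfs is installed on your machine, pulling the checkpoints typically looks like:

git lfs install
git lfs pull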

Animation Demo

To run a demo, download a checkpoint and run the following command:

python demo.py  --config config/dataset_name.yaml --driving_video path/to/driving --source_image path/to/source --checkpoint path/to/checkpoint

The result will be stored in result.mp4. To use Animation via Disentanglement, add --mode avd; for standard animation, add --mode standard instead.
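
For example, with the TED checkpoint (the driving-video and source-image file names below are placeholders):

python demo.py --config config/ted384.yaml --driving_video driving.mp4 --source_image source.png --checkpoint checkpoints/ted384.pth --mode avd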

Colab Demo

We prepared a demo runnable in Google Colab; see demo.ipynb.

Training

To train a model, run:

CUDA_VISIBLE_DEVICES=0 python run.py --config config/dataset_name.yaml --device_ids 0

The code will create a folder in the log directory (each run creates a new time-stamped folder), and checkpoints will be saved there. To check the loss values during training, see log.txt. You can also check training-data reconstructions in the train-vis subfolder. Then, to train Animation via Disentanglement (AVD), use:

CUDA_VISIBLE_DEVICES=0 python run.py --checkpoint log/{folder}/cpk.pth --config config/dataset_name.yaml --device_ids 0 --mode train_avd

Where {folder} is the name of the folder created in the previous step. (Note: escape any space with a backslash '\'.) This will use the same folder where the checkpoint was previously stored and will create a new checkpoint containing all the previous models plus the trained avd_network. You can monitor performance in the log file and visualizations in the train-vis folder.

Evaluation on video reconstruction

To evaluate reconstruction performance, run:

CUDA_VISIBLE_DEVICES=0 python run.py --config config/dataset_name.yaml --mode reconstruction --checkpoint log/{folder}/cpk.pth

Where {folder} is the name of the folder created in the previous step. (Note: escape any space with a backslash '\'.) A reconstruction subfolder will be created in the checkpoint folder. The generated videos will be stored there, and they will also be saved in the png subfolder in lossless '.png' format for evaluation. Instructions for computing the metrics from the paper can be found here.
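
The metric computation itself is covered by the instructions linked above; purely as an unofficial sketch, a per-frame L1 error over the lossless '.png' outputs could be computed along these lines (the two-folder layout with matching file names is an assumption, not the repository's actual output structure):

import os

import imageio
import numpy as np

def mean_l1(generated_dir, ground_truth_dir):
    """Average absolute pixel difference over matching frames (unofficial sketch)."""
    errors = []
    for name in sorted(os.listdir(generated_dir)):
        gen = imageio.imread(os.path.join(generated_dir, name)).astype(np.float64) / 255.0
        gt = imageio.imread(os.path.join(ground_truth_dir, name)).astype(np.float64) / 255.0
        errors.append(np.abs(gen - gt).mean())
    return float(np.mean(errors))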

TED dataset

To obtain the TED dataset, run the following commands:

git clone https://github.com/AliaksandrSiarohin/video-preprocessing
cd video-preprocessing
python load_videos.py --metadata ../data/ted384-metadata.csv --format .mp4 --out_folder ../data/TED384-v2 --workers 8 --image_shape 384,384

Training on your own dataset

  1. Resize all the videos to the same size, e.g. 256x256. The videos can be '.gif' files, '.mp4' files, or folders of images. We recommend the latter: for each video, make a separate folder containing all its frames in '.png' format. This format is lossless and has better I/O performance. (A minimal frame-extraction sketch is given after this list.)

  2. Create a folder data/dataset_name with two subfolders, train and test; put training videos in train and testing videos in test.

  3. Create a config file config/dataset_name.yaml. See config/vox256.yaml for a description of the parameters. Specify the dataset root in dataset_params by setting root_dir: data/dataset_name. Adjust other parameters as desired, such as the number of epochs. Set id_sampling: False if you do not want to use id sampling.
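
For step 1 above, here is a minimal frame-extraction sketch (assuming imageio and Pillow are installed; the paths and the 256x256 size are placeholders):

import os

import imageio
from PIL import Image

def video_to_frames(video_path, out_dir, size=(256, 256)):
    """Dump every frame of a video as a resized, zero-padded '.png' file."""
    os.makedirs(out_dir, exist_ok=True)
    reader = imageio.get_reader(video_path)
    for i, frame in enumerate(reader):
        Image.fromarray(frame).resize(size).save(os.path.join(out_dir, '%07d.png' % i))
    reader.close()

# Placeholder paths; adjust to your dataset:
# video_to_frames('raw/video1.mp4', 'data/dataset_name/train/video1')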

Additional notes

Citation:

@inproceedings{siarohin2021motion,
        author={Siarohin, Aliaksandr and Woodford, Oliver and Ren, Jian and Chai, Menglei and Tulyakov, Sergey},
        title={Motion Representations for Articulated Animation},
        booktitle = {CVPR},
        year = {2021}
}
Comments
  • Warping Result

    Hi @AliaksandrSiarohin,

    May I ask a question about the warping result? It seems the moved region in the warped image (the third column) is filled in based on the estimated optical flow. I tried to use an off-the-shelf model to predict the optical flow; however, the warping result has a double-region issue. I would appreciate any hint you can give me to solve this problem.

    opened by zhangyahu1 8
  • Cannot download the TED dataset

    When I tried to download the TED dataset, it seemed that all the links failed, and I don't know how to fix it. In addition, I want to figure out the format of the dataset files. I find that the generalization ability of the model is limited on data collected in the wild, even when I try all the pre-trained models. Thanks!

    /home/liuxinqi/anaconda3/envs/art-ani/lib/python3.7/site-packages/scipy/init.py:140: UserWarning: A NumPy version >=1.16.5 and <1.23.0 is required for this version of SciPy (detected version 1.15.0)
      UserWarning)

    0it [00:00, ?it/s]Can not load video iMFJef3xnmg, broken link 1it [00:27, 27.56s/it]Can not load video Cetg4gu0oQQ, broken link Can not load video 360bU-vBJOI, broken link Can not load video xHHb7R3kx40, broken link Can not load video LnJwH_PZXnM, broken link 5it [00:27, 5.53s/it]Can not load video ClfBxWPkBKU, broken link Can not load video cK74vhqzeeQ, broken link Can not load video 1g-1_Y3fvUg, broken link 8it [00:28, 3.59s/it]Can not load video lyR-K2CZIHQ, broken link 9it [00:29, 3.25s/it]Can not load video 95ovIJ3dsNk, broken link Can not load video kBBmVezBUkg, broken link Can not load video iFTWM7HV2UI, broken link 12it [00:29, 2.44s/it]Can not load video WfTZ5iIUn4s, broken link Can not load video t0Cr64zCc38, broken link 14it [00:29, 2.13s/it]Can not load video kyaiTGmwxnU, broken link Can not load video 9alL95G293s, broken link 16it [00:30, 1.90s/it]Can not load video -nKdufEaL8k, broken link 17it [00:30, 1.82s/it]Can not load video oEIYHTlbeLA, broken link 18it [00:31, 1.73s/it]Can not load video v9EKV2nSU8w, broken link 19it [00:31, 1.65s/it]Can not load video 51k3UASQE5E, broken link Can not load video VM6HZqQKhok, broken link Can not load video 51k3UASQE5E, broken link 22it [00:31, 1.44s/it]Can not load video HI7zfpitZpo, broken link 23it [00:31, 1.38s/it]Can not load video FDhlOovaGrI, broken link 24it [00:32, 1.36s/it]Can not load video 08z_xW-szwM, broken link Can not load video fO2htapfNhA, broken link 26it [00:32, 1.26s/it]Can not load video idfv7Lw4Y_s, broken link 27it [00:32, 1.22s/it]Can not load video I3BJVaioX_k, broken link 28it [00:33, 1.18s/it]Can not load video MB5IX-np5fE, broken link 29it [00:33, 1.15s/it]Can not load video FqrLUtIFVjs, broken link 30it [00:33, 1.12s/it]Can not load video 13zN4-MVM9g, broken link 31it [00:33, 1.08s/it]Can not load video iFTWM7HV2UI, broken link 32it [00:34, 1.07s/it]Can not load video HiwJ0hNl1Fw, broken link 33it [00:34, 1.04s/it]Can not load video 1AT5klu_yAQ, broken link 34it [00:34, 1.02s/it]Can not load video TLZ6W-Nqv1I, broken link Can not load video iMFJef3xnmg, broken link 36it [00:34, 1.03it/s]Can not load video pxEcvU0Vp_M, broken link Can not load video kmbui1xF8DE, broken link 38it [00:35, 1.07it/s]Can not load video 2VBkDNzeRZM, broken link Can not load video yNhu0MG_2MA, broken link 40it [00:36, 1.11it/s]Can not load video VJoQj00RZHg, broken link 41it [00:36, 1.13it/s]Can not load video SF9qq6vQ3Pg, broken link

    multiprocessing.pool.RemoteTraceback:
    """
    Traceback (most recent call last):
      File "/home/liuxinqi/anaconda3/envs/art-ani/lib/python3.7/multiprocessing/pool.py", line 121, in worker
        result = (True, func(*args, **kwds))
      File "load_videos.py", line 72, in run
        save(os.path.join(args.out_folder, partition, path), entry['frames'], args.format)
      File "/home/liuxinqi/disk/study/articulated-animation-main/video-preprocessing/util.py", line 118, in save
        imageio.mimsave(path, frames)
      File "/home/liuxinqi/anaconda3/envs/art-ani/lib/python3.7/site-packages/imageio/core/functions.py", line 347, in mimwrite
        raise RuntimeError('Zero images were written.')
    RuntimeError: Zero images were written.
    """

    opened by LiuXinqi12 7
  • basic information about error

    Thanks for sharing the code; this is a great project. But when I run it, an error appears, as follows:

    Traceback (most recent call last):
      File "demo.py", line 134, in <module>
        main(parser.parse_args())
      File "demo.py", line 113, in main
        checkpoint_path=opt.checkpoint, cpu=opt.cpu)
      File "demo.py", line 59, in load_checkpoints
        checkpoint = torch.load(checkpoint_path)
      File "/home/zhaopeng/anaconda3/envs/articulatedanimation/lib/python3.6/site-packages/torch/serialization.py", line 529, in load
        return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
      File "/home/zhaopeng/anaconda3/envs/articulatedanimation/lib/python3.6/site-packages/torch/serialization.py", line 692, in _legacy_load
        magic_number = pickle_module.load(f, **pickle_load_args)
    _pickle.UnpicklingError: invalid load key, 'v'.
    

    My command is as follows: CUDA_VISIBLE_DEVICES='0' python demo.py --config config/ted384.yaml --driving_video nv.mp4 --source_image nv.png --checkpoint checkpoints/ted384.pth --mode avd

    My environment is as follows:

    torch 1.4.0
    torchvision 0.5.0
    

    This is a little different from requirements.txt, where your torchvision is 0.2.1.

    Could you help me, thanks.

    opened by Adorablepet 4
  • Retrain on TED dataset

    Thanks for sharing your nice work!

    I am trying to retrain your model on the TED dataset, and I run the command:

    CUDA_VISIBLE_DEVICES=0,1 python run.py --config config/ted384.yaml --device_ids 0,1

    With

    batch_size: 8, num_epochs: 100, num_repeats: 150, input: images in '.png' format.

    It seems the training process will take more than 7 days. May I know how long it took you? Thanks.

    opened by zhangyahu1 3
  • Can't download checkpoints/mgif256.pth for demo. Repo over data quota

    Getting the error:

    Error downloading object: checkpoints/mgif256.pth (58b796e): Smudge error: Error downloading checkpoints/mgif256.pth (58b796e31f763ccfbb240f959c9d92c2afe8a6e57da303e07d8fabd1f2921c68): batch response: This repository is over its data quota. Account responsible for LFS bandwidth should purchase more data packs to restore access.
    
    opened by aaronrmm 2
  • Error while downloading TED Dataset

    On running: python load_videos.py --metadata ../data/ted384-metadata.csv --format .mp4 --out_folder ../data/TED384-v2 --image_shape 384,384

    I get:

    0it [00:00, ?it/s]Unknown encoder 'libx264'
    multiprocessing.pool.RemoteTraceback:
    """
    Traceback (most recent call last):
      File "/home2/neelabh/miniconda3/envs/py36/lib/python3.6/site-packages/imageio/plugins/ffmpeg.py", line 661, in _append_data
        self._proc.stdin.write(im.tostring())
    BrokenPipeError: [Errno 32] Broken pipe

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/home2/neelabh/miniconda3/envs/py36/lib/python3.6/multiprocessing/pool.py", line 119, in worker
        result = (True, func(*args, **kwds))
      File "load_videos.py", line 72, in run
        save(os.path.join(args.out_folder, partition, path), entry['frames'], args.format)
      File "/home2/neelabh/articulated-animation/video-preprocessing/util.py", line 118, in save
        imageio.mimsave(path, frames)
      File "/home2/neelabh/miniconda3/envs/py36/lib/python3.6/site-packages/imageio/core/functions.py", line 341, in mimwrite
        writer.append_data(im)
      File "/home2/neelabh/miniconda3/envs/py36/lib/python3.6/site-packages/imageio/core/format.py", line 492, in append_data
        return self._append_data(im, total_meta)
      File "/home2/neelabh/miniconda3/envs/py36/lib/python3.6/site-packages/imageio/plugins/ffmpeg.py", line 666, in _append_data
        raise IOError(msg)
    OSError: [Errno 32] Broken pipe

    FFMPEG COMMAND: /home2/neelabh/miniconda3/envs/py36/bin/ffmpeg -y -f rawvideo -vcodec rawvideo -s 384x384 -pix_fmt rgb24 -r 10.00 -i - -an -vcodec libx264 -pix_fmt yuv420p -crf 25 -v warning /home2/neelabh/articulated-animation/data/TED384-v2/test/yMWlkJAqKYU#005708#005838.mp4

    FFMPEG STDERR OUTPUT:

    """

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "load_videos.py", line 100, in <module>
        for chunks_data in tqdm(pool.imap_unordered(run, zip(video_ids, args_list))):
      File "/home2/neelabh/miniconda3/envs/py36/lib/python3.6/site-packages/tqdm/_tqdm.py", line 931, in __iter__
        for obj in iterable:
      File "/home2/neelabh/miniconda3/envs/py36/lib/python3.6/multiprocessing/pool.py", line 735, in next
        raise value
    OSError: [Errno 32] Broken pipe

    FFMPEG COMMAND: /home2/neelabh/miniconda3/envs/py36/bin/ffmpeg -y -f rawvideo -vcodec rawvideo -s 384x384 -pix_fmt rgb24 -r 10.00 -i - -an -vcodec libx264 -pix_fmt yuv420p -crf 25 -v warning /home2/neelabh/articulated-animation/data/TED384-v2/test/yMWlkJAqKYU#005708#005838.mp4

    FFMPEG STDERR OUTPUT:

    According to this answer on Stack Overflow, it may be because the request is getting blocked or taking too long.

    opened by kumarneelabh13 2
  • Evaluation

    Thanks for your nice work! I have a question about the evaluation: are the source and driving frames from the same video for the quantitative evaluation listed in Table 2?

    opened by zhangyahu1 2
  • Question about vox256 model

    @AliaksandrSiarohin Hi, I have two simple questions about the vox256 model in your provided checkpoints.

    • What is the resolution of the data for training vox256, 256x256, or 384x384?
    • This model is trained from which version of the VoxCeleb dataset? V1 or V2?
    opened by ChengBinJin 2
  • source image compatibility with the driving video

    Hi @AliaksandrSiarohin,

    Thanks for sharing this amazing work. While using custom videos with the source images available on the internet, I found a few compatibility-related issues:

    1. The generated video is highly distorted if the source image is not appropriately aligned with the driving video.
    2. If the initial pose of the source image is not similar to that of the entity in the driving video, the model fails to process the image and generate a video.

    For instance, if the source image only contains a person's face and, in contrast, the driving video shows a speaker with movements, the generated video will be full of distortions.

    opened by addy1997 1
  • Fail to download TED dataset

    Thanks for sharing your nice work! I met a problem when downloading the TED dataset. I get:

    /home/yzhang4/anaconda3/envs/motion/lib/python3.6/importlib/_bootstrap.py:205: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
      return f(*args, **kwds)
    /home/yzhang4/anaconda3/envs/motion/lib/python3.6/importlib/_bootstrap.py:205: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
      return f(*args, **kwds)
    0it [00:00, ?it/s]
    multiprocessing.pool.RemoteTraceback:
    """
    Traceback (most recent call last):
      File "/home/yzhang4/anaconda3/envs/motion/lib/python3.6/multiprocessing/pool.py", line 119, in worker
        result = (True, func(*args, **kwds))
      File "load_videos.py", line 32, in run
        download(video_id.split('#')[0], args)
      File "load_videos.py", line 25, in download
        video_path], stdout=DEVNULL, stderr=DEVNULL)
      File "/home/yzhang4/anaconda3/envs/motion/lib/python3.6/subprocess.py", line 267, in call
        with Popen(*popenargs, **kwargs) as p:
      File "/home/yzhang4/anaconda3/envs/motion/lib/python3.6/subprocess.py", line 707, in __init__
        restore_signals, start_new_session)
      File "/home/yzhang4/anaconda3/envs/motion/lib/python3.6/subprocess.py", line 1326, in _execute_child
        raise child_exception_type(errno_num, err_msg)
    PermissionError: [Errno 13] Permission denied
    """
    The above exception was the direct cause of the following exception:
    Traceback (most recent call last):
      File "load_videos.py", line 103, in <module>
        for chunks_data in tqdm(pool.imap_unordered(run, zip(video_ids, args_list))):
      File "/home/yzhang4/anaconda3/envs/motion/lib/python3.6/site-packages/tqdm/std.py", line 1178, in __iter__
        for obj in iterable:
      File "/home/yzhang4/anaconda3/envs/motion/lib/python3.6/multiprocessing/pool.py", line 699, in next
        raise value
    PermissionError: [Errno 13] Permission denied
    

    Could you please give me some advice to solve this problem?

    opened by zhangyahu1 1
  • TED metadata.csv

    Couldn't find 'ted384-metadata.csv' in the './data' folder.

    Instead, there is a file 'ted-metadata.csv' that has all 0s for the 'start' and 'end' entries (but start/end frame numbers are included in video_id).

    Is it OK to modify 'load_videos.py' to support this configuration in ted-metadata.csv?

    But I wonder whether ted-metadata.csv is the same as ted384-metadata.csv, except for the start/end frame issue.

    opened by eastchun 1
  • Some questions about Teddataset

    First of all, thank you very much for your valuable research. I have some questions about the TED dataset.

    1. In the paper, I found that TED is preprocessed by cropping the upper part of the human body from the videos. I would appreciate knowing the detailed way TED was cropped.
    2. When I run the command "python load_videos.py --metadata ../data/ted384-metadata.csv --format .mp4 --out_folder ../data/TED384-v2 --workers 8 --image_shape 384,384", I only find ted-metadata.csv instead of ted384-metadata.csv in the data folder. I also have some questions about ted-metadata.csv: first, the start and end columns are all zero; second, the partition column is all "train". Is the ted-metadata.csv in the data folder correct? If not, can you provide the correct ted384-metadata.csv? Again, thank you for your remarkable work!
    opened by Number18-tong 0
  • produce smooth video results

    Hello, I'm a little confused. Your method ultimately processes video, converting it frame by frame. So why can such a method produce relatively smooth video results without considering the temporal consistency of the video? Or why not model temporal consistency?

    opened by ElephantIcon 0
  • What version of CUDA, cuDNN are you using?

    Hello.

    I am now trying to replicate your research. However, I am having trouble with an error when I try to run the training. I believe the cause is the version of CUDA or cuDNN, so I would like to know which versions you used when you were able to run the training. Also, if my guess is wrong, I would like to know the cause.

    Thanks.

    opened by Yanomizu 0
  • error when training avd with videos

    I get this error when trying to train with videos. Standard training does work with video inputs, and training AVD with '.png' frames is fine. But when switching to AVD mode with videos, there is this error:

    File "C:\NEURAL\articulated-animation\train_avd.py", line 75, in train_avd source = x['source'][:6].cuda() UnboundLocalError: local variable 'x' referenced before assignment

    When the background is static, do I just have to set bg_type: 'zero'?

    opened by surfingnirvana 0
  • Update the code

    The current code can't be run on Python 3.10: the dependencies are not compatible and some parts of the code have changed. This pull request:

    • Leaves the dependency versions unpinned (should be changed in a follow-up PR)
    • Updates the YAML loading function to pass a Loader
    • Passes memtest=False to the video/image loaders to avoid breaking on memory checks
    • Adds a TikTok dataset example
    • Includes a link to trained weights on Google Drive, because git lfs is resulting in errors (as described in an open issue)
    opened by andreclaudino 0
[ICCV'21] Official implementation for the paper Social NCE: Contrastive Learning of Socially-aware Motion Representations

CrowdNav with Social-NCE This is an official implementation for the paper Social NCE: Contrastive Learning of Socially-aware Motion Representations by

VITA lab at EPFL 125 Dec 23, 2022
Code for "LASR: Learning Articulated Shape Reconstruction from a Monocular Video". CVPR 2021.

LASR Installation Build with conda conda env create -f lasr.yml conda activate lasr # install softras cd third_party/softras; python setup.py install;

Google 157 Dec 26, 2022
This repository contains the accompanying code for Deep Virtual Markers for Articulated 3D Shapes, ICCV'21

Deep Virtual Markers This repository contains the accompanying code for Deep Virtual Markers for Articulated 3D Shapes, ICCV'21 Getting Started Get sa

KimHyomin 45 Oct 7, 2022
Source code of our BMVC 2021 paper: AniFormer: Data-driven 3D Animation with Transformer

AniFormer This is the PyTorch implementation of our BMVC 2021 paper AniFormer: Data-driven 3D Animation with Transformer. Haoyu Chen, Hao Tang, Nicu S

null 7 Oct 22, 2021
Where2Act: From Pixels to Actions for Articulated 3D Objects

Where2Act: From Pixels to Actions for Articulated 3D Objects The Proposed Where2Act Task. Given as input an articulated 3D object, we learn to propose

Kaichun Mo 69 Nov 28, 2022
Official PyTorch implementation of CAPTRA: CAtegory-level Pose Tracking for Rigid and Articulated Objects from Point Clouds

CAPTRA: CAtegory-level Pose Tracking for Rigid and Articulated Objects from Point Clouds Introduction This is the official PyTorch implementation of o

Yijia Weng 96 Dec 7, 2022
SCALE: Modeling Clothed Humans with a Surface Codec of Articulated Local Elements (CVPR 2021)

SCALE: Modeling Clothed Humans with a Surface Codec of Articulated Local Elements (CVPR 2021) This repository contains the official PyTorch implementa

Qianli Ma 133 Jan 5, 2023
A-SDF: Learning Disentangled Signed Distance Functions for Articulated Shape Representation (ICCV 2021)

A-SDF: Learning Disentangled Signed Distance Functions for Articulated Shape Representation (ICCV 2021) This repository contains the official implemen

null 81 Dec 14, 2022
Pytorch implementation for A-NeRF: Articulated Neural Radiance Fields for Learning Human Shape, Appearance, and Pose

A-NeRF: Articulated Neural Radiance Fields for Learning Human Shape, Appearance, and Pose Paper | Website | Data A-NeRF: Articulated Neural Radiance F

Shih-Yang Su 172 Dec 22, 2022
ViSER: Video-Specific Surface Embeddings for Articulated 3D Shape Reconstruction

ViSER: Video-Specific Surface Embeddings for Articulated 3D Shape Reconstruction. NeurIPS 2021.

Gengshan Yang 59 Nov 25, 2022
Exploring Versatile Prior for Human Motion via Motion Frequency Guidance (3DV2021)

Exploring Versatile Prior for Human Motion via Motion Frequency Guidance This is the codebase for video-based human motion reconstruction in human-mot

Jiachen Xu 5 Jul 14, 2022
Official Pytorch Implementation of 3DV2021 paper: SAFA: Structure Aware Face Animation.

SAFA: Structure Aware Face Animation (3DV2021) Official Pytorch Implementation of 3DV2021 paper: SAFA: Structure Aware Face Animation. Getting Started

QiulinW 122 Dec 23, 2022
style mixing for animation face

An implementation of StyleGAN on Animation dataset. Install git clone https://github.com/MorvanZhou/anime-StyleGAN cd anime-StyleGAN pip install -r re

Morvan 46 Nov 30, 2022
CharacterGAN: Few-Shot Keypoint Character Animation and Reposing

CharacterGAN Implementation of the paper "CharacterGAN: Few-Shot Keypoint Character Animation and Reposing" by Tobias Hinz, Matthew Fisher, Oliver Wan

Tobias Hinz 181 Dec 27, 2022
GANimation: Anatomically-aware Facial Animation from a Single Image (ECCV'18 Oral) [PyTorch]

GANimation: Anatomically-aware Facial Animation from a Single Image [Project] [Paper] Official implementation of GANimation. In this work we introduce

Albert Pumarola 1.8k Dec 28, 2022
Dynamic Realtime Animation Control

Our project is targeted at making an application that dynamically detects the user’s expressions and gestures and projects it onto an animation software which then renders a 2D/3D animation realtime that gets broadcasted live.

Harsh Avinash 10 Aug 1, 2022
Image Segmentation Animation using Quadtree concepts.

QuadTree Image Segmentation Animation using QuadTree concepts. Usage usage: quad.py [-h] [-fps FPS] [-i ITERATIONS] [-ws WRITESTART] [-b] [-img] [-s S

Alex Eidt 29 Dec 25, 2022
[SIGGRAPH 2022 Journal Track] AvatarCLIP: Zero-Shot Text-Driven Generation and Animation of 3D Avatars

AvatarCLIP: Zero-Shot Text-Driven Generation and Animation of 3D Avatars Fangzhou Hong1*  Mingyuan Zhang1*  Liang Pan1  Zhongang Cai1,2,3  Lei Yang2 

Fangzhou Hong 749 Jan 4, 2023
Code for ICCV 2021 paper "HuMoR: 3D Human Motion Model for Robust Pose Estimation"

Code for ICCV 2021 paper "HuMoR: 3D Human Motion Model for Robust Pose Estimation"

Davis Rempe 367 Dec 24, 2022