
Overview

Impersonator

PyTorch implementation of our ICCV 2019 paper:

Liquid Warping GAN: A Unified Framework for Human Motion Imitation, Appearance Transfer and Novel View Synthesis

Please clone the latest code.

[paper] [website] [Supplemental Material] [Dataset]

Update News

  • 10/05/2019: Reduced the minimum GPU memory requirement (at least 3.8 GB of free GPU memory needed).

  • 10/24/2019: Released Imper-1.2.2 and added the training document train.md.

  • 07/04/2020: Added the evaluation metrics on the iPER dataset.

Getting Started

Python 3.6+, PyTorch 1.2, torchvision 0.4, CUDA 10.0, at least 3.8 GB of GPU memory, and the other requirements listed below. All code has been tested on Linux distributions (Ubuntu 16.04 is recommended); other platforms have not been tested yet.

Requirements

pip install -r requirements.txt
apt-get install ffmpeg
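
Optionally, run a quick environment sanity check before building anything. This is just a convenience sketch; the version numbers are the ones recommended above, not hard requirements enforced by the code.

import torch
import torchvision

print(torch.__version__)        # expect ~1.2.x
print(torchvision.__version__)  # expect ~0.4.x
assert torch.cuda.is_available(), "CUDA is required"
# total GPU memory in GB; at least 3.8 GB should be free
print(torch.cuda.get_device_properties(0).total_memory / 1024 ** 3)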

Installation

cd thirdparty/neural_renderer
python setup.py install
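
After the build finishes, a minimal import check (a sketch) catches build problems early; several issues in the Comments below show what happens when the extension is compiled against a mismatched torch or CUDA version.

# If this fails with an "undefined symbol" ImportError, or rendering later
# fails with "invalid device function", rebuild neural_renderer against the
# installed torch/CUDA versions.
import neural_renderer
import neural_renderer.cuda.load_textures  # the compiled CUDA extension itself
print("neural_renderer imported successfully")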

Download resources.

  1. Download pretrains.zip from OneDrive or BaiduPan, move it to the assets directory, and unzip it.
wget -O assets/pretrains.zip https://1drv.ws/u/s!AjjUqiJZsj8whLNw4QyntCMsDKQjSg?e=L77Elv
  2. Download checkpoints.zip from OneDrive or BaiduPan, unzip it, and move the contents to the outputs directory.
wget -O outputs/checkpoints.zip https://1drv.ws/u/s!AjjUqiJZsj8whLNyoEh67Uu0LlxquA?e=dkOnhQ
  3. Download samples.zip from OneDrive or BaiduPan, unzip it, and move the contents to the assets directory (a convenience unpacking sketch follows this list).
wget -O assets/samples.zip "https://1drv.ws/u/s\!AjjUqiJZsj8whLNz4BqnSgqrVwAXoQ?e=bC86db"
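
Alternatively, the three archives can be unpacked with a short Python snippet. This is a convenience sketch; it assumes the wget commands above placed the files at these paths and that each archive contains its own top-level folder (pretrains/, checkpoints/, samples/).

import zipfile

for zip_path, dest in [
    ("assets/pretrains.zip", "assets"),
    ("outputs/checkpoints.zip", "outputs"),
    ("assets/samples.zip", "assets"),
]:
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(dest)  # unpack where the demos expect the files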

Running Demo

To reproduce the demo results shown on the project webpage, run the following scripts. The results are saved in ./outputs/results/demos.

  1. Demo of Motion Imitation

    python demo_imitator.py --gpu_ids 1
  2. Demo of Appearance Transfer

    python demo_swap.py --gpu_ids 1
  3. Demo of Novel View Synthesis

    python demo_view.py --gpu_ids 1

If you get an error like RuntimeError: CUDA out of memory, add the flag --batch_size 1; the minimum GPU memory requirement is 3.8 GB.

Running custom examples (Details)

If you want to test other inputs (your own source and reference images), here are some examples. Replace --ip YOUR_IP and --port YOUR_PORT with the address and port of your Visdom server for visualization.
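
If you do not already have a Visdom server running, here is a sketch for starting one locally (the port is an example and must match --port; the visdom package comes from requirements.txt):

# Equivalent to running `python -m visdom.server -port 31102` in a terminal.
import subprocess
import sys

subprocess.Popen([sys.executable, "-m", "visdom.server", "-port", "31102"])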

  1. Motion Imitation

    • source image from iPER dataset
    python run_imitator.py --gpu_ids 0 --model imitator --output_dir ./outputs/results/  \
        --src_path      ./assets/src_imgs/imper_A_Pose/009_5_1_000.jpg    \
        --tgt_path      ./assets/samples/refs/iPER/024_8_2    \
        --bg_ks 13  --ft_ks 3 \
        --has_detector  --post_tune  \
        --save_res --ip YOUR_IP --port YOUR_PORT
    • source image from DeepFashion dataset
    python run_imitator.py --gpu_ids 0 --model imitator --output_dir ./outputs/results/  \
    --src_path      ./assets/src_imgs/fashion_woman/Sweaters-id_0000088807_4_full.jpg    \
    --tgt_path      ./assets/samples/refs/iPER/024_8_2    \
    --bg_ks 25  --ft_ks 3 \
    --has_detector  --post_tune  \
    --save_res --ip YOUR_IP --port YOUR_PORT
    • source image from Internet
    python run_imitator.py --gpu_ids 0 --model imitator --output_dir ./outputs/results/  \
        --src_path      ./assets/src_imgs/internet/men1_256.jpg    \
        --tgt_path      ./assets/samples/refs/iPER/024_8_2    \
        --bg_ks 7   --ft_ks 3 \
        --has_detector  --post_tune --front_warp \
        --save_res --ip YOUR_IP --port YOUR_PORT
  2. Appearance Transfer

    An example where the source image comes from iPER and the reference image comes from the DeepFashion dataset.

    python run_swap.py --gpu_ids 0 --model imitator --output_dir ./outputs/results/  \
        --src_path      ./assets/src_imgs/imper_A_Pose/024_8_2_0000.jpg    \
        --tgt_path      ./assets/src_imgs/fashion_man/Sweatshirts_Hoodies-id_0000680701_4_full.jpg    \
        --bg_ks 13  --ft_ks 3 \
        --has_detector  --post_tune  --front_warp --swap_part body  \
        --save_res --ip http://10.10.10.100 --port 31102
  3. Novel View Synthesis

    python run_view.py --gpu_ids 0 --model viewer --output_dir ./outputs/results/  \
    --src_path      ./assets/src_imgs/internet/men1_256.jpg    \
    --bg_ks 13  --ft_ks 3 \
    --has_detector  --post_tune --front_warp --bg_replace \
    --save_res --ip http://10.10.10.100 --port 31102

If you get an error like RuntimeError: CUDA out of memory, add the flag --batch_size 1; the minimum GPU memory requirement is 3.8 GB.

The details of each running script are given in runDetails.md.

Training from Scratch

  • The details of training on the iPER dataset from scratch are given in train.md.

Evaluation

Run ./scripts/motion_imitation/evaluate.sh. The details of the evaluation on the iPER dataset are given in his_evaluators.

Announcement

In our paper, the LPIPS results reported in Table 1 are calculated as 1 − distance score, so a larger value means the two images are more similar. The original intention of using 1 − distance score was to better match the definition of similarity in LPIPS.

However, most other papers use the original definition, LPIPS = distance score. To eliminate the ambiguity and stay consistent with them, we have updated the results in Table 1 to the original definition in the latest version of the paper.
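
To make the two conventions concrete, here is a minimal sketch using the lpips PyPI package (an assumption for illustration, not necessarily the exact implementation used for Table 1):

import torch
import lpips

loss_fn = lpips.LPIPS(net='alex')
img0 = torch.rand(1, 3, 256, 256) * 2 - 1  # random stand-in images scaled to [-1, 1]
img1 = torch.rand(1, 3, 256, 256) * 2 - 1
d = loss_fn(img0, img1).item()  # standard convention: a distance, lower = more similar
s = 1.0 - d                     # convention of the original Table 1: higher = more similar
print(d, s)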

Citation


@InProceedings{lwb2019,
    title={Liquid Warping GAN: A Unified Framework for Human Motion Imitation, Appearance Transfer and Novel View Synthesis},
    author={Wen Liu and Zhixin Piao and Min Jie and Wenhan Luo and Lin Ma and Shenghua Gao},
    booktitle={The IEEE International Conference on Computer Vision (ICCV)},
    year={2019}
}
Comments
  • PyTorch and torchvision version compatibility issue

    Hello, and thank you very much for your work. I have a question: according to the requirements.txt you provide, training uses PyTorch 1.2.0 with torchvision 0.4.0, but the evaluation metrics use PyTorch 1.2.0 with torchvision 0.4.2. Running this way, the program errors out. How did you solve this? Looking up torchvision 0.4.2, the matching PyTorch version is higher than 1.2.0.

    Thanks

    opened by superior1993 13
  • invalid device function

    When I run demo_imitator.py, it always fails with the error "invalid device function", like this:

    Error in forward_face_index_map_2: invalid device function
    Error in forward_face_index_map_1: invalid device function
    Error in forward_face_index_map_2: invalid device function
    Error in forward_face_index_map_1: invalid device function
    Error in forward_face_index_map_2: invalid device function
    Error in forward_face_index_map_1: invalid device function

    and the results of the other two demos are the same. My GPUs are 2080 Ti and the environment is Python 3.7, torch 1.3.1, CUDA 10.0. Could the versions of my packages cause the error?

    opened by GYTuuT 6
  • AttributeError: Can't pickle local object 'make_dataset.<locals>.Config'

    Error when running demo_view.py:

    Personalization: meta cycle finetune...
    load face model from assets/pretrains/sphere20a_20171020.pth
      0%| | 0/5 [00:00<?, ?it/s]
    Traceback (most recent call last):
      File "demo_view.py", line 179, in <module>
        generate_orig_pose_novel_view_result(opt, src_path)
      File "demo_view.py", line 117, in generate_orig_pose_novel_view_result
        adaptive_personalize(opt, viewer, visualizer)
      File "E:\SourceCodes\tensorflow\Gans\impersonator-master\run_imitator.py", line 209, in adaptive_personalize
        imitator.post_personalize(opt.output_dir, loader, visualizer=None, verbose=False)
      File "E:\SourceCodes\tensorflow\Gans\impersonator-master\models\viewer.py", line 395, in post_personalize
        for i, sample in enumerate(data_loader):
      File "D:\DevelopTools\Anaconda3\envs\tensorflow_gpu\lib\site-packages\torch\utils\data\dataloader.py", line 278, in __iter__
        return _MultiProcessingDataLoaderIter(self)
      File "D:\DevelopTools\Anaconda3\envs\tensorflow_gpu\lib\site-packages\torch\utils\data\dataloader.py", line 682, in __init__
        w.start()
      File "D:\DevelopTools\Anaconda3\envs\tensorflow_gpu\lib\multiprocessing\process.py", line 105, in start
        self._popen = self._Popen(self)
      File "D:\DevelopTools\Anaconda3\envs\tensorflow_gpu\lib\multiprocessing\context.py", line 223, in _Popen
        return _default_context.get_context().Process._Popen(process_obj)
      File "D:\DevelopTools\Anaconda3\envs\tensorflow_gpu\lib\multiprocessing\context.py", line 322, in _Popen
        return Popen(process_obj)
      File "D:\DevelopTools\Anaconda3\envs\tensorflow_gpu\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
        reduction.dump(process_obj, to_child)
      File "D:\DevelopTools\Anaconda3\envs\tensorflow_gpu\lib\multiprocessing\reduction.py", line 60, in dump
        ForkingPickler(file, protocol).dump(obj)
    AttributeError: Can't pickle local object 'make_dataset.<locals>.Config'

    opened by opentld 5
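
    A note on this error and the similar pickle.load issue on Win10 below: on Windows, DataLoader worker processes are spawned rather than forked, so the dataset object must be picklable, and locally defined classes (such as the Config built inside make_dataset) are not. A common workaround, sketched here with a hypothetical stand-in dataset, is to disable worker processes so nothing needs to be pickled:

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    dataset = TensorDataset(torch.zeros(4, 3))  # stand-in for the repo's MetaCycleDataSet
    # num_workers=0 loads data in the main process, so the dataset is never pickled
    loader = DataLoader(dataset, batch_size=1, num_workers=0)
    for sample in loader:
        pass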
  • Question on the post_tune argument on the novel view synthesis

    Thank you so much for putting this code up and for providing clear instructions on how to use it. I'm testing the novel view synthesis code on some real images and have gotten it to work, but only without the --post_tune parameter. The moment I add that argument, it fails because the dataloader in that part of the code is empty (the error is that batchSize is 0 there).

    Hence I wanted to ask you:

    1. Is post_tune important for the quality of the final output?
    2. If it is important, could you please help me get it working? Where are the outputs saved as pairs before post_tune, and how can we assign them to the corresponding paths they are read from?

    Thanks again, I greatly appreciate it. Nikolaos

    opened by nsarafianos 4
  • Additional training for Motion Imitation

    Hi, thank you for your awesome work.

    By the way, I tried to transfer my own image with other target images. Basically it works, but the head doesn't look like me: my hairstyle and face are not reflected. I assume this happens because of the pretrained model; I looked at the datasets and found that most of the people have short, black hair. What do you think? And if so, how do I train on more data?

    I also tried it with the fashion model you provided, and it works well! I'm wondering what's going on.

    Thanks in advance.

    opened by ryo12882 4
  • pickle.load error on Win10

    Win10 64-bit, Python 3.7.3

    What I ran:

    python run_imitator.py --gpu_ids 0 --model imitator --output_dir ./outputs/results/ --src_path ./assets/src_imgs/internet/men1_256.jpg --tgt_path ./assets/samples/ref_imgs/024_8_2 --has_detector --post_tune --front_warp --save_res

    Traceback (most recent call last):
      File "run_imitator.py", line 225, in <module>
        adaptive_personalize(test_opt, imitator, visualizer)
      File "run_imitator.py", line 209, in adaptive_personalize
        imitator.post_personalize(opt.output_dir, loader, visualizer=None, verbose=False)
      File "J:\impersonator\models\imitator.py", line 423, in post_personalize
        for i, sample in enumerate(data_loader):
      File "C:\Users\goooice\Anaconda3\envs\ml\lib\site-packages\torch\utils\data\dataloader.py", line 278, in __iter__
        return _MultiProcessingDataLoaderIter(self)
      File "C:\Users\goooice\Anaconda3\envs\ml\lib\site-packages\torch\utils\data\dataloader.py", line 682, in __init__
        w.start()
      File "C:\Users\goooice\Anaconda3\envs\ml\lib\multiprocessing\process.py", line 112, in start
        self._popen = self._Popen(self)
      File "C:\Users\goooice\Anaconda3\envs\ml\lib\multiprocessing\context.py", line 223, in _Popen
        return _default_context.get_context().Process._Popen(process_obj)
      File "C:\Users\goooice\Anaconda3\envs\ml\lib\multiprocessing\context.py", line 322, in _Popen
        return Popen(process_obj)
      File "C:\Users\goooice\Anaconda3\envs\ml\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
        reduction.dump(process_obj, to_child)
      File "C:\Users\goooice\Anaconda3\envs\ml\lib\multiprocessing\reduction.py", line 60, in dump
        ForkingPickler(file, protocol).dump(obj)
    _pickle.PicklingError: Can't pickle <class '__main__.MetaCycleDataSet'>: attribute lookup MetaCycleDataSet on __main__ failed


    Traceback (most recent call last): File "", line 1, in File "C:\Users\goooice\Anaconda3\envs\ml\lib\multiprocessing\spawn.py", line 105, in spawn_main exitcode = _main(fd) File "C:\Users\goooice\Anaconda3\envs\ml\lib\multiprocessing\spawn.py", line 115, in _main self = reduction.pickle.load(from_parent) EOFError: Ran out of input

    opened by GoooIce 4
  • error: neural_renderer/cuda/load_textures_cuda.cpp

    Ubuntu environment | CUDA 10.0 | Tried PyTorch 1.2.0 and 1.3.0. Getting the error below:

    error: neural_renderer/cuda/load_textures_cuda.cpp

    Any help will be appreciated. Thank you.

    opened by Hitesh-Nagothu 3
  • Outputs results folder all empty

    Hi, I'm trying to run demo_imitator and everything seems to run fine, except that all the folders under outputs/results/demos/imitators are empty. Any idea what I can do to fix this? Much appreciated! Here's the attached terminal output:

    ------------ Options -------------
    T_pose: False
    batch_size: 1
    bg_ks: 13
    bg_model: ./outputs/checkpoints/deepfillv2/net_epoch_50_id_G.pth
    bg_replace: False
    body_seg: False
    cam_strategy: smooth
    checkpoints_dir: ./outputs/checkpoints/
    cond_nc: 3
    data_dir: /p300/datasets/iPER
    dataset_mode: iPER
    debug: False
    do_saturate_mask: False
    face_model: assets/pretrains/sphere20a_20171020.pth
    front_warp: False
    ft_ks: 3
    gen_name: impersonator
    gpu_ids: 0
    has_detector: False
    hmr_model: assets/pretrains/hmr_tf2pt.pth
    image_size: 256
    images_folder: images_HD
    ip: 
    is_train: False
    load_epoch: 0
    load_path: ./outputs/checkpoints/lwb_imper_fashion_place/net_epoch_30_id_G.pth
    map_name: uv_seg
    model: impersonator
    n_threads_test: 2
    name: running
    norm_type: instance
    only_vis: False
    output_dir: ./outputs/results/
    part_info: assets/pretrains/smpl_part_info.json
    port: 31100
    post_tune: False
    pri_path: ./assets/samples/A_priors/imgs
    repeat_num: 6
    save_res: False
    serial_batches: False
    smpl_model: assets/pretrains/smpl_model.pkl
    smpls_folder: smpls
    src_path: 
    swap_part: body
    test_ids_file: val.txt
    tex_size: 3
    tgt_path: 
    time_step: 10
    train_ids_file: train.txt
    uv_mapping: assets/pretrains/mapper.txt
    view_params: R=0,90,0/t=0,0,0
    -------------- End ----------------
    ./outputs/checkpoints/running
      0%| 0/3 [00:00<?, ?it/s]
    Network impersonator was created
    Loading net: ./outputs/checkpoints/lwb_imper_fashion_place/net_epoch_30_id_G.pth
    Network deepfillv2 was created
    Loading net: ./outputs/checkpoints/deepfillv2/net_epoch_50_id_G.pth
    Personalization: meta imitation...
    Personalization: meta cycle finetune...
    load face model from assets/pretrains/sphere20a_20171020.pth
    ./outputs/results/demos/imitators/mixamo_preds
    [the same network creation, personalization, and mixamo_preds log repeats for the second and third demo subjects; interleaved tqdm progress bars omitted]
    100%| 3/3 [05:06<00:00, 103.17s/it]
    Completed! All demo videos are save in ./outputs/results/demos/imitators
    
    opened by smalleight17 3
  • Training dataset with corrupted ZIP archive ("smpls.zip")

    Hello, the provided training data at your OneDrive link include a file named "smpls.zip" that appears to be corrupted. It doesn't unzip correctly on either Windows or Linux, even with tools such as 7-Zip and WinRAR; they report bad zipfile offsets at several points (from file #146 to #735). I re-downloaded it 4 times already and kept an eye on the progress bar to make sure there were no interruptions. Could you please check whether the provided file is okay? Thanks!

    opened by AndroXD 3
  • ImportError: load_textures.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _ZN6caffe26detail37_typeMetaDataInstance_preallocated_32E

    When I run demo_imitator.py, I get:

    Traceback (most recent call last): File "demo_imitator.py", line 6, in from models.imitator import Imitator File "/home/lbl/impersonator/models/imitator.py", line 8, in from utils.nmr import SMPLRenderer File "/home/lbl/impersonator/utils/nmr.py", line 11, in import neural_renderer as nr File "/home/lbl/anaconda3/lib/python3.7/site-packages/neural_renderer/init.py", line 3, in from .load_obj import load_obj File "/home/lbl/anaconda3/lib/python3.7/site-packages/neural_renderer/load_obj.py", line 8, in import neural_renderer.cuda.load_textures as load_textures_cuda ImportError: /home/lbl/anaconda3/lib/python3.7/site-packages/neural_renderer/cuda/load_textures.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _ZN6caffe26detail37_typeMetaDataInstance_preallocated_32E

    opened by bolin12 3
  • About details: why do you use 1 discriminator (4 layers) and three labels (-1, 0, 1)?

    Hi, author! I have read your paper and your code, and I have some questions that confuse me. Recently pix2pixHD has become famous; why do you use a one-scale discriminator with three labels (-1, 0, 1) instead of a 2-scale discriminator? And do three labels (-1, 0, 1) have advantages over the two-label style (e.g., in pix2pixHD they use 0 and 1 for fake and real)? Thanks for your reply!

    opened by dypromise 2
  • RuntimeError: Error compiling objects for extension

    When I tried to run 'python setup.py install', it threw the error below. I wonder whether my PyTorch and CUDA versions are unsuitable for this model.

    ERROR: File "...\torch\utils\cpp_extension.py", line 1824, in _run_ninja_build
        raise RuntimeError(message) from e
    RuntimeError: Error compiling objects for extension

    OS: Win10, PyTorch: 1.12.1, CUDA: 11.6

    opened by OrangeLyx 0
  • Bump numpy from 1.14.5 to 1.22.0

    Bumps numpy from 1.14.5 to 1.22.0.

    Release notes

    Sourced from numpy's releases.

    v1.22.0

    NumPy 1.22.0 Release Notes

    NumPy 1.22.0 is a big release featuring the work of 153 contributors spread over 609 pull requests. There have been many improvements, highlights are:

    • Annotations of the main namespace are essentially complete. Upstream is a moving target, so there will likely be further improvements, but the major work is done. This is probably the most user visible enhancement in this release.
    • A preliminary version of the proposed Array-API is provided. This is a step in creating a standard collection of functions that can be used across applications such as CuPy and JAX.
    • NumPy now has a DLPack backend. DLPack provides a common interchange format for array (tensor) data.
    • New methods for quantile, percentile, and related functions. The new methods provide a complete set of the methods commonly found in the literature.
    • A new configurable allocator for use by downstream projects.

    These are in addition to the ongoing work to provide SIMD support for commonly used functions, improvements to F2PY, and better documentation.

    The Python versions supported in this release are 3.8-3.10, Python 3.7 has been dropped. Note that 32 bit wheels are only provided for Python 3.8 and 3.9 on Windows, all other wheels are 64 bits on account of Ubuntu, Fedora, and other Linux distributions dropping 32 bit support. All 64 bit wheels are also linked with 64 bit integer OpenBLAS, which should fix the occasional problems encountered by folks using truly huge arrays.

    Expired deprecations

    Deprecated numeric style dtype strings have been removed

    Using the strings "Bytes0", "Datetime64", "Str0", "Uint32", and "Uint64" as a dtype will now raise a TypeError.

    (gh-19539)

    Expired deprecations for loads, ndfromtxt, and mafromtxt in npyio

    numpy.loads was deprecated in v1.15, with the recommendation that users use pickle.loads instead. ndfromtxt and mafromtxt were both deprecated in v1.17 - users should use numpy.genfromtxt instead with the appropriate value for the usemask parameter.

    (gh-19615)

    ... (truncated)

    dependencies 
    opened by dependabot[bot] 0
  • Bump pillow from 6.2.0 to 9.0.1

    Bumps pillow from 6.2.0 to 9.0.1.

    Release notes

    Sourced from pillow's releases.

    9.0.1

    https://pillow.readthedocs.io/en/stable/releasenotes/9.0.1.html

    Changes

    • In show_file, use os.remove to remove temporary images. CVE-2022-24303 #6010 [@​radarhere, @​hugovk]
    • Restrict builtins within lambdas for ImageMath.eval. CVE-2022-22817 #6009 [radarhere]

    9.0.0

    https://pillow.readthedocs.io/en/stable/releasenotes/9.0.0.html

    Changes

    ... (truncated)

    Changelog

    Sourced from pillow's changelog.

    9.0.1 (2022-02-03)

    • In show_file, use os.remove to remove temporary images. CVE-2022-24303 #6010 [radarhere, hugovk]

    • Restrict builtins within lambdas for ImageMath.eval. CVE-2022-22817 #6009 [radarhere]

    9.0.0 (2022-01-02)

    • Restrict builtins for ImageMath.eval(). CVE-2022-22817 #5923 [radarhere]

    • Ensure JpegImagePlugin stops at the end of a truncated file #5921 [radarhere]

    • Fixed ImagePath.Path array handling. CVE-2022-22815, CVE-2022-22816 #5920 [radarhere]

    • Remove consecutive duplicate tiles that only differ by their offset #5919 [radarhere]

    • Improved I;16 operations on big endian #5901 [radarhere]

    • Limit quantized palette to number of colors #5879 [radarhere]

    • Fixed palette index for zeroed color in FASTOCTREE quantize #5869 [radarhere]

    • When saving RGBA to GIF, make use of first transparent palette entry #5859 [radarhere]

    • Pass SAMPLEFORMAT to libtiff #5848 [radarhere]

    • Added rounding when converting P and PA #5824 [radarhere]

    • Improved putdata() documentation and data handling #5910 [radarhere]

    • Exclude carriage return in PDF regex to help prevent ReDoS #5912 [hugovk]

    • Fixed freeing pointer in ImageDraw.Outline.transform #5909 [radarhere]

    ... (truncated)

    Commits
    • 6deac9e 9.0.1 version bump
    • c04d812 Update CHANGES.rst [ci skip]
    • 4fabec3 Added release notes for 9.0.1
    • 02affaa Added delay after opening image with xdg-open
    • ca0b585 Updated formatting
    • 427221e In show_file, use os.remove to remove temporary images
    • c930be0 Restrict builtins within lambdas for ImageMath.eval
    • 75b69dd Dont need to pin for GHA
    • cd938a7 Autolink CWE numbers with sphinx-issues
    • 2e9c461 Add CVE IDs
    • Additional commits viewable in compare view


    dependencies 
    opened by dependabot[bot] 0
  • Bump opencv-contrib-python from 3.4.2.17 to 4.2.0.32

    Bumps opencv-contrib-python from 3.4.2.17 to 4.2.0.32.

    Release notes

    Sourced from opencv-contrib-python's releases.

    4.2.0.32

    OpenCV version 4.2.0.

    Changes:

    • macOS environment updated from xcode8.3 to xcode 9.4
    • macOS uses now Qt 5 instead of Qt 4
    • Nasm version updated to Docker containers
    • multibuild updated

    Fixes:

    • don't use deprecated brew tap-pin, instead refer to the full package name when installing #267
    • replace get_config_var() with get_config_vars() in setup.py #274
    • add workaround for DLL errors in Windows Server #264

    3.4.9.31

    OpenCV version 3.4.9.

    Changes:

    • macOS environment updated from xcode8.3 to xcode 9.4
    • macOS uses now Qt 5 instead of Qt 4
    • Nasm version updated to Docker containers
    • multibuild updated

    Fixes:

    • don't use deprecated brew tap-pin, instead refer to the full package name when installing #267
    • replace get_config_var() with get_config_vars() in setup.py #274
    • add workaround for DLL errors in Windows Server #264

    4.1.2.30

    OpenCV version 4.1.2.

    Changes:

    ... (truncated)


    dependencies 
    opened by dependabot[bot] 0
  • How could you get the camera parameters for the Mixamo SMPLs?

    Hi, thanks for your interesting work.

    In demo_imitator.py, you transfer motions from Mixamo SMPLs (e.g. ./assets/samples/refs/mixamo) to images. I guess these motions were obtained from Adobe Mixamo (https://www.mixamo.com/#/?page=1&type=Motion%2CMotionPack). I notice there are 3D camera parameters for each frame in the result.pkl file. Do you have any idea where these parameters come from?

    Thanks in advance.

    opened by EricGuo5513 0
Owner
SVIP Lab (ShanghaiTech Vision and Intelligent Perception Lab)