Liquid Warping GAN with Attention: A Unified Framework for Human Image Synthesis

Overview

Impersonator++

Update News

See the development logs for details.

Liquid Warping GAN with Attention: A Unified Framework for Human Image Synthesis covers human motion imitation, appearance transfer, and novel view synthesis. The paper is currently under review at IEEE TPAMI. It extends our previous ICCV project, impersonator: it generalizes better and produces higher-resolution results (512 × 512 and 1024 × 1024) than the ICCV version.

🧾 Colab Notebook · 📑 Paper · 📱 Website · 📂 Dataset · 💡 Bilibili · Forum

Installation

See install.md for full details, including system dependencies, Python requirements, and setup steps. Please follow the instructions in install.md before anything else.

Note that image_size=512 needs at least 9.8 GB of GPU memory. If you are using a mid-range GPU (e.g. an RTX 2060), change image_size to 384 or 256. The following table can be used as a reference:

| image_size | preprocess | personalize | run_imitator | recommended GPU |
|------------|------------|-------------|--------------|-----------------|
| 256 × 256 | 3.1 GB | 4.3 GB | 1.1 GB | RTX 2060 / RTX 2070 |
| 384 × 384 | 3.1 GB | 7.9 GB | 1.5 GB | GTX 1080Ti / RTX 2080Ti / Titan Xp |
| 512 × 512 | 3.1 GB | 9.8 GB | 2 GB | GTX 1080Ti / RTX 2080Ti / Titan Xp |
| 1024 × 1024 | 3.1 GB | 20 GB | - | RTX Titan / P40 / V100 32G |
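
A minimal sketch (not part of iPERCore) for picking image_size from the table above, using the total memory of GPU 0; the thresholds follow the personalize column, which dominates peak usage:

    import torch

    def pick_image_size(device_id: int = 0) -> int:
        """Choose the largest image_size whose 'personalize' stage fits in GPU memory."""
        total_gb = torch.cuda.get_device_properties(device_id).total_memory / 1024 ** 3
        if total_gb >= 20:
            return 1024
        if total_gb >= 9.8:
            return 512
        if total_gb >= 7.9:
            return 384
        return 256

    print(pick_image_size())  # e.g. 256 on a 6 GB RTX 2060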

Run demos

1. Run on Google Colab

Open In Colab

2. Run with Console (scripts) mode

See scripts_runner for more details.
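
For reference, a typical console-mode invocation of the motion-imitation demo, as it appears in the issue reports below (the paths point at the sample assets shipped with the repo):

    python demo/motion_imitate.py --gpu_ids 0 \
        --image_size 512 \
        --num_source 2 \
        --output_dir "./results" \
        --assets_dir "./assets" \
        --model_id "donald_trump_2" \
        --src_path "path?=./assets/samples/sources/donald_trump_2/00000.PNG,name?=donald_trump_2" \
        --ref_path "path?=./assets/samples/references/akun_2.mp4,name?=akun_2,pose_fc?=300"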

3. Run with GUI mode

Coming soon!

Citation

@misc{liu2020liquid,
      title={Liquid Warping GAN with Attention: A Unified Framework for Human Image Synthesis}, 
      author={Wen Liu and Zhixin Piao and Zhi Tu and Wenhan Luo and Lin Ma and Shenghua Gao},
      year={2020},
      eprint={2011.09055},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

@InProceedings{lwb2019,
    title={Liquid Warping GAN: A Unified Framework for Human Motion Imitation, Appearance Transfer and Novel View Synthesis},
    author={Wen Liu and Zhixin Piao and Min Jie and Wenhan Luo and Lin Ma and Shenghua Gao},
    booktitle={The IEEE International Conference on Computer Vision (ICCV)},
    year={2019}
}
Comments
  • "Failed to build neural-renderer"

    Hi, iPERDance team! I really appreciate your great work! I ran into a problem when running the command "python setup.py develop":

    Building wheel for neural-renderer (setup.py) ... error
      ERROR: Command errored out with exit status 1:
       command: /home/ang/anaconda3/envs/iPERDance/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-1b_3r1yg/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-1b_3r1yg/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-be6hts_5
           cwd: /tmp/pip-req-build-1b_3r1yg/
      Complete output (334 lines):
      running bdist_wheel
      /home/ang/anaconda3/envs/iPERDance/lib/python3.6/site-packages/torch/utils/cpp_extension.py:339: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend.
        warnings.warn(msg.format('we could not find ninja.'))
    ...
    

    [screenshot: 2020-12-11 22-07-43 error]

    (My environment: Ubuntu 16.04, CUDA 10.1, gcc/g++ 7.5.0, Anaconda Python 3.6.6, and I did activate the conda env when running the command.)

    I have tried running the command several times and hit the same error each time. Hoping for your advice! Thanks in advance~

    opened by Bingoang 21
  • Could not find a version that satisfies the requirement mmcv-full==1.2.0+torch1.6.0+cu102

    Hi, thank you for the amazing work. I am trying to run this on a Windows 10 system. When I entered:

    python setup.py
    

    It produces:

    C:\Users\anaconda3\envs\iPER\python.exe -m pip install mmcv-full==1.2.0+torch1.6.0+cu102 -f https://download.openmmlab.com/mmcv/dist/index.html
    Looking in links: https://download.openmmlab.com/mmcv/dist/index.html
    ERROR: Could not find a version that satisfies the requirement mmcv-full==1.2.0+torch1.6.0+cu102 (from versions: 1.1.5+torch1.4.0+cpu, 1.1.5+torch1.5.1+cpu, 1.1.5+torch1.6.0+cpu, 1.1.5+torch1.6.0+cu101, 1.1.5+torch1.6.0+cu102, 1.0rc0, 1.0rc2, 1.0.0, 1.0.1, 1.0.2, 1.0.3, 1.0.4, 1.0.5, 1.1.0, 1.1.1, 1.1.2, 1.1.3, 1.1.4, 1.1.5, 1.1.6, 1.2.0, 1.2.1)
    ERROR: No matching distribution found for mmcv-full==1.2.0+torch1.6.0+cu102
    

    How can I fix this? Thank you

    opened by bycloudai 12
  • Can iPERCore run on Python 3.7?

    Hi, I get an error when running demo/motion_imitate.py; please help me find the cause. The printed front_info is {'ft': {'body_num': [], 'face_num': [], 'ids': []}, 'bk': {'body_num': [], 'face_num': [], 'ids': []}}. Could this be caused by the video frames not being extracted?

      Process PreprocessConsumer_0:
      Traceback (most recent call last):
        File "/usr/bin/anaconda3/envs/iPERCore/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
          self.run()
        File "/home/jock/iPERCore/iPERCore/services/preprocess.py", line 41, in run
          device=device
        File "/home/jock/iPERCore/iPERCore/tools/processors/preprocessors.py", line 125, in __init__
          device=device
        File "/home/jock/iPERCore/iPERCore/tools/human_mattors/__init__.py", line 11, in build_mattor
          from .point_render_parser import PointRenderGCAMattor
        File "/home/jock/iPERCore/iPERCore/tools/human_mattors/point_render_parser.py", line 16, in <module>
          from mmdet.apis import init_detector, inference_detector
        File "/home/jock/iPERCore/mmdetection/mmdet/apis/__init__.py", line 1, in <module>
          from .inference import (async_inference_detector, inference_detector,
        File "/home/jock/iPERCore/mmdetection/mmdet/apis/inference.py", line 11, in <module>
          from mmdet.core import get_classes
        File "/home/jock/iPERCore/mmdetection/mmdet/core/__init__.py", line 3, in <module>
          from .evaluation import *  # noqa: F401, F403
        File "/home/jock/iPERCore/mmdetection/mmdet/core/evaluation/__init__.py", line 5, in <module>
          from .mean_ap import average_precision, eval_map, print_map_summary
        File "/home/jock/iPERCore/mmdetection/mmdet/core/evaluation/mean_ap.py", line 6, in <module>
          from terminaltables import AsciiTable
        File "", line 983, in _find_and_load
        File "", line 963, in _find_and_load_unlocked
        File "", line 906, in _find_spec
        File "", line 1280, in find_spec
        File "", line 1254, in _get_spec
        File "", line 1235, in _legacy_get_spec
        File "", line 441, in spec_from_loader
        File "", line 594, in spec_from_file_location
      zlib.error: Error -2 while decompressing data: inconsistent stream state

      Pre-processing: digital deformation start...
      Process HumanDigitalDeformConsumer_0:
      Traceback (most recent call last):
        File "/home/jock/iPERCore/iPERCore/services/preprocess.py", line 136, in run
          prepared_inputs = self.prepare_inputs_for_run_cloth_smpl_links(process_info)
        File "/home/jock/iPERCore/iPERCore/services/preprocess.py", line 210, in prepare_inputs_for_run_cloth_smpl_links
          src_infos = process_info.convert_to_src_info(self.opt.num_source)
        File "/home/jock/iPERCore/iPERCore/services/options/process_info.py", line 140, in convert_to_src_info
          src_infos = read_src_infos(self.vid_infos, num_source)
        File "/home/jock/iPERCore/iPERCore/services/options/process_info.py", line 233, in read_src_infos
          pad_ids = np.random.choice(src_ids, need_pad)
        File "mtrand.pyx", line 908, in numpy.random.mtrand.RandomState.choice
      ValueError: 'a' cannot be empty unless no samples are taken

    During handling of the above exception, another exception occurred:

      Traceback (most recent call last):
        File "/usr/bin/anaconda3/envs/iPERCore/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
          self.run()
        File "/home/jock/iPERCore/iPERCore/services/preprocess.py", line 153, in run
          except Exception("model error!") as e:
      TypeError: catching classes that do not inherit from BaseException is not allowed

      Pre-processing: digital deformation completed... the current number of sources are 1, while the pre-defined number of sources are 1.

    opened by gdcrx 11
  • AssertionError: MMCV==1.2.4 is used but incompatible. Please install mmcv>=[1, 0, 2], <=[1, 2].

    Running the command

      python demo/motion_imitate.py --gpu_ids 0 \
          --image_size 512 \
          --num_source 2 \
          --output_dir "./results" \
          --assets_dir "./assets" \
          --model_id "donald_trump_2" \
          --src_path "path?=./assets/samples/sources/donald_trump_2/00000.PNG,name?=donald_trump_2" \
          --ref_path "path?=./assets/samples/references/akun_2.mp4,name?=akun_2,pose_fc?=300"

    fails with the error above. Base environment: Ubuntu 16.04 in a Docker container with CUDA 10.1 and torch 1.7.0+cu101. All dependencies installed successfully, but the error above is still raised.

    opened by AI-mzq 10
  • Failed to run when using a folder of images for a person

    1.1 Preprocessing, running Preprocessor to detect the human boxes of ./results/primitives/processed/orig_images...
      0% 0/5 [00:00<?, ?it/s]
    Process PreprocessConsumer_0:
    Traceback (most recent call last):
      File "/content/iPERCore/iPERCore/services/preprocess.py", line 77, in run
        visual=True,
      File "/content/iPERCore/iPERCore/tools/processors/base_preprocessor.py", line 81, in execute
        self._execute_detector(processed_info, factor=factor)
      File "/content/iPERCore/iPERCore/tools/processors/base_preprocessor.py", line 298, in _execute_detector
        cur_shape = image.shape[0:2]
    AttributeError: 'NoneType' object has no attribute 'shape'
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
        self.run()
      File "/content/iPERCore/iPERCore/services/preprocess.py", line 80, in run
        except Exception("model error!") as e:
    TypeError: catching classes that do not inherit from BaseException is not allowed
    		Pre-processing: digital deformation start...
    Process HumanDigitalDeformConsumer_0:
    Traceback (most recent call last):
      File "/content/iPERCore/iPERCore/services/preprocess.py", line 136, in run
        prepared_inputs = self.prepare_inputs_for_run_cloth_smpl_links(process_info)
      File "/content/iPERCore/iPERCore/services/preprocess.py", line 210, in prepare_inputs_for_run_cloth_smpl_links
        src_infos = process_info.convert_to_src_info(self.opt.num_source)
      File "/content/iPERCore/iPERCore/services/options/process_info.py", line 140, in convert_to_src_info
        src_infos = read_src_infos(self.vid_infos, num_source)
      File "/content/iPERCore/iPERCore/services/options/process_info.py", line 233, in read_src_infos
        pad_ids = np.random.choice(src_ids, need_pad)
      File "mtrand.pyx", line 908, in numpy.random.mtrand.RandomState.choice
    ValueError: 'a' cannot be empty unless no samples are taken
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
        self.run()
      File "/content/iPERCore/iPERCore/services/preprocess.py", line 153, in run
        except Exception("model error!") as e:
    TypeError: catching classes that do not inherit from BaseException is not allowed
    		Pre-processing: digital deformation completed...
    the current number of sources are 1, while the pre-defined number of sources are 2. 
    	Pre-processing: failed...
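
    Two separate problems are tangled in this log: the root cause (a file in the folder is not a readable image, so the image loader returns None and image.shape fails), and an error-handling bug in preprocess.py that masks it: except needs an exception class, but except Exception("model error!") passes an instance. A minimal sketch of that masking bug and the conventional form (illustrative code, not the iPERCore source):

        # Reproduces the secondary TypeError seen above: `except` must name an
        # exception *class*; passing an instance raises TypeError at catch time.
        try:
            try:
                raise AttributeError("'NoneType' object has no attribute 'shape'")
            except Exception("model error!"):  # BUG: instance, not a class
                pass
        except TypeError as err:
            print(err)  # catching classes that do not inherit from BaseException is not allowed

        # Conventional form: catch the class, so the real error is reported.
        try:
            raise AttributeError("'NoneType' object has no attribute 'shape'")
        except Exception as err:
            print(f"model error: {err}")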
    
    opened by ghost 10
  • Install instruction

    It's an interesting project, but it can be a little difficult to run the demo code because of environment problems, so I am putting my problems and final solutions here for anyone who meets the same issues.

    P1: The provided Colab cannot work.

    A1: It has two problems. First, the download links for the checkpoints and samples are invalid; you can find new download addresses on OneDrive. Second, Colab no longer supports os.symlink (or ln -s), which throws an error midway (a copy fallback is sketched below).
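
    A minimal sketch of the copy fallback (illustrative, not the iPERCore source; link_or_copy is a hypothetical helper standing in for the os.symlink call in iPERCore/tools/processors/process_utils.py):

        import os
        import shutil

        def link_or_copy(src_path: str, dst_path: str) -> None:
            """Symlink when the filesystem allows it; otherwise copy."""
            try:
                os.symlink(os.path.abspath(src_path), os.path.abspath(dst_path))
            except OSError:
                # e.g. [Errno 95] Operation not supported on Drive-mounted Colab paths
                shutil.copy2(src_path, dst_path)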

    P2: AttributeError: 'ConfigDict' object has no attribute 'nms'

    A2: This is caused by a conflict between the old config and the latest versions of mmdet/mmcv/mmedit. You can directly modify YOUR-PATH-TO-MMDET/mmdet/models/dense_heads/rpn_head.py and add cfg.nms = dict(type='nms', iou_threshold=0.7). You may then get another AttributeError for 'max_per_img', in which case you also need to add cfg.max_per_img = 100 (both additions are sketched below).
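
    A sketch of the A2 patch in isolation, using mmcv's ConfigDict to stand in for the test cfg inside rpn_head.py (the exact insertion point depends on your mmdet version):

        from mmcv.utils import ConfigDict

        cfg = ConfigDict(nms_pre=1000, min_bbox_size=0)  # stand-in for the old test cfg
        cfg.nms = dict(type='nms', iou_threshold=0.7)    # field newer mmdet expects
        cfg.max_per_img = 100                            # fixes the follow-up 'max_per_img' error
        print(cfg.nms, cfg.max_per_img)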

    P3: RuntimeError: nms is not compiled with GPU support
    P4: /mmcv/_ext.cpython-37m-x86_64-linux-gnu.so: undefined symbol

    A3&A4: These are both caused by the mm packages. This repo pins relatively old versions that may not work with CUDA 11. I strongly suggest installing the newest versions (mmcv == 1.5.3, mmdet == 2.25.0, mmedit == 0.15.0) following the official instructions. For mmcv-full especially, you need to install the build that matches your local torch & CUDA versions (a version-matching sketch follows). If it still cannot detect CUDA, install it from source instead of pip. After reinstalling, you may also need to redo the steps in A2.
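
    A sketch of matching mmcv-full to the local torch/CUDA build before installing (the version pin follows the suggestion above; torch.version.cuda is None on CPU-only builds, so this assumes a CUDA build of torch):

        import subprocess
        import sys

        import torch

        cu = "cu" + torch.version.cuda.replace(".", "")                          # e.g. "cu113"
        th = "torch" + ".".join(torch.__version__.split("+")[0].split(".")[:2])  # e.g. "torch1.10"
        index_url = f"https://download.openmmlab.com/mmcv/dist/{cu}/{th}/index.html"
        subprocess.check_call([sys.executable, "-m", "pip", "install",
                               "mmcv-full==1.5.3", "-f", index_url])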

    P5: KeyError: '04_left_leg'

    A5: In ./iPERCore/tools/human_digitalizer/deformers/link_utils.py, change '04_left_leg' to '02_left_leg'.

    opened by haofanwang 9
  • Failed to run main demo

    Hi there, I followed a tutorial for the Windows installation of this AI. Everything should be installed correctly, but when I run the main demo script it gives me this error: [screenshot]

    I would like to know how to fix this issue, thank you.

    opened by AVTV64 8
  • How to fix the mmcv-full 1.1.5 version-mismatch error when running motion_imitate.py?

    How can I fix the mmcv-full 1.1.5 version-mismatch error that keeps coming up when running motion_imitate.py? My command and log are below.

      (venv) C:\WINDOWS\system32>python D:\iPERCore-main\demo\motion_imitate.py --gpu_ids 0 --image_size 256 --num_source 2 --output_dir "D:\iPERCore-main/results" --assets_dir "D:\iPERCore-main/assets" --src_path "path?=D:\iPERCore-main\assets\samples\sources\donald_trump_2\00000.PNG,name?=donald_trump_2" --ref_path "path?=D:\iPERCore-main\assets\samples\references\akun_1.mp4,name?=akun_2,pose_fc?=300"
      ./assets/executables/ffmpeg-4.3.1-win64-static/bin/ffprobe.exe -v error -select_streams v -of default=noprint_wrappers=1:nokey=1 -show_entries stream=r_frame_rate D:\iPERCore-main\assets\samples\references\akun_1.mp4
      ------------ Options -------------
      {'MAX_NUM_SOURCE': 8,
       'MultiMedia': {'ffmpeg': {'Linux': {'ffmpeg_exe_path': 'ffmpeg', 'ffprobe_exe_path': 'ffprobe'}, 'Windows': {'ffmpeg_exe_path': './assets/executables/ffmpeg-4.3.1-win64-static/bin/ffmpeg.exe', 'ffprobe_exe_path': './assets/executables/ffmpeg-4.3.1-win64-static/bin/ffprobe.exe'}, 'pix_fmt': 'yuv420p', 'vcodec': 'h264'}, 'image': {'caption': 'this is a fake video, synthesized by impersonator++', 'saved_name_format': 'pred_{:0>8}.png'}},
       'NUMBER_FACES': 13776,
       'NUMBER_VERTS': 6890,
       'Preprocess': {'BackgroundInpaintor': {'bg_replace': True, 'cfg_path': './assets/configs/inpaintors/mmedit_inpainting.toml', 'dilate_iter_num': 3, 'dilate_kernel_size': 9, 'name': 'mmedit_inpainting', 'use_sr': True}, 'Cropper': {'ref_crop_factor': 3.0, 'src_crop_factor': 1.3}, 'Deformer': {'cloth_parse_ckpt_path': './assets/checkpoints/mattors/exp-schp-lip.pth'}, 'FrontInfo': {'NUM_CANDIDATE': 25, 'RENDER_SIZE': 256}, 'HumanMattors': {'cfg_path': './assets/configs/mattors/point_render+gca.toml', 'dilate_iter_num': 7, 'erode_iter_num': 2, 'morph_kernel_size': 3, 'name': 'point_render+gca'}, 'MAX_PER_GPU_PROCESS': 1, 'Pose2dEstimator': {'cfg_path': './assets/configs/pose2d/openpose/body25.toml', 'joint_type': 'OpenPose-Body-25', 'name': 'openpose'}, 'Pose3dEstimator': {'batch_size': 32, 'cfg_path': './assets/configs/pose3d/spin.toml', 'name': 'spin', 'num_workers': 4}, 'Pose3dRefiner': {'cfg_path': './assets/configs/pose3d/smplify.toml', 'name': 'smplify', 'use_lfbgs': True}, 'Tracker': {'tracker_name': 'max_box'}, 'estimate_boxes_first': True, 'filter_invalid': True, 'has_detector': True, 'temporal': True, 'use_smplify': True},
       'Train': {'D_adam_b1': 0.9, 'D_adam_b2': 0.999, 'G_adam_b1': 0.9, 'G_adam_b2': 0.999, 'aug_bg': False, 'display_freq_s': 30, 'face_factor': 1.0, 'face_loss_path': './assets/checkpoints/losses/sphere20a_20171020.pth', 'final_lr': 2e-06, 'lambda_D_prob': 1.0, 'lambda_face': 5.0, 'lambda_mask': 5.0, 'lambda_mask_smooth': 1.0, 'lambda_rec': 10.0, 'lambda_tsf': 10.0, 'lr_D': 0.0001, 'lr_G': 0.0001, 'niters_or_epochs_decay': 0, 'niters_or_epochs_no_decay': 100, 'num_iters_validate': 1, 'opti': 'Adam', 'print_freq_s': 30, 'save_latest_freq_s': 300, 'tb_visual': False, 'train_G_every_n_iterations': 1, 'use_face': True, 'use_vgg': 'VGG19', 'vgg_loss_path': './assets/checkpoints/losses/vgg19-dcbb9e9d.pth'},
       'assets_dir': 'D:\iPERCore-main/assets',
       'batch_size': 1,
       'bg_ks': 11,
       'cam_strategy': 'smooth',
       'cfg_path': './assets\configs\deploy.toml',
       'digital_type': 'cloth_smpl_link',
       'dis_name': 'patch_global',
       'face_path': './assets/checkpoints/pose3d/smpl_faces.npy',
       'facial_path': './assets/checkpoints/pose3d/front_facial.json',
       'fim_enc_path': './assets/checkpoints/pose3d/mapper_fim_enc.txt',
       'front_path': './assets/checkpoints/pose3d/front_body.json',
       'ft_ks': 1,
       'gen_name': 'AttLWB-SPADE',
       'gpu_ids': ['0'],
       'head_path': './assets/checkpoints/pose3d/head.json',
       'image_size': 256,
       'intervals': 1,
       'ip': '',
       'load_epoch': -1,
       'load_path_D': 'None',
       'load_path_G': './assets/checkpoints/neural_renders/AttLWB-SPADE_id_G_2020-05-18.pth',
       'local_rank': 0,
       'map_name': 'uv_seg',
       'meta_data': {'checkpoints_dir': 'D:\iPERCore-main/results\models\model_1610192845.939266', 'meta_ref': [<iPERCore.services.options.meta_info.MetaProcess object at 0x000001D52563A898>], 'meta_src': [<iPERCore.services.options.meta_info.MetaProcess object at 0x000001D52563AEF0>], 'opt_path': 'D:\iPERCore-main/results\models\model_1610192845.939266\opts.txt', 'personalized_ckpt_path': 'D:\iPERCore-main/results\models\model_1610192845.939266\personalized.pth', 'root_primitives_dir': 'D:\iPERCore-main/results\primitives'},
       'model_id': 'model_1610192845.939266',
       'neural_render_cfg': {'Discriminator': {'bg_cond_nc': 4, 'cond_nc': 6, 'max_nf_mult': 8, 'n_layers': 4, 'name': 'patch_global', 'ndf': 64, 'norm_type': 'instance', 'use_sigmoid': False}, 'Generator': {'BGNet': {'cond_nc': 4, 'n_res_block': 6, 'norm_type': 'instance', 'num_filters': [64, 128, 128, 256]}, 'SIDNet': {'cond_nc': 6, 'n_res_block': 6, 'norm_type': 'None', 'num_filters': [64, 128, 256]}, 'TSFNet': {'cond_nc': 6, 'n_res_block': 6, 'norm_type': 'instance', 'num_filters': [64, 128, 256]}, 'name': 'AttLWB-SPADE'}},
       'neural_render_cfg_path': './assets/configs/neural_renders/AttLWB-SPADE.toml',
       'num_source': 2,
       'num_workers': 4,
       'only_vis': False,
       'output_dir': 'D:\iPERCore-main/results',
       'part_path': './assets/checkpoints/pose3d/smpl_part_info.json',
       'port': 0,
       'ref_path': 'path?=D:\iPERCore-main\assets\samples\references\akun_1.mp4,name?=akun_2,pose_fc?=300',
       'serial_batches': False,
       'share_bg': True,
       'smpl_model': './assets/checkpoints/pose3d/smpl_model.pkl',
       'smpl_model_hand': './assets/checkpoints/pose3d/smpl_model_with_hand_v2.pkl',
       'src_path': 'path?=D:\iPERCore-main\assets\samples\sources\donald_trump_2\00000.PNG,name?=donald_trump_2',
       'tb_visual': False,
       'temporal': False,
       'tex_size': 3,
       'time_step': 1,
       'train_name': 'LWGTrainer',
       'use_cudnn': False,
       'use_inpaintor': False,
       'uv_map_path': './assets/checkpoints/pose3d/mapper_uv.txt',
       'verbose': True}
      -------------- End ----------------
      Pre-processing: start...
      ----------------------MetaProcess----------------------
      meta_input:
          path: D:\iPERCore-main\assets\samples\sources\donald_trump_2\00000.PNG
          bg_path:
          name: donald_trump_2
      primitives_dir: D:\iPERCore-main/results\primitives\donald_trump_2
      processed_dir: D:\iPERCore-main/results\primitives\donald_trump_2\processed
      vid_info_path: D:\iPERCore-main/results\primitives\donald_trump_2\processed\vid_info.pkl

      ----------------------MetaProcess----------------------
      meta_input:
          path: D:\iPERCore-main\assets\samples\references\akun_1.mp4
          bg_path:
          name: akun_2
          audio: D:\iPERCore-main/results\primitives\akun_2\processed\audio.mp3
          fps: 30.0
          pose_fc: 300.0
          cam_fc: 100
      primitives_dir: D:\iPERCore-main/results\primitives\akun_2
      processed_dir: D:\iPERCore-main/results\primitives\akun_2\processed
      vid_info_path: D:\iPERCore-main/results\primitives\akun_2\processed\vid_info.pkl

      Process PreprocessConsumer_0:
      Traceback (most recent call last):
        File "D:\anaconda3\envs\venv\lib\multiprocessing\process.py", line 258, in _bootstrap
          self.run()
        File "D:\iPERCore-main\iPERCore\services\preprocess.py", line 43, in run
          device=device
        File "D:\iPERCore-main\iPERCore\tools\processors\preprocessors.py", line 127, in __init__
          device=device
        File "D:\iPERCore-main\iPERCore\tools\human_mattors\__init__.py", line 11, in build_mattor
          from .point_render_parser import PointRenderGCAMattor
        File "D:\iPERCore-main\iPERCore\tools\human_mattors\point_render_parser.py", line 11, in <module>
          from mmdet.apis import init_detector, inference_detector
        File "D:\anaconda3\envs\venv\lib\site-packages\mmdet\__init__.py", line 25, in <module>
          f'MMCV=={mmcv.__version__} is used but incompatible. '
      AssertionError: MMCV==1.1.5 is used but incompatible. Please install mmcv>=1.2.4, <=1.3.
      Pre-processing: digital deformation start...
      Process HumanDigitalDeformConsumer_0:
      Traceback (most recent call last):
        File "D:\iPERCore-main\iPERCore\services\preprocess.py", line 138, in run
          prepared_inputs = self.prepare_inputs_for_run_cloth_smpl_links(process_info)
        File "D:\iPERCore-main\iPERCore\services\preprocess.py", line 212, in prepare_inputs_for_run_cloth_smpl_links
          src_infos = process_info.convert_to_src_info(self.opt.num_source)
        File "D:\iPERCore-main\iPERCore\services\options\process_info.py", line 142, in convert_to_src_info
          src_infos = read_src_infos(self.vid_infos, num_source)
        File "D:\iPERCore-main\iPERCore\services\options\process_info.py", line 235, in read_src_infos
          pad_ids = np.random.choice(src_ids, need_pad)
        File "mtrand.pyx", line 908, in numpy.random.mtrand.RandomState.choice
      ValueError: 'a' cannot be empty unless no samples are taken

      During handling of the above exception, another exception occurred:

      Traceback (most recent call last):
        File "D:\anaconda3\envs\venv\lib\multiprocessing\process.py", line 258, in _bootstrap
          self.run()
        File "D:\iPERCore-main\iPERCore\services\preprocess.py", line 155, in run
          except Exception("model error!") as e:
      TypeError: catching classes that do not inherit from BaseException is not allowed
      Pre-processing: digital deformation completed... the current number of sources are 1, while the pre-defined number of sources are 2.
      Pre-processing: failed...

    (Running the same command a second time produces an identical options dump, with a new model_id timestamp, and the same MMCV==1.1.5 assertion error, so pre-processing fails in the same way.)

    opened by bumie7758258 5
  • colab result: Two legs are merged like wearing a dress   :-)

    I'm experimenting on Colab with different photos. This is my recent result: [screenshot]. You can see in the photos that the legs are spread out, but it looks like the model can't handle pants when they are too wide.

    opened by bigboss97 5
  • iPERDance checkpoint server is down!! - ANY Solution

    Hello, we can't connect to the assets server to download the checkpoints and samples; both requests time out:

      !wget -O assets/checkpoints.zip "https://download.impersonator.org/iper_plus_plus_latest_checkpoints.zip"
      !wget -O assets/samples.zip "https://download.impersonator.org/iper_plus_plus_latest_samples.zip"

    It seems the server is down. Do you have any other way to download the checkpoints?

    opened by basemdabbour 4
  • Scripts for appearance transfer

    In the ICCV version, there is a script for appearance transfer ("python run_swap.py"). I wonder if there are similar scripts for appearance transfer here, like in the ICCV paper? Thank you.

    opened by htzheng 4
  • colab is broken

    Colab is broken due to Google removing CUDA 10.1 (see here).

    this can be fixed by:

    !apt-get -o Dpkg::Options::="--force-overwrite" install cuda-10-1 cuda-drivers
    import os
    os.environ["CUDA_HOME"] = '/usr/local/cuda-10.1'
    

    However, if you want to support modern GPUs such as the A100, you need CUDA 11:

    !apt-get -o Dpkg::Options::="--force-overwrite" install cuda-11-1
    import os
    cuda_home = '/usr/local/cuda-11.1'
    os.environ["CUDA_HOME"] = cuda_home
    with open(os.path.join(cuda_home, 'version.txt'), 'w') as f:
      f.write('CUDA Version 11.1')
    

    I couldn't figure out how to install cuDNN, but it works anyway. See, e.g., my accessible Colab: https://colab.research.google.com/github/eyaler/avatars4all/blob/master/ganivut.ipynb

    opened by eyaler 0
  • Operation error

    I tried to run the Trump case, but it raises OSError: [Errno 95] Operation not supported. Google Colab is connected and I have confirmed the file path, but I can't solve it. Could you help me?

    Maybe the error is caused by the input file:

      Process PreprocessConsumer_0:
      Traceback (most recent call last):
        File "/content/drive/MyDrive/Colab Notebooks/iPERCore/iPERCore/services/preprocess.py", line 80, in run
          visual=True,
        File "/content/drive/MyDrive/Colab Notebooks/iPERCore/iPERCore/tools/processors/base_preprocessor.py", line 74, in execute
          src_img_dir = process_utils.format_imgs_dir(input_path, processed_info["src_img_dir"])
        File "/content/drive/MyDrive/Colab Notebooks/iPERCore/iPERCore/tools/processors/process_utils.py", line 32, in format_imgs_dir
          os.symlink(osp.abspath(src_path), osp.abspath(dst_path))
      OSError: [Errno 95] Operation not supported: '/content/drive/MyDrive/Colab Notebooks/iPERCore/assets/samples/sources/wangyibo_2.jpg' -> '/content/drive/MyDrive/Colab Notebooks/iPERCore/results/primitives/processed/orig_images/wangyibo_2.jpg'

    opened by masato-mouse 1
  • RuntimeError: dimension mismatch for operand 0: equation 2 tensor 3

    When I run the programme:

    Traceback (most recent call last): File "C:\Users\28627.conda\envs\i\lib\runpy.py", line 192, in _run_module_as_main return _run_code(code, main_globals, None, File "C:\Users\28627.conda\envs\i\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "D:\iPERCore-main\iPERCore-main\iPERCore\services\run_imitator.py", line 204, in run_imitator(opt=OPT) File "D:\iPERCore-main\iPERCore-main\iPERCore\services\run_imitator.py", line 193, in run_imitator all_meta_outputs = imitate(opt) File "D:\iPERCore-main\iPERCore-main\iPERCore\services\run_imitator.py", line 130, in imitate imitator.source_setup( File "C:\Users\28627.conda\envs\i\lib\site-packages\torch\autograd\grad_mode.py", line 15, in decorate_context return func(*args, **kwargs) File "D:\iPERCore-main\iPERCore-main\iPERCore\models\imitator.py", line 205, in source_setup src_info = self.body_rec.get_details(src_smpl, offsets, links_ids=links_ids) File "D:\iPERCore-main\iPERCore-main\iPERCore\tools\human_digitalizer\bodynets\base_smpl.py", line 129, in get_details verts, j3d, rs = self.forward(beta=shape, theta=pose, offsets=offsets, links_ids=links_ids, get_skin=True) File "D:\iPERCore-main\iPERCore-main\iPERCore\tools\human_digitalizer\bodynets\batch_smplh.py", line 172, in forward vertices, joints = lbs(beta, full_pose, self.v_template + offsets, File "D:\iPERCore-main\iPERCore-main\iPERCore\tools\human_digitalizer\smplx\lbs.py", line 181, in lbs v_shaped = v_template + blend_shapes(betas, shapedirs) File "D:\iPERCore-main\iPERCore-main\iPERCore\tools\human_digitalizer\smplx\lbs.py", line 270, in blend_shapes blend_shape = torch.einsum('bl,mkl->bmk', [betas, shape_disps]) File "C:\Users\28627.conda\envs\i\lib\site-packages\torch\functional.py", line 325, in einsum return einsum(equation, *operands) File "C:\Users\28627.conda\envs\i\lib\site-packages\torch\functional.py", line 327, in einsum return _VF.einsum(equation, operands) RuntimeError: dimension mismatch for operand 0: equation 2 tensor 3

    My environment: Python 3.8.0, CUDA 10.1, Windows 10, PyTorch 1.6.0, GPU: GTX 1650. Thanks for helping me!
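
    For context, the failing call is the shape-blending contraction in smplx/lbs.py, torch.einsum('bl,mkl->bmk', [betas, shape_disps]), which expects a 2-D betas of shape (batch, num_betas). A minimal sketch reproducing the mismatch (the tensor sizes here are illustrative, not the reporter's actual ones):

        import torch

        shape_disps = torch.randn(6890, 3, 10)  # (num_verts, 3, num_betas), SMPL-like sizes
        betas_ok = torch.randn(2, 10)           # (batch, num_betas): what 'bl' expects
        out = torch.einsum('bl,mkl->bmk', betas_ok, shape_disps)
        print(out.shape)                        # torch.Size([2, 6890, 3])

        betas_bad = betas_ok.unsqueeze(0)       # a stray third dimension
        try:
            torch.einsum('bl,mkl->bmk', betas_bad, shape_disps)
        except RuntimeError as e:
            # On torch 1.6 this reads exactly "dimension mismatch for operand 0:
            # equation 2 tensor 3"; newer versions word it differently.
            print(e)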

    opened by pilotx-doctor 0