Code for the paper Progressive Pose Attention Transfer for Person Image Generation, CVPR 2019 (Oral).

Overview

Pose-Transfer

Code for the paper Progressive Pose Attention Transfer for Person Image Generation in CVPR19 (Oral). The paper is available here.

Video generation with a single image as input. More details can be found in the supplementary materials in our paper.

News

  • We have released a new branch PATN_Fine. It introduces a segment-based skip-connection and a novel segment-based style loss, achieving even better results on DeepFashion.
  • A video demo is available now. We further improve the performance of our model by introducing a segment-based skip-connection. We will release the code soon. Refer to our supplementary materials for more details.
  • Code for PyTorch 1.0 is now available under the branch pytorch_v1.0. The same results on both datasets can be reproduced with the pretrained model.

Notes:

In PyTorch 1.0, running_mean and running_var are not saved for Instance Normalization layers by default. To reproduce the results in the paper, launch python tool/rm_insnorm_running_vars.py to remove the corresponding keys from the pretrained model. (Only for the DeepFashion dataset.)
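
For reference, a minimal sketch of what such a cleanup could look like (the checkpoint path and key patterns below are assumptions, not the exact contents of tool/rm_insnorm_running_vars.py):

import torch

# Hypothetical sketch: strip InstanceNorm running stats from a checkpoint so it
# loads with track_running_stats=False (PyTorch >= 0.4). The path is assumed.
ckpt_path = 'checkpoints/fashion_PATN/latest_net_netG.pth'  # assumed checkpoint name
state_dict = torch.load(ckpt_path, map_location='cpu')

# The DeepFashion generator uses instance norm throughout, so dropping every
# running_mean / running_var buffer is safe for that checkpoint.
cleaned = {k: v for k, v in state_dict.items()
           if not k.endswith(('running_mean', 'running_var'))}

torch.save(cleaned, ckpt_path)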

This is the PyTorch implementation of pose transfer on both the Market-1501 and DeepFashion datasets. The code is written by Tengteng Huang and Zhen Zhu.

Requirement

  • pytorch (0.3.1)
  • torchvision (0.2.0)
  • numpy
  • scipy
  • scikit-image
  • pillow
  • pandas
  • tqdm
  • dominate

Getting Started

Installation

  • Clone this repo:
git clone https://github.com/tengteng95/Pose-Transfer.git
cd Pose-Transfer

Data Preparation

We provide our dataset split files and extracted keypoint files for convenience.

Market1501

  • Download the Market-1501 dataset from here. Rename bounding_box_train and bounding_box_test to train and test, and put them under the market_data directory.
  • Download the train/test splits and train/test keypoint annotations from Google Drive or Baidu Disk, including market-pairs-train.csv, market-pairs-test.csv, market-annotation-train.csv and market-annotation-test.csv. Put these four files under the market_data directory.
  • Generate the pose heatmaps. Launch
python tool/generate_pose_map_market.py
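
For reference, a minimal sketch of the idea behind the heatmap generation (channel count and image size follow the Market-1501 setting; the sigma value and array layout are assumptions, not necessarily what tool/generate_pose_map_market.py uses):

import numpy as np

def keypoints_to_heatmaps(keypoints, height=128, width=64, sigma=6):
    # keypoints: (18, 2) array of (y, x) coordinates; missing joints are marked with -1.
    heatmaps = np.zeros((height, width, 18), dtype=np.float32)
    yy, xx = np.mgrid[0:height, 0:width]
    for i, (y, x) in enumerate(keypoints):
        if y < 0 or x < 0:  # missing keypoint -> all-zero channel
            continue
        heatmaps[:, :, i] = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2.0 * sigma ** 2))
    return heatmaps

# Each heatmap is saved next to its image name, e.g.
# np.save('market_data/trainK/0001_c1s1_000151_00.jpg.npy', heatmaps)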

DeepFashion

Note: In our settings, we crop the DeepFashion images to a resolution of 176x256 in a center-crop manner (a minimal sketch is given below).
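
A minimal sketch of such a center crop, assuming the source images are 256x256 (adjust if your copies of the DeepFashion images have a different size):

from PIL import Image

def center_crop(in_path, out_path, target_w=176, target_h=256):
    # Crop a centered target_w x target_h window out of the input image.
    img = Image.open(in_path)
    w, h = img.size
    left, top = (w - target_w) // 2, (h - target_h) // 2
    img.crop((left, top, left + target_w, top + target_h)).save(out_path)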

  • Split the raw images into the train and test splits. Launch
python tool/generate_fashion_datasets.py
  • Download the train/test pairs and train/test keypoint annotations from Google Drive or Baidu Disk, including fasion-resize-pairs-train.csv, fasion-resize-pairs-test.csv, fasion-resize-annotation-train.csv and fasion-resize-annotation-test.csv. Put these four files under the fashion_data directory.
  • Generate the pose heatmaps. Launch
python tool/generate_pose_map_fashion.py

Notes:

Optionally, you can also generate these files by yourself.

  1. Keypoints files

We use OpenPose to generate keypoints.

  • Download the pose estimator from Google Drive or Baidu Disk. Put it under the root folder Pose-Transfer.
  • Change the paths input_folder and output_path in tool/compute_coordinates.py, and then launch
python2 compute_coordinates.py
  2. Dataset split files
python2 tool/create_pairs_dataset.py

Train a model

Market-1501

python train.py --dataroot ./market_data/ --name market_PATN --model PATN --lambda_GAN 5 --lambda_A 10  --lambda_B 10 --dataset_mode keypoint --no_lsgan --n_layers 3 --norm batch --batchSize 32 --resize_or_crop no --gpu_ids 0 --BP_input_nc 18 --no_flip --which_model_netG PATN --niter 500 --niter_decay 200 --checkpoints_dir ./checkpoints --pairLst ./market_data/market-pairs-train.csv --L1_type l1_plus_perL1 --n_layers_D 3 --with_D_PP 1 --with_D_PB 1  --display_id 0

DeepFashion

python train.py --dataroot ./fashion_data/ --name fashion_PATN --model PATN --lambda_GAN 5 --lambda_A 1 --lambda_B 1 --dataset_mode keypoint --n_layers 3 --norm instance --batchSize 7 --pool_size 0 --resize_or_crop no --gpu_ids 0 --BP_input_nc 18 --no_flip --which_model_netG PATN --niter 500 --niter_decay 200 --checkpoints_dir ./checkpoints --pairLst ./fashion_data/fasion-resize-pairs-train.csv --L1_type l1_plus_perL1 --n_layers_D 3 --with_D_PP 1 --with_D_PB 1  --display_id 0

Test the model

Market1501

python test.py --dataroot ./market_data/ --name market_PATN --model PATN --phase test --dataset_mode keypoint --norm batch --batchSize 1 --resize_or_crop no --gpu_ids 2 --BP_input_nc 18 --no_flip --which_model_netG PATN --checkpoints_dir ./checkpoints --pairLst ./market_data/market-pairs-test.csv --which_epoch latest --results_dir ./results --display_id 0

DeepFashion

python test.py --dataroot ./fashion_data/ --name fashion_PATN --model PATN --phase test --dataset_mode keypoint --norm instance --batchSize 1 --resize_or_crop no --gpu_ids 0 --BP_input_nc 18 --no_flip --which_model_netG PATN --checkpoints_dir ./checkpoints --pairLst ./fashion_data/fasion-resize-pairs-test.csv --which_epoch latest --results_dir ./results --display_id 0

Evaluation

We adopt SSIM, mask-SSIM, IS, mask-IS, DS, and PCKh to evaluate Market-1501, and SSIM, IS, DS, and PCKh to evaluate DeepFashion.

1) SSIM, mask-SSIM, IS and mask-IS

For evaluation, TensorFlow 1.4.1 (Python 3) is required. Please see requirements_tf.txt for details.

For Market-1501:

python tool/getMetrics_market.py

For DeepFashion:

python tool/getMetrics_fashion.py
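
If you want a quick sanity check outside the provided TensorFlow scripts, a rough SSIM sketch with scikit-image could look like the following (the result-file layout and naming are assumptions; the official numbers should come from the getMetrics scripts above):

import glob
import numpy as np
from skimage.io import imread
from skimage.metrics import structural_similarity as ssim  # needs a recent scikit-image

scores = []
for fake_path in glob.glob('results/fashion_PATN/test_latest/images/*_fake.png'):  # assumed naming
    real_path = fake_path.replace('_fake.png', '_real.png')                        # assumed naming
    fake, real = imread(fake_path), imread(real_path)
    scores.append(ssim(real, fake, channel_axis=-1, data_range=255))
print('mean SSIM over %d pairs: %.4f' % (len(scores), np.mean(scores)))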

If you still have problems with the evaluation, please consider using Docker:

docker run -v <Pose-Transfer path>:/tmp -w /tmp --runtime=nvidia -it --rm tensorflow/tensorflow:1.4.1-gpu-py3 bash
# now in docker:
$ pip install scikit-image tqdm 
$ python tool/getMetrics_market.py

Refer to this Issue.

2) DS Score

Download the SSD model pretrained on VOC 300x300 and install the proper Caffe version of SSD. Put the model in the ssd_score folder.

For Market-1501:

python compute_ssd_score_market.py --input_dir path/to/generated/images

For DeepFashion:

python compute_ssd_score_fashion.py --input_dir path/to/generated/images

3) PCKh

  • First, run tool/crop_market.py or tool/crop_fashion.py.
  • Download pose estimator from Google Drive or Baidu Disk. Put it under the root folder Pose-Transfer.
  • Change the paths input_folder and output_path in tool/compute_coordinates.py. And then launch
python2 compute_coordinates.py
  • Run tool/calPCKH_fashion.py or tool/calPCKH_market.py.
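
For intuition, a rough sketch of the PCKh@0.5 idea (not the repo's exact calPCKH script): a predicted joint counts as correct when it lies within 0.5 x the head-segment length of the annotated joint. The keypoint indices used for the head segment below are assumptions for the 18-keypoint layout.

import numpy as np

def pckh(pred, gt, head_pair=(0, 1), alpha=0.5):
    # pred, gt: (18, 2) arrays of keypoints; invalid joints are marked with -1.
    head_size = np.linalg.norm(gt[head_pair[0]] - gt[head_pair[1]])
    correct = total = 0
    for p, g in zip(pred, gt):
        if (g < 0).any() or (p < 0).any():
            continue
        total += 1
        correct += int(np.linalg.norm(p - g) <= alpha * head_size)
    return correct / max(total, 1)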

Pre-trained model

Our pre-trained model can be downloaded from Google Drive or Baidu Disk.

Citation

If you use this code for your research, please cite our paper.

@inproceedings{zhu2019progressive,
  title={Progressive Pose Attention Transfer for Person Image Generation},
  author={Zhu, Zhen and Huang, Tengteng and Shi, Baoguang and Yu, Miao and Wang, Bofei and Bai, Xiang},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={2347--2356},
  year={2019}
}

Acknowledgments

Our code is based on the popular pytorch-CycleGAN-and-pix2pix.

Comments
  • FileNotFoundError: [Errno 2] No such file or directory: 'market_data/trainK/0432_c5s1_105323_05.jpg.npy'

    root@1123a6f58736:/home/Pose-Transfer-master/Pose-Transfer# python tool/generate_pose_map_market.py
    processing 0 / 12936 ...
    market_data/trainK 0432_c5s1_105323_05.jpg
    Traceback (most recent call last):
      File "tool/generate_pose_map_market.py", line 41, in <module>
        compute_pose(img_dir, annotations_file, save_path)
      File "tool/generate_pose_map_market.py", line 39, in compute_pose
        np.save(file_name, pose)
      File "/usr/local/lib/python3.6/dist-packages/numpy/lib/npyio.py", line 524, in save
        fid = open(file, "wb")
    FileNotFoundError: [Errno 2] No such file or directory: 'market_data/trainK/0432_c5s1_105323_05.jpg.npy'

    opened by 2110317008 13
  • no path separator ‘/’ of image paths in train.lst and test.lst

    Hi, thanks for your interesting work! I found that there is no path separator ‘/’ in the image paths in train.lst and test.lst, so when I run generate_fashion_datasets.py, I get the following messages:

    cp: cannot stat 'fashion_data/DeepFashion/fashionWOMENTees_Tanksid0000290606_2side.jpg': No such file or directory
    cp: cannot stat 'fashion_data/DeepFashion/fashionWOMENPantsid0000687301_1front.jpg': No such file or directory
    ......

    How can I fix this? Thanks a lot!

    opened by SkyeLu 10
  • Inception Score of Test Images

    Hi, I got the test set according to your split and I used the Inception Score implementation here: https://github.com/sbarratt/inception-score-pytorch.git. The problem is that I got 3.50 on the 12000 test images and 3.49 on the 7284 filtered images with the repeated ones removed. I have tried all the suggestions in the original repo to get a better score. Do you have any idea about this problem?

    opened by loadder 10
  • when i test fashion-data, it shows {"eid": "main"} and stops there

    Total number of parameters: 41358019

    model [TransferModel] was created
    WARNING:root:Setting up a new session...
    POST /env/main HTTP/1.1
    Host: localhost:8097
    Accept: */*
    Connection: keep-alive
    User-Agent: python-requests/2.21.0
    Accept-Encoding: gzip, deflate
    Content-Length: 15

    {"eid": "main"}

    opened by ziyuwzf 9
  • The size of tensor a (16) must match the size of tensor b (320) at non-singleton dimension 3

    ....
    Total number of parameters: 41365763

    model [TransferModel] was created
    WARNING:root:Setting up a new session...
    200 3 False
    process 0/999999 img ..
    Traceback (most recent call last):
      File "test.py", line 39, in <module>
        model.test()
      File "/home/ldf/www/Pose-Transfer/models/PATN.py", line 137, in test
        self.fake_p2 = self.netG(G_input)
      File "/home/ldf/www/py35_pytorch031/lib/python3.5/site-packages/torch/nn/modules/module.py", line 357, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/ldf/www/Pose-Transfer/models/model_variants.py", line 165, in forward
        return nn.parallel.data_parallel(self.model, input, self.gpu_ids)
      File "/home/ldf/www/py35_pytorch031/lib/python3.5/site-packages/torch/nn/parallel/data_parallel.py", line 115, in data_parallel
        return module(*inputs[0], **module_kwargs[0])
      File "/home/ldf/www/py35_pytorch031/lib/python3.5/site-packages/torch/nn/modules/module.py", line 357, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/ldf/www/Pose-Transfer/models/model_variants.py", line 148, in forward
        x1, x2, _ = model(x1, x2)
      File "/home/ldf/www/py35_pytorch031/lib/python3.5/site-packages/torch/nn/modules/module.py", line 357, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/ldf/www/Pose-Transfer/models/model_variants.py", line 63, in forward
        x1_out = x1_out * att
    RuntimeError: The size of tensor a (16) must match the size of tensor b (320) at non-singleton dimension 3

    opened by wduo 6
  • running_mean and running_var how to remove by code?

    Hello, when I use the pre-training model, I get the following error, what should I do?

    RuntimeError: Error(s) in loading state_dict for PATNetwork:
    Unexpected running stats buffer(s) "model.stream1_down.2.running_mean" and "model.stream1_down.2.running_var" for InstanceNorm2d with track_running_stats=False. If state_dict is a checkpoint saved before 0.4.0, this may be expected because InstanceNorm2d does not track running stats by default since 0.4.0. Please remove these keys from state_dict. If the running stats are actually needed, instead set track_running_stats=True in InstanceNorm2d to enable them. See the documentation of InstanceNorm2d for details.
    (The same "Unexpected running stats buffer(s)" message is repeated for model.stream1_down.5, model.stream1_down.8, model.stream2_down.2, model.stream2_down.5, model.stream2_down.8, model.att.0.conv_block_stream1.2 and model.att.0.conv_block_stream1.7.)

    Also, rm_insnorm_running_vars.py couldn't be found in the tool folder, thank you.

    opened by Gavin-Evans 5
  • No model details in ‘pose_estimator.h5’

    Hi, I tried to run 'compute_coordinates.py' to generate keypoint files for the DukeMTMC-reID dataset. But there is a warning "No training configuration found in save file", and the keypoint data in the csv file are all -1. I would appreciate it if you could help me with this problem :-)

    opened by ztjryg4 5
  • Inception Score and SSIM score on market-1501 of test

    Hi! Thanks for the open-source code of your project. I have a few questions about the IS and SSIM scores. I followed the readme, using your data and code, but I got results different from your paper: IS 3.1618128, Mask-IS 3.735669, SSIM 0.281508094, Mask-SSIM 0.799356671. What is the reason?

    opened by longintlong 5
  • What do "BP" and "P" stand for in the program?

    Greetings! When we are training the model with the market-1501 dataset, we have P_input_nc == 3 and BP_input_nc == 18. I assume P_input_nc is the number of channels for input images, while BP_input_nc is the number of channels for input poses, aka the 18-keypoint representation. In this case, what do "BP" and "P" stand for respectively? Thanks in advance!

    opened by QuintessaQ 5
  • docker image for Evaluation

    I have tried & failed 3000 times to do the evaluation. Finally I came to a conclusion: docker, docker, we need docker!

    Share my solution below:

    docker run -v <Pose-Transfer path>:/tmp -w /tmp --runtime=nvidia -it --rm tensorflow/tensorflow:1.4.1-gpu-py3 bash
    # now in docker:
    $ pip install scikit-image tqdm 
    $ python tool/getMetrics_market.py
    
    opened by budui 5
  • Bad IS and SSIM score

    Really, really, thanks for your awesome work.

    However, using your code and hyper-parameters, I got bad evaluation results after training by myself: SSIM=0.268, IS=3.652, mask-SSIM=0.792, mask-IS=3.733.

    Meanwhile, using your provided pre-trained model, I got almost the right results: SSIM=0.311, IS=3.329, mask-SSIM=0.811, mask-IS=3.778.

    My training environment is pytorch 0.3.1; I don't know why the results are different.

    Thanks again~

    opened by dawn7564 4
  • Bump certifi from 2019.3.9 to 2022.12.7

    Bumps certifi from 2019.3.9 to 2022.12.7. Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself.

    dependencies 
    opened by dependabot[bot] 0
  • Bump pillow from 5.4.1 to 9.3.0

    Bumps pillow from 5.4.1 to 9.3.0. Release notes: https://pillow.readthedocs.io/en/stable/releasenotes/9.3.0.html. Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself.

    dependencies 
    opened by dependabot[bot] 0
  • Bump tensorflow-gpu from 1.4.1 to 2.9.3

    Bumps tensorflow-gpu from 1.4.1 to 2.9.3. The 2.9.3 release introduces several vulnerability fixes. Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself.

    dependencies 
    opened by dependabot[bot] 0
  • Bump protobuf from 3.7.1 to 3.18.3

    Bumps protobuf from 3.7.1 to 3.18.3. Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself.

    dependencies 
    opened by dependabot[bot] 0
  • Hi, actually I use following code to fix it (suppose the directory 'fashion' is where you store the downloaded dataset). Can you say this in detail?

    Hi, actually I use the following code to fix it (suppose the directory 'fashion' is where you store the downloaded dataset). To run this code, just download the dataset and rename it to 'fashion'; the renamed images will be stored in the directory 'DeepFashion'.

    import os
    import shutil

    IMG_EXTENSIONS = [
        '.jpg', '.JPG', '.jpeg', '.JPEG',
        '.png', '.PNG', '.ppm', '.PPM', '.bmp', '.BMP',
    ]

    def is_image_file(filename):
        return any(filename.endswith(extension) for extension in IMG_EXTENSIONS)

    def make_dataset(dir):
        images = []
        assert os.path.isdir(dir), '%s is not a valid directory' % dir
        new_root = 'DeepFashion'
        if not os.path.exists(new_root):
            os.mkdir(new_root)

        for root, _, fnames in sorted(os.walk(dir)):
            for fname in fnames:
                if is_image_file(fname):
                    path = os.path.join(root, fname)
                    path_names = path.split('/')
                    # path_names[2] = path_names[2].replace('_', '')
                    path_names[3] = path_names[3].replace('_', '')
                    path_names[4] = path_names[4].split('_')[0] + "_" + "".join(path_names[4].split('_')[1:])
                    path_names = "".join(path_names)
                    new_path = os.path.join(root, path_names)

                    os.rename(path, new_path)
                    shutil.move(new_path, os.path.join(new_root, path_names))

    make_dataset('fashion')

    Originally posted by @SkyeLu in https://github.com/tengteng95/Pose-Transfer/issues/24#issuecomment-515350673

    opened by llll5555 0