Official code for NeurIPS 2021 paper "Towards Scalable Unpaired Virtual Try-On via Patch-Routed Spatially-Adaptive GAN"

Overview

Towards Scalable Unpaired Virtual Try-On via Patch-Routed Spatially-Adaptive GAN

This repository provides the official PyTorch implementation of PASTA-GAN, published at NeurIPS 2021.

Requirements

Create a virtual environment:

virtualenv pasta --python=3.7
source pasta/bin/activate

Install required packages:

pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html
pip install click requests tqdm pyspng ninja imageio-ffmpeg==0.4.3
pip install psutil scipy matplotlib opencv-python scikit-image==0.18.3 pycocotools
apt install libgl1-mesa-glx
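
To quickly verify the installation, the minimal sketch below (not part of the repository scripts) checks that the CUDA build of PyTorch and the image libraries installed above can be imported:

# sanity check: all of these packages are installed by the commands above
import torch, torchvision, cv2, skimage
print('torch', torch.__version__, '| torchvision', torchvision.__version__)
print('CUDA runtime', torch.version.cuda, '| CUDA available:', torch.cuda.is_available())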

Data Preparation

Since the copyright of the UPT dataset belongs to the e-commerce websites Zalando and Zalora, we only release the image links in this link. For more details about the dataset and the crawling scripts, please send an email to [email protected].

After downloading the raw RGB images, we run the pose estimator OpenPose and the human parser Graphonomy on each image to obtain the 18-point human keypoints and the 19-label human parsing, respectively.
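
For reference, the sketch below shows one way to load these annotations; it assumes the standard OpenPose JSON layout (a "people" list holding flattened x, y, confidence triplets; the exact key name can vary across OpenPose versions) and a single-channel parsing PNG whose pixel values are the 19 label indices. The file names are hypothetical.

import json
import numpy as np
from PIL import Image

# Hypothetical file names; adjust them to your own data.
with open('keypoints/image1_keypoints.json') as f:
    pose_data = json.load(f)
# Flattened [x1, y1, c1, x2, y2, c2, ...] triplets for the 18 keypoints of the first person.
keypoints = np.array(pose_data['people'][0]['pose_keypoints_2d'], dtype=np.float32).reshape(-1, 3)

# Single-channel label map in which each pixel stores one of the 19 human-parsing labels.
parsing = np.array(Image.open('parsing/image1.png'))

print(keypoints.shape)     # expected: (18, 3)
print(np.unique(parsing))  # subset of the 19 label indices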

The dataset structure is recommended as:

+--UPT_256_192
|   +--UPT_subset1_256_192
|       +--image
|           +-- e.g. image1.jpg
|           +-- ...
|       +--keypoints
|           +-- e.g. image1_keypoints.json
|           +-- ...
|       +--parsing
|           +-- e.g. image1.png
|           +-- ...
|       +--train_pairs_front_list_0508.txt
|       +--test_pairs_front_list_shuffle_0508.txt
|   +--UPT_subset2_256_192
|       +--image
|           +-- ...
|       +--keypoints
|           +-- ...
|       +--parsing
|           +-- ...
|       +--train_pairs_front_list_0508.txt
|       +--test_pairs_front_list_shuffle_0508.txt
|   +-- ...

Using the raw RGB images, human keypoints, and human parsing, we can run the training and testing scripts.
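
Before running either script, a quick consistency check such as the hypothetical snippet below can confirm that every image in a subset has matching keypoints and parsing files (the naming convention is inferred from the directory tree above):

import os

root = 'UPT_256_192/UPT_subset1_256_192'  # hypothetical subset path; adjust as needed
for name in sorted(os.listdir(os.path.join(root, 'image'))):
    base = os.path.splitext(name)[0]
    keypoint_file = os.path.join(root, 'keypoints', base + '_keypoints.json')
    parsing_file = os.path.join(root, 'parsing', base + '.png')
    if not (os.path.isfile(keypoint_file) and os.path.isfile(parsing_file)):
        print('missing annotation for', name)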

Running Inference

We provide pre-trained PASTA-GAN models trained on the full UPT dataset (i.e., our newly collected data together with data from the DeepFashion and MPV datasets), with separate models at resolutions of 256 and 512.

We provide a simple script to test the pre-trained models above on the UPT dataset as follows:

CUDA_VISIBLE_DEVICES=0 python3 -W ignore test.py \
    --network /datazy/Codes/PASTA-GAN/PASTA-GAN_fullbody_model/network-snapshot-004000.pkl \
    --outdir /datazy/Datasets/pasta-gan_results/unpaired_results_fulltryonds \
    --dataroot /datazy/Datasets/PASTA_UPT_256 \
    --batchsize 16

Alternatively, you can run the bash script with the following command:

bash test.sh 1

To test with the higher-resolution pre-trained model (512x320), run the bash script with the following command:

bash test.sh 2

Note that, in the testing script, --network is the path to the pre-trained model, --outdir is the directory for the generated results, and --dataroot is the path to the data root. Before running the testing script, please make sure these parameters point to the correct locations.
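
For convenience, the hypothetical check below confirms that the three locations exist before launching test.py or test.sh; the example values match the command shown above.

import os

# Example values from the command above; replace them with your own locations.
network = '/datazy/Codes/PASTA-GAN/PASTA-GAN_fullbody_model/network-snapshot-004000.pkl'
outdir = '/datazy/Datasets/pasta-gan_results/unpaired_results_fulltryonds'
dataroot = '/datazy/Datasets/PASTA_UPT_256'

assert os.path.isfile(network), 'pre-trained model not found: ' + network
assert os.path.isdir(dataroot), 'data root not found: ' + dataroot
os.makedirs(outdir, exist_ok=True)  # create the output directory if it does not exist
print('all paths resolved; ready to run the testing script')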

Running Training

Training the 256x192 PASTA-GAN full body model on the UPT dataset

  1. Download the UPT_256_192 training set.
  2. Download the VGG model from VGG_model, then put "vgg19_conv.pth" and "vgg19-dcbb9e9d" under the directory "checkpoints".
  3. Run bash train.sh 1.

Todo

  • Release the pretrained model (256x192) and the inference script.
  • Release the training script.
  • Release the pretrained model (512x320).
  • Release the training script for model (512x320).

License

The use of this code is RESTRICTED to non-commercial research and educational purposes.

Comments
  • RuntimeError: Given groups=1, weight of size [64, 42, 1, 1], expected input[1, 60, 64, 64] to have 42 channels, but got 60 channels instead

    I used 'bash train.sh 1' to train a model and 'bash test.sh 1' to run inference. @xiezhy6

    Traceback (most recent call last):
      File "test.py", line 160, in <module>
        generate_images() # pylint: disable=no-value-for-parameter
      File "/usr/local/miniconda3/lib/python3.8/site-packages/click/core.py", line 1128, in __call__
        return self.main(*args, **kwargs)
      File "/usr/local/miniconda3/lib/python3.8/site-packages/click/core.py", line 1053, in main
        rv = self.invoke(ctx)
      File "/usr/local/miniconda3/lib/python3.8/site-packages/click/core.py", line 1395, in invoke
        return ctx.invoke(self.callback, **ctx.params)
      File "/usr/local/miniconda3/lib/python3.8/site-packages/click/core.py", line 754, in invoke
        return __callback(*args, **kwargs)
      File "/usr/local/miniconda3/lib/python3.8/site-packages/click/decorators.py", line 26, in new_func
        return f(get_current_context(), *args, **kwargs)
      File "test.py", line 122, in generate_images
        gen_c, cat_feat_list = G.style_encoding(norm_img_c_tensor, retain_tensor)
      File "/usr/local/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/data1/codes/PASTA-GAN-main/training/networks.py", line 4880, in forward
        x = module(x)
      File "/usr/local/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/data1/codes/PASTA-GAN-main/training/networks.py", line 174, in forward
        x = conv2d_resample.conv2d_resample(x=x, w=w.to(x.dtype), f=self.resample_filter, up=self.up, down=self.down, padding=self.padding, flip_weight=flip_weight)
      File "/data1/codes/PASTA-GAN-main/torch_utils/misc.py", line 107, in decorator
        return fn(*args, **kwargs)
      File "/data1/codes/PASTA-GAN-main/torch_utils/ops/conv2d_resample.py", line 147, in conv2d_resample
        return _conv2d_wrapper(x=x, w=w, padding=[py0,px0], groups=groups, flip_weight=flip_weight)
      File "/data1/codes/PASTA-GAN-main/torch_utils/ops/conv2d_resample.py", line 54, in _conv2d_wrapper
        return op(x, w, stride=stride, padding=padding, groups=groups)
      File "/data1/codes/PASTA-GAN-main/torch_utils/ops/conv2d_gradfix.py", line 38, in conv2d
        return torch.nn.functional.conv2d(input=input, weight=weight, bias=bias, stride=stride, padding=padding, dilation=dilation, groups=groups)
    RuntimeError: Given groups=1, weight of size [64, 42, 1, 1], expected input[1, 60, 64, 64] to have 42 channels, but got 60 channels instead

    opened by wucj123 3
  • Test: NameError: name 'os' is not defined, although the os module is installed and importable in Python

    When I run test.py, I get the following error. Has anyone managed to run test.py successfully?

    module: <module '_imported_module_65d04ead8c1241548c4d30a4fe7a76b7'>
    Loading custom kernel...
    Traceback (most recent call last):
      File "/data_superbig/znn/CODE/NIPS2021_PASTA_GAN/PASTA_GAN_main/test.py", line 162, in <module>
        generate_images() # pylint: disable=no-value-for-parameter
      File "/data_superbig/znn/Anaconda3/envs/Try_On37/lib/python3.7/site-packages/click/core.py", line 1130, in __call__
        return self.main(*args, **kwargs)
      File "/data_superbig/znn/Anaconda3/envs/Try_On37/lib/python3.7/site-packages/click/core.py", line 1055, in main
        rv = self.invoke(ctx)
      File "/data_superbig/znn/Anaconda3/envs/Try_On37/lib/python3.7/site-packages/click/core.py", line 1404, in invoke
        return ctx.invoke(self.callback, **ctx.params)
      File "/data_superbig/znn/Anaconda3/envs/Try_On37/lib/python3.7/site-packages/click/core.py", line 760, in invoke
        return __callback(*args, **kwargs)
      File "/data_superbig/znn/Anaconda3/envs/Try_On37/lib/python3.7/site-packages/click/decorators.py", line 26, in new_func
        return f(get_current_context(), *args, **kwargs)
      File "/data_superbig/znn/CODE/NIPS2021_PASTA_GAN/PASTA_GAN_main/test.py", line 96, in generate_images
        G = legacy.load_network_pkl(f)['G_ema'].to(device) # type: ignore
      File "/data_superbig/znn/CODE/NIPS2021_PASTA_GAN/PASTA_GAN_main/legacy.py", line 21, in load_network_pkl
        data = _LegacyUnpickler(f).load()
      File "/data_superbig/znn/CODE/NIPS2021_PASTA_GAN/PASTA_GAN_main/torch_utils/persistence.py", line 191, in _reconstruct_persistent_obj
        module = _src_to_module(meta.module_src)
      File "/data_superbig/znn/CODE/NIPS2021_PASTA_GAN/PASTA_GAN_main/torch_utils/persistence.py", line 231, in _src_to_module
        exec(src, module.__dict__) # pylint: disable=exec-used
      File "<string>", line 2243, in <module>
    NameError: name 'os' is not defined

    Process finished with exit code 1

    opened by nnzhangup 0
  • Modify a Dockerfile

    FROM nvidia/cuda:11.0.3-cudnn8-runtime-ubuntu18.04 as base
    FROM base as base-amd64
    FROM base-amd64
    FROM pytorch/pytorch:1.7.1-cuda11.0-cudnn8-runtime
    RUN apt-get update
    RUN apt-get upgrade -y
    RUN apt-get install gcc -y
    RUN pip3 install cython --use-feature=2020-resolver
    RUN pip3 install torchvision==0.8.2+cu110 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html --use-feature=2020-resolver
    RUN pip3 install scikit-build click requests tqdm pyspng ninja imageio-ffmpeg==0.4.3 --use-feature=2020-resolver
    RUN pip3 install psutil scipy matplotlib opencv-python scikit-image pycocotools --use-feature=2020-resolver
    RUN pip3 install pillow --use-feature=2020-resolver
    RUN apt-get install openssh-server -y
    RUN service ssh start
    EXPOSE 22

    opened by maizer2 0
  • Training model from scratch results in incompatibility with the testing script and inferior visual quality after adjustments

    I have attempted to reproduce the high level of quality shown by your pretrained model by training on a substantively expanded dataset. However, I have encountered a number of divergences between your dedicated training and testing dataset classes (UvitonDatasetFull and UvitonDatasetV19_test). Notably, the training dataset, the network architecture and, consequently, all new models trained through the provided script use two arrays of normalized body-part images (norm_img and norm_img_lower, with shapes 30x64x64 and 12x64x64) as style encoding inputs, while the testing dataset and the pretrained model use only one array of normalized body-part images concatenated with a normalized pose representation (norm_img and norm_pose or 'stickman', with shapes 30x64x64 and 30x64x64).
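
    A minimal illustration of the resulting channel counts, using the shapes quoted above (the concatenation itself is an assumption about how the style encoder consumes its inputs):

    import torch

    norm_img = torch.zeros(1, 30, 64, 64)        # normalized body-part patches
    norm_img_lower = torch.zeros(1, 12, 64, 64)  # lower-body patches (training dataset class)
    norm_pose = torch.zeros(1, 30, 64, 64)       # normalized pose / 'stickman' (testing dataset class)

    print(torch.cat([norm_img, norm_img_lower], dim=1).shape[1])  # 42 channels, as in newly trained models
    print(torch.cat([norm_img, norm_pose], dim=1).shape[1])       # 60 channels, as the pretrained model expects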

    My approach to resolving these differences so far was based on bringing the training dataset class into closer alignment with the testing dataset through modification of its normalize() function. After these dataset adjustments and slight changes to the training script to accommodate the new input flow, I was able to train a network model with style encoding input shape (norm_img and norm_pose) at 60x64x64, fully in line with the pretrained model. However, as attached images show, the resulting level of visual quality is far removed from yours, even at relatively advanced stages of training (8000-12000 iterations). Notably, shoulder area and general body shape experience unexpected deformations relative to the original pose, while use of full-body images as either person or garment causes severe distortion.

    Given this disappointing outcome, would it be possible for you to provide some feedback to the general direction of my efforts? Is there something I might have overlooked in my attempts to bring two dataset classes in line? Were there any additional parameters in the training script that should have received more of my attention?

    (attached example result images omitted)

    opened by albek00 0
  • Dataset download error

    Some of the linked pages no longer exist or have been moved, e.g. https://www.zalora.com.my/kasih-plus-size-fishtail-cotton-shirt-brown-1016119.html. Can you provide your downloaded UPT dataset?

    opened by AItechnology 1
  • How to get "train_random_mask_acgpn"

    It seems this part is missing when training PASTA-GAN on the UPT dataset. Would you release these images or share the code used to generate them? Thank you.

    opened by halcyon370 4