Code for "ShineOn: Illuminating Design Choices for Practical Video-based Virtual Clothing Try-on", accepted at WACV 2021 Generation of Human Behavior Workshop.

Overview

ShineOn: Illuminating Design Choices for Practical Video-based Virtual Clothing Try-on

[ Paper ] [ Project Page ]

This repository contains the code for our paper accepted at the Generation of Human Behavior Workshop at WACV 2021.

Key Contributions:

  • Scientific experiments built from the ground up to isolate the effect of each design choice
  • Empirically show that DensePose conditioning yields better quality than CocoPose
  • Add self-attention layers
  • Find that GELU gives the best results

Architecture Overview

image

How To Use This Repository

The points of entry of this repository are train.py and test.py. We have organized our code into three main folders: datasets, models, and options.

The datasets folder contains several custom defined datasets. To create your own custom tryon dataset, please refer to the Documentation IV below.

The models folder contains several models, such as the warp model and the U-Net model used in our virtual try-on work. Inside the networks sub-folder, we include several utility networks that we make use of.
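One of those utility networks is a self-attention layer (the released checkpoints contain query_conv, key_conv, value_conv, and gamma parameters, and the acknowledgements below credit a Self-Attention GAN implementation). As a hedged sketch of what such a SAGAN-style block looks like, with illustrative sizes rather than the repository's exact configuration:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SelfAttention(nn.Module):
        """SAGAN-style self-attention sketch; parameter names mirror the
        query_conv/key_conv/value_conv/gamma keys seen in the checkpoints,
        but the channel reduction factor of 8 is an assumption."""

        def __init__(self, in_dim):
            super().__init__()
            self.query_conv = nn.Conv2d(in_dim, in_dim // 8, kernel_size=1)
            self.key_conv = nn.Conv2d(in_dim, in_dim // 8, kernel_size=1)
            self.value_conv = nn.Conv2d(in_dim, in_dim, kernel_size=1)
            self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

        def forward(self, x):
            b, c, h, w = x.size()
            q = self.query_conv(x).view(b, -1, h * w).permute(0, 2, 1)  # B x N x C'
            k = self.key_conv(x).view(b, -1, h * w)                     # B x C' x N
            attn = F.softmax(torch.bmm(q, k), dim=-1)                   # B x N x N
            v = self.value_conv(x).view(b, -1, h * w)                   # B x C x N
            out = torch.bmm(v, attn.permute(0, 2, 1)).view(b, c, h, w)
            return self.gamma * out + x                                 # residual connection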

The options folder contains the options we use at train and test time. These options keep our code flexible and make it easy to run experiments.
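For example, the baseline DensePose U-Net experiment listed under issue [3.2] below is launched roughly as follows; adjust --batch and data paths to your hardware and setup:

    python train.py \
    --name vanilla_dp_unet \
    --model unet \
    --batch 64 \
    --person_inputs densepose agnostic \
    --cloth_inputs cloth \
    --val_check_interval 0.2

test.py shares the same entry logic (it calls main(train=False) in train.py), so it is driven by the same style of options at test time.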

Documentation

Results

Qualitative Comparison with FW-GAN and CP-VTON

image

Qualitative Comparison of Pose and Self-Attention

image

Qualitative Comparison of Activation Functions

image

Qualitative Comparison of Optical Flow

image

Acknowledgements and Related Code

  • This code is based in part on Sergey Wong's stellar CP-VTON repository. Thank you very much, Sergey, for your hard work.
  • Thank you Haoye Dong and his team for hosting the VUHCS competition at CVPR 2020, providing the VVT Dataset, and giving access to the FW-GAN reference code.
  • Thank you NVIDIA's team for their work on Vid2Vid and FlowNet2.
  • Credits to David Park's Self-Attention GAN implementation for attention layers reference.
  • Credits to Self-Corrective Human-Parsing for easy parsing of LIP clothing labels.
  • Credits to the detectron2 repository for Densepose annotations.
Comments
  • [1.1] MultiSPADE Generator, WITH adversarial loss, on a SINGLE frame, small multiscale weight

    Description

    Reason:

    • Hypothesize that the adversarial loss is too large and is hijacking training; scale it down to see whether learning can occur.
    • Test a small multiscale loss weight by setting it to 0.05.
    • We also want to see how the temporal loss responds, but we want the multiscale result in isolation, so set the temporal weight to a tiny 0.001 (see the weighting sketch below).
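    The intended weighting, sketched below with illustrative names (the actual loss terms live in the sams model code), keeps the multiscale term small and the temporal term nearly negligible:

    wt_multiscale, wt_temporal = 0.05, 0.001

    def total_loss(reconstruction, multiscale_adversarial, temporal):
        # The multiscale (adversarial) term is scaled down so it cannot hijack
        # training; the temporal term is tiny so its effect stays isolated.
        return (reconstruction
                + wt_multiscale * multiscale_adversarial
                + wt_temporal * temporal)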

    Planned Start Date: 9/1/2020

    Depends on Previous Experiment? Yes, follow-up of Experiment 1.0

    Train Command

    python train.py \
    --name "multiSPADE-generator_with-adversarial-loss_1-image-only_small-multiscale-weight" \
    --model sams \
    --gpu_ids 2,3,4 \
    --ngf_pow_outer 6 \
    --ngf_pow_inner 10 \
    --n_frames_total 1 \
    --batch_size 8 \
    --workers 4 \
    --vvt_data data \
    --val_check 0.2 \
    --wt_multiscale 0.05 --wt_temporal 0.001
    

    Report Results

    To report a result, copy this into a comment below:

    # Result Description
    <!--- 
    For Experiment Number, use "Major.minor.patch", e.g. 1.2.0.
    Major.minor should match the [M.m] in the title. 
    Patch describes a bug fix (change in the code or branch).
    -->
    **Experiment Number:** 1.2.0
    **Branch:** `master`
    **Timestamp:** MM/DD/YYYY 9pm PT
    **Epochs:** 
    
    
    # Architecture
    **Model Layers:**
    <!-- Paste the printed Model Layers -->
    
    **Module Parameters:**
    <!-- Paste the Params table -->
    
    
    # Loss Graphs
    <!--- Put detailed loss graphs here. Please include all graphs! -->
    
    # Image Results
    <!--- Put detailed image results here. Please include all images! Multiple screenshots is good. -->
    
    # Comments, Observations, or Insights
    <!--- Optional -->
    
    
    experiment 
    opened by andrewjong 10
  • [4.2] Effects of GELU on UNet training

    Description

    Effects of GELU on UNet training.

    Planned Start Date:

    Depends on Previous Experiment? Y/N

    Train Command

    python train.py \
    --name vanilla_dp_unet_attn_gelu \
    --model unet \
    --batch MAX \
    --person_inputs densepose agnostic \
    --cloth_inputs cloth \
    --val_check_interval  0.05 \
    --self_attn \
    --accumulated_batches 64 / MAX \
    --activation gelu
    

    Report Results

    To report a result, copy this into a comment below:

    # Result Description
    <!--- 
    For Experiment Number, use "Major.minor.patch", e.g. 1.2.0.
    Major.minor should match the [M.m] in the title. 
    Patch describes a bug fix (change in the code or branch).
    -->
    **Experiment Number:** 1.2.0
    **Branch:** `master`
    **Timestamp:** MM/DD/YYYY 9pm PT
    **Epochs:** 
    
    
    # Architecture
    **Model Layers:**
    <!-- Paste the printed Model Layers -->
    
    **Module Parameters:**
    <!-- Paste the Params table -->
    
    
    # Loss Graphs
    <!--- Put detailed loss graphs here. Please include all graphs! -->
    
    # Image Results
    <!--- Put detailed image results here. Please include all images! Multiple screenshots is good. -->
    
    # Comments, Observations, or Insights
    <!--- Optional -->
    
    experiment 
    opened by gauravkuppa 4
  • [4.3]

    Description

    Explain why we're running this and what we expect.

    Planned Start Date: 9/9/20

    Depends on Previous Experiment? Y/N

    Train Command

    python train.py \
    --name vanilla_dp_unet_attn_swish \
    --model unet \
    --batch MAX \
    --person_inputs densepose agnostic \
    --cloth_inputs cloth \
    --val_check_interval  0.05 \
    --self_attn \
    --accumulated_batches 64 / MAX \
    --activation swish
    

    Report Results

    To report a result, copy this into a comment below:

    # Result Description
    <!--- 
    For Experiment Number, use "Major.minor.patch", e.g. 1.2.0.
    Major.minor should match the [M.m] in the title. 
    Patch describes a bug fix (change in the code or branch).
    -->
    **Experiment Number:** 1.2.0
    **Branch:** `master`
    **Timestamp:** MM/DD/YYYY 9pm PT
    **Epochs:** 
    
    
    # Architecture
    **Model Layers:**
    <!-- Paste the printed Model Layers -->
    
    **Module Parameters:**
    <!-- Paste the Params table -->
    
    
    # Loss Graphs
    <!--- Put detailed loss graphs here. Please include all graphs! -->
    
    # Image Results
    <!--- Put detailed image results here. Please include all images! Multiple screenshots is good. -->
    
    # Comments, Observations, or Insights
    <!--- Optional -->
    
    experiment 
    opened by gauravkuppa 3
  • [3.3] Vanilla UNet w/ DP w/ Attn

    Description

    We are running this experiment to observe the effects of self-attention on the UNet and to quantify them.

    Planned Start Date: 9/7/20

    Depends on Previous Experiment? N

    Train Command

    python train.py \
    --name vanilla_dp_unet_attn \
    --model unet \
    --batch 4 \
    --person_inputs densepose agnostic \
    --cloth_inputs cloth \
    --val_check_interval 0.2 \
    --self_attn \
    --accumulated_batches 16
    

    Report Results

    To report a result, copy this into a comment below:

    # Result Description
    <!--- 
    For Experiment Number, use "Major.minor.patch", e.g. 1.2.0.
    Major.minor should match the [M.m] in the title. 
    Patch describes a bug fix (change in the code or branch).
    -->
    **Experiment Number:** 1.2.0
    **Branch:** `master`
    **Timestamp:** MM/DD/YYYY 9pm PT
    **Epochs:** 
    
    
    # Architecture
    **Model Layers:**
    <!-- Paste the printed Model Layers -->
    
    **Module Parameters:**
    <!-- Paste the Params table -->
    
    
    # Loss Graphs
    <!--- Put detailed loss graphs here. Please include all graphs! -->
    
    # Image Results
    <!--- Put detailed image results here. Please include all images! Multiple screenshots is good. -->
    
    # Comments, Observations, or Insights
    <!--- Optional -->
    
    • [x] Open GitHub Issue
    • [x] Start training with tmux (tensorboard and training)
    • [x] Upload scalars, train, and validation images to GitHub
    • [ ] Upload checkpoints to Google Drive
    • [ ] Generate test results from latest epoch
    • [ ] Calculate metrics (PSNR, SSIM; see the sketch after this checklist)
    • [ ] Visualize metrics
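    A hedged sketch of the metrics step, assuming frames are compared as HxWxC uint8 RGB arrays (the repo's actual evaluation script may differ):

    import numpy as np
    from skimage.metrics import structural_similarity

    def psnr(a, b, data_range=255.0):
        # Peak signal-to-noise ratio between a reference and a generated frame.
        mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
        return float("inf") if mse == 0 else 10 * np.log10(data_range ** 2 / mse)

    def ssim(a, b):
        # channel_axis=-1 assumes color images and scikit-image >= 0.19.
        return structural_similarity(a, b, channel_axis=-1, data_range=255)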
    experiment 
    opened by gauravkuppa 3
  • [3.1] Vanilla UNet

    Description

    Establish a baseline with the CP-VTON model.

    Planned Start Date: 9/7/20

    Depends on Previous Experiment? No

    Train Command

    python train.py \
    --name vanilla_unet \
    --model unet \
    --batch MAX \
    --person_inputs cocopose agnostic \
    --cloth_inputs cloth \
    --val_check_interval 0.2
    
    

    Report Results

    To report a result, copy this into a comment below:

    # Result Description
    <!--- 
    For Experiment Number, use "Major.minor.patch", e.g. 1.2.0.
    Major.minor should match the [M.m] in the title. 
    Patch describes a bug fix (change in the code or branch).
    -->
    **Experiment Number:** 1.2.0
    **Branch:** `master`
    **Timestamp:** MM/DD/YYYY 9pm PT
    **Epochs:** 
    
    
    # Architecture
    **Model Layers:**
    <!-- Paste the printed Model Layers -->
    
    **Module Parameters:**
    <!-- Paste the Params table -->
    
    
    # Loss Graphs
    <!--- Put detailed loss graphs here. Please include all graphs! -->
    
    # Image Results
    <!--- Put detailed image results here. Please include all images! Multiple screenshots is good. -->
    
    # Comments, Observations, or Insights
    <!--- Optional -->
    
    • [x] Open GitHub Issue
    • [x] Start training with tmux (tensorboard and training)
    • [x] Upload scalars, train, and validation images to GitHub
    • [ ] Upload checkpoints to Google Drive
    • [x] Generate test results from latest epoch
      • File path: /data_hdd/gaurav/wacv_unet_experiments/vanilla_unet
    • [ ] Calculate metrics (PSNR, SSIM)
    • [ ] Visualize metrics
    experiment 
    opened by gauravkuppa 3
  • [4.4]

    Description

    Explain why we're running this and what we expect.

    Planned Start Date:

    Depends on Previous Experiment? Y/N

    Train Command

    python train.py \
    --name vanilla_dp_unet_attn_sine \
    --model unet \
    --batch MAX \
    --person_inputs densepose agnostic \
    --cloth_inputs cloth \
    --val_check_interval  0.05 \
    --self_attn \
    --accumulated_batches 64 / MAX \
    --activation sine
    

    Report Results

    To report a result, copy this into a comment below:

    # Result Description
    <!--- 
    For Experiment Number, use "Major.minor.patch", e.g. 1.2.0.
    Major.minor should match the [M.m] in the title. 
    Patch describes a bug fix (change in the code or branch).
    -->
    **Experiment Number:** 1.2.0
    **Branch:** `master`
    **Timestamp:** MM/DD/YYYY 9pm PT
    **Epochs:** 
    
    
    # Architecture
    **Model Layers:**
    <!-- Paste the printed Model Layers -->
    
    **Module Parameters:**
    <!-- Paste the Params table -->
    
    
    # Loss Graphs
    <!--- Put detailed loss graphs here. Please include all graphs! -->
    
    # Image Results
    <!--- Put detailed image results here. Please include all images! Multiple screenshots is good. -->
    
    # Comments, Observations, or Insights
    <!--- Optional -->
    
    wontfix experiment 
    opened by gauravkuppa 2
  • [4.1]

    Description

    Set both uprelu and downrelu to ReLU to establish a baseline and check whether plain ReLU behaves differently from the (ReLU, LeakyReLU) combination.

    Planned Start Date: 9/9/20

    Depends on Previous Experiment? N

    Train Command

    python train.py \
    --name vanilla_dp_unet_attn_relu \
    --model unet \
    --batch 4 \
    --person_inputs densepose agnostic \
    --cloth_inputs cloth \
    --val_check_interval 0.05 \
    --self_attn \
    --accumulated_batches 16 \
    --activation relu
    

    Report Results

    To report a result, copy this into a comment below:

    # Result Description
    <!--- 
    For Experiment Number, use "Major.minor.patch", e.g. 1.2.0.
    Major.minor should match the [M.m] in the title. 
    Patch describes a bug fix (change in the code or branch).
    -->
    **Experiment Number:** 1.2.0
    **Branch:** `master`
    **Timestamp:** MM/DD/YYYY 9pm PT
    **Epochs:** 
    
    
    # Architecture
    **Model Layers:**
    <!-- Paste the printed Model Layers -->
    
    **Module Parameters:**
    <!-- Paste the Params table -->
    
    
    # Loss Graphs
    <!--- Put detailed loss graphs here. Please include all graphs! -->
    
    # Image Results
    <!--- Put detailed image results here. Please include all images! Multiple screenshots is good. -->
    
    # Comments, Observations, or Insights
    <!--- Optional -->
    
    • [x] Open GitHub Issue
    • [x] Start training with tmux (tensorboard and training)
    • [x] Upload scalars, train, and validation images to GitHub
    • [x] Upload checkpoints to Google Drive
    • [x] Generate test results from latest epoch
    • [x] Calculate metrics (PSNR, SSIM)
    • [x] Visualize metrics
    experiment 
    opened by gauravkuppa 2
  • [3.2] Vanilla UNet w/ DensePose

    Description

    Measure the effects of DensePose versus CocoPose conditioning.

    Planned Start Date: 9/7/20

    Depends on Previous Experiment? Y/N

    Train Command

    python train.py \
    --name vanilla_dp_unet \
    --model unet \
    --batch 64 \
    --person_inputs densepose agnostic \
    --cloth_inputs cloth \
    --val_check_interval 0.2
    

    Report Results

    To report a result, copy this into a comment below:

    # Result Description
    <!--- 
    For Experiment Number, use "Major.minor.patch", e.g. 1.2.0.
    Major.minor should match the [M.m] in the title. 
    Patch describes a bug fix (change in the code or branch).
    -->
    **Experiment Number:** 1.2.0
    **Branch:** `master`
    **Timestamp:** MM/DD/YYYY 9pm PT
    **Epochs:** 
    
    
    # Architecture
    **Model Layers:**
    <!-- Paste the printed Model Layers -->
    
    **Module Parameters:**
    <!-- Paste the Params table -->
    
    
    # Loss Graphs
    <!--- Put detailed loss graphs here. Please include all graphs! -->
    
    # Image Results
    <!--- Put detailed image results here. Please include all images! Multiple screenshots is good. -->
    
    # Comments, Observations, or Insights
    <!--- Optional -->
    
    • [x] Open GitHub Issue
    • [x] Start training with tmux (tensorboard and training)
    • [x] Upload scalars, train, and validation images to GitHub
    • [x] Upload checkpoints to Google Drive
    • [ ] Generate test results from latest epoch
    • [ ] Calculate metrics (PSNR, SSIM)
    • [ ] Visualize metrics
    experiment 
    opened by gauravkuppa 2
  • Visualize validation too

    This adds validation visualization and scalar metrics to Tensorboard. This way we know we're not overfitting.

    Because we want BaseModel to be able to call visualize() outside the train step for validation, it needs a common visualize API. Therefore, this modifies the visualize() signature to accept only two arguments: 1) the input batch and 2) the tag ("train" or "validation"). Outputs to be visualized are now saved to self, e.g. self.p_rendered.
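    A minimal sketch of the new API, with hypothetical tensor names (each model saves its own outputs to self during the forward pass, so only the batch and tag are passed in):

    from visualization import tensor_list_for_board

    class BaseModelVisualizeMixin:
        def visualize(self, batch, tag="train"):
            # tag is "train" or "validation"; outputs like self.p_rendered were
            # already stored by the forward pass, so nothing else is passed in.
            visuals = [[batch["image"], self.p_rendered]]  # "image" key is illustrative
            board = tensor_list_for_board(visuals)
            # Assumes the Lightning TensorBoard logger; board[0] is one CxHxW image.
            self.logger.experiment.add_image(f"{tag}/try-on", board[0], self.global_step)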

    image

    image

    There is a bug when I tried this with the UNetMask model, though. Any idea what might cause this, @gauravkuppa? I'm having trouble figuring it out.

    $ python train.py --name DELETEME_unet_validation_vis -j 8  --model unet_mask  --person_inputs agnostic densepose --cloth_inputs cloth --self_attn --flow --n_frames_total 1 -b 4  --ngf 16 --val_check 0.0005
    
    logger | 2020-08-17 11:14:07 | ERROR | Traceback (most recent call last):                                                                                                                                          
      File "train.py", line 69, in main
        trainer.fit(model)
      File "/home/andrew/.miniconda3/envs/sams/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1003, in fit
        results = self.single_gpu_train(model)
      File "/home/andrew/.miniconda3/envs/sams/lib/python3.8/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 186, in single_gpu_train
        results = self.run_pretrain_routine(model)
      File "/home/andrew/.miniconda3/envs/sams/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1193, in run_pretrain_routine
        eval_results = self._evaluate(model,
      File "/home/andrew/.miniconda3/envs/sams/lib/python3.8/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 337, in _evaluate
        eval_results = model.validation_end(outputs)
      File "/home/andrew/Development/2021-wacv-video-vton_deploy/models/base_model.py", line 110, in validation_end
        self.visualize(self.batch, "validation")
      File "/home/andrew/Development/2021-wacv-video-vton_deploy/models/unet_mask_model.py", line 195, in visualize
        tensor = tensor_list_for_board(visuals)
      File "/home/andrew/Development/2021-wacv-video-vton_deploy/visualization.py", line 22, in tensor_list_for_board
        batch_size, channel, height, width = tensor_for_board(img_tensors_list[0][0]).size()
      File "/home/andrew/Development/2021-wacv-video-vton_deploy/visualization.py", line 13, in tensor_for_board
        tensor = tensor.repeat(1, 3, 1, 1)
    RuntimeError: Number of dimensions of repeat dims can not be smaller than number of dimensions of tensor
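    For reference, a minimal repro of that PyTorch error, under my assumption that one of the visuals is 5-D (e.g. batch x frames x C x H x W) while tensor_for_board only passes four repeat factors:

    import torch

    ok = torch.rand(4, 1, 64, 48).repeat(1, 3, 1, 1)   # 4-D works: tiles 1 channel to 3

    bad = torch.rand(4, 2, 1, 64, 48)                   # 5-D, e.g. stacked frames
    bad.repeat(1, 3, 1, 1)  # RuntimeError: Number of dimensions of repeat dims
                            # can not be smaller than number of dimensions of tensor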
    
    
    opened by andrewjong 2
  • Sams Intermediary

    Intermediary update to stay on the same page.

    Changes that affect other things:

    • CPVtonDataset --> TryonDataset. Cuz we ain't CPVton anymore.
    • Instead of passing flags individually (--densepose, --flow), we now just have a --person_inputs agnostic densepose flow flag that grabs those keys from the return dict. It also finds the correct channel counts via the constants defined at the top of TryonDataset (see the sketch below).
      • To add flow, add the "flow" key to the return dict, and also add a FLOW_CHANNELS = 2 constant to TryonDataset for the channel count.
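    A rough sketch of how the channel lookup could work (FLOW_CHANNELS = 2 is stated above; the other constants here are illustrative placeholders, not the repository's real values):

    class TryonDataset:
        AGNOSTIC_CHANNELS = 3   # placeholder value
        DENSEPOSE_CHANNELS = 3  # placeholder value
        FLOW_CHANNELS = 2       # from the note above

        @classmethod
        def channels(cls, input_names):
            # e.g. --person_inputs agnostic densepose flow -> 3 + 3 + 2
            return sum(getattr(cls, f"{name.upper()}_CHANNELS") for name in input_names)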
    opened by andrewjong 2
  • [5.3] UNet, Attn, Flow, Activation: Swish

    Description

    Run Flow with Swish

    Planned Start Date: 9/17 10am PST

    Depends on Previous Experiment? N

    Train Command

    python train.py \
    --name flow_swish \
    --model unet \ 
    -j 8 \
    --batch 16 \
    --accumulated_batches 4 \
    --person_inputs densepose agnostic \
    --cloth_inputs cloth \ 
    --val_check_interval 0.05 \
    --self_attn \
    --activation swish \
    --n_frames_total 2 \
    --n_frames_now 2 \
    --flow_warp
    

    Report Results

    To report a result, copy this into a comment below:

    # Result Description
    <!--- 
    For Experiment Number, use "Major.minor.patch", e.g. 1.2.0.
    Major.minor should match the [M.m] in the title. 
    Patch describes a bug fix (change in the code or branch).
    -->
    **Experiment Number:** 1.2.0
    **Branch:** `master`
    **Timestamp:** MM/DD/YYYY 9pm PT
    **Epochs:** 
    
    
    # Architecture
    **Model Layers:**
    <!-- Paste the printed Model Layers -->
    
    **Module Parameters:**
    <!-- Paste the Params table -->
    
    
    # Loss Graphs
    <!--- Put detailed loss graphs here. Please include all graphs! -->
    
    # Image Results
    <!--- Put detailed image results here. Please include all images! Multiple screenshots is good. -->
    
    # Comments, Observations, or Insights
    <!--- Optional -->
    
    • [x] Open GitHub Issue
    • [ ] Start training with tmux (tensorboard and training)
    • [ ] Upload scalars, train, and validation images to GitHub
    • [ ] Upload checkpoints to Google Drive
    • [ ] Generate test results from latest epoch
    • [ ] Calculate metrics (PSNR, SSIM)
    • [ ] Visualize metrics
    experiment 
    opened by gauravkuppa 1
  • Error when running "bash install.sh"

    ninja: build stopped: subcommand failed.

    Traceback (most recent call last):
      File "/headless/anaconda3/envs/sams-pt1.6/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1509, in _run_ninja_build
        subprocess.run(
      File "/headless/anaconda3/envs/sams-pt1.6/lib/python3.8/subprocess.py", line 512, in run
        raise CalledProcessError(retcode, process.args,
    subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "setup.py", line 21, in <module>
        setup(
      File "/headless/anaconda3/envs/sams-pt1.6/lib/python3.8/site-packages/setuptools/__init__.py", line 163, in setup
        return distutils.core.setup(**attrs)
      File "/headless/anaconda3/envs/sams-pt1.6/lib/python3.8/distutils/core.py", line 148, in setup
        dist.run_commands()
      File "/headless/anaconda3/envs/sams-pt1.6/lib/python3.8/distutils/dist.py", line 966, in run_commands
        self.run_command(cmd)
      File "/headless/anaconda3/envs/sams-pt1.6/lib/python3.8/distutils/dist.py", line 985, in run_command
        cmd_obj.run()
      File "/headless/anaconda3/envs/sams-pt1.6/lib/python3.8/site-packages/setuptools/command/install.py", line 67, in run
        self.do_egg_install()
      File "/headless/anaconda3/envs/sams-pt1.6/lib/python3.8/site-packages/setuptools/command/install.py", line 109, in do_egg_install
        self.run_command('bdist_egg')
      File "/headless/anaconda3/envs/sams-pt1.6/lib/python3.8/distutils/cmd.py", line 313, in run_command
        self.distribution.run_command(command)
      File "/headless/anaconda3/envs/sams-pt1.6/lib/python3.8/distutils/dist.py", line 985, in run_command
        cmd_obj.run()
      File "/headless/anaconda3/envs/sams-pt1.6/lib/python3.8/site-packages/setuptools/command/bdist_egg.py", line 175, in run
        cmd = self.call_command('install_lib', warn_dir=0)
      File "/headless/anaconda3/envs/sams-pt1.6/lib/python3.8/site-packages/setuptools/command/bdist_egg.py", line 161, in call_command
        self.run_command(cmdname)
      File "/headless/anaconda3/envs/sams-pt1.6/lib/python3.8/distutils/cmd.py", line 313, in run_command
        self.distribution.run_command(command)
      File "/headless/anaconda3/envs/sams-pt1.6/lib/python3.8/distutils/dist.py", line 985, in run_command
        cmd_obj.run()
      File "/headless/anaconda3/envs/sams-pt1.6/lib/python3.8/site-packages/setuptools/command/install_lib.py", line 11, in run
        self.build()
      File "/headless/anaconda3/envs/sams-pt1.6/lib/python3.8/distutils/command/install_lib.py", line 107, in build
        self.run_command('build_ext')
      File "/headless/anaconda3/envs/sams-pt1.6/lib/python3.8/distutils/cmd.py", line 313, in run_command
        self.distribution.run_command(command)
      File "/headless/anaconda3/envs/sams-pt1.6/lib/python3.8/distutils/dist.py", line 985, in run_command
        cmd_obj.run()
      File "/headless/anaconda3/envs/sams-pt1.6/lib/python3.8/site-packages/setuptools/command/build_ext.py", line 87, in run
        _build_ext.run(self)
      File "/headless/anaconda3/envs/sams-pt1.6/lib/python3.8/distutils/command/build_ext.py", line 340, in run
        self.build_extensions()
      File "/headless/anaconda3/envs/sams-pt1.6/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 649, in build_extensions
        build_ext.build_extensions(self)
      File "/headless/anaconda3/envs/sams-pt1.6/lib/python3.8/distutils/command/build_ext.py", line 449, in build_extensions
        self._build_extensions_serial()
      File "/headless/anaconda3/envs/sams-pt1.6/lib/python3.8/distutils/command/build_ext.py", line 474, in _build_extensions_serial
        self.build_extension(ext)
      File "/headless/anaconda3/envs/sams-pt1.6/lib/python3.8/site-packages/setuptools/command/build_ext.py", line 208, in build_extension
        _build_ext.build_extension(self, ext)
      File "/headless/anaconda3/envs/sams-pt1.6/lib/python3.8/distutils/command/build_ext.py", line 528, in build_extension
        objects = self.compiler.compile(sources,
      File "/headless/anaconda3/envs/sams-pt1.6/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 469, in unix_wrap_ninja_compile
        _write_ninja_file_and_compile_objects(
      File "/headless/anaconda3/envs/sams-pt1.6/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1228, in _write_ninja_file_and_compile_objects
        _run_ninja_build(
      File "/headless/anaconda3/envs/sams-pt1.6/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1529, in _run_ninja_build
        raise RuntimeError(message)
    RuntimeError: Error compiling objects for extension

    I am using CUDA 9.0 and cuDNN 7.4. I've tried a few things, but it still doesn't work. Has anyone solved this error?

    experiment 
    opened by Vannaz 0
  • Did you mean `simple_extractor.py` instead of `evaluate.py`?

    I think you have a bug here

    import os
    import os.path as osp
    
    home = "/path/to/fw_gan_vvt/test/test_frames"
    schp = "/path/to/Self-Correction-Human-Parsing"
    output = "/path/to/fw_gan_vvt/test/test_frames_parsing"
    os.chdir(home)
    paths = os.listdir('.')
    paths.sort()
    for vid in paths:
        os.chdir(osp.join(home, vid))
        input_dir = os.getcwd()
        output_dir = osp.join(output, vid)
        generate_seg = ("python evaluate.py --dataset lip --restore-weight "
            "checkpoints/exp-schp-201908261155-lip.pth --input " + input_dir +
            " --output " + output_dir)
        os.chdir(schp)
        os.system(generate_seg)
    

    I think you meant simple_extractor.py instead of evaluate.py, because evaluate.py has no such arguments.

    Also, where do we get this "/path/to/fw_gan_vvt/test/test_frames" directory?

    opened by shruti-bt 0
  • Try-on module pre-trained model is not valid

    @andrewjong Hi. When I try to load your Try-On model (Unet model) from Google Drive, I get the following error:

      File "test.py", line 10, in <module>
        main(train=False)
      File "/code/train.py", line 44, in main
        model = model_class.load_from_checkpoint(
      File "/root/miniconda3/envs/sams-pt1.6/lib/python3.8/site-packages/pytorch_lightning/core/saving.py", line 153, in load_from_checkpoint
        model = cls._load_model_state(checkpoint, *args, strict=strict, **kwargs)
      File "/root/miniconda3/envs/sams-pt1.6/lib/python3.8/site-packages/pytorch_lightning/core/saving.py", line 192, in _load_model_state
        model.load_state_dict(checkpoint['state_dict'], strict=strict)
      File "/root/miniconda3/envs/sams-pt1.6/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1044, in load_state_dict
        raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
    RuntimeError: Error(s) in loading state_dict for UnetMaskModel:
            Missing key(s) in state_dict: "unet.model.model.1.model.3.model.1.weight", "unet.model.model.1.model.3.model.1.bias", "unet.model.model.1.model.3.model.3.model.1.weight", "unet.model.model.1.model.3.model.3.model.1.bias", "unet.model.model.1.model.3.model.3.model.3.model.1.weight", "unet.model.model.1.model.3.model.3.model.3.model.1.bias", "unet.model.model.1.model.3.model.3.model.3.model.3.gamma", "unet.model.model.1.model.3.model.3.model.3.model.3.query_conv.weight", "unet.model.model.1.model.3.model.3.model.3.model.3.query_conv.bias", "unet.model.model.1.model.3.model.3.model.3.model.3.key_conv.weight", "unet.model.model.1.model.3.model.3.model.3.model.3.key_conv.bias", "unet.model.model.1.model.3.model.3.model.3.model.3.value_conv.weight", "unet.model.model.1.model.3.model.3.model.3.model.3.value_conv.bias", "unet.model.model.1.model.3.model.3.model.3.model.4.model.1.weight", "unet.model.model.1.model.3.model.3.model.3.model.4.model.1.bias", "unet.model.model.1.model.3.model.3.model.3.model.4.model.2.gamma", "unet.model.model.1.model.3.model.3.model.3.model.4.model.2.query_conv.weight", "unet.model.model.1.model.3.model.3.model.3.model.4.model.2.query_conv.bias", "unet.model.model.1.model.3.model.3.model.3.model.4.model.2.key_conv.weight", "unet.model.model.1.model.3.model.3.model.3.model.4.model.2.key_conv.bias", "unet.model.model.1.model.3.model.3.model.3.model.4.model.2.value_conv.weight", "unet.model.model.1.model.3.model.3.model.3.model.4.model.2.value_conv.bias", "unet.model.model.1.model.3.model.3.model.3.model.4.model.5.weight", "unet.model.model.1.model.3.model.3.model.3.model.4.model.5.bias", "unet.model.model.1.model.3.model.3.model.3.model.4.model.7.gamma", "unet.model.model.1.model.3.model.3.model.3.model.4.model.7.query_conv.weight", "unet.model.model.1.model.3.model.3.model.3.model.4.model.7.query_conv.bias", "unet.model.model.1.model.3.model.3.model.3.model.4.model.7.key_conv.weight", "unet.model.model.1.model.3.model.3.model.3.model.4.model.7.key_conv.bias", "unet.model.model.1.model.3.model.3.model.3.model.4.model.7.value_conv.weight", "unet.model.model.1.model.3.model.3.model.3.model.4.model.7.value_conv.bias", "unet.model.model.1.model.3.model.3.model.3.model.7.weight", "unet.model.model.1.model.3.model.3.model.3.model.7.bias", "unet.model.model.1.model.3.model.3.model.3.model.9.gamma", "unet.model.model.1.model.3.model.3.model.3.model.9.query_conv.weight", "unet.model.model.1.model.3.model.3.model.3.model.9.query_conv.bias", "unet.model.model.1.model.3.model.3.model.3.model.9.key_conv.weight", "unet.model.model.1.model.3.model.3.model.3.model.9.key_conv.bias", "unet.model.model.1.model.3.model.3.model.3.model.9.value_conv.weight", "unet.model.model.1.model.3.model.3.model.3.model.9.value_conv.bias", "unet.model.model.1.model.3.model.3.model.6.weight", "unet.model.model.1.model.3.model.3.model.6.bias", "unet.model.model.1.model.3.model.6.weight", "unet.model.model.1.model.3.model.6.bias", "unet.model.model.1.model.6.weight", "unet.model.model.1.model.6.bias".
            Unexpected key(s) in state_dict: "unet.model.model.1.model.9.gamma", "unet.model.model.1.model.9.query_conv.weight", "unet.model.model.1.model.9.query_conv.bias", "unet.model.model.1.model.9.key_conv.weight", "unet.model.model.1.model.9.key_conv.bias", "unet.model.model.1.model.9.value_conv.weight", "unet.model.model.1.model.9.value_conv.bias", "unet.model.model.1.model.3.gamma", "unet.model.model.1.model.3.query_conv.weight", "unet.model.model.1.model.3.query_conv.bias", "unet.model.model.1.model.3.key_conv.weight", "unet.model.model.1.model.3.key_conv.bias", "unet.model.model.1.model.3.value_conv.weight", "unet.model.model.1.model.3.value_conv.bias", "unet.model.model.1.model.4.model.1.weight", "unet.model.model.1.model.4.model.1.bias", "unet.model.model.1.model.4.model.3.gamma", "unet.model.model.1.model.4.model.3.query_conv.weight", "unet.model.model.1.model.4.model.3.query_conv.bias", "unet.model.model.1.model.4.model.3.key_conv.weight", "unet.model.model.1.model.4.model.3.key_conv.bias", "unet.model.model.1.model.4.model.3.value_conv.weight", "unet.model.model.1.model.4.model.3.value_conv.bias", "unet.model.model.1.model.4.model.4.model.1.weight", "unet.model.model.1.model.4.model.4.model.1.bias", "unet.model.model.1.model.4.model.4.model.3.model.1.weight", "unet.model.model.1.model.4.model.4.model.3.model.1.bias", "unet.model.model.1.model.4.model.4.model.3.model.3.model.1.weight", "unet.model.model.1.model.4.model.4.model.3.model.3.model.1.bias", "unet.model.model.1.model.4.model.4.model.3.model.3.model.4.weight", "unet.model.model.1.model.4.model.4.model.3.model.3.model.4.bias", "unet.model.model.1.model.4.model.4.model.3.model.6.weight", "unet.model.model.1.model.4.model.4.model.3.model.6.bias", "unet.model.model.1.model.4.model.4.model.6.weight", "unet.model.model.1.model.4.model.4.model.6.bias", "unet.model.model.1.model.4.model.7.weight", "unet.model.model.1.model.4.model.7.bias", "unet.model.model.1.model.4.model.9.gamma", "unet.model.model.1.model.4.model.9.query_conv.weight", "unet.model.model.1.model.4.model.9.query_conv.bias", "unet.model.model.1.model.4.model.9.key_conv.weight", "unet.model.model.1.model.4.model.9.key_conv.bias", "unet.model.model.1.model.4.model.9.value_conv.weight", "unet.model.model.1.model.4.model.9.value_conv.bias", "unet.model.model.1.model.7.weight", "unet.model.model.1.model.7.bias".
    

    It seems that the checkpoint was recorded before the last changes were made to the model architecture. Could you help with that?

    opened by Lehsuby 1
  • Warp module pre-trained model

    Hi, really interesting approach to virtual try-on here. Have you published pre-trained weights for the warping module? The ShineOn checkpoint published seems to only be for the try-on Unet.

    opened by arvind-iyer 2