
M3D-VTON: A Monocular-to-3D Virtual Try-On Network

Official code for ICCV2021 paper "M3D-VTON: A Monocular-to-3D Virtual Try-on Network"

Paper | Supplementary | MPV3D Dataset | Pretrained Models


Requirements

python >= 3.8.0, pytorch == 1.6.0, torchvision == 0.7.0
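A matching environment can typically be set up with the command below (this is a suggestion rather than part of the repository's instructions; pick the wheel that matches your CUDA version from the PyTorch release archives):

pip install torch==1.6.0 torchvision==0.7.0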

Data Processing

After downloading the MPV3D Dataset, please run the following script to preprocess the data:

python util/data_preprocessing.py --MPV3D_root path/to/MPV3D/dataset

Running Inference

We provide demo inputs under the mpv3d_example folder, where the target clothing and the reference person look like this:

(Demo inputs: target clothing and reference person image)

With inputs from the mpv3d_example folder, the easiest way to get started is to use the pretrained models and run the four steps below in sequence:

1. Testing MTM Module

python test.py --model MTM --name MTM --dataroot mpv3d_example --datalist test_pairs --results_dir results

2. Testing DRM Module

python test.py --model DRM --name DRM --dataroot mpv3d_example --datalist test_pairs --results_dir results

3. Testing TFM Module

python test.py --model TFM --name TFM --dataroot mpv3d_example --datalist test_pairs --results_dir results

4. Getting Colored Point Cloud and Remeshing

(Note: since back-side person images are unavailable, rgbd2pcd.py provides a fast face-inpainting function that approximates the back-side image by mirroring the front view. You may need to manually inpaint other back-side texture areas to achieve better visual quality.)

python rgbd2pcd.py
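For reference, the conversion from the predicted RGB-D output to a colored point cloud amounts to back-projecting each depth pixel through the camera intrinsics. Below is a minimal, hypothetical Open3D sketch of that idea; the file names, depth scale, and intrinsics are placeholders, and this is not the actual implementation inside rgbd2pcd.py:

# Minimal sketch: back-project an RGB image + depth map into a colored point cloud.
import open3d as o3d

color = o3d.io.read_image("person_front.png")         # placeholder front-view RGB
depth = o3d.io.read_image("person_front_depth.png")   # placeholder predicted depth

# Pack RGB and depth together, keeping the original colors.
rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
    color, depth, depth_scale=1000.0, depth_trunc=3.0,
    convert_rgb_to_intensity=False)

# Back-project every valid pixel through assumed pinhole intrinsics.
intrinsic = o3d.camera.PinholeCameraIntrinsic(
    o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault)
pcd = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsic)

o3d.io.write_point_cloud("person_front.ply", pcd)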

You should now find the point cloud files prepared for remeshing under results/aligned/pcd/test_pairs/*.ply. MeshLab can be used to remesh the predicted point cloud with the two simple steps below (a scripted alternative is sketched after the list):

  • Normal Estimation: Open MeshLab and load the point cloud file, and then go to Filters --> Normals, Curvatures and Orientation --> Compute normals for point sets

  • Poisson Remeshing: Go to Filters --> Remeshing, Simplification and Reconstruction --> Surface Reconstruction: Screened Poisson (set reconstruction depth = 9)
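If you prefer a scripted pipeline over the MeshLab GUI, both steps can also be approximated with Open3D. This is a sketch under the assumption that Open3D is installed (pip install open3d); the input path is a placeholder and the parameters only mirror the MeshLab settings above:

# Sketch: normal estimation + Screened Poisson reconstruction with Open3D.
import open3d as o3d

pcd = o3d.io.read_point_cloud("results/aligned/pcd/test_pairs/example.ply")  # placeholder path

# Step 1: estimate and consistently orient normals
# (analogous to "Compute normals for point sets" in MeshLab).
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
pcd.orient_normals_consistent_tangent_plane(30)

# Step 2: Screened Poisson surface reconstruction at depth 9.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)

o3d.io.write_triangle_mesh("tryon_mesh.ply", mesh)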

You should now obtain the final 3D try-on result:

(Final 3D try-on result)

Training on MPV3D Dataset

With the pre-processed MPV3D dataset, you can train the model from scratch by following the three steps below:

1. Train MTM module

python train.py --model MTM --name MTM --dataroot path/to/MPV3D/data --datalist train_pairs --checkpoints_dir path/for/saving/model

Then run the command below to obtain the --warproot (here it refers to the --results_dir), which is required by the other two modules:

python test.py --model MTM --name MTM --dataroot path/to/MPV3D/data --datalist train_pairs --checkpoints_dir path/to/saved/MTMmodel --results_dir path/for/saving/MTM/results

2. Train DRM module

python train.py --model DRM --name DRM --dataroot path/to/MPV3D/data --warproot path/to/MTM/warp/cloth --datalist train_pairs --checkpoints_dir path/for/saving/model

3. Train TFM module

python train.py --model TFM --name TFM --dataroot path/to/MPV3D/data --warproot path/to/MTM/warp/cloth --datalist train_pairs --checkpoints_dir path/for/saving/model

(See options/base_options.py and options/train_options.py for more training options.)

License

The use of this code and the MPV3D dataset is RESTRICTED to non-commercial research and educational purposes.

Citation

If our code is helpful to your research, please cite:

@article{Zhao2021M3DVTONAM,
  title={M3D-VTON: A Monocular-to-3D Virtual Try-On Network},
  author={Fuwei Zhao and Zhenyu Xie and Michael C. Kampffmeyer and Haoye Dong and Songfang Han and Tianxiang Zheng and Tao Zhang and Xiaodan Liang},
  journal={ArXiv},
  year={2021},
  volume={abs/2108.05126}
}
Comments
  • warp cloth

    I'm sorry to bother you again, and thank you for your kindness. I ran into a problem: the warped cloth saved by the MTM model seems smaller when I train MTM on the MPV3D Dataset with the same parameter configuration, but the warped cloth generated by your pretrained MTM model is larger. I have analyzed this for a long time but cannot find the reason. Please help me! Best wishes.

    opened by hematank 6
  • DRM and TFM

    Thanks for your reply, this is great work! I have another question: what are the DRM and TFM networks? I can't find any detailed description in the paper. Do they use a GAN architecture? What can I refer to? I look forward to your reply, thank you.

    opened by hematank 1
  • How to run on custom images and clothing

    Can you please let me know how to create custom data for testing? I see that mpv3d_example contains many files. Also, does your code have anything to do with SMPL? I assume the end result is just the 3D model, without SMPL, and with poses. Correct?

    opened by dev20111 1
  • Error in output files (.ply)

    Hi. I ran the repo on the test dataset. I am unable to visualize the output files (.ply) in Blender or an online 3D viewer. The errors are "Invalid header ('end_header' line not found!) Invalid file" and "unable to import model, face not present", respectively.

    opened by usmannamjad 0
  • Runtime error of channels

    Hello there, first of all thank you for this project and source code. I didn't face any issues running it as-is. However, I could not understand this issue related to custom data: the custom t-shirt works perfectly on your dataset but not on mine. I have created a custom image dataset, but it is not working. Please reply whenever you get time. Thank you.

    (Screenshot of the runtime channel error)

    opened by BhaumikThakkar 0
  • certain part blurred in mesh

    Hi, this is absolutely amazing work! I tried to run the test demo according to your instructions, and here is the result I obtained. May I know if these small artifacts in the mesh are normal? If so, what might be causing them?

    (Screenshots of the resulting mesh)

    opened by JacksonCakes 0
  • mesh results

    Hi,

    Thanks for your work.

    When I used your images at the same resolution, the .ply output of rgbd2pcd is not completely painted and some parts are masked. Is there anything that should be applied after rgbd2pcd and before MeshLab?

    opened by sanazsab 2
  • mesh outputs

    Hi, how did you create the GIF animations of the mesh results?

    Could rgbd2pcd also work at lower resolutions? The .ply in my case is incomplete.

    Thanks for your swift response.

    opened by sanazsab 1
  • GPU Error when running the test examples

    We are getting a graphics-card-related error. Here is my system info:

    pavan@u-20:~/.../M3D-VTON$ sudo dmidecode -t 1
    # dmidecode 3.2
    Getting SMBIOS data from sysfs.
    SMBIOS 3.0.0 present.
    
    Handle 0x000C, DMI type 1, 27 bytes
    System Information
    	Manufacturer: LENOVO
    	Product Name: 20JUS05X00
    	Version: ThinkPad L470 W10DG
    	Serial Number: PF11SM82
    	UUID: 31f6b94c-2fd0-11b2-a85c-ed0533b508ea
    	Wake-up Type: Power Switch
    	SKU Number: LENOVO_MT_20JU_BU_Think_FM_ThinkPad L470 W10DG
    	Family: ThinkPad L470 W10DG
    
    pavan@u-20:~/.../M3D-VTON$
    pavan@u-20:~/.../M3D-VTON$ inxi -G
    Graphics:  Device-1: Intel Skylake GT2 [HD Graphics 520] driver: i915 v: kernel 
               Display: x11 server: X.Org 1.20.13 driver: i915 resolution: 1366x768~60Hz 
               OpenGL: renderer: Mesa Intel HD Graphics 520 (SKL GT2) v: 4.6 Mesa 21.2.6 
    pavan@u-20:~/.../M3D-VTON$ 
    
    pavan@u-20:~/.../M3D-VTON$ python3 test.py --model MTM --name MTM --dataroot mpv3d_example --datalist test_pairs --results_dir results
                      verbose: False                         
    ----------------- End -------------------
    Traceback (most recent call last):
      File "test.py", line 25, in <module>
        opt = TestOptions().parse()  # get test options
      File "/home/pavan/Documents/aux/tmp-git/virtual-try-on/M3D-VTON/options/base_options.py", line 130, in parse
        torch.cuda.set_device(opt.gpu_ids[0])
      File "/home/pavan/.local/lib/python3.8/site-packages/torch/cuda/__init__.py", line 313, in set_device
        torch._C._cuda_setDevice(device)
      File "/home/pavan/.local/lib/python3.8/site-packages/torch/cuda/__init__.py", line 216, in _lazy_init
        torch._C._cuda_init()
    RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
    pavan@u-20:~/.../M3D-VTON$
    

    Is it possible to run it on my machine? I have attached my machine info as well.

    opened by pavanyogi 1