πŸ¦™ LaMa Image Inpainting: Resolution-robust Large Mask Inpainting with Fourier Convolutions (WACV 2022)

Overview

πŸ¦™ LaMa: Resolution-robust Large Mask Inpainting with Fourier Convolutions

Official implementation by Samsung Research

by Roman Suvorov, Elizaveta Logacheva, Anton Mashikhin, Anastasia Remizova, Arsenii Ashukha, Aleksei Silvestrov, Naejin Kong, Harshith Goka, Kiwoong Park, Victor Lempitsky.

πŸ”₯ πŸ”₯ πŸ”₯
LaMa generalizes surprisingly well to much higher resolutions (~2k ❗️ ) than those it saw during training (256x256), and achieves excellent performance even in challenging scenarios, e.g. completion of periodic structures.

[Project page] [arXiv] [Supplementary] [BibTeX]


Try out in Google Colab

Environment setup

Clone the repo: git clone https://github.com/saic-mdal/lama.git

There are three options for setting up the environment:

  1. Python virtualenv:

    # Create and activate a Python 3 virtual environment
    virtualenv inpenv --python=/usr/bin/python3
    source inpenv/bin/activate
    pip install torch==1.8.0 torchvision==0.9.0
    
    # Install LaMa's Python dependencies
    cd lama
    pip install -r requirements.txt
    
  2. Conda

    # Install conda for Linux; for other OSes, download Miniconda from https://docs.conda.io/en/latest/miniconda.html
    wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
    bash Miniconda3-latest-Linux-x86_64.sh -b -p $HOME/miniconda
    $HOME/miniconda/bin/conda init bash
    
    cd lama
    conda env create -f conda_env.yml
    conda activate lama
    conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch -y
    pip install pytorch-lightning==1.2.9
    
  3. Docker: No actions are needed πŸŽ‰ .
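
Whichever option you choose, a quick sanity check (a minimal sketch; the exact torch/CUDA versions depend on the option you picked) confirms that PyTorch imports and sees the GPU:

    # Print torch/torchvision versions and whether CUDA is visible
    python3 -c "import torch, torchvision; print(torch.__version__, torchvision.__version__, torch.cuda.is_available())"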

Inference

Run

cd lama
# TORCH_HOME caches downloaded models inside the repo; PYTHONPATH=. makes the local saicinpainting package importable
export TORCH_HOME=$(pwd) && export PYTHONPATH=.

1. Download pre-trained models

Install the tool for Yandex Disk link extraction:

pip3 install wldhx.yadisk-direct

The best model (Places2, Places Challenge):

curl -L $(yadisk-direct https://disk.yandex.ru/d/ouP6l8VJ0HpMZg) -o big-lama.zip
unzip big-lama.zip

All models (Places & CelebA-HQ):

curl -L $(yadisk-direct https://disk.yandex.ru/d/EgqaSnLohjuzAg) -o lama-models.zip
unzip lama-models.zip

2. Prepare images and masks

Download test images:

curl -L $(yadisk-direct https://disk.yandex.ru/d/xKQJZeVRk5vLlQ) -o LaMa_test_images.zip
unzip LaMa_test_images.zip
OR prepare your own data:

  1. Create masks named `[image_name]_maskXXX[image_suffix]` and put the images and masks in the same folder.
     • You can use the script for random mask generation (see Hints below).
     • Check the format of the files:
       image1_mask001.png
       image1.png
       image2_mask001.png
       image2.png
  2. Specify image_suffix, e.g. .png, .jpg, or _input.jpg, in configs/prediction/default.yaml.
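
To catch naming mistakes early, here is a minimal shell check (a sketch assuming a .png suffix and the naming scheme above; the data path is a placeholder) that verifies every mask has a matching source image:

    # For every *_mask*.png, check that the corresponding source image exists
    for m in /path/to/data/*_mask*.png; do
      img="${m%_mask*}.png"   # strip the _maskXXX part to recover the image name
      [ -f "$img" ] || echo "missing image for mask: $m"
    done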

3. Predict

On the host machine:

python3 bin/predict.py model.path=$(pwd)/big-lama indir=$(pwd)/LaMa_test_images outdir=$(pwd)/output
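
If the host machine has no GPU, the prediction config appears to accept a device override — the Docker command below passes device=cpu — so a CPU-only variant of the same command (the device key is an assumption) would be:

# Assumed: device=cpu overrides the default device in configs/prediction/default.yaml
python3 bin/predict.py model.path=$(pwd)/big-lama indir=$(pwd)/LaMa_test_images outdir=$(pwd)/output device=cpu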

OR in the docker

The following command pulls the Docker image from Docker Hub and executes the prediction script:

bash docker/2_predict.sh $(pwd)/big-lama $(pwd)/LaMa_test_images $(pwd)/output device=cpu

Docker cuda: TODO

Train and Eval

⚠️ Warning: Training is not fully tested yet; e.g., we did not re-train the models after refactoring ⚠️

Make sure you run:

cd lama
export TORCH_HOME=$(pwd) && export PYTHONPATH=.

Then download models for perceptual loss:

mkdir -p ade20k/ade20k-resnet50dilated-ppm_deepsup/
wget -P ade20k/ade20k-resnet50dilated-ppm_deepsup/ http://sceneparsing.csail.mit.edu/model/pytorch/ade20k-resnet50dilated-ppm_deepsup/encoder_epoch_20.pth

Places

On the host machine:

# Download data from http://places2.csail.mit.edu/download.html
# Places365-Standard: Train(105GB)/Test(19GB)/Val(2.1GB) from High-resolution images section
wget http://data.csail.mit.edu/places/places365/train_large_places365standard.tar
wget http://data.csail.mit.edu/places/places365/val_large.tar
wget http://data.csail.mit.edu/places/places365/test_large.tar

# Unpack the archives, etc.
bash fetch_data/places_standard_train_prepare.sh
bash fetch_data/places_standard_test_val_prepare.sh
bash fetch_data/places_standard_evaluation_prepare_data.sh

# Sample images for test and visualization at the end of each epoch
bash fetch_data/places_standard_test_val_sample.sh
bash fetch_data/places_standard_test_val_gen_masks.sh

# Run training
# You can change the batch size with data.batch_size=10
python bin/train.py -cn lama-fourier location=places_standard

# Infer model on thick/thin/medium masks in 256 and 512 and run evaluation 
# like this:
python3 bin/predict.py \
model.path=$(pwd)/experiments/<user>_<date>_lama-fourier_/ \
indir=$(pwd)/places_standard_dataset/evaluation/random_thick_512/ \
outdir=$(pwd)/inference/random_thick_512 model.checkpoint=last.ckpt

python3 bin/evaluate_predicts.py \
$(pwd)/configs/eval_2gpu.yaml \
$(pwd)/places_standard_dataset/evaluation/random_thick_512/ \
$(pwd)/inference/random_thick_512 $(pwd)/inference/random_thick_512_metrics.csv
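
The pair of commands above covers a single mask set; below is a sketch of a loop over all thick/thin/medium sets at both resolutions (assuming these are the directory names produced by the fetch_data scripts, and with the experiment folder as a placeholder):

for mask in random_thick_512 random_thin_512 random_medium_512 random_thick_256 random_thin_256 random_medium_256; do
  python3 bin/predict.py \
    model.path=$(pwd)/experiments/<user>_<date>_lama-fourier_/ \
    indir=$(pwd)/places_standard_dataset/evaluation/$mask/ \
    outdir=$(pwd)/inference/$mask model.checkpoint=last.ckpt
  python3 bin/evaluate_predicts.py \
    $(pwd)/configs/eval_2gpu.yaml \
    $(pwd)/places_standard_dataset/evaluation/$mask/ \
    $(pwd)/inference/$mask $(pwd)/inference/${mask}_metrics.csv
done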


Docker: TODO

CelebA

On the host machine:

# Make sure you are in the lama folder
cd lama
export TORCH_HOME=$(pwd) && export PYTHONPATH=.

# Download CelebA-HQ dataset
# Download data256x256.zip from https://drive.google.com/drive/folders/11Vz0fqHS2rXDb5pprgTjpD7S2BAJhi1P

# unzip & split into train/test/visualization & create config for it
bash fetch_data/celebahq_dataset_prepare.sh

# generate masks for test and visual_test at the end of epoch
bash fetch_data/celebahq_gen_masks.sh

# Run training
python bin/train.py -cn lama-fourier-celeba data.batch_size=10

# Infer model on thick/thin/medium masks in 256 and run evaluation 
# like this:
python3 bin/predict.py \
model.path=$(pwd)/experiments/<user>_<date>_lama-fourier-celeba_/ \
indir=$(pwd)/celeba-hq-dataset/visual_test_256/random_thick_256/ \
outdir=$(pwd)/inference/celeba_random_thick_256 model.checkpoint=last.ckpt
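
The README stops at prediction for CelebA; evaluation presumably mirrors the Places flow above, so a hedged sketch reusing evaluate_predicts with the CelebA paths would be:

python3 bin/evaluate_predicts.py \
$(pwd)/configs/eval_2gpu.yaml \
$(pwd)/celeba-hq-dataset/visual_test_256/random_thick_256/ \
$(pwd)/inference/celeba_random_thick_256 $(pwd)/inference/celeba_random_thick_256_metrics.csv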


Docker: TODO

Places Challenge

On the host machine:

# This script downloads multiple .tar files in parallel and unpacks them
# Places365-Challenge: Train(476GB) from High-resolution images (to train Big-Lama) 
bash places_challenge_train_download.sh

TODO: prepare
TODO: train 
TODO: eval

Docker: TODO

Create your data

On the host machine:

TODO: explanation

TODO: format
TODO: configs 
TODO: run training
TODO: run eval

OR in the docker:

TODO: train
TODO: eval

Hints

Generate different kinds of masks

The following command will execute a script that generates random masks.

bash docker/1_generate_masks_from_raw_images.sh \
    configs/data_gen/random_medium_512.yaml \
    /directory_with_input_images \
    /directory_where_to_store_images_and_masks \
    --ext png

The test data generation command stores images in a format suitable for prediction.

The table below describes which configs we used to generate the different test sets from the paper. Note that we do not fix a random seed, so the results will differ slightly each time.

| Mask type | Places 512x512 | CelebA 256x256 |
| --- | --- | --- |
| Narrow | random_thin_512.yaml | random_thin_256.yaml |
| Medium | random_medium_512.yaml | random_medium_256.yaml |
| Wide | random_thick_512.yaml | random_thick_256.yaml |

Feel free to change the config path (argument #1) to any other config in configs/data_gen, or adjust the config files themselves.
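
For example, to generate narrow CelebA-style masks instead, you would swap in the 256 config from the table above (the directories are placeholders, as before):

bash docker/1_generate_masks_from_raw_images.sh \
    configs/data_gen/random_thin_256.yaml \
    /directory_with_input_images \
    /directory_where_to_store_images_and_masks \
    --ext png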

Override parameters in configs

You can also override parameters in the config like this:

python3 bin/train.py -cn <config-name> data.batch_size=10 run_title=my-title

where the .yaml file extension of the config name is omitted.
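
Several overrides can be combined in a single command; the sketch below adapts a user-reported run from the comments at the end of this page, so treat the exact keys (data.num_workers, optimizers.*.lr, trainer.kwargs.*) as examples rather than documented options:

# Restrict GPUs and tune batch size, workers, and learning rates in one go
CUDA_VISIBLE_DEVICES=0,1 python bin/train.py -cn lama-fourier location=places_standard \
    data.batch_size=10 data.num_workers=8 \
    optimizers.generator.lr=0.001 optimizers.discriminator.lr=0.0001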

Model options

Config names for the models from the paper (substitute into the training command):

* big-lama
* big-lama-regular
* lama-fourier
* lama-regular
* lama_small_train_masks

These configs live in the configs/training folder.
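
For instance, training Big-LaMa on Places should follow the same pattern as the lama-fourier command in the Places section above (a sketch; hardware requirements for Big-LaMa are not documented here):

python bin/train.py -cn big-lama location=places_standard data.batch_size=10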

Links

Training time & resources

TODO

Acknowledgments

Citation

If you found this code helpful, please consider citing:

@article{suvorov2021resolution,
  title={Resolution-robust Large Mask Inpainting with Fourier Convolutions},
  author={Suvorov, Roman and Logacheva, Elizaveta and Mashikhin, Anton and Remizova, Anastasia and Ashukha, Arsenii and Silvestrov, Aleksei and Kong, Naejin and Goka, Harshith and Park, Kiwoong and Lempitsky, Victor},
  journal={arXiv preprint arXiv:2109.07161},
  year={2021}
}
Comments
  • google colab: ModuleNotFoundError: No module named 'torchtext.legacy'

    The Colab demo now throws a module error when training and predicting, possibly due to Colab changes?

    from torchtext.legacy.data import Batch
    ModuleNotFoundError: No module named 'torchtext.legacy'
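
    A common workaround, noted here as an assumption rather than a fix from this thread: the torchtext.legacy namespace was removed in torchtext 0.12, so pinning an earlier release that still ships it usually restores the import.

        pip install "torchtext<0.12"  # releases before 0.12 still include torchtext.legacy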

    opened by hanp0 15
  • Questions about training big-lama and the full-checkpoint

    Hi, thanks again for your excellent work. Is the big-lama model trained on the Places-Challenge dataset? Does it perform significantly better than a Big-LaMa trained on Places2-Standard? Is it possible to release the full checkpoints of the big-lama model, so we can fine-tune it on other data? Thanks.

    opened by yzhouas 15
  • Feature Refinement to Improve High Resolution Image Inpainting

    We are a team of researchers at Geomagical Labs (geomagical.com), a subsidiary of IKEA. We work on pioneering Mixed Reality apps which allow customers to scan photorealistic models of their indoor spaces and re-imagine them with virtual furniture.

    In this PR we propose an additional refinement step for LaMa to improve high-resolution inpainting results. We observed that when inpainting large regions at high resolution, LaMa struggles at structure completion. However, at low resolutions, LaMa can infill the same missing region much better. To address this we added an additional refinement step that uses the structure from low resolution predictions to guide higher resolution predictions.

    Our approach can work on any inpainting network, and does not require any additional training or network modification.

    How to run refinement

    To run refinement, simply pass refine=True in the evaluation step as:

        python3 bin/predict.py refine=True model.path=$(pwd)/big-lama indir=$(pwd)/LaMa_test_images outdir=$(pwd)/output
    

    Evaluation

    Here are a few example comparisons, with each triplet showing the masked image, inpainting with LaMa, and inpainting with LaMa using refinement.

    Comparison of unrefined and refined images on all test images (kindly shared by you) is available here: https://drive.google.com/drive/folders/15LEa9k_7-dUKb2CPUDuw7e6Zk28KCtzz?usp=sharing

    We also performed numerical evaluation on 1024x1024 images sampled from [1], using the thin, medium, and thick masks. The results indicate that LaMa+refinement outperforms all recent inpainting baselines on high-resolution inpainting:

    | Method | FID (thin) | LPIPS (thin) | FID (medium) | LPIPS (medium) | FID (thick) | LPIPS (thick) |
    | :--- | ---: | ---: | ---: | ---: | ---: | ---: |
    | AOTGAN [3] | 17.387 | 0.133 | 34.667 | 0.144 | 54.015 | 0.184 |
    | LatentDiffusion [4] | 18.505 | 0.141 | 31.445 | 0.149 | 38.743 | 0.172 |
    | MAT [6] | 16.284 | 0.137 | 27.829 | 0.135 | 38.120 | 0.157 |
    | ZITS [5] | 15.696 | 0.125 | 23.500 | 0.121 | 31.777 | 0.140 |
    | LaMa-Fourier [2] | 14.780 | 0.124 | 22.584 | 0.120 | 29.351 | 0.140 |
    | Big-LaMa [2] | 13.143 | 0.114 | 21.169 | 0.116 | 29.022 | 0.140 |
    | Big-LaMa+refinement (ours) | 13.193 | 0.112 | 19.864 | 0.115 | 26.401 | 0.135 |

    Table 1. Performance comparison of various recent inpainting approaches on 1k images of size 1024x1024

    Video

    We have also created a video to explain the technical details of our approach: https://www.youtube.com/watch?v=gEukhOheWgE

    References

    [1] Unsplash Dataset. https://unsplash.com/data, 2020

    [2] Suvorov, R., Logacheva, E., Mashikhin, A., Remizova, A., Ashukha, A., Silvestrov, A., Kong, N., Goka, H., Park, K. and Lempitsky, V., 2022. Resolution-robust Large Mask Inpainting with Fourier Convolutions. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (pp. 2149-2159)

    [3] Zeng, Y., Fu, J., Chao, H. and Guo, B., 2022. Aggregated contextual transformations for high-resolution image inpainting. IEEE Transactions on Visualization and Computer Graphics.

    [4] Rombach, R., Blattmann, A., Lorenz, D., Esser, P. and Ommer, B., 2021. High-Resolution Image Synthesis with Latent Diffusion Models. arXiv preprint arXiv:2112.10752.

    [5] Dong, Q., Cao, C. and Fu, Y., 2022. Incremental Transformer Structure Enhanced Image Inpainting with Masking Positional Encoding. arXiv preprint arXiv:2203.00867.

    [6] Li, W., Lin, Z., Zhou, K., Qi, L., Wang, Y. and Jia, J., 2022. MAT: Mask-Aware Transformer for Large Hole Image Inpainting. arXiv preprint arXiv:2203.15270.

    opened by ankuPRK 13
  • Random Mask Generation

    Can you please give guidance on how random masks can be created for custom images? The link to the script in the description does not seem to work.

    opened by ZeeRizvee 11
  • Colab Code FileNotFoundError When Running Unmodified Example

    Hello,

    I was trying to get the Google Colab code for this project to run its example, but it seems to be broken. I can run the first cell with no issues, then uncomment an example line for fname and draw on the image to make a mask. When I click finish, it successfully shows the mask, image, and img * mask, but then raises FileNotFoundError: No such file or directory: '/content/output/1010286_mask.png'

    Just wondering if this project is deprecated or if it's still supposed to be usable?


    Also, if I look through the code, there are several Invalid character "\u21" in token errors involving the !PYTHONPATH lines.

    opened by Casey-J-Wolcott 10
  • Question about L1 loss weight_known vs weight_missing

    Hi! First of all, thanks for sharing the code. I have a question about the L1 loss. First, this loss does not appear in the paper, right? Second, about weight_known vs weight_missing: why do most of the configs set weight_missing to 0? As I understand it, this weights the masked part of the image so that the network matches the ground truth in the zone to be inpainted, i.e. where mask == 1. Why set that to 0? Have you studied the effect of this parameter on convergence?

    opened by Marcelo5444 9
  • Environment variable 'USER' not found

    Thanks for sharing! When I run python bin/train.py -cn lama-fourier location=my_dataset data.batch_size=10, this error occurs: omegaconf.errors.InterpolationResolutionError: ValidationError raised while resolving interpolation: Environment variable 'USER' not found full_key: hydra.run.dir object_type=dict

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "bin/train.py", line 74, in <module>
        main()
      File "/root/lama-main/saicinpainting/utils.py", line 163, in new_main
        main_func(*args, **kwargs)
      File "/root/.local/lib/python3.7/site-packages/hydra/main.py", line 53, in decorated_main
        config_name=config_name,
      File "/root/.local/lib/python3.7/site-packages/hydra/_internal/utils.py", line 368, in _run_hydra
        lambda: hydra.run(
      File "/root/.local/lib/python3.7/site-packages/hydra/_internal/utils.py", line 270, in run_and_report
        cur.tb_lasti = iter_tb.tb_lasti
    AttributeError: 'NoneType' object has no attribute 'tb_lasti'

    Could you tell me how to solve it? Thanks!

    opened by xinguo2 9
  • No module named 'saicinpainting'

    When I follow the instructions, at the predict stage, I get:

    (lama) inhahe@inhahe-Z370-AORUS-Gaming-5:~/lama$ python3 bin/predict.py model.path=$(pwd)/big-lama indir=$(pwd)/LaMa_test_images outdir=$(pwd)/output
    Traceback (most recent call last):
      File "bin/predict.py", line 14, in <module>
        from saicinpainting.evaluation.utils import move_to_device
    ModuleNotFoundError: No module named 'saicinpainting'

    I'm using miniconda. A saicinpainting directory exists under the current directory. Apparently Python doesn't want to load modules from the current directory.
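
    A likely fix, based on the repo's own setup steps earlier on this page: predict.py imports the local saicinpainting package, which requires PYTHONPATH to point at the repo root.

        cd lama
        export TORCH_HOME=$(pwd) && export PYTHONPATH=.
        python3 bin/predict.py model.path=$(pwd)/big-lama indir=$(pwd)/LaMa_test_images outdir=$(pwd)/output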

    opened by inhahe 7
  • loss.backward() error in prediction using feature refinement

    Hello, when I try to run prediction with feature refinement, I face the following problem: RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn. I debugged my code and noticed the problem is loss.backward(); when I added loss.requires_grad = True before loss.backward(), it ran, but the loss remained constant. I have checked the code multiple times but cannot find the problem. :(

    opened by FBehrad 7
  • Colab error

    FileNotFoundError: [Errno 2] No such file or directory: '/content/output/1224276_original_mask.png'


    It happens with both custom images and the provided examples.

    opened by Oltrex 7
  • There are some strange white areas in my result

    Thanks for your exciting work, first of all. When I use -cn lama-fourier to train on my own dataset, I find there are some strange white areas in some train and test images (not all images, and from my observation it is unrelated to mask size), like the two examples below (selected from epochs 33/40). Do you know how to avoid this situation? Thanks in advance.

    PS: my dataset is a food image set with 150,000 images, and I use this command to train my model: CUDA_VISIBLE_DEVICES=0,1,2,3 python bin/train.py -cn lama-fourier location=food data.batch_size=10 data.num_workers=8 trainer.kwargs.gpus=[0,1,2,3] trainer.kwargs.limit_train_batches=12360 optimizers.generator.lr=0.001 optimizers.discriminator.lr=0.0001

    opened by liangzimei 7
  • Recommend an AWS EC2 instance?

    I want to use the in-depth functionality of LaMa with the refine=True parameter, and I have learned that it consumes 24GB of VRAM. Can someone recommend an EC2 instance with a suitable CUDA-enabled GPU and enough memory? I have looked at the G series, but it is confusing. My ultimate goal is just to run predictions with refine=True to see the results. @windj007 @cohimame @Sanster

    opened by hamzanaeem1999 0
  • predict.py error

    warn(f"Failed to load image Python extension: {e}")
    fused_weight_gradient_mlp_cuda module not found. gradient accumulation fusion with weight gradient computation disabled.
    Detectron v2 is not installed
    Traceback (most recent call last):
      File "G:\Compiler\Anaconda2021.11\envs\cu102_torch110_py38_lama\lib\site-packages\hydra\_internal\utils.py", line 211, in run_and_report
        return func()
      File "G:\Compiler\Anaconda2021.11\envs\cu102_torch110_py38_lama\lib\site-packages\hydra\_internal\utils.py", line 333, in <lambda>
        lambda: Hydra.create_main_hydra2(
      File "G:\Compiler\Anaconda2021.11\envs\cu102_torch110_py38_lama\lib\site-packages\hydra\_internal\hydra.py", line 64, in create_main_hydra2
        hydra = cls(task_name=task_name, config_loader=config_loader)
      File "G:\Compiler\Anaconda2021.11\envs\cu102_torch110_py38_lama\lib\site-packages\hydra\_internal\hydra.py", line 75, in __init__
        setup_globals()
      File "G:\Compiler\Anaconda2021.11\envs\cu102_torch110_py38_lama\lib\site-packages\hydra\core\utils.py", line 185, in setup_globals
        OmegaConf.register_new_resolver(
    TypeError: register_new_resolver() got an unexpected keyword argument 'replace'

    I am hitting this problem, please help me. Thank you very much.

    opened by aibohang 1
  • Why are the edge inpainting results blurry?

    I have been trying to apply LaMa to facade texture repair. The occlusions, like trees and buildings, are always located at the edge of the facade, and the results seem blurry there, as below: wall-010_mask(1); the original image: wall-010. What should I do to make the blurry regions clearer?

    opened by Luciawow 0
Owner
Advanced Image Manipulation Lab @ Samsung AI Center Moscow