SPN: Fully Context-Aware Image Inpainting with a Learned Semantic Pyramid
Code for Fully Context-Aware Image Inpainting with a Learned Semantic Pyramid, submitted to IEEE. Pretrained models have been uploaded.
This project is for our new inpainting method, SPN, which has been submitted to IEEE and is under peer review. This work is an extended version of our previous work SPL (IJCAI'21). If you have any questions, feel free to open an issue. Thanks for your interest!
Paper on arXiv. This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
Introduction
Briefly speaking, we still build on the key insight that learning semantic priors from specific pretext tasks can benefit image inpainting, and we further strengthen the modeling of the learned priors from the following aspects:
- We exploit multi-scale semantic priors in a feature pyramid manner to achieve a consistent understanding of both global and local context. The image generator is also improved to incorporate the prior pyramid (see the illustrative sketch after this list).
- We extend the prior learning in a probabilistic manner, which enables our method to handle the probabilistic image inpainting problem.
- In addition, more analyses of the learned prior pyramid and the choices of semantic supervision are provided in the experiment section.
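To make the prior pyramid idea concrete, the following is a minimal, hypothetical sketch (not the actual SPN implementation) of how multi-scale semantic priors could be fused into the generator's decoder features. The module and variable names (PriorPyramidFusion, prior_feats, dec_feat) are illustrative assumptions only.

import torch.nn as nn
import torch.nn.functional as F

class PriorPyramidFusion(nn.Module):
    # Fuse a pyramid of semantic prior feature maps into a decoder feature map.
    def __init__(self, prior_channels, dec_channels):
        super().__init__()
        # One 1x1 projection per pyramid level to match the decoder channels.
        self.proj = nn.ModuleList(
            [nn.Conv2d(c, dec_channels, kernel_size=1) for c in prior_channels]
        )

    def forward(self, dec_feat, prior_feats):
        # prior_feats: list of prior maps ordered from coarse (global context)
        # to fine (local context); dec_feat: current decoder feature map.
        for proj, prior in zip(self.proj, prior_feats):
            prior = F.interpolate(prior, size=dec_feat.shape[-2:],
                                  mode='bilinear', align_corners=False)
            dec_feat = dec_feat + proj(prior)  # simple additive fusion
        return dec_feat

# Example: fuse three prior levels (512/256/128 channels) into a 64-channel decoder map.
fusion = PriorPyramidFusion([512, 256, 128], dec_channels=64)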
Prerequisites (same as SPL)
- Python 3.7
- PyTorch 1.8 (1.6+ may also work)
- NVIDIA GPU + CUDA cuDNN (a quick PyTorch/CUDA check is shown after this list)
- Inplace_Abn (only needed for training our model; used in the ASL TResNet model)
- torchlight (we only use it to record the printed logs; you can replace it as you like)
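As a quick sanity check of the environment (a minimal snippet, not part of this repository), you can verify the PyTorch version and GPU availability:

import torch
print(torch.__version__)          # expect 1.8.x (1.6+ may also work)
print(torch.cuda.is_available())  # should print True if CUDA/cuDNN is set up correctly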
Datasets
We use the Places2, CelebA, and Paris Street-View datasets for deterministic image inpainting, the same as in SPL, and the CelebA-HQ dataset for probabilistic image inpainting. We also use the irregular masks provided by Liu et al., which can be downloaded from their website. For the detailed processing of these datasets, please refer to SPL and our paper.
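For reference, the snippet below illustrates one common way of applying an irregular mask to an image to form the masked input. It is only a sketch under our own assumptions (placeholder file paths, and the convention that mask value 1 marks the missing region); the exact preprocessing used in this project may differ, so please refer to SPL and the paper.

from PIL import Image
import numpy as np

# Placeholder paths; the actual images and masks come from the datasets above
# and the irregular masks of Liu et al.
image = np.array(Image.open('example.jpg').convert('RGB'), dtype=np.float32) / 255.0
mask = np.array(Image.open('mask.png').convert('L'), dtype=np.float32) / 255.0
mask = (mask > 0.5).astype(np.float32)[..., None]  # assume 1 = missing region

masked_image = image * (1.0 - mask)  # zero out the pixels to be inpainted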
Getting Started
Since our approach can be applied to both deterministic and probabilistic image inpainting, we separate the code for these two setups into different files, and each file contains the corresponding training and testing commands.
For both setups, the common preparations are listed as follows:
- Download the pre-trained models and copy them under the ./checkpoints directory.
- (For training) Make another directory, e.g., ./pretrained_ASL, and download the weights of TResNet_L pretrained on the OpenImages dataset to this directory.
- Install torchlight:
cd ./torchlight
python setup.py install