Overview

Bridging the Visual Gap: Wide-Range Image Blending

PyTorch implementation of our CVPR 2021 paper "Bridging the Visual Gap: Wide-Range Image Blending".
You can visit our project website here.

In this paper, we propose a novel model to tackle the problem of wide-range image blending, which aims to smoothly merge two different images into a panorama by generating novel image content for the intermediate region between them.

Paper

Bridging the Visual Gap: Wide-Range Image Blending
Chia-Ni Lu, Ya-Chu Chang, Wei-Chen Chiu
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021.

Please cite our paper if you find it useful for your research.

@InProceedings{lu2021bridging,
    author = {Lu, Chia-Ni and Chang, Ya-Chu and Chiu, Wei-Chen},
    title = {Bridging the Visual Gap: Wide-Range Image Blending},
    booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month = {June},
    year = {2021}
}

Installation

  • This code was developed with Python 3.7.4, PyTorch 1.0.0, and CUDA 9.2
  • Other requirements: numpy, skimage, tensorboardX (a sample install command follows below)
  • Clone this repo
git clone https://github.com/julia0607/Wide-Range-Image-Blending.git
cd Wide-Range-Image-Blending
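
A suggested way to install the Python dependencies with pip (note that skimage ships as the scikit-image package; install a PyTorch build matching your CUDA version separately):

pip install numpy scikit-image tensorboardX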

Testing

Download our pre-trained model weights from here and put them under weights/.

Test the sample data provided in this repo:

python test.py

Or download our paired test data from here and put them under data/.
Then run the testing code:

python test.py --test_data_dir_1 ./data/scenery6000_paired/test/input1/
               --test_data_dir_2 ./data/scenery6000_paired/test/input2/

Run your own data:

python test.py --test_data_dir_1 YOUR_DATA_PATH_1
               --test_data_dir_2 YOUR_DATA_PATH_2
               --save_dir YOUR_SAVE_PATH

If your test data isn't already paired, add --rand_pair True to pair the images randomly, as illustrated below.
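
For intuition, random pairing roughly corresponds to shuffling one of the two file lists before zipping them together. A minimal illustration (not the repo's actual pairing code; the paths are placeholders):

import os
import random

files_1 = sorted(os.listdir("YOUR_DATA_PATH_1"))
files_2 = sorted(os.listdir("YOUR_DATA_PATH_2"))
random.shuffle(files_2)               # break the index alignment
pairs = list(zip(files_1, files_2))   # each left image gets a random right image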

Training

We adopt the scenery dataset proposed by Very Long Natural Scenery Image Prediction by Outpainting for our experiments, splitting it into 5,040 training images and 1,000 testing images.

Download the dataset with our train/test split from here and put it under data/.
You can unzip the .zip file with jar xvf scenery6000_split.zip.
Then run the training code for self-reconstruction stage (first stage):

python train_SR.py

After finishing the training of the self-reconstruction stage, move the latest model weights from checkpoints/SR_Stage/ to weights/, and run the training code for the fine-tuning stage (second stage):

python train_FT.py --load_pretrain True
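
For the weight-moving step above, a possible command is the following, assuming the checkpoints are saved as .pth files (the actual checkpoint filenames may differ):

mv checkpoints/SR_Stage/*.pth weights/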

Train the model with your own dataset:

python train_SR.py --train_data_dir YOUR_DATA_PATH

After finishing the training of the self-reconstruction stage, move the latest model weights to weights/, and run the training code for the fine-tuning stage (second stage):

python train_FT.py --load_pretrain True
                   --train_data_dir YOUR_DATA_PATH

If your training data isn't already paired, add --rand_pair True to randomly pair the data in the fine-tuning stage.

TensorBoard Visualization

Visualization on TensorBoard for training and validation is supported. Run tensorboard --logdir YOUR_LOG_DIR to view training progress.
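
The logging presumably goes through tensorboardX (listed in the requirements); a minimal, self-contained sketch of how scalars end up in TensorBoard (the tag and values here are made up):

from tensorboardX import SummaryWriter

writer = SummaryWriter("logs/demo")   # event files are written under logs/demo
for step in range(100):
    writer.add_scalar("loss/train", 1.0 / (step + 1), step)  # dummy loss curve
writer.close()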

Acknowledgments

Our code is partially based on Very Long Natural Scenery Image Prediction by Outpainting and a PyTorch re-implementation of Generative Image Inpainting with Contextual Attention.
The implementation of the ID-MRF loss is borrowed from Image Inpainting via Generative Multi-column Convolutional Neural Networks.


Comments
  • Thank you for your sharing! Is there any metrics code?

    I am trying to reproduce the FID and KID values of your method. Is there any metrics code for this project? The only related information I found is the sentence "crop the central area of size 256×512 from the output panorama of size 256×768 for performing evaluation and comparison" in your paper, and I would like to know which pictures the test results should be compared with, as well as how to crop the compared pictures.

    opened by yiqingHuang0326 3
  • question about kid_score

    Hi :) For KID, I used the implementation provided by you on the rand_real folder and the center-cropped result1-2 folder.

    Honestly, in the beginning I didn't follow the instruction that "Ensure that you have saved both datasets as numpy files", because I found that it also produces final results with normal absolute paths, like python kid_score.py --true /home/wrib/rand_real/ --fake /home/wrib/center_crop_result1-2/, and I got 0.015 for the mean and 0.001 for the std.

    Then I used the numpy-file form of the two folders and got the same results.

    So I would like to ask for more details about how you used the KID implementation to compute the score.

    opened by yiqingHuang0326 1
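
Regarding the crop quoted in the first comment ("central area of size 256×512 from the output panorama of size 256×768"), a minimal numpy sketch of such a center crop (a hypothetical helper, not code from this repo):

import numpy as np

def center_crop_width(panorama, crop_w=512):
    # Crop the central crop_w columns from an (H, W, C) image,
    # e.g. 256x768 -> 256x512 as described for evaluation.
    w = panorama.shape[1]
    left = (w - crop_w) // 2          # (768 - 512) // 2 = 128
    return panorama[:, left:left + crop_w]

panorama = np.zeros((256, 768, 3))    # dummy panorama
cropped = center_crop_width(panorama) # -> shape (256, 512, 3)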