I2V-GAN
This repository is the official PyTorch implementation of the ACM MM 2021 paper
"I2V-GAN: Unpaired Infrared-to-Visible Video Translation".
Traffic I2V Example:
Monitoring I2V Example:
Flower Translation Example:
Introduction
Abstract
Human vision is often adversely affected by complex environmental factors, especially in night-vision scenarios. Infrared cameras are therefore often leveraged to enhance visibility by detecting infrared radiation in the surrounding environment, but infrared videos are undesirable due to their lack of detailed semantic information. In such cases, an effective video-to-video translation method from the infrared domain to its visible counterpart is strongly needed, one that overcomes the intrinsic, huge gap between the infrared and visible fields.
Our work proposes an infrared-to-visible (I2V) video translation method, I2V-GAN, which generates fine-grained, spatially and temporally consistent visible-light video from an unpaired infrared video.
The backbone network follows Cycle-GAN and Recycle-GAN.
Technically, our model capitalizes on three types of constraints: an adversarial constraint to generate synthetic frames that are similar to real ones; cyclic consistency with an introduced perceptual loss for effective content conversion as well as style preservation; and a similarity constraint across and within domains to enhance content and motion consistency in both spatial and temporal spaces at a fine-grained level.
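As a rough illustration only, the sketch below shows one way these three constraint types could be combined into a single generator objective in PyTorch. The LSGAN-style criteria, the loss weights, and all tensor and feature names are our own assumptions, not the paper's exact formulation.

import torch
import torch.nn as nn

# Hypothetical criteria for the three constraint types (illustrative choices):
adversarial = nn.MSELoss()  # LSGAN-style adversarial constraint
cyclic      = nn.L1Loss()   # cyclic consistency on reconstructed frames
perceptual  = nn.L1Loss()   # perceptual loss on deep features
similarity  = nn.L1Loss()   # similarity constraint across/within domains

def generator_objective(d_pred_fake, real, reconstructed,
                        feat_real, feat_recon, feat_t, feat_t_next,
                        lam_cyc=10.0, lam_perc=1.0, lam_sim=1.0):
    """Combine adversarial, cyclic/perceptual, and similarity constraints."""
    # (1) adversarial: the discriminator should score synthetic frames as real
    l_adv = adversarial(d_pred_fake, torch.ones_like(d_pred_fake))
    # (2) cyclic consistency plus a perceptual term for content/style preservation
    l_cyc = cyclic(reconstructed, real) + lam_perc * perceptual(feat_recon, feat_real)
    # (3) fine-grained similarity between adjacent frames for temporal consistency
    l_sim = similarity(feat_t, feat_t_next)
    return l_adv + lam_cyc * l_cyc + lam_sim * l_sim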
IRVI Dataset
Click here to download the IRVI dataset from Baidu Netdisk (access code: IRVI).
Data Structure
SUBSET     | SPLIT | TRAIN | TEST | TOTAL FRAME
-----------|-------|-------|------|------------
Traffic    |   -   | 17000 | 1000 |       18000
Monitoring | sub-1 |  1384 |  347 |        1731
Monitoring | sub-2 |  1040 |  260 |        1300
Monitoring | sub-3 |  1232 |  308 |        1540
Monitoring | sub-4 |   672 |  169 |         841
Monitoring | sub-5 |   752 |  188 |         940
Monitoring | all   |  5080 | 1272 |        6352
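If the data loaders follow the convention of the CycleGAN codebase that this implementation extends (an assumption on our part; we have not verified the exact folder names), the --dataroot directory would be organized into unaligned domain folders such as:

/path/to/dataset
├── trainA    # infrared training frames
├── trainB    # visible-light training frames
├── testA     # infrared test frames
└── testB     # visible-light test frames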
Installation
The code is implemented with Python 3.6 and PyTorch 1.9.0 for CUDA 11.2.
Install dependencies:
pip install -r requirements.txt
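If you are reproducing this environment from scratch, one possible setup is sketched below. This is our suggestion, not part of the official instructions; it assumes the cu111 PyTorch wheels, which run on a CUDA 11.2 driver:

conda create -n i2v-gan python=3.6
conda activate i2v-gan
pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 \
    -f https://download.pytorch.org/whl/torch_stable.html
pip install -r requirements.txt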
Usage
Train
python train.py --dataroot /path/to/dataset \
--display_env visdom_env_name --name exp_name \
--model i2vgan --which_model_netG resnet_6blocks \
--no_dropout --pool_size 0 \
--which_model_netP unet_128 --npf 8 --dataset_mode unaligned_triplet
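The --display_env flag suggests that training curves and sample frames are logged to Visdom, as in the CycleGAN codebase this repository builds on. If so, start a Visdom server in a separate terminal before launching the training command above:

python -m visdom.server
# visualizations will then be available at http://localhost:8097 (Visdom's default port)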
Test
python test.py --dataroot /path/to/dataset \
--which_epoch latest --name exp_name --model cycle_gan \
--which_model_netG resnet_6blocks --which_model_netP unet_128 \
--dataset_mode unaligned --no_dropout --loadSize 256 --resize_or_crop crop
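In CycleGAN-derived codebases, checkpoints for --which_epoch latest are read from a checkpoints folder named after the experiment, and translated frames are written under a results folder. We have not verified this repository's exact layout, so the path below is an assumption:

ls ./results/exp_name/test_latest/images   # translated frames (assumed layout)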
Citation
If you find our work useful in your research or publication, please cite:
@inproceedings{I2V-GAN2021,
title = {I2V-GAN: Unpaired Infrared-to-Visible Video Translation},
author = {Shuang Li and Bingfeng Han and Zhenjie Yu and Chi Harold Liu and Kai Chen and Shuigen Wang},
booktitle = {ACMMM},
year = {2021}
}
Acknowledgements
This code borrows heavily from the PyTorch implementation of CycleGAN and Pix2Pix, and from Recycle-GAN.
A huge thanks to them!
@inproceedings{CycleGAN2017,
title = {Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks},
author = {Zhu, Jun-Yan and Park, Taesung and Isola, Phillip and Efros, Alexei A},
booktitle = {ICCV},
year = {2017}
}
@inproceedings{Recycle-GAN2018,
title = {Recycle-GAN: Unsupervised Video Retargeting},
author = {Aayush Bansal and Shugao Ma and Deva Ramanan and Yaser Sheikh},
booktitle = {ECCV},
year = {2018}
}