Blind Image Decomposition (BID)
Blind Image Decomposition is a novel task. The task requires separating a superimposed image into its constituent underlying images in a blind setting, that is, both the source components involved in mixing and the mixing mechanism are unknown.
We invite our community to explore the novel BID task, including discovering interesting areas of application, developing novel methods, extending the BID setting, and constructing benchmark datasets.
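As a toy illustration of the setting (not the paper's actual mixing pipeline), a superimposed observation can be thought of as a weighted combination of unknown source components. The `mix` helper below is hypothetical and uses simple linear weighted mixing; BIDeN itself must recover the components without knowing the weights or even which components are present.

```python
import numpy as np

def mix(components, weights):
    """Superimpose source components with the given weights.

    In the BID setting both the set of components and the mixing
    mechanism are unknown to the model; linear weighted mixing is
    just one illustrative choice.
    """
    components = [np.asarray(c, dtype=np.float64) for c in components]
    mixed = sum(w * c for w, c in zip(weights, components))
    return np.clip(mixed, 0.0, 1.0)

# Two toy "images": a constant background and a sparse "rain streak" layer.
background = np.full((4, 4), 0.5)
rain = np.zeros((4, 4))
rain[0, :] = 1.0
observed = mix([background, rain], weights=[0.7, 0.3])
```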
Blind Image Decomposition
Junlin Han, Weihao Li, Pengfei Fang, Chunyi Sun, Jie Hong, Ali Armin, Lars Petersson, Hongdong Li
DATA61-CSIRO and Australian National University
Preprint
BIDeN (Blind Image Decomposition Network):
Applications of BID
Deraining (rain streak, snow, haze, raindrop):
Rows 1-6 present 6 cases of the same scene. The 6 cases are (1): rain streak, (2): rain streak + snow, (3): rain streak + light haze, (4): rain streak + heavy haze, (5): rain streak + moderate haze + raindrop, (6): rain streak + snow + moderate haze + raindrop.
Joint shadow/reflection/watermark removal:
Prerequisites
Python 3.7 or above.
For packages, see requirements.txt.
Getting started
- Clone this repo:
git clone https://github.com/JunlinHan/BID.git
- Install PyTorch 1.7 or above and other dependencies (e.g., torchvision, visdom, dominate, gputil).
For pip users, please type the command
pip install -r requirements.txt
For Conda users, you can create a new Conda environment using
conda env create -f environment.yml
(recommended). We tested our code on both Windows and Ubuntu.
BID Datasets
- Download the BID datasets: https://drive.google.com/drive/folders/1wUUKTiRAGVvelarhsjmZZ_1iBdBaM6Ka?usp=sharing
- Unzip the downloaded datasets and put them inside ./datasets/.
BID Train/Test
- Detailed instructions are provided at ./models/.
- To view training results and loss plots, run
python -m visdom.server
and click the URL http://localhost:8097.
Task I: Mixed image decomposition across multiple domains:
Train (biden-n, where n is the maximum number of source components):
python train.py --dataroot ./datasets/image_decom --name biden2 --model biden2 --dataset_mode unaligned2
python train.py --dataroot ./datasets/image_decom --name biden3 --model biden3 --dataset_mode unaligned3
...
python train.py --dataroot ./datasets/image_decom --name biden8 --model biden8 --dataset_mode unaligned8
Test a single case (use n = 3 as an example):
python test.py --dataroot ./datasets/image_decom --name biden3 --model biden3 --dataset_mode unaligned3 --test_input A
python test.py --dataroot ./datasets/image_decom --name biden3 --model biden3 --dataset_mode unaligned3 --test_input AB
... and other cases; change --test_input to the case you want to test.
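The provided test2.py runs all cases at once; purely as an illustration of what --test_input accepts, the hypothetical helper below (not part of the repo) enumerates every non-empty subset of components for biden-n, assuming the components are labelled A, B, C, ... as in the commands above:

```python
from itertools import combinations

def test_cases(n=3):
    """All non-empty --test_input values for biden-n, e.g. A, B, C, AB, ... ABC."""
    labels = "ABCDEFGH"[:n]
    return ["".join(c) for r in range(1, n + 1)
            for c in combinations(labels, r)]

# Print one test command per case for biden3.
for case in test_cases(3):
    print("python test.py --dataroot ./datasets/image_decom --name biden3 "
          "--model biden3 --dataset_mode unaligned3 --test_input " + case)
```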
Test all cases:
python test2.py --dataroot ./datasets/image_decom --name biden3 --model biden3 --dataset_mode unaligned3
Task II: Real-scenario deraining:
Train:
python train.py --dataroot ./datasets/rain --name task2 --model rain --dataset_mode rain
Task III: Joint shadow/reflection/watermark removal:
Train:
python train.py --dataroot ./datasets/jointremoval_v1 --name task3_v1 --model jointremoval --dataset_mode jointremoval
or
python train.py --dataroot ./datasets/jointremoval_v2 --name task3_v2 --model jointremoval --dataset_mode jointremoval
The test results will be saved to an html file here: ./results/.
Apply a pre-trained BIDeN model
We provide our pre-trained BIDeN models at: https://drive.google.com/drive/folders/1UBmdKZXYewJVXHT4dRaat4g8xZ61OyDF?usp=sharing
Download the pre-trained model, unzip it, and put it inside ./checkpoints.
Example usage: download the dataset of Task II (rain) and the pre-trained model of Task II (task2), then test the rain streak case:
python test.py --dataroot ./datasets/rain --name task2 --model rain --dataset_mode rain --test_input B
Evaluation
For FID score, use pytorch-fid.
For PSNR/SSIM/RMSE, see ./metrics/.
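The scripts in ./metrics/ are the authoritative evaluation code; for reference, a minimal PSNR sketch (images assumed to be in [0, 1]) looks like this:

```python
import numpy as np

def psnr(img1, img2, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two images in [0, max_val]."""
    mse = np.mean((np.asarray(img1, float) - np.asarray(img2, float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

clean = np.zeros((8, 8))
noisy = clean + 0.1              # uniform error of 0.1 -> MSE = 0.01
print(round(psnr(clean, noisy), 2))  # 20.0
```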
Raindrop effect
See ./raindrop/.
Citation
If you use our code or our results, please consider citing our paper. Thanks in advance!
@inproceedings{han2021bid,
title={Blind Image Decomposition},
author={Junlin Han and Weihao Li and Pengfei Fang and Chunyi Sun and Jie Hong and Mohammad Ali Armin and Lars Petersson and Hongdong Li},
booktitle={arXiv preprint arXiv:2108.11364},
year={2021}
}
Contact
[email protected] or [email protected]
Acknowledgments
Our code is developed based on DCLGAN and CUT. We thank the authors of MPRNet, perceptual-reflection-removal, Double-DIP, and Deep-adversarial-decomposition for sharing their source code. We thank exposure-fusion-shadow-removal and ghost-free-shadow-removal for providing the source code and results. We thank pytorch-fid for FID computation.