Code release for the paper Low-Light Image Enhancement with Normalizing Flow.

Overview

[AAAI 2022] Low-Light Image Enhancement with Normalizing Flow

Paper | Project Page

Low-Light Image Enhancement with Normalizing Flow
Yufei Wang, Renjie Wan, Wenhan Yang, Haoliang Li, Lap-pui Chau, Alex C. Kot
In AAAI'2022

Overall

Framework

Quantitative results

Evaluation on LOL

The evaluation results on the LOL dataset are as follows:

Method PSNR SSIM LPIPS
LIME 16.76 0.56 0.35
RetinexNet 16.77 0.56 0.47
DRBN 20.13 0.83 0.16
KinD 20.87 0.80 0.17
KinD++ 21.30 0.82 0.16
LLFlow (Ours) 25.19 0.93 0.11
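
The PSNR values above follow the standard definition; below is a minimal numpy-only sketch. Note that the repository's evaluation script appears to additionally crop borders and align global brightness, so exact numbers may differ slightly.

```python
import numpy as np

def psnr(ref, out, max_val=1.0):
    """Peak signal-to-noise ratio for images scaled to [0, max_val]."""
    mse = np.mean((ref.astype(np.float64) - out.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```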

Computational Cost

The computational cost and performance of the models are shown in the table above. We evaluate the cost using a single image of size 400×600. Ours (large) is the standard model reported in the supplementary material, and Ours (small) is a model with reduced parameters. Both the training config files and pre-trained models are provided.

Visual Results

Visual comparison with state-of-the-art low-light image enhancement methods on the LOL dataset.

Get Started

Dependencies and Installation

  • Python 3.8
  • PyTorch 1.9
  1. Clone the repo
git clone https://github.com/wyf0912/LLFlow.git
  2. Create a Conda environment
conda create --name LLFlow python=3.8
conda activate LLFlow
  3. Install dependencies
cd LLFlow
pip install -r requirements.txt

Pretrained Model

We provide the pre-trained models with the following settings:

  • A lightweight model with promising performance trained on LOL [Google drive] with training config file ./confs/LOL_smallNet.yml.
  • A standard-sized model trained on LOL [Google drive] with training config file ./confs/LOL-pc.yml.
  • A standard-sized model trained on VE-LOL [Google drive] with training config file ./confs/LOLv2-pc.yml.

Test

You can check the training log to obtain the performance of the model. You can also directly test the performance of the pre-trained model as follows:

  1. Modify the paths to the dataset and the pre-trained model. You need to modify the following paths in the config files in ./confs:
#### Test Settings
dataroot_GT # only needed for testing with paired data
dataroot_LR
model_path
  2. Test the model.

To test the model with paired data and obtain the evaluation results, e.g., PSNR, SSIM, and LPIPS:

python test.py --opt your_config_path
# You need to specify an appropriate config file since it stores the config of the model, e.g., the number of layers.

To test the model with unpaired data:

python test_unpaired.py --opt your_config_path
# You need to specify an appropriate config file since it stores the config of the model, e.g., the number of layers.

You can check the output in ../results.

Train

All logging files in the training process, e.g., log message, checkpoints, and snapshots, will be saved to ./experiments.

  1. Modify the paths to the dataset in the config yaml files. We provide the following training configs for both the LOL and VE-LOL benchmarks. You can also create your own configs for your own dataset.
./confs/LOL_smallNet.yml
./confs/LOL-pc.yml
./confs/LOLv2-pc.yml

You need to modify the following terms:

datasets.train.root
datasets.val.root
gpu_ids: [0] # Our model can be trained on a single GPU with more than 20 GB of memory. You can also train on multiple GPUs by adding their ids to this list.
  2. Train the network.
python train.py --opt your_config_path

Citation

If you find our work useful for your research, please cite our paper:

@article{wang2021low,
  title={Low-Light Image Enhancement with Normalizing Flow},
  author={Wang, Yufei and Wan, Renjie and Yang, Wenhan and Li, Haoliang and Chau, Lap-Pui and Kot, Alex C},
  journal={arXiv preprint arXiv:2109.05923},
  year={2021}
}

Contact

If you have any questions, please feel free to contact us via [email protected].

Comments
  • NaN values appear during training

    NaN values appear during training

    22-06-02 16:46:36.522 - INFO: <epoch: 40, iter: 1,740> psnr: 2.2714e+01 SSIM: 7.9226e-01
    22-06-02 16:47:51.099 - INFO: # Validation # PSNR: 2.2015e+01 SSIM: 7.8545e-01
    ...
    22-06-02 17:41:13.993 - INFO: <epoch: 60, iter: 2,600> psnr: 2.4500e+01 SSIM: 8.1474e-01
    22-06-02 17:42:28.953 - INFO: <epoch: 60, iter: 2,620> psnr: 2.3704e+01 SSIM: 7.6469e-01
    22-06-02 17:43:44.738 - INFO: # Validation # PSNR: nan SSIM: nan
    22-06-02 17:43:44.738 - INFO: <epoch: 61, iter: 2,640> psnr: nan SSIM: nan
    ...
    <epoch: 62, iter: 2,700, lr:2.700e-04, t:3.71e+00, td:5.06e-02, eta:3.84e+01, nll:nan>
    ...
    <epoch: 67, iter: 2,900, lr:2.900e-04, t:3.60e+00, td:4.90e-02, eta:3.71e+01, nll:nan>

    (Log excerpt; repeated validation lines omitted. Training proceeds normally up to iter 2,620, after which PSNR, SSIM, and nll are all nan.)

    Setting warmup_iter to 5000 still produces this behavior. How can I fix it? Thanks.
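
One common generic workaround, offered here as an illustration only (it is not part of the official LLFlow training code), is to skip optimizer updates whenever the loss goes non-finite:

```python
import math

class NanGuard:
    """Skip parameter updates when the loss is NaN/Inf.

    Generic illustration only; not part of the official LLFlow code.
    """

    def __init__(self):
        self.skipped = 0  # number of batches whose update was dropped

    def step(self, loss_value, update_fn):
        if not math.isfinite(loss_value):
            self.skipped += 1  # drop this batch's update entirely
            return False
        update_fn()  # e.g. optimizer.step() in a real training loop
        return True
```

In a PyTorch loop this corresponds to checking `torch.isfinite(loss)` before calling `optimizer.step()`; gradient clipping and a lower learning rate are other common mitigations.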

    opened by 1013990424 8
  • Questions about the paper

    Questions about the paper

    Hello, I'd like to ask you two questions. First: why is the input low-light image first enhanced by histogram equalization? Can the low-light image be fed into the network directly? Second: you said that normalizing flow lets you learn a one-to-many mapping. Where is this reflected?

    opened by ph0316 7
  • Result mismatch

    Result mismatch

    Hello, I cloned the code and tested it on the LOL dataset (eval15). I only modified the paths to the dataset and pre-trained model in confs/LOL-pc.yml and ran

    python test.py --opt your_config_path
    

    I got the following results (PSNR 25.00), which are slightly different from the results in the paper (PSNR 25.19). I would like to know what causes this difference.

    opened by zhanggengchen 4
  • About dataset

    About dataset

    Hi, I found that you used the VE-LOL dataset; however, the visual results of VE-LOL in Figure 7 seem to be from the LOLv2 dataset.

    If you train on LOLv1 and apply Cross-Dataset Evaluation on LOLv2, I think it may not be very appropriate, since the training images of LOLv1 contain part of the testing images of LOLv2.

    Another question: I could not find the download link for VE-LOL in your cited paper; can you provide one?

    opened by wangchx67 3
  • question about the 'finetune the overall brightness'

    question about the 'finetune the overall brightness'

    Hi, thank you for your code. I used the provided pretrained model to test on LOL; the average PSNR I measured is 25 dB. But I was confused that when I removed the 'finetune the overall brightness' step, the result dropped to 21.15. The value declines significantly without finetuning the global brightness, which uses the ground truth.
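
The brightness finetuning the question refers to appears to rescale the output so its mean gray level matches the ground truth's. A minimal numpy sketch (function and variable names are assumptions, not the repository's exact code):

```python
import numpy as np

def match_mean_brightness(out_img, gt_img):
    """Rescale out_img so its mean gray level matches gt_img's mean.

    Sketch of the ground-truth-based brightness adjustment discussed above;
    names and details are assumptions, not the repository's exact code.
    """
    scale = gt_img.mean() / out_img.mean()
    return np.clip(out_img * scale, 0.0, 1.0)  # keep values in [0, 1]
```

Because this step injects ground-truth information into the evaluation, removing it plausibly explains a PSNR gap of the kind described (25 dB vs. 21.15 dB).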

    opened by Boberalice 3
  • How to calculate FLOPs and #Params for LLFlow?

    How to calculate FLOPs and #Params for LLFlow?

    Hello, authors! I am using the following function to calculate the FLOPs, #Params, and inference time; this function works for methods like ZeroDCE, RUAS, and URetinexNet.

    from thop import profile
    import torch
    import time
    def cal_eff_score(model, count = 100, use_cuda=True):
    
        # define input tensor
        inp_tensor = torch.rand(1, 3, 1080, 1920) 
    
        # deploy to cuda
        if use_cuda:
            inp_tensor = inp_tensor.cuda()
            model = model.cuda()
    
        # get flops and params
        flops, params = profile(model, inputs=(inp_tensor, ))
        G_flops = flops * 1e-9
        M_params = params * 1e-6
    
        # get time
        start_time = time.time()
        for i in range(count):
            _ = model(inp_tensor)
        used_time = time.time() - start_time
        ave_time = used_time / count
    
        # print score
        print('FLOPs (G) = {:.4f}'.format(G_flops))
        print('Params (M) = {:.4f}'.format(M_params))
        print('Time (S) = {:.4f}'.format(ave_time))
    

    However, if I pass your model as the variable, it gives me the following error.

    Traceback (most recent call last):
      File "test_unpaired.py", line 184, in <module>
        main()
      File "test_unpaired.py", line 135, in main
        cal_eff_score(model)
      File "test_unpaired.py", line 32, in cal_eff_score
        flops, params = profile(model, inputs=(inp_tensor, ))
      File "/root/miniconda3/lib/python3.8/site-packages/thop/profile.py", line 92, in profile
        model(*inputs)
      File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
        return forward_call(*input, **kwargs)
      File "/root/miniconda3/lib/python3.8/site-packages/torch/cuda/amp/autocast_mode.py", line 141, in decorate_autocast
        return func(*args, **kwargs)
      File "/root/autodl-tmp/Model/LLFlow/code/models/modules/LLFlow_arch.py", line 97, in forward
        return self.normal_flow(gt, lr, epses=epses, lr_enc=lr_enc, add_gt_noise=add_gt_noise, step=step,
      File "/root/autodl-tmp/Model/LLFlow/code/models/modules/LLFlow_arch.py", line 121, in normal_flow
        lr_enc = self.rrdbPreprocessing(lr)
      File "/root/autodl-tmp/Model/LLFlow/code/models/modules/LLFlow_arch.py", line 182, in rrdbPreprocessing
        rrdbResults = self.RRDB(lr, get_steps=True)
      File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
        return forward_call(*input, **kwargs)
      File "/root/autodl-tmp/Model/LLFlow/code/models/modules/ConditionEncoder.py", line 96, in forward
        raw_low_input = x[:, 0:3].exp()
    TypeError: 'NoneType' object is not subscriptable
    

    I have spent a lot of time trying to understand the code in your models folder, but it is too complicated for me. Therefore, I hope you can clarify how I can calculate the FLOPs and #Params for your model. Thanks!

    opened by ShenZheng2000 3
  • RuntimeError: svd_cuda: the updating process of SBDSDC did not converge (error: 22)

    RuntimeError: svd_cuda: the updating process of SBDSDC did not converge (error: 22)

    I have a problem when I run the training code. After about 4,000 iterations, I got an error. This is the training log.

    OrderedDict([('manual_seed', 10), ('lr_G', 0.0005), ('weight_decay_G', 0), ('beta1', 0.9), ('beta2', 0.99), ('lr_scheme', 'MultiStepLR'), ('warmup_iter', 10), ('lr_steps_rel', [0.5, 0.75, 0.9, 0.95]), ('lr_gamma', 0.5), ('weight_l1', 0), ('weight_fl', 1), ('niter', 30000), ('val_freq', 200), ('lr_steps', [15000, 22500, 27000, 28500])])
    Disabled distributed training.
    (Full config dump omitted; notable settings: model: LLFlow, dataset: LoL, cond_encoder: ConEncoder1, gpu_ids: [0], batch_size: 16, GT_size: 160, nf: 64, nb: 24, flow K: 12, L: 3, warmup_iter: 10, niter: 30000.)
    22-05-17 21:58:56.276 - INFO: Model [LLFlowModel] is created. Parameters of full network 38.8595 and encoder 17.4968
    22-05-17 21:58:56.286 - INFO: Start training from epoch: 0, iter: 0
    ...
    train.py:291: RuntimeWarning: divide by zero encountered in float_scalars
    train.py:291: RuntimeWarning: invalid value encountered in multiply
      cropped_sr_img_adjust = np.clip(cropped_sr_img * (mean_gray_gt / mean_gray_out), 0, 1)
    22-05-17 22:04:39.757 - INFO: # Validation # PSNR: nan SSIM: nan
    ...
    <epoch:119, iter: 3,600, lr:4.500e-04, t:1.71e+00, td:1.37e-02, eta:1.25e+01, nll:-1.365e+01>
    <epoch:126, iter: 3,800, lr:4.500e-04, t:1.69e+00, td:1.56e-02, eta:1.23e+01, nll:6.914e+00>
    22-05-17 23:46:58.775 - INFO: # Validation # PSNR: nan SSIM: nan
    <epoch:133, iter: 4,000, lr:4.500e-04, t:1.65e+00, td:1.56e-02, eta:1.19e+01, nll:nan>
    22-05-17 23:52:29.098 - INFO: <epoch:133, iter: 4,000> psnr: nan SSIM: nan
    22-05-17 23:52:29.098 - INFO: Saving models and training states.

    (Log excerpt; intermediate iterations and repeated validation lines omitted. nll degrades from -1.365e+01 at iter 3,600 to 6.914e+00 at iter 3,800 and is nan by iter 4,000.)

    Intel MKL ERROR: Parameter 4 was incorrect on entry to SLASCL.

    Intel MKL ERROR: Parameter 4 was incorrect on entry to SLASCL.

    Traceback (most recent call last):
      File "train.py", line 343, in <module>
        main()
      File "train.py", line 191, in main
        nll = model.optimize_parameters(current_step)
      File "/home/jaemin/Desktop/LLFlow-main/code/models/LLFlow_model.py", line 208, in optimize_parameters
        self.scaler.scale(total_loss).backward()
      File "/home/jaemin/anaconda3/envs/mymodel/lib/python3.7/site-packages/torch/tensor.py", line 221, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph)
      File "/home/jaemin/anaconda3/envs/mymodel/lib/python3.7/site-packages/torch/autograd/__init__.py", line 132, in backward
        allow_unreachable=True)  # allow_unreachable flag
    RuntimeError: svd_cuda: the updating process of SBDSDC did not converge (error: 22)

    Do you have any solution for this problem? Thanks.

    opened by PJaemin 3
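
A common workaround for this kind of failure (not an official fix from the authors) is to skip the optimizer update whenever the loss goes non-finite, instead of letting NaNs propagate into the weights. A minimal sketch, where `safe_step` and `apply_update` are illustrative names rather than the repository's actual API:

```python
import math

def safe_step(nll, apply_update):
    """Skip the update when the loss is non-finite.

    `nll` is the scalar loss value; `apply_update` stands in for the
    backward pass and optimizer step. Both names are hypothetical.
    """
    if not math.isfinite(nll):
        return False  # drop this batch instead of corrupting the weights
    apply_update()
    return True
```

Note that once the logged nll has already reached nan, the weights are usually corrupted; resuming from the last finite checkpoint, typically with a lower learning rate, is then also needed.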
  • ZeroDivisionError: division by zero

    hello, your work is pretty good! But I ran into some trouble when training the model on my own dataset. It shows:

        <epoch: 1, iter: 200, lr:4.500e-04, t:1.77e+00, td:1.01e-02, eta:1.46e+01, nll:-1.825e+01>
        Traceback (most recent call last):
          File "train.py", line 345, in <module>
            main()
          File "train.py", line 301, in main
            avg_psnr = avg_psnr / idx
        ZeroDivisionError: float division by zero

    As you can see, the training shows incorrect t, td, eta, and nll values. I guess maybe idx and nll are not counted correctly, but I don't know how to fix it. Have you ever encountered this problem? I really hope you can answer, thank you!

    opened by gwp2021 3
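
The traceback above points at a division by the number of validation images; when the validation loader yields nothing (e.g. a misconfigured dataset path), `idx` stays zero. A minimal guard, sketched with the illustrative name `average_psnr` rather than the repository's actual code:

```python
def average_psnr(psnr_values):
    """Mean PSNR over a validation run, or None if no images were scored.

    Guards the `avg_psnr / idx` division seen in the traceback above;
    `psnr_values` is a hypothetical name for the per-image PSNR list.
    """
    if not psnr_values:
        return None  # empty validation set: avoid ZeroDivisionError
    return sum(psnr_values) / len(psnr_values)
```

Returning `None` (or logging a warning) makes the real problem visible: the validation set was empty, which is usually a dataset-path issue rather than a training bug.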
  • Over compensated brightness

    Hello, first of all, great work!

    I was testing the model and I found that some output images look very weird. (see below)

    I am using the LOLv2.pth model

    Before: (screenshot "wedding1")

    After: (screenshot "res")

    This image works with the LOLv2.pth model; however, it doesn't work with LOL.pth (as below).

    Before: (screenshot "daniel")

    After: (screenshot "res")

    Any idea how to resolve this?

    Thank you

    opened by spacerunner8 3
  • It is mentioned in the paper that the detailed structure diagram of the encoder is in the appendix, but I did not find the appendix. Could you please give me a link to the appendix?

    opened by Smile-QT 2
  • Checkpoints

    Hello, first of all, thank you for your contributions in this area; you have done excellent work. I am fairly new to image enhancement and deep learning, so I want to ask where we can find the checkpoints, and especially the model path for testing. Looking forward to your answer!

    opened by orzanai 2
  • Question about color map calculation

    Thank you for posting the code of the paper. I have some doubts about the color map and would like to ask about the code in the attached screenshots (Figure 1 and Figure 2):

    Q1: Is the torch.cat at steps 1 and 3 in Figure 1 repeated?
    Q2: What is the effect of exponentiating the input to get raw_low_input (step 2 in Figure 1)?
    Q3: Does mean_c(x) in the paper correspond to x.sum in the code (see the example in Figure 2)?

    opened by lyy-zz 0
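
On Q3, one plausible reading (an assumption, not the authors' confirmed answer) is that summing over the channel axis and dividing each pixel by that sum yields a brightness-invariant color map, so `x.sum` plays the role of the paper's `mean_c(x)` up to a constant factor. A minimal sketch; the function name and `eps` are illustrative:

```python
import numpy as np

def color_map(img, eps=1e-4):
    """Divide each pixel of an (H, W, 3) image by the sum of its channels.

    Hypothetical reconstruction of the color-map computation discussed
    above; `eps` avoids division by zero on fully black pixels.
    """
    denom = img.sum(axis=-1, keepdims=True) + eps  # per-pixel channel sum
    return img / denom
```

After this normalization, the three channel values of each pixel sum to (approximately) 1, so globally scaling the input brightness leaves the map nearly unchanged, which is the point of using it as an illumination-invariant cue.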
  • train and test question

    (screenshots: "question" and "question-1")

    There are several questions I want to ask. I don't know why training fails after I load the pre-trained model that you provided. Is it not possible to use the pre-trained models when training? By the way, I changed the training batch_size (16 -> 8) because of a CUDA out-of-memory error.

    Also, I trained once without the pre-trained model and got the latest checkpoint, but when I used it for testing, the loaded model was the downloaded one instead of the one I trained. Is that normal? (screenshot "question2")

    Looking forward to your reply! Thank you!

    opened by id4su 0
  • About loss

    Hi, I have a question: how do I use the L1 loss and the maximum-likelihood loss at the same time? When I used both together, I encountered the error in the attached screenshot, and the resulting image was also black. I am looking forward to your reply. Thank you!

    opened by ph0316 1
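
Not an answer from the authors, but the usual way to use two losses "at the same time" is a weighted sum, with the auxiliary weight tuned so it does not swamp the likelihood term. A minimal sketch; the function name and the weight value are purely illustrative:

```python
def combined_loss(nll, l1, l1_weight=0.01):
    """Weighted sum of the flow NLL and an auxiliary L1 term.

    `l1_weight` is a hypothetical hyper-parameter. An L1 weight that is
    far too large can dominate the likelihood objective, which is one
    possible cause of the black outputs described above.
    """
    return nll + l1_weight * l1
```

If the combined training still produces black images, sweeping the weight downward (e.g. by factors of 10) is a standard first diagnostic.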
  • Can not test the model

    Hi authors, I tried to run the code to evaluate the model on the LOL dataset. I followed your command "python test.py --opt /media/vipl2/DATA_2/Min/LLFlow-main/code/confs/LOL_smallNet.yml". However, I cannot get the results; the terminal shows the error in the attached screenshot. I already changed the dataset paths inside the LOL_smallNet.yml file (see screenshot). Could you please show me how to fix this and obtain the results? Thank you!

    opened by minhthien2206 3
Owner
Yufei Wang
PhD student @ Nanyang Technological University
The code of Zero-shot learning for low-light image enhancement based on dual iteration

Zero-shot-dual-iter-LLE The code of Zero-shot learning for low-light image enhancement based on dual iteration. You can get the real night image tests

null 1 Mar 18, 2022
From Fidelity to Perceptual Quality: A Semi-Supervised Approach for Low-Light Image Enhancement (CVPR'2020)

Under-exposure introduces a series of visual degradation, i.e. decreased visibility, intensive noise, and biased color, etc. To address these problems, we propose a novel semi-supervised learning approach for low-light image enhancement.

Yang Wenhan 117 Jan 3, 2023
Tensorflow implementation of MIRNet for Low-light image enhancement

MIRNet Tensorflow implementation of the MIRNet architecture as proposed by Learning Enriched Features for Real Image Restoration and Enhancement. Lanu

Soumik Rakshit 91 Jan 6, 2023
Official implementation for "Low-light Image Enhancement via Breaking Down the Darkness"

Low-light Image Enhancement via Breaking Down the Darkness by Qiming Hu, Xiaojie Guo. 1. Dependencies Python3 PyTorch>=1.0 OpenCV-Python, TensorboardX

Qiming Hu 30 Jan 1, 2023
Learning Lightweight Low-Light Enhancement Network using Pseudo Well-Exposed Images

Learning Lightweight Low-Light Enhancement Network using Pseudo Well-Exposed Images This repository contains the implementation of the following paper

Seonggwan Ko 9 Jul 30, 2022
3D2Unet: 3D Deformable Unet for Low-Light Video Enhancement (PRCV2021)

3DDUNET This is the code for 3D2Unet: 3D Deformable Unet for Low-Light Video Enhancement (PRCV2021) Conference Paper Link Dataset We use SMOID dataset

null 1 Jan 7, 2022
Code for HLA-Face: Joint High-Low Adaptation for Low Light Face Detection (CVPR21)

HLA-Face: Joint High-Low Adaptation for Low Light Face Detection The official PyTorch implementation for HLA-Face: Joint High-Low Adaptation for Low L

Wenjing Wang 77 Dec 8, 2022
Code release for Local Light Field Fusion at SIGGRAPH 2019

Local Light Field Fusion Project | Video | Paper Tensorflow implementation for novel view synthesis from sparse input images. Local Light Field Fusion

null 1.1k Dec 27, 2022
Official PyTorch code for WACV 2022 paper "CFLOW-AD: Real-Time Unsupervised Anomaly Detection with Localization via Conditional Normalizing Flows"

CFLOW-AD: Real-Time Unsupervised Anomaly Detection with Localization via Conditional Normalizing Flows WACV 2022 preprint:https://arxiv.org/abs/2107.1

Denis 156 Dec 28, 2022
EDCNN: Edge enhancement-based Densely Connected Network with Compound Loss for Low-Dose CT Denoising

EDCNN: Edge enhancement-based Densely Connected Network with Compound Loss for Low-Dose CT Denoising By Tengfei Liang, Yi Jin, Yidong Li, Tao Wang. Th

workingcoder 115 Jan 5, 2023
A Low Complexity Speech Enhancement Framework for Full-Band Audio (48kHz) based on Deep Filtering.

DeepFilterNet A Low Complexity Speech Enhancement Framework for Full-Band Audio (48kHz) based on Deep Filtering. libDF contains Rust code used for dat

Hendrik Schröter 292 Dec 25, 2022
Just Go with the Flow: Self-Supervised Scene Flow Estimation

Just Go with the Flow: Self-Supervised Scene Flow Estimation Code release for the paper Just Go with the Flow: Self-Supervised Scene Flow Estimation,

Himangi Mittal 50 Nov 22, 2022
LLVIP: A Visible-infrared Paired Dataset for Low-light Vision

LLVIP: A Visible-infrared Paired Dataset for Low-light Vision Project | Arxiv | Abstract It is very challenging for various visual tasks such as image

CVSM Group -  email: czhu@bupt.edu.cn 377 Jan 7, 2023
Implementation for paper "STAR: A Structure-aware Lightweight Transformer for Real-time Image Enhancement" (ICCV 2021).

STAR-pytorch Implementation for paper "STAR: A Structure-aware Lightweight Transformer for Real-time Image Enhancement" (ICCV 2021). CVF (pdf) STAR-DC

null 43 Dec 21, 2022
Official PyTorch code for Hierarchical Conditional Flow: A Unified Framework for Image Super-Resolution and Image Rescaling (HCFlow, ICCV2021)

Hierarchical Conditional Flow: A Unified Framework for Image Super-Resolution and Image Rescaling (HCFlow, ICCV2021) This repository is the official P

Jingyun Liang 159 Dec 30, 2022
Stochastic Normalizing Flows

Stochastic Normalizing Flows We introduce stochasticity in Boltzmann-generating flows. Normalizing flows are exact-probability generative models that

AI4Science group, FU Berlin (Frank Noé and co-workers) 50 Dec 16, 2022