A Physics-based Noise Formation Model for Extreme Low-light Raw Denoising (CVPR 2020 Oral & TPAMI 2021)

Overview

ELD

The implementation of the CVPR 2020 (Oral) paper "A Physics-based Noise Formation Model for Extreme Low-light Raw Denoising" and its journal (TPAMI) version "Physics-based Noise Modeling for Extreme Low-light Photography". Interested readers are also referred to an insightful Note about this work on Zhihu (in Chinese).

News

  • 2022/01/08: Major Update: Release the training code and other related items (including synthetic datasets, a customized rawpy, calibrated camera noise parameters, baseline noise models, the calibrated SonyA7S2 camera response function (CRF), and a modern implementation of the EMoR radiometric calibration method) to accelerate further research!
  • 2022/01/05: Replace the released ELD dataset with my local version. We thank @fenghansen for pointing this out. Please refer to this issue for more details.
  • 2021/08/05: The comprehensive version of this work was accepted to IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)
  • 2020/07/16: Release the ELD dataset and our pretrained models at GoogleDrive and Baidudisk (0lby)

Highlights

  • We present a highly accurate noise formation model based on the characteristics of CMOS photosensors, thereby enabling us to synthesize realistic samples that better match the physics of the image formation process.

  • To study the generalizability of a neural network trained with existing schemes, we introduce a new Extreme Low-light Denoising (ELD) dataset that covers four representative modern camera devices, for evaluation purposes only. The image capture setup and example images are shown below:

  • By training only with our synthetic data, we demonstrate that a convolutional neural network can compete with, or sometimes even outperform, a network trained with paired real data under extreme low-light settings. The denoising results of networks trained with different schemes, i.e. 1) synthetic data generated by the Poissonian-Gaussian noise model, 2) paired real data from the SID dataset, and 3) synthetic data generated by our proposed noise model, are displayed as follows (a minimal sketch of the first baseline is given right after this list):
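As a side note on the first scheme, the classic Poissonian-Gaussian baseline can be sketched in a few lines of NumPy. This is only a minimal illustration; the gain K and read-noise level below are made-up values, not the calibrated parameters released with this repo.

```python
import numpy as np

def poisson_gaussian_noise(clean, K=0.01, sigma_read=0.002):
    """Minimal sketch of the Poissonian-Gaussian baseline noise model.

    clean      -- clean raw image normalized to [0, 1]
    K          -- overall system gain (illustrative value, not calibrated)
    sigma_read -- std of the signal-independent Gaussian read noise (illustrative)
    """
    # Signal-dependent shot noise: photon counts follow a Poisson distribution.
    shot = np.random.poisson(clean / K) * K
    # Signal-independent read noise approximated as zero-mean Gaussian.
    read = np.random.normal(0.0, sigma_read, size=clean.shape)
    return (shot + read).astype(np.float32)
```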

Prerequisites

  • Python >= 3.6, PyTorch >= 1.6
  • Requirements: opencv-python, tensorboardX, lmdb, rawpy, torchinterp1d
  • Platforms: Ubuntu 16.04, cuda-10.1

Notice that this codebase relies on my own customized rawpy, which provides more functionalities than the official one. It is released together with our datasets and pretrained models. To build rawpy from source, please first compile and install the LibRaw library following the official instructions, then run pip install -e . in the rawpy directory.

Quick Start

Due to the business license, we are unable to provide the noise model as well as the calibration method. Instead, we release our collected ELD dataset and our pretrained models to facilitate future research.

To reproduce our results presented in the paper (Tables 1 and 2), please take a look at scripts/test_SID.sh and scripts/test_ELD.sh.

Update (2022-01-08): We release the training code and the synthetic datasets per users' requests. The training scripts and user instructions can be found in scripts/train.sh. Additionally, we provide the baseline noise models (G/G+P/G+P*) and the calibrated noise parameters of all ELD cameras for training (see noise.py and train_syn.py), which could serve as a starting point for developing your own noise model.
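To give a rough feel for what such synthetic training data involves, below is a hedged sketch of a physics-based raw noise synthesis pipeline with shot, long-tailed read, row, and quantization components, following the decomposition described in the paper. All function and parameter names here are illustrative placeholders; noise.py and the released calibrated camera parameters are the authoritative implementation.

```python
import numpy as np
from scipy import stats

def synthesize_noisy_raw(clean, K=0.01, tl_lambda=-0.1, tl_scale=0.002,
                         sigma_row=0.001, q_step=1.0 / (2 ** 14)):
    """Hedged sketch of physics-based noise synthesis on a normalized raw image.

    clean is assumed to be a 2D array in [0, 1]; all parameter values are
    illustrative placeholders, not calibrated values.
    """
    h, w = clean.shape
    # 1) Shot noise: Poisson on photon counts, scaled back by the system gain K.
    noisy = np.random.poisson(clean / K).astype(np.float64) * K
    # 2) Read noise: long-tailed, modeled with a Tukey-lambda distribution.
    noisy += stats.tukeylambda.rvs(tl_lambda, loc=0.0, scale=tl_scale, size=(h, w))
    # 3) Row (banding) noise: one Gaussian offset shared by every pixel in a row.
    noisy += np.random.normal(0.0, sigma_row, size=(h, 1))
    # 4) Quantization noise: uniform within one ADC quantization step.
    noisy += np.random.uniform(-0.5 * q_step, 0.5 * q_step, size=(h, w))
    return np.clip(noisy, 0.0, 1.0).astype(np.float32)
```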

We use lmdb to prepare datasets; please refer to util/lmdb_data.py to see how we generate datasets from SID. We also provide a new implementation of the classic radiometric calibration method EMoR and utilize it to calibrate the CRF of the SonyA7S2, which can be further used to simulate a realistic on-board ISP as in the commercial SonyA7S2 camera.
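As a rough illustration of the LMDB workflow (the key layout and serialization below are assumptions for demonstration; util/lmdb_data.py is the authoritative script):

```python
import pickle
import lmdb

def write_lmdb(db_path, samples):
    """Hedged sketch: store (key, numpy array) pairs in an LMDB database.

    The key naming and serialization used by util/lmdb_data.py may differ;
    this only illustrates the general workflow.
    """
    env = lmdb.open(db_path, map_size=10 * 1024 ** 3)  # reserve up to 10 GB
    with env.begin(write=True) as txn:
        for key, arr in samples:
            # Pickle keeps the array's shape and dtype alongside the data.
            txn.put(key.encode('ascii'), pickle.dumps(arr))
    env.close()

def read_lmdb(db_path, key):
    env = lmdb.open(db_path, readonly=True, lock=False)
    with env.begin() as txn:
        arr = pickle.loads(txn.get(key.encode('ascii')))
    env.close()
    return arr
```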

ELD Dataset

The dataset capture protocol is shown as follows:

We choose three ISO settings (800, 1600, 3200) and four low-light factors (x1, x10, x100, x200) to capture the dataset (x1/x10 are not used in our paper). Image IDs 1, 6, 11, and 16 correspond to the long-exposure reference images. Please refer to the ELDEvalDataset class in data/sid_dataset.py for more details.
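Based on this description, a short-exposure capture can be paired with its long-exposure reference roughly as follows. This is a hedged sketch inferred from the ID layout above; ELDEvalDataset in data/sid_dataset.py remains the authoritative pairing logic.

```python
def reference_id(img_id):
    """Map a capture id (1-16) within an ELD scene to its long-exposure reference id.

    Assumes ids 1, 6, 11 and 16 are the references and each reference is followed
    by its short-exposure captures (an inference from the dataset description).
    """
    gt_ids = (1, 6, 11, 16)
    if img_id in gt_ids:
        return img_id
    # Nearest preceding reference: 2-5 -> 1, 7-10 -> 6, 12-15 -> 11.
    return ((img_id - 1) // 5) * 5 + 1
```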

Citation

If you find our code helpful in your research or work, please cite our paper.

@inproceedings{wei2020physics,
  title={A Physics-based Noise Formation Model for Extreme Low-light Raw Denoising},
  author={Wei, Kaixuan and Fu, Ying and Yang, Jiaolong and Huang, Hua},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition},
  year={2020},
}

@article{wei2021physics,
  title={Physics-based Noise Modeling for Extreme Low-light Photography},
  author={Wei, Kaixuan and Fu, Ying and Zheng, Yinqiang and Yang, Jiaolong},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2021},
  publisher={IEEE}
}

Contact

If you find any problem, please feel free to contact me (kxwei at princeton.edu, kaixuan_wei at bit.edu.cn). A brief self-introduction (including your name, affiliation, and position) is required if you would like to get in-depth help from me. I'd be glad to talk with you if more information (e.g. your personal website link) is attached. Note that I will not reply to any impolite/aggressive email that violates the above criteria.

Comments
  • Could you upload a dataset that includes the noisy images?


    Hi,

    Great work! Congratulations. I downloaded your ELD dataset but it doesn't contain the images processed by your noise model. Is it possible to also upload the noisy images?

    Thanks.

    opened by YifeiAI 16
  • ELD dataset description


    Hi,

    I downloaded your pretrained model and ELD dataset to learn more details of your paper.

    But I failed to run test_ELD.sh.

    I found that in the ELD dataset, every scene of a specific camera has 16 raw files, and some also have 16 corresponding RGB files.

    Shouldn't this be 3 (ISOs) * 2 (low-light ratios) = 6 examples?

    And I found img_ids_set = [[4, 9, 14], [5, 10, 15]] in test_ELD.py and gt_ids = np.array([1, 6, 11, 16]) in sid_dataset.py.

    I do not know exactly what these ids mean.

    opened by oneTaken 6
  • Training Settings of Paired-SID Baseline


    Hello and sorry for bothering you again :stuck_out_tongue_winking_eye: I'm trying to reproduce the Paired-SID baseline according to the training settings from your paper:

    Our implementation is based on PyTorch. We train the models with 200 epoch using L1 loss and Adam optimizer with 
    batch size 1. The learning rate is initially set to 10^-4, then halved at epoch 100, and finally reduced to 10^-5 
    at epoch 180.
    

    I tested the checkpoint you provided and it achieved 38.87 dB on SID-Sony (raw), while the model trained by myself only reaches around 36.80 dB under the same test settings. I launched another experiment with a larger batch size (16), a longer training schedule (about 600 epochs), and a more coarse-to-fine lr setting (1e-3 to 1e-6), and ended up with a model that is still worse than the baseline (~0.4 dB lower). I wonder if I missed any important settings. If possible, could you please share your training settings with me? Thanks a lot! 😄

    opened by YouCaiJun98 5
  • How can I know which demosaic method the camera uses?


    Hello, I have used Python to open the raw file, and I can use the rawpy class to obtain some parameters of the ISP process, like white balance and the CCM. However, it doesn't show which demosaicing method the camera uses. I want to implement the ISP process step by step, just like what your code does.

    opened by XinYu-Andy 4
  • Code release


    Hi,

    Congratulations on your paper. It presents interesting insights. Are you planning to release the code for noise modelling, and if so, could you let us know a tentative time/date for it?

    Thanks!

    opened by aasharma90 3
  • Relationship between "K" and ISO

    Thanks for sharing the code of the great works!

    Since I do not have exactly the same devices as the paper, could you share the ISO corresponding to the minimum and maximum of "K"?

    Thanks a lot

    opened by wyf0912 2
  • What's the effect of IlluminanceCorrect?


    Hello and thanks for your amazing job! I got quite confused about the effect of the IlluminanceCorrect function at https://github.com/Vandermode/ELD/blob/aa0edb44a8fc20e01f83c1f6e93ee70d3190e142/models/ELD_model.py#L139 It seems like some kind of normalization, yet I didn't figure out why it should be applied here. Could you please explain it to me? Thanks a lot :)

    opened by YouCaiJun98 2
  • Where do you multiply the luma ratio?


    In your dataset description, the reference frame has a proper exposure time, and a factor f in [100, 200] is used to decrease the exposure time to simulate a low-light situation.
    But I cannot find where f is multiplied in your paper, because there is obviously a luma difference in the raw pair.

    opened by oneTaken 2
  • The number and order of images in the scene-5 of ELD.7z are wrong


    There are 18 raw images in the SonyA7S2 sub-dataset and 17 raw images in the Canon700D sub-dataset. Since the evaluation code you released selects pictures based on their numbers, this error causes the data to be mismatched and then produces wrong evaluation results. Specifically, since the GT then corresponds to a very dark image, the corrector aligns the brightness towards black, resulting in the PSNR being pulled up to about 70 dB. I hope you can update ELD.7z and explain which version of the data should be used as a benchmark for future works to compare against.

    opened by fenghansen 1
  • Small question about this paper😊


    The short-exposure pictures of SID (See-in-the-Dark) you use in this paper are not the original images. Did you multiply the short-exposure pictures by a parameter to make them easier to display?

    opened by cuiziteng 1
  • Pretrained model missing?


    Hi, I got an error when running the command python test_ELD.py --name sid-paired -r -re 200 --no-verbose --chop (error screenshots omitted). Through debugging, I think there must be some pre-trained model files like "model.pt" in the checkpoint folders, but I don't see them. Otherwise, we would need a script to download such a pre-trained model, right?

    opened by ProNoobLi 1
  • White level for Canon data


    I use rawpy to read raw data from the Canon 70D and 700D, and find the max values are always less than 2^14 - 1 = 16383. May I know the white level of these two cameras?

    opened by watobe 1
  • About the color bias problem


    Hi, is the color bias related to the gain K? According to some experiments, as K gets bigger, the color deviation gets bigger. And for the camera params in this demo, as K gets a bigger value, the color bias is only sampled from some given values, is that right? How should the color bias be sampled for a bigger K?

    opened by fm123a 5
  • Questions about released camera_params


    Hi, I have some questions about the released camera parameters. For example, NikonD850_params.npy has key names like Profile-1 (G_scale, g_scale, R_scale), Profile-2, and G_shape. I don't know which parameters listed in Table 1 of the paper they correspond to.

    Another question: are these noise parameters the same for the R, G, and B channels of the raw data (like the system gain K, the Tukey lambda shape and sigma, and the row noise sigma)?

    opened by sky135410 10
  • Artifacts in the synthetic low light clean image


    Hi, I got artifacts when generating synthetic low-light clean images. According to your paper, the fake low-light clean images = long-exposure images / ratio, while actually such an operation (large integers are divided by the ratio and then rounded back to integers) squeezes the range of the values, which loses accuracy and generates "non-continuous steps" in the image, making it feel like an HDR image displayed on an 8-bit screen. (Screenshots: original long-exposure image; synthesized low-light clean image after auto-brightness for imshow; original low-light noisy image; synthesized noisy image based on the "non-continuous" low-light clean image.)

    How do you fix the artifacts?

    opened by ProNoobLi 2
  • Possibility to estimate scale parameter directly from probability plot?


    Hi, for the normal distribution, the intercept of the line in the quantile-quantile plot is the mean, while the slope is the std of the distribution. However, the Tukey lambda is an approximating distribution with a specific lambda and doesn't have a closed-form variance parameter like the normal distribution has. Thus, how can we derive the scale mathematically?

    opened by ProNoobLi 8