Overview

STAR-pytorch

Implementation for paper "STAR: A Structure-aware Lightweight Transformer for Real-time Image Enhancement" (ICCV 2021).

CVF (pdf)

STAR-DCE

The PyTorch implementation of low-light enhancement with STAR on the MIT-Adobe FiveK dataset; you can find it in the STAR-DCE directory. Here we adopt the pipeline of Zero-DCE ( paper | code ), replacing only the CNN backbone with STAR. In Zero-DCE, the network regresses a group of curves for each image, which are then applied to the source image iteratively; a minimal sketch of this curve application is given below. You can find more details in the original Zero-DCE repository.
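
The curve application itself is lightweight: each iteration adjusts every pixel with a quadratic curve whose parameters are predicted by the backbone. Below is a minimal sketch of that step, assuming one 3-channel curve-parameter map per iteration; the function name and tensor layout are illustrative, not the exact interface of this repository.

import torch

def apply_curves(x, curve_maps):
    # x: low-light input in [0, 1], shape (B, 3, H, W)
    # curve_maps: iterable of per-pixel curve parameter maps, each shaped like x
    # Each iteration applies the quadratic curve LE(x) = x + A * x * (1 - x),
    # which brightens or darkens each pixel while staying in [0, 1] when A is in [-1, 1].
    for A in curve_maps:
        x = x + A * x * (1 - x)
    return x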

Requirements

  • numpy
  • einops
  • torch
  • torchvision
  • opencv
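
These can typically be installed with pip; note that the OpenCV Python bindings are published on PyPI as opencv-python:

pip install numpy einops torch torchvision opencv-python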

Datasets

We provide download links for the MIT-Adobe FiveK dataset we used ( train | test ). Please note that we adopt the test split used by DeepUPE for a fair comparison.

Training DCE models

To train the original STAR-DCE model, run:

cd STAR-DCE
python train_dce.py \
  --lowlight_images_path "dir-to-your-training-set" \
  --parallel True \
  --snapshots_folder snapshots/STAR-ori \
  --lr 0.001 \
  --num_epochs 100 \
  --lr_type cos \
  --train_batch_size 32 \
  --model STAR-DCE-Ori \
  --snapshot_iter 10 \
  --num_workers 32
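
The --lr_type cos flag selects a cosine learning-rate schedule. For reference, a plain cosine decay from the initial learning rate to zero over the training run can be written as below; this is a generic sketch, not necessarily the exact schedule implemented in train_dce.py.

import math

def cosine_lr(base_lr, epoch, num_epochs):
    # Anneal the learning rate from base_lr down to 0 along a half cosine.
    return base_lr * 0.5 * (1 + math.cos(math.pi * epoch / num_epochs))

# With --lr 0.001 and --num_epochs 100, epoch 50 gives 0.0005.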

To train the baseline CNN-based DCE-Net (with or without pooling), run:

cd STAR-DCE
python train_dce.py \
  --lowlight_images_path "dir-to-your-training-set" \
  --parallel True \
  --snapshots_folder snapshots/DCE \
  --lr 0.001 \
  --num_epochs 100 \
  --lr_type cos \
  --train_batch_size 32 \
  --model DCE-Net \
  --snapshot_iter 10 \
  --num_workers 32

or

cd STAR-DCE
python train_dce.py \
  --lowlight_images_path "dir-to-your-training-set" \
  --parallel True \
  --snapshots_folder snapshots/DCE-Pool \
  --lr 0.001 \
  --num_epochs 100 \
  --lr_type cos \
  --train_batch_size 32 \
  --model DCE-Net-Pool \
  --snapshot_iter 10 \
  --num_workers 32

Evaluation of trained models

To evaluate the STAR-DCE model you trained, run:

cd STAR-DCE
python test_dce.py \
  --lowlight_images_path "dir-to-your-test-set" \
  --parallel True \
  --snapshots_folder snapshots_test/STAR-DCE \
  --val_batch_size 1 \
  --pretrain_dir snapshots/STAR-ori/Epoch_best.pth \
  --model STAR-DCE-Ori

To evaluate the DCE-Net model you trained, run:

cd STAR-DCE
python test_dce.py \
  --lowlight_images_path "dir-to-your-test-set" \
  --parallel True \
  --snapshots_folder snapshots_test/DCE \
  --val_batch_size 1 \
  --pretrain_dir snapshots/DCE/Epoch_best.pth \
  --model DCE-Net
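
The test script evaluates the trained model on the test set. If you want to check the numbers yourself, PSNR against the expert-retouched reference is the standard metric on FiveK; a minimal sketch for computing it from saved 8-bit RGB outputs (the file paths below are placeholders) is:

import cv2
import numpy as np

def psnr(pred, target):
    # Both inputs are uint8 arrays of identical shape; higher is better.
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

pred = cv2.imread("results/0001.png")   # enhanced output (placeholder path)
target = cv2.imread("gt/0001.png")      # reference image (placeholder path)
print(psnr(pred, target))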

Citation

If this code helps your research, please cite our paper :)

@inproceedings{zhang2021star,
  title={STAR: A Structure-Aware Lightweight Transformer for Real-Time Image Enhancement},
  author={Zhang, Zhaoyang and Jiang, Yitong and Jiang, Jun and Wang, Xiaogang and Luo, Ping and Gu, Jinwei},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={4106--4115},
  year={2021}
}