HDR Video Reconstruction: A Coarse-to-fine Network and A Real-world Benchmark Dataset (ICCV 2021)

Guanying Chen, Chaofeng Chen, Shi Guo, Zhetong Liang, Kwan-Yee K. Wong, Lei Zhang

Overview

We provide testing and training code. Details of the training and testing datasets can be found in DeepHDRVideo-Dataset. The datasets and the trained models can be downloaded from Google Drive or BaiduYun (TODO).

Dependencies

This model is implemented in PyTorch and tested on Ubuntu (14.04 and 16.04) and CentOS 7.

  • Python 3.7
  • PyTorch 1.1.0 and torchvision 0.3.0

We highly recommend using Anaconda and creating a new environment to run this code. The following is an example procedure for installing the dependencies.

# Create a new python3.7 environment named hdr
conda create -n hdr python=3.7

# Activate the created environment
source activate hdr

pip install -r requirements.txt

# Build the deformable convolution layer, tested with PyTorch 1.1, g++ 5.5, and CUDA 9.0
cd extensions/dcn/
python setup.py develop
# Please refer to https://github.com/xinntao/EDVR if you have difficulty in building this module
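
After installation, you can quickly check that the environment is usable. The snippet below is only a minimal sanity check and makes one assumption: the name of the compiled DCN module (deform_conv_cuda here) follows the EDVR convention, so adjust the import to whatever extensions/dcn/setup.py actually builds on your machine.

# check_env.py -- minimal environment sanity check (illustrative sketch)
import torch
import torchvision

print('PyTorch:', torch.__version__)            # expected to be close to 1.1.0
print('torchvision:', torchvision.__version__)  # expected to be close to 0.3.0
print('CUDA available:', torch.cuda.is_available())

# The import below assumes an EDVR-style module name; change it if your
# build of extensions/dcn/ installs the extension under a different name.
try:
    import deform_conv_cuda  # assumed module name
    print('DCN extension: OK')
except ImportError as err:
    print('DCN extension not built:', err)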

Testing

Please first go through DeepHDRVideo-Dataset to familiarize yourself with the testing dataset.

The trained models can be found in Google Drive (Models/). Download and place them in data/models/.

Testing on the synthetic test dataset

The synthetic test dataset can be found in Google Drive (/Synthetic_Dataset/HDR_Synthetic_Test_Dataset.tgz). Download and unzip it to data/. Note that we do not perform global motion alignment for this synthetic dataset.

# Test our method on two-exposure data. Results can be found in data/models/CoarseToFine_2Exp/
python run_model.py --gpu_ids 0 --model hdr2E_flow2s_model \
    --benchmark syn_test_dataset --bm_dir data/HDR_Synthetic_Test_Dataset \
    --mnet_name weight_net --mnet_checkp data/models/CoarseToFine_2Exp/weight_net.pth --fnet_checkp data/models/CoarseToFine_2Exp/flow_net.pth --mnet2_checkp data/models/CoarseToFine_2Exp/refine_net.pth

# Test our method on three-exposure data. The results can be found in data/models/CoarseToFine_3Exp/
python run_model.py --gpu_ids 0 --model hdr3E_flow2s_model \
    --benchmark syn_test_dataset --bm_dir data/HDR_Synthetic_Test_Dataset \
    --mnet_name weight_net --mnet_checkp data/models/CoarseToFine_3Exp/weight_net.pth --fnet_checkp data/models/CoarseToFine_3Exp/flow_net.pth --mnet2_checkp data/models/CoarseToFine_3Exp/refine_net.pth
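
To quickly inspect a reconstructed frame, you can load it with OpenCV. This is only an illustrative sketch: it assumes the results are saved as Radiance .hdr files and uses a placeholder file name, so adapt the path and extension to whatever your run actually writes to data/models/CoarseToFine_2Exp/ or data/models/CoarseToFine_3Exp/.

# inspect_hdr.py -- load and inspect one reconstructed HDR frame (illustrative sketch)
import cv2
import numpy as np

path = 'data/models/CoarseToFine_2Exp/frame_0001.hdr'  # placeholder file name
hdr = cv2.imread(path, cv2.IMREAD_ANYDEPTH | cv2.IMREAD_ANYCOLOR)  # float32, BGR
print('shape:', hdr.shape, 'dtype:', hdr.dtype)
print('value range: [%.4f, %.4f]' % (hdr.min(), hdr.max()))

# Quick gamma preview for display only (not the tonemapping used in the paper)
preview = np.clip(hdr / (hdr.max() + 1e-8), 0, 1) ** (1 / 2.2)
cv2.imwrite('preview.png', (preview * 255).astype(np.uint8))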

Testing on the TOG13 dataset

Please download this dataset from TOG13_Dynamic_Dataset.tgz and unzip it to data/. Normally, when testing on a video, we first have to compute the similarity transformation matrices between neighboring frames using the following commands (a rough OpenCV sketch of this step is given at the end of this subsection).

# However, this step is optional, as the downloaded dataset already contains the required transformation matrices for each scene in Affine_Trans_Matrices/.
python utils/compute_nbr_trans_for_video.py --in_dir data/TOG13_Dynamic_Dataset/ --crf data/TOG13_Dynamic_Dataset/BaslerCRF.mat --scene_list 2Exp_scenes.txt
python utils/compute_nbr_trans_for_video.py --in_dir data/TOG13_Dynamic_Dataset/ --crf data/TOG13_Dynamic_Dataset/BaslerCRF.mat --scene_list 3Exp_scenes.txt
# Test our method on two-exposure data. The results can be found in data/models/CoarseToFine_2Exp/
# Specify the testing scene with --test_scene. Available options are Ninja-2Exp-3Stop WavingHands-2Exp-3Stop Skateboarder2-3Exp-2Stop ThrowingTowel-2Exp-3Stop 
python run_model.py --gpu_ids 0 --model hdr2E_flow2s_model \
    --benchmark tog13_online_align_dataset --bm_dir data/TOG13_Dynamic_Dataset --test_scene ThrowingTowel-2Exp-3Stop --align \
    --mnet_name weight_net --fnet_checkp data/models/CoarseToFine_2Exp/flow_net.pth --mnet_checkp data/models/CoarseToFine_2Exp/weight_net.pth --mnet2_checkp data/models/CoarseToFine_2Exp/refine_net.pth
# To test on a specific scene, you can use the --test_scene argument, e.g., "--test_scene ThrowingTowel-2Exp-3Stop".

# Test our method on three-exposure data. The results can be found in data/models/CoarseToFine_3Exp/
# Specify the testing scene with --test_scene. Available options are Cleaning-3Exp-2Stop Dog-3Exp-2Stop CheckingEmail-3Exp-2Stop Fire-2Exp-3Stop
python run_model.py --gpu_ids 0 --model hdr3E_flow2s_model \
    --benchmark tog13_online_align_dataset --bm_dir data/TOG13_Dynamic_Dataset --test_scene Dog-3Exp-2Stop --align \
    --mnet_name weight_net --fnet_checkp data/models/CoarseToFine_3Exp/flow_net.pth --mnet_checkp data/models/CoarseToFine_3Exp/weight_net.pth --mnet2_checkp data/models/CoarseToFine_3Exp/refine_net.pth 
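
For reference, the per-frame similarity transforms computed by utils/compute_nbr_trans_for_video.py can be approximated with OpenCV roughly as follows. This is only an illustrative sketch (feature matching plus a RANSAC similarity fit); the actual script additionally handles the alternating exposures, e.g. by bringing the frames to a common exposure with the CRF before matching.

# sketch: estimate a similarity transform between two neighboring grayscale frames
import cv2
import numpy as np

def estimate_similarity(ref_gray, nbr_gray):
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(ref_gray, None)
    kp2, des2 = orb.detectAndCompute(nbr_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]
    dst = np.float32([kp1[m.queryIdx].pt for m in matches])  # points in the reference frame
    src = np.float32([kp2[m.trainIdx].pt for m in matches])  # points in the neighboring frame
    # 2x3 similarity transform (rotation + uniform scale + translation), robust to outliers
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return M  # apply with cv2.warpAffine(nbr, M, (width, height))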

Testing on the captured static dataset

The global motion augmented static dataset can be found in Google Drive (/Real_Dataset/Static/).

# Test our method on two-exposure data. Download static_RGB_data_2exp_rand_motion_release.tgz and unzip to data/
# Results can be found in data/models/CoarseToFine_2Exp/
python run_model.py --gpu_ids 0 --model hdr2E_flow2s_model \
    --benchmark real_benchmark_dataset --bm_dir data/static_RGB_data_2exp_rand_motion_release --test_scene all \
    --mnet_name weight_net --mnet_checkp data/models/CoarseToFine_2Exp/weight_net.pth --fnet_checkp data/models/CoarseToFine_2Exp/flow_net.pth --mnet2_checkp data/models/CoarseToFine_2Exp/refine_net.pth

# Test our method on three-exposure data. Download static_RGB_data_3exp_rand_motion_release.tgz and unzip to data/
# The results can be found in data/models/CoarseToFine_3Exp/
python run_model.py --gpu_ids 0 --model hdr3E_flow2s_model \
    --benchmark real_benchmark_dataset --bm_dir data/static_RGB_data_3exp_rand_motion_release --test_scene all \
    --mnet_name weight_net --mnet_checkp data/models/CoarseToFine_3Exp/weight_net.pth --fnet_checkp data/models/CoarseToFine_3Exp/flow_net.pth --mnet2_checkp data/models/CoarseToFine_3Exp/refine_net.pth

Testing on the captured dynamic with GT dataset

The dynamic with GT dataset can be found in Google Drive (/Real_Dataset/Dynamic/).

# Test our method on two-exposure data. Download dynamic_RGB_data_2exp_release.tgz and unzip to data/
python run_model.py --gpu_ids 0 --model hdr2E_flow2s_model \
    --benchmark real_benchmark_dataset --bm_dir data/dynamic_RGB_data_2exp_release --test_scene all \
    --mnet_name weight_net  --fnet_checkp data/models/CoarseToFine_2Exp/flow_net.pth --mnet_checkp data/models/CoarseToFine_2Exp/weight_net.pth --mnet2_checkp data/models/CoarseToFine_2Exp/refine_net.pth

# Test our method on three-exposure data. Download dynamic_RGB_data_3exp_release.tgz and unzip to data/
python run_model.py --gpu_ids 0 --model hdr3E_flow2s_model \
    --benchmark real_benchmark_dataset --bm_dir data/dynamic_RGB_data_3exp_release --test_scene all \
    --mnet_name weight_net  --fnet_checkp data/models/CoarseToFine_3Exp/flow_net.pth --mnet_checkp data/models/CoarseToFine_3Exp/weight_net.pth --mnet2_checkp data/models/CoarseToFine_3Exp/refine_net.pth

Testing on the customized dataset

You have two options to test our method on your own dataset. The first option is to implement a customized Dataset class to load your data, which should not be difficult; please refer to datasets/tog13_online_align_dataset.py for an example (a minimal skeleton is also sketched at the end of this subsection).

If you don't want to implement your own Dataset class, you may reuse datasets/tog13_online_align_dataset.py. In that case, you first have to arrange your data in the same way as the TOG13 dataset, and then run utils/compute_nbr_trans_for_video.py to compute the similarity transformation matrices between neighboring frames so that global alignment can be enabled.

# Use a gamma curve if you do not know the camera response function
python utils/compute_nbr_trans_for_video.py --in_dir /path/to/your/dataset/ --crf gamma --scene_list your_scene_list
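
For the first option, a customized Dataset class mainly needs to return the neighboring LDR frames and their exposures. The skeleton below is only an illustrative sketch under assumed conventions (the *.tif file pattern, an Exposures.txt file, and the dictionary keys are placeholders); check datasets/tog13_online_align_dataset.py for the exact keys and preprocessing that run_model.py expects.

# my_video_dataset.py -- skeleton of a customized Dataset (illustrative sketch)
import glob
import os
import cv2
import numpy as np
import torch
from torch.utils.data import Dataset

class MyVideoDataset(Dataset):
    def __init__(self, root, nframes=3, nexps=2):
        self.paths = sorted(glob.glob(os.path.join(root, '*.tif')))
        # assumed file listing the exposure values (in stops) of the alternating frames
        self.stops = np.loadtxt(os.path.join(root, 'Exposures.txt'))
        self.nframes, self.nexps = nframes, nexps

    def __len__(self):
        return len(self.paths) - self.nframes + 1

    def __getitem__(self, idx):
        ldrs, expos = [], []
        for i in range(idx, idx + self.nframes):
            img = cv2.imread(self.paths[i], cv2.IMREAD_UNCHANGED)[:, :, ::-1]  # BGR -> RGB
            img = img.astype(np.float32) / np.iinfo(img.dtype).max
            ldrs.append(torch.from_numpy(img.copy()).permute(2, 0, 1))
            expos.append(2.0 ** self.stops[i % self.nexps])
        # the key names are assumptions; mirror those used in tog13_online_align_dataset.py
        return {'ldrs': ldrs, 'expos': torch.tensor(expos, dtype=torch.float32)}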

HDR evaluation metrics

We evaluate the PSNR, HDR-VDP, and HDR-VQM metrics using Matlab code. Please first install the HDR Toolbox to read HDR images. Then set the paths of the ground-truth HDR and the estimated HDR in matlab/config_eval.m. Finally, run main_eval.m in the Matlab console from the matlab/ directory.

main_eval(2, 'Ours')
main_eval(3, 'Ours')
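
The numbers reported in the paper come from the Matlab scripts above. If you only want a quick sanity check before running the full evaluation, a plain PSNR between a predicted and a ground-truth HDR frame can be computed in Python as sketched below (the file names are placeholders; this is not a substitute for the HDR-VDP/HDR-VQM evaluation).

# quick_psnr.py -- rough PSNR sanity check on HDR frames (illustrative sketch)
import cv2
import numpy as np

def psnr(pred, gt, peak=1.0):
    mse = np.mean((pred - gt) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

flags = cv2.IMREAD_ANYDEPTH | cv2.IMREAD_ANYCOLOR
pred = cv2.imread('pred.hdr', flags)  # placeholder paths
gt = cv2.imread('gt.hdr', flags)

# PSNR in the linear domain, and after mu-law tonemapping (mu = 5000),
# which is a common way to report PSNR for HDR images.
mu = 5000.0
tm = lambda x: np.log(1.0 + mu * np.clip(x, 0, 1)) / np.log(1.0 + mu)
print('PSNR-L: %.2f dB' % psnr(pred, gt))
print('PSNR-T: %.2f dB' % psnr(tm(pred), tm(gt)))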

Tonemapping

All visual results in the experiments are tonemapped using Reinhard et al.'s method. Please first install luminance-hdr-cli. On Ubuntu, you may use sudo apt-get install -y luminance-hdr to install it. Then you can use the following command to produce the tonemapped results.

python utils/tonemapper.py -i /path/to/HDR/
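
If luminance-hdr-cli is not available, a global Reinhard operator can be approximated in Python as sketched below. This is only a rough approximation for previewing results; the figures in the paper were produced with luminance-hdr, so the output will not match exactly.

# simple_reinhard.py -- approximate global Reinhard tonemapping (illustrative sketch)
import cv2
import numpy as np

hdr = cv2.imread('result.hdr', cv2.IMREAD_ANYDEPTH | cv2.IMREAD_ANYCOLOR)  # placeholder path
lum = 0.2126 * hdr[..., 2] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 0]   # luminance (BGR order)

key, eps = 0.18, 1e-6
log_avg = np.exp(np.mean(np.log(lum + eps)))   # log-average luminance of the frame
scaled = key / log_avg * lum
lum_tm = scaled / (1.0 + scaled)               # Reinhard global operator

ldr = hdr * (lum_tm / (lum + eps))[..., None]  # rescale the color channels
ldr = np.clip(ldr, 0, 1) ** (1 / 2.2)          # gamma for display
cv2.imwrite('result_tm.png', (ldr * 255).astype(np.uint8))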

Precomputed Results

The precomputed results can be found in Google Drive (/Results) (TODO).

Training

The training process is described in docs/training.md.

License

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Citation

If you find this code useful in your research, please consider citing:

@inproceedings{chen2021hdr,
  title={{HDR} Video Reconstruction: A Coarse-to-fine Network and A Real-world Benchmark Dataset},
  author={Chen, Guanying and Chen, Chaofeng and Guo, Shi and Liang, Zhetong and Wong, Kwan-Yee K and Zhang, Lei},
  booktitle={International Conference on Computer Vision (ICCV)},
  year={2021}
}
Comments
  • Global alignment

    Hello. According to your paper, global alignment is performed using a similarity transformation in the preprocessing stage. I wonder whether this global alignment is also performed during the training phase in the code, because I couldn't find it. Thank you!

    opened by haesoochung 15
  • How to test CoarseNet (kalantari's model) result?

    Thanks to the author for the code; it's very good work. In the process of reproducing the results, I want to visualize the output of CoarseNet, but I don't know how to do it.

    opened by nono0822 9
  • About the comparison between HDRVideo and AHDRNet

    Hello, this work is excellent and the algorithm performs very well. One of the comparison methods mentioned in your paper is AHDRNet, a multi-exposure image fusion algorithm proposed in 2019. I would like to ask how you trained AHDRNet on the dataset of this HDRVideo work (I noticed that the training dataset code of this paper applies a tone perturbation to the reference frame; is this operation also needed when training AHDRNet?). I trained AHDRNet with the dataset processing code provided in this work, but the results were not satisfactory: on the synthetic test set (two-exposure case, for example) it is far from reaching PSNR = 39.05, especially when the reference frame is the high-exposure frame, and the videos reconstructed by the algorithm are also not satisfactory. Would it be convenient for you to share the training code you used to reproduce AHDRNet? Thank you very much!

    opened by syujung 8
  • The reproduced performance is lower than the performance reported in the paper

    @guanyingc

    I tried to reproduce the performance (PSNR) on the real-world dataset with your released code and models. However, I found that the reproduced performance is lower than that reported in the paper, for both static and dynamic scenes.

    I am sure that I did not modify any code. Is it normal to get such results?

    Thanks a lot and hope for your quick reply.

    opened by wzhouxiff 5
  • Save checkpoint and make_dir problem

    Hi, thanks to the author for the code; it really helps me a lot. When retraining the model, we set make_dir to True and only modified dataset_dir in train_opt.py. An error occurred when creating the saving folder. The error is as follows:

    Traceback (most recent call last):
      File "main.py", line 8, in <module>
        log = logger.Logger(args)
      File "I:\DeepHDRVideo-master\utils\logger.py", line 29, in __init__
        self._setup_dirs(args)
      File "I:\DeepHDRVideo-master\utils\logger.py", line 76, in _setup_dirs
        self.log_fie = open(file_dir, 'w')
    FileNotFoundError: [Errno 2] No such file or directory: 'logdir/syn_vimeo_dataset\ICCV\11-28,spynet_2triple,weight_net,LReLU,hdr3E_flow_model,kaiming,l1,cm_d-256,cr_h-256,ht_r-0.9,ba_h-16,in_r-0.0001,in_ldr,sc_h-320,concat\11-28,spynet_2triple,weight_net,LReLU,hdr3E_flow_model,kaiming,l1,cm_d-256,cr_h-256,ht_r-0.9,ba_h-16,in_r-0.0001,in_ldr,sc_h-320,concat,14:47:07'

    Are my settings wrong? I hope to get your help, thank you!

    opened by YingJieYang1997 4
  • About Vimeo-90k dataset utilized for training

    Dear author,

    I have some questions and hope to get some suggestions from you.

    1. Have you ever tested models trained without the Vimeo-90k data? Is your method still better than Kalantari's in that case? I just want to know how effective training with Vimeo is.
    2. I found that the ground truth of the Vimeo data is generated by converting each single LDR image to a linear HDR image. I'm a little worried about whether this data can work and whether it is meaningful (because the ground truth of some real-world datasets is generated by merging multi-exposure still images, as in [1] and your paper).

    Looking forward to your reply!

    Reference: [1] Kalantari, Nima Khademi, and Ravi Ramamoorthi. "Deep high dynamic range imaging of dynamic scenes." ACM Trans. Graph. 36.4 (2017): 144-1.

    opened by Rhysess 4
  • About the Tog13 datasets

    Dear Guanying, the TOG13 dataset file you provided on Google Drive already contains HDR images. I want to know whether those HDR images were predicted by the network you proposed in the paper.

    opened by syujung 4
  • Some questions about HDR-VDP-2.

    Thanks for releasing the code and dataset! I have some questions about HDR-VDP-2: First, I found that you adopted 'sRGB-display' as the mode of HDR-VDP-2. I think this mode is for inputting images in the pixel domain. In this mode, HDR-VDP-2 will map input images from the pixel domain to the linear domain. However, the input images in your code are normalized HDR (linear) images. Second, other modes in HDR-VDP-2 require absolute luminance values rather than normalized values. I wonder how to compute this metric. Thank you!

    opened by Tx000 4
  • About the dataset named Vimeo 90K

    Dear Guanying,

    I have a question about the datasets. As noted in the paper, you used Vimeo-90K to train the network. Vimeo-90K is a very big dataset, and I want to know whether you used all 91,701 preprocessed 7-frame clips to train the network, or just a part of Vimeo-90K. In addition, what is the performance of the algorithm if we don't use Vimeo to train the network? I wonder if the performance will degrade a lot.

    Looking forward to your reply!

    opened by syujung 2
  • About comparison with Kalantari's method

    Dear Guanying,

    I have a question about the comparative experiment, and I hope you could give some suggestions.

    You report results of your method and Kalantari19's method for comparison. However, in Kalantari19's paper they said they "use mini-batches of size 10 and perform training for 60,000 iterations" without specifying a concrete number of epochs. So I wonder how you trained their method and how many epochs you used when training Kalantari19's method on your dataset.

    Looking forward to your reply. Thanks.

    opened by Rhysess 2
  • cudnn error

    Hi, many thanks for your work and code. I tested the pretrained model on GPU and it works, but when I tried to retrain the model I got a cuDNN error like:

      File "main.py", line 33, in <module>
        main(args)
      File "main.py", line 21, in main
        train_utils.train(args, log, train_loader, model, epoch, recorder)
      File "/media/ye/ssd11/DeepHDRVideo-master/utils/train_utils.py", line 17, in train
        pred = model.forward(split='train');
      File "/media/ye/ssd11/DeepHDRVideo-master/models/hdr2E_flow_model.py", line 93, in forward
        self.fpred = self.fnet(fnet_in)
      File "/home/ye/.opt/miniconda/envs/torch11/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/ye/.opt/miniconda/envs/torch11/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 150, in forward
        return self.module(*inputs[0], **kwargs[0])
      File "/home/ye/.opt/miniconda/envs/torch11/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
        result = self.forward(*input, **kwargs)
      File "/media/ye/ssd11/DeepHDRVideo-master/models/archs/flow_networks.py", line 94, in forward
        flow1, flow2 = self.moduleBasici
      File "/home/ye/.opt/miniconda/envs/torch11/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
        result = self.forward(*input, **kwargs)
      File "/media/ye/ssd11/DeepHDRVideo-master/models/archs/flow_networks.py", line 33, in forward
        flows = self.moduleBasic(inputs).clamp(-320, 320)
      File "/home/ye/.opt/miniconda/envs/torch11/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/ye/.opt/miniconda/envs/torch11/lib/python3.7/site-packages/torch/nn/modules/container.py", line 92, in forward
        input = module(input)
      File "/home/ye/.opt/miniconda/envs/torch11/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/ye/.opt/miniconda/envs/torch11/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 338, in forward
        self.padding, self.dilation, self.groups)
    RuntimeError: cuDNN error: CUDNN_STATUS_EXECUTION_FAILED

    I checked the code and there is no setting for cudnn, so could you help me with this problem? Many thanks.

    opened by yarqian 2
  • How to load the checkpoint

    Dear author

    I forgot to set the model to save mnet2_15.pth; it only saved the checkpoint for the 14th epoch. So I changed start_epoch to 14 and added the path to the 14th epoch's checkpoint, but it doesn't seem to work. What should I do to load the checkpoint in training stage 2?

    opened by K1NSA 2
  • Result on test sequence Carousel_fireworks

    Hi,

    I noticed in one issue that you think the Vimeo-90k dataset is not perfect but is effective for obtaining higher performance. So now I'm preparing to train with and without it to see the difference, and considering how to make another suitable synthetic dataset.

    However, after training without Vimeo-90k, I found that results on the test sequence Carousel_fireworks are obviously lower than on other sequences. PSNR on Carousel_fireworks is 31.62, while PSNR on Poker_fullshot is 41.31 (two alternating-exposure inputs). Although training with Vimeo-90k is not finished yet, I suspect the PSNR on Carousel_fireworks will still be bad.

    So do you also get a bad PSNR score on Carousel_fireworks in your work? And I also wonder about the reason.

    opened by yorunouta 5
  • Flickering problem in 3-exposure-3stop video

    Hi, I'm testing your code on my own datasets, captured with a Canon and preprocessed as required. In particular, I'm using sequences of three alternating exposures. When I have a configuration {EV-2, EV+0, EV+2, ...}, your code works fine. Instead, when I have a configuration {EV-3, EV+0, EV+3, ...}, I get 'periodic' flickering: i.e., if H_i is an HDR frame, it doesn't match H_{i+1} and H_{i+2} (flickering), but it matches H_{i+3}. Why do I get this flickering problem? I attach two HDR videos of a static scene to show this. Below I specify the exposure times (in seconds). Aperture and ISO are constant in both (ISO 800, f/3.5). 1- First HDR video: {EV-2, EV+0, EV+2, ...}, exposure times {1/320, 1/80, 1/20} 2- Second HDR video: {EV-3, EV+0, EV+3, ...}, exposure times {1/640, 1/80, 1/10}

    https://user-images.githubusercontent.com/114491838/194347916-ccb60538-4562-4dd9-84ec-333cf1938cd3.mov

    https://user-images.githubusercontent.com/114491838/194347942-72cdfb11-d243-4c94-8f43-5bbce5860631.mov

    opened by xliuk 12