Official-PyTorch-Implementation-of-TransMEF
Official PyTorch implementation of our AAAI22 paper: TransMEF: A Transformer-Based Multi-Exposure Image Fusion Framework via Self-Supervised Multi-Task Learning. Code will be available soon.
Hi,
While trying to run the following command:
python fusion_gray_TransMEF.py --model_path ./best_model.pth --test_path ./MEFB_dataset_example/grayscale --result_path transMEF_results
with the provided pretrained model, I get a runtime error reporting many missing keys in the state dict.
Could you please check whether the released code matches the provided pretrained model?
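A common way to debug this kind of mismatch is to inspect how the checkpoint's keys are named. The sketch below is only a diagnostic suggestion; it assumes the checkpoint may have been saved from an nn.DataParallel wrapper (which prefixes every key with "module."), and `model` stands for whatever network fusion_gray_TransMEF.py constructs:

    import torch

    ckpt = torch.load("best_model.pth", map_location="cpu")
    state = ckpt.get("model", ckpt)    # some checkpoints nest the weights under a key
    print(list(state)[:10])            # inspect how the saved keys are named

    # Checkpoints saved from nn.DataParallel prefix every key with "module.";
    # stripping that prefix often resolves "missing keys" errors:
    state = {k.replace("module.", "", 1): v for k, v in state.items()}
    # model.load_state_dict(state, strict=False)  # 'model' = network from fusion_gray_TransMEF.py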
Hello, sorry to bother you, and thank you very much for sharing the code for this work. I have a small question: the code page says the evaluation was done with the toolkit provided by the MEFB authors, and the MEFB authors also provide the fusion results of the compared methods. I ran those fusion results through the MEFB toolkit myself and found some discrepancies with the numbers in your paper; for example, I obtained a PSNR of 56.5940 for MEFNet, whereas the TransMEF paper reports 52.9449 for MEFNet. Could you tell me what might cause this difference?
Hello, and thank you very much for answering the previous issue! Sorry to bother you again: in my tests I found that the images fused with "fusion_arbitary_size_TransMEF_gray.py" differ from those fused with "fusion_gray_TransMEF.py", and the sliding-window method produces images of noticeably worse quality. Both runs use the same data and the same model.
Thanks for making the code public. Could you please also provide the inverse of the YCbCr2RGB function, i.e. the RGB2YCbCr routine you used to convert the RGB images into YCbCr?
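Until the authors publish their routine, a standard pair of conversions using the full-range ITU-R BT.601 (JPEG) coefficients looks like this; note the repository's version may use different coefficients or scaling:

    import numpy as np

    def RGB2YCbCr(img):
        """RGB (H, W, 3) in [0, 255] -> YCbCr, full-range BT.601."""
        img = img.astype(np.float64)
        R, G, B = img[..., 0], img[..., 1], img[..., 2]
        Y  =  0.299 * R + 0.587 * G + 0.114 * B
        Cb = -0.168736 * R - 0.331264 * G + 0.5 * B + 128.0
        Cr =  0.5 * R - 0.418688 * G - 0.081312 * B + 128.0
        return np.stack([Y, Cb, Cr], axis=-1)

    def YCbCr2RGB(img):
        """Exact inverse of RGB2YCbCr above (same coefficients)."""
        img = img.astype(np.float64)
        Y, Cb, Cr = img[..., 0], img[..., 1] - 128.0, img[..., 2] - 128.0
        R = Y + 1.402 * Cr
        G = Y - 0.344136 * Cb - 0.714136 * Cr
        B = Y + 1.772 * Cb
        return np.clip(np.stack([R, G, B], axis=-1), 0, 255).astype(np.uint8)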
Thanks for sharing the code. Training runs normally, but there is a problem at test time: running the test code raises an error. The test image should be cropped into 256*256 blocks and then reassembled to the original image size, but I could not find the corresponding code. Will the code that restores the original image size be made public?
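For anyone blocked on this step, a minimal sketch of non-overlapping 256x256 tiling with reflect padding and reassembly follows; the helper names tile_image/untile_image are hypothetical, not from the repository:

    import numpy as np

    PATCH = 256

    def tile_image(img):
        """Pad a grayscale image (H, W) to multiples of PATCH, then cut 256x256 tiles."""
        h, w = img.shape
        ph, pw = -h % PATCH, -w % PATCH          # padding needed on bottom/right
        padded = np.pad(img, ((0, ph), (0, pw)), mode="reflect")
        tiles = [padded[y:y + PATCH, x:x + PATCH]
                 for y in range(0, padded.shape[0], PATCH)
                 for x in range(0, padded.shape[1], PATCH)]
        return tiles, padded.shape

    def untile_image(tiles, padded_shape, orig_shape):
        """Place the fused tiles back on a canvas and crop to the original size."""
        H, W = padded_shape
        canvas = np.zeros((H, W), dtype=tiles[0].dtype)
        i = 0
        for y in range(0, H, PATCH):
            for x in range(0, W, PATCH):
                canvas[y:y + PATCH, x:x + PATCH] = tiles[i]
                i += 1
        return canvas[:orig_shape[0], :orig_shape[1]]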
In your paper, you mention "Concretely, we concatenate the two feature maps from the CNN-Module and the Transformer-Module in TransBlock and input ....". However, the dimension of the embeddings z, the number of channels in the Transformer-Module, and how different input resolutions are handled are not described clearly in the paper, and these details are very important in practical applications. Could you please add these details, or simply release model.py first?
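While waiting for model.py, here is a minimal sketch of the concatenation described in the quoted sentence. Every size in it (channels, embedding dimension, patch size, input resolution) is an assumption made for illustration, not the paper's actual hyper-parameters:

    import torch
    import torch.nn as nn

    class TransBlockSketch(nn.Module):
        """Illustrative only: concatenate CNN features with Transformer features.
        channels=64, embed_dim=64, 16x16 patches on a 256x256 input are all
        assumptions for this sketch, not the paper's reported settings."""
        def __init__(self, channels=64, embed_dim=64, patch=16):
            super().__init__()
            self.cnn = nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))
            self.patch_embed = nn.Conv2d(channels, embed_dim, patch, stride=patch)
            layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4,
                                               batch_first=True)
            self.transformer = nn.TransformerEncoder(layer, num_layers=1)
            self.up = nn.Upsample(scale_factor=patch, mode="bilinear",
                                  align_corners=False)
            self.fuse = nn.Conv2d(channels + embed_dim, channels, 1)

        def forward(self, x):                       # x: (B, C, 256, 256)
            f_cnn = self.cnn(x)                     # (B, C, 256, 256)
            z = self.patch_embed(x)                 # (B, E, 16, 16)
            b, e, h, w = z.shape
            z = self.transformer(z.flatten(2).transpose(1, 2))   # (B, 256, E)
            f_tr = self.up(z.transpose(1, 2).reshape(b, e, h, w))
            return self.fuse(torch.cat([f_cnn, f_tr], dim=1))    # channel concat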
Thanks for sharing the code. I noticed that you selected 12 objective evaluation metrics in your AAAI paper; however, I don't see them in the code. I would be very grateful if you could share the code for the evaluation metrics.
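For reference while the official metric code is unavailable, PSNR (one of the 12 metrics) can be computed as below; the MEFB toolbox may define it slightly differently, e.g. by averaging the score over the two source exposures:

    import numpy as np

    def psnr(fused, reference, peak=255.0):
        """Peak signal-to-noise ratio between two 8-bit grayscale images."""
        fused = np.asarray(fused, dtype=np.float64)
        reference = np.asarray(reference, dtype=np.float64)
        mse = np.mean((fused - reference) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)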
Thanks for your significant work! I have read your paper. Do you have the code that uses the sliding-window strategy to fuse input images of arbitrary size (i.e. not 256*256)? Looking forward to your reply. Thanks!
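A minimal sketch of such a sliding-window strategy follows. Here fuse_patch is a placeholder for a 256x256 TransMEF forward pass, and the overlap-averaging scheme is an assumption, not necessarily what the released script does:

    import numpy as np

    def sliding_window_fuse(img1, img2, fuse_patch, patch=256, stride=128):
        """Fuse two arbitrarily sized grayscale images patch by patch.

        fuse_patch(p1, p2) -> fused 256x256 patch (e.g. one model forward pass).
        Overlapping predictions are averaged to suppress block seams."""
        h, w = img1.shape
        ph, pw = max(h, patch), max(w, patch)
        a = np.pad(img1, ((0, ph - h), (0, pw - w)), mode="reflect").astype(np.float64)
        b = np.pad(img2, ((0, ph - h), (0, pw - w)), mode="reflect").astype(np.float64)
        out = np.zeros((ph, pw))
        weight = np.zeros((ph, pw))
        ys = list(range(0, ph - patch + 1, stride))
        xs = list(range(0, pw - patch + 1, stride))
        if ys[-1] != ph - patch:
            ys.append(ph - patch)      # make sure the bottom edge is covered
        if xs[-1] != pw - patch:
            xs.append(pw - patch)      # ... and the right edge
        for y in ys:
            for x in xs:
                out[y:y + patch, x:x + patch] += fuse_patch(
                    a[y:y + patch, x:x + patch], b[y:y + patch, x:x + patch])
                weight[y:y + patch, x:x + patch] += 1.0
        return (out / weight)[:h, :w]  # crop the padding away

Averaging overlapping predictions with a stride smaller than the patch size usually reduces the visible seams reported in the earlier issue about the sliding-window script.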