Implementation of "Learning to Match Features with Seeded Graph Matching Network" ICCV2021

Overview

SGMNet Implementation

Framework

PyTorch implementation of SGMNet for ICCV'21 paper "Learning to Match Features with Seeded Graph Matching Network", by Hongkai Chen, Zixin Luo, Jiahui Zhang, Lei Zhou, Xuyang Bai, Zeyu Hu, Chiew-Lan Tai, Long Quan.

This work focuses on the keypoint-based image matching problem. We mitigate the quadratic complexity of typical GNN-based matching by leveraging a restricted set of pre-matched seeds.
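
For intuition, seeds are confident pre-matches between the two keypoint sets. Below is a minimal sketch of one such seeding scheme (top-k mutual nearest neighbours in descriptor space); it illustrates the idea only and is not this repo's exact implementation:

import torch

def select_seeds(desc1, desc2, top_k=128):
    # Illustrative seed selection; names and scoring are hypothetical.
    # desc1/desc2: L2-normalized descriptor tensors of shape (N, D).
    sim = desc1 @ desc2.t()      # (N1, N2) descriptor similarity
    nn12 = sim.argmax(dim=1)     # best match in image 2 for each keypoint in image 1
    nn21 = sim.argmax(dim=0)     # best match in image 1 for each keypoint in image 2
    ids1 = torch.arange(sim.shape[0], device=sim.device)
    mutual = nn21[nn12] == ids1  # keep only mutual nearest neighbours
    scores = torch.where(mutual, sim[ids1, nn12], sim.new_full((sim.shape[0],), -1.0))
    seed_ids1 = scores.topk(min(top_k, scores.numel())).indices
    return seed_ids1, nn12[seed_ids1]  # seed indices in image 1 and image 2

Roughly speaking, attention in the seeded GNN is then routed through this small seed set rather than between all keypoint pairs, which is where the complexity saving comes from.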

This repo contains the training, evaluation and basic demo scripts used in our paper. As a baseline, it also includes our implementation of SuperGlue. If you find this project useful, please cite:

@article{chen2021sgmnet,
  title={Learning to Match Features with Seeded Graph Matching Network},
  author={Chen, Hongkai and Luo, Zixin and Zhang, Jiahui and Zhou, Lei and Bai, Xuyang and Hu, Zeyu and Tai, Chiew-Lan and Quan, Long},
  journal={International Conference on Computer Vision (ICCV)},
  year={2021}
}

Part of the code is borrowed or ported from

SuperPoint, for SuperPoint implementation,

SuperGlue, for SuperGlue implementation and exact auc computation,

OANet, for training scheme,

PointCN, for implementation of the PointCN block and geometric transformations,

FM-Bench, for evaluation of fundamental matrix estimation.

Please also cite these works if you find the corresponding code useful.

Requirements

We use PyTorch 1.6; later versions should also be compatible. Please refer to requirements.txt for other dependencies.

If you are using conda, you may configure the environment as:

conda create --name sgmnet python=3.7 -y && \
conda activate sgmnet && \
pip install -r requirements.txt

Get started

Clone the repo:

git clone https://github.com/vdvchen/SGMNet.git

Download the model weights from here.

Extract the weights by

tar -xvf weights.tar.gz

A quick demo for image matching can be run by:

cd demo && python demo.py --config_path configs/sgm_config.yaml

The results will be saved as match.png in the demo folder. You may configure the matcher in the corresponding yaml file.

Evaluation

We demonstrate the evaluation process with RootSIFT and SGMNet. Evaluation with other features/matchers can be conducted by configuring the corresponding yaml files.
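
For reference, RootSIFT differs from plain SIFT only in descriptor post-processing: L1-normalize each descriptor, then take the element-wise square root. A standalone sketch with OpenCV (assuming a build with SIFT available, OpenCV >= 4.4):

import cv2
import numpy as np

def extract_rootsift(gray, num_kpt=2000):
    # RootSIFT (Arandjelovic & Zisserman): SIFT + L1 normalization + square root.
    sift = cv2.SIFT_create(nfeatures=num_kpt)
    kpts, desc = sift.detectAndCompute(gray, None)
    desc /= desc.sum(axis=1, keepdims=True) + 1e-7  # SIFT is non-negative, so sum == L1 norm
    return kpts, np.sqrt(desc)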

1. YFCC Evaluation

Refer to the OANet repo to download the raw YFCC100M dataset.

Data Generation

  1. Configure datadump/configs/yfcc_root.yaml for the following entries

    rawdata_dir: path for yfcc rawdata
    feature_dump_dir: dump path for extracted features
    dataset_dump_dir: dump path for generated dataset
    extractor: configuration for keypoint extractor (2k RootSIFT by default)

  2. Generate data by

    cd datadump
    python dump.py --config_path configs/yfcc_root.yaml

    An h5py data file will be generated under dataset_dump_dir, e.g. yfcc_root_2000.hdf5

Evaluation:

  1. Configure evaluation/configs/eval/yfcc_eval_sgm.yaml for the following entries

    reader.rawdata_dir: path for yfcc_rawdata
    reader.dataset_dir: path for generated h5py dataset file
    matcher: configuration for sgmnet (we use the default setting)

  2. To run evaluation,

    cd evaluation
    python evaluate.py --config_path configs/eval/yfcc_eval_sgm.yaml

For 2k RootSIFT matching, results similar to the following should be obtained:

auc th: [5 10 15 20 25 30]
approx auc: [0.634 0.729 0.783 0.818 0.843 0.861]
exact auc: [0.355 0.552 0.655 0.719 0.762 0.793]
mean match score: 17.06
mean precision: 86.08
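
For reference, the exact AUC integrates the recall-vs-pose-error curve up to each threshold, while the approximate variant is computed from a coarser error histogram. A sketch of the exact computation, in the style of the SuperGlue evaluation code credited above:

import numpy as np

def exact_auc(errors, thresholds=(5, 10, 15, 20, 25, 30)):
    # `errors` holds one pose error (in degrees) per image pair.
    errors = np.sort(np.asarray(errors, dtype=float))
    recall = (np.arange(len(errors)) + 1) / len(errors)
    errors = np.r_[0.0, errors]
    recall = np.r_[0.0, recall]
    aucs = []
    for t in thresholds:
        last = np.searchsorted(errors, t)
        r = np.r_[recall[:last], recall[last - 1]]
        e = np.r_[errors[:last], t]
        aucs.append(np.trapz(r, x=e) / t)  # normalized so a perfect matcher scores 1
    return aucs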

2. ScanNet Evaluation

Download processed ScanNet evaluation data.

Data Generation

  1. Configure datadump/configs/scannet_root.yaml for the following entries

    rawdata_dir: path for ScanNet raw data
    feature_dump_dir: dump path for extracted features
    dataset_dump_dir: dump path for generated dataset
    extractor: configuration for keypoint extractor (2k RootSIFT by default)

  2. Generate data by

    cd datadump
    python dump.py --config_path configs/scannet_root.yaml

    An h5py data file will be generated under dataset_dump_dir, e.g. scannet_root_2000.hdf5

Evaluation:

  1. Configure evaluation/configs/eval/scannet_eval_sgm.yaml for the following entries

    reader.rawdata_dir: path for ScanNet evaluation data
    reader.dataset_dir: path for generated h5py dataset file
    matcher: configuration for sgmnet (we use the default setting)

  2. To run evaluation,

    cd evaluation
    python evaluate.py --config_path configs/eval/scannet_eval_sgm.yaml

For 2k RootSIFT matching, results similar to the following should be obtained:

auc th: [5 10 15 20 25 30]
approx auc: [0.322 0.427 0.493 0.541 0.577 0.606]
exact auc: [0.125 0.283 0.383 0.452 0.503 0.541]
mean match score: 8.79
mean precision: 45.54

3. FM-Bench Evaluation

Refer to the FM-Bench repo to download the raw FM-Bench dataset.

Data Generation

  1. Configure datadump/configs/fmbench_root.yaml for the following entries

    rawdata_dir: path for fmbench raw data
    feature_dump_dir: dump path for extracted features
    dataset_dump_dir: dump path for generated dataset
    extractor: configuration for keypoint extractor (4k RootSIFT by default)

  2. Generate data by

    cd datadump
    python dump.py --config_path configs/fmbench_root.yaml

    An h5py data file will be generated under dataset_dump_dir, e.g. fmbench_root_4000.hdf5

Evaluation:

  1. Configure evaluation/configs/eval/fm_eval_sgm.yaml for the following entries

    reader.rawdata_dir: path for fmbench raw data
    reader.dataset_dir: path for generated h5py dataset file
    matcher: configuration for sgmnet (we use the default setting)

  2. To run evaluation,

    cd evaluation
    python evaluate.py --config_path configs/eval/fm_eval_sgm.yaml

For 4k RootSIFT matching, results similar to the following should be obtained:

CPC results:
F_recall:  0.617
precision:  0.7489
precision_post:  0.8399
num_corr:  663.838
num_corr_post:  284.455  

KITTI results:
F_recall:  0.911
precision:  0.9035133886251774
precision_post:  0.9837278538989989
num_corr:  1670.548
num_corr_post:  1121.902

TUM results:
F_recall:  0.666
precision:  0.6520260208250837
precision_post:  0.731507123852191
num_corr:  1650.579
num_corr_post:  941.846

Tanks_and_Temples results:
F_recall:  0.855
precision:  0.7452896681043316
precision_post:  0.8020184635328004
num_corr:  946.571
num_corr_post:  466.865
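
The *_post numbers report statistics after geometric verification, i.e. after the correspondences are re-filtered by a RANSAC fundamental-matrix fit. A sketch of that kind of verification step with OpenCV (the threshold and confidence values are illustrative, not FM-Bench's exact settings):

import cv2
import numpy as np

def verify_matches(pts1, pts2, th=1.0):
    # pts1/pts2: (N, 2) float32 arrays of matched keypoint coordinates.
    # Re-estimate F with RANSAC and keep only the inlier correspondences.
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, th, 0.999)
    inliers = mask.ravel().astype(bool)
    return F, pts1[inliers], pts2[inliers]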

4. Run Time and Memory Evaluation

We provide a script to test run time and memory consumption. For a quick start, run

cd evaluation
python eval_cost.py --matcher_name SGM  --config_path configs/cost/sgm_cost.yaml --num_kpt=4000

You may configure the matcher in the corresponding yaml files.
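
If you want to instrument a matcher yourself, the usual PyTorch pattern looks like the sketch below; matcher and batch are placeholders, not this repo's actual interfaces:

import torch

@torch.no_grad()
def measure_cost(matcher, batch, device='cuda'):
    matcher(batch)  # warm-up pass so lazy CUDA initialization is excluded
    torch.cuda.reset_peak_memory_stats(device)
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    matcher(batch)
    end.record()
    torch.cuda.synchronize(device)
    # Returns (milliseconds, peak MiB) for one forward pass.
    return start.elapsed_time(end), torch.cuda.max_memory_allocated(device) / 2**20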

Visualization

For visualization of matching results on different datasets, add the --vis_folder argument to the evaluation command, e.g.

cd evaluation
python evaluate.py --config_path configs/eval/***.yaml --vis_folder visualization
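
If you prefer to render matches yourself, a hypothetical OpenCV helper (not the repo's visualization code) could look like:

import cv2

def draw_matches(img1, kpts1, img2, kpts2, matches, out_path='match.png'):
    # kpts*: (N, 2) arrays of (x, y); matches: iterable of (idx1, idx2) pairs.
    cv_k1 = [cv2.KeyPoint(float(x), float(y), 1.0) for x, y in kpts1]
    cv_k2 = [cv2.KeyPoint(float(x), float(y), 1.0) for x, y in kpts2]
    cv_m = [cv2.DMatch(int(i), int(j), 0.0) for i, j in matches]
    vis = cv2.drawMatches(img1, cv_k1, img2, cv_k2, cv_m, None)
    cv2.imwrite(out_path, vis)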

Training

We train both SGMNet and SuperGlue on the GL3D dataset. The training data is pre-generated offline, which yields about 400k pairs in total.

To generate the training/validation dataset:

  1. Download the GL3D raw data

  2. Configure datadump/configs/gl3d.yaml. Some important entries are

    rawdata_dir: path for GL3D raw data
    feature_dump_dir: path for extracted features
    dataset_dump_dir: path for generated dataset
    pairs_per_seq: number of pairs sampled for each sequence
    angle_th: angle threshold for sampled pairs
    overlap_th: common track threshold for sampled pairs
    extractor: configuration for keypoint extractor

  3. Dump the dataset by

cd datadump
python dump.py --config_path configs/gl3d.yaml

Two parts of data will be generated: (1) extracted features and keypoints will be placed under feature_dump_dir; (2) the pairwise dataset will be placed under dataset_dump_dir.

  1. After data generation, configure train/train_sgm.sh for the necessary entries, including
    rawdata_path: path for GL3D raw data
    desc_path: path for extracted features
    dataset_path: path for generated dataset
    desc_suffix: suffix for keypoint files, _root_1000.hdf5 for 1k RootSIFT by default.
    log_base: log directory for training

  2. Run the SGMNet training script by

bash train_sgm.sh

Our training scripts support multi-GPU training, which can be enabled by configuring train/train_sgm.sh for the following entries (see the sketch after this list):

CUDA_VISIBLE_DEVICES: ids of the GPUs to be used
nproc_per_node: number of GPUs to be used
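
Under the hood this is the standard one-process-per-GPU DistributedDataParallel pattern. A minimal sketch, assuming a launcher that sets LOCAL_RANK and the rendezvous environment variables (torch.distributed.launch --use_env, or torchrun on newer PyTorch); this is not a verbatim excerpt from the training code:

import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def setup_ddp(model):
    # Each launched process drives exactly one GPU.
    local_rank = int(os.environ['LOCAL_RANK'])
    dist.init_process_group(backend='nccl')
    torch.cuda.set_device(local_rank)
    return DDP(model.cuda(local_rank), device_ids=[local_rank])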

Run the SuperGlue training script by

bash train_sg.sh

Comments
  • Could you please provide more guidance on reproducing Aachen for SIFT?

    Hi! I would like to ask for guidance on reproducing Aachen for SIFT and SGMNet+SIFT.

    As a starter, I have tried to run MNN+SIFT for Aachen day-night... but I get a much worse result for 8196 keypoints: 23.5 / 33.7 / 41.8. I am guessing that maybe it is because I am using SIFT instead of RootSIFT... but I am not sure about other settings that could be different...

    So, as I saw the results of SGMNet+SIFT as well as MNN+SIFT in both papers and on https://www.visuallocalization.net/details/17655/, I am wondering how to reproduce them (especially Table 4 in the paper)...

    Could you please share the setting? I would like to learn how to get a similar result...

    So I would like to scope down my question as follows:

    • Firstly, it is RootSIFT, not SIFT, i.e. self.root == True (in https://github.com/vdvchen/SGMNet/blob/main/components/extractors.py#L43)? Is that correct? Do you have to further normalize the features (to keep the norm of the feature dimension at 1)?
    • Did you restrict the image size? For example, in D2-Net the maximum size was restricted to 1600. Did you do something like that too?
    • For MNN+SIFT, is there any thresholding applied for the MNN matching?
    • Is the setting for SGMNet+SIFT for Aachen Day-Night similar to the following? Also, as I read the name ...rootsift8k_upright_512_0.2_SGMNet..., what do you mean by 512 and 0.2?
    matcher:
      name: SGM
      model_dir: ../weights/sgm/root
      seed_top_k: [256,256]
      seed_radius_coe: 0.01
      net_channels: 128
      layer_num: 9
      head: 4
      seedlayer: [0,6]
      use_mc_seeding: True
      use_score_encoding: False
      conf_bar: [1.11,0.1] #set to [1,0.1] for sp
      sink_iter: [10,100]
      detach_iter: 1000000
      p_th: 0.2
    
    • Also, is the setting for SG+SIFT for Aachen Day-Night similar to the following?
    matcher:
      name: SG
      model_dir: ../weights/sg/root
      net_channels: 128
      layer_num: 9
      head: 4
      use_score_encoding: True
      sink_iter: [100]
      p_th: 0.2
    
    opened by GabbySuwichaya 6
  • A request for a setting of SGMNet+SP

    Hi! Thank you so much for releasing the code. Your paper is very impressive and contains so many interesting findings.

    Here, I would like to kindly ask about the following settings for using SGMNet:

    1. SGMNet+SP
    2. SGMNet+SP-10 sink

    I have tried running your work on HPatches and it seems pretty good with the SIFT setting (but switching the dimension to 256). However, I don't know what the proper setting is.

    opened by GabbySuwichaya 6
  • SGMNet+SP error

    @vdvchen Hello, when running SGMNet+SP I got an error. It seems to be because SuperPoint descriptors are 256-dimensional while the SGMNet weights are 128-dimensional. Could you send SuperPoint-based SGMNet weights to my email [email protected]?

    aug_desc1, aug_desc2 = x1_pos_embedding + desc1, x2_pos_embedding + desc2
    RuntimeError: The size of tensor a (128) must match the size of tensor b (256) at non-singleton dimension 1

    opened by 22wei22 3
  • Hi! Could you provide more details on the dataset for training ?

    Why can't I find the hdf5 file? Does the code not generate hdf5 files in the folder?

    Traceback (most recent call last):
      File "dump.py", line 27, in <module>
        dataset.format_dump_data()
      File "/data5/huZhao/code/SGMNet/datadump/dumper/gl3d_train.py", line 244, in format_dump_data
        pool.map(self.format_seq,indices)
      File "/data5/huZhao/anaconda3/envs/sgmnet/lib/python3.7/multiprocessing/pool.py", line 268, in map
        return self._map_async(func, iterable, mapstar, chunksize).get()
      File "/data5/huZhao/anaconda3/envs/sgmnet/lib/python3.7/multiprocessing/pool.py", line 657, in get
        raise self._value
    FileNotFoundError: [Errno 2] Unable to open file (unable to open file: name = '/data5/huZhao/code/GL3D-2/dump_desc_dir/000000000000000000000009/00000010.jpg_root_1000.hdf5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)

    opened by happyboy1234 3
  • Hi! Could you provide more details on the dataset for training ?

    Thanks for your kind support last time, and thank you very much for sharing the training script... It is quite interesting to me.
    Here, I would like to kindly ask about the data for training. I have tried to follow the instructions to download the data from https://github.com/lzx551402/GL3D

    • I have a question on which of these three datasets, (1) gl3d_imgs, (2) gl3d_raw_imgs, (3) gl3d_blended_images from https://github.com/lzx551402/GL3D#downloads, are to be downloaded... or all of them?

    • I have downloaded gl3d_raw_imgs; however, I received the error below... Does this mean that I did not download correctly, or that I have downloaded the wrong dataset?

    • My setting for the gl3d.yaml file is as follows. Should rawdata_dir be the cloned directory of https://github.com/lzx551402/GL3D? I am very sorry, as this is not what you wrote in the instructions. The reason I thought this may be the GL3D cloned directory is that dump.py also looks for GL3D/data/list/comb/imageset_train.txt:

    data_name: gl3d_train
    rawdata_dir: /mnt/HDD4TB2/GL3D   
    feature_dump_dir: /mnt/HDD4TB3/SGMNet/gl3d_desc_dir
    dataset_dump_dir: /mnt/HDD4TB3/SGMNet/gl3d_dataset_dir
    
    

    The error:

    python dump.py --config_path configs/gl3d.yaml
    dump.py:20: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
      config = yaml.load(f)
    Formatting data...
      0%|                                                                                                                                                     | 0/109 [00:00<?, ?it/s]
    multiprocessing.pool.RemoteTraceback: 
    """
    Traceback (most recent call last):
      File "/home/gabby-suwichaya/anaconda3/envs/sgmnet/lib/python3.7/multiprocessing/pool.py", line 121, in worker
        result = (True, func(*args, **kwds))
      File "/home/gabby-suwichaya/anaconda3/envs/sgmnet/lib/python3.7/multiprocessing/pool.py", line 44, in mapstar
        return list(map(*args))
      File "/mnt/HDD4TB3/SGMNet/datadump/dumper/gl3d_train.py", line 147, in format_seq
        pair_list=np.loadtxt(os.path.join(seq_dir,'geolabel','common_track.txt'),dtype=float)[:,:2].astype(int)
      File "/home/gabby-suwichaya/anaconda3/envs/sgmnet/lib/python3.7/site-packages/numpy/lib/npyio.py", line 1067, in loadtxt
        fh = np.lib._datasource.open(fname, 'rt', encoding=encoding)
      File "/home/gabby-suwichaya/anaconda3/envs/sgmnet/lib/python3.7/site-packages/numpy/lib/_datasource.py", line 193, in open
        return ds.open(path, mode, encoding=encoding, newline=newline)
      File "/home/gabby-suwichaya/anaconda3/envs/sgmnet/lib/python3.7/site-packages/numpy/lib/_datasource.py", line 533, in open
        raise IOError("%s not found." % path)
    OSError: /mnt/HDD4TB2/GL3D/data/586326ad712e276146904571/geolabel/common_track.txt not found.
    """
    
    The above exception was the direct cause of the following exception:
    
    Traceback (most recent call last):
      File "dump.py", line 27, in <module>
        dataset.format_dump_data()
      File "/mnt/HDD4TB3/SGMNet/datadump/dumper/gl3d_train.py", line 244, in format_dump_data
        pool.map(self.format_seq,indices)
      File "/home/gabby-suwichaya/anaconda3/envs/sgmnet/lib/python3.7/multiprocessing/pool.py", line 268, in map
        return self._map_async(func, iterable, mapstar, chunksize).get()
      File "/home/gabby-suwichaya/anaconda3/envs/sgmnet/lib/python3.7/multiprocessing/pool.py", line 657, in get
        raise self._value
    OSError: /mnt/HDD4TB2/GL3D/data/586326ad712e276146904571/geolabel/common_track.txt not found.
    
    opened by GabbySuwichaya 3
  • Cannot find hdf5 file

    When I run python dump.py --config_path configs/gl3d.yaml

    I encounter the following issue; it seems like it cannot find the hdf5 file. How can I solve it?

    Traceback (most recent call last):
      File "/usr/lib/python3.6/multiprocessing/pool.py", line 119, in worker
        result = (True, func(*args, **kwds))
      File "/usr/lib/python3.6/multiprocessing/pool.py", line 44, in mapstar
        return list(map(*args))
      File "/home/sshuang/SGMNet/datadump/dumper/gl3d_train.py", line 192, in format_seq
        with h5py.File(os.path.join(self.config['feature_dump_dir'],fea_path1),'r') as fea1,
      File "/usr/local/lib/python3.6/dist-packages/h5py/_hl/files.py", line 394, in __init__
        swmr=swmr)
      File "/usr/local/lib/python3.6/dist-packages/h5py/_hl/files.py", line 170, in make_fid
        fid = h5f.open(name, flags, fapl=fapl)
      File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
      File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
      File "h5py/h5f.pyx", line 85, in h5py.h5f.open
    OSError: Unable to open file (unable to open file: name = '/home/sshuang/SGMNet/datadump/dump_desc/000000000000000000000009/00000007.jpg_sp_500.hdf5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "dump.py", line 27, in <module>
        dataset.format_dump_data()
      File "/home/sshuang/SGMNet/datadump/dumper/gl3d_train.py", line 244, in format_dump_data
        pool.map(self.format_seq,indices)
      File "/usr/lib/python3.6/multiprocessing/pool.py", line 266, in map
        return self._map_async(func, iterable, mapstar, chunksize).get()
      File "/usr/lib/python3.6/multiprocessing/pool.py", line 644, in get
        raise self._value
    OSError: Unable to open file (unable to open file: name = '/home/sshuang/SGMNet/datadump/dump_desc/000000000000000000000009/00000007.jpg_sp_500.hdf5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)

    opened by shane5713 1
  • Descriptor

    Hello, I read your paper. There is a SIFT+SuperGlue combination in your comparative experiments; how do you solve the problem that the descriptor and matching-network dimensions are inconsistent?

    opened by Lucifer1002 1
  • How many epochs did you use for training?

    Hi! Can you answer a couple of questions?

    1. How many epochs did you use for training SGMNet? And where is this parameter set in the code?
    2. When are checkpoints saved? I couldn't tell from the code.
    opened by fogice-10 0