A framework for joint super-resolution and image synthesis, without requiring real training data

Overview

SynthSR

This repository contains code to train a Convolutional Neural Network (CNN) for Super-resolution (SR), or joint SR and data synthesis. The method can also be configured to achieve denoising and bias field correction.

The network takes synthetic scans generated on the fly as inputs, and can be trained to regress either real or synthetic target scans. The synthetic scans are obtained by sampling a generative model that builds on the SynthSeg [1] package, which we strongly encourage you to have a look at!


In short, synthetic scans are generated at each mini-batch by: 1) randomly selecting a label map from a pool of training segmentations, 2) spatially deforming it in 3D, 3) sampling a Gaussian Mixture Model (GMM) conditioned on the deformed label map (see Figure 1 below), and 4) corrupting it with a random bias field. This gives us a synthetic scan at high resolution (HR). We then simulate thick slice spacing by blurring and downsampling the HR scan to low resolution (LR). In SR, we train a network to learn the mapping between the LR data (possibly multimodal, hence the joint synthesis) and the HR synthetic scans. Moreover, if real images are available along with the training label maps, we can learn to regress the real images instead.
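
To make these steps concrete, here is a minimal 2D sketch of steps 3 and 4 plus the LR simulation, using only NumPy and SciPy. All shapes, parameter values, and distributions below are illustrative assumptions, not the actual SynthSR implementation:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    # toy 2D label map with 3 labels (0: background, 1: grey, 2: white);
    # in SynthSR this would be a randomly deformed 3D training segmentation
    labels = np.zeros((128, 128), dtype=int)
    labels[32:96, 32:96] = 1
    labels[48:80, 48:80] = 2

    # 3) sample a GMM conditioned on the label map: one (mean, std) per label
    means = np.random.uniform(0, 255, size=3)
    stds = np.random.uniform(1, 25, size=3)
    hr = np.random.normal(means[labels], stds[labels])

    # 4) corrupt with a random smooth multiplicative bias field
    bias = gaussian_filter(np.random.normal(0, 0.3, labels.shape), sigma=32)
    hr *= np.exp(bias)

    # simulate thick slices: blur along the slice axis, then downsample
    slice_spacing = 4  # e.g. 4mm slices from 1mm isotropic HR data
    lr = gaussian_filter(hr, sigma=(0.85 * slice_spacing, 0))[::slice_spacing, :]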


Figure 1: Overview of SynthSR.


Tutorials for Generation and Training

This repository contains code to train your own network for SR or joint SR and synthesis. Because the training function has many options, we provide several tutorials to familiarise yourself with the different training/generation parameters. We emphasise that we provide example training data along with these scripts: 5 preprocessed, publicly available T1 scans at 1mm isotropic resolution [2], with corresponding label maps obtained with FreeSurfer [3]. The tutorials can be found in scripts, and they include:

  • Six generation scripts corresponding to different use cases (see Figure 2 below). We recommend going through all of them (even if you are only interested in case 1), since they successively introduce different functionalities.

  • One training script, explaining the main training parameters.

  • One script explaining how to estimate the parameters governing the GMM, in case you wish to train a model on your own data (a rough illustration follows this list).
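
As a rough illustration of what this estimation involves, the snippet below computes per-label intensity means and standard deviations from a single image/segmentation pair. It is only a sketch: the actual functions in estimate_priors.py are more complete (e.g. they can pool statistics over several subjects), and the function name here is hypothetical:

    import numpy as np
    import nibabel as nib

    def estimate_gmm_priors(image_path, labels_path):
        """Per-label intensity mean/std from one image/segmentation pair.
        Illustrative only; see SynthSR/estimate_priors.py for the real code."""
        image = nib.load(image_path).get_fdata()
        labels = nib.load(labels_path).get_fdata().astype(int)
        priors = {}
        for label in np.unique(labels):
            intensities = image[labels == label]
            priors[label] = (intensities.mean(), intensities.std())
        return priors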


Figure 2: Examples generated by running the tutorials on the provided data [2]. For each use case, we show the synthetic images used as inputs to the network, as well as the regression target.


Content

  • SynthSR: this is the main folder containing the generative model and training function:

    • labels_to_image_model.py: builds the generative model.

    • brain_generator.py: contains the class BrainGenerator, which is a wrapper around the generative model. New images can simply be generated by instantiating an object of this class and calling the method generate_image() (a minimal usage sketch is shown after this list).

    • model_inputs.py: prepares the inputs of the generative model.

    • training.py: contains the function to train the network. All training parameters are explained there.

    • metrics_model.py: contains a Keras model that implements different loss functions.

    • estimate_priors.py: contains functions to estimate the prior distributions of the GMM parameters.

  • data: this folder contains the data for the tutorials (T1 scans [2], corresponding FreeSurfer segmentations, and some other useful files).

  • scripts: in addition to the tutorials, we also provide a script to launch trainings from the terminal.

  • ext: contains external packages.
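
As mentioned for brain_generator.py above, generating a synthetic example could look like the following minimal sketch. The import path is inferred from the folder layout above, and the constructor argument and return values are assumptions; the full set of options is documented in the tutorials:

    # minimal usage sketch; the constructor takes many more options in practice
    from SynthSR.brain_generator import BrainGenerator

    # point the generator to a pool of training label maps (path is illustrative)
    brain_generator = BrainGenerator('data/labels')

    # sample one synthetic input and its regression target (return values assumed)
    image, target = brain_generator.generate_image()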


Requirements

This code relies on several external packages (already included in ext):

  • lab2im: contains functions for data augmentation, and a simple version of the generative model, on which we build to obtain labels_to_image_model [1].

  • neuron: contains functions for deforming and resizing tensors, as well as functions to build the network [4,5].

  • pytool-lib: library required by the neuron package.

All the other requirements are listed in requirements.txt. We list here the most important dependencies:

  • tensorflow-gpu 2.0
  • tensorflow_probability 0.8
  • keras > 2.0
  • cuda 10.0 (required by tensorflow)
  • cudnn 7.0
  • nibabel
  • numpy, scipy, sklearn, tqdm, pillow, matplotlib, ipython, ...

Citation/Contact

This repository contains the code related to a submission that is still under review.

If you have any questions regarding the usage of this code, or any suggestions to improve it, you can contact us at:
[email protected]


References

[1] A Learning Strategy for Contrast-agnostic MRI Segmentation
Benjamin Billot, Douglas N. Greve, Koen Van Leemput, Bruce Fischl, Juan Eugenio Iglesias*, Adrian V. Dalca*
*contributed equally
MIDL 2020

[2] A novel in vivo atlas of human hippocampal subfields using high-resolution 3T magnetic resonance imaging
J. Winterburn, J. Pruessner, S. Chavez, M. Schira, N. Lobaugh, A. Voineskos, M. Chakravarty
NeuroImage (2013)

[3] FreeSurfer
Bruce Fischl
NeuroImage (2012)

[4] Anatomical Priors in Convolutional Networks for Unsupervised Biomedical Segmentation
Adrian V. Dalca, John Guttag, Mert R. Sabuncu
CVPR 2018

[5] Unsupervised Data Imputation via Variational Inference of Deep Subspaces
Adrian V. Dalca, John Guttag, Mert R. Sabuncu
arXiv preprint (2019)

Comments
  • processing Hyperfine scans

    I have some images from Hyperfine, which I'd like to process to get "super resolution" scans. I actually have 3 T1-weighted and 3 T2-weighted scans, i.e. I do have high-resolution (well, 1.5mm) in each plane.

    I tried your pretrained model on each pair of scans (it doesn't seem that you support using multiple images of a single modality).

    • T1 ax and T2 ax
    • T1 cor and T2 cor
    • T1 sag and T2 sag

    The output is quite different for each pair of scans (and none of the outputs really matches the reference HR T1-MPRAGE that I have). So I'm coming to you for advice to move forward.

    I know that your model requires the FSE sequence for T1 and T2. Here is what I have:

    • for T1, T1_Gray_White_Contrast_(DL)
    • for T2, T2-Weighted_FSE_(DL)

    So, perhaps I have the wrong sequence for the T1 scan. Could I retrain the model? I looked at the tutorials (5 and 6) but couldn't quite see what knobs to turn.

    Are your models brain-specific? From the training images in the data folder I would assume so. I am interested in getting super resolution for the whole head, i.e. also scalp, skull and CSF. Would I need to retrain the models with my own whole head images?

    Finally -- is there an easy way to combine several scans of the same modality?

    opened by julien-dubois-k 3
  • Dataset Usage

    Hi, Thanks for sharing your work here.

    You mentioned that the images present in this repo are pre-processed publicly available data, and that the labels were generated with the FreeSurfer software. As this repo is under the Apache 2.0 license, can I use the derived dataset present in this repository, or are there any restrictions from the original publicly available dataset?

    opened by BSYMAHESH 1
  • Hyperostosis affecting older adult scans

    Hi, thanks for creating such an amazing tool!

    I've used the FreeSurfer implementation & am running into issues when attempting to use the HyperFine algorithm with older adult MRIs. In many older adults, the inner table of the skull starts to grow & this affects the intensity of the image. In many cases, the algorithm grabs extra "brain" from these (largely frontal) regions thinking that this high-intensity region must be white matter since there is another skull layer beyond this.

    Using the T1-only algorithm doesn't produce this error (though detail is lost in the rest of the brain - see attached).

    In the past, I've had difficulty skull stripping older adult brains because of this, but I've found that HD-BET https://github.com/MIC-DKFZ/HD-BET seems to handle this well - so if there's an issue of deciding what is/isn't brain, this might be a workaround.

    In the meantime, I'll use base SynthSR since that still produces better results than anything I've seen, but at some stage, if there is an update that allows HyperFine to run with older adults I'd be over the moon (I realize this is an edge case & might not be a priority).

    All of which is to say, cheers & thanks!

    John

    opened by johnaeanderson 1
  • Tutorial 1-SR_real Bug

    Hi, I recently discovered your work and decided to try your tutorials.

    In the first tutorial, 1-SR_real.py, there is a bug that appears in SynthSR/labels_to_image_model.py. This tutorial sets the variable output_channel to None, and labels_to_image_model.py then tries to iterate over that variable, which has type NoneType in this case, so the iteration fails. Adding an if statement, as in the code below, fixes the problem.

    The following tutorials also describe output_channel as "output_channel: (optional) a list with the indices of the output channels". So if you set output_channel=1, the variable has type int rather than list as described, and the code fails as well. You should either set output_channel=[1] or add an if statement that checks the type of output_channel.

    The lines 192-204 in SynthSR/labels_to_image_model.py should be replaced with the following code or something similar:

    # synthetic regression target
    if output_channel is not None:
        # accept a single index as well as a list of indices
        if not isinstance(output_channel, (list, tuple)):
            output_channel = [output_channel]
        if i in output_channel:
            target = KL.Lambda(lambda x: tf.cast(x, dtype='float32'))(channel)
            # resample regression target at target resolution if needed
            if crop_shape != output_shape:
                sigma = utils.get_std_blurring_mask_for_downsampling(target_res, atlas_res)
                kernels_list = l2i_et.get_gaussian_1d_kernels(sigma)
                target = l2i_et.blur_tensor(target, kernels_list, n_dims=n_dims)
                target = l2i_et.resample_tensor(target, output_shape)
            regression_target.append(target)
    

    @BBillot

    opened by stromguy 1