The implementation code for "DAGAN: Deep De-Aliasing Generative Adversarial Networks for Fast Compressed Sensing MRI Reconstruction"

Overview

DAGAN

This is the official implementation code for DAGAN: Deep De-Aliasing Generative Adversarial Networks for Fast Compressed Sensing MRI Reconstruction published in IEEE Transactions on Medical Imaging (2018).
Guang Yang*, Simiao Yu*, et al.
(* equal contributions)

If you use this code for your research, please cite our paper.

@article{yang2018_dagan,
	author = {Yang, Guang and Yu, Simiao and Dong, Hao and Slabaugh, Gregory G. and Dragotti, Pier Luigi and Ye, Xujiong and Liu, Fangde and Arridge, Simon R. and Keegan, Jennifer and Guo, Yike and Firmin, David N.},
	journal = {IEEE Trans. Med. Imaging},
	number = 6,
	pages = {1310--1321},
	title = {{DAGAN: deep de-aliasing generative adversarial networks for fast compressed sensing MRI reconstruction}},
	volume = 37,
	year = 2018
}

If you have any questions about this code, please feel free to contact Simiao Yu ([email protected]).

Prerequisites

The original code is written in Python 3.5 with the following dependencies:

  1. tensorflow (v1.1.0)
  2. tensorlayer (v1.7.2)
  3. easydict (v1.6)
  4. nibabel (v2.1.0)
  5. scikit-image (v0.12.3)

The code was tested on Ubuntu 16.04 with an Nvidia GPU plus CUDA and cuDNN (in versions compatible with TensorFlow v1.1.0).
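
For convenience, the pinned versions above can be installed in one step (an untested sketch; whether PyPI still serves these old releases, and whether you need the 'tensorflow-gpu' package instead of 'tensorflow' for GPU support, depends on your environment):

    pip install tensorflow-gpu==1.1.0 tensorlayer==1.7.2 easydict==1.6 nibabel==2.1.0 scikit-image==0.12.3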

How to use

  1. Prepare data

    1. Data used in this work are publicly available from the MICCAI 2013 grand challenge (link). Users need to register with the grand challenge organisers in order to download the data.
    2. Download the training and test data into data/MICCAI13_SegChallenge/Training_100 and data/MICCAI13_SegChallenge/Testing_100 respectively (we randomly selected 100 T1-weighted MRI datasets for training and 50 datasets for testing).
    3. Run 'python data_loader.py'.
    4. After running the script, the training/validation/testing data will be saved to 'data/MICCAI13_SegChallenge/' in pickle format.
  2. Download pretrained VGG16 model

    1. Download 'vgg16_weights.npz' from this link
    2. Save 'vgg16_weights.npz' into 'trained_model/VGG16'
  3. Train model

    1. Run 'CUDA_VISIBLE_DEVICES=0 python train.py --model MODEL --mask MASK --maskperc MASKPERC', where you specify MODEL, MASK and MASKPERC as follows:
    • MODEL: choose from 'unet' or 'unet_refine'
    • MASK: choose from 'gaussian1d', 'gaussian2d' or 'poisson2d'
    • MASKPERC: choose from '10', '20', '30', '40' or '50' (the mask percentage; see the sketch after this list)
  4. Test trained model

    1. Run 'CUDA_VISIBLE_DEVICES=0 python test.py --model MODEL --mask MASK --maskperc MASKPERC', specifying MODEL, MASK and MASKPERC as above.
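
For example, 'CUDA_VISIBLE_DEVICES=0 python train.py --model unet_refine --mask gaussian1d --maskperc 30' trains the refined U-Net generator with a 1D Gaussian mask at 30%. To build intuition for what the mask options control, here is an illustrative sketch (not the repository's exact mask code; the function name and the Gaussian width are assumptions) of a 1D Gaussian undersampling pattern and the zero-filled reconstruction that DAGAN learns to de-alias:

    import numpy as np

    def gaussian1d_mask(n_rows=256, perc=30, seed=0):
        # Keep ~perc% of k-space rows, sampled with a Gaussian density
        # centred on the low frequencies (illustrative only).
        rng = np.random.RandomState(seed)
        rows = np.arange(n_rows)
        p = np.exp(-0.5 * ((rows - n_rows / 2) / (n_rows / 6)) ** 2)
        p /= p.sum()
        keep = rng.choice(rows, size=int(n_rows * perc / 100), replace=False, p=p)
        mask = np.zeros(n_rows, dtype=bool)
        mask[keep] = True
        return mask

    # Zero-filled reconstruction: zero out unsampled rows of the centred
    # k-space and transform back; the result is the aliased network input.
    img = np.random.rand(256, 256)            # stand-in for an MRI slice
    k = np.fft.fftshift(np.fft.fft2(img))     # centred k-space
    mask = gaussian1d_mask(256, perc=30)
    zf = np.abs(np.fft.ifft2(np.fft.ifftshift(k * mask[:, None])))
    print(zf.shape)                           # (256, 256)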

Results

Please refer to the paper for the detailed results.

Comments
  • Some questions about the vgg_prepro function in utils.py

    Issue Description

    Hello, when reading your code I had some questions about the vgg_prepro function in utils.py. The function is defined as follows:

        import numpy as np
        from scipy.misc import imresize  # available in SciPy <= 1.2

        def vgg_prepro(x):
            x = imresize(x, [244, 244], interp='bilinear', mode=None)  # bilinear resize
            x = np.tile(x, 3)    # grayscale (244, 244, 1) -> (244, 244, 3)
            x = x / 127.5 - 1    # rescale [0, 255] -> [-1, 1]
            return x

    I know this function preprocesses an image, e.g. changing it from [256, 256, 1] to [244, 244, 3], but I don't understand why we need x / 127.5 - 1 here, which scales the image to [-1, 1].

    In addition, I would also like to ask why the image is resized to 244 rather than 224, the input size used in the original VGG paper.

    I hope you can help me with this; I can't figure it out.
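
    For what it's worth, scipy.misc.imresize has since been removed from SciPy, so a rough modern equivalent may help anyone re-running this code (a sketch under stated assumptions: inputs of shape (256, 256, 1) with values in [0, 255], and skimage.transform.resize standing in for imresize):

        import numpy as np
        from skimage.transform import resize

        def vgg_prepro_equiv(x):
            # Bilinear resize to 244x244, keeping the original [0, 255] range.
            x = resize(x, (244, 244), order=1, preserve_range=True)
            x = np.tile(x, 3)      # (244, 244, 1) -> (244, 244, 3)
            return x / 127.5 - 1   # [0, 255] -> [-1, 1]

        slice_ = np.random.randint(0, 256, (256, 256, 1)).astype(np.float32)
        out = vgg_prepro_equiv(slice_)
        print(out.shape, out.min(), out.max())  # (244, 244, 3), within [-1, 1]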

    opened by Alxemade 3
  • How to calculate the reconstruction time

    Hi @nebulaV, I ran your code to reconstruct one image, using the following code at evaluation time:

        import time

        start_time = time.time()
        # One forward pass of the generator on the undersampled batch.
        evaluate_restore_img = sess.run(net.outputs, {evaluate_image: evaluate_samples_bad})
        print("took: %4.4fs" % (time.time() - start_time))

    I ran it on a GPU platform, but the measured time is about 16 s, whereas you report 5.4 ms. I do not understand how that result is calculated.
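
    One likely explanation (an assumption, not confirmed by the authors in this thread) is that the first sess.run pays one-off graph and CUDA initialisation costs. Timing a warmed-up session over many runs, reusing the sess/net/evaluate_image names from the snippet above, gives a per-batch figure much closer to a steady-state number:

        import time

        # Warm-up: the first call includes graph/CUDA initialisation.
        sess.run(net.outputs, {evaluate_image: evaluate_samples_bad})

        n_runs = 100
        start_time = time.time()
        for _ in range(n_runs):
            sess.run(net.outputs, {evaluate_image: evaluate_samples_bad})
        print("average per batch: %.4fs" % ((time.time() - start_time) / n_runs))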

    opened by Alxemade 2
  • up7 layer's output is not equal to input.

    Hello, may I ask about a problem? Thank you for your code; I have been learning AI for about three weeks. During training, the error in the title occurs. Please help.

        up7 = {DeConv2d} Last layer is: DeConv2d (u_net/deconv7) [25, 2, 2, 512]
        all_drop = {dict} {}
        all_layers = {list} [<tf.Tensor 'u_net/conv1/Identity:0' shape=(25, 1, 1, 64) dtype=float32>, <tf.Tensor 'u_net/conv2/Identity:0' shape=(25, 1, 1, 128) dtype=float32>, <tf.Tensor 'u_net/bn2/lrelu:0' shape=(25, 1, 1, 128) dtype=float32>, <tf.Tensor 'u_net/conv3/Identity:0' shape=(25, 1, 1, 256) dtype=float32>, <tf.Tensor 'u_net/bn3/lrelu:0' shape=(25, 1, 1, 256) dtype=float32>, <tf.Tensor 'u_net/conv4/Identity:0' shape=(25, 1, 1, 512) dtype=float32>, <tf.Tensor 'u_net/bn4/lrelu:0' shape=(25, 1, 1, 512) dtype=float32>, <tf.Tensor 'u_net/conv5/Identity:0' shape=(25, 1, 1, 512) dtype=float32>, <tf.Tensor 'u_net/bn5/lrelu:0' shape=(25, 1, 1, 512) dtype=float32>, <tf.Tensor 'u_net/conv6/Identity:0' shape=(25, 1, 1, 512) dtype=float32>, <tf.Tensor 'u_net/bn6/lrelu:0' shape=(25, 1, 1, 512) dtype=float32>, <tf.Tensor 'u_net/conv7/Identity:0' shape=(25, 1, 1, 512) dtype=float32>, <tf.Tensor 'u_net/bn7/lrelu:0' shape=(25, 1, 1, 512) dtype=float32>, <tf.Tensor 'u_net/conv8/lrelu:0' shape=(25, 1, 1, 51...
        all_params = {list} [<tf.Variable 'u_net/conv1/kernel:0' shape=(4, 4, 1, 64) dtype=float32_ref>, <tf.Variable 'u_net/conv1/bias:0' shape=(64,) dtype=float32_ref>, <tf.Variable 'u_net/conv2/kernel:0' shape=(4, 4, 64, 128) dtype=float32_ref>, <tf.Variable 'u_net/conv2/bias:0' shape=(128,) dtype=float32_ref>, <tf.Variable 'u_net/bn2/beta:0' shape=(128,) dtype=float32_ref>, <tf.Variable 'u_net/bn2/gamma:0' shape=(128,) dtype=float32_ref>, <tf.Variable 'u_net/bn2/moving_mean:0' shape=(128,) dtype=float32_ref>, <tf.Variable 'u_net/bn2/moving_variance:0' shape=(128,) dtype=float32_ref>, <tf.Variable 'u_net/conv3/kernel:0' shape=(4, 4, 128, 256) dtype=float32_ref>, <tf.Variable 'u_net/conv3/bias:0' shape=(256,) dtype=float32_ref>, <tf.Variable 'u_net/bn3/beta:0' shape=(256,) dtype=float32_ref>, <tf.Variable 'u_net/bn3/gamma:0' shape=(256,) dtype=float32_ref>, <tf.Variable 'u_net/bn3/moving_mean:0' shape=(256,) dtype=float32_ref>, <tf.Variable 'u_net/bn3/moving_variance:0' shape=(256,) dtype=float3...
        inputs = {Tensor} Tensor("u_net/conv8/lrelu:0", shape=(25, 1, 1, 512), dtype=float32)
        name = {str} 'u_net/deconv7'
        outputs = {Tensor} Tensor("u_net/deconv7/Identity:0", shape=(25, 2, 2, 512), dtype=float32)
        w_init = {TruncatedNormal} <tensorflow.python.ops.init_ops.TruncatedNormal object at 0x7fd754f26358>
        x = {Tensor} Tensor("bad_image:0", shape=(25, 1, 1, 1), dtype=float32)
    opened by ssimine 2
  • FHD(1920x1080) size image

    Hello. Your paper is very helpful; thank you. However, I want to train on larger images, and I need advice on how to do that.

    Can you tell me how to configure the network?

    For reference, I am studying how to remove aliasing, using pairs of aliased and ground-truth images.

    opened by ssimine 1
  • name 'tl' is not defined

    Hello,

    I came across your work and was very excited to run it. Unfortunately, I keep getting the error message "name 'tl' is not defined" when running train.py (https://github.com/tensorlayer/DAGAN/blob/master/train.py#L475), which I launched according to the README. The tl object provides many of the functions used throughout the script, so it doesn't look like I can get away with commenting out all the tl lines. Can you please help?
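
    A plausible fix (an assumption, as the thread has no reply): TensorLayer is conventionally imported under the alias tl, so adding the import at the top of train.py should resolve the NameError:

        import tensorlayer as tl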

    opened by gtm2122 0
  • Questions about the weight for pixel loss

    Hello,

    I noticed that you use 15 as the weight for the pixel loss, which is much larger than the other weights, such as those for the perceptual loss, frequency loss and adversarial (generator) loss. With such a large coefficient, the influence of the discriminator is reduced and the generator may effectively become a direct estimator. I would like to know whether mode collapse can happen when such a large pixel-loss weight is used.
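
    For concreteness, the weighting under discussion looks schematically like the sketch below; only the pixel-loss weight of 15 comes from this thread, and the other coefficients are placeholders rather than the paper's confirmed values:

        # Schematic generator objective (placeholder weights except w_pixel).
        def generator_loss(pixel_mse, freq_mse, vgg_loss, adv_loss,
                           w_pixel=15.0, w_freq=0.1, w_vgg=0.0025, w_adv=1.0):
            return (w_pixel * pixel_mse + w_freq * freq_mse
                    + w_vgg * vgg_loss + w_adv * adv_loss)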

    BTW, I implemented a conditional WGAN for MRI without the pixel loss, but the image quality is not good enough.

    Thanks!

    opened by zhaodongsun 0
  • Is this applicable to a different type of dataset?

    Hello, my question is: can this model be applied to generate new datasets for other applications, such as phase-contrast microscopy images, e.g. live-cell datasets?

    opened by Ayanzadeh93 0
  • Error while loading data_loader.py

    While running data_loader.py, a dimension error shows up at "img_2d = np.transpose(img_2d, (1, 0))". Also, when trying again after reducing the number of datasets for faster training, it fails with:

        Traceback (most recent call last):
          File "sef_data_loader.py", line 94, in <module>
            X_train = X_train[:, :, :, np.newaxis]
        IndexError: too many indices for array
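
    A defensive rewrite of the failing line (an assumption about intent: X_train should be (N, H, W) before the channel axis is appended, and the IndexError suggests it ended up empty or with fewer dimensions after the dataset was reduced):

        import numpy as np

        X_train = np.asarray(X_train)
        if X_train.ndim == 3:                      # (N, H, W) -> (N, H, W, 1)
            X_train = X_train[:, :, :, np.newaxis]
        print(X_train.shape)                       # expect (N, H, W, 1)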

    opened by reshmarenjith 3
  • how to calculate psnr in your paper?

    After reading your paper and code, I am confused about the calculation of PSNR. I used some MRI data to test the zero-filled (ZF) reconstruction and computed the NMSE with your code, and that seems right. But when I calculate the PSNR based on your NMSE, or on an RMSE I defined myself, the result seems wrong. What's more, don't you think the PSNR of the ZF reconstruction with 20% of the data kept is too high? When PSNR reaches 34 dB, images usually already look very similar.
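
    For reference, the textbook definition is PSNR = 20 * log10(MAX / RMSE), where MAX is the image's dynamic range. Whether the paper uses the slice maximum or a fixed range is not stated in this thread, so the helper below is a common convention rather than necessarily the paper's exact variant:

        import numpy as np

        def psnr(ref, rec, data_range=None):
            # Default MAX to the reference image's dynamic range.
            if data_range is None:
                data_range = ref.max() - ref.min()
            rmse = np.sqrt(np.mean((ref - rec) ** 2))
            return 20.0 * np.log10(data_range / rmse)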

    opened by qiuwenyuan19921106 0