Implementation of the paper "LadderNet: Multi-path networks based on U-Net for medical image segmentation"

Overview

Requirements

  • Python 3.6
  • PyTorch 0.4
  • configparser

How to run

  • run python prepare_datasets_DRIVE.py to generate the HDF5 files of the training data
  • run cd src
  • run python retinaNN_training.py to train
  • run python retinaNN_predict.py to test

Parameter definition

  • parameters (path, patch size, etc.) are defined in "configuration.txt"
  • training parameters are defined in src/retinaNN_training.py, lines 49 to 84, marked with the comment "=====Define parameters here ========="
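
As an illustration, a minimal sketch of reading these settings with Python's configparser; the 'data paths' section and 'path_local' option appear in the training script, while any other option names would be assumptions:

    import configparser

    # Read the experiment configuration (run from the repository root).
    config = configparser.ConfigParser()
    config.read('configuration.txt')

    # 'data paths' / 'path_local' is the section/option used by the training script.
    path_data = config.get('data paths', 'path_local')
    print('training data directory:', path_data)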

Pretrained weights

  • pretrained weights are stored in "src/checkpoint"
  • results are stored in "test/"
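
As a hedged sketch (the checkpoint file name and the layout of the saved dictionary are assumptions, not taken from the repository), the pretrained weights can be inspected with PyTorch:

    import torch

    # Load a checkpoint from src/checkpoint; 'laddernet.pt' is a hypothetical file name.
    checkpoint = torch.load('checkpoint/laddernet.pt', map_location='cpu')

    # The training script may save either a bare state_dict or a wrapper dict.
    state_dict = checkpoint['net'] if isinstance(checkpoint, dict) and 'net' in checkpoint else checkpoint

    # Print the first few parameter tensors as a sanity check.
    for name, tensor in list(state_dict.items())[:5]:
        print(name, tuple(tensor.shape))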

Results

The results in the ./test folder refer to the trained model with the minimum validation loss. The ./test folder includes:

  • Model:
    • test_model.png schematic representation of the neural network
    • test_architecture.json description of the model in json format
    • test_best_weights.h5 weights of the model which reported the minimum validation loss, as HDF5 file
    • test_last_weights.h5 weights of the model at the last epoch (150th), as HDF5 file
    • test_configuration.txt configuration of the parameters of the experiment
  • Experiment results:
    • performances.txt summary of the test results, including the confusion matrix
    • Precision_recall.png the precision-recall plot and the corresponding Area Under the Curve (AUC)
    • ROC.png the Receiver Operating Characteristic (ROC) curve and the corresponding AUC
    • all_*.png the 20 images of the pre-processed originals, ground truth and predictions for the DRIVE testing dataset
    • sample_input_*.png sample of 40 patches of the pre-processed original training images and the corresponding ground truth
    • test_Original_GroundTruth_Prediction*.png from top to bottom: the original pre-processed image, the ground truth, and the prediction. In the predicted image each pixel shows the predicted vessel probability; no threshold is applied.

The following table compares this method with other recent techniques that report their performance in terms of Area Under the ROC Curve (AUC ROC) on the DRIVE dataset.

Method                     AUC ROC on DRIVE
Soares et al. [1]          0.9614
Azzopardi et al. [2]       0.9614
Osareh et al. [3]          0.9650
Roychowdhury et al. [4]    0.9670
Fraz et al. [5]            0.9747
Qiaoliang et al. [6]       0.9738
Melinscak et al. [7]       0.9749
Liskowski et al. [8]       0.9790
orobix                     0.9790
this method                0.9794
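
For context, the AUC ROC values above are areas under the ROC curve computed over per-pixel vessel probabilities inside the FOV. A minimal sketch with scikit-learn on toy data (the variable names y_true / y_scores mirror those used in the prediction script; the data itself is made up):

    import numpy as np
    from sklearn.metrics import roc_auc_score, precision_recall_curve, auc

    # Toy per-pixel labels and predicted vessel probabilities (illustrative only).
    y_true = np.array([0, 0, 1, 1, 1, 0])
    y_scores = np.array([0.1, 0.4, 0.8, 0.9, 0.6, 0.3])

    auc_roc = roc_auc_score(y_true, y_scores)              # reported as "AUC ROC"
    precision, recall, _ = precision_recall_curve(y_true, y_scores)
    auc_pr = auc(recall, precision)                        # area under Precision_recall.png
    print(f'AUC ROC: {auc_roc:.4f}, AUC PR: {auc_pr:.4f}')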

Comments
  • RuntimeError when running training script

    RuntimeError when running training script

    Hello,

    I'm currently conducting comparison research on convolutional neural networks. Due to GPU issues, I thought I would try running the script on the CPU while my GPU problem gets resolved, which may be related to this error. I'm afraid, though, that this might not be the cause and that I may have followed a wrong installation process instead.

    The following error is thrown once I run retinaNN_training.py. I tried reducing the number of epochs and the batch size, as I thought that might be the issue, but the error persisted:

    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "D:\Program_Files\Miniconda3\lib\multiprocessing\spawn.py", line 105, in spawn_main
        exitcode = _main(fd)
      File "D:\Program_Files\Miniconda3\lib\multiprocessing\spawn.py", line 114, in _main
        prepare(preparation_data)
      File "D:\Program_Files\Miniconda3\lib\multiprocessing\spawn.py", line 225, in prepare
        _fixup_main_from_path(data['init_main_from_path'])
      File "D:\Program_Files\Miniconda3\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
        run_name="__mp_main__")
      File "D:\Program_Files\Miniconda3\lib\runpy.py", line 263, in run_path
        pkg_name=pkg_name, script_name=fname)
      File "D:\Program_Files\Miniconda3\lib\runpy.py", line 96, in _run_module_code
        mod_name, mod_spec, pkg_name, script_name)
      File "D:\Program_Files\Miniconda3\lib\runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "D:\Assignments\New_2019-2020\Desktop\Assignments\CT6039_Dissertation_30_Credits\Source_Code\Main_Source\Lab_Experiments\Experiment_4\LadderNet\LadderNet-master\src\retinaNN_training.py", line 206, in <module>
        train(epoch)
      File "D:\Assignments\New_2019-2020\Desktop\Assignments\CT6039_Dissertation_30_Credits\Source_Code\Main_Source\Lab_Experiments\Experiment_4\LadderNet\LadderNet-master\src\retinaNN_training.py", line 164, in train
        for batch_idx, (inputs, targets) in enumerate(tqdm(train_loader)):
      File "D:\Program_Files\Miniconda3\lib\site-packages\tqdm\std.py", line 1081, in __iter__
        for obj in iterable:
      File "D:\Program_Files\Miniconda3\lib\site-packages\torch\utils\data\dataloader.py", line 279, in __iter__
        return _MultiProcessingDataLoaderIter(self)
      File "D:\Program_Files\Miniconda3\lib\site-packages\torch\utils\data\dataloader.py", line 719, in __init__
        w.start()
      File "D:\Program_Files\Miniconda3\lib\multiprocessing\process.py", line 112, in start
        self._popen = self._Popen(self)
      File "D:\Program_Files\Miniconda3\lib\multiprocessing\context.py", line 223, in _Popen
        return _default_context.get_context().Process._Popen(process_obj)
      File "D:\Program_Files\Miniconda3\lib\multiprocessing\context.py", line 322, in _Popen
        return Popen(process_obj)
      File "D:\Program_Files\Miniconda3\lib\multiprocessing\popen_spawn_win32.py", line 46, in __init__
        prep_data = spawn.get_preparation_data(process_obj._name)
      File "D:\Program_Files\Miniconda3\lib\multiprocessing\spawn.py", line 143, in get_preparation_data
        _check_not_importing_main()
      File "D:\Program_Files\Miniconda3\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main
        is not going to be frozen to produce an executable.''')
    RuntimeError:
        An attempt has been made to start a new process before the
        current process has finished its bootstrapping phase.

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
        in the main module:

            if __name__ == '__main__':
                freeze_support()
                ...

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable.

    The interleaved traceback from the parent process ends with:

      File "D:\Program_Files\Miniconda3\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
        reduction.dump(process_obj, to_child)
      File "D:\Program_Files\Miniconda3\lib\multiprocessing\reduction.py", line 60, in dump
        ForkingPickler(file, protocol).dump(obj)
    BrokenPipeError: [Errno 32] Broken pipe
      0%|          | 0/9 [00:02<?, ?it/s]

    Any advice around this would be really appreciated.
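
    A hedged note (not an official fix from the repository): this is the standard Windows/CPU spawn issue. DataLoader worker processes re-import the main module, so the module-level train(epoch) calls run again in each worker. Guarding the entry point behind __main__, or setting num_workers=0 while debugging on CPU, usually resolves it. A self-contained sketch of the pattern:

        # Hedged sketch of the guard pattern; this is not the repository's code.
        import torch
        from torch.utils.data import DataLoader, TensorDataset

        def train_one_epoch(loader):
            for inputs, targets in loader:
                pass  # forward/backward pass would go here

        if __name__ == '__main__':
            data = TensorDataset(torch.randn(32, 1, 48, 48),
                                 torch.zeros(32, dtype=torch.long))
            # num_workers=0 is also a quick workaround when running on CPU.
            loader = DataLoader(data, batch_size=8, shuffle=True, num_workers=2)
            train_one_epoch(loader)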

    opened by YrgkenKoutsi 3
  • MemoryError

    MemoryError

    Traceback (most recent call last):
      File "retinaNN_training.py", line 122, in <module>
        train_set = TrainDataset(patches_imgs_train[train_ind,...], patches_masks_train[train_ind,...])
    MemoryError

    What should I do? Thanks!

    opened by LJM-GitHub 3
  • ImportError: libGL.so.1: cannot open shared object file: No such file or directory

    ImportError: libGL.so.1: cannot open shared object file: No such file or directory

    root@ip:/home/ubuntu/workspace/LadderNet/src# python retinaNN_training.py
    Traceback (most recent call last):
      File "retinaNN_training.py", line 17, in <module>
        from lib.help_functions import *
      File "../lib/help_functions.py", line 4, in <module>
        from matplotlib import pyplot as plt
      File "/root/anaconda3/lib/python3.6/site-packages/matplotlib/pyplot.py", line 115, in <module>
        _backend_mod, new_figure_manager, draw_if_interactive, _show = pylab_setup()
      File "/root/anaconda3/lib/python3.6/site-packages/matplotlib/backends/__init__.py", line 62, in pylab_setup
        [backend_name], 0)
      File "/root/anaconda3/lib/python3.6/site-packages/matplotlib/backends/backend_qt5agg.py", line 15, in <module>
        from .backend_qt5 import (
      File "/root/anaconda3/lib/python3.6/site-packages/matplotlib/backends/backend_qt5.py", line 19, in <module>
        import matplotlib.backends.qt_editor.figureoptions as figureoptions
      File "/root/anaconda3/lib/python3.6/site-packages/matplotlib/backends/qt_editor/figureoptions.py", line 20, in <module>
        import matplotlib.backends.qt_editor.formlayout as formlayout
      File "/root/anaconda3/lib/python3.6/site-packages/matplotlib/backends/qt_editor/formlayout.py", line 54, in <module>
        from matplotlib.backends.qt_compat import QtGui, QtWidgets, QtCore
      File "/root/anaconda3/lib/python3.6/site-packages/matplotlib/backends/qt_compat.py", line 140, in <module>
        from PyQt5 import QtCore, QtGui, QtWidgets
    ImportError: libGL.so.1: cannot open shared object file: No such file or directory
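
    A common workaround (a suggestion, not a fix confirmed by the repository) is to select a non-GUI matplotlib backend before pyplot is imported, so the Qt/OpenGL stack is never loaded on a headless server; installing the system libGL package is the alternative. A minimal sketch:

        # Hedged sketch: force the non-interactive Agg backend before any
        # pyplot import (e.g. at the top of lib/help_functions.py).
        import matplotlib
        matplotlib.use('Agg')
        from matplotlib import pyplot as plt

        plt.plot([0, 1], [0, 1])
        plt.savefig('sanity_check.png')  # hypothetical output file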

    opened by LJM-GitHub 3
  • error: OSError: [Errno 22] Invalid argument

    error: OSError: [Errno 22] Invalid argument

    path_data = config.get('data paths', 'path_local')
      File "C:\Users\virkt\Anaconda3\envs\code\lib\configparser.py", line 780, in get
        d = self._unify_values(section, vars)
      File "C:\Users\virkt\Anaconda3\envs\code\lib\configparser.py", line 1146, in _unify_values
        raise NoSectionError(section) from None
    configparser.NoSectionError: No section: 'data paths'
    Exception ignored in: <_io.TextIOWrapper name='' mode='w' encoding='cp1252'>
    OSError: [Errno 22] Invalid argument

    Can anyone help out?
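
    A hedged observation: this NoSectionError usually means configuration.txt was not found, because config.read() silently returns an empty list for missing files and the later get() call then fails. A small sketch of how to make that explicit (the relative path is an assumption about the working directory):

        import configparser
        import os

        config = configparser.ConfigParser()
        read_files = config.read('configuration.txt')  # path relative to the working directory
        if not read_files:
            raise FileNotFoundError('configuration.txt not found from ' + os.getcwd())
        print(config.sections())  # should include 'data paths'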

    opened by manvirvirk 2
  • Difference between paper and code

    Difference between paper and code

    I've found two differences between the code and the paper. It would be really helpful if you could check them for me.

    1. Batch normalization is turned off in the code. https://github.com/juntang-zhuang/LadderNet/blob/35ca1703db8fa3ded185bee2425866fce01511d8/src/LadderNetv65.py#L25
    2. I can't find the shared-weights residual block. Every time you declare the network variable, a new, independent residual block is created, isn't it? (See the sketch below.) https://github.com/juntang-zhuang/LadderNet/blob/35ca1703db8fa3ded185bee2425866fce01511d8/src/LadderNetv65.py#L92
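
    For reference, a hedged sketch (not taken from LadderNetv65.py) of what a shared-weights residual block as described in the paper could look like: one conv layer is applied twice, so both convolutions in the block share parameters.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class SharedResBlock(nn.Module):
            """Residual block whose two convolutions reuse the same weights."""
            def __init__(self, channels, drop_p=0.25):
                super().__init__()
                self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
                self.drop = nn.Dropout2d(drop_p)

            def forward(self, x):
                out = F.relu(self.conv(x))        # first application of the shared conv
                out = self.drop(out)
                out = F.relu(self.conv(out) + x)  # same conv module reused -> shared weights
                return out

        print(SharedResBlock(16)(torch.randn(1, 16, 48, 48)).shape)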
    opened by whikwon 2
  • Loss getting too high

    Loss getting too high

    Hi!

    I tried to execute your code and experienced strange behavior in the training and validation losses. I didn't change any parameters; I just cloned the repository and followed the instructions for execution without changing anything. These are the first few epochs:

    Epoch 0: 
    Train loss 0.347656
    Valid loss: 5.1115
    
    Epoch 1: 
    Train loss 0.219913
    Valid loss: 3.2819
    
    Epoch 2: 
    Train loss 0.175050
    Valid loss: 3.2127
    
    Epoch 3: 
    Train loss 0.160487
    Valid loss: 3.0469
    
    Epoch 4: 
    Train loss 1629511453423918336.000000
    Valid loss: 18624358.1250
    
    

    I've tried with PyTorch 0.4.1 and PyTorch 0.4.0 and it didn't change the behavior. The loss keeps increasing as training continues.

    Extra info: Python 3.6, CUDA 9.0, Ubuntu 14.04
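
    A hedged suggestion rather than a confirmed fix: when the loss suddenly explodes like this, clipping gradients before each optimizer step (or lowering the learning rate) is a common mitigation. A minimal, self-contained sketch with a stand-in model:

        import torch
        import torch.nn as nn

        net = nn.Conv2d(1, 2, kernel_size=3, padding=1)   # stand-in for LadderNet
        optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
        criterion = nn.CrossEntropyLoss()

        inputs = torch.randn(4, 1, 48, 48)                # toy batch of patches
        targets = torch.randint(0, 2, (4, 48, 48))        # toy vessel masks

        optimizer.zero_grad()
        loss = criterion(net(inputs), targets)
        loss.backward()
        torch.nn.utils.clip_grad_norm_(net.parameters(), max_norm=1.0)  # keep gradients bounded
        optimizer.step()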

    opened by wellescastro 2
  • Performance comparison with respect to parameter count

    Performance comparison with respect to parameter count

    Hi there! I've just come upon your LadderNet publication and was wondering whether you've also evaluated its performance with respect to the parameter count. In particular, I'm interested in whether there is any benefit of using two U-Nets of depth x in a LadderNet architecture versus a single U-Net (with skip connections) of depth 2 * x. As far as I understand, the number of parameters should be roughly the same in both cases, correct? I could imagine both outcomes: the single U-Net performing better due to a bigger receptive field, or the LadderNet due to the additional paths. Do you have any data on that?

    opened by christian-steinmeyer 1
  • BrokenPipeError: [Errno 32] Broken pipe

    BrokenPipeError: [Errno 32] Broken pipe

    I'm getting this error, please help:

    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "C:\Users\virkt\Anaconda3\lib\multiprocessing\spawn.py", line 105, in spawn_main
        exitcode = _main(fd)
      File "C:\Users\virkt\Anaconda3\lib\multiprocessing\spawn.py", line 114, in _main
        prepare(preparation_data)
      File "C:\Users\virkt\Anaconda3\lib\multiprocessing\spawn.py", line 225, in prepare
        _fixup_main_from_path(data['init_main_from_path'])
      File "C:\Users\virkt\Anaconda3\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
        run_name="__mp_main__")
      File "C:\Users\virkt\Anaconda3\lib\runpy.py", line 263, in run_path
        pkg_name=pkg_name, script_name=fname)
      File "C:\Users\virkt\Anaconda3\lib\runpy.py", line 96, in _run_module_code
        mod_name, mod_spec, pkg_name, script_name)
      File "C:\Users\virkt\Anaconda3\lib\runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "D:\LadderNet-master\src\retinaNN_training.py", line 206, in <module>
        train(epoch)
      File "D:\LadderNet-master\src\retinaNN_training.py", line 164, in train
        for batch_idx, (inputs, targets) in enumerate(tqdm(train_loader)):
      File "C:\Users\virkt\Anaconda3\lib\site-packages\tqdm\_tqdm.py", line 1005, in __iter__
        for obj in iterable:
      File "C:\Users\virkt\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 279, in __iter__
        return _MultiProcessingDataLoaderIter(self)
      File "C:\Users\virkt\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 719, in __init__
        w.start()
      File "C:\Users\virkt\Anaconda3\lib\multiprocessing\process.py", line 112, in start
        self._popen = self._Popen(self)
      File "C:\Users\virkt\Anaconda3\lib\multiprocessing\context.py", line 223, in _Popen
        return _default_context.get_context().Process._Popen(process_obj)
      File "C:\Users\virkt\Anaconda3\lib\multiprocessing\context.py", line 322, in _Popen
        return Popen(process_obj)
      File "C:\Users\virkt\Anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 46, in __init__
        prep_data = spawn.get_preparation_data(process_obj._name)
      File "C:\Users\virkt\Anaconda3\lib\multiprocessing\spawn.py", line 143, in get_preparation_data
        _check_not_importing_main()
      File "C:\Users\virkt\Anaconda3\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main
        is not going to be frozen to produce an executable.''')
    RuntimeError:
        An attempt has been made to start a new process before the
        current process has finished its bootstrapping phase.

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
        in the main module:

            if __name__ == '__main__':
                freeze_support()
                ...

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable.

    The interleaved traceback from the parent process ends with:

      File "C:\Users\virkt\Anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
        reduction.dump(process_obj, to_child)
      File "C:\Users\virkt\Anaconda3\lib\multiprocessing\reduction.py", line 60, in dump
        ForkingPickler(file, protocol).dump(obj)
    BrokenPipeError: [Errno 32] Broken pipe

    opened by manvirvirk 1
  • meaning of below mentioned code lines

    meaning of below mentioned code lines

    y_scores, y_true = pred_only_FOV(pred_imgs, gtruth_masks, test_border_masks)  # returns data only inside the FOV

    Can anyone tell me what exactly the code lines below mean?

    print("Calculating results only inside the FOV:")
    print("y scores pixels: " + str(y_scores.shape[0]) + " (radius 270: 270*270*3.14==228906), including background around retina: " + str(pred_imgs.shape[0]*pred_imgs.shape[2]*pred_imgs.shape[3]) + " (584*565==329960)")
    print("y true pixels: " + str(y_true.shape[0]) + " (radius 270: 270*270*3.14==228906), including background around retina: " + str(gtruth_masks.shape[2]*gtruth_masks.shape[3]*gtruth_masks.shape[0]) + " (584*565==329960)")

    opened by manvirvirk 0
  • cycleGAN and laddernet

    cycleGAN and laddernet

    Hi juntang-zhuang, I have reproduced results using your LadderNet and CycleGAN repos. I want to use LadderNet on the output produced by CycleGAN (your CycleGAN repo). Can you tell me how to do it? Thanks.

    opened by manvirvirk 0
Owner: Juntang Zhuang