HistoSeg: Quick attention with multi-loss function for multi-structure segmentation in digital histology images

Overview


Histological Image Segmentation
This repository contains the code to train and test HistoSeg.

HistoSeg is an encoder-decoder DCNN that uses novel Quick Attention modules and a multi-loss function to generate segmentation masks from histopathological images with greater accuracy.
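The exact Quick Attention module is defined in the repository's training code. Purely as an illustration, a minimal sketch of a quick-attention-style gate in Keras (the structure and names below are assumptions, not the authors' implementation) could look like this:

# Illustration only -- NOT the authors' exact Quick Attention module.
# A lightweight spatial gate: one 1x1 convolution produces a sigmoid
# attention map that re-weights the incoming feature map, with a
# residual path so un-attended information is preserved.
import tensorflow as tf
from tensorflow.keras import layers

def quick_attention_block(x):
    # 1x1 conv -> single-channel sigmoid attention map, shape (B, H, W, 1)
    attn = layers.Conv2D(1, kernel_size=1, activation='sigmoid')(x)
    # broadcast-multiply over channels, then add the residual
    return x + x * attn

A single 1x1 convolution keeps the added cost negligible, which is the usual motivation behind "quick" attention designs.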

Datasets used for training HistoSeg

MoNuSeg - Multi-organ nuclei segmentation from H&E stained histopathological images

link: https://monuseg.grand-challenge.org/

GlaS - Gland segmentation in histology images

link: https://warwick.ac.uk/fac/cross_fac/tia/data/glascontest/

Trained weights are available in the repo for testing HistoSeg

For MoNuSeg Dataset link: https://github.com/saadwazir/HistoSeg/blob/main/HistoSeg_MoNuSeg_.h5

For GlaS Dataset link: https://github.com/saadwazir/HistoSeg/blob/main/HistoSeg_GlaS_.h5

Data Preprocessing for Training

After downloading a dataset, you must generate patches of the images and their corresponding masks (ground truth) and convert them into NumPy arrays, or you can use data loaders directly inside the code. You can generate patches using Image_Patchyfy. Link: https://github.com/saadwazir/Image_Patchyfy

For example, to train HistoSeg on the MoNuSeg dataset, the array shapes after creating patches are as follows (a minimal conversion sketch follows the list):

X_train 1470x256x256x3
y_train 1470x256x256x1
X_val 686x256x256x3
y_val 686x256x256x1
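As a minimal sketch of the conversion (the folder layout and file names below are assumptions; adapt them to whatever Image_Patchyfy produced), the patch folders can be stacked into these arrays with Pillow and NumPy:

# Minimal sketch: stack patch images into the NumPy arrays shown above.
# The glob patterns are hypothetical -- point them at your patch folders.
import glob
import numpy as np
from PIL import Image

def folder_to_array(pattern, grayscale=False):
    paths = sorted(glob.glob(pattern))
    mode = 'L' if grayscale else 'RGB'
    arr = np.stack([np.array(Image.open(p).convert(mode)) for p in paths])
    return arr[..., np.newaxis] if grayscale else arr

X_train = folder_to_array('patches/train/images/*.png')       # (1470, 256, 256, 3)
y_train = folder_to_array('patches/train/masks/*.png', True)  # (1470, 256, 256, 1)
np.save('X_train.npy', X_train)
np.save('y_train.npy', y_train)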

Data Preprocessing for Testing

You just need to resize the images and their corresponding masks (ground truth) to the same size, i.e., all samples must have the same resolution, and then convert them into NumPy arrays.

For example, to test HistoSeg on the MoNuSeg dataset, the array shapes after conversion are:

X_test 14x1000x1000x3 
y_test 14x1000x1000x1
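A minimal sketch of that preprocessing follows (the test/images and test/masks paths are assumptions; adapt them to your copy of MoNuSeg). Nearest-neighbour resampling is used for the masks so resizing does not introduce interpolated label values:

# Minimal sketch: resize every test sample to one resolution and stack.
import glob
import numpy as np
from PIL import Image

SIZE = (1000, 1000)
X_test = np.stack([np.array(Image.open(p).convert('RGB').resize(SIZE))
                   for p in sorted(glob.glob('test/images/*.png'))])
y_test = np.stack([np.array(Image.open(p).convert('L').resize(SIZE, Image.NEAREST))
                   for p in sorted(glob.glob('test/masks/*.png'))])[..., np.newaxis]
np.save('X_test_MoNuSeg_14x1000x1000.npy', X_test)  # (14, 1000, 1000, 3)
np.save('y_test_MoNuSeg_14x1000x1000.npy', y_test)  # (14, 1000, 1000, 1)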

Requirements

pip install matplotlib
pip install seaborn
pip install tqdm
pip install scikit-learn
conda install tensorflow==2.7
pip install keras==2.2.4

Training

To train HistoSeg, use the following command:

python HistoSeg_Train.py --train_images 'path' --train_masks 'path' --val_images 'path' --val_masks 'path' --width 256 --height 256 --epochs 100 --batch 16
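The multi-loss itself lives in HistoSeg_Train.py. Shown here only as an assumption, a common multi-loss for binary segmentation combines binary cross-entropy with a soft Dice term:

# Sketch of a combined (multi) loss for binary segmentation: BCE + soft
# Dice. An assumption for illustration, not necessarily the exact loss
# implemented in HistoSeg_Train.py.
import tensorflow as tf

def dice_loss(y_true, y_pred, smooth=1.0):
    y_true = tf.cast(y_true, y_pred.dtype)
    inter = tf.reduce_sum(y_true * y_pred, axis=[1, 2, 3])
    union = tf.reduce_sum(y_true + y_pred, axis=[1, 2, 3])
    return 1.0 - (2.0 * inter + smooth) / (union + smooth)

def multi_loss(y_true, y_pred):
    bce = tf.keras.losses.binary_crossentropy(y_true, y_pred)  # (batch, H, W)
    return tf.reduce_mean(bce, axis=[1, 2]) + dice_loss(y_true, y_pred)

# model.compile(optimizer='adam', loss=multi_loss)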

Testing

To test HistoSeg, use the following command:

python HistoSeg_Test.py --images 'path' --masks 'path' --weights 'path' --width 1000 --height 1000

For example, to test HistoSeg on the MoNuSeg dataset with the trained weights, use the following command:
python HistoSeg_Test.py --images 'X_test_MoNuSeg_14x1000x1000.npy' --masks 'y_test_MoNuSeg_14x1000x1000.npy' --weights 'HistoSeg_MoNuSeg_.h5' --width 1000 --height 1000
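If you prefer to score the predictions yourself, the sketch below loads the arrays, predicts, and computes a mean IoU. It assumes the .h5 file stores a full saved model and that the masks are binary; both are assumptions, not guarantees about the shipped files:

# Hedged evaluation sketch. If the .h5 stores only weights, rebuild the
# architecture from the repo and call model.load_weights instead.
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model('HistoSeg_MoNuSeg_.h5', compile=False)
X_test = np.load('X_test_MoNuSeg_14x1000x1000.npy') / 255.0   # scaling is an assumption
y_test = np.load('y_test_MoNuSeg_14x1000x1000.npy') > 0

pred = model.predict(X_test, batch_size=1) > 0.5              # binarise probability maps
inter = np.logical_and(pred, y_test).sum(axis=(1, 2, 3))
union = np.logical_or(pred, y_test).sum(axis=(1, 2, 3))
print('mean IoU:', (inter / union).mean())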
Comments
  • Dice's calculation is obviously wrong

    In the GlaS dataset, your IoU is only 76.73. According to the mathematical definitions of IoU and the Dice coefficient, a Dice of 99.09 is impossible at that IoU, so your Dice coefficient calculation is obviously wrong. You should check your code's implementation of the Dice computation (a sanity check is sketched at the end of this section).

    opened by Frank-Star-fn 2
  • About F1-score calculation

    I want to ask a question: when you calculated F1, did you use the algorithm provided by the GlaS challenge website, which scores each object (i.e., the same as instance segmentation)? Or is your F1 score computed per pixel?

    Thank you.

    opened by aihcyllop 0
  • Sorry, I don't know why my training .npy file does not work. Could you please tell me why, or provide a .npy file for training?

    The following errors result from training with the .npy file:

    Traceback (most recent call last):
      File "e:\HistoSeg\HistoSeg-Tensorflow\HistoSeg_Train.py", line 629
        results = model.fit(X_train, y_train, batch_size=batch_arg, epochs=epochs_arg, callbacks=callbacks, validation_data=(X_test, y_test), verbose=1)
      File "E:\Users\15199\anaconda3\envs\Histoseg\lib\site-packages\tensorflow\python\keras\engine\training_v1.py", line 793, in fit
        return func.fit(...)
      File "E:\Users\15199\anaconda3\envs\Histoseg\lib\site-packages\tensorflow\python\keras\engine\training_arrays_v1.py", line 644, in fit
        return fit_loop(...)
      File "E:\Users\15199\anaconda3\envs\Histoseg\lib\site-packages\tensorflow\python\keras\engine\training_arrays_v1.py", line 380, in model_iteration
        batch_outs = f(ins_batch)
      File "E:\Users\15199\anaconda3\envs\Histoseg\lib\site-packages\tensorflow\python\keras\backend.py", line 4067, in __call__
        fetched = self._callable_fn(*array_vals, ...)
      File "E:\Users\15199\anaconda3\envs\Histoseg\lib\site-packages\tensorflow\python\client\session.py", line 1483, in __call__
        ret = tf_session.TF_SessionRunCallable(self._session._session, ...)
    tensorflow.python.framework.errors_impl.InvalidArgumentError: 2 root error(s) found.
      (0) INVALID_ARGUMENT: assertion failed: [labels out of bound]
        [Condition x < y did not hold element-wise:]
        [x (metrics/mean_io_u/confusion_matrix/control_dependency:0) = ] [0 0 0...]
        [y (metrics/mean_io_u/confusion_matrix/Cast_2:0) = ] [2]
        [[{{function_node metrics_mean_io_u_confusion_matrix_assert_less_Assert_AssertGuard_false_7653}}{{node Assert}}]]
        [[expanded_conv_11_project_BN/cond/then/_980/FusedBatchNormV3/_6151]]
      (1) INVALID_ARGUMENT: assertion failed: [labels out of bound] (same assertion as above)

    Thanks.

    opened by DanggoRyo 2
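Two quick notes on the comments above. For a single binary mask, Dice and IoU are tied by Dice = 2·IoU / (1 + IoU), which makes the first comment easy to sanity-check. And the "labels out of bound" assertion in the last comment is what tf.keras.metrics.MeanIoU raises when mask values fall outside {0, 1} (e.g., 0/255 masks), so binarizing the masks before training usually resolves it. A sketch of both:

# Dice implied by the reported IoU on GlaS (per-mask relation):
iou = 0.7673
dice = 2 * iou / (1 + iou)
print(f'Dice implied by IoU={iou}: {dice:.4f}')  # ~0.8683, far below 0.9909

# Likely fix for the MeanIoU "labels out of bound" error: ensure masks
# are strictly 0/1 before training (assumption: 0/255 input masks).
# y_train = (y_train > 0).astype('float32')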
Owner
Saad Wazir
Saad Wazir is currently working as a researcher at the Embedded Systems & Pervasive Computing (EPIC) Lab at the National University of Computer and Emerging Sciences.