SANet: A Slice-Aware Network for Pulmonary Nodule Detection

This paper (SANet) has been accepted by IEEE TPAMI 2021 and is available via early access.

This code and our data are licensed for non-commercial research purposes only.

Introduction

Lung cancer is the most common cause of cancer death worldwide. A timely diagnosis of pulmonary nodules makes it possible to detect lung cancer at an early stage, and thoracic computed tomography (CT) provides a convenient way to diagnose nodules. However, it is hard even for experienced doctors to distinguish them among the massive number of CT slices. Existing nodule datasets are limited in both scale and category, which greatly restricts their applications. In this paper, we collect the largest and most diverse dataset for pulmonary nodule detection to date, named PN9. Specifically, it contains 8,798 CT scans and 40,439 annotated nodules from 9 common classes. We further propose a slice-aware network (SANet) for pulmonary nodule detection. A slice grouped non-local (SGNL) module is developed to capture long-range dependencies among any positions and any channels of one slice group in the feature map. We also introduce a 3D region proposal network to generate pulmonary nodule candidates with high sensitivity, although this detection stage usually comes with many false positives. Subsequently, a false positive reduction (FPR) module is proposed that exploits the multi-scale feature maps. To verify the performance of SANet and the significance of PN9, we perform extensive experiments compared with several state-of-the-art 2D CNN-based and 3D CNN-based detection methods. Promising evaluation results on PN9 demonstrate the effectiveness of our proposed SANet.

[Figure: overview of the SANet architecture]
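
To convey the idea behind the slice grouped non-local module, the sketch below applies a generic 3D non-local block independently within groups of consecutive slices. It is only an illustration of that reading of the abstract: the class name and default group size are made up here, and the actual SGNL module in this repository also models channel-wise dependencies and differs in detail.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SliceGroupedNonLocal(nn.Module):
    """Generic non-local attention applied within groups of consecutive slices (illustrative only)."""

    def __init__(self, channels, group_size=8):
        super().__init__()
        inter = max(channels // 2, 1)
        self.group_size = group_size
        self.theta = nn.Conv3d(channels, inter, kernel_size=1)
        self.phi = nn.Conv3d(channels, inter, kernel_size=1)
        self.g = nn.Conv3d(channels, inter, kernel_size=1)
        self.out = nn.Conv3d(inter, channels, kernel_size=1)

    def forward(self, x):  # x: (B, C, D, H, W)
        b, _, _, h, w = x.shape
        outputs = []
        # Attend only among positions that belong to the same slice group.
        for xg in x.split(self.group_size, dim=2):
            q = self.theta(xg).flatten(2)                      # (B, C', N)
            k = self.phi(xg).flatten(2)                        # (B, C', N)
            v = self.g(xg).flatten(2)                          # (B, C', N)
            attn = F.softmax(q.transpose(1, 2) @ k, dim=-1)    # (B, N, N)
            y = (v @ attn.transpose(1, 2)).view(b, -1, xg.shape[2], h, w)
            outputs.append(xg + self.out(y))                   # residual connection
        return torch.cat(outputs, dim=2)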

Citations

If you are using the code/model/data provided here in a publication, please consider citing:

@article{21PAMI-SANet,
  title={SANet: A Slice-Aware Network for Pulmonary Nodule Detection},
  author={Jie Mei and Ming-Ming Cheng and Gang Xu and Lan-Ruo Wan and Huan Zhang},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2021},
  publisher={IEEE},
  doi={10.1109/TPAMI.2021.3065086}
}

Requirements

The code is built with the following libraries:

In addition, you need to install a custom module for bounding-box NMS and overlap calculation:

cd build/box
python setup.py install

Data

Our new pulmonary nodule dataset PN9 is now available; please refer to here for more information.

Note: Considering the large size of the raw data, we provide the PN9 dataset (after the preprocessing described in Sec. 5.2 of our paper) in two formats: .npy files and .jpg images. The preprocessing consists of spatial normalization (the imaging thickness and spacing are resampled so that the data is 1mm x 1mm x 1mm) and rescaling the intensities to [0, 255]. The .npy files store the exact values of the corresponding samples, while the .jpg images store compressed ones. The .jpg version of the dataset is provided to reduce the size of PN9 for more convenient distribution over the internet. We have run several ablation experiments on both versions of PN9 (i.e., .npy and .jpg), and the difference between the results obtained with the two formats is small.
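
The preprocessing script itself is not included here, but the two steps described above can be sketched roughly as follows. The helper name and the HU clipping window are assumptions for illustration, not values taken from the paper.

import numpy as np
from scipy.ndimage import zoom

def preprocess_ct(volume, spacing, hu_min=-1200.0, hu_max=600.0):
    """volume: (D, H, W) array in Hounsfield units; spacing: (z, y, x) voxel size in mm."""
    # Resample so that every voxel covers 1mm x 1mm x 1mm.
    factors = np.asarray(spacing, dtype=np.float32) / 1.0
    volume = zoom(volume.astype(np.float32), factors, order=1)
    # Clip to a lung window (assumed here) and map linearly to [0, 255].
    volume = np.clip(volume, hu_min, hu_max)
    volume = (volume - hu_min) / (hu_max - hu_min) * 255.0
    return volume.astype(np.uint8)

# np.save('sample.npy', preprocess_ct(raw_volume, raw_spacing))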

Download PN9 and add its information to config.py.
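
As a rough illustration only, the data-related entries might look like the following; the actual key names are defined in this repository's config.py and may differ.

# Hypothetical keys shown for illustration; check config.py for the real ones.
data_config = {
    'data_dir': '/path/to/PN9/npy',              # preprocessed volumes (.npy or .jpg version)
    'train_set_name': '/path/to/PN9/train.txt',
    'val_set_name': '/path/to/PN9/val.txt',
    'test_set_name': '/path/to/PN9/test.txt',
    'annotation_dir': '/path/to/PN9/annotations',
}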

Testing

The pretrained SANet model trained on the .npy files can be downloaded here.

Run the following script to evaluate the model and obtain the results of the FROC analysis.

python test.py --weight='./results/model/model.ckpt' --out_dir='./results/' --test_set_name='./test.txt'
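
For reference, FROC analysis for nodule detection is usually summarized as the sensitivity at a set of false-positives-per-scan operating points. The sketch below is a generic, simplified version of that computation: it assumes each detection has already been matched to ground truth and does not de-duplicate multiple hits on the same nodule. The repository's test.py implements its own evaluation, and the exact operating points used in the paper may differ.

import numpy as np

def froc_sensitivities(hit_flags, confidences, num_nodules, num_scans,
                       fp_rates=(0.125, 0.25, 0.5, 1, 2, 4, 8)):
    """hit_flags[i] is True if detection i matches a ground-truth nodule."""
    order = np.argsort(-np.asarray(confidences, dtype=np.float64))
    hits = np.asarray(hit_flags, dtype=bool)[order]
    tp = np.cumsum(hits)                  # true positives as the confidence threshold is lowered
    fp = np.cumsum(~hits)                 # false positives as the confidence threshold is lowered
    sens = tp / max(num_nodules, 1)
    fps_per_scan = fp / max(num_scans, 1)
    # Sensitivity at each operating point (best recall whose FP rate stays within the budget).
    sens_at = [float(sens[fps_per_scan <= r].max()) if np.any(fps_per_scan <= r) else 0.0
               for r in fp_rates]
    return sens_at, float(np.mean(sens_at))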

Training

This implementation supports multi-GPU training via data parallelism.

Adjust the training and data configuration in config.py, especially the path to the preprocessed data.

Run the training script:

python train.py
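
train.py sets up the multi-GPU training itself; purely as an illustration of data-parallel training in PyTorch (the stand-in module below is not the actual network), it amounts to something like:

import torch
import torch.nn as nn

# Stand-in for the actual SANet model built inside train.py.
model = nn.Conv3d(1, 16, kernel_size=3, padding=1)
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)   # replicate the module across all visible GPUs
if torch.cuda.is_available():
    model = model.cuda()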

Contact

For any questions, please contact me via e-mail: [email protected].

Acknowledgment

This code is based on the NoduleNet codebase.

Comments
  • Can you provide the process data code?

    Hi, thanks for your great work! If I want to use the LUNA16 dataset to train SANet, how can I get the processed data and the annotation CSV? Can you provide the preprocessing code? Best, Yaliang

    opened by yl255 2
  • Problems of the test.py

    Hello, developer. Recently, when running the test with the official pretrained ckpt file and the .npy-format PN9 dataset, I got a segmentation fault, and running test.py on an RTX 2070 with 8 GB of VRAM reported insufficient GPU memory. I would like to ask: what causes the segmentation fault when running test.py, and how can it be fixed? How much GPU memory is needed to run the test script? Looking forward to your reply; it matters a lot to me. Thanks!

    opened by jczzp 2
  • Mismatch between the pretrained weights provided and the actual model architecture.

    I tried to implement your model and tested it on my dataset, but I found that the pretrained weights you provided do not match for rcnn.back3.0.weight. I think rcnn_crop is still needed in the test phase, is that right? Here is the error message: size mismatch for rcnn_crop.back3.0.weight: copying a param with shape torch.Size([64, 65, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 128, 3, 3, 3]).

    opened by VitalC-3026 1
  • Compile Error

    Hi mj: When I compile using "python setup.py install", it returns the error: "g++: error: ***/SANet/build/box/build/temp.linux-x86_64-3.8/box.o: No such file or directory". How can I fix it? Many thanks.

    opened by gdww97 0
  • A request for PN9 dataset

    Thanks for your good work. Please let me know how to obtain the PN9 dataset. I sent a request for the PN9 dataset to [[email protected]] but have not received any reply.

    opened by NieXiuping 0
  • Problem with the self.crop function in bbox_reader.py

    Hello author, I ran into some problems while trying to train SANet on another dataset, LUNA16. In the crop step of bbox_reader.py, cropping volumes with different numbers of slices to a size of 128, 128, 128 fails, i.e., in the following code:

    sample, target, bboxes, coord = self.crop(imgs, [], bboxes, isScale=False, isRand=True)
    if sample.shape[1] != self.cfg['crop_size'][0] or sample.shape[2] != self.cfg['crop_size'][1] or sample.shape[3] != self.cfg['crop_size'][2]:
        print(filename, sample.shape)

    For example: one patient's 260 slices of size 512, 512 are passed to the crop function, and the cropped shape is not 128, 128, 128 but 228, 149, 128 (batch size = 1). imgs = (1, 260, 512, 512), sample = (1, 128, 149, 128), Shape Incorrect 161855 (1, 128, 149, 128)

    Another example: imgs = (1, 300, 512, 512), sample = (1, 351, 128, 128)

    Since I noticed that you have a check that prints the shapes of such incorrectly cropped volumes, I guess you may have run into this situation before. When you have time, could you please take a look at whether you have encountered a similar case, and how did you resolve it? Thanks~

    opened by frankchen121212 0