Data labels and scripts for fastMRI.org

Overview

fastMRI+: Clinical pathology annotations for the fastMRI dataset

The fastMRI dataset is a publicly available MRI raw (k-space) dataset. It has been used widely to train machine learning models for image reconstruction and has been used in reconstruction challenges.

This repo includes clinical pathology annotations for this dataset. The entire knee dataset and approximately 1000 brain datasets have been labeled. The goal of providing these labels is to enable developers of image reconstruction models and algorithms to evaluate their techniques with a focus on the regions that could contain clinical pathology.

Limitations

Each image was labeled by a single radiologist, without the benefit of other views and angles of the same subject, and should therefore be considered in that context. Specifically, the labels should not be treated as clinical ground truth or an exhaustive list of all lesions, but rather as an indication of where a pathology could be present.

Obtaining fastMRI raw data and images

The fastMRI raw data and reference images can be obtained from fastmri.org. You will be able to download and use the data for academic purposes after signing the data sharing agreement. If you are looking for automation for downloading the dataset and training fastMRI models, please see the InnerEye Deep Learning Toolkit.

Labeling procedure and generating DICOM images from fastMRI data

In order to label the data, DICOM files were generated from the fastMRI dataset, and we are providing a script, fastmri_to_dicom.py, to document the procedure. The script can be invoked like this:

python fastmri_to_dicom.py --filename fastmridatafile.h5

Note: In the process of converting the images to DICOM, the pixel arrays were flipped (up/down) to provide a view that was closer to DICOM orientation and assist with labeling. This should be taken into consideration when using the labels.
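The note above implies a simple vertical flip of each slice. Below is a minimal sketch of how to account for this when aligning fastMRI h5 pixel arrays with the labels. It assumes the conversion used an up/down (numpy flipud-style) flip and that bounding boxes are given as (x, y, width, height); confirm the exact transform in fastmri_to_dicom.py before relying on it.

```python
import numpy as np

# Stand-in for one image slice from a fastMRI h5 file.
slice_h5 = np.arange(12).reshape(4, 3)

# Orientation the radiologists saw when labeling: the same slice
# flipped up/down during the DICOM conversion.
slice_labeled = np.flipud(slice_h5)

def box_to_h5_orientation(x, y, w, h, img_height):
    """Map a (x, y, w, h) box from labeled (flipped) coordinates back
    to the raw h5 orientation by mirroring y across the image height.
    This mapping is an assumption based on the up/down flip note."""
    return x, img_height - y - h, w, h
```

Equivalently, you can flip the image pixels instead of the box coordinates, whichever is more convenient in your pipeline.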

The labeling was performed by experienced radiologists using MD.ai.

Working with the annotations

The Annotations folder contains a label file for each of the knee (knee.csv) and brain (brain.csv) datasets. The files contain one line for each annotation (bounding box) drawn by the radiologists. Datasets with no findings (no annotations) are not represented in the label files; however, you can see which files were reviewed in brain_file_list.csv and knee_file_list.csv. If a dataset (a fastMRI file) is listed in the file lists but not in the label files, it was reviewed and there were no findings.

The repo contains an example Jupyter notebook, which illustrates how to read the labels and overlay them onto the image pixels.
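A minimal sketch of reading annotation rows and grouping bounding boxes per slice is shown below. The column order (file, slice, study_level, x, y, width, height, label) and the sample row are hypothetical, inferred from examples discussed in this repo's issues; check the header of the real knee.csv/brain.csv before relying on it.

```python
import csv
import io

# Hypothetical CSV content in the assumed label-file layout.
sample = io.StringIO(
    "file,slice,study_level,x,y,width,height,label\n"
    "file1000000,12,No,100,120,30,25,Example finding\n"
)

# Group bounding boxes by (file, slice) for easy per-image lookup.
annotations = {}
for row in csv.DictReader(sample):
    key = (row["file"], int(row["slice"]))
    box = (int(row["x"]), int(row["y"]),
           int(row["width"]), int(row["height"]))
    annotations.setdefault(key, []).append((box, row["label"]))
```

With the boxes grouped this way, overlaying them on a slice reduces to drawing each rectangle returned by `annotations.get((filename, slice_index), [])`.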

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.

Comments
  • About the application of this data set in target detection

    Hello, I have tried to convert the fastMRI brain dataset into PNG images in COCO format (I'm sure I flipped the images to align them with the labels), and there are more than 3000 valid data points. Then I trained an open-source object detection model, but the result was not ideal; the mAP could not exceed 0.1. I did some parameter experiments, but the result has not improved so far, so I began to doubt the quality and noise of the dataset, and I hope to get confirmation from you.

    opened by Breeze-Zero 3
  • Looks like data lacks direction information

    Dear fastmri-plus, Thank you for such a large amount of annotation. I tried to use these data but ran into some problems. I found that the converted DICOM data does not have the correct view in ITK-SNAP. Are these h5 data in the FAST-MRI (and the plus version) without direction and origin information?

    Best Wishes. Zixu.

    opened by zixuzhuang 0
  • Label error in knee dataset?

    Dear fastMRI+, Thank you so much for contributing to the community, it's super helpful. I just wanna point out that there is an error in one bounding box: its height is zero. FYI: file1002097,27,No,190,201,18,0,Cartilage - Partial Thickness loss/defect

    Thank you!!

    opened by KeWang0622 0
  • h5 and DICOM relationship

    Hi, is there any relationship between .h5 and .dcm files available in fastMRI dataset? I'd like to get DICOMs with all planes instead of Coronal plane, but I don't find any relationship between .h5 and .dcm names.

    Best regards. Alberto.

    opened by AlbertoUAH 0
  • Link to fastMRI+ paper in repository

    I was just looking for a link to the paper and couldn't find one - neither as a repository web site nor in the README. Maybe it would be good to add it somewhere for people looking for further details?

    https://arxiv.org/abs/2109.03812

    opened by mmuckley 0
  • There are some errors in the label

    Hello, we found some problems in the labels of the brain data, as shown in the pictures below (the sources of the pictures are the 7th slice of file_brain_AXFLAIR_200_6002493 and the 8th slice of file_brain_AXT1POST_201_6002812, respectively). May I ask if this is a labeling error? And how are the criteria for annotation defined? file_brain_AXFLAIR_200_6002493_007 file_brain_AXT1POST_201_6002812_008

    opened by Breeze-Zero 0