AI Toolkit for Healthcare Imaging

Overview

Medical Open Network for AI

MONAI is a PyTorch-based, open-source framework for deep learning in healthcare imaging, part of the PyTorch Ecosystem. Its ambitions are:

  • developing a community of academic, industrial and clinical researchers collaborating on a common foundation;
  • creating state-of-the-art, end-to-end training workflows for healthcare imaging;
  • providing researchers with an optimized and standardized way to create and evaluate deep learning models.

Features

The codebase is currently under active development. Please see the technical highlights and What's New of the current milestone release.

  • flexible pre-processing for multi-dimensional medical imaging data;
  • compositional & portable APIs for ease of integration in existing workflows;
  • domain-specific implementations for networks, losses, evaluation metrics and more;
  • customizable design for varying user expertise;
  • multi-GPU data parallelism support.
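
For example, a minimal sketch of the compositional, dictionary-based pre-processing API (the file name is a placeholder):

from monai.data import DataLoader, Dataset
from monai.transforms import Compose, EnsureChannelFirstd, LoadImaged, ScaleIntensityd

# Declare the pre-processing pipeline once and reuse it across datasets and workflows.
preprocess = Compose([
    LoadImaged(keys="image"),            # read the image file and its metadata
    EnsureChannelFirstd(keys="image"),   # ensure a channel-first layout
    ScaleIntensityd(keys="image"),       # scale intensities to [0, 1]
])

ds = Dataset(data=[{"image": "image_001.nii.gz"}], transform=preprocess)
loader = DataLoader(ds, batch_size=1)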

Installation

To install the current release, you can simply run:

pip install monai

For other installation methods (using the default GitHub branch, using Docker, etc.), please refer to the installation guide.
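
After installation, a quick way to confirm the environment (optional, not part of the official guide) is to print the configuration from Python:

from monai.config import print_config

print_config()  # prints the MONAI, PyTorch and optional dependency versions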

Getting Started

MedNIST demo and MONAI for PyTorch Users are available on Colab.

Examples and notebook tutorials are located at Project-MONAI/tutorials.

Technical documentation is available at docs.monai.io.

Contributing

For guidance on making a contribution to MONAI, see the contributing guidelines.

Community

Join the conversation on Twitter @ProjectMONAI or join our Slack channel.

Ask and answer questions over on MONAI's GitHub Discussions tab.

Comments
  • add swin_unetr model

    Signed-off-by: ahatamizadeh [email protected]

    Fixes #3520 .

    Description

    This PR adds the Swin UNETR [1] model to MONAI.

    [1]: Hatamizadeh, Ali, Vishwesh Nath, Yucheng Tang, Dong Yang, Holger Roth, and Daguang Xu. "Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images." arXiv preprint arXiv:2201.01266 (2022).

    Swin UNETR is also used in self-supervised learning for 3D medical image segmentation [2].

    [2]: Tang, Yucheng, Dong Yang, Wenqi Li, Holger Roth, Bennett Landman, Daguang Xu, Vishwesh Nath, and Ali Hatamizadeh. "Self-supervised pre-training of swin transformers for 3d medical image analysis." arXiv preprint arXiv:2111.14791 (2021).  
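
    As a rough usage sketch (the values below are illustrative and not taken from this PR), the new model can be instantiated like any other MONAI network:

    import torch
    from monai.networks.nets import SwinUNETR

    # Hedged example: a Swin UNETR for a BraTS-like setup with 4 input MRI
    # modalities and 3 output channels; the spatial size must be divisible by 32.
    model = SwinUNETR(
        img_size=(96, 96, 96),
        in_channels=4,
        out_channels=3,
        feature_size=48,
    )
    x = torch.randn(1, 4, 96, 96, 96)
    logits = model(x)  # shape: (1, 3, 96, 96, 96)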

    Status

    Ready

    Types of changes

    • [x] Non-breaking change (fix or new feature that would not break existing functionality).
    • [x] New tests added to cover the changes.
    • [x] Integration tests passed locally by running ./runtests.sh -f -u --net --coverage.
    • [x] Quick tests passed locally by running ./runtests.sh --quick --unittests --disttests.
    • [x] In-line docstrings updated.
    • [x] Documentation updated, tested make html command in the docs/ folder.
    Feature request 
    opened by ahatamiz 68
  • 3482 Add `ConfigComponent` for config parsing

    Task step 2 of #3482 .

    Description

    This PR implements the ConfigComponent feature for config parsing (task 2 of #3482). The whole proposal is in the draft PR https://github.com/Project-MONAI/MONAI/pull/3593.
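
    A minimal sketch of the idea, assuming the merged syntax uses the _target_ keyword to name the class to build (details may differ from the draft proposal):

    from monai.bundle import ConfigComponent

    # A config dict describing a component to instantiate.
    config = {"_target_": "monai.transforms.LoadImaged", "keys": "image"}

    component = ConfigComponent(config=config, id="loader")
    load_imaged = component.instantiate()  # builds LoadImaged(keys="image")
    print(type(load_imaged))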

    Status

    Ready

    Types of changes

    • [x] Non-breaking change (fix or new feature that would not break existing functionality).
    • [ ] Breaking change (fix or new feature that would cause existing functionality to change).
    • [ ] New tests added to cover the changes.
    • [ ] Integration tests passed locally by running ./runtests.sh -f -u --net --coverage.
    • [ ] Quick tests passed locally by running ./runtests.sh --quick --unittests --disttests.
    • [ ] In-line docstrings updated.
    • [ ] Documentation updated, tested make html command in the docs/ folder.
    opened by Nic-Ma 59
  • Video dataset

    Addresses: https://github.com/Project-MONAI/MONAI/issues/4746.

    Description

    Adds video datasets. Videos can come from a file or from a capture device (e.g., a webcam).

    Requires a corresponding tutorial.

    Status

    Hold

    Types of changes

    • [x] Non-breaking change (fix or new feature that would not break existing functionality).
    • [x] New tests added to cover the changes.
    opened by rijobro 44
  • WIP: Fix failing A100 tests

    Signed-off-by: Mohammad Adil [email protected]

    Fixes #2357 .

    Description

    Change tolerance in tests to fix tests failing on A100 GPU.

    Status

    Work in progress

    Types of changes

    • [x] Non-breaking change (fix or new feature that would not break existing functionality).
    • [ ] Breaking change (fix or new feature that would cause existing functionality to change).
    • [ ] New tests added to cover the changes.
    • [ ] Integration tests passed locally by running ./runtests.sh -f -u --net --coverage.
    • [ ] Quick tests passed locally by running ./runtests.sh --quick --unittests.
    • [ ] In-line docstrings updated.
    • [ ] Documentation updated, tested make html command in the docs/ folder.
    CI/CD 
    opened by madil90 41
  • 5308 Add runtime cache support to `CacheDataset`

    Fixes #5308 .

    Description

    This PR added runtime cache support to CacheDataset.
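
    A hedged usage sketch (file names are placeholders; it assumes runtime_cache=True defers cache filling to first access instead of precomputing it when the dataset is constructed):

    from monai.data import CacheDataset
    from monai.transforms import Compose, EnsureChannelFirstd, LoadImaged

    data = [{"image": "image_001.nii.gz"}, {"image": "image_002.nii.gz"}]
    transform = Compose([LoadImaged(keys="image"), EnsureChannelFirstd(keys="image")])

    # With the runtime cache enabled, items are cached lazily during iteration
    # rather than being pre-filled at construction time.
    ds = CacheDataset(data=data, transform=transform, cache_rate=1.0, runtime_cache=True)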

    Types of changes

    • [x] Non-breaking change (fix or new feature that would not break existing functionality).
    • [ ] Breaking change (fix or new feature that would cause existing functionality to change).
    • [ ] New tests added to cover the changes.
    • [ ] Integration tests passed locally by running ./runtests.sh -f -u --net --coverage.
    • [ ] Quick tests passed locally by running ./runtests.sh --quick --unittests --disttests.
    • [ ] In-line docstrings updated.
    • [ ] Documentation updated, tested make html command in the docs/ folder.
    opened by Nic-Ma 40
  • Develop MVP of model bundle

    Is your feature request related to a problem? Please describe. Thanks for the interesting technical discussion with @ericspod @wyli @atbenmurray @rijobro. As we still have many unclear requirements and unknown use cases, we plan to develop the model package feature step by step and may adjust the design based on feedback during development.

    For the initial step, the core team agreed to develop a small but typical inference example first. It will use JSON config files to define the environment, components and workflow, and will save the config and model into a TorchScript model, so that other projects can easily reconstruct the exact same Python program and parameters to reproduce the inference. When the small MVP is ready, we will share and discuss it within the team to plan the next steps.

    I will try to implement the MVP with reference to some existing solutions, such as NVIDIA Clara MMARs and the Ignite online package. Basic task steps:

    • [x] Include metadata (env / sys info, changelog, version, input / output data format, etc), configs, model weights, etc. in a model package example for review. PR: https://github.com/Project-MONAI/tutorials/pull/487
    • [x] Search specified python packages and build python instances from dictionary config with name / path & args. PR: https://github.com/Project-MONAI/MONAI/pull/3720
    • [x] Recursively parse configs in a dictionary with dependencies and executable code, for example: {"dataset": {"<name>": "Dataset", "<args>": {"data": "$load_datalist()"}}, "dataloader": {"<name>": "DataLoader", "<args>": {"data": "@dataset"}}}. PR: https://github.com/Project-MONAI/MONAI/pull/3818, https://github.com/Project-MONAI/MONAI/pull/3822
    • [x] Add support to save the raw config dictionaries into TorchScript model. PR: Project-MONAI/MONAI#3138 .
    • [x] Add schema mechanism to verify the content of folder structure, metadata.json content, etc. Refer to: https://json-schema.org/, https://github.com/Julian/jsonschema. PR: https://github.com/Project-MONAI/MONAI/pull/3865
    • [x] Add support to verify network input / output with fake data (data input is from metadata).
    • [x] Add mechanism to easily load JSON config files with override (can add YAML support in future), similar to example: https://hydra.cc/docs/tutorials/basic/your_first_app/config_groups/. PR: https://github.com/Project-MONAI/MONAI/pull/3832
    • [x] Add support to refer to other config item in the same config file or other confg files, referring to Hydra ideas: https://hydra.cc/docs/advanced/overriding_packages/.
    • [x] Complete the inference example MMAR for spleen segmentation task. PR: https://github.com/Project-MONAI/tutorials/pull/604
    • [x] Write user manual and detailed documentation. PR: https://github.com/Project-MONAI/MONAI/pull/3834 https://github.com/Project-MONAI/MONAI/pull/3982
    • [x] [*Optional] Upload the spleen MMAR example to huggingface(https://github.com/Project-MONAI/MONAI/discussions/3451).
    • [x] [*Optional] Support relative config levels in the reference ID, for example: "test": "@###data#1", where one "#" refers to the current level, two "##" to the upper level, etc. PR: https://github.com/Project-MONAI/MONAI/pull/3974
    • [ ] [*Optional] Support customized ConfigItem and ReferenceResolver in the ConfigParser. PR: https://github.com/Project-MONAI/MONAI/pull/3980/
    • [x] _requires_ keyword for config component (https://github.com/Project-MONAI/MONAI/issues/3942).
    • [x] Import statement in bundle config (https://github.com/Project-MONAI/MONAI/issues/3966).
    • [x] Config python logging properties in a file. PR: https://github.com/Project-MONAI/MONAI/pull/3994
    • [ ] [*Optional] Specify rank ID for component to run only on some ranks, for example, saving checkpoint in rank 0.
    Feature request 
    opened by Nic-Ma 38
  • Training instability with Dice Loss/Tversky Loss

    I am training a 2D UNet to segment fetal MR images using MONAI and I have been observing some instability in training when using MONAI's Dice loss formulation. After some iterations, the loss jumps up and the network stops learning, as the gradients drop to zero. Here is an example (orange is the loss on the training set computed over 2D slices, blue is the loss on the validation set computed over 3D volumes). [Training/validation loss curves shown in the original issue.]

    After investigating several aspects (using the same deterministic seed), I've narrowed down the issue to the presence of the smooth term in both the numerator and denominator of the Dice Loss: f = 1.0 - (2.0 * intersection + smooth) / (denominator + smooth)

    When using the formulation f = 1.0 - (2.0 * intersection) / (denominator + smooth), i.e. without the smooth term in the numerator, the training was stable and no longer showed the unexpected behaviour. [Note: this experiment was trained for much longer to make sure the jump would not appear later in training.]

    The same pattern was also observed for the Tversky Loss, so it could be worth investigating the stability of these losses to identify the best default formulation.
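
    For reference, here is a small stand-alone sketch of the two formulations being compared (this is not MONAI's DiceLoss implementation):

    import torch

    def soft_dice_loss(pred, target, smooth=1e-5, smooth_in_numerator=True):
        # pred: predicted probabilities, target: binary ground truth (same shape)
        intersection = torch.sum(pred * target)
        denominator = torch.sum(pred) + torch.sum(target)
        numerator_smooth = smooth if smooth_in_numerator else 0.0
        return 1.0 - (2.0 * intersection + numerator_smooth) / (denominator + smooth)

    pred = torch.rand(1, 1, 64, 64)
    target = (torch.rand(1, 1, 64, 64) > 0.5).float()
    print(soft_dice_loss(pred, target, smooth_in_numerator=True))   # formulation reported as unstable
    print(soft_dice_loss(pred, target, smooth_in_numerator=False))  # formulation reported as stable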

    Software version
    MONAI version: 0.1.0+84.ga683c4e.dirty
    Python version: 3.7.4 (default, Jul 9 2019, 03:52:42) [GCC 5.4.0 20160609]
    Numpy version: 1.18.2
    Pytorch version: 1.4.0
    Ignite version: 0.3.0

    Training information
    Using MONAI PersistentCache
    2D UNet (as default in MONAI)
    Adam optimiser, LR = 1e-3, no LR decay
    Batch size: 10

    Other tests
    The following aspects were investigated but did not solve the instability issue:

    • Gradient clipping
    • Different optimisers (SGD, SGD + Momentum)
    • Transforming the binary segmentations to a two-channel approach ([background segmentation, foreground segmentation])
    • Choosing smooth = 1.0 as default here (https://github.com/MIC-DKFZ/nnUNet/blob/master/nnunet/training/loss_functions/dice_loss.py). However, this made the behaviour even more severe and the jump would happen sooner in the training.

    The following losses were also investigated

    • Binary Cross Entropy --> stable
    • Dice Loss + Binary Cross Entropy --> unstable
    • Dice Loss (no smooth at numerator) + Binary Cross Entropy --> stable
    • Tversky Loss --> Unstable
    • Tversky Loss (no smooth at numerator) --> stable
    bug 
    opened by martaranzini 38
  • Auto3D Task Module 1 (DataAnalyzer) for #4743

    Description

    Implemented a DataAnalyzer class to encapsulate the data analysis module. As the first part of the auto3d/automl pipeline, the module finds the data and labels from the user inputs and generates a summary (dictionary) of data stats. The summary includes:

    • file names, file list, and number of files;
    • dataset summary (basic information, image dimensions, number of classes, etc.);
    • individual data information (spacing, image size, number and size of the regions, etc.).

    The summary can be exported as a YAML file and as a dictionary variable for use in Python.

    Example Usage:

    from monai.apps.auto3d.data_analyzer import DataAnalyzer
    
    datalist = {
        "testing": [{"image": "image_003.nii.gz"}],
        "training": [
            {"fold": 0, "image": "image_001.nii.gz", "label": "label_001.nii.gz"},
            {"fold": 0, "image": "image_002.nii.gz", "label": "label_002.nii.gz"},
            {"fold": 1, "image": "image_001.nii.gz", "label": "label_001.nii.gz"},
            {"fold": 1, "image": "image_004.nii.gz", "label": "label_004.nii.gz"},
        ],
    }
    
    dataroot = '/datasets' # the directory where you have the image files (in this example we're using nii.gz)
    analyser = DataAnalyzer(datalist, dataroot)
    datastat = analyser.get_all_case_stats() # it will also generate a data_stats.yaml that saves the stats
    

    Status

    Ready for review
    Reference issue: https://github.com/Project-MONAI/MONAI/issues/4743.

    Types of changes

    • [x] Non-breaking change (fix or new feature that would not break existing functionality).
    • [x] New tests added to cover the changes.
    • [x] Quick tests passed locally by running ./runtests.sh --quick --unittests --disttests.
    • [x] In-line docstrings updated.
    • [x] Documentation updated, tested make html command in the docs/ folder.
    opened by mingxin-zheng 37
  • 3482 add `run` API for common training, evaluation and inference

    Task step 7 of https://github.com/Project-MONAI/MONAI/issues/3482 .

    Description

    This PR added the run API to execute the most common cases of supervised training, evaluation or inference directly from the config.

    Status

    Ready

    Types of changes

    • [x] Non-breaking change (fix or new feature that would not break existing functionality).
    • [ ] Breaking change (fix or new feature that would cause existing functionality to change).
    • [ ] New tests added to cover the changes.
    • [ ] Integration tests passed locally by running ./runtests.sh -f -u --net --coverage.
    • [ ] Quick tests passed locally by running ./runtests.sh --quick --unittests --disttests.
    • [ ] In-line docstrings updated.
    • [ ] Documentation updated, tested make html command in the docs/ folder.
    opened by Nic-Ma 35
  • 3482 3829 Add `ConfigParser` to recursively parse config content

    Task step 3-2 of https://github.com/Project-MONAI/MONAI/issues/3482 . Fixes https://github.com/Project-MONAI/MONAI/issues/3829

    Description

    This PR added the ConfigParser to recursively parse config content.
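
    A hedged sketch of the intended behaviour (config values are illustrative): "_target_" names the class to build, "@" references another config item, and get_parsed_content resolves both recursively.

    from monai.bundle import ConfigParser

    config = {
        "spatial_dims": 2,
        "network": {
            "_target_": "monai.networks.nets.BasicUNet",
            "spatial_dims": "@spatial_dims",  # reference to the item above
            "in_channels": 1,
            "out_channels": 2,
        },
    }

    parser = ConfigParser(config)
    net = parser.get_parsed_content("network")  # recursively resolves and instantiates
    print(type(net))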

    Status

    Ready

    Types of changes

    • [x] Non-breaking change (fix or new feature that would not break existing functionality).
    • [ ] Breaking change (fix or new feature that would cause existing functionality to change).
    • [ ] New tests added to cover the changes.
    • [ ] Integration tests passed locally by running ./runtests.sh -f -u --net --coverage.
    • [ ] Quick tests passed locally by running ./runtests.sh --quick --unittests --disttests.
    • [ ] In-line docstrings updated.
    • [ ] Documentation updated, tested make html command in the docs/ folder.
    opened by Nic-Ma 35
  • Segmentation example on multiple labels

    Is your feature request related to a problem? Please describe. The spleen segmentation example is incredibly helpful. Several of my projects involve multiple labels, e.g. the right and left lateral ventricles.

    Describe the solution you'd like I'd greatly appreciate an example or guidance on how to train a model with multiple labels.

    Additional context I'm pretty new to this, and suspect it's a pretty easy modification to the example.
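
    Not an official example, but a minimal sketch of a multi-class setup (assuming 3 classes: background plus the two lateral ventricles, with an integer label map as ground truth):

    import torch
    from monai.losses import DiceLoss
    from monai.networks.nets import UNet

    num_classes = 3  # background, right lateral ventricle, left lateral ventricle
    net = UNet(
        spatial_dims=3,
        in_channels=1,
        out_channels=num_classes,     # one output channel per class
        channels=(16, 32, 64, 128),
        strides=(2, 2, 2),
    )
    loss_fn = DiceLoss(to_onehot_y=True, softmax=True)  # one-hot encodes the label map internally

    image = torch.randn(2, 1, 64, 64, 64)
    label = torch.randint(0, num_classes, (2, 1, 64, 64, 64))
    loss = loss_fn(net(image), label)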

    question 
    opened by radiplab 33
  • SaveImaged not working after upgrading MONAI from 0.8.1 to 1.0.1

    Describe the bug I upgraded MONAI from 0.8.1 to 1.0.1. My original YAML config below, which works on 0.8.1, no longer works on 1.0.1.

    _target_: SaveImaged
    keys: model_output_key
    meta_keys: [ 'labels_meta_dict' ]
    data_root_dir: label_root_dir
    output_dir: pred_output_dir
    output_postfix: 'pred'
    separate_folder: False

    Any help would be greatly appreciated!

    Expected behavior I expect the code to save the output to a nii file.

    Screenshots The error traceback is included at the end.

    Environment

    Ensuring you use the relevant python executable, please paste the output of:

    python -c 'import monai; monai.config.print_debug_info()'
    

    ================================ Printing MONAI config...

    MONAI version: 1.0.1
    Numpy version: 1.24.1
    Pytorch version: 1.13.1+cu117
    MONAI flags: HAS_EXT = False, USE_COMPILED = False, USE_META_DICT = False
    MONAI rev id: 8271a193229fe4437026185e218d5b06f7c8ce69
    MONAI file: /home/ubuntu/.cache/pypoetry/virtualenvs/qbio-qbdl-D8V9KvtL-py3.8/lib/python3.8/site-packages/monai/__init__.py

    Optional dependencies:
    Pytorch Ignite version: NOT INSTALLED or UNKNOWN VERSION.
    Nibabel version: 3.2.2
    scikit-image version: 0.19.3
    Pillow version: 9.4.0
    Tensorboard version: NOT INSTALLED or UNKNOWN VERSION.
    gdown version: NOT INSTALLED or UNKNOWN VERSION.
    TorchVision version: 0.14.1+cu117
    tqdm version: 4.64.1
    lmdb version: NOT INSTALLED or UNKNOWN VERSION.
    psutil version: NOT INSTALLED or UNKNOWN VERSION.
    pandas version: NOT INSTALLED or UNKNOWN VERSION.
    einops version: 0.5.0
    transformers version: NOT INSTALLED or UNKNOWN VERSION.
    mlflow version: NOT INSTALLED or UNKNOWN VERSION.
    pynrrd version: NOT INSTALLED or UNKNOWN VERSION.

    For details about installing the optional dependencies, please visit: https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies

    ================================ Printing system config...

    psutil required for print_system_info

    ================================ Printing GPU config...

    Num GPUs: 4
    Has CUDA: True
    CUDA version: 11.7
    cuDNN enabled: True
    cuDNN version: 8500
    Current device: 0
    Library compiled for CUDA architectures: ['sm_37', 'sm_50', 'sm_60', 'sm_70', 'sm_75', 'sm_80', 'sm_86']
    GPU 0 Name: NVIDIA A10G
    GPU 0 Is integrated: False
    GPU 0 Is multi GPU board: False
    GPU 0 Multi processor count: 80
    GPU 0 Total memory (GB): 22.2
    GPU 0 CUDA capability (maj.min): 8.6
    GPU 1 Name: NVIDIA A10G
    GPU 1 Is integrated: False
    GPU 1 Is multi GPU board: False
    GPU 1 Multi processor count: 80
    GPU 1 Total memory (GB): 22.2
    GPU 1 CUDA capability (maj.min): 8.6
    GPU 2 Name: NVIDIA A10G
    GPU 2 Is integrated: False
    GPU 2 Is multi GPU board: False
    GPU 2 Multi processor count: 80
    GPU 2 Total memory (GB): 22.2
    GPU 2 CUDA capability (maj.min): 8.6
    GPU 3 Name: NVIDIA A10G
    GPU 3 Is integrated: False
    GPU 3 Is multi GPU board: False
    GPU 3 Multi processor count: 80
    GPU 3 Total memory (GB): 22.2
    GPU 3 CUDA capability (maj.min): 8.6

    Additional context: the error traceback follows.

    Traceback (most recent call last):
      File "/home/ubuntu/.cache/pypoetry/virtualenvs/qbio-qbdl-D8V9KvtL-py3.8/lib/python3.8/site-packages/monai/transforms/transform.py", line 91, in apply_transform
        return _apply_transform(transform, data, unpack_items)
      File "/home/ubuntu/.cache/pypoetry/virtualenvs/qbio-qbdl-D8V9KvtL-py3.8/lib/python3.8/site-packages/monai/transforms/transform.py", line 55, in _apply_transform
        return transform(parameters)
      File "/home/ubuntu/.cache/pypoetry/virtualenvs/qbio-qbdl-D8V9KvtL-py3.8/lib/python3.8/site-packages/monai/transforms/io/dictionary.py", line 293, in __call__
        self.saver(img=d[key], meta_data=meta_data)
      File "/home/ubuntu/.cache/pypoetry/virtualenvs/qbio-qbdl-D8V9KvtL-py3.8/lib/python3.8/site-packages/monai/transforms/io/array.py", line 423, in __call__
        subject = meta_data[Key.FILENAME_OR_OBJ] if meta_data else str(self._data_index)
    KeyError: 'filename_or_obj'

    opened by moonforsun 0
  • [Feature Request]: CLIP Driven Universal Model

    Contrastive Language-Image Pre-training (CLIP) Driven Models and Partially Supervised Learning for Medical Image Segmentation

    This issue is to discuss adding the CLIP-Driven Universal Model Features to MONAI.

    This will provide an implementation of the top-1 solution in the Medical Segmentation Decathlon (MSD).

    Potential assignee: @tangy5

    CLIP-Driven Universal Model

    The Universal Model is the first framework for both organ segmentation and tumor detection, and it achieves the top spot of the MSD competition leaderboard.

    Paper

    CLIP-Driven Universal Model for Organ Segmentation and Tumor Detection
    Pre-print: https://arxiv.org/pdf/2301.00785.pdf

    Key features

    The implementation will bring several new features:

    1. Universal Model: one model to detect and segment all abdominal organs and all types of tumors (liver tumor, kidney tumor, lung nodule, pancreas tumor, hepatic vessel tumor, colon tumor).
    2. Language model (CLIP) and text-driven embeddings boost medical image analysis.
    3. Training on partially labelled datasets.
    4. Incremental learning: Users can continue to train new segmentation classes using the current trained model without catastrophic forgetting.

    ⏳ Dataset: the Universal Model is trained with the following datasets. [The dataset list is shown as a screenshot in the original issue.]

    Implementation plans

    • [ ] Transformations (pre-processing) for partially labelled datasets: “PartialLabelTransfer”, etc.
    • [ ] Segmentation backbone with CLIP embedding, text-driven segmentor: plug-and-play CLIP embedding and text encoder.
    • [ ] Tutorial for training and inference of Universal Model.
    • [ ] Tutorial for demonstrating partial supervised learning and incremental learning.
    • [ ] Model release: Bundle for Model Zoo for publishing the trained universal model to segment all types of tumours and abdominal organs.

    More Details of the Feature Methodology:

    1. Universal Model [architecture figure in the original issue]

    2. CLIP-driven and text-driven segmentor [figure in the original issue]

    3. Partially supervised learning [figure in the original issue]

    4. Incremental learning [figure in the original issue]

    Detailed implementation steps will be provided after open discussion.

    All suggestions and comments are welcome!

    @ljwztc @MrGiovanni

    Design discussions Contribution wanted Feature request 
    opened by tangy5 0
  • invalid pointer error

    Hello, I installed MONAI via

    pip install monai-weekly
    

    on Ubuntu 18, with Python 3.9 and torch-1.12.0+cu116. When I do

    import monai 
    

    I get the error:

    free(): invalid pointer
    Aborted (core dumped)
    
    opened by jakubMitura14 0
  • 'dimensions'  -> 'spatial_dims'

    VarAutoEncoder.__init__() got an unexpected keyword argument 'dimensions', because the argument is now called spatial_dims.
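
    A hedged sketch of the rename (shapes and sizes are illustrative):

    from monai.networks.nets import VarAutoEncoder

    # Previously VarAutoEncoder(dimensions=2, ...); the keyword is now spatial_dims.
    net = VarAutoEncoder(
        spatial_dims=2,           # formerly `dimensions`
        in_shape=(1, 64, 64),     # (channels, H, W)
        out_channels=1,
        latent_size=16,
        channels=(16, 32, 64),
        strides=(2, 2, 2),
    )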

    Fixes # .

    Description

    A few sentences describing the changes proposed in this pull request.

    Types of changes

    • [x] Non-breaking change (fix or new feature that would not break existing functionality).
    • [ ] Breaking change (fix or new feature that would cause existing functionality to change).
    • [ ] New tests added to cover the changes.
    • [ ] Integration tests passed locally by running ./runtests.sh -f -u --net --coverage.
    • [ ] Quick tests passed locally by running ./runtests.sh --quick --unittests --disttests.
    • [ ] In-line docstrings updated.
    • [ ] Documentation updated, tested make html command in the docs/ folder.
    opened by ShadowTwin41 8
  • 5775 Fix `_get_latest_bundle_version` issue on Windows

    Signed-off-by: Yiheng Wang [email protected]

    Fixes #5775 .

    Description

    This PR fixes the issue of _get_latest_bundle_version on Windows (os.path.join produces a backslash, which creates a wrong URL). Thanks @SachidanandAlle for finding this issue.
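
    A minimal illustration of the underlying problem (the URL pieces are hypothetical, not the code from this PR):

    import posixpath

    base = "https://example.com/bundles"
    name = "spleen_ct_segmentation"

    # On Windows, os.path.join uses the backslash separator, which corrupts URLs:
    #   os.path.join(base, name) -> "https://example.com/bundles\\spleen_ct_segmentation"
    # Joining with posixpath (or plain string formatting) keeps the forward slash:
    url = posixpath.join(base, name)
    print(url)  # https://example.com/bundles/spleen_ct_segmentation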

    Types of changes

    • [x] Non-breaking change (fix or new feature that would not break existing functionality).
    • [ ] Breaking change (fix or new feature that would cause existing functionality to change).
    • [ ] New tests added to cover the changes.
    • [ ] Integration tests passed locally by running ./runtests.sh -f -u --net --coverage.
    • [ ] Quick tests passed locally by running ./runtests.sh --quick --unittests --disttests.
    • [ ] In-line docstrings updated.
    • [ ] Documentation updated, tested make html command in the docs/ folder.
    opened by yiheng-wang-nv 1
Releases (1.1.0)
  • 1.1.0(Dec 19, 2022)

    Added

    • Hover-Net based digital pathology workflows including new network, loss, postprocessing, metric, training, and inference modules
    • Various enhancements for Auto3dSeg AutoRunner including template caching, selection, and a dry-run mode nni_dry_run
    • Various enhancements for Auto3dSeg algo templates including new state-of-the-art configurations, optimized GPU memory utilization
    • New bundle API and configurations to support experiment management including MLFlowHandler
    • New bundle.script API to support model zoo query and download
    • LossMetric metric to compute loss as cumulative metric measurement
    • Transforms and base transform APIs including RandomizableTrait and MedianSmooth
    • runtime_cache option for CacheDataset and the derived classes to allow for shared caching on the fly
    • Flexible name formatter for SaveImage transform
    • pending_operations MetaTensor property and basic APIs for lazy image resampling
    • Contrastive sensitivity for SSIM metric
    • Extensible backbones for FlexibleUNet
    • Generalize SobelGradients to 3D and any spatial axes
    • warmup_multiplier option for WarmupCosineSchedule
    • F beta score metric based on confusion matrix metric
    • Support of key overwriting in LambdaD
    • Basic premerge tests for Python 3.11
    • Unit and integration tests for CUDA 11.6, 11.7 and A100 GPU
    • DataAnalyzer handles minor image-label shape inconsistencies

    Fixed

    • Review and enhance previously untyped APIs with additional type annotations and casts
    • switch_endianness in LoadImage now supports tensor input
    • Reduced memory footprint for various Auto3dSeg tests
    • Issue of @ in monai.bundle.ReferenceResolver
    • Compatibility issue with ITK-Python 5.3 (converting itkMatrixF44 for default collate)
    • Inconsistency of sform and qform when using different backends for SaveImage
    • MetaTensor.shape call now returns a torch.Size instead of tuple
    • Issue of channel reduction in GeneralizedDiceLoss
    • Issue of background handling before softmax in DiceFocalLoss
    • Numerical issue of LocalNormalizedCrossCorrelationLoss
    • Issue of incompatible view size in ConfusionMatrixMetric
    • NetAdapter compatibility with Torchscript
    • Issue of extract_levels in RegUNet
    • Optional bias_downsample in ResNet
    • dtype overflow for ShiftIntensity transform
    • Randomized transforms such as RandCuCIM now inherit RandomizableTrait
    • fg_indices.size compatibility issue in generate_pos_neg_label_crop_centers
    • Issue when inverting ToTensor
    • Issue of capital letters in filename suffixes check in LoadImage
    • Minor tensor compatibility issues in apps.nuclick.transforms
    • Issue of float16 in verify_net_in_out
    • std variable type issue for RandRicianNoise
    • DataAnalyzer accepts None as label key and checks empty labels
    • iter_patch_position now has a smaller memory footprint
    • CumulativeAverage has been refactored and enhanced to allow for simple tracking of metric running stats.
    • Multi-threading issue for MLFlowHandler

    Changed

    • Printing a MetaTensor now generates a less verbose representation
    • DistributedSampler raises a ValueError if there are too few devices
    • OpenCV and VideoDataset modules are loaded lazily to avoid dependency issues
    • device in monai.engines.Workflow supports string values
    • Activations and AsDiscrete take kwargs as additional arguments
    • DataAnalyzer is now more efficient and writes summary stats before detailed all case stats
    • Base Docker image upgraded to nvcr.io/nvidia/pytorch:22.10-py3 from nvcr.io/nvidia/pytorch:22.09-py3
    • Simplified Conda environment file environment-dev.yml
    • Versioneer dependency upgraded to 0.23 from 0.19

    Deprecated

    • NibabelReader input argument dtype is deprecated, the reader will use the original dtype of the image

    Removed

    • Support for PyTorch 1.7
  • 1.0.1(Oct 24, 2022)

    Fixes

    • DiceCELoss for multichannel targets
    • Auto3DSeg DataAnalyzer out-of-memory error and other minor issues
    • An optional flag issue in the RetinaNet detector
    • An issue with output offset for Spacing
    • A LoadImage issue when track_meta is False
    • 1D data output error in VarAutoEncoder
    • An issue with resolution computing in ImageStats

    Added

    • Flexible min/max pixdim options for Spacing
    • Upsample mode deconvgroup and optional kernel sizes
    • Docstrings for gradient-based saliency maps
    • Occlusion sensitivity to use sliding window inference
    • Enhanced Gaussian window and device assignments for sliding window inference
    • Multi-GPU support for MonaiAlgo
    • ClientAlgoStats and MonaiAlgoStats for federated summary statistics
    • MetaTensor support for OneOf
    • Add a file check for bundle logging config
    • Additional content and an authentication token option for bundle info API
    • An anti-aliasing option for Resized
    • SlidingWindowInferer adaptive device based on cpu_thresh
    • SegResNetDS with deep supervision and non-isotropic kernel support
    • Premerge tests for Python 3.10

    Changed

    • Base Docker image upgraded to nvcr.io/nvidia/pytorch:22.09-py3 from nvcr.io/nvidia/pytorch:22.08-py3
    • Replace None type metadata content with "none" for collate_fn compatibility
    • HoVerNet Mode and Branch to independent StrEnum
    • Automatically infer device from the first item in random elastic deformation dict
    • Add channel dim in ComputeHoVerMaps and ComputeHoVerMapsd
    • Remove batch dim in SobelGradients and SobelGradientsd

    Deprecated

    • Deprecating compute_meandice, compute_meaniou in monai.metrics, in favor of compute_dice and compute_iou respectively
  • 1.0.0(Sep 16, 2022)

    Added

    • monai.auto3dseg base APIs and monai.apps.auto3dseg components for automated machine learning (AutoML) workflow
    • monai.fl module with base APIs and MonaiAlgo for federated learning client workflow
    • An initial backwards compatibility guide
    • Initial release of accelerated MRI reconstruction components, including CoilSensitivityModel
    • Support of MetaTensor and new metadata attributes for various digital pathology components
    • Various monai.bundle enhancements for MONAI model-zoo usability, including config debug mode and get_all_bundles_list
    • new monai.transforms components including SignalContinuousWavelet for 1D signal, ComputeHoVerMaps for digital pathology, and SobelGradients for spatial gradients
    • VarianceMetric and LabelQualityScore metrics for active learning
    • Dataset API for real-time stream and videos
    • Several networks and building blocks including FlexibleUNet and HoVerNet
    • MeanIoUHandler and LogfileHandler workflow event handlers
    • WSIReader with the TiffFile backend
    • Multi-threading in WSIReader with cuCIM backend
    • get_stats API in monai.engines.Workflow
    • prune_meta_pattern in monai.transforms.LoadImage
    • max_interactions for deepedit interaction workflow
    • Various profiling utilities in monai.utils.profiling

    Changed

    • Base Docker image upgraded to nvcr.io/nvidia/pytorch:22.08-py3 from nvcr.io/nvidia/pytorch:22.06-py3
    • Optionally depend on PyTorch-Ignite v0.4.10 instead of v0.4.9
    • The cache-based dataset now matches the transform information when read/write the cache
    • monai.losses.ContrastiveLoss now infers batch_size during forward()
    • Rearrange the spatial axes in RandSmoothDeform transforms following PyTorch's convention
    • Unified several environment flags into monai.utils.misc.MONAIEnvVars
    • Simplified __str__ implementation of MetaTensor instead of relying on the __repr__ implementation

    Fixed

    • Improved error messages when both monai and monai-weekly are pip-installed
    • Inconsistent pseudo number sequences for different num_workers in DataLoader
    • Issue of repeated sequences for monai.data.ShuffleBuffer
    • Issue of not preserving the physical extent in monai.transforms.Spacing
    • Issue of using inception_v3 as the backbone of monai.networks.nets.TorchVisionFCModel
    • Index device issue for monai.transforms.Crop
    • Efficiency issue when converting the array dtype and contiguous memory

    Deprecated

    • Addchannel and AsChannelFirst transforms in favor of EnsureChannelFirst
    • monai.apps.pathology.data components in favor of the corresponding components from monai.data
    • monai.apps.pathology.handlers in favor of the corresponding components from monai.handlers

    Removed

    • Status section in the pull request template in favor of the pull request draft mode
    • monai.engines.BaseWorkflow
    • ndim and dimensions arguments in favor of spatial_dims
    • n_classes, num_classes arguments in AsDiscrete in favor of to_onehot
    • logit_thresh, threshold_values arguments in AsDiscrete in favor of threshold
    • torch.testing.assert_allclose in favor of tests.utils.assert_allclose
  • 0.9.1(Jul 25, 2022)

    Added

    • Support of monai.data.MetaTensor as core data structure across the modules
    • Support of inverse in array-based transforms
    • monai.apps.TciaDataset APIs for The Cancer Imaging Archive (TCIA) datasets, including a pydicom-backend reader
    • Initial release of components for MRI reconstruction in monai.apps.reconstruction, including various FFT utilities
    • New metrics and losses, including mean IoU and structural similarity index
    • monai.utils.StrEnum class to simplify Enum-based type annotations

    Changed

    • Base Docker image upgraded to nvcr.io/nvidia/pytorch:22.06-py3 from nvcr.io/nvidia/pytorch:22.04-py3
    • Optionally depend on PyTorch-Ignite v0.4.9 instead of v0.4.8

    Fixed

    • Fixed issue of not skipping post activations in Convolution when input arguments are None
    • Fixed issue of ignoring dropout arguments in DynUNet
    • Fixed issue of hard-coded non-linear function in ViT classification head
    • Fixed issue of in-memory config overriding with monai.bundle.ConfigParser.update
    • 2D SwinUNETR incompatible shapes
    • Fixed issue with monai.bundle.verify_metadata not raising exceptions
    • Fixed issue with monai.transforms.GridPatch returns inconsistent type location when padding
    • Wrong generalized Dice score metric when denominator is 0 but prediction is non-empty
    • Docker image build error due to NGC CLI upgrade
    • Optional default value when parsing id unavailable in a ConfigParser instance
    • Immutable data input for the patch-based WSI datasets

    Deprecated

    • *_transforms and *_meta_dict fields in dictionary-based transforms in favor of MetaTensor
    • meta_keys, meta_key_postfix, src_affine arguments in various transforms, in favor of MetaTensor
    • AsChannelFirst and AddChannel, in favor of EnsureChannelFirst transform
  • 0.9.0(Jun 13, 2022)

    Added

    • monai.bundle primary module with a ConfigParser and command-line interfaces for config-based workflows
    • Initial release of MONAI bundle specification
    • Initial release of volumetric image detection modules including bounding boxes handling, RetinaNet-based architectures
    • API preview monai.data.MetaTensor
    • Unified monai.data.image_writer to support flexible IO backends including an ITK writer
    • Various new network blocks and architectures including SwinUNETR
    • DeepEdit interactive training/validation workflow
    • NuClick interactive segmentation transforms
    • Patch-based readers and datasets for whole-slide imaging
    • New losses and metrics including SurfaceDiceMetric, GeneralizedDiceFocalLoss
    • New pre-processing transforms including RandIntensityRemap, SpatialResample
    • Multi-output and slice-based inference for SlidingWindowInferer
    • NrrdReader for NRRD file support
    • Torchscript utilities to save models with meta information
    • Gradient-based visualization module SmoothGrad
    • Automatic regular source code scanning for common vulnerabilities and coding errors

    Changed

    • Simplified TestTimeAugmentation using de-collate and invertible transforms APIs
    • Refactoring monai.apps.pathology modules into monai.handlers and monai.transforms
    • Flexible activation and normalization layers for TopologySearch and DiNTS
    • Anisotropic first layers for 3D resnet
    • Flexible ordering of activation, normalization in UNet
    • Enhanced performance of connected-components analysis using Cupy
    • INSTANCE_NVFUSER for enhanced performance in 3D instance norm
    • Support of string representation of dtype in convert_data_type
    • Added new options iteration_log, epoch_log to the logging handlers
    • Base Docker image upgraded to nvcr.io/nvidia/pytorch:22.04-py3 from nvcr.io/nvidia/pytorch:21.10-py3
    • collate_fn generates more data-related debugging info with dev_collate

    Fixed

    • Unified the spellings of "meta data", "metadata", "meta-data" to "metadata"
    • Various inaccurate error messages when input data are in invalid shapes
    • Issue of computing symmetric distances in compute_average_surface_distance
    • Unnecessary layer self.conv3 in UnetResBlock
    • Issue of torchscript compatibility for ViT and self-attention blocks
    • Issue of hidden layers in UNETR
    • allow_smaller in spatial cropping transforms
    • Antialiasing in Resize
    • Issue of bending energy loss value at different resolutions
    • kwargs_read_csv in CSVDataset
    • In-place modification in Metric reduction
    • wrap_array for ensure_tuple
    • Contribution guide for introducing new third-party dependencies

    Removed

    • Deprecated nifti_writer, png_writer in favor of monai.data.image_writer
    • Support for PyTorch 1.6
  • 0.8.1(Feb 16, 2022)

    Added

    • Support of spatial 2D for ViTAutoEnc
    • Support of dataframe object input in CSVDataset
    • Support of tensor backend for Orientation
    • Support of configurable delimiter for CSV writers
    • A base workflow API
    • DataFunc API for dataset-level preprocessing
    • write_scalar API for logging with additional engine parameter in TensorBoardHandler
    • Enhancements for NVTX Range transform logging
    • Enhancements for set_determinism
    • Performance enhancements in the cache-based datasets
    • Configurable metadata keys for monai.data.DatasetSummary
    • Flexible kwargs for WSIReader
    • Logging for the learning rate schedule handler
    • GridPatchDataset as subclass of monai.data.IterableDataset
    • is_onehot option in KeepLargestConnectedComponent
    • channel_dim in the image readers and support of stacking images with channels
    • Support of matshow3d with given channel_dim
    • Skipping workflow run if epoch length is 0
    • Enhanced CacheDataset to avoid duplicated cache items
    • save_state utility function

    Changed

    • Optionally depend on PyTorch-Ignite v0.4.8 instead of v0.4.6
    • monai.apps.mmars.load_from_mmar defaults to the latest version

    Fixed

    • Issue when caching large items with pickle
    • Issue of hard-coded activation functions in ResBlock
    • Issue of create_file_name assuming local disk file creation
    • Issue of WSIReader when the backend is TiffFile
    • Issue of deprecated_args when the function signature contains kwargs
    • Issue of channel_wise computations for the intensity-based transforms
    • Issue of inverting OneOf
    • Issue of removing temporary caching file for the persistent dataset
    • Error messages when reader backend is not available
    • Output type casting issue in ScaleIntensityRangePercentiles
    • Various docstring typos and broken URLs
    • mode in the evaluator engine
    • Ordering of Orientation and Spacing in monai.apps.deepgrow.dataset

    Removed

    • Additional deep supervision modules in DynUnet
    • Deprecated reduction argument for ContrastiveLoss
    • Decollate warning in Workflow
    • Unique label exception in ROCAUCMetric
    • Logger configuration logic in the event handlers
  • 0.8.0(Nov 25, 2021)

    Added

    • Overview of new features in v0.8
    • Network modules for differentiable neural network topology search (DiNTS)
    • Multiple Instance Learning transforms and models for digital pathology WSI analysis
    • Vision transformers for self-supervised representation learning
    • Contrastive loss for self-supervised learning
    • Finalized major improvements of 200+ components in monai.transforms to support input and backend in PyTorch and NumPy
    • Initial registration module benchmarking with GlobalMutualInformationLoss as an example
    • monai.transforms documentation with visual examples and the utility functions
    • Event handler for MLFlow integration
    • Enhanced data visualization functions including blend_images and matshow3d
    • RandGridDistortion and SmoothField in monai.transforms
    • Support of randomized shuffle buffer in iterable datasets
    • Performance review and enhancements for data type casting
    • Cumulative averaging API with distributed environment support
    • Module utility functions including require_pkg and pytorch_after
    • Various usability enhancements such as allow_smaller when sampling ROI and wrap_sequence when casting object types
    • tifffile support in WSIReader
    • Regression tests for the fast training workflows
    • Various tutorials and demos including educational contents at MONAI Bootcamp 2021

    Changed

    • Base Docker image upgraded to nvcr.io/nvidia/pytorch:21.10-py3 from nvcr.io/nvidia/pytorch:21.08-py3
    • Decoupled TraceKeys and TraceableTransform APIs from InvertibleTransform
    • Skipping affine-based resampling when resample=False in NiftiSaver
    • Deprecated threshold_values: bool and num_classes: int in AsDiscrete
    • Enhanced apply_filter for spatially 1D, 2D and 3D inputs with non-separable kernels
    • Logging with the logging module in downloading and model archives in monai.apps
    • API documentation site now defaults to stable instead of latest
    • skip-magic-trailing-comma in coding style enforcements
    • Pre-merge CI pipelines now include unit tests with Nvidia Ampere architecture

    Removed

    • Support for PyTorch 1.5
    • The deprecated DynUnetV1 and the related network blocks
    • GitHub self-hosted CI/CD pipelines for package releases

    Fixed

    • Support of path-like objects as file path inputs in most modules
    • Issue of decollate_batch for dictionary of empty lists
    • Typos in documentation and code examples in various modules
    • Issue of no available keys when allow_missing_keys=True for the MapTransform
    • Issue of redundant computation when normalization factors are 0.0 and 1.0 in ScaleIntensity
    • Incorrect reports of registered readers in ImageReader
    • Wrong numbering of iterations in StatsHandler
    • Naming conflicts in network modules and aliases
    • Incorrect output shape when reduction="none" in FocalLoss
    • Various usability issues reported by users
  • 0.7.0(Sep 24, 2021)

    Added

    • Overview of new features in v0.7
    • Initial phase of major usability improvements in monai.transforms to support input and backend in PyTorch and NumPy
    • Performance enhancements, with profiling and tuning guides for typical use cases
    • Reproducing training modules and workflows of state-of-the-art Kaggle competition solutions
    • 24 new transforms, including
      • OneOf meta transform
      • DeepEdit guidance signal transforms for interactive segmentation
      • Transforms for self-supervised pre-training
      • Integration of NVIDIA Tools Extension (NVTX)
      • Integration of cuCIM
      • Stain normalization and contextual grid for digital pathology
    • Transchex network for vision-language transformers for chest X-ray analysis
    • DatasetSummary utility in monai.data
    • WarmupCosineSchedule
    • Deprecation warnings and documentation support for better backwards compatibility
    • Padding with additional kwargs and different backend API
    • Additional options such as dropout and norm in various networks and their submodules

    Changed

    • Base Docker image upgraded to nvcr.io/nvidia/pytorch:21.08-py3 from nvcr.io/nvidia/pytorch:21.06-py3
    • Deprecated input argument n_classes, in favor of num_classes
    • Deprecated input argument dimensions and ndims, in favor of spatial_dims
    • Updated the Sphinx-based documentation theme for better readability
    • NdarrayTensor type is replaced by NdarrayOrTensor for simpler annotations
    • Self-attention-based network blocks now support both 2D and 3D inputs

    Removed

    • The deprecated TransformInverter, in favor of monai.transforms.InvertD
    • GitHub self-hosted CI/CD pipelines for nightly and post-merge tests
    • monai.handlers.utils.evenly_divisible_all_gather
    • monai.handlers.utils.string_list_all_gather

    Fixed

    • A Multi-thread cache writing issue in LMDBDataset
    • Output shape convention inconsistencies of the image readers
    • Output directory and file name flexibility issue for NiftiSaver, PNGSaver
    • Requirement of the label field in test-time augmentation
    • Input argument flexibility issues for ThreadDataLoader
    • Decoupled Dice and CrossEntropy intermediate results in DiceCELoss
    • Improved documentation, code examples, and warning messages in various modules
    • Various usability issues reported by users
  • 0.6.0(Jul 8, 2021)

    Added

    • Overview document for feature highlights in v0.6
    • 10 new transforms, a masked loss wrapper, and a NetAdapter for transfer learning
    • APIs to load networks and pre-trained weights from Clara Train Medical Model ARchives (MMARs)
    • Base metric and cumulative metric APIs, 4 new regression metrics
    • Initial CSV dataset support
    • Decollating mini-batch as the default first postprocessing step
    • Initial backward compatibility support via monai.utils.deprecated
    • Attention-based vision modules and UNETR for segmentation
    • Generic module loaders and Gaussian mixture models using the PyTorch JIT compilation
    • Inverse of image patch sampling transforms
    • Network block utilities get_[norm, act, dropout, pool]_layer
    • unpack_items mode for apply_transform and Compose
    • New event INNER_ITERATION_STARTED in the deepgrow interactive workflow
    • set_data API for cache-based datasets to dynamically update the dataset content
    • Fully compatible with PyTorch 1.9
    • --disttests and --min options for runtests.sh
    • Initial support of pre-merge tests with Nvidia Blossom system

    Changed

    • Base Docker image upgraded to nvcr.io/nvidia/pytorch:21.06-py3 from nvcr.io/nvidia/pytorch:21.04-py3
    • Optionally depend on PyTorch-Ignite v0.4.5 instead of v0.4.4
    • Unified the demo, tutorial, testing data to the project shared drive, and Project-MONAI/MONAI-extra-test-data
    • Unified the terms: post_transform is renamed to postprocessing, pre_transform is renamed to preprocessing
    • Unified the postprocessing transforms and event handlers to accept the "channel-first" data format
    • evenly_divisible_all_gather and string_list_all_gather moved to monai.utils.dist

    Removed

    • Support of 'batched' input for postprocessing transforms and event handlers
    • TorchVisionFullyConvModel
    • set_visible_devices utility function
    • SegmentationSaver and TransformsInverter handlers

    Fixed

    • Issue of handling big-endian image headers
    • Multi-thread issue for non-random transforms in the cache-based datasets
    • Persistent dataset issue when multiple processes sharing a non-exist cache location
    • Typing issue with Numpy 1.21.0
    • Loading checkpoint with both model and optimizer using CheckpointLoader when strict_shape=False
    • SplitChannel has different behaviour depending on numpy/torch inputs
    • Transform pickling issue caused by the Lambda functions
    • Issue of filtering by name in generate_param_groups
    • Inconsistencies in the return value types of class_activation_maps
    • Various docstring typos
    • Various usability enhancements in monai.transforms
  • 0.5.3(Jun 1, 2021)

    Changed

    • Project default branch renamed to dev from master
    • Base Docker image upgraded to nvcr.io/nvidia/pytorch:21.04-py3 from nvcr.io/nvidia/pytorch:21.02-py3
    • Enhanced type checks for the iteration_metric handler
    • Enhanced PersistentDataset to use tempfile during caching computation
    • Enhanced various info/error messages
    • Enhanced performance of RandAffine
    • Enhanced performance of SmartCacheDataset
    • Optionally requires cucim when the platform is Linux
    • Default device of TestTimeAugmentation changed to cpu

    Fixed

    • Download utilities now provide better default parameters
    • Duplicated key_transforms in the patch-based transforms
    • A multi-GPU issue in ClassificationSaver
    • A default meta_data issue in SpacingD
    • Dataset caching issue with the persistent data loader workers
    • A memory issue in permutohedral_cuda
    • Dictionary key issue in CopyItemsd
    • box_start and box_end parameters for deepgrow SpatialCropForegroundd
    • Tissue mask array transpose issue in MaskedInferenceWSIDataset
    • Various type hint errors
    • Various docstring typos

    Added

    • Support of to_tensor and device arguments for TransformInverter
    • Slicing options with SpatialCrop
    • Class name alias for the networks for backward compatibility
    • k_divisible option for CropForeground
    • map_items option for Compose
    • Warnings of inf and nan for surface distance computation
    • A print_log flag to the image savers
    • Basic testing pipelines for Python 3.9
  • 0.5.0(Apr 13, 2021)

    Added

    • Overview document for feature highlights in v0.5.0
    • Invertible spatial transforms
      • InvertibleTransform base APIs
      • Batch inverse and decollating APIs
      • Inverse of Compose
      • Batch inverse event handling
      • Test-time augmentation as an application
    • Initial support of learning-based image registration:
      • Bending energy, LNCC, and global mutual information loss
      • Fully convolutional architectures
      • Dense displacement field, dense velocity field computation
      • Warping with high-order interpolation with C++/CUDA implementations
    • Deepgrow modules for interactive segmentation:
      • Workflows with simulations of clicks
      • Distance-based transforms for guidance signals
    • Digital pathology support:
      • Efficient whole slide imaging IO and sampling with Nvidia cuCIM and SmartCache
      • FROC measurements for lesion detection
      • Probabilistic post-processing for lesion detection
      • TorchVision classification model adaptor for fully convolutional analysis
    • 12 new transforms, grid patch dataset, ThreadDataLoader, EfficientNets B0-B7
    • 4 iteration events for the engine for finer control of workflows
    • New C++/CUDA extensions:
      • Conditional random field
      • Fast bilateral filtering using the permutohedral lattice
    • Metrics summary reporting and saving APIs
    • DiceCELoss, DiceFocalLoss, a multi-scale wrapper for segmentation loss computation
    • Data loading utilities:
      • decollate_batch
      • PadListDataCollate with inverse support
    • Support of slicing syntax for Dataset
    • Initial Torchscript support for the loss modules
    • Learning rate finder
    • Allow for missing keys in the dictionary-based transforms
    • Support of checkpoint loading for transfer learning
    • Various summary and plotting utilities for Jupyter notebooks
    • Contributor Covenant Code of Conduct
    • Major CI/CD enhancements covering the tutorial repository
    • Fully compatible with PyTorch 1.8
    • Initial nightly CI/CD pipelines using Nvidia Blossom Infrastructure

    Changed

    • Enhanced list_data_collate error handling
    • Unified iteration metric APIs
    • densenet* extensions are renamed to DenseNet*
    • se_res* network extensions are renamed to SERes*
    • Transform base APIs are rearranged into compose, inverse, and transform
    • _do_transform flag for the random augmentations is unified via RandomizableTransform
    • Decoupled post-processing steps, e.g. softmax, to_onehot_y, from the metrics computations
    • Moved the distributed samplers to monai.data.samplers from monai.data.utils
    • Engine's data loaders now accept generic iterables as input
    • Workflows now accept additional custom events and state properties
    • Various type hints according to Numpy 1.20
    • Refactored testing utility runtests.sh to have --unittest and --net integration tests options
    • Base Docker image upgraded to nvcr.io/nvidia/pytorch:21.02-py3 from nvcr.io/nvidia/pytorch:20.10-py3
    • Docker images are now built with self-hosted environments
    • Primary contact email updated to [email protected]
    • Now using GitHub Discussions as the primary communication forum

    Removed

    • Compatibility tests for PyTorch 1.5.x
    • Format specific loaders, e.g. LoadNifti, NiftiDataset
    • Assert statements from non-test files
    • from module import * statements, addressed flake8 F403

    Fixed

    • Uses American English spelling for code, as per PyTorch
    • Code coverage now takes multiprocessing runs into account
    • SmartCache with initial shuffling
    • ConvertToMultiChannelBasedOnBratsClasses now supports channel-first inputs
    • Checkpoint handler to save with non-root permissions
    • Fixed an issue for exiting the distributed unit tests
    • Unified DynUNet to have single tensor output w/o deep supervision
    • SegmentationSaver now supports user-specified data types and a squeeze_end_dims flag
    • Fixed *Saver event handlers output filenames with a data_root_dir option
    • Load image functions now ensure little-endian
    • Fixed the test runner to support regex-based test case matching
    • Usability issues in the event handlers
  • 0.4.0(Dec 15, 2020)

    Added

    • Overview document for feature highlights in v0.4.0
    • Torchscript support for the net modules
    • New networks and layers:
      • Discrete Gaussian kernels
      • Hilbert transform and envelope detection
      • Swish and Mish activation
      • Acti-norm-dropout block
      • Upsampling layer
      • Autoencoder, Variational autoencoder
      • FCNet
    • Support of initialization from pre-trained weights for densenet, SENet, multichannel AHNet
    • Layer-wise learning rate API
    • New model metrics and event handlers based on occlusion sensitivity, confusion matrix, surface distance
    • CAM/GradCAM/GradCAM++ (see the sketch after this list)
    • File format-agnostic image loader APIs with Nibabel, ITK readers
    • Enhancements for dataset partition, cross-validation APIs
    • New data APIs:
      • LMDB-based caching dataset
      • Cache-N-transforms dataset
      • Iterable dataset
      • Patch dataset
    • Weekly PyPI release
    • Fully compatible with PyTorch 1.7
    • CI/CD enhancements:
      • Skip, speed up, fail fast, timed, quick tests
      • Distributed training tests
      • Performance profiling utilities
    • New tutorials and demos:
      • Autoencoder, VAE tutorial
      • Cross-validation demo
      • Model interpretability tutorial
      • COVID-19 Lung CT segmentation challenge open-source baseline
      • Threadbuffer demo
      • Dataset partitioning tutorial
      • Layer-wise learning rate demo
      • MONAI Bootcamp 2020
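
    A hedged sketch of the class activation mapping utilities listed above; the DenseNet layer name "features" and the input size are assumptions.

    import torch
    from monai.networks.nets import DenseNet121
    from monai.visualize import GradCAM

    model = DenseNet121(spatial_dims=2, in_channels=1, out_channels=2)
    cam = GradCAM(nn_module=model, target_layers="features")

    img = torch.rand(1, 1, 64, 64)          # one single-channel 2D image
    heatmap = cam(x=img, class_idx=None)    # None defaults to the predicted class
    print(heatmap.shape)                    # upsampled to the input spatial size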

    Changed

    • Base Docker image upgraded to nvcr.io/nvidia/pytorch:20.10-py3 from nvcr.io/nvidia/pytorch:20.08-py3

    Backwards Incompatible Changes

    • monai.apps.CVDecathlonDataset is extended to a generic monai.apps.CrossValidation with a dataset_cls option
    • Cache dataset now requires a monai.transforms.Compose instance as the transform argument
    • Model checkpoint file name extensions changed from .pth to .pt
    • Readers' get_spatial_shape returns a numpy array instead of list
    • Decoupled postprocessing steps such as sigmoid, to_onehot_y, mutually_exclusive, and logit_thresh from the metrics and event handlers; these steps should now be applied before computing the metrics
    • ConfusionMatrixMetric and DiceMetric computation now returns an additional not_nans flag to indicate valid results
    • UpSample optional mode supports "deconv", "nontrainable", "pixelshuffle"; interp_mode is only used when mode is "nontrainable"
    • SegResNet optional upsample_mode now supports "deconv", "nontrainable", "pixelshuffle"
    • monai.transforms.Compose class inherits monai.transforms.Transform
    • In the Rotate, Rotated, RandRotate, RandRotated transforms, the angle-related parameters are now interpreted in radians instead of degrees (see the sketch after this list)
    • SplitChannel and SplitChanneld moved from transforms.post to transforms.utility
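
    A small sketch of the radians convention noted in the rotation item above; the angle value and input shape are illustrative assumptions.

    import numpy as np
    import torch
    from monai.transforms import Rotate

    rotate = Rotate(angle=np.pi / 6, keep_size=True)   # 30 degrees, given in radians
    img = torch.rand(1, 64, 64)                        # channel-first 2D image
    print(rotate(img).shape)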

    Removed

    • Support of PyTorch 1.4

    Fixed

    • Enhanced the Dice related loss functions for stability and flexibility
    • Sliding window inference memory and device issues (see the sketch after this list)
    • Revised transforms:
      • Normalize intensity datatype and normalizer types
      • Padding modes for zoom
      • Crop returns coordinates
      • Select items transform
      • Weighted patch sampling
      • Option to keep aspect ratio for zoom
    • Various CI/CD issues
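
    A hedged sketch of sliding-window inference, referenced in the fixes above; the ROI size, overlap and network are illustrative assumptions, and the parameter names follow recent releases.

    import torch
    from monai.inferers import sliding_window_inference
    from monai.networks.nets import UNet

    net = UNet(spatial_dims=3, in_channels=1, out_channels=2,
               channels=(8, 16, 32), strides=(2, 2)).eval()

    volume = torch.rand(1, 1, 96, 96, 96)   # one large input volume
    with torch.no_grad():
        pred = sliding_window_inference(
            inputs=volume,
            roi_size=(64, 64, 64),
            sw_batch_size=2,
            predictor=net,
            overlap=0.25,
        )
    print(pred.shape)                       # (1, 2, 96, 96, 96)
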
  • 0.3.0(Oct 5, 2020)

    Added

    • Overview document for feature highlights in v0.3.0
    • Automatic mixed precision support
    • Multi-node, multi-GPU data parallel model training support
    • 3 new evaluation metric functions
    • 11 new network layers and blocks
    • 6 new network architectures
    • 14 new transforms, including an I/O adaptor
    • Cross validation module for DecathlonDataset
    • Smart Cache module in dataset (see the sketch after this list)
    • monai.optimizers module
    • monai.csrc module
    • Experimental feature of ImageReader using ITK, Nibabel, Numpy, Pillow (PIL Fork)
    • Experimental feature of differentiable image resampling in C++/CUDA
    • Ensemble evaluator module
    • GAN trainer module
    • Initial cross-platform CI environment for C++/CUDA code
    • Code style enforcement now includes isort and clang-format
    • Progress bar with tqdm
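
    A hedged sketch of the Smart Cache mechanism added above; the toy data, cache sizes and transform are illustrative assumptions.

    import numpy as np
    from monai.data import DataLoader, SmartCacheDataset
    from monai.transforms import Compose, Lambda

    data = [np.array([float(i)]) for i in range(20)]        # toy items standing in for images
    transform = Compose([Lambda(func=lambda x: x * 2.0)])   # deterministic, cacheable step

    ds = SmartCacheDataset(
        data=data,
        transform=transform,
        cache_num=8,           # keep only 8 transformed items in memory at a time
        replace_rate=0.25,     # swap out 2 of the 8 cached items per epoch
        num_init_workers=2,
        num_replace_workers=2,
    )
    loader = DataLoader(ds, batch_size=4)

    ds.start()                     # launch the background replacement threads
    for _ in range(3):             # a few "epochs"
        for batch in loader:
            pass                   # training step would go here
        ds.update_cache()          # rotate the next chunk of items into the cache
    ds.shutdown()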

    Changed

    • Now fully compatible with PyTorch 1.6
    • Base Docker image upgraded to nvcr.io/nvidia/pytorch:20.08-py3 from nvcr.io/nvidia/pytorch:20.03-py3
    • Code contributions now require signing off on the Developer Certificate of Origin (DCO)
    • Major work in type hinting finished
    • Remote datasets migrated to Open Data on AWS
    • Optionally depend on PyTorch-Ignite v0.4.2 instead of v0.3.0
    • Optionally depend on torchvision, ITK
    • Enhanced CI tests with 8 new testing environments

    Removed

    Fixed

    • dense_patch_slices incorrect indexing
    • Data type issue in GeneralizedWassersteinDiceLoss
    • ZipDataset return value inconsistencies
    • sliding_window_inference indexing and device issues
    • importing monai modules may cause namespace pollution
    • Random data splits issue in DecathlonDataset
    • Issue of randomising a Compose transform (see the sketch after this list)
    • Various issues in function type hints
    • Typos in docstring and documentation
    • PersistentDataset issue with existing file folder
    • Filename issue in the output writers
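
    A hedged sketch of seeding a Compose of random transforms, related to the randomisation fix above; the transforms and seed are illustrative assumptions.

    import numpy as np
    from monai.transforms import Compose, RandFlip, RandRotate90

    pipeline = Compose([RandRotate90(prob=0.5), RandFlip(prob=0.5, spatial_axis=0)])
    pipeline.set_random_state(seed=42)            # propagates the seed to every random transform

    img = np.zeros((1, 4, 4), dtype=np.float32)   # channel-first toy image
    print(pipeline(img).shape)
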
  • 0.2.0(Jul 7, 2020)

    Added

    • Overview document for feature highlights in v0.2.0
    • Type hints and static type analysis support
    • MONAI/research folder
    • monai.engine.workflow APIs for supervised training
    • monai.inferers APIs for validation and inference
    • 7 new tutorials and examples
    • 3 new loss functions
    • 4 new event handlers
    • 8 new layers, blocks, and networks
    • 12 new transforms, including post-processing transforms
    • monai.apps.datasets APIs, including MedNISTDataset and DecathlonDataset (see the sketch after this list)
    • Persistent caching, ZipDataset, and ArrayDataset in monai.data
    • Cross-platform CI tests supporting multiple Python versions
    • Optional import mechanism
    • Experimental features for third-party transforms integration
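
    A hedged sketch of the dataset APIs listed above; the root directory is a placeholder, "Task04_Hippocampus" is one of the Medical Segmentation Decathlon tasks, and download=True fetches the data on first use.

    from monai.apps import DecathlonDataset
    from monai.transforms import Compose, EnsureChannelFirstd, LoadImaged

    transform = Compose([
        LoadImaged(keys=["image", "label"]),
        EnsureChannelFirstd(keys=["image", "label"]),
    ])

    train_ds = DecathlonDataset(
        root_dir="./data",            # placeholder download/cache location
        task="Task04_Hippocampus",
        section="training",
        transform=transform,
        download=True,
        cache_rate=0.0,               # skip in-memory caching for this quick check
    )
    print(len(train_ds), train_ds[0]["image"].shape)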

    Changed

    For more details, please visit the project wiki.

    • Core modules now require numpy >= 1.17
    • Categorized monai.transforms modules into crop and pad, intensity, IO, post-processing, spatial, and utility
    • Most transforms are now implemented with PyTorch native APIs
    • Code style enforcement and automated formatting workflows now use autopep8 and black
    • Base Docker image upgraded to nvcr.io/nvidia/pytorch:20.03-py3 from nvcr.io/nvidia/pytorch:19.10-py3
    • Enhanced local testing tools
    • Documentation website domain changed to https://docs.monai.io

    Removed

    • Support of Python < 3.6
    • Automatic installation of optional dependencies including pytorch-ignite, nibabel, tensorboard, pillow, scipy, scikit-image

    Fixed

    • Various issues in type and argument names consistency
    • Various issues in docstring and documentation site
    • Various issues in unit and integration tests
    • Various issues in examples and notebooks
  • 0.1.0(Apr 21, 2020)

    Added

    • Public alpha source code under the Apache 2.0 license (highlights)
    • Various tutorials and examples
      • Medical image classification and segmentation workflows
      • Spacing/orientation-aware preprocessing with CPU/GPU and caching
      • Flexible workflows with external engines including PyTorch Ignite and Lightning
    • Various GitHub Actions
      • CI/CD pipelines via self-hosted runners
      • Documentation publishing via readthedocs.org
      • PyPI package publishing
    • Contributing guidelines
    • A project logo and badges