💡 If you want to know more about MONAI Deploy WG vision, overall structure, and guidelines, please read https://github.com/Project-MONAI/monai-deploy first.

MONAI Deploy App SDK

MONAI Deploy App SDK offers a framework and associated tools to design, develop and verify AI-driven applications in the healthcare imaging domain.

Features

  • Build medical imaging inference applications using a flexible, extensible, and usable Pythonic API (a minimal sketch follows this list)
  • Easy management of inference applications via programmable Directed Acyclic Graphs (DAGs)
  • Built-in operators to load DICOM data to be ingested in an inference app
  • Out-of-the-box support for in-proc PyTorch based inference
  • Easy incorporation of MONAI based pre and post transformations in the inference application
  • Package inference application with a single command into a portable MONAI Application Package
  • Locally run and debug your inference application using App Runner
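
Below is a minimal sketch of what such an application can look like with the pre-1.0 Pythonic API. The operator names and the dummy logic are illustrative only; see examples/apps/simple_imaging_app in the repository for a real, supported example.

import numpy as np

import monai.deploy.core as md
from monai.deploy.core import Application, DataPath, ExecutionContext, Image, InputContext, IOType, Operator, OutputContext


@md.input("image", DataPath, IOType.DISK)     # input path given on the command line with -i
@md.output("image", Image, IOType.IN_MEMORY)  # hands an in-memory Image to the next operator
class LoadDummyOperator(Operator):
    """Illustrative operator: notes the input path and emits a small placeholder Image."""

    def compute(self, op_input: InputContext, op_output: OutputContext, context: ExecutionContext):
        _input_path = op_input.get().path  # a real operator would read data from here
        op_output.set(Image(np.zeros((8, 8), dtype=np.uint8)))


@md.input("image", Image, IOType.IN_MEMORY)
@md.output("image", DataPath, IOType.DISK)    # output folder given on the command line with -o
class SaveShapeOperator(Operator):
    """Illustrative operator: writes the received image's shape into the output folder."""

    def compute(self, op_input: InputContext, op_output: OutputContext, context: ExecutionContext):
        image = op_input.get()
        out_dir = op_output.get().path
        out_dir.mkdir(parents=True, exist_ok=True)
        (out_dir / "shape.txt").write_text(str(image.asnumpy().shape))


class App(Application):
    """Two operators wired into a DAG with add_flow, mirroring the bundled simple_imaging_app."""

    def compose(self):
        self.add_flow(LoadDummyOperator(), SaveShapeOperator())


if __name__ == "__main__":
    App(do_run=True)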

User Guide

The user guide is available at docs.monai.io.

Installation

To install the current release, you can simply run:

pip install monai-deploy-app-sdk  # '--pre' to install a pre-release version.

Getting Started

The getting started guide is available here.

pip install monai-deploy-app-sdk  # '--pre' to install a pre-release version.

# Clone monai-deploy-app-sdk repository for accessing examples.
git clone https://github.com/Project-MONAI/monai-deploy-app-sdk.git
cd monai-deploy-app-sdk

# Install necessary dependencies for simple_imaging_app
pip install scikit-image

# Execute the app locally
python examples/apps/simple_imaging_app/app.py -i examples/apps/simple_imaging_app/brain_mr_input.jpg -o output

# Package app (creating MAP Docker image), using `-l DEBUG` option to see progress.
monai-deploy package examples/apps/simple_imaging_app -t simple_app:latest -l DEBUG

# Run the app with docker image and an input file locally
## Copy a test input file to 'input' folder
mkdir -p input && rm -rf input/*
cp examples/apps/simple_imaging_app/brain_mr_input.jpg input/
## Launch the app
monai-deploy run simple_app:latest input output

Tutorials

1) Creating a simple image processing app

2) Creating MedNIST Classifier app

3) Creating a Segmentation app


Examples

Example applications are available at https://github.com/Project-MONAI/monai-deploy-app-sdk/tree/main/examples/apps:

  • ai_spleen_seg_app
  • ai_unetr_seg_app
  • dicom_series_to_image_app
  • mednist_classifier_monaideploy
  • simple_imaging_app

Contributing

For guidance on making a contribution to MONAI Deploy App SDK, see the contributing guidelines.

Community

To participate in the MONAI Deploy WG, please review https://github.com/Project-MONAI/MONAI/wiki/Deploy-Working-Group.

Join the conversation on Twitter @ProjectMONAI or join our Slack channel.

Ask and answer questions over on MONAI Deploy App SDK's GitHub Discussions tab.


Comments
  • simple_app Executing packaged app locally Error

    simple_app Executing packaged app locally Error

    Hi,

    I am new to MONAI and tried the tutorial "01 simple_app" in a Jupyter notebook.

    The MAP was built successfully, but when I tried to run it I got the following error message.

    The commands I used:

    Copy a test input file to 'input' folder

    !mkdir -p testinput && rm -rf input/*
    !cp {test_input_path} testinput/
    !ls testinput

    Launch the app

    !monai-deploy run simple_app:latest testinput output

    ============================================================
    The message I got:

    normal-brain-mri-4.png
    Checking dependencies...
    --> Verifying if "docker" is installed...

    --> Verifying if "simple_app:latest" is available...

    Checking for MAP "simple_app:latest" locally
    "simple_app:latest" found.

    Reading MONAI App Package manifest...
    Going to initiate execution of operator SobelOperator
    Executing operator SobelOperator (Process ID: 1, Operator ID: 896fa9c1-0fce-4ab0-a69d-cf86548bfedd)
    Traceback (most recent call last):
      File "/opt/monai/app/app.py", line 22, in <module>
        App(do_run=True)
      File "/opt/conda/lib/python3.8/site-packages/monai/deploy/core/application.py", line 129, in __init__
        self.run(log_level=args.log_level)
      File "/opt/conda/lib/python3.8/site-packages/monai/deploy/core/application.py", line 429, in run
        executor_obj.run()
      File "/opt/conda/lib/python3.8/site-packages/monai/deploy/core/executors/single_process_executor.py", line 125, in run
        op.compute(op_exec_context.input_context, op_exec_context.output_context, op_exec_context)
      File "/opt/monai/app/sobel_operator.py", line 21, in compute
        input_path = next(input_path.glob("*.*"))  # take the first file
    StopIteration

    ERROR: MONAI Application "simple_app:latest" failed.

    Is there anything wrong, and how can I solve this? Many thanks.

    good first issue help wanted 
    opened by dukeyu2011 20
  • [FEA] Reducing manual copy/paste between MONAI / MONAI Deploy / MONAI Label

    [FEA] Reducing manual copy/paste between MONAI / MONAI Deploy / MONAI Label

    Is your feature request related to a problem? Please describe.

    Currently, manual work (copy/paste code) is required for a MONAI developer to create a MONAI Deploy App. That is, there is no common API between MONAI and MONAI Deploy (outside of importing MONAI utilities/libraries in a MONAI Deploy App). Due to this, an experienced MONAI developer (primarily focused on developing new algorithms/training paradigms) needs to learn the concepts/API in MONAI Deploy and copy/paste code in the appropriate places in a MONAI Deploy Operator and manually ensure that data transfer concepts (e.g. InferContext) in MONAI Deploy are correctly used.

    Analogously, in the case of MONAI Label, an experienced MONAI developer must do much the same: familiarize themselves with the concepts of MONAI Label (e.g. InferTask, TrainTask, etc.) and copy/paste their code into the appropriate sections to ensure that the MONAI Label App functions as expected.

    Describe the solution you'd like

    In addition to researchers "freestyling" training scripts using MONAI, they could also optionally use concepts from MONAI Label and MONAI Stream such as tasks (InferTask, TrainTask, etc.) and contexts (InputContext, OutputContext, ExecutionContext) to represent the conceptual task the user aims for a certain branch of code to accomplish, all while using data transfer types that are portable across MONAI. The aim is ultimately to employ each of the execution branches for the application the user wants to create; this may still require manual adjustment but would overall reduce the amount of manual copy/paste and flatten the learning curve across different MONAI products.

    Describe alternatives you've considered

    Learn each framework in isolation (MONAI, MONAI Label, MONAI Deploy) and copy/paste where appropriate.

    enhancement 
    opened by aihsani 15
  • Add MONAI Bundle inference operator and update example app

    Add MONAI Bundle inference operator and update example app

    This PR is based on the bundle_addtion branch that @ericspod created for automating the parsing of MONAI Bundle and execution of inference, with some notable changes:

    • The base Application uses ModelFactory to load model files, e.g. a MONAI Bundle TorchScript file, and makes the model network and model file path available through the ExecutionContext. The bundle operator relies on this mechanism to retrieve the bundle path and completes the bundle parsing when the operator is executed. The operator does support an optional arg for passing in the bundle path to its constructor so that parsing can be completed on object creation. The models folder is known to the App at its creation time, but the logic for selecting a specific named model out of potentially multiple models lives in the ModelFactory. The App's creation and execution sequence needs to be looked at in future releases so that the named model and its path can be retrieved at App creation time and the bundle path can be reliably passed to the bundle operator on its creation.
    • Similarly, regarding the sequence of creation and execution, the bundle operator needs to have its input and output defined on creation so that the App's executor can validate their compatibility with connected operators. Also, the operator input and output can have IN_MEMORY and DISK storage types, which is information that does not come from the bundle, so the application needs to define it when creating the bundle operator instance (a hedged wiring sketch follows this list).
    • Added support for parsing metadata from the input image and preserving that metadata. This is required so that the inversion in post-processing can work. Also added is special treatment for parsing the image object generated from DICOM instances, as the order of dims and the names of some metadata differ from what the transforms expect.
    • Although the bundle operator will typically be used in between other operators, e.g. the DICOM image converter and the seg-image-to-DICOM writer operators, a user might use the bundle operator as the first or leaf operator. For such cases, the app executor requires the storage type to be DISK, so the input or output needs to be files. To avoid supporting many file formats in this initial release of the bundle operator, only Python pickle files are supported. Pickle files do have some security concerns, though the content is only used as a pure data object and is validated after loading, which mitigates the risk.
    • The bundle operator can be used in an application that has multiple models, by specifying the model name (the ModelFactory has a specification on model folder structure when multiple models are used).
    • The bundle operator and the example app have been tested with the Spleen_CT_Segmentation bundle from MONAI Model Zoo.
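
    For context, here is a hedged wiring sketch. The class, argument, and label names follow the later ai_spleen_seg_app example and may differ from this PR's exact API.

    from monai.deploy.core import Application, Image, IOType
    from monai.deploy.operators.dicom_series_to_volume_operator import DICOMSeriesToVolumeOperator
    from monai.deploy.operators.monai_bundle_inference_operator import IOMapping, MonaiBundleInferenceOperator


    class BundleSegApp(Application):
        def compose(self):
            series_to_volume = DICOMSeriesToVolumeOperator()

            # Labels, data types, and storage types are declared here because they do not come
            # from the bundle; the bundle itself is resolved from the app's models folder at run time.
            bundle_infer = MonaiBundleInferenceOperator(
                input_mapping=[IOMapping("image", Image, IOType.IN_MEMORY)],
                output_mapping=[IOMapping("pred", Image, IOType.IN_MEMORY)],
            )

            self.add_flow(series_to_volume, bundle_infer, {"image": "image"})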
    enhancement 
    opened by MMelQin 12
  • Implement highdicom seg operator

    Implement highdicom seg operator

    I am recreating this pull request since I completely borked #298 due to the (extremely frustrating) DCO sign off requirement and subsequent attempts to fix it that made it worse :persevere:

    This PR introduces the highdicom library to handle the creation of DICOM segmentations. This should lead to more maintainable code by using the well-tested highdicom package for the complex task of segmentation creation, and should make adding new features to the segmentation writer operator considerably easier by leveraging the prior work in that library. Note that the complexity of the ad-hoc code in the dicom segmentation operator file is reduced significantly.

    Unlike the original version of #298 I have now removed the highdicom class highdicom.seg.SegmentDescription from the public API of the monai deploy class. However I have left the pydicom.sr.coding.Code class in the public API as we discussed in the last meeting.

    I have not yet changed the examples to use the new version of the operator, as this requires merging with some recent changes due to the bundle that I still need to get my head around. This will need to be addressed before merging so that the examples run. But I would like to get agreement on the API of the class before fixing the examples.

    Update: All example apps that make use of the Segmentation Writer have been updated in this PR, and all use the appropriate codes. Also, MONAI core is pinned at v0.9.0 because the latest version, 0.9.1, is not compatible with the apps (this is unrelated to the new Seg Writer itself).

    enhancement 
    opened by CPBridge 9
  • Introduce highdicom library for DICOM Segmentations

    Introduce highdicom library for DICOM Segmentations

    Addresses #195 (partially)

    Introduces the highdicom library to handle the creation of DICOM segmentations. This should lead to more maintainable code by using the well-tested highdicom package for the complex task of segmentation creation, and should make adding new features to the segmentation writer operator considerably easier by leveraging the prior work in that library. Note that the complexity of the ad-hoc code in the dicom segmentation operator file is reduced significantly.

    I am considering this a largely complete "minimum viable" implementation and would appreciate feedback at this stage about whether we should go forward with this and what may need to change. Note that this is a backwards incompatible change to the signature of the DICOMSegmentationWriterOperator's __init__ method. It is now necessary to pass a list of highdicom.seg.SegmentDescription objects to the constructor to describe each segment in the created segmentation. I'd argue that the existing implementation, which hard codes in some default values, is likely to create dicom segs that are semantically incorrect in many cases (e.g. hard coding the segmented category as organ when it may be an abnormality such as a tumor or nodule). I am assuming that this backwards incompatibility is acceptable at this stage since the docstring states that the interface may change. Would appreciate thoughts on this.

    I have attempted to update the two examples that use the seg writer: the spleen and liver tumor segmentations. I have managed to test the spleen example and visualize the results, and it's working nicely. I have not managed to test the liver tumor example. I may be being stupid, but I can't find where to download the relevant files to actually run it.

    Another thing I'm a bit confused about is where and how to add highdicom as an (optional) requirement. There are lots of requirements files and I wasn't totally sure how it all fits together. For now I just added highdicom to requirements_dev.txt. Please let me know whether this is correct.

    My apologies that this took much longer than I anticipated to make happen (I largely blame this monster). Now I've figured out how this stuff all fits together, adding further operators (such as SR, GSPS) using the capabilities in highdicom should become much easier if we choose to merge this.

    Tagging highdicom author and co-maintainer @hackermd for visibility.

    enhancement 
    opened by CPBridge 9
  • [BUG] monai.deploy.exceptions.ItemNotExistsError: A predictor of the model is not set

    [BUG] monai.deploy.exceptions.ItemNotExistsError: A predictor of the model is not set

    Describe the bug

    We are working on deploying spleen segmentation using MONAI Deploy. We are using the pretrained spleen segmentation model from NGC (https://catalog.ngc.nvidia.com/orgs/nvidia/teams/med/models/clara_pt_spleen_ct_segmentation).

    Similarly, we are using the app.py from the examples in monai-deploy-app-sdk (https://github.com/Project-MONAI/monai-deploy-app-sdk/tree/main/examples/apps/ai_spleen_seg_app). When we run python app.py -i ./dcm -o ./output -m ./model.pt, we get the following error:

    raise ItemNotExistsError("A predictor of the model is not set.")
    monai.deploy.exceptions.ItemNotExistsError: A predictor of the model is not set.

    We tried our own trained model and still got the same error.

    Steps/Code to reproduce bug

    1. Used app.py from the examples in monai-deploy-app-sdk (https://github.com/Project-MONAI/monai-deploy-app-sdk/tree/main/examples/apps/ai_spleen_seg_app)
    2. Ran python app.py -i ./dcm -o ./output -m ./model.pt
      1. Input images from ai_spleen_seg_data, which is provided in the MONAI Deploy Segmentation App documentation
      2. Used the pretrained spleen segmentation model from NGC (https://catalog.ngc.nvidia.com/orgs/nvidia/teams/med/models/clara_pt_spleen_ct_segmentation)

    Expected behavior

    We expect the pretrained model from the link above to also produce the output without any issues.

    Environment details (please complete the following information)

    • OS/Platform: linux 4.15.0-136-generic
    • Python Version: Python 3.8.10 | packaged by conda-forge
    • Method of MONAI Deploy App SDK install: [pip, conda, Docker, or from source] : pip install monai-deploy-app-sdk
    • SDK Version: 0.2.0

    Additional context

    This works fine with the spleen model downloaded from the Google Drive link (https://drive.google.com/uc?id=1uTQsm8omwimBcp_kRXlduWBP2M6cspr1) referenced in the MONAI Deploy Segmentation App tutorial documentation.

    Is there anything specific we need to do when saving and using the model with the Deploy App SDK?

    Any suggestion and help is appreciated. Thank you
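
    Not an answer from this thread, but for illustration: the SDK's built-in inference path expects a TorchScript model file, while many NGC checkpoints are plain state dicts. A hedged sketch of a conversion follows, where the UNet arguments and the "model" checkpoint key are assumptions that must match the trained network.

    import torch
    from monai.networks.nets import UNet

    # Assumed network definition; it must match the architecture the checkpoint was trained with.
    net = UNet(spatial_dims=3, in_channels=1, out_channels=2,
               channels=(16, 32, 64, 128, 256), strides=(2, 2, 2, 2), num_res_units=2)

    checkpoint = torch.load("model.pt", map_location="cpu")
    state_dict = checkpoint.get("model", checkpoint)  # some checkpoints nest weights under "model"
    net.load_state_dict(state_dict)
    net.eval()

    # The resulting TorchScript file can then be passed to the app, e.g. python app.py ... -m model_ts.pt
    torch.jit.script(net).save("model_ts.pt")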

    bug 
    opened by Suchi97 8
  • Enhance DICOM Series Selection Operator

    Enhance DICOM Series Selection Operator

    User Story

    As an application developer, I would like to be able to select a subset of DICOM Series given one or more DICOM Studies. My selection criteria are based on conditions imposed on DICOM attributes embedded in the SOP instances that belong to one or more series.

    Background

    Often an app developer needs to do some work to identify what type of medical images are applicable for a given AI model. As data flows in from sources such as a modality or PACS, it becomes challenging to automatically select the data that should be provided so that an AI model can ingest it and produce a correct result. As of v0.1.0, this operator always returns the first series of a given Study.

    Success Criteria

    • A way to specify selection criteria using key-value pairs of DICOM attributes (a hypothetical sketch follows this list)
    • A way to parse through DICOM datasets and identify the right study --> series --> sop instance sets given the criteria
    • An example application that showcases such series selection using this operator
    • Updated API documentation for the operator
    • Unit Test cases that exercise multiple scenarios for this Operator
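
    A hypothetical sketch of what such key-value selection criteria could look like; the JSON schema and the operator's rules argument are illustrative assumptions, not the final design.

    from monai.deploy.operators.dicom_series_selector_operator import DICOMSeriesSelectorOperator

    # Hypothetical rule text: select CT series whose ImageType contains PRIMARY/ORIGINAL.
    Sample_Rules_Text = """
    {
        "selections": [
            {
                "name": "CT Series",
                "conditions": {
                    "Modality": "(?i)CT",
                    "ImageType": ["PRIMARY", "ORIGINAL"]
                }
            }
        ]
    }
    """

    series_selector_op = DICOMSeriesSelectorOperator(rules=Sample_Rules_Text)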
    enhancement 
    opened by rahul-imaging 8
  • [BUG] MONAI 0.9.1 is not (backwards) compatible with MONAI Seg Inference Operator

    [BUG] MONAI 0.9.1 is not (backwards) compatible with MONAI Seg Inference Operator

    Describe the bug

    Steps/Code to reproduce bug

    • pip install monai v0.9.1
    • run the existing example apps: Spleen, LiverTumor, UNETR, etc.

    Expected behavior

    Ideally, monai v0.9.1 should not have broken existing consumers, especially when it is supposed to be largely backward compatible.

    Given that monai v0.9.1 has been released and published, the App SDK needs to declare incompatibility with this version until the App SDK implementation is updated and tested working with monai 0.9.1.
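
    Until then, a consumer-side workaround is to pin MONAI core to the previous release, as noted in the related Seg Writer PR:

    pip install monai==0.9.0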

    Environment details (please complete the following information)

    • OS/Platform: Ubuntu 20.04 LTS, GPU Nvidia GV100 with 32 GB mem
    • Python Version: 3.8
    • Method of MONAI Deploy App SDK install: [pip install from source]
    • SDK Version: main branch

    Additional context

    See attached log files from running the apps (to be attached), with a brief snippet shown below:

    Traceback (most recent call last):
      File "/home/mqin/src/monai-app-sdk/.venv/lib/python3.8/site-packages/monai/transforms/transform.py", line 90, in apply_transform
        return _apply_transform(transform, data, unpack_items)
      File "/home/mqin/src/monai-app-sdk/.venv/lib/python3.8/site-packages/monai/transforms/transform.py", line 54, in _apply_transform
        return transform(parameters)
      File "/home/mqin/src/monai-app-sdk/.venv/lib/python3.8/site-packages/monai/transforms/post/dictionary.py", line 642, in __call__
        transform_info = d[InvertibleTransform.trace_key(orig_key)]
    KeyError: 'image_transforms'
    
    The above exception was the direct cause of the following exception:
    
    Traceback (most recent call last):
      File "/home/mqin/src/monai-app-sdk/.venv/bin/monai-deploy", line 33, in <module>
        sys.exit(load_entry_point('monai-deploy-app-sdk', 'console_scripts', 'monai-deploy')())
      File "/home/mqin/src/monai-app-sdk/.venv/lib/python3.8/site-packages/monai/deploy/cli/main.py", line 116, in main
        execute_exec_command(args)
      File "/home/mqin/src/monai-app-sdk/.venv/lib/python3.8/site-packages/monai/deploy/cli/exec_command.py", line 76, in execute_exec_command
        runpy.run_path(app_path, run_name="__main__")
      File "/usr/lib/python3.8/runpy.py", line 265, in run_path
        return _run_module_code(code, init_globals, run_name,
      File "/usr/lib/python3.8/runpy.py", line 97, in _run_module_code
        _run_code(code, mod_globals, init_globals,
      File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
        exec(code, run_globals)
      File "/home/mqin/src/monai-app-sdk/examples/apps/ai_livertumor_seg_app/app.py", line 90, in <module>
        app_instance.run()
      File "/home/mqin/src/monai-app-sdk/examples/apps/ai_livertumor_seg_app/app.py", line 37, in run
        super().run(*args, **kwargs)
      File "/home/mqin/src/monai-app-sdk/.venv/lib/python3.8/site-packages/monai/deploy/core/application.py", line 429, in run
        executor_obj.run()
      File "/home/mqin/src/monai-app-sdk/.venv/lib/python3.8/site-packages/monai/deploy/core/executors/single_process_executor.py", line 125, in run
        op.compute(op_exec_context.input_context, op_exec_context.output_context, op_exec_context)
      File "/home/mqin/src/monai-app-sdk/examples/apps/ai_livertumor_seg_app/livertumor_seg_operator.py", line 105, in compute
        infer_operator.compute(op_input, op_output, context)
      File "/home/mqin/src/monai-app-sdk/.venv/lib/python3.8/site-packages/monai/deploy/operators/monai_seg_inference_operator.py", line 226, in compute
        d = [post_transforms(i) for i in decollate_batch(d)]
      File "/home/mqin/src/monai-app-sdk/.venv/lib/python3.8/site-packages/monai/deploy/operators/monai_seg_inference_operator.py", line 226, in <listcomp>
        d = [post_transforms(i) for i in decollate_batch(d)]
      File "/home/mqin/src/monai-app-sdk/.venv/lib/python3.8/site-packages/monai/transforms/compose.py", line 173, in __call__
        input_ = apply_transform(_transform, input_, self.map_items, self.unpack_items, self.log_stats)
      File "/home/mqin/src/monai-app-sdk/.venv/lib/python3.8/site-packages/monai/transforms/transform.py", line 114, in apply_transform
        raise RuntimeError(f"applying transform {transform}") from e
    RuntimeError: applying transform <monai.transforms.post.dictionary.Invertd object at 0x7f5e3e4ac4f0>
    
    bug 
    opened by MMelQin 6
  • MONAI Seg Inference Operator Shared Memory

    MONAI Seg Inference Operator Shared Memory

    https://github.com/Project-MONAI/monai-deploy-app-sdk/blob/8ec336fffffbaf54cbe3464540f4e36763d92360/monai/deploy/operators/monai_seg_inference_operator.py#L211

    Applications that use the DataLoader in the MONAI Seg Inference Operator throw errors when trying to allocate shared memory in Argo pipelines.

    The num_workers argument needs to be modified so that shared memory is not used by default (a hedged sketch follows).
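
    A hedged sketch of the kind of change being asked for; the dataset below is a placeholder, and the point is that num_workers=0 keeps loading in-process so no worker shared memory is allocated.

    from monai.data import Dataset, DataLoader

    dataset = Dataset(data=[{"image": "volume.nii.gz"}])  # placeholder sample list
    # num_workers=0 avoids spawning worker processes, and therefore avoids the /dev/shm
    # allocation that can fail in memory-constrained Argo pods.
    dataloader = DataLoader(dataset, batch_size=1, num_workers=0)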

    enhancement 
    opened by slbryson 6
  • [BUG] Codes used to describe segments in DICOM SEG writer should be improved

    [BUG] Codes used to describe segments in DICOM SEG writer should be improved

    Describe the bug

    Codes used to describe segments in the DICOM SEG writer should be improved.

    Steps/Code to reproduce bug

    See https://github.com/Project-MONAI/monai-deploy-app-sdk/blob/294fb6e185921a621ad28754d5b97999485e803e/monai/deploy/operators/dicom_seg_writer_operator.py#L440-L463.

    Expected behavior

    1. SegmentationCategoryType should refer to the code corresponding to the specific organ. Currently, there is a general code for "Organ", with the specifics about which organ is segmented listed in the SegmentLabel. This is not the intended use of this object.
    2. Codes should use SNOMED CT (SCT) codes; the currently used SRT codes have been deprecated in DICOM (a hedged sketch follows this list).
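
    A hedged sketch of a segment description using specific SCT codes; the argument names follow the SegmentDescription later exposed by the SDK's seg writer and may differ by version.

    from pydicom.sr.codedict import codes
    from monai.deploy.operators.dicom_seg_writer_operator import SegmentDescription

    spleen_segment = SegmentDescription(
        segment_label="Spleen",
        segmented_property_category=codes.SCT.Organ,  # the general category
        segmented_property_type=codes.SCT.Spleen,     # the specific organ, as an SCT code rather than SRT
        algorithm_name="volumetric (3D) segmentation of the spleen from CT image",
        algorithm_family=codes.DCM.ArtificialIntelligence,
        algorithm_version="0.1.0",
    )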

    Environment details (please complete the following information)

    • SDK Version: current as of submitting

    Additional context

    • Relevant commit in dcmqi: https://github.com/QIICR/dcmqi/commit/0cf28290be3ab15afc818b3d1d032f70a8c782a0
    • why don't you just use highdicom to write DICOM SEG? https://github.com/herrmannlab/highdicom
    bug 
    opened by fedorov 6
  • ClaraVizOperator import Fails

    ClaraVizOperator import Fails

    Hello everyone. After getting the latest main branch and upgrading MONAI Deploy, running the 03_segmentation_viz_app gives the following error:

    ---------------------------------------------------------------------------
    ModuleNotFoundError                       Traceback (most recent call last)
    /tmp/ipykernel_510135/3062323273.py in <module>
         27 from monai.deploy.operators.dicom_series_selector_operator import DICOMSeriesSelectorOperator
         28 from monai.deploy.operators.dicom_series_to_volume_operator import DICOMSeriesToVolumeOperator
    ---> 29 from monai.deploy.operators.clara_viz_operator import ClaraVizOperator
    
    ModuleNotFoundError: No module named 'monai.deploy.operators.clara_viz_operator'
    

    I checked that the file clara_viz_operator.py exists in my operators folder. In addition, I checked __init__.py and the required line exists:

     29 from .clara_viz_operator import ClaraVizOperator
    

    I do not understand what is going on. Any comments, @AndreasHeumann?

    bug 
    opened by vikashg 6
  • App Segmentation Example

    App Segmentation Example

    I am trying to run this example: https://docs.monai.io/projects/monai-deploy-app-sdk/en/latest/notebooks/tutorials/03_segmentation_app.html

    but I am getting this error when importing the libraries:

    import logging
    from os import path

    from numpy import uint8

    import monai.deploy.core as md
    from monai.deploy.core import ExecutionContext, Image, InputContext, IOType, Operator, OutputContext
    from monai.deploy.operators.monai_seg_inference_operator import InMemImageReader, MonaiSegInferenceOperator
    from monai.transforms import (
        Activationsd,
        AsDiscreted,
        Compose,
        EnsureChannelFirstd,
        EnsureTyped,
        Invertd,
        LoadImaged,
        Orientationd,
        SaveImaged,
        ScaleIntensityRanged,
        Spacingd,
    )

    # Required for setting SegmentDescription attributes. Direct import as this is not part of App SDK package.
    from pydicom.sr.codedict import codes

    from monai.deploy.core import Application, resource
    from monai.deploy.operators.dicom_data_loader_operator import DICOMDataLoaderOperator
    from monai.deploy.operators.dicom_seg_writer_operator import DICOMSegmentationWriterOperator, SegmentDescription
    from monai.deploy.operators.dicom_series_selector_operator import DICOMSeriesSelectorOperator
    from monai.deploy.operators.dicom_series_to_volume_operator import DICOMSeriesToVolumeOperator


    ImportError                               Traceback (most recent call last)
    /tmp/ipykernel_1969259/585954608.py in <module>
         26 from monai.deploy.core import Application, resource
         27 from monai.deploy.operators.dicom_data_loader_operator import DICOMDataLoaderOperator
    ---> 28 from monai.deploy.operators.dicom_seg_writer_operator import DICOMSegmentationWriterOperator, SegmentDescription
         29 from monai.deploy.operators.dicom_series_selector_operator import DICOMSeriesSelectorOperator
         30 from monai.deploy.operators.dicom_series_to_volume_operator import DICOMSeriesToVolumeOperator

    ImportError: cannot import name 'SegmentDescription' from 'monai.deploy.operators.dicom_seg_writer_operator' (/home/sonosa/anaconda3/envs/monai/lib/python3.7/site-packages/monai/deploy/operators/dicom_seg_writer_operator.py)

    opened by acamargosonosa 0
  • Bug running docker

    Bug running docker

    Describe the bug

    I am following the example at https://docs.monai.io/projects/monai-deploy-app-sdk/en/0.2.1/getting_started/tutorials/02_mednist_app.html

    I was able to run everything except the Docker part: monai-deploy run mednist_app:latest input output

    I am getting this error:

    (monai) sonosa@sonosa-MS-7B17:~/2022/ProjectsAI/monai$ monai-deploy run mednist_app:latest input output_docker_gpu
    /home/sonosa/anaconda3/envs/monai/lib/python3.7/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: /home/sonosa/anaconda3/envs/monai/lib/python3.7/site-packages/torchvision/image.so: undefined symbol: _ZNK3c1010TensorImpl36is_contiguous_nondefault_policy_implENS_12MemoryFormatE
      warn(f"Failed to load image Python extension: {e}")
    Checking dependencies...
    --> Verifying if "docker" is installed...

    --> Verifying if "mednist_app:latest" is available...

    Checking for MAP "mednist_app:latest" locally "mednist_app:latest" found.

    Reading MONAI App Package manifest... --> Verifying if "nvidia-docker" is installed...

    /opt/conda/lib/python3.8/site-packages/scipy/__init__.py:138: UserWarning: A NumPy version >=1.16.5 and <1.23.0 is required for this version of SciPy (detected version 1.23.5)
      warnings.warn(f"A NumPy version >={np_minversion} and <{np_maxversion} is required for this version of "
    Going to initiate execution of operator LoadPILOperator
    Executing operator LoadPILOperator (Process ID: 1, Operator ID: a37656f0-65c6-46ca-8fdf-e8477bab45d2)
    Done performing execution of operator LoadPILOperator

    Going to initiate execution of operator MedNISTClassifierOperator
    Executing operator MedNISTClassifierOperator (Process ID: 1, Operator ID: c9f285dd-3b8c-4ef9-b700-cd6ac49186a0)
    /root/.local/lib/python3.8/site-packages/monai/utils/deprecate_utils.py:107: FutureWarning: <class 'monai.transforms.utility.array.AddChannel'>: Class AddChannel has been deprecated since version 0.8. please use MetaTensor data type and monai.transforms.EnsureChannelFirst instead.
      warn_deprecated(obj, msg, warning_category)
    /root/.local/lib/python3.8/site-packages/monai/utils/type_conversion.py:134: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at /opt/pytorch/pytorch/torch/csrc/utils/tensor_numpy.cpp:175.)
      tensor = torch.as_tensor(tensor, **kwargs)
    device found : cuda
    terminate called after throwing an instance of 'c10::Error'
      what(): isTuple()INTERNAL ASSERT FAILED at "/opt/pytorch/pytorch/aten/src/ATen/core/ivalue_inl.h":1397, please report a bug to PyTorch. Expected Tuple but got String
    Exception raised from toTuple at /opt/pytorch/pytorch/aten/src/ATen/core/ivalue_inl.h:1397 (most recent call first):
    frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits, std::allocator >) + 0x6c (0x7f27560b224c in /opt/conda/lib/python3.8/site-packages/torch/lib/libc10.so)
    frame #1: c10::detail::torchCheckFail(char const, char const, unsigned int, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&) + 0xfa (0x7f275607da66 in /opt/conda/lib/python3.8/site-packages/torch/lib/libc10.so)
    frame #2: c10::detail::torchInternalAssertFail(char const*, char const*, unsigned int, char const*, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&) + 0x53 (0x7f27560b0233 in /opt/conda/lib/python3.8/site-packages/torch/lib/libc10.so)
    frame #3: + 0x4224e29 (0x7f27a23b4e29 in /opt/conda/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
    frame #4: + 0x42253e9 (0x7f27a23b53e9 in /opt/conda/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
    frame #5: torch::jit::SourceRange::highlight(std::ostream&) const + 0x48 (0x7f279f3d5c58 in /opt/conda/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
    frame #6: torch::jit::ErrorReport::what() const + 0x2c3 (0x7f279f3baac3 in /opt/conda/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
    frame #7: + 0x9ea44f (0x7f27a873344f in /opt/conda/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
    frame #8: + 0x9fa12d (0x7f27a874312d in /opt/conda/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
    frame #45: __libc_start_main + 0xf3 (0x7f27e722d083 in /usr/lib/x86_64-linux-gnu/libc.so.6)

    ERROR: MONAI Application "mednist_app:latest" failed.

    =================================================================

    Any help would be appreciated. Just in case, I tested nvidia-docker and it is working very well.

    bug 
    opened by acamargosonosa 3
  • GSPS Writer Operators for Object Detection and Segmentation

    GSPS Writer Operators for Object Detection and Segmentation

    Add operators to write out GSPS DICOM objects for:

    • Object detection (bounding box)
    • Segmentations (overlay boundaries)

    We can leverage highdicom to do this.

    enhancement 
    opened by CPBridge 1
  • [BUG] STL generated for a prostate segmentation appears like a donut!

    [BUG] STL generated for a prostate segmentation appears like a donut!

    Describe the bug

    A prostate segmentation application is built with App SDK 0.5 and a model trained with MONAI core v1. It takes in an MR T2 series and creates the segmentation image, which then gets written as a DICOM Seg as well as transformed into an STL image. The former looks good, while the latter looks like a squished donut.

    Steps/Code to reproduce bug

    The application code is super simple since it uses the built-in MONAI bundle inference operator as well as others, though the model itself is proprietary and not to be shared publicly.

    Expected behavior

    The STL generated from the seg image (numpy + metadata) should closely represent the seg image.

    Environment details (please complete the following information)

    • OS/Platform: Ubuntu 20.04 LTS
    • Python Version: 3.8
    • Method of MONAI Deploy App SDK install: [pip, and from source]
    • SDK Version: 0.5

    Additional context

    The STL operator has been tested with the liver and spleen applications and generated the correct mesh images. Note that those seg images have hundreds of slices, while the prostate MR has only around 20 slices; even so, the 3D spacings are expected to be correct irrespective of the number of slices.


    bug 
    opened by MMelQin 0
  • [FEA] Provide tracing and performance statistics gathering capability in App SDK

    [FEA] Provide tracing and performance statistics gathering capability in App SDK

    Is your feature request related to a problem? Please describe.

    With the use of the logging library, application execution logs can be gathered at various logging levels per configuration. However, as the App SDK matures, especially when venturing into clinical deployment (IRB'ed), there is a stronger need to provide additional insight into application execution, e.g. activity audits and performance metrics, both in terms of latency and, ideally, result accuracy.

    This is important when AI applications are deployed and used in imaging workflows, as IT/PACS admins always need insight into the latency, resource usage, and clinical performance of the applications, so as to better manage this critical workflow stage for operational, clinical, as well as regulatory requirements.

    Describe the solution you'd like

    A logging framework that supports pre-defined as well as custom categories of metrics collection, while making the aggregation and delivery of the metrics dynamic and configurable, e.g. records gathered could be delivered to a stats-gathering agent on the host system that runs the MAP.

    Describe alternatives you've considered

    Use the native logging library with logging levels combined with defined string literals indicating the category of information, e.g. the default being the execution log, "PERF" for elapsed time of blocks of execution, etc. (a minimal sketch follows).
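
    A minimal sketch of that alternative using only the standard library; the "PERF" prefix is just the convention mentioned above, not an SDK API.

    import logging
    import time

    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger("app")

    start = time.perf_counter()
    # ... run an operator's compute() or another block of work here ...
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    logger.info("PERF operator=MyInferenceOperator elapsed_ms=%.1f", elapsed_ms)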

    Additional context

    enhancement 
    opened by MMelQin 1
  • Create a Model Monitor [FEA]

    Create a Model Monitor [FEA]

    Is your feature request related to a problem? Please describe.

    Often we would like to know how the model is performing on the data over a period of time and to maintain overall statistics of the model's performance. These statistics can be further granularized to specific centers and demographics. If several models are deployed, we can use this to track the performance of each separate model.

    Describe the solution you'd like

    A web-based dashboard where the model performance can be displayed. Something like TensorBoard.

    Describe alternatives you've considered

    None so far.

    Additional context

    These statistics can further be used to design model-updating rules in a continuous fashion.

    enhancement architectural story Contribution wanted integration 
    opened by vikashg 2