Data Consistency for Magnetic Resonance Imaging

Overview

Data Consistency (DC) is crucial for generalization across multi-modal MRI data and for robustness in detecting pathology.

This repo implements the following reconstruction methods:

  • Cascades of Independently Recurrent Inference Machines (CIRIM) [1],
  • Independently Recurrent Inference Machines (IRIM) [2, 3],
  • End-to-End Variational Network (E2EVN) [4, 5],
  • the UNet [5, 6],
  • Compressed Sensing (CS) [7], and
  • zero-filled reconstruction (ZF; a minimal sketch follows this list).
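
For orientation, the sketch below shows what the zero-filled (ZF) baseline amounts to: an inverse Fourier transform of the undersampled k-space followed by root-sum-of-squares coil combination. It is a generic NumPy illustration, not the mridc implementation, and assumes multi-coil k-space of shape (coils, height, width).

    import numpy as np

    def zero_filled_recon(kspace, mask):
        # kspace: complex array of shape (coils, height, width); mask: 1 where sampled, 0 elsewhere.
        masked = kspace * mask  # non-acquired samples stay zero, hence "zero-filled"
        coil_images = np.fft.fftshift(
            np.fft.ifft2(np.fft.ifftshift(masked, axes=(-2, -1)), axes=(-2, -1), norm="ortho"),
            axes=(-2, -1))  # centred 2D inverse FFT per coil
        return np.sqrt((np.abs(coil_images) ** 2).sum(axis=0))  # root-sum-of-squares magnitude image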

The CIRIM, the RIM, and the E2EVN perform unrolled optimization by gradient descent, so DC is enforced implicitly. Through cascades, DC can also be enforced explicitly by a designed term [1, 4].
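
To make the two flavours of DC concrete, here is a hedged single-coil sketch (sensitivity maps omitted): the data-fidelity gradient that an unrolled gradient-descent scheme feeds back, enforcing DC implicitly, and an explicit soft data-consistency step that replaces acquired k-space samples. This is a generic formulation, not the exact term used in [1, 4].

    import numpy as np

    def fft2c(x):
        # Centred, orthonormal 2D FFT over the last two axes.
        return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(x, axes=(-2, -1)),
                                           axes=(-2, -1), norm="ortho"), axes=(-2, -1))

    def ifft2c(k):
        return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(k, axes=(-2, -1)),
                                            axes=(-2, -1), norm="ortho"), axes=(-2, -1))

    def dc_gradient(image, y, mask):
        # Gradient of 0.5 * ||M F x - y||^2 with respect to the image estimate x
        # (the quantity an unrolled gradient-descent scheme passes to the network).
        return ifft2c(mask * (fft2c(image) - y))

    def soft_dc(image, y, mask, lamda=1.0):
        # Explicit DC: weighted replacement of the acquired k-space samples;
        # lamda = 1 corresponds to hard replacement.
        k = fft2c(image)
        k = (1 - mask) * k + mask * ((1 - lamda) * k + lamda * y)
        return ifft2c(k)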

Installation

You can install mridc with pip:

pip install mridc
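
A quick way to confirm the installation (this assumes the package exposes a __version__ attribute, which may vary between releases):

    import mridc
    print(getattr(mridc, "__version__", "installed"))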

Usage

Check scripts for how to train models and run a method for reconstruction.

Check tools for preprocessing and evaluation utilities.
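
As a rough illustration of the kind of evaluation such tools perform, the snippet below scores a reconstruction against a reference image with PSNR and SSIM via scikit-image; it is a generic example, not the repo's own evaluation script.

    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def evaluate_reconstruction(reconstruction, target):
        # Both inputs: real-valued 2D magnitude images on the same intensity scale.
        data_range = float(target.max() - target.min())
        return {
            "PSNR": peak_signal_noise_ratio(target, reconstruction, data_range=data_range),
            "SSIM": structural_similarity(target, reconstruction, data_range=data_range),
        }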

Recommended public datasets to use with this repo:

Documentation


Read the docs here

License

License: Apache 2.0

Citation

Check the CITATION.cff file or use the repository's citation widget. Alternatively, cite as:

@misc{mridc,
  author = {Karkalousos, Dimitrios and Caan, Matthan},
  title = {MRIDC: Data Consistency for Magnetic Resonance Imaging},
  year = {2021},
  url = {https://github.com/wdika/mridc},
}

Bibliography

[1] Karkalousos, D. et al. (2021) ‘Assessment of Data Consistency through Cascades of Independently Recurrent Inference Machines for fast and robust accelerated MRI reconstruction’. Available at: https://arxiv.org/abs/2111.15498v1 (Accessed: 1 December 2021).

[2] Lønning, K. et al. (2019) ‘Recurrent inference machines for reconstructing heterogeneous MRI data’, Medical Image Analysis, 53, pp. 64–78. doi: 10.1016/j.media.2019.01.005.

[3] Karkalousos, D. et al. (2020) ‘Reconstructing unseen modalities and pathology with an efficient Recurrent Inference Machine’, pp. 1–31. Available at: http://arxiv.org/abs/2012.07819.

[4] Sriram, A. et al. (2020) ‘End-to-End Variational Networks for Accelerated MRI Reconstruction’, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 12262 LNCS, pp. 64–73. doi: 10.1007/978-3-030-59713-9_7.

[5] Zbontar, J. et al. (2018) ‘fastMRI: An Open Dataset and Benchmarks for Accelerated MRI’, arXiv, pp. 1–35. Available at: http://arxiv.org/abs/1811.08839.

[6] Ronneberger, O., Fischer, P. and Brox, T. (2015) ‘U-Net: Convolutional Networks for Biomedical Image Segmentation’, in Medical Image Computing and Computer-Assisted Intervention (MICCAI 2015), pp. 234–241. doi: 10.1007/978-3-319-24574-4_28.

[7] Lustig, M. et al. (2008) ‘Compressed Sensing MRI’, IEEE Signal Processing Magazine, 25(2), pp. 72–82. doi: 10.1109/MSP.2007.914728.

Comments
  • Refactor logic (Sourcery refactored)


    Pull Request #23 refactored by Sourcery.

    If you're happy with these changes, merge this Pull Request using the Squash and merge strategy.

    NOTE: As code is pushed to the original Pull Request, Sourcery will re-run and update (force-push) this Pull Request with new refactorings as necessary. If Sourcery finds no refactorings at any point, this Pull Request will be closed automatically.

    See our documentation here.

    Run Sourcery locally

    Reduce the feedback loop during development by using the Sourcery editor plugin:

    Review changes via command line

    To manually merge these changes, make sure you're on the refactor-logic branch, then run:

    git fetch origin sourcery/refactor-logic
    git merge --ff-only FETCH_HEAD
    git reset HEAD^
    

    Help us improve this pull request!

    documentation enhancement 
    opened by sourcery-ai[bot] 5
  • Sourcery refactored main branch


    Branch main refactored by Sourcery.

    If you're happy with these changes, merge this Pull Request using the Squash and merge strategy.

    See our documentation here.

    Run Sourcery locally

    Reduce the feedback loop during development by using the Sourcery editor plugin:

    Review changes via command line

    To manually merge these changes, make sure you're on the main branch, then run:

    git fetch origin sourcery/main
    git merge --ff-only FETCH_HEAD
    git reset HEAD^
    

    Help us improve this pull request!

    documentation enhancement 
    opened by sourcery-ai[bot] 5
  • Rebase - refactor logic (Sourcery refactored)


    Pull Request #34 refactored by Sourcery.

    If you're happy with these changes, merge this Pull Request using the Squash and merge strategy.

    NOTE: As code is pushed to the original Pull Request, Sourcery will re-run and update (force-push) this Pull Request with new refactorings as necessary. If Sourcery finds no refactorings at any point, this Pull Request will be closed automatically.

    See our documentation here.

    Run Sourcery locally

    Reduce the feedback loop during development by using the Sourcery editor plugin:

    Review changes via command line

    To manually merge these changes, make sure you're on the sourcery/refactor-logic branch, then run:

    git fetch origin sourcery/sourcery/refactor-logic
    git merge --ff-only FETCH_HEAD
    git reset HEAD^
    

    Help us improve this pull request!

    opened by sourcery-ai[bot] 4
  • Sourcery refactored main branch


    Branch main refactored by Sourcery.

    If you're happy with these changes, merge this Pull Request using the Squash and merge strategy.

    See our documentation here.

    Run Sourcery locally

    Reduce the feedback loop during development by using the Sourcery editor plugin:

    Review changes via command line

    To manually merge these changes, make sure you're on the main branch, then run:

    git fetch origin sourcery/main
    git merge --ff-only FETCH_HEAD
    git reset HEAD^
    

    Help us improve this pull request!

    enhancement 
    opened by sourcery-ai[bot] 3
  • Sourcery refactored main branch


    Branch main refactored by Sourcery.

    If you're happy with these changes, merge this Pull Request using the Squash and merge strategy.

    See our documentation here.

    Run Sourcery locally

    Reduce the feedback loop during development by using the Sourcery editor plugin:

    Review changes via command line

    To manually merge these changes, make sure you're on the main branch, then run:

    git fetch origin sourcery/main
    git merge --ff-only FETCH_HEAD
    git reset HEAD^
    

    Help us improve this pull request!

    opened by sourcery-ai[bot] 3
  • Sourcery refactored main branch


    Branch main refactored by Sourcery.

    If you're happy with these changes, merge this Pull Request using the Squash and merge strategy.

    See our documentation here.

    Run Sourcery locally

    Reduce the feedback loop during development by using the Sourcery editor plugin:

    Review changes via command line

    To manually merge these changes, make sure you're on the main branch, then run:

    git fetch origin sourcery/main
    git merge --ff-only FETCH_HEAD
    git reset HEAD^
    

    Help us improve this pull request!

    opened by sourcery-ai[bot] 3
  • [BUG] Check if half_scan_percentage alters the chosen acceleration factors / Add option for 1D masks


    Describe the bug: half_scan_percentage works only with 2D masks.

    To Reproduce: duplicate a test function for 1D masking like this and add half_scan_percentage > 0.

    Expected behavior: it should work on both 1D and 2D masks; add this option for 1D masking as well.

    Environment: Operating System: ubuntu-latest; Python Version: >= 3.9; PyTorch Version: >= 1.9
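
    For context, a hedged sketch of what applying a half-scan fraction to a sampling mask could look like; the function below is illustrative only and does not reflect the actual mridc masking code. Note that zeroing lines this way does change the effective acceleration factor, which is exactly the interaction this issue asks to verify.

        import numpy as np

        def apply_half_scan(mask, half_scan_percentage):
            # Zero out the first fraction of phase-encoding columns (partial-Fourier style).
            # Works for a 1D line mask of shape (num_cols,) and a 2D mask of shape (rows, cols).
            mask = mask.copy()
            cutoff = int(round(mask.shape[-1] * half_scan_percentage))
            mask[..., :cutoff] = 0  # the ellipsis handles both 1D and 2D masks
            return mask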

    bug Stale 
    opened by wdika 3
  • Bump torch from 1.12.0 to 1.13.0


    Bumps torch from 1.12.0 to 1.13.0.

    Release notes

    Sourced from torch's releases.

    PyTorch 1.13: beta versions of functorch and improved support for Apple’s new M1 chips are now available

    Pytorch 1.13 Release Notes

    • Highlights
    • Backwards Incompatible Changes
    • New Features
    • Improvements
    • Performance
    • Documentation
    • Developers

    Highlights

    We are excited to announce the release of PyTorch 1.13! This includes stable versions of BetterTransformer. We deprecated CUDA 10.2 and 11.3 and completed migration of CUDA 11.6 and 11.7. Beta includes improved support for Apple M1 chips and functorch, a library that offers composable vmap (vectorization) and autodiff transforms, being included in-tree with the PyTorch release. This release is composed of over 3,749 commits and 467 contributors since 1.12.1. We want to sincerely thank our dedicated community for your contributions.

    Summary:

    • The BetterTransformer feature set supports fastpath execution for common Transformer models during Inference out-of-the-box, without the need to modify the model. Additional improvements include accelerated add+matmul linear algebra kernels for sizes commonly used in Transformer models and Nested Tensors is now enabled by default.

    • Timely deprecating older CUDA versions allows us to proceed with introducing the latest CUDA version as they are introduced by Nvidia®, and hence allows support for C++17 in PyTorch and new NVIDIA Open GPU Kernel Modules.

    • Previously, functorch was released out-of-tree in a separate package. After installing PyTorch, a user will be able to import functorch and use functorch without needing to install another package.

    • PyTorch is offering native builds for Apple® silicon machines that use Apple's new M1 chip as a beta feature, providing improved support across PyTorch's APIs.

    Stable: Better Transformer; CUDA 10.2 and 11.3 CI/CD Deprecation
    Beta: Enable Intel® VTune™ Profiler's Instrumentation and Tracing Technology APIs; Extend NNC to support channels last and bf16; functorch now in PyTorch Core Library; Beta Support for M1 devices
    Prototype: Arm® Compute Library backend support for AWS Graviton; CUDA Sanitizer

    You can check the blogpost that shows the new features here.

    Backwards Incompatible changes

    Python API

    uint8 and all integer dtype masks are no longer allowed in Transformer (#87106)

    Prior to 1.13, key_padding_mask could be set to uint8 or other integer dtypes in TransformerEncoder and MultiheadAttention, which might generate unexpected results. In this release, these dtypes are not allowed for the mask anymore. Please convert them to torch.bool before using.

    1.12.1

    >>> layer = nn.TransformerEncoderLayer(2, 4, 2)
    >>> encoder = nn.TransformerEncoder(layer, 2)
    >>> pad_mask = torch.tensor([[1, 1, 0, 0]], dtype=torch.uint8)
    >>> inputs = torch.cat([torch.randn(1, 2, 2), torch.zeros(1, 2, 2)], dim=1)
    # works before 1.13
    >>> outputs = encoder(inputs, src_key_padding_mask=pad_mask)
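    # from 1.13 onward, convert the integer mask to torch.bool first (as the note above says);
    # this line is an illustrative addition, not part of the original release notes
    >>> outputs = encoder(inputs, src_key_padding_mask=pad_mask.bool())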
    

    ... (truncated)

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    dependencies 
    opened by dependabot[bot] 2
  • Sourcery refactored jrs branch


    Branch jrs refactored by Sourcery.

    If you're happy with these changes, merge this Pull Request using the Squash and merge strategy.

    See our documentation here.

    Run Sourcery locally

    Reduce the feedback loop during development by using the Sourcery editor plugin:

    Review changes via command line

    To manually merge these changes, make sure you're on the jrs branch, then run:

    git fetch origin sourcery/jrs
    git merge --ff-only FETCH_HEAD
    git reset HEAD^
    

    Help us improve this pull request!

    opened by sourcery-ai[bot] 2
  • Bump hydra-core from 1.1.0 to 1.2.0


    Bumps hydra-core from 1.1.0 to 1.2.0.

    Release notes

    Sourced from hydra-core's releases.

    Hydra 1.2.0

    1.2.0 (2022-05-17)

    Bug fixes

    • hydra.runtime.choices is now updated correctly during multi-run (#1882)
    • hydra.verbose=True now works with multirun. (#1897)
    • Fix a resolution error occurring when a nested class is passed as a _target_ keyword argument to instantiate (#1914)
    • It is now possible to pass other callable objects (besides functions) to hydra.main. (#2042)

    New features

    • Add support to Hydra's instantiation API for creation of functools.partial instances via a _partial_ keyword. (#1283)
    • Support defining basic sweeping in input config. (#1376)
    • Improve error message with more context when an omegaconf exception occurs during the config merge step. (#1697)
    • Add --experimental-rerun command-line option to reproduce pickled single runs (#1805)
    • Add experimental Callback for pickling job info. (#2092)
    • Implement tab completions for appending to the defaults list (+group=option) and deleting from the defaults list (~group). (#1841)
    • Enable the use of the pipe symbol | in unquoted strings when parsing command-line overrides. (#1850)
    • Support for Python 3.10 (#1856)
    • Improve clarity of error messages when hydra.utils.instantiate encounters a _target_ that cannot be located (#1863)
    • The instantiate API now accepts ListConfig/list-type config as top-level input. (#1950)
    • Improve error messages raised in case of instantiation failure. (#2099)
    • Add callback for logging JobReturn. (#2100)
    • Support disable changing working directory at runtime. (#910)
    • Support setting hydra.mode through config. (#394)

    Behavior changes

    • The antlr version requirement is updated from 4.8 to 4.9, to align better with current antlr versions
    • If user code raises an exception when called by instantiate, raise an InstantiateError exception instead of an instance of the same exception class that was raised by the user code. (#1911)
    • Remove support for deprecated arg config_loader to Plugin.setup, and update signature of run_job to require hydra_context. (#1953)

    The remaining changes are protected by the new version_base support, which allows one to either configure Hydra to support older setups / config, or configure Hydra to use the following more modern defaults:

    • Remove deprecated "old optional" defaults list syntax (#1952)
    • Remove support for the legacy hydra override syntax (see deprecation notice). (#2056)
    • Remove support for old hydra.experimental.{compose,initialize} interface
    • Remove support for _name_ and _group_ from package header (see deprecation notice)
    • Remove support for legacy default list interpolation format (see deprecation notice)
    • Remove support for TargetConf class
    • Remove support for strict flag from compose API (see deprecation notice)
    • Remove support for ".yml" extensions, requiring ".yaml" instead.
    • Default to not changing the working directory at runtime. Use hydra.job.chdir=True to reinstate old behavior.
    • Default to not adding any directory to the config path. (see config_path options)

    Hydra 1.1.2

    1.1.2 (2022-04-12)

    ... (truncated)

    Changelog

    Sourced from hydra-core's changelog.

    1.2.0 (2022-05-17)

    Bug fixes

    • hydra.runtime.choices is now updated correctly during multi-run (#1882)
    • hydra.verbose=True now works with multirun. (#1897)
    • Fix a resolution error occurring when a nested class is passed as a _target_ keyword argument to instantiate (#1914)
    • It is now possible to pass other callable objects (besides functions) to hydra.main. (#2042)

    New features

    • Add support to Hydra's instantiation API for creation of functools.partial instances via a _partial_ keyword. (#1283)
    • Support defining basic sweeping in input config. (#1376)
    • Improve error message with more context when an omegaconf exception occurs during the config merge step. (#1697)
    • Add --experimental-rerun command-line option to reproduce pickled single runs (#1805)
    • Add experimental Callback for pickling job info. (#2092)
    • Implement tab completions for appending to the defaults list (+group=option) and deleting from the defaults list (~group). (#1841)
    • Enable the use of the pipe symbol | in unquoted strings when parsing command-line overrides. (#1850)
    • Support for Python 3.10 (#1856)
    • Improve clarity of error messages when hydra.utils.instantiate encounters a _target_ that cannot be located (#1863)
    • The instantiate API now accepts ListConfig/list-type config as top-level input. (#1950)
    • Improve error messages raised in case of instantiation failure. (#2099)
    • Add callback for logging JobReturn. (#2100)
    • Support disable changing working directory at runtime. (#910)
    • Support setting hydra.mode through config. (#394)

    Behavior changes

    • The antlr version requirement is updated from 4.8 to 4.9, to align better with current antlr versions
    • If user code raises an exception when called by instantiate, raise an InstantiateError exception instead of an instance of the same exception class that was raised by the user code. (#1911)
    • Remove support for deprecated arg config_loader to Plugin.setup, and update signature of run_job to require hydra_context. (#1953)

    The remaining changes are protected by the new version_base support, which allows one to either configure Hydra to support older setups / config, or configure Hydra to use the following more modern defaults:

    • Remove deprecated "old optional" defaults list syntax (#1952)
    • Remove support for the legacy hydra override syntax (see deprecation notice). (#2056)
    • Remove support for old hydra.experimental.{compose,initialize} interface
    • Remove support for _name_ and _group_ from package header (see deprecation notice)
    • Remove support for legacy default list interpolation format (see deprecation notice)
    • Remove support for TargetConf class
    • Remove support for strict flag from compose API (see deprecation notice)
    • Remove support for ".yml" extensions, requiring ".yaml" instead.
    • Default to not changing the working directory at runtime. Use hydra.job.chdir=True to reinstate old behavior.
    • Default to not adding any directory to the config path. (see config_path options)

    1.1.1 (2021-08-19)

    ... (truncated)

    Commits
    • 66c3b37 website doc version for Hydra 1.2
    • 47e29bd plugins 1.2.0 release
    • bd29af8 Hydra 1.2.0 release
    • cee7793 fix syntax for win plugin tests (#2212)
    • 4065429 re-enable windows tests (except for py3.10) (#2210)
    • 36707bb Add plugin tests back (#2207)
    • de1e96e remove support for hydra.experimental.{compose,initialize}
    • 2c19891 remove support for name and group from package header
    • 2249721 remove support for legacy default list interpolation format
    • aeda2bc fix rq test (#2202)
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    dependencies 
    opened by dependabot[bot] 2
  • Bump to v.0.1.1 (Sourcery refactored)


    Pull Request #74 refactored by Sourcery.

    If you're happy with these changes, merge this Pull Request using the Squash and merge strategy.

    NOTE: As code is pushed to the original Pull Request, Sourcery will re-run and update (force-push) this Pull Request with new refactorings as necessary. If Sourcery finds no refactorings at any point, this Pull Request will be closed automatically.

    See our documentation here.

    Run Sourcery locally

    Reduce the feedback loop during development by using the Sourcery editor plugin:

    Review changes via command line

    To manually merge these changes, make sure you're on the v.0.1.1 branch, then run:

    git fetch origin sourcery/v.0.1.1
    git merge --ff-only FETCH_HEAD
    git reset HEAD^
    

    Help us improve this pull request!

    opened by sourcery-ai[bot] 2
  • Bump pytorch-lightning from 1.7.7 to 1.8.6


    Bumps pytorch-lightning from 1.7.7 to 1.8.6.

    Release notes

    Sourced from pytorch-lightning's releases.

    Weekly patch release

    App

    Added

    • Added partial support for fastapi Request annotation in configure_api handlers (#16047)
    • Added a nicer UI with URL and examples for the autoscaler component (#16063)
    • Enabled users to have more control over scaling out/in intervals (#16093)
    • Added more datatypes to the serving component (#16018)
    • Added work.delete method to delete the work (#16103)
    • Added display_name property to LightningWork for the cloud (#16095)
    • Added ColdStartProxy to the AutoScaler (#16094)
    • Added status endpoint, enable ready (#16075)
    • Implemented ready for components (#16129)

    Changed

    • The default start_method for creating Work processes locally on macOS is now 'spawn' (previously 'fork') (#16089)
    • The utility lightning.app.utilities.cloud.is_running_in_cloud now returns True during the loading of the app locally when running with --cloud (#16045)
    • Updated Multinode Warning (#16091)
    • Updated app testing (#16000)
    • Changed overwrite to True (#16009)
    • Simplified messaging in cloud dispatch (#16160)
    • Added annotations endpoint (#16159)

    Fixed

    • Fixed PythonServer messaging "Your app has started" (#15989)
    • Fixed auto-batching to enable batching for requests coming even after the batch interval but is in the queue (#16110)
    • Fixed a bug where AutoScaler would fail with min_replica=0 (#16092)
    • Fixed a non-thread safe deepcopy in the scheduler (#16114)
    • Fixed HTTP Queue sleeping for 1 sec by default if no delta was found (#16114)
    • Fixed the endpoint info tab not showing up in the AutoScaler UI (#16128)
    • Fixed an issue where an exception would be raised in the logs when using a recent version of streamlit (#16139)
    • Fixed e2e tests (#16146)

    Full Changelog: https://github.com/Lightning-AI/lightning/compare/1.8.5.post0...1.8.6

    Minor patch release

    App

    • Fixed install/upgrade - removing single quote (#16079)
    • Fixed bug where components that are re-instantiated several times failed to initialize if they were modifying self.lightningignore (#16080)
    • Fixed a bug where apps that had previously been deleted could not be run again from the CLI (#16082)

    Pytorch

    ... (truncated)

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    dependencies 
    opened by dependabot[bot] 0
  • Bump hydra-core from 1.1.0 to 1.3.1


    Bumps hydra-core from 1.1.0 to 1.3.1.

    Release notes

    Sourced from hydra-core's releases.

    Hydra 1.3.1

    1.3.1 (2022-12-20)

    This bugfix release updates a version pin on the OmegaConf library, allowing Hydra to be installed alongside the latest version of OmegaConf.

    Bug Fixes

    • Relax OmegaConf pin allowing OmegaConf 2.3 to be installed (#2510)

    Links:

    Hydra 1.3.0

    1.3.0 (2022-12-08)

    Features:

    • Implement _convert_="object" option for instantiate, enabling conversion of non-_target_ structured configs to instances of the backing dataclass / attr class. (#1719)
    • Enable layering of the @hydra.main decorator on top of other decorators produced using functools.wraps. (#2303)
    • Allow for non-leading dashes in override keys (#2363)
    • support specifying an absolute path with --config-path (#2368)
    • Support python3.11 (#2443)

    Bug Fixes:

    • Fix an issue where Hydra's exception-handling logic could raise an AssertionError (#2342)

    Links:

    Hydra 1.2.0

    1.2.0 (2022-05-17)

    Bug fixes

    • hydra.runtime.choices is now updated correctly during multi-run (#1882)
    • hydra.verbose=True now works with multirun. (#1897)
    • Fix a resolution error occurring when a nested class is passed as a _target_ keyword argument to instantiate (#1914)
    • It is now possible to pass other callable objects (besides functions) to hydra.main. (#2042)

    New features

    • Add support to Hydra's instantiation API for creation of functools.partial instances via a _partial_ keyword. (#1283)

    ... (truncated)

    Changelog

    Sourced from hydra-core's changelog.

    1.3.1 (2022-12-20)

    Bug Fixes

    • Relax OmegaConf pin allowing OmegaConf 2.3 to be installed (#2510)

    1.3.0 (2022-12-08)

    Features

    • Implement _convert_="object" option for instantiate, enabling conversion of non-_target_ structured configs to instances of the backing dataclass / attr class. (#1719)
    • Enable layering of the @hydra.main decorator on top of other decorators produced using functools.wraps. (#2303)
    • Allow for non-leading dashes in override keys (#2363)
    • support specifying an absolute path with --config-path (#2368)
    • Support python3.11 (#2443)

    Bug Fixes

    • Fix an issue where Hydra's exception-handling logic could raise an AssertionError (#2342)

    1.2.0 (2022-05-17)

    Bug fixes

    • hydra.runtime.choices is now updated correctly during multi-run (#1882)
    • hydra.verbose=True now works with multirun. (#1897)
    • Fix a resolution error occurring when a nested class is passed as a _target_ keyword argument to instantiate (#1914)
    • It is now possible to pass other callable objects (besides functions) to hydra.main. (#2042)

    New features

    • Add support to Hydra's instantiation API for creation of functools.partial instances via a _partial_ keyword. (#1283)
    • Support defining basic sweeping in input config. (#1376)
    • Improve error message with more context when an omegaconf exception occurs during the config merge step. (#1697)
    • Add --experimental-rerun command-line option to reproduce pickled single runs (#1805)
    • Add experimental Callback for pickling job info. (#2092)
    • Implement tab completions for appending to the defaults list (+group=option) and deleting from the defaults list (~group). (#1841)
    • Enable the use of the pipe symbol | in unquoted strings when parsing command-line overrides. (#1850)
    • Support for Python 3.10 (#1856)
    • Improve clarity of error messages when hydra.utils.instantiate encounters a _target_ that cannot be located (#1863)
    • The instantiate API now accepts ListConfig/list-type config as top-level input. (#1950)
    • Improve error messages raised in case of instantiation failure. (#2099)
    • Add callback for logging JobReturn. (#2100)
    • Support disable changing working directory at runtime. (#910)
    • Support setting hydra.mode through config. (#394)

    ... (truncated)

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    dependencies 
    opened by dependabot[bot] 0
  • Bump torch from 1.12.0 to 1.13.1


    Bumps torch from 1.12.0 to 1.13.1.

    Release notes

    Sourced from torch's releases.

    PyTorch 1.13.1 Release, small bug fix release

    This release is meant to fix the following issues (regressions / silent correctness):

    • RuntimeError by torch.nn.modules.activation.MultiheadAttention with bias=False and batch_first=True #88669
    • Installation via pip on Amazon Linux 2, regression #88869
    • Installation using poetry on Mac M1, failure #88049
    • Missing masked tensor documentation #89734
    • torch.jit.annotations.parse_type_line is not safe (command injection) #88868
    • Use the Python frame safely in _pythonCallstack #88993
    • Double-backward with full_backward_hook causes RuntimeError #88312
    • Fix logical error in get_default_qat_qconfig #88876
    • Fix cuda/cpu check on NoneType and unit test #88854 and #88970
    • Onnx ATen Fallback for BUILD_CAFFE2=0 for ONNX-only ops #88504
    • Onnx operator_export_type on the new registry #87735
    • torchrun AttributeError caused by file_based_local_timer on Windows #85427

    The release tracker should contain all relevant pull requests related to this release as well as links to related issues

    PyTorch 1.13: beta versions of functorch and improved support for Apple’s new M1 chips are now available

    Pytorch 1.13 Release Notes

    • Highlights
    • Backwards Incompatible Changes
    • New Features
    • Improvements
    • Performance
    • Documentation
    • Developers

    Highlights

    We are excited to announce the release of PyTorch 1.13! This includes stable versions of BetterTransformer. We deprecated CUDA 10.2 and 11.3 and completed migration of CUDA 11.6 and 11.7. Beta includes improved support for Apple M1 chips and functorch, a library that offers composable vmap (vectorization) and autodiff transforms, being included in-tree with the PyTorch release. This release is composed of over 3,749 commits and 467 contributors since 1.12.1. We want to sincerely thank our dedicated community for your contributions.

    Summary:

    • The BetterTransformer feature set supports fastpath execution for common Transformer models during Inference out-of-the-box, without the need to modify the model. Additional improvements include accelerated add+matmul linear algebra kernels for sizes commonly used in Transformer models and Nested Tensors is now enabled by default.

    • Timely deprecating older CUDA versions allows us to proceed with introducing the latest CUDA version as they are introduced by Nvidia®, and hence allows support for C++17 in PyTorch and new NVIDIA Open GPU Kernel Modules.

    • Previously, functorch was released out-of-tree in a separate package. After installing PyTorch, a user will be able to import functorch and use functorch without needing to install another package.

    • PyTorch is offering native builds for Apple® silicon machines that use Apple's new M1 chip as a beta feature, providing improved support across PyTorch's APIs.

    Stable: Better Transformer; CUDA 10.2 and 11.3 CI/CD Deprecation
    Beta: Enable Intel® VTune™ Profiler's Instrumentation and Tracing Technology APIs; Extend NNC to support channels last and bf16; functorch now in PyTorch Core Library; Beta Support for M1 devices
    Prototype: Arm® Compute Library backend support for AWS Graviton; CUDA Sanitizer

    You can check the blogpost that shows the new features here.

    Backwards Incompatible changes

    ... (truncated)

    Changelog

    Sourced from torch's changelog.

    Releasing PyTorch

    General Overview

    Releasing a new version of PyTorch generally entails 4 major steps:

    1. Cutting a release branch preparations
    2. Cutting a release branch and making release branch specific changes
    3. Drafting RCs (Release Candidates), and merging cherry picks
    4. Promoting RCs to stable and performing release day tasks

    Cutting a release branch preparations

    The following requirements need to be met prior to the final RC cut:

    • Resolve all outstanding issues in the milestones (for example, 1.11.0) before the first RC cut is completed. After the RC cut is completed, the following script should be executed from the builder repo in order to validate the presence of the fixes in the release branch:

    ... (truncated)

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    dependencies 
    opened by dependabot[bot] 1
  • Coil dim in the llg of the RIM is fixed


    Describe the bug: the coil dimension in the log-likelihood gradient computation is fixed to 1.

    To Reproduce: go to rim_utils.

    Expected behavior: the coil dimension should be set dynamically by the function's argument.
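
    For reference, a hedged sketch of a log-likelihood gradient computation that takes the coil dimension as an argument instead of hardcoding it; the names are illustrative and the actual rim_utils implementation differs.

        import numpy as np

        def log_likelihood_gradient(image, kspace, sense_maps, mask, coil_dim=0):
            # Gradient of 0.5 * ||M F S x - y||^2: expand the image with the coil
            # sensitivities, compare against the measured k-space on the sampled
            # locations, and combine the residual back over `coil_dim`.
            coil_images = sense_maps * np.expand_dims(image, coil_dim)
            k = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(coil_images, axes=(-2, -1)),
                                            axes=(-2, -1), norm="ortho"), axes=(-2, -1))
            residual = mask * (k - kspace)
            coil_residual = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(residual, axes=(-2, -1)),
                                                         axes=(-2, -1), norm="ortho"), axes=(-2, -1))
            return np.sum(np.conj(sense_maps) * coil_residual, axis=coil_dim)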

    bug 
    opened by wdika 0
  • Bump onnxruntime from 1.12.0 to 1.13.1


    Bumps onnxruntime from 1.12.0 to 1.13.1.

    Release notes

    Sourced from onnxruntime's releases.

    ONNX Runtime v1.13.1

    Announcements

    • Security issues addressed by this release
      1. A protobuf security issue CVE-2022-1941 that impacts users who load ONNX models from untrusted sources, for example, a deep learning inference service which allows users to upload their models and then runs the inferences in a shared environment.
      2. An ONNX security vulnerability that allows reading of tensor_data outside the model directory, which allows attackers to read or write arbitrary files on an affected system that loads ONNX models from untrusted sources. (#12915)
    • Deprecations
      • CUDA 10.x support at source code level
      • Windows 8.x support in Nuget/C API prebuilt binaries. Support for Windows 7+ Desktop versions (including Windows servers) will be retained by building ONNX Runtime from source.
      • NUPHAR EP code is removed
    • Dependency versioning updates
      • C++ 17 compiler is now required to build ORT from source. On Linux, GCC version >=7.0 is required.
      • Minimal numpy version bumped to 1.21.6 (from 1.21.0) for ONNX Runtime Python packages
      • Official ONNX Runtime GPU packages now require CUDA version >=11.6 instead of 11.4.

    General

    • Expose all arena configs in Python API in an extensible way
    • Fix ARM64 NuGet packaging
    • Fix EP allocator setup issue affecting TVM EP

    Performance

    • Transformers CUDA improvements
      • Quantization on GPU for BERT - notebook, documentation on QAT, transformer optimization toolchain and quantized kernels.
      • Add fused attention CUDA kernels for BERT.
      • Fuse Add (bias) and Transpose of Q/K/V into one kernel for Attention and LongformerAttention.
      • Reduce GEMM computation in LongformerAttention with a new weight format.
    • General quantization (tool and kernel)
      • Quantization debugging tool - identify sensitive node/layer from accuracy drop discrepancies
      • New quantize API based on QuantConfig
      • New quantized operators: SoftMax, Split, Where

    Execution Providers

    • CUDA EP
      • Official ONNX Runtime GPU packages are now built with CUDA version 11.6 instead of 11.4, but should still be backwards compatible with 11.4
    • TensorRT EP
      • Build option to link against pre-built onnx-tensorrt parser; this enables potential "no-code" TensorRT minor version upgrades and can be used to build against TensorRT 8.5 EA
      • Improved nested control flow support
      • Improve HashId generation used for uniquely identifying TRT engines. Addresses issues such as TRT Engine Cache Regeneration Issue
      • TensorRT uint8 support
    • OpenVINO EP
      • OpenVINO version upgraded to 2022.2.0
      • Support for INT8 QDQ models from NNCF
      • Support for Intel 13th Gen Core Process (Raptor Lake)
      • Preview support for Intel discrete graphics cards Intel Data Center GPU Flex Series and Intel Arc GPU
      • Increased test coverage for GPU Plugin
    • SNPE EP
    • DirectML EP

    ... (truncated)

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    dependencies 
    opened by dependabot[bot] 1
Releases(v.0.2.0)
  • v.0.2.0(Sep 12, 2022)

    What's in

    Version 0.2.0 is a major stable update.

    The following tools have been implemented:

    • Quantitative MRI data loaders & transforms.
    • Quantitative MRI models: qCIRIM, qRIM, qE2EVN.
    • AHEAD dataset preprocessing tools for quantitative MRI.

    What's Changed

    • added quantitative MRI tools by @wdika in https://github.com/wdika/mridc/pull/82
    • Bump to v.0.2.0 by @wdika in https://github.com/wdika/mridc/pull/101

    Full Changelog: https://github.com/wdika/mridc/compare/v.0.1.1...v.0.2.0

    Important

    • Python 3.10 cannot be supported yet due to onnxruntime 1.11.1 inconsistency (#69).
    • hydra-core>=1.2.0 cannot be supported due to omegaconf > 2.1 inconsistency (#72).
    Source code(tar.gz)
    Source code(zip)
  • v.0.1.1(Jul 25, 2022)

    What's in

    Version 0.1.1 is a minor stable update.

    The following tools have been implemented:

    • Noise PreWhitening
    • Geometric Decomposition Coil Compression
    • The RIM-based models can support multi-slice 2D inputs.

    What's Changed

    • fetch upstream updates by @wdika in https://github.com/wdika/mridc/pull/65
    • Addresses #57, #58, #60 by @wdika in https://github.com/wdika/mridc/pull/66
    • Refactor workflows & Add tox by @wdika in https://github.com/wdika/mridc/pull/70
    • Bump to v.0.1.1 by @wdika in https://github.com/wdika/mridc/pull/74

    Full Changelog: https://github.com/wdika/mridc/compare/v.0.1.0...v.0.1.1

    Important

    • Python 3.10 cannot be supported yet due to onnxruntime 1.11.1 inconsistency (#69).
    • hydra-core>=1.2.0 cannot be supported due to omegaconf > 2.1 inconsistency (#72).
    Source code(tar.gz)
    Source code(zip)
  • v.0.1.0(May 25, 2022)

    What's in

    Version 0.1.0 is a major stable update.

    The following reconstruction methods have been added:

    • Convolutional Recurrent Neural Networks (CRNN)
    • Deep Cascade of Convolutional Neural Networks (CCNN)
    • Down-Up Net (DUNET)
    • Joint Deep Model-Based MR Image and Coil Sensitivity Reconstruction Network (Joint-ICNet)
    • KIKI-Net
    • Learned Primal-Dual Net (LPDNet)
    • MultiDomainNet
    • Recurrent Variational Network (RVN)
    • Variable Splitting Network (VSNet)
    • XPDNet

    PyTorch Lightning, Hydra, and ONNX are now used.

    What's Changed

    • Rebased - refactored logic by @wdika in https://github.com/wdika/mridc/pull/34
    • Updated base recon model inheritance to other models by @wdika in ce648e73691471aefa3b35b1e1b91b4b049dd059
    • Update README.md by @wdika in https://github.com/wdika/mridc/pull/39
    • fixed trainer, updated PL to 1.6.0 by @wdika in https://github.com/wdika/mridc/commit/4cc91ba0e348e52935de365d10fafc7b66875eaa
    • Fixed documentation & codecov uploader by @wdika in e2c50f3bec7bfc73fdab9abd82bf05d3f4c4b7fc

    Full Changelog: https://github.com/wdika/mridc/compare/v.0.0.1...v.0.1.0

    Source code(tar.gz)
    Source code(zip)
  • v.0.0.1(Nov 30, 2021)

    What's in

    This initial version includes the implementation of the following reconstruction methods:

    • Cascades of Independently Recurrent Inference Machines (CIRIM),
    • Independently Recurrent Inference Machines (IRIM),
    • End-to-End Variational Network (E2EVN),
    • the UNet,
    • Compressed Sensing (CS), and
    • zero-filled (ZF).

    Also includes coil sensitivity estimation and evaluation of reconstructions.

    What's Changed

    • Circleci project setup by @wdika in https://github.com/wdika/mridc/pull/2
    • Remove assert statement from non-test files by @deepsource-autofix in https://github.com/wdika/mridc/pull/3
    • Remove unnecessary parentheses after keyword by @deepsource-autofix in https://github.com/wdika/mridc/pull/4

    New Contributors

    • @wdika made their first contribution in https://github.com/wdika/mridc/pull/2
    • @deepsource-autofix made their first contribution in https://github.com/wdika/mridc/pull/3

    Full Changelog: https://github.com/wdika/mridc/commits/v.0.0.1

    Source code(tar.gz)
    Source code(zip)
Owner
Dimitris Karkalousos