Multiparametric Image Analysis

Documentation

The documentation is available on populse_mia's website.

Installation

Usage

  • After an installation in user mode:

    python3 -m populse_mia
    
  • After an installation in developer mode, run the main.py file from the source code directory:

    cd [populse_install_dir]/populse_mia/python/populse_mia  
    python3 main.py  
    
  • Depending on the operating system, some compatibility issues have been observed with PyQt5/SIP. In this case we recommend, as a first attempt:

    python3 -m pip install --force-reinstall pyqt5==5.14.0
    python3 -m pip install --force-reinstall PyQt5-sip==5.0.1
    

Contributing to the project

If you'd like to contribute to the project please read our developer documentation page. Please also read through our code of conduct.

Tests

  • Unit tests written with the Python module unittest

  • Continuous integration made with Travis (Linux, OSX), and AppVeyor (Windows)

  • Code coverage calculated by the python module codecov

  • The module is verified to work with Python >= 3.6

  • The module is verified to work on the platforms Linux, OSX and Windows

  • The test script is python/populse_mia/test.py, so the following command, run from the populse_mia root folder (for example [populse_install_dir]/populse_mia), launches the tests:

    python3 python/populse_mia/test.py
    

Requirements

  • capsul
  • lark-parser
  • matplotlib
  • mia-processes
  • nibabel
  • nipype
  • pillow
  • populse-db
  • pyqt5
  • python-dateutil
  • pyyaml
  • scikit-image
  • scipy
  • SIP
  • snakeviz
  • soma_workflow
  • traits

Other packages used

  • copy
  • os
  • six
  • tempfile
  • unittest

License

  • The whole populse project is open source
  • Populse_mia is precisely released under the CeCILL software license
  • All license details can be found in the project's license file.

Support and Communication

In case of a problem or to ask a question about how to do something in populse_mia, please open an issue.

The developer team can also be contacted at [email protected].

Comments
  • [general need] Save pipeline/history information in the database

    In order to keep and be able to show the history of files present in the main database (see #262), we need to save pipeline information in the database (indeed, the brick tag alone is not enough to retrieve the full history, cf. discussion in #236).

    In order to do that, we need to create (at least) a new collection, let's say COLLECTION_HISTORY, that will contain this information. This collection will minimally need to contain:

    • a uuid
    • the inputs of the pipeline (dictionaries)
    • the outputs of the pipeline (dictionaries)
    • an xml chain (serialization of the pipeline)

    The main database will then have for each file entry a history field containing the uuid of the corresponding history in the COLLECTION_HISTORY.

    This ticket is open to discuss the structure of the COLLECTION_HISTORY and the links between the different collections of MIA. For example, if we introduce the link between COLLECTION_HISTORY and COLLECTION_CURRENT (via the uuid of the history in a field), do we need to keep the link between COLLECTION_CURRENT and COLLECTION_BRICK, which consists of a field containing the uuid of the last brick that created the file? Or do we see COLLECTION_HISTORY as an intermediate collection which stores all links between the history and the corresponding bricks (a field that contains all uuids of bricks involved in the pipeline)? I think that in this last case it may be tricky to determine which brick truly created the file (and we could again end up resolving conflicts as discussed in #236), so we would need to keep both links in the main database (for each file, storing the uuid of the history and the uuid of the last brick in different fields).
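
    A minimal sketch, with purely hypothetical field names (not Mia's actual schema), of what a COLLECTION_HISTORY document and the corresponding main-database entry could look like if both links are kept:

    import uuid

    # Hypothetical COLLECTION_HISTORY document (field names illustrative):
    history_entry = {
        'uuid': str(uuid.uuid4()),
        'inputs': {'anat_file': 'raw_data/anat.nii'},             # pipeline inputs
        'outputs': {'smoothed_file': 'derived_data/s_anat.nii'},  # pipeline outputs
        'pipeline_xml': '<pipeline>...</pipeline>',               # serialized pipeline
        'bricks': ['<uuid-brick-1>', '<uuid-brick-2>'],           # bricks involved
    }

    # Corresponding file entry in the main database, keeping both links:
    file_entry = {
        'FileName': 'derived_data/s_anat.nii',
        'History': history_entry['uuid'],  # uuid of the history
        'Bricks': ['<uuid-brick-2>'],      # uuid of the last brick that created the file
    }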

    enhancement 
    opened by LStruber 188
  • Take stock of the capsulengine + populse_db 2 merge with mia

    I will try to summarise here all the issues that are already open, because at the moment it is a bit scattered and difficult to deal with ... When an issue is summarised here, I will indicate its number (so that it can still be consulted if necessary, because there is some interesting information in it) and then close it.

    Let's go:

    Config used:

    • capsul branch on master branch
    • mia_process on master branch
    • populse_db on 2.0 branch (merged with the master of 13.11.2020 - current version of remotes/origin/2.0)
    • populse_mia on populse_db2_capsulengine branch (merged with the master of 13.11.2020 - current version of remotes/origin/populse_db2_capsulengine)
    • soma-base on the master branch
    • soma-workflow on the master branch
    • data coming from populse_mia/data_tests/Philips_files/Data_patient imported into mia with mri_conv (in mia: File > Import): one T1 and one BOLD image

    Important issues (t)/ remarks (r):

    • t1: Smooth brick from mia_processes test.
      • Using the T1 data (alej170316-IRMFonct.+perfusion-2016-03-17083444-0-T13DSENSE-T1TFE-000425.000.nii)
        • Initialisation: 6 sec, the empty output is not created but the entry is created in the database (contrary to what was indicated yesterday in #172, I do not observe today an initialisation duration of 2 mins; maybe a misconfiguration on my station at the lab - I was very disturbed yesterday afternoon ... - to check the next time I go to the lab). The following was struck through in the original:

          Run: not launched (as described in "matlab config is not working in jobs #172" and "give the sys.path to the process_cmdline for dev mode https://github.com/populse/capsul/pull/167"): stdout display:

          running pipeline...
          soma-workflow starting in light mode
          Workflow controller initialised

          When the pipeline was launched, the following exception was raised: Error during workflow execution. Status=('finished_regularly', 1, None, None).
          The workflow has not been removed from soma_workflow and must be deleted manually.
          Failed jobs: [58]
          Aborted/killed jobs: []
          ================================================
          ---- failed job info ---
          * job: 58: smooth1
          * exit status: finished_regularly
          * commandline:
          'python3' '-c' 'from capsul.api import Process; Process.run_from_commandline("mia_processes.preprocess.spm.spatial_preprocessing.Smooth")'
          * exit value: 1
          * term signal: None
          ---- env ----
          None
          ---- configuration ----
          {'capsul_engine': {'uses': {'capsul.engine.module.spm': 'any'}}, 'capsul.engine.module.spm': {'config_id': 'spm12-standalone', 'config_environment': 'global', 'directory': '/usr/local/SPM/spm12_standalone', 'version': '12', 'standalone': True}, 'capsul.engine.module.python': {'capsul_engine': {'uses': {'capsul.engine.module.python': 'any'}}}}
          ---- stdout ----

          ---- stderr ----
          Traceback (most recent call last):
            File "<string>", line 1, in <module>
          ModuleNotFoundError: No module named 'capsul'
           ...
    
    • r1: With populse_db and populse_mia on master, initialisation takes less than a second. Here also the empty output is not created but the entry is created in the database. I thought we had written code so that an empty output is always created at initialisation time. I'm no longer very clear on this. To be clarified, because there is surely a risk of malfunction afterwards ...
    opened by servoz 177
  • [general need] Synchronisation between mia and capsul engine configuration is suboptimal, in V2

    Currently, by going to File > Mia preferences, it is possible to access the configuration settings for mia but also for capsul engine. In some cases this is completely redundant, and the synchronisation between the two is currently sub-optimal. We are getting there, but I think a simple user would be lost at the moment. Synchronisation would need to be perfect (instantaneous) in both directions, whenever a module configuration is added or removed. Currently it is not always easy or fast to do.

    In fact we did not invest too much effort in this direction, because this duplication should eventually disappear (either keep only the mia preferences, with a synchronisation invisible to the user made at each change, or keep only the capsul engine display).

    Another alternative, which goes in the direction of a reflection I had started and already mentioned, but which we never settled, would be to have a local or a remote mode of operation (soma-workflow does it, but we don't currently have a defined working mode in mia). In this case, we could keep both. The mia preferences would be kept for local use (where it is possible to test the validity of the config, as is currently done for spm and fsl) and capsul engine would be kept for remote use (where it may not be possible to check the config at the time of its definition). But this is beyond the scope of this ticket.
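
    A toy sketch of the kind of guarded two-way synchronisation this would require (class and attribute names are hypothetical, not Mia's actual code):

    class ConfigSync:
        """Keep two configuration stores in step without ping-pong updates.

        Purely illustrative: 'mia_prefs' and 'capsul_config' stand in for
        the two real configuration objects.
        """

        def __init__(self, mia_prefs, capsul_config):
            self.mia_prefs = mia_prefs
            self.capsul_config = capsul_config
            self._syncing = False  # reentrancy guard against update loops

        def on_mia_change(self, key, value):
            if self._syncing:
                return
            self._syncing = True
            try:
                self.capsul_config[key] = value  # push mia -> capsul immediately
            finally:
                self._syncing = False

        def on_capsul_change(self, key, value):
            if self._syncing:
                return
            self._syncing = True
            try:
                self.mia_prefs[key] = value  # push capsul -> mia immediately
            finally:
                self._syncing = False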

    enhancement To discuss 
    opened by servoz 84
  • [general need] Currently, DataBrowser Bricks tag does not give at a glance the whole history of a data

    In the DataBrowser, we have a special tag (Bricks) that has been created to provide the history of a data item.

    Let's say we have this pipeline: anat.nii -> |BrickA| -> A_anat.nii -> |BrickB| -> A_B_anat.nii

    Currently, after launching this pipeline, we will find in the DataBrowser the Bricks tag with only the last brick the document has passed through. For this example:

    • anat.nii: Bricks tag is empty
    • A_anat.nii: Bricks tag = BrickA
    • A_B_anat.nii: Bricks tag = BrickB

    I think this is not optimal. Indeed, we may want to see at a glance the whole history of a data item (rather than having to reconstruct this history, which may not be instantaneous). In this case we want in DataBrowser:

    • anat.nii: Bricks tag is empty
    • A_anat.nii: Bricks tag = BrickA
    • A_B_anat.nii: Bricks tag = BrickA BrickB
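
    A minimal sketch of the accumulation rule this implies (hypothetical helper, not Mia's actual code):

    def accumulate_bricks(parent_bricks, current_brick):
        """Bricks tag of an output = parents' Bricks tags + the producing brick."""
        return list(parent_bricks) + [current_brick]

    # Example following the pipeline above:
    a_anat = accumulate_bricks([], 'BrickA')        # ['BrickA']
    a_b_anat = accumulate_bricks(a_anat, 'BrickB')  # ['BrickA', 'BrickB']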

    enhancement 
    opened by servoz 59
  • [initialisation step] No more initialisation (and thus calculation launch) seems to work in mia!

    Minimum process to reproduce:

    • Nipype:
      • Open a project with at least one anat in the DataBrowser
      • Drag and drop the Smooth process into PipelineManager editor (nipype > interfaces > spm)
      • Right click on the smooth_1 brick and choose Export all unconnected plugs
      • Click on the inputs brick, then on the filter button of the in_files parameter
      • Select a data
      • Run the pipeline
      • The calculation does not take place:
        • No popup window to indicate that there is an issue
        • Pipeline "NoName" has not been correctly run is observed in the status bar

    We observe in stdout:

    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/casa/host/src/capsul/master/capsul/process/process.py", line 731, in run_from_commandline
        result = ce.study_config.run(process, configuration_dict=configuration)
      File "/casa/host/src/capsul/master/capsul/study_config/study_config.py", line 429, in run
        configuration_dict=configuration_dict)
      File "/casa/host/src/capsul/master/capsul/study_config/run.py", line 165, in run_process
        returncode = process_instance._run_process()
      File "/casa/host/src/capsul/master/capsul/process/process.py", line 1864, in _run_process
        raise ValueError('output_directory is not set but is mandatory '
    ValueError: output_directory is not set but is mandatory to run a NipypeProcess
    
    
    • Mia_processes
      • Open a project with at least one anat in the DataBrowser
      • Drag and drop the Smooth process into PipelineManager editor (mia_processes > bricks > preprocess > spm)
      • Right click on the smooth_1 brick and choose Export all unconnected plugs
      • Click on the inputs brick, then on the filter button of the in_files parameter
      • Select a data
      • Run the pipeline
      • The calculation does not take place:
        • A window pops up (screenshot attached in the original issue)
        • Pipeline "NoName" was not initialised successfully is observed in the status bar
        • No warning / exception message is observed in the stdout
    bug 
    opened by servoz 28
  • [Input filter enhancement] Adjusting input filter in node controller

    When you create a pipeline, you sometimes want to filter your database by a tag, or a patient name etc. There is a tool in MIA to do that: the input filter (which is automatically added to the pipeline, in iteration modes 2 and 3).

    In the documentation, it is indicated that you select outputs of the filter by right clicking on it and selecting your desired keywords etc. in the opened window. Your filtered files are indicated in the "outputs" field of the input filter in the node controller. If you initialize, everything is working. No issue here.

    However, one might expect that the filter button next to the outputs field in the node controller performs exactly the same action, since you would click there to filter the outputs of the input filter. When you try to filter outputs with this button, it seems to work (filtered files are indicated in the "outputs" field of the node controller, as previously), but when you initialize, the output field is reset to "all files in the database" and initialization is performed on the whole database! This is not a desirable behavior.

    Minimum steps to reproduce:

    • In the pipeline manager, add a smooth brick and an input_filter brick
    • Export all unconnected plugs
    • In the input filter node controller, set as inputs all your database (filter button in "inputs" field then ok without filtering rule)
    • In the output filter node controller, set as outputs your filtered files (filter button in "outputs" field then filter on anat files only for example)
    • Initialize the pipeline
    • The outputs are reset to your whole project database, and in the data browser the paths of all database smoothed files are created.
    enhancement 
    opened by LStruber 27
  • [Todo for V2] Reconnecting the iteration

    It is a pressing need, for the clinical application but also for the research activity, to reconnect the iteration.

    At least as it existed in V1 (selection of inputs automatically from the database with filters).

    There were imperfections in the V1 iteration (the main one being that there was no initialisation step separate from the run).

    The initialisation step, as it currently exists, could be made to disappear. Indeed, at present, it performs inappropriate operations (e.g. outputs are entered into the database when they do not already exist, etc.) in addition to verifying that the run step should proceed without trouble. There is still a lot of thinking to be done on this initialisation step, and decisions to be made on exactly how it will work, as it may have an impact on the code for iteration (and not only!).

    But maybe we are getting out of the scope of this ticket. Getting it back would be nice for now :-)

    enhancement To discuss 
    opened by servoz 27
  • Mia crashes when running a pipeline and 'use SPM' is not checked

    Describe the bug: MIA tries to run pipelines even if Matlab and SPM are not configured in the preferences. However, when doing so, MIA crashes.

    An unconfigured SPM in the preferences can occur when unchecking the "Use Matlab" checkbox after enabling SPM. In this case, the "Use SPM" checkbox is disabled, and it is not re-enabled automatically if Matlab is enabled again.

    Generated error

    ________________________________________________________________________________
    [Process] Calling nipype.interfaces.spm.preprocess.Smooth...
    nipype.interfaces.spm.preprocess.Smooth(output_directory=/home/nietob/Data/mia/projects/test/scripts, synchronize=0, fwhm=[3.0, 3.0, 3.0], in_files=['/home/nietob/Data/mia/projects/test/data/raw_data/MAP-VS-704-599566309-2019-04-05_09-39-09-00-3D_pCASL_singleshot-3D_pCASL_singleshot-.nii'], matlab_cmd=/usr/local/MATLAB/R2019a/bin/matlab, mfile=True, out_prefix=s, use_mcr=False, use_v8struct=True)
    
    Exception hooking in progress ...
    
    Clean up before closing mia done ...
    
    Traceback (most recent call last):
      File "/home/nietob/Code/Populse_mia/populse_mia/python/populse_mia/user_interface/pipeline_manager/pipeline_manager_tab.py", line 1465, in run
        study_config.run(pipeline, verbose=1)
      File "/home/nietob/.local/lib/python3.6/site-packages/capsul/study_config/study_config.py", line 414, in run
        verbose)
      File "/home/nietob/.local/lib/python3.6/site-packages/capsul/study_config/study_config.py", line 471, in _run
        **kwargs)
      File "/home/nietob/.local/lib/python3.6/site-packages/capsul/study_config/run.py", line 102, in run_process
        returncode = process_instance._run_process()
      File "/home/nietob/.local/lib/python3.6/site-packages/capsul/process/process.py", line 1240, in _run_process
        results = self._nipype_interface.run()
      File "/home/nietob/.local/lib/python3.6/site-packages/nipype/interfaces/base/core.py", line 375, in run
        runtime = self._run_interface(runtime)
      File "/home/nietob/.local/lib/python3.6/site-packages/nipype/interfaces/spm/base.py", line 376, in _run_interface
        results = self.mlab.run()
      File "/home/nietob/.local/lib/python3.6/site-packages/nipype/interfaces/base/core.py", line 375, in run
        runtime = self._run_interface(runtime)
      File "/home/nietob/.local/lib/python3.6/site-packages/nipype/interfaces/matlab.py", line 170, in _run_interface
        self.raise_exception(runtime)
      File "/home/nietob/.local/lib/python3.6/site-packages/nipype/interfaces/base/core.py", line 695, in raise_exception
        ).format(**runtime.dictcopy()))
    RuntimeError: Command:
    /usr/local/MATLAB/R2019a/bin/matlab -nodesktop -nosplash -singleCompThread -r "addpath('/home/nietob/Data/mia/projects/test/scripts');pyscript_smooth;exit"
    Standard output:
    
                                                                                  < M A T L A B (R) >
                                                                        Copyright 1984-2019 The MathWorks, Inc.
                                                                   R2019a Update 2 (9.6.0.1114505) 64-bit (glnxa64)
                                                                                      May 1, 2019
    
     
    To get started, type doc.
    For product information, visit www.mathworks.com.
     
    The Window Command log is currently saved in ~/Documents/MATLAB/diaryWindowCommand file. Purge sometimes the diaryWindowCommand file in MATLAB directory.
    - diary off: suspends the diary.
    - diary on: resumes diary mode.
    
    By default at start up a debugging option is launched producing a breakpoint and pause execution if a run-time error occurs.
    - For stop this option: dbclear all.
    - For start this option: dbstop if error.
    - For known the current option: dbstatus.
    Executing pyscript_smooth at 18-Jun-2019 12:15:32:
    -----------------------------------------------------------------------------------------------------
    MATLAB Version: 9.6.0.1114505 (R2019a) Update 2
    MATLAB License Number: 100877
    Operating System: Linux 4.15.0-51-generic #55-Ubuntu SMP Wed May 15 14:27:21 UTC 2019 x86_64
    Java Version: Java 1.8.0_181-b13 with Oracle Corporation Java HotSpot(TM) 64-Bit Server VM mixed mode
    -----------------------------------------------------------------------------------------------------
    MATLAB                                                Version 9.6         (R2019a)
    Deep Learning Toolbox                                 Version 12.1        (R2019a)
    Image Processing Toolbox                              Version 10.4        (R2019a)
    MATLAB Compiler                                       Version 7.0.1       (R2019a)
    MATLAB Compiler SDK                                   Version 6.6.1       (R2019a)
    Optimization Toolbox                                  Version 8.3         (R2019a)
    Parallel Computing Toolbox                            Version 7.0         (R2019a)
    Signal Processing Toolbox                             Version 8.2         (R2019a)
    Statistical Parametric Mapping                        Version 7487        (SPM12) 
    Statistics and Machine Learning Toolbox               Version 11.5        (R2019a)
    
    Standard error:
    MATLAB code threw an exception:
    Cannot CD to /home/nietob/Data/mia/projects/test/scripts/<undefined> (Name is nonexistent or not a directory).
    File:/home/nietob/Data/mia/projects/test/scripts/pyscript_smooth.m
    Name:pyscript_smooth
    Line:4
    Return code: 0
    QObject::~QObject: Timers cannot be stopped from another thread
    Segmentation fault (core dumped)
    

    Notice that the option "paths=['/home/nietob/spm12']" is missing from the call to nipype.interfaces.spm.preprocess.Smooth.
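
    For reference, in plain nipype the SPM path is supplied through the Matlab command settings; a hedged sketch using nipype's SPMCommand.set_mlab_paths (the matlab command and SPM path below are taken from this report, the rest is illustrative):

    from nipype.interfaces import spm

    # Tell nipype where SPM lives so the generated pyscript can addpath()
    # it before calling SPM functions.
    spm.SPMCommand.set_mlab_paths(
        matlab_cmd='/usr/local/MATLAB/R2019a/bin/matlab -nodesktop -nosplash',
        paths=['/home/nietob/spm12'],
    )

    smooth = spm.Smooth(in_files='anat.nii', fwhm=[3.0, 3.0, 3.0])
    # smooth.run()  # the generated call would now include paths=['/home/nietob/spm12']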

    To Reproduce: Steps to reproduce the behavior:

    1. Open a project
    2. In Preferences > Pipeline uncheck "Use Matlab" and check it again. The "Use SPM" checkbox got disabled in the process.
    3. Click on "OK".
    4. Configure a simple pipeline with an SPM module like smooth. Initialize and run it.
    5. MIA crashes.

    Expected behavior: MIA should open an error dialog informing the user that SPM and/or Matlab need to be enabled in the preferences. In the preferences dialog, maybe SPM should not be automatically disabled when disabling Matlab (or it should be automatically re-enabled when it was automatically disabled before).

    Desktop (please complete the following information):

    • OS: Linux - Ubuntu 18.04, Plasma desktop
    • Populse_MIA version 1.2.1
    To discuss 
    opened by BlancaNietoPetinal 23
  • [Iteration enhancement] Iteration: database indexing issues

    Observed in the 3 iteration modes described in the documentation.

    Only the last patient in the iteration is indexed in the database after initialisation (and then run). On the other hand, all the expected data can be found in the derived_data directory of the project after the run.
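
    A purely illustrative pattern that produces this kind of symptom (not Mia's actual code): writing every iteration under the same key, so each pass overwrites the previous one while the files on disk all survive.

    # Illustrative only: same document key reused across iterations,
    # so only the last patient remains indexed.
    db = {}
    for patient in ['P01', 'P02', 'P03']:
        db['smooth_output'] = f'derived_data/s_{patient}.nii'  # buggy overwrite

    # One entry per iteration keeps every patient indexed:
    db = {}
    for patient in ['P01', 'P02', 'P03']:
        db[f'smooth_output_{patient}'] = f'derived_data/s_{patient}.nii'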

    enhancement 
    opened by servoz 22
  • [work on virtualisation] Current issues with virtualisation in macos

    Following the last update of the virtualisation documentation, it appears that anatomist is not available in mia. I am opening this ticket to help solve this.

    help wanted 
    opened by servoz 21
  • We should be able to detect if a parameter has been modified in the controller (consideration on the initialization step). (linked with #142)

    Let's take an example:

    • Launch Mia
    • Import or open a project with, for example, one anat.nii and one deformation field for normalisation (y_anat.nii).
    • Go to Pipeline Manager
    • In the New Pipeline tab, drag and drop the Normalize12 brick from the mia_processes library
    • On this Normalize12 brick, right click and choose Export all unconnected plugs
    • Click on the inputs node, then click on the Filter buttons for the deformation_file and apply_to_files parameters, and select the y_anat.nii and anat.nii scans from the database, respectively (right part of the window).
    • Pipeline > Initialize pipeline
    • The run button becomes available
    • Then change the jobtype parameter to 'est'
    • The run button stays available
    • So click on the run button
    • The pipeline is launched but with the previous set of parameters (as with jobtype == 'write')
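
    A toy sketch of the kind of change detection this calls for, using the traits observer mechanism Mia already depends on (class and trait names are hypothetical, not Mia's actual code):

    from traits.api import Bool, HasTraits, Str

    class ControllerState(HasTraits):
        """Any parameter change after a successful initialisation
        should invalidate the run button."""
        jobtype = Str('write')
        initialized = Bool(False)

        def _anytrait_changed(self, name, old, new):
            # Static traits handler, fired on every trait change.
            if name != 'initialized' and self.initialized:
                self.initialized = False  # force re-initialisation before run

    state = ControllerState()
    state.initialized = True           # pipeline initialised, run button enabled
    state.jobtype = 'est'              # parameter modified in the controller
    assert state.initialized is False  # run must be blocked until re-init
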
    enhancement 
    opened by servoz 21
  • [history] Complex types are not (or poorly) supported in the data history display.

    Working on mia_processes.bricks.stat.spm.Level1Design(), it is observed that complex type parameters are not displayed correctly in the data history display (the pipeline and the data history display are shown in screenshots attached to the original issue).

    In fact we have for this case:

    dict4runtime == {'sessions': [{'scans': '/data_2/populse_mia_project/test/data/derived_data/swralej-IRMFonct_+perfusion-2016-03-17083444-05-BOLD_CVR_7_53sl_ModeratePNSSENSE-FEEPI-001212_000.nii', 'multi_reg': [[['/data_2/populse_mia_project/test/data/downloaded_data/regressor_physio_EtCO2_ctl.mat']], [['/data_2/populse_mia_project/test/data/derived_data/rp_alej-IRMFonct_+perfusion-2016-03-17083444-05-BOLD_CVR_7_53sl_ModeratePNSSENSE-FEEPI-001212_000.txt']]], 'hpf': 427.2}]}

    sess_multi_reg == [['/data_2/populse_mia_project/test/data/downloaded_data/regressor_physio_EtCO2_ctl.mat', '/data_2/populse_mia_project/test/data/derived_data/rp_alej-IRMFonct_+perfusion-2016-03-17083444-05-BOLD_CVR_7_53sl_ModeratePNSSENSE-FEEPI-001212_000.txt']]

    We need to rework the relevant part of the scripts.
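
    A minimal sketch of a recursive formatter such a rework could start from (purely illustrative, assuming nothing about the current display code; file names in the example are shortened):

    def format_value(value, indent=0):
        """Render nested lists/dicts one item per line for a history display."""
        pad = '  ' * indent
        if isinstance(value, dict):
            lines = []
            for key, val in value.items():
                lines.append(f'{pad}{key}:')
                lines.append(format_value(val, indent + 1))
            return '\n'.join(lines)
        if isinstance(value, list):
            return '\n'.join(format_value(item, indent + 1) for item in value)
        return f'{pad}{value}'

    # Example with a nested structure like sess_multi_reg:
    print(format_value([['regressor_physio_EtCO2_ctl.mat', 'rp_alej.txt']]))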

    I don't have time to work on this at all at the moment. I am opening this ticket to keep in mind that we will have to work on it ASAP.

    The dict4runtime parameter is a special parameter (acting as a messenger between initialisation and runtime). As already mentioned in other tickets, I think it should not appear in the history display (userlevel = 1). When fixing this ticket, it would be nice to handle parameters with userlevel = 1 in the history display.

    opened by servoz 0
  • [casa_distro virtualisation] unit tests are failing in the Singularity container

    More precisely, the test.TestMIAMainWindow.test_open_shell test fails, and only on the BV Singularity container (not for a host installation).

    Exceptions / messages:

    test_open_shell (__main__.TestMIAMainWindow)
    Opens a Qt console and kill it afterwards. ... startShell
    run_ipconsole_kernel: qtconsole
    <frozen importlib._bootstrap>:914: ImportWarning: QtImporter.find_spec() not found; falling back to find_module()
    <frozen importlib._bootstrap>:914: ImportWarning: QtImporter.find_spec() not found; falling back to find_module()
    <frozen importlib._bootstrap>:914: ImportWarning: QtImporter.find_spec() not found; falling back to find_module()
    runing IP console kernel
    /usr/local/lib/python3.10/dist-packages/ipykernel/kernelapp.py:465: DeprecationWarning: zmq.eventloop.ioloop is deprecated in pyzmq 17. pyzmq now works with default tornado and asyncio eventloops.
      zmq_ioloop.install()
    /usr/local/lib/python3.10/dist-packages/ipykernel/kernelapp.py:305: DeprecationWarning: IPython.utils.io.raw_print has been deprecated since IPython 7.0
      io.raw_print(_ctrl_c_message)
    NOTE: When using the `ipython kernel` entry point, Ctrl-C will not work.
    
    To exit, you will have to explicitly quit this process, by either sending
    "quit" from a client, or using Ctrl-\ in UNIX-like environments.
    
    To read more about this, see https://github.com/ipython/ipython/issues/2049
    
    
    /usr/local/lib/python3.10/dist-packages/ipykernel/kernelapp.py:307: DeprecationWarning: IPython.utils.io.raw_print has been deprecated since IPython 7.0
      io.raw_print(line)
    To connect another client to this kernel, use:
        --existing kernel-29556.json
    
    
    enhancement 
    opened by servoz 3
  • [Documentation] Check and complete the documentation on the installation of Mia

    We have to carefully test, check and complete, if necessary, the documentation for the installation of Mia for users. We now propose the installation of Mia either in a "light" version, without viewer, or in a virtualized version, using BrainVISA images.

    It is thus necessary to test for the 3 OS (linux, macos, windows) and for the two types of installation ("light" or "with virtualization").

    In most cases, it will only be a matter of checking that the documentation is really functional and sufficiently detailed for an ordinary user without any particular computer knowledge. On the other hand, there is clearly the macos / "with virtualization" part still to write.

    enhancement 
    opened by servoz 0
  • [Undo-Redo] Inventory of bugs still existing in V2

    Working on the latest tickets, it seems that Mia V2 is still not stable in all cases when using the undo-redo function.

    We need to test undo-redo extensively and give the minimal procedures to reproduce the observed bug(s).

    We can do this in this ticket, or open new tickets if there are too many!

    enhancement 
    opened by servoz 0
  • [Configuration] Reading the user's configuration file, if it exists, in developer mode should not be possible

    If a configuration file exists in the ~/.populse_mia directory, it is the one that is read even in developer mode, whereas in developer mode it should be the populse_mia/properties/config.yml file.

    Indeed, the point of having a developer mode and a user mode is to be able to use, on the same station and without a virtual environment, mia from source and mia from pypi, if one wishes. The two configurations must be isolated.
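
    A minimal sketch of the intended resolution rule (paths from the ticket; the helper itself is hypothetical):

    import os

    def config_file(dev_mode, source_dir=None):
        """Developer mode always reads the in-source config, user mode
        always reads ~/.populse_mia/config.yml; the two never fall back
        on each other."""
        if dev_mode:
            return os.path.join(source_dir, 'populse_mia', 'properties',
                                'config.yml')
        return os.path.join(os.path.expanduser('~'), '.populse_mia',
                            'config.yml')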

    enhancement 
    opened by servoz 0
  • [Config - Install] "Show doc" feature of a process does not work in all cases.

    [Config - Install] "Show doc" feature of a process does not work in all cases.

    When we right click on a node/brick, we can choose "Show doc" to see the documentation of the corresponding process (the docstring of the class). On my station this does not work, either in casa-distro or directly on my host. See ticket 225 in capsul. The tests made by @denisri on his station indicate that everything works fine there. The issue would come from the configuration of the station, not from a code problem in populse/casa-distro.

    enhancement 
    opened by servoz 0