Segment axon and myelin from microscopy data using deep learning

Overview


Segment axon and myelin from microscopy data using deep learning. AxonDeepSeg is written in Python, uses the TensorFlow framework, and is based on a convolutional neural network architecture. Pixels are classified as axon, myelin, or background.
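As a hedged illustration of that pixel-wise classification (not the project's actual API), a three-class prediction map can be split into per-class binary masks like this; the label convention used here (0 = background, 1 = axon, 2 = myelin) is an assumption:

```python
import numpy as np

# Hypothetical label convention for illustration: 0 = background,
# 1 = axon, 2 = myelin (the actual model output format may differ).
def split_classes(pred):
    """Split an integer class map into binary axon and myelin masks."""
    axon = (pred == 1).astype(np.uint8)
    myelin = (pred == 2).astype(np.uint8)
    return axon, myelin
```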

For more information, see the documentation website.


Help

Whether you are a newcomer or an experienced user, we will do our best to help and reply to you as soon as possible. Of course, please be considerate and respectful of all people participating in our community interactions.

  • If you encounter difficulties during installation and/or while using AxonDeepSeg, or have general questions about the project, you can start a new discussion on the AxonDeepSeg GitHub Discussions forum. We also encourage you, once you've familiarized yourself with the software, to continue participating in the forum by helping answer future questions from fellow users!
  • If you encounter bugs during installation and/or use of AxonDeepSeg, you can open a new issue ticket on the AxonDeepSeg GitHub issues webpage.

FSLeyes plugin

A tutorial demonstrating the installation procedure and basic usage of our FSLeyes plugin is available on YouTube, and can be viewed by clicking this link.

References

AxonDeepSeg

Applications

Reviews

Citation

If you use this work in your research, please cite it as follows:

Zaimi, A., Wabartha, M., Herman, V., Antonsanti, P.-L., Perone, C. S., & Cohen-Adad, J. (2018). AxonDeepSeg: automatic axon and myelin segmentation from microscopy data using convolutional neural networks. Scientific Reports, 8(1), 3816. Link to paper: https://doi.org/10.1038/s41598-018-22181-4.

Copyright (c) 2018 NeuroPoly (Polytechnique Montreal)

Licence

The MIT License (MIT)

Copyright (c) 2018 NeuroPoly, École Polytechnique, Université de Montréal

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Contributors

Pierre-Louis Antonsanti, Stoyan Asenov, Mathieu Boudreau, Oumayma Bounou, Marie-Hélène Bourget, Julien Cohen-Adad, Victor Herman, Melanie Lubrano, Antoine Moevus, Christian Perone, Vasudev Sharma, Thibault Tabarin, Maxime Wabartha, Aldo Zaimi.

Comments
  • Refactored data augmentation, changed loss function, cleaned notebooks and other improvements

    Refactored data augmentation, changed loss function, cleaned notebooks and other improvements

This major PR improves the performance of the model and provides an improved version of data augmentation.

    DONE

    • Implemented data augmentation (Albumentations library) similar to the previous version of ADS

    • Changed the loss function from cross-entropy to the Dice coefficient to improve model performance, as indicated in issue #19.

    • Changed interpolation from linear to nearest-neighbour

    • Cleaned the notebooks and removed irrelevant ones, as indicated in #148

    • Migrated models to OSF storage to prevent bloating of the repository
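For reference, the soft Dice loss swapped in here can be sketched in plain NumPy as follows; this is an illustrative sketch, not the PR's actual implementation:

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for one class: 1 - 2|P∩T| / (|P| + |T|).
    `eps` avoids division by zero on empty masks."""
    pred = pred.ravel().astype(float)
    target = target.ravel().astype(float)
    intersection = (pred * target).sum()
    return 1.0 - (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

A perfect prediction gives a loss near 0; fully disjoint masks give a loss near 1.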

    Fixes #148, Fixes #19, Fixes #241, Fixes #278, Fixes #240, Fixes #273

    opened by vasudev-sharma 75
  • Implement Ellipse Minor Axis  as Diameter

    Implement Ellipse Minor Axis as Diameter

    Following the discussions in #363 and #349, this PR implements axon diameter computation using the minor axis of a fitted ellipse.

    DONE:

    • [x] Implemented the minor axis as an additional way to compute the diameter of an axon, the thickness of myelin, and the diameter of axon_myelin
    • [x] To let the user choose between the minor axis and the equivalent diameter when computing morphometrics, the boolean variable ellipse can be manually set to True or False.
    • [x] Made the necessary changes in the 04-compute-morphometrics.ipynb notebook file, allowing the user to set their choice of diameter computation.
    • [x] Added comprehensive tests for this new feature.
    • [x] Implement similar behaviour in the FSLeyes plugin, where the user is prompted to choose between the equivalent diameter and the ellipse minor axis. A separate issue has been opened for this (see #432) and it will be dealt with in a separate PR.
    • [x] Add documentation for this feature in notebook 04-morphometrics_extraction.ipynb
    • [x] Add a flag to select the shape of the axons
    • [x] Documentation: add literature for axon shape (circle and ellipse)
    • [x] Add CLI tests for the axon shape -a flag

    What are the main contributions of this PR?

    1. Implements the ellipse minor axis as an additional way to compute morphometrics
    2. For generating morphometrics via the CLI, adds a flag -a to select the axon shape. Refer to the docs for usage.
    3. Updated the Read the Docs documentation

    NOTE: The default behaviour is the equivalent diameter (circle) for measuring morphometrics. However, if the user wants to treat the axon as an "oblong ellipse", they can set the ellipse boolean variable to True.
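The two diameter definitions can be sketched in plain NumPy as follows; this is an independent illustration (the hypothetical helper `diameter` is not the PR's actual code), where the "ellipse" case takes the minor axis of the ellipse with the same second-order moments as the region:

```python
import numpy as np

def diameter(axon_mask, axon_shape="circle"):
    """Diameter of a single binary region: the equivalent-circle
    diameter ("circle"), or the minor axis of the ellipse with the
    same second-order moments ("ellipse")."""
    ys, xs = np.nonzero(axon_mask)
    if axon_shape == "circle":
        # Diameter of the circle whose area equals the region's area.
        return 2.0 * np.sqrt(ys.size / np.pi)
    # Smallest eigenvalue of the coordinate covariance -> minor axis.
    cov = np.cov(np.stack([ys, xs]).astype(float), bias=True)
    return 4.0 * np.sqrt(np.linalg.eigvalsh(cov)[0])
```

For a circular axon both definitions agree; for elongated axons the minor axis is smaller than the equivalent diameter, which is the motivation for this PR.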

    Fixes #363, #349

    feature 
    opened by vasudev-sharma 46
  • Add FSLeyes plugin

    Add FSLeyes plugin

    This PR implements the following changes:

    • Changed numpy and scikit-image versions
    • Implemented a GUI design for the plugin
    • Implemented an image loader for the plugin
    • Implemented buttons on the control panel of the plugin to apply a prediction model
    • Implemented a button to load existing masks
    • "Active" images/masks are determined by their visibility status (eye icon on the overlay list)
    • Added the following tools: watershed segmentation, axon auto-fill

    Fixes #159, Fixes #162, Fixes #191, Fixes #192, Fixes #193, Fixes #201, Fixes #209

    TODO

    • Write tests for the plugin (and add to Travis) - will be addressed in https://github.com/neuropoly/axondeepseg/issues/224

    How to test / install

    The installation procedure can be found here: https://github.com/neuropoly/axondeepseg/blob/FSLeyes_integration/docs/source/documentation.rst

    Tools description

    Tooltips were added to the GUI. If you hover your cursor over a button on the plugin, a description should pop up.

    opened by Stoyan-I-A 45
  • Release version 4.0.0

    Release version 4.0.0

    Checklist

    • [x] I've given this PR a concise, self-descriptive, and meaningful title
    • [x] I've linked relevant issues in the PR body
    • [x] I've applied the relevant labels to this PR
    • [x] I've added relevant tests for my contribution
    • [x] I've updated the documentation and/or added correct docstrings
    • [x] I've assigned a reviewer
    • [x] I've consulted ADS's internal developer documentation to ensure my contribution is in line with any relevant design decisions

    Description

    Release version 4.0.0 of AxonDeepSeg, which integrates IVADOMED into the project and provides Mac M1 compatibility.

    Linked issues

    Resolves #523, #536

    enhancement feature installation dependencies refactoring 
    opened by mathieuboudreau 43
  • Change how ADS dependencies are installed

    Change how ADS dependencies are installed

    This branch is a child of the branch from the fork in #441, and thus that PR needs to be merged first. I had to branch out of that PR because we updated the OSF filenames, so its tests fail in the meantime.

    This PR seeks to resolve the failing tests in #441 (highlighted here), and also simplifies the installation of FSLeyes by merging the requirements.txt file and the FSLeyes installation commands into a single environment.yml file. (I thought @jcohenadad had opened an issue about this at a previous meeting, but maybe we just discussed it.) Now, all the tools will be installed at the conda venv installation stage instead of afterwards. pip install -e . is still needed to install AxonDeepSeg itself.

    With this PR, FSLeyes will always be installed by default in the conda environment.

    With this PR, I don't think including ADS in PyPI is a viable option anymore. Adding it to conda-forge may be possible, though.

    To do:

    • [x] Resolve the failing test
      • This is likely due to one of the packages pulling the latest version instead of the fixed version in the previous requirements file.
    • [x] Someone with a Linux machine needs to test FSLeyes locally to make sure the GUI actually works.
    • [x] Update documentation on how to install AxonDeepSeg.
    • [x] Once this PR passes the travis tests, squash merge #441 before merging this one so that the diff is cleaner.
    opened by mathieuboudreau 38
  • Move to Python 3.6 compatibility

    Move to Python 3.6 compatibility

    This branch isn't ready for merging yet, please stand by. I'm simply making this PR to see the merge conflicts. There are still 3 failing tests and 1 errored test on Windows.

    opened by mathieuboudreau 34
  • Add pre-commit hooks

    Add pre-commit hooks

    This PR uses pre-commit hooks to limit file size. We wish to set a limit on file sizes so that contributors don't commit massive files to the repo.

    pre-commit has been added to prevent files larger than 500 KB from being committed, perform a YAML syntax check, and clear the outputs of Jupyter notebook files.

    Aside from the local pre-commit hooks, checks using pre-commit hooks are also added to Travis CI.

    The changes implemented are similar to what was implemented on the sister projects (see here and here).

    Three checks are done using pre-commit hooks, both locally and on Travis CI:

    • Files larger than 500 KB are prevented from being committed.
    • YAML files are checked for syntax errors.
    • Jupyter notebook outputs are cleared --> this hook clears cell outputs in case you are committing notebooks with outputs. At commit time it clears the outputs of the executed cells; the next time you re-commit the changes, the notebooks are committed with no cell outputs.
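A configuration along these lines wires up the three checks; this is a sketch (hook versions and exact arguments in the PR's actual `.pre-commit-config.yaml` may differ):

```yaml
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.4.0
    hooks:
      - id: check-added-large-files
        args: ["--maxkb=500"]   # reject files larger than 500 KB
      - id: check-yaml           # validate YAML syntax
  - repo: https://github.com/kynan/nbstripout
    rev: 0.6.1
    hooks:
      - id: nbstripout           # clear Jupyter notebook cell outputs
```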

    Instructions to test this PR:

    1. In your virtual environment, first run conda env update --name ads_venv --file environment.yml

    2. Run pip install -e .

    3. You can now test each of the hooks individually.

      3.1 (Hook for files > 500 KB): Either try to commit any ADS model, or run all the cells of 00-getting_started.ipynb (after running all the cells, the notebook's size will be around 1.5 MB) and try to commit it. The expected behavior is that this pre-commit hook won't allow you to commit these files, as they are larger than 500 KB.

      3.2 (Hook for YAML syntax): Change the syntax of the .travis.yml file (i.e., introduce a syntax error), then try to commit it. The expected behavior is that this pre-commit hook won't allow you to commit a YAML file with invalid syntax.

      3.3 (Hook for notebook outputs): Execute the cells of one of the notebook files, then try to commit the notebook with its cell outputs. The commit will modify the notebook so that the outputs are cleared; you can then commit the notebook with cleared outputs.

    Linked Issues

    Fixes #423

    dependencies ci 
    opened by vasudev-sharma 32
  • Improve and force imread/imwrite conversion to 8bit int

    Improve and force imread/imwrite conversion to 8bit int

    Checklist

    • [x] I've given this PR a concise, self-descriptive, and meaningful title
    • [x] I've linked relevant issues in the PR body
    • [x] I've applied the relevant labels to this PR
    • [x] I've added relevant tests for my contribution
    • [ ] I've updated the documentation and/or added correct docstrings
    • [x] I've assigned a reviewer
    • [x] I've consulted ADS's internal developer documentation to ensure my contribution is in line with any relevant design decisions

    Description

    Changes how 8-bit depth conversion is done in ADS's imread, removes the function's optional bit-depth argument (it appeared to have been unused for a long time, always taking the default value of 8), and adds a test verifying that the same image, saved at different int and float precisions and loaded with ads.imread, yields the same 8-bit image array.
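A minimal sketch of that kind of conversion (illustrative only; ADS's actual imread normalization may differ, e.g. in how it handles constant images or dtype-specific ranges):

```python
import numpy as np

def to_uint8(img):
    """Rescale an arbitrary-dtype image array to the full 8-bit range,
    so that the same image saved at different precisions converts to
    the same uint8 array."""
    img = np.asarray(img, dtype=np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:
        # Constant image: map everything to 0 to avoid division by zero.
        return np.zeros(img.shape, dtype=np.uint8)
    return ((img - lo) / (hi - lo) * 255).round().astype(np.uint8)
```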

    Linked issues

    Resolves #175

    processing testing 
    opened by mathieuboudreau 31
  • No matching distribution found for tensorflow==1.3.0

    No matching distribution found for tensorflow==1.3.0

    Using 2bee818b5be963b11f57733b110f1818daebf402 on rosenberg, I cannot properly install tensorflow==1.3.0:

    [...]
    Collecting tensorflow==1.3.0 (from AxonDeepSeg==2.2.dev0)
      Could not find a version that satisfies the requirement tensorflow==1.3.0 (from AxonDeepSeg==2.2.dev0) (from versions: )
    No matching distribution found for tensorflow==1.3.0 (from AxonDeepSeg==2.2.dev0)
    (venv_ads) [jcohen@rosenberg axondeepseg]$ pip install tensorflow==1.3.0
    Collecting tensorflow==1.3.0
      Could not find a version that satisfies the requirement tensorflow==1.3.0 (from versions: )
    No matching distribution found for tensorflow==1.3.0
    (venv_ads) [jcohen@rosenberg axondeepseg]$ pip -V
    pip 18.1 from /home/jcohen/miniconda3/envs/venv_ads/lib/python3.7/site-packages/pip (python 3.7)
    (venv_ads) [jcohen@rosenberg axondeepseg]$ python
    Python 3.7.1 (default, Oct 23 2018, 19:19:42) 
    [GCC 7.3.0] :: Anaconda, Inc. on linux
    
    installation 
    opened by jcohenadad 30
  • Fix Naming Convention

    Fix Naming Convention

    Fixes #439

    OSF: test files to upload on OSF (test_files.zip)

    DONE:

    • [x] Fix naming convention in the FSLeyes plugin
    • [x] Fix naming convention in the notebooks
    • [x] Fix naming convention in the apply_model.py script
    • [x] Upload test files to OSF

    TODO:

    • [ ] Add documentation on Wiki

    To test this PR :

    The segmented image names should follow a common convention:

    1. image_name_seg-axonmyelin.png (axon +myelin segmented mask)
    2. image_name_seg-axon.png (axon mask)
    3. image_name_seg-myelin.png (myelin mask)
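That convention can be sketched as a small helper (hypothetical code for illustration, not the actual ADS implementation):

```python
from pathlib import Path

def seg_filenames(image_path):
    """Derived mask filenames for an input image, following the
    <stem>_seg-<class>.png convention described above."""
    stem = Path(image_path).stem
    return {suffix: f"{stem}_seg-{suffix}.png"
            for suffix in ("axon", "myelin", "axonmyelin")}
```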

    For each of the cases below, check that ADS segments images adhering to the naming convention.

    1. FSLeyes: test the Apply ADS Segmentation Model and Save segmentation buttons and check that the outputs follow the naming convention.
    2. Notebooks: run all the notebooks and check whether the naming convention is followed.
    3. CLI: this is covered by the ADS unit tests, so you should expect all the test cases to pass.
    opened by vasudev-sharma 27
  • v4 ivadomed implementation

    v4 ivadomed implementation

    Checklist

    • [x] I've given this PR a concise, self-descriptive, and meaningful title
    • [x] I've linked relevant issues in the PR body
    • [x] I've applied the relevant labels to this PR
    • [x] I've added relevant tests for my contribution
    • [x] I've updated the documentation and/or added correct docstrings
    • [x] I've assigned a reviewer
    • [x] I've consulted ADS's internal developer documentation to ensure my contribution is in line with any relevant design decisions

    Description

    Implements IVADOMED automated segmentation inside of the ADS framework.

    Linked issues

    Resolves #523

    enhancement feature fsleyes dependencies refactoring ivadomed-refactoring 
    opened by mathieuboudreau 26
  • RAM limitations with no-patch option

    RAM limitations with no-patch option

    Describe the problem

    In PR #696 and #700, we added the option for the user to segment images without patches.

    After comparing the segmentation results with our models, we conclude that:

    • Qualitatively: the "no-patch" segmentation generally produces better results, with fewer border irregularities, fewer false-positive pixel clusters, and fewer incomplete axons and/or holes in axons.
    • Quantitatively: the "no-patch" option gives segmentation metrics close to the patch option, but better detection metrics because of fewer small false-positive pixel clusters.

    However, some issues arose while testing:

    1. By design in ivadomed, when both PT and ONNX models are available, PT models are selected automatically on GPU. However, PT models require more GPU memory than ONNX; some larger images could be segmented with ONNX but not with PT, yet there is no way for the user to select the ONNX model on GPU without removing the PT model from the folder.
    2. Some images are simply too big to segment without patches, even with a GPU and the ONNX model, and resulted in a "segmentation fault" (memory error) on bireli, rosenberg, and romane. I was not able to identify how, or whether, we can intercept this error. I was also not able to reproduce it on CPU on my laptop and had to kill the process (Ctrl+C or closing the terminal) to avoid crashing it.
    3. We currently cannot choose which GPU to use in ADS, but we can in ivadomed.

    Details of the tests and issues can be found in these slides.

    Proposed solutions

    We talked in meeting of different solutions/approaches to deal with these issues respectively:

    1. The model (PT or ONNX) is selected in ivadomed here depending on device (CPU/GPU) and availability. We could try to add a try-except block to try the PT model and switch to the ONNX model if PT fails and ONNX is available. This would need to be done in ivadomed.
    2. Several solutions were suggested:
      • Estimate the RAM needed based on image size, use psutil to estimate the RAM available before launching the segmentation, then warn the user if RAM is insufficient.
      • Estimate the maximum patch size that could be used given the free RAM, add it to the warning, and implement a way to change the patch size (currently fixed for a given model).
      • Warn the user when using "no-patch" that it may not be suitable for larger images, and warn the user when not using "no-patch" that "no-patch" could potentially produce better results if RAM is sufficient --> for now, we decided to go with this last option; implementation is in progress in PR #704.
    3. The implementation to choose which GPU to use in ADS is in progress in #701.
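The RAM pre-flight check suggested in solution 2 could be sketched as follows. This is a guess at one possible shape: the hypothetical helper `ram_check` and its `overhead` multiplier are assumptions (the true activation footprint depends on the model), and in practice `available_bytes` would come from something like psutil.virtual_memory().available:

```python
import numpy as np

def ram_check(image_shape, available_bytes, dtype=np.float32, overhead=4):
    """Rough pre-flight check: compare an estimated no-patch memory
    footprint (pixels * itemsize * overhead) against available RAM.
    `overhead` is a guessed multiplier for intermediate feature maps,
    not a measured value."""
    needed = int(np.prod(image_shape)) * np.dtype(dtype).itemsize * overhead
    return needed <= available_bytes, needed
```

A caller could warn the user (as proposed above) whenever the first return value is False.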

    Additional details can be found in these slides, including ideas on "where" to fix these issues (ivadomed or ADS) and what are the elements to consider in each case.

    enhancement discussion 
    opened by mariehbourget 0
  • Idea: stop supporting combined axon-myelin images and switch to only separate

    Idea: stop supporting combined axon-myelin images and switch to only separate

    This came up a couple of group meetings ago. If I recall correctly, the reasoning was that this is how IVADOMED treats the images anyway, and it would make things simpler for the GUIs as well.

    It wasn't clear to me whether this was for both inputs and outputs of ADS; to me it makes sense only for inputs, as generating a combined axon-myelin image is still quite useful for us to look at. But it might make sense to stop supporting this combined image as an input into ADS. One potential issue I can see: it might cause problems if a user does manual correction of the separate masks in another software, as there might be overlapping pixels identified as both myelin and axon (something that is avoided by using the combined masks).
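The overlapping-pixel concern is cheap to detect; a minimal sketch of such a check (illustrative, not existing ADS code):

```python
import numpy as np

def mask_overlap(axon_mask, myelin_mask):
    """Count pixels labeled as both axon and myelin in two separate
    binary masks -- nonzero means the masks are inconsistent."""
    overlap = np.logical_and(axon_mask > 0, myelin_mask > 0)
    return int(overlap.sum())
```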

    opened by mathieuboudreau 3
  • Prepare support for 3-class segmentation

    Prepare support for 3-class segmentation

    In order to support 3-class segmentation (context: unmyelinated fibers), we will need to change a few things on the ADS side. For ivadomed, nothing really changes: we will use 3 ground truths in the BIDS derivatives and change the training config accordingly. However, in ADS we will need to add some flexibility, notably:

    • For the segmentation process, axon_segmentation(...) will need to also save the third prediction. Maybe also add flexibility to merge_masks(...) if we want to support it: https://github.com/axondeepseg/axondeepseg/blob/821074c2c8b539bcec69686cce72304656124d51/AxonDeepSeg/apply_model.py#L46-L50 I'm not exactly sure how we would handle the 3rd class using the grayscale format in the combined prediction image, though.
    • Most of the work will probably be on the morphometrics process. Thanks to @Stoyan-I-A's refactoring, this should be easier to do, because I think we will only need to add columns for the 3rd-class metrics (e.g. area, etc.). Fortunately, processing unmyelinated axons should be exactly the same as processing axons. I'm thinking of adding a parameter to get_axon_morphometrics(...) indicating whether we want 3-class morphometrics; if so, it will load the 3rd segmentation and run the usual axon metrics on it. I'm not sure how we could merge this data into the morphometrics file (because the "myelin" columns will not apply to unmyelinated axons). In that case, maybe we would need 2 separate morphometrics files, but I don't really like that idea.
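One way the single-file option could look is a combined table where myelin-specific columns are simply NaN for unmyelinated axons. A sketch with hypothetical column names (rows as plain dicts; nothing here is existing ADS code):

```python
import math

def merge_morphometrics(myelinated, unmyelinated):
    """Build one combined morphometrics table: rows for unmyelinated
    axons reuse the axon columns and leave myelin-specific columns
    (here the hypothetical 'myelin_thickness') as NaN."""
    rows = [dict(r, myelinated=True) for r in myelinated]
    for r in unmyelinated:
        rows.append({**r, "myelin_thickness": math.nan, "myelinated": False})
    return rows
```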
    enhancement refactoring discussion morphometrics 
    opened by hermancollin 4
  • Colorization instance map question

    Colorization instance map question

    Hello,

    So I have been playing around with the colorization feature in the morphometrics extraction pipeline. My question concerns the colorization instance map and how it relates to the morphometrics extraction. The segmentation I have with my images is pretty good at the moment. However, the colorization instance map shows myelin identity boundary creep between touching axons.

    Does the colorization identity mismatch contribute to the calculation of the myelin thickness? And if so, what is the best way to address this issue?

    Thanks a lot,

    Michael

    (Attached images: LM_1, LM_1_axonmyelin_index, LM_1_instance-map)

    opened by GrimmSnark 5
  • Create a Napari plugin for ADS

    Create a Napari plugin for ADS

    Checklist

    • [ ] I've given this PR a concise, self-descriptive, and meaningful title
    • [ ] I've linked relevant issues in the PR body
    • [ ] I've applied the relevant labels to this PR
    • [ ] I've added relevant tests for my contribution
    • [ ] I've updated the documentation and/or added correct docstrings
    • [ ] I've assigned a reviewer
    • [ ] I've consulted ADS's internal developer documentation to ensure my contribution is in line with any relevant design decisions

    Description

    This PR contains the code I used for testing a Napari plugin.

    Linked issues

    Resolves #681

    opened by Stoyan-I-A 1
Releases: v4.1.0

Owner: NeuroPoly, École Polytechnique, Université de Montréal