A deep learning framework for historical document image analysis

Overview

DIVA-DAF


Description

A deep learning framework for historical document image analysis.

How to run

Install dependencies

# clone project
git clone https://github.com/DIVA-DIA/unsupervised_learning.git
cd unsupervised_learning

# create conda environment (IMPORTANT: needs Python 3.8+)
conda env create -f conda_env_gpu.yaml

# activate the environment using .autoenv
source .autoenv

# install requirements
pip install -r requirements.txt
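
To verify the setup, you can try importing the main dependencies (an optional sanity check, not part of the official instructions):

# optional sanity check
python -c "import torch, pytorch_lightning; print(torch.__version__, pytorch_lightning.__version__)"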

Train the model with the default configuration. Care: you need to change the value of data_dir in config/datamodule/cb55_10_cropped_datamodule.yaml so that it points to your dataset.
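
If you prefer not to edit the YAML file, the value can also be overridden on the command line through Hydra; the exact key depends on the config layout, so treat the following as a sketch:

# override the dataset location from the command line (key name assumed)
python run.py datamodule.data_dir=/path/to/your/dataset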

# default run based on config/config.yaml
python run.py

# train on CPU
python run.py trainer.gpus=0

# train on GPU
python run.py trainer.gpus=1

Train using GPU

# [default] train on all available GPUs
python run.py trainer.gpus=-1

# train on one GPU
python run.py trainer.gpus=1

# train on two GPUs
python run.py trainer.gpus=2

# train on CPU
python run.py trainer.accelerator=ddp_cpu

Train using CPU for debugging

# train on CPU
python run.py trainer.accelerator=ddp_cpu trainer.precision=32
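
Note: as reported in the comments below, ddp_cpu can fail with a working-directory problem; the workaround mentioned there is to call run.py by its full path:

# workaround for the ddp_cpu working-directory problem
python $PWD/run.py trainer.accelerator=ddp_cpu trainer.precision=32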

Train the model with a chosen experiment configuration from configs/experiment/:

python run.py +experiment=experiment_name

You can override any parameter from the command line like this:

python run.py trainer.max_epochs=20 datamodule.batch_size=64
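
Hydra also supports sweeping over several values with its standard --multirun flag (generic Hydra behaviour, not specific to this framework):

# one run per combination of the listed values
python run.py --multirun trainer.max_epochs=10,20 datamodule.batch_size=32,64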

Setup PyCharm

  1. Fork this repo
  2. Clone the repo to your local filesystem (git clone CLONELINK)
  3. Clone the repo onto your remote machine
  4. Move into the folder on your remote machine and create the conda environment (conda env create -f conda_env_gpu.yaml)
  5. Run source .autoenv in the root folder on your remote machine (activates the environment)
  6. Open the folder in PyCharm (File -> Open)
  7. Add the interpreter (Preferences -> Project -> Python Interpreter -> top left gear icon -> Add... -> SSH Interpreter) and follow the instructions (set the correct mapping to enable deployment)
  8. Upload the files (deployment)
  9. Create a wandb account (wandb.ai)
  10. Log in to your remote machine via SSH
  11. Go to the root folder of the framework and activate the environment (source .autoenv OR conda activate unsupervised_learning)
  12. Log into wandb. Execute wandb login and follow the instructions
  13. Now you should be able to run the basic experiment from PyCharm

Loading models

You can load the individual model parts (backbone or header) as well as the whole task. To load the backbone or the header, add the field path_to_weights to your experiment config, e.g.:

model:
    header:
        path_to_weights: /my/path/to/the/pth/file
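
The same field works for the backbone; a sketch of a config loading both parts (the paths are placeholders):

model:
    backbone:
        path_to_weights: /my/path/to/the/backbone.pth
    header:
        path_to_weights: /my/path/to/the/header.pth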

To load the whole task, provide the path to the checkpoint to the trainer via the field resume_from_checkpoint, e.g.:

trainer:
    resume_from_checkpoint: /path/to/.ckpt/file
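
Since resume_from_checkpoint is a regular trainer field, it should also be settable as a command-line override (add a leading + if the key is not yet present in your trainer config):

python run.py trainer.resume_from_checkpoint=/path/to/file.ckpt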

Freezing model parts

You can freeze either part of the model (backbone or header) with the freeze flag in the config. E.g., to freeze the backbone from the command line:

python run.py +model.backbone.freeze=True

In the config (e.g. model/backbone/baby_unet_model.yaml):

...
freeze: True
...

CARE: You cannot train a model that has no trainable parameters (e.g. when both backbone and header are frozen).

Selection in datasets

The selection key accepts either an int, which takes the first n files, or a list of strings to filter the datasets. If you are using a full-page dataset, be aware that the selection list contains file names without the extension.
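
A sketch of both forms, assuming the datamodule config exposes the key as selection (adjust the key and nesting to the actual config layout):

datamodule:
    # take the first 5 files
    selection: 5

    # or, for a full-page dataset, filter by file name (no extension):
    # selection:
    #     - page_001
    #     - page_002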

Cite us

@misc{vögtlin2022divadaf,
      title={DIVA-DAF: A Deep Learning Framework for Historical Document Image Analysis}, 
      author={Lars Vögtlin and Paul Maergner and Rolf Ingold},
      year={2022},
      eprint={2201.08295},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
Comments
  • Not working with ddp_cpu


    Describe the bug: If we want to run the framework with ddp_cpu as the accelerator, it won't work, as it has a working-directory problem.

    To Reproduce: python run.py trainer.accelerator='ddp_cpu' trainer.precision=32

    Expected behavior: We can use ddp_cpu to debug our system.

    Additional context: To avoid this problem, for the moment we can just use the full path to the run.py file ($PWD/run.py).

    Checklist

    • [ ] Add a warning if ddp_cpu and not precision=32
    bug If time Pipeline 
    opened by lvoegtlin 3
  • Use deepspeed to speed up the training


    Is your feature request related to a problem? Please describe: To accelerate the training, we could use the DeepSpeed plugin.

    Describe the solution you'd like: Make it possible to activate DeepSpeed through the config.

    Checklist

    • [x] Test deepspeed
    • [ ] Include it into the config system
    wontfix If time Pipeline 
    opened by lvoegtlin 3
  • Load model checkpoint instead of default init


    Differentiate between train, test, and train-and-test. Already started with two parameters, train and test, to define which part of the process should be run. Need to include loading from a ckpt for fine-tuning or just testing.

    https://pytorch-lightning.readthedocs.io/en/stable/common/weights_loading.html


    Possibly weights_only would work.

    We need to make our own callback which inherits from ModelCheckpoint and overrides/adds a model-only checkpoint save (https://github.com/PyTorchLightning/pytorch-lightning/blob/bca5adf6de1ae74c7103839aac54c8648464bee6/pytorch_lightning/callbacks/model_checkpoint.py#L485)

    Checklist

    • [x] test check if path_to_weights is set
    • [x] load model state from path
    • [x] create a generic model which takes an encoder and a header (configs)
    • [x] #15
    • [x] save model with a callback (create callback)
    • [x] if we are just testing we need a path_to_weights for both
    Important Module Pipeline 
    opened by lvoegtlin 3
  • Updating dependencies


    Description

    Updating PL, torchmetrics, and pytest to the newest versions. Also introduces code coverage with SonarCloud. Each PR will now be checked for test coverage.

    How to Test/Run?

    pytest

    opened by lvoegtlin 2
  • Fixed problem with multiple empty folders in checkpoints


    Description

    The checkpoint callback created the checkpoints in a dedicated epochs folder. The folder should get deleted when it is no longer the best, but this also did not work with the built-in version of the model checkpoint callback. Solved it by doing a clean-up at the end of the experiment.

    How to Test/Run?

    python run.py trainer.max_epochs=20

    Something missing?

    opened by lvoegtlin 2
  • Feature/datamodule for gif imgs


    Description

    A datamodule that takes advantage of the index format. It no longer determines the classes by color but takes the classes directly from the raw image and uses the palette as the class encoding.

    How to Test/Run?

    pytest or python run.py experiment=development_baby_unet_indexed.yaml

    opened by lvoegtlin 2
  • DDP metric bias


    Is your feature request related to a problem? Please describe: When running an experiment with DDP, we have a small data bias if the dataset size is not divisible by batch_size * num_processors. To make users aware of this problem, we can add a warning if num_samples % (batch_size * num_processors) != 0. Problem described here.

    Describe the solution you'd like: Raising an error if the condition above is not met. Also, add a flag to ignore this error (ignore_ddp_bias).

    Describe alternatives you've considered: Solve it with the DDP join function from PyTorch, but it is very hard to hack that into PL.

    Checklist

    • [x] Create check and warning
    • [x] Add shuffle and drop_last_batch options to datamodule config
    • [x] Add shuffle/drop_last_batch to default config files
    enhancement Pipeline 
    opened by lvoegtlin 2
  • Add the strict parameter to make it possible to load non-fitting models


    Describe the feature

    Make it possible to transfer weights between similar models

    Describe the solution you'd like

    A parameter strict in the models which defines how to load the model when it does not fit the weights file

    Checklist

    • [x] Add this parameter in the model config
    • [x] Use it to load the model
    • [x] Add log for missed/unexpected keys
    If time Module Pipeline 
    opened by lvoegtlin 2
  • Loss function as config


    Is your feature request related to a problem? Please describe: Make it possible to define the loss function in the config.

    Describe the solution you'd like: Define some default loss functions and create a config for them. Then hand the criterion object over to the task at the beginning of the training.

    Checklist

    • [ ] define 4 basic losses (Xentropy, L1, MSE, BCE)
    • [ ] create configs
    • [ ] hand over the loss function as a parameter to the task
    enhancement If time Module Pipeline 
    opened by lvoegtlin 2
  • Specify metric via callback


    Is your feature request related to a problem? Please describe: To make the system more flexible, we have to implement the metrics as callbacks s.t. we can combine multiple metrics and also reuse them in other tasks.

    Describe the solution you'd like: Implement mIoU (jar fashion), precision, recall, and accuracy as metric callbacks. Call the metrics at the end of the steps. Also make sure that when we are testing in DDP, we just run it on one GPU or with join (see the documentation of join).

    Checklist

    • [x] Implement DIVA HisDB metric class (our metric)
    • [x] Metric which is exactly like the jar
    • [x] Create config for mIoU
    enhancement If time Module Pipeline 
    opened by lvoegtlin 2
  • Feature/add fcn


    Description

    UNet now has a swappable classifier. This makes working with it much easier, as we can easily fine-tune it on a dataset with more or fewer classes.

    How to Test/Run?

    pytest or python run.py

    opened by lvoegtlin 1
  • Training/validation and test time


    Is your feature request related to a problem? Please describe. Get the exact time for the training (incl. validation) and the testing in seconds. This can be reported overall as well as for an epoch. The setup time of the framework should be excluded.

    Describe the solution you'd like: Log these times to the loggers in use and report them in the experiment summary file.

    Checklist

    • [ ] Check if PL already provides such a feature
    • [ ] Create timers for the different phases
    • [ ] Report these times
    • [ ] Test
    • [ ] PR
    opened by lvoegtlin 1
  • More complex return


    Is your feature request related to a problem? Please describe: Let the framework return more information, like the best model path, metrics, etc., as a dictionary, s.t. calling files can chain together multiple framework runs.

    Describe the solution you'd like: With a dictionary.

    Checklist

    • [ ] Check what return information is needed
    • [ ] Add it to the execution class
    • [ ] Test
    • [ ] PR
    enhancement Needed Config 
    opened by lvoegtlin 0
  • Rework the backbone header model


    Is your feature request related to a problem? Please describe: Think about the current backbone header model and try to adapt it to the new needs. If necessary, change it to a new model.

    Checklist

    • [ ] Evaluate the existing model with the new needs
    • [ ] Think about solutions
    • [ ] Prototype the solutions
    • [ ] Implementation (models, workflow, callbacks)
    • [ ] Config adaption
    • [ ] Test
    • [ ] PR
    enhancement Needed Config 
    opened by lvoegtlin 0
  • Test if possible conf_mat from base_task into a callback


    Is your feature request related to a problem? Please describe. The problem before with the conf mat callback was that it had a semaphore leak. As described here (https://github.com/ashleve/lightning-hydra-template/issues/189#issuecomment-1003532448), it should work now with the usage of torchmetrics.

    Checklist

    • [ ] Factor the conf mat log into callback
    • [ ] Extensive testing
    • [ ] Tests
    • [ ] PR
    enhancement Config 
    opened by lvoegtlin 0
  • Update hydra to 1.2


    Is your feature request related to a problem? Please describe: Update Hydra to the newest version.

    Checklist

    • [ ] update
    • [ ] adapt code
    • [ ] test
    • [ ] PR
    enhancement 
    opened by lvoegtlin 0
  • Hyperparameter optimization


    Is your feature request related to a problem? Please describe: Create a possibility to do hyperparameter optimization with the framework.

    Checklist

    • [ ] Check out which one works best
    • [ ] integrate it or use it as a script
    • [ ] Test
    • [ ] PR
    enhancement 
    opened by lvoegtlin 0
Releases (version_0.2.2)
  • version_0.2.2(Jun 24, 2022)

    What's Changed

    • Experiment for rotnet with unet backbone by @lvoegtlin in https://github.com/DIVA-DIA/DIVA-DAF/pull/101
    • Created additional tests by @lvoegtlin in https://github.com/DIVA-DIA/DIVA-DAF/pull/100
    • Updated the version on PL to 1.5.10 by @lvoegtlin in https://github.com/DIVA-DIA/DIVA-DAF/pull/112
    • Added tests for RolfFormat datamodule and RGB takes by @lvoegtlin in https://github.com/DIVA-DIA/DIVA-DAF/pull/114
    • Release 0.2.2 by @lvoegtlin in https://github.com/DIVA-DIA/DIVA-DAF/pull/113

    Full Changelog: https://github.com/DIVA-DIA/DIVA-DAF/compare/version_0.2.1...version_0.2.2

  • version_0.2.1(Dec 2, 2021)

    What's Changed

    • Fixed selection parameter, removed todos, improved print_config, added self to configs by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/87
    • Added tests for tasks and fixed merge scripts by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/89
    • New log folder structure by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/91
    • Replacing numpy with torch in divahisdb functional by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/93
    • Rename config saved during a run, and print commands to rerun a run by @powl7 in https://github.com/DIVA-DIA/unsupervised_learning/pull/95
    • Release 0.2.1 by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/98

    Full Changelog: https://github.com/DIVA-DIA/unsupervised_learning/compare/version_0.2.0...version_0.2.1

  • version_0.2.0(Nov 25, 2021)

    Some new things

    • new architectures (resnet)
    • new datamodules (rolf format, RGB, full-page, and SSL)
    • different bug fixes
    • experiment configs
    • refactoring and deletion of unused code
    • callback to check the compatibility of backbone and header
    • inference/prediction stage (list of files with regex)
    • freezing header or backbone
    • improved readme
    • improved testing

    What's Changed

    • Dev data refactoring by @powl7 in https://github.com/DIVA-DIA/unsupervised_learning/pull/74
    • Dev rgb encoding by @powl7 in https://github.com/DIVA-DIA/unsupervised_learning/pull/76
    • RotNet by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/75
    • log more by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/77
    • More architectures by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/78
    • Dev fixing tests by @powl7 in https://github.com/DIVA-DIA/unsupervised_learning/pull/79
    • Created resnet FCN header by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/83
    • Dev rolf data format by @powl7 in https://github.com/DIVA-DIA/unsupervised_learning/pull/84
    • Introduce inference/prediction and refactoring by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/85
    • release 0.2.0 by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/86

    Full Changelog: https://github.com/DIVA-DIA/unsupervised_learning/compare/version_0.1.1...version_0.2.0

  • version_0.1.1(Oct 22, 2021)

    Changelog:

    • fixed conf mat
    • optimized test and validation step
    • improved merging of crops
    • more metrics and optimizers
    • updated requirements

    What's Changed

    • made tests running also in the terminal by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/60
    • fixed evaluation tool problem by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/62
    • adding new optimiser configs by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/64
    • removed unused dependency by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/65
    • Dev improve datamodule tests by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/66
    • Dev fixing conf and f1 heatmap by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/68
    • :art: each worker of the dl gets now an own seed by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/69
    • Dev reduce gpu memory by @powl7 in https://github.com/DIVA-DIA/unsupervised_learning/pull/71
    • upload run config by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/72
    • release version 0.1.1 by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/73

    Full Changelog: https://github.com/DIVA-DIA/unsupervised_learning/compare/version_0.1.0...version_0.1.1

  • version_0.1.0(Oct 6, 2021)

    The first version of the framework

    What's Changed

    • Dev 38 create hydra configs by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/1
    • Dev 47 better logger name by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/3
    • Dev 43 configurable optimizers by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/2
    • Dev 44 load model checkpoint by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/16
    • dev synced metric logging by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/17
    • When DDP num_workers = 0 was forced by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/19
    • Resolve ddp warning by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/20
    • Add strict parameter by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/21
    • Config refinement by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/23
    • Save config file for each run by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/28
    • add env by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/29
    • Dev 25 torchmetric introduction by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/30
    • Removed custom hydra config by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/32
    • Dev 24 abstract task class by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/33
    • Dev 26 loading warning improvements by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/34
    • update pl to 1.4.4 by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/36
    • Loss functions as config by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/37
    • ddp cpu not working by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/39
    • Dev shuffle data option by @powl7 in https://github.com/DIVA-DIA/unsupervised_learning/pull/44
    • Dev dataset selected pages by @powl7 in https://github.com/DIVA-DIA/unsupervised_learning/pull/49
    • Dev 9 metric as config by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/47
    • Fix conf mat and extend by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/51
    • Save metrics to csv by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/52
    • Check backbone header compatibility by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/53
    • abstract datamodule and resolvers by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/56
    • Dev refactoring and tests by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/57
    • Dev 34 refactoring semantic segmentation by @powl7 in https://github.com/DIVA-DIA/unsupervised_learning/pull/58
    • Version 0.1.0 of the fw by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/59

    Full Changelog: https://github.com/DIVA-DIA/unsupervised_learning/commits/version_0.1.0
