Histocartography is a framework bringing together AI and Digital Pathology

Overview


Documentation | Paper

Welcome to the histocartography repository! histocartography is a Python-based library designed to facilitate the development of graph-based computational pathology pipelines. The library includes plug-and-play modules to perform:

  • standard histology image pre-processing (e.g., stain normalization, nuclei detection, tissue detection)
  • entity-graph representation building (e.g. cell graph, tissue graph, hierarchical graph)
  • modeling Graph Neural Networks (e.g. GIN, PNA)
  • feature attribution based graph interpretability techniques (e.g. GraphGradCAM, GraphGradCAM++, GNNExplainer)
  • visualization tools

All the functionalities are grouped under a user-friendly API.

If you encounter any issue or have questions regarding the library, feel free to open a GitHub issue. We'll do our best to address it.

Installation

PyPI installer (recommended)

pip install histocartography

Development setup

  • Clone the repo:
git clone https://github.com/histocartography/histocartography.git && cd histocartography
  • Create a conda environment:
conda env create -f environment.yml
  • Activate it:
conda activate histocartography
  • Add histocartography to your python path:
export PYTHONPATH="<PATH>/histocartography:$PYTHONPATH"

Tests

To ensure proper installation, run the unit tests:

python -m unittest discover -s test -p "test_*" -v

Running the tests on CPU can take up to 20 minutes.

Using histocartography

The histocartography library provides a set of helpers grouped in different modules, namely preprocessing, ml, visualization and interpretability.

For instance, in histocartography.preprocessing, building a cell-graph from an H&E image is as simple as:

>> import numpy as np
>> from PIL import Image
>> from histocartography.preprocessing import NucleiExtractor, DeepFeatureExtractor, KNNGraphBuilder
>> 
>> nuclei_detector = NucleiExtractor()
>> feature_extractor = DeepFeatureExtractor(architecture='resnet34', patch_size=72)
>> knn_graph_builder = KNNGraphBuilder(k=5, thresh=50, add_loc_feats=True)
>>
>> image = np.array(Image.open('docs/_static/283_dcis_4.png'))
>> nuclei_map, _ = nuclei_detector.process(image)
>> features = feature_extractor.process(image, nuclei_map)
>> cell_graph = knn_graph_builder.process(nuclei_map, features)
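
The graph builders return DGL graphs (histocartography builds on dgl), so the resulting cell graph can be saved and reloaded with DGL's standard utilities if you want to decouple pre-processing from training. A minimal sketch, assuming cell_graph is a dgl.DGLGraph and with an arbitrary file name:

>> from dgl.data.utils import save_graphs, load_graphs
>>
>> save_graphs('283_dcis_4_cell_graph.bin', [cell_graph])   # persist the graph to disk
>> graphs, _ = load_graphs('283_dcis_4_cell_graph.bin')     # reload it later
>> cell_graph = graphs[0]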

The output can then be visualized with:

>> from histocartography.visualization import OverlayGraphVisualization, InstanceImageVisualization

>> visualizer = OverlayGraphVisualization(
...     instance_visualizer=InstanceImageVisualization(
...         instance_style="filled+outline"
...     )
... )
>> viz_cg = visualizer.process(
...     canvas=image,
...     graph=cell_graph,
...     instance_map=nuclei_map
... )
>> viz_cg.show()
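
Since viz_cg exposes show(), it behaves like a PIL image; assuming it is indeed a PIL.Image.Image, the overlay can also be written to disk (the output path below is arbitrary):

>> viz_cg.save('283_dcis_4_cell_graph_overlay.png')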

A set of examples demonstrating the capabilities of the histocartography library is provided in the examples directory. The examples show you how to perform:

  • stain normalization with the Vahadane or Macenko algorithm (a minimal, hedged sketch is shown right after this list)
  • cell graph generation to transform an H&E image into a graph-based representation where nodes encode nuclei and edges encode nuclei-to-nuclei interactions. It includes nuclei detection based on HoverNet pretrained on the PanNuke dataset, deep feature extraction, and kNN graph building.
  • tissue graph generation to transform an H&E image into a graph-based representation where nodes encode tissue regions and edges encode tissue-to-tissue interactions. It includes tissue detection based on superpixels, deep feature extraction, and RAG graph building.
  • feature cube extraction to extract deep representations of the individual patches covering the image
  • cell graph explainer to generate an explanation highlighting salient nodes. It runs inference on a pretrained CG-GNN model followed by the GraphGradCAM explainer (see the hedged sketch after this list).
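
For instance, stain normalization follows the same process() convention as the other pre-processing steps. The snippet below is only a sketch: it assumes the default VahadaneStainNormalizer constructor (and its Macenko counterpart) falls back to the library's built-in reference stain target; pass your own target otherwise and refer to the stain normalization example for the exact parameters:

>> from histocartography.preprocessing import VahadaneStainNormalizer
>>
>> normalizer = VahadaneStainNormalizer()        # assumption: defaults to the built-in reference target
>> image = np.array(Image.open('docs/_static/283_dcis_4.png'))
>> normalized_image = normalizer.process(image)  # RGB np.ndarray with the same shape as the input

Similarly, the cell graph explainer combines a pretrained model from histocartography.ml with an explainer from histocartography.interpretability. The sketch below assumes GraphGradCAMExplainer takes the model as a keyword argument and returns one importance score per node; model stands for a pretrained cell-graph GNN loaded as in the cell graph explainer example:

>> from histocartography.interpretability import GraphGradCAMExplainer
>>
>> explainer = GraphGradCAMExplainer(model=model)      # `model`: pretrained CG-GNN (see the example)
>> importance_scores = explainer.process(cell_graph)   # assumed: one importance value per nucleus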

A tutorial with detailed descriptions and visualizations of some of the main functionalities is provided here as a notebook.

External Resources

Learn more about GNNs

  • We have prepared a gentle introduction to Graph Neural Networks. In this tutorial, you can find slides, notebooks and a set of reference papers.
  • For those of you interested in exploring Graph Neural Networks in depth, please refer to this content or this one.

Papers already using this library

  • Hierarchical Graph Representations for Digital Pathology, Pati et al., preprint, 2021. [pdf] [code]
  • Quantifying Explainers of Graph Neural Networks in Computational Pathology, Jaume et al., CVPR, 2021. [pdf] [code]
  • Learning Whole-Slide Segmentation from Inexact and Incomplete Labels using Tissue Graphs, Anklin et al., preprint, 2021. [pdf] [code]

If you use this library, please consider citing:

@article{pati2021,
    title = {Hierarchical Graph Representations for Digital Pathology},
    author = {Pushpak Pati and Guillaume Jaume and Antonio Foncubierta and Florinda Feroce and Anna Maria Anniciello and Giosuè Scognamiglio and Nadia Brancati and Maryse Fiche and Estelle Dubruc and Daniel Riccio and Maurizio Di Bonito and Giuseppe De Pietro and Gerardo Botti and Jean-Philippe Thiran and Maria Frucci and Orcun Goksel and Maria Gabrani},
    journal = {arXiv preprint arXiv:2102.11057},
    year = {2021}
} 
Comments
  • Memory requirements

    Thanks for the awesome repository! I am trying to run the cell graph generation example but I get CUDA out of memory errors. I am using a GPU with 8.5 GB of memory, not running anything else on it, and it is not shared in any way. Is there a minimum memory requirement for graph representation inference?

    opened by luiscarm9 6
  • forward() missing 1 required positional argument: 'H'

    I got this error when trying to implement my own model into the histocartography pipeline:

    ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)
    c:\Users\raman\OneDrive - softsensor.ai\histocartography\v1.ipynb Cell 1 in <cell line: 13>()
         10 knn_graph_builder = KNNGraphBuilder(k=5, thresh=50, add_loc_feats=True)
         12 image = np.array(Image.open('docs/_static/283_dcis_4.png'))
    ---> 13 nuclei_map, _ = nuclei_detector.process(image)
         14 features = feature_extractor.process(image, nuclei_map)
         15 cell_graph = knn_graph_builder.process(nuclei_map, features)

    File c:\Users\raman\OneDrive - softsensor.ai\histocartography\histocartography\pipeline.py:138, in PipelineStep.process(self, output_name, *args, **kwargs)
        135     return self._process_and_save(
        136         *args, output_name=output_name, **kwargs)
        137 else:
    --> 138     return self._process(*args, **kwargs)

    File c:\Users\raman\OneDrive - softsensor.ai\histocartography\histocartography\preprocessing\nuclei_extraction.py:118, in NucleiExtractor._process(self, input_image, tissue_mask)
        106 def _process(  # type: ignore[override]
        107     self,
        108     input_image: np.ndarray,
        109     tissue_mask: Optional[np.ndarray] = None,
        110 ) -> Tuple[np.ndarray, np.ndarray]:
        111     """Extract nuclei from the input_image
        112     Args:
        113         input_image (np.array): Original RGB image
        (...)
        ...
    -> 1130     return forward_call(*input, **kwargs)
       1131 # Do not call functions when jit is used
       1132 full_backward_hooks, non_full_backward_hooks = [], []

    TypeError: forward() missing 1 required positional argument: 'H'

    How do i proceed?

    opened by Ramxnan 5
  • RuntimeError: unexpected EOF, expected 4530578 more bytes. The file might be corrupted.

    from histocartography.preprocessing import (
        VahadaneStainNormalizer,         # stain normalizer
        NucleiExtractor,                 # nuclei detector
        DeepFeatureExtractor,            # feature extractor
        KNNGraphBuilder,                 # kNN graph builder
        ColorMergedSuperpixelExtractor,  # tissue detector
        DeepFeatureExtractor,            # feature extractor
        RAGGraphBuilder,                 # build graph
        AssignmnentMatrixBuilder         # assignment matrix
    )

    nuclei_detector = NucleiExtractor()

    When this code runs, the error is:


    File already downloaded.
    /home/yyang3/anaconda3/envs/yy/lib/python3.6/site-packages/torch/serialization.py:658: SourceChangeWarning: source code of class 'histocartography.ml.models.hovernet.HoverNet' has changed. you can retrieve the original source code by accessing the object's source attribute or set torch.nn.Module.dump_patches = True and use the patch tool to revert the changes.
      warnings.warn(msg, SourceChangeWarning)
    (the same SourceChangeWarning is repeated for the classes Encoder, Conv2dWithActivation, torch.nn.modules.conv.Conv2d, BNReLU, torch.nn.modules.batchnorm.BatchNorm2d, torch.nn.modules.activation.ReLU, ResidualBlock, SamepaddingLayer, Decoder, Upsample2x, torch.nn.modules.upsampling.Upsample and DenseBlock)

    Traceback (most recent call last):
      File "", line 1, in
      File "/home/yyang3/anaconda3/envs/yy/lib/python3.6/site-packages/histocartography/preprocessing/nuclei_extraction.py", line 82, in __init__
        self._load_model_from_path(model_path)
      File "/home/yyang3/anaconda3/envs/yy/lib/python3.6/site-packages/histocartography/preprocessing/nuclei_extraction.py", line 88, in _load_model_from_path
        self.model = torch.load(model_path)
      File "/home/yyang3/anaconda3/envs/yy/lib/python3.6/site-packages/torch/serialization.py", line 595, in load
        return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
      File "/home/yyang3/anaconda3/envs/yy/lib/python3.6/site-packages/torch/serialization.py", line 781, in _legacy_load
        deserialized_objects[key]._set_from_file(f, offset, f_should_read_directly)
    RuntimeError: unexpected EOF, expected 4530578 more bytes. The file might be corrupted.

    opened by yangyang117 4
  • Extract coordinates of nuclei in image

    Hello!

    Hope all is well. I have two images: one with the H&E staining, and another one, which is exactly the same, but colored by cell label. I'm wondering if it is possible to extract the coordinates of each nucleus in the input image? I would like to go from a nucleus in one picture to its label in the other using the coordinates. Is this possible?

    See the two images below (this is from an open source dataset):

    (two screenshots attached: the H&E image and the label-colored image)
    opened by hossam-zaki 4
  • Torch version issue

    I'm trying to run my own .pth model instead, but my model was built with a recent torch version and does not work in 1.10.1, which is required by nuclei_detection. Is there a way to overcome the "AttributeError: 'Upsample' object has no attribute 'recompute_scale_factor'" without downgrading the torch package?

    opened by Ramxnan 2
  • Vahadane StainNormalizer raises error

    Hi,

    I am trying to run Macenko and Vahadane stain normalizer on my datasets.

    The dataset has separate folders. I am trying to make the list of files, initialize the VahadaneStainNormalizer instance, and call the _normalize_image(img) method. It works in the beginning but suddenly stops after some time and raises the error below. I have tried different datasets but the error is the same.

    The images are PNG; I load them using PIL, convert them to RGB, and turn them into ndarrays. I do not understand where NaN or inf values might be appearing.

    Traceback (most recent call last):
      File "normalizer.py", line 58, in
        norm_img = normalization._normalize_image(target)
      File "/home/neel/miniconda3/envs/DKL/lib/python3.6/site-packages/histocartography/preprocessing/stain_normalizers.py", line 498, in _normalize_image
        input_image, stain_matrix_source
      File "/home/neel/miniconda3/envs/DKL/lib/python3.6/site-packages/histocartography/preprocessing/stain_normalizers.py", line 103, in _get_concentrations
        stain_matrix.T, optical_density.T, rcond=-1)[0].T
      File "<__array_function__ internals>", line 6, in lstsq
      File "/home/neel/miniconda3/envs/DKL/lib/python3.6/site-packages/numpy/linalg/linalg.py", line 2306, in lstsq
        x, resids, rank, s = gufunc(a, b, rcond, signature=signature, extobj=extobj)
      File "/home/neel/miniconda3/envs/DKL/lib/python3.6/site-packages/numpy/linalg/linalg.py", line 100, in _raise_linalgerror_lstsq
        raise LinAlgError("SVD did not converge in Linear Least Squares")
    numpy.linalg.LinAlgError: SVD did not converge in Linear Least Squares

    Can someone please suggest here?

    opened by NeelKanwal 2
  • Weights instead of model

    nuclei_extraction.py gives an option to use our own model instead of the existing model trained on the PanNuke dataset.

    In my case I have the model weights and not the model itself. How would I have to adapt the code to run the nuclei detector with just my model weights?


    opened by Ramxnan 1
  • Nuclei instance types

    Really appreciate your work, guys. Can we draw an instance map for a specific type of nuclei, e.g. only epithelium, either from the nuclei extractor or during visualization? Example images are in the link: https://drive.google.com/drive/folders/1giWG6V-daElfYHtVMnFeAdbm2D_Efmcp?usp=sharing

    opened by Abbas009 1
  • AttributeError: 'Upsample' object has no attribute 'recompute_scale_factor'

    # 2. nuclei detection
    nuclei_map, nuclei_centroids = nuclei_detector.process(image)
    

    I get an error on this line. Could you help me?

    AttributeError                            Traceback (most recent call last)
    in ()
         40
         41 # 2. nuclei detection
    ---> 42 nuclei_map, nuclei_centroids = nuclei_detector.process(image)
         43
         44 # 3. nuclei feature extraction

    11 frames
    /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in __getattr__(self, name)
       1184         return modules[name]
       1185     raise AttributeError("'{}' object has no attribute '{}'".format(
    -> 1186         type(self).__name__, name))
       1187
       1188     def __setattr__(self, name: str, value: Union[Tensor, 'Module']) -> None:

    AttributeError: 'Upsample' object has no attribute 'recompute_scale_factor'

    opened by esratepe 1
  • 'Upsample' object has no attribute 'recompute_scale_factor'

    Hi, thanks for making this available.

    I'm running into an issue when trying to execute the following code:

    feature_extractor = DeepFeatureExtractor(architecture='resnet34', patch_size=25)
    knn_graph_builder = KNNGraphBuilder(k=6, thresh=50, add_loc_feats=True)
    nuclei_map, x = nuclei_detector.process(img)
    


    I tried the solution suggested here but it didn't help: https://stdworkflow.com/1508/attributeerror-upsample-object-has-no-attribute-recompute-scale-factor

    Has anyone come across this and found a solution?

    opened by spencerkrichevsky 1
  • HE Reference matrix

    Hi,

    I have a question regarding the Macenko stain normalizer. I see that the H&E reference matrix is a hard-coded matrix of shape (2, 3). Can you shed some light on how you obtained this matrix? I am looking for a paper where this matrix is provided. Below is the line of code I am referring to, in histocartography/preprocessing/stain_normalizers.py:

    self.stain_matrix_target = np.array(
        [[0.5626, 0.7201, 0.4062], [0.2159, 0.8012, 0.5581]]
    )
    
    opened by krishnakanagal 1
  • CUDA Error

    Hi, I am trying to run the example you provided, but I am getting the following error at the line (feature_extractor = DeepFeatureExtractor(architecture='resnet34', patch_size=72)): "RuntimeError: CUDA error: no kernel image is available for execution on the device. CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1."

    Any idea why I am getting this error? Thanks!

    opened by SaharAlmahfouzNasser 1
  • How to show just a sub-graph of a certain class by graph-pruning explaining, just like the pictures shown in the paper

    How can I show just the sub-graph of a certain class via graph-pruning explaining, like the figures shown in the paper "Towards Explainable Graph Representations in Digital Pathology"?

    opened by xiaofochai 1
  • Graph creation doesn't behave properly when used in patho-quant-explainer

    See these three issues in the patho-quant-explainer repository.

    To summarize, I was trying to reproduce the results shown in the pathology quantitative explainer paper, but failed to do so. After debugging and tracing the error back, I found that it was because the nuclei extractor detects too few nuclei, even on the original (latest) BRACS dataset. One example would be detecting only 4 nuclei in BRACS_1897_DCIS_4.png. The lack of nuclei sometimes causes the DeepFeatureExtractor to fail, which then causes the KNNGraphBuilder to fail and the graph-to-file output function to throw an error.

    I've also tried running the patho-quant-explainer pipeline on the previous version of the dataset, but that method fails on the very first graph in the test set because the KNNGraphBuilder fails to run, causing a save error.

    Since I made no modifications to the source code, this could be due to an environment or hardware issue. As mentioned in another issue, the environment yaml provided in the histocartography repositories appears to be incomplete, outdated, or both. If this error isn't replicated by the maintenance team, would you be able to provide the exact environment you're using? Thanks!

    opened by CarlinLiao 1
  • Environment dependency issue

    Hi, I'm using a Mac and I followed the commands to create the conda environment, but encountered something like "torchvision requires torch 1.2.1 but torch version requires 1.3.0." I then tried to remove the version requirement for torch, but encountered PIL issues such as "cannot import name 'PILLOW_VERSION' from 'PIL'". I don't see a similar issue from anyone else, and I don't know if this is a Mac problem or not. Thank you!

    opened by jiaqiwu1999 4
  • GraphGradCAMExplainer use of backpropagation

    When using the GraphGradCAMExplainer, we use a pretrained torch GNN model set to eval mode, since we're no longer training the model. However, the Explainer module uses backpropagation to find the node importances via the weight coefficients of the hooked activation maps, which shouldn't be possible on an eval model instance.


    For whatever reason, this doesn't throw an error in the recommended python 3.7, dgl 0.4.3post2, and torch 1.10 environment, but does in my more up-to-date python 3.9, dgl 0.9, torch 1.12.1 env even though the written code is identical.

    The only solution I've found so far is to set the model used in the Explainer to training mode before running the explainer, but that's far from ideal.

    Is there a way to find the node importances without committing to backpropagation? Is that what backpropagating in the original histocartography environment does instead? If it doesn't, is it not an issue that the model is being updated via backpropagation during the process of explaining node importance?

    opened by CarlinLiao 1
Releases: v0.2.1