Python module providing a framework to trace individual edges in an image using Gaussian process regression.

Overview

Edge Tracing using Gaussian Process Regression

Repository storing a Python module which implements a framework to trace individual edges in an image using Gaussian process regression.


Abstract from the paper describing the methodology

We introduce a novel edge tracing algorithm using Gaussian process regression. Our edge-based segmentation algorithm models an edge of interest using Gaussian process regression and iteratively searches the image for edge pixels in a recursive Bayesian scheme. This procedure combines local edge information from the image gradient and global structural information from posterior curves, sampled from the model's posterior predictive distribution, to sequentially build and refine an observation set of edge pixels. This accumulation of pixels converges the distribution to the edge of interest. Hyperparameters can be tuned by the user at initialisation and optimised given the refined observation set. This tunable approach does not require any prior training and is not restricted to any particular type of imaging domain. Due to the model's uncertainty quantification, the algorithm is robust to artefacts and occlusions which degrade the quality and continuity of edges in images. Our approach also has the ability to efficiently trace edges in image sequences by using previous-image edge traces as a priori information for consecutive images. Various applications to medical imaging and satellite imaging are used to validate the technique and comparisons are made with two commonly used edge tracing algorithms.
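As a toy illustration of the core modelling ingredient described above, the short sketch below fits a Gaussian process regressor to a handful of edge-pixel observations and draws curves from its posterior predictive distribution. It uses scikit-learn directly and only stands in for, rather than reproduces, the package's own implementation; the observation values and hyperparameters are made up for illustration.

# Minimal sketch: model an edge as y = f(x) with a GP and sample posterior curves
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF

# Toy observation set of edge pixels: x = column index, y = row index
x_obs = np.array([0, 120, 250, 380, 499]).reshape(-1, 1)
y_obs = np.array([250, 180, 260, 320, 250])

# RBF kernel with an amplitude (sigma_f) and length scale, plus observation noise (alpha)
kernel = ConstantKernel(constant_value=75.0**2) * RBF(length_scale=20.0)
gp = GaussianProcessRegressor(kernel=kernel, alpha=1.0, normalize_y=True)
gp.fit(x_obs, y_obs)

# Sample posterior curves across the image width; in the algorithm such curves are
# scored against the image gradient to propose new edge pixels, which are added to
# the observation set and the procedure repeats until the model converges to the edge
x_grid = np.arange(500).reshape(-1, 1)
posterior_curves = gp.sample_y(x_grid, n_samples=5, random_state=1)
print(posterior_curves.shape)  # (500, 5)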

More information

The paper describing this methodology has been accepted for publication in IEEE Transactions on Image Processing, expected in December 2021 or January 2022 (TBC).

For open access to this paper, including information on the algorithm, pseudocode, applications and discussion, see here


Getting started

Required packages

  • numpy
  • matplotlib
  • scikit-learn
  • scikit-image
  • KDEpy
  • scipy
  • time
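All of these except time (which is part of the Python standard library) should be installable from PyPI, e.g. pip install numpy matplotlib scikit-learn scikit-image KDEpy scipy.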

Code demonstration

After cloning this repository, import the Python module and the provided utilities script:

# Import relevant python packages
import numpy as np
from gp_edge_tracing import gpet_utils, gpet

We can now construct the same noisy, test image used in the paper:

# Create test image with single sinusoidal edge and simple image gradient
N = 500
test_img, true_edge = gpet_utils.construct_test_img(size=(N, N), amplitude=200, curvature=4,
                                                    noise_level=0.05, ltype='sinusoidal',
                                                    intensity=0.3, gaps=True)

kernel = gpet_utils.kernel_builder(size=(11,5), unit=False)
grad_img = gpet_utils.comp_grad_img(test_img, kernel)

This test image and corresponding image gradient are shown below.

[Figure: test image (left) and corresponding image gradient (right)]
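A minimal sketch to display the two arrays with matplotlib is given below; the side-by-side layout is an assumption, not the exact figure from the repository.

# Display the test image and its gradient side by side
import matplotlib.pyplot as plt

fig, (ax_img, ax_grad) = plt.subplots(1, 2, figsize=(10, 5))
ax_img.imshow(test_img, cmap='gray')
ax_img.set_title('Test image')
ax_grad.imshow(grad_img, cmap='gray')
ax_grad.set_title('Image gradient')
for ax in (ax_img, ax_grad):
    ax.set_axis_off()
plt.show()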

After specifying default model parameters, we can run the edge tracing algorithm:

# Define model parameters
kernel_params = {'kernel': 'RBF', 'sigma_f': 75, 'length_scale': 20}
delta_x = 5
score_thresh = 1
N_samples = 1000
noise_y = 1
seed = 1
keep_ratio = 0.1
init = true_edge[[0, -1],:][:, [1,0]]
obs = np.array([])
fix_endpoints = True
return_std = True

# Instantiate algorithm using parameters in __init__()
noisy_trace = gpet.GP_Edge_Tracing(init, grad_img, kernel_params, noise_y, obs, N_samples, score_thresh,
                                   delta_x, keep_ratio, seed, return_std, fix_endpoints)

# __call__() parameters and run algorithm on test image
# Change these verbosity parameters to monitor fitting procedure
print_final_diagnostics = False
show_init_post = False
show_post_iter = False
verbose = False
edge_pred, edge_credint = noisy_trace(print_final_diagnostics, show_init_post, show_post_iter, verbose)

We can then superimpose the edge prediction and 95% credible interval onto the test image and image gradient, quantitatively comparing the prediction with the ground truth, as shown below.

[Figure: edge prediction and 95% credible interval superimposed onto the test image and image gradient, compared with the ground truth]
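A minimal sketch of such an overlay and a simple quantitative comparison is given below. It assumes edge_pred and true_edge are arrays of (row, column) pixel coordinates covering the same columns; the exact output format of __call__() is documented in the notebook referenced below.

# Overlay the edge prediction and ground truth on the test image
# (assumes both arrays store (row, column) pixel coordinates)
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(7, 7))
ax.imshow(test_img, cmap='gray')
ax.plot(edge_pred[:, 1], edge_pred[:, 0], 'r-', label='Edge prediction')
ax.plot(true_edge[:, 1], true_edge[:, 0], 'b--', label='Ground truth')
ax.legend()
ax.set_axis_off()
plt.show()

# Simple quantitative comparison: mean absolute row-deviation from the ground truth,
# assuming the two arrays cover the same columns in the same order
mae = np.mean(np.abs(edge_pred[:, 0] - true_edge[:, 0]))
print(f'Mean absolute deviation: {mae:.2f} pixels')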

More information

Please refer to this notebook for the code to reproduce this result, as well as more information on the compulsory, tuning and verbosity parameters.


Contributors

Jamie Burke (owner), a third-year PhD student at the University of Edinburgh currently developing novel image processing tools for automated ocular image analysis.