LIMEcraft: Handcrafted superpixel selection and inspection for Visual eXplanations

Overview


LIMEcraft is an explanatory method based on image perturbations. It builds on the LIME algorithm, which has significant drawbacks, especially for difficult cases and medical imaging. LIMEcraft's main advantage is that the user can freely select the image areas to be analyzed and then perturb them; comparing the perturbed images with the original makes it easier to understand which features have a significant impact on the model's prediction.
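To give a feel for the perturbation-based idea, here is a minimal sketch (not the LIMEcraft implementation itself) of how the upstream lime package can explain a prediction using a user-supplied superpixel map. The names model, image, and my_segments are hypothetical placeholders; building such a segment map from a hand-drawn mask is sketched in the next section.

import numpy as np
from lime import lime_image

# `model`, `image` (an HxWx3 array), and `my_segments` (an HxW integer label
# map of hand-crafted superpixels) are hypothetical and assumed to exist.
def classifier_fn(images):
    # Map a batch of perturbed images to class probabilities.
    return model.predict(np.asarray(images))

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image,
    classifier_fn,
    segmentation_fn=lambda img: my_segments,  # reuse the hand-crafted superpixels
    num_samples=1000,
)

# Show which superpixels push the top predicted class up the most.
img_out, mask_out = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5
)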

About the User Interface

The interface runs in the browser, where the user can upload an image and a superpixel mask from their own computer. They can also mark interesting areas in the image manually with the mouse. Next, they choose the number of superpixels into which the selected areas are divided; the same procedure is applied to the areas covered by the uploaded mask and, independently, to the areas outside it. After confirming their choices, they wait for the result of the LIMEcraft algorithm. The user receives the model's prediction expressed as a percentage both for the originally predicted class and for the class after image editing.
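A rough sketch of the inside/outside split described above, assuming scikit-image and a hypothetical boolean mask user_mask drawn by the user (the superpixel counts are arbitrary examples):

import numpy as np
from skimage.segmentation import slic

# `image` is an HxWx3 array and `user_mask` an HxW boolean array marking
# the regions the user selected or uploaded (both hypothetical here).
n_inside, n_outside = 20, 10   # numbers of superpixels chosen by the user

# Segment the selected regions and the rest of the image separately, then
# merge them into one label map with unique superpixel ids.
seg_inside = slic(image, n_segments=n_inside, mask=user_mask, start_label=1)
seg_outside = slic(image, n_segments=n_outside, mask=~user_mask, start_label=1)
segments = np.where(user_mask, seg_inside, seg_outside + seg_inside.max())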

User Interface

Functionalities:

  • upload image
  • upload mask
  • manually select mask
  • change number of superpixels inside and outside the mask
  • show how the prediction changed
  • change color - RGB
  • change shape - power expansion
  • rotate - degrees
  • shift - left/right and down/up
  • remove object by shifting it
  • generate report
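The sketch below illustrates, in a simplified way, the kind of perturbations listed above (color shift, rotation, translation, object removal by shifting). It uses scipy.ndimage and the hypothetical variables image and object_mask; it is an approximation of the idea, not the dashboard's exact code.

import numpy as np
from scipy import ndimage

# `image` is an HxWx3 float array in [0, 1]; `object_mask` is an HxW boolean
# mask of the object whose influence we want to probe (both hypothetical).

# Color perturbation: shift the red channel inside the selected object.
recolored = image.copy()
recolored[object_mask, 0] = np.clip(recolored[object_mask, 0] + 0.2, 0.0, 1.0)

# Rotation by a chosen angle in degrees.
rotated = ndimage.rotate(image, angle=15, reshape=False, mode="nearest")

# Shift left/right and up/down; shifting an object out of the frame
# effectively removes it from the image.
shifted = ndimage.shift(image, shift=(0, 30, 0), mode="nearest")

# Each perturbed image is then passed to the model and the change in the
# predicted class probability is compared with the original prediction.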

How to run the code

git clone https://github.com/MI2DataLab/LIMEcraft.git
cd LIMEcraft

virtualenv venv
source venv/bin/activate
pip install -r requirements.txt
In case of problems with library versions, try installing the latest versions of the affected libraries.

git submodule update --init
If the previous instruction does not work, use:
git submodule add --force https://github.com/marcotcr/lime.git code/lime_library
Type jupyter notebook in the console, open code/dashboard_LIMEcraft.ipynb, and run the whole notebook.
Then open http://127.0.0.1:8001/ in the web browser.

How to test your own model?

Download full_skin_cancer_model.h5 from https://www.kaggle.com/kmader/deep-learning-skin-lesion-classification/data.
Put the model in the code folder.
Change the selected model in code/dashboard_LIMEcraft.ipynb in the "Choose model" section.
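The "Choose model" step essentially amounts to loading your model and exposing a prediction function. A minimal sketch, assuming a Keras .h5 model (the path, scaling, and function name are illustrative, not the notebook's exact code):

import numpy as np
from tensorflow import keras

# Load the downloaded model from the `code` folder (the path is an example).
model = keras.models.load_model("full_skin_cancer_model.h5")

def predict_fn(images):
    # Scale inputs and return class probabilities; adapt the preprocessing
    # to whatever your own model expects.
    batch = np.asarray(images, dtype=np.float32) / 255.0
    return model.predict(batch)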

Reference

The paper for this work is available at: https://arxiv.org/abs/2111.08094

If you find our work useful, please cite our paper:

@misc{Hryniewska2021LIMEcraft,
	title={{LIMEcraft: Handcrafted superpixel selection and inspection for Visual eXplanations}}, 
	author={Weronika Hryniewska and Adrianna Grudzień and Przemysław Biecek},
	year={2021},
	eprint={2111.08094},
	archivePrefix={arXiv},
	primaryClass={cs.CV},
	keywords={Explainable AI, superpixels, LIME, image features, interactive User Interface},
	howpublished={\url{https://arxiv.org/abs/2111.08094}},
}