The fastest way to visualize GradCAM with your Keras models.

Overview

VizGradCAM

VizGradCAM is the fastest way to visualize GradCAM in Keras models. GradCAM provides visual explainability for trained models and can serve as an important step in letting engineers verify which regions of an image contributed to a given inference result.

Most tutorials and existing functions offer similar methods, but they require the user to supply the name of the last convolutional layer, upscale the heatmap, and superimpose it on the original image as separate steps. This repository aims to combine all of those tasks into a single function.
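
For context, the sketch below outlines the manual Grad-CAM steps that this function wraps: build a model exposing the last convolutional layer, take gradients of the top class score with respect to its output, weight and normalize the feature maps, then upscale and superimpose the heatmap. It is a minimal sketch modeled on the Keras tutorial referenced under More Information; last_conv_layer_name, the 0-255 image array, and the cv2/matplotlib plotting choices are assumptions you would adapt to your own setup.

# A Minimal Manual Grad-CAM Sketch (The Steps VizGradCAM Bundles Into One Call)
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import cv2

def manual_gradcam(model, image, last_conv_layer_name):
    # Model mapping the input to the last conv layer's activations and the predictions
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(last_conv_layer_name).output, model.output],
    )

    # Gradients of the top predicted class score w.r.t. the conv output
    with tf.GradientTape() as tape:
        conv_output, predictions = grad_model(np.expand_dims(image, axis=0))
        top_class = tf.argmax(predictions[0])
        top_score = predictions[:, top_class]
    grads = tape.gradient(top_score, conv_output)

    # Average gradients spatially to weight each feature-map channel
    pooled_grads = tf.reduce_mean(grads, axis=(0, 1, 2))
    heatmap = tf.reduce_sum(conv_output[0] * pooled_grads, axis=-1)
    heatmap = tf.maximum(heatmap, 0)
    heatmap = heatmap / (tf.reduce_max(heatmap) + 1e-8)

    # Upscale the heatmap to the image size and superimpose it
    heatmap = cv2.resize(heatmap.numpy(), (image.shape[1], image.shape[0]))
    plt.imshow(image.astype("uint8"))
    plt.imshow(heatmap, cmap="jet", alpha=0.5)
    plt.axis("off")
    plt.show()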

Usage

This function can be imported or simply copied into your script where required. Specific usage can be found in the sample Jupyter Notebook.

"""
Function Parameters:
    model        : Compiled Model with Weights Loaded
    image        : Image to Perform Inference On 
    plot_results : True - Function Plots using PLT
                   False - Returns Heatmap Array
    interpolant  : Interpolant Value that Describes The Superimposition Ratio
                   Between Image and Heatmap
"""
VizGradCAM(model, image, plot_results=True, interpolant=0.5)

Sample Usage

# Import Function
from gradcam import VizGradCAM

# Keras Imports Used In This Example (TensorFlow 2.x assumed)
from tensorflow.keras.applications import EfficientNetB4
from tensorflow.keras.preprocessing.image import load_img, img_to_array

# Load Your Favourite Image (EfficientNetB4 expects 380x380 inputs,
# see the table under Tested / Supported Models)
test_img = img_to_array(load_img("monkey.jpeg", target_size=(380, 380)))

# Use The Function - Boom!
VizGradCAM(EfficientNetB4(weights="imagenet"), test_img)
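
If you prefer to handle plotting yourself, plot_results=False returns the heatmap array instead. Below is a small sketch of one way to overlay it with matplotlib; the alpha blending and the assumption that the returned heatmap matches the image resolution are illustrative choices, not part of the function's API.

import matplotlib.pyplot as plt

# Get The Raw Heatmap Array Instead Of Plotting
heatmap = VizGradCAM(EfficientNetB4(weights="imagenet"), test_img, plot_results=False)

# Overlay It On The Original Image (resize the heatmap first if its shape differs)
plt.imshow(test_img.astype("uint8"))
plt.imshow(heatmap, cmap="jet", alpha=0.5)
plt.axis("off")
plt.show()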

Results

Example result images: plot_results=True vs. plot_results=False.

More Information

This function is inspired by Keras' Grad-CAM tutorial, found here, and the original paper, Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization, found here.

Tested / Supported Models

This function works with Keras CNN models and most Keras Applications-based models. This means it will work even if you used include_top=False and added your own final dense layers for transfer learning on some of the models listed below; Grad-CAM targets the gradients flowing into the last convolutional layer, which is what this function looks for. A transfer-learning sketch follows the table below.

Model Architecture    Supported Dimension
VGG16                 (224,224)
VGG19                 (224,224)
DenseNet121           (224,224)
DenseNet169           (224,224)
ResNet50              (224,224)
ResNet101             (224,224)
ResNet152             (224,224)
ResNet50V2            (224,224)
ResNet101V2           (224,224)
ResNet152V2           (224,224)
MobileNet             (224,224)
MobileNetV2           (224,224)
Xception              (299,299)
InceptionV3           (299,299)
InceptionResNetV2     (299,299)
EfficientNetB0        (224,224)
EfficientNetB1        (240,240)
EfficientNetB2        (260,260)
EfficientNetB3        (300,300)
EfficientNetB4        (380,380)
EfficientNetB5        (456,456)
EfficientNetB6        (528,528)
EfficientNetB7        (600,600)
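
As noted above, the function is also intended to work with transfer-learning models built on these backbones, as long as the convolutional layers remain reachable. The sketch below is one illustrative way to set that up; the MobileNetV2 backbone, the 5-class dense head, and the omitted training step are placeholder assumptions, not requirements of the function.

from gradcam import VizGradCAM
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.preprocessing.image import load_img, img_to_array

# Backbone Without Its Classification Head
backbone = MobileNetV2(weights="imagenet", include_top=False, input_shape=(224, 224, 3))

# Custom Dense Head For A Hypothetical 5-Class Problem
x = layers.GlobalAveragePooling2D()(backbone.output)
outputs = layers.Dense(5, activation="softmax")(x)

# Building from backbone.input / backbone.output keeps the conv layers
# in the top-level graph, so the last conv layer can still be located
model = models.Model(backbone.input, outputs)

# ... compile and train the model on your own dataset here ...

# Run Grad-CAM on an image sized for the backbone
test_img = img_to_array(load_img("monkey.jpeg", target_size=(224, 224)))
VizGradCAM(model, test_img, plot_results=True)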

Comments
  • last_conv_layer fails for Transfer Learning Models

    Creating a transfer learning model using Keras.Applications yields a model.summary() such as:

    Model: "model"
    _________________________________________________________________
    Layer (type)                 Output Shape              Param #   
    =================================================================
    input_2 (InputLayer)         [(None, 160, 160, 3)]     0         
    _________________________________________________________________
    sequential (Sequential)      (None, 160, 160, 3)       0         
    _________________________________________________________________
    tf.math.truediv (TFOpLambda) (None, 160, 160, 3)       0         
    _________________________________________________________________
    tf.math.subtract (TFOpLambda (None, 160, 160, 3)       0         
    _________________________________________________________________
    mobilenetv2_1.00_160 (Functi (None, 5, 5, 1280)        2257984   
    _________________________________________________________________
    global_average_pooling2d (Gl (None, 1280)              0         
    _________________________________________________________________
    dropout (Dropout)            (None, 1280)              0         
    _________________________________________________________________
    dense (Dense)                (None, 1)                 1281      
    =================================================================
    Total params: 2,259,265
    Trainable params: 1,281
    Non-trainable params: 2,257,984
    _________________________________________________________________
    

    Note the Functional layer mobilenetv2_1.00_160, which hides the underlying base_model.

    VizGradCAM fails to find the last convolutional layer as it doesn't "dive into" the Functional base_model.

    opened by DocandBean 0
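
One possible workaround, not part of this repository, is to search nested Functional or Sequential sub-models recursively when locating the last convolutional layer, roughly as sketched below. Building the gradient model against a layer buried inside a sub-model may still require extra handling, so treat this only as a starting point.

import tensorflow as tf

def find_last_conv_layer(model):
    # Walk the layers in reverse and descend into nested sub-models
    # (e.g. the mobilenetv2_1.00_160 Functional layer shown above)
    for layer in reversed(model.layers):
        if isinstance(layer, tf.keras.Model):
            nested = find_last_conv_layer(layer)
            if nested is not None:
                return nested
        elif isinstance(layer, tf.keras.layers.Conv2D):
            return layer
    return None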