Deep learning models are often treated as black boxes. To address this, this project provides explainable-cnn, a flexible and easy-to-use library that helps you create visualizations for any torch-based CNN model. It takes a data-centric approach, focusing on making the internal workings of the neural layers more transparent. To that end, explainable-cnn is a plug & play component that visualizes layers based on their gradients and builds different representations, including Saliency Map, Guided Backpropagation, Grad-CAM, and Guided Grad-CAM.
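For reference, each of these visualizations is derived from gradients of the class score. Grad-CAM, for instance, weights each activation map $A^k$ of a chosen convolutional layer by the spatial average of the class score's gradients and keeps only the positive contributions (Selvaraju et al., 2017):

$$\alpha_k^c = \frac{1}{Z}\sum_i \sum_j \frac{\partial y^c}{\partial A^k_{ij}}, \qquad L^c_{\text{Grad-CAM}} = \mathrm{ReLU}\!\left(\sum_k \alpha_k^c A^k\right)$$

Guided Grad-CAM then combines this coarse heatmap element-wise with the fine-grained Guided Backpropagation map.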
Install the package
pip install explainable-cnn
To create visualizations, create an instance of CNNExplainer:
from explainable_cnn import CNNExplainer
x_cnn = CNNExplainer(...)
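For illustration only, a model to hand to the explainer could be prepared as below; the ResNet-18 and the torchvision version requirement are assumptions, and the exact CNNExplainer constructor arguments (elided above) are listed in the API reference linked further down.

```python
import torchvision.models as models

# explainable-cnn works with any torch-based CNN; a pretrained ResNet-18
# is used here purely as an illustrative choice.
model = models.resnet18(weights="IMAGENET1K_V1")  # requires torchvision >= 0.13
model.eval()  # switch to inference mode before generating visualizations
```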
The following method calls return numpy arrays corresponding to the input image for different types of visualizations.
saliency_map = x_cnn.get_saliency_map(...)
grad_cam = x_cnn.get_grad_cam(...)
guided_grad_cam = x_cnn.get_guided_grad_cam(...)
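Because each call returns a plain numpy array, the results can be inspected or displayed with any standard tooling. A minimal sketch using matplotlib is shown below; the array shapes are placeholders standing in for the actual return values, not something the library guarantees.

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholders: in practice these are the numpy arrays returned by the
# x_cnn.get_* calls above.
saliency_map = np.random.rand(224, 224)  # assumed 2D map with values in [0, 1]
grad_cam = np.random.rand(224, 224, 3)   # assumed RGB heatmap overlay

fig, axes = plt.subplots(1, 2, figsize=(8, 4))
axes[0].imshow(saliency_map, cmap="gray")
axes[0].set_title("Saliency Map")
axes[0].axis("off")
axes[1].imshow(grad_cam)
axes[1].set_title("Grad-CAM")
axes[1].axis("off")
plt.tight_layout()
plt.show()
```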
To see the full list of arguments and their usage for all methods, please refer to this file.
You may want to look at example usage in the example notebook.
Below is a comparison of the visualizations generated by Grad-CAM and Guided Grad-CAM.
Thanks goes to these wonderful people (emoji key):
This project follows the all-contributors specification. Contributions of any kind welcome!