PCACE: A Statistical Approach to Ranking Neurons for CNN Interpretability

Overview

PCACE is a new algorithm for ranking the neurons of a CNN architecture in order of their importance to the final classification. PCACE is a statistical method that combines Alternating Conditional Expectation (ACE) with Principal Component Analysis (PCA) to find the maximal correlation coefficient between a hidden neuron and the final class score. This yields a rigorous and standardized method for quantifying the relevance of each neuron to the final model classification.
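
To make the method concrete, the per-channel computation can be sketched as follows. This is a minimal illustration, not the repository code: it assumes the activation maps of one channel have already been collected into a (NUM_IMAGES, SIZE) array, uses scikit-learn for the PCA step, and assumes the ACESolver interface of the ace Python package for the ACE step.

    import numpy as np
    from sklearn.decomposition import PCA
    from ace.ace import ACESolver  # assumed interface of the required ACE package

    def pcace_score(acts, class_scores, pca_comp):
        # acts: (NUM_IMAGES, SIZE) flattened activation maps of one channel.
        # class_scores: (NUM_IMAGES,) pre-softmax scores of the target class.
        # Step 1 (PCA): reduce the SIZE-dimensional activations to pca_comp components.
        reduced = PCA(n_components=pca_comp).fit_transform(acts)
        # Step 2 (ACE): find transformations of the reduced activations and of the
        # class score that maximize their correlation.
        solver = ACESolver()
        solver.specify_data_set(list(reduced.T), class_scores)
        solver.solve()
        transformed_x = np.sum(solver.x_transforms, axis=0)
        # The correlation of the optimal transforms is the maximal correlation
        # coefficient, i.e. the PCACE value of this channel.
        return abs(np.corrcoef(transformed_x, solver.y_transform)[0, 1])

Channels are then ranked by this value, from most to least relevant to the class.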

Summary of Usage

  1. pcace_resnet_18.py: code for the PCACE algorithm in the ResNet-18 architecture. Uses PyTorch to load the model and requires the ACE package. Caps indicate variables changeable by the user (see the configuration sketch after this list):
     - NUM_IMAGES: the number of input images for PCACE.
     - CLASS: the class to which the input images belong.
     - LAYER_NAME: name of the convolutional layer to which we apply PCACE. Follows the structure layerx[y].convz.
     - NUM_CHANNELS: number of channels in LAYER_NAME.
     - SIZE: number of pixels in the activation maps of LAYER_NAME.
     - SIZE_X, SIZE_Y: height and width of the activation maps. Must satisfy SIZE = SIZE_X*SIZE_Y.
     - CLASS_IDX: the index of the target class (the class of the input images) in the pre-softmax score vector.
     - PCA_COMP: the number of components to which PCA reduces the data.
     After the algorithm runs, it returns an array results with the PCACE values of all channels, which can then be sorted.

  2. pcace_vgg_16.py: same code and functionality as pcace_resnet_18.py, but for the VGG-16 architecture instead of ResNet-18. Computes the PCACE values for any layer of VGG-16.

  3. activation_maximization.py: code to visualize the filter activation maximization images with VGG-16, following the code from https://github.com/keisen/tf-keras-vis. Uses Keras to load the model and requires the tf-keras-vis package. Caps indicate variables changeable by the user (see the sketch after this list): LAYER_NAME: the layer containing the channel whose feature visualization is computed. FILTER_NUMBER: the channel within that layer.

  4. visualize_act_maps_resnet_18.py: code to visualize the activation maps of the top PCACE channels with ResNet-18. As in pcace_resnet_18.py, it uses PyTorch to load the model. Caps indicate variables changeable by the user: LAYER_NAME: name of the convolutional layer to which we apply PCACE, following the structure layerx[y].convz. ORDER: an array containing the PCACE channel indices sorted from lowest to highest value. The good_urls variable is a list of the URLs of the images to visualize.

  5. visualize_act_maps_vgg_16.py: same functionality as visualize_act_maps_resnet_18.py (i.e., visualizing the activation maps of the top PCACE channels), but for the VGG-16 architecture instead of ResNet-18.

  6. visualizing_cam.py: produces CAM visualizations with ResNet-18, following the code from https://github.com/zhoubolei/CAM. Uses PyTorch to load the model and returns the CAM visualization of the input image (in this case, given by a URL). A sketch of the underlying computation follows the list.

  7. london_kdd_examples_slevel.csv: a .csv file containing metadata for the 300 street-level images used in our experiments. The images come from Google Street View; more information on these images and how to use them is available at https://developers.google.com/maps/documentation/streetview/overview. gsv_panoid: corresponds to the 'pano' parameter, a specific panorama ID for the image. gsv_lat, gsv_lng: correspond to the location coordinates of the image. Either gsv_panoid or the gsv_lat, gsv_lng pair can be used to access the images used in our experiments (see the Street View sketch after this list).
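
As referenced in item 1, the sketch below shows how the user-set variables of pcace_resnet_18.py fit together: a PyTorch forward hook on the chosen layer collects the activation maps, and the resulting PCACE values are sorted. The hook-based collection and all variable values are illustrative assumptions, not the repository code.

    import torch
    from torchvision import models

    NUM_IMAGES = 300                 # number of input images for PCACE
    LAYER_NAME = "layer4[1].conv2"   # follows the layerx[y].convz structure
    NUM_CHANNELS = 512               # channels in LAYER_NAME
    SIZE_X, SIZE_Y = 7, 7            # height and width of the activation maps
    SIZE = SIZE_X * SIZE_Y           # must satisfy SIZE = SIZE_X*SIZE_Y
    CLASS_IDX = 717                  # index of the class score before the softmax
    PCA_COMP = 30                    # number of PCA components

    model = models.resnet18(pretrained=True).eval()

    # Collect the activation maps of the chosen layer with a forward hook.
    acts = []
    model.layer4[1].conv2.register_forward_hook(
        lambda module, inp, out: acts.append(out.detach())
    )

    # After passing the NUM_IMAGES inputs through the model, channel c yields a
    # (NUM_IMAGES, SIZE) matrix, and the class score is logits[:, CLASS_IDX].
    # Feeding both into pcace_score (sketched above) gives one value per channel:
    #   results = [pcace_score(channel_acts(c), class_scores, PCA_COMP)
    #              for c in range(NUM_CHANNELS)]
    #   ORDER = sorted(range(NUM_CHANNELS), key=lambda c: results[c])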
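
For item 3, a minimal activation maximization sketch with tf-keras-vis is shown below. The ExtractIntermediateLayer/CategoricalScore interface reflects recent versions of the package and is an assumption about the version used in the script; the variable values are examples only.

    import tensorflow as tf
    from tf_keras_vis.activation_maximization import ActivationMaximization
    from tf_keras_vis.utils.model_modifiers import ExtractIntermediateLayer
    from tf_keras_vis.utils.scores import CategoricalScore

    LAYER_NAME = "block5_conv3"   # layer containing the channel to visualize
    FILTER_NUMBER = 7             # channel within that layer

    model = tf.keras.applications.VGG16(weights="imagenet", include_top=True)

    # Truncate the model at LAYER_NAME and synthesize an input image that
    # maximizes the activation of FILTER_NUMBER in that layer.
    am = ActivationMaximization(
        model,
        model_modifier=ExtractIntermediateLayer(LAYER_NAME),
        clone=True,
    )
    images = am(CategoricalScore(FILTER_NUMBER))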
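
For item 6, the core of the CAM computation in https://github.com/zhoubolei/CAM is a weighted sum of the last convolutional feature maps, with weights taken from the row of the final fully connected layer that corresponds to the chosen class. A minimal NumPy sketch:

    import numpy as np

    def compute_cam(feature_maps, fc_weights, class_idx):
        # feature_maps: (C, H, W) activations of the last conv layer for one image.
        # fc_weights:   (num_classes, C) weights of the final fully connected layer.
        C, H, W = feature_maps.shape
        cam = fc_weights[class_idx] @ feature_maps.reshape(C, H * W)
        cam = cam.reshape(H, W)
        cam = cam - cam.min()
        cam = cam / cam.max()   # normalize to [0, 1]
        return cam              # upsample to the image size and overlay as a heatmap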
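
For item 7, the metadata can be combined with the Street View Static API as follows. The endpoint and the pano/location parameters are the ones documented at the URL above; the API key is a placeholder.

    import pandas as pd

    df = pd.read_csv("london_kdd_examples_slevel.csv")
    row = df.iloc[0]
    base = "https://maps.googleapis.com/maps/api/streetview"

    # Fetch by panorama ID ('pano' parameter) ...
    by_pano = f"{base}?size=640x640&pano={row['gsv_panoid']}&key=YOUR_API_KEY"
    # ... or by location coordinates.
    by_loc = f"{base}?size=640x640&location={row['gsv_lat']},{row['gsv_lng']}&key=YOUR_API_KEY"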
