Official Code for Graph Modularity: Towards Understanding the Cross-Layer Transition of Feature Representations in Deep Neural Networks

Overview

Dynamic-Graphs-Construction


We give an example to illustrate the dynamic graphs: we use a ResNet18 pretrained on CIFAR-10 (top-1 accuracy: 94.78%) to extract feature representations, and set N = 50, k = 3 for the dynamic graph construction. The result is shown in the GIF below. Note that nodes of the same color represent samples of the same class, and the darker an edge, the higher the similarity between its two endpoint nodes.

[GIF: evolution of the dynamic graph across layers]

The modularity curve of this dynamic graph:

[Figure: modularity curve of the dynamic graph]
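To make the construction concrete, here is a minimal sketch, assuming per-layer feature matrices have already been extracted (e.g., with forward hooks on the pretrained ResNet18). The helper names, the use of networkx, and the cosine-similarity k-NN rule are illustrative assumptions on our part, not necessarily this repo's actual API; the communities are fixed to the ground-truth classes, so modularity tracks how strongly same-class samples cluster at each layer.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import modularity

def build_knn_graph(features, k=3):
    """Undirected k-NN graph over samples; edge weights are cosine
    similarities between feature vectors (illustrative sketch)."""
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = feats @ feats.T                  # pairwise cosine similarity
    np.fill_diagonal(sim, -np.inf)         # forbid self-edges
    G = nx.Graph()
    G.add_nodes_from(range(len(feats)))
    for i in range(len(feats)):
        for j in np.argsort(sim[i])[-k:]:  # the k most similar nodes to i
            G.add_edge(i, int(j), weight=float(sim[i, j]))
    return G

def modularity_curve(layer_feats, labels, k=3):
    """Modularity of each layer's graph, with the ground-truth classes
    used as the community partition."""
    communities = [set(np.flatnonzero(labels == c)) for c in np.unique(labels)]
    return [modularity(build_knn_graph(f, k), communities, weight="weight")
            for f in layer_feats]
```

Plotting `modularity_curve(layer_feats, labels, k=3)` against layer depth yields the kind of curve shown above; reading N = 50 as the number of samples drawn per class is our interpretation of the setting, not something stated here.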

You might also like...
[CIKM 2019] Code and dataset for "Fi-GNN: Modeling Feature Interactions via Graph Neural Networks for CTR Prediction"

FiGNN for CTR prediction The code and data for our paper in CIKM2019: Fi-GNN: Modeling Feature Interactions via Graph Neural Networks for CTR Predicti

Towards Part-Based Understanding of RGB-D Scans

Towards Part-Based Understanding of RGB-D Scans (CVPR 2021) We propose the task of part-based scene understanding of real-world 3D environments: from

Towards Long-Form Video Understanding

Towards Long-Form Video Understanding Chao-Yuan Wu, Philipp Krähenbühl, CVPR 2021 [Paper] [Project Page] [Dataset] Citation @inproceedings{lvu2021,

[ICML 2021] Towards Understanding and Mitigating Social Biases in Language Models

Towards Understanding and Mitigating Social Biases in Language Models This repo contains code and data for evaluating and mitigating bias from generat

The source code for Generating Training Data with Language Models: Towards Zero-Shot Language Understanding.

SuperGen The source code for Generating Training Data with Language Models: Towards Zero-Shot Language Understanding. Requirements Before running, you

[CVPR2022] Bridge-Prompt: Towards Ordinal Action Understanding in Instructional Videos

Bridge-Prompt: Towards Ordinal Action Understanding in Instructional Videos Created by Muheng Li, Lei Chen, Yueqi Duan, Zhilan Hu, Jianjiang Feng, Jie

Pytorch implementation of "Forward Thinking: Building and Training Neural Networks One Layer at a Time"

forward-thinking-pytorch Pytorch implementation of Forward Thinking: Building and Training Neural Networks One Layer at a Time Requirements Python 2.7

OptNet: Differentiable Optimization as a Layer in Neural Networks

OptNet: Differentiable Optimization as a Layer in Neural Networks This repository is by Brandon Amos and J. Zico Kolter and contains the PyTorch sourc

A PyTorch implementation of Radio Transformer Networks from the paper "An Introduction to Deep Learning for the Physical Layer".

An Introduction to Deep Learning for the Physical Layer A usable PyTorch implementation of the noisy autoencoder infrastructure in the paper "An Intr

Comments
  • Why is the computed modularity negative?

    Hello! I applied your method to my own dataset with resnet18 to compute the degree of modularity, extracting the features after the relu layers, and found that the per-layer modularity comes out negative. What could be the reason? I then tried extracting the features after the bn layers instead, and there are still many negative values. Is this normal?

    Below are the results of several runs; len(modularity) = 17 (layers). modularity: [-0.019196462265709934, -0.02271897533277316, -0.023171874127450653, -0.022470118258014187, -0.01645792102456704, -0.018011648884517804, -0.010016131091073174, -0.02012415433281817, -0.00993257811055065, -0.008657139629071227, -0.010918394908571914, -0.009388655522269241, -0.002584169673622568, -0.0005555165491241826, -0.012236681334620234, -0.013657490194399866, -0.001615048060680923]

    modularity: [-0.009842042437187018, -0.009372720350310716, -0.012085664778406252, 0.005141147857948121, 0.010048463577370794, 0.00018423969515977273, -0.0030635190699569922, -0.00026638565957969734, -0.010287994904722021, -0.012399507863719801, -0.008366178789636743, -0.0032961831139690576, -0.006384714434207794, 0.0067996706828107704, -0.005832453918450487, 0.009217809629457067, -0.006108037287940963]

    [-0.008759199831022697, -0.036767948297815836, -0.029265369915441954, -0.025968150236407347, -0.025383781534253247, -0.019468812859621223, -0.012428653247791372, -0.0227319975256084, -0.028128953521547748, -0.02349360946111945, -0.024317913360711686, -0.02071382370153553, -0.02179457870885567, -0.015504702593439782, -0.0127327671915606, -0.005047404214200173, -0.01602515198867172]

    opened by Z-Yh-June 3
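For context (general background on Newman modularity, not an answer from the repo authors): modularity is defined as Q = (1/2m) Σ_ij [A_ij − k_i k_j/(2m)] δ(c_i, c_j), i.e., the observed intra-community edge weight minus its expectation under a degree-preserving null model. Q is therefore negative whenever same-community nodes are connected less than chance predicts, which can legitimately happen when the communities are fixed to class labels that the feature graph does not yet separate. A tiny self-contained demonstration with networkx (toy graph and class assignment made up for illustration):

```python
import networkx as nx
from networkx.algorithms.community import modularity

# Toy 4-node graph in which every edge crosses the class boundary, so the
# class-based partition holds less intra-community weight than the
# degree-preserving null model expects, and Q comes out negative.
G = nx.Graph([(0, 2), (0, 3), (1, 2), (1, 3)])
communities = [{0, 1}, {2, 3}]      # nodes 0, 1 = class A; nodes 2, 3 = class B
print(modularity(G, communities))   # -0.5
```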
A static analysis library for computing graph representations of Python programs suitable for use with graph neural networks.

python_graphs This package is for computing graph representations of Python programs for machine learning applications. It includes the following modu

Google Research 258 Dec 29, 2022
Code for the CVPR 2021 paper: Understanding Failures of Deep Networks via Robust Feature Extraction

Welcome to Barlow Barlow is a tool for identifying the failure modes for a given neural network. To achieve this, Barlow first creates a group of imag

Sahil Singla 33 Dec 5, 2022
An Inverse Kinematics library aiming at performance and modularity

IKPy Demo Live demos of what IKPy can do (click on the image below to see the video): Also, a presentation of IKPy: Presentation. Features With IKPy,

Pierre Manceron 481 Jan 2, 2023
Code for Understanding Pooling in Graph Neural Networks

Select, Reduce, Connect This repository contains the code used for the experiments of: "Understanding Pooling in Graph Neural Networks" Setup Install

Daniele Grattarola 37 Dec 13, 2022
Source code of NeurIPS 2021 paper "Be Confident! Towards Trustworthy Graph Neural Networks via Confidence Calibration"

CaGCN This repo is for source code of NeurIPS 2021 paper "Be Confident! Towards Trustworthy Graph Neural Networks via Confidence Calibration". Paper L

null 6 Dec 19, 2022
Pytorch code for "State-only Imitation with Transition Dynamics Mismatch" (ICLR 2020)

This repo contains code for our paper State-only Imitation with Transition Dynamics Mismatch published at ICLR 2020. The code heavily uses the RL mach

null 20 Sep 8, 2022
TART - A PyTorch implementation for Transition Matrix Representation of Trees with Transposed Convolutions

TART This project is a PyTorch implementation for Transition Matrix Representati

Lee Sael 2 Jan 19, 2022
Towards Open-World Feature Extrapolation: An Inductive Graph Learning Approach

This repository holds the implementation for paper Towards Open-World Feature Extrapolation: An Inductive Graph Learning Approach Download our preproc

Qitian Wu 42 Dec 27, 2022
Understanding and Improving Encoder Layer Fusion in Sequence-to-Sequence Learning (ICLR 2021)

Understanding and Improving Encoder Layer Fusion in Sequence-to-Sequence Learning (ICLR 2021) Citation Please cite as: @inproceedings{liu2020understan

Sunbow Liu 22 Nov 25, 2022