You are AllSet: A Multiset Function Framework for Hypergraph Neural Networks.

Overview

AllSet

This is the repository for our paper You are AllSet: A Multiset Function Framework for Hypergraph Neural Networks. We provide all code and a subset of the datasets used in our experiments.

All code and scripts are in the src folder, and a subset of the raw data is provided in the data folder. To run the experiments, please go to the src folder first.

Environment requirements:

This repo is tested with the following environment; higher versions of PyTorch and PyG may also be compatible.

pytorch==1.4.0+cu100
torch-geometric==1.6.3
torch-scatter==2.0.4
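
To sanity-check an installation against this environment, a quick version check (our suggestion, not part of the repo):

import torch
import torch_geometric
import torch_scatter

# Tested versions; newer ones may work but are not guaranteed.
print(torch.__version__)            # expected: 1.4.0+cu100
print(torch_geometric.__version__)  # expected: 1.6.3
print(torch_scatter.__version__)    # expected: 2.0.4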

Generate datasets from raw data.

To generate the PyG or DGL datasets for training, please create the following three folders:

p2root: '../data/pyg_data/hypergraph_dataset_updated/'
p2raw: '../data/AllSet_all_raw_data/'
p2dgl_data: '../data/dgl_data_raw/'

Then unzip the raw data zip file into p2raw.
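
The folders can also be created programmatically; a minimal sketch (this helper is our illustration, not part of the repo, and simply mirrors the paths listed above):

import os

# The three directories expected by the dataset-generation code.
folders = [
    '../data/pyg_data/hypergraph_dataset_updated/',
    '../data/AllSet_all_raw_data/',
    '../data/dgl_data_raw/',
]
for path in folders:
    os.makedirs(path, exist_ok=True)  # no error if a folder already exists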

Run a single experiment with one model, using the specified learning rate (lr) and weight decay (wd):

source run_one_model.sh [dataset] [method] [MLP_hidden_dim] [Classifier_hidden_dim] [feature noise level]
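
For example (the argument values below are illustrative placeholders, not tuned settings from the paper; the feature noise level only matters for datasets without native features):

source run_one_model.sh cora AllSetTransformer 512 256 1.0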

Note that for HAN, please check the README file in ./src/DGL_HAN/.

To reproduce the results in Table 2 (with the processed raw data):

source run_all_experiments.sh [method]
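
For example, to run all experiments for one method (the method name here is illustrative; use any method supported by the script):

source run_all_experiments.sh AllSetTransformer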

Issues

If you have any problems with our code, please open an issue and @ us (or send us an email in case the notification doesn't work). Our email addresses can be found in the paper.

Citation

If you use our code or data in your work, please cite our paper:

@inproceedings{chien2022you,
  title={You are AllSet: A Multiset Function Framework for Hypergraph Neural Networks},
  author={Eli Chien and Chao Pan and Jianhao Peng and Olgica Milenkovic},
  booktitle={International Conference on Learning Representations},
  year={2022},
  url={https://openreview.net/forum?id=hpBTIv2uy_E}
}
Comments
  • Dimension size issue

    When running the HCHA experiment, I get the following error:

     self.__set_size__(size, dim, data)
      File "/opt/anaconda3/envs/wave-pyg/lib/python3.9/site-packages/torch_geometric/nn/conv/message_passing.py", line 165, in __set_size__
        raise ValueError(
    ValueError: Encountered tensor with size 4287 in dimension 0, but expected size 2708.
    
    opened by levtelyatnikov 4
  • Could you please update the code to use the latest version of PyTorch Geometric?

    First of all, thank you very much for your excellent work. I tried to run your program on my own computer, using the latest version of PyG (2.0.3) and PyTorch 1.9.0.

    As you know, there are some differences between PyG 2.0.3 and the 1.6.3 version used in your project. After my own modifications, I was able to run HyperGCN, CEGAT, CEGCN, and AllSetTransformer, but when I try to run HGNN, HNHN, or HCHA, the code always reports an error. The error occurs here:

    layers.py, line 405:

        self.flow = 'target_to_source'
        out = self.propagate(hyperedge_index, x=out, norm=D, alpha=alpha, size=(num_edges, num_nodes))

    or layers.py, line 303:

        self.flow = 'target_to_source'
        out = self.propagate(hyperedge_index, x=out, norm=data.D_v_alpha_inv, size=(num_edges, num_nodes))

    and it shows: ValueError: Encountered tensor with size 4287 in dimension 0, but expected size 2708.

    This happens with the 'cora' dataset.

    I really find it very difficult to solve this problem; could you please update your code to help me out? Thank you very much.

    good first issue 
    opened by liyongkang123 3
  • weighted hypergraph

    Model: AllSetTransformer. If I understood correctly, in the original code implementation all hyperedges are assumed to have unit weight. My hypergraph has weighted hyperedges; how do I integrate this information into the code?

    I noticed the comment below https://github.com/jianhao2016/AllSet/blob/15cbc85a9c325815ab7b8f4bf51ff8e56bac2628/src/models.py#L330. If the hyperedge weight can be considered a hyperedge feature, where exactly do I concatenate it? Is it after the final output of the PMA in the self.V2EConvs block, i.e. https://github.com/jianhao2016/AllSet/blob/15cbc85a9c325815ab7b8f4bf51ff8e56bac2628/src/layers.py#L157? (A sketch of two possible approaches appears after this comment list.)

    good first issue 
    opened by JMian 2
  • Feature dimension of Walmart and house dataset

    Hi AllSet authors,

    In the appendix of the paper, I noticed that the feature dimensions of the House and Walmart datasets are fixed to 100, but in the code the feature matrices have shape [num_nodes, num_classes] in the load_other_datasets.py file.

    I guess it is just a typo, but I would still like to double-check which dimension (100 or num_classes) was used for the results reported in the paper.

    Thank you for making the three new datasets available to the community; the paper is really solid work!

    opened by wangfuli 2
  • Can you explain better what Add_Self_Loops and expand_edge_index do exactly?

    Hi,

    I'm trying to understand what your code does. Let's say we have a toy dataset like this:

        V: v0  v0  v1  v1  v2  v2  v3
        E: e1  e2  e2  e1  e2  e3  e3

    which translates into this edge_index: [[0, 0, 1, 1, 2, 2, 3], [4, 5, 5, 4, 5, 6, 6]].

        import numpy as np
        import torch
        from torch_geometric.data import Data
        from torch_geometric.utils import coalesce

        edge_index = torch.tensor([[0, 0, 1, 1, 2, 2, 3],
                                   [4, 5, 5, 4, 5, 6, 6]], dtype=torch.long)
        edge_index = coalesce(edge_index)

        ei = np.array(edge_index)
        num_nodes = len(np.unique(ei[0]))
        num_hyperedges = len(np.unique(ei[1]))

        data = Data(edge_index=edge_index, n_x=num_nodes, num_hyperedges=num_hyperedges)

    Now, if I use your Add_Self_Loops code, I obtain this edge_index:

        tensor([[ 0,  0,  0,  1,  1,  1,  2,  2,  2,  3,  3],
                [ 4,  5,  7,  4,  5,  8,  5,  6,  9,  6, 10]])

    which is a little bit confusing. Shouldn't the same node be added to an existing hyperedge? What exactly is this code doing, and why? (See the reconstruction sketched after this comment list.)

    Same question for expand_edge_index.

        tensor([[ 0,  0,  0,  1,  1,  1,  2,  2,  2,  3,  3],
                [ 5,  7,  8,  4,  6,  8,  6,  7, 10,  9, 11]])

    What kind of expansion are you implementing? What is the difference between the two implementations, and why are they needed for your AllDeepSets and AllSetTransformer models?

    opened by giuliacassara 2
  • How to set batch size when training AllSet?

    Hi, this is nice work. I read the code at:

        https://github.com/jianhao2016/AllSet/blob/0d0e399a9168829fa898dd56f3d32bee36953b04/src/train.py#L325-L327
        https://github.com/jianhao2016/AllSet/blob/0d0e399a9168829fa898dd56f3d32bee36953b04/src/train.py#L345
        https://github.com/jianhao2016/AllSet/blob/0d0e399a9168829fa898dd56f3d32bee36953b04/src/train.py#L478

    It seems that you train the model with all the data in a single batch, so my question is how to set the batch size when training the AllSet model. Thank you.

    opened by sakuraiiiii 2
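
Regarding the weighted-hypergraph question above: a minimal sketch of two natural ways to inject hyperedge weights after the V2E aggregation (our illustration, not the authors' implementation; the function name and tensor shapes are hypothetical):

import torch

def inject_hyperedge_weights(edge_emb, edge_weight):
    # edge_emb:    [num_hyperedges, d] output of the V2E aggregation (e.g., the PMA block)
    # edge_weight: [num_hyperedges]    positive hyperedge weights
    # Option 1: scale each hyperedge embedding by its weight.
    scaled = edge_emb * edge_weight.unsqueeze(-1)
    # Option 2: append the weight as an extra feature column, which is closest
    # to "concatenating the weight as a hyperedge feature"; the following E2V
    # layer must then expect d + 1 input features.
    concatenated = torch.cat([edge_emb, edge_weight.unsqueeze(-1)], dim=-1)
    return scaled, concatenated

Whether either variant matches the authors' intent is best confirmed in the issue thread.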
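
For the Add_Self_Loops question, the printed output suggests that a hypergraph "self-loop" is a fresh singleton hyperedge {v} attached to each node v (new hyperedge ids 7, 8, 9, 10), rather than a (v, v) edge as in ordinary graphs. A minimal sketch that reproduces the observed behavior under this reading (our reconstruction, not the repo's exact code):

import torch

def add_hypergraph_self_loops(edge_index, num_nodes, num_hyperedges):
    # Hyperedge ids are offset by num_nodes (as in the toy example above),
    # so the first fresh hyperedge id is num_nodes + num_hyperedges.
    start = num_nodes + num_hyperedges
    nodes = torch.arange(num_nodes, dtype=edge_index.dtype)
    new_hyperedges = torch.arange(start, start + num_nodes, dtype=edge_index.dtype)
    # One new singleton hyperedge per node: node i joins hyperedge start + i.
    self_loops = torch.stack([nodes, new_hyperedges])
    return torch.cat([edge_index, self_loops], dim=1)

With the toy edge_index above (4 nodes, hyperedges 4-6), this appends the pairs (0, 7), (1, 8), (2, 9), (3, 10), matching the output shown in the issue up to reordering.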

Owner: Jianhao