Atomistic Line Graph Neural Network


ALIGNN (Introduction)

The Atomistic Line Graph Neural Network (https://www.nature.com/articles/s41524-021-00650-1) introduces a new graph convolution layer that explicitly models both two-body and three-body interactions in atomistic systems.

This is achieved by composing two edge-gated graph convolution layers, the first applied to the atomistic line graph L(g) (representing triplet interactions) and the second applied to the atomistic bond graph g (representing pair interactions).

The atomistic graph g consists of a node for each atom i (with atom/node representations h_i), and one edge for each atom pair within a cutoff radius (with bond/pair representations e_ij).

The atomistic line graph L(g) represents relationships between atom triplets: it has nodes corresponding to bonds (sharing representations e_ij with those in g) and edges corresponding to bond angles (with angle/triplet representations t_ijk).

The line graph convolution updates the triplet representations and the pair representations; the direct graph convolution further updates the pair representations and the atom representations.
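
Schematically, one ALIGNN layer composes the two convolutions as below. This is a sketch of the data flow described above, not the library's actual code; edge_gated_conv is a hypothetical stand-in for the edge-gated graph convolution:

    # Sketch of one ALIGNN layer's data flow (not the library's code).
    # h: atom features, e: bond/pair features, t: angle/triplet features.

    def edge_gated_conv(graph, node_feats, edge_feats):
        # placeholder body; a real implementation gates messages on edge features
        return node_feats, edge_feats

    def alignn_layer(g, lg, h, e, t):
        # 1) convolution on the line graph L(g): bonds are its nodes, angles its
        #    edges; updates the pair (e) and triplet (t) representations
        e, t = edge_gated_conv(lg, e, t)
        # 2) convolution on the atomistic graph g: atoms are nodes, bonds are
        #    edges; further updates the atom (h) and pair (e) representations
        h, e = edge_gated_conv(g, h, e)
        return h, e, t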

ALIGNN layer schematic

Installation

First, create a conda environment. Install Miniconda from https://conda.io/miniconda.html; based on your system requirements, you'll get an installer file named something like 'Miniconda3-latest-XYZ'.

Now,

bash Miniconda3-latest-Linux-x86_64.sh (for linux)
bash Miniconda3-latest-MacOSX-x86_64.sh (for Mac)

For Windows, download the 32/64-bit Python 3.8 Miniconda installer and run it. Now, let's make a conda environment, say "version" (choose another name as you like):

conda create --name version python=3.8
source activate version

Method 1 (using setup.py):

Now, let's install the package:

git clone https://github.com/usnistgov/alignn.git
cd alignn
python setup.py develop

For using GPUs/CUDA, install dgl-cu101 or dgl-cu111 based on the CUDA version available on your system, e.g.

pip install dgl-cu111

Method 2 (using pypi):

Alternatively, ALIGNN can also be installed with pip as follows:

pip install alignn dgl-cu111
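
To confirm that the CUDA build is usable, a quick check in Python (this only assumes PyTorch, an ALIGNN dependency, is installed):

    import torch
    # True means the CUDA runtime is visible to PyTorch
    print(torch.cuda.is_available())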

Examples

Dataset

Users can keep their structure files in POSCAR, .cif, .xyz, or .pdb format in a directory. In the examples below we will use POSCAR format files. The same directory should also contain an id_prop.csv file.

In this id_prop.csv file, the filenames and corresponding target values are kept in comma-separated values (CSV) format.
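
For instance, a minimal id_prop.csv might look like the following (the first filename is the sample file shipped with the repo; the other filenames and all target values are invented for illustration):

    POSCAR-JVASP-10.vasp,1.372
    POSCAR-JVASP-107.vasp,0.000
    POSCAR-JVASP-1002.vasp,2.415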

Here is an example of training OptB88vdw bandgaps of 50 materials from the JARVIS-DFT database. The example is created using the generate_sample_data_reg.py script. Users can modify the script for more than 50 samples, or make their own dataset in this format. For a list of available datasets, see Databases.

The dataset is split 80:10:10 into training, validation, and test sets (controlled by train_ratio, val_ratio, and test_ratio). To change the split proportions and other parameters, edit the config_example.json file. If users want to train on one subset and validate/test on another, set n_train, n_val, and n_test manually in config_example.json and also set keep_data_order to true there so that random shuffling is disabled.
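
For reference, the split-related fields in config_example.json look roughly like this (a hypothetical fragment; the field names come from the text above, the values are illustrative, and all other fields are omitted):

    {
      "train_ratio": 0.8,
      "val_ratio": 0.1,
      "test_ratio": 0.1,
      "n_train": null,
      "n_val": null,
      "n_test": null,
      "keep_data_order": false,
      "batch_size": 64
    }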

A brief help guide (-h) can be obtained as follows.

!train_folder.py -h 

Regression example

Now, the model is trained as follows. Please increase the batch_size parameter to something like 32 or 64 in config_example.json for general training runs.

!train_folder.py --root_dir "alignn/examples/sample_data" --config "alignn/examples/sample_data/config_example.json" --output_dir=temp

Classification example

While the above example is for regression, the following example shows a classification task for metal/non-metal based on the above bandgap values. We transform each target into 1 or 0 based on a threshold of 0.01 eV (controlled by the classification_threshold parameter) and train a similar classification model. Currently, the script supports binary classification tasks only.

!train_folder.py --root_dir "alignn/examples/sample_data" --classification_threshold 0.01 --config "alignn/examples/sample_data/config_example.json" --output_dir=temp

Multi-output model example

While the above regression example was for single-output values, we can train multi-output regression models as well. An example is given below for training formation energy per atom, bandgap, and total energy per atom simultaneously; the format is illustrated after this paragraph. The script to generate the example data is provided in the scripts folder of sample_data_multi_prop. Another example, training electron and phonon density of states, is also provided.
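
In the multi-output case, each row of id_prop.csv simply carries several comma-separated targets instead of one. A hypothetical two-row illustration for the three properties above (all values invented):

    POSCAR-JVASP-10.vasp,-0.42,1.372,-4.81
    POSCAR-JVASP-107.vasp,-1.03,0.000,-5.20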

!train_folder.py --root_dir "alignn/examples/sample_data_multi_prop" --config "alignn/examples/sample_data/config_example.json" --output_dir=temp

Automated model training

Users can try training on multiple datasets (such as JARVIS-DFT, Materials Project, QM9_JCTC, etc.) using the example scripts in the alignn/scripts/train_*.py folder. This is done primarily to make training more automated, rather than preparing folders/CSV files by hand. These scripts automatically download datasets from the Databases in jarvis-tools and train several models. Make sure you specify your specific queuing-system details in the scripts.

Using pre-trained models

All the trained models are distributed on figshare, and the pretrained.py script can be used to run them. These models can be used to make predictions directly.

A brief help section (-h) is shown using:

!pretrained.py -h

An example of predicting formation energy per atom using a model trained on the JARVIS-DFT dataset is shown below:

!pretrained.py --model_name jv_formation_energy_peratom_alignn --file_format poscar --file_path alignn/examples/sample_data/POSCAR-JVASP-10.vasp
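
The same prediction can also be scripted from Python, which is convenient for batches of files. This is a sketch; it assumes your installed alignn version exposes a get_prediction helper in alignn.pretrained (check the pretrained.py in your install, as the helper name and signature may differ between releases):

    from jarvis.core.atoms import Atoms
    from alignn.pretrained import get_prediction  # assumption: present in recent releases

    # Load the sample structure shipped with the repo
    atoms = Atoms.from_poscar("alignn/examples/sample_data/POSCAR-JVASP-10.vasp")

    # Downloads the figshare-hosted model on first use, then predicts
    result = get_prediction(
        model_name="jv_formation_energy_peratom_alignn",
        atoms=atoms,
    )
    print(result)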

Quick start using a Google Colab notebook example

The following notebook provides an example of 1) installing the ALIGNN model, 2) training on the example data, and 3) using the pretrained models. For this example, you don't need to install the alignn package on your local computer/cluster; it only requires a Google account to log in. Learn more about Google Colab here.


Web-app

A basic web app for direct prediction is available at the JARVIS-ALIGNN app. Given an atomistic structure in POSCAR format, it predicts formation energy, total energy per atom, and bandgap using models trained on the JARVIS-DFT dataset.

[JARVIS-ALIGNN web-app screenshot]

Performances

1) On QM9 dataset

[QM9 performance table]

2) On Materials project 2018 dataset

[Materials Project 2018 performance table]

3) On JARVIS-DFT 2021 dataset (classification)

[JARVIS-DFT 2021 classification performance table]

4) On JARVIS-DFT 2021 dataset (regression)

[JARVIS-DFT 2021 regression performance tables]

5) On hMOF dataset

[hMOF performance table]

6) On qMOF dataset

MAE on electronic bandgap: 0.20 eV

7) On OMDB dataset

coming soon!

8) On HOPV dataset

coming soon!

9) On QETB dataset

coming soon!

10) On OpenCatalyst dataset

coming soon!

Useful notes (based on some of the queries we received)

  1. If you are using GPUs, make sure you have a compatible dgl-cuda version installed, for example dgl-cu101 or dgl-cu111, e.g. pip install dgl-cu111.
  2. The undirected graph and its line graph are constructed in the jarvis-tools package using jarvis.core.graphs (see the sketch after this list).
  3. While conventional '.cif' and '.pdb' files can be read using jarvis-tools, for complex files you might have to install cif2cell and pytraj respectively, i.e. pip install cif2cell==2.0.0a3 and conda install -c ambermd pytraj.
  4. Make sure you use a batch_size of 32 or 64 for large datasets, not the 2 given in the example config file; otherwise training will take much longer and performance might drop a lot.
  5. Note that train_folder.py and pretrained.py in the alignn folder are Python executable scripts, so they should work even if you don't provide their absolute paths.
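
As referenced in note 2, here is a minimal sketch of building the atomistic graph and its line graph with jarvis-tools. The call mirrors the usage shown in the Comments section below; the parameter values are commonly used ones, not mandated defaults:

    from jarvis.core.atoms import Atoms
    from jarvis.core.graphs import Graph

    # Load a structure (POSCAR here; .cif/.xyz/.pdb can also be read via jarvis-tools)
    atoms = Atoms.from_poscar("alignn/examples/sample_data/POSCAR-JVASP-10.vasp")

    # Build the undirected atomistic graph g and its line graph L(g);
    # with compute_line_graph=True a pair of DGL graphs is returned
    g, lg = Graph.atom_dgl_multigraph(
        atoms,
        cutoff=8.0,             # neighbor cutoff radius (Angstrom)
        max_neighbors=12,       # cap on neighbors per atom
        atom_features="cgcnn",  # initial node featurization
        compute_line_graph=True,
    )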

References

Please see the detailed publications list here.

How to contribute

For detailed instructions, please see the Contribution instructions.

Correspondence

Please report bugs as GitHub issues (https://github.com/usnistgov/alignn/issues) or email [email protected].

Funding support

NIST-MGI (https://www.nist.gov/mgi).

Code of conduct

Please see the Code of conduct: https://github.com/usnistgov/jarvis/blob/master/CODE_OF_CONDUCT.md

Comments
  • Jarvis data

    Thank you for your work on this efficient ML method for molecular systems.

    But I couldn't reproduce the paper.

    I found that Jarvis summarizes the QM9 datasets with normalization, and I opened an issue in jarvis.

    I tested ALIGNN, but I cannot reproduce the results for the unnormalized QM9 datasets. Only the normalized QM9 dataset provided by Jarvis reproduces the prediction values in the paper.

    opened by Nokimann 8
  • Using a Trained Model

    Dear All,

    I successfully trained a model with ALIGNN. There is a "pretrained.py" script that imports certain models according to the dataset.

    My question is: how can I use my own trained model to make predictions for a structure? Do I have to modify the "pretrained.py" script to import my model, or can I do something like this:

    python alignn/pretrained.py --model_name <my_output_pt_file> --file_format poscar --file_path /path/to/sample_file

    Thanks for the help.

    opened by MiracAydin1 8
  • OMDB Dataset Import Error

    Dear All,

    First of all, thanks for creating ALIGNN tool.

    I am trying to train a model with the OMDB dataset to obtain bandgap predictions. The dataset contains xyz files of molecules and their bandgap values. It is also included in the JARVIS documentation: https://jarvis-tools.readthedocs.io/en/master/databases.html

    I am following the README file on the ALIGNN page. I generated my xyz samples from the dataset as follows without any problem:

    from jarvis.db.figshare import data as jdata
    from jarvis.core.atoms import Atoms

    omdbset = jdata("omdb")
    prop = "bandgap"

    max_samples = 12500
    f = open("id_prop.csv", "w")
    count = 0
    for i in omdbset:
        atoms = Atoms.from_dict(i["atoms"])
        cod_id = i["cod_id"]
        xyz_name = "OMDB-" + cod_id + ".xyz"
        target = i[prop]
        if target != "na":
            atoms.write_xyz(xyz_name)
            f.write("%s,%6f\n" % (xyz_name, target))
            count += 1
        if count == max_samples:
            break
    f.close()

    I used the config.json file as you did in the QM9 training. Just the following 2 lines are different:

    "dataset": "omdb",
    "target": "bandgap",

    When I tried to run the code like this, it gave the following errors:

    python /home/fsysadmin/alignn/alignn/train_folder.py --root_dir "/home/fsysadmin/alignn/omdb_tests/prep_data" --config "/home/fsysadmin/alignn/omdb_tests/prep_data/config.json" --file_format xyz --output_dir=/home/fsysadmin/alignn/omdb_tests/results

    Using backend: pytorch
    Check
    1 validation error for TrainingConfig
    dataset
      unexpected value; permitted: 'dft_3d', 'jdft_3d-8-18-2021', 'dft_2d', 'megnet', 'megnet2', 'mp_3d_2020', 'qm9', 'qm9_dgl', 'qm9_std_jctc', 'user_data', 'oqmd_3d_no_cfid', 'edos_up', 'edos_pdos', 'qmof', 'hmof', 'hpov', 'pdbbind', 'pdbbind_core' (type=value_error.const; given=omdb; permitted=('dft_3d', 'jdft_3d-8-18-2021', 'dft_2d', 'megnet', 'megnet2', 'mp_3d_2020', 'qm9', 'qm9_dgl', 'qm9_std_jctc', 'user_data', 'oqmd_3d_no_cfid', 'edos_up', 'edos_pdos', 'qmof', 'hmof', 'hpov', 'pdbbind', 'pdbbind_core'))
    Traceback (most recent call last):
      File "/home/fsysadmin/alignn/alignn/train_folder.py", line 194, in <module>
        train_for_folder(
      File "/home/fsysadmin/alignn/alignn/train_folder.py", line 80, in train_for_folder
        config.keep_data_order = keep_data_order
    AttributeError: 'dict' object has no attribute 'keep_data_order'

    As I understand from the error output, the OMDB dataset is not included in the ALIGNN package. Giving the full path of the OMDB tar file did not solve the problem either.

    How can I include the OMDB dataset? There are some scripts in the alignn repo that import datasets, such as train_all_qm9_jctc.py. Maybe these scripts can be modified to include OMDB.

    I appreciate your help.

    Best regards,

    opened by MiracAydin1 3
  • Python API for web form?

    Is the JARVIS-ALIGNN web interface accessible via Python API? I'd like to get predictions for a few dozen POSCAR files without pasting them all in manually.

    opened by janosh 2
  • compute training size learning curves

    let's do this in cross validation to address the stability question

    strategy: 5x shuffle-split validation scheme to keep things simple. schedule runs using ray tune grid_search

    report results for jarvis-55k formation energy and band gap targets.

    • [x] jarvis-55k formation energy
    • [x] jarvis-55k band gap
    • [x] publication-quality plots
    • [x] integrate into manuscript
    opened by bdecost 2
  • Develop

    • Fix pytorch version to match torchvision that can be used in atomvision package.
    • https://github.com/pytorch/vision/blob/cb60e97a778ad6626980b67c456984e5fcf1d507/README.rst#L22
    opened by knc6 1
  • Added phonon DOS pretrained model

    Also required adding max_neighbors as a parser argument. Also involved adding an optional field for extra config parameters in the "all models" list.

    opened by RamyaGuru 1
  • Atomwise work

    checking through atomwise training loop and force computation

    I think we should probably consolidate around the idea of storing target tensors and output tensors in dictionaries. then we can generate a custom loss function and go back to using ignite, and hopefully remove some of the code complexity

    opened by bdecost 1
  • Print predict fix

    Improve the sections of train_dgl that write the last epoch's train and validation predictions to files. Also improve the write handling so that batch sizes of 1 are concatenated to files without requiring a special case.

    Works only with PanayotisManganaris:pm/alignn-patch #261 on jarvis.

    opened by PanayotisManganaris 0
  • Small mistake in train script

    Hello everyone :)

    Thanks for the great tool! I found a small error in the "alignn/train.py" script while I was trying to use it for regression.

    At line 866 we have:

    if config.n_early_stopping is not None:
        if classification:
            my_metrics = "accuracy"
        else:
            my_metrics = "mae"
    
        def default_score_fn(engine):
            score = engine.state.metrics[my_metrics]
            return score
    
        es_handler = EarlyStopping(
            patience=config.n_early_stopping,
            score_function=default_score_fn,
            trainer=trainer,
        )
    

    The problem is that, as stated in the documentation of ignite, "An improvement is considered if the score is higher." In other words, EarlyStopping only checks for increases of a benefit/reward metric to be maximized, like "accuracy". A decreasing cost metric would trigger EarlyStopping after "patience" epochs, even if the cost is still decreasing.

    To monitor the decrease of a cost function, e.g. "mae", the default_score_fn() could return the negative value of the cost metric, as shown in the ignite documentation example here: https://pytorch.org/ignite/generated/ignite.handlers.early_stopping.EarlyStopping.html
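
    For example, a minimal adjustment along those lines (a sketch, reusing the my_metrics and classification names from the quoted code above):

        def default_score_fn(engine):
            score = engine.state.metrics[my_metrics]
            # EarlyStopping treats higher scores as better, so negate cost
            # metrics like "mae"; reward metrics like "accuracy" stay as-is
            return score if classification else -score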

    All the best

    opened by sasanamari 1
  • CIF File non-iterable NoneType object Error

    Dear All,

    I train ALIGNN with cif files. To improve performance, I tried to augment my cif files with the AugLiChem library. Here are snippets from an original file and the corresponding augmented cif file:

    Original file:

    data_image0
    _chemical_formula_structural       H10C14S2N2O2
    _chemical_formula_sum              "H10 C14 S2 N2 O2"
    _cell_length_a       4.3258
    _cell_length_b       8.982
    _cell_length_c       8.4721
    _cell_angle_alpha    90
    _cell_angle_beta     90.594
    _cell_angle_gamma    90
    
    _space_group_name_H-M_alt    "P 1"
    _space_group_IT_number       1
    
    loop_
      _space_group_symop_operation_xyz
      'x, y, z'
    
    loop_
      _atom_site_type_symbol
      _atom_site_label
      _atom_site_symmetry_multiplicity
      _atom_site_fract_x
      _atom_site_fract_y
      _atom_site_fract_z
      _atom_site_occupancy
      H   H1        1.0  0.83700  0.83300  0.55800  1.0000
      H   H2        1.0  0.16300  0.33300  0.44200  1.0000
    .
    .
    .
    (continues)
    

    Augmented file:

    # generated using pymatgen
    data_H5C7SNO
    _symmetry_space_group_name_H-M   'P 1'
    _cell_length_a   4.32580000
    _cell_length_b   8.98200000
    _cell_length_c   8.47210000
    _cell_angle_alpha   90.00000000
    _cell_angle_beta   90.59400000
    _cell_angle_gamma   90.00000000
    _symmetry_Int_Tables_number   1
    _chemical_formula_structural   H5C7SNO
    _chemical_formula_sum   'H10 C14 S2 N2 O2'
    _cell_volume   329.16012678
    _cell_formula_units_Z   2
    loop_
     _symmetry_equiv_pos_site_id
     _symmetry_equiv_pos_as_xyz
      1  'x, y, z'
    loop_
     _atom_site_type_symbol
     _atom_site_label
     _atom_site_symmetry_multiplicity
     _atom_site_fract_x
     _atom_site_fract_y
     _atom_site_fract_z
     _atom_site_occupancy
      H  H0  1  0.83420779  0.82961480  0.54722879  1.0
      H  H1  1  0.15729856  0.32521300  0.43626670  1.0
    .
    .
    .
    (continues)
    
    

    ALIGNN works fine with the original cif files, but whenever I try to train it with the augmented files, I encounter the following error:

    Using backend: pytorch
    Traceback (most recent call last):
      File "/raid/apps/alignn/2021/bin/train_folder.py", line 195, in <module>
        train_for_folder(
      File "/raid/apps/alignn/2021/bin/train_folder.py", line 103, in train_for_folder
        atoms = Atoms.from_cif(file_path)
      File "/raid/apps/alignn/2021/lib/python3.8/site-packages/jarvis/core/atoms.py", line 537, in from_cif
        cif_atoms = cif_atoms.get_primitive_atoms
      File "/raid/apps/alignn/2021/lib/python3.8/site-packages/jarvis/core/atoms.py", line 710, in get_primitive_atoms
        return Spacegroup3D(self).primitive_atoms
      File "/raid/apps/alignn/2021/lib/python3.8/site-packages/jarvis/analysis/structure/spacegroup.py", line 240, in primitive_atoms
        lattice, scaled_positions, numbers = spglib.find_primitive(
    TypeError: cannot unpack non-iterable NoneType object
    

    I cannot see a problem in the augmented files. Do you have any suggestions?

    Best regards,

    opened by MiracAydin1 5
  • OMDB Dataset Performance

    Dear All,

    I am using the ALIGNN model to train on the OMDB dataset and trying to improve the results by adjusting hyperparameters. But I have not achieved good results yet (one reason is that my test molecules are a little bigger than those in the dataset).

    In the README.md file, the OMDB results say "coming soon". Do you have any training results for this? I would like to compare against your result and test the trained model on my own molecules.

    Best regards,

    opened by MiracAydin1 0
  • Order of edge-gated graph convolutions?

    Hi,

    Thanks for this great library.

    I think I noticed a slight inconsistency in the code from the paper. The paper states that the ALIGNN layer first performs the edge-gated graph convolution on the line graph to update the pair and triplet features, and then the pair features are passed as edges to the atomistic/direct graph.

    However, when I look at alignn.models.alignn.ALIGNNConv.forward, I see that the edge-gated graph convolution is actually applied on the atomistic/direct graph first, and then the updated pair features are passed as nodes to the line graph. Am I understanding this correctly?

    opened by rees-c 2
  • Angle Information

    Hi @bdecost,

    When I wrote the following code, based on yours, to print the line graph, something confusing appeared.

    In the code below, comparing (node_i, node_j) obtained from ij_pair with (nodej_, nodek_) obtained from jk_pair, I think node_j and nodej_ should have the same node number, but when I run the code there are cases where they do not match.

    From your paper, I understood that each angle in the line graph is built from node_i, node_j, and node_k. Why is the shared node_j not the same?

    ############################################
    CIF file (mp-2500.cif)

    # generated using pymatgen
    data_AlCu
    _symmetry_space_group_name_H-M   'P 1'
    _cell_length_a   6.37716407
    _cell_length_b   6.37716407
    _cell_length_c   6.92031335
    _cell_angle_alpha   57.14549155
    _cell_angle_beta   57.14549155
    _cell_angle_gamma   37.46229262
    _symmetry_Int_Tables_number   1
    _chemical_formula_structural   AlCu
    _chemical_formula_sum   'Al5 Cu5'
    _cell_volume   140.31041575
    _cell_formula_units_Z   5
    loop_
     _symmetry_equiv_pos_site_id
     _symmetry_equiv_pos_as_xyz
      1  'x, y, z'
    loop_
     _atom_site_type_symbol
     _atom_site_label
     _atom_site_symmetry_multiplicity
     _atom_site_fract_x
     _atom_site_fract_y
     _atom_site_fract_z
     _atom_site_occupancy
      Al  Al0  1  0.50000000  0.50000000  0.50000000  1
      Al  Al1  1  0.15622000  0.15622000  0.53856900  1
      Al  Al2  1  0.84378000  0.84378000  0.46143100  1
      Al  Al3  1  0.37823100  0.37823100  0.00427500  1
      Al  Al4  1  0.62176900  0.62176900  0.99572500  1
      Cu  Cu5  1  0.00000000  0.00000000  0.00000000  1
      Cu  Cu6  1  0.25794700  0.25794700  0.75941600  1
      Cu  Cu7  1  0.74205300  0.74205300  0.24058400  1
      Cu  Cu8  1  0.10895200  0.10895200  0.22813800  1
      Cu  Cu9  1  0.89104800  0.89104800  0.77186200  1

    #############################################
    Code

    import os
    import dgl
    import numpy as np
    import torch

    from jarvis.core.atoms import Atoms
    from jarvis.core.graphs import Graph

    from torch_geometric.data import InMemoryDataset, Data, Batch
    from torch_geometric.utils.convert import from_networkx

    raw_path = './mp-2500.cif'
    crystal = Atoms.from_cif(raw_path, use_cif2cell=False)
    coords = crystal.cart_coords
    graph = Graph.atom_dgl_multigraph(crystal, cutoff=8.0, atom_features='cgcnn',
                                      max_neighbors=12, compute_line_graph=True,
                                      use_canonize=False)
    for i in [0, 1]:
        # Atom-Bond Graph
        if i == 0:
            g = from_networkx(dgl.DGLGraph.to_networkx(graph[i], node_attrs=['atom_features'], edge_attrs=['r']))
            x = torch.tensor([x.detach().numpy() for x in g.atom_features])
            z = torch.tensor(crystal.atomic_numbers)
            pos = torch.tensor(coords, dtype=torch.float)
            edge_id = g.id
            edge_pos = torch.tensor([x.detach().numpy() for x in g.r])
            edge_index = g.edge_index
            edge_distance = torch.tensor(np.linalg.norm(graph[i].edata['r'], axis=1))
            ab_g = Data(x=x, z=z, pos=pos, edge_id=edge_id, edge_index=edge_index,
                        edge_distance=edge_distance, edge_pos=edge_pos,
                        idx=n)  # n is defined elsewhere in the author's script
        # Line Graph
        if i == 1:
            g = from_networkx(dgl.DGLGraph.to_networkx(graph[i], node_attrs=['r'], edge_attrs=['h']))
            x = torch.tensor(np.linalg.norm(graph[i].ndata['r'], axis=1))
            pos = torch.tensor([x.detach().numpy() for x in g.r])
            edge_id = g.id
            edge_index = g.edge_index
            edge_angle = g.h
            ba_g = Data(x=x, pos=pos, edge_id=edge_id, edge_index=edge_index,
                        edge_angle=edge_angle, idx=n)
    dataset = [ab_g, ba_g]

    # dataset[0] = Atom-Bond Graph, dataset[1] = Line Graph
    ij_pair = dataset[1].edge_index[0]
    jk_pair = dataset[1].edge_index[1]
    node_i = dataset[0].edge_index[0][ij_pair]
    node_j = dataset[0].edge_index[1][ij_pair]
    nodej_ = dataset[0].edge_index[0][jk_pair]
    nodek_ = dataset[0].edge_index[1][jk_pair]

    #################################################################
    Result

    node_i[0:10], node_j[0:10], nodej_[0:10], nodek_[0:10]

    opened by hyogyeongshin 1