Junction Tree Variational Autoencoder for Molecular Graph Generation (ICML 2018)

Overview

Official implementation of our Junction Tree Variational Autoencoder https://arxiv.org/abs/1802.04364

Update

We have made architecture improvements to JT-VAE. We recommend checking out our new repository at https://github.com/wengong-jin/hgraph2graph/, which contains a molecular language model pre-trained on ChEMBL (1.8 million compounds) and scripts for property-guided molecule generation. All scripts there are written in Python 3.7 and PyTorch.

Accelerated Version

We have accelerated our code! The new code is in fast_jtnn/, and the VAE training script is in fast_molvae/. Please refer to fast_molvae/README.md for details.

Requirements

  • Linux (we have only tested on Ubuntu)
  • RDKit (version >= 2017.09)
  • Python (version == 2.7)
  • PyTorch (version >= 0.2)

To install RDKit, please follow the instructions here http://www.rdkit.org/docs/Install.html

We highly recommend using conda for package management.
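
For example, an environment matching the versions above might be created like this (a hedged sketch; channel availability may have changed since release):

    conda create -n jtnn python=2.7
    source activate jtnn
    conda install -c rdkit rdkit        # RDKit >= 2017.09
    conda install -c pytorch pytorch    # pick a build compatible with your CUDA version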

Quick Start

The following directories contain the most up-to-date implementation of our model:

  • fast_jtnn/ contains the model implementation.
  • fast_molvae/ contains the VAE training code. Please refer to fast_molvae/README.md for details.

The following directories provide scripts for the experiments in our original ICML paper:

  • bo/ includes scripts for Bayesian optimization experiments. Please read bo/README.md for details.
  • molvae/ includes scripts for training our VAE model only. Please read molvae/README.md for details.
  • molopt/ includes scripts for jointly training our VAE and property predictors. Please read molopt/README.md for details.
  • jtnn/ contains the original implementation of the model.

Contact

Wengong Jin ([email protected])

Comments
  • Can't train a model with the same reconstruct score as your model.

    Hello,

    I've been trying to train a model with the same instructions you provide in the README of molvae, but I have not been able to get the same reconstruction score as you. First I used your code as-is and had some problems with this part of vaetrain.py:

    if (it + 1) % 1500 == 0:  # fast annealing
        scheduler.step()
        print "learning rate: %.6f" % scheduler.get_lr()[0]
        torch.save(model.state_dict(), opts.save_path + "/model.iter-%d-%d" % (epoch, it + 1))
        beta = max(1.0, beta + anneal)
    

    When the code first enters this if, the reconstruction numbers in the logs go crazy, and the final model I obtained had a reconstruction score of 0. After reading and understanding more of your code, I thought the max in beta = max(1.0, beta + anneal) might be meant to be a min, so I changed that line to beta = min(1.0, beta + anneal).

    With this change I trained the whole model and the logs looked fine, but when I tested the new model with reconstruct.py it gives about a 0.52 reconstruction score, far from the 0.77 reconstruction score of the model MPNVAE-h450-L56-d3-beta0.005.

    I would like to know whether the change from max to min is correct, and why, with or without this change, I am not able to train a model following your instructions. Is there something else I need to do? Did you train your model with the same instructions the README gives?

    Thank you. I'm going crazy because this doesn't make sense at all; I would be very grateful for any idea of what's happening.
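
    For reference, the usual monotonic KL-annealing schedule ramps beta up to 1.0 and then holds it there, which matches the min fix above. A minimal, self-contained sketch (illustrative values, not the repo's exact training loop):

        # beta is phased in gradually; with max(1.0, beta + anneal) it would
        # jump straight to 1.0 on the first update, defeating the warm-up.
        beta, anneal, total_iters = 0.0, 0.005, 30000
        schedule = []
        for it in range(total_iters):
            if (it + 1) % 1500 == 0:
                beta = min(1.0, beta + anneal)  # cap at 1.0: min, not max
            schedule.append(beta)
        print(schedule[::1500])  # rises by 0.005 per step toward 1.0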

    opened by manurubo 7
  • Index Error in MPN.py

    Hey, when running your model with my own dataset I'm getting the following error:

    Traceback (most recent call last):
      File "pretrain.py", line 69, in <module>
        loss, kl_div, wacc, tacc, sacc, dacc, pacc = model(batch, beta=0)
      File "/home/.conda/envs/jtnn_pytorch/lib/python2.7/site-packages/torch/nn/modules/module.py", line 224, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/molopt/jtnn/jtprop_vae.py", line 76, in forward
        tree_mess, tree_vec, mol_vec = self.encode(mol_batch)
      File "/home/molopt/jtnn/jtprop_vae.py", line 60, in encode
        mol_vec = self.mpn(mol2graph(smiles_batch))
      File "/home/molopt/jtnn/mpn.py", line 75, in mol2graph
        agraph[a,i] = b
    IndexError: index 6 is out of range for dimension 0 (of size 6)
    

    PS: I changed MAX_NB in jtnn_dec and jtnn_enc to 32.
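
    For reference, the index error is consistent with an atom having more bonded neighbors than the agraph tensor allows. A hedged diagnostic, assuming one SMILES per line in your file, is to measure the largest atom degree and confirm MAX_NB in mpn.py is at least that value:

        from rdkit import Chem

        def max_atom_degree(smiles_list):
            """Largest number of bonded neighbors over all atoms in the dataset."""
            worst = 0
            for smi in smiles_list:
                mol = Chem.MolFromSmiles(smi)
                if mol is None or mol.GetNumAtoms() == 0:
                    continue
                worst = max(worst, max(a.GetDegree() for a in mol.GetAtoms()))
            return worst

        smiles = [line.split()[0] for line in open("train.txt")]  # hypothetical filename
        print(max_atom_degree(smiles))  # MAX_NB in mpn.py must be >= this value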

    opened by maxbernhard 3
  • Generating Vocab.txt

    Hi, I'm using my own training data to run Bayesian optimization, but I'm not sure how to generate my vocab file from my training file. Can somebody help me with this?

    Thanks
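
    For reference, the molopt/pretrain.py issue further down generates a vocabulary with python ../jtnn/mol_tree.py < my_dataset.txt. A hedged sketch of the equivalent computation, assuming the MolTree API from jtnn/:

        import sys
        from mol_tree import MolTree  # tree decomposition from jtnn/

        clusters = set()
        for line in sys.stdin:
            smiles = line.split()[0]
            for node in MolTree(smiles).nodes:
                clusters.add(node.smiles)
        for smiles in clusters:
            print(smiles)  # one cluster SMILES per line -> vocab.txt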

    opened by NamanChuriwala 3
  • No file called model.4

    Dear Wengong: You use the following command to run gen_latent.py in the bo folder:

        python gen_latent.py --data ../data/train.txt --vocab ../data/vocab.txt \
            --hidden 450 --depth 3 --latent 56 \
            --model ../molvae/MPNVAE-h450-L56-d3-beta0.005/model.4

    But the system throws the error No such file or directory: '../molvae/MPNVAE-h450-L56-d3-beta0.005/model.4', as the file in molvae is named model.iter-4 instead of model.4.

    I tried passing model-iter.4 instead of model.4 but it still shows an error. Could you help resolve this? Thanks.
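
    Note the separators in the attempted fix: per the error above, the checkpoint is named model.iter-4 (dot before iter, dash before the number), while model-iter.4 transposes them. Assuming that naming, the flag would be --model ../molvae/MPNVAE-h450-L56-d3-beta0.005/model.iter-4.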

    opened by NamanChuriwala 3
  • vocab size

    Hi, I want to run the sample file on the provided model under the fast_molvae directory, but the vocabulary size in the moses directory does not seem to match the provided model, and I got the error below. The size of the learned embedding in the current model is 800, which indicates it was built on a vocabulary of size 800.

    I'm not sure whether the problem is caused by the vocabulary, but if it is, could you provide the matching vocab file?

    Thanks.

    le.py\", line 845, in load_state_dict
        self.__class__.__name__, \"\n\t\".join(error_msgs)))
    RuntimeError: Error(s) in loading state_dict for JTNNVAE:
            size mismatch for jtnn.embedding.weight: copying a param with shape torch.Size([531, 450]) from checkpoint, the shape in current model is torch.Size([800, 450]).
            size mismatch for decoder.embedding.weight: copying a param with shape torch.Size([531, 450]) from checkpoint, the shape in current model is torch.Size([800, 450]).
            size mismatch for decoder.W_o.bias: copying a param with shape torch.Size([531]) from checkpoint, the shape in current model is torch.Size([800]).
            size mismatch for decoder.W_o.weight: copying a param with shape torch.Size([531, 450]) from checkpoint, the shape in current model is torch.Size([800, 450]).
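
    A hedged way to check which vocabulary size a checkpoint expects is to inspect its embedding shape directly (checkpoint path hypothetical):

        import torch

        state = torch.load("moses-h450z56/model.iter-400000", map_location="cpu")  # hypothetical path
        print(state["jtnn.embedding.weight"].shape[0])  # vocabulary size of the checkpoint (531 here)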
    
    opened by ziqi92 2
  • CUDA out of memory?

    Hi Wengong,

    Thank you for sharing this wonderful project.

    I have an issue at the second training step in fast_molvae. Because the MOSES dataset is too large for my hardware to train on, I chose to train and test on the ZINC dataset. Deriving the vocabulary and Step 1 of training work perfectly, but some "CUDA out of memory" errors occur at Step 2. The major problem this causes is that there is no model output in the vae_model folder (it is actually empty after my Step 2), which makes it impossible for me to do the testing.

    My testing platform: Ubuntu 18.04 / Python 2.7 / CUDA 9.1.85 / PyTorch 1.0.1.post2 / RDKit 2018.09.2; GPU: NVIDIA Quadro P620.

    Thanks!
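
    A common mitigation for such out-of-memory errors, assuming the training script exposes a batch-size option (check the argparse flags in fast_molvae/vae_train.py; the flag name here is an assumption, not a documented option), is to shrink the batch, e.g. --batch_size 16. If the batch size is hard-coded, lower it in the source instead.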

    opened by jiuwuzhi 2
  • AttributeError: 'MolTreeNode' object has no attribute 'cid'

    Hi! This error occurred when I ran sample.py in fast_molvae without any changes:

    Traceback (most recent call last):
      File "/fast_molvae/sample.py", line 39, in <module>
        print model.sample_prior()
      File "/jtnn_vae.py", line 119, in sample_prior
        return self.decode(z_tree, z_mol, prob_decode)
      File "/fast_jtnn/jtnn_vae.py", line 200, in decode
        jtenc_holder, mess_dict = JTNNEncoder.tensorize_nodes(pred_nodes, scope)
      File "/fast_jtnn/jtnn_enc.py", line 73, in tensorize_nodes
        cid = y.cid[1] if y.cid[0] == x.idx else 0
    AttributeError: 'MolTreeNode' object has no attribute 'cid'

    Process finished with exit code 1

    Maybe something is wrong with the code?

    Thank you!

    opened by lihan1997 2
  • Exception: Explicit valence for atom # 4 C, 5, is greater

    I was trying to get the fast_molvae code to run on my own dataset, and I changed the preprocess.py code a little to run on my parallel worker setup. In the process I ended up removing the RDKit logger that's set to critical, and I noticed a bunch of these exceptions:

    [23:37:03] Explicit valence for atom # 1 C, 5, is greater than permitted
    [23:37:03] Explicit valence for atom # 3 C, 5, is greater than permitted
    [23:37:03] Explicit valence for atom # 1 C, 6, is greater than permitted
    [23:37:03] Explicit valence for atom # 1 C, 5, is greater than permitted
    [23:37:03] Explicit valence for atom # 1 C, 6, is greater than permitted
    [23:37:03] Explicit valence for atom # 1 C, 6, is greater than permitted
    [23:37:03] Explicit valence for atom # 1 C, 5, is greater than permitted
    [23:37:03] Explicit valence for atom # 1 C, 6, is greater than permitted
    [23:37:03] Explicit valence for atom # 3 C, 5, is greater than permitted
    [23:37:03] Explicit valence for atom # 1 C, 5, is greater than permitted
    [23:37:03] Explicit valence for atom # 1 C, 7, is greater than permitted
    [23:37:03] Explicit valence for atom # 3 C, 5, is greater than permitted
    [23:37:03] Explicit valence for atom # 1 C, 5, is greater than permitted
    [23:37:03] Explicit valence for atom # 1 C, 7, is greater than permitted
    [23:37:03] Explicit valence for atom # 1 C, 5, is greater than permitted
    [23:37:03] Explicit valence for atom # 1 C, 6, is greater than permitted
    [23:37:03] Explicit valence for atom # 1 C, 5, is greater than permitted
    [23:37:03] Explicit valence for atom # 4 C, 5, is greater than permitted

    I think they are coming from get_clique_mol -> sanitize() in chemutils.py. I am not sure if I am missing something, but is this supposed to happen? I repeated the same thing with your training set and the same logs appear. Also, next to the return statement in the sanitize method there is a comment: #We assume this is not None. So are we supposed to "clean" our data to exclude the SMILES for which this happens?
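
    For reference, these messages are emitted by RDKit while the decomposition probes candidate fragments, and the repo silences them by setting the RDKit logger to critical (as noted above), so by themselves they need not mean the inputs are bad. If you do want to pre-clean a dataset, a minimal sanitization filter might look like this (a hedged sketch, not part of the repo):

        from rdkit import Chem

        def sanitize_filter(smiles_list):
            """Keep only SMILES that RDKit can parse and sanitize."""
            kept = []
            for smi in smiles_list:
                mol = Chem.MolFromSmiles(smi)  # returns None on sanitization failure
                if mol is not None:
                    kept.append(Chem.MolToSmiles(mol))  # canonicalize
            return kept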

    opened by ManvithaPonnapati 2
  • RuntimeError in molopt/pretrain.py

    Dear Wengong Jin

    I'd like to ask for your help with molopt while running pretrain.py. I have successfully run all of the examples in molopt with data/train.txt, data/vocab.txt, and data/train.logP-SA.

    However, a RuntimeError occurs with my own training dataset, a vocab generated with python ../jtnn/mol_tree.py < my_dataset.txt, and my own logP property file.

    It seems to be a wrong dimension during node aggregation. What is your opinion on this issue?

    Best Regards, Minkyu Ha

    (My environment is the same as yours: Python 2.7, CUDA 8.0, PyTorch 0.3.1.)

    Model #Params: 4271K

    Traceback (most recent call last):
      File "pretrain.py", line 69, in <module>
        loss, kl_div, wacc, tacc, sacc, dacc, pacc = model(batch, beta=0)
      File "/home/minkyuha/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 357, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/minkyuha/new-jtnn/icml18-jtnn/jtnn/jtprop_vae.py", line 76, in forward
        tree_mess, tree_vec, mol_vec = self.encode(mol_batch)
      File "/home/minkyuha/new-jtnn/icml18-jtnn/jtnn/jtprop_vae.py", line 57, in encode
        tree_mess, tree_vec = self.jtnn(root_batch)
      File "/home/minkyuha/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 357, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/minkyuha/new-jtnn/icml18-jtnn/jtnn/jtnn_enc.py", line 62, in forward
        cur_h_nei = torch.cat(cur_h_nei, dim=0).view(-1, MAX_NB, self.hidden_size)
    RuntimeError: invalid argument 2: size '[-1 x 8 x 420]' is invalid for input with 144900 elements at /opt/conda/conda-bld/pytorch_1523240155148/work/torch/lib/TH/THStorage.c:37
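
    For what it's worth, the numbers in the error are consistent with a junction-tree node exceeding MAX_NB: 144900 elements / 420 hidden units = 345 neighbor messages, which is not a multiple of MAX_NB = 8, so the padded neighbor lists cannot be reshaped. A hedged diagnostic, assuming the MolTree API from jtnn/:

        from mol_tree import MolTree  # tree decomposition from jtnn/

        def max_tree_degree(smiles_list):
            """Largest junction-tree node degree; MAX_NB must be >= this."""
            worst = 0
            for smi in smiles_list:
                tree = MolTree(smi)
                worst = max(worst, max([len(n.neighbors) for n in tree.nodes] or [0]))
            return worst

        print(max_tree_degree([line.split()[0] for line in open("my_dataset.txt")]))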

    opened by MinkyuHa 2
  • 'module' object has no attribute 'MatrixInversePSD'

    While running the run_bo.py file in Bayesian optimization, I get the following error:

      File "run_bo.py", line 87, in <module>
        sgp.train_via_ADAM(X_train, 0 * X_train, y_train, X_test, X_test * 0, y_test, minibatch_size = 10 * M, max_iterations = 100, learning_rate = 0.001)
      File "/home/naman_churiwala_quantiphi_com/icml18-jtnn/bo/sparse_gp.py", line 213, in train_via_ADAM
        e = self.getEnergy()
      File "/home/naman_churiwala_quantiphi_com/icml18-jtnn/bo/sparse_gp.py", line 102, in getEnergy
        self.sparse_gp.compute_output()
      File "/home/naman_churiwala_quantiphi_com/icml18-jtnn/bo/sparse_gp_theano_internal.py", line 77, in compute_output
        self.KzzInv = T.nlinalg.MatrixInversePSD()(self.Kzz)
    AttributeError: 'module' object has no attribute 'MatrixInversePSD'

    Is it possible that the attribute is MatrixInverse instead of MatrixInversePSD?

    I have run run_bo.py previously without encountering the above error, but this seems to be a problem now. Can you help me with this? Thanks

    opened by NamanChuriwala 2
  • Small molecule error

    Hi there,

    When I have a molecule in training with a single-character SMILES (such as "C" for methane), the encoder fails with IndexError: list index out of range on the following:

    def encode(self, mol_batch):
        set_batch_nodeID(mol_batch, self.vocab)
        root_batch = [mol_tree.nodes[0] for mol_tree in mol_batch]
    

    because mol_tree.nodes is empty. From my understanding, methane should be a single-node graph. "C" is present in my vocabulary, and this also happens for other single-character SMILES.

    I removed these molecules from my training set since they aren't helpful for my purposes anyway, but I wasn't sure whether this is a bug.
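
    If you'd rather filter programmatically than by hand, a hedged pre-check (assuming the MolTree API from jtnn/) is to drop inputs whose junction tree comes back empty:

        from mol_tree import MolTree  # tree decomposition from jtnn/

        smiles_list = [line.strip() for line in open("train.txt")]  # hypothetical filename
        smiles_list = [s for s in smiles_list if len(MolTree(s).nodes) > 0]  # drop empty trees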

    opened by bslakman 2
  • How are the logp-sa scores in the training data computed?

    Hi there! I have a question regarding the logP-SA score computation. Using the simple script below, I can reproduce the logP-SA scores in the test data data/zinc/opt.test.logP-SA but NOT in the training data data/zinc/train.logP-SA. Suggestions, advice, and explanations regarding this mismatch are appreciated. Thanks!

    from rdkit import Chem
    from rdkit.Chem import Descriptors
    from molopt import sascorer
    
    # the smiles below is the first one in `data/zinc/opt.test.logP-SA`
    # the score computed below (-2.5248038322) matches that in the file (-2.5248038322)
    smiles = 'CC(C)OC(=O)c1cccc(-c2ccc([C@H]3[NH2+][C@H](C(=O)[O-])C(C)(C)S3)o2)c1'
    score = Descriptors.MolLogP(Chem.MolFromSmiles(smiles)) - sascorer.calculateScore(Chem.MolFromSmiles(smiles))
    
    # the smiles below is the first one in `data/zinc/train.logP-SA`
    # the score computed below (3.412092566642019) DOES NOT match that in the file (2.878620321486616174)
    smiles = 'CCCCCCC1=NN2C(=N)/C(=C\c3cc(C)n(-c4ccc(C)cc4C)c3C)C(=O)N=C2S1'
    score = Descriptors.MolLogP(Chem.MolFromSmiles(smiles)) - sascorer.calculateScore(Chem.MolFromSmiles(smiles))
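
    # One unverified possibility: the BO scripts in bo/ use a penalized logP that
    # also subtracts a long-cycle penalty and standardizes each component over the
    # training set; either would shift the raw logP - SA value (standardization
    # would explain the non-integer offset above). A sketch of the cycle-penalized
    # variant to test against train.logP-SA -- an assumption, not a confirmed answer.
    import networkx as nx
    from rdkit.Chem import rdmolops

    def penalized_logp(smiles):
        mol = Chem.MolFromSmiles(smiles)
        cycles = nx.cycle_basis(nx.Graph(rdmolops.GetAdjacencyMatrix(mol)))
        longest = max([len(c) for c in cycles]) if cycles else 0
        cycle_penalty = max(longest - 6, 0)  # penalize rings larger than six atoms
        return (Descriptors.MolLogP(mol)
                - sascorer.calculateScore(mol)
                - cycle_penalty)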
    
    opened by anonymous10101010101 0
  • importError: cannot import name 'flatten_tensor' from 'nnutils'

    While running preprocess.py with python3 preprocess.py --train ../data/moses/train.txt --split 100 --jobs 16:

    Traceback (most recent call last):
      File "preprocess.py", line 10, in <module>
        from fast_jtnn import *
      File "/home/shreya/Desktop/jtnn/icml18-jtnn-master/fast_molvae/fast_jtnn/__init__.py", line 2, in <module>
        from jtnn_vae import JTNNVAE
      File "/home/shreya/Desktop/jtnn/icml18-jtnn-master/fast_molvae/jtnn_vae.py", line 5, in <module>
        from nnutils import create_var, flatten_tensor, avg_pool
    ImportError: cannot import name 'flatten_tensor' from 'nnutils'

    Can anyone help with this?
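
    For what it's worth, the "cannot import name ... from ..." wording is Python 3's, while this repo targets Python 2.7; Python 2's implicit relative imports let from nnutils import ... resolve inside the fast_jtnn package, but Python 3 looks nnutils up as a top-level module instead. A hedged workaround is to run under Python 2.7, or to put the package directory itself on sys.path before importing:

        import sys
        sys.path.insert(0, "/path/to/icml18-jtnn/fast_jtnn")  # hypothetical path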

    opened by sbanwaskar 0
  • gen_latent.py: smile code not found

    While generating latent representations for BO, there are a few molecules that are perhaps not in the vocabulary; I get the following error:

      File "gen_latent.py", line 84, in <module>
        mol_vec = model.encode_latent_mean(batch)
      File "JT-VAE/icml18-jtnn/jtnn/jtnn_vae.py", line 61, in encode_latent_mean
        _, tree_vec, mol_vec = self.encode(mol_batch)
      File "JT-VAE/icml18-jtnn/jtnn/jtnn_vae.py", line 48, in encode
        set_batch_nodeID(mol_batch, self.vocab)
      File "JT-VAE/icml18-jtnn/jtnn/jtnn_vae.py", line 22, in set_batch_nodeID
        node.wid = vocab.get_index(node.smiles)
      File "JT-VAE/icml18-jtnn/jtnn/mol_tree.py", line 18, in get_index
        return self.vmap[smiles]
    KeyError: 'C1=CN=NC=C1'
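
    For reference, the KeyError means one of the molecule's tree clusters (here C1=CN=NC=C1, pyridazine) is missing from vocab.txt, which suggests the vocabulary was generated from a different file than the data being encoded. A hedged pre-filter, assuming the MolTree/Vocab API from jtnn/:

        from mol_tree import MolTree, Vocab  # from jtnn/

        vocab = Vocab([line.strip() for line in open("vocab.txt")])

        def covered(smiles):
            """True if every tree cluster of the molecule appears in the vocabulary."""
            return all(node.smiles in vocab.vmap for node in MolTree(smiles).nodes)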

    opened by chaitanyadwivedii 1
  • MAX_NB measuring

    Is there a way to measure the variable MAX_NB? While pretraining with our new dataset, we get scores like the following during training:

    [2100] Beta: 0.000, KL: 548.74, Word: 101.77, Topo: 106.81, Assm: 98.06, PNorm: 160.23, GNorm: 37.14

    We thought MAX_NB should be set per dataset, so is there a way we could measure the MAX_NB variable?
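
    A hedged way to pick MAX_NB empirically is to scan the training set for the largest neighbor count before training: MAX_NB in jtnn_enc.py/jtnn_dec.py must be at least the maximum junction-tree node degree (see the sketch under the molopt/pretrain.py issue above), and MAX_NB in mpn.py at least the maximum atom degree (see the sketch under the MPN.py issue).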

    opened by minstar 0
  • Scaffold Molecule for ZINC dataset

    Hi, I want to know how to print

    valid = 1.0
    unique@1000 = 1.0
    unique@10000 = 0.9992
    FCD/Test = 0.42235413520261034
    SNN/Test = 0.5560595345050097
    Frag/Test = 0.996223352989786
    Scaf/Test = 0.8924981494347503
    FCD/TestSF = 0.9962165008703465
    SNN/TestSF = 0.5272934146558245
    Frag/TestSF = 0.9947901514732745
    Scaf/TestSF = 0.10049873444911761
    IntDiv = 0.8511712225340441
    IntDiv2 = 0.8453088593783662
    Filters = 0.9778
    logP = 0.0054694810121243
    SA = 0.015992957588069068
    QED = 1.15692473423544e-05
    NP = 0.021087573878091237
    weight = 0.5403194879856983
    

    this result. From MOSES I know that to print this table I need the test, test_scaffold, and train sets, the precalculated test npz and test_scaffold npz statistics, and the generated (sampled) molecules.

    From /data/moses and moses-h450z56 I can get the test, test_scaffold, and train molecules and the generated molecules. My questions are:

    1. How can I get the scaffold molecules for a custom dataset?
    2. What are the precalculated test (or test_scaffold) npz files?
    3. Do you use .txt rather than .csv for printing this result? (MOSES uses .csv.)

    Thank you :)
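
    For reference, the MOSES package can compute this table from plain lists of SMILES; a hedged sketch following the MOSES README (by default it uses MOSES's own bundled test/scaffold splits, so a custom dataset needs its own splits passed in):

        import moses

        gen = [line.strip() for line in open("sampled.txt")]  # hypothetical filename
        metrics = moses.get_all_metrics(gen)  # bundled test/test_scaffolds splits by default
        for name, value in metrics.items():
            print(name, value)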

    opened by ksh981214 0
  • A problem while I run preprocess.py

    When I run preprocess.py, I meet a problem:

    Traceback (most recent call last):
      File "preprocess.py", line 9, in <module>
        from fast_jtnn import *
      File "/home/zhai/cxh/icml18-jtnn/fast_jtnn/__init__.py", line 1, in <module>
        from mol_tree import Vocab, MolTree
    ModuleNotFoundError: No module named 'mol_tree'

    This happens even though I added export PYTHONPATH=$PREFIX/icml18-jtnn to my environment variables.
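
    For what it's worth, the ModuleNotFoundError wording is Python 3's, and fast_jtnn/__init__.py uses a Python 2-style implicit relative import (from mol_tree import ...), which Python 3 does not resolve inside a package; adding the repo root to PYTHONPATH does not change that. A hedged workaround is to run under Python 2.7, or to also add the package directory itself to the path:

        export PYTHONPATH=$PREFIX/icml18-jtnn:$PREFIX/icml18-jtnn/fast_jtnn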

    opened by zoey1996 2