Predicting Patient Outcomes with Graph Representation Learning

This repository contains the code used for Predicting Patient Outcomes with Graph Representation Learning (https://arxiv.org/abs/2101.03940). You can watch a video of the spotlight talk at W3PHIAI (AAAI workshop) here:

Watch the video

Citation

If you use this code or the models in your research, please cite the following:

@misc{rocheteautong2021,
      title={Predicting Patient Outcomes with Graph Representation Learning}, 
      author={Emma Rocheteau and Catherine Tong and Petar Veličković and Nicholas Lane and Pietro Liò},
      year={2021},
      eprint={2101.03940},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

Motivation

Recent work on predicting patient outcomes in the Intensive Care Unit (ICU) has focused heavily on the physiological time series data, largely ignoring sparse data such as diagnoses and medications. When they are included, they are usually concatenated in the late stages of a model, which may struggle to learn from rarer disease patterns. Instead, we propose a strategy to exploit diagnoses as relational information by connecting similar patients in a graph. To this end, we propose LSTM-GNN for patient outcome prediction tasks: a hybrid model combining Long Short-Term Memory networks (LSTMs) for extracting temporal features and Graph Neural Networks (GNNs) for extracting the patient neighbourhood information. We demonstrate that LSTM-GNNs outperform the LSTM-only baseline on length of stay prediction tasks on the eICU database. More generally, our results indicate that exploiting information from neighbouring patient cases using graph neural networks is a promising research direction, yielding tangible returns in supervised learning performance on Electronic Health Records.

Pre-Processing Instructions

eICU Pre-Processing

  1. To run the SQL files, you must have the eICU database set up: https://physionet.org/content/eicu-crd/2.0/.

  2. Follow the instructions: https://eicu-crd.mit.edu/tutorials/install_eicu_locally/ to ensure the correct connection configuration.

  3. Replace the eICU_path in paths.json with a convenient location on your computer (see the example paths.json sketch after these steps), and do the same for eICU_preprocessing/create_all_tables.sql using find and replace for '/Users/emmarocheteau/PycharmProjects/eICU-GNN-LSTM/eICU_data/'. Leave the trailing '/' in place.

  4. In your terminal, navigate to the project directory, then type the following commands:

    psql 'dbname=eicu user=eicu options=--search_path=eicu'
    

    Inside the psql console:

    \i eICU_preprocessing/create_all_tables.sql
    

    This step might take a couple of hours.

    To quit the psql console:

    \q
    
  5. Then run the pre-processing scripts in your terminal. This will need to run overnight:

    python3 -m eICU_preprocessing.run_all_preprocessing
    
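As referenced in step 3, here is a sketch of what paths.json might look like once filled in. The keys are the ones referenced in this README; the placeholder values are assumptions, so substitute your own locations:

    {
        "eICU_path": "/path/of/your/choice/eICU_data/",
        "data_dir": "/path/of/your/choice/data/",
        "graph_dir": "/path/of/your/choice/graphs/",
        "log_path": "/path/of/your/choice/logs/",
        "ray_dir": "/path/of/your/choice/ray_results/"
    }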

Graph Construction

To make the graphs, you can use the following scripts:

This makes most of the graphs that we use; you can alter the arguments given to this script.

python3 -m graph_construction.create_graph --freq_adjust --penalise_non_shared --k 3 --mode k_closest

Write the diagnosis strings into eICU_data folder:

python3 -m graph_construction.get_diagnosis_strings

Get the bert embeddings:

python3 -m graph_construction.bert

Create the graph from the bert embeddings:

python3 -m graph_construction.create_bert_graph --k 3 --mode k_closest

Alternatively, you can request to download our graphs using this link: https://drive.google.com/drive/folders/1yWNLhGOTPhu6mxJRjKCgKRJCJjuToBS4?usp=sharing
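To make the k-closest scheme concrete, here is a minimal sketch of the idea behind the graph construction. This is an illustration only, not the repository's implementation: the similarity here is a plain shared-diagnosis count, whereas create_graph.py additionally supports frequency adjustment (--freq_adjust) and a penalty for unshared diagnoses (--penalise_non_shared).

    import numpy as np

    def k_closest_graph(diagnoses, k=3):
        """Toy k-closest graph. diagnoses is a binary (patients x codes) matrix."""
        scores = diagnoses @ diagnoses.T                 # shared-diagnosis counts per pair
        np.fill_diagonal(scores, -np.inf)                # a patient is never its own neighbour
        neighbours = np.argsort(-scores, axis=1)[:, :k]  # k highest-scoring neighbours each
        u = np.repeat(np.arange(len(diagnoses)), k)      # edge sources
        v = neighbours.ravel()                           # edge targets
        return u, v

    # e.g. 5 patients x 4 diagnosis codes
    diag = np.array([[1, 0, 1, 0],
                     [1, 0, 1, 0],
                     [0, 1, 0, 1],
                     [0, 1, 1, 1],
                     [1, 1, 0, 0]], dtype=float)
    u, v = k_closest_graph(diag, k=2)

create_bert_graph.py applies the same k-closest selection, but to distances between the BERT embeddings (smaller distance meaning closer).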

Training the ML Models

Before proceeding to training the ML models, do the following.

  1. Define data_dir, graph_dir, log_path and ray_dir in paths.json, setting each to a convenient location (the example paths.json sketch above shows all five keys).

  2. Run the following to unpack the processed eICU data into mmap files for easy loading during training. The mmap files will be saved in data_dir.

    python3 -m src.dataloader.convert
    

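If you want to sanity-check the unpacked data, mmap files can be opened with numpy. This is a hypothetical example; the real file names and shapes are determined by src/dataloader/convert.py, so check that script for the exact values:

    import numpy as np

    # Hypothetical file name and shape -- substitute the ones written into your data_dir.
    ts = np.memmap('/path/of/your/choice/data/ts.dat', dtype=np.float32, mode='r',
                   shape=(1000, 24, 16))  # (patients, timesteps, features), assumed
    print(ts[0, :5])                      # first five timesteps of the first patient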
The following commands train and evaluate the models introduced in our paper.

N.B.

  • The models are structured using pytorch-lightning. Graph neural networks and neighbourhood sampling are implemented using pytorch-geometric.

  • Our models assume a default graph which is made with k=3 under a k-closest scheme. If you wish to use other graphs, refer to read_graph_edge_list in src/dataloader/pyg_reader.py and add a reference handle to version2filename for your graph (see the sketch after this list).

  • The default task is In-Hospital Mortality Prediction (ihm); add --task los to the command to perform the Length-of-Stay Prediction (los) task instead.

  • These commands use the best set of hyperparameters; to use other hyperparameters, remove --read_best from the command and refer to src/args.py.
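For instance, registering a new graph might look roughly like the following. This is a sketch only; the dictionary name comes from the note above, but the file-naming pattern is an assumption, so check src/dataloader/pyg_reader.py for the exact structure it expects:

    # In src/dataloader/pyg_reader.py (sketch; the real entries and naming may differ):
    version2filename = {
        'default': 'k_closest_k=3_graph_{}.txt',  # assumed pattern for the default graph
        'my_graph': 'my_custom_graph_{}.txt',     # your new handle -> your graph files
    }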

a. LSTM-GNN

The following runs the training and evaluation for LSTM-GNN models. --gnn_name can be set as gat, sage, or mpnn. When mpnn is used, add --ns_sizes 10 to the command.

python3 -m train_ns_lstmgnn --bilstm --ts_mask --add_flat --class_weights --gnn_name gat --add_diag --read_best
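For example, the mpnn variant, which additionally needs the neighbourhood-sampling sizes flag, would be run with the flags above assembled as:

python3 -m train_ns_lstmgnn --bilstm --ts_mask --add_flat --class_weights --gnn_name mpnn --ns_sizes 10 --add_diag --read_best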

The following runs a hyperparameter search.

python3 -m src.hyperparameters.lstmgnn_search --bilstm --ts_mask --add_flat --class_weights  --gnn_name gat --add_diag

b. Dynamic LSTM-GNN

The following runs the training & evaluation for dynamic LSTM-GNN models. --gnn_name can be set as gcn, gat, or mpnn.

python3 -m train_dynamic --bilstm --random_g --ts_mask --add_flat --class_weights --gnn_name mpnn --read_best

The following runs a hyperparameter search.

python3 -m src.hyperparameters.dynamic_lstmgnn_search --bilstm --random_g --ts_mask --add_flat --class_weights --gnn_name mpnn

c. GNN

The following runs the GNN models (with neighbourhood sampling). --gnn_name can be set as gat, sage, or mpnn. When mpnn is used, add --ns_sizes 10 to the command.

python3 -m train_ns_gnn --ts_mask --add_flat --class_weights --gnn_name gat --add_diag --read_best

The following runs a hyperparameter search.

python3 -m src.hyperparameters.ns_gnn_search --ts_mask --add_flat --class_weights --gnn_name gat --add_diag

d. LSTM (Baselines)

The following runs the baseline bi-LSTMs. To remove diagnoses from the input vector, remove --add_diag from the command.

python3 -m train_ns_lstm --bilstm --ts_mask --add_flat --class_weights --num_workers 0 --add_diag --read_best

The following runs a hyperparameter search.

python3 -m src.hyperparameters.lstm_search --bilstm --ts_mask --add_flat --class_weights --num_workers 0 --add_diag

Comments
  • Diagnoses.sql

    Following the README, when I ran create_all_tables.sql I got an empty Diagnoses table. Checking Diagnoses.sql, line 40 seems to have a spelling error; I changed lb_labels to labels and it works!

    opened by sherry6247 1
  • How to get 'padded.dat' in bert.py when constructing graph

    When using the following script to make the graphs, a FileNotFoundError is encountered: python3 -m graph_construction.bert

    In graph_construction/bert.py, padded.dat and attention_mask.dat are needed.

    import numpy as np
    import torch

    def read_data(graph_dir):
        # memory-mapped BERT inputs: padded token ids and attention masks, one row per stay
        padded = np.memmap(graph_dir + 'padded.dat', dtype=int, shape=(89123, 512))
        attention_mask = np.memmap(graph_dir + 'attention_mask.dat', dtype=int, shape=(89123, 512))
        input_ids = torch.tensor(padded).to('cuda')
        attn_mask = torch.tensor(attention_mask).to('cuda')
        return input_ids, attn_mask
    

    How do I generate padded.dat and attention_mask.dat? I am curious about how to use BERT to analyse the EHR data. Could you share the related code? Thank you very much.
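    For anyone hitting the same issue, one plausible way to produce these files (this is not the authors' missing script; the bert-base tokenizer and the max length of 512 are assumptions inferred from the shapes in read_data) is to tokenise the diagnosis strings and write the token ids and masks into memmaps:

    import numpy as np
    from transformers import BertTokenizer

    tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
    graph_dir = '/path/of/your/choice/graphs/'
    # Illustrative strings; in practice, load the output of graph_construction.get_diagnosis_strings.
    strings = ['sepsis | pneumonia', 'acute myocardial infarction']

    padded = np.memmap(graph_dir + 'padded.dat', dtype=int, mode='w+',
                       shape=(len(strings), 512))
    attention_mask = np.memmap(graph_dir + 'attention_mask.dat', dtype=int, mode='w+',
                               shape=(len(strings), 512))
    for i, s in enumerate(strings):
        enc = tokenizer(s, padding='max_length', truncation=True, max_length=512)
        padded[i] = enc['input_ids']
        attention_mask[i] = enc['attention_mask']
    padded.flush()
    attention_mask.flush()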

    opened by LeslieHoloway 1
  • RuntimeError: The size of tensor a (3) must match the size of tensor b (128) at non-singleton dimension 1

    I am trying to run "Dynamic LSTM-GNN" as mentioned in the training using the follwing command "python3 -m train_dynamic --bilstm --random_g --ts_mask --add_flat --class_weights --gnn_name mpnn --read_bes " Here is the stack trace of the error.. File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main "main", mod_spec) File "/usr/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/content/drive/MyDrive/eICU/train_dynamic.py", line 286, in main(config) File "/content/drive/MyDrive/eICU/train_dynamic.py", line 205, in main trainer.fit(model) File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 771, in fit self._fit_impl, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 723, in _call_and_handle_interrupt return trainer_fn(*args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 811, in _fit_impl results = self._run(model, ckpt_path=self.ckpt_path) File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 1236, in _run results = self._run_stage() File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 1323, in _run_stage return self._run_train() File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 1353, in _run_train self.fit_loop.run() File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/loops/base.py", line 204, in run self.advance(*args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/loops/fit_loop.py", line 269, in advance self._outputs = self.epoch_loop.run(self._data_fetcher) File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/loops/base.py", line 204, in run self.advance(*args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 208, in advance batch_output = self.batch_loop.run(batch, batch_idx) File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/loops/base.py", line 204, in run self.advance(*args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/loops/batch/training_batch_loop.py", line 88, in advance outputs = self.optimizer_loop.run(split_batch, optimizers, batch_idx) File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/loops/base.py", line 204, in run self.advance(*args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 207, in advance self.optimizer_idx, File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 256, in _run_optimization self._optimizer_step(optimizer, opt_idx, batch_idx, closure) File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 378, in _optimizer_step using_lbfgs=is_lbfgs, File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 1595, in _call_lightning_module_hook output = fn(*args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/core/lightning.py", line 1646, in optimizer_step optimizer.step(closure=optimizer_closure) File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/core/optimizer.py", line 168, in step step_output = self._strategy.optimizer_step(self._optimizer, self._optimizer_idx, closure, **kwargs) File 
"/usr/local/lib/python3.7/dist-packages/pytorch_lightning/strategies/strategy.py", line 193, in optimizer_step return self.precision_plugin.optimizer_step(model, optimizer, opt_idx, closure, **kwargs) File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 155, in optimizer_step return optimizer.step(closure=closure, **kwargs) File "/usr/local/lib/python3.7/dist-packages/torch/optim/lr_scheduler.py", line 65, in wrapper return wrapped(*args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/torch/optim/optimizer.py", line 88, in wrapper return func(*args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context return func(*args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/torch/optim/adam.py", line 100, in step loss = closure() File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 140, in _wrap_closure closure_result = closure() File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 148, in call self._result = self.closure(*args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 134, in closure step_output = self._step_fn() File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 427, in _training_step training_step_output = self.trainer._call_strategy_hook("training_step", *step_kwargs.values()) File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 1765, in _call_strategy_hook output = fn(*args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/strategies/strategy.py", line 333, in training_step return self.model.training_step(*args, **kwargs) File "/content/drive/MyDrive/eICU/train_dynamic.py", line 59, in training_step pred, pred_lstm = self(seq, flat) File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1110, in _call_impl return forward_call(*input, **kwargs) File "/content/drive/MyDrive/eICU/train_dynamic.py", line 47, in forward out = self.net(seq, flat=flat) File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1110, in _call_impl return forward_call(*input, **kwargs) File "/content/drive/MyDrive/eICU/src/models/dgnn.py", line 91, in forward edge_index, edge_weights = self.knn_to_graph(out) File "/content/drive/MyDrive/eICU/src/models/dgnn.py", line 59, in knn_to_graph us, vs, vals = self.get_u_v(k_closest[1], k_closest[0]) File "/content/drive/MyDrive/eICU/src/models/dgnn.py", line 74, in get_u_v reflected_int = edges_int + edges_int.transpose(1, 0)

    opened by nasim-ahmed 3
  • RuntimeError: There were no tensor arguments to this function (e.g., you passed an empty list of Tensors) ....

    Hello

    First, I would like to thank you for sharing the code of your awesome project. I am trying to run your code and reproduce your experiments, but I am currently facing some problems. Here are the errors and my fixes:

    [0]

    File ".../eICU-GNN-LSTM/graph_construction/create_bert_graph.py", line 19, in make_graph_bert distances = torch.cdist(batch, bert, p=2.0, compute_mode='use_mm_for_euclid_dist_if_necessary')

    RuntimeError: cdist only supports floating-point dtypes, X1 got: Byte

    Fix: changed dtype from ByteTensor to FloatTensor File ".../eICU-GNN-LSTM/graph_construction/create_graph.py", line 15 dtype = torch.cuda.sparse.FloatTensor if device.type == 'cuda' else torch.sparse.FloatTensor https://github.com/EmmaRocheteau/eICU-GNN-LSTM/blob/5167eea88bfe7a3146ccb6194f54e8e57f52128b/graph_construction/create_graph.py#L15

    File "/home/sale/eICU-GNN-LSTM/graph_construction/create_graph.py", line 65, in make_graph_penalise s_pen = 5 * s - total_combined_diags # the 5 is fairly arbitrary but I don't want to penalise not sharing diagnoses too much

    RuntimeError: The size of tensor a (89123) must match the size of tensor b (1000) at non-singleton dimension 1

    Fix: File ".../eICU-GNN-LSTM/graph_construction/create_graph.py", line 194

    u, v, vals, k = make_graph_penalise(all_diagnoses, scores, debug=False, k=args.k) ############### debug=False Fixes problem https://github.com/EmmaRocheteau/eICU-GNN-LSTM/blob/5167eea88bfe7a3146ccb6194f54e8e57f52128b/graph_construction/create_graph.py#L194

    [1]

    File "../projects/eICU-GNN-LSTM/src/models/pyg_ns.py", line 241, in inference edge_attn = torch.cat(edge_attn, dim=0) # [no. of edges, n_heads of that layer]

    RuntimeError: There were no tensor arguments to this function (e.g., you passed an empty list of Tensors), but no fallback function is registered for schema aten::_cat. This usually means that this function requires a non-empty list of Tensors. Available functions are [CPU, CUDA, QuantizedCPU, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast, Batched, VmapMode].

    Fix: if i == 1 and get_attn: edge_index_w_self_loops = torch.cat(edge_index_w_self_loops, dim=1) # [2, n. of edges] if get_attn: edge_attn = torch.cat(edge_attn, dim=0) # [no. of edges, n_heads of that layer] all_edge_attn.append(edge_attn) https://github.com/EmmaRocheteau/eICU-GNN-LSTM/blob/5167eea88bfe7a3146ccb6194f54e8e57f52128b/src/models/pyg_ns.py#L241

    [2]

    File "../eICU-GNN-LSTM/train_ns_lstmgnn.py", line 94, in validation_step out = out[self.dataset.data.val_mask] TypeError: only integer tensors of a single element can be converted to an index

    Fix: out = out[0][self.dataset.data.val_mask] https://github.com/EmmaRocheteau/eICU-GNN-LSTM/blob/5167eea88bfe7a3146ccb6194f54e8e57f52128b/train_ns_lstmgnn.py#L94

    [3]

    ValueError: Input contains NaN, infinity or a value too large for dtype('float32').

    Fix: in the same file, train_ns_lstmgnn.py, line 96, I added the following lines, because when I printed those matrices I found some NaN values:

        out[out != out] = 0
        out_lstm[out_lstm != out_lstm] = 0

    https://github.com/EmmaRocheteau/eICU-GNN-LSTM/blob/5167eea88bfe7a3146ccb6194f54e8e57f52128b/train_ns_lstmgnn.py#L94

    After this, the code starts training, BUT with weird training progress (the loss is always nan)! Printing the output matrices shows that they are always NaNs:

    acc: 0.9049 prec0: 0.9049 prec1: nan rec0: 1.0000 rec1: 0.0000 auroc: 0.5000 auprc: 0.5476 minpse: 0.0951 f1macro: 0.4750
    Epoch 1:  92%|█████████▎| 452/489 [00:35<00:02, 12.78it/s, loss=nan, v_num=83]

    I tried to trace the source of the error, and the NaNs come after the LSTM layer at this line:

    https://github.com/EmmaRocheteau/eICU-GNN-LSTM/blob/5167eea88bfe7a3146ccb6194f54e8e57f52128b/src/models/lstm.py#L39

    Please correct me if I'm wrong ... Thanks a lot in advance...

    Note: I have used the same versions of the packages listed in the requirements.txt file.

    opened by Al-Dailami 3