OOD Dataset Curator and Benchmark for AI-aided Drug Discovery

Overview

🔥 DrugOOD 🔥: OOD Dataset Curator and Benchmark for AI-Aided Drug Discovery

This is the official implementation of the DrugOOD project. The project page is available at https://drugood.github.io/

Environment Installation

You can install the conda environment using the drugood.yaml file provided:

git clone https://github.com/tencent-ailab/DrugOOD.git
cd DrugOOD
conda env create --name drugood --file=drugood.yaml
conda activate drugood

Then you can open the demo at demo/demo.ipynb, which gives a quick walkthrough of how to use DrugOOD.

Demo

For a quick, hands-on introduction to using DrugOOD for dataset curation and OOD benchmarking, refer to demo/demo.ipynb.
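
The demo boils down to loading a curator config and calling the curation API. A minimal sketch, mirroring the code in demo/demo.ipynb (the config is one of the built-in curators described in the next section):

from mmcv import Config
from drugood.apis.curate import curate_data

# Load a built-in curator config and run curation.
cfg = Config.fromfile("configs/curators/lbap_core_ec50_assay.py")
curate_data(cfg)  # writes the curated dataset under cfg.path.target_root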

Dataset Curator

First, you need to generate the required DrugOOD datasets with our code. The dataset curator currently focuses on generating datasets from ChEMBL. It supports the following two tasks:

  • Ligand Based Affinity Prediction (LBAP).
  • Structure Based Affinity Prediction (SBAP).

For OOD domain annotations, it supports the following five choices:

  • Assay.
  • Scaffold.
  • Size.
  • Protein (SBAP task only).
  • Protein Family (SBAP task only).

For noise annotations, it supports the following three noise levels; datasets at different noise levels are produced by filters of different strictness (see the filter-tuning sketch after this list).

  • Core.
  • Refined.
  • General.
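
The filters' strictness is exposed through fields of the curator configs, so it can be tuned before curation. A hedged sketch (the noise_filter.assay.molecules_number field appears in the built-in configs; the bounds below are illustrative):

from mmcv import Config

cfg = Config.fromfile("configs/curators/lbap_core_ec50_assay.py")
# Keep only assays whose molecule count lies within [50, 100] (illustrative bounds).
cfg.noise_filter.assay.molecules_number = [50, 100]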

In addition, because different measurement types (e.g., IC50, EC50, Ki, Potency) cannot be conveniently converted into one another, one needs to specify the measurement type when generating a dataset.

How to Run and Reproduce the 96 Datasets?

First, specify the path of the ChEMBL database and the directory for saving the data in the configuration file: configs/_base_/curators/lbap_defaults.py for the LBAP task or configs/_base_/curators/sbap_defaults.py for the SBAP task.
Here, source_root="YOUR_PATH/chembl_29_sqlite/chembl_29.db" is the path to the ChEMBL 29 SQLite file, and target_root="data/" specifies the folder in which to save the generated data.

Note that you can download the original ChEMBL 29 database in SQLite format from http://ftp.ebi.ac.uk/pub/databases/chembl/ChEMBLdb/releases/chembl_29/chembl_29_sqlite.tar.gz.
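
Judging from the demo, both settings live under the path field of the curator configs, so they can also be overridden in code before curation (a minimal sketch; the paths are placeholders):

from mmcv import Config

cfg = Config.fromfile("configs/curators/lbap_core_ec50_assay.py")
cfg.path.source_root = "YOUR_PATH/chembl_29_sqlite/chembl_29.db"  # ChEMBL 29 SQLite file
cfg.path.target_root = "data/"  # output folder for the curated dataset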

The built-in configuration files are located in configs/curators/. We provide 96 config files there, one for each of the 96 datasets in our paper. You can also curate custom datasets by modifying these config files.

Run tools/curate.py to generate a dataset. Here are some examples:

Generate a dataset for the LBAP task, with assay as the domain, core as the noise level, and IC50 as the measurement type:

python tools/curate.py --cfg configs/curators/lbap_core_ic50_assay.py

Generate a dataset for the SBAP task, with protein as the domain, refined as the noise level, and EC50 as the measurement type:

python tools/curate.py --cfg configs/curators/sbap_refined_ec50_protein.py
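
To reproduce all 96 datasets in one pass, you can iterate over the built-in curator configs, for example with a small helper script (a hypothetical sketch, not part of the repo):

import glob
import subprocess

# Curate every built-in config; each run writes one dataset to its target_root.
for cfg in sorted(glob.glob("configs/curators/*.py")):
    subprocess.run(["python", "tools/curate.py", "--cfg", cfg], check=True)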

Benchmarking SOTA OOD Algorithms

Currently, we support six different baseline algorithms:

  • ERM
  • IRM
  • GroupDRO
  • CORAL
  • MixUp
  • DANN

Meanwhile, we support various GNN backbones:

  • GIN
  • GCN
  • Weave
  • SchNet
  • GAT
  • MGCN
  • NF
  • ATi-FPGNN
  • GTransformer

We also support different backbones for protein sequence modeling:

  • BERT
  • ProteinBERT

How to Run?

First, run the following command to install DrugOOD in development mode:

python setup.py develop

Run the LBAP task with the ERM algorithm:

python tools/train.py configs/algorithms/erm/lbap_core_ec50_assay_erm.py

If you would like to run ERM on other datasets, change the corresponding options in the above config file. For example, ann_file = 'data/lbap_core_ec50_assay.json' specifies the input data file.
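
You can also retarget a config programmatically and save a new copy. A hedged sketch, assuming ann_file is a top-level field of the algorithm config as the line quoted above suggests (verify against the actual config layout):

from mmcv import Config

cfg = Config.fromfile("configs/algorithms/erm/lbap_core_ec50_assay_erm.py")
cfg.ann_file = "data/lbap_core_ic50_assay.json"  # hypothetical alternate dataset
cfg.dump("configs/algorithms/erm/lbap_core_ic50_assay_erm.py")  # save the new config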

Similarly, run the SBAP task with the ERM algorithm:

python tools/train.py configs/algorithms/erm/sbap_core_ec50_assay_erm.py

Reference

😄 If you find this repo useful, please consider citing our paper:

@ARTICLE{2022arXiv220109637J,
    author = {{Ji}, Yuanfeng and {Zhang}, Lu and {Wu}, Jiaxiang and {Wu}, Bingzhe and {Huang}, Long-Kai and {Xu}, Tingyang and {Rong}, Yu and {Li}, Lanqing and {Ren}, Jie and {Xue}, Ding and {Lai}, Houtim and {Xu}, Shaoyong and {Feng}, Jing and {Liu}, Wei and {Luo}, Ping and {Zhou}, Shuigeng and {Huang}, Junzhou and {Zhao}, Peilin and {Bian}, Yatao},
    title = "{DrugOOD: Out-of-Distribution (OOD) Dataset Curator and Benchmark for AI-aided Drug Discovery -- A Focus on Affinity Prediction Problems with Noise Annotations}",
    journal = {arXiv e-prints},
    keywords = {Computer Science - Machine Learning, Computer Science - Artificial Intelligence, Quantitative Biology - Quantitative Methods},
    year = 2022,
    month = jan,
    eid = {arXiv:2201.09637},
    pages = {arXiv:2201.09637},
    archivePrefix = {arXiv},
    eprint = {2201.09637},
    primaryClass = {cs.LG}
}

Disclaimer

This is not an officially supported Tencent product.

Comments
  • segmentation fault

    Hi, I am very interested in your work and want to generate new datasets of my own, but a segmentation fault always occurs when I run curate.py. For example, in cell [15] of demo.ipynb, two runs of the same code produced the results listed below:

    from mmcv import Config
    cfg = Config.fromfile('../configs/curators/lbap_core_ec50_assay.py')
    cfg.path.source_root = '/data/ly03/DrugOOD/CHEMBL_SQLLITE/chembl_29_sqlite/chembl_29.db'
    cfg.path.target_root = '/data/ly03/DrugOOD/dataset/'
    cfg.noise_filter.assay.molecules_number = [50, 100]
    cfg.path.task.subset = "lbap_core_ec50_assay_custom"
    print(f'Built-in Config:\n{cfg.pretty_text}')
    from drugood.apis.curate import curate_data
    curate_data(cfg)
    

    (two output screenshots omitted)

    conda clean -a has already been used, with no help for this problem. I think the segmentation fault may occur when the addresses of variables are not defined as constants; for instance, the dicts of the 'sample' are not assigned here. But I notice the code in demo.ipynb doesn't point to these variables either, so might this error occur because of CPU/GPU limitations? I'm using Ubuntu 16.04 with 94 GB of memory and a TITAN V GPU with 12 GB. Are these suitable for generating the datasets?

    Looking forward to your advice! Thank you for your great work.

    P.S.

    1. With ulimit -a, the info is:
    core file size          (blocks, -c) unlimited
    data seg size           (kbytes, -d) unlimited
    scheduling priority             (-e) 0
    file size               (blocks, -f) unlimited
    pending signals                 (-i) 384842
    max locked memory       (kbytes, -l) 64
    max memory size         (kbytes, -m) unlimited
    open files                      (-n) 1024
    pipe size            (512 bytes, -p) 8
    POSIX message queues     (bytes, -q) 819200
    real-time priority              (-r) 0
    stack size              (kbytes, -s) unlimited
    cpu time               (seconds, -t) unlimited
    max user processes              (-u) 384842
    virtual memory          (kbytes, -v) unlimited
    file locks                      (-x) unlimited
    
    2. My env info is listed below (CUDA 10.2, cuDNN 7.6.5):
    name: drugood
    channels:
      - brown-data-science
      - https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch
      - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
      - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
      - https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/bioconda/
      - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main/
      - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/
      - https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/msys2/
      - bioconda
      - conda-forge
      - defaults
      - r
    dependencies:
      - _libgcc_mutex=0.1=main
      - _openmp_mutex=4.5=1_gnu
      - blas=1.0=mkl
      - bzip2=1.0.8=h7b6447c_0
      - ca-certificates=2021.10.8=ha878542_0
      - certifi=2021.10.8=py38h578d9bd_2
      - cudatoolkit=10.2.89=hfd86e86_1
      - ffmpeg=4.3=hf484d3e_0
      - freetype=2.10.4=h5ab3b9f_0
      - gcc=5.4.0=0
      - gmp=6.2.1=h2531618_2
      - gnutls=3.6.15=he1e5248_0
      - intel-openmp=2021.3.0=h06a4308_3350
      - jpeg=9b=h024ee3a_2
      - lame=3.100=h7b6447c_0
      - lcms2=2.12=h3be6417_0
      - ld_impl_linux-64=2.35.1=h7274673_9
      - libffi=3.3=he6710b0_2
      - libgcc-ng=9.3.0=h5101ec6_17
      - libgomp=9.3.0=h5101ec6_17
      - libiconv=1.15=h63c8f33_5
      - libidn2=2.3.1=h27cfd23_0
      - libpng=1.6.37=hbc83047_0
      - libstdcxx-ng=9.3.0=hd4cf53a_17
      - libtasn1=4.16.0=h27cfd23_0
      - libtiff=4.2.0=h85742a9_0
      - libunistring=0.9.10=h27cfd23_0
      - libuv=1.40.0=h7b6447c_0
      - libwebp-base=1.2.0=h27cfd23_0
      - lz4-c=1.9.3=h2531618_0
      - mkl=2021.3.0=h06a4308_520
      - mkl-service=2.4.0=py38h7f8727e_0
      - mkl_fft=1.3.0=py38h42c9631_2
      - mkl_random=1.2.2=py38h51133e4_0
      - ncurses=6.2=he6710b0_1
      - nettle=3.7.3=hbbd107a_1
      - ninja=1.10.2=hff7bd54_1
      - numpy=1.20.3=py38hf144106_0
      - numpy-base=1.20.3=py38h74d4b33_0
      - olefile=0.46=py_0
      - openh264=2.1.0=hd408876_0
      - openjpeg=2.3.0=h05c96fa_1
      - openssl=1.1.1k=h7f98852_0
      - pillow=8.3.1=py38h2c7a002_0
      - pip=21.1.3=py38h06a4308_0
      - python=3.8.5=h7579374_1
      - python_abi=3.8=2_cp38
      - pytorch=1.7.1=py3.8_cuda10.2.89_cudnn7.6.5_0
      - readline=8.1=h27cfd23_0
      - setuptools=52.0.0=py38h06a4308_0
      - six=1.16.0=pyhd3eb1b0_0
      - sqlite=3.36.0=hc218d9a_0
      - tk=8.6.10=hbc83047_0
      - torchaudio=0.7.2=py38
      - torchvision=0.8.2=py38_cu102
      - tree=1.8.0=h7f98852_2
      - typing_extensions=3.10.0.0=pyh06a4308_0
      - wheel=0.36.2=pyhd3eb1b0_0
      - xz=5.2.5=h7b6447c_0
      - zlib=1.2.11=h7b6447c_3
      - zstd=1.4.9=haebb681_0
      - pip:
        - absl-py==1.0.0
        - addict==2.4.0
        - attrs==21.4.0
        - cachetools==5.0.0
        - charset-normalizer==2.0.12
        - click==8.0.4
        - cloudpickle==2.0.0
        - codecov==2.1.12
        - colorama==0.4.4
        - coverage==6.3.2
        - cycler==0.11.0
        - cython==0.29.28
        - dgl-cu102==0.6.1
        - dgl-cu110==0.6.1
        - dgllife==0.2.9
        - drugood==0.0.1
        - filelock==3.6.0
        - flake8==4.0.1
        - fonttools==4.29.1
        - future==0.18.2
        - fuzzywuzzy==0.18.0
        - google-auth==2.6.0
        - google-auth-oauthlib==0.4.6
        - googledrivedownloader==0.4
        - grpcio==1.44.0
        - huggingface-hub==0.4.0
        - hyperopt==0.2.7
        - idna==3.3
        - importlib-metadata==4.11.1
        - iniconfig==1.1.1
        - interrogate==1.5.0
        - isodate==0.6.1
        - isort==4.3.21
        - jinja2==3.0.3
        - joblib==1.1.0
        - kiwisolver==1.3.2
        - littleutils==0.2.2
        - markdown==3.3.6
        - markupsafe==2.1.0
        - matplotlib==3.5.1
        - mccabe==0.6.1
        - mmcv==1.4.5
        - networkx==2.6.3
        - oauthlib==3.2.0
        - ogb==1.3.2
        - opencv-python==4.5.5.62
        - outdated==0.2.1
        - packaging==21.3
        - pandas==1.4.1
        - pluggy==1.0.0
        - prettytable==3.2.0
        - protobuf==3.19.4
        - py==1.11.0
        - py4j==0.10.9.3
        - pyasn1==0.4.8
        - pyasn1-modules==0.2.8
        - pycodestyle==2.8.0
        - pyflakes==2.4.0
        - pyparsing==3.0.7
        - pytdc==0.3.6
        - pytest==7.0.1
        - python-dateutil==2.8.2
        - pytz==2021.3
        - pyyaml==6.0
        - rdflib==6.1.1
        - rdkit-pypi==2021.9.4
        - regex==2022.1.18
        - requests==2.27.1
        - requests-oauthlib==1.3.1
        - rsa==4.8
        - sacremoses==0.0.47
        - scikit-learn==1.0.2
        - scipy==1.8.0
        - seaborn==0.11.2
        - tabulate==0.8.9
        - tensorboard==2.8.0
        - tensorboard-data-server==0.6.1
        - tensorboard-plugin-wit==1.8.1
        - threadpoolctl==3.1.0
        - tokenizers==0.11.5
        - toml==0.10.2
        - tomli==2.0.1
        - torch-cluster==1.5.9
        - torch-geometric==2.0.3
        - torch-scatter==2.0.7
        - torch-sparse==0.6.9
        - torch-spline-conv==1.2.1
        - tqdm==4.62.3
        - transformers==4.16.2
        - urllib3==1.26.8
        - wcwidth==0.2.5
        - werkzeug==2.0.3
        - wilds==2.0.0
        - xdoctest==0.15.10
        - yacs==0.1.8
        - yapf==0.32.0
        - zipp==3.7.0
    prefix: /home/ly03/anaconda3/envs/drugood
    
    opened by AnnLoya 2
  • Data curation error caused by configuration files

    Hello, I noticed that in some of the curators' Python configuration files, there are nested definitions of assay.

    For example:

    (screenshot of the nested assay definition omitted)

    After removing one layer of the assay dictionary, the data curation process runs well.

    opened by solanoon 1
  • CVE-2007-4559 Patch

    Patching CVE-2007-4559

    Hi, we are security researchers from the Advanced Research Center at Trellix. We have begun a campaign to patch a widespread bug named CVE-2007-4559. CVE-2007-4559 is a 15-year-old bug in the Python tarfile package. By using extract() or extractall() on a tarfile object without sanitizing input, a maliciously crafted .tar file could perform a directory path traversal attack. We found at least one unsanitized extractall() in your codebase and are providing a patch for you via pull request. The patch essentially checks whether all tarfile members will be extracted safely and throws an exception otherwise. We encourage you to use this patch or your own solution to secure against CVE-2007-4559. Further technical information about the vulnerability can be found in this blog.
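
    For illustration, the safety check generally follows this pattern (a sketch of the common mitigation, not the exact code in our pull request):

    import os
    import tarfile

    def safe_extractall(tar: tarfile.TarFile, path: str = ".") -> None:
        # Refuse to extract members that would escape the target directory.
        base = os.path.realpath(path)
        for member in tar.getmembers():
            target = os.path.realpath(os.path.join(path, member.name))
            if os.path.commonpath([base, target]) != base:
                raise RuntimeError(f"Blocked path traversal in tar member: {member.name}")
        tar.extractall(path)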

    If you have further questions, you may contact us through this project's lead researcher, Kasimir Schulz.

    opened by TrellixVulnTeam 0
  • bug fix for irm

    According to the original IRM paper and its provided code, the value of "scale" is always 1 and is never updated, which means it should not be added to the parameters of the model. Also, you forgot to multiply the irm_penalty by irm_lambda when calculating the loss.
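
    For reference, a minimal sketch of the IRMv1 penalty as described in the original paper (not the repo's exact code): the scale is a fixed dummy multiplier, differentiable only so the penalty gradient can be taken, and the penalty is weighted by irm_lambda in the total loss.

    import torch
    import torch.nn.functional as F

    def irm_penalty(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Fixed dummy scale: requires_grad for the penalty gradient, NOT a model parameter.
        scale = torch.ones(1, device=logits.device, requires_grad=True)
        loss = F.cross_entropy(logits * scale, labels)
        grad = torch.autograd.grad(loss, [scale], create_graph=True)[0]
        return torch.sum(grad ** 2)

    # total_loss = erm_loss + irm_lambda * irm_penalty(logits, labels)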

    opened by zengkaipeng 0
  • Bug about IRM implementation

    In the code provided with the original IRM article, the value of dummy_w (the self.scale in your irm.py) is never updated. However, in your code, this value is registered as a model parameter, which means it will be updated by the optimizer.

    opened by zengkaipeng 0
  • A bug on using protein family

    Hi,

    I'm testing the dataset curated by protein family, e.g., configs/curators/sbap_core_ki_protein_family.py, and I get the following exception:

    Traceback (most recent call last):
      File "xxx/drugood/apis/curate.py", line 14, in curate_data
        data = curator.data_splitting(data)
      File "xxx/drugood/curators/curator.py", line 206, in data_splitting
        domain_value = domain_func(value_for_generating_domain)
      File "xxx/drugood/curators/get_domain_info.py", line 75, in protein_family
        class_id = self.protein_family_getter(protein_seq)
      File "xxx/drugood/curators/chembl/protein_family.py", line 48, in __call__
        target_level_class_id = self.get_target_level_class_id(class_id)
      File "xxx/drugood/curators/chembl/protein_family.py", line 37, in get_target_level_class_id
        class_id_cur_level = self.dict_id_to_parent_level[class_id_cur_level][0]
    KeyError: None
    

    It turns out that protein_family_level is None.


    A quick update: this line fails to pass in the protein_family_level.

    opened by chao1224 0
  • Question for LBAP task

    Dear authors, first, thank you for the excellent benchmark. I have one question about the protein targets in the LBAP task. In your paper, on page 12, you say: "In LBAP, we follow the common practice and do not involve any protein target information, which is usually used in the activity prediction for one specific protein target." To my understanding, this should keep the protein targets the same in both the training set and the test set. Is that correct? But I cannot find the code that enforces this requirement. If I missed it, could you point out where it is? Thanks in advance.

    opened by googlebaba 0
  • A bug about device inconsistency

    Every provided backbone calls the function "move_to_device", which moves the data to device "cuda:0". This means the model must be defined on "cuda:0" and "cuda:0" must be included in gpu-ids, which is quite inconvenient.

    opened by zengkaipeng 1