Overview

Aviary

License: MIT

The aviary contains:

  • roost (Colab notebook available),
  • wren (Colab notebook available),
  • cgcnn.

The aim is to contain multiple models for materials discovery under a common interface.

Environment Setup

To use aviary you need to create an environment with the correct dependencies. The easiest way to get up and running is to use Anaconda. A cudatoolkit=11.1 environment file, environment-gpu-cu111.yml, is provided, allowing a working environment to be created with:

conda env create -f environment-gpu-cu111.yml

If you are not using cudatoolkit=11.1 or do not have access to a GPU, this setup will not work for you. In that case, please check the PyTorch and PyTorch-Scatter pages for how to install the core packages, then install the remaining requirements as detailed in requirements.txt.

The code was developed and tested on Linux Mint 19.1 Tessa. It should work with other operating systems but has not been tested on them.

Aviary Setup

Once you have set up an environment with the correct dependencies, you can install aviary using the following commands from the top-level directory:

conda activate aviary
python setup.py sdist
pip install -e .

This will install the library in editable mode, allowing advanced users to make changes as desired.
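
As a quick check (an editor's suggestion, not part of the original instructions), importing the package afterwards should resolve to your local clone rather than a copied install:

import aviary
print(aviary.__file__)  # should point inside the cloned repository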

Example Use

To test the input file generation and cleaning/canonicalization, please run:

python examples/inputs/poscar2df.py

This script will load and parse a subset of raw POSCAR files from the TAATA dataset and produce the datasets/examples/examples.csv file used for the next example. The raw files have been selected to ensure that the subset contains all the correct endpoints for the 5 elemental species in the Hf-N-Ti-Zr-Zn chemical system. All of the models can be run on the input file produced by this example code. To test each of the three models provided, please run:

python examples/roost-example.py --train --evaluate --data-path examples/inputs/examples.csv --targets E_f --tasks regression --losses L1 --robust --epoch 10
python examples/wren-example.py --train --evaluate --data-path examples/inputs/examples.csv --targets E_f --tasks regression --losses L1 --robust --epoch 10
python examples/cgcnn-example.py --train --evaluate --data-path examples/inputs/examples.csv --targets E_f --tasks regression --losses L1 --robust --epoch 10

Please note that for speed/demonstration purposes this example runs on only ~68 materials for 10 epochs - running all these examples should take < 30s. These examples do not have sufficient data or training to make accurate predictions; however, the same scripts have been used for all the experiments conducted.
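
The following is a minimal sketch (not from the original README) for inspecting the generated input file before training; the path is taken from the commands above, and the column names should be checked against the file itself rather than assumed:

import pandas as pd

df = pd.read_csv("examples/inputs/examples.csv")  # file produced by poscar2df.py
print(df.shape)             # expect on the order of ~68 rows
print(df.columns.tolist())  # confirm the target column (E_f) and the input columns
print(df.head())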

Cite This Work

If you use this code please cite the relevant work:

Predicting materials properties without crystal structure: Deep representation learning from stoichiometry. [Paper] [arXiv]

@article{goodall2020predicting,
  title={Predicting materials properties without crystal structure: Deep representation learning from stoichiometry},
  author={Goodall, Rhys EA and Lee, Alpha A},
  journal={Nature Communications},
  volume={11},
  number={1},
  pages={1--9},
  year={2020},
  publisher={Nature Publishing Group}
}

Rapid Discovery of Novel Materials by Coordinate-free Coarse Graining. [arXiv]

@article{goodall2021rapid,
  title={Rapid Discovery of Novel Materials by Coordinate-free Coarse Graining},
  author={Goodall, Rhys EA and Parackal, Abhijith S and Faber, Felix A and Armiento, Rickard and Lee, Alpha A},
  journal={arXiv preprint arXiv:2106.11132},
  year={2021}
}

Crystal Graph Convolutional Neural Networks for an Accurate and Interpretable Prediction of Material Properties. [Paper] [arXiv]

@article{xie2018crystal,
  title={Crystal graph convolutional neural networks for an accurate and interpretable prediction of material properties},
  author={Xie, Tian and Grossman, Jeffrey C},
  journal={Physical review letters},
  volume={120},
  number={14},
  pages={145301},
  year={2018},
  publisher={APS}
}

Disclaimer

This research code is provided as-is. We have checked for potential bugs and believe that the code is being shared in a bug-free state. As this is an archive version we will not be able to amend the code to fix bugs/edge-cases found at a later date. However, this code will likely continue to be developed at the location described in the metadata.

Comments
  • Wren: Why does averaging of augmented Wyckoff positions happen inside the NN, after message passing?

    https://www.science.org/doi/epdf/10.1126/sciadv.abn4117

    The categorization of Wyckoff positions depends on a choice of origin (50). Hence, there is not a unique mapping between the crystal structure and the Wyckoff representation. To ensure that the model is invariant to the choice of origin, we perform on-the-fly augmentation of Wyckoff positions with respect to this choice of origin (see Fig. 6). The augmented representations are averaged at the end of the message passing stage to provide a single representation of equivalent Wyckoff representations to the output network. By pooling at this point, we ensure that the model is invariant and that its training is not biased toward materials for which many equivalent Wyckoff representations exist.

    Probably a noob question here. I think I understand that it needs to happen at some point, but why does it need to happen after message passing? Why not implement this at the very beginning (i.e. in the input data representation)? Not so much doubtful of the choice as I am interested in the mechanics behind this choice. A topic that's come up in another context for me.

    question 
    opened by sgbaird 11
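
    A minimal sketch (an editor's illustration, not the actual Wren code) of the pooling described in the quoted passage: message passing is run on every augmented Wyckoff representation, and the resulting embeddings are then averaged per material so the output network only ever sees an origin-invariant representation:

    import torch
    from torch_scatter import scatter_mean  # aviary.segments imports this same op family

    # Hypothetical embeddings after message passing: 5 augmented representations
    # belonging to 2 distinct materials (augmentation-to-material index below).
    aug_embeddings = torch.randn(5, 64)
    material_index = torch.tensor([0, 0, 0, 1, 1])

    # Average the augmented copies of each material into one crystal embedding.
    crystal_embedding = scatter_mean(aug_embeddings, material_index, dim=0)  # shape (2, 64)

    Pooling at this point, after message passing but before the output network, keeps the prediction invariant to the origin choice and avoids biasing training toward materials with many equivalent Wyckoff representations, as the quoted passage explains.
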
  • Add models that are equivalent to Roost

    CrabNet and AtomSets-v0 are both equivalent to roost in that they are weighted set regression architectures. If aviary is to develop into a DeepChem for inorganic materials property prediction it might be nice to add implementations of these models.

    enhancement help wanted 
    opened by CompRhys 11
  • How to predict on new materials with saved pytorch file

    I used roost-example.py and saved the trained model in a pytorch file (e.g., roost.pt). I have tried to load this file and predict as follows:

    targets=["E_f"]
    tasks=["regression"]
    task_dict = dict(zip(targets, tasks))
    df = pd.read_csv('candidate_compositions.csv')
    X = CompositionData(df, elem_embedding = "matscholar200", task_dict = task_dict)
    
    model = torch.load('models/roost.pt')
    y_pred = model.predict(X)
    

    and I get the following output:

    Traceback (most recent call last):
      File "~/opt/anaconda3/envs/aviary/lib/python3.7/site-packages/pandas/core/indexes/base.py", line 3361, in get_loc
        return self._engine.get_loc(casted_key)
      File "pandas/_libs/index.pyx", line 76, in pandas._libs.index.IndexEngine.get_loc
      File "pandas/_libs/index.pyx", line 108, in pandas._libs.index.IndexEngine.get_loc
      File "pandas/_libs/hashtable_class_helper.pxi", line 5198, in pandas._libs.hashtable.PyObjectHashTable.get_item
      File "pandas/_libs/hashtable_class_helper.pxi", line 5206, in pandas._libs.hashtable.PyObjectHashTable.get_item
    KeyError: 'E_f'
    
    The above exception was the direct cause of the following exception:
    
    Traceback (most recent call last):
      File "roost-predict.py", line 12, in <module>
        y_pred = model.predict(X)
      File "~/opt/anaconda3/envs/aviary/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
        return func(*args, **kwargs)
      File "~/opt/anaconda3/envs/aviary/lib/python3.7/site-packages/aviary/core.py", line 357, in predict
        data_loader, disable=True if not verbose else None
      File "~/opt/anaconda3/envs/aviary/lib/python3.7/site-packages/tqdm/std.py", line 1173, in __iter__
        for obj in iterable:
      File "~/opt/anaconda3/envs/aviary/lib/python3.7/site-packages/aviary/roost/data.py", line 126, in __getitem__
        targets.append(Tensor([row[target]]))
      File "~/opt/anaconda3/envs/aviary/lib/python3.7/site-packages/pandas/core/series.py", line 942, in __getitem__
        return self._get_value(key)
      File "~/opt/anaconda3/envs/aviary/lib/python3.7/site-packages/pandas/core/series.py", line 1051, in _get_value
        loc = self.index.get_loc(label)
      File "~/opt/anaconda3/envs/aviary/lib/python3.7/site-packages/pandas/core/indexes/base.py", line 3363, in get_loc
        raise KeyError(key) from err
    KeyError: 'E_f'
    

    Is it possible to add an example script to perform a prediction from a saved model?

    Thank you

    opened by sarah-allec 10
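
    For what it's worth, the KeyError arises because the dataset class indexes the target column (E_f) for every row even at inference time (see aviary/roost/data.py in the traceback). A hedged workaround sketch, not an official API; the batch size and the direct torch.load are assumptions carried over from the issue:

    import pandas as pd
    import torch
    from torch.utils.data import DataLoader
    from aviary.roost.data import CompositionData, collate_batch

    df = pd.read_csv("candidate_compositions.csv")
    df["E_f"] = 0.0  # dummy target so CompositionData can index the column

    dataset = CompositionData(df, elem_embedding="matscholar200", task_dict={"E_f": "regression"})
    loader = DataLoader(dataset, batch_size=128, collate_fn=collate_batch)

    model = torch.load("models/roost.pt")
    preds = model.predict(loader)  # predict() iterates a data loader per the traceback;
                                   # the exact return format depends on the aviary version
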
  • separate `fit` and `predict`

    Thanks for the patience with all the posts.

    It seems that the train and test data is passed in all at once. Ideally, I'd like to use RooSt in an sklearn-esque "instantiate, fit, and predict" style; it's not urgent, timescale is about a month. Since I'm not familiar with the underlying code, thought I would ask before diving in. Any thoughts/suggestions on this?

    opened by sgbaird 7
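
    A rough sketch of the interface being requested (purely hypothetical; none of these names exist in aviary), a thin wrapper that hides the dataset/loader construction behind sklearn-style fit and predict methods:

    class RoostRegressor:
        """Hypothetical sklearn-style wrapper around a roost model."""

        def __init__(self, epochs=100, **model_kwargs):
            self.epochs = epochs
            self.model_kwargs = model_kwargs
            self.model = None

        def fit(self, compositions, targets):
            # Build the CompositionData/DataLoader and training loop here,
            # reusing the logic currently embedded in examples/roost-example.py.
            ...
            return self

        def predict(self, compositions):
            # Build a target-free data loader and run a forward pass.
            ...
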
  • Git Surgery Plan

    In developing this code I have at several points been sloppy about committing large files to the git history. If we would like others to contribute, we would also like the history to show a more accurate representation of their contributions in terms of relative LOC. Consequently we're going to carry out some git surgery before our first official release.

    The following is useful to identify large files in the git history:

    git rev-list --objects --all |
      git cat-file --batch-check='%(objecttype) %(objectname) %(objectsize) %(rest)' |
      sed -n 's/^blob //p' |
      sort --numeric-sort --key=2 |
      cut -c 1-12,41- |
      $(command -v gnumfmt || echo numfmt) --field=2 --to=iec-i --suffix=B --padding=7 --round=nearest
    

    The following are some of the proposed clean-up commands.

    git filter-branch --force --index-filter "git rm -r --cached --ignore-unmatch data/" --prune-empty --tag-name-filter cat -- --all
    git filter-branch --force --index-filter "git rm -r --cached --ignore-unmatch *.pth.tar" --prune-empty --tag-name-filter cat -- --all
    git filter-branch --force --index-filter "git rm -r --cached --ignore-unmatch notebooks/" --prune-empty --tag-name-filter cat -- --all
    git filter-branch --force --index-filter "git rm -r --cached --ignore-unmatch examples/colab/" --prune-empty --tag-name-filter cat -- --all
    git filter-branch --force --index-filter "git rm -r --cached --ignore-unmatch results/" --prune-empty --tag-name-filter cat -- --all
    git filter-branch --force --index-filter "git rm -r --cached --ignore-unmatch examples/plots/" --prune-empty --tag-name-filter cat -- --all
    

    Colab example notebooks will be re-added, but with their outputs cleaned.

    code quality 
    opened by CompRhys 6
  • Instructions for use with custom datasets

    Hi @CompRhys, curious if you could give some tips on using Roost with a custom dataset. In my case, I have the chemical formulas as a list of str and the target properties, already separated into train+val vs. test datasets. I'm looking through the Colab notebook getting things set up.

    opened by sgbaird 5
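
    A hedged sketch of how such a dataset could be written to a CSV usable by the example scripts; the column names below (material_id, composition) are assumptions and should be mirrored from examples/inputs/examples.csv:

    import pandas as pd

    formulas = ["NaCl", "Fe2O3", "TiZrN2"]   # compositions as strings
    band_gaps = [5.0, 2.2, 0.9]              # target property values

    df = pd.DataFrame({
        "material_id": [f"custom-{i}" for i in range(len(formulas))],
        "composition": formulas,
        "band_gap": band_gaps,
    })
    df.to_csv("custom_dataset.csv", index=False)

    # then e.g.: python examples/roost-example.py --train --evaluate \
    #   --data-path custom_dataset.csv --targets band_gap --tasks regression --losses L1 --robust
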
  • TypeError: 'NoneType' object is not iterable

    I installed aviary using conda based on the instructions. However, when I run the command python examples/inputs/poscar2df.py, I get the following error:

    Traceback (most recent call last):
      File "examples/inputs/poscar2df.py", line 7, in <module>
        from pymatgen.core import Composition, Structure
      File "/(home path)/.conda/envs/aviary/lib/python3.7/site-packages/pymatgen/core/__init__.py", line 62, in <module>
        SETTINGS = _load_pmg_settings()
      File "/(home path)/.conda/envs/aviary/lib/python3.7/site-packages/pymatgen/core/__init__.py", line 52, in _load_pmg_settings
        d.update(d_yml)
    TypeError: 'NoneType' object is not iterable
    

    Any idea on how to solve this?

    invalid 
    opened by PinwenGuan 4
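
    For reference, the traceback points at pymatgen's settings loader: d.update(d_yml) fails because ~/.pmgrc.yaml parsed to None, which typically means the file exists but is empty. A hedged check, assuming that diagnosis:

    from pathlib import Path

    pmgrc = Path.home() / ".pmgrc.yaml"
    if pmgrc.exists() and not pmgrc.read_text().strip():
        print(f"{pmgrc} is empty; delete it or add valid YAML settings")
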
  • Roost Colab default Cuda version issue

    Tried running the Roost example Colab and got an error that seems to be related to Colab now using CUDA 11.2.

    OSError: libcudart.so.10.2: cannot open shared object file: No such file or directory
    
    stack trace
    OSError                                   Traceback (most recent call last)
    <ipython-input-10-fd45f7ae93a3> in <module>()
          1 from aviary.roost.data import CompositionData, collate_batch as roost_cb
    ----> 2 from aviary.roost.model import Roost
          3 
          4 torch.manual_seed(0)  # ensure reproducible results
          5 
    
    4 frames
    /usr/local/lib/python3.7/dist-packages/aviary/roost/model.py in <module>()
          4 
          5 from aviary.core import BaseModelClass
    ----> 6 from aviary.segments import (
          7     MessageLayer,
          8     ResidualNetwork,
    
    /usr/local/lib/python3.7/dist-packages/aviary/segments.py in <module>()
          1 import torch
          2 import torch.nn as nn
    ----> 3 from torch_scatter import scatter_add, scatter_max, scatter_mean
          4 
          5 
    
    /usr/local/lib/python3.7/dist-packages/torch_scatter/__init__.py in <module>()
         14     spec = cuda_spec or cpu_spec
         15     if spec is not None:
    ---> 16         torch.ops.load_library(spec.origin)
         17     elif os.getenv('BUILD_DOCS', '0') != '1':  # pragma: no cover
         18         raise ImportError(f"Could not find module '{library}_cpu' in "
    
    /usr/local/lib/python3.7/dist-packages/torch/_ops.py in load_library(self, path)
        108             # static (global) initialization code in order to register custom
        109             # operators with the JIT.
    --> 110             ctypes.CDLL(path)
        111         self.loaded_libraries.add(path)
        112 
    
    /usr/lib/python3.7/ctypes/__init__.py in __init__(self, name, mode, handle, use_errno, use_last_error)
        362 
        363         if handle is None:
    --> 364             self._handle = _dlopen(self._name, mode)
        365         else:
        366             self._handle = handle
    
    OSError: libcudart.so.10.2: cannot open shared object file: No such file or directory
    
    opened by jdagdelen 4
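
    A hedged diagnostic sketch: the missing libcudart.so.10.2 suggests torch-scatter was built against CUDA 10.2 while the Colab runtime's PyTorch targets a different CUDA version, so the two need to be matched when reinstalling:

    import torch

    print(torch.__version__)   # e.g. 1.10.0+cu111
    print(torch.version.cuda)  # CUDA version PyTorch was built against

    # torch-scatter must come from a wheel built for this same torch/CUDA pair,
    # otherwise it tries to load a libcudart version that is not on the machine.
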
  • Type hints

    Lays the groundwork for #29 and closes #30.

    These changes are all py37 compatible (unless I made a mistake). @CompRhys You may want to try this branch on Colab just to be sure.

    code quality types 
    opened by janosh 3
  • Suggested parameters for a "performance" submission to matbench

    Curious if you have any suggestions on a general set of parameters that you would use for submission to matbench. For example, number of epochs. Right now, I've been using the defaults from the Colab notebook (just for the matbench_expt_gap task).

    opened by sgbaird 3
  • Better model.__repr__()

    model.__repr__() now includes trainable params and epoch count. Moved from Wren + Roost having identical implementations to SSOT on BaseModelClass so CGCNN now has custom __repr__ too.

    Also confines coverage reporting in CI to package files (i.e. exclude test files).

    opened by janosh 3
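
    An illustrative sketch (not the actual aviary implementation) of a base-class __repr__ that reports trainable parameters and epoch count:

    import torch.nn as nn

    class BaseModelSketch(nn.Module):
        """Hypothetical base class mirroring the PR description."""

        def __init__(self):
            super().__init__()
            self.epoch = 0

        def __repr__(self):
            n_params = sum(p.numel() for p in self.parameters() if p.requires_grad)
            return f"{type(self).__name__} with {n_params:,} trainable params at {self.epoch} epochs"
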
  • Refactor `aviary/utils.py`

    aviary/utils.py is definitely in need of an overhaul. It was quite hard to add type hints to it in #31 and flake8 complained about exceeding max-complexity, both of which are bad signs for API design.

    code quality 
    opened by janosh 1
Releases(v0.0.4)
  • v0.0.4(Jul 1, 2022)

  • v0.0.3(Apr 20, 2022)

    This is a tag of the code used to generate the results shown in Science Advances.

    After this tag, git surgery was performed to make the LOC counts more realistic. This release therefore also serves as a backup of the code before the clean-up commands were carried out.

Owner
Rhys Goodall
PhD Student at the University of Cambridge working on the application of Machine Learning to Materials Discovery.