Efficient and Scalable Physics-Informed Deep Learning and Scientific Machine Learning on top of TensorFlow for multi-worker distributed computing

Overview

TensorDiffEq logo


Notice: Support for Python 3.6 will be dropped in v0.2.1; please plan accordingly!

Efficient and Scalable Physics-Informed Deep Learning

Collocation-based PINN PDE solvers for prediction and discovery methods on top of TensorFlow 2.x for multi-worker distributed computing.

Use TensorDiffEq if you require:

  • A meshless PINN solver that can distribute over multiple workers (GPUs) for forward problems (inference) and inverse problems (discovery)
  • Scalable domains - Iterated solver construction allows for N-D spatio-temporal support
    • support for N-D spatial domains with no time element is included
  • Self-Adaptive Collocation methods for forward and inverse PINNs
  • Intuitive user interface allowing for explicit definitions of variable domains, boundary conditions, initial conditions, and strong-form PDEs (see the sketch below)
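
For a feel of the interface, here is a minimal sketch of a 1D viscous Burgers' setup, mirroring the usage shown in the issue threads below (exact signatures may vary between releases):

import math
import numpy as np
import tensorflow as tf
from tensordiffeq.boundaries import *
from tensordiffeq.models import CollocationSolverND

# Spatio-temporal domain with randomly sampled collocation points
Domain = DomainND(["x", "t"], time_var='t')
Domain.add("x", [-1.0, 1.0], 256)
Domain.add("t", [0.0, 1.0], 100)
Domain.generate_collocation_points(10000)

# Initial condition and Dirichlet boundary conditions
init = IC(Domain, [lambda x: -np.sin(math.pi * x)], var=[['x']])
upper = dirichletBC(Domain, val=0.0, var='x', target="upper")
lower = dirichletBC(Domain, val=0.0, var='x', target="lower")

# Strong-form residual of the viscous Burgers' equation
def f_model(u_model, x, t):
    u = u_model(tf.concat([x, t], 1))
    u_x = tf.gradients(u, x)
    u_xx = tf.gradients(u_x, x)
    u_t = tf.gradients(u, t)
    return u_t + u * u_x - (0.01 / tf.constant(math.pi)) * u_xx

model = CollocationSolverND()
model.compile([2, 20, 20, 20, 1], f_model, Domain, [init, upper, lower])
model.fit(tf_iter=10000)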

What makes TensorDiffEq different?

  • Completely open-source

  • Self-Adaptive Solvers for forward and inverse problems, leading to increased accuracy of the solution and stability in training, resulting in less overall training time

  • Multi-GPU distributed training for large or fine-grain spatio-temporal domains

  • Built on top of TensorFlow 2.x for support of new functionality exclusive to recent TF releases, such as XLA support, AutoGraph for efficient graph-building, and Grappler support for graph optimization* - with no risk of the source code being sunset in a future TensorFlow release

  • Intuitive interface - defining domains, BCs, ICs, and strong-form PDEs in "plain English"

*In development

If you use TensorDiffEq in your work, please cite it via:

@article{mcclenny2021tensordiffeq,
  title={TensorDiffEq: Scalable Multi-GPU Forward and Inverse Solvers for Physics Informed Neural Networks},
  author={McClenny, Levi D and Haile, Mulugeta A and Braga-Neto, Ulisses M},
  journal={arXiv preprint arXiv:2103.16034},
  year={2021}
}

Thanks to our additional contributors:

@marcelodallaqua, @ragusa, @emiliocoutinho

Comments
  • Latest version of package

    The examples in the docs use the latest code from the master branch, but the library on PyPI is still the version from May. Could you build the library and update the version on PyPI?

    opened by devzhk 5
  • ADAM training on batches

    This makes it possible to define a batch size that is applied to the calculation of the residual loss function, splitting the collocation points into batches during training.
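
    A minimal sketch of the idea (illustrative only: residual_batches is a hypothetical helper, not part of the library):

    import numpy as np
    import tensorflow as tf

    # Shuffle the collocation points once per epoch, then yield successive
    # mini-batches for the residual-loss computation.
    def residual_batches(X_f, batch_size):
        idx = np.random.permutation(len(X_f))
        for start in range(0, len(X_f), batch_size):
            yield tf.convert_to_tensor(X_f[idx[start:start + batch_size]], dtype=tf.float32)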

    opened by emiliocoutinho 3
  • Pull Request using PyCharm

    Dear Levi,

    I tried to make a Pull Request on this repository using PyCharm, and I received the following message:

    Although you appear to have the correct authorization credentials, the tensordiffeq organization has enabled OAuth App access restrictions, meaning that data access to third-parties is limited. For more information on these restrictions, including how to whitelist this app, visit https://help.github.com/articles/restricting-access-to-your-organization-s-data/

    I would kindly ask you to authorize PyCharm to access your organization's data, so that I can use the GUI to make future pull requests.

    Best Regards

    opened by emiliocoutinho 1
  • Update method def get_sizes of utils.py

    Fixes a bug in the method get_sizes(layer_sizes) of utils.py. The method only allowed neural nets with an identical number of nodes in each hidden layer, which caused the L-BFGS optimization to crash.
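
    The underlying idea, as a minimal sketch (assuming layer_sizes is a list such as [3, 20, 30, 1]; this illustrates the fix rather than quoting it):

    def get_sizes(layer_sizes):
        # Layer i maps layer_sizes[i] inputs to layer_sizes[i + 1] outputs, so the
        # flattened weight/bias sizes must follow the actual per-layer widths.
        sizes_w = [layer_sizes[i] * layer_sizes[i + 1] for i in range(len(layer_sizes) - 1)]
        sizes_b = layer_sizes[1:]
        return sizes_w, sizes_b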

    opened by marcelodallaqua 1
  • model.save ?

    Sometimes it's useful to save the model for later use. I couldn't find a .save method, and pickle (and dill) didn't let me dump the object for later re-use (example of error with pickle: Can't pickle local object 'make_gradient_clipnorm_fn..').

    Is it currently possible to save the model? Thanks!

    opened by ragusa 1
  • add model.save and model.load_model

    Adds model.save and model.load_model to the CollocationSolverND class (ref #3).

    Will be released in the next stable.

    Currently this can be done through the Keras integration by running model.u_model.save("path/to/file"). This change will allow a direct save by calling model.save() on the CollocationSolverND class, and likewise for load_model().
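
    For reference, the two call patterns ("path/to/file" is a placeholder):

    # Current workaround: save the underlying Keras network directly
    model.u_model.save("path/to/file")

    # After this change: save/load on the solver itself
    model.save("path/to/file")
    model.load_model("path/to/file")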

    The docs will be updated to reflect this change.

    opened by levimcclenny 0
  • 2D Burgers Equation

    Hello @levimcclenny and thanks for recommending this library!

    I have modified the 1D Burgers' example to be in 2D, but I did not get good comparison results. Any suggestions?

    import math
    import scipy.io
    import numpy as np           # explicit imports for the np/tf calls below
    import tensorflow as tf
    import tensordiffeq as tdq
    from tensordiffeq.boundaries import *
    from tensordiffeq.models import CollocationSolverND
    
    Domain = DomainND(["x", "y", "t"], time_var='t')
    
    Domain.add("x", [-1.0, 1.0], 256)
    Domain.add("y", [-1.0, 1.0], 256)
    Domain.add("t", [0.0, 1.0], 100)
    
    N_f = 10000
    Domain.generate_collocation_points(N_f)
    
    
    def func_ic(x, y):
        p = 2
        q = 1
        return np.sin(p * math.pi * x) * np.sin(q * math.pi * y)
        
    
    init = IC(Domain, [func_ic], var=[['x','y']])
    upper_x = dirichletBC(Domain, val=0.0, var='x', target="upper")
    lower_x = dirichletBC(Domain, val=0.0, var='x', target="lower")
    upper_y = dirichletBC(Domain, val=0.0, var='y', target="upper")
    lower_y = dirichletBC(Domain, val=0.0, var='y', target="lower")
    
    BCs = [init, upper_x, lower_x, upper_y, lower_y]
    
    
    def f_model(u_model, x, y, t):
        u = u_model(tf.concat([x, y, t], 1))
        u_x = tf.gradients(u, x)
        u_xx = tf.gradients(u_x, x)
        u_y = tf.gradients(u, y)
        u_yy = tf.gradients(u_y, y)
        u_t = tf.gradients(u, t)
        f_u = u_t + u * (u_x + u_y) - (0.01 / tf.constant(math.pi)) * (u_xx + u_yy)
        return f_u
    
    
    layer_sizes = [3, 20, 20, 20, 20, 20, 20, 20, 20, 1]
    
    model = CollocationSolverND()
    model.compile(layer_sizes, f_model, Domain, BCs)
    
    # To reproduce results from Raissi and the SA-PINNs paper, train for 10k Adam and 10k Newton iterations
    model.fit(tf_iter=10000, newton_iter=10000)
    
    model.save("burger2D_Training_Model")
    # model.load_model("burger2D_Training_Model")
    
    #######################################################
    #################### PLOTTING #########################
    #######################################################
    
    data = np.load('py-pde_2D_burger_data.npz')
    
    Exact = data['u_output']
    Exact_u = np.real(Exact)
    
    x = Domain.domaindict[0]['xlinspace']
    y = Domain.domaindict[1]['ylinspace']
    t = Domain.domaindict[2]["tlinspace"]
    
    X, Y, T = np.meshgrid(x, y, t)
    
    X_star = np.hstack((X.flatten()[:, None], Y.flatten()[:, None], T.flatten()[:, None]))
    u_star = Exact_u.T.flatten()[:, None]
    
    u_pred, f_u_pred = model.predict(X_star)
    
    error_u = tdq.helpers.find_L2_error(u_pred, u_star)
    print('Error u: %e' % (error_u))
    
    lb = np.array([-1.0, -1.0, 0.0])
    ub = np.array([1.0, 1.0, 1])
    
    tdq.plotting.plot_solution_domain2D(model, [x, y, t], ub=ub, lb=lb, Exact_u=Exact_u.T)
    
    
    (Three screenshots of the training results are attached to the issue.)
    opened by engsbk 3
  • 2D Wave Equation

    2D Wave Equation

    Thank you for the great contribution!

    I'm trying to extend the 1D example problems to 2D, but I want to make sure my changes are in the correct place:

    1. Dimension variables. I changed them like so:

    Domain = DomainND(["x", "y", "t"], time_var='t')

    Domain.add("x", [0.0, 5.0], 100) Domain.add("y", [0.0, 5.0], 100) Domain.add("t", [0.0, 5.0], 100)

    2. My IC is zero, but for the BCs I'm not sure how to define the left and right borders. Please let me know if my implementation is correct:
    
    def func_ic(x, y):
        return 0
    
    init = IC(Domain, [func_ic], var=[['x','y']])
    upper_x = dirichletBC(Domain, val=0.0, var='x', target="upper")
    lower_x = dirichletBC(Domain, val=0.0, var='x', target="lower")
    upper_y = dirichletBC(Domain, val=0.0, var='y', target="upper")
    lower_y = dirichletBC(Domain, val=0.0, var='y', target="lower")
            
    BCs = [init, upper_x, lower_x, upper_y, lower_y]
    

    All of my BCs and ICs are zero, and my equation has a time-dependent (forcing) source term, as follows:

    
    def f_model(u_model, x, y, t):
        c = tf.constant(1, dtype=tf.float32)
        Amp = tf.constant(2, dtype=tf.float32)
        freq = tf.constant(1, dtype=tf.float32)
        sigma = tf.constant(0.2, dtype=tf.float32)

        source_x = tf.constant(0.5, dtype=tf.float32)
        source_y = tf.constant(2.5, dtype=tf.float32)

        # Gaussian source centered at (source_x, source_y), oscillating in time
        GP = Amp * tf.exp(-0.5 * (((x - source_x) / sigma)**2 + ((y - source_y) / sigma)**2))
        S = GP * tf.sin(2 * tf.constant(math.pi) * freq * t)

        u = u_model(tf.concat([x, y, t], 1))
        u_x = tf.gradients(u, x)
        u_xx = tf.gradients(u_x, x)
        u_y = tf.gradients(u, y)
        u_yy = tf.gradients(u_y, y)
        u_t = tf.gradients(u, t)
        u_tt = tf.gradients(u_t, t)

        f_u = u_xx + u_yy - (1 / c**2) * u_tt + S
        return f_u
    

    Please advise.

    Looking forward to your reply!

    opened by engsbk 13
  • Reproducibility

    Dear @levimcclenny,

    Have you considered adapting TensorDiffEq to be deterministic? As the code is currently implemented, there are two sources of randomness:

    • The function Domain.generate_collocation_points uses random number generation
    • The TensorFlow training procedure (weight initialization and the possible use of random batches)

    Both sources of randomness can be addressed with little effort. For the first, we can define a random state that is passed to Domain.generate_collocation_points. For the second, we can use the implementation provided in Framework Determinism; I have used the procedures suggested by that code, and the TensorFlow results are always reproducible (CPU or GPU, serial or distributed).
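
    A minimal sketch of what that seeding could look like (enable_op_determinism requires TensorFlow >= 2.8; the seed argument to generate_collocation_points is hypothetical):

    import random
    import numpy as np
    import tensorflow as tf

    def set_global_determinism(seed=42):
        # Seed every RNG involved in training: weight init, shuffling, sampling
        random.seed(seed)
        np.random.seed(seed)
        tf.random.set_seed(seed)
        # Force deterministic kernel implementations (TF >= 2.8)
        tf.config.experimental.enable_op_determinism()

    # Hypothetical random state for collocation sampling:
    # Domain.generate_collocation_points(N_f, seed=42)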

    If you want, I can implement these two features.

    Best Regards

    opened by emiliocoutinho 3