Deep Learning for Engineers - Physics-Informed Deep Learning

Overview


SciANN: Neural Networks for Scientific Computations

SciANN is a Keras wrapper for scientific computations and physics-informed deep learning.

New to SciANN?

SciANN is a high-level artificial neural networks API, written in Python using Keras and TensorFlow backends. It is developed with a focus on enabling fast experimentation with different network architectures, with an emphasis on scientific computations, physics-informed deep learning, and inversion. Being able to start deep learning in very few lines of code is key to doing good research.

Use SciANN if you need a deep learning library that:

  • Allows for easy and fast prototyping.
  • Allows the use of complex deep neural networks.
  • Takes advantage of TensorFlow and Keras features, including seamlessly running on CPU and GPU.

For more details, check out our review paper at https://arxiv.org/abs/2005.08803 and the documentation at SciANN.com.

Cite SciANN in your publications if it helps your research:

@article{haghighat2021sciann,
  title={SciANN: A Keras/TensorFlow wrapper for scientific computations and physics-informed deep learning using artificial neural networks},
  author={Haghighat, Ehsan and Juanes, Ruben},
  journal={Computer Methods in Applied Mechanics and Engineering},
  volume={373},
  pages={113552},
  year={2021},
  publisher={Elsevier}
}

SciANN is compatible with: Python 2.7-3.6.

If you have questions or would like to collaborate, email Ehsan Haghighat.


Getting started: 30 seconds to SciANN

The core data structure of SciANN is a Functional, a way to organize inputs (Variables) and outputs (Fields) of a network.

Targets are imposed on Functional instances using Constraints.

The SciANN model (SciModel) is formed from inputs (Variables) and targets (Constraints). The model is then trained by calling the train function.

Here is the simplest SciANN model:

from sciann import Variable, Functional, SciModel
from sciann.constraints import Data

x = Variable('x')
y = Functional('y')
 
# y_true is a Numpy array of (N,1) -- with N as number of samples.  
model = SciModel(x, Data(y))

This corresponds to the simplest neural network possible, i.e. a linear relation between the input variable x and the output variable y, with only two parameters to be learned.
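Nothing restricts you to the linear case: passing hidden layers and an activation to Functional yields a deep network. Here is a minimal sketch, using the hidden-layer signature that appears in the issues further down this page; the layer widths and activation are illustrative choices:

from sciann import Variable, Functional

x = Variable('x')
# Two hidden layers of width 10 with tanh activations -- illustrative values.
y = Functional('y', x, 2*[10], activation='tanh')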

Plotting a network is as easy as passing a file_name to the SciModel:

model = SciModel(x, Data(y), plot_to_file='file_path')

Once your model looks good, perform the learning with .train():

# x_true is a Numpy array of (N,1) -- with N as number of samples. 
model.train(x_true, y_true, epochs=5, batch_size=32)

You can iterate on your training data in batches and over multiple epochs. Please check the Keras documentation on model.fit for more information on the possible options.
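As a sketch, a training call combining the options that appear elsewhere on this page (learning_rate, batch_size, epochs) might look like this; the values are illustrative:

# x_true, y_true are Numpy arrays of shape (N, 1).
history = model.train(
    x_true, y_true,
    learning_rate=0.001,  # illustrative value
    batch_size=32,
    epochs=100,
)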

You can evaluate the model any time on new data:

y_pred = model.predict(x_test, batch_size=128)

In the application folder of the repository, you will find some examples of Linear Elasticity, Flow, Flow in Porous Media, etc.


Installation

Before installing SciANN, you need to install TensorFlow and Keras.

You may also consider installing the following optional dependencies:

  • h5py (required if you plan to save model weights to disk).
  • graphviz and pydot (used by the plot_to_file option to visualize models).

Then, you can install SciANN itself. There are two ways to install SciANN:

  • Install SciANN from PyPI (recommended):

Note: These installation steps assume that you are on a Linux or Mac environment. If you are on Windows, you will need to remove sudo to run the commands below.

sudo pip install sciann

If you are using a virtualenv, you may want to avoid using sudo:

pip install sciann
  • Alternatively: install SciANN from the GitHub source:

First, clone SciANN using git:

git clone https://github.com/sciann/sciann.git

Then, cd to the SciANN folder and run the install command:

sudo python setup.py install

or

sudo pip install .

Why this name, SciANN?

Scientific Computations with Artificial Neural Networks.

Scientific computations include solving ODEs and PDEs, integration, differentiation, curve fitting, etc.
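As a flavor of the ODE/PDE use case, here is a minimal sketch (the network size, sampling, and the equation itself are illustrative) that drives the residual of y'' + y = 0 to zero with a 'zeros' target, following the diff-based pattern used in the examples below; a real problem would add initial/boundary constraints on top of the residual:

import numpy as np
import sciann as sn
from sciann.utils.math import diff

x = sn.Variable('x')
y = sn.Functional('y', x, 3*[10], 'tanh')

# Residual of the ODE y'' + y = 0, evaluated at the collocation points.
L1 = diff(y, x, order=2) + y

m = sn.SciModel([x], [L1])
x_data = np.linspace(0, np.pi, 200).reshape(-1, 1)
m.train([x_data], ['zeros'], epochs=100)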


Comments
  • Imposing BC using ids in Burger's equation

    Hi, I have read your papers and looked into the Burgers problem.

    From the paper, it mentions that to implement BC using ids, I should use:

    m = sn.SciModel([t, x], [L1, u], "mse", "Adam")

    m.train([x_data, t_data], ['zeros', (ids_ic_bc, U_ic_bc)], batch_size=256, epochs=10000)

    I checked and found that x_data and t_data are arrays of size (100,100).

    But what is the array size of ids_ic_bc and U_ic_bc?

    Thanks for the clarifications.
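    For reference, a sketch of how such an (ids, values) pair can be assembled, assuming the (100,100) grids are flattened before training and using the standard Burgers conditions u(x,0) = -sin(pi*x) and u(+-1,t) = 0 as illustrative targets; both arrays then share the same length:

    import numpy as np

    # Illustrative (100,100) space-time grid on x in [-1,1], t in [0,1].
    x_data, t_data = np.meshgrid(np.linspace(-1, 1, 100), np.linspace(0, 1, 100))
    x_flat, t_flat = x_data.reshape(-1), t_data.reshape(-1)

    TOL = 1e-8
    # Indices of samples lying on the initial set t=0 or the boundaries x=-1, x=+1.
    ids_ic_bc = np.where(
        (np.abs(t_flat) < TOL) |
        (np.abs(x_flat - 1.0) < TOL) |
        (np.abs(x_flat + 1.0) < TOL)
    )[0]

    # Target u values at exactly those samples, shaped (len(ids_ic_bc), 1).
    U_ic_bc = np.where(np.abs(t_flat[ids_ic_bc]) < TOL,
                       -np.sin(np.pi * x_flat[ids_ic_bc]),
                       0.0).reshape(-1, 1)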

    opened by zonexo 8
  • Tensorflow & Keras version

    Hello, I would like to know what versions of Tensorflow and Keras should be used. For Tensorflow, I tried versions 1.15 and 2.2, but I got the following errors:

    <1.15> ImportError: Keras requires TensorFlow 2.2 or higher. Install TensorFlow via pip install tensorflow

    <2.2> ImportError: cannot import name 'is_tensor' from 'keras.backend' (C:\Program_Files\anaconda3\envs\tf2-gpu\lib\site-packages\keras\backend.py)

    I am using Windows 10 and anaconda. Tensorflow works fine. And, I installed sciann using pip as recommended here. Thank you.

    opened by sungkwang 7
  • Importing SciANN Fails

    Apologies for the trouble, but in Spyder 4.2 (through Anaconda), I get the following when trying to load SciANN using

    import sciann as sn

    [image: import error]

    I downloaded the .zip file from GitHub and installed it through Anaconda's command window, but while the command line noted everything installed successfully, the attached error still occurs. As a basic remedial step, the previous copy was uninstalled and a new one downloaded/installed to no avail. Can I tell Python to not load the missing module? This is pretty much the extent of my knowledge with respect to troubleshooting these issues, so I haven't tried anything beyond remove/reinstall.

    Thank you very much for your time!

    bug 
    opened by NavyDevilDoc 6
  • Difficulty in Setting Initial Conditions

    Hello,

    I have been trying to write a code to solve a reaction-diffusion type problem. I wanted to compare deepXDE and SciANN but was having difficulty getting my initial conditions set up in SciANN.

    In deepXDE, I can use

    rho0*tf.cast(tf.math.greater(((x-x_center_scaled)**2)/(x_axis_scaled**2),1),tf.float64)

    To set a circle (line segment in 1D for now) in the center of the domain to zero while keeping everything else at a constant value. The result is shown below (sorry for the unlabeled axes - that is x from 0 to 75 and t from 0 to 10).

    [image: deepXDE result, x from 0 to 75, t from 0 to 10]

    In SciANN, this doesn't seem to work since the Functionals cannot be acted on by TF operations. In accordance with the Burger eq example, I tried using

    0.25*(1 - sign(tt - TOL)) * ((1 - sign(x - (left_boundary+TOL))) + (1 + sign(x - (right_boundary-TOL)))) * ( rho0)

    as well as

    0.5*(1 - sign(t - TOL)) * rho0 * sign(((x-x_center)**2)/(x_axis**2))

    In numpy, both of these give the correct shape

    [image: numpy evaluation of the initial condition]

    I also tried using

    0.5*(1 - sign(t - TOL)) * rho0 * tanh(((x-x_center)**2)/(x_axis**2))

    Which should give a smoother boundary. In all cases the whole domain appears as solid rho0, with no area of 0 in the center, as shown below.

    [image: SciANN prediction, uniform rho0]

    I also note that the initial condition loss seems to immediately get stuck at a particular value.

    [image: training loss history]

    So I am wondering: is there a problem with the way I have set up my boundary conditions? Is there a way to use TF operations or a "greater_than" function in SciANN? Or is there another way to set up my boundary conditions?

    Thanks, David

    opened by davidsohutskay 5
  • transfer learning

    I am going to use a pre-trained model (Functional) in a new model structure, but I don't know how to link the inputs to the old model. In Keras we call the model, but I don't know how it works with a SciANN model. Should we call the Functional or the Model? Best regards

    opened by Alborz2020 4
  • sn.set_random_seed doesn't make initialized weights and training results reproducible

    The code below doesn't reproduce the same weights when the model is initialized multiple times, even after using the sn.set_random_seed function.

    import sciann as sn
    from sciann.utils.math import diff
    
    seed = 43634
    sn.set_random_seed(seed)
    
    for i in range(4):
    
        x = sn.Variable('x',dtype='float64')
        y = sn.Variable('y',dtype='float64')
    
        inputs = [x,y]
        p = sn.Functional('p', inputs,2*[10], activation='tanh')
    
        ## L1-loss
        L1 = diff(p, x, order=2) + diff(p, y, order=2)
    
        losses = [L1]
    
        m = sn.SciModel(inputs, losses)
    
        weights = p.get_weights()
    
        ## just taking a slice to compare
        compare_weights = weights[0][0][0]
    
        print(f'############## Weights Iter: {i}')
        print(compare_weights)
    

    Output:

    ############## Weights Iter: 0
    [-0.08364122 -0.39584261  0.46033181  0.699524   -0.15388536  1.13492848
      0.97746673  0.0638296   0.22659807 -0.36424939]
    ############## Weights Iter: 1
    [-0.57727796 -0.53107439 -0.36321291 -0.17676498 -0.00334409 -0.71008476
      0.98622227 -0.13798297  0.09670978 -1.08998305]
    ############## Weights Iter: 2
    [-1.49435514  0.80398993  0.89099648 -0.35270435 -0.87543759 -1.57591196
      0.3990877   0.57710672  0.60861149  0.06177852]
    ############## Weights Iter: 3
    [-0.4822131   0.8055504   0.2928848   1.15362153  0.95912567  0.30233269
      0.41268821  0.85532438 -0.36524137 -0.71060004]
    
    opened by pradhyumna85 3
  • Complex-Valued function with more than one coordinate yields only real-valued solution

    Hey there, I am pretty new to SciANN and love working with it. I recently tried to get a complex-valued 2D Helmholtz problem to work. For me, 1D worked just fine. But when working with 2D data I got the following warning:

    <path-to-sciann>/sciann/lib/python3.6/site-packages/numpy/core/_asarray.py:83: ComplexWarning: Casting complex values to real discards the imaginary part return array(a, dtype, copy=False, order=order)

    As a result, the solution of this model only yields real values. For the problem specified this is sadly not sufficient. Is this intentional? Is there a way for me to circumvent this issue? Code to recreate this issue:

    import sciann as sn
    from sciann.utils.math import diff
    import numpy as np
    
    x_data, y_data = np.meshgrid(
        np.linspace(0, 1, 20),
        np.linspace(0, 1, 20)
    )
    x_data, y_data = x_data.flatten(), y_data.flatten()
    
    p_data = np.random.random(x_data.shape) + 1j * np.random.random(x_data.shape)
    k_absorb_data = np.random.random(x_data.shape) + 1j * np.random.random(x_data.shape)
    
    x = sn.Variable("x")
    y = sn.Variable("y")
    k = 20  # wave number
    k_absorb = sn.Functional("k_absorb", [x, y], 3 * [20], "tanh", dtype='complex64')  # wave number modifier
    p = sn.Functional("p", [x, y, k_absorb], 8 * [20], "tanh", dtype='complex64')
    
    c1 = sn.Data(p)
    c2 = sn.Data(k_absorb)
    L1 = -(diff(p, x, order=2) + diff(p, y, order=2) + (k - k_absorb) ** 2 * p)
    
    model = sn.SciModel([x, y], [c1, c2, sn.PDE(L1)])
    
    model.train(
        [x_data, y_data],
        [p_data, k_absorb_data, 'zeros'],
        epochs=1,
        adaptive_weights=True
    )
    
    print(f"Output-type: {p.eval(model, [x_data, y_data]).dtype}")
    

    stdout:

    <path-to-script>/issue.py
    2021-04-01 15:56:09.670221: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
    2021-04-01 15:56:09.670241: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
    ---------------------- SCIANN 0.6.0.4 ---------------------- 
    For details, check out our review paper and the documentation at: 
     +  "https://arxiv.org/abs/2005.08803", 
     +  "https://www.sciann.com". 
    
    2021-04-01 15:56:11.870829: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
    2021-04-01 15:56:11.870986: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
    2021-04-01 15:56:11.870993: W tensorflow/stream_executor/cuda/cuda_driver.cc:326] failed call to cuInit: UNKNOWN ERROR (303)
    2021-04-01 15:56:11.871008: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (JakobsComputer): /proc/driver/nvidia/version does not exist
    2021-04-01 15:56:11.871184: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
    To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
    2021-04-01 15:56:11.871724: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
    2021-04-01 15:56:11.897528: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:196] None of the MLIR optimization passes are enabled (registered 0 passes)
    2021-04-01 15:56:11.930168: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 3692545000 Hz
    Train on 400 samples
    
    + adaptive_weights at epoch 1: [191.29670595816734, 630.4341240815332, 1.0068604349762287]
    <path-to-sciann>/sciann/lib/python3.6/site-packages/numpy/core/_asarray.py:83: ComplexWarning: Casting complex values to real discards the imaginary part
      return array(a, dtype, copy=False, order=order)
    400/400 [==============================] - 1s 4ms/sample - loss: 52328.3709 - p_loss: 0.2347 - k_absorb_loss: 0.2266 - mul_2_loss: 51517.6172
    Output-type: float32
    
    Process finished with exit code 0
    

    Thank you, Jakob

    opened by JakobEliasWagner 3
  • Time per epoch increases substantially for high order (4th) targets

    I observed that the training time increases a lot when I add 4th-order targets. During a quick check, roughly the following training times were observed for the corresponding targets (1000 epochs):

    • 3x 0th order: < 1s
    • 2x 0th + 1x 4th: ~ 30s
    • 1x 0th + 2x 4th: ~ 100s

    Is having high-order targets that much more computationally demanding? Are there ways to improve on this? Enabling the GPU does not seem to matter.

    opened by PieterGimbel 3
  • SciANN example for heat transfer problem

    Dear Community, firstly, I do appreciate your help regarding my trivial question. I am new to SciANN and PINNs. I am trying to make a PINN model for a simple 1D heat equation problem. This is my PDE: rho*cp * dT/dt = lamda * d2T/dx2. I am using Python 3.8.12, SciANN 0.6.8.4 and TensorFlow 2.7.0. My code is presented in the following, but it gives the error discussed here.

    import numpy as np
    import sciann as sn
    from sciann.utils.math import diff
    rho, lamda, cp = 1, 1, 1 # parametrization
    L = 1. # domain length (x is sampled on [0, 1] below); added so the BC at x==L is defined
    t_ic = 50 # IC
    t_left = 70 # left side BC
    t_right = 40 # right side BC
    x = sn.Variable('x')
    t = sn.Variable('t')
    T = sn.Functional('T', [t,x], 4*[20], 'tanh')
    L1 = diff(T, t, order=1) - lamda/(rho* cp)*diff(T, x, order=2)
    BC_left = (x==0.)*(t_left)
    BC_right = (x==L)*(t_right)
    IC = (t==0.)*(t_ic)
    m = sn.SciModel([x, t], [L1, BC_left, BC_right, IC])
    x_data, t_data = np.meshgrid(
        np.linspace(0, 1, 100), 
        np.linspace(0, 60, 100)
    )
    h = m.train([x_data, t_data], 4*['zero'], learning_rate=0.002, epochs=500)
    

    Thanks again for you help. I very much appreciate if anyone can provide me with a SciANN example dealing with heat conduction and convection. Cheers Ali

    opened by Ali1990dashti 2
  • sciann.model save_weights parameter always uses frequency/period as 10

    history = model.train(inputs_train, len(losses)*['zero'], batch_size=100, learning_rate=0.001,
                   save_weights={"path":'./weights','freq':200})
    

    When executing the above code with the save_weights parameter, weights are saved every 10 epochs irrespective of what we pass in the 'freq' key of the save_weights argument.

    I found the code in the sciann.model class's train function that I think is logically incorrect, and updated it to get the expected behavior (10 as the default value of 'freq' if the key is not specified):

    # save model.
    model_file_path = None
    if save_weights is not None:
        assert isinstance(save_weights, dict), "pass a dictionary containing `path, freq, best`. "
        if 'path' not in save_weights.keys():
            save_weights_path = os.path.join(os.curdir, "weights")
        else:
            save_weights_path = save_weights['path']
        try:
            if 'best' in save_weights.keys() and \
                    save_weights['best'] is True:
                model_file_path = save_weights_path + "-best.hdf5"
                model_check_point = k.callbacks.ModelCheckpoint(
                    model_file_path, monitor='loss', save_weights_only=True, mode='auto',
    -                period=10 if 'freq' in save_weights.keys() else save_weights['freq'],
    +               period=save_weights['freq'] if 'freq' in save_weights.keys() else 10,
                    save_best_only=True
                )
            else:
                self._model.save_weights("{}-start.hdf5".format(save_weights_path))
                model_file_path = save_weights_path + "-{epoch:05d}-{loss:.3e}.hdf5"
                model_check_point = k.callbacks.ModelCheckpoint(
                    model_file_path, monitor='loss', save_weights_only=True, mode='auto',
    -                period=10 if 'freq' in save_weights.keys() else save_weights['freq'],
    +               period=save_weights['freq'] if 'freq' in save_weights.keys() else 10,
                    save_best_only=False
                )
        except:
            print("\nWARNING: Failed to save model.weights to the provided path: {}\n".format(save_weights_path))
    if model_file_path is not None:
        sci_callbacks.append(model_check_point)
    

    I have raised a pull request for the fix.

    opened by pradhyumna85 2
  • The meaning of the data returned by "get_weights"

    Dear community,

    It is happy to meet SCIANN, such an amazing tool. I have a question need your patient help.

    When I run the following code:

    f = sn.Functional(['f'], [u], [1], 'linear')
    weight_a = f.get_weights()
    print(np.array(weight_a))

    I get the results below, which confuse me.

    [[[-0.85555252525252] [0.00018264590427] [[[-0.80343422342870] [0.00342918346589]]]

    I think Python should send back 2 numbers, as I set 1 input, 1 output, and used 1 neuron, while it sent back 4 numbers. So what do these numbers mean? And if I want to set specific weights and biases for a neuron, how should I do that?

    Thanks a lot
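    A quick way to inspect what came back is to print the shape of each entry; in the Keras convention each Dense layer contributes a kernel array and a bias array, which would explain seeing more numbers than the two expected. The layout described in the comments below is an assumption based on that convention, not confirmed by this page:

    import numpy as np

    weights = f.get_weights()
    for i, w in enumerate(weights):
        # Keras Dense layers typically return [kernel, bias] pairs per layer,
        # so even a 1-input/1-output network can yield several arrays.
        print(i, np.shape(w), w)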

    opened by ZPLai 2
  • Change Implementation of Validation Loss

    Dear Ehsan,

    I changed the workflow on how to treat validation data which leads to an improvement in computation speed during model training. I hope this is a useful contribution to the SciANN Code. Maybe my implementation has to be rewritten and adapted to the Coding syntax and standards of SciANN.

    Best regards, Linus


    Implementation: At first, validation data are passed to model.train in the same data structure as the training data. Inside the model.train function, both datasets are prepared in the same way as well, before being passed to the function opt_fit_func. So basically, I just copied lines 325 through 390 and adapted them for the validation loss.


    Model Performance: The change in performance is quite visible in the computation times per epoch:

    1. Model performance without validation loss [image]

    2. Model performance with the old validation loss workflow [image]

    3. Model performance with the updated validation loss workflow

    opened by linuswalter 0
  • Error in Keras

    Dear Professor, I have a small problem with the code. I did a pip installation of TensorFlow, which includes Keras. Then I installed tf-nightly, h5py, graphviz, pydot and CUDA, followed by SciANN. But still, if I run the code, the following error occurs:

    2022-12-02 17:12:30.005975: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'nvcuda.dll'; dlerror: nvcuda.dll not found
    2022-12-02 17:12:30.007259: W tensorflow/stream_executor/cuda/cuda_driver.cc:263] failed call to cuInit: UNKNOWN ERROR (303)
    Traceback (most recent call last):
      File "C:\Users\Max\Documents\TUM\WS22\BUIEKS\Statik_Hiwi\test_solid_mechanics.py", line 350, in <module>
        train()
      File "C:\Users\Max\Documents\TUM\WS22\BUIEKS\Statik_Hiwi\test_solid_mechanics.py", line 243, in train
        history = model.train(
      File "C:\Anaconda\envs\BUIEKS\lib\site-packages\sciann\models\model.py", line 560, in train
        history = opt_fit_func(
      File "C:\Anaconda\envs\BUIEKS\lib\site-packages\keras\engine\training_v1.py", line 850, in fit
        raise TypeError("Unrecognized keyword arguments: " + str(kwargs))
    TypeError: Unrecognized keyword arguments: {'save_weights_to': 'output\res_tanh_40x40x40x40_WEIGHTS', 'save_weights_freq': 100000}

    If I run a different and simpler code from SciANN, the following error arises:

    Traceback (most recent call last):
      File "C:\Users\Max\Documents\TUM\WS22\BUIEKS\Statik_Hiwi\test_functional.py", line 15, in <module>
        y = Functional(
      File "C:\Anaconda\envs\BUIEKS\lib\site-packages\sciann\functionals\functional.py", line 200, in Functional
        layer = Dense(
      File "C:\Anaconda\envs\BUIEKS\lib\site-packages\keras\dtensor\utils.py", line 96, in _wrap_function
        init_method(layer_instance, *args, **kwargs)
      File "C:\Anaconda\envs\BUIEKS\lib\site-packages\keras\layers\core\dense.py", line 117, in __init__
        super().__init__(activity_regularizer=activity_regularizer, **kwargs)
      File "C:\Anaconda\envs\BUIEKS\lib\site-packages\tensorflow\python\trackable\base.py", line 205, in _method_wrapper
        result = method(self, *args, **kwargs)
      File "C:\Anaconda\envs\BUIEKS\lib\site-packages\keras\engine\base_layer_v1.py", line 151, in __init__
        generic_utils.validate_kwargs(kwargs, allowed_kwargs)
      File "C:\Anaconda\envs\BUIEKS\lib\site-packages\keras\utils\generic_utils.py", line 1269, in validate_kwargs
        raise TypeError(error_message, kwarg)
    TypeError: ('Keyword argument not understood:', 'activations')

    Do you have a clue what I am doing wrong?

    Best Regards,

    Max

    opened by maxhorlebein 0
  • Variable normalization problem

    Dear Professor, I am training a SciANN model using turbulence data. The data is 3D and should satisfy the Navier-Stokes equations. When I train, the losses for the velocity and pressure fields cannot be reduced to small values. I think it is because the variables are not normalized. Can you please tell me how to normalize the variables? [image: loss history]

    opened by nevoliu 0
  • Bump tensorflow from 2.8.1 to 2.9.3

    Bumps tensorflow from 2.8.1 to 2.9.3.

    Release notes

    Sourced from tensorflow's releases.

    TensorFlow 2.9.3

    Release 2.9.3

    This release introduces several vulnerability fixes:

    TensorFlow 2.9.2

    Release 2.9.2

    This release introduces several vulnerability fixes:

    ... (truncated)

    Changelog

    Sourced from tensorflow's changelog.

    Release 2.9.3

    This release introduces several vulnerability fixes:

    Release 2.8.4

    This release introduces several vulnerability fixes:

    ... (truncated)

    Commits
    • a5ed5f3 Merge pull request #58584 from tensorflow/vinila21-patch-2
    • 258f9a1 Update py_func.cc
    • cd27cfb Merge pull request #58580 from tensorflow-jenkins/version-numbers-2.9.3-24474
    • 3e75385 Update version numbers to 2.9.3
    • bc72c39 Merge pull request #58482 from tensorflow-jenkins/relnotes-2.9.3-25695
    • 3506c90 Update RELEASE.md
    • 8dcb48e Update RELEASE.md
    • 4f34ec8 Merge pull request #58576 from pak-laura/c2.99f03a9d3bafe902c1e6beb105b2f2417...
    • 6fc67e4 Replace CHECK with returning an InternalError on failing to create python tuple
    • 5dbe90a Merge pull request #58570 from tensorflow/r2.9-7b174a0f2e4
    • Additional commits viewable in compare view


    dependencies 
    opened by dependabot[bot] 0
  • BUG (?) -- Different order of (-,*) operators results in different data types

    Dear friends,

    Please consider the following lines of code:

    import numpy as np
    import sciann as sn
    z = sn.Variable('z')
    omega = sn.Functional('omega', z,
                     hidden_layers=[10,10],
                     activation='tanh')
    omega_z = sn.diff(omega, z)
    data = np.array([1,2,3,4,5,6,7,8,9,10])
    print(type(data - 1/omega_z))
    print(type(1/omega_z - data))
    print(type(data * 1/omega_z ))
    print(type(1/omega_z * data))
    

    We observe that a different order of operators between an ndarray and sciann.functionals.mlp_functional.MLPFunctional results in a different outcome. I do not know if this is somehow desirable - I cannot think of a reason for it. I suspect that there is something wrong in the __mul__ and __add__ method definitions.

    opened by fotisAnagnostopoulos 0
Releases (V-0.6.0.3)
  • V-0.6.0.3 (Feb 6, 2021)

    • Support for TF == 2.4

    • Gradient Pathology and Neural Tangent Kernel adaptive weights are added. In the .train call, use:

    adaptive_weights = {"method": "NTK" or "GP", "freq": 100}
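    For example (a sketch; the data arrays and epoch count are placeholders, and "NTK" is one of the two supported methods), this dictionary is passed straight to the .train call:

    m.train([x_data, t_data], ['zeros', (ids_ic_bc, U_ic_bc)],
            adaptive_weights={"method": "NTK", "freq": 100},
            epochs=1000)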

    • Get a bibliography of the papers used to generate your model by:

    sn.get_bibliography()

    !! Known issue - does not work properly on Google Colab.

    Notify me if any new bugs appear.

    Best, Ehsan

Owner
SciANN - Artificial Neural Networks for Scientific Computations
Must-read Papers on Physics-Informed Neural Networks.

PINNpapers Contributed by IDRL lab. Introduction Physics-Informed Neural Network (PINN) has achieved great success in scientific computing since 2017.

IDRL 330 Jan 7, 2023
Physics-informed convolutional-recurrent neural networks for solving spatiotemporal PDEs

PhyCRNet Physics-informed convolutional-recurrent neural networks for solving spatiotemporal PDEs Paper link: [ArXiv] By: Pu Ren, Chengping Rao, Yang

Pu Ren 11 Aug 23, 2022
PINN(s): Physics-Informed Neural Network(s) for von Karman vortex street

PINN(s): Physics-Informed Neural Network(s) for von Karman vortex street This is

ShotaDEGUCHI 2 Apr 18, 2022
Deep Learning and Reinforcement Learning Library for Scientists and Engineers 🔥

TensorLayer is a novel TensorFlow-based deep learning and reinforcement learning library designed for researchers and engineers. It provides an extens

TensorLayer Community 7.1k Dec 27, 2022
Official implementation of "Learning Forward Dynamics Model and Informed Trajectory Sampler for Safe Quadruped Navigation" (RSS 2022)

Intro Official implementation of "Learning Forward Dynamics Model and Informed Trajectory Sampler for Safe Quadruped Navigation" Robotics:Science and

Yunho Kim 21 Dec 7, 2022
Pytorch Implementation of Interaction Networks for Learning about Objects, Relations and Physics

Interaction-Network-Pytorch Pytorch Implementation of Interaction Networks for Learning about Objects, Relations and Physics. Interaction Network is a

null 117 Nov 5, 2022
Physics-Aware Training (PAT) is a method to train real physical systems with backpropagation.

Physics-Aware Training (PAT) is a method to train real physical systems with backpropagation. It was introduced in Wright, Logan G. & Onodera, Tatsuhiro et al. (2021)1 to train Physical Neural Networks (PNNs) - neural networks whose building blocks are physical systems.

McMahon Lab 230 Jan 5, 2023
Code for PhySG: Inverse Rendering with Spherical Gaussians for Physics-based Relighting and Material Editing

PhySG: Inverse Rendering with Spherical Gaussians for Physics-based Relighting and Material Editing CVPR 2021. Project page: https://kai-46.github.io/

Kai Zhang 141 Dec 14, 2022
Official PyTorch implementation of "Physics-aware Difference Graph Networks for Sparsely-Observed Dynamics".

Physics-aware Difference Graph Networks for Sparsely-Observed Dynamics This repository is the official PyTorch implementation of "Physics-aware Differ

USC-Melady 46 Nov 20, 2022
Brax is a differentiable physics engine that simulates environments made up of rigid bodies, joints, and actuators

Brax is a differentiable physics engine that simulates environments made up of rigid bodies, joints, and actuators. It's also a suite of learning algorithms to train agents to operate in these environments (PPO, SAC, evolutionary strategy, and direct trajectory optimization are implemented).

Google 1.5k Jan 2, 2023
Original code for "Zero-Shot Domain Adaptation with a Physics Prior"

Zero-Shot Domain Adaptation with a Physics Prior [arXiv] [sup. material] - ICCV 2021 Oral paper, by Attila Lengyel, Sourav Garg, Michael Milford and J

Attila Lengyel 40 Dec 21, 2022
McGill Physics Hackathon 2021: Reaction-Diffusion Models for the Generation of Biological Patterns

DiffuseAnimals: Reaction-Diffusion Models for the Generation of Biological Patterns Introduction Reaction-diffusion equations can be utilized in order

Austin Szuminsky 2 Mar 7, 2022
PyTorch implementation for the visual prior component (i.e. perception module) of the Visually Grounded Physics Learner [Li et al., 2020].

VGPL-Visual-Prior PyTorch implementation for the visual prior component (i.e. perception module) of the Visually Grounded Physics Learner (VGPL). Give

Toru 8 Dec 29, 2022
[ICLR 2022] Contact Points Discovery for Soft-Body Manipulations with Differentiable Physics

CPDeform Code and data for paper Contact Points Discovery for Soft-Body Manipulations with Differentiable Physics at ICLR 2022 (Spotlight). @InProceed

(Lester) Sizhe Li 29 Nov 29, 2022
Ivy is a templated deep learning framework which maximizes the portability of deep learning codebases.

Ivy is a templated deep learning framework which maximizes the portability of deep learning codebases. Ivy wraps the functional APIs of existing frameworks. Framework-agnostic functions, libraries and layers can then be written using Ivy, with simultaneous support for all frameworks. Ivy currently supports Jax, TensorFlow, PyTorch, MXNet and Numpy. Check out the docs for more info!

Ivy 8.2k Jan 2, 2023
Deep learning (neural network) based remote photoplethysmography: how to extract pulse signal from video using deep learning tools

Deep-rPPG: Camera-based pulse estimation using deep learning tools Deep learning (neural network) based remote photoplethysmography: how to extract pu

Terbe Dániel 138 Dec 17, 2022
deep-table implements various state-of-the-art deep learning and self-supervised learning algorithms for tabular data using PyTorch.

deep-table implements various state-of-the-art deep learning and self-supervised learning algorithms for tabular data using PyTorch.

null 63 Oct 17, 2022
Time-series-deep-learning - Developing Deep learning LSTM, BiLSTM models, and NeuralProphet for multi-step time-series forecasting of stock price.

Stock Price Prediction Using Deep Learning Univariate Time Series Predicting stock price using historical data of a company using Neural networks for

Abdultawwab Safarji 7 Nov 27, 2022