Reproduces the results of the paper "Finite Basis Physics-Informed Neural Networks (FBPINNs): a scalable domain decomposition approach for solving differential equations".

Overview

Finite basis physics-informed neural networks (FBPINNs)


This repository reproduces the results of the paper Finite Basis Physics-Informed Neural Networks (FBPINNs): a scalable domain decomposition approach for solving differential equations, B. Moseley, T. Nissen-Meyer and A. Markham, arXiv, Jul 2021.


Key contributions

  • Physics-informed neural networks (PINNs) offer a powerful new paradigm for solving problems relating to differential equations
  • However, a key limitation is that PINNs struggle to scale to problems with large domains and/or multi-scale solutions
  • We present finite basis physics-informed neural networks (FBPINNs), which are able to scale to these problems
  • To do so, FBPINNs use a combination of domain decomposition, subdomain normalisation and flexible training schedules
  • FBPINNs outperform PINNs in terms of accuracy and computational resources required

Workflow

FBPINNs divide the problem domain into many small, overlapping subdomains. A neural network is placed within each subdomain such that within the center of the subdomain, the network learns the full solution, whilst in the overlapping regions, the solution is defined as the sum over all overlapping networks.

We use smooth, differentiable window functions to locally confine each network to its subdomain, and the inputs of each network are individually normalised over the subdomain.

In comparison to existing domain decomposition techniques, FBPINNs do not require additional interface terms in their loss function, and they ensure the solution is continuous across subdomain interfaces by the construction of their solution ansatz.
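
To make this construction concrete, here is a minimal, self-contained sketch of a 1D FBPINN-style ansatz written in plain PyTorch. It is only an illustration of the idea described above, not the repository's implementation: the sigmoid-based window shape, the subdomain layout and the network sizes are all assumptions chosen for the example.

import torch
import torch.nn as nn

# Toy 1D FBPINN-style ansatz (illustrative only): 5 overlapping subdomains on [-1, 1]
centres = torch.linspace(-1.0, 1.0, 5)   # subdomain centres
half_width = 0.35                        # chosen so that neighbouring subdomains overlap

def window(x, c, w, sharpness=20.0):
    # smooth, differentiable window: ~1 near the subdomain centre, ~0 outside (c - w, c + w)
    return torch.sigmoid(sharpness * (x - (c - w))) * torch.sigmoid(sharpness * ((c + w) - x))

# one small fully-connected network per subdomain
nets = nn.ModuleList(
    nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 16), nn.Tanh(), nn.Linear(16, 1))
    for _ in range(len(centres))
)

def u(x):
    # global solution = sum over subdomains of window(x) * network(normalised x)
    out = torch.zeros_like(x)
    for net, c in zip(nets, centres):
        xn = (x - c) / half_width                      # normalise input over the subdomain
        out = out + window(x, c, half_width) * net(xn)
    return out

x = torch.linspace(-1.0, 1.0, 200).reshape(-1, 1)
print(u(x).shape)  # torch.Size([200, 1])

Because each window decays smoothly to zero outside its subdomain, the summed ansatz is continuous across subdomain interfaces by construction, which is why no additional interface loss terms are needed.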

Installation

FBPINNs requires only standard Python libraries to run.

We recommend setting up a new environment, for example:

conda create -n fbpinns python=3  # Use conda package manager
conda activate fbpinns

and then installing the following libraries:

conda install scipy matplotlib jupyter
conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch
pip install tensorboardX

All of our work was completed using PyTorch version 1.8.1 with CUDA 10.2.

Finally, download the source code:

git clone https://github.com/benmoseley/FBPINNs.git

Getting started

The workflow to train and compare FBPINNs and PINNs is very simple to set up, and consists of three steps:

  1. Initialise a problems.Problem class, which defines the differential equation (and boundary condition) you want to solve
  2. Initialise a constants.Constants object, which defines all of the other training hyperparameters (domain, number of subdomains, training schedule, etc.)
  3. Pass this Constants object to the main.FBPINNTrainer or main.PINNTrainer class and call the .train() method to start training.

For example, to solve the problem du/dx = cos(wx) shown above, you can use the following code to train an FBPINN and a PINN:

import numpy as np

import problems, constants, active_schedulers, main

P = problems.Cos1D_1(w=1, A=0)  # initialise problem class

c1 = constants.Constants(
            RUN="FBPINN_%s"%(P.name),# run name
            P=P,# problem class
            SUBDOMAIN_XS=[np.linspace(-2*np.pi,2*np.pi,5)],# defines subdomains
            SUBDOMAIN_WS=[2*np.ones(5)],# defines width of overlapping regions between subdomains
            BOUNDARY_N=(1/P.w,),# optional arguments passed to the constraining operator
            Y_N=(0,1/P.w,),# defines unnormalisation
            ACTIVE_SCHEDULER=active_schedulers.AllActiveSchedulerND,# training scheduler
            ACTIVE_SCHEDULER_ARGS=(),# training scheduler arguments
            N_HIDDEN=16,# number of hidden units in subdomain network
            N_LAYERS=2,# number of hidden layers in subdomain network
            BATCH_SIZE=(200,),# number of training points
            N_STEPS=5000,# number of training steps
            BATCH_SIZE_TEST=(400,),# number of testing points
            )

run = main.FBPINNTrainer(c1)# train FBPINN
run.train()

c2 = constants.Constants(
            RUN="PINN_%s"%(P.name),
            P=P,
            SUBDOMAIN_XS=[np.linspace(-2*np.pi,2*np.pi,5)],
            BOUNDARY_N=(1/P.w,),
            Y_N=(0,1/P.w,),
            N_HIDDEN=32,
            N_LAYERS=3,
            BATCH_SIZE=(200,),
            N_STEPS=5000,
            BATCH_SIZE_TEST=(400,),
            )

run = main.PINNTrainer(c2)# train PINN
run.train()

The training code will automatically start outputting training statistics, plots and tensorboard summaries. The tensorboard summaries can be viewed by installing tensorboard and then running the command tensorboard --logdir fbpinns/results/summaries/.

Defining your own problems.Problem class

To learn how to define and solve your own problem, see the Defining your own problem Jupyter notebook included in this repository.
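
To get a feel for what such a problem class must encode before opening the notebook, the sketch below computes the physics residual of du/dx = cos(wx) in plain PyTorch using autograd. The helper name and structure here are illustrative assumptions and do not follow the repository's problems.Problem interface, which the notebook documents.

import numpy as np
import torch

def physics_residual(net, x, w=1.0):
    # residual of du/dx - cos(w*x) = 0, computed with autograd
    # (illustrative helper only, not part of the FBPINNs Problem interface)
    x = x.requires_grad_(True)
    u = net(x)
    dudx = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    return dudx - torch.cos(w * x)

net = torch.nn.Sequential(torch.nn.Linear(1, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1))
x = torch.linspace(-2*np.pi, 2*np.pi, 200).reshape(-1, 1)
loss = physics_residual(net, x).pow(2).mean()  # physics loss minimised during training
print(loss.item())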

Reproducing our results

The purpose of each folder is as follows:

  • fbpinns : contains the main code which defines and trains FBPINNs.
  • analytical_solutions : contains a copy of the BURGERS_SOLUTION code used to compute the exact solution to the Burgers equation problem.
  • seismic-cpml : contains a Python implementation of the SEISMIC_CPML FD library used to solve the wave equation problem.
  • shared_modules : contains generic Python helper functions and classes.

To reproduce the results in the paper, use the following steps:

  1. Run the scripts fbpinns/paper_main_1D.py, fbpinns/paper_main_2D.py, fbpinns/paper_main_3D.py. These train and save all of the FBPINNs and PINNs presented in the paper.
  2. Run the notebook fbpinns/Paper plots.ipynb. This generates all of the plots in the paper.
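
For example, assuming the scripts are run from inside the fbpinns directory (so that the local modules resolve), they can be launched directly:

cd FBPINNs/fbpinns
python paper_main_1D.py
python paper_main_2D.py
python paper_main_3D.py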

Further questions?

Please raise a GitHub issue or feel free to contact us.

Comments
  • running error

    Hello, I am trying to use your program to write an FBPINN that solves a 3D temperature field. Because the problem is more complex, I use the soft-constrained loss function form in the problem class, so def boundary_condition() is not defined there. However, even when commenting out the FBPINN and only running the PINN, the error still occurs. I am getting an error while running the file paper_main_3D.py. The screenlog:

    RUN: final_FBPINN_TemperatureField3D_16h_2l_30b_r_0.1w_All
    P: <problems.TemperatureField3D object at 0x0000020521F32808>
    SUBDOMAIN_XS: [array([0.  , 0.25, 0.5 , 0.75, 1.  ]), array([0.  , 0.25, 0.5 , 0.75, 1.  ]), array([0.  , 0.25, 0.5 , 0.75, 1.  ])]
    SUBDOMAIN_WS: [array([0.025, 0.025, 0.025, 0.025, 0.025]), array([0.025, 0.025, 0.025, 0.025, 0.025]), array([0.025, 0.025, 0.025, 0.025, 0.025])]
    BOUNDARY_N: (0.1,)
    Y_N: (0, 1)
    ACTIVE_SCHEDULER: <class 'active_schedulers.AllActiveSchedulerND'>
    ACTIVE_SCHEDULER_ARGS: ()
    DEVICE: 0
    MODEL: <class 'models.FCN'>
    N_HIDDEN: 16
    N_LAYERS: 2
    BATCH_SIZE: (30, 30, 30)
    RANDOM: True
    LRATE: 0.001
    N_STEPS: 150000
    SEED: 123
    BATCH_SIZE_TEST: (100, 100, 10)
    PLOT_LIMS: (0.4, True)
    SUMMARY_FREQ: 250
    TEST_FREQ: 5000
    MODEL_SAVE_FREQ: 10000
    SHOW_FIGURES: False
    SAVE_FIGURES: False
    CLEAR_OUTPUT: False
    SUMMARY_OUT_DIR: results/summaries/final_FBPINN_TemperatureField3D_16h_2l_30b_r_0.1w_All/
    MODEL_OUT_DIR: results/models/final_FBPINN_TemperatureField3D_16h_2l_30b_r_0.1w_All/
    HOSTNAME: laptop-8t1qinad

    Device: cuda:0
    Main thread ID: 7240
    Torch seed: 123
    0 Active updated: [4 x 4 x 4 array of ones; full printout omitted]
    [long printout of training point torch.Size values omitted]

    Process Process-1:1:
    Traceback (most recent call last):
      File "C:\Users\10614\Anaconda3\envs\TF2.1\lib\multiprocessing\process.py", line 297, in _bootstrap
        self.run()
      File "C:\Users\10614\Anaconda3\envs\TF2.1\lib\multiprocessing\process.py", line 99, in run
        self._target(*self._args, **self._kwargs)
      File "C:\Users\10614\Desktop\FBPINNs-main\fbpinns\trainersBase.py", line 111, in train_models_multiprocess
        run.train()
      File "C:\Users\10614\Desktop\FBPINNs-main\fbpinns\main.py", line 267, in train
        xs, yjs, yjs_sum, loss = self._train_step(models, optimizers, c, D, i)
      File "C:\Users\10614\Desktop\FBPINNs-main\fbpinns\main.py", line 185, in _train_step
        yj = c.P.boundary_condition(x, *yj, *c.BOUNDARY_N)# problem-specific
    TypeError: boundary_condition() missing 1 required keyword-only argument: 'args'

    Exception in thread Thread-1:
    Traceback (most recent call last):
      File "C:\Users\10614\Anaconda3\envs\TF2.1\lib\multiprocessing\connection.py", line 302, in _recv_bytes
        overlapped=True)
    BrokenPipeError: [WinError 109] The pipe has been ended.

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "C:\Users\10614\Anaconda3\envs\TF2.1\lib\threading.py", line 926, in _bootstrap_inner
        self.run()
      File "C:\Users\10614\Anaconda3\envs\TF2.1\lib\site-packages\tensorboardX\event_file_writer.py", line 202, in run
        data = self._queue.get(True, queue_wait_duration)
      File "C:\Users\10614\Anaconda3\envs\TF2.1\lib\multiprocessing\queues.py", line 108, in get
        res = self._recv_bytes()
      File "C:\Users\10614\Anaconda3\envs\TF2.1\lib\multiprocessing\connection.py", line 216, in recv_bytes
        buf = self._recv_bytes(maxlength)
      File "C:\Users\10614\Anaconda3\envs\TF2.1\lib\multiprocessing\connection.py", line 321, in _recv_bytes
        raise EOFError
    EOFError

    opened by 135355 0
  • Modifying the wave 3D Problem

    Thank you for the innovative contribution!

    I tried modifying the wave 3D problem to have the following boundary conditions:

    u(x, y, 0) = 0
    u(0, 0, t) = 2 sin(2πt)   # time-dependent source

    in this way:

    
     def boundary_condition(self, x, u, dudt, d2udx2, d2udy2, d2udt2, sd):
            
            # Apply u = tanh^2((t-0)/sd)*NN + sigmoid((d-t)/sd)*exp( -(1/2)((x/sd)^2+(y/sd)^2) )  ansatz
            
            t_2, dudt_2, d2udt_22 = boundary_conditions.tanh2_2(x[:,2:3], 0, sd)
            s, _, d2uds2   = boundary_conditions.sigmoid_2(-x[:,2:3], -2*sd, 0.2*sd)# beware (!) this gives correct 2nd order gradients but negative 1st order (sign flip!)
            
            mx = my = 0; 
            sx = sy = self.source_sd
            xnx, xny = (x[:,0:1]-mx)/sx, (x[:,1:2]-my)/sy
            #exp = torch.exp(-0.5*(xnx**2 + xny**2))
            exp = torch.exp(-0.5*(xnx**2 + xny**2))*0 #IC = 0 instead of exp
            #Initial GP
            f = exp
            d2udfx2 = (1/sx**2) * ((xnx**2) - 1)*exp
            d2udfy2 = (1/sy**2) * ((xny**2) - 1)*exp
            
            u_new   = t_2*u + s*f
            d2udx2_new = t_2*d2udx2 + s*d2udfx2
            d2udy2_new = t_2*d2udy2 + s*d2udfy2
            d2udt2_new = d2udt_22*u + 2*dudt_2*dudt + t_2*d2udt2 + d2uds2*f
    
            #Zero Ic and BC
    #         u_new   = t_2*u *0
    #         d2udx2_new = t_2*d2udx2 
    #         d2udy2_new = t_2*d2udy2 
    #         d2udt2_new = d2udt_22*u 
            
            return u_new, dudt, d2udx2_new, d2udy2_new, d2udt2_new# skip updating first order gradients (not needed for loss)
        
    
    

    I also made some changes to the FD file, so that it reads as follows:

    
    import numpy as np
    import time
    from seismic_CPML_helper import get_dampening_profiles
    
    # todo: is this faster in parallel with np.roll?
    
    def seismicCPML2D_wS(NX,
                    NY,
                    NSTEPS,
                    DELTAX,
                    DELTAY,
                    DELTAT,
                    NPOINTS_PML,
                    velocity,
                    density,
                    initial_pressures,
                    f0=20.,
                    dtype=np.float32,
                    output_wavefields=True,
                    gather_is=None):
        
        "Run seismicCPML2D"
        
        ## INPUT PARAMETERS
        velocity = velocity.astype(dtype)
        density = density.astype(dtype)
        
        if type(gather_is) != type(None): output_gather = True
        else: output_gather = False
        
        K_MAX_PML = 1.
        ALPHA_MAX_PML = 2.*np.pi*(f0/2.)# from Festa and Vilotte
        NPOWER = 2.# power to compute d0 profile
        Rcoef = 0.001
        
        STABILITY_THRESHOLD = 1e25
        ##
        
        
        # STABILITY CHECKS
        
        # basically: delta x > np.sqrt(3) * max(v) * delta t
        courant_number = np.max(velocity) * DELTAT * np.sqrt(1/(DELTAX**2) + 1/(DELTAY**2))
        if courant_number > 1.: raise Exception("ERROR: time step is too large, simulation will be unstable %.2f"%(courant_number))
        if NPOWER < 1: raise Exception("ERROR: NPOWER must be greater than 1")
        
        
        # GET DAMPENING PROFILES
        
        [[a_x, a_x_half, b_x, b_x_half, K_x, K_x_half],
         [a_y, a_y_half, b_y, b_y_half, K_y, K_y_half]] = get_dampening_profiles(velocity, NPOINTS_PML, Rcoef, K_MAX_PML, ALPHA_MAX_PML, NPOWER, DELTAT, DELTAS=(DELTAX, DELTAY), dtype=dtype, qc=False)
        
    
        # INITIALISE ARRAYS
        
        kappa = density*(velocity**2)
        
        # pressure_present = initial_pressures[1].astype(dtype)
        # pressure_past = initial_pressures[0].astype(dtype)
    
        #zero IC
        pressure_present = np.zeros((NX, NY), dtype=dtype)
        pressure_past = np.zeros((NX, NY), dtype=dtype)
        
        
        memory_dpressure_dx = np.zeros((NX, NY), dtype=dtype)
        memory_dpressure_dy = np.zeros((NX, NY), dtype=dtype)
        
        memory_dpressurexx_dx = np.zeros((NX, NY), dtype=dtype)
        memory_dpressureyy_dy = np.zeros((NX, NY), dtype=dtype)
        
        if output_wavefields: wavefields = np.zeros((NSTEPS, NX, NY), dtype=dtype)
        if output_gather: gather = np.zeros((gather_is.shape[0], NSTEPS), dtype=dtype)
        
        # precompute density_half arrays
        density_half_x = np.pad(0.5 * (density[1:NX,:]+density[:NX-1,:]), [[0,1],[0,0]], mode="edge")
        density_half_y = np.pad(0.5 * (density[:,1:NY]+density[:,:NY-1]), [[0,0],[0,1]], mode="edge")
        
        
        # RUN SIMULATION
        
        start = time.time()
        for it in range(NSTEPS):
                    
            # compute the first spatial derivatives divided by density
            
            value_dpressure_dx = np.pad((pressure_present[1:NX,:]-pressure_present[:NX-1,:]) / DELTAX, [[0,1],[0,0]], mode="constant", constant_values=0.)
            value_dpressure_dy = np.pad((pressure_present[:,1:NY]-pressure_present[:,:NY-1]) / DELTAY, [[0,0],[0,1]], mode="constant", constant_values=0.)
        
            memory_dpressure_dx = b_x_half * memory_dpressure_dx + a_x_half * value_dpressure_dx
            memory_dpressure_dy = b_y_half * memory_dpressure_dy + a_y_half * value_dpressure_dy
        
            value_dpressure_dx = value_dpressure_dx / K_x_half + memory_dpressure_dx
            value_dpressure_dy = value_dpressure_dy / K_y_half + memory_dpressure_dy
        
            pressure_xx = value_dpressure_dx / density_half_x
            pressure_yy = value_dpressure_dy / density_half_y
            
            # compute the second spatial derivatives
            
            value_dpressurexx_dx = np.pad((pressure_xx[1:NX,:]-pressure_xx[:NX-1,:]) / DELTAX, [[1,0],[0,0]], mode="constant", constant_values=0.)
            value_dpressureyy_dy = np.pad((pressure_yy[:,1:NY]-pressure_yy[:,:NY-1]) / DELTAY, [[0,0],[1,0]], mode="constant", constant_values=0.)
        
            memory_dpressurexx_dx = b_x * memory_dpressurexx_dx + a_x * value_dpressurexx_dx
            memory_dpressureyy_dy = b_y * memory_dpressureyy_dy + a_y * value_dpressureyy_dy
            
            value_dpressurexx_dx = value_dpressurexx_dx / K_x + memory_dpressurexx_dx
            value_dpressureyy_dy = value_dpressureyy_dy / K_y + memory_dpressureyy_dy
            
            dpressurexx_dx = value_dpressurexx_dx
            dpressureyy_dy = value_dpressureyy_dy
            
            # apply the time evolution scheme
            # we apply it everywhere, including at some points on the edges of the domain that have not be calculated above,
            # which is of course wrong (or more precisely undefined), but this does not matter because these values
            # will be erased by the Dirichlet conditions set on these edges below
            # pressure_future =   - pressure_past \
            #                     + 2 * pressure_present \
            #                     + DELTAT*DELTAT*(dpressurexx_dx+dpressureyy_dy)*kappa
    
            # Stepping with a source function, p is passed from the main file as p0 (Gaussian pulse)
            # location of source is passed within p0
            def func_t(p,t_inst):
                Amp = 1
                freq = 1
                t = t_inst*DELTAT
                return p*Amp*np.sin(2*np.pi**freq*t)
                
            pressure_future =   - pressure_past \
                                + 2 * pressure_present \
                                + DELTAT*DELTAT*(dpressurexx_dx+dpressureyy_dy)*kappa \
                                + DELTAT*DELTAT*func_t(initial_pressures[1].astype(dtype),it)
                    
            
            # apply Dirichlet conditions at the bottom of the C-PML layers,
            # which is the right condition to implement in order for C-PML to remain stable at long times
            
            # Dirichlet conditions
            pressure_future[0,:] = pressure_future[-1,:] = 0.
            pressure_future[:,0] = pressure_future[:,-1] = 0.
            
            if output_wavefields: wavefields[it,:,:] = np.copy(pressure_present)
            if output_gather:
                gather[:,it] = np.copy(pressure_present[gather_is[:,0], gather_is[:,1]])# nb important to copy
    
            
            # check stability of the code, exit if unstable
            if(np.max(np.abs(pressure_present)) > STABILITY_THRESHOLD):
                raise Exception('code became unstable and blew up')
        
            # move new values to old values (the present becomes the past, the future becomes the present)
            pressure_past = pressure_present
            pressure_present = pressure_future
        
            #print(pressure_past.dtype, pressure_future.dtype, wavefields.dtype, gather.dtype)
            if it % 10000 == 0 and it!=0:
                rate = (time.time()-start)/10.
                print("[%i/%i] %.2f s per step"%(it, NSTEPS, rate))
                start = time.time()
        
        output = [None, None]
        if output_wavefields: output[0]=wavefields
        if output_gather: output[1]=gather
        return output
    

    Mainly, I am attempting to change the IC and add a time-dependent source term, so that the equation becomes:

    
     d^2u/dx^2 + d^2u/dy^2 - (1/c^2) d^2u/dt^2 = S(x, y, t)
    

    where,

    Amp = 1
    freq = 1
    sx = sy =self.source_sd
    mx = 0; my = 0
    GP = torch.exp(-0.5*(( (x[:,0:1]-mx)/sx)**2 + ((x[:,1:2]-my)/sy)**2 ))
    
    S = Amp * GP * torch.sin(2*np.pi*freq*x[:,2:3])  #The Source function
    

    but the results are as shown in the image. So, my questions are:

    1. How can I implement my specified boundary and initial conditions in a better way than the one I tried (if my attempt was correct)? I don't fully understand how to use the provided boundary-condition helper functions to express my specific equations:
    
     d^2u/dx^2 + d^2u/dy^2 - (1/c^2) d^2u/dt^2 = S(x, y, t)

    Boundary conditions:
            u(x,y,0) = 0
            u(0,0,t) = 2 * sin (2 * pi * t)	#time-dependent source
            
    
    

    The results in the image were executed with these batch sizes:

    batch_size = (30,30,30)
    batch_size_test = (40,40,15)
    

    because of the limited memory on my GPU.

    2. Does this affect the results? If so, how can I increase batch_size_test without getting an OOM error?

    Thanks again! Looking forward to your reply.

    opened by engsbk 1
  • Running Error

    Hello,

    I am getting an error while running the file paper_main_1D.py. I am using Spyder IDE on Anaconda.

    "OSError: [WinError 1455] The paging file is too small for this operation to complete. Error loading "C:\Users\gaura\Anaconda3\envs\torch\lib\site-packages\torch\lib\caffe2_detectron_ops_gpu.dll" or one of its dependencies."

    opened by Gaurav11ME 3
  • Some confusion about unnormalization

    Your code is awesome! 👍 I have some difficulties in understanding your code.

        # code from main.full_model_FBPINN
        y = y * c.Y_N[1] + c.Y_N[0]
    

    I think this line is there to achieve the unnormalization, and I found the definition of c.Y_N in constants.py.

      # codes from constants.Constants
       w = 1e-10
       self.Y_N = (0,1/self.P.w**2)# mu, sd
    

    This seems to multiply by a very large constant. I don't understand why this is necessary, or why this value was chosen?

    opened by xuliang5115 1
Owner
Ben Moseley
Physics + AI researcher at University of Oxford, ML lead at NASA Frontier Development Lab