Under review as a conference paper at ICLR 2024
PINNACLE: A COMPREHENSIVE BENCHMARK OF PHYSICS-INFORMED NEURAL NETWORKS FOR SOLVING PDES
Anonymous authors
Paper under double-blind review
ABSTRACT
While significant progress has been made on Physics-Informed Neural Networks (PINNs), a comprehensive comparison of these methods across a wide range of Partial Differential Equations (PDEs) is still lacking. This study introduces PINNacle, a benchmarking tool designed to fill this gap. PINNacle provides a diverse dataset, comprising over 20 distinct PDEs from various domains, including heat conduction, fluid dynamics, biology, and electromagnetics. These PDEs encapsulate key challenges inherent to real-world problems, such as complex geometry, multi-scale phenomena, nonlinearity, and high dimensionality. PINNacle also offers a user-friendly toolbox, incorporating about 10 state-of-the-art PINN methods for systematic evaluation and comparison. We have conducted extensive experiments with these methods, offering insights into their strengths and weaknesses. In addition to providing a standardized means of assessing performance, PINNacle also offers an in-depth analysis to guide future research, particularly in areas such as domain decomposition methods and loss reweighting for handling multi-scale problems and complex geometry. To the best of our knowledge, it is the largest benchmark with a diverse and comprehensive evaluation that will undoubtedly foster further research in PINNs.
1 INTRODUCTION
Partial Differential Equations (PDEs) are of paramount importance in science and engineering, as
they often underpin our understanding of intricate physical systems such as fluid flow, heat transfer,
and stress distribution (Morton & Mayers, 2005). The computational simulation of PDE systems
has been a focal point of research for an extensive period, leading to the development of numerical
methods such as finite difference (Causon & Mingham, 2010), finite element (Reddy, 2019), and
finite volume methods (Eymard et al., 2000).
Recent advancements have led to the use of deep neural networks to solve forward and inverse
problems involving PDEs (Raissi et al., 2019; Yu et al., 2018; Daw et al., 2017; Wang et al.,
2020). Among these, Physics-Informed Neural Networks (PINNs) have emerged as a promising
alternative to traditional numerical methods in solving such problems (Raissi et al., 2019; Karniadakis
et al., 2021). PINNs leverage the underlying physical laws and available data to effectively handle
various scientific and engineering applications. The growing interest in this field has spurred the
development of numerous PINN variants, each tailored to overcome specific challenges or to enhance
the performance of the original framework.
While PINN methods have achieved remarkable progress, a comprehensive comparison of these
methods across diverse types of PDEs is currently lacking. Establishing such a benchmark is crucial
as it could enable researchers to more thoroughly understand existing methods and pinpoint potential
challenges. Despite the availability of several studies comparing sampling methods (Wu et al., 2023)
and reweighting methods (Bischof & Kraus, 2021), there has been no concerted effort to develop
a rigorous benchmark using challenging datasets from real-world problems. The sheer variety and
inherent complexity of PDEs make it difficult to conduct a comprehensive analysis. Moreover,
different mathematical properties and application scenarios further complicate the task, requiring the
benchmark to be adaptable and exhaustive.
To resolve these challenges, we propose PINNacle, a comprehensive benchmark for evaluating and
understanding the performance of PINNs. As shown in Fig. 1, PINNacle consists of three major
[Figure 1 diagram: Datasets/PDEs — Linear: Poisson, Heat, Wave; Nonlinear: Burgers, NS, GS, KS; High Dimensional: Poisson, Diffusion; numerical solutions from FEM and spectral methods. Toolbox — Reweighting: PINN-LRA, PINN-NTK; Resampling: RAR; Optimizer: MultiAdam; Loss functions: gPINN, vPINN; Network architecture: LAAF, GAAF; Domain decomposition: FBPINN. Evaluation — Metrics: L1/L2 relative error, MSE; Visualization: temporal error analysis, 2D/3D plots. Backend: DeepXDE, PyTorch. Applications: fluid mechanics, electromagnetism, heat conduction, geophysics. Benchmark of PINNs on 20+ PDEs with 10+ SOTA methods.]
Figure 1: Architecture of PINNacle. It contains a dataset covering more than 20 PDEs, a toolbox that
implements about 10 SOTA methods, and an evaluation module. These methods have a wide range of
application scenarios like fluid mechanics, electromagnetism, heat conduction, geophysics, and so on.
components — a diverse dataset, a toolbox, and evaluation modules. The dataset comprises tasks
from over 20 different PDEs from various domains, including heat conduction, fluid dynamics,
biology, and electromagnetics. Each task brings its own set of challenges, such as complex geometry,
multi-scale phenomena, nonlinearity, and high dimensionality, thus providing a rich testing ground
for PINNs. The toolbox incorporates more than 10 state-of-the-art (SOTA) PINN methods, enabling
a systematic comparison of different strategies, including loss reweighting, variational formulation,
adaptive activations, and domain decomposition. These methods can be flexibly applied to the tasks
in the dataset, offering researchers a convenient way to evaluate the performance of PINNs, and the toolbox is also friendly to secondary development. The evaluation modules provide a standardized means
of assessing the performance of different PINN methods across all tasks, ensuring consistency in
comparison and facilitating the identification of strengths and weaknesses in various methods.
PINNacle provides a robust, diverse, and comprehensive benchmark suite for PINNs, contributing
significantly to the field’s understanding and application. It represents a major step forward in the
evolution of PINNs which could foster more innovative research and development in this exciting
field. In a nutshell, our contributions can be summarized as follows:
• We design a dataset encompassing over 20 challenging PDE problems. These problems encapsulate several critical challenges faced by PINNs, including handling complex geometries, multi-scale phenomena, nonlinearity, and high-dimensional problems.
• We systematically evaluate more than 10 carefully selected representative variants of PINNs through thorough experiments and ablation studies. To the best of our knowledge, this is the largest benchmark comparing different PINN variants.
• We provide an in-depth analysis to guide future research. We show that using loss reweighting and domain decomposition methods can improve performance on multi-scale and
complex geometry problems. Variational formulation achieves better performance on inverse
problems. However, few methods can adequately address nonlinear problems, indicating a
future direction for exploration and advancement.
2 RELATED WORK
2.1 BENCHMARKS AND DATASETS IN SCIENTIFIC MACHINE LEARNING
The growing trend of AI applications in scientific research has stimulated the development of various
benchmarks and datasets, which differ greatly in data formats, sizes, and governing principles. For
instance, (Lu et al., 2022) presents a benchmark for comparing neural operators, while (Botev et al., 2021; Otness et al., 2021) benchmark methods for learning latent Newtonian mechanics.
Furthermore, domain-specific datasets and benchmarks exist in fluid mechanics (Huang et al., 2021),
climate science (Racah et al., 2017; Cachay et al., 2021), quantum chemistry (Axelrod & Gomez-
Bombarelli, 2022), and biology (Boutet et al., 2016).
Beyond these domain-specific datasets and benchmarks, physics-informed machine learning has
received considerable attention (Hao et al., 2022; Cuomo et al., 2022) since the advent of Physics-
Informed Neural Networks (PINNs) (Raissi et al., 2019). These methods successfully incorporate
physical laws into model training, demonstrating immense potential across a variety of scientific
and engineering domains. Various papers have compared different components within the PINN
framework; for instance, (Das & Tesfamariam, 2022) and (Wu et al., 2023) investigate the sampling
methods of collocation points in PINNs, and (Bischof & Kraus, 2021) compare reweighting techniques for different loss components. PDEBench (Takamoto et al., 2022) and PDEArena (Gupta
& Brandstetter, 2022) design multiple tasks to compare different methods in scientific machine
learning such as PINNs, FNO, and U-Net. Nevertheless, a comprehensive comparison of various
PINN approaches remains absent in the literature.
2.2 SOFTWARE AND TOOLBOXES
A plethora of software solutions have been developed for solving PDEs with neural networks. These
include SimNet (Hennigh et al., 2021), NeuralPDE (Rackauckas & Nie, 2017), TorchDiffEq (Chen
et al., 2020), and PyDEns (Khudorozhkov et al., 2019). More recently, DeepXDE (Lu et al., 2021)
has been introduced as a fundamental library for implementing PINNs across different backends.
However, there remains a void for a toolbox that provides a unified implementation for advanced PINN
variants. Our PINNacle fills this gap by offering a flexible interface that facilitates the implementation
and evaluation of diverse PINN variants. We furnish clear and concise code for researchers to execute
benchmarks across all problems and methods.
2.3 VARIANTS OF PHYSICS-INFORMED NEURAL NETWORKS
PINNs have received much attention due to their remarkable performance in solving both
forward and inverse PDE problems. However, vanilla PINNs have many limitations. Researchers
have proposed numerous PINN variants to address challenges associated with high-dimensionality,
non-linearity, multi-scale issues, and complex geometries (Hao et al., 2022; Cuomo et al., 2022;
Karniadakis et al., 2021; Krishnapriyan et al., 2021). Broadly speaking, these variants can be
categorized into: loss reweighting/resampling (Wang et al., 2021a;b; Tang et al., 2021; Wu et al.,
2023; Nabian et al., 2021), innovative optimizers (Yao et al., 2023), novel loss functions such as
variational formulations (Yu et al., 2018; Kharazmi et al., 2019; 2021; Khodayi-Mehr & Zavlanos,
2020) or regularization terms (Yu et al., 2022; Son et al., 2021), and novel architectures like domain
decomposition (Jagtap & Karniadakis, 2021; Li et al., 2019; Moseley et al., 2021; Jagtap et al., 2020c)
and adaptive activations (Jagtap et al., 2020b;a). These variants have enhanced PINN’s performance
across various problems. Here we select representative methods from each category and conduct a
comprehensive analysis using our benchmark dataset to evaluate these variants.
3 PINNACLE: A HIERARCHICAL BENCHMARK FOR PINNS
In this section, we first introduce the preliminaries of PINNs. Then we introduce the details of
datasets (tasks), PINN methods, the toolbox framework, and the evaluation metrics.
3.1 PRELIMINARIES OF PHYSICS-INFORMED NEURAL NETWORKS
Physics-informed neural networks are neural network-based methods for solving PDEs as well as inverse problems of PDEs. Specifically, consider a general Partial Differential Equation (PDE) system defined on $\Omega$, which can be represented as:
$$\mathcal{F}(u(x); x) = 0, \quad x \in \Omega, \qquad (1)$$
$$\mathcal{B}(u(x); x) = 0, \quad x \in \partial\Omega, \qquad (2)$$
where $\mathcal{F}$ is a differential operator and $\mathcal{B}$ is the boundary/initial condition. PINNs use a neural network $u_\theta(x)$ with parameters $\theta$ to approximate $u(x)$. The objective is to minimize the following
loss function:

$$\mathcal{L}(\theta) = \frac{w_c}{N_c}\sum_{i=1}^{N_c}\left\|\mathcal{F}\big(u_\theta(x_c^i); x_c^i\big)\right\|^2 + \frac{w_b}{N_b}\sum_{i=1}^{N_b}\left\|\mathcal{B}\big(u_\theta(x_b^i); x_b^i\big)\right\|^2 + \frac{w_d}{N_d}\sum_{i=1}^{N_d}\left\|u_\theta(x_d^i) - u(x_d^i)\right\|^2, \quad (3)$$

where $w_c, w_b, w_d$ are weights. The first two terms enforce the PDE constraints on $\{x_c^i\}_{i=1}^{N_c}$ and the boundary conditions on $\{x_b^i\}_{i=1}^{N_b}$. The last term is the data loss, which is optional when there is data
available. However, PINNs have several inherent drawbacks. First, PINNs optimize a mixture of imbalanced loss terms, which might hinder convergence, as illustrated in (Wang et al., 2021a).
Second, nonlinear or stiff PDEs might lead to unstable optimization (Wang et al., 2021b). Third, the
vanilla MLPs might have difficulty in representing multi-scale or high-dimensional functions. For
example, (Krishnapriyan et al., 2021) shows that vanilla PINNs only work for a small parameter
range, even in a simple convection problem. To resolve these challenges, numerous variants of PINNs have been proposed. However, a comprehensive comparison of these methods is lacking, and thus it is imperative to develop a benchmark.
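To make the objective in Eq. 3 concrete, the following PyTorch sketch assembles the residual and boundary terms for a toy 1D Poisson problem ($u_{xx} = -\sin x$ with zero boundary values). It is an illustrative example, not the benchmark's implementation; the problem, network size, and weights $w_c, w_b$ are our own placeholder choices.

```python
import torch

# Small MLP surrogate u_theta(x)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def pinn_loss(net, x_c, x_b, u_b, w_c=1.0, w_b=1.0):
    # Residual term: F(u) = u_xx + sin(x) = 0 at collocation points
    x_c = x_c.clone().requires_grad_(True)
    u = net(x_c)
    u_x = torch.autograd.grad(u.sum(), x_c, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x_c, create_graph=True)[0]
    residual = u_xx + torch.sin(x_c)
    # Boundary term: B(u) = u - u_b = 0 at boundary points
    boundary = net(x_b) - u_b
    return w_c * residual.pow(2).mean() + w_b * boundary.pow(2).mean()

x_c = torch.rand(128, 1) * torch.pi        # interior collocation points
x_b = torch.tensor([[0.0], [torch.pi]])    # boundary of the domain [0, pi]
u_b = torch.zeros(2, 1)                    # u(0) = u(pi) = 0
loss = pinn_loss(net, x_c, x_b, u_b)
```

Minimizing this loss with a gradient-based optimizer gives the vanilla PINN training loop; the variants benchmarked here modify the weights, the sampling of the collocation points, the optimizer, or the network itself.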
3.2 DATASETS
To effectively compare PINN variants, we have curated a set of PDE problems (datasets) representing
a wide range of challenges. We chose PDEs from diverse domains, reflecting their importance in
science and engineering. Our dataset includes 22 unique cases, with further details in Appendix B.
• The Burgers’ Equation, fundamental to fluid mechanics, with both one- and two-dimensional problems considered.
• The Poisson Equation, widely used in mathematics and physics, with four different cases.
• The Heat Equation, a time-dependent PDE that describes diffusion or heat conduction,
demonstrated in four unique cases.
• The Navier-Stokes Equation, describing the motion of viscous fluid substances, showcased
in three scenarios: a lid-driven flow (NS2d-C), a geometrically complex backward step flow
(NS2d-CG), and a time-dependent problem (NS2d-LT).
• The Wave Equation, modeling wave behavior, exhibited in three cases.
• Chaotic PDEs, featuring two popular examples: the Gray-Scott (GS) and Kuramoto-Sivashinsky (KS) equations.
• High Dimensional PDEs, including the high-dimensional Poisson equation (PNd) and the
high-dimensional diffusion or heat equation (HNd).
• Inverse Problems, focusing on the reconstruction of the coefficient field from noisy data
for the Poisson equation (PInv) and the diffusion equation (HInv).
It is important to note that we have chosen PDEs encompassing a wide range of mathematical
properties. This ensures that the benchmarks do not favor a specific type of PDE. The selected PDE
problems introduce several core challenges, which include:
• Complex Geometry: Many PDE problems involve complex or irregular geometry, such as
heat conduction or wave propagation around obstacles. These complexities pose significant
challenges for PINNs in terms of accurate boundary behavior representation.
• Multi-Scale Phenomena: Multi-scale phenomena, where the solution varies significantly
over different scales, are prevalent in situations such as turbulent fluid flow. Achieving a
balanced representation across all scales is a challenge for PINNs in multi-scale scenarios.
• Nonlinear Behavior: Many PDEs exhibit nonlinear or even chaotic behavior, where minor variations in initial conditions can lead to substantial divergence in outcomes. The optimization of PINNs becomes particularly challenging on nonlinear PDEs.
• High Dimensionality: High-dimensional PDE problems, frequently encountered in quantum
mechanics, present significant challenges for PINNs due to the “curse of dimensionality”.
This term refers to the increase in computational complexity with the addition of each
dimension, accompanied by statistical issues like data sparsity in high-dimensional space.
These challenges are selected due to their frequent occurrence in numerous real-world applications.
As such, a method’s performance in addressing these challenges serves as a reliable indicator of
its overall practical utility. Table 1 presents a detailed overview of the dataset, the PDEs, and the
challenges associated with these problems. We generate data using the FEM solver provided by COMSOL 6.0 (Multiphysics, 1998) for problems with complex geometry, and the spectral method provided by Chebfun (Driscoll et al., 2014) for chaotic problems. More details can be found in Appendix B.
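The chaotic-case reference data comes from Chebfun's spectral solvers. Purely as an illustration of the spectral idea (not the actual data-generation pipeline), a Fourier pseudo-spectral update for 1D periodic diffusion can be written as:

```python
import numpy as np

def spectral_diffusion_step(u, dt, nu, length=2 * np.pi):
    """Advance u_t = nu * u_xx by dt on a periodic grid, exactly in Fourier space."""
    n = u.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=length / n)  # angular wavenumbers
    u_hat = np.fft.fft(u)
    u_hat *= np.exp(-nu * k**2 * dt)  # each Fourier mode decays at rate nu * k^2
    return np.fft.ifft(u_hat).real

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
u1 = spectral_diffusion_step(np.sin(x), dt=0.5, nu=1.0)
```

For $u_0 = \sin x$ this update reproduces the analytic decay $e^{-\nu t}\sin x$ to machine precision, which is what makes spectral methods attractive for generating high-accuracy reference solutions on periodic domains.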
[Figure: visualization of example cases with the four challenges — A. Complex Geometry, B. Multi-scale, C. Nonlinear and Chaotic, D. High dimension]

| Dataset | Complex geometry | Multi-scale | Nonlinearity | High dim |
|---|---|---|---|---|
| Burgers (1∼2) | × | × | √ | × |
| Poisson (3∼6) | ×/√ | ×/√ | × | × |
| Heat (7∼10) | ×/√ | ×/√ | × | × |
| NS (11∼13) | ×/√ | ×/√ | √ | × |
| Wave (14∼16) | ×/√ | ×/√ | × | × |
| Chaotic (17∼18) | × | √ | √ | × |
| High dim (19∼20) | × | × | × | √ |
| Inverse (21∼22) | × | × | √ | × |
Table 1: Overview of our datasets along with their challenges. We chose 22 cases in total to evaluate
the methods of PINNs. The left picture shows the visualization of cases with these four challenges,
i.e., complex geometry, multi-scale, nonlinearity, and high dimension.
3.3 METHODS AND TOOLBOX
After conducting an extensive literature review, we present an overview of diverse PINN approaches for comparison, followed by the high-level structure of PINNacle.
3.3.1 METHODS
As mentioned above, variants of PINNs are mainly based on loss functions, architecture, and optimizer
(Hao et al., 2022). The modifications to loss functions can be divided into reweighting existing
losses and developing novel loss functions like regularization and variational formulation. Variants of
architectures include using domain decomposition and adaptive activations.
The methods discussed are directly correlated with the challenges highlighted in Table 1. For example,
domain decomposition methods are particularly effective for problems involving complex geometries
and multi-scale phenomena. Meanwhile, loss reweighting strategies are adept at addressing imbalances in problems with multiple losses. We have chosen variants from these categories based on their
significant contributions to the field. Here, we list the primary categories and representative methods
as summarized in Table 2:
• Loss reweighting/Resampling (2∼4): PINNs are trained with a mixed loss of PDE residuals, boundary conditions, and available data losses, as shown in Eq. 3. Various methods (Wang et al., 2021a; 2022b; Bischof & Kraus, 2021; Maddu et al., 2022; Rohrhofer et al., 2021) propose different strategies to adjust the weights $w_c$, $w_b$, and $w_d$ at different epochs, or to resample the collocation points $\{x_c^i\}$ and $\{x_b^i\}$ in Eq. 3, which indirectly adjusts the weights (Wu et al., 2023; Nabian et al., 2021). We choose three famous examples, i.e., reweighting using gradient norms (PINN-LRA) (Wang et al., 2021a), using the neural tangent kernel (PINN-NTK) (Wang et al., 2022b), and residual-based resampling (RAR) (Lu et al., 2021; Wu et al., 2023).
• Novel optimizer (5): To handle the problem of multi-scale objectives, new optimizers (Liu et al., 2022; Yao et al., 2023) have been proposed. We chose MultiAdam, which is resistant to changes in domain scale.
• Novel loss functions (6∼7): Some works introduce novel loss functions like variational
formulation (Sirignano & Spiliopoulos, 2018; Khodayi-Mehr & Zavlanos, 2020; Kharazmi
et al., 2021) and regularization terms to improve training. We choose hp-VPINN (Kharazmi
et al., 2021) and gPINN (Yu et al., 2022; Son et al., 2021), which are representative examples
from these two categories.
• Novel activation architectures (8∼10): Some works propose various network architectures,
such as using CNN and LSTM (Zhang et al., 2020; Gao et al., 2021; Ren et al., 2022),
| Method | Complex Geometry | Multi-scale | Nonlinearity | High dim |
|---|---|---|---|---|
| Vanilla PINN (1) | × | × | × | × |
| Reweighting/Resampling (2∼4) | √ | √ | × | × |
| Novel Optimizer (7) | × | √ | × | × |
| Novel Loss Functions (5∼6) | × | × | × | × |
| Novel Architecture (8∼10) | √ | √ | × | × |

Table 2: Overview of methods in our PINNacle. √ denotes that the method is potentially designed to solve, or shows empirical improvements on, problems encountering the challenge, and vice versa.
custom activation functions (Jagtap et al., 2020a;b), and domain decomposition (Jagtap &
Karniadakis, 2021; Shukla et al., 2021; Jagtap et al., 2020c; Moseley et al., 2021). Among
adaptive activations for PINNs, we choose LAAF (Jagtap et al., 2020a) and GAAF (Jagtap
et al., 2020b). Domain decomposition is a method that divides the whole domain into
multiple subdomains and trains subnetworks on these subdomains. It is helpful for solving
multi-scale problems, but multiple subnetworks increase the difficulty of training. XPINNs,
cPINNs, and FBPINNs (Jagtap & Karniadakis, 2021; Jagtap et al., 2020c; Moseley et al.,
2021) are three representative examples. We choose FBPINN, a state-of-the-art domain decomposition method that applies domain-specific normalization to stabilize training.
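As an illustration of the resampling idea behind RAR, a greedy residual-based refinement step can be sketched as follows. This is a simplified sketch (the benchmark relies on the DeepXDE implementation), and `residual_fn` is a hypothetical callable returning the PDE residual of the current network at given points.

```python
import numpy as np

def rar_refine(points, residual_fn, n_candidates=1000, n_add=50, low=0.0, high=1.0):
    """Add the candidate collocation points with the largest PDE residual."""
    dim = points.shape[1]
    candidates = np.random.uniform(low, high, size=(n_candidates, dim))
    res = np.abs(residual_fn(candidates))   # |F(u_theta(x))| per candidate
    worst = np.argsort(res)[-n_add:]        # indices of the largest residuals
    return np.concatenate([points, candidates[worst]], axis=0)

# Toy usage: a synthetic residual that peaks near x = 0.9
np.random.seed(0)
pts = np.random.uniform(0, 1, size=(100, 1))
pts = rar_refine(pts, lambda x: np.exp(-100 * (x[:, 0] - 0.9) ** 2))
```

The added points concentrate where the residual is large, which indirectly places more weight on the hard regions of the domain during training.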
3.3.2 STRUCTURE OF TOOLBOX
We provide a user-friendly and concise toolbox for implementing, training, and evaluating diverse
PINN variants. Specifically, our codebase is based on DeepXDE and provides a series of encapsulated
classes and functions to facilitate high-level training and custom PDEs. These utilities allow for a
standardized and streamlined approach to the implementation of various PINN variants and PDEs.
Moreover, we provide many auxiliary functions, including computing different metrics, visualizing
predictions, and recording results.
Beyond the unified implementation of diverse PINNs, we also design an adaptive multi-GPU parallel training framework to enhance the efficiency of systematic evaluations of PINN methods. It parallelizes training across multiple tasks, effectively balancing the computational loads of multiple GPUs, and allows for the execution of larger and more complex tasks. In a nutshell, we
provide an example code for training and evaluating PINNs on two Poisson equations using our
PINNacle framework in Appendix D.
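The scheduler's behavior can be sketched as a worker pool with one worker per GPU. This is an illustrative simplification; the function names and the round-robin policy below are our own, not the actual PINNacle scheduler.

```python
from concurrent.futures import ThreadPoolExecutor

def run_task(gpu_id, task):
    # In a real framework this would pin the job to device `gpu_id`
    # (e.g., via CUDA_VISIBLE_DEVICES) and launch one (method, PDE) training run.
    return f"{task} finished on GPU {gpu_id}"

def schedule(tasks, n_gpus):
    # One worker per GPU; round-robin device assignment balances the load
    # until the task queue is drained.
    jobs = [(i % n_gpus, t) for i, t in enumerate(tasks)]
    with ThreadPoolExecutor(max_workers=n_gpus) as pool:
        return list(pool.map(lambda args: run_task(*args), jobs))

results = schedule([f"pde{i}" for i in range(6)], n_gpus=2)
```

In practice the pool size equals the number of physical devices, so a long sweep of (method, PDE) pairs keeps every GPU busy without manual assignment.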
3.4 EVALUATION
To comprehensively analyze the discrepancy between the PINN solutions and the true solutions, we adopt multiple metrics to evaluate the performance of the PINN variants. Generally, we choose several metrics that are commonly used in the literature and apply to all methods and problems. We suppose that $y = (y_i)_{i=1}^{n}$ is the prediction and $y' = (y_i')_{i=1}^{n}$ is the ground truth, where $n$ is the number of testing examples. Specifically, we use the $\ell_2$ relative error (L2RE) and $\ell_1$ relative error (L1RE), which are the two most commonly used metrics to measure the global quality of the solution,
$$\text{L2RE} = \sqrt{\frac{\sum_{i=1}^{n}(y_i - y_i')^2}{\sum_{i=1}^{n}(y_i')^2}}, \qquad \text{L1RE} = \frac{\sum_{i=1}^{n}|y_i - y_i'|}{\sum_{i=1}^{n}|y_i'|}. \quad (4)$$
We also compute the max error (mERR for short), mean squared error (MSE), and Fourier error (fMSE) for a detailed analysis of the prediction. These three metrics are computed as follows:

$$\text{MSE} = \frac{1}{n}\sum_{i=1}^{n}(y_i - y_i')^2, \quad \text{mERR} = \max_i |y_i - y_i'|, \quad \text{fMSE} = \sqrt{\frac{\sum_{k=k_{\min}}^{k_{\max}} |\mathcal{F}(y) - \mathcal{F}(y')|^2}{k_{\max} - k_{\min} + 1}}, \quad (5)$$

where $\mathcal{F}$ denotes the Fourier transform and $k_{\min}, k_{\max}$ are chosen similarly to PDEBench (Takamoto et al., 2022). Besides, for time-dependent problems, investigating how the quality of the solution evolves with time is important; therefore, we compute the L2RE varying with time in Appendix E.2.
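These metrics are straightforward to implement; the NumPy sketch below is a reference implementation of Eqs. 4–5, with the frequency band `kmin`/`kmax` chosen arbitrarily here for illustration (the benchmark follows PDEBench's choices).

```python
import numpy as np

def l2re(y, y_true):
    return np.linalg.norm(y - y_true) / np.linalg.norm(y_true)

def l1re(y, y_true):
    return np.abs(y - y_true).sum() / np.abs(y_true).sum()

def mse(y, y_true):
    return np.mean((y - y_true) ** 2)

def merr(y, y_true):
    return np.max(np.abs(y - y_true))

def fmse(y, y_true, kmin=0, kmax=4):
    # Squared spectral error restricted to the band [kmin, kmax]
    err = np.abs(np.fft.fft(y) - np.fft.fft(y_true)) ** 2
    return np.sqrt(err[kmin:kmax + 1].sum() / (kmax - kmin + 1))

y_true = np.sin(np.linspace(0, 2 * np.pi, 64, endpoint=False))
y = y_true + 0.01  # a prediction with a constant 0.01 offset
```

Because L2RE and L1RE are normalized by the magnitude of the ground truth, they are comparable across PDEs with very different solution scales, which is why they serve as the headline metrics.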
We assess the performance of PINNs against reference solutions from numerical solvers. Experimental results using the $\ell_2$ relative error (L2RE) metric are included in the main text, while a more exhaustive set of results, based on the aforementioned metrics, is available in Appendix E.1.
| Name | PINN | PINN-w | LBFGS | LRA | NTK | RAR | MultiAdam | gPINN | vPINN | LAAF | GAAF | FBPINN |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Burgers 1d-C | 1.45E-2 | 2.63E-2 | 1.33E-2 | 2.61E-2 | 1.84E-2 | 3.32E-2 | 4.85E-2 | 2.16E-1 | 3.47E-1 | 1.43E-2 | 5.20E-2 | 2.32E-1 |
| Burgers 2d-C | 3.24E-1 | 2.70E-1 | 4.65E-1 | 2.60E-1 | 2.75E-1 | 3.45E-1 | 3.33E-1 | 3.27E-1 | 6.38E-1 | 2.77E-1 | 2.95E-1 | – |
| Poisson 2d-C | 6.94E-1 | 3.49E-2 | NaN | 1.17E-1 | 1.23E-2 | 6.99E-1 | 2.63E-2 | 6.87E-1 | 4.91E-1 | 7.68E-1 | 6.04E-1 | 4.49E-2 |
| Poisson 2d-CG | 6.36E-1 | 6.08E-2 | 2.96E-1 | 4.34E-2 | 1.43E-2 | 6.48E-1 | 2.76E-1 | 7.92E-1 | 2.86E-1 | 4.80E-1 | 8.71E-1 | 2.90E-2 |
| Poisson 3d-CG | 5.60E-1 | 3.74E-1 | 7.05E-1 | 1.02E-1 | 9.47E-1 | 5.76E-1 | 3.63E-1 | 4.85E-1 | 7.38E-1 | 5.79E-1 | 5.02E-1 | 7.39E-1 |
| Poisson 2d-MS | 6.30E-1 | 7.60E-1 | 1.45E+0 | 7.94E-1 | 7.48E-1 | 6.44E-1 | 5.90E-1 | 6.16E-1 | 9.72E-1 | 5.93E-1 | 9.31E-1 | 1.04E+0 |
| Heat 2d-VC | 1.01E+0 | 2.35E-1 | 2.32E-1 | 2.12E-1 | 2.14E-1 | 9.66E-1 | 4.75E-1 | 2.12E+0 | 9.40E-1 | 6.42E-1 | 8.49E-1 | 9.52E-1 |
| Heat 2d-MS | 6.21E-2 | 2.42E-1 | 1.73E-2 | 8.79E-2 | 4.40E-2 | 7.49E-2 | 2.18E-1 | 1.13E-1 | 9.30E-1 | 7.40E-2 | 9.85E-1 | 8.20E-2 |
| Heat 2d-CG | 3.64E-2 | 1.45E-1 | 8.57E-1 | 1.25E-1 | 1.16E-1 | 2.72E-2 | 7.12E-2 | 9.38E-2 | – | 2.39E-2 | 4.61E-1 | 9.16E-2 |
| Heat 2d-LT | 9.99E-1 | 9.99E-1 | 1.00E+0 | 9.99E-1 | 1.00E+0 | 9.99E-1 | 1.00E+0 | 1.00E+0 | 1.00E+0 | 9.99E-1 | 9.99E-1 | 1.01E+0 |
| NS 2d-C | 4.70E-2 | 1.45E-1 | 2.14E-1 | NaN | 1.98E-1 | 4.69E-1 | 7.27E-1 | 7.70E-2 | 2.92E-1 | 3.60E-2 | 3.79E-2 | 8.45E-2 |
| NS 2d-CG | 1.19E-1 | 3.26E-1 | NaN | 3.32E-1 | 2.93E-1 | 3.34E-1 | 4.31E-1 | 1.54E-1 | 9.94E-1 | 8.24E-2 | 1.74E-1 | 8.27E+0 |
| NS 2d-LT | 9.96E-1 | 1.00E+0 | 9.70E-1 | 1.00E+0 | 9.99E-1 | 1.00E+0 | 1.00E+0 | 9.95E-1 | 1.73E+0 | 9.98E-1 | 9.99E-1 | 1.00E+0 |
| Wave 1d-C | 5.88E-1 | 2.85E-1 | NaN | 3.61E-1 | 9.79E-2 | 5.39E-1 | 1.21E-1 | 5.56E-1 | 8.39E-1 | 4.54E-1 | 6.77E-1 | 5.91E-1 |
| Wave 2d-CG | 1.84E+0 | 1.66E+0 | 1.33E+0 | 1.48E+0 | 2.16E+0 | 1.15E+0 | 1.09E+0 | 8.14E-1 | 7.99E-1 | 8.19E-1 | 7.94E-1 | 1.06E+0 |
| Wave 2d-MS | 1.34E+0 | 1.02E+0 | 1.37E+0 | 1.02E+0 | 1.04E+0 | 1.35E+0 | 1.01E+0 | 1.02E+0 | 9.82E-1 | 1.06E+0 | 1.06E+0 | 1.03E+0 |
| Chaotic GS | 3.19E-1 | 1.58E-1 | NaN | 9.37E-2 | 2.16E-1 | 9.46E-2 | 9.37E-2 | 2.48E-1 | 1.16E+0 | 9.47E-2 | 9.46E-2 | 7.99E-2 |
| Chaotic KS | 1.01E+0 | 9.86E-1 | NaN | 9.57E-1 | 9.64E-1 | 1.01E+0 | 9.61E-1 | 9.94E-1 | 9.72E-1 | 1.01E+0 | 1.00E+0 | 1.02E+0 |
| High dim PNd | 3.04E-3 | 2.58E-3 | 4.67E-4 | 4.58E-4 | 4.64E-3 | 3.59E-3 | 3.98E-3 | 5.05E-3 | – | 4.14E-3 | 7.75E-3 | – |
| High dim HNd | 3.61E-1 | 4.59E-1 | 1.19E-4 | 3.94E-1 | 3.97E-1 | 3.57E-1 | 3.02E-1 | 3.17E-1 | – | 5.22E-1 | 5.21E-1 | – |
| Inverse PInv | 9.42E-2 | 1.66E-1 | NaN | 1.54E-1 | 1.93E-1 | 9.35E-2 | 1.30E-1 | 8.03E-2 | 1.96E-2 | 1.30E-1 | 2.54E-1 | 8.44E-1 |
| Inverse HInv | 1.57E+0 | 5.26E-2 | NaN | 5.09E-2 | 7.52E-2 | 1.52E+0 | 8.04E-2 | 4.84E+0 | 1.19E-2 | 5.59E-1 | 2.12E-1 | 9.27E-1 |

Columns are grouped as Vanilla (PINN, PINN-w, LBFGS), Loss Reweighting/Sampling (LRA, NTK, RAR), Optimizer (MultiAdam), Loss functions (gPINN, vPINN), and Architecture (LAAF, GAAF, FBPINN).

Table 3: Mean L2RE of different PINN variants on our benchmark. Best results are highlighted in blue and second places in light blue. We do not bold any result if the errors of all methods are about 100%. “NaN” means the method does not converge and “–” means the method is not suitable for the problem.
4 EXPERIMENTS
4.1 MAIN RESULTS
We now present experimental results. Except for the ablation study in Sec 4.3 and Appendix E.2, we
use a learning rate of 0.001 and train all models for 20,000 epochs. We repeat all experiments three times and record the mean and standard deviation. Table 3 presents the main results for all methods on our tasks and
shows their average ℓ2 relative errors (with standard deviation results available in Appendix E.1).
PINN. We use PINN-w to denote training PINNs with larger boundary weights. The vanilla PINNs
struggle to accurately solve complex physics systems, indicating substantial room for improvement.
Using an ℓ2 relative error (L2RE) of 10% as a threshold for a successful solution, we find that vanilla
PINN only solves 10 out of 22 tasks, most of which involve simpler equations (e.g., 1.45% on
Burgers-1d-C). They encounter significant difficulties when faced with physics systems characterized
by complex geometries, multi-scale phenomena, nonlinearity, and longer time spans. This shows
that directly optimizing an average of the PDE losses and initial/boundary condition losses leads to
critical issues such as loss imbalance, suboptimal convergence, and limited expressiveness.
PINN variants. PINN variants offer approaches to addressing some of these challenges to varying
degrees. Methods involving loss reweighting and resampling have shown improved performance in
some cases involving complex geometries and multi-scale phenomena (e.g., 1.43% on Poisson-2d-CG). This is due to the configuration of loss weights and sampled collocation points, which adaptively
place more weight on more challenging domains during the training process. However, these methods
still struggle with Wave equations, Navier-Stokes equations, and other cases with higher dimensions
or longer time spans. MultiAdam, a representative of novel optimizers, solves several simple cases
and the chaotic GS equation (9.37%), but does not significantly outperform other methods. The new
loss term of variational form demonstrates significant superiority in solving inverse problems (e.g.,
1.19% on HInv for vPINN), but no clear improvement in fitting error over standard PINN in forward
cases. Changes in architecture can enhance expressiveness and flexibility for cases with complex
geometries and multi-scale systems. For example, FBPINN achieves the smallest error on the chaotic
GS equation (7.99%), while LAAF delivers the best fitting result on Heat-2d-CG (2.39%).
| Name | PINN | LRA | NTK | RAR | MultiAdam | gPINN | vPINN | LAAF | GAAF | FBPINN |
|---|---|---|---|---|---|---|---|---|---|---|
| Burgers-P 2d-C | 4.74E-1 | 4.36E-1 | 4.13E-1 | 4.71E-1 | 4.93E-1 | 4.91E-1 | 2.82E+0 | 4.37E-1 | 4.34E-1 | – |
| Poisson-P 2d-C | 2.24E-1 | 7.07E-2 | 1.66E-2 | 2.33E-1 | 8.24E-2 | 4.03E-1 | 5.51E-1 | 1.84E-1 | 2.97E-1 | 2.87E-2 |
| Heat-P 2d-MS | 1.73E-1 | 1.23E-1 | 1.50E-1 | 1.53E-1 | 4.00E-1 | 4.59E-1 | 5.12E-1 | 6.27E-2 | 1.89E-1 | 2.46E-1 |
| NS-P 2d-C | 3.89E-1 | – | 4.52E-1 | 3.91E-1 | 9.33E-1 | 7.19E-1 | 3.76E-1 | 3.63E-1 | 4.85E-1 | 3.99E-1 |
| Wave-P 1d-C | 5.22E-1 | 3.44E-1 | 2.69E-1 | 5.05E-1 | 6.89E-1 | 7.66E-1 | 3.58E-1 | 4.03E-1 | 9.00E-1 | 1.15E+0 |
| High dim-P HNd | 7.66E-3 | 6.53E-3 | 9.04E-3 | 8.07E-3 | 2.22E-3 | 7.87E-3 | – | 6.97E-3 | 1.94E-1 | – |

Table 4: Results of different PINN variants on parametric PDEs. We report the average L2RE on all examples within a class of PDE. We bold the best results across all methods.
Discussion. For challenges related to complex geometries and multi-scale phenomena, some methods
can mitigate these issues by implementing mechanisms like loss reweighting, novel optimizers, and
better capacity through adaptive activation. This holds true for the 2D cases of Heat and Poisson
equations, which are classic linear equations. However, when systems have higher dimensions
(Poisson3d-CG) or longer time spans (Heat2d-LT), all methods fail, highlighting the difficulties
associated with complex geometries and multi-scale systems.
In contrast, nonlinear, long-time PDEs like 2D Burgers, NS, and KS pose challenges for most
methods. These equations are sensitive to initial conditions, resulting in complicated solution spaces
and more local minima for PINNs (Steger et al., 2022). The Wave equation, featuring a second-order
time derivative and periodic behavior, is particularly hard for PINNs, which often become unstable
and may violate conservation laws (Jagtap et al., 2020c; Wang et al., 2022a). Although all methods
perform well on Poisson-Nd, only PINN with L-BFGS solves Heat-Nd, indicating the potential of
second-order optimizers for solving high-dimensional PDEs (Tang et al., 2021).
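For reference, the composite objective that all of the benchmarked variants modify can be sketched for a 1D Poisson problem as follows. This is an illustrative toy, not the benchmark implementation: the second derivative is approximated with central differences purely for readability, whereas real PINNs differentiate the network itself via automatic differentiation, and the function names here are our own.

```python
import numpy as np

def pinn_loss(u, xs, f, h=1e-4, w_bc=1.0):
    """Composite PINN loss for -u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0.

    u  : candidate solution, a vectorised callable
    xs : interior collocation points
    The loss is the mean squared PDE residual plus a weighted penalty on
    the Dirichlet boundary conditions.
    """
    u_xx = (u(xs + h) - 2.0 * u(xs) + u(xs - h)) / h**2  # central difference
    residual = np.mean((-u_xx - f(xs)) ** 2)             # PDE residual term
    boundary = u(0.0) ** 2 + u(1.0) ** 2                 # boundary penalty
    return residual + w_bc * boundary

# The exact solution of -u'' = pi^2 sin(pi x) is u = sin(pi x),
# so the loss at the exact solution is numerically zero.
xs = np.linspace(0.1, 0.9, 9)
f = lambda x: np.pi**2 * np.sin(np.pi * x)
loss_exact = pinn_loss(lambda x: np.sin(np.pi * x), xs, f)
```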
4.2 PARAMETERIZED PDE EXPERIMENTS
The performance of PINNs is also highly influenced by parameters in PDEs (Krishnapriyan et al.,
2021). To investigate whether PINNs could handle a class of PDEs, we design this experiment to
solve the same PDEs with different parameters. We choose 6 PDEs, i.e., Burgers2d-C, Poisson2d-C,
Heat2d-MS, NS-C, Wave1d-C, and Heat-Nd (HNd), with each case containing five parameterized
examples. Details of the parameterized PDEs are given in Appendix B. We report the average
L2RE over the parameterized examples of each case in Table 4. First, compared with the
corresponding cases in Table E.1, the mean L2RE of
parameterized PDEs is usually higher. We attribute this to certain parameter settings that yield
particularly difficult cases with very high errors. Second, we find that PINN-NTK performs well
on parameterized PDE tasks, achieving the best result in three of the six experiments.
We speculate that solving PDEs with different parameters requires different weights for loss terms,
and PINN-NTK is a powerful method for automatically balancing these weights.
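For concreteness, the L2RE metric reported throughout is the relative L2 error between a method's prediction and the reference solution. A minimal sketch (illustrative, not the toolbox's exact implementation):

```python
import numpy as np

def l2re(u_pred, u_ref):
    """Relative L2 error: ||u_pred - u_ref||_2 / ||u_ref||_2."""
    return np.linalg.norm(u_pred - u_ref) / np.linalg.norm(u_ref)

# A prediction off by 10% everywhere has an L2RE of 0.1 (reported as 10%).
u = np.array([1.0, 2.0, 3.0])
err = l2re(1.1 * u, u)
```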
4.3 HYPERPARAMETER ANALYSIS
The performance of PINNs is strongly affected by hyperparameters, with each variant potentially
introducing its own unique set. Here we aim to investigate the impact of several shared and method-
specific hyperparameters via ablation studies. We consider the batch size and the number of training
epochs. The corresponding results are shown in Figure 2. We focus on a set of problems, i.e.,
Burgers1d, GS, Heat2d-CG, and Poisson2d-C. Detailed numerical results and additional findings are
in Appendix E.2.
Batch Size. The left panel of Figure 2 shows the testing L2RE of PINNs for varying batch sizes. It
suggests that larger batch sizes yield superior results, possibly due to more accurate gradient
estimates. Although GS and Poisson2d-C saturate beyond a batch size of 2048, increasing the batch
size generally improves results on the remaining problems.
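Here "batch size" means the number of collocation points at which the residual is evaluated per step. A hedged sketch of uniform collocation sampling over a box domain (the toolbox's actual samplers may differ; the function name is our own):

```python
import numpy as np

def sample_collocation(batch_size, bounds, rng):
    """Draw a batch of collocation points uniformly from a box domain.

    bounds : pair (lower corner, upper corner) of the box
    Larger batches give lower-variance estimates of the residual loss
    gradient, consistent with the trend observed above.
    """
    lo, hi = np.asarray(bounds[0]), np.asarray(bounds[1])
    return lo + (hi - lo) * rng.random((batch_size, lo.size))

rng = np.random.default_rng(0)
pts = sample_collocation(2048, ([0.0, 0.0], [1.0, 2.0]), rng)
```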
Training Epochs. The right panel of Figure 2 illustrates the L2RE of PINNs across varying numbers
of training epochs. Mirroring the trend observed with batch size, we find that more epochs lead to
lower overall error, but with a saturation point: beyond 20k to 80k epochs the error does not
decrease significantly, suggesting that PINNs converge within roughly 20k to 80k epochs.
[Figure 2 panels: L2RE (log scale) on Burgers1d, GS, Heat2d-CG, and Poisson2d-C; left panel for batch sizes 512, 2048, 8192, and 32768; right panel for 5k, 20k, 80k, and 160k training epochs.]
Figure 2: Performance of vanilla PINNs under different batch sizes (number of collocation points),
which is shown in the left figure; and number of training epochs, which is shown in the right figure.
[Figure 3 panels: L2RE (log scale) over 20000 training steps on Burgers1d, Heat2d-CG (HeatComplex), and Poisson2d-C, for learning rates 1e-2, 1e-3, 1e-4, 1e-5, and two step-decay schedules.]
Figure 3: Convergence curve of PINNs with different learning rate schedules on Burgers1d, Heat2d-
CG, and Poisson2d-C.
Learning Rates. The performance of standard PINNs under various learning rates and learning rate
schedules is shown in Figure 3. We observe that the influence of the learning rate on performance is
intricate, with optimal learning rates varying across problems. Furthermore, PINN training tends to
be unstable. High learning rates, such as 10^-2, often lead to error spikes, while low learning rates,
like 10^-5, result in slow convergence. Our findings suggest that a moderate learning rate, such as
10^-3 or 10^-4, or a step-decay learning rate schedule, tends to yield more stable performance.
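The step-decay schedules in Figure 3 follow the usual pattern of multiplying the learning rate by a fixed factor every fixed number of epochs. The sketch below is illustrative: the step size and decay factor are placeholders, not the settings used in our experiments.

```python
def step_decay_lr(base_lr, epoch, step=5000, gamma=0.5):
    """Step-decay schedule: multiply the learning rate by `gamma`
    every `step` epochs. The step/gamma defaults are placeholders."""
    return base_lr * gamma ** (epoch // step)

# Learning rate at selected epochs, starting from 1e-3.
lrs = [step_decay_lr(1e-3, e) for e in (0, 4999, 5000, 15000)]
```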
5 CONCLUSION AND DISCUSSION
In this work, we introduced PINNacle, a comprehensive benchmark offering a user-friendly toolbox
that encompasses over 10 PINN methods. We evaluated these methods against more than 20 chal-
lenging PDE problems, conducting extensive experiments and ablation studies for hyperparameters.
Looking forward, we plan to expand the benchmark by integrating additional state-of-the-art methods
and incorporating more practical problem scenarios.
Our analysis of the experimental results yields several key insights. First, domain decomposition is
beneficial for addressing problems characterized by complex geometries, and our experiments show
that PINN-NTK is a strong method for balancing loss weights. Second, the choice of hyperparameters
is crucial to the performance of PINNs. Selecting a larger batch size and appropriately weighting
losses or betas in Adam may significantly reduce the error. However, the best hyperparameters
usually vary with PDEs. Third, we identify high-dimensional and nonlinear problems as a pressing
challenge. The overall performance of PINNs is not yet on par with traditional numerical methods
(Grossmann et al., 2023). Fourth, from a theoretical standpoint, the exploration of PINNs’ loss
landscape during the training of high-dimensional or nonlinear problems remains largely unexplored.
Finally, from an algorithmic perspective, integrating the strengths of neural networks with numerical
methods like preconditioning, weak formulation, and multigrid may present a promising avenue
toward overcoming the challenges identified herein (Markidis, 2021).
REFERENCES
Simon Axelrod and Rafael Gomez-Bombarelli. Geom, energy-annotated molecular conformations
for property prediction and molecular generation. Scientific Data, 9(1):185, 2022.
Rafael Bischof and Michael Kraus. Multi-objective loss balancing for physics-informed deep learning.
arXiv preprint arXiv:2110.09813, 2021.
Aleksandar Botev, Andrew Jaegle, Peter Wirnsberger, Daniel Hennes, and Irina Higgins. Which priors
matter? benchmarking models for learning latent dynamics. arXiv preprint arXiv:2111.05458,
2021.
Emmanuel Boutet, Damien Lieberherr, Michael Tognolli, Michel Schneider, Parit Bansal, Alan J
Bridge, Sylvain Poux, Lydie Bougueleret, and Ioannis Xenarios. Uniprotkb/swiss-prot, the
manually annotated section of the uniprot knowledgebase: how to use the entry view. Plant
bioinformatics: methods and protocols, pp. 23–54, 2016.
Salva Rühling Cachay, Venkatesh Ramesh, Jason NS Cole, Howard Barker, and David Rolnick.
Climart: A benchmark dataset for emulating atmospheric radiative transfer in weather and climate
models. arXiv preprint arXiv:2111.14671, 2021.
DM Causon and CG Mingham. Introductory finite difference methods for PDEs. Bookboon, 2010.
Feiyu Chen, David Sondak, Pavlos Protopapas, Marios Mattheakis, Shuheng Liu, Devansh Agarwal,
and Marco Di Giovanni. Neurodiffeq: A python package for solving differential equations with
neural networks. Journal of Open Source Software, 5(46):1931, 2020.
Salvatore Cuomo, Vincenzo Schiano Di Cola, Fabio Giampaolo, Gianluigi Rozza, Maziar Raissi, and
Francesco Piccialli. Scientific machine learning through physics–informed neural networks: where
we are and what’s next. Journal of Scientific Computing, 92(3):88, 2022.
Sourav Das and Solomon Tesfamariam. State-of-the-art review of design of experiments for physics-
informed deep learning. arXiv preprint arXiv:2202.06416, 2022.
Arka Daw, Anuj Karpatne, William Watkins, Jordan Read, and Vipin Kumar. Physics-guided neural
networks (pgnn): An application in lake temperature modeling. arXiv preprint arXiv:1710.11431,
2017.
Tobin A Driscoll, Nicholas Hale, and Lloyd N Trefethen. Chebfun guide, 2014.
Robert Eymard, Thierry Gallouët, and Raphaèle Herbin. Finite volume methods. Handbook of
numerical analysis, 7:713–1018, 2000.
Han Gao, Luning Sun, and Jian-Xun Wang. Phygeonet: Physics-informed geometry-adaptive
convolutional neural networks for solving parameterized steady-state pdes on irregular domain.
Journal of Computational Physics, 428:110079, 2021.
Tamara G Grossmann, Urszula Julia Komorowska, Jonas Latz, and Carola-Bibiane Schönlieb.
Can physics-informed neural networks beat the finite element method? arXiv preprint
arXiv:2302.04107, 2023.
Jayesh K Gupta and Johannes Brandstetter. Towards multi-spatiotemporal-scale generalized pde
modeling. arXiv preprint arXiv:2209.15616, 2022.
Zhongkai Hao, Songming Liu, Yichi Zhang, Chengyang Ying, Yao Feng, Hang Su, and Jun Zhu.
Physics-informed machine learning: A survey on problems, methods and applications. arXiv
preprint arXiv:2211.08064, 2022.
Oliver Hennigh, Susheela Narasimhan, Mohammad Amin Nabian, Akshay Subramaniam, Kaus-
tubh Tangsali, Zhiwei Fang, Max Rietmann, Wonmin Byeon, and Sanjay Choudhry. Nvidia
simnet™: An ai-accelerated multi-physics simulation framework. In International Conference on
Computational Science, pp. 447–461. Springer, 2021.
Zizhou Huang, Teseo Schneider, Minchen Li, Chenfanfu Jiang, Denis Zorin, and Daniele Panozzo.
A large-scale benchmark for the incompressible navier-stokes equations. arXiv preprint
arXiv:2112.05309, 2021.
Ameya D Jagtap and George E Karniadakis. Extended physics-informed neural networks (xpinns): A
generalized space-time domain decomposition based deep learning framework for nonlinear partial
differential equations. In AAAI Spring Symposium: MLPS, 2021.
Ameya D Jagtap, Kenji Kawaguchi, and George Em Karniadakis. Locally adaptive activation
functions with slope recovery for deep and physics-informed neural networks. Proceedings of the
Royal Society A, 476(2239):20200334, 2020a.
Ameya D Jagtap, Kenji Kawaguchi, and George Em Karniadakis. Adaptive activation functions
accelerate convergence in deep and physics-informed neural networks. Journal of Computational
Physics, 404:109136, 2020b.
Ameya D Jagtap, Ehsan Kharazmi, and George Em Karniadakis. Conservative physics-informed
neural networks on discrete domains for conservation laws: Applications to forward and inverse
problems. Computer Methods in Applied Mechanics and Engineering, 365:113028, 2020c.
George Em Karniadakis, Ioannis G Kevrekidis, Lu Lu, Paris Perdikaris, Sifan Wang, and Liu Yang.
Physics-informed machine learning. Nature Reviews Physics, 3(6):422–440, 2021.
Ehsan Kharazmi, Zhongqiang Zhang, and George Em Karniadakis. Variational physics-informed
neural networks for solving partial differential equations. arXiv preprint arXiv:1912.00873, 2019.
Ehsan Kharazmi, Zhongqiang Zhang, and George Em Karniadakis. hp-vpinns: Variational physics-
informed neural networks with domain decomposition. Computer Methods in Applied Mechanics
and Engineering, 374:113547, 2021.
Reza Khodayi-Mehr and Michael Zavlanos. Varnet: Variational neural networks for the solution of
partial differential equations. In Learning for Dynamics and Control, pp. 298–307. PMLR, 2020.
R Khudorozhkov, S Tsimfer, and A Koryagin. Pydens framework for solving differential equations
with deep learning. arXiv preprint arXiv:1909.11544, 2019.
Aditi Krishnapriyan, Amir Gholami, Shandian Zhe, Robert Kirby, and Michael W Mahoney. Char-
acterizing possible failure modes in physics-informed neural networks. Advances in Neural
Information Processing Systems, 34:26548–26560, 2021.
Ke Li, Kejun Tang, Tianfan Wu, and Qifeng Liao. D3m: A deep domain decomposition method for
partial differential equations. IEEE Access, 8:5283–5294, 2019.
Yuhao Liu, Xiaojian Li, and Zhengxian Liu. An improved physics-informed neural network based on
a new adaptive gradient descent algorithm for solving partial differential equations of nonlinear
systems. 2022.
Lu Lu, Xuhui Meng, Zhiping Mao, and George Em Karniadakis. Deepxde: A deep learning library
for solving differential equations. SIAM review, 63(1):208–228, 2021.
Lu Lu, Xuhui Meng, Shengze Cai, Zhiping Mao, Somdatta Goswami, Zhongqiang Zhang, and
George Em Karniadakis. A comprehensive and fair comparison of two neural operators (with
practical extensions) based on fair data. Computer Methods in Applied Mechanics and Engineering,
393:114778, 2022.
Suryanarayana Maddu, Dominik Sturm, Christian L Müller, and Ivo F Sbalzarini. Inverse dirichlet
weighting enables reliable training of physics informed neural networks. Machine Learning:
Science and Technology, 3(1):015026, 2022.
Stefano Markidis. The old and the new: Can physics-informed deep-learning replace traditional
linear solvers? Frontiers in big Data, 4:669097, 2021.
Keith W Morton and David Francis Mayers. Numerical solution of partial differential equations: an
introduction. Cambridge university press, 2005.
Ben Moseley, Andrew Markham, and Tarje Nissen-Meyer. Finite basis physics-informed neural
networks (fbpinns): a scalable domain decomposition approach for solving differential equations.
arXiv preprint arXiv:2107.07871, 2021.
COMSOL Multiphysics. Introduction to comsol multiphysics ®. COMSOL Multiphysics, Burlington,
MA, accessed Feb, 9:2018, 1998.
Mohammad Amin Nabian, Rini Jasmine Gladstone, and Hadi Meidani. Efficient training of physics-
informed neural networks via importance sampling. Computer-Aided Civil and Infrastructure
Engineering, 36(8):962–977, 2021.
Karl Otness, Arvi Gjoka, Joan Bruna, Daniele Panozzo, Benjamin Peherstorfer, Teseo Schneider,
and Denis Zorin. An extensible benchmark suite for learning to simulate physical systems. arXiv
preprint arXiv:2108.07799, 2021.
Evan Racah, Christopher Beckham, Tegan Maharaj, Samira Ebrahimi Kahou, Mr Prabhat, and Chris
Pal. Extremeweather: A large-scale climate dataset for semi-supervised detection, localization,
and understanding of extreme weather events. Advances in neural information processing systems,
30, 2017.
Christopher Rackauckas and Qing Nie. Differentialequations.jl – a performant and feature-rich
ecosystem for solving differential equations in julia. Journal of open research software, 5(1), 2017.
Maziar Raissi, Paris Perdikaris, and George E Karniadakis. Physics-informed neural networks: A
deep learning framework for solving forward and inverse problems involving nonlinear partial
differential equations. Journal of Computational physics, 378:686–707, 2019.
Junuthula Narasimha Reddy. Introduction to the finite element method. McGraw-Hill Education,
2019.
Pu Ren, Chengping Rao, Yang Liu, Jian-Xun Wang, and Hao Sun. Phycrnet: Physics-informed
convolutional-recurrent network for solving spatiotemporal pdes. Computer Methods in Applied
Mechanics and Engineering, 389:114399, 2022.
Franz M Rohrhofer, Stefan Posch, and Bernhard C Geiger. On the pareto front of physics-informed
neural networks. arXiv preprint arXiv:2105.00862, 2021.
Khemraj Shukla, Ameya D Jagtap, and George Em Karniadakis. Parallel physics-informed neural
networks via domain decomposition. Journal of Computational Physics, 447:110683, 2021.
Justin Sirignano and Konstantinos Spiliopoulos. Dgm: A deep learning algorithm for solving partial
differential equations. Journal of computational physics, 375:1339–1364, 2018.
Hwijae Son, Jin Woo Jang, Woo Jin Han, and Hyung Ju Hwang. Sobolev training for the neural
network solutions of pdes. 2021.
Sophie Steger, Franz M Rohrhofer, and Bernhard C Geiger. How pinns cheat: Predicting chaotic
motion of a double pendulum. In The Symbiosis of Deep Learning and Differential Equations II,
2022.
Makoto Takamoto, Timothy Praditia, Raphael Leiteritz, Daniel MacKinlay, Francesco Alesiani, Dirk
Pflüger, and Mathias Niepert. Pdebench: An extensive benchmark for scientific machine learning.
Advances in Neural Information Processing Systems, 35:1596–1611, 2022.
Kejun Tang, Xiaoliang Wan, and Chao Yang. Das: A deep adaptive sampling method for solving
partial differential equations. arXiv preprint arXiv:2112.14038, 2021.
Nanzhe Wang, Dongxiao Zhang, Haibin Chang, and Heng Li. Deep learning of subsurface flow via
theory-guided neural network. Journal of Hydrology, 584:124700, 2020.
Sifan Wang, Yujun Teng, and Paris Perdikaris. Understanding and mitigating gradient flow pathologies
in physics-informed neural networks. SIAM Journal on Scientific Computing, 43(5):A3055–A3081,
2021a.
Sifan Wang, Hanwen Wang, and Paris Perdikaris. On the eigenvector bias of fourier feature networks:
From regression to solving multi-scale pdes with physics-informed neural networks. Computer
Methods in Applied Mechanics and Engineering, 384:113938, 2021b.
Sifan Wang, Shyam Sankaran, and Paris Perdikaris. Respecting causality is all you need for training
physics-informed neural networks. arXiv preprint arXiv:2203.07404, 2022a.
Sifan Wang, Xinling Yu, and Paris Perdikaris. When and why pinns fail to train: A neural tangent
kernel perspective. Journal of Computational Physics, 449:110768, 2022b.
Chenxi Wu, Min Zhu, Qinyang Tan, Yadhu Kartha, and Lu Lu. A comprehensive study of non-
adaptive and residual-based adaptive sampling for physics-informed neural networks. Computer
Methods in Applied Mechanics and Engineering, 403:115671, 2023.
Jiachen Yao, Chang Su, Zhongkai Hao, Songming Liu, Hang Su, and Jun Zhu. Multiadam: Parameter-
wise scale-invariant optimizer for multiscale training of physics-informed neural networks. arXiv
preprint arXiv:2306.02816, 2023.
Bing Yu et al. The deep ritz method: a deep learning-based numerical algorithm for solving variational
problems. Communications in Mathematics and Statistics, 6(1):1–12, 2018.
Jeremy Yu, Lu Lu, Xuhui Meng, and George Em Karniadakis. Gradient-enhanced physics-informed
neural networks for forward and inverse pde problems. Computer Methods in Applied Mechanics
and Engineering, 393:114823, 2022.
Ruiyang Zhang, Yang Liu, and Hao Sun. Physics-informed multi-lstm networks for metamodeling
of nonlinear structures. Computer Methods in Applied Mechanics and Engineering, 369:113226,
2020.
|
Zhongkai Hao, Jiachen Yao, Chang Su, Hang Su, Ziao Wang, Fanzhi Lu, Zeyu Xia, Yichi Zhang, Songming Liu, Lu Lu, Jun Zhu
|
Reject
| 2,024
|
{"id": "ApjY32f3Xr", "forum": "ApjY32f3Xr", "content": {"title": {"value": "PINNacle: A Comprehensive Benchmark of Physics-Informed Neural Networks for Solving PDEs"}, "authors": {"value": ["Zhongkai Hao", "Jiachen Yao", "Chang Su", "Hang Su", "Ziao Wang", "Fanzhi Lu", "Zeyu Xia", "Yichi Zhang", "Songming Liu", "Lu Lu", "Jun Zhu"]}, "authorids": {"value": ["~Zhongkai_Hao1", "~Jiachen_Yao3", "~Chang_Su7", "~Hang_Su3", "~Ziao_Wang2", "~Fanzhi_Lu1", "~Zeyu_Xia4", "~Yichi_Zhang4", "~Songming_Liu1", "~Lu_Lu1", "~Jun_Zhu2"]}, "keywords": {"value": ["PINN", "machine learning", "physics-informed machine learning"]}, "abstract": {"value": "While significant progress has been made on Physics-Informed Neural Networks (PINNs), a comprehensive comparison of these methods across a wide range of Partial Differential Equations (PDEs) is still lacking. This study introduces PINNacle, a benchmarking tool designed to fill this gap. PINNacle provides a diverse dataset, comprising over 20 distinct PDEs from various domains including heat conduction, fluid dynamics, biology, and electromagnetics. These PDEs encapsulate key challenges inherent to real-world problems, such as complex geometry, multi-scale phenomena, nonlinearity, and high dimensionality. PINNacle also offers a user-friendly toolbox, incorporating about 10 state-of-the-art PINN methods for systematic evaluation and comparison. We have conducted extensive experiments with these methods, offering insights into their strengths and weaknesses. In addition to providing a standardized means of assessing performance, PINNacle also offers an in-depth analysis to guide future research, particularly in areas such as domain decomposition methods and loss reweighting for handling multi-scale problems and complex geometry. 
While PINNacle does not guarantee success in all real-world scenarios, it represents a significant contribution to the field by offering a robust, diverse, and comprehensive benchmark suite that will undoubtedly foster further research and development in PINNs."}, "pdf": {"value": "/pdf/49840b2f19f2bbeff0c1539d86c876d01140da89.pdf"}, "supplementary_material": {"value": "/attachment/38585e83cfcd145e3816bd1154c966361c146b92.pdf"}, "primary_area": {"value": "datasets and benchmarks"}, "code_of_ethics": {"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."}, "submission_guidelines": {"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide."}, "anonymous_url": {"value": "I certify that there is no URL (e.g., github page) that could be used to find authors' identity."}, "no_acknowledgement_section": {"value": "I certify that there is no acknowledgement section in this submission for double blind review."}, "venue": {"value": "Submitted to ICLR 2024"}, "venueid": {"value": "ICLR.cc/2024/Conference/Rejected_Submission"}, "_bibtex": {"value": "@misc{\nhao2024pinnacle,\ntitle={{PINN}acle: A Comprehensive Benchmark of Physics-Informed Neural Networks for Solving {PDE}s},\nauthor={Zhongkai Hao and Jiachen Yao and Chang Su and Hang Su and Ziao Wang and Fanzhi Lu and Zeyu Xia and Yichi Zhang and Songming Liu and Lu Lu and Jun Zhu},\nyear={2024},\nurl={https://openreview.net/forum?id=ApjY32f3Xr}\n}"}, "paperhash": {"value": "hao|pinnacle_a_comprehensive_benchmark_of_physicsinformed_neural_networks_for_solving_pdes"}}, "invitations": ["ICLR.cc/2024/Conference/-/Submission", "ICLR.cc/2024/Conference/-/Post_Submission", "ICLR.cc/2024/Conference/Submission9493/-/Revision", "ICLR.cc/2024/Conference/Submission9493/-/Rebuttal_Revision", "ICLR.cc/2024/Conference/-/Edit"], "cdate": 1695557413894, "odate": 1697213872796, "mdate": 
1707625775716, "signatures": ["ICLR.cc/2024/Conference/Submission9493/Authors"], "writers": ["ICLR.cc/2024/Conference", "ICLR.cc/2024/Conference/Submission9493/Authors"], "readers": ["everyone"]}
|
[Summary]:
This paper provides both a collection of benchmark datasets as well as a standardized suite of PINN-type neural network PDE solution approximators arranged as a python package.
It further shows benchmark numbers of the different PINN methods on the benchmark datasets.
[Strengths]:
Providing any meaningful benchmark to the community is a valuable service.
In addition to creating the benchmark data sets, the authors have made a big effort in collecting and unifying PINN methods into a unified framework.
The paper appendix contains detailed specifications about the particular setup for the data benchmark.
[Weaknesses]:
While providing a benchmark data set to the community is a valuable service, several aspects could be improved.
Minor:
-It would be great to have a table or list (in the appendix) detailing a comparison of the provided data sets to those in PDEarena (and PDEbench).
Major:
- The relative error values in the results tables are for the most part shockingly bad and simply not useful for many numerical analysis contexts. Given that PINNs seem to be mostly providing different function spaces for PDE solutions, one original base PINN should be included in the benchmark, which is to give each hat function on a finite element mesh one parameter, and hence include finite element methods. Because some of the data sets were created using FEM, the original mesh would yield 0 error, but different meshes may not, and in particular coarser meshes would accumulate error. Analyzing a curve of remeshing from same resolution to coarse would provide a baseline for the performances of the other PINNs.
- In the above sense, it also becomes important to quantify flop counts. It appears that most PINNs need to be fitted for each PDE solution, incurring the typically high flop count of solving an optimization problem (compared to one forward pass), and only some of them can learn solutions conditional on hyperparameters given as input and require only forward passes to solve e.g. from different inital conditions.
For all cases, there should be 3 different flop counts provided: 1) The number of flops required to create the training set 2) The number of flops required for any general training of the method 3) the number of flops required to evaluate/fit the method on a particular example. Many PINNs, and the FEM baseline would only have nonzero counts in point 3, and it would be good to compare them.
Having flop counts or even wall time counts would allow answering questions like "at equal error rate, does fitting a PINN or fitting FEM cost more computational power?" and "At equal computing power, can FEM beat the error rates of the listed PINNs?"
- continuing the discussion about flop counts, methods learning from multiple data sets/examples should be included in order to compare flop counts and provide additional reference error values. In particular for the time propagating PDEs, solutions using U-nets or FNOs from e.g. PDEBench should be included as reference values, in terms of performance, flops required for training, flops required to generate the required training data, and flops required to run a forward pass to obtain a solution. Then one can assess how many PINNs or FEM solutions one can compute for the same budget as a certain number of forward passes of the propagator network. There should be a break-even point at some number of forward passes justifying the training effort.
Without these points, the benchmark is unfortunately sitting just beyond actual widespread utility. I would highly encourage the authors to add these baselines to make the benchmark useful. Despite my positive bias towards benchmarking efforts I cannot recommend acceptance of this paper in its current state.
[Questions]:
Would it be possible to address the major issues listed above among weaknesses?
|
ICLR.cc/2024/Conference/Submission9493/Reviewer_Ytqc
|
6
|
3
|
{"id": "Ait2z7EwaE", "forum": "ApjY32f3Xr", "replyto": "ApjY32f3Xr", "content": {"summary": {"value": "This paper provides both a collection of benchmark datasets as well as a standardized suite of PINN-type neural network PDE solution approximators arranged as a python package.\n\nIt further shows benchmark numbers of the different PINN methods on the benchmark datasets."}, "soundness": {"value": "3 good"}, "presentation": {"value": "3 good"}, "contribution": {"value": "3 good"}, "strengths": {"value": "Providing any meaningful benchmark to the community is a valuable service.\nIn addition to creating the benchmark data sets, the authors have made a big effort in collecting and unifying PINN methods into a unified framework.\nThe paper appendix contains detailed specifications about the particular setup for the data benchmark."}, "weaknesses": {"value": "While providing a benchmark data set to the community is a valuable service, several aspects could be improved.\n\nMinor: \n-It would be great to have a table or list (in the appendix) detailing a comparison of the provided data sets to those in PDEarena (and PDEbench).\n\nMajor: \n- The relative error values in the results tables are for the most part shockingly bad and simply not useful for many numerical analysis contexts. Given that PINNs seem to be mostly providing different function spaces for PDE solutions, one original base PINN should be included in the benchmark, which is to give each hat function on a finite element mesh one parameter, and hence include finite element methods. Because some of the data sets were created using FEM, the original mesh would yield 0 error, but different meshes may not, and in particular coarser meshes would accumulate error. Analyzing a curve of remeshing from same resolution to coarse would provide a baseline for the performances of the other PINNs.\n\n- In the above sense, it also becomes important to quantify flop counts. 
It appears that most PINNs need to be fitted for each PDE solution, incurring the typically high flop count of solving an optimization problem (compared to one forward pass), and only some of them can learn solutions conditional on hyperparameters given as input and require only forward passes to solve e.g. from different inital conditions.\nFor all cases, there should be 3 different flop counts provided: 1) The number of flops required to create the training set 2) The number of flops required for any general training of the method 3) the number of flops required to evaluate/fit the method on a particular example. Many PINNs, and the FEM baseline would only have nonzero counts in point 3, and it would be good to compare them.\nHaving flop counts or even wall time counts would allow answering questions like \"at equal error rate, does fitting a PINN or fitting FEM cost more computational power?\" and \"At equal computing power, can FEM beat the error rates of the listed PINNs?\"\n\n- continuing the discussion about flop counts, methods learning from multiple data sets/examples should be included in order to compare flop counts and provide additional reference error values. In particular for the time propagating PDEs, solutions using U-nets or FNOs from e.g. PDE bench should be included as reference values, in terms of performance, flops required for training, flops required to generate the required training data, and flops required to run a forward pass to obtain a solution. Then one can assess how many PINNs or FEM solutions one can compute for the same budget as a certain number of forward passes of the propagator network. The should be a break-even point at some number of forward passes justifying the training effort.\n\nWithout these points, the benchmark is unfortunately sitting just beyond actual widespread utility. I would highly encourage the authors to add these baselines to make the benchmark useful. 
Despite my positive bias towards benchmarking efforts I cannot recommend acceptance of this paper in its current state."}, "questions": {"value": "Would it be possible to address the major issues listed above among weaknesses?"}, "flag_for_ethics_review": {"value": ["No ethics review needed."]}, "rating": {"value": "6: marginally above the acceptance threshold"}, "confidence": {"value": "3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked."}, "code_of_conduct": {"value": "Yes"}}, "invitations": ["ICLR.cc/2024/Conference/Submission9493/-/Official_Review", "ICLR.cc/2024/Conference/-/Edit"], "cdate": 1699333708841, "mdate": 1701736249610, "nonreaders": [], "signatures": ["ICLR.cc/2024/Conference/Submission9493/Reviewer_Ytqc"], "writers": ["ICLR.cc/2024/Conference", "ICLR.cc/2024/Conference/Submission9493/Reviewer_Ytqc"], "readers": ["everyone"], "license": "CC BY 4.0"}
|
{
"criticism": 3,
"example": 1,
"importance_and_relevance": 3,
"materials_and_methods": 22,
"praise": 3,
"presentation_and_reporting": 3,
"results_and_discussion": 2,
"suggestion_and_solution": 14,
"total": 27
}
| 1.888889
| -1.204467
| 3.093356
| 1.926865
| 0.437026
| 0.037976
| 0.111111
| 0.037037
| 0.111111
| 0.814815
| 0.111111
| 0.111111
| 0.074074
| 0.518519
|
{
"criticism": 0.1111111111111111,
"example": 0.037037037037037035,
"importance_and_relevance": 0.1111111111111111,
"materials_and_methods": 0.8148148148148148,
"praise": 0.1111111111111111,
"presentation_and_reporting": 0.1111111111111111,
"results_and_discussion": 0.07407407407407407,
"suggestion_and_solution": 0.5185185185185185
}
| 1.888889
|
iclr2024
|
openreview
| 0
| 0
| 0
| null |
||
ApjY32f3Xr
|
PINNacle: A Comprehensive Benchmark of Physics-Informed Neural Networks for Solving PDEs
|
While significant progress has been made on Physics-Informed Neural Networks (PINNs), a comprehensive comparison of these methods across a wide range of Partial Differential Equations (PDEs) is still lacking. This study introduces PINNacle, a benchmarking tool designed to fill this gap. PINNacle provides a diverse dataset, comprising over 20 distinct PDEs from various domains including heat conduction, fluid dynamics, biology, and electromagnetics. These PDEs encapsulate key challenges inherent to real-world problems, such as complex geometry, multi-scale phenomena, nonlinearity, and high dimensionality. PINNacle also offers a user-friendly toolbox, incorporating about 10 state-of-the-art PINN methods for systematic evaluation and comparison. We have conducted extensive experiments with these methods, offering insights into their strengths and weaknesses. In addition to providing a standardized means of assessing performance, PINNacle also offers an in-depth analysis to guide future research, particularly in areas such as domain decomposition methods and loss reweighting for handling multi-scale problems and complex geometry. While PINNacle does not guarantee success in all real-world scenarios, it represents a significant contribution to the field by offering a robust, diverse, and comprehensive benchmark suite that will undoubtedly foster further research and development in PINNs.
|
Under review as a conference paper at ICLR 2024
PINNacle: A Comprehensive Benchmark of Physics-Informed Neural Networks for Solving PDEs
Anonymous authors
Paper under double-blind review
ABSTRACT
While significant progress has been made on Physics-Informed Neural Networks
(PINNs), a comprehensive comparison of these methods across a wide range of
Partial Differential Equations (PDEs) is still lacking. This study introduces PIN-
Nacle, a benchmarking tool designed to fill this gap. PINNacle provides a diverse
dataset, comprising over 20 distinct PDEs from various domains, including heat
conduction, fluid dynamics, biology, and electromagnetics. These PDEs encapsu-
late key challenges inherent to real-world problems, such as complex geometry,
multi-scale phenomena, nonlinearity, and high dimensionality. PINNacle also of-
fers a user-friendly toolbox, incorporating about 10 state-of-the-art PINN methods
for systematic evaluation and comparison. We have conducted extensive experi-
ments with these methods, offering insights into their strengths and weaknesses. In
addition to providing a standardized means of assessing performance, PINNacle
also offers an in-depth analysis to guide future research, particularly in areas such
as domain decomposition methods and loss reweighting for handling multi-scale
problems and complex geometry. To the best of our knowledge, it is the largest
benchmark with a diverse and comprehensive evaluation that will undoubtedly
foster further research in PINNs.
1 Introduction
Partial Differential Equations (PDEs) are of paramount importance in science and engineering, as
they often underpin our understanding of intricate physical systems such as fluid flow, heat transfer,
and stress distribution (Morton & Mayers, 2005). The computational simulation of PDE systems
has been a focal point of research for an extensive period, leading to the development of numerical
methods such as finite difference (Causon & Mingham, 2010), finite element (Reddy, 2019), and
finite volume methods (Eymard et al., 2000).
Recent advancements have led to the use of deep neural networks to solve forward and inverse
problems involving PDEs (Raissi et al., 2019; Yu et al., 2018; Daw et al., 2017; Wang et al.,
2020). Among these, Physics-Informed Neural Networks (PINNs) have emerged as a promising
alternative to traditional numerical methods in solving such problems (Raissi et al., 2019; Karniadakis
et al., 2021). PINNs leverage the underlying physical laws and available data to effectively handle
various scientific and engineering applications. The growing interest in this field has spurred the
development of numerous PINN variants, each tailored to overcome specific challenges or to enhance
the performance of the original framework.
While PINN methods have achieved remarkable progress, a comprehensive comparison of these
methods across diverse types of PDEs is currently lacking. Establishing such a benchmark is crucial
as it could enable researchers to more thoroughly understand existing methods and pinpoint potential
challenges. Despite the availability of several studies comparing sampling methods (Wu et al., 2023)
and reweighting methods (Bischof & Kraus, 2021), there has been no concerted effort to develop
a rigorous benchmark using challenging datasets from real-world problems. The sheer variety and
inherent complexity of PDEs make it difficult to conduct a comprehensive analysis. Moreover,
different mathematical properties and application scenarios further complicate the task, requiring the
benchmark to be adaptable and exhaustive.
To resolve these challenges, we propose PINNacle, a comprehensive benchmark for evaluating and
understanding the performance of PINNs. As shown in Fig. 1, PINNacle consists of three major
[Figure 1 (diagram): PINNacle is built on DeepXDE with a PyTorch backend. Datasets/PDEs: linear (Poisson, Heat, Wave), nonlinear (Burgers, NS, GS, KS), high-dimensional (Poisson, Diffusion), with numerical reference solutions from FEM and spectral methods. Toolbox: reweighting (PINN-LRA, PINN-NTK), resampling (RAR), optimizer (MultiAdam), loss functions (gPINN, vPINN), network architecture (LAAF, GAAF), domain decomposition (FBPINN), metrics (L1/L2 relative error, MSE), visualization (temporal error analysis, 2D/3D plots). Evaluation: benchmark of PINNs on 20+ PDEs with 10+ SOTA methods. Applications: fluid mechanics, electromagnetism, heat conduction, geophysics.]
Figure 1: Architecture of PINNacle. It contains a dataset covering more than 20 PDEs, a toolbox that
implements about 10 SOTA methods, and an evaluation module. These methods have a wide range of
application scenarios like fluid mechanics, electromagnetism, heat conduction, geophysics, and so on.
components — a diverse dataset, a toolbox, and evaluation modules. The dataset comprises tasks
from over 20 different PDEs from various domains, including heat conduction, fluid dynamics,
biology, and electromagnetics. Each task brings its own set of challenges, such as complex geometry,
multi-scale phenomena, nonlinearity, and high dimensionality, thus providing a rich testing ground
for PINNs. The toolbox incorporates more than 10 state-of-the-art (SOTA) PINN methods, enabling
a systematic comparison of different strategies, including loss reweighting, variational formulation,
adaptive activations, and domain decomposition. These methods can be flexibly applied to the tasks
in the dataset, offering researchers a convenient way to evaluate the performance of PINNs which is
also user-friendly for secondary development. The evaluation modules provide a standardized means
of assessing the performance of different PINN methods across all tasks, ensuring consistency in
comparison and facilitating the identification of strengths and weaknesses in various methods.
PINNacle provides a robust, diverse, and comprehensive benchmark suite for PINNs, contributing
significantly to the field’s understanding and application. It represents a major step forward in the
evolution of PINNs which could foster more innovative research and development in this exciting
field. In a nutshell, our contributions can be summarized as follows:
• We design a dataset encompassing over 20 challenging PDE problems. These problems
encapsulate several critical challenges faced by PINNs, including handling complex geome-
tries, multi-scale phenomena, nonlinearity, and high-dimensional problems.
• We systematically evaluate more than 10 carefully selected representative variants of PINNs.
We conducted thorough experiments and ablation studies to evaluate their performance. To
the best of our knowledge, this is the largest benchmark comparing different PINN variants.
• We provide an in-depth analysis to guide future research. We show using loss reweighting
and domain decomposition methods could improve the performance on multi-scale and
complex geometry problems. Variational formulation achieves better performance on inverse
problems. However, few methods can adequately address nonlinear problems, indicating a
future direction for exploration and advancement.
2 Related Work
2.1 Benchmarks and Datasets in Scientific Machine Learning
The growing trend of AI applications in scientific research has stimulated the development of various
benchmarks and datasets, which differ greatly in data formats, sizes, and governing principles. For
instance, (Lu et al., 2022) presents a benchmark for comparing neural operators, while (Botev
et al., 2021; Otness et al., 2021) benchmarks methods for learning latent Newtonian mechanics.
Furthermore, domain-specific datasets and benchmarks exist in fluid mechanics (Huang et al., 2021),
climate science (Racah et al., 2017; Cachay et al., 2021), quantum chemistry (Axelrod & Gomez-
Bombarelli, 2022), and biology (Boutet et al., 2016).
Beyond these domain-specific datasets and benchmarks, physics-informed machine learning has
received considerable attention (Hao et al., 2022; Cuomo et al., 2022) since the advent of Physics-
Informed Neural Networks (PINNs) (Raissi et al., 2019). These methods successfully incorporate
physical laws into model training, demonstrating immense potential across a variety of scientific
and engineering domains. Various papers have compared different components within the PINN
framework; for instance, (Das & Tesfamariam, 2022) and (Wu et al., 2023) investigate the sampling
methods of collocation points in PINNs, and (Bischof & Kraus, 2021) compare reweighting tech-
niques for different loss components. PDEBench (Takamoto et al., 2022) and PDEArena (Gupta
& Brandstetter, 2022) design multiple tasks to compare different methods in scientific machine
learning such as PINNs, FNO, and U-Net. Nevertheless, a comprehensive comparison of various
PINN approaches remains absent in the literature.
2.2 Software and Toolboxes
A plethora of software solutions have been developed for solving PDEs with neural networks. These
include SimNet (Hennigh et al., 2021), NeuralPDE (Rackauckas & Nie, 2017), TorchDiffEq (Chen
et al., 2020), and PyDEns (Khudorozhkov et al., 2019). More recently, DeepXDE (Lu et al., 2021)
has been introduced as a fundamental library for implementing PINNs across different backends.
However, there remains a void for a toolbox that provides a unified implementation for advanced PINN
variants. Our PINNacle fills this gap by offering a flexible interface that facilitates the implementation
and evaluation of diverse PINN variants. We furnish clear and concise code for researchers to execute
benchmarks across all problems and methods.
2.3 Variants of Physics-Informed Neural Networks
PINNs have received much attention due to their remarkable performance in solving both
forward and inverse PDE problems. However, vanilla PINNs have many limitations. Researchers
have proposed numerous PINN variants to address challenges associated with high-dimensionality,
non-linearity, multi-scale issues, and complex geometries (Hao et al., 2022; Cuomo et al., 2022;
Karniadakis et al., 2021; Krishnapriyan et al., 2021). Broadly speaking, these variants can be
categorized into: loss reweighting/resampling (Wang et al., 2021a;b; Tang et al., 2021; Wu et al.,
2023; Nabian et al., 2021), innovative optimizers (Yao et al., 2023), novel loss functions such as
variational formulations (Yu et al., 2018; Kharazmi et al., 2019; 2021; Khodayi-Mehr & Zavlanos,
2020) or regularization terms (Yu et al., 2022; Son et al., 2021), and novel architectures like domain
decomposition (Jagtap & Karniadakis, 2021; Li et al., 2019; Moseley et al., 2021; Jagtap et al., 2020c)
and adaptive activations (Jagtap et al., 2020b;a). These variants have enhanced PINN’s performance
across various problems. Here we select representative methods from each category and conduct a
comprehensive analysis using our benchmark dataset to evaluate these variants.
3 PINNacle: A Hierarchical Benchmark for PINNs
In this section, we first introduce the preliminaries of PINNs. Then we introduce the details of
datasets (tasks), PINN methods, the toolbox framework, and the evaluation metrics.
3.1 Preliminaries of Physics-Informed Neural Networks
Physics-informed neural networks are neural network-based methods for solving PDEs as well as
inverse problems of PDEs, which have received much attention recently. Specifically, let’s consider a
general Partial Differential Equation (PDE) system defined on Ω, which can be represented as:
F(u(x); x) = 0 , x ∈ Ω, (1)
B(u(x); x) = 0 , x ∈ ∂Ω. (2)
where F is a differential operator and B is the boundary/initial condition. PINN uses a neural network
uθ(x) with parameters θ to approximate u(x). The objective of PINN is to minimize the following
loss function:
L(θ) = (w_c / N_c) Σ_{i=1}^{N_c} ‖F(u_θ(x_c^i); x_c^i)‖² + (w_b / N_b) Σ_{i=1}^{N_b} ‖B(u_θ(x_b^i); x_b^i)‖² + (w_d / N_d) Σ_{i=1}^{N_d} ‖u_θ(x_d^i) − u(x_d^i)‖²,  (3)
where w_c, w_b, w_d are weights. The first two terms enforce the PDE constraints at the collocation points {x_c^i}_{i=1..N_c} and the boundary conditions at {x_b^i}_{i=1..N_b}. The last term is the data loss, which is optional and used only when data are available. However, PINNs have several inherent drawbacks. First, PINNs optimize a mixture of imbalanced loss terms, which might hinder convergence, as illustrated in (Wang et al., 2021a). Second, nonlinear or stiff PDEs might lead to unstable optimization (Wang et al., 2021b). Third, vanilla MLPs might have difficulty representing multi-scale or high-dimensional functions. For example, (Krishnapriyan et al., 2021) shows that vanilla PINNs only work for a small parameter range, even in a simple convection problem. To resolve these challenges, numerous variants of PINNs have been proposed. However, a comprehensive comparison of these methods is lacking, and thus it is imperative to develop a benchmark.
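To make the structure of Eq. (3) concrete, here is a minimal, self-contained sketch that evaluates the composite PINN loss for the 1D Poisson problem u″(x) = −π² sin(πx) with u(0) = u(1) = 0. The problem, the candidate functions, and the finite-difference approximation of u″ (used in place of automatic differentiation) are illustrative assumptions, not part of PINNacle itself; the data-loss term of Eq. (3) is omitted.

```python
import math

def pinn_loss(u, n_c=64, w_c=1.0, w_b=1.0, h=1e-3):
    """Weighted PINN loss of Eq. (3) for u''(x) = -pi^2 sin(pi x), u(0)=u(1)=0.

    `u` is any candidate solution; u'' is approximated by a central
    difference so the sketch stays autodiff-free.
    """
    # PDE residual F(u; x) = u''(x) + pi^2 sin(pi x) at interior collocation points
    xs = [(i + 0.5) / n_c for i in range(n_c)]
    res = 0.0
    for x in xs:
        u_xx = (u(x + h) - 2.0 * u(x) + u(x - h)) / h**2
        res += (u_xx + math.pi**2 * math.sin(math.pi * x)) ** 2
    # Boundary residual B(u; x) = u(x) at the two boundary points {0, 1}
    bnd = u(0.0) ** 2 + u(1.0) ** 2
    return w_c * res / n_c + w_b * bnd / 2.0

exact = lambda x: math.sin(math.pi * x)  # true solution: near-zero loss
zero = lambda x: 0.0                     # poor candidate: large PDE residual
```

For the exact solution sin(πx), the loss reduces to the O(h²) discretization error of the stencil, while the zero function pays the full PDE residual; the reweighting methods of Section 3.3.1 are precisely about adjusting w_c and w_b during training.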
3.2 Datasets
To effectively compare PINN variants, we’ve curated a set of PDE problems (datasets) representing
a wide range of challenges. We chose PDEs from diverse domains, reflecting their importance in
science and engineering. Our dataset includes 22 unique cases, with further details in Appendix B.
• The Burgers’ Equation, fundamental to fluid mechanics, considering both one and two-
dimensional problems.
• The Poisson’s Equation, widely used in math and physics, with four different cases.
• The Heat Equation, a time-dependent PDE that describes diffusion or heat conduction,
demonstrated in four unique cases.
• The Navier-Stokes Equation, describing the motion of viscous fluid substances, showcased
in three scenarios: a lid-driven flow (NS2d-C), a geometrically complex backward step flow
(NS2d-CG), and a time-dependent problem (NS2d-LT).
• The Wave Equation, modeling wave behavior, exhibited in three cases.
• Chaotic PDEs , featuring two popular examples: the Gray-Scott (GS) and Kuramoto-
Sivashinsky (KS) equations.
• High Dimensional PDEs, including the high-dimensional Poisson equation (PNd) and the
high-dimensional diffusion or heat equation (HNd).
• Inverse Problems, focusing on the reconstruction of the coefficient field from noisy data
for the Poisson equation (PInv) and the diffusion equation (HInv).
It is important to note that we have chosen PDEs encompassing a wide range of mathematical
properties. This ensures that the benchmarks do not favor a specific type of PDE. The selected PDE
problems introduce several core challenges, which include:
• Complex Geometry: Many PDE problems involve complex or irregular geometry, such as
heat conduction or wave propagation around obstacles. These complexities pose significant
challenges for PINNs in terms of accurate boundary behavior representation.
• Multi-Scale Phenomena: Multi-scale phenomena, where the solution varies significantly
over different scales, are prevalent in situations such as turbulent fluid flow. Achieving a
balanced representation across all scales is a challenge for PINNs in multi-scale scenarios.
• Nonlinear Behavior: Many PDEs exhibit nonlinear or even chaotic behavior, where mi-
nor variations in initial conditions can lead to substantial divergence in outcomes. The
optimization of PINNs becomes intriguing on nonlinear PDEs.
• High Dimensionality: High-dimensional PDE problems, frequently encountered in quantum
mechanics, present significant challenges for PINNs due to the “curse of dimensionality”.
This term refers to the increase in computational complexity with the addition of each
dimension, accompanied by statistical issues like data sparsity in high-dimensional space.
These challenges are selected due to their frequent occurrence in numerous real-world applications.
As such, a method’s performance in addressing these challenges serves as a reliable indicator of
its overall practical utility. Table 1 presents a detailed overview of the dataset, the PDEs, and the
challenges associated with these problems. We generate data using FEM solver provided by COMSOL
6.0 (Multiphysics, 1998) for problems with complex geometry and spectral method provided by
Chebfun (Driscoll et al., 2014) for chaotic problems. More details can be found in Appendix B.
[Figure: example solutions illustrating the four challenges: A. complex geometry; B. multi-scale; C. nonlinear and chaotic; D. high dimension.]

Dataset          | Complex geometry | Multi-scale | Nonlinearity | High dim
Burgers (1∼2)    | ×                | ×           | √            | ×
Poisson (3∼6)    | ×/√              | ×/√         | ×            | ×
Heat (7∼10)      | ×/√              | ×/√         | ×            | ×
NS (11∼13)       | ×/√              | ×/√         | √            | ×
Wave (14∼16)     | ×/√              | ×/√         | ×            | ×
Chaotic (17∼18)  | ×                | √           | √            | ×
High dim (19∼20) | ×                | ×           | ×            | √
Inverse (21∼22)  | ×                | ×           | √            | ×
Table 1: Overview of our datasets along with their challenges. We chose 22 cases in total to evaluate
the methods of PINNs. The left picture shows the visualization of cases with these four challenges,
i.e., complex geometry, multi-scale, nonlinearity, and high dimension.
3.3 Methods and Toolbox
After conducting an extensive literature review, we present an overview of diverse PINNs approaches
for comparison. Then we present the high-level structure of our PINNacle.
3.3.1 Methods
As mentioned above, variants of PINNs are mainly based on loss functions, architecture, and optimizer
(Hao et al., 2022). The modifications to loss functions can be divided into reweighting existing
losses and developing novel loss functions like regularization and variational formulation. Variants of
architectures include using domain decomposition and adaptive activations.
The methods discussed are directly correlated with the challenges highlighted in Table 1. For example,
domain decomposition methods are particularly effective for problems involving complex geometries
and multi-scale phenomena. Meanwhile, loss reweighting strategies are adept at addressing imbal-
ances in problems with multiple losses. We have chosen variants from these categories based on their
significant contributions to the field. Here, we list the primary categories and representative methods
as summarized in Table 2:
• Loss reweighting/Resampling (2∼4): PINNs are trained with a mixed loss of PDE residuals,
boundary conditions, and available data losses shown in Eq 3. Various methods (Wang
et al., 2021a; 2022b; Bischof & Kraus, 2021; Maddu et al., 2022; Rohrhofer et al., 2021)
propose different strategies to adjust these weights wc, wb and wd at different epochs or
resample collocation points {xi
c} and {xi
b} in Eq 3, which indirectly adjust the weights (Wu
et al., 2023; Nabian et al., 2021). We choose three famous examples, i.e., reweighting using
gradient norms (PINN-LRA) (Wang et al., 2021a), using neural tangent kernel (PINN-NTK)
(Wang et al., 2022b), and residual-based resampling (RAR)(Lu et al., 2021; Wu et al., 2023).
• Novel optimizer (5): To handle the problem of multi-scale objectives, new optimizers (Liu et al., 2022; Yao et al., 2023) have been proposed. We chose MultiAdam, which is resistant to changes in domain scale.
• Novel loss functions (6∼7): Some works introduce novel loss functions like variational
formulation (Sirignano & Spiliopoulos, 2018; Khodayi-Mehr & Zavlanos, 2020; Kharazmi
et al., 2021) and regularization terms to improve training. We choose hp-VPINN (Kharazmi
et al., 2021) and gPINN (Yu et al., 2022; Son et al., 2021), which are representative examples
from these two categories.
• Novel activation architectures (8∼10): Some works propose various network architectures,
such as using CNN and LSTM (Zhang et al., 2020; Gao et al., 2021; Ren et al., 2022),
Method                        | Complex geometry | Multi-scale | Nonlinearity | High dim
Vanilla PINN (1)              | ×                | ×           | ×            | ×
Reweighting/Resampling (2∼4)  | √                | √           | ×            | ×
Novel optimizer (5)           | ×                | √           | ×            | ×
Novel loss functions (6∼7)    | ×                | ×           | ×            | ×
Novel architecture (8∼10)     | √                | √           | ×            | ×

Table 2: Overview of the methods in PINNacle. √ denotes that the method is designed to address, or shows empirical improvements on, problems with the corresponding challenge, and × that it is not. (Method indices follow the numbering of Section 3.3.1.)
custom activation functions (Jagtap et al., 2020a;b), and domain decomposition (Jagtap &
Karniadakis, 2021; Shukla et al., 2021; Jagtap et al., 2020c; Moseley et al., 2021). Among
adaptive activations for PINNs, we choose LAAF (Jagtap et al., 2020a) and GAAF (Jagtap
et al., 2020b). Domain decomposition is a method that divides the whole domain into
multiple subdomains and trains subnetworks on these subdomains. It is helpful for solving
multi-scale problems, but multiple subnetworks increase the difficulty of training. XPINNs,
cPINNs, and FBPINNs (Jagtap & Karniadakis, 2021; Jagtap et al., 2020c; Moseley et al.,
2021) are three representative examples. We choose FBPINNs which is the state-of-the-art
domain decomposition that applies domain-specific normalization to stabilize training.
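To illustrate the loss-reweighting family above, the following sketch implements a simplified version of the learning-rate-annealing update used by PINN-LRA (Wang et al., 2021a): the boundary weight is set from the ratio of the maximum PDE-residual gradient magnitude to the mean boundary-loss gradient magnitude, smoothed by a moving average. The gradient statistics are passed in as plain lists here; in a real PINN they would come from backpropagation.

```python
def lra_update(w_b, grad_res, grad_bnd, alpha=0.1):
    """One learning-rate-annealing step (Wang et al., 2021a), simplified.

    w_b      -- current boundary-loss weight
    grad_res -- per-parameter gradients of the PDE residual loss
    grad_bnd -- per-parameter gradients of the boundary loss
    The candidate weight is max|grad_res| / mean|grad_bnd|, blended into
    w_b by a moving average controlled by alpha.
    """
    w_hat = max(abs(g) for g in grad_res) / (
        sum(abs(g) for g in grad_bnd) / len(grad_bnd)
    )
    return (1.0 - alpha) * w_b + alpha * w_hat
```

Starting from w_b = 1 with residual gradients an order of magnitude larger than boundary gradients, repeated updates drive w_b upward until the two loss terms produce gradient signals of comparable size.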
3.3.2 Structure of the Toolbox
We provide a user-friendly and concise toolbox for implementing, training, and evaluating diverse
PINN variants. Specifically, our codebase is based on DeepXDE and provides a series of encapsulated
classes and functions to facilitate high-level training and custom PDEs. These utilities allow for a
standardized and streamlined approach to the implementation of various PINN variants and PDEs.
Moreover, we provide many auxiliary functions, including computing different metrics, visualizing
predictions, and recording results.
In addition to the unified implementation of diverse PINNs, we design an adaptive multi-GPU parallel training framework to enhance the efficiency of systematic evaluations of PINN methods. It parallelizes training across multiple tasks, effectively balancing the computational loads of multiple GPUs and enabling the execution of larger and more complex tasks. We provide example code for training and evaluating PINNs on two Poisson equations with our PINNacle framework in Appendix D.
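The paper does not spell out how the multi-GPU framework balances its loads; a natural baseline, sketched below with a hypothetical function and made-up cost estimates (not PINNacle's actual API), is greedy longest-processing-time scheduling: sort tasks by estimated cost and repeatedly assign each one to the currently least-loaded GPU.

```python
import heapq

def balance_tasks(task_costs, n_gpus):
    """Greedy longest-processing-time (LPT) assignment of tasks to GPUs.

    task_costs -- {task_name: estimated_cost}
    Returns {gpu_index: [task_names]}. Sorting by descending cost and
    always picking the least-loaded GPU keeps the maximum per-GPU load
    close to the optimum.
    """
    heap = [(0.0, gpu) for gpu in range(n_gpus)]  # (current load, gpu index)
    heapq.heapify(heap)
    assignment = {gpu: [] for gpu in range(n_gpus)}
    for name, cost in sorted(task_costs.items(), key=lambda kv: -kv[1]):
        load, gpu = heapq.heappop(heap)
        assignment[gpu].append(name)
        heapq.heappush(heap, (load + cost, gpu))
    return assignment
```

For five example tasks with costs 8, 5, 3, 2, and 2, both GPUs end up with a load of 10; LPT is known to stay within 4/3 of the optimal makespan.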
3.4 Evaluation
To comprehensively analyze the discrepancy between the PINN solutions and the true solutions,
we adopt multiple metrics to evaluate the performance of the PINN variants. Generally, we choose
several metrics that are commonly used in the literature and that apply to all methods and problems. We suppose that y = (y_i)_{i=1}^n is the prediction and y′ = (y′_i)_{i=1}^n is the ground truth, where n is the number of testing examples. Specifically, we use the ℓ2 relative error (L2RE) and the ℓ1 relative error (L1RE), the two most commonly used metrics for measuring the global quality of the solution:

L2RE = √( Σ_{i=1}^{n} (y_i − y′_i)² / Σ_{i=1}^{n} (y′_i)² ),   L1RE = ( Σ_{i=1}^{n} |y_i − y′_i| ) / ( Σ_{i=1}^{n} |y′_i| ).  (4)
We also compute max error (mERR in short), mean square error (MSE), and Fourier error (fMSE) for
a detailed analysis of the prediction. These three metrics are computed as follows:

MSE = (1/n) Σ_{i=1}^{n} (y_i − y′_i)²,   mERR = max_i |y_i − y′_i|,   fMSE = √( Σ_{k=k_min}^{k_max} |F(y) − F(y′)|² ) / (k_max − k_min + 1),  (5)
where F denotes the Fourier transform and k_min, k_max are chosen similarly to PDEBench (Takamoto et al., 2022). Besides, for time-dependent problems, investigating how the quality of the solution evolves with time is important. Therefore we report the L2RE as a function of time in Appendix E.2.
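The global metrics of Eqs. (4) and (5) are straightforward to compute; the plain-Python sketch below covers L2RE, L1RE, MSE, and mERR (the spectral fMSE is omitted, since it requires an FFT and per-problem band limits k_min, k_max).

```python
import math

def l2re(y, y_true):
    """Relative l2 error: sqrt(sum (y - y')^2 / sum y'^2), Eq. (4)."""
    num = sum((a - b) ** 2 for a, b in zip(y, y_true))
    den = sum(b ** 2 for b in y_true)
    return math.sqrt(num / den)

def l1re(y, y_true):
    """Relative l1 error: sum |y - y'| / sum |y'|, Eq. (4)."""
    return sum(abs(a - b) for a, b in zip(y, y_true)) / sum(abs(b) for b in y_true)

def mse(y, y_true):
    """Mean square error, Eq. (5)."""
    return sum((a - b) ** 2 for a, b in zip(y, y_true)) / len(y)

def merr(y, y_true):
    """Maximum pointwise error, Eq. (5)."""
    return max(abs(a - b) for a, b in zip(y, y_true))
```

A useful sanity check: a prediction that is identically zero has L2RE = 1, which is why errors near 100% in Table 3 indicate a method that has learned essentially nothing about the solution.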
We assess the performance of PINNs against the reference from numerical solvers. Experimental
results utilizing the ℓ2 relative error (L2RE) metric are incorporated within the main text, while a
more exhaustive set of results, based on the aforementioned metrics, is available in the Appendix E.1.
Mean L2RE of each method (columns grouped as: Vanilla = PINN, PINN-w, LBFGS; Loss reweighting/sampling = LRA, NTK, RAR; Optimizer = MultiAdam; Loss functions = gPINN, vPINN; Architecture = LAAF, GAAF, FBPINN):

Problem  | Case  | PINN    | PINN-w  | LBFGS   | LRA     | NTK     | RAR     | MultiAdam | gPINN   | vPINN   | LAAF    | GAAF    | FBPINN
Burgers  | 1d-C  | 1.45E-2 | 2.63E-2 | 1.33E-2 | 2.61E-2 | 1.84E-2 | 3.32E-2 | 4.85E-2   | 2.16E-1 | 3.47E-1 | 1.43E-2 | 5.20E-2 | 2.32E-1
Burgers  | 2d-C  | 3.24E-1 | 2.70E-1 | 4.65E-1 | 2.60E-1 | 2.75E-1 | 3.45E-1 | 3.33E-1   | 3.27E-1 | 6.38E-1 | 2.77E-1 | 2.95E-1 | –
Poisson  | 2d-C  | 6.94E-1 | 3.49E-2 | NaN     | 1.17E-1 | 1.23E-2 | 6.99E-1 | 2.63E-2   | 6.87E-1 | 4.91E-1 | 7.68E-1 | 6.04E-1 | 4.49E-2
Poisson  | 2d-CG | 6.36E-1 | 6.08E-2 | 2.96E-1 | 4.34E-2 | 1.43E-2 | 6.48E-1 | 2.76E-1   | 7.92E-1 | 2.86E-1 | 4.80E-1 | 8.71E-1 | 2.90E-2
Poisson  | 3d-CG | 5.60E-1 | 3.74E-1 | 7.05E-1 | 1.02E-1 | 9.47E-1 | 5.76E-1 | 3.63E-1   | 4.85E-1 | 7.38E-1 | 5.79E-1 | 5.02E-1 | 7.39E-1
Poisson  | 2d-MS | 6.30E-1 | 7.60E-1 | 1.45E+0 | 7.94E-1 | 7.48E-1 | 6.44E-1 | 5.90E-1   | 6.16E-1 | 9.72E-1 | 5.93E-1 | 9.31E-1 | 1.04E+0
Heat     | 2d-VC | 1.01E+0 | 2.35E-1 | 2.32E-1 | 2.12E-1 | 2.14E-1 | 9.66E-1 | 4.75E-1   | 2.12E+0 | 9.40E-1 | 6.42E-1 | 8.49E-1 | 9.52E-1
Heat     | 2d-MS | 6.21E-2 | 2.42E-1 | 1.73E-2 | 8.79E-2 | 4.40E-2 | 7.49E-2 | 2.18E-1   | 1.13E-1 | 9.30E-1 | 7.40E-2 | 9.85E-1 | 8.20E-2
Heat     | 2d-CG | 3.64E-2 | 1.45E-1 | 8.57E-1 | 1.25E-1 | 1.16E-1 | 2.72E-2 | 7.12E-2   | 9.38E-2 | –       | 2.39E-2 | 4.61E-1 | 9.16E-2
Heat     | 2d-LT | 9.99E-1 | 9.99E-1 | 1.00E+0 | 9.99E-1 | 1.00E+0 | 9.99E-1 | 1.00E+0   | 1.00E+0 | 1.00E+0 | 9.99E-1 | 9.99E-1 | 1.01E+0
NS       | 2d-C  | 4.70E-2 | 1.45E-1 | 2.14E-1 | NaN     | 1.98E-1 | 4.69E-1 | 7.27E-1   | 7.70E-2 | 2.92E-1 | 3.60E-2 | 3.79E-2 | 8.45E-2
NS       | 2d-CG | 1.19E-1 | 3.26E-1 | NaN     | 3.32E-1 | 2.93E-1 | 3.34E-1 | 4.31E-1   | 1.54E-1 | 9.94E-1 | 8.24E-2 | 1.74E-1 | 8.27E+0
NS       | 2d-LT | 9.96E-1 | 1.00E+0 | 9.70E-1 | 1.00E+0 | 9.99E-1 | 1.00E+0 | 1.00E+0   | 9.95E-1 | 1.73E+0 | 9.98E-1 | 9.99E-1 | 1.00E+0
Wave     | 1d-C  | 5.88E-1 | 2.85E-1 | NaN     | 3.61E-1 | 9.79E-2 | 5.39E-1 | 1.21E-1   | 5.56E-1 | 8.39E-1 | 4.54E-1 | 6.77E-1 | 5.91E-1
Wave     | 2d-CG | 1.84E+0 | 1.66E+0 | 1.33E+0 | 1.48E+0 | 2.16E+0 | 1.15E+0 | 1.09E+0   | 8.14E-1 | 7.99E-1 | 8.19E-1 | 7.94E-1 | 1.06E+0
Wave     | 2d-MS | 1.34E+0 | 1.02E+0 | 1.37E+0 | 1.02E+0 | 1.04E+0 | 1.35E+0 | 1.01E+0   | 1.02E+0 | 9.82E-1 | 1.06E+0 | 1.06E+0 | 1.03E+0
Chaotic  | GS    | 3.19E-1 | 1.58E-1 | NaN     | 9.37E-2 | 2.16E-1 | 9.46E-2 | 9.37E-2   | 2.48E-1 | 1.16E+0 | 9.47E-2 | 9.46E-2 | 7.99E-2
Chaotic  | KS    | 1.01E+0 | 9.86E-1 | NaN     | 9.57E-1 | 9.64E-1 | 1.01E+0 | 9.61E-1   | 9.94E-1 | 9.72E-1 | 1.01E+0 | 1.00E+0 | 1.02E+0
High dim | PNd   | 3.04E-3 | 2.58E-3 | 4.67E-4 | 4.58E-4 | 4.64E-3 | 3.59E-3 | 3.98E-3   | 5.05E-3 | –       | 4.14E-3 | 7.75E-3 | –
High dim | HNd   | 3.61E-1 | 4.59E-1 | 1.19E-4 | 3.94E-1 | 3.97E-1 | 3.57E-1 | 3.02E-1   | 3.17E-1 | –       | 5.22E-1 | 5.21E-1 | –
Inverse  | PInv  | 9.42E-2 | 1.66E-1 | NaN     | 1.54E-1 | 1.93E-1 | 9.35E-2 | 1.30E-1   | 8.03E-2 | 1.96E-2 | 1.30E-1 | 2.54E-1 | 8.44E-1
Inverse  | HInv  | 1.57E+0 | 5.26E-2 | NaN     | 5.09E-2 | 7.52E-2 | 1.52E+0 | 8.04E-2   | 4.84E+0 | 1.19E-2 | 5.59E-1 | 2.12E-1 | 9.27E-1

Table 3: Mean L2RE of different PINN variants on our benchmark. Best results are highlighted in blue and second-best results in light blue. We do not bold any result if the errors of all methods are about 100%. "NaN" means the method does not converge and "–" means the method is not suitable for the problem.
4 Experiments
4.1 Main Results
We now present experimental results. Except for the ablation study in Sec 4.3 and Appendix E.2, we
use a learning rate of 0.001 and train all models for 20,000 epochs. We repeat all experiments three
times and record the mean and standard deviation. Table 3 presents the main results for all methods on our tasks and
shows their average ℓ2 relative errors (with standard deviation results available in Appendix E.1).
PINN. We use PINN-w to denote training PINNs with larger boundary weights. The vanilla PINNs
struggle to accurately solve complex physics systems, indicating substantial room for improvement.
Using an ℓ2 relative error (L2RE) of 10% as a threshold for a successful solution, we find that vanilla
PINN only solves 10 out of 22 tasks, most of which involve simpler equations (e.g., 1.45% on
Burgers-1d-C). They encounter significant difficulties when faced with physics systems characterized
by complex geometries, multi-scale phenomena, nonlinearity, and longer time spans. This shows
that directly optimizing an average of the PDE losses and initial/boundary condition losses leads to
critical issues such as loss imbalance, suboptimal convergence, and limited expressiveness.
PINN variants. PINN variants offer approaches to addressing some of these challenges to varying
degrees. Methods involving loss reweighting and resampling have shown improved performance in
some cases involving complex geometries and multi-scale phenomena (e.g., 1.43% on Poisson-2d-
CG). This is due to the configuration of loss weights and sampled collocation points, which adaptively
place more weight on more challenging domains during the training process. However, these methods
still struggle with Wave equations, Navier-Stokes equations, and other cases with higher dimensions
or longer time spans. MultiAdam, a representative of novel optimizers, solves several simple cases
and the chaotic GS equation (9.37%), but does not significantly outperform other methods. The new
loss term of variational form demonstrates significant superiority in solving inverse problems (e.g.,
1.19% on HInv for vPINN), but no clear improvement in fitting error over standard PINN in forward
cases. Changes in architecture can enhance expressiveness and flexibility for cases with complex
geometries and multi-scale systems. For example, FBPINN achieves the smallest error on the chaotic
GS equation (7.99%), while LAAF delivers the best fitting result on Heat-2d-CG (2.39%).
Problem    | Case  | PINN    | LRA     | NTK     | RAR     | MultiAdam | gPINN   | vPINN   | LAAF    | GAAF    | FBPINN
Burgers-P  | 2d-C  | 4.74E-1 | 4.36E-1 | 4.13E-1 | 4.71E-1 | 4.93E-1   | 4.91E-1 | 2.82E+0 | 4.37E-1 | 4.34E-1 | –
Poisson-P  | 2d-C  | 2.24E-1 | 7.07E-2 | 1.66E-2 | 2.33E-1 | 8.24E-2   | 4.03E-1 | 5.51E-1 | 1.84E-1 | 2.97E-1 | 2.87E-2
Heat-P     | 2d-MS | 1.73E-1 | 1.23E-1 | 1.50E-1 | 1.53E-1 | 4.00E-1   | 4.59E-1 | 5.12E-1 | 6.27E-2 | 1.89E-1 | 2.46E-1
NS-P       | 2d-C  | 3.89E-1 | –       | 4.52E-1 | 3.91E-1 | 9.33E-1   | 7.19E-1 | 3.76E-1 | 3.63E-1 | 4.85E-1 | 3.99E-1
Wave-P     | 1d-C  | 5.22E-1 | 3.44E-1 | 2.69E-1 | 5.05E-1 | 6.89E-1   | 7.66E-1 | 3.58E-1 | 4.03E-1 | 9.00E-1 | 1.15E+0
High dim-P | HNd   | 7.66E-3 | 6.53E-3 | 9.04E-3 | 8.07E-3 | 2.22E-3   | 7.87E-3 | –       | 6.97E-3 | 1.94E-1 | –

Table 4: Results of different PINN variants on parametric PDEs. We report the average L2RE over all
examples within a class of PDE. The best results across all methods are bolded.
Discussion. For challenges related to complex geometries and multi-scale phenomena, some methods
can mitigate these issues by implementing mechanisms like loss reweighting, novel optimizers, and
better capacity through adaptive activation. This holds true for the 2D cases of Heat and Poisson
equations, which are classic linear equations. However, when systems have higher dimensions
(Poisson3d-CG) or longer time spans (Heat2d-LT), all methods fail to solve them, highlighting the
difficulties associated with complex geometries and multi-scale systems.
In contrast, nonlinear, long-time PDEs like 2D Burgers, NS, and KS pose challenges for most
methods. These equations are sensitive to initial conditions, resulting in complicated solution spaces
and more local minima for PINNs (Steger et al., 2022). The Wave equation, featuring a second-order
time derivative and periodic behavior, is particularly hard for PINNs, which often become unstable
and may violate conservation laws (Jagtap et al., 2020c; Wang et al., 2022a). Although all methods
perform well on Poisson-Nd, only PINN with LBFGS solves Heat-Nd, indicating the potential of a
second-order optimizer for solving high-dimensional PDEs (Tang et al., 2021).
4.2 PARAMETERIZED PDE EXPERIMENTS
The performance of PINNs is also highly influenced by parameters in PDEs (Krishnapriyan et al.,
2021). To investigate whether PINNs could handle a class of PDEs, we design this experiment to
solve the same PDEs with different parameters. We choose 6 PDEs, i.e., Burgers2d-C, Poisson2d-C,
Heat2d-MS, NS-C, Wave1d-C, and Heat-Nd (HNd), with each case containing five parameterized
examples. Details of the parametrized PDEs are shown in Appendix B. Here we report the average
L2RE metric on these parameterized PDEs for every case; the results are shown in Table 4. First, we see that, compared with the corresponding cases in Table E.1, the mean L2RE of
parameterized PDEs is usually higher. We suggest that this is because there are some difficult cases
under certain parameters for these PDEs with very high errors. Secondly, we find that PINN-NTK performs well on parameterized PDE tasks, achieving the best result in three of the six experiments.
We speculate that solving PDEs with different parameters requires different weights for loss terms,
and PINN-NTK is a powerful method for automatically balancing these weights.
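The loss-weight balancing performed by methods such as LRA and PINN-NTK can be sketched in a few lines: rescale the boundary/initial loss so that the magnitude of its parameter gradient matches that of the PDE-residual gradient, smoothed with a moving average. The update below only follows the spirit of learning-rate annealing (Wang et al., 2021a); the statistics PINN-NTK actually uses are kernel-based, and the function name and hyperparameters here are illustrative:

```python
import numpy as np

def update_loss_weight(grad_res, grad_bc, lam_prev, alpha=0.9):
    """One balancing step: propose a weight that matches the boundary-loss
    gradient magnitude to the residual-loss gradient magnitude, then smooth
    it with an exponential moving average. grad_res and grad_bc are the
    flattened parameter gradients of the two loss terms."""
    target = np.max(np.abs(grad_res)) / np.mean(np.abs(grad_bc))
    return alpha * lam_prev + (1.0 - alpha) * target

# If residual gradients are 4x larger than boundary gradients, the weight
# on the boundary loss is nudged upward from 1.0 toward 4.0.
print(update_loss_weight(np.array([4.0, -4.0]), np.array([1.0, 1.0]), 1.0))  # ~1.3
```

Solving the same PDE under different parameters shifts the relative gradient scales of the loss terms, which is one plausible reason an automatic balancing scheme helps here.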
4.3 HYPERPARAMETER ANALYSIS
The performance of PINNs is strongly affected by hyperparameters, with each variant potentially
introducing its own unique set. Here we aim to investigate the impact of several shared and method-
specific hyperparameters via ablation studies. We consider the batch size and the number of training
epochs. The corresponding results are shown in Figure 2. We focus on a set of problems, i.e.,
Burgers1d, GS, Heat2d-CG, and Poisson2d-C. Detailed numerical results and additional findings are
in Appendix E.2.
Batch Size. The left panel of Figure 2 shows the testing L2RE of PINNs for varying batch sizes. It
suggests that larger batch sizes provide superior results, possibly due to improved gradient estimation
accuracy. Although GS and Poisson2d-C saturate beyond a batch size of 2048, increasing the batch size generally improves results for the remaining problems.
Training Epochs. The right panel of Figure 2 illustrates the L2RE of PINNs across varying training
epochs. Mirroring the trend observed with batch size, we find that an increase in epochs leads to a
decrease in overall error. However, the impact of the number of epochs also has a saturation point.
Notably, beyond 20k or 80k epochs, the error does not decrease significantly, suggesting that PINNs converge within roughly 20k∼80k epochs.
[Figure 2: bar charts of test L2RE on Burgers1d, GS, Heat2d-CG, and Poisson2d-C for batch sizes 512, 2048, 8192, and 32768 (left) and for 5k, 20k, 80k, and 160k training epochs (right).]
Figure 2: Performance of vanilla PINNs under different batch sizes (number of collocation points),
which is shown in the left figure; and number of training epochs, which is shown in the right figure.
[Figure 3: L2RE convergence curves over 20000 iterations for Burgers1d, Heat2d-CG, and Poisson2d-C under learning rates 1e-2, 1e-3, 1e-4, 1e-5 and two step-decay schedules (stepdecay1, stepdecay2).]
Figure 3: Convergence curve of PINNs with different learning rate schedules on Burgers1d, Heat2d-
CG, and Poisson2d-C.
Learning Rates. The performance of standard PINNs under various learning rates and learning rate
schedules is shown in Figure 3. We observe that the influence of the learning rate on performance is
intricate, with optimal learning rates varying across problems. Furthermore, PINN training tends to
be unstable. High learning rates, such as 10−2, often lead to error spikes, while low learning rates,
like 10−5, result in slow convergence. Our findings suggest that a moderate learning rate, such as
10−3 or 10−4, or a step decay learning rate schedule, tends to yield more stable performance.
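The step-decay schedules above (stepdecay1, stepdecay2) are piecewise-constant learning rates. A minimal sketch, assuming an illustrative halving interval; the benchmark's actual decay factors and intervals are configuration details not reproduced here:

```python
def step_decay(lr0, step, gamma=0.5, interval=5000):
    """Piecewise-constant schedule: multiply the base rate lr0 by gamma
    once every `interval` optimization steps."""
    return lr0 * gamma ** (step // interval)

# Starting from 1e-3, the rate halves at steps 5000, 10000, ...
print(step_decay(1e-3, 0))      # 0.001
print(step_decay(1e-3, 12000))  # 0.00025
```

A schedule of this shape combines the fast early progress of a large rate with the late-stage stability of a small one, consistent with the behavior seen in Figure 3.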
5 CONCLUSION AND DISCUSSION
In this work, we introduced PINNacle, a comprehensive benchmark offering a user-friendly toolbox
that encompasses over 10 PINN methods. We evaluated these methods against more than 20 challenging PDE problems, conducting extensive experiments and ablation studies for hyperparameters.
Looking forward, we plan to expand the benchmark by integrating additional state-of-the-art methods
and incorporating more practical problem scenarios.
Our analysis of the experimental results yields several key insights. First, domain decomposition is
beneficial for addressing problems characterized by complex geometries, and our experiments show that PINN-NTK is a strong method for balancing loss weights. Second, the choice of hyperparameters
is crucial to the performance of PINNs. Selecting a larger batch size and appropriately weighting
losses or betas in Adam may significantly reduce the error. However, the best hyperparameters
usually vary with PDEs. Third, we identify high-dimensional and nonlinear problems as a pressing
challenge. The overall performance of PINNs is not yet on par with traditional numerical methods
(Grossmann et al., 2023). Fourth, from a theoretical standpoint, the exploration of PINNs’ loss
landscape during the training of high-dimensional or nonlinear problems remains largely unexplored.
Finally, from an algorithmic perspective, integrating the strengths of neural networks with numerical
methods like preconditioning, weak formulation, and multigrid may present a promising avenue
toward overcoming the challenges identified herein (Markidis, 2021).
REFERENCES
Simon Axelrod and Rafael Gomez-Bombarelli. Geom, energy-annotated molecular conformations
for property prediction and molecular generation. Scientific Data, 9(1):185, 2022.
Rafael Bischof and Michael Kraus. Multi-objective loss balancing for physics-informed deep learning.
arXiv preprint arXiv:2110.09813, 2021.
Aleksandar Botev, Andrew Jaegle, Peter Wirnsberger, Daniel Hennes, and Irina Higgins. Which priors
matter? benchmarking models for learning latent dynamics. arXiv preprint arXiv:2111.05458,
2021.
Emmanuel Boutet, Damien Lieberherr, Michael Tognolli, Michel Schneider, Parit Bansal, Alan J
Bridge, Sylvain Poux, Lydie Bougueleret, and Ioannis Xenarios. Uniprotkb/swiss-prot, the
manually annotated section of the uniprot knowledgebase: how to use the entry view. Plant
bioinformatics: methods and protocols, pp. 23–54, 2016.
Salva Rühling Cachay, Venkatesh Ramesh, Jason NS Cole, Howard Barker, and David Rolnick.
Climart: A benchmark dataset for emulating atmospheric radiative transfer in weather and climate
models. arXiv preprint arXiv:2111.14671, 2021.
DM Causon and CG Mingham. Introductory finite difference methods for PDEs. Bookboon, 2010.
Feiyu Chen, David Sondak, Pavlos Protopapas, Marios Mattheakis, Shuheng Liu, Devansh Agarwal,
and Marco Di Giovanni. Neurodiffeq: A python package for solving differential equations with
neural networks. Journal of Open Source Software, 5(46):1931, 2020.
Salvatore Cuomo, Vincenzo Schiano Di Cola, Fabio Giampaolo, Gianluigi Rozza, Maziar Raissi, and
Francesco Piccialli. Scientific machine learning through physics–informed neural networks: where
we are and what’s next. Journal of Scientific Computing, 92(3):88, 2022.
Sourav Das and Solomon Tesfamariam. State-of-the-art review of design of experiments for physics-
informed deep learning. arXiv preprint arXiv:2202.06416, 2022.
Arka Daw, Anuj Karpatne, William Watkins, Jordan Read, and Vipin Kumar. Physics-guided neural
networks (pgnn): An application in lake temperature modeling. arXiv preprint arXiv:1710.11431,
2017.
Tobin A Driscoll, Nicholas Hale, and Lloyd N Trefethen. Chebfun guide, 2014.
Robert Eymard, Thierry Gallouët, and Raphaèle Herbin. Finite volume methods. Handbook of
numerical analysis, 7:713–1018, 2000.
Han Gao, Luning Sun, and Jian-Xun Wang. Phygeonet: Physics-informed geometry-adaptive
convolutional neural networks for solving parameterized steady-state pdes on irregular domain.
Journal of Computational Physics, 428:110079, 2021.
Tamara G Grossmann, Urszula Julia Komorowska, Jonas Latz, and Carola-Bibiane Schönlieb.
Can physics-informed neural networks beat the finite element method? arXiv preprint
arXiv:2302.04107, 2023.
Jayesh K Gupta and Johannes Brandstetter. Towards multi-spatiotemporal-scale generalized pde
modeling. arXiv preprint arXiv:2209.15616, 2022.
Zhongkai Hao, Songming Liu, Yichi Zhang, Chengyang Ying, Yao Feng, Hang Su, and Jun Zhu.
Physics-informed machine learning: A survey on problems, methods and applications. arXiv
preprint arXiv:2211.08064, 2022.
Oliver Hennigh, Susheela Narasimhan, Mohammad Amin Nabian, Akshay Subramaniam, Kaustubh Tangsali, Zhiwei Fang, Max Rietmann, Wonmin Byeon, and Sanjay Choudhry. Nvidia
simnet™: An ai-accelerated multi-physics simulation framework. In International Conference on
Computational Science, pp. 447–461. Springer, 2021.
Zizhou Huang, Teseo Schneider, Minchen Li, Chenfanfu Jiang, Denis Zorin, and Daniele Panozzo.
A large-scale benchmark for the incompressible navier-stokes equations. arXiv preprint
arXiv:2112.05309, 2021.
Ameya D Jagtap and George E Karniadakis. Extended physics-informed neural networks (xpinns): A
generalized space-time domain decomposition based deep learning framework for nonlinear partial
differential equations. In AAAI Spring Symposium: MLPS, 2021.
Ameya D Jagtap, Kenji Kawaguchi, and George Em Karniadakis. Locally adaptive activation
functions with slope recovery for deep and physics-informed neural networks. Proceedings of the
Royal Society A, 476(2239):20200334, 2020a.
Ameya D Jagtap, Kenji Kawaguchi, and George Em Karniadakis. Adaptive activation functions
accelerate convergence in deep and physics-informed neural networks. Journal of Computational
Physics, 404:109136, 2020b.
Ameya D Jagtap, Ehsan Kharazmi, and George Em Karniadakis. Conservative physics-informed
neural networks on discrete domains for conservation laws: Applications to forward and inverse
problems. Computer Methods in Applied Mechanics and Engineering, 365:113028, 2020c.
George Em Karniadakis, Ioannis G Kevrekidis, Lu Lu, Paris Perdikaris, Sifan Wang, and Liu Yang.
Physics-informed machine learning. Nature Reviews Physics, 3(6):422–440, 2021.
Ehsan Kharazmi, Zhongqiang Zhang, and George Em Karniadakis. Variational physics-informed
neural networks for solving partial differential equations. arXiv preprint arXiv:1912.00873, 2019.
Ehsan Kharazmi, Zhongqiang Zhang, and George Em Karniadakis. hp-vpinns: Variational physics-
informed neural networks with domain decomposition. Computer Methods in Applied Mechanics
and Engineering, 374:113547, 2021.
Reza Khodayi-Mehr and Michael Zavlanos. Varnet: Variational neural networks for the solution of
partial differential equations. In Learning for Dynamics and Control, pp. 298–307. PMLR, 2020.
R Khudorozhkov, S Tsimfer, and A Koryagin. Pydens framework for solving differential equations
with deep learning. arXiv preprint arXiv:1909.11544, 2019.
Aditi Krishnapriyan, Amir Gholami, Shandian Zhe, Robert Kirby, and Michael W Mahoney. Characterizing possible failure modes in physics-informed neural networks. Advances in Neural
Information Processing Systems, 34:26548–26560, 2021.
Ke Li, Kejun Tang, Tianfan Wu, and Qifeng Liao. D3m: A deep domain decomposition method for
partial differential equations. IEEE Access, 8:5283–5294, 2019.
Yuhao Liu, Xiaojian Li, and Zhengxian Liu. An improved physics-informed neural network based on
a new adaptive gradient descent algorithm for solving partial differential equations of nonlinear
systems. 2022.
Lu Lu, Xuhui Meng, Zhiping Mao, and George Em Karniadakis. Deepxde: A deep learning library
for solving differential equations. SIAM review, 63(1):208–228, 2021.
Lu Lu, Xuhui Meng, Shengze Cai, Zhiping Mao, Somdatta Goswami, Zhongqiang Zhang, and
George Em Karniadakis. A comprehensive and fair comparison of two neural operators (with
practical extensions) based on fair data. Computer Methods in Applied Mechanics and Engineering,
393:114778, 2022.
Suryanarayana Maddu, Dominik Sturm, Christian L Müller, and Ivo F Sbalzarini. Inverse dirichlet
weighting enables reliable training of physics informed neural networks. Machine Learning:
Science and Technology, 3(1):015026, 2022.
Stefano Markidis. The old and the new: Can physics-informed deep-learning replace traditional
linear solvers? Frontiers in big Data, 4:669097, 2021.
Keith W Morton and David Francis Mayers. Numerical solution of partial differential equations: an
introduction. Cambridge university press, 2005.
Ben Moseley, Andrew Markham, and Tarje Nissen-Meyer. Finite basis physics-informed neural
networks (fbpinns): a scalable domain decomposition approach for solving differential equations.
arXiv preprint arXiv:2107.07871, 2021.
COMSOL Multiphysics. Introduction to COMSOL Multiphysics®. COMSOL Multiphysics, Burlington, MA, accessed February 9, 2018, 1998.
Mohammad Amin Nabian, Rini Jasmine Gladstone, and Hadi Meidani. Efficient training of physics-
informed neural networks via importance sampling. Computer-Aided Civil and Infrastructure
Engineering, 36(8):962–977, 2021.
Karl Otness, Arvi Gjoka, Joan Bruna, Daniele Panozzo, Benjamin Peherstorfer, Teseo Schneider,
and Denis Zorin. An extensible benchmark suite for learning to simulate physical systems. arXiv
preprint arXiv:2108.07799, 2021.
Evan Racah, Christopher Beckham, Tegan Maharaj, Samira Ebrahimi Kahou, Mr Prabhat, and Chris
Pal. Extremeweather: A large-scale climate dataset for semi-supervised detection, localization,
and understanding of extreme weather events. Advances in neural information processing systems,
30, 2017.
Christopher Rackauckas and Qing Nie. DifferentialEquations.jl – a performant and feature-rich ecosystem for solving differential equations in Julia. Journal of Open Research Software, 5(1), 2017.
Maziar Raissi, Paris Perdikaris, and George E Karniadakis. Physics-informed neural networks: A
deep learning framework for solving forward and inverse problems involving nonlinear partial
differential equations. Journal of Computational physics, 378:686–707, 2019.
Junuthula Narasimha Reddy. Introduction to the finite element method. McGraw-Hill Education,
2019.
Pu Ren, Chengping Rao, Yang Liu, Jian-Xun Wang, and Hao Sun. Phycrnet: Physics-informed
convolutional-recurrent network for solving spatiotemporal pdes. Computer Methods in Applied
Mechanics and Engineering, 389:114399, 2022.
Franz M Rohrhofer, Stefan Posch, and Bernhard C Geiger. On the pareto front of physics-informed
neural networks. arXiv preprint arXiv:2105.00862, 2021.
Khemraj Shukla, Ameya D Jagtap, and George Em Karniadakis. Parallel physics-informed neural
networks via domain decomposition. Journal of Computational Physics, 447:110683, 2021.
Justin Sirignano and Konstantinos Spiliopoulos. Dgm: A deep learning algorithm for solving partial
differential equations. Journal of computational physics, 375:1339–1364, 2018.
Hwijae Son, Jin Woo Jang, Woo Jin Han, and Hyung Ju Hwang. Sobolev training for the neural
network solutions of pdes. 2021.
Sophie Steger, Franz M Rohrhofer, and Bernhard C Geiger. How pinns cheat: Predicting chaotic
motion of a double pendulum. In The Symbiosis of Deep Learning and Differential Equations II,
2022.
Makoto Takamoto, Timothy Praditia, Raphael Leiteritz, Daniel MacKinlay, Francesco Alesiani, Dirk
Pflüger, and Mathias Niepert. Pdebench: An extensive benchmark for scientific machine learning.
Advances in Neural Information Processing Systems, 35:1596–1611, 2022.
Kejun Tang, Xiaoliang Wan, and Chao Yang. Das: A deep adaptive sampling method for solving
partial differential equations. arXiv preprint arXiv:2112.14038, 2021.
Nanzhe Wang, Dongxiao Zhang, Haibin Chang, and Heng Li. Deep learning of subsurface flow via
theory-guided neural network. Journal of Hydrology, 584:124700, 2020.
Sifan Wang, Yujun Teng, and Paris Perdikaris. Understanding and mitigating gradient flow pathologies
in physics-informed neural networks. SIAM Journal on Scientific Computing, 43(5):A3055–A3081,
2021a.
Sifan Wang, Hanwen Wang, and Paris Perdikaris. On the eigenvector bias of fourier feature networks:
From regression to solving multi-scale pdes with physics-informed neural networks. Computer
Methods in Applied Mechanics and Engineering, 384:113938, 2021b.
Sifan Wang, Shyam Sankaran, and Paris Perdikaris. Respecting causality is all you need for training
physics-informed neural networks. arXiv preprint arXiv:2203.07404, 2022a.
Sifan Wang, Xinling Yu, and Paris Perdikaris. When and why pinns fail to train: A neural tangent
kernel perspective. Journal of Computational Physics, 449:110768, 2022b.
Chenxi Wu, Min Zhu, Qinyang Tan, Yadhu Kartha, and Lu Lu. A comprehensive study of non-
adaptive and residual-based adaptive sampling for physics-informed neural networks. Computer
Methods in Applied Mechanics and Engineering, 403:115671, 2023.
Jiachen Yao, Chang Su, Zhongkai Hao, Songming Liu, Hang Su, and Jun Zhu. Multiadam: Parameter-
wise scale-invariant optimizer for multiscale training of physics-informed neural networks. arXiv
preprint arXiv:2306.02816, 2023.
Bing Yu et al. The deep ritz method: a deep learning-based numerical algorithm for solving variational
problems. Communications in Mathematics and Statistics, 6(1):1–12, 2018.
Jeremy Yu, Lu Lu, Xuhui Meng, and George Em Karniadakis. Gradient-enhanced physics-informed
neural networks for forward and inverse pde problems. Computer Methods in Applied Mechanics
and Engineering, 393:114823, 2022.
Ruiyang Zhang, Yang Liu, and Hao Sun. Physics-informed multi-lstm networks for metamodeling
of nonlinear structures. Computer Methods in Applied Mechanics and Engineering, 369:113226,
2020.
|
Zhongkai Hao, Jiachen Yao, Chang Su, Hang Su, Ziao Wang, Fanzhi Lu, Zeyu Xia, Yichi Zhang, Songming Liu, Lu Lu, Jun Zhu
|
Reject
| 2,024
|
{"id": "ApjY32f3Xr", "forum": "ApjY32f3Xr", "content": {"title": {"value": "PINNacle: A Comprehensive Benchmark of Physics-Informed Neural Networks for Solving PDEs"}, "authors": {"value": ["Zhongkai Hao", "Jiachen Yao", "Chang Su", "Hang Su", "Ziao Wang", "Fanzhi Lu", "Zeyu Xia", "Yichi Zhang", "Songming Liu", "Lu Lu", "Jun Zhu"]}, "authorids": {"value": ["~Zhongkai_Hao1", "~Jiachen_Yao3", "~Chang_Su7", "~Hang_Su3", "~Ziao_Wang2", "~Fanzhi_Lu1", "~Zeyu_Xia4", "~Yichi_Zhang4", "~Songming_Liu1", "~Lu_Lu1", "~Jun_Zhu2"]}, "keywords": {"value": ["PINN", "machine learning", "physics-informed machine learning"]}, "abstract": {"value": "While significant progress has been made on Physics-Informed Neural Networks (PINNs), a comprehensive comparison of these methods across a wide range of Partial Differential Equations (PDEs) is still lacking. This study introduces PINNacle, a benchmarking tool designed to fill this gap. PINNacle provides a diverse dataset, comprising over 20 distinct PDEs from various domains including heat conduction, fluid dynamics, biology, and electromagnetics. These PDEs encapsulate key challenges inherent to real-world problems, such as complex geometry, multi-scale phenomena, nonlinearity, and high dimensionality. PINNacle also offers a user-friendly toolbox, incorporating about 10 state-of-the-art PINN methods for systematic evaluation and comparison. We have conducted extensive experiments with these methods, offering insights into their strengths and weaknesses. In addition to providing a standardized means of assessing performance, PINNacle also offers an in-depth analysis to guide future research, particularly in areas such as domain decomposition methods and loss reweighting for handling multi-scale problems and complex geometry. 
While PINNacle does not guarantee success in all real-world scenarios, it represents a significant contribution to the field by offering a robust, diverse, and comprehensive benchmark suite that will undoubtedly foster further research and development in PINNs."}, "pdf": {"value": "/pdf/49840b2f19f2bbeff0c1539d86c876d01140da89.pdf"}, "supplementary_material": {"value": "/attachment/38585e83cfcd145e3816bd1154c966361c146b92.pdf"}, "primary_area": {"value": "datasets and benchmarks"}, "code_of_ethics": {"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."}, "submission_guidelines": {"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide."}, "anonymous_url": {"value": "I certify that there is no URL (e.g., github page) that could be used to find authors' identity."}, "no_acknowledgement_section": {"value": "I certify that there is no acknowledgement section in this submission for double blind review."}, "venue": {"value": "Submitted to ICLR 2024"}, "venueid": {"value": "ICLR.cc/2024/Conference/Rejected_Submission"}, "_bibtex": {"value": "@misc{\nhao2024pinnacle,\ntitle={{PINN}acle: A Comprehensive Benchmark of Physics-Informed Neural Networks for Solving {PDE}s},\nauthor={Zhongkai Hao and Jiachen Yao and Chang Su and Hang Su and Ziao Wang and Fanzhi Lu and Zeyu Xia and Yichi Zhang and Songming Liu and Lu Lu and Jun Zhu},\nyear={2024},\nurl={https://openreview.net/forum?id=ApjY32f3Xr}\n}"}, "paperhash": {"value": "hao|pinnacle_a_comprehensive_benchmark_of_physicsinformed_neural_networks_for_solving_pdes"}}, "invitations": ["ICLR.cc/2024/Conference/-/Submission", "ICLR.cc/2024/Conference/-/Post_Submission", "ICLR.cc/2024/Conference/Submission9493/-/Revision", "ICLR.cc/2024/Conference/Submission9493/-/Rebuttal_Revision", "ICLR.cc/2024/Conference/-/Edit"], "cdate": 1695557413894, "odate": 1697213872796, "mdate": 
1707625775716, "signatures": ["ICLR.cc/2024/Conference/Submission9493/Authors"], "writers": ["ICLR.cc/2024/Conference", "ICLR.cc/2024/Conference/Submission9493/Authors"], "readers": ["everyone"]}
|
[Summary]:
This paper provides a comprehensive comparison of PINN training methods, problems, and data. The paper visits common problems with training PINNs, namely the complex geometry, the multi-scale phenomena, nonlinearity of some PDE ofrs, and the high dimensional PDE problems. They also provide various training mechanisms such as domain decomposition and loss reweighting methods.
[Strengths]:
1. The paper is well-written; it seems obvious this work has gone through a few rounds of polishing and review.
2. The literature review is detailed and comprehensive.
3. The challenging aspects of training PINNs are decomposed and categorized well.
4. The appendix section of the paper is thorough and contains quality information.
5. The suite of experiments is admittedly comprehensive; there are more than 20 PDE forms, 10 methods considered and compartmentalized well in this paper.
6. The scale of the experiments and the analyses of the hyper-parameters is certainly admirable.
[Weaknesses]:
1. I'm saying out of respect to the author's work, but this paper may be more suited for a journal format. In particular, the page limit constraint is hitting the work hard in my opinion.
* By the time the authors present the data and experiments, there is less than half a page left to interpret the results and provide discussions and conclusions.
* Many key discussions, at different points in the main text, were deferred to the appendix. While they do exist in the appendix and carry out important information, they carry more scientific content than the existing paper's text.
To be clear, the paper's topic is certainly relevant to ICLR and could benefit the ICLR community. However, the conference format may not be the most suitable to present the work as best as it could have been.
2. The work utilizes 10 different methods for training PINNs, but a brief description of these methods in a single mathematical framework is missing. Adding such a description and correlating the numerical findings to the theoretical properties of each method is probably the most important, yet under-performed, part of the work in my opinion.
To be clear, I understand the paper's space constraints, but this is very important in my opinion. The least the authors could do is to add such a section, however briefly, to the appendix.
[Questions]:
See the weaknesses section.
|
ICLR.cc/2024/Conference/Submission9493/Reviewer_RGvf
|
6
|
3
|
{"id": "CZ5Y4HIwQW", "forum": "ApjY32f3Xr", "replyto": "ApjY32f3Xr", "content": {"summary": {"value": "This paper provides a comprehensive comparison of PINN training methods, problems, and data. The paper visits common problems with training PINNs, namely the complex geometry, the multi-scale phenomena, nonlinearity of some PDE ofrs, and the high dimensional PDE problems. They also provide various training mechanisms such as domain decomposition and loss reweighting methods."}, "soundness": {"value": "4 excellent"}, "presentation": {"value": "4 excellent"}, "contribution": {"value": "4 excellent"}, "strengths": {"value": "1. The paper is well-written; it seems obvious this work has gone through a few rounds of polishing and review.\n2. The literature review is detailed and comprehensive.\n3. The challenging aspects of training PINNs are decomposed and categorized well.\n4. The appendix section of the paper is thorough and contains quality information.\n5. The suite of experiments is admittedly comprehensive; there are more than 20 PDE forms, 10 methods considered and compartmentalized well in this paper.\n6. The scale of the experiments and the analyses of the hyper-parameters is certainly admirable."}, "weaknesses": {"value": "1. I'm saying out of respect to the author's work, but this paper may be more suited for a journal format. In particular, the page limit constraint is hitting the work hard in my opinion. \n\n * By the time the authors present the data and experiments, there is less than half a page left to interpret the results and provide discussions and conclusions.\n * Many key discussions, at different points in the main text, were deferred to the appendix. While they do exist in the appendix and carry out important information, they carry more scientific content than the existing paper's text.\n\n To be clear, the paper's topic is certainly relevant to ICLR and could benefit the ICLR community. 
However, the conference format may not be the most suitable to present the work as best as it could have been.\n\n2. The work utilizes 10 different methods for training PINNs, but a brief description of these methods in a single mathematical framework is missing. Adding such a description and correlating the numerical findings to the theoretical properties of each method is probably the most important, yet under-performed, part of the work in my opinion.\n\n To be clear, I understand the paper's space constraints, but this is very important in my opinion. The least the authors could do is to add such a section, however briefly, to the appendix."}, "questions": {"value": "See the weaknesses section."}, "flag_for_ethics_review": {"value": ["No ethics review needed."]}, "rating": {"value": "6: marginally above the acceptance threshold"}, "confidence": {"value": "3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked."}, "code_of_conduct": {"value": "Yes"}}, "invitations": ["ICLR.cc/2024/Conference/Submission9493/-/Official_Review", "ICLR.cc/2024/Conference/-/Edit"], "cdate": 1698810987938, "mdate": 1699637193408, "nonreaders": [], "signatures": ["ICLR.cc/2024/Conference/Submission9493/Reviewer_RGvf"], "writers": ["ICLR.cc/2024/Conference", "ICLR.cc/2024/Conference/Submission9493/Reviewer_RGvf"], "readers": ["everyone"], "license": "CC BY 4.0"}
|
{
"criticism": 4,
"example": 0,
"importance_and_relevance": 4,
"materials_and_methods": 8,
"praise": 9,
"presentation_and_reporting": 7,
"results_and_discussion": 4,
"suggestion_and_solution": 3,
"total": 29
}
| 1.344828
| -2.695474
| 4.040302
| 1.369971
| 0.260754
| 0.025144
| 0.137931
| 0
| 0.137931
| 0.275862
| 0.310345
| 0.241379
| 0.137931
| 0.103448
|
{
"criticism": 0.13793103448275862,
"example": 0,
"importance_and_relevance": 0.13793103448275862,
"materials_and_methods": 0.27586206896551724,
"praise": 0.3103448275862069,
"presentation_and_reporting": 0.2413793103448276,
"results_and_discussion": 0.13793103448275862,
"suggestion_and_solution": 0.10344827586206896
}
| 1.344828
|
iclr2024
|
openreview
| 0
| 0
| 0
| null |
||
ApjY32f3Xr
|
PINNacle: A Comprehensive Benchmark of Physics-Informed Neural Networks for Solving PDEs
| "While significant progress has been made on Physics-Informed Neural Networks (PINNs), a comprehensi(...TRUNCATED)
| "Under review as a conference paper at ICLR 2024\nPINN ACLE : A C OMPREHENSIVE BENCHMARK OF\nPHYSICS(...TRUNCATED)
| "Zhongkai Hao, Jiachen Yao, Chang Su, Hang Su, Ziao Wang, Fanzhi Lu, Zeyu Xia, Yichi Zhang, Songming(...TRUNCATED)
|
Reject
| 2,024
| "{\"id\": \"ApjY32f3Xr\", \"forum\": \"ApjY32f3Xr\", \"content\": {\"title\": {\"value\": \"PINNacle(...TRUNCATED)
| "[Summary]:\nThe paper provides a benchmarking tool called PINNacle which was lacking in the domain (...TRUNCATED)
|
ICLR.cc/2024/Conference/Submission9493/Reviewer_oeuE
|
3
|
5
| "{\"id\": \"4e1fSBB3OO\", \"forum\": \"ApjY32f3Xr\", \"replyto\": \"ApjY32f3Xr\", \"content\": {\"su(...TRUNCATED)
| {"criticism":5,"example":0,"importance_and_relevance":1,"materials_and_methods":10,"praise":2,"prese(...TRUNCATED)
| 1.090909
| -0.187468
| 1.278377
| 1.111025
| 0.196675
| 0.020116
| 0.227273
| 0
| 0.045455
| 0.454545
| 0.090909
| 0.090909
| 0.136364
| 0.045455
| {"criticism":0.22727272727272727,"example":0.0,"importance_and_relevance":0.045454545454545456,"mate(...TRUNCATED)
| 1.090909
|
iclr2024
|
openreview
| 0
| 0
| 0
| null |
||
ApjY32f3Xr
|
PINNacle: A Comprehensive Benchmark of Physics-Informed Neural Networks for Solving PDEs
| "While significant progress has been made on Physics-Informed Neural Networks (PINNs), a comprehensi(...TRUNCATED)
| "Under review as a conference paper at ICLR 2024\nPINN ACLE : A C OMPREHENSIVE BENCHMARK OF\nPHYSICS(...TRUNCATED)
| "Zhongkai Hao, Jiachen Yao, Chang Su, Hang Su, Ziao Wang, Fanzhi Lu, Zeyu Xia, Yichi Zhang, Songming(...TRUNCATED)
|
Reject
| 2,024
| "{\"id\": \"ApjY32f3Xr\", \"forum\": \"ApjY32f3Xr\", \"content\": {\"title\": {\"value\": \"PINNacle(...TRUNCATED)
| "[Summary]:\nThe article introduces \"PINNacle\", a robust benchmark suite tailored for Physics-Info(...TRUNCATED)
|
ICLR.cc/2024/Conference/Submission9493/Reviewer_xNZs
|
6
|
3
| "{\"id\": \"C4sqXJNESI\", \"forum\": \"ApjY32f3Xr\", \"replyto\": \"ApjY32f3Xr\", \"content\": {\"su(...TRUNCATED)
| {"criticism":1,"example":0,"importance_and_relevance":12,"materials_and_methods":30,"praise":8,"pres(...TRUNCATED)
| 1.625
| -9.880391
| 11.505391
| 1.671499
| 0.471897
| 0.046499
| 0.025
| 0
| 0.3
| 0.75
| 0.2
| 0
| 0.2
| 0.15
| {"criticism":0.025,"example":0.0,"importance_and_relevance":0.3,"materials_and_methods":0.75,"praise(...TRUNCATED)
| 1.625
|
iclr2024
|
openreview
| 0
| 0
| 0
| null |
||
mnyXZBa5dP
| "Image Authenticity Detection using Eye Gazing Data: A Performance Comparison Beyond Human Capabili(...TRUNCATED)
| "In the digital age, determining the authenticity of images has become \nincreasingly crucial. This (...TRUNCATED)
|
Ping Zhang
|
Unknown
| 2,024
| "{\"id\": \"mnyXZBa5dP\", \"forum\": \"mnyXZBa5dP\", \"content\": {\"title\": {\"value\": \"Image Au(...TRUNCATED)
|
N/A
| null | null |
{}
| {"criticism":0,"example":0,"importance_and_relevance":0,"materials_and_methods":0,"praise":0,"presen(...TRUNCATED)
| -10
| -2.667231
| 2.667231
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| {"criticism":0.0,"example":0.0,"importance_and_relevance":0.0,"materials_and_methods":0.0,"praise":0(...TRUNCATED)
| -10
|
iclr2024
|
openreview
| 0
| 0
| 0
| null |
||||
kKRbAY4CXv
|
Neural Evolutionary Kernel Method: A Knowledge-Based Learning Architechture for Evolutionary PDEs
| "Numerical solution of partial differential equations (PDEs) plays a vital role in various fields of(...TRUNCATED)
| "Under review as a conference paper at ICLR 2024\nNEURAL EVOLUTIONARY KERNEL METHOD:\nA KNOWLEDGE (...TRUNCATED)"
|
Shuo Ling, Wenjun Ying, Zhen Zhang
|
Reject
| 2,024
| "{\"id\": \"kKRbAY4CXv\", \"forum\": \"kKRbAY4CXv\", \"content\": {\"title\": {\"value\": \"Neural E(...TRUNCATED)
| "[Summary]:\nThe paper introduces a novel approach called Neural Evolutionary Kernel Method (NEKM) f(...TRUNCATED)
|
ICLR.cc/2024/Conference/Submission9498/Reviewer_71KZ
|
6
|
2
| "{\"id\": \"hZLx3fXD5m\", \"forum\": \"kKRbAY4CXv\", \"replyto\": \"kKRbAY4CXv\", \"content\": {\"su(...TRUNCATED)
| {"criticism":5,"example":0,"importance_and_relevance":30,"materials_and_methods":56,"praise":20,"pre(...TRUNCATED)
| 1.39
| -118.067211
| 119.457211
| 1.447648
| 0.561711
| 0.057648
| 0.05
| 0
| 0.3
| 0.56
| 0.2
| 0.01
| 0.15
| 0.12
| {"criticism":0.05,"example":0.0,"importance_and_relevance":0.3,"materials_and_methods":0.56,"praise"(...TRUNCATED)
| 1.39
|
iclr2024
|
openreview
| 0
| 0
| 0
| null |
||
kKRbAY4CXv
|
Neural Evolutionary Kernel Method: A Knowledge-Based Learning Architechture for Evolutionary PDEs
| "Numerical solution of partial differential equations (PDEs) plays a vital role in various fields of(...TRUNCATED)
| "Under review as a conference paper at ICLR 2024\nNEURAL EVOLUTIONARY KERNEL METHOD:\nA KNOWLEDGE (...TRUNCATED)"
|
Shuo Ling, Wenjun Ying, Zhen Zhang
|
Reject
| 2,024
| "{\"id\": \"kKRbAY4CXv\", \"forum\": \"kKRbAY4CXv\", \"content\": {\"title\": {\"value\": \"Neural E(...TRUNCATED)
| "[Summary]:\nThis paper aims to tackle solving partial differential equations (PDEs) traditionally s(...TRUNCATED)
|
ICLR.cc/2024/Conference/Submission9498/Reviewer_sX1E
|
5
|
4
| "{\"id\": \"eSJOZmZeDG\", \"forum\": \"kKRbAY4CXv\", \"replyto\": \"kKRbAY4CXv\", \"content\": {\"su(...TRUNCATED)
| {"criticism":5,"example":2,"importance_and_relevance":2,"materials_and_methods":30,"praise":6,"prese(...TRUNCATED)
| 1.807692
| -22.197383
| 24.005076
| 1.843615
| 0.396839
| 0.035923
| 0.096154
| 0.038462
| 0.038462
| 0.576923
| 0.115385
| 0.365385
| 0.153846
| 0.423077
| {"criticism":0.09615384615384616,"example":0.038461538461538464,"importance_and_relevance":0.0384615(...TRUNCATED)
| 1.807692
|
iclr2024
|
openreview
| 0
| 0
| 0
| null |
||
kKRbAY4CXv
|
Neural Evolutionary Kernel Method: A Knowledge-Based Learning Architechture for Evolutionary PDEs
| "Numerical solution of partial differential equations (PDEs) plays a vital role in various fields of(...TRUNCATED)
| "Under review as a conference paper at ICLR 2024\nNEURAL EVOLUTIONARY KERNEL METHOD:\nA KNOWLEDGE (...TRUNCATED)"
|
Shuo Ling, Wenjun Ying, Zhen Zhang
|
Reject
| 2,024
| "{\"id\": \"kKRbAY4CXv\", \"forum\": \"kKRbAY4CXv\", \"content\": {\"title\": {\"value\": \"Neural E(...TRUNCATED)
| "[Summary]:\nThe paper presents the Neural Evolutionary Kernel Method (NEKM) for solving semi-linear(...TRUNCATED)
|
ICLR.cc/2024/Conference/Submission9498/Reviewer_Ljfw
|
3
|
4
| "{\"id\": \"uwuFudLbCi\", \"forum\": \"kKRbAY4CXv\", \"replyto\": \"kKRbAY4CXv\", \"content\": {\"su(...TRUNCATED)
| {"criticism":3,"example":1,"importance_and_relevance":2,"materials_and_methods":12,"praise":1,"prese(...TRUNCATED)
| 1.647059
| 1.39454
| 0.252519
| 1.662162
| 0.212119
| 0.015103
| 0.176471
| 0.058824
| 0.117647
| 0.705882
| 0.058824
| 0.058824
| 0.117647
| 0.352941
| {"criticism":0.17647058823529413,"example":0.058823529411764705,"importance_and_relevance":0.1176470(...TRUNCATED)
| 1.647059
|
iclr2024
|
openreview
| 0
| 0
| 0
| null |
||
kKRbAY4CXv
|
Neural Evolutionary Kernel Method: A Knowledge-Based Learning Architechture for Evolutionary PDEs
| "Numerical solution of partial differential equations (PDEs) plays a vital role in various fields of(...TRUNCATED)
| "Under review as a conference paper at ICLR 2024\nNEURAL EVOLUTIONARY KERNEL METHOD:\nA KNOWLEDGE (...TRUNCATED)"
|
Shuo Ling, Wenjun Ying, Zhen Zhang
|
Reject
| 2,024
| "{\"id\": \"kKRbAY4CXv\", \"forum\": \"kKRbAY4CXv\", \"content\": {\"title\": {\"value\": \"Neural E(...TRUNCATED)
| "[Summary]:\nThis paper proposes a neural network-based algorithm, namely the Neural Evolutionary Ke(...TRUNCATED)
|
ICLR.cc/2024/Conference/Submission9498/Reviewer_tJtQ
|
3
|
3
| "{\"id\": \"u9bOLSB5vz\", \"forum\": \"kKRbAY4CXv\", \"replyto\": \"kKRbAY4CXv\", \"content\": {\"su(...TRUNCATED)
| {"criticism":5,"example":0,"importance_and_relevance":1,"materials_and_methods":12,"praise":3,"prese(...TRUNCATED)
| 1.526316
| 0.958148
| 0.568167
| 1.538009
| 0.129803
| 0.011693
| 0.263158
| 0
| 0.052632
| 0.631579
| 0.157895
| 0.105263
| 0.263158
| 0.052632
| {"criticism":0.2631578947368421,"example":0.0,"importance_and_relevance":0.05263157894736842,"materi(...TRUNCATED)
| 1.526316
|
iclr2024
|
openreview
| 0
| 0
| 0
| null |
||
eUgS9Ig8JG
|
SaNN: Simple Yet Powerful Simplicial-aware Neural Networks
| "Simplicial neural networks (SNNs) are deep models for higher-order graph representation learning. S(...TRUNCATED)
| "Published as a conference paper at ICLR 2024\nSANN: SIMPLE YET POWERFUL SIMPLICIAL-AWARE\nNEURAL (...TRUNCATED)"
|
Sravanthi Gurugubelli, Sundeep Prabhakar Chepuri
|
Accept (spotlight)
| 2,024
| "{\"id\": \"eUgS9Ig8JG\", \"forum\": \"eUgS9Ig8JG\", \"content\": {\"title\": {\"value\": \"SaNN: Si(...TRUNCATED)
| "[Summary]:\nThe paper describes an efficient, and effective approach for learning representations f(...TRUNCATED)
|
ICLR.cc/2024/Conference/Submission9491/Reviewer_jAzK
|
8
|
4
| "{\"id\": \"6iCqDrP0HV\", \"forum\": \"eUgS9Ig8JG\", \"replyto\": \"eUgS9Ig8JG\", \"content\": {\"su(...TRUNCATED)
| {"criticism":5,"example":4,"importance_and_relevance":2,"materials_and_methods":10,"praise":6,"prese(...TRUNCATED)
| 1.125
| -18.208476
| 19.333476
| 1.138234
| 0.155156
| 0.013234
| 0.104167
| 0.083333
| 0.041667
| 0.208333
| 0.125
| 0.333333
| 0.0625
| 0.166667
| {"criticism":0.10416666666666667,"example":0.08333333333333333,"importance_and_relevance":0.04166666(...TRUNCATED)
| 1.125
|
iclr2024
|
openreview
| 0
| 0
| 0
| null |