Title: Learning Maps Between Function Spaces With Applications to PDEs
URL Source: https://arxiv.org/html/2108.08481
Markdown Content:

Neural Operator: Learning Maps Between Function Spaces With Applications to PDEs

Nikola Kovachki (nkovachki@nvidia.com), Nvidia; Zongyi Li (zongyili@caltech.edu), Caltech; Burigede Liu (bl377@cam.ac.uk), Cambridge University; Kamyar Azizzadenesheli (kamyara@nvidia.com), Nvidia; Kaushik Bhattacharya (bhatta@caltech.edu), Caltech; Andrew Stuart (astuart@caltech.edu), Caltech; Anima Anandkumar (anima@caltech.edu), Caltech.

Nikola Kovachki and Zongyi Li: equal contribution. Majority of the work was completed while the author was at Caltech.

Abstract
The classical development of neural networks has primarily focused on learning mappings between finite-dimensional Euclidean spaces or finite sets. We propose a generalization of neural networks to learn operators, termed neural operators, that map between infinite-dimensional function spaces. We formulate the neural operator as a composition of linear integral operators and nonlinear activation functions. We prove a universal approximation theorem for our proposed neural operator, showing that it can approximate any given nonlinear continuous operator. The proposed neural operators are also discretization-invariant, i.e., they share the same model parameters across different discretizations of the underlying function spaces. Furthermore, we introduce four classes of efficient parameterization, viz., graph neural operators, multi-pole graph neural operators, low-rank neural operators, and Fourier neural operators. An important application for neural operators is learning surrogate maps for the solution operators of partial differential equations (PDEs). We consider standard PDEs such as the Burgers, Darcy subsurface flow, and the Navier-Stokes equations, and show that the proposed neural operators have superior performance compared to existing machine learning based methodologies, while being several orders of magnitude faster than conventional PDE solvers.
Keywords: Deep Learning, Operator Learning, Discretization-Invariance, Partial Differential Equations, Navier-Stokes Equation.
1 Introduction
Learning mappings between function spaces has widespread applications in science and engineering. For instance, for solving differential equations, the input is a coefficient function and the output is a solution function. A straightforward solution to this problem is to simply discretize the infinite-dimensional input and output function spaces into finite-dimensional grids, and apply standard learning models such as neural networks. However, this limits applicability since the learned neural network model may not generalize well to different discretizations, beyond the discretization grid of the training data.
To overcome these limitations of standard neural networks, we formulate a new deep-learning framework for learning operators, called neural operators, which directly map between function spaces on bounded domains. Since neural operators are designed on function spaces, they can be discretized by a variety of different methods, and at different levels of resolution, without the need for re-training. In contrast, standard neural network architectures depend heavily on the discretization of the training data: new architectures with new parameters may be needed to achieve the same error for data with varying discretization. We also propose the notion of discretization-invariant models and prove that our neural operators satisfy this property, while standard neural networks do not.
1.1 Our Approach

Discretization-Invariant Models.
We formulate a precise mathematical notion of discretization invariance. We require any discretization-invariant model with a fixed number of parameters to satisfy the following:
1. acts on any discretization of the input function, i.e. accepts any set of points in the input domain;
2. can be evaluated at any point of the output domain;
3. converges to a continuum operator as the discretization is refined.
The first two requirements, accepting input points and producing output at any points of the domain, are natural for discretization invariance, while the last ensures consistency in the limit as the discretization is refined. For example, families of graph neural networks (Scarselli et al., 2008) and transformer models (Vaswani et al., 2017) are resolution invariant, i.e., they can receive inputs at any resolution, but they fail to converge to a continuum operator as the discretization is refined. Moreover, we require the models to have a fixed number of parameters; otherwise, the number of parameters becomes unbounded in the limit as the discretization is refined, as shown in Figure 1. Thus the notion of discretization invariance allows us to define neural operator models that are consistent in function spaces and can be applied to data given at any resolution and on any mesh. We also establish that standard neural network models are not discretization invariant.
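To make the three requirements concrete, here is a minimal sketch (our own illustration, not from the paper) of a parameter-free model that accepts any discretization of its input and converges, under mesh refinement, to a continuum operator: the antiderivative map $a \mapsto \int_0^x a(y)\,dy$, approximated by a Riemann sum on whatever grid the input arrives on.

```python
import numpy as np

def cumulative_integral(xs, a_vals):
    """Evaluate the antiderivative of `a` at the input points `xs` by a
    Riemann sum; the same (here, empty) parameter set serves every grid."""
    dx = np.diff(xs, prepend=0.0)
    return np.cumsum(a_vals * dx)

a = np.sin  # input function, available only through pointwise evaluations

errors = []
for L in [64, 256, 1024]:
    xs = np.linspace(0, 1, L)           # one member of a discrete refinement
    u = cumulative_integral(xs, a(xs))  # model output on that discretization
    exact = 1 - np.cos(xs)              # continuum operator applied to sin
    errors.append(np.abs(u - exact).max())

# Requirement 3: the discretized model converges to the continuum operator.
assert errors[0] > errors[1] > errors[2]
```

A fixed-size MLP, by contrast, would already fail the first requirement: its first weight matrix only accepts inputs of one fixed length.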
Figure 1:Discretization Invariance
A discretization-invariant operator produces convergent predictions under mesh refinement.
Neural Operators.
We introduce the concept of neural operators for learning operators that map between infinite-dimensional function spaces. We propose neural operator architectures that are multi-layered, where the layers are themselves operators composed with non-linear activations. This ensures that the overall end-to-end composition is an operator, and thus satisfies the discretization-invariance property. The key design choice for a neural operator is the operator layers. To keep matters simple, we limit ourselves to layers that are linear operators. Since these layers are composed with non-linear activations, we obtain neural operator models that are expressive and able to capture any continuous operator. The latter property is known as universal approximation.
The above line of reasoning for neural operator design follows closely the design of standard neural networks, where linear layers (e.g. matrix multiplication, convolution) are composed with non-linear activations, and we have universal approximation of continuous functions defined on compact domains (Hornik et al., 1989). Neural operators replace finite-dimensional linear layers in neural networks with linear operators in function spaces.
We formally establish that neural operator models with a fixed number of parameters satisfy discretization invariance. We further show that neural operator models are universal approximators of continuous operators acting between Banach spaces, and can uniformly approximate any continuous operator defined on a compact set of a Banach space. Neural operators are the only known class of models that guarantee both discretization invariance and universal approximation; see Table 1 for a comparison among the deep learning models.
We propose several design choices for the linear operator layers in the neural operator, such as a parameterized integral operator or multiplication in the spectral domain, as shown in Figure 2. In particular, we propose four practical methods for implementing the neural operator framework: graph-based operators, low-rank operators, multipole graph-based operators, and Fourier operators. For graph-based operators, we develop a Nyström extension to connect the integral operator formulation of the neural operator to families of graph neural networks (GNNs) on arbitrary grids. For Fourier operators, we consider the spectral-domain formulation of the neural operator, which leads to efficient algorithms in settings where fast transform methods are applicable.
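The spectral-domain idea behind the Fourier operator can be sketched in a few lines: transform the input to Fourier space, multiply a fixed number of low modes by learned weights, and transform back. The 1D sketch below is our own simplification; the random weights `R` stand in for learned parameters. Because the weights are indexed by mode rather than by grid point, the same layer acts consistently at two resolutions.

```python
import numpy as np

rng = np.random.default_rng(0)
k_max = 8
R = rng.standard_normal(k_max) + 1j * rng.standard_normal(k_max)  # "learned" weights

def fourier_layer(v):
    v_hat = np.fft.rfft(v)                   # spectral coefficients
    out_hat = np.zeros_like(v_hat)
    out_hat[:k_max] = R * v_hat[:k_max]      # multiply retained low modes
    return np.fft.irfft(out_hat, n=len(v))   # back to physical space

# The same R acts on the same input function at two different resolutions.
xs_coarse = np.linspace(0, 1, 64, endpoint=False)
xs_fine = np.linspace(0, 1, 256, endpoint=False)
u_coarse = fourier_layer(np.sin(2 * np.pi * xs_coarse))
u_fine = fourier_layer(np.sin(2 * np.pi * xs_fine))

# Outputs agree at shared physical locations (xs_fine[::4] == xs_coarse).
assert np.allclose(u_coarse, u_fine[::4])
```

The unnormalized `rfft` scaling cancels in the `rfft`/`irfft` round trip, which is what makes the mode-indexed weights resolution-consistent here.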
We include an exhaustive numerical study of the four formulations of neural operators. Numerically, we show that the proposed methodology consistently outperforms all existing deep learning methods, even at the resolutions for which the standard neural networks were designed. For the two-dimensional Navier-Stokes equation, when learning the entire flow map, the method achieves less than 1% error for a Reynolds number of 20 and 8% error for a Reynolds number of 200.
The proposed Fourier neural operator (FNO) has an inference time that is three orders of magnitude faster than the pseudo-spectral method used to generate the data for the Navier-Stokes problem (Chandler and Kerswell, 2013): 0.005s compared to 2.2s on a 256 × 256 uniform spatial grid. Despite its tremendous speed advantage, the method does not suffer from accuracy degradation when used in downstream applications such as solving Bayesian inverse problems. Furthermore, we demonstrate that FNO is robust to noise on the testing problems we consider here.
Figure 2:Neural operator architecture schematic
The input function $a$ is passed to a pointwise lifting operator $\mathcal{P}$, which is followed by $T$ layers of integral operators and pointwise non-linearity operations $\sigma$. At the end, the pointwise projection operator $\mathcal{Q}$ outputs the function $u$. Three instantiations of neural operator layers, GNO, LNO, and FNO, are shown.
| Property \ Model | NNs | DeepONets | Interpolation | Neural Operators |
| --- | --- | --- | --- | --- |
| Discretization invariance | ✗ | ✗ | ✓ | ✓ |
| Is the output a function? | ✗ | ✓ | ✓ | ✓ |
| Can query the output at any point? | ✗ | ✓ | ✓ | ✓ |
| Can take the input at any point? | ✗ | ✗ | ✓ | ✓ |
| Universal approximation | ✗ | ✓ | ✗ | ✓ |

Table 1: Comparison of deep learning models. The first row indicates whether the model is discretization invariant. The second and third rows indicate whether the output is a function and whether it can be queried at any point; the fourth indicates whether the input can be given at any points. The final row indicates whether the model class is a universal approximator of operators. Neural operators are discretization-invariant deep learning methods that output functions and can approximate any continuous operator.

1.2 Background and Context

Data-driven approaches for solving PDEs.
Over the past decades, significant progress has been made in formulating (Gurtin, 1982) and solving (Johnson, 2012) the governing PDEs in many scientific fields from micro-scale problems (e.g., quantum and molecular dynamics) to macro-scale applications (e.g., civil and marine engineering). Despite the success in the application of PDEs to solve real-world problems, two significant challenges remain: (1) identifying the governing model for complex systems; (2) efficiently solving large-scale nonlinear systems of equations.
Identifying and formulating the underlying PDEs appropriate for modeling a specific problem usually requires extensive prior knowledge in the corresponding field which is then combined with universal conservation laws to design a predictive model. For example, modeling the deformation and failure of solid structures requires detailed knowledge of the relationship between stress and strain in the constituent material. For complicated systems such as living cells, acquiring such knowledge is often elusive and formulating the governing PDE for these systems remains prohibitive, or the models proposed are too simplistic to be informative. The possibility of acquiring such knowledge from data can revolutionize these fields. Second, solving complicated nonlinear PDE systems (such as those arising in turbulence and plasticity) is computationally demanding and can often make realistic simulations intractable. Again the possibility of using instances of data to design fast approximate solvers holds great potential for accelerating numerous problems.
Learning PDE Solution Operators.
In PDE applications, the governing differential equations are by definition local, whilst the solution operator exhibits non-local properties. Such non-local effects can be described by integral operators explicitly in the spatial domain, or by means of spectral-domain multiplication; convolution is an archetypal example. For integral equations, the graph approximations of Nyström type (Belongie et al., 2002) provide a consistent way of connecting different grid or data structures arising in computational methods and understanding their continuum limits (Von Luxburg et al., 2008; Trillos and Slepčev, 2018; Trillos et al., 2020). For spectral-domain calculations, well-developed tools exist for approximating the continuum (Boyd, 2001; Trefethen, 2000). However, these approaches for approximating integral operators are not data-driven. Neural networks present a natural approach for learning-based integral operator approximations since they can incorporate non-locality. However, standard neural networks are limited to the discretization of the training data and hence offer a poor approximation to the integral operator. We tackle this issue here by proposing the framework of neural operators.
Properties of existing deep-learning models.
Previous deep learning models are mostly defined on a fixed grid, and removing, adding, or moving grid points generally makes these models no longer applicable, as seen in Table 1. Thus, they are not discretization invariant. In general, standard neural networks (NNs), such as multilayer perceptrons (MLPs), convolutional neural networks (CNNs), ResNets, and Vision Transformers (ViTs), that take the input grid and output grid as finite-dimensional vectors are not discretization invariant, since their inputs and outputs must lie on a fixed grid with fixed locations. On the other hand, the pointwise neural networks used in PINNs (Raissi et al., 2019), which take each coordinate as input, are discretization invariant since they can be applied at each location in parallel. However, a PINN represents the solution function of only one instance; it does not learn the map from input functions to output solution functions. CNNs deserve special mention: they do not converge under grid refinement since their receptive fields change with different input grids. On the other hand, if normalized by the grid size, CNNs can be applied to uniform grids at different resolutions, in which case they converge to differential operators, in a similar fashion to the finite difference method. Interpolation is a baseline approach for achieving discretization invariance. While NNs+Interpolation (or, in general, any finite-dimensional neural network combined with interpolation) are resolution invariant and their outputs can be queried at any point, they are not universal approximators of operators, since the input and output dimensions of the internal model are fixed. DeepONets (Lu et al., 2019) are a class of operators that have the universal approximation property. DeepONets consist of a branch net and a trunk net.
The trunk net allows queries at any point, but the branch net constrains the input to fixed locations; however it is possible to modify the branch net to make the methodology discretization invariant, for example by using the PCA-based approach as used in (De Hoop et al., 2022).
Furthermore, we show transformers (Vaswani et al., 2017) are special cases of neural operators with structured kernels that can be used with varying grids to represent the input function. However, the commonly used vision-based extensions of transformers, e.g., ViT (Dosovitskiy et al., 2020), use convolutions on patches to generate tokens, and therefore, they are not discretization-invariant models.
We also show that when our proposed neural operators are applied only on fixed grids, the resulting architectures coincide with neural networks and other operator learning frameworks. In such reductions, point evaluations of the input functions are available on the grid points. In particular, we show that the recent work on DeepONets (Lu et al., 2019), which are maps from finite-dimensional spaces to infinite-dimensional spaces, are special cases of the neural operator architecture when neural operators are restricted to fixed input grids. Moreover, by introducing an adjustment to the DeepONet architecture, we propose the DeepONet-Operator model, which fits into the full operator learning framework of maps between function spaces.
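The branch/trunk decomposition described above can be sketched as follows (a toy instance of our own: random weights stand in for trained branch and trunk nets, and the sensor count and basis dimension are illustrative). The output is an inner product of branch coefficients and trunk basis values, so it can be queried anywhere even though the input is constrained to fixed sensor locations.

```python
import numpy as np

rng = np.random.default_rng(0)
sensors = np.linspace(0, 1, 32)       # fixed input locations (branch constraint)
p = 10                                # number of basis functions
Wb = rng.standard_normal((p, 32)) * 0.1   # "branch net" weights (illustrative)
Wt = rng.standard_normal((p, 1))          # "trunk net" weights (illustrative)

def deeponet(a_at_sensors, x_query):
    branch = np.tanh(Wb @ a_at_sensors)    # coefficients, shape (p,)
    trunk = np.cos(Wt @ x_query[None, :])  # basis values at queries, (p, Q)
    return branch @ trunk                  # u evaluated at each query point

# Input on the fixed sensors, output queried on an unrelated grid of 57 points.
u = deeponet(np.sin(2 * np.pi * sensors), np.linspace(0, 1, 57))
assert u.shape == (57,)
```

Note the asymmetry this illustrates: the query grid is free, but changing `sensors` would invalidate `Wb`, which is why the vanilla branch net is not discretization invariant.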
2 Learning Operators
In subsection 2.1, we describe the generic setting of PDEs to make the discussion in the following sections concrete. In subsection 2.2, we outline the general problem of operator learning as well as our approach to solving it. In subsection 2.3, we discuss the functional data that is available and how we work with it numerically.
2.1 Generic Parametric PDEs
We consider the generic family of PDEs of the following form,

$$(\mathcal{L}_a u)(x) = f(x), \qquad x \in D,$$
$$u(x) = 0, \qquad x \in \partial D, \tag{1}$$

for some $a \in \mathcal{A}$, $f \in \mathcal{U}^*$, and $D \subset \mathbb{R}^d$ a bounded domain. We assume that the solution $u : D \to \mathbb{R}$ lives in the Banach space $\mathcal{U}$ and $\mathcal{L}_a : \mathcal{A} \to L(\mathcal{U}; \mathcal{U}^*)$ is a mapping from the parameter Banach space $\mathcal{A}$ to the space of (possibly unbounded) linear operators mapping $\mathcal{U}$ to its dual $\mathcal{U}^*$. A natural operator which arises from this PDE is $\mathcal{G}^\dagger := \mathcal{L}_a^{-1} f : \mathcal{A} \to \mathcal{U}$, defined to map the parameter to the solution, $a \mapsto u$. A simple example that we study further in Section 6.2 is when $\mathcal{L}_a$ is the weak form of the second-order elliptic operator $-\nabla \cdot (a \nabla)$ subject to homogeneous Dirichlet boundary conditions. In this setting, $\mathcal{A} = L^\infty(D; \mathbb{R}_+)$, $\mathcal{U} = H^1_0(D; \mathbb{R})$, and $\mathcal{U}^* = H^{-1}(D; \mathbb{R})$. When needed, we will assume that the domain $D$ is discretized into $K \in \mathbb{N}$ points and that we observe $N \in \mathbb{N}$ pairs of coefficient functions and (approximate) solution functions $\{a^{(i)}, u^{(i)}\}_{i=1}^N$ that are used to train the model (see Section 2.2). We assume that the $a^{(i)}$ are i.i.d. samples from a probability measure $\mu$ supported on $\mathcal{A}$ and the $u^{(i)}$ are the pushforwards under $\mathcal{G}^\dagger$.
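For intuition, a 1D analogue of this data-generation setup can be sketched with a finite-difference solver for $-(a u')' = f$ on $(0,1)$ with homogeneous Dirichlet conditions (our own illustrative construction; the coefficient distribution and the constant forcing are arbitrary choices, not those used in the paper's experiments). Each call draws a random positive coefficient $a$ and returns the pair $(a, u)$ on the grid, i.e. one training sample.

```python
import numpy as np

def sample_pair(L, rng):
    """Draw a random coefficient a > 0 and solve -(a u')' = 1, u(0)=u(1)=0,
    by centered finite differences on L interior grid points."""
    xs = np.linspace(0, 1, L + 2)[1:-1]      # interior grid points
    h = 1.0 / (L + 1)
    a = 1.0 + rng.random(L + 2)              # coefficient on the full grid
    a_half = 0.5 * (a[:-1] + a[1:])          # coefficient at cell midpoints
    # Tridiagonal stiffness matrix for the operator -(a u')'.
    A = (np.diag(a_half[:-1] + a_half[1:])
         - np.diag(a_half[1:-1], 1)
         - np.diag(a_half[1:-1], -1)) / h**2
    f = np.ones(L)                           # constant forcing
    u = np.linalg.solve(A, f)
    return xs, a[1:-1], u

rng = np.random.default_rng(0)
xs, a, u = sample_pair(127, rng)
assert u.min() > 0   # maximum principle: positive forcing, positive solution
```

Repeating `sample_pair` yields the i.i.d. pairs $\{a^{(i)}, u^{(i)}\}$ that the learning problem below assumes.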
2.2 Problem Setting
Our goal is to learn a mapping between two infinite-dimensional spaces by using a finite collection of observations of input-output pairs from this mapping. We make this problem concrete in the following setting. Let $\mathcal{A}$ and $\mathcal{U}$ be Banach spaces of functions defined on bounded domains $D \subset \mathbb{R}^d$, $D' \subset \mathbb{R}^{d'}$ respectively and $\mathcal{G}^\dagger : \mathcal{A} \to \mathcal{U}$ be a (typically) non-linear map. Suppose we have observations $\{a^{(i)}, u^{(i)}\}_{i=1}^N$ where the $a^{(i)} \sim \mu$ are i.i.d. samples drawn from some probability measure $\mu$ supported on $\mathcal{A}$ and $u^{(i)} = \mathcal{G}^\dagger(a^{(i)})$ is possibly corrupted with noise. We aim to build an approximation of $\mathcal{G}^\dagger$ by constructing a parametric map

$$\mathcal{G}_\theta : \mathcal{A} \to \mathcal{U}, \qquad \theta \in \mathbb{R}^p \tag{2}$$

with parameters from the finite-dimensional space $\mathbb{R}^p$ and then choosing $\theta^\dagger \in \mathbb{R}^p$ so that $\mathcal{G}_{\theta^\dagger} \approx \mathcal{G}^\dagger$.
We will be interested in controlling the error of the approximation on average with respect to $\mu$. In particular, assuming $\mathcal{G}^\dagger$ is $\mu$-measurable, we will aim to control the $L^2_\mu(\mathcal{A}; \mathcal{U})$ Bochner norm of the approximation

$$\|\mathcal{G}^\dagger - \mathcal{G}_\theta\|^2_{L^2_\mu(\mathcal{A}; \mathcal{U})} = \mathbb{E}_{a \sim \mu}\|\mathcal{G}^\dagger(a) - \mathcal{G}_\theta(a)\|^2_{\mathcal{U}} = \int_{\mathcal{A}} \|\mathcal{G}^\dagger(a) - \mathcal{G}_\theta(a)\|^2_{\mathcal{U}} \, d\mu(a). \tag{3}$$
This is a natural framework for learning in infinite dimensions as one could seek to solve the associated empirical-risk minimization problem

$$\min_{\theta \in \mathbb{R}^p} \mathbb{E}_{a \sim \mu}\|\mathcal{G}^\dagger(a) - \mathcal{G}_\theta(a)\|^2_{\mathcal{U}} \approx \min_{\theta \in \mathbb{R}^p} \frac{1}{N} \sum_{i=1}^N \|u^{(i)} - \mathcal{G}_\theta(a^{(i)})\|^2_{\mathcal{U}} \tag{4}$$
which directly parallels the classical finite-dimensional setting (Vapnik, 1998). As well as using error measured in the Bochner norm, we will also consider the setting where error is measured uniformly over compact sets of $\mathcal{A}$. In particular, given any compact $K \subset \mathcal{A}$, we consider

$$\sup_{a \in K} \|\mathcal{G}^\dagger(a) - \mathcal{G}_\theta(a)\|_{\mathcal{U}} \tag{5}$$

which is a more standard error metric in the approximation theory literature. Indeed, the classical approximation theory of neural networks is formulated analogously to equation (5) (Hornik et al., 1989).
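Numerically, the empirical risk in (4) is computed by approximating each $\mathcal{U}$-norm on the observation grid. A minimal sketch (our own: a placeholder identity model stands in for $\mathcal{G}_\theta$, and $\mathcal{U} = L^2(D)$ on $D = [0,1]$ is approximated by a Riemann sum):

```python
import numpy as np

def empirical_risk(model, xs, a_samples, u_samples):
    """Right-hand side of (4): mean of squared L^2 residual norms, with the
    integral over D approximated by a Riemann sum on the uniform grid xs."""
    dx = xs[1] - xs[0]
    total = 0.0
    for a_vals, u_vals in zip(a_samples, u_samples):
        resid = u_vals - model(xs, a_vals)
        total += np.sum(resid**2) * dx       # ||u - G_theta(a)||_{L^2}^2
    return total / len(a_samples)

# Tiny illustration: identity "model" against slightly noisy targets.
rng = np.random.default_rng(0)
xs = np.linspace(0, 1, 100)
a_samples = [np.sin(2 * np.pi * k * xs) for k in (1, 2, 3)]
u_samples = [a + 0.01 * rng.standard_normal(xs.size) for a in a_samples]
risk = empirical_risk(lambda xs, a: a, xs, a_samples, u_samples)
assert risk < 1e-3   # residual is only the injected noise
```

In training, `model` would be the discretized neural operator and `risk` the quantity minimized over $\theta$ by gradient descent.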
In Section 8 we show that, for the architecture we propose and given any desired error tolerance, there exists $p \in \mathbb{N}$ and an associated parameter $\theta^\dagger \in \mathbb{R}^p$ such that the loss (3) or (5) is less than the specified tolerance. However, we do not address the challenging open problems of characterizing the error with respect to either (a) a fixed parameter dimension $p$ or (b) a fixed number of training samples $N$. Instead, we approach this in the empirical test-train setting, where we minimize (4) based on a fixed training set and approximate (3) from new samples that were not seen during training. Because we conceptualize our methodology in the infinite-dimensional setting, all finite-dimensional approximations can share a common set of network parameters which are defined in the (approximation-free) infinite-dimensional setting. In particular, our architecture does not depend on the way the functions $a^{(i)}, u^{(i)}$ are discretized. The notation used throughout this paper, along with a useful summary table, may be found in Appendix A.
2.3 Discretization
Since our data $a^{(i)}$ and $u^{(i)}$ are, in general, functions, to work with them numerically we assume access only to their pointwise evaluations. To illustrate this, we continue with the example of the preceding section. For simplicity, assume $D = D'$ and suppose that the input and output functions are both real-valued. Let $D^{(i)} = \{x^{(i)}_\ell\}_{\ell=1}^L \subset D$ be an $L$-point discretization of the domain $D$ and assume we have observations $a^{(i)}|_{D^{(i)}}, u^{(i)}|_{D^{(i)}} \in \mathbb{R}^L$, for a finite collection of input-output pairs indexed by $i$. In the next section, we propose a kernel-inspired graph neural network architecture which, while trained on the discretized data, can produce the solution $u(x)$ for any $x \in D$ given an input $a \sim \mu$. In particular, our discretized architecture maps into the space $\mathcal{U}$ and not into a discretization thereof. Furthermore, our parametric operator class is consistent, in that, given a fixed set of parameters, refinement of the input discretization yields convergence to the true function space operator. We make this notion precise in what follows and refer to architectures that possess it as function space architectures, mesh-invariant architectures, or discretization-invariant architectures.
Definition 1

We call a discrete refinement of the domain $D \subset \mathbb{R}^d$ any sequence of nested sets $D_1 \subset D_2 \subset \cdots \subset D$ with $|D_L| = L$ for any $L \in \mathbb{N}$ such that, for any $\epsilon > 0$, there exists a number $L = L(\epsilon) \in \mathbb{N}$ such that

$$D \subseteq \bigcup_{x \in D_L} \{y \in \mathbb{R}^d : \|y - x\|_2 < \epsilon\}.$$
Definition 2

Given a discrete refinement $(D_L)_{L=1}^\infty$ of the domain $D \subset \mathbb{R}^d$, any member $D_L$ is called a discretization of $D$.
Since $a : D \subset \mathbb{R}^d \to \mathbb{R}^{d_a}$, pointwise evaluation of the function (discretization) at a set of $L$ points gives rise to the data set $\{(x_\ell, a(x_\ell))\}_{\ell=1}^L$. Note that this may be viewed as a vector in $\mathbb{R}^{Ld} \times \mathbb{R}^{L d_a}$. An example of mesh refinement is given in Figure 1.
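A concrete discrete refinement in the sense of Definitions 1 and 2 is the sequence of dyadic grids on $D = [0,1]$ (a sub-sequence of grid sizes, which suffices for illustration). The sketch below, our own, checks nestedness and the shrinking covering radius:

```python
def dyadic_grid(level):
    """Vertices i / 2**level of [0, 1]; each grid is nested in the next."""
    n = 2**level
    return {i / n for i in range(n + 1)}

grids = [dyadic_grid(k) for k in range(1, 8)]
# Nestedness D_1 ⊂ D_2 ⊂ ... (dyadic rationals are exact in floating point).
assert all(g1 <= g2 for g1, g2 in zip(grids, grids[1:]))

# Covering radius: every point of [0, 1] lies within 1 / 2**(level+1) of the
# grid, so for any eps > 0 a sufficiently deep member gives an eps-cover.
eps = 1e-2
level = 1
while 1 / 2**(level + 1) >= eps:
    level += 1
assert 1 / 2**(level + 1) < eps   # level 6 suffices for eps = 1e-2
```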
Definition 3

Suppose $\mathcal{A}$ is a Banach space of $\mathbb{R}^{d_a}$-valued functions on the domain $D \subset \mathbb{R}^d$. Let $\mathcal{G} : \mathcal{A} \to \mathcal{U}$ be an operator, $D_L$ be an $L$-point discretization of $D$, and $\hat{\mathcal{G}} : \mathbb{R}^{Ld} \times \mathbb{R}^{L d_a} \to \mathcal{U}$ some map. For any compact $K \subset \mathcal{A}$, we define the discretized uniform risk as

$$R_K(\mathcal{G}, \hat{\mathcal{G}}, D_L) = \sup_{a \in K} \|\hat{\mathcal{G}}(D_L, a|_{D_L}) - \mathcal{G}(a)\|_{\mathcal{U}}.$$
Definition 4

Let $\Theta \subseteq \mathbb{R}^p$ be a finite-dimensional parameter space and $\mathcal{G} : \mathcal{A} \times \Theta \to \mathcal{U}$ a map representing a parametric class of operators with parameters $\theta \in \Theta$. Given a discrete refinement $(D_L)_{L=1}^\infty$ of the domain $D \subset \mathbb{R}^d$, we say $\mathcal{G}$ is discretization-invariant if there exists a sequence of maps $\hat{\mathcal{G}}_1, \hat{\mathcal{G}}_2, \dots$ where $\hat{\mathcal{G}}_L : \mathbb{R}^{Ld} \times \mathbb{R}^{L d_a} \times \Theta \to \mathcal{U}$ such that, for any $\theta \in \Theta$ and any compact set $K \subset \mathcal{A}$,

$$\lim_{L \to \infty} R_K(\mathcal{G}(\cdot, \theta), \hat{\mathcal{G}}_L(\cdot, \cdot, \theta), D_L) = 0.$$
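The discretized uniform risk can be estimated numerically for a fixed operator. The sketch below is our own: the exponential kernel and the small finite stand-in for the compact set $K$ are arbitrary choices. It discretizes an integral operator by a crude uniform quadrature on $L$ points (a very fine grid stands in for the exact operator) and checks that refinement reduces the risk, as Definition 4 requires in the limit.

```python
import numpy as np

x_eval = np.linspace(0, 1, 50)                # points where outputs are compared
K = [np.sin, np.cos, lambda x: x * (1 - x)]   # finite stand-in for a compact set

def G_hat(a, L):
    """Discretization of G(a)(x) = ∫_0^1 exp(-|x - y|) a(y) dy on L points."""
    ys = np.linspace(0, 1, L)
    return np.exp(-np.abs(x_eval[:, None] - ys)) @ (a(ys) / (L - 1))

def risk(L):
    """Estimated R_K: sup over K of the max pointwise output discrepancy,
    with G_hat on a 20001-point grid standing in for the exact operator."""
    return max(np.abs(G_hat(a, L) - G_hat(a, 20001)).max() for a in K)

# Refining the discretization drives the estimated risk toward zero.
assert risk(512) < risk(16)
```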
We prove that the architectures proposed in Section 3 are discretization-invariant. We further verify this claim numerically by showing that the approximation error is approximately constant as we refine the discretization. Such a property is highly desirable as it allows a transfer of solutions between different grid geometries and discretization sizes with a single architecture that has a fixed number of parameters.
We note that, while the application of our methodology is based on having pointwise evaluations of the function, it is not limited by it. One may, for example, represent a function numerically as a finite set of truncated basis coefficients; invariance of the representation would then be with respect to the size of this set. Our methodology can, in principle, be modified to accommodate this scenario through a suitably chosen architecture, but we do not pursue this direction in the current work. Finally, by construction, when the input and output functions are evaluated on fixed grids, the neural operator architecture on these fixed grids coincides with a class of neural networks.
3 Neural Operators
In this section, we outline the neural operator framework. We assume that the input functions $a \in \mathcal{A}$ are $\mathbb{R}^{d_a}$-valued and defined on the bounded domain $D \subset \mathbb{R}^d$ while the output functions $u \in \mathcal{U}$ are $\mathbb{R}^{d_u}$-valued and defined on the bounded domain $D' \subset \mathbb{R}^{d'}$. The proposed architecture $\mathcal{G}_\theta : \mathcal{A} \to \mathcal{U}$ has the following overall structure:
1. Lifting: Using a pointwise function $\mathbb{R}^{d_a} \to \mathbb{R}^{d_{v_0}}$, map the input $\{a : D \to \mathbb{R}^{d_a}\} \mapsto \{v_0 : D \to \mathbb{R}^{d_{v_0}}\}$ to its first hidden representation. Usually, we choose $d_{v_0} > d_a$ and hence this is a lifting operation performed by a fully local operator.
2. Iterative Kernel Integration: For $t = 0, \dots, T-1$, map each hidden representation to the next, $\{v_t : D_t \to \mathbb{R}^{d_{v_t}}\} \mapsto \{v_{t+1} : D_{t+1} \to \mathbb{R}^{d_{v_{t+1}}}\}$, via the action of the sum of a local linear operator, a non-local integral kernel operator, and a bias function, composing the sum with a fixed, pointwise nonlinearity. Here we set $D_0 = D$ and $D_T = D'$ and impose that each $D_t \subset \mathbb{R}^{d_t}$ is a bounded domain.
3. Projection: Using a pointwise function $\mathbb{R}^{d_{v_T}} \to \mathbb{R}^{d_u}$, map the last hidden representation $\{v_T : D' \to \mathbb{R}^{d_{v_T}}\} \mapsto \{u : D' \to \mathbb{R}^{d_u}\}$ to the output function. Analogously to the first step, we usually pick $d_{v_T} > d_u$ and hence this is a projection step performed by a fully local operator.
The outlined structure mimics that of a finite-dimensional neural network, where hidden representations are successively mapped to produce the final output. In particular, we have

$$\mathcal{G}_\theta := \mathcal{Q} \circ \sigma_T(W_{T-1} + \mathcal{K}_{T-1} + b_{T-1}) \circ \cdots \circ \sigma_1(W_0 + \mathcal{K}_0 + b_0) \circ \mathcal{P} \tag{6}$$

where $\mathcal{P} : \mathbb{R}^{d_a} \to \mathbb{R}^{d_{v_0}}$ and $\mathcal{Q} : \mathbb{R}^{d_{v_T}} \to \mathbb{R}^{d_u}$ are the local lifting and projection mappings respectively, $W_t \in \mathbb{R}^{d_{v_{t+1}} \times d_{v_t}}$ are local linear operators (matrices), $\mathcal{K}_t : \{v_t : D_t \to \mathbb{R}^{d_{v_t}}\} \to \{v_{t+1} : D_{t+1} \to \mathbb{R}^{d_{v_{t+1}}}\}$ are integral kernel operators, $b_t : D_{t+1} \to \mathbb{R}^{d_{v_{t+1}}}$ are bias functions, and $\sigma_t$ are fixed activation functions acting locally as maps $\mathbb{R}^{d_{v_{t+1}}} \to \mathbb{R}^{d_{v_{t+1}}}$ in each layer. The output dimensions $d_{v_0}, \dots, d_{v_T}$, as well as the input dimensions $d_1, \dots, d_{T-1}$ and domains of definition $D_1, \dots, D_{T-1}$, are hyperparameters of the architecture. By local maps, we mean that the action is pointwise; in particular, for the lifting and projection maps we have $(\mathcal{P}(a))(x) = \mathcal{P}(a(x))$ for any $x \in D$ and $(\mathcal{Q}(v_T))(x) = \mathcal{Q}(v_T(x))$ for any $x \in D'$, and similarly, for the activation, $(\sigma(v_{t+1}))(x) = \sigma(v_{t+1}(x))$ for any $x \in D_{t+1}$. The maps $\mathcal{P}$, $\mathcal{Q}$, and $\sigma_t$ can thus be thought of as defining Nemitskiy operators (Dudley and Norvaisa, 2011, Chapters 6, 7) when each of their components is assumed to be Borel measurable. This interpretation allows us to define the general neural operator architecture when pointwise evaluation is not well-defined in the spaces $\mathcal{A}$ or $\mathcal{U}$, e.g. when they are Lebesgue, Sobolev, or Besov spaces.
The crucial difference between the proposed architecture (6) and a standard feed-forward neural network is that all operations are directly defined in function space (noting that the activation functions, $\mathcal{P}$, and $\mathcal{Q}$ are all interpreted through their extension to Nemitskiy operators) and therefore do not depend on any discretization of the data. Intuitively, the lifting step locally maps the data to a space where the non-local part of $\mathcal{G}^\dagger$ is easier to capture. We confirm this intuition numerically in Section 7; however, we note that for the theory presented in Section 8 it suffices that $\mathcal{P}$ is the identity map. The non-local part of $\mathcal{G}^\dagger$ is then learned by successive approximation using integral kernel operators composed with a local nonlinearity. Each integral kernel operator is the function-space analog of the weight matrix in a standard feed-forward network, since it is an infinite-dimensional linear operator mapping one function space to another. We turn the biases, which are normally vectors, into functions and, using intuition from the ResNet architecture (He et al., 2016), we further add a local linear operator acting on the output of the previous layer before applying the nonlinearity. The final projection step simply gets us back to the space of our output function. We concatenate in $\theta \in \mathbb{R}^p$ the parameters of $\mathcal{P}$, $\mathcal{Q}$, and $\{b_t\}$, which are usually themselves shallow neural networks, the parameters of the kernels representing $\{\mathcal{K}_t\}$, which are again usually shallow neural networks, and the matrices $\{W_t\}$. We note, however, that our framework is general and other parameterizations, such as polynomials, may also be employed.
Integral Kernel Operators
We define three versions of the integral kernel operator $\mathcal{K}_t$ used in (6). For the first, let $\kappa^{(t)} \in C(D_{t+1} \times D_t; \mathbb{R}^{d_{v_{t+1}} \times d_{v_t}})$ and let $\nu_t$ be a Borel measure on $D_t$. Then we define $\mathcal{K}_t$ by

$$(\mathcal{K}_t(v_t))(x) = \int_{D_t} \kappa^{(t)}(x, y) v_t(y) \, d\nu_t(y) \qquad \forall x \in D_{t+1}. \tag{7}$$

Normally, we take $\nu_t$ to simply be the Lebesgue measure on $\mathbb{R}^{d_t}$ but, as discussed in Section 4, other choices can be used to speed up computation or aid the learning process by building in a priori information. The choice of integral kernel operator in (7) defines the basic form of the neural operator; it is the one we analyze in Section 8 and study most in the numerical experiments of Section 7.
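Discretized by Monte Carlo on an arbitrary point set, the operator (7) can be sketched as follows (our own toy example: a random-weight two-layer "network" stands in for the learned kernel $\kappa^{(t)}$, and the Lebesgue measure on $D = [0,1]$ is approximated by uniform samples). Because the parameters live in the kernel rather than on a grid, the same operator is evaluated consistently on two unrelated point sets:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((16, 2))
W2 = rng.standard_normal((1, 16)) / 16   # scaled for a modest kernel magnitude

def kappa(x, y):
    """Toy neural-network kernel κ(x, y); W1, W2 play the role of θ."""
    z = np.stack(np.broadcast_arrays(x, y), axis=-1)   # (..., 2) inputs
    return (np.tanh(z @ W1.T) @ W2.T)[..., 0]          # scalar kernel values

def kernel_operator(xs_out, ys_in, v_vals):
    """(K v)(x_i) ≈ (1/L) Σ_j κ(x_i, y_j) v(y_j), a Monte Carlo sum."""
    Kmat = kappa(xs_out[:, None], ys_in[None, :])      # κ(x_i, y_j)
    return Kmat @ v_vals / len(ys_in)

v = lambda y: np.sin(2 * np.pi * y)
xs = np.linspace(0, 1, 7)                        # output query points
ys1, ys2 = rng.random(20000), rng.random(40000)  # two unrelated input grids
u1 = kernel_operator(xs, ys1, v(ys1))
u2 = kernel_operator(xs, ys2, v(ys2))
assert np.abs(u1 - u2).max() < 0.2   # same operator, consistent across grids
```

The residual discrepancy is Monte Carlo quadrature error, which vanishes as the input discretization is refined; this is the discretization-invariance property in miniature.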
For the second, let $\kappa^{(t)} \in C(D_{t+1} \times D_t \times \mathbb{R}^{d_a} \times \mathbb{R}^{d_a}; \mathbb{R}^{d_{v_{t+1}} \times d_{v_t}})$. Then we define $\mathcal{K}_t$ by

$$(\mathcal{K}_t(v_t))(x) = \int_{D_t} \kappa^{(t)}\big(x, y, a(\Pi^D_{t+1}(x)), a(\Pi^D_t(y))\big) v_t(y) \, d\nu_t(y) \qquad \forall x \in D_{t+1} \tag{8}$$

where $\Pi^D_t : D_t \to D$ are fixed mappings. We have found numerically that, for certain PDE problems, the form (8) outperforms (7) due to the strong dependence of the solution $u$ on the parameter $a$; an example is the Darcy flow problem considered in subsection 7.2.1. Indeed, if we think of (6) as a discrete-time dynamical system, then the input $a \in \mathcal{A}$ enters only through the initial condition, hence its influence diminishes with more layers. By directly building $a$-dependence into the kernel, we ensure that it influences the entire architecture.
Lastly, let $\kappa^{(t)} \in C(D_{t+1} \times D_t \times \mathbb{R}^{d_{v_t}} \times \mathbb{R}^{d_{v_t}}; \mathbb{R}^{d_{v_{t+1}} \times d_{v_t}})$. Then we define $\mathcal{K}_t$ by

$$(\mathcal{K}_t(v_t))(x) = \int_{D_t} \kappa^{(t)}\big(x, y, v_t(\Pi_t(x)), v_t(y)\big) v_t(y) \, d\nu_t(y) \qquad \forall x \in D_{t+1} \tag{9}$$

where $\Pi_t : D_{t+1} \to D_t$ are fixed mappings. Note that, in contrast to (7) and (8), the integral operator (9) is nonlinear since the kernel can depend on the input function $v_t$. With this definition and a particular choice of kernel $\kappa^{(t)}$ and measure $\nu_t$, we show in Section 5.2 that neural operators are a continuous input/output-space generalization of the popular transformer architecture (Vaswani et al., 2017).
Single Hidden Layer Construction
Having defined possible choices for the integral kernel operator, we are now in a position to write down explicitly a full layer of the architecture defined by (6). For simplicity, we choose the integral kernel operator given by (7), but note that the other definitions (8) and (9) work analogously. A single hidden layer update is then given by

$$v_{t+1}(x) = \sigma_{t+1}\left( W_t v_t(\Pi_t(x)) + \int_{D_t} \kappa^{(t)}(x, y)\, v_t(y)\, \mathrm{d}\nu_t(y) + b_t(x) \right) \quad \forall x \in D_{t+1} \tag{10}$$

where $\Pi_t : D_{t+1} \to D_t$ are fixed mappings. We remark that, since we often consider functions on the same domain, we usually take $\Pi_t$ to be the identity.
We will now give an example of a full single hidden layer architecture, i.e. when $T = 2$. We choose $D_1 = D$, take $\sigma_2$ as the identity, and denote $\sigma_1$ by $\sigma$, assuming it is any activation function. Furthermore, for simplicity, we set $W_1 = 0$, $b_1 = 0$, and assume that $\nu_0 = \nu_1$ is the Lebesgue measure on $\mathbb{R}^d$. Then (6) becomes

$$(\mathcal{G}_\theta(a))(x) = \mathcal{Q}\left( \int_D \kappa^{(1)}(x, y)\, \sigma\left( W_0 \mathcal{P}(a(y)) + \int_D \kappa^{(0)}(y, z)\, \mathcal{P}(a(z))\, \mathrm{d}z + b_0(y) \right) \mathrm{d}y \right) \tag{11}$$

for any $x \in D'$. In this example, $\mathcal{P} \in C(\mathbb{R}^{d_a}; \mathbb{R}^{d_{v_0}})$, $\kappa^{(0)} \in C(D \times D; \mathbb{R}^{d_{v_1} \times d_{v_0}})$, $b_0 \in C(D; \mathbb{R}^{d_{v_1}})$, $W_0 \in \mathbb{R}^{d_{v_1} \times d_{v_0}}$, $\kappa^{(1)} \in C(D' \times D; \mathbb{R}^{d_{v_2} \times d_{v_1}})$, and $\mathcal{Q} \in C(\mathbb{R}^{d_{v_2}}; \mathbb{R}^{d_u})$. One can then parametrize the continuous functions $\mathcal{P}, \mathcal{Q}, \kappa^{(0)}, \kappa^{(1)}, b_0$ by standard feed-forward neural networks (or by any other means) and the matrix $W_0$ simply by its entries. The parameter vector $\theta \in \mathbb{R}^p$ then becomes the concatenation of the parameters of $\mathcal{P}, \mathcal{Q}, \kappa^{(0)}, \kappa^{(1)}, b_0$ along with the entries of $W_0$. One can then optimize these parameters by minimizing with respect to $\theta$ using standard gradient-based techniques. To implement this minimization, the functions entering the loss need to be discretized; but the learned parameters may then be used with other discretizations. In Section 4, we discuss various choices for parametrizing the kernels, picking the integration measure, and how those choices affect the computational complexity of the architecture.
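To make the construction concrete, here is a minimal numerical sketch of the single hidden layer architecture (11) on a one-dimensional grid. The lifting $\mathcal{P}$, projection $\mathcal{Q}$, and kernels below are fixed closed-form functions standing in for the small neural networks one would actually learn, and the integrals are simple Riemann sums; all names are illustrative, not from the authors' code.

```python
import numpy as np

# Stand-ins for the learnable maps in (11): in practice P, Q and the kernels
# would be small neural networks; here they are fixed closed-form functions
# so that the example stays self-contained.
def P(a):                    # lifting, applied pointwise
    return np.tanh(a)

def Q(v):                    # projection, applied pointwise
    return v

def kappa(x, y):             # scalar kernel standing in for kappa^(0), kappa^(1)
    return np.exp(-np.abs(x[:, None] - y[None, :]))

def single_hidden_layer(a, grid, W0=1.0, b0=0.0, sigma=np.tanh):
    """Riemann-sum discretization of (11) on a grid over [0, 1]."""
    dx = 1.0 / len(grid)                       # quadrature weight
    v0 = P(a)                                  # lift the input function
    K0 = kappa(grid, grid)
    v1 = sigma(W0 * v0 + K0 @ v0 * dx + b0)    # hidden layer update (10)
    K1 = kappa(grid, grid)
    return Q(K1 @ v1 * dx)                     # outer integral, then project

# The same parameters can be evaluated on any discretization of [0, 1]:
coarse = np.linspace(0, 1, 64)
fine = np.linspace(0, 1, 256)
u_coarse = single_hidden_layer(np.sin(2 * np.pi * coarse), coarse)
u_fine = single_hidden_layer(np.sin(2 * np.pi * fine), fine)
```

Evaluating at a shared point (here $x = 0$), the coarse and fine outputs agree up to quadrature error, illustrating the discretization invariance discussed in the next subsection.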
Preprocessing
It is often beneficial to manually include features in the input functions $a$ to help facilitate the learning process. For example, instead of considering the $\mathbb{R}^{d_a}$-valued vector field $a$ as input, we use the $\mathbb{R}^{d + d_a}$-valued vector field $(x, a(x))$. By including the identity map as a feature, information about the geometry of the spatial domain $D$ is directly incorporated into the architecture. This gives the neural networks direct access to information that is already known in the problem and therefore eases learning. We use this idea in all of our numerical experiments in Section 7. Similarly, when learning a smoothing operator, it may be beneficial to include a smoothed version $a_\epsilon$ of the input, computed using, for example, Gaussian convolution. Derivative information may also be of interest; thus, as input, one may consider, for example, the $\mathbb{R}^{d + 2d_a + d d_a}$-valued vector field $(x, a(x), a_\epsilon(x), \nabla_x a_\epsilon(x))$. Many other possibilities may be considered on a problem-specific basis.
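As a small illustration of this preprocessing, the sketch below augments a discretized one-dimensional input with the position coordinate, a Gaussian-smoothed copy, and the derivative of that copy. The smoothing is done by a simple normalized quadrature rather than any particular library routine, and the function name and width parameter are hypothetical.

```python
import numpy as np

def add_features(a, grid, eps=0.05):
    """Illustrative preprocessing: stack position, the raw input, a smoothed
    copy, and the derivative of the smoothed copy into a feature vector."""
    # Gaussian smoothing by a normalized quadrature; eps plays the role of
    # the mollification width.
    w = np.exp(-0.5 * ((grid[:, None] - grid[None, :]) / eps) ** 2)
    w /= w.sum(axis=1, keepdims=True)
    a_eps = w @ a
    grad = np.gradient(a_eps, grid)        # finite-difference derivative
    return np.stack([grid, a, a_eps, grad], axis=-1)

grid = np.linspace(0, 1, 128)
features = add_features(np.sin(2 * np.pi * grid), grid)   # shape (128, 4)
```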
Discretization Invariance and Approximation
In light of the discretization invariance theorem (Theorem 8) and the universal approximation theorems (Theorems 11, 12, 13, and 14), whose formal statements are given in Section 8, we may decompose the total error made by a neural operator into a discretization error and an approximation error. In particular, given a finite dimensional instantiation of a neural operator $\hat{\mathcal{G}}_\theta : \mathbb{R}^{Ld} \times \mathbb{R}^{L d_a} \to \mathcal{U}$, for some $L$-point discretization $D_L$ of the input, we have

$$\|\hat{\mathcal{G}}_\theta(D_L, a|_{D_L}) - \mathcal{G}^\dagger(a)\|_{\mathcal{U}} \le \underbrace{\|\hat{\mathcal{G}}_\theta(D_L, a|_{D_L}) - \mathcal{G}_\theta(a)\|_{\mathcal{U}}}_{\text{discretization error}} + \underbrace{\|\mathcal{G}_\theta(a) - \mathcal{G}^\dagger(a)\|_{\mathcal{U}}}_{\text{approximation error}}.$$

Our approximation-theoretic theorems imply that we can find parameters $\theta$ so that the approximation error is arbitrarily small, while the discretization invariance theorem states that we can find a fine enough discretization (large enough $L$) so that the discretization error is arbitrarily small. Therefore, with a fixed set of parameters independent of the input discretization, a neural operator implementable on a computer can approximate operators to arbitrary accuracy.
4 Parameterization and Computation
In this section, we discuss various ways of parameterizing the infinite dimensional architecture (6), Figure 2. The goal is to find an intrinsic infinite dimensional parameterization that achieves small error (say $\epsilon$), and then rely on numerical approximation to ensure that this parameterization delivers an error of the same magnitude (say $2\epsilon$) for all sufficiently fine data discretizations. In this way the number of parameters used to achieve $O(\epsilon)$ error is independent of the data discretization. In many applications we have in mind, the data discretization is something we can control, for example when generating input/output pairs from solutions of partial differential equations via numerical simulation. The proposed approach allows us to train a neural operator approximation using data from different discretizations, and to predict with discretizations different from those used in the data, all by relating everything to the underlying infinite dimensional problem.
We also discuss the computational complexity of the proposed parameterizations and suggest algorithms which yield efficient numerical methods for approximation. Subsections 4.1-4.4 delineate each of the proposed methods.
To simplify notation, we consider only a single layer of (6), i.e. (10), and choose the input and output domains to be the same. Furthermore, we drop the layer index $t$ and write the single layer update as

$$u(x) = \sigma\left( W v(x) + \int_D \kappa(x, y)\, v(y)\, \mathrm{d}\nu(y) + b(x) \right) \quad \forall x \in D \tag{12}$$

where $D \subset \mathbb{R}^d$ is a bounded domain, $v : D \to \mathbb{R}^n$ is the input function, and $u : D \to \mathbb{R}^m$ is the output function. When the domains of $v$ and $u$ differ, we will usually extend them to a larger, common domain. We consider $\sigma$ to be fixed and, for the time being, take $\mathrm{d}\nu(y) = \mathrm{d}y$ to be the Lebesgue measure on $\mathbb{R}^d$. Equation (12) then leaves three objects which can be parameterized: $W$, $b$, and $\kappa$. Since $W$ is linear and acts only locally on $v$, we will always parametrize it by the values of its associated $m \times n$ matrix; hence $W \in \mathbb{R}^{m \times n}$, yielding $mn$ parameters. We have found empirically that letting $b : D \to \mathbb{R}^m$ be a constant function over any domain $D$ works at least as well as allowing it to be an arbitrary neural network. Perusal of the proof of Theorem 11 shows that we do not lose any approximation power by doing this, and we reduce the total number of parameters in the architecture. Therefore we will always parametrize $b$ by the entries of a fixed $m$-dimensional vector; in particular, $b \in \mathbb{R}^m$, yielding $m$ parameters. Notice that both parameterizations are independent of any discretization of $v$.
The rest of this section is dedicated to choosing the kernel function $\kappa : D \times D \to \mathbb{R}^{m \times n}$ and to the computation of the associated integral kernel operator. For clarity of exposition, we consider only the simplest proposed version of this operator, (7), but note that similar ideas may also be applied to (8) and (9). Furthermore, in order to focus on learning the kernel $\kappa$, here we drop $W$, $b$, and $\sigma$ from (12) and simply consider the linear update

$$u(x) = \int_D \kappa(x, y)\, v(y)\, \mathrm{d}\nu(y) \quad \forall x \in D. \tag{13}$$

To demonstrate the computational challenges associated with (13), let $\{x_1, \dots, x_J\} \subset D$ be a uniformly-sampled $J$-point discretization of $D$. Recall that we assumed $\mathrm{d}\nu(y) = \mathrm{d}y$ and, for simplicity, suppose that $\nu(D) = 1$; then the Monte Carlo approximation of (13) is

$$u(x_j) \approx \frac{1}{J} \sum_{l=1}^{J} \kappa(x_j, x_l)\, v(x_l), \qquad j = 1, \dots, J. \tag{14}$$

The integral in (13) can also be approximated by other quadrature methods, including the celebrated Riemann sum, for which $u(x_j) \approx \sum_{l=1}^{J} \kappa(x_j, x_l)\, v(x_l)\, \Delta x_l$, where $\Delta x_l$ is the Riemann sum coefficient associated with the point $x_l$. For such approximation methods, computing $u$ on the entire grid requires $O(J^2)$ matrix-vector multiplications. Each of these multiplications costs $O(mn)$ operations; for the rest of the discussion, we treat $mn = O(1)$ as constant and consider only the cost with respect to the discretization parameter $J$, since $m$ and $n$ are fixed by the choice of architecture whereas $J$ varies with the required discretization accuracy and hence may be arbitrarily large. This cost is not specific to the Monte Carlo approximation but is generic for quadrature rules which use the entirety of the data. Therefore, when $J$ is large, computing (13) becomes intractable and new ideas are needed. Subsections 4.1-4.4 propose different approaches to this problem, inspired by classical methods in numerical analysis. We finally remark that, in contrast, computations with $W$, $b$, and $\sigma$ require only $O(J)$ operations, which justifies our focus on computation with the kernel integral operator.
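The quadratic cost is easy to see in a direct implementation of the Monte Carlo approximation (14): the kernel must be evaluated on all $J^2$ pairs of points. The sketch below uses a fixed closed-form matrix-valued kernel as a stand-in for a learned network; all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
J, n, m = 120, 3, 2

def kappa(xj, xl):
    # Illustrative smooth matrix-valued kernel standing in for a learned
    # network; returns an (m, n) block for scalar inputs.
    return np.array([[np.exp(-(xj - xl) ** 2) * (p + q + 1) for q in range(n)]
                     for p in range(m)])

x = rng.uniform(0, 1, J)                       # uniformly sampled points
v = np.stack([np.sin(np.pi * q * x) for q in range(1, n + 1)], axis=-1)

# Monte Carlo approximation (14): one (m, n) block per pair of points,
# hence O(J^2) kernel evaluations overall.
K = np.array([[kappa(xj, xl) for xl in x] for xj in x])   # (J, J, m, n)
u = np.einsum("jlpq,lq->jp", K, v) / J
```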
Kernel Matrix.
It will oftentimes be useful to consider the kernel matrix associated to $\kappa$ for the discrete points $\{x_1, \dots, x_J\} \subset D$. We define the kernel matrix $K \in \mathbb{R}^{mJ \times nJ}$ to be the $J \times J$ block matrix with each block given by the value of the kernel, i.e.

$$K_{jl} = \kappa(x_j, x_l) \in \mathbb{R}^{m \times n}, \qquad j, l = 1, \dots, J$$

where we use $(j, l)$ to index an individual block rather than a matrix element. Various numerical algorithms for the efficient computation of (13) can be derived based on assumptions made about the structure of this matrix, for example, bounds on its rank or sparsity.
4.1 Graph Neural Operator (GNO)
We first outline the Graph Neural Operator (GNO), which approximates (13) by combining a Nyström approximation with domain truncation, and which is implemented with the standard framework of graph neural networks. This construction was originally proposed in Li et al. (2020c).
Nyström Approximation.
A simple yet effective method to alleviate the cost of computing (13) is to employ a Nyström approximation. This amounts to sampling uniformly at random the points over which we compute the output function $u$. In particular, let $x_{n_1}, \dots, x_{n_{J'}} \in \{x_1, \dots, x_J\}$ be $J' \ll J$ randomly selected points and, assuming $\nu(D) = 1$, approximate (13) by

$$u(x_{n_j}) \approx \frac{1}{J'} \sum_{l=1}^{J'} \kappa(x_{n_j}, x_{n_l})\, v(x_{n_l}), \qquad j = 1, \dots, J'.$$

We can view this as a low-rank approximation to the kernel matrix $K$; in particular,

$$K \approx K_{JJ'} K_{J'J'} K_{J'J} \tag{15}$$

where $K_{J'J'}$ is a $J' \times J'$ block matrix and $K_{JJ'}$, $K_{J'J}$ are interpolation matrices, for example, linearly extending the function to the whole domain from the random nodal points. The complexity of this computation is $O(J'^2)$; hence it remains quadratic, but only in the number of subsampled points $J'$, which we assume is much smaller than the number of points $J$ in the original discretization.
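A sketch of the Nyström idea, using a fixed stand-in scalar kernel: subsampling $J' \ll J$ points gives an $O(J'^2)$ estimate that tracks the full computation at those points up to Monte Carlo error.

```python
import numpy as np

rng = np.random.default_rng(0)
J, Jp = 4096, 512                 # full vs. subsampled discretization, J' << J

x = rng.uniform(0, 1, J)
v = np.sin(2 * np.pi * x)
kappa = lambda a, b: np.exp(-np.abs(a[:, None] - b[None, :]))  # stand-in kernel

# Nystrom approximation: evaluate the output only at J' randomly chosen
# points, using only those points as quadrature nodes, at O(J'^2) cost.
idx = rng.choice(J, size=Jp, replace=False)
u_nystrom = kappa(x[idx], x[idx]) @ v[idx] / Jp

# Reference: the full O(J^2)-style quadrature evaluated at the same points.
u_full = kappa(x[idx], x) @ v / J
```

The two estimates differ only by the Monte Carlo error of the subsample, which decays like $1/\sqrt{J'}$.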
Truncation.
Another simple method to alleviate the cost of computing (13) is to truncate the integral to a sub-domain of $D$ which depends on the point of evaluation $x \in D$. Let $s : D \to \mathcal{B}(D)$ be a mapping of the points of $D$ to the Lebesgue-measurable subsets of $D$, denoted $\mathcal{B}(D)$. Define $\mathrm{d}\nu(x, y) = 1_{s(x)}(y)\, \mathrm{d}y$; then (13) becomes

$$u(x) = \int_{s(x)} \kappa(x, y)\, v(y)\, \mathrm{d}y \quad \forall x \in D. \tag{16}$$

If the size of each set $s(x)$ is smaller than $D$, then the cost of computing (16) is $O(c_s J^2)$ where $c_s < 1$ is a constant depending on $s$. While the cost remains quadratic in $J$, the constant $c_s$ can have a significant effect in practical computations, as we demonstrate in Section 7. For simplicity and ease of implementation, we only consider $s(x) = B(x, r) \cap D$ where $B(x, r) = \{y \in \mathbb{R}^d : \|y - x\|_{\mathbb{R}^d} < r\}$ for some fixed $r > 0$. With this choice of $s$, and assuming that $D = [0, 1]^d$, we can explicitly calculate that $c_r \approx r^d$.
Furthermore, notice that we do not lose any expressive power when we make this approximation so long as we combine it with composition. To see this, consider the example of the previous paragraph where, if we let $r = \sqrt{d}$, then (16) reverts to (13). Pick $r < 1$ and let $L \in \mathbb{N}$ with $L \ge 2$ be the smallest integer such that $2^{L-1} r \ge 1$. Suppose that $u(x)$ is computed by composing the right-hand side of (16) $L$ times, with a different kernel each time. The domain of influence of $u(x)$ is then $B(x, 2^{L-1} r) \cap D = D$, hence it is easy to see that there exist $L$ kernels such that computing this composition is equivalent to computing (13) for any given kernel with appropriate regularity. Furthermore, the cost of this computation is $O(L c_r J^2)$, and therefore the truncation is beneficial if $c_r (\log_2(1/r) + 1) < 1$, which holds for any $r < 1/2$ when $d = 1$ and any $r < 1$ when $d \ge 2$. Therefore we have shown that we can always reduce the cost of computing (13) by truncation and composition. From the perspective of the kernel matrix, truncation enforces a sparse, block diagonally-dominant structure at each layer. We further explore the hierarchical nature of this computation using the multipole method in subsection 4.3.
Besides being a useful computational tool, truncation can also be interpreted as explicitly building local structure into the kernel ๐ . For problems where such structure exists, explicitly enforcing it makes learning more efficient, usually requiring less data to achieve the same generalization error. Many physical systems such as interacting particles in an electric potential exhibit strong local behavior that quickly decays, making truncation a natural approximation technique.
Graph Neural Networks.
We utilize the standard architecture of message passing graph networks employing edge features, as introduced in Gilmer et al. (2017), to efficiently implement (13) on arbitrary discretizations of the domain $D$. To do so, we treat a discretization $\{x_1, \dots, x_J\} \subset D$ as the nodes of a weighted, directed graph and assign edges to each node using the function $s : D \to \mathcal{B}(D)$ which, recall from the section on truncation, assigns to each point a domain of integration. In particular, for $j = 1, \dots, J$, we assign the node $x_j$ the value $v(x_j)$ and emanate from it edges to the nodes in $\mathcal{N}(x_j) := s(x_j) \cap \{x_1, \dots, x_J\}$, which we call the neighborhood of $x_j$. If $s(x) = D$, then the graph is fully-connected. Generally, the sparsity structure of the graph determines the sparsity of the kernel matrix $K$; indeed, the adjacency matrix of the graph and the block kernel matrix have the same zero entries. The weights of each edge are assigned as the arguments of the kernel. In particular, for the case of (13), the weight of the edge between nodes $x_j$ and $x_l$ is simply the concatenation $(x_j, x_l) \in \mathbb{R}^{2d}$. More complicated weighting functions are considered for the implementation of the integral kernel operators (8) or (9).
With the above definition, the message passing algorithm of Gilmer et al. (2017), with averaging aggregation, updates the value $v(x_j)$ of the node $x_j$ to the value $u(x_j)$ as

$$u(x_j) = \frac{1}{|\mathcal{N}(x_j)|} \sum_{y \in \mathcal{N}(x_j)} \kappa(x_j, y)\, v(y), \qquad j = 1, \dots, J$$

which corresponds to the Monte Carlo approximation of the integral (16). More sophisticated quadrature rules and adaptive meshes can also be implemented using the general framework of message passing on graphs; see, for example, Pfaff et al. (2020). We further utilize this framework in subsection 4.3.
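A minimal sketch of this message passing update on a random point cloud follows; the adjacency comes from the truncation rule $s(x) = B(x, r) \cap D$, `kappa` stands in for a learned kernel network, and the per-node loop is exactly the displayed averaged sum.

```python
import numpy as np

rng = np.random.default_rng(0)
J, radius = 400, 0.15

x = rng.uniform(0, 1, (J, 2))                  # random discretization of D
v = np.sin(2 * np.pi * x[:, 0])
kappa = lambda xi, ys: np.exp(-np.sum((xi - ys) ** 2, axis=-1))

# Directed edges from the truncation rule: x_l is a neighbor of x_j whenever
# it lies within the integration radius; the adjacency matrix has the same
# zero pattern as the block kernel matrix.
adj = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1) < radius

u = np.empty(J)
for j in range(J):                             # aggregate messages per node
    nbrs = np.nonzero(adj[j])[0]               # N(x_j); always contains x_j
    u[j] = np.mean(kappa(x[j], x[nbrs]) * v[nbrs])
```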
Convolutional Neural Networks.
Lastly, we compare and contrast the GNO framework to standard convolutional neural networks (CNNs). In computer vision, the success of CNNs has largely been attributed to their ability to capture local features such as edges that can be used to distinguish different objects in a natural image. This property is obtained by enforcing the convolution kernel to have local support, an idea similar to our truncation approximation. Furthermore by directly using a translation invariant kernel, a CNN architecture becomes translation equivariant; this is a desirable feature for many vision models e.g. ones that perform segmentation. We will show that similar ideas can be applied to the neural operator framework to obtain an architecture with built-in local properties and translational symmetries that, unlike CNNs, remain consistent in function space.
To that end, let $\kappa(x, y) = \kappa(x - y)$ and suppose that $\kappa : \mathbb{R}^d \to \mathbb{R}^{m \times n}$ is supported on $B(0, r)$. Let $r^* > 0$ be the smallest radius such that $D \subseteq B(x^*, r^*)$, where $x^* \in \mathbb{R}^d$ denotes the center of mass of $D$, and suppose $r \ll r^*$. Then (13) becomes the convolution

$$u(x) = (\kappa * v)(x) = \int_{B(x, r) \cap D} \kappa(x - y)\, v(y)\, \mathrm{d}y \quad \forall x \in D. \tag{17}$$

Notice that (17) is precisely (16) with $s(x) = B(x, r) \cap D$ and $\kappa(x, y) = \kappa(x - y)$. When the kernel is parameterized by, e.g., a standard neural network and the radius $r$ is chosen independently of the data discretization, (17) becomes a layer of a convolution neural network that is consistent in function space. Indeed, the parameters of (17) do not depend on any discretization of $v$. The choice $\kappa(x, y) = \kappa(x - y)$ enforces translational equivariance in the output, while picking $r$ small enforces locality in the kernel; hence we obtain the distinguishing features of a CNN model.
We will now show that, by picking a parameterization that is inconsistent in function space and applying a Monte Carlo approximation to the integral, (17) becomes a standard CNN. This is most easily demonstrated when $D = [0, 1]$ and the discretization $\{x_1, \dots, x_J\}$ is equispaced, i.e. $|x_{j+1} - x_j| = h$ for every $j = 1, \dots, J - 1$. Let $k \in \mathbb{N}$ be an odd filter size and let $z_1, \dots, z_k \in \mathbb{R}$ be the points $z_j = (j - 1 - (k-1)/2)\, h$ for $j = 1, \dots, k$. It is easy to see that $\{z_1, \dots, z_k\} \subset \bar{B}(0, (k-1)h/2)$, which we choose as the support of $\kappa$. Furthermore, we parameterize $\kappa$ directly by its pointwise values, which are $m \times n$ matrices at the locations $z_1, \dots, z_k$, thus yielding $kmn$ parameters. Then (17) becomes

$$u(x_j)_p = \frac{1}{k} \sum_{l=1}^{k} \sum_{q=1}^{n} \kappa(z_l)_{pq}\, v(x_j - z_l)_q, \qquad j = 1, \dots, J, \quad p = 1, \dots, m$$

where we define $v(x) = 0$ if $x \notin \{x_1, \dots, x_J\}$. Up to the constant factor $1/k$, which can be re-absorbed into the parameterization of $\kappa$, this is precisely the update of a stride-1 CNN with $n$ input channels, $m$ output channels, and zero-padding so that the input and output signals have the same length. This example can easily be generalized to higher dimensions and different CNN structures; we made the current choices for simplicity of exposition. Notice that if we double the number of discretization points for $v$, i.e. $J \mapsto 2J$ and $h \mapsto h/2$, the support of $\kappa$ becomes $\bar{B}(0, (k-1)h/4)$; hence the model changes due to the discretization of the data. Indeed, if we take the continuum limit $J \to \infty$, we find $\bar{B}(0, (k-1)h/2) \to \{0\}$, hence the model becomes completely local. To fix this, we may try to increase the filter size $k$ (or, equivalently, add more layers) simultaneously with $J$, but then the number of parameters in the model goes to infinity as $J \to \infty$ since, as we previously noted, there are $kmn$ parameters in this layer. Therefore standard CNNs are not consistent models in function space. We demonstrate their inability to generalize to different resolutions in Section 7.
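The reduction to a standard CNN can be checked numerically. The sketch below instantiates the displayed update for $m = n = 1$ with a symmetric stencil of $k$ pointwise kernel values (for a symmetric stencil, correlation and convolution coincide); the weights are arbitrary illustrative numbers, not learned parameters.

```python
import numpy as np

J, k = 64, 5                                  # grid size and odd filter size
h = 1.0 / (J - 1)
x = np.linspace(0, 1, J)
v = np.sin(2 * np.pi * x)
w = np.array([0.1, 0.2, 0.4, 0.2, 0.1])      # kappa(z_1), ..., kappa(z_k)

# Zero-pad so input and output have the same length, then apply the stride-1
# update u(x_j) = (1/k) * sum_l kappa(z_l) v(x_j - z_l).
p = np.pad(v, (k // 2, k // 2))
u = np.convolve(p, w[::-1], mode="valid") / k

# Halving h (doubling J) shrinks the filter's physical support (k - 1) h / 2,
# so the same k weights represent a different operator on the finer grid:
# this is the function-space inconsistency discussed above.
```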
4.2 Low-rank Neural Operator (LNO)
By directly imposing that the kernel $\kappa$ has a tensor product form, we obtain a layer with $O(J)$ computational complexity. We term this construction the Low-rank Neural Operator (LNO) due to its equivalence to directly parameterizing a finite-rank operator. We start by assuming $\kappa : D \times D \to \mathbb{R}$ is scalar-valued and later generalize to the vector-valued setting. We express the kernel as

$$\kappa(x, y) = \sum_{j=1}^{r} \varphi^{(j)}(x)\, \psi^{(j)}(y) \quad \forall x, y \in D$$

for some functions $\varphi^{(1)}, \psi^{(1)}, \dots, \varphi^{(r)}, \psi^{(r)} : D \to \mathbb{R}$ that are normally given as the components of two neural networks $\varphi, \psi : D \to \mathbb{R}^r$, or of a single neural network $\Xi : D \to \mathbb{R}^{2r}$ which couples all functions through its parameters. With this definition, and supposing that $m = n = 1$, we have that (13) becomes

$$u(x) = \int_D \sum_{j=1}^{r} \varphi^{(j)}(x)\, \psi^{(j)}(y)\, v(y)\, \mathrm{d}y = \sum_{j=1}^{r} \left( \int_D \psi^{(j)}(y)\, v(y)\, \mathrm{d}y \right) \varphi^{(j)}(x) = \sum_{j=1}^{r} \langle \psi^{(j)}, v \rangle\, \varphi^{(j)}(x)$$

where $\langle \cdot, \cdot \rangle$ denotes the $L^2(D; \mathbb{R})$ inner product. Notice that the inner products can be evaluated independently of the evaluation point $x \in D$; hence the computational complexity of this method is $O(rJ)$, which is linear in the discretization.
We may also interpret this choice of kernel as directly parameterizing a rank $r \in \mathbb{N}$ operator on $L^2(D; \mathbb{R})$. Indeed, we have

$$u = \sum_{j=1}^{r} \big( \varphi^{(j)} \otimes \psi^{(j)} \big)\, v \tag{18}$$

which corresponds precisely to applying the SVD of a rank $r$ operator to the function $v$. Equation (18) makes the vector-valued generalization natural. Assume $n, m \ge 1$ and $\varphi^{(j)} : D \to \mathbb{R}^m$, $\psi^{(j)} : D \to \mathbb{R}^n$ for $j = 1, \dots, r$; then (18) defines an operator mapping $L^2(D; \mathbb{R}^n) \to L^2(D; \mathbb{R}^m)$ that can be evaluated as

$$u(x) = \sum_{j=1}^{r} \langle \psi^{(j)}, v \rangle_{L^2(D; \mathbb{R}^n)}\, \varphi^{(j)}(x) \quad \forall x \in D.$$

We again note the linear computational complexity of this parameterization. Finally, we observe that this method can be interpreted as directly imposing a rank-$r$ structure on the kernel matrix. Indeed,

$$K = K_{Jr} K_{rJ}$$

where $K_{Jr}$, $K_{rJ}$ are $J \times r$ and $r \times J$ block matrices, respectively. This construction is similar to the DeepONet construction of Lu et al. (2019) discussed in Section 5.1, but parameterized to be consistent in function space.
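A sketch of the resulting $O(rJ)$ evaluation, with fixed trigonometric modes standing in for the learned networks $\varphi$ and $\psi$, compared against materializing the rank-$r$ kernel matrix at $O(J^2)$ cost:

```python
import numpy as np

J, r = 1000, 4
x = np.linspace(0, 1, J)
v = np.sin(2 * np.pi * x)
dx = 1.0 / J                                   # quadrature weight on [0, 1]

# Fixed trigonometric modes standing in for the learned networks phi, psi.
phi = np.stack([np.cos(np.pi * j * x) for j in range(r)])        # (r, J)
psi = np.stack([np.sin(np.pi * (j + 1) * x) for j in range(r)])  # (r, J)

# O(rJ): r inner products <psi_j, v>, then a weighted sum of the phi_j.
inner = psi @ v * dx
u = inner @ phi

# O(J^2) reference: materialize the rank-r kernel matrix K(x_i, y_l).
K = phi.T @ psi
u_dense = K @ v * dx
```

The two evaluations agree to floating-point precision; only the operation count differs.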
4.3 Multipole Graph Neural Operator (MGNO)
A natural extension to directly working with kernels in a tensor product form, as in Section 4.2, is to instead consider kernels that can be well approximated by such a form. This assumption gives rise to the fast multipole method (FMM), which employs a multi-scale decomposition of the kernel in order to achieve linear complexity in computing (13); for a detailed discussion see e.g. (E, 2011, Section 3.2). FMM can be viewed as a systematic approach to combining the sparse and low-rank approximations to the kernel matrix. Indeed, the kernel matrix is decomposed into different ranges and a hierarchy of low-rank structures is imposed on the long-range components. We employ this idea to construct hierarchical, multi-scale graphs, without being constrained to particular forms of the kernel. We will elucidate the workings of the FMM through matrix factorization. This approach was first outlined in Li et al. (2020b) and is referred to as the Multipole Graph Neural Operator (MGNO).
The key to the fast multipole method's linear complexity lies in the subdivision of the kernel matrix according to the range of interaction, as shown in Figure 3:

$$K = K_1 + K_2 + \dots + K_L \tag{19}$$

where $K_1$ corresponds to the shortest-range interaction and $K_L$ to the longest-range interaction; more generally, the index $\ell$ orders the matrices $K_\ell$ by the range of interaction. While the uniform grids depicted in Figure 3 produce an orthogonal decomposition of the kernel, the decomposition may be generalized to arbitrary discretizations by allowing slight overlap of the ranges.
Figure 3: Hierarchical matrix decomposition. The kernel matrix $K$ is decomposed with respect to its interaction ranges. $K_1$ corresponds to short-range interaction; it is sparse but full-rank. $K_3$ corresponds to long-range interaction; it is dense but low-rank.
Multi-scale Discretization.
We produce a hierarchy of $L$ discretizations with a decreasing number of nodes $J_1 \ge \dots \ge J_L$ and an increasing kernel integration radius $r_1 \le \dots \le r_L$. Therefore the shortest-range interaction $K_1$ has a fine resolution but is truncated locally, while the longest-range interaction $K_L$ has a coarse resolution but covers the entire domain. This is shown pictorially in Figure 3. The numbers of nodes $J_1 \ge \dots \ge J_L$ and the integration radii $r_1 \le \dots \le r_L$ are hyperparameter choices and can be picked so that the total computational complexity is linear in $J$.
A special case of this construction is when the grid is uniform. Then our formulation reduces to the standard fast multipole algorithm and the kernels $K_\ell$ form an orthogonal decomposition of the full kernel matrix $K$. Assuming the underlying discretization $\{x_1, \dots, x_J\} \subset D$ is a uniform grid with resolution $s$ such that $s^d = J$, the $L$ multi-level discretizations will be grids with resolution $s_\ell = s / 2^{\ell - 1}$, and consequently $J_\ell = s_\ell^d = (s / 2^{\ell-1})^d$. In this case $r_\ell$ can be chosen as $1 / s_\ell$ for $\ell = 1, \dots, L$. To ensure orthogonality of the discretizations, the fast multipole algorithm sets the integration domains to be $B(x, r_\ell) \setminus B(x, r_{\ell-1})$ for each level $\ell = 2, \dots, L$, so that the discretization on level $\ell$ does not overlap with the one on level $\ell - 1$. Details of this algorithm can be found in, e.g., Greengard and Rokhlin (1997).
Recursive Low-rank Decomposition.
The coarse discretization representation can be understood as recursively applying an inducing points approximation (Quiñonero-Candela and Rasmussen, 2005): starting from a discretization with $J_1 = J$ nodes, we impose inducing points of sizes $J_2, J_3, \dots, J_L$, which all admit a low-rank kernel matrix decomposition of the form (15). The original $J \times J$ kernel matrix $K_\ell$ is represented by a much smaller $J_\ell \times J_\ell$ kernel matrix, denoted by $K_{\ell,\ell}$. As shown in Figure 3, $K_1$ is full-rank but very sparse while $K_L$ is dense but low-rank. Such structure can be achieved by applying equation (15) recursively to equation (19), leading to the multi-resolution matrix factorization (Kondor et al., 2014):

$$K \approx K_{1,1} + K_{1,2} K_{2,2} K_{2,1} + K_{1,2} K_{2,3} K_{3,3} K_{3,2} K_{2,1} + \cdots \tag{20}$$

where $K_{1,1} = K_1$ represents the shortest range, $K_{1,2} K_{2,2} K_{2,1} \approx K_2$ represents the second shortest range, etc. The center matrix $K_{\ell,\ell}$ is a $J_\ell \times J_\ell$ kernel matrix corresponding to the level-$\ell$ discretization described above. The matrices $K_{\ell+1,\ell}$ and $K_{\ell,\ell+1}$ are $J_{\ell+1} \times J_\ell$ (wide) and $J_\ell \times J_{\ell+1}$ (long) block transition matrices, respectively. Denote by $v_\ell \in \mathbb{R}^{J_\ell \times n}$ the representation of the input $v$ at each level of the discretization for $\ell = 1, \dots, L$, and by $u_\ell \in \mathbb{R}^{J_\ell \times n}$ the corresponding output (assuming the inputs and outputs have the same dimension). We define the matrices $K_{\ell+1,\ell}$, $K_{\ell,\ell+1}$ as moving the representation $v_\ell$ between different levels of the discretization via an integral kernel that we learn. Combining with the truncation idea introduced in subsection 4.1, we define the transition matrices as discretizations of the following integral kernel operators:

$$K_{\ell,\ell} : v_\ell \mapsto u_\ell = \int_{B(x, r_{\ell,\ell})} \kappa_{\ell,\ell}(x, y)\, v_\ell(y)\, \mathrm{d}y \tag{21}$$

$$K_{\ell+1,\ell} : v_\ell \mapsto u_{\ell+1} = \int_{B(x, r_{\ell+1,\ell})} \kappa_{\ell+1,\ell}(x, y)\, v_\ell(y)\, \mathrm{d}y \tag{22}$$

$$K_{\ell,\ell+1} : v_{\ell+1} \mapsto u_\ell = \int_{B(x, r_{\ell,\ell+1})} \kappa_{\ell,\ell+1}(x, y)\, v_{\ell+1}(y)\, \mathrm{d}y \tag{23}$$

where each kernel $\kappa_{\ell,\ell'} : D \times D \to \mathbb{R}^{n \times n}$ is parameterized as a neural network and learned.
Figure 4: V-cycle
Left: the multi-level discretization. Right: one V-cycle iteration for the multipole neural operator.
V-cycle Algorithm
We present a V-cycle algorithm, see Figure 4, for efficiently computing (20). It consists of two steps: the downward pass and the upward pass. Denote the representations in the downward and upward pass by $\check{v}$ and $\hat{v}$ respectively. In the downward step, the algorithm starts from the fine discretization representation $\check{v}_1$ and updates it by applying a downward transition $\check{v}_{\ell+1} = K_{\ell+1,\ell}\, \check{v}_\ell$. In the upward step, the algorithm starts from the coarse representation $\hat{v}_L$ and updates it by applying an upward transition and the center kernel matrix, $\hat{v}_\ell = K_{\ell,\ell+1}\, \hat{v}_{\ell+1} + K_{\ell,\ell}\, \check{v}_\ell$. Notice that applying one level downward and upward exactly computes $K_{1,1} + K_{1,2} K_{2,2} K_{2,1}$, and a full $L$-level V-cycle leads to the multi-resolution decomposition (20).

Employing (21)-(23), we use $L$ neural networks $\kappa_{1,1}, \dots, \kappa_{L,L}$ to approximate the kernel operators associated to $K_{\ell,\ell}$, and $2(L-1)$ neural networks $\kappa_{1,2}, \kappa_{2,1}, \dots$ to approximate the transitions $K_{\ell+1,\ell}$, $K_{\ell,\ell+1}$. Following the iterative architecture (6), we introduce the linear operator $W \in \mathbb{R}^{n \times n}$ (denoted $W_\ell$ for each corresponding resolution) to help regularize the iteration, as well as the nonlinear activation function $\sigma$ to increase expressiveness. Since $\sigma$ acts pointwise (requiring that $J$ remain the same for input and output), we employ it only along with the kernel $K_{\ell,\ell}$ and not with the transitions. At each layer $t = 0, \dots, T-1$, we perform a full V-cycle as follows:

• Downward Pass. For $\ell = 1, \dots, L$:

$$\check{v}^{(t+1)}_{\ell+1} = \sigma\big( \hat{v}^{(t)}_{\ell+1} + K_{\ell+1,\ell}\, \check{v}^{(t+1)}_\ell \big) \tag{24}$$

• Upward Pass. For $\ell = L, \dots, 1$:

$$\hat{v}^{(t+1)}_\ell = \sigma\big( (W_\ell + K_{\ell,\ell})\, \check{v}^{(t+1)}_\ell + K_{\ell,\ell+1}\, \hat{v}^{(t+1)}_{\ell+1} \big). \tag{25}$$
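A minimal sketch of one V-cycle in the spirit of (24)-(25) follows, with small fixed random matrices standing in for the learned kernel and transition operators (scaled down so the iteration stays in the active range of `tanh`); the level sizes follow $J_1 \ge J_2 \ge J_3$ and the coarsest level simply omits the upward transition term.

```python
import numpy as np

rng = np.random.default_rng(0)
Js = [64, 32, 16]                  # J_1 >= J_2 >= J_3: three levels
L = len(Js)
sigma = np.tanh

# Fixed random stand-ins for the learned center, transition, and W operators.
K_center = [0.1 * rng.standard_normal((Js[l], Js[l])) for l in range(L)]
K_down = [0.1 * rng.standard_normal((Js[l + 1], Js[l])) for l in range(L - 1)]
K_up = [0.1 * rng.standard_normal((Js[l], Js[l + 1])) for l in range(L - 1)]
W = [0.1 * rng.standard_normal((Js[l], Js[l])) for l in range(L)]

def v_cycle(v1, v_hat_prev):
    """One V-cycle: v_hat_prev holds the previous layer's upward
    representations; returns the new upward representations."""
    v_check = [v1]                              # downward pass, fine to coarse
    for l in range(L - 1):
        v_check.append(sigma(v_hat_prev[l + 1] + K_down[l] @ v_check[l]))
    v_hat = [None] * L                          # upward pass, coarse to fine
    for l in reversed(range(L)):
        up = K_up[l] @ v_hat[l + 1] if l < L - 1 else 0.0
        v_hat[l] = sigma((W[l] + K_center[l]) @ v_check[l] + up)
    return v_hat

v_hat = [np.zeros(J) for J in Js]
v_hat = v_cycle(rng.standard_normal(Js[0]), v_hat)   # one full V-cycle
```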
Multi-level Graphs.
We emphasize that we view the discretization $\{x_1, \dots, x_J\} \subset D$ as a graph in order to facilitate an efficient implementation through the message passing graph neural network architecture. Since the V-cycle algorithm works at different levels of the discretization, we build multi-level graphs to represent the coarser and finer discretizations. We present and utilize two constructions of multi-level graphs: the orthogonal multipole graph and the generalized random graph. The orthogonal multipole graph is the standard grid construction used in the fast multipole method, which is adapted to a uniform grid; see, e.g., Greengard and Rokhlin (1997). In this construction, the decomposition in (19) is orthogonal in that the finest graph only captures the closest-range interaction, the second finest graph captures the second closest-range interaction minus the part already captured in the previous graph, and so on, recursively. In particular, the ranges of interaction for each kernel do not overlap. While this construction is usually efficient, it is limited to uniform grids, which may be a bottleneck for certain applications. Our second construction is the generalized random graph, as shown in Figure 3, where the ranges of the kernels are allowed to overlap. The generalized random graph is very flexible as it can be applied on any domain geometry and discretization. Further, it can be combined with random sampling methods to work on problems where $J$ is very large, or combined with an active learning method to adaptively choose the regions where a finer discretization is needed.
Linear Complexity.
Each term in the decomposition (19) is represented by the kernel matrix $K_{\ell,\ell}$ for $\ell = 1, \dots, L$, and $K_{\ell+1,\ell}$, $K_{\ell,\ell+1}$ for $\ell = 1, \dots, L-1$, corresponding to the appropriate sub-discretization. Therefore the complexity of the multipole method is
$$\sum_{\ell=1}^{L} O(J_\ell^2 r_\ell^d) + \sum_{\ell=1}^{L-1} O(J_\ell J_{\ell+1} r_\ell^d) = \sum_{\ell=1}^{L} O(J_\ell^2 r_\ell^d).$$
By designing the sub-discretization so that $O(J_\ell^2 r_\ell^d) \le O(J)$, we can obtain complexity linear in $J$. For example, when $d = 2$, pick $r_\ell = 1/\sqrt{J_\ell}$ and $J_\ell = O(2^{-\ell} J)$ such that $r_L$ is large enough that there exists a ball of radius $r_L$ containing $D$. Then clearly $\sum_{\ell=1}^{L} O(J_\ell^2 r_\ell^d) = O(J)$. By combining with a Nyström approximation, we can obtain $O(J')$ complexity for some $J' \ll J$.
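The complexity bookkeeping above can be checked numerically. The sketch below (not from the paper's code; `multipole_cost` is a name we introduce for illustration) evaluates $\sum_\ell J_\ell^2 r_\ell^d$ for the choice $r_\ell = 1/\sqrt{J_\ell}$, $J_\ell = 2^{-\ell} J$ in $d = 2$ dimensions, and confirms the total grows linearly in $J$:

```python
# Sketch: verify that the multipole cost sum_l J_l^2 * r_l^d stays O(J)
# for r_l = 1/sqrt(J_l) and J_l = 2^{-l} J in d = 2 dimensions.
def multipole_cost(J, L, d=2):
    total = 0.0
    for l in range(1, L + 1):
        J_l = J / 2**l            # points retained at level l
        r_l = J_l ** (-1.0 / d)   # interaction radius at level l
        total += J_l**2 * r_l**d  # cost of the level-l kernel matrix
    return total

# Doubling J doubles the cost (up to the geometric tail), i.e. linear scaling.
c1 = multipole_cost(2**10, 5)
c2 = multipole_cost(2**11, 5)
print(c1, c2, c2 / c1)
```

With these choices each level contributes exactly $J_\ell$ operations, so the geometric series sums to less than $J$.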
4.4Fourier Neural Operator (FNO)
Instead of working with a kernel directly on the domain $D$, we may consider its representation in Fourier space and directly parameterize it there. This allows us to utilize Fast Fourier Transform (FFT) methods to compute the action of the kernel integral operator (13) with almost linear complexity. A similar idea was used in (Nelsen and Stuart, 2021) to construct random features in function space. The method we outline was first described in Li et al. (2020a) and is termed the Fourier Neural Operator (FNO). We note that the theory of Section 4 is designed for general kernels and does not apply to the FNO formulation; however, similar universal approximation results were developed for it in (Kovachki et al., 2021) when the input and output spaces are Hilbert spaces. For simplicity, we will assume that $D = \mathbb{T}^d$ is the unit torus and that all functions are complex-valued. Let $\mathcal{F} : L^2(D;\mathbb{C}^n) \to \ell^2(\mathbb{Z}^d;\mathbb{C}^n)$ denote the Fourier transform of a function $v : D \to \mathbb{C}^n$ and $\mathcal{F}^{-1}$ its inverse. For $v \in L^2(D;\mathbb{C}^n)$ and $w \in \ell^2(\mathbb{Z}^d;\mathbb{C}^n)$, we have
$$(\mathcal{F} v)_j(k) = \langle v_j, \psi_k \rangle_{L^2(D;\mathbb{C})}, \qquad j \in \{1,\dots,n\}, \; k \in \mathbb{Z}^d,$$
$$(\mathcal{F}^{-1} w)_j(x) = \sum_{k \in \mathbb{Z}^d} w_j(k)\, \psi_k(x), \qquad j \in \{1,\dots,n\}, \; x \in D$$
where, for each $k \in \mathbb{Z}^d$, we define
$$\psi_k(x) = e^{2\pi i k_1 x_1} \cdots e^{2\pi i k_d x_d}, \qquad x \in D$$
with $i = \sqrt{-1}$ the imaginary unit. By letting $\kappa(x,y) = \kappa(x-y)$ for some $\kappa : D \to \mathbb{C}^{n \times n}$ in (13) and applying the convolution theorem, we find that
$$u(x) = \mathcal{F}^{-1}\big(\mathcal{F}(\kappa) \cdot \mathcal{F}(v)\big)(x), \qquad \forall x \in D.$$
We therefore propose to directly parameterize $\kappa$ by its Fourier coefficients. We write
$$u(x) = \mathcal{F}^{-1}\big(R_\varphi \cdot \mathcal{F}(v)\big)(x), \qquad \forall x \in D \tag{26}$$
where $R_\varphi$ is the Fourier transform of a periodic function $\kappa : D \to \mathbb{C}^{n \times n}$ parameterized by some $\varphi \in \Theta$.
Figure 5: top: the architecture of the neural operator; bottom: the Fourier layer.
(a) The full architecture of the neural operator: start from input $a$. 1. Lift to a higher-dimensional channel space by a neural network $\mathcal{P}$. 2. Apply $T$ (typically $T = 4$) layers of integral operators and activation functions. 3. Project back to the target dimension by a neural network $\mathcal{Q}$. Output $u$. (b) Fourier layer: start from input $v$. On top: apply the Fourier transform $\mathcal{F}$; a linear transform $R$ on the lower Fourier modes, which also filters out the higher modes; then apply the inverse Fourier transform $\mathcal{F}^{-1}$. On the bottom: apply a local linear transform $W$.
For each frequency mode $k \in \mathbb{Z}^d$, we have $(\mathcal{F} v)(k) \in \mathbb{C}^n$ and $R_\varphi(k) \in \mathbb{C}^{n \times n}$. We pick a finite-dimensional parameterization by truncating the Fourier series at a maximal number of modes
$$k_{\max} = |Z_{k_{\max}}| = \big|\{k \in \mathbb{Z}^d : |k_j| \le k_{\max,j}, \;\text{for}\; j = 1,\dots,d\}\big|.$$
This choice improves the empirical performance and the sensitivity of the resulting model with respect to the choice of discretization. We thus parameterize $R_\varphi$ directly as a complex-valued $(k_{\max} \times n \times n)$-tensor comprising the collection of truncated Fourier modes, and therefore drop $\varphi$ from our notation. In the case where $v$ is real-valued and we want $u$ to also be real-valued, we impose that $\kappa$ is real-valued by enforcing conjugate symmetry in the parameterization, i.e.
$$R(-k)_{i,j} = R^*(k)_{i,j}, \qquad \forall k \in Z_{k_{\max}}, \; i = 1,\dots,n, \; j = 1,\dots,n.$$
We note that the set $Z_{k_{\max}}$ is not the canonical choice for the low-frequency modes of $v$. Indeed, the low-frequency modes are usually defined by placing an upper bound on the $\ell^1$-norm of $k \in \mathbb{Z}^d$. We choose $Z_{k_{\max}}$ as above since it allows for an efficient implementation. Figure 5 gives a pictorial representation of an entire Neural Operator architecture employing Fourier layers.
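The spectral convolution (26) with mode truncation can be sketched in a few lines of numpy for $d = 1$ and a single channel ($n = 1$). Here `R` is a random, untrained complex weight vector standing in for the learned parameters; using `rfft`/`irfft` on real input enforces the conjugate symmetry discussed above automatically:

```python
import numpy as np

# Minimal sketch of the Fourier layer (26) for d = 1, n = 1:
# u = F^{-1}( R . F(v) ), keeping only the k_max lowest modes.
# R is a random placeholder for the learned weights.
rng = np.random.default_rng(0)
J, k_max = 64, 8
x = np.linspace(0, 1, J, endpoint=False)
v = np.sin(2 * np.pi * x) + 0.5 * np.cos(4 * np.pi * x)

R = rng.standard_normal(k_max) + 1j * rng.standard_normal(k_max)

v_hat = np.fft.rfft(v)             # J//2 + 1 modes for real-valued input
u_hat = np.zeros_like(v_hat)
u_hat[:k_max] = R * v_hat[:k_max]  # truncate and multiply, as in (27)
u = np.fft.irfft(u_hat, n=J)       # real-valued output by construction
print(u.shape)
```

Because the weights live in Fourier space, the same `R` can be applied to `v` sampled at any resolution `J`, which is the discretization invariance discussed below.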
The Discrete Case and the FFT.
Assuming the domain $D$ is discretized with $J \in \mathbb{N}$ points, we can treat $v \in \mathbb{C}^{J \times n}$ and $\mathcal{F}(v) \in \mathbb{C}^{J \times n}$. Since we convolve $v$ with a function which only has $k_{\max}$ Fourier modes, we may simply truncate the higher modes and treat $\mathcal{F}(v) \in \mathbb{C}^{k_{\max} \times n}$. Multiplication by the weight tensor $R \in \mathbb{C}^{k_{\max} \times n \times n}$ is then
$$\big(R \cdot (\mathcal{F} v)\big)_{k,i} = \sum_{j=1}^{n} R_{k,i,j}\, (\mathcal{F} v)_{k,j}, \qquad k = 1,\dots,k_{\max}, \; i = 1,\dots,n. \tag{27}$$
When the discretization is uniform with resolution $s_1 \times \cdots \times s_d = J$, $\mathcal{F}$ can be replaced by the Fast Fourier Transform. For $v \in \mathbb{C}^{J \times n}$, $k = (k_1,\dots,k_d) \in \mathbb{Z}_{s_1} \times \cdots \times \mathbb{Z}_{s_d}$, and $x = (x_1,\dots,x_d) \in D$, the FFT $\hat{\mathcal{F}}$ and its inverse $\hat{\mathcal{F}}^{-1}$ are defined as
$$(\hat{\mathcal{F}} v)_l(k) = \sum_{x_1=0}^{s_1-1} \cdots \sum_{x_d=0}^{s_d-1} v_l(x_1,\dots,x_d)\, e^{-2\pi i \sum_{j=1}^{d} x_j k_j / s_j},$$
$$(\hat{\mathcal{F}}^{-1} v)_l(x) = \sum_{k_1=0}^{s_1-1} \cdots \sum_{k_d=0}^{s_d-1} v_l(k_1,\dots,k_d)\, e^{2\pi i \sum_{j=1}^{d} x_j k_j / s_j}$$
for $l = 1,\dots,n$. In this case, the set of truncated modes becomes
$$Z_{k_{\max}} = \big\{(k_1,\dots,k_d) \in \mathbb{Z}_{s_1} \times \cdots \times \mathbb{Z}_{s_d} \;:\; k_j \le k_{\max,j} \;\text{or}\; s_j - k_j \le k_{\max,j}, \;\text{for}\; j = 1,\dots,d\big\}.$$
When implemented, $R$ is treated as a $(s_1 \times \cdots \times s_d \times n \times n)$-tensor, and the above definition of $Z_{k_{\max}}$ corresponds to the "corners" of $R$, which allows for a straightforward parallel implementation of (27) via matrix-vector multiplication. In practice, we have found that choosing $k_{\max,j}$ roughly between $1/3$ and $2/3$ of the maximum number of Fourier modes in the Fast Fourier Transform of the grid evaluation of the input function provides desirable performance. In our empirical studies, we set $k_{\max,j} = 12$, which yields $k_{\max} = 12^d$ parameters per channel, and find this to be sufficient for all the tasks that we consider.
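The "corner" index set $Z_{k_{\max}}$ is easy to materialize as a boolean mask over the FFT output; the sketch below (names are ours, not from the paper's code) builds it for a 2D grid, where the four corners of the spectrum hold the low-frequency modes:

```python
import numpy as np

# Sketch: boolean mask for the truncated mode set Z_{k_max} of an
# (s1 x s2) FFT, i.e. the "corners" where each k_j is within k_max
# of 0 or of s_j. Applying the weight tensor on this mask implements (27).
def corner_mask(s1, s2, k_max):
    # wrap-around distance of each mode index to frequency zero
    k1 = np.minimum(np.arange(s1), s1 - np.arange(s1))
    k2 = np.minimum(np.arange(s2), s2 - np.arange(s2))
    return (k1[:, None] <= k_max) & (k2[None, :] <= k_max)

mask = corner_mask(16, 16, 3)
print(mask.sum())  # (2*3+1)^2 = 49 retained modes
```

Indexing the weight tensor only on this mask is what makes the per-layer cost independent of the full resolution $s_1 \times \cdots \times s_d$.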
Choices for $R$.
In general, $R$ can be defined to depend on $(\mathcal{F} a)$, the Fourier transform of the input $a \in \mathcal{A}$, to parallel our construction (8). Indeed, we can define $R_\varphi : \mathbb{Z}^d \times \mathbb{C}^{d_a} \to \mathbb{C}^{n \times n}$ as a parametric function that maps $\big(k, (\mathcal{F} a)(k)\big)$ to the values of the appropriate Fourier modes. We have experimented with the following parameterizations of $R_\varphi$:

• Direct. Define the parameters $\varphi_k \in \mathbb{C}^{n \times n}$ for each wave number $k$:
$$R_\varphi\big(k, (\mathcal{F} a)(k)\big) := \varphi_k.$$

• Linear. Define the parameters $\varphi_k^1 \in \mathbb{C}^{n \times n \times d_a}$, $\varphi_k^2 \in \mathbb{C}^{n \times n}$ for each wave number $k$:
$$R_\varphi\big(k, (\mathcal{F} a)(k)\big) := \varphi_k^1 (\mathcal{F} a)(k) + \varphi_k^2.$$

• Feed-forward neural network. Let $\Phi_\varphi : \mathbb{Z}^d \times \mathbb{C}^{d_a} \to \mathbb{C}^{n \times n}$ be a neural network with parameters $\varphi$:
$$R_\varphi\big(k, (\mathcal{F} a)(k)\big) := \Phi_\varphi\big(k, (\mathcal{F} a)(k)\big).$$
We find that the linear parameterization performs similarly to the direct parameterization above; however, it is less efficient both in terms of computational complexity and the number of parameters required. The feed-forward neural network parameterization, on the other hand, performs worse. This is likely due to the discrete structure of the space $\mathbb{Z}^d$; numerical evidence suggests that neural networks are not adept at handling this structure. Our experiments in this work focus on the direct parameterization presented above.
Invariance to Discretization.
The Fourier layers are discretization-invariant because they can learn from and evaluate functions which are discretized in an arbitrary way. Since the parameters are learned directly in Fourier space, resolving the functions in physical space simply amounts to projecting onto the basis elements $e^{2\pi i \langle x, k \rangle}$, which are well-defined everywhere on $\mathbb{R}^d$.
Quasi-linear Complexity.
The weight tensor $R$ contains $k_{\max} < J$ modes, so the inner multiplication has complexity $O(k_{\max})$. Therefore, the majority of the computational cost lies in computing the Fourier transform $\mathcal{F}(v)$ and its inverse. A generic Fourier transform has complexity $O(J^2)$; however, since we truncate the series, the complexity is in fact $O(J k_{\max})$, while the FFT has complexity $O(J \log J)$. Generally, we have found using FFTs to be very efficient; however, a uniform discretization is required.
Non-uniform and Non-periodic Geometry.
The Fourier neural operator model is defined through Fourier transform operations accompanied by local residual operations and, potentially, additive bias function terms. These operations are defined on general geometries, function spaces, and choices of discretization; they are not limited to rectangular domains, periodic functions, or uniform grids. In this paper, we instantiate these operations on uniform grids and periodic functions in order to develop fast implementations that enjoy spectral convergence and utilize methods such as the fast Fourier transform. In order to maintain a fast and memory-efficient method, our implementation of the Fourier neural operator relies on the fast Fourier transform, which is only defined on uniform mesh discretizations of $D = \mathbb{T}^d$, or for functions on the square satisfying homogeneous Dirichlet (fast Fourier sine transform) or homogeneous Neumann (fast Fourier cosine transform) boundary conditions. However, the fast implementation of the Fourier neural operator can be applied in more general geometries via Fourier continuations. Given any compact manifold $D = \mathcal{M}$, we can always embed it into a periodic cube (torus),
$$i : \mathcal{M} \to \mathbb{T}^d,$$
where the regular FFT can be applied. Conventionally, in numerical analysis applications, the embedding $i$ is defined through a continuous extension by fitting polynomials (Bruno et al., 2007). In the Fourier neural operator, however, the idea can be applied simply by padding the input with zeros. The loss is computed only on the original space during training. The Fourier neural operator will automatically generate a smooth extension to the padded domain in the output space.
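The zero-padding trick can be sketched concretely: pad the non-periodic input into a larger (implicitly periodic) domain, apply the FFT-based operation there, and crop back to the original domain. The spectral weights below are a placeholder low-pass filter rather than a trained Fourier layer; `padded_fourier_op` is a name we introduce for illustration:

```python
import numpy as np

# Sketch of the zero-padding ("Fourier continuation") trick for a
# non-periodic 1D input: pad, apply an FFT-based operation, crop back.
def padded_fourier_op(v, pad, k_max):
    vp = np.pad(v, (0, pad))       # embed into a larger periodic domain
    v_hat = np.fft.rfft(vp)
    v_hat[k_max:] = 0              # placeholder spectral operation
    up = np.fft.irfft(v_hat, n=vp.size)
    return up[: v.size]            # keep only the original domain

v = np.linspace(0.0, 1.0, 32)      # non-periodic: v(0) != v(1)
u = padded_fourier_op(v, pad=8, k_max=6)
print(u.shape)
```

During training only the cropped output enters the loss, so the network is free to produce whatever smooth extension it likes on the padded region.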
4.5Summary
We summarize the main computational approaches presented in this section and their complexity:
• GNO: Subsample $J'$ points from the $J$-point discretization and compute the truncated integral
$$u(x) = \int_{B(x,r)} \kappa(x,y)\, v(y)\, \mathrm{d}y \tag{28}$$
at $O(J J')$ complexity.

• LNO: Decompose the kernel function into a tensor product form and compute
$$u(x) = \sum_{j=1}^{r} \langle \psi^{(j)}, v \rangle\, \varphi^{(j)}(x) \tag{29}$$
at $O(J)$ complexity.

• MGNO: Compute a multi-scale decomposition of the kernel
$$K = K_{1,1} + K_{1,2} K_{2,2} K_{2,1} + K_{1,2} K_{2,3} K_{3,3} K_{3,2} K_{2,1} + \cdots,$$
$$u(x) = (K v)(x) \tag{30}$$
at $O(J)$ complexity.

• FNO: Parameterize the kernel in the Fourier domain and compute it using the FFT
$$u(x) = \mathcal{F}^{-1}\big(R_\varphi \cdot \mathcal{F}(v)\big)(x) \tag{31}$$
at $O(J \log J)$ complexity.
5Neural Operators and Other Deep Learning Models
In this section, we discuss recent related methods, in particular DeepONets, and demonstrate that their architectures are subsumed by generic neural operators when neural operators are parametrized inconsistently (Section 5.1). When only applied and queried on fixed grids, we show that neural operator architectures subsume standard neural networks and, furthermore, we show how transformers are special cases of neural operators (Section 5.2).
5.1DeepONets
We will now draw a parallel between the recently proposed DeepONet architecture in Lu et al. (2019), a map from finite-dimensional spaces to function spaces, and the neural operator framework. We will show that if we use a particular, point-wise parameterization of the first kernel in a NO and discretize the integral operator, we obtain a DeepONet. However, such a parameterization breaks the notion of discretization invariance because the number of parameters depends on the discretization of the input function. Therefore such a model cannot be applied to arbitrarily discretized functions and its number of parameters goes to infinity as we take the limit to the continuum. This phenomenon is similar to our discussion in subsection 4.1 where a NO parametrization which is inconsistent in function space and breaks discretization invariance yields a CNN. We propose a modification to the DeepONet architecture, based on the idea of the LNO, which addresses this issue and gives a discretization invariant neural operator.
Proposition 5
A neural operator with a point-wise parameterized first kernel and discretized integral operators yields a DeepONet.
Proof We work with (11) where we choose $W_0 = 0$ and denote $b_0$ by $b$. For simplicity, we will consider only real-valued functions, i.e. $d_a = d_u = 1$, and set $d_{v_0} = d_{v_1} = n$ and $d_{v_2} = p$ for some $n, p \in \mathbb{N}$. Define $\mathcal{P} : \mathbb{R} \to \mathbb{R}^n$ by $\mathcal{P}(x) = (x, \dots, x)$ and $\mathcal{Q} : \mathbb{R}^p \to \mathbb{R}$ by $\mathcal{Q}(x) = x_1 + \cdots + x_p$. Furthermore, let $\kappa^{(1)} : D' \times D \to \mathbb{R}^{p \times n}$ be defined by components $\kappa^{(1)}_{lj} : D' \times D \to \mathbb{R}$ for $l = 1, \dots, p$ and $j = 1, \dots, n$. Similarly, let $\kappa^{(0)} : D \times D \to \mathbb{R}^{n \times n}$ be given as $\kappa^{(0)}(x,y) = \mathrm{diag}\big(\kappa^{(0)}_1(x,y), \dots, \kappa^{(0)}_n(x,y)\big)$ for some $\kappa^{(0)}_1, \dots, \kappa^{(0)}_n : D \times D \to \mathbb{R}$. Then (11) becomes
$$(\mathcal{G}_\theta(a))(x) = \sum_{l=1}^{p} \sum_{j=1}^{n} \int_D \kappa^{(1)}_{lj}(x,y)\, \sigma\Big(\int_D \kappa^{(0)}_j(y,z)\, a(z)\, \mathrm{d}z + b_j(y)\Big)\, \mathrm{d}y$$
where $b(y) = (b_1(y), \dots, b_n(y))$ for some $b_1, \dots, b_n : D \to \mathbb{R}$. Let $x_1, \dots, x_q \in D$ be the points at which the input function $a$ is evaluated and denote by $\tilde{a} = (a(x_1), \dots, a(x_q)) \in \mathbb{R}^q$ the vector of evaluations. Choose $\kappa^{(0)}_j(y,z) = \mathbb{1}(y)\, w_j(z)$ for some $w_1, \dots, w_n : D \to \mathbb{R}$, where $\mathbb{1}$ denotes the constant function taking the value one. Let
$$w_j(x_m) = \frac{q}{|D|}\, \tilde{w}_{jm}$$
for $j = 1, \dots, n$ and $m = 1, \dots, q$, where $\tilde{w}_{jm} \in \mathbb{R}$ are some constants. Furthermore, let $b_j(y) = \tilde{b}_j \mathbb{1}(y)$ for some constants $\tilde{b}_j \in \mathbb{R}$. Then the Monte Carlo approximation of the inner integral yields
$$(\mathcal{G}_\theta(a))(x) = \sum_{l=1}^{p} \sum_{j=1}^{n} \int_D \kappa^{(1)}_{lj}(x,y)\, \sigma\big(\langle \tilde{w}_j, \tilde{a} \rangle_{\mathbb{R}^q} + \tilde{b}_j\big)\, \mathbb{1}(y)\, \mathrm{d}y$$
where $\tilde{w}_j = (\tilde{w}_{j1}, \dots, \tilde{w}_{jq})$. Choose $\kappa^{(1)}_{lj}(x,y) = (\tilde{c}_{lj}/|D|)\, \psi_l(x)\, \mathbb{1}(y)$ for some constants $\tilde{c}_{lj} \in \mathbb{R}$ and functions $\psi_1, \dots, \psi_p : D' \to \mathbb{R}$. Then we obtain
$$(\mathcal{G}_\theta(a))(x) = \sum_{l=1}^{p} \Big(\sum_{j=1}^{n} \tilde{c}_{lj}\, \sigma\big(\langle \tilde{w}_j, \tilde{a} \rangle_{\mathbb{R}^q} + \tilde{b}_j\big)\Big)\, \psi_l(x) = \sum_{l=1}^{p} G_l(\tilde{a})\, \psi_l(x) \tag{32}$$
where $G_l : \mathbb{R}^q \to \mathbb{R}$ can be viewed as the components of a single-hidden-layer neural network $G : \mathbb{R}^q \to \mathbb{R}^p$ with parameters $\tilde{w}_{jm}$, $\tilde{b}_j$, $\tilde{c}_{lj}$. The set of maps $\psi_1, \dots, \psi_p$ forms the trunk net while $G$ is the branch net of a DeepONet. Our construction above can clearly be generalized to yield arbitrarily deep branch nets by adding more kernel integral layers and, similarly, the trunk net can be made arbitrarily deep by parameterizing each $\psi_l$ as a deep neural network.
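The resulting branch-trunk form (32) can be sketched directly with untrained random weights. Here the branch net $G$ is a single-hidden-layer network on the sensor values $\tilde{a}$, and the trunk functions $\psi_l$ are an arbitrary parametric family (a cosine family in this illustration; all names and sizes are ours):

```python
import numpy as np

# Sketch of the DeepONet form (32) with untrained random weights:
# a branch net G on point evaluations a~ = (a(x_1),...,a(x_q)),
# p trunk functions psi_l, and output sum_l G_l(a~) psi_l(x).
rng = np.random.default_rng(0)
q, n, p = 16, 32, 8                     # sensors, hidden width, basis size

W = rng.standard_normal((n, q))         # branch hidden layer
b = rng.standard_normal(n)
C = rng.standard_normal((p, n))         # branch output layer

def branch(a_tilde):
    return C @ np.tanh(W @ a_tilde + b)  # G : R^q -> R^p

def trunk(x):
    return np.cos(np.pi * np.arange(p) * x)  # psi_l(x)

xs = np.linspace(0, 1, q)
a_tilde = np.sin(2 * np.pi * xs)         # input function sampled at the sensors
u = lambda x: branch(a_tilde) @ trunk(x)  # (G_theta(a))(x)
print(u(0.5))
```

Note the output lies in $\mathrm{span}\{\psi_1, \dots, \psi_p\}$ for every input, which is exactly the linear-approximation structure discussed later in this section.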
Since the mappings $w_1, \dots, w_n$ are point-wise parametrized based on the input values $\tilde{a}$, it is clear that the construction in the above proof is not discretization invariant. In order to make this model a discretization-invariant neural operator, we propose DeepONet-Operator where, for each $j$, we replace the inner product in the finite-dimensional space, $\langle \tilde{w}_j, \tilde{a} \rangle_{\mathbb{R}^q}$, with an appropriate inner product in function space, $\langle w_j, a \rangle$:
$$(\mathcal{G}_\theta(a))(x) = \sum_{l=1}^{p} \Big(\sum_{j=1}^{n} \tilde{c}_{lj}\, \sigma\big(\langle w_j, a \rangle + \tilde{b}_j\big)\Big)\, \psi_l(x). \tag{33}$$
This operation is a projection of the function $a$ onto $w_j$. Parametrizing each $w_j$ by a neural network makes DeepONet-Operator a discretization-invariant model.

There are other ways in which the issue can be resolved for DeepONets: for example, by fixing the set of points on which the input function is evaluated independently of its discretization, by taking local spatial averages as in (Lanthaler et al., 2021), or, more generally, by taking a set of linear functionals on $a$ as input to a finite-dimensional branch neural network (a generalization of DeepONet-Operator) as in the specific PCA-based variant of DeepONet in (De Hoop et al., 2022). We demonstrate numerically in Section 7 that, when applied in the standard way, the error incurred by DeepONets grows with the discretization of $a$, while it remains constant for neural operators.
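The key point of the function-space inner product is that its discretization via quadrature is stable under grid refinement. The sketch below (our own illustration, with hypothetical functions `a` and `w`) approximates $\langle w, a \rangle$ by the midpoint rule at two resolutions and shows the value barely moves:

```python
import numpy as np

# Sketch of the DeepONet-Operator idea (33): approximate the L^2 inner
# product <w, a> by quadrature. The value is nearly independent of the
# grid resolution, unlike a fixed-sensor dot product <w~, a~>.
def functional(a_fn, w_fn, J):
    x = np.linspace(0, 1, J, endpoint=False) + 0.5 / J  # midpoint rule
    return np.mean(w_fn(x) * a_fn(x))                   # ~ integral over (0,1)

a = lambda x: np.sin(2 * np.pi * x)   # placeholder input function
w = lambda x: np.exp(-x)              # placeholder learned weight function

coarse = functional(a, w, 100)
fine = functional(a, w, 1000)
print(coarse, fine)
```

Refining the discretization tenfold changes the functional only by the quadrature error, which is what discretization invariance requires.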
Linear Approximation and Nonlinear Approximation.
We point out that parametrizations of the form (32) fall within the class of linear approximation methods, since the nonlinear space $\mathcal{G}^\dagger(\mathcal{A})$ is approximated by the linear space $\mathrm{span}\{\psi_1, \dots, \psi_p\}$ (DeVore, 1998). The quality of the best possible linear approximation to a nonlinear space is given by the Kolmogorov $n$-width, where $n$ is the dimension of the linear space used in the approximation (Pinkus, 1985). The rate of decay of the $n$-width as a function of $n$ quantifies how well the linear space approximates the nonlinear one. It is well known that for some problems, such as the flow maps of advection-dominated PDEs, the $n$-widths decay very slowly; hence a very large $n$ is needed for a good approximation (Cohen and DeVore, 2015). This can be limiting in practice, as more parameters are needed in order to describe more basis functions $\psi_l$, and therefore more data is needed to fit these parameters.
On the other hand, we point out that parametrizations of the form (6), and the particular case (11), constitute (in general) a form of nonlinear approximation. The benefits of nonlinear approximation are well understood in the setting of function approximation, see e.g. (DeVore, 1998); however, the theory for the operator setting is still in its infancy (Bonito et al., 2020; Cohen et al., 2020). We observe numerically in Section 7 that nonlinear parametrizations such as (11) outperform linear ones such as DeepONets or the low-rank method introduced in Section 4.2 when implemented with similar numbers of parameters. We acknowledge, however, that the theory presented in Section 8 is based on a reduction to linear approximation and therefore does not capture the benefits of nonlinear approximation. Furthermore, in practice, we have found that architectures deeper than (11) (usually four to five layers in the experiments of Section 7) perform better. The benefits of depth are likewise not captured by our analysis in Section 8. We leave further theoretical study of these approximation properties as an interesting avenue of investigation for future work.
Function Representation.
An important difference between neural operators, introduced here, PCA-based operator approximation, introduced in Bhattacharya et al. (2020) and DeepONets, introduced in Lu et al. (2019), is the manner in which the output function space is finite-dimensionalized. Neural operators as implemented in this paper typically use the same finite-dimensionalization in both the input and output function spaces; however different variants of the neural operator idea use different finite-dimensionalizations. As discussed in Section 4, the GNO and MGNO are finite-dimensionalized using pointwise values as the nodes of graphs; the FNO is finite-dimensionalized in Fourier space, requiring finite-dimensionalization on a uniform grid in real space; the Low-rank neural operator is finite-dimensionalized on a product space formed from the Barron space of neural networks. The PCA approach finite-dimensionalizes in the span of PCA modes. DeepONet, on the other hand, uses different input and output space finite-dimensionalizations; in its basic form it uses pointwise (grid) values on the input (branch net) whilst its output (trunk net) is represented as a function in Barron space. There also exist POD-DeepONet variants that finite-dimensionalize the output in the span of PCA modes Lu et al. (2021b), bringing them closer to the method introduced in Bhattacharya et al. (2020), but with a different finite-dimensionalization of the input space.
As is widely quoted, "all models are wrong, but some are useful" Box (1976). For operator approximation, each finite-dimensionalization has its own induced biases and limitations, and therefore works best on a subset of problems. Finite-dimensionalization introduces a trade-off between the flexibility and the representation power of the resulting approximate architecture. The Barron space representation (Low-rank operator and DeepONet) is usually the most generic and flexible as it is widely applicable. However, this can lead to induced biases and reduced representation power on specific problems; in practice, DeepONet sometimes needs problem-specific feature engineering and architecture choices, as studied in Lu et al. (2021b). We conjecture that these problem-specific features compensate for the induced bias and reduced representation power that the basic form of the method (Lu et al., 2019) sometimes exhibits. The PCA (PCA operator, POD-DeepONet) and graph-based (GNO, MGNO) discretizations are also generic, but more specific than the DeepONet representation; for this reason POD-DeepONet can outperform DeepONet on some problems (Lu et al., 2021b). On the other hand, the uniform grid-based representation of FNO is the most specific of all the operator approximators considered in this paper: in its basic form it applies by discretizing the input functions, assumed to be specified on a periodic domain, on a uniform grid. As shown in Section 7, FNO usually works out of the box on such problems. But, as a trade-off, it requires substantial additional treatment to work well on non-uniform geometries, such as extension, interpolation (explored in Lu et al. (2021b)), and Fourier continuation (Bruno et al., 2007).
5.2Transformers as a Special Case of Neural Operators
We will now show that our neural operator framework can be viewed as a continuum generalization to the popular transformer architecture (Vaswani et al., 2017) which has been extremely successful in natural language processing tasks (Devlin et al., 2018; Brown et al., 2020) and, more recently, is becoming a popular choice in computer vision tasks (Dosovitskiy et al., 2020). The parallel stems from the fact that we can view sequences of arbitrary length as arbitrary discretizations of functions. Indeed, in the context of natural language processing, we may think of a sentence as a โwordโ-valued function on, for example, the domain [ 0 , 1 ] . Assuming our function is linked to a sentence with a fixed semantic meaning, adding or removing words from the sentence simply corresponds to refining or coarsening the discretization of [ 0 , 1 ] . We will now make this intuition precise in the proof of the following statement.
Proposition 6
The attention mechanism in transformer models is a special case of a neural operator layer.
Proof We will show that by making a particular choice of the nonlinear integral kernel operator (9) and discretizing the integral by a Monte Carlo approximation, a neural operator layer reduces to a pre-normalized, single-headed attention transformer block as originally proposed in (Vaswani et al., 2017). For simplicity, we assume $d_{v_t} = n \in \mathbb{N}$ and $D_t = D$ for every $t = 0, \dots, T$, that the bias term is zero, and that $W = I$ is the identity. Furthermore, to simplify notation, we will drop the layer index $t$ from (10) and, employing (9), obtain
$$u(x) = \sigma\Big(v(x) + \int_D \kappa_v\big(x, y, v(x), v(y)\big)\, v(y)\, \mathrm{d}y\Big), \qquad \forall x \in D, \tag{34}$$
a single layer of the neural operator, where $v : D \to \mathbb{R}^n$ is the input function to the layer and $u : D \to \mathbb{R}^n$ denotes the output function. We use the notation $\kappa_v$ to indicate that the kernel depends on the entirety of the function $v$ as well as on its pointwise values $v(x)$ and $v(y)$. While this is not explicitly done in (9), it is a straightforward generalization. We now pick a specific form for the kernel: in particular, we assume $\kappa_v : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}^{n \times n}$ does not explicitly depend on the spatial variables $(x,y)$ but only on the input pair $(v(x), v(y))$. Furthermore, we let
$$\kappa_v\big(v(x), v(y)\big) = g_v\big(v(x), v(y)\big)\, E$$
where $E \in \mathbb{R}^{n \times n}$ is a matrix of free parameters, i.e. its entries are concatenated in $\theta$ so they are learned, and $g_v : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ is defined as
$$g_v\big(v(x), v(y)\big) = \Big(\int_D \exp\Big(\frac{\langle A v(s), B v(y) \rangle}{m}\Big)\, \mathrm{d}s\Big)^{-1} \exp\Big(\frac{\langle A v(x), B v(y) \rangle}{m}\Big).$$
Here $A, B \in \mathbb{R}^{m \times n}$ are again matrices of free parameters, $m \in \mathbb{N}$ is a hyperparameter, and $\langle \cdot, \cdot \rangle$ is the Euclidean inner product on $\mathbb{R}^m$. Putting this together, we find that (34) becomes
$$u(x) = \sigma\Big(v(x) + \int_D \frac{\exp\big(\langle A v(x), B v(y) \rangle / m\big)}{\int_D \exp\big(\langle A v(s), B v(y) \rangle / m\big)\, \mathrm{d}s}\, E\, v(y)\, \mathrm{d}y\Big), \qquad \forall x \in D. \tag{35}$$
Equation (35) can be thought of as the continuum limit of a transformer block. To see this, we will discretize to obtain the usual transformer block.
To that end, let $\{x_1, \dots, x_k\} \subset D$ be a uniformly-sampled, $k$-point discretization of $D$ and denote $v_j = v(x_j) \in \mathbb{R}^n$ and $u_j = u(x_j) \in \mathbb{R}^n$ for $j = 1, \dots, k$. Approximating the inner integral in (35) by Monte Carlo, we have
$$\int_D \exp\Big(\frac{\langle A v(s), B v(y) \rangle}{m}\Big)\, \mathrm{d}s \approx \frac{|D|}{k} \sum_{l=1}^{k} \exp\Big(\frac{\langle A v_l, B v(y) \rangle}{m}\Big).$$
Plugging this into (35) and using the same approximation for the outer integral yields
$$u_j = \sigma\Big(v_j + \sum_{l=1}^{k} \frac{\exp\big(\langle A v_j, B v_l \rangle / m\big)}{\sum_{p=1}^{k} \exp\big(\langle A v_p, B v_l \rangle / m\big)}\, E\, v_l\Big), \qquad j = 1, \dots, k. \tag{36}$$
Equation (36) can be viewed as a Nyström approximation of (35). Define the vectors $z_j \in \mathbb{R}^k$ by
$$z_j = \frac{1}{m}\big(\langle A v_1, B v_j \rangle, \dots, \langle A v_k, B v_j \rangle\big), \qquad j = 1, \dots, k.$$
Define $S : \mathbb{R}^k \to \Delta_k$, where $\Delta_k$ denotes the $k$-dimensional probability simplex, as the softmax function
$$S(w) = \Big(\frac{\exp(w_1)}{\sum_{p=1}^{k} \exp(w_p)}, \dots, \frac{\exp(w_k)}{\sum_{p=1}^{k} \exp(w_p)}\Big), \qquad \forall w \in \mathbb{R}^k.$$
Then we may re-write (36) as
$$u_j = \sigma\Big(v_j + \sum_{l=1}^{k} S_j(z_l)\, E\, v_l\Big), \qquad j = 1, \dots, k.$$
Furthermore, if we re-parametrize $E = E_{\mathrm{out}} E_{\mathrm{val}}$, where $E_{\mathrm{out}} \in \mathbb{R}^{n \times m}$ and $E_{\mathrm{val}} \in \mathbb{R}^{m \times n}$ are matrices of free parameters, we obtain
$$u_j = \sigma\Big(v_j + E_{\mathrm{out}} \sum_{l=1}^{k} S_j(z_l)\, E_{\mathrm{val}}\, v_l\Big), \qquad j = 1, \dots, k,$$
which is precisely the single-headed attention transformer block with no layer normalization applied inside the activation function. In the language of transformers, the matrices $A$, $B$, and $E_{\mathrm{val}}$ correspond to the queries, keys, and values functions respectively. We note that tricks such as layer normalization (Ba et al., 2016) can be adapted in a straightforward manner to the continuum setting and incorporated into (35). Furthermore, multi-headed self-attention can be realized by simply allowing $\kappa_v$ to be a sum over multiple functions of the form $g_v E$, all of which have separate trainable parameters. Including such generalizations yields the continuum limit of the transformer as implemented in practice. We do not pursue this here as our goal is simply to draw a parallel between the two methods.
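The discretized layer (36) is a few lines of numpy. The sketch below uses untrained random placeholders for $A$, $B$, and $E$, takes $\sigma = \tanh$, and normalizes the kernel weights over the first score index exactly as in the construction above:

```python
import numpy as np

# Sketch of the discretized attention layer (36) with untrained weights:
# u_j = sigma(v_j + sum_l softmax_j(z_l) E v_l). A, B, E are placeholders.
rng = np.random.default_rng(0)
k, n, m = 10, 4, 8                 # points, channels, score dimension
V = rng.standard_normal((k, n))    # v_1, ..., v_k as rows
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, n))
E = rng.standard_normal((n, n))

scores = (V @ A.T) @ (B @ V.T) / m           # scores[p, l] = <A v_p, B v_l> / m
weights = np.exp(scores)
weights /= weights.sum(axis=0, keepdims=True)  # softmax over the first index, as in (36)

U = np.tanh(V + weights @ V @ E.T)           # sigma = tanh as the activation
print(U.shape)
```

Because the sum over $l$ ranges over all sampled points, refining the discretization simply adds more Monte Carlo samples of the same integral, which is the continuum view of attention described above.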
Even though transformers are special cases of neural operators, the standard attention mechanism is memory- and computation-intensive, as seen in Section 6, compared to the neural operator architectures developed here, (7)-(9). The high computational complexity of transformers is evident in (35), since we must evaluate a nested integral of $v$ for each $x \in D$. Recently, efficient attention mechanisms have been explored, e.g. long-short attention (Zhu et al., 2021) and adaptive FNO-based attention mechanisms (Guibas et al., 2021). However, many of the efficient vision transformer architectures (Choromanski et al., 2020; Dosovitskiy et al., 2020) like ViTs are not special cases of neural operators, since they use CNN layers to generate tokens, which are not discretization invariant.
6Test Problems
A central application of neural operators is learning solution operators defined by parametric partial differential equations. In this section, we define four test problems for which we numerically study the approximation properties of neural operators. To that end, let $(\mathcal{A}, \mathcal{U}, \mathcal{F})$ be a triplet of Banach spaces. The first two problem classes considered are derived from the following general class of PDEs:
$$\mathcal{P}_a u = f \tag{37}$$
where, for every $a \in \mathcal{A}$, $\mathcal{P}_a : \mathcal{U} \to \mathcal{F}$ is a, possibly nonlinear, partial differential operator, and $u \in \mathcal{U}$ corresponds to the solution of the PDE (37) when $f \in \mathcal{F}$ and appropriate boundary conditions are imposed. The second class consists of evolution equations with initial condition $a \in \mathcal{A}$ and solution $u(t) \in \mathcal{U}$ at every time $t > 0$. We seek to learn the map from $a$ to $u := u(T)$ for some fixed time $T > 0$; we will also study maps on paths (time-dependent solutions).
Our goal will be to learn the mappings
$$\mathcal{G}^\dagger : f \mapsto u \qquad \text{or} \qquad \mathcal{G}^\dagger : a \mapsto u;$$
we will study both cases, depending on the test problem considered. We will define a probability measure $\mu$ on $\mathcal{A}$ or $\mathcal{F}$ which will serve to define a model for likely input data. Furthermore, the measure $\mu$ will define a topology on the space of mappings in which $\mathcal{G}^\dagger$ lives, using the Bochner norm (3). We will assume that each of the spaces $(\mathcal{A}, \mathcal{U}, \mathcal{F})$ is a Banach space of functions defined on a bounded domain $D \subset \mathbb{R}^d$. All reported errors will be Monte Carlo estimates of the relative error
$$\mathbb{E}_{a \sim \mu}\, \frac{\|\mathcal{G}^\dagger(a) - \mathcal{G}_\theta(a)\|_{L^2(D)}}{\|\mathcal{G}^\dagger(a)\|_{L^2(D)}}$$
or, equivalently, with $a$ replaced by $f$ in the above display, and under the assumption that $\mathcal{U} \subseteq L^2(D)$. The domain $D$ will be discretized, usually uniformly, with $J \in \mathbb{N}$ points.
6.1Poisson Equation
First we consider the one-dimensional Poisson equation with a zero boundary condition. In particular, (37) takes the form
$$-\frac{\mathrm{d}^2}{\mathrm{d}x^2} u(x) = f(x), \qquad x \in (0,1),$$
$$u(0) = u(1) = 0 \tag{38}$$
for some source function $f : (0,1) \to \mathbb{R}$. In particular, for $D(\mathcal{P}) := H_0^1((0,1);\mathbb{R}) \cap H^2((0,1);\mathbb{R})$, we have $\mathcal{P} : D(\mathcal{P}) \to L^2((0,1);\mathbb{R})$ defined as $-\mathrm{d}^2/\mathrm{d}x^2$, noting that $\mathcal{P}$ has no dependence on any parameter $a \in \mathcal{A}$ in this case. We will consider the weak form of (38) with source function $f \in H^{-1}((0,1);\mathbb{R})$ and therefore the solution operator $\mathcal{G}^\dagger : H^{-1}((0,1);\mathbb{R}) \to H_0^1((0,1);\mathbb{R})$ defined as
$$\mathcal{G}^\dagger : f \mapsto u.$$
We define the probability measure $\mu = N(0, C)$ where
$$C = (\mathcal{P} + I)^{-2},$$
defined through the spectral theory of self-adjoint operators. Since $\mu$ charges a subset of $L^2((0,1);\mathbb{R})$, we will learn $\mathcal{G}^\dagger : L^2((0,1);\mathbb{R}) \to H_0^1((0,1);\mathbb{R})$ in the topology induced by (3).
In this setting, $\mathcal{G}^\dagger$ has a closed-form solution given as
$$\mathcal{G}^\dagger(f) = \int_0^1 G(\cdot, y)\, f(y)\, \mathrm{d}y$$
where
$$G(x,y) = \frac{1}{2}(x + y - |y - x|) - xy, \qquad \forall (x,y) \in [0,1]^2$$
is the Green's function. Note that while $\mathcal{G}^\dagger$ is a linear operator, the Green's function $G$ is nonlinear as a function of its arguments. We will consider only a single layer of (6) with $\sigma_1 = \mathrm{Id}$, $\mathcal{P} = \mathrm{Id}$, $\mathcal{Q} = \mathrm{Id}$, $W_0 = 0$, $b_0 = 0$, and
$$\mathcal{K}_0(f) = \int_0^1 \kappa_\theta(\cdot, y)\, f(y)\, \mathrm{d}y$$
where $\kappa_\theta : \mathbb{R}^2 \to \mathbb{R}$ will be parameterized as a standard neural network with parameters $\theta$.
The purpose of the current example is two-fold. First, we test the efficacy of the neural operator framework in a simple setting where an exact solution is analytically available. Second, we show that, by building in the right inductive bias (in particular, paralleling the form of the Green's function solution), we obtain a model that generalizes outside the distribution $\mu$. That is, once trained, the model will generalize to any $f \in L^2((0,1);\mathbb{R})$ that may be outside the support of $\mu$. For example, as defined, the random variable $f \sim \mu$ is a continuous function; however, if $\kappa_\theta$ approximates the Green's function well, then the model $\mathcal{G}_\theta$ will accurately approximate the solution to (38) even for discontinuous inputs.
To create the dataset used for training, solutions to (38) are obtained by numerical integration using the Green's function on a uniform grid with 85 collocation points. We use $N = 1000$ training examples.
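The data-generation step can be sketched as follows: draw $f$ from the Gaussian measure via a truncated Karhunen-Loève expansion in the Dirichlet eigenbasis, then integrate against the Green's function by quadrature. This is an illustrative reconstruction (all function and variable names are our own, not the authors' code):

```python
import numpy as np

def greens_function(x, y):
    # G(x, y) = (x + y - |y - x|)/2 - x*y on [0, 1]^2
    return 0.5 * (x + y - np.abs(y - x)) - x * y

def sample_source(grid, n_modes=64, rng=None):
    # Draw f ~ N(0, (P + I)^{-2}) via a truncated Karhunen-Loeve expansion
    # in the Dirichlet eigenbasis sqrt(2) sin(pi k x); the coefficients are
    # the square roots (pi^2 k^2 + 1)^{-1} of the covariance eigenvalues.
    rng = np.random.default_rng() if rng is None else rng
    k = np.arange(1, n_modes + 1)
    xi = rng.standard_normal(n_modes)
    coeffs = xi / (np.pi**2 * k**2 + 1.0)
    return (np.sqrt(2.0) * np.sin(np.pi * np.outer(grid, k))) @ coeffs

def solve_poisson(f_vals, grid):
    # u(x) = integral of G(x, y) f(y) dy, via the trapezoidal rule
    h = grid[1] - grid[0]
    w = np.full_like(grid, h)
    w[0] = w[-1] = 0.5 * h
    G = greens_function(grid[:, None], grid[None, :])
    return G @ (w * f_vals)

grid = np.linspace(0.0, 1.0, 85)
f = sample_source(grid, rng=np.random.default_rng(0))
u = solve_poisson(f, grid)
```

On this grid the discrete Laplacian of the computed $u$ reproduces $f$ at the interior nodes, so the training pairs are consistent with (38).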
6.2 Darcy Flow
We consider the steady state of Darcy flow in two dimensions, which is the second-order elliptic equation

$$-\nabla \cdot (a(x) \nabla u(x)) = f(x), \quad x \in D, \qquad u(x) = 0, \quad x \in \partial D \qquad (39)$$

where $D = (0,1)^2$ is the unit square. In this setting $\mathcal{A} = L^\infty(D;\mathbb{R}_+)$, $\mathcal{U} = H^1_0(D;\mathbb{R})$, and $\mathcal{F} = H^{-1}(D;\mathbb{R})$. We fix $f \equiv 1$ and consider the weak form of (39) and therefore the solution operator $\mathcal{G}^\dagger : L^\infty(D;\mathbb{R}_+) \to H^1_0(D;\mathbb{R})$ defined as

$$\mathcal{G}^\dagger : a \mapsto u. \qquad (40)$$
Note that while (39) is a linear PDE, the solution operator $\mathcal{G}^\dagger$ is nonlinear. We define the probability measure $\mu = \psi_\sharp N(0,C)$ as the pushforward of a Gaussian measure under the operator $\psi$, where the covariance of the Gaussian is

$$C = (-\Delta + 9I)^{-2}$$

with $D(-\Delta)$ defined to impose zero Neumann boundary conditions on the Laplacian. We define $\psi$ to be a Nemytskii operator acting on functions, defined through the map $\psi : \mathbb{R} \to \mathbb{R}_+$ given by

$$\psi(x) = \begin{cases} 12, & x \geq 0 \\ 3, & x < 0. \end{cases}$$

The random variable $a \sim \mu$ is a piecewise-constant function with random interfaces given by the underlying Gaussian random field. Such constructions are prototypical models for many physical systems, such as permeability in sub-surface flows and (in a vector generalization) material microstructures in elasticity.
To create the dataset used for training, solutions to (39) are obtained using a second-order finite difference scheme on a uniform $421 \times 421$ grid. All other resolutions are downsampled from this data set. We use $N = 1000$ training examples.
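Sampling an input coefficient from $\mu = \psi_\sharp N(0, (-\Delta + 9I)^{-2})$ can be sketched via a truncated Karhunen-Loève expansion in the cosine (Neumann) eigenbasis of the Laplacian on the unit square, followed by the pointwise map $\psi$. A minimal illustration (eigenfunction normalization constants are omitted, and all names are ours):

```python
import numpy as np

def sample_darcy_coefficient(s=64, n_modes=32, rng=None):
    # Draw a ~ psi_# N(0, (-Lap + 9 I)^{-2}) on an s x s grid. The Neumann
    # Laplacian on (0,1)^2 has eigenfunctions cos(pi k x) cos(pi l y) with
    # eigenvalues pi^2 (k^2 + l^2); we truncate the KL expansion at n_modes.
    rng = np.random.default_rng() if rng is None else rng
    x = np.linspace(0.0, 1.0, s)
    k = np.arange(n_modes)
    phi = np.cos(np.pi * np.outer(x, k))       # 1-d cosine basis, (s, n_modes)
    K, L = np.meshgrid(k, k, indexing="ij")
    # square roots of the covariance eigenvalues: (pi^2 (k^2 + l^2) + 9)^{-1}
    amp = 1.0 / (np.pi**2 * (K**2 + L**2) + 9.0)
    xi = rng.standard_normal((n_modes, n_modes))
    g = phi @ (amp * xi) @ phi.T               # Gaussian field on the grid
    return np.where(g >= 0, 12.0, 3.0)         # Nemytskii map psi

a = sample_darcy_coefficient(s=64, rng=np.random.default_rng(0))
```

The result is a piecewise-constant field taking the two values 12 and 3, with interfaces set by the zero level set of the underlying Gaussian field.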
6.3 Burgers' Equation
We consider the one-dimensional viscous Burgers' equation

$$\partial_t u(x,t) + \frac{1}{2} \partial_x \big(u(x,t)\big)^2 = \nu \, \partial_x^2 u(x,t), \quad x \in (0, 2\pi), \; t \in (0, \infty), \qquad u(x,0) = u_0(x), \quad x \in (0, 2\pi) \qquad (41)$$

with periodic boundary conditions and a fixed viscosity $\nu = 10^{-1}$. Let $\Psi : L^2_{\mathrm{per}}((0,2\pi);\mathbb{R}) \times \mathbb{R}_+ \to H^s_{\mathrm{per}}((0,2\pi);\mathbb{R})$, for any $s > 0$, be the flow map associated to (41), in particular,

$$\Psi(u_0, t) = u(\cdot, t), \qquad t > 0.$$
We consider the solution operator defined by evaluating $\Psi$ at a fixed time. Fix any $s \geq 0$. Then we may define $\mathcal{G}^\dagger : L^2_{\mathrm{per}}((0,2\pi);\mathbb{R}) \to H^s_{\mathrm{per}}((0,2\pi);\mathbb{R})$ by

$$\mathcal{G}^\dagger : u_0 \mapsto \Psi(u_0, 1). \qquad (42)$$
We define the probability measure $\mu = N(0,C)$ where

$$C = 625 \left(-\frac{d^2}{dx^2} + 25 I\right)^{-2}$$

with the domain of the Laplacian defined to impose periodic boundary conditions. We choose the initial condition for (41) by drawing $u_0 \sim \mu$, noting that $\mu$ charges a subset of $L^2_{\mathrm{per}}((0,2\pi);\mathbb{R})$.
To create the dataset used for training, solutions to (41) are obtained using a pseudo-spectral split-step method: the heat-equation part is solved exactly in Fourier space, and the nonlinear part is then advanced using a forward Euler method with a very small time step. We use a uniform spatial grid with $2^{13} = 8192$ collocation points and subsample all other resolutions from this data set. We use $N = 1000$ training examples.
6.4 Navier-Stokes Equation
We consider the two-dimensional Navier-Stokes equation for a viscous, incompressible fluid

$$\partial_t u(x,t) + u(x,t) \cdot \nabla u(x,t) + \nabla p(x,t) = \nu \Delta u(x,t) + f(x), \quad x \in \mathbb{T}^2, \; t \in (0,\infty)$$
$$\nabla \cdot u(x,t) = 0, \quad x \in \mathbb{T}^2, \; t \in [0,\infty)$$
$$u(x,0) = u_0(x), \quad x \in \mathbb{T}^2 \qquad (43)$$

where $\mathbb{T}^2$ is the unit torus, i.e. $[0,1]^2$ equipped with periodic boundary conditions, and $\nu \in \mathbb{R}_+$ is a fixed viscosity. Here $u : \mathbb{T}^2 \times \mathbb{R}_+ \to \mathbb{R}^2$ is the velocity field, $p : \mathbb{T}^2 \times \mathbb{R}_+ \to \mathbb{R}$ is the pressure field, and $f : \mathbb{T}^2 \to \mathbb{R}$ is a fixed forcing function.
Equivalently, we study the vorticity-streamfunction formulation of the equation

$$\partial_t w(x,t) + \nabla^\perp \psi \cdot \nabla w(x,t) = \nu \Delta w(x,t) + f(x), \quad x \in \mathbb{T}^2, \; t \in (0,\infty), \qquad (44a)$$
$$-\Delta \psi = w, \quad x \in \mathbb{T}^2, \; t \in (0,\infty), \qquad (44b)$$
$$w(x,0) = w_0(x), \quad x \in \mathbb{T}^2, \qquad (44c)$$

where $w$ is the out-of-plane component of the vorticity field $\nabla \times u : \mathbb{T}^2 \times \mathbb{R}_+ \to \mathbb{R}^3$. Since, when viewed in three dimensions, $u = (u_1(x_1,x_2), u_2(x_1,x_2), 0)$, it follows that $\nabla \times u = (0,0,w)$. The stream function $\psi$ is related to the velocity by $u = \nabla^\perp \psi$, enforcing the divergence-free condition. Similar considerations as for the curl of $u$ apply to the curl of $\psi$, viewed as the three-dimensional vector field $(0,0,\psi)$; indeed $u = \nabla \times (0,0,\psi)$. Note that in (44b) $\psi$ is undetermined up to an additive constant (irrelevant, since $u$ is computed from $\psi$ by taking a gradient); uniqueness is restored by seeking $\psi$ with spatial mean $0$. We define the forcing term as

$$f(x_1, x_2) = 0.1 \big( \sin(2\pi(x_1 + x_2)) + \cos(2\pi(x_1 + x_2)) \big), \qquad \forall (x_1, x_2) \in \mathbb{T}^2.$$

The corresponding Reynolds number is estimated as $\mathrm{Re} = \sqrt{0.1}/\big(\nu (2\pi)^{3/2}\big)$ (Chandler and Kerswell, 2013). Let $\Psi : L^2(\mathbb{T}^2;\mathbb{R}) \times \mathbb{R}_+ \to H^s(\mathbb{T}^2;\mathbb{R})$, for any $s > 0$, be the flow map associated to (44), in particular,

$$\Psi(w_0, t) = w(\cdot, t), \qquad t > 0.$$

Notice that this is well-defined for any $w_0 \in L^2(\mathbb{T}^2;\mathbb{R})$.
We will define two notions of the solution operator. In the first, we proceed as in the previous examples; in particular, $\mathcal{G}^\dagger : L^2(\mathbb{T}^2;\mathbb{R}) \to H^s(\mathbb{T}^2;\mathbb{R})$ is defined as

$$\mathcal{G}^\dagger : w_0 \mapsto \Psi(w_0, T) \qquad (45)$$

for some fixed $T > 0$. In the second, we map an initial part of the trajectory to a later part of the trajectory. In particular, we define $\mathcal{G}^\dagger : L^2(\mathbb{T}^2;\mathbb{R}) \times C\big((0,10]; H^s(\mathbb{T}^2;\mathbb{R})\big) \to C\big((10,T]; H^s(\mathbb{T}^2;\mathbb{R})\big)$ by

$$\mathcal{G}^\dagger : \big( w_0, \Psi(w_0, t)|_{t \in (0,10]} \big) \mapsto \Psi(w_0, t)|_{t \in (10,T]} \qquad (46)$$

for some fixed $T > 10$. We define the probability measure $\mu = N(0,C)$ where

$$C = 7^{3/2} (-\Delta + 49 I)^{-2.5}$$

with periodic boundary conditions on the Laplacian. We model the initial vorticity $w_0 \sim \mu$ to (44), as $\mu$ charges a subset of $L^2(\mathbb{T}^2;\mathbb{R})$. Its pushforward onto $\Psi(w_0,t)|_{t \in (0,10]}$ is required to define the measure on input space in the second case, defined by (46).
To create the dataset used for training, solutions to (44) are obtained using a pseudo-spectral split-step method where the viscous terms are advanced using a Crank-Nicolson update and the nonlinear and forcing terms are advanced using Heun's method. Dealiasing is used with the $2/3$ rule. For further details on this approach see (Chandler and Kerswell, 2013). Data is obtained on a uniform $256 \times 256$ grid and all other resolutions are subsampled from this data set. We experiment with different viscosities $\nu$, final times $T$, and amounts of training data $N$.
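The time-stepping scheme described above can be sketched in a simplified form (Heun on the nonlinear and forcing terms, Crank-Nicolson on the viscous term, 2/3-rule dealiasing); toy parameters are used and details of the production solver may differ:

```python
import numpy as np

def make_grids(n):
    k = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / n)  # wavenumbers on [0,1]^2
    kx, ky = np.meshgrid(k, k, indexing="ij")
    lap = kx**2 + ky**2
    # inverse Laplacian with the k = 0 mode zeroed: mean-zero stream function
    lap_inv = np.where(lap > 0, 1.0 / np.where(lap > 0, lap, 1.0), 0.0)
    k_cut = (2.0 / 3.0) * np.abs(k).max()         # 2/3-rule dealiasing cutoff
    dealias = (np.abs(kx) < k_cut) & (np.abs(ky) < k_cut)
    return kx, ky, lap, lap_inv, dealias

def nonlinear(w_hat, kx, ky, lap_inv, dealias):
    # -(u . grad w) in Fourier space, with u = perp-grad of the stream
    # function psi, where -Lap psi = w (so psi_hat = w_hat / |k|^2).
    psi_hat = w_hat * lap_inv
    u = np.real(np.fft.ifft2(1j * ky * psi_hat))   # u1 =  d psi / d x2
    v = np.real(np.fft.ifft2(-1j * kx * psi_hat))  # u2 = -d psi / d x1
    wx = np.real(np.fft.ifft2(1j * kx * w_hat))
    wy = np.real(np.fft.ifft2(1j * ky * w_hat))
    return -(np.fft.fft2(u * wx + v * wy) * dealias)

def step(w_hat, f_hat, nu, dt, grids):
    # Heun predictor-corrector on nonlinear + forcing, Crank-Nicolson viscous
    kx, ky, lap, lap_inv, dealias = grids
    denom = 1.0 + 0.5 * nu * dt * lap
    n1 = nonlinear(w_hat, kx, ky, lap_inv, dealias) + f_hat
    w_pred = (w_hat * (1.0 - 0.5 * nu * dt * lap) + dt * n1) / denom
    n2 = nonlinear(w_pred, kx, ky, lap_inv, dealias) + f_hat
    return (w_hat * (1.0 - 0.5 * nu * dt * lap) + dt * 0.5 * (n1 + n2)) / denom

n = 64
grids = make_grids(n)
xs = np.linspace(0.0, 1.0, n, endpoint=False)
X, Y = np.meshgrid(xs, xs, indexing="ij")
f_hat = np.fft.fft2(0.1 * (np.sin(2*np.pi*(X + Y)) + np.cos(2*np.pi*(X + Y))))
w_hat = np.fft.fft2(np.sin(2*np.pi*X) * np.cos(2*np.pi*Y))  # toy initial vorticity
for _ in range(100):
    w_hat = step(w_hat, f_hat, nu=1e-3, dt=1e-3, grids=grids)
w = np.real(np.fft.ifft2(w_hat))
```

The implicit treatment of the stiff viscous term removes the diffusive time-step restriction; only the advective CFL condition constrains $\Delta t$.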
6.4.1 Bayesian Inverse Problem
As an application of operator learning, we consider the inverse problem of recovering the initial vorticity in the Navier-Stokes equation (44) from partial, noisy observations of the vorticity at a later time. Consider the first solution operator defined in subsection 6.4, in particular, $\mathcal{G}^\dagger : L^2(\mathbb{T}^2;\mathbb{R}) \to H^s(\mathbb{T}^2;\mathbb{R})$ defined as

$$\mathcal{G}^\dagger : w_0 \mapsto \Psi(w_0, 50)$$

where $\Psi$ is the flow map associated to (44). We then consider the inverse problem

$$y = \mathcal{O}\big(\mathcal{G}^\dagger(w_0)\big) + \eta \qquad (47)$$

of recovering $w_0 \in L^2(\mathbb{T}^2;\mathbb{R})$, where $\mathcal{O} : H^s(\mathbb{T}^2;\mathbb{R}) \to \mathbb{R}^{49}$ is the evaluation operator on a uniform $7 \times 7$ interior grid, and $\eta \sim N(0, \Gamma)$ is observational noise with covariance $\Gamma = (1/\gamma^2) I$ and $\gamma = 0.1$. We view (47) as the Bayesian inverse problem mapping the prior measure $\mu$ on $w_0$ to the posterior measure $\mu^y$ on $w_0 \mid y$. In particular, $\mu^y$ has density with respect to $\mu$ given by the Radon-Nikodym derivative

$$\frac{d\mu^y}{d\mu}(w_0) \propto \exp\left( -\frac{1}{2} \big\| y - \mathcal{O}\big(\mathcal{G}^\dagger(w_0)\big) \big\|_\Gamma^2 \right)$$

where $\|\cdot\|_\Gamma = \|\Gamma^{-1/2} \cdot\|$ and $\|\cdot\|$ is the Euclidean norm on $\mathbb{R}^{49}$. For further details on Bayesian inversion for functions see (Cotter et al., 2009; Stuart, 2010), and see (Cotter et al., 2013) for MCMC methods adapted to the function-space setting.
We solve (47) by computing the posterior mean $\mathbb{E}_{w_0 \sim \mu^y}[w_0]$ using the preconditioned Crank-Nicolson (pCN) MCMC method described in Cotter et al. (2013) for this task. We employ pCN in two cases: (i) using $\mathcal{G}^\dagger$ evaluated with the pseudo-spectral method described in subsection 6.4; and (ii) using $\mathcal{G}_\theta$, the neural operator approximating $\mathcal{G}^\dagger$. After a 5,000-sample burn-in period, we generate 25,000 samples from the posterior using both approaches and use them to compute the posterior mean.
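The pCN update is simple to state: propose $w' = \sqrt{1-\beta^2}\,w + \beta\,\xi$ with $\xi$ drawn from the Gaussian prior, and accept with probability $\min\big(1, \exp(\Phi(w) - \Phi(w'))\big)$, where $\Phi$ is the negative log-likelihood. A generic sketch follows; the forward map and prior sampler are caller-supplied stand-ins (our own names), exercised here on a toy 1-d Gaussian problem whose posterior is known in closed form:

```python
import numpy as np

def pcn(phi, sample_prior, n_samples, beta=0.1, rng=None):
    # Preconditioned Crank-Nicolson MCMC (Cotter et al., 2013). `phi` is the
    # negative log-likelihood Phi(w) = 0.5 ||y - O(G(w))||_Gamma^2 and
    # `sample_prior` draws from the Gaussian prior mu. The prior-reversible
    # proposal means the acceptance ratio involves only Phi, which is what
    # makes pCN well defined on function space.
    rng = np.random.default_rng() if rng is None else rng
    w = sample_prior(rng)
    phi_w = phi(w)
    samples, accepted = [], 0
    for _ in range(n_samples):
        w_prop = np.sqrt(1.0 - beta**2) * w + beta * sample_prior(rng)
        phi_prop = phi(w_prop)
        if np.log(rng.uniform()) < phi_w - phi_prop:   # accept/reject
            w, phi_w = w_prop, phi_prop
            accepted += 1
        samples.append(w)
    return np.array(samples), accepted / n_samples

# Toy 1-d check: prior N(0,1), observation y = w + N(0, gamma^2), for which
# the posterior mean is y / (1 + gamma^2) = 0.8 in closed form.
y, gamma = 1.0, 0.5
phi = lambda w: 0.5 * (y - w) ** 2 / gamma**2
samples, acc_rate = pcn(phi, lambda r: r.standard_normal(), 20000,
                        beta=0.5, rng=np.random.default_rng(0))
post_mean = samples[2000:].mean()
```

In the paper's setting, `phi` would wrap either the pseudo-spectral solver or the trained neural operator surrogate; only that forward evaluation changes between the two cases.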
6.4.2 Spectra
Because of the constant-in-time forcing term, the energy reaches a non-zero equilibrium in time which is statistically reproducible for different initial conditions. To illustrate the complexity of the solution to the Navier-Stokes problem outlined in subsection 6.4, we show, in Figure 6, the Fourier spectrum of the solution data at time $t = 50$ for three different choices of the viscosity $\nu$. The figure demonstrates that, for a wide range of wavenumbers $k$, which grows as $\nu$ decreases, the rate of decay of the spectrum is $-5/3$, matching what is expected in the turbulent regime (Kraichnan, 1967). This is a statistically stationary property of the equation, sustained for all positive times.

Figure 6: Spectral decay of the Navier-Stokes equation data. The y-axis is the value of each mode; the x-axis is the wavenumber $|k| = k_1 + k_2$. From left to right, the solutions have viscosity $\nu = 10^{-3}, 10^{-4}, 10^{-5}$, respectively.
6.5 Choice of Loss Criteria
In general, a model performs best when trained and tested with the same loss criterion; if one trains the model with one norm and tests with another, the model may overfit to the training norm. The choice of loss function therefore plays a key role. In this work, we use the relative $L^2$ error to measure performance on all of our problems. Both the $L^2$ error and its square, the mean squared error (MSE), are common testing criteria in the numerical analysis and machine learning literatures. We observe that training with the relative error has a normalizing and regularizing effect that prevents overfitting; in practice, training with the relative $L^2$ loss results in around half the testing error of training with the MSE loss.
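The distinction between the two criteria is easy to see on a toy batch: the relative $L^2$ loss weights every sample equally regardless of its amplitude, while the MSE is dominated by the large-amplitude samples. A minimal sketch (our own names, illustrative data):

```python
import numpy as np

def relative_l2(pred, true):
    # Relative L2 error ||pred - true|| / ||true||, averaged over the batch.
    # On a uniform grid the quadrature weight cancels between numerator and
    # denominator, so plain vector norms suffice.
    diff = np.linalg.norm(pred - true, axis=-1)
    norm = np.linalg.norm(true, axis=-1)
    return float(np.mean(diff / norm))

def mse(pred, true):
    return float(np.mean((pred - true) ** 2))

x = np.linspace(0, 2 * np.pi, 128)
true = np.stack([np.sin(x), 1e-3 * np.sin(x)])   # one large, one small sample
pred = 1.1 * true                                 # 10% relative error on both
rel, sq = relative_l2(pred, true), mse(pred, true)
```

Here the relative error reports 10% for both samples, while the batch MSE is essentially that of the large-amplitude sample alone; this implicit per-sample normalization is the effect described above.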
7 Numerical Results
In this section, we compare the proposed neural operator with other supervised learning approaches, using the four test problems outlined in Section 6. In subsection 7.1 we study the Poisson equation and the learning of a Green's function; subsection 7.2 considers the coefficient-to-solution map for steady Darcy flow and the initial-condition-to-solution map, at a positive time, for the Burgers' equation. In subsection 7.3 we study the Navier-Stokes equation.
We compare with a variety of architectures found by discretizing the data and applying finite-dimensional approaches, as well as with other operator-based approximation methods; a further detailed comparison of operator-based approximation methods may be found in De Hoop et al. (2022), where the issue of error versus cost (with cost defined in various ways, such as the evaluation time of the network or the amount of data required) is studied. We do not compare against traditional solvers (FEM/FDM/Spectral), although our methods, once trained, enable evaluation of the input-to-output map orders of magnitude more quickly than such traditional solvers on complex problems. We demonstrate the benefits of this speed-up in a prototypical application, Bayesian inversion, in subsubsection 7.3.4.
All computations are carried out on a single Nvidia V100 GPU with 16GB memory. The code is available at https://github.com/zongyi-li/graph-pde and https://github.com/zongyi-li/fourier_neural_operator.
Setup of the Four Methods:
We construct the neural operator by stacking four integral operator layers as specified in (6) with the ReLU activation. No batch normalization is needed. Unless otherwise specified, we use $N = 1000$ training instances and 200 testing instances. We use the Adam optimizer to train for 500 epochs with an initial learning rate of 0.001 that is halved every 100 epochs. We set the channel dimensions $d_{v_0} = \cdots = d_{v_3} = 64$ for all one-dimensional problems and $d_{v_0} = \cdots = d_{v_3} = 32$ for all two-dimensional problems. The kernel networks $\kappa^{(0)}, \dots, \kappa^{(3)}$ are standard feed-forward neural networks with three layers and widths of 256 units. We use the following abbreviations to denote the methods introduced in Section 4.
• GNO: The method introduced in subsection 4.1, truncating the integral to a ball with radius $r = 0.25$ and using the Nyström approximation with $J' = 300$ sub-sampled nodes.

• LNO: The low-rank method introduced in subsection 4.2 with rank $r = 4$.

• MGNO: The multipole method introduced in subsection 4.3. On the Darcy flow problem, we use the random construction with three graph levels, sampling $J_1 = 400$, $J_2 = 100$, and $J_3 = 25$ nodes, respectively. On the Burgers' equation problem, we use the orthogonal construction without sampling.

• FNO: The Fourier method introduced in subsection 4.4. We set $k_{\max,j} = 16$ for all one-dimensional problems and $k_{\max,j} = 12$ for all two-dimensional problems.
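The effect of the $k_{\max,j}$ hyperparameter can be illustrated with a single-channel sketch of the Fourier layer's integral operator: transform, keep the lowest $k_{\max}$ modes, multiply by learned complex weights, transform back. This is a simplification of the full multi-channel FNO layer, with our own names throughout:

```python
import numpy as np

def spectral_conv_1d(v, weights, k_max):
    # Transform, keep the lowest k_max modes, multiply by learned complex
    # weights, transform back; zeroing the remaining modes implements the
    # mode truncation of the Fourier layer (single channel).
    n = v.shape[-1]
    v_hat = np.fft.rfft(v)              # real FFT: modes 0 .. n/2
    out_hat = np.zeros_like(v_hat)
    out_hat[:k_max] = weights * v_hat[:k_max]
    return np.fft.irfft(out_hat, n=n)

rng = np.random.default_rng(0)
k_max = 16
weights = rng.standard_normal(k_max) + 1j * rng.standard_normal(k_max)
v = rng.standard_normal(256)
out = spectral_conv_1d(v, weights, k_max)
```

Note that the number of parameters depends only on $k_{\max}$, not on the grid size $n$, which is the source of the method's discretization invariance.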
Remark on the Resolution.
Traditional PDE solvers such as FEM and FDM approximate a single function, and therefore their error to the continuum decreases as the resolution is increased. The figures we show here exhibit something different: the error is independent of resolution, once enough resolution is used, but is not zero. This reflects the fact that there is a residual approximation error, in the infinite-dimensional limit, from the use of a finitely-parameterized neural operator trained on a finite amount of data. Invariance of the error with respect to (sufficiently fine) resolution is a desirable property that demonstrates that an intrinsic approximation of the operator has been learned, independent of any specific discretization; see Figure 8. Furthermore, resolution-invariant operators can do zero-shot super-resolution, as shown in subsubsection 7.3.1.
7.1 Poisson Equation
Recall the Poisson equation (38) introduced in subsection 6.1. We use a zero-hidden-layer neural operator construction without lifting the input dimension. In particular, we simply learn a kernel $\kappa_\theta : \mathbb{R}^2 \to \mathbb{R}$ parameterized as a standard feed-forward neural network with parameters $\theta$. Using only $N = 1000$ training examples, we obtain a relative test error of $10^{-7}$. The neural operator gives an almost perfect approximation to the true solution operator in the topology of (3).
To examine the quality of the approximation in the much stronger uniform topology, we check whether the kernel $\kappa_\theta$ approximates the Green's function for this problem. To see why this is enough, let $K \subset L^2([0,1];\mathbb{R})$ be a bounded set, i.e.

$$\|f\|_{L^2([0,1];\mathbb{R})} \leq M, \qquad \forall f \in K,$$

and suppose that

$$\sup_{(x,y) \in [0,1]^2} |\kappa_\theta(x,y) - G(x,y)| < \frac{\varepsilon}{M}$$

for some $\varepsilon > 0$. Then it is easy to see that

$$\sup_{f \in K} \|\mathcal{G}^\dagger(f) - \mathcal{G}_\theta(f)\|_{L^2([0,1];\mathbb{R})} < \varepsilon;$$

in particular, we obtain an approximation in the topology of uniform convergence over bounded sets, while having trained only in the topology of the Bochner norm (3). Figure 7 shows the results, from which we can see that $\kappa_\theta$ does indeed approximate the Green's function well. This result implies that, by constructing a suitable architecture, we can generalize to the entire space and to data well outside the support of the training set.
Figure 7: Kernel for the one-dimensional Green's function, learned with the Nyström approximation method. Left: learned kernel function; right: the analytic Green's function. This is a proof of concept of the graph kernel network on the one-dimensional Poisson equation, comparing the learned kernel with the truth.
7.2 Darcy and Burgers Equations
In this section, we compare the four methods presented in this paper with different operator approximation benchmarks; we study the Darcy flow problem introduced in subsection 6.2 and the Burgers' equation problem introduced in subsection 6.3. The solution operators of interest are defined by (40) and (42). We use the following abbreviations for the methods against which we benchmark.
• NN: a standard point-wise feed-forward neural network. It is mesh-free, but performs badly due to the lack of neighbor information. We use standard fully connected neural networks with 8 layers and width 1000.

• FCN: the state-of-the-art neural network method based on Fully Convolutional Networks (Zhu and Zabaras, 2018). It has dominating performance for small grids $s = 61$. But fully convolutional networks are mesh-dependent, and therefore their error grows when moving to a larger grid.

• PCA+NN: an instantiation of the methodology proposed in Bhattacharya et al. (2020): using PCA as an autoencoder on both the input and output spaces, and interpolating the latent spaces with a standard fully connected neural network of width 200. The method provably obtains mesh-independent error and can learn purely from data; however, the solution can only be evaluated on the same mesh as the training data.

• RBM: the classical Reduced Basis Method (using a PCA basis), which is widely used in applications and provably obtains mesh-independent error (DeVore, 2014). This method has good performance, but the solutions can only be evaluated on the same mesh as the training data, and one needs knowledge of the PDE to employ it.

• DeepONet: the Deep Operator Network (Lu et al., 2019), which comes equipped with an approximation theory (Lanthaler et al., 2021). We use the unstacked version, precisely defined in the original work (Lu et al., 2019), with standard fully connected neural networks of 8 layers and width 200.
Figure 8: Benchmarks on Burgers' equation and Darcy flow. (a) Benchmarks on Burgers' equation; (b) benchmarks on Darcy flow for different resolutions. Train and test on the same resolution. For acronyms, see Section 7; details in Tables 2 and 3.

7.2.1 Darcy Flow
The results of the experiments on Darcy flow are shown in Figure 8 and Table 2. All the methods, except for FCN, achieve invariance of the error with respect to the resolution $s$. In the experiments, we tune each model across a range of widths and depths to obtain the choices used here; for DeepONet, for example, this leads to the 8 layers and width 200 reported above.
Within our hyperparameter search, the Fourier neural operator (FNO) obtains the lowest relative error. The Fourier-based method likely sees this advantage because the output functions are smooth in these test problems. We also note that it is possible to obtain better results for each model using modified architectures and problem-specific feature engineering. For example, for DeepONet, using a CNN in the branch net and PCA in the trunk net (the latter being similar to the method used in Bhattacharya et al. (2020)) can achieve a $0.0232$ relative $L^2$ error, as shown in Lu et al. (2021b), about half the size of the error we obtain here, but on a very coarse grid with $s = 29$. In the experiments, the different approximation architectures are chosen so that their training costs are similar across all the methods considered, for a given $s$. Noting this, and comparing, for example, the graph-based neural operator methods such as GNO and MGNO, which use Nyström sampling in physical space, with FNO, we see that FNO is more accurate.
| Networks | $s=85$ | $s=141$ | $s=211$ | $s=421$ |
|---|---|---|---|---|
| NN | 0.1716 | 0.1716 | 0.1716 | 0.1716 |
| FCN | 0.0253 | 0.0493 | 0.0727 | 0.1097 |
| PCANN | 0.0299 | 0.0298 | 0.0298 | 0.0299 |
| RBM | 0.0244 | 0.0251 | 0.0255 | 0.0259 |
| DeepONet | 0.0476 | 0.0479 | 0.0462 | 0.0487 |
| GNO | 0.0346 | 0.0332 | 0.0342 | 0.0369 |
| LNO | 0.0520 | 0.0461 | 0.0445 | – |
| MGNO | 0.0416 | 0.0428 | 0.0428 | 0.0420 |
| FNO | 0.0108 | 0.0109 | 0.0109 | 0.0098 |

Table 2: Relative error on 2-d Darcy flow for different resolutions $s$.

7.2.2 Burgers' Equation
The results of the experiments on Burgers' equation are shown in Figure 8 and Table 3. As for the Darcy problem, our instantiation of the Fourier neural operator obtains nearly one order of magnitude lower relative error than any benchmark. The Fourier neural operator has standard deviation $0.0010$ and mean training error $0.0012$. If one replaces the ReLU activation by GeLU, the test error of the FNO is further reduced from $0.0018$ to $0.0007$. We again observe the invariance of the error with respect to the resolution. It is possible to improve the performance of each model using modified architectures and problem-specific feature engineering; for instance, the PCA-enhanced DeepONet with proper scaling can achieve a $0.0194$ relative $L^2$ error, as shown in Lu et al. (2021b), on a grid of resolution $s = 128$.
| Networks | $s=256$ | $s=512$ | $s=1024$ | $s=2048$ | $s=4096$ | $s=8192$ |
|---|---|---|---|---|---|---|
| NN | 0.4714 | 0.4561 | 0.4803 | 0.4645 | 0.4779 | 0.4452 |
| GCN | 0.3999 | 0.4138 | 0.4176 | 0.4157 | 0.4191 | 0.4198 |
| FCN | 0.0958 | 0.1407 | 0.1877 | 0.2313 | 0.2855 | 0.3238 |
| PCANN | 0.0398 | 0.0395 | 0.0391 | 0.0383 | 0.0392 | 0.0393 |
| DeepONet | 0.0569 | 0.0617 | 0.0685 | 0.0702 | 0.0833 | 0.0857 |
| GNO | 0.0555 | 0.0594 | 0.0651 | 0.0663 | 0.0666 | 0.0699 |
| LNO | 0.0212 | 0.0221 | 0.0217 | 0.0219 | 0.0200 | 0.0189 |
| MGNO | 0.0243 | 0.0355 | 0.0374 | 0.0360 | 0.0364 | 0.0364 |
| FNO | 0.0018 | 0.0018 | 0.0018 | 0.0019 | 0.0020 | 0.0019 |

Table 3: Relative errors on the 1-d Burgers' equation for different resolutions $s$.

Figure 9: Darcy flow, trained on $16 \times 16$, tested on $241 \times 241$. Graph kernel network for the solution of the Darcy problem (39): it can be trained on a small resolution and will generalize to a large one. The error shown is the point-wise absolute squared error.
7.2.3 Zero-shot super-resolution.
The neural operator is mesh-invariant, so it can be trained on a lower resolution and evaluated at a higher resolution without seeing any higher-resolution data (zero-shot super-resolution). Figure 9 shows an example on the Darcy equation where we train the GNO model on $16 \times 16$ resolution data in the setting above and transfer to $256 \times 256$ resolution, demonstrating super-resolution in space.
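One concrete way to see how a discretization-invariant representation transfers across resolutions is trigonometric interpolation: a band-limited periodic function sampled on a coarse grid can be evaluated exactly on a finer grid by zero-padding its Fourier coefficients. A one-dimensional sketch (illustrative, not the GNO mechanism itself, which instead evaluates its kernel at the new nodes):

```python
import numpy as np

def spectral_upsample(v, s_new):
    # Trigonometric interpolation: zero-pad the Fourier coefficients of a
    # function sampled on a coarse periodic grid to evaluate it on a finer
    # one. The (s_new / s) factor compensates for FFT normalization.
    s = v.shape[-1]
    v_hat = np.fft.rfft(v)
    out_hat = np.zeros(s_new // 2 + 1, dtype=complex)
    out_hat[: v_hat.shape[-1]] = v_hat
    return np.fft.irfft(out_hat, n=s_new) * (s_new / s)

x_coarse = np.linspace(0.0, 1.0, 16, endpoint=False)
x_fine = np.linspace(0.0, 1.0, 256, endpoint=False)
v = np.sin(2 * np.pi * x_coarse) + 0.5 * np.cos(4 * np.pi * x_coarse)
v_fine = spectral_upsample(v, 256)  # matches the band-limited function exactly
```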
7.3 Navier-Stokes Equation
In this section, we compare our four methods with different benchmarks on the Navier-Stokes equation introduced in subsection 6.4. The operator of interest is given by (46). We use the following abbreviations for the methods against which we benchmark.
• ResNet: 18 layers of 2-d convolution with residual connections (He et al., 2016).

• U-Net: a popular choice for image-to-image regression tasks, consisting of four blocks with 2-d convolutions and deconvolutions (Ronneberger et al., 2015).

• TF-Net: a network designed for learning turbulent flows based on a combination of spatial and temporal convolutions (Wang et al., 2020).

• FNO-2d: 2-d Fourier neural operator with an auto-regressive structure in time. We use the Fourier neural operator to model the local evolution from the previous 10 time steps to the next time step, and iteratively apply the model to obtain the long-term trajectory. We set $k_{\max,j} = 12$ and $d_v = 32$.

• FNO-3d: 3-d Fourier neural operator that convolves directly in space-time. We use the Fourier neural operator to model the global evolution from the initial 10 time steps directly to the long-term trajectory. We set $k_{\max,j} = 12$ and $d_v = 20$.
Figure 10: Benchmarks on the Navier-Stokes equation. The learning curves on Navier-Stokes with $\nu = 10^{-3}$ for the different benchmarks. Train and test on the same resolution. For acronyms, see Section 7; details in Table 4.
| Networks | Parameters | Time per epoch | $\nu=10^{-3}$, $T=50$, $N=1000$ | $\nu=10^{-4}$, $T=30$, $N=1000$ | $\nu=10^{-4}$, $T=30$, $N=10000$ | $\nu=10^{-5}$, $T=20$, $N=1000$ |
|---|---|---|---|---|---|---|
| FNO-3D | 6,558,537 | 38.99s | 0.0086 | 0.1918 | 0.0820 | 0.1893 |
| FNO-2D | 414,517 | 127.80s | 0.0128 | 0.1559 | 0.0834 | 0.1556 |
| U-Net | 24,950,491 | 48.67s | 0.0245 | 0.2051 | 0.1190 | 0.1982 |
| TF-Net | 7,451,724 | 47.21s | 0.0225 | 0.2253 | 0.1168 | 0.2268 |
| ResNet | 266,641 | 78.47s | 0.0701 | 0.2871 | 0.2311 | 0.2753 |

Table 4: Benchmarks on Navier-Stokes (fixing resolution $64 \times 64$ for both training and testing).
As shown in Table 4, FNO-3D has the best performance when there is sufficient data ($\nu = 10^{-3}$, $N = 1000$ and $\nu = 10^{-4}$, $N = 10000$). For the configurations where the amount of data is insufficient ($\nu = 10^{-4}$, $N = 1000$ and $\nu = 10^{-5}$, $N = 1000$), all methods have $\geq 15\%$ error, with FNO-2D achieving the lowest error within our hyperparameter search. Note that we only present results for spatial resolution $64 \times 64$ since all the benchmarks we compare against are designed for this resolution; increasing the spatial resolution degrades their performance, while FNO achieves the same errors.
Auto-regressive (2D) and Temporal Convolution (3D).

We investigate two standard formulations for modeling the time evolution: the auto-regressive model (2D) and the temporal convolution model (3D). Auto-regressive models: FNO-2D, U-Net, TF-Net, and ResNet all perform 2D convolution in the spatial domain and recurrently propagate in the time domain (2D+RNN); the operator maps the solution at previous time steps to the next time step (2D functions to 2D functions). Temporal convolution models: FNO-3D, on the other hand, performs convolution in space-time, approximating the integral in time by a convolution; FNO-3D maps the initial time interval directly to the full trajectory (3D functions to 3D functions). The 2D+RNN structure can propagate the solution to an arbitrary time $T$ in increments of a fixed interval length $\Delta t$, while the Conv3D structure is fixed to the interval $[0, T]$ but can transfer the solution to an arbitrary time-discretization. We find the 2D method works better for short time sequences, while the 3D method is more expressive and easier to train on longer sequences.
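The auto-regressive (2D+RNN) inference loop can be sketched generically; `model` below is a stand-in callable for the trained network, mapping a window of the previous 10 frames to the next frame (names and the toy model are ours):

```python
import numpy as np

def rollout(model, history, n_steps):
    # Auto-regressive inference: `model` maps the previous 10 frames to the
    # next frame; iterating produces a trajectory of arbitrary length in
    # increments of the fixed interval between frames.
    frames = list(history)                        # 10 initial frames, each (s, s)
    for _ in range(n_steps):
        window = np.stack(frames[-10:], axis=-1)  # shape (s, s, 10)
        frames.append(model(window))
    return np.stack(frames[10:], axis=-1)         # predicted frames only

# Toy stand-in for the trained network: decay the most recent frame.
model = lambda window: 0.9 * window[..., -1]
history = [np.ones((4, 4)) for _ in range(10)]
traj = rollout(model, history, n_steps=5)
```

The 3D (temporal convolution) alternative replaces this loop with a single call mapping the stacked initial frames directly to the stacked trajectory.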
| Networks | $s=64$ | $s=128$ | $s=256$ |
|---|---|---|---|
| FNO-3D | 0.0098 | 0.0101 | 0.0106 |
| FNO-2D | 0.0129 | 0.0128 | 0.0126 |
| U-Net | 0.0253 | 0.0289 | 0.0344 |
| TF-Net | 0.0277 | 0.0278 | 0.0301 |

Table 5: Resolution study on the Navier-Stokes equation ($\nu = 10^{-3}$, $N = 200$, $T = 20$).

7.3.1 Zero-shot super-resolution.
The neural operator is mesh-invariant, so it can be trained on a lower resolution and evaluated at a higher resolution without seeing any higher-resolution data (zero-shot super-resolution). Figure 11 shows an example where we train the FNO-3D model on $64 \times 64 \times 20$ resolution data in the setting above with ($\nu = 10^{-4}$, $N = 10000$) and transfer to $256 \times 256 \times 80$ resolution, demonstrating super-resolution in space-time. The Fourier neural operator is the only model among the benchmarks (FNO-2D, U-Net, TF-Net, and ResNet) that can do zero-shot super-resolution; the method works well not only in the spatial but also in the temporal domain.
Figure 11: Zero-shot super-resolution. Vorticity field of the solution to the two-dimensional Navier-Stokes equation with viscosity $\nu = 10^{-4}$ ($\mathrm{Re} \approx 200$); ground truth on top and prediction on bottom. The model is trained on data discretized on a uniform $64 \times 64$ spatial grid and a 20-point uniform temporal grid. The model is evaluated with a different initial condition discretized on a uniform $256 \times 256$ spatial grid and an 80-point uniform temporal grid.
7.3.2 Spectral analysis

Figure 12: The spectral decay of the predictions of different models on the Navier-Stokes equation. The y-axis is the spectrum; the x-axis is the wavenumber. Left: the spectrum of one trajectory; right: the average over 40 trajectories.

Figure 13: Spectral decay in terms of $k_{\max}$. The error of truncation in one single Fourier layer, without applying the linear transform $W$. The y-axis is the normalized truncation error; the x-axis is the truncation mode $k_{\max}$.
Figure 12 shows that all the methods are able to capture the spectral decay of the Navier-Stokes equation. Notice that, while the Fourier method truncates the higher frequency modes during the convolution, FNO can still recover the higher frequency components in the final prediction. Due to the way we parameterize $R_\theta$, the function output by (26) has at most $k_{\max,j}$ Fourier modes per channel. This, however, does not mean that the Fourier neural operator can only approximate functions up to $k_{\max,j}$ modes. Indeed, the activation functions, which occur between integral operators, and the final decoder network $Q$ recover the high frequency modes. As an example, consider a solution to the Navier-Stokes equation with viscosity $\nu = 10^{-3}$. Truncating this function at 20 Fourier modes yields an error of around 2%, as shown in Figure 13, while the Fourier neural operator learns the parametric dependence and produces approximations with an error of $\leq 1\%$ using only $k_{\max,j} = 12$ parameterized modes.
7.3.3 Non-periodic boundary conditions.

Traditional Fourier methods work only with periodic boundary conditions. The Fourier neural operator, however, does not have this limitation. This is due to the linear transform $W$ (the bias term), which keeps track of the non-periodic boundary. For example, the Darcy flow problem and the time domain of the Navier-Stokes problem have non-periodic boundary conditions, and the Fourier neural operator still learns the solution operator with excellent accuracy.
7.3.4 Bayesian Inverse Problem
As discussed in subsubsection 6.4.1, we use the pCN method of Cotter et al. (2013) to draw samples from the posterior distribution of initial vorticities in the Navier-Stokes equation given sparse, noisy observations at time $T = 50$. We compare the Fourier neural operator acting as a surrogate model with the traditional solver used to generate our train-test data (both run on GPU). We generate 25,000 samples from the posterior (with a 5,000-sample burn-in period), requiring 30,000 evaluations of the forward operator.
As shown in Figure 14, FNO and the traditional solver recover almost the same posterior mean which, when pushed forward, recovers well the later-time solution of the Navier-Stokes equation. In sharp contrast, FNO takes $0.005$s to evaluate a single instance, while the traditional solver, after being optimized to use the largest possible internal time step that does not lead to blow-up, takes $2.2$s. This amounts to 2.5 minutes for the MCMC using FNO and over 18 hours for the traditional solver. Even accounting for data generation and training time (offline steps), which take 12 hours, using FNO is still faster. Once trained, FNO can quickly perform multiple MCMC runs for different initial conditions and observations, while the traditional solver takes 18 hours for every instance. Furthermore, since FNO is differentiable, it can easily be applied to PDE-constrained optimization problems in which adjoint calculations are used as part of the solution procedure.
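The reported wall-clock figures follow directly from the 30,000 forward evaluations; a quick arithmetic check:

```python
n_evals = 25_000 + 5_000             # posterior samples plus burn-in
fno_minutes = n_evals * 0.005 / 60   # at 0.005 s per FNO evaluation
solver_hours = n_evals * 2.2 / 3600  # at 2.2 s per traditional solve
speedup = 2.2 / 0.005                # per-evaluation speed-up factor, 440x
```

This yields 2.5 minutes for the surrogate and roughly 18.3 hours for the traditional solver, consistent with the figures quoted above.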
Figure 14: Results of the Bayesian inverse problem for the Navier-Stokes equation. The top left panel shows the true initial vorticity, while the bottom left panel shows the true observed vorticity at $T = 50$, with black dots indicating the locations of the observation points placed on a $7 \times 7$ grid. The top middle panel shows the posterior mean of the initial vorticity, given the noisy observations, estimated with MCMC using the traditional solver, while the top right panel shows the same using FNO as a surrogate model. The bottom middle and right panels show the vorticity at $T = 50$ when the respective approximate posterior means are used as initial conditions.
7.4 Discussion and Comparison of the Four Methods

In this section we compare the four methods in terms of expressiveness, complexity, refinability, and ingenuity.
7.4.1 Ingenuity
First we discuss ingenuity, in other words, the design of the frameworks. The first method, GNO, relies on the Nyström approximation of the kernel, equivalently, a Monte Carlo approximation of the integral. It is the simplest and most straightforward method. The second method, LNO, relies on a low-rank decomposition of the kernel operator. It is efficient when the kernel has near low-rank structure. The third method, MGNO, is a combination of the first two: it uses a hierarchical, multi-resolution decomposition of the kernel. The last one, FNO, is different from the first three; it restricts the integral kernel to induce a convolution.
GNO and MGNO are implemented using graph neural networks, which helps to define sampling and integration; the graph network library also allows sparse and distributed message passing. LNO and FNO do not use sampling, and are faster because they do not require the graph library.
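The Nyström/Monte Carlo view of the kernel integral can be sketched in a few lines; here the learned kernel network is replaced by a fixed Gaussian kernel and the domain is taken to be $[0,1]$ with uniform sampling, all illustrative assumptions of ours:

```python
import numpy as np

# Monte Carlo (Nystrom) approximation of a kernel integral operator on D = [0, 1]:
#   (K v)(x) = \int_D kappa(x, y) v(y) dy  ~=  (1/J') sum_j kappa(x, y_j) v(y_j)
# with y_j sampled uniformly. In GNO, kappa is a small neural network and the sum
# is realized as message passing on a graph; here kappa is a fixed stand-in.

def kernel_integral_mc(kappa, v, x_query, y_samples):
    # Weights 1/J' correspond to Monte Carlo integration against Uniform(D)
    Jp = len(y_samples)
    K = kappa(x_query[:, None], y_samples[None, :])   # (n_query, J') kernel matrix
    return K @ v(y_samples) / Jp

rng = np.random.default_rng(0)
kappa = lambda x, y: np.exp(-(x - y) ** 2)            # stand-in for the kernel net
v = np.sin
x = np.linspace(0.0, 1.0, 5)
out = kernel_integral_mc(kappa, v, x, rng.uniform(0.0, 1.0, 512))
```

Refining or changing the sample set `y_samples` changes only the quadrature, not any parameters, which is the property that lets one set of weights be shared across discretizations.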
| Scheme | Design | Graph-based | Kernel network |
| --- | --- | --- | --- |
| GNO | Nyström approximation | Yes | Yes |
| LNO | Low-rank approximation | No | Yes |
| MGNO | Multi-level graphs on GNO | Yes | Yes |
| FNO | Convolution theorem; Fourier features | No | No |

Table 6: Ingenuity.

7.4.2 Expressiveness
We measure expressiveness by the training and testing error of the method. The full $O(J^2)$ integration always has the best results, but it is usually too expensive. As shown in the experiments of Sections 7.2.1 and 7.2.2, GNO usually has good accuracy, but its performance suffers from sampling. LNO works best on the 1d problem (Burgers equation); it has difficulty on the 2d problems because it does not employ sampling to speed up evaluation. MGNO has a multi-level structure, which gives it the benefits of the first two. Finally, FNO has the best overall performance; it is also the only method that can capture the challenging Navier-Stokes equation.
7.4.3 Complexity
The complexities of the four methods are listed in Table 7. GNO and MGNO use sampling, so their complexity depends on the number of sampled nodes $J'$; when using the full set of nodes they are still quadratic. LNO has the lowest complexity, $O(J)$. FNO, when using the fast Fourier transform, has complexity $O(J \log J)$.

In practice, FNO is faster than the other three methods because it does not have the kernel network $\kappa$. MGNO is relatively slower because of its multi-level graph structure.
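The $O(J \log J)$ cost comes from performing the convolution in Fourier space. Below is a minimal single-channel sketch of such a layer; the complex weights `R` are random stand-ins for trained parameters, and keeping only the lowest `k_max` modes is what makes the same weights applicable at any sufficiently fine resolution:

```python
import numpy as np

def fourier_layer(v, R):
    """Multiply the lowest len(R) Fourier modes of v by complex weights R,
    zero the remaining modes, and transform back. The FFT dominates the
    cost: O(J log J) for J grid points."""
    k_max = len(R)
    v_hat = np.fft.rfft(v)                 # J real samples -> J//2 + 1 modes
    out_hat = np.zeros_like(v_hat)
    out_hat[:k_max] = R * v_hat[:k_max]    # spectral multiplication = convolution
    return np.fft.irfft(out_hat, n=len(v))

rng = np.random.default_rng(0)
R = rng.standard_normal(12) + 1j * rng.standard_normal(12)
grid = np.linspace(0.0, 1.0, 256, endpoint=False)
u = fourier_layer(np.sin(2 * np.pi * grid), R)
```

Because only the retained modes carry parameters, evaluating the same `R` on a finer grid of the same band-limited input reproduces the coarse-grid values.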
| Scheme | Complexity | Time per epoch in training |
| --- | --- | --- |
| GNO | $O(J'^2 r^2)$ | 4s |
| LNO | $O(J)$ | 20s |
| MGNO | $\sum_\ell O(J_\ell^2 r_\ell^2) \sim O(J)$ | 8s |
| FNO | $O(J \log J)$ | 4s |

Table 7: Complexity (times rounded up to the second on a single Nvidia V100 GPU).

7.4.4 Refinability
Refinability measures the accuracy attainable as a function of the number of parameters used in the framework. Table 8 lists the relative error on Darcy flow for different numbers of parameters. Because GNO, LNO, and MGNO use kernel networks, their error curves are relatively flat: they can work with a very small number of parameters. On the other hand, FNO does not have a sub-network; it needs an order of magnitude more parameters to obtain an acceptable error rate.
| Number of parameters | $10^3$ | $10^4$ | $10^5$ | $10^6$ |
| --- | --- | --- | --- | --- |
| GNO | 0.075 | 0.065 | 0.060 | 0.035 |
| LNO | 0.080 | 0.070 | 0.060 | 0.040 |
| MGNO | 0.070 | 0.050 | 0.040 | 0.030 |
| FNO | 0.200 | 0.035 | 0.020 | 0.015 |

Table 8: Refinability. The relative error on Darcy flow for different numbers of parameters. The errors are approximate values rounded to 0.005; each entry is the lowest test error achieved by the model when its number of parameters $|\theta|$ is bounded by $10^3$, $10^4$, $10^5$, and $10^6$ respectively.
7.4.5 Robustness

We conclude with experiments investigating the robustness of the Fourier neural operator to noise. We study: a) training on clean (noiseless) data and testing on both clean and noisy data; b) training on noisy data and testing on both clean and noisy data. When creating noisy data we map $a$ to a noisy $a'$ as follows: at every grid point $x$ we set

$$a'(x) = a(x) + 0.1\,\|a\|_\infty\,\xi,$$

where $\xi \sim \mathcal{N}(0,1)$ is drawn i.i.d. at every grid point; this is similar to the setting adopted in Lu et al. (2021b). We also study the 1d advection equation as an additional test case, following the setting in Lu et al. (2021b) in which the input data is a random square wave, defined by an $\mathbb{R}^3$-valued random variable.
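A sketch of this corruption as code, assuming the input function is given as an array of grid values (function and variable names are our own):

```python
import numpy as np

def corrupt(a, level=0.1, rng=None):
    """Return a'(x) = a(x) + level * ||a||_inf * xi(x), with xi ~ N(0, 1)
    drawn i.i.d. at every grid point."""
    rng = rng if rng is not None else np.random.default_rng(0)
    return a + level * np.max(np.abs(a)) * rng.standard_normal(a.shape)
```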
| Problem | Training error | Test (clean) | Test (noisy) |
| --- | --- | --- | --- |
| Burgers | 0.002 | 0.002 | 0.018 |
| Advection | 0.002 | 0.002 | 0.094 |
| Darcy | 0.006 | 0.011 | 0.012 |
| Navier-Stokes | 0.024 | 0.024 | 0.039 |
| Burgers (train with noise) | 0.011 | 0.004 | 0.011 |
| Advection (train with noise) | 0.020 | 0.010 | 0.019 |
| Darcy (train with noise) | 0.007 | 0.012 | 0.012 |
| Navier-Stokes (train with noise) | 0.026 | 0.026 | 0.025 |

Table 9: Robustness.
As shown in the top half of Table 9 and Figure 15, we observe that the Fourier neural operator is largely robust with respect to the (test) noise level on all four problems; in particular, on the advection problem it has about 10% error with 10% noise. The Darcy and Navier-Stokes operators are smoothing, and the Fourier neural operator obtains lower than 10% error in all scenarios. The FNO is less robust on the advection equation, which is not smoothing, and on the Burgers equation which, whilst smoothing, also forms steep fronts.
A straightforward approach to enhancing robustness is to train the model with noise. As shown in the bottom half of Table 9, the Fourier neural operator has no gap between clean and noisy test data when trained with noise. However, noise in training may degrade performance on clean data, as a trade-off. In general, augmenting the training data with noise leads to robustness. For example, in the auto-regressive modeling of dynamical systems, training the model with noise reduces error accumulation in time, and thereby helps the model predict over longer time horizons (Pfaff et al., 2020). We also observed that other regularization techniques such as early stopping and weight decay improve robustness, as does using a higher spatial resolution.
The advection problem is a hard problem for the FNO since it has discontinuities; similar issues arise when using spectral methods for conservation laws. One can modify the architecture to address such discontinuities accordingly. For example, Wen et al. (2021) enhance the FNO by composing a CNN or UNet branch with the Fourier layer; the resulting composite model outperforms the basic FNO on multiphase flow with high contrast and sharp shocks. However the CNN and UNet take the method out of the realm of discretization-invariant methods; further work is required to design discretization-invariant image-processing tools, such as the identification of discontinuities.
Figure 15: Robustness on the advection and Burgers equations. (a) The input for the advection equation (resolution $s = 40$): the orange curve is the clean input; the blue curve is the noisy input. (b) The output for the advection equation: the green curve is the ground-truth output; the orange curve is the FNO prediction on the clean input (overlapping with the ground truth); the blue curve is the prediction on the noisy input. Panels (c) and (d) show the same for the Burgers equation (resolution $s = 1000$).
8 Approximation Theory
The paper by Chen and Chen (1995) provides the first universal approximation theorem for operator approximation via neural networks, and the paper by Bhattacharya et al. (2020) provides an alternative architecture and approximation result. The analysis of Chen and Chen (1995) was recently extended in significant ways in the paper by Lanthaler et al. (2021) where, for the first time, the curse of dimensionality is addressed, and resolved, for certain specific operator learning problems, using the DeepONet generalization (Lu et al., 2019, 2021a) of Chen and Chen (1995). The analysis of Lanthaler et al. (2021) was generalized to study operator approximation, and the curse of dimensionality, for the FNO in Kovachki et al. (2021).
Unlike the finite-dimensional setting, the choice of input and output spaces $\mathcal{A}$ and $\mathcal{U}$ for the mapping $\mathcal{G}^\dagger$ plays a crucial role in the approximation theory due to the distinctiveness of the induced norm topologies. In this section, we prove universal approximation theorems for neural operators both with respect to the topology of uniform convergence over compact sets and with respect to the topology induced by the Bochner norm (3). We focus our attention on the Lebesgue, Sobolev, continuous, and continuously differentiable function classes as they have numerous applications in scientific computing and machine learning problems. Unlike the results of Bhattacharya et al. (2020); Kovachki et al. (2021), which rely on the Hilbertian structure of the input and output spaces, or the results of Chen and Chen (1995); Lanthaler et al. (2021), which rely on spaces of continuous functions, our results extend to more general Banach spaces as specified by Assumptions 9 and 10 (stated in Section 8.3) and are, to the best of our knowledge, the first of their kind to apply at this level of generality.
Our method of proof proceeds by making use of the following two observations. First we establish the Banach space approximation property Grothendieck (1955) for the input and output spaces of interest, which allows for a finite dimensionalization of the problem. In particular, we prove that the Banach space approximation property holds for various function spaces defined on Lipschitz domains; the precise result we need, while unsurprising, seems to be missing from the functional analysis literature and so we provide statement and proof. Details are given in Appendix A. Second, we establish that integral kernel operators with smooth kernels can be used to approximate linear functionals of various input spaces. In doing so, we establish a Riesz-type representation theorem for the continuously differentiable functions. Such a result is not surprising and mimics the well-known result for Sobolev spaces; however in the form we need it we could not find the result in the functional analysis literature and so we provide statement and proof. Details are given in Appendix B. With these two facts, we construct a neural operator which linearly maps any input function to a finite vector then non-linearly maps this vector to a new finite vector which is then used to form the coefficients of a basis expansion for the output function. We reemphasize that our approximation theory uses the fact that neural operators can be reduced to a linear method of approximation (as pointed out in Section 5.1) and does not capture any benefits of nonlinear approximation. However these benefits are present in the architecture and are exploited by the trained networks we find in practice. Exploiting their nonlinear nature to potentially obtain improved rates of approximation remains an interesting direction for future research.
The rest of this section is organized as follows. In Subsection 8.1, we define allowable activation functions and the set of neural operators used in our theory, noting that they constitute a subclass of the neural operators defined in Section 5. In Subsection 8.3, we state and prove our main universal approximation theorems.
8.1 Neural Operators
For any $n \in \mathbb{N}$ and $\sigma : \mathbb{R} \to \mathbb{R}$, we define the set of real-valued $n$-layer neural networks on $\mathbb{R}^d$ by

$$\begin{aligned} \mathsf{NN}_n(\sigma; \mathbb{R}^d) \coloneqq \big\{ f : \mathbb{R}^d \to \mathbb{R} :\; & f(x) = W_n \sigma( \cdots W_1 \sigma(W_0 x + b_0) + b_1 \cdots ) + b_n, \\ & W_0 \in \mathbb{R}^{d_0 \times d},\ W_1 \in \mathbb{R}^{d_1 \times d_0},\ \dots,\ W_n \in \mathbb{R}^{1 \times d_{n-1}}, \\ & b_0 \in \mathbb{R}^{d_0},\ b_1 \in \mathbb{R}^{d_1},\ \dots,\ b_n \in \mathbb{R},\ d_0, d_1, \dots, d_{n-1} \in \mathbb{N} \big\}. \end{aligned}$$
We define the set of $\mathbb{R}^{d'}$-valued neural networks simply by stacking real-valued networks,

$$\mathsf{NN}_n(\sigma; \mathbb{R}^d, \mathbb{R}^{d'}) \coloneqq \big\{ f : \mathbb{R}^d \to \mathbb{R}^{d'} : f(x) = (f_1(x), \dots, f_{d'}(x)),\ f_1, \dots, f_{d'} \in \mathsf{NN}_n(\sigma; \mathbb{R}^d) \big\}.$$
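The class $\mathsf{NN}_n$ above is the standard fully connected architecture; a direct transcription in code (with tanh as an illustrative activation and random weights of our own choosing):

```python
import numpy as np

def mlp(x, weights, biases, sigma=np.tanh):
    """Evaluate f(x) = W_n sigma(... W_1 sigma(W_0 x + b_0) + b_1 ...) + b_n,
    matching the definition of NN_n(sigma; R^d). `weights` and `biases` are
    the lists W_0, ..., W_n and b_0, ..., b_n with compatible shapes."""
    v = x
    for W, b in zip(weights[:-1], biases[:-1]):
        v = sigma(W @ v + b)           # hidden layers with activation
    return weights[-1] @ v + biases[-1]  # final affine layer, no activation

rng = np.random.default_rng(0)
d, d0, d1 = 3, 8, 8                     # input and hidden widths
Ws = [rng.standard_normal((d0, d)), rng.standard_normal((d1, d0)),
      rng.standard_normal((1, d1))]
bs = [rng.standard_normal(d0), rng.standard_normal(d1), rng.standard_normal(1)]
y = mlp(rng.standard_normal(d), Ws, bs)
```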
We remark that we could have defined $\mathsf{NN}_n(\sigma; \mathbb{R}^d, \mathbb{R}^{d'})$ by letting $W_n \in \mathbb{R}^{d' \times d_{n-1}}$ and $b_n \in \mathbb{R}^{d'}$ in the definition of $\mathsf{NN}_n(\sigma; \mathbb{R}^d)$; because we allow arbitrary width, the two definitions are equivalent, but the definition as presented is more convenient for our analysis. We also employ the preceding definition with $\mathbb{R}^d$ and $\mathbb{R}^{d'}$ replaced by spaces of matrices. For any $m \in \mathbb{N}_0$, we define the set of allowable activation functions as the continuous $\mathbb{R} \to \mathbb{R}$ maps which make neural networks dense in $C^m(\mathbb{R}^d)$ on compacta at any fixed depth,

$$\mathbb{A}_m \coloneqq \big\{ \sigma \in C(\mathbb{R}) : \exists\, n \in \mathbb{N} \text{ s.t. } \mathsf{NN}_n(\sigma; \mathbb{R}^d) \text{ is dense in } C^m(K)\ \forall K \subset \mathbb{R}^d \text{ compact} \big\}.$$

It is shown in (Pinkus, 1999, Theorem 4.1) that $\{\sigma \in C^m(\mathbb{R}) : \sigma \text{ is not a polynomial}\} \subset \mathbb{A}_m$ with $n = 1$. Clearly $\mathbb{A}_{m+1} \subseteq \mathbb{A}_m$.
We define the set of linearly bounded activations as

$$\mathbb{A}_m^L \coloneqq \Big\{ \sigma \in \mathbb{A}_m : \sigma \text{ is Borel measurable},\ \sup_{x \in \mathbb{R}} \frac{|\sigma(x)|}{1 + |x|} < \infty \Big\},$$

noting that any globally Lipschitz, non-polynomial $C^m$-function is contained in $\mathbb{A}_m^L$. Most activation functions used in practice fall within this class; for example, $\mathrm{ReLU} \in \mathbb{A}_0^L$ and $\mathrm{ELU} \in \mathbb{A}_1^L$, while $\tanh, \mathrm{sigmoid} \in \mathbb{A}_m^L$ for any $m \in \mathbb{N}_0$.
For approximation in a Bochner norm, we will be interested in constructing globally bounded neural networks which can approximate the identity over compact sets, as done in (Lanthaler et al., 2021; Bhattacharya et al., 2020). This allows us to control the potential unboundedness of the support of the input measure by exploiting the fact that the probability of an input must decay to zero in unbounded regions. Following (Lanthaler et al., 2021), we introduce the forthcoming definition, which uses the notion of the diameter of a set. In particular, the diameter of any set $S \subseteq \mathbb{R}^d$ is defined, with $|\cdot|_2$ the Euclidean norm on $\mathbb{R}^d$, as

$$\mathrm{diam}_2(S) \coloneqq \sup_{x, y \in S} |x - y|_2.$$
Definition 7

We denote by $\mathsf{BA}$ the set of maps $\sigma \in \mathbb{A}_0$ such that, for any compact set $K \subset \mathbb{R}^d$, $\epsilon > 0$, and $C \geq \mathrm{diam}_2(K)$, there exists a number $n \in \mathbb{N}$ and a neural network $f \in \mathsf{NN}_n(\sigma; \mathbb{R}^d, \mathbb{R}^d)$ such that

$$|f(x) - x|_2 \leq \epsilon, \quad \forall x \in K,$$
$$|f(x)|_2 \leq C, \quad \forall x \in \mathbb{R}^d.$$

It is shown in (Lanthaler et al., 2021, Lemma C.1) that $\mathrm{ReLU} \in \mathbb{A}_0^L \cap \mathsf{BA}$ with $n = 3$.
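A concrete one-dimensional instance of the maps required by Definition 7 can be built from ReLU by clipping: the sketch below reproduces $x$ exactly on $[-C/2, C/2]$ (so any $\epsilon > 0$ works for compact $K$ inside that interval) and is globally bounded by $C/2 \leq C$; the cited lemma treats the general $\mathbb{R}^d$ case with the Euclidean-norm bound.

```python
import numpy as np

relu = lambda x: np.maximum(x, 0.0)

def clipped_identity(x, C):
    """ReLU network equal to x on [-C/2, C/2] and clipped to +/- C/2 outside."""
    return relu(x + C / 2) - relu(x - C / 2) - C / 2

vals = clipped_identity(np.array([-5.0, 0.5, 5.0]), 4.0)  # -> [-2.0, 0.5, 2.0]
```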
We will now define the specific class of neural operators for which we prove a universal approximation theorem. It is important to note that the class with which we work is a simplification of the one given in (6). In particular, the lifting and projection operators $\mathcal{P}$, $\mathcal{Q}$, together with the final activation function $\sigma_T$, are set to the identity, and the local linear operators $W_0, \dots, W_{T-1}$ are set to zero. In our numerical studies we have in any case typically set $\sigma_T$ to the identity. However, we have found that learning the local operators $\mathcal{P}$, $\mathcal{Q}$ and $W_0, \dots, W_{T-1}$ is beneficial in practice; extending the universal approximation theorems given here to explain this benefit would be an important but non-trivial development of the analysis we present here.
Let $D \subset \mathbb{R}^d$ be a domain. For any $\sigma \in \mathbb{A}_0$, we define the set of affine kernel integral operators by

$$\begin{aligned} \mathsf{KI}(\sigma; D, \mathbb{R}^{d_1}, \mathbb{R}^{d_2}) \coloneqq \Big\{ v \mapsto \int_D \kappa(\cdot, y)\, v(y)\, \mathrm{d}y + b :\; & \kappa \in \mathsf{NN}_{n_1}(\sigma; \mathbb{R}^d \times \mathbb{R}^d, \mathbb{R}^{d_2 \times d_1}), \\ & b \in \mathsf{NN}_{n_2}(\sigma; \mathbb{R}^d, \mathbb{R}^{d_2}),\ n_1, n_2 \in \mathbb{N} \Big\}, \end{aligned}$$

for any $d_1, d_2 \in \mathbb{N}$. Clearly, since $\sigma \in \mathbb{A}_0$, any $K \in \mathsf{KI}(\sigma; D, \mathbb{R}^{d_1}, \mathbb{R}^{d_2})$ acts as $K : L^p(D; \mathbb{R}^{d_1}) \to L^p(D; \mathbb{R}^{d_2})$ for any $1 \leq p \leq \infty$ since $\kappa \in C(\bar{D} \times \bar{D}; \mathbb{R}^{d_2 \times d_1})$ and $b \in C(\bar{D}; \mathbb{R}^{d_2})$. For any $L \in \mathbb{N}_{\geq 2}$, $d_a, d_u \in \mathbb{N}$, $D \subset \mathbb{R}^d$, $D' \subset \mathbb{R}^{d'}$ domains, and $\sigma_1 \in \mathbb{A}_0^L$, $\sigma_2, \sigma_3 \in \mathbb{A}_0$, we define the set of $L$-layer neural operators by

$$\begin{aligned} \mathsf{NO}_L(\sigma_1, \sigma_2, \sigma_3; D, D', \mathbb{R}^{d_a}, \mathbb{R}^{d_u}) \coloneqq \Big\{ & a \mapsto \int_D \kappa_L(\cdot, y) \big( K_{L-1} \sigma_1( \cdots K_2 \sigma_1(K_1(K_0 a)) \cdots ) \big)(y)\, \mathrm{d}y : \\ & K_0 \in \mathsf{KI}(\sigma_2; D, \mathbb{R}^{d_a}, \mathbb{R}^{d_1}), \dots, K_{L-1} \in \mathsf{KI}(\sigma_2; D, \mathbb{R}^{d_{L-1}}, \mathbb{R}^{d_L}), \\ & \kappa_L \in \mathsf{NN}_n(\sigma_3; \mathbb{R}^{d'} \times \mathbb{R}^d, \mathbb{R}^{d_u \times d_L}),\ d_1, \dots, d_L, n \in \mathbb{N} \Big\}. \end{aligned}$$
When $d_a = d_u = 1$, we will simply write $\mathsf{NO}_L(\sigma_1, \sigma_2, \sigma_3; D, D')$. Since $\sigma_1$ is linearly bounded, we can use a result about compositions of maps in $L^p$ spaces, such as (Dudley and Norvaiša, 2010, Theorem 7.13), to conclude that any $G \in \mathsf{NO}_L(\sigma_1, \sigma_2, \sigma_3; D, D', \mathbb{R}^{d_a}, \mathbb{R}^{d_u})$ acts as $G : L^p(D; \mathbb{R}^{d_a}) \to L^p(D'; \mathbb{R}^{d_u})$. Note that it is only in the last layer that we transition from functions defined over the domain $D$ to functions defined over the domain $D'$.
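Numerically, each layer acts through a quadrature rule, and the discretization invariance established below (Theorem 8) is visible already for a single fixed kernel: evaluating the same integral operator on a coarse and a fine grid gives outputs that agree up to quadrature error. The kernel and input below are illustrative stand-ins for learned components.

```python
import numpy as np

def apply_kernel_op(kappa, v_fn, grid):
    """Riemann (midpoint) discretization of (Kv)(x) = int_0^1 kappa(x, y) v(y) dy,
    evaluated at the grid points themselves."""
    h = 1.0 / len(grid)
    K = kappa(grid[:, None], grid[None, :])     # (J, J) kernel matrix
    return K @ v_fn(grid) * h

kappa = lambda x, y: np.cos(np.pi * (x - y))    # fixed illustrative kernel
v = lambda y: y ** 2
coarse = np.linspace(0.0, 1.0, 64, endpoint=False) + 0.5 / 64      # midpoints
fine = np.linspace(0.0, 1.0, 1024, endpoint=False) + 0.5 / 1024
out_coarse = apply_kernel_op(kappa, v, coarse)
out_fine = apply_kernel_op(kappa, v, fine)
```

The two outputs sample the same underlying function; nothing about the operator changes with the grid.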
When the input space of an operator of interest is $C^m(\bar{D})$, for $m \in \mathbb{N}$, we will need to take in derivatives explicitly as they cannot be learned using kernel integration as employed in the current construction given in Lemma 30; note that this is not the case for $W^{m,p}(D)$ as shown in Lemma 28. We will therefore define the set of $m$-th order neural operators by

$$\begin{aligned} \mathsf{NO}_L^m(\sigma_1, \sigma_2, \sigma_3; D, D', \mathbb{R}^{d_a}, \mathbb{R}^{d_u}) \coloneqq \Big\{ & (\partial^{\alpha_1} a, \dots, \partial^{\alpha_{J_m}} a) \mapsto G(\partial^{\alpha_1} a, \dots, \partial^{\alpha_{J_m}} a) : \\ & G \in \mathsf{NO}_L(\sigma_1, \sigma_2, \sigma_3; D, D', \mathbb{R}^{J_m d_a}, \mathbb{R}^{d_u}) \Big\}, \end{aligned}$$

where $\alpha_1, \dots, \alpha_{J_m} \in \mathbb{N}_0^d$ is an enumeration of the set $\{\alpha \in \mathbb{N}_0^d : 0 \leq |\alpha|_1 \leq m\}$. Since we only use the $m$-th order operators when dealing with spaces of continuous functions, each element of $\mathsf{NO}_L^m$ can be thought of as a mapping from a product of spaces of the form $C^{m - |\alpha_j|}(\bar{D}; \mathbb{R}^{d_a})$, for $j \in \{1, \dots, J_m\}$, to an appropriate Banach space of interest.
8.2 Discretization Invariance
Given the construction above and the definition of discretization invariance in Definition 4, in the following we prove that neural operators are discretization-invariant deep learning models.
Theorem 8

Let $D \subset \mathbb{R}^d$ and $D' \subset \mathbb{R}^{d'}$ be two domains for some $d, d' \in \mathbb{N}$. Let $\mathcal{A}$ and $\mathcal{U}$ be real-valued Banach function spaces on $D$ and $D'$ respectively. Suppose that $\mathcal{A}$ and $\mathcal{U}$ can be continuously embedded in $C(\bar{D})$ and $C(\bar{D'})$ respectively, and that $\sigma_1, \sigma_2, \sigma_3 \in C(\mathbb{R})$. Then, for any $L \in \mathbb{N}$, the set of neural operators $\mathsf{NO}_L(\sigma_1, \sigma_2, \sigma_3; D, D')$, whose elements are viewed as maps $\mathcal{A} \to \mathcal{U}$, is discretization-invariant.
The proof, provided in Appendix E, constructs a sequence of finite-dimensional maps which approximate the neural operator by Riemann sums and shows uniform convergence of the error over compact sets of $\mathcal{A}$.
8.3 Approximation Theorems

Figure 16: A schematic overview of the maps used to approximate $\mathcal{G}^\dagger$: inputs are first mapped linearly to a finite-dimensional representation by $F : \mathcal{A} \to \mathbb{R}^J$, then nonlinearly between finite-dimensional spaces by $\varphi : \mathbb{R}^J \to \mathbb{R}^{J'}$, and finally expanded into the output space by $G : \mathbb{R}^{J'} \to \mathcal{U}$.
Let $\mathcal{A}$ and $\mathcal{U}$ be Banach function spaces on the domains $D \subset \mathbb{R}^d$ and $D' \subset \mathbb{R}^{d'}$ respectively. We will work in the setting where functions in $\mathcal{A}$ or $\mathcal{U}$ are real-valued, but note that all results generalize in a straightforward fashion to the vector-valued setting. We are interested in the approximation of nonlinear operators $\mathcal{G}^\dagger : \mathcal{A} \to \mathcal{U}$ by neural operators. We will make the following assumptions on the spaces $\mathcal{A}$ and $\mathcal{U}$.
Assumption 9

Let $D \subset \mathbb{R}^d$ be a Lipschitz domain for some $d \in \mathbb{N}$. One of the following holds:

1. $\mathcal{A} = L^{p_1}(D)$ for some $1 \leq p_1 < \infty$,
2. $\mathcal{A} = W^{m_1, p_1}(D)$ for some $1 \leq p_1 < \infty$ and $m_1 \in \mathbb{N}$,
3. $\mathcal{A} = C(\bar{D})$.
Assumption 10

Let $D' \subset \mathbb{R}^{d'}$ be a Lipschitz domain for some $d' \in \mathbb{N}$. One of the following holds:

1. $\mathcal{U} = L^{p_2}(D')$ for some $1 \leq p_2 < \infty$, and $m_2 = 0$,
2. $\mathcal{U} = W^{m_2, p_2}(D')$ for some $1 \leq p_2 < \infty$ and $m_2 \in \mathbb{N}$,
3. $\mathcal{U} = C^{m_2}(\bar{D'})$ and $m_2 \in \mathbb{N}_0$.
We first show that neural operators are dense in the continuous operators $\mathcal{G}^\dagger : \mathcal{A} \to \mathcal{U}$ in the topology of uniform convergence on compacta. The proof proceeds by making three main approximations, shown schematically in Figure 16. First, inputs are mapped to a finite-dimensional representation through a set of appropriate linear functionals on $\mathcal{A}$, denoted by $F : \mathcal{A} \to \mathbb{R}^J$. We show in Lemmas 21 and 23 that, when $\mathcal{A}$ satisfies Assumption 9, elements of $\mathcal{A}^*$ can be approximated by integration against smooth functions. This generalizes the idea from (Chen and Chen, 1995), where functionals on $C(\bar{D})$ are approximated by a weighted sum of Dirac measures. We then show in Lemma 25 that, by lifting the dimension, this representation can be approximated by a single element of $\mathsf{KI}$. Second, the representation is nonlinearly mapped to a new representation by a continuous function $\varphi : \mathbb{R}^J \to \mathbb{R}^{J'}$ which finite-dimensionalizes the action of $\mathcal{G}^\dagger$. We show, in Lemma 28, that this map can be approximated by a neural operator by reducing the architecture to that of a standard neural network. Third, the new representation is used as the coefficients of an expansion onto representers of $\mathcal{U}$, the map denoted $G : \mathbb{R}^{J'} \to \mathcal{U}$, which we show can be approximated by a single $\mathsf{KI}$ layer in Lemma 27 using density results for continuous functions. The structure of the overall approximation is similar to (Bhattacharya et al., 2020) but generalizes the ideas from Hilbert spaces to the spaces in Assumptions 9 and 10. Statements and proofs of the lemmas used in the theorems are given in the appendices.
Theorem 11

Let Assumptions 9 and 10 hold and suppose $\mathcal{G}^\dagger : \mathcal{A} \to \mathcal{U}$ is continuous. Let $\sigma_1 \in \mathbb{A}_0^L$, $\sigma_2 \in \mathbb{A}_0$, and $\sigma_3 \in \mathbb{A}_{m_2}$. Then for any compact set $K \subset \mathcal{A}$ and $0 < \epsilon \leq 1$, there exists a number $L \in \mathbb{N}$ and a neural operator $\mathcal{G} \in \mathsf{NO}_L(\sigma_1, \sigma_2, \sigma_3; D, D')$ such that

$$\sup_{a \in K} \|\mathcal{G}^\dagger(a) - \mathcal{G}(a)\|_{\mathcal{U}} \leq \epsilon.$$

Furthermore, if $\mathcal{U}$ is a Hilbert space, $\sigma_1 \in \mathsf{BA}$, and, for some $M > 0$, we have that $\|\mathcal{G}^\dagger(a)\|_{\mathcal{U}} \leq M$ for all $a \in \mathcal{A}$, then $\mathcal{G}$ can be chosen so that

$$\|\mathcal{G}(a)\|_{\mathcal{U}} \leq 4M, \quad \forall a \in \mathcal{A}.$$
The proof is provided in Appendix F. In the following theorem, we extend this result to the case $\mathcal{A} = C^{m_1}(\bar{D})$, showing density of the $m_1$-th order neural operators.
Theorem 12

Let $D \subset \mathbb{R}^d$ be a Lipschitz domain, $m_1 \in \mathbb{N}$, define $\mathcal{A} \coloneqq C^{m_1}(\bar{D})$, suppose Assumption 10 holds, and assume that $\mathcal{G}^\dagger : \mathcal{A} \to \mathcal{U}$ is continuous. Let $\sigma_1 \in \mathbb{A}_0^L$, $\sigma_2 \in \mathbb{A}_0$, and $\sigma_3 \in \mathbb{A}_{m_2}$. Then for any compact set $K \subset \mathcal{A}$ and $0 < \epsilon \leq 1$, there exists a number $L \in \mathbb{N}$ and a neural operator $\mathcal{G} \in \mathsf{NO}_L^{m_1}(\sigma_1, \sigma_2, \sigma_3; D, D')$ such that

$$\sup_{a \in K} \|\mathcal{G}^\dagger(a) - \mathcal{G}(a)\|_{\mathcal{U}} \leq \epsilon.$$

Furthermore, if $\mathcal{U}$ is a Hilbert space, $\sigma_1 \in \mathsf{BA}$, and, for some $M > 0$, we have that $\|\mathcal{G}^\dagger(a)\|_{\mathcal{U}} \leq M$ for all $a \in \mathcal{A}$, then $\mathcal{G}$ can be chosen so that

$$\|\mathcal{G}(a)\|_{\mathcal{U}} \leq 4M, \quad \forall a \in \mathcal{A}.$$

Proof The proof follows as in Theorem 11, replacing the use of Lemma 32 with Lemma 33.
With these results in hand, we show density of neural operators in the space $L^2_\mu(\mathcal{A}; \mathcal{U})$, where $\mu$ is a probability measure and $\mathcal{U}$ is a separable Hilbert space. The Hilbertian structure of $\mathcal{U}$ allows us to uniformly control the norm of the approximation due to the isomorphism with $\ell^2$, as shown in Theorem 11. It remains an interesting future direction to obtain similar results for Banach spaces. The proof follows the ideas in (Lanthaler et al., 2021), where similar results are obtained for DeepONet(s) on $L^2(D)$ by using Lusin's theorem to restrict the approximation to a large enough compact set and exploit the decay of $\mu$ outside of it. Bhattacharya et al. (2020) also employ a similar approach but explicitly construct the necessary compact set after finite-dimensionalizing.
Theorem 13

Let $D' \subset \mathbb{R}^{d'}$ be a Lipschitz domain, $m_2 \in \mathbb{N}_0$, and suppose Assumption 9 holds. Let $\mu$ be a probability measure on $\mathcal{A}$ and suppose $\mathcal{G}^\dagger : \mathcal{A} \to H^{m_2}(D')$ is $\mu$-measurable and $\mathcal{G}^\dagger \in L^2_\mu(\mathcal{A}; H^{m_2}(D'))$. Let $\sigma_1 \in \mathbb{A}_0^L \cap \mathsf{BA}$, $\sigma_2 \in \mathbb{A}_0$, and $\sigma_3 \in \mathbb{A}_{m_2}$. Then for any $0 < \epsilon \leq 1$, there exists a number $L \in \mathbb{N}$ and a neural operator $\mathcal{G} \in \mathsf{NO}_L(\sigma_1, \sigma_2, \sigma_3; D, D')$ such that

$$\|\mathcal{G}^\dagger - \mathcal{G}\|_{L^2_\mu(\mathcal{A}; H^{m_2}(D'))} \leq \epsilon.$$
The proof is provided in Appendix G. In the following we extend this result to the case $\mathcal{A} = C^{m_1}(\bar{D})$ using the $m_1$-th order neural operators.
Theorem 14

Let $D \subset \mathbb{R}^d$ be a Lipschitz domain, $m_1 \in \mathbb{N}$, define $\mathcal{A} \coloneqq C^{m_1}(\bar{D})$, and suppose Assumption 10 holds. Let $\mu$ be a probability measure on $C^{m_1}(\bar{D})$, let $\mathcal{G}^\dagger : C^{m_1}(\bar{D}) \to \mathcal{U}$ be $\mu$-measurable, and suppose $\mathcal{G}^\dagger \in L^2_\mu(C^{m_1}(\bar{D}); \mathcal{U})$. Let $\sigma_1 \in \mathbb{A}_0^L \cap \mathsf{BA}$, $\sigma_2 \in \mathbb{A}_0$, and $\sigma_3 \in \mathbb{A}_{m_2}$. Then for any $0 < \epsilon \leq 1$, there exists a number $L \in \mathbb{N}$ and a neural operator $\mathcal{G} \in \mathsf{NO}_L^{m_1}(\sigma_1, \sigma_2, \sigma_3; D, D')$ such that

$$\|\mathcal{G}^\dagger - \mathcal{G}\|_{L^2_\mu(C^{m_1}(\bar{D}); \mathcal{U})} \leq \epsilon.$$

Proof The proof follows as in Theorem 13 by replacing the use of Theorem 11 with Theorem 12.
9 Literature Review
We outline the major neural network-based approaches for the solution of PDEs.
Finite-dimensional Operators.
An immediate approach to approximating $\mathcal{G}^\dagger$ is to parameterize it as a deep convolutional neural network (CNN) between the finite-dimensional Euclidean spaces on which the data is discretized, i.e. $\mathcal{G} : \mathbb{R}^K \times \Theta \to \mathbb{R}^K$ (Guo et al., 2016; Zhu and Zabaras, 2018; Adler and Oktem, 2017; Bhatnagar et al., 2019; Kutyniok et al., 2022). Khoo et al. (2021) concerns a similar setting, but with output space $\mathbb{R}$. Such approaches are, by definition, not mesh-independent and need modifications to the architecture for different resolutions and discretizations of $D$ in order to achieve consistent error (if at all possible). We demonstrate this issue numerically in Section 7. Furthermore, these approaches are limited to the discretization size and geometry of the training data, and hence it is not possible to query solutions at new points in the domain. In contrast, for our method we show in Section 7 both invariance of the error to grid resolution and the ability to transfer the solution between meshes. Ummenhofer et al. (2020) proposed a continuous convolution network for fluid problems, where off-grid points are sampled and linearly interpolated. However, the continuous convolution method is still constrained by the underlying grid, which prevents generalization to higher resolutions. Similarly, to obtain finer-resolution solutions, Jiang et al. (2020) proposed learning super-resolution with a U-Net structure for fluid mechanics problems. However, fine-resolution data is needed for training, while neural operators are capable of zero-shot super-resolution with no new data.
DeepONet
A novel operator regression architecture, named DeepONet, was recently proposed by Lu et al. (2019, 2021a); it builds an iterated or deep structure on top of the shallow architecture proposed in Chen and Chen (1995). The architecture consists of two neural networks: a branch net applied to the input functions and a trunk net applied to the querying locations in the output space. The original work of Chen and Chen (1995) provides a universal approximation theorem, and more recently Lanthaler et al. (2021) developed an error estimate for DeepONet itself. The standard DeepONet structure is a linear approximation of the target operator, where the trunk net and branch net learn the coefficients and basis. The neural operator setting, on the other hand, is heavily inspired by advances in deep learning and is a non-linear approximation, which makes it constructively more expressive. A detailed discussion of DeepONet is provided in Section 5.1, and a numerical comparison to DeepONet is given in Section 7.2.
Physics Informed Neural Networks (PINNs), Deep Ritz Method (DRM), and Deep Galerkin Method (DGM).
A different approach is to directly parameterize the solution $u$ as a neural network $u : \bar{D} \times \Theta \to \mathbb{R}$ (E and Yu, 2018; Raissi et al., 2019; Sirignano and Spiliopoulos, 2018; Bar and Sochen, 2019; Smith et al., 2020; Pan and Duraisamy, 2020; Beck et al., 2021). This approach is designed to model one specific instance of the PDE, not the solution operator. It is mesh-independent, but for any given new parameter coefficient function $a \in \mathcal{A}$, one would need to train a new neural network $u_a$, which is computationally costly and time consuming. Such an approach closely resembles classical methods such as finite elements, replacing the linear span of a finite set of local basis functions with the space of neural networks.
ML-based Hybrid Solvers
Similarly, another line of work proposes to enhance existing numerical solvers with neural networks by building hybrid models (Pathak et al., 2020; Um et al., 2020a; Greenfeld et al., 2019). These approaches suffer from the same computational issue as classical methods: one needs to solve an optimization problem for every new parameter, similar to the PINNs setting. Furthermore, the approaches are limited to a setting in which the underlying PDE is known; purely data-driven learning of a map between spaces of functions is not possible.
Reduced Basis Methods.
Our methodology most closely resembles the classical reduced basis method (RBM) (DeVore, 2014) or the method of Cohen and DeVore (2015). The method introduced here, along with the contemporaneous work introduced in the papers (Bhattacharya et al., 2020; Nelsen and Stuart, 2021; Opschoor et al., 2020; Schwab and Zech, 2019; O'Leary-Roseberry et al., 2020; Lu et al., 2019; Fresca and Manzoni, 2022), are, to the best of our knowledge, amongst the first practical supervised learning methods designed to learn maps between infinite-dimensional spaces. Our methodology addresses the mesh-dependent nature of the approach in the papers (Guo et al., 2016; Zhu and Zabaras, 2018; Adler and Oktem, 2017; Bhatnagar et al., 2019) by producing a single set of network parameters that can be used with different discretizations. Furthermore, it has the ability to transfer solutions between meshes, and indeed between different discretization methods. Moreover, it needs only to be trained once on the equation set $\{a_j, u_j\}_{j=1}^N$. Then, obtaining a solution for a new $a \sim \mu$ only requires a forward pass of the network, alleviating the major computational issues incurred in (E and Yu, 2018; Raissi et al., 2019; Herrmann et al., 2020; Bar and Sochen, 2019), where a different network would need to be trained for each input parameter. Lastly, our method requires no knowledge of the underlying PDE: it is purely data-driven and therefore non-intrusive. Indeed, the true map can be treated as a black-box, perhaps to be learned from experimental data or from the output of a costly computer simulation, not necessarily from a PDE.
Continuous Neural Networks.
Using continuity as a tool to design and interpret neural networks is gaining currency in the machine learning community, and the formulation of ResNet as a continuous time process over the depth parameter is a powerful example of this (Haber and Ruthotto, 2017; E, 2017). The concept of defining neural networks in infinite-dimensional spaces is a central problem that has long been studied (Williams, 1996; Neal, 1996; Roux and Bengio, 2007; Globerson and Livni, 2016; Guss, 2016). The general idea is to take the infinite-width limit which yields a non-parametric method and has connections to Gaussian Process Regression (Neal, 1996; Matthews et al., 2018; Garriga-Alonso et al., 2018), leading to the introduction of deep Gaussian processes (Damianou and Lawrence, 2013; Dunlop et al., 2018). Thus far, such methods have not yielded efficient numerical algorithms that can parallel the success of convolutional or recurrent neural networks for the problem of approximating mappings between finite dimensional spaces. Despite the superficial similarity with our proposed work, this body of work differs substantially from what we are proposing: in our work we are motivated by the continuous dependence of the data, in the input or output spaces, in spatial or spatio-temporal variables; in contrast the work outlined in this paragraph uses continuity in an artificial algorithmic depth or width parameter to study the network architecture when the depth or width approaches infinity, but the input and output spaces remain of fixed finite dimension.
Nystrรถm Approximation, GNNs, and Graph Neural Operators (GNOs).
The graph neural operator (Section 4.1) has an underlying Nyström approximation formulation (Nyström, 1930) which links different grids to a single set of network parameters. This perspective relates our continuum approach to Graph Neural Networks (GNNs). GNNs are a recently developed class of neural networks that apply to graph-structured data; they have been used in a variety of applications. Graph networks incorporate an array of techniques from neural network design, such as graph convolution, edge convolution, attention, and graph pooling (Kipf and Welling, 2016; Hamilton et al., 2017; Gilmer et al., 2017; Veličković et al., 2017; Murphy et al., 2018). GNNs have also been applied to the modeling of physical phenomena such as molecules (Chen et al., 2019) and rigid body systems (Battaglia et al., 2018), since these problems exhibit a natural graph interpretation: the particles are the nodes and the interactions are the edges. The work of Alet et al. (2019) performs an initial study that employs graph networks on the problem of learning solutions to Poisson's equation, among other physical applications. They propose an encoder-decoder setting, constructing graphs in the latent space and utilizing message passing between the encoder and decoder. However, their model uses a nearest-neighbor structure that is unable to capture non-local dependencies as the mesh size is increased. In contrast, we directly construct a graph in which the nodes are located on the spatial domain of the output function. Through message passing, we are then able to directly learn the kernel of the network which approximates the PDE solution. When querying a new location, we simply add a new node to our spatial graph and connect it to the existing nodes, avoiding interpolation error by leveraging the power of the Nyström extension for integral operators.
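In code, the Nyström viewpoint amounts to replacing the kernel integral (Kv)(x) = ∫ κ(x, y) v(y) dy with an average over an arbitrary, possibly unstructured, set of sample nodes. The NumPy sketch below is a minimal illustration, with a fixed hypothetical kernel standing in for the learned kernel network of the GNO; querying a new location simply adds a row to the kernel matrix:

```python
import numpy as np

def nystrom_kernel_apply(kappa, x_query, y_nodes, v_nodes):
    """Nystrom approximation of (Kv)(x) = integral of kappa(x, y) v(y) dy,
    averaging over sample nodes y_nodes carrying function values v_nodes."""
    K = kappa(x_query[:, None], y_nodes[None, :])  # (M, N) kernel matrix
    return K @ v_nodes / len(y_nodes)              # quadrature / Monte Carlo average

# Illustrative fixed kernel; in the GNO, kappa would be a small learned network.
kappa = lambda x, y: np.exp(-np.abs(x - y))
y = np.random.default_rng(0).uniform(0, 1, 200)    # unstructured nodes in [0, 1]
v = np.sin(2 * np.pi * y)                          # input function at the nodes
x = np.linspace(0, 1, 50)                          # arbitrary query locations
u = nystrom_kernel_apply(kappa, x, y, v)           # output function at the queries
```

The same parameters (here, the kernel) act on any sampling of the domain, which is the discretization invariance exploited throughout the paper.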
Low-rank Kernel Decomposition and Low-rank Neural Operators (LNOs).
Low-rank decomposition is a popular technique in kernel methods and Gaussian processes (Kulis et al., 2006; Bach, 2013; Lan et al., 2017; Gardner et al., 2018). We present the low-rank neural operator in Section 4.2, where we structure the kernel network as a product of two factor networks, inspired by Fredholm theory. The low-rank method, while simple, is very efficient and easy to train, especially when the target operator is close to linear. Khoo and Ying (2019) proposed a related neural network with low-rank structure to approximate the inverse of differential operators. The framework of two factor networks is also similar to the trunk and branch networks used in DeepONet (Lu et al., 2019). But in our work, the factor networks are defined on the physical domain and non-local information is accumulated through integration with respect to the Lebesgue measure. In contrast, DeepONet(s) integrate against delta measures at a set of pre-defined nodal points that are usually taken to be the grid on which the data is given. See Section 5.1 for further discussion.
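With a rank-r kernel κ(x, y) = Σ_r ψ_r(x) φ_r(y), the integral operator factorizes as (Kv)(x) = Σ_r ψ_r(x) ∫ φ_r(y) v(y) dy, so the inner products against v are computed once and reused. A minimal NumPy sketch, with hypothetical closed-form factor functions in place of the learned factor networks:

```python
import numpy as np

def low_rank_apply(psi, phi, x_query, y_nodes, v_nodes):
    """Apply a rank-r kernel integral operator in factored form.
    Cost is O(r(M + N)) instead of O(MN) for the dense kernel matrix."""
    coeffs = phi(y_nodes).T @ v_nodes / len(y_nodes)  # (r,) quadrature for <phi_r, v>
    return psi(x_query) @ coeffs                      # expand back at query points

# Illustrative rank-2 factors; in the low-rank neural operator psi and phi
# are themselves neural networks (these closed forms are placeholders).
psi = lambda x: np.stack([np.sin(np.pi * x), np.cos(np.pi * x)], axis=-1)
phi = lambda y: np.stack([np.ones_like(y), y], axis=-1)

y = np.linspace(0, 1, 100)
v = y ** 2
x = np.linspace(0, 1, 7)
u = low_rank_apply(psi, phi, x, y, v)

# Agrees with assembling the full (7, 100) kernel matrix explicitly:
K_full = psi(x) @ phi(y).T
assert np.allclose(u, K_full @ v / len(y))
```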
Multipole, Multi-resolution Methods, and Multipole Graph Neural Operators (MGNOs).
To efficiently capture long-range interactions, multi-scale methods such as the classical fast multipole method (FMM) have been developed (Greengard and Rokhlin, 1997). Based on the assumption that long-range interactions decay quickly, the FMM decomposes the kernel matrix into different ranges and hierarchically imposes low-rank structures on the long-range components (hierarchical matrices) (Börm et al., 2003). This decomposition can be viewed as a specific form of the multi-resolution matrix factorization of the kernel (Kondor et al., 2014; Börm et al., 2003). For example, the works of Fan et al. (2019c, b); He and Xu (2019) propose a similar multipole expansion for solving parametric PDEs on structured grids. However, the classical FMM requires nested grids as well as the explicit form of the PDE. In Section 4.3, we propose the multipole graph neural operator (MGNO), generalizing this idea to arbitrary graphs in the data-driven setting so that the corresponding graph neural networks can learn discretization-invariant solution operators which are fast and can work on complex geometries.
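The core idea of hierarchical matrices can be caricatured in one level: keep the near-diagonal (short-range) interactions exactly and compress the remaining long-range block to low rank. The sketch below uses an SVD purely for illustration; in the FMM and in the MGNO the far-field factors are constructed (or learned) hierarchically over multiple levels rather than via a dense factorization:

```python
import numpy as np

def short_plus_low_rank(K, bandwidth, rank):
    """One-level banded + low-rank split: K is approximated by K_short + L @ R,
    where K_short keeps entries within `bandwidth` of the diagonal exactly and
    L, R are rank-`rank` factors of the remaining long-range part."""
    N = K.shape[0]
    near = np.abs(np.subtract.outer(np.arange(N), np.arange(N))) <= bandwidth
    K_short = np.where(near, K, 0.0)            # exact near-field
    U, s, Vt = np.linalg.svd(K - K_short)       # illustrative far-field compression
    return K_short, U[:, :rank] * s[:rank], Vt[:rank]

x = np.linspace(0, 1, 128)
K = np.exp(-8.0 * np.abs(x[:, None] - x[None, :]))   # decaying kernel matrix
K_short, L, R = short_plus_low_rank(K, bandwidth=8, rank=6)
v = np.cos(2 * np.pi * x)
u_fast = K_short @ v + L @ (R @ v)   # banded + low-rank matvecs, cheap once factored
u_full = K @ v                       # dense O(N^2) reference
```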
Fourier Transform, Spectral Methods, and Fourier Neural Operators (FNOs).
The Fourier transform is frequently used in spectral methods for solving differential equations since differentiation is equivalent to multiplication in the Fourier domain. Fourier transforms have also played an important role in the development of deep learning. They are used in theoretical work, such as the proof of the neural network universal approximation theorem (Hornik et al., 1989) and related results for random feature methods (Rahimi and Recht, 2008); empirically, they have been used to speed up convolutional neural networks (Mathieu et al., 2013). Neural network architectures involving the Fourier transform or the use of sinusoidal activation functions have also been proposed and studied (Bengio et al., 2007; Mingo et al., 2004; Sitzmann et al., 2020). Recently, some spectral methods for PDEs have been extended to neural networks (Fan et al., 2019a, c; Kashinath et al., 2020). In Section 4.4, we build on these works by proposing the Fourier neural operator architecture defined directly in Fourier space with quasi-linear time complexity and state-of-the-art approximation capabilities.
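A single Fourier integral layer on a uniform 1D grid can be sketched in a few lines: transform the input, keep the lowest frequencies, multiply each kept mode by a learned complex weight, and invert. Here the weights are random placeholders for the learned parameters:

```python
import numpy as np

def fourier_layer(v, R, modes):
    """Sketch of one FNO layer for real input on a uniform grid: FFT,
    truncate to `modes` frequencies, multiply per mode by weights R,
    inverse FFT. Cost is O(n log n) in the number of grid points n."""
    v_hat = np.fft.rfft(v)
    out_hat = np.zeros_like(v_hat)
    out_hat[:modes] = R * v_hat[:modes]          # learned mode-wise multiplication
    return np.fft.irfft(out_hat, n=len(v))

rng = np.random.default_rng(0)
modes = 16
R = rng.standard_normal(modes) + 1j * rng.standard_normal(modes)  # would be learned

# The same weights act at any resolution: evaluate at 256 and 512 points.
x_c = np.linspace(0, 1, 256, endpoint=False)
x_f = np.linspace(0, 1, 512, endpoint=False)
u_c = fourier_layer(np.sin(2 * np.pi * x_c), R, modes)
u_f = fourier_layer(np.sin(2 * np.pi * x_f), R, modes)
```

Because the weights act mode-wise, for this band-limited input `u_f[::2]` coincides with `u_c` at the shared grid points, illustrating the discretization invariance of the parameterization.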
Sources of Error.
In this paper we will study the error resulting from approximating an operator (a mapping between Banach spaces) from within a class of finitely-parameterized operators. We show that the resulting error, expressed in terms of universal approximation of operators over a compact set or in terms of a resulting risk, can be driven to zero by increasing the number of parameters and refining the approximations inherent in the neural operator architecture. In practice there will be two other sources of approximation error: firstly, from the discretization of the data; and secondly, from the use of empirical risk minimization over a finite data set to determine the parameters. Balancing all three sources of error is key to making algorithms efficient. However, we do not study these other two sources of error in this work. Furthermore, we do not study how the number of parameters in our approximation grows as the error tolerance is refined. Generally, this growth may be super-exponential, as shown in (Kovachki et al., 2021). However, for certain classes of operators and related approximation methods, it is possible to beat the curse of dimensionality; we refer the reader to the works (Lanthaler et al., 2021; Kovachki et al., 2021) for detailed analyses demonstrating this. Finally, we also emphasize that there is a potential source of error from the optimization procedure which attempts to minimize the empirical risk: it may not achieve the global minimum. Analysis of this error in the context of operator approximation has not been undertaken.
10Conclusions
We have introduced the concept of the Neural Operator, the goal being to construct a neural network architecture adapted to the problem of mapping elements of one function space into elements of another function space. The network is comprised of three steps which, in turn, (i) extract features from the input functions, (ii) iterate a recurrent neural network on feature space, defined through the composition of a sigmoid function and a nonlocal operator, and (iii) apply a final mapping from feature space into the output function.
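The steps (i)–(iii) above can be sketched end-to-end as follows, with random placeholder weights standing in for learned parameters and a fixed hypothetical kernel standing in for the learned nonlocal operator:

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)  # nonlinearity standing in for the activation

def neural_operator_forward(a_vals, grid, params, kernel):
    """Schematic forward pass: (i) lift pointwise to a feature space,
    (ii) iterate v <- sigma(W v + K v) with a nonlocal kernel integral K,
    (iii) project pointwise back to the scalar output function."""
    P, Ws, Q = params
    v = a_vals[:, None] @ P                                   # (i) lift: R -> R^d
    Kmat = kernel(grid[:, None], grid[None, :]) / len(grid)   # Nystrom kernel matrix
    for W in Ws:                                              # (ii) iterate
        v = relu(v @ W + Kmat @ v)
    return (v @ Q).squeeze(-1)                                # (iii) project: R^d -> R

d, depth, n = 16, 3, 64
params = (rng.standard_normal((1, d)) / d,                    # random placeholders;
          [rng.standard_normal((d, d)) / d for _ in range(depth)],  # learned in practice
          rng.standard_normal((d, 1)) / d)
grid = np.linspace(0, 1, n)
u = neural_operator_forward(np.sin(np.pi * grid), grid, params,
                            lambda x, y: np.exp(-np.abs(x - y)))
```

Nothing in the parameters refers to the grid, so the same `params` can be evaluated on any discretization of the domain.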
We have studied four nonlocal operators for step (ii): one based on graph kernel networks, one based on low-rank decomposition, one based on a multi-level graph structure, and one based on convolution in Fourier space. The resulting network architectures are mesh-free, and our numerical experiments demonstrate that they have the desired property of being able to train and generalize on different meshes. This is because the networks learn the mapping between infinite-dimensional function spaces, which can then be shared among approximations at different levels of discretization. A further advantage of the integral operator approach is that data may be incorporated on unstructured grids using the Nyström approximation; these methods, however, are quadratic in the number of discretization points. We describe variants of this methodology, using low-rank and multiscale ideas, to reduce this complexity. The Fourier approach, on the other hand, leads directly to fast methods with log-linear complexity in the number of discretization points, provided structured grids are used. We demonstrate that our methods achieve competitive performance with other mesh-free approaches developed in the numerical analysis community. Specifically, the Fourier neural operator achieves the best numerical performance among our experiments, potentially due to the smoothness of the solution function and the underlying uniform grids. The methods developed in the numerical analysis community are less flexible than the approach we introduce here, relying heavily on the structure of an underlying PDE mapping input to output; our method is entirely data-driven.
10.1Future Directions
We foresee three main directions in which this work will develop: firstly, as a method to speed up scientific computing tasks which involve repeated evaluation of a mapping between spaces of functions, following the example of the Bayesian inverse problem of Section 7.3.4, or when the underlying model is unknown, as in computer vision or robotics; secondly, the development of more advanced methodologies beyond the four approximation schemes presented in Section 4 that are more efficient or better suited to specific situations; and thirdly, the development of an underpinning theory which captures the expressive power and approximation error properties of the proposed neural network, following Section 8, and quantifies the computational complexity required to achieve a given error.
10.1.1New Applications
The proposed neural operator is a black-box surrogate model for function-to-function mappings. It naturally fits into solving PDEs for physics and engineering problems. In this paper we mainly studied three partial differential equations: Darcy flow, Burgers' equation, and the Navier-Stokes equation, which cover a broad range of scenarios. Due to its black-box structure, the neural operator is easily applied to other problems. We foresee applications to more challenging turbulent flows, such as those arising in subgrid models within climate GCMs, to high-contrast media in geological models generalizing the Darcy model, and to general physics simulation for games and visual effects. The operator setting leads to an efficient and accurate representation, and the resolution-invariant properties make it possible to train on a smaller-resolution dataset and evaluate at arbitrarily high resolution.
The operator learning setting is not restricted to scientific computing. For example, in computer vision, images can naturally be viewed as real-valued functions on 2D domains and videos simply add a temporal structure. Our approach is therefore a natural choice for problems in computer vision where invariance to discretization is crucial. We leave this as an interesting future direction.
10.1.2New Methodologies
Despite their excellent performance, there is still room for improvement upon the current methodologies. For example, the full O(J²) integration method still outperforms the FNO by about 40%, albeit at greater cost. It is of potential interest to develop more advanced integration techniques or approximation schemes that follow the neural operator framework. For example, one can use adaptive graphs or probability estimation in the Nyström approximation. It is also possible to use bases other than the Fourier basis, such as the PCA basis or the Chebyshev basis.
Another direction for new methodologies is to employ the neural operator in other settings. The current problem is set up as a supervised learning problem. Instead, one can combine the neural operator with solvers (Pathak et al., 2020; Um et al., 2020b), augmenting and correcting the solvers to obtain faster and more accurate approximations. Similarly, one can combine operator learning with physics constraints (Wang et al., 2021; Li et al., 2021).
10.1.3Theory
In this work, we develop a universal approximation theory (Section 8) for neural operators. As in the work of Lu et al. (2019) studying universal approximation for DeepONet, we use linear approximation techniques. The power of non-linear approximation (DeVore, 1998), which is likely intrinsic to the success of neural operators in some settings, is still less studied, as discussed in Section 5.1; we note that DeepONet is intrinsically limited by linear approximation properties. For functions between Euclidean spaces, it is well known that, by combining two layers of linear functions with one layer of non-linear activation functions, a neural network can approximate arbitrary continuous functions, and that deep neural networks can be exponentially more expressive than shallow networks (Poole et al., 2016). However, the issues are less clear when it comes to the choice of architecture and the scaling of the number of parameters within neural operators between Banach spaces. The approximation theory of operators is much more complex and challenging than that of functions over Euclidean spaces. It is important to study the class of neural operators with respect to their architecture: what spaces the true solution operators lie in, and which classes of PDEs neural operators can approximate efficiently. We leave these as exciting, but open, research directions.
Acknowledgements
Z. Li gratefully acknowledges the financial support of the Kortschak Scholars, PIMCO Fellows, and Amazon AI4Science Fellows programs. A. Anandkumar is supported in part by a Bren endowed chair. K. Bhattacharya, N. B. Kovachki, B. Liu and A. M. Stuart gratefully acknowledge the financial support of the Army Research Laboratory through Cooperative Agreement Number W911NF-12-0022. Research was sponsored by the Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-12-2-0022. AMS is also supported by NSF (award DMS-1818977). Part of this research was developed while K. Azizzadenesheli was with Purdue University. The authors are grateful to Siddhartha Mishra for his valuable feedback on this work.
The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
The computations presented here were conducted on the Resnick High Performance Cluster at the California Institute of Technology.
References

Aaronson (1997) J. Aaronson. An Introduction to Infinite Ergodic Theory. Mathematical Surveys and Monographs. American Mathematical Society, 1997. ISBN 9780821804940.
Adams and Fournier (2003) R. A. Adams and J. J. Fournier. Sobolev Spaces. Elsevier Science, 2003.
Adler and Oktem (2017) Jonas Adler and Ozan Oktem. Solving ill-posed inverse problems using iterative deep neural networks. Inverse Problems, Nov 2017. doi: 10.1088/1361-6420/aa9581. URL https://doi.org/10.1088%2F1361-6420%2Faa9581.
Albiac and Kalton (2006) Fernando Albiac and Nigel J. Kalton. Topics in Banach Space Theory. Graduate Texts in Mathematics. Springer, 1 edition, 2006.
Alet et al. (2019) Ferran Alet, Adarsh Keshav Jeewajee, Maria Bauza Villalonga, Alberto Rodriguez, Tomas Lozano-Perez, and Leslie Kaelbling. Graph element networks: adaptive, structured computation and memory. In 36th International Conference on Machine Learning. PMLR, 2019. URL http://proceedings.mlr.press/v97/alet19a.html.
Ba et al. (2016) Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
Bach (2013) Francis Bach. Sharp analysis of low-rank kernel matrix approximations. In Conference on Learning Theory, pages 185–209, 2013.
Bar and Sochen (2019) Leah Bar and Nir Sochen. Unsupervised deep learning algorithm for PDE-based forward and inverse problems. arXiv preprint arXiv:1904.05417, 2019.
Battaglia et al. (2018) Peter W Battaglia, Jessica B Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, et al. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261, 2018.
Beck et al. (2021) Christian Beck, Sebastian Becker, Philipp Grohs, Nor Jaafari, and Arnulf Jentzen. Solving the Kolmogorov PDE by means of deep learning. Journal of Scientific Computing, 88(3), 2021.
Belongie et al. (2002) Serge Belongie, Charless Fowlkes, Fan Chung, and Jitendra Malik. Spectral partitioning with indefinite kernels using the Nyström extension. In European Conference on Computer Vision. Springer, 2002.
Bengio et al. (2007) Yoshua Bengio, Yann LeCun, et al. Scaling learning algorithms towards AI. Large-Scale Kernel Machines, 34(5):1–41, 2007.
Bhatnagar et al. (2019) Saakaar Bhatnagar, Yaser Afshar, Shaowu Pan, Karthik Duraisamy, and Shailendra Kaushik. Prediction of aerodynamic flow fields using convolutional neural networks. Computational Mechanics, pages 1–21, 2019.
Bhattacharya et al. (2020) Kaushik Bhattacharya, Bamdad Hosseini, Nikola B Kovachki, and Andrew M Stuart. Model reduction and neural networks for parametric PDEs. arXiv preprint arXiv:2005.03180, 2020.
Bogachev (2007) V. I. Bogachev. Measure Theory, volume 2. Springer-Verlag Berlin Heidelberg, 2007.
Bonito et al. (2020) Andrea Bonito, Albert Cohen, Ronald DeVore, Diane Guignard, Peter Jantsch, and Guergana Petrova. Nonlinear methods for model reduction. arXiv preprint arXiv:2005.02565, 2020.
Börm et al. (2003) Steffen Börm, Lars Grasedyck, and Wolfgang Hackbusch. Hierarchical matrices. Lecture Notes, 21:2003, 2003.
Box (1976) George EP Box. Science and statistics. Journal of the American Statistical Association, 71(356):791–799, 1976.
Boyd (2001) John P Boyd. Chebyshev and Fourier Spectral Methods. Courier Corporation, 2001.
Brown et al. (2020) Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020.
Brudnyi and Brudnyi (2012) Alexander Brudnyi and Yuri Brudnyi. Methods of Geometric Analysis in Extension and Trace Problems, volume 1. Birkhäuser Basel, 2012.
Bruno et al. (2007) Oscar P Bruno, Youngae Han, and Matthew M Pohlman. Accurate, high-order representation of complex three-dimensional surfaces via Fourier continuation analysis. Journal of Computational Physics, 227(2):1094–1125, 2007.
Chandler and Kerswell (2013) Gary J. Chandler and Rich R. Kerswell. Invariant recurrent solutions embedded in a turbulent two-dimensional Kolmogorov flow. Journal of Fluid Mechanics, 722:554–595, 2013.
Chen et al. (2019) Chi Chen, Weike Ye, Yunxing Zuo, Chen Zheng, and Shyue Ping Ong. Graph networks as a universal machine learning framework for molecules and crystals. Chemistry of Materials, 31(9):3564–3572, 2019.
Chen and Chen (1995) Tianping Chen and Hong Chen. Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks, 6(4):911–917, 1995.
Choromanski et al. (2020) Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, et al. Rethinking attention with performers. arXiv preprint arXiv:2009.14794, 2020.
Ciesielski and Domsta (1972) Z. Ciesielski and J. Domsta. Construction of an orthonormal basis in C^m(I^d) and W^m_p(I^d). Studia Mathematica, 41:211–224, 1972.
Cohen and DeVore (2015) Albert Cohen and Ronald DeVore. Approximation of high-dimensional parametric PDEs. Acta Numerica, 2015. doi: 10.1017/S0962492915000033.
Cohen et al. (2020) Albert Cohen, Ronald DeVore, Guergana Petrova, and Przemyslaw Wojtaszczyk. Optimal stable nonlinear approximation. arXiv preprint arXiv:2009.09907, 2020.
Conway (2007) J. B. Conway. A Course in Functional Analysis. Springer-Verlag New York, 2007.
Cotter et al. (2013) S. L. Cotter, G. O. Roberts, A. M. Stuart, and D. White. MCMC methods for functions: modifying old algorithms to make them faster. Statistical Science, 28(3):424–446, Aug 2013. ISSN 0883-4237. doi: 10.1214/13-sts421. URL http://dx.doi.org/10.1214/13-STS421.
Cotter et al. (2009) Simon L Cotter, Massoumeh Dashti, James Cooper Robinson, and Andrew M Stuart. Bayesian inverse problems for functions and applications to fluid mechanics. Inverse Problems, 25(11):115008, 2009.
Damianou and Lawrence (2013) Andreas Damianou and Neil Lawrence. Deep Gaussian processes. In Artificial Intelligence and Statistics, pages 207–215, 2013.
De Hoop et al. (2022) Maarten De Hoop, Daniel Zhengyu Huang, Elizabeth Qian, and Andrew M Stuart. The cost-accuracy trade-off in operator learning with neural networks. Journal of Machine Learning, to appear; arXiv preprint arXiv:2203.13181, 2022.
Devlin et al. (2018) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
DeVore (1998) Ronald A. DeVore. Nonlinear approximation. Acta Numerica, 7:51–150, 1998.
DeVore (2014) Ronald A. DeVore. Chapter 3: The theoretical foundation of reduced basis methods. 2014. doi: 10.1137/1.9781611974829.ch3.
Dosovitskiy et al. (2020) Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
Dudley and Norvaisa (2011) R. Dudley and Rimas Norvaisa. Concrete Functional Calculus, volume 149. 01 2011. ISBN 978-1-4419-6949-1.
Dudley and Norvaiša (2010) R.M. Dudley and R. Norvaiša. Concrete Functional Calculus. Springer Monographs in Mathematics. Springer New York, 2010.
Dugundji (1951) J. Dugundji. An extension of Tietze's theorem. Pacific Journal of Mathematics, 1(3):353–367, 1951.
Dunlop et al. (2018) Matthew M Dunlop, Mark A Girolami, Andrew M Stuart, and Aretha L Teckentrup. How deep are deep Gaussian processes? The Journal of Machine Learning Research, 19(1):2100–2145, 2018.
E (2017) W E. A proposal on machine learning via dynamical systems. Communications in Mathematics and Statistics, 5(1):1–11, 2017.
E (2011) Weinan E. Principles of Multiscale Modeling. Cambridge University Press, Cambridge, 2011.
E and Yu (2018) Weinan E and Bing Yu. The deep Ritz method: a deep learning-based numerical algorithm for solving variational problems. Communications in Mathematics and Statistics, 3 2018. ISSN 2194-6701. doi: 10.1007/s40304-018-0127-z.
Fan et al. (2019a) Yuwei Fan, Cindy Orozco Bohorquez, and Lexing Ying. BCR-Net: a neural network based on the nonstandard wavelet form. Journal of Computational Physics, 384:1–15, 2019a.
Fan et al. (2019b) Yuwei Fan, Jordi Feliu-Faba, Lin Lin, Lexing Ying, and Leonardo Zepeda-Núnez. A multiscale neural network based on hierarchical nested bases. Research in the Mathematical Sciences, 6(2):21, 2019b.
Fan et al. (2019c) Yuwei Fan, Lin Lin, Lexing Ying, and Leonardo Zepeda-Núnez. A multiscale neural network based on hierarchical matrices. Multiscale Modeling & Simulation, 17(4):1189–1213, 2019c.
Fefferman (2007) Charles Fefferman. C^m extension by linear operators. Annals of Mathematics, 166:779–835, 2007.
Fresca and Manzoni (2022) Stefania Fresca and Andrea Manzoni. POD-DL-ROM: enhancing deep learning-based reduced order models for nonlinear parametrized PDEs by proper orthogonal decomposition. Computer Methods in Applied Mechanics and Engineering, 388:114–181, 2022.
Gardner et al. (2018) Jacob R Gardner, Geoff Pleiss, Ruihan Wu, Kilian Q Weinberger, and Andrew Gordon Wilson. Product kernel interpolation for scalable Gaussian processes. arXiv preprint arXiv:1802.08903, 2018.
Garriga-Alonso et al. (2018) Adrià Garriga-Alonso, Carl Edward Rasmussen, and Laurence Aitchison. Deep convolutional networks as shallow Gaussian processes. arXiv e-prints, art. arXiv:1808.05587, Aug 2018.
Gilmer et al. (2017) Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning, 2017.
Globerson and Livni (2016) Amir Globerson and Roi Livni. Learning infinite-layer networks: beyond the kernel trick. CoRR, abs/1606.05316, 2016. URL http://arxiv.org/abs/1606.05316.
Greenfeld et al. (2019) Daniel Greenfeld, Meirav Galun, Ronen Basri, Irad Yavneh, and Ron Kimmel. Learning to optimize multigrid PDE solvers. In International Conference on Machine Learning, pages 2415–2423. PMLR, 2019.
Greengard and Rokhlin (1997) Leslie Greengard and Vladimir Rokhlin. A new version of the fast multipole method for the Laplace equation in three dimensions. Acta Numerica, 6:229–269, 1997.
Grothendieck (1955) A Grothendieck. Produits tensoriels topologiques et espaces nucléaires, volume 16. American Mathematical Society, Providence, 1955.
Guibas et al. (2021) John Guibas, Morteza Mardani, Zongyi Li, Andrew Tao, Anima Anandkumar, and Bryan Catanzaro. Adaptive Fourier neural operators: efficient token mixers for transformers. arXiv preprint arXiv:2111.13587, 2021.
Guo et al. (2016) Xiaoxiao Guo, Wei Li, and Francesco Iorio. Convolutional neural networks for steady flow approximation. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016.
Gurtin (1982) Morton E Gurtin. An Introduction to Continuum Mechanics. Academic Press, 1982.
Guss (2016) William H. Guss. Deep function machines: generalized neural networks for topological layer expression. arXiv e-prints, art. arXiv:1612.04799, Dec 2016.
Haber and Ruthotto (2017) Eldad Haber and Lars Ruthotto. Stable architectures for deep neural networks. Inverse Problems, 34(1):014004, 2017.
Hamilton et al. (2017) Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems, pages 1024–1034, 2017.
He and Xu (2019) Juncai He and Jinchao Xu. MgNet: a unified framework of multigrid and convolutional neural network. Science China Mathematics, 62(7):1331–1354, 2019.
He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
Herrmann et al. (2020) L Herrmann, Ch Schwab, and J Zech. Deep ReLU neural network expression rates for data-to-QoI maps in Bayesian PDE inversion. 2020.
Hornik et al. (1989) Kurt Hornik, Maxwell Stinchcombe, Halbert White, et al. Multilayer feedforward networks are universal approximators. Neural Networks, 2(5):359–366, 1989.
Jiang et al. (2020) Chiyu Max Jiang, Soheil Esmaeilzadeh, Kamyar Azizzadenesheli, Karthik Kashinath, Mustafa Mustafa, Hamdi A Tchelepi, Philip Marcus, Anima Anandkumar, et al. MeshfreeFlowNet: a physics-constrained deep continuous space-time super-resolution framework. arXiv preprint arXiv:2005.01463, 2020.
Johnson (2012) Claes Johnson. Numerical Solution of Partial Differential Equations by the Finite Element Method. Courier Corporation, 2012.
Kashinath et al. (2020) Karthik Kashinath, Philip Marcus, et al. Enforcing physical constraints in CNNs through differentiable PDE layer. In ICLR 2020 Workshop on Integration of Deep Neural Models and Differential Equations, 2020.
Khoo and Ying (2019) Yuehaw Khoo and Lexing Ying. SwitchNet: a neural network model for forward and inverse scattering problems. SIAM Journal on Scientific Computing, 41(5):A3182–A3201, 2019.
Khoo et al. (2021) Yuehaw Khoo, Jianfeng Lu, and Lexing Ying. Solving parametric PDE problems with artificial neural networks. European Journal of Applied Mathematics, 32(3):421–435, 2021.
Kipf and Welling (2016) Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.
Kondor et al. (2014) Risi Kondor, Nedelina Teneva, and Vikas Garg. Multiresolution matrix factorization. In International Conference on Machine Learning, pages 1620–1628, 2014.
Kovachki et al. (2021) Nikola Kovachki, Samuel Lanthaler, and Siddhartha Mishra. On universal approximation and error bounds for Fourier neural operators. arXiv preprint arXiv:2107.07562, 2021.
Kraichnan (1967) Robert H. Kraichnan. Inertial ranges in two-dimensional turbulence. The Physics of Fluids, 10(7):1417–1423, 1967.
Kulis et al. (2006) Brian Kulis, Mátyás Sustik, and Inderjit Dhillon. Learning low-rank kernel matrices. In Proceedings of the 23rd International Conference on Machine Learning, pages 505–512, 2006.
Kutyniok et al. (2022) Gitta Kutyniok, Philipp Petersen, Mones Raslan, and Reinhold Schneider. A theoretical analysis of deep neural networks and parametric PDEs. Constructive Approximation, 55(1):73–125, 2022.
Lan et al. (2017) Liang Lan, Kai Zhang, Hancheng Ge, Wei Cheng, Jun Liu, Andreas Rauber, Xiao-Li Li, Jun Wang, and Hongyuan Zha. Low-rank decomposition meets kernel learning: a generalized Nyström method. Artificial Intelligence, 250:1–15, 2017.
Lanthaler et al. (2021) Samuel Lanthaler, Siddhartha Mishra, and George Em Karniadakis. Error estimates for DeepONets: a deep learning framework in infinite dimensions. arXiv preprint arXiv:2102.09618, 2021.
Leoni (2009) G. Leoni. A First Course in Sobolev Spaces. Graduate Studies in Mathematics. American Mathematical Society, 2009.
Li et al. (2020a) Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Fourier neural operator for parametric partial differential equations, 2020a.
Li et al. (2020b) Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Multipole graph neural operator for parametric partial differential equations, 2020b.
Li et al. (2020c) Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Neural operator: graph kernel network for partial differential equations. arXiv preprint arXiv:2003.03485, 2020c.
Li et al. (2021) Zongyi Li, Hongkai Zheng, Nikola Kovachki, David Jin, Haoxuan Chen, Burigede Liu, Kamyar Azizzadenesheli, and Anima Anandkumar. Physics-informed neural operator for learning partial differential equations. arXiv preprint arXiv:2111.03794, 2021.
Lu et al. (2019) Lu Lu, Pengzhan Jin, and George Em Karniadakis. DeepONet: learning nonlinear operators for identifying differential equations based on the universal approximation theorem of operators. arXiv preprint arXiv:1910.03193, 2019.
Lu et al. (2021a) Lu Lu, Pengzhan Jin, Guofei Pang, Zhongqiang Zhang, and George Em Karniadakis. Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators. Nature Machine Intelligence, 3(3):218–229, 2021a.
Lu et al. (2021b) Lu Lu, Xuhui Meng, Shengze Cai, Zhiping Mao, Somdatta Goswami, Zhongqiang Zhang, and George Em Karniadakis. A comprehensive and fair comparison of two neural operators (with practical extensions) based on fair data. arXiv preprint arXiv:2111.05512, 2021b.
Mathieu et al. (2013) Michael Mathieu, Mikael Henaff, and Yann LeCun. Fast training of convolutional networks through FFTs, 2013.
Matthews et al. (2018) Alexander G. de G. Matthews, Mark Rowland, Jiri Hron, Richard E. Turner, and Zoubin Ghahramani. Gaussian process behaviour in wide deep neural networks. Apr 2018.
Mingo et al. (2004) Luis Mingo, Levon Aslanyan, Juan Castellanos, Miguel Diaz, and Vladimir Riazanov. Fourier neural networks: an approach with sinusoidal activation functions. 2004.
Murphy et al. (2018) Ryan L Murphy, Balasubramaniam Srinivasan, Vinayak Rao, and Bruno Ribeiro. Janossy pooling: learning deep permutation-invariant functions for variable-size inputs. arXiv preprint arXiv:1811.01900, 2018.
Neal (1996) Radford M. Neal. Bayesian Learning for Neural Networks. Springer-Verlag, 1996. ISBN 0387947248.
Nelsen and Stuart (2021) Nicholas H Nelsen and Andrew M Stuart. The random feature model for input-output maps between Banach spaces. SIAM Journal on Scientific Computing, 43(5):A3212–A3243, 2021.
Nyström (1930) Evert J Nyström. Über die praktische Auflösung von Integralgleichungen mit Anwendungen auf Randwertaufgaben. Acta Mathematica, 1930.
O'Leary-Roseberry et al. (2020) Thomas O'Leary-Roseberry, Umberto Villa, Peng Chen, and Omar Ghattas. Derivative-informed projected neural networks for high-dimensional parametric maps governed by PDEs. arXiv preprint arXiv:2011.15110, 2020.
Opschoor et al. (2020) Joost A.A. Opschoor, Christoph Schwab, and Jakob Zech. Deep learning in high dimension: ReLU network expression rates for Bayesian PDE inversion. SAM Research Report, 2020-47, 2020.
Pan and Duraisamy (2020) Shaowu Pan and Karthik Duraisamy. Physics-informed probabilistic learning of linear embeddings of nonlinear dynamics with guaranteed stability. SIAM Journal on Applied Dynamical Systems, 19(1):480–509, 2020.
Pathak et al. (2020) Jaideep Pathak, Mustafa Mustafa, Karthik Kashinath, Emmanuel Motheau, Thorsten Kurth, and Marcus Day. Using machine learning to augment coarse-grid computational fluid dynamics simulations, 2020.
Pełczyński and Wojciechowski (2001) Aleksander Pełczyński and Michał Wojciechowski. Contribution to the isomorphic classification of Sobolev spaces L_p^k(Ω). Recent Progress in Functional Analysis, 189:133–142, 2001.
Pfaff et al. (2020) Tobias Pfaff, Meire Fortunato, Alvaro Sanchez-Gonzalez, and Peter W. Battaglia. Learning mesh-based simulation with graph networks, 2020.
Pinkus (1985) A. Pinkus. N-Widths in Approximation Theory. Springer-Verlag Berlin Heidelberg, 1985.
Pinkus (1999) Allan Pinkus. Approximation theory of the MLP model in neural networks. Acta Numerica, 8:143–195, 1999.
Poole et al. (2016) Ben Poole, Subhaneil Lahiri, Maithra Raghu, Jascha Sohl-Dickstein, and Surya Ganguli. Exponential expressivity in deep neural networks through transient chaos. Advances in Neural Information Processing Systems, 29:3360–3368, 2016.
Quiñonero Candela and Rasmussen (2005) Joaquin Quiñonero Candela and Carl Edward Rasmussen. A unifying view of sparse approximate Gaussian process regression. J. Mach. Learn. Res., 6:1939–1959, 2005.
Rahimi and Recht (2008) Ali Rahimi and Benjamin Recht. Uniform approximation of functions with random bases. In 2008 46th Annual Allerton Conference on Communication, Control, and Computing, pages 555–561. IEEE, 2008.
Raissi et al. (2019) Maziar Raissi, Paris Perdikaris, and George E Karniadakis. Physics-informed neural networks: a deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378:686–707, 2019.
Ronneberger et al. (2015) Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 234–241. Springer, 2015.
Roux and Bengio (2007) Nicolas Le Roux and Yoshua Bengio. Continuous neural networks. In Marina Meila and Xiaotong Shen, editors, Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, 2007.
Scarselli et al. (2008) Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61–80, 2008.
Schwab and Zech (2019) Christoph Schwab and Jakob Zech.Deep learning in high dimension: Neural network expression rates for generalized polynomial chaos expansions in UQ.Analysis and Applications, 17(01):19โ55, 2019. Sirignano and Spiliopoulos (2018) Justin Sirignano and Konstantinos Spiliopoulos.Dgm: A deep learning algorithm for solving partial differential equations.Journal of computational physics, 375:1339โ1364, 2018. Sitzmann et al. (2020) Vincent Sitzmann, Julien NP Martel, Alexander W Bergman, David B Lindell, and Gordon Wetzstein.Implicit neural representations with periodic activation functions.arXiv preprint arXiv:2006.09661, 2020. Smith et al. (2020) Jonathan D Smith, Kamyar Azizzadenesheli, and Zachary E Ross.Eikonet: Solving the eikonal equation with deep neural networks.arXiv preprint arXiv:2004.00361, 2020. Stein (1970) Elias M. Stein.Singular Integrals and Differentiability Properties of Functions.Princeton University Press, 1970. Stuart (2010) A. M. Stuart.Inverse problems: A bayesian perspective.Acta Numerica, 19:451โ559, 2010. Trefethen (2000) Lloyd N Trefethen.Spectral methods in MATLAB, volume 10.Siam, 2000. Trillos and Slepฤev (2018) Nicolas Garcia Trillos and Dejan Slepฤev.A variational approach to the consistency of spectral clustering.Applied and Computational Harmonic Analysis, 45(2):239โ281, 2018. Trillos et al. (2020) Nicolรกs Garcรญa Trillos, Moritz Gerlach, Matthias Hein, and Dejan Slepฤev.Error estimates for spectral convergence of the graph laplacian on random geometric graphs toward the laplaceโbeltrami operator.Foundations of Computational Mathematics, 20(4):827โ887, 2020. Um et al. (2020a) Kiwon Um, Philipp Holl, Robert Brand, Nils Thuerey, et al.Solver-in-the-loop: Learning from differentiable physics to interact with iterative PDE-solvers.arXiv preprint arXiv:2007.00016, 2020a. Um et al. 
(2020b) Kiwon Um, Raymond, Fei, Philipp Holl, Robert Brand, and Nils Thuerey.Solver-in-the-loop: Learning from differentiable physics to interact with iterative PDE-solvers, 2020b. Ummenhofer et al. (2020) Benjamin Ummenhofer, Lukas Prantl, Nils Thรผrey, and Vladlen Koltun.Lagrangian fluid simulation with continuous convolutions.In International Conference on Learning Representations, 2020. Vapnik (1998) Vladimir N. Vapnik.Statistical Learning Theory.Wiley-Interscience, 1998. Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, ล ukasz Kaiser, and Illia Polosukhin.Attention is all you need.In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017. Veliฤkoviฤ et al. (2017) Petar Veliฤkoviฤ, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio.Graph attention networks.2017. Von Luxburg et al. (2008) Ulrike Von Luxburg, Mikhail Belkin, and Olivier Bousquet.Consistency of spectral clustering.The Annals of Statistics, pages 555โ586, 2008. Wang et al. (2020) Rui Wang, Karthik Kashinath, Mustafa Mustafa, Adrian Albert, and Rose Yu.Towards physics-informed deep learning for turbulent flow prediction.In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1457โ1466, 2020. Wang et al. (2021) Sifan Wang, Hanwen Wang, and Paris Perdikaris.Learning the solution operator of parametric partial differential equations with physics-informed deeponets.arXiv preprint arXiv:2103.10974, 2021. Wen et al. (2021) Gege Wen, Zongyi Li, Kamyar Azizzadenesheli, Anima Anandkumar, and Sally M Benson.U-fnoโan enhanced fourier neural operator based-deep learning model for multiphase flow.arXiv preprint arXiv:2109.03697, 2021. 
Whitney (1934) Hassler Whitney. Functions differentiable on the boundaries of regions. Annals of Mathematics, 35(3):482–485, 1934.
Williams (1996) Christopher K. I. Williams. Computing with infinite networks. In Proceedings of the 9th International Conference on Neural Information Processing Systems, Cambridge, MA, USA, 1996. MIT Press.
Zhu et al. (2021) Chen Zhu, Wei Ping, Chaowei Xiao, Mohammad Shoeybi, Tom Goldstein, Anima Anandkumar, and Bryan Catanzaro. Long-short transformer: Efficient transformers for language and vision. In Advances in Neural Information Processing Systems, 2021.
Zhu and Zabaras (2018) Yinhao Zhu and Nicholas Zabaras. Bayesian deep convolutional encoder-decoder networks for surrogate modeling and uncertainty quantification. Journal of Computational Physics, 2018. ISSN 0021-9991. doi: 10.1016/j.jcp.2018.04.018. URL http://www.sciencedirect.com/science/article/pii/S0021999118302341.

A

| Notation | Meaning |
| --- | --- |
| **Operator learning** | |
| $D \subset \mathbb{R}^d$ | The spatial domain for the PDE. |
| $x \in D$ | Points in the spatial domain. |
| $a \in \mathcal{A}(D; \mathbb{R}^{d_a})$ | The input functions (coefficients, boundaries, and/or initial conditions). |
| $u \in \mathcal{U}(D; \mathbb{R}^{d_u})$ | The target solution functions. |
| $D_j$ | The discretization of $(a_j, u_j)$. |
| $\mathcal{G}^\dagger : \mathcal{A} \to \mathcal{U}$ | The operator mapping the coefficients to the solutions. |
| $\mu$ | The probability measure from which the $a_j$ are sampled. |
| **Neural operator** | |
| $v(x) \in \mathbb{R}^{d_v}$ | The neural network representation of $u(x)$. |
| $d_a$ | Dimension of the input $a(x)$. |
| $d_u$ | Dimension of the output $u(x)$. |
| $d_v$ | The dimension of the representation $v(x)$. |
| $t = 0, \dots, T$ | The layer (iteration) in the neural operator. |
| $\mathcal{P}, \mathcal{Q}$ | The pointwise linear transformations $\mathcal{P} : a(x) \mapsto v_0(x)$ and $\mathcal{Q} : v_T(x) \mapsto u(x)$. |
| $\mathcal{K}$ | The integral operator in the iterative update $v_t \mapsto v_{t+1}$. |
| $\kappa : \mathbb{R}^{2(d+1)} \to \mathbb{R}^{d_v \times d_v}$ | The kernel, mapping $(x, y, a(x), a(y))$ to a $d_v \times d_v$ matrix. |
| $K \in \mathbb{R}^{n \times n \times d_v \times d_v}$ | The kernel matrix with $K_{xy} = \kappa(x, y)$. |
| $W \in \mathbb{R}^{d_v \times d_v}$ | The pointwise linear transformation used as the bias term in the iterative update. |
| $\sigma$ | The activation function. |

Table 10: Table of notations: operator learning and neural operators.
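For concreteness, the iterative update summarized in the table above, $v_{t+1}(x) = \sigma\big(Wv_t(x) + (\mathcal{K}v_t)(x)\big)$, can be evaluated on any discretization of $D$. The sketch below is a minimal illustration, not the paper's implementation: a fixed smooth kernel stands in for the learned $\kappa$, and the integral operator is approximated by simple quadrature on a uniform grid.

```python
import numpy as np

def kernel(x, y, d_v=4):
    """Hypothetical stand-in for the learned kernel kappa(x, y):
    a fixed smooth scalar profile times the d_v x d_v identity."""
    return np.exp(-abs(x - y)) * np.eye(d_v)

def neural_operator_layer(v, xs, W, sigma=np.tanh):
    """One update v_t -> v_{t+1} on a discretization xs of D = (0, 1).

    v  : (n, d_v) array with v[i] = v_t(xs[i])
    xs : (n,) array of quadrature points
    W  : (d_v, d_v) pointwise linear transformation (the bias term)
    The integral (K v_t)(x) = int_D kappa(x, y) v_t(y) dy is
    approximated by quadrature with uniform weights 1/n.
    """
    n, d_v = v.shape
    out = np.empty_like(v)
    for i in range(n):
        integral = sum(kernel(xs[i], xs[j], d_v) @ v[j] for j in range(n)) / n
        out[i] = sigma(W @ v[i] + integral)
    return out

# toy discretization: n = 32 points, d_v = 4 channels
rng = np.random.default_rng(0)
xs = np.linspace(0.0, 1.0, 32)
v0 = rng.standard_normal((32, 4))
W = 0.1 * rng.standard_normal((4, 4))
v1 = neural_operator_layer(v0, xs, W)
```

Because $W$ and $\kappa$ are defined at the function level rather than tied to a fixed grid, the same layer can be applied at any resolution, which is the discretization invariance emphasized in the paper.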
In the paper, we use lowercase letters such as $v, u$ to represent vectors and functions; uppercase letters such as $W, K$ to represent matrices or discretized transformations; and calligraphic letters such as $\mathcal{G}, \mathcal{F}$ to represent operators.

We write $\mathbb{N} = \{1, 2, 3, \dots\}$ and $\mathbb{N}_0 = \mathbb{N} \cup \{0\}$. Furthermore, we denote by $|\cdot|_p$ the $p$-norm on any Euclidean space. We say $\mathcal{X}$ is a Banach space if it is a Banach space over the real field $\mathbb{R}$. We denote by $\|\cdot\|_{\mathcal{X}}$ its norm and by $\mathcal{X}^*$ its topological (continuous) dual. In particular, $\mathcal{X}^*$ is the Banach space consisting of all continuous linear functionals $f : \mathcal{X} \to \mathbb{R}$ with the operator norm

$$\|f\|_{\mathcal{X}^*} = \sup_{\|x\|_{\mathcal{X}} \le 1} |f(x)| < \infty.$$

For any Banach space $\mathcal{Y}$, we denote by $\mathcal{L}(\mathcal{X}; \mathcal{Y})$ the Banach space of continuous linear maps $T : \mathcal{X} \to \mathcal{Y}$ with the operator norm

$$\|T\|_{\mathcal{X} \to \mathcal{Y}} = \sup_{\|x\|_{\mathcal{X}} \le 1} \|Tx\|_{\mathcal{Y}} < \infty.$$

We will abuse notation and write $\|\cdot\|$ for any operator norm when there is no ambiguity about the spaces in question.

Let $d \in \mathbb{N}$. We say that $D \subset \mathbb{R}^d$ is a domain if it is a bounded and connected open set that is topologically regular, i.e. $\operatorname{int}(\bar{D}) = D$. Note that, in the case $d = 1$, a domain is any bounded, open interval. For $d \ge 2$, we say $D$ is a Lipschitz domain if $\partial D$ can be locally represented as the graph of a Lipschitz continuous function defined on an open ball of $\mathbb{R}^{d-1}$. If $d = 1$, we will call any domain a Lipschitz domain. For any multi-index $\alpha \in \mathbb{N}_0^d$, we write $\partial^\alpha f$ for the $\alpha$-th weak partial derivative of $f$ when it exists.
Let $D \subset \mathbb{R}^d$ be a domain. For any $m \in \mathbb{N}_0$, we define the following spaces

$$
\begin{aligned}
C(D) &= \{f : D \to \mathbb{R} : f \text{ is continuous}\}, \\
C^m(D) &= \{f : D \to \mathbb{R} : \partial^\alpha f \in C(D) \;\; \forall\, 0 \le |\alpha|_1 \le m\}, \\
C^m_b(D) &= \Big\{f \in C^m(D) : \max_{0 \le |\alpha|_1 \le m} \sup_{x \in D} |\partial^\alpha f(x)| < \infty\Big\}, \\
C^m(\bar{D}) &= \{f \in C^m_b(D) : \partial^\alpha f \text{ is uniformly continuous} \;\; \forall\, 0 \le |\alpha|_1 \le m\},
\end{aligned}
$$

and make the equivalent definitions when $D$ is replaced with $\mathbb{R}^d$. Note that any function in $C^m(\bar{D})$ has a unique, bounded, continuous extension from $D$ to $\bar{D}$ and is hence uniquely defined on $\partial D$. We will work with this extension without further notice. We remark that, when $D$ is a Lipschitz domain, the following definition of $C^m(\bar{D})$ is equivalent:

$$C^m(\bar{D}) = \{f : \bar{D} \to \mathbb{R} : \exists F \in C^m(\mathbb{R}^d) \text{ such that } f \equiv F|_{\bar{D}}\},$$

see Whitney (1934); Brudnyi and Brudnyi (2012). We define $C^\infty(D) = \bigcap_{m=0}^\infty C^m(D)$ and, similarly, $C^\infty_b(D)$ and $C^\infty(\bar{D})$. We further define

$$C^\infty_c(D) = \{f \in C^\infty(D) : \operatorname{supp}(f) \subset D \text{ is compact}\}$$

and, again, note that all definitions hold analogously for $\mathbb{R}^d$. We denote by $\|\cdot\|_{C^m} : C^m_b(D) \to \mathbb{R}_{\ge 0}$ the norm

$$\|f\|_{C^m} = \max_{0 \le |\alpha|_1 \le m} \sup_{x \in D} |\partial^\alpha f(x)|,$$

which makes $C^m_b(D)$ (also with $D = \mathbb{R}^d$) and $C^m(\bar{D})$ Banach spaces. For any $n \in \mathbb{N}$, we write $C(D; \mathbb{R}^n)$ for the $n$-fold Cartesian product of $C(D)$, and similarly for all other spaces we have defined or will define subsequently. We will continue to write $\|\cdot\|_{C^m}$ for the norm on $C^m_b(D; \mathbb{R}^n)$ and $C^m(\bar{D}; \mathbb{R}^n)$, defined as

$$\|f\|_{C^m} = \max_{j \in \{1, \dots, n\}} \|f_j\|_{C^m}.$$

For any $m \in \mathbb{N}$ and $1 \le p \le \infty$, we use the notation $W^{m,p}(D)$ for the standard $L^p$-type Sobolev space with $m$ derivatives; we refer the reader to Adams and Fournier (2003) for a formal definition. Furthermore, we at times use the notation $W^{0,p}(D) = L^p(D)$ and $W^{m,2}(D) = H^m(D)$. Since we use the standard definitions of Sobolev spaces that can be found in any reference on the subject, we do not give the specifics here.
B
In this section we gather various results on the approximation property of Banach spaces. The main results are Lemma 22, which states that if two Banach spaces have the approximation property then continuous maps between them can be approximated in a finite-dimensional manner, and Lemma 26, which states that the spaces in Assumptions 9 and 10 have the approximation property.
Definition 15
A Banach space $\mathcal{X}$ has a Schauder basis if there exist some $\{\varphi_j\}_{j=1}^\infty \subset \mathcal{X}$ and $\{c_j\}_{j=1}^\infty \subset \mathcal{X}^*$ such that

1. $c_j(\varphi_k) = \delta_{jk}$ for any $j, k \in \mathbb{N}$,
2. $\lim_{n \to \infty} \|x - \sum_{j=1}^n c_j(x)\varphi_j\|_{\mathcal{X}} = 0$ for all $x \in \mathcal{X}$.

We remark that Definition 15 is equivalent to the following. The elements $\{\varphi_j\}_{j=1}^\infty \subset \mathcal{X}$ are called a Schauder basis for $\mathcal{X}$ if, for each $x \in \mathcal{X}$, there exists a unique sequence $\{\alpha_j\}_{j=1}^\infty \subset \mathbb{R}$ such that

$$\lim_{n \to \infty} \Big\|x - \sum_{j=1}^n \alpha_j \varphi_j\Big\|_{\mathcal{X}} = 0.$$

For the equivalence, see, for example, (Albiac and Kalton, 2006, Theorem 1.1.3). Throughout this paper we will simply write the term basis to mean Schauder basis. Furthermore, we note that if $\{\varphi_j\}_{j=1}^\infty$ is a basis then so is $\{\varphi_j / \|\varphi_j\|_{\mathcal{X}}\}_{j=1}^\infty$, so we will assume that any basis we use is normalized.
Definition 16
Let $\mathcal{X}$ be a Banach space and $T \in \mathcal{L}(\mathcal{X}; \mathcal{X})$. $T$ is called a finite rank operator if $T(\mathcal{X}) \subset \mathcal{X}$ is finite dimensional.

By noting that any finite dimensional subspace has a basis, we may equivalently define a finite rank operator $T \in \mathcal{L}(\mathcal{X}; \mathcal{X})$ to be one for which there exist a number $n \in \mathbb{N}$ and some $\{\varphi_j\}_{j=1}^n \subset \mathcal{X}$ and $\{c_j\}_{j=1}^n \subset \mathcal{X}^*$ such that

$$Tx = \sum_{j=1}^n c_j(x)\varphi_j, \quad \forall x \in \mathcal{X}.$$

Definition 17
A Banach space $\mathcal{X}$ is said to have the approximation property (AP) if, for any compact set $K \subset \mathcal{X}$ and $\epsilon > 0$, there exists a finite rank operator $T : \mathcal{X} \to \mathcal{X}$ such that

$$\|x - Tx\|_{\mathcal{X}} \le \epsilon, \quad \forall x \in K.$$
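For intuition, truncated Fourier expansion on $L^2$ is a finite rank operator of exactly the form in Definition 16, with the Fourier coefficient functionals as the $c_j$ and the trigonometric modes as the $\varphi_j$. The sketch below is an illustrative computation (not part of the proofs) checking that the truncation error on a smooth function shrinks as the rank grows, in the spirit of Definition 17.

```python
import numpy as np

def finite_rank_projection(f_vals, n_modes):
    """Finite rank operator on L^2((0, 2*pi)): keep only the first
    n_modes Fourier coefficient functionals c_j and re-expand in the
    corresponding trigonometric modes phi_j."""
    coeffs = np.fft.rfft(f_vals)
    coeffs[n_modes:] = 0.0
    return np.fft.irfft(coeffs, n=len(f_vals))

xs = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)
f = np.exp(np.sin(xs))  # a smooth periodic test function

# the L^2 truncation error shrinks as the rank of T grows
errors = [np.sqrt(np.mean((f - finite_rank_projection(f, n)) ** 2))
          for n in (2, 4, 8, 16)]
```

For an analytic function such as this one the Fourier coefficients decay faster than any polynomial, so a modest rank already gives near machine-precision accuracy on this (precompact) one-element set.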
We now state and prove some well-known results about the relationship between bases and the AP. We were unable to find statements of the following lemmas in the form given here in the literature, and therefore we provide full proofs.

Lemma 18
Let $\mathcal{X}$ be a Banach space with a basis. Then $\mathcal{X}$ has the AP.

Proof Let $\{c_j\}_{j=1}^\infty \subset \mathcal{X}^*$ and $\{\varphi_j\}_{j=1}^\infty \subset \mathcal{X}$ be a basis for $\mathcal{X}$. Note that there exists a constant $C > 0$ such that, for any $x \in \mathcal{X}$ and $n \in \mathbb{N}$,

$$\Big\|\sum_{j=1}^n c_j(x)\varphi_j\Big\|_{\mathcal{X}} \le \sup_{N \in \mathbb{N}} \Big\|\sum_{j=1}^N c_j(x)\varphi_j\Big\|_{\mathcal{X}} \le C\|x\|_{\mathcal{X}},$$

see, for example, (Albiac and Kalton, 2006, Remark 1.1.6). Assume, without loss of generality, that $C \ge 1$. Let $K \subset \mathcal{X}$ be compact and $\epsilon > 0$. Since $K$ is compact, we can find a number $n = n(\epsilon, C) \in \mathbb{N}$ and elements $y_1, \dots, y_n \in K$ such that for any $x \in K$ there exists a number $l \in \{1, \dots, n\}$ with the property that

$$\|x - y_l\|_{\mathcal{X}} \le \frac{\epsilon}{3C}.$$

We can then find a number $J = J(\epsilon, n) \in \mathbb{N}$ such that

$$\max_{l \in \{1, \dots, n\}} \Big\|y_l - \sum_{j=1}^J c_j(y_l)\varphi_j\Big\|_{\mathcal{X}} \le \frac{\epsilon}{3}.$$

Define the finite rank operator $T : \mathcal{X} \to \mathcal{X}$ by

$$Tx = \sum_{j=1}^J c_j(x)\varphi_j, \quad \forall x \in \mathcal{X}.$$

The triangle inequality implies that, for any $x \in K$,

$$
\begin{aligned}
\|x - T(x)\|_{\mathcal{X}} &\le \|x - y_l\|_{\mathcal{X}} + \|y_l - T(y_l)\|_{\mathcal{X}} + \|T(y_l) - T(x)\|_{\mathcal{X}} \\
&\le \frac{2\epsilon}{3} + \Big\|\sum_{j=1}^J \big(c_j(y_l) - c_j(x)\big)\varphi_j\Big\|_{\mathcal{X}} \\
&\le \frac{2\epsilon}{3} + C\|y_l - x\|_{\mathcal{X}} \\
&\le \epsilon
\end{aligned}
$$

as desired.
Lemma 19
Let $\mathcal{X}$ be a Banach space with a basis and $\mathcal{Y}$ be any Banach space. Suppose there exists a continuous linear bijection $T : \mathcal{X} \to \mathcal{Y}$. Then $\mathcal{Y}$ has a basis.

Proof Let $y \in \mathcal{Y}$ and $\epsilon > 0$. Since $T$ is a bijection, there exists an element $x \in \mathcal{X}$ so that $Tx = y$ and $T^{-1}y = x$. Since $\mathcal{X}$ has a basis, we can find $\{\varphi_j\}_{j=1}^\infty \subset \mathcal{X}$ and $\{c_j\}_{j=1}^\infty \subset \mathcal{X}^*$ and a number $n = n(\epsilon, \|T\|) \in \mathbb{N}$ such that

$$\Big\|x - \sum_{j=1}^n c_j(x)\varphi_j\Big\|_{\mathcal{X}} \le \frac{\epsilon}{\|T\|}.$$

Note that

$$\Big\|y - \sum_{j=1}^n c_j(T^{-1}y)\,T\varphi_j\Big\|_{\mathcal{Y}} = \Big\|Tx - T\sum_{j=1}^n c_j(x)\varphi_j\Big\|_{\mathcal{Y}} \le \|T\|\,\Big\|x - \sum_{j=1}^n c_j(x)\varphi_j\Big\|_{\mathcal{X}} \le \epsilon,$$

hence $\{T\varphi_j\}_{j=1}^\infty \subset \mathcal{Y}$ and $\{c_j(T^{-1}\,\cdot\,)\}_{j=1}^\infty \subset \mathcal{Y}^*$ form a basis for $\mathcal{Y}$ by linearity and continuity of $T$ and $T^{-1}$.
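As a concrete instance of Lemma 19, a rescaling $T : L^2((0,1)) \to L^2((0,L))$, $(Tf)(x) = f(x/L)$, is a continuous linear bijection, and the sine basis of $L^2((0,1))$ pushes forward to a basis of $L^2((0,L))$ with the functionals $g \mapsto c_j(T^{-1}g)$. The sketch below is an illustrative numerical check (grid sizes, $L$, and the target function are hypothetical choices): it reconstructs a function on $(0, L)$ through the mapped basis.

```python
import numpy as np

L = 3.0
n_grid, n_modes = 4096, 64
y = (np.arange(n_grid) + 0.5) / n_grid   # midpoint grid on (0, 1)
x = L * y                                # the corresponding grid on (0, L)

# basis of L^2((0,1)): normalized sines; the c_j are L^2 inner products
phi = np.sqrt(2.0) * np.sin(np.pi * np.outer(np.arange(1, n_modes + 1), y))

g = np.exp(x) * np.sin(2.0 * np.pi * x / L)  # target in L^2((0, L))
f = g                                        # samples of T^{-1} g, since (T^{-1}g)(y) = g(L y)

coeffs = phi @ f / n_grid                # c_j(T^{-1} g) by midpoint quadrature
recon = coeffs @ phi                     # sum_j c_j(T^{-1}g) (T phi_j) on the x-grid
err = np.sqrt(L * np.mean((g - recon) ** 2))  # L^2((0, L)) error of the truncation
```

The mapped expansion converges in $L^2((0,L))$ exactly because the original expansion converges in $L^2((0,1))$ and $T$ is bounded, mirroring the one-line estimate in the proof.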
Lemma 20
Let $\mathcal{X}$ be a Banach space with the AP and $\mathcal{Y}$ be any Banach space. Suppose there exists a continuous linear bijection $T : \mathcal{X} \to \mathcal{Y}$. Then $\mathcal{Y}$ has the AP.

Proof Let $K \subset \mathcal{Y}$ be a compact set and $\epsilon > 0$. The set $Q = T^{-1}(K) \subset \mathcal{X}$ is compact since $T^{-1}$ is continuous. Since $\mathcal{X}$ has the AP, there exists a finite rank operator $R : \mathcal{X} \to \mathcal{X}$ such that

$$\|x - Rx\|_{\mathcal{X}} \le \frac{\epsilon}{\|T\|}, \quad \forall x \in Q.$$

Define the operator $S : \mathcal{Y} \to \mathcal{Y}$ by $S = TRT^{-1}$. Clearly $S$ is a finite rank operator since $R$ is a finite rank operator. Let $y \in K$; then, since $K = T(Q)$, there exists $x \in Q$ such that $Tx = y$ and $x = T^{-1}y$. Then

$$\|y - Sy\|_{\mathcal{Y}} = \|Tx - TRx\|_{\mathcal{Y}} \le \|T\|\,\|x - Rx\|_{\mathcal{X}} \le \epsilon,$$

hence $\mathcal{Y}$ has the AP.
The following lemma shows that the infinite union of compact sets is compact if each set is the image of a fixed compact set under a convergent sequence of continuous maps. The result is instrumental in proving Lemma 22.

Lemma 21
Let $\mathcal{X}, \mathcal{Y}$ be Banach spaces and $F : \mathcal{X} \to \mathcal{Y}$ be a continuous map. Let $K \subset \mathcal{X}$ be a compact set in $\mathcal{X}$ and $\{F_n : \mathcal{X} \to \mathcal{Y}\}_{n=1}^\infty$ be a sequence of continuous maps such that

$$\lim_{n \to \infty} \sup_{x \in K} \|F(x) - F_n(x)\|_{\mathcal{Y}} = 0.$$

Then the set

$$W = \bigcup_{n=1}^\infty F_n(K) \cup F(K)$$

is compact in $\mathcal{Y}$.

Proof Let $\epsilon > 0$; then there exists a number $N = N(\epsilon) \in \mathbb{N}$ such that

$$\sup_{x \in K} \|F(x) - F_n(x)\|_{\mathcal{Y}} \le \frac{\epsilon}{2}, \quad \forall n \ge N.$$

Define the set

$$W_N = \bigcup_{n=1}^N F_n(K) \cup F(K),$$

which is compact since $F$ and each $F_n$ are continuous. We can therefore find a number $J = J(\epsilon, N) \in \mathbb{N}$ and elements $y_1, \dots, y_J \in W_N$ such that, for any $z \in W_N$, there exists a number $l = l(z) \in \{1, \dots, J\}$ such that

$$\|z - y_l\|_{\mathcal{Y}} \le \frac{\epsilon}{2}.$$

Let $y \in W \setminus W_N$; then there exist a number $n > N$ and an element $x \in K$ such that $y = F_n(x)$. Since $F(x) \in W_N$, we can find a number $l \in \{1, \dots, J\}$ such that

$$\|F(x) - y_l\|_{\mathcal{Y}} \le \frac{\epsilon}{2}.$$

Therefore,

$$\|y - y_l\|_{\mathcal{Y}} \le \|F_n(x) - F(x)\|_{\mathcal{Y}} + \|F(x) - y_l\|_{\mathcal{Y}} \le \epsilon,$$

hence $\{y_l\}_{l=1}^J$ forms a finite $\epsilon$-net for $W$, showing that $W$ is totally bounded.

We will now show that $W$ is closed. To that end, let $\{w_j\}_{j=1}^\infty$ be a convergent sequence in $W$; in particular, $w_j \in W$ for every $j \in \mathbb{N}$ and $w_j \to w \in \mathcal{Y}$ as $j \to \infty$. We can thus find convergent sequences $\{x_j\}_{j=1}^\infty$ and $\{\alpha_j\}_{j=1}^\infty$ such that $x_j \in K$, $\alpha_j \in \mathbb{N}_0$, and $w_j = F_{\alpha_j}(x_j)$, where we define $F_0 = F$. Since $K$ is closed, $\lim_{j \to \infty} x_j = x \in K$; thus, for each fixed $j \in \mathbb{N}$,

$$\lim_{k \to \infty} F_{\alpha_j}(x_k) = F_{\alpha_j}(x) \in W$$

by continuity of $F_{\alpha_j}$. Since uniform convergence implies pointwise convergence,

$$w = \lim_{j \to \infty} F_{\alpha_j}(x) = F_\alpha(x) \in W$$

for some $\alpha \in \mathbb{N}_0$; thus $w \in W$, showing that $W$ is closed.
The following lemma shows that any continuous operator acting between two Banach spaces with the AP can be approximated in a finite-dimensional manner. The approximation proceeds in three steps, which are shown schematically in Figure 16. First, an input is mapped to a finite-dimensional representation via the action of a set of functionals on $\mathcal{X}$. This representation is then mapped by a continuous function to a new finite-dimensional representation, which serves as the set of coefficients onto representers of $\mathcal{Y}$. The resulting expansion is an element of $\mathcal{Y}$ that is $\epsilon$-close to the action of $\mathcal{G}$ on the input element. A similar finite-dimensionalization was used in (Bhattacharya et al., 2020) by using PCA on $\mathcal{X}$ to define the functionals acting on the input and PCA on $\mathcal{Y}$ to define the output representers. However, the result in that work is restricted to separable Hilbert spaces; here, we generalize it to Banach spaces with the AP.

Lemma 22
Let $\mathcal{X}, \mathcal{Y}$ be two Banach spaces with the AP and let $\mathcal{G} : \mathcal{X} \to \mathcal{Y}$ be a continuous map. For every compact set $K \subset \mathcal{X}$ and $\epsilon > 0$, there exist numbers $J, J' \in \mathbb{N}$ and continuous linear maps $F_J : \mathcal{X} \to \mathbb{R}^J$ and $G_{J'} : \mathbb{R}^{J'} \to \mathcal{Y}$, as well as $\varphi \in C(\mathbb{R}^J; \mathbb{R}^{J'})$, such that

$$\sup_{x \in K} \|\mathcal{G}(x) - (G_{J'} \circ \varphi \circ F_J)(x)\|_{\mathcal{Y}} \le \epsilon.$$

Furthermore, there exist $w_1, \dots, w_J \in \mathcal{X}^*$ such that $F_J$ has the form

$$F_J(x) = (w_1(x), \dots, w_J(x)), \quad \forall x \in \mathcal{X},$$

and there exist $\beta_1, \dots, \beta_{J'} \in \mathcal{Y}$ such that $G_{J'}$ has the form

$$G_{J'}(v) = \sum_{j=1}^{J'} v_j \beta_j, \quad \forall v \in \mathbb{R}^{J'}.$$

If $\mathcal{Y}$ admits a basis, then $\{\beta_j\}_{j=1}^{J'}$ can be picked so that there is an extension $\{\beta_j\}_{j=1}^\infty \subset \mathcal{Y}$ which is a basis for $\mathcal{Y}$.
Proof Since $\mathcal{X}$ has the AP, there exists a sequence of finite rank operators $\{T_n^{\mathcal{X}} : \mathcal{X} \to \mathcal{X}\}_{n=1}^\infty$ such that

$$\lim_{n \to \infty} \sup_{x \in K} \|x - T_n^{\mathcal{X}} x\|_{\mathcal{X}} = 0.$$

Define the set

$$W = \bigcup_{n=1}^\infty T_n^{\mathcal{X}}(K) \cup K,$$

which is compact by Lemma 21. Therefore, $\mathcal{G}$ is uniformly continuous on $W$, hence there exists a modulus of continuity $\omega : \mathbb{R}_{\ge 0} \to \mathbb{R}_{\ge 0}$ which is non-decreasing and satisfies $\omega(t) \to \omega(0) = 0$ as $t \to 0$, as well as

$$\|\mathcal{G}(z_1) - \mathcal{G}(z_2)\|_{\mathcal{Y}} \le \omega(\|z_1 - z_2\|_{\mathcal{X}}), \quad \forall z_1, z_2 \in W.$$

We can thus find a number $n = n(\epsilon) \in \mathbb{N}$ such that

$$\sup_{x \in K} \omega(\|x - T_n^{\mathcal{X}} x\|_{\mathcal{X}}) \le \frac{\epsilon}{2}.$$

Let $J = \dim T_n^{\mathcal{X}}(\mathcal{X}) < \infty$. There exist elements $\{\varphi_j\}_{j=1}^J \subset \mathcal{X}$ and $\{w_j\}_{j=1}^J \subset \mathcal{X}^*$ such that

$$T_n^{\mathcal{X}} x = \sum_{j=1}^J w_j(x)\varphi_j, \quad \forall x \in \mathcal{X}.$$

Define the maps $F_J^{\mathcal{X}} : \mathcal{X} \to \mathbb{R}^J$ and $G_J^{\mathcal{X}} : \mathbb{R}^J \to \mathcal{X}$ by

$$F_J^{\mathcal{X}}(x) = (w_1(x), \dots, w_J(x)), \quad \forall x \in \mathcal{X}, \qquad G_J^{\mathcal{X}}(v) = \sum_{j=1}^J v_j \varphi_j, \quad \forall v \in \mathbb{R}^J,$$

noting that $T_n^{\mathcal{X}} = G_J^{\mathcal{X}} \circ F_J^{\mathcal{X}}$. Define the set $P = (\mathcal{G} \circ T_n^{\mathcal{X}})(K) \subset \mathcal{Y}$, which is clearly compact. Since $\mathcal{Y}$ has the AP, we can similarly find a finite rank operator $T_{J'}^{\mathcal{Y}} : \mathcal{Y} \to \mathcal{Y}$ with $J' = \dim T_{J'}^{\mathcal{Y}}(\mathcal{Y}) < \infty$ such that

$$\sup_{y \in P} \|y - T_{J'}^{\mathcal{Y}} y\|_{\mathcal{Y}} \le \frac{\epsilon}{2}.$$

Analogously, define the maps $F_{J'}^{\mathcal{Y}} : \mathcal{Y} \to \mathbb{R}^{J'}$ and $G_{J'}^{\mathcal{Y}} : \mathbb{R}^{J'} \to \mathcal{Y}$ by

$$F_{J'}^{\mathcal{Y}}(y) = (c_1(y), \dots, c_{J'}(y)), \quad \forall y \in \mathcal{Y}, \qquad G_{J'}^{\mathcal{Y}}(v) = \sum_{j=1}^{J'} v_j \beta_j, \quad \forall v \in \mathbb{R}^{J'},$$

for some $\{\beta_j\}_{j=1}^{J'} \subset \mathcal{Y}$ and $\{c_j\}_{j=1}^{J'} \subset \mathcal{Y}^*$ such that $T_{J'}^{\mathcal{Y}} = G_{J'}^{\mathcal{Y}} \circ F_{J'}^{\mathcal{Y}}$. Clearly, if $\mathcal{Y}$ admits a basis, then we could have defined $F_{J'}^{\mathcal{Y}}$ and $G_{J'}^{\mathcal{Y}}$ through it instead of through $T_{J'}^{\mathcal{Y}}$. Define $\varphi : \mathbb{R}^J \to \mathbb{R}^{J'}$ by

$$\varphi(v) = (F_{J'}^{\mathcal{Y}} \circ \mathcal{G} \circ G_J^{\mathcal{X}})(v), \quad \forall v \in \mathbb{R}^J,$$

which is clearly continuous, and note that $G_{J'}^{\mathcal{Y}} \circ \varphi \circ F_J^{\mathcal{X}} = T_{J'}^{\mathcal{Y}} \circ \mathcal{G} \circ T_n^{\mathcal{X}}$. Set $F_J = F_J^{\mathcal{X}}$ and $G_{J'} = G_{J'}^{\mathcal{Y}}$; then, for any $x \in K$,

$$
\begin{aligned}
\|\mathcal{G}(x) - (G_{J'} \circ \varphi \circ F_J)(x)\|_{\mathcal{Y}} &\le \|\mathcal{G}(x) - \mathcal{G}(T_n^{\mathcal{X}} x)\|_{\mathcal{Y}} + \|\mathcal{G}(T_n^{\mathcal{X}} x) - (T_{J'}^{\mathcal{Y}} \circ \mathcal{G} \circ T_n^{\mathcal{X}})(x)\|_{\mathcal{Y}} \\
&\le \omega(\|x - T_n^{\mathcal{X}} x\|_{\mathcal{X}}) + \sup_{y \in P} \|y - T_{J'}^{\mathcal{Y}} y\|_{\mathcal{Y}} \\
&\le \epsilon
\end{aligned}
$$

as desired.
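Lemma 22 is the abstraction behind practical model classes: Bhattacharya et al. (2020) realize $F_J$ and $G_{J'}$ with PCA bases of input and output samples. The sketch below is a toy linear example (the operator, grid, and all parameter choices are illustrative): it builds PCA encoders/decoders with the SVD and checks that $G_{J'} \circ \varphi \circ F_J$ reproduces a smoothing operator on a compact family of band-limited inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
n_grid, n_samples, J = 128, 200, 10

# compact family K: random band-limited inputs a on a grid over D = (0, 1)
freqs = np.arange(1, 6)
xs = np.linspace(0.0, 1.0, n_grid)
A = rng.standard_normal((n_samples, 5)) @ np.sin(np.outer(freqs, np.pi * xs))

# a toy (linear) operator G: smoothing by a normalized Gaussian kernel matrix
smooth = np.exp(-((xs[:, None] - xs[None, :]) ** 2) / 0.01)
smooth /= smooth.sum(axis=1, keepdims=True)
U_out = A @ smooth.T                      # u = G(a) sampled on the grid

def pca_basis(samples, J):
    """Leading J right singular vectors: rows play the role of the
    representers (and, paired with inner products, the functionals)."""
    _, _, Vt = np.linalg.svd(samples, full_matrices=False)
    return Vt[:J]

Phi_in, Phi_out = pca_basis(A, J), pca_basis(U_out, J)
F_J = lambda a: Phi_in @ a                # the functionals w_j(a)
G_Jp = lambda v: Phi_out.T @ v            # the expansion sum_j v_j beta_j

# phi: R^J -> R^J; here it is the exact induced map since G is linear
phi_matrix = Phi_out @ smooth @ Phi_in.T
phi = lambda v: phi_matrix @ v

# sup error of G_J' o phi o F_J against G over the sample family
approx = np.array([G_Jp(phi(F_J(a))) for a in A])
sup_err = np.max(np.linalg.norm(U_out - approx, axis=1))
```

Because the inputs span only five modes, rank $J = 10$ PCA bases capture both the input and output families exactly, so the factorized map matches $\mathcal{G}$ to numerical precision; for a genuinely infinite-dimensional $K$, the error is instead controlled by the $\epsilon$ in the lemma.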
We now state and prove some results about isomorphisms of function spaces defined on different domains. These results are instrumental in proving Lemma 26.

Lemma 23
Let $D, D' \subset \mathbb{R}^d$ be domains. Suppose that, for some $m \in \mathbb{N}_0$, there exists a $C^m$-diffeomorphism $\tau : \bar{D}' \to \bar{D}$. Then the mapping $T : C^m(\bar{D}) \to C^m(\bar{D}')$ defined as

$$T(f)(x) = f(\tau(x)), \quad \forall f \in C^m(\bar{D}),\; x \in \bar{D}',$$

is a continuous linear bijection.

Proof Clearly $T$ is linear since the evaluation functional is linear. To see that it is continuous, note that by the chain rule we can find a constant $M = M(m) > 0$ such that

$$\|T(f)\|_{C^m} \le M\,\|f\|_{C^m}\,\|\tau\|_{C^m}, \quad \forall f \in C^m(\bar{D}).$$

We will now show that it is bijective. Let $f, g \in C^m(\bar{D})$ with $f \ne g$. Then there exists a point $x \in \bar{D}$ such that $f(x) \ne g(x)$. Then $T(f)(\tau^{-1}(x)) = f(x)$ and $T(g)(\tau^{-1}(x)) = g(x)$, hence $T(f) \ne T(g)$; thus $T$ is injective. Now let $h \in C^m(\bar{D}')$ and define $f : \bar{D} \to \mathbb{R}$ by $f = h \circ \tau^{-1}$. Since $\tau^{-1} \in C^m(\bar{D}; \bar{D}')$, we have that $f \in C^m(\bar{D})$. Clearly, $T(f) = h$, hence $T$ is surjective.
Corollary 24
Let $B > 0$ and $m \in \mathbb{N}_0$. There exists a continuous linear bijection $T : C^m([0,1]^d) \to C^m([-B,B]^d)$.

Proof Let $\mathbf{1} \in \mathbb{R}^d$ denote the vector in which all entries are $1$. Define the map $\tau : \mathbb{R}^d \to \mathbb{R}^d$ by

$$\tau(x) = \frac{1}{2B}x + \frac{1}{2}\mathbf{1}, \quad \forall x \in \mathbb{R}^d. \tag{48}$$

Clearly $\tau$ is a $C^\infty$-diffeomorphism between $[-B,B]^d$ and $[0,1]^d$, hence Lemma 23 implies the result.
Lemma 25
Let $B > 0$ and $m \in \mathbb{N}$. There exists a continuous linear bijection $T : W^{m,1}((0,1)^d) \to W^{m,1}((-B,B)^d)$.

Proof Define the map $\tau : \mathbb{R}^d \to \mathbb{R}^d$ by (48). We have that $\tau((-B,B)^d) = (0,1)^d$. Define the operator $T$ by

$$Tf = f \circ \tau, \quad \forall f \in W^{m,1}((0,1)^d),$$

which is clearly linear since composition is linear. We compute that, for any $0 \le |\alpha|_1 \le m$,

$$\partial^\alpha (f \circ \tau) = (2B)^{-|\alpha|_1}\,(\partial^\alpha f) \circ \tau,$$

hence, by the change of variables formula,

$$\|Tf\|_{W^{m,1}((-B,B)^d)} = \sum_{0 \le |\alpha|_1 \le m} (2B)^{d - |\alpha|_1}\,\|\partial^\alpha f\|_{L^1((0,1)^d)}.$$

We can therefore find numbers $C_1, C_2 > 0$, depending on $B$ and $d$, such that

$$C_1\|f\|_{W^{m,1}((0,1)^d)} \le \|Tf\|_{W^{m,1}((-B,B)^d)} \le C_2\|f\|_{W^{m,1}((0,1)^d)}.$$

This shows that $T : W^{m,1}((0,1)^d) \to W^{m,1}((-B,B)^d)$ is continuous and injective. Now let $g \in W^{m,1}((-B,B)^d)$ and define $f = g \circ \tau^{-1}$. A similar argument shows that $f \in W^{m,1}((0,1)^d)$ and, clearly, $Tf = g$, hence $T$ is surjective.
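The scaling factors in the proof of Lemma 25 can be checked numerically in $d = 1$: under $Tf = f \circ \tau$ with $\tau$ from (48), each term $\|\partial^\alpha(Tf)\|_{L^1}$ picks up the factor $(2B)^{d - |\alpha|_1}$. The snippet below is an illustrative verification with a hypothetical test function.

```python
import numpy as np

def trap(y, x):
    """Simple trapezoid rule (avoids version-specific numpy helpers)."""
    return float(np.sum((y[:-1] + y[1:]) * 0.5 * np.diff(x)))

B = 2.0
f = lambda y: y ** 2        # test function on (0, 1); ||f||_{L^1} = 1/3
fp = lambda y: 2.0 * y      # its derivative;         ||f'||_{L^1} = 1

# tau from Eq. (48) with d = 1 maps (-B, B) onto (0, 1); T f = f o tau
tau = lambda x: x / (2.0 * B) + 0.5

x = np.linspace(-B, B, 200001)
Tf = f(tau(x))
Tf_prime = fp(tau(x)) / (2.0 * B)   # chain rule: (2B)^{-|alpha|_1} (d^alpha f) o tau

# each W^{1,1} term picks up the predicted factor (2B)^{d - |alpha|_1}
l1 = trap(np.abs(Tf), x)            # expect (2B)^1 * ||f||_{L^1} = 2B/3
dl1 = trap(np.abs(Tf_prime), x)     # expect (2B)^0 * ||f'||_{L^1} = 1
```

The two quadratures land on $2B/3$ and $1$ respectively, matching the change-of-variables identity term by term.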
We now show that the spaces in Assumptions 9 and 10 have the AP. While the result is well known when the domain is $(0,1)^d$ or $\mathbb{R}^d$, we were unable to find any results in the literature for Lipschitz domains, and we therefore give a full proof here. The essence of the proof is to either exhibit an isomorphism to a space that is already known to have the AP, or to directly show the AP by embedding the Lipschitz domain into a hypercube for which there are known basis constructions. Our proof shows the stronger result that $W^{m,p}(D)$, for $m \in \mathbb{N}_0$ and $1 \le p < \infty$, has a basis, but, for $C^m(\bar{D})$, we only establish the AP and not necessarily a basis. The discrepancy comes from the fact that there is an isomorphism between $W^{m,p}(D)$ and $W^{m,p}(\mathbb{R}^d)$, while there is not one between $C^m(\bar{D})$ and $C^m(\mathbb{R}^d)$.

Lemma 26
Let Assumptions 9 and 10 hold. Then $\mathcal{A}$ and $\mathcal{U}$ have the AP.

Proof It is enough to show that the spaces $W^{m,p}(D)$ and $C^m(\bar{D})$, for any $1 \le p < \infty$ and $m \in \mathbb{N}_0$, with $D \subset \mathbb{R}^d$ a Lipschitz domain, have the AP. Consider first the spaces $W^{0,p}(D) = L^p(D)$. Since the Lebesgue measure on $D$ is $\sigma$-finite and has no atoms, $L^p(D)$ is isometrically isomorphic to $L^p((0,1))$ (see, for example, (Albiac and Kalton, 2006, Chapter 6)). Hence, by Lemma 20, it is enough to show that $L^p((0,1))$ has the AP. Similarly, consider the spaces $W^{m,p}(D)$ for $m > 0$ and $p > 1$. Since $D$ is Lipschitz, there exists a continuous linear extension operator $W^{m,p}(D) \to W^{m,p}(\mathbb{R}^d)$ (Stein, 1970, Chapter 6, Theorem 5) (this also holds for $p = 1$). We can therefore apply (Pełczyński and Wojciechowski, 2001, Corollary 4) (when $p > 1$) to conclude that $W^{m,p}(D)$ is isomorphic to $L^p((0,1))$. By (Albiac and Kalton, 2006, Proposition 6.1.3), $L^p((0,1))$ has a basis, hence Lemma 18 implies the result.
Now consider the spaces $C^m(\bar{D})$. Since $D$ is bounded, there exists a number $B > 0$ such that $\bar{D} \subset [-B,B]^d$. Hence, by Corollary 24, $C^m([0,1]^d)$ is isomorphic to $C^m([-B,B]^d)$. Since $C^m([0,1]^d)$ has a basis (Ciesielski and Domsta, 1972, Theorem 5), Lemma 19 then implies that $C^m([-B,B]^d)$ has a basis. By (Fefferman, 2007, Theorem 1), there exists a continuous linear operator $E : C^m(\bar{D}) \to C^m_b(\mathbb{R}^d)$ such that $E(f)|_{\bar{D}} = f$ for all $f \in C^m(\bar{D})$. Define the restriction operators $R_B : C^m_b(\mathbb{R}^d) \to C^m([-B,B]^d)$ and $R_D : C^m([-B,B]^d) \to C^m(\bar{D})$, which are both clearly linear and continuous with $\|R_B\| = \|R_D\| = 1$. Let $\{c_j\}_{j=1}^\infty \subset (C^m([-B,B]^d))^*$ and $\{\varphi_j\}_{j=1}^\infty \subset C^m([-B,B]^d)$ be a basis for $C^m([-B,B]^d)$. As in the proof of Lemma 18, there exists a constant $C_1 > 0$ such that, for any $n \in \mathbb{N}$ and $g \in C^m([-B,B]^d)$,

$$\Big\|\sum_{j=1}^n c_j(g)\varphi_j\Big\|_{C^m([-B,B]^d)} \le C_1\|g\|_{C^m([-B,B]^d)}.$$

Suppose, without loss of generality, that $C_1\|E\| \ge 1$. Let $K \subset C^m(\bar{D})$ be a compact set and $\epsilon > 0$. Since $K$ is compact, we can find a number $n = n(\epsilon) \in \mathbb{N}$ and elements $y_1, \dots, y_n \in K$ such that, for any $f \in K$, there exists a number $l \in \{1, \dots, n\}$ such that

$$\|f - y_l\|_{C^m(\bar{D})} \le \frac{\epsilon}{3C_1\|E\|}.$$

For every $l \in \{1, \dots, n\}$, define $g_l = R_B(E(y_l))$ and note that $g_l \in C^m([-B,B]^d)$, hence there exists a number $J = J(\epsilon, n) \in \mathbb{N}$ such that

$$\max_{l \in \{1, \dots, n\}} \Big\|g_l - \sum_{j=1}^J c_j(g_l)\varphi_j\Big\|_{C^m([-B,B]^d)} \le \frac{\epsilon}{3}.$$

Notice that, since $y_l = R_D(g_l)$, we have

$$\max_{l \in \{1, \dots, n\}} \Big\|y_l - \sum_{j=1}^J c_j(R_B(E(y_l)))\,R_D(\varphi_j)\Big\|_{C^m(\bar{D})} \le \|R_D\| \max_{l \in \{1, \dots, n\}} \Big\|g_l - \sum_{j=1}^J c_j(g_l)\varphi_j\Big\|_{C^m([-B,B]^d)} \le \frac{\epsilon}{3}.$$

Define the finite rank operator $T : C^m(\bar{D}) \to C^m(\bar{D})$ by

$$Tf = \sum_{j=1}^J c_j(R_B(E(f)))\,R_D(\varphi_j), \quad \forall f \in C^m(\bar{D}).$$

We then have that, for any $f \in K$,

$$
\begin{aligned}
\|f - Tf\|_{C^m(\bar{D})} &\le \|f - y_l\|_{C^m(\bar{D})} + \|y_l - Ty_l\|_{C^m(\bar{D})} + \|Ty_l - Tf\|_{C^m(\bar{D})} \\
&\le \frac{2\epsilon}{3} + \Big\|\sum_{j=1}^J c_j(R_B(E(y_l - f)))\varphi_j\Big\|_{C^m([-B,B]^d)} \\
&\le \frac{2\epsilon}{3} + C_1\|R_B(E(y_l - f))\|_{C^m([-B,B]^d)} \\
&\le \frac{2\epsilon}{3} + C_1\|E\|\,\|y_l - f\|_{C^m(\bar{D})} \\
&\le \epsilon,
\end{aligned}
$$

hence $C^m(\bar{D})$ has the AP.

We are left with the case $W^{m,1}(D)$. A similar argument as for the $C^m(\bar{D})$ case holds. In particular, the basis from (Ciesielski and Domsta, 1972, Theorem 5) is also a basis for $W^{m,1}((0,1)^d)$. Lemma 25 gives an isomorphism between $W^{m,1}((0,1)^d)$ and $W^{m,1}((-B,B)^d)$, hence we may use the extension operator $W^{m,1}(D) \to W^{m,1}(\mathbb{R}^d)$ from (Stein, 1970, Chapter 6, Theorem 5) to complete the argument. In fact, the same construction yields a basis for $W^{m,1}(D)$ due to the isomorphism with $W^{m,1}(\mathbb{R}^d)$; see, for example, (Pełczyński and Wojciechowski, 2001, Theorem 1).
C
In this section, we prove various results about the approximation of linear functionals by kernel integral operators. Lemma 27 establishes a Riesz-representation theorem for ๐ถ ๐ . The proof proceeds exactly as in the well-known result for ๐ ๐ , ๐ but, since we did not find it in the literature, we give full details here. Lemma 28 shows that linear functionals on ๐ ๐ , ๐ can be approximated uniformly over compact sets by integral kernel operators with a ๐ถ โ kernel. Lemmas 30 and 31 establish similar results for ๐ถ and ๐ถ ๐ respectively by employing Lemma 27. These lemmas are crucial in showing that NO(s) are universal since they imply that the functionals from Lemma 22 can be approximated by elements of ๐จ๐ฎ .
Lemma 27
Let ๐ท โ โ ๐ be a domain and ๐ โ โ 0 . For every ๐ฟ โ ( ๐ถ ๐ ( ๐ท ยฏ ) ) โ there exist finite, signed, Radon measures { ๐ ๐ผ } 0 โค | ๐ผ | 1 โค ๐ such that
๐ฟ โข ( ๐ ) = โ 0 โค | ๐ผ | 1 โค ๐ โซ ๐ท ยฏ โ ๐ผ ๐ โข ๐ฝ โข ๐ ๐ผ , โ ๐ โ ๐ถ ๐ โข ( ๐ท ยฏ ) .
Proof The case ๐ = 0 follows directly from (Leoni, 2009, Theorem B.111), so we assume that ๐ > 0 . Let ๐ผ 1 , โฆ , ๐ผ ๐ฝ be an enumeration of the set { ๐ผ โ โ ๐ : | ๐ผ | 1 โค ๐ } . Define the mapping ๐ : ๐ถ ๐ โข ( ๐ท ยฏ ) โ ๐ถ โข ( ๐ท ยฏ ; โ ๐ฝ ) by
๐ โข ๐ = ( โ ๐ผ 0 ๐ , โฆ , โ ๐ผ ๐ฝ ๐ ) , โ ๐ โ ๐ถ ๐ โข ( ๐ท ยฏ ) .
Clearly โ ๐ โข ๐ โ ๐ถ โข ( ๐ท ยฏ ; โ ๐ฝ ) = โ ๐ โ ๐ถ ๐ โข ( ๐ท ยฏ ) hence ๐ is an injective, continuous linear operator. Define ๐ โ ๐ โข ( ๐ถ ๐ โข ( ๐ท ยฏ ) ) โ ๐ถ โข ( ๐ท ยฏ ; โ ๐ฝ ) then ๐ โ 1 : ๐ โ ๐ถ ๐ โข ( ๐ท ยฏ ) is a continuous linear operator since ๐ preserves norm. Thus ๐ = ( ๐ โ 1 ) โ 1 ( ๐ถ ๐ ( ๐ท ยฏ ) ) is closed as the pre-image of a closed set under a continuous map. In particular, ๐ is a Banach space since ๐ถ โข ( ๐ท ยฏ ; โ ๐ฝ ) is a Banach space and ๐ is an isometric isomorphism between ๐ถ ๐ โข ( ๐ท ยฏ ) and ๐ . Therefore, there exists a continuous linear functional ๐ฟ ~ โ ๐ โ such that
๐ฟ โข ( ๐ ) = ๐ฟ ~ โข ( ๐ โข ๐ ) , โ ๐ โ ๐ถ ๐ โข ( ๐ท ยฏ ) .
By the Hahn-Banach theorem, ๐ฟ ~ can be extended to a continuous linear functional ๐ฟ ยฏ โ ( ๐ถ ( ๐ท ยฏ ; โ ๐ฝ ) ) โ such that โ ๐ฟ โ ( ๐ถ ๐ โข ( ๐ท ยฏ ) ) โ = โ ๐ฟ ~ โ ๐ โ = โ ๐ฟ ยฏ โ ( ๐ถ โข ( ๐ท ยฏ ; โ ๐ฝ ) ) โ . We have that
๐ฟ โข ( ๐ ) = ๐ฟ ~ โข ( ๐ โข ๐ ) = ๐ฟ ยฏ โข ( ๐ โข ๐ ) , โ ๐ โ ๐ถ ๐ โข ( ๐ท ยฏ ) .
Since
( ๐ถ ( ๐ท ยฏ ; โ ๐ฝ ) ) โ โ ร ๐ = 1 ๐ฝ ( ๐ถ ( ๐ท ยฏ ) ) โ โ โจ ๐ = 1 ๐ฝ ( ๐ถ ( ๐ท ยฏ ) ) โ ,
we have, by applying (Leoni, 2009, Theorem B.111) ๐ฝ times, that there exist finite, signed, Radon measures { ๐ ๐ผ } 0 โค | ๐ผ | 1 โค ๐ such that
๐ฟ ยฏ โข ( ๐ โข ๐ ) = โ 0 โค | ๐ผ | 1 โค ๐ โซ ๐ท ยฏ โ ๐ผ ๐ โข ๐ฝ โข ๐ ๐ผ , โ ๐ โ ๐ถ ๐ โข ( ๐ท ยฏ )
as desired.
Lemma 28
Let ๐ท โ โ ๐ be a bounded, open set and ๐ฟ โ ( ๐ ๐ , ๐ โข ( ๐ท ) ) โ for some ๐ โฅ 0 and 1 โค ๐ < โ . For any closed and bounded set ๐พ โ ๐ ๐ , ๐ โข ( ๐ท ) (compact if ๐ = 1 ) and ๐ > 0 , there exists a function ๐ โ ๐ถ ๐ โ โข ( ๐ท ) such that
sup ๐ข โ ๐พ | ๐ฟ โข ( ๐ข ) โ โซ ๐ท ๐ โข ๐ข โข ๐ฝ ๐ฅ | < ๐ .
Proof First consider the case ๐ = 0 and 1 โค ๐ < โ . By the Riesz Representation Theorem (Conway, 2007, Appendix B), there exists a function ๐ฃ โ ๐ฟ ๐ โข ( ๐ท ) such that
๐ฟ โข ( ๐ข ) = โซ ๐ท ๐ฃ โข ๐ข โข ๐ฝ ๐ฅ .
Since ๐พ is bounded, there is a constant ๐ > 0 such that
sup ๐ข โ ๐พ โ ๐ข โ ๐ฟ ๐ โค ๐ .
Suppose ๐ > 1 , so that 1 < ๐ < โ . Density of ๐ถ ๐ โ โข ( ๐ท ) in ๐ฟ ๐ โข ( ๐ท ) (Adams and Fournier, 2003, Corollary 2.30) implies there exists a function ๐ โ ๐ถ ๐ โ โข ( ๐ท ) such that
โ ๐ฃ โ ๐ โ ๐ฟ ๐ < ๐ ๐ .
By the Hรถlder inequality,
| ๐ฟ โข ( ๐ข ) โ โซ ๐ท ๐ โข ๐ข โข ๐ฝ ๐ฅ | โค โ ๐ข โ ๐ฟ ๐ โข โ ๐ฃ โ ๐ โ ๐ฟ ๐ < ๐ .
Suppose that ๐ = 1 so that ๐ = โ . Since ๐พ is totally bounded, there exists a number ๐ โ โ and functions ๐ 1 , โฆ , ๐ ๐ โ ๐พ such that, for any ๐ข โ ๐พ ,
โ ๐ข โ ๐ ๐ โ ๐ฟ 1 < ๐ 3 โข โ ๐ฃ โ ๐ฟ โ
for some ๐ โ { 1 , โฆ , ๐ } . Let ๐ ๐ โ ๐ถ ๐ โ โข ( ๐ท ) denote a standard mollifier for any ๐ > 0 . We can find ๐ > 0 small enough such that
max ๐ โ { 1 , โฆ , ๐ } โก โ ๐ ๐ โ ๐ ๐ โ ๐ ๐ โ ๐ฟ 1 < ๐ 9 โข โ ๐ฃ โ ๐ฟ โ .
Define ๐ = ๐ ๐ โ ๐ฃ โ ๐ถ โข ( ๐ท ) and note that โ ๐ โ ๐ฟ โ โค โ ๐ฃ โ ๐ฟ โ . By Fubiniโs theorem, we find
| โซ ๐ท ( ๐ โ ๐ฃ ) โข ๐ ๐ โข ๐ฝ ๐ฅ | = | โซ ๐ท ๐ฃ โข ( ๐ ๐ โ ๐ ๐ โ ๐ ๐ ) โข ๐ฝ ๐ฅ | โค โ ๐ฃ โ ๐ฟ โ โข โ ๐ ๐ โ ๐ ๐ โ ๐ ๐ โ ๐ฟ 1 < ๐ 9 .
Since ๐ ๐ โ ๐ฟ 1 โข ( ๐ท ) , by Lusinโs theorem, we can find a compact set ๐ด โ ๐ท such that
max ๐ โ { 1 , โฆ , ๐ } โข โซ ๐ท โ ๐ด | ๐ ๐ | โข ๐ฝ ๐ฅ < ๐ 18 โข โ ๐ฃ โ ๐ฟ โ .
Since ๐ถ ๐ โ โข ( ๐ท ) is dense in ๐ถ โข ( ๐ท ) over compact sets (Leoni, 2009, Theorem C.16), we can find a function ๐ โ ๐ถ ๐ โ โข ( ๐ท ) such that
sup ๐ฅ โ ๐ด | ๐ โข ( ๐ฅ ) โ ๐ โข ( ๐ฅ ) | โค ๐ 9 โข ๐
and โ ๐ โ ๐ฟ โ โค โ ๐ โ ๐ฟ โ โค โ ๐ฃ โ ๐ฟ โ . We have,
| โซ ๐ท ( ๐ โ ๐ฃ ) โข ๐ ๐ โข ๐ฝ ๐ฅ |
โค โซ ๐ด | ( ๐ โ ๐ฃ ) โข ๐ ๐ | โข ๐ฝ ๐ฅ + โซ ๐ท โ ๐ด | ( ๐ โ ๐ฃ ) โข ๐ ๐ | โข ๐ฝ ๐ฅ
โค โซ ๐ด | ( ๐ โ ๐ ) โข ๐ ๐ | โข ๐ฝ ๐ฅ + โซ ๐ท | ( ๐ โ ๐ฃ ) โข ๐ ๐ | โข ๐ฝ ๐ฅ + 2 โข โ ๐ฃ โ ๐ฟ โ โข โซ ๐ท โ ๐ด | ๐ ๐ | โข ๐ฝ ๐ฅ
โค sup ๐ฅ โ ๐ด | ๐ โข ( ๐ฅ ) โ ๐ โข ( ๐ฅ ) | โข โ ๐ ๐ โ ๐ฟ 1 + 2 โข ๐ 9
< ๐ 3 .
Finally,
| ๐ฟ โข ( ๐ข ) โ โซ ๐ท ๐ โข ๐ข โข ๐ฝ ๐ฅ |
โค | โซ ๐ท ๐ฃ โข ๐ข โข ๐ฝ ๐ฅ โ โซ ๐ท ๐ฃ โข ๐ ๐ โข ๐ฝ ๐ฅ | + | โซ ๐ท ๐ฃ โข ๐ ๐ โข ๐ฝ ๐ฅ โ โซ ๐ท ๐ โข ๐ข โข ๐ฝ ๐ฅ |
โค โ ๐ฃ โ ๐ฟ โ โข โ ๐ข โ ๐ ๐ โ ๐ฟ 1 + | โซ ๐ท ๐ โข ๐ข โข ๐ฝ ๐ฅ โ โซ ๐ท ๐ โข ๐ ๐ โข ๐ฝ ๐ฅ | + | โซ ๐ท ๐ โข ๐ ๐ โข ๐ฝ ๐ฅ โ โซ ๐ท ๐ฃ โข ๐ ๐ โข ๐ฝ ๐ฅ |
โค ๐ 3 + โ ๐ โ ๐ฟ โ โข โ ๐ข โ ๐ ๐ โ ๐ฟ 1 + | โซ ๐ท ( ๐ โ ๐ฃ ) โข ๐ ๐ โข ๐ฝ ๐ฅ |
โค 2 โข ๐ 3 + โ ๐ฃ โ ๐ฟ โ โข โ ๐ข โ ๐ ๐ โ ๐ฟ 1
< ๐ .
Suppose ๐ โฅ 1 . By the Riesz Representation Theorem (Adams and Fournier, 2003, Theorem 3.9), there exist elements ( ๐ฃ ๐ผ ) 0 โค | ๐ผ | 1 โค ๐ of ๐ฟ ๐ โข ( ๐ท ) where ๐ผ โ โ ๐ is a multi-index such that
๐ฟ โข ( ๐ข ) = โ 0 โค | ๐ผ | 1 โค ๐ โซ ๐ท ๐ฃ ๐ผ โข โ ๐ผ ๐ข โข ๐ฝ โข ๐ฅ .
Since ๐พ is bounded, there is a constant ๐ > 0 such that
sup ๐ข โ ๐พ โ ๐ข โ ๐ ๐ , ๐ โค ๐ .
Suppose ๐ > 1 , so that 1 < ๐ < โ . Density of ๐ถ 0 โ โข ( ๐ท ) in ๐ฟ ๐ โข ( ๐ท ) implies there exist functions ( ๐ ๐ผ ) 0 โค | ๐ผ | 1 โค ๐ in ๐ถ ๐ โ โข ( ๐ท ) such that
โ ๐ ๐ผ โ ๐ฃ ๐ผ โ ๐ฟ ๐ < ๐ ๐ โข ๐ฝ
where ๐ฝ = | { ๐ผ โ โ ๐ : | ๐ผ | 1 โค ๐ } | . Let
๐ = โ 0 โค | ๐ผ | 1 โค ๐ ( โ 1 ) | ๐ผ | 1 โข โ ๐ผ ๐ ๐ผ
then, by definition of a weak derivative,
โซ ๐ท ๐ โข ๐ข โข ๐ฝ ๐ฅ = โ 0 โค | ๐ผ | 1 โค ๐ ( โ 1 ) | ๐ผ | 1 โข โซ ๐ท โ ๐ผ ๐ ๐ผ โข ๐ข โข ๐ฝ โข ๐ฅ = โ 0 โค | ๐ผ | 1 โค ๐ โซ ๐ท ๐ ๐ผ โข โ ๐ผ ๐ข โข ๐ฝ โข ๐ฅ .
By the Hรถlder inequality,
| ๐ฟ โข ( ๐ข ) โ โซ ๐ท ๐ โข ๐ข โข ๐ฝ ๐ฅ | โค โ 0 โค | ๐ผ | 1 โค ๐ โ โ ๐ผ ๐ข โ ๐ฟ ๐ โข โ ๐ ๐ผ โ ๐ฃ ๐ผ โ ๐ฟ ๐ < ๐ โข โ 0 โค | ๐ผ | 1 โค ๐ ๐ ๐ โข ๐ฝ = ๐ .
Suppose that ๐ = 1 so that ๐ = โ . Define the constant ๐ถ ๐ฃ > 0 by
๐ถ ๐ฃ = โ 0 โค | ๐ผ | 1 โค ๐ โ ๐ฃ ๐ผ โ ๐ฟ โ .
Since ๐พ is totally bounded, there exists a number ๐ โ โ and functions ๐ 1 , โฆ , ๐ ๐ โ ๐พ such that, for any ๐ข โ ๐พ ,
โ ๐ข โ ๐ ๐ โ ๐ ๐ , 1 < ๐ 3 โข ๐ถ ๐ฃ
for some ๐ โ { 1 , โฆ , ๐ } . Let ๐ ๐ โ ๐ถ ๐ โ โข ( ๐ท ) denote a standard mollifier for any ๐ > 0 . We can find ๐ > 0 small enough such that
max ๐ผ โก max ๐ โ { 1 , โฆ , ๐ } โก โ ๐ ๐ โ โ ๐ผ ๐ ๐ โ โ ๐ผ ๐ ๐ โ ๐ฟ 1 < ๐ 9 โข ๐ถ ๐ฃ .
Define ๐ ๐ผ = ๐ ๐ โ ๐ฃ ๐ผ โ ๐ถ โข ( ๐ท ) and note that โ ๐ ๐ผ โ ๐ฟ โ โค โ ๐ฃ ๐ผ โ ๐ฟ โ . By Fubiniโs theorem, we find
โ 0 โค | ๐ผ | 1 โค ๐ | โซ ๐ท ( ๐ ๐ผ โ ๐ฃ ๐ผ ) โข โ ๐ผ ๐ ๐ โข ๐ฝ โข ๐ฅ | = โ 0 โค | ๐ผ | 1 โค ๐ | โซ ๐ท ๐ฃ ๐ผ โข ( ๐ ๐ โ โ ๐ผ ๐ ๐ โ โ ๐ผ ๐ ๐ ) โข ๐ฝ ๐ฅ | โค โ 0 โค | ๐ผ | 1 โค ๐ โ ๐ฃ ๐ผ โ ๐ฟ โ โข โ ๐ ๐ โ โ ๐ผ ๐ ๐ โ โ ๐ผ ๐ ๐ โ ๐ฟ 1 < ๐ 9 .
Since โ ๐ผ ๐ ๐ โ ๐ฟ 1 โข ( ๐ท ) , by Lusinโs theorem, we can find a compact set ๐ด โ ๐ท such that
max ๐ผ โก max ๐ โ { 1 , โฆ , ๐ } โข โซ ๐ท โ ๐ด | โ ๐ผ ๐ ๐ | โข ๐ฝ ๐ฅ < ๐ 18 โข ๐ถ ๐ฃ .
Since ๐ถ ๐ โ โข ( ๐ท ) is dense in ๐ถ โข ( ๐ท ) over compact sets, we can find functions ๐ค ๐ผ โ ๐ถ ๐ โ โข ( ๐ท ) such that
sup ๐ฅ โ ๐ด | ๐ค ๐ผ โข ( ๐ฅ ) โ ๐ ๐ผ โข ( ๐ฅ ) | โค ๐ 9 โข ๐ โข ๐ฝ
where ๐ฝ = | { ๐ผ โ โ ๐ : | ๐ผ | 1 โค ๐ } | and โ ๐ค ๐ผ โ ๐ฟ โ โค โ ๐ ๐ผ โ ๐ฟ โ โค โ ๐ฃ ๐ผ โ ๐ฟ โ . We have,
โ 0 โค | ๐ผ | 1 โค ๐ โซ ๐ท | ( ๐ค ๐ผ โ ๐ฃ ๐ผ ) โข โ ๐ผ ๐ ๐ | โข ๐ฝ ๐ฅ = โ 0 โค | ๐ผ | 1 โค ๐ ( โซ ๐ด | ( ๐ค ๐ผ โ ๐ฃ ๐ผ ) โข โ ๐ผ ๐ ๐ | โข ๐ฝ ๐ฅ + โซ ๐ท โ ๐ด | ( ๐ค ๐ผ โ ๐ฃ ๐ผ ) โข โ ๐ผ ๐ ๐ | โข ๐ฝ ๐ฅ )
โค โ 0 โค | ๐ผ | 1 โค ๐ ( โซ ๐ด | ( ๐ค ๐ผ โ ๐ ๐ผ ) โข โ ๐ผ ๐ ๐ | โข ๐ฝ ๐ฅ + โซ ๐ท | ( ๐ ๐ผ โ ๐ฃ ๐ผ ) โข โ ๐ผ ๐ ๐ | โข ๐ฝ ๐ฅ + 2 โข โฅ ๐ฃ ๐ผ โฅ ๐ฟ โ โข โซ ๐ท โ ๐ด | โ ๐ผ ๐ ๐ | โข ๐ฝ ๐ฅ )
โค โ 0 โค | ๐ผ | 1 โค ๐ sup ๐ฅ โ ๐ด | ๐ค ๐ผ โข ( ๐ฅ ) โ ๐ ๐ผ โข ( ๐ฅ ) | โข โ โ ๐ผ ๐ ๐ โ ๐ฟ 1 + 2 โข ๐ 9
< ๐ 3 .
Let
๐ = โ 0 โค | ๐ผ | 1 โค ๐ ( โ 1 ) | ๐ผ | 1 โข โ ๐ผ ๐ค ๐ผ .
Then, by definition of a weak derivative,
โซ ๐ท ๐ โข ๐ข โข ๐ฝ ๐ฅ = โ 0 โค | ๐ผ | 1 โค ๐ ( โ 1 ) | ๐ผ | 1 โข โซ ๐ท โ ๐ผ ๐ค ๐ผ โข ๐ข โข ๐ฝ โข ๐ฅ = โ 0 โค | ๐ผ | 1 โค ๐ โซ ๐ท ๐ค ๐ผ โข โ ๐ผ ๐ข โข ๐ฝ โข ๐ฅ .
Finally,
| ๐ฟ โข ( ๐ข ) โ โซ ๐ท ๐ โข ๐ข โข ๐ฝ ๐ฅ |
โค โ 0 โค | ๐ผ | 1 โค ๐ โซ ๐ท | ๐ฃ ๐ผ โข โ ๐ผ ๐ข โ ๐ค ๐ผ โข โ ๐ผ ๐ข | โข ๐ฝ ๐ฅ
โค โ 0 โค | ๐ผ | 1 โค ๐ ( โซ ๐ท | ๐ฃ ๐ผ โข ( โ ๐ผ ๐ข โ โ ๐ผ ๐ ๐ ) | โข ๐ฝ ๐ฅ + โซ ๐ท | ๐ฃ ๐ผ โข โ ๐ผ ๐ ๐ โ ๐ค ๐ผ โข โ ๐ผ ๐ข | โข ๐ฝ ๐ฅ )
โค โ 0 โค | ๐ผ | 1 โค ๐ ( โ ๐ฃ ๐ผ โ ๐ฟ โ โข โ ๐ข โ ๐ ๐ โ ๐ ๐ , 1 + โซ ๐ท | ( ๐ฃ ๐ผ โ ๐ค ๐ผ ) โข โ ๐ผ ๐ ๐ | โข ๐ฝ ๐ฅ + โซ ๐ท | ( โ ๐ผ ๐ ๐ โ โ ๐ผ ๐ข ) โข ๐ค ๐ผ | โข ๐ฝ ๐ฅ )
< 2 โข ๐ 3 + โ 0 โค | ๐ผ | 1 โค ๐ โ ๐ค ๐ผ โ ๐ฟ โ โข โ ๐ข โ ๐ ๐ โ ๐ ๐ , 1
< ๐ .
Lemma 29
Let ๐ท โ โ ๐ be a domain and ๐ฟ โ ( ๐ถ ๐ ( ๐ท ยฏ ) ) โ for some ๐ โ โ 0 . For any compact set ๐พ โ ๐ถ ๐ โข ( ๐ท ยฏ ) and ๐ > 0 , there exist distinct points ๐ฆ 11 , โฆ , ๐ฆ 1 โข ๐ 1 , โฆ , ๐ฆ ๐ฝ โข ๐ ๐ฝ โ ๐ท and numbers ๐ 11 , โฆ , ๐ 1 โข ๐ 1 , โฆ , ๐ ๐ฝ โข ๐ ๐ฝ โ โ such that
sup ๐ข โ ๐พ | ๐ฟ โข ( ๐ข ) โ โ ๐ = 1 ๐ฝ โ ๐ = 1 ๐ ๐ ๐ ๐ โข ๐ โข โ ๐ผ ๐ ๐ข โข ( ๐ฆ ๐ โข ๐ ) | โค ๐
where ๐ผ 1 , โฆ , ๐ผ ๐ฝ is an enumeration of the set { ๐ผ โ โ 0 ๐ : 0 โค | ๐ผ | 1 โค ๐ } .
Proof By Lemma 27, there exist finite, signed, Radon measures { ๐ ๐ผ } 0 โค | ๐ผ | 1 โค ๐ such that
๐ฟ โข ( ๐ข ) = โ 0 โค | ๐ผ | 1 โค ๐ โซ ๐ท ยฏ โ ๐ผ ๐ข โข ๐ฝ โข ๐ ๐ผ , โ ๐ข โ ๐ถ ๐ โข ( ๐ท ยฏ ) .
Let ๐ผ 1 , โฆ , ๐ผ ๐ฝ be an enumeration of the set { ๐ผ โ โ 0 ๐ : 0 โค | ๐ผ | 1 โค ๐ } . By weak density of the Dirac measures (Bogachev, 2007, Example 8.1.6), we can find points ๐ฆ 11 , โฆ , ๐ฆ 1 โข ๐ 1 , โฆ , ๐ฆ ๐ฝ โข 1 , โฆ , ๐ฆ ๐ฝ โข ๐ ๐ฝ โ ๐ท ยฏ as well as numbers ๐ 11 , โฆ , ๐ ๐ฝ โข ๐ ๐ฝ โ โ such that
| โซ ๐ท ยฏ โ ๐ผ ๐ ๐ข โข ๐ฝ โข ๐ ๐ผ ๐ โ โ ๐ = 1 ๐ ๐ ๐ ๐ โข ๐ โข โ ๐ผ ๐ ๐ข โข ( ๐ฆ ๐ โข ๐ ) | โค ๐ 4 โข ๐ฝ , โ ๐ข โ ๐ถ ๐ โข ( ๐ท ยฏ )
for any ๐ โ { 1 , โฆ , ๐ฝ } . Therefore,
| โ ๐ = 1 ๐ฝ โซ ๐ท ยฏ โ ๐ผ ๐ ๐ข โข ๐ฝ โข ๐ ๐ผ ๐ โ โ ๐ = 1 ๐ฝ โ ๐ = 1 ๐ ๐ ๐ ๐ โข ๐ โข โ ๐ผ ๐ ๐ข โข ( ๐ฆ ๐ โข ๐ ) | โค ๐ 4 , โ ๐ข โ ๐ถ ๐ โข ( ๐ท ยฏ ) .
Define the constant
๐ โ โ ๐ = 1 ๐ฝ โ ๐ = 1 ๐ ๐ | ๐ ๐ โข ๐ | .
Since ๐พ is compact, we can find functions ๐ 1 , โฆ , ๐ ๐ โ ๐พ such that, for any ๐ข โ ๐พ , there exists ๐ โ { 1 , โฆ , ๐ } such that
โ ๐ข โ ๐ ๐ โ ๐ถ ๐ โค ๐ 4 โข ๐ .
Suppose that some ๐ฆ ๐ โข ๐ โ โ ๐ท . By uniform continuity, we can find a point ๐ฆ ~ ๐ โข ๐ โ ๐ท such that
max ๐ โ { 1 , โฆ , ๐ } โก | โ ๐ผ ๐ ๐ ๐ โข ( ๐ฆ ๐ โข ๐ ) โ โ ๐ผ ๐ ๐ ๐ โข ( ๐ฆ ~ ๐ โข ๐ ) | โค ๐ 4 โข ๐ .
Denote
๐ โข ( ๐ข ) = โ ๐ = 1 ๐ฝ โ ๐ = 1 ๐ ๐ ๐ ๐ โข ๐ โข โ ๐ผ ๐ ๐ข โข ( ๐ฆ ๐ โข ๐ )
and by ๐ ~ โข ( ๐ข ) the sum ๐ โข ( ๐ข ) with ๐ฆ ๐ โข ๐ replaced by ๐ฆ ~ ๐ โข ๐ . Then, for any ๐ข โ ๐พ , we have
| ๐ฟ โข ( ๐ข ) โ ๐ ~ โข ( ๐ข ) |
โค | ๐ฟ โข ( ๐ข ) โ ๐ โข ( ๐ข ) | + | ๐ โข ( ๐ข ) โ ๐ ~ โข ( ๐ข ) |
โค ๐ 4 + | ๐ ๐ โข ๐ โข โ ๐ผ ๐ ๐ข โข ( ๐ฆ ~ ๐ โข ๐ ) โ ๐ ๐ โข ๐ โข โ ๐ผ ๐ ๐ข โข ( ๐ฆ ๐ โข ๐ ) |
โค ๐ 4 + | ๐ ๐ โข ๐ โข โ ๐ผ ๐ ๐ข โข ( ๐ฆ ~ ๐ โข ๐ ) โ ๐ ๐ โข ๐ โข โ ๐ผ ๐ ๐ ๐ โข ( ๐ฆ ~ ๐ โข ๐ ) | + | ๐ ๐ โข ๐ โข โ ๐ผ ๐ ๐ ๐ โข ( ๐ฆ ~ ๐ โข ๐ ) โ ๐ ๐ โข ๐ โข โ ๐ผ ๐ ๐ข โข ( ๐ฆ ๐ โข ๐ ) |
โค ๐ 4 + | ๐ ๐ โข ๐ | โข โ ๐ข โ ๐ ๐ โ ๐ถ ๐ + | ๐ ๐ โข ๐ โข โ ๐ผ ๐ ๐ ๐ โข ( ๐ฆ ~ ๐ โข ๐ ) โ ๐ ๐ โข ๐ โข โ ๐ผ ๐ ๐ ๐ โข ( ๐ฆ ๐ โข ๐ ) | + | ๐ ๐ โข ๐ โข โ ๐ผ ๐ ๐ ๐ โข ( ๐ฆ ๐ โข ๐ ) โ ๐ ๐ โข ๐ โข โ ๐ผ ๐ ๐ข โข ( ๐ฆ ๐ โข ๐ ) |
โค ๐ 4 + 2 โข | ๐ ๐ โข ๐ | โข โ ๐ข โ ๐ ๐ โ ๐ถ ๐ + | ๐ ๐ โข ๐ | โข | โ ๐ผ ๐ ๐ ๐ โข ( ๐ฆ ~ ๐ โข ๐ ) โ โ ๐ผ ๐ ๐ ๐ โข ( ๐ฆ ๐ โข ๐ ) |
โค ๐ .
Since there are a finite number of points, this implies that all points ๐ฆ ๐ โข ๐ can be chosen in ๐ท . Suppose now that ๐ฆ ๐ โข ๐ = ๐ฆ ๐ โข ๐ for some ( ๐ , ๐ ) โ ( ๐ , ๐ ) . As before, we can always find a point ๐ฆ ~ ๐ โข ๐ distinct from all others such that
max ๐ โ { 1 , โฆ , ๐ } โก | โ ๐ผ ๐ ๐ ๐ โข ( ๐ฆ ๐ โข ๐ ) โ โ ๐ผ ๐ ๐ ๐ โข ( ๐ฆ ~ ๐ โข ๐ ) | โค ๐ 4 โข ๐ .
Repeating the previous argument then shows that all points ๐ฆ ๐ โข ๐ can be chosen distinctly as desired.
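As a hypothetical numerical illustration of the object Lemma 29 produces (this sketch is not code from the paper): a continuous linear functional can be approximated by a finite weighted sum of Dirac point evaluations. We take the functional ๐ฟ โข ( ๐ข ) = โซ ๐ข over ๐ท = ( 0 , 1 ) , with assumed midpoint nodes and uniform weights standing in for the ๐ฆ ๐ and ๐ ๐ of the lemma.

```python
import numpy as np

# Hypothetical illustration: approximate L(u) = int_0^1 u(x) dx by the
# Dirac-type sum L_N(u) = sum_j c_j u(y_j), the finite object of Lemma 29.
N = 1000
y = (np.arange(N) + 0.5) / N      # distinct points y_j in D = (0, 1)
c = np.full(N, 1.0 / N)           # weights c_j (midpoint-rule choice)

def L_N(u):
    # evaluate the finite weighted sum of point evaluations
    return float(c @ u(y))

# compare against exact integrals for a few test functions u
assert abs(L_N(np.sin) - (1.0 - np.cos(1.0))) < 1e-5
assert abs(L_N(np.exp) - (np.e - 1.0)) < 1e-5
assert abs(L_N(lambda x: x ** 3) - 0.25) < 1e-5
```

The error here decays like ๐ โ 2 for smooth ๐ข , though the lemma itself only asserts uniform smallness over a compact set.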
Lemma 30
Let ๐ท โ โ ๐ be a domain and ๐ฟ โ ( ๐ถ ( ๐ท ยฏ ) ) โ . For any compact set ๐พ โ ๐ถ โข ( ๐ท ยฏ ) and ๐ > 0 , there exists a function ๐ โ ๐ถ ๐ โ โข ( ๐ท ) such that
sup ๐ข โ ๐พ | ๐ฟ โข ( ๐ข ) โ โซ ๐ท ๐ โข ๐ข โข ๐ฝ ๐ฅ | < ๐ .
Proof By Lemma 29, we can find distinct points ๐ฆ 1 , โฆ , ๐ฆ ๐ โ ๐ท as well as numbers ๐ 1 , โฆ , ๐ ๐ โ โ such that
sup ๐ข โ ๐พ | ๐ฟ โข ( ๐ข ) โ โ ๐ = 1 ๐ ๐ ๐ โข ๐ข โข ( ๐ฆ ๐ ) | โค ๐ 3 .
Define the constant
๐ โ โ ๐ = 1 ๐ | ๐ ๐ | .
Since ๐พ is compact, there exist functions ๐ 1 , โฆ , ๐ ๐ฝ โ ๐พ such that, for any ๐ข โ ๐พ , there exists some ๐ โ { 1 , โฆ , ๐ฝ } such that
โ ๐ข โ ๐ ๐ โ ๐ถ โค ๐ 6 โข ๐ โข ๐ .
Let ๐ > 0 be such that the open balls ๐ต ๐ โข ( ๐ฆ ๐ ) โ ๐ท are pairwise disjoint. Let ๐ ๐ โ ๐ถ ๐ โ โข ( โ ๐ ) denote the standard mollifier with parameter ๐ > 0 , noting that supp โข ๐ ๐ = ๐ต ๐ โข ( 0 ) . We can find a number 0 < ๐พ โค ๐ such that
max ๐ โ { 1 , โฆ , ๐ฝ } , ๐ โ { 1 , โฆ , ๐ } โก | โซ ๐ท ๐ ๐พ โข ( ๐ฅ โ ๐ฆ ๐ ) โข ๐ ๐ โข ( ๐ฅ ) โข ๐ฝ ๐ฅ โ ๐ ๐ โข ( ๐ฆ ๐ ) | โค ๐ 3 โข ๐ โข ๐ .
Define ๐ : โ ๐ โ โ by
๐ โข ( ๐ฅ ) = โ ๐ = 1 ๐ ๐ ๐ โข ๐ ๐พ โข ( ๐ฅ โ ๐ฆ ๐ ) , โ ๐ฅ โ โ ๐ .
Since supp ๐ ๐พ ( โ โ ๐ฆ ๐ ) โ ๐ต ๐ ( ๐ฆ ๐ ) , we have that ๐ โ ๐ถ ๐ โ โข ( ๐ท ) . Then, for any ๐ข โ ๐พ ,
| ๐ฟ โข ( ๐ข ) โ โซ ๐ท ๐ โข ๐ข โข ๐ฝ ๐ฅ | โค | ๐ฟ โข ( ๐ข ) โ โ ๐ = 1 ๐ ๐ ๐ โข ๐ข โข ( ๐ฆ ๐ ) | + | โ ๐ = 1 ๐ ๐ ๐ โข ๐ข โข ( ๐ฆ ๐ ) โ โซ ๐ท ๐ โข ๐ข โข ๐ฝ ๐ฅ |
โค ๐ 3 + โ ๐ = 1 ๐ | ๐ ๐ | โข | ๐ข โข ( ๐ฆ ๐ ) โ โซ ๐ท ๐ ๐พ โข ( ๐ฅ โ ๐ฆ ๐ ) โข ๐ข โข ( ๐ฅ ) โข ๐ฝ ๐ฅ |
โค ๐ 3 + ๐ โข โ ๐ = 1 ๐ ( | ๐ข โข ( ๐ฆ ๐ ) โ ๐ ๐ โข ( ๐ฆ ๐ ) | + | ๐ ๐ โข ( ๐ฆ ๐ ) โ โซ ๐ท ๐ ๐พ โข ( ๐ฅ โ ๐ฆ ๐ ) โข ๐ข โข ( ๐ฅ ) โข ๐ฝ ๐ฅ | )
โค ๐ 3 + ๐ โข ๐ โข โ ๐ข โ ๐ ๐ โ ๐ถ + ๐ โข โ ๐ = 1 ๐ ( | ๐ ๐ โข ( ๐ฆ ๐ ) โ โซ ๐ท ๐ ๐พ โข ( ๐ฅ โ ๐ฆ ๐ ) โข ๐ ๐ โข ( ๐ฅ ) โข ๐ฝ ๐ฅ | + | โซ ๐ท ๐ ๐พ ( ๐ฅ โ ๐ฆ ๐ ) ( ๐ ๐ ( ๐ฅ ) โ ๐ข ( ๐ฅ ) ) ๐ฝ ๐ฅ | )
โค ๐ 3 + ๐ โข ๐ โข โ ๐ข โ ๐ ๐ โ ๐ถ + ๐ โข ๐ โข ๐ 3 โข ๐ โข ๐ + ๐ โข โ ๐ ๐ โ ๐ข โ ๐ถ โข โ ๐ = 1 ๐ โซ ๐ท ๐ ๐พ โข ( ๐ฅ โ ๐ฆ ๐ ) โข ๐ฝ ๐ฅ
โค 2 โข ๐ 3 + 2 โข ๐ โข ๐ โข โ ๐ข โ ๐ ๐ โ ๐ถ
โค ๐
where we use the fact that mollifiers are non-negative and integrate to one.
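A minimal numerical sketch of the mollification step just used (assumed setup, not the paper's code): the Dirac sum โ ๐ ๐ โข ๐ข โข ( ๐ฆ ๐ ) is replaced by the smooth-kernel functional โซ ๐ โข ๐ข with ๐ a sum of mollifier bumps centered at the ๐ฆ ๐ . The points, weights, and the value of ๐พ below are illustrative choices.

```python
import numpy as np

# Sketch: f(x) = sum_j c_j phi_gamma(x - y_j) with phi_gamma a compactly
# supported bump normalized to integrate to one, as in the proof of Lemma 30.
def bump(t, gamma):
    s = t / gamma
    out = np.zeros_like(s)
    m = np.abs(s) < 1.0
    out[m] = np.exp(-1.0 / (1.0 - s[m] ** 2))   # standard mollifier profile
    return out

yj = np.array([0.25, 0.5, 0.8])   # distinct points y_j in D = (0, 1)
cj = np.array([1.0, -2.0, 0.5])   # weights c_j
gamma = 0.01                       # small mollification parameter

x = np.linspace(0.0, 1.0, 200001)
dx = x[1] - x[0]
f = np.zeros_like(x)
for ci, yi in zip(cj, yj):
    b = bump(x - yi, gamma)
    f += ci * b / (np.sum(b) * dx)  # normalize each bump to unit integral

u = np.cos                         # a test function u in C(D̄)
dirac_sum = float(cj @ u(yj))
smoothed = float(np.sum(f * u(x)) * dx)
assert abs(dirac_sum - smoothed) < 1e-3
```

Shrinking ๐พ drives the two values together, matching the uniform bound of the lemma on compact sets of inputs.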
Lemma 31
Let ๐ท โ โ ๐ be a domain and ๐ฟ โ ( ๐ถ ๐ ( ๐ท ยฏ ) ) โ . For any compact set ๐พ โ ๐ถ ๐ โข ( ๐ท ยฏ ) and ๐ > 0 , there exist functions ๐ 1 , โฆ , ๐ ๐ฝ โ ๐ถ ๐ โ โข ( ๐ท ) such that
sup ๐ข โ ๐พ | ๐ฟ โข ( ๐ข ) โ โ ๐ = 1 ๐ฝ โซ ๐ท ๐ ๐ โข โ ๐ผ ๐ ๐ข โข ๐ฝ โข ๐ฅ | < ๐
where ๐ผ 1 , โฆ , ๐ผ ๐ฝ is an enumeration of the set { ๐ผ โ โ 0 ๐ : 0 โค | ๐ผ | 1 โค ๐ } .
Proof By Lemma 29, we find distinct points ๐ฆ 11 , โฆ , ๐ฆ 1 โข ๐ 1 , โฆ , ๐ฆ ๐ฝ โข ๐ ๐ฝ โ ๐ท and numbers ๐ 11 , โฆ , ๐ ๐ฝ โข ๐ ๐ฝ โ โ such that
sup ๐ข โ ๐พ | ๐ฟ โข ( ๐ข ) โ โ ๐ = 1 ๐ฝ โ ๐ = 1 ๐ ๐ ๐ ๐ โข ๐ โข โ ๐ผ ๐ ๐ข โข ( ๐ฆ ๐ โข ๐ ) | โค ๐ 2 .
Applying the proof of Lemma 30 ๐ฝ times to each of the inner sums, we find functions ๐ 1 , โฆ , ๐ ๐ฝ โ ๐ถ ๐ โ โข ( ๐ท ) such that
max ๐ โ { 1 , โฆ , ๐ฝ } โก | โซ ๐ท ๐ ๐ โข โ ๐ผ ๐ ๐ข โข ๐ฝ โข ๐ฅ โ โ ๐ = 1 ๐ ๐ ๐ ๐ โข ๐ โข โ ๐ผ ๐ ๐ข โข ( ๐ฆ ๐ โข ๐ ) | โค ๐ 2 โข ๐ฝ .
Then, for any ๐ข โ ๐พ ,
| ๐ฟ โข ( ๐ข ) โ โ ๐ = 1 ๐ฝ โซ ๐ท ๐ ๐ โข โ ๐ผ ๐ ๐ข โข ๐ฝ โข ๐ฅ | โค | ๐ฟ โข ( ๐ข ) โ โ ๐ = 1 ๐ฝ โ ๐ = 1 ๐ ๐ ๐ ๐ โข ๐ โข โ ๐ผ ๐ ๐ข โข ( ๐ฆ ๐ โข ๐ ) | + โ ๐ = 1 ๐ฝ | โซ ๐ท ๐ ๐ โข โ ๐ผ ๐ ๐ข โข ๐ฝ โข ๐ฅ โ โ ๐ = 1 ๐ ๐ ๐ ๐ โข ๐ โข โ ๐ผ ๐ ๐ข โข ( ๐ฆ ๐ โข ๐ ) | โค ๐
as desired.
D
The following lemmas show that the three pieces used in constructing the approximation from Lemma 22, which are schematically depicted in Figure 16, can all be approximated by NO(s). Lemma 32 shows that ๐น ๐ฝ : ๐ โ โ ๐ฝ can be approximated by an element of ๐จ๐ฎ by mapping to a vector-valued constant function. Similarly, Lemma 34 shows that ๐บ ๐ฝ โฒ : โ ๐ฝ โฒ โ ๐ฐ can be approximated by an element of ๐จ๐ฎ by mapping a vector-valued constant function to the coefficients of a basis expansion. Finally, Lemma 35 shows that NO(s) can exactly represent any standard neural network by viewing the inputs and outputs as vector-valued constant functions.
Lemma 32
Let Assumption 9 hold. Let { ๐ ๐ } ๐ = 1 ๐ โ ๐ โ for some ๐ โ โ . Define the map ๐น : ๐ โ โ ๐ by
๐น ( ๐ ) = ( ๐ 1 ( ๐ ) , โฆ , ๐ ๐ ( ๐ ) ) , โ ๐ โ ๐ .
Then, for any compact set ๐พ โ ๐ , ๐ โ ๐ 0 , and ๐ > 0 , there exists a number ๐ฟ โ โ and neural network ๐ โ ๐ญ ๐ฟ โข ( ๐ ; โ ๐ ร โ ๐ , โ ๐ ร 1 ) such that
sup ๐ โ ๐พ sup ๐ฆ โ ๐ท ยฏ | ๐น โข ( ๐ ) โ โซ ๐ท ๐ โข ( ๐ฆ , ๐ฅ ) โข ๐ โข ( ๐ฅ ) โข ๐ฝ ๐ฅ | 1 โค ๐ .
Proof Since ๐พ is bounded, there exists a number ๐ > 0 such that
sup ๐ โ ๐พ โ ๐ โ ๐ โค ๐ .
Define the constant
๐ โ { ๐ , ๐ = ๐ ๐ , ๐ โข ( ๐ท ) ; ๐ โข | ๐ท | , ๐ด = ๐ถ โข ( ๐ท ยฏ ) }
and let ๐ = 1 if ๐ = ๐ถ โข ( ๐ท ยฏ ) . By Lemma 28 and Lemma 30, there exist functions ๐ 1 , โฆ , ๐ ๐ โ ๐ถ ๐ โ โข ( ๐ท ) such that
max ๐ โ { 1 , โฆ , ๐ } โข sup ๐ โ ๐พ | ๐ ๐ โข ( ๐ ) โ โซ ๐ท ๐ ๐ โข ๐ โข ๐ฝ ๐ฅ | โค ๐ 2 โข ๐ 1 ๐ .
Since ๐ โ ๐ 0 , there exists some ๐ฟ โ โ and neural networks ๐ 1 , โฆ , ๐ ๐ โ ๐ญ ๐ฟ โข ( ๐ ; โ ๐ ) such that
max ๐ โ { 1 , โฆ , ๐ } โก โ ๐ ๐ โ ๐ ๐ โ ๐ถ โค ๐ 2 โข ๐ โข ๐ 1 ๐ .
By setting all weights associated to the first argument to zero, we can modify each neural network ๐ ๐ to a neural network ๐ ๐ โ ๐ญ ๐ฟ โข ( ๐ ; โ ๐ ร โ ๐ ) so that
๐ ๐ โข ( ๐ฆ , ๐ฅ ) = ๐ ๐ โข ( ๐ฅ ) โข ๐ โข ( ๐ฆ ) , โ ๐ฆ , ๐ฅ โ โ ๐ .
Define ๐ โ ๐ญ ๐ฟ โข ( ๐ ; โ ๐ ร โ ๐ , โ ๐ ร 1 ) by
๐ โข ( ๐ฆ , ๐ฅ ) = [ ๐ 1 โข ( ๐ฆ , ๐ฅ ) , โฆ , ๐ ๐ โข ( ๐ฆ , ๐ฅ ) ] ๐ .
Then for any ๐ โ ๐พ and ๐ฆ โ ๐ท ยฏ , we have
| ๐น โข ( ๐ ) โ โซ ๐ท ๐ โข ( ๐ฆ , ๐ฅ ) โข ๐ โข ๐ฝ ๐ฅ | ๐ ๐ = โ ๐ = 1 ๐ | ๐ ๐ โข ( ๐ ) โ โซ ๐ท ๐ โข ( ๐ฆ ) โข ๐ ๐ โข ( ๐ฅ ) โข ๐ โข ( ๐ฅ ) โข ๐ฝ ๐ฅ | ๐
โค 2 ๐ โ 1 โข โ ๐ = 1 ๐ ( | ๐ ๐ โข ( ๐ ) โ โซ ๐ท ๐ ๐ โข ๐ โข ๐ฝ ๐ฅ | ๐ + | โซ ๐ท ( ๐ ๐ โ ๐ ๐ ) โข ๐ โข ๐ฝ ๐ฅ | ๐ )
โค ๐ ๐ 2 + 2 ๐ โ 1 โข ๐ โข ๐ ๐ โข โ ๐ ๐ โ ๐ ๐ โ ๐ถ ๐
โค ๐ ๐
and the result follows by finite dimensional norm equivalence.
Lemma 33
Suppose ๐ท โ โ ๐ is a domain and let { ๐ ๐ } ๐ = 1 ๐ โ ( ๐ถ ๐ ( ๐ท ยฏ ) ) โ for some ๐ , ๐ โ โ . Define the map ๐น : ๐ โ โ ๐ by
๐น ( ๐ ) = ( ๐ 1 ( ๐ ) , โฆ , ๐ ๐ ( ๐ ) ) , โ ๐ โ ๐ถ ๐ ( ๐ท ยฏ ) .
Then, for any compact set ๐พ โ ๐ถ ๐ โข ( ๐ท ยฏ ) , ๐ โ ๐ 0 , and ๐ > 0 , there exists a number ๐ฟ โ โ and neural network ๐ โ ๐ญ ๐ฟ โข ( ๐ ; โ ๐ ร โ ๐ , โ ๐ ร ๐ฝ ) such that
sup ๐ โ ๐พ sup ๐ฆ โ ๐ท ยฏ | ๐น ( ๐ ) โ โซ ๐ท ๐ ( ๐ฆ , ๐ฅ ) ( โ ๐ผ 1 ๐ ( ๐ฅ ) , โฆ , โ ๐ผ ๐ฝ ๐ ( ๐ฅ ) ) ๐ฝ ๐ฅ | 1 โค ๐
where ๐ผ 1 , โฆ , ๐ผ ๐ฝ is an enumeration of the set { ๐ผ โ โ ๐ : 0 โค | ๐ผ | 1 โค ๐ } .
Proof The proof follows as in Lemma 32 by replacing the use of Lemmas 28 and 30 by Lemma 31.
Lemma 34
Let Assumption 10 hold. Let { ๐ ๐ } ๐ = 1 ๐ โ ๐ฐ for some ๐ โ โ . Define the map ๐บ : โ ๐ โ ๐ฐ by
๐บ โข ( ๐ค ) = โ ๐ = 1 ๐ ๐ค ๐ โข ๐ ๐ , โ ๐ค โ โ ๐ .
Then, for any compact set ๐พ โ โ ๐ , ๐ โ ๐ ๐ 2 , and ๐ > 0 , there exists a number ๐ฟ โ โ and a neural network ๐ โ ๐ญ ๐ฟ โข ( ๐ ; โ ๐ โฒ ร โ ๐ โฒ , โ 1 ร ๐ ) such that
sup ๐ค โ ๐พ โ ๐บ โข ( ๐ค ) โ โซ ๐ท โฒ ๐ โข ( โ , ๐ฅ ) โข ๐ค โข ๐ โข ( ๐ฅ ) โข ๐ฝ ๐ฅ โ ๐ฐ โค ๐ .
Proof Since ๐พ โ โ ๐ is compact, there is a number ๐ โฅ 1 such that
sup ๐ค โ ๐พ | ๐ค | 1 โค ๐ .
If ๐ฐ = ๐ฟ ๐ 2 โข ( ๐ท โฒ ) , then density of ๐ถ ๐ โ โข ( ๐ท โฒ ) implies there are functions ๐ ~ 1 , โฆ , ๐ ~ ๐ โ ๐ถ โ โข ( ๐ท ยฏ โฒ ) such that
max ๐ โ { 1 , โฆ , ๐ } โก โ ๐ ๐ โ ๐ ~ ๐ โ ๐ฐ โค ๐ 2 โข ๐ โข ๐ .
Similarly, if ๐ฐ = ๐ ๐ 2 , ๐ 2 โข ( ๐ท โฒ ) , then density of the restriction of functions in ๐ถ ๐ โ โข ( โ ๐ โฒ ) to ๐ท โฒ (Leoni, 2009, Theorem 11.35) implies the same result. If ๐ฐ = ๐ถ ๐ 2 โข ( ๐ท ยฏ โฒ ) then we set ๐ ~ ๐ = ๐ ๐ for any ๐ โ { 1 , โฆ , ๐ } . Define ๐ ~ : โ ๐ โฒ ร โ ๐ โฒ โ โ 1 ร ๐ by
๐ ~ โข ( ๐ฆ , ๐ฅ ) = 1 | ๐ท โฒ | โข [ ๐ ~ 1 โข ( ๐ฆ ) , โฆ , ๐ ~ ๐ โข ( ๐ฆ ) ] .
Then, for any ๐ค โ ๐พ ,
โ ๐บ โข ( ๐ค ) โ โซ ๐ท โฒ ๐ ~ โข ( โ , ๐ฅ ) โข ๐ค โข ๐ โข ( ๐ฅ ) โข ๐ฝ ๐ฅ โ ๐ฐ = โ โ ๐ = 1 ๐ ๐ค ๐ โข ๐ ๐ โ โ ๐ = 1 ๐ ๐ค ๐ โข ๐ ~ ๐ โ ๐ฐ โค โ ๐ = 1 ๐ | ๐ค ๐ | โข โ ๐ ๐ โ ๐ ~ ๐ โ ๐ฐ โค ๐ 2 .
Since ๐ โ ๐ ๐ 2 , there exists neural networks ๐ 1 , โฆ , ๐ ๐ โ ๐ญ 1 โข ( ๐ ; โ ๐ โฒ ) such that
max ๐ โ { 1 , โฆ , ๐ } โก โ ๐ ~ ๐ โ ๐ ๐ โ ๐ถ ๐ 2 โค ๐ 2 โข ๐ โข ๐ โข ( ๐ฝ โข | ๐ท โฒ | ) 1 ๐ 2
where, if ๐ฐ = ๐ถ ๐ 2 โข ( ๐ท ยฏ โฒ ) , we set ๐ฝ = 1 / | ๐ท โฒ | and ๐ 2 = 1 , and otherwise ๐ฝ = | { ๐ผ โ โ ๐ : | ๐ผ | 1 โค ๐ 2 } | . By setting all weights associated to the second argument to zero, we can modify each neural network ๐ ๐ to a neural network ๐ ๐ โ ๐ญ 1 โข ( ๐ ; โ ๐ โฒ ร โ ๐ โฒ ) so that
๐ ๐ โข ( ๐ฆ , ๐ฅ ) = ๐ ๐ โข ( ๐ฆ ) โข ๐ โข ( ๐ฅ ) , โ ๐ฆ , ๐ฅ โ โ ๐ โฒ .
Define ๐ โ ๐ญ 1 โข ( ๐ ; โ ๐ โฒ ร โ ๐ โฒ , โ 1 ร ๐ ) as
๐ โข ( ๐ฆ , ๐ฅ ) = 1 | ๐ท โฒ | โข [ ๐ 1 โข ( ๐ฆ , ๐ฅ ) , โฆ , ๐ ๐ โข ( ๐ฆ , ๐ฅ ) ] .
Then, for any ๐ค โ โ ๐ ,
โซ ๐ท โฒ ๐ โข ( ๐ฆ , ๐ฅ ) โข ๐ค โข ๐ โข ( ๐ฅ ) โข ๐ฝ ๐ฅ = โ ๐ = 1 ๐ ๐ค ๐ โข ๐ ๐ โข ( ๐ฆ ) .
We compute that, for any ๐ โ { 1 , โฆ , ๐ } ,
โ ๐ ๐ โ ๐ ~ ๐ โ ๐ฐ โค { | ๐ท โฒ | 1 / ๐ 2 โข โ ๐ ๐ โ ๐ ~ ๐ โ ๐ถ ๐ 2 , ๐ฐ = ๐ฟ ๐ 2 โข ( ๐ท โฒ ) ; ( ๐ฝ โข | ๐ท โฒ | ) 1 / ๐ 2 โข โ ๐ ๐ โ ๐ ~ ๐ โ ๐ถ ๐ 2 , ๐ฐ = ๐ ๐ 2 , ๐ 2 โข ( ๐ท โฒ ) ; โ ๐ ๐ โ ๐ ~ ๐ โ ๐ถ ๐ 2 , ๐ฐ = ๐ถ ๐ 2 โข ( ๐ท ยฏ โฒ ) }
hence, for any ๐ค โ ๐พ ,
โ โซ ๐ท โฒ ๐ โข ( ๐ฆ , ๐ฅ ) โข ๐ค โข ๐ โข ( ๐ฅ ) โข ๐ฝ ๐ฅ โ โ ๐ = 1 ๐ ๐ค ๐ โข ๐ ~ ๐ โ ๐ฐ โค โ ๐ = 1 ๐ | ๐ค ๐ | โข โ ๐ ๐ โ ๐ ~ ๐ โ ๐ฐ โค ๐ 2 .
By triangle inequality, for any ๐ค โ ๐พ , we have
โ ๐บ โข ( ๐ค ) โ โซ ๐ท ๐ โข ( โ , ๐ฅ ) โข ๐ค โข ๐ โข ( ๐ฅ ) โข ๐ฝ ๐ฅ โ ๐ฐ โค โ ๐บ โข ( ๐ค ) โ โซ ๐ท ๐ ~ โข ( โ , ๐ฅ ) โข ๐ค โข ๐ โข ( ๐ฅ ) โข ๐ฝ ๐ฅ โ ๐ฐ + โ โซ ๐ท ๐ ~ โข ( โ , ๐ฅ ) โข ๐ค โข ๐ โข ( ๐ฅ ) โข ๐ฝ ๐ฅ โ โซ ๐ท ๐ โข ( โ , ๐ฅ ) โข ๐ค โข ๐ โข ( ๐ฅ ) โข ๐ฝ ๐ฅ โ ๐ฐ
โค ๐ 2 + โ โซ ๐ท ๐ โข ( โ , ๐ฅ ) โข ๐ค โข ๐ โข ( ๐ฅ ) โข ๐ฝ ๐ฅ โ โ ๐ = 1 ๐ ๐ค ๐ โข ๐ ~ ๐ โ ๐ฐ
โค ๐
as desired.
Lemma 35
Let ๐ , ๐ , ๐ โฒ , ๐ , ๐ โ โ , ๐ , ๐ โ โ 0 , ๐ท โ โ ๐ and ๐ท โฒ โ โ ๐ be domains and ๐ 1 โ ๐ ๐ L . For any ๐ โ ๐ญ ๐ โข ( ๐ 1 ; โ ๐ , โ ๐ โฒ ) and ๐ 2 , ๐ 3 โ ๐ ๐ , there exists a ๐บ โ ๐ญ๐ฎ ๐ โข ( ๐ 1 , ๐ 2 , ๐ 3 ; ๐ท , ๐ท โฒ , โ ๐ , โ ๐ โฒ ) such that
๐ โข ( ๐ค ) = ๐บ โข ( ๐ค โข ๐ ) โข ( ๐ฅ ) , โ ๐ค โ โ ๐ , โ ๐ฅ โ ๐ท โฒ .
Proof We have that
๐ โข ( ๐ฅ ) = ๐ ๐ โข ๐ 1 โข ( โฆ โข ๐ 1 โข ๐ 1 โข ( ๐ 0 โข ๐ฅ + ๐ 0 ) + ๐ 1 โข โฆ ) + ๐ ๐ , โ ๐ฅ โ โ ๐
where ๐ 0 โ โ ๐ 0 ร ๐ , ๐ 1 โ โ ๐ 1 ร ๐ 0 , โฆ , ๐ ๐ โ โ ๐ โฒ ร ๐ ๐ โ 1 and ๐ 0 โ โ ๐ 0 , ๐ 1 โ โ ๐ 1 , โฆ , ๐ ๐ โ โ ๐ โฒ for some ๐ 0 , โฆ , ๐ ๐ โ 1 โ โ . By setting all parameters to zero except for the last bias term, we can find ๐ ( 0 ) โ ๐ญ 1 โข ( ๐ 2 ; โ ๐ ร โ ๐ , โ ๐ 0 ร ๐ ) such that
๐ ( 0 ) โข ( ๐ฅ , ๐ฆ ) = 1 | ๐ท | โข ๐ 0 , โ ๐ฅ , ๐ฆ โ โ ๐ .
Similarly, we can find ๐ ~ 0 โ ๐ญ 1 โข ( ๐ 2 ; โ ๐ , โ ๐ 0 ) such that
๐ ~ 0 โข ( ๐ฅ ) = ๐ 0 , โ ๐ฅ โ โ ๐ .
Then
โซ ๐ท ๐ ( 0 ) โข ( ๐ฆ , ๐ฅ ) โข ๐ค โข ๐ โข ( ๐ฅ ) โข ๐ฝ ๐ฅ + ๐ ~ 0 โข ( ๐ฆ ) = ( ๐ 0 โข ๐ค + ๐ 0 ) โข ๐ โข ( ๐ฆ ) , โ ๐ค โ โ ๐ , โ ๐ฆ โ ๐ท .
Continuing a similar construction for all layers clearly yields the result.
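The construction in Lemma 35 can be checked numerically. The sketch below (assumed shapes and quadrature, not code from the paper) verifies that an integral layer whose kernel is the constant matrix ๐ 0 / | ๐ท | , applied to the constant function ๐ฅ โฆ ๐ค and followed by the constant bias function, reproduces the affine map ๐ค โฆ ๐ 0 โข ๐ค + ๐ 0 of a standard layer.

```python
import numpy as np

# Sketch: a kernel integral layer with constant kernel W0/|D| acting on a
# constant vector-valued function recovers the finite-dimensional affine map.
rng = np.random.default_rng(0)
W0 = rng.standard_normal((3, 2))   # weight matrix of the finite-dim layer
b0 = rng.standard_normal(3)        # bias vector
w = rng.standard_normal(2)         # input vector, viewed as constant function

# D = (0, 1): midpoint quadrature nodes and weights for the integral over D
n = 1000
wq = np.full(n, 1.0 / n)           # quadrature weights, summing to |D| = 1
absD = 1.0

# int_D (W0/|D|) w 1(x) dx + b0 1(y): the integrand is constant in x
integral = np.sum(wq) * (W0 / absD) @ w
layer_out = integral + b0
assert np.allclose(layer_out, W0 @ w + b0)
```

Since the integrand is constant, the quadrature is exact up to floating point; iterating this layer-by-layer is precisely how the proof embeds a whole network into a neural operator.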
E
Proof [of Theorem 8] Without loss of generality, we will assume that ๐ท = ๐ท โฒ and, by continuous embedding, that ๐ = ๐ฐ = ๐ถ โข ( ๐ท ยฏ ) . Furthermore, note that, by continuity, it suffices to show the result for the single layer
๐ญ๐ฎ = { ๐ โฆ ๐ 1 โข ( โซ ๐ท ๐ โข ( โ , ๐ฆ ) โข ๐ โข ( ๐ฆ ) โข ๐ฝ ๐ฆ + ๐ ) : ๐ โ ๐ญ ๐ 1 โข ( ๐ 2 ; โ ๐ ร โ ๐ ) , ๐ โ ๐ญ ๐ 2 โข ( ๐ 2 ; โ ๐ ) , ๐ 1 , ๐ 2 โ โ } .
Let ๐พ โ ๐ be a compact set and ( ๐ท ๐ ) ๐ = 1 โ be a discrete refinement of ๐ท . To each discretization ๐ท ๐ associate partitions ๐ ๐ ( 1 ) , โฆ , ๐ ๐ ( ๐ ) โ ๐ท which are pairwise disjoint, each containing a single, unique point of ๐ท ๐ , each having positive Lebesgue measure, and satisfying
โ ๐ = 1 ๐ ๐ ๐ ( ๐ ) = ๐ท .
We can do this since the points in each discretization ๐ท ๐ are pairwise distinct. For any ๐ข โ ๐ญ๐ฎ with parameters ๐ , ๐ define the sequence of maps ๐ข ^ ๐ : โ ๐ โข ๐ ร โ ๐ โ ๐ด by
๐ข ^ ๐ โข ( ๐ฆ 1 , โฆ , ๐ฆ ๐ , ๐ค 1 , โฆ , ๐ค ๐ ) = ๐ 1 โข ( โ ๐ = 1 ๐ ๐ โข ( โ , ๐ฆ ๐ ) โข ๐ค ๐ โข | ๐ ๐ ( ๐ ) | + ๐ โข ( โ ) )
for any ๐ฆ ๐ โ โ ๐ and ๐ค ๐ โ โ . Since ๐พ is compact, there is a constant ๐ > 0 such that
sup ๐ โ ๐พ โ ๐ โ ๐ฐ โค ๐ .
Therefore,
sup ๐ฅ โ ๐ท ยฏ sup ๐ โ ๐พ | โซ ๐ท ๐ โข ( ๐ฅ , ๐ฆ ) โข ๐ โข ( ๐ฆ ) โข ๐ฝ ๐ฆ | + | โ ๐ = 1 ๐ ๐ โข ( ๐ฅ , ๐ฆ ๐ ) โข ๐ โข ( ๐ฆ ๐ ) โข | ๐ ๐ ( ๐ ) | | + 2 โข | ๐ โข ( ๐ฅ ) | โค 2 โข ( ๐ โข | ๐ท | โข โ ๐ โ ๐ถ โข ( ๐ท ยฏ ร ๐ท ยฏ ) + โ ๐ โ ๐ถ โข ( ๐ท ยฏ ) ) โ ๐ .
Hence we need only consider ๐ 1 as a map [ โ ๐ , ๐ ] โ โ . Thus, by uniform continuity, there exists a modulus of continuity ๐ : โ โฅ 0 โ โ โฅ 0 which is continuous, non-negative, and non-decreasing on โ โฅ 0 , satisfies ๐ โข ( ๐ง ) โ ๐ โข ( 0 ) = 0 as ๐ง โ 0 and
| ๐ 1 โข ( ๐ง 1 ) โ ๐ 1 โข ( ๐ง 2 ) | โค ๐ โข ( | ๐ง 1 โ ๐ง 2 | ) โ ๐ง 1 , ๐ง 2 โ [ โ ๐ , ๐ ] .
(49)
Let ๐ > 0 . Equation (49) and the non-decreasing property of ๐ imply that, in order to show there exists ๐ = ๐ โข ( ๐ ) โ โ such that ๐ โฅ ๐ implies
sup ๐ โ ๐พ โฅ ๐ข ^ ๐ ( ๐ท ๐ , ๐ | ๐ท ๐ ) โ ๐ข ( ๐ ) โฅ ๐ด < ๐ ,
it is enough to show that
sup ๐ โ ๐พ sup ๐ฅ โ ๐ท ยฏ | โซ ๐ท ๐ โข ( ๐ฅ , ๐ฆ ) โข ๐ โข ( ๐ฆ ) โข ๐ฝ ๐ฆ โ โ ๐ = 1 ๐ ๐ โข ( ๐ฅ , ๐ฆ ๐ ) โข ๐ โข ( ๐ฆ ๐ ) โข | ๐ ๐ ( ๐ ) | | < ๐
(50)
for any ๐ โฅ ๐ . Since ๐พ is compact, we can find functions ๐ 1 , โฆ , ๐ ๐ โ ๐พ such that, for any ๐ โ ๐พ , there is some ๐ โ { 1 , โฆ , ๐ } such that
โ ๐ โ ๐ ๐ โ ๐ถ โข ( ๐ท ยฏ ) โค ๐ 4 โข | ๐ท | โข โ ๐ โ ๐ถ โข ( ๐ท ยฏ ร ๐ท ยฏ ) .
Since ( ๐ท ๐ ) is a discrete refinement, by convergence of Riemann sums, we can find some ๐ โ โ such that for any ๐ก โฅ ๐ , we have
sup ๐ฅ โ ๐ท ยฏ | โ ๐ = 1 ๐ก ๐ โข ( ๐ฅ , ๐ฆ ๐ ) โข | ๐ ๐ก ( ๐ ) | โ โซ ๐ท ๐ โข ( ๐ฅ , ๐ฆ ) โข ๐ฝ ๐ฆ | < | ๐ท | โข โ ๐ โ ๐ถ โข ( ๐ท ยฏ ร ๐ท ยฏ )
where ๐ท ๐ก = { ๐ฆ 1 , โฆ , ๐ฆ ๐ก } . Similarly, we can find ๐ 1 , โฆ , ๐ ๐ โ โ such that, for any ๐ก ๐ โฅ ๐ ๐ , we have
sup ๐ฅ โ ๐ท ยฏ | โ ๐ = 1 ๐ก ๐ ๐ โข ( ๐ฅ , ๐ฆ ๐ ( ๐ ) ) โข ๐ ๐ โข ( ๐ฆ ๐ ( ๐ ) ) โข | ๐ ๐ก ๐ ( ๐ ) | โ โซ ๐ท ๐ โข ( ๐ฅ , ๐ฆ ) โข ๐ ๐ โข ( ๐ฆ ) โข ๐ฝ ๐ฆ | < ๐ 4
where ๐ท ๐ก ๐ = { ๐ฆ 1 ( ๐ ) , โฆ , ๐ฆ ๐ก ๐ ( ๐ ) } . Let ๐ โฅ max โก { ๐ , ๐ 1 , โฆ , ๐ ๐ } and denote ๐ท ๐ = { ๐ฆ 1 , โฆ , ๐ฆ ๐ } . Note that,
sup ๐ฅ โ ๐ท ยฏ | โซ ๐ท ๐ โข ( ๐ฅ , ๐ฆ ) โข ( ๐ โข ( ๐ฆ ) โ ๐ ๐ โข ( ๐ฆ ) ) โข ๐ฝ ๐ฆ | โค | ๐ท | โข โ ๐ โ ๐ถ โข ( ๐ท ยฏ ร ๐ท ยฏ ) โข โ ๐ โ ๐ ๐ โ ๐ถ โข ( ๐ท ยฏ ) .
Furthermore,
sup ๐ฅ โ ๐ท ยฏ | โ ๐ = 1 ๐ ๐ โข ( ๐ฅ , ๐ฆ ๐ ) โข ( ๐ ๐ โข ( ๐ฆ ๐ ) โ ๐ โข ( ๐ฆ ๐ ) ) โข | ๐ ๐ ( ๐ ) | | โค โ ๐ ๐ โ ๐ โ ๐ถ โข ( ๐ท ยฏ ) โข sup ๐ฅ โ ๐ท ยฏ | โ ๐ = 1 ๐ ๐ โข ( ๐ฅ , ๐ฆ ๐ ) โข | ๐ ๐ ( ๐ ) | |
โค โฅ ๐ ๐ โ ๐ โฅ ๐ถ โข ( ๐ท ยฏ ) โข ( sup ๐ฅ โ ๐ท ยฏ | โ ๐ = 1 ๐ ๐ ( ๐ฅ , ๐ฆ ๐ ) | ๐ ๐ ( ๐ ) | โ โซ ๐ท ๐ ( ๐ฅ , ๐ฆ ) ๐ฝ ๐ฆ | + sup ๐ฅ โ ๐ท ยฏ | โซ ๐ท ๐ ( ๐ฅ , ๐ฆ ) ๐ฝ ๐ฆ | )
โค 2 โข | ๐ท | โข โ ๐ โ ๐ถ โข ( ๐ท ยฏ ร ๐ท ยฏ ) โข โ ๐ ๐ โ ๐ โ ๐ถ โข ( ๐ท ยฏ )
Therefore, for any ๐ โ ๐พ , by repeated application of the triangle inequality, we find that
sup ๐ฅ โ ๐ท ยฏ | โซ ๐ท ๐ โข ( ๐ฅ , ๐ฆ ) โข ๐ โข ( ๐ฆ ) โข ๐ฝ ๐ฆ โ โ ๐ = 1 ๐ ๐ โข ( ๐ฅ , ๐ฆ ๐ ) โข ๐ โข ( ๐ฆ ๐ ) โข | ๐ ๐ ( ๐ ) | |
โค sup ๐ฅ โ ๐ท ยฏ | โ ๐ = 1 ๐ ๐ โข ( ๐ฅ , ๐ฆ ๐ ) โข ๐ ๐ โข ( ๐ฆ ๐ ) โข | ๐ ๐ ( ๐ ) | โ โซ ๐ท ๐ โข ( ๐ฅ , ๐ฆ ) โข ๐ ๐ โข ( ๐ฆ ) โข ๐ฝ ๐ฆ | + 3 โข | ๐ท | โข โ ๐ โ ๐ถ โข ( ๐ท ยฏ ร ๐ท ยฏ ) โข โ ๐ โ ๐ ๐ โ ๐ถ โข ( ๐ท ยฏ )
< ๐ 4 + 3 โข ๐ 4 = ๐
which completes the proof.
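The Riemann-sum step at the heart of this proof can be illustrated numerically. The sketch below (an assumed one-dimensional setup, not code from the paper) checks that the quadrature โ ๐ ๐ โข ( ๐ฅ , ๐ฆ ๐ ) โข ๐ โข ( ๐ฆ ๐ ) โข | ๐ ๐ ( ๐ ) | approaches โซ ๐ท ๐ โข ( ๐ฅ , ๐ฆ ) โข ๐ โข ( ๐ฆ ) โข ๐ฝ ๐ฆ as the discretization is refined, uniformly over a few evaluation points ๐ฅ .

```python
import numpy as np

# Sketch: kernel quadrature on D = (0, 1) with uniform cells |P_n^(j)| = 1/n
# and midpoint nodes y_j, compared against a much finer reference grid.
kappa = lambda x, y: np.exp(-(x - y) ** 2)   # a continuous kernel
a = lambda y: np.sin(2 * np.pi * y)          # an input function

def quad_error(n):
    y = (np.arange(n) + 0.5) / n             # discretization D_n
    yf = (np.arange(10 * n) + 0.5) / (10 * n)  # reference grid
    err = 0.0
    for x in np.linspace(0.0, 1.0, 7):       # a few evaluation points x
        approx = np.sum(kappa(x, y) * a(y)) / n
        ref = np.sum(kappa(x, yf) * a(yf)) / (10 * n)
        err = max(err, abs(approx - ref))
    return err

assert quad_error(256) < quad_error(16)      # error shrinks under refinement
assert quad_error(256) < 1e-4
```

This is exactly the sense in which a neural operator evaluated on point samples recovers its continuum action: the same parameters ๐ , ๐ are reused at every resolution.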
F
Proof [of Theorem 11] The statement in Lemma 26 allows us to apply Lemma 22 to find a mapping ๐ข 1 : ๐ โ ๐ฐ such that
sup ๐ โ ๐พ โ ๐ข โ โข ( ๐ ) โ ๐ข 1 โข ( ๐ ) โ ๐ฐ โค ๐ 2
where ๐ข 1 = ๐บ โ ๐ โ ๐น with ๐น : ๐ โ โ ๐ฝ , ๐บ : โ ๐ฝ โฒ โ ๐ฐ continuous linear maps and ๐ โ ๐ถ โข ( โ ๐ฝ ; โ ๐ฝ โฒ ) for some ๐ฝ , ๐ฝ โฒ โ โ . By Lemma 32, we can find a sequence of maps ๐น ๐ก โ ๐จ๐ฎ โข ( ๐ 2 ; ๐ท , โ , โ ๐ฝ ) for ๐ก = 1 , 2 , โฆ such that
sup ๐ โ ๐พ sup ๐ฅ โ ๐ท ยฏ | ( ๐น ๐ก ( ๐ ) ) ( ๐ฅ ) โ ๐น ( ๐ ) | 1 โค 1 ๐ก .
In particular, ๐น ๐ก โข ( ๐ ) โข ( ๐ฅ ) = ๐ค ๐ก โข ( ๐ ) โข ๐ โข ( ๐ฅ ) for some ๐ค ๐ก : ๐ โ โ ๐ฝ which is constant in space. We can therefore identify the range of ๐น ๐ก โข ( ๐ ) with โ ๐ฝ . Define the set
๐ โ โ ๐ก = 1 โ ๐น ๐ก โข ( ๐พ ) โช ๐น โข ( ๐พ ) โ โ ๐ฝ
which is compact by Lemma 21. Since ๐ is continuous, it is uniformly continuous on ๐ hence there exists a modulus of continuity ๐ : โ โฅ 0 โ โ โฅ 0 which is continuous, non-negative, and non-decreasing on โ โฅ 0 , satisfies ๐ โข ( ๐ ) โ ๐ โข ( 0 ) = 0 as ๐ โ 0 and
| ๐ โข ( ๐ง 1 ) โ ๐ โข ( ๐ง 2 ) | 1 โค ๐ โข ( | ๐ง 1 โ ๐ง 2 | 1 ) โ ๐ง 1 , ๐ง 2 โ ๐ .
We can thus find ๐ โ โ large enough such that
sup ๐ โ ๐พ ๐ โข ( | ๐น โข ( ๐ ) โ ๐น ๐ โข ( ๐ ) | 1 ) โค ๐ 6 โข โ ๐บ โ .
Since ๐น ๐ is continuous, ๐น ๐ โข ( ๐พ ) is compact. Since ๐ is a continuous function on the compact set ๐น ๐ โข ( ๐พ ) โ โ ๐ฝ mapping into โ ๐ฝ โฒ , we can use any classical neural network approximation theorem such as (Pinkus, 1999, Theorem 4.1) to find an ๐ -close (uniformly) neural network. Since Lemma 35 shows that neural operators can exactly mimic standard neural networks, it follows that we can find ๐ 1 โ ๐จ๐ฎ โข ( ๐ 1 ; ๐ท , โ ๐ฝ , โ ๐ 1 ) , โฆ ,
๐ ๐ โ 1 โ ๐จ๐ฎ โข ( ๐ 1 ; ๐ท , โ ๐ ๐ โ 1 , โ ๐ฝ โฒ ) for some ๐ โ โ โฅ 2 and ๐ 1 , โฆ , ๐ ๐ โ 1 โ โ such that
๐ ~ ( ๐ ) โ ( ๐ ๐ โ 1 โ ๐ 1 โ โฏ โ ๐ 2 โ ๐ 1 โ ๐ 1 ) ( ๐ ) , โ ๐ โ ๐ฟ 1 ( ๐ท ; โ ๐ฝ )
satisfies
sup ๐ โ ๐น ๐ โข ( ๐พ ) sup ๐ฅ โ ๐ท ยฏ | ๐ โข ( ๐ ) โ ๐ ~ โข ( ๐ โข ๐ ) โข ( ๐ฅ ) | 1 โค ๐ 6 โข โ ๐บ โ .
By construction, ๐ ~ maps constant functions into constant functions and is continuous in the appropriate subspace topology of constant functions hence we can identify it as an element of ๐ถ โข ( โ ๐ฝ ; โ ๐ฝ โฒ ) for any input constant function taking values in โ ๐ฝ . Then ( ๐ ~ โ ๐น ๐ ) โข ( ๐พ ) โ โ ๐ฝ โฒ is compact. Therefore, by Lemma 34, we can find a neural network ๐ โ ๐ฉ ๐ฟ โข ( ๐ 3 ; โ ๐ โฒ ร โ ๐ โฒ , โ 1 ร ๐ฝ โฒ ) for some ๐ฟ โ โ such that
๐บ ~ โข ( ๐ ) โ โซ ๐ท โฒ ๐ โข ( โ , ๐ฆ ) โข ๐ โข ( ๐ฆ ) โข d โข ๐ฆ , โ ๐ โ ๐ฟ 1 โข ( ๐ท ; โ ๐ฝ โฒ )
satisfies
sup ๐ฆ โ ( ๐ ~ โ ๐น ๐ ) โข ( ๐พ ) โ ๐บ โข ( ๐ฆ ) โ ๐บ ~ โข ( ๐ฆ โข ๐ ) โ ๐ฐ โค ๐ 6 .
Define
๐ข ( ๐ ) โ ( ๐บ ~ โ ๐ ~ โ ๐น ๐ ) ( ๐ ) = โซ ๐ท โฒ ๐ ( โ , ๐ฆ ) ( ( ๐ ๐ โ 1 โ ๐ 1 โ โฆ ๐ 1 โ ๐ 1 โ ๐น ๐ ) ( ๐ ) ) ( ๐ฆ ) ๐ฝ ๐ฆ , โ ๐ โ ๐ ,
noting that ๐ข โ ๐ญ๐ฎ ๐ โข ( ๐ 1 , ๐ 2 , ๐ 3 ; ๐ท , ๐ท โฒ ) . For any ๐ โ ๐พ , define ๐ 1 โ ( ๐ โ ๐น ) โข ( ๐ ) and ๐ ~ 1 โ ( ๐ ~ โ ๐น ๐ ) โข ( ๐ ) so that ๐ข 1 โข ( ๐ ) = ๐บ โข ( ๐ 1 ) and ๐ข โข ( ๐ ) = ๐บ ~ โข ( ๐ ~ 1 ) . Then
โ ๐ข 1 โข ( ๐ ) โ ๐ข โข ( ๐ ) โ ๐ฐ
โค โ ๐บ โข ( ๐ 1 ) โ ๐บ โข ( ๐ ~ 1 ) โ ๐ฐ + โ ๐บ โข ( ๐ ~ 1 ) โ ๐บ ~ โข ( ๐ ~ 1 ) โ ๐ฐ
โค โ ๐บ โ โข | ๐ 1 โ ๐ 1 ~ | 1 + sup ๐ฆ โ ( ๐ ~ โ ๐น ๐ ) โข ( ๐พ ) โ ๐บ โข ( ๐ฆ ) โ ๐บ ~ โข ( ๐ฆ โข ๐ ) โ ๐ฐ
โค ๐ 6 + โ ๐บ โ โข | ( ๐ โ ๐น ) โข ( ๐ ) โ ( ๐ โ ๐น ๐ ) โข ( ๐ ) | 1 + โ ๐บ โ โข | ( ๐ โ ๐น ๐ ) โข ( ๐ ) โ ( ๐ ~ โ ๐น ๐ ) โข ( ๐ ) | 1
โค ๐ 6 + โฅ ๐บ โฅ ๐ ( | ๐น ( ๐ ) โ ๐น ๐ ( ๐ ) | 1 ) + โฅ ๐บ โฅ sup ๐ โ ๐น ๐ โข ( ๐พ ) | ๐ ( ๐ ) โ ๐ ~ ( ๐ ) | 1
โค ๐ 2 .
Finally we have
โ ๐ข โ โข ( ๐ ) โ ๐ข โข ( ๐ ) โ ๐ฐ โค โ ๐ข โ โข ( ๐ ) โ ๐ข 1 โข ( ๐ ) โ ๐ฐ + โ ๐ข 1 โข ( ๐ ) โ ๐ข โข ( ๐ ) โ ๐ฐ โค ๐ 2 + ๐ 2 = ๐
as desired.
To show boundedness, we will exhibit a neural operator ๐ข ~ that is ๐ -close to ๐ข in ๐พ and is uniformly bounded by 4 โข ๐ . Note first that
โ ๐ข โข ( ๐ ) โ ๐ฐ โค โ ๐ข โข ( ๐ ) โ ๐ข โ โข ( ๐ ) โ ๐ฐ + โ ๐ข โ โข ( ๐ ) โ ๐ฐ โค ๐ + ๐ โค 2 โข ๐ , โ ๐ โ ๐พ
where, without loss of generality, we assume that ๐ โฅ 1 . By construction, we have that
๐ข โข ( ๐ ) = โ ๐ = 1 ๐ฝ โฒ ๐ ~ ๐ โข ( ๐น ๐ โข ( ๐ ) ) โข ๐ ๐ , โ ๐ โ ๐
for some neural network ๐ ~ : โ ๐ โฒ โ โ ๐ฝ โฒ . Since ๐ฐ is a Hilbert space and by linearity, we may assume that the components ๐ ๐ are orthonormal since orthonormalizing them only requires multiplying the last layers of ๐ ~ by an invertible linear map. Therefore
| ๐ ~ โข ( ๐น ๐ โข ( ๐ ) ) | 2 = โ ๐ข โข ( ๐ ) โ ๐ฐ โค 2 โข ๐ , โ ๐ โ ๐พ .
Define the set ๐ โ ( ๐ ~ โ ๐น ๐ ) โข ( ๐พ ) โ โ ๐ฝ โฒ which is compact as before. We have
diam 2 โข ( ๐ ) = sup ๐ฅ , ๐ฆ โ ๐ | ๐ฅ โ ๐ฆ | 2 โค sup ๐ฅ , ๐ฆ โ ๐ | ๐ฅ | 2 + | ๐ฆ | 2 โค 4 โข ๐ .
Since ๐ 1 โ ๐ก๐ , there exists a number ๐ โ โ and a neural network ๐ฝ โ ๐ญ ๐ โข ( ๐ 1 ; โ ๐ฝ โฒ , โ ๐ฝ โฒ ) such that
| ๐ฝ โข ( ๐ฅ ) โ ๐ฅ | 2 โค ๐ , โ ๐ฅ โ ๐
| ๐ฝ โข ( ๐ฅ ) | 2 โค 4 โข ๐ , โ ๐ฅ โ โ ๐ฝ โฒ .
Define
๐ข ~ โข ( ๐ ) โ โ ๐ = 1 ๐ฝ โฒ ๐ฝ ๐ โข ( ๐ ~ โข ( ๐น ๐ โข ( ๐ ) ) ) โข ๐ ๐ , โ ๐ โ ๐ .
Lemmas 34 and 35 then show that ๐ข ~ โ ๐ญ๐ฎ ๐ + ๐ โข ( ๐ 1 , ๐ 2 , ๐ 3 ; ๐ท , ๐ท โฒ ) . Notice that
sup ๐ โ ๐พ โ ๐ข โข ( ๐ ) โ ๐ข ~ โข ( ๐ ) โ ๐ฐ โค sup ๐ค โ ๐ | ๐ค โ ๐ฝ โข ( ๐ค ) | 2 โค ๐ .
Furthermore,
โ ๐ข ~ โข ( ๐ ) โ ๐ฐ โค โ ๐ข ~ โข ( ๐ ) โ ๐ข โข ( ๐ ) โ ๐ฐ + โ ๐ข โข ( ๐ ) โ ๐ฐ โค ๐ + 2 โข ๐ โค 3 โข ๐ , โ ๐ โ ๐พ .
Let ๐ โ ๐ โ ๐พ . Then there exists ๐ โ โ ๐ฝ โฒ โ ๐ such that ๐ ~ โข ( ๐น ๐ โข ( ๐ ) ) = ๐ and
โ ๐ข ~ โข ( ๐ ) โ ๐ฐ = | ๐ฝ โข ( ๐ ) | 2 โค 4 โข ๐
as desired.
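The boundedness trick used above can be sketched numerically. Below, a Euclidean clip stands in for the neural network ๐ฝ of the proof (an assumed surrogate, not the paper's construction): it acts as the identity on the compact set where the model's outputs live, yet is globally bounded by 4 โข ๐ .

```python
import numpy as np

# Sketch: beta is the identity on the ball of radius 4M (which contains S,
# since points of S have norm at most 2M) and radially clips everything else.
M = 2.0

def beta(x):
    r = np.linalg.norm(x)
    return x if r <= 4 * M else (4 * M / r) * x

# a point of S (norm <= 2M) passes through unchanged
s = np.array([1.0, -1.5])
assert np.linalg.norm(s) <= 2 * M
assert np.allclose(beta(s), s)

# arbitrary inputs are mapped into the ball of radius 4M
z = np.array([100.0, -50.0])
assert np.linalg.norm(beta(z)) <= 4 * M + 1e-12
```

In the proof ๐ฝ is only ๐ -close to the identity on ๐ rather than exact, but the composition ๐ข ~ inherits the same uniform bound 4 โข ๐ in either case.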
G
Proof [of Theorem 13] Let ๐ฐ = ๐ป ๐ 2 โข ( ๐ท ) . For any ๐ > 0 , define
๐ข ๐ โ โข ( ๐ ) โ { ๐ข โ โข ( ๐ ) , โ ๐ข โ โข ( ๐ ) โ ๐ฐ โค ๐ ; ( ๐ / โ ๐ข โ โข ( ๐ ) โ ๐ฐ ) โข ๐ข โ โข ( ๐ ) , otherwise
for any ๐ โ ๐ . Since ๐ข ๐ โ โ ๐ข โ , ๐ -almost everywhere, as ๐ โ โ , ๐ข โ โ ๐ฟ ๐ 2 โข ( ๐ ; ๐ฐ ) , and clearly โ ๐ข ๐ โ โข ( ๐ ) โ ๐ฐ โค โ ๐ข โ โข ( ๐ ) โ ๐ฐ for any ๐ โ ๐ , we can apply the dominated convergence theorem for Bochner integrals to find ๐ > 0 large enough such that
โ ๐ข ๐ โ โ ๐ข โ โ ๐ฟ ๐ 2 โข ( ๐ ; ๐ฐ ) โค ๐ 3 .
Since ๐ and ๐ฐ are Polish spaces, by Lusinโs theorem (Aaronson, 1997, Theorem 1.0.0) we can find a compact set ๐พ โ ๐ such that
๐ โข ( ๐ โ ๐พ ) โค ๐ 2 153 โข ๐ 2
and ๐ข ๐ โ | ๐พ is continuous. Since ๐พ is closed, by a generalization of the Tietze extension theorem (Dugundji, 1951, Theorem 4.1), there exists a continuous mapping ๐ข ~ ๐ โ : ๐ โ ๐ฐ such that ๐ข ~ ๐ โ โข ( ๐ ) = ๐ข ๐ โ โข ( ๐ ) for all ๐ โ ๐พ and
sup ๐ โ ๐ โ ๐ข ~ ๐ โ โข ( ๐ ) โ ๐ฐ โค sup ๐ โ ๐ โ ๐ข ๐ โ โข ( ๐ ) โ ๐ฐ โค ๐ .
Applying Theorem 11 to ๐ข ~ ๐ โ , we find that there exists a number ๐ โ โ and a neural operator ๐ข โ ๐ญ๐ฎ ๐ โข ( ๐ 1 , ๐ 2 , ๐ 3 ; ๐ท , ๐ท โฒ ) such that
sup ๐ โ ๐พ โ ๐ข โข ( ๐ ) โ ๐ข ๐ โ โข ( ๐ ) โ ๐ฐ โค 2 โข ๐ 3
and
sup ๐ โ ๐ โ ๐ข โข ( ๐ ) โ ๐ฐ โค 4 โข ๐ .
We then have
โ ๐ข โ โ ๐ข โ ๐ฟ ๐ 2 โข ( ๐ ; ๐ฐ ) โค โ ๐ข โ โ ๐ข ๐ โ โ ๐ฟ ๐ 2 โข ( ๐ ; ๐ฐ ) + โ ๐ข ๐ โ โ ๐ข โ ๐ฟ ๐ 2 โข ( ๐ ; ๐ฐ )
โค ๐ 3 + ( โซ ๐พ โ ๐ข ๐ โ โข ( ๐ ) โ ๐ข โข ( ๐ ) โ ๐ฐ 2 โข ๐ฝ ๐ โข ( ๐ ) + โซ ๐ โ ๐พ โ ๐ข ๐ โ โข ( ๐ ) โ ๐ข โข ( ๐ ) โ ๐ฐ 2 โข ๐ฝ ๐ โข ( ๐ ) ) 1 / 2
โค ๐ 3 + ( 2 โข ๐ 2 / 9 + 2 โข ( sup ๐ โ ๐ โ ๐ข ๐ โ โข ( ๐ ) โ ๐ฐ 2 + โ ๐ข โข ( ๐ ) โ ๐ฐ 2 ) โข ๐ โข ( ๐ โ ๐พ ) ) 1 / 2
โค ๐ 3 + ( 2 โข ๐ 2 / 9 + 34 โข ๐ 2 โข ๐ โข ( ๐ โ ๐พ ) ) 1 / 2
โค ๐ 3 + ( 4 โข ๐ 2 / 9 ) 1 / 2 = ๐
as desired.