
Title: Learning Maps Between Function Spaces With Applications to PDEs

URL Source: https://arxiv.org/html/2108.08481

Neural Operator: Learning Maps Between Function Spaces With Applications to PDEs

Nikola Kovachki* (Nvidia, nkovachki@nvidia.com), Zongyi Li* (Caltech, zongyili@caltech.edu), Burigede Liu (Cambridge University, bl377@cam.ac.uk), Kamyar Azizzadenesheli (Nvidia, kamyara@nvidia.com), Kaushik Bhattacharya (Caltech, bhatta@caltech.edu), Andrew Stuart (Caltech, astuart@caltech.edu), Anima Anandkumar (Caltech, anima@caltech.edu)

*Equal contribution. Majority of the work was completed while the author was at Caltech.

Abstract

The classical development of neural networks has primarily focused on learning mappings between finite-dimensional Euclidean spaces or finite sets. We propose a generalization of neural networks to learn operators, termed neural operators, that map between infinite-dimensional function spaces. We formulate the neural operator as a composition of linear integral operators and nonlinear activation functions. We prove a universal approximation theorem for our proposed neural operator, showing that it can approximate any given nonlinear continuous operator. The proposed neural operators are also discretization-invariant, i.e., they share the same model parameters across different discretizations of the underlying function spaces. Furthermore, we introduce four classes of efficient parameterization, viz., graph neural operators, multipole graph neural operators, low-rank neural operators, and Fourier neural operators. An important application for neural operators is learning surrogate maps for the solution operators of partial differential equations (PDEs). We consider standard PDEs such as the Burgers, Darcy subsurface flow, and the Navier-Stokes equations, and show that the proposed neural operators have superior performance compared to existing machine learning based methodologies, while being several orders of magnitude faster than conventional PDE solvers.

Keywords: Deep Learning, Operator Learning, Discretization-Invariance, Partial Differential Equations, Navier-Stokes Equation.

1 Introduction

Learning mappings between function spaces has widespread applications in science and engineering. For instance, for solving differential equations, the input is a coefficient function and the output is a solution function. A straightforward solution to this problem is to simply discretize the infinite-dimensional input and output function spaces into finite-dimensional grids, and apply standard learning models such as neural networks. However, this limits applicability since the learned neural network model may not generalize well to different discretizations, beyond the discretization grid of the training data.

To overcome these limitations of standard neural networks, we formulate a new deep-learning framework for learning operators, called neural operators, which directly map between function spaces on bounded domains. Since neural operators are designed on function spaces, they can be discretized by a variety of different methods, and at different levels of resolution, without the need for re-training. In contrast, standard neural network architectures depend heavily on the discretization of training data: new architectures with new parameters may be needed to achieve the same error for data with varying discretization. We also propose the notion of discretization-invariant models and prove that our neural operators satisfy this property, while standard neural networks do not.

1.1 Our Approach

Discretization-Invariant Models.

We formulate a precise mathematical notion of discretization invariance. We require any discretization-invariant model with a fixed number of parameters to satisfy the following:

1.

acts on any discretization of the input function, i.e. accepts any set of points in the input domain,

2.

can be evaluated at any point of the output domain,

3.

converges to a continuum operator as the discretization is refined.

The first two requirements, accepting inputs at any points of the domain and allowing queries at any points of the output domain, are natural for discretization invariance, while the last ensures consistency in the limit as the discretization is refined. For example, families of graph neural networks (Scarselli et al., 2008) and transformer models (Vaswani et al., 2017) are resolution invariant, i.e., they can receive inputs at any resolution, but they fail to converge to a continuum operator as the discretization is refined. Moreover, we require the models to have a fixed number of parameters; otherwise, the number of parameters becomes unbounded in the limit as the discretization is refined, as shown in Figure 1. Thus the notion of discretization invariance allows us to define neural operator models that are consistent in function spaces and can be applied to data given at any resolution and on any mesh. We also establish that standard neural network models are not discretization invariant.
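As a toy illustration (not from the paper), the three requirements can be checked for the simple averaging functional $G(a) = \int_0^1 a(x)\,dx$. The following minimal numpy sketch, with the input function and point sets chosen arbitrarily, shows a model that accepts any discretization, returns a function that can be queried anywhere, and converges as the discretization is refined:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(points, values):
    """A toy discretization-invariant model for the functional G(a) = integral of a over [0,1].
    Requirement 1: it accepts *any* finite set of points in the input domain.
    Requirement 2: its output is a constant function, queryable at any point."""
    mean = values.mean()  # Monte Carlo estimate of the integral
    return lambda x: mean

a = lambda x: np.sin(2 * np.pi * x) + 1.0   # test input; its exact integral over [0,1] is 1
errors = []
for L in [10, 100, 1000, 10000]:
    pts = rng.uniform(0, 1, size=L)         # an arbitrary (here random) L-point discretization
    u_hat = model(pts, a(pts))
    errors.append(abs(u_hat(0.3) - 1.0))    # query the output anywhere; compare to continuum
# Requirement 3: errors shrink toward zero as the discretization refines,
# here at the Monte Carlo rate O(L^{-1/2}).
```

A fixed-input-size network, by contrast, could not even ingest the varying point sets above without retraining or interpolation.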

Figure 1: Discretization invariance. A discretization-invariant operator produces convergent predictions under mesh refinement.

Neural Operators.

We introduce the concept of neural operators for learning operators that map between infinite-dimensional function spaces. We propose multi-layer neural operator architectures in which each layer is itself an operator composed with a non-linear activation. This ensures that the overall end-to-end composition is an operator, and thus satisfies the discretization-invariance property. The key design choice for neural operators is the operator layers. To keep matters simple, we limit ourselves to layers that are linear operators. Since these layers are composed with non-linear activations, we obtain neural operator models that are expressive and able to capture any continuous operator. The latter property is known as universal approximation.

The above line of reasoning for neural operator design follows closely the design of standard neural networks, where linear layers (e.g. matrix multiplication, convolution) are composed with non-linear activations, and we have universal approximation of continuous functions defined on compact domains (Hornik et al., 1989). Neural operators replace finite-dimensional linear layers in neural networks with linear operators in function spaces.

We formally establish that neural operator models with a fixed number of parameters satisfy discretization invariance. We further show that neural operator models are universal approximators of continuous operators acting between Banach spaces, and can uniformly approximate any continuous operator defined on a compact set of a Banach space. Neural operators are the only known class of models that guarantee both discretization invariance and universal approximation. See Table 1 for a comparison among deep learning models. Previous deep learning models are mostly defined on a fixed grid, and removing, adding, or moving grid points generally makes these models no longer applicable. Thus, they are not discretization invariant.

We propose several design choices for the linear operator layers in neural operators, such as parameterized integral operators or multiplication in the spectral domain, as shown in Figure 2. Specifically, we propose four practical methods for implementing the neural operator framework: graph-based operators, low-rank operators, multipole graph-based operators, and Fourier operators. For graph-based operators, we develop a Nyström extension to connect the integral operator formulation of the neural operator to families of graph neural networks (GNNs) on arbitrary grids. For Fourier operators, we consider the spectral domain formulation of the neural operator, which leads to efficient algorithms in settings where fast transform methods are applicable.

We include an exhaustive numerical study of the four formulations of neural operators. Numerically, we show that the proposed methodology consistently outperforms all existing deep learning methods even on the resolutions for which the standard neural networks were designed. For the two-dimensional Navier-Stokes equation, when learning the entire flow map, the method achieves < 1 % error for a Reynolds number of 20 and 8 % error for a Reynolds number of 200.

The proposed Fourier neural operator (FNO) has an inference time that is three orders of magnitude faster than the pseudo-spectral method used to generate the data for the Navier-Stokes problem (Chandler and Kerswell, 2013): 0.005s compared to 2.2s on a 256 × 256 uniform spatial grid. Despite its tremendous speed advantage, the method does not suffer from accuracy degradation when used in downstream applications such as solving Bayesian inverse problems. Furthermore, we demonstrate that FNO is robust to noise on the testing problems we consider here.

Figure 2: Neural operator architecture schematic. The input function $a$ is passed to a pointwise lifting operator $P$, followed by $T$ layers of integral operators and pointwise nonlinearities $\sigma$. Finally, the pointwise projection operator $Q$ outputs the function $u$. Three instantiations of neural operator layers, GNO, LNO, and FNO, are shown.

| Property | NNs | DeepONets | Interpolation | Neural Operators |
| --- | --- | --- | --- | --- |
| Discretization invariance | ✗ | ✗ | ✓ | ✓ |
| Is the output a function? | ✗ | ✓ | ✓ | ✓ |
| Can query the output at any point? | ✗ | ✓ | ✓ | ✓ |
| Can take the input at any point? | ✗ | ✗ | ✓ | ✓ |
| Universal approximation | ✗ | ✓ | ✗ | ✓ |

Table 1: Comparison of deep learning models. The rows indicate whether each model class is discretization invariant, whether its output is a function, whether the output can be queried at any point, whether the input can be taken at any point, and whether the class is a universal approximator of operators. Neural operators are discretization-invariant deep learning methods that output functions and can approximate any continuous operator.

1.2 Background and Context

Data-driven approaches for solving PDEs.

Over the past decades, significant progress has been made in formulating (Gurtin, 1982) and solving (Johnson, 2012) the governing PDEs in many scientific fields from micro-scale problems (e.g., quantum and molecular dynamics) to macro-scale applications (e.g., civil and marine engineering). Despite the success in the application of PDEs to solve real-world problems, two significant challenges remain: (1) identifying the governing model for complex systems; (2) efficiently solving large-scale nonlinear systems of equations.

Identifying and formulating the underlying PDEs appropriate for modeling a specific problem usually requires extensive prior knowledge in the corresponding field which is then combined with universal conservation laws to design a predictive model. For example, modeling the deformation and failure of solid structures requires detailed knowledge of the relationship between stress and strain in the constituent material. For complicated systems such as living cells, acquiring such knowledge is often elusive and formulating the governing PDE for these systems remains prohibitive, or the models proposed are too simplistic to be informative. The possibility of acquiring such knowledge from data can revolutionize these fields. Second, solving complicated nonlinear PDE systems (such as those arising in turbulence and plasticity) is computationally demanding and can often make realistic simulations intractable. Again the possibility of using instances of data to design fast approximate solvers holds great potential for accelerating numerous problems.

Learning PDE Solution Operators.

In PDE applications, the governing differential equations are by definition local, whilst the solution operator exhibits non-local properties. Such non-local effects can be described by integral operators explicitly in the spatial domain, or by means of spectral domain multiplication; convolution is an archetypal example. For integral equations, the graph approximations of Nyström type (Belongie et al., 2002) provide a consistent way of connecting different grid or data structures arising in computational methods and understanding their continuum limits (Von Luxburg et al., 2008; Trillos and Slepčev, 2018; Trillos et al., 2020). For spectral domain calculations, there are well-developed tools for approximating the continuum (Boyd, 2001; Trefethen, 2000). However, these approaches for approximating integral operators are not data-driven. Neural networks present a natural approach for learning-based integral operator approximations since they can incorporate non-locality. However, standard neural networks are limited to the discretization of the training data and hence offer a poor approximation to the integral operator. We tackle this issue here by proposing the framework of neural operators.

Properties of existing deep-learning models.

As seen in Table 1, previous deep learning models are mostly defined on a fixed grid, and removing, adding, or moving grid points generally makes these models no longer applicable; thus, they are not discretization invariant. In general, standard neural networks (NNs) such as multilayer perceptrons (MLPs), convolutional neural networks (CNNs), ResNets, and Vision Transformers (ViTs), which take the input grid and output grid as finite-dimensional vectors, are not discretization-invariant since their input and output must lie on a fixed grid with fixed locations. On the other hand, the pointwise neural networks used in PINNs (Raissi et al., 2019), which take each coordinate as input, are discretization-invariant since they can be applied at each location in parallel. However, PINNs represent the solution function of only a single instance and do not learn the map from input functions to output solution functions. CNNs deserve special mention: they do not converge under grid refinement since their receptive fields change with the input grid. If normalized by the grid size, however, CNNs can be applied to uniform grids at different resolutions and converge to differential operators, in a manner similar to the finite difference method. Interpolation is a baseline approach to achieving discretization invariance. While NNs+Interpolation (or, in general, any finite-dimensional neural network combined with interpolation) are resolution invariant and their outputs can be queried at any point, they are not universal approximators of operators since the input and output dimensions of the internal network are fixed to a finite number. DeepONets (Lu et al., 2019) are a class of operators that have the universal approximation property. DeepONets consist of a branch net and a trunk net. The trunk net allows queries at any point, but the branch net constrains the input to fixed locations; however, it is possible to modify the branch net to make the methodology discretization invariant, for example by using the PCA-based approach of De Hoop et al. (2022).

Furthermore, we show transformers (Vaswani et al., 2017) are special cases of neural operators with structured kernels that can be used with varying grids to represent the input function. However, the commonly used vision-based extensions of transformers, e.g., ViT (Dosovitskiy et al., 2020), use convolutions on patches to generate tokens, and therefore, they are not discretization-invariant models.

We also show that when our proposed neural operators are applied only on fixed grids, the resulting architectures coincide with neural networks and other operator learning frameworks. In such reductions, point evaluations of the input functions are available on the grid points. In particular, we show that DeepONets (Lu et al., 2019), which are maps from finite-dimensional spaces to infinite-dimensional spaces, are special cases of the neural operator architecture when neural operators are restricted to fixed input grids. Moreover, by introducing an adjustment to the DeepONet architecture, we propose the DeepONet-Operator model that fits into the full operator learning framework of maps between function spaces.

2 Learning Operators

In subsection 2.1, we describe the generic setting of PDEs to make the discussion in the following sections concrete. In subsection 2.2, we outline the general problem of operator learning as well as our approach to solving it. In subsection 2.3, we discuss the functional data that is available and how we work with it numerically.

2.1 Generic Parametric PDEs

We consider the generic family of PDEs of the following form,

$$
\begin{aligned}
(\mathsf{L}_a u)(x) &= f(x), \qquad x \in D, \\
u(x) &= 0, \qquad\ \, x \in \partial D,
\end{aligned}
\tag{1}
$$

for some $a \in \mathcal{A}$, $f \in \mathcal{U}^*$ and $D \subset \mathbb{R}^d$ a bounded domain. We assume that the solution $u : D \to \mathbb{R}$ lives in the Banach space $\mathcal{U}$ and $\mathsf{L}_a : \mathcal{A} \to \mathcal{L}(\mathcal{U}; \mathcal{U}^*)$ is a mapping from the parameter Banach space $\mathcal{A}$ to the space of (possibly unbounded) linear operators mapping $\mathcal{U}$ to its dual $\mathcal{U}^*$. A natural operator which arises from this PDE is $\mathcal{G}^\dagger := \mathsf{L}_a^{-1} f : \mathcal{A} \to \mathcal{U}$, defined to map the parameter to the solution, $a \mapsto u$. A simple example that we study further in Section 6.2 is when $\mathsf{L}_a$ is the weak form of the second-order elliptic operator $-\nabla \cdot (a \nabla)$ subject to homogeneous Dirichlet boundary conditions. In this setting, $\mathcal{A} = L^\infty(D; \mathbb{R}_+)$, $\mathcal{U} = H_0^1(D; \mathbb{R})$, and $\mathcal{U}^* = H^{-1}(D; \mathbb{R})$. When needed, we will assume that the domain $D$ is discretized into $K \in \mathbb{N}$ points and that we observe $N \in \mathbb{N}$ pairs of coefficient functions and (approximate) solution functions $\{a^{(i)}, u^{(i)}\}_{i=1}^N$ that are used to train the model (see Section 2.2). We assume that the $a^{(i)}$ are i.i.d. samples from a probability measure $\mu$ supported on $\mathcal{A}$ and that the $u^{(i)}$ are the pushforwards under $\mathcal{G}^\dagger$.

2.2 Problem Setting

Our goal is to learn a mapping between two infinite-dimensional spaces by using a finite collection of observations of input-output pairs from this mapping. We make this problem concrete in the following setting. Let $\mathcal{A}$ and $\mathcal{U}$ be Banach spaces of functions defined on bounded domains $D \subset \mathbb{R}^d$, $D' \subset \mathbb{R}^{d'}$ respectively, and let $\mathcal{G}^\dagger : \mathcal{A} \to \mathcal{U}$ be a (typically) non-linear map. Suppose we have observations $\{a^{(i)}, u^{(i)}\}_{i=1}^N$ where the $a^{(i)} \sim \mu$ are i.i.d. samples drawn from some probability measure $\mu$ supported on $\mathcal{A}$ and $u^{(i)} = \mathcal{G}^\dagger(a^{(i)})$, possibly corrupted with noise. We aim to build an approximation of $\mathcal{G}^\dagger$ by constructing a parametric map

$$
\mathcal{G}_\theta : \mathcal{A} \to \mathcal{U}, \qquad \theta \in \mathbb{R}^p
\tag{2}
$$

with parameters from the finite-dimensional space $\mathbb{R}^p$ and then choosing $\theta^\dagger \in \mathbb{R}^p$ so that $\mathcal{G}_{\theta^\dagger} \approx \mathcal{G}^\dagger$.

We will be interested in controlling the error of the approximation on average with respect to $\mu$. In particular, assuming $\mathcal{G}^\dagger$ is $\mu$-measurable, we will aim to control the $L^2_\mu(\mathcal{A}; \mathcal{U})$ Bochner norm of the approximation

$$
\|\mathcal{G}^\dagger - \mathcal{G}_\theta\|^2_{L^2_\mu(\mathcal{A}; \mathcal{U})}
= \mathbb{E}_{a \sim \mu} \|\mathcal{G}^\dagger(a) - \mathcal{G}_\theta(a)\|^2_{\mathcal{U}}
= \int_{\mathcal{A}} \|\mathcal{G}^\dagger(a) - \mathcal{G}_\theta(a)\|^2_{\mathcal{U}} \, d\mu(a).
\tag{3}
$$

This is a natural framework for learning in infinite dimensions as one could seek to solve the associated empirical-risk minimization problem

$$
\min_{\theta \in \mathbb{R}^p} \mathbb{E}_{a \sim \mu} \|\mathcal{G}^\dagger(a) - \mathcal{G}_\theta(a)\|^2_{\mathcal{U}}
\;\approx\;
\min_{\theta \in \mathbb{R}^p} \frac{1}{N} \sum_{i=1}^N \|u^{(i)} - \mathcal{G}_\theta(a^{(i)})\|^2_{\mathcal{U}}
\tag{4}
$$

which directly parallels the classical finite-dimensional setting (Vapnik, 1998). As well as using error measured in the Bochner norm, we will also consider the setting where error is measured uniformly over compact sets of $\mathcal{A}$. In particular, given any compact $K \subset \mathcal{A}$, we consider

$$
\sup_{a \in K} \|\mathcal{G}^\dagger(a) - \mathcal{G}_\theta(a)\|_{\mathcal{U}}
\tag{5}
$$

which is a more standard error metric in the approximation theory literature. Indeed, the classical approximation theory of neural networks is formulated analogously to equation (5) (Hornik et al., 1989).

In Section 8 we show that, for the architecture we propose and given any desired error tolerance, there exists $p \in \mathbb{N}$ and an associated parameter $\theta^\dagger \in \mathbb{R}^p$ such that the loss (3) or (5) is less than the specified tolerance. However, we do not address the challenging open problems of characterizing the error with respect to either (a) a fixed parameter dimension $p$ or (b) a fixed number of training samples $N$. Instead, we approach this in the empirical test-train setting where we minimize (4) based on a fixed training set and approximate (3) from new samples that were not seen during training. Because we conceptualize our methodology in the infinite-dimensional setting, all finite-dimensional approximations can share a common set of network parameters which are defined in the (approximation-free) infinite-dimensional setting. In particular, our architecture does not depend on the way the functions $a^{(i)}, u^{(i)}$ are discretized. The notation used throughout this paper, along with a useful summary table, may be found in Appendix A.
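The test-train protocol around (3)-(4) can be sketched in a few lines of numpy. As an assumed toy example (not from the paper), take the target operator to be the antiderivative map on a fixed grid and the parametric class to be linear maps $\theta \in \mathbb{R}^{L \times L}$, so the empirical-risk minimizer in (4) has a least-squares closed form; the average over held-out samples then estimates the Bochner-norm error (3).

```python
import numpy as np

rng = np.random.default_rng(0)
L, N = 32, 200
x = np.linspace(0, 1, L)

def sample_a(n):
    """Draw n i.i.d. inputs a ~ mu: random low-frequency trigonometric fields."""
    ks = np.arange(1, 4)
    coeffs = rng.normal(size=(n, 3))
    return coeffs @ np.sin(np.outer(ks, 2 * np.pi * x))

def G_true(A):
    """Discretized target operator: a -> antiderivative of a (cumulative trapezoid)."""
    h = x[1] - x[0]
    mid = 0.5 * (A[:, 1:] + A[:, :-1]) * h
    return np.concatenate([np.zeros((A.shape[0], 1)), np.cumsum(mid, axis=1)], axis=1)

A_train, A_test = sample_a(N), sample_a(50)
U_train, U_test = G_true(A_train), G_true(A_test)

# Empirical-risk minimization (4): since G_theta is linear here, the minimizer
# over theta has a closed form via least squares on the training pairs.
theta, *_ = np.linalg.lstsq(A_train, U_train, rcond=None)

# Monte Carlo estimate of the Bochner-norm error (3) on held-out samples,
# with the discretized L^2(0,1) norm playing the role of the U-norm.
h = x[1] - x[0]
test_risk = np.mean(np.sum((U_test - A_test @ theta) ** 2, axis=1) * h)
```

Because the toy target operator is itself linear, the held-out risk is essentially zero here; for the nonlinear solution operators considered in the paper, the same protocol applies with $\mathcal{G}_\theta$ a neural operator trained by gradient descent.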

2.3 Discretization

Since our data $a^{(i)}$ and $u^{(i)}$ are, in general, functions, to work with them numerically we assume access only to their pointwise evaluations. To illustrate this, we will continue with the example of the preceding paragraph. For simplicity, assume $D = D'$ and suppose that the input and output functions are both real-valued. Let $D^{(i)} = \{x^{(i)}_\ell\}_{\ell=1}^L \subset D$ be an $L$-point discretization of the domain $D$ and assume we have observations $a^{(i)}|_{D^{(i)}}, u^{(i)}|_{D^{(i)}} \in \mathbb{R}^L$ for the finite collection of input-output pairs indexed by $i$. In the next section, we propose a kernel-inspired graph neural network architecture which, while trained on the discretized data, can produce the solution $u(x)$ for any $x \in D$ given an input $a \sim \mu$. In particular, our discretized architecture maps into the space $\mathcal{U}$ and not into a discretization thereof. Furthermore, our parametric operator class is consistent in that, given a fixed set of parameters, refinement of the input discretization converges to the true function-space operator. We make this notion precise in what follows and refer to architectures that possess it as function space architectures, mesh-invariant architectures, or discretization-invariant architectures.

Definition 1

We call a discrete refinement of the domain $D \subset \mathbb{R}^d$ any sequence of nested sets $D_1 \subset D_2 \subset \cdots \subset D$ with $|D_L| = L$ for any $L \in \mathbb{N}$ such that, for any $\epsilon > 0$, there exists a number $L = L(\epsilon) \in \mathbb{N}$ such that

$$
D \subseteq \bigcup_{x \in D_L} \{ y \in \mathbb{R}^d : \|y - x\|_2 < \epsilon \}.
$$

Definition 2

Given a discrete refinement $(D_L)_{L=1}^\infty$ of the domain $D \subset \mathbb{R}^d$, any member $D_L$ is called a discretization of $D$.

Since $a : D \subset \mathbb{R}^d \to \mathbb{R}^m$, pointwise evaluation of the function (discretization) at a set of $L$ points gives rise to the data set $\{(x_\ell, a(x_\ell))\}_{\ell=1}^L$. Note that this may be viewed as a vector in $\mathbb{R}^{Ld} \times \mathbb{R}^{Lm}$. An example of mesh refinement is given in Figure 1.

Definition 3

Suppose $\mathcal{A}$ is a Banach space of $\mathbb{R}^m$-valued functions on the domain $D \subset \mathbb{R}^d$. Let $\mathcal{G} : \mathcal{A} \to \mathcal{U}$ be an operator, $D_L$ be an $L$-point discretization of $D$, and $\hat{\mathcal{G}} : \mathbb{R}^{Ld} \times \mathbb{R}^{Lm} \to \mathcal{U}$ some map. For any compact $K \subset \mathcal{A}$, we define the discretized uniform risk as

$$
R_K(\mathcal{G}, \hat{\mathcal{G}}, D_L) = \sup_{a \in K} \big\| \hat{\mathcal{G}}(D_L, a|_{D_L}) - \mathcal{G}(a) \big\|_{\mathcal{U}}.
$$

Definition 4

Let $\Theta \subseteq \mathbb{R}^p$ be a finite-dimensional parameter space and $\mathcal{G} : \mathcal{A} \times \Theta \to \mathcal{U}$ a map representing a parametric class of operators with parameters $\theta \in \Theta$. Given a discrete refinement $(D_L)_{L=1}^\infty$ of the domain $D \subset \mathbb{R}^d$, we say $\mathcal{G}$ is discretization-invariant if there exists a sequence of maps $\hat{\mathcal{G}}_1, \hat{\mathcal{G}}_2, \ldots$ with $\hat{\mathcal{G}}_L : \mathbb{R}^{Ld} \times \mathbb{R}^{Lm} \times \Theta \to \mathcal{U}$ such that, for any $\theta \in \Theta$ and any compact set $K \subset \mathcal{A}$,

$$
\lim_{L \to \infty} R_K\big(\mathcal{G}(\cdot, \theta), \hat{\mathcal{G}}_L(\cdot, \cdot, \theta), D_L\big) = 0.
$$

We prove that the architectures proposed in Section 3 are discretization-invariant. We further verify this claim numerically by showing that the approximation error is approximately constant as we refine the discretization. Such a property is highly desirable as it allows a transfer of solutions between different grid geometries and discretization sizes with a single architecture that has a fixed number of parameters.
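A minimal numerical check in the spirit of Definition 4 can be sketched in numpy, under assumptions chosen purely for illustration: a fixed exponential kernel operator, uniform midpoint grids standing in for a discrete refinement, a fine reference grid standing in for the continuum operator, and two test inputs standing in for the compact set $K$.

```python
import numpy as np

kappa = lambda X, Y: np.exp(-np.abs(X - Y))   # a fixed integral kernel on [0,1]^2
x_out = np.linspace(0, 1, 9)                  # points at which the output is queried

def G_hat(y_grid, a_vals):
    """Riemann-sum discretization of (G a)(x) = integral of kappa(x, y) a(y) dy."""
    w = 1.0 / y_grid.size                     # midpoint-rule quadrature weights
    return kappa(x_out[:, None], y_grid[None, :]) @ (a_vals * w)

test_inputs = [np.cos, lambda y: y * (1 - y)]   # a small surrogate for the compact set K
ref_grid = (np.arange(8192) + 0.5) / 8192       # fine reference for the continuum operator
risks = []
for L in [8, 32, 128, 512]:
    y = (np.arange(L) + 0.5) / L                # uniform L-point discretizations
    risk = max(np.max(np.abs(G_hat(y, a(y)) - G_hat(ref_grid, a(ref_grid))))
               for a in test_inputs)
    risks.append(risk)
# The estimated discretized risk decays toward zero as L grows: the fixed-parameter
# discretized operators converge to the continuum operator, as Definition 4 requires.
```

The same parameters (here, the kernel) are shared across all discretizations; only the quadrature changes with $L$.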

We note that, while the application of our methodology is based on having pointwise evaluations of the function, it is not limited by it. One may, for example, represent a function numerically as a finite set of truncated basis coefficients. Invariance of the representation would then be with respect to the size of this set. Our methodology can, in principle, be modified to accommodate this scenario through a suitably chosen architecture. We do not pursue this direction in the current work. By construction, when the input and output functions are evaluated on fixed grids, the neural operator architecture on these fixed grids coincides with a class of neural networks.

3 Neural Operators

In this section, we outline the neural operator framework. We assume that the input functions $a \in \mathcal{A}$ are $\mathbb{R}^{d_a}$-valued and defined on the bounded domain $D \subset \mathbb{R}^d$ while the output functions $u \in \mathcal{U}$ are $\mathbb{R}^{d_u}$-valued and defined on the bounded domain $D' \subset \mathbb{R}^{d'}$. The proposed architecture $\mathcal{G}_\theta : \mathcal{A} \to \mathcal{U}$ has the following overall structure:

1. Lifting: Using a pointwise function $\mathbb{R}^{d_a} \to \mathbb{R}^{d_{v_0}}$, map the input $\{a : D \to \mathbb{R}^{d_a}\} \mapsto \{v_0 : D \to \mathbb{R}^{d_{v_0}}\}$ to its first hidden representation. Usually, we choose $d_{v_0} > d_a$ and hence this is a lifting operation performed by a fully local operator.

2. Iterative Kernel Integration: For $t = 0, \ldots, T-1$, map each hidden representation to the next, $\{v_t : D_t \to \mathbb{R}^{d_{v_t}}\} \mapsto \{v_{t+1} : D_{t+1} \to \mathbb{R}^{d_{v_{t+1}}}\}$, via the action of the sum of a local linear operator, a non-local integral kernel operator, and a bias function, composing the sum with a fixed, pointwise nonlinearity. Here we set $D_0 = D$ and $D_T = D'$ and impose that each $D_t \subset \mathbb{R}^{d_t}$ is a bounded domain.

3. Projection: Using a pointwise function $\mathbb{R}^{d_{v_T}} \to \mathbb{R}^{d_u}$, map the last hidden representation $\{v_T : D' \to \mathbb{R}^{d_{v_T}}\} \mapsto \{u : D' \to \mathbb{R}^{d_u}\}$ to the output function. Analogously to the first step, we usually pick $d_{v_T} > d_u$ and hence this is a projection step performed by a fully local operator.

The outlined structure mimics that of a finite-dimensional neural network where hidden representations are successively mapped to produce the final output. In particular, we have

$$
\mathcal{G}_\theta := \mathcal{Q} \circ \sigma_T(W_{T-1} + \mathcal{K}_{T-1} + b_{T-1}) \circ \cdots \circ \sigma_1(W_0 + \mathcal{K}_0 + b_0) \circ \mathcal{P}
\tag{6}
$$

where $\mathcal{P} : \mathbb{R}^{d_a} \to \mathbb{R}^{d_{v_0}}$ and $\mathcal{Q} : \mathbb{R}^{d_{v_T}} \to \mathbb{R}^{d_u}$ are the local lifting and projection mappings respectively, $W_t \in \mathbb{R}^{d_{v_{t+1}} \times d_{v_t}}$ are local linear operators (matrices), $\mathcal{K}_t : \{v_t : D_t \to \mathbb{R}^{d_{v_t}}\} \to \{v_{t+1} : D_{t+1} \to \mathbb{R}^{d_{v_{t+1}}}\}$ are integral kernel operators, $b_t : D_{t+1} \to \mathbb{R}^{d_{v_{t+1}}}$ are bias functions, and $\sigma_t$ are fixed activation functions acting locally as maps $\mathbb{R}^{d_{v_{t+1}}} \to \mathbb{R}^{d_{v_{t+1}}}$ in each layer. The output dimensions $d_{v_0}, \ldots, d_{v_T}$ as well as the input dimensions $d_1, \ldots, d_{T-1}$ and domains of definition $D_1, \ldots, D_{T-1}$ are hyperparameters of the architecture. By local maps, we mean that the action is pointwise; in particular, for the lifting and projection maps we have $(\mathcal{P}(a))(x) = \mathcal{P}(a(x))$ for any $x \in D$ and $(\mathcal{Q}(v_T))(x) = \mathcal{Q}(v_T(x))$ for any $x \in D'$ and, similarly, for the activation, $(\sigma(v_{t+1}))(x) = \sigma(v_{t+1}(x))$ for any $x \in D_{t+1}$. The maps $\mathcal{P}$, $\mathcal{Q}$, and $\sigma_t$ can thus be thought of as defining Nemitskiy operators (Dudley and Norvaisa, 2011, Chapters 6, 7) when each of their components is assumed to be Borel measurable. This interpretation allows us to define the general neural operator architecture when pointwise evaluation is not well-defined in the spaces $\mathcal{A}$ or $\mathcal{U}$, e.g. when they are Lebesgue, Sobolev, or Besov spaces.

The crucial difference between the proposed architecture (6) and a standard feed-forward neural network is that all operations are directly defined in function space (noting that the activation functions, $\mathcal{P}$, and $\mathcal{Q}$ are all interpreted through their extension to Nemitskiy operators) and therefore do not depend on any discretization of the data. Intuitively, the lifting step locally maps the data to a space where the non-local part of $\mathcal{G}^\dagger$ is easier to capture. We confirm this intuition numerically in Section 7; however, we note that for the theory presented in Section 8 it suffices that $\mathcal{P}$ is the identity map. The non-local part of $\mathcal{G}^\dagger$ is then learned by successive approximation using integral kernel operators composed with a local nonlinearity. Each integral kernel operator is the function space analog of the weight matrix in a standard feed-forward network, since both are linear operators mapping one (function) space to another. We turn the biases, which are normally vectors, into functions and, using intuition from the ResNet architecture (He et al., 2016), we further add a local linear operator acting on the output of the previous layer before applying the nonlinearity. The final projection step simply gets us back to the space of our output function. We concatenate in $\theta \in \mathbb{R}^p$ the parameters of $\mathcal{P}$, $\mathcal{Q}$, and $\{b_t\}$, which are usually themselves shallow neural networks, the parameters of the kernels representing $\{\mathcal{K}_t\}$, which are again usually shallow neural networks, and the matrices $\{W_t\}$. We note, however, that our framework is general and other parameterizations, such as polynomials, may also be employed.
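The composition (6) can be sketched in numpy, with assumptions made purely for illustration: scalar input and output ($d_a = d_u = 1$), a fixed (untrained) exponential kernel shared across layers in place of learned kernel networks, random weights, and ReLU activations. The point is structural: the same finite set of parameters acts on any discretization of the input function.

```python
import numpy as np

rng = np.random.default_rng(0)
d_v, T = 8, 3                                # channel width and number of layers

# Random (untrained) parameters shared across all resolutions; in practice these
# are learned, and each kernel is itself a small neural network.
P = rng.normal(size=(1, d_v)) / d_v          # pointwise lifting  R^{d_a} -> R^{d_v}
Q = rng.normal(size=(d_v, 1)) / d_v          # pointwise projection R^{d_v} -> R^{d_u}
Ws = [rng.normal(size=(d_v, d_v)) / d_v for _ in range(T)]
bs = [rng.normal(size=(1, d_v)) for _ in range(T)]   # constant bias functions

def neural_operator(x, a_vals):
    """Discretized forward pass of (6) on an arbitrary grid x over D = (0, 1)."""
    L = x.size
    v = a_vals[:, None] @ P                               # lifting, applied pointwise
    Kmat = np.exp(-np.abs(x[:, None] - x[None, :]))       # toy kernel kappa(x,y) = e^{-|x-y|}
    for t in range(T):
        Kv = (Kmat @ v) / L                               # integral operator (7), Riemann sum
        v = np.maximum(v @ Ws[t].T + Kv + bs[t], 0.0)     # sigma(W v + K v + b), sigma = ReLU
    return (v @ Q).ravel()                                # projection, applied pointwise

# The *same* parameters act on any discretization of the input function:
x64 = (np.arange(64) + 0.5) / 64
x256 = (np.arange(256) + 0.5) / 256
u64 = neural_operator(x64, np.sin(2 * np.pi * x64))
u256 = neural_operator(x256, np.sin(2 * np.pi * x256))
```

Evaluating at 64 and 256 points with identical parameters yields outputs that agree up to quadrature error, which is the discretization-invariance property in miniature.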

Integral Kernel Operators

We define three versions of the integral kernel operator $\mathcal{K}_t$ used in (6). For the first, let $\kappa^{(t)} \in C(D_{t+1} \times D_t; \mathbb{R}^{d_{v_{t+1}} \times d_{v_t}})$ and let $\nu_t$ be a Borel measure on $D_t$. Then we define $\mathcal{K}_t$ by

$$(\mathcal{K}_t(v_t))(x) = \int_{D_t} \kappa^{(t)}(x,y) \, v_t(y) \, \mathrm{d}\nu_t(y) \qquad \forall x \in D_{t+1}. \tag{7}$$

Normally, we take ๐œˆ ๐‘ก to simply be the Lebesgue measure on โ„ ๐‘‘ ๐‘ก but, as discussed in Section 4, other choices can be used to speed up computation or aid the learning process by building in a priori information. The choice of integral kernel operator in (7) defines the basic form of the neural operator and is the one we analyze in Section 8 and study most in the numerical experiments of Section 7.

For the second, let $\kappa^{(t)} \in C(D_{t+1} \times D_t \times \mathbb{R}^{d_a} \times \mathbb{R}^{d_a}; \mathbb{R}^{d_{v_{t+1}} \times d_{v_t}})$. Then we define $\mathcal{K}_t$ by

$$(\mathcal{K}_t(v_t))(x) = \int_{D_t} \kappa^{(t)}\big(x, y, a(\Pi^D_{t+1}(x)), a(\Pi^D_t(y))\big) \, v_t(y) \, \mathrm{d}\nu_t(y) \qquad \forall x \in D_{t+1}, \tag{8}$$

where $\Pi^D_t : D_t \to D$ are fixed mappings. We have found numerically that, for certain PDE problems, the form (8) outperforms (7) due to the strong dependence of the solution $u$ on the parameters $a$; an example is the Darcy flow problem considered in Subsection 7.2.1. Indeed, if we think of (6) as a discrete-time dynamical system, then the input $a \in \mathcal{A}$ enters only through the initial condition, so its influence diminishes with depth. By building $a$-dependence directly into the kernel, we ensure that it influences the entire architecture.

Lastly, let $\kappa^{(t)} \in C(D_{t+1} \times D_t \times \mathbb{R}^{d_{v_t}} \times \mathbb{R}^{d_{v_t}}; \mathbb{R}^{d_{v_{t+1}} \times d_{v_t}})$. Then we define $\mathcal{K}_t$ by

$$(\mathcal{K}_t(v_t))(x) = \int_{D_t} \kappa^{(t)}\big(x, y, v_t(\Pi_t(x)), v_t(y)\big) \, v_t(y) \, \mathrm{d}\nu_t(y) \qquad \forall x \in D_{t+1}, \tag{9}$$

where $\Pi_t : D_{t+1} \to D_t$ are fixed mappings. Note that, in contrast to (7) and (8), the integral operator (9) is nonlinear since the kernel can depend on the input function $v_t$. With this definition and a particular choice of kernel $\kappa^{(t)}$ and measure $\nu_t$, we show in Section 5.2 that neural operators are a continuous input/output-space generalization of the popular transformer architecture (Vaswani et al., 2017).

Single Hidden Layer Construction

Having defined possible choices for the integral kernel operator, we are now in a position to write down explicitly a full layer of the architecture defined by (6). For simplicity, we choose the integral kernel operator given by (7), but note that the other definitions (8) and (9) work analogously. A single hidden layer update is then given by

$$v_{t+1}(x) = \sigma_{t+1}\left( W_t v_t(\Pi_t(x)) + \int_{D_t} \kappa^{(t)}(x,y) \, v_t(y) \, \mathrm{d}\nu_t(y) + b_t(x) \right) \qquad \forall x \in D_{t+1}, \tag{10}$$

where $\Pi_t : D_{t+1} \to D_t$ are fixed mappings. We remark that, since we often consider functions on the same domain, we usually take $\Pi_t$ to be the identity.

We will now give an example of a full single-hidden-layer architecture, i.e. when $T = 2$. We choose $D_1 = D$, take $\sigma_2$ to be the identity, and denote $\sigma_1$ by $\sigma$, assuming it is any activation function. Furthermore, for simplicity, we set $W_1 = 0$, $b_1 = 0$, and assume that $\nu_0 = \nu_1$ is the Lebesgue measure on $\mathbb{R}^d$. Then (6) becomes

$$(\mathcal{G}_\theta(a))(x) = \mathcal{Q}\left( \int_D \kappa^{(1)}(x,y) \, \sigma\left( W_0 \mathcal{P}(a(y)) + \int_D \kappa^{(0)}(y,z) \, \mathcal{P}(a(z)) \, \mathrm{d}z + b_0(y) \right) \mathrm{d}y \right) \tag{11}$$

for any $x \in D'$. In this example, $\mathcal{P} \in C(\mathbb{R}^{d_a}; \mathbb{R}^{d_{v_0}})$, $\kappa^{(0)} \in C(D \times D; \mathbb{R}^{d_{v_1} \times d_{v_0}})$, $b_0 \in C(D; \mathbb{R}^{d_{v_1}})$, $W_0 \in \mathbb{R}^{d_{v_1} \times d_{v_0}}$, $\kappa^{(1)} \in C(D' \times D; \mathbb{R}^{d_{v_2} \times d_{v_1}})$, and $\mathcal{Q} \in C(\mathbb{R}^{d_{v_2}}; \mathbb{R}^{d_u})$. One can then parametrize the continuous functions $\mathcal{P}, \mathcal{Q}, \kappa^{(0)}, \kappa^{(1)}, b_0$ by standard feed-forward neural networks (or by any other means) and the matrix $W_0$ simply by its entries. The parameter vector $\theta \in \mathbb{R}^p$ then becomes the concatenation of the parameters of $\mathcal{P}, \mathcal{Q}, \kappa^{(0)}, \kappa^{(1)}, b_0$ along with the entries of $W_0$. One can then optimize these parameters using standard gradient-based minimization techniques. To implement this minimization, the functions entering the loss need to be discretized, but the learned parameters may then be used with other discretizations. In Section 4, we discuss various choices for parametrizing the kernels, picking the integration measure, and how those choices affect the computational complexity of the architecture.

Preprocessing

It is often beneficial to manually include features into the input functions ๐‘Ž to help facilitate the learning process. For example, instead of considering the โ„ ๐‘‘ ๐‘Ž -valued vector field ๐‘Ž as input, we use the โ„ ๐‘‘ + ๐‘‘ ๐‘Ž -valued vector field ( ๐‘ฅ , ๐‘Ž โข ( ๐‘ฅ ) ) . By including the identity element, information about the geometry of the spatial domain ๐ท is directly incorporated into the architecture. This allows the neural networks direct access to information that is already known in the problem and therefore eases learning. We use this idea in all of our numerical experiments in Section 7. Similarly, when learning a smoothing operator, it may be beneficial to include a smoothed version of the inputs ๐‘Ž ๐œ– using, for example, Gaussian convolution. Derivative information may also be of interest and thus, as input, one may consider, for example, the โ„ ๐‘‘ + 2 โข ๐‘‘ ๐‘Ž + ๐‘‘ โข ๐‘‘ ๐‘Ž -valued vector field ( ๐‘ฅ , ๐‘Ž โข ( ๐‘ฅ ) , ๐‘Ž ๐œ– โข ( ๐‘ฅ ) , โˆ‡ ๐‘ฅ ๐‘Ž ๐œ– โข ( ๐‘ฅ ) ) . Many other possibilities may be considered on a problem-specific basis.
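As a toy illustration, the sketch below (the helper `augment_inputs`, the stencil width, and the smoothing scale are our assumptions, not the paper's) assembles the augmented feature field $(x, a(x), a_\epsilon(x), \nabla_x a_\epsilon(x))$ on a uniform 1-d grid:

```python
import numpy as np

def augment_inputs(grid, a, eps=0.05):
    """Hypothetical preprocessing on a uniform 1-d grid: stack the features
    (x, a(x), a_eps(x), grad a_eps(x)).  Gaussian smoothing is approximated
    by a truncated discrete convolution and the derivative by finite
    differences; both are illustrative choices."""
    h = grid[1] - grid[0]
    z = np.arange(-3, 4) * h                 # 7-point stencil
    g = np.exp(-z**2 / (2 * eps**2))
    g /= g.sum()                             # normalized Gaussian taps
    a_eps = np.convolve(a, g, mode="same")   # smoothed input a_eps
    grad_a_eps = np.gradient(a_eps, h)       # derivative feature
    return np.stack([grid, a, a_eps, grad_a_eps], axis=-1)
```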

Discretization Invariance and Approximation

In light of the discretization invariance Theorem 8 and the universal approximation Theorems 11, 12, 13, and 14, whose formal statements are given in Section 8, we may decompose the total error made by a neural operator into a sum of the discretization error and the approximation error. In particular, given a finite-dimensional instantiation of a neural operator $\hat{\mathcal{G}}_\theta : \mathbb{R}^{Ld} \times \mathbb{R}^{Ld_a} \to \mathcal{U}$, for some $L$-point discretization of the input, we have

$$\big\|\hat{\mathcal{G}}_\theta(D_L, a|_{D_L}) - \mathcal{G}^\dagger(a)\big\|_{\mathcal{U}} \le \underbrace{\big\|\hat{\mathcal{G}}_\theta(D_L, a|_{D_L}) - \mathcal{G}_\theta(a)\big\|_{\mathcal{U}}}_{\text{discretization error}} + \underbrace{\big\|\mathcal{G}_\theta(a) - \mathcal{G}^\dagger(a)\big\|_{\mathcal{U}}}_{\text{approximation error}}.$$

Our approximation-theoretic theorems imply that we can find parameters $\theta$ so that the approximation error is arbitrarily small, while the discretization invariance theorem states that we can find a fine enough discretization (large enough $L$) so that the discretization error is arbitrarily small. Therefore, with a fixed set of parameters independent of the input discretization, a neural operator implemented on a computer can approximate operators to arbitrary accuracy.

4Parameterization and Computation

In this section, we discuss various ways of parameterizing the infinite-dimensional architecture (6); see Figure 2. The goal is to find an intrinsic infinite-dimensional parameterization that achieves small error (say $\epsilon$), and then rely on numerical approximation to ensure that this parameterization delivers an error of the same magnitude (say $2\epsilon$) for all data discretizations fine enough. In this way the number of parameters used to achieve $\mathcal{O}(\epsilon)$ error is independent of the data discretization. In many applications we have in mind, the data discretization is something we can control, for example when generating input/output pairs from solutions of partial differential equations via numerical simulation. The proposed approach allows us to train a neural operator approximation using data from different discretizations, and to predict with discretizations different from those used in the data, all by relating everything to the underlying infinite-dimensional problem.

We also discuss the computational complexity of the proposed parameterizations and suggest algorithms which yield efficient numerical methods for approximation. Subsections 4.1-4.4 delineate each of the proposed methods.

To simplify notation, we will consider only a single layer of (6), i.e. (10), and choose the input and output domains to be the same. Furthermore, we drop the layer index $t$ and write the single layer update as

$$u(x) = \sigma\left( W v(x) + \int_D \kappa(x,y) \, v(y) \, \mathrm{d}\nu(y) + b(x) \right) \qquad \forall x \in D, \tag{12}$$

where $D \subset \mathbb{R}^d$ is a bounded domain, $v : D \to \mathbb{R}^n$ is the input function, and $u : D \to \mathbb{R}^m$ is the output function. When the domains of $v$ and $u$ are different, we will usually extend them to a common larger domain. We consider $\sigma$ to be fixed and, for the time being, take $\mathrm{d}\nu(y) = \mathrm{d}y$ to be the Lebesgue measure on $\mathbb{R}^d$. Equation (12) then leaves three objects which can be parameterized: $W$, $\kappa$, and $b$. Since $W$ is linear and acts only locally on $v$, we will always parametrize it by the values of its associated $m \times n$ matrix; hence $W \in \mathbb{R}^{m \times n}$, yielding $mn$ parameters. We have found empirically that letting $b : D \to \mathbb{R}^m$ be a constant function over any domain $D$ works at least as well as allowing it to be an arbitrary neural network. Perusal of the proof of Theorem 11 shows that we do not lose any approximation power by doing this, and we reduce the total number of parameters in the architecture. Therefore we will always parametrize $b$ by the entries of a fixed $m$-dimensional vector; in particular, $b \in \mathbb{R}^m$, yielding $m$ parameters. Notice that both parameterizations are independent of any discretization of $v$.

The rest of this section is dedicated to choosing the kernel function $\kappa : D \times D \to \mathbb{R}^{m \times n}$ and to the computation of the associated integral kernel operator. For clarity of exposition, we consider only the simplest proposed version of this operator, (7), but note that similar ideas may also be applied to (8) and (9). Furthermore, in order to focus on learning the kernel $\kappa$, here we drop $\sigma$, $W$, and $b$ from (12) and simply consider the linear update

$$u(x) = \int_D \kappa(x,y) \, v(y) \, \mathrm{d}\nu(y) \qquad \forall x \in D. \tag{13}$$

To demonstrate the computational challenges associated with (13), let $\{x_1, \dots, x_J\} \subset D$ be a uniformly-sampled $J$-point discretization of $D$. Recall that we assumed $\mathrm{d}\nu(y) = \mathrm{d}y$ and, for simplicity, suppose that $\nu(D) = 1$; then the Monte Carlo approximation of (13) is

$$u(x_j) = \frac{1}{J} \sum_{l=1}^{J} \kappa(x_j, x_l) \, v(x_l), \qquad j = 1, \dots, J. \tag{14}$$

The integral in (13) can be approximated using any other integral approximation method, including the celebrated Riemann sum, for which $u(x_j) = \sum_{l=1}^{J} \kappa(x_j, x_l) \, v(x_l) \, \Delta x_l$, where $\Delta x_l$ is the Riemann sum coefficient associated with $\nu$ at $x_l$. For such approximation methods, computing $u$ on the entire grid requires $\mathcal{O}(J^2)$ matrix-vector multiplications. Each of these multiplications requires $\mathcal{O}(mn)$ operations; for the rest of the discussion, we treat $mn = \mathcal{O}(1)$ as constant and consider only the cost with respect to the discretization parameter $J$, since $m$ and $n$ are fixed by the architecture choice whereas $J$ varies depending on the required discretization accuracy and hence may be arbitrarily large. This cost is not specific to the Monte Carlo approximation but is generic for quadrature rules which use the entirety of the data. Therefore, when $J$ is large, computing (13) becomes intractable and new ideas are needed. Subsections 4.1-4.4 propose different approaches to this problem, inspired by classical methods in numerical analysis. We finally remark that, in contrast, computations with $W$, $b$, and $\sigma$ require only $\mathcal{O}(J)$ operations, which justifies our focus on computation with the kernel integral operator.

Kernel Matrix.

It will often be useful to consider the kernel matrix associated to $\kappa$ for the discrete points $\{x_1, \dots, x_J\} \subset D$. We define the kernel matrix $K \in \mathbb{R}^{mJ \times nJ}$ to be the $J \times J$ block matrix with each block given by the value of the kernel, i.e.

$$K_{jl} = \kappa(x_j, x_l) \in \mathbb{R}^{m \times n}, \qquad j, l = 1, \dots, J,$$

where we use $(j,l)$ to index an individual block rather than a matrix element. Various numerical algorithms for the efficient computation of (13) can be derived based on assumptions made about the structure of this matrix, for example, bounds on its rank or sparsity.

4.1Graph Neural Operator (GNO)

We first outline the Graph Neural Operator (GNO) which approximates (13) by combining a Nystrรถm approximation with domain truncation and is implemented with the standard framework of graph neural networks. This construction was originally proposed in Li et al. (2020c).

Nyström Approximation.

A simple yet effective method to alleviate the cost of computing (13) is a Nyström approximation. This amounts to sampling uniformly at random the points over which we compute the output function $u$. In particular, let $x_{k_1}, \dots, x_{k_{J'}} \subset \{x_1, \dots, x_J\}$ be $J' \ll J$ randomly selected points and, assuming $\nu(D) = 1$, approximate (13) by

$$u(x_{k_j}) \approx \frac{1}{J'} \sum_{l=1}^{J'} \kappa(x_{k_j}, x_{k_l}) \, v(x_{k_l}), \qquad j = 1, \dots, J'.$$

We can view this as a low-rank approximation to the kernel matrix $K$; in particular,

$$K \approx K_{JJ'} K_{J'J'} K_{J'J}, \tag{15}$$

where $K_{J'J'}$ is a $J' \times J'$ block matrix and $K_{JJ'}$, $K_{J'J}$ are interpolation matrices, for example, linearly extending the function to the whole domain from the random nodal points. The complexity of this computation is $\mathcal{O}(J'^2)$; hence it remains quadratic, but only in the number of subsampled points $J'$, which we assume is much smaller than the number of points $J$ in the original discretization.

Truncation.

Another simple method to alleviate the cost of computing (13) is to truncate the integral to a sub-domain of $D$ which depends on the point of evaluation $x \in D$. Let $s : D \to \mathcal{B}(D)$ be a mapping of the points of $D$ to the Lebesgue-measurable subsets of $D$, denoted $\mathcal{B}(D)$. Define $\mathrm{d}\nu(x,y) = \mathbb{1}_{s(x)} \, \mathrm{d}y$; then (13) becomes

$$u(x) = \int_{s(x)} \kappa(x,y) \, v(y) \, \mathrm{d}y \qquad \forall x \in D. \tag{16}$$

If each set $s(x)$ is smaller than $D$, then the cost of computing (16) is $\mathcal{O}(c_s J^2)$, where $c_s < 1$ is a constant depending on $s$. While the cost remains quadratic in $J$, the constant $c_s$ can have a significant effect in practical computations, as we demonstrate in Section 7. For simplicity and ease of implementation, we only consider $s(x) = B(x,r) \cap D$, where $B(x,r) = \{y \in \mathbb{R}^d : \|y - x\|_{\mathbb{R}^d} < r\}$ for some fixed $r > 0$. With this choice of $s$, and assuming that $D = [0,1]^d$, we can explicitly calculate that $c_s \approx r^d$.

Furthermore, notice that we do not lose any expressive power with this approximation so long as we combine it with composition. To see this, consider the example of the previous paragraph where, if we let $r = 2$, then (16) reverts to (13). Pick $r < 1$ and let $L \in \mathbb{N}$ with $L \ge 2$ be the smallest integer such that $2^{L-1} r \ge 1$. Suppose that $u(x)$ is computed by composing the right-hand side of (16) $L$ times, with a different kernel each time. The domain of influence of $u(x)$ is then $B(x, 2^{L-1} r) \cap D = D$; hence it is easy to see that there exist $L$ kernels such that computing this composition is equivalent to computing (13) for any given kernel with appropriate regularity. Furthermore, the cost of this computation is $\mathcal{O}(L r^d J^2)$, and therefore the truncation is beneficial if $r^d (\log_2(1/r) + 1) < 1$, which holds for any $r < 1/2$ when $d = 1$ and any $r < 1$ when $d \ge 2$. Therefore we have shown that we can always reduce the cost of computing (13) by truncation and composition. From the perspective of the kernel matrix, truncation enforces a sparse, block diagonally-dominant structure at each layer. We further explore the hierarchical nature of this computation using the multipole method in Subsection 4.3.

Besides being a useful computational tool, truncation can also be interpreted as explicitly building local structure into the kernel ๐œ… . For problems where such structure exists, explicitly enforcing it makes learning more efficient, usually requiring less data to achieve the same generalization error. Many physical systems such as interacting particles in an electric potential exhibit strong local behavior that quickly decays, making truncation a natural approximation technique.
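The truncated update (16) with $s(x) = B(x,r) \cap D$ can be sketched on a uniform 1-d grid as follows; the Riemann weights $1/J$, the scalar kernel, and the helper name are simplifying assumptions:

```python
import numpy as np

def truncated_kernel_integral(kappa, grid, v, r):
    """Truncated integral (16) with s(x) = B(x, r) intersected with D on a
    uniform 1-d grid: only points with |x - y| < r contribute, so each row
    of the kernel matrix is sparse."""
    J = len(grid)
    u = np.zeros_like(v)
    for j, x in enumerate(grid):
        nbr = np.abs(grid - x) < r                # local ball B(x, r)
        u[j] = kappa(x, grid[nbr]) @ v[nbr] / J   # quadrature over s(x)
    return u
```

Taking $r = 2$ on $D = [0,1]$ covers the whole domain, so the truncated update then coincides with the full quadrature, as the text notes.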

Graph Neural Networks.

We utilize the standard architecture of message passing graph networks employing edge features, as introduced in Gilmer et al. (2017), to efficiently implement (13) on arbitrary discretizations of the domain $D$. To do so, we treat a discretization $\{x_1, \dots, x_J\} \subset D$ as the nodes of a weighted, directed graph and assign edges to each node using the function $s : D \to \mathcal{B}(D)$ which, recall from the section on truncation, assigns to each point a domain of integration. In particular, for $j = 1, \dots, J$, we assign the node $x_j$ the value $v(x_j)$ and emanate from it edges to the nodes $\mathcal{N}(x_j) = s(x_j) \cap \{x_1, \dots, x_J\}$, which we call the neighborhood of $x_j$. If $s(x) = D$ then the graph is fully connected. Generally, the sparsity structure of the graph determines the sparsity of the kernel matrix $K$; indeed, the adjacency matrix of the graph and the block kernel matrix have the same zero entries. The weights of each edge are assigned as the arguments of the kernel. In particular, for the case of (13), the weight of the edge between nodes $x_j$ and $x_k$ is simply the concatenation $(x_j, x_k) \in \mathbb{R}^{2d}$. More complicated weighting functions are considered for the implementation of the integral kernel operators (8) or (9).

With the above definition, the message passing algorithm of Gilmer et al. (2017), with averaging aggregation, updates the value $v(x_j)$ of the node $x_j$ to the value $u(x_j)$ as

$$u(x_j) = \frac{1}{|\mathcal{N}(x_j)|} \sum_{y \in \mathcal{N}(x_j)} \kappa(x_j, y) \, v(y), \qquad j = 1, \dots, J,$$

which corresponds to the Monte Carlo approximation of the integral (16). More sophisticated quadrature rules and adaptive meshes can also be implemented using the general framework of message passing on graphs; see, for example, Pfaff et al. (2020). We further utilize this framework in Subsection 4.3.
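The update above can be sketched directly, as a minimal stand-in for a message passing framework; the adjacency-list format of `neighbors` and the scalar-valued node features are our assumptions:

```python
import numpy as np

def message_passing_update(kappa, nodes, v, neighbors):
    """GNO node update with averaging aggregation:
    u(x_j) = (1/|N(x_j)|) sum over y in N(x_j) of kappa(x_j, y) v(y).
    `neighbors[j]` holds the indices of N(x_j); full connectivity recovers
    the Monte Carlo sum (14), local balls recover the truncated form (16)."""
    u = np.zeros_like(v, dtype=float)
    for j, nbrs in enumerate(neighbors):
        msgs = [kappa(nodes[j], nodes[l]) * v[l] for l in nbrs]
        u[j] = np.mean(msgs)                 # averaging aggregation
    return u
```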

Convolutional Neural Networks.

Lastly, we compare and contrast the GNO framework to standard convolutional neural networks (CNNs). In computer vision, the success of CNNs has largely been attributed to their ability to capture local features such as edges that can be used to distinguish different objects in a natural image. This property is obtained by enforcing the convolution kernel to have local support, an idea similar to our truncation approximation. Furthermore by directly using a translation invariant kernel, a CNN architecture becomes translation equivariant; this is a desirable feature for many vision models e.g. ones that perform segmentation. We will show that similar ideas can be applied to the neural operator framework to obtain an architecture with built-in local properties and translational symmetries that, unlike CNNs, remain consistent in function space.

To that end, let $\kappa(x,y) = \kappa(x-y)$ and suppose that $\kappa : \mathbb{R}^d \to \mathbb{R}^{m \times n}$ is supported on $B(0,r)$. Let $r^* > 0$ be the smallest radius such that $D \subseteq B(x^*, r^*)$, where $x^* \in \mathbb{R}^d$ denotes the center of mass of $D$, and suppose $r \ll r^*$. Then (13) becomes the convolution

$$u(x) = (\kappa * v)(x) = \int_{B(x,r) \cap D} \kappa(x-y) \, v(y) \, \mathrm{d}y \qquad \forall x \in D. \tag{17}$$

Notice that (17) is precisely (16) when $s(x) = B(x,r) \cap D$ and $\kappa(x,y) = \kappa(x-y)$. When the kernel is parameterized by, e.g., a standard neural network and the radius $r$ is chosen independently of the data discretization, (17) becomes a layer of a convolutional neural network that is consistent in function space. Indeed, the parameters of (17) do not depend on any discretization of $v$. The choice $\kappa(x,y) = \kappa(x-y)$ enforces translational equivariance in the output, while picking $r$ small enforces locality in the kernel; hence we obtain the distinguishing features of a CNN model.

We will now show that, by picking a parameterization that is inconsistent in function space and applying a Monte Carlo approximation to the integral, (17) becomes a standard CNN. This is most easily demonstrated when $D = [0,1]$ and the discretization $\{x_1, \dots, x_J\}$ is equispaced, i.e. $|x_{j+1} - x_j| = h$ for $j = 1, \dots, J-1$. Let $k \in \mathbb{N}$ be an odd filter size and let $z_1, \dots, z_k \in \mathbb{R}$ be the points $z_j = (j - 1 - (k-1)/2) h$ for $j = 1, \dots, k$. It is easy to see that $\{z_1, \dots, z_k\} \subset \bar{B}(0, (k-1)h/2)$, which we choose as the support of $\kappa$. Furthermore, we parameterize $\kappa$ directly by its pointwise values, which are $m \times n$ matrices at the locations $z_1, \dots, z_k$, thus yielding $kmn$ parameters. Then (17) becomes

$$u(x_j)_p \approx \frac{1}{k} \sum_{l=1}^{k} \sum_{q=1}^{n} \kappa(z_l)_{pq} \, v(x_j - z_l)_q, \qquad j = 1, \dots, J, \quad p = 1, \dots, m,$$

where we define $v(x) = 0$ if $x \notin \{x_1, \dots, x_J\}$. Up to the constant factor $1/k$, which can be re-absorbed into the parameterization of $\kappa$, this is precisely the update of a stride-1 CNN with $n$ input channels, $m$ output channels, and zero-padding so that the input and output signals have the same length. This example can easily be generalized to higher dimensions and different CNN structures; we made the current choices for simplicity of exposition. Notice that if we double the number of discretization points for $v$, i.e. $J \mapsto 2J$ and $h \mapsto h/2$, the support of $\kappa$ becomes $\bar{B}(0, (k-1)h/4)$; hence the model changes due to the discretization of the data. Indeed, if we take the limit to the continuum $J \to \infty$, we find $\bar{B}(0, (k-1)h/2) \to \{0\}$; hence the model becomes completely local. To fix this, we may try to increase the filter size $k$ (or, equivalently, add more layers) simultaneously with $J$, but then the number of parameters in the model goes to infinity as $J \to \infty$ since, as we previously noted, there are $kmn$ parameters in this layer. Therefore standard CNNs are not consistent models in function space. We demonstrate their inability to generalize to different resolutions in Section 7.

4.2Low-rank Neural Operator (LNO)

By directly imposing that the kernel $\kappa$ has a tensor product form, we obtain a layer with $\mathcal{O}(J)$ computational complexity. We term this construction the Low-rank Neural Operator (LNO) due to its equivalence to directly parameterizing a finite-rank operator. We start by assuming $\kappa : D \times D \to \mathbb{R}$ is scalar-valued and later generalize to the vector-valued setting. We express the kernel as

$$\kappa(x,y) = \sum_{j=1}^{r} \varphi^{(j)}(x) \, \psi^{(j)}(y) \qquad \forall x, y \in D$$

for some functions $\varphi^{(1)}, \psi^{(1)}, \dots, \varphi^{(r)}, \psi^{(r)} : D \to \mathbb{R}$ that are normally given as the components of two neural networks $\varphi, \psi : D \to \mathbb{R}^r$, or of a single neural network $\Xi : D \to \mathbb{R}^{2r}$ which couples all functions through its parameters. With this definition, and supposing that $n = m = 1$, we have that (13) becomes

๐‘ข โข ( ๐‘ฅ )

โˆซ ๐ท โˆ‘ ๐‘—

1 ๐‘Ÿ ๐œ‘ ( ๐‘— ) โข ( ๐‘ฅ ) โข ๐œ“ ( ๐‘— ) โข ( ๐‘ฆ ) โข ๐‘ฃ โข ( ๐‘ฆ ) โข d โข ๐‘ฆ

โˆ‘ ๐‘—

1 ๐‘Ÿ โˆซ ๐ท ๐œ“ ( ๐‘— ) โข ( ๐‘ฆ ) โข ๐‘ฃ โข ( ๐‘ฆ ) โข d โข ๐‘ฆ โข ๐œ‘ ( ๐‘— ) โข ( ๐‘ฅ )

โˆ‘ ๐‘—

1 ๐‘Ÿ โŸจ ๐œ“ ( ๐‘— ) , ๐‘ฃ โŸฉ โข ๐œ‘ ( ๐‘— ) โข ( ๐‘ฅ )

where โŸจ โ‹… , โ‹… โŸฉ denotes the ๐ฟ 2 โข ( ๐ท ; โ„ ) inner product. Notice that the inner products can be evaluated independently of the evaluation point ๐‘ฅ โˆˆ ๐ท hence the computational complexity of this method is ๐’ช โข ( ๐‘Ÿ โข ๐ฝ ) which is linear in the discretization.
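The $\mathcal{O}(rJ)$ evaluation can be sketched as follows, with the $L^2$ inner product approximated by a grid average; `Phi` and `Psi` hold the (assumed precomputed) values of the basis functions on the grid, in practice the outputs of small neural networks:

```python
import numpy as np

def low_rank_update(Phi, Psi, v):
    """Low-rank layer: u(x_j) = sum_i <psi_i, v> phi_i(x_j), with the L2
    inner product approximated by a mean over the J grid points.
    Phi, Psi : (J, r) arrays of basis-function values on the grid.
    Cost: O(rJ) instead of the O(J^2) of a dense kernel."""
    coeffs = Psi.T @ v / Phi.shape[0]   # the r inner products <psi_i, v>
    return Phi @ coeffs                 # expand back on the grid
```

The equivalence with the dense rank-$r$ kernel matrix is exact, since $\Phi(\Psi^\top v)/J = (\Phi\Psi^\top)v/J$.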

We may also interpret this choice of kernel as directly parameterizing a rank-$r$ operator on $L^2(D;\mathbb{R})$ for $r \in \mathbb{N}$. Indeed, we have

$$u = \sum_{j=1}^{r} \big( \varphi^{(j)} \otimes \psi^{(j)} \big) \, v, \tag{18}$$

which corresponds precisely to applying the SVD of a rank-$r$ operator to the function $v$. Equation (18) makes the vector-valued generalization natural. Assume $m, n \ge 1$ and $\varphi^{(j)} : D \to \mathbb{R}^m$ and $\psi^{(j)} : D \to \mathbb{R}^n$ for $j = 1, \dots, r$; then (18) defines an operator mapping $L^2(D;\mathbb{R}^n) \to L^2(D;\mathbb{R}^m)$ that can be evaluated as

$$u(x) = \sum_{j=1}^{r} \langle \psi^{(j)}, v \rangle_{L^2(D;\mathbb{R}^n)} \, \varphi^{(j)}(x) \qquad \forall x \in D.$$

We again note the linear computational complexity of this parameterization. Finally, we observe that this method can be interpreted as directly imposing a rank-$r$ structure on the kernel matrix. Indeed,

$$K = K_{Jr} K_{rJ},$$

where $K_{Jr}$ and $K_{rJ}$ are $J \times r$ and $r \times J$ block matrices, respectively. This construction is similar to the DeepONet construction of Lu et al. (2019) discussed in Section 5.1, but parameterized to be consistent in function space.

4.3Multipole Graph Neural Operator (MGNO)

A natural extension to directly working with kernels in tensor product form, as in Section 4.2, is to instead consider kernels that can be well approximated by such a form. This assumption gives rise to the fast multipole method (FMM), which employs a multi-scale decomposition of the kernel in order to achieve linear complexity in computing (13); for a detailed discussion see, e.g., (E, 2011, Section 3.2). The FMM can be viewed as a systematic approach to combining the sparse and low-rank approximations to the kernel matrix. Indeed, the kernel matrix is decomposed into different ranges, and a hierarchy of low-rank structures is imposed on the long-range components. We employ this idea to construct hierarchical, multi-scale graphs, without being constrained to particular forms of the kernel. We will elucidate the workings of the FMM through matrix factorization. This approach was first outlined in Li et al. (2020b) and is referred to as the Multipole Graph Neural Operator (MGNO).

The key to the fast multipole method's linear complexity lies in the subdivision of the kernel matrix according to the range of interaction, as shown in Figure 3:

$$K = K_1 + K_2 + \dots + K_L, \tag{19}$$

where $K_\ell$ with $\ell = 1$ corresponds to the shortest-range interaction and $\ell = L$ to the longest-range interaction; more generally, the index $\ell$ is ordered by the range of interaction. While the uniform grids depicted in Figure 3 produce an orthogonal decomposition of the kernel, the decomposition may be generalized to arbitrary discretizations by allowing slight overlap of the ranges.

Figure 3: Hierarchical matrix decomposition

The kernel matrix ๐พ is decomposed with respect to its interaction ranges. ๐พ 1 corresponds to short-range interaction; it is sparse but full-rank. ๐พ 3 corresponds to long-range interaction; it is dense but low-rank.

Multi-scale Discretization.

We produce a hierarchy of ๐ฟ discretizations with a decreasing number of nodes ๐ฝ 1 โ‰ฅ โ€ฆ โ‰ฅ ๐ฝ ๐ฟ and increasing kernel integration radius ๐‘Ÿ 1 โ‰ค โ€ฆ โ‰ค ๐‘Ÿ ๐ฟ . Therefore, the shortest-range interaction ๐พ 1 has a fine resolution but is truncated locally, while the longest-range interaction ๐พ ๐ฟ has a coarse resolution, but covers the entire domain. This is shown pictorially in Figure 3. The number of nodes ๐ฝ 1 โ‰ฅ โ€ฆ โ‰ฅ ๐ฝ ๐ฟ , and the integration radii ๐‘Ÿ 1 โ‰ค โ€ฆ โ‰ค ๐‘Ÿ ๐ฟ are hyperparameter choices and can be picked so that the total computational complexity is linear in ๐ฝ .

A special case of this construction is when the grid is uniform. Then our formulation reduces to the standard fast multipole algorithm and the kernels ๐พ_๐‘™ form an orthogonal decomposition of the full kernel matrix ๐พ. Assuming the underlying discretization { ๐‘ฅ_1 , โ€ฆ , ๐‘ฅ_๐ฝ } โŠ‚ ๐ท is a uniform grid with resolution ๐‘  such that ๐‘ ^๐‘‘ = ๐ฝ, the ๐ฟ multi-level discretizations will be grids with resolution ๐‘ _๐‘™ = ๐‘  / 2^{๐‘™โˆ’1}, and consequently ๐ฝ_๐‘™ = ๐‘ _๐‘™^๐‘‘ = ( ๐‘  / 2^{๐‘™โˆ’1} )^๐‘‘. In this case ๐‘Ÿ_๐‘™ can be chosen as 1 / ๐‘  for ๐‘™ = 1 , โ€ฆ , ๐ฟ. To ensure orthogonality of the discretizations, the fast multipole algorithm sets the integration domains to be ๐ต ( ๐‘ฅ , ๐‘Ÿ_๐‘™ ) โˆ– ๐ต ( ๐‘ฅ , ๐‘Ÿ_{๐‘™โˆ’1} ) for each level ๐‘™ = 2 , โ€ฆ , ๐ฟ, so that the discretization on level ๐‘™ does not overlap with the one on level ๐‘™ โˆ’ 1. Details of this algorithm can be found in, e.g., Greengard and Rokhlin (1997).

Recursive Low-rank Decomposition.

The coarse discretization representation can be understood as recursively applying an inducing points approximation (Quiñonero Candela and Rasmussen, 2005): starting from a discretization with ๐ฝ_1 = ๐ฝ nodes, we impose inducing points of size ๐ฝ_2 , ๐ฝ_3 , โ€ฆ , ๐ฝ_๐ฟ which all admit a low-rank kernel matrix decomposition of the form (15). The original ๐ฝ ร— ๐ฝ kernel matrix ๐พ_๐‘™ is represented by a much smaller ๐ฝ_๐‘™ ร— ๐ฝ_๐‘™ kernel matrix, denoted by ๐พ_{๐‘™,๐‘™}. As shown in Figure 3, ๐พ_1 is full-rank but very sparse while ๐พ_๐ฟ is dense but low-rank. Such structure can be achieved by applying equation (15) recursively to equation (19), leading to the multi-resolution matrix factorization (Kondor et al., 2014):

๐พ โ‰ˆ ๐พ 1 , 1 + ๐พ 1 , 2 โข ๐พ 2 , 2 โข ๐พ 2 , 1 + ๐พ 1 , 2 โข ๐พ 2 , 3 โข ๐พ 3 , 3 โข ๐พ 3 , 2 โข ๐พ 2 , 1 + โ‹ฏ

(20)

where ๐พ 1 , 1

๐พ 1 represents the shortest range, ๐พ 1 , 2 โข ๐พ 2 , 2 โข ๐พ 2 , 1 โ‰ˆ ๐พ 2 , represents the second shortest range, etc. The center matrix ๐พ ๐‘™ , ๐‘™ is a ๐ฝ ๐‘™ ร— ๐ฝ ๐‘™ kernel matrix corresponding to the ๐‘™ -level of the discretization described above. The matrices ๐พ ๐‘™ + 1 , ๐‘™ , ๐พ ๐‘™ , ๐‘™ + 1 are ๐ฝ ๐‘™ + 1 ร— ๐ฝ ๐‘™ and ๐ฝ ๐‘™ ร— ๐ฝ ๐‘™ + 1 wide and long respectively block transition matrices. Denote ๐‘ฃ ๐‘™ โˆˆ ๐‘… ๐ฝ ๐‘™ ร— ๐‘› for the representation of the input ๐‘ฃ at each level of the discretization for ๐‘™

1 , โ€ฆ , ๐ฟ , and ๐‘ข ๐‘™ โˆˆ ๐‘… ๐ฝ ๐‘™ ร— ๐‘› for the output (assuming the inputs and outputs has the same dimension). We define the matrices ๐พ ๐‘™ + 1 , ๐‘™ , ๐พ ๐‘™ , ๐‘™ + 1 as moving the representation ๐‘ฃ ๐‘™ between different levels of the discretization via an integral kernel that we learn. Combining with the truncation idea introduced in subsection 4.1, we define the transition matrices as discretizations of the following integral kernel operators:

๐พ ๐‘™ , ๐‘™ : ๐‘ฃ ๐‘™ โ†ฆ ๐‘ข ๐‘™

โˆซ ๐ต โข ( ๐‘ฅ , ๐‘Ÿ ๐‘™ , ๐‘™ ) ๐œ… ๐‘™ , ๐‘™ โข ( ๐‘ฅ , ๐‘ฆ ) โข ๐‘ฃ ๐‘™ โข ( ๐‘ฆ ) โข d โข ๐‘ฆ

(21)

๐พ ๐‘™ + 1 , ๐‘™ : ๐‘ฃ ๐‘™ โ†ฆ ๐‘ข ๐‘™ + 1

โˆซ ๐ต โข ( ๐‘ฅ , ๐‘Ÿ ๐‘™ + 1 , ๐‘™ ) ๐œ… ๐‘™ + 1 , ๐‘™ โข ( ๐‘ฅ , ๐‘ฆ ) โข ๐‘ฃ ๐‘™ โข ( ๐‘ฆ ) โข d โข ๐‘ฆ

(22)

๐พ ๐‘™ , ๐‘™ + 1 : ๐‘ฃ ๐‘™ + 1 โ†ฆ ๐‘ข ๐‘™

โˆซ ๐ต โข ( ๐‘ฅ , ๐‘Ÿ ๐‘™ , ๐‘™ + 1 ) ๐œ… ๐‘™ , ๐‘™ + 1 โข ( ๐‘ฅ , ๐‘ฆ ) โข ๐‘ฃ ๐‘™ + 1 โข ( ๐‘ฆ ) โข d โข ๐‘ฆ

(23)

where each kernel ๐œ… ๐‘™ , ๐‘™ โ€ฒ : ๐ท ร— ๐ท โ†’ โ„ ๐‘› ร— ๐‘› is parameterized as a neural network and learned.

Figure 4:V-cycle

Left: the multi-level discretization. Right: one V-cycle iteration for the multipole neural operator.

V-cycle Algorithm

We present a V-cycle algorithm, see Figure 4, for efficiently computing (20). It consists of two steps: the downward pass and the upward pass. Denote the representations in the downward and upward pass by ๐‘ฃ ห‡ and ๐‘ฃ ^ respectively. In the downward step, the algorithm starts from the fine discretization representation ๐‘ฃ ห‡_1 and updates it by applying a downward transition ๐‘ฃ ห‡_{๐‘™+1} = ๐พ_{๐‘™+1,๐‘™} ๐‘ฃ ห‡_๐‘™. In the upward step, the algorithm starts from the coarse representation ๐‘ฃ ^_๐ฟ and updates it by applying an upward transition and the center kernel matrix, ๐‘ฃ ^_๐‘™ = ๐พ_{๐‘™,๐‘™+1} ๐‘ฃ ^_{๐‘™+1} + ๐พ_{๐‘™,๐‘™} ๐‘ฃ ห‡_๐‘™. Notice that applying one level downward and upward exactly computes ๐พ_{1,1} + ๐พ_{1,2} ๐พ_{2,2} ๐พ_{2,1}, and a full ๐ฟ-level V-cycle leads to the multi-resolution decomposition (20).

Employing (21)-(23), we use ๐ฟ neural networks ๐œ…_{1,1} , โ€ฆ , ๐œ…_{๐ฟ,๐ฟ} to approximate the kernel operators associated to ๐พ_{๐‘™,๐‘™}, and 2 ( ๐ฟ โˆ’ 1 ) neural networks ๐œ…_{1,2} , ๐œ…_{2,1} , โ€ฆ to approximate the transitions ๐พ_{๐‘™+1,๐‘™} , ๐พ_{๐‘™,๐‘™+1}. Following the iterative architecture (6), we introduce the linear operator ๐‘Š โˆˆ โ„^{๐‘› ร— ๐‘›} (denoting it by ๐‘Š_๐‘™ for each corresponding resolution) to help regularize the iteration, as well as the nonlinear activation function ๐œŽ to increase the expressiveness. Since ๐‘Š acts pointwise (requiring that ๐ฝ remains the same for input and output), we employ it only along with the kernel ๐พ_{๐‘™,๐‘™} and not the transitions. At each layer ๐‘ก = 0 , โ€ฆ , ๐‘‡ โˆ’ 1, we perform a full V-cycle as:

โ€ข

Downward Pass. For ๐‘™ = 1 , โ€ฆ , ๐ฟ :

๐‘ฃ ห‡_{๐‘™+1}^{(๐‘ก+1)} = ๐œŽ ( ๐‘ฃ ^_{๐‘™+1}^{(๐‘ก)} + ๐พ_{๐‘™+1,๐‘™} ๐‘ฃ ห‡_๐‘™^{(๐‘ก+1)} ) (24)

โ€ข

Upward Pass. For ๐‘™ = ๐ฟ , โ€ฆ , 1 :

๐‘ฃ ^_๐‘™^{(๐‘ก+1)} = ๐œŽ ( ( ๐‘Š_๐‘™ + ๐พ_{๐‘™,๐‘™} ) ๐‘ฃ ห‡_๐‘™^{(๐‘ก+1)} + ๐พ_{๐‘™,๐‘™+1} ๐‘ฃ ^_{๐‘™+1}^{(๐‘ก+1)} ) . (25)

Notice that one full pass of the V-cycle algorithm defines a mapping ๐‘ฃ โ†ฆ ๐‘ข .
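To make the two passes concrete, the following minimal NumPy sketch runs one two-level V-cycle with dense random matrices standing in for the learned kernel discretizations; the pointwise operator ๐‘Š_๐‘™ is omitted and the activation defaults to the identity so that the output can be checked against the decomposition (20). All sizes are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Nodes per level (J_1 >= J_2) and dense stand-ins for the learned kernel matrices.
J = [8, 4]
K11 = rng.standard_normal((J[0], J[0]))   # K_{1,1}: short range, fine level
K22 = rng.standard_normal((J[1], J[1]))   # K_{2,2}: long range, coarse level
K21 = rng.standard_normal((J[1], J[0]))   # K_{2,1}: downward (fine -> coarse) transition
K12 = rng.standard_normal((J[0], J[1]))   # K_{1,2}: upward (coarse -> fine) transition

def v_cycle(v, sigma=lambda x: x):
    """One two-level V-cycle: with the identity activation this computes
    (K_{1,1} + K_{1,2} K_{2,2} K_{2,1}) v, the first two terms of the
    multi-resolution decomposition (20)."""
    # Downward pass: restrict the fine representation to the coarse level.
    v_check1 = v
    v_check2 = sigma(K21 @ v_check1)
    # Upward pass: apply the center kernels, then prolongate back up.
    v_hat2 = sigma(K22 @ v_check2)
    v_hat1 = sigma(K11 @ v_check1 + K12 @ v_hat2)
    return v_hat1

v = rng.standard_normal(J[0])
u = v_cycle(v)
```
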

Multi-level Graphs.

We emphasize that we view the discretization { ๐‘ฅ_1 , โ€ฆ , ๐‘ฅ_๐ฝ } โŠ‚ ๐ท as a graph in order to facilitate an efficient implementation through the message passing graph neural network architecture. Since the V-cycle algorithm works at different levels of the discretization, we build multi-level graphs to represent the coarser and finer discretizations. We present and utilize two constructions of multi-level graphs, the orthogonal multipole graph and the generalized random graph. The orthogonal multipole graph is the standard grid construction used in the fast multipole method, which is adapted to a uniform grid; see, e.g., (Greengard and Rokhlin, 1997). In this construction, the decomposition in (19) is orthogonal in that the finest graph only captures the closest range interaction, the second finest graph captures the second closest interaction minus the part already captured in the previous graph, and so on, recursively. In particular, the ranges of interaction for each kernel do not overlap. While this construction is usually efficient, it is limited to uniform grids, which may be a bottleneck for certain applications. Our second construction is the generalized random graph as shown in Figure 3, where the ranges of the kernels are allowed to overlap. The generalized random graph is very flexible as it can be applied on any domain geometry and discretization. Further, it can also be combined with random sampling methods to work on problems where ๐ฝ is very large, or combined with an active learning method to adaptively choose the regions where a finer discretization is needed.

Linear Complexity.

Each term in the decomposition (19) is represented by the kernel matrix ๐พ_{๐‘™,๐‘™} for ๐‘™ = 1 , โ€ฆ , ๐ฟ, and by ๐พ_{๐‘™+1,๐‘™} , ๐พ_{๐‘™,๐‘™+1} for ๐‘™ = 1 , โ€ฆ , ๐ฟ โˆ’ 1, corresponding to the appropriate sub-discretization. Therefore the complexity of the multipole method is โˆ‘_{๐‘™=1}^{๐ฟ} ๐’ช ( ๐ฝ_๐‘™^2 ๐‘Ÿ_๐‘™^๐‘‘ ) + โˆ‘_{๐‘™=1}^{๐ฟโˆ’1} ๐’ช ( ๐ฝ_๐‘™ ๐ฝ_{๐‘™+1} ๐‘Ÿ_๐‘™^๐‘‘ ) = โˆ‘_{๐‘™=1}^{๐ฟ} ๐’ช ( ๐ฝ_๐‘™^2 ๐‘Ÿ_๐‘™^๐‘‘ ). By designing the sub-discretization so that ๐’ช ( ๐ฝ_๐‘™^2 ๐‘Ÿ_๐‘™^๐‘‘ ) โ‰ค ๐’ช ( ๐ฝ ), we can obtain complexity linear in ๐ฝ. For example, when ๐‘‘ = 2, pick ๐‘Ÿ_๐‘™ = 1 / ๐ฝ_๐‘™ and ๐ฝ_๐‘™ = ๐’ช ( 2^{โˆ’๐‘™} ๐ฝ ) such that ๐‘Ÿ_๐ฟ is large enough that there exists a ball of radius ๐‘Ÿ_๐ฟ containing ๐ท. Then clearly โˆ‘_{๐‘™=1}^{๐ฟ} ๐’ช ( ๐ฝ_๐‘™^2 ๐‘Ÿ_๐‘™^๐‘‘ ) = ๐’ช ( ๐ฝ ). By combining with a Nyström approximation, we can obtain ๐’ช ( ๐ฝโ€ฒ ) complexity for some ๐ฝโ€ฒ โ‰ช ๐ฝ.

4.4Fourier Neural Operator (FNO)

Instead of working with a kernel directly on the domain ๐ท, we may consider its representation in Fourier space and directly parameterize it there. This allows us to utilize Fast Fourier Transform (FFT) methods in order to compute the action of the kernel integral operator (13) with almost linear complexity. A similar idea was used in (Nelsen and Stuart, 2021) to construct random features in function space. The method we outline was first described in Li et al. (2020a) and is termed the Fourier Neural Operator (FNO). We note that the theory of Section 4 is designed for general kernels and does not apply to the FNO formulation; however, similar universal approximation results were developed for it in (Kovachki et al., 2021) when the input and output spaces are Hilbert spaces. For simplicity, we will assume that ๐ท = ๐•‹^๐‘‘ is the unit torus and all functions are complex-valued. Let โ„ฑ : ๐ฟ^2 ( ๐ท ; โ„‚^๐‘› ) โ†’ โ„“^2 ( โ„ค^๐‘‘ ; โ„‚^๐‘› ) denote the Fourier transform of a function ๐‘ฃ : ๐ท โ†’ โ„‚^๐‘› and โ„ฑ^{โˆ’1} its inverse. For ๐‘ฃ โˆˆ ๐ฟ^2 ( ๐ท ; โ„‚^๐‘› ) and ๐‘ค โˆˆ โ„“^2 ( โ„ค^๐‘‘ ; โ„‚^๐‘› ), we have

( โ„ฑ โข ๐‘ฃ ) ๐‘— โข ( ๐‘˜ )

โŸจ ๐‘ฃ ๐‘— , ๐œ“ ๐‘˜ โŸฉ ๐ฟ 2 โข ( ๐ท ; โ„‚ ) , ๐‘— โˆˆ { 1 , โ€ฆ , ๐‘› } , ๐‘˜ โˆˆ โ„ค ๐‘‘ ,

( โ„ฑ โˆ’ 1 โข ๐‘ค ) ๐‘— โข ( ๐‘ฅ )

โˆ‘ ๐‘˜ โˆˆ โ„ค ๐‘‘ ๐‘ค ๐‘— โข ( ๐‘˜ ) โข ๐œ“ ๐‘˜ โข ( ๐‘ฅ ) , ๐‘— โˆˆ { 1 , โ€ฆ , ๐‘› } , ๐‘ฅ โˆˆ ๐ท

where, for each ๐‘˜ โˆˆ โ„ค ๐‘‘ , we define

๐œ“ ๐‘˜ โข ( ๐‘ฅ )

๐‘’ 2 โข ๐œ‹ โข ๐‘– โข ๐‘˜ 1 โข ๐‘ฅ 1 โข โ‹ฏ โข ๐‘’ 2 โข ๐œ‹ โข ๐‘– โข ๐‘˜ ๐‘‘ โข ๐‘ฅ ๐‘‘ , ๐‘ฅ โˆˆ ๐ท

with ๐‘–

โˆ’ 1 the imaginary unit. By letting ๐œ… โข ( ๐‘ฅ , ๐‘ฆ )

๐œ… โข ( ๐‘ฅ โˆ’ ๐‘ฆ ) for some ๐œ… : ๐ท โ†’ โ„‚ ๐‘š ร— ๐‘› in (13) and applying the convolution theorem, we find that

๐‘ข โข ( ๐‘ฅ )

โ„ฑ โˆ’ 1 โข ( โ„ฑ โข ( ๐œ… ) โ‹… โ„ฑ โข ( ๐‘ฃ ) ) โข ( ๐‘ฅ ) โˆ€ ๐‘ฅ โˆˆ ๐ท .

We therefore propose to directly parameterize ๐œ… by its Fourier coefficients. We write

๐‘ข ( ๐‘ฅ ) = โ„ฑ^{โˆ’1} ( ๐‘…_๐œ™ โ‹… โ„ฑ ( ๐‘ฃ ) ) ( ๐‘ฅ ) โˆ€ ๐‘ฅ โˆˆ ๐ท (26)

where ๐‘…_๐œ™ is the Fourier transform of a periodic function ๐œ… : ๐ท โ†’ โ„‚^{๐‘š ร— ๐‘›} parameterized by some ๐œ™ โˆˆ โ„^๐‘.

Figure 5: top: The architecture of the neural operators; bottom: Fourier layer.

(a) The full architecture of the neural operator: start from input ๐‘Ž. 1. Lift to a higher dimensional channel space by a neural network ๐’ซ. 2. Apply ๐‘‡ (typically ๐‘‡ = 4) layers of integral operators and activation functions. 3. Project back to the target dimension by a neural network ๐‘„. Output ๐‘ข. (b) Fourier layers: start from input ๐‘ฃ. On top: apply the Fourier transform โ„ฑ; a linear transform ๐‘… on the lower Fourier modes, which also filters out the higher modes; then apply the inverse Fourier transform โ„ฑ^{โˆ’1}. On the bottom: apply a local linear transform ๐‘Š.

For frequency mode ๐‘˜ โˆˆ โ„ค ๐‘‘ , we have ( โ„ฑ โข ๐‘ฃ ) โข ( ๐‘˜ ) โˆˆ โ„‚ ๐‘› and ๐‘… ๐œ™ โข ( ๐‘˜ ) โˆˆ โ„‚ ๐‘š ร— ๐‘› . We pick a finite-dimensional parameterization by truncating the Fourier series at a maximal number of modes ๐‘˜ max

| ๐‘ ๐‘˜ max |

| { ๐‘˜ โˆˆ โ„ค ๐‘‘ : | ๐‘˜ ๐‘— | โ‰ค ๐‘˜ max , ๐‘— ,  for  โข ๐‘—

1 , โ€ฆ , ๐‘‘ } | . This choice improves the empirical performance and sensitivity of the resulting model with respect to the choices of discretization. We thus parameterize ๐‘… ๐œ™ directly as complex-valued ( ๐‘˜ max ร— ๐‘š ร— ๐‘› ) -tensor comprising a collection of truncated Fourier modes and therefore drop ๐œ™ from our notation. In the case where we have real-valued ๐‘ฃ and we want ๐‘ข to also be real-valued, we impose that ๐œ… is real-valued by enforcing conjugate symmetry in the parameterization i.e.

๐‘… โข ( โˆ’ ๐‘˜ ) ๐‘— , ๐‘™

๐‘… โˆ— โข ( ๐‘˜ ) ๐‘— , ๐‘™ โˆ€ ๐‘˜ โˆˆ ๐‘ ๐‘˜ max , ๐‘—

1 , โ€ฆ , ๐‘š , ๐‘™

1 , โ€ฆ , ๐‘› .

We note that the set ๐‘ ๐‘˜ max is not the canonical choice for the low frequency modes of ๐‘ฃ ๐‘ก . Indeed, the low frequency modes are usually defined by placing an upper-bound on the โ„“ 1 -norm of ๐‘˜ โˆˆ โ„ค ๐‘‘ . We choose ๐‘ ๐‘˜ max as above since it allows for an efficient implementation. Figure 5 gives a pictorial representation of an entire Neural Operator architecture employing Fourier layers.

The Discrete Case and the FFT.

Assuming the domain ๐ท is discretized with ๐ฝ โˆˆ โ„• points, we can treat ๐‘ฃ โˆˆ โ„‚ ๐ฝ ร— ๐‘› and โ„ฑ โข ( ๐‘ฃ ) โˆˆ โ„‚ ๐ฝ ร— ๐‘› . Since we convolve ๐‘ฃ with a function which only has ๐‘˜ max Fourier modes, we may simply truncate the higher modes to obtain โ„ฑ โข ( ๐‘ฃ ) โˆˆ โ„‚ ๐‘˜ max ร— ๐‘› . Multiplication by the weight tensor ๐‘… โˆˆ โ„‚ ๐‘˜ max ร— ๐‘š ร— ๐‘› is then

( ๐‘… โ‹… ( โ„ฑ ๐‘ฃ ) )_{๐‘˜,๐‘™} = โˆ‘_{๐‘—=1}^{๐‘›} ๐‘…_{๐‘˜,๐‘™,๐‘—} ( โ„ฑ ๐‘ฃ )_{๐‘˜,๐‘—} , ๐‘˜ = 1 , โ€ฆ , ๐‘˜_max , ๐‘™ = 1 , โ€ฆ , ๐‘š . (27)

When the discretization is uniform with resolution ๐‘ _1 ร— โ‹ฏ ร— ๐‘ _๐‘‘ = ๐ฝ, โ„ฑ can be replaced by the Fast Fourier Transform. For ๐‘ฃ โˆˆ โ„‚^{๐ฝ ร— ๐‘›}, ๐‘˜ = ( ๐‘˜_1 , โ€ฆ , ๐‘˜_๐‘‘ ) โˆˆ โ„ค_{๐‘ _1} ร— โ‹ฏ ร— โ„ค_{๐‘ _๐‘‘}, and ๐‘ฅ = ( ๐‘ฅ_1 , โ€ฆ , ๐‘ฅ_๐‘‘ ) โˆˆ ๐ท, the FFT โ„ฑ ^ and its inverse ( โ„ฑ ^ )^{โˆ’1} are defined as

( โ„ฑ ^ ๐‘ฃ )_๐‘™ ( ๐‘˜ ) = โˆ‘_{๐‘ฅ_1=0}^{๐‘ _1โˆ’1} โ‹ฏ โˆ‘_{๐‘ฅ_๐‘‘=0}^{๐‘ _๐‘‘โˆ’1} ๐‘ฃ_๐‘™ ( ๐‘ฅ_1 , โ€ฆ , ๐‘ฅ_๐‘‘ ) ๐‘’^{โˆ’2๐‘–๐œ‹ โˆ‘_{๐‘—=1}^{๐‘‘} ๐‘ฅ_๐‘— ๐‘˜_๐‘— / ๐‘ _๐‘—} ,

( ( โ„ฑ ^ )^{โˆ’1} ๐‘ฃ )_๐‘™ ( ๐‘ฅ ) = โˆ‘_{๐‘˜_1=0}^{๐‘ _1โˆ’1} โ‹ฏ โˆ‘_{๐‘˜_๐‘‘=0}^{๐‘ _๐‘‘โˆ’1} ๐‘ฃ_๐‘™ ( ๐‘˜_1 , โ€ฆ , ๐‘˜_๐‘‘ ) ๐‘’^{2๐‘–๐œ‹ โˆ‘_{๐‘—=1}^{๐‘‘} ๐‘ฅ_๐‘— ๐‘˜_๐‘— / ๐‘ _๐‘—}

for ๐‘™ = 1 , โ€ฆ , ๐‘›. In this case, the set of truncated modes becomes

๐‘_{๐‘˜_max} = { ( ๐‘˜_1 , โ€ฆ , ๐‘˜_๐‘‘ ) โˆˆ โ„ค_{๐‘ _1} ร— โ‹ฏ ร— โ„ค_{๐‘ _๐‘‘} โˆฃ ๐‘˜_๐‘— โ‰ค ๐‘˜_{max,๐‘—} or ๐‘ _๐‘— โˆ’ ๐‘˜_๐‘— โ‰ค ๐‘˜_{max,๐‘—} , for ๐‘— = 1 , โ€ฆ , ๐‘‘ } .

When implemented, ๐‘… is treated as a ( ๐‘  1 ร— โ‹ฏ ร— ๐‘  ๐‘‘ ร— ๐‘š ร— ๐‘› ) -tensor and the above definition of ๐‘ ๐‘˜ max corresponds to the โ€œcornersโ€ of ๐‘… , which allows for a straight-forward parallel implementation of (27) via matrix-vector multiplication. In practice, we have found the choice ๐‘˜ max , ๐‘— roughly around 1 3 to 2 3 of the maximum number of Fourier modes in the Fast Fourier Transform of the grid valuation of the input function provides desirable performance. In our empirical studies, we set ๐‘˜ max , ๐‘—

12 which yields ๐‘˜ max

12 ๐‘‘ parameters per channel, to be sufficient for all the tasks that we consider.
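The โ€œcornerโ€ structure of ๐‘_{๐‘˜_max} can be made explicit with a short sketch that builds the boolean mask of retained modes over the ( ๐‘ _1 ร— โ‹ฏ ร— ๐‘ _๐‘‘ ) coefficient tensor; the sizes below are illustrative.

```python
import numpy as np

def corner_mask(shape, k_max):
    """Boolean mask selecting Z_{k_max}: modes with k_j <= k_max_j or
    s_j - k_j <= k_max_j in every dimension, i.e. the "corners" of the
    (s_1 x ... x s_d) Fourier coefficient tensor."""
    mask = np.ones(shape, dtype=bool)
    for axis, (s, km) in enumerate(zip(shape, k_max)):
        k = np.arange(s)
        keep = (k <= km) | (s - k <= km)   # low positive and low negative modes
        # Broadcast the 1D keep-vector along this axis and intersect.
        mask &= keep.reshape([-1 if a == axis else 1 for a in range(len(shape))])
    return mask

# A 16x16 coefficient tensor keeping k_max = (2, 2): four corner blocks.
mask = corner_mask((16, 16), (2, 2))
```
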

Choices for ๐‘… .

In general, ๐‘… can be defined to depend on ( โ„ฑ โข ๐‘Ž ) , the Fourier transform of the input ๐‘Ž โˆˆ ๐’œ to parallel our construction (8). Indeed, we can define ๐‘… ๐œ™ : โ„ค ๐‘‘ ร— โ„‚ ๐‘‘ ๐‘Ž โ†’ โ„‚ ๐‘š ร— ๐‘› as a parametric function that maps ( ๐‘˜ , ( โ„ฑ โข ๐‘Ž ) โข ( ๐‘˜ ) ) to the values of the appropriate Fourier modes. We have experimented with the following parameterizations of ๐‘… ๐œ™ :

โ€ข

Direct. Define the parameters ๐œ™ ๐‘˜ โˆˆ โ„‚ ๐‘š ร— ๐‘› for each wave number ๐‘˜ :

๐‘… ๐œ™ โข ( ๐‘˜ , ( โ„ฑ โข ๐‘Ž ) โข ( ๐‘˜ ) ) := ๐œ™ ๐‘˜ .

โ€ข

Linear. Define the parameters ๐œ™ ๐‘˜ 1 โˆˆ โ„‚ ๐‘š ร— ๐‘› ร— ๐‘‘ ๐‘Ž , ๐œ™ ๐‘˜ 2 โˆˆ โ„‚ ๐‘š ร— ๐‘› for each wave number ๐‘˜ :

๐‘… ๐œ™ โข ( ๐‘˜ , ( โ„ฑ โข ๐‘Ž ) โข ( ๐‘˜ ) ) := ๐œ™ ๐‘˜ 1 โข ( โ„ฑ โข ๐‘Ž ) โข ( ๐‘˜ ) + ๐œ™ ๐‘˜ 2 .

โ€ข

Feed-forward neural network. Let ฮฆ ๐œ™ : โ„ค ๐‘‘ ร— โ„‚ ๐‘‘ ๐‘Ž โ†’ โ„‚ ๐‘š ร— ๐‘› be a neural network with parameters ๐œ™ :

๐‘… ๐œ™ โข ( ๐‘˜ , ( โ„ฑ โข ๐‘Ž ) โข ( ๐‘˜ ) ) := ฮฆ ๐œ™ โข ( ๐‘˜ , ( โ„ฑ โข ๐‘Ž ) โข ( ๐‘˜ ) ) .

We find that the linear parameterization has a similar performance to the direct parameterization above, however, it is not as efficient both in terms of computational complexity and the number of parameters required. On the other hand, we find that the feed-forward neural network parameterization has a worse performance. This is likely due to the discrete structure of the space โ„ค ๐‘‘ ; numerical evidence suggests neural networks are not adept at handling this structure. Our experiments in this work focus on the direct parameterization presented above.

Invariance to Discretization.

The Fourier layers are discretization-invariant because they can learn from and evaluate functions which are discretized in an arbitrary way. Since parameters are learned directly in Fourier space, resolving the functions in physical space simply amounts to projecting onto the basis elements ๐‘’^{2๐œ‹๐‘– โŸจ ๐‘ฅ , ๐‘˜ โŸฉ}; these are well-defined everywhere on โ„^๐‘‘.
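A minimal sketch of this point: given a set of (hypothetical) learned Fourier coefficients, the represented function can be evaluated at arbitrary points of the domain, not just at the nodes of the grid used during training.

```python
import numpy as np

def eval_fourier(coeffs, modes, points):
    """Evaluate u(x) = sum_k c_k e^{2 pi i <x, k>} at arbitrary points.
    coeffs: (K,) complex; modes: (K, d) integer wave numbers; points: (N, d).
    The basis functions are defined everywhere, so the representation is
    independent of any particular discretization."""
    phases = np.exp(2j * np.pi * points @ modes.T)   # (N, K) basis evaluations
    return phases @ coeffs

# Hypothetical learned coefficients for modes k = -1, 0, 1 in 1D; the
# conjugate-symmetric choice makes the represented function real-valued.
modes = np.array([[-1], [0], [1]])
coeffs = np.array([0.5j, 1.0, -0.5j])     # represents u(x) = 1 + sin(2 pi x)
pts = np.random.default_rng(0).random((5, 1))   # arbitrary off-grid points
u = eval_fourier(coeffs, modes, pts)
```
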

Quasi-linear Complexity.

The weight tensor ๐‘… contains ๐‘˜ max < ๐ฝ modes, so the inner multiplication has complexity ๐’ช โข ( ๐‘˜ max ) . Therefore, the majority of the computational cost lies in computing the Fourier transform โ„ฑ โข ( ๐‘ฃ ) and its inverse. General Fourier transforms have complexity ๐’ช โข ( ๐ฝ 2 ) , however, since we truncate the series the complexity is in fact ๐’ช โข ( ๐ฝ โข ๐‘˜ max ) , while the FFT has complexity ๐’ช โข ( ๐ฝ โข log โก ๐ฝ ) . Generally, we have found using FFTs to be very efficient, however, a uniform discretization is required.

Non-uniform and Non-periodic Geometry.

The Fourier neural operator model is defined based on Fourier transform operations accompanied by local residual operations and potentially additive bias function terms. These operations are defined on general geometries, function spaces, and choices of discretization; they are not limited to rectangular domains, periodic functions, or uniform grids. In this paper, we instantiate these operations on uniform grids and periodic functions in order to develop fast implementations that enjoy spectral convergence and utilize methods such as the fast Fourier transform. In order to maintain a fast and memory-efficient method, our implementation of the Fourier neural operator relies on the fast Fourier transform, which is only defined on uniform mesh discretizations of ๐ท = ๐•‹^๐‘‘, or for functions on the square satisfying homogeneous Dirichlet (fast Fourier sine transform) or homogeneous Neumann (fast Fourier cosine transform) boundary conditions. However, the fast implementation of the Fourier neural operator can be applied in more general geometries via Fourier continuation. Given any compact manifold ๐ท = โ„ณ, we can always embed it into a periodic cube (torus),

๐‘– : โ„ณ โ†’ ๐•‹^๐‘‘

where the regular FFT can be applied. Conventionally, in numerical analysis applications, the embedding ๐‘– is defined through a continuous extension by fitting polynomials (Bruno et al., 2007). However, in the Fourier neural operator, the idea can be applied simply by padding the input with zeros. The loss is computed only on the original space during training. The Fourier neural operator will automatically generate a smooth extension to the padded domain in the output space.
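A one-dimensional NumPy sketch of this zero-padding idea: extend a non-periodic signal by zeros to embed it into a larger torus, apply an FFT-based spectral multiplication there, and crop the output back to the original domain, where the loss would be computed. The weights and sizes are illustrative stand-ins for learned quantities.

```python
import numpy as np

def padded_fourier_layer(v, R, pad):
    """Apply a spectral layer to a non-periodic signal by zero-padding the
    input (a simple Fourier continuation), then cropping the output back
    to the original domain."""
    v_ext = np.concatenate([v, np.zeros(pad)])      # embed into a larger torus
    v_hat = np.fft.rfft(v_ext, norm="forward")
    u_hat = np.zeros_like(v_hat)
    k_max = R.shape[0]
    u_hat[:k_max] = R * v_hat[:k_max]               # weight the low modes only
    u_ext = np.fft.irfft(u_hat, n=v_ext.shape[0], norm="forward")
    return u_ext[:v.shape[0]]                       # loss is computed here only

rng = np.random.default_rng(0)
R = rng.standard_normal(8) + 1j * rng.standard_normal(8)
v = np.linspace(0.0, 1.0, 48)                      # non-periodic ramp input
u = padded_fourier_layer(v, R, pad=16)
```
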

4.5Summary

We summarize the main computational approaches presented in this section and their complexity:

โ€ข

GNO: Subsample ๐ฝโ€ฒ points from the ๐ฝ-point discretization and compute the truncated integral

๐‘ข ( ๐‘ฅ ) = โˆซ_{๐ต ( ๐‘ฅ , ๐‘Ÿ )} ๐œ… ( ๐‘ฅ , ๐‘ฆ ) ๐‘ฃ ( ๐‘ฆ ) d๐‘ฆ (28)

at ๐’ช ( ๐ฝ ๐ฝโ€ฒ ) complexity.

โ€ข

LNO: Decompose the kernel function in tensor product form and compute

๐‘ข ( ๐‘ฅ ) = โˆ‘_{๐‘—=1}^{๐‘Ÿ} โŸจ ๐œ“^{(๐‘—)} , ๐‘ฃ โŸฉ ๐œ‘^{(๐‘—)} ( ๐‘ฅ ) (29)

at ๐’ช ( ๐ฝ ) complexity.

โ€ข

MGNO: Compute a multi-scale decomposition of the kernel,

๐พ = ๐พ_{1,1} + ๐พ_{1,2} ๐พ_{2,2} ๐พ_{2,1} + ๐พ_{1,2} ๐พ_{2,3} ๐พ_{3,3} ๐พ_{3,2} ๐พ_{2,1} + โ‹ฏ ,

๐‘ข ( ๐‘ฅ ) = ( ๐พ ๐‘ฃ ) ( ๐‘ฅ ) (30)

at ๐’ช ( ๐ฝ ) complexity.

โ€ข

FNO: Parameterize the kernel in the Fourier domain and compute it using the FFT:

๐‘ข ( ๐‘ฅ ) = โ„ฑ^{โˆ’1} ( ๐‘…_๐œ™ โ‹… โ„ฑ ( ๐‘ฃ ) ) ( ๐‘ฅ ) (31)

at ๐’ช ( ๐ฝ log ๐ฝ ) complexity.

5Neural Operators and Other Deep Learning Models

In this section, we discuss recent related methods, in particular DeepONets, and demonstrate that their architectures are subsumed by generic neural operators when the neural operators are parametrized inconsistently (subsection 5.1). When only applied and queried on fixed grids, we show that neural operator architectures subsume neural networks and, furthermore, we show how transformers are special cases of neural operators (subsection 5.2).

5.1DeepONets

We will now draw a parallel between the recently proposed DeepONet architecture in Lu et al. (2019), a map from finite-dimensional spaces to function spaces, and the neural operator framework. We will show that if we use a particular, point-wise parameterization of the first kernel in a NO and discretize the integral operator, we obtain a DeepONet. However, such a parameterization breaks the notion of discretization invariance because the number of parameters depends on the discretization of the input function. Therefore such a model cannot be applied to arbitrarily discretized functions and its number of parameters goes to infinity as we take the limit to the continuum. This phenomenon is similar to our discussion in subsection 4.1 where a NO parametrization which is inconsistent in function space and breaks discretization invariance yields a CNN. We propose a modification to the DeepONet architecture, based on the idea of the LNO, which addresses this issue and gives a discretization invariant neural operator.

Proposition 5

A neural operator with a point-wise parameterized first kernel and discretized integral operators yields a DeepONet.

Proof  We work with (11) where we choose ๐‘Š_0 = 0 and denote ๐‘_0 by ๐‘. For simplicity, we will consider only real-valued functions, i.e., ๐‘‘_๐‘Ž = ๐‘‘_๐‘ข = 1, and set ๐‘‘_{๐‘ฃ_0} = ๐‘‘_{๐‘ฃ_1} = ๐‘› and ๐‘‘_{๐‘ฃ_2} = ๐‘ for some ๐‘› , ๐‘ โˆˆ โ„•. Define ๐’ซ : โ„ โ†’ โ„^๐‘› by ๐’ซ ( ๐‘ฅ ) = ( ๐‘ฅ , โ€ฆ , ๐‘ฅ ) and ๐’ฌ : โ„^๐‘ โ†’ โ„ by ๐’ฌ ( ๐‘ฅ ) = ๐‘ฅ_1 + โ‹ฏ + ๐‘ฅ_๐‘. Furthermore let ๐œ…^{(1)} : ๐ทโ€ฒ ร— ๐ท โ†’ โ„^{๐‘ ร— ๐‘›} be defined by some ๐œ…_{๐‘—๐‘˜}^{(1)} : ๐ทโ€ฒ ร— ๐ท โ†’ โ„ for ๐‘— = 1 , โ€ฆ , ๐‘ and ๐‘˜ = 1 , โ€ฆ , ๐‘›. Similarly let ๐œ…^{(0)} : ๐ท ร— ๐ท โ†’ โ„^{๐‘› ร— ๐‘›} be given as ๐œ…^{(0)} ( ๐‘ฅ , ๐‘ฆ ) = diag ( ๐œ…_1^{(0)} ( ๐‘ฅ , ๐‘ฆ ) , โ€ฆ , ๐œ…_๐‘›^{(0)} ( ๐‘ฅ , ๐‘ฆ ) ) for some ๐œ…_1^{(0)} , โ€ฆ , ๐œ…_๐‘›^{(0)} : ๐ท ร— ๐ท โ†’ โ„. Then (11) becomes

( ๐’ข ๐œƒ โข ( ๐‘Ž ) ) โข ( ๐‘ฅ )

โˆ‘ ๐‘˜

1 ๐‘ โˆ‘ ๐‘—

1 ๐‘› โˆซ ๐ท ๐œ… ๐‘— โข ๐‘˜ ( 1 ) โข ( ๐‘ฅ , ๐‘ฆ ) โข ๐œŽ โข ( โˆซ ๐ท ๐œ… ๐‘— ( 0 ) โข ( ๐‘ฆ , ๐‘ง ) โข ๐‘Ž โข ( ๐‘ง ) โข d โข ๐‘ง + ๐‘ ๐‘— โข ( ๐‘ฆ ) ) โข d โข ๐‘ฆ

where ๐‘ โข ( ๐‘ฆ )

( ๐‘ 1 โข ( ๐‘ฆ ) , โ€ฆ , ๐‘ ๐‘› โข ( ๐‘ฆ ) ) for some ๐‘ 1 , โ€ฆ , ๐‘ ๐‘› : ๐ท โ†’ โ„ . Let ๐‘ฅ 1 , โ€ฆ , ๐‘ฅ ๐‘ž โˆˆ ๐ท be the points at which the input function ๐‘Ž is evaluated and denote by ๐‘Ž ~

( ๐‘Ž โข ( ๐‘ฅ 1 ) , โ€ฆ , ๐‘Ž โข ( ๐‘ฅ ๐‘ž ) ) โˆˆ โ„ ๐‘ž the vector of evaluations. Choose ๐œ… ๐‘— ( 0 ) โข ( ๐‘ฆ , ๐‘ง )

๐Ÿ™ โข ( ๐‘ฆ ) โข ๐‘ค ๐‘— โข ( ๐‘ง ) for some ๐‘ค 1 , โ€ฆ , ๐‘ค ๐‘› : ๐ท โ†’ โ„ where ๐Ÿ™ denotes the constant function taking the value one. Let

๐‘ค ๐‘— โข ( ๐‘ฅ ๐‘™ )

๐‘ž | ๐ท | โข ๐‘ค ~ ๐‘— โข ๐‘™

for ๐‘—

1 , โ€ฆ , ๐‘› and ๐‘™

1 , โ€ฆ , ๐‘ž where ๐‘ค ~ ๐‘— โข ๐‘™ โˆˆ โ„ are some constants. Furthermore let ๐‘ ๐‘— โข ( ๐‘ฆ )

๐‘ ~ ๐‘— โข ๐Ÿ™ โข ( ๐‘ฆ ) for some constants ๐‘ ~ ๐‘— โˆˆ โ„ . Then the Monte Carlo approximation of the inner-integral yields

( ๐’ข ๐œƒ โข ( ๐‘Ž ) ) โข ( ๐‘ฅ )

โˆ‘ ๐‘˜

1 ๐‘ โˆ‘ ๐‘—

1 ๐‘› โˆซ ๐ท ๐œ… ๐‘— โข ๐‘˜ ( 1 ) โข ( ๐‘ฅ , ๐‘ฆ ) โข ๐œŽ โข ( โŸจ ๐‘ค ~ ๐‘— , ๐‘Ž ~ โŸฉ โ„ ๐‘ž + ๐‘ ~ ๐‘— ) โข ๐Ÿ™ โข ( ๐‘ฆ ) โข d โข ๐‘ฆ

where ๐‘ค ~ ๐‘—

( ๐‘ค ~ ๐‘— โข 1 , โ€ฆ , ๐‘ค ~ ๐‘— โข ๐‘ž ) . Choose ๐œ… ๐‘— โข ๐‘˜ ( 1 ) โข ( ๐‘ฅ , ๐‘ฆ )

( ๐‘ ~ ๐‘— โข ๐‘˜ / | ๐ท | ) โข ๐œ‘ ๐‘˜ โข ( ๐‘ฅ ) โข ๐Ÿ™ โข ( ๐‘ฆ ) for some constants ๐‘ ~ ๐‘— โข ๐‘˜ โˆˆ โ„ and functions ๐œ‘ 1 , โ€ฆ , ๐œ‘ ๐‘ : ๐ท โ€ฒ โ†’ โ„ . Then we obtain

( ๐’ข ๐œƒ โข ( ๐‘Ž ) ) โข ( ๐‘ฅ )

โˆ‘ ๐‘˜

1 ๐‘ ( โˆ‘ ๐‘—

1 ๐‘› ๐‘ ~ ๐‘— โข ๐‘˜ โข ๐œŽ โข ( โŸจ ๐‘ค ~ ๐‘— , ๐‘Ž ~ โŸฉ โ„ ๐‘ž + ๐‘ ~ ๐‘— ) ) โข ๐œ‘ ๐‘˜ โข ( ๐‘ฅ )

โˆ‘ ๐‘˜

1 ๐‘ ๐บ ๐‘˜ โข ( ๐‘ฃ ~ ) โข ๐œ‘ ๐‘˜ โข ( ๐‘ฅ )

(32)

where ๐บ ๐‘˜ : โ„ ๐‘ž โ†’ โ„ can be viewed as the components of a single hidden layer neural network ๐บ : โ„ ๐‘ž โ†’ โ„ ๐‘ with parameters ๐‘ค ~ ๐‘— โข ๐‘™ , ๐‘ ~ ๐‘— , ๐‘ ~ ๐‘— โข ๐‘˜ . The set of maps ๐œ‘ 1 , โ€ฆ , ๐œ‘ ๐‘ form the trunk net while ๐บ is the branch net of a DeepONet. Our construction above can clearly be generalized to yield arbitrary depth branch nets by adding more kernel integral layers, and, similarly, the trunk net can be chosen arbitrarily deep by parameterizing each ๐œ‘ ๐‘˜ as a deep neural network.  

Since the mappings ๐‘ค 1 , โ€ฆ , ๐‘ค ๐‘› are point-wise parametrized based on the input values ๐‘Ž ~ , it is clear that the construction in the above proof is not discretization invariant. In order to make this model a discretization invariant neural operator, we propose DeepONet-Operator where, for each ๐‘— , we replace the inner product in the finite dimensional space โŸจ ๐‘ค ~ ๐‘— , ๐‘Ž ~ โŸฉ โ„ ๐‘ž with an appropriate inner product in the function space โŸจ ๐‘ค ๐‘— , ๐‘Ž โŸฉ .

( ๐’ข ๐œƒ โข ( ๐‘Ž ) ) โข ( ๐‘ฅ )

โˆ‘ ๐‘˜

1 ๐‘ ( โˆ‘ ๐‘—

1 ๐‘› ๐‘ ~ ๐‘— โข ๐‘˜ โข ๐œŽ โข ( โŸจ ๐‘ค ๐‘— , ๐‘Ž โŸฉ + ๐‘ ~ ๐‘— ) ) โข ๐œ‘ ๐‘˜ โข ( ๐‘ฅ )

(33)

This operation is a projection of function ๐‘Ž onto ๐‘ค ๐‘— . Parametrizing ๐‘ค ๐‘— by neural networks makes DeepONet-Operator a discretization invariant model.

There are other ways in which the issue can be resolved for DeepONets. For example, by fixing the set of points on which the input function is evaluated independently of its discretization, by taking local spatial averages as in (Lanthaler et al., 2021), or, more generally, by taking a set of linear functionals on ๐’œ as input to a finite-dimensional branch neural network (a generalization of DeepONet-Operator) as in the specific PCA-based variant of DeepONet in (De Hoop et al., 2022). We demonstrate numerically in Section 7 that, when applied in the standard way, the error incurred by DeepONets grows with the discretization of ๐‘Ž while it remains constant for neural operators.

Linear Approximation and Nonlinear Approximation.

We point out that parametrizations of the form (32) fall within the class of linear approximation methods since the nonlinear space ๐’ข^โ€  ( ๐’œ ) is approximated by the linear space span { ๐œ‘_1 , โ€ฆ , ๐œ‘_๐‘ } (DeVore, 1998). The quality of the best possible linear approximation to a nonlinear space is given by the Kolmogorov ๐‘›-width, where ๐‘› is the dimension of the linear space used in the approximation (Pinkus, 1985). The rate of decay of the ๐‘›-width as a function of ๐‘› quantifies how well the linear space approximates the nonlinear one. It is well known that for some problems, such as the flow maps of advection-dominated PDEs, the ๐‘›-widths decay very slowly; hence a very large ๐‘› is needed for a good approximation for such problems (Cohen and DeVore, 2015). This can be limiting in practice as more parameters are needed in order to describe more basis functions ๐œ‘_๐‘—, and therefore more data is needed to fit these parameters.

On the other hand, we point out that parametrizations of the form (6), and the particular case (11), constitute (in general) a form of nonlinear approximation. The benefits of nonlinear approximation are well understood in the setting of function approximation, see e.g. (DeVore, 1998); however, the theory for the operator setting is still in its infancy (Bonito et al., 2020; Cohen et al., 2020). We observe numerically in Section 7 that nonlinear parametrizations such as (11) outperform linear ones such as DeepONets or the low-rank method introduced in Section 4.2 when implemented with similar numbers of parameters. We acknowledge, however, that the theory presented in Section 8 is based on the reduction to a linear approximation and therefore does not capture the benefits of the nonlinear approximation. Furthermore, in practice, we have found that deeper architectures than (11) (usually four to five layers in the experiments of Section 7) perform better. The benefits of depth are likewise not captured by our analysis in Section 8. We leave further theoretical studies of approximation properties as an interesting avenue of investigation for future work.

Function Representation.

An important difference between neural operators, introduced here, PCA-based operator approximation, introduced in Bhattacharya et al. (2020) and DeepONets, introduced in Lu et al. (2019), is the manner in which the output function space is finite-dimensionalized. Neural operators as implemented in this paper typically use the same finite-dimensionalization in both the input and output function spaces; however different variants of the neural operator idea use different finite-dimensionalizations. As discussed in Section 4, the GNO and MGNO are finite-dimensionalized using pointwise values as the nodes of graphs; the FNO is finite-dimensionalized in Fourier space, requiring finite-dimensionalization on a uniform grid in real space; the Low-rank neural operator is finite-dimensionalized on a product space formed from the Barron space of neural networks. The PCA approach finite-dimensionalizes in the span of PCA modes. DeepONet, on the other hand, uses different input and output space finite-dimensionalizations; in its basic form it uses pointwise (grid) values on the input (branch net) whilst its output (trunk net) is represented as a function in Barron space. There also exist POD-DeepONet variants that finite-dimensionalize the output in the span of PCA modes Lu et al. (2021b), bringing them closer to the method introduced in Bhattacharya et al. (2020), but with a different finite-dimensionalization of the input space.

As is widely quoted, โ€œall models are wrong, but some are usefulโ€ Box (1976). For operator approximation, each finite-dimensionalization has its own induced biases and limitations, and therefore works best on a subset of problems. Finite-dimensionalization introduces a trade-off between flexibility and representation power of the resulting approximate architecture. The Barron space representation (Low-rank operator and DeepONet) is usually the most generic and flexible as it is widely applicable. However this can lead to induced biases and reduced representation power on specific problems; in practice, DeepONet sometimes needs problem-specific feature engineering and architecture choices as studied in Lu et al. (2021b). We conjecture that these problem-specific features compensate for the induced bias and reduced representation power that the basic form of the method (Lu et al., 2019) sometimes exhibits. The PCA (PCA operator, POD-DeepONet) and graph-based (GNO, MGNO) discretizations are also generic, but more specific compared to the DeepONet representation; for this reason POD-DeepONet can outperform DeepONet on some problems (Lu et al., 2021b). On the other hand, the uniform grid-based representation FNO is the most specific of all those operator approximators considered in this paper: in its basic form it applies by discretizing the input functions, assumed to be specified on a periodic domain, on a uniform grid. As shown in Section 7 FNO usually works out of the box on such problems. But, as a trade-off, it requires substantial additional treatments to work well on non-uniform geometries, such as extension, interpolation (explored in Lu et al. (2021b)), and Fourier continuation (Bruno et al., 2007).

5.2Transformers as a Special Case of Neural Operators

We will now show that our neural operator framework can be viewed as a continuum generalization of the popular transformer architecture (Vaswani et al., 2017), which has been extremely successful in natural language processing tasks (Devlin et al., 2018; Brown et al., 2020) and, more recently, is becoming a popular choice in computer vision tasks (Dosovitskiy et al., 2020). The parallel stems from the fact that we can view sequences of arbitrary length as arbitrary discretizations of functions. Indeed, in the context of natural language processing, we may think of a sentence as a "word"-valued function on, for example, the domain $[0, 1]$. Assuming our function is linked to a sentence with a fixed semantic meaning, adding or removing words from the sentence simply corresponds to refining or coarsening the discretization of $[0, 1]$. We will now make this intuition precise in the proof of the following statement.

Proposition 6

The attention mechanism in transformer models is a special case of a neural operator layer.

Proof  We will show that, by making a particular choice of the nonlinear integral kernel operator (9) and discretizing the integral by a Monte-Carlo approximation, a neural operator layer reduces to a pre-normalized, single-headed attention transformer block as originally proposed in Vaswani et al. (2017). For simplicity, we assume $d_{v_t} = n \in \mathbb{N}$ and $D_t = D$ for every $t = 0, \dots, T$, that the bias term is zero, and that $W = I$ is the identity. Furthermore, to simplify notation, we drop the layer index $t$ from (10) and, employing (9), obtain

$$u(x) = \sigma \left( v(x) + \int_D \kappa_v \big( x, y, v(x), v(y) \big) \, v(y) \, \mathrm{d}y \right) \qquad \forall x \in D, \tag{34}$$

a single layer of the neural operator, where $v : D \to \mathbb{R}^n$ is the input function to the layer and $u : D \to \mathbb{R}^n$ denotes the output function. We use the notation $\kappa_v$ to indicate that the kernel may depend on the entirety of the function $v$ as well as on its pointwise values $v(x)$ and $v(y)$. While this is not explicitly done in (9), it is a straightforward generalization. We now pick a specific form for the kernel: we assume $\kappa_v : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}^{n \times n}$ does not explicitly depend on the spatial variables $(x, y)$ but only on the input pair $(v(x), v(y))$. Furthermore, we let

๐œ… ๐‘ฃ โข ( ๐‘ฃ โข ( ๐‘ฅ ) , ๐‘ฃ โข ( ๐‘ฆ ) )

๐‘” ๐‘ฃ โข ( ๐‘ฃ โข ( ๐‘ฅ ) , ๐‘ฃ โข ( ๐‘ฆ ) ) โข ๐‘…

where ๐‘… โˆˆ โ„ ๐‘› ร— ๐‘› is a matrix of free parameters i.e. its entries are concatenated in ๐œƒ so they are learned, and ๐‘” ๐‘ฃ : โ„ ๐‘› ร— โ„ ๐‘› โ†’ โ„ is defined as

๐‘” ๐‘ฃ โข ( ๐‘ฃ โข ( ๐‘ฅ ) , ๐‘ฃ โข ( ๐‘ฆ ) )

( โˆซ ๐ท exp โข ( โŸจ ๐ด โข ๐‘ฃ โข ( ๐‘  ) , ๐ต โข ๐‘ฃ โข ( ๐‘ฆ ) โŸฉ ๐‘š ) โข d โข ๐‘  ) โˆ’ 1 โข exp โข ( โŸจ ๐ด โข ๐‘ฃ โข ( ๐‘ฅ ) , ๐ต โข ๐‘ฃ โข ( ๐‘ฆ ) โŸฉ ๐‘š ) .

Here ๐ด , ๐ต โˆˆ โ„ ๐‘š ร— ๐‘› are again matrices of free parameters, ๐‘š โˆˆ โ„• is a hyperparameter, and โŸจ โ‹… , โ‹… โŸฉ is the Euclidean inner-product on โ„ ๐‘š . Putting this together, we find that (34) becomes

๐‘ข โข ( ๐‘ฅ )

๐œŽ โข ( ๐‘ฃ โข ( ๐‘ฅ ) + โˆซ ๐ท exp โข ( โŸจ ๐ด โข ๐‘ฃ โข ( ๐‘ฅ ) , ๐ต โข ๐‘ฃ โข ( ๐‘ฆ ) โŸฉ ๐‘š ) โˆซ ๐ท exp โข ( โŸจ ๐ด โข ๐‘ฃ โข ( ๐‘  ) , ๐ต โข ๐‘ฃ โข ( ๐‘ฆ ) โŸฉ ๐‘š ) โข d โข ๐‘  โข ๐‘… โข ๐‘ฃ โข ( ๐‘ฆ ) โข d โข ๐‘ฆ ) โˆ€ ๐‘ฅ โˆˆ ๐ท .

(35)

Equation (35) can be thought of as the continuum limit of a transformer block. To see this, we will discretize to obtain the usual transformer block.

To that end, let $\{x_1, \dots, x_k\} \subset D$ be a uniformly-sampled, $k$-point discretization of $D$ and write $v_j = v(x_j) \in \mathbb{R}^n$ and $u_j = u(x_j) \in \mathbb{R}^n$ for $j = 1, \dots, k$. Approximating the inner integral in (35) by Monte-Carlo, we have

$$\int_D \exp \big( \langle A v(s), B v(y) \rangle_m \big) \, \mathrm{d}s \approx \frac{|D|}{k} \sum_{l=1}^{k} \exp \big( \langle A v_l, B v(y) \rangle_m \big).$$

Plugging this into (35) and using the same approximation for the outer integral yields

$$u_j = \sigma \left( v_j + \sum_{q=1}^{k} \frac{\exp \big( \langle A v_j, B v_q \rangle_m \big)}{\sum_{l=1}^{k} \exp \big( \langle A v_l, B v_q \rangle_m \big)} \, R \, v_q \right), \qquad j = 1, \dots, k. \tag{36}$$

Equation (36) can be viewed as a Nyström approximation of (35). Define the vectors $z_q \in \mathbb{R}^k$ by

$$z_q = \frac{1}{\sqrt{m}} \big( \langle A v_1, B v_q \rangle, \dots, \langle A v_k, B v_q \rangle \big), \qquad q = 1, \dots, k.$$

Define $S : \mathbb{R}^k \to \Delta_k$, where $\Delta_k$ denotes the $k$-dimensional probability simplex, as the softmax function

$$S(w) = \left( \frac{\exp(w_1)}{\sum_{j=1}^{k} \exp(w_j)}, \dots, \frac{\exp(w_k)}{\sum_{j=1}^{k} \exp(w_j)} \right), \qquad \forall w \in \mathbb{R}^k.$$

Then we may re-write (36) as

$$u_j = \sigma \left( v_j + \sum_{q=1}^{k} S_j(z_q) \, R \, v_q \right), \qquad j = 1, \dots, k.$$

Furthermore, if we re-parametrize $R = R_{\mathrm{out}} R_{\mathrm{val}}$, where $R_{\mathrm{out}} \in \mathbb{R}^{n \times m}$ and $R_{\mathrm{val}} \in \mathbb{R}^{m \times n}$ are matrices of free parameters, we obtain

$$u_j = \sigma \left( v_j + R_{\mathrm{out}} \sum_{q=1}^{k} S_j(z_q) \, R_{\mathrm{val}} \, v_q \right), \qquad j = 1, \dots, k,$$

which is precisely the single-headed attention transformer block with no layer normalization applied inside the activation function. In the language of transformers, the matrices $A$, $B$, and $R_{\mathrm{val}}$ correspond to the query, key, and value functions respectively. We note that tricks such as layer normalization (Ba et al., 2016) can be adapted in a straightforward manner to the continuum setting and incorporated into (35). Furthermore, multi-headed self-attention can be realized by simply allowing $\kappa_v$ to be a sum over multiple functions of the form $g_v R$, each with separate trainable parameters. Including such generalizations yields the continuum limit of the transformer as implemented in practice. We do not pursue this here as our goal is simply to draw a parallel between the two methods.
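As a sanity check on the discretization (36), the attention-style layer can be written in a few lines of NumPy. This is an illustrative sketch under our own naming (not from any released code), using $\sigma = \tanh$ for concreteness:

```python
import numpy as np

def attention_operator_layer(v, A, B, R_out, R_val, sigma=np.tanh):
    """One discretized neural-operator layer with the attention kernel of (36).

    v: (k, n) array of pointwise values v_j = v(x_j).
    A, B: (m, n) query/key matrices; R_out: (n, m), R_val: (m, n).
    Returns u with u_j = sigma(v_j + R_out sum_q S_j(z_q) R_val v_q).
    """
    m = A.shape[0]
    # scores[j, q] = <A v_j, B v_q> / sqrt(m)
    scores = (v @ A.T) @ (v @ B.T).T / np.sqrt(m)
    # normalize over the first index j (the Monte-Carlo approximation of the
    # inner integral over s), i.e. weights[j, q] = S_j(z_q)
    weights = np.exp(scores - scores.max(axis=0, keepdims=True))
    weights /= weights.sum(axis=0, keepdims=True)
    update = (weights @ (v @ R_val.T)) @ R_out.T
    return sigma(v + update)
```

Note that the normalization here runs over the first index, mirroring the inner integral over $s$ in (35), rather than over the keys as in the standard attention convention.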

Even though transformers are special cases of neural operators, the standard attention mechanism is memory- and computation-intensive, as seen in Section 6, compared to the neural operator architectures developed here (7)-(9). The high computational complexity of transformers is evident in (35), since we must evaluate a nested integral of $v$ for each $x \in D$. Recently, efficient attention mechanisms have been explored, e.g., the long-short transformer (Zhu et al., 2021) and adaptive FNO-based attention mechanisms (Guibas et al., 2021). However, many of the efficient vision transformer architectures (Choromanski et al., 2020; Dosovitskiy et al., 2020) such as ViTs are not special cases of neural operators since they use CNN layers to generate tokens, which are not discretization-invariant.

6Test Problems

A central application of neural operators is learning solution operators defined by parametric partial differential equations. In this section, we define four test problems for which we numerically study the approximation properties of neural operators. To that end, let $(\mathcal{A}, \mathcal{U}, \mathcal{F})$ be a triplet of Banach spaces. The first two problem classes considered are derived from the following general class of PDEs:

$$\mathsf{L}_a u = f \tag{37}$$

where, for every $a \in \mathcal{A}$, $\mathsf{L}_a : \mathcal{U} \to \mathcal{F}$ is a, possibly nonlinear, partial differential operator, and $u \in \mathcal{U}$ corresponds to the solution of the PDE (37) when $f \in \mathcal{F}$ and appropriate boundary conditions are imposed. The second class will be evolution equations with initial condition $a \in \mathcal{A}$ and solution $u(t) \in \mathcal{U}$ at every time $t > 0$. We seek to learn the map from $a$ to $u := u(\tau)$ for some fixed time $\tau > 0$; we will also study maps on paths (time-dependent solutions).

Our goal will be to learn the mappings

$$\mathcal{G}^\dagger : a \mapsto u \qquad \text{or} \qquad \mathcal{G}^\dagger : f \mapsto u;$$

we will study both cases, depending on the test problem considered. We will define a probability measure $\mu$ on $\mathcal{A}$ or $\mathcal{F}$ which will serve to define a model for likely input data. Furthermore, the measure $\mu$ will define a topology on the space of mappings in which $\mathcal{G}^\dagger$ lives, using the Bochner norm (3). We will assume that each of the spaces $(\mathcal{A}, \mathcal{U}, \mathcal{F})$ is a Banach space of functions defined on a bounded domain $D \subset \mathbb{R}^d$. All reported errors will be Monte-Carlo estimates of the relative error

$$\mathbb{E}_{a \sim \mu} \frac{\big\| \mathcal{G}^\dagger(a) - \mathcal{G}_\theta(a) \big\|_{L^2(D)}}{\big\| \mathcal{G}^\dagger(a) \big\|_{L^2(D)}}$$

or, equivalently, the same expression with $a$ replaced by $f$, under the assumption that $\mathcal{U} \subseteq L^2(D)$. The domain $D$ will be discretized, usually uniformly, with $J \in \mathbb{N}$ points.
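For concreteness, this metric can be estimated from a held-out batch of predictions as follows (a minimal sketch under our own naming):

```python
import numpy as np

def relative_l2_error(pred, true):
    """Relative L2(D) error for a single sample discretized on a grid.

    pred, true: arrays of function values on the same grid.  The grid's
    quadrature weight is a common factor of numerator and denominator,
    so it cancels and plain vector norms suffice.
    """
    return np.linalg.norm(pred - true) / np.linalg.norm(true)

def mean_relative_l2_error(preds, trues):
    # Monte-Carlo estimate of the expected relative error over the test set
    return float(np.mean([relative_l2_error(p, t) for p, t in zip(preds, trues)]))
```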

6.1Poisson Equation

First we consider the one-dimensional Poisson equation with a zero boundary condition. In particular, (37) takes the form

$$-\frac{\mathrm{d}^2}{\mathrm{d}x^2} u(x) = f(x), \qquad x \in (0, 1),$$
$$u(0) = u(1) = 0 \tag{38}$$

for some source function $f : (0, 1) \to \mathbb{R}$. In particular, for $D(\mathsf{L}) := H^1_0((0,1); \mathbb{R}) \cap H^2((0,1); \mathbb{R})$, we have $\mathsf{L} : D(\mathsf{L}) \to L^2((0,1); \mathbb{R})$ defined as $-\mathrm{d}^2 / \mathrm{d}x^2$, noting that $\mathsf{L}$ has no dependence on any parameter $a \in \mathcal{A}$ in this case. We will consider the weak form of (38) with source function $f \in H^{-1}((0,1); \mathbb{R})$ and therefore the solution operator $\mathcal{G}^\dagger : H^{-1}((0,1); \mathbb{R}) \to H^1_0((0,1); \mathbb{R})$ defined as

$$\mathcal{G}^\dagger : f \mapsto u.$$

We define the probability measure $\mu = N(0, C)$ where

$$C = (\mathsf{L} + I)^{-2},$$

defined through the spectral theory of self-adjoint operators. Since $\mu$ charges a subset of $L^2((0,1); \mathbb{R})$, we will learn $\mathcal{G}^\dagger : L^2((0,1); \mathbb{R}) \to H^1_0((0,1); \mathbb{R})$ in the topology induced by (3).

In this setting, $\mathcal{G}^\dagger$ has a closed-form solution given as

$$\mathcal{G}^\dagger(f) = \int_0^1 G(\cdot, y) f(y) \, \mathrm{d}y$$

where

$$G(x, y) = \frac{1}{2} \big( x + y - |y - x| \big) - xy, \qquad \forall (x, y) \in [0, 1]^2$$

is the Green's function. Note that while $\mathcal{G}^\dagger$ is a linear operator, the Green's function $G$ is nonlinear as a function of its arguments. We will consider only a single layer of (6) with $\sigma_1 = \mathrm{Id}$, $\mathcal{P} = \mathrm{Id}$, $\mathcal{Q} = \mathrm{Id}$, $W_0 = 0$, $b_0 = 0$, and

$$\mathcal{K}_0(f) = \int_0^1 \kappa_\theta(\cdot, y) f(y) \, \mathrm{d}y$$

where $\kappa_\theta : \mathbb{R}^2 \to \mathbb{R}$ will be parameterized as a standard neural network with parameters $\theta$.

The purpose of the current example is two-fold. First, we test the efficacy of the neural operator framework in a simple setting where an exact solution is analytically available. Second, we show that, by building in the right inductive bias, in particular paralleling the form of the Green's function solution, we obtain a model that generalizes outside the distribution $\mu$. That is, once trained, the model will generalize to any $f \in L^2((0,1); \mathbb{R})$ that may be outside the support of $\mu$. For example, as defined, the random variable $f \sim \mu$ is a continuous function; however, if $\kappa_\theta$ approximates the Green's function well, then the model $\mathcal{G}_\theta$ will approximate the solution to (38) accurately even for discontinuous inputs.

To create the dataset used for training, solutions to (38) are obtained by numerical integration using the Green's function on a uniform grid with 85 collocation points. We use $N = 1000$ training examples.
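As an illustration of how such a dataset can be generated, the following sketch evaluates the Green's-function integral by trapezoidal quadrature on the same 85-point grid (our own minimal implementation, not the authors' code):

```python
import numpy as np

def greens_function(x, y):
    # G(x, y) = (x + y - |y - x|)/2 - x*y, the kernel of (-d^2/dx^2)^{-1}
    # with zero Dirichlet boundary conditions on [0, 1]
    return 0.5 * (x + y - np.abs(y - x)) - x * y

def solve_poisson(f_vals, grid):
    """u(x_i) ~= int_0^1 G(x_i, y) f(y) dy via trapezoidal quadrature."""
    w = np.full_like(grid, grid[1] - grid[0])
    w[0] *= 0.5
    w[-1] *= 0.5  # trapezoid end-point weights
    X, Y = np.meshgrid(grid, grid, indexing="ij")
    return (greens_function(X, Y) * f_vals[None, :] * w[None, :]).sum(axis=1)
```

For instance, with $f(x) = \pi^2 \sin(\pi x)$ the quadrature reproduces the exact solution $u(x) = \sin(\pi x)$ up to discretization error.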

6.2Darcy Flow

We consider the steady state of Darcy flow in two dimensions, which is the second-order elliptic equation

$$-\nabla \cdot \big( a(x) \nabla u(x) \big) = f(x), \qquad x \in D,$$
$$u(x) = 0, \qquad x \in \partial D, \tag{39}$$

where $D = (0, 1)^2$ is the unit square. In this setting $\mathcal{A} = L^\infty(D; \mathbb{R}_+)$, $\mathcal{U} = H^1_0(D; \mathbb{R})$, and $\mathcal{F} = H^{-1}(D; \mathbb{R})$. We fix $f \equiv 1$ and consider the weak form of (39) and therefore the solution operator $\mathcal{G}^\dagger : L^\infty(D; \mathbb{R}_+) \to H^1_0(D; \mathbb{R})$ defined as

$$\mathcal{G}^\dagger : a \mapsto u. \tag{40}$$

Note that while (39) is a linear PDE, the solution operator $\mathcal{G}^\dagger$ is nonlinear. We define the probability measure $\mu = T_\sharp N(0, C)$ as the pushforward of a Gaussian measure under the operator $T$, where the covariance of the Gaussian is

$$C = (-\Delta + 9I)^{-2}$$

with $D(-\Delta)$ defined to impose zero Neumann boundary conditions on the Laplacian. We define $T$ to be the Nemytskii operator acting on functions, defined through the map $T : \mathbb{R} \to \mathbb{R}_+$ given by

$$T(x) = \begin{cases} 12, & x \geq 0, \\ 3, & x < 0. \end{cases}$$

The random variable $a \sim \mu$ is a piecewise-constant function with random interfaces given by the underlying Gaussian random field. Such constructions are prototypical models for many physical systems such as permeability in sub-surface flows and (in a vector generalization) material microstructures in elasticity.

To create the dataset used for training, solutions to (39) are obtained using a second-order finite difference scheme on a uniform $421 \times 421$ grid. All other resolutions are downsampled from this data set. We use $N = 1000$ training examples.

6.3Burgersโ€™ Equation

We consider the one-dimensional viscous Burgers' equation

$$\frac{\partial}{\partial t} u(x,t) + \frac{1}{2} \frac{\partial}{\partial x} \big( u(x,t) \big)^2 = \nu \frac{\partial^2}{\partial x^2} u(x,t), \qquad x \in (0, 2\pi), \; t \in (0, \infty),$$
$$u(x, 0) = u_0(x), \qquad x \in (0, 2\pi), \tag{41}$$

with periodic boundary conditions and a fixed viscosity $\nu = 10^{-1}$. Let $\Psi : L^2_{\mathrm{per}}((0, 2\pi); \mathbb{R}) \times \mathbb{R}_+ \to H^s_{\mathrm{per}}((0, 2\pi); \mathbb{R})$, for any $s > 0$, be the flow map associated to (41), in particular,

$$\Psi(u_0, t) = u(\cdot, t), \qquad t > 0.$$

We consider the solution operator defined by evaluating $\Psi$ at a fixed time. Fix any $s \geq 0$. Then we may define $\mathcal{G}^\dagger : L^2_{\mathrm{per}}((0, 2\pi); \mathbb{R}) \to H^s_{\mathrm{per}}((0, 2\pi); \mathbb{R})$ by

$$\mathcal{G}^\dagger : u_0 \mapsto \Psi(u_0, 1). \tag{42}$$

We define the probability measure $\mu = N(0, C)$ where

$$C = 625 \left( -\frac{\mathrm{d}^2}{\mathrm{d}x^2} + 25 I \right)^{-2}$$

with the domain of the Laplacian defined to impose periodic boundary conditions. We choose the initial condition for (41) by drawing $u_0 \sim \mu$, noting that $\mu$ charges a subset of $L^2_{\mathrm{per}}((0, 2\pi); \mathbb{R})$.

To create the dataset used for training, solutions to (41) are obtained using a pseudo-spectral split-step method where the heat equation part is solved exactly in Fourier space and the nonlinear part is then advanced using a forward Euler method with a very small time step. We use a uniform spatial grid with $2^{13} = 8192$ collocation points and subsample all other resolutions from this data set. We use $N = 1000$ training examples.
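The split-step scheme just described can be sketched as follows. This is an illustrative re-implementation under our own naming and step sizes, not the code used to generate the actual dataset:

```python
import numpy as np

def burgers_split_step(u0, t_final=1.0, nu=0.1, dt=1e-4):
    """Pseudo-spectral split-step for u_t + (u^2/2)_x = nu * u_xx on (0, 2*pi)
    with periodic boundary conditions: the heat part is solved exactly in
    Fourier space, the nonlinear part is advanced by forward Euler."""
    n = u0.size
    k = np.fft.fftfreq(n, d=1.0 / n)          # integer wavenumbers on (0, 2*pi)
    heat = np.exp(-nu * k**2 * dt)            # exact heat-semigroup factor
    u = u0.astype(float)
    for _ in range(int(round(t_final / dt))):
        u = np.real(np.fft.ifft(np.fft.fft(u) * heat))            # diffuse
        flux_x = np.real(np.fft.ifft(1j * k * np.fft.fft(0.5 * u**2)))
        u = u - dt * flux_x                                       # advect
    return u
```

Since the nonlinear flux term conserves the spatial mean and viscosity dissipates energy, the computed solution should preserve zero mean and lose energy over time.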

6.4Navier-Stokes Equation

We consider the two-dimensional Navier-Stokes equation for a viscous, incompressible fluid,

$$\partial_t u(x,t) + u(x,t) \cdot \nabla u(x,t) + \nabla p(x,t) = \nu \Delta u(x,t) + f(x), \qquad x \in \mathbb{T}^2, \; t \in (0, \infty),$$
$$\nabla \cdot u(x,t) = 0, \qquad x \in \mathbb{T}^2, \; t \in [0, \infty),$$
$$u(x, 0) = u_0(x), \qquad x \in \mathbb{T}^2, \tag{43}$$

where $\mathbb{T}^2$ is the unit torus, i.e., $[0,1]^2$ equipped with periodic boundary conditions, and $\nu \in \mathbb{R}_+$ is a fixed viscosity. Here $u : \mathbb{T}^2 \times \mathbb{R}_+ \to \mathbb{R}^2$ is the velocity field, $p : \mathbb{T}^2 \times \mathbb{R}_+ \to \mathbb{R}$ is the pressure field, and $f : \mathbb{T}^2 \to \mathbb{R}^2$ is a fixed forcing function.

Equivalently, we study the vorticity-streamfunction formulation of the equation,

$$\partial_t w(x,t) + \nabla^\perp \psi \cdot \nabla w(x,t) = \nu \Delta w(x,t) + g(x), \qquad x \in \mathbb{T}^2, \; t \in (0, \infty), \tag{44a}$$
$$-\Delta \psi = w, \qquad x \in \mathbb{T}^2, \; t \in (0, \infty), \tag{44b}$$
$$w(x, 0) = w_0(x), \qquad x \in \mathbb{T}^2, \tag{44c}$$

where $w$ is the out-of-plane component of the vorticity field $\nabla \times u : \mathbb{T}^2 \times \mathbb{R}_+ \to \mathbb{R}^3$. Since, when viewed in three dimensions, $u = (u_1(x_1, x_2), u_2(x_1, x_2), 0)$, it follows that $\nabla \times u = (0, 0, w)$. The stream function $\psi$ is related to the velocity by $u = \nabla^\perp \psi$, enforcing the divergence-free condition. Similar considerations as for the curl of $u$ apply to the curl of $f$, showing that $\nabla \times f = (0, 0, g)$. Note that in (44b) $\psi$ is undetermined up to an (irrelevant, since $u$ is computed from $\psi$ by taking a gradient) additive constant; uniqueness is restored by seeking $\psi$ with spatial mean $0$. We define the forcing term as

๐‘” โข ( ๐‘ฅ 1 , ๐‘ฅ 2 )

0.1 โข ( sin โก ( 2 โข ๐œ‹ โข ( ๐‘ฅ 1 + ๐‘ฅ 2 ) ) + cos โก ( 2 โข ๐œ‹ โข ( ๐‘ฅ 1 + ๐‘ฅ 2 ) ) ) , โˆ€ ( ๐‘ฅ 1 , ๐‘ฅ 2 ) โˆˆ ๐•‹ 2 .

The corresponding Reynolds number is estimated as ๐‘… โข ๐‘’

0.1 ๐œˆ โข ( 2 โข ๐œ‹ ) 3 / 2 (Chandler and Kerswell, 2013). Let ฮจ : ๐ฟ 2 โข ( ๐•‹ 2 ; โ„ ) ร— โ„ + โ†’ ๐ป ๐‘  โข ( ๐•‹ 2 ; โ„ ) , for any ๐‘ 

0 , be the flow map associated to (44), in particular,

ฮจ โข ( ๐‘ค 0 , ๐‘ก )

๐‘ค โข ( โ‹… , ๐‘ก ) , ๐‘ก

0 .

Notice that this is well-defined for any $w_0 \in L^2(\mathbb{T}^2; \mathbb{R})$.

We will define two notions of the solution operator. In the first, we proceed as in the previous examples; in particular, $\mathcal{G}^\dagger : L^2(\mathbb{T}^2; \mathbb{R}) \to H^s(\mathbb{T}^2; \mathbb{R})$ is defined as

$$\mathcal{G}^\dagger : w_0 \mapsto \Psi(w_0, T) \tag{45}$$

for some fixed $T > 0$. In the second, we map an initial part of the trajectory to a later part of the trajectory. In particular, we define $\mathcal{G}^\dagger : L^2(\mathbb{T}^2; \mathbb{R}) \times C\big( (0, 10]; H^s(\mathbb{T}^2; \mathbb{R}) \big) \to C\big( (10, T]; H^s(\mathbb{T}^2; \mathbb{R}) \big)$ by

$$\mathcal{G}^\dagger : \big( w_0, \Psi(w_0, t)|_{t \in (0, 10]} \big) \mapsto \Psi(w_0, t)|_{t \in (10, T]} \tag{46}$$

for some fixed ๐‘‡ > 10 . We define the probability measure ๐œ‡

๐‘ โข ( 0 , ๐ถ ) where

๐ถ

7 3 / 2 โข ( โˆ’ ฮ” + 49 โข ๐ผ ) โˆ’ 2.5

with periodic boundary conditions on the Laplacian. We model the initial vorticity ๐‘ค 0 โˆผ ๐œ‡ to (44) as ๐œ‡ charges a subset of ๐ฟ 2 โข ( ๐•‹ 2 ; โ„ ) . Its pushforward onto ฮจ โข ( ๐‘ค 0 , ๐‘ก ) | ๐‘ก โˆˆ ( 0 , 10 ] is required to define the measure on input space in the second case defined by (46).

To create the dataset used for training, solutions to (44) are obtained using a pseudo-spectral split-step method where the viscous terms are advanced using a Crank-Nicolson update and the nonlinear and forcing terms are advanced using Heun's method. Dealiasing is used with the $2/3$ rule. For further details on this approach see (Chandler and Kerswell, 2013). Data is obtained on a uniform $256 \times 256$ grid, and all other resolutions are subsampled from this data set. We experiment with different viscosities $\nu$, final times $T$, and amounts of training data $N$.

6.4.1Bayesian Inverse Problem

As an application of operator learning, we consider the inverse problem of recovering the initial vorticity in the Navier-Stokes equation (44) from partial, noisy observations of the vorticity at a later time. Consider the first solution operator defined in Subsection 6.4, in particular, $\mathcal{G}^\dagger : L^2(\mathbb{T}^2; \mathbb{R}) \to H^s(\mathbb{T}^2; \mathbb{R})$ defined as

$$\mathcal{G}^\dagger : w_0 \mapsto \Psi(w_0, 50)$$

where $\Psi$ is the flow map associated to (44). We then consider the inverse problem

$$y = O\big( \mathcal{G}^\dagger(w_0) \big) + \eta \tag{47}$$

of recovering $w_0 \in L^2(\mathbb{T}^2; \mathbb{R})$, where $O : H^s(\mathbb{T}^2; \mathbb{R}) \to \mathbb{R}^{49}$ is the evaluation operator on a uniform $7 \times 7$ interior grid, and $\eta \sim N(0, \Gamma)$ is observational noise with covariance $\Gamma = \gamma^2 I$ and $\gamma = 0.1$. We view (47) as a Bayesian inverse problem mapping the prior measure $\mu$ on $w_0$ to the posterior measure $\pi^y$ on $w_0 \mid y$. In particular, $\pi^y$ has density with respect to $\mu$, given by the Radon-Nikodym derivative

$$\frac{\mathrm{d}\pi^y}{\mathrm{d}\mu}(w_0) \propto \exp \left( -\frac{1}{2} \big\| y - O\big( \mathcal{G}^\dagger(w_0) \big) \big\|_\Gamma^2 \right)$$

where $\| \cdot \|_\Gamma = \| \Gamma^{-1/2} \cdot \|$ and $\| \cdot \|$ is the Euclidean norm on $\mathbb{R}^{49}$. For further details on Bayesian inversion for functions see (Cotter et al., 2009; Stuart, 2010), and see (Cotter et al., 2013) for MCMC methods adapted to the function-space setting.

We solve (47) by computing the posterior mean $\mathbb{E}_{w_0 \sim \pi^y}[w_0]$ using the pre-conditioned Crank-Nicolson (pCN) MCMC method described in Cotter et al. (2013). We employ pCN in two cases: (i) using $\mathcal{G}^\dagger$ evaluated with the pseudo-spectral method described in Subsection 6.4; and (ii) using $\mathcal{G}_\theta$, the neural operator approximating $\mathcal{G}^\dagger$. After a 5,000-sample burn-in period, we generate 25,000 samples from the posterior using both approaches and use them to compute the posterior mean.
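For reference, the pCN update is short enough to sketch in full. The helper names (`log_likelihood`, `sample_prior`) are ours; in the paper's setting `log_likelihood` would wrap the (pseudo-spectral or learned) forward map and the data misfit, and `sample_prior` would draw from $\mu$:

```python
import numpy as np

def pcn_sample(log_likelihood, sample_prior, w0, n_steps, beta=0.1, rng=None):
    """Pre-conditioned Crank-Nicolson MCMC (Cotter et al., 2013), sketched.

    The proposal w' = sqrt(1 - beta^2) * w + beta * xi, with xi a fresh prior
    draw, is reversible with respect to the Gaussian prior, so the
    accept/reject ratio involves only the likelihood."""
    rng = np.random.default_rng(rng)
    w, ll = w0, log_likelihood(w0)
    samples = []
    for _ in range(n_steps):
        prop = np.sqrt(1.0 - beta**2) * w + beta * sample_prior()
        ll_prop = log_likelihood(prop)
        if np.log(rng.uniform()) < ll_prop - ll:   # accept with prob min(1, e^diff)
            w, ll = prop, ll_prop
        samples.append(w.copy())
    return samples
```

A key property, important when sampling function-space posteriors such as $\pi^y$, is that the acceptance rate of pCN does not degenerate as the discretization of $w_0$ is refined.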

6.4.2Spectra

Because of the constant-in-time forcing term, the energy reaches a non-zero equilibrium in time which is statistically reproducible for different initial conditions. To illustrate the complexity of the solution to the Navier-Stokes problem outlined in Subsection 6.4 we show, in Figure 6, the Fourier spectrum of the solution data at time $t = 50$ for three different choices of the viscosity $\nu$. The figure demonstrates that, over a range of wavenumbers $k$ which grows as $\nu$ decreases, the rate of decay of the spectrum is $-5/3$, matching what is expected in the turbulent regime (Kraichnan, 1967). This is a statistically stationary property of the equation, sustained for all positive times.
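Such a spectrum can be computed, up to the choice of shell-binning convention, by a short routine like the following (our own sketch; the exact binning used for Figure 6 is not specified here):

```python
import numpy as np

def radial_spectrum(w):
    """Radially binned Fourier spectrum of a 2-d periodic field w on an
    s x s grid: mean |w_hat(k)| over integer shells of Euclidean |k|
    (one of several possible binning conventions)."""
    s = w.shape[0]
    w_hat = np.fft.fft2(w) / s**2                 # normalized Fourier coefficients
    k = np.fft.fftfreq(s, d=1.0 / s)              # integer wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    shell = np.rint(np.sqrt(kx**2 + ky**2)).astype(int)
    amp = np.abs(w_hat)
    return np.array([amp[shell == r].mean() for r in range(shell.max() + 1)])
```

Plotting the returned array against the shell index on log-log axes exposes the $-5/3$ decay rate discussed above.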

Figure 6: Spectral decay of the Navier-Stokes equation data. The y-axis represents the value of each mode; the x-axis is the wavenumber $|k| = k_1 + k_2$. From left to right, the solutions have viscosity $\nu = 10^{-3}, \, 10^{-4}, \, 10^{-5}$ respectively.

6.5Choice of Loss Criteria

In general, a model performs best when trained and tested using the same loss criterion. If one trains the model using one norm and tests with another, the model may overfit in the training norm. The choice of loss function therefore plays a key role. In this work, we use the relative $L^2$ error to measure performance in all our problems. Both the $L^2$ error and its square, the mean squared error (MSE), are common choices of testing criterion in the numerical analysis and machine learning literature. We observe that training with the relative error has a normalizing and regularizing effect that prevents overfitting. In practice, training with the relative $L^2$ loss results in around half the testing error of training with the MSE loss.

7Numerical Results

In this section, we compare the proposed neural operator with other supervised learning approaches, using the four test problems outlined in Section 6. In Subsection 7.1 we study the Poisson equation and learning a Green's function; Subsection 7.2 considers the coefficient-to-solution map for steady Darcy flow and the initial-condition-to-solution map at positive time for Burgers' equation. In Subsection 7.3 we study the Navier-Stokes equation.

We compare with a variety of architectures found by discretizing the data and applying finite-dimensional approaches, as well as with other operator-based approximation methods; further detailed comparison of other operator-based approximation methods may be found in De Hoop et al. (2022), where the issue of error versus cost (with cost defined in various ways, such as evaluation time of the network or amount of data required) is studied. We do not compare against traditional solvers (FEM/FDM/spectral), although our methods, once trained, enable evaluation of the input-to-output map orders of magnitude more quickly than such traditional solvers on complex problems. We demonstrate the benefits of this speed-up in a prototypical application, Bayesian inversion, in Subsection 7.3.4.

All the computations are carried out on a single Nvidia V100 GPU with 16GB memory. The code is available at https://github.com/zongyi-li/graph-pde and https://github.com/zongyi-li/fourier_neural_operator.

Setup of the Four Methods:

We construct the neural operator by stacking four integral operator layers as specified in (6) with the ReLU activation. No batch normalization is needed. Unless otherwise specified, we use $N = 1000$ training instances and 200 testing instances. We use the Adam optimizer to train for 500 epochs with an initial learning rate of 0.001 that is halved every 100 epochs. We set the channel dimensions $d_{v_0} = \cdots = d_{v_3} = 64$ for all one-dimensional problems and $d_{v_0} = \cdots = d_{v_3} = 32$ for all two-dimensional problems. The kernel networks $\kappa^{(0)}, \dots, \kappa^{(3)}$ are standard feed-forward neural networks with three layers and widths of 256 units. We use the following abbreviations to denote the methods introduced in Section 4.

- GNO: The method introduced in Subsection 4.1, truncating the integral to a ball with radius $r = 0.25$ and using the Nyström approximation with $J' = 300$ sub-sampled nodes.

- LNO: The low-rank method introduced in Subsection 4.2 with rank $r = 4$.

- MGNO: The multipole method introduced in Subsection 4.3. On the Darcy flow problem, we use the random construction with three graph levels, sampling $J_1 = 400$, $J_2 = 100$, and $J_3 = 25$ nodes respectively. On the Burgers' equation problem, we use the orthogonal construction without sampling.

- FNO: The Fourier method introduced in Subsection 4.4. We set $k_{\max, j} = 16$ for all one-dimensional problems and $k_{\max, j} = 12$ for all two-dimensional problems.
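To make the FNO layer concrete, here is a minimal one-dimensional spectral convolution in NumPy, an illustrative sketch under our own naming rather than the released implementation:

```python
import numpy as np

def fourier_layer_1d(v, weights, k_max=16):
    """One FNO spectral convolution (cf. Subsection 4.4): FFT the input
    channels, multiply the lowest k_max modes by learned complex weights,
    zero the remaining modes, inverse FFT.

    v: (s, c) real input on a uniform grid; weights: (k_max, c, c) complex.
    """
    s, c = v.shape
    v_hat = np.fft.rfft(v, axis=0)                       # (s//2 + 1, c)
    out_hat = np.zeros_like(v_hat)
    out_hat[:k_max] = np.einsum("kij,kj->ki", weights, v_hat[:k_max])
    return np.fft.irfft(out_hat, n=s, axis=0)            # (s, c) real output
```

Because the learned weights live on a fixed set of Fourier modes rather than on a grid, the same `weights` array can be applied to inputs at any sufficiently fine resolution, which is the source of the discretization invariance discussed in the remark below.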

Remark on the Resolution.

Traditional PDE solvers such as FEM and FDM approximate a single function, and therefore their error relative to the continuum decreases as the resolution is increased. The figures we show here exhibit something different: the error is independent of resolution, once enough resolution is used, but is not zero. This reflects the fact that there is a residual approximation error, in the infinite-dimensional limit, from the use of a finitely-parameterized neural operator trained on a finite amount of data. Invariance of the error with respect to (sufficiently fine) resolution is a desirable property that demonstrates that an intrinsic approximation of the operator has been learned, independent of any specific discretization; see Figure 8. Furthermore, resolution-invariant operators can do zero-shot super-resolution, as shown in Subsection 7.3.1.

7.1Poisson Equation

Recall the Poisson equation (38) introduced in Subsection 6.1. We use a zero-hidden-layer neural operator construction without lifting the input dimension. In particular, we simply learn a kernel $\kappa_\theta : \mathbb{R}^2 \to \mathbb{R}$ parameterized as a standard feed-forward neural network with parameters $\theta$. Using only $N = 1000$ training examples, we obtain a relative test error of $10^{-7}$. The neural operator gives an almost perfect approximation to the true solution operator in the topology of (3).

To examine the quality of the approximation in the much stronger uniform topology, we check whether the kernel $\kappa_\theta$ approximates the Green's function for this problem. To see why this is enough, let $K \subset L^2([0,1]; \mathbb{R})$ be a bounded set, i.e.,

$$\| f \|_{L^2([0,1]; \mathbb{R})} \leq M, \qquad \forall f \in K,$$

and suppose that

$$\sup_{(x,y) \in [0,1]^2} \big| \kappa_\theta(x,y) - G(x,y) \big| < \frac{\epsilon}{M}$$

for some $\epsilon > 0$. Then it is easy to see that

$$\sup_{f \in K} \big\| \mathcal{G}^\dagger(f) - \mathcal{G}_\theta(f) \big\|_{L^2([0,1]; \mathbb{R})} < \epsilon;$$

in particular, we obtain an approximation in the topology of uniform convergence over bounded sets, while having trained only in the topology of the Bochner norm (3). Figure 7 shows the results, from which we can see that $\kappa_\theta$ does indeed approximate the Green's function well. This result implies that, by constructing a suitable architecture, we can generalize to the entire space and to data well outside the support of the training set.
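The elided estimate follows from the Cauchy-Schwarz inequality applied to the inner integral, together with $\|G(x,\cdot) - \kappa_\theta(x,\cdot)\|_{L^2} \le \sup_{(x,y)} |G(x,y) - \kappa_\theta(x,y)| < \epsilon / M$ on the unit interval:

```latex
\big\| \mathcal{G}^\dagger(f) - \mathcal{G}_\theta(f) \big\|_{L^2}^{2}
  = \int_0^1 \left( \int_0^1 \big( G(x,y) - \kappa_\theta(x,y) \big) f(y) \, \mathrm{d}y \right)^{\!2} \mathrm{d}x
  < \int_0^1 \frac{\epsilon^2}{M^2} \, \| f \|_{L^2}^{2} \, \mathrm{d}x
  \le \frac{\epsilon^2}{M^2} \, M^2 = \epsilon^2
  \qquad \forall f \in K .
```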

Figure 7: Kernel for the one-dimensional Green's function, learned with the Nyström approximation method. Left: learned kernel function; right: the analytic Green's function. This is a proof of concept of the graph kernel network on the one-dimensional Poisson equation, comparing the learned and true kernels.

7.2Darcy and Burgers Equations

In the following section, we compare the four methods presented in this paper with different operator approximation benchmarks; we study the Darcy flow problem introduced in Subsection 6.2 and the Burgers' equation problem introduced in Subsection 6.3. The solution operators of interest are defined by (40) and (42). We use the following abbreviations for the methods against which we benchmark.

- NN: a standard point-wise feed-forward neural network. It is mesh-free, but performs poorly due to the lack of neighbor information. We use standard fully connected neural networks with 8 layers and width 1000.

- FCN: the state-of-the-art neural network method based on fully convolutional networks (Zhu and Zabaras, 2018). It has dominating performance for small grids $s = 61$. But fully convolutional networks are mesh-dependent, and therefore their error grows when moving to a larger grid.

- PCA+NN: an instantiation of the methodology proposed in Bhattacharya et al. (2020): using PCA as an autoencoder on both the input and output spaces and interpolating the latent spaces with a standard fully connected neural network of width 200. The method provably obtains mesh-independent error and can learn purely from data; however, the solution can only be evaluated on the same mesh as the training data.

- RBM: the classical reduced basis method (using a PCA basis), which is widely used in applications and provably obtains mesh-independent error (DeVore, 2014). This method has good performance, but the solutions can only be evaluated on the same mesh as the training data, and one needs knowledge of the PDE to employ it.

- DeepONet: the deep operator network (Lu et al., 2019), which comes equipped with an approximation theory (Lanthaler et al., 2021). We use the unstacked version, precisely defined in the original work (Lu et al., 2019), with standard fully connected neural networks of 8 layers and width 200.

Figure 8: Benchmarks on Burgers' equation and Darcy flow. (a) Benchmarks on Burgers' equation; (b) benchmarks on Darcy flow for different resolutions. Train and test are on the same resolution. For acronyms, see Section 7; details are in Tables 2 and 3.

7.2.1 Darcy Flow

The results of the experiments on Darcy flow are shown in Figure 8 and Table 2. All the methods, except for FCN, achieve invariance of the error with respect to the resolution ๐‘ . In the experiment, we tune each model across a range of different widths and depths to obtain the choices used here; for DeepONet, for example, this leads to 8 layers and width 200 as reported above.

Within our hyperparameter search, the Fourier neural operator (FNO) obtains the lowest relative error. The Fourier-based method likely sees this advantage because the output functions are smooth in these test problems. We also note that it is possible to obtain better results for each model using modified architectures and problem-specific feature engineering. For example, for DeepONet, using a CNN on the branch net and PCA on the trunk net (the latter being similar to the method used in Bhattacharya et al. (2020)) can achieve 0.0232 relative L² error, as shown in Lu et al. (2021b), about half the size of the error we obtain here, but for a very coarse grid with ๐‘  = 29. In the experiments, the different approximation architectures are chosen so that their training costs are similar across all the methods considered, for a given ๐‘ . Noting this, and for example comparing the graph-based neural operator methods such as GNO and MGNO, which use Nyström sampling in physical space, with FNO, we see that FNO is more accurate.

Networks ๐‘ 

85
๐‘ 

141
๐‘ 

211
๐‘ 

421

NN 0.1716

0.1716

0.1716

0.1716

FCN 0.0253

0.0493

0.0727

0.1097

PCANN 0.0299

0.0298

0.0298

0.0299

RBM 0.0244

0.0251

0.0255

0.0259

DeepONet 0.0476

0.0479

0.0462

0.0487

GNO 0.0346

0.0332

0.0342

0.0369

LNO 0.0520

0.0461

0.0445

โˆ’

MGNO 0.0416

0.0428

0.0428

0.0420

FNO 0.0108 0.0109 0.0109 0.0098 Table 2:Relative error on 2-d Darcy Flow for different resolutions ๐‘  . 7.2.2Burgersโ€™ Equation

The results of the experiments on Burgers' equation are shown in Figure 8 and Table 3. As for the Darcy problem, our instantiation of the Fourier neural operator obtains nearly one order of magnitude lower relative error than any of the benchmarks. The Fourier neural operator has standard deviation 0.0010 and mean training error 0.0012. If one replaces the ReLU activation by GeLU, the test error of the FNO is further reduced from 0.0018 to 0.0007. We again observe the invariance of the error with respect to the resolution. It is possible to improve the performance of each model using modified architectures and problem-specific feature engineering. Similarly, the PCA-enhanced DeepONet with a proper scaling can achieve 0.0194 relative L² error, as shown in Lu et al. (2021b), on a grid of resolution ๐‘  = 128.

Networks ๐‘ 

256
๐‘ 

512
๐‘ 

1024
๐‘ 

2048
๐‘ 

4096
๐‘ 

8192

NN 0.4714

0.4561

0.4803

0.4645

0.4779

0.4452

GCN 0.3999

0.4138

0.4176

0.4157

0.4191

0.4198

FCN 0.0958

0.1407

0.1877

0.2313

0.2855

0.3238

PCANN 0.0398

0.0395

0.0391

0.0383

0.0392

0.0393

DeepONet 0.0569

0.0617

0.0685

0.0702

0.0833

0.0857

GNO 0.0555

0.0594

0.0651

0.0663

0.0666

0.0699

LNO 0.0212

0.0221

0.0217

0.0219

0.0200

0.0189

MGNO 0.0243

0.0355

0.0374

0.0360

0.0364

0.0364

FNO 0.0018 0.0018 0.0018 0.0019 0.0020 0.0019 Table 3: Relative errors on 1-d Burgersโ€™ equation for different resolutions ๐‘  . Figure 9:Darcy, trained on 16 ร— 16 , tested on 241 ร— 241

Graph kernel network for the solution of (6.2). It can be trained on a small resolution and will generalize to a larger one. The error shown is the point-wise squared absolute error.

7.2.3 Zero-shot super-resolution.

The neural operator is mesh-invariant, so it can be trained on a lower resolution and evaluated at a higher resolution, without seeing any higher resolution data (zero-shot super-resolution). Figure 9 shows an example of the Darcy Equation where we train the GNO model on 16 ร— 16 resolution data in the setting above and transfer to 256 ร— 256 resolution, demonstrating super-resolution in space.
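The mechanism can be illustrated with plain spectral interpolation: a function stored as (truncated) Fourier coefficients can be evaluated on any grid. The sketch below is our own illustration, not the paper's code (`upsample_spectral` is a hypothetical helper): it zero-pads the FFT of a coarse periodic field to evaluate it on a finer grid, which is essentially the operation a Fourier-parameterized operator performs when queried at a higher resolution.

```python
import numpy as np

def upsample_spectral(u_coarse, s_fine):
    """Evaluate a band-limited periodic field on a finer grid by zero-padding
    its spectrum; no retraining or high-resolution data is needed."""
    s = u_coarse.shape[0]
    spec = np.fft.fftshift(np.fft.fft2(u_coarse))
    pad = (s_fine - s) // 2
    spec_fine = np.pad(spec, pad)  # embed the low modes in a larger spectrum
    return np.real(np.fft.ifft2(np.fft.ifftshift(spec_fine))) * (s_fine / s) ** 2

x = np.linspace(0, 2 * np.pi, 16, endpoint=False)
u_coarse = np.sin(x)[:, None] * np.cos(x)[None, :]  # band-limited field, 16x16
u_fine = upsample_spectral(u_coarse, 256)           # evaluated on 256x256
```

For a band-limited input this interpolation is exact; a trained neural operator additionally carries its learned nonlinear structure across resolutions.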

7.3 Navier-Stokes Equation

In this section, we compare our four methods with different benchmarks on the Navier-Stokes equation introduced in Subsection 6.4. The operator of interest is given by (46). We use the following abbreviations for the methods against which we benchmark.

โ€ข

ResNet: 18 layers of 2-d convolution with residual connections (He et al., 2016).

โ€ข

U-Net: A popular choice for image-to-image regression tasks, consisting of four blocks with 2-d convolutions and deconvolutions (Ronneberger et al., 2015).

โ€ข

TF-Net: A network designed for learning turbulent flows based on a combination of spatial and temporal convolutions (Wang et al., 2020).

โ€ข

FNO-2d: 2-d Fourier neural operator with an auto-regressive structure in time. We use the Fourier neural operator to model the local evolution from the previous 10 time steps to the next time step, and iteratively apply the model to obtain the long-term trajectory. We set ๐‘˜ max,j = 12 and ๐‘‘ v = 32.

โ€ข

FNO-3d: 3-d Fourier neural operator that directly convolves in space-time. We use the Fourier neural operator to model the global evolution from the initial 10 time steps directly to the long-term trajectory. We set ๐‘˜ max,j = 12 and ๐‘‘ v = 20.

Figure 10: Benchmark on the Navier-Stokes equation

The learning curves on Navier-Stokes with ν = 10⁻³ for the different benchmarks. Train and test on the same resolution. For acronyms, see Section 7; details in Table 4.

| Networks | Parameters | Time per epoch | ν = 10⁻³ (T = 50, N = 1000) | ν = 10⁻⁴ (T = 30, N = 1000) | ν = 10⁻⁴ (T = 30, N = 10000) | ν = 10⁻⁵ (T = 20, N = 1000) |
|---|---|---|---|---|---|---|
| FNO-3D | 6,558,537 | 38.99s | 0.0086 | 0.1918 | 0.0820 | 0.1893 |
| FNO-2D | 414,517 | 127.80s | 0.0128 | 0.1559 | 0.0834 | 0.1556 |
| U-Net | 24,950,491 | 48.67s | 0.0245 | 0.2051 | 0.1190 | 0.1982 |
| TF-Net | 7,451,724 | 47.21s | 0.0225 | 0.2253 | 0.1168 | 0.2268 |
| ResNet | 266,641 | 78.47s | 0.0701 | 0.2871 | 0.2311 | 0.2753 |

Table 4: Benchmarks on Navier-Stokes (fixing resolution 64 × 64 for both training and testing).

As shown in Table 4, FNO-3D has the best performance when there is sufficient data (ν = 10⁻³, N = 1000 and ν = 10⁻⁴, N = 10000). For the configurations where the amount of data is insufficient (ν = 10⁻⁴, N = 1000 and ν = 10⁻⁵, N = 1000), all methods have ≥ 15% error, with FNO-2D achieving the lowest error within our hyperparameter search. Note that we only present results for spatial resolution 64 × 64 since all the benchmarks we compare against are designed for this resolution. Increasing the spatial resolution degrades their performance, while FNO achieves the same errors.

Auto-regressive (2D) and Temporal Convolution (3D).

We investigate two standard formulations for modeling the time evolution: the auto-regressive model (2D) and the temporal convolution model (3D). Auto-regressive models: FNO-2D, U-Net, TF-Net, and ResNet all perform 2D convolution in the spatial domain and recurrently propagate in the time domain (2D+RNN). The operator maps the solution at previous time steps to the next time step (2D functions to 2D functions). Temporal convolution models: FNO-3D, on the other hand, performs convolution in space-time; it approximates the integral in time by a convolution. FNO-3D maps the initial time interval directly to the full trajectory (3D functions to 3D functions). The 2D+RNN structure can propagate the solution to any arbitrary time ๐‘‡ in increments of a fixed interval length Δ๐‘ก, while the Conv3D structure is fixed to the interval [0, ๐‘‡] but can transfer the solution to an arbitrary time discretization. We find that the 2D method works better for short time sequences, while the 3D method is more expressive and easier to train on longer sequences.
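The 2D+RNN rollout described above can be sketched as follows; `one_step` here is a placeholder (a simple average) standing in for the trained FNO-2d one-step map, and the shapes are illustrative only:

```python
import numpy as np

def rollout(one_step, history, n_future):
    """Recurrently apply a learned one-step map: the last 10 frames are fed in,
    the predicted frame is appended, and the window slides forward."""
    history = list(history)
    preds = []
    for _ in range(n_future):
        nxt = one_step(history[-10:])  # previous 10 time steps -> next step
        preds.append(nxt)
        history.append(nxt)            # feed the prediction back in
    return preds

one_step = lambda frames: np.mean(frames, axis=0)      # stand-in surrogate
init = [np.full((4, 4), float(t)) for t in range(10)]  # 10 initial 4x4 frames
trajectory = rollout(one_step, init, n_future=5)
```

In this schematic the step Δ๐‘ก is fixed by the training data, which is why the 2D formulation reaches an arbitrary horizon ๐‘‡ but not an arbitrary time discretization.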

Networks ๐‘ 

64
๐‘ 

128
๐‘ 

256

FNO-3D 0.0098

0.0101

0.0106

FNO-2D 0.0129

0.0128

0.0126

U-Net 0.0253

0.0289

0.0344

TF-Net 0.0277
0.0278
0.0301 Table 5: Resolution study on Navier-stokes equation ( ๐œˆ

10 โˆ’ 3 , ๐‘

200 , ๐‘‡

20 .) 7.3.1Zero-shot super-resolution.

The neural operator is mesh-invariant, so it can be trained on a lower resolution and evaluated at a higher resolution, without seeing any higher-resolution data (zero-shot super-resolution). Figure 11 shows an example where we train the FNO-3D model on 64 × 64 × 20 resolution data in the setting above with (ν = 10⁻⁴, N = 10000) and transfer to 256 × 256 × 80 resolution, demonstrating super-resolution in space-time. The Fourier neural operator is the only model among the benchmarks (FNO-2D, U-Net, TF-Net, and ResNet) that can do zero-shot super-resolution; the method works well not only on the spatial but also on the temporal domain.

Figure 11: Zero-shot super-resolution

Vorticity field of the solution to the two-dimensional Navier-Stokes equation with viscosity ν = 10⁻⁴ (Re ≈ 200); ground truth on top and prediction on bottom. The model is trained on data discretized on a uniform 64 × 64 spatial grid and a 20-point uniform temporal grid. The model is evaluated with a different initial condition discretized on a uniform 256 × 256 spatial grid and an 80-point uniform temporal grid.

7.3.2 Spectral analysis

Figure 12: The spectral decay of the predictions of different methods

The spectral decay of the predictions of different models on the Navier-Stokes equation. The y-axis is the spectrum; the x-axis is the wavenumber. Left: the spectrum of one trajectory; right: the average over 40 trajectories.

Figure 13: Spectral decay in terms of ๐‘˜ max

The truncation error in a single Fourier layer without applying the linear transform ๐‘…. The y-axis is the normalized truncation error; the x-axis is the truncation mode ๐‘˜ max.

Figure 12 shows that all the methods are able to capture the spectral decay of the Navier-Stokes equation. Notice that, while the Fourier method truncates the higher frequency modes during the convolution, FNO can still recover the higher frequency components in the final prediction. Due to the way we parameterize ๐‘… ๐œ™, the function output by (26) has at most ๐‘˜ max,j Fourier modes per channel. This, however, does not mean that the Fourier neural operator can only approximate functions up to ๐‘˜ max,j modes. Indeed, the activation functions which occur between integral operators and the final decoder network ๐‘„ recover the high frequency modes. As an example, consider a solution to the Navier-Stokes equation with viscosity ν = 10⁻³. Truncating this function at 20 Fourier modes yields an error of around 2%, as shown in Figure 13, while the Fourier neural operator learns the parametric dependence and produces approximations with error ≤ 1% using only ๐‘˜ max,j = 12 parameterized modes.
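The radially averaged spectrum used in plots of this kind can be computed as below. This is a minimal sketch with our own binning convention, not the paper's exact code: the 2-d FFT energy |û(k)|² is summed over shells of constant wavenumber magnitude.

```python
import numpy as np

def energy_spectrum(u):
    """Radially binned energy spectrum of a periodic 2-d field."""
    s = u.shape[0]
    uh = np.fft.fftshift(np.fft.fft2(u)) / s**2        # normalized Fourier modes
    k = np.fft.fftshift(np.fft.fftfreq(s, d=1.0 / s))  # integer wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    kmag = np.rint(np.sqrt(kx**2 + ky**2)).astype(int)
    spec = np.zeros(kmag.max() + 1)
    np.add.at(spec, kmag, np.abs(uh) ** 2)             # sum energy per shell
    return spec

x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
u = np.sin(3 * x)[:, None] * np.ones(64)[None, :]      # all energy at |k| = 3
spec = energy_spectrum(u)
```

For this single-mode field, the spectrum is zero everywhere except in the |k| = 3 shell, mirroring how the plotted spectra isolate energy per wavenumber.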

7.3.3 Non-periodic boundary condition.

Traditional Fourier methods work only with periodic boundary conditions. However, the Fourier neural operator does not have this limitation. This is due to the local linear transform ๐‘Š (the bias term), which keeps track of the non-periodic boundary. As examples, the Darcy flow problem and the time domain of the Navier-Stokes problem have non-periodic boundary conditions, and the Fourier neural operator still learns the solution operator with excellent accuracy.
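A single Fourier layer with this local linear term can be sketched as v ↦ σ(๐‘Šv + ๐’ฆv), where ๐’ฆ keeps only the lowest ๐‘˜ max modes. The code below is a schematic with assumed shapes and random, untrained weights, not the paper's implementation:

```python
import numpy as np

def fourier_layer(v, W, R, k_max):
    """v: (s, c) real signal on a 1-d grid; W: (c, c) local linear term;
    R: (k_max, c, c) complex spectral weights acting on the kept modes."""
    vh = np.fft.rfft(v, axis=0)                             # (s//2 + 1, c) modes
    out_h = np.zeros_like(vh)
    out_h[:k_max] = np.einsum("kio,ki->ko", R, vh[:k_max])  # truncate + multiply
    Kv = np.fft.irfft(out_h, n=v.shape[0], axis=0)          # back to the grid
    return np.maximum(v @ W + Kv, 0.0)                      # ReLU(W v + K v)

rng = np.random.default_rng(0)
v = rng.standard_normal((32, 4))
W = 0.1 * rng.standard_normal((4, 4))
R = 0.1 * (rng.standard_normal((8, 4, 4)) + 1j * rng.standard_normal((8, 4, 4)))
out = fourier_layer(v, W, R, k_max=8)
```

The ๐‘Š-path acts pointwise on the raw, untruncated signal, so boundary information is not forced through the truncated (periodic) Fourier basis.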

7.3.4 Bayesian Inverse Problem

As discussed in Section 6.4.1, we use the pCN method of Cotter et al. (2013) to draw samples from the posterior distribution of initial vorticities in the Navier-Stokes equation given sparse, noisy observations at time ๐‘‡ = 50. We compare the Fourier neural operator acting as a surrogate model with the traditional solver used to generate our train-test data (both run on GPU). We generate 25,000 samples from the posterior (with a 5,000-sample burn-in period), requiring 30,000 evaluations of the forward operator.

As shown in Figure 14, FNO and the traditional solver recover almost the same posterior mean which, when pushed forward, recovers well the later-time solution of the Navier-Stokes equation. In sharp contrast, FNO takes 0.005s to evaluate a single instance while the traditional solver, after being optimized to use the largest possible internal time-step which does not lead to blow-up, takes 2.2s. This amounts to 2.5 minutes for the MCMC using FNO and over 18 hours for the traditional solver. Even if we account for data generation and training time (offline steps), which take 12 hours, using FNO is still faster. Once trained, FNO can be used to quickly perform multiple MCMC runs for different initial conditions and observations, while the traditional solver will take 18 hours for every instance. Furthermore, since FNO is differentiable, it can easily be applied to PDE-constrained optimization problems in which adjoint calculations are used as part of the solution procedure.
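For concreteness, the pCN proposal-and-accept loop has the following shape. This is a toy finite-dimensional sketch with a standard Gaussian prior and a stand-in Gaussian likelihood; the paper's version runs in function space, with the FNO or PDE solver evaluated inside `log_lik`:

```python
import numpy as np

def pcn(log_lik, u0, n_steps, beta=0.2, seed=0):
    """Preconditioned Crank-Nicolson sampler for a N(0, I) prior."""
    rng = np.random.default_rng(seed)
    u, ll = u0, log_lik(u0)
    samples = []
    for _ in range(n_steps):
        # prior-preserving proposal: sqrt(1 - beta^2) u + beta xi, xi ~ N(0, I)
        v = np.sqrt(1 - beta**2) * u + beta * rng.standard_normal(u.shape)
        ll_v = log_lik(v)
        # accept with probability min(1, exp(ll(v) - ll(u))); no prior ratio
        if np.log(rng.uniform()) < ll_v - ll:
            u, ll = v, ll_v
        samples.append(u)
    return np.array(samples)

log_lik = lambda u: -0.5 * np.sum((u - 1.0) ** 2) / 0.5**2  # toy observation
chain = pcn(log_lik, np.zeros(2), n_steps=4000)
```

Because the proposal preserves the prior, the acceptance ratio involves only the likelihood, which keeps the algorithm well defined in the infinite-dimensional limit; it also means the forward-operator evaluation dominates the cost, which is exactly where the FNO surrogate pays off.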

Figure 14:Results of the Bayesian inverse problem for the Navier-Stokes equation.

The top left panel shows the true initial vorticity, while the bottom left panel shows the true observed vorticity at ๐‘‡ = 50 with black dots indicating the locations of the observation points placed on a 7 × 7 grid. The top middle panel shows the posterior mean of the initial vorticity given the noisy observations, estimated with MCMC using the traditional solver, while the top right panel shows the same using FNO as a surrogate model. The bottom middle and right panels show the vorticity at ๐‘‡ = 50 when the respective approximate posterior means are used as initial conditions.

7.4 Discussion and Comparison of the Four Methods

In this section we compare the four methods in terms of expressiveness, complexity, refinability, and ingenuity.

7.4.1 Ingenuity

First we discuss ingenuity, in other words, the design of the frameworks. The first method, GNO, relies on the Nyström approximation of the kernel, i.e., a Monte Carlo approximation of the integral. It is the simplest and most straightforward method. The second method, LNO, relies on a low-rank decomposition of the kernel operator. It is efficient when the kernel has near low-rank structure. The third method, MGNO, combines the first two: it uses a hierarchical, multi-resolution decomposition of the kernel. The last method, FNO, is different from the first three; it restricts the integral kernel to induce a convolution.

GNO and MGNO are implemented using graph neural networks, which help define the sampling and integration. The graph network library also allows sparse and distributed message passing. LNO and FNO do not use sampling; they are faster since they avoid the graph library.

| Method | Scheme | Graph-based | Kernel network |
|---|---|---|---|
| GNO | Nyström approximation | Yes | Yes |
| LNO | Low-rank approximation | No | Yes |
| MGNO | Multi-level graphs on GNO | Yes | Yes |
| FNO | Convolution theorem; Fourier features | No | No |

Table 6: Ingenuity.

7.4.2 Expressiveness

We measure expressiveness by the training and testing error of the method. The full ๐‘‚(๐ฝ²) integration always gives the best results, but it is usually too expensive. As shown in the experiments of Subsections 7.2.1 and 7.2.2, GNO usually has good accuracy, but its performance suffers from sampling. LNO works best on the 1-d problem (Burgers' equation); it has difficulty on the 2-d problem because it does not employ sampling to speed up evaluation. MGNO has a multi-level structure, which gives it the benefits of the first two. Finally, FNO has the best overall performance. It is also the only method that can capture the challenging Navier-Stokes equation.

7.4.3 Complexity

The complexities of the four methods are listed in Table 7. GNO and MGNO use sampling, so their complexity depends on the number of sampled nodes ๐ฝ′; when using all the nodes, they remain quadratic. LNO has the lowest complexity, ๐‘‚(๐ฝ). FNO, when using the fast Fourier transform, has complexity ๐‘‚(๐ฝ log ๐ฝ).

In practice, FNO is faster than the other three methods because it does not have the kernel network ๐œ…. MGNO is relatively slower because of its multi-level graph structure.

| Method | Complexity | Time per epoch in training |
|---|---|---|
| GNO | O(J′² r²) | 4s |
| LNO | O(J) | 20s |
| MGNO | ∑ₗ O(Jₗ² rₗ²) ∼ O(J) | 8s |
| FNO | O(J log J) | 4s |

Table 7: Complexity (rounded up to the second on a single Nvidia V100 GPU).

7.4.4 Refinability

Refinability measures the number of parameters used in the framework. Table 8 lists the relative error on Darcy flow for different numbers of parameters. Because GNO, LNO, and MGNO have kernel networks, the slope of their error curves is flat: they can work with a very small number of parameters. On the other hand, FNO does not have the sub-network; it needs an order of magnitude more parameters to obtain an acceptable error rate.

| Number of parameters | 10³ | 10⁴ | 10⁵ | 10⁶ |
|---|---|---|---|---|
| GNO | 0.075 | 0.065 | 0.060 | 0.035 |
| LNO | 0.080 | 0.070 | 0.060 | 0.040 |
| MGNO | 0.070 | 0.050 | 0.040 | 0.030 |
| FNO | 0.200 | 0.035 | 0.020 | 0.015 |

Table 8: Refinability.

The relative error on Darcy flow with respect to different numbers of parameters. The errors above are approximate values rounded to 0.005; they are the lowest test errors achieved by each model when its number of parameters |๐œƒ| is bounded by 10³, 10⁴, 10⁵, and 10⁶, respectively.

7.4.5 Robustness

We conclude with experiments investigating the robustness of the Fourier neural operator to noise. We study: a) training on clean (noiseless) data and testing with clean and noisy data; b) training on noisy data and testing with clean and noisy data. When creating noisy data we map ๐‘Ž to noisy ๐‘Ž′ as follows: at every grid point ๐‘ฅ we set

๐‘Ž′(๐‘ฅ) = ๐‘Ž(๐‘ฅ) + 0.1 ⋅ ‖๐‘Ž‖∞ ๐œ‰,

where ๐œ‰ ∼ ๐’ฉ(0, 1) is drawn i.i.d. at every grid point; this is similar to the setting adopted in Lu et al. (2021b). We also study the 1-d advection equation as an additional test case, following the setting in Lu et al. (2021b) in which the input data is a random square wave, defined by an ℝ³-valued random variable.

| Problems | Training error | Test (clean) | Test (noisy) |
|---|---|---|---|
| Burgers | 0.002 | 0.002 | 0.018 |
| Advection | 0.002 | 0.002 | 0.094 |
| Darcy | 0.006 | 0.011 | 0.012 |
| Navier-Stokes | 0.024 | 0.024 | 0.039 |
| Burgers (train with noise) | 0.011 | 0.004 | 0.011 |
| Advection (train with noise) | 0.020 | 0.010 | 0.019 |
| Darcy (train with noise) | 0.007 | 0.012 | 0.012 |
| Navier-Stokes (train with noise) | 0.026 | 0.026 | 0.025 |

Table 9: Robustness.

As shown in the top half of Table 9 and Figure 15, we observe that the Fourier neural operator is robust with respect to the (test) noise level on all four problems. In particular, on the advection problem, it has about 10% error with 10% noise. The Darcy and Navier-Stokes operators are smoothing, and the Fourier neural operator obtains lower than 10% error in all scenarios. However, the FNO is less robust on the advection equation, which is not smoothing, and on Burgers' equation which, whilst smoothing, also forms steep fronts.

A straightforward approach to enhance the robustness is to train the model with noise. As shown in the bottom half of Table 9, the Fourier neural operator has no gap between the clean data and the noisy data when trained with noise. However, noise in training may degrade the performance on clean data, as a trade-off. In general, augmenting the training data with noise leads to robustness. For example, in the auto-regressive modeling of dynamical systems, training the model with noise will reduce error accumulation in time, and thereby help the model to predict over longer time horizons (Pfaff et al., 2020). We also observed that other regularization techniques such as early stopping and weight decay improve robustness. Using a higher spatial resolution also helps.

The advection problem is a hard problem for the FNO since it has discontinuities; similar issues arise when using spectral methods for conservation laws. One can modify the architecture to address such discontinuities accordingly. For example, Wen et al. (2021) enhance the FNO by composing a CNN or UNet branch with the Fourier layer; the resulting composite model outperforms the basic FNO on multiphase flow with high contrast and sharp shocks. However the CNN and UNet take the method out of the realm of discretization-invariant methods; further work is required to design discretization-invariant image-processing tools, such as the identification of discontinuities.

Figure 15:Robustness on Advection and Burgers equations

(a) The input of the advection equation (๐‘  = 40). The orange curve is the clean input; the blue curve is the noisy input. (b) The output of the advection equation. The green curve is the ground-truth output; the orange curve is the prediction of FNO with clean input (overlapping with the ground truth); the blue curve is the prediction on the noisy input. Figures (c) and (d) are the corresponding plots for Burgers' equation (๐‘  = 1000).

8 Approximation Theory

The paper by Chen and Chen (1995) provides the first universal approximation theorem for operator approximation via neural networks, and the paper by Bhattacharya et al. (2020) provides an alternative architecture and approximation result. The analysis of Chen and Chen (1995) was recently extended in significant ways in the paper by Lanthaler et al. (2021) where, for the first time, the curse of dimensionality is addressed, and resolved, for certain specific operator learning problems, using the DeepOnet generalization Lu et al. (2019, 2021a) of Chen and Chen (1995). The paper Lanthaler et al. (2021) was generalized to study operator approximation, and the curse of dimensionality, for the FNO, in Kovachki et al. (2021).

Unlike the finite-dimensional setting, the choice of input and output spaces ๐’œ and ๐’ฐ for the mapping ๐’ข† plays a crucial role in the approximation theory due to the distinctiveness of the induced norm topologies. In this section, we prove universal approximation theorems for neural operators both with respect to the topology of uniform convergence over compact sets and with respect to the topology induced by the Bochner norm (3). We focus our attention on the Lebesgue, Sobolev, continuous, and continuously differentiable function classes as they have numerous applications in scientific computing and machine learning problems. Unlike the results of Bhattacharya et al. (2020); Kovachki et al. (2021), which rely on the Hilbertian structure of the input and output spaces, or the results of Chen and Chen (1995); Lanthaler et al. (2021), which rely on the continuous functions, our results extend to more general Banach spaces as specified by Assumptions 9 and 10 (stated in Section 8.3) and are, to the best of our knowledge, the first of their kind to apply at this level of generality.

Our method of proof proceeds by making use of the following two observations. First we establish the Banach space approximation property Grothendieck (1955) for the input and output spaces of interest, which allows for a finite dimensionalization of the problem. In particular, we prove that the Banach space approximation property holds for various function spaces defined on Lipschitz domains; the precise result we need, while unsurprising, seems to be missing from the functional analysis literature and so we provide statement and proof. Details are given in Appendix A. Second, we establish that integral kernel operators with smooth kernels can be used to approximate linear functionals of various input spaces. In doing so, we establish a Riesz-type representation theorem for the continuously differentiable functions. Such a result is not surprising and mimics the well-known result for Sobolev spaces; however in the form we need it we could not find the result in the functional analysis literature and so we provide statement and proof. Details are given in Appendix B. With these two facts, we construct a neural operator which linearly maps any input function to a finite vector then non-linearly maps this vector to a new finite vector which is then used to form the coefficients of a basis expansion for the output function. We reemphasize that our approximation theory uses the fact that neural operators can be reduced to a linear method of approximation (as pointed out in Section 5.1) and does not capture any benefits of nonlinear approximation. However these benefits are present in the architecture and are exploited by the trained networks we find in practice. Exploiting their nonlinear nature to potentially obtain improved rates of approximation remains an interesting direction for future research.

The rest of this section is organized as follows. In Subsection 8.1, we define allowable activation functions and the set of neural operators used in our theory, noting that they constitute a subclass of the neural operators defined in Section 5. In Subsection 8.3, we state and prove our main universal approximation theorems.

8.1 Neural Operators

For any $n \in \mathbb{N}$ and $\sigma : \mathbb{R} \to \mathbb{R}$, we define the set of real-valued $n$-layer neural networks on $\mathbb{R}^d$ by

$$\mathsf{N}_n(\sigma; \mathbb{R}^d) \coloneqq \big\{ f : \mathbb{R}^d \to \mathbb{R} \,:\, f(x) = W_n \sigma( \dots W_1 \sigma(W_0 x + b_0) + b_1 \dots ) + b_n, \\ W_0 \in \mathbb{R}^{d_0 \times d}, W_1 \in \mathbb{R}^{d_1 \times d_0}, \dots, W_n \in \mathbb{R}^{1 \times d_{n-1}}, \\ b_0 \in \mathbb{R}^{d_0}, b_1 \in \mathbb{R}^{d_1}, \dots, b_n \in \mathbb{R}, \ d_0, d_1, \dots, d_{n-1} \in \mathbb{N} \big\}.$$

We define the set of $\mathbb{R}^{d'}$-valued neural networks simply by stacking real-valued networks,

$$\mathsf{N}_n(\sigma; \mathbb{R}^d, \mathbb{R}^{d'}) \coloneqq \big\{ f : \mathbb{R}^d \to \mathbb{R}^{d'} \,:\, f(x) = (f_1(x), \dots, f_{d'}(x)), \ f_1, \dots, f_{d'} \in \mathsf{N}_n(\sigma; \mathbb{R}^d) \big\}.$$

We remark that we could have defined $\mathsf{N}_n(\sigma; \mathbb{R}^d, \mathbb{R}^{d'})$ by letting $W_n \in \mathbb{R}^{d' \times d_n}$ and $b_n \in \mathbb{R}^{d'}$ in the definition of $\mathsf{N}_n(\sigma; \mathbb{R}^d)$; because we allow arbitrary width, the two definitions are equivalent. However, the definition as presented is more convenient for our analysis. We also employ the preceding definition with $\mathbb{R}^d$ and $\mathbb{R}^{d'}$ replaced by spaces of matrices. For any $m \in \mathbb{N}_0$, we define the set of allowable activation functions as the continuous $\mathbb{R} \to \mathbb{R}$ maps which make neural networks dense in $C^m(\mathbb{R}^d)$ on compacta at any fixed depth,

$$\mathsf{A}_m \coloneqq \big\{ \sigma \in C(\mathbb{R}) : \exists\, n \in \mathbb{N} \text{ s.t. } \mathsf{N}_n(\sigma; \mathbb{R}^d) \text{ is dense in } C^m(K) \ \forall K \subset \mathbb{R}^d \text{ compact} \big\}.$$

It is shown in (Pinkus, 1999, Theorem 4.1) that $\{\sigma \in C^m(\mathbb{R}) : \sigma \text{ is not a polynomial}\} \subseteq \mathsf{A}_m$ with $n = 1$. Clearly $\mathsf{A}_{m+1} \subseteq \mathsf{A}_m$.
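As a concrete reading of this definition, the sketch below (our own transcription, with assumed sizes) evaluates an element of $\mathsf{N}_2(\sigma; \mathbb{R}^d)$: an affine map followed by an activation in each hidden layer, with a purely affine last layer:

```python
import numpy as np

def mlp(x, weights, biases, sigma=np.tanh):
    """f(x) = W_n sigma(... W_1 sigma(W_0 x + b_0) + b_1 ...) + b_n."""
    v = x
    for W, b in zip(weights[:-1], biases[:-1]):
        v = sigma(W @ v + b)             # hidden layers: affine then activation
    return weights[-1] @ v + biases[-1]  # last layer is affine only

rng = np.random.default_rng(0)
d, d0, d1 = 3, 5, 4                      # input dim and hidden widths (n = 2)
Ws = [rng.standard_normal((d0, d)), rng.standard_normal((d1, d0)),
      rng.standard_normal((1, d1))]
bs = [rng.standard_normal(d0), rng.standard_normal(d1), rng.standard_normal(1)]
y = mlp(rng.standard_normal(d), Ws, bs)
```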

We define the set of linearly bounded activations as

$$\mathsf{A}^{\mathrm{L}}_m \coloneqq \left\{ \sigma \in \mathsf{A}_m : \sigma \text{ is Borel measurable}, \ \sup_{x \in \mathbb{R}} \frac{|\sigma(x)|}{1 + |x|} < \infty \right\},$$

noting that any globally Lipschitz, non-polynomial, $C^m$-function is contained in $\mathsf{A}^{\mathrm{L}}_m$. Most activation functions used in practice fall within this class; for example, $\mathrm{ReLU} \in \mathsf{A}^{\mathrm{L}}_0$ and $\mathrm{ELU} \in \mathsf{A}^{\mathrm{L}}_1$, while $\tanh, \mathrm{sigmoid} \in \mathsf{A}^{\mathrm{L}}_m$ for any $m \in \mathbb{N}_0$.

For approximation in a Bochner norm, we will be interested in constructing globally bounded neural networks which can approximate the identity over compact sets, as done in (Lanthaler et al., 2021; Bhattacharya et al., 2020). This allows us to control the potential unboundedness of the support of the input measure by exploiting the fact that the probability of an input must decay to zero in unbounded regions. Following (Lanthaler et al., 2021), we introduce the forthcoming definition, which uses the notion of the diameter of a set. In particular, the diameter of any set $S \subseteq \mathbb{R}^d$ is defined as, for $|\cdot|_2$ the Euclidean norm on $\mathbb{R}^d$,

$$\mathrm{diam}_2(S) \coloneqq \sup_{x, y \in S} |x - y|_2.$$

Definition 7

We denote by $\mathsf{BA}$ the set of maps $\sigma \in \mathsf{A}_0$ such that, for any compact set $K \subset \mathbb{R}^d$, $\epsilon > 0$, and $C \geq \mathrm{diam}_2(K)$, there exists a number $n \in \mathbb{N}$ and a neural network $f \in \mathsf{N}_n(\sigma; \mathbb{R}^d, \mathbb{R}^d)$ such that

$$|f(x) - x|_2 \leq \epsilon, \quad \forall x \in K,$$
$$|f(x)|_2 \leq C, \quad \forall x \in \mathbb{R}^d.$$

It is shown in (Lanthaler et al., 2021, Lemma C.1) that $\mathrm{ReLU} \in \mathsf{A}^{\mathrm{L}}_0 \cap \mathsf{BA}$ with $n = 3$.

We will now define the specific class of neural operators for which we prove a universal approximation theorem. It is important to note that the class with which we work is a simplification of the one given in (6). In particular, the lifting and projection operators ๐’ฌ , ๐’ซ , together with the final activation function ๐œŽ ๐‘› , are set to the identity, and the local linear operators ๐‘Š 0 , โ€ฆ , ๐‘Š ๐‘› โˆ’ 1 are set to zero. In our numerical studies we have in any case typically set ๐œŽ ๐‘› to the identity. However we have found that learning the local operators ๐’ฌ , ๐’ซ and ๐‘Š 0 , โ€ฆ , ๐‘Š ๐‘› โˆ’ 1 is beneficial in practice; extending the universal approximation theorems given here to explain this benefit would be an important but non-trivial development of the analysis we present here.

Let ๐ท โŠ‚ โ„ ๐‘‘ be a domain. For any ๐œŽ โˆˆ ๐–  0 , we define the set of affine kernel integral operators by

๐–จ๐–ฎ ( ๐œŽ ; ๐ท , โ„ ๐‘‘ 1 , โ„ ๐‘‘ 2 )

{ ๐‘“ โ†ฆ โˆซ ๐ท ๐œ… ( โ‹… , ๐‘ฆ ) ๐‘“ ( ๐‘ฆ ) ๐–ฝ ๐‘ฆ + ๐‘ :

๐œ… โˆˆ ๐–ญ ๐‘› 1 โข ( ๐œŽ ; โ„ ๐‘‘ ร— โ„ ๐‘‘ , โ„ ๐‘‘ 2 ร— ๐‘‘ 1 ) ,

๐‘ โˆˆ ๐–ญ ๐‘› 2 ( ๐œŽ ; โ„ ๐‘‘ , โ„ ๐‘‘ 2 ) , ๐‘› 1 , ๐‘› 2 โˆˆ โ„• } ,

for any ๐‘‘ 1 , ๐‘‘ 2 โˆˆ โ„• . Clearly, since ๐œŽ โˆˆ ๐–  0 , any ๐‘† โˆˆ ๐–จ๐–ฎ โข ( ๐œŽ ; ๐ท , โ„ ๐‘‘ 1 , โ„ ๐‘‘ 2 ) acts as ๐‘† : ๐ฟ ๐‘ โข ( ๐ท ; โ„ ๐‘‘ 1 ) โ†’ ๐ฟ ๐‘ โข ( ๐ท ; โ„ ๐‘‘ 2 ) for any 1 โ‰ค ๐‘ โ‰ค โˆž since ๐œ… โˆˆ ๐ถ โข ( ๐ท ยฏ ร— ๐ท ยฏ ; โ„ ๐‘‘ 2 ร— ๐‘‘ 1 ) and ๐‘ โˆˆ ๐ถ โข ( ๐ท ยฏ ; โ„ ๐‘‘ 2 ) . For any ๐‘› โˆˆ โ„• โ‰ฅ 2 , ๐‘‘ ๐‘Ž , ๐‘‘ ๐‘ข โˆˆ โ„• , ๐ท โŠ‚ โ„ ๐‘‘ , ๐ท โ€ฒ โŠ‚ โ„ ๐‘‘ โ€ฒ domains, and ๐œŽ 1 โˆˆ ๐–  0 L , ๐œŽ 2 , ๐œŽ 3 โˆˆ ๐–  0 , we define the set of ๐‘› -layer neural operators by

๐–ญ๐–ฎ ๐‘› ( ๐œŽ 1 , ๐œŽ 2 , ๐œŽ 3 ; ๐ท , ๐ท โ€ฒ , โ„ ๐‘‘ ๐‘Ž , โ„ ๐‘‘ ๐‘ข )

{

๐‘“ โ†ฆ โˆซ ๐ท ๐œ… ๐‘› ( โ‹… , ๐‘ฆ ) ( ๐‘† ๐‘› โˆ’ 1 ๐œŽ 1 ( โ€ฆ ๐‘† 2 ๐œŽ 1 ( ๐‘† 1 ( ๐‘† 0 ๐‘“ ) ) โ€ฆ ) ) ( ๐‘ฆ ) ๐–ฝ ๐‘ฆ :

๐‘† 0 โˆˆ ๐–จ๐–ฎ โข ( ๐œŽ 2 , ๐ท ; โ„ ๐‘‘ ๐‘Ž , โ„ ๐‘‘ 1 ) , โ€ฆ โข ๐‘† ๐‘› โˆ’ 1 โˆˆ ๐–จ๐–ฎ โข ( ๐œŽ 2 , ๐ท ; โ„ ๐‘‘ ๐‘› โˆ’ 1 , โ„ ๐‘‘ ๐‘› ) ,

๐œ… ๐‘› โˆˆ ๐–ญ ๐‘™ ( ๐œŽ 3 ; โ„ ๐‘‘ โ€ฒ ร— โ„ ๐‘‘ , โ„ ๐‘‘ ๐‘ข ร— ๐‘‘ ๐‘› ) , ๐‘‘ 1 , โ€ฆ , ๐‘‘ ๐‘› , ๐‘™ โˆˆ โ„• } .

When ๐‘‘ ๐‘Ž

๐‘‘ ๐‘ข

1 , we will simply write ๐–ญ๐–ฎ ๐‘› โข ( ๐œŽ 1 , ๐œŽ 2 , ๐œŽ 3 ; ๐ท , ๐ท โ€ฒ ) . Since ๐œŽ 1 is linearly bounded, we can use a result about compositions of maps in ๐ฟ ๐‘ spaces such as (Dudley and Norvaiลกa, 2010, Theorem 7.13) to conclude that any ๐บ โˆˆ ๐–ญ๐–ฎ ๐‘› โข ( ๐œŽ 1 , ๐œŽ 2 , ๐œŽ 3 , ๐ท , ๐ท โ€ฒ ; โ„ ๐‘‘ ๐‘Ž , โ„ ๐‘‘ ๐‘ข ) acts as ๐บ : ๐ฟ ๐‘ โข ( ๐ท ; โ„ ๐‘‘ ๐‘Ž ) โ†’ ๐ฟ ๐‘ โข ( ๐ท โ€ฒ ; โ„ ๐‘‘ ๐‘ข ) . Note that it is only in the last layer that we transition from functions defined over domain ๐ท to functions defined over domain ๐ท โ€ฒ .

When the input space of an operator of interest is ๐ถ ๐‘š โข ( ๐ท ยฏ ) , for ๐‘š โˆˆ โ„• , we will need to take in derivatives explicitly as they cannot be learned using kernel integration as employed in the current construction given in Lemma 30; note that this is not the case for ๐‘Š ๐‘š , ๐‘ โข ( ๐ท ) as shown in Lemma 28. We will therefore define the set of ๐‘š -th order neural operators by

๐–ญ๐–ฎ ๐‘› ๐‘š ( ๐œŽ 1 , ๐œŽ 2 , ๐œŽ 3 ; ๐ท , ๐ท โ€ฒ , โ„ ๐‘‘ ๐‘Ž , โ„ ๐‘‘ ๐‘ข )

{

( โˆ‚ ๐›ผ 1 ๐‘“ , โ€ฆ , โˆ‚ ๐›ผ ๐ฝ ๐‘š ๐‘“ ) โ†ฆ ๐บ โข ( โˆ‚ ๐›ผ 1 ๐‘“ , โ€ฆ , โˆ‚ ๐›ผ ๐ฝ ๐‘š ๐‘“ ) :

๐บ โˆˆ ๐–ญ๐–ฎ ๐‘› ( ๐œŽ 1 , ๐œŽ 2 , ๐œŽ 3 ; ๐ท , ๐ท โ€ฒ , โ„ ๐ฝ ๐‘š โข ๐‘‘ ๐‘Ž , โ„ ๐‘‘ ๐‘ข ) }

where ๐›ผ 1 , โ€ฆ , ๐›ผ ๐ฝ ๐‘š โˆˆ โ„• ๐‘‘ is an enumeration of the set { ๐›ผ โˆˆ โ„• ๐‘‘ : 0 โ‰ค | ๐›ผ | 1 โ‰ค ๐‘š } . Since we only use the ๐‘š -th order operators when dealing with spaces of continuous functions, each element of ๐–ญ๐–ฎ ๐‘› ๐‘š can be thought of as a mapping from a product space of spaces of the form ๐ถ ๐‘š โˆ’ | ๐›ผ ๐‘— | โข ( ๐ท ยฏ ; โ„ ๐‘‘ ๐‘Ž ) for all ๐‘— โˆˆ { 1 , โ€ฆ , ๐ฝ ๐‘š } to an appropriate Banach space of interest.

8.2 Discretization Invariance

Given the construction above and the definition of discretization invariance in Definition 4, we now prove that neural operators are discretization-invariant deep learning models.

Theorem 8

Let ๐ท โŠ‚ โ„ ๐‘‘ and ๐ท โ€ฒ โŠ‚ โ„ ๐‘‘ โ€ฒ be two domains for some ๐‘‘ , ๐‘‘ โ€ฒ โˆˆ โ„• . Let ๐’œ and ๐’ฐ be real-valued Banach function spaces on ๐ท and ๐ท โ€ฒ respectively. Suppose that ๐’œ and ๐’ฐ can be continuously embedded in ๐ถ โข ( ๐ท ยฏ ) and ๐ถ โข ( ๐ท โ€ฒ ยฏ ) respectively and that ๐œŽ 1 , ๐œŽ 2 , ๐œŽ 3 โˆˆ ๐ถ โข ( โ„ ) . Then, for any ๐‘› โˆˆ โ„• , the set of neural operators ๐–ญ๐–ฎ ๐‘› โข ( ๐œŽ 1 , ๐œŽ 2 , ๐œŽ 3 ; ๐ท , ๐ท โ€ฒ ) whose elements are viewed as maps ๐’œ โ†’ ๐’ฐ is discretization-invariant.

The proof, provided in Appendix E, constructs a sequence of finite-dimensional maps which approximate the neural operator by Riemann sums, and shows uniform convergence of the error over compact sets of $\mathcal{A}$.

8.3 Approximation Theorems

Figure 16: A schematic overview of the maps $F : \mathcal{A} \to \mathbb{R}^J$, $\psi : \mathbb{R}^J \to \mathbb{R}^{J'}$, and $G : \mathbb{R}^{J'} \to \mathcal{U}$ used to approximate $\mathcal{G}^\dagger$.

Let ๐’œ and ๐’ฐ be Banach function spaces on the domains ๐ท โŠ‚ โ„ ๐‘‘ and ๐ท โ€ฒ โŠ‚ โ„ ๐‘‘ โ€ฒ respectively. We will work in the setting where functions in ๐’œ or ๐’ฐ are real-valued, but note that all results generalize in a straightforward fashion to the vector-valued setting. We are interested in the approximation of nonlinear operators ๐’ข โ€  : ๐’œ โ†’ ๐’ฐ by neural operators. We will make the following assumptions on the spaces ๐’œ and ๐’ฐ .

Assumption 9

Let $D \subset \mathbb{R}^d$ be a Lipschitz domain for some $d \in \mathbb{N}$. One of the following holds:

1. $\mathcal{A} = L^{p_1}(D)$ for some $1 \le p_1 < \infty$,
2. $\mathcal{A} = W^{m_1, p_1}(D)$ for some $1 \le p_1 < \infty$ and $m_1 \in \mathbb{N}$,
3. $\mathcal{A} = C(\bar{D})$.

Assumption 10

Let $D' \subset \mathbb{R}^{d'}$ be a Lipschitz domain for some $d' \in \mathbb{N}$. One of the following holds:

1. $\mathcal{U} = L^{p_2}(D')$ for some $1 \le p_2 < \infty$, and $m_2 = 0$,
2. $\mathcal{U} = W^{m_2, p_2}(D')$ for some $1 \le p_2 < \infty$ and $m_2 \in \mathbb{N}$,
3. $\mathcal{U} = C^{m_2}(\bar{D'})$ and $m_2 \in \mathbb{N}_0$.

We first show that neural operators are dense in the continuous operators $\mathcal{G}^\dagger : \mathcal{A} \to \mathcal{U}$ in the topology of uniform convergence on compacta. The proof proceeds by making three main approximations which are schematically shown in Figure 16. First, inputs are mapped to a finite-dimensional representation through a set of appropriate linear functionals on $\mathcal{A}$, denoted by $F : \mathcal{A} \to \mathbb{R}^J$. We show in Lemmas 21 and 23 that, when $\mathcal{A}$ satisfies Assumption 9, elements of $\mathcal{A}^*$ can be approximated by integration against smooth functions. This generalizes the idea from (Chen and Chen, 1995) where functionals on $C(\bar{D})$ are approximated by a weighted sum of Dirac measures. We then show in Lemma 25 that, by lifting the dimension, this representation can be approximated by a single element of $\mathsf{IO}$. Second, the representation is non-linearly mapped to a new representation by a continuous function $\psi : \mathbb{R}^J \to \mathbb{R}^{J'}$ which finite-dimensionalizes the action of $\mathcal{G}^\dagger$. We show, in Lemma 28, that this map can be approximated by a neural operator by reducing the architecture to that of a standard neural network. Third, the new representation is used as the coefficients of an expansion onto representers of $\mathcal{U}$, the map denoted $G : \mathbb{R}^{J'} \to \mathcal{U}$, which we show can be approximated by a single $\mathsf{IO}$ layer in Lemma 27 using density results for continuous functions. The structure of the overall approximation is similar to (Bhattacharya et al., 2020) but generalizes the ideas from working on Hilbert spaces to the spaces in Assumptions 9 and 10. Statements and proofs of the lemmas used in the theorems are given in the appendices.
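The encode-map-decode structure can be made tangible with a toy numpy sketch. Every component below is a hypothetical stand-in, not the construction of the lemmas: $F$ integrates against smooth test functions, $\psi$ is a continuous map on the coefficients (here chosen so that the target operator is the identity), and $G$ expands onto representers of the output space:

```python
import numpy as np

# Toy instance of the three-map structure F, psi, G on D = D' = [0, 1].
x = np.linspace(0, 1, 200, endpoint=False)
h = 1.0 / len(x)

J = 4  # dimension of the finite-dimensional representation (here J = J')
test_fns = np.array([np.cos(np.pi * k * x) for k in range(J)])  # smooth functionals on A
basis = np.array([np.cos(np.pi * k * x) for k in range(J)])     # representers of U

def F(f_vals):
    # Finite-dimensional encoding: integrate f against smooth test functions.
    return test_fns @ f_vals * h

def psi(c):
    # Continuous map finite-dimensionalizing the action of the target operator;
    # for the identity operator this just undoes the cosine normalization on [0, 1].
    return c * np.array([1.0] + [2.0] * (J - 1))

def G(c):
    # Decode: expand the coefficients onto the representers of the output space.
    return c @ basis

f = np.cos(np.pi * x) + 0.5 * np.cos(3 * np.pi * x)
f_rec = G(psi(F(f)))
print(np.max(np.abs(f_rec - f)))  # small quadrature error only
```

The theorems below show each of these three stages can itself be realized, to arbitrary accuracy, inside the neural operator architecture.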

Theorem 11

Let Assumptions 9 and 10 hold and suppose $\mathcal{G}^\dagger : \mathcal{A} \to \mathcal{U}$ is continuous. Let $\sigma_1 \in \mathsf{A}_0^L$, $\sigma_2 \in \mathsf{A}_0$, and $\sigma_3 \in \mathsf{A}_{m_2}$. Then for any compact set $K \subset \mathcal{A}$ and $0 < \epsilon \le 1$, there exists a number $N \in \mathbb{N}$ and a neural operator $\mathcal{G} \in \mathsf{NO}_N(\sigma_1, \sigma_2, \sigma_3; D, D')$ such that

$$\sup_{a \in K} \|\mathcal{G}^\dagger(a) - \mathcal{G}(a)\|_{\mathcal{U}} \le \epsilon.$$

Furthermore, if $\mathcal{U}$ is a Hilbert space, $\sigma_1 \in \mathsf{BA}$, and, for some $M > 0$, we have that $\|\mathcal{G}^\dagger(a)\|_{\mathcal{U}} \le M$ for all $a \in \mathcal{A}$, then $\mathcal{G}$ can be chosen so that

$$\|\mathcal{G}(a)\|_{\mathcal{U}} \le 4M, \quad \forall\, a \in \mathcal{A}.$$

The proof is provided in Appendix F. In the following theorem, we extend this result to the case $\mathcal{A} = C^{m_1}(\bar{D})$, showing density of the $m_1$-th order neural operators.

Theorem 12

Let ๐ท โŠ‚ โ„ ๐‘‘ be a Lipschitz domain, ๐‘š 1 โˆˆ โ„• , define ๐’œ := ๐ถ ๐‘š 1 โข ( ๐ท ยฏ ) , suppose Assumption 10 holds and assume that ๐’ข โ€  : ๐’œ โ†’ ๐’ฐ is continuous. Let ๐œŽ 1 โˆˆ ๐–  0 L , ๐œŽ 2 โˆˆ ๐–  0 , and ๐œŽ 3 โˆˆ ๐–  ๐‘š 2 . Then for any compact set ๐พ โŠ‚ ๐’œ and 0 < ๐œ– โ‰ค 1 , there exists a number ๐‘ โˆˆ โ„• and a neural operator ๐’ข โˆˆ ๐–ญ๐–ฎ ๐‘ ๐‘š 1 โข ( ๐œŽ 1 , ๐œŽ 2 , ๐œŽ 3 ; ๐ท , ๐ท โ€ฒ ) such that

sup ๐‘Ž โˆˆ ๐พ โ€– ๐’ข โ€  โข ( ๐‘Ž ) โˆ’ ๐’ข โข ( ๐‘Ž ) โ€– ๐’ฐ โ‰ค ๐œ– .

Furthermore, if ๐’ฐ is a Hilbert space and ๐œŽ 1 โˆˆ ๐–ก๐–  and, for some ๐‘€

0 , we have that โ€– ๐’ข โ€  โข ( ๐‘Ž ) โ€– ๐’ฐ โ‰ค ๐‘€ for all ๐‘Ž โˆˆ ๐’œ then ๐’ข can be chosen so that

โ€– ๐’ข โข ( ๐‘Ž ) โ€– ๐’ฐ โ‰ค 4 โข ๐‘€ , โˆ€ ๐‘Ž โˆˆ ๐’œ .

Proof  The proof follows as in Theorem 11, replacing the use of Lemma 32 with Lemma 33.  

With these results in hand, we show density of neural operators in the space $L^2_\mu(\mathcal{A}; \mathcal{U})$ where $\mu$ is a probability measure and $\mathcal{U}$ is a separable Hilbert space. The Hilbertian structure of $\mathcal{U}$ allows us to uniformly control the norm of the approximation due to the isomorphism with $\ell^2$, as shown in Theorem 11. It remains an interesting future direction to obtain similar results for Banach spaces. The proof follows the ideas in (Lanthaler et al., 2021) where similar results are obtained for DeepONet(s) on $L^2(D)$ by using Lusin's theorem to restrict the approximation to a large enough compact set and exploit the decay of $\mu$ outside it. Bhattacharya et al. (2020) also employ a similar approach but explicitly construct the necessary compact set after finite-dimensionalizing.

Theorem 13

Let ๐ท โ€ฒ โŠ‚ โ„ ๐‘‘ โ€ฒ be a Lipschitz domain, ๐‘š 2 โˆˆ โ„• 0 , and suppose Assumption 9 holds. Let ๐œ‡ be a probability measure on ๐’œ and suppose ๐’ข โ€  : ๐’œ โ†’ ๐ป ๐‘š 2 โข ( ๐ท ) is ๐œ‡ -measurable and ๐’ข โ€  โˆˆ ๐ฟ ๐œ‡ 2 โข ( ๐’œ ; ๐ป ๐‘š 2 โข ( ๐ท ) ) . Let ๐œŽ 1 โˆˆ ๐–  0 L โˆฉ ๐–ก๐–  , ๐œŽ 2 โˆˆ ๐–  0 , and ๐œŽ 3 โˆˆ ๐–  ๐‘š 2 . Then for any 0 < ๐œ– โ‰ค 1 , there exists a number ๐‘ โˆˆ โ„• and a neural operator ๐’ข โˆˆ ๐–ญ๐–ฎ ๐‘ โข ( ๐œŽ 1 , ๐œŽ 2 , ๐œŽ 3 ; ๐ท , ๐ท โ€ฒ ) such that

โ€– ๐’ข โ€  โˆ’ ๐’ข โ€– ๐ฟ ๐œ‡ 2 โข ( ๐’œ ; ๐ป ๐‘š 2 โข ( ๐ท ) ) โ‰ค ๐œ– .

The proof is provided in Appendix G. In the following we extend this result to the case $\mathcal{A} = C^{m_1}(\bar{D})$ using the $m_1$-th order neural operators.

Theorem 14

Let ๐ท โŠ‚ โ„ ๐‘‘ be a Lipschitz domain, ๐‘š 1 โˆˆ โ„• , define ๐’œ := ๐ถ ๐‘š 1 โข ( ๐ท ) and suppose Assumption 10 holds. Let ๐œ‡ be a probability measure on ๐ถ ๐‘š 1 โข ( ๐ท ) and let ๐’ข โ€  : ๐ถ ๐‘š 1 โข ( ๐ท ) โ†’ ๐’ฐ be ๐œ‡ -measurable and suppose ๐’ข โ€  โˆˆ ๐ฟ ๐œ‡ 2 โข ( ๐ถ ๐‘š 1 โข ( ๐ท ) ; ๐’ฐ ) . Let ๐œŽ 1 โˆˆ ๐–  0 L โˆฉ ๐–ก๐–  , ๐œŽ 2 โˆˆ ๐–  0 , and ๐œŽ 3 โˆˆ ๐–  ๐‘š 2 . Then for any 0 < ๐œ– โ‰ค 1 , there exists a number ๐‘ โˆˆ โ„• and a neural operator ๐’ข โˆˆ ๐–ญ๐–ฎ ๐‘ ๐‘š 1 โข ( ๐œŽ 1 , ๐œŽ 2 , ๐œŽ 3 ; ๐ท , ๐ท โ€ฒ ) such that

โ€– ๐’ข โ€  โˆ’ ๐’ข โ€– ๐ฟ ๐œ‡ 2 โข ( ๐ถ ๐‘š 1 โข ( ๐ท ) ; ๐’ฐ ) โ‰ค ๐œ– .

Proof  The proof follows as in Theorem 13 by replacing the use of Theorem 11 with Theorem 12.  

9 Literature Review

We outline the major neural network-based approaches for the solution of PDEs.

Finite-dimensional Operators.

An immediate approach to approximating $\mathcal{G}^\dagger$ is to parameterize it as a deep convolutional neural network (CNN) between the finite-dimensional Euclidean spaces on which the data is discretized, i.e. $\mathcal{G} : \mathbb{R}^K \times \Theta \to \mathbb{R}^K$ (Guo et al., 2016; Zhu and Zabaras, 2018; Adler and Oktem, 2017; Bhatnagar et al., 2019; Kutyniok et al., 2022). Khoo et al. (2021) concerns a similar setting, but with output space $\mathbb{R}$. Such approaches are, by definition, not mesh-independent and need modifications to the architecture for different resolutions and discretizations of $D$ in order to achieve consistent error (if at all possible). We demonstrate this issue numerically in Section 7. Furthermore, these approaches are limited to the discretization size and geometry of the training data, and hence it is not possible to query solutions at new points in the domain. In contrast, for our method we show in Section 7 both invariance of the error to grid resolution and the ability to transfer the solution between meshes. Ummenhofer et al. (2020) proposed a continuous convolution network for fluid problems, where off-grid points are sampled and linearly interpolated. However, the continuous convolution method is still constrained by the underlying grid, which prevents generalization to higher resolutions. Similarly, to obtain finer-resolution solutions, Jiang et al. (2020) proposed learning super-resolution with a U-Net structure for fluid mechanics problems. However, fine-resolution data is needed for training, while neural operators are capable of zero-shot super-resolution with no new data.

DeepONet.

A novel operator regression architecture, named DeepONet, was recently proposed by Lu et al. (2019, 2021a); it builds an iterated or deep structure on top of the shallow architecture proposed in Chen and Chen (1995). The architecture consists of two neural networks: a branch net applied on the input functions and a trunk net applied on the querying locations in the output space. The original work of Chen and Chen (1995) provides a universal approximation theorem, and more recently Lanthaler et al. (2021) developed an error estimate for DeepONet itself. The standard DeepONet structure is a linear approximation of the target operator, where the trunk net and branch net learn the coefficients and basis. On the other hand, the neural operator setting is heavily inspired by the advances in deep learning and is a non-linear approximation, which makes it constructively more expressive. A detailed discussion of DeepONet is provided in Section 5.1, along with a numerical comparison to DeepONet in Section 7.2.
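The branch/trunk structure described above can be sketched in a few lines of numpy. Everything here is illustrative (untrained random weights, arbitrary sizes, hypothetical helper names), not the architecture of Lu et al.; the point is the linear expansion $u(y) \approx \sum_k b_k(a)\, t_k(y)$, with coefficients from the branch net and basis functions from the trunk net:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(params, z):
    # Tiny feed-forward network with tanh hidden activations.
    for W, b in params[:-1]:
        z = np.tanh(z @ W + b)
    W, b = params[-1]
    return z @ W + b

def init(sizes):
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

m, p = 50, 16              # number of sensor points, number of basis terms
branch = init([m, 64, p])  # acts on input-function samples a(x_1), ..., a(x_m)
trunk = init([1, 64, p])   # acts on a query location y in the output domain

def deeponet(a_sensors, y):
    # u(y) ~ sum_k b_k(a) t_k(y): a linear expansion in the trunk basis.
    b = mlp(branch, a_sensors)   # (p,)   coefficients
    t = mlp(trunk, y[:, None])   # (|y|, p) basis evaluated at query points
    return t @ b

xs = np.linspace(0, 1, m)
a = np.sin(2 * np.pi * xs)       # an example input function sampled at the sensors
y = np.linspace(0, 1, 100)       # query locations need not match the sensor grid
u = deeponet(a, y)
print(u.shape)  # (100,)
```

Note that the branch net is tied to the fixed sensor locations, while the trunk net can be queried anywhere; this is the linear-approximation structure contrasted with neural operators in Section 5.1.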

Physics Informed Neural Networks (PINNs), Deep Ritz Method (DRM), and Deep Galerkin Method (DGM).

A different approach is to directly parameterize the solution $u$ as a neural network $u : \bar{D} \times \Theta \to \mathbb{R}$ (E and Yu, 2018; Raissi et al., 2019; Sirignano and Spiliopoulos, 2018; Bar and Sochen, 2019; Smith et al., 2020; Pan and Duraisamy, 2020; Beck et al., 2021). This approach is designed to model one specific instance of the PDE, not the solution operator. It is mesh-independent, but for any given new parameter coefficient function $a \in \mathcal{A}$, one would need to train a new neural network $u_a$, which is computationally costly and time-consuming. Such an approach closely resembles classical methods such as finite elements, replacing the linear span of a finite set of local basis functions with the space of neural networks.

ML-based Hybrid Solvers.

Similarly, another line of work proposes to enhance existing numerical solvers with neural networks by building hybrid models (Pathak et al., 2020; Um et al., 2020a; Greenfeld et al., 2019). These approaches suffer from the same computational issue as classical methods: one needs to solve an optimization problem for every new parameter, similarly to the PINNs setting. Furthermore, the approaches are limited to a setting in which the underlying PDE is known. Purely data-driven learning of a map between spaces of functions is not possible.

Reduced Basis Methods.

Our methodology most closely resembles the classical reduced basis method (RBM) (DeVore, 2014) or the method of Cohen and DeVore (2015). The method introduced here, along with the contemporaneous work introduced in the papers (Bhattacharya et al., 2020; Nelsen and Stuart, 2021; Opschoor et al., 2020; Schwab and Zech, 2019; O'Leary-Roseberry et al., 2020; Lu et al., 2019; Fresca and Manzoni, 2022), are, to the best of our knowledge, amongst the first practical supervised learning methods designed to learn maps between infinite-dimensional spaces. Our methodology addresses the mesh-dependent nature of the approach in the papers (Guo et al., 2016; Zhu and Zabaras, 2018; Adler and Oktem, 2017; Bhatnagar et al., 2019) by producing a single set of network parameters that can be used with different discretizations. Furthermore, it has the ability to transfer solutions between meshes and indeed between different discretization methods. Moreover, it needs only to be trained once on the equation set $\{a_j, u_j\}_{j=1}^N$. Then, obtaining a solution for a new $a \sim \mu$ only requires a forward pass of the network, alleviating the major computational issues incurred in (E and Yu, 2018; Raissi et al., 2019; Herrmann et al., 2020; Bar and Sochen, 2019) where a different network would need to be trained for each input parameter. Lastly, our method requires no knowledge of the underlying PDE: it is purely data-driven and therefore non-intrusive. Indeed the true map can be treated as a black-box, perhaps to be learned from experimental data or from the output of a costly computer simulation, not necessarily from a PDE.

Continuous Neural Networks.

Using continuity as a tool to design and interpret neural networks is gaining currency in the machine learning community, and the formulation of ResNet as a continuous time process over the depth parameter is a powerful example of this (Haber and Ruthotto, 2017; E, 2017). The concept of defining neural networks in infinite-dimensional spaces is a central problem that has long been studied (Williams, 1996; Neal, 1996; Roux and Bengio, 2007; Globerson and Livni, 2016; Guss, 2016). The general idea is to take the infinite-width limit which yields a non-parametric method and has connections to Gaussian Process Regression (Neal, 1996; Matthews et al., 2018; Garriga-Alonso et al., 2018), leading to the introduction of deep Gaussian processes (Damianou and Lawrence, 2013; Dunlop et al., 2018). Thus far, such methods have not yielded efficient numerical algorithms that can parallel the success of convolutional or recurrent neural networks for the problem of approximating mappings between finite dimensional spaces. Despite the superficial similarity with our proposed work, this body of work differs substantially from what we are proposing: in our work we are motivated by the continuous dependence of the data, in the input or output spaces, in spatial or spatio-temporal variables; in contrast the work outlined in this paragraph uses continuity in an artificial algorithmic depth or width parameter to study the network architecture when the depth or width approaches infinity, but the input and output spaces remain of fixed finite dimension.

Nystrรถm Approximation, GNNs, and Graph Neural Operators (GNOs).

The graph neural operator (Section 4.1) has an underlying Nyström approximation formulation (Nyström, 1930) which links different grids to a single set of network parameters. This perspective relates our continuum approach to Graph Neural Networks (GNNs). GNNs are a recently developed class of neural networks that apply to graph-structured data; they have been used in a variety of applications. Graph networks incorporate an array of techniques from neural network design such as graph convolution, edge convolution, attention, and graph pooling (Kipf and Welling, 2016; Hamilton et al., 2017; Gilmer et al., 2017; Veličković et al., 2017; Murphy et al., 2018). GNNs have also been applied to the modeling of physical phenomena such as molecules (Chen et al., 2019) and rigid body systems (Battaglia et al., 2018), since these problems exhibit a natural graph interpretation: the particles are the nodes and the interactions are the edges. The work (Alet et al., 2019) performs an initial study that employs graph networks on the problem of learning solutions to Poisson's equation, among other physical applications. They propose an encoder-decoder setting, constructing graphs in the latent space, and utilizing message passing between the encoder and decoder. However, their model uses a nearest-neighbor structure that is unable to capture non-local dependencies as the mesh size is increased. In contrast, we directly construct a graph in which the nodes are located on the spatial domain of the output function. Through message passing, we are then able to directly learn the kernel of the network which approximates the PDE solution. When querying a new location, we simply add a new node to our spatial graph and connect it to the existing nodes, avoiding interpolation error by leveraging the power of the Nyström extension for integral operators.

Low-rank Kernel Decomposition and Low-rank Neural Operators (LNOs).

Low-rank decomposition is a popular method used in kernel methods and Gaussian processes (Kulis et al., 2006; Bach, 2013; Lan et al., 2017; Gardner et al., 2018). We present the low-rank neural operator in Section 4.2, where we structure the kernel network as a product of two factor networks inspired by Fredholm theory. The low-rank method, while simple, is very efficient and easy to train, especially when the target operator is close to linear. Khoo and Ying (2019) proposed a related neural network with low-rank structure to approximate the inverse of differential operators. The framework of two factor networks is also similar to the trunk and branch networks used in DeepONet (Lu et al., 2019). In our work, however, the factor networks are defined on the physical domain and non-local information is accumulated through integration with respect to the Lebesgue measure. In contrast, DeepONet(s) integrate against delta measures at a set of pre-defined nodal points that are usually taken to be the grid on which the data is given. See Section 5.1 for further discussion.
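The computational benefit of the factorization can be sketched directly (the factor maps below are fixed smooth features standing in for the learned factor networks): with a rank-$r$ kernel $\kappa(x, y) = \sum_k \varphi_k(x)\psi_k(y)$, the kernel integral can be evaluated by first integrating against $\psi$ and then expanding in $\varphi$, avoiding the $J \times J$ kernel matrix entirely:

```python
import numpy as np

rng = np.random.default_rng(0)
J, r = 1000, 8  # grid size, kernel rank

x = np.linspace(0, 1, J, endpoint=False)
h = 1.0 / J

# Hypothetical factor "networks": fixed smooth feature maps of rank r.
phi = np.stack([np.cos(np.pi * k * x) for k in range(r)], axis=1)        # (J, r)
psi = np.stack([np.sin(np.pi * (k + 1) * x) for k in range(r)], axis=1)  # (J, r)

f = rng.standard_normal(J)

# Full kernel evaluation: (Kf)(x_i) = sum_j kappa(x_i, x_j) f(x_j) h  -- O(J^2).
K = phi @ psi.T
out_full = K @ f * h

# Low-rank evaluation: integrate f against psi, then expand in phi  -- O(Jr).
out_lowrank = phi @ (psi.T @ f) * h

print(np.max(np.abs(out_full - out_lowrank)))  # identical up to floating point
```

The two evaluations agree exactly; only the order of operations changes, which is what reduces the quadratic cost of the Nyström-style integration to linear in the number of discretization points.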

Multipole, Multi-resolution Methods, and Multipole Graph Neural Operators (MGNOs).

To efficiently capture long-range interaction, multi-scale methods such as the classical fast multipole methods (FMM) have been developed (Greengard and Rokhlin, 1997). Based on the assumption that long-range interactions decay quickly, FMM decomposes the kernel matrix into different ranges and hierarchically imposes low-rank structures on the long-range components (hierarchical matrices) (Bรถrm et al., 2003). This decomposition can be viewed as a specific form of the multi-resolution matrix factorization of the kernel (Kondor et al., 2014; Bรถrm et al., 2003). For example, the works of Fan et al. (2019c, b); He and Xu (2019) propose a similar multipole expansion for solving parametric PDEs on structured grids. However, the classical FMM requires nested grids as well as the explicit form of the PDEs. In Section 4.3, we propose the multipole graph neural operator (MGNO) by generalizing this idea to arbitrary graphs in the data-driven setting, so that the corresponding graph neural networks can learn discretization-invariant solution operators which are fast and can work on complex geometries.

Fourier Transform, Spectral Methods, and Fourier Neural Operators (FNOs).

The Fourier transform is frequently used in spectral methods for solving differential equations since differentiation is equivalent to multiplication in the Fourier domain. Fourier transforms have also played an important role in the development of deep learning. They are used in theoretical work, such as the proof of the neural network universal approximation theorem (Hornik et al., 1989) and related results for random feature methods (Rahimi and Recht, 2008); empirically, they have been used to speed up convolutional neural networks (Mathieu et al., 2013). Neural network architectures involving the Fourier transform or the use of sinusoidal activation functions have also been proposed and studied (Bengio et al., 2007; Mingo et al., 2004; Sitzmann et al., 2020). Recently, some spectral methods for PDEs have been extended to neural networks (Fan et al., 2019a, c; Kashinath et al., 2020). In Section 4.4, we build on these works by proposing the Fourier neural operator architecture defined directly in Fourier space with quasi-linear time complexity and state-of-the-art approximation capabilities.
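The core spectral-convolution mechanism can be sketched as follows. The weights here are random rather than learned, the sizes are illustrative, and the full architecture also adds a pointwise linear path and a nonlinearity between layers; this is only the FFT-multiply-inverse-FFT step:

```python
import numpy as np

rng = np.random.default_rng(0)
J, modes, width = 256, 12, 4  # grid size, retained Fourier modes, channels

# Hypothetical learned complex weights: one (width x width) matrix per retained mode.
R = (rng.standard_normal((modes, width, width))
     + 1j * rng.standard_normal((modes, width, width)))

def fourier_layer(v):
    """Spectral convolution: FFT -> per-mode linear transform on the lowest
    `modes` frequencies (higher modes truncated to zero) -> inverse FFT.
    Cost is O(J log J) on a uniform grid."""
    v_hat = np.fft.rfft(v, axis=0)                      # (J//2 + 1, width)
    out_hat = np.zeros_like(v_hat)
    out_hat[:modes] = np.einsum("kio,ki->ko", R, v_hat[:modes])
    return np.fft.irfft(out_hat, n=J, axis=0)           # back to (J, width), real

v = rng.standard_normal((J, width))  # feature functions sampled on a uniform grid
w = fourier_layer(v)
print(w.shape)  # (256, 4)
```

Because the weights act on a fixed number of low frequency modes rather than on grid values, the same parameters apply at any resolution of the uniform grid, which is the source of the FNO's discretization invariance and zero-shot super-resolution.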

Sources of Error.

In this paper we study the error resulting from approximating an operator (a mapping between Banach spaces) from within a class of finitely-parameterized operators. We show that the resulting error, expressed in terms of universal approximation of operators over a compact set or in terms of a resulting risk, can be driven to zero by increasing the number of parameters and refining the approximations inherent in the neural operator architecture. In practice there will be two other sources of approximation error: firstly from the discretization of the data, and secondly from the use of empirical risk minimization over a finite data set to determine the parameters. Balancing all three sources of error is key to making algorithms efficient. However, we do not study these other two sources of error in this work. Furthermore, we do not study how the number of parameters in our approximation grows as the error tolerance is refined. Generally, this growth may be super-exponential as shown in (Kovachki et al., 2021). However, for certain classes of operators and related approximation methods, it is possible to beat the curse of dimensionality; we refer the reader to the works (Lanthaler et al., 2021; Kovachki et al., 2021) for detailed analyses demonstrating this. Finally, we also emphasize that there is a potential source of error from the optimization procedure which attempts to minimize the empirical risk: it may not achieve the global minimum. Analysis of this error in the context of operator approximation has not been undertaken.

10 Conclusions

We have introduced the concept of the Neural Operator, the goal being to construct a neural network architecture adapted to the problem of mapping elements of one function space into elements of another function space. The network is composed of three steps which, in turn, (i) extract features from the input functions, (ii) iterate a recurrent neural network on feature space, defined through composition of a sigmoid function and a nonlocal operator, and (iii) apply a final mapping from feature space into the output function.

We have studied four nonlocal operators in step (ii), one based on graph kernel networks, one based on a low-rank decomposition, one based on a multi-level graph structure, and the last based on convolution in Fourier space. The designed network architectures are constructed to be mesh-free, and our numerical experiments demonstrate that they have the desired property of being able to train and generalize on different meshes. This is because the networks learn the mapping between infinite-dimensional function spaces, which can then be shared with approximations at different levels of discretization. A further advantage of the integral operator approach is that data may be incorporated on unstructured grids, using the Nyström approximation; these methods, however, are quadratic in the number of discretization points. We describe variants on this methodology, using low-rank and multiscale ideas, to reduce this complexity. On the other hand, the Fourier approach leads directly to fast methods, log-linear in the number of discretization points, provided structured grids are used. We demonstrate that our methods can achieve competitive performance with other mesh-free approaches developed in the numerical analysis community. Specifically, the Fourier neural operator achieves the best numerical performance among our experiments, potentially due to the smoothness of the solution function and the underlying uniform grids. The methods developed in the numerical analysis community are less flexible than the approach we introduce here, relying heavily on the structure of an underlying PDE mapping input to output; our method is entirely data-driven.

10.1 Future Directions

We foresee three main directions in which this work will develop: firstly, as a method to speed up scientific computing tasks which involve repeated evaluation of a mapping between spaces of functions, following the example of the Bayesian inverse problem in Section 7.3.4, or when the underlying model is unknown as in computer vision or robotics; secondly, the development of more advanced methodologies beyond the four approximation schemes presented in Section 4 that are more efficient or better in specific situations; and thirdly, the development of an underpinning theory which captures the expressive power and approximation error properties of the proposed neural network, following Section 8, and quantifies the computational complexity required to achieve a given error.

10.1.1 New Applications

The proposed neural operator is a black-box surrogate model for function-to-function mappings. It naturally fits into solving PDEs for physics and engineering problems. In the paper we mainly studied three partial differential equations: Darcy flow, Burgers' equation, and the Navier-Stokes equation, which cover a broad range of scenarios. Due to its black-box structure, the neural operator is easily applied to other problems. We foresee applications to more challenging turbulent flows, such as those arising in subgrid models within climate GCMs, to high-contrast media in geological models generalizing the Darcy model, and to general physics simulation for games and visual effects. The operator setting leads to an efficient and accurate representation, and the resolution-invariant properties make it possible to train on a smaller-resolution dataset and evaluate on arbitrarily large resolutions.

The operator learning setting is not restricted to scientific computing. For example, in computer vision, images can naturally be viewed as real-valued functions on 2D domains and videos simply add a temporal structure. Our approach is therefore a natural choice for problems in computer vision where invariance to discretization is crucial. We leave this as an interesting future direction.

10.1.2 New Methodologies

Despite their excellent performance, there is still room for improvement upon the current methodologies. For example, the full $O(J^2)$ integration method still outperforms the FNO by about 40%, albeit at greater cost. It is of potential interest to develop more advanced integration techniques or approximation schemes that follow the neural operator framework. For example, one can use adaptive graphs or probability estimation in the Nyström approximation. It is also possible to use bases other than the Fourier basis, such as the PCA basis and the Chebyshev basis.

Another direction for new methodologies is to combine the neural operator with other settings. The current problem is set as a supervised learning problem. Instead, one can combine the neural operator with solvers (Pathak et al., 2020; Um et al., 2020b), augmenting and correcting the solvers to obtain faster and more accurate approximations. Similarly, one can combine operator learning with physics constraints (Wang et al., 2021; Li et al., 2021).

10.1.3 Theory

In this work, we develop a universal approximation theory (Section 8) for neural operators. As in the work of Lu et al. (2019) studying universal approximation for DeepONet, we use linear approximation techniques. The power of non-linear approximation (DeVore, 1998), which is likely intrinsic to the success of neural operators in some settings, is still less studied, as discussed in Section 5.1; we note that DeepONet is intrinsically limited by linear approximation properties. For functions between Euclidean spaces, it is well known that, by combining two layers of linear functions with one layer of non-linear activation, a neural network can approximate arbitrary continuous functions, and that deep neural networks can be exponentially more expressive compared to shallow networks (Poole et al., 2016). However, issues are less clear when it comes to the choice of architecture and the scaling of the number of parameters within neural operators between Banach spaces. The approximation theory of operators is much more complex and challenging compared to that of functions over Euclidean spaces. It is important to study the class of neural operators with respect to their architecture: what spaces the true solution operators lie in, and which classes of PDEs the neural operators approximate efficiently. We leave these as exciting, but open, research directions.

Acknowledgements

Z. Li gratefully acknowledges the financial support from the Kortschak Scholars, PIMCO Fellows, and Amazon AI4Science Fellows programs. A. Anandkumar is supported in part by a Bren endowed chair. K. Bhattacharya, N. B. Kovachki, B. Liu and A. M. Stuart gratefully acknowledge the financial support of the Army Research Laboratory through the Cooperative Agreement Number W911NF-12-0022. Research was sponsored by the Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-12-2-0022. AMS is also supported by NSF (award DMS-1818977). Part of this research was developed while K. Azizzadenesheli was at Purdue University. The authors are grateful to Siddhartha Mishra for his valuable feedback on this work.

The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.

The computations presented here were conducted on the Resnick High Performance Cluster at the California Institute of Technology.

References

- J. Aaronson. An Introduction to Infinite Ergodic Theory. Mathematical Surveys and Monographs. American Mathematical Society, 1997. ISBN 9780821804940.
- R. A. Adams and J. J. Fournier. Sobolev Spaces. Elsevier Science, 2003.
- Jonas Adler and Ozan Oktem. Solving ill-posed inverse problems using iterative deep neural networks. Inverse Problems, Nov 2017. doi: 10.1088/1361-6420/aa9581. URL https://doi.org/10.1088%2F1361-6420%2Faa9581.
- Fernando Albiac and Nigel J. Kalton. Topics in Banach Space Theory. Graduate Texts in Mathematics. Springer, 1st edition, 2006.
- Ferran Alet, Adarsh Keshav Jeewajee, Maria Bauza Villalonga, Alberto Rodriguez, Tomas Lozano-Perez, and Leslie Kaelbling. Graph element networks: adaptive, structured computation and memory. In 36th International Conference on Machine Learning. PMLR, 2019. URL http://proceedings.mlr.press/v97/alet19a.html.
- Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
- Francis Bach. Sharp analysis of low-rank kernel matrix approximations. In Conference on Learning Theory, pages 185–209, 2013.
- Leah Bar and Nir Sochen. Unsupervised deep learning algorithm for PDE-based forward and inverse problems. arXiv preprint arXiv:1904.05417, 2019.
- Peter W. Battaglia, Jessica B. Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, et al. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261, 2018.
- Christian Beck, Sebastian Becker, Philipp Grohs, Nor Jaafari, and Arnulf Jentzen. Solving the Kolmogorov PDE by means of deep learning. Journal of Scientific Computing, 88(3), 2021.
- Serge Belongie, Charless Fowlkes, Fan Chung, and Jitendra Malik. Spectral partitioning with indefinite kernels using the Nyström extension. In European Conference on Computer Vision. Springer, 2002.
- Yoshua Bengio, Yann LeCun, et al. Scaling learning algorithms towards AI. Large-Scale Kernel Machines, 34(5):1–41, 2007.
- Saakaar Bhatnagar, Yaser Afshar, Shaowu Pan, Karthik Duraisamy, and Shailendra Kaushik. Prediction of aerodynamic flow fields using convolutional neural networks. Computational Mechanics, pages 1–21, 2019.
- Kaushik Bhattacharya, Bamdad Hosseini, Nikola B. Kovachki, and Andrew M. Stuart. Model reduction and neural networks for parametric PDEs. arXiv preprint arXiv:2005.03180, 2020.
- V. I. Bogachev. Measure Theory, volume 2. Springer-Verlag Berlin Heidelberg, 2007.
- Andrea Bonito, Albert Cohen, Ronald DeVore, Diane Guignard, Peter Jantsch, and Guergana Petrova. Nonlinear methods for model reduction. arXiv preprint arXiv:2005.02565, 2020.
- Steffen Börm, Lars Grasedyck, and Wolfgang Hackbusch. Hierarchical matrices. Lecture Notes, 21:2003, 2003.
- George E. P. Box. Science and statistics. Journal of the American Statistical Association, 71(356):791–799, 1976.
- John P. Boyd. Chebyshev and Fourier Spectral Methods. Courier Corporation, 2001.
- Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020.
- Alexander Brudnyi and Yuri Brudnyi. Methods of Geometric Analysis in Extension and Trace Problems, volume 1. Birkhäuser Basel, 2012.
- Oscar P. Bruno, Youngae Han, and Matthew M. Pohlman. Accurate, high-order representation of complex three-dimensional surfaces via Fourier continuation analysis. Journal of Computational Physics, 227(2):1094–1125, 2007.
- Gary J. Chandler and Rich R. Kerswell. Invariant recurrent solutions embedded in a turbulent two-dimensional Kolmogorov flow. Journal of Fluid Mechanics, 722:554–595, 2013.
- Chi Chen, Weike Ye, Yunxing Zuo, Chen Zheng, and Shyue Ping Ong. Graph networks as a universal machine learning framework for molecules and crystals. Chemistry of Materials, 31(9):3564–3572, 2019.
- Tianping Chen and Hong Chen. Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks, 6(4):911–917, 1995.
- Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, et al. Rethinking attention with performers. arXiv preprint arXiv:2009.14794, 2020.
- Z. Ciesielski and J. Domsta. Construction of an orthonormal basis in $C^m(I^d)$ and $W^m_p(I^d)$. Studia Mathematica, 41:211–224, 1972.
- Albert Cohen and Ronald DeVore. Approximation of high-dimensional parametric PDEs. Acta Numerica, 2015. doi: 10.1017/S0962492915000033.
- Albert Cohen, Ronald DeVore, Guergana Petrova, and Przemyslaw Wojtaszczyk. Optimal stable nonlinear approximation. arXiv preprint arXiv:2009.09907, 2020.
- J. B. Conway. A Course in Functional Analysis. Springer-Verlag New York, 2007.
- S. L. Cotter, G. O. Roberts, A. M. Stuart, and D. White. MCMC methods for functions: modifying old algorithms to make them faster. Statistical Science, 28(3):424–446, Aug 2013. ISSN 0883-4237. doi: 10.1214/13-sts421. URL http://dx.doi.org/10.1214/13-STS421.
- Simon L. Cotter, Massoumeh Dashti, James Cooper Robinson, and Andrew M. Stuart. Bayesian inverse problems for functions and applications to fluid mechanics. Inverse Problems, 25(11):115008, 2009.
- Andreas Damianou and Neil Lawrence. Deep Gaussian processes. In Artificial Intelligence and Statistics, pages 207–215, 2013.
- Maarten De Hoop, Daniel Zhengyu Huang, Elizabeth Qian, and Andrew M. Stuart. The cost-accuracy trade-off in operator learning with neural networks. Journal of Machine Learning, to appear; arXiv preprint arXiv:2203.13181, 2022.
- Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
- Ronald A. DeVore. Nonlinear approximation. Acta Numerica, 7:51–150, 1998.
- Ronald A. DeVore. Chapter 3: The theoretical foundation of reduced basis methods. 2014. doi: 10.1137/1.9781611974829.ch3.
- Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
- R. Dudley and Rimas Norvaisa. Concrete Functional Calculus, volume 149. 01 2011. ISBN 978-1-4419-6949-1.
- R. M. Dudley and R. Norvaiša. Concrete Functional Calculus. Springer Monographs in Mathematics. Springer New York, 2010.
- J. Dugundji. An extension of Tietze's theorem. Pacific Journal of Mathematics, 1(3):353–367, 1951.
- Matthew M. Dunlop, Mark A. Girolami, Andrew M. Stuart, and Aretha L. Teckentrup. How deep are deep Gaussian processes? The Journal of Machine Learning Research, 19(1):2100–2145, 2018.
- Weinan E. A proposal on machine learning via dynamical systems. Communications in Mathematics and Statistics, 5(1):1–11, 2017.
- Weinan E. Principles of Multiscale Modeling. Cambridge University Press, Cambridge, 2011.
- Weinan E and Bing Yu. The deep Ritz method: a deep learning-based numerical algorithm for solving variational problems. Communications in Mathematics and Statistics, 3 2018. ISSN 2194-6701. doi: 10.1007/s40304-018-0127-z.
- Yuwei Fan, Cindy Orozco Bohorquez, and Lexing Ying. BCR-Net: a neural network based on the nonstandard wavelet form. Journal of Computational Physics, 384:1–15, 2019a.
- Yuwei Fan, Jordi Feliu-Faba, Lin Lin, Lexing Ying, and Leonardo Zepeda-Núñez. A multiscale neural network based on hierarchical nested bases. Research in the Mathematical Sciences, 6(2):21, 2019b.
- Yuwei Fan, Lin Lin, Lexing Ying, and Leonardo Zepeda-Núñez. A multiscale neural network based on hierarchical matrices. Multiscale Modeling & Simulation, 17(4):1189–1213, 2019c.
- Charles Fefferman. $C^m$ extension by linear operators. Annals of Mathematics, 166:779–835, 2007.
- Stefania Fresca and Andrea Manzoni. POD-DL-ROM: enhancing deep learning-based reduced order models for nonlinear parametrized PDEs by proper orthogonal decomposition. Computer Methods in Applied Mechanics and Engineering, 388:114181, 2022.
- Jacob R. Gardner, Geoff Pleiss, Ruihan Wu, Kilian Q. Weinberger, and Andrew Gordon Wilson. Product kernel interpolation for scalable Gaussian processes. arXiv preprint arXiv:1802.08903, 2018.
- Adrià Garriga-Alonso, Carl Edward Rasmussen, and Laurence Aitchison. Deep convolutional networks as shallow Gaussian processes. arXiv e-prints, art. arXiv:1808.05587, Aug 2018.
- Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning, 2017.
- Amir Globerson and Roi Livni. Learning infinite-layer networks: beyond the kernel trick. CoRR, abs/1606.05316, 2016. URL http://arxiv.org/abs/1606.05316.
- Daniel Greenfeld, Meirav Galun, Ronen Basri, Irad Yavneh, and Ron Kimmel. Learning to optimize multigrid PDE solvers. In International Conference on Machine Learning, pages 2415–2423. PMLR, 2019.
- Leslie Greengard and Vladimir Rokhlin. A new version of the fast multipole method for the Laplace equation in three dimensions. Acta Numerica, 6:229–269, 1997.
- A. Grothendieck. Produits tensoriels topologiques et espaces nucléaires, volume 16. American Mathematical Society, Providence, 1955.
- John Guibas, Morteza Mardani, Zongyi Li, Andrew Tao, Anima Anandkumar, and Bryan Catanzaro. Adaptive Fourier neural operators: efficient token mixers for transformers. arXiv preprint arXiv:2111.13587, 2021.
- Xiaoxiao Guo, Wei Li, and Francesco Iorio. Convolutional neural networks for steady flow approximation. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016.
- Morton E. Gurtin. An Introduction to Continuum Mechanics. Academic Press, 1982.
- William H. Guss. Deep function machines: generalized neural networks for topological layer expression. arXiv e-prints, art. arXiv:1612.04799, Dec 2016.
- Eldad Haber and Lars Ruthotto. Stable architectures for deep neural networks. Inverse Problems, 34(1):014004, 2017.
- Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems, pages 1024–1034, 2017.
- Juncai He and Jinchao Xu. MgNet: a unified framework of multigrid and convolutional neural network. Science China Mathematics, 62(7):1331–1354, 2019.
- Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
- L. Herrmann, Ch. Schwab, and J. Zech. Deep ReLU neural network expression rates for data-to-QoI maps in Bayesian PDE inversion. 2020.
- Kurt Hornik, Maxwell Stinchcombe, Halbert White, et al. Multilayer feedforward networks are universal approximators. Neural Networks, 2(5):359–366, 1989.
- Chiyu Max Jiang, Soheil Esmaeilzadeh, Kamyar Azizzadenesheli, Karthik Kashinath, Mustafa Mustafa, Hamdi A. Tchelepi, Philip Marcus, Anima Anandkumar, et al. MeshfreeFlowNet: a physics-constrained deep continuous space-time super-resolution framework. arXiv preprint arXiv:2005.01463, 2020.
- Claes Johnson. Numerical Solution of Partial Differential Equations by the Finite Element Method. Courier Corporation, 2012.
- Karthik Kashinath, Philip Marcus, et al. Enforcing physical constraints in CNNs through differentiable PDE layer. In ICLR 2020 Workshop on Integration of Deep Neural Models and Differential Equations, 2020.
- Yuehaw Khoo and Lexing Ying. SwitchNet: a neural network model for forward and inverse scattering problems. SIAM Journal on Scientific Computing, 41(5):A3182–A3201, 2019.
- Yuehaw Khoo, Jianfeng Lu, and Lexing Ying. Solving parametric PDE problems with artificial neural networks. European Journal of Applied Mathematics, 32(3):421–435, 2021.
- Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.
- Risi Kondor, Nedelina Teneva, and Vikas Garg. Multiresolution matrix factorization. In International Conference on Machine Learning, pages 1620–1628, 2014.
- Nikola Kovachki, Samuel Lanthaler, and Siddhartha Mishra. On universal approximation and error bounds for Fourier neural operators. arXiv preprint arXiv:2107.07562, 2021.
- Robert H. Kraichnan. Inertial ranges in two-dimensional turbulence. The Physics of Fluids, 10(7):1417–1423, 1967.
- Brian Kulis, Mátyás Sustik, and Inderjit Dhillon. Learning low-rank kernel matrices. In Proceedings of the 23rd International Conference on Machine Learning, pages 505–512, 2006.
- Gitta Kutyniok, Philipp Petersen, Mones Raslan, and Reinhold Schneider. A theoretical analysis of deep neural networks and parametric PDEs. Constructive Approximation, 55(1):73–125, 2022.
- Liang Lan, Kai Zhang, Hancheng Ge, Wei Cheng, Jun Liu, Andreas Rauber, Xiao-Li Li, Jun Wang, and Hongyuan Zha. Low-rank decomposition meets kernel learning: a generalized Nyström method. Artificial Intelligence, 250:1–15, 2017.
- Samuel Lanthaler, Siddhartha Mishra, and George Em Karniadakis. Error estimates for DeepONets: a deep learning framework in infinite dimensions. arXiv preprint arXiv:2102.09618, 2021.
- G. Leoni. A First Course in Sobolev Spaces. Graduate Studies in Mathematics. American Mathematical Society, 2009.
- Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Fourier neural operator for parametric partial differential equations, 2020a.
- Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Multipole graph neural operator for parametric partial differential equations, 2020b.
- Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Neural operator: graph kernel network for partial differential equations. arXiv preprint arXiv:2003.03485, 2020c.
- Zongyi Li, Hongkai Zheng, Nikola Kovachki, David Jin, Haoxuan Chen, Burigede Liu, Kamyar Azizzadenesheli, and Anima Anandkumar. Physics-informed neural operator for learning partial differential equations. arXiv preprint arXiv:2111.03794, 2021.
- Lu Lu, Pengzhan Jin, and George Em Karniadakis. DeepONet: learning nonlinear operators for identifying differential equations based on the universal approximation theorem of operators. arXiv preprint arXiv:1910.03193, 2019.
- Lu Lu, Pengzhan Jin, Guofei Pang, Zhongqiang Zhang, and George Em Karniadakis. Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators. Nature Machine Intelligence, 3(3):218–229, 2021a.
- Lu Lu, Xuhui Meng, Shengze Cai, Zhiping Mao, Somdatta Goswami, Zhongqiang Zhang, and George Em Karniadakis. A comprehensive and fair comparison of two neural operators (with practical extensions) based on FAIR data. arXiv preprint arXiv:2111.05512, 2021b.
- Michael Mathieu, Mikael Henaff, and Yann LeCun. Fast training of convolutional networks through FFTs, 2013.
- Alexander G. de G. Matthews, Mark Rowland, Jiri Hron, Richard E. Turner, and Zoubin Ghahramani. Gaussian process behaviour in wide deep neural networks. Apr 2018.
- Luis Mingo, Levon Aslanyan, Juan Castellanos, Miguel Diaz, and Vladimir Riazanov. Fourier neural networks: an approach with sinusoidal activation functions. 2004.
- Ryan L. Murphy, Balasubramaniam Srinivasan, Vinayak Rao, and Bruno Ribeiro. Janossy pooling: learning deep permutation-invariant functions for variable-size inputs. arXiv preprint arXiv:1811.01900, 2018.
- Radford M. Neal. Bayesian Learning for Neural Networks. Springer-Verlag, 1996. ISBN 0387947248.
- Nicholas H. Nelsen and Andrew M. Stuart. The random feature model for input-output maps between Banach spaces. SIAM Journal on Scientific Computing, 43(5):A3212–A3243, 2021.
- Evert J. Nyström. Über die praktische Auflösung von Integralgleichungen mit Anwendungen auf Randwertaufgaben. Acta Mathematica, 1930.
- Thomas O'Leary-Roseberry, Umberto Villa, Peng Chen, and Omar Ghattas. Derivative-informed projected neural networks for high-dimensional parametric maps governed by PDEs. arXiv preprint arXiv:2011.15110, 2020.
- Joost A. A. Opschoor, Christoph Schwab, and Jakob Zech. Deep learning in high dimension: ReLU network expression rates for Bayesian PDE inversion. SAM Research Report, 2020-47, 2020.
- Shaowu Pan and Karthik Duraisamy. Physics-informed probabilistic learning of linear embeddings of nonlinear dynamics with guaranteed stability. SIAM Journal on Applied Dynamical Systems, 19(1):480–509, 2020.
- Jaideep Pathak, Mustafa Mustafa, Karthik Kashinath, Emmanuel Motheau, Thorsten Kurth, and Marcus Day. Using machine learning to augment coarse-grid computational fluid dynamics simulations, 2020.
- Aleksander Pełczyński and Michał Wojciechowski. Contribution to the isomorphic classification of Sobolev spaces $L^k_p(\Omega)$. Recent Progress in Functional Analysis, 189:133–142, 2001.
- Tobias Pfaff, Meire Fortunato, Alvaro Sanchez-Gonzalez, and Peter W. Battaglia. Learning mesh-based simulation with graph networks, 2020.
- A. Pinkus. n-Widths in Approximation Theory. Springer-Verlag Berlin Heidelberg, 1985.
- Allan Pinkus. Approximation theory of the MLP model in neural networks. Acta Numerica, 8:143–195, 1999.
- Ben Poole, Subhaneil Lahiri, Maithra Raghu, Jascha Sohl-Dickstein, and Surya Ganguli. Exponential expressivity in deep neural networks through transient chaos. Advances in Neural Information Processing Systems, 29:3360–3368, 2016.
- Joaquin Quiñonero Candela and Carl Edward Rasmussen. A unifying view of sparse approximate Gaussian process regression. Journal of Machine Learning Research, 6:1939–1959, 2005.
- Ali Rahimi and Benjamin Recht. Uniform approximation of functions with random bases. In 2008 46th Annual Allerton Conference on Communication, Control, and Computing, pages 555–561. IEEE, 2008.
- Maziar Raissi, Paris Perdikaris, and George E. Karniadakis. Physics-informed neural networks: a deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378:686–707, 2019.
- Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 234–241. Springer, 2015.
- Nicolas Le Roux and Yoshua Bengio. Continuous neural networks. In Marina Meila and Xiaotong Shen, editors, Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, 2007.
- Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61–80, 2008.
- Christoph Schwab and Jakob Zech. Deep learning in high dimension: neural network expression rates for generalized polynomial chaos expansions in UQ. Analysis and Applications, 17(01):19–55, 2019.
- Justin Sirignano and Konstantinos Spiliopoulos. DGM: a deep learning algorithm for solving partial differential equations. Journal of Computational Physics, 375:1339–1364, 2018.
- Vincent Sitzmann, Julien N. P. Martel, Alexander W. Bergman, David B. Lindell, and Gordon Wetzstein. Implicit neural representations with periodic activation functions. arXiv preprint arXiv:2006.09661, 2020.
- Jonathan D. Smith, Kamyar Azizzadenesheli, and Zachary E. Ross. EikoNet: solving the eikonal equation with deep neural networks. arXiv preprint arXiv:2004.00361, 2020.
- Elias M. Stein. Singular Integrals and Differentiability Properties of Functions. Princeton University Press, 1970.
- A. M. Stuart. Inverse problems: a Bayesian perspective. Acta Numerica, 19:451–559, 2010.
- Lloyd N. Trefethen. Spectral Methods in MATLAB, volume 10. SIAM, 2000.
- Nicolas Garcia Trillos and Dejan Slepčev. A variational approach to the consistency of spectral clustering. Applied and Computational Harmonic Analysis, 45(2):239–281, 2018.
- Nicolás García Trillos, Moritz Gerlach, Matthias Hein, and Dejan Slepčev. Error estimates for spectral convergence of the graph Laplacian on random geometric graphs toward the Laplace–Beltrami operator. Foundations of Computational Mathematics, 20(4):827–887, 2020.
- Kiwon Um, Philipp Holl, Robert Brand, Nils Thuerey, et al. Solver-in-the-loop: learning from differentiable physics to interact with iterative PDE-solvers. arXiv preprint arXiv:2007.00016, 2020a.
- Kiwon Um, Raymond Fei, Philipp Holl, Robert Brand, and Nils Thuerey. Solver-in-the-loop: learning from differentiable physics to interact with iterative PDE-solvers, 2020b.
- Benjamin Ummenhofer, Lukas Prantl, Nils Thürey, and Vladlen Koltun. Lagrangian fluid simulation with continuous convolutions. In International Conference on Learning Representations, 2020.
- Vladimir N. Vapnik. Statistical Learning Theory. Wiley-Interscience, 1998.
- Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017.
- Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. 2017.
- Ulrike Von Luxburg, Mikhail Belkin, and Olivier Bousquet. Consistency of spectral clustering. The Annals of Statistics, pages 555–586, 2008.
- Rui Wang, Karthik Kashinath, Mustafa Mustafa, Adrian Albert, and Rose Yu. Towards physics-informed deep learning for turbulent flow prediction. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1457–1466, 2020.
- Sifan Wang, Hanwen Wang, and Paris Perdikaris. Learning the solution operator of parametric partial differential equations with physics-informed DeepONets. arXiv preprint arXiv:2103.10974, 2021.
- Gege Wen, Zongyi Li, Kamyar Azizzadenesheli, Anima Anandkumar, and Sally M. Benson. U-FNO: an enhanced Fourier neural operator based deep learning model for multiphase flow. arXiv preprint arXiv:2109.03697, 2021.
- Hassler Whitney. Functions differentiable on the boundaries of regions. Annals of Mathematics, 35(3):482–485, 1934.
- Christopher K. I. Williams. Computing with infinite networks. In Proceedings of the 9th International Conference on Neural Information Processing Systems, Cambridge, MA, USA, 1996. MIT Press.
- Chen Zhu, Wei Ping, Chaowei Xiao, Mohammad Shoeybi, Tom Goldstein, Anima Anandkumar, and Bryan Catanzaro. Long-short transformer: efficient transformers for language and vision. In Advances in Neural Information Processing Systems, 2021.
- Yinhao Zhu and Nicholas Zabaras. Bayesian deep convolutional encoder–decoder networks for surrogate modeling and uncertainty quantification. Journal of Computational Physics, 2018. ISSN 0021-9991. doi: 10.1016/j.jcp.2018.04.018. URL http://www.sciencedirect.com/science/article/pii/S0021999118302341.

A

| Notation | Meaning |
| --- | --- |
| **Operator learning** | |
| $D \subset \mathbb{R}^d$ | The spatial domain for the PDE. |
| $x \in D$ | Points in the spatial domain. |
| $a \in \mathcal{A}(D; \mathbb{R}^{d_a})$ | The input functions (coefficients, boundaries, and/or initial conditions). |
| $u \in \mathcal{U}(D; \mathbb{R}^{d_u})$ | The target solution functions. |
| $D_j$ | The discretization of $(a_j, u_j)$. |
| $\mathcal{G}^\dagger : \mathcal{A} \to \mathcal{U}$ | The operator mapping the coefficients to the solutions. |
| $\mu$ | The probability measure from which the $a_j$ are sampled. |
| **Neural operator** | |
| $v(x) \in \mathbb{R}^{d_v}$ | The neural network representation of $u(x)$. |
| $d_a$ | Dimension of the input $a(x)$. |
| $d_u$ | Dimension of the output $u(x)$. |
| $d_v$ | Dimension of the representation $v(x)$. |
| $t = 0, \dots, T$ | The layer (iteration) in the neural operator. |
| $\mathcal{P}, \mathcal{Q}$ | The pointwise linear transformations $\mathcal{P} : a(x) \mapsto v_0(x)$ and $\mathcal{Q} : v_T(x) \mapsto u(x)$. |
| $\mathcal{K}$ | The integral operator in the iterative update $v_t \mapsto v_{t+1}$. |
| $\kappa : \mathbb{R}^{2(d+1)} \to \mathbb{R}^{d_v \times d_v}$ | The kernel, mapping $(x, y, a(x), a(y))$ to a $d_v \times d_v$ matrix. |
| $K \in \mathbb{R}^{n \times n \times d_v \times d_v}$ | The kernel matrix with $K_{xy} = \kappa(x, y)$. |
| $W \in \mathbb{R}^{d_v \times d_v}$ | The pointwise linear transformation used as the bias term in the iterative update. |
| $\sigma$ | The activation function. |

Table 10: Table of notations: operator learning and neural operators.
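Using this notation, a single iterative update on a discretized domain can be sketched in a few lines. The following is a minimal, illustrative Python sketch (not the paper's reference implementation); the helper name `neural_operator_layer`, the use of `numpy`, and the specific sizes are assumptions, and the integral is approximated by a simple average over the grid points:

```python
import numpy as np

def neural_operator_layer(v, K, W, sigma=np.tanh):
    """One discretized neural operator layer (illustrative sketch).

    v : (n, d_v) array, the representation v_t evaluated at n grid points.
    K : (n, n, d_v, d_v) array, the kernel matrix with K[x, y] = kappa(x, y).
    W : (d_v, d_v) array, the pointwise linear (bias) transformation.

    Returns v_{t+1}(x) = sigma(W v_t(x) + (1/n) * sum_y K[x, y] v_t(y)),
    a crude quadrature approximation of the integral operator K.
    """
    # Sum the kernel action over grid points y and channels j -> shape (n, d_v).
    integral = np.einsum("xyij,yj->xi", K, v) / v.shape[0]
    return sigma(v @ W.T + integral)

# Tiny example: n = 8 grid points, d_v = 2 channels, random kernel and weights.
rng = np.random.default_rng(0)
n, d_v = 8, 2
v0 = rng.standard_normal((n, d_v))
K = rng.standard_normal((n, n, d_v, d_v))
W = rng.standard_normal((d_v, d_v))
v1 = neural_operator_layer(v0, K, W)
```

Because the kernel matrix and the update are defined pointwise on whatever grid is supplied, the same parameters $\kappa$ and $W$ can be reused at any discretization, which is the discretization-invariance discussed in the body of the paper.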

In this paper, we will use lowercase letters such as $v, u$ to represent vectors and functions; uppercase letters such as $W, K$ to represent matrices or discretized transformations; and calligraphic letters such as $\mathcal{G}, \mathcal{F}$ to represent operators.

We write $\mathbb{N} := \{1, 2, 3, \dots\}$ and $\mathbb{N}_0 := \mathbb{N} \cup \{0\}$. Furthermore, we denote by $|\cdot|_p$ the $p$-norm on any Euclidean space. We say $\mathcal{X}$ is a Banach space if it is a Banach space over the real field $\mathbb{R}$. We denote by $\|\cdot\|_{\mathcal{X}}$ its norm and by $\mathcal{X}^*$ its topological (continuous) dual. In particular, $\mathcal{X}^*$ is the Banach space consisting of all continuous linear functionals $f : \mathcal{X} \to \mathbb{R}$ with the operator norm

$$\|f\|_{\mathcal{X}^*} := \sup_{\|x\|_{\mathcal{X}} \le 1} |f(x)| < \infty.$$

For any Banach space $\mathcal{Y}$, we denote by $\mathcal{L}(\mathcal{X}; \mathcal{Y})$ the Banach space of continuous linear maps $T : \mathcal{X} \to \mathcal{Y}$ with the operator norm

$$\|T\|_{\mathcal{X} \to \mathcal{Y}} := \sup_{\|x\|_{\mathcal{X}} \le 1} \|Tx\|_{\mathcal{Y}} < \infty.$$
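For intuition, in the finite dimensional case $\mathcal{X} = \mathcal{Y} = \mathbb{R}^n$ with the Euclidean norm, this operator norm is the largest singular value of the matrix representing $T$, and the supremum over the unit ball can be probed directly by sampling. A minimal numerical sketch (illustrative only; assumes `numpy`):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3))

# For the 2-norm, ||T|| = sup_{||x|| <= 1} ||Tx|| equals the largest
# singular value of T, which numpy computes directly.
exact = np.linalg.norm(T, ord=2)

# Every sampled unit vector gives a lower bound ||Tx|| <= ||T||; with many
# samples the best bound approaches the true operator norm.
xs = rng.standard_normal((3, 10_000))
xs /= np.linalg.norm(xs, axis=0)
sampled = np.max(np.linalg.norm(T @ xs, axis=0))
```

The sampled value never exceeds the exact norm, since the supremum in the definition dominates any particular choice of unit vector.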

We will abuse notation and write $\|\cdot\|$ for any operator norm when there is no ambiguity about the spaces in question.

Let ๐‘‘ โˆˆ โ„• . We say that ๐ท โŠ‚ โ„ ๐‘‘ is a domain if it is a bounded and connected open set that is topologically regular i.e. int โข ( ๐ท ยฏ )

๐ท . Note that, in the case ๐‘‘

1 , a domain is any bounded, open interval. For ๐‘‘ โ‰ฅ 2 , we say ๐ท is a Lipschitz domain if โˆ‚ ๐ท can be locally represented as the graph of a Lipschitz continuous function defined on an open ball of โ„ ๐‘‘ โˆ’ 1 . If ๐‘‘

1 , we will call any domain a Lipschitz domain. For any multi-index ๐›ผ โˆˆ โ„• 0 ๐‘‘ , we write โˆ‚ ๐›ผ ๐‘“ for the ๐›ผ -th weak partial derivative of ๐‘“ when it exists.

Let ๐ท โŠ‚ โ„ ๐‘‘ be a domain. For any ๐‘š โˆˆ โ„• 0 , we define the following spaces

๐ถ โข ( ๐ท )

{ ๐‘“ : ๐ท โ†’ โ„ : ๐‘“ โข  is continuous } ,

๐ถ ๐‘š โข ( ๐ท )

{ ๐‘“ : ๐ท โ†’ โ„ : โˆ‚ ๐›ผ ๐‘“ โˆˆ ๐ถ ๐‘š โˆ’ | ๐›ผ | 1 โข ( ๐ท ) โข โˆ€ โ€…0 โ‰ค | ๐›ผ | 1 โ‰ค ๐‘š } ,

๐ถ b ๐‘š โข ( ๐ท )

{ ๐‘“ โˆˆ ๐ถ ๐‘š โข ( ๐ท ) : max 0 โ‰ค | ๐›ผ | 1 โ‰ค ๐‘š โข sup ๐‘ฅ โˆˆ ๐ท | โˆ‚ ๐›ผ ๐‘“ โข ( ๐‘ฅ ) | < โˆž } ,

๐ถ ๐‘š โข ( ๐ท ยฏ )

{ ๐‘“ โˆˆ ๐ถ b ๐‘š โข ( ๐ท ) : โˆ‚ ๐›ผ ๐‘“ โข  is uniformly continuous  โข โˆ€ โ€…0 โ‰ค | ๐›ผ | 1 โ‰ค ๐‘š }

and make the equivalent definitions when $D$ is replaced with $\mathbb{R}^d$. Note that any function in $C^m(\bar{D})$ has a unique, bounded, continuous extension from $D$ to $\bar{D}$ and is hence uniquely defined on $\partial D$. We will work with this extension without further notice. We remark that, when $D$ is a Lipschitz domain, the following definition of $C^m(\bar{D})$ is equivalent:

$$C^m(\bar{D}) = \{f : \bar{D} \to \mathbb{R} : \exists F \in C^m(\mathbb{R}^d) \text{ such that } f \equiv F|_{\bar{D}}\},$$

see Whitney (1934); Brudnyi and Brudnyi (2012). We define $C^\infty(D) := \bigcap_{m=0}^\infty C^m(D)$ and, similarly, $C^\infty_b(D)$ and $C^\infty(\bar{D})$. We further define

$$C^\infty_c(D) := \{f \in C^\infty(D) : \mathrm{supp}(f) \subset D \text{ is compact}\}$$

and, again, note that all definitions hold analogously for $\mathbb{R}^d$. We denote by $\|\cdot\|_{C^m} : C^m_b(D) \to \mathbb{R}_{\ge 0}$ the norm

$$\|f\|_{C^m} := \max_{0 \le |\alpha|_1 \le m} \sup_{x \in D} |\partial^\alpha f(x)|,$$

which makes $C^m_b(D)$ (also with $D = \mathbb{R}^d$) and $C^m(\bar{D})$ Banach spaces. For any $n \in \mathbb{N}$, we write $C(D; \mathbb{R}^n)$ for the $n$-fold Cartesian product of $C(D)$, and similarly for all other spaces we have defined or will define subsequently. We will continue to write $\|\cdot\|_{C^m}$ for the norm on $C^m_b(D; \mathbb{R}^n)$ and $C^m(\bar{D}; \mathbb{R}^n)$, defined as

$$\|f\|_{C^m} := \max_{j \in \{1, \dots, n\}} \|f_j\|_{C^m}.$$

For any $m \in \mathbb{N}$ and $1 \le p \le \infty$, we use the notation $W^{m,p}(D)$ for the standard $L^p$-type Sobolev space with $m$ derivatives; we refer the reader to Adams and Fournier (2003) for a formal definition. Furthermore, we at times use the notation $W^{0,p}(D) = L^p(D)$ and $W^{m,2}(D) = H^m(D)$. Since we use the standard definitions of Sobolev spaces that can be found in any reference on the subject, we do not give the specifics here.

B

In this section we gather various results on the approximation property of Banach spaces. The main results are Lemma 22, which states that if two Banach spaces have the approximation property then continuous maps between them can be approximated in a finite-dimensional manner, and Lemma 26, which states that the spaces in Assumptions 9 and 10 have the approximation property.

Definition 15

A Banach space $\mathcal{X}$ has a Schauder basis if there exist some $\{\varphi_j\}_{j=1}^{\infty} \subset \mathcal{X}$ and $\{c_j\}_{j=1}^{\infty} \subset \mathcal{X}^*$ such that

1. $c_j(\varphi_k) = \delta_{jk}$ for any $j, k \in \mathbb{N}$,
2. $\lim_{n \to \infty} \big\|x - \sum_{j=1}^{n} c_j(x) \varphi_j\big\|_{\mathcal{X}} = 0$ for all $x \in \mathcal{X}$.

We remark that Definition 15 is equivalent to the following. The elements $\{\varphi_j\}_{j=1}^{\infty} \subset \mathcal{X}$ are called a Schauder basis for $\mathcal{X}$ if, for each $x \in \mathcal{X}$, there exists a unique sequence $\{\alpha_j\}_{j=1}^{\infty} \subset \mathbb{R}$ such that

$$\lim_{n \to \infty} \Big\|x - \sum_{j=1}^{n} \alpha_j \varphi_j\Big\|_{\mathcal{X}} = 0.$$

For the equivalence, see, for example, (Albiac and Kalton, 2006, Theorem 1.1.3). Throughout this paper we will simply write the term basis to mean Schauder basis. Furthermore, we note that if $\{\varphi_j\}_{j=1}^{\infty}$ is a basis then so is $\{\varphi_j / \|\varphi_j\|_{\mathcal{X}}\}_{j=1}^{\infty}$, so we will assume that any basis we use is normalized.

Definition 16

Let $\mathcal{X}$ be a Banach space and $U \in \mathcal{L}(\mathcal{X}; \mathcal{X})$. $U$ is called a finite rank operator if $U(\mathcal{X}) \subseteq \mathcal{X}$ is finite dimensional.

By noting that any finite dimensional subspace has a basis, we may equivalently define a finite rank operator $U \in \mathcal{L}(\mathcal{X}; \mathcal{X})$ to be one such that there exist a number $n \in \mathbb{N}$ and some $\{\varphi_j\}_{j=1}^{n} \subset \mathcal{X}$ and $\{c_j\}_{j=1}^{n} \subset \mathcal{X}^*$ such that

$$Ux = \sum_{j=1}^{n} c_j(x) \varphi_j, \quad \forall x \in \mathcal{X}.$$

Definition 17

A Banach space $\mathcal{X}$ is said to have the approximation property (AP) if, for any compact set $K \subset \mathcal{X}$ and $\epsilon > 0$, there exists a finite rank operator $U : \mathcal{X} \to \mathcal{X}$ such that

$$\|x - Ux\|_{\mathcal{X}} \le \epsilon, \qquad \forall x \in K.$$

We now state and prove some well-known results about the relationship between bases and the AP. We were unable to find statements of the following lemmas in the literature in the form given here, and we therefore provide full proofs.

Lemma 18

Let ๐’ณ be a Banach space with a basis. Then ๐’ณ has the AP.

Proof  Let { ๐‘ ๐‘— } ๐‘—

1 โˆž โŠ‚ ๐’ณ โˆ— and { ๐œ‘ ๐‘— } ๐‘—

1 โˆž โŠ‚ ๐’ณ be a basis for ๐’ณ . Note that there exists a constant ๐ถ

0 such that, for any ๐‘ฅ โˆˆ ๐’ณ and ๐‘› โˆˆ โ„• ,

โ€– โˆ‘ ๐‘—

1 ๐‘› ๐‘ ๐‘— โข ( ๐‘ฅ ) โข ๐œ‘ ๐‘— โ€– ๐’ณ โ‰ค sup ๐ฝ โˆˆ โ„• โ€– โˆ‘ ๐‘—

1 ๐ฝ ๐‘ ๐‘— โข ( ๐‘ฅ ) โข ๐œ‘ ๐‘— โ€– ๐’ณ โ‰ค ๐ถ โข โ€– ๐‘ฅ โ€– ๐’ณ ,

see, for example (Albiac and Kalton, 2006, Remark 1.1.6). Assume, without loss of generality, that ๐ถ โ‰ฅ 1 . Let ๐พ โŠ‚ ๐’ณ be compact and ๐œ– > 0 . Since ๐พ is compact, we can find a number ๐‘›

๐‘› โข ( ๐œ– , ๐ถ ) โˆˆ โ„• and elements ๐‘ฆ 1 , โ€ฆ , ๐‘ฆ ๐‘› โˆˆ ๐พ such that for any ๐‘ฅ โˆˆ ๐พ there exists a number ๐‘™ โˆˆ { 1 , โ€ฆ , ๐‘› } with the property that

โ€– ๐‘ฅ โˆ’ ๐‘ฆ ๐‘™ โ€– ๐’ณ โ‰ค ๐œ– 3 โข ๐ถ .

We can then find a number ๐ฝ

๐ฝ โข ( ๐œ– , ๐‘› ) โˆˆ โ„• such that

max ๐‘— โˆˆ { 1 , โ€ฆ , ๐‘› } โก โ€– ๐‘ฆ ๐‘— โˆ’ โˆ‘ ๐‘˜

1 ๐ฝ ๐‘ ๐‘˜ โข ( ๐‘ฆ ๐‘— ) โข ๐œ‘ ๐‘˜ โ€– ๐’ณ โ‰ค ๐œ– 3 .

Define the finite rank operator ๐‘ˆ : ๐’ณ โ†’ ๐’ณ by

๐‘ˆ โข ๐‘ฅ

โˆ‘ ๐‘—

1 ๐ฝ ๐‘ ๐‘— โข ( ๐‘ฅ ) โข ๐œ‘ ๐‘— , โˆ€ ๐‘ฅ โˆˆ ๐’ณ .

Triangle inequality implies that, for any ๐‘ฅ โˆˆ ๐พ ,

โ€– ๐‘ฅ โˆ’ ๐‘ˆ โข ( ๐‘ฅ ) โ€– ๐’ณ
โ‰ค โ€– ๐‘ฅ โˆ’ ๐‘ฆ ๐‘™ โ€– ๐’ณ + โ€– ๐‘ฆ ๐‘™ โˆ’ ๐‘ˆ โข ( ๐‘ฆ ๐‘™ ) โ€– ๐’ณ + โ€– ๐‘ˆ โข ( ๐‘ฆ ๐‘™ ) โˆ’ ๐‘ˆ โข ( ๐‘ฅ ) โ€– ๐’ณ

โ‰ค 2 โข ๐œ– 3 + โˆฅ โˆ‘ ๐‘—

1 ๐ฝ ( ๐‘ ๐‘— ( ๐‘ฆ ๐‘™ ) โˆ’ ๐‘ ๐‘— ( ๐‘ฅ ) ) ๐œ‘ ๐‘— โˆฅ ๐’ณ

โ‰ค 2 โข ๐œ– 3 + ๐ถ โข โ€– ๐‘ฆ ๐‘™ โˆ’ ๐‘ฅ โ€– ๐’ณ

โ‰ค ๐œ–

as desired.  

Lemma 19

Let ๐’ณ be a Banach space with a basis and ๐’ด be any Banach space. Suppose there exists a continuous linear bijection ๐‘‡ : ๐’ณ โ†’ ๐’ด . Then ๐’ด has a basis.

Proof  Let ๐‘ฆ โˆˆ ๐’ด and ๐œ– > 0 . Since ๐‘‡ is a bijection, there exists an element ๐‘ฅ โˆˆ ๐’ณ so that ๐‘‡ โข ๐‘ฅ

๐‘ฆ and ๐‘‡ โˆ’ 1 โข ๐‘ฆ

๐‘ฅ . Since ๐’ณ has a basis, we can find { ๐œ‘ ๐‘— } ๐‘—

1 โˆž โŠ‚ ๐’ณ and { ๐‘ ๐‘— } ๐‘—

1 โˆž โŠ‚ ๐’ณ โˆ— and a number ๐‘›

๐‘› โข ( ๐œ– , โ€– ๐‘‡ โ€– ) โˆˆ โ„• such that

โ€– ๐‘ฅ โˆ’ โˆ‘ ๐‘—

1 ๐‘› ๐‘ ๐‘— โข ( ๐‘ฅ ) โข ๐œ‘ ๐‘— โ€– ๐’ณ โ‰ค ๐œ– โ€– ๐‘‡ โ€– .

Note that

โ€– ๐‘ฆ โˆ’ โˆ‘ ๐‘—

1 ๐‘› ๐‘ ๐‘— โข ( ๐‘‡ โˆ’ 1 โข ๐‘ฆ ) โข ๐‘‡ โข ๐œ‘ ๐‘— โ€– ๐’ด

โ€– ๐‘‡ โข ๐‘ฅ โˆ’ ๐‘‡ โข โˆ‘ ๐‘—

1 ๐‘› ๐‘ ๐‘— โข ( ๐‘ฅ ) โข ๐œ‘ ๐‘— โ€– โ‰ค โ€– ๐‘‡ โ€– โข โ€– ๐‘ฅ โˆ’ โˆ‘ ๐‘—

1 ๐‘› ๐‘ ๐‘— โข ( ๐‘ฅ ) โข ๐œ‘ ๐‘— โ€– ๐’ณ โ‰ค ๐œ–

hence { ๐‘‡ โข ๐œ‘ ๐‘— } ๐‘—

1 โˆž โŠ‚ ๐’ด and { ๐‘ ๐‘— ( ๐‘‡ โˆ’ 1 โ‹… ) } ๐‘—

1 โˆž โŠ‚ ๐’ด โˆ— form a basis for ๐’ด by linearity and continuity of ๐‘‡ and ๐‘‡ โˆ’ 1 .  

Lemma 20

Let ๐’ณ be a Banach space with the AP and ๐’ด be any Banach space. Suppose there exists a continuous linear bijection ๐‘‡ : ๐’ณ โ†’ ๐’ด . Then ๐’ด has the AP.

Proof  Let ๐พ โŠ‚ ๐’ด be a compact set and ๐œ– > 0 . The set ๐‘…

๐‘‡ โˆ’ 1 โข ( ๐พ ) โŠ‚ ๐’ณ is compact since ๐‘‡ โˆ’ 1 is continuous. Since ๐’ณ has the AP, there exists a finite rank operator ๐‘ˆ : ๐’ณ โ†’ ๐’ณ such that

โ€– ๐‘ฅ โˆ’ ๐‘ˆ โข ๐‘ฅ โ€– ๐’ณ โ‰ค ๐œ– โ€– ๐‘‡ โ€– , โˆ€ ๐‘ฅ โˆˆ ๐‘… .

Define the operator ๐‘Š : ๐’ด โ†’ ๐’ด by ๐‘Š

๐‘‡ โข ๐‘ˆ โข ๐‘‡ โˆ’ 1 . Clearly ๐‘Š is a finite rank operator since ๐‘ˆ is a finite rank operator. Let ๐‘ฆ โˆˆ ๐พ then, since ๐พ

๐‘‡ โข ( ๐‘… ) , there exists ๐‘ฅ โˆˆ ๐‘… such that ๐‘‡ โข ๐‘ฅ

๐‘ฆ and ๐‘ฅ

๐‘‡ โˆ’ 1 โข ๐‘ฆ . Then

โ€– ๐‘ฆ โˆ’ ๐‘Š โข ๐‘ฆ โ€– ๐’ด

โ€– ๐‘‡ โข ๐‘ฅ โˆ’ ๐‘‡ โข ๐‘ˆ โข ๐‘ฅ โ€– ๐’ด โ‰ค โ€– ๐‘‡ โ€– โข โ€– ๐‘ฅ โˆ’ ๐‘ˆ โข ๐‘ฅ โ€– ๐’ณ โ‰ค ๐œ– .

hence ๐’ด has the AP.  

The following lemma shows that an infinite union of compact sets is compact if each set is the image of a fixed compact set under a convergent sequence of continuous maps. The result is instrumental in proving Lemma 22.

Lemma 21

Let ๐’ณ , ๐’ด be Banach spaces and ๐น : ๐’ณ โ†’ ๐’ด be a continuous map. Let ๐พ โŠ‚ ๐’ณ be a compact set in ๐’ณ and { ๐น ๐‘› : ๐’ณ โ†’ ๐’ด } ๐‘›

1 โˆž be a sequence of continuous maps such that

lim ๐‘› โ†’ โˆž sup ๐‘ฅ โˆˆ ๐พ โ€– ๐น โข ( ๐‘ฅ ) โˆ’ ๐น ๐‘› โข ( ๐‘ฅ ) โ€– ๐’ด

0 .

Then the set

๐‘Š โ‰” โ‹ƒ ๐‘›

1 โˆž ๐น ๐‘› โข ( ๐พ ) โˆช ๐น โข ( ๐พ )

is compact in ๐’ด .

Proof  Let ๐œ– > 0 then there exists a number ๐‘

๐‘ โข ( ๐œ– ) โˆˆ โ„• such that

sup ๐‘ฅ โˆˆ ๐พ โ€– ๐น โข ( ๐‘ฅ ) โˆ’ ๐น ๐‘› โข ( ๐‘ฅ ) โ€– ๐’ด โ‰ค ๐œ– 2 , โˆ€ ๐‘› โ‰ฅ ๐‘ .

Define the set

๐‘Š ๐‘

โ‹ƒ ๐‘›

1 ๐‘ ๐น ๐‘› โข ( ๐พ ) โˆช ๐น โข ( ๐พ )

which is compact since ๐น and each ๐น ๐‘› are continuous. We can therefore find a number ๐ฝ

๐ฝ โข ( ๐œ– , ๐‘ ) โˆˆ โ„• and elements ๐‘ฆ 1 , โ€ฆ , ๐‘ฆ ๐ฝ โˆˆ ๐‘Š ๐‘ such that, for any ๐‘ง โˆˆ ๐‘Š ๐‘ , there exists a number ๐‘™

๐‘™ โข ( ๐‘ง ) โˆˆ { 1 , โ€ฆ , ๐ฝ } such that

โ€– ๐‘ง โˆ’ ๐‘ฆ ๐‘™ โ€– ๐’ด โ‰ค ๐œ– 2 .

Let ๐‘ฆ โˆˆ ๐‘Š โˆ– ๐‘Š ๐‘ then there exists a number ๐‘š > ๐‘ and an element ๐‘ฅ โˆˆ ๐พ such that ๐‘ฆ

๐น ๐‘š โข ( ๐‘ฅ ) . Since ๐น โข ( ๐‘ฅ ) โˆˆ ๐‘Š ๐‘ , we can find a number ๐‘™ โˆˆ { 1 , โ€ฆ , ๐ฝ } such that

โ€– ๐น โข ( ๐‘ฅ ) โˆ’ ๐‘ฆ ๐‘™ โ€– ๐’ด โ‰ค ๐œ– 2 .

Therefore,

โ€– ๐‘ฆ โˆ’ ๐‘ฆ ๐‘™ โ€– ๐’ด โ‰ค โ€– ๐น ๐‘š โข ( ๐‘ฅ ) โˆ’ ๐น โข ( ๐‘ฅ ) โ€– ๐’ด + โ€– ๐น โข ( ๐‘ฅ ) โˆ’ ๐‘ฆ ๐‘™ โ€– ๐’ด โ‰ค ๐œ–

hence { ๐‘ฆ ๐‘— } ๐‘—

1 ๐ฝ forms a finite ๐œ– -net for ๐‘Š , showing that ๐‘Š is totally bounded.

We will now show that ๐‘Š is closed. To that end, let { ๐‘ ๐‘› } ๐‘›

1 โˆž be a convergent sequence in ๐‘Š , in particular, ๐‘ ๐‘› โˆˆ ๐‘Š for every ๐‘› โˆˆ โ„• and ๐‘ ๐‘› โ†’ ๐‘ โˆˆ ๐’ด as ๐‘› โ†’ โˆž . We can thus find convergent sequences { ๐‘ฅ ๐‘› } ๐‘›

1 โˆž and { ๐›ผ ๐‘› } ๐‘›

1 โˆž such that ๐‘ฅ ๐‘› โˆˆ ๐พ , ๐›ผ ๐‘› โˆˆ โ„• 0 , and ๐‘ ๐‘›

๐น ๐›ผ ๐‘› โข ( ๐‘ฅ ๐‘› ) where we define ๐น 0 โ‰” ๐น . Since ๐พ is closed, lim ๐‘› โ†’ โˆž ๐‘ฅ ๐‘›

๐‘ฅ โˆˆ ๐พ thus, for each fixed ๐‘› โˆˆ โ„• ,

lim ๐‘— โ†’ โˆž ๐น ๐›ผ ๐‘› โข ( ๐‘ฅ ๐‘— )

๐น ๐›ผ ๐‘› โข ( ๐‘ฅ ) โˆˆ ๐‘Š

by continuity of ๐น ๐›ผ ๐‘› . Since uniform convergence implies point-wise convergence

๐‘

lim ๐‘› โ†’ โˆž ๐น ๐›ผ ๐‘› โข ( ๐‘ฅ )

๐น ๐›ผ โข ( ๐‘ฅ ) โˆˆ ๐‘Š

for some ๐›ผ โˆˆ โ„• 0 thus ๐‘ โˆˆ ๐‘Š , showing that ๐‘Š is closed.  

The following lemma shows that any continuous operator acting between two Banach spaces with the AP can be approximated in a finite-dimensional manner. The approximation proceeds in three steps which are shown schematically in Figure 16. First an input is mapped to a finite-dimensional representation via the action of a set of functionals on $\mathcal{X}$. This representation is then mapped by a continuous function to a new finite-dimensional representation which serves as the set of coefficients onto representers of $\mathcal{Y}$. The resulting expansion is an element of $\mathcal{Y}$ that is $\epsilon$-close to the action of $\mathcal{G}$ on the input element. A similar finite-dimensionalization was used in (Bhattacharya et al., 2020) by using PCA on $\mathcal{X}$ to define the functionals acting on the input and PCA on $\mathcal{Y}$ to define the output representers. However, the result in that work is restricted to separable Hilbert spaces; here, we generalize it to Banach spaces with the AP.

Lemma 22

Let ๐’ณ , ๐’ด be two Banach spaces with the AP and let ๐’ข : ๐’ณ โ†’ ๐’ด be a continuous map. For every compact set ๐พ โŠ‚ ๐’ณ and ๐œ–

0 , there exist numbers ๐ฝ , ๐ฝ โ€ฒ โˆˆ โ„• and continuous linear maps ๐น ๐ฝ : ๐’ณ โ†’ โ„ ๐ฝ , ๐บ ๐ฝ โ€ฒ : โ„ ๐ฝ โ€ฒ โ†’ ๐’ด as well as ๐œ‘ โˆˆ ๐ถ โข ( โ„ ๐ฝ ; โ„ ๐ฝ โ€ฒ ) such that

sup ๐‘ฅ โˆˆ ๐พ โ€– ๐’ข โข ( ๐‘ฅ ) โˆ’ ( ๐บ ๐ฝ โ€ฒ โˆ˜ ๐œ‘ โˆ˜ ๐น ๐ฝ ) โข ( ๐‘ฅ ) โ€– ๐’ด โ‰ค ๐œ– .

Furthermore there exist ๐‘ค 1 , โ€ฆ , ๐‘ค ๐ฝ โˆˆ ๐’ณ โˆ— such that ๐น ๐ฝ has the form

๐น ๐ฝ ( ๐‘ฅ )

( ๐‘ค 1 ( ๐‘ฅ ) , โ€ฆ , ๐‘ค ๐ฝ ( ๐‘ฅ ) ) , โˆ€ ๐‘ฅ โˆˆ ๐’ณ

and there exist ๐›ฝ 1 , โ€ฆ , ๐›ฝ ๐ฝ โ€ฒ โˆˆ ๐’ด such that ๐บ ๐ฝ โ€ฒ has the form

๐บ ๐ฝ โ€ฒ โข ( ๐‘ฃ )

โˆ‘ ๐‘—

1 ๐ฝ โ€ฒ ๐‘ฃ ๐‘— โข ๐›ฝ ๐‘— , โˆ€ ๐‘ฃ โˆˆ โ„ ๐ฝ โ€ฒ .

If ๐’ด admits a basis then { ๐›ฝ ๐‘— } ๐‘—

1 ๐ฝ โ€ฒ can be picked so that there is an extension { ๐›ฝ ๐‘— } ๐‘—

1 โˆž โŠ‚ ๐’ด which is a basis for ๐’ด .

Proof  Since ๐’ณ has the AP, there exists a sequence of finite rank operators { ๐‘ˆ ๐‘› ๐’ณ : ๐’ณ โ†’ ๐’ณ } ๐‘›

1 โˆž such that

lim ๐‘› โ†’ โˆž sup ๐‘ฅ โˆˆ ๐พ โ€– ๐‘ฅ โˆ’ ๐‘ˆ ๐‘› ๐’ณ โข ๐‘ฅ โ€– ๐’ณ

0 .

Define the set

๐‘

โ‹ƒ ๐‘›

1 โˆž ๐‘ˆ ๐‘› ๐’ณ โข ( ๐พ ) โˆช ๐พ

which is compact by Lemma 21. Therefore, ๐’ข is uniformly continuous on ๐‘ hence there exists a modulus of continuity ๐œ” : โ„ โ‰ฅ 0 โ†’ โ„ โ‰ฅ 0 which is non-decreasing and satisfies ๐œ” โข ( ๐‘ก ) โ†’ ๐œ” โข ( 0 )

0 as ๐‘ก โ†’ 0 as well as

โˆฅ ๐’ข ( ๐‘ง 1 ) โˆ’ ๐’ข ( ๐‘ง 2 ) โˆฅ ๐’ด โ‰ค ๐œ” ( โˆฅ ๐‘ง 1 โˆ’ ๐‘ง 2 โˆฅ ๐’ณ ) โˆ€ ๐‘ง 1 , ๐‘ง 2 โˆˆ ๐‘ .

We can thus find, a number ๐‘

๐‘ โข ( ๐œ– ) โˆˆ โ„• such that

sup ๐‘ฅ โˆˆ ๐พ ๐œ” ( โˆฅ ๐‘ฅ โˆ’ ๐‘ˆ ๐‘ ๐’ณ ๐‘ฅ โˆฅ ๐’ณ ) โ‰ค ๐œ– 2 .

Let ๐ฝ

dim  โข ๐‘ˆ ๐‘ ๐’ณ โข ( ๐’ณ ) < โˆž . There exist elements { ๐›ผ ๐‘— } ๐‘—

1 ๐ฝ โŠ‚ ๐’ณ and { ๐‘ค ๐‘— } ๐‘—

1 ๐ฝ โŠ‚ ๐’ณ โˆ— such that

๐‘ˆ ๐‘ ๐’ณ โข ๐‘ฅ

โˆ‘ ๐‘—

1 ๐ฝ ๐‘ค ๐‘— โข ( ๐‘ฅ ) โข ๐›ผ ๐‘— , โˆ€ ๐‘ฅ โˆˆ ๐‘‹ .

Define the maps ๐น ๐ฝ ๐’ณ : ๐’ณ โ†’ โ„ ๐ฝ and ๐บ ๐ฝ ๐’ณ : โ„ ๐ฝ โ†’ ๐’ณ by

๐น ๐ฝ ๐’ณ โข ( ๐‘ฅ )

( ๐‘ค 1 โข ( ๐‘ฅ ) , โ€ฆ , ๐‘ค ๐ฝ โข ( ๐‘ฅ ) ) , โˆ€ ๐‘ฅ โˆˆ ๐’ณ ,

๐บ ๐ฝ ๐’ณ โข ( ๐‘ฃ )

โˆ‘ ๐‘—

1 ๐ฝ ๐‘ฃ ๐‘— โข ๐›ผ ๐‘— , โˆ€ ๐‘ฃ โˆˆ โ„ ๐ฝ ,

noting that ๐‘ˆ ๐‘ ๐’ณ

๐บ ๐ฝ ๐’ณ โˆ˜ ๐น ๐ฝ ๐’ณ . Define the set ๐‘Š

( ๐’ข โˆ˜ ๐‘ˆ ๐‘ ๐’ณ ) โข ( ๐พ ) โІ ๐’ด which is clearly compact. Since ๐’ด has the AP, we can similarly find a finite rank operator ๐‘ˆ ๐ฝ โ€ฒ ๐’ด : ๐’ด โ†’ ๐’ด with ๐ฝ โ€ฒ

dim  โข ๐‘ˆ ๐ฝ โ€ฒ ๐’ด โข ( ๐’ด ) < โˆž such that

sup ๐‘ฆ โˆˆ ๐‘Š โ€– ๐‘ฆ โˆ’ ๐‘ˆ ๐ฝ โ€ฒ ๐’ด โข ๐‘ฆ โ€– ๐’ด โ‰ค ๐œ– 2 .

Analogously, define the maps ๐น ๐ฝ โ€ฒ ๐’ด : ๐’ด โ†’ โ„ ๐ฝ โ€ฒ and ๐บ ๐ฝ โ€ฒ ๐’ด : โ„ ๐ฝ โ€ฒ โ†’ ๐’ด by

๐น ๐ฝ โ€ฒ ๐’ด โข ( ๐‘ฆ )

( ๐‘ž 1 โข ( ๐‘ฆ ) , โ€ฆ , ๐‘ž ๐ฝ โ€ฒ โข ( ๐‘ฆ ) ) , โˆ€ ๐‘ฆ โˆˆ ๐’ด ,

๐บ ๐ฝ โ€ฒ ๐’ด โข ( ๐‘ฃ )

โˆ‘ ๐‘—

1 ๐ฝ โ€ฒ ๐‘ฃ ๐‘— โข ๐›ฝ ๐‘— , โˆ€ ๐‘ฃ โˆˆ โ„ ๐ฝ โ€ฒ

for some { ๐›ฝ ๐‘— } ๐‘—

1 ๐ฝ โ€ฒ โŠ‚ ๐’ด and { ๐‘ž ๐‘— } ๐‘—

1 ๐ฝ โ€ฒ โŠ‚ ๐’ด โˆ— such that ๐‘ˆ ๐ฝ โ€ฒ ๐’ด

๐บ ๐ฝ โ€ฒ ๐’ด โˆ˜ ๐น ๐ฝ โ€ฒ ๐’ด . Clearly if ๐’ด admits a basis then we could have defined ๐น ๐ฝ โ€ฒ ๐’ด and ๐บ ๐ฝ โ€ฒ ๐’ด through it instead of through ๐‘ˆ ๐ฝ โ€ฒ ๐’ด . Define ๐œ‘ : โ„ ๐ฝ โ†’ โ„ ๐ฝ โ€ฒ by

๐œ‘ โข ( ๐‘ฃ )

( ๐น ๐ฝ โ€ฒ ๐’ด โˆ˜ ๐’ข โˆ˜ ๐บ ๐ฝ ๐’ณ ) โข ( ๐‘ฃ ) , โˆ€ ๐‘ฃ โˆˆ โ„ ๐ฝ

which is clearly continuous and note that ๐บ ๐ฝ โ€ฒ ๐’ด โˆ˜ ๐œ‘ โˆ˜ ๐น ๐ฝ ๐’ณ

๐‘ˆ ๐ฝ โ€ฒ ๐’ด โˆ˜ ๐’ข โˆ˜ ๐‘ˆ ๐‘ ๐’ณ . Set ๐น ๐ฝ

๐น ๐ฝ ๐’ณ and ๐บ ๐ฝ โ€ฒ

๐บ ๐ฝ โ€ฒ ๐’ด then, for any ๐‘ฅ โˆˆ ๐พ ,

โ€– ๐’ข โข ( ๐‘ฅ ) โˆ’ ( ๐บ ๐ฝ โ€ฒ โˆ˜ ๐œ‘ โˆ˜ ๐น ๐ฝ ) โข ( ๐‘ฅ ) โ€– ๐’ด

โ‰ค โ€– ๐’ข โข ( ๐‘ฅ ) โˆ’ ๐’ข โข ( ๐‘ˆ ๐‘ ๐’ณ โข ๐‘ฅ ) โ€– ๐’ด + โ€– ๐’ข โข ( ๐‘ˆ ๐‘ ๐’ณ โข ๐‘ฅ ) โˆ’ ( ๐‘ˆ ๐ฝ โ€ฒ ๐’ด โˆ˜ ๐’ข โˆ˜ ๐‘ˆ ๐‘ ๐’ณ ) โข ( ๐‘ฅ ) โ€– ๐’ด

โ‰ค ๐œ” ( โˆฅ ๐‘ฅ โˆ’ ๐‘ˆ ๐‘ ๐’ณ ๐‘ฅ โˆฅ ๐’ณ ) + sup ๐‘ฆ โˆˆ ๐‘Š โˆฅ ๐‘ฆ โˆ’ ๐‘ˆ ๐ฝ โ€ฒ ๐’ด ๐‘ฆ โˆฅ ๐’ด

โ‰ค ๐œ–

as desired.  

We now state and prove some results about isomorphisms of function spaces defined on different domains. These results are instrumental in proving Lemma 26.

Lemma 23

Let ๐ท , ๐ท โ€ฒ โŠ‚ โ„ ๐‘‘ be domains. Suppose that, for some ๐‘š โˆˆ โ„• 0 , there exists a ๐ถ ๐‘š -diffeomorphism ๐œ : ๐ท ยฏ โ€ฒ โ†’ ๐ท ยฏ . Then the mapping ๐‘‡ : ๐ถ ๐‘š โข ( ๐ท ยฏ ) โ†’ ๐ถ ๐‘š โข ( ๐ท ยฏ โ€ฒ ) defined as

๐‘‡ โข ( ๐‘“ ) โข ( ๐‘ฅ )

๐‘“ โข ( ๐œ โข ( ๐‘ฅ ) ) , โˆ€ ๐‘“ โˆˆ ๐ถ ๐‘š โข ( ๐ท ยฏ ) , ๐‘ฅ โˆˆ ๐ท ยฏ โ€ฒ

is a continuous linear bijection.

Proof  Clearly ๐‘‡ is linear since the evaluation functional is linear. To see that it is continuous, note that by the chain rule we can find a constant ๐‘„

๐‘„ โข ( ๐‘š )

0 such that

โ€– ๐‘‡ โข ( ๐‘“ ) โ€– ๐ถ ๐‘š โ‰ค ๐‘„ โข โ€– ๐œ โ€– ๐ถ ๐‘š โข โ€– ๐‘“ โ€– ๐ถ ๐‘š , โˆ€ ๐‘“ โˆˆ ๐ถ ๐‘š โข ( ๐ท ยฏ ) .

We will now show that it is bijective. Let ๐‘“ , ๐‘” โˆˆ ๐ถ ๐‘š โข ( ๐ท ยฏ ) so that ๐‘“ โ‰  ๐‘” . Then there exists a point ๐‘ฅ โˆˆ ๐ท ยฏ such that ๐‘“ โข ( ๐‘ฅ ) โ‰  ๐‘” โข ( ๐‘ฅ ) . Then ๐‘‡ โข ( ๐‘“ ) โข ( ๐œ โˆ’ 1 โข ( ๐‘ฅ ) )

๐‘“ โข ( ๐‘ฅ ) and ๐‘‡ โข ( ๐‘” ) โข ( ๐œ โˆ’ 1 โข ( ๐‘ฅ ) )

๐‘” โข ( ๐‘ฅ ) hence ๐‘‡ โข ( ๐‘“ ) โ‰  ๐‘‡ โข ( ๐‘” ) thus ๐‘‡ is injective. Now let ๐‘” โˆˆ ๐ถ ๐‘š โข ( ๐ท ยฏ โ€ฒ ) and define ๐‘“ : ๐ท ยฏ โ†’ โ„ by ๐‘“

๐‘” โˆ˜ ๐œ โˆ’ 1 . Since ๐œ โˆ’ 1 โˆˆ ๐ถ ๐‘š โข ( ๐ท ยฏ ; ๐ท ยฏ โ€ฒ ) , we have that ๐‘“ โˆˆ ๐ถ ๐‘š โข ( ๐ท ยฏ ) . Clearly, ๐‘‡ โข ( ๐‘“ )

๐‘” hence ๐‘‡ is surjective.  

Corollary 24

Let ๐‘€

0 and ๐‘š โˆˆ โ„• 0 . There exists a continuous linear bijection ๐‘‡ : ๐ถ ๐‘š โข ( [ 0 , 1 ] ๐‘‘ ) โ†’ ๐ถ ๐‘š โข ( [ โˆ’ ๐‘€ , ๐‘€ ] ๐‘‘ ) .

Proof  Let ๐Ÿ™ โˆˆ โ„ ๐‘‘ denote the vector in which all entries are 1 . Define the map ๐œ : โ„ ๐‘‘ โ†’ โ„ ๐‘‘ by

๐œ โข ( ๐‘ฅ )

1 2 โข ๐‘€ โข ๐‘ฅ + 1 2 โข ๐Ÿ™ , โˆ€ ๐‘ฅ โˆˆ โ„ ๐‘‘ .

(48)

Clearly ๐œ is a ๐ถ โˆž -diffeomorphism between [ โˆ’ ๐‘€ , ๐‘€ ] ๐‘‘ and [ 0 , 1 ] ๐‘‘ hence Lemma 23 implies the result.  

Lemma 25

Let ๐‘€

0 and ๐‘š โˆˆ โ„• . There exists a continuous linear bijection ๐‘‡ : ๐‘Š ๐‘š , 1 โข ( ( 0 , 1 ) ๐‘‘ ) โ†’ ๐‘Š ๐‘š , 1 โข ( ( โˆ’ ๐‘€ , ๐‘€ ) ๐‘‘ ) .

Proof  Define the map ๐œ : โ„ ๐‘‘ โ†’ โ„ ๐‘‘ by (48). We have that ๐œ โข ( ( โˆ’ ๐‘€ , ๐‘€ ) ๐‘‘ )

( 0 , 1 ) ๐‘‘ . Define the operator ๐‘‡ by

๐‘‡ โข ๐‘“

๐‘“ โˆ˜ ๐œ , โˆ€ ๐‘“ โˆˆ ๐‘Š ๐‘š , 1 โข ( ( 0 , 1 ) ๐‘‘ ) .

which is clearly linear since composition is linear. We compute that, for any 0 โ‰ค | ๐›ผ | 1 โ‰ค ๐‘š ,

โˆ‚ ๐›ผ ( ๐‘“ โˆ˜ ๐œ )

( 2 โข ๐‘€ ) โˆ’ | ๐›ผ | 1 โข ( โˆ‚ ๐›ผ ๐‘“ ) โˆ˜ ๐œ

hence, by the change of variables formula,

โ€– ๐‘‡ โข ๐‘“ โ€– ๐‘Š ๐‘š , 1 โข ( ( โˆ’ ๐‘€ , ๐‘€ ) ๐‘‘ )

โˆ‘ 0 โ‰ค | ๐›ผ | 1 โ‰ค ๐‘š ( 2 โข ๐‘€ ) ๐‘‘ โˆ’ | ๐›ผ | 1 โข โ€– โˆ‚ ๐›ผ ๐‘“ โ€– ๐ฟ 1 โข ( ( 0 , 1 ) ๐‘‘ ) .

We can therefore find numbers ๐ถ 1 , ๐ถ 2

0 , depending on ๐‘€ and ๐‘š , such that

๐ถ 1 โข โ€– ๐‘“ โ€– ๐‘Š ๐‘š , 1 โข ( ( 0 , 1 ) ๐‘‘ ) โ‰ค โ€– ๐‘‡ โข ๐‘“ โ€– ๐‘Š ๐‘š , 1 โข ( ( โˆ’ ๐‘€ , ๐‘€ ) ๐‘‘ ) โ‰ค ๐ถ 2 โข โ€– ๐‘“ โ€– ๐‘Š ๐‘š , 1 โข ( ( 0 , 1 ) ๐‘‘ ) .

This shows that ๐‘‡ : ๐‘Š ๐‘š , 1 โข ( ( 0 , 1 ) ๐‘‘ ) โ†’ ๐‘Š ๐‘š , 1 โข ( ( โˆ’ ๐‘€ , ๐‘€ ) ๐‘‘ ) is continuous and injective. Now let ๐‘” โˆˆ ๐‘Š ๐‘š , 1 โข ( ( โˆ’ ๐‘€ , ๐‘€ ) ๐‘‘ ) and define ๐‘“

๐‘” โˆ˜ ๐œ โˆ’ 1 . A similar argument shows that ๐‘“ โˆˆ ๐‘Š ๐‘š , 1 โข ( ( 0 , 1 ) ๐‘‘ ) and, clearly, ๐‘‡ โข ๐‘“

๐‘” hence ๐‘‡ is surjective.  

We now show that the spaces in Assumptions 9 and 10 have the AP. While the result is well-known when the domain is $(0,1)^d$ or $\mathbb{R}^d$, we were unable to find any results in the literature for Lipschitz domains and we therefore give a full proof here. The essence of the proof is to either exhibit an isomorphism to a space that is already known to have the AP or to directly show the AP by embedding the Lipschitz domain into a hypercube for which there are known basis constructions. Our proof shows the stronger result that $W^{m,p}(D)$ for $m \in \mathbb{N}_0$ and $1 \le p < \infty$ has a basis but, for $C^m(\bar{D})$, we only establish the AP and not necessarily a basis. The discrepancy comes from the fact that there is an isomorphism between $W^{m,p}(D)$ and $W^{m,p}(\mathbb{R}^d)$, while there is not one between $C^m(\bar{D})$ and $C^m(\mathbb{R}^d)$.

Lemma 26

Let Assumptions 9 and 10 hold. Then $\mathcal{A}$ and $\mathcal{U}$ have the AP.

Proof  It is enough to show that the spaces $W^{m,p}(D)$ and $C^m(\bar{D})$, for any $1 \le p < \infty$ and $m \in \mathbb{N}_0$ with $D \subset \mathbb{R}^d$ a Lipschitz domain, have the AP. Consider first the spaces $W^{0,p}(D) = L^p(D)$. Since the Lebesgue measure on $D$ is $\sigma$-finite and has no atoms, $L^p(D)$ is isometrically isomorphic to $L^p((0,1))$ (see, for example, (Albiac and Kalton, 2006, Chapter 6)). Hence, by Lemma 20, it is enough to show that $L^p((0,1))$ has the AP. Similarly, consider the spaces $W^{m,p}(D)$ for $m > 0$ and $p > 1$. Since $D$ is Lipschitz, there exists a continuous linear extension operator $W^{m,p}(D) \to W^{m,p}(\mathbb{R}^d)$ (Stein, 1970, Chapter 6, Theorem 5) (this also holds for $p = 1$). We can therefore apply (Pełczyński and Wojciechowski, 2001, Corollary 4) (when $p > 1$) to conclude that $W^{m,p}(D)$ is isomorphic to $L^p((0,1))$. By (Albiac and Kalton, 2006, Proposition 6.1.3), $L^p((0,1))$ has a basis, hence Lemma 18 implies the result.

Now, consider the spaces ๐ถ ๐‘š โข ( ๐ท ยฏ ) . Since ๐ท is bounded, there exists a number ๐‘€ > 0 such that ๐ท ยฏ โІ [ โˆ’ ๐‘€ , ๐‘€ ] ๐‘‘ . Hence, by Corollary 24, ๐ถ ๐‘š โข ( [ 0 , 1 ] ๐‘‘ ) is isomorphic to ๐ถ ๐‘š โข ( [ โˆ’ ๐‘€ , ๐‘€ ] ๐‘‘ ) . Since ๐ถ ๐‘š โข ( [ 0 , 1 ] ๐‘‘ ) has a basis (Ciesielski and Domsta, 1972, Theorem 5), Lemma 19 then implies that ๐ถ ๐‘š โข ( [ โˆ’ ๐‘€ , ๐‘€ ] ๐‘‘ ) has a basis. By (Fefferman, 2007, Theorem 1), there exists a continuous linear operator ๐ธ : ๐ถ ๐‘š โข ( ๐ท ยฏ ) โ†’ ๐ถ b ๐‘š โข ( โ„ ๐‘‘ ) such that ๐ธ โข ( ๐‘“ ) | ๐ท ยฏ

๐‘“ for all ๐‘“ โˆˆ ๐ถ โข ( ๐ท ยฏ ) . Define the restriction operators ๐‘… ๐‘€ : ๐ถ b ๐‘š โข ( โ„ ๐‘‘ ) โ†’ ๐ถ ๐‘š โข ( [ โˆ’ ๐‘€ , ๐‘€ ] ๐‘‘ ) and ๐‘… ๐ท : ๐ถ ๐‘š โข ( [ โˆ’ ๐‘€ , ๐‘€ ] ๐‘‘ ) โ†’ ๐ถ ๐‘š โข ( ๐ท ยฏ ) which are both clearly linear and continuous and โ€– ๐‘… ๐‘€ โ€–

โ€– ๐‘… ๐ท โ€–

1 . Let { ๐‘ ๐‘— } ๐‘—

1 โˆž โŠ‚ ( ๐ถ ๐‘š ( [ โˆ’ ๐‘€ , ๐‘€ ] ๐‘‘ ) ) โˆ— and { ๐œ‘ ๐‘— } ๐‘—

1 โˆž โŠ‚ ๐ถ ๐‘š โข ( [ โˆ’ ๐‘€ , ๐‘€ ] ๐‘‘ ) be a basis for ๐ถ ๐‘š โข ( [ โˆ’ ๐‘€ , ๐‘€ ] ๐‘‘ ) . As in the proof of Lemma 18, there exists a constant ๐ถ 1

0 such that, for any ๐‘› โˆˆ โ„• and ๐‘“ โˆˆ ๐ถ ๐‘š โข ( [ โˆ’ ๐‘€ , ๐‘€ ] ๐‘‘ ) ,

โ€– โˆ‘ ๐‘—

1 ๐‘› ๐‘ ๐‘— โข ( ๐‘“ ) โข ๐œ‘ ๐‘— โ€– ๐ถ ๐‘š โข ( [ โˆ’ ๐‘€ , ๐‘€ ] ๐‘‘ ) โ‰ค ๐ถ 1 โข โ€– ๐‘“ โ€– ๐ถ ๐‘š โข ( [ โˆ’ ๐‘€ , ๐‘€ ] ๐‘‘ ) .

Suppose, without loss of generality, that ๐ถ 1 โข โ€– ๐ธ โ€– โ‰ฅ 1 . Let ๐พ โŠ‚ ๐ถ ๐‘š โข ( ๐ท ยฏ ) be a compact set and ๐œ– > 0 . Since ๐พ is compact, we can find a number ๐‘›

๐‘› โข ( ๐œ– ) โˆˆ โ„• and elements ๐‘ฆ 1 , โ€ฆ , ๐‘ฆ ๐‘› โˆˆ ๐พ such that, for any ๐‘“ โˆˆ ๐พ there exists a number ๐‘™ โˆˆ { 1 , โ€ฆ , ๐‘› } such that

โ€– ๐‘“ โˆ’ ๐‘ฆ ๐‘™ โ€– ๐ถ ๐‘š โข ( ๐ท ยฏ ) โ‰ค ๐œ– 3 โข ๐ถ 1 โข โ€– ๐ธ โ€– .

For every ๐‘™ โˆˆ { 1 , โ€ฆ , ๐‘› } , define ๐‘” ๐‘™

๐‘… ๐‘€ โข ( ๐ธ โข ( ๐‘ฆ ๐‘™ ) ) and note that ๐‘” ๐‘™ โˆˆ ๐ถ ๐‘š โข ( [ โˆ’ ๐‘€ , ๐‘€ ] ๐‘‘ ) hence there exists a number ๐ฝ

๐ฝ โข ( ๐œ– , ๐‘› ) โˆˆ โ„• such that

max ๐‘™ โˆˆ { 1 , โ€ฆ , ๐‘› } โก โ€– ๐‘” ๐‘™ โˆ’ โˆ‘ ๐‘—

1 ๐ฝ ๐‘ ๐‘— โข ( ๐‘” ๐‘™ ) โข ๐œ‘ ๐‘— โ€– ๐ถ ๐‘š โข ( [ โˆ’ ๐‘€ , ๐‘€ ] ๐‘‘ ) โ‰ค ๐œ– 3 .

Notice that, since ๐‘ฆ ๐‘™

๐‘… ๐ท โข ( ๐‘” ๐‘™ ) , we have

max ๐‘™ โˆˆ { 1 , โ€ฆ , ๐‘› } โˆฅ ๐‘ฆ ๐‘™ โˆ’ โˆ‘ ๐‘—

1 ๐ฝ ๐‘ ๐‘— ( ๐‘… ๐‘€ ( ๐ธ ( ๐‘ฆ ๐‘™ ) ) ) ๐‘… ๐ท ( ๐œ‘ ๐‘— ) โˆฅ ๐ถ ๐‘š โข ( ๐ท ยฏ ) โ‰ค โˆฅ ๐‘… ๐ท โˆฅ max ๐‘™ โˆˆ { 1 , โ€ฆ , ๐‘› } โˆฅ ๐‘” ๐‘™ โˆ’ โˆ‘ ๐‘—

1 ๐ฝ ๐‘ ๐‘— ( ๐‘” ๐‘™ ) ๐œ‘ ๐‘— โˆฅ ๐ถ ๐‘š โข ( [ โˆ’ ๐‘€ , ๐‘€ ] ๐‘‘ ) โ‰ค ๐œ– 3 .

Define the finite rank operator ๐‘ˆ : ๐ถ ๐‘š โข ( ๐ท ยฏ ) โ†’ ๐ถ ๐‘š โข ( ๐ท ยฏ ) by

๐‘ˆ ๐‘“

โˆ‘ ๐‘—

1 ๐ฝ ๐‘ ๐‘— ( ๐‘… ๐‘€ ( ๐ธ ( ๐‘“ ) ) ) ๐‘… ๐ท ( ๐œ‘ ๐‘— ) , โˆ€ ๐‘“ โˆˆ ๐ถ ๐‘š ( ๐ท ยฏ ) .

We then have that, for any ๐‘“ โˆˆ ๐พ ,

โ€– ๐‘“ โˆ’ ๐‘ˆ โข ๐‘“ โ€– ๐ถ ๐‘š โข ( ๐ท ยฏ )
โ‰ค โ€– ๐‘“ โˆ’ ๐‘ฆ ๐‘™ โ€– ๐ถ ๐‘š โข ( ๐ท ยฏ ) + โ€– ๐‘ฆ ๐‘™ โˆ’ ๐‘ˆ โข ๐‘ฆ ๐‘™ โ€– ๐ถ ๐‘š โข ( ๐ท ยฏ ) + โ€– ๐‘ˆ โข ๐‘ฆ ๐‘™ โˆ’ ๐‘ˆ โข ๐‘“ โ€– ๐ถ ๐‘š โข ( ๐ท ยฏ )

โ‰ค 2 โข ๐œ– 3 + โˆฅ โˆ‘ ๐‘—

1 ๐ฝ ๐‘ ๐‘— ( ๐‘… ๐‘€ ( ๐ธ ( ๐‘ฆ ๐‘™ โˆ’ ๐‘“ ) ) ) ๐œ‘ ๐‘— โˆฅ ๐ถ ๐‘š โข ( [ โˆ’ ๐‘€ , ๐‘€ ] ๐‘‘ )

โ‰ค 2 โข ๐œ– 3 + ๐ถ 1 โข โ€– ๐‘… ๐‘€ โข ( ๐ธ โข ( ๐‘ฆ ๐‘™ โˆ’ ๐‘“ ) ) โ€– ๐ถ ๐‘š โข ( [ โˆ’ ๐‘€ , ๐‘€ ] ๐‘‘ )

โ‰ค 2 โข ๐œ– 3 + ๐ถ 1 โข โ€– ๐ธ โ€– โข โ€– ๐‘ฆ ๐‘™ โˆ’ ๐‘“ โ€– ๐ถ ๐‘š โข ( ๐ท ยฏ )

โ‰ค ๐œ–

hence ๐ถ ๐‘š โข ( ๐ท ยฏ ) has the AP.

We are left with the case ๐‘Š ๐‘š , 1 โข ( ๐ท ) . A similar argument as for the ๐ถ ๐‘š โข ( ๐ท ยฏ ) case holds. In particular the basis from (Ciesielski and Domsta, 1972, Theorem 5) is also a basis for ๐‘Š ๐‘š , 1 โข ( ( 0 , 1 ) ๐‘‘ ) . Lemma 25 gives an isomorphism between ๐‘Š ๐‘š , 1 โข ( ( 0 , 1 ) ๐‘‘ ) and ๐‘Š ๐‘š , 1 โข ( ( โˆ’ ๐‘€ , ๐‘€ ) ๐‘‘ ) hence we may use the extension operator ๐‘Š ๐‘š , 1 โข ( ๐ท ) โ†’ ๐‘Š ๐‘š , 1 โข ( โ„ ๐‘‘ ) from (Stein, 1970, Chapter 6, Theorem 5) to complete the argument. In fact, the same construction yields a basis for ๐‘Š ๐‘š , 1 โข ( ๐ท ) due to the isomorphism with ๐‘Š ๐‘š , 1 โข ( โ„ ๐‘‘ ) , see, for example (Peล‚czyล„ski and Wojciechowski, 2001, Theorem 1).  

C

In this section, we prove various results about the approximation of linear functionals by kernel integral operators. Lemma 27 establishes a Riesz-representation theorem for $C^m$. The proof proceeds exactly as in the well-known result for $W^{m,p}$ but, since we did not find it in the literature, we give full details here. Lemma 28 shows that linear functionals on $W^{m,p}$ can be approximated uniformly over compact sets by integral kernel operators with a $C^\infty$ kernel. Lemmas 30 and 31 establish similar results for $C$ and $C^m$, respectively, by employing Lemma 27. These lemmas are crucial in showing that NOs are universal since they imply that the functionals from Lemma 22 can be approximated by elements of $\mathsf{IO}$.

Lemma 27

Let ๐ท โŠ‚ โ„ ๐‘‘ be a domain and ๐‘š โˆˆ โ„• 0 . For every ๐ฟ โˆˆ ( ๐ถ ๐‘š ( ๐ท ยฏ ) ) โˆ— there exist finite, signed, Radon measures { ๐œ† ๐›ผ } 0 โ‰ค | ๐›ผ | 1 โ‰ค ๐‘š such that

๐ฟ โข ( ๐‘“ )

โˆ‘ 0 โ‰ค | ๐›ผ | 1 โ‰ค ๐‘š โˆซ ๐ท ยฏ โˆ‚ ๐›ผ ๐‘“ โข ๐–ฝ โข ๐œ† ๐›ผ , โˆ€ ๐‘“ โˆˆ ๐ถ ๐‘š โข ( ๐ท ยฏ ) .

Proof  The case ๐‘š

0 follow directly from (Leoni, 2009, Theorem B.111), so we assume that ๐‘š

0 . Let ๐›ผ 1 , โ€ฆ , ๐›ผ ๐ฝ be an enumeration of the set { ๐›ผ โˆˆ โ„• ๐‘‘ : | ๐›ผ | 1 โ‰ค ๐‘š } . Define the mapping ๐‘‡ : ๐ถ ๐‘š โข ( ๐ท ยฏ ) โ†’ ๐ถ โข ( ๐ท ยฏ ; โ„ ๐ฝ ) by

๐‘‡ โข ๐‘“

( โˆ‚ ๐›ผ 0 ๐‘“ , โ€ฆ , โˆ‚ ๐›ผ ๐ฝ ๐‘“ ) , โˆ€ ๐‘“ โˆˆ ๐ถ ๐‘š โข ( ๐ท ยฏ ) .

Clearly โ€– ๐‘‡ โข ๐‘“ โ€– ๐ถ โข ( ๐ท ยฏ ; โ„ ๐ฝ )

โ€– ๐‘“ โ€– ๐ถ ๐‘š โข ( ๐ท ยฏ ) hence ๐‘‡ is an injective, continuous linear operator. Define ๐‘Š โ‰” ๐‘‡ โข ( ๐ถ ๐‘š โข ( ๐ท ยฏ ) ) โŠ‚ ๐ถ โข ( ๐ท ยฏ ; โ„ ๐ฝ ) then ๐‘‡ โˆ’ 1 : ๐‘Š โ†’ ๐ถ ๐‘š โข ( ๐ท ยฏ ) is a continuous linear operator since ๐‘‡ preserves norm. Thus ๐‘Š

( ๐‘‡ โˆ’ 1 ) โˆ’ 1 ( ๐ถ ๐‘š ( ๐ท ยฏ ) ) is closed as the pre-image of a closed set under a continuous map. In particular, ๐‘Š is a Banach space since ๐ถ โข ( ๐ท ยฏ ; โ„ ๐ฝ ) is a Banach space and ๐‘‡ is an isometric isomorphism between ๐ถ ๐‘š โข ( ๐ท ยฏ ) and ๐‘Š . Therefore, there exists a continuous linear functional ๐ฟ ~ โˆˆ ๐‘Š โˆ— such that

๐ฟ โข ( ๐‘“ )

๐ฟ ~ โข ( ๐‘‡ โข ๐‘“ ) , โˆ€ ๐‘“ โˆˆ ๐ถ ๐‘š โข ( ๐ท ยฏ ) .

By the Hahn-Banach theorem, ๐ฟ ~ can be extended to a continuous linear functional ๐ฟ ยฏ โˆˆ ( ๐ถ ( ๐ท ยฏ ; โ„ ๐ฝ ) ) โˆ— such that โ€– ๐ฟ โ€– ( ๐ถ ๐‘š โข ( ๐ท ยฏ ) ) โˆ—

โ€– ๐ฟ ~ โ€– ๐‘Š โˆ—

โ€– ๐ฟ ยฏ โ€– ( ๐ถ โข ( ๐ท ยฏ ; โ„ ๐ฝ ) ) โˆ— . We have that

๐ฟ โข ( ๐‘“ )

๐ฟ ~ โข ( ๐‘‡ โข ๐‘“ )

๐ฟ ยฏ โข ( ๐‘‡ โข ๐‘“ ) , โˆ€ ๐‘“ โˆˆ ๐ถ ๐‘š โข ( ๐ท ยฏ ) .

Since

( ๐ถ ( ๐ท ยฏ ; โ„ ๐ฝ ) ) โˆ— โ‰… ร— ๐‘—

1 ๐ฝ ( ๐ถ ( ๐ท ยฏ ) ) โˆ— โ‰… โจ ๐‘—

1 ๐ฝ ( ๐ถ ( ๐ท ยฏ ) ) โˆ— ,

we have, by applying (Leoni, 2009, Theorem B.111) ๐ฝ times, that there exist finite, signed, Radon measures { ๐œ† ๐›ผ } 0 โ‰ค | ๐›ผ | 1 โ‰ค ๐‘š such that

๐ฟ ยฏ โข ( ๐‘‡ โข ๐‘“ )

โˆ‘ 0 โ‰ค | ๐›ผ | 1 โ‰ค ๐‘š โˆซ ๐ท ยฏ โˆ‚ ๐›ผ ๐‘“ โข ๐–ฝ โข ๐œ† ๐›ผ , โˆ€ ๐‘“ โˆˆ ๐ถ ๐‘š โข ( ๐ท ยฏ )

as desired.  

Lemma 28

Let ๐ท โŠ‚ โ„ ๐‘‘ be a bounded, open set and ๐ฟ โˆˆ ( ๐‘Š ๐‘š , ๐‘ โข ( ๐ท ) ) โˆ— for some ๐‘š โ‰ฅ 0 and 1 โ‰ค ๐‘ < โˆž . For any closed and bounded set ๐พ โŠ‚ ๐‘Š ๐‘š , ๐‘ โข ( ๐ท ) (compact if ๐‘

1 ) and ๐œ–

0 , there exists a function ๐œ… โˆˆ ๐ถ ๐‘ โˆž โข ( ๐ท ) such that

sup ๐‘ข โˆˆ ๐พ | ๐ฟ โข ( ๐‘ข ) โˆ’ โˆซ ๐ท ๐œ… โข ๐‘ข โข ๐–ฝ ๐‘ฅ | < ๐œ– .

Proof  First consider the case ๐‘š

0 and 1 โ‰ค ๐‘ < โˆž . By the Riesz Representation Theorem (Conway, 2007, Appendix B), there exists a function ๐‘ฃ โˆˆ ๐ฟ ๐‘ž โข ( ๐ท ) such that

๐ฟ โข ( ๐‘ข )

โˆซ ๐ท ๐‘ฃ โข ๐‘ข โข ๐–ฝ ๐‘ฅ .

Since ๐พ is bounded, there is a constant ๐‘€

0 such that

sup ๐‘ข โˆˆ ๐พ โ€– ๐‘ข โ€– ๐ฟ ๐‘ โ‰ค ๐‘€ .

Suppose ๐‘

1 , so that 1 < ๐‘ž < โˆž . Density of ๐ถ ๐‘ โˆž โข ( ๐ท ) in ๐ฟ ๐‘ž โข ( ๐ท ) (Adams and Fournier, 2003, Corollary 2.30) implies there exists a function ๐œ… โˆˆ ๐ถ ๐‘ โˆž โข ( ๐ท ) such that

โ€– ๐‘ฃ โˆ’ ๐œ… โ€– ๐ฟ ๐‘ž < ๐œ– ๐‘€ .

By the Hรถlder inequality,

| ๐ฟ โข ( ๐‘ข ) โˆ’ โˆซ ๐ท ๐œ… โข ๐‘ข โข ๐–ฝ ๐‘ฅ | โ‰ค โ€– ๐‘ข โ€– ๐ฟ ๐‘ โข โ€– ๐‘ฃ โˆ’ ๐œ… โ€– ๐ฟ ๐‘ž < ๐œ– .

Suppose that ๐‘

1 then ๐‘ž

โˆž . Since ๐พ is totally bounded, there exists a number ๐‘› โˆˆ โ„• and functions ๐‘” 1 , โ€ฆ , ๐‘” ๐‘› โˆˆ ๐พ such that, for any ๐‘ข โˆˆ ๐พ ,

โ€– ๐‘ข โˆ’ ๐‘” ๐‘™ โ€– ๐ฟ 1 < ๐œ– 3 โข โ€– ๐‘ฃ โ€– ๐ฟ โˆž

for some ๐‘™ โˆˆ { 1 , โ€ฆ , ๐‘› } . Let ๐œ“ ๐œ‚ โˆˆ ๐ถ ๐‘ โˆž โข ( ๐ท ) denote a standard mollifier for any ๐œ‚

0 . We can find ๐œ‚

0 small enough such that

max ๐‘™ โˆˆ { 1 , โ€ฆ , ๐‘› } โก โ€– ๐œ“ ๐œ‚ โˆ— ๐‘” ๐‘™ โˆ’ ๐‘” ๐‘™ โ€– ๐ฟ 1 < ๐œ– 9 โข โ€– ๐‘ฃ โ€– ๐ฟ โˆž

Define ๐‘“

๐œ“ ๐œ‚ โˆ— ๐‘ฃ โˆˆ ๐ถ โข ( ๐ท ) and note that โ€– ๐‘“ โ€– ๐ฟ โˆž โ‰ค โ€– ๐‘ฃ โ€– ๐ฟ โˆž . By Fubiniโ€™s theorem, we find

| โˆซ ๐ท ( ๐‘“ โˆ’ ๐‘ฃ ) โข ๐‘” ๐‘™ โข ๐–ฝ ๐‘ฅ |

โˆซ ๐ท ๐‘ฃ โข ( ๐œ“ ๐œ‚ โˆ— ๐‘” ๐‘™ โˆ’ ๐‘” ๐‘™ ) โข ๐–ฝ ๐‘ฅ โ‰ค โ€– ๐‘ฃ โ€– ๐ฟ โˆž โข โ€– ๐œ“ ๐œ‚ โˆ— ๐‘” ๐‘™ โˆ’ ๐‘” ๐‘™ โ€– ๐ฟ 1 < ๐œ– 9 .

Since ๐‘” ๐‘™ โˆˆ ๐ฟ 1 โข ( ๐ท ) , by Lusinโ€™s theorem, we can find a compact set ๐ด โŠ‚ ๐ท such that

max ๐‘™ โˆˆ { 1 , โ€ฆ , ๐‘› } โข โˆซ ๐ท โˆ– ๐ด | ๐‘” ๐‘™ | โข ๐–ฝ ๐‘ฅ < ๐œ– 18 โข โ€– ๐‘ฃ โ€– ๐ฟ โˆž

Since ๐ถ ๐‘ โˆž โข ( ๐ท ) is dense in ๐ถ โข ( ๐ท ) over compact sets (Leoni, 2009, Theorem C.16), we can find a function ๐œ… โˆˆ ๐ถ ๐‘ โˆž โข ( ๐ท ) such that

sup ๐‘ฅ โˆˆ ๐ด | ๐œ… โข ( ๐‘ฅ ) โˆ’ ๐‘“ โข ( ๐‘ฅ ) | โ‰ค ๐œ– 9 โข ๐‘€

and โ€– ๐œ… โ€– ๐ฟ โˆž โ‰ค โ€– ๐‘“ โ€– ๐ฟ โˆž โ‰ค โ€– ๐‘ฃ โ€– ๐ฟ โˆž . We have,

| โˆซ ๐ท ( ๐œ… โˆ’ ๐‘ฃ ) โข ๐‘” ๐‘™ โข ๐–ฝ ๐‘ฅ |

โ‰ค โˆซ ๐ด | ( ๐œ… โˆ’ ๐‘ฃ ) โข ๐‘” ๐‘™ | โข ๐–ฝ ๐‘ฅ + โˆซ ๐ท โˆ– ๐ด | ( ๐œ… โˆ’ ๐‘ฃ ) โข ๐‘” ๐‘™ | โข ๐–ฝ ๐‘ฅ

โ‰ค โˆซ ๐ด | ( ๐œ… โˆ’ ๐‘“ ) โข ๐‘” ๐‘™ | โข ๐–ฝ ๐‘ฅ + โˆซ ๐ท | ( ๐‘“ โˆ’ ๐‘ฃ ) โข ๐‘” ๐‘™ | โข ๐–ฝ ๐‘ฅ + 2 โข โ€– ๐‘ฃ โ€– ๐ฟ โˆž โข โˆซ ๐ท โˆ– ๐ด | ๐‘” ๐‘™ | โข ๐–ฝ ๐‘ฅ

โ‰ค sup ๐‘ฅ โˆˆ ๐ด | ๐œ… โข ( ๐‘ฅ ) โˆ’ ๐‘“ โข ( ๐‘ฅ ) | โข โ€– ๐‘” ๐‘™ โ€– ๐ฟ 1 + 2 โข ๐œ– 9

< ๐œ– 3 .

Finally,

| ๐ฟ โข ( ๐‘ข ) โˆ’ โˆซ ๐ท ๐œ… โข ๐‘ข โข ๐–ฝ ๐‘ฅ |

โ‰ค | โˆซ ๐ท ๐‘ฃ โข ๐‘ข โข ๐–ฝ ๐‘ฅ โˆ’ โˆซ ๐ท ๐‘ฃ โข ๐‘” ๐‘™ โข ๐–ฝ ๐‘ฅ | + | โˆซ ๐ท ๐‘ฃ โข ๐‘” ๐‘™ โข ๐–ฝ ๐‘ฅ โˆ’ โˆซ ๐ท ๐œ… โข ๐‘ข โข ๐–ฝ ๐‘ฅ |

โ‰ค โ€– ๐‘ฃ โ€– ๐ฟ โˆž โข โ€– ๐‘ข โˆ’ ๐‘” ๐‘™ โ€– ๐ฟ 1 + | โˆซ ๐ท ๐œ… โข ๐‘ข โข ๐–ฝ ๐‘ฅ โˆ’ โˆซ ๐ท ๐œ… โข ๐‘” ๐‘™ โข ๐–ฝ ๐‘ฅ | + | โˆซ ๐ท ๐œ… โข ๐‘” ๐‘™ โข ๐–ฝ ๐‘ฅ โˆ’ โˆซ ๐ท ๐‘ฃ โข ๐‘” ๐‘™ โข ๐–ฝ ๐‘ฅ |

โ‰ค ๐œ– 3 + โ€– ๐œ… โ€– ๐ฟ โˆž โข โ€– ๐‘ข โˆ’ ๐‘” ๐‘™ โ€– ๐ฟ 1 + | โˆซ ๐ท ( ๐œ… โˆ’ ๐‘ฃ ) โข ๐‘” ๐‘™ โข ๐–ฝ ๐‘ฅ |

โ‰ค 2 โข ๐œ– 3 + โ€– ๐‘ฃ โ€– ๐ฟ โˆž โข โ€– ๐‘ข โˆ’ ๐‘” ๐‘™ โ€– ๐ฟ 1

< ๐œ– .

Suppose ๐‘š โ‰ฅ 1 . By the Riesz Representation Theorem (Adams and Fournier, 2003, Theorem 3.9), there exist elements ( ๐‘ฃ ๐›ผ ) 0 โ‰ค | ๐›ผ | 1 โ‰ค ๐‘š of ๐ฟ ๐‘ž โข ( ๐ท ) where ๐›ผ โˆˆ โ„• ๐‘‘ is a multi-index such that

๐ฟ โข ( ๐‘ข )

โˆ‘ 0 โ‰ค | ๐›ผ | 1 โ‰ค ๐‘š โˆซ ๐ท ๐‘ฃ ๐›ผ โข โˆ‚ ๐›ผ ๐‘ข โข ๐–ฝ โข ๐‘ฅ .

Since ๐พ is bounded, there is a constant ๐‘€

0 such that

sup ๐‘ข โˆˆ ๐พ โ€– ๐‘ข โ€– ๐‘Š ๐‘š , ๐‘ โ‰ค ๐‘€ .

Suppose ๐‘

1 , so that 1 < ๐‘ž < โˆž . Density of ๐ถ 0 โˆž โข ( ๐ท ) in ๐ฟ ๐‘ž โข ( ๐ท ) implies there exist functions ( ๐‘“ ๐›ผ ) 0 โ‰ค | ๐›ผ | 1 โ‰ค ๐‘š in ๐ถ ๐‘ โˆž โข ( ๐ท ) such that

โ€– ๐‘“ ๐›ผ โˆ’ ๐‘ฃ ๐›ผ โ€– ๐ฟ ๐‘ž < ๐œ– ๐‘€ โข ๐ฝ

where ๐ฝ

| { ๐›ผ โˆˆ โ„• ๐‘‘ : | ๐›ผ | 1 โ‰ค ๐‘š } | . Let

๐œ…

โˆ‘ 0 โ‰ค | ๐›ผ | 1 โ‰ค ๐‘š ( โˆ’ 1 ) | ๐›ผ | 1 โข โˆ‚ ๐›ผ ๐‘“ ๐›ผ

then, by definition of a weak derivative,

โˆซ ๐ท ๐œ… โข ๐‘ข โข ๐–ฝ ๐‘ฅ

โˆ‘ 0 โ‰ค | ๐›ผ | 1 โ‰ค ๐‘š ( โˆ’ 1 ) | ๐›ผ | 1 โข โˆซ ๐ท โˆ‚ ๐›ผ ๐‘“ ๐›ผ โข ๐‘ข โข ๐–ฝ โข ๐‘ฅ

โˆ‘ 0 โ‰ค | ๐›ผ | 1 โ‰ค ๐‘š โˆซ ๐ท ๐‘“ ๐›ผ โข โˆ‚ ๐›ผ ๐‘ข โข ๐–ฝ โข ๐‘ฅ .

By the Hรถlder inequality,

| ๐ฟ โข ( ๐‘ข ) โˆ’ โˆซ ๐ท ๐œ… โข ๐‘ข โข ๐–ฝ ๐‘ฅ | โ‰ค โˆ‘ 0 โ‰ค | ๐›ผ | 1 โ‰ค ๐‘š โ€– โˆ‚ ๐›ผ ๐‘ข โ€– ๐ฟ ๐‘ โข โ€– ๐‘“ ๐›ผ โˆ’ ๐‘ฃ ๐›ผ โ€– ๐ฟ ๐‘ž < ๐‘€ โข โˆ‘ 0 โ‰ค | ๐›ผ | 1 โ‰ค ๐‘š ๐œ– ๐‘€ โข ๐ฝ

๐œ– .

Suppose that ๐‘

1 then ๐‘ž

โˆž . Define the constant ๐ถ ๐‘ฃ

0 by

๐ถ ๐‘ฃ

โˆ‘ 0 โ‰ค | ๐›ผ | 1 โ‰ค ๐‘š โ€– ๐‘ฃ ๐›ผ โ€– ๐ฟ โˆž .

Since ๐พ is totally bounded, there exists a number ๐‘› โˆˆ โ„• and functions ๐‘” 1 , โ€ฆ , ๐‘” ๐‘› โˆˆ ๐พ such that, for any ๐‘ข โˆˆ ๐พ ,

โ€– ๐‘ข โˆ’ ๐‘” ๐‘™ โ€– ๐‘Š ๐‘š , 1 < ๐œ– 3 โข ๐ถ ๐‘ฃ

for some ๐‘™ โˆˆ { 1 , โ€ฆ , ๐‘› } . Let ๐œ“ ๐œ‚ โˆˆ ๐ถ ๐‘ โˆž โข ( ๐ท ) denote a standard mollifier for any ๐œ‚

0 . We can find ๐œ‚

0 small enough such that

max ๐›ผ โก max ๐‘™ โˆˆ { 1 , โ€ฆ , ๐‘› } โก โ€– ๐œ“ ๐œ‚ โˆ— โˆ‚ ๐›ผ ๐‘” ๐‘™ โˆ’ โˆ‚ ๐›ผ ๐‘” ๐‘™ โ€– ๐ฟ 1 < ๐œ– 9 โข ๐ถ ๐‘ฃ .

Define ๐‘“ ๐›ผ

๐œ“ ๐œ‚ โˆ— ๐‘ฃ ๐›ผ โˆˆ ๐ถ โข ( ๐ท ) and note that โ€– ๐‘“ ๐›ผ โ€– ๐ฟ โˆž โ‰ค โ€– ๐‘ฃ ๐›ผ โ€– ๐ฟ โˆž . By Fubiniโ€™s theorem, we find

โˆ‘ 0 โ‰ค | ๐›ผ | 1 โ‰ค ๐‘š | โˆซ ๐ท ( ๐‘“ ๐›ผ โˆ’ ๐‘ฃ ๐›ผ ) โข โˆ‚ ๐›ผ ๐‘” ๐‘™ โข ๐–ฝ โข ๐‘ฅ |

โˆ‘ 0 โ‰ค | ๐›ผ | 1 โ‰ค ๐‘š | โˆซ ๐ท ๐‘ฃ ๐›ผ โข ( ๐œ“ ๐œ‚ โˆ— โˆ‚ ๐›ผ ๐‘” ๐‘™ โˆ’ โˆ‚ ๐›ผ ๐‘” ๐‘™ ) โข ๐–ฝ ๐‘ฅ |

โ‰ค โˆ‘ 0 โ‰ค | ๐›ผ | 1 โ‰ค ๐‘š โ€– ๐‘ฃ ๐›ผ โ€– ๐ฟ โˆž โข โ€– ๐œ“ ๐œ‚ โˆ— โˆ‚ ๐›ผ ๐‘” ๐‘™ โˆ’ โˆ‚ ๐›ผ ๐‘” ๐‘™ โ€– ๐ฟ 1

< ๐œ– 9 .

Since โˆ‚ ๐›ผ ๐‘” ๐‘™ โˆˆ ๐ฟ 1 โข ( ๐ท ) , by Lusinโ€™s theorem, we can find a compact set ๐ด โŠ‚ ๐ท such that

max ๐›ผ โก max ๐‘™ โˆˆ { 1 , โ€ฆ , ๐‘› } โข โˆซ ๐ท โˆ– ๐ด | โˆ‚ ๐›ผ ๐‘” ๐‘™ | โข ๐–ฝ ๐‘ฅ < ๐œ– 18 โข ๐ถ ๐‘ฃ .

Since ๐ถ ๐‘ โˆž โข ( ๐ท ) is dense in ๐ถ โข ( ๐ท ) over compact sets, we can find functions ๐‘ค ๐›ผ โˆˆ ๐ถ ๐‘ โˆž โข ( ๐ท ) such that

sup ๐‘ฅ โˆˆ ๐ด | ๐‘ค ๐›ผ โข ( ๐‘ฅ ) โˆ’ ๐‘“ ๐›ผ โข ( ๐‘ฅ ) | โ‰ค ๐œ– 9 โข ๐‘€ โข ๐ฝ

where ๐ฝ

| { ๐›ผ โˆˆ โ„• ๐‘‘ : | ๐›ผ | 1 โ‰ค ๐‘š } | and โ€– ๐‘ค ๐›ผ โ€– ๐ฟ โˆž โ‰ค โ€– ๐‘“ ๐›ผ โ€– ๐ฟ โˆž โ‰ค โ€– ๐‘ฃ ๐›ผ โ€– ๐ฟ โˆž . We have,

โˆ‘ 0 โ‰ค | ๐›ผ | 1 โ‰ค ๐‘š โˆซ ๐ท | ( ๐‘ค ๐›ผ โˆ’ ๐‘ฃ ๐›ผ ) โข โˆ‚ ๐›ผ ๐‘” ๐‘™ |

โˆ‘ 0 โ‰ค | ๐›ผ | 1 โ‰ค ๐‘š ( โˆซ ๐ด | ( ๐‘ค ๐›ผ โˆ’ ๐‘ฃ ๐›ผ ) โข โˆ‚ ๐›ผ ๐‘” ๐‘™ | โข ๐‘‘ ๐‘ฅ + โˆซ ๐ท โˆ– ๐ด | ( ๐‘ค ๐›ผ โˆ’ ๐‘ฃ ๐›ผ ) โข โˆ‚ ๐›ผ ๐‘” ๐‘™ | โข ๐‘‘ ๐‘ฅ )

โ‰ค โˆ‘ 0 โ‰ค | ๐›ผ | 1 โ‰ค ๐‘š ( โˆซ ๐ด | ( ๐‘ค ๐›ผ โˆ’ ๐‘“ ๐›ผ ) โˆ‚ ๐›ผ ๐‘” ๐‘™ | ๐–ฝ ๐‘ฅ + โˆซ ๐ท | ( ๐‘“ ๐›ผ โˆ’ ๐‘ฃ ๐›ผ ) โˆ‚ ๐›ผ ๐‘” ๐‘™ | ๐–ฝ ๐‘ฅ

  • 2 โˆฅ ๐‘ฃ ๐›ผ โˆฅ ๐ฟ โˆž โˆซ ๐ท โˆ– ๐ด | โˆ‚ ๐›ผ ๐‘” ๐‘™ | ๐–ฝ ๐‘ฅ )

โ‰ค โˆ‘ 0 โ‰ค | ๐›ผ | 1 โ‰ค ๐‘š sup ๐‘ฅ โˆˆ ๐ด | ๐‘ค ๐›ผ โข ( ๐‘ฅ ) โˆ’ ๐‘“ ๐›ผ โข ( ๐‘ฅ ) | โข โ€– โˆ‚ ๐›ผ ๐‘” ๐‘™ โ€– ๐ฟ 1 + 2 โข ๐œ– 9

< ๐œ– 3 .

Let

๐œ…

โˆ‘ 0 โ‰ค | ๐›ผ | 1 โ‰ค ๐‘š ( โˆ’ 1 ) | ๐›ผ | 1 โข โˆ‚ ๐›ผ ๐‘ค ๐›ผ .

then, by definition of a weak derivative,

โˆซ ๐ท ๐œ… โข ๐‘ข โข ๐–ฝ ๐‘ฅ

โˆ‘ 0 โ‰ค | ๐›ผ | 1 โ‰ค ๐‘š ( โˆ’ 1 ) | ๐›ผ | 1 โข โˆซ ๐ท โˆ‚ ๐›ผ ๐‘ค ๐›ผ โข ๐‘ข โข ๐–ฝ โข ๐‘ฅ

โˆ‘ 0 โ‰ค | ๐›ผ | 1 โ‰ค ๐‘š โˆซ ๐ท ๐‘ค ๐›ผ โข โˆ‚ ๐›ผ ๐‘ข โข ๐–ฝ โข ๐‘ฅ .

Finally,

| ๐ฟ โข ( ๐‘ข ) โˆ’ โˆซ ๐ท ๐œ… โข ๐‘ข โข ๐–ฝ ๐‘ฅ |

โ‰ค โˆ‘ 0 โ‰ค | ๐›ผ | 1 โ‰ค ๐‘š โˆซ ๐ท | ๐‘ฃ ๐›ผ โข โˆ‚ ๐›ผ ๐‘ข โˆ’ ๐‘ค ๐›ผ โข โˆ‚ ๐›ผ ๐‘ข | โข ๐–ฝ ๐‘ฅ

โ‰ค โˆ‘ 0 โ‰ค | ๐›ผ | 1 โ‰ค ๐‘š ( โˆซ ๐ท | ๐‘ฃ ๐›ผ โข ( โˆ‚ ๐›ผ ๐‘ข โˆ’ โˆ‚ ๐›ผ ๐‘” ๐‘™ ) | โข ๐–ฝ ๐‘ฅ + โˆซ ๐ท | ๐‘ฃ ๐›ผ โข โˆ‚ ๐›ผ ๐‘” ๐‘™ โˆ’ ๐‘ค ๐›ผ โข โˆ‚ ๐›ผ ๐‘ข | โข ๐–ฝ ๐‘ฅ )

โ‰ค โˆ‘ 0 โ‰ค | ๐›ผ | 1 โ‰ค ๐‘š ( โ€– ๐‘ฃ ๐›ผ โ€– ๐ฟ โˆž โข โ€– ๐‘ข โˆ’ ๐‘” ๐‘™ โ€– ๐‘Š ๐‘š , 1 + โˆซ ๐ท | ( ๐‘ฃ ๐›ผ โˆ’ ๐‘ค ๐›ผ ) โข โˆ‚ ๐›ผ ๐‘” ๐‘™ | โข ๐–ฝ ๐‘ฅ + โˆซ ๐ท | ( โˆ‚ ๐›ผ ๐‘” ๐‘™ โˆ’ โˆ‚ ๐›ผ ๐‘ข ) โข ๐‘ค ๐›ผ | โข ๐–ฝ ๐‘ฅ )

< 2 โข ๐œ– 3 + โˆ‘ 0 โ‰ค | ๐›ผ | 1 โ‰ค ๐‘š โˆฅ โข ๐‘ค ๐›ผ โˆฅ ๐ฟ โˆž โข โ€– ๐‘ข โˆ’ ๐‘” ๐‘™ โ€– ๐‘Š ๐‘š , 1

< ๐œ– .

 

Lemma 29

Let $D \subset \mathbb{R}^d$ be a domain and $L \in (C^m(\bar{D}))^*$ for some $m \in \mathbb{N}_0$. For any compact set $K \subset C^m(\bar{D})$ and $\epsilon > 0$, there exist distinct points $y_{11}, \dots, y_{1 n_1}, \dots, y_{J n_J} \in D$ and numbers $c_{11}, \dots, c_{1 n_1}, \dots, c_{J n_J} \in \mathbb{R}$ such that

$$\sup_{u \in K} \left| L(u) - \sum_{j=1}^{J} \sum_{k=1}^{n_j} c_{jk} \, \partial^{\alpha_j} u(y_{jk}) \right| \le \epsilon$$

where $\alpha_1, \dots, \alpha_J$ is an enumeration of the set $\{\alpha \in \mathbb{N}_0^d : 0 \le |\alpha|_1 \le m\}$.

Proof  By Lemma 27, there exist finite, signed, Radon measures $\{\lambda_\alpha\}_{0 \le |\alpha|_1 \le m}$ such that

$$L(u) = \sum_{0 \le |\alpha|_1 \le m} \int_{\bar{D}} \partial^\alpha u \,\mathrm{d}\lambda_\alpha, \qquad \forall u \in C^m(\bar{D}).$$

Let $\alpha_1, \dots, \alpha_J$ be an enumeration of the set $\{\alpha \in \mathbb{N}_0^d : 0 \le |\alpha|_1 \le m\}$. By weak density of the Dirac measures (Bogachev, 2007, Example 8.1.6), we can find points $y_{11}, \dots, y_{1 n_1}, \dots, y_{J1}, \dots, y_{J n_J} \in \bar{D}$ as well as numbers $c_{11}, \dots, c_{J n_J} \in \mathbb{R}$ such that

$$\left| \int_{\bar{D}} \partial^{\alpha_j} u \,\mathrm{d}\lambda_{\alpha_j} - \sum_{k=1}^{n_j} c_{jk} \, \partial^{\alpha_j} u(y_{jk}) \right| \le \frac{\epsilon}{4J}, \qquad \forall u \in C^m(\bar{D})$$

for any $j \in \{1, \dots, J\}$. Therefore,

$$\left| \sum_{j=1}^{J} \int_{\bar{D}} \partial^{\alpha_j} u \,\mathrm{d}\lambda_{\alpha_j} - \sum_{j=1}^{J} \sum_{k=1}^{n_j} c_{jk} \, \partial^{\alpha_j} u(y_{jk}) \right| \le \frac{\epsilon}{4}, \qquad \forall u \in C^m(\bar{D}).$$

Define the constant

$$Q := \sum_{j=1}^{J} \sum_{k=1}^{n_j} |c_{jk}|.$$

Since $K$ is compact, we can find functions $g_1, \dots, g_N \in K$ such that, for any $u \in K$, there exists $l \in \{1, \dots, N\}$ such that

$$\|u - g_l\|_{C^m} \le \frac{\epsilon}{4Q}.$$

Suppose that some $y_{jk} \in \partial D$. By uniform continuity, we can find a point $\tilde{y}_{jk} \in D$ such that

$$\max_{l \in \{1,\dots,N\}} \left| \partial^{\alpha_j} g_l(y_{jk}) - \partial^{\alpha_j} g_l(\tilde{y}_{jk}) \right| \le \frac{\epsilon}{4Q}.$$

Denote

$$S(u) = \sum_{j=1}^{J} \sum_{k=1}^{n_j} c_{jk} \, \partial^{\alpha_j} u(y_{jk})$$

and by $\tilde{S}(u)$ the sum $S(u)$ with $y_{jk}$ replaced by $\tilde{y}_{jk}$. Then, for any $u \in K$, we have

$$\begin{aligned}
|L(u) - \tilde{S}(u)| &\le |L(u) - S(u)| + |S(u) - \tilde{S}(u)| \\
&\le \frac{\epsilon}{4} + \left| c_{jk}\, \partial^{\alpha_j} u(\tilde{y}_{jk}) - c_{jk}\, \partial^{\alpha_j} u(y_{jk}) \right| \\
&\le \frac{\epsilon}{4} + \left| c_{jk}\, \partial^{\alpha_j} u(\tilde{y}_{jk}) - c_{jk}\, \partial^{\alpha_j} g_l(\tilde{y}_{jk}) \right| + \left| c_{jk}\, \partial^{\alpha_j} g_l(\tilde{y}_{jk}) - c_{jk}\, \partial^{\alpha_j} u(y_{jk}) \right| \\
&\le \frac{\epsilon}{4} + |c_{jk}|\, \|u - g_l\|_{C^m} + \left| c_{jk}\, \partial^{\alpha_j} g_l(\tilde{y}_{jk}) - c_{jk}\, \partial^{\alpha_j} g_l(y_{jk}) \right| + \left| c_{jk}\, \partial^{\alpha_j} g_l(y_{jk}) - c_{jk}\, \partial^{\alpha_j} u(y_{jk}) \right| \\
&\le \frac{\epsilon}{4} + 2 |c_{jk}|\, \|u - g_l\|_{C^m} + |c_{jk}|\, \left| \partial^{\alpha_j} g_l(\tilde{y}_{jk}) - \partial^{\alpha_j} g_l(y_{jk}) \right| \\
&\le \epsilon.
\end{aligned}$$

Since there are a finite number of points, this implies that all points $y_{jk}$ can be chosen in $D$. Suppose now that $y_{jk} = y_{qp}$ for some $(j,k) \ne (q,p)$. As before, we can always find a point $\tilde{y}_{jk}$ distinct from all others such that

$$\max_{l \in \{1,\dots,N\}} \left| \partial^{\alpha_j} g_l(y_{jk}) - \partial^{\alpha_j} g_l(\tilde{y}_{jk}) \right| \le \frac{\epsilon}{4Q}.$$

Repeating the previous argument then shows that all points $y_{jk}$ can be chosen distinct, as desired. ∎

Lemma 30

Let $D \subset \mathbb{R}^d$ be a domain and $L \in (C(\bar{D}))^*$. For any compact set $K \subset C(\bar{D})$ and $\epsilon > 0$, there exists a function $\kappa \in C_c^\infty(D)$ such that

$$\sup_{u \in K} \left| L(u) - \int_D \kappa u \,\mathrm{d}x \right| < \epsilon.$$

Proof  By Lemma 29, we can find distinct points $y_1, \dots, y_n \in D$ as well as numbers $c_1, \dots, c_n \in \mathbb{R}$ such that

$$\sup_{u \in K} \left| L(u) - \sum_{j=1}^{n} c_j u(y_j) \right| \le \frac{\epsilon}{3}.$$

Define the constant

$$Q := \sum_{j=1}^{n} |c_j|.$$

Since $K$ is compact, there exist functions $g_1, \dots, g_J \in K$ such that, for any $u \in K$, there exists some $l \in \{1, \dots, J\}$ such that

$$\|u - g_l\|_{C} \le \frac{\epsilon}{6 n Q}.$$

Let $r > 0$ be such that the open balls $B_r(y_j) \subset D$ are pairwise disjoint. Let $\psi_\eta \in C_c^\infty(\mathbb{R}^d)$ denote the standard mollifier with parameter $\eta > 0$, noting that $\operatorname{supp} \psi_r \subseteq B_r(0)$. We can find a number $0 < \gamma \le r$ such that

$$\max_{\substack{l \in \{1,\dots,J\} \\ j \in \{1,\dots,n\}}} \left| \int_D \psi_\gamma(x - y_j)\, g_l(x) \,\mathrm{d}x - g_l(y_j) \right| \le \frac{\epsilon}{3 n Q}.$$

Define $\kappa : \mathbb{R}^d \to \mathbb{R}$ by

$$\kappa(x) = \sum_{j=1}^{n} c_j \, \psi_\gamma(x - y_j), \qquad \forall x \in \mathbb{R}^d.$$

Since $\operatorname{supp} \psi_\gamma(\cdot - y_j) \subseteq B_r(y_j)$, we have that $\kappa \in C_c^\infty(D)$. Then, for any $u \in K$,

$$\begin{aligned}
\left| L(u) - \int_D \kappa u \,\mathrm{d}x \right|
&\le \left| L(u) - \sum_{j=1}^{n} c_j u(y_j) \right| + \left| \sum_{j=1}^{n} c_j u(y_j) - \int_D \kappa u \,\mathrm{d}x \right| \\
&\le \frac{\epsilon}{3} + \sum_{j=1}^{n} |c_j| \left| u(y_j) - \int_D \psi_\gamma(x - y_j)\, u(x) \,\mathrm{d}x \right| \\
&\le \frac{\epsilon}{3} + Q \sum_{j=1}^{n} \left( |u(y_j) - g_l(y_j)| + \left| g_l(y_j) - \int_D \psi_\gamma(x - y_j)\, g_l(x) \,\mathrm{d}x \right| + \left| \int_D \psi_\gamma(x - y_j)\,\big(g_l(x) - u(x)\big) \,\mathrm{d}x \right| \right) \\
&\le \frac{\epsilon}{3} + n Q \|u - g_l\|_{C} + n Q \frac{\epsilon}{3 n Q} + Q \|g_l - u\|_{C} \sum_{j=1}^{n} \int_D \psi_\gamma(x - y_j) \,\mathrm{d}x \\
&\le \frac{2\epsilon}{3} + 2 n Q \|u - g_l\|_{C} \\
&\le \epsilon,
\end{aligned}$$

where we use the fact that mollifiers are non-negative and integrate to one. ∎
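The construction in Lemma 30 — a weighted sum of mollifiers whose integral against $u$ recovers $\sum_j c_j u(y_j)$ as the mollifier width shrinks — can be checked numerically. Below is a minimal sketch; the test function $u$, points $y_j$, and weights $c_j$ are made-up examples, not taken from the paper:

```python
import numpy as np

def trapezoid(y, x):
    """Composite trapezoidal rule along the last axis."""
    return np.sum(0.5 * (y[..., 1:] + y[..., :-1]) * np.diff(x), axis=-1)

def bump(t, gamma):
    """Unnormalized standard bump supported on (-gamma, gamma)."""
    out = np.zeros_like(t)
    inside = np.abs(t) < gamma
    z = t[inside] / gamma
    out[inside] = np.exp(-1.0 / (1.0 - z * z))
    return out

def mollifier(t, gamma):
    """Standard mollifier psi_gamma: non-negative, mass one, support (-gamma, gamma)."""
    s = np.linspace(-gamma, gamma, 20001)
    return bump(t, gamma) / trapezoid(bump(s, gamma), s)

u = lambda x: np.sin(2.0 * np.pi * x)   # hypothetical u in C([0, 1])
ys = np.array([0.25, 0.5, 0.8])         # distinct points y_j in D = (0, 1)
cs = np.array([1.0, -2.0, 0.5])         # weights c_j

target = float(np.sum(cs * u(ys)))      # the functional L(u) = sum_j c_j u(y_j)

x = np.linspace(0.0, 1.0, 200001)
errors = []
for gamma in [0.1, 0.05, 0.01]:
    # kappa(x) = sum_j c_j psi_gamma(x - y_j), as in the lemma
    kappa = sum(c * mollifier(x - y, gamma) for c, y in zip(cs, ys))
    errors.append(abs(float(trapezoid(kappa * u(x), x)) - target))
print(errors)  # error shrinks as gamma -> 0
```

The balls of radius $\gamma$ around the chosen $y_j$ stay inside $(0,1)$ and pairwise disjoint, mirroring the choice of $r$ in the proof.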

Lemma 31

Let $D \subset \mathbb{R}^d$ be a domain and $L \in (C^m(\bar{D}))^*$. For any compact set $K \subset C^m(\bar{D})$ and $\epsilon > 0$, there exist functions $\kappa_1, \dots, \kappa_J \in C_c^\infty(D)$ such that

$$\sup_{u \in K} \left| L(u) - \sum_{j=1}^{J} \int_D \kappa_j \, \partial^{\alpha_j} u \,\mathrm{d}x \right| < \epsilon$$

where $\alpha_1, \dots, \alpha_J$ is an enumeration of the set $\{\alpha \in \mathbb{N}_0^d : 0 \le |\alpha|_1 \le m\}$.

Proof  By Lemma 29, we find distinct points $y_{11}, \dots, y_{1 n_1}, \dots, y_{J n_J} \in D$ and numbers $c_{11}, \dots, c_{J n_J} \in \mathbb{R}$ such that

$$\sup_{u \in K} \left| L(u) - \sum_{j=1}^{J} \sum_{k=1}^{n_j} c_{jk} \, \partial^{\alpha_j} u(y_{jk}) \right| \le \frac{\epsilon}{2}.$$

Applying the proof of Lemma 30 $J$ times, once to each of the inner sums, we find functions $\kappa_1, \dots, \kappa_J \in C_c^\infty(D)$ such that

$$\max_{j \in \{1,\dots,J\}} \left| \int_D \kappa_j \, \partial^{\alpha_j} u \,\mathrm{d}x - \sum_{k=1}^{n_j} c_{jk} \, \partial^{\alpha_j} u(y_{jk}) \right| \le \frac{\epsilon}{2J}.$$

Then, for any $u \in K$,

$$\left| L(u) - \sum_{j=1}^{J} \int_D \kappa_j \, \partial^{\alpha_j} u \,\mathrm{d}x \right| \le \left| L(u) - \sum_{j=1}^{J} \sum_{k=1}^{n_j} c_{jk} \, \partial^{\alpha_j} u(y_{jk}) \right| + \sum_{j=1}^{J} \left| \int_D \kappa_j \, \partial^{\alpha_j} u \,\mathrm{d}x - \sum_{k=1}^{n_j} c_{jk} \, \partial^{\alpha_j} u(y_{jk}) \right| \le \epsilon$$

as desired. ∎

D

The following lemmas show that the three pieces used in constructing the approximation from Lemma 22, which are schematically depicted in Figure 16, can all be approximated by NO(s). Lemma 32 shows that ๐น ๐ฝ : ๐’œ โ†’ โ„ ๐ฝ can be approximated by an element of ๐–จ๐–ฎ by mapping to a vector-valued constant function. Similarly, Lemma 34 shows that ๐บ ๐ฝ โ€ฒ : โ„ ๐ฝ โ€ฒ โ†’ ๐’ฐ can be approximated by an element of ๐–จ๐–ฎ by mapping a vector-valued constant function to the coefficients of a basis expansion. Finally, Lemma 35 shows that NO(s) can exactly represent any standard neural network by viewing the inputs and outputs as vector-valued constant functions.

Lemma 32

Let Assumption 9 hold. Let $\{c_j\}_{j=1}^{n} \subset \mathcal{A}^*$ for some $n \in \mathbb{N}$. Define the map $F : \mathcal{A} \to \mathbb{R}^n$ by

$$F(a) = (c_1(a), \dots, c_n(a)), \qquad \forall a \in \mathcal{A}.$$

Then, for any compact set $K \subset \mathcal{A}$, $\sigma \in \mathsf{A}_0$, and $\epsilon > 0$, there exist a number $L \in \mathbb{N}$ and a neural network $\kappa \in \mathsf{N}_L(\sigma; \mathbb{R}^d \times \mathbb{R}^d, \mathbb{R}^{n \times 1})$ such that

$$\sup_{a \in K} \sup_{y \in \bar{D}} \left| F(a) - \int_D \kappa(y, x)\, a(x) \,\mathrm{d}x \right|_1 \le \epsilon.$$

Proof  Since $K$ is bounded, there exists a number $M > 0$ such that

$$\sup_{a \in K} \|a\|_{\mathcal{A}} \le M.$$

Define the constant

$$Q := \begin{cases} M, & \mathcal{A} = W^{m,p}(D) \\ M |D|, & \mathcal{A} = C(\bar{D}) \end{cases}$$

and let $p = 1$ if $\mathcal{A} = C(\bar{D})$. By Lemma 28 and Lemma 30, there exist functions $f_1, \dots, f_n \in C_c^\infty(D)$ such that

$$\max_{j \in \{1,\dots,n\}} \sup_{a \in K} \left| c_j(a) - \int_D f_j a \,\mathrm{d}x \right| \le \frac{\epsilon}{2 n^{1/p}}.$$

Since $\sigma \in \mathsf{A}_0$, there exist some $L \in \mathbb{N}$ and neural networks $\psi_1, \dots, \psi_n \in \mathsf{N}_L(\sigma; \mathbb{R}^d)$ such that

$$\max_{j \in \{1,\dots,n\}} \|\psi_j - f_j\|_{C} \le \frac{\epsilon}{2 Q n^{1/p}}.$$

By setting all weights associated to the first argument to zero, we can modify each neural network $\psi_j$ to a neural network $\psi_j \in \mathsf{N}_L(\sigma; \mathbb{R}^d \times \mathbb{R}^d)$ so that

$$\psi_j(y, x) = \psi_j(x)\, \mathbb{1}(y), \qquad \forall y, x \in \mathbb{R}^d.$$

Define $\kappa \in \mathsf{N}_L(\sigma; \mathbb{R}^d \times \mathbb{R}^d, \mathbb{R}^{n \times 1})$ by

$$\kappa(y, x) = [\psi_1(y, x), \dots, \psi_n(y, x)]^T.$$

Then for any $a \in K$ and $y \in \bar{D}$, we have

$$\begin{aligned}
\left| F(a) - \int_D \kappa(y, x)\, a \,\mathrm{d}x \right|_p^p
&= \sum_{j=1}^{n} \left| c_j(a) - \int_D \mathbb{1}(y)\, \psi_j(x)\, a(x) \,\mathrm{d}x \right|^p \\
&\le 2^{p-1} \sum_{j=1}^{n} \left( \left| c_j(a) - \int_D f_j a \,\mathrm{d}x \right|^p + \left| \int_D (f_j - \psi_j)\, a \,\mathrm{d}x \right|^p \right) \\
&\le \frac{\epsilon^p}{2} + 2^{p-1} n Q^p \|f_j - \psi_j\|_{C}^p \\
&\le \epsilon^p
\end{aligned}$$

and the result follows by finite-dimensional norm equivalence. ∎
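The kernel built in Lemma 32 depends only on its second argument, so the integral layer outputs a constant (i.e. $y$-independent) vector-valued function encoding the functionals $c_j(a) = \int_D f_j a \,\mathrm{d}x$. A minimal numerical sketch, with made-up representers $f_j$ and input $a$ (not from the paper), compared against the closed-form integrals:

```python
import numpy as np

def trapezoid(y, x):
    """Composite trapezoidal rule along the last axis."""
    return np.sum(0.5 * (y[..., 1:] + y[..., :-1]) * np.diff(x), axis=-1)

fs = [lambda x: np.ones_like(x), lambda x: x, np.sin]  # hypothetical representers f_j
a = lambda x: np.exp(-x)                               # hypothetical input on D = (0, 1)

x = np.linspace(0.0, 1.0, 100001)
# kappa(y, x) = [f_1(x), ..., f_n(x)]^T is independent of y, so the layer's
# output a -> int_D kappa(y, x) a(x) dx takes the same value at every y.
kappa = np.stack([f(x) for f in fs])                   # shape (n, num_points)
F_a = trapezoid(kappa * a(x), x)                       # approximates F(a) in R^n

# Closed-form values of the three functionals for this choice of f_j and a.
exact = np.array([
    1.0 - np.exp(-1.0),                                        # int e^{-x} dx
    1.0 - 2.0 * np.exp(-1.0),                                  # int x e^{-x} dx
    0.5 * (1.0 - np.exp(-1.0) * (np.sin(1.0) + np.cos(1.0))),  # int sin(x) e^{-x} dx
])
print(np.abs(F_a - exact).max())  # quadrature error only
```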

Lemma 33

Suppose $D \subset \mathbb{R}^d$ is a domain and let $\{c_j\}_{j=1}^{n} \subset (C^m(\bar{D}))^*$ for some $m, n \in \mathbb{N}$. Define the map $F : C^m(\bar{D}) \to \mathbb{R}^n$ by

$$F(a) = (c_1(a), \dots, c_n(a)), \qquad \forall a \in C^m(\bar{D}).$$

Then, for any compact set $K \subset C^m(\bar{D})$, $\sigma \in \mathsf{A}_0$, and $\epsilon > 0$, there exist a number $L \in \mathbb{N}$ and a neural network $\kappa \in \mathsf{N}_L(\sigma; \mathbb{R}^d \times \mathbb{R}^d, \mathbb{R}^{n \times J})$ such that

$$\sup_{a \in K} \sup_{y \in \bar{D}} \left| F(a) - \int_D \kappa(y, x) \big( \partial^{\alpha_1} a(x), \dots, \partial^{\alpha_J} a(x) \big) \,\mathrm{d}x \right|_1 \le \epsilon$$

where $\alpha_1, \dots, \alpha_J$ is an enumeration of the set $\{\alpha \in \mathbb{N}^d : 0 \le |\alpha|_1 \le m\}$.

Proof  The proof follows as in Lemma 32, replacing the use of Lemmas 28 and 30 with Lemma 31. ∎

Lemma 34

Let Assumption 10 hold. Let $\{\varphi_j\}_{j=1}^{n} \subset \mathcal{U}$ for some $n \in \mathbb{N}$. Define the map $G : \mathbb{R}^n \to \mathcal{U}$ by

$$G(w) = \sum_{j=1}^{n} w_j \varphi_j, \qquad \forall w \in \mathbb{R}^n.$$

Then, for any compact set $K \subset \mathbb{R}^n$, $\sigma \in \mathsf{A}_{m_2}$, and $\epsilon > 0$, there exist a number $L \in \mathbb{N}$ and a neural network $\kappa \in \mathsf{N}_L(\sigma; \mathbb{R}^{d'} \times \mathbb{R}^{d'}, \mathbb{R}^{1 \times n})$ such that

$$\sup_{w \in K} \left\| G(w) - \int_{D'} \kappa(\cdot, x)\, w \mathbb{1}(x) \,\mathrm{d}x \right\|_{\mathcal{U}} \le \epsilon.$$

Proof  Since $K \subset \mathbb{R}^n$ is compact, there is a number $M \ge 1$ such that

$$\sup_{w \in K} |w|_1 \le M.$$

If $\mathcal{U} = L^{p_2}(D')$, then density of $C_c^\infty(D')$ implies there are functions $\tilde{\psi}_1, \dots, \tilde{\psi}_n \in C^\infty(\bar{D}')$ such that

$$\max_{j \in \{1,\dots,n\}} \|\varphi_j - \tilde{\psi}_j\|_{\mathcal{U}} \le \frac{\epsilon}{2 n M}.$$

Similarly, if $\mathcal{U} = W^{m_2, p_2}(D')$, then density of the restriction of functions in $C_c^\infty(\mathbb{R}^{d'})$ to $D'$ (Leoni, 2009, Theorem 11.35) implies the same result. If $\mathcal{U} = C^{m_2}(\bar{D}')$, then we set $\tilde{\psi}_j = \varphi_j$ for any $j \in \{1, \dots, n\}$. Define $\tilde{\kappa} : \mathbb{R}^{d'} \times \mathbb{R}^{d'} \to \mathbb{R}^{1 \times n}$ by

$$\tilde{\kappa}(y, x) = \frac{1}{|D'|} \big[ \tilde{\psi}_1(y), \dots, \tilde{\psi}_n(y) \big].$$

Then, for any $w \in K$,

$$\left\| G(w) - \int_{D'} \tilde{\kappa}(\cdot, x)\, w \mathbb{1}(x) \,\mathrm{d}x \right\|_{\mathcal{U}} = \left\| \sum_{j=1}^{n} w_j \varphi_j - \sum_{j=1}^{n} w_j \tilde{\psi}_j \right\|_{\mathcal{U}} \le \sum_{j=1}^{n} |w_j|\, \|\varphi_j - \tilde{\psi}_j\|_{\mathcal{U}} \le \frac{\epsilon}{2}.$$

Since $\sigma \in \mathsf{A}_{m_2}$, there exist neural networks $\psi_1, \dots, \psi_n \in \mathsf{N}_1(\sigma; \mathbb{R}^{d'})$ such that

$$\max_{j \in \{1,\dots,n\}} \|\tilde{\psi}_j - \psi_j\|_{C^{m_2}} \le \frac{\epsilon}{2 n M (J |D'|)^{1/p_2}}$$

where, if $\mathcal{U} = C^{m_2}(\bar{D}')$, we set $J = 1/|D'|$ and $p_2 = 1$, and otherwise $J = |\{\alpha \in \mathbb{N}^d : |\alpha|_1 \le m_2\}|$. By setting all weights associated to the second argument to zero, we can modify each neural network $\psi_j$ to a neural network $\psi_j \in \mathsf{N}_1(\sigma; \mathbb{R}^{d'} \times \mathbb{R}^{d'})$ so that

$$\psi_j(y, x) = \psi_j(y)\, \mathbb{1}(x), \qquad \forall y, x \in \mathbb{R}^{d'}.$$

Define $\kappa \in \mathsf{N}_1(\sigma; \mathbb{R}^{d'} \times \mathbb{R}^{d'}, \mathbb{R}^{1 \times n})$ as

$$\kappa(y, x) = \frac{1}{|D'|} \big[ \psi_1(y, x), \dots, \psi_n(y, x) \big].$$

Then, for any $w \in \mathbb{R}^n$,

$$\int_{D'} \kappa(y, x)\, w \mathbb{1}(x) \,\mathrm{d}x = \sum_{j=1}^{n} w_j \psi_j(y).$$

We compute that, for any $j \in \{1, \dots, n\}$,

$$\|\psi_j - \tilde{\psi}_j\|_{\mathcal{U}} \le \begin{cases} |D'|^{1/p_2}\, \|\psi_j - \tilde{\psi}_j\|_{C^{m_2}}, & \mathcal{U} = L^{p_2}(D') \\ (J |D'|)^{1/p_2}\, \|\psi_j - \tilde{\psi}_j\|_{C^{m_2}}, & \mathcal{U} = W^{m_2, p_2}(D') \\ \|\psi_j - \tilde{\psi}_j\|_{C^{m_2}}, & \mathcal{U} = C^{m_2}(\bar{D}') \end{cases}$$

hence, for any $w \in K$,

$$\left\| \int_{D'} \kappa(y, x)\, w \mathbb{1}(x) \,\mathrm{d}x - \sum_{j=1}^{n} w_j \tilde{\psi}_j \right\|_{\mathcal{U}} \le \sum_{j=1}^{n} |w_j|\, \|\psi_j - \tilde{\psi}_j\|_{\mathcal{U}} \le \frac{\epsilon}{2}.$$

By the triangle inequality, for any $w \in K$, we have

$$\begin{aligned}
\left\| G(w) - \int_{D'} \kappa(\cdot, x)\, w \mathbb{1}(x) \,\mathrm{d}x \right\|_{\mathcal{U}}
&\le \left\| G(w) - \int_{D'} \tilde{\kappa}(\cdot, x)\, w \mathbb{1}(x) \,\mathrm{d}x \right\|_{\mathcal{U}} + \left\| \int_{D'} \tilde{\kappa}(\cdot, x)\, w \mathbb{1}(x) \,\mathrm{d}x - \int_{D'} \kappa(\cdot, x)\, w \mathbb{1}(x) \,\mathrm{d}x \right\|_{\mathcal{U}} \\
&\le \frac{\epsilon}{2} + \left\| \int_{D'} \kappa(\cdot, x)\, w \mathbb{1}(x) \,\mathrm{d}x - \sum_{j=1}^{n} w_j \tilde{\psi}_j \right\|_{\mathcal{U}} \\
&\le \epsilon
\end{aligned}$$

as desired. ∎

Lemma 35

Let $N, d, d', p, q \in \mathbb{N}$, $m, n \in \mathbb{N}_0$, $D \subset \mathbb{R}^p$ and $D' \subset \mathbb{R}^q$ be domains, and $\sigma_1 \in \mathsf{A}_m$. For any $\varphi \in \mathsf{N}_N(\sigma_1; \mathbb{R}^d, \mathbb{R}^{d'})$ and $\sigma_2, \sigma_3 \in \mathsf{A}_n$, there exists a $G \in \mathsf{NO}_N(\sigma_1, \sigma_2, \sigma_3; D, D', \mathbb{R}^d, \mathbb{R}^{d'})$ such that

$$\varphi(w) = G(w \mathbb{1})(x), \qquad \forall w \in \mathbb{R}^d, \; \forall x \in D'.$$

Proof  We have that

$$\varphi(x) = W_N \sigma_1\big( \cdots W_1 \sigma_1(W_0 x + b_0) + b_1 \cdots \big) + b_N, \qquad \forall x \in \mathbb{R}^d$$

where $W_0 \in \mathbb{R}^{d_0 \times d}$, $W_1 \in \mathbb{R}^{d_1 \times d_0}$, ..., $W_N \in \mathbb{R}^{d' \times d_{N-1}}$ and $b_0 \in \mathbb{R}^{d_0}$, $b_1 \in \mathbb{R}^{d_1}$, ..., $b_N \in \mathbb{R}^{d'}$ for some $d_0, \dots, d_{N-1} \in \mathbb{N}$. By setting all parameters to zero except for the last bias term, we can find $\kappa^{(0)} \in \mathsf{N}_1(\sigma_2; \mathbb{R}^p \times \mathbb{R}^p, \mathbb{R}^{d_0 \times d})$ such that

$$\kappa^{(0)}(x, y) = \frac{1}{|D|} W_0, \qquad \forall x, y \in \mathbb{R}^p.$$

Similarly, we can find $\tilde{b}_0 \in \mathsf{N}_1(\sigma_2; \mathbb{R}^p, \mathbb{R}^{d_0})$ such that

$$\tilde{b}_0(x) = b_0, \qquad \forall x \in \mathbb{R}^p.$$

Then

$$\int_D \kappa^{(0)}(y, x)\, w \mathbb{1}(x) \,\mathrm{d}x + \tilde{b}_0(y) = (W_0 w + b_0)\, \mathbb{1}(y), \qquad \forall w \in \mathbb{R}^d, \; \forall y \in D.$$

Continuing the same construction for all layers yields the result. ∎
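The first-layer construction above — a constant kernel $\kappa^{(0)} = W_0/|D|$ with constant bias acting on the constant function $w\mathbb{1}$ — reproduces the affine map $w \mapsto W_0 w + b_0$ exactly. A minimal numerical sketch (dimensions and values below are made-up examples):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 3, 4
W0 = rng.standard_normal((d_out, d_in))   # hypothetical first-layer weight
b0 = rng.standard_normal(d_out)           # hypothetical first-layer bias
w = rng.standard_normal(d_in)             # input vector, lifted to w * 1

# Discretize D = (0, 1) into n cells of equal measure 1/n (so |D| = 1).
n = 1000
quad_weights = np.full(n, 1.0 / n)        # cell measures |P^(k)|
f = np.tile(w, (n, 1))                    # the constant function w * 1 on the grid

# Integral layer: int_D (W0 / |D|) f(x) dx + b0; the value is the same at every y,
# because the kernel is constant and f is constant.
out = W0 @ (quad_weights @ f) + b0
print(np.abs(out - (W0 @ w + b0)).max())  # zero up to round-off
```

The quadrature sum of the constant integrand is exact (up to floating-point round-off), which is why the lemma gives exact rather than approximate representation.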

E

Proof [of Theorem 8]  Without loss of generality, we will assume that $D = D'$ and, by continuous embedding, that $\mathcal{A} = \mathcal{U} = C(\bar{D})$. Furthermore, note that, by continuity, it suffices to show the result for the single layer

$$\mathsf{NO} = \left\{ f \mapsto \sigma_1\!\left( \int_D \kappa(\cdot, y)\, f(y) \,\mathrm{d}y + b \right) : \kappa \in \mathsf{N}_{n_1}(\sigma_2; \mathbb{R}^d \times \mathbb{R}^d),\ b \in \mathsf{N}_{n_2}(\sigma_2; \mathbb{R}^d),\ n_1, n_2 \in \mathbb{N} \right\}.$$

Let $K \subset \mathcal{A}$ be a compact set and $(D_j)_{j=1}^{\infty}$ be a discrete refinement of $D$. To each discretization $D_j$ associate partitions $P_j^{(1)}, \dots, P_j^{(j)} \subseteq D$ which are pairwise disjoint, each containing a single, unique point of $D_j$, each having positive Lebesgue measure, and

$$\coprod_{k=1}^{j} P_j^{(k)} = D.$$

We can do this since the points in each discretization $D_j$ are pairwise distinct. For any $\mathcal{G} \in \mathsf{NO}$ with parameters $\kappa, b$, define the sequence of maps $\hat{\mathcal{G}}_j : \mathbb{R}^{jd} \times \mathbb{R}^j \to \mathcal{Y}$ by

$$\hat{\mathcal{G}}_j(y_1, \dots, y_j, w_1, \dots, w_j) = \sigma_1\!\left( \sum_{k=1}^{j} \kappa(\cdot, y_k)\, w_k\, |P_j^{(k)}| + b(\cdot) \right)$$

for any $y_k \in \mathbb{R}^d$ and $w_k \in \mathbb{R}$. Since $K$ is compact, there is a constant $M > 0$ such that

$$\sup_{a \in K} \|a\|_{\mathcal{U}} \le M.$$

Therefore, for any $a \in K$,

$$\sup_{x \in \bar{D}} \sup_{j \in \mathbb{N}} \left( \left| \int_D \kappa(x, y)\, a(y) \,\mathrm{d}y \right| + \left| \sum_{k=1}^{j} \kappa(x, y_k)\, a(y_k)\, |P_j^{(k)}| \right| + 2 |b(x)| \right) \le 2 \left( M |D| \|\kappa\|_{C(\bar{D} \times \bar{D})} + \|b\|_{C(\bar{D})} \right) =: R.$$

Hence we need only consider $\sigma_1$ as a map $[-R, R] \to \mathbb{R}$. Thus, by uniform continuity, there exists a modulus of continuity $\omega : \mathbb{R}_{\ge 0} \to \mathbb{R}_{\ge 0}$ which is continuous, non-negative, and non-decreasing on $\mathbb{R}_{\ge 0}$, satisfies $\omega(z) \to \omega(0) = 0$ as $z \to 0$, and

$$|\sigma_1(z_1) - \sigma_1(z_2)| \le \omega(|z_1 - z_2|), \qquad \forall z_1, z_2 \in [-R, R]. \tag{49}$$

Let $\epsilon > 0$. Equation (49) and the non-decreasing property of $\omega$ imply that, in order to show there exists $Q = Q(\epsilon) \in \mathbb{N}$ such that $m \ge Q$ implies

$$\sup_{a \in K} \left\| \hat{\mathcal{G}}_m(D_m, a|_{D_m}) - \mathcal{G}(a) \right\|_{\mathcal{Y}} < \epsilon,$$

it is enough to show that

$$\sup_{a \in K} \sup_{x \in \bar{D}} \left| \int_D \kappa(x, y)\, a(y) \,\mathrm{d}y - \sum_{k=1}^{m} \kappa(x, y_k)\, a(y_k)\, |P_m^{(k)}| \right| < \epsilon \tag{50}$$

for any $m \ge Q$. Since $K$ is compact, we can find functions $a_1, \dots, a_N \in K$ such that, for any $a \in K$, there is some $n \in \{1, \dots, N\}$ such that

$$\|a - a_n\|_{C(\bar{D})} \le \frac{\epsilon}{4 |D| \|\kappa\|_{C(\bar{D} \times \bar{D})}}.$$

Since $(D_j)$ is a discrete refinement, by convergence of Riemann sums we can find some $q \in \mathbb{N}$ such that, for any $t \ge q$, we have

$$\sup_{x \in \bar{D}} \left| \sum_{k=1}^{t} \kappa(x, y_k)\, |P_t^{(k)}| - \int_D \kappa(x, y) \,\mathrm{d}y \right| < |D| \|\kappa\|_{C(\bar{D} \times \bar{D})}$$

where $D_t = \{y_1, \dots, y_t\}$. Similarly, we can find $p_1, \dots, p_N \in \mathbb{N}$ such that, for any $t_n \ge p_n$, we have

$$\sup_{x \in \bar{D}} \left| \sum_{k=1}^{t_n} \kappa(x, y_k^{(n)})\, a_n(y_k^{(n)})\, |P_{t_n}^{(k)}| - \int_D \kappa(x, y)\, a_n(y) \,\mathrm{d}y \right| < \frac{\epsilon}{4}$$

where $D_{t_n} = \{y_1^{(n)}, \dots, y_{t_n}^{(n)}\}$. Let $m \ge \max\{q, p_1, \dots, p_N\}$ and denote $D_m = \{y_1, \dots, y_m\}$. Note that

$$\sup_{x \in \bar{D}} \left| \int_D \kappa(x, y)\, \big(a(y) - a_n(y)\big) \,\mathrm{d}y \right| \le |D| \|\kappa\|_{C(\bar{D} \times \bar{D})} \|a - a_n\|_{C(\bar{D})}.$$

Furthermore,

$$\begin{aligned}
\sup_{x \in \bar{D}} \left| \sum_{k=1}^{m} \kappa(x, y_k)\, \big(a_n(y_k) - a(y_k)\big)\, |P_m^{(k)}| \right|
&\le \|a_n - a\|_{C(\bar{D})} \sup_{x \in \bar{D}} \left| \sum_{k=1}^{m} \kappa(x, y_k)\, |P_m^{(k)}| \right| \\
&\le \|a_n - a\|_{C(\bar{D})} \left( \sup_{x \in \bar{D}} \left| \sum_{k=1}^{m} \kappa(x, y_k)\, |P_m^{(k)}| - \int_D \kappa(x, y) \,\mathrm{d}y \right| + \sup_{x \in \bar{D}} \left| \int_D \kappa(x, y) \,\mathrm{d}y \right| \right) \\
&\le 2 |D| \|\kappa\|_{C(\bar{D} \times \bar{D})} \|a_n - a\|_{C(\bar{D})}.
\end{aligned}$$

Therefore, for any $a \in K$, by repeated application of the triangle inequality, we find that

$$\begin{aligned}
\sup_{x \in \bar{D}} \left| \int_D \kappa(x, y)\, a(y) \,\mathrm{d}y - \sum_{k=1}^{m} \kappa(x, y_k)\, a(y_k)\, |P_m^{(k)}| \right|
&\le \sup_{x \in \bar{D}} \left| \sum_{k=1}^{m} \kappa(x, y_k)\, a_n(y_k)\, |P_m^{(k)}| - \int_D \kappa(x, y)\, a_n(y) \,\mathrm{d}y \right| \\
&\quad + 3 |D| \|\kappa\|_{C(\bar{D} \times \bar{D})} \|a - a_n\|_{C(\bar{D})} \\
&< \frac{\epsilon}{4} + \frac{3\epsilon}{4} = \epsilon,
\end{aligned}$$

which completes the proof. ∎
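The key estimate (50) — uniform convergence of the Riemann-sum evaluation of the kernel integral to the exact integral as the discretization is refined — can be illustrated numerically. In the sketch below, the kernel $\kappa$, input $a$, and the equal-measure midpoint partitions are made-up examples standing in for the abstract $P_m^{(k)}$:

```python
import numpy as np

def trapezoid(y, x):
    """Composite trapezoidal rule along the last axis."""
    return np.sum(0.5 * (y[..., 1:] + y[..., :-1]) * np.diff(x), axis=-1)

kappa = lambda x, y: np.exp(-np.abs(x - y))   # hypothetical continuous kernel on [0,1]^2
a = lambda y: np.cos(3.0 * y)                 # hypothetical input a in C([0,1])

x_eval = np.linspace(0.0, 1.0, 64)            # points x where the layer is evaluated

# Reference value of int_D kappa(x, y) a(y) dy on a very fine grid.
y_fine = np.linspace(0.0, 1.0, 20001)
ref = trapezoid(kappa(x_eval[:, None], y_fine[None, :]) * a(y_fine), y_fine)

errors = []
for m in [16, 64, 256, 1024]:
    # D_m: midpoints of m partition cells P_m^(k), each of measure |P_m^(k)| = 1/m.
    y = (np.arange(m) + 0.5) / m
    riemann = (kappa(x_eval[:, None], y[None, :]) * a(y)).sum(axis=1) / m
    errors.append(float(np.max(np.abs(riemann - ref))))
print(errors)  # sup-norm (over x) error decreases as the discretization refines
```

The same parameters $\kappa, b$ are used at every resolution $m$, which is the discretization-invariance property the theorem formalizes.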

F

Proof [of Theorem 11] The statement in Lemma 26 allows us to apply Lemma 22 to find a mapping ๐’ข 1 : ๐’œ โ†’ ๐’ฐ such that

sup ๐‘Ž โˆˆ ๐พ โ€– ๐’ข โ€  โข ( ๐‘Ž ) โˆ’ ๐’ข 1 โข ( ๐‘Ž ) โ€– ๐’ฐ โ‰ค ๐œ– 2

where ๐’ข 1

๐บ โˆ˜ ๐œ“ โˆ˜ ๐น with ๐น : ๐’œ โ†’ โ„ ๐ฝ , ๐บ : โ„ ๐ฝ โ€ฒ โ†’ ๐’ฐ continuous linear maps and ๐œ“ โˆˆ ๐ถ โข ( โ„ ๐ฝ ; โ„ ๐ฝ โ€ฒ ) for some ๐ฝ , ๐ฝ โ€ฒ โˆˆ โ„• . By Lemma 32, we can find a sequence of maps ๐น ๐‘ก โˆˆ ๐–จ๐–ฎ โข ( ๐œŽ 2 ; ๐ท , โ„ , โ„ ๐ฝ ) for ๐‘ก

1 , 2 , โ€ฆ such that

sup ๐‘Ž โˆˆ ๐พ sup ๐‘ฅ โˆˆ ๐ท ยฏ | ( ๐น ๐‘ก ( ๐‘Ž ) ) ( ๐‘ฅ ) โˆ’ ๐น ( ๐‘Ž ) | 1 โ‰ค 1 ๐‘ก .

In particular, ๐น ๐‘ก โข ( ๐‘Ž ) โข ( ๐‘ฅ )

๐‘ค ๐‘ก โข ( ๐‘Ž ) โข ๐Ÿ™ โข ( ๐‘ฅ ) for some ๐‘ค ๐‘ก : ๐’œ โ†’ โ„ ๐ฝ which is constant in space. We can therefore identify the range of ๐น ๐‘ก โข ( ๐‘Ž ) with โ„ ๐ฝ . Define the set

๐‘ โ‰” โ‹ƒ ๐‘ก

1 โˆž ๐น ๐‘ก โข ( ๐พ ) โˆช ๐น โข ( ๐พ ) โŠ‚ โ„ ๐ฝ

which is compact by Lemma 21. Since ๐œ“ is continuous, it is uniformly continuous on ๐‘ hence there exists a modulus of continuity ๐œ” : โ„ โ‰ฅ 0 โ†’ โ„ โ‰ฅ 0 which is continuous, non-negative, and non-decreasing on โ„ โ‰ฅ 0 , satisfies ๐œ” โข ( ๐‘  ) โ†’ ๐œ” โข ( 0 )

0 as ๐‘  โ†’ 0 and

| ๐œ“ โข ( ๐‘ง 1 ) โˆ’ ๐œ“ โข ( ๐‘ง 2 ) | 1 โ‰ค ๐œ” โข ( | ๐‘ง 1 โˆ’ ๐‘ง 2 | 1 ) โˆ€ ๐‘ง 1 , ๐‘ง 2 โˆˆ ๐‘ .

We can thus find ๐‘‡ โˆˆ โ„• large enough such that

sup ๐‘Ž โˆˆ ๐พ ๐œ” โข ( | ๐น โข ( ๐‘Ž ) โˆ’ ๐น ๐‘‡ โข ( ๐‘Ž ) | 1 ) โ‰ค ๐œ– 6 โข โ€– ๐บ โ€– .

Since ๐น ๐‘‡ is continuous, ๐น ๐‘‡ โข ( ๐พ ) is compact. Since ๐œ“ is a continuous function on the compact set ๐น ๐‘‡ โข ( ๐พ ) โŠ‚ โ„ ๐ฝ mapping into โ„ ๐ฝ โ€ฒ , we can use any classical neural network approximation theorem such as (Pinkus, 1999, Theorem 4.1) to find an ๐œ– -close (uniformly) neural network. Since Lemma 35 shows that neural operators can exactly mimic standard neural networks, it follows that we can find ๐‘† 1 โˆˆ ๐–จ๐–ฎ โข ( ๐œŽ 1 ; ๐ท , โ„ ๐ฝ , โ„ ๐‘‘ 1 ) , โ€ฆ ,

๐‘† ๐‘ โˆ’ 1 โˆˆ ๐–จ๐–ฎ โข ( ๐œŽ 1 ; ๐ท , โ„ ๐‘‘ ๐‘ โˆ’ 1 , โ„ ๐ฝ โ€ฒ ) for some ๐‘ โˆˆ โ„• โ‰ฅ 2 and ๐‘‘ 1 , โ€ฆ , ๐‘‘ ๐‘ โˆ’ 1 โˆˆ โ„• such that

๐œ“ ~ ( ๐‘“ ) โ‰” ( ๐‘† ๐‘ โˆ’ 1 โˆ˜ ๐œŽ 1 โˆ˜ โ‹ฏ โˆ˜ ๐‘† 2 โˆ˜ ๐œŽ 1 โˆ˜ ๐‘† 1 ) ( ๐‘“ ) , โˆ€ ๐‘“ โˆˆ ๐ฟ 1 ( ๐ท ; โ„ ๐ฝ )

satisfies

sup ๐‘ž โˆˆ ๐น ๐‘‡ โข ( ๐พ ) sup ๐‘ฅ โˆˆ ๐ท ยฏ | ๐œ“ โข ( ๐‘ž ) โˆ’ ๐œ“ ~ โข ( ๐‘ž โข ๐Ÿ™ ) โข ( ๐‘ฅ ) | 1 โ‰ค ๐œ– 6 โข โ€– ๐บ โ€– .

By construction, ๐œ“ ~ maps constant functions into constant functions and is continuous in the appropriate subspace topology of constant functions hence we can identity it as an element of ๐ถ โข ( โ„ ๐ฝ ; โ„ ๐ฝ โ€ฒ ) for any input constant function taking values in โ„ ๐ฝ . Then ( ๐œ“ ~ โˆ˜ ๐น ๐‘‡ ) โข ( ๐พ ) โŠ‚ โ„ ๐ฝ โ€ฒ is compact. Therefore, by Lemma 34, we can find a neural network ๐œ… โˆˆ ๐’ฉ ๐ฟ โข ( ๐œŽ 3 ; โ„ ๐‘‘ โ€ฒ ร— โ„ ๐‘‘ โ€ฒ , โ„ 1 ร— ๐ฝ โ€ฒ ) for some ๐ฟ โˆˆ โ„• such that

๐บ ~ โข ( ๐‘“ ) โ‰” โˆซ ๐ท โ€ฒ ๐œ… โข ( โ‹… , ๐‘ฆ ) โข ๐‘“ โข ( ๐‘ฆ ) โข d โข ๐‘ฆ , โˆ€ ๐‘“ โˆˆ ๐ฟ 1 โข ( ๐ท ; โ„ ๐ฝ โ€ฒ )

satisfies

sup ๐‘ฆ โˆˆ ( ๐œ“ ~ โˆ˜ ๐น ๐‘‡ ) โข ( ๐พ ) โ€– ๐บ โข ( ๐‘ฆ ) โˆ’ ๐บ ~ โข ( ๐‘ฆ โข ๐Ÿ™ ) โ€– ๐’ฐ โ‰ค ๐œ– 6 .

Define
$$\mathcal{G}(a) \coloneqq (\tilde{G} \circ \tilde{\psi} \circ F_T)(a) = \int_{D'} \kappa(\cdot, y) \big((S_{N-1} \circ \sigma_1 \circ \cdots \circ \sigma_1 \circ S_1 \circ F_T)(a)\big)(y) \,\mathrm{d}y, \qquad \forall\, a \in \mathcal{A},$$

noting that $\mathcal{G} \in \mathsf{NO}_N(\sigma_1, \sigma_2, \sigma_3; D, D')$. For any $a \in K$, define $a_1 \coloneqq (\psi \circ F)(a)$ and $\tilde{a}_1 \coloneqq (\tilde{\psi} \circ F_T)(a)$ so that $\mathcal{G}_1(a) = G(a_1)$ and $\mathcal{G}(a) = \tilde{G}(\tilde{a}_1)$; then

โ€– ๐’ข 1 โข ( ๐‘Ž ) โˆ’ ๐’ข โข ( ๐‘Ž ) โ€– ๐’ฐ

โ‰ค โ€– ๐บ โข ( ๐‘Ž 1 ) โˆ’ ๐บ โข ( ๐‘Ž ~ 1 ) โ€– ๐’ฐ + โ€– ๐บ โข ( ๐‘Ž ~ 1 ) โˆ’ ๐บ ~ โข ( ๐‘Ž ~ 1 ) โ€– ๐’ฐ

โ‰ค โ€– ๐บ โ€– โข | ๐‘Ž 1 โˆ’ ๐‘Ž 1 ~ | 1 + sup ๐‘ฆ โˆˆ ( ๐œ“ ~ โˆ˜ ๐น ๐‘‡ ) โข ( ๐พ ) โ€– ๐บ โข ( ๐‘ฆ ) โˆ’ ๐บ ~ โข ( ๐‘ฆ โข ๐Ÿ™ ) โ€– ๐’ฐ

โ‰ค ๐œ– 6 + โ€– ๐บ โ€– โข | ( ๐œ“ โˆ˜ ๐น ) โข ( ๐‘Ž ) โˆ’ ( ๐œ“ โˆ˜ ๐น ๐‘‡ ) โข ( ๐‘Ž ) | 1 + โ€– ๐บ โ€– โข | ( ๐œ“ โˆ˜ ๐น ๐‘‡ ) โข ( ๐‘Ž ) โˆ’ ( ๐œ“ ~ โˆ˜ ๐น ๐‘‡ ) โข ( ๐‘Ž ) | 1

โ‰ค ๐œ– 6 + โˆฅ ๐บ โˆฅ ๐œ” ( | ๐น ( ๐‘Ž ) โˆ’ ๐น ๐‘‡ ( ๐‘Ž ) | 1 ) + โˆฅ ๐บ โˆฅ sup ๐‘ž โˆˆ ๐น ๐‘‡ โข ( ๐พ ) | ๐œ“ ( ๐‘ž ) โˆ’ ๐œ“ ~ ( ๐‘ž ) | 1

โ‰ค ๐œ– 2 .

Finally we have
$$\|\mathcal{G}^\dagger(a) - \mathcal{G}(a)\|_{\mathcal{U}} \le \|\mathcal{G}^\dagger(a) - \mathcal{G}_1(a)\|_{\mathcal{U}} + \|\mathcal{G}_1(a) - \mathcal{G}(a)\|_{\mathcal{U}} \le \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon$$
as desired.

To show boundedness, we will exhibit a neural operator $\tilde{\mathcal{G}}$ that is $\epsilon$-close to $\mathcal{G}$ on $K$ and is uniformly bounded by $4M$. Note first that

โ€– ๐’ข โข ( ๐‘Ž ) โ€– ๐’ฐ โ‰ค โ€– ๐’ข โข ( ๐‘Ž ) โˆ’ ๐’ข โ€  โข ( ๐‘Ž ) โ€– ๐’ฐ + โ€– ๐’ข โ€  โข ( ๐‘Ž ) โ€– ๐’ฐ โ‰ค ๐œ– + ๐‘€ โ‰ค 2 โข ๐‘€ , โˆ€ ๐‘Ž โˆˆ ๐พ

where, without loss of generality, we assume that $M \ge 1$. By construction, we have that
$$\mathcal{G}(a) = \sum_{j=1}^{J'} \tilde{\psi}_j(F_T(a)) \, \varphi_j, \qquad \forall\, a \in \mathcal{A}$$
for some neural network $\varphi : \mathbb{R}^{d'} \to \mathbb{R}^{J'}$. Since $\mathcal{U}$ is a Hilbert space and by linearity, we may assume that the components $\varphi_j$ are orthonormal, since orthonormalizing them only requires multiplying the last layer of $\tilde{\psi}$ by an invertible linear map. Therefore

| ๐œ“ ~ โข ( ๐น ๐‘‡ โข ( ๐‘Ž ) ) | 2

โ€– ๐’ข โข ( ๐‘Ž ) โ€– ๐’ฐ โ‰ค 2 โข ๐‘€ , โˆ€ ๐‘Ž โˆˆ ๐พ .

Define the set $W \coloneqq (\tilde{\psi} \circ F_T)(K) \subset \mathbb{R}^{J'}$, which is compact as before. We have
$$\mathrm{diam}_2(W) = \sup_{x, y \in W} |x - y|_2 \le \sup_{x, y \in W} |x|_2 + |y|_2 \le 4M.$$

Since ๐œŽ 1 โˆˆ ๐–ก๐–  , there exists a number ๐‘… โˆˆ โ„• and a neural network ๐›ฝ โˆˆ ๐–ญ ๐‘… โข ( ๐œŽ 1 ; โ„ ๐ฝ โ€ฒ , โ„ ๐ฝ โ€ฒ ) such that

| ๐›ฝ โข ( ๐‘ฅ ) โˆ’ ๐‘ฅ | 2

โ‰ค ๐œ– , โˆ€ ๐‘ฅ โˆˆ ๐‘Š

| ๐›ฝ โข ( ๐‘ฅ ) | 2

โ‰ค 4 โข ๐‘€ , โˆ€ ๐‘ฅ โˆˆ โ„ ๐ฝ โ€ฒ .

Define
$$\tilde{\mathcal{G}}(a) \coloneqq \sum_{j=1}^{J'} \beta_j\big(\tilde{\psi}(F_T(a))\big) \, \varphi_j, \qquad \forall\, a \in \mathcal{A}.$$

Lemmas 34 and 35 then show that $\tilde{\mathcal{G}} \in \mathsf{NO}_{N+R}(\sigma_1, \sigma_2, \sigma_3; D, D')$. Notice that

sup ๐‘Ž โˆˆ ๐พ โ€– ๐’ข โข ( ๐‘Ž ) โˆ’ ๐’ข ~ โข ( ๐‘Ž ) โ€– ๐’ฐ โ‰ค sup ๐‘ค โˆˆ ๐‘Š | ๐‘ค โˆ’ ๐›ฝ โข ( ๐‘ค ) | 2 โ‰ค ๐œ– .

Furthermore,

โ€– ๐’ข ~ โข ( ๐‘Ž ) โ€– ๐’ฐ โ‰ค โ€– ๐’ข ~ โข ( ๐‘Ž ) โˆ’ ๐’ข โข ( ๐‘Ž ) โ€– ๐’ฐ + โ€– ๐’ข โข ( ๐‘Ž ) โ€– ๐’ฐ โ‰ค ๐œ– + 2 โข ๐‘€ โ‰ค 3 โข ๐‘€ , โˆ€ ๐‘Ž โˆˆ ๐พ .

Let ๐‘Ž โˆˆ ๐’œ โˆ– ๐พ then there exists ๐‘ž โˆˆ โ„ ๐ฝ โ€ฒ โˆ– ๐‘Š such that ๐œ“ ~ โข ( ๐น ๐‘‡ โข ( ๐‘Ž ) )

๐‘ž and

โ€– ๐’ข ~ โข ( ๐‘Ž ) โ€– ๐’ฐ

| ๐›ฝ โข ( ๐‘ž ) | 2 โ‰ค 4 โข ๐‘€

as desired.  

G

Proof [of Theorem 13] Let $\mathcal{U} = H^{m_2}(D)$. For any $R > 0$, define
$$\mathcal{G}^\dagger_R(a) \coloneqq \begin{cases} \mathcal{G}^\dagger(a), & \|\mathcal{G}^\dagger(a)\|_{\mathcal{U}} \le R \\[4pt] \dfrac{R}{\|\mathcal{G}^\dagger(a)\|_{\mathcal{U}}} \, \mathcal{G}^\dagger(a), & \text{otherwise} \end{cases}$$

for any ๐‘Ž โˆˆ ๐’œ . Since ๐’ข ๐‘… โ€  โ†’ ๐’ข โ€  as ๐‘… โ†’ โˆž

๐œ‡ -almost everywhere, ๐’ข โ€  โˆˆ ๐ฟ ๐œ‡ 2 โข ( ๐’œ ; ๐’ฐ ) , and clearly โ€– ๐’ข ๐‘… โ€  โข ( ๐‘Ž ) โ€– ๐’ฐ โ‰ค โ€– ๐’ข โ€  โข ( ๐‘Ž ) โ€– ๐’ฐ for any ๐‘Ž โˆˆ ๐’œ , we can apply the dominated convergence theorem for Bochner integrals to find ๐‘…

0 large enough such that

โ€– ๐’ข ๐‘… โ€  โˆ’ ๐’ข โ€  โ€– ๐ฟ ๐œ‡ 2 โข ( ๐’œ ; ๐’ฐ ) โ‰ค ๐œ– 3 .

Since ๐’œ and ๐’ฐ are Polish spaces, by Lusinโ€™s theorem (Aaronson, 1997, Theorem 1.0.0) we can find a compact set ๐พ โŠ‚ ๐’œ such that

๐œ‡ โข ( ๐’œ โˆ– ๐พ ) โ‰ค ๐œ– 2 153 โข ๐‘… 2

and ๐’ข ๐‘… โ€  | ๐พ is continuous. Since ๐พ is closed, by a generalization of the Tietze extension theorem (Dugundji, 1951, Theorem 4.1), there exist a continuous mapping ๐’ข ~ ๐‘… โ€  : ๐’œ โ†’ ๐’ฐ such that ๐’ข ~ ๐‘… โ€  โข ( ๐‘Ž )

๐’ข ๐‘… โ€  โข ( ๐‘Ž ) for all ๐‘Ž โˆˆ ๐พ and

sup ๐‘Ž โˆˆ ๐’œ โ€– ๐’ข ~ ๐‘… โ€  โข ( ๐‘Ž ) โ€– โ‰ค sup ๐‘Ž โˆˆ ๐’œ โ€– ๐’ข ๐‘… โ€  โข ( ๐‘Ž ) โ€– โ‰ค ๐‘… .

Applying Theorem 11 to $\tilde{\mathcal{G}}^\dagger_R$, we find that there exists a number $N \in \mathbb{N}$ and a neural operator $\mathcal{G} \in \mathsf{NO}_N(\sigma_1, \sigma_2, \sigma_3; D, D')$ such that
$$\sup_{a \in K} \|\mathcal{G}(a) - \mathcal{G}^\dagger_R(a)\|_{\mathcal{U}} \le \frac{\sqrt{2}\,\epsilon}{3}$$
and
$$\sup_{a \in \mathcal{A}} \|\mathcal{G}(a)\|_{\mathcal{U}} \le 4R.$$

We then have
$$\begin{aligned}
\|\mathcal{G}^\dagger - \mathcal{G}\|_{L^2_\mu(\mathcal{A}; \mathcal{U})}
&\le \|\mathcal{G}^\dagger - \mathcal{G}^\dagger_R\|_{L^2_\mu(\mathcal{A}; \mathcal{U})} + \|\mathcal{G}^\dagger_R - \mathcal{G}\|_{L^2_\mu(\mathcal{A}; \mathcal{U})} \\
&\le \frac{\epsilon}{3} + \left( \int_K \|\mathcal{G}^\dagger_R(a) - \mathcal{G}(a)\|_{\mathcal{U}}^2 \,\mathrm{d}\mu(a) + \int_{\mathcal{A} \setminus K} \|\mathcal{G}^\dagger_R(a) - \mathcal{G}(a)\|_{\mathcal{U}}^2 \,\mathrm{d}\mu(a) \right)^{\frac{1}{2}} \\
&\le \frac{\epsilon}{3} + \left( \frac{2\epsilon^2}{9} + 2 \Big( \sup_{a \in \mathcal{A}} \|\mathcal{G}^\dagger_R(a)\|_{\mathcal{U}}^2 + \|\mathcal{G}(a)\|_{\mathcal{U}}^2 \Big) \mu(\mathcal{A} \setminus K) \right)^{\frac{1}{2}} \\
&\le \frac{\epsilon}{3} + \left( \frac{2\epsilon^2}{9} + 34 R^2 \mu(\mathcal{A} \setminus K) \right)^{\frac{1}{2}} \\
&\le \frac{\epsilon}{3} + \left( \frac{4\epsilon^2}{9} \right)^{\frac{1}{2}} = \epsilon
\end{aligned}$$
as desired.
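As a check on the constants (our own arithmetic, not in the original), the choice $\mu(\mathcal{A} \setminus K) \le \epsilon^2/(153 R^2)$ made earlier is exactly what the penultimate step requires; here $34 = 2(1 + 16)$ comes from the bounds $\|\mathcal{G}^\dagger_R\| \le R$ and $\|\mathcal{G}\| \le 4R$:

```latex
34 R^2 \, \mu(\mathcal{A} \setminus K)
\;\le\; 34 R^2 \cdot \frac{\epsilon^2}{153 R^2}
\;=\; \frac{2\epsilon^2}{9},
\qquad\text{so}\qquad
\frac{2\epsilon^2}{9} + 34 R^2 \mu(\mathcal{A} \setminus K)
\;\le\; \frac{4\epsilon^2}{9}.
```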
