URL Source: https://arxiv.org/html/2208.03761

Contents

- List of Algorithms
- List of Symbols
- 1 Introduction
- 2 Regression
  - 2.1 Data Fitting Problem
  - 2.2 Gaussian Processes
  - 2.3 Neural Networks
- 3 Reproducing Kernel Hilbert Spaces
  - 3.1 Reproducing Kernels
  - 3.2 Mercer Representation
  - 3.3 Representer Theorem
- 4 Types of Kernels and their Equivalences
  - 4.1 Matérn Class of Kernels
  - 4.2 Neural Tangent Kernel
  - 4.3 RKHS Inclusion
  - 4.4 Equivalence of the Laplace and Neural Tangent Kernels
- 5 Synthetic Experiments
  - 5.1 Setup
  - 5.2 Illustrative Example
  - 5.3 Synthetic 2D Input Case
  - 5.4 Synthetic High Dimensional Cases
- 6 Real World Experiments
  - 6.1 Setup and Datasets
  - 6.2 Results
- 7 Conclusion
- References
An Empirical Analysis of the Laplace and Neural Tangent Kernels

Thesis, Master of Science in Mathematics
Author: Ronaldas Paulius Lencevičius
Advisor: James Risk
Committee Members: Manuchehr Aminian, Adam King
Summer 2022
Abstract
The neural tangent kernel is a kernel function defined over the parameter distribution of an infinite width neural network. Despite the impracticality of this limit, the neural tangent kernel has allowed for a more direct study of neural networks and a gaze through the veil of their black box. More recently, it has been shown theoretically that the Laplace kernel and neural tangent kernel share the same reproducing kernel Hilbert space in the space of $\mathbb{S}^{d-1}$, alluding to their equivalence. In this work, we analyze the practical equivalence of the two kernels. We first do so by matching the kernels exactly and then by matching posteriors of a Gaussian process. Moreover, we analyze the kernels in $\mathbb{R}^d$ and experiment with them in the task of regression.
Acknowledgments
To Ada, Percy, and Sabrina,
to Raimis, Diana, Boson, and Curie,
to all my friends, family, and faculty,
…and to Dr. Risk's care, patience, and mentorship.
List of Tables

- 4.1 An illustration of the discrepancy between kernel differences $\rho_\theta$ while trying to match $\ell$ to the NTK of depth $D = 3$ using Equation (). Left column: Random inputs in $\mathbb{S}^{d-1}$. Right table: Table of differences for specific parameters. Column 1 fixes $\beta = 0$ during matching. Column 2 optimizes $\beta$ and $\ell$ using Algorithm 4.1.
- 5.1 Summary of variables managed in all synthetic experiments. Values left blank are determined per experiment. Unfixed values are ones allowed to be optimized during experiments.
- 5.2 The results of posterior mean matching the NTK to Matérn kernels for the parametric dataset in $\mathbb{S}^1$.
- 5.3 Kernel hyperparameter results for posterior mean matching with inputs in $\mathbb{S}^1$.
- 5.4 Three 2D input functions with $(x_1, x_2) \in \mathbb{R}^2$.
- 5.5 2D input function input sampling distributions and noise used for the noisy experiments.
- 5.6 Posterior mean matching results for the 2D input surface datasets in $\mathbb{R}^2$ with test size $n_* = 500$. GPs were trained using $n = 500$ inputs $x_1, x_2$ generated using Latin hypercube sampling over the respective function domains.
- 5.7 Friedman datasets along with their corresponding input dimensions, where $(x_1, \ldots, x_d) \in \mathbb{R}^d$. The Friedman 1 dataset's output is independent of the last five input variables, hence the dataset has a total of 10 input dimensions.
- 5.8 Friedman data feature distributions. Friedman 2 and 3 share the same feature distributions. The noise term $\varepsilon$ is applied only for the noisy data cases.
- 5.9 Friedman 1 $R^2$ results for NTK posterior means with training done using various data transformations. The None column is the baseline with no input or output rescaling, the $\mathbf{X}$ scaled column is with input rescaling only, the $\mathbf{y}$ scaled column is with output rescaling during training only, and the Both column applies both types of rescaling.
- 5.10 Friedman 1 $\rho$ results for Laplace kernel and NTK posterior mean matching in $\mathbb{S}^9$ with training done using various data transformations.
- 6.1 Summary of variables managed in all real world experiments.
- 6.2 Summary of real world data.
- 6.3 Results for real world experiments in $\mathbb{R}^d$ and $\mathbb{S}^{d-1}$. Metrics for the Fire dataset are calculated by first inverting the log-transformation.
- C.1 Posterior mean matching results for the 2D input surface datasets in $\mathbb{S}^1$. As noted in Section 5.3, the noisy Ackley function was difficult to properly fit, resulting in a constant and meaningless posterior; thus the zero RMSE should be looked at skeptically.
- C.2 Friedman 2 $R^2$ results for NTK posterior means with training done using various data transformations.
- C.4 Friedman 2 $\rho$ results for Laplace kernel and NTK posterior mean matching in $\mathbb{S}^9$ with training done using various data transformations.
List of Figures

- 2.1 Sample paths of the Laplace covariance function from a GP prior and posterior. The solid line represents $m(\mathbf{x})$ and $\bar{\mathbf{f}}_*$ in the left and right panel respectively. The shaded bands represent $\mathrm{cov}(\mathbf{f}_*)$. The red x's in the right panel are the training points.
- 2.2 A visualization of a neuron, courtesy of [32].
- 2.3 A visualization of a 2 layer fully connected neural network, courtesy of [32].
- 4.1 Mean and variance plots of $\ell$ given specific $\beta$, calculated using $n = 1000$ samples of input pairs for various depths. The solid orange line represents the variance while the dotted blue line represents the mean.
- 4.2 Solid orange line represents the variance and the dotted blue line represents the mean. Top: A zoom in of depth $D = 6$ for $\beta \in [0, 10^{-7}]$. Due to the zoom, the mean values are all concentrated around $\approx 1.0524$ and all variance values are near $\approx 2.437 \times 10^{-5}$. The difference between the minimum and maximum variance shown is approximately $10^{-18}$. Bottom: A showcase of a typical plot past depth 6.
- 5.1 Top: The parametric curve defined in Equation (5.8). Bottom: Equation (5.8) with inputs normalized and noisy training points shown.
- 5.2 Posterior means generated by fitting non-noisy parametric curve data in $\mathbb{S}^1$ and predicted on out of sample data in $\mathbb{S}^1$. For visualization purposes we set $(x_1, x_2) \in \mathbb{R}^2$.
- 5.3 Posterior means for the noisy Ackley function in $\mathbb{R}^2$ for NTK depth $D = 2$ with test size $n_* = 500$. GPs were trained using $n = 500$ inputs $x_1, x_2 \in [1, 7]$ generated using Latin hypercube sampling.
- 5.4 Posterior means of the non-noisy 2D input functions trained in $\mathbb{R}^2$ for NTK depth $D = 3$ with test size $n_* = 500$. GPs were trained using $n = 500$ inputs $x_1, x_2$ generated using Latin hypercube sampling over the respective function domains.
- 5.5 Posterior means of the non-noisy 2D input functions trained in $\mathbb{S}^1$ for NTK depth $D = 3$ with test size $n_* = 500$. GPs were trained using $n = 500$ inputs $x_1, x_2$ generated using Latin hypercube sampling over the respective function domains. For visualization $(x_1, x_2) \in \mathbb{R}^2$.
- 5.6 Predictions for non-noisy Friedman 2 in $\mathbb{S}^3$ for $D = 2$ with $\mathbf{y}$ scaled. Top: NTK and Laplace predictions overlayed. Middle: NTK and Gaussian predictions overlayed. Bottom: Averaged prediction plots of all kernels.
- 5.8 Predictions for non-noisy Friedman 3 in $\mathbb{R}^4$ for $D = 2$ with $\mathbf{X}$ scaled and $\mathbf{y}$ scaled. The rows of the figure are laid out as in Figure 5.7.
- 6.1 Concrete compression strength predictions over $\mathbb{S}^7$ with NTK depth $D = 10$ overlayed. Inputs are shown in $\mathbb{R}^8$ for visualization. The first and last two rows correspond to the input features, all in kg/m³ except for the Age.
- 6.2 Fire area predictions over $\mathbb{S}^3$ with NTK depth $D = 10$ overlayed. Inputs are shown in $\mathbb{R}^4$ and output is log-transformed for visualization.
- C.1 Posterior means generated by fitting to data in $\mathbb{R}^2$ and predicted on out of sample data in $\mathbb{R}^2$. All kernels seem to be approximating the loop in the curve. The Laplace and Gaussian kernels provide almost the same predictions between them. In addition, the kernels seem to do a better job than in $\mathbb{S}^1$ of approximating the underlying parametric curve. Top: NTK with Laplace kernel overlayed. Bottom: NTK with Gaussian kernel overlayed.
- C.2 NTK posterior mean of the noisy Ackley function in $\mathbb{S}^1$ for NTK depth $D = 2$. The GP was trained using $n = 500$ inputs $x_1, x_2 \in [1, 7]$ generated using Latin hypercube sampling. The $y$ values are all concentrated around $\approx 12.67$ with a difference between the minimum and maximum being $\approx 10^{-8}$, indicating that the posterior mean has zeroed out. This is due to the kernel's constant value optimizing close to zero. Attempting to manually fit the GP while controlling the constant value and white noise provides similar results.
- C.3 Predictions for noisy Friedman 2 in $\mathbb{S}^3$ for $D = 2$ with $\mathbf{y}$ scaled. Top: NTK and Laplace predictions overlayed. Middle: NTK and Gaussian predictions overlayed. Bottom: Averaged prediction plots of all kernels.
- C.5 Predictions for noisy Friedman 3 in $\mathbb{R}^4$ for $D = 2$ with $\mathbf{X}$ scaled and $\mathbf{y}$ scaled. The rows of the figure are laid out as in Figure C.4.
- C.7 Fire area predictions over $\mathbb{S}^3$ without white noise term with NTK $D = 10$. Inputs are shown in $\mathbb{R}^4$ and output is log-transformed for visualization. This is interesting since the NTK seems to do a better job of aligning the predictions to the ground truth in comparison to the GPs fit with a white noise term in Figure 6.2.

List of Algorithms

List of Symbols

| Symbol | Meaning |
|---|---|
| $L^2$ | Square-integrable function space |
| $\mathbb{R}^d$ | $d$-dimensional real space |
| $\mathbb{S}^{d-1}$ | Unit $d$-sphere space |
| $\mathcal{H}$ | Hilbert space |
| $\mathcal{H}_0$ | Pre-Hilbert space |
| $\mathcal{H}_k$ | Reproducing kernel Hilbert space (RKHS) |
| $\mathcal{X}$ | Data space |
| $\mathcal{GP}$ | Gaussian process (GP) |
| $\mathcal{MVN}$ | Multivariate normal |
| $\mathcal{N}$ | Normal |
| $d$ | Metric |
| $\mathbf{X}$ | Matrix |
| $\mathbf{x}$ | Vector |
| $x$ | Scalar |
| $*$ | Test/predictions |
| $D = L + 1$ | Depth (neural tangent kernel) |
| $R^2$ | Coefficient of determination |
| $\beta$ | Bias parameter (neural tangent kernel) |
| $\ell$ | Length-scale parameter (Matérn kernel) |
| $\varepsilon$ | Noise |
| $\nu$ | Smoothness parameter (Matérn kernel) |
| $\rho$ | Pearson correlation coefficient |
| $\sigma^2$ | Variance |
| $\theta$ | General parameter set |
| $L$ | Loss function |
| $T_k$ | Integral operator |
| $\mathrm{cov}/\mathrm{var}$ | Covariance/variance |
| $\mathbb{E}$ | Expected value |
| $k$ | Kernel |
| $\|\cdot\|$ | Norm |
| $\langle \cdot, \cdot \rangle$ | Inner product |
| $\sigma(\cdot)$ | Activation function |
| $\rho_\theta$ | Absolute difference function given kernel parameters $\theta$ |
| $\lambda_i$ | Eigenvalue |
| $\phi_i$ | Eigenfunction |
Chapter 1 Introduction
The relation between artificial neural networks and Gaussian processes has been established since the early 1990s. In 1989 researchers determined that a single layer neural network can, under the right conditions, approximate any continuous function as the layer width tends to infinity [15, 20, 41, 23]. With this idea, Neal [35] later found that a single hidden layer neural network with normally distributed weights and biases at initialization can be represented as a closed form Gaussian process as the layer width approaches infinity and hence converges to a normal distribution. It is then possible to inform neural networks via specified priors and analyze them over the neural network parameter distribution. This was further expanded in 2018 to neural networks of many layers by Lee et al. [30].
This perspective opened new doors to analysis but did not explain the effectiveness of training neural networks using the widely used gradient descent approach. Jacot et al. [28] developed the solution to this by further generalizing infinite-width neural networks via a recursively defined kernel called the neural tangent kernel. The neural tangent kernel can be used to represent and analyze a given infinite-width neural network during training with a specific depth, activation, and variance initialization. Future works utilized this kernel and expanded on the 1989 single layer results by showing that wide neural networks trained under gradient descent work as linear models and that empirically, finite networks also share those attributes [31]. In addition, further neural tangent kernel parameterizations were discovered for convolutional, recurrent, and graph neural network architectures [5, 3, 16]. Furthermore, the neural tangent kernel was also shown to generalize with neural networks that allow for regularization and gradient noise during training [12].
During the same time, Belkin et al. [7] empirically found that overfitted kernel methods display a similar phenomenon to overparameterized deep models where, despite reaching zero training loss, the test data would show good performance. Moreover, their work showed parallels between rectified linear unit (ReLU) activated neural networks and the Laplace kernel in the task of fitting random labels. Since then, it has been shown theoretically that the Laplace and neural tangent kernels do in fact perform similarly to their neural network counterparts since both kernels share the same reproducing kernel Hilbert space $\mathcal{H}_k$ of predictions in the $\mathbb{S}^{d-1}$ unit $d$-sphere [11, 21]. This is consequential because the Laplace kernel has a very simple, well understood formulation whereas the neural tangent kernel has an unwieldy recursive formulation. In essence, the Laplace kernel can be used to analyze deep neural networks without the computational or theoretical difficulties of the neural tangent kernel.
However, this leaves some questions unanswered, such as the practical equivalence of the kernels, the ability to find matching elements from the $\mathcal{H}_k$ of the kernels, whether the kernels share these similarities in the space of $\mathbb{R}^d$, and whether the Gaussian kernel, which comes from the same family as the Laplace, provides similar results. We attempt to provide answers by analyzing the neural tangent, Laplace, and Gaussian kernels via the framework of Gaussian process regression.
Chapter 2 defines the general data fitting problem, Gaussian processes for regression, and neural networks and their equivalence to Gaussian processes. Chapter 3 develops the theory for reproducing kernel Hilbert spaces and their relevance to the task of regression. Chapter 4 introduces the kernels used in our analysis and results showcasing the conditions for which the Laplace and neural tangent kernels can be made equal. Chapter 5 provides synthetic experiments for matching Gaussian process regression results (i.e. posterior means) between the various kernels, comparing the kernels in the space of โ ๐ , and improving regression results for the neural tangent kernel. Lastly, Chapter 6 showcases real world experiments of the kernels using the lessons learned from the previous chapters. We summarize the contributions of this thesis as follows:
- We find empirical evidence for the importance of the neural tangent kernel's bias parameter in equating it to the Laplace kernel.
- We derive the partial derivative of the neural tangent kernel with respect to its bias parameter, which can be used to optimize the bias during regression fitting.
- We develop a method for matching the Laplace and neural tangent kernels' Gaussian process regression posteriors, which are elements of $\mathcal{H}_k$.
- We implement a Python package called scikit-ntk, an implementation of the neural tangent kernel that can be used directly with the popular scikit-learn machine learning toolkit.
Chapter 2 Regression
In this chapter we will introduce the basic data modeling problem and present two relevant modeling tools: Gaussian process regression and neural networks. We also present the relation between these two tools that allows practitioners to study the black box architecture of neural networks through the lens of Gaussian processes.
2.1 Data Fitting Problem
A single output data fitting problem begins with a set of $n$ data points $\{(\mathbf{x}_i, y_i) \mid i = 1, \ldots, n\}$ where $\mathbf{x}_i = [x_1, \ldots, x_d]^\top \in \mathcal{X} \subseteq \mathbb{R}^d$ is a single input vector and $y_i \in \mathbb{R}$ is an output value usually referred to as a target or response. It is important to note that $\mathcal{X}$ can be a metric space, but for our purposes it is sufficient to assume that it is a subset of the $d$-dimensional real space. In general, $\mathcal{X}$ may be determined by the type of data used or a transformation that is applied to the data (e.g. the unit $d$-sphere space $\mathbb{S}^{d-1} := \{\mathbf{x} \in \mathbb{R}^d : \|\mathbf{x}\| = 1\}$).
We can form an $n \times d$ matrix $\mathbf{X} = [\mathbf{x}_1, \ldots, \mathbf{x}_n]^\top$ of $n$ observations and $d$ independent variables, and a vector $\mathbf{y} = [y_1, \ldots, y_n]^\top$ of dependent outputs, which combine into a set $(\mathbf{X}, \mathbf{y})$ that we call a training set. The training set can be seen as a snapshot of some phenomenon that relates the training set inputs to response values via some function $f$:

$$y = f(\mathbf{x}) + \varepsilon \tag{2.1}$$

where $\varepsilon$ is some additive noise assumed to be independent of $f$ and other noise terms. Depending on the problem, we may have $\varepsilon = 0$, which indicates exact observations. Otherwise, we assume that

$$\varepsilon \sim \mathcal{N}(0, \sigma^2) \tag{2.2}$$

indicating normally distributed noise with mean zero and fixed variance $\sigma^2 \in \mathbb{R}$.
The goal of the data fitting problem is to find a function $f$ that best fits the training set while also generalizing well to unseen data of the phenomenon we are trying to model. This is done by minimizing an empirical loss functional between the true response values $y_i$ and estimated values $f(\mathbf{x}_i)$:

$$f_{best} = \arg\min_f \left\{ \sum_{i=1}^{n} L(y_i, f(\mathbf{x}_i)) \right\}. \tag{2.3}$$

For regression, the loss function is usually absolute error $L(y_i, f(\mathbf{x}_i)) = |y_i - f(\mathbf{x}_i)|$, squared error $L(y_i, f(\mathbf{x}_i)) = (y_i - f(\mathbf{x}_i))^2$, or some variation of either.
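As a concrete illustration (not taken from the thesis), the two empirical losses in Equation (2.3) can be evaluated for a candidate $f$; the toy data and the linear candidate $f(x) = 2x$ are arbitrary assumptions:

```python
import numpy as np

def absolute_error_loss(y, y_hat):
    """Empirical absolute-error loss: sum_i |y_i - f(x_i)|."""
    return np.sum(np.abs(y - y_hat))

def squared_error_loss(y, y_hat):
    """Empirical squared-error loss: sum_i (y_i - f(x_i))^2."""
    return np.sum((y - y_hat) ** 2)

# A toy training set and an arbitrary candidate f(x) = 2x.
x = np.array([0.0, 1.0, 2.0])
y = np.array([0.5, 2.5, 3.5])
y_hat = 2.0 * x

print(absolute_error_loss(y, y_hat))  # 0.5 + 0.5 + 0.5 = 1.5
print(squared_error_loss(y, y_hat))   # 0.25 + 0.25 + 0.25 = 0.75
```

Minimizing either quantity over a family of candidate functions is exactly the search described by Equation (2.3).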
Without any restrictions or further assumptions placed on this process it is possible to perfectly interpolate (or "overfit") over the training set, thus losing any generalization to unseen data. This is not a desirable outcome since the purpose of the problem is to capture additional information about the underlying phenomenon that generated the training set. As such, it is reasonable to place further assumptions on things including but not limited to the dataset (i.i.d., transformations), function properties (continuity, time dependence), model type (nonlinear regressor, support vector machine), and model properties (parameterization type, regularization). These assumptions allow the data fitting process to more productively use the training set to explain the underlying phenomenon. An example of a commonly used set of assumptions is a linear parametric model:
$$f(\mathbf{x}) = \beta_0 + \beta_1 x_1 + \cdots + \beta_d x_d = \beta_0 + \boldsymbol{\beta}^\top \mathbf{x}, \tag{2.4}$$

where $\boldsymbol{\beta} = [\beta_1, \ldots, \beta_d]^\top$. With the squared error loss function, Equation (2.3) reduces to

$$f_{best} = \arg\min_f \left\{ \sum_{i=1}^{n} L(y_i, f(\mathbf{x}_i)) \right\} = \arg\min_{\beta_0, \ldots, \beta_d} \left\{ \sum_{i=1}^{n} (y_i - \beta_0 - \boldsymbol{\beta}^\top \mathbf{x}_i)^2 \right\}. \tag{2.5}$$

However, this is a highly restrictive assumption which is not applicable in many cases.
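The minimization in Equation (2.5) is ordinary least squares and can be solved in closed form; a minimal sketch, where the synthetic noiseless data is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: y = 1 + 2*x1 - 3*x2 with no noise.
X = rng.uniform(-1, 1, size=(50, 2))
y = 1.0 + 2.0 * X[:, 0] - 3.0 * X[:, 1]

# Augment with a column of ones so beta_0 is estimated jointly.
A = np.hstack([np.ones((X.shape[0], 1)), X])

# Least-squares solution of Equation (2.5): beta = argmin ||y - A beta||^2.
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
print(beta)  # approximately [1., 2., -3.]
```

With noiseless linear data the true coefficients are recovered exactly (up to floating point error), which is precisely the restrictiveness the text refers to: the model can only ever recover linear structure.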
On the other hand, regularization is a modification that is applicable to all models. Regularization helps prevent overfitting by placing a penalty on the model's objective function (Equation (2.3)):

$$f_{best} = \arg\min_f \left\{ \sum_{i=1}^{n} L(y_i, f(\mathbf{x}_i)) + \lambda J(f) \right\} \tag{2.6}$$

where $J(\cdot) \in \mathbb{R}$ is a penalization term for a given predictive function $f$ that is scaled by $\lambda \in \mathbb{R}$ and added to the summation. Effectively, $J(\cdot)$ can be seen as a restriction on smoothness (i.e. properties of differentiability). The idea is to change the search space of the objective function from all possible models, including sophisticated ones that interpolate through the data, to simpler models that attempt to capture the underlying phenomenon instead of attaining zero training loss. Penalization for linear models (Equation (2.4)) includes ridge regression, where $\lambda J(f) = \lambda \sum_{j=1}^{d} \beta_j^2$, and lasso regression, where $\lambda J(f) = \lambda \sum_{j=1}^{d} |\beta_j|$. The regularization framework in Equation (2.6) is very general. For example, if instead we are searching through the space of all functions $f$ with two continuous derivatives and utilizing the residual sum of squares loss, Equation (2.6) becomes

$$f_{best} = \arg\min_f \left\{ \sum_{i=1}^{n} L(y_i, f(\mathbf{x}_i)) + \lambda J(f) \right\} = \arg\min_f \left\{ \sum_{i=1}^{n} (y_i - f(\mathbf{x}_i))^2 + \lambda \int f''(x)^2 \, dx \right\} \tag{2.7}$$

with the unique solution being a natural cubic spline [26, pg. 110].
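For the ridge-penalized linear model, the minimizer also has a standard closed form, $\hat{\boldsymbol{\beta}} = (\mathbf{X}^\top \mathbf{X} + \lambda I)^{-1} \mathbf{X}^\top \mathbf{y}$ (a textbook result, not derived here); a minimal sketch showing the shrinkage effect of the penalty, with synthetic data assumed for illustration:

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge solution: (X^T X + lam*I)^{-1} X^T y (no intercept)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

beta_small = ridge_fit(X, y, lam=1e-8)  # nearly ordinary least squares
beta_large = ridge_fit(X, y, lam=1e4)   # heavy shrinkage toward zero

print(beta_small)
print(np.abs(beta_large).max())  # coefficients shrunk close to 0
```

Increasing $\lambda$ trades training fit for simplicity, which is exactly the restriction of the search space described above.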
The data fitting problem is one of compromises. While adding additional assumptions helps by reducing search time and/or limiting the search space, this still results in a complex task since we are going from one set of infinite functions to another. In addition, there may be fundamental issues with our training set since it is just a small sample of the overall phenomenon and thus sensitive to sampling methods and low signal to noise ratio. As for the model, we are limited by the "no free lunch" theorem's [49, 48] implication that there is no best way to choose a model or even a best model for a specific task. There are many ways to approach the data fitting problem, so our choices in models, regularization, data augmentation, etc. are informed by both the task and quality of data. In this thesis we focus on Gaussian processes, which are a nonparametric regression method highly related to kernel ridge regression.
2.2 Gaussian Processes
At its simplest, a Gaussian process is a set of random variables, indexed by some set, where any finite subset composes a multivariate normal distribution. At the heart of a Gaussian process is a positive definite covariance function.
Definition 2.2.1 (Positive Definite Function).
Let $k : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ be a symmetric function. Given $n \in \mathbb{N}$, inputs $\mathbf{x}_1, \ldots, \mathbf{x}_n \in \mathcal{X}$, and constants $c_1, \ldots, c_n \in \mathbb{R}$ where $\mathbf{c} = [c_1, \ldots, c_n]^\top$, we say that $k$ is positive definite and forms a positive definite matrix $K$ of all pairwise evaluations of the inputs on $k$ if

$$\sum_{i=1}^{n} \sum_{j=1}^{n} c_i c_j k(\mathbf{x}_i, \mathbf{x}_j) = \mathbf{c}^\top K \mathbf{c} \geq 0.$$
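This condition can be checked numerically for a given kernel by verifying that the Gram matrix of pairwise evaluations is symmetric with no negative eigenvalues; a small sketch using the Laplace kernel $k(\mathbf{x}, \mathbf{z}) = \exp(-\|\mathbf{x} - \mathbf{z}\| / \ell)$, where the length-scale and random inputs are arbitrary assumptions:

```python
import numpy as np

def laplace_kernel(x, z, length_scale=1.0):
    """Laplace (exponential) kernel: exp(-||x - z|| / ell)."""
    return np.exp(-np.linalg.norm(x - z) / length_scale)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))  # 20 random inputs in R^3

# Gram matrix K of all pairwise kernel evaluations.
K = np.array([[laplace_kernel(xi, xj) for xj in X] for xi in X])

# Symmetric with nonnegative eigenvalues (up to floating point error),
# hence c^T K c >= 0 for every c.
eigvals = np.linalg.eigvalsh(K)
print(np.allclose(K, K.T), eigvals.min() >= -1e-10)  # True True
```

A negative eigenvalue would produce a vector $\mathbf{c}$ violating the definition, so this check is a useful sanity test when implementing a new kernel.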
Symmetry in the inputs is required in order for a covariance function to be real valued. Covariance functions are also referred to as kernel functions which will be expanded on in Chapters 3 and 4. From here we can define a Gaussian process from a functional context which gives us the ability to build regression models through them [47, 29].
Definition 2.2.2 (Gaussian Process).
Let $f : \mathcal{X} \to \mathbb{R}$ be a real valued random function. We then say that $f(\mathbf{x})$ is a Gaussian process (GP) defined by mean function $m : \mathcal{X} \to \mathbb{R}$ and positive definite covariance function $k : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ if for any finite set $X = \{\mathbf{x}_1, \ldots, \mathbf{x}_n\} \subset \mathcal{X}$ of any size $n \in \mathbb{N}$ we have the random vector $\mathbf{f}_X = [f(\mathbf{x}_1), \ldots, f(\mathbf{x}_n)]^\top \in \mathbb{R}^n$ such that

$$\mathbf{f}_X \sim \mathcal{MVN}(\mathbf{m}_X, K_{XX}) \tag{2.8}$$

where $\mathbf{m}_X = [m(\mathbf{x}_1), \ldots, m(\mathbf{x}_n)]^\top \in \mathbb{R}^n$ is the mean vector and $K_{XX} = [k(\mathbf{x}_i, \mathbf{x}_j)]_{i,j=1}^{n} \in \mathbb{R}^{n \times n}$ is the covariance matrix. We define $m$ and $k$ for inputs $\mathbf{x}, \mathbf{z} \in \mathcal{X}$ as follows:

$$m(\mathbf{x}) = \mathbb{E}[f(\mathbf{x})] \tag{2.9}$$
$$k(\mathbf{x}, \mathbf{z}) = \mathbb{E}[(f(\mathbf{x}) - m(\mathbf{x}))(f(\mathbf{z}) - m(\mathbf{z}))]$$

and denote such a process as

$$f(\mathbf{x}) \sim \mathcal{GP}(m(\mathbf{x}), k(\mathbf{x}, \mathbf{z})). \tag{2.10}$$
The implication of this definition is that if a GP exists, then so do a corresponding $m$ and $k$. However, it is not necessary to begin with a process, as shown via the Kolmogorov Existence Theorem [17, Theorem 12.1.3, pg. 443].
Theorem 2.2.1 (Kolmogorov Existence Theorem for Gaussian Processes).
Let $\mathcal{X}$ be any set such that there exists a function $m : \mathcal{X} \to \mathbb{R}$ and a positive definite function $k : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$. Then there exists a GP on $\mathcal{X}$ with mean function $m$ and covariance function $k$.
As a result, there is a one-to-one relationship between a GP and its mean and covariance functions. Properties such as continuity, periodicity, and differentiability are tied to the mean and covariance functions which allows for an open view of the inner workings of a model built on a GP. Furthermore, it allows for users to insert prior knowledge into the data fitting process making GPs a powerful Bayesian modeling tool.
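Theorem 2.2.1 means a GP can be constructed simply by choosing $m$ and $k$. Prior sample paths, as in the left panel of Figure 2.1, can then be drawn by discretizing the input space; a minimal sketch, where the zero mean, Laplace kernel, and grid are assumptions for illustration:

```python
import numpy as np

def laplace_kernel(x, z, length_scale=1.0):
    return np.exp(-np.abs(x - z) / length_scale)

# Discretize the 1D input space and build the mean vector and covariance matrix.
xs = np.linspace(-3, 3, 200)
m = np.zeros_like(xs)                         # m(x) = 0
K = laplace_kernel(xs[:, None], xs[None, :])  # K_ij = k(x_i, x_j)

# Any finite set of GP evaluations is multivariate normal (Definition 2.2.2),
# so prior sample paths are draws from MVN(m, K); jitter aids numerical stability.
rng = np.random.default_rng(0)
paths = rng.multivariate_normal(m, K + 1e-10 * np.eye(len(xs)), size=3)
print(paths.shape)  # (3, 200)
```

Each row of `paths` is one function sampled from the prior, evaluated on the grid.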
To apply a GP to the task of regression it is necessary to build a joint distribution over a training set and a previously ignored test set $X_* = [\mathbf{x}_{*1}, \ldots, \mathbf{x}_{*m}]^\top \in \mathbb{R}^{m \times d}$, where each $\mathbf{x}_{*i} \in \mathcal{X} \subseteq \mathbb{R}^d$. The test set mirrors the training set in Section 2.1, with the only differences being that the test set is usually disjoint from the training set and potentially contains a different number of observations ($m \neq n$). The test set will be used to analyze unseen data $\mathbf{f}_* \in \mathbb{R}^m$ through a posterior distribution conditional on the known $\mathbf{y}$. First, we choose a covariance function $k$ and build covariance matrices $K$ to inform our data fitting task from Equation (2.1), following the same noise assumptions as in Equation (2.2). We can then build the following joint distribution of the training data and testing data:

$$\begin{bmatrix} \mathbf{y} \\ \mathbf{f}_* \end{bmatrix} \sim \mathcal{MVN}\left( \begin{bmatrix} \mathbf{m} \\ \mathbf{m}_* \end{bmatrix}, \begin{bmatrix} K_{XX} + \sigma^2 I_n & K_{XX_*} \\ K_{X_*X} & K_{X_*X_*} \end{bmatrix} \right) \tag{2.11}$$

where

$$K_{XX} = [k(\mathbf{x}_i, \mathbf{x}_j)]_{i,j=1}^{n}, \quad K_{X_*X_*} = [k(\mathbf{x}_{*i}, \mathbf{x}_{*j})]_{i,j=1}^{m}, \quad K_{XX_*} = [k(\mathbf{x}_i, \mathbf{x}_{*j})]_{i=1,\,j=1}^{n,\,m}, \quad K_{X_*X} = K_{XX_*}^\top. \tag{2.12}$$
Figure 2.1: Sample paths of the Laplace covariance function from a GP prior and posterior. The solid line represents $m(\mathbf{x})$ and $\bar{\mathbf{f}}_*$ in the left and right panel respectively. The shaded bands represent $\mathrm{cov}(\mathbf{f}_*)$. The red x's in the right panel are the training points.
Equation (2.11) forms the prior distribution for our GP regressor. To get the posterior distribution, we condition $\mathbf{f}_*$ on the known data, which by properties of the multivariate normal is itself multivariate normal [25]:

$$\mathbf{f}_* \mid X, \mathbf{y}, X_* \sim \mathcal{MVN}(\bar{\mathbf{f}}_*, \mathrm{cov}(\mathbf{f}_*)) \tag{2.13}$$
$$\bar{\mathbf{f}}_* = K_{X_*X} \left[ K_{XX} + \sigma^2 I_n \right]^{-1} \mathbf{y}$$
$$\mathrm{cov}(\mathbf{f}_*) = K_{X_*X_*} - K_{X_*X} \left[ K_{XX} + \sigma^2 I_n \right]^{-1} K_{XX_*}$$
Viewing the procedure from the practical modeling perspective, training the GP regressor can be thought of as pre-computation of $[K_{XX} + \sigma^2 I_n]^{-1}$, while prediction computes the remainder of the values involving the test set $X_*$. In addition, with knowledge of the full distribution we can sample the prior and posterior distributions to view the bounds and functions generated by the GP, as seen in Figure 2.1. The matrix operations involved in calculating the posterior are $\mathcal{O}(n^3)$ and thus require approximate methods for large scale datasets. Rasmussen and Williams calculate the inverse using Cholesky decomposition and provide a number of approximate methods for computation [47, pg. 19, pg. 171].
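Equation (2.13), with the Cholesky-based computation mentioned above, can be sketched in a few lines; the 1D toy data, zero prior mean, and Laplace kernel choice are assumptions for illustration:

```python
import numpy as np

def laplace_kernel(A, B, length_scale=1.0):
    """Gram matrix of the Laplace kernel for 1D input arrays A and B."""
    return np.exp(-np.abs(A[:, None] - B[None, :]) / length_scale)

def gp_posterior(X, y, X_star, noise_var=0.1):
    """Posterior mean and covariance of Equation (2.13), zero prior mean."""
    K = laplace_kernel(X, X)
    K_s = laplace_kernel(X, X_star)
    K_ss = laplace_kernel(X_star, X_star)
    # Cholesky factorization of K + sigma^2 I instead of a direct inverse.
    L = np.linalg.cholesky(K + noise_var * np.eye(len(X)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = K_s.T @ alpha          # K_{*X} [K_{XX} + sigma^2 I]^{-1} y
    v = np.linalg.solve(L, K_s)
    cov = K_ss - v.T @ v          # K_{**} - K_{*X} [K_{XX} + sigma^2 I]^{-1} K_{X*}
    return mean, cov

# Toy training data and a dense test grid.
X = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y = np.sin(X)
X_star = np.linspace(-3, 3, 50)
mean, cov = gp_posterior(X, y, X_star)
print(mean.shape, cov.shape)  # (50,) (50, 50)
```

Factoring once and reusing `L` for both the mean and covariance is the standard trick that makes "training" a one-time $\mathcal{O}(n^3)$ cost and prediction comparatively cheap.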
2.3 Neural Networks
The neural network is one of the key tools used in machine learning. Neural networks provide a way to model highly nonlinear phenomena and have been found to generalize exceptionally well to a variety of complex tasks. One drawback of neural networks is the difficulty of analyzing their black box nature. Traditional tools for inference and uncertainty estimation cannot be easily applied to them; however, there are equivalences that aid in this task. To start, we adopt notation from [24] and define the neuron, which is the basic building block of a neural network.
Definition 2.3.1 (Neuron).
Let $\mathbf{x} = [x_1, \ldots, x_d]^\top \in \mathcal{X}$ be an input vector with $d \in \mathbb{N}$. Then the $i$-th neuron $h_i$ is defined as:

$$h_i = \sigma\left( \sum_{j=1}^{d} w_{ij} x_j + b_i \right) = \sigma(\mathbf{w}_i^\top \mathbf{x} + b_i) \tag{2.14}$$

where $\mathbf{w}_i = [w_{i1}, \ldots, w_{id}]^\top$ is a vector of weights and $b_i \in \mathbb{R}$ is the bias value. $\sigma$ is an activation function, which is a fixed transformation used to inject nonlinear behavior into the resulting outputs.
Figure 2.2: A visualization of a neuron, courtesy of [32].
Nonlinear activations allow neural networks to model complex behavior that traditional linear models cannot. A combination of $p \in \mathbb{N}$ neurons into a vector is interpreted as a $p$-width layer. By combining many layers of varying widths, we can build the simplest architecture of a neural network, called the multilayer perceptron.
Definition 2.3.2 (Multilayer Perceptron).
Let $\mathbf{x} \in \mathcal{X}$ and let $\mathbf{h}^{(l)}$ be the $l$-th hidden layer, where $l \in \{1, \ldots, L\}$ indexes a finite number of layers with $L \in \mathbb{N}$. Then a multilayer perceptron architecture is defined as follows:

$$\begin{aligned} \mathbf{h}^{(1)} &= \sigma^{(1)}\!\left(W^{(1)} \mathbf{x} + \boldsymbol{\beta}^{(1)}\right) \\ \mathbf{h}^{(2)} &= \sigma^{(2)}\!\left(W^{(2)} \mathbf{h}^{(1)} + \boldsymbol{\beta}^{(2)}\right) \\ &\;\;\vdots \\ \mathbf{h}^{(L)} &= \sigma^{(L)}\!\left(W^{(L)} \mathbf{h}^{(L-1)} + \boldsymbol{\beta}^{(L)}\right) \\ f(\mathbf{x}; \theta) &= \mathbf{w}^\top \mathbf{h}^{(L)} + \beta^{(L+1)} \end{aligned} \tag{2.15}$$

where $W^{(l)}$ are weight matrices for each layer, $\mathbf{w}$ is the vector of weights for the desired output dimension of the function $f$, $\boldsymbol{\beta}^{(l)}$ is the bias vector, and $\sigma^{(l)}$ is the layer dependent activation function. The 1st dimension of $W^{(l)}$ represents the number of neurons (i.e. the width of the hidden layer $\mathbf{h}^{(l)}$) whilst the 2nd dimension is determined by the dimension of $\mathbf{h}^{(l-1)}$. $\mathbf{h}^{(1)}$ is called the input layer while $f(\mathbf{x}; \theta)$ is the output layer, where $\theta$ is the set of all parameters $W^{(1)}, W^{(2)}, \ldots, W^{(L)}, \mathbf{w}, \boldsymbol{\beta}^{(1)}, \boldsymbol{\beta}^{(2)}, \ldots, \boldsymbol{\beta}^{(L+1)}$ in the network. The resulting network is considered to be $L + 1$ layers deep.
Figure 2.3: A visualization of a 2 layer fully connected neural network, courtesy of [32].
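The forward computation in Equation (2.15) can be sketched directly; the widths, depth, and random parameter values here are arbitrary assumptions, and ReLU (defined later in Equation (2.17)) is used as the activation:

```python
import numpy as np

def relu(x):
    """ReLU activation: max(0, x) elementwise."""
    return np.maximum(0.0, x)

def mlp_forward(x, Ws, betas, w_out, beta_out):
    """Forward pass of the multilayer perceptron in Equation (2.15)."""
    h = x
    for W, beta in zip(Ws, betas):
        h = relu(W @ h + beta)   # h^{(l)} = sigma(W^{(l)} h^{(l-1)} + beta^{(l)})
    return w_out @ h + beta_out  # f(x; theta) = w^T h^{(L)} + beta^{(L+1)}

rng = np.random.default_rng(0)
d, widths = 3, [16, 16]          # input dim and two hidden layers (L = 2)
dims = [d] + widths
Ws = [rng.normal(size=(dims[i + 1], dims[i])) for i in range(len(widths))]
betas = [rng.normal(size=w) for w in widths]
w_out = rng.normal(size=widths[-1])
beta_out = rng.normal()

x = rng.normal(size=d)
print(float(mlp_forward(x, Ws, betas, w_out, beta_out)))
```

Note how each $W^{(l)}$ has shape (width of layer $l$, width of layer $l-1$), matching the dimension convention in the definition.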
A multilayer perceptron can be modified to include transformation layers (e.g. a dropout layer, which randomly drops weights from the previous layer), differing activation functions per layer, differing layer widths, etc. The training procedure for such networks follows the data fitting procedure in Equation (2.3):

$$\theta_{best} = \arg\min_\theta \left\{ \sum_{i=1}^{n} L(y_i, f(\mathbf{x}_i; \theta)) \right\}, \tag{2.16}$$

so the optimized network is $f_{best}(\mathbf{x}) := f(\mathbf{x}; \theta_{best})$ with optimal parameters $\theta_{best}$ found during the minimization procedure over the unfixed network parameters $\theta$. The minimization is done by computing the gradient of the loss function with respect to $\theta$. This is done efficiently via back-propagation [38], which uses information provided by the outputs $f(\cdot)$ in a single gradient step and computes the gradient of the loss over the parameters starting from the output layer and ending at the input layer. On the other hand, prediction is done by forward propagation, which uses an input to calculate an output $\mathbf{f}_*$ on the optimized network through a forward pass using Equation (2.15).
The specific neural networks we consider in this thesis are infinite width, fully connected, rectified linear unit (ReLU) activated neural networks. Infinite width refers to the size of all hidden layers. Fully connected means that every individual neuron in every layer $l \in \{1, \ldots, L\}$ sends its output to every neuron in layer $l + 1$. Lastly, the ReLU activation function is defined as follows for $x \in \mathbb{R}$:

$$\sigma(x) = \max(0, x) \quad \text{where } \sigma(\cdot) : \mathbb{R} \to \mathbb{R}. \tag{2.17}$$
Neal [35] showed that neural networks in the infinite width limit converge to a GP, while Williams [46] developed the computation of the covariance function. Let us assume a single hidden layer network with weights $w_{ij}$ and biases $b_j$, each independent and identically distributed (iid) normal with mean 0 and respective variances $\sigma_w^2$ and $\sigma_b^2$. The gist of this equivalence is that each neuron in the hidden layer then becomes iid normal with mean zero and finite variance; by the Central Limit Theorem, as the width $H$ of the hidden layer tends to infinity, the network output converges to a normal distribution. Following Definition 2.2.2, we form a stochastic process over the indexed set of $n$ inputs whose corresponding outputs form a multivariate normal distribution with mean zero and finite covariance for all inputs:
๐ โ ( ๐ฑ )
0
๐ โ ( ๐ฑ , ๐ณ )
๐ ๐ 2 + ๐ ๐ค 2 โ ๐ป โ ๐ผ โ [ โ ๐ โ ( ๐ฑ ) โ โ ๐ โ ( ๐ณ ) ]
= ๐ ๐ 2 + ๐ ๐ค 2 โ ๐ผ โ [ โ ๐ โ ( ๐ฑ ) โ โ ๐ โ ( ๐ณ ) ]
where โ ๐ ๐ค
๐ ๐ค โ ๐ป โ 1 / 2 โ and โ ๐ โ { 1 , โฆ , ๐ฟ } .
(2.18)
This single layer infinite width network satisfies the definition of a GP. This can be further extended to networks of multiple layers by applying the same procedure over all network weights and biases. It should be noted that this formulation describes an untrained network at initialization.
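The equivalence above is easy to check by Monte Carlo: sampling many random single hidden layer ReLU networks and estimating the output covariance for two inputs should match the closed-form expectation, which for ReLU is the arc-cosine formula of Cho and Saul [13]. The widths, sample counts, variances, and test inputs below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
R, H = 5000, 100           # number of random networks, hidden width
sw, sb = 1.0, 0.5          # sigma_w, sigma_b
x = np.array([1.0, 0.0]); z = np.array([0.6, 0.8])

# sample R independent networks at initialization (output weights scaled 1/sqrt(H))
W = sw * rng.standard_normal((R, H, 2))
b = sb * rng.standard_normal((R, H))
w = sw / np.sqrt(H) * rng.standard_normal((R, H))
b_out = sb * rng.standard_normal(R)

hx = np.maximum(0.0, W @ x + b)         # (R, H) hidden activations at x
hz = np.maximum(0.0, W @ z + b)
fx = (w * hx).sum(1) + b_out
fz = (w * hz).sum(1) + b_out
emp_k = np.mean(fx * fz)                # empirical covariance (output mean is 0)

# closed-form covariance via the degree-1 arc-cosine expectation for ReLU
qx, qz = sw**2 * (x @ x) + sb**2, sw**2 * (z @ z) + sb**2
rho = (sw**2 * (x @ z) + sb**2) / np.sqrt(qx * qz)
theta = np.arccos(np.clip(rho, -1.0, 1.0))
k_xz = sb**2 + sw**2 * np.sqrt(qx * qz) / (2 * np.pi) * (np.sin(theta) + (np.pi - theta) * rho)
```

With the output weights scaled by $H^{-1/2}$, the covariance is exact in expectation at any width, so only the Monte Carlo noise over the `R` networks separates `emp_k` from `k_xz`.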
Although impractical to implement, infinite width networks provide a way to study the properties of neural networks. Yang [50] expanded on this and showed that the GP and neural network equivalence applies to all modern architectures. The result described in this section is related to the neural tangent kernel [28], which can be used as a GP covariance function to implement an infinite width neural network; this is further discussed in Chapter 4.
Chapter 3Reproducing Kernel Hilbert Spaces
Kernels exist in a variety of contexts throughout statistics, probability, and mathematics. They can be thought of as the kernel of a probability density (mass) function used in kernel density estimation, a positive definite kernel used in a variety of kernel methods, a null space in linear algebra, an integral transform in calculus, and a reproducing kernel considered in functional analysis.
Although kernels span a large variety of subjects, we are going to focus on them as they pertain to functional analysis via reproducing kernel Hilbert spaces. In this chapter, we will define kernels, their reproducing kernel Hilbert spaces, and properties of these spaces in the context of data fitting.
3.1Reproducing Kernels
A kernel is a class of functions that map two values to the real line:
๐ : ๐ณ ร ๐ณ โ โ
(3.1)
In our context, we are interested in kernels that are positive definite and symmetric because this allows them to be real valued and used in methods such as kernel ridge regression, support vector machines, and Gaussian processes. As mentioned in Chapter 2, a positive definite kernel and covariance function are one and the same in the context of GPs. In order to further develop kernels, we need to work within a general vector space, namely a Hilbert space.
Definition 3.1.1 (Hilbert space).
A Hilbert space โ is a vector space with the following properties:
1.
โ contains an inner product โจ โ , โ โฉ โ : โ ร โ โ โ which induces the norm โจ ๐ฅ , ๐ฅ โฉ โ
โ ๐ฅ โ โ 2 for ๐ฅ โ โ ,
2.
โ is complete (i.e. every Cauchy sequence in โ converges to some element of โ ).
Although our definition of a Hilbert space constitutes a real valued inner product, it can be defined over more general spaces (see Axler [6, Chapter 8 pg. 211]). In our case it is sufficient to work in the space of โ . We are now ready to define a key space regarding kernels.
Definition 3.1.2 (Reproducing Kernel Hilbert Space).
Let โ ๐ be a Hilbert space of real functions defined on ๐ณ and norm โ ๐ โ โ ๐ 2
โจ ๐ , ๐ โฉ โ ๐ for ๐ โ โ ๐ . The function ๐ : ๐ณ ร ๐ณ โ โ is called a reproducing kernel of โ ๐ if:
1.
For all ๐ฑ โ ๐ณ , ๐ โ ( โ , ๐ฑ ) โ โ ๐ ,
2.
For all ๐ฑ โ ๐ณ and for all ๐ โ โ ๐ , โจ ๐ โ ( โ ) , ๐ โ ( โ , ๐ฑ ) โฉ
๐ โ ( ๐ฑ ) .
Property 2 in the definition above is called the reproducing property. Reproducing kernel Hilbert spaces (RKHSs) do not explicitly require kernel functions to be positive definite; however, positive definite kernels have a nice property regarding RKHSs:
Theorem 3.1.1 (Moore-Aronszajn Theorem [4]).
For a positive definite function ๐ โ ( โ , ๐ฑ ) on ๐ณ ร ๐ณ there exists only one RKHS.
This theorem guarantees that for any symmetric and positive definite kernel there exists a unique RKHS, and vice-versa. Using this idea we can build an RKHS from a positive definite kernel alone. To illustrate this, given a symmetric and positive definite kernel $k$ on $\mathcal{X}$, we begin by defining a pre-Hilbert space $\mathcal{H}_0$:
โ 0 := span โ { ๐ โ ( โ , ๐ฑ ) : ๐ฑ โ ๐ณ }
= { ๐ โ ( โ )
โ ๐
1 ๐ ๐ ๐ โ ๐ โ ( โ , ๐ฑ ๐ ) : ๐ โ โ , ๐ 1 , โฆ , ๐ ๐ โ โ , ๐ฑ 1 , โฆ , ๐ฑ ๐ โ ๐ณ } .
(3.2)
with a valid inner product (see [8, pg. 20]) defined for any $f := \sum_{i=1}^{n} a_i\, k(\cdot, \mathbf{x}_i)$ and $g := \sum_{j=1}^{m} b_j\, k(\cdot, \mathbf{x}_j)$ by
$$\langle f, g \rangle_{\mathcal{H}_0} = \sum_{i=1}^{n} \sum_{j=1}^{m} a_i b_j\, k(\mathbf{x}_i, \mathbf{x}_j) \tag{3.3}$$
and a norm induced by the inner product
$$\|f\|_{\mathcal{H}_0}^2 = \langle f, f \rangle_{\mathcal{H}_0} = \sum_{i=1}^{n} \sum_{j=1}^{n} a_i a_j\, k(\mathbf{x}_i, \mathbf{x}_j) = \mathbf{a}^\top K \mathbf{a} \tag{3.4}$$
where $\mathbf{a} = [a_1, \dots, a_n]^\top$ is a vector and $K = [k(\mathbf{x}_i, \mathbf{x}_j)]_{i,j=1}^{n}$ is a matrix. To form the RKHS $\mathcal{H}_k$ we must bundle the rest of the possible elements with $\mathcal{H}_0$ by defining its closure with respect to $\|\cdot\|_{\mathcal{H}_0}$:
โ ๐
{ ๐ ( โ )
โ ๐
1 โ ๐ ๐ ๐ ( โ , ๐ฑ ๐ ) : ๐ โ โ , ๐ 1 , ๐ 2 , โฏ โ โ , ๐ฑ 1 , ๐ฑ 2 , โฏ โ ๐ณ where
โฅ ๐ โฅ โ ๐ 2 := lim ๐ โ โ โฅ โ ๐
1 ๐ ๐ ๐ ๐ ( โ , ๐ฑ ๐ ) โฅ โ 0 2
โ ๐
1 โ โ ๐
1 โ ๐ ๐ ๐ ๐ ๐ ( ๐ฑ ๐ , ๐ฑ ๐ ) < โ } ,
(3.5)
thus completing our construction. For more details refer to Kanagawa et al. [29, pg. 11] and Berlinet and Thomas-Agnan [8, Theorem 3 pg. 19-21].
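The pre-Hilbert construction above is easy to mirror numerically: a member of $\mathcal{H}_0$ is a coefficient vector together with its kernel centers, and Equations (3.3) and (3.4) are bilinear forms in the Gram matrix. A small sketch (the Laplace kernel, the length-scale, and the random centers are arbitrary choices):

```python
import numpy as np

def laplace_kernel(A, B, ell=1.0):
    # Gram matrix K[i, j] = exp(-||a_i - b_j|| / ell)
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return np.exp(-d / ell)

rng = np.random.default_rng(0)
Xf, a = rng.standard_normal((4, 2)), rng.standard_normal(4)  # f = sum_i a_i k(., x_i)
Xg, b = rng.standard_normal((3, 2)), rng.standard_normal(3)  # g = sum_j b_j k(., x'_j)

inner_fg = a @ laplace_kernel(Xf, Xg) @ b     # <f, g>_{H_0}, Equation (3.3)
sq_norm_f = a @ laplace_kernel(Xf, Xf) @ a    # ||f||^2_{H_0} = a^T K a, Equation (3.4)

# the reproducing property <f, k(., x)> = f(x) for a fresh point x:
x = rng.standard_normal((1, 2))
repro = (a @ laplace_kernel(Xf, x))[0]        # inner product of f with k(., x)
f_at_x = sum(a[i] * laplace_kernel(Xf[i:i + 1], x)[0, 0] for i in range(4))
```

Positive definiteness of the kernel is what makes `sq_norm_f` nonnegative for every coefficient vector.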
3.2Mercer Representation
There is another way to represent RKHSs that provides a further route of analyzing kernels via spectral decomposition. This will require a deeper dive into functional analysis for which further details can be found in Axler [6, Chapter 10 pg. 280], Steinwart and Christmann [40, Appendix A.5 pg 497], and Rudin [37].
We begin with a measurable space $\mathcal{X}$ equipped with a finite Borel measure $\mu$ whose support is $\mathcal{X}$. We then consider the space $L^2(\mathcal{X}, \mu)$ of square-integrable functions on $\mathcal{X}$ with respect to the measure $\mu$, and the kernel $k$ on $\mathcal{X}$. Then we define an integral operator $T_k: L^2(\mathcal{X}, \mu) \to L^2(\mathcal{X}, \mu)$ such that for $f \in L^2(\mathcal{X}, \mu)$
๐ ๐ โ ๐ := โซ ๐ณ ๐ โ ( โ , ๐ฑ ) โ ๐ โ ( ๐ฑ ) โ ๐ ๐ โ ( ๐ฑ )
(3.6)
which is compact, positive, and self-adjoint [40, Theorem 4.27]. As a result, we can apply the Spectral Theorem [40, Theorem A.5.13], which shows that there exists $(\lambda_i, \phi_i)_{i \in I}$, where $(\phi_i)_{i \in I} \subset L^2(\mathcal{X}, \mu)$ is the orthonormal system of countable eigenfunctions and $(\lambda_i)_{i \in I}$ is the corresponding family of eigenvalues for indices $I \subseteq \mathbb{N}$, such that for strictly positive eigenvalues $\lambda_1 \geq \lambda_2 \geq \cdots > 0$:
$$T_k f = \sum_{i \in I} \lambda_i \langle \phi_i, f \rangle_{L^2}\, \phi_i. \tag{3.7}$$
With this we can now develop an expansion of kernels via orthonormal functions:
Theorem 3.2.1 (Mercerโs theorem [40, 33]).
Let ๐ณ be a compact metric space, ๐ : ๐ณ ร ๐ณ โ โ be a continuous kernel, and ๐ be a finite Borel measure with support on ๐ณ . Then for ( ๐ ๐ , ๐ ๐ ) ๐ โ ๐ผ and inputs ๐ฑ , ๐ณ โ ๐ณ we have
๐ โ ( ๐ฑ , ๐ณ )
โ ๐ โ ๐ผ ๐ ๐ โ ๐ ๐ โ ( ๐ฑ ) โ ๐ ๐ โ ( ๐ณ ) ,
(3.8)
where the convergence is absolute and uniform over the inputs.
Lastly, we can redevelop RKHSs using orthonormal eigenfunctions to represent kernels:
Theorem 3.2.2 (Mercer Representation of RKHSs [40]).
Let ๐ณ be a compact metric space, ๐ : ๐ณ ร ๐ณ โ โ be a continuous kernel, ๐ be a finite Borel measure with support on ๐ณ and ( ๐ ๐ , ๐ ๐ ) ๐ โ ๐ผ . We then define the RKHS โ ๐
โ ๐ := { ๐ := โ ๐ โ ๐ผ ๐ ๐ โ ๐ ๐ โ ๐ ๐ : โ ๐ โ โ ๐ 2 := โ ๐ โ ๐ผ ๐ ๐ 2 < โ } ,
(3.9)
with an inner product defined for any ๐ := โ ๐ โ ๐ผ ๐ ๐ โ ๐ ๐ โ ๐ ๐ and ๐ := โ ๐ โ ๐ผ ๐ ๐ โ ๐ ๐ โ ๐ ๐
โจ ๐ , ๐ โฉ โ ๐
โ ๐ โ ๐ผ ๐ ๐ โ ๐ ๐
(3.10)
where ( ๐ ๐ โ ๐ ๐ ) ๐ โ ๐ผ is an orthonormal basis of โ ๐ and operator ๐ ๐ 1 / 2 : ๐ฟ 2 โ ( ๐ณ , ๐ ) โ ๐ป is an isometric isomorphism. The โ ๐ in Equation (3.9) is the same as in Equation (3.5).
As a result, Theorem 3.2.1 gives a general way of analyzing kernels via their resulting eigenfunctions and corresponding eigenvalues while Theorem 3.2.2 connects this analysis back to RKHSs. One way to do this is by observing and comparing the rate of eigenvalue decay between two separate kernels [21]. Furthermore, both theorems indicate that an RKHS โ ๐ is a subset of the ๐ฟ 2 โ ( ๐ณ , ๐ ) space. Because of this, the Mercer notion of kernel and RKHS relies on a choice of measure ๐ to work; however, as shown in Section 3.1, both kernel and RKHS are independent of any measure. As a result, regardless of the choice of measure we still end up with the same RKHS [29]. The only difference here is that the new measure will result in a new eigensystem.
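The eigenvalue decay comparison just mentioned can be approximated from the spectrum of a Gram matrix, whose scaled eigenvalues estimate the leading Mercer eigenvalues of $T_k$ for the sampling measure. A small sketch comparing Gaussian and Laplace kernels at unit length-scale (the sample size and input distribution are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(200)
d = np.abs(x[:, None] - x[None, :])      # pairwise distances |x_i - x_j|

K_gauss = np.exp(-d**2 / 2.0)            # Gaussian kernel, ell = 1
K_lap = np.exp(-d)                       # Laplace kernel, ell = 1

# descending spectra; dividing by n approximates the Mercer eigenvalues
eig_gauss = np.linalg.eigvalsh(K_gauss)[::-1] / len(x)
eig_lap = np.linalg.eigvalsh(K_lap)[::-1] / len(x)

# Gaussian eigenvalues decay exponentially, Laplace only polynomially, so far
# down the spectrum the Laplace ratio lambda_i / lambda_1 is much larger
print(eig_gauss[30] / eig_gauss[0], eig_lap[30] / eig_lap[0])
```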
3.3Representer Theorem
Let us backtrack to the data fitting problem established in Section 2.1. Consider the dataset $\{(\mathbf{x}_i, y_i) : i = 1, \dots, n\}$ where $\mathbf{x}_i \in \mathcal{X}$ and $y_i \in \mathbb{R}$. From such a setup, it is natural to ask whether there exists a function $f$ generating some sort of signal in our dataset. We can approach this by regression using a variety of methods; in general, we wish to minimize an empirical risk functional [39].
Theorem 3.3.1 (Representer Theorem).
Let $\{\mathbf{x}_i\}_{i=1}^{n} \subset \mathcal{X}$ and $\{y_i\}_{i=1}^{n} \subset \mathbb{R}$. Consider functions $f: \mathcal{X} \to \mathbb{R}$ defined in an RKHS $\mathcal{H}_k$ where $k$ represents a kernel. Now consider the following empirical risk minimization problem:
$$f_{opt} = \arg\min_{f \in \mathcal{H}_k} \left\{ \sum_{i=1}^{n} L\big(y_i, f(\mathbf{x}_i)\big) + \lambda \|f\|_{k}^2 \right\} \tag{3.11}$$
where $L: \mathbb{R}^2 \to \mathbb{R}$ represents the data fitting term (usually referred to as a loss function) and $\lambda \|f\|_{k}^2$ is a regularization term with factor $\lambda \geq 0$. The minimizer $f_{opt} \in \mathcal{H}_k$ can be represented pointwise as follows:
$$f_{opt}(\mathbf{x}) = \sum_{i=1}^{n} \alpha_i\, k(\mathbf{x}, \mathbf{x}_i) = K_{\mathbf{x}X}\, \boldsymbol{\alpha}, \quad \mathbf{x} \in \mathcal{X} \tag{3.12}$$
where $\boldsymbol{\alpha} = [\alpha_1, \dots, \alpha_n]^\top = (K_{XX} + \lambda n I_n)^{-1} \mathbf{y}$.
As a result, the potentially infinite dimensional problem of finding $f_{opt}$ only depends on a finite sum and on the data in question. In addition, the minimizer is guaranteed to exist and is unique. The theorem also gives us a generic way of looking at various regression tools from the more abstract perspective of RKHSs, including deep neural networks, as shown by Unser [43]. It is important to note that Equation (3.12) shows that the minimizer belongs to the RKHS regardless of the dataset used.
Regularized regression is one application of the representer theorem. By applying Theorem 3.3.1 and slightly adjusting the regularized regression problem presented in Equation (2.7), we can view the penalty term as one determined by an RKHS endowed with a kernel $k$:
$$f_{opt} = \arg\min_{f \in \mathcal{H}_k} \left\{ \frac{1}{n} \sum_{i=1}^{n} \big(y_i - f(\mathbf{x}_i)\big)^2 + \lambda \|f\|_{\mathcal{H}_k}^2 \right\} \tag{3.13}$$
with a unique solution
$$f_{opt}(\mathbf{x}) = K_{\mathbf{x}X} (K_{XX} + \lambda n I_n)^{-1} \mathbf{y}. \tag{3.14}$$
This form of regression is called kernel ridge regression (KRR) [29] and depends only on functions defined by the kernel $k$ and its corresponding RKHS $\mathcal{H}_k$. Alternatively, a GP regressor defined by the same kernel $k$ has a unique posterior mean function
$$\bar{f}(\mathbf{x}) = K_{\mathbf{x}X} (K_{XX} + \sigma^2 I_n)^{-1} \mathbf{y}. \tag{3.15}$$
Given this, Kanagawa et al. [29] concisely summarize the following known result:
Proposition 1.
If $\sigma^2 = n\lambda$, then the GP posterior mean function and the KRR solution are the same.
This result ties a GP posterior mean to a KRR solution and the resulting RKHS $\mathcal{H}_k$. Although any valid kernel is tied to an RKHS, our definition of a GP regressor does not directly rely on that connection. By relating GP regression to KRR we can better analyze kernels via the GP framework while having confidence in the underlying RKHS theory. Of particular interest is attempting to find kernel hyperparameters of a GP for the Laplace and neural tangent kernels so that the resulting posterior means are the same. Through this matching of posterior means (and underlying hyperparameters) we can gain insight into the underlying RKHS of each kernel, which we explore later in Chapter 5.
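Proposition 1 is straightforward to verify numerically. A sketch with a Laplace kernel on random data (the sizes and the value of $\lambda$ are arbitrary choices); setting $\sigma^2 = n\lambda$ makes Equations (3.14) and (3.15) produce the same predictions:

```python
import numpy as np

def laplace_kernel(A, B, ell=1.0):
    # Gram matrix K[i, j] = exp(-||a_i - b_j|| / ell)
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return np.exp(-d / ell)

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(20)
Xs = rng.standard_normal((5, 2))          # test points

n, lam = len(X), 0.05
sigma2 = n * lam                          # the condition of Proposition 1

K = laplace_kernel(X, X)
Ks = laplace_kernel(Xs, X)
f_krr = Ks @ np.linalg.solve(K + n * lam * np.eye(n), y)   # KRR, Equation (3.14)
f_gp = Ks @ np.linalg.solve(K + sigma2 * np.eye(n), y)     # GP mean, Equation (3.15)
```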
Chapter 4Types of Kernels and their Equivalences
Having developed the key theory behind kernels we turn to define the kernels used in this thesis. Our empirical analysis will focus on the Laplace, Gaussian, and neural tangent kernels. We define the kernels over their respective inputs ๐ฑ , ๐ณ โ ๐ณ and set of parameters ๐ ; however, when it is clear from context we may drop the inputs so that
๐ โ ( ๐ ) := ๐ โ ( ๐ฑ , ๐ณ ; ๐ ) .
(4.1)
Moving forward it should be noted that by the reproducing property of a given kernel ๐ , the elements of its RKHS can be represented as
๐ โ ( โ )
โ ๐
1 โ ๐ ๐ โ ๐ โ ( โ , ๐ฑ ๐ ) โ where โ ๐ ๐ โ โ , ๐ฑ ๐ โ ๐ณ
(4.2)
which shows that RKHS member functions inherit properties that are dependent on the kernel.
4.1Matérn Class of Kernels Definition 4.1.1 (Matérn Kernel).
Let $\mathbf{x}, \mathbf{z} \in \mathbb{R}^d$ be inputs, $\ell > 0$ be the length-scale parameter, and $\nu > 0$ be the smoothness parameter such that
$$k_{Mat}(\mathbf{x}, \mathbf{z}; \nu, \ell) = \frac{2^{1-\nu}}{\Gamma(\nu)} \left( \frac{\sqrt{2\nu}\, \|\mathbf{x} - \mathbf{z}\|}{\ell} \right)^{\nu} K_{\nu}\!\left( \frac{\sqrt{2\nu}\, \|\mathbf{x} - \mathbf{z}\|}{\ell} \right) \tag{4.3}$$
where $\|\cdot\|$ is the $L_2$-norm, $\Gamma$ is the Gamma function, and $K_\nu$ is a modified Bessel function of the second kind [1, Section 9.6].
The Matérn class of kernels defines a set of functions dependent on their smoothness $\nu$. By varying $\nu$ we define our two kernels of interest. As $\nu \to \infty$, the Matérn kernel becomes equivalent to the well known Gaussian kernel2:
$$k_{Gauss}(\mathbf{x}, \mathbf{z}; \ell) = \exp\left( -\frac{\|\mathbf{x} - \mathbf{z}\|^2}{2\ell^2} \right). \tag{4.4}$$
By setting $\nu = \frac{1}{2}$, the Matérn kernel becomes equivalent to the Laplace kernel:
$$k_{Lap}(\mathbf{x}, \mathbf{z}; \ell) = \exp\left( -\frac{\|\mathbf{x} - \mathbf{z}\|}{\ell} \right). \tag{4.5}$$
Both kernels are very similarly defined and as such also fall under the umbrella of the exponential class of kernels. Despite their similar forms, the two kernels have some stark differences. The Gaussian kernel produces an RKHS of continuous, infinitely differentiable functions for all possible length-scales [47, Section 4.2.1], and its eigenvalues decay exponentially [34]. In contrast, elements of the Laplace kernel's RKHS are continuous and nowhere differentiable for all possible length-scales [47, Section 4.2.1], and its eigenvalues decay polynomially [21]. A process defined by the Laplace kernel is called an Ornstein–Uhlenbeck process [42], which was shown to describe the velocity of a Brownian particle.
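Definition 4.1.1 and the two limits above can be checked directly with scipy's Bessel functions; treating the kernels as functions of the distance $r = \|\mathbf{x} - \mathbf{z}\|$ keeps the sketch one-dimensional (the grid, the length-scale, and the use of $\nu = 30$ as a stand-in for $\nu \to \infty$ are arbitrary choices):

```python
import numpy as np
from scipy.special import gamma, kv

def matern(r, nu, ell):
    # Matern kernel of Definition 4.1.1 as a function of r = ||x - z||
    s = np.sqrt(2.0 * nu) * np.maximum(r, 1e-12) / ell  # K_nu is singular at r = 0
    return 2.0 ** (1.0 - nu) / gamma(nu) * s ** nu * kv(nu, s)

def laplace(r, ell):
    return np.exp(-r / ell)

def gaussian(r, ell):
    return np.exp(-(r ** 2) / (2.0 * ell ** 2))

r = np.linspace(0.1, 3.0, 20)
# nu = 1/2 recovers the Laplace kernel exactly
print(np.max(np.abs(matern(r, 0.5, 1.3) - laplace(r, 1.3))))
# growing nu drifts toward the Gaussian kernel
print(np.max(np.abs(matern(r, 30.0, 1.3) - gaussian(r, 1.3))))
```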
4.2Neural Tangent Kernel
As mentioned in Section 2.3, infinite width neural networks behave as GPs and can be studied through the function space. One hindrance of this approach is that, in order to develop a GP via the method outlined previously, one must determine the covariance kernel of the GP using all the parameters of that architecture. Cho and Saul [13] found that, instead, the arc-cosine kernel can be used to build various finite network architectures via a single kernel. Our focus is on the neural tangent kernel defined by Jacot et al. [28], which describes the behavior of a neural network trained by gradient descent.
Definition 4.2.1 (Finite Neural Tangent Kernel).
Let ๐ โ ( โ ; ๐ ) be a neural network with finite number of parameters ๐ . Then, the finite neural tangent kernel for the neural network and inputs ๐ฑ , ๐ณ โ ๐ ๐ โ 1 is defined as a sum containing tensor products of partials with respect to the ๐ -th parameter of ๐ :
๐ ๐ โ ๐ โ ๐พ โ ( ๐ฑ , ๐ณ ; ๐ )
โ ๐
1 ๐ โ ๐ ๐ ๐ โ ( ๐ฑ ; ๐ ) โ โ ๐ ๐ ๐ โ ( ๐ณ ; ๐ ) ,
(4.6)
where ๐ is the total number of network parameters and ๐ ๐ โ 1 is the unit ๐ -sphere space defined as
๐ ๐ โ 1 := { ๐ฑ โ โ ๐ : โ ๐ฑ โ
1 } .
(4.7)
Definition 4.2.1 refers to the neural tangent kernel for finite width and depth neural networks. However, this kernel can also be used with kernel methods to represent infinitely wide neural network architectures through an explicit recursively defined kernel. The neural network and GP equivalence discussed in Section 2.3 makes this possible by making the network in question independent of the parameters ๐ and instead dependent on the resulting GP. The details of this are further discussed in the appendix of Jacot et al. [28]. In our definitions we adopt their notation alongside notation from Bietti and Mairal [9].
Definition 4.2.2 (Infinite Neural Tangent Kernel).
Given a fully connected infinite width network with $L+1$ layers, bias $\beta \geq 0$, and $\ell \in \{1, \dots, L\}$, we define the deterministic infinite neural tangent kernel recursively for inputs $\mathbf{x}, \mathbf{z} \in \mathbb{S}^{d-1}$ as
$$k_{NTK}(\mathbf{x}, \mathbf{z}; L+1, \beta) := \Theta^{(L)}(\mathbf{x}, \mathbf{z}), \qquad \Theta^{(\ell)}(\mathbf{x}, \mathbf{z}) = \Theta^{(\ell-1)}(\mathbf{x}, \mathbf{z})\, \dot{\Sigma}^{(\ell)}(\mathbf{x}, \mathbf{z}) + \Sigma^{(\ell)}(\mathbf{x}, \mathbf{z}) + \beta^2, \tag{4.8}$$
with the base case
$$\Sigma^{(0)}(\mathbf{x}, \mathbf{z}) = \mathbf{x}^\top \mathbf{z}, \qquad \Theta^{(0)}(\mathbf{x}, \mathbf{z}) = \Sigma^{(0)}(\mathbf{x}, \mathbf{z}) + \beta^2 \tag{4.9}$$
and components
$$\Sigma^{(\ell)}(\mathbf{x}, \mathbf{z}) = \frac{c_\sigma}{2}\, \kappa_1\big(\varrho^{(\ell-1)}\big) \sqrt{\Sigma^{(\ell-1)}(\mathbf{x}, \mathbf{x})\, \Sigma^{(\ell-1)}(\mathbf{z}, \mathbf{z})}, \qquad \dot{\Sigma}^{(\ell)}(\mathbf{x}, \mathbf{z}) = \frac{c_\sigma}{2}\, \kappa_0\big(\varrho^{(\ell-1)}\big), \tag{4.10}$$
where $c_\sigma = 2$ for ReLU activated networks. We then define the cosine normalization [22]:
$$\varrho^{(\ell-1)}(\mathbf{x}, \mathbf{z}) = \frac{\Sigma^{(\ell-1)}(\mathbf{x}, \mathbf{z})}{\sqrt{\Sigma^{(\ell-1)}(\mathbf{x}, \mathbf{x})\, \Sigma^{(\ell-1)}(\mathbf{z}, \mathbf{z})}}, \tag{4.11}$$
such that $|\varrho^{(\ell-1)}| \leq 1$. Lastly, we define the arc-cosine kernels of degree 0 and 1 respectively [13]:
$$\kappa_0(u) = \frac{1}{\pi}\big(\pi - \arccos(u)\big), \qquad \kappa_1(u) = \frac{1}{\pi}\Big( u\big(\pi - \arccos(u)\big) + \sqrt{1 - u^2} \Big). \tag{4.12}$$
In this thesis, we refer to the infinite neural tangent kernel as the NTK for brevity. It should be noted that Definitions 4.2.1 and 4.2.2 are also valid in the space of $\mathbb{R}^d$. Geifman et al. [21] further define the normalized kernel $\frac{1}{L+1} k_{NTK}(\mathbf{x}, \mathbf{z}; L+1, \beta)$ when $\beta = 0$. We improve on this by empirically finding the more general case of normalization. Let $\beta \geq 0$; then it can be shown that
$$\ddot{k}_{NTK}(\mathbf{x}, \mathbf{x}; L+1, \beta) = \frac{1}{(L+1)(\beta^2 + 1)}\, k_{NTK}(\mathbf{x}, \mathbf{x}; L+1, \beta) = 1 \tag{4.13}$$
for all $\mathbf{x} \in \mathbb{R}^d$. We will be using this normalized form for the remainder of our work.
This recursive formalization depends entirely on the depth of the network and the bias $\beta$. In practice, we want to be able to find the optimal $\beta$ for the given regression problem. This requires the gradient of $\ddot{k}_{NTK}$, which we define for a given depth $L+1$ and bias $\beta$ as
$$\frac{\partial}{\partial \beta}\, \ddot{k}_{NTK}(\mathbf{x}, \mathbf{z}) = \frac{1}{(L+1)(\beta^2 + 1)} \left( \frac{\partial}{\partial \beta}\, k_{NTK}(\mathbf{x}, \mathbf{z}) - \frac{2\beta}{\beta^2 + 1}\, k_{NTK}(\mathbf{x}, \mathbf{z}) \right). \tag{4.14}$$
The full derivation of the gradient can be found in Appendix A. From here, we will use the notation $D = L + 1$ to refer to the depth, or number of layers, of a given network defined by the NTK.
4.3RKHS Inclusion
Given two kernels, it is natural to compare the spaces of functions they are capable of producing and to see whether there is any overlap. This is made possible by a consequence of Theorem 3.1.1, which allows us to determine whether one RKHS is a subset of another. In regard to this thesis, we are interested in determining equality between the RKHSs of two kernels and thus equality of the kernels themselves.
Theorem 4.3.1 (Subset Inclusion [8], p. 30).
Let ๐ 1 and ๐ 2 be continuous positive definite kernels on ๐ณ 1 ร ๐ณ 1 and ๐ณ 2 ร ๐ณ 2 respectively with โ ๐ 1 and โ ๐ 2 denoting their respective RKHS. Then, โ ๐ 1 โ โ ๐ 2 if and only if there exists a constant ๐ต such that ๐ต 2 โ ๐ 2 โ ๐ 1 is a positive definite kernel.
While this is a powerful theorem, in practice it presents a problem when trying to take a kernel's set of parameters into consideration in relation to an RKHS. In fact, a parameter can have a great effect on the functions that belong to the RKHS. This is illustrated in Walder [44, Lemma 3.1.2, pg. 34] with two Gaussian kernels defined by length-scale parameters $\ell_1, \ell_2 \in \mathbb{R}$. We let $k_{Gauss}(\cdot, \cdot\,; \ell)$ be the general Gaussian reproducing kernel defined by some parameter $\ell \in \mathbb{R}$ that defines an RKHS $\mathcal{H}_k$. If $\ell_1 \geq \frac{1}{2}\ell$ and $\ell_2 \geq \frac{1}{2}\ell$, then by utilizing the inner product of $\mathcal{H}_k$ we can create a new reproducing kernel defined in that space for some inputs $\mathbf{x}, \mathbf{z} \in \mathbb{R}^d$:
$$\big\langle k_{Gauss}(\cdot, \mathbf{x}; \ell_1),\ k_{Gauss}(\cdot, \mathbf{z}; \ell_2) \big\rangle_{\mathcal{H}_k} = k_{Gauss}(\mathbf{x}, \mathbf{z}; \ell_1 + \ell_2 - \ell) \tag{4.15}$$
which shows that there exists a new kernel function with length-scale $\ell_1 + \ell_2 - \ell$ that is a member of $\mathcal{H}_k$. However, if one of the inequalities is not satisfied, then the kernel corresponding to that length-scale is not in the RKHS.
Another way to look at this is that by scaling the parameter of the Gaussian kernel, you also scale the $\mathbb{R}^d$ input space [40, Proposition 4.37, pg. 132]. Without rescaling the input space, there is no guarantee that the scaled Gaussian kernel will maintain the same RKHS.
This challenges the notion that a kernel produces an all-encompassing RKHS independent of its set of parameters. Hence, a kernel and its parameters must be observed together in the practical analysis of RKHSs. There are two ways to think about equality between two RKHSs: either there are specific parameters for which the RKHSs produce the same set of functions, or there is a family of RKHSs over all possible parameters for which all possible sets of functions are the same. In the upcoming sections and chapters we analyze the practical equivalence of the Laplace kernel and the NTK by viewing their equivalence through parameter matching.
4.4Equivalence of the Laplace and Neural Tangent Kernels
It is clear that neural networks and kernel methods have some latent overlap. Belkin et al. [7] empirically found similarities between the Laplace kernel and ReLU activated neural networks when used on the task of fitting random labels. As such, it motivates the question: Do the Laplace and neural tangent kernels have the same RKHS and if so to what extent? This question is answered in theory by Geifman et al. [21] and Chen and Xu [11] who showed subset equality between the Laplace RKHS โ ๐ฟ โ ๐ โ ๐ and the NTK RKHS โ ๐ โ ๐ โ ๐พ in the space of ๐ ๐ โ 1 . The forward direction [21] was shown by eigenvalue analysis by way of Mercerโs Theorem 3.2.1. The backward direction [11] was shown by utilizing the Subset Inclusion Theorem 4.3.1 and singularity analysis.
This result raises a further question: what does the practical equivalence of these kernels look like? Since they are dual representations of each other, this poses a challenge: the NTK relies on a depth $D$ in the natural numbers, in great contrast to the Laplace parameter $\ell$ in the positive reals. Due to the vast difference in parameterization, we hypothesize that the bias $\beta$ plays a role in bridging the gap between the depths.
We begin by considering the matching of the neural tangent kernel $\ddot{k}_{NTK}$ for set depth $D \in \mathbb{N}$ and $\beta \in \mathbb{R}^+$ with the Laplace kernel $k_{Lap}$ for some $\ell \in \mathbb{R}^+ \setminus \{0\}$. Finding a matching kernel for some inputs $\mathbf{x}, \mathbf{z} \in \mathbb{S}^{d-1}$ then amounts to solving
$$k_{Lap}(\mathbf{x}, \mathbf{z}; \ell) = \ddot{k}_{NTK}(\mathbf{x}, \mathbf{z}; D, \beta) \iff \exp\left(-\frac{\|\mathbf{x} - \mathbf{z}\|}{\ell}\right) = \ddot{k}_{NTK}(\mathbf{x}, \mathbf{z}; D, \beta) \iff -\frac{\|\mathbf{x} - \mathbf{z}\|}{\ell} = \log\big(\ddot{k}_{NTK}(\mathbf{x}, \mathbf{z}; D, \beta)\big) \iff \ell = -\frac{\|\mathbf{x} - \mathbf{z}\|}{\log\big(\ddot{k}_{NTK}(\mathbf{x}, \mathbf{z}; D, \beta)\big)}. \tag{4.16}$$
Further, we define $d_\theta$ as a measure of the difference between the kernels given a set of parameters $\theta$:
$$d_{D, \beta, \ell}(\mathbf{x}, \mathbf{z}) = \big| \ddot{k}_{NTK}(\mathbf{x}, \mathbf{z}; D, \beta) - k_{Lap}(\mathbf{x}, \mathbf{z}; \ell) \big|. \tag{4.17}$$
For the kernels to be the same, it is necessary that Equation (4.16) provides a length-scale where the kernels are indeed the same. This is consequential because if it can be done consistently, then the kernels are interchangeable regardless of any optimization procedures that are done while utilizing kernel methods. This brings us to the motivating question: can we find a length-scale for which both kernels are identical?
| Inputs | $d_{D=3, \beta, \ell}(\mathbf{x}_i, \mathbf{z}_i)$: $\beta = 0$, $\ell \approx 1.815$ | $\beta \approx 2.122$, $\ell \approx 2.036$ |
|---|---|---|
| $\mathbf{x}_1 = [0.8027,\ 0.2299,\ 0.5503]$, $\mathbf{z}_1 = [0.7982,\ 0.3818,\ 0.4658]$ | $\approx 0.0$ | $0.001296$ |
| $\mathbf{x}_2 = [0.0389,\ 0.9663,\ 0.2545]$, $\mathbf{z}_2 = [0.6941,\ 0.5958,\ 0.4040]$ | $0.0980$ | $0.0000187$ |

Table 4.1: An illustration of the discrepancy between kernel differences $d_\theta$ while trying to match $\ell$ to $\ddot{k}_{NTK}$ of depth $D = 3$ using Equation (4.16). Left column: random inputs in $\mathbb{S}^{d-1}$. Column 1 fixes $\beta = 0$ during matching. Column 2 optimizes $\beta$ and $\ell$ using Algorithm 4.1.
Table 4.1 showcases this conundrum. Using Equation (4.16) we find that for fixed $D$ and $\beta$ we can find $\ell$ such that $d_\theta$ of the two kernels is near zero for a single input pair but not for any other, as illustrated in the first $d_\theta$ column of Table 4.1. Here $\mathbf{x}_1, \mathbf{z}_1$ produce a difference near zero, but $\mathbf{x}_2, \mathbf{z}_2$ are nearly 0.1 apart, which indicates that this type of matching only works on an input-by-input basis.
Input: $n_{dim}, S, D \in \mathbb{N}$ and $u_{limit} \in \mathbb{R}$
1. Initialize empty lists for means $M$ and variances $V$.
2. Initialize a list $B$ of $\beta$ values in the range $[0, u_{limit}]$.
3. For each $\beta$ in $B$:
   1. Set the seed, initialize $\ddot{k}_{NTK}(D, \beta)$, and initialize an empty list of length-scales $L$.
   2. For $i = 0$ to $S$:
      1. $\mathbf{x}_i \leftarrow$ vector of size $n_{dim}$ with random entries, normalized to $\mathbb{S}^{d-1}$;
      2. $\mathbf{z}_i \leftarrow$ vector of size $n_{dim}$ with random entries, normalized to $\mathbb{S}^{d-1}$;
      3. append the length-scale $\ell_i = -\|\mathbf{x}_i - \mathbf{z}_i\| / \log\big(\ddot{k}_{NTK}(\mathbf{x}_i, \mathbf{z}_i; D, \beta)\big)$ to $L$.
   3. Append $\bar{\ell} = \mathbb{E}[L]$ to $M$ and $\operatorname{var}[L]$ to $V$.
4. Return: $\bar{\ell}$ in $M$ and $\beta$ in $B$ corresponding to $\min V$.

Algorithm 4.1: Length-scale and bias matching procedure.
To improve on this, we vary $\beta$ and try to find one for a given $D$ such that the $\ell$ found using Equation (4.16) matches for all $\mathbf{x}, \mathbf{z} \in \mathbb{S}^{d-1}$. This procedure is described in Algorithm 4.1. The idea behind it is that we cannot directly find an $\ell$ and $\beta$ that produce $d_\theta = 0$ over all possible $\mathbf{x}, \mathbf{z}$, so instead we approximate by using $S \in \mathbb{N}$ sampled pairs $(\mathbf{x}_i, \mathbf{z}_i)$ where each $\mathbf{x}_i, \mathbf{z}_i \in \mathbb{S}^{d-1}$. We set $D$ of the NTK and calculate $\ell$ for some $\beta$ over every pair using Equation (4.16). We then take the mean and variance of the resulting set of length-scales $L = \{\ell_1, \dots, \ell_S\}$ for that specific $\beta$ and record them. We repeat this over $m \in \mathbb{N}$ values of $\beta$ bounded between 0 and some upper bound (dependent on $D$). The minimum variance in this experiment indicates that the corresponding mean length-scale $\bar{\ell}_j$ and $\beta_j$, where $j \in \{1, \dots, m\}$, are the optimal kernel parameters that produce a small or approximately zero $d_\theta$. This can be summarized as follows:
โ , ๐ฝ
arg โก min โ ยฏ ๐ , ๐ฝ ๐ โก { var โ [ ๐ฟ ] โ ๐ฝ 1 , โฆ , ๐ฝ ๐ } .
(4.18)
As ๐ โ โ the entire space gets utilized so the resulting โ and ๐ฝ should be optimal.
Figure 4.1: Mean and variance plots of $\ell$ given specific $\beta$, calculated using $S = 1000$ sampled input pairs for various depths. The solid orange line represents the variance while the dotted blue line represents the mean.
In our experiments, for depths 1 and 2, $\operatorname{var}[L] \to 0$ as $\beta \to \infty$; depths 3, 4, and 5 attain non-trivial minima at various $\beta$; and for depth 6 the global minimum is near zero. Figure 4.1 illustrates the global minima for depths 1, 3, 5, and 6. The second $d_\theta$ column in Table 4.1 illustrates the effect of using the optimal $\ell$ and $\beta$ values: both input pairs have small $d_\theta$.
Figure 4.2: Solid orange line represents the variance and the dotted blue line represents the mean. Top: a zoom-in of depth $D = 6$ for $\beta \in [0, 10^{-7}]$. Due to the zoom, the mean values are all concentrated around $\approx 1.0524$ and all variance values are near $\approx 2.437 \times 10^{-5}$. The difference between the minimum and maximum variance shown is approximately $10^{-18}$. Bottom: a showcase of a typical plot past depth 6.
Depth 6 is an interesting case: as seen in Figure 4.2, as we look closer to zero we are still unable to attain a global minimum for the variance despite it appearing that $\beta \to 0$. The rough nature of the plot is due to $\beta$ being squared in the NTK formulation, which results in values less than $10^{-14}$, while the floating point precision of numpy on the machine used to compute this is about $10^{-15}$. Beyond depth 6, $\beta$ needs to be orders of magnitude smaller and thus cannot be accurately computed.
In summary, we gained significant insight into the empirical matching of these kernels. In order to equate the kernels over all possible inputs, we are indeed dependent on both the Laplace kernel length-scale $\ell$ and the NTK bias $\beta$. In addition, through Figures 4.1 and 4.2 we have evidence supporting the idea that as the NTK depth increases, the optimal bias and length-scale both decrease. From the perspective of machine learning methods, increasing the depth of a neural network also increases its capacity and its susceptibility to overfitting. This is reflected in kernel methods, where a relatively low length-scale parameter produces functions that capture more fine grained detail by interpolating the given dataset more closely (i.e., overfitting).
Chapter 5Synthetic Experiments
In this chapter we analyze the similarities and differences between the Laplace kernel and the NTK over a number of synthetically generated datasets using GP regression. Specifically, we are interested in matching the posterior means generated by the GP regressor under kernel assumptions, determining the effectiveness of matching in $\mathbb{R}^d$ and $\mathbb{S}^{d-1}$, and analyzing the influence of data transformations on the quality of posterior mean predictions for the NTK. Section 5.1 describes the experimental setup used in this chapter, Section 5.2 focuses on posterior mean matching when $d = 2$, Section 5.3 showcases the differences in posterior mean matching in $\mathbb{R}^2$ and $\mathbb{S}^1$ on more complex surfaces, and Section 5.4 deals with the quality of posterior mean predictions using high dimensional input datasets.
5.1Setup
In our experiments, we utilize scikit-learn [36], a machine learning library for the Python programming language. One contribution of this thesis is an implementation of the NTK3 that is directly compatible with scikit-learn's GP module. Using this implementation we are able to compute NTK values, optimize the NTK's bias, and train GP models that represent infinite width neural networks. We also use the GP module's Matérn kernel, which yields the Laplace ($\nu = \frac{1}{2}$) and Gaussian ($\nu \to \infty$) kernels in our experiments.
We generate both noisy and noiseless synthetic data for our experiments. The added data noise ๐ is normally distributed with mean zero and variance ๐ 2 chosen depending on the dataset,
๐ โผ ๐ฉ โ ( 0 , ๐ 2 ) .
(5.1)
The number of samples we generate depends on the data. In addition, the data is split 50-50 into a training set $(X, \mathbf{y})$ and a testing set $(X_*, \mathbf{y}_*)$ for each experiment. Lastly, we normalize and/or rescale our datasets depending on the experiment. Normalizing in this context means that we map $\mathbb{R}^d$ inputs (each observation or row vector in $X$) to the sphere $\mathbb{S}^{d-1}$ by way of the $L_2$-norm:
$$\mathbf{x} \mapsto \frac{\mathbf{x}}{\|\mathbf{x}\|_{L_2}}. \tag{5.2}$$
On the other hand, rescaling refers to subtracting the sample mean $\mu$ and dividing by the sample variance $\sigma^2$ for a variable (column vector in $X$) $\mathbf{x}$:
$$\mathbf{x} \mapsto \frac{\mathbf{x} - \mu}{\sigma^2}. \tag{5.3}$$
Our GP regressor setup begins with the kernels; namely, the NTK, Laplace, and Gaussian kernels. During GP regression, the kernel hyperparameters are trained via optimization to best fit the data. Up until now we have been referring to kernel parameters because those values are intrinsic to the definition of a kernel. However, in the context of GPs, we refer to those kernel parameters as hyperparameters since they do not directly influence the definition of a GP model. Thus, we define hyperparameters as the parameters that are optimized in order to tune a specific model but do not directly describe that model.
All kernels contain the constant value parameter $c$. In the case when a dataset is noisy, we include a white noise variance $\sigma^2$. There are additional hyperparameters used to define specific kernels: $D$ and $\beta$ for the NTK, and $\nu$ and $\ell$ for the Matérn kernel. All the listed hyperparameters can be unfixed, meaning they are optimized during training by maximizing the marginal log likelihood of the GP, as outlined by Williams and Rasmussen [47].
Furthermore, GPs contain optimization-specific options: `n_restart`, `y_rescale`, and $\alpha$. `n_restart` controls the number of optimizer restarts performed during training in order to find the optimal hyperparameters. Each restart randomly chooses a new initial value from within pre-specified bounds, which helps combat what may be a complicated loss surface with many local minima. `y_rescale` is a boolean value that, when true, rescales the response variable during training in order to aid with fitting and optimization. It should be noted that `y_rescale` is undone at time of prediction. Finally, $\alpha$ is a small positive value added to the diagonal of $[K_{XX} + \sigma^2 I_n]^{-1}$ (Equation (2.13)) to ensure positive definiteness in light of any numerical issues. All relevant modifications and parameters are summarized in Table 5.1.
| Data | Description | Value |
|---|---|---|
| Normalization | Transforming data to $\mathbb{S}^{d-1}$ (Equation (5.2)) | |
| Rescaling | Subtracting $\mu$, dividing $s^2$ (Equation (5.3)) | |

| Optimization | Description | Value |
|---|---|---|
| `n_restart` | Number of optimizer restarts | 9 |
| `y_rescale` | Response variable rescaling during training | true |
| $\alpha$ | Value to ensure positive definiteness | $10^{-5}$ |

| Kernel | Description | Value |
|---|---|---|
| $D$ | NTK depth | 2, 3, 10 |
| $\beta$ | NTK bias | Unfixed |
| $\nu$ | Matérn kernel smoothness | $\frac{1}{2}$, $\infty$ |
| $\ell$ | Length-scale | Unfixed |
| $c$ | Constant value | Unfixed |
| $\sigma^2$ | Data noise variance | Unfixed |

Table 5.1: Summary of variables managed in all synthetic experiments. Values left blank are determined per experiment. Unfixed values are ones allowed to be optimized during experiments.
Our main task throughout this chapter is to perfectly match the posterior means of the Laplace kernel and NTK. We use the Gaussian kernel as a comparison that lies outside the RKHS in question. To accomplish this task we optimize Matérn kernels with $\nu = \frac{1}{2}$ (Laplace) and $\nu \to \infty$ (Gaussian). We begin with an objective function, outlined in Algorithm 5.1, that we minimize during the optimization process.
Algorithm 5.1: Objective function `obj_func` for posterior mean matching optimization.

Input: $k_{\mathrm{mat}}(\theta)$, $\bar{f}^*_{\mathrm{NTK}}$, $X$, $\mathbf{y}$, $X^*$
1. $f^{\mathrm{opt}}_{\mathrm{mat}}, \theta_{\mathrm{opt}} \leftarrow \mathrm{optimize}\{f_{\mathrm{mat}} \sim \mathcal{GP}(0, k_{\mathrm{mat}}(\theta)) \mid X, \mathbf{y} : \texttt{n\_restart}, \texttt{y\_rescale}, \alpha\}$
2. $\bar{f}^*_{\mathrm{mat}} \leftarrow f^{\mathrm{opt}}_{\mathrm{mat}}(X^*)$

Return: $\|\bar{f}^*_{\mathrm{NTK}} - \bar{f}^*_{\mathrm{mat}}\|_{L_2}$

Algorithm 5.2: Posterior mean matching for Matérn kernels. The optimized model is denoted $f^{\mathrm{opt}}(\cdot) := f(\cdot\,; \theta_{\mathrm{opt}})$. `optimize` refers to the GP optimization process of maximizing the marginal log likelihood.

Define: `n_restart`, `y_rescale`, $\alpha$, `noise`
Input: $D$, $\nu$, $X$, $\mathbf{y}$, $X^*$ (train and test data pre-processed the same way)

NTK fitting and optimization:
1. $\ddot{k}_{\mathrm{NTK}}(\theta) \leftarrow c \cdot \ddot{k}_{\mathrm{NTK}}(D, \beta)$; $\theta := \{D, \beta, c\}$
2. if `noise` is True then $\ddot{k}_{\mathrm{NTK}}(\theta) \leftarrow \ddot{k}_{\mathrm{NTK}}(\theta) + \sigma^2\delta_{\mathbf{s},\mathbf{t}}$; $\theta \leftarrow \{\ldots, \sigma^2\}$
3. $f^{\mathrm{opt}}_{\mathrm{NTK}}, \theta_{\mathrm{opt}} \leftarrow \mathrm{optimize}\{f_{\mathrm{NTK}} \sim \mathcal{GP}(0, \ddot{k}_{\mathrm{NTK}}(\theta)) \mid X, \mathbf{y} : \texttt{n\_restart}, \texttt{y\_rescale}, \alpha\}$
4. $\bar{f}^*_{\mathrm{NTK}} \leftarrow f^{\mathrm{opt}}_{\mathrm{NTK}}(X^*)$
5. $\beta_{\mathrm{opt}}, c_{\mathrm{opt}}, \sigma^2_{\mathrm{opt}} \leftarrow \{\ldots, \beta, c, \sigma^2\}_{\theta_{\mathrm{opt}}}$

Matérn posterior matching:
6. $k_{\mathrm{mat}}(\theta) \leftarrow c_{\mathrm{opt}} \cdot k_{\mathrm{mat}}(\nu, \ell)$; $\theta := \{\nu, \ell, c_{\mathrm{opt}}\}$
7. if `noise` is True then $k_{\mathrm{mat}}(\theta) \leftarrow k_{\mathrm{mat}}(\theta) + \sigma^2_{\mathrm{opt}}\delta_{\mathbf{s},\mathbf{t}}$; $\theta \leftarrow \{\ldots, \sigma^2_{\mathrm{opt}}\}$
8. $f^{\mathrm{opt}}_{\mathrm{mat}}, \theta_{\mathrm{opt}} \leftarrow \mathrm{minimize}\{\texttt{obj\_func}(k_{\mathrm{mat}}(\theta), \bar{f}^*_{\mathrm{NTK}}, X, \mathbf{y}, X^*)\}$; $\ell_{\mathrm{opt}} \leftarrow \{\ldots, \ell, \ldots\}_{\theta_{\mathrm{opt}}}$

Return: $c_{\mathrm{opt}}$, $\sigma^2_{\mathrm{opt}}$, $\beta_{\mathrm{opt}}$, $\ell_{\mathrm{opt}}$, $f^{\mathrm{opt}}_{\mathrm{NTK}}$, $f^{\mathrm{opt}}_{\mathrm{mat}}$
The purpose of the objective function is to minimize the RMSE between the Matérn and NTK posterior means by varying the Matérn kernel parameter $\ell$,

$$\|\bar{f}^*_{\mathrm{NTK}} - \bar{f}^*_{\mathrm{mat}}\|_{L_2}. \tag{5.4}$$
The procedure, outlined in Algorithm 5.2, utilizes parameters for the GPs and hyperparameters for the kernels as expressed in Table 5.1. It should be noted that `optimize` uses `sklearn.gaussian_process.GaussianProcessRegressor`'s `fit` function and `minimize` uses the `scipy.optimize.minimize_scalar` function to perform the key steps in our procedure.
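The Matérn half of the procedure can be sketched compactly with those two library calls. In the sketch below, `ntk_mean` stands in for the NTK GP's posterior mean on the test inputs, `c` for the constant value inherited from the NTK, and the search bounds are illustrative; the function name is ours, not the thesis's.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, Matern

def match_matern_to_ntk(ntk_mean, X, y, X_star, c, nu=0.5, bounds=(1e-2, 1e2)):
    """Sketch of Algorithms 5.1-5.2: find the Matern length-scale whose GP
    posterior mean is closest (in RMSE) to a given NTK posterior mean."""
    def obj_func(ell):
        # All hyperparameters fixed: only `ell` varies, via minimize_scalar.
        kernel = (ConstantKernel(c, constant_value_bounds="fixed")
                  * Matern(length_scale=ell, length_scale_bounds="fixed", nu=nu))
        gp = GaussianProcessRegressor(kernel=kernel, optimizer=None,
                                      normalize_y=True).fit(X, y)
        return np.sqrt(np.mean((ntk_mean - gp.predict(X_star)) ** 2))

    res = minimize_scalar(obj_func, bounds=bounds, method="bounded")
    return res.x, res.fun
```

Because the length-scale is the only free quantity, `GaussianProcessRegressor` is built with `optimizer=None` so `fit` does no hyperparameter search of its own.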
We control the NTK's depth parameter $D$ and the choice of Matérn kernel smoothness $\nu$. It should be noted that for our experiments we only allow the constant value $c$ and error term $\sigma^2$ parameters to be optimized for the NTK, while the respective parameters for the Matérn kernels inherit the NTK's optimized values. The reasoning lies in the relation between a kernel and a GP covariance function. For values $\mathbf{s}, \mathbf{t} \in \mathcal{X}$, the covariance function of a GP $f \sim \mathcal{GP}(0, k)$ is related directly to its kernel (and thus $f$ depends on the choice of $k$):

$$\mathrm{cov}(f(\mathbf{s}), f(\mathbf{t})) = k(\mathbf{s}, \mathbf{t}). \tag{5.5}$$
So, a scaled GP is expressed as $\sqrt{c}\,f \sim \mathcal{GP}(0, c\,k)$ because its covariance function simplifies as follows:

$$\mathrm{cov}(\sqrt{c}\,f(\mathbf{s}), \sqrt{c}\,f(\mathbf{t})) = (\sqrt{c})^2\,\mathrm{cov}(f(\mathbf{s}), f(\mathbf{t})) = c\,k(\mathbf{s}, \mathbf{t}). \tag{5.6}$$
Lastly, a GP that includes an additive noise term is expressed as $f + \epsilon \sim \mathcal{GP}(0, k + \sigma^2\delta_{\mathbf{s},\mathbf{t}})$, where $\sigma^2\delta_{\mathbf{s},\mathbf{t}}$ is the white noise kernel with a constant value represented by the noise variance $\sigma^2$ and the kernel itself being the Kronecker delta $\delta_{\mathbf{s},\mathbf{t}}$. This is justified because, with $\epsilon$ independent of $f$,

$$\begin{aligned}
\mathrm{cov}(f(\mathbf{s}) + \epsilon(\mathbf{s}),\, f(\mathbf{t}) + \epsilon(\mathbf{t})) &= \mathrm{cov}(f(\mathbf{s}), f(\mathbf{t})) + \mathrm{cov}(f(\mathbf{s}), \epsilon(\mathbf{t})) \\
&\quad + \mathrm{cov}(\epsilon(\mathbf{s}), f(\mathbf{t})) + \mathrm{cov}(\epsilon(\mathbf{s}), \epsilon(\mathbf{t})) \\
&= k(\mathbf{s}, \mathbf{t}) + \sigma^2\delta_{\mathbf{s},\mathbf{t}}.
\end{aligned} \tag{5.7}$$
As a result, if we wish to compare just the kernels while maintaining their expressivity through transformations, it is necessary to keep the constant value and noise variance terms the same between kernels.
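Keeping the constant value and noise variance tied between kernels is easy to express with scikit-learn's kernel algebra. The sketch below freezes $c$ and $\sigma^2$ at values assumed to come from an NTK fit, so only the Matérn length-scale remains free; the helper name is ours.

```python
from sklearn.gaussian_process.kernels import ConstantKernel, Matern, WhiteKernel

def scaled_noisy_matern(c, sigma2, nu=0.5, length_scale=1.0):
    """Build c * k_matern + sigma^2 * delta, as in Eqs. (5.6)-(5.7).

    `c` and `sigma2` are fixed (bounds="fixed") so the Matern kernel inherits
    the NTK's optimized constant value and noise variance; only the
    length-scale is left for the optimizer.
    """
    return (ConstantKernel(c, constant_value_bounds="fixed")
            * Matern(length_scale=length_scale, nu=nu)
            + WhiteKernel(noise_level=sigma2, noise_level_bounds="fixed"))
```

Evaluating the composed kernel on a pair of points shows the white noise term contributing only on the diagonal, exactly as the Kronecker delta in Equation (5.7) dictates.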
We find that our procedure is a difficult optimization task because the objective function landscape contains many local minima. Our Python implementation deals with this by modifying the minimization process to search by slowly expanding the search bounds for $\ell$ over multiple runs. In our experiments we consider two metrics for posterior mean matching: root mean-squared error (RMSE) and the Pearson correlation coefficient ($\rho$). The former gives an idea of the ability to minimize the objective function's loss, and the latter is used to see how well the posterior means of different kernels overlap with each other.
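The expanding-bounds workaround can be sketched as a small wrapper around `minimize_scalar`; the initial bounds, growth factor, and helper name here are illustrative, not the thesis's exact values.

```python
from scipy.optimize import minimize_scalar

def expanding_search(obj_func, lo=0.1, hi=10.0, grow=10.0, runs=3):
    """Run a bounded scalar search several times, widening the length-scale
    bounds each run, and keep the best minimizer found.  A sketch of the
    thesis's workaround for a multimodal objective landscape."""
    best = None
    for _ in range(runs):
        res = minimize_scalar(obj_func, bounds=(lo, hi), method="bounded")
        if best is None or res.fun < best.fun:
            best = res
        lo, hi = lo / grow, hi * grow  # slowly expand the search bounds
    return best.x, best.fun
```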
5.2 Illustrative Example
To motivate the remainder of this work, we first attempt to answer the following question and analyze the results produced: does our posterior mean matching procedure work? We begin by creating a simple 2D-input parametric curve as follows:

$$x_1 = (y^2 + 1)\sin(t), \qquad x_2 = (y^2 + 1)\cos(t), \qquad y \in [-2, 2],\; t \in [-2\pi, 2\pi]. \tag{5.8}$$
We independently sample 100 values of $t \sim U[-2\pi, 2\pi]$ and $y \sim U[-2, 2]$, where $U[a, b]$ is the uniform distribution between points $a$ and $b$, inclusive. We generate $x_1$ and $x_2$ using the $t$ and $y$ samples. Although the curve is defined by starting with samples of $y$ and $t$, we treat the points $(x_1, x_2)$ as the inputs and $y$ as the output during fitting. We chose this curve because it very clearly illustrates the posterior mean matching between the NTK and Matérn kernels. The dependence on $t$ allows us to connect the points in a way that other surfaces and datasets do not. We then split our inputs $(x_1, x_2)$ and output $y$ into a training set of size $n = 50$ and a testing set of size $n^* = 50$. We run two separate experiments: non-noisy and noisy with $\epsilon \sim \mathcal{N}(0, 0.15^2)$. We utilize the experiment setup outlined in Table 5.1. In addition, we normalize the data inputs $(x_1, x_2)$ to $\mathbb{S}^1$ prior to training. We then perform the posterior mean matching as outlined in Algorithm 5.2.
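Generating this dataset takes only a few lines; the function name is ours, and the noise standard deviation defaults are the ones used in the experiments above.

```python
import numpy as np

def parametric_curve(n=100, noise_sd=0.0, seed=0):
    """Generate the curve of Eq. (5.8) and normalize the inputs to S^1.

    Inputs are (x1, x2) and the output is y; `noise_sd` adds the optional
    response noise (0.15 in the noisy experiment).
    """
    rng = np.random.default_rng(seed)
    t = rng.uniform(-2 * np.pi, 2 * np.pi, n)
    y = rng.uniform(-2, 2, n)
    X = np.column_stack([(y ** 2 + 1) * np.sin(t),
                         (y ** 2 + 1) * np.cos(t)])
    X /= np.linalg.norm(X, axis=1, keepdims=True)  # normalize inputs to S^1
    return X, y + rng.normal(0.0, noise_sd, n)
```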
Figure 5.1: Top: the parametric curve defined in Equation (5.8). Bottom: Equation (5.8) with inputs normalized and noisy training points shown.
| Metric | Noise | $D=2$ Lap | $D=2$ Gaus | $D=3$ Lap | $D=3$ Gaus | $D=10$ Lap | $D=10$ Gaus |
|---|---|---|---|---|---|---|---|
| RMSE | No | 0.0860 | 0.3352 | 0.0250 | 0.3610 | 0.0079 | 0.3670 |
| RMSE | Yes | 0.0563 | 0.0228 | 0.0314 | 0.0412 | 0.0494 | 0.0915 |
| $\rho$ | No | 0.9974 | 0.9590 | 0.9998 | 0.9509 | 0.9999 | 0.9462 |
| $\rho$ | Yes | 0.9954 | 0.9991 | 0.9984 | 0.9980 | 0.9929 | 0.9783 |

Table 5.2: The results of posterior mean matching the NTK to Matérn kernels for the parametric dataset in $\mathbb{S}^1$.

Figure 5.2: Posterior means generated by fitting non-noisy parametric curve data in $\mathbb{S}^1$ and predicted on out-of-sample data in $\mathbb{S}^1$. For visualization purposes we set $(x_1, x_2) \in \mathbb{R}^2$.
Fitting the parametric curve shows a number of interesting results. The optimization procedure performs well and can accurately match the posterior means of the Laplace and NTK GPs. Table 5.2 indicates this with very small RMSE values, and Figure 5.2 visually shows an exact match. More than that, the posterior means correlate almost perfectly ($\rho > 0.99$) regardless of whether the GPs were trained on noisy data or not.
On the other hand, the Gaussian kernel and NTK fail to perfectly match up in the case where no noise is present as can be seen in Table 5.2. Although there is a high correlation between the posterior means, Figure 5.2 illustrates how the Gaussian kernel cannot match the NTKโs predictions to the exact effect that the Laplace kernel does. This is also indicated by a higher resulting RMSE during optimization.
One important thing of note is the low RMSE and high correlation values in Table 5.2 for both the Laplace and Gaussian kernels when noisy data is present. This can be explained by the fact that adding the noise term $\sigma^2$ to our kernels gives the resulting regression additional smoothness and increased bounds for error while fitting. When matching posterior means, the regressors can use this inferred noise to their advantage to better minimize our objective function, resulting in the values seen in the table. Without $\sigma^2$, our GP regressors would attempt to perfectly interpolate through the data.
| $D$ | Noise | $\beta$ | $\ell_{\mathrm{Lap}}$ | $\ell_{\mathrm{Gaus}}$ | $c$ | $\sigma^2$ |
|---|---|---|---|---|---|---|
| 2 | No | 8480.751 | 1276.249 | 0.6543 | 32.495 | - |
| 2 | Yes | 0.000010 | 1.0569 | 0.8414 | 0.2734 | 0.7910 |
| 3 | No | 453.111 | 765.133 | 0.6540 | 21.515 | - |
| 3 | Yes | 0.000014 | 1.0046 | 0.8242 | 0.2722 | 0.7916 |
| 10 | No | 469.676 | 0.3702 | 0.6249 | 6.6946 | - |
| 10 | Yes | 0.000010 | 0.3026 | 0.2152 | 0.2350 | 0.8177 |

Table 5.3: Kernel hyperparameter results for posterior mean matching with inputs in $\mathbb{S}^1$.
Lastly, Table 5.3 shows the trained kernel hyperparameters after the posterior mean matching procedure. As noted in Section 4.4, we see that the Laplace length-scale decreases as NTK depth increases. The Gaussian length-scale stays approximately the same regardless of depth when the data is non-noisy. Both the Laplace and Gaussian length-scales decrease as depth increases when the data is noisy, which may be explained by the additional smoothness added by the noise term $\sigma^2$.
5.3 Synthetic 2D Input Case

| Dataset | $f(x_1, x_2)$ |
|---|---|
| Ackley [2] | $-20\exp\!\left[-\frac{1}{5}\sqrt{\frac{1}{2}(x_1^2 + x_2^2)}\right] - \exp\!\left[\frac{1}{2}(\cos 2\pi x_1 + \cos 2\pi x_2)\right] + e + 20$ |
| Franke [18] | $0.75\exp\!\left(-\frac{(9x_1-2)^2}{4} - \frac{(9x_2-2)^2}{4}\right) + 0.75\exp\!\left(-\frac{(9x_1+1)^2}{49} - \frac{9x_2+1}{10}\right) + 0.5\exp\!\left(-\frac{(9x_1-7)^2}{4} - \frac{(9x_2-3)^2}{4}\right) - 0.2\exp\!\left(-(9x_1-4)^2 - (9x_2-7)^2\right)$ |
| Nonpoly. [45] | $\frac{1}{6}\left[(30 + 5x_1\sin(5x_1))(4 + \exp(-5x_2)) - 100\right]$ |

Table 5.4: Three 2D input functions with $(x_1, x_2) \in \mathbb{R}^2$.
| | Ackley | Franke | Nonpolynomial |
|---|---|---|---|
| Inputs | $x_1, x_2 \sim U[1, 7]$ | $x_1, x_2 \sim U[-0.5, 1]$ | $x_1, x_2 \sim U[0, 2]$ |
| Noise | $\epsilon \sim \mathcal{N}(0, 0.75^2)$ | $\epsilon \sim \mathcal{N}(0, 0.10^2)$ | $\epsilon \sim \mathcal{N}(0, 1)$ |

Table 5.5: 2D input function input sampling distributions and noise used for the noisy experiments.
In this section we look at more complicated functions in the 2D input space. Since the Laplace kernel and NTK share the same RKHS on $\mathbb{S}^{d-1}$, it is of interest to determine how these kernels behave in the space $\mathbb{R}^d$. We consider three different functions found in the literature, summarized in Tables 5.4 and 5.5.
During the calculation of the NTK, the computational space complexity scales exceptionally poorly due to the recursive nature of the kernel, which caused issues on our hardware. This required a way to minimize the dataset size while retaining enough information to generalize our model. For the following experiments, we utilize Latin hypercube sampling [27]. To understand Latin hypercube sampling, it is simpler to define Latin square sampling first: in a Latin square sample, only one sample is chosen in each row and column of a square grid. Latin hypercube sampling generalizes the Latin square, yielding a sample whose points are approximately equidistant within the given boundaries. We use it because it better captures the overall surface of these functions during training whilst utilizing fewer training points.
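SciPy ships a Latin hypercube sampler that makes this one-per-stratum property concrete; the helper name and example bounds below are ours.

```python
import numpy as np
from scipy.stats import qmc

def lhs_sample(n, bounds, seed=0):
    """Draw n Latin hypercube points inside per-dimension (low, high) bounds.

    For example bounds=[(1, 7), (1, 7)] reproduces the Ackley input domain.
    """
    sampler = qmc.LatinHypercube(d=len(bounds), seed=seed)
    unit = sampler.random(n)  # one point per 1/n stratum in every dimension
    lows = [lo for lo, _ in bounds]
    highs = [hi for _, hi in bounds]
    return qmc.scale(unit, lows, highs)
```

Checking the marginal histograms of a sample confirms that every one of the $n$ strata in each dimension contains exactly one point, which is what lets a smaller training set still cover the function's surface.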
Figure 5.3: Posterior means for the noisy Ackley function in $\mathbb{R}^2$ for NTK depth $D = 2$ with test size $n^* = 500$. GPs were trained using $n = 500$ inputs $x_1, x_2 \in [1, 7]$ generated using Latin hypercube sampling.
In these experiments we have two groups ($\mathbb{R}^2$ and normalized to $\mathbb{S}^1$), each containing a non-noisy and a noisy dataset, for a total of four datasets per function. For each function we sample a grid of 1000 points using Latin hypercube sampling. We split the sample into a training dataset of size $n = 500$ and a testing dataset of size $n^* = 500$. For all other parameters we adopt the defaults laid out in Table 5.1.
| Metric | Dataset | Noise | $D=2$ Lap | $D=2$ Gaus | $D=3$ Lap | $D=3$ Gaus | $D=10$ Lap | $D=10$ Gaus |
|---|---|---|---|---|---|---|---|---|
| RMSE | Ackley | No | 0.7881 | 0.6695 | 0.7868 | 0.6693 | 0.7790 | 0.6682 |
| | | Yes | 0.3296 | 0.3115 | 0.3370 | 0.3206 | 0.3816 | 0.3712 |
| | Franke | No | 0.0947 | 0.0957 | 0.0946 | 0.0968 | 0.0935 | 0.0965 |
| | | Yes | 0.0487 | 0.0532 | 0.0496 | 0.0541 | 0.0551 | 0.0595 |
| | Nonpoly | No | 2.6526 | 2.5916 | 2.6526 | 2.5916 | 2.6513 | 2.5905 |
| | | Yes | 0.8275 | 0.8411 | 0.8307 | 0.8448 | 0.8461 | 0.8640 |
| $\rho$ | Ackley | No | 0.9407 | 0.9558 | 0.9408 | 0.9559 | 0.9419 | 0.9560 |
| | | Yes | 0.9890 | 0.9900 | 0.9884 | 0.9894 | 0.9849 | 0.9857 |
| | Franke | No | 0.9330 | 0.9328 | 0.9332 | 0.9310 | 0.9341 | 0.9315 |
| | | Yes | 0.9791 | 0.9753 | 0.9781 | 0.9744 | 0.9724 | 0.9688 |
| | Nonpoly | No | 0.3760 | 0.4255 | 0.3760 | 0.4255 | 0.3761 | 0.4254 |
| | | Yes | 0.8000 | 0.7994 | 0.7985 | 0.7967 | 0.7883 | 0.7809 |

Table 5.6: Posterior mean matching results for the 2D input surface datasets in $\mathbb{R}^2$ with test size $n^* = 500$. GPs were trained using $n = 500$ inputs $x_1, x_2$ generated using Latin hypercube sampling over the respective function domains.

Figure 5.4: Posterior means of the non-noisy 2D input functions trained in $\mathbb{R}^2$ for NTK depth $D = 3$ with test size $n^* = 500$. GPs were trained using $n = 500$ inputs $x_1, x_2$ generated using Latin hypercube sampling over the respective function domains.

Figure 5.5: Posterior means of the non-noisy 2D input functions trained in $\mathbb{S}^1$ for NTK depth $D = 3$ with test size $n^* = 500$. GPs were trained using $n = 500$ inputs $x_1, x_2$ generated using Latin hypercube sampling over the respective function domains. For visualization $(x_1, x_2) \in \mathbb{R}^2$.
One recurring theme in these experiments is the white noise term $\sigma^2$ performing the role of smoothing out the predictions. As mentioned in the previous section, taking data noise into account allows for more leeway in matching using Algorithm 4.1. Figure 5.3 illustrates this via the posterior means of the Ackley function. The smoothed surface of the NTK is similar to that of a plane and is thus closely matched by the Laplace and Gaussian kernels. Much like the noisy data in $\mathbb{S}^1$, this effect is seen in all noisy $\mathbb{R}^2$ experiments, where the posterior mean correlation is always higher compared to the corresponding non-noisy experiments.
Looking at the $\mathbb{R}^2$ experiments in Table 5.6 and Figure 5.4, it is apparent from the correlations of the posterior means that the Laplace kernel is not able to match the NTK. The same holds for the Gaussian kernel. The nonpolynomial dataset seems to be especially hard to match ($\rho < 0.5$). The Laplace and Gaussian kernels both provide similar resulting correlations in all configurations in $\mathbb{R}^2$. Based on these experiments, the conclusion is that the elements of the Laplace kernel's and NTK's RKHSs (e.g. the posterior means) do not fully overlap in the general space $\mathbb{R}^d$.
As mentioned previously, this is in contrast to the space $\mathbb{S}^{d-1}$, where we observe nearly perfect correlation and low RMSE values during the posterior mean matching procedure for the Laplace kernel and NTK. Details of these experiments can be found in Table C.1 in Appendix C. Figure 5.5 shows that the Laplace kernel and NTK have effectively the same posterior means, whereas the Gaussian kernel's posterior falls short of matching. The two spaces presented in the figures have very different posterior means. Arguably, the space $\mathbb{R}^2$ provides results more in line with the ground truth than the space $\mathbb{S}^1$. This observation is further explored in the next section. Lastly, we would like to draw attention to the noisy Ackley function in $\mathbb{S}^1$, which was a difficult dataset to fit utilizing the NTK. We discuss the details of this in Figure C.2 in Appendix C.
5.4 Synthetic High Dimensional Cases
Knowing that we can reliably match the posterior means of the Laplace kernel and NTK in $\mathbb{S}^{d-1}$, we now turn our focus to the quality of predictions while utilizing these kernels. Specifically, we focus on the NTK's ability to generate meaningful posterior means. We test this using the multidimensional Friedman datasets [19, 10], which are used to benchmark and test linear regression models. The datasets and their features are summarized in Tables 5.7 and 5.8 respectively.
| Dataset | $f(x_1, x_2, \ldots, x_d)$ | $d$ |
|---|---|---|
| Friedman 1 | $10\sin(\pi x_1 x_2) + 20(x_3 - 0.5)^2 + 10x_4 + 5x_5$ | 10 |
| Friedman 2 | $\left(x_1^2 + \left(x_2 x_3 - (x_2 x_4)^{-1}\right)^2\right)^{1/2}$ | 4 |
| Friedman 3 | $\arctan\!\left(\frac{x_2 x_3 - (x_2 x_4)^{-1}}{x_1}\right)$ | 4 |

Table 5.7: Friedman datasets along with their corresponding input dimensions, where $(x_1, \ldots, x_d) \in \mathbb{R}^d$. The Friedman 1 dataset's output is independent of the last five input variables, hence the dataset has a total of 10 input dimensions.

| | Friedman 1 | Friedman 2 | Friedman 3 |
|---|---|---|---|
| Features | $x_1, \ldots, x_{10} \sim U[0, 1]$ | $x_1 \sim U(0, 100]$, $x_2 \sim U[40\pi, 560\pi]$, $x_3 \sim U[0, 1]$, $x_4 \sim U[1, 11]$ | same as Friedman 2 |
| Noise | $\epsilon \sim \mathcal{N}(0, 1.5)$ | $\epsilon \sim \mathcal{N}(0, 5)$ | $\epsilon \sim \mathcal{N}(0, 0.15)$ |

Table 5.8: Friedman data feature distributions. Friedman 2 and 3 share the same feature distributions. The noise term $\epsilon$ is applied only in the noisy data cases.
We maintain the same experiment setup as stated in Table 5.1; however, we utilize the coefficient of determination ($R^2$) as an additional metric in this section to assess regression performance:

$$R^2 = 1 - \frac{\text{residual sum of squares}}{\text{total sum of squares}} = 1 - \frac{\sum_i (y^*_i - f^*_i)^2}{\sum_i (y^*_i - \bar{y}^*)^2}, \tag{5.9}$$

where $y^*_i$ is the ground truth value from the testing set and $f^*_i$ is the predicted value. $R^2$ can attain a maximum value of 1, indicating perfect interpolation; a value of 0 indicates performance equivalent to predicting the mean of the ground truth $\bar{y}^*$, and negative values indicate performance worse than $\bar{y}^*$. Our goal in this section is to test the impact of rescaling inputs and/or outputs while using the NTK in both $\mathbb{R}^d$ and $\mathbb{S}^{d-1}$. Alongside this, we continue to perform posterior mean matching to the NTK for the Matérn kernels.
For each dataset we generate 200 samples, with $n = 100$ for the training set and $n^* = 100$ for the testing set. In the experiments where both normalization and rescaling are used on the input space, we first rescale our inputs and then normalize them to $\mathbb{S}^{d-1}$. It is important to rescale first because doing otherwise does not guarantee that our inputs will lie in $\mathbb{S}^{d-1}$. We also treat rescaling inputs (`X_rescale`) and outputs (`y_rescale`) as separate experiments. Output rescaling is only done during training, as outlined in Table 5.1.
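The rescale-then-normalize order is easy to get wrong, so here is a minimal sketch for Friedman #2 using scikit-learn's built-in generator; the helper name is ours.

```python
import numpy as np
from sklearn.datasets import make_friedman2

def friedman2_sphere(n=200, noise=0.0, seed=0):
    """Generate Friedman #2 data, rescale the inputs, then normalize to S^3.

    Rescale-then-normalize matches the order described above; normalizing
    first and rescaling after would move the points off the unit sphere.
    """
    X, y = make_friedman2(n_samples=n, noise=noise, random_state=seed)
    X = (X - X.mean(axis=0)) / X.var(axis=0)          # rescale (Eq. (5.3))
    X = X / np.linalg.norm(X, axis=1, keepdims=True)  # normalize (Eq. (5.2))
    return X, y
```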
| | | Non-noisy | | | | Noisy | | | |
|---|---|---|---|---|---|---|---|---|---|
| $D$ | Sp. | None | `X_rescale` | `y_rescale` | Both | None | `X_rescale` | `y_rescale` | Both |
| 2 | $\mathbb{S}^9$ | 0.6549 | 0.7792 | 0.6447 | 0.8054 | 0.5919 | 0.6993 | 0.5843 | 0.7154 |
| 2 | $\mathbb{R}^{10}$ | 0.7917 | 0.7857 | 0.7924 | 0.7954 | 0.6947 | 0.7000 | 0.7054 | 0.7086 |
| 3 | $\mathbb{S}^9$ | 0.6481 | 0.7672 | 0.6353 | 0.7880 | 0.5924 | 0.6913 | 0.5820 | 0.7054 |
| 3 | $\mathbb{R}^{10}$ | 0.7860 | 0.7697 | 0.7833 | 0.7788 | 0.6901 | 0.6898 | 0.7010 | 0.7007 |
| 10 | $\mathbb{S}^9$ | 0.5956 | 0.6302 | 0.5785 | 0.6565 | 0.5538 | 0.5781 | 0.5406 | 0.5992 |
| 10 | $\mathbb{R}^{10}$ | 0.7569 | 0.6301 | 0.7069 | 0.6595 | 0.6711 | 0.5727 | 0.6459 | 0.6048 |

Table 5.9: Friedman 1 $R^2$ results for NTK posterior means with training done using various data transformations. The None column is the baseline with no input or output rescaling, the `X_rescale` column is with input rescaling only, the `y_rescale` column is with output rescaling during training only, and the Both column applies both types of rescaling.
| | Non-noisy | | | | Noisy | | | |
|---|---|---|---|---|---|---|---|---|
| $D$ | None | `X_rescale` | `y_rescale` | Both | None | `X_rescale` | `y_rescale` | Both |
| 2 | 0.9974 | $\approx 1$ | 0.9981 | 0.9961 | 0.9985 | 0.9991 | 0.9995 | 0.9951 |
| 3 | 0.9983 | $\approx 1$ | 0.9991 | 0.9986 | 0.9988 | 0.9996 | 0.9992 | 0.9983 |
| 10 | 0.9985 | 0.9989 | 0.9992 | 0.9999 | 0.9984 | 0.9990 | 0.9993 | 0.9999 |

Table 5.10: Friedman 1 $\rho$ results for Laplace kernel and NTK posterior mean matching in $\mathbb{S}^9$ with training done using various data transformations.
Our experiment results show that rescaling the inputs has a significant positive impact on regression results for the NTK. This is best illustrated in Table 5.9, where we consistently see that input rescaling increases our $R^2$ value significantly. On the other hand, output rescaling does not have a significant impact on the quality of the regression, as indicated by the None and `y_rescale` columns. That said, output rescaling helps with hyperparameter optimization; without it we hit upper bounds during optimization for hyperparameters like the constant value or the NTK's bias. Lastly, applying both input and output rescaling yields the best regression results. The data transformations do not greatly impact Laplace kernel and NTK posterior mean matching, as seen in Table 5.10. We find similar results for the Friedman 2 and 3 datasets in Tables C.3 and C.5 in Appendix C.
Figure 5.6: Predictions for non-noisy Friedman 2 in $\mathbb{S}^3$ for $D = 2$ with `y_rescale`. Top: NTK and Laplace predictions overlaid. Middle: NTK and Gaussian predictions overlaid. Bottom: averaged prediction plots of all kernels.

Figure 5.7: Predictions for non-noisy Friedman 2 in $\mathbb{S}^3$ for $D = 2$ with `X_rescale` and `y_rescale`. The rows of the figure are laid out as in Figure 5.6.
We move on to visualizations of the different data transformations. In the remainder of the figures in this section, we include a bottom row that contains averaged prediction plots of all the kernels. These plots are created by using a trained GP to predict over a dataset where we uniformly sample one input dimension while the remaining inputs are set to the mean of their respective distributions (Table 5.8). As a result, we can visualize the sampled dimension as a 2D plot (e.g. $x_1$ vs $f^*$).
We begin by differentiating the posterior results for scaled and unscaled inputs. For this we turn to the non-noisy Friedman 2 dataset in $\mathbb{S}^3$ with outputs rescaled and NTK depth $D = 2$. Figures 5.6 and 5.7 showcase unscaled and scaled inputs respectively. In the unscaled input case in Figure 5.6, the GP generalizes poorly for all inputs aside from $x_2$ ($R^2 \approx -0.071$, Table C.3). On the other hand, the scaled inputs in Figure 5.7 do a much better job of producing a good regression fit over the test data ($R^2 \approx 0.855$, Table C.3). These results indicate that input rescaling improves regression predictions and does not impact our ability to match posterior means. We include results for the noisy Friedman 2 dataset with the same setup in Figures C.3 and C.4 in Appendix C.
Figure 5.8: Predictions for non-noisy Friedman 3 in $\mathbb{R}^4$ for $D = 2$ with `X_rescale` and `y_rescale`. The rows of the figure are laid out as in Figure 5.6.

Figure 5.9: Predictions for non-noisy Friedman 3 in $\mathbb{S}^3$ for $D = 2$ with `X_rescale` and `y_rescale`. The rows of the figure are laid out as in Figure 5.6.
We now backtrack a bit to address the high $R^2$ values in $\mathbb{R}^d$ as opposed to $\mathbb{S}^{d-1}$. This is almost always the case regardless of dataset or configuration, as seen in Tables 5.9 (Friedman 1), C.3 (Friedman 2), and C.5 (Friedman 3). We provide a visualization of this phenomenon for the non-noisy Friedman 3 dataset with both inputs and outputs rescaled and NTK depth $D = 2$. Figures 5.8 and 5.9 showcase the posterior means in $\mathbb{R}^4$ ($R^2 \approx 0.692$) and $\mathbb{S}^3$ ($R^2 \approx 0.033$) respectively. The low $R^2$ in $\mathbb{S}^3$ is attributed to the close values of the residual sum of squares and the total sum of squares in Equation (5.9). The differences between the predictions appear to be very minor between the spaces. Looking at the first row of both figures, it appears that the inputs in $\mathbb{R}^4$ have a slightly tighter fit over the test data; this can best be seen on the $x_3$ input in both rows. What is of interest is that the averaged prediction plots (row 3) of both figures appear to be very similar. That said, this is in line with the plots seen in Section 5.3, where the posteriors trained in $\mathbb{R}^d$ were more related to the ground truth than the posteriors trained in $\mathbb{S}^{d-1}$. We include Figures C.5 and C.6 in Appendix C, which showcase the same setup for noisy Friedman 3.
Chapter 6 Real World Experiments
In this chapter we use the lessons learned from Chapter 5 to evaluate the regression performance of the Laplace kernel, Gaussian kernel, and NTK individually on two real world datasets, in a setting where the models are allowed to train of their own accord. In Section 6.1 we outline the experiment setup and datasets, and in Section 6.2 we present our regression results.
6.1 Setup and Datasets
We set up our experiments with the findings of Chapter 5 in mind for the best possible GP fit. We summarize the experimental treatment in Table 6.1. The key differences are that we perform input rescaling (`X_rescale`) in all experiments and split our data 75-25 into a training set $(X, \mathbf{y})$ and a testing set $(X^*, \mathbf{y}^*)$ respectively. Our experiments test inputs in $\mathbb{R}^d$ or $\mathbb{S}^{d-1}$, resulting in only two configurations per dataset. Since we are working with real world data, we assume that noise is present and apply the white noise kernel to all three kernels. We use $R^2$ and RMSE as our regression metrics.
| Data | Description | Value |
|---|---|---|
| Normalization | Transforming inputs to $\mathbb{S}^{d-1}$ | |
| `X_rescale` | Input variable rescaling | true |

| Optimization | Description | Value |
|---|---|---|
| `n_restart` | Number of optimizer restarts | 9 |
| `y_rescale` | Response variable rescaling during training | true |
| $\alpha$ | Value to ensure positive definiteness | $10^{-5}$ |

| Kernel | Description | Value |
|---|---|---|
| $D$ | NTK depth | 2, 3, 10 |
| $\beta$ | NTK bias | Unfixed |
| $\nu$ | Matérn kernel smoothness | $\frac{1}{2}$, $\infty$ |
| $\ell_{\mathrm{Lap}}$, $\ell_{\mathrm{Gaus}}$ | Length-scale | Unfixed |
| $c_{\mathrm{NTK}}$, $c_{\mathrm{Lap}}$, $c_{\mathrm{Gaus}}$ | Constant value | Unfixed |
| $\sigma^2_{\mathrm{NTK}}$, $\sigma^2_{\mathrm{Lap}}$, $\sigma^2_{\mathrm{Gaus}}$ | Data noise variance | Unfixed |

Table 6.1: Summary of variables managed in all real world experiments.

Algorithm 6.1: Experiment procedure for fitting a GP to real world data. `optimize` refers to the GP optimization process of maximizing the marginal log likelihood.

Define: `n_restart`, `y_rescale`, $\alpha$, `noise`
Input: $k$, $X$, $\mathbf{y}$, $X^*$ (train and test data pre-processed the same way)
1. $k(\theta) \leftarrow c \cdot k(\theta) + \sigma^2\delta_{\mathbf{s},\mathbf{t}}$
2. $f^{\mathrm{opt}}, \theta_{\mathrm{opt}} \leftarrow \mathrm{optimize}\{f \sim \mathcal{GP}(0, k(\theta)) \mid X, \mathbf{y} : \texttt{n\_restart}, \texttt{y\_rescale}, \alpha\}$
3. $\bar{f}^* \leftarrow f^{\mathrm{opt}}(X^*)$

Return: $\bar{f}^*$
Furthermore, we simplify the training procedure as we are not interested in posterior mean matching. The three kernels considered follow the same GP fitting and predicting procedure outlined in Equation (2.13) from Section 2.2. We summarize the pseudocode in Algorithm 6.1.
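For the Matérn cases, Algorithm 6.1 maps almost line for line onto scikit-learn; the sketch below shows the Laplace ($\nu = \frac{1}{2}$) case with all hyperparameters left unfixed, and the NTK case would substitute the custom NTK kernel class. The function name is ours.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, Matern, WhiteKernel

def fit_and_predict(X, y, X_star, nu=0.5, n_restart=9, alpha=1e-5):
    """Sketch of Algorithm 6.1: build c * k + sigma^2 * delta, optimize all
    hyperparameters by maximizing the marginal log likelihood, and return
    the posterior mean on the test inputs."""
    kernel = ConstantKernel() * Matern(nu=nu) + WhiteKernel()
    gp = GaussianProcessRegressor(kernel=kernel, alpha=alpha,
                                  n_restarts_optimizer=n_restart,
                                  normalize_y=True)  # y_rescale
    gp.fit(X, y)                                     # optimize
    return gp.predict(X_star)
```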
We utilize two real world datasets: the concrete compressive strength dataset [51] and the forest fire dataset [14]. We provide a brief summary of the data in Table 6.2. The Concrete dataset contains 1030 observations and 8 features: 7 features describing the concrete's ingredients (kg/m³) and 1 feature describing its age (days). The Fire dataset contains 517 observations and a total of 12 features. We drop all features except the 4 describing the weather conditions (temperature (°C), relative humidity (%), wind (km/h), and rain (mm/m²)). This is because Cortez and Morais [14] found that these features provided the best regression performance given their metrics, and we found that additional features would result in kernels that zero out, yielding meaningless posteriors. It should be noted that we also take the advice of the authors of the Fire dataset and transform the response variable "area" using a log transform, since it is heavily right-skewed: $y_{\mathrm{trans}} := \log(y + 1)$.
| Data Name | Task | $d$ | $n$ |
|---|---|---|---|
| Concrete | Determine concrete compressive strength | 8 | 1030 |
| Fire | Determine area of forest burned during fire | 4 | 517 |

Table 6.2: Summary of real world data.

6.2 Results
We find through our experiments that the NTK attains $R^2$ values comparable to the Laplace and Gaussian kernels on the Concrete dataset, as seen in Table 6.3. In fact, we attain values of $R^2 \approx 0.9$, indicating a very close fit to the ground truth. There is only a minute difference between experiments done in $\mathbb{R}^8$ versus $\mathbb{S}^7$. We illustrate the concrete compression predictions in $\mathbb{S}^7$ in Figure 6.1. We note that the Laplace and Gaussian GP predictions seem to align closely with the NTK despite being trained separately.
| Data | Metric | Space | NTK $D=2$ | NTK $D=3$ | NTK $D=10$ | Lap | Gaus |
|---|---|---|---|---|---|---|---|
| Concrete | RMSE | $\mathbb{R}^8$ | 4.9562 | 5.0614 | 5.6477 | 5.3210 | 5.3058 |
| | | $\mathbb{S}^7$ | 5.3571 | 5.2906 | 5.5710 | 5.4368 | 5.1869 |
| | $R^2$ | $\mathbb{R}^8$ | 0.9041 | 0.9000 | 0.8754 | 0.8894 | 0.8901 |
| | | $\mathbb{S}^7$ | 0.8879 | 0.8907 | 0.8788 | 0.8846 | 0.8949 |
| Fire | RMSE | $\mathbb{R}^4$ | 78.302 | 78.278 | 78.262 | 78.415 | 78.449 |
| | | $\mathbb{S}^3$ | 78.302 | 78.277 | 78.262 | 78.456 | 78.462 |
| | $R^2$ | $\mathbb{R}^4$ | $-10594$ | $-8415$ | $-4154$ | $-50077$ | $-116771$ |
| | | $\mathbb{S}^3$ | $-11031$ | $-8539$ | $-4154$ | $-499246$ | $-1254525$ |

Table 6.3: Results for real world experiments in $\mathbb{R}^d$ and $\mathbb{S}^{d-1}$. Metrics for the Fire dataset are calculated by first inverting the log-transformation.
Figure 6.1: Concrete compression strength predictions over $\mathbb{S}^7$ with NTK depth $D = 10$ overlaid. Inputs are shown in $\mathbb{R}^8$ for visualization. The first and last two rows correspond to the input features, all in kg/m³ except for Age.
On the other hand, the Fire dataset fails to produce predictions with $R^2$ values better than the baseline mean. In fact, the $R^2$ values are significantly worse than the baseline, though they do improve with higher NTK depth, as seen in Table 6.3. In Figure 6.2 we can see that the NTK's area predictions attempt to approximate the ground truth but do not capture the general landscape of the data. Upon further investigation, this may be due to the kernel hyperparameter $\sigma^2$ overestimating the noise. We include Figure C.7 of the predictions without the white noise kernel applied in Appendix C. It is interesting to note that the Laplace and Gaussian predictions become constant. This is because the constant value completely zeros out for both kernels while the length-scale becomes large. Lastly, we turn to the RMSE values for the Fire dataset. According to Cortez and Morais [14], their models attained RMSE values of approximately 64.7, whereas we get values close to 78, indicating a worse fit. Overall, this dataset presented a significantly more difficult regression task.
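The "worse than the baseline mean" statement follows directly from the definition of $R^2$: it is negative exactly when the model's squared error exceeds that of predicting the constant mean. A small illustration with made-up numbers (not the Fire predictions):

```python
import numpy as np

def r2_score(y_true, y_pred):
    # R^2 = 1 - SS_res / SS_tot; negative when worse than predicting the mean
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

y = np.array([0.0, 0.5, 1.0, 8.0, 90.0])        # right-skewed, like burned area
mean_baseline = np.full_like(y, y.mean())        # predict the mean everywhere
bad_pred = y.mean() + 50.0 * np.ones_like(y)     # badly offset predictions

assert r2_score(y, mean_baseline) == 0.0         # baseline mean scores exactly 0
assert r2_score(y, bad_pred) < 0.0               # worse than the mean -> negative
```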
Figure 6.2: Fire area predictions over $\mathbb{S}^3$ with NTK depth $D = 10$ overlayed. Inputs are shown in $\mathbb{R}^4$ and the output is log-transformed for visualization.

Chapter 7 Conclusion
In this work, we explored the empirical connections between the Laplace kernel and NTK, most notably the direct relationship between the NTK's parameterization and how it can be reconciled with the parameterization of the Laplace kernel. We showed evidence for the importance of the bias parameter in equating the NTK to the Laplace kernel. Further, we developed a way to reliably match the posterior means (elements of an RKHS) generated by the Laplace kernel and NTK GPs trained on various datasets. We further solidified the evidence that $\mathbb{R}^d$ is not a space in which the two kernels can be equal. Finally, we showcased that although the Gaussian kernel shares some similarities with the Laplace kernel, it lacks the flexibility to match the other two kernels.
Though consequential, the NTK is a limited tool for machine learning tasks. As layers increase, it requires a high computational overhead and runs into numerical issues during hyperparameter optimization due to its recursive nature. With the connections to the Laplace kernel established by Geifman et al. [21] and Chen and Xu [11], the NTK can be more easily applied in practice, further motivating additional work in kernel methods.
Future work to consider in this topic would be to fully develop the equality between two RKHSs. As discussed in Section 4.3, parameters of two kernels limit the functions a shared RKHS can include. As such, it would be interesting to explore how kernel parameters affect the resulting RKHS. The Laplace kernel and NTK are especially of interest since they have entirely different parameterizations yet still share the same space. In addition, it would be of interest to analyze the discretized nature of the NTK to see if it is possible to develop a direct parameter relation to the Laplace kernel other than pure optimization. Lastly, it is mathematically interesting to do a full treatment of the asymptotics of the NTKโs bias parameter (Appendix B).
At the most essential level, RKHSs provide a robust theoretical framework for practical data modeling. For a full treatment of this topic we highly recommend the text by Berlinet and Thomas-Agnan [8]. For machine learning with Gaussian processes Williams and Rasmussen [47] give an in-depth overview of the topic. Lastly, Goodfellow et al. [24] serves as an excellent introduction to modern deep learning.
References

Abramowitz and Stegun [1964] Milton Abramowitz and Irene A. Stegun. Handbook of mathematical functions with formulas, graphs, and mathematical tables, volume 55. US Government Printing Office, 1964.
Ackley [2012] David Ackley. A connectionist machine for genetic hillclimbing, volume 28. Springer Science & Business Media, 2012.
Alemohammad et al. [2021] Sina Alemohammad, Zichao Wang, Randall Balestriero, and Richard Baraniuk. The recurrent neural tangent kernel. In International Conference on Learning Representations, 2021.
Aronszajn [1950] Nachman Aronszajn. Theory of reproducing kernels. Transactions of the American Mathematical Society, 68(3):337–404, 1950.
Arora et al. [2019] Sanjeev Arora, Simon S. Du, Wei Hu, Zhiyuan Li, Russ R. Salakhutdinov, and Ruosong Wang. On exact computation with an infinitely wide neural net. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
Axler [2020] Sheldon Axler. Measure, integration & real analysis. Springer Nature, 2020.
Belkin et al. [2018] Mikhail Belkin, Siyuan Ma, and Soumik Mandal. To understand deep learning we need to understand kernel learning. In International Conference on Machine Learning, pages 541–549. PMLR, 2018.
Berlinet and Thomas-Agnan [2004] Alain Berlinet and Christine Thomas-Agnan. Reproducing kernel Hilbert spaces in probability and statistics. Springer Science & Business Media, 2004.
Bietti and Mairal [2019] Alberto Bietti and Julien Mairal. On the inductive bias of neural tangent kernels. Advances in Neural Information Processing Systems, 32, 2019.
Breiman [1996] Leo Breiman. Bagging predictors. Machine Learning, 24(2):123–140, 1996.
Chen and Xu [2021] Lin Chen and Sheng Xu. Deep neural tangent kernel and Laplace kernel have the same RKHS. In International Conference on Learning Representations, 2021.
Chen et al. [2020] Zixiang Chen, Yuan Cao, Quanquan Gu, and Tong Zhang. A generalized neural tangent kernel analysis for two-layer neural networks. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, pages 13363–13373. Curran Associates, Inc., 2020.
Cho and Saul [2009] Youngmin Cho and Lawrence Saul. Kernel methods for deep learning. Advances in Neural Information Processing Systems, 22, 2009.
Cortez and Morais [2007] Paulo Cortez and Aníbal de Jesus Raimundo Morais. A data mining approach to predict forest fires using meteorological data. New Trends in Artificial Intelligence, Proceedings of the 13th EPIA, pages 512–523, 2007.
Cybenko [1989] George Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems, 2(4):303–314, 1989.
Du et al. [2019] Simon S. Du, Kangcheng Hou, Russ R. Salakhutdinov, Barnabas Poczos, Ruosong Wang, and Keyulu Xu. Graph neural tangent kernel: Fusing graph neural networks with graph kernels. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
Dudley [2018] Richard M. Dudley. Real analysis and probability. CRC Press, 2018.
Franke [1979] Richard Franke. A critical comparison of some methods for interpolation of scattered data. Technical report, Naval Postgraduate School, Monterey, CA, 1979.
Friedman [1991] Jerome H. Friedman. Multivariate adaptive regression splines. The Annals of Statistics, 19(1):1–67, 1991.
Funahashi [1989] Ken-Ichi Funahashi. On the approximate realization of continuous mappings by neural networks. Neural Networks, 2(3):183–192, 1989.
Geifman et al. [2020] Amnon Geifman, Abhay Yadav, Yoni Kasten, Meirav Galun, David Jacobs, and Basri Ronen. On the similarity between the Laplace and neural tangent kernels. Advances in Neural Information Processing Systems, 33:1451–1461, 2020.
Ghojogh et al. [2021] Benyamin Ghojogh, Ali Ghodsi, Fakhri Karray, and Mark Crowley. Reproducing kernel Hilbert space, Mercer's theorem, eigenfunctions, Nyström method, and use of kernels in machine learning: Tutorial and survey. arXiv preprint arXiv:2106.08443, 2021.
Girosi and Poggio [1990] Federico Girosi and Tomaso Poggio. Networks and the best approximation property. Biological Cybernetics, 63(3):169–176, 1990.
Goodfellow et al. [2016] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016. http://www.deeplearningbook.org.
Guan [2020] Yuying Bella Guan. Introduction to Gaussian processes for regression. Master's thesis, California State Polytechnic University, Pomona, May 2020.
Hastie and Tibshirani [2017] Trevor J. Hastie and Robert J. Tibshirani. Generalized additive models. Routledge, 2017.
Iman et al. [1981] Ronald L. Iman, Jon C. Helton, and James E. Campbell. An approach to sensitivity analysis of computer models: Part I – Introduction, input variable selection and preliminary variable assessment. Journal of Quality Technology, 13(3):174–183, 1981.
Jacot et al. [2018] Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural tangent kernel: Convergence and generalization in neural networks. Advances in Neural Information Processing Systems, 31, 2018.
Kanagawa et al. [2018] Motonobu Kanagawa, Philipp Hennig, Dino Sejdinovic, and Bharath K. Sriperumbudur. Gaussian processes and kernel methods: A review on connections and equivalences. arXiv preprint arXiv:1807.02582, 2018.
Lee et al. [2018] Jaehoon Lee, Jascha Sohl-Dickstein, Jeffrey Pennington, Roman Novak, Sam Schoenholz, and Yasaman Bahri. Deep neural networks as Gaussian processes. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=B1EA-M-0Z.
Lee et al. [2019] Jaehoon Lee, Lechao Xiao, Samuel Schoenholz, Yasaman Bahri, Roman Novak, Jascha Sohl-Dickstein, and Jeffrey Pennington. Wide neural networks of any depth evolve as linear models under gradient descent. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
Medina [2013] Gonzalo Medina. Diagram of an artificial neural network, 2013. https://tex.stackexchange.com/questions/132444/diagram-of-an-artificial-neural-network.
Mercer [1909] James Mercer. Functions of positive and negative type, and their connection with the theory of integral equations. Philosophical Transactions of the Royal Society of London. Series A, Containing Papers of a Mathematical or Physical Character, 209:415–446, 1909. ISSN 02643952. URL http://www.jstor.org/stable/91043.
Minh et al. [2006] Ha Quang Minh, Partha Niyogi, and Yuan Yao. Mercer's theorem, feature maps, and smoothing. In International Conference on Computational Learning Theory, pages 154–168. Springer, 2006.
Neal [1996] Radford M. Neal. Priors for Infinite Networks, volume 118, chapter 2, pages 29–53. Springer Science New York, 1996.
Pedregosa et al. [2011] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.
Rudin [1991] Walter Rudin. Functional Analysis. International Series in Pure and Applied Mathematics. McGraw-Hill, 1991. ISBN 9780070542365. URL https://books.google.com/books?id=Sh_vAAAAMAAJ.
Rumelhart et al. [1986] David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. Learning representations by back-propagating errors. Nature, 323(6088):533–536, 1986.
Schölkopf et al. [2001] Bernhard Schölkopf, Ralf Herbrich, and Alex J. Smola. A generalized representer theorem. In International Conference on Computational Learning Theory, pages 416–426. Springer, 2001.
Steinwart and Christmann [2008] Ingo Steinwart and Andreas Christmann. Support vector machines. Springer Science & Business Media, 2008.
Stinchombe [1989] Maxwell Stinchombe. Universal approximation using feed-forward networks with nonsigmoid hidden layer activation functions. Proc. IJCNN, Washington, DC, 1989, pages 161–166, 1989.
Uhlenbeck and Ornstein [1930] George E. Uhlenbeck and Leonard S. Ornstein. On the theory of the Brownian motion. Physical Review, 36(5):823, 1930.
Unser [2019] Michael Unser. A representer theorem for deep neural networks. Journal of Machine Learning Research, 20(110):1–30, 2019.
Walder [2008] Christian J. Walder. Efficient and Invariant Regularisation with Application to Computer Graphics. PhD thesis, University of Queensland, January 2008.
Welch et al. [1992] William J. Welch, Robert J. Buck, Jerome Sacks, Henry P. Wynn, Toby J. Mitchell, and Max D. Morris. Screening, predicting, and computer experiments. Technometrics, 34(1):15–25, 1992.
Williams [1996] Christopher Williams. Computing with infinite networks. Advances in Neural Information Processing Systems, 9, 1996.
Williams and Rasmussen [2006] Christopher K. I. Williams and Carl Edward Rasmussen. Gaussian processes for machine learning. MIT Press, Cambridge, MA, 2006.
Wolpert [1996] David H. Wolpert. The lack of a priori distinctions between learning algorithms. Neural Computation, 8(7):1341–1390, 1996.
Wolpert and Macready [1997] David H. Wolpert and William G. Macready. No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation, 1(1):67–82, 1997.
Yang [2019] Greg Yang. Wide feedforward or recurrent neural networks of any architecture are Gaussian processes. Advances in Neural Information Processing Systems, 32, 2019.
Yeh [1998] I-C Yeh. Modeling of strength of high-performance concrete using artificial neural networks. Cement and Concrete Research, 28(12):1797–1808, 1998.

Appendix A Neural Tangent Kernel Gradient with respect to β
We derive the gradient used in optimizing the normalized recursive NTK's bias parameter $\beta$ during GP training. We begin with the following proposition:

Proposition 2.
Let $\beta \geq 0$ and define the normalized recursive NTK using Definition 4.2.2 and Equation (4.13):

$$\ddot{K}_{NTK}(L+1,\beta) = \frac{1}{(L+1)(\beta^2+1)}\,K_{NTK}(L+1,\beta) = \frac{1}{(L+1)(\beta^2+1)}\,\Theta(L) \tag{A.1}$$

Then the partial derivative with respect to $\beta$ of $\ddot{K}_{NTK}(L+1,\beta)$ is defined as

$$\frac{\partial}{\partial\beta}\,\ddot{K}_{NTK}(L+1,\beta) = \frac{1}{(L+1)(\beta^2+1)}\left(\frac{\partial\Theta(L)}{\partial\beta} - \frac{2\beta}{\beta^2+1}\,\Theta(L)\right) \tag{A.2}$$

where

$$\frac{\partial\Theta(L)}{\partial\beta} = 2\beta\left(\prod_{i=1}^{L}\dot\Sigma(i)\right)\left(1 + \sum_{j=1}^{L}\left(\prod_{i=1}^{j}\dot\Sigma(i)\right)^{-1}\right). \tag{A.3}$$

Proving Equation A.2 requires the product rule $(u \cdot v)' = u' \cdot v + u \cdot v'$, where we let $u = \frac{1}{(L+1)(\beta^2+1)}$ and $v = \Theta(L)$ from Equation A.1. We start with the substantially less involved task of differentiating $u$, followed by $v$.
Proof.
We begin with $\beta \geq 0$ and Equation A.1, and set up the partial derivative with respect to $\beta$ as follows:

$$\frac{\partial}{\partial\beta}\,\ddot{K}_{NTK}(L+1,\beta) = \frac{\partial}{\partial\beta}\left(\frac{1}{(L+1)(\beta^2+1)}\,\Theta(L)\right)$$

We then let

$$u = \frac{1}{(L+1)(\beta^2+1)}, \qquad v(L) = \Theta(L)$$

so that we can utilize the product rule $(u \cdot v(L))' = u' \cdot v(L) + u \cdot v(L)'$ in order to solve. We begin with finding the derivative of $u$:

$$\begin{aligned}
\frac{du}{d\beta} &= \frac{d}{d\beta}\left(\frac{1}{(L+1)(\beta^2+1)}\right) \\
&= \frac{d}{d\beta}\,\big((L+1)(\beta^2+1)\big)^{-1} \\
&= -\big((L+1)(\beta^2+1)\big)^{-2}\,\big(2\beta(L+1)\big) \\
&= \frac{-2\beta(L+1)}{(L+1)^2(\beta^2+1)^2} \\
\frac{du}{d\beta} &= \frac{-2\beta}{(L+1)(\beta^2+1)^2}
\end{aligned}$$

This completes the derivative of $u$. We now continue on to the partial derivative with respect to $\beta$ for $v$, which is defined recursively. As such, we begin with the base case and build our way towards the general case using the recursive formula from Definition 4.2.2:
$$\begin{aligned}
\frac{\partial v(0)}{\partial\beta} = \frac{\partial\Theta(0)}{\partial\beta}
&= \frac{\partial}{\partial\beta}\big(\Sigma(0) + \beta^2\big) = 2\beta \\[6pt]
\frac{\partial v(1)}{\partial\beta} = \frac{\partial\Theta(1)}{\partial\beta}
&= \frac{\partial}{\partial\beta}\big(\Theta(0)\,\dot\Sigma(1) + \Sigma(1) + \beta^2\big) \\
&= \frac{\partial\Theta(0)}{\partial\beta}\,\dot\Sigma(1) + \frac{\partial}{\partial\beta}\Sigma(1) + \frac{\partial}{\partial\beta}\beta^2 \\
&= 2\beta\,\dot\Sigma(1) + 2\beta \\
&= 2\beta\big(\dot\Sigma(1) + 1\big) \\[6pt]
\frac{\partial v(2)}{\partial\beta} = \frac{\partial\Theta(2)}{\partial\beta}
&= \frac{\partial}{\partial\beta}\big(\Theta(1)\,\dot\Sigma(2) + \Sigma(2) + \beta^2\big) \\
&= \frac{\partial\Theta(1)}{\partial\beta}\,\dot\Sigma(2) + \frac{\partial}{\partial\beta}\Sigma(2) + \frac{\partial}{\partial\beta}\beta^2 \\
&= 2\beta\big(\dot\Sigma(1) + 1\big)\dot\Sigma(2) + 2\beta \\
&= 2\beta\big(\dot\Sigma(1)\dot\Sigma(2) + \dot\Sigma(2) + 1\big) \\[6pt]
\frac{\partial v(3)}{\partial\beta} = \frac{\partial\Theta(3)}{\partial\beta}
&= \frac{\partial}{\partial\beta}\big(\Theta(2)\,\dot\Sigma(3) + \Sigma(3) + \beta^2\big) \\
&= \frac{\partial\Theta(2)}{\partial\beta}\,\dot\Sigma(3) + \frac{\partial}{\partial\beta}\Sigma(3) + \frac{\partial}{\partial\beta}\beta^2 \\
&= 2\beta\big(\dot\Sigma(1)\dot\Sigma(2) + \dot\Sigma(2) + 1\big)\dot\Sigma(3) + 2\beta \\
&= 2\beta\big(\dot\Sigma(1)\dot\Sigma(2)\dot\Sigma(3) + \dot\Sigma(2)\dot\Sigma(3) + \dot\Sigma(3) + 1\big) \\
&= 2\beta\left(\prod_{i=1}^{3}\dot\Sigma(i) + \frac{\prod_{i=1}^{3}\dot\Sigma(i)}{\dot\Sigma(1)} + \frac{\prod_{i=1}^{3}\dot\Sigma(i)}{\dot\Sigma(1)\dot\Sigma(2)} + \frac{\prod_{i=1}^{3}\dot\Sigma(i)}{\prod_{i=1}^{3}\dot\Sigma(i)}\right) \\
&= 2\beta\left(\prod_{i=1}^{3}\dot\Sigma(i)\right)\left(1 + \frac{1}{\dot\Sigma(1)} + \frac{1}{\dot\Sigma(1)\dot\Sigma(2)} + \frac{1}{\prod_{i=1}^{3}\dot\Sigma(i)}\right) \\
&= 2\beta\left(\prod_{i=1}^{3}\dot\Sigma(i)\right)\left(1 + \sum_{j=1}^{3}\left(\prod_{i=1}^{j}\dot\Sigma(i)\right)^{-1}\right) \\
&\;\;\vdots \\
\frac{\partial v(L)}{\partial\beta} = \frac{\partial\Theta(L)}{\partial\beta}
&= 2\beta\left(\prod_{i=1}^{L}\dot\Sigma(i)\right)\left(1 + \sum_{j=1}^{L}\left(\prod_{i=1}^{j}\dot\Sigma(i)\right)^{-1}\right)
\end{aligned}$$
Thus, the partial derivative of $v$ with respect to $\beta$ is established. This leaves us to establish the full partial derivative of $\ddot{K}_{NTK}(L+1,\beta)$ via the product rule:

$$\begin{aligned}
\frac{\partial}{\partial\beta}\,\ddot{K}_{NTK}(L+1,\beta)
&= \frac{\partial}{\partial\beta}\big(u \cdot v(L)\big) \\
&= \left(\frac{du}{d\beta}\right)v(L) + u\left(\frac{\partial v(L)}{\partial\beta}\right) \\
&= \left(\frac{-2\beta}{(L+1)(\beta^2+1)^2}\right)\Theta(L) + \left(\frac{1}{(L+1)(\beta^2+1)}\right)\frac{\partial\Theta(L)}{\partial\beta} \\
&= \frac{1}{(L+1)(\beta^2+1)}\left(\frac{-2\beta\,\Theta(L)}{\beta^2+1} + \frac{\partial\Theta(L)}{\partial\beta}\right) \\
&= \frac{1}{(L+1)(\beta^2+1)}\left(\frac{\partial\Theta(L)}{\partial\beta} - \frac{2\beta\,\Theta(L)}{\beta^2+1}\right)
\end{aligned}$$

This completes the full derivation of the partial derivative with respect to $\beta$ for the normalized recursive NTK. ∎
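Equations (A.2) and (A.3) can be checked numerically with finite differences. In the sketch below the per-layer quantities $\Sigma(h)$ and $\dot\Sigma(h)$ are arbitrary placeholder constants, treated as independent of $\beta$ exactly as in the proof above; they are not the true input-dependent NTK covariances:

```python
import numpy as np

L = 4
Sigma0 = 0.7                                  # placeholder Sigma(0)
Sigma = np.array([0.9, 0.8, 0.6, 0.5])        # placeholder Sigma(1)..Sigma(L)
Sigma_dot = np.array([0.6, 0.7, 0.8, 0.9])    # placeholder Sigma_dot(1)..Sigma_dot(L)

def K_norm(beta):
    """Normalized recursive NTK: Theta(L) / ((L+1)(beta^2+1)), Equation (A.1)."""
    theta = Sigma0 + beta**2                  # Theta(0)
    for h in range(L):                        # Theta(h) = Theta(h-1) Sigma_dot(h) + Sigma(h) + beta^2
        theta = theta * Sigma_dot[h] + Sigma[h] + beta**2
    return theta / ((L + 1) * (beta**2 + 1))

def dK_closed(beta):
    """Closed-form gradient from Equations (A.2) and (A.3)."""
    theta = Sigma0 + beta**2
    for h in range(L):
        theta = theta * Sigma_dot[h] + Sigma[h] + beta**2
    prods = np.cumprod(Sigma_dot)             # prod_{i<=j} Sigma_dot(i) for j = 1..L
    dtheta = 2 * beta * prods[-1] * (1 + np.sum(1.0 / prods))          # (A.3)
    return (dtheta - 2 * beta * theta / (beta**2 + 1)) / ((L + 1) * (beta**2 + 1))

beta, eps = 1.3, 1e-6
fd = (K_norm(beta + eps) - K_norm(beta - eps)) / (2 * eps)  # central difference
assert abs(fd - dK_closed(beta)) < 1e-6
```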
Appendix B Asymptotics of β

In this appendix, we derive the limit of the NTK's parameter $\beta$ when depth $D = 1$.

Proposition 3.
Let $\ddot{K}_{NTK}(1,\beta)$ be the shallow normalized neural tangent kernel as defined in Equation (4.13). Then the limit as $\beta \to \infty$ is as follows:

$$\lim_{\beta\to\infty}\ddot{K}_{NTK}(1,\beta) = 2 - \frac{\arccos(\Xi(0))}{\pi} \tag{B.1}$$

Proof.
We begin with the normalized NTK from Equation (4.13) with 1 hidden layer:

$$\ddot{K}_{NTK}(1,\beta) = \frac{1}{\beta^2+1}\,K_{NTK}(1,\beta) = \frac{1}{\beta^2+1}\,\Theta(1) \tag{B.2}$$

Then, by substituting, we get

$$\begin{aligned}
\frac{1}{\beta^2+1}\,\Theta(1)
&= \frac{1}{\beta^2+1}\big(\Theta(0)\,\dot\Sigma(1) + \Sigma(1) + \beta^2\big) \\
&= \frac{1}{\beta^2+1}\Big(\big(\Sigma(0) + \beta^2\big)\,\kappa_0(\Xi(0)) + \kappa_1(\Xi(0))\,\sqrt{x^\top x\; z^\top z} + \beta^2\Big) \\
&= \frac{\big(\Sigma(0) + \beta^2\big)\,\kappa_0(\Xi(0))}{\beta^2+1} + \frac{\kappa_1(\Xi(0))\,\sqrt{x^\top x\; z^\top z}}{\beta^2+1} + \frac{\beta^2}{\beta^2+1} \\
&= \frac{\Sigma(0)\,\kappa_0(\Xi(0))}{\beta^2+1} + \frac{\beta^2\,\kappa_0(\Xi(0))}{\beta^2+1} + \frac{\kappa_1(\Xi(0))\,\sqrt{x^\top x\; z^\top z}}{\beta^2+1} + \frac{\beta^2}{\beta^2+1},
\end{aligned}$$

then by taking the limit with respect to $\beta$ towards infinity:

$$\begin{aligned}
\lim_{\beta\to\infty}\left(\frac{\Sigma(0)\,\kappa_0(\Xi(0))}{\beta^2+1} + \frac{\beta^2\,\kappa_0(\Xi(0))}{\beta^2+1} + \frac{\kappa_1(\Xi(0))\,\sqrt{x^\top x\; z^\top z}}{\beta^2+1} + \frac{\beta^2}{\beta^2+1}\right)
&= 0 + \kappa_0(\Xi(0)) + 0 + 1 \\
&= \frac{1}{\pi}\big(\pi - \arccos(\Xi(0))\big) + 1 \\
&= 1 - \frac{\arccos(\Xi(0))}{\pi} + 1 \\
&= 2 - \frac{\arccos(\Xi(0))}{\pi}
\end{aligned}$$

thus completing the formulation. ∎
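The limit in Equation (B.1) can be sanity-checked numerically using $\kappa_0(u) = (\pi - \arccos(u))/\pi$. The values of $\Xi(0)$, $\Sigma(0)$, and the $\kappa_1$ term below are placeholders; the latter two are divided by $\beta^2 + 1$ and so vanish in the limit regardless of their exact form:

```python
import numpy as np

xi0 = 0.4            # placeholder value for Xi(0) in [-1, 1]
sigma0 = xi0         # placeholder Sigma(0)
kappa1_term = 0.8    # placeholder for kappa_1(Xi(0)) * sqrt(x^T x z^T z)

def K_shallow(beta):
    # Theta(1) / (beta^2 + 1), as in Equation (B.2)
    kappa0 = (np.pi - np.arccos(xi0)) / np.pi
    theta1 = (sigma0 + beta**2) * kappa0 + kappa1_term + beta**2
    return theta1 / (beta**2 + 1)

# Large beta should approach the closed-form limit 2 - arccos(Xi(0)) / pi
limit = 2 - np.arccos(xi0) / np.pi
assert abs(K_shallow(1e6) - limit) < 1e-6
```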
Appendix C Additional Figures and Tables

Figure C.1: Posterior means generated by fitting to data in $\mathbb{R}^2$ and predicted on out-of-sample data in $\mathbb{R}^2$. All kernels seem to be approximating the loop in the curve. The Laplace and Gaussian kernels provide almost the same predictions. In addition, the kernels seem to do a better job than in $\mathbb{S}^1$ of approximating the underlying parametric curve. Top: NTK with Laplace kernel overlayed. Bottom: NTK with Gaussian kernel overlayed.

Figure C.2: NTK posterior mean of the noisy Ackley function in $\mathbb{S}^1$ for NTK depth $D = 2$. The GP was trained using $n = 500$ inputs $x_1, x_2 \in [1, 7]$ generated using Latin hypercube sampling. The $y$ values are all concentrated around $-12.67$, with a difference between the minimum and maximum of approximately $10^{-8}$, indicating that the posterior mean has zeroed out. This is due to the kernel's constant value optimizing close to zero. Attempting to manually fit the GP while controlling the constant value and white noise provides similar results.
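The sampling setup described in Figure C.2 can be sketched as follows. The hand-rolled Latin hypercube construction and the sign convention on the Ackley response are assumptions for illustration, not necessarily the exact choices made in the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 2

# Latin hypercube: one point per stratum along each axis, shuffled per column
strata = rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T   # (n, d)
u = (strata + rng.random((n, d))) / n                            # in [0, 1)^d
X = 1.0 + 6.0 * u                                                # scale to [1, 7]^2

def ackley(x):
    # Standard 2D Ackley test function
    a, b, c = 20.0, 0.2, 2 * np.pi
    s1 = np.sqrt(np.mean(x**2, axis=1))
    s2 = np.mean(np.cos(c * x), axis=1)
    return -a * np.exp(-b * s1) - np.exp(s2) + a + np.e

y = -ackley(X)  # negated response (assumed sign convention)
assert X.shape == (500, 2) and X.min() >= 1.0 and X.max() <= 7.0
```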
| Metric | Dataset | Noise | $D=2$ Lap | $D=2$ Gaus | $D=3$ Lap | $D=3$ Gaus | $D=10$ Lap | $D=10$ Gaus |
|---|---|---|---|---|---|---|---|---|
| RMSE | Ackley | No | ≈0 | 1.7505 | 0.0001 | 1.7505 | 0.0001 | 1.7504 |
| | | Yes | ≈0 | ≈0 | ≈0 | ≈0 | ≈0 | ≈0 |
| | Franke | No | 0.0101 | 0.1413 | 0.0030 | 0.1412 | 0.0001 | 0.1408 |
| | | Yes | 0.0046 | 0.0166 | 0.0028 | 0.0151 | 0.0031 | 0.0155 |
| | Nonpoly | No | ≈0 | 1.5981 | ≈0 | 1.5975 | 0.0001 | 1.5955 |
| | | Yes | 0.0216 | 0.1924 | 0.0141 | 0.1866 | 0.0137 | 0.1685 |
| $\rho$ | Ackley | No | ≈1 | 0.3526 | ≈1 | 0.3526 | ≈1 | 0.3526 |
| | | Yes | 0.9926 | 0.9028 | 0.9968 | 0.8939 | 0.9985 | 0.8738 |
| | Franke | No | 0.9990 | 0.7782 | 0.9999 | 0.7757 | ≈1 | 0.7766 |
| | | Yes | 0.9997 | 0.9943 | 0.9999 | 0.9952 | ≈1 | 0.9951 |
| | Nonpoly | No | ≈1 | 0.7492 | ≈1 | 0.7495 | ≈1 | 0.7502 |
| | | Yes | 0.9999 | 0.9871 | 0.9999 | 0.9878 | ≈1 | 0.9911 |

Table C.1: Posterior mean matching results for the 2D input surface datasets in $\mathbb{S}^1$. As noted in Section 5.3, the noisy Ackley function was difficult to properly fit, resulting in a constant and meaningless posterior; thus the zero RMSE should be looked at skeptically.
| $D$ | Sp. | None | $X$ | $\mathbf{y}$ | Both | None | $X$ | $\mathbf{y}$ | Both |
|---|---|---|---|---|---|---|---|---|---|
| 2 | $\mathbb{S}^9$ | -0.069 | 0.852 | -0.071 | 0.855 | 0.136 | 0.849 | 0.126 | 0.852 |
| | $\mathbb{R}^{10}$ | 0.229 | 0.855 | 0.240 | 0.935 | 0.232 | 0.852 | 0.242 | 0.935 |
| 3 | $\mathbb{S}^9$ | -0.070 | 0.851 | -0.071 | 0.855 | 0.137 | 0.850 | 0.127 | 0.852 |
| | $\mathbb{R}^{10}$ | 0.226 | 0.927 | 0.235 | 0.933 | 0.229 | 0.925 | 0.238 | 0.932 |
| 10 | $\mathbb{S}^9$ | -0.072 | 0.815 | -0.073 | 0.822 | 0.128 | 0.814 | 0.129 | 0.821 |
| | $\mathbb{R}^{10}$ | 0.218 | 0.810 | 0.221 | 0.894 | 0.221 | 0.873 | 0.224 | 0.892 |

Table C.2: Friedman 2 $R^2$ results for NTK posterior means with training done using various data transformations (none, transforming the inputs $X$, transforming the response $\mathbf{y}$, or both). The left four result columns are for non-noisy data and the right four for noisy data.
| $D$ | Sp. | None | $X$ | $\mathbf{y}$ | Both | None | $X$ | $\mathbf{y}$ | Both |
|---|---|---|---|---|---|---|---|---|---|
| 2 | $\mathbb{S}^9$ | -0.198 | 0.023 | -0.198 | 0.033 | 0.063 | 0.044 | 0.075 | 0.117 |
| | $\mathbb{R}^{10}$ | -0.187 | 0.667 | -0.187 | 0.692 | 0.057 | 0.317 | 0.058 | 0.378 |
| 3 | $\mathbb{S}^9$ | -0.200 | 0.040 | -0.200 | 0.055 | 0.064 | 0.025 | 0.066 | 0.125 |
| | $\mathbb{R}^{10}$ | -0.191 | 0.659 | -0.190 | 0.696 | 0.051 | 0.313 | 0.117 | 0.374 |
| 10 | $\mathbb{S}^9$ | -0.211 | 0.209 | -0.211 | 0.260 | 0.024 | 0.039 | 0.029 | 0.177 |
| | $\mathbb{R}^{10}$ | -0.195 | 0.532 | -0.207 | 0.735 | 0.022 | 0.280 | 0.025 | 0.431 |

Table C.3: Friedman 3 $R^2$ results for NTK posterior means with training done using various data transformations (none, transforming the inputs $X$, transforming the response $\mathbf{y}$, or both). The left four result columns are for non-noisy data and the right four for noisy data.
| $D$ | N/a | $X$ | $\mathbf{y}$ | Both | N/a | $X$ | $\mathbf{y}$ | Both |
|---|---|---|---|---|---|---|---|---|
| 2 | ≈1 | 0.9995 | ≈1 | 0.9995 | 0.9999 | 0.9993 | 0.9998 | 0.9993 |
| 3 | ≈1 | 0.9998 | ≈1 | 0.9998 | ≈1 | 0.9998 | 0.9999 | 0.9997 |
| 10 | ≈1 | 0.9996 | ≈1 | ≈1 | ≈1 | 0.9996 | ≈1 | ≈1 |

Table C.4: Friedman 2 $\rho$ results for Laplace kernel and NTK posterior mean matching in $\mathbb{S}^9$ with training done using various data transformations. The left four result columns are for non-noisy data and the right four for noisy data.
| $D$ | N/a | $X$ | $\mathbf{y}$ | Both | N/a | $X$ | $\mathbf{y}$ | Both |
|---|---|---|---|---|---|---|---|---|
| 2 | ≈1 | 0.9993 | ≈1 | 0.9993 | ≈1 | 0.9997 | ≈1 | 0.9980 |
| 3 | ≈1 | 0.9999 | ≈1 | 0.9999 | ≈1 | 0.9999 | ≈1 | 0.9991 |
| 10 | ≈1 | 0.9983 | ≈1 | 0.9997 | ≈1 | 0.9953 | ≈1 | 0.9993 |

Table C.5: Friedman 3 $\rho$ results for Laplace kernel and NTK posterior mean matching in $\mathbb{S}^9$ with training done using various data transformations. The left four result columns are for non-noisy data and the right four for noisy data.
Figure C.3: Predictions for noisy Friedman 2 in $\mathbb{S}^3$ for $D = 2$ with the response $\mathbf{y}$ transformed. Top: NTK and Laplace predictions overlayed. Middle: NTK and Gaussian predictions overlayed. Bottom: Averaged prediction plots of all kernels.

Figure C.4: Predictions for noisy Friedman 2 in $\mathbb{S}^3$ for $D = 2$ with both $X$ and $\mathbf{y}$ transformed. The rows of the figure are laid out as in Figure C.3.

Figure C.5: Predictions for noisy Friedman 3 in $\mathbb{R}^4$ for $D = 2$ with both $X$ and $\mathbf{y}$ transformed. The rows of the figure are laid out as in Figure C.3.

Figure C.6: Predictions for noisy Friedman 3 in $\mathbb{S}^3$ for $D = 2$ with both $X$ and $\mathbf{y}$ transformed. The rows of the figure are laid out as in Figure C.3.

Figure C.7: Fire area predictions over $\mathbb{S}^3$ without the white noise term, with NTK depth $D = 10$. Inputs are shown in $\mathbb{R}^4$ and the output is log-transformed for visualization. This is interesting since the NTK seems to do a better job of aligning the predictions to the ground truth in comparison to the GPs fit with a white noise term in Figure 6.2.