Title: Approximate Stein Classes for Truncated Density Estimation

URL Source: https://arxiv.org/html/2306.00602

arXiv:2306.00602v2 [stat.ML]
License: arXiv.org perpetual non-exclusive license

Approximate Stein Classes for Truncated Density Estimation

Daniel J. Williams, Song Liu

Abstract

Estimating truncated density models is difficult: these models have intractable normalising constants and boundary conditions that are hard to satisfy. Score matching can be adapted to solve the truncated density estimation problem, but it requires a continuous weighting function that is zero at the boundary and positive elsewhere. Evaluating such a weighting function (and its gradient) often requires a closed-form expression for the truncation boundary and the solution of a complicated optimisation problem. In this paper, we propose approximate Stein classes, which in turn lead to a relaxed Stein identity for truncated density estimation. We develop a novel discrepancy measure, the truncated kernelised Stein discrepancy (TKSD), which does not require fixing a weighting function in advance and can be evaluated using only samples on the boundary. We estimate a truncated density model by minimising the Lagrangian dual of TKSD. Finally, experiments show that our method improves on previous works in accuracy, even without an explicit functional form of the boundary.

Keywords: Density Estimation, Truncated Densities, Stein Discrepancies, Score Matching, Parameter Estimation

1 Introduction

In truncated density estimation, we are unable to view a full picture of our dataset. We are instead given access to a smaller subsample of data artificially truncated by a boundary. Examples of truncation include a limited number of medical tests resulting in under-reported disease counts, and a country's borders preventing discoveries of habitat locations. In either case, a complex boundary causes truncation, which introduces difficulties into statistical parameter estimation. Regular estimation techniques such as maximum likelihood estimation (MLE) are ill-suited, as we will explain.

When data are truncated, many typical statistical assumptions break down. One fundamental problem with estimation in a truncated space is that the probability density function (PDF), given by

$$p_{\boldsymbol{\theta}}(\boldsymbol{x}) = \frac{\bar{p}_{\boldsymbol{\theta}}(\boldsymbol{x})}{Z(\boldsymbol{\theta})}, \qquad Z(\boldsymbol{\theta}) = \int_{V} \bar{p}_{\boldsymbol{\theta}}(\boldsymbol{x})\, d\boldsymbol{x},$$

cannot be fully evaluated. In this setup, $V$ is the truncated domain, which can be highly complex, and thus the integration to obtain the normalising constant, $Z(\boldsymbol{\theta})$, is intractable. This normalising constant could be approximated via numerical integration, such as with Monte Carlo methods (Kalos & Whitlock, 2009), but this comes at a significant computational expense. Recent attention has turned to estimation methods which bypass the calculation of the normalising constant entirely by working with the score function of $p_{\boldsymbol{\theta}}(\boldsymbol{x})$,

๐ ๐‘ ๐œฝ

๐ ๐‘ ๐œฝ โข ( ๐’™ ) := โˆ‡ ๐’™ log โก ๐‘ ๐œฝ โข ( ๐’™ )

โˆ‡ ๐’™ log โก ๐‘ ยฏ ๐œฝ โข ( ๐’™ ) .

which uniquely represents a probability distribution. Score-based estimation methods include score matching (Hyvรคrinen, 2005, 2007), noise-contrastive estimation (Gutmann & Hyvรคrinen, 2010, 2012) and minimum Stein discrepancies (Stein, 1972; Barp et al., 2019). These methods are computationally fast and accurate, and usually rely on minimising a discrepancy between the score functions for the model density ๐‘ ๐œฝ โข ( ๐’™ ) and the unknown data density ๐‘ž โข ( ๐’™ ) , whose score function is ๐ ๐‘ž := โˆ‡ ๐’™ log โก ๐‘ž โข ( ๐’™ ) .
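To make the role of $Z(\boldsymbol{\theta})$ concrete, the following sketch (a hypothetical illustration of ours, not code from the paper) estimates the normalising constant of a 1D Gaussian truncated to $V = [-1, 1]$ by plain Monte Carlo, and notes that the score function never touches that constant:

```python
import numpy as np

def p_bar(x):
    """Unnormalised model: a standard Gaussian shape, p_bar(x) = exp(-x^2 / 2)."""
    return np.exp(-0.5 * x ** 2)

# Monte Carlo estimate of Z = int_V p_bar(x) dx over V = [-1, 1]:
# the volume of V times the average integrand at uniform samples on V.
rng = np.random.default_rng(0)
a, b = -1.0, 1.0
x = rng.uniform(a, b, size=200_000)
Z = (b - a) * p_bar(x).mean()

# The score function, by contrast, needs no Z at all:
# grad_x log p(x) = grad_x log p_bar(x) = -x for this model.
score = -x
```

For this toy domain the integral is cheap, but on a complex, high-dimensional $V$ the Monte Carlo estimate of $Z(\boldsymbol{\theta})$ becomes the expensive step that score-based methods avoid.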

Score-based methods have been applied across many domains, including hypothesis testing (Liu et al., 2016; Chwialkowski et al., 2016; Xu, 2022; Wu et al., 2022), generative modelling (Song & Ermon, 2019; Song et al., 2021; Pang et al., 2020), energy-based modelling (Song & Kingma, 2021), and Bayesian posterior estimation (Sharrock et al., 2022). Recently, two lines of work have been proposed for the truncated domain: truncated density estimation via score matching (Yu et al., 2021; Liu et al., 2022; Williams & Liu, 2022), called TruncSM, and truncated goodness-of-fit testing via the kernelised Stein discrepancy (KSD), denoted bounded-domain KSD (bd-KSD) (Xu, 2022).

These two lines of work both use a distance function as a weighting function in the objective. Such a function is chosen in advance so that the boundary conditions required for deriving score matching or KSD hold when the domain is truncated. Computing this distance function can be challenging when the boundary is complex and high-dimensional, as we demonstrate later in experiments. Further, these methods rely on knowing a functional form of the boundary, which is not always available.

In this paper, we consider a situation where the functional form of the boundary is not available to us, and we can only access the boundary through a finite set of random samples. As an example, suppose we want to estimate a density model for submissions to an academic conference. We can only observe the accepted papers, not the rejected ones, and it is difficult to give a functional definition of the truncation boundary that separates accepted from rejected submissions. However, we can easily identify borderline submissions from the review scores, which can be seen as samples of the boundary. In this case, TruncSM and bd-KSD are not applicable due to the lack of a functional definition of the boundary, and classical methods such as MLE are intractable. To our knowledge, no existing method can estimate a truncated density without a functional form of the boundary. To solve this problem, we first define approximate Stein classes and a corresponding Stein discrepancy, which we refer to as the truncated kernelised Stein discrepancy (TKSD); it is computationally tractable with only samples from the boundary. By minimising TKSD, we obtain a truncated density estimator that does not need the boundary's functional form. In experiments, we show that despite the approximate nature of the Stein class, density models can be accurately estimated from truncated observations. We also provide a theoretical justification of the estimator's consistency.

Our main contributions are:

- We introduce approximate Stein classes, which in turn define approximate Stein discrepancies. Unlike earlier approaches, these discrepancies are relaxed enough to apply in the truncated setting.

- These discrepancies enable density estimation on truncated datasets. The resulting estimator extends earlier Stein discrepancy estimators, and we provide theoretical and experimental results showing that the TKSD estimator is consistent and competitive with previous works.

2 Problem Setup

The Problem.  We assume the data density, $q$, has support on $V \subseteq \mathbb{R}^d$, whose boundary is denoted $\partial V$. We aim to find the model, $p_{\boldsymbol{\theta}}$, which best estimates $q$. However, the truncated setting poses significant challenges: the normalising constant in $p_{\boldsymbol{\theta}}$ is intractable, and score-based methods rely on a boundary condition which breaks down in this situation. Previous works (Xu, 2022; Liu et al., 2022) do address these issues. In this work, however, we assume no functional form of $\partial V$ is available; instead, we have a finite set of samples, $\{\boldsymbol{x}'_i\}_{i=1}^m$, drawn randomly from the boundary. To the best of our knowledge, no existing density estimation method can be applied directly.

The Aims. โ€ƒWe aim to use unnormalised models to estimate a truncated density, bypassing the evaluation of the normalising constant. We also aim to construct an estimator that does not require a functional form of the boundary, but can still adjust to the boundary and the dataset adaptively. This will lead to an estimator which is data-driven and flexible, not relying on a pre-defined weighting function, unlike previous works by Xu (2022) and Liu et al. (2022).

Before explaining our proposed solution, we introduce prior methods for measuring discrepancies for unnormalised densities.

3 Background

Statistical density estimation is often performed by minimising a divergence between the data density, $q(\boldsymbol{x})$, and the model density, $p_{\boldsymbol{\theta}}(\boldsymbol{x})$. Since truncated densities have intractable normalising constants, we introduce divergence measures suited to unnormalised densities and discuss their generalisations to truncated supports. These methods rely on Stein's identity, which we introduce first.

3.1 Stein's Identity

Originating from Stein's method (Stein, 1972; Chen et al., 2011), a Stein class of functions enables the construction of a family of discrepancy measures (Barp et al., 2019).

Definition 3.1.

Let $q = q(\boldsymbol{x})$ be any smooth probability density supported on $\mathbb{R}^d$ and let $\mathcal{S}_q : \mathcal{F}^d \to \mathbb{R}$ be a map. $\mathcal{F}^d$ is a Stein class of functions if, for any $\boldsymbol{f} \in \mathcal{F}^d$,

$$\mathbb{E}_q[\mathcal{S}_q \boldsymbol{f}(\boldsymbol{x})] = 0, \tag{1}$$

where $\mathcal{S}_q$ is called a Stein operator (Gorham & Mackey, 2015).

We refer to Equation 1 as the Stein identity, which underpins much existing work on unnormalised modelling. The Langevin Stein operator (Gorham & Mackey, 2015) applied to $p_{\boldsymbol{\theta}}(\boldsymbol{x})$ is

$$\mathcal{S}_{p_{\boldsymbol{\theta}}} \boldsymbol{f}(\boldsymbol{x}) = \mathcal{T}_{p_{\boldsymbol{\theta}}} \boldsymbol{f}(\boldsymbol{x}) := \sum_{l=1}^{d} \psi_{p_{\boldsymbol{\theta}},l}(\boldsymbol{x})\, f_l(\boldsymbol{x}) + \partial_{x_l} f_l(\boldsymbol{x}),$$

where $\psi_{p_{\boldsymbol{\theta}},l}(\boldsymbol{z}) := \partial_{z_l} \log p_{\boldsymbol{\theta}}(\boldsymbol{z})$. This operator is independent of the normalising constant, $Z(\boldsymbol{\theta})$: $p_{\boldsymbol{\theta}}$ enters the Stein operator only via its 'proxy', the score function $\boldsymbol{\psi}_{p_{\boldsymbol{\theta}}}$. When this Langevin Stein operator is used, it is straightforward to see that Equation 1 holds:

๐”ผ ๐‘ž โข [ ๐’ฏ ๐‘ž โข ๐’‡ โข ( ๐’™ ) ]

โˆซ โ„ ๐‘‘ ๐‘ž โข ( ๐’™ ) โข ( โˆ‘ ๐‘™

1 ๐‘‘ ๐œ“ ๐‘ž , ๐‘™ โข ๐‘“ ๐‘™ โข ( ๐’™ ) + โˆ‚ ๐‘ฅ ๐‘™ ๐‘“ ๐‘™ โข ( ๐’™ ) ) โข ๐‘‘ ๐’™

โˆ‘ ๐‘™

1 ๐‘‘ โˆซ โ„ ๐‘‘ โˆ‚ ๐‘ฅ ๐‘™ ๐‘ž โข ( ๐’™ ) โข ๐‘“ ๐‘™ โข ( ๐’™ ) + ๐‘ž โข ( ๐’™ ) โข โˆ‚ ๐‘ฅ ๐‘™ ๐‘“ ๐‘™ โข ( ๐’™ ) โข ๐‘‘ โข ๐’™

0 ,

where the last equality holds due to integration by parts and the fact that ๐‘ž โข ( ๐’™ ) vanishes at infinity:

lim โ€– ๐’™ โ€– โ†’ โˆž ๐‘ž โข ( ๐’™ )

0 .

(2)

This assumption is critical, and is a key focus of this paper. It holds for many densities supported on $\mathbb{R}^d$, such as Gaussian distributions and Gaussian mixtures. In the rest of this paper, we refer to this condition as the boundary condition, as it describes the behaviour of $q(\boldsymbol{x})$ at the boundary of its domain.
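The identity can also be checked numerically. The sketch below (our own illustration; the test function $f(x) = \sin(x)$ and all settings are assumptions, not from the paper) verifies by Monte Carlo that $\mathbb{E}_q[\psi_q(x) f(x) + f'(x)] \approx 0$ for $q = \mathcal{N}(0, 1)$, whose score is $\psi_q(x) = -x$:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(500_000)      # samples from q = N(0, 1)

score = -x                            # psi_q(x) = d/dx log q(x) = -x
f, df = np.sin(x), np.cos(x)          # a smooth, bounded test function and its derivative

# Monte Carlo estimate of E_q[psi_q(x) f(x) + f'(x)]; should be close to zero.
stein_term = np.mean(score * f + df)
```

The same computation on a truncated sample would not average to zero, which is exactly the failure mode Section 3.3 addresses.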

3.2 Divergence for Unnormalised Densities

When ๐’™ โˆˆ โ„ ๐‘‘ , and the boundary condition Equation 2 holds, we describe two computationally tractable discrepancy measures for unnormalised density models ๐‘ ๐œƒ โข ( ๐’™ ) : the Stein discrepancy and the score matching divergence. Both divergences rely on Steinโ€™s identity to derive a tractable form.

3.2.1 Classical Stein Discrepancy

Gorham & Mackey (2015) use Stein's identity to define a Stein discrepancy: the supremum of the differences between expected Stein operators for two densities $q(\boldsymbol{x})$ and $p_{\boldsymbol{\theta}}(\boldsymbol{x})$,

$$\mathcal{D}_{SD}(p_{\boldsymbol{\theta}} \mid q) := \sup_{\boldsymbol{f} \in \mathcal{F}^d} \left( \mathbb{E}_q[\mathcal{T}_{p_{\boldsymbol{\theta}}} \boldsymbol{f}(\boldsymbol{x})] - \mathbb{E}_{p_{\boldsymbol{\theta}}}[\mathcal{T}_{p_{\boldsymbol{\theta}}} \boldsymbol{f}(\boldsymbol{x})] \right) = \sup_{\boldsymbol{f} \in \mathcal{F}^d} \mathbb{E}_q[\mathcal{T}_{p_{\boldsymbol{\theta}}} \boldsymbol{f}(\boldsymbol{x})], \tag{3}$$

where the second equality holds because $\mathcal{F}^d$ is a Stein class. The Stein discrepancy can be interpreted as the maximum violation of Stein's identity. $\boldsymbol{f}$ is referred to as the discriminatory function, as it discriminates between $q(\boldsymbol{x})$ and $p_{\boldsymbol{\theta}}(\boldsymbol{x})$. However, the supremum in Equation 3 over $\mathcal{F}^d$ is a challenging optimisation problem (Gorham & Mackey, 2015).

Stein discrepancies have seen a lot of recent development, including extensions to non-Euclidean domains (Shi et al., 2021; Xu & Matsuda, 2021), discrete operators (Yang et al., 2018), stochastic operators (Gorham et al., 2020) and diffusion-based operators (Gorham & Mackey, 2015; Gorham et al., 2020).

3.2.2 Kernelised Stein Discrepancy (KSD)

When we restrict the function class over which the supremum is taken to a reproducing kernel Hilbert space (RKHS), we can derive the kernelised Stein discrepancy (KSD); we follow a definition similar to those of Chwialkowski et al. (2016) and Liu et al. (2016).

First, let ๐’ข be an RKHS equipped with positive definite kernel ๐‘˜ , and let ๐’ข ๐‘‘ denote the product RKHS with ๐‘‘ elements, where ๐’ˆ

( ๐‘” 1 , โ€ฆ , ๐‘” ๐‘‘ ) โˆˆ ๐’ข ๐‘‘ , and is defined with inner product โŸจ ๐’ˆ , ๐’ˆ โ€ฒ โŸฉ ๐’ข ๐‘‘

โˆ‘ ๐‘–

1 ๐‘‘ โŸจ ๐‘” ๐‘– , ๐‘” ๐‘– โ€ฒ โŸฉ ๐’ข and norm โ€– ๐’ˆ โ€– ๐’ข ๐‘‘

โˆ‘ ๐‘–

1 ๐‘‘ โŸจ ๐‘” ๐‘– , ๐‘” ๐‘– โŸฉ ๐’ข . By the reproducing property, any evaluation of ๐’ˆ โˆˆ ๐’ข ๐‘‘ can be written as

๐’ˆ โข ( ๐’™ )

โŸจ ๐’ˆ , ๐‘˜ โข ( ๐’™ , โ‹… ) โŸฉ ๐’ข ๐‘‘

โˆ‘ ๐‘–

1 ๐‘‘ โŸจ ๐‘” ๐‘– , ๐‘˜ โข ( ๐’™ , โ‹… ) โŸฉ ๐’ข .

(4)

Taking the supremum over $\mathcal{G}^d$, restricted to the RKHS unit ball, i.e. $\|\boldsymbol{g}\|_{\mathcal{G}^d} \le 1$, gives rise to the KSD:

$$\mathcal{D}_{\mathrm{KSD}}(p_{\boldsymbol{\theta}} \mid q) := \sup_{\boldsymbol{g} \in \mathcal{G}^d,\ \|\boldsymbol{g}\|_{\mathcal{G}^d} \le 1} \mathbb{E}_q[\mathcal{T}_{p_{\boldsymbol{\theta}}} \boldsymbol{g}(\boldsymbol{x})] = \left\| \mathbb{E}_q[\mathcal{T}_{p_{\boldsymbol{\theta}}} k(\boldsymbol{x}, \cdot)] \right\|_{\mathcal{G}^d}. \tag{5}$$

The KSD has a closed-form expression, as indicated by Equation 5. Moreover, the squared KSD can be expanded into a double expectation,

$$\mathcal{D}_{\mathrm{KSD}}(p_{\boldsymbol{\theta}} \mid q)^2 = \left\| \mathbb{E}_q[\mathcal{T}_{p_{\boldsymbol{\theta}}} k(\boldsymbol{x}, \cdot)] \right\|_{\mathcal{G}^d}^2 = \mathbb{E}_{\boldsymbol{x} \sim q} \mathbb{E}_{\boldsymbol{y} \sim q}\left[ \sum_{l=1}^{d} u_l(\boldsymbol{x}, \boldsymbol{y}) \right], \tag{6}$$

where

$$u_l(\boldsymbol{x}, \boldsymbol{y}) = \psi_{p,l}(\boldsymbol{x})\, \psi_{p,l}(\boldsymbol{y})\, k(\boldsymbol{x}, \boldsymbol{y}) + \psi_{p,l}(\boldsymbol{x})\, \partial_{y_l} k(\boldsymbol{x}, \boldsymbol{y}) + \psi_{p,l}(\boldsymbol{y})\, \partial_{x_l} k(\boldsymbol{x}, \boldsymbol{y}) + \partial_{x_l} \partial_{y_l} k(\boldsymbol{x}, \boldsymbol{y}). \tag{7}$$

This divergence can be fully evaluated using samples from $q(\boldsymbol{x})$ to approximate the expectation in Equation 6. Further, Chwialkowski et al. (2016) showed that $\mathcal{D}_{\mathrm{KSD}}(p_{\boldsymbol{\theta}} \mid q) = 0$ if and only if $p_{\boldsymbol{\theta}} = q$, making $\mathcal{D}_{\mathrm{KSD}}(p_{\boldsymbol{\theta}} \mid q)$ a good discrepancy measure between distributions.
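To make Equations 6 and 7 concrete, here is a minimal sketch (our own, not the authors' implementation) of the squared-KSD V-statistic for an RBF kernel $k(\boldsymbol{x}, \boldsymbol{y}) = \exp(-\|\boldsymbol{x} - \boldsymbol{y}\|^2 / 2\sigma^2)$, where the model enters only through its score function:

```python
import numpy as np

def ksd_vstat(X, score_fn, sigma=1.0):
    """Squared-KSD V-statistic with RBF kernel k(x,y) = exp(-||x-y||^2 / (2 sigma^2)).
    X: (n, d) samples from q; score_fn returns the model scores psi_{p,l}."""
    S = score_fn(X)                                  # (n, d) model scores
    diff = X[:, None, :] - X[None, :, :]             # x - y, shape (n, n, d)
    K = np.exp(-(diff ** 2).sum(-1) / (2 * sigma ** 2))
    dK_dy = K[:, :, None] * diff / sigma ** 2        # partial_{y_l} k(x, y)
    d2K = K[:, :, None] * (1 / sigma ** 2 - diff ** 2 / sigma ** 4)
    # u_l(x, y) from Eq. (7); note partial_{x_l} k = -partial_{y_l} k for an RBF.
    u = (S[:, None, :] * S[None, :, :] * K[:, :, None]
         + S[:, None, :] * dK_dy
         - S[None, :, :] * dK_dy
         + d2K)
    return u.sum(-1).mean()                          # average over l, i, j

rng = np.random.default_rng(0)
score = lambda Z: -Z                                 # score of a N(0, I) model
ksd_match = ksd_vstat(rng.standard_normal((300, 2)), score)
ksd_mismatch = ksd_vstat(rng.standard_normal((300, 2)) + 1.5, score)
```

With samples from the model itself the statistic is small, and under a mean shift it grows; this behaviour is what makes KSD useful for estimation and testing.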

3.2.3 Score Matching

The score matching (or Fisher-Hyvärinen) divergence, initially developed by Hyvärinen (2005), is the expected squared difference between the score functions of the two densities $p_{\boldsymbol{\theta}}(\boldsymbol{x})$ and $q(\boldsymbol{x})$:

$$\mathcal{D}_{SM}(p_{\boldsymbol{\theta}} \mid q) = \mathbb{E}_q\left[ \left\| \boldsymbol{h}(\boldsymbol{x})^{1/2} \odot \left( \boldsymbol{\psi}_q - \boldsymbol{\psi}_p \right) \right\|^2 \right], \tag{8}$$

where $\odot$ denotes element-wise multiplication. The inclusion of the weighting function $\boldsymbol{h}(\boldsymbol{x})$ yields the generalised score matching divergence (Lin et al., 2016; Yu et al., 2016, 2019), while $\boldsymbol{h}(\boldsymbol{x}) = \boldsymbol{1}$ yields the classic score matching divergence.

It can be seen that $\mathcal{D}_{SM}(p_{\boldsymbol{\theta}} \mid q)$ is simply the squared difference between the two Langevin Stein operators $\mathcal{T}_{p_{\boldsymbol{\theta}}} h_l(\boldsymbol{x})$ and $\mathcal{T}_q h_l(\boldsymbol{x})$. Using Equation 1, $\mathcal{D}_{SM}(p_{\boldsymbol{\theta}} \mid q)$ can be rewritten as

$$\mathbb{E}_q\left[ \sum_{l=1}^{d} h_l(\boldsymbol{x}) \left( \psi_{p,l}^2 + 2\, \partial_{x_l} \psi_{p,l} \right) + \partial_{x_l} h_l(\boldsymbol{x})\, \psi_{p,l} \right] + C_q,$$

where $C_q = \mathbb{E}_q[\boldsymbol{\psi}_q^\top \boldsymbol{\psi}_q]$ is a constant which can be safely ignored when minimising with respect to $\boldsymbol{\theta}$.

3.3 Divergence for Densities with a Truncated Support

Let ๐‘‰ โŠ‚ โ„ ๐‘‘ be a domain whose boundary is denoted by โˆ‚ ๐‘‰ . The boundary condition of the density ๐‘ž โข ( ๐’™ ) given in Equation 2 needs to hold on the boundary โˆ‚ ๐‘‰ , i.e., the truncated density ๐‘ž โข ( ๐’™ โ€ฒ )

0 , โˆ€ ๐’™ โ€ฒ โˆˆ โˆ‚ ๐‘‰ . However, this is, in general, not true for truncated densities. For example, a 1D truncated unit Gaussian distribution within the interval [ โˆ’ 1 , 1 ] has non-zero density at the boundary of the support at exactly ๐‘ฅ

โˆ’ 1 and ๐‘ฅ

1 . When the boundary condition breaks down, the function families presented for classical SD, KSD and score matching are no longer Stein classes, and these divergences are no longer computationally tractable.

We outline two recent methods for circumventing this issue by modifying the KSD and the score matching divergence.

3.3.1 Bounded-Domain KSD (bd-KSD)

Motivated by goodness-of-fit testing on truncated domains, Xu (2022) proposes a modified Stein operator, given by

$$\mathcal{T}_{p,h}\, \boldsymbol{g}(\boldsymbol{x}) = \sum_{l=1}^{d} \psi_{p,l}\, g_l(\boldsymbol{x})\, h(\boldsymbol{x}) + \partial_{x_l} \big( g_l(\boldsymbol{x})\, h(\boldsymbol{x}) \big) \tag{9}$$

for $\boldsymbol{g} \in \mathcal{G}^d$, where $h(\boldsymbol{x})$ is a weighting function for which $h(\boldsymbol{x}') = 0$ for all $\boldsymbol{x}' \in \partial V$. This modified Stein operator relies on boundary conditions on $h$ instead of $q$, which can be satisfied by choosing $h$ carefully.

Using this Stein operator in Equation 3 and taking the supremum over $\boldsymbol{g} \in \mathcal{G}^d$ gives an alternative definition of KSD for bounded domains, the bounded-domain kernelised Stein discrepancy (bd-KSD):

$$\mathcal{D}_{\text{bd-KSD}}(p_{\boldsymbol{\theta}} \mid q)^2 = \left\| \mathbb{E}_q[\mathcal{T}_{p_{\boldsymbol{\theta}},h}\, k(\boldsymbol{x}, \cdot)] \right\|_{\mathcal{G}^d}^2.$$

The form of $h$ is not explicitly defined, but the authors recommend a distance function. For example, if $V$ is the unit ball, $h(\boldsymbol{x}) = 1 - \|\boldsymbol{x}\|^p$ for a chosen power $p$.

3.3.2 Truncated Score Matching (TruncSM)

When the support of $q$ is the non-negative orthant, $\mathbb{R}_+^d$, Hyvärinen (2007) and Yu et al. (2019) impose restrictions on the weighting function $\boldsymbol{h}$ in the score matching divergence, given in Equation 8, such that $\boldsymbol{h}(\boldsymbol{x}) \to 0$ as $\|\boldsymbol{x}\| \to 0$, for example $\boldsymbol{h}(\boldsymbol{x}) = \boldsymbol{x}$. Yu et al. (2021) and Liu et al. (2022) generalised this constraint on $\boldsymbol{h}$ to any bounded domain $V$, imposing the restrictions $h_l(\boldsymbol{x}) > 0$ for all $l$ and $\boldsymbol{h}(\boldsymbol{x}') = \boldsymbol{0}$ for all $\boldsymbol{x}' \in \partial V$.

Liu et al. (2022) showed that maximising a Stein discrepancy with respect to $\boldsymbol{h}$ gives a solution $\boldsymbol{h}_0 = (h_0, \ldots, h_0)$, where

$$h_0(\boldsymbol{x}) = \min_{\boldsymbol{x}' \in \partial V} \mathrm{dist}(\boldsymbol{x}, \boldsymbol{x}'), \tag{10}$$

the smallest distance between $\boldsymbol{x}$ and the boundary $\partial V$. Liu et al. (2022) proposed the use of several distance functions, such as the $\ell_1$ and $\ell_2$ distances. The generalised score matching divergence with $\boldsymbol{h}_0$ is referred to as TruncSM.
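When the boundary is only available as samples, Equation 10 can at best be evaluated against those samples. The sketch below (our own hypothetical illustration; TruncSM itself assumes the boundary is known exactly) computes the $\ell_2$ version of $h_0$ against points sampled from the unit circle:

```python
import numpy as np

def weight_h0(X, boundary):
    """Approximate TruncSM weight h_0(x) = min_{x'} ||x - x'||_2, where the
    minimum runs over a finite sample of boundary points (an approximation:
    Eq. 10 minimises over the whole boundary)."""
    diffs = X[:, None, :] - boundary[None, :, :]          # (n, m, d)
    return np.sqrt((diffs ** 2).sum(-1)).min(axis=1)      # (n,)

# Example: V is the unit disc, with boundary points on the unit circle.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
boundary = np.stack([np.cos(theta), np.sin(theta)], axis=1)

X = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.9]])
h0 = weight_h0(X, boundary)
```

The weight is largest deep inside the domain and shrinks to zero as a point approaches the boundary, which is exactly the behaviour the boundary conditions on $\boldsymbol{h}$ require.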

4 Approximate Stein Classes

So far, previous works have assumed a functional form of the truncation boundary, so that $h$ can be precisely designed and Stein's identity holds exactly. In our setting, the functional form of the truncation boundary is unavailable, making the design of a Stein class infeasible: we cannot design a class of functions with appropriate boundary conditions that satisfy Stein's identity exactly. However, the truncation boundary is known to us approximately, as a set of boundary points, and so we can design a set of functions for which Stein's identity holds 'approximately', as we show below.

Let us first define an approximate Stein class and an approximate Stein identity.

Definition 4.1.

Let $q = q(\boldsymbol{x})$ be a smooth probability density function on $V \subseteq \mathbb{R}^d$. $\tilde{\mathcal{F}}_m^d$ is an approximate Stein class of $q$ if, for all $\tilde{\boldsymbol{f}} \in \tilde{\mathcal{F}}_m^d$,

$$\mathbb{E}_q[\mathcal{S}_{q,m} \tilde{\boldsymbol{f}}(\boldsymbol{x})] = O_P(\varepsilon_m), \tag{11}$$

where $\varepsilon_m$ is a monotonically decreasing function of $m$, and $\mathcal{S}_{q,m}$ is a Stein operator that depends on $m$ in some way. $O_P(\varepsilon_m)$ denotes a sequence indexed by $m$ which is bounded in probability by $\varepsilon_m$.

We refer to Equation 11 as the approximate Stein identity; the approximate Stein class $\tilde{\mathcal{F}}_m^d$ can be written more simply as

$$\tilde{\mathcal{F}}_m^d = \left\{ \tilde{\boldsymbol{f}} : V \to \mathbb{R} \,\middle|\, \mathbb{E}_q[\mathcal{S}_{q,m} \tilde{\boldsymbol{f}}(\boldsymbol{x})] = O_P(\varepsilon_m) \right\}. \tag{12}$$

The classical Stein class given by Equation 1 requires Stein's identity to hold exactly, whereas the approximate Stein class in Equation 12 defines a set of functions for which $\mathbb{E}_q[\mathcal{S}_{q,m} \tilde{\boldsymbol{f}}(\boldsymbol{x})]$ is bounded by a decreasing sequence. As with the classical Stein discrepancy, we propose the use of the Langevin Stein operator, i.e. $\mathcal{S}_{q,m} = \mathcal{T}_{q,m}$. We aim to show that this is a flexible class which can be used across various applications, of which we provide two examples below.

Example: Latent Variable Models.

Kanagawa et al. (2019) presented a variation of KSD for testing the goodness-of-fit of models with unobserved latent variables $\boldsymbol{z}$, where the density of $\boldsymbol{x}$ is given by $q(\boldsymbol{x}) = \int_{\mathbb{R}^d} q(\boldsymbol{x} \mid \boldsymbol{z})\, q(\boldsymbol{z})\, d\boldsymbol{z}$, and thus the score function can be written as

$$\boldsymbol{\psi}_q(\boldsymbol{x}) = \frac{\nabla_{\boldsymbol{x}} q(\boldsymbol{x})}{q(\boldsymbol{x})} = \int_{\mathbb{R}^d} \frac{\nabla_{\boldsymbol{x}} q(\boldsymbol{x} \mid \boldsymbol{z})}{q(\boldsymbol{x} \mid \boldsymbol{z})} \cdot \frac{q(\boldsymbol{x} \mid \boldsymbol{z})\, q(\boldsymbol{z})}{q(\boldsymbol{x})}\, d\boldsymbol{z} = \mathbb{E}_{q(\boldsymbol{z} \mid \boldsymbol{x})}\left[ \boldsymbol{\psi}_q(\boldsymbol{x} \mid \boldsymbol{z}) \right]. \tag{13}$$

We show that this existing modification of KSD gives rise to an approximate Stein class. To evaluate the expectation over $q(\boldsymbol{z} \mid \boldsymbol{x})$, Kanagawa et al. (2019) recommend a Monte Carlo estimate,

$$\mathbb{E}_{q(\boldsymbol{z} \mid \boldsymbol{x})}\left[ \boldsymbol{\psi}_q(\boldsymbol{x} \mid \boldsymbol{z}) \right] = \frac{1}{m} \sum_{i=1}^{m} \boldsymbol{\psi}_q(\boldsymbol{x} \mid \boldsymbol{z}_i) + O_P(\varepsilon_m), \tag{14}$$

where we assume access to $m$ unbiased samples $\{\boldsymbol{z}_i\}_{i=1}^m \sim q(\boldsymbol{z} \mid \boldsymbol{x})$, and $O_P(\varepsilon_m)$ is the Monte Carlo approximation error. This approximation leads to

$$\mathbb{E}_q[\mathcal{T}_q \boldsymbol{g}(\boldsymbol{x})] = O_P(\varepsilon_m) \neq 0,$$

and therefore this variation of KSD uses an approximate Stein class. See Section A.1 for more details.
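A tiny numerical check of Equations 13 and 14 (our own hypothetical example, not from Kanagawa et al.): take $z \sim \mathcal{N}(0, 1)$ and $x \mid z \sim \mathcal{N}(z, 1)$, so the marginal is $x \sim \mathcal{N}(0, 2)$ with score $-x/2$, and the posterior is $q(z \mid x) = \mathcal{N}(x/2, 1/2)$. Averaging the conditional scores $-(x - z_i)$ over posterior samples recovers the marginal score up to Monte Carlo error:

```python
import numpy as np

rng = np.random.default_rng(0)

# Latent model: z ~ N(0, 1), x | z ~ N(z, 1), so x ~ N(0, 2) marginally
# and the posterior is q(z | x) = N(x / 2, 1 / 2).
x = 1.2
z = rng.normal(loc=x / 2, scale=np.sqrt(0.5), size=50_000)  # posterior draws

# Eq. (14): Monte Carlo average of the conditional scores psi_{q(x|z)}(x) = -(x - z).
score_mc = np.mean(-(x - z))
score_exact = -x / 2                                 # marginal score of N(0, 2)
```

The residual `score_mc - score_exact` is the $O_P(\varepsilon_m)$ term of Equation 14, shrinking as more posterior samples are drawn.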

5 KSD for Truncated Density Estimation

Let ๐‘ž โข ( ๐’™ ) be a smooth probability density function with truncated support ๐‘‰ โŠ‚ โ„ ๐‘‘ with boundary โˆ‚ ๐‘‰ . When ๐‘ž โข ( ๐’™ ) has a truncated support, ๐’ข ๐‘‘ is not a Stein class. To show this, let ๐’ˆ โˆˆ ๐’ข ๐‘‘ , then Steinโ€™s identity can be written as

๐”ผ ๐’ฏ ๐‘ž โข ๐’ˆ โข ( ๐’™ )

โˆซ ๐‘‰ ๐‘ž โข ( ๐’™ ) โข ( โˆ‘ ๐‘™

1 ๐‘‘ ๐œ“ ๐‘ ๐œฝ , ๐‘™ โข ( ๐’™ ) โข ๐‘” ๐‘™ โข ( ๐’™ ) + โˆ‚ ๐‘ฅ ๐‘™ ๐‘” ๐‘™ โข ( ๐’™ ) ) โข ๐‘‘ ๐’™

โˆ‘ ๐‘™

1 ๐‘‘ โˆซ ๐‘‰ โˆ‚ ๐‘ฅ ๐‘™ ๐‘ž โข ( ๐’™ ) โข ๐‘” ๐‘™ โข ( ๐’™ ) + ๐‘ž โข ( ๐’™ ) โข โˆ‚ ๐‘ฅ ๐‘™ ๐‘” ๐‘™ โข ( ๐’™ ) โข ๐‘‘ โข ๐’™

โˆฎ โˆ‚ ๐‘‰ ๐‘ž โข ( ๐’™ ) โข โˆ‘ ๐‘™

1 ๐‘‘ ๐‘” ๐‘™ โข ( ๐’™ ) โข u ^ ๐‘™ โข ( ๐’™ ) โข ๐‘‘ โข ๐‘  ,

(15)

where โˆฎ โˆ‚ ๐‘‰ is the surface integral over the boundary โˆ‚ ๐‘‰ , u ^ 1 โข ( ๐’™ ) , โ€ฆ , u ^ ๐‘‘ โข ( ๐’™ ) are the unit outward normal vectors on โˆ‚ ๐‘‰ and ๐‘‘ โข ๐‘  is the surface element on โˆ‚ ๐‘‰ .

To satisfy the Stein identity, Equation 15 must equal zero. In the untruncated setting, it is zero due to the boundary condition on $q(\boldsymbol{x})$, but in the truncated setting the density is significantly non-zero on the boundary $\partial V$. Prior methods such as Xu (2022) choose a pre-defined weighting function which makes Equation 15 exactly zero at the boundary. We instead show that Equation 15 is $O_P(\varepsilon_m)$ under an approximate Stein class, as detailed in the remainder of this section.

5.1 Approximate Stein Class for Truncated Densities

Let us first consider a setting where the boundary $\partial V$ is known analytically. Define a modified product RKHS as

$$\mathcal{G}_0^d = \left\{ \boldsymbol{g} \in \mathcal{G}^d \,\middle|\, \boldsymbol{g}(\boldsymbol{x}') = \boldsymbol{0}\ \forall \boldsymbol{x}' \in \partial V,\ \|\boldsymbol{g}\|_{\mathcal{G}^d}^2 \le 1 \right\}. \tag{16}$$

Lemma 5.1.

Let $q$ be a smooth density supported on $V$. For any $\boldsymbol{g} \in \mathcal{G}_0^d$,

$$\mathbb{E}_q[\mathcal{T}_q \boldsymbol{g}(\boldsymbol{x})] = 0. \tag{17}$$

As with the classical Stein identity, the proof follows from a simple integration by parts; see Section A.2. Lemma 5.1 shows that $\mathcal{G}_0^d$ is a proper Stein class of $q$. $\mathcal{G}_0^d$ defines a large class of functions, of which the operator proposed by Xu (2022) in Equation 9 is one example.

Let us now consider the setting where $\partial V$ is not known exactly. The information about the boundary is provided by $\widetilde{\partial V} = \{\boldsymbol{x}'_i\}_{i=1}^m$, a finite set of points randomly sampled from $\partial V$. Define

$$\mathcal{G}_{0,m}^d = \left\{ \boldsymbol{g} \in \mathcal{G}^d \,\middle|\, \boldsymbol{g}(\boldsymbol{x}') = \boldsymbol{0}\ \forall \boldsymbol{x}' \in \widetilde{\partial V},\ \|\boldsymbol{g}\|_{\mathcal{G}^d}^2 \le 1 \right\}, \tag{18}$$

which can be considered an approximate version of $\mathcal{G}_0^d$ using the finite set $\widetilde{\partial V}$. The benefit of $\mathcal{G}_{0,m}^d$ over $\mathcal{G}_0^d$ is that this class can be constructed using only 'partial' boundary information, i.e., $\widetilde{\partial V}$, without knowing the explicit expression of $\partial V$.

First, we show some properties of the relationship between $\widetilde{\partial V}$ and $\partial V$.

Lemma 5.2.

Let $\boldsymbol{g} \in \mathcal{G}_0^d$ and $\tilde{\boldsymbol{g}} \in \mathcal{G}_{0,m}^d$. Assume that $\widetilde{\partial V}$ is $\varepsilon_m$-dense in $\partial V$ with some probability. Further assume that $g_l$ and $\tilde{g}_l$ are $C$-Lipschitz continuous for all $l = 1, \ldots, d$. Then

$$\left| g_l(\boldsymbol{x}') - \tilde{g}_l(\boldsymbol{x}') \right| = O_P(\varepsilon_m)$$

for any $\boldsymbol{x}' \in \partial V$.

The proof follows from applying the Lipschitz property of $g_l$ and $\tilde{g}_l$ and then the triangle inequality; see Section A.3. Lemma 5.2 establishes a connection between $\mathcal{G}_0^d$ and $\mathcal{G}_{0,m}^d$, relying on the assumption that $\widetilde{\partial V}$ is $\varepsilon_m$-dense in $\partial V$. We now show that, under some mild conditions, this assumption holds with high probability.

Proposition 5.3.

Assume $\{\boldsymbol{x}'_i\}_{i=1}^m$ are samples drawn from the uniform distribution on $\partial V$. Let $L(V)$ denote the $(d-1)$-surface area of a bounded domain $V \subset \mathbb{R}^d$, with $L(V) < \infty$. Let $B_{\varepsilon_m}(\boldsymbol{x}')$ denote a ball of radius $\varepsilon_m$ centred on $\boldsymbol{x}'$, and let $\xi(d) = \pi^{d/2} / \Gamma(\tfrac{d}{2} + 1)$. For all $\varepsilon_m$ such that

$$\varepsilon_m \ge \left( \frac{L(V)}{\xi(d)} \left[ 1 - 0.05^{1/m} \right] \right)^{1/d}, \tag{19}$$

we have

$$\mathbb{P}\left( \widetilde{\partial V} \text{ is } \varepsilon_m\text{-dense in } \partial V \right) \ge 0.95.$$

For the proof, see Section A.4. Equation 19 shows the relationship between $m$ and $\varepsilon_m$, which corresponds to how 'close' $\widetilde{\partial V}$ is to $\partial V$.
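A quick numerical look at the bound (under our reconstructed reading of Equation 19, with $L(V)/\xi(d)$ as the leading fraction; treat the exact constant as an assumption) shows how it shrinks as $m$ grows:

```python
from math import gamma, pi

def eps_bound(m, d, surface_area):
    """Lower bound on eps_m in Eq. (19), as reconstructed here:
    eps_m >= ((L(V) / xi(d)) * (1 - 0.05 ** (1 / m))) ** (1 / d),
    where xi(d) = pi^(d/2) / Gamma(d/2 + 1) is the unit-ball volume."""
    xi = pi ** (d / 2) / gamma(d / 2 + 1)
    return ((surface_area / xi) * (1 - 0.05 ** (1 / m))) ** (1 / d)

# Boundary of the unit disc in d = 2: L(V) = 2 * pi.
bounds = [eps_bound(m, d=2, surface_area=2 * pi) for m in (10, 100, 1000)]
```

The computed values decrease monotonically in $m$, matching the behaviour reported in the paper's numerical investigation.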

Remark 5.4.

Our numerical investigation (Section B.2) shows that the bound on $\varepsilon_m$ defined in Equation 19 is a decreasing function of $m$, and is not sensitive to the value of $L(V)$.

Proposition 5.3 shows that the density assumption of Lemma 5.2 holds with high probability, which in turn enables us to show that $\mathcal{G}_{0,m}^d$ is indeed an approximate Stein class.

Theorem 5.5.

Assume the conditions of Lemma 5.2 hold, and let $q$ be a smooth density supported on $V$. For any $\tilde{\boldsymbol{g}} \in \mathcal{G}_{0,m}^d$,

$$\mathbb{E}_q[\mathcal{T}_q \tilde{\boldsymbol{g}}(\boldsymbol{x})] = O_P(\varepsilon_m). \tag{20}$$

The proof again follows from integration by parts and then applying Lemma 5.2; see Section A.5. This result paves the way for a new type of Stein divergence that measures the difference between two distributions whose domain $V$ is truncated.

5.2 Truncated Kernelised Stein Discrepancy (TKSD) and Density Estimation

Let ๐‘ž and ๐‘ ๐œฝ be two smooth densities supported on ๐‘‰ . We can construct a Stein discrepancy measure, called the truncated Kernelised Stein Discrepancy (TKSD), given by

๐’Ÿ TKSD โข ( ๐‘ ๐œฝ | ๐‘ž ) := sup ๐’ˆ โˆˆ ๐’ข 0 , ๐‘š ๐‘‘ ๐”ผ ๐‘ž โข [ ๐’ฏ ๐‘ ๐œฝ โข ๐’ˆ โข ( ๐’™ ) ] .

(21)

Similarly to classical SD and KSD, TKSD can still be intuitively thought as the maximum violation of Steinโ€™s identity with respect to an approximate Stein class ๐’ข 0 , ๐‘š ๐‘‘ . It can be used to distinguish two distributions when their domain ๐‘‰ is truncated, but the boundary is not known analytically.

๐’Ÿ TKSD โข ( ๐‘ ๐œฝ | ๐‘ž ) serves as the discrepancy measure between densities. Later, we propose to estimate a truncated density function by minimising this discrepancy.

Next, we show that there is an analytic solution to the constrained optimisation problem in Equation 21.

Theorem 5.6.

$\mathcal{D}_{\mathrm{TKSD}}(p_{\boldsymbol{\theta}} \mid q)^2$ can be written as

$$\sum_{l=1}^{d} \mathbb{E}_{\boldsymbol{x} \sim q} \mathbb{E}_{\boldsymbol{y} \sim q}\left[ u_l(\boldsymbol{x}, \boldsymbol{y}) - \mathbf{v}_l(\boldsymbol{x})^\top (\mathbf{K}')^{-1} \mathbf{v}_l(\boldsymbol{y}) \right], \tag{22}$$

where $u_l(\boldsymbol{x}, \boldsymbol{y})$ is given by Equation 7, $\mathbf{v}_l(\boldsymbol{z}) = \psi_{p,l}(\boldsymbol{z})\, \boldsymbol{\varphi}_{\boldsymbol{z},\mathbf{x}'}^\top + (\partial_{z_l} \boldsymbol{\varphi}_{\boldsymbol{z},\mathbf{x}'})^\top$, $\boldsymbol{\varphi}_{\boldsymbol{z},\mathbf{x}'} = [k(\boldsymbol{z}, \mathbf{x}'_1), \ldots, k(\boldsymbol{z}, \mathbf{x}'_m)]$, $\boldsymbol{\phi}_{\mathbf{x}'} = [k(\mathbf{x}'_1, \cdot), \ldots, k(\mathbf{x}'_m, \cdot)]^\top$ and $\mathbf{K}' = \boldsymbol{\phi}_{\mathbf{x}'} \boldsymbol{\phi}_{\mathbf{x}'}^\top$.

The proof relies on solving a Lagrangian dual problem; see Section A.6. This result gives a closed-form loss function for $\mathcal{D}_{\mathrm{TKSD}}(p_{\boldsymbol{\theta}} \mid q)^2$, which is not straightforward to obtain, since the constraints in $\mathcal{G}_{0,m}^d$ are enforced on a finite set of points in $\mathbb{R}^d$. Equation 22 decomposes into a 'KSD part', given by the $u_l(\boldsymbol{x}, \boldsymbol{y})$ term, and a 'truncated part', given by $\mathbf{v}_l(\boldsymbol{x})^\top (\mathbf{K}')^{-1} \mathbf{v}_l(\boldsymbol{y})$, which comes from solving for the Lagrangian dual parameter. Equation 22 is also linear in $d$, so its evaluation cost grows only linearly with $d$.

Next, we show that the TKSD can be approximated with samples from $q$.

Theorem 5.7.

Let $\{\boldsymbol{x}_i\}_{i=1}^n$ be a set of samples from $q(\boldsymbol{x})$. Then

$$\hat{\mathcal{D}}_{\mathrm{TKSD}}(p_{\boldsymbol{\theta}} \mid q)^2 = \frac{2}{n(n-1)} \sum_{i=1}^{n} \sum_{j > i}^{n} h(\boldsymbol{x}_i, \boldsymbol{x}_j) \tag{23}$$

is an unbiased estimate of $\mathcal{D}_{\mathrm{TKSD}}(p_{\boldsymbol{\theta}} \mid q)^2$, where

$$h(\boldsymbol{x}_i, \boldsymbol{x}_j) = \sum_{l=1}^{d} u_l(\boldsymbol{x}_i, \boldsymbol{x}_j) - \mathbf{v}_l(\boldsymbol{x}_i)^\top (\mathbf{K}')^{-1} \mathbf{v}_l(\boldsymbol{x}_j), \tag{24}$$

assuming that $\mathbb{E}_{\boldsymbol{x} \sim q} \mathbb{E}_{\boldsymbol{y} \sim q}[h(\boldsymbol{x}, \boldsymbol{y})^2] < \infty$.

The proof follows directly from the definition of a $U$-statistic (Serfling, 2009). Additionally, we could define a biased estimate of $\mathcal{D}_{\mathrm{TKSD}}(p_{\boldsymbol{\theta}} \,|\, q)^2$ via a $V$-statistic,

$$\hat{\mathcal{D}}_{\mathrm{TKSD}}(p_{\boldsymbol{\theta}} \,|\, q)^2 = \frac{1}{n^2} \sum_{i=1}^{n} \sum_{j=1}^{n} h(\boldsymbol{x}_i, \boldsymbol{x}_j). \tag{25}$$

The $U$-statistic and $V$-statistic for TKSD allow us to evaluate $\mathcal{D}_{\mathrm{TKSD}}(p_{\boldsymbol{\theta}} \,|\, q)^2$ via $n$ samples from $q$. In empirical experiments, the $V$-statistic seems to give better overall performance than the $U$-statistic.

Finally, we define our proposed estimator for the unnormalised truncated density model by minimising TKSD over the density parameter $\boldsymbol{\theta}$:

$$\hat{\boldsymbol{\theta}}_{n,m} := \operatorname*{arg\,min}_{\boldsymbol{\theta}}\ \hat{\mathcal{D}}_{\mathrm{TKSD}}(p_{\boldsymbol{\theta}} \,|\, q)^2. \tag{26}$$

In the next section, we study the theoretical properties of Equation 26.

5.3Consistency Analysis

Since ๐’Ÿ ^ TKSD โข ( ๐‘ ๐œฝ | ๐‘ž ) 2 is a function of ๐‘š , for the simplicity of the theorem, let us study Equation 26 at the limit of ๐‘š . Let ๐ฟ ^ โข ( ๐œฝ ) := lim ๐‘š โ†’ โˆž ๐’Ÿ ^ TKSD โข ( ๐‘ ๐œฝ | ๐‘ž ) 2 and define

๐œฝ ^ ๐‘› := arg โข min ๐œฝ โก ๐ฟ ^ โข ( ๐œฝ ) .

We now prove that ๐œฝ ^ ๐‘› converges to the true parameter under mild conditions:

Assumption 5.8 (Accurate Boundary Prediction).

Let

$$\boldsymbol{t} := \mathbb{E}_{\boldsymbol{y}}\Big[ \boldsymbol{\varphi}_{\boldsymbol{y},\mathbf{x}'}^\top \big( \nabla_{\boldsymbol{\theta}} \psi_{p,l}(\boldsymbol{y}) \big)^\top \Big|_{\boldsymbol{\theta} = \boldsymbol{\theta}^\star} \Big] \in \mathbb{R}^{m \times \dim(\boldsymbol{\theta})}, \qquad \boldsymbol{t}(\boldsymbol{x}) := \mathbb{E}_{\boldsymbol{y}}\Big[ \boldsymbol{\varphi}_{\boldsymbol{y},\boldsymbol{x}}^\top \big( \nabla_{\boldsymbol{\theta}} \psi_{p,l}(\boldsymbol{y}) \big)^\top \Big|_{\boldsymbol{\theta} = \boldsymbol{\theta}^\star} \Big] \in \mathbb{R}^{\dim(\boldsymbol{\theta})}.$$

Assume the following holds:

$$\Big[ \oint_{\partial V} q(\boldsymbol{x})\, \boldsymbol{\varphi}_{\boldsymbol{x},\mathbf{x}'} (\mathbf{K}')^{-1} \boldsymbol{t}\, \hat{u}_l(\boldsymbol{x})\, ds \Big]_i = \Big[ \oint_{\partial V} q(\boldsymbol{x})\, \boldsymbol{t}(\boldsymbol{x})\, \hat{u}_l(\boldsymbol{x})\, ds \Big]_i + O_P(\hat{\varepsilon}_m), \qquad \forall i \in \{1, \ldots, \dim(\boldsymbol{\theta})\},\ l \in \{1, \ldots, d\}, \tag{27}$$

where $\hat{\varepsilon}_m$ is a positive decaying sequence with respect to $m$ and $\lim_{m \to \infty} \hat{\varepsilon}_m = 0$.

One can see that $\boldsymbol{\varphi}_{\boldsymbol{x},\mathbf{x}'} (\mathbf{K}')^{-1} \boldsymbol{t}$ is the kernel least-squares regression prediction of $\boldsymbol{t}(\boldsymbol{x}')$, for an $\boldsymbol{x}' \in \partial V$. Assumption 5.8 essentially states that the least-squares regression 'trained' on our boundary samples, $\{\boldsymbol{x}'_{i'}\}_{i'=1}^{m}$, should be asymptotically accurate in terms of a testing error computed over a surface integral as $m$ increases.

Assumption 5.9.

The smallest eigenvalue of the Hessian of $\hat{L}(\boldsymbol{\theta})$ is lower bounded, i.e., $\lambda_{\min}\big[ \nabla^2_{\boldsymbol{\theta}} \hat{L}(\boldsymbol{\theta}) \big] \geq \Lambda_{\min} > 0$ with high probability.

In fact, we could make the same assumption on the population version of the objective function, and the convergence of the sample Hessian matrix would then guarantee that Assumption 5.9 holds with high probability. To simplify our theoretical statement, we stick to the simpler version.

Theorem 5.10.

Suppose there exists a unique $\boldsymbol{\theta}^\star$ such that $q = p_{\boldsymbol{\theta}^\star}$, and that Assumptions 5.8 and 5.9 hold. Then

$$\| \hat{\boldsymbol{\theta}}_n - \boldsymbol{\theta}^\star \| = O_P\!\left( \frac{1}{\sqrt{n}} \right).$$

For the proof, see Section A.7. This result shows that, although TKSD is an approximate version of KSD, the density estimator derived from TKSD is still a consistent estimator. We empirically verify this consistency in Section B.3.

6Experimental Results

To show the validity of our proposed method, we experiment on benchmark settings against TruncSM and an adaptation of bd-KSD for truncated density estimation. We also provide additional empirical experiments in the appendices: empirical consistency (Section B.3), a demonstration on a Gaussian mixture distribution (Section B.4), an implementation for truncated regression (Section B.5) and an investigation into the effect of the distribution of the boundary points (Section B.6).

6.1Computational Considerations

TKSD requires the selection of hyperparameters: the number of boundary points, $m$, the choice of kernel function, $k$, and the corresponding kernel hyperparameters. We focus on the Gaussian kernel, $k(\boldsymbol{x}, \boldsymbol{y}) = \exp\{ -(2\sigma^2)^{-1} \| \boldsymbol{x} - \boldsymbol{y} \|^2 \}$, where the bandwidth parameter $\sigma$ is chosen heuristically as the median of pairwise distances on the data matrix.

In choosing $m$, there is a trade-off between accuracy and computational expense, since $\mathbf{K}'$ in Equation 24 is an $m \times m$ matrix which requires inversion. In experiments, we let $m$ scale with $d^2$. We provide more computational details of the method in Section B.8.

When the boundary's functional form is unknown, the distance functions recommended by Xu (2022) and Liu et al. (2022) cannot be used, and instead TruncSM and bd-KSD must use approximate boundary points. This approximation to the distance function is given by

$$\min_{\boldsymbol{x}' \in \partial \tilde{V}} \| \boldsymbol{x} - \boldsymbol{x}' \|_\alpha^\gamma, \tag{28}$$

for each dataset point $\boldsymbol{x}$, where $\gamma$ and $\alpha$ are chosen based on the application. This may not provide an accurate approximation to the true distance function when $m$ is small.
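Equation 28 amounts to a brute-force nearest-neighbour search over the boundary sample; a minimal sketch (our illustration, not either codebase):

```python
import numpy as np

def approx_distance(X, Xb, gamma=1.0, alpha=2):
    """min over boundary points x' of ||x - x'||_alpha^gamma, per row of X (Eq. 28)."""
    diffs = np.abs(X[:, None, :] - Xb[None, :, :])       # (n, m, d)
    dists = (diffs ** alpha).sum(-1) ** (1 / alpha)      # ell_alpha norms, (n, m)
    return dists.min(axis=1) ** gamma                    # nearest boundary point
```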

6.2 Density Truncated by the Boundary of the United States

Figure 1: Density estimation when the truncation boundary is the border of the U.S., as described in Section 6.2. Top: example of increasing the number of boundary points $m$. Bottom: across 256 seeds for each value of $m$, mean estimation error with standard error bars for the mean of a 2D Gaussian, for TKSD and TruncSM as $m$ increases.

Let us consider the complicated boundary of the United States (U.S.). Let ๐‘‰ be the interior of the U.S. and let โˆ‚ ๐‘‰ ~ be a set of coordinates (longitude and latitude) which define the countryโ€™s borders. The boundary of the U.S. is a highly irregular shape, and as such, there is no explicit expression of the boundary in this case. TruncSM and bd-KSD must use an approximate distance function (given by Equation 28), whereas TKSD can readily use the set of coordinates that give rise to the boundary.

The experiment is set up as follows. Let $\boldsymbol{\mu}^\star = [-115, 35]$ and $\boldsymbol{\Sigma} = 10 \cdot I_d$. Samples are simulated from $\mathcal{N}(\boldsymbol{\mu}^\star, \boldsymbol{\Sigma})$ and we select only those which are in the interior of the U.S. until we reach $n = 400$ points. Assuming $\boldsymbol{\Sigma}$ is known, we estimate $\boldsymbol{\mu}^\star$ with $\hat{\boldsymbol{\mu}}$ using TKSD and compare it to the estimation using TruncSM. We also vary $m$ by uniformly sampling from the perimeter of the U.S., demonstrated in Figure 1 (top).

Figure 1 (bottom) shows the mean and standard error of the $\ell_2$ estimation error between $\boldsymbol{\mu}^\star$ and $\hat{\boldsymbol{\mu}}$, measured over 256 trials for each value of $m$. TruncSM improves with higher values of $m$ as the approximate distance function becomes more accurate, whilst TKSD performs significantly better with fewer boundary points.

6.3 Estimation Error and Dimensionality

Figure 2: Mean estimation error across 256 seeds, with standard error bars, as dimension $d$ increases (left) and runtime for each method (right). The truncation domain is the $\ell_2$ ball of radius $d^{0.53}$ (top) and the $\ell_1$ ball of radius $d$ (bottom).

Consider a simple experiment setup where $\boldsymbol{\mu}^\star_d = \mathbf{1}_d \cdot 0.5$, samples are simulated from $\mathcal{N}(\boldsymbol{\mu}^\star_d, \boldsymbol{I}_d)$ and are truncated to lie within the $\ell_p$ ball for $p = 1$ and $p = 2$, each of radius $r$, until we reach $n = 300$ data points. We choose radii $r = d^{0.98}$ for $p = 1$ and $r = d^{0.53}$ for $p = 2$, chosen so that the proportion of points truncated across dimensions is roughly 50% (see Section B.7 for details). We estimate $\boldsymbol{\mu}^\star_d$ with $\hat{\boldsymbol{\mu}}$, and measure the $\ell_2$ estimation error against $\boldsymbol{\mu}^\star_d$ as $d$ increases. Each boundary point $\boldsymbol{x}' \in \partial \tilde{V}$ is simulated from $\mathcal{N}(\mathbf{0}_d, \boldsymbol{I}_d)$, normalised by $\| \boldsymbol{x}' \|_p$, and multiplied by $r$, resulting in a set of random points on the boundary.
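The boundary-sampling step described above (normalise a Gaussian draw by its $\ell_p$ norm, then scale by $r$) can be sketched as follows (our helper name):

```python
import numpy as np

def sample_lp_boundary(m, d, p, r, rng=None):
    """Draw m points on the ell_p sphere of radius r in R^d:
    z ~ N(0, I_d), then r * z / ||z||_p."""
    rng = np.random.default_rng(rng)
    Z = rng.normal(size=(m, d))
    norms = (np.abs(Z) ** p).sum(axis=1) ** (1 / p)   # ell_p norm of each row
    return r * Z / norms[:, None]
```

For $p = 2$ this gives points distributed uniformly on the sphere; for general $p$ the draws lie exactly on the $\ell_p$ sphere but are not uniform over it.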

Figure 2 shows the error and computation time for the following estimators:

- TKSD: our method as described in Section 5, using a randomly sampled $\partial \tilde{V}$.
- TruncSM / bd-KSD (exact): the implementations by Liu et al. (2022) and Xu (2022) respectively, where the distance function is computed exactly using the known boundaries.
- TruncSM / bd-KSD (approximate): the implementations by Liu et al. (2022) and Xu (2022) respectively, with the distance function given by Equation 28, using the same $\partial \tilde{V}$ as given to TKSD.

For the $\ell_2$ case, TKSD marginally outperforms all competitors across all dimensions, at only a slight computational expense. For the $\ell_1$ case, TKSD, bd-KSD and TruncSM have similar estimation errors across all dimensions. However, TruncSM (exact) and bd-KSD (exact) have an increasing computation time due to the costly evaluation of the distance function. In this implementation, we follow the advice of Liu et al. (2022), Section 7, where the distance function to the $\ell_1$ ball is calculated via a closed-form expression, whose computational complexity increases combinatorially with dimension.

TruncSM (approximate) and bd-KSD (approximate) have significantly higher estimation error than the other methods across all benchmarks. TKSD is able to achieve the same level of accuracy as the exact methods, using the same finite set of boundary points, $\partial \tilde{V}$.

7Discussion

We have proposed an alternative to the classical Stein class, called an approximate Stein class, bounded by a decreasing sequence instead of being strictly equal to zero. By maximising the KSD objective over this approximate Stein class, we have constructed a truncated density estimator based on KSD, called truncated KSD (TKSD), and shown that it is consistent. TKSD has advantages over the prior works by Xu (2022) and Liu et al. (2022), as it does not require a functional form of a boundary.

Some limitations of this method include the requirement of selecting hyperparameters, such as the kernel function $k$ and its associated hyperparameters, and the number of samples from the boundary, $m$. The choice of $m$ may depend on the application: a larger value is preferred when the truncation boundary is more complex or the dimension is higher. However, even with the heuristic approaches presented in this paper, TKSD provides competitive results.

In experimental results, we have shown that even though we assume no access to a functional form of the boundary, TKSD performs similarly to previous methods which require the boundary to be computed. In some scenarios, TKSD performs better than these methods, or takes less time to achieve the same result.

Reproducibility

All results in this paper can be reproduced using the GitHub repository located at https://github.com/dannyjameswilliams/tksd.

Acknowledgements

We thank all four reviewers for their insightful feedback and suggestions to improve the paper. We are particularly grateful for their recommendations of additional experiments. We would also like to thank Jake Spiteri, Jack Simons, Michael Whitehouse, Mingxuan Yi and Dom Owens, for their helpful input throughout the development of this work.

Daniel J. Williams was supported by a PhD studentship from the EPSRC Centre for Doctoral Training in Computational Statistics and Data Science.

References

Barp, A., Briol, F.-X., Duncan, A., Girolami, M., and Mackey, L. Minimum Stein discrepancy estimators. In Advances in Neural Information Processing Systems, volume 32, 2019.

Boyd, S. and Vandenberghe, L. Convex Optimization. Cambridge University Press, 2004.

Chen, L. H., Goldstein, L., and Shao, Q.-M. Normal Approximation by Stein's Method, volume 2. Springer, 2011.

Chwialkowski, K., Strathmann, H., and Gretton, A. A kernel test of goodness of fit. In Proceedings of The 33rd International Conference on Machine Learning, volume 48 of Proceedings of Machine Learning Research, pp. 2606–2615. PMLR, 2016.

Gorham, J. and Mackey, L. Measuring sample quality with Stein's method. In Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc., 2015.

Gorham, J., Raj, A., and Mackey, L. Stochastic Stein discrepancies. In Advances in Neural Information Processing Systems, volume 33, pp. 17931–17942. Curran Associates, Inc., 2020.

Gutmann, M. and Hyvärinen, A. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In Proceedings of the 13th International Conference on Artificial Intelligence and Statistics, volume 9 of Proceedings of Machine Learning Research, pp. 297–304. PMLR, 2010.

Gutmann, M. U. and Hyvärinen, A. Noise-contrastive estimation of unnormalized statistical models, with applications to natural image statistics. Journal of Machine Learning Research, 13(11):307–361, 2012.

Hyvärinen, A. Estimation of non-normalized statistical models by score matching. Journal of Machine Learning Research, 6(24):695–709, 2005.

Hyvärinen, A. Some extensions of score matching. Computational Statistics & Data Analysis, 51(5):2499–2512, 2007.

Jitkrittum, W., Xu, W., Szabo, Z., Fukumizu, K., and Gretton, A. A linear-time kernel goodness-of-fit test. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017.

Kalos, M. H. and Whitlock, P. A. Monte Carlo Methods. John Wiley & Sons, 2009.

Kanagawa, H., Jitkrittum, W., Mackey, L., Fukumizu, K., and Gretton, A. A kernel Stein test for comparing latent variable models. arXiv preprint arXiv:1907.00586, 2019.

Krishnamoorthy, A. and Menon, D. Matrix inversion using Cholesky decomposition. In 2013 Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA), pp. 70–72. IEEE, 2013.

Lin, L., Drton, M., and Shojaie, A. Estimation of high-dimensional graphical models using regularized score matching. Electronic Journal of Statistics, 10(1):806, 2016.

Liu, Q., Lee, J., and Jordan, M. A kernelized Stein discrepancy for goodness-of-fit tests. In Proceedings of The 33rd International Conference on Machine Learning, volume 48 of Proceedings of Machine Learning Research, pp. 276–284. PMLR, 2016.

Liu, S., Kanamori, T., and Williams, D. J. Estimating density models with truncation boundaries using score matching. Journal of Machine Learning Research, 23(186):1–38, 2022.

Pang, T., Xu, K., Li, C., Song, Y., Ermon, S., and Zhu, J. Efficient learning of generative models via finite-difference score matching. In Advances in Neural Information Processing Systems, volume 33, pp. 19175–19188, 2020.

Serfling, R. J. Approximation Theorems of Mathematical Statistics. John Wiley & Sons, 2009.

Sharrock, L., Simons, J., Liu, S., and Beaumont, M. Sequential neural score estimation: Likelihood-free inference with conditional score based diffusion models. arXiv preprint arXiv:2210.04872, 2022.

Shi, J., Liu, C., and Mackey, L. Sampling with mirrored Stein operators. arXiv preprint arXiv:2106.12506, 2021.

Song, Y. and Ermon, S. Generative modeling by estimating gradients of the data distribution. In Advances in Neural Information Processing Systems, volume 32, 2019.

Song, Y. and Kingma, D. P. How to train your energy-based models. arXiv preprint arXiv:2101.03288, 2021.

Song, Y., Sohl-Dickstein, J., Kingma, D. P., Kumar, A., Ermon, S., and Poole, B. Score-based generative modeling through stochastic differential equations. In International Conference on Learning Representations, 2021.

Stein, C. A bound for the error in the normal approximation to the distribution of a sum of dependent random variables. In Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability, volume 2: Probability Theory, pp. 583–602. University of California Press, 1972.

Steinwart, I. and Christmann, A. Support Vector Machines. Springer Science & Business Media, 2008.

UCLA: Statistical Consulting Group. Truncated regression: Stata data analysis examples, 2022. URL https://stats.oarc.ucla.edu/stata/dae/truncated-regression/. Accessed March 17, 2023.

Williams, D. J. and Liu, S. Score matching for truncated density estimation on a manifold. In Topological, Algebraic and Geometric Learning Workshops 2022, pp. 312–321. PMLR, 2022.

Wu, S., Diao, E., Elkhalil, K., Ding, J., and Tarokh, V. Score-based hypothesis testing for unnormalized models. IEEE Access, 10:71936–71950, 2022.

Xu, W. Standardisation-function kernel Stein discrepancy: A unifying view on kernel Stein discrepancy tests for goodness-of-fit. In International Conference on Artificial Intelligence and Statistics, pp. 1575–1597. PMLR, 2022.

Xu, W. and Matsuda, T. Interpretable Stein goodness-of-fit tests on Riemannian manifold. In Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pp. 11502–11513. PMLR, 2021.

Yang, J., Liu, Q., Rao, V., and Neville, J. Goodness-of-fit testing for discrete distributions via Stein discrepancy. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 5561–5570. PMLR, 2018.

Yu, M., Kolar, M., and Gupta, V. Statistical inference for pairwise graphical models using score matching. In Advances in Neural Information Processing Systems, volume 29, 2016.

Yu, S., Drton, M., and Shojaie, A. Generalized score matching for non-negative data. Journal of Machine Learning Research, 20(76):1–70, 2019.

Yu, S., Drton, M., and Shojaie, A. Generalized score matching for general domains. Information and Inference: A Journal of the IMA, 2021.

Appendix A Proofs and Additional Theoretical Results

A.1 Latent Variable Approximate Stein Identity

Firstly, we show that the score function, $\boldsymbol{\psi}_q(\boldsymbol{x})$, is given by

$$\boldsymbol{\psi}_q(\boldsymbol{x}) = \frac{\nabla_{\boldsymbol{x}} q(\boldsymbol{x})}{q(\boldsymbol{x})} = \int_{\mathbb{R}^d} \frac{\nabla_{\boldsymbol{x}} \big( q(\boldsymbol{x} | \boldsymbol{z})\, q(\boldsymbol{z}) \big)}{q(\boldsymbol{x})} \frac{q(\boldsymbol{x} | \boldsymbol{z})}{q(\boldsymbol{x} | \boldsymbol{z})}\, d\boldsymbol{z} = \int_{\mathbb{R}^d} \frac{q(\boldsymbol{x} | \boldsymbol{z})\, q(\boldsymbol{z})}{q(\boldsymbol{x})} \left[ \frac{\nabla_{\boldsymbol{x}} q(\boldsymbol{x} | \boldsymbol{z})}{q(\boldsymbol{x} | \boldsymbol{z})} \right] d\boldsymbol{z} = \int_{\mathbb{R}^d} q(\boldsymbol{z} | \boldsymbol{x})\, \nabla_{\boldsymbol{x}} \log q(\boldsymbol{x} | \boldsymbol{z})\, d\boldsymbol{z} = \mathbb{E}_{\boldsymbol{z} \sim q(\boldsymbol{z} | \boldsymbol{x})} \big[ \boldsymbol{\psi}_q(\boldsymbol{x} | \boldsymbol{z}) \big], \tag{29}$$

where $\boldsymbol{\psi}_q(\boldsymbol{x} | \boldsymbol{z}) = \nabla_{\boldsymbol{x}} \log q(\boldsymbol{x} | \boldsymbol{z})$. To evaluate the expectation over $q(\boldsymbol{z} | \boldsymbol{x})$, Kanagawa et al. (2019) recommend approximating each element with a Monte Carlo estimate,

$$\mathbb{E}_{\boldsymbol{z} \sim q(\boldsymbol{z} | \boldsymbol{x})} \big[ \psi_{q,j}(\boldsymbol{x} | \boldsymbol{z}) \big] = \frac{1}{m} \sum_{i=1}^{m} \psi_{q,j}(\boldsymbol{x} | \boldsymbol{z}_i) + O_P(\varepsilon_m), \tag{30}$$

where $\psi_{q,j}(\boldsymbol{x} | \boldsymbol{z}) = \partial_{x_j} \log q(\boldsymbol{x} | \boldsymbol{z})$, we assume access to $m$ samples $\{ \boldsymbol{z}_i \}_{i=1}^{m} \sim q(\boldsymbol{z} | \boldsymbol{x})$, and $O_P(\varepsilon_m)$ is the Monte Carlo approximation error, which decreases as the number of samples, $m$, increases.
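Equation 30 is a plain Monte Carlo average of conditional scores; a minimal sketch (our hypothetical helper, not from Kanagawa et al.'s code):

```python
import numpy as np

def marginal_score_mc(x, z_samples, cond_score):
    """Eq. 30: approximate psi_q(x) = E_{z|x}[grad_x log q(x|z)]
    by averaging the conditional score over m posterior samples z_i."""
    return np.mean([cond_score(x, z) for z in z_samples], axis=0)
```

For instance, for a location model $q(x|z) = \mathcal{N}(z, 1)$, the conditional score is $z - x$, and the average over posterior draws of $z$ estimates the marginal score.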

We show that this existing modification of Stein discrepancies gives rise to an approximate Stein class. Suppose $\boldsymbol{f} \in \mathcal{F}^d$, where $\mathcal{F}^d$ is a regular Stein class. Stein's identity with the score function from Equation 29 can be written as

$$\mathbb{E}_q \big[ \mathcal{T}_q \boldsymbol{f}(\boldsymbol{x}) \big] = \mathbb{E}_q \Big[ \sum_{j=1}^{d} \psi_{q,j}(\boldsymbol{x})\, f_j(\boldsymbol{x}) + \partial_{x_j} f_j(\boldsymbol{x}) \Big] = \mathbb{E}_q \Big[ \sum_{j=1}^{d} \mathbb{E}_{\boldsymbol{z} \sim q(\boldsymbol{z} | \boldsymbol{x})} \big[ \psi_{q,j}(\boldsymbol{x} | \boldsymbol{z}) \big] f_j(\boldsymbol{x}) + \partial_{x_j} f_j(\boldsymbol{x}) \Big].$$

Substituting (30) into the above gives

$$\begin{aligned} \mathbb{E}_q \big[ \mathcal{T}_q \boldsymbol{f}(\boldsymbol{x}) \big] &= \mathbb{E}_q \Big[ \sum_{j=1}^{d} \Big\{ \Big( \frac{1}{m} \sum_{i=1}^{m} \psi_{q,j}(\boldsymbol{x} | \boldsymbol{z}_i) + O_P(\varepsilon_m) \Big) f_j(\boldsymbol{x}) + \partial_{x_j} f_j(\boldsymbol{x}) \Big\} \Big] \\ &= \mathbb{E}_q \Big[ \sum_{j=1}^{d} \Big\{ \frac{1}{m} \sum_{i=1}^{m} \psi_{q,j}(\boldsymbol{x} | \boldsymbol{z}_i)\, f_j(\boldsymbol{x}) + \partial_{x_j} f_j(\boldsymbol{x}) + O_P(\varepsilon_m)\, f_j(\boldsymbol{x}) \Big\} \Big] \\ &\overset{(a)}{=} \mathbb{E}_q \Big[ \sum_{j=1}^{d} \Big\{ \frac{1}{m} \sum_{i=1}^{m} \psi_{q,j}(\boldsymbol{x} | \boldsymbol{z}_i)\, f_j(\boldsymbol{x}) + \partial_{x_j} f_j(\boldsymbol{x}) \Big\} \Big] + O_P(\varepsilon_m) \\ &\overset{(b)}{=} O_P(\varepsilon_m), \end{aligned}$$

where equality (a) follows from the fact that the error in the Monte Carlo approximation is accounted for by the $O_P(\varepsilon_m)$ term, and equality (b) follows by definition of $\mathcal{F}^d$ being a Stein class. Therefore, $\mathbb{E}_q [ \mathcal{T}_q \boldsymbol{f}(\boldsymbol{x}) ] = O_P(\varepsilon_m)$, and this latent-variable modification uses an approximate Stein class.

A.2Proof of Lemma 5.1

Let ๐’ˆ โˆˆ ๐’ข 0 ๐‘‘ . Begin by writing the expectation as an integral,

๐”ผ ๐‘ž โข [ ๐’ฏ ๐‘ž โข ๐’ˆ โข ( ๐’™ ) ]

โˆซ ๐‘‰ ๐‘ž โข ( ๐’™ ) โข ๐’ฏ ๐‘ž โข ๐’ˆ โข ( ๐’™ ) โข ๐‘‘ ๐’™

โˆ‘ ๐‘™

1 ๐‘‘ โˆซ ๐‘‰ ๐‘ž โข ( ๐’™ ) โข โˆ‚ ๐‘ฅ ๐‘™ log โก ๐‘ž โข ( ๐’™ ) โข ๐‘” ๐‘™ โข ( ๐’™ ) + โˆ‚ ๐‘ฅ ๐‘™ ๐‘” ๐‘™ โข ( ๐’™ ) โข ๐‘‘ โข ๐’™

โˆ‘ ๐‘™

1 ๐‘‘ โˆซ ๐‘‰ โˆ‚ ๐‘ฅ ๐‘™ ๐‘ž โข ( ๐’™ ) โข ๐‘” ๐‘™ โข ( ๐’™ ) + โˆ‚ ๐‘ฅ ๐‘™ ๐‘” ๐‘™ โข ( ๐’™ ) โข ๐‘‘ โข ๐’™ .

This can be expanded by integration by parts,

โˆ‘ ๐‘™

1 ๐‘‘ โˆซ ๐‘‰ โˆ‚ ๐‘ฅ ๐‘™ ๐‘ž โข ( ๐’™ ) โข ๐‘” ๐‘™ โข ( ๐’™ ) + โˆ‚ ๐‘ฅ ๐‘™ ๐‘” ๐‘™ โข ( ๐’™ ) โข ๐‘‘ โข ๐’™

โˆ‘ ๐‘™

1 ๐‘‘ โˆฎ โˆ‚ ๐‘‰ ๐‘ž โข ( ๐’™ ) โข ๐‘” ๐‘™ โข ( ๐’™ ) โข u ^ ๐‘™ โข ( ๐’™ ) โข ๐‘‘ ๐‘  + โˆ‘ ๐‘™

1 ๐‘‘ โˆซ ๐‘‰ ๐‘ž โข ( ๐’™ ) โข ( โˆ‚ ๐‘ฅ ๐‘™ ๐‘” ๐‘™ โข ( ๐’™ ) โˆ’ โˆ‚ ๐‘ฅ ๐‘™ ๐‘” ๐‘™ โข ( ๐’™ ) ) โข ๐‘‘ ๐’™

โˆ‘ ๐‘™

1 ๐‘‘ โˆฎ โˆ‚ ๐‘‰ ๐‘ž โข ( ๐’™ ) โข ๐‘” ๐‘™ โข ( ๐’™ ) โข u ^ ๐‘™ โข ( ๐’™ ) โข ๐‘‘ ๐‘ 

0 ,

where the final equality comes from all evaluations ๐‘” ๐‘™ โข ( ๐’™ โ€ฒ )

0 โข โˆ€ ๐’™ โ€ฒ โˆˆ โˆ‚ ๐‘‰ .

A.3Proof of Lemma 5.2.

Let ๐’ˆ โˆˆ ๐’ข 0 ๐‘‘ and ๐’ˆ ~ โˆˆ ๐’ข 0 , ๐‘š ๐‘‘ . First note that ๐’ˆ and ๐’ˆ ~ agree on โˆ‚ ๐‘‰ ~ , i.e.

๐‘” ๐‘™ โข ( ๐’™ ~ โ€ฒ )

๐‘” ~ ๐‘™ โข ( ๐’™ ~ โ€ฒ ) , โˆ€ ๐‘™

1 , โ€ฆ , ๐‘‘

(31)

for all ๐’™ ~ โ€ฒ โˆˆ โˆ‚ ๐‘‰ ~ , since all ๐’™ ~ โ€ฒ are elements of โˆ‚ ๐‘‰ also, as โˆ‚ ๐‘‰ ~ โŠ‚ โˆ‚ ๐‘‰ . First, we note that since ๐‘” ๐‘™ and ๐‘” ~ ๐‘™ are both Lipschitz continuous, then

| ๐‘” ๐‘™ โข ( ๐’™ โ€ฒ ) โˆ’ ๐‘” ๐‘™ โข ( ๐’™ ~ โ€ฒ ) |

โ‰ค ๐ถ 1 โข โ€– ๐’™ โ€ฒ โˆ’ ๐’™ ~ โ€ฒ โ€– โ‰ค ๐ถ 1 โข ๐œ€ ๐‘š

(32)

| ๐‘” ~ ๐‘™ โข ( ๐’™ ~ โ€ฒ ) โˆ’ ๐‘” ~ ๐‘™ โข ( ๐’™ โ€ฒ ) |

โ‰ค ๐ถ 2 โข โ€– ๐’™ โ€ฒ โˆ’ ๐’™ ~ โ€ฒ โ€– โ‰ค ๐ถ 2 โข ๐œ€ ๐‘š ,

(33)

where โ€– ๐’™ โ€ฒ โˆ’ ๐’™ ~ โ€ฒ โ€– โ‰ค ๐œ€ ๐‘š . This follows under the assumption that โˆ‚ ๐‘‰ ~ is ๐œ€ ๐‘š -dense in โˆ‚ ๐‘‰ , therefore the distance โ€– ๐’™ โ€ฒ โˆ’ ๐’™ ~ โ€ฒ โ€– is at most ๐œ€ ๐‘š .

We seek to quantify

$$| g_l(\boldsymbol{x}') - \tilde{g}_l(\boldsymbol{x}') |,$$

which is how far apart the 'approximate' $\tilde{g}_l$ is from the 'true' $g_l$ for any point $\boldsymbol{x}' \in \partial V$. Note that this includes points that are not in $\partial \tilde{V}$. Write

$$g_l(\boldsymbol{x}') - \tilde{g}_l(\boldsymbol{x}') = \big( g_l(\boldsymbol{x}') - g_l(\tilde{\boldsymbol{x}}') \big) + \big( g_l(\tilde{\boldsymbol{x}}') - \tilde{g}_l(\tilde{\boldsymbol{x}}') \big) + \big( \tilde{g}_l(\tilde{\boldsymbol{x}}') - \tilde{g}_l(\boldsymbol{x}') \big),$$

and by (31), the middle term vanishes, leaving

$$g_l(\boldsymbol{x}') - \tilde{g}_l(\boldsymbol{x}') = \big( g_l(\boldsymbol{x}') - g_l(\tilde{\boldsymbol{x}}') \big) + \big( \tilde{g}_l(\tilde{\boldsymbol{x}}') - \tilde{g}_l(\boldsymbol{x}') \big).$$

Using (32), (33) and the triangle inequality, we have

$$| g_l(\boldsymbol{x}') - \tilde{g}_l(\boldsymbol{x}') | \leq | g_l(\boldsymbol{x}') - g_l(\tilde{\boldsymbol{x}}') | + | \tilde{g}_l(\tilde{\boldsymbol{x}}') - \tilde{g}_l(\boldsymbol{x}') | \leq (C_1 + C_2)\, \varepsilon_m.$$

Therefore, we have

$$| g_l(\boldsymbol{x}') - \tilde{g}_l(\boldsymbol{x}') | = O_P(\varepsilon_m),$$

as desired.

A.4Proof of Proposition 5.3

First, let $\mathrm{L}(V)$ denote the $(d-1)$-dimensional surface area of a bounded domain $V \subset \mathbb{R}^d$, with $\mathrm{L}(V) < \infty$. For example, in 2D, $\mathrm{L}(V)$ corresponds to the length of the curve $\partial V$. We also define $B_{\varepsilon_m}(\boldsymbol{x}')$ as the ball of radius $\varepsilon_m$ centred on $\boldsymbol{x}'$.

Before continuing to the proof, recall the definition of an $\varepsilon$-dense set: $\forall \boldsymbol{x}' \in \partial V$, $\exists \tilde{\boldsymbol{x}}' \in \partial \tilde{V}$ such that $d(\boldsymbol{x}', \tilde{\boldsymbol{x}}') \leq \varepsilon$, where $d$ is some measure of distance. The statement $d(\boldsymbol{x}', \tilde{\boldsymbol{x}}') \leq \varepsilon_m$ is equivalent to $\tilde{\boldsymbol{x}}' \in B_{\varepsilon_m}(\boldsymbol{x}')$. Now set the probability that $\partial \tilde{V}$ is $\varepsilon_m$-dense in $\partial V$ equal to 0.95:

$$0.95 = \mathbb{P}\big( \forall \boldsymbol{x}' \in \partial V,\ \exists \tilde{\boldsymbol{x}}' \in \partial \tilde{V},\ \tilde{\boldsymbol{x}}' \in B_{\varepsilon_m}(\boldsymbol{x}') \big) = 1 - \mathbb{P}\big( \exists \boldsymbol{x}' \in \partial V,\ \forall \tilde{\boldsymbol{x}}' \in \partial \tilde{V},\ \tilde{\boldsymbol{x}}' \notin B_{\varepsilon_m}(\boldsymbol{x}') \big) = 1 - \mathbb{P}\Big( \bigcup_{\boldsymbol{x}' \in \partial V} \forall \tilde{\boldsymbol{x}}' \in \partial \tilde{V},\ \tilde{\boldsymbol{x}}' \notin B_{\varepsilon_m}(\boldsymbol{x}') \Big).$$

Equivalently, by rearranging the above, we have

$$\mathbb{P}\Big( \bigcup_{\boldsymbol{x}' \in \partial V} \forall \tilde{\boldsymbol{x}}' \in \partial \tilde{V},\ \tilde{\boldsymbol{x}}' \notin B_{\varepsilon_m}(\boldsymbol{x}') \Big) = 0.05.$$

This probability can be bounded as follows:

$$0.05 = \mathbb{P}\Big( \bigcup_{\boldsymbol{x}' \in \partial V} \forall \tilde{\boldsymbol{x}}' \in \partial \tilde{V},\ \tilde{\boldsymbol{x}}' \notin B_{\varepsilon_m}(\boldsymbol{x}') \Big) \geq \mathbb{P}\big( \forall \tilde{\boldsymbol{x}}' \in \partial \tilde{V},\ \tilde{\boldsymbol{x}}' \notin B_{\varepsilon_m}(\boldsymbol{x}'_0) \big),$$

where $\boldsymbol{x}'_0$ is a single point in $\partial V$. This follows by basic laws of probability. Note that the probability on the right-hand side can be written as

$$\mathbb{P}\big( \forall \tilde{\boldsymbol{x}}' \in \partial \tilde{V},\ \tilde{\boldsymbol{x}}' \notin B_{\varepsilon_m}(\boldsymbol{x}'_0) \big) = \mathbb{P}\big( \tilde{\boldsymbol{x}}'_0 \notin B_{\varepsilon_m}(\boldsymbol{x}'_0) \big)^m = \big[ 1 - \mathbb{P}\big( \tilde{\boldsymbol{x}}'_0 \in B_{\varepsilon_m}(\boldsymbol{x}'_0) \big) \big]^m = \bigg[ 1 - \frac{\mathrm{Area}\big( \partial V \cap B_{\varepsilon_m}(\boldsymbol{x}'_0) \big)}{\mathrm{L}(V)} \bigg]^m \leq 0.05, \tag{34}$$

where the first equality comes from all $\tilde{\boldsymbol{x}}' \in \partial \tilde{V}$ being independently distributed. $\mathrm{Area}(S)$ denotes the surface area of the boundary set $S$; for example, $\mathrm{Area}(\partial V) = \mathrm{L}(V)$. Therefore $\mathrm{Area}(\partial V \cap B_{\varepsilon_m}(\boldsymbol{x}'_0))$ represents the size of the region of $\partial V$ that is inside $B_{\varepsilon_m}(\boldsymbol{x}'_0)$, which will be the area of the boundary hypersurface of $\partial V$ which passes through $B_{\varepsilon_m}(\boldsymbol{x}'_0)$. We obtain the probability in the final equality by assuming that all $\tilde{\boldsymbol{x}}'$ are uniformly sampled from $\partial V$, so the probability is the proportion of $\partial V$ inside $B_{\varepsilon_m}(\boldsymbol{x}'_0)$ as a ratio of the full $\partial V$. This probability exists under the assumption that $\mathrm{L}(V) < \infty$.

Next, we can rearrange Equation 34 to

$$\mathrm{Area}\big( \partial V \cap B_{\varepsilon_m}(\boldsymbol{x}'_0) \big) \geq \mathrm{L}(V) \big[ 1 - 0.05^{1/m} \big].$$

Consider an upper bound for $\mathrm{Area}(\partial V \cap B_{\varepsilon_m}(\boldsymbol{x}'_0))$ given by the volume of $B_{\varepsilon_m}(\boldsymbol{x}'_0)$, i.e.

$$\mathrm{Area}\big( \partial V \cap B_{\varepsilon_m}(\boldsymbol{x}'_0) \big) \leq \xi(d)\, \varepsilon_m^d, \qquad \xi(d) = \frac{\pi^{d/2}}{\Gamma\big( \frac{d}{2} + 1 \big)},$$

where $\Gamma$ is the Gamma function. Combining these bounds gives

$$\xi(d)\, \varepsilon_m^d \geq \mathrm{L}(V) \big[ 1 - 0.05^{1/m} \big],$$

and therefore

$$\varepsilon_m \geq \bigg( \frac{\mathrm{L}(V)}{\xi(d)} \big[ 1 - 0.05^{1/m} \big] \bigg)^{1/d},$$

which completes the proof.
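The final bound is straightforward to evaluate numerically; a small sketch (our illustration, with $\delta = 0.05$ as in the proof) using only the standard library:

```python
import math

def eps_lower_bound(m, d, boundary_area, delta=0.05):
    """Evaluate (L(V)/xi(d) * (1 - delta**(1/m)))**(1/d) from the proof above."""
    xi = math.pi ** (d / 2) / math.gamma(d / 2 + 1)   # volume of the unit d-ball
    return (boundary_area / xi * (1 - delta ** (1 / m))) ** (1 / d)
```

Since $1 - 0.05^{1/m} \approx \log(20)/m$ for large $m$, the bound decays at rate $m^{-1/d}$, so denser boundary samples are needed in higher dimensions.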

A.5Proof of Theorem 5.5

Let ๐’ˆ โˆˆ ๐’ข 0 ๐‘‘ and ๐’ˆ ~ โˆˆ ๐’ข 0 , ๐‘š ๐‘‘ . Begin with writing the expectation in its integral form,

๐”ผ ๐‘ž โข [ ๐’ฏ ๐‘ž โข ๐’ˆ ~ โข ( ๐’™ ) ]

โˆซ ๐‘‰ ๐‘ž โข ( ๐’™ ) โข [ โˆ‘ ๐‘™

1 ๐‘‘ โˆ‚ ๐‘ฅ ๐‘™ log โก ๐‘ž โข ( ๐’™ ) โข ๐‘” ~ ๐‘™ โข ( ๐’™ ) + โˆ‘ ๐‘™

1 ๐‘‘ โˆ‚ ๐‘ฅ ๐‘™ ๐‘” ~ ๐‘™ โข ( ๐’™ ) ] โข ๐‘‘ ๐’™

โˆ‘ ๐‘™

1 ๐‘‘ โˆซ ๐‘‰ ๐‘ž โข ( ๐’™ ) โข โˆ‚ ๐‘ฅ ๐‘™ log โก ๐‘ž โข ( ๐’™ ) โข ๐‘” ~ ๐‘™ โข ( ๐’™ ) โข ๐‘‘ โข ๐’™ + โˆ‘ ๐‘™

1 ๐‘‘ โˆซ ๐‘‰ ๐‘ž โข ( ๐’™ ) โข โˆ‚ ๐‘ฅ ๐‘™ ๐‘” ~ ๐‘™ โข ( ๐’™ ) โข ๐‘‘ โข ๐’™

(a) โˆ‘ ๐‘™

1 ๐‘‘ โˆซ ๐‘‰ โˆ‚ ๐‘ฅ ๐‘™ ๐‘ž โข ( ๐’™ ) โข ๐‘” ~ ๐‘™ โข ( ๐’™ ) โข ๐‘‘ โข ๐’™ + โˆ‘ ๐‘™

1 ๐‘‘ โˆซ ๐‘‰ ๐‘ž โข ( ๐’™ ) โข โˆ‚ ๐‘ฅ ๐‘™ ๐‘” ~ ๐‘™ โข ( ๐’™ ) โข ๐‘‘ โข ๐’™

(b) โˆ‘ ๐‘™

1 ๐‘‘ โˆฎ โˆ‚ ๐‘‰ ๐‘ž โข ( ๐’™ ) โข ๐‘” ~ ๐‘™ โข ( ๐’™ ) โข u ^ ๐‘™ โข ( ๐’™ ) โข ๐‘‘ ๐‘  โˆ’ โˆ‘ ๐‘™

1 ๐‘‘ โˆซ ๐‘‰ ๐‘ž โข ( ๐’™ ) โข โˆ‚ ๐‘ฅ ๐‘™ ๐‘” ~ ๐‘™ โข ( ๐’™ ) โข ๐‘‘ โข ๐’™ + โˆ‘ ๐‘™

1 ๐‘‘ โˆซ ๐‘‰ ๐‘ž โข ( ๐’™ ) โข โˆ‚ ๐‘ฅ ๐‘™ ๐‘” ~ ๐‘™ โข ( ๐’™ ) โข ๐‘‘ โข ๐’™

โˆ‘ ๐‘™

1 ๐‘‘ โˆฎ โˆ‚ ๐‘‰ ๐‘ž โข ( ๐’™ ) โข ๐‘” ~ ๐‘™ โข ( ๐’™ ) โข u ^ ๐‘™ โข ( ๐’™ ) โข ๐‘‘ ๐‘ 

where โˆฎ โˆ‚ ๐‘‰ is the surface integral over the boundary โˆ‚ ๐‘‰ , (a) comes from the identity that ๐‘ž โข ( ๐’™ ) โข โˆ‚ ๐‘ฅ ๐‘™ log โก ๐‘ž โข ( ๐’™ )

โˆ‚ ๐‘ฅ ๐‘™ ๐‘ž โข ( ๐’™ ) , and (b) is from integration by parts. Substitute ๐‘” ~ ๐‘™ โข ( ๐’™ )

๐‘” ~ ๐‘™ โข ( ๐’™ ) + ๐‘” ๐‘™ โข ( ๐’™ ) โˆ’ ๐‘” ๐‘™ โข ( ๐’™ ) to obtain

โˆ‘ ๐‘™

1 ๐‘‘ โˆฎ โˆ‚ ๐‘‰ ๐‘ž โข ( ๐’™ ) โข ( ๐‘” ~ ๐‘™ โข ( ๐’™ ) + ๐‘” ๐‘™ โข ( ๐’™ ) โˆ’ ๐‘” ๐‘™ โข ( ๐’™ ) ) โข u ^ ๐‘™ โข ( ๐’™ ) โข ๐‘‘ ๐‘ 

โˆ‘ ๐‘™

1 ๐‘‘ โˆฎ โˆ‚ ๐‘‰ ๐‘ž โข ( ๐’™ ) โข ( ๐‘” ~ ๐‘™ โข ( ๐’™ ) โˆ’ ๐‘” ๐‘™ โข ( ๐’™ ) ) โข u ^ ๐‘™ โข ( ๐’™ ) โข ๐‘‘ ๐‘  + โˆ‘ ๐‘™

1 ๐‘‘ โˆฎ โˆ‚ ๐‘‰ ๐‘ž โข ( ๐’™ ) โข ๐‘” ๐‘™ โข ( ๐’™ ) โข u ^ ๐‘™ โข ( ๐’™ ) โข ๐‘‘ ๐‘  .

By Lemma 5.2, | ๐‘” ~ ๐‘™ โข ( ๐’™ โ€ฒ ) โˆ’ ๐‘” ๐‘™ โข ( ๐’™ โ€ฒ ) |

๐‘‚ ๐‘ƒ โข ( ๐œ€ ๐‘š ) , leaving

๐”ผ ๐‘ž โข [ ๐’ฏ ๐‘ž โข ๐’ˆ ~ โข ( ๐’™ ) ]

๐‘‚ ๐‘ƒ โข ( ๐œ€ ๐‘š ) + โˆ‘ ๐‘™

1 ๐‘‘ โˆฎ โˆ‚ ๐‘‰ ๐‘ž โข ( ๐’™ ) โข ๐‘” ๐‘™ โข ( ๐’™ ) โข u ^ ๐‘™ โข ( ๐’™ ) โข ๐‘‘ ๐‘ 

As all evaluations of ๐‘” ๐‘™ โข ( ๐’™ )

0 โข โˆ€ ๐’™ โˆˆ โˆ‚ ๐‘‰ by definition of ๐’ข 0 ๐‘‘ , the second term equals zero, leaving only

๐”ผ ๐‘ž โข [ ๐’ฏ ๐‘ž โข ๐’ˆ ~ โข ( ๐’™ ) ]

๐‘‚ ๐‘ƒ โข ( ๐œ€ ๐‘š )

as desired.

A.6Proof of Theorem 5.6

Recall $\mathcal{G}^d_{0,m} = \{\boldsymbol{g}\in\mathcal{G}^d \,|\, \boldsymbol{g}(\boldsymbol{x}') = \boldsymbol{0}\ \forall\,\boldsymbol{x}'\in\partial\tilde{V},\ \|\boldsymbol{g}\|^2_{\mathcal{G}^d}\le 1\}$, where $\mathcal{G}^d$ is the product RKHS with $d$ elements, $\boldsymbol{g} = (g_1,\dots,g_d)$, and $g_l\in\mathcal{G}$ for all $l = 1,\dots,d$.

Begin with the definition of TKSD in Equation 21,

$$\sup_{\boldsymbol{g}\in\mathcal{G}^d_{0,m}} \mathbb{E}_q[\mathcal{T}_{p_{\boldsymbol{\theta}}}\boldsymbol{g}(\boldsymbol{x})].$$

To solve this supremum analytically, we can reframe it as a constrained maximisation problem,

$$\max_{\boldsymbol{g}\in\mathcal{G}^d}\ \mathbb{E}_q[\mathcal{T}_{p_{\boldsymbol{\theta}}}\boldsymbol{g}(\boldsymbol{x})] \quad\text{subject to}\quad g_l(\boldsymbol{x}') = 0\ \forall\,\boldsymbol{x}'\in\partial\tilde{V},\ \forall\,l = 1,\dots,d,\quad \|\boldsymbol{g}\|_{\mathcal{G}^d}\le 1, \tag{35}$$

where the constraints in the definition of $\mathcal{G}^d_{0,m}$ have been included as optimisation constraints, and the maximisation is now with respect to the RKHS function family $\mathcal{G}^d$ only. To solve this, we can formulate a Lagrangian dual function (Boyd & Vandenberghe, 2004),

$$\inf_{\boldsymbol{\nu}^{(1)},\dots,\boldsymbol{\nu}^{(d)},\lambda}\ \mathcal{L}(\boldsymbol{\theta},\boldsymbol{g},\boldsymbol{\nu}^{(1)},\dots,\boldsymbol{\nu}^{(d)},\lambda), \tag{36}$$

$$\mathcal{L} = \mathcal{L}(\boldsymbol{\theta},\boldsymbol{g},\boldsymbol{\nu}^{(1)},\dots,\boldsymbol{\nu}^{(d)},\lambda) := \mathbb{E}_q[\mathcal{T}_{p_{\boldsymbol{\theta}}}\boldsymbol{g}(\boldsymbol{x})] + \sum_{l=1}^{d}\sum_{i'=1}^{m}\nu^{(l)}_{i'}\,g_l(\boldsymbol{x}'_{i'}) + \lambda\big(\|\boldsymbol{g}\|^2_{\mathcal{G}^d} - 1\big), \tag{37}$$

for $\boldsymbol{\nu}^{(l)}\in\mathbb{R}^m\ \forall\,l$ and $\lambda\ge 0$, where $\mathcal{L}$ is our Lagrangian. The overall optimisation problem that needs solving is given by

$$\min_{\boldsymbol{\nu}^{(1)},\dots,\boldsymbol{\nu}^{(d)},\lambda}\ \max_{\boldsymbol{g}\in\mathcal{G}^d}\ \mathcal{L}(\boldsymbol{\theta},\boldsymbol{g},\boldsymbol{\nu}^{(1)},\dots,\boldsymbol{\nu}^{(d)},\lambda). \tag{38}$$

By solving the dual problem, Equation 38, we solve the primal problem, Equation 35. We can rewrite (37) as

$$\mathcal{L} = \mathbb{E}_q\Big[\sum_{l=1}^{d}\partial_{x_l}\log p_{\boldsymbol{\theta}}(\boldsymbol{x})\,g_l(\boldsymbol{x}) + \partial_{x_l}g_l(\boldsymbol{x})\Big] + \sum_{l=1}^{d}\sum_{i'=1}^{m}\nu^{(l)}_{i'}\,g_l(\boldsymbol{x}'_{i'}) + \lambda\big(\|\boldsymbol{g}\|^2_{\mathcal{G}^d} - 1\big),$$

and expand evaluations of $g_l(\boldsymbol{x})$ via the reproducing property of $\mathcal{G}$, given by $g_l(\boldsymbol{x}) = \langle g_l, k(\boldsymbol{x},\cdot)\rangle_{\mathcal{G}}$, to

$$\begin{aligned}
\mathcal{L} &= \mathbb{E}_q\Big[\sum_{l=1}^{d}\partial_{x_l}\log p_{\boldsymbol{\theta}}(\boldsymbol{x})\,\langle g_l, k(\boldsymbol{x},\cdot)\rangle_{\mathcal{G}} + \langle g_l, \partial_{x_l}k(\boldsymbol{x},\cdot)\rangle_{\mathcal{G}}\Big] + \sum_{i'=1}^{m}\sum_{l=1}^{d}\nu^{(l)}_{i'}\langle g_l, k(\boldsymbol{x}'_{i'},\cdot)\rangle_{\mathcal{G}} + \lambda\Big(\sum_{l=1}^{d}\langle g_l,g_l\rangle_{\mathcal{G}} - 1\Big)\\
&= \sum_{l=1}^{d}\mathbb{E}_q\Big[\Big\langle g_l,\ \partial_{x_l}\log p_{\boldsymbol{\theta}}(\boldsymbol{x})\,k(\boldsymbol{x},\cdot) + \partial_{x_l}k(\boldsymbol{x},\cdot) + \sum_{i'=1}^{m}\nu^{(l)}_{i'}k(\boldsymbol{x}'_{i'},\cdot)\Big\rangle_{\mathcal{G}}\Big] + \lambda\Big(\sum_{l=1}^{d}\langle g_l,g_l\rangle_{\mathcal{G}} - 1\Big)\\
&= \sum_{l=1}^{d}\Big\langle g_l,\ \mathbb{E}_q\big[\partial_{x_l}\log p_{\boldsymbol{\theta}}(\boldsymbol{x})\,k(\boldsymbol{x},\cdot) + \partial_{x_l}k(\boldsymbol{x},\cdot)\big] + \sum_{i'=1}^{m}\nu^{(l)}_{i'}k(\boldsymbol{x}'_{i'},\cdot)\Big\rangle_{\mathcal{G}} + \lambda\Big(\sum_{l=1}^{d}\langle g_l,g_l\rangle_{\mathcal{G}} - 1\Big).
\end{aligned} \tag{39}$$

The final equality holds provided that $\mathbb{E}_q\big[\partial_{x_l}\log p_{\boldsymbol{\theta}}(\boldsymbol{x})\,k(\boldsymbol{x},\cdot) + \partial_{x_l}k(\boldsymbol{x},\cdot) + \sum_{i'=1}^{m}\nu^{(l)}_{i'}k(\boldsymbol{x}'_{i'},\cdot)\big] < \infty$, i.e. the term inside the expectation is Bochner integrable (Steinwart & Christmann, 2008). The same assumption was made in Chwialkowski et al. (2016), as we can consider $\sum_{i'=1}^{m}\nu^{(l)}_{i'}k(\boldsymbol{x}'_{i'},\cdot)$ a constant with respect to the expectation.

We solve for each parameter via differentiation to obtain a closed-form solution. Across dimensions, each $g_l$ in (39) appears only additively alongside the others, so we can consider the $l$-th element of the derivative and solve the inner maximisation for each $g_l$ separately to give a solution for $\boldsymbol{g}$. This differentiation gives

$$\frac{\partial\mathcal{L}}{\partial g_l} = \mathbb{E}_q\big[\partial_{x_l}\log p_{\boldsymbol{\theta}}(\boldsymbol{x})\,k(\boldsymbol{x},\cdot) + \partial_{x_l}k(\boldsymbol{x},\cdot)\big] + \sum_{i'=1}^{m}\nu^{(l)}_{i'}k(\boldsymbol{x}'_{i'},\cdot) + 2\lambda g_l = 0.$$

Rearranging for $g_l$ gives the solution

$$g_l^\star = -\frac{1}{2\lambda}\,\mathbb{E}_q\big[\partial_{x_l}\log p_{\boldsymbol{\theta}}(\boldsymbol{x})\,k(\boldsymbol{x},\cdot) + \partial_{x_l}k(\boldsymbol{x},\cdot)\big] - \frac{1}{2\lambda}\sum_{i'=1}^{m}\nu^{(l)}_{i'}k(\boldsymbol{x}'_{i'},\cdot) = -\frac{z_l}{2\lambda},$$

where for convenience we have denoted $z_l := \mathbb{E}_q\big[\partial_{x_l}\log p_{\boldsymbol{\theta}}(\boldsymbol{x})\,k(\boldsymbol{x},\cdot) + \partial_{x_l}k(\boldsymbol{x},\cdot)\big] + \sum_{i'=1}^{m}\nu^{(l)}_{i'}k(\boldsymbol{x}'_{i'},\cdot)$. Substituting this back into (39) gives

โ„’ โข ( ๐œฝ , ๐’ˆ โ‹† , ๐‚ ( 1 ) , โ€ฆ , ๐‚ ( ๐‘‘ ) , ๐œ† )

โˆ‘ ๐‘™

1 ๐‘‘ โŸจ โˆ’ ๐‘ง ๐‘™ 2 โข ๐œ† , ๐‘ง ๐‘™ โŸฉ ๐’ข + ๐œ† โข ( โˆ‘ ๐‘–

1 ๐‘‘ โŸจ โˆ’ ๐‘ง ๐‘™ 2 โข ๐œ† , โˆ’ ๐‘ง ๐‘™ 2 โข ๐œ† โŸฉ ๐’ข โˆ’ 1 )

โˆ‘ ๐‘™

1 ๐‘‘ 1 4 โข ๐œ† โข โŸจ ๐‘ง ๐‘™ , ๐‘ง ๐‘™ โŸฉ ๐’ข + ๐œ† ,

(40)

where ๐’ˆ โ‹†

( ๐‘” 1 โ‹† , โ€ฆ , ๐‘” ๐‘‘ โ‹† ) . Solve for ๐œ† by differentiating

โˆ‚ โ„’ โˆ‚ ๐œ†

โˆ’ 1 4 โข ๐œ† 2 โข โˆ‘ ๐‘–

1 ๐‘‘ โŸจ ๐‘ง ๐‘™ , ๐‘ง ๐‘™ โŸฉ ๐’ข + 1

0 .

Rearranging for ๐œ† gives ๐œ†

ยฑ โˆ‘ ๐‘™

1 ๐‘‘ โŸจ ๐‘ง ๐‘™ , ๐‘ง ๐‘™ โŸฉ ๐’ข / 4 . Since ๐œ† โ‰ฅ 0 , we take the positive solution, giving ๐œ† โ‹†

โˆ‘ ๐‘™

1 ๐‘‘ โŸจ ๐‘ง ๐‘™ , ๐‘ง ๐‘™ โŸฉ ๐’ข / 2 . Substituting ๐œ† โ‹† into (40) gives

โ„’ โข ( ๐œฝ , ๐’ˆ โ‹† , ๐‚ ( 1 ) , โ€ฆ , ๐‚ ( ๐‘‘ ) , ๐œ† โ‹† )

โˆ‘ ๐‘™

1 ๐‘‘ 1 4 โข ๐œ† โ‹† โข โŸจ ๐‘ง ๐‘™ , ๐‘ง ๐‘™ โŸฉ ๐’ข + ๐œ† โ‹†

โˆ‘ ๐‘™

1 ๐‘‘ โŸจ ๐‘ง ๐‘™ , ๐‘ง ๐‘™ โŸฉ ๐’ข .

(41)

To solve for the final Lagrangian parameters, $\boldsymbol{\nu}^{(1)},\dots,\boldsymbol{\nu}^{(d)}$, let us first introduce some notation. Denote

$$\phi_{\mathbf{x}'} = \big[k(\boldsymbol{x}'_1,\cdot),\ \dots,\ k(\boldsymbol{x}'_m,\cdot)\big]^\top, \qquad \mathbf{L}_{\boldsymbol{z},\mathbf{x}'} = \big[k(\boldsymbol{z},\boldsymbol{x}'_1)\ \ \dots\ \ k(\boldsymbol{z},\boldsymbol{x}'_m)\big], \qquad \mathbf{K}' = \phi_{\mathbf{x}'}\phi_{\mathbf{x}'}^\top,$$

where $\boldsymbol{x}'_1,\dots,\boldsymbol{x}'_m\in\partial\tilde{V}$. Now consider the equivalence

๐‚ ( ๐‘™ ) โ‹† := arg โข min ๐‚ ( ๐‘™ ) โˆ‘ ๐‘™

1 ๐‘‘ โŸจ ๐‘ง ๐‘™ , ๐‘ง ๐‘™ โŸฉ ๐’ข

arg โข min ๐‚ ( ๐‘™ ) โˆ‘ ๐‘™

1 ๐‘‘ โŸจ ๐‘ง ๐‘™ , ๐‘ง ๐‘™ โŸฉ ๐’ข

arg โข min ๐‚ ( ๐‘™ ) โŸจ ๐‘ง ๐‘™ , ๐‘ง ๐‘™ โŸฉ ๐’ข ,

as each ๐‚ ( ๐‘™ ) is independent. By rewriting ๐‘ง ๐‘™ as ๐‘ง ๐‘™

๐”ผ ๐‘ž โข [ โˆ‚ ๐‘ฅ ๐‘™ log โก ๐‘ ๐œฝ โข ( ๐’™ ) โข ๐‘˜ โข ( ๐’™ , โ‹… ) + โˆ‚ ๐‘ฅ ๐‘™ ๐‘˜ โข ( ๐’™ , โ‹… ) ] + ๐‚ ( ๐‘™ ) โŠค โข ๐œ™ ๐ฑ โ€ฒ , we solve this again by differentiating,

๐‘‘ ๐‘‘ โข ๐‚ ( ๐‘™ ) โข โŸจ ๐‘ง ๐‘™ , ๐‘ง ๐‘™ โŸฉ ๐’ข

2 โข โŸจ ๐‘‘ โข ๐‘ง ๐‘™ ๐‘‘ โข ๐‚ ( ๐‘™ ) , ๐‘ง ๐‘™ โŸฉ ๐’ข

2 โข โŸจ ๐œ™ ๐ฑ โ€ฒ , ๐”ผ ๐‘ž โข [ โˆ‚ ๐‘ฅ ๐‘™ log โก ๐‘ ๐œฝ โข ( ๐’™ ) โข ๐‘˜ โข ( ๐’™ , โ‹… ) + โˆ‚ ๐‘ฅ ๐‘™ ๐‘˜ โข ( ๐’™ , โ‹… ) ] + ๐‚ ( ๐‘™ ) โŠค โข ๐œ™ ๐ฑ โ€ฒ โŸฉ ๐’ข

2 โข ๐”ผ ๐‘ž โข [ โˆ‚ ๐‘ฅ ๐‘™ log โก ๐‘ ๐œฝ โข ( ๐’™ ) โข ๐‹ ๐’™ , ๐ฑ โ€ฒ + โˆ‚ ๐‘ฅ ๐‘™ ๐‹ ๐’™ , ๐ฑ โ€ฒ ] + 2 โข ๐‚ ( ๐‘™ ) โŠค โข ๐œ™ ๐ฑ โ€ฒ โข ๐œ™ ๐ฑ โ€ฒ โŠค

2 โข ๐”ผ ๐‘ž โข [ โˆ‚ ๐‘ฅ ๐‘™ log โก ๐‘ ๐œฝ โข ( ๐’™ ) โข ๐‹ ๐’™ , ๐ฑ โ€ฒ + โˆ‚ ๐‘ฅ ๐‘™ ๐‹ ๐’™ , ๐ฑ โ€ฒ ] + 2 โข ๐‚ ( ๐‘™ ) โŠค โข ๐Š โ€ฒ .

Setting equal to zero and rearranging gives

๐‚ ( ๐‘™ ) โ‹†

โˆ’ ( ๐Š โ€ฒ ) โˆ’ 1 โข ๐”ผ ๐‘ž โข [ โˆ‚ ๐‘ฅ ๐‘™ log โก ๐‘ ๐œฝ โข ( ๐’™ ) โข ๐‹ ๐’™ , ๐ฑ โ€ฒ โŠค + ( โˆ‚ ๐‘ฅ ๐‘™ ๐‹ ๐’™ , ๐ฑ โ€ฒ ) โŠค ] .

(42)

Before substituting back into (41), let us first square it and expand it as

$$\mathcal{L}^2 = \sum_{l=1}^{d}\ \mathbb{E}_{\boldsymbol{x}\sim q}\mathbb{E}_{\boldsymbol{y}\sim q}[u_l(\boldsymbol{x},\boldsymbol{y})] + \Big(\mathbb{E}_{\boldsymbol{x}\sim q}\big[\partial_{x_l}\log p_{\boldsymbol{\theta}}(\boldsymbol{x})\,\mathbf{L}_{\boldsymbol{x},\mathbf{x}'}^\top + (\partial_{x_l}\mathbf{L}_{\boldsymbol{x},\mathbf{x}'})^\top\big]\Big)^\top\boldsymbol{\nu}^{(l)} + \boldsymbol{\nu}^{(l)\top}\,\mathbb{E}_{\boldsymbol{y}\sim q}\big[\partial_{y_l}\log p_{\boldsymbol{\theta}}(\boldsymbol{y})\,\mathbf{L}_{\boldsymbol{y},\mathbf{x}'}^\top + (\partial_{y_l}\mathbf{L}_{\boldsymbol{y},\mathbf{x}'})^\top\big] + \boldsymbol{\nu}^{(l)\top}\mathbf{K}'\boldsymbol{\nu}^{(l)}, \tag{43}$$

where $u_l(\boldsymbol{x},\boldsymbol{y}) = \psi_{p,l}(\boldsymbol{x})\psi_{p,l}(\boldsymbol{y})k(\boldsymbol{x},\boldsymbol{y}) + \psi_{p,l}(\boldsymbol{x})\partial_{y_l}k(\boldsymbol{x},\boldsymbol{y}) + \psi_{p,l}(\boldsymbol{y})\partial_{x_l}k(\boldsymbol{x},\boldsymbol{y}) + \partial_{x_l}\partial_{y_l}k(\boldsymbol{x},\boldsymbol{y})$. We can substitute $\boldsymbol{\nu}^{(l)\star}$ from (42) into $\mathcal{L}^2$, denoting $\mathcal{L}_\star^2 := \mathcal{L}(\boldsymbol{\theta},\boldsymbol{g}^\star,\boldsymbol{\nu}^{(1)\star},\dots,\boldsymbol{\nu}^{(d)\star},\lambda)^2$, giving

$$\begin{aligned}
\mathcal{L}_\star^2 = \sum_{l=1}^{d}\ &\mathbb{E}_{\boldsymbol{x}\sim q}\mathbb{E}_{\boldsymbol{y}\sim q}[u_l(\boldsymbol{x},\boldsymbol{y})] + \Big(\mathbb{E}_{\boldsymbol{x}\sim q}\big[\psi_{\boldsymbol{x},l}\mathbf{L}_{\boldsymbol{x},\mathbf{x}'}^\top + (\partial_{x_l}\mathbf{L}_{\boldsymbol{x},\mathbf{x}'})^\top\big]\Big)^\top\Big(-(\mathbf{K}')^{-1}\,\mathbb{E}_{\boldsymbol{y}\sim q}\big[\psi_{\boldsymbol{y},l}\mathbf{L}_{\boldsymbol{y},\mathbf{x}'}^\top + (\partial_{y_l}\mathbf{L}_{\boldsymbol{y},\mathbf{x}'})^\top\big]\Big)\\
&+ \Big(-(\mathbf{K}')^{-1}\,\mathbb{E}_{\boldsymbol{x}\sim q}\big[\psi_{\boldsymbol{x},l}\mathbf{L}_{\boldsymbol{x},\mathbf{x}'}^\top + (\partial_{x_l}\mathbf{L}_{\boldsymbol{x},\mathbf{x}'})^\top\big]\Big)^\top\,\mathbb{E}_{\boldsymbol{y}\sim q}\big[\psi_{\boldsymbol{y},l}\mathbf{L}_{\boldsymbol{y},\mathbf{x}'}^\top + (\partial_{y_l}\mathbf{L}_{\boldsymbol{y},\mathbf{x}'})^\top\big]\\
&+ \Big(-(\mathbf{K}')^{-1}\,\mathbb{E}_{\boldsymbol{x}\sim q}\big[\psi_{\boldsymbol{x},l}\mathbf{L}_{\boldsymbol{x},\mathbf{x}'}^\top + (\partial_{x_l}\mathbf{L}_{\boldsymbol{x},\mathbf{x}'})^\top\big]\Big)^\top\mathbf{K}'\Big(-(\mathbf{K}')^{-1}\,\mathbb{E}_{\boldsymbol{y}\sim q}\big[\psi_{\boldsymbol{y},l}\mathbf{L}_{\boldsymbol{y},\mathbf{x}'}^\top + (\partial_{y_l}\mathbf{L}_{\boldsymbol{y},\mathbf{x}'})^\top\big]\Big),
\end{aligned}$$

where, for brevity, we have written $\psi_{\boldsymbol{x},l} = \partial_{x_l}\log p_{\boldsymbol{\theta}}(\boldsymbol{x})$ and $\psi_{\boldsymbol{y},l} = \partial_{y_l}\log p_{\boldsymbol{\theta}}(\boldsymbol{y})$. This can be further simplified to

$$\mathcal{L}_\star^2 = \sum_{l=1}^{d}\ \mathbb{E}_{\boldsymbol{x}\sim q}\mathbb{E}_{\boldsymbol{y}\sim q}[u_l(\boldsymbol{x},\boldsymbol{y})] - \Big(\mathbb{E}_{\boldsymbol{x}\sim q}\big[\psi_{\boldsymbol{x},l}\mathbf{L}_{\boldsymbol{x},\mathbf{x}'}^\top + (\partial_{x_l}\mathbf{L}_{\boldsymbol{x},\mathbf{x}'})^\top\big]\Big)^\top(\mathbf{K}')^{-1}\,\mathbb{E}_{\boldsymbol{y}\sim q}\big[\psi_{\boldsymbol{y},l}\mathbf{L}_{\boldsymbol{y},\mathbf{x}'}^\top + (\partial_{y_l}\mathbf{L}_{\boldsymbol{y},\mathbf{x}'})^\top\big], \tag{44}$$

and equivalently

$$\mathcal{L}_\star^2 = \sum_{l=1}^{d}\ \mathbb{E}_{\boldsymbol{x}\sim q}\mathbb{E}_{\boldsymbol{y}\sim q}\Big[u_l(\boldsymbol{x},\boldsymbol{y}) - \big(\psi_{\boldsymbol{x},l}\mathbf{L}_{\boldsymbol{x},\mathbf{x}'}^\top + (\partial_{x_l}\mathbf{L}_{\boldsymbol{x},\mathbf{x}'})^\top\big)^\top(\mathbf{K}')^{-1}\big(\psi_{\boldsymbol{y},l}\mathbf{L}_{\boldsymbol{y},\mathbf{x}'}^\top + (\partial_{y_l}\mathbf{L}_{\boldsymbol{y},\mathbf{x}'})^\top\big)\Big], \tag{45}$$

giving the desired result.
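To make the closed form concrete, the expression in (45) can be estimated by replacing the expectations $\mathbb{E}_{\boldsymbol{x}\sim q}\mathbb{E}_{\boldsymbol{y}\sim q}$ with empirical means over the truncated sample. The sketch below is our own illustration (not the authors' released code) for the Gaussian model $p_{\boldsymbol{\theta}} = \mathcal{N}(\boldsymbol{\theta}, I)$ with an RBF kernel; the bandwidth, the ridge term added before inverting $\mathbf{K}'$, and the V-statistic (rather than U-statistic) form are all illustrative assumptions.

```python
import numpy as np

def rbf(X, Y, s2):
    # pairwise RBF kernel matrix, k(x, y) = exp(-||x - y||^2 / (2 * s2))
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s2))

def tksd_sq(X, Xb, theta, s2=1.0, reg=1e-8):
    """Empirical (V-statistic) version of Eq. (45) for p_theta = N(theta, I),
    from n truncated samples X and m (approximate) boundary points Xb."""
    n, d = X.shape
    m = Xb.shape[0]
    K = rbf(X, X, s2)                               # k(x_i, x_j)
    Lb = rbf(X, Xb, s2)                             # L_{x, x'}: k(x_i, x'_a)
    Kp_inv = np.linalg.inv(rbf(Xb, Xb, s2) + reg * np.eye(m))  # (K')^{-1}
    psi = -(X - theta)                              # psi_{p,l}(x) for N(theta, I)
    total = 0.0
    for l in range(d):
        R = X[:, None, l] - X[None, :, l]           # x_l - y_l
        dKy = K * R / s2                            # d/dy_l k(x, y)
        dKx = -dKy                                  # d/dx_l k(x, y)
        dKxy = K * (1.0 / s2 - R ** 2 / s2 ** 2)    # d/dx_l d/dy_l k(x, y)
        U = (np.outer(psi[:, l], psi[:, l]) * K     # u_l(x_i, x_j) from Eq. (45)
             + psi[:, l][:, None] * dKy
             + psi[:, l][None, :] * dKx
             + dKxy)
        Rb = X[:, None, l] - Xb[None, :, l]
        dLb = -Lb * Rb / s2                         # d/dx_l L_{x, x'}
        V = psi[:, l][:, None] * Lb + dLb           # rows are v_l(x_i)
        total += (U - V @ Kp_inv @ V.T).mean()
    return total
```

Minimising this quantity over $\boldsymbol{\theta}$ then gives the TKSD estimator; note that only kernel evaluations at the samples and at the boundary points are required, not a closed-form description of the boundary.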

A.7 Proof of Theorem 5.10

First, we state a general result for proving the consistency of an estimator minimising an empirical objective that is second-order differentiable with respect to $\boldsymbol{\theta}$.

Lemma A.1.

Define the unique solution of the empirical objective $\hat{\boldsymbol{\theta}} := \operatorname*{arg\,min}_{\boldsymbol{\theta}}\hat{L}(\boldsymbol{\theta})$ and the population solution $\boldsymbol{\theta}^\star := \operatorname*{arg\,min}_{\boldsymbol{\theta}}L(\boldsymbol{\theta})$, where $L(\boldsymbol{\theta}) := \mathbb{E}[\hat{L}(\boldsymbol{\theta})]$. If $\lambda_{\min}[\nabla^2_{\boldsymbol{\theta}}\hat{L}(\boldsymbol{\theta})] \ge \Lambda_{\min} > 0\ \forall\,\boldsymbol{\theta}$ and $\|\nabla_{\boldsymbol{\theta}}\hat{L}(\boldsymbol{\theta}^\star)\| = O_P(\tfrac{1}{\sqrt{n}})$, then $\|\hat{\boldsymbol{\theta}} - \boldsymbol{\theta}^\star\| = O_P(\tfrac{1}{\sqrt{n}})$. Here, $\lambda_{\min}(M)$ is the smallest eigenvalue of a matrix $M$.

Proof.

First, we define the function $h(\boldsymbol{\theta}) := \langle\hat{\boldsymbol{\theta}} - \boldsymbol{\theta}^\star,\ \nabla_{\boldsymbol{\theta}}\hat{L}(\boldsymbol{\theta})\rangle$. Using the mean value theorem, we obtain $h(\hat{\boldsymbol{\theta}}) - h(\boldsymbol{\theta}^\star) = \langle\nabla_{\boldsymbol{\theta}}h(\bar{\boldsymbol{\theta}}),\ \hat{\boldsymbol{\theta}} - \boldsymbol{\theta}^\star\rangle$, where $\bar{\boldsymbol{\theta}}$ lies between $\hat{\boldsymbol{\theta}}$ and $\boldsymbol{\theta}^\star$ in a coordinate-wise fashion. Therefore, due to the fact that $\nabla_{\boldsymbol{\theta}}\hat{L}(\hat{\boldsymbol{\theta}}) = 0$,

$$0 - h(\boldsymbol{\theta}^\star) = -\langle\hat{\boldsymbol{\theta}} - \boldsymbol{\theta}^\star,\ \nabla_{\boldsymbol{\theta}}\hat{L}(\boldsymbol{\theta}^\star)\rangle = \langle\hat{\boldsymbol{\theta}} - \boldsymbol{\theta}^\star,\ \nabla^2_{\boldsymbol{\theta}}\hat{L}(\bar{\boldsymbol{\theta}})(\hat{\boldsymbol{\theta}} - \boldsymbol{\theta}^\star)\rangle.$$

This implies the following inequality by Cauchy–Schwarz,

$$\|\hat{\boldsymbol{\theta}} - \boldsymbol{\theta}^\star\|\,\|\nabla_{\boldsymbol{\theta}}\hat{L}(\boldsymbol{\theta}^\star)\| \ge \big|\langle\hat{\boldsymbol{\theta}} - \boldsymbol{\theta}^\star,\ \nabla^2_{\boldsymbol{\theta}}\hat{L}(\bar{\boldsymbol{\theta}})(\hat{\boldsymbol{\theta}} - \boldsymbol{\theta}^\star)\rangle\big| \ge \Lambda_{\min}\|\hat{\boldsymbol{\theta}} - \boldsymbol{\theta}^\star\|^2.$$

Assuming $\|\hat{\boldsymbol{\theta}} - \boldsymbol{\theta}^\star\| \ne 0$ and dividing through by it, $\frac{1}{\Lambda_{\min}}\|\nabla_{\boldsymbol{\theta}}\hat{L}(\boldsymbol{\theta}^\star)\| \ge \|\hat{\boldsymbol{\theta}} - \boldsymbol{\theta}^\star\|$. ∎

For the remainder, we need only to show that ๐’Ÿ TKSD โข ( ๐‘ ๐œฝ | ๐‘ž ) 2 satisfies Lemma A.1.

Proof.

First, we need to show that

$$\big[\nabla_{\boldsymbol{\theta}}\,\mathcal{D}_{\mathrm{TKSD}}(p_{\boldsymbol{\theta}}|q)^2\big|_{\boldsymbol{\theta}=\boldsymbol{\theta}^\star}\big]_i = O_P(\hat{\varepsilon}_m),\ \forall\,i. \tag{46}$$

For brevity let us define $\mathbb{E}_{\boldsymbol{x}} = \mathbb{E}_{\boldsymbol{x}\sim q}$ and similarly $\mathbb{E}_{\boldsymbol{y}} = \mathbb{E}_{\boldsymbol{y}\sim q}$. Recall

$$\mathcal{D}_{\mathrm{TKSD}}(p_{\boldsymbol{\theta}}|q)^2 = \sum_{l=1}^{d}\mathbb{E}_{\boldsymbol{x}}\mathbb{E}_{\boldsymbol{y}}\big[u_l(\boldsymbol{x},\boldsymbol{y}) - \mathbf{v}_l(\boldsymbol{x})^\top(\mathbf{K}')^{-1}\mathbf{v}_l(\boldsymbol{y})\big], \tag{47}$$

where

$$u_l(\boldsymbol{x},\boldsymbol{y}) = \psi_{p,l}(\boldsymbol{x})\psi_{p,l}(\boldsymbol{y})k(\boldsymbol{x},\boldsymbol{y}) + \psi_{p,l}(\boldsymbol{x})\partial_{y_l}k(\boldsymbol{x},\boldsymbol{y}) + \psi_{p,l}(\boldsymbol{y})\partial_{x_l}k(\boldsymbol{x},\boldsymbol{y}) + \partial_{x_l}\partial_{y_l}k(\boldsymbol{x},\boldsymbol{y}),$$

$$\mathbf{v}_l(\boldsymbol{z}) = \psi_{p,l}(\boldsymbol{z})\,\mathbf{L}_{\boldsymbol{z},\mathbf{x}'}^\top + (\partial_{z_l}\mathbf{L}_{\boldsymbol{z},\mathbf{x}'})^\top,$$

$\mathbf{L}_{\boldsymbol{z},\mathbf{x}'} = [k(\boldsymbol{z},\boldsymbol{x}'_1),\dots,k(\boldsymbol{z},\boldsymbol{x}'_m)]$, $\phi_{\mathbf{x}'} = [k(\boldsymbol{x}'_1,\cdot),\dots,k(\boldsymbol{x}'_m,\cdot)]^\top$ and $\mathbf{K}' = \phi_{\mathbf{x}'}\phi_{\mathbf{x}'}^\top$. Let us take the derivative of (47) with respect to $\boldsymbol{\theta}$. We first consider the $l$-th summand, and then note that the same argument applies to all $d$ dimensions. Therefore consider

โˆ‡ ๐œฝ
๐’Ÿ TKSD โข ( ๐‘ ๐œฝ | ๐‘ž ) 2 | ๐œฝ

๐œฝ โ‹†

โˆ‡ ๐œฝ ๐”ผ ๐’™ โข ๐”ผ ๐’š โข [ ๐‘ข ๐‘™ โข ( ๐’™ , ๐’š ) โˆ’ ๐ฏ ๐‘™ โข ( ๐’™ ) โŠค โข ( ๐Š โ€ฒ ) โˆ’ 1 โข ๐ฏ ๐‘™ โข ( ๐’š ) ] | ๐œฝ

๐œฝ โ‹†

๐”ผ ๐’™ โข ๐”ผ ๐’š โข [ โˆ‡ ๐œฝ ๐‘ข ๐‘™ โข ( ๐’™ , ๐’š ) โˆ’ ๐’— ๐‘™ โข ( ๐’™ ) โŠค โข ( ๐Š โ€ฒ ) โˆ’ 1 โข ( โˆ‡ ๐œฝ ๐ฏ ๐‘™ โข ( ๐’š ) ) โˆ’ ( โˆ‡ ๐œฝ ๐’— ๐‘™ โข ( ๐’™ ) ) โŠค โข ( ๐Š โ€ฒ ) โˆ’ 1 โข ๐ฏ ๐‘™ โข ( ๐’š ) ] | ๐œฝ

๐œฝ โ‹† .

(48)

We begin by expanding the first term in (48):

$$\begin{aligned}
\mathbb{E}_{\boldsymbol{x}}\mathbb{E}_{\boldsymbol{y}}[\nabla_{\boldsymbol{\theta}}u_l(\boldsymbol{x},\boldsymbol{y})]\big|_{\boldsymbol{\theta}=\boldsymbol{\theta}^\star} = \mathbb{E}_{\boldsymbol{x}}\mathbb{E}_{\boldsymbol{y}}\big[&\psi_{p,l}(\boldsymbol{x})k(\boldsymbol{x},\boldsymbol{y})(\nabla_{\boldsymbol{\theta}}\psi_{p,l}(\boldsymbol{y})) + (\nabla_{\boldsymbol{\theta}}\psi_{p,l}(\boldsymbol{x}))k(\boldsymbol{x},\boldsymbol{y})\psi_{p,l}(\boldsymbol{y})\\
&+ (\nabla_{\boldsymbol{\theta}}\psi_{p,l}(\boldsymbol{x}))\partial_{y_l}k(\boldsymbol{x},\boldsymbol{y}) + (\nabla_{\boldsymbol{\theta}}\psi_{p,l}(\boldsymbol{y}))\partial_{x_l}k(\boldsymbol{x},\boldsymbol{y})\big]\big|_{\boldsymbol{\theta}=\boldsymbol{\theta}^\star}.
\end{aligned} \tag{49}$$

Now expand the first term of (49), giving

$$\begin{aligned}
\mathbb{E}_{\boldsymbol{x}}\mathbb{E}_{\boldsymbol{y}}\big[\psi_{p,l}(\boldsymbol{x})k(\boldsymbol{x},\boldsymbol{y})(\nabla_{\boldsymbol{\theta}}\psi_{p,l}(\boldsymbol{y}))\big]\big|_{\boldsymbol{\theta}=\boldsymbol{\theta}^\star} &= \int_V q(\boldsymbol{x})\psi_{p,l}(\boldsymbol{x})\,\mathbb{E}_{\boldsymbol{y}}\big[k(\boldsymbol{x},\boldsymbol{y})(\nabla_{\boldsymbol{\theta}}\psi_{p,l}(\boldsymbol{y}))\big]\,d\boldsymbol{x}\,\Big|_{\boldsymbol{\theta}=\boldsymbol{\theta}^\star}\\
&= \int_V \partial_{x_l}q(\boldsymbol{x})\,\mathbb{E}_{\boldsymbol{y}}\big[k(\boldsymbol{x},\boldsymbol{y})\,[\nabla_{\boldsymbol{\theta}}\psi_{p,l}(\boldsymbol{y})]\big|_{\boldsymbol{\theta}=\boldsymbol{\theta}^\star}\big]\,d\boldsymbol{x}
\end{aligned} \tag{50}$$

due to

$$q(\boldsymbol{z})\,\psi_{p,l}(\boldsymbol{z})\big|_{\boldsymbol{\theta}=\boldsymbol{\theta}^\star} = q(\boldsymbol{z})\,\partial_{z_l}\log p_{\boldsymbol{\theta}^\star}(\boldsymbol{z}) = q(\boldsymbol{z})\,\frac{\partial_{z_l}q(\boldsymbol{z})}{q(\boldsymbol{z})} = \partial_{z_l}q(\boldsymbol{z}).$$

Then, via integration by parts, (50) becomes

$$\oint_{\partial V} q(\boldsymbol{x})\,\mathbb{E}_{\boldsymbol{y}}\big[k(\boldsymbol{x},\boldsymbol{y})\,[\nabla_{\boldsymbol{\theta}}\psi_{p,l}(\boldsymbol{y})]\big|_{\boldsymbol{\theta}=\boldsymbol{\theta}^\star}\big]\,\hat{u}_l(\boldsymbol{x})\,ds - \int_V q(\boldsymbol{x})\,\mathbb{E}_{\boldsymbol{y}}\big[\partial_{x_l}k(\boldsymbol{x},\boldsymbol{y})\,[\nabla_{\boldsymbol{\theta}}\psi_{p,l}(\boldsymbol{y})]\big|_{\boldsymbol{\theta}=\boldsymbol{\theta}^\star}\big]\,d\boldsymbol{x}.$$

We can expand the second term of (49) in a similar way:

$$\mathbb{E}_{\boldsymbol{x}}\mathbb{E}_{\boldsymbol{y}}\big[(\nabla_{\boldsymbol{\theta}}\psi_{p,l}(\boldsymbol{x}))k(\boldsymbol{x},\boldsymbol{y})\psi_{p,l}(\boldsymbol{y})\big]\big|_{\boldsymbol{\theta}=\boldsymbol{\theta}^\star} = \oint_{\partial V} q(\boldsymbol{y})\,\mathbb{E}_{\boldsymbol{x}}\big[[\nabla_{\boldsymbol{\theta}}\psi_{p,l}(\boldsymbol{x})]\big|_{\boldsymbol{\theta}=\boldsymbol{\theta}^\star}\,k(\boldsymbol{x},\boldsymbol{y})\big]\,\hat{u}_l(\boldsymbol{y})\,ds - \int_V q(\boldsymbol{y})\,\mathbb{E}_{\boldsymbol{x}}\big[[\nabla_{\boldsymbol{\theta}}\psi_{p,l}(\boldsymbol{x})]\big|_{\boldsymbol{\theta}=\boldsymbol{\theta}^\star}\,\partial_{y_l}k(\boldsymbol{x},\boldsymbol{y})\big]\,d\boldsymbol{y}.$$

Both of these together, as they appear in (49), give

$$B := \oint_{\partial V} q(\boldsymbol{x})\,\mathbb{E}_{\boldsymbol{y}}\big[k(\boldsymbol{x},\boldsymbol{y})\,[\nabla_{\boldsymbol{\theta}}\psi_{p,l}(\boldsymbol{y})]\big|_{\boldsymbol{\theta}=\boldsymbol{\theta}^\star}\big]\,\hat{u}_l(\boldsymbol{x})\,ds + \oint_{\partial V} q(\boldsymbol{y})\,\mathbb{E}_{\boldsymbol{x}}\big[[\nabla_{\boldsymbol{\theta}}\psi_{p,l}(\boldsymbol{x})]\big|_{\boldsymbol{\theta}=\boldsymbol{\theta}^\star}\,k(\boldsymbol{x},\boldsymbol{y})\big]\,\hat{u}_l(\boldsymbol{y})\,ds. \tag{51}$$

Now consider the second term in (48),

$$\mathbb{E}_{\boldsymbol{x}}\mathbb{E}_{\boldsymbol{y}}\big[\mathbf{v}_l(\boldsymbol{x})^\top(\mathbf{K}')^{-1}(\nabla_{\boldsymbol{\theta}}\mathbf{v}_l(\boldsymbol{y}))\big]\big|_{\boldsymbol{\theta}=\boldsymbol{\theta}^\star} \tag{52}$$

$$= \mathbb{E}_{\boldsymbol{x}}\mathbb{E}_{\boldsymbol{y}}\big[\big(\psi_{p,l}(\boldsymbol{x})\mathbf{L}_{\boldsymbol{x},\mathbf{x}'} + \partial_{x_l}\mathbf{L}_{\boldsymbol{x},\mathbf{x}'}\big)(\mathbf{K}')^{-1}\big(\mathbf{L}_{\boldsymbol{y},\mathbf{x}'}^\top(\nabla_{\boldsymbol{\theta}}\psi_{p,l}(\boldsymbol{y}))^\top\big)\big]\big|_{\boldsymbol{\theta}=\boldsymbol{\theta}^\star}$$

$$= \mathbb{E}_{\boldsymbol{x}}\mathbb{E}_{\boldsymbol{y}}\big[\psi_{p,l}(\boldsymbol{x})\mathbf{L}_{\boldsymbol{x},\mathbf{x}'}(\mathbf{K}')^{-1}\big(\mathbf{L}_{\boldsymbol{y},\mathbf{x}'}^\top(\nabla_{\boldsymbol{\theta}}\psi_{p,l}(\boldsymbol{y}))^\top\big) + (\partial_{x_l}\mathbf{L}_{\boldsymbol{x},\mathbf{x}'})(\mathbf{K}')^{-1}\big(\mathbf{L}_{\boldsymbol{y},\mathbf{x}'}^\top(\nabla_{\boldsymbol{\theta}}\psi_{p,l}(\boldsymbol{y}))^\top\big)\big]\big|_{\boldsymbol{\theta}=\boldsymbol{\theta}^\star} \tag{53}$$

$$= \mathbb{E}_{\boldsymbol{x}}\big[\psi_{p,l}(\boldsymbol{x})\mathbf{L}_{\boldsymbol{x},\mathbf{x}'}(\mathbf{K}')^{-1}\,\mathbb{E}_{\boldsymbol{y}}\big[\mathbf{L}_{\boldsymbol{y},\mathbf{x}'}^\top(\nabla_{\boldsymbol{\theta}}\psi_{p,l}(\boldsymbol{y}))^\top\big]\big]\big|_{\boldsymbol{\theta}=\boldsymbol{\theta}^\star} + \mathbb{E}_{\boldsymbol{x}}\big[(\partial_{x_l}\mathbf{L}_{\boldsymbol{x},\mathbf{x}'})(\mathbf{K}')^{-1}\,\mathbb{E}_{\boldsymbol{y}}\big[\mathbf{L}_{\boldsymbol{y},\mathbf{x}'}^\top(\nabla_{\boldsymbol{\theta}}\psi_{p,l}(\boldsymbol{y}))^\top\big]\big]\big|_{\boldsymbol{\theta}=\boldsymbol{\theta}^\star}. \tag{54}$$

Consider only the first term of the above,

$$\begin{aligned}
\mathbb{E}_{\boldsymbol{x}}\big[\psi_{p,l}(\boldsymbol{x})\mathbf{L}_{\boldsymbol{x},\mathbf{x}'}(\mathbf{K}')^{-1}\,\mathbb{E}_{\boldsymbol{y}}\big[\mathbf{L}_{\boldsymbol{y},\mathbf{x}'}^\top(\nabla_{\boldsymbol{\theta}}\psi_{p,l}(\boldsymbol{y}))^\top\big]\big]\big|_{\boldsymbol{\theta}=\boldsymbol{\theta}^\star} &= \int_V q(\boldsymbol{x})\psi_{p,l}(\boldsymbol{x})\,\mathbf{L}_{\boldsymbol{x},\mathbf{x}'}(\mathbf{K}')^{-1}\,\mathbb{E}_{\boldsymbol{y}}\big[\mathbf{L}_{\boldsymbol{y},\mathbf{x}'}^\top(\nabla_{\boldsymbol{\theta}}\psi_{p,l}(\boldsymbol{y}))^\top\big]\,d\boldsymbol{x}\,\Big|_{\boldsymbol{\theta}=\boldsymbol{\theta}^\star}\\
&= \int_V \partial_{x_l}q(\boldsymbol{x})\,\mathbf{L}_{\boldsymbol{x},\mathbf{x}'}(\mathbf{K}')^{-1}\,\mathbb{E}_{\boldsymbol{y}}\big[\mathbf{L}_{\boldsymbol{y},\mathbf{x}'}^\top[\nabla_{\boldsymbol{\theta}}\psi_{p,l}(\boldsymbol{y})^\top]\big|_{\boldsymbol{\theta}=\boldsymbol{\theta}^\star}\big]\,d\boldsymbol{x}.
\end{aligned}$$

Integration by parts can be used to expand this to

$$\oint_{\partial V} q(\boldsymbol{x})\,\mathbf{L}_{\boldsymbol{x},\mathbf{x}'}(\mathbf{K}')^{-1}\,\mathbb{E}_{\boldsymbol{y}}\big[\mathbf{L}_{\boldsymbol{y},\mathbf{x}'}^\top[\nabla_{\boldsymbol{\theta}}\psi_{p,l}(\boldsymbol{y})^\top]\big|_{\boldsymbol{\theta}=\boldsymbol{\theta}^\star}\big]\,\hat{u}_l(\boldsymbol{x})\,ds - \int_V q(\boldsymbol{x})\,(\partial_{x_l}\mathbf{L}_{\boldsymbol{x},\mathbf{x}'})(\mathbf{K}')^{-1}\,\mathbb{E}_{\boldsymbol{y}}\big[\mathbf{L}_{\boldsymbol{y},\mathbf{x}'}^\top[\nabla_{\boldsymbol{\theta}}\psi_{p,l}(\boldsymbol{y})^\top]\big|_{\boldsymbol{\theta}=\boldsymbol{\theta}^\star}\big]\,d\boldsymbol{x}. \tag{55}$$

5.8 indicates that we have the following equivalence (in a coordinate-wise fashion):

$$\oint_{\partial V} q(\boldsymbol{x})\,\mathbf{L}_{\boldsymbol{x},\mathbf{x}'}(\mathbf{K}')^{-1}\,\mathbb{E}_{\boldsymbol{y}}\big[\mathbf{L}_{\boldsymbol{y},\mathbf{x}'}^\top[\nabla_{\boldsymbol{\theta}}\psi_{p,l}(\boldsymbol{y})^\top]\big|_{\boldsymbol{\theta}=\boldsymbol{\theta}^\star}\big]\,\hat{u}_l(\boldsymbol{x})\,ds = \oint_{\partial V} q(\boldsymbol{x})\,\mathbb{E}_{\boldsymbol{y}}\big[k(\boldsymbol{x},\boldsymbol{y})\,[\nabla_{\boldsymbol{\theta}}\psi_{p,l}(\boldsymbol{y})^\top]\big|_{\boldsymbol{\theta}=\boldsymbol{\theta}^\star}\big]\,\hat{u}_l(\boldsymbol{x})\,ds + O_P(\hat{\varepsilon}_{m,1}). \tag{56}$$

We interpret this as follows. If we consider $\mathbf{x}'\in\partial\tilde{V}$ as our training set, then the regression function is trained on the approximate set of boundary points. The surface integral $\oint_{\partial V}$ evaluates at all $\boldsymbol{x}\in\partial V$, the true boundary. Therefore the prediction at $\boldsymbol{x}\in\partial V$ is well approximated by the regression function trained on $\mathbf{x}'$, and the approximation error, $\hat{\varepsilon}_{m,1}$, decreases as $m$, the number of training points, increases.

We can apply the same operations (from (54) onwards) to the third term in (48), which gives

$$\mathbb{E}_{\boldsymbol{x}}\mathbb{E}_{\boldsymbol{y}}\big[(\nabla_{\boldsymbol{\theta}}\mathbf{v}_l(\boldsymbol{x}))^\top(\mathbf{K}')^{-1}\mathbf{v}_l(\boldsymbol{y})\big] = \oint_{\partial V} q(\boldsymbol{y})\,\mathbb{E}_{\boldsymbol{x}}\big[[\nabla_{\boldsymbol{\theta}}\psi_{p,l}(\boldsymbol{x})^\top]\big|_{\boldsymbol{\theta}=\boldsymbol{\theta}^\star}\,k(\boldsymbol{x},\boldsymbol{y})\big]\,\hat{u}_l(\boldsymbol{y})\,ds + O_P(\hat{\varepsilon}_{m,2}), \tag{57}$$

where $\hat{\varepsilon}_{m,2}$ is the approximation error for the corresponding kernel regression on this term. Taking the sum as it appears in (48), (56) + (57) gives

$$\oint_{\partial V} q(\boldsymbol{x})\,\mathbb{E}_{\boldsymbol{y}}\big[k(\boldsymbol{x},\boldsymbol{y})\,[\nabla_{\boldsymbol{\theta}}\psi_{p,l}(\boldsymbol{y})^\top]\big|_{\boldsymbol{\theta}=\boldsymbol{\theta}^\star}\big]\,\hat{u}_l(\boldsymbol{x})\,ds + \oint_{\partial V} q(\boldsymbol{y})\,\mathbb{E}_{\boldsymbol{x}}\big[[\nabla_{\boldsymbol{\theta}}\psi_{p,l}(\boldsymbol{x})^\top]\big|_{\boldsymbol{\theta}=\boldsymbol{\theta}^\star}\,k(\boldsymbol{x},\boldsymbol{y})\big]\,\hat{u}_l(\boldsymbol{y})\,ds + O_P(\hat{\varepsilon}_{m,1}) + O_P(\hat{\varepsilon}_{m,2}) = B + O_P(\hat{\varepsilon}_m),$$

where $\hat{\varepsilon}_m = \hat{\varepsilon}_{m,1} + \hat{\varepsilon}_{m,2}$. Therefore, when substituting all of (51), (56) and (57) back into (48), we have

$$\nabla_{\boldsymbol{\theta}}\,\mathcal{D}_{\mathrm{TKSD}}(p_{\boldsymbol{\theta}}|q)^2\big|_{\boldsymbol{\theta}=\boldsymbol{\theta}^\star} = O_P(\hat{\varepsilon}_m),$$

where $\hat{\varepsilon}_m$ is a decreasing function of $m$.

Now we show $\boldsymbol{\theta}^\star$ is the true density parameter, i.e., $p_{\boldsymbol{\theta}^\star} = q$. Let $\mathcal{D}_{\mathrm{TKSD}}(p_{\boldsymbol{\theta}^\star}|q) := \mathcal{D}_{\mathrm{TKSD}}(p_{\boldsymbol{\theta}}|q)\big|_{\boldsymbol{\theta}=\boldsymbol{\theta}^\star}$. First, we show that $L(\boldsymbol{\theta}^\star) = 0$. This is guaranteed as

$$L(\boldsymbol{\theta}^\star) = \lim_{m\to\infty}\mathcal{D}_{\mathrm{TKSD}}(p_{\boldsymbol{\theta}^\star}|q)^2 = \lim_{m\to\infty}\Big\{\sup_{\boldsymbol{g}\in\mathcal{G}^d_{0,m}}\mathbb{E}_q[\mathcal{T}_{p_{\boldsymbol{\theta}^\star}}\boldsymbol{g}]\Big\}^2 = \lim_{m\to\infty}O_P(\varepsilon_m) = 0\ \text{(with high probability)},$$

for $\boldsymbol{g}\in\mathcal{G}^d_{0,m}$. The change from Stein discrepancy to $O_P(\varepsilon_m)$ is guaranteed by Theorem 5.5. Since we assume that the minimiser of the population objective $\boldsymbol{\theta}^\star$ is unique and our density model is identifiable, the unique solution of the population objective is the unique optimal parameter of the density model.

Second, we verify that $\|\nabla_{\boldsymbol{\theta}}\hat{L}(\boldsymbol{\theta}^\star)\| = O_P(\tfrac{1}{\sqrt{n}})$:

$$\begin{aligned}
\|\nabla_{\boldsymbol{\theta}}\hat{L}(\boldsymbol{\theta}^\star)\| &= \big\|\nabla_{\boldsymbol{\theta}}\lim_{m\to\infty}\hat{\mathcal{D}}_{\mathrm{TKSD}}(p_{\boldsymbol{\theta}^\star}|q)^2\big\| = \lim_{m\to\infty}\big\|\nabla_{\boldsymbol{\theta}}\hat{\mathcal{D}}_{\mathrm{TKSD}}(p_{\boldsymbol{\theta}^\star}|q)^2\big\|\\
&= \lim_{m\to\infty}\big\|\nabla_{\boldsymbol{\theta}}\hat{\mathcal{D}}_{\mathrm{TKSD}}(p_{\boldsymbol{\theta}^\star}|q)^2 - \nabla_{\boldsymbol{\theta}}\mathcal{D}_{\mathrm{TKSD}}(p_{\boldsymbol{\theta}^\star}|q)^2 + \nabla_{\boldsymbol{\theta}}\mathcal{D}_{\mathrm{TKSD}}(p_{\boldsymbol{\theta}^\star}|q)^2\big\|\\
&\le \lim_{m\to\infty}\Big(\big\|\nabla_{\boldsymbol{\theta}}\hat{\mathcal{D}}_{\mathrm{TKSD}}(p_{\boldsymbol{\theta}^\star}|q)^2 - \nabla_{\boldsymbol{\theta}}\mathcal{D}_{\mathrm{TKSD}}(p_{\boldsymbol{\theta}^\star}|q)^2\big\| + \big\|\nabla_{\boldsymbol{\theta}}\mathcal{D}_{\mathrm{TKSD}}(p_{\boldsymbol{\theta}^\star}|q)^2\big\|\Big)\\
&\le \lim_{m\to\infty}\Big(\big\|\nabla_{\boldsymbol{\theta}}\hat{\mathcal{D}}_{\mathrm{TKSD}}(p_{\boldsymbol{\theta}^\star}|q)^2 - \nabla_{\boldsymbol{\theta}}\mathcal{D}_{\mathrm{TKSD}}(p_{\boldsymbol{\theta}^\star}|q)^2\big\| + O_P(\hat{\varepsilon}_m)\Big)\\
&\overset{(a)}{\le} \lim_{m\to\infty}\Big(O_P\Big(\frac{1}{\sqrt{n}}\Big) + O_P(\hat{\varepsilon}_m)\Big) \overset{(b)}{=} O_P\Big(\frac{1}{\sqrt{n}}\Big),
\end{aligned} \tag{58}$$

where the inequality in (a) is due to the convergence of U-statistics (Serfling, 2009), and the equality in (b) is due to $\lim_{m\to\infty}O_P(\hat{\varepsilon}_m) = 0$.

∎

We have $\|\nabla_{\boldsymbol{\theta}}\hat{L}(\boldsymbol{\theta}^\star)\| = O_P(\tfrac{1}{\sqrt{n}})$, and (5.9) provides $\lambda_{\min}[\nabla^2_{\boldsymbol{\theta}}\hat{L}(\boldsymbol{\theta})] \ge \Lambda_{\min} > 0$, so both conditions in Lemma A.1 are satisfied. This concludes the proof of the consistency of $\hat{\boldsymbol{\theta}}_n$, i.e., $\|\hat{\boldsymbol{\theta}}_n - \boldsymbol{\theta}^\star\| = O_P(\tfrac{1}{\sqrt{n}})$.

Appendix B Computation

B.1 Empirical Convergence of $\tilde{\boldsymbol{g}}$ to $\boldsymbol{g}$

In Lemma 5.2 we proved that, under some conditions, the difference between $\boldsymbol{g}\in\mathcal{G}_0^d$ and $\tilde{\boldsymbol{g}}\in\mathcal{G}^d_{0,m}$ evaluated at $\boldsymbol{x}'\in\partial V$ is bounded in probability by $\varepsilon_m$, and this bound tends to zero as $m\to\infty$. In this section we aim to show that $\tilde{\boldsymbol{g}}$ converges to a specific function as $m$ increases, which we hypothesise is the 'true' $\boldsymbol{g}\in\mathcal{G}_0^d$.

Figure 3 demonstrates empirical convergence of $\tilde{\boldsymbol{g}}$ as $m$ increases for a given dataset. The experiment is set up as follows: we simulate points from a $\mathcal{N}(\boldsymbol{0}_2, \boldsymbol{I}_2)$ distribution (a unit multivariate Normal distribution in 2D), truncate data points to within the shape of the boundary based on location, and repeat until we acquire $n = 150$ truncated points. For each value of $m$, we minimise the TKSD objective to estimate $\boldsymbol{\theta} = \boldsymbol{0}_2$, and plug the estimated $\hat{\boldsymbol{\theta}}$ into the formula for $\tilde{\boldsymbol{g}}$.

For lower values of $m$, there are not enough boundary points to enforce the constraint well on $\tilde{\boldsymbol{g}}$, the approximate Stein identity has a high error, and the estimator is poor. The evaluations of $\tilde{\boldsymbol{g}}$ across the space do not match the $\tilde{\boldsymbol{g}}$ output for higher values of $m$. As $m$ increases, $\tilde{\boldsymbol{g}}$ begins to converge to a particular shape, and the difference between function evaluations for $m = 150$ and $m = 300$ is small.
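The boundary constraint itself can also be checked numerically: with $\boldsymbol{\nu}^{(l)\star}$ from (42), $z_l$ (and hence $\tilde{g}_l = -z_l/(2\lambda)$) vanishes at every boundary point by construction, since $z_l(\boldsymbol{x}'_b)$ equals the expectation term at $\boldsymbol{x}'_b$ plus $\boldsymbol{\nu}^{(l)\star\top}\mathbf{K}'_{\cdot,b}$, which cancel. The snippet below is a minimal sketch of this check under illustrative assumptions of our own: a $\mathcal{N}(\boldsymbol{0}, I)$ model, an RBF kernel, boundary points on the $\ell_2$ ball, and stand-in samples in place of a truncated dataset.

```python
import numpy as np

def rbf(X, Y, s2):
    # pairwise RBF kernel matrix, k(x, y) = exp(-||x - y||^2 / (2 * s2))
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s2))

rng = np.random.default_rng(0)
s2, l = 0.25, 0                                     # kernel bandwidth, dimension index
X = 0.5 * rng.standard_normal((80, 2))              # stand-in samples inside V
ang = np.linspace(0, 2 * np.pi, 12, endpoint=False)
Xb = np.c_[np.cos(ang), np.sin(ang)]                # m = 12 points on the l2 ball
psi = -X[:, l]                                      # score of N(0, I), dimension l
Lb = rbf(X, Xb, s2)                                 # k(x_i, x'_a)
dLb = -Lb * (X[:, None, l] - Xb[None, :, l]) / s2   # d/dx_l k(x_i, x'_a)
Ez = (psi[:, None] * Lb + dLb).mean(0)              # empirical E_q[psi k + dk] at x'
Kp = rbf(Xb, Xb, s2)                                # K'
nu = -np.linalg.solve(Kp, Ez)                       # nu^{(l)*}, Eq. (42)
z_at_boundary = Ez + Kp @ nu                        # z_l evaluated at each x'_b
```

Up to numerical error, `z_at_boundary` is identically zero, matching the constraint $\tilde{g}_l(\boldsymbol{x}') = 0$ on $\partial\tilde{V}$.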

Figure 3: Contour lines of the optimal $\tilde{g}_1$ (first dimension of $\tilde{\boldsymbol{g}}$) output by TKSD across different values of $m$ and differently shaped truncation boundaries: the $\ell_1$ ball (top), the $\ell_2$ ball (middle) and a heart shape (bottom). Red points are all $m$ points in $\partial\tilde{V}$ and grey points are samples from the truncated dataset. Note that we plot only the first dimension of $\tilde{\boldsymbol{g}}$, but we observe the same pattern in the second dimension.

B.2 Evaluating increasing values of $m$ in Proposition 5.3

Figure 4: Lower bound on $\varepsilon_m$ (given in (19)) against $m$, the number of finite boundary points, plotted for different values of fixed dimension $d$ and boundary 'size' $L(V)$, which scales quadratically.

In Figure 4 we show that $\varepsilon_m$, as defined in Proposition 5.3, is bounded below by a decreasing function of $m$. In this example, we plot the value of the lower bound in (19) against $m$, the number of samples on the boundary set, and $L(V)$, the $(d-1)$-dimensional surface area of the bounded domain $V$, which can be considered a proxy for the complexity of the boundary.

We let $L(V)$ scale quadratically with the dimension $d$ and treat it as a constant, let $m$ scale linearly, and measure the effect on the lower bound of $\varepsilon_m$. As $L(V)$ increases, the bound on $\varepsilon_m$ increases slightly. As $m$ increases, the bound on $\varepsilon_m$ decreases rapidly: even when the bound is initially very large, relatively small values of $m$ suffice to shrink it. This result highlights how to choose $m$ in experiments: as $m$ increases, the 'denseness' of $\partial\tilde{V}$ in $\partial V$ also increases, as expected. Whilst larger values of $m$ will always be better, the effect is less pronounced once $m$ is significantly larger than zero. As discussed in Section B.8, larger $m$ comes with computational cost, so this experiment shows that it may be beneficial to choose smaller values of $m$. This is explored further in Section B.3.

B.3 Empirical Consistency

Figure 5: Left: estimation error as $n$ and $m$ increase, for TKSD only. Right: mean estimation error for the three methods (TKSD, TruncSM and bd-KSD) with standard error bars; TKSD uses a fixed $m = 32$ across all values of $n$. Both plots report statistics over 64 seeds.

We verify that (26) is a consistent estimator via empirical experiments. Note that for consistency as proven in Theorem 5.10, we require taking the limit as $m\to\infty$, after which the estimator is consistent in $n$; so as $m$ and $n$ increase, the estimation error decreases towards zero. We show consistency as $m$ and $n$ increase empirically in Figure 5, for a simple experiment setup similar to that of Section 6.3. Data are simulated from $\mathcal{N}(\boldsymbol{\mu}^\star, I_d)$, where $d = 2$, $\boldsymbol{\mu}^\star = [0.5, 0.5]^\top$, and $I_d$ is known. Data are truncated to the $\ell_2$ ball until we reach $n$ data points (after truncation).

The aim is to estimate $\boldsymbol{\mu}^\star$. Across 64 trials, Figure 5 shows plots of the mean estimation error, given by $\|\hat{\boldsymbol{\mu}} - \boldsymbol{\mu}^\star\|$, where $\hat{\boldsymbol{\mu}}$ is the corresponding estimate of $\boldsymbol{\mu}^\star$ output by a given method. In the first plot (left), we show that as both $n$ and $m$ increase, the estimation error for TKSD decreases towards zero. In the second plot (right), for a fixed $m$, we show that the rate of convergence as $n$ increases for TKSD matches that of bd-KSD and TruncSM.
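For concreteness, the data-generating step of this setup (rejection sampling from $\mathcal{N}(\boldsymbol{\mu}^\star, I_d)$ until $n$ points survive truncation to the $\ell_2$ ball) can be sketched as follows; the unit radius and the seed are our assumptions, as they are not stated here.

```python
import numpy as np

def sample_truncated_gaussian(mu, n, radius=1.0, seed=0):
    """Rejection-sample N(mu, I_d) truncated to the l2 ball of the given
    radius, drawing until n points survive truncation (as in B.3)."""
    rng = np.random.default_rng(seed)
    d = len(mu)
    kept = []
    while len(kept) < n:
        x = rng.standard_normal(d) + mu      # one draw from N(mu, I_d)
        if np.linalg.norm(x) <= radius:      # keep only points inside V
            kept.append(x)
    return np.stack(kept)

X = sample_truncated_gaussian(np.array([0.5, 0.5]), n=100)
```

`X` then plays the role of the truncated dataset passed to each estimator.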

B.4 Gaussian Mixture Experiment

Figure 6: Top: example setup for the experiment as the number of mixture modes increases from 2 to 4. Middle: estimation error as $n$ increases for 2 mixture modes, averaged across 256 seeds. Bottom: estimation error as the number of mixture modes increases for a fixed $n = 300$, averaged across 256 seeds.

As an additional experiment to show the capability of the method, we test on a more complex problem: estimating the several means of a Gaussian mixture distribution. The estimation task is as follows. Fix $d = 2$ and $m = 200$. Let the mixture modes be given by

$$\boldsymbol{\mu}_1^* = [1.5, 1.5]^\top,\quad \boldsymbol{\mu}_2^* = [-1.5, -1.5]^\top,\quad \boldsymbol{\mu}_3^* = [-1.5, 1.5]^\top,\quad \boldsymbol{\mu}_4^* = [1.5, -1.5]^\top.$$

We independently simulate samples from $\mathcal{N}(\boldsymbol{\mu}_i^*, I_d)$, for $i = 1,\dots,4$, and truncate these samples to within a box with vertices at $[-3,-3]$, $[3,-3]$, $[-3,3]$ and $[3,3]$ until we reach a total of $n$ samples after truncation. Figure 6, top, shows an example of this experiment setup.

The task is to estimate $\boldsymbol{\mu}_i^*$ for all $i$. To ensure a well-specified experiment, we set the initial conditions of the optimisation routine as a perturbation of the true value, i.e. $\boldsymbol{\mu}_i^{\mathrm{ini}} = \boldsymbol{\mu}_i^* + \boldsymbol{z}$, $\boldsymbol{z}\sim\mathcal{N}(\boldsymbol{0}, 0.5\cdot I_2)$. We estimate $\boldsymbol{\mu}$ with TKSD and compare it to the corresponding estimate by TruncSM across a range of values of $n$ and numbers of mixture modes, shown in Figure 6, middle and bottom. Overall, TKSD significantly outperforms TruncSM in this experiment across all variations, at the cost of runtime.

B.5 Regression

We provide a further example of using TKSD to estimate the parameters of a regression problem, with two examples in this section: one on synthetic data and one on real data.

Synthetic Data

First, we simulate data in the following way:

$$y_i \sim \mathcal{N}(\mu_i, 1), \qquad \mu_i = \beta_0^\star + \beta_1^\star x_i, \qquad x_i \sim \mathrm{Uniform}(0, 1),$$

where the true values, $\beta_0^\star = 3$ and $\beta_1^\star = 4$, are known for evaluation. We truncate the dataset to where $y_i \ge 5\ \forall\,i$, so only a portion of both $y$ and $x$ is observed. We then estimate the conditional density $p(y|x)$ with TKSD via estimation of $\boldsymbol{\theta}^\star := [\beta_0^\star, \beta_1^\star]^\top$.

We obtain the log-likelihood of the Normal distribution under the estimate $\hat{\boldsymbol{\theta}} := [\hat{\beta}_0, \hat{\beta}_1]^\top$ output by TKSD, split across the observed dataset (within the truncation boundary) and the unobserved dataset (outside the truncation boundary). We compare these log-likelihoods to ones obtained by a naive MLE approach which does not account for truncation. As an additional measure, we calculate the unobserved error, given by

$$\mathrm{MSE}(\mathbf{y}, \hat{\mathbf{y}}) := \frac{1}{n_{\mathrm{unobs}}} \sum_i (\hat{y}_i - y_i)^2, \qquad \hat{y}_i = \hat{\beta}_0 + \hat{\beta}_1 x_i,$$

where $n_{\mathrm{unobs}}$ is the number of data points discarded due to truncation, and this error is calculated over these discarded data points only. This forms a measure which evaluates how well the density estimation method predicts unseen data points: instead of a traditional test error, it is the error on data points that were truncated out and hence never observed by either method.

A histogram of log-likelihoods and errors across 256 trials is given in Figure 7 (left), and an example of one such simulated regression is given in Figure 7 (top-right). The log-likelihoods for the observed points are higher for MLE, since the objective of this method is to directly maximise these likelihoods. However, on the unobserved dataset, the log-likelihoods obtained by TKSD are significantly higher than MLE. The unobserved errors are significantly smaller for TKSD, showing the improvement of TKSD over the naive approach.

Real Data: Student Test Scores

We also experiment on a real-world dataset given by UCLA: Statistical Consulting Group (2022) (Example 1). This dataset contains student test scores in a school for which the acceptance threshold is 40 out of 100, so the response variable (the test scores) is truncated below by 40 and above by 100. Since no scores get close to 100, we only consider one-sided truncation at $y = 40$. The aim of the regression is to model the response variable, the test scores, based on a single covariate: each student's corresponding score on a different test. Figure 7 (bottom-right) shows the plotted dataset and the regression lines fit by TKSD and naive MLE. Whilst we have no true baseline value to compare to, the TKSD regression line appears to account for the truncation, whilst, as expected, MLE does not.

Figure 7: Statistics from regression fit. Left: Mean squared errors only on the observed dataset (top). Log-likelihoods on the observed dataset (middle) and the unobserved dataset (bottom), both for a $\mathcal{N}(\mu, 1)$ distribution and over 256 trials. Right: An example of TKSD used to fit a regression on the synthetic data on one trial (top) and TKSD used to fit a regression on the student test scores data (bottom). Fuller black points are the observed data points, after truncation, and smaller grey points are the unobserved data points that were truncated out in the data simulation process.

B.6 Quantifying Effect of Boundary Point Distribution

We test whether the distribution of the boundary point samples has an effect on the robustness of TKSD. To do so, we repeat the simple experiment setup where data are simulated as follows,

$\boldsymbol{x} \sim \mathcal{N}(\boldsymbol{\mu}^\star, \boldsymbol{I}_2), \quad \boldsymbol{\mu}^\star = [1, 1]^\top,$

from which we observe $n = 400$ realisations of $\boldsymbol{x}$ that are restricted to the unit $\ell_2$ ball around the origin, and let $m = 30$. We use TKSD to provide an estimate $\hat{\boldsymbol{\mu}}$ of $\boldsymbol{\mu}^\star$ under three scenarios:

1. Boundary points are distributed towards $\boldsymbol{\mu}^\star$, i.e. samples from $\partial\tilde{V}$ are closer to the centre of the dataset.
2. Boundary points are distributed away from $\boldsymbol{\mu}^\star$, i.e. samples from $\partial\tilde{V}$ are closer to the edge of the dataset.
3. Boundary points are sampled uniformly across $\partial V$.
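One way to realise the three scenarios on the unit $\ell_2$ boundary is sketched below; the von Mises concentration used for scenarios 1 and 2 is our illustrative choice, not necessarily the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
mu_star = np.array([1.0, 1.0])

def boundary_points(m, scenario, rng=rng):
    """Sample m points on the unit l2 circle (the truncation boundary).
    Scenario 3 is uniform; scenarios 1 and 2 concentrate angles
    towards / away from the direction of mu_star."""
    centre = np.arctan2(mu_star[1], mu_star[0])  # angle of mu* from origin
    if scenario == 3:
        theta = rng.uniform(0.0, 2 * np.pi, size=m)
    elif scenario == 1:                           # towards mu*
        theta = rng.vonmises(centre, 4.0, size=m)
    else:                                         # scenario 2: away from mu*
        theta = rng.vonmises(centre + np.pi, 4.0, size=m)
    return np.stack([np.cos(theta), np.sin(theta)], axis=1)
```

All three samplers return points exactly on $\partial V$; only the angular distribution differs.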

Figure 8: Example of experiment setup for measuring the effect of the boundary point distribution. Figure 9: Frequency of estimation errors ($\|\hat{\boldsymbol{\mu}} - \boldsymbol{\mu}^\star\|$) in the simple experiment setup for the three differently distributed boundary points.

See Figure 8 for a visual representation of these three scenarios. We measure the estimation error, $\|\hat{\boldsymbol{\mu}} - \boldsymbol{\mu}^\star\|$, and test whether one scenario provides a lower error on average. Figure 9 shows the distribution of all three estimation errors across 256 trials. Scenarios 1 and 3 provide comparable error distributions, which are significantly smaller than the error distribution of scenario 2, implying that the boundary either needs to be covered fully, or the boundary points need to be 'representative' of where the most significant truncation effect is.

B.7 Choosing Boundary Size for Dimension Benchmarks

Figure 10: Mean percentage of points which remain after truncation as dimension $d$ increases, where the truncation domain is an $\ell_p$ ball of radius $d^{0.98}$ for $p = 1$ (left), and $d^{0.53}$ for $p = 2$ (right).

In Section 6.3, we chose the size of the boundary to scale with $d$ so that roughly the same proportion of data points is truncated for each value of dimension $d$. The radii $d^{0.98}$ and $d^{0.53}$ for the $\ell_1$ ball and $\ell_2$ ball respectively were chosen via trial and error such that the percentage of points simulated from the Normal distribution remaining after truncation did not vary significantly. Figure 10 shows that with this choice of $\ell_1$ and $\ell_2$ ball radius, the mean percentage of points that remain after truncation stays consistent as dimension increases in both cases.
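The retained-percentage check behind Figure 10 can be approximated by Monte Carlo, as in this sketch (the sample size and function name are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def retained_fraction(d, p, alpha, n=20000, rng=rng):
    """Monte Carlo estimate of the fraction of N(0, I_d) samples that
    fall inside the l_p ball of radius d**alpha."""
    x = rng.standard_normal((n, d))
    return np.mean(np.linalg.norm(x, ord=p, axis=1) <= d ** alpha)

# e.g. compare retained_fraction(d, 1, 0.98) and retained_fraction(d, 2, 0.53)
# across d = 2, 4, 8, ... to check the retained percentage stays roughly flat.
```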

B.8 Extra Computation Details

• The inversion of $\mathbf{K}'$ is the largest computational expense when evaluating the $U$-statistic or $V$-statistic ((24) and (25) respectively). If $m > d$, which will be common as we want $m$ to be large, then $\mathbf{K}'$ is rank deficient. To circumvent this issue, we invert $\mathbf{K}' + \epsilon\mathbf{I}_m$ instead, where $\epsilon > 0$ is small. Additionally, $\mathbf{K}' + \epsilon\mathbf{I}_m$ is symmetric and positive definite, so we can exploit its Cholesky decomposition to make the inversion faster and more stable.

Overall, this inversion looks like

$$(\mathbf{K}')^{-1} \approx (\mathbf{K}' + \epsilon\mathbf{I}_m)^{-1} = (\mathbf{L}^{-1})^\top \mathbf{L}^{-1},$$

where $\mathbf{L}$ is the lower triangular matrix from the Cholesky decomposition $\mathbf{K}' + \epsilon\mathbf{I}_m = \mathbf{L}\mathbf{L}^\top$. The inversion of $\mathbf{L}$, a lower triangular matrix, requires half as many operations as inverting $\mathbf{K}' + \epsilon\mathbf{I}_m$ directly (Krishnamoorthy & Menon, 2013).

• We also make use of methods presented in Jitkrittum et al. (2017) for fast computation when constructing all kernel matrices in Python.
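The regularised Cholesky-based inversion described in the first bullet can be sketched as follows (a minimal NumPy illustration, not the paper's implementation):

```python
import numpy as np

def regularised_inverse(K, eps=1e-6):
    """Invert a (possibly rank-deficient) symmetric PSD Gram matrix K by
    Cholesky-factorising K + eps*I = L L^T, so that
    (K + eps*I)^-1 = (L^-1)^T L^-1."""
    m = K.shape[0]
    L = np.linalg.cholesky(K + eps * np.eye(m))  # lower triangular factor
    Linv = np.linalg.inv(L)                      # L is triangular, cheap to invert
    return Linv.T @ Linv
```

In practice one would typically avoid forming `Linv` explicitly and instead apply triangular solves (e.g. `scipy.linalg.solve_triangular`) against the right-hand sides, which is cheaper and more numerically stable.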
