Approximate Stein Classes for Truncated Density Estimation

Daniel J. Williams, Song Liu

arXiv:2306.00602v2 [stat.ML] (source: https://arxiv.org/html/2306.00602)
License: arXiv.org perpetual non-exclusive license

Abstract
Estimating truncated density models is difficult, as these models have intractable normalising constants and hard-to-satisfy boundary conditions. Score matching can be adapted to solve the truncated density estimation problem, but requires a continuous weighting function which takes the value zero at the boundary and is positive elsewhere. Evaluating such a weighting function (and its gradient) often requires a closed-form expression for the truncation boundary and solving a complicated optimisation problem. In this paper, we propose approximate Stein classes, which in turn lead to a relaxed Stein identity for truncated density estimation. We develop a novel discrepancy measure, the truncated kernelised Stein discrepancy (TKSD), which does not require fixing a weighting function in advance and can be evaluated using only samples on the boundary. We estimate a truncated density model by minimising the Lagrangian dual of TKSD. Finally, experiments show the accuracy of our method to be an improvement over previous works, even without an explicit functional form of the boundary.
Keywords: Density Estimation, Truncated Densities, Stein Discrepancies, Score Matching, Parameter Estimation

1 Introduction
In truncated density estimation, we are unable to view the full picture of our dataset. We are instead given access to a smaller subsample of the data, artificially truncated by a boundary. Examples of such truncation include a limited number of medical tests resulting in under-reported disease counts, or a country's borders preventing the discovery of habitat locations. In either case, a complex boundary truncates the data, which introduces difficulties in statistical parameter estimation. Regular estimation techniques, such as maximum likelihood estimation (MLE), are ill-suited, as we will explain.
When data are truncated, many typical statistical assumptions break down. One fundamental problem with estimation in a truncated space is that the probability density function (PDF), given by

$$p_{\boldsymbol{\theta}}(\mathbf{x}) = \frac{\bar{p}_{\boldsymbol{\theta}}(\mathbf{x})}{Z(\boldsymbol{\theta})}, \qquad Z(\boldsymbol{\theta}) = \int_{V} \bar{p}_{\boldsymbol{\theta}}(\mathbf{x})\, \mathrm{d}\mathbf{x},$$

cannot be fully evaluated. In this setup, $V$ is the truncated domain, which can be highly complex, and thus the integration to obtain the normalising constant, $Z(\boldsymbol{\theta})$, is intractable. This normalising constant could be approximated via numerical integration, such as with Monte Carlo methods (Kalos & Whitlock, 2009), but this comes at a significant computational expense. Recent attention has turned to estimation methods which bypass the calculation of the normalising constant entirely by working with the score function of $p_{\boldsymbol{\theta}}(\mathbf{x})$,
$$\mathbf{s}_{p_{\boldsymbol{\theta}}}(\mathbf{x}) := \nabla_{\mathbf{x}} \log p_{\boldsymbol{\theta}}(\mathbf{x}) = \nabla_{\mathbf{x}} \log \bar{p}_{\boldsymbol{\theta}}(\mathbf{x}),$$
which uniquely represents a probability distribution. Score-based estimation methods include score matching (Hyvärinen, 2005, 2007), noise-contrastive estimation (Gutmann & Hyvärinen, 2010, 2012) and minimum Stein discrepancies (Stein, 1972; Barp et al., 2019). These methods are computationally fast and accurate, and usually rely on minimising a discrepancy between the score functions of the model density $p_{\boldsymbol{\theta}}(\mathbf{x})$ and the unknown data density $q(\mathbf{x})$, whose score function is $\mathbf{s}_q := \nabla_{\mathbf{x}} \log q(\mathbf{x})$.
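As a minimal numerical illustration of why the score function bypasses the normalising constant: for an unnormalised Gaussian $\bar{p}_\mu(x) = \exp\{-(x-\mu)^2/2\}$ (an illustrative choice, not a model from the paper), $\nabla_x \log \bar{p}_\mu$ agrees with the score of the normalised density.

```python
import numpy as np

# Sketch: the score of p_theta(x) = pbar_theta(x) / Z(theta) does not depend
# on Z(theta), so it can be computed from the unnormalised density alone.
# Here pbar is an unnormalised Gaussian N(mu, 1): pbar(x) = exp(-(x - mu)^2 / 2).

def score_unnormalised(x, mu, eps=1e-5):
    """Numerical gradient of log pbar_mu(x) via central differences."""
    log_pbar = lambda t: -(t - mu) ** 2 / 2.0
    return (log_pbar(x + eps) - log_pbar(x - eps)) / (2 * eps)

def score_normalised(x, mu):
    """Analytic score of the *normalised* N(mu, 1) density."""
    return -(x - mu)

x = np.linspace(-3, 3, 7)
assert np.allclose(score_unnormalised(x, mu=0.5), score_normalised(x, mu=0.5), atol=1e-4)
```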
Score-based methods have been applied across many domains, including hypothesis testing (Liu et al., 2016; Chwialkowski et al., 2016; Xu, 2022; Wu et al., 2022), generative modelling (Song & Ermon, 2019; Song et al., 2021; Pang et al., 2020), energy based modelling (Song & Kingma, 2021), and Bayesian posterior estimation (Sharrock et al., 2022). Recently, two lines of work have been proposed for the truncated domain: truncated density estimation via score matching (Yu et al., 2021; Liu et al., 2022; Williams & Liu, 2022), called TruncSM, and truncated goodness-of-fit testing via the kernelised Stein discrepancy (KSD), denoted bounded-domain KSD (bd-KSD) (Xu, 2022).
These two lines of work both use a distance function as a weighting function on the objective. Such a function is chosen in advance such that the boundary conditions required for deriving score matching or KSD hold when the domain is truncated. The computation of this distance function can be challenging when the boundary is complex and high-dimensional, which we will demonstrate later in experiments. Further, these methods rely on knowing a functional form of the boundary, which is not always available.
In this paper, we consider a situation where the functional form of the boundary is not available to us; we can only access boundary information through a finite set of random samples. As an example, suppose we want to estimate a density model for submissions at an academic conference. We can only observe the accepted papers, not the rejected ones. Furthermore, it is difficult to provide a functional definition of the truncation boundary that distinguishes between accepted and rejected submissions. However, we can easily identify borderline submissions from the review scores, which can be seen as samples of the boundary. In this case, TruncSM and bd-KSD are not applicable due to the lack of a functional definition of the boundary, and classical methods such as MLE are intractable. To our knowledge, no method exists to estimate a truncated density when there is no functional form of the boundary. To solve this problem, we first define approximate Stein classes and a corresponding Stein discrepancy, which we refer to as the truncated kernelised Stein discrepancy (TKSD); it is computationally tractable with only samples from the boundary. By minimising the TKSD, we obtain a truncated density estimator for when the boundary's functional form is unavailable. In experiments, we show that despite the approximate nature of the Stein class, density models can be accurately estimated from truncated observations. We also provide a theoretical justification of the estimator's consistency.
Our main contributions are:

- We introduce approximate Stein classes, which in turn define approximate Stein discrepancies. Unlike earlier approaches, these Stein discrepancies are more relaxed and applicable to the truncated setting.
- These discrepancies enable density estimation on truncated datasets. The resulting estimator extends earlier Stein discrepancy estimators. We include theoretical and experimental results showing that the TKSD estimator is consistent and competitive with previous works.
2 Problem Setup

The Problem. We assume the data density, $q$, has support on $V \subset \mathbb{R}^d$, whose boundary is denoted as $\partial V$. We aim to find a model, given by $p_{\boldsymbol{\theta}}$, which best estimates $q$. However, the truncated setting poses significant challenges: the normalising constant in $p_{\boldsymbol{\theta}}$ is intractable, and score-based methods rely on a boundary condition which breaks down in this situation. Previous works (Xu, 2022; Liu et al., 2022) do address these issues. However, in this work, we assume we have no functional form of $\partial V$; instead, we have a finite set of samples, $\{\mathbf{x}'_j\}_{j=1}^m$, drawn randomly from the boundary. To the best of our knowledge, no existing density estimation methods can be applied directly.
The Aims. We aim to use unnormalised models to estimate a truncated density, bypassing the evaluation of the normalising constant. We also aim to construct an estimator that does not require a functional form of the boundary, but can still adapt to the boundary and the dataset. This leads to an estimator which is data-driven and flexible, not relying on a pre-defined weighting function, unlike the previous works of Xu (2022) and Liu et al. (2022).
Before explaining our proposed solution, we introduce prior methods for measuring discrepancies for unnormalised densities.
3 Background
Statistical density estimation is often performed by minimising a divergence between the data density, $q(\mathbf{x})$, and the model density, $p_{\boldsymbol{\theta}}(\mathbf{x})$. Since truncated densities have intractable normalising constants, we introduce divergence measures suited to unnormalised densities and discuss their generalisations to truncated supports. These methods rely on Stein's identity, which we introduce first.
3.1 Stein's Identity

Originating from Stein's method (Stein, 1972; Chen et al., 2011), a Stein class of functions enables the construction of a family of discrepancy measures (Barp et al., 2019).
Definition 3.1.

Let $q = q(\mathbf{x})$ be any smooth probability density supported on $\mathbb{R}^d$ and let $\mathcal{S}_q : \mathcal{F}_q \to \mathbb{R}$ be a map. $\mathcal{F}_q$ is a Stein class of functions if, for any $f \in \mathcal{F}_q$,

$$\mathbb{E}_q[\mathcal{S}_q f(\mathbf{x})] = 0, \tag{1}$$

where $\mathcal{S}_q$ is called a Stein operator (Gorham & Mackey, 2015).
We refer to Equation 1 as the Stein identity, which underpins much existing work in unnormalised modelling. The Langevin Stein operator (Gorham & Mackey, 2015) on $p_{\boldsymbol{\theta}}(\mathbf{x})$,

$$\mathcal{S}_{p_{\boldsymbol{\theta}}} f(\mathbf{x}) = \mathcal{T}_{p_{\boldsymbol{\theta}}} f(\mathbf{x}) := \sum_{l=1}^{d} s_{p_{\boldsymbol{\theta}},l}(\mathbf{x})\, f_l(\mathbf{x}) + \partial_{x_l} f_l(\mathbf{x}),$$

where $s_{p_{\boldsymbol{\theta}},l}(\mathbf{x}) := \partial_{x_l} \log p_{\boldsymbol{\theta}}(\mathbf{x})$, is independent of the normalising constant $Z(\boldsymbol{\theta})$: $p_{\boldsymbol{\theta}}$ enters the Stein operator only via its "proxy", the score function $\mathbf{s}_{p_{\boldsymbol{\theta}}}$. When this Langevin Stein operator is used, it is straightforward to see that Equation 1 holds:
$$\mathbb{E}_q[\mathcal{T}_q f(\mathbf{x})] = \int_{\mathbb{R}^d} q(\mathbf{x}) \left( \sum_{l=1}^{d} s_{q,l}(\mathbf{x})\, f_l(\mathbf{x}) + \partial_{x_l} f_l(\mathbf{x}) \right) \mathrm{d}\mathbf{x} = \sum_{l=1}^{d} \int_{\mathbb{R}^d} \partial_{x_l} q(\mathbf{x})\, f_l(\mathbf{x}) + q(\mathbf{x})\, \partial_{x_l} f_l(\mathbf{x})\, \mathrm{d}\mathbf{x} = 0,$$

where the last equality holds due to integration by parts and the fact that $q(\mathbf{x})$ vanishes at infinity:

$$\lim_{\|\mathbf{x}\| \to \infty} q(\mathbf{x}) = 0. \tag{2}$$
This assumption is critical, and is a key focus of this paper. It holds for many densities supported on $\mathbb{R}^d$, such as the Gaussian distribution or Gaussian mixture distributions. In the rest of this paper, we refer to this condition as the boundary condition, as it describes the behaviour of $q(\mathbf{x})$ at the boundary of its domain.
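The identity above is easy to check by Monte Carlo. In this sketch $q$ is the standard Gaussian (so $s_q(x) = -x$) and $f(x) = \sin(x)$ is one illustrative candidate member of a Stein class; the empirical average of the Langevin Stein operator vanishes up to Monte Carlo error.

```python
import numpy as np

# Monte Carlo check of Stein's identity E_q[T_q f(x)] = 0 for q = N(0, 1)
# and the Langevin Stein operator T_q f(x) = s_q(x) f(x) + f'(x),
# with the illustrative choice f(x) = sin(x).
rng = np.random.default_rng(0)
x = rng.standard_normal(200_000)

s_q = -x                                  # score of the standard Gaussian
stein_op = s_q * np.sin(x) + np.cos(x)    # T_q f(x)

assert abs(stein_op.mean()) < 0.05        # approx. 0 up to Monte Carlo error
```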
3.2 Divergence for Unnormalised Densities

When $V = \mathbb{R}^d$ and the boundary condition in Equation 2 holds, we describe two computationally tractable discrepancy measures for unnormalised density models $p_{\boldsymbol{\theta}}(\mathbf{x})$: the Stein discrepancy and the score matching divergence. Both divergences rely on Stein's identity to derive a tractable form.
3.2.1 Classical Stein Discrepancy

Gorham & Mackey (2015) use Stein's identity to define a Stein discrepancy: the supremum of the differences between expected Stein operators for two densities $q(\mathbf{x})$ and $p_{\boldsymbol{\theta}}(\mathbf{x})$,

$$D_{\mathrm{SD}}(p_{\boldsymbol{\theta}} \mid q) := \sup_{f \in \mathcal{F}_q} \left( \mathbb{E}_q[\mathcal{T}_{p_{\boldsymbol{\theta}}} f(\mathbf{x})] - \mathbb{E}_{p_{\boldsymbol{\theta}}}[\mathcal{T}_{p_{\boldsymbol{\theta}}} f(\mathbf{x})] \right) = \sup_{f \in \mathcal{F}_q} \mathbb{E}_q[\mathcal{T}_{p_{\boldsymbol{\theta}}} f(\mathbf{x})], \tag{3}$$

where the second equality holds as $\mathcal{F}_q$ is a Stein class. The Stein discrepancy can be interpreted as the maximum violation of Stein's identity. $f$ is referred to as the discriminatory function, as it discriminates between $q(\mathbf{x})$ and $p_{\boldsymbol{\theta}}(\mathbf{x})$. However, taking the supremum in Equation 3 across $\mathcal{F}_q$ is a challenging optimisation problem (Gorham & Mackey, 2015).
Stein discrepancies have seen a lot of recent development, including extensions to non-Euclidean domains (Shi et al., 2021; Xu & Matsuda, 2021), discrete operators (Yang et al., 2018), stochastic operators (Gorham et al., 2020) and diffusion-based operators (Gorham & Mackey, 2015; Gorham et al., 2020).
3.2.2 Kernelised Stein Discrepancy (KSD)

When we restrict the function class over which the supremum is taken to a Reproducing Kernel Hilbert Space (RKHS), we can derive the kernelised Stein discrepancy (KSD), for which we follow a similar definition to Chwialkowski et al. (2016) and Liu et al. (2016).

First, let $\mathcal{H}$ be an RKHS equipped with positive definite kernel $k$, and let $\mathcal{H}^d$ denote the product RKHS with $d$ elements, where $f = (f_1, \ldots, f_d) \in \mathcal{H}^d$, defined with inner product $\langle f, f' \rangle_{\mathcal{H}^d} = \sum_{l=1}^{d} \langle f_l, f'_l \rangle_{\mathcal{H}}$ and norm $\|f\|_{\mathcal{H}^d}^2 = \sum_{l=1}^{d} \langle f_l, f_l \rangle_{\mathcal{H}}$. By the reproducing property, any evaluation of $f \in \mathcal{H}^d$ can be written as

$$f(\mathbf{x}) = \langle f, k(\mathbf{x}, \cdot) \rangle_{\mathcal{H}^d} = \sum_{l=1}^{d} \langle f_l, k(\mathbf{x}, \cdot) \rangle_{\mathcal{H}}. \tag{4}$$
Taking the supremum over $\mathcal{H}^d$, and including the restriction of $f$ to the RKHS unit ball, i.e. $\|f\|_{\mathcal{H}^d} \le 1$, gives rise to the KSD

$$D_{\mathrm{KSD}}(p_{\boldsymbol{\theta}} \mid q) := \sup_{f \in \mathcal{H}^d,\, \|f\|_{\mathcal{H}^d} \le 1} \mathbb{E}_q[\mathcal{T}_{p_{\boldsymbol{\theta}}} f(\mathbf{x})] = \left\| \mathbb{E}_q[\mathcal{T}_{p_{\boldsymbol{\theta}}} k(\mathbf{x}, \cdot)] \right\|_{\mathcal{H}^d}. \tag{5}$$
The KSD has a closed-form expression as indicated by Equation 5. Moreover, the squared KSD can be expanded into a double expectation

$$D_{\mathrm{KSD}}(p_{\boldsymbol{\theta}} \mid q)^2 = \left\| \mathbb{E}_q[\mathcal{T}_{p_{\boldsymbol{\theta}}} k(\mathbf{x}, \cdot)] \right\|_{\mathcal{H}^d}^2 = \mathbb{E}_{\mathbf{x} \sim q}\, \mathbb{E}_{\mathbf{y} \sim q}\left[ \sum_{l=1}^{d} u_l(\mathbf{x}, \mathbf{y}) \right], \tag{6}$$

where

$$u_l(\mathbf{x}, \mathbf{y}) = s_{p,l}(\mathbf{x})\, s_{p,l}(\mathbf{y})\, k(\mathbf{x}, \mathbf{y}) + s_{p,l}(\mathbf{x})\, \partial_{y_l} k(\mathbf{x}, \mathbf{y}) + s_{p,l}(\mathbf{y})\, \partial_{x_l} k(\mathbf{x}, \mathbf{y}) + \partial_{x_l} \partial_{y_l} k(\mathbf{x}, \mathbf{y}). \tag{7}$$

This divergence can be fully evaluated using samples from $q(\mathbf{x})$ to approximate the expectation in Equation 6. Further, Chwialkowski et al. (2016) showed that $D_{\mathrm{KSD}}(p_{\boldsymbol{\theta}} \mid q) = 0$ if and only if $p_{\boldsymbol{\theta}} = q$, making $D_{\mathrm{KSD}}(p_{\boldsymbol{\theta}} \mid q)$ a good discrepancy measure between distributions.
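As a sketch, the double expectation in Equations 6-7 can be approximated by averaging over samples. The following assumes a Gaussian kernel (whose derivatives are available in closed form) and a standard Gaussian model score, both illustrative choices rather than the paper's experimental setup.

```python
import numpy as np

# Sketch of the squared KSD of Eq. (6)-(7), averaged over samples, for a
# Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).

def ksd_sq(X, score, sigma=1.0):
    n, d = X.shape
    S = score(X)                                   # (n, d) model scores
    diff = X[:, None, :] - X[None, :, :]           # x - y, shape (n, n, d)
    sq = (diff ** 2).sum(-1)
    K = np.exp(-sq / (2 * sigma ** 2))
    # u(x, y) = s(x).s(y) k + s(x).grad_y k + s(y).grad_x k + sum_l d2k/dx_l dy_l
    term1 = (S @ S.T) * K
    term2 = ((S[:, None, :] * diff).sum(-1)) * K / sigma ** 2    # s(x).(x-y) k / s^2
    term3 = -((S[None, :, :] * diff).sum(-1)) * K / sigma ** 2   # s(y).(y-x) k / s^2
    term4 = K * (d / sigma ** 2 - sq / sigma ** 4)
    return (term1 + term2 + term3 + term4).mean()

rng = np.random.default_rng(0)
score = lambda X: -X                                # score of N(0, I)
ksd_match = ksd_sq(rng.standard_normal((300, 2)), score)
ksd_mismatch = ksd_sq(rng.standard_normal((300, 2)) + 2.0, score)
assert ksd_match < ksd_mismatch                     # KSD detects the mismatch
```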
3.2.3 Score Matching

The score matching (or Fisher-Hyvärinen) divergence, initially developed by Hyvärinen (2005), is the expected squared difference between the score functions of the two densities $p_{\boldsymbol{\theta}}(\mathbf{x})$ and $q(\mathbf{x})$:

$$D_{\mathrm{SM}}(p_{\boldsymbol{\theta}} \mid q) = \mathbb{E}_q\left[ \left\| g(\mathbf{x})^{1/2} \circ \left( \mathbf{s}_{p_{\boldsymbol{\theta}}} - \mathbf{s}_q \right) \right\|^2 \right], \tag{8}$$

where $\circ$ denotes element-wise multiplication. The inclusion of the weighting function $g(\mathbf{x})$ yields the generalised score matching divergence (Lin et al., 2016; Yu et al., 2016, 2019), and $g(\mathbf{x}) = \mathbf{1}$ yields the classic score matching divergence.

It can be seen that $D_{\mathrm{SM}}(p_{\boldsymbol{\theta}} \mid q)$ is simply the expected squared difference between the two Langevin Stein operators applied with weight $\sqrt{g(\mathbf{x})}$, i.e. $\mathcal{T}_{p_{\boldsymbol{\theta}}} \sqrt{g}(\mathbf{x})$ and $\mathcal{T}_q \sqrt{g}(\mathbf{x})$. Using Equation 1, $D_{\mathrm{SM}}(p_{\boldsymbol{\theta}} \mid q)$ can be rewritten as

$$\mathbb{E}_q\left[ \sum_{l=1}^{d} g_l(\mathbf{x}) \left( s_{p,l}^2 + 2\, \partial_{x_l} s_{p,l} \right) + 2\, \partial_{x_l} g_l(\mathbf{x})\, s_{p,l} \right] + C_q,$$

where $C_q = \mathbb{E}_q\left[ \sum_{l=1}^{d} g_l(\mathbf{x})\, s_{q,l}^2 \right]$ is constant in $\boldsymbol{\theta}$ and can be safely ignored when minimising with respect to $\boldsymbol{\theta}$.
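With the classic weight $g(\mathbf{x}) = \mathbf{1}$, the tractable objective above reduces to $\mathbb{E}_q[\sum_l s_{p,l}^2 + 2\,\partial_{x_l} s_{p,l}]$. A minimal sketch, assuming the hypothetical model family $p_\theta = \mathcal{N}(\theta, 1)$, where $s_p(x) = -(x-\theta)$ and $\partial_x s_p(x) = -1$, so the objective becomes $\mathbb{E}_q[(x-\theta)^2 - 2]$:

```python
import numpy as np

# Sketch: score matching with g = 1 for the hypothetical model N(theta, 1).
# Objective: E_q[ s_p(x)^2 + 2 s_p'(x) ] = E_q[ (x - theta)^2 - 2 ]
# (the constant C_q is dropped). Minimised over a grid of theta values.
rng = np.random.default_rng(0)
x = rng.normal(loc=1.5, scale=1.0, size=50_000)     # data from q = N(1.5, 1)

thetas = np.linspace(-1, 4, 501)
obj = np.array([np.mean((x - t) ** 2 - 2) for t in thetas])
theta_hat = thetas[obj.argmin()]

assert abs(theta_hat - 1.5) < 0.1                   # minimiser recovers the mean
```

Here the grid search is purely illustrative; in practice the objective would be minimised with a gradient-based optimiser.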
3.3 Divergence for Densities with a Truncated Support

Let $V \subset \mathbb{R}^d$ be a domain whose boundary is denoted by $\partial V$. The boundary condition on the density $q(\mathbf{x})$ given in Equation 2 needs to hold on the boundary $\partial V$, i.e., the truncated density must satisfy $q(\mathbf{x}') = 0$ for all $\mathbf{x}' \in \partial V$. However, this is, in general, not true for truncated densities. For example, a 1D truncated unit Gaussian distribution on the interval $[-1, 1]$ has non-zero density at the boundary of the support, at exactly $x = -1$ and $x = 1$. When the boundary condition breaks down, the function families presented for the classical SD, KSD and score matching are no longer Stein classes, and these divergences are no longer computationally tractable.
We outline two recent methods for circumventing this issue by modifying the KSD and the score matching divergence.
3.3.1 Bounded-Domain KSD (bd-KSD)

Motivated by performing goodness-of-fit testing on truncated domains, Xu (2022) proposes a modified Stein operator, given by

$$\mathcal{T}_{p,h} f(\mathbf{x}) = \sum_{l=1}^{d} s_{p,l}(\mathbf{x})\, f_l(\mathbf{x})\, h(\mathbf{x}) + \partial_{x_l}\left( f_l(\mathbf{x})\, h(\mathbf{x}) \right) \tag{9}$$

for $f \in \mathcal{H}^d$, where $h(\mathbf{x})$ is a weighting function for which $h(\mathbf{x}') = 0$ for all $\mathbf{x}' \in \partial V$. This modified Stein operator places the boundary conditions on $h$ instead of $q$, and they can be satisfied by choosing $h$ carefully.
Using this Stein operator in Equation 3 and taking the supremum over $f \in \mathcal{H}^d$ gives an alternative definition of KSD for bounded domains, called the bounded-domain kernelised Stein discrepancy (bd-KSD):

$$D_{\text{bd-KSD}}(p_{\boldsymbol{\theta}} \mid q)^2 = \left\| \mathbb{E}_q[\mathcal{T}_{p_{\boldsymbol{\theta}},h} k(\mathbf{x}, \cdot)] \right\|_{\mathcal{H}^d}^2.$$

The form of $h$ is not explicitly defined, but the authors recommend a distance function. For example, if $V$ is the unit ball, $h(\mathbf{x}) = 1 - \|\mathbf{x}\|^{\beta}$ for a chosen power $\beta$.
3.3.2 Truncated Score Matching (TruncSM)

When the support of $q$ is the non-negative orthant, $\mathbb{R}_+^d$, Hyvärinen (2007) and Yu et al. (2019) impose restrictions on the weighting function $g$ in the score matching divergence, given in Equation 8, such that $g(\mathbf{x}) \to \mathbf{0}$ as $\|\mathbf{x}\| \to 0$, for example $g(\mathbf{x}) = \mathbf{x}$. Yu et al. (2021) and Liu et al. (2022) generalised this constraint on $g$ to any bounded domain $V$, imposing the restrictions $g(\mathbf{x}) > \mathbf{0}$ for all $\mathbf{x} \in V$ and $g(\mathbf{x}') = \mathbf{0}$ for all $\mathbf{x}' \in \partial V$.

Liu et al. (2022) showed that maximising a Stein discrepancy with respect to $g$ gives a solution $g_0 = (\tilde{g}_0, \ldots, \tilde{g}_0)$, where

$$\tilde{g}_0(\mathbf{x}) = \min_{\mathbf{x}' \in \partial V} \mathrm{dist}(\mathbf{x}, \mathbf{x}'), \tag{10}$$

the smallest distance between $\mathbf{x}$ and the boundary $\partial V$. Liu et al. (2022) proposed the use of several distance functions, such as the $\ell_1$ and $\ell_2$ distances. The generalised score matching divergence with $g_0$ is referred to as TruncSM.
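The weight in Equation 10 can be sketched numerically. For the 2D unit ball (an illustrative domain) the exact $\ell_2$ distance to the boundary is $1 - \|\mathbf{x}\|$; the same quantity can also be approximated from sampled boundary points alone, which foreshadows the setting of this paper.

```python
import numpy as np

# Sketch of the TruncSM weight in Eq. (10): the l2 distance from each data
# point to the truncation boundary. For the unit ball the exact distance is
# 1 - ||x||; it is also approximated below from m sampled boundary points.
rng = np.random.default_rng(0)

# data points inside the 2D unit ball
X = rng.uniform(-1, 1, size=(200, 2))
X = X[np.linalg.norm(X, axis=1) < 1]

# m points sampled uniformly on the unit circle (the boundary)
m = 2_000
angles = rng.uniform(0, 2 * np.pi, m)
B = np.c_[np.cos(angles), np.sin(angles)]

g0_exact = 1.0 - np.linalg.norm(X, axis=1)
g0_approx = np.linalg.norm(X[:, None, :] - B[None, :, :], axis=-1).min(axis=1)

assert np.max(np.abs(g0_exact - g0_approx)) < 0.05
```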
4 Approximate Stein Classes
So far, previous works have assumed a functional form of the truncation boundary, so that $h$ can be precisely designed and Stein's identity holds exactly. In our setting, the functional form of the truncation boundary is unavailable, making the design of a Stein class infeasible; we cannot design a class of functions with appropriate boundary conditions that satisfy Stein's identity exactly. However, the truncation boundary is known to us approximately as a set of boundary points, so we can design a set of functions such that Stein's identity holds "approximately", as we show below.
Let us first define an approximate Stein class and an approximate Stein identity.
Definition 4.1.

Let $q = q(\mathbf{x})$ be a smooth probability density function on $V \subseteq \mathbb{R}^d$. $\tilde{\mathcal{F}}_q^m$ is an approximate Stein class of $q$ if, for all $\tilde{f} \in \tilde{\mathcal{F}}_q^m$,

$$\mathbb{E}_q[\mathcal{S}_{q,m} \tilde{f}(\mathbf{x})] = O_p(\varepsilon_m), \tag{11}$$

where $\varepsilon_m$ is a monotonically decreasing function of $m$, and $\mathcal{S}_{q,m}$ is a Stein operator that depends on $m$ in some way. $O_p(\varepsilon_m)$ denotes a sequence indexed by $m$ which is bounded in probability by $\varepsilon_m$.

We denote Equation 11 as the approximate Stein identity, and the approximate Stein class $\tilde{\mathcal{F}}_q^m$ can be written more simply as

$$\tilde{\mathcal{F}}_q^m = \left\{ \tilde{f} : V \to \mathbb{R} \,\middle|\, \mathbb{E}_q[\mathcal{S}_{q,m} \tilde{f}(\mathbf{x})] = O_p(\varepsilon_m) \right\}. \tag{12}$$
The classical Stein class given by Equation 1 requires Stein's identity to hold exactly, whereas the approximate Stein class in Equation 12 defines a set of functions for which $\mathbb{E}_q[\mathcal{S}_{q,m} \tilde{f}(\mathbf{x})]$ is bounded by a decreasing sequence. Similarly to the classical Stein discrepancy, we propose the use of the Langevin Stein operator, i.e. $\mathcal{S}_{q,m} = \mathcal{T}_{q,m}$. We aim to show this is a flexible class which can be used across various applications, of which we provide examples below.
Example: Latent Variable Models.

Kanagawa et al. (2019) presented a variation of KSD for testing the goodness-of-fit of models with unobserved latent variables $\mathbf{z}$, where the density of $\mathbf{x}$ is given by $p(\mathbf{x}) = \int_{\mathbb{R}^d} p(\mathbf{x} \mid \mathbf{z})\, p(\mathbf{z})\, \mathrm{d}\mathbf{z}$, and thus the score function can be written as

$$\mathbf{s}_p(\mathbf{x}) = \frac{\nabla_{\mathbf{x}} p(\mathbf{x})}{p(\mathbf{x})} = \int_{\mathbb{R}^d} \frac{\nabla_{\mathbf{x}} p(\mathbf{x} \mid \mathbf{z})}{p(\mathbf{x} \mid \mathbf{z})} \cdot \frac{p(\mathbf{x} \mid \mathbf{z})\, p(\mathbf{z})}{p(\mathbf{x})}\, \mathrm{d}\mathbf{z} = \mathbb{E}_{p(\mathbf{z} \mid \mathbf{x})}\left[ \mathbf{s}_p(\mathbf{x} \mid \mathbf{z}) \right]. \tag{13}$$
We show that this existing modification of KSD gives rise to an approximate Stein class. To evaluate the expectation over $p(\mathbf{z} \mid \mathbf{x})$, Kanagawa et al. (2019) recommend approximating it with a Monte Carlo estimate

$$\mathbb{E}_{p(\mathbf{z} \mid \mathbf{x})}\left[ \mathbf{s}_p(\mathbf{x} \mid \mathbf{z}) \right] = \frac{1}{m} \sum_{i=1}^{m} \mathbf{s}_p(\mathbf{x} \mid \mathbf{z}_i) + O_p(\varepsilon_m), \tag{14}$$

where we assume access to $m$ unbiased samples $\{\mathbf{z}_i\}_{i=1}^m \sim p(\mathbf{z} \mid \mathbf{x})$, and $O_p(\varepsilon_m)$ is the Monte Carlo approximation error. This approximation leads to

$$\mathbb{E}_q[\mathcal{T}_p f(\mathbf{x})] = O_p(\varepsilon_m) \to 0,$$

and therefore this variation of KSD uses an approximate Stein class. See Section A.1 for more details.
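The Monte Carlo score in Equation 14 can be sketched on a toy latent variable model (an illustrative choice): $z \sim \mathcal{N}(0,1)$, $x \mid z \sim \mathcal{N}(z, 1)$, so marginally $x \sim \mathcal{N}(0, 2)$ and the exact score is $s_p(x) = -x/2$; the posterior $p(z \mid x) = \mathcal{N}(x/2, 1/2)$ is known in closed form here, supplying the unbiased posterior samples the approximation assumes.

```python
import numpy as np

# Sketch of Eq. (14) on a toy latent variable model: z ~ N(0,1), x|z ~ N(z,1).
# Exact marginal score: s_p(x) = -x/2. Posterior: p(z|x) = N(x/2, 1/2).
# s_p(x|z) = -(x - z), averaged over posterior samples z_i ~ p(z|x).
rng = np.random.default_rng(0)
x = 1.7
m = 100_000

z = rng.normal(loc=x / 2, scale=np.sqrt(0.5), size=m)   # z_i ~ p(z|x)
score_mc = np.mean(-(x - z))                            # Monte Carlo score estimate

assert abs(score_mc - (-x / 2)) < 0.02                  # O_p(eps_m) error, small for large m
```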
5 KSD for Truncated Density Estimation

Let $q(\mathbf{x})$ be a smooth probability density function with truncated support $V \subset \mathbb{R}^d$ and boundary $\partial V$. When $q(\mathbf{x})$ has a truncated support, $\mathcal{H}^d$ is not a Stein class. To show this, let $f \in \mathcal{H}^d$; then Stein's identity can be written as

$$\mathbb{E}_q[\mathcal{T}_q f(\mathbf{x})] = \int_{V} q(\mathbf{x}) \left( \sum_{l=1}^{d} s_{q,l}(\mathbf{x})\, f_l(\mathbf{x}) + \partial_{x_l} f_l(\mathbf{x}) \right) \mathrm{d}\mathbf{x} = \sum_{l=1}^{d} \int_{V} \partial_{x_l} q(\mathbf{x})\, f_l(\mathbf{x}) + q(\mathbf{x})\, \partial_{x_l} f_l(\mathbf{x})\, \mathrm{d}\mathbf{x} = \oint_{\partial V} q(\mathbf{x}) \sum_{l=1}^{d} f_l(\mathbf{x})\, \hat{u}_l(\mathbf{x})\, \mathrm{d}S, \tag{15}$$

where $\oint_{\partial V}$ is the surface integral over the boundary $\partial V$, $\hat{u}_1(\mathbf{x}), \ldots, \hat{u}_d(\mathbf{x})$ are the components of the unit outward normal vector on $\partial V$, and $\mathrm{d}S$ is the surface element on $\partial V$.
To satisfy the Stein identity, Equation 15 must equal zero. In the untruncated setting, Equation 15 is zero due to the boundary condition on $q(\mathbf{x})$, but in the truncated setting the density is, in general, nonzero on the boundary $\partial V$. Prior methods such as Xu (2022) choose a pre-defined weighting function for which Equation 15 is exactly zero at the boundary. We instead aim to show that Equation 15 is $O_p(\varepsilon_m)$ under an approximate Stein class, as detailed in the remainder of this section.
5.1 Approximate Stein Class for Truncated Densities

Let us first consider a setting where the boundary $\partial V$ is known analytically. Define a modified product RKHS as

$$\mathcal{H}_0^d = \left\{ f \in \mathcal{H}^d \,\middle|\, f(\mathbf{x}') = \mathbf{0} \;\forall\, \mathbf{x}' \in \partial V,\; \|f\|_{\mathcal{H}^d}^2 \le 1 \right\}. \tag{16}$$

Lemma 5.1.

Let $q$ be a smooth density supported on $V$. Then, for any $f \in \mathcal{H}_0^d$,

$$\mathbb{E}_q[\mathcal{T}_q f(\mathbf{x})] = 0. \tag{17}$$
As with the classic Stein identity, the proof follows from a simple integration by parts; see Section A.2. Lemma 5.1 shows that $\mathcal{H}_0^d$ is a proper Stein class of $q$. $\mathcal{H}_0^d$ defines a large class of functions, and the operator proposed by Xu (2022), given in Equation 9, yields one example from this family.
Let us now consider the setting where $\partial V$ is not known exactly. The information about the boundary is provided by $\partial\tilde{V} = \{\mathbf{x}'_j\}_{j=1}^m$, a finite set of points randomly sampled from $\partial V$. Define

$$\mathcal{H}_{0,m}^d = \left\{ f \in \mathcal{H}^d \,\middle|\, f(\mathbf{x}') = \mathbf{0} \;\forall\, \mathbf{x}' \in \partial\tilde{V},\; \|f\|_{\mathcal{H}^d}^2 \le 1 \right\}, \tag{18}$$

which can be considered an approximate version of $\mathcal{H}_0^d$ using the finite set $\partial\tilde{V}$. The benefit of using $\mathcal{H}_{0,m}^d$ over $\mathcal{H}_0^d$ is that this class can be constructed using only "partial" boundary information, i.e., $\partial\tilde{V}$, without knowing the explicit expression of $\partial V$.
First, we show specific properties of the relationship between $\partial\tilde{V}$ and $\partial V$.
Lemma 5.2.

Let $f \in \mathcal{H}_0^d$ and $\tilde{f} \in \mathcal{H}_{0,m}^d$. Assume that $\partial\tilde{V}$ is $\varepsilon_m$-dense in $\partial V$ with some probability. Further assume that $f_l$ and $\tilde{f}_l$ are $C$-Lipschitz continuous for all $l = 1, \ldots, d$. Then

$$\left| f_l(\mathbf{x}') - \tilde{f}_l(\mathbf{x}') \right| = O_p(\varepsilon_m)$$

for any $\mathbf{x}' \in \partial V$.
The proof follows from applying the Lipschitz property of $f_l$ and $\tilde{f}_l$ and then the triangle inequality; see Section A.3. Lemma 5.2 establishes a connection between $\mathcal{H}_0^d$ and $\mathcal{H}_{0,m}^d$, relying on the assumption that $\partial\tilde{V}$ is $\varepsilon_m$-dense in $\partial V$. We now show that under some mild conditions this assumption holds with high probability.
Proposition 5.3.

Assume $\{\mathbf{x}'_j\}_{j=1}^m$ are samples drawn from the uniform distribution on $\partial V$. Let $L(V)$ denote the $(d-1)$-surface area of a bounded domain $V \subset \mathbb{R}^d$, with $L(V) < \infty$. Let $B_{\varepsilon_m}(\mathbf{x}')$ denote a ball of radius $\varepsilon_m$ centred on $\mathbf{x}'$, and let $V(d) = \pi^{d/2} / \Gamma(\tfrac{d}{2} + 1)$. For all $\varepsilon_m$ such that

$$\varepsilon_m \ge \left( \frac{L(V)}{V(d)} \left[ 1 - 0.05^{1/m} \right] \right)^{1/d}, \tag{19}$$

we have

$$\mathbb{P}\left( \partial\tilde{V} \text{ is } \varepsilon_m\text{-dense in } \partial V \right) \ge 0.95.$$
For the proof, see Section A.4. Equation 19 shows the relationship between $m$ and $\varepsilon_m$, which corresponds to how "close" $\partial\tilde{V}$ is to $\partial V$.
Remark 5.4.

Our numerical investigation (Section B.2) shows that the bound on $\varepsilon_m$ defined in Equation 19 is a decreasing function of $m$, and is not sensitive to the value of $L(V)$.
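A numerical sketch of the bound $\varepsilon_m \ge (L(V)[1 - 0.05^{1/m}]/V(d))^{1/d}$ with $V(d) = \pi^{d/2}/\Gamma(d/2+1)$; the claim that it decreases in $m$ can be checked directly. Here $L(V) = 2\pi$, the perimeter of the 2D unit disc, is an illustrative choice.

```python
import numpy as np
from scipy.special import gamma

# Numerical sketch of the bound in Eq. (19):
# eps_m >= ( L(V) * (1 - 0.05**(1/m)) / V(d) )**(1/d),
# with V(d) = pi^(d/2) / Gamma(d/2 + 1).
def eps_bound(m, d, L):
    V_d = np.pi ** (d / 2) / gamma(d / 2 + 1)
    return (L * (1 - 0.05 ** (1 / m)) / V_d) ** (1 / d)

bounds = np.array([eps_bound(m, d=2, L=2 * np.pi) for m in [10, 100, 1000, 10000]])
assert np.all(np.diff(bounds) < 0)   # the bound decreases as m grows (Remark 5.4)
```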
Proposition 5.3 shows that Lemma 5.2 holds with high probability, which in turn enables us to show that $\mathcal{H}_{0,m}^d$ is indeed an approximate Stein class.
Theorem 5.5.

Assume the conditions specified in Lemma 5.2 hold. Let $q$ be a smooth density supported on $V$. Then, for any $\tilde{f} \in \mathcal{H}_{0,m}^d$,

$$\mathbb{E}_q[\mathcal{T}_q \tilde{f}(\mathbf{x})] = O_p(\varepsilon_m). \tag{20}$$
The proof again follows from integration by parts, then applying Lemma 5.2; see Section A.5. This result paves the way for designing a new type of Stein divergence that measures the difference between two distributions when their domain $V$ is truncated.
5.2 Truncated Kernelised Stein Discrepancy (TKSD) and Density Estimation

Let $q$ and $p_{\boldsymbol{\theta}}$ be two smooth densities supported on $V$. We can construct a Stein discrepancy measure, called the truncated kernelised Stein discrepancy (TKSD), given by

$$D_{\mathrm{TKSD}}(p_{\boldsymbol{\theta}} \mid q) := \sup_{f \in \mathcal{H}_{0,m}^d} \mathbb{E}_q[\mathcal{T}_{p_{\boldsymbol{\theta}}} f(\mathbf{x})]. \tag{21}$$

As with the classical SD and KSD, TKSD can be intuitively thought of as the maximum violation of Stein's identity, here with respect to the approximate Stein class $\mathcal{H}_{0,m}^d$. It can be used to distinguish two distributions when their domain $V$ is truncated but the boundary is not known analytically.

$D_{\mathrm{TKSD}}(p_{\boldsymbol{\theta}} \mid q)$ serves as the discrepancy measure between densities. Later, we propose to estimate a truncated density by minimising this discrepancy.
Next, we show that there is an analytic solution to the constrained optimisation problem in Equation 21.
Theorem 5.6.

$D_{\mathrm{TKSD}}(p_{\boldsymbol{\theta}} \mid q)^2$ can be written as

$$\sum_{l=1}^{d} \mathbb{E}_{\mathbf{x} \sim q}\, \mathbb{E}_{\mathbf{y} \sim q}\left[ u_l(\mathbf{x}, \mathbf{y}) - \boldsymbol{f}_l(\mathbf{x})^\top (K')^{-1} \boldsymbol{f}_l(\mathbf{y}) \right], \tag{22}$$

where $u_l(\mathbf{x}, \mathbf{y})$ is given by Equation 7, $\boldsymbol{f}_l(\mathbf{z}) = s_{p,l}(\mathbf{z})\, \mathbf{k}_{\mathbf{z},\mathbf{x}'}^\top + (\partial_{z_l} \mathbf{k}_{\mathbf{z},\mathbf{x}'})^\top$, $\mathbf{k}_{\mathbf{z},\mathbf{x}'} = [k(\mathbf{z}, \mathbf{x}'_1), \ldots, k(\mathbf{z}, \mathbf{x}'_m)]$, $\Phi_{\mathbf{x}'} = [k(\mathbf{x}'_1, \cdot), \ldots, k(\mathbf{x}'_m, \cdot)]^\top$ and $K' = \Phi_{\mathbf{x}'} \Phi_{\mathbf{x}'}^\top$.
The proof relies on solving a Lagrangian dual problem; see Section A.6. This result gives a closed-form loss function for $D_{\mathrm{TKSD}}(p_{\boldsymbol{\theta}} \mid q)^2$, which is not straightforward to obtain since the constraints in $\mathcal{H}_{0,m}^d$ are enforced on a finite set of points in $\partial V$. Equation 22 can be decomposed into the "KSD part", given by the $u_l(\mathbf{x}, \mathbf{y})$ term, and the "truncated part", given by $\boldsymbol{f}_l(\mathbf{x})^\top (K')^{-1} \boldsymbol{f}_l(\mathbf{y})$, which comes from solving for the Lagrangian dual parameter. Equation 22 is also linear in $d$, so its evaluation cost only increases linearly with $d$.
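The two parts of Equation 22 can be sketched in 1D. The following assumes a Gaussian kernel and the hypothetical model family $p_\theta = \mathcal{N}(\theta, 1)$ (score $-(x - \theta)$), with the data density a standard Gaussian truncated to $[-1, 1]$, whose boundary samples are just $\{-1, 1\}$; the small ridge added to $K'$ is an implementation choice for numerical stability, not part of the paper's derivation.

```python
import numpy as np

# Sketch of the empirical TKSD^2 (cf. Eq. (22)/(25)) in 1D, Gaussian kernel,
# hypothetical model p_theta = N(theta, 1) with score s(x) = -(x - theta).
def tksd_sq(x, x_bnd, theta, sigma=1.0, ridge=1e-8):
    s = -(x - theta)                                        # model score at data
    d2 = (x[:, None] - x[None, :]) ** 2
    K = np.exp(-d2 / (2 * sigma ** 2))
    # "KSD part": u(x, y) for the Gaussian kernel in 1D (cf. Eq. (7))
    U = K * (np.outer(s, s)
             + s[:, None] * (x[:, None] - x[None, :]) / sigma ** 2
             - s[None, :] * (x[:, None] - x[None, :]) / sigma ** 2
             + 1 / sigma ** 2 - d2 / sigma ** 4)
    # "truncated part": f(z) = s(z) k(z, x') + d/dz k(z, x'), K' = k(x', x')
    Kb = np.exp(-(x[:, None] - x_bnd[None, :]) ** 2 / (2 * sigma ** 2))
    dKb = -(x[:, None] - x_bnd[None, :]) / sigma ** 2 * Kb  # d/dz k(z, x'_j)
    F = s[:, None] * Kb + dKb                               # (n, m)
    Kpp = np.exp(-(x_bnd[:, None] - x_bnd[None, :]) ** 2 / (2 * sigma ** 2))
    C = F @ np.linalg.solve(Kpp + ridge * np.eye(len(x_bnd)), F.T)
    return (U - C).mean()

rng = np.random.default_rng(0)
# data: N(0, 1) truncated to [-1, 1]; boundary "samples" are just {-1, 1}
x = rng.standard_normal(5_000)
x = x[np.abs(x) < 1][:500]
bnd = np.array([-1.0, 1.0])

assert tksd_sq(x, bnd, theta=0.0) < tksd_sq(x, bnd, theta=2.0)
```

The estimator of Equation 26 would then be obtained by passing `theta -> tksd_sq(x, bnd, theta)` to a generic optimiser.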
Next, we show that the TKSD can be approximated with samples from ๐ .
Theorem 5.7.

Let $\{\mathbf{x}_i\}_{i=1}^n$ be a set of samples from $q(\mathbf{x})$. Then

$$\hat{D}_{\mathrm{TKSD}}(p_{\boldsymbol{\theta}} \mid q)^2 = \frac{2}{n(n-1)} \sum_{i=1}^{n} \sum_{j=i+1}^{n} h(\mathbf{x}_i, \mathbf{x}_j) \tag{23}$$

is an unbiased estimate of $D_{\mathrm{TKSD}}(p_{\boldsymbol{\theta}} \mid q)^2$, where

$$h(\mathbf{x}_i, \mathbf{x}_j) = \sum_{l=1}^{d} u_l(\mathbf{x}_i, \mathbf{x}_j) - \boldsymbol{f}_l(\mathbf{x}_i)^\top (K')^{-1} \boldsymbol{f}_l(\mathbf{x}_j), \tag{24}$$

assuming that $\mathbb{E}_{\mathbf{x} \sim q}\, \mathbb{E}_{\mathbf{y} \sim q}\left[ h(\mathbf{x}, \mathbf{y})^2 \right] < \infty$.
The proof follows directly from the definition of a $U$-statistic (Serfling, 2009). Additionally, we can define a biased estimate of $D_{\mathrm{TKSD}}(p_{\boldsymbol{\theta}} \mid q)^2$ via a $V$-statistic,

$$\hat{D}_{\mathrm{TKSD}}(p_{\boldsymbol{\theta}} \mid q)^2 = \frac{1}{n^2} \sum_{i=1}^{n} \sum_{j=1}^{n} h(\mathbf{x}_i, \mathbf{x}_j). \tag{25}$$

The $U$-statistic and $V$-statistic for TKSD allow us to evaluate $D_{\mathrm{TKSD}}(p_{\boldsymbol{\theta}} \mid q)^2$ via $n$ samples from $q$. In empirical experiments, the $V$-statistic seems to give better overall performance than the $U$-statistic.
Finally, we define our proposed estimator for an unnormalised truncated density by minimising TKSD over the density parameter $\boldsymbol{\theta}$:

$$\hat{\boldsymbol{\theta}}_{n,m} := \operatorname*{arg\,min}_{\boldsymbol{\theta}} \hat{D}_{\mathrm{TKSD}}(p_{\boldsymbol{\theta}} \mid q)^2. \tag{26}$$
In the next section, we study the theoretical properties of Equation 26.
5.3 Consistency Analysis

Since $\hat{D}_{\mathrm{TKSD}}(p_{\boldsymbol{\theta}} \mid q)^2$ is a function of $n$, for the simplicity of the theorem, let us study Equation 26 in the limit of $n$. Let $\hat{L}(\boldsymbol{\theta}) := \lim_{n \to \infty} \hat{D}_{\mathrm{TKSD}}(p_{\boldsymbol{\theta}} \mid q)^2$ and define

$$\hat{\boldsymbol{\theta}}_m := \operatorname*{arg\,min}_{\boldsymbol{\theta}} \hat{L}(\boldsymbol{\theta}).$$
We now prove that $\hat{\boldsymbol{\theta}}_m$ converges to the true parameter under mild conditions:
Assumption 5.8 (Accurate Boundary Prediction).

Let

$$\mathbf{a} := \mathbb{E}_q\left[ \mathbf{k}_{\mathbf{x},\mathbf{x}'}^\top \left( \nabla_{\boldsymbol{\theta}}\, s_{p,l}(\mathbf{x}) \right)^\top \right]\Big|_{\boldsymbol{\theta} = \boldsymbol{\theta}^*} \in \mathbb{R}^{m \times \dim(\boldsymbol{\theta})}, \qquad \mathbf{b}(\mathbf{x}) := \mathbb{E}_{\mathbf{y} \sim q}\left[ k(\mathbf{x}, \mathbf{y}) \left( \nabla_{\boldsymbol{\theta}}\, s_{p,l}(\mathbf{y}) \right)^\top \right]\Big|_{\boldsymbol{\theta} = \boldsymbol{\theta}^*} \in \mathbb{R}^{\dim(\boldsymbol{\theta})}.$$

Assume the following holds:

$$\left[ \oint_{\partial V} q(\mathbf{x})\, \mathbf{k}_{\mathbf{x},\mathbf{x}'} (K')^{-1} \mathbf{a}\, \hat{u}_l(\mathbf{x})\, \mathrm{d}S \right]_i = \left[ \oint_{\partial V} q(\mathbf{x})\, \mathbf{b}(\mathbf{x})\, \hat{u}_l(\mathbf{x})\, \mathrm{d}S \right]_i + O_p(\hat{\varepsilon}_m), \qquad \forall\, i \in \{1, \ldots, \dim(\boldsymbol{\theta})\},\; l \in \{1, \ldots, d\}, \tag{27}$$

where $\hat{\varepsilon}_m$ is a positive decaying sequence with respect to $m$ and $\lim_{m \to \infty} \hat{\varepsilon}_m = 0$.

One can see that $\mathbf{k}_{\mathbf{x},\mathbf{x}'} (K')^{-1} \mathbf{a}$ is the kernel least-squares regression prediction of $\mathbf{b}(\mathbf{x}')$ for $\mathbf{x}' \in \partial V$. Assumption 5.8 essentially states that the least-squares predictor "trained" on our boundary samples, $\{\mathbf{x}'_j\}_{j=1}^m$, should be asymptotically accurate in terms of a testing error computed over a surface integral as $m$ increases.
Assumption 5.9.

The smallest eigenvalue of the Hessian of $\hat{L}(\boldsymbol{\theta})$ is lower bounded, i.e., $\lambda_{\min}\left[ \nabla_{\boldsymbol{\theta}}^2 \hat{L}(\boldsymbol{\theta}) \right] \ge \Lambda_{\min} > 0$ with high probability.

In fact, we could make the same assumption on the population version of the objective function, and the convergence of the sample Hessian matrix would guarantee that Assumption 5.9 holds with high probability. To simplify our theoretical statement, we stick to the simpler version.
Theorem 5.10.

Suppose there exists a unique $\boldsymbol{\theta}^*$ such that $q = p_{\boldsymbol{\theta}^*}$, and that Assumptions 5.8 and 5.9 hold. Then

$$\left\| \hat{\boldsymbol{\theta}}_m - \boldsymbol{\theta}^* \right\| = O_p\!\left( \frac{1}{m} \right).$$

For the proof, see Section A.7. This result shows that, although our TKSD is an approximate version of KSD, the density estimator derived from TKSD is still a consistent estimator. We empirically verify this consistency in Section B.3.
6 Experimental Results

To show the validity of our proposed method, we experiment on benchmark settings against TruncSM and an adaptation of bd-KSD for truncated density estimation. We also provide additional empirical experiments in the appendices: empirical consistency (Section B.3), a demonstration on a Gaussian mixture distribution (Section B.4), an implementation for truncated regression (Section B.5) and an investigation into the effect of the distribution of the boundary points (Section B.6).
6.1 Computational Considerations

TKSD requires the selection of hyperparameters: the number of boundary points, $m$, the choice of kernel function, $k$, and the corresponding kernel hyperparameters. We focus on the Gaussian kernel, $k(\mathbf{x}, \mathbf{y}) = \exp\{ -(2\sigma^2)^{-1} \|\mathbf{x} - \mathbf{y}\|^2 \}$, where the bandwidth parameter $\sigma$ is chosen heuristically as the median of the pairwise distances of the data matrix.
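The median heuristic described above can be sketched directly:

```python
import numpy as np

# Sketch of the median heuristic: sigma is the median of the pairwise
# Euclidean distances of the data matrix X.
def median_heuristic(X):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    iu = np.triu_indices(len(X), k=1)     # distinct pairs only
    return np.sqrt(np.median(d2[iu]))     # sqrt is monotone, so this is the median distance

rng = np.random.default_rng(0)
sigma = median_heuristic(rng.standard_normal((200, 2)))
assert sigma > 0
```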
In choosing $m$, there is a trade-off between accuracy and computational expense, since $K'$ in Equation 24 is an $m \times m$ matrix which requires inversion. In experiments, we let $m$ scale with $n/2$. We provide more computational details of the method in Section B.8.
When the boundary's functional form is unknown, the recommended distance functions of Xu (2022) and Liu et al. (2022) cannot be used, and instead TruncSM and bd-KSD must use approximate boundary points. This approximation to the distance function is given by

$\min_{x' \in \partial\tilde{V}} \|x - x'\|_\alpha^\gamma,$ (28)

for each dataset point $x$, where $\gamma$ and $\alpha$ are chosen based on the application. This may not provide an accurate approximation to the true distance function when $m$ is small.
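A hedged sketch of the approximation in Equation 28; the defaults $\alpha = 2$, $\gamma = 1$ and the function name are our own illustrative choices:

```python
import numpy as np

def approx_distance(X, boundary, alpha=2.0, gamma=1.0):
    """Approximate distance min_{x' in boundary} ||x - x'||_alpha^gamma,
    evaluated for each row of X against a finite set of boundary points."""
    # pairwise alpha-norm distances between data (n x d) and boundary (m x d)
    diffs = np.abs(X[:, None, :] - boundary[None, :, :])
    dists = np.sum(diffs ** alpha, axis=-1) ** (1.0 / alpha)
    return np.min(dists, axis=1) ** gamma
```

With few boundary points the minimum over `boundary` can badly underestimate or overestimate the true distance to $\partial V$, which is the inaccuracy referred to above.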
6.2Density Truncated by the Boundary of the United States

Figure 1: Density estimation when the truncation boundary is the border of the U.S., as described in Section 6.2. Top: example of increasing the number of boundary points $m$. Bottom: across 256 seeds for each value of $m$, mean estimation error with standard error bars for the mean of a 2D Gaussian, for TKSD and TruncSM as $m$ increases.
Let us consider the complicated boundary of the United States (U.S.). Let $V$ be the interior of the U.S. and let $\partial\tilde{V}$ be a set of coordinates (longitude and latitude) which define the country's borders. The boundary of the U.S. is a highly irregular shape, and as such, there is no explicit expression for the boundary in this case. TruncSM and bd-KSD must use an approximate distance function (given by Equation 28), whereas TKSD can readily use the set of coordinates that give rise to the boundary.
The experiment is set up as follows. Let $\mu^* = [-115, 35]$ and $\Sigma = 10\,\mathbb{I}_2$. Samples are simulated from $\mathcal{N}(\mu^*, \Sigma)$ and we select only those which are in the interior of the U.S. until we reach $n = 400$ points. Assuming $\Sigma$ is known, we estimate $\mu^*$ with $\hat{\mu}$ using TKSD and compare it to the estimation using TruncSM. We also vary $m$ by uniformly sampling from the perimeter of the U.S., demonstrated in Figure 1 (top).
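The truncated sampling step described above is plain rejection sampling. The sketch below uses a generic polygon with a standard ray-casting inclusion test in place of the actual U.S. border coordinates; the function names and the toy polygon are our own:

```python
import numpy as np

def in_polygon(pt, poly):
    """Ray-casting point-in-polygon test; poly is an (m, 2) array of vertices."""
    x, y = pt
    inside = False
    m = len(poly)
    for i in range(m):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % m]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray from pt
            if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                inside = not inside
    return inside

def sample_truncated_gaussian(mu, cov, poly, n, rng):
    """Rejection-sample N(mu, cov) draws until n of them fall inside poly."""
    kept = []
    while len(kept) < n:
        x = rng.multivariate_normal(mu, cov)
        if in_polygon(x, poly):
            kept.append(x)
    return np.array(kept)
```

For the real experiment, `poly` would be the longitude/latitude coordinates of the U.S. border, and the accepted points form the truncated dataset.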
Figure 1 (bottom) shows the mean and standard error of the $\ell_2$ estimation error between $\mu^*$ and $\hat{\mu}$, measured over 256 trials for each value of $m$. TruncSM improves with higher values of $m$ as the approximate distance function increases in accuracy, whilst TKSD performs significantly better with fewer boundary points.
6.3Estimation Error and Dimensionality

Figure 2: Mean estimation error across 256 seeds, with standard error bars, as dimension $d$ increases (left) and runtime for each method (right). The truncation domain is the $\ell_2$ ball of radius $d^{0.53}$ (top) and the $\ell_1$ ball of radius $d^{0.98}$ (bottom).
Consider a simple experimental setup where $\mu_d^* = \mathbf{1}_d \cdot 0.5$, samples are simulated from $\mathcal{N}(\mu_d^*, \mathbb{I}_d)$ and are truncated to lie within the $\ell_p$ ball for $p = 1$ and $p = 2$, each of radius $r$, until we reach $n = 300$ data points. We choose radii $r = d^{0.53}$ for $p = 2$ and $r = d^{0.98}$ for $p = 1$, chosen so that the proportion of points truncated is roughly 50% across dimensions (see Section B.7 for details). We estimate $\mu_d^*$ with $\hat{\mu}$, and measure the $\ell_2$ estimation error against $\mu_d^*$ as $d$ increases. Each boundary point $x' \in \partial\tilde{V}$ is simulated from $\mathcal{N}(\mathbf{0}_d, \mathbb{I}_d)$, normalised by $\|x'\|_p$, and multiplied by $r$, resulting in a set of random points on the boundary.
Figure 2 shows the error and computation time for the following estimators:
• TKSD: our method as described in Section 5, using a randomly sampled $\partial\tilde{V}$.
• TruncSM/bd-KSD (exact): the implementations by Liu et al. (2022) and Xu (2022) respectively, where the distance function is computed exactly using the known boundaries.
• TruncSM/bd-KSD (approximate): the implementations by Liu et al. (2022) and Xu (2022) respectively, with distance function given by Equation 28, using the same $\partial\tilde{V}$ as given to TKSD.
For the $\ell_2$ case, TKSD marginally outperforms all competitors across all dimensions, at only a slight computational expense. For the $\ell_1$ case, TKSD, bd-KSD and TruncSM have similar estimation errors across all dimensions. However, TruncSM (exact) and bd-KSD (exact) have an increasing computation time due to the costly evaluation of the distance function. In this implementation, we follow the advice of Liu et al. (2022), Section 7, where the distance function to the $\ell_1$ ball is calculated via a closed-form expression, for which the computational complexity increases combinatorially with dimension.
TruncSM (approximate) and bd-KSD (approximate) have significantly higher estimation error than the other methods across all benchmarks. TKSD is able to achieve the same level of accuracy as the exact methods, using the same finite set of boundary points, $\partial\tilde{V}$.
7Discussion
We have proposed an alternative to the classical Stein class, called an approximate Stein class, bounded by a decreasing sequence instead of being strictly equal to zero. By maximising the KSD objective over this approximate Stein class, we have constructed a truncated density estimator based on KSD, called truncated KSD (TKSD), and shown that it is consistent. TKSD has advantages over the prior works by Xu (2022) and Liu et al. (2022), as it does not require a functional form of a boundary.
Some limitations of this method include the requirement of selecting hyperparameters, such as the kernel function $k$ and its associated hyperparameters, and the number of samples from the boundary, $m$. The choice of $m$ may depend on the application; generally, a larger value is preferred when the truncation boundary is more complex or the dimension is higher. However, even with the heuristic approaches presented in this paper, TKSD provides competitive results.
In experimental results, we have shown that even though we assume no access to a functional form of the boundary, TKSD performs similarly to previous methods which require the boundary to be computed. In some scenarios, TKSD performs better than these methods, or takes less time to achieve the same result.
Reproducibility
All results in this paper can be reproduced using the GitHub repository located at https://github.com/dannyjameswilliams/tksd.
Acknowledgements
We thank all four reviewers for their insightful feedback and suggestions to improve the paper. We are particularly grateful for their recommendations of additional experiments. We would also like to thank Jake Spiteri, Jack Simons, Michael Whitehouse, Mingxuan Yi and Dom Owens, for their helpful input throughout the development of this work.
Daniel J. Williams was supported by a PhD studentship from the EPSRC Centre for Doctoral Training in Computational Statistics and Data Science.
References

Barp, A., Briol, F.-X., Duncan, A., Girolami, M., and Mackey, L. Minimum Stein discrepancy estimators. In Advances in Neural Information Processing Systems, volume 32, 2019.

Boyd, S. and Vandenberghe, L. Convex Optimization. Cambridge University Press, 2004.

Chen, L. H., Goldstein, L., and Shao, Q.-M. Normal Approximation by Stein's Method, volume 2. Springer, 2011.

Chwialkowski, K., Strathmann, H., and Gretton, A. A kernel test of goodness of fit. In Proceedings of The 33rd International Conference on Machine Learning, volume 48 of Proceedings of Machine Learning Research, pp. 2606-2615. PMLR, 2016.

Gorham, J. and Mackey, L. Measuring sample quality with Stein's method. In Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc., 2015.

Gorham, J., Raj, A., and Mackey, L. Stochastic Stein discrepancies. In Advances in Neural Information Processing Systems, volume 33, pp. 17931-17942. Curran Associates, Inc., 2020.

Gutmann, M. and Hyvärinen, A. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In Proceedings of the 13th International Conference on Artificial Intelligence and Statistics, volume 9 of Proceedings of Machine Learning Research, pp. 297-304. PMLR, 2010.

Gutmann, M. U. and Hyvärinen, A. Noise-contrastive estimation of unnormalized statistical models, with applications to natural image statistics. Journal of Machine Learning Research, 13(11):307-361, 2012.

Hyvärinen, A. Estimation of non-normalized statistical models by score matching. Journal of Machine Learning Research, 6(24):695-709, 2005.

Hyvärinen, A. Some extensions of score matching. Computational Statistics & Data Analysis, 51(5):2499-2512, 2007.

Jitkrittum, W., Xu, W., Szabo, Z., Fukumizu, K., and Gretton, A. A linear-time kernel goodness-of-fit test. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017.

Kalos, M. H. and Whitlock, P. A. Monte Carlo Methods. John Wiley & Sons, 2009.

Kanagawa, H., Jitkrittum, W., Mackey, L., Fukumizu, K., and Gretton, A. A kernel Stein test for comparing latent variable models. arXiv preprint arXiv:1907.00586, 2019.

Krishnamoorthy, A. and Menon, D. Matrix inversion using Cholesky decomposition. In 2013 Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA), pp. 70-72. IEEE, 2013.

Lin, L., Drton, M., and Shojaie, A. Estimation of high-dimensional graphical models using regularized score matching. Electronic Journal of Statistics, 10(1):806, 2016.

Liu, Q., Lee, J., and Jordan, M. A kernelized Stein discrepancy for goodness-of-fit tests. In Proceedings of The 33rd International Conference on Machine Learning, volume 48 of Proceedings of Machine Learning Research, pp. 276-284. PMLR, 2016.

Liu, S., Kanamori, T., and Williams, D. J. Estimating density models with truncation boundaries using score matching. Journal of Machine Learning Research, 23(186):1-38, 2022.

Pang, T., Xu, K., Li, C., Song, Y., Ermon, S., and Zhu, J. Efficient learning of generative models via finite-difference score matching. In Advances in Neural Information Processing Systems, volume 33, pp. 19175-19188, 2020.

Serfling, R. J. Approximation Theorems of Mathematical Statistics. John Wiley & Sons, 2009.

Sharrock, L., Simons, J., Liu, S., and Beaumont, M. Sequential neural score estimation: Likelihood-free inference with conditional score based diffusion models. arXiv preprint arXiv:2210.04872, 2022.

Shi, J., Liu, C., and Mackey, L. Sampling with mirrored Stein operators. arXiv preprint arXiv:2106.12506, 2021.

Song, Y. and Ermon, S. Generative modeling by estimating gradients of the data distribution. In Advances in Neural Information Processing Systems, volume 32, 2019.

Song, Y. and Kingma, D. P. How to train your energy-based models. arXiv preprint arXiv:2101.03288, 2021.

Song, Y., Sohl-Dickstein, J., Kingma, D. P., Kumar, A., Ermon, S., and Poole, B. Score-based generative modeling through stochastic differential equations. In International Conference on Learning Representations, 2021.

Stein, C. A bound for the error in the normal approximation to the distribution of a sum of dependent random variables. In Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability, Volume 2: Probability Theory, pp. 583-602. University of California Press, 1972.

Steinwart, I. and Christmann, A. Support Vector Machines. Springer Science & Business Media, 2008.

UCLA: Statistical Consulting Group. Truncated regression: Stata data analysis examples, 2022. URL https://stats.oarc.ucla.edu/stata/dae/truncated-regression/. Accessed March 17, 2023.

Williams, D. J. and Liu, S. Score matching for truncated density estimation on a manifold. In Topological, Algebraic and Geometric Learning Workshops 2022, pp. 312-321. PMLR, 2022.

Wu, S., Diao, E., Elkhalil, K., Ding, J., and Tarokh, V. Score-based hypothesis testing for unnormalized models. IEEE Access, 10:71936-71950, 2022.

Xu, W. Standardisation-function kernel Stein discrepancy: A unifying view on kernel Stein discrepancy tests for goodness-of-fit. In International Conference on Artificial Intelligence and Statistics, pp. 1575-1597. PMLR, 2022.

Xu, W. and Matsuda, T. Interpretable Stein goodness-of-fit tests on Riemannian manifold. In Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pp. 11502-11513. PMLR, 2021.

Yang, J., Liu, Q., Rao, V., and Neville, J. Goodness-of-fit testing for discrete distributions via Stein discrepancy. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 5561-5570. PMLR, 2018.

Yu, M., Kolar, M., and Gupta, V. Statistical inference for pairwise graphical models using score matching. In Advances in Neural Information Processing Systems, volume 29, 2016.

Yu, S., Drton, M., and Shojaie, A. Generalized score matching for non-negative data. Journal of Machine Learning Research, 20(76):1-70, 2019.

Yu, S., Drton, M., and Shojaie, A. Generalized score matching for general domains. Information and Inference: A Journal of the IMA, 2021.

Appendix AProofs and Additional Theoretical Results

A.1Latent Variable Approximate Stein Identity
First, we show that the score function, $s_p(x)$, is given by

$s_p(x) = \frac{\nabla_x p(x)}{p(x)} = \int \frac{\nabla_x\big(p(x|z)\,p(z)\big)}{p(x)}\,\frac{p(z|x)}{p(z|x)}\,dz = \int \frac{p(x|z)\,p(z)}{p(x)}\Big[\frac{\nabla_x p(x|z)}{p(x|z)}\Big]\,dz = \int p(z|x)\,\nabla_x\log p(x|z)\,dz = \mathbb{E}_{z\sim p(z|x)}[s_p(x|z)],$ (29)

where $s_p(x|z) = \nabla_x\log p(x|z)$. To evaluate the expectation over $p(z|x)$, Kanagawa et al. (2019) recommend approximating each element with a Monte Carlo estimate

$\mathbb{E}_{z\sim p(z|x)}[s_{p,d}(x|z)] = \frac{1}{m}\sum_{i=1}^m s_{p,d}(x|z_i) + \epsilon_m(z_i),$ (30)

where $s_{p,d}(x|z) = \partial_{x_d}\log p(x|z)$ and we assume we have access to $m$ samples $\{z_i\}_{i=1}^m \sim p(z|x)$, and $\epsilon_m(z_i)$ is the Monte Carlo approximation error, which decreases as the number of samples, $m$, increases.
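As a toy illustration of the Monte Carlo estimate in Equation 30, consider the hypothetical latent model $z \sim \mathcal{N}(0,1)$, $x\,|\,z \sim \mathcal{N}(z,1)$ (our own example, not from the paper): the marginal is $\mathcal{N}(0,2)$ with score $-x/2$, and the posterior is $z\,|\,x \sim \mathcal{N}(x/2, 1/2)$, so the estimate can be checked directly:

```python
import numpy as np

def mc_score(x, z_samples):
    """Monte Carlo estimate of the marginal score
    s_p(x) = E_{z ~ p(z|x)}[ d/dx log p(x|z) ] for x|z ~ N(z, 1),
    where d/dx log p(x|z) = -(x - z)."""
    return np.mean(-(x - z_samples))

rng = np.random.default_rng(0)
x = 1.3
z = rng.normal(x / 2, np.sqrt(0.5), size=100_000)  # samples from p(z|x)
est = mc_score(x, z)  # should be close to the exact score -x/2 = -0.65
```

The Monte Carlo error in `est` shrinks at the usual $O(1/\sqrt{m})$ rate in the number of latent samples, which is the $\epsilon_m(z_i)$ term of Equation 30.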
We now show that this existing modification of Stein discrepancies gives rise to an approximate Stein class. Suppose $f \in \mathcal{F}_q$, where $\mathcal{F}_q$ is a regular Stein class. Stein's identity with the score function from Equation 29 can be written as

$\mathbb{E}_q[\mathcal{T}_p f(x)] = \mathbb{E}_q\Big[\sum_{d=1}^D s_{p,d}(x)\,f_d(x) + \partial_{x_d}f_d(x)\Big] = \mathbb{E}_q\Big[\sum_{d=1}^D \mathbb{E}_{z\sim p(z|x)}[s_{p,d}(x|z)]\,f_d(x) + \partial_{x_d}f_d(x)\Big].$

Substituting (30) into the above gives

$\mathbb{E}_q[\mathcal{T}_p f(x)] = \mathbb{E}_q\Big[\sum_{d=1}^D\Big\{\Big(\frac{1}{m}\sum_{i=1}^m s_{p,d}(x|z_i) + \epsilon_m(z_i)\Big)f_d(x) + \partial_{x_d}f_d(x)\Big\}\Big]$
$= \mathbb{E}_q\Big[\sum_{d=1}^D\Big\{\frac{1}{m}\sum_{i=1}^m s_{p,d}(x|z_i)\,f_d(x) + \partial_{x_d}f_d(x) + \epsilon_m(z_i)\,f_d(x)\Big\}\Big]$
$\overset{(a)}{=} \mathbb{E}_q\Big[\sum_{d=1}^D\Big\{\frac{1}{m}\sum_{i=1}^m s_{p,d}(x|z_i)\,f_d(x) + \partial_{x_d}f_d(x)\Big\}\Big] + O(\epsilon_m)$
$\overset{(b)}{=} O(\epsilon_m),$

where equality (a) follows from the fact that the error in the Monte Carlo approximation is accounted for by the $O(\epsilon_m)$ term, and equality (b) follows by definition of $\mathcal{F}_q$ being a Stein class. Therefore, $\mathbb{E}_q[\mathcal{T}_p f(x)] = O(\epsilon_m)$, and this latent-variable modification uses an approximate Stein class.
A.2Proof of Lemma 5.1
Let $f \in \mathcal{G}_0^D$. Begin by writing the expectation as an integral,

$\mathbb{E}_q[\mathcal{T}_q f(x)] = \int_V q(x)\,\mathcal{T}_q f(x)\,dx$
$= \sum_{d=1}^D \int_V q(x)\,\partial_{x_d}\log q(x)\,f_d(x) + \partial_{x_d}f_d(x)\,q(x)\,dx$
$= \sum_{d=1}^D \int_V \partial_{x_d}q(x)\,f_d(x) + \partial_{x_d}f_d(x)\,q(x)\,dx.$

This can be expanded by integration by parts,

$\sum_{d=1}^D \int_V \partial_{x_d}q(x)\,f_d(x) + \partial_{x_d}f_d(x)\,q(x)\,dx = \sum_{d=1}^D \oint_{\partial V} q(x)\,f_d(x)\,\hat{u}_d(x)\,ds + \sum_{d=1}^D \int_V q(x)\big(\partial_{x_d}f_d(x) - \partial_{x_d}f_d(x)\big)\,dx$
$= \sum_{d=1}^D \oint_{\partial V} q(x)\,f_d(x)\,\hat{u}_d(x)\,ds = 0,$

where $\hat{u}_d(x)$ is the $d$-th component of the outward unit normal to $\partial V$, and the final equality comes from all evaluations $f_d(x') = 0\ \forall x' \in \partial V$.
A.3Proof of Lemma 5.2.
Let $g \in \mathcal{G}_0^D$ and $\tilde{g} \in \mathcal{G}_{0,m}^D$. First note that $g$ and $\tilde{g}$ agree on $\partial\tilde{V}$, i.e.

$g_d(\tilde{x}') = \tilde{g}_d(\tilde{x}'), \qquad \forall d = 1,\dots,D,$ (31)

for all $\tilde{x}' \in \partial\tilde{V}$, since all $\tilde{x}'$ are elements of $\partial V$ also, as $\partial\tilde{V} \subset \partial V$. Next, since $g_d$ and $\tilde{g}_d$ are both Lipschitz continuous,

$|g_d(x') - g_d(\tilde{x}')| \le C_1\|x' - \tilde{x}'\| \le C_1\,\epsilon_m,$ (32)
$|\tilde{g}_d(\tilde{x}') - \tilde{g}_d(x')| \le C_2\|x' - \tilde{x}'\| \le C_2\,\epsilon_m,$ (33)

where $\|x' - \tilde{x}'\| \le \epsilon_m$. This follows under the assumption that $\partial\tilde{V}$ is $\epsilon_m$-dense in $\partial V$; therefore the distance $\|x' - \tilde{x}'\|$ is at most $\epsilon_m$.

We seek to quantify

$|g_d(x') - \tilde{g}_d(x')|,$

which is how far apart the "approximate" $\tilde{g}_d$ is from the "true" $g_d$ for any point $x' \in \partial V$. Note that this includes points that are not in $\partial\tilde{V}$. We write

$g_d(x') - \tilde{g}_d(x') = \big(g_d(x') - g_d(\tilde{x}')\big) + \big(g_d(\tilde{x}') - \tilde{g}_d(\tilde{x}')\big) + \big(\tilde{g}_d(\tilde{x}') - \tilde{g}_d(x')\big),$

and by (31), the middle term vanishes,

$g_d(x') - \tilde{g}_d(x') = \big(g_d(x') - g_d(\tilde{x}')\big) + \big(\tilde{g}_d(\tilde{x}') - \tilde{g}_d(x')\big).$

Using the triangle inequality together with (32) and (33), we have

$|g_d(x') - \tilde{g}_d(x')| \le |g_d(x') - g_d(\tilde{x}')| + |\tilde{g}_d(\tilde{x}') - \tilde{g}_d(x')| \le (C_1 + C_2)\,\epsilon_m.$

Therefore, we have

$|g_d(x') - \tilde{g}_d(x')| = O(\epsilon_m)$

as desired.
A.4Proof of Proposition 5.3
First, let $\mathcal{L}(V)$ denote the $(D-1)$-surface area of a bounded domain $V \subset \mathbb{R}^D$, with $\mathcal{L}(V) < \infty$. For example, in 2D, $\mathcal{L}(V)$ corresponds to the line length of $\partial V$. We also define $B_{\epsilon_m}(x')$ as the ball of radius $\epsilon_m$ centred on $x'$.

Before continuing to the proof, recall the definition of an $\epsilon$-dense set: $\forall x' \in \partial V$, $\exists \tilde{x}' \in \partial\tilde{V}$ such that $\rho(x', \tilde{x}') \le \epsilon$, where $\rho$ is some measure of distance. The statement $\rho(x', \tilde{x}') \le \epsilon_m$ is equivalent to $\tilde{x}' \in B_{\epsilon_m}(x')$. Now write that the probability that the definition of an $\epsilon_m$-dense set holds for $\partial\tilde{V}$ being dense in $\partial V$ is equal to 0.95:

$0.95 = \mathbb{P}\big(\forall x' \in \partial V,\ \exists \tilde{x}' \in \partial\tilde{V},\ \tilde{x}' \in B_{\epsilon_m}(x')\big) = 1 - \mathbb{P}\big(\exists x' \in \partial V,\ \forall \tilde{x}' \in \partial\tilde{V},\ \tilde{x}' \notin B_{\epsilon_m}(x')\big).$

Equivalently, by rearranging the above, we have

$\mathbb{P}\big(\exists x' \in \partial V,\ \forall \tilde{x}' \in \partial\tilde{V},\ \tilde{x}' \notin B_{\epsilon_m}(x')\big) = 0.05.$

This probability can be bounded as follows:

$0.05 = \mathbb{P}\big(\exists x' \in \partial V,\ \forall \tilde{x}' \in \partial\tilde{V},\ \tilde{x}' \notin B_{\epsilon_m}(x')\big) \ge \mathbb{P}\big(\forall \tilde{x}' \in \partial\tilde{V},\ \tilde{x}' \notin B_{\epsilon_m}(x_0')\big),$

where $x_0'$ is a single point in $\partial V$. This follows by basic laws of probability. Note that the probability on the right-hand side can be written as

$\mathbb{P}\big(\forall \tilde{x}' \in \partial\tilde{V},\ \tilde{x}' \notin B_{\epsilon_m}(x_0')\big) = \mathbb{P}\big(\tilde{x}_0' \notin B_{\epsilon_m}(x_0')\big)^m = \big[1 - \mathbb{P}\big(\tilde{x}_0' \in B_{\epsilon_m}(x_0')\big)\big]^m = \Big[1 - \frac{\mathrm{Area}\big(\partial V \cap B_{\epsilon_m}(x_0')\big)}{\mathcal{L}(V)}\Big]^m \le 0.05,$ (34)

where the first equality comes from all $\tilde{x}' \in \partial\tilde{V}$ being independently distributed. $\mathrm{Area}(A)$ denotes the surface area of the boundary set $A$; for example, $\mathrm{Area}(\partial V) = \mathcal{L}(V)$. Therefore $\mathrm{Area}(\partial V \cap B_{\epsilon_m}(x_0'))$ represents the size of the region of $\partial V$ that is inside $B_{\epsilon_m}(x_0')$, which will be the area of the boundary hyperplane of $\partial V$ which passes through $B_{\epsilon_m}(x_0')$. We obtain the probability in the final equality by assuming that all $\tilde{x}'$ are uniformly sampled from $\partial V$, so the probability is the proportion of $\partial V$ inside $B_{\epsilon_m}(x_0')$ as a ratio of the full $\partial V$. This probability exists under the assumption that $\mathcal{L}(V) < \infty$.

Next, we can rearrange Equation 34 to

$\mathrm{Area}\big(\partial V \cap B_{\epsilon_m}(x_0')\big) \ge \mathcal{L}(V)\big[1 - 0.05^{1/m}\big].$

Consider an upper bound for $\mathrm{Area}(\partial V \cap B_{\epsilon_m}(x_0'))$ given by the volume of $B_{\epsilon_m}(x_0')$, i.e.

$\mathrm{Area}\big(\partial V \cap B_{\epsilon_m}(x_0')\big) \le v(D)\,\epsilon_m^D, \qquad v(D) = \frac{\pi^{D/2}}{\Gamma(\frac{D}{2}+1)},$

where $\Gamma$ is the Gamma function. Combining these bounds gives

$v(D)\,\epsilon_m^D \ge \mathcal{L}(V)\big[1 - 0.05^{1/m}\big]$

and therefore

$\epsilon_m \ge \Big(\frac{\mathcal{L}(V)}{v(D)}\big[1 - 0.05^{1/m}\big]\Big)^{1/D},$

which completes the proof.
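The final bound is easy to evaluate numerically. The snippet below (our own naming, with the confidence level 0.95 as a parameter) shows how the denseness radius bound shrinks as the number of boundary samples $m$ grows, for a user-supplied surface area $\mathcal{L}(V)$:

```python
import math

def eps_bound(m, D, surface_area, conf=0.95):
    """Denseness radius bound from Proposition 5.3:
    eps_m >= ( L(V) / v(D) * (1 - (1 - conf)**(1/m)) )**(1/D),
    where v(D) = pi^(D/2) / Gamma(D/2 + 1) is the unit-ball volume."""
    v = math.pi ** (D / 2) / math.gamma(D / 2 + 1)
    return (surface_area / v * (1 - (1 - conf) ** (1 / m))) ** (1 / D)
```

For example, in 2D with $\mathcal{L}(V) = 4$, the bound decreases roughly by a factor of ten when $m$ increases from $100$ to $10{,}000$, reflecting the $[1 - 0.05^{1/m}]^{1/D}$ dependence.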
A.5Proof of Theorem 5.5
Let $g \in \mathcal{G}_0^D$ and $\tilde{g} \in \mathcal{G}_{0,m}^D$. Begin by writing the expectation in its integral form,

$\mathbb{E}_q[\mathcal{T}_q\tilde{g}(x)] = \int_V q(x)\Big[\sum_{d=1}^D \partial_{x_d}\log q(x)\,\tilde{g}_d(x) + \sum_{d=1}^D \partial_{x_d}\tilde{g}_d(x)\Big]\,dx$
$= \sum_{d=1}^D \int_V q(x)\,\partial_{x_d}\log q(x)\,\tilde{g}_d(x)\,dx + \sum_{d=1}^D \int_V q(x)\,\partial_{x_d}\tilde{g}_d(x)\,dx$
$\overset{(a)}{=} \sum_{d=1}^D \int_V \partial_{x_d}q(x)\,\tilde{g}_d(x)\,dx + \sum_{d=1}^D \int_V q(x)\,\partial_{x_d}\tilde{g}_d(x)\,dx$
$\overset{(b)}{=} \sum_{d=1}^D \oint_{\partial V} q(x)\,\tilde{g}_d(x)\,\hat{u}_d(x)\,ds - \sum_{d=1}^D \int_V q(x)\,\partial_{x_d}\tilde{g}_d(x)\,dx + \sum_{d=1}^D \int_V q(x)\,\partial_{x_d}\tilde{g}_d(x)\,dx$
$= \sum_{d=1}^D \oint_{\partial V} q(x)\,\tilde{g}_d(x)\,\hat{u}_d(x)\,ds,$

where $\oint_{\partial V}$ is the surface integral over the boundary $\partial V$, (a) comes from the identity $q(x)\,\partial_{x_d}\log q(x) = \partial_{x_d}q(x)$, and (b) is from integration by parts. Substitute $\tilde{g}_d(x) = \tilde{g}_d(x) + g_d(x) - g_d(x)$ to obtain

$\sum_{d=1}^D \oint_{\partial V} q(x)\big(\tilde{g}_d(x) + g_d(x) - g_d(x)\big)\,\hat{u}_d(x)\,ds = \sum_{d=1}^D \oint_{\partial V} q(x)\big(\tilde{g}_d(x) - g_d(x)\big)\,\hat{u}_d(x)\,ds + \sum_{d=1}^D \oint_{\partial V} q(x)\,g_d(x)\,\hat{u}_d(x)\,ds.$

By Lemma 5.2, $|\tilde{g}_d(x') - g_d(x')| = O(\epsilon_m)$, leaving

$\mathbb{E}_q[\mathcal{T}_q\tilde{g}(x)] = O(\epsilon_m) + \sum_{d=1}^D \oint_{\partial V} q(x)\,g_d(x)\,\hat{u}_d(x)\,ds.$

As all evaluations $g_d(x) = 0\ \forall x \in \partial V$ by definition of $\mathcal{G}_0^D$, the second term equals zero, leaving only

$\mathbb{E}_q[\mathcal{T}_q\tilde{g}(x)] = O(\epsilon_m)$

as desired.
A.6Proof of Theorem 5.6
Recall $\mathcal{G}_{0,m}^D = \{f \in \mathcal{H}^D \mid f(x') = \mathbf{0}\ \forall x' \in \partial\tilde{V},\ \|f\|_{\mathcal{H}^D}^2 \le 1\}$, where $\mathcal{H}^D$ is the product RKHS with $D$ elements, $f = (f_1,\dots,f_D)$, and $f_d \in \mathcal{H}$, $\forall d = 1,\dots,D$.

Begin with the definition of TKSD as defined in Equation 21,

$\sup_{f \in \mathcal{G}_{0,m}^D} \mathbb{E}_q[\mathcal{T}_{p_\theta} f(x)].$

To solve this supremum analytically, we can reframe it as a constrained maximisation problem,

$\max_{f \in \mathcal{H}^D}\ \mathbb{E}_q[\mathcal{T}_{p_\theta} f(x)],$ (35)

subject to

$f_d(x') = 0 \quad \forall x' \in \partial\tilde{V},\ \forall d = 1,\dots,D,$
$\|f\|_{\mathcal{H}^D} \le 1,$
where the constraints in the definition of $\mathcal{G}_{0,m}^D$ have been included as optimisation constraints, and the maximisation is now with respect to the RKHS function family $\mathcal{H}^D$ only. To solve this, we can formulate a Lagrangian dual function (Boyd & Vandenberghe, 2004),

$\inf_{\lambda^{(1)},\dots,\lambda^{(D)},\,\mu}\ \mathcal{L}\big(\theta, f, \lambda^{(1)},\dots,\lambda^{(D)},\mu\big),$ (36)

$\mathcal{L}\big(\theta, f, \lambda^{(1)},\dots,\lambda^{(D)},\mu\big) := \mathbb{E}_q[\mathcal{T}_{p_\theta} f(x)] + \sum_{d=1}^D \sum_{j=1}^m \lambda_j^{(d)}\,f_d(x_j') + \mu\big(\|f\|_{\mathcal{H}^D}^2 - 1\big),$ (37)

for $\lambda^{(d)} \in \mathbb{R}^m\ \forall d$, $\mu \ge 0$, where $\mathcal{L}$ is our Lagrangian. The overall optimisation problem that needs solving is given by

$\min_{\lambda^{(1)},\dots,\lambda^{(D)},\,\mu}\ \max_{f \in \mathcal{H}^D}\ \mathcal{L}\big(\theta, f, \lambda^{(1)},\dots,\lambda^{(D)},\mu\big).$ (38)
By solving the dual problem, Equation 38, we solve the primal problem, Equation 35. We can rewrite (37) as

$\mathcal{L} = \mathbb{E}_q\Big[\sum_{d=1}^D \partial_{x_d}\log p_\theta(x)\,f_d(x) + \partial_{x_d}f_d(x)\Big] + \sum_{d=1}^D \sum_{j=1}^m \lambda_j^{(d)}\,f_d(x_j') + \mu\big(\|f\|_{\mathcal{H}^D}^2 - 1\big),$

and expand evaluations of $f_d(x)$ via the reproducing property of $\mathcal{H}$, given by $f_d(x) = \langle f_d, k(x,\cdot)\rangle_\mathcal{H}$, to

$\mathcal{L} = \mathbb{E}_q\Big[\sum_{d=1}^D \partial_{x_d}\log p_\theta(x)\,\langle f_d, k(x,\cdot)\rangle_\mathcal{H} + \langle f_d, \partial_{x_d}k(x,\cdot)\rangle_\mathcal{H}\Big] + \sum_{j=1}^m\sum_{d=1}^D \lambda_j^{(d)}\,\langle f_d, k(x_j',\cdot)\rangle_\mathcal{H} + \mu\Big(\sum_{d=1}^D \langle f_d, f_d\rangle_\mathcal{H} - 1\Big)$
$= \sum_{d=1}^D \mathbb{E}_q\Big[\Big\langle f_d,\ \partial_{x_d}\log p_\theta(x)\,k(x,\cdot) + \partial_{x_d}k(x,\cdot) + \sum_{j=1}^m \lambda_j^{(d)}\,k(x_j',\cdot)\Big\rangle_\mathcal{H}\Big] + \mu\Big(\sum_{d=1}^D \langle f_d, f_d\rangle_\mathcal{H} - 1\Big)$
$= \sum_{d=1}^D \Big\langle f_d,\ \mathbb{E}_q\big[\partial_{x_d}\log p_\theta(x)\,k(x,\cdot) + \partial_{x_d}k(x,\cdot)\big] + \sum_{j=1}^m \lambda_j^{(d)}\,k(x_j',\cdot)\Big\rangle_\mathcal{H} + \mu\Big(\sum_{d=1}^D \langle f_d, f_d\rangle_\mathcal{H} - 1\Big).$ (39)

The final equality holds provided that $\mathbb{E}_q\big[\big\|\partial_{x_d}\log p_\theta(x)\,k(x,\cdot) + \partial_{x_d}k(x,\cdot) + \sum_{j=1}^m \lambda_j^{(d)}\,k(x_j',\cdot)\big\|_\mathcal{H}\big] < \infty$, i.e. the term inside the expectation is Bochner integrable (Steinwart & Christmann, 2008). This same assumption was made in Chwialkowski et al. (2016), as we can consider $\sum_{j=1}^m \lambda_j^{(d)}\,k(x_j',\cdot)$ as a constant with respect to the expectation.
We solve for each parameter via differentiation to obtain a closed-form solution. Across dimensions, each $f_d$ in (39) appears only additively, and so we can consider the $d$-th element of the derivative, and solve the inner maximisation for each $f_d$ to give a solution for $f$. This differentiation gives

$\frac{\partial\mathcal{L}}{\partial f_d} = \mathbb{E}_q\big[\partial_{x_d}\log p_\theta(x)\,k(x,\cdot) + \partial_{x_d}k(x,\cdot)\big] + \sum_{j=1}^m \lambda_j^{(d)}\,k(x_j',\cdot) + 2\mu f_d = 0.$

Rearranging for $f_d$ gives the solution

$f_d^* = -\frac{1}{2\mu}\,\mathbb{E}_q\big[\partial_{x_d}\log p_\theta(x)\,k(x,\cdot) + \partial_{x_d}k(x,\cdot)\big] - \frac{1}{2\mu}\sum_{j=1}^m \lambda_j^{(d)}\,k(x_j',\cdot) = -\frac{\xi_d}{2\mu},$

where for convenience we have denoted $\xi_d := \mathbb{E}_q\big[\partial_{x_d}\log p_\theta(x)\,k(x,\cdot) + \partial_{x_d}k(x,\cdot)\big] + \sum_{j=1}^m \lambda_j^{(d)}\,k(x_j',\cdot)$. Substituting this back into (39) gives
$\mathcal{L}\big(\theta, f^*, \lambda^{(1)},\dots,\lambda^{(D)},\mu\big) = \sum_{d=1}^D \Big\langle -\frac{\xi_d}{2\mu},\ \xi_d\Big\rangle_\mathcal{H} + \mu\Big(\sum_{d=1}^D \Big\langle -\frac{\xi_d}{2\mu},\ -\frac{\xi_d}{2\mu}\Big\rangle_\mathcal{H} - 1\Big) = -\sum_{d=1}^D \frac{1}{4\mu}\langle\xi_d,\xi_d\rangle_\mathcal{H} - \mu,$ (40)

where $f^* = (f_1^*,\dots,f_D^*)$. Solve for $\mu$ by differentiating,

$\frac{\partial\mathcal{L}}{\partial\mu} = \frac{1}{4\mu^2}\sum_{d=1}^D \langle\xi_d,\xi_d\rangle_\mathcal{H} - 1 = 0.$

Rearranging for $\mu$ gives $\mu = \pm\sqrt{\sum_{d=1}^D \langle\xi_d,\xi_d\rangle_\mathcal{H}}\,\big/\,2$. Since $\mu \ge 0$, we take the positive solution, giving $\mu^* = \sqrt{\sum_{d=1}^D \langle\xi_d,\xi_d\rangle_\mathcal{H}}\,\big/\,2$. Substituting $\mu^*$ into (40) gives

$\mathcal{L}\big(\theta, f^*, \lambda^{(1)},\dots,\lambda^{(D)},\mu^*\big) = -\sum_{d=1}^D \frac{1}{4\mu^*}\langle\xi_d,\xi_d\rangle_\mathcal{H} - \mu^* = -\sqrt{\sum_{d=1}^D \langle\xi_d,\xi_d\rangle_\mathcal{H}}.$ (41)
To solve for the final Lagrangian parameters, $\lambda^{(1)},\dots,\lambda^{(D)}$, let us first introduce some notation. Denote

$\Phi_{\mathbf{x}'} = \big[k(x_1',\cdot),\ \dots,\ k(x_m',\cdot)\big]^\top, \qquad \mathbf{k}_{x,\mathbf{x}'} = \big[k(x,x_1'),\ \dots,\ k(x,x_m')\big], \qquad K' = \Phi_{\mathbf{x}'}\Phi_{\mathbf{x}'}^\top,$
where $x_1',\dots,x_m' \in \partial\tilde{V}$. Now consider the equivalence

$\lambda^{(d)*} := \arg\min_{\lambda^{(d)}} \sqrt{\sum_{d=1}^D \langle\xi_d,\xi_d\rangle_\mathcal{H}} = \arg\min_{\lambda^{(d)}} \sum_{d=1}^D \langle\xi_d,\xi_d\rangle_\mathcal{H} = \arg\min_{\lambda^{(d)}} \langle\xi_d,\xi_d\rangle_\mathcal{H},$

as each $\lambda^{(d)}$ is independent. By rewriting $\xi_d$ as $\xi_d = \mathbb{E}_q\big[\partial_{x_d}\log p_\theta(x)\,k(x,\cdot) + \partial_{x_d}k(x,\cdot)\big] + \lambda^{(d)\top}\Phi_{\mathbf{x}'}$, we solve this again by differentiating,

$\frac{\mathrm{d}}{\mathrm{d}\lambda^{(d)}}\langle\xi_d,\xi_d\rangle_\mathcal{H} = 2\Big\langle \frac{\mathrm{d}\xi_d}{\mathrm{d}\lambda^{(d)}},\ \xi_d\Big\rangle_\mathcal{H}$
$= 2\Big\langle \Phi_{\mathbf{x}'},\ \mathbb{E}_q\big[\partial_{x_d}\log p_\theta(x)\,k(x,\cdot) + \partial_{x_d}k(x,\cdot)\big] + \lambda^{(d)\top}\Phi_{\mathbf{x}'}\Big\rangle_\mathcal{H}$
$= 2\,\mathbb{E}_q\big[\partial_{x_d}\log p_\theta(x)\,\mathbf{k}_{x,\mathbf{x}'} + \partial_{x_d}\mathbf{k}_{x,\mathbf{x}'}\big] + 2\lambda^{(d)\top}\Phi_{\mathbf{x}'}\Phi_{\mathbf{x}'}^\top$
$= 2\,\mathbb{E}_q\big[\partial_{x_d}\log p_\theta(x)\,\mathbf{k}_{x,\mathbf{x}'} + \partial_{x_d}\mathbf{k}_{x,\mathbf{x}'}\big] + 2\lambda^{(d)\top}K'.$

Setting equal to zero and rearranging gives

$\lambda^{(d)*} = -(K')^{-1}\,\mathbb{E}_q\big[\partial_{x_d}\log p_\theta(x)\,\mathbf{k}_{x,\mathbf{x}'}^\top + (\partial_{x_d}\mathbf{k}_{x,\mathbf{x}'})^\top\big].$ (42)
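In practice, the expectation in Equation 42 is replaced by a sample mean and the linear system is solved directly rather than forming $(K')^{-1}$. Below is a minimal sketch for a Gaussian kernel with a user-supplied score function; the function names and the small diagonal jitter (for numerical stability) are our own additions, not the paper's implementation:

```python
import numpy as np

def gram(A, B, sigma):
    """Gaussian kernel Gram matrix between rows of A and rows of B."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * sigma**2))

def lambda_star(X, Xb, score_d, d, sigma, jitter=1e-8):
    """lambda^(d)* = -(K')^{-1} E_q[ s_{p,d}(x) k_{x,x'} + d/dx_d k_{x,x'} ],
    with the expectation replaced by the mean over the rows of X (n x D);
    Xb (m x D) holds the boundary points."""
    Kxb = gram(X, Xb, sigma)                                       # (n, m)
    # d/dx_d of the Gaussian kernel: -(x_d - x'_d)/sigma^2 * k(x, x')
    dKxb = -((X[:, d][:, None] - Xb[:, d][None, :]) / sigma**2) * Kxb
    b = np.mean(score_d(X)[:, None] * Kxb + dKxb, axis=0)          # (m,)
    Kp = gram(Xb, Xb, sigma) + jitter * np.eye(len(Xb))            # K' + jitter
    return -np.linalg.solve(Kp, b)

# Hypothetical example: standard normal model, where s_{p,0}(x) = -x_0
rng = np.random.default_rng(0)
lam = lambda_star(rng.standard_normal((20, 2)), rng.standard_normal((5, 2)),
                  lambda X: -X[:, 0], d=0, sigma=1.0)
```

Using `np.linalg.solve` (or a Cholesky factorisation) avoids explicitly inverting the $m \times m$ matrix $K'$, which is the dominant cost discussed in Section 6.1.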
Before substituting back into (41), let us first square it and expand it as

$\mathcal{L}^2 = \sum_{d=1}^D \Big\{ \mathbb{E}_{x\sim q}\mathbb{E}_{y\sim q}[u_d(x,y)] + \big(\mathbb{E}_{x\sim q}\big[s_{p,x}\,\mathbf{k}_{x,\mathbf{x}'}^\top + (\partial_{x_d}\mathbf{k}_{x,\mathbf{x}'})^\top\big]\big)^\top\lambda^{(d)} + \lambda^{(d)\top}\,\mathbb{E}_{y\sim q}\big[s_{p,y}\,\mathbf{k}_{y,\mathbf{x}'}^\top + (\partial_{y_d}\mathbf{k}_{y,\mathbf{x}'})^\top\big] + \lambda^{(d)\top}K'\lambda^{(d)} \Big\},$ (43)

where $u_d(x,y) = s_{p,x}\,s_{p,y}\,k(x,y) + s_{p,x}\,\partial_{y_d}k(x,y) + s_{p,y}\,\partial_{x_d}k(x,y) + \partial_{x_d}\partial_{y_d}k(x,y)$ and, for brevity, we have written $s_{p,x} := \partial_{x_d}\log p_\theta(x)$ and $s_{p,y} := \partial_{y_d}\log p_\theta(y)$. We can substitute $\lambda^{(d)*}$ from (42) into $\mathcal{L}^2$, denoting $\mathcal{L}_*^2 := \mathcal{L}\big(\theta, f^*, \lambda^{(1)*},\dots,\lambda^{(D)*},\mu^*\big)^2$, giving

$\mathcal{L}_*^2 = \sum_{d=1}^D \Big\{ \mathbb{E}_{x\sim q}\mathbb{E}_{y\sim q}[u_d(x,y)] + \big(\mathbb{E}_{x\sim q}\big[s_{p,x}\,\mathbf{k}_{x,\mathbf{x}'}^\top + (\partial_{x_d}\mathbf{k}_{x,\mathbf{x}'})^\top\big]\big)^\top\Big(-(K')^{-1}\,\mathbb{E}_{y\sim q}\big[s_{p,y}\,\mathbf{k}_{y,\mathbf{x}'}^\top + (\partial_{y_d}\mathbf{k}_{y,\mathbf{x}'})^\top\big]\Big)$
$\quad + \Big(-(K')^{-1}\,\mathbb{E}_{x\sim q}\big[s_{p,x}\,\mathbf{k}_{x,\mathbf{x}'}^\top + (\partial_{x_d}\mathbf{k}_{x,\mathbf{x}'})^\top\big]\Big)^\top\,\mathbb{E}_{y\sim q}\big[s_{p,y}\,\mathbf{k}_{y,\mathbf{x}'}^\top + (\partial_{y_d}\mathbf{k}_{y,\mathbf{x}'})^\top\big]$
$\quad + \Big(-(K')^{-1}\,\mathbb{E}_{x\sim q}\big[s_{p,x}\,\mathbf{k}_{x,\mathbf{x}'}^\top + (\partial_{x_d}\mathbf{k}_{x,\mathbf{x}'})^\top\big]\Big)^\top K'\Big(-(K')^{-1}\,\mathbb{E}_{y\sim q}\big[s_{p,y}\,\mathbf{k}_{y,\mathbf{x}'}^\top + (\partial_{y_d}\mathbf{k}_{y,\mathbf{x}'})^\top\big]\Big)\Big\}.$

This can be further simplified to

$\mathcal{L}_*^2 = \sum_{d=1}^D \Big\{ \mathbb{E}_{x\sim q}\mathbb{E}_{y\sim q}[u_d(x,y)] - \big(\mathbb{E}_{x\sim q}\big[s_{p,x}\,\mathbf{k}_{x,\mathbf{x}'}^\top + (\partial_{x_d}\mathbf{k}_{x,\mathbf{x}'})^\top\big]\big)^\top (K')^{-1}\,\mathbb{E}_{y\sim q}\big[s_{p,y}\,\mathbf{k}_{y,\mathbf{x}'}^\top + (\partial_{y_d}\mathbf{k}_{y,\mathbf{x}'})^\top\big] \Big\},$ (44)

and equivalently

$\mathcal{L}_*^2 = \sum_{d=1}^D \mathbb{E}_{x\sim q}\mathbb{E}_{y\sim q}\Big[u_d(x,y) - \big(s_{p,x}\,\mathbf{k}_{x,\mathbf{x}'}^\top + (\partial_{x_d}\mathbf{k}_{x,\mathbf{x}'})^\top\big)^\top (K')^{-1}\big(s_{p,y}\,\mathbf{k}_{y,\mathbf{x}'}^\top + (\partial_{y_d}\mathbf{k}_{y,\mathbf{x}'})^\top\big)\Big],$ (45)

giving the desired result.
A.7Proof of Theorem 5.10
First, we state a general result for proving the consistency of an empirical estimator of $\theta$, for an objective which is second-order differentiable with respect to $\theta$.

Lemma A.1.

Define the unique solution of the empirical objective $\hat{m} := \arg\min_m \hat{L}(m)$ and the population solution $m^* := \arg\min_m L(m)$, where $L(m) := \mathbb{E}[\hat{L}(m)]$. If $\sigma_{\min}\big[\nabla_m^2\hat{L}(m)\big] \ge \Lambda_{\min} > 0,\ \forall m$, and $\|\nabla_m\hat{L}(m^*)\| = O_p(1/\sqrt{n})$, then $\|\hat{m} - m^*\| = O_p(1/\sqrt{n})$. Here, $\sigma_{\min}(A)$ is the smallest eigenvalue of a matrix $A$.

Proof.

First, we define a function $h(\theta) := \langle\hat{\theta} - \theta^*, \nabla_\theta\hat{L}(\theta)\rangle$. Using the mean value theorem, we obtain $h(\hat{\theta}) - h(\theta^*) = \langle\nabla_\theta h(\bar{\theta}), \hat{\theta} - \theta^*\rangle$, where $\bar{\theta}$ is a point between $\hat{\theta}$ and $\theta^*$ in a coordinate-wise fashion. Therefore, due to the fact that $\nabla_\theta\hat{L}(\hat{\theta}) = 0$,

$0 - h(\theta^*) = -\langle\hat{\theta} - \theta^*, \nabla_\theta\hat{L}(\theta^*)\rangle = \langle\hat{\theta} - \theta^*, \nabla_\theta^2\hat{L}(\bar{\theta})(\hat{\theta} - \theta^*)\rangle.$

This implies the following inequality (by the Cauchy-Schwarz inequality):

$\|\hat{\theta} - \theta^*\|\,\|\nabla_\theta\hat{L}(\theta^*)\| \ge \big|(\hat{\theta} - \theta^*)^\top\nabla_\theta^2\hat{L}(\bar{\theta})(\hat{\theta} - \theta^*)\big| \ge \sigma_{\min}\,\|\hat{\theta} - \theta^*\|^2.$

Assuming $\|\hat{\theta} - \theta^*\| \ne 0$, dividing through by $\|\hat{\theta} - \theta^*\|$ gives $\frac{1}{\sigma_{\min}}\|\nabla_\theta\hat{L}(\theta^*)\| \ge \|\hat{\theta} - \theta^*\|$. ∎
For the remainder, we need only to show that $D_{\mathrm{TKSD}}(p_\theta\,|\,q)^2$ satisfies the conditions of Lemma A.1.

Proof.

First, we need to show that

$\big[\nabla_\theta D_{\mathrm{TKSD}}(p_\theta\,|\,q)^2\big|_{\theta=\theta^*}\big]_i = O_p(1/\sqrt{n}), \quad \forall i.$ (46)

For brevity, let us define $\mathbb{E}_x := \mathbb{E}_{x\sim q}$ and similarly $\mathbb{E}_y := \mathbb{E}_{y\sim q}$. Recall

$D_{\mathrm{TKSD}}(p_\theta\,|\,q)^2 = \sum_{d=1}^D \mathbb{E}_x\mathbb{E}_y\big[u_d(x,y) - \mathbf{v}_d(x)^\top(K')^{-1}\mathbf{v}_d(y)\big],$ (47)
where
$$u_d(x,y) = s_{p,d}(x)\,s_{p,d}(y)\,k(x,y) + s_{p,d}(x)\,\partial_{y_d} k(x,y) + s_{p,d}(y)\,\partial_{x_d} k(x,y) + \partial_{x_d}\partial_{y_d} k(x,y),$$
$$\mathbf{v}_d(x) = s_{p,d}(x)\,\boldsymbol{\varphi}_{x,\mathbf{x}'}^\top + \big(\partial_{x_d} \boldsymbol{\varphi}_{x,\mathbf{x}'}\big)^\top,$$
$\boldsymbol{\varphi}_{x,\mathbf{x}'} = [k(x,x'_1), \dots, k(x,x'_m)]$, $\Phi_{\mathbf{x}'} = [\varphi(x'_1,\cdot), \dots, \varphi(x'_m,\cdot)]^\top$ and $G' = \Phi_{\mathbf{x}'}\Phi_{\mathbf{x}'}^\top$. Let us take the derivative of (47) with respect to $\theta$. We first consider the $d$-th summand, and then note that the same argument applies to all $D$ dimensions. Therefore consider
$$\nabla_\theta D_{\mathrm{TKSD}}(p_\theta|q)^2\big|_{\theta=\theta^*} = \nabla_\theta\,\mathbb{E}_x\mathbb{E}_y\big[u_d(x,y) - \mathbf{v}_d(x)^\top (G')^{-1}\mathbf{v}_d(y)\big]\big|_{\theta=\theta^*}$$
$$= \mathbb{E}_x\mathbb{E}_y\big[\nabla_\theta u_d(x,y) - \mathbf{v}_d(x)^\top (G')^{-1}\big(\nabla_\theta \mathbf{v}_d(y)\big) - \big(\nabla_\theta \mathbf{v}_d(x)\big)^\top (G')^{-1}\mathbf{v}_d(y)\big]\big|_{\theta=\theta^*}. \tag{48}$$
We begin by expanding the first term in (48):
$$\mathbb{E}_x\mathbb{E}_y\big[\nabla_\theta u_d(x,y)\big]\big|_{\theta=\theta^*} = \mathbb{E}_x\mathbb{E}_y\big[ s_{p,d}(x)\,k(x,y)\,(\nabla_\theta s_{p,d}(y)) + (\nabla_\theta s_{p,d}(x))\,k(x,y)\,s_{p,d}(y) + (\nabla_\theta s_{p,d}(x))\,\partial_{y_d} k(x,y) + (\nabla_\theta s_{p,d}(y))\,\partial_{x_d} k(x,y) \big]\big|_{\theta=\theta^*}. \tag{49}$$
Now expand the first term of (49), giving
$$\mathbb{E}_x\mathbb{E}_y\big[s_{p,d}(x)\,k(x,y)\,(\nabla_\theta s_{p,d}(y))\big]\big|_{\theta=\theta^*} = \int_V q(x)\,s_{p,d}(x)\,\mathbb{E}_y\big[k(x,y)\,(\nabla_\theta s_{p,d}(x))\big]\,dx\,\Big|_{\theta=\theta^*} = \int_V \partial_{x_d} q(x)\,\mathbb{E}_y\big[k(x,y)\,[\nabla_\theta s_{p,d}(y)]\big|_{\theta=\theta^*}\big]\,dx \tag{50}$$
due to
$$q(x)\,s_{p,d}(x)\big|_{\theta=\theta^*} = q(x)\,\partial_{x_d}\log p_{\theta^*}(x) = q(x)\,\frac{\partial_{x_d} q(x)}{q(x)} = \partial_{x_d} q(x).$$
Then, via integration by parts, (50) becomes
$$\oint_{\partial V} q(x)\,\mathbb{E}_y\big[k(x,y)\,[\nabla_\theta s_{p,d}(y)]\big|_{\theta=\theta^*}\big]\,\hat{u}_d(x)\,dx - \int_V q(x)\,\mathbb{E}_y\big[\partial_{x_d} k(x,y)\,[\nabla_\theta s_{p,d}(y)]\big|_{\theta=\theta^*}\big]\,dx.$$
We can expand the second term of (49) in a similar way:
$$\mathbb{E}_x\mathbb{E}_y\big[(\nabla_\theta s_{p,d}(x))\,k(x,y)\,s_{p,d}(y)\big]\big|_{\theta=\theta^*} = \oint_{\partial V} q(y)\,\mathbb{E}_x\big[[\nabla_\theta s_{p,d}(x)]\big|_{\theta=\theta^*}\,k(x,y)\big]\,\hat{u}_d(y)\,dy - \int_V q(y)\,\mathbb{E}_x\big[[\nabla_\theta s_{p,d}(x)]\big|_{\theta=\theta^*}\,\partial_{y_d} k(x,y)\big]\,dy.$$
Taking both of these together, and noting that their volume integrals cancel against the third and fourth terms of (49), the quantity in (49) reduces to the boundary terms
$$B := \oint_{\partial V} q(x)\,\mathbb{E}_y\big[k(x,y)\,[\nabla_\theta s_{p,d}(y)]\big|_{\theta=\theta^*}\big]\,\hat{u}_d(x)\,dx + \oint_{\partial V} q(y)\,\mathbb{E}_x\big[[\nabla_\theta s_{p,d}(x)]\big|_{\theta=\theta^*}\,k(x,y)\big]\,\hat{u}_d(y)\,dy. \tag{51}$$
Now consider the second term in (48),
$$\mathbb{E}_x\mathbb{E}_y\big[\mathbf{v}_d(x)^\top (G')^{-1}\big(\nabla_\theta \mathbf{v}_d(y)\big)\big]\big|_{\theta=\theta^*} \tag{52}$$
$$= \mathbb{E}_x\mathbb{E}_y\big[\big(s_{p,d}(x)\,\boldsymbol{\varphi}_{x,\mathbf{x}'} + \partial_{x_d}\boldsymbol{\varphi}_{x,\mathbf{x}'}\big)(G')^{-1}\big(\boldsymbol{\varphi}_{y,\mathbf{x}'}^\top (\nabla_\theta s_{p,d}(y))^\top\big)\big]\big|_{\theta=\theta^*}$$
$$= \mathbb{E}_x\mathbb{E}_y\big[ s_{p,d}(x)\,\boldsymbol{\varphi}_{x,\mathbf{x}'}(G')^{-1}\big(\boldsymbol{\varphi}_{y,\mathbf{x}'}^\top (\nabla_\theta s_{p,d}(y))^\top\big) + \big(\partial_{x_d}\boldsymbol{\varphi}_{x,\mathbf{x}'}\big)(G')^{-1}\big(\boldsymbol{\varphi}_{y,\mathbf{x}'}^\top (\nabla_\theta s_{p,d}(y))^\top\big)\big]\big|_{\theta=\theta^*} \tag{53}$$
$$= \mathbb{E}_x\big[s_{p,d}(x)\,\boldsymbol{\varphi}_{x,\mathbf{x}'}(G')^{-1}\,\mathbb{E}_y\big[\boldsymbol{\varphi}_{y,\mathbf{x}'}^\top (\nabla_\theta s_{p,d}(y))^\top\big]\big]\big|_{\theta=\theta^*} + \mathbb{E}_x\big[\big(\partial_{x_d}\boldsymbol{\varphi}_{x,\mathbf{x}'}\big)(G')^{-1}\,\mathbb{E}_y\big[\boldsymbol{\varphi}_{y,\mathbf{x}'}^\top (\nabla_\theta s_{p,d}(y))^\top\big]\big]\big|_{\theta=\theta^*}. \tag{54}$$
Consider only the first term of the above,
$$\mathbb{E}_x\big[s_{p,d}(x)\,\boldsymbol{\varphi}_{x,\mathbf{x}'}(G')^{-1}\,\mathbb{E}_y\big[\boldsymbol{\varphi}_{y,\mathbf{x}'}^\top (\nabla_\theta s_{p,d}(y))^\top\big]\big]\big|_{\theta=\theta^*} = \int_V q(x)\,s_{p,d}(x)\,\boldsymbol{\varphi}_{x,\mathbf{x}'}(G')^{-1}\,\mathbb{E}_y\big[\boldsymbol{\varphi}_{y,\mathbf{x}'}^\top (\nabla_\theta s_{p,d}(y))^\top\big]\,dx\,\Big|_{\theta=\theta^*} = \int_V \partial_{x_d} q(x)\,\boldsymbol{\varphi}_{x,\mathbf{x}'}(G')^{-1}\,\mathbb{E}_y\big[\boldsymbol{\varphi}_{y,\mathbf{x}'}^\top [\nabla_\theta s_{p,d}(y)]^\top\big|_{\theta=\theta^*}\big]\,dx.$$
Integration by parts can be used to expand this to
$$\oint_{\partial V} q(x)\,\boldsymbol{\varphi}_{x,\mathbf{x}'}(G')^{-1}\,\mathbb{E}_y\big[\boldsymbol{\varphi}_{y,\mathbf{x}'}^\top [\nabla_\theta s_{p,d}(y)]^\top\big|_{\theta=\theta^*}\big]\,\hat{u}_d(x)\,dx - \int_V q(x)\,\big(\partial_{x_d}\boldsymbol{\varphi}_{x,\mathbf{x}'}\big)(G')^{-1}\,\mathbb{E}_y\big[\boldsymbol{\varphi}_{y,\mathbf{x}'}^\top [\nabla_\theta s_{p,d}(y)]^\top\big|_{\theta=\theta^*}\big]\,dx, \tag{55}$$
where the second integral cancels the second term of (54), leaving only the boundary term.
Lemma 5.8 indicates that we have the following equivalence (in a coordinate-wise fashion):
$$\oint_{\partial V} q(x)\,\boldsymbol{\varphi}_{x,\mathbf{x}'}(G')^{-1}\,\mathbb{E}_y\big[\boldsymbol{\varphi}_{y,\mathbf{x}'}^\top [\nabla_\theta s_{p,d}(y)]^\top\big|_{\theta=\theta^*}\big]\,\hat{u}_d(x)\,dx = \oint_{\partial V} q(x)\,\mathbb{E}_y\big[k(x,y)\,[\nabla_\theta s_{p,d}(y)]^\top\big|_{\theta=\theta^*}\big]\,\hat{u}_d(x)\,dx + O_p(\hat{\varepsilon}_{m,1}). \tag{56}$$
We interpret this as follows. If we consider $\mathbf{x}' \subset \partial\tilde{V}$ as our training set, then our regression function is trained on the approximate set of boundary points. The surface integral denoted by $\oint_{\partial V}$ is evaluated over all $x \in \partial V$, the true boundary. Therefore predictions on $x \in \partial V$ are well approximated by the regression function trained on $\mathbf{x}'$, and the approximation error, $\hat{\varepsilon}_{m,1}$, decreases as $m$, the number of training points, increases.
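This regression interpretation can be illustrated with a toy kernel interpolation on the unit circle, standing in for $\partial V$: a function fitted on $m$ boundary points predicts increasingly well on a dense set of boundary points as $m$ grows. The kernel, lengthscale and target function below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rbf(X, Y, ell=0.5):
    # Gaussian kernel matrix between two point sets
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ell ** 2))

def boundary_error(m, n_eval=400, reg=1e-8):
    # m "training" boundary points (stand-in for the sampled boundary points x')
    t_tr = np.linspace(0, 2 * np.pi, m, endpoint=False)
    Xtr = np.c_[np.cos(t_tr), np.sin(t_tr)]
    # dense evaluation points standing in for the true boundary
    t_ev = np.linspace(0, 2 * np.pi, n_eval, endpoint=False)
    Xev = np.c_[np.cos(t_ev), np.sin(t_ev)]
    f = lambda X: np.sin(3 * np.arctan2(X[:, 1], X[:, 0]))  # smooth target on the boundary
    # kernel interpolation fitted on the m training points
    alpha = np.linalg.solve(rbf(Xtr, Xtr) + reg * np.eye(m), f(Xtr))
    pred = rbf(Xev, Xtr) @ alpha
    return np.abs(pred - f(Xev)).max()   # worst-case error over the dense boundary

errs = [boundary_error(m) for m in (5, 10, 20, 40)]
```

As $m$ grows, the worst-case prediction error over the dense boundary set shrinks, mirroring how $\hat{\varepsilon}_{m,1}$ decreases with the number of boundary points.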
We can apply the same operations (from (54) onwards) to the third term in (48), which gives
$$\mathbb{E}_x\mathbb{E}_y\big[\big(\nabla_\theta \mathbf{v}_d(x)\big)^\top (G')^{-1}\mathbf{v}_d(y)\big]\big|_{\theta=\theta^*} = \oint_{\partial V} q(y)\,\mathbb{E}_x\big[[\nabla_\theta s_{p,d}(x)]^\top\big|_{\theta=\theta^*}\,k(x,y)\big]\,\hat{u}_d(y)\,dy + O_p(\hat{\varepsilon}_{m,2}), \tag{57}$$
where $\hat{\varepsilon}_{m,2}$ is the approximation error of the corresponding kernel regression on this term. Taking the sum of (56) and (57), as it appears in (48),
$$\oint_{\partial V} q(x)\,\mathbb{E}_y\big[k(x,y)\,[\nabla_\theta s_{p,d}(y)]^\top\big|_{\theta=\theta^*}\big]\,\hat{u}_d(x)\,dx + \oint_{\partial V} q(y)\,\mathbb{E}_x\big[[\nabla_\theta s_{p,d}(x)]^\top\big|_{\theta=\theta^*}\,k(x,y)\big]\,\hat{u}_d(y)\,dy + O_p(\hat{\varepsilon}_{m,1}) + O_p(\hat{\varepsilon}_{m,2}) = B + O_p(\hat{\varepsilon}_m),$$
where $\hat{\varepsilon}_m = \hat{\varepsilon}_{m,1} + \hat{\varepsilon}_{m,2}$. Therefore, when substituting all of (51), (56) and (57) back into (48), the boundary terms cancel and we have
$$\nabla_\theta D_{\mathrm{TKSD}}(p_\theta|q)^2\big|_{\theta=\theta^*} = O_p(\hat{\varepsilon}_m),$$
where $\hat{\varepsilon}_m$ is a decreasing function of $m$.
Now we show $\theta^*$ is the true density parameter, i.e., $p_{\theta^*} = q$. Let $D_{\mathrm{TKSD}}(p_{\theta^*}|q) := D_{\mathrm{TKSD}}(p_\theta|q)\big|_{\theta=\theta^*}$. First, we show that $L(\theta^*) = 0$. This is guaranteed as
$$L(\theta^*) = \lim_{m\to\infty} D_{\mathrm{TKSD}}(p_{\theta^*}|q)^2 = \lim_{m\to\infty}\Big\{\sup_{f\in\mathcal{G}_{0,m}^D} \mathbb{E}_x\big[\mathcal{T}_{p_{\theta^*}} f\big]\Big\}^2 = \lim_{m\to\infty} O_p(\varepsilon_m) = 0 \ \text{(with high probability)},$$
for $f \in \mathcal{G}_{0,m}^D$. The change from the Stein discrepancy to $O_p(\varepsilon_m)$ is guaranteed by Theorem 5.5. Since we assume that the minimiser of the population objective $\theta^*$ is unique and our density model is identifiable, the unique solution of the population objective is the unique optimal parameter of the density model.
Second, we verify that $\|\nabla_\theta \hat{L}(\theta^*)\| = O_p(1/\sqrt{n})$:
$$\|\nabla_\theta \hat{L}(\theta^*)\| = \Big\|\nabla_\theta \lim_{m\to\infty} \hat{D}_{\mathrm{TKSD}}(p_{\theta^*}|q)^2\Big\| = \lim_{m\to\infty}\big\|\nabla_\theta \hat{D}_{\mathrm{TKSD}}(p_{\theta^*}|q)^2\big\|$$
$$= \lim_{m\to\infty}\big\|\nabla_\theta \hat{D}_{\mathrm{TKSD}}(p_{\theta^*}|q)^2 - \nabla_\theta D_{\mathrm{TKSD}}(p_{\theta^*}|q)^2 + \nabla_\theta D_{\mathrm{TKSD}}(p_{\theta^*}|q)^2\big\|$$
$$\le \lim_{m\to\infty}\Big(\big\|\nabla_\theta \hat{D}_{\mathrm{TKSD}}(p_{\theta^*}|q)^2 - \nabla_\theta D_{\mathrm{TKSD}}(p_{\theta^*}|q)^2\big\| + \big\|\nabla_\theta D_{\mathrm{TKSD}}(p_{\theta^*}|q)^2\big\|\Big)$$
$$\le \lim_{m\to\infty}\Big(\big\|\nabla_\theta \hat{D}_{\mathrm{TKSD}}(p_{\theta^*}|q)^2 - \nabla_\theta D_{\mathrm{TKSD}}(p_{\theta^*}|q)^2\big\| + O_p(\hat{\varepsilon}_m)\Big)$$
$$\overset{(a)}{\le} \lim_{m\to\infty}\Big(O_p(1/\sqrt{n}) + O_p(\hat{\varepsilon}_m)\Big) \tag{58}$$
$$\overset{(b)}{=} O_p(1/\sqrt{n}),$$
where the inequality in (a) is due to the convergence of U-statistics (Serfling, 2009), and the equality in (b) is due to $\lim_{m\to\infty} O_p(\hat{\varepsilon}_m) = 0$.
∎
We have $\|\nabla_\theta \hat{L}(\theta^*)\| = O_p(1/\sqrt{n})$, and (5.9) provides $\lambda_{\min}[\nabla_\theta^2 \hat{L}(\theta)] \ge \Lambda_{\min} > 0$; both conditions in Lemma A.1 are therefore satisfied, and this concludes the proof of the consistency of $\hat{\theta}_n$, i.e., $\|\hat{\theta}_n - \theta^*\| = O_p(1/\sqrt{n})$.
Appendix B Computation
B.1 Empirical Convergence of $\tilde{f}$ to $f$
In Lemma 5.2 we proved that, under some conditions, the difference between $f \in \mathcal{G}_0^D$ and $\tilde{f} \in \mathcal{G}_{0,m}^D$ evaluated on $x' \in \partial V$ is bounded in probability by $\varepsilon_m$, and that this bound tends to zero as $m \to \infty$. In this section we aim to show that $\tilde{f}$ converges to a specific function as $m$ increases, which we hypothesise is the "true" $f \in \mathcal{G}_0^D$.
Figure 3 demonstrates the empirical convergence of $\tilde{f}$ as $m$ increases for a given dataset. The experiment is set up as follows: we simulate points from a $\mathcal{N}(\mathbf{0}_2, \mathbf{I}_2)$ distribution (a unit multivariate Normal distribution in 2D), truncate the data points to within the shape of the boundary based on location, and repeat until we acquire $n = 150$ truncated points. For each value of $m$, we minimise the TKSD objective to estimate $\theta = \mathbf{0}_2$, and plug the estimated $\hat{\theta}$ into the formula for $\tilde{f}$.
For lower values of $m$, there are not enough boundary points to enforce the constraint on $\tilde{f}$ well: the approximate Stein identity has a high error, and the estimator is poor. The evaluations of $\tilde{f}$ across the space do not match the $\tilde{f}$ output for higher values of $m$. As $m$ increases, $\tilde{f}$ begins to converge to a particular shape, and the difference between the function evaluations for $m = 150$ and $m = 300$ is small.
Figure 3: Contour lines of the optimal $\tilde{f}_0$ (first dimension of $\tilde{f}$) output by TKSD across different values of $m$ and differently shaped truncation boundaries: the $\ell_1$ ball (top), the $\ell_2$ ball (middle) and a heart shape (bottom). Red points are all $m$ points in $\partial\tilde{V}$ and grey points are samples from the truncated dataset. Note that we plot only the first dimension of $\tilde{f}$ (i.e. $f_1$), but we observe the same pattern with the second dimension.
B.2 Evaluating Increasing Values of $m$ in Proposition 5.3
Figure 4: Lower bound on $\varepsilon_m$ (given in (19)) against $m$, the number of finite boundary points, plotted for different values of fixed dimension $d$ and boundary "size" $L(V)$, which scales quadratically.
In Figure 4 we show that $\varepsilon_m$, as defined in Proposition 5.3, is bounded below by a decreasing function of $m$. In this example, we plot the value of the lower bound in (19) against $m$, the number of samples on the boundary set, and $L(V)$, the $(d-1)$-surface area of the bounded domain $V$, which can be considered a proxy for the complexity of the boundary.
We let $L(V)$ scale quadratically with dimension $d$ and treat it as a constant, let $m$ scale linearly, and measure the effect on the lower bound of $\varepsilon_m$. As $L(V)$ increases, the bound on $\varepsilon_m$ increases slightly. As $m$ increases, the bound on $\varepsilon_m$ decreases rapidly: even at extremely high values of $L(V)$, relatively small values of $m$ suffice to shrink the bound. This result highlights how to choose $m$ in experiments. As $m$ increases, the "denseness" of $\partial\tilde{V}$ in $\partial V$ also increases, as expected; whilst larger values of $m$ will always be better, the effect is less pronounced once $m$ is significantly larger than zero. As discussed in Section B.8, larger $m$ comes with computational cost, and therefore this experiment shows that it may be beneficial to choose smaller values of $m$. This is explored further in Section B.3.
B.3 Empirical Consistency
Figure 5: Left: estimation error as $n$ and $m$ increase, for TKSD only. Right: mean estimation error for the three methods, TKSD, TruncSM and bd-KSD, with standard error bars. TKSD uses a fixed $m = 32$ across all values of $n$. Both plots report statistics over 64 seeds.
We verify empirically that (26) is a consistent estimator. Note that for consistency as proven in Theorem 5.10, we require taking the limit as $m \to \infty$, after which the estimator is consistent in $n$; so as $n$ and $m$ increase, the estimation error should decrease towards zero. We show this empirically in Figure 5, for a simple experiment setup similar to that of Section 6.3. Data are simulated from $\mathcal{N}(\mu^*, I_d)$, where $d = 2$, $\mu^* = [0.5, 0.5]^\top$, and $I_d$ is known. Data are truncated to the $\ell_2$ ball until we reach $n$ data points (after truncation).
The aim of estimation is $\mu^*$. Across 64 trials, Figure 5 shows plots of the mean estimation error, given by $\|\hat{\mu} - \mu^*\|$, where $\hat{\mu}$ is the estimate of $\mu^*$ output by a given method. In the first plot (left), we show that as both $n$ and $m$ increase, the estimation error for TKSD decreases towards zero. In the second plot (right), for a fixed $m$, we show that the rate of convergence as $n$ increases for TKSD matches that of bd-KSD and TruncSM.
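The data-generating step described above (rejection sampling from $\mathcal{N}(\mu^*, I_2)$ until $n$ points survive the $\ell_2$-ball truncation) can be sketched as follows; the helper name and batch size are our own choices, not taken from the paper.

```python
import numpy as np

def sample_truncated_gaussian(n, mu, rng, radius=1.0):
    """Rejection-sample N(mu, I_2) draws until n of them fall inside the l2 ball."""
    out = []
    while len(out) < n:
        x = rng.normal(size=(4 * n, 2)) + mu          # propose in batches
        x = x[np.linalg.norm(x, axis=1) <= radius]    # keep only points inside the ball
        out.extend(x.tolist())
    return np.asarray(out[:n])

rng = np.random.default_rng(0)
X = sample_truncated_gaussian(500, np.array([0.5, 0.5]), rng)
```

Every retained point satisfies the truncation constraint, so an estimator run on `X` sees only the truncated sample, exactly as in the experiment.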
B.4 Gaussian Mixture Experiment
Figure 6: Top: example setup for the experiment as the number of mixture modes increases from 2 to 4. Middle: estimation error as $n$ increases for 2 mixture modes, averaged across 256 seeds. Bottom: estimation error as the number of mixture modes increases for a fixed $n = 300$, averaged across 256 seeds.
As an additional experiment to show the capability of the method, we test on a more complex problem: estimating the several means of a Gaussian mixture distribution. The estimation task is as follows. Fix $d = 2$ and $n = 200$. Let the mixture modes be given by
$$\mu_1^* = [1.5, 1.5]^\top, \quad \mu_2^* = [-1.5, -1.5]^\top, \quad \mu_3^* = [-1.5, 1.5]^\top, \quad \mu_4^* = [1.5, -1.5]^\top.$$
We independently simulate samples from $\mathcal{N}(\mu_j^*, I_d)$, for $j = 1, \dots, 4$, and truncate these samples to within a box with vertices at $[-3,-3]$, $[3,-3]$, $[-3,3]$ and $[3,3]$, until we reach a total of $n$ samples after truncation. Figure 6, top, shows an example of this experiment setup.
The task is to estimate $\mu_j^*$ for all $j$. To ensure a well-specified experiment, we set the initial conditions of the optimisation routine as a perturbation from the true value, i.e. $\mu_j^{\mathrm{ini}} = \mu_j^* + \epsilon$, $\epsilon \sim \mathcal{N}(\mathbf{0}, 0.5 \cdot I_2)$. We estimate $\mu$ with TKSD and compare it to the corresponding estimate by TruncSM across a range of different values of $n$ and numbers of mixture modes, shown in Figure 6, middle and bottom. Overall, TKSD significantly outperforms TruncSM in this experiment across all variations, at the cost of runtime.
B.5Regression
We give a further example of using TKSD to estimate the parameters of a regression problem, with two examples in this section: one on synthetic data and one on real data.
Synthetic Data
First, we simulate data in the following way:
$$y_i \sim \mathcal{N}(\mu_i, 1), \quad \mu_i = \beta_0^* + \beta_1^* x_i, \quad x_i \sim \mathrm{Uniform}(0, 1),$$
where the true values, known to us for evaluation, are $\beta_0^* = 3$ and $\beta_1^* = 4$. We truncate the dataset to where $y_i \ge 5\ \forall i$, so only a portion of both $y$ and $x$ is observed. We then estimate the conditional density $p(y|x)$ with TKSD via estimation of $\beta^* := [\beta_0^*, \beta_1^*]^\top$.
We obtain the log-likelihood of the Normal distribution under the estimate $\hat{\beta} := [\hat{\beta}_0, \hat{\beta}_1]^\top$ output by TKSD, split across the observed dataset (within the truncation boundary) and the unobserved dataset (outside the truncation boundary). We compare these log-likelihoods to ones obtained by a naive MLE approach which does not account for truncation. As an additional measure, we calculate the unobserved error, given by
$$\mathrm{MSE}(\mathbf{y}, \hat{\mathbf{y}}) := \frac{1}{n_{\mathrm{unobs}}} \sum_i (\hat{y}_i - y_i)^2, \quad \hat{y}_i = \hat{\beta}_0 + \hat{\beta}_1 x_i,$$
where $n_{\mathrm{unobs}}$ is the number of data points discarded due to truncation, and this error is calculated over those discarded points only. This forms a measure of how well the density estimation method predicts unseen data points: instead of a traditional test error, it is the error on data points that were truncated away and hence never observed by either method.
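To make the "unobserved error" concrete, here is a minimal sketch of the synthetic setup with a naive least-squares fit on the observed (truncated) points; TKSD itself is not reimplemented here. On the truncated-away points, the naive fit does worse than the true regression line:

```python
import numpy as np

rng = np.random.default_rng(1)
b0, b1 = 3.0, 4.0                       # true regression parameters from the text
x = rng.uniform(0, 1, size=2000)
y = rng.normal(b0 + b1 * x, 1.0)

obs = y >= 5.0                          # truncation: only y >= 5 is observed
# naive MLE/OLS fit on the observed (truncated) points only
A = np.c_[np.ones(int(obs.sum())), x[obs]]
beta_naive, *_ = np.linalg.lstsq(A, y[obs], rcond=None)

# "unobserved error": MSE over the truncated-away points only
xu, yu = x[~obs], y[~obs]
mse_naive = np.mean((beta_naive[0] + beta_naive[1] * xu - yu) ** 2)
mse_true = np.mean((b0 + b1 * xu - yu) ** 2)
```

The naive fit ignores the selection effect, so its line sits above the true one and its slope is attenuated, which inflates the unobserved error; a truncation-aware estimator is intended to close this gap.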
A histogram of log-likelihoods and errors across 256 trials is given in Figure 7 (left), and an example of one such simulated regression is given in Figure 7 (top-right). The log-likelihoods for the observed points are higher for MLE, since this method directly maximises those likelihoods. However, on the unobserved dataset, the log-likelihoods obtained by TKSD are significantly higher than those of MLE. The unobserved errors are significantly smaller for TKSD, showing the improvement of TKSD over the naive approach.
Real Data: Student Test Scores
We also experiment on a real-world dataset given by UCLA: Statistical Consulting Group (2022) (Example 1). This dataset contains student test scores in a school for which the acceptance threshold is 40 out of 100; the response variable (the test scores) is therefore truncated below at 40 and above at 100. Since no scores get close to 100, we only consider one-sided truncation at $y = 40$. The aim of the regression is to model the response variable, the test scores, based on a single covariate: each student's score on a different test. Figure 7 (bottom-right) shows the plotted dataset and the regression lines fit by TKSD and naive MLE. Whilst we have no true baseline to compare to, the TKSD regression line appears to account for the truncation, whilst, as expected, MLE does not.
Figure 7: Statistics from the regression fit. Left: mean squared errors on the observed dataset only (top); log-likelihoods on the observed dataset (middle) and the unobserved dataset (bottom), both for a $\mathcal{N}(\mu, 1)$ distribution and over 256 trials. Right: an example of TKSD used to fit a regression on the synthetic data in one trial (top), and TKSD used to fit a regression on the student test scores data (bottom). Fuller black points are the observed data points after truncation, and smaller grey points are the unobserved data points that were truncated out in the data simulation process.
B.6 Quantifying the Effect of the Boundary Point Distribution
We test whether the sampling distribution of the boundary points has an effect on the robustness of TKSD. To do so, we repeat the simple experiment setup where data are simulated as
$$x \sim \mathcal{N}(\mu^*, \mathbf{I}_2), \quad \mu^* = [1, 1]^\top,$$
from which we observe $n = 400$ realisations of $x$ restricted to the unit $\ell_2$ ball around the origin, and let $m = 30$. We use TKSD to provide an estimate $\hat{\mu}$ of $\mu^*$ under three scenarios:
1. Boundary points are distributed towards $\mu^*$, i.e. samples from $\partial\tilde{V}$ are closer to the centre of the dataset.
2. Boundary points are distributed away from $\mu^*$, i.e. samples from $\partial\tilde{V}$ are closer to the edge of the dataset.
3. Boundary points are sampled uniformly across $\partial V$.
Figure 8: Example of the experiment setup for measuring the effect of the boundary point distribution.
Figure 9: Frequency of estimation errors ($\|\hat{\mu} - \mu^*\|$) in the simple experiment setup for the three differently distributed sets of boundary points.
See Figure 8 for a visual representation of these three scenarios. We measure the estimation error, $\|\hat{\mu} - \mu^*\|$, and test whether one scenario provides a lower error on average. Figure 9 shows the distribution of all three estimation errors across 256 trials. Scenarios 1 and 3 provide comparable error distributions, which are significantly smaller than the error distribution of scenario 2, implying that the boundary either needs to be covered fully, or the boundary points need to be "representative" of where the most significant truncation effect is.
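For reference, here is one way to generate the three boundary-point distributions on the unit circle; this is a hypothetical construction, as the paper does not specify its exact sampling scheme, and the concentration parameter 0.3 is our own choice.

```python
import numpy as np

def boundary_points(m, mode, rng):
    """Sample m points on the unit circle under the three scenarios."""
    if mode == "uniform":                       # scenario 3: uniform over the boundary
        t = rng.uniform(0, 2 * np.pi, m)
    elif mode == "towards":                     # scenario 1: angles concentrated near mu*'s direction
        t = np.pi / 4 + 0.3 * rng.normal(size=m)
    else:                                       # scenario 2: angles concentrated on the far side
        t = np.pi / 4 + np.pi + 0.3 * rng.normal(size=m)
    return np.c_[np.cos(t), np.sin(t)]          # map angles onto the unit circle

pts = boundary_points(30, "uniform", np.random.default_rng(0))
```

All three modes return points exactly on the unit circle; only their angular distribution differs, which is the quantity the experiment varies.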
B.7 Choosing Boundary Size for Dimension Benchmarks
Figure 10: Mean percentage of points which remain after truncation as dimension $d$ increases, where the truncation domain is an $\ell_p$ ball of radius $d^{0.98}$ for $p = 1$ (left), and $d^{0.53}$ for $p = 2$ (right).
In Section 6.3, we chose the size of the boundary to scale with $d$ so that roughly the same proportion of data points is truncated for each value of dimension $d$. The radii $d^{0.98}$ and $d^{0.53}$ for the $\ell_1$ ball and $\ell_2$ ball respectively were chosen via trial and error such that the percentage of points simulated from the Normal distribution remaining after truncation did not vary significantly. Figure 10 shows that with this choice of $\ell_1$ and $\ell_2$ ball radius, the mean percentage of points that remain after truncation stays consistent as dimension increases in both cases.
B.8 Extra Computation Details
•
The inversion of $G'$ is the largest computational expense when evaluating the $V$-statistic or $U$-statistic ((24) and (25) respectively). If $n < m$, which will be common as we want $m$ to be large, then $G'$ is rank deficient. To circumvent this issue, we invert $G' + \gamma I_m$ instead, where $\gamma > 0$ is small. Additionally, $G' + \gamma I_m$ is symmetric and positive definite, so we can exploit its Cholesky decomposition to make the inversion faster and more stable.
Overall, this inversion looks like
$$(G')^{-1} \approx (G' + \gamma I_m)^{-1} = (C^{-1})^\top C^{-1},$$
where $C$ is the lower triangular matrix from the Cholesky decomposition $G' + \gamma I_m = CC^\top$. The inversion of $C$, a lower triangular matrix, requires half as many operations as inverting $G' + \gamma I_m$ directly (Krishnamoorthy & Menon, 2013).
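A minimal numerical sketch of this regularised Cholesky inversion follows; the matrix sizes and the value of $\gamma$ are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 20
Phi = rng.normal(size=(m, n))
G = Phi @ Phi.T              # m x m Gram matrix, rank-deficient since n < m
gamma = 1e-3                 # small ridge making G + gamma*I positive definite

# lower triangular Cholesky factor: G + gamma*I = C C^T
C = np.linalg.cholesky(G + gamma * np.eye(m))
C_inv = np.linalg.inv(C)
G_reg_inv = C_inv.T @ C_inv  # (G + gamma*I)^{-1} = (C^{-1})^T C^{-1}

# sanity check: the product with the regularised matrix is close to the identity
residual = np.abs(G_reg_inv @ (G + gamma * np.eye(m)) - np.eye(m)).max()
```

In practice one would usually solve triangular systems with the factor rather than forming `C_inv` explicitly, but the explicit form above matches the formula in the text.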
โข
We also make use of methods presented in Jitkrittum et al. (2017) for fast computation when constructing all kernel matrices in Python.