Non-asymptotic oracle inequalities for the Lasso in high-dimensional mixture of experts

Source: https://arxiv.org/html/2009.10622

Under review, 2024.
TrungTin Nguyen (trungtin.nguyen@uq.edu.au), School of Mathematics and Physics, The University of Queensland, St Lucia QLD 4072, Australia; Univ. Grenoble Alpes, Inria, CNRS, Grenoble INP, LJK, Inria Grenoble Rhone-Alpes, 655 av. de l'Europe, 38335 Montbonnot, France
Hien D. Nguyen (h.nguyen5@latrobe.edu.au), School of Computing, Engineering and Mathematical Sciences, La Trobe University, Bundoora VIC 3086, Australia; Institute of Mathematics for Industry, Kyushu University, Nishi Ward, Fukuoka 819-0395, Japan
Faicel Chamroukhi (faicel.chamroukhi@irt-systemx.fr), IRT SystemX, Palaiseau, France
Geoffrey J. McLachlan (g.mclachlan@uq.edu.au), School of Mathematics and Physics, The University of Queensland, St Lucia QLD 4072, Australia

Abstract
We investigate the estimation properties of the mixture of experts (MoE) model in a high-dimensional setting, where the number of predictors is much larger than the sample size, and for which the literature is particularly lacking in theoretical results. We consider the class of softmax-gated Gaussian MoE (SGMoE) models, defined as MoE models with softmax gating functions and Gaussian experts, and focus on the theoretical properties of their 𝑙 1 -regularized estimation via the Lasso. To the best of our knowledge, we are the first to investigate the 𝑙 1 -regularization properties of SGMoE models from a non-asymptotic perspective, under the mildest assumptions, namely the boundedness of the parameter space. We provide a lower bound on the regularization parameter of the Lasso penalty that ensures non-asymptotic theoretical control of the Kullback–Leibler loss of the Lasso estimator for SGMoE models. Finally, we carry out a simulation study to empirically validate our theoretical findings.
Keywords: mixture of experts; mixture of regressions; penalized maximum likelihood; 𝑙 1 -oracle inequality; high-dimensional statistics; Lasso.

1 Introduction

1.1 Mixture of experts
MoE models, introduced in Jacobs et al. (1991), are a flexible mixture model construction for conditional density estimation and prediction. Because of their flexibility and the wealth of statistical estimation and model selection tools available for them, they have become widely used in statistics and machine learning. The MoE construction allows the mixture weights (or gating functions), together with the experts (or mixture component densities), to depend on the explanatory variables (or predictors). This permits the modeling of data arising from more complex generating processes than those that can be analyzed using mixture models and mixture of regressions models, whose mixing parameters are independent of the covariates. Finite mixture-type models have also become popular due to their universal approximation properties and good convergence rates for parameter and density estimation, which have been extensively studied in Genovese and Wasserman (2000); Nguyen (2013); Ho and Nguyen (2016); Nguyen et al. (2020, 2022a). In the same vein, results for parameter and conditional density estimation of MoE models have recently been established in Jiang and Tanner (1999); Norets (2010); Nguyen et al. (2016, 2019, 2021); Ho et al. (2022); Nguyen et al. (2023, 2024b, 2024a).
In the context of regression, softmax-gated Gaussian MoE (SGMoE) models, defined as MoE models with Gaussian experts and softmax gating functions, are a standard choice and a powerful tool for modeling complex nonlinear relationships between the response and the predictors arising from different subpopulations. Since each mixture weight is modeled by a softmax function of the covariates, the dependence on each feature appears both in the experts and in the gating functions, which allows one to capture more complex nonlinear relationships than mixture of regressions models can. This is demonstrated via numerical experiments in several works, such as Chamroukhi and Huynh (2018, 2019); Montuelle and Le Pennec (2014). The reader is referred to Yuksel et al. (2012); Nguyen and Chamroukhi (2018) for reviews on this topic. Statistical estimation and variable selection for MoE models in the high-dimensional regression setting remain challenging. In particular, from a theoretical point of view, there is a lack of results for MoE models where the number of explanatory variables can be much larger than the sample size. In such situations, we need to reduce the dimension of the problem by looking for the most relevant relationships, to avoid numerical problems while ensuring identifiability.
1.2 Related literature
We focus on the use of the Lasso, originally introduced by Tibshirani (1996), also known as the 𝑙 1 -penalized maximum likelihood estimator ( 𝑙 1 -PMLE). The 𝑙 1 -PMLE tends to produce sparse solutions and can be viewed as a convex surrogate for the non-convex 𝑙 0 -penalization problem; such relaxation methods have attractive computational and theoretical properties (cf., Fan and Li, 2001). First introduced for the linear regression model, the Lasso estimator has since been studied and extended to many statistical problems. To deal with heterogeneous high-dimensional data, several researchers have studied the Lasso for variable selection in the context of mixture of regressions models; see, e.g., Khalili and Chen (2007); Stadler et al. (2010); Meynet (2013); Devijver (2015); and Lloyd-Jones et al. (2018). In particular, Stadler et al. (2010) provided an 𝑙 0 -oracle inequality, satisfied by the Lasso estimators, conditional on a restricted eigenvalue condition, namely that the Fisher information matrix is positive definite. Furthermore, they had to introduce some margin conditions to link the Kullback–Leibler (KL) loss function to the 𝑙 2 -norm of the parameters. Another direction for studying this problem is to look at its 𝑙 1 -regularization properties; see, e.g., Massart and Meynet (2011); Meynet (2013); Devijver (2015). As indicated by Devijver (2015), in contrast to results for the 𝑙 0 -penalty, some results for the 𝑙 1 -penalty are valid without any assumptions, either on the Gram matrix or on the bounds.
1.3 Main contributions
Our overall contributions in the paper can be summarized as follows:
1.
To the best of our knowledge, we are the first to study the 𝑙 1 -regularization properties of SGMoE models from a non-asymptotic point of view under the mildest assumptions. Theorem 3.1 provides a lower bound on the regularization parameter of the Lasso penalty that ensures non-asymptotic theoretical control of the KL loss of the 𝑙 1 -PMLE for SGMoE models. Because this result is non-asymptotic, it is valid for fixed sample size 𝑛 , while the number of predictors 𝑝 can grow with 𝑛 and can be much larger than 𝑛 .
2.
Our non-asymptotic result complements the standard asymptotic results for high-dimensional SGMoE models for feature selection using the Lasso-PMLE, or the more general PMLE via the SCAD penalty function of Khalili (2010). Specifically, Khalili (2010) proved both consistency in feature selection and √ 𝑛 -consistency of the PMLE in SGMoE models, but under several strict conditions on the regularity of the true joint density function and on the choice of tuning parameters. On the contrary, the only mild assumption we use here to obtain the convergence rate of the error upper bounds in (10) from Theorem 3.1 is boundedness of the parameter space, which also appeared in Khalili (2010); Stadler et al. (2010); Meynet (2013); Devijver (2015).
3.
We extend non-asymptotic results for mixture of regressions models (Massart and Meynet, 2011; Meynet, 2013; Devijver, 2015) to the more general SGMoE models defined in (1), for which the non-asymptotic theoretical analysis is challenging because, in SGMoE models, the dependence on each feature appears both in the means of the experts and in the gating functions. This requires, in particular, the non-trivial technical proofs we establish in this paper.
4.
Our focus in this paper is on a simplified but standard setting in which the expert component means are linear functions with respect to the explanatory variables. Despite this linear simplification, the overall SGMoE model captures the non-linearity of the true regression function thanks to its mixture construction. We believe that the general techniques we develop here can be extended to more general experts, such as Gaussian experts with polynomial means (see, e.g., Mendes and Jiang, 2012), hierarchical MoE for exponential family regression models (Jiang and Tanner, 1999), and when the covariance matrix is also parameterized as an expert function potentially depending on the covariates as in Ho et al. (2022).
Notations. Throughout this paper, { 1 , … , 𝑛 } is abbreviated as [ 𝑛 ] for 𝑛 ∈ ℕ ⋆ := { 1 , 2 , … } . Here, vec ( ⋅ ) is the vectorization operator that stacks the columns of a matrix into a vector. We denote the induced 𝑝 -norm of a matrix 𝛽 by ∥ 𝛽 ∥ 𝑝 , 𝑝 ∈ { 1 , 2 , ∞ } , which differs from the vector norm ∥ vec ( 𝛽 ) ∥ 𝑝 . For a matrix Σ , 𝑚 ( Σ ) and 𝑀 ( Σ ) denote the smallest and largest eigenvalues of Σ , respectively. We write 𝒩 ( ⋅ ; 𝑣 , Σ ) for the multivariate Gaussian density with mean 𝑣 and covariance matrix Σ . Given an arbitrary event 𝒯 in some probability space, we define the indicator function 𝕀 𝒯 by 𝕀 𝒯 ( 𝜔 ) := 1 if 𝜔 ∈ 𝒯 and 𝕀 𝒯 ( 𝜔 ) := 0 if 𝜔 ∉ 𝒯 .
Paper organization. In Section 2, we discuss the construction and framework of high-dimensional SGMoE models. In Section 3, we present our main result. We then conduct a simulation study to empirically verify our theoretical results in Section 4. Some conclusions are given in Section 5. The supplementary material is devoted to proving the technical results.
2 Problem setup

2.1 High-dimensional SGMoE models
In the high-dimensional regression setting, we observe 𝑛 couples ( 𝑥 [ 𝑛 ] , 𝑦 [ 𝑛 ] ) ≡ ( 𝑥 𝑖 , 𝑦 𝑖 ) 𝑖 ∈ [ 𝑛 ] ∈ ( 𝒳 × 𝒴 ) 𝑛 ⊂ ( ℝ 𝑝 × ℝ 𝑞 ) 𝑛 , where typically 𝑝 ≫ 𝑛 , 𝑥 𝑖 is fixed and 𝑦 𝑖 is a realization of the random variable 𝑌 𝑖 , 𝑖 ∈ [ 𝑛 ] . We assume that, conditional on 𝑥 [ 𝑛 ] , 𝑌 [ 𝑛 ] are independent and identically distributed (IID) with conditional PDF 𝑠 0 ( ⋅ | 𝑥 𝑖 ) . Our goal is to estimate 𝑠 0 from the observations using the following 𝐾 -component SGMoE models:
$$
s_{\psi}(y|x) = \sum_{k=1}^{K} \frac{\exp\left(\gamma_{k0} + \gamma_k^{\top} x\right)}{\sum_{l=1}^{K} \exp\left(\gamma_{l0} + \gamma_l^{\top} x\right)}\, \mathcal{N}\left(y;\, \beta_{k0} + \beta_k x,\, \Sigma_k\right),
\tag{1}
$$

with 𝐾 ∈ ℕ ⋆ and unknown parameters 𝜓 := ( 𝛾 , 𝛽 , Σ ) ≡ ( 𝛾 𝑘 0 , 𝛾 𝑘 , 𝛽 𝑘 0 , 𝛽 𝑘 , Σ 𝑘 ) 𝑘 ∈ [ 𝐾 ] in a parameter space Ψ . For technical reasons, we require the covariates to be fixed and impose boundedness assumptions on the parameter space Ψ .
The explanatory variables 𝑥 [ 𝑛 ] and the number of components 𝐾 are both fixed. We assume that 𝒳 is a compact subset of ℝ 𝑝 and the observations 𝑥 [ 𝑛 ] are finite. Without loss of generality, we rescale 𝑥 so that ∥ 𝑥 ∥ ∞ ≤ 1 . Therefore, we can assume that 𝒳 = [ 0 , 1 ] 𝑝 . However, the arguments in our proofs are valid for covariates on any scale. We assume that there exist positive constants 𝐴 𝛾 , 𝐴 𝛽 , 𝑎 Σ , 𝐴 Σ , such that 𝜓 ∈ Ψ ~ , where

$$
\widetilde{\Psi} := \left\{ \psi \in \Psi \;\middle|\; \max_{k \in [K]} \sup_{x \in \mathcal{X}} \left( |\gamma_{k0}| + |\gamma_k^{\top} x| \right) \le A_{\gamma},\ \max_{z \in [q]} \max_{k \in [K]} \sup_{x \in \mathcal{X}} \left( |[\beta_{k0}]_z| + |[\beta_k x]_z| \right) \le A_{\beta},\ a_{\Sigma} \le m(\Sigma_k^{-1}) \le M(\Sigma_k^{-1}) \le A_{\Sigma} \right\}.
\tag{2}
$$
Collection of SGMoE models. In summary, given (1) and (2), we wish to estimate 𝑠 0 via the following collection of SGMoE models:

$$
S := \left\{ (x, y) \mapsto s_{\psi}(y|x) \mid \psi \in \widetilde{\Psi} \right\}.
\tag{3}
$$

In particular, to simplify the proofs, we shall assume that the true conditional PDF 𝑠 0 belongs to 𝑆 . That is to say, there exists 𝜓 0 := ( 𝛾 0 , 𝛽 0 , Σ 0 ) ∈ Ψ ~ , such that 𝑠 0 = 𝑠 𝜓 0 . From hereon, where there is no confusion, we will use 𝑠 0 and 𝑠 𝜓 0 interchangeably.
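To make the model construction concrete, the following minimal Python sketch evaluates the SGMoE conditional density of (1); the function name and the restriction to a scalar response ( 𝑞 = 1 ) are our own illustrative choices, not part of the paper.

```python
import numpy as np

def sgmoe_density(y, x, gamma0, gamma, beta0, beta, sigma2):
    """Evaluate s_psi(y|x) for a K-component SGMoE with scalar response.

    gamma0: (K,) gate biases, gamma: (K, p) gate slopes,
    beta0:  (K,) expert biases, beta: (K, p) expert slopes,
    sigma2: (K,) expert variances (q = 1 case of Sigma_k)."""
    # Softmax gating probabilities: exp(gamma_k0 + gamma_k^T x), normalized over k.
    logits = gamma0 + gamma @ x
    w = np.exp(logits - logits.max())
    w /= w.sum()
    # Gaussian experts: N(y; beta_k0 + beta_k^T x, sigma2_k).
    means = beta0 + beta @ x
    dens = np.exp(-0.5 * (y - means) ** 2 / sigma2) / np.sqrt(2 * np.pi * sigma2)
    # Mixture: gate-weighted sum of expert densities.
    return float(w @ dens)
```

Because the gate weights sum to one and each expert is a proper density, the result integrates to one in 𝑦 for every fixed 𝑥 .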
2.2 Minimum contrast estimation
Several loss functions have been considered for maximum likelihood estimation in MoE models. For example, using identifiability conditions, Nguyen et al. (2023, 2024b, 2024a) established inverse bounds between the Hellinger distance and certain Wasserstein distances or Voronoi loss functions to accurately capture heterogeneous parameter estimation convergence rates for different classes of MoE models. However, in this paper, our main idea is to consider the Lasso as an 𝑙 1 -ball model selection procedure; see, e.g., Massart and Meynet (2011). Therefore, we follow the framework of minimum contrast estimation; see, e.g., Massart (2007, Chapter 1), Arlot and Celisse (2010), and Barron et al. (1999). In this situation, the negative log-likelihood (NLL) and the KL divergence are the natural choices for the density estimation problem.
Average KL divergence. To take into account the structure of conditional PDFs, for fixed explanatory variables ( 𝑥 𝑖 ) 1 ≤ 𝑖 ≤ 𝑛 , we consider the following average KL loss function, for any densities 𝑠 and 𝑡 :

$$
\mathrm{KL}_n(s, t) := \frac{1}{n} \sum_{i=1}^{n} \mathrm{KL}\left(s(\cdot|x_i),\, t(\cdot|x_i)\right),
\tag{4}
$$

where

$$
\mathrm{KL}(s, t) :=
\begin{cases}
\displaystyle\int_{\mathbb{R}^q} \ln\left(\frac{s(y)}{t(y)}\right) s(y)\, dy, & \text{if } s\,dy \text{ is absolutely continuous w.r.t. } t\,dy, \\[4pt]
+\infty, & \text{otherwise}.
\end{cases}
\tag{5}
$$
Lasso estimator. Conditioned on ( 𝑥 𝑖 ) 1 ≤ 𝑖 ≤ 𝑛 , the MLE approach suggests estimating 𝑠 0 by the conditional PDF 𝑠 𝜓 that minimizes the NLL: $-\frac{1}{n} \sum_{i=1}^{n} \ln\left(s_{\psi}(y_i|x_i)\right)$. However, for high-dimensional data, we need to regularize the MLE in order to obtain reasonable estimates. Here, we first consider the 𝑙 1 -PMLE (the Lasso estimator):

$$
\widehat{s}^{\mathrm{Lasso}}_{\lambda} := \operatorname*{arg\,min}_{s_{\psi} \in S} \left\{ -\frac{1}{n} \sum_{i=1}^{n} \ln\left(s_{\psi}(y_i|x_i)\right) + \lambda\left(\lVert \gamma \rVert_1 + \lVert \mathrm{vec}(\beta) \rVert_1\right) \right\},
\tag{6}
$$

where 𝜆 ≥ 0 is a regularization parameter to be tuned, $\lVert \gamma \rVert_1 := \sum_{k=1}^{K} \sum_{j=1}^{p} |\gamma_{kj}|$, and $\lVert \mathrm{vec}(\beta) \rVert_1 := \sum_{k=1}^{K} \sum_{j=1}^{p} \sum_{z=1}^{q} |[\beta_k]_{z,j}|$. It is worth noting that these two entry-wise 𝑙 1 -norms do not include the scalar biases 𝛾 𝑘 0 or the bias vectors 𝛽 𝑘 0 . These Lasso regularization terms encourage sparsity in both the gating and the expert parameters.
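The penalized objective in (6) can be sketched numerically as follows. This is a minimal Python illustration for a scalar response (the helper names are our own assumptions, not the authors' implementation); note that the biases stay out of the penalty, exactly as stated above.

```python
import numpy as np

def gate_weights(x, gamma0, gamma):
    # Softmax gating probabilities for one covariate vector x.
    logits = gamma0 + gamma @ x
    w = np.exp(logits - logits.max())
    return w / w.sum()

def lasso_objective(X, Y, gamma0, gamma, beta0, beta, sigma2, lam):
    """Mean negative log-likelihood plus the l1 penalty of (6)."""
    nll = 0.0
    for x, y in zip(X, Y):
        w = gate_weights(x, gamma0, gamma)
        means = beta0 + beta @ x
        dens = np.exp(-0.5 * (y - means) ** 2 / sigma2) / np.sqrt(2 * np.pi * sigma2)
        nll -= np.log(w @ dens)
    # Penalize gating slopes gamma and expert slopes beta only;
    # the biases gamma_k0 and beta_k0 are left unpenalized.
    penalty = lam * (np.abs(gamma).sum() + np.abs(beta).sum())
    return nll / len(Y) + penalty
```

The objective is separable into a data-fit term and a penalty that is linear in 𝜆 , which is what makes the 𝑙 1 -ball reformulation used later in Section 3.3 possible.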
3 Main result
To simplify the statement of Theorem 3.1, given a constant 𝜅 ≥ 148 , we first define the following condition on 𝜆 , together with the constant 𝐶 1 𝑛 that appears in the upper risk bounds:

$$
\lambda \ge \kappa \frac{K}{\sqrt{n}} C_{0n}, \qquad C_{0n} := B_{0n}\left(q \ln n \sqrt{\ln(2p+1)} + 1\right),
\tag{7}
$$

$$
B_{0n} := \max\left(A_{\Sigma},\, 1 + K A_G\right)\left[1 + 2 q \sqrt{q}\, A_{\Sigma}\left(5 A_{\beta}^2 + 4 A_{\Sigma} \ln n\right)\right],
$$

$$
C_{1n} := 2 q A_{\gamma}\left(e^{q/2-1} \pi^{q/2} A_{\Sigma}^{q/2} + H_{s_0}\right) + B_{0n} C_{2n},
\tag{8}
$$

$$
H_{s_0} := \max\left\{0,\, \ln\left[(4\pi)^{-q/2} A_{\Sigma}^{q/2}\right]\right\}, \qquad
C_{2n} := 302\, q K \left[1 + \left(A_{\gamma} + q A_{\beta} + q \sqrt{q}\, a_{\Sigma}\right)^2\right].
\tag{9}
$$
Remark. In (7), we have taken care to make dependencies explicit, not only on the tuning constant 𝜅 , but also on 𝑛 , 𝑝 , 𝑞 , and 𝐾 as well as on 𝐴 𝛽 , 𝐴 Σ , 𝐴 𝐺 —all of the quantities that constrain the parameters of the model. See Section 3.1 for a more detailed description. Note that both 𝐶 0 𝑛 and 𝐶 1 𝑛 depend on the sample size 𝑛 only via the ln 𝑛 term. Furthermore, 𝐻 𝑠 0 is related to the negative of the differential entropy of the true unknown conditional density 𝑠 0 ∈ 𝑆 ; see the supplementary material for more details. We state our main contribution: an 𝑙 1 -oracle inequality for the Lasso estimator for SGMoE models via Theorem 3.1.
Theorem 3.1 ( 𝑙 1 -oracle inequality).

Assume that we observe ( 𝑥 [ 𝑛 ] , 𝑦 [ 𝑛 ] ) ∈ ( [ 0 , 1 ] 𝑝 × ℝ 𝑞 ) 𝑛 , coming from an unknown conditional PDF 𝑠 0 ≡ 𝑠 𝜓 0 ∈ 𝑆 , defined in (3). Given 𝐶 1 𝑛 in (8), if 𝜆 satisfies (7), the Lasso estimator 𝑠 ^ 𝜆 Lasso , defined in (6), satisfies the 𝑙 1 -oracle inequality:

$$
\mathbb{E}\left[\mathrm{KL}_n\left(s_0,\, \widehat{s}^{\mathrm{Lasso}}_{\lambda}\right)\right] \le \frac{\kappa + 1}{\kappa} \inf_{s_{\psi} \in S}\left[\mathrm{KL}_n(s_0, s_{\psi}) + \lambda\left(\lVert \gamma \rVert_1 + \lVert \mathrm{vec}(\beta) \rVert_1\right)\right] + \lambda + \frac{K}{\sqrt{n}} C_{1n}.
\tag{10}
$$

3.1 Discussion and perspectives regarding our oracle inequality
The oracle model and convergence rate for the Lasso estimator. Theorem 3.1 characterizes the performance of Lasso estimators as 𝑙 1 -PMLEs for SGMoE models. If the regularization parameter 𝜆 is properly chosen, the solution to the 𝑙 1 -penalized empirical risk minimization problem behaves in a manner comparable to the deterministic Lasso (the so-called oracle), which is the solution to the 𝑙 1 -penalized true risk minimization problem, up to an error term of order 𝜆 . Note that the best model, denoted by 𝑠 𝜓 ∗ , is defined as the one with the smallest 𝑙 1 -penalized risk:

$$
\inf_{s_{\psi} \in S}\left[\mathrm{KL}_n(s_0, s_{\psi}) + \lambda\left(\lVert \gamma \rVert_1 + \lVert \mathrm{vec}(\beta) \rVert_1\right)\right].
\tag{11}
$$

However, since we do not know the true density 𝑠 0 , we cannot select this best model, which we call the oracle model. In particular, by definition, the oracle is the model in the collection that minimizes the 𝑙 1 -penalized risk in (11), and it is generally unknown. From the oracle inequality of Theorem 3.1, we conjecture that by constructing a suitable approximation theory on a good space, we can control this 𝑙 1 -penalized true risk to obtain the parametric convergence rate of 𝑛 − 1 / 2 for the Lasso estimator. A related work in this direction is Massart and Meynet (2012), who established convergence rates for the selected Lasso estimator of Massart and Meynet (2011) for a wide range of function classes described by the interpolation spaces of Barron et al. (2008). Furthermore, to the best of our knowledge, Theorem 2.8 of Maugis-Rabusseau and Michel (2013) is the only result in the literature that investigates the minimax estimator for Gaussian mixture models, but for model selection instead of the Lasso. We leave the non-trivial task of obtaining Lasso extensions of this result to future work.
Implementable Lasso estimator and data-driven regularization parameter 𝜆 . Note that Theorem 3.1 ensures that there exists a sufficiently large 𝜆 for which the estimator has good properties, but it does not give an explicit value for 𝜆 . However, we at least give a lower bound on the value of 𝜆 via the bound 𝜆 ≥ 𝜅 𝐶 ( 𝑝 , 𝑞 , 𝑛 , 𝐾 ) , where 𝜅 ≥ 148 , although this value is obviously conservative. Moreover, it is important to note that the Lasso estimator that appears in Theorem 3.1 has already been implemented in practice. Indeed, when the 𝑙 2 -penalties are given zero weight in Khalili (2010) and Chamroukhi and Huynh (2018, 2019), their penalty functions, as well as a recent result from Huynh and Chamroukhi (2019) for generalized linear expert models, belong to our framework, and the 𝑙 1 -oracle inequality from Theorem 3.1 provides further theoretical insight for these Lasso estimators. In particular, possible solutions for calibrating the tuning parameter 𝜆 of the penalty from the data are the BIC (Schwarz, 1978), used in Chamroukhi and Huynh (2018, 2019), and generalized cross-validation (Stone, 1974), utilized in Khalili (2010); Khalili and Chen (2007). Given the available literature, such computational strategies and numerical simulations for real data sets will not be discussed further here.
Dependency on 𝑝 , 𝑞 , 𝑛 , and 𝐾 in the lower bound of 𝜆 . Note that we recover the same dependence of the form √ ln ( 2 𝑝 + 1 ) as for the homogeneous linear regression in Stadler et al. (2010), and of the form √ ln ( 2 𝑝 + 1 ) ( ln 𝑛 ) 2 / √ 𝑛 for the mixture of regressions models in Meynet (2013). On the contrary, the dependence on 𝑞 for the mixture of multivariate Gaussian regression models in Devijver (2015) has the form 𝑞 2 + 𝑞 , while here we get the form 𝑞 2 √ 𝑞 . The main reason is that the class 𝑆 of SGMoE models is larger, and we use a different technique to evaluate the upper bound of the uniform norm of the gradient for each element of 𝑆 . In the lower bound of 𝜆 in (7), we can potentially get the factor dependence 𝐾 2 instead of 𝐾 , as in Meynet (2013) and Devijver (2015). This can be explained by the fact that we used a different technique to handle the more complex model when dealing with the upper bound on the uniform norm of the gradient of ln 𝑠 𝜓 , for 𝑠 𝜓 ∈ 𝑆 . We refer to Meynet (2013, Remark 5.8) for some data sets where the dependence on 𝐾 can be reduced to the order of √ 𝐾 for mixture of Gaussian regression models. The determination of optimal rates for such problems is still open. Furthermore, the dependence on 𝑛 for the homogeneous linear regression in Stadler et al. (2010) is of the order of 𝑛 − 1 / 2 , while here we have an additional ( ln 𝑛 ) 2 factor. In fact, the same situation can be found in the 𝑙 1 -oracle inequalities of Meynet (2013) and Devijver (2015). As explained in Meynet (2013), the use of the nonlinear KL information leads to a scenario where the linearity arguments developed in Stadler et al. (2010) for the quadratic loss function cannot be exploited. Instead, we need to use entropy arguments for our model, which leads to an additional ( ln 𝑛 ) 2 factor.
Multiplicative upper bound constant. It is worth noting that the constant 1 + 𝜅 − 1 appearing in the upper bounds of Theorem 3.1 cannot be reduced to 1 , which is the same situation as for the constant 𝐶 1 from Montuelle and Le Pennec (2014, Theorem 1). Note that this problem also occurred in the 𝑙 1 -oracle inequalities of Meynet (2013) and Devijver (2015). Deriving an oracle inequality in which 1 + 𝜅 − 1 can be replaced by 1 for the KL loss is still an open problem.
Model misspecification. In Theorem 3.1, when 𝑠 0 ∉ 𝑆 , by letting 𝑛 → ∞ , due to the large bias from the first upper bound term, the total error upper bound of (10) converges to

$$
\left(1 + \frac{1}{\kappa}\right) \inf_{s_{\psi} \in S}\left[\lim_{n \to \infty} \mathrm{KL}_n(s_0, s_{\psi}) + \lambda\left(\lVert \gamma \rVert_1 + \lVert \mathrm{vec}(\beta) \rVert_1\right)\right] + \lambda,
$$

which may be large. The same conclusion holds for Theorem 3.2 when 𝑠 0 ∉ ∪ 𝑚 ∈ ℕ ⋆ 𝑆 𝑚 . Nevertheless, as we consider SGMoE models, some recent universal approximation results, see, e.g., Nguyen et al. (2016, 2019, 2020, 2021, 2022a), imply that if we take a sufficiently large number of mixture components 𝐾 , that is, a sufficiently large class 𝑆 , we can approximate a broad class of conditional PDFs, and thus the term on the right-hand side is small for 𝐾 sufficiently large. This improves the error bound even when 𝑠 0 ∉ 𝑆 .
3.2 Comparison with the state-of-the-art
Standard asymptotic results with variable selection. Theorem 3.1 complements the standard asymptotic results for high-dimensional SGMoE models via feature selection using the Lasso, as well as the more general PMLE via the SCAD penalty function of Khalili (2010). By extending the theoretical developments for mixture of regressions models in Khalili and Chen (2007), standard asymptotic theorems for SGMoE models were established in Khalili (2010). There, under several strict regularity conditions on the true joint density function and the choice of tuning parameter, the PMLE from Khalili (2010), using the SCAD penalty function from Fan and Li (2001) instead of the Lasso, is proved to be both consistent in feature selection and √ 𝑛 -consistent. On the contrary, the only assumption used to obtain the 𝑛 − 1 / 2 convergence rate of the error upper bound in (10) from Theorem 3.1 is boundedness of the parameter space. In fact, this boundedness condition is also required by Khalili (2010). Furthermore, we work directly on conditional PDFs with fixed covariates rather than focusing on joint PDFs as in Khalili (2010); Khalili and Chen (2007). In future work, we shall investigate whether the proof technique used in this paper can be adapted to the problem of estimating joint PDFs when the predictors are random variables.
Boundedness assumptions on the parameter space. It is worth noting that our boundedness assumptions also appeared in Stadler et al. (2010); Meynet (2013); Devijver (2015). They are quite natural when working with MLE (Maugis and Michel, 2011), at least when considering the problem of the unboundedness of the likelihood at the boundaries of the parameter space (Redner and Walker, 1984; McLachlan and Peel, 2000), and to prevent the likelihood from diverging.
3.3 Proof of Theorem 3.1
Proof sketch of Theorem 3.1. At a high level, the main idea is to study the Lasso estimator of Theorem 3.1 as a solution of the 𝑙 1 -ball PMLE problem of Theorem 3.2, which is defined later in this section. The proof of Theorem 3.2 can be deduced from Propositions 3.3 and 3.4, which deal with the cases of small and large values of 𝑌 and are proved in the supplementary material.
𝑙 1 -ball PMLE. We need to define the 𝑙 1 -ball PMLE for the statement of Theorem 3.2. To this end, by restricting 𝑆 to a suitable 𝑙 1 -ball of the parameters 𝛾 , 𝛽 in the definition of 𝑆 , we define a collection of 𝑙 1 -ball models 𝑆 𝑚 , where 𝑚 ∈ ℕ ⋆ is the radius of the 𝑙 1 -ball, as follows:

$$
S_m := \left\{ s_{\psi} \in S \mid \lVert \gamma \rVert_1 + \lVert \mathrm{vec}(\beta) \rVert_1 \le m \right\}.
\tag{12}
$$
Then, for some 𝜂 𝑚 ≥ 0 , let 𝑠 ^ 𝑚 be an 𝜂 𝑚 -log-likelihood estimator (LLE) in 𝑆 𝑚 , defined by:

$$
-\frac{1}{n} \sum_{i=1}^{n} \ln\left(\widehat{s}_m(y_i|x_i)\right) \le \inf_{s_m \in S_m}\left(-\frac{1}{n} \sum_{i=1}^{n} \ln\left(s_m(y_i|x_i)\right)\right) + \eta_m.
\tag{13}
$$
As is usually the case, the LLE in each model is not by itself a suitable selection criterion: it underestimates the risk of the estimator, and the result is a choice of model that is too complex. In the context of the PMLE, by adding an appropriate penalty pen ( 𝑚 ) , one hopes to create a trade-off between good data fit and model complexity. Suppose that, for all 𝑚 ∈ ℕ ⋆ , the penalty function satisfies pen ( 𝑚 ) := 𝜆 𝑚 , where 𝜆 will be determined as in (7). Then, for some 𝜂 ≥ 0 , an 𝑙 1 -ball PMLE is defined as 𝑠 ^ 𝑚 ^ , where 𝑚 ^ satisfies

$$
-\frac{1}{n} \sum_{i=1}^{n} \ln\left(\widehat{s}_{\widehat{m}}(y_i|x_i)\right) + \mathrm{pen}(\widehat{m}) \le \inf_{m \in \mathbb{N}^{\star}}\left(-\frac{1}{n} \sum_{i=1}^{n} \ln\left(\widehat{s}_m(y_i|x_i)\right) + \mathrm{pen}(m)\right) + \eta.
\tag{14}
$$
Note that the error terms 𝜂 𝑚 and 𝜂 are necessary to avoid any existence problems, e.g., the infimum may not be reached. Roughly speaking, the Ekeland variational principle states that for any extended-valued lower semicontinuous function, which is bounded below, one can add a small perturbation to ensure the existence of the minimum, see e.g., Borwein and Zhu (2004). This framework is also used in Montuelle and Le Pennec (2014), and Nguyen et al. (2022b). Next, we state an 𝑙 1 -ball model selection via Theorem 3.2.
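The selection rule (14) is, in essence, a one-dimensional search over the 𝑙 1 -ball radius 𝑚 . The following Python sketch (our own illustration: the function and argument names are assumptions, and `nll_hat(m)` stands in for the empirical NLL of the 𝜂 𝑚 -LLE in 𝑆 𝑚 ) shows the criterion on a finite grid of radii:

```python
def select_radius(nll_hat, radii, lam):
    """Pick the l1-ball radius m minimizing nll_hat(m) + lam * m,
    a finite-grid stand-in for criterion (14) with pen(m) = lam * m."""
    return min(radii, key=lambda m: nll_hat(m) + lam * m)
```

The linear penalty pen ( 𝑚 ) = 𝜆 𝑚 counteracts the fact that the NLL can only decrease as the ball grows, so the minimizer balances fit against 𝑙 1 complexity.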
Theorem 3.2 ( 𝑙 1 -ball model selection).

Assume that ( 𝑥 [ 𝑛 ] , 𝑦 [ 𝑛 ] ) ∈ ( [ 0 , 1 ] 𝑝 × ℝ 𝑞 ) 𝑛 come from an unknown conditional PDF 𝑠 0 ≡ 𝑠 𝜓 0 ∈ 𝑆 , defined in (3). Given 𝐶 1 𝑛 in (8), if 𝜆 satisfies (7), the 𝑙 1 -ball PMLE 𝑠 ^ 𝑚 ^ , defined in (14), satisfies the oracle inequality:

$$
\mathbb{E}\left[\mathrm{KL}_n(s_0, \widehat{s}_{\widehat{m}})\right] \le \left(1 + \frac{1}{\kappa}\right) \inf_{m \in \mathbb{N}^{\star}}\left(\inf_{s_m \in S_m} \mathrm{KL}_n(s_0, s_m) + \mathrm{pen}(m) + \eta_m\right) + \eta + \frac{K}{\sqrt{n}} C_{1n}.
\tag{15}
$$
Proof of Theorem 3.1. Let 𝜆 > 0 and define 𝑚 ^ to be the smallest integer such that 𝑠 ^ 𝜆 Lasso belongs to 𝑆 𝑚 ^ , i.e., 𝑚 ^ := ⌈ ‖ 𝜓 ^ [ 1 , 2 ] ‖ 1 ⌉ ≤ ‖ 𝜓 ^ [ 1 , 2 ] ‖ 1 + 1 , where ‖ 𝜓 [ 1 , 2 ] ‖ 1 := ∥ 𝛾 ∥ 1 + ∥ vec ( 𝛽 ) ∥ 1 . Then, using the definition of 𝑚 ^ , (6), (12), and 𝑆 = ⋃ 𝑚 ∈ ℕ ⋆ 𝑆 𝑚 , we get

$$
\begin{aligned}
-\frac{1}{n} \sum_{i=1}^{n} \ln\left(\widehat{s}^{\mathrm{Lasso}}_{\lambda}(y_i|x_i)\right) + \lambda \widehat{m}
&\le -\frac{1}{n} \sum_{i=1}^{n} \ln\left(\widehat{s}^{\mathrm{Lasso}}_{\lambda}(y_i|x_i)\right) + \lambda\left(\lVert \widehat{\psi}_{[1,2]} \rVert_1 + 1\right) \\
&= \inf_{s_{\psi} \in S}\left(-\frac{1}{n} \sum_{i=1}^{n} \ln\left(s_{\psi}(y_i|x_i)\right) + \lambda \lVert \psi_{[1,2]} \rVert_1\right) + \lambda \\
&= \inf_{m \in \mathbb{N}^{\star}}\left(\inf_{s_{\psi} \in S_m}\left(-\frac{1}{n} \sum_{i=1}^{n} \ln\left(s_{\psi}(y_i|x_i)\right) + \lambda \lVert \psi_{[1,2]} \rVert_1\right)\right) + \lambda \\
&\le \inf_{m \in \mathbb{N}^{\star}}\left(\inf_{s_m \in S_m}\left(-\frac{1}{n} \sum_{i=1}^{n} \ln\left(s_m(y_i|x_i)\right) + \lambda m\right)\right) + \lambda,
\end{aligned}
$$

which implies

$$
-\frac{1}{n} \sum_{i=1}^{n} \ln\left(\widehat{s}^{\mathrm{Lasso}}_{\lambda}(y_i|x_i)\right) + \mathrm{pen}(\widehat{m}) \le \inf_{m \in \mathbb{N}^{\star}}\left(-\frac{1}{n} \sum_{i=1}^{n} \ln\left(\widehat{s}_m(y_i|x_i)\right) + \mathrm{pen}(m)\right) + \eta,
$$

with pen ( 𝑚 ) := 𝜆 𝑚 , 𝜂 := 𝜆 , and 𝑠 ^ 𝑚 an 𝜂 𝑚 -log-likelihood minimizer in 𝑆 𝑚 , with 𝜂 𝑚 ≥ 0 defined by (13). Thus, 𝑠 ^ 𝜆 Lasso satisfies (14) with 𝑠 ^ 𝜆 Lasso ≡ 𝑠 ^ 𝑚 ^ .
Then, Theorem 3.2 implies that if

$$
\lambda \ge \kappa \frac{K B_{0n}}{\sqrt{n}}\left(q \ln n \sqrt{\ln(2p+1)} + 1\right), \qquad
B_{0n} := \max\left(A_{\Sigma},\, 1 + K A_G\right)\left[1 + 2 q \sqrt{q}\, A_{\Sigma}\left(5 A_{\beta}^2 + 4 A_{\Sigma} \ln n\right)\right],
$$

for some absolute constant 𝜅 ≥ 148 , then Theorem 3.1 holds, as required.
Proof of Theorem 3.2. Given any 𝑀 𝑛 > 0 , this can be done by introducing an event 𝒯 and a class of functions 𝐹 𝑚 as follows:

$$
\mathcal{T} := \left\{ \max_{i \in [n]} \lVert Y_i \rVert_{\infty} := \max_{i \in [n]} \max_{z \in [q]} \left|[Y_i]_z\right| \le M_n \right\}, \qquad
\mathcal{T}^C := \left\{ \max_{i \in [n]} \lVert Y_i \rVert_{\infty} > M_n \right\},
$$

$$
F_m := \left\{ f_m := -\ln\left(\frac{s_m}{s_0}\right) = \ln(s_0) - \ln(s_m),\ s_m \in S_m \right\}.
$$
Conditional on { 𝑥 𝑖 } 𝑖 ∈ [ 𝑛 ] , let 𝑌 [ 𝑛 ] ′ | 𝑥 [ 𝑛 ] ≡ ( 𝑌 𝑖 ′ | 𝑥 𝑖 ) 𝑖 ∈ [ 𝑛 ] be IID random samples arising from the conditional PDFs 𝑠 0 ( ⋅ | 𝑥 𝑖 ) , 𝑖 ∈ [ 𝑛 ] ; they are independent copies of the sample 𝑌 [ 𝑛 ] | 𝑥 [ 𝑛 ] . By taking into account the definition of the average KL loss function from (4) and the properties of conditional expectation, we obtain
$$
\begin{aligned}
\mathrm{KL}_n(s_0, \widehat{s}_{\widehat{m}})
&= \mathbb{E}_{Y'_{[n]}|x_{[n]}}\left[\frac{1}{n} \sum_{i=1}^{n} \widehat{f}_{\widehat{m}}(Y_i|x_i) \,\middle|\, \mathcal{T}\right] \mathbb{I}_{\mathcal{T}} + \mathbb{E}_{Y'_{[n]}|x_{[n]}}\left[\frac{1}{n} \sum_{i=1}^{n} \widehat{f}_{\widehat{m}}(Y_i|x_i) \,\middle|\, \mathcal{T}^C\right] \mathbb{I}_{\mathcal{T}^C} \\
&\equiv \left(\mathrm{KL}_n(s_0, \widehat{s}_{\widehat{m}}) \,\middle|\, \mathcal{T}\right) \mathbb{I}_{\mathcal{T}} + \left(\mathrm{KL}_n(s_0, \widehat{s}_{\widehat{m}}) \,\middle|\, \mathcal{T}^C\right) \mathbb{I}_{\mathcal{T}^C}.
\end{aligned}
\tag{16}
$$

From now on, when there is no confusion, the expectation of (16) is written as follows:

$$
\begin{aligned}
\mathbb{E}\left[\mathrm{KL}_n(s_0, \widehat{s}_{\widehat{m}})\right]
&= \mathbb{E}_{Y_{[n]}}\left[\left(\mathrm{KL}_n(s_0, \widehat{s}_{\widehat{m}}) \,\middle|\, \mathcal{T}\right) \mathbb{I}_{\mathcal{T}}\right] + \mathbb{E}_{Y_{[n]}}\left[\left(\mathrm{KL}_n(s_0, \widehat{s}_{\widehat{m}}) \,\middle|\, \mathcal{T}^C\right) \mathbb{I}_{\mathcal{T}^C}\right] \\
&\equiv \mathbb{E}\left[\mathrm{KL}_n(s_0, \widehat{s}_{\widehat{m}}) \mathbb{I}_{\mathcal{T}}\right] + \mathbb{E}\left[\mathrm{KL}_n(s_0, \widehat{s}_{\widehat{m}}) \mathbb{I}_{\mathcal{T}^C}\right].
\end{aligned}
\tag{17}
$$
Therefore, on the basis of the above remark (17), Theorem 3.2 is proved by obtaining an upper bound for each of the two terms 𝔼 [ KL 𝑛 ( 𝑠 0 , 𝑠 ^ 𝑚 ^ ) 𝕀 𝒯 ] and 𝔼 [ KL 𝑛 ( 𝑠 0 , 𝑠 ^ 𝑚 ^ ) 𝕀 𝒯 𝐶 ] , using Propositions 3.3 and 3.4, respectively. Given a constant 𝜅 ≥ 148 and some constant 𝑀 𝑛 > 0 , we need to define the following condition on 𝜆 :

$$
\lambda \ge \kappa \frac{K}{\sqrt{n}} C_{3n}, \qquad C_{3n} := B_n\left(q \ln n \sqrt{\ln(2p+1)} + 1\right),
\tag{18}
$$

$$
B_n := \max\left(A_{\Sigma},\, 1 + K A_G\right)\left[1 + q \sqrt{q}\, (M_n + A_{\beta})^2 A_{\Sigma}\right].
\tag{19}
$$

Proposition 3.3 (Small values of 𝑌 ).
Assume that ( 𝑥 [ 𝑛 ] , 𝑦 [ 𝑛 ] ) ∈ ( [ 0 , 1 ] 𝑝 × ℝ 𝑞 ) 𝑛 comes from an unknown conditional PDF 𝑠 0 ≡ 𝑠 𝜓 0 ∈ 𝑆 defined in (3). Given 𝐶 2 𝑛 and 𝐵 𝑛 defined as in (9) and (19), respectively, if 𝜆 satisfies (18), the 𝑙 1 -ball PMLE 𝑠 ^ 𝑚 ^ , defined in (14), satisfies:

$$
\mathbb{E}\left[\mathrm{KL}_n(s_0, \widehat{s}_{\widehat{m}}) \mathbb{I}_{\mathcal{T}}\right] \le \left(1 + \kappa^{-1}\right) \inf_{m \in \mathbb{N}^{\star}}\left(\inf_{s_m \in S_m} \mathrm{KL}_n(s_0, s_m) + \mathrm{pen}(m) + \eta_m\right) + \eta + \frac{K}{\sqrt{n}} B_n C_{2n}.
$$
Proposition 3.4 (Large values of 𝑌 ).

Consider 𝑠 0 , 𝒯 , and 𝑠 ^ 𝑚 ^ as defined in Proposition 3.3, and 𝐻 𝑠 0 as defined in (9). Then,

$$
\mathbb{E}\left[\mathrm{KL}_n(s_0, \widehat{s}_{\widehat{m}}) \mathbb{I}_{\mathcal{T}^C}\right] \le \left(e^{q/2-1} \pi^{q/2} A_{\Sigma}^{q/2} + H_{s_0}\right) \frac{2K}{n}\, q A_{\gamma}\, \exp\left(-\frac{M_n^2 - 2 M_n A_{\beta}}{4 A_{\Sigma}}\right).
$$
Proposition 3.3 constitutes our main technical contribution. Via Lemma 3.5, the key idea in proving Proposition 3.3 is to control the following deviation on the event 𝒯 :
$$
\sup_{f_m \in F_m} \left|\nu_n(-f_m)\right| \equiv \sup_{f_m \in F_m} \left|\frac{1}{n} \sum_{i=1}^{n} \left\{ f_m(Y_i|x_i) - \mathbb{E}\left[f_m(Y_i|x_i)\right] \right\}\right|,
\tag{20}
$$

$$
F_m := \left\{ f_m := -\ln\left(\frac{s_m}{s_0}\right) = \ln(s_0) - \ln(s_m),\ s_m \in S_m \right\}.
\tag{21}
$$

Lemma 3.5 (Control of deviation).
For each 𝑚 ′ ∈ ℕ ⋆ , let

$$
\Delta_{m'} := m' \sqrt{\ln(2p+1)}\, \ln n + 2K\left(A_{\gamma} + q A_{\beta} + q \sqrt{q}\, a_{\Sigma}\right).
\tag{22}
$$

Then, on the event 𝒯 , for all 𝑚 ′ ∈ ℕ ⋆ and for all 𝑡 > 0 , with probability greater than 1 − 𝑒 − 𝑡 ,

$$
\sup_{f_{m'} \in F_{m'}} \left|\nu_n(-f_{m'})\right| \le \frac{4 K B_n}{\sqrt{n}}\left[37\, q\, \Delta_{m'} + 2\left(A_{\gamma} + q A_{\beta} + q \sqrt{q}\, a_{\Sigma}\right) \sqrt{t}\right].
\tag{23}
$$
The proof of Lemma 3.5 appears in the supplementary material and follows the arguments developed in the proof of Massart (2007, Theorem 7.11). The proof of Proposition 3.3 is in the spirit of Vapnik’s method of structural risk minimization, first established in Vapnik (1982) and briefly summarized in Section 8.2 of Massart (2007). In particular, we use concentration inequalities combined with symmetrization arguments to obtain an upper bound on the empirical process in expectation from (20). Our technique combines Vapnik’s structural risk minimization paradigm (e.g., Vapnik, 1982) and model selection theory for conditional density estimation (e.g., Cohen and Le Pennec, 2011), which extend the density estimation results of Massart (2007).
4 Numerical experiments
In this section, we empirically validate the convergence rate of the error upper bound in (10) from Theorem 3.1 for our SGMoE models. For simplicity, we only perform a simulation study to illustrate the convergence rates when 𝒳 ⊂ ℝ 𝑝 with 𝑝 = 6 , and 𝒴 ⊂ ℝ 𝑞 with 𝑞 = 1 . All the following simulations were performed in R 4.3.2 on a standard Unix machine. We construct simulated data sets by sampling from a true conditional density 𝑠 0 belonging to the class of SGMoE models 𝑆 :

$$
s_0(y|x) = \sum_{k=1}^{2} \frac{\exp\left(\gamma_{0k0} + \gamma_{0k}^{\top} x\right)}{\sum_{l=1}^{2} \exp\left(\gamma_{0l0} + \gamma_{0l}^{\top} x\right)}\, \mathcal{N}\left(y;\, \beta_{0k0} + \beta_{0k} x,\, \Sigma_{0k}\right).
$$
Here, the true parameters of the true SGMoE model are given by:
(γ_{010}, γ_{01})^⊤ = (1, 2, 0, 0, −1, 0, 0)^⊤; Σ_{01} = Σ_{02} = 1;
(β_{010}, β_{01})^⊤ = (0, 0, 1.5, 0, 0, 0, 1)^⊤; (β_{020}, β_{02})^⊤ = (0, 1, −1.5, 0, 0, 2, 0)^⊤.
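As a reading aid, the generative process above can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' R implementation: it assumes the second gating vector is fixed to zero (a standard identifiability convention not stated in this excerpt) and draws the covariates uniformly on [0,1]^6, since the covariate design is not specified here.

```python
import math
import random

# True parameters from the simulation setup (K = 2, p = 6, q = 1).
GAMMA1 = [1.0, 2.0, 0.0, 0.0, -1.0, 0.0, 0.0]   # (gamma_010, gamma_01)
GAMMA2 = [0.0] * 7                              # assumed zero for identifiability
BETA1 = [0.0, 0.0, 1.5, 0.0, 0.0, 0.0, 1.0]     # (beta_010, beta_01)
BETA2 = [0.0, 1.0, -1.5, 0.0, 0.0, 2.0, 0.0]    # (beta_020, beta_02)
SIGMA = 1.0                                     # Sigma_01 = Sigma_02 = 1

def affine(coefs, x):
    # coefs[0] is the intercept, coefs[1:] the slope against x in R^6
    return coefs[0] + sum(c * xi for c, xi in zip(coefs[1:], x))

def gate_probs(x):
    # Softmax gating weights g_k(x; gamma), k = 1, 2
    w = [affine(GAMMA1, x), affine(GAMMA2, x)]
    m = max(w)
    e = [math.exp(v - m) for v in w]
    s = sum(e)
    return [v / s for v in e]

def sample_y(x, rng):
    # Draw the expert index from the gate, then y from the Gaussian expert
    g = gate_probs(x)
    k = 0 if rng.random() < g[0] else 1
    mean = affine(BETA1 if k == 0 else BETA2, x)
    return rng.gauss(mean, math.sqrt(SIGMA))

rng = random.Random(0)
x = [rng.random() for _ in range(6)]  # assumed covariate law, for illustration only
y = sample_y(x, rng)
```

Stacking many such draws (x_i, y_i) reproduces a data set of the kind used in the experiments.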
Here, we implement the 𝑙₁-PMLE using the EM algorithm for the SGMoE model, with a coordinate ascent algorithm for updating the gating network, following the strategy of Chamroukhi and Huynh (2018, 2019).
We want to empirically validate the convergence rate of the error upper bound in terms of the KL divergence, which cannot be calculated exactly for Gaussian mixtures. Thus, we assess the divergence through a Monte Carlo simulation, given that we know the true density. It is important to mention that the variability of this randomized approximation has been shown to be minimal in practice, a fact corroborated by the numerical experiments of Nguyen et al. (2022b) and Montuelle and Le Pennec (2014). Specifically, we calculate the Monte Carlo approximation of the average KL divergence KL_n(s_0, ŝ_λ^{Lasso}) as follows:
(1/n) ∑_{i=1}^{n} KL( s_0(·|x_i), ŝ_λ^{Lasso}(·|x_i) ) ≈ (1/(n n_y)) ∑_{i=1}^{n} ∑_{j=1}^{n_y} ln( s_0(y_{ij}|x_i) / ŝ_λ^{Lasso}(y_{ij}|x_i) ).
Here, (y_{ij})_{j ∈ [n_y]} are drawn from s_0(·|x_i). Then 𝔼[KL_n(s_0, ŝ_λ^{Lasso})] is approximated again by averaging over n_t Monte Carlo trials. Therefore, the simulated data used for the approximation can be written as (x_i, y_{ij})_t with i ∈ [n], j ∈ [n_y], t ∈ [n_t]. Figure 1 illustrates that the error decreases with order C₁√(K/n), as predicted theoretically in Theorem 3.1, as the sample size n increases when applying the penalty based on our criterion.
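For intuition, the inner Monte Carlo average above is easy to reproduce. The sketch below is illustrative only (the function names are ours, not from the paper's code) and checks the estimator against the closed form KL(𝒩(0,1), 𝒩(1,1)) = 1/2.

```python
import math
import random

def kl_monte_carlo(log_s0, log_shat, sample_s0, n_y, rng):
    # KL(s0(.|x), shat(.|x)) ~= (1/n_y) * sum_j ln(s0(y_j|x)/shat(y_j|x)),
    # with the y_j drawn from s0(.|x)
    total = 0.0
    for _ in range(n_y):
        y = sample_s0(rng)
        total += log_s0(y) - log_shat(y)
    return total / n_y

def log_gauss(y, mu, sigma2):
    # Log-density of a univariate Gaussian N(mu, sigma2)
    return -0.5 * math.log(2 * math.pi * sigma2) - (y - mu) ** 2 / (2 * sigma2)

rng = random.Random(1)
# Sanity check against the closed form KL(N(0,1), N(1,1)) = 1/2
est = kl_monte_carlo(lambda y: log_gauss(y, 0.0, 1.0),
                     lambda y: log_gauss(y, 1.0, 1.0),
                     lambda r: r.gauss(0.0, 1.0), 100_000, rng)
```

Averaging such estimates over the design points x_i, and then over n_t trials, gives the reported approximation of 𝔼[KL_n].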
Figure 1: Average KL divergence between the true and selected densities based on the Lasso estimator, represented on a log-log scale, using 100 different choices of sample size n between 1000 and 32000, over n_t = 20 trials with n_y = 30. A free least-squares regression with confidence interval and a regression with fixed slope −1/2 were added to highlight the two different behaviors in each graph.
5 Conclusions
To the best of our knowledge, we are the first to establish an 𝑙 1 -oracle inequality for SGMoE models from a non-asymptotic perspective, under the mildest assumptions, namely the boundedness of the parameter space, which provides a lower bound for the regularization parameter of the Lasso, while ensuring non-asymptotic theoretical control of the KL loss of the estimator. We further conduct a simulation study to empirically confirm our theoretical results. We believe that our contribution assists in further popularizing MoE models by providing a theoretical basis for their application to heterogeneous high-dimensional data.
References Arlot and Celisse (2010) Sylvain Arlot and Alain Celisse.A survey of cross-validation procedures for model selection.Statistics Surveys, 4:40–79, January 2010. Barron et al. (1999) Andrew Barron, Lucien Birgé, and Pascal Massart.Risk bounds for model selection via penalization.Probability theory and related fields, 113:301–413, 1999. Barron et al. (2008) Andrew R. Barron, Albert Cohen, Wolfgang Dahmen, and Ronald A. DeVore.Approximation and learning by greedy algorithms.The Annals of Statistics, 36(1):64 – 94, 2008. Borwein and Zhu (2004) Jonathan M Borwein and Qiji J Zhu.Techniques of Variational Analysis.Springer New York, 2004. Chamroukhi and Huynh (2018) Faicel Chamroukhi and Bao Tuyen Huynh.Regularized maximum-likelihood estimation of mixture-of-experts for regression and clustering.In 2018 International Joint Conference on Neural Networks (IJCNN), pages 1–8, 2018. Chamroukhi and Huynh (2019) Faicel Chamroukhi and Bao Tuyen Huynh.Regularized maximum likelihood estimation and feature selection in mixtures-of-experts models.Journal de la Société Française de Statistique, 160(1):57–85, 2019. Cohen and Le Pennec (2011) S X Cohen and Erwan Le Pennec.Conditional density estimation by penalized likelihood model selection and applications.Technical report, INRIA, 2011. Cover (1999) Thomas M Cover.Elements of information theory.John Wiley & Sons, 1999. Devijver (2015) Emilie Devijver.An 𝑙 1 -oracle inequality for the Lasso in multivariate finite mixture of multivariate Gaussian regression models.ESAIM: PS, 19:649–670, 2015. Duistermaat and Kolk (2004) Johannes Jisse Duistermaat and Johan AC Kolk.Multidimensional real analysis I: differentiation, volume 86.Cambridge University Press, 2004. Fan and Li (2001) Jianqing Fan and Runze Li.Variable selection via nonconcave penalized likelihood and its oracle properties.Journal of the American statistical Association, 96(456):1348–1360, 2001. 
Genovese and Wasserman (2000) Christopher R Genovese and Larry Wasserman.Rates of convergence for the Gaussian mixture sieve.The Annals of Statistics, 28(4):1105–1127, aug 2000. Golub and Van Loan (2012) Gene H Golub and Charles F Van Loan.Matrix computations, volume 3.JHU press, 2012. Ho and Nguyen (2016) Nhat Ho and XuanLong Nguyen.Convergence rates of parameter estimation for some weakly identifiable finite mixtures.The Annals of Statistics, 44(6):2726 – 2755, 2016. Ho et al. (2022) Nhat Ho, Chiao-Yu Yang, and Michael I. Jordan.Convergence Rates for Gaussian Mixtures of Experts.Journal of Machine Learning Research, 23(323):1–81, 2022. Horn and Johnson (2012) Roger A Horn and Charles R Johnson.Matrix analysis.Cambridge University Press, 2012. Huynh and Chamroukhi (2019) Bao Tuyen Huynh and Faicel Chamroukhi.Estimation and feature selection in mixtures of generalized linear experts models.arXiv preprint arXiv:1907.06994, 2019. Jacobs et al. (1991) Robert A Jacobs, Michael I Jordan, Steven J Nowlan, and Geoffrey E Hinton.Adaptive mixtures of local experts.Neural computation, 3(1):79–87, 1991. Jensen (1906) J L W V Jensen.Sur les fonctions convexes et les inégalités entre les valeurs moyennes.Acta Mathematica, 30(1):175–193, 1906. Jiang and Tanner (1999) Wenxin Jiang and Martin A Tanner.Hierarchical mixtures-of-experts for exponential family regression models: approximation and maximum likelihood estimation.Annals of Statistics, pages 987–1011, 1999. Khalili (2010) Abbas Khalili.New estimation and feature selection methods in mixture-of-experts models.Canadian Journal of Statistics, 38(4):519–539, 2010. Khalili and Chen (2007) Abbas Khalili and Jiahua Chen.Variable selection in finite mixture of regression models.Journal of the american Statistical association, 102(479):1025–1038, 2007. Lloyd-Jones et al. 
(2018) Luke R Lloyd-Jones, Hien D Nguyen, and Geoffrey J McLachlan.A globally convergent algorithm for lasso-penalized mixture of linear regression models.Computational Statistics & Data Analysis, 119:19–38, 2018. Magnus and Neudecker (2019) Jan R Magnus and Heinz Neudecker.Matrix differential calculus with applications in statistics and econometrics.John Wiley & Sons, 2019. Mansuripur (1987) Masud Mansuripur.Introduction to information theory.Prentice-Hall, Inc., 1987. Massart (2007) Pascal Massart.Concentration Inequalities and Model Selection: Ecole d’Eté de Probabilités de Saint-Flour XXXIII-2003.Springer, 2007. Massart and Meynet (2011) Pascal Massart and Caroline Meynet.The Lasso as an 𝑙 1 -ball model selection procedure.Electronic Journal of Statistics, 5:669 – 687, 2011. Massart and Meynet (2012) Pascal Massart and Caroline Meynet.Some Rates of Convergence for the Selected Lasso Estimator.In Algorithmic Learning Theory, pages 17–33, Berlin, Heidelberg, 2012.ISBN 978-3-642-34106-9. Maugis and Michel (2011) Cathy Maugis and Bertrand Michel.A non asymptotic penalized criterion for gaussian mixture model selection.ESAIM: Probability and Statistics, 15:41–68, 2011. Maugis-Rabusseau and Michel (2013) Cathy Maugis-Rabusseau and Bertrand Michel.Adaptive density estimation for clustering with Gaussian mixtures.ESAIM: Probability and Statistics, 17:698–724, 2013. McLachlan and Peel (2000) G J McLachlan and D Peel.Finite Mixture Models.John Wiley & Sons, 2000. Mendes and Jiang (2012) Eduardo F Mendes and Wenxin Jiang.On convergence rates of mixtures of polynomial experts.Neural computation, 24(11):3025–3051, 2012. Meynet (2013) C Meynet.An 𝑙 1 -oracle inequality for the Lasso in finite mixture Gaussian regression models.ESAIM: Probability and Statistics, 17:650–671, 2013. 
Montuelle and Le Pennec (2014) Lucie Montuelle and Erwan Le Pennec.Mixture of Gaussian regressions model with logistic weights, a penalized maximum likelihood approach.Electronic Journal of Statistics, 8(1):1661–1695, 2014. Nguyen and Chamroukhi (2018) Hien D Nguyen and Faicel Chamroukhi.Practical and theoretical aspects of mixture-of-experts modeling: An overview.Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 8(4):e1246, 2018. Nguyen et al. (2016) Hien D Nguyen, Luke R Lloyd-Jones, and Geoffrey J McLachlan.A universal approximation theorem for mixture-of-experts models.Neural computation, 28(12):2585–2593, 2016. Nguyen et al. (2019) Hien D Nguyen, Faicel Chamroukhi, and Florence Forbes.Approximation results regarding the multiple-output Gaussian gated mixture of linear experts model.Neurocomputing, 366:208–214, 2019.ISSN 0925-2312. Nguyen et al. (2021) Hien D Nguyen, TrungTin Nguyen, Faicel Chamroukhi, and Geoffrey John McLachlan.Approximations of conditional probability density functions in Lebesgue spaces via mixture of experts models.Journal of Statistical Distributions and Applications, 8(1):13, 2021. Nguyen et al. (2023) Huy Nguyen, TrungTin Nguyen, and Nhat Ho.Demystifying Softmax Gating Function in Gaussian Mixture of Experts.In Thirty-seventh Conference on Neural Information Processing Systems, 2023. Nguyen et al. (2024a) Huy Nguyen, Pedram Akbarian, TrungTin Nguyen, and Nhat Ho.A General Theory for Softmax Gating Multinomial Logistic Mixture of Experts.In Forty-first International Conference on Machine Learning, 2024a. Nguyen et al. (2024b) Huy Nguyen, TrungTin Nguyen, Khai Nguyen, and Nhat Ho.Towards Convergence Rates for Parameter Estimation in Gaussian-gated Mixture of Experts.In Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, volume 238, pages 2683–2691, May 2024b. Nguyen et al. (2020) TrungTin Nguyen, Hien D. Nguyen, Faicel Chamroukhi, and Geoffrey J. 
McLachlan.Approximation by finite mixtures of continuous density functions that vanish at infinity.Cogent Mathematics & Statistics, 7(1):1750861, January 2020. Nguyen et al. (2022a) TrungTin Nguyen, Faicel Chamroukhi, Hien D. Nguyen, and Geoffrey J. McLachlan.Approximation of probability density functions via location-scale finite mixtures in Lebesgue spaces.Communications in Statistics - Theory and Methods, pages 1–12, May 2022a. Nguyen et al. (2022b) TrungTin Nguyen, Hien Duy Nguyen, Faicel Chamroukhi, and Florence Forbes.A non-asymptotic approach for model selection via penalization in high-dimensional mixture of experts models.Electronic Journal of Statistics, 16(2):4742 – 4822, 2022b. Nguyen (2013) XuanLong Nguyen.Convergence of latent mixing measures in finite and infinite mixture models.The Annals of Statistics, 41(1):370–400, 2013. Norets (2010) Andriy Norets.Approximation of conditional densities by smooth mixtures of regressions.The Annals of Statistics, 38(3):1733 – 1766, 2010. Redner and Walker (1984) Richard A Redner and Homer F Walker.Mixture densities, maximum likelihood and the EM algorithm.SIAM review, 26(2):195–239, 1984. Schwarz (1978) Gideon Schwarz.Estimating the dimension of a model.The Annals of Statistics, 6(2):461–464, 1978. Stadler et al. (2010) N Stadler, P Buhlmann, and S van de Geer. 𝑙 1 -penalization for mixture regression models.TEST, 19:209–256, 2010. Stone (1974) M. Stone.Cross-Validatory Choice and Assessment of Statistical Predictions.Journal of the Royal Statistical Society: Series B (Methodological), 36(2):111–133, 1974. Tibshirani (1996) Robert Tibshirani.Regression shrinkage and selection via the Lasso.Journal of the Royal Statistical Society: Series B (Methodological), 58(1):267–288, 1996. Van Der Vaart and Wellner (1996) AW Van Der Vaart and JA Wellner.Weak Convergence and Empirical Processes: With Applications to Statistics Springer Series in Statistics, volume 58.Springer, 1996. 
Vapnik (1982) Vladimir Vapnik.Estimation of Dependences Based on Empirical Data (Springer Series in Statistics).Springer-Verlag, 1982. Wainwright (2019) Martin J Wainwright.High-dimensional statistics: A non-asymptotic viewpoint, volume 48.Cambridge University Press, 2019. Williams and Rasmussen (2006) Christopher K Williams and Carl Edward Rasmussen.Gaussian processes for machine learning, volume 2.MIT press Cambridge, MA, 2006. Yuksel et al. (2012) S E Yuksel, J N Wilson, and P D Gader.Twenty Years of Mixture of Experts.IEEE Transactions on Neural Networks and Learning Systems, 23(8):1177–1193, 2012.
Supplement to “Non-asymptotic oracle inequalities for the Lasso in high-dimensional mixture of experts” In this supplement, we provide proofs for Theorem 3.2 and Propositions 3.3 and 3.4 in Appendices A.1, A.2, and A.3, respectively. We then provide proofs for the remaining lemmas and further related technical results in Appendices B and C, respectively.
Appendix A Proof of main results
First, it is important to note that the negative differential entropy (see, e.g., Mansuripur (1987, Chapter 9)) of the true unknown conditional density s_0 ∈ S, defined in (3), is finite; see Lemma A.1, which is proved in Appendix B.3.
Lemma A.1.
There exists a constant H_{s_0} := max{0, ln C_{s_0}}, where C_{s_0} := (4π)^{−q/2} A_Σ^{q/2}, such that
max{ 0, sup_{x ∈ 𝒳} ∫_{ℝ^q} ln(s_0(y|x)) s_0(y|x) dy } ≤ H_{s_0} < ∞.
(24)
We then introduce some definitions and notations that we shall use in the proofs.
Additional notation
For any measurable function f : ℝ^q → ℝ, consider its empirical norm
∥f∥_n := √( (1/n) ∑_{i=1}^{n} f²(Y_i | x_i) ),
and its conditional expectation
𝔼_{Y|X=x}[f] := 𝔼[f(Y|X) | X = x] = ∫_{ℝ^q} f(y|x) s_0(y|x) dy.
Furthermore, we also define its empirical process
P_n(f) := (1/n) ∑_{i=1}^{n} f(Y_i | x_i),
(25)
with expectation
P(f) = (1/n) ∑_{i=1}^{n} 𝔼_{Y_i|X_i=x_i}[f(Y_i|X_i) | X_i = x_i] = (1/n) ∑_{i=1}^{n} ∫_{ℝ^q} f(y|x_i) s_0(y|x_i) dy,
(26)
and the recentered process
ν_n(f) := P_n(f) − P(f) = (1/n) ∑_{i=1}^{n} [ f(Y_i|x_i) − ∫_{ℝ^q} f(y|x_i) s_0(y|x_i) dy ].
(27)
For all m ∈ ℕ⋆, recall that we consider the model
S_m = { s_ψ ∈ S | ∥γ∥₁ + ∥vec(β)∥₁ ≤ m } ≡ { s_ψ ∈ S | ∥ψ_{[1,2]}∥₁ ≤ m },
F_m = { f_m = −ln(s_m/s_0) = ln(s_0) − ln(s_m), s_m ∈ S_m }.
We use the following basic property of the infimum: for every ε > 0, there exists x_ε ∈ A such that x_ε < inf A + ε. Let δ_KL > 0 and, for all m ∈ ℕ⋆, let η_m ≥ 0. It then holds that there exist two functions ŝ_m and s̄_m in S_m such that
P_n(−ln ŝ_m) ≤ inf_{s_m ∈ S_m} P_n(−ln s_m) + η_m, and
(28)
KL_n(s_0, s̄_m) ≤ inf_{s_m ∈ S_m} KL_n(s_0, s_m) + δ_KL.
(29)
Define
f̂_m := −ln(ŝ_m/s_0), and f̄_m := −ln(s̄_m/s_0).
(30)
Let η ≥ 0 and fix m ∈ ℕ⋆. Further, define
ℳ̂(m) = { m′ ∈ ℕ⋆ | P_n(−ln ŝ_{m′}) + pen(m′) ≤ P_n(−ln ŝ_m) + pen(m) + η }.
(31)
A.1 Proof of Theorem 3.2
Let M_n > 0 and κ ≥ 148. Assume that, for all m ∈ ℕ⋆, the penalty function satisfies pen(m) = λm, with
λ ≥ κ (K B_n/√n) ( √q ln n ln(2p+1) + 1 ).
(32)
We derive, from Propositions 3.3 and 3.4, that any penalized likelihood estimator ŝ_m̂ with m̂ satisfying
−(1/n) ∑_{i=1}^{n} ln(ŝ_m̂(y_i|x_i)) + pen(m̂) ≤ inf_{m ∈ ℕ⋆} ( −(1/n) ∑_{i=1}^{n} ln(ŝ_m(y_i|x_i)) + pen(m) ) + η,
for some 𝜂 ≥ 0 , yields
𝔼[KL_n(s_0, ŝ_m̂)] = 𝔼[KL_n(s_0, ŝ_m̂) 𝕀_𝒯] + 𝔼[KL_n(s_0, ŝ_m̂) 𝕀_{𝒯^c}]
≤ (1 + κ^{−1}) inf_{m ∈ ℕ⋆} ( inf_{s_m ∈ S_m} KL_n(s_0, s_m) + pen(m) + η_m )
+ 302 K^{3/2} √q (B_n/√n) ( 1 + (A_γ + √q A_β + q√q a_Σ)² ) + η
+ ( e^{q/2−1} π^{q/2} A_Σ^{q/2} + H_{s_0} ) √(2Knq A_γ) e^{−(M_n² − 2M_n A_β)/(4A_Σ)}.
(33)
To obtain inequality (15), it only remains to optimize the inequality (A.1), with respect 𝑀 𝑛 . Since the two terms depending on 𝑀 𝑛 , in (A.1), have opposite monotonicity with respect to 𝑀 𝑛 , we are looking for a value of 𝑀 𝑛 such that these two terms are the same order with respect to 𝑛 . Consider the positive solution 𝑀 𝑛
𝐴 𝛽 + 𝐴 𝛽 2 + 4 𝐴 Σ ln 𝑛 of the equation 𝑋 ( 𝑋 − 2 𝐴 𝛽 ) 4 𝐴 Σ − ln 𝑛
0 . Then, on the one hand,
𝑒 − 𝑀 𝑛 2 − 2 𝑀 𝑛 𝐴 𝛽 4 𝐴 Σ 𝑛
𝑒 − ln 𝑛 𝑛
1 𝑛 .
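This choice of M_n can be checked numerically. The sketch below simply verifies, for a few arbitrary illustrative values of A_β, A_Σ, and n, that the stated root is positive and solves the quadratic equation.

```python
import math

def residual(M, A_beta, A_sigma, n):
    # Left-hand side of X(X - 2*A_beta)/(4*A_sigma) - ln(n) = 0
    return M * (M - 2 * A_beta) / (4 * A_sigma) - math.log(n)

for A_beta, A_sigma, n in [(1.0, 2.0, 1000), (0.3, 0.5, 32000)]:
    # Claimed positive root: M_n = A_beta + sqrt(A_beta^2 + 4*A_sigma*ln(n))
    M_n = A_beta + math.sqrt(A_beta ** 2 + 4 * A_sigma * math.log(n))
    assert M_n > 0
    assert abs(residual(M_n, A_beta, A_sigma, n)) < 1e-9
```

With this root, the exponential term equals e^{−ln n} = 1/n, which is what balances the two terms in the bound.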
On the other hand, using the inequality (a+b)² ≤ 2(a²+b²), we have
B_n = max(A_Σ, 1 + K A_G) ( 1 + q√q (M_n + A_β)² A_Σ ) = max(A_Σ, 1 + K A_G) ( 1 + q√q A_Σ ( 2A_β + √(A_β² + 4A_Σ ln n) )² )
≤ max(A_Σ, 1 + K A_G) ( 1 + 2q√q A_Σ ( 5A_β² + 4A_Σ ln n ) ),
hence (A.1) implies (15).
A.2 Proof of Proposition 3.3
For every m′ ∈ ℳ̂(m), from (31), (30), and (28), we obtain
P_n(f̂_{m′}) + pen(m′) = P_n( ln(s_0) − ln(ŝ_{m′}) ) + pen(m′)   (using (30))
≤ P_n( ln(s_0) − ln(ŝ_m) ) + pen(m) + η   (using (31))
≤ P_n( ln(s_0) − ln(s̄_m) ) + η_m + pen(m) + η   (using (28) with s̄_m ∈ S_m and the linearity of P_n)
= P_n(f̄_m) + pen(m) + η_m + η   (using (30)).
By the definition of the recentered process ν_n(·) in (27), it holds that
P(f̂_{m′}) + pen(m′) ≤ P(f̄_m) + pen(m) + ν_n(f̄_m) − ν_n(f̂_{m′}) + η + η_m.
Taking into account (4) and (25), we obtain
KL_n(s_0, ŝ_{m′}) = (1/n) ∑_{i=1}^{n} ∫_{ℝ^q} ln( s_0(y|x_i) / ŝ_{m′}(y|x_i) ) s_0(y|x_i) dy = (1/n) ∑_{i=1}^{n} ∫_{ℝ^q} f̂_{m′}(y|x_i) s_0(y|x_i) dy   (using (30))
= P(f̂_{m′})   (using (26)).
Similarly, we also obtain KL_n(s_0, s̄_m) = P(f̄_m). Hence, (29) implies that
KL_n(s_0, ŝ_{m′}) + pen(m′) ≤ KL_n(s_0, s̄_m) + pen(m) + ν_n(f̄_m) − ν_n(f̂_{m′}) + η + η_m
≤ inf_{s_m ∈ S_m} KL_n(s_0, s_m) + pen(m) + ν_n(f̄_m) − ν_n(f̂_{m′}) + η_m + δ_KL + η.
(34)
All that remains is to control the deviation of −ν_n(f̂_{m′}) = ν_n(−f̂_{m′}). To handle the randomness of f̂_{m′}, we shall control the deviation of sup_{f_{m′} ∈ F_{m′}} ν_n(−f_{m′}), since f̂_{m′} ∈ F_{m′}. Such control is provided by Lemma 3.5. From (A.2) and (23), we derive that on the event 𝒯, for all m ∈ ℕ⋆ and t > 0, with probability larger than 1 − e^{−t},
KL_n(s_0, ŝ_{m′}) + pen(m′) ≤ inf_{s_m ∈ S_m} KL_n(s_0, s_m) + pen(m) + ν_n(f̄_m) − ν_n(f̂_{m′}) + η_m + δ_KL + η
≤ inf_{s_m ∈ S_m} KL_n(s_0, s_m) + pen(m) + ν_n(f̄_m) + η_m + δ_KL + η + (4K B_n/√n) [ 37√q Δ_{m′} + 2 (A_γ + √q A_β + q√q a_Σ) √t ]
≤ inf_{s_m ∈ S_m} KL_n(s_0, s_m) + pen(m) + ν_n(f̄_m) + η_m + δ_KL + η + (4K B_n/√n) [ 37√q Δ_{m′} + (1/2)(A_γ + √q A_β + q√q a_Σ)² + t ], if m′ ∈ ℳ̂(m).
(35)
Here, we use the fact that f̂_{m′} ∈ F_{m′}, and we get the last inequality using the fact that 2ab ≤ a² + b² with b = √t and a = (A_γ + √q A_β + q√q a_Σ)/2.
It remains to sum up the tail bounds (35) over all possible values of m ∈ ℕ⋆ and m′ ∈ ℳ̂(m). To get an inequality valid on a set of high probability, we need to choose the value of the parameter t adequately, depending on m ∈ ℕ⋆ and m′ ∈ ℳ̂(m). Let z > 0 and, for all m ∈ ℕ⋆ and m′ ∈ ℳ̂(m), apply (35) with t = z + m + m′. Then, on the event 𝒯, for all m ∈ ℕ⋆, with probability larger than 1 − e^{−(z+m+m′)},
KL_n(s_0, ŝ_{m′}) + pen(m′) ≤ inf_{s_m ∈ S_m} KL_n(s_0, s_m) + pen(m) + ν_n(f̄_m) + η_m + δ_KL + η + (4K B_n/√n) [ 37√q Δ_{m′} + (1/2)(A_γ + √q A_β + q√q a_Σ)² + (z + m + m′) ], if m′ ∈ ℳ̂(m).
(36)
Here, (36) is equivalent to
KL_n(s_0, ŝ_{m′}) − ν_n(f̄_m) ≤ inf_{s_m ∈ S_m} KL_n(s_0, s_m) + [ pen(m) + (4K B_n/√n) m ] + η_m + δ_KL + η + [ (4K B_n/√n)( 37√q Δ_{m′} + m′ ) − pen(m′) ] + (4K B_n/√n) [ (1/2)(A_γ + √q A_β + q√q a_Σ)² + z ].
(37)
Note that, with probability larger than 1 − e^{−z}, (36) holds simultaneously for all m ∈ ℕ⋆ and m′ ∈ ℳ̂(m). Indeed, by defining the event
∩_{(m,m′) ∈ ℕ⋆×ℳ̂(m)} Ω_{m,m′} = { w ∈ Ω such that the event in (36) holds },
it holds that, on the event 𝒯,
ℙ( ∩_{(m,m′) ∈ ℕ⋆×ℳ̂(m)} Ω_{m,m′} ) = 1 − ℙ( ∪_{(m,m′) ∈ ℕ⋆×ℳ̂(m)} Ω_{m,m′}^C )
≥ 1 − ∑_{(m,m′) ∈ ℕ⋆×ℳ̂(m)} ℙ( Ω_{m,m′}^C )
≥ 1 − ∑_{(m,m′) ∈ ℕ⋆×ℳ̂(m)} e^{−(z+m+m′)}
≥ 1 − ∑_{(m,m′) ∈ ℕ⋆×ℕ⋆} e^{−(z+m+m′)}
= 1 − e^{−z} ( ∑_{m ∈ ℕ⋆} e^{−m} )² ≥ 1 − e^{−z},
where we get the last inequality using the geometric series
∑_{m=1}^{∞} (e^{−1})^m = ∑_{m=0}^{∞} (e^{−1})^m − 1 = 1/(1 − e^{−1}) − 1 = e/(e−1) − 1 = 1/(e−1) < 1.
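The final geometric-series step is elementary and can be confirmed numerically; the truncation level below is arbitrary.

```python
import math

# Partial sum of sum_{m>=1} e^{-m}; the tail beyond m = 59 is below 1e-25
partial = sum(math.exp(-m) for m in range(1, 60))

# Closed form of the geometric series: 1/(e - 1), which is strictly below 1
closed = 1.0 / (math.e - 1.0)
```

Since the squared series stays below 1, the union bound indeed yields the probability 1 − e^{−z}.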
Taking into account (22), we get
KL_n(s_0, ŝ_{m′}) − ν_n(f̄_m) ≤ inf_{s_m ∈ S_m} KL_n(s_0, s_m) + [ pen(m) + (4K B_n/√n) m ] + η_m + δ_KL + η + [ (4K B_n/√n)( 37√q ln n ln(2p+1) + 1 ) m′ − pen(m′) ] + (4K B_n/√n) [ (1/2)(A_γ + √q A_β + q√q a_Σ)² + 74√q √K (A_γ + √q A_β + q√q a_Σ) + z ].
(38)
Now, let κ ≥ 1 and assume that pen(m) = λm for all m ∈ ℕ⋆, with
λ ≥ κ (4K B_n/√n) ( 37√q ln n ln(2p+1) + 1 ).
(39)
Then, (38) implies
KL_n(s_0, ŝ_{m′}) − ν_n(f̄_m) ≤ inf_{s_m ∈ S_m} KL_n(s_0, s_m) + [ λm + (4K B_n/√n) m ] + η_m + δ_KL + η + [ (4K B_n/√n)( 37√q ln n ln(2p+1) + 1 ) m′ − λm′ ] + (4K B_n/√n) [ (1/2)(A_γ + √q A_β + q√q a_Σ)² + 74√q √K (A_γ + √q A_β + q√q a_Σ) + z ]
≤ inf_{s_m ∈ S_m} KL_n(s_0, s_m) + [ pen(m) + (4K B_n/√n) m ] + η_m + δ_KL + η + [ λκ^{−1} m′ − λm′ ] + (4K B_n/√n) [ (1/2)(A_γ + √q A_β + q√q a_Σ)² + 74√q √K (A_γ + √q A_β + q√q a_Σ) + z ]
≤ inf_{s_m ∈ S_m} KL_n(s_0, s_m) + (1 + κ^{−1}) pen(m) + η_m + δ_KL + η + (4K B_n/√n) [ (1/2)(A_γ + √q A_β + q√q a_Σ)² + 74√q √K (A_γ + √q A_β + q√q a_Σ) + z ],
where we used that (4K B_n/√n)(37√q ln n ln(2p+1) + 1) ≤ λκ^{−1}, that λκ^{−1}m′ − λm′ ≤ 0 since κ ≥ 1, and that (4K B_n/√n) m ≤ κ^{−1} pen(m).
Next, using the inequality 2ab ≤ β^{−1}a² + βb² with a = √K, b = √K (A_γ + √q A_β + q√q a_Σ), and β = √K, and the fact that K ≤ K^{3/2} for all K ∈ ℕ⋆, it follows that
KL_n(s_0, ŝ_{m′}) − ν_n(f̄_m) ≤ inf_{s_m ∈ S_m} KL_n(s_0, s_m) + (1 + κ^{−1}) pen(m) + η_m + δ_KL + η + (4B_n/√n) [ (√q K^{3/2}/2)(A_γ + √q A_β + q√q a_Σ)² + 74√q √K √K (A_γ + √q A_β + q√q a_Σ) + Kz ]
(the middle term in the bracket being 37√q × 2ab)
≤ inf_{s_m ∈ S_m} KL_n(s_0, s_m) + (1 + κ^{−1}) pen(m) + η_m + δ_KL + η + (4B_n/√n) [ 37√q K^{1/2} + (75√q K^{3/2}/2)(A_γ + √q A_β + q√q a_Σ)² + Kz ].
(40)
By (14) and (31), m̂ belongs to ℳ̂(m) for all m ∈ ℕ⋆, so we deduce from (40) that, on the event 𝒯, for all z > 0, with probability greater than 1 − e^{−z},
KL_n(s_0, ŝ_m̂) − ν_n(f̄_m) ≤ inf_{m ∈ ℕ⋆} ( inf_{s_m ∈ S_m} KL_n(s_0, s_m) + (1 + κ^{−1}) pen(m) + η_m ) + η + δ_KL + (4B_n/√n) [ 37√q K^{1/2} + (75√q K^{3/2}/2)(A_γ + √q A_β + q√q a_Σ)² + Kz ].
(41)
Note that, for any non-negative random variable Z with density f_Z and any a > 0, 𝔼[Z] = a ∫_{z ≥ 0} ℙ(Z > az) dz. Indeed, letting t = az, so that dt = a dz, we have
a ∫_{z ≥ 0} ℙ(Z > az) dz = a ∫_0^∞ ∫_{az}^∞ f_Z(u) du dz = ∫_0^∞ ∫_t^∞ f_Z(u) du dt = ∫_0^∞ ∫_0^u f_Z(u) dt du = ∫_0^∞ f_Z(u) ∫_0^u dt du = ∫_0^∞ u f_Z(u) du = 𝔼[Z].
(42)
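This tail-integral identity is easy to check on a distribution with a known mean. The sketch below (our illustration, not from the paper) evaluates the right-hand side numerically for Z ~ Exp(1), so that 𝔼[Z] = 1, with a playing the role of the constant 4B_nK/√n appearing later.

```python
import math

def right_tail_integral(a, survival, T=80.0, steps=200_000):
    # Numerically evaluate a * integral_0^T P(Z > a*z) dz by the trapezoidal rule
    h = T / steps
    total = 0.5 * (survival(0.0) + survival(a * T))
    total += sum(survival(a * i * h) for i in range(1, steps))
    return a * h * total

a = 4.0  # arbitrary positive constant, standing in for 4*B_n*K/sqrt(n)
# For Z ~ Exp(1): P(Z > t) = e^{-t} and E[Z] = 1
est = right_tail_integral(a, lambda t: math.exp(-t))
```

The numerical value matches 𝔼[Z] = 1 up to quadrature error, illustrating why integrating the tail bound over z yields the expectation bound (43).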
Then, we define the following random variable with respect to the random response Y_{[n]} := (Y_i)_{i ∈ [n]}:
Z := KL_n(s_0, ŝ_m̂) − ν_n(f̄_m) − [ inf_{m ∈ ℕ⋆} ( inf_{s_m ∈ S_m} KL_n(s_0, s_m) + (1 + κ^{−1}) pen(m) + η_m ) + η + δ_KL ] − (4B_n/√n) [ 37√q K^{1/2} + (75√q K^{3/2}/2)(A_γ + √q A_β + q√q a_Σ)² ].
Then, by (A.2), on the event 𝒯, it holds that ℙ(Z ≤ az) ≥ 1 − e^{−z}, and if Z ≤ 0 then ℙ(Z < az) = 1 ≥ 1 − e^{−z}, for all z > 0, where a = 4B_n K/√n > 0. Therefore, it is sufficient to consider Z ≥ 0, for which ℙ(Z > az | 𝒯) ≤ e^{−z}. In this case, by (42) and the fact that ℙ(𝒯) ≤ 1, it holds that
𝔼_{Y_{[n]}}[Z 𝕀_𝒯] = ℙ(𝒯) 𝔼_{Y_{[n]}}[Z | 𝒯] ≤ 𝔼_{Y_{[n]}}[Z | 𝒯] ≤ a ∫_{z ≥ 0} e^{−z} dz = a.
(43)
Then, by integrating (A.2) over z > 0 using (43), the fact that
𝔼_{Y_{[n]}}[ν_n(f̄_m)] = 𝔼_{Y_{[n]}}[P_n(f̄_m)] − 𝔼_{Y_{[n]}}[P(f̄_m)] = 0,
(44)
the fact that δ_KL > 0 can be chosen arbitrarily small, and 𝔼_{Y_{[n]}}[𝕀_𝒯] = ℙ(𝒯) ≤ 1, we obtain
𝔼[KL_n(s_0, ŝ_m̂) 𝕀_𝒯] ≤ [ inf_{m ∈ ℕ⋆} ( inf_{s_m ∈ S_m} KL_n(s_0, s_m) + (1 + κ^{−1}) pen(m) + η_m ) + η ] 𝔼_{Y_{[n]}}[𝕀_𝒯] + (4B_n/√n) [ 37√q K^{1/2} + (75√q K^{3/2}/2)(A_γ + √q A_β + q√q a_Σ)² + K ] 𝔼_{Y_{[n]}}[𝕀_𝒯]
≤ inf_{m ∈ ℕ⋆} ( inf_{s_m ∈ S_m} KL_n(s_0, s_m) + (1 + κ^{−1}) pen(m) + η_m ) + η + (4B_n/√n) [ 37√q K^{3/2} + (75√q K^{3/2}/2)(A_γ + √q A_β + q√q a_Σ)² + √q K^{3/2} ]
≤ inf_{m ∈ ℕ⋆} ( inf_{s_m ∈ S_m} KL_n(s_0, s_m) + (1 + κ^{−1}) pen(m) + η_m ) + η + 302 K^{3/2} √q (B_n/√n) ( 1 + (A_γ + √q A_β + q√q a_Σ)² ).
(45)
A.3 Proof of Proposition 3.4
By the Cauchy-Schwarz inequality,
𝔼[KL_n(s_0, ŝ_m̂) 𝕀_{𝒯^C}] ≤ √(𝔼[KL_n²(s_0, ŝ_m̂)]) √(𝔼[𝕀_{𝒯^C}²]) = √(𝔼[KL_n²(s_0, ŝ_m̂)]) √(ℙ(𝒯^C)).
(46)
We seek to bound the two terms on the right-hand side of (46). For the first term, let us bound KL(s_0(·|x), s_ψ(·|x)) for all s_ψ ∈ S and x ∈ 𝒳. Let s_ψ ∈ S and x ∈ 𝒳. Then, we obtain
KL(s_0(·|x), s_ψ(·|x)) = ∫_{ℝ^q} ln( s_0(y|x)/s_ψ(y|x) ) s_0(y|x) dy = ∫_{ℝ^q} ln(s_0(y|x)) s_0(y|x) dy − ∫_{ℝ^q} ln(s_ψ(y|x)) s_0(y|x) dy
≤ −∫_{ℝ^q} ln(s_ψ(y|x)) s_0(y|x) dy + H_{s_0}, ∀x ∈ 𝒳   (using (24)).
(47)
Since
a_G := exp(−A_γ) / ∑_{l=1}^{K} exp(A_γ) ≤ sup_{x ∈ 𝒳, γ ∈ Γ̃} exp(γ_{k0} + γ_k^⊤x) / ∑_{l=1}^{K} exp(γ_{l0} + γ_l^⊤x) ≤ exp(A_γ) / ∑_{l=1}^{K} exp(−A_γ) =: A_G,
there exist deterministic positive constants a_G and A_G such that
a_G ≤ sup_{x ∈ 𝒳, γ ∈ Γ̃} g_k(x; γ) ≤ A_G.
(48)
Here, the softmax gating function g_k(x; γ) is given by
g_k(x; γ) = exp(w_k(x)) / ∑_{l=1}^{K} exp(w_l(x)), w_k(x) = γ_{k0} + γ_k^⊤x, γ = (γ_{k0}, γ_k^⊤)_{k ∈ [K]} ∈ Γ = ℝ^{(p+1)K}.
(49)
Thus, for all y ∈ ℝ^q,
ln(s_ψ(y|x)) s_0(y|x)
≥ ln[ ∑_{k=1}^{K} ( a_G det(Σ_k^{−1})^{1/2}/(2π)^{q/2} ) exp( −( y^⊤Σ_k^{−1}y + (β_{k0} + β_k x)^⊤Σ_k^{−1}(β_{k0} + β_k x) ) ) ] × ∑_{k=1}^{K} ( a_G det(Σ_{0,k}^{−1})^{1/2}/(2π)^{q/2} ) exp( −( y^⊤Σ_{0,k}^{−1}y + (β_{0,k0} + β_{0,k} x)^⊤Σ_{0,k}^{−1}(β_{0,k0} + β_{0,k} x) ) )
(using (48) and −(a − b)^⊤A(a − b)/2 ≥ −(a^⊤Aa + b^⊤Ab), here with a = y, b = β_{k0} + β_k x, A = Σ_k^{−1})
≥ ln[ ∑_{k=1}^{K} ( a_G a_Σ^{q/2}/(2π)^{q/2} ) exp( −( y^⊤Σ_k^{−1}y + (β_{k0} + β_k x)^⊤Σ_k^{−1}(β_{k0} + β_k x) ) ) ] × ∑_{k=1}^{K} ( a_G a_Σ^{q/2}/(2π)^{q/2} ) exp( −( y^⊤Σ_{0,k}^{−1}y + (β_{0,k0} + β_{0,k} x)^⊤Σ_{0,k}^{−1}(β_{0,k0} + β_{0,k} x) ) )   (using (2.1))
≥ ln[ ( K a_G a_Σ^{q/2}/(2π)^{q/2} ) exp( −(y^⊤y + q A_β²) A_Σ ) ] × ( K a_G a_Σ^{q/2}/(2π)^{q/2} ) exp( −(y^⊤y + q A_β²) A_Σ )   (using (2.1)),
(50)
where, in the last inequality, we use the following fact: for all u ∈ ℝ^q, using the eigenvalue decomposition Σ^{−1} = P^⊤DP,
|u^⊤Σ^{−1}u| = |u^⊤P^⊤DPu| ≤ M(D) ‖Pu‖₂² ≤ A_Σ ‖u‖₂² ≤ A_Σ q ‖u‖_∞²,
where in the last inequality we used (94). Therefore, setting u = √(2A_Σ) y and h(t) = t ln t for t > 0, noticing that h(t) ≥ h(e^{−1}) = −e^{−1} for all t > 0, and using (A.3) and (50), we get that
KL(s_0(·|x), s_ψ(·|x)) − H_{s_0}
≤ −∫_{ℝ^q} ln[ ( K a_γ a_Σ^{q/2}/(2π)^{q/2} ) exp( −(y^⊤y + q A_β²) A_Σ ) ] ( K a_γ a_Σ^{q/2}/(2π)^{q/2} ) exp( −(y^⊤y + q A_β²) A_Σ ) dy
= −( K a_γ a_Σ^{q/2} e^{−q A_β² A_Σ}/(2A_Σ)^{q/2} ) ∫_{ℝ^q} [ ln( K a_γ a_Σ^{q/2}/(2π)^{q/2} ) − q A_β² A_Σ − u^⊤u/2 ] ( e^{−u^⊤u/2}/(2π)^{q/2} ) du
= −( K a_γ a_Σ^{q/2} e^{−q A_β² A_Σ}/(2A_Σ)^{q/2} ) 𝔼_U[ ln( K a_γ a_Σ^{q/2}/(2π)^{q/2} ) − q A_β² A_Σ − U^⊤U/2 ]   (with U ∼ 𝒩_q(0, I_q))
= −( K a_γ a_Σ^{q/2} e^{−q A_β² A_Σ}/(2A_Σ)^{q/2} ) [ ln( K a_γ a_Σ^{q/2}/(2π)^{q/2} ) − q A_β² A_Σ − q/2 ]
= −( K a_γ a_Σ^{q/2} e^{−q A_β² A_Σ − q/2}/(2π)^{q/2} ) (A_Σ)^{q/2} e^{q/2} π^{q/2} ln( K a_γ a_Σ^{q/2} e^{−q A_β² A_Σ − q/2}/(2π)^{q/2} ) ≤ e^{q/2−1} π^{q/2} A_Σ^{q/2},
(51)
where we used the fact that t ln(t) ≥ −e^{−1} for all t > 0.
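The two elementary facts invoked in this derivation, the lower bound t ln t ≥ −1/e and the quadratic-form bound |u^⊤Su| ≤ A q ‖u‖_∞² for a symmetric matrix S with eigenvalues in [0, A], can be checked numerically. The matrix and vectors below are arbitrary illustrative choices.

```python
import math

# t * ln(t) attains its minimum -1/e at t = 1/e
ts = [10 ** (k / 10.0) for k in range(-60, 41)]
assert all(t * math.log(t) >= -1.0 / math.e - 1e-12 for t in ts)

def quad(S, u):
    # Quadratic form u' S u for a small dense matrix
    n = len(u)
    return sum(u[i] * S[i][j] * u[j] for i in range(n) for j in range(n))

# Symmetric matrix whose eigenvalues ((3 +/- sqrt(2))/2 ~ 2.21, 0.79) lie in [0, A]
S = [[2.0, 0.5], [0.5, 1.0]]
A, q = 2.4, 2
for u in ([1.0, -3.0], [0.2, 0.7], [-1.5, -1.5]):
    assert abs(quad(S, u)) <= A * q * max(abs(z) for z in u) ** 2 + 1e-12
```

Both checks pass for any grid of t values and any test vector u, consistent with the inequalities used above.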
Then, for all s_ψ ∈ S,
KL_n(s_0, s_ψ) = (1/n) ∑_{i=1}^{n} KL( s_0(·|x_i), s_ψ(·|x_i) ) ≤ e^{q/2−1} π^{q/2} A_Σ^{q/2} + H_{s_0},
and note that ŝ_m̂ ∈ S, thus
√(𝔼[KL_n²(s_0, ŝ_m̂)]) ≤ e^{q/2−1} π^{q/2} A_Σ^{q/2} + H_{s_0}.
(52)
We now provide an upper bound for ℙ(𝒯^C):
ℙ(𝒯^C) ≤ ∑_{i=1}^{n} ℙ( ‖Y_i‖_∞ > M_n ).
(53)
For all i ∈ [n],
Y_i | x_i ∼ ∑_{k=1}^{K} g_k(x_i; γ) 𝒩_q(β_{k0} + β_k x_i, Σ_k),
so we see from (53) that we need to provide an upper bound on ℙ(‖Y_x‖_∞ > M_n), with
Y_x ∼ ∑_{k=1}^{K} g_k(x; γ) 𝒩_q(β_{k0} + β_k x, Σ_k), x ∈ 𝒳.
First, using Chernoff's inequality for a centered Gaussian variable (see Lemma C.8), the fact that ψ belongs to the bounded space Ψ̃ (defined by (2.1)), and the fact that ∑_{k=1}^{K} g_k(x; γ) = 1, we get
ℙ(‖Y_x‖_∞ > M_n) = ∑_{k=1}^{K} ( g_k(x; γ)/( (2π)^{q/2} det(Σ_k)^{1/2} ) ) ∫_{{‖y‖_∞ > M_n}} exp( −(y − (β_{k0} + β_k x))^⊤ Σ_k^{−1} (y − (β_{k0} + β_k x))/2 ) dy
= ∑_{k=1}^{K} g_k(x; γ) ℙ(‖Y_{x,k}‖_∞ > M_n) ≤ ∑_{k=1}^{K} g_k(x; γ) ∑_{z=1}^{q} ℙ(|[Y_{x,k}]_z| > M_n)
= ∑_{k=1}^{K} g_k(x; γ) ∑_{z=1}^{q} ( ℙ([Y_{x,k}]_z < −M_n) + ℙ([Y_{x,k}]_z > M_n) )
= ∑_{k=1}^{K} g_k(x; γ) ∑_{z=1}^{q} ( ℙ( U > (M_n − [β_{k0} + β_k x]_z)/[Σ_k]_{z,z}^{1/2} ) + ℙ( U < (−M_n − [β_{k0} + β_k x]_z)/[Σ_k]_{z,z}^{1/2} ) )
= ∑_{k=1}^{K} g_k(x; γ) ∑_{z=1}^{q} ( ℙ( U > (M_n − [β_{k0} + β_k x]_z)/[Σ_k]_{z,z}^{1/2} ) + ℙ( U > (M_n + [β_{k0} + β_k x]_z)/[Σ_k]_{z,z}^{1/2} ) )
≤ ∑_{k=1}^{K} g_k(x; γ) ∑_{z=1}^{q} [ e^{−( (M_n − [β_{k0} + β_k x]_z)/[Σ_k]_{z,z}^{1/2} )²/2} + e^{−( (M_n + [β_{k0} + β_k x]_z)/[Σ_k]_{z,z}^{1/2} )²/2} ]   (using Lemma C.8, (105))
≤ 2 ∑_{k=1}^{K} g_k(x; γ) ∑_{z=1}^{q} e^{−( (M_n − |[β_{k0} + β_k x]_z|)/[Σ_k]_{z,z}^{1/2} )²/2} ≤ 2 ∑_{k=1}^{K} g_k(x; γ) ∑_{z=1}^{q} e^{−( M_n² − 2M_n |[β_{k0} + β_k x]_z| + |[β_{k0} + β_k x]_z|² )/(2[Σ_k]_{z,z})}
≤ 2K A_γ q e^{−(M_n² − 2M_n A_β)/(2A_Σ)},
(54)
where
Y_{x,k} ∼ 𝒩_q(β_{k0} + β_k x, Σ_k), [Y_{x,k}]_z ∼ 𝒩([β_{k0} + β_k x]_z, [Σ_k]_{z,z}), and U = ( [Y_{x,k}]_z − [β_{k0} + β_k x]_z )/[Σ_k]_{z,z}^{1/2} ∼ 𝒩(0, 1),
and where we used the facts that e^{−|[β_{k0} + β_k x]_z|²/(2A_Σ)} ≤ 1 and max_{1 ≤ z ≤ q} |[Σ_k]_{z,z}| ≤ ‖Σ_k‖₂ = M(Σ_k) = m(Σ_k^{−1})^{−1} ≤ A_Σ. We derive from (53) and (A.3) that
ℙ(𝒯^c) ≤ 2Knq A_γ e^{−(M_n² − 2M_n A_β)/(2A_Σ)},
(55)
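The Gaussian tail inequality from Lemma C.8 that drives (54) and (55), namely ℙ(U > t) ≤ e^{−t²/2} for a standard Gaussian U and t ≥ 0, can be verified numerically. The sketch below (our illustration) compares the exact tail, computed via the complementary error function, against the Chernoff bound.

```python
import math

def gauss_upper_tail(t):
    # Exact P(U > t) for U ~ N(0, 1), via the complementary error function
    return 0.5 * math.erfc(t / math.sqrt(2.0))

# Chernoff bound: P(U > t) <= exp(-t^2 / 2) for all t >= 0
for t in [0.0, 0.5, 1.0, 2.0, 3.5, 5.0]:
    assert gauss_upper_tail(t) <= math.exp(-t * t / 2.0) + 1e-15
```

The bound holds at every grid point, with the gap widening as t grows, which is why the M_n² term dominates the tail probability in (55).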
and finally from (46), (52), and (55), we obtain
𝔼[KL_n(s_0, ŝ_m̂) 𝕀_{𝒯^C}] ≤ ( e^{q/2−1} π^{q/2} A_Σ^{q/2} + H_{s_0} ) √(2Knq A_γ) e^{−(M_n² − 2M_n A_β)/(4A_Σ)}.
(56)
Appendix B Proofs of technical lemmas
B.1 Proof of Lemma 3.5
Let m ∈ ℕ⋆. On the event 𝒯, to control the deviation
sup_{f_m ∈ F_m} |ν_n(−f_m)| = sup_{f_m ∈ F_m} | (1/n) ∑_{i=1}^{n} { f_m(Y_i | x_i) − 𝔼[f_m(Y_i | x_i)] } |,
(57)
we shall use concentration and symmetrization arguments. We shall first use the following concentration inequality, which is an adaptation of Wainwright (2019, Theorem 4.10).
Lemma B.1 (Theorem 4.10 from Wainwright (2019)).
Let Z_1, …, Z_n be independent random variables with values in some space 𝒵 and let ℱ be a class of integrable real-valued functions with domain 𝒵. Assume that
sup_{f ∈ ℱ} ∥f∥_∞ ≤ R_n for some non-random constant R_n < ∞.
(58)
Then, for all t > 0,
ℙ( sup_{f ∈ ℱ} | (1/n) ∑_{i=1}^{n} [ f(Z_i) − 𝔼[f(Z_i)] ] | > 𝔼[ sup_{f ∈ ℱ} | (1/n) ∑_{i=1}^{n} [ f(Z_i) − 𝔼[f(Z_i)] ] | ] + 2√2 R_n √(t/n) ) ≤ e^{−t}.
(59)
That is, with probability greater than 1 − e^{−t},
sup_{f ∈ ℱ} | (1/n) ∑_{i=1}^{n} [ f(Z_i) − 𝔼[f(Z_i)] ] | ≤ 𝔼[ sup_{f ∈ ℱ} | (1/n) ∑_{i=1}^{n} [ f(Z_i) − 𝔼[f(Z_i)] ] | ] + 2√2 R_n √(t/n).
(60)
Then, we propose to bound 𝔼 [ sup 𝑓 ∈ ℱ | 1 𝑛 ∑ 𝑖
1 𝑛 [ 𝑓 ( 𝑍 𝑖 ) − 𝔼 [ 𝑓 ( 𝑍 𝑖 ) ] ] | ] due to the following symmetrization argument. The proof of this result can be found in Van Der Vaart and Wellner (1996).
Lemma B.2 (See Lemma 2.3.6 in Van Der Vaart and Wellner (1996)).
Let 𝑍 1 , … , 𝑍 𝑛 be independent random variables with values in some space 𝒵 and let ℱ be a class of real-valued functions on 𝒵 . Let ( 𝜖 1 , … , 𝜖 𝑛 ) be a Rademacher sequence independent of ( 𝑍 1 , … , 𝑍 𝑛 ) . Then,
$$\mathbb{E}\Bigg[\sup_{f\in\mathcal{F}}\Big|\frac{1}{n}\sum_{i=1}^n\big[f(Z_i)-\mathbb{E}[f(Z_i)]\big]\Big|\Bigg]\le2\,\mathbb{E}\Bigg[\sup_{f\in\mathcal{F}}\Big|\frac{1}{n}\sum_{i=1}^n\epsilon_if(Z_i)\Big|\Bigg].$$
(61)
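The symmetrization bound (61) can be probed numerically. The sketch below is an illustrative Monte Carlo check, not part of the proof: it assumes only NumPy, and uses a small hypothetical function class of threshold indicators $f_t(z)=\mathbb{I}_{z\le t}$ on uniform data, for which both sides of (61) are easy to estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 50, 2000
ts = np.linspace(0.1, 0.9, 9)            # class F = {z -> 1{z <= t}}

z = rng.uniform(size=(reps, n))
eps = rng.choice([-1.0, 1.0], size=(reps, n))
ind = (z[..., None] <= ts)               # reps x n x |F| array of f(Z_i)

# left side: E sup_F |P_n f - P f|  (true mean of 1{Z <= t} is t for Z ~ U[0,1])
lhs = np.abs(ind.mean(axis=1) - ts).max(axis=1).mean()
# right side (without the factor 2): E sup_F |(1/n) sum_i eps_i f(Z_i)|
rhs = np.abs((eps[..., None] * ind).mean(axis=1)).max(axis=1).mean()

print(f"E sup |nu_n| ~ {lhs:.3f}  vs  2 x Rademacher average ~ {2 * rhs:.3f}")
assert lhs <= 2 * rhs
```

With this class the estimated left side sits well below twice the estimated Rademacher average, as (61) predicts.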
From (61), the problem reduces to providing an upper bound on $\mathbb{E}\big[\sup_{f\in\mathcal{F}}\big|\frac{1}{n}\sum_{i=1}^n\epsilon_if(Z_i)\big|\big]$.
To do so, we shall apply the following lemma, which is adapted from Lemma 6.1 in Massart (2007).
Lemma B.3 (See Lemma 6.1 in Massart (2007)).
Let 𝑍 1 , … , 𝑍 𝑛 be independent random variables with values in some space 𝒵 and let ℱ be a class of real-valued functions on 𝒵 . Let ( 𝜖 1 , … , 𝜖 𝑛 ) be a Rademacher sequence, independent of ( 𝑍 1 , … , 𝑍 𝑛 ) . Define 𝑅 𝑛 , a non-random constant, such that
sup 𝑓 ∈ ℱ ∥ 𝑓 ∥ 𝑛 ≤ 𝑅 𝑛 .
(62)
Then, for all $S\in\mathbb{N}^\star$,
$$\mathbb{E}\Bigg[\sup_{f\in\mathcal{F}}\Big|\frac{1}{n}\sum_{i=1}^n\epsilon_if(Z_i)\Big|\Bigg]\le R_n\Bigg(\frac{6}{\sqrt{n}}\sum_{s=1}^S2^{-s}\sqrt{\ln\big[1+M(2^{-s}R_n,\mathcal{F},\|\cdot\|_n)\big]}+2^{-S}\Bigg),$$
(63)
where 𝑀 ( 𝛿 , ℱ , ∥ . ∥ 𝑛 ) stands for the 𝛿 -packing number (see Definition C.5) of the set of functions ℱ , equipped with the metric induced by the norm ∥ ⋅ ∥ 𝑛 .
We are now able to prove Lemma 3.5. Indeed, given any fixed values $x_1,\ldots,x_n\in\mathcal{X}$, in order to control $\sup_{f_m\in F_m}|\nu_n(-f_m)|\,\mathbb{I}_{\mathcal{T}}$ from (57), we would like to apply Lemmas B.1–B.3. On the one hand, we see from (62) that we need an upper bound on $\sup_{f_m\in F_m}\|f_m\|_\infty\,\mathbb{I}_{\mathcal{T}}$. On the other hand, we see from (63) that, on the event $\mathcal{T}$, we need to bound the entropy of the set of functions $F_m$, equipped with the metric induced by the norm $\|\cdot\|_n$. Such bounds are provided by the following two lemmas.
Recall that given $M_n>0$, we considered the event
$$\mathcal{T}=\Big\{\max_{i\in[n]}\|Y_i\|_\infty=\max_{i\in[n]}\max_{z\in[q]}\big|[Y_i]_z\big|\le M_n\Big\},$$
(64)
and let $B_n=\max(A_\Sigma,1+KA_G)\big(1+q\sqrt{q}\,(M_n+A_\beta)^2A_\Sigma\big)$.
Lemma B.4.
On the event 𝒯 , for all 𝑚 ∈ ℕ ⋆ ,
$$\sup_{f_m\in F_m}\|f_m\|_\infty\le2KB_n\big(A_\gamma+qA_\beta+q\sqrt{q}\,a_\Sigma\big)=:R_n.$$
(65)
Proof of Lemma B.4. See Appendix B.2.1.
Lemma B.5.
Let $\delta>0$ and $m\in\mathbb{N}^\star$. On the event $\mathcal{T}$, we have the following upper bound on the $\delta$-packing number of the set of functions $F_m$, equipped with the metric induced by the norm $\|\cdot\|_n$:
$$M(\delta,F_m,\|\cdot\|_n)\le(2p+1)^{\frac{72B_n^2q^2K^2m^2}{\delta^2}}\Big(1+\frac{18B_nKqA_\beta}{\delta}\Big)^K\Big(1+\frac{18B_nKA_\gamma}{\delta}\Big)^K\Big(1+\frac{18B_nKq\sqrt{q}\,a_\Sigma}{\delta}\Big)^K.$$
Proof of Lemma B.5. See Appendix B.2.2.
Lemma B.6 (Lemma 5.9 from Meynet (2013)).
Let $\delta>0$ and $(x_{ij})_{i\in[n];\,j\in[p]}\in\mathbb{R}^{np}$. There exists a family $\mathcal{B}$ of $(2p+1)^{\|x\|_{\max,n}^2/\delta^2}$ vectors in $\mathbb{R}^p$, where $\|x\|_{\max,n}^2=\frac{1}{n}\sum_{i=1}^n\max_{j\in\{1,\ldots,p\}}x_{ij}^2$, such that for all $\beta\in\mathbb{R}^p$ with $\|\beta\|_1\le1$, there exists $\beta'\in\mathcal{B}$ such that
$$\frac{1}{n}\sum_{i=1}^n\Big(\sum_{j=1}^p(\beta_j-\beta_j')x_{ij}\Big)^2\le\delta^2.$$
Proof of Lemma B.6. See the proof of Meynet (2013, Lemma 5.9).
Via the upper bounds provided in Lemmas B.4 and B.5, we can apply Lemma B.3 to obtain an upper bound of
$$\mathbb{E}\Bigg[\sup_{f_m\in F_m}\Big|\frac{1}{n}\sum_{i=1}^n\epsilon_if_m(Y_i|x_i)\Big|\Bigg]\quad\text{on the event }\mathcal{T}.$$
(66)
In order to provide such an upper bound, Lemmas B.1 and B.3 can be utilized by defining a suitable class of integrable real-valued functions as follows:
$$\mathcal{F}:=\big\{f:=f_m\mathbb{I}_{|f_m|\le R_n}:f_m\in F_m\big\},\qquad Z_i:=Y_i|x_i,\ \forall i\in[n].$$
(67)
Indeed, by definition, it holds that
$$\sup_{f\in\mathcal{F}}\|f\|_n\le\sup_{f\in\mathcal{F}}\|f\|_\infty=\sup_{f\in\mathcal{F}}\sup_{z\in\mathcal{Z}}|f(z)|=\sup_{f_m\in F_m}\sup_{z\in\mathcal{Z}}\big|f_m(z)\mathbb{I}_{|f_m(z)|\le R_n}\big|\le R_n.$$
(68)
Note that the last inequality is valid since, if $|f_m(z)|\le R_n$, then $|f_m(z)\mathbb{I}_{|f_m(z)|\le R_n}|=|f_m(z)|\le R_n$; otherwise, if $|f_m(z)|>R_n$, then $|f_m(z)\mathbb{I}_{|f_m(z)|\le R_n}|=|f_m(z)\times0|=0\le R_n$. We thus obtain the following results.
Lemma B.7.
Let $m\in\mathbb{N}^\star$ and consider $(\epsilon_1,\ldots,\epsilon_n)$, a Rademacher sequence independent of $(Y_1,\ldots,Y_n)$. Then, on the event $\mathcal{T}$, it holds that
$$\mathbb{E}\Bigg[\sup_{f_m\in F_m}\Big|\frac{1}{n}\sum_{i=1}^n\epsilon_if_m(Y_i|x_i)\Big|\Bigg]\le74KB_n\frac{q}{\sqrt{n}}\Delta_m,\quad\text{where }\Delta_m:=m\sqrt{\ln(2p+1)}\ln n+2\sqrt{K}\big(A_\gamma+qA_\beta+q\sqrt{q}\,a_\Sigma\big).$$
Proof of Lemma B.7. See Appendix B.2.3.
We now return to the proof of Lemma 3.5.
Finally, on the event $\mathcal{T}$, using (67) and Lemma B.4, for all $m\in\mathbb{N}^\star$ and $t>0$, with probability greater than $1-e^{-t}$, we obtain
$$\begin{aligned}
\sup_{f_m\in F_m}|\nu_n(-f_m)|&=\sup_{f\in\mathcal{F}}\Big|\frac{1}{n}\sum_{i=1}^n\big\{f(Z_i)-\mathbb{E}[f(Z_i)]\big\}\Big|&&(70)\\
&\le\mathbb{E}\Bigg[\sup_{f\in\mathcal{F}}\Big|\frac{1}{n}\sum_{i=1}^n\big[f(Z_i)-\mathbb{E}[f(Z_i)]\big]\Big|\Bigg]+2\sqrt{2}\,R_n\sqrt{\frac{t}{n}}&&(\text{using Lemma B.1})\quad(71)\\
&\le2\,\mathbb{E}\Bigg[\sup_{f\in\mathcal{F}}\Big|\frac{1}{n}\sum_{i=1}^n\epsilon_if(Z_i)\Big|\Bigg]+2\sqrt{2}\,R_n\sqrt{\frac{t}{n}}&&(\text{using Lemma B.2})\quad(73)\\
&=2\,\mathbb{E}\Bigg[\sup_{f_m\in F_m}\Big|\frac{1}{n}\sum_{i=1}^n\epsilon_if_m(Y_i|x_i)\Big|\Bigg]+2\sqrt{2}\,R_n\sqrt{\frac{t}{n}}&&(74)\\
&\le148KB_n\frac{q}{\sqrt{n}}\Delta_m+4\sqrt{2}\,KB_n\big(A_\gamma+qA_\beta+q\sqrt{q}\,a_\Sigma\big)\sqrt{\frac{t}{n}}&&(\text{using Lemma B.7 and }R_n=2KB_n(A_\gamma+qA_\beta+q\sqrt{q}\,a_\Sigma))\quad(75)\\
&\le\frac{4KB_n}{\sqrt{n}}\Big[37q\Delta_m+\sqrt{2}\big(A_\gamma+qA_\beta+q\sqrt{q}\,a_\Sigma\big)\sqrt{t}\Big].&&(77)
\end{aligned}$$
(77) B.2Proofs of Lemmas B.4–B.7
The proofs of Lemmas B.4–B.5 require an upper bound on the uniform norm of the gradient of ln 𝑠 𝜓 , for 𝑠 𝜓 ∈ 𝑆 . We begin by providing such an upper bound.
Lemma B.8.
Given $s_\psi$, as described in (3), it holds that
$$\sup_{x\in\mathcal{X}}\sup_{\psi\in\widetilde{\Psi}}\left\|\frac{\partial\ln(s_\psi(\cdot|x))}{\partial\psi}\right\|_\infty\le G(\cdot),\qquad G:\mathbb{R}^q\ni y\mapsto G(y)=\max(A_\Sigma,1+KA_G)\big(1+q\sqrt{q}\,(\|y\|_\infty+A_\beta)^2A_\Sigma\big).$$
(78)
Proof of Lemma B.8. Let $s_\psi\in S$, with $\psi=(\gamma,\beta,\Sigma)$. From now on, we consider any $x\in\mathcal{X}$, any $y\in\mathbb{R}^q$, and any $k\in[K]$. We can write
$$\ln(s_\psi(y|x))=\ln\Big(\sum_{k=1}^Kg_k(x;\gamma)\,\mathcal{N}(y;\beta_{k0}+\beta_kx,\Sigma_k)\Big)=\ln\Big(\sum_{k=1}^Kf_k(x,y)\Big),$$
where
$$g_k(x;\gamma)=\frac{\exp(w_k(x))}{\sum_{l=1}^K\exp(w_l(x))},\qquad w_k(x)=\gamma_{k0}+\gamma_k^\top x,$$
$$\mathcal{N}(y;\beta_{k0}+\beta_kx,\Sigma_k)=\frac{1}{(2\pi)^{q/2}\det(\Sigma_k)^{1/2}}\exp\Big(-\frac{1}{2}\big(y-(\beta_{k0}+\beta_kx)\big)^\top\Sigma_k^{-1}\big(y-(\beta_{k0}+\beta_kx)\big)\Big),$$
$$f_k(x,y)=g_k(x;\gamma)\,\mathcal{N}(y;\beta_{k0}+\beta_kx,\Sigma_k)=\frac{g_k(x;\gamma)}{(2\pi)^{q/2}\det(\Sigma_k)^{1/2}}\exp\Big[-\frac{1}{2}\big(y-(\beta_{k0}+\beta_kx)\big)^\top\Sigma_k^{-1}\big(y-(\beta_{k0}+\beta_kx)\big)\Big].$$
By using the chain rule, for all $l\in[K]$,
$$\frac{\partial\ln(s_\psi(y|x))}{\partial\gamma_{l0}}=\frac{1}{\sum_{k=1}^Kf_k(x,y)}\sum_{k=1}^K\frac{f_k(x,y)}{g_k(x;\gamma)}\,\frac{\partial g_k(x;\gamma)}{\partial w_l(x)}\underbrace{\frac{\partial w_l(x)}{\partial\gamma_{l0}}}_{=1},$$
and
$$\frac{\partial\ln(s_\psi(y|x))}{\partial(\gamma_l^\top x)}=\frac{1}{\sum_{k=1}^Kf_k(x,y)}\sum_{k=1}^K\frac{f_k(x,y)}{g_k(x;\gamma)}\,\frac{\partial g_k(x;\gamma)}{\partial w_l(x)}\underbrace{\frac{\partial w_l(x)}{\partial(\gamma_l^\top x)}}_{=1}.$$
Furthermore,
$$\frac{\partial g_k(x;\gamma)}{\partial w_l(x)}=\frac{\partial}{\partial w_l(x)}\Bigg(\frac{\exp(w_k(x))}{\sum_{l'=1}^K\exp(w_{l'}(x))}\Bigg)=\frac{\delta_{lk}\exp(w_k(x))}{\sum_{l'=1}^K\exp(w_{l'}(x))}-\frac{\exp(w_k(x))}{\sum_{l'=1}^K\exp(w_{l'}(x))}\cdot\frac{\exp(w_l(x))}{\sum_{l'=1}^K\exp(w_{l'}(x))}=g_k(x;\gamma)\big(\delta_{lk}-g_l(x;\gamma)\big),$$
where
$$\delta_{lk}=\begin{cases}1&\text{if }l=k,\\0&\text{if }l\neq k.\end{cases}$$
Therefore, we obtain
$$\left|\frac{\partial\ln(s_\psi(y|x))}{\partial(\gamma_l^\top x)}\right|=\left|\frac{\partial\ln(s_\psi(y|x))}{\partial\gamma_{l0}}\right|=\left|\frac{\sum_{k=1}^Kf_k(x,y)\big(\delta_{lk}-g_l(x;\gamma)\big)}{\sum_{k=1}^Kf_k(x,y)}\right|\le\Big|\sum_{k=1}^K\big(\delta_{lk}-g_l(x;\gamma)\big)\Big|=\big|1-Kg_l(x;\gamma)\big|\le1+Kg_l(x;\gamma)\le1+KA_G\quad(\text{using }(48)).$$
Similarly, by using the fact that $\psi$ belongs to the bounded space $\widetilde{\Psi}$ and that $f_l(x,y)/\sum_{k=1}^Kf_k(x,y)\le1$,
$$\begin{aligned}
\left\|\frac{\partial\ln(s_\psi(y|x))}{\partial\beta_{l0}}\right\|_\infty&=\left\|\frac{\partial\ln(s_\psi(y|x))}{\partial(\beta_lx)}\right\|_\infty=\left\|\frac{f_l(x,y)}{\sum_{k=1}^Kf_k(x,y)}\frac{\partial}{\partial(\beta_{l0}+\beta_lx)}\Big[-\frac{1}{2}\big(y-(\beta_{l0}+\beta_lx)\big)^\top\Sigma_l^{-1}\big(y-(\beta_{l0}+\beta_lx)\big)\Big]\right\|_\infty\\
&\le\left\|\frac{\partial}{\partial(\beta_{l0}+\beta_lx)}\Big[-\frac{1}{2}\big(y-(\beta_{l0}+\beta_lx)\big)^\top\Sigma_l^{-1}\big(y-(\beta_{l0}+\beta_lx)\big)\Big]\right\|_\infty\\
&=\big\|\Sigma_l^{-1}\big(y-(\beta_{l0}+\beta_lx)\big)\big\|_\infty\le\|\Sigma_l^{-1}\|_\infty\big\|y-(\beta_{l0}+\beta_lx)\big\|_\infty&&(\text{using }(95))\\
&\le\sqrt{q}\,\|\Sigma_l^{-1}\|_2\big(\|y\|_\infty+\|\beta_{l0}+\beta_lx\|_\infty\big)&&(\text{using }(100))\\
&\le\sqrt{q}\,M(\Sigma_l^{-1})\big(\|y\|_\infty+\|\beta_{l0}+\beta_lx\|_\infty\big)&&(\text{using }(99))\\
&\le\sqrt{q}\,A_\Sigma\big(\|y\|_\infty+A_\beta\big)&&(\text{using }(2.1)).
\end{aligned}$$
Now, we need to calculate the gradient with respect to the covariance matrices of the Gaussian experts. To do this, we need the following result: for any $l\in[K]$, with $v_l=\beta_{l0}+\beta_lx$, it holds that
$$\frac{\partial}{\partial\Sigma_l}\mathcal{N}(x;v_l,\Sigma_l)=\mathcal{N}(x;v_l,\Sigma_l)\underbrace{\frac{1}{2}\Big[\Sigma_l^{-1}(x-v_l)(x-v_l)^\top\Sigma_l^{-1}-\big(\Sigma_l^{-1}\big)^\top\Big]}_{T(x,v_l,\Sigma_l)},$$
(79)
noting that
$$\frac{\partial}{\partial\Sigma_l}\Big(\big(x-v_l\big)^\top\Sigma_l^{-1}\big(x-v_l\big)\Big)=-\Sigma_l^{-1}(x-v_l)(x-v_l)^\top\Sigma_l^{-1}\quad(\text{using Lemma C.1}),$$
(80)
$$\frac{\partial}{\partial\Sigma_l}\big(\det(\Sigma_l)\big)=\det(\Sigma_l)\big(\Sigma_l^{-1}\big)^\top\quad(\text{using the Jacobi formula, Lemma C.2}).$$
(81)
For any $l\in[K]$,
$$\begin{aligned}
\left|\frac{\partial\ln(s_\psi(y|x))}{\partial[\Sigma_l]_{z_1,z_2}}\right|&\le\left\|\frac{\partial\ln(s_\psi(y|x))}{\partial\Sigma_l}\right\|_2&&(\text{using }(99))\\
&=\left|\frac{f_l(x,y)}{\sum_{k=1}^Kf_k(x,y)}\right|\left\|\frac{\partial}{\partial\Sigma_l}\Big[-\frac{1}{2}\big(y-(\beta_{l0}+\beta_lx)\big)^\top\Sigma_l^{-1}\big(y-(\beta_{l0}+\beta_lx)\big)\Big]\right\|_2\\
&\le\left\|\frac{\partial}{\partial\Sigma_l}\Big[-\frac{1}{2}\big(y-(\beta_{l0}+\beta_lx)\big)^\top\Sigma_l^{-1}\big(y-(\beta_{l0}+\beta_lx)\big)\Big]\right\|_2\\
&=\frac{1}{2}\Big\|\Sigma_l^{-1}\big(y-(\beta_{l0}+\beta_lx)\big)\big(y-(\beta_{l0}+\beta_lx)\big)^\top\Sigma_l^{-1}-\big(\Sigma_l^{-1}\big)^\top\Big\|_2&&(\text{using }(79))\\
&\le\frac{1}{2}\Big[A_\Sigma+\sqrt{q}\,\big\|\big(y-(\beta_{l0}+\beta_lx)\big)\big(y-(\beta_{l0}+\beta_lx)\big)^\top\big\|_\infty A_\Sigma^2\Big]&&(\text{using }(100))\\
&\le\frac{1}{2}\Big[A_\Sigma+q\sqrt{q}\,(\|y\|_\infty+A_\beta)^2A_\Sigma^2\Big]&&(\text{using }(2.1)),
\end{aligned}$$
where, in the last inequality, given $a=y-(\beta_{l0}+\beta_lx)$, we use the fact that
$$\|aa^\top\|_\infty=\max_{1\le i\le q}\sum_{j=1}^q\big|[aa^\top]_{i,j}\big|=\max_{1\le i\le q}\sum_{j=1}^q|a_ia_j|=\max_{1\le i\le q}|a_i|\sum_{j=1}^q|a_j|\le q\|a\|_\infty^2.$$
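The rank-one bound $\|aa^\top\|_\infty=\|a\|_\infty\|a\|_1\le q\|a\|_\infty^2$ is elementary but easy to verify numerically; the following NumPy sketch (illustrative only, not part of the proof) checks it on random vectors:

```python
import numpy as np

rng = np.random.default_rng(2)
q = 6
for _ in range(1000):
    a = rng.normal(size=q)
    # ||a a^T||_inf is the maximum absolute row sum of the rank-one matrix a a^T
    lhs = np.abs(np.outer(a, a)).sum(axis=1).max()
    assert np.isclose(lhs, np.linalg.norm(a, np.inf) * np.linalg.norm(a, 1))
    assert lhs <= q * np.linalg.norm(a, np.inf) ** 2 + 1e-12
print("||a a^T||_inf = ||a||_inf ||a||_1 <= q ||a||_inf^2 verified on 1000 draws")
```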
Thus,
$$\begin{aligned}
\sup_{x\in\mathcal{X}}\sup_{\psi\in\widetilde{\Psi}}\left\|\frac{\partial\ln(s_\psi(y|x))}{\partial\psi}\right\|_\infty&\le\max\Big[1+KA_G,\ \sqrt{q}\,(\|y\|_\infty+A_\beta)A_\Sigma,\ \frac{1}{2}\big[A_\Sigma+q\sqrt{q}\,(\|y\|_\infty+A_\beta)^2A_\Sigma^2\big]\Big]\\
&\le\max\Big[1+KA_G,\ \max(A_\Sigma,1)\big(1+q\sqrt{q}\,(\|y\|_\infty+A_\beta)^2A_\Sigma\big)\Big]\\
&\le\max(A_\Sigma,1+KA_G)\big(1+q\sqrt{q}\,(\|y\|_\infty+A_\beta)^2A_\Sigma\big)=:G(y),
\end{aligned}$$
where we use the fact that
$$\sqrt{q}\,(\|y\|_\infty+A_\beta)A_\Sigma=:\theta\le1+\theta^2=1+q(\|y\|_\infty+A_\beta)^2A_\Sigma^2\le\max(A_\Sigma,1)\big(1+q\sqrt{q}\,(\|y\|_\infty+A_\beta)^2A_\Sigma\big).$$
B.2.1Proof of Lemma B.4
Let $m\in\mathbb{N}^\star$ and $f_m\in F_m$. By (21), there exists $s_m\in S_m$ such that $f_m=-\ln(s_m/s_0)$. For all $x\in\mathcal{X}$, let $\psi(x)=(\gamma_{k0},\gamma_k^\top x,\beta_{k0},\beta_kx,\Sigma_k)_{k\in[K]}$ be the parameters of $s_m(\cdot|x)$. In our case, we approximate $f(\psi)=\ln(s_\psi(y_i|x_i))$ around $\psi_0(x_i)$ by its $0$th-degree Taylor polynomial. That is,
$$\big|\ln(s_m(y_i|x_i))-\ln(s_0(y_i|x_i))\big|=:|f(\psi)-f(\psi_0)|=|R_0(\psi)|\ (\text{defined in Lemma C.9})\le\sup_{x\in\mathcal{X}}\sup_{\psi\in\widetilde{\Psi}}\left\|\frac{\partial\ln(s_\psi(y_i|x))}{\partial\psi}\right\|_\infty\big\|\psi(x_i)-\psi_0(x_i)\big\|_1,$$
by first applying Taylor's inequality and then Lemma B.8 on the event $\mathcal{T}$. For all $i\in[n]$, it holds that
$$\begin{aligned}
|f_m(y_i|x_i)|\,\mathbb{I}_{\mathcal{T}}&=\big|\ln(s_m(y_i|x_i))-\ln(s_0(y_i|x_i))\big|\,\mathbb{I}_{\mathcal{T}}\le\sup_{x\in\mathcal{X}}\sup_{\psi\in\widetilde{\Psi}}\left\|\frac{\partial\ln(s_\psi(y_i|x))}{\partial\psi}\right\|_\infty\big\|\psi(x_i)-\psi_0(x_i)\big\|_1\,\mathbb{I}_{\mathcal{T}}\\
&\le\underbrace{\max(A_\Sigma,1+KA_G)\big(1+q\sqrt{q}\,(M_n+A_\beta)^2A_\Sigma\big)}_{=:B_n}\big\|\psi(x_i)-\psi_0(x_i)\big\|_1&&(\text{using Lemma B.8})\\
&\le B_n\sum_{k=1}^K\Big(|\gamma_{k0}-\gamma_{0,k0}|+|\gamma_k^\top x_i-\gamma_{0,k}^\top x_i|+\|\beta_{k0}-\beta_{0,k0}\|_1+\|\beta_kx_i-\beta_{0,k}x_i\|_1+\|\operatorname{vec}(\Sigma_k-\Sigma_{0,k})\|_1\Big)\\
&\le2B_n\sum_{k=1}^K\Big(|\gamma_{k0}|+|\gamma_k^\top x_i|+\|\beta_{k0}\|_1+\|\beta_kx_i\|_1+q\|\Sigma_k\|_1\Big)&&(\text{using }(97))\\
&\le2KB_n\big(A_\gamma+q\|\beta_{k0}\|_\infty+q\|\beta_kx_i\|_\infty+q\sqrt{q}\,\|\Sigma_k\|_2\big)&&(\text{using }(2.1),(92),(93),(101))\\
&\le2KB_n\big(A_\gamma+qA_\beta+q\sqrt{q}\,a_\Sigma\big)&&(\text{using }(2.1)).
\end{aligned}$$
Therefore, on the event $\mathcal{T}$,
$$\sup_{f_m\in F_m}\|f_m\|_\infty\le2KB_n\big(A_\gamma+qA_\beta+q\sqrt{q}\,a_\Sigma\big)=:R_n.$$
B.2.2Proof of Lemma B.5
Let $m\in\mathbb{N}^\star$, $f_m^{[1]}\in F_m$, and $x\in[0,1]^p$. By (21), there exists $s_m^{[1]}\in S_m$ such that $f_m^{[1]}=-\ln(s_m^{[1]}/s_0)$. Introduce the notation $s_m^{[2]}\in S$ and $f_m^{[2]}=-\ln(s_m^{[2]}/s_0)$. Let
$$\psi^{[1]}(x)=\big(\gamma_{k0}^{[1]},\gamma_k^{[1]\top}x,\beta_{k0}^{[1]},\beta_k^{[1]}x,\Sigma_k^{[1]}\big)_{k\in[K]}\quad\text{and}\quad\psi^{[2]}(x)=\big(\gamma_{k0}^{[2]},\gamma_k^{[2]\top}x,\beta_{k0}^{[2]},\beta_k^{[2]}x,\Sigma_k^{[2]}\big)_{k\in[K]}$$
be the parameters of the PDFs $s_m^{[1]}(\cdot|x)$ and $s_m^{[2]}(\cdot|x)$, respectively. By applying Taylor's inequality and then Lemma B.8 on the event $\mathcal{T}$, for all $i\in[n]$, it holds that
$$\begin{aligned}
\big|f_m^{[1]}(y_i|x_i)-f_m^{[2]}(y_i|x_i)\big|&=\big|\ln(s_m^{[1]}(y_i|x_i))-\ln(s_m^{[2]}(y_i|x_i))\big|\\
&\le\sup_{x\in\mathcal{X}}\sup_{\psi\in\widetilde{\Psi}}\left\|\frac{\partial\ln(s_\psi(y_i|x))}{\partial\psi}\right\|_\infty\big\|\psi^{[1]}(x_i)-\psi^{[2]}(x_i)\big\|_1&&(\text{using Taylor's inequality in Lemma C.9})\\
&\le\underbrace{\max(A_\Sigma,1+KA_G)\big(1+q\sqrt{q}\,(M_n+A_\beta)^2A_\Sigma\big)}_{B_n}\big\|\psi^{[1]}(x_i)-\psi^{[2]}(x_i)\big\|_1&&(\text{using Lemma B.8})\\
&\le B_n\sum_{k=1}^K\Big(\big|\gamma_{k0}^{[1]}-\gamma_{k0}^{[2]}\big|+\big|\gamma_k^{[1]\top}x_i-\gamma_k^{[2]\top}x_i\big|+\big\|\beta_{k0}^{[1]}-\beta_{k0}^{[2]}\big\|_1+\big\|\beta_k^{[1]}x_i-\beta_k^{[2]}x_i\big\|_1+\big\|\operatorname{vec}\big(\Sigma_k^{[1]}-\Sigma_k^{[2]}\big)\big\|_1\Big).
\end{aligned}$$
By the Cauchy–Schwarz inequality, $\big(\sum_{i=1}^ma_i\big)^2\le m\sum_{i=1}^ma_i^2$ ($m\in\mathbb{N}^\star$), we get
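As a quick illustrative check (not part of the proof, assuming only NumPy), the Cauchy–Schwarz consequence $(\sum_{i=1}^m a_i)^2\le m\sum_{i=1}^m a_i^2$ can be verified on random vectors:

```python
import numpy as np

rng = np.random.default_rng(8)
for m in (1, 2, 5, 20):
    a = rng.normal(size=m)
    # (sum a_i)^2 <= m * sum a_i^2, with equality iff all a_i are equal
    assert a.sum() ** 2 <= m * (a ** 2).sum() + 1e-12
print("Cauchy-Schwarz bound (sum a_i)^2 <= m * sum a_i^2 verified")
```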
$$\begin{aligned}
\big|f_m^{[1]}(y_i|x_i)-f_m^{[2]}(y_i|x_i)\big|^2&\le3B_n^2\Bigg[\Big(\sum_{k=1}^K\big|\gamma_k^{[1]\top}x_i-\gamma_k^{[2]\top}x_i\big|\Big)^2+\Big(\sum_{k=1}^K\sum_{z=1}^q\big|[\beta_k^{[1]}x_i]_z-[\beta_k^{[2]}x_i]_z\big|\Big)^2\Bigg]\\
&\quad+3B_n^2\Big(\big\|\beta_0^{[1]}-\beta_0^{[2]}\big\|_1+\big\|\gamma_0^{[1]}-\gamma_0^{[2]}\big\|_1+\big\|\operatorname{vec}\big(\Sigma^{[1]}-\Sigma^{[2]}\big)\big\|_1\Big)^2\\
&\le3B_n^2\Bigg[K\sum_{k=1}^K\Big(\sum_{j=1}^p\gamma_{kj}^{[1]}x_{ij}-\sum_{j=1}^p\gamma_{kj}^{[2]}x_{ij}\Big)^2+Kq\sum_{k=1}^K\sum_{z=1}^q\Big(\sum_{j=1}^p[\beta_k^{[1]}]_{z,j}x_{ij}-\sum_{j=1}^p[\beta_k^{[2]}]_{z,j}x_{ij}\Big)^2\Bigg]\\
&\quad+3B_n^2\Big(\big\|\beta_0^{[1]}-\beta_0^{[2]}\big\|_1+\big\|\gamma_0^{[1]}-\gamma_0^{[2]}\big\|_1+\big\|\operatorname{vec}\big(\Sigma^{[1]}-\Sigma^{[2]}\big)\big\|_1\Big)^2,
\end{aligned}$$
and
$$\begin{aligned}
\big\|f_m^{[1]}-f_m^{[2]}\big\|_n^2&=\frac{1}{n}\sum_{i=1}^n\big|f_m^{[1]}(y_i|x_i)-f_m^{[2]}(y_i|x_i)\big|^2\\
&\le3B_n^2\underbrace{K\sum_{k=1}^K\frac{1}{n}\sum_{i=1}^n\Big(\sum_{j=1}^p\gamma_{kj}^{[1]}x_{ij}-\sum_{j=1}^p\gamma_{kj}^{[2]}x_{ij}\Big)^2}_{=:a}+3B_n^2\underbrace{Kq\sum_{k=1}^K\sum_{z=1}^q\frac{1}{n}\sum_{i=1}^n\Big(\sum_{j=1}^p[\beta_k^{[1]}]_{z,j}x_{ij}-\sum_{j=1}^p[\beta_k^{[2]}]_{z,j}x_{ij}\Big)^2}_{=:b}\\
&\quad+3B_n^2\Big(\big\|\beta_0^{[1]}-\beta_0^{[2]}\big\|_1+\big\|\gamma_0^{[1]}-\gamma_0^{[2]}\big\|_1+\big\|\operatorname{vec}\big(\Sigma^{[1]}-\Sigma^{[2]}\big)\big\|_1\Big)^2.
\end{aligned}$$
So, for all $\delta>0$, if
$$a\le\frac{\delta^2}{36B_n^2},\quad b\le\frac{\delta^2}{36B_n^2},\quad\big\|\beta_0^{[1]}-\beta_0^{[2]}\big\|_1\le\frac{\delta}{18B_n},\quad\big\|\gamma_0^{[1]}-\gamma_0^{[2]}\big\|_1\le\frac{\delta}{18B_n},\quad\text{and}\quad\big\|\operatorname{vec}\big(\Sigma^{[1]}-\Sigma^{[2]}\big)\big\|_1\le\frac{\delta}{18B_n},$$
then $\|f_m^{[1]}-f_m^{[2]}\|_n^2\le\delta^2/4$. To bound $a$ and $b$, we can write
$$a=Km^2\sum_{k=1}^K\frac{1}{n}\sum_{i=1}^n\Big(\sum_{j=1}^p\frac{\gamma_{kj}^{[1]}}{m}x_{ij}-\sum_{j=1}^p\frac{\gamma_{kj}^{[2]}}{m}x_{ij}\Big)^2\quad\text{and}\quad b=Kqm^2\sum_{k=1}^K\sum_{z=1}^q\frac{1}{n}\sum_{i=1}^n\Big(\sum_{j=1}^p\frac{[\beta_k^{[1]}]_{z,j}}{m}x_{ij}-\sum_{j=1}^p\frac{[\beta_k^{[2]}]_{z,j}}{m}x_{ij}\Big)^2.$$
Then, we apply Lemma B.6 to $\gamma_{k,\cdot}^{[1]}/m=(\gamma_{kj}^{[1]}/m)_{j\in[p]}$ and $[\beta_k^{[1]}]_{z,\cdot}/m=([\beta_k^{[1]}]_{z,j}/m)_{j\in[p]}$, for all $k\in[K]$, $z\in[q]$. Since $s_m^{[1]}\in S_m$, and using (12), we have $\|\gamma_k^{[1]}\|_1\le m$ and $\|\operatorname{vec}(\beta_k^{[1]})\|_1\le m$, which leads to $\sum_{j=1}^p|\gamma_{kj}^{[1]}/m|\le1$ and $\sum_{z=1}^q\sum_{j=1}^p|[\beta_k^{[1]}]_{z,j}/m|\le1$, respectively. Furthermore, given $x\in\mathcal{X}=[0,1]^p$, we have $\|x\|_{\max,n}^2\le1$. Thus, there exist families $\mathcal{A}$ of $(2p+1)^{36B_n^2K^2m^2/\delta^2}$ vectors and $\mathcal{B}$ of $(2p+1)^{36B_n^2q^2K^2m^2/\delta^2}$ vectors of $\mathbb{R}^p$ such that, for all $k\in[K]$, $z\in[q]$, $\gamma_{k,\cdot}^{[1]}$, and $[\beta_k^{[1]}]_{z,\cdot}$, there exist $\gamma_{k,\cdot}^{[2]}\in\mathcal{A}$ and $[\beta_k^{[2]}]_{z,\cdot}\in\mathcal{B}$ such that
$$\frac{1}{n}\sum_{i=1}^n\Big(\sum_{j=1}^p\frac{\gamma_{kj}^{[1]}}{m}x_{ij}-\sum_{j=1}^p\frac{\gamma_{kj}^{[2]}}{m}x_{ij}\Big)^2\le\frac{\delta^2}{36B_n^2K^2m^2},\quad\text{and}\quad\frac{1}{n}\sum_{i=1}^n\Big(\sum_{j=1}^p\frac{[\beta_k^{[1]}]_{z,j}}{m}x_{ij}-\sum_{j=1}^p\frac{[\beta_k^{[2]}]_{z,j}}{m}x_{ij}\Big)^2\le\frac{\delta^2}{36B_n^2q^2K^2m^2},$$
which leads to $a\le\delta^2/(36B_n^2)$ and $b\le\delta^2/(36B_n^2)$. Moreover, (2.1) leads to
$$\|\beta_0^{[1]}\|_1=\sum_{k=1}^K\|\beta_{0k}^{[1]}\|_1\le Kq\|\beta_{0k}^{[1]}\|_\infty\le KqA_\beta\ (\text{using }(92)),\qquad\|\gamma_0^{[1]}\|_1=\sum_{k=1}^K|\gamma_{0k}^{[1]}|\le KA_\gamma,\qquad\|\operatorname{vec}(\Sigma^{[1]})\|_1\le Kq\sqrt{q}\,a_\Sigma.$$
Therefore, on the event $\mathcal{T}$,
$$\begin{aligned}
M(\delta,F_m,\|\cdot\|_n)&\le N(\delta/2,F_m,\|\cdot\|_n)&&(\text{using Lemma C.7})\\
&\le\operatorname{card}(\mathcal{A})\operatorname{card}(\mathcal{B})\,N\Big(\frac{\delta}{18B_n},B_1^K(KqA_\beta),\|\cdot\|_1\Big)N\Big(\frac{\delta}{18B_n},B_1^K(KA_\gamma),\|\cdot\|_1\Big)N\Big(\frac{\delta}{18B_n},B_1^K(Kq\sqrt{q}\,a_\Sigma),\|\cdot\|_1\Big)\\
&\le(2p+1)^{\frac{72B_n^2q^2K^2m^2}{\delta^2}}\Big(1+\frac{18B_nKqA_\beta}{\delta}\Big)^K\Big(1+\frac{18B_nKA_\gamma}{\delta}\Big)^K\Big(1+\frac{18B_nKq\sqrt{q}\,a_\Sigma}{\delta}\Big)^K.
\end{aligned}$$
B.2.3Proof of Lemma B.7
Let $m\in\mathbb{N}^\star$. From Lemma B.4, on the event $\mathcal{T}$, it holds that
$$\sup_{f_m\in F_m}\|f_m\|_n\le2KB_n\big(A_\gamma+qA_\beta+q\sqrt{q}\,a_\Sigma\big)=:R_n.$$
(82)
From Lemma B.5, on the event $\mathcal{T}$, for all $S\in\mathbb{N}^\star$, with $\delta=2^{-s}R_n$,
$$\begin{aligned}
\sum_{s=1}^S2^{-s}\sqrt{\ln\big[1+M(2^{-s}R_n,F_m,\|\cdot\|_n)\big]}&\le\sum_{s=1}^S2^{-s}\sqrt{\ln\big[2M(\delta,F_m,\|\cdot\|_n)\big]}\\
&\le\sum_{s=1}^S2^{-s}\Bigg[\sqrt{\ln2}+\frac{6\sqrt{2}\,B_nqKm}{\delta}\sqrt{\ln(2p+1)}+\sqrt{K\ln\Big[\Big(1+\frac{18B_nKqA_\beta}{\delta}\Big)\Big(1+\frac{18B_nKA_\gamma}{\delta}\Big)\Big(1+\frac{18B_nKq\sqrt{q}\,a_\Sigma}{\delta}\Big)\Big]}\Bigg]\\
&\le\sum_{s=1}^S2^{-s}\Bigg[\sqrt{\ln2}+2^s\frac{6\sqrt{2}\,B_nqKm}{R_n}\sqrt{\ln(2p+1)}+\sqrt{K\ln\Big[\Big(1+2^s\frac{18B_nKqA_\beta}{R_n}\Big)\Big(1+2^s\frac{18B_nKA_\gamma}{R_n}\Big)\Big(1+2^s\frac{18B_nKq\sqrt{q}\,a_\Sigma}{R_n}\Big)\Big]}\Bigg].
\end{aligned}$$
(83)
Notice from (82) that $R_n\ge2KB_n\max(A_\gamma,qA_\beta,q\sqrt{q}\,a_\Sigma)$. Moreover, it holds that $1\le2^{s+3}$, $\sum_{s=1}^S2^{-s}=1-2^{-S}\le1$, $\sum_{s=1}^S(\sqrt{e}/2)^s\le\sqrt{e}/(2-\sqrt{e})$, and, since $e^s\ge s$ for all $s\in\mathbb{N}^\star$, $2^{-s}\sqrt{s}\le(\sqrt{e}/2)^s$. Therefore, from (83):
$$\begin{aligned}
\sum_{s=1}^S2^{-s}\sqrt{\ln\big[1+M(2^{-s}R_n,F_m,\|\cdot\|_n)\big]}&\le\sum_{s=1}^S2^{-s}\Bigg[\sqrt{\ln2}+2^s\frac{6\sqrt{2}\,B_nqKm}{R_n}\sqrt{\ln(2p+1)}+\sqrt{K\ln\big[\big(2^{s+1}3^2\big)\big(2^{s+1}3^2\big)\big(2^{s+1}3^2\big)\big]}\Bigg]\\
&=\sum_{s=1}^S2^{-s}\Bigg[\sqrt{\ln2}+2^s\frac{6\sqrt{2}\,B_nqKm}{R_n}\sqrt{\ln(2p+1)}+\sqrt{3K\big((s+1)\ln2+2\ln3\big)}\Bigg]\\
&\le\frac{6\sqrt{2}\,B_nKqm}{R_n}\sqrt{\ln(2p+1)}\,S+\sqrt{3K\ln2}\sum_{s=1}^S2^{-s}\sqrt{s}+\sqrt{\ln2}\big(1+\sqrt{3K}\big)+\sqrt{6K\ln3}\\
&\le\frac{6\sqrt{2}\,B_nKqm}{R_n}\sqrt{\ln(2p+1)}\,S+\sqrt{3K\ln2}\sum_{s=1}^S\Big(\frac{\sqrt{e}}{2}\Big)^s+\sqrt{\ln2}\big(1+\sqrt{3K}\big)+\sqrt{6K\ln3}\\
&\le\frac{6\sqrt{2}\,B_nqKm}{R_n}\sqrt{\ln(2p+1)}\,S+\sqrt{K\ln2}\underbrace{\Bigg(\frac{\sqrt{3e}}{2-\sqrt{e}}+1+\sqrt{3}+\sqrt{\frac{6\ln3}{\ln2}}\Bigg)}_{=:C_1}.
\end{aligned}$$
(84)
Then, for all $S\in\mathbb{N}^\star$, on the event $\mathcal{T}$:
$$\begin{aligned}
\mathbb{E}\Bigg[\sup_{f_m\in F_m}\Big|\frac{1}{n}\sum_{i=1}^n\epsilon_if_m(Z_i)\Big|\Bigg]&=\mathbb{E}\Bigg[\sup_{f_m\in F_m}\Big|\frac{1}{n}\sum_{i=1}^n\epsilon_if_m(Z_i)\mathbb{I}_{|f_m(Z_i)|\le R_n}\Big|\Bigg]&&(\text{using Lemma B.4})\\
&=\mathbb{E}\Bigg[\sup_{f\in\mathcal{F}}\Big|\frac{1}{n}\sum_{i=1}^n\epsilon_if(Z_i)\Big|\Bigg]\le R_n\Bigg(\frac{6}{\sqrt{n}}\sum_{s=1}^S2^{-s}\sqrt{\ln\big[1+M(2^{-s}R_n,\mathcal{F},\|\cdot\|_n)\big]}+2^{-S}\Bigg)&&(\text{using Lemma B.3})\\
&\le R_n\Bigg[\frac{6}{\sqrt{n}}\Bigg(\frac{6\sqrt{2}\,B_nKmq}{R_n}\sqrt{\ln(2p+1)}\,S+\sqrt{K\ln2}\,C_1\Bigg)+2^{-S}\Bigg]&&(\text{using }(84)).
\end{aligned}$$
(85)
We choose $S=\ln n/\ln2$, so that the two terms depending on $S$ in (85) are of the same order. In particular, for this value of $S$, $2^{-S}\le1/n$, and we deduce from (85) and (82) that, on the event $\mathcal{T}$,
$$\begin{aligned}
\mathbb{E}\Bigg[\sup_{f_m\in F_m}\Big|\frac{1}{n}\sum_{i=1}^n\epsilon_if_m(Z_i)\Big|\Bigg]&\le\frac{36\sqrt{2}}{\ln2}\,\frac{B_nKmq}{\sqrt{n}}\sqrt{\ln(2p+1)}\,\ln n+2KB_n\big(A_\gamma+qA_\beta+q\sqrt{q}\,a_\Sigma\big)\Bigg(\frac{6\sqrt{K\ln2}\,C_1}{\sqrt{n}}+\frac{1}{n}\Bigg)\\
&\le\frac{B_nKmq}{\sqrt{n}}\sqrt{\ln(2p+1)}\,\ln n\underbrace{\frac{36\sqrt{2}}{\ln2}}_{\approx73.45}+\frac{K\sqrt{K}}{\sqrt{n}}B_n\big(A_\gamma+qA_\beta+q\sqrt{q}\,a_\Sigma\big)\underbrace{2\big(6\sqrt{\ln2}\,C_1+1\big)}_{\approx141.32}\\
&<\frac{74KB_n}{\sqrt{n}}\Big[mq\sqrt{\ln(2p+1)}\,\ln n+2\sqrt{K}\big(A_\gamma+qA_\beta+q\sqrt{q}\,a_\Sigma\big)\Big]\le74KB_n\frac{q}{\sqrt{n}}\Delta_m.
\end{aligned}$$
B.3Proof of Lemma A.1
Since $\ln(z)$ is concave in $z$, Jensen's inequality (see, e.g., Jensen (1906); Cover (1999)) implies that $\ln(\mathbb{E}_Z[Z])\ge\mathbb{E}_Z[\ln(Z)]$ for a random variable $Z$. Thus, for all $x\in\mathcal{X}$, Jensen's inequality and Lemma B.9 lead to the following upper bound:
$$\begin{aligned}
\int_{\mathbb{R}^q}\ln(s_0(y|x))\,s_0(y|x)\,dy&\le\sum_{k=1}^Kg_k(x;\gamma_0)\ln\Big[\int_{\mathbb{R}^q}s_0(y|x)\,\mathcal{N}(y;v_{0k}(x),\Sigma_{0k})\,dy\Big]\\
&\le\sum_{k=1}^Kg_k(x;\gamma_0)\ln\Big[\sum_{l=1}^Kg_l(x;\gamma_0)\,C_{s_0}\Big]=\ln C_{s_0}<\infty,
\end{aligned}$$
where $C_{s_0}=(4\pi)^{-q/2}A_\Sigma^{q/2}$ (using Lemma B.9). (86)
Therefore, we obtain
$$\max\Big\{0,\sup_{x\in\mathcal{X}}\int_{\mathbb{R}^q}\ln(s_0(y|x))\,s_0(y|x)\,dy\Big\}\le\max\{0,\ln C_{s_0}\}=:H_{s_0}<\infty.$$
Next, we state the following important Lemma B.9, which is used in the proof of Lemma A.1.
Lemma B.9.
There exists a positive constant 𝐶 𝑠 0 := ( 4 𝜋 ) − 𝑞 / 2 𝐴 Σ 𝑞 / 2 , 0 < 𝐶 𝑠 0 < ∞ , such that for all 𝑘 ∈ [ 𝐾 ] , 𝑙 ∈ [ 𝐿 ] ,
∫ ℝ 𝑞 𝒩 ( 𝑦 ; 𝑣 0 𝑙 ( 𝑥 ) , Σ 0 𝑙 ) 𝒩 ( 𝑦 ; 𝑣 0 𝑘 ( 𝑥 ) , Σ 0 𝑘 ) 𝑑 𝑦 < 𝐶 𝑠 0 , ∀ 𝑥 ∈ 𝒳 .
(87)
Proof of Lemma B.9. Firstly, for all $k\in[K]$, $l\in[L]$, given
$$c_{lk}(x)=C_{lk}\big[\Sigma_{0l}^{-1}v_{0l}(x)+\Sigma_{0k}^{-1}v_{0k}(x)\big],\qquad C_{lk}=\big(\Sigma_{0l}^{-1}+\Sigma_{0k}^{-1}\big)^{-1},$$
Lemma C.10 leads to
$$\int_{\mathbb{R}^q}\big[\mathcal{N}(y;v_{0l}(x),\Sigma_{0l})\,\mathcal{N}(y;v_{0k}(x),\Sigma_{0k})\big]dy=Z_{lk}^{-1}\underbrace{\int_{\mathbb{R}^q}\mathcal{N}(y;c_{lk}(x),C_{lk})\,dy}_{=1},\quad\text{where}$$
$$Z_{lk}^{-1}=(2\pi)^{-q/2}\det(\Sigma_{0l}+\Sigma_{0k})^{-1/2}\exp\Big(-\frac{1}{2}\big(v_{0l}(x)-v_{0k}(x)\big)^\top\big(\Sigma_{0l}+\Sigma_{0k}\big)^{-1}\big(v_{0l}(x)-v_{0k}(x)\big)\Big).$$
(88)
Next, since the determinant is the product of the eigenvalues (counted with multiplicity), Weyl's inequality (see, e.g., Lemma C.11) gives, for all $k\in[K]$, $l\in[L]$,
$$\det(\Sigma_{0l}+\Sigma_{0k})\ge\big[m(\Sigma_{0l}+\Sigma_{0k})\big]^q\ge\big[m(\Sigma_{0l})+m(\Sigma_{0k})\big]^q\quad(\text{using (108) from Lemma C.11})$$
$$=\big[M(\Sigma_{0l}^{-1})^{-1}+M(\Sigma_{0k}^{-1})^{-1}\big]^q\ge\big(2A_\Sigma^{-1}\big)^q\quad(\text{using the boundedness assumptions in (2.1)}).$$
Therefore, for all $k\in[K]$, $l\in[L]$, it holds that
$$\det(\Sigma_{0l}+\Sigma_{0k})^{-1/2}\le2^{-q/2}A_\Sigma^{q/2}\quad(\text{using the boundedness assumptions in (2.1)}).$$
(89)
Since ( Σ 0 𝑙 + Σ 0 𝑘 ) − 1 is a positive definite matrix, it holds that
( 𝑣 0 𝑙 ( 𝑥 ) − 𝑣 0 𝑘 ( 𝑥 ) ) ⊤ ( Σ 0 𝑙 + Σ 0 𝑘 ) − 1 ( 𝑣 0 𝑙 ( 𝑥 ) − 𝑣 0 𝑘 ( 𝑥 ) ) ≥ 0 , ∀ 𝑥 ∈ 𝒳 , 𝑙 ∈ [ 𝐿 ] , 𝑘 ∈ [ 𝐾 ] .
Then, since the exponential function is increasing, ∀ 𝑥 ∈ 𝒳 , 𝑙 ∈ [ 𝐿 ] , 𝑘 ∈ [ 𝐾 ] , we have
$$\exp\Big(-\frac{1}{2}\big(v_{0l}(x)-v_{0k}(x)\big)^\top\big(\Sigma_{0l}+\Sigma_{0k}\big)^{-1}\big(v_{0l}(x)-v_{0k}(x)\big)\Big)\le\exp(0)=1.$$
(90)
Finally, from (88), (89), and (90), we obtain
$$\int_{\mathbb{R}^q}\big[\mathcal{N}(y;v_{0l}(x),\Sigma_{0l})\,\mathcal{N}(y;v_{0k}(x),\Sigma_{0k})\big]dy\le(2\pi)^{-q/2}\,2^{-q/2}A_\Sigma^{q/2}=(4\pi)^{-q/2}A_\Sigma^{q/2}=:C_{s_0}<\infty.$$
Appendix CFurther Technical results
We denote the vector space of all $q$-by-$q$ real matrices by $\mathbb{R}^{q\times q}$ ($q\in\mathbb{N}^\star$):
$$A\in\mathbb{R}^{q\times q}\iff A=(a_{i,j})=\begin{bmatrix}a_{1,1}&\cdots&a_{1,q}\\\vdots&&\vdots\\a_{q,1}&\cdots&a_{q,q}\end{bmatrix},\quad a_{i,j}\in\mathbb{R}.$$
If a capital letter is used to denote a matrix (e.g., 𝐴 , 𝐵 ), then the corresponding lower-case letter with subscript 𝑖 , 𝑗 refers to the ( 𝑖 , 𝑗 ) th entry (e.g., 𝑎 𝑖 , 𝑗 , 𝑏 𝑖 , 𝑗 ). When required, we also designate the elements of a matrix with the notation [ 𝐴 ] 𝑖 , 𝑗 or 𝐴 ( 𝑖 , 𝑗 ) . Denote the 𝑞 -by- 𝑞 identity and zero matrices by I 𝑞 and 𝟎 𝑞 , respectively.
Lemma C.1 (Derivative of quadratic form, cf., Magnus and Neudecker (2019)).
Assume that $X\in\mathbb{R}^{q\times q}$ is a non-singular matrix and $a\in\mathbb{R}^{q\times1}$ is a vector. Then
$$\frac{\partial\,a^\top X^{-1}a}{\partial X}=-X^{-1}aa^\top X^{-1}.$$
Lemma C.2 (Jacobi’s formula, cf., Theorem 8.1 from Magnus and Neudecker (2019)).
If $X$ is a differentiable map from the real numbers to $q$-by-$q$ matrices, then
$$\frac{d}{dt}\det(X(t))=\operatorname{tr}\Big(\operatorname{Adj}(X(t))\frac{dX(t)}{dt}\Big),\qquad\frac{\partial\det(X)}{\partial X}=\big(\operatorname{Adj}(X)\big)^\top=\det(X)\big(X^{-1}\big)^\top.$$
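Both matrix-calculus identities (Lemmas C.1 and C.2) can be verified entrywise against central finite differences. The following is an illustrative NumPy sketch on a random well-conditioned matrix, not part of the formal statement:

```python
import numpy as np

rng = np.random.default_rng(3)
q = 4
a = rng.normal(size=q)
X = rng.normal(size=(q, q))
X = X @ X.T + q * np.eye(q)                 # symmetric, invertible, well-conditioned

Xinv = np.linalg.inv(X)
grad_quad = -Xinv @ np.outer(a, a) @ Xinv   # Lemma C.1: d(a^T X^{-1} a)/dX
grad_det = np.linalg.det(X) * Xinv.T        # Lemma C.2 (Jacobi): d det(X)/dX

h = 1e-6
fd_quad = np.empty((q, q))
fd_det = np.empty((q, q))
for i in range(q):
    for j in range(q):
        E = np.zeros((q, q)); E[i, j] = h   # perturb a single entry X_{ij}
        fd_quad[i, j] = (a @ np.linalg.inv(X + E) @ a - a @ np.linalg.inv(X - E) @ a) / (2 * h)
        fd_det[i, j] = (np.linalg.det(X + E) - np.linalg.det(X - E)) / (2 * h)

assert np.allclose(grad_quad, fd_quad, rtol=1e-4, atol=1e-6)
assert np.allclose(grad_det, fd_det, rtol=1e-4, atol=1e-3)
print("Lemmas C.1 and C.2 verified by central finite differences")
```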
Lemma C.3 (Operator induced 𝑝 -norm).
We recall the operator (induced) $p$-norm of a matrix $A\in\mathbb{R}^{q\times q}$ ($q\in\mathbb{N}^\star$, $p\in\{1,2,\infty\}$):
$$\|A\|_p=\max_{x\ne0}\frac{\|Ax\|_p}{\|x\|_p}=\max_{x\ne0}\Big\|A\Big(\frac{x}{\|x\|_p}\Big)\Big\|_p=\max_{\|x\|_p=1}\|Ax\|_p,$$
(91)
where, for all $x\in\mathbb{R}^q$,
$$\|x\|_\infty\le\|x\|_1=\sum_{i=1}^q|x_i|\le q\|x\|_\infty,$$
(92)
$$\|x\|_2=\Big(\sum_{i=1}^q|x_i|^2\Big)^{\frac12}=\big(x^\top x\big)^{\frac12}\le\|x\|_1\le\sqrt{q}\,\|x\|_2,\ \text{and}$$
(93)
$$\|x\|_\infty=\max_{1\le i\le q}|x_i|\le\|x\|_2\le\sqrt{q}\,\|x\|_\infty.$$
(94) Lemma C.4 (Some matrix 𝑝 -norm properties, Golub and Van Loan (2012)).
By definition, we always have the important property that for every 𝐴 ∈ ℝ 𝑞 × 𝑞 and 𝑥 ∈ ℝ 𝑞 ,
∥ 𝐴 𝑥 ∥ 𝑝 ≤ ∥ 𝐴 ∥ 𝑝 ∥ 𝑥 ∥ 𝑝 ,
(95)
and every induced 𝑝 -norm is submultiplicative, i.e., for every 𝐴 ∈ ℝ 𝑞 × 𝑞 and B ∈ ℝ 𝑞 × 𝑞 ,
∥ 𝐴 𝐵 ∥ 𝑝 ≤ ∥ 𝐴 ∥ 𝑝 ∥ 𝐵 ∥ 𝑝 .
(96)
In particular, it holds that
$$\|A\|_1=\max_{1\le j\le q}\sum_{i=1}^q|a_{ij}|\le\sum_{j=1}^q\sum_{i=1}^q|a_{ij}|=:\|\operatorname{vec}(A)\|_1\le q\|A\|_1,$$
(97)
$$\|\operatorname{vec}(A)\|_\infty:=\max_{1\le i,j\le q}|a_{ij}|\le\|A\|_\infty=\max_{1\le i\le q}\sum_{j=1}^q|a_{ij}|\le q\|\operatorname{vec}(A)\|_\infty,$$
(98)
$$\|\operatorname{vec}(A)\|_\infty\le\|A\|_2=\lambda_{\max}(A)\le q\|\operatorname{vec}(A)\|_\infty,$$
(99)
where $\lambda_{\max}$ is the largest eigenvalue of a positive definite symmetric matrix $A$. The $p$-norms, for $p\in\{1,2,\infty\}$, satisfy
$$\frac{1}{\sqrt{q}}\|A\|_\infty\le\|A\|_2\le\sqrt{q}\,\|A\|_\infty,$$
(100)
$$\frac{1}{\sqrt{q}}\|A\|_1\le\|A\|_2\le\sqrt{q}\,\|A\|_1.$$
(101)
Given $\delta>0$, we need to define the $\delta$-packing number and the $\delta$-covering number.
Definition C.5 ( 𝛿 -packing number, cf., Definition 5.4 from Wainwright (2019)).
Let $(\mathcal{F},\|\cdot\|)$ be a normed space and let $\mathcal{G}\subset\mathcal{F}$. With $(g_i)_{i=1,\ldots,m}\in\mathcal{G}$, $\{g_1,\ldots,g_m\}$ is a $\delta$-packing of $\mathcal{G}$ of size $m\in\mathbb{N}^\star$ if $\|g_i-g_j\|>\delta$ for all $i\ne j$, $i,j\in\{1,\ldots,m\}$, or equivalently, if the closed balls $B(g_i,\delta/2)$, $i\in\{1,\ldots,m\}$, are pairwise disjoint. Upon defining a $\delta$-packing, we can measure the maximal number of disjoint closed balls with radius $\delta/2$ that can be "packed" into $\mathcal{G}$. This number is called the $\delta$-packing number and is defined as
𝑀 ( 𝛿 , 𝒢 , ∥ ⋅ ∥ ) := max { 𝑚 ∈ ℕ ⋆ : ∃ 𝛿 -packing of 𝒢 of size 𝑚 } .
(102) Definition C.6 ( 𝛿 -covering number, cf., Definition 5.1 from Wainwright (2019)).
Let $(\mathcal{F},\|\cdot\|)$ be a normed space and let $\mathcal{G}\subset\mathcal{F}$. With $(g_i)_{i\in[n]}\in\mathcal{G}$, $\{g_1,\ldots,g_n\}$ is a $\delta$-covering of $\mathcal{G}$ of size $n$ if $\mathcal{G}\subset\bigcup_{i=1}^nB(g_i,\delta)$, or equivalently, if for all $g\in\mathcal{G}$ there exists $i$ such that $\|g-g_i\|\le\delta$. Upon defining the $\delta$-covering, we can measure the minimal number of closed balls with radius $\delta$ that is necessary to cover $\mathcal{G}$. This number is called the $\delta$-covering number and is defined as
$$N(\delta,\mathcal{G},\|\cdot\|):=\min\{n\in\mathbb{N}^\star:\exists\ \delta\text{-covering of }\mathcal{G}\text{ of size }n\}.$$
(103)
The covering entropy (metric entropy) is defined as $H_{\|\cdot\|}(\delta,\mathcal{G})=\ln\big(N(\delta,\mathcal{G},\|\cdot\|)\big)$.
The relation between the packing number and the covering number is described in the following lemma.
Lemma C.7 (Lemma 5.5 from Wainwright (2019)).
Let ( ℱ , ∥ ⋅ ∥ ) be a normed space and let 𝒢 ⊂ ℱ . Then
𝑀 ( 2 𝛿 , 𝒢 , ∥ ⋅ ∥ ) ≤ 𝑁 ( 𝛿 , 𝒢 , ∥ ⋅ ∥ ) ≤ 𝑀 ( 𝛿 , 𝒢 , ∥ ⋅ ∥ ) .
Lemma C.8 (Chernoff’s inequality, e.g., Chapter 2 in Wainwright (2019)).
Assume that the random variable $U$ has a moment generating function in a neighborhood of zero, meaning that there is some constant $b>0$ such that the function $\varphi(\lambda)=\mathbb{E}[e^{\lambda(U-\mu)}]$ exists for all $|\lambda|\le b$. In such a case, we may apply Markov's inequality to the random variable $Y=e^{\lambda(U-\mu)}$, thereby obtaining the upper bound
$$\mathbb{P}(U-\mu\ge t)=\mathbb{P}\big(e^{\lambda(U-\mu)}\ge e^{\lambda t}\big)\le\frac{\mathbb{E}[e^{\lambda(U-\mu)}]}{e^{\lambda t}}.$$
Optimizing our choice of $\lambda$ so as to obtain the tightest result yields the Chernoff bound
$$\ln\big(\mathbb{P}(U-\mu\ge t)\big)\le-\sup_{\lambda\in[0,b]}\Big\{\lambda t-\ln\big(\mathbb{E}[e^{\lambda(U-\mu)}]\big)\Big\}.$$
(104)
In particular, suppose $U\sim\mathcal{N}(\mu,\sigma^2)$ is a Gaussian random variable with mean $\mu$ and variance $\sigma^2$. By a straightforward calculation, we find that $U$ has the moment generating function
$$\mathbb{E}[e^{\lambda U}]=e^{\mu\lambda+\frac{\sigma^2\lambda^2}{2}},\quad\text{valid for all }\lambda\in\mathbb{R}.$$
Substituting this expression into the optimization problem defining the optimized Chernoff bound (104), we obtain
$$\sup_{\lambda\ge0}\Big\{\lambda t-\ln\big(\mathbb{E}[e^{\lambda(U-\mu)}]\big)\Big\}=\sup_{\lambda\ge0}\Big\{\lambda t-\frac{\sigma^2\lambda^2}{2}\Big\}=\frac{t^2}{2\sigma^2},$$
where we have taken derivatives in order to find the optimum of this quadratic function. So, (104) leads to
$$\mathbb{P}(U\ge\mu+t)\le e^{-\frac{t^2}{2\sigma^2}},\quad\text{for all }t\ge0.$$
(105)
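The Gaussian tail bound (105) can be illustrated with a quick Monte Carlo experiment (not part of the lemma; it assumes NumPy and arbitrary illustrative values of $\mu$ and $\sigma$):

```python
import numpy as np

rng = np.random.default_rng(5)
mu, sigma, n = 1.0, 2.0, 2_000_000
u = rng.normal(mu, sigma, size=n)
for t in (0.5, 1.0, 2.0, 4.0):
    emp = np.mean(u >= mu + t)                      # empirical tail P(U >= mu + t)
    chernoff = np.exp(-t ** 2 / (2 * sigma ** 2))   # bound (105)
    print(f"t={t}: empirical tail {emp:.5f} <= Chernoff bound {chernoff:.5f}")
    assert emp <= chernoff
```

The empirical tail sits well below the Chernoff bound at every $t$, as expected (the bound is loose for small $t$ and tightens in the exponent for large $t$).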
Recall that a multi-index $\alpha=(\alpha_1,\ldots,\alpha_p)$, $\alpha_i\in\mathbb{N}$, $\forall i\in\{1,\ldots,p\}$, is a $p$-tuple of non-negative integers. Let
$$|\alpha|=\sum_{i=1}^p\alpha_i,\qquad\alpha!=\prod_{i=1}^p\alpha_i!,\qquad x^\alpha=\prod_{i=1}^px_i^{\alpha_i},\ x\in\mathbb{R}^p,\qquad\partial^\alpha f=\partial_1^{\alpha_1}\partial_2^{\alpha_2}\cdots\partial_p^{\alpha_p}f=\frac{\partial^{|\alpha|}f}{\partial x_1^{\alpha_1}\partial x_2^{\alpha_2}\cdots\partial x_p^{\alpha_p}}.$$
The number | 𝛼 | is called the order or degree of 𝛼 . Thus, the order of 𝛼 is the same as the order of 𝑥 𝛼 as a monomial or the order of ∂ 𝛼 as a partial derivative.
Lemma C.9 (Multivariate Taylor’s Theorem from Duistermaat and Kolk (2004)).
Suppose $f:\mathbb{R}^p\to\mathbb{R}$ is in the class $C^{k+1}$ of continuously differentiable functions on an open convex set $S$. If $a\in S$ and $a+h\in S$, then
$$f(a+h)=\sum_{|\alpha|\le k}\frac{\partial^\alpha f(a)}{\alpha!}h^\alpha+R_{a,k}(h),$$
where the remainder is given in Lagrange's form by
$$R_{a,k}(h)=\sum_{|\alpha|=k+1}\frac{\partial^\alpha f(a+ch)}{\alpha!}h^\alpha\quad\text{for some }c\in(0,1),$$
or in integral form by
$$R_{a,k}(h)=(k+1)\sum_{|\alpha|=k+1}\frac{h^\alpha}{\alpha!}\int_0^1(1-t)^k\partial^\alpha f(a+th)\,dt.$$
In particular, we can estimate the remainder term if $|\partial^\alpha f(x)|\le M$ for $x\in S$ and $|\alpha|=k+1$:
$$|R_{a,k}(h)|\le\frac{M}{(k+1)!}\|h\|_1^{k+1},\qquad\|h\|_1=\sum_{i=1}^p|h_i|.$$
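The remainder estimate is exactly the Taylor inequality used (with $k=0$) in the proofs of Lemmas B.4 and B.5. An illustrative NumPy check on a hypothetical function whose first partials are bounded by $M=1$:

```python
import numpy as np

rng = np.random.default_rng(9)
f = lambda v: np.sin(v[0]) + np.cos(v[1])   # every first partial is bounded by M = 1
for _ in range(1000):
    a, h = rng.normal(size=2), rng.normal(size=2)
    # zeroth-order remainder bound: |f(a + h) - f(a)| <= M ||h||_1 with M = 1
    assert abs(f(a + h) - f(a)) <= np.abs(h).sum() + 1e-12
print("zeroth-order Taylor remainder bound |R_{a,0}(h)| <= M ||h||_1 verified")
```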
Recall that the multivariate Gaussian (or normal) distribution has a joint density given by
$$\mathcal{N}(y;\mu,\Sigma)=(2\pi)^{-q/2}\det(\Sigma)^{-1/2}\exp\Big(-\frac{1}{2}(y-\mu)^\top\Sigma^{-1}(y-\mu)\Big),$$
(106)
where $\mu$ is the mean vector (of length $q$) and $\Sigma$ is the symmetric, positive definite covariance matrix (of size $q\times q$). Then, we have the following well-known Gaussian identity (Lemma C.10), which is proved in Equation (A.7) of Williams and Rasmussen (2006).
Lemma C.10 (Product of two Gaussians).
The product of two Gaussians gives another (un-normalized) Gaussian:
$$\mathcal{N}(y;a,A)\,\mathcal{N}(y;b,B)=Z^{-1}\mathcal{N}(y;c,C),$$
(107)
where
$$c=C\big(A^{-1}a+B^{-1}b\big),\qquad C=\big(A^{-1}+B^{-1}\big)^{-1},$$
$$Z^{-1}=(2\pi)^{-q/2}\det(A+B)^{-1/2}\exp\Big(-\frac{1}{2}(a-b)^\top(A+B)^{-1}(a-b)\Big).$$
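Identity (107) is exact, so it can be verified pointwise to machine precision. The following sketch (illustrative only, assuming NumPy) builds two random SPD covariances and checks the identity at random evaluation points:

```python
import numpy as np

def gauss_pdf(y, mu, S):
    """Multivariate normal density N(y; mu, S), cf. (106)."""
    q = len(mu)
    d = y - mu
    return np.exp(-0.5 * d @ np.linalg.solve(S, d)) / np.sqrt((2 * np.pi) ** q * np.linalg.det(S))

rng = np.random.default_rng(6)
q = 3
a, b = rng.normal(size=q), rng.normal(size=q)
A = rng.normal(size=(q, q)); A = A @ A.T + np.eye(q)   # random SPD covariances
B = rng.normal(size=(q, q)); B = B @ B.T + np.eye(q)

C = np.linalg.inv(np.linalg.inv(A) + np.linalg.inv(B))
c = C @ (np.linalg.solve(A, a) + np.linalg.solve(B, b))
d = a - b
Zinv = np.exp(-0.5 * d @ np.linalg.solve(A + B, d)) / np.sqrt((2 * np.pi) ** q * np.linalg.det(A + B))

for _ in range(20):
    y = rng.normal(size=q)
    lhs = gauss_pdf(y, a, A) * gauss_pdf(y, b, B)
    rhs = Zinv * gauss_pdf(y, c, C)
    assert np.isclose(lhs, rhs, rtol=1e-8)
print("N(y;a,A) N(y;b,B) = Z^{-1} N(y;c,C) verified pointwise")
```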
We recall the following inequality of Hermann Weyl; see, e.g., Horn and Johnson (2012, Theorem 4.3.1).
Lemma C.11 (Weyl’s inequality).
Let $A,B\in\mathbb{R}^{q\times q}$ be Hermitian and let the respective eigenvalues of $A$, $B$, and $A+B$ be $\{\lambda_i(A)\}_{i\in[q]}$, $\{\lambda_i(B)\}_{i\in[q]}$, and $\{\lambda_i(A+B)\}_{i\in[q]}$, each arranged in algebraically nondecreasing order:
$$m(A)=\lambda_1(A)\le\lambda_2(A)\le\ldots\le\lambda_q(A)=M(A).$$
Then, for each $i\in[q]$,
$$\lambda_i(A+B)\le\lambda_{i+j}(A)+\lambda_{q-j}(B),\ j\in\{0\}\cup[q-i],\qquad\lambda_{i-j+1}(A)+\lambda_j(B)\le\lambda_i(A+B),\ j\in[i].$$
In particular, we have
$$M(A+B)\le M(A)+M(B),\qquad m(A+B)\ge m(A)+m(B).$$
(108)
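The two consequences in (108), which are what the proof of Lemma B.9 actually uses, are easy to verify numerically; the sketch below (illustrative only, assuming NumPy) checks them on random symmetric pairs:

```python
import numpy as np

rng = np.random.default_rng(7)
q = 6
for _ in range(500):
    A = rng.normal(size=(q, q)); A = (A + A.T) / 2   # random symmetric (Hermitian)
    B = rng.normal(size=(q, q)); B = (B + B.T) / 2
    # eigvalsh returns eigenvalues in ascending order: [0] = m(.), [-1] = M(.)
    lA, lB, lAB = (np.linalg.eigvalsh(M) for M in (A, B, A + B))
    assert lAB[-1] <= lA[-1] + lB[-1] + 1e-9   # M(A+B) <= M(A) + M(B)
    assert lAB[0] >= lA[0] + lB[0] - 1e-9      # m(A+B) >= m(A) + m(B)
print("Weyl consequences (108) hold on 500 random symmetric pairs")
```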