equipped inner product in $\mathcal{H}_K$ as $\langle\cdot,\cdot\rangle_K$ and the endowed norm as $\|\cdot\|_K$. For any $v\in\mathcal{Z}$, by writing $K_v := K(v,\cdot)\in\mathcal{H}_K$, the reproducing property asserts that
$$\langle f, K_v\rangle_K = f(v), \quad \forall f\in\mathcal{H}_K. \tag{11}$$
Equipped with $\mathcal{H}_K$, we adopt KRR by regressing the response $Y=(Y_1,\ldots,Y_n)^\top$ onto the predicted inputs $\widehat g(X_1),\ldots,\widehat g(X_n)$ by solving the following optimization problem:
$$\widehat f = \arg\min_{f\in\mathcal{H}_K}\Big\{\frac{1}{n}\sum_{i=1}^n \big(Y_i-(f\circ\widehat g)(X_i)\big)^2 + \lambda\|f\|_K^2\Big\}. \tag{12}$$
Thanks to the reproducing property in (11), the representer theorem (Kimeldorf and Wahba, 1971) states that the minimization in (12) admits a closed-form solution:
$$\widehat f = \frac{1}{\sqrt n}\sum_{i=1}^n \widehat\alpha_i K_{\widehat g(X_i)} \tag{13}$$
where the representer coefficients $\widehat\alpha = (\widehat\alpha_1,\ldots,\widehat\alpha_n)^\top$ are obtained by
$$\widehat\alpha = \arg\min_{\alpha\in\mathbb{R}^n}\Big\{\frac{1}{n}\big\|Y-\sqrt n\,\mathbf{K}_x\alpha\big\|_2^2 + \lambda\,\alpha^\top\mathbf{K}_x\alpha\Big\} = \frac{1}{\sqrt n}\big(\mathbf{K}_x+\lambda I_n\big)^{-1}Y. \tag{14}$$
Here $\mathbf{K}_x$ is the $n\times n$ kernel matrix with entries equal to $n^{-1}K(\widehat g(X_i),\widehat g(X_j))$ for $i,j\in[n]$. Finally, for a new data point $X\in\mathcal{X}$, we predict its corresponding response $Y$ by
$$(\widehat f\circ\widehat g)(X) = \frac{1}{\sqrt n}\sum_{i=1}^n \widehat\alpha_i K\big(\widehat g(X_i),\widehat g(X)\big). \tag{15}$$

Remark 1 (Independence between $\widehat g$ and $\mathcal{D}$). Our theory in Section 3 requires that $\widehat g$ be constructed independently of the training data $\mathcal{D}$. This independence simplifies our analysis, especially given the technical complexity of the current framework. Moreover, such independence can lead to better results in many statistical problems, including smaller prediction or estimation errors and weaker conditions, a phenomenon observed in various problems, such as estimating the optimal instrument in sparse high-dimensional instrumental variable models (Belloni et al., 2012), inferring a low-dimensional parameter in the presence of high-dimensional nuisance parameters (Chernozhukov et al., 2018), and performing discriminant analysis using a few retained principal components of high-dimensional features (Bing and Wegkamp, 2023). On the other hand, in many applications such as the examples mentioned in the Introduction, auxiliary data without labels or responses are readily available or at least easy to obtain (Bengio et al., 2013).
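Equations (13)-(15) translate directly into a few lines of linear algebra. Below is a minimal NumPy sketch, with a Gaussian kernel as an assumed choice of $K$ and `Z_hat` standing in for the predicted inputs $\widehat g(X_i)$; names and the bandwidth are illustrative, not from the paper.

```python
import numpy as np

def gaussian_kernel(U, V, bandwidth=1.0):
    """Gram matrix K(u_i, v_j) for a Gaussian kernel (an assumed choice of K)."""
    sq = ((U[:, None, :] - V[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * bandwidth ** 2))

def krr_fit(Z_hat, Y, lam):
    """Representer coefficients of (14): alpha = (K_x + lam I_n)^{-1} Y / sqrt(n),
    where K_x has entries K(g(X_i), g(X_j)) / n."""
    n = len(Y)
    Kx = gaussian_kernel(Z_hat, Z_hat) / n
    return np.linalg.solve(Kx + lam * np.eye(n), Y) / np.sqrt(n)

def krr_predict(Z_hat_train, alpha, Z_hat_new):
    """Prediction rule (15): (1/sqrt(n)) * sum_i alpha_i K(g(X_i), g(X))."""
    K_new = gaussian_kernel(Z_hat_new, Z_hat_train)  # shape (m, n)
    return K_new @ alpha / np.sqrt(len(alpha))
```

Solving the $n\times n$ linear system costs $O(n^3)$; for large $n$ one would typically switch to a Cholesky factorization or a random-feature approximation.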
When this is not the case, one can still use a procedure called $k$-fold cross-fitting (Chernozhukov et al., 2018). The 2-fold version of this method first splits the data into two equal parts, computes $\widehat g$ on one subset, and then performs the regression step using the other subset, before switching their roles. The final prediction is then the average of the two resulting predictors.

3 Theoretical guarantees of KRR with predicted feature inputs

In this section we present our theoretical guarantees of the KRR estimator $\widehat f$ in (12) that is based on a generic predictor $\widehat g:\mathcal{X}\to\mathcal{Z}$, constructed independently of $\mathcal{D}$. The quantity of our interest is the excess risk of the prediction function $\widehat f\circ\widehat g:\mathcal{X}\to\mathbb{R}$, given by
$$\mathcal{E}\big(\widehat f\circ\widehat g\big) := \mathbb{E}\big[Y-(\widehat f\circ\widehat g)(X)\big]^2 - \sigma^2. \tag{16}$$
Henceforth, the expectation $\mathbb{E}$ is taken with respect to a new pair $(Y,X)$ that is independent of, and generated from the same model as, the training data $\mathcal{D}$. Since $\mathcal{E}(\widehat f\circ\widehat g)$ is a random quantity depending on $\mathcal{D}$ and $\widehat g$, our goal is to establish its rate of convergence in probability. We start by decomposing the excess risk into three interpretable terms.

3.1 Decomposition of the excess risk

For any deterministic $f\in L^2(\rho)$, we prove in Appendix B.1 that
$$\mathcal{E}\big(\widehat f\circ\widehat g\big) = \mathbb{E}\big[(Y-(\widehat f\circ\widehat g)(X))^2 - (Y-(f\circ\widehat g)(X))^2\big] + \mathbb{E}\big[f^*(Z)-(f\circ\widehat g)(X)\big]^2. \tag{17}$$
To specify the
https://arxiv.org/abs/2505.20022v1
choice of $f$, we make the following blanket assumption. Recall that $\|\cdot\|_\rho$ is the induced norm of $L^2(\rho)$.

Assumption 1. There exists some $f_H\in\mathcal{H}_K$ such that $\|f_H-f^*\|_\rho^2 = \inf_{f\in\mathcal{H}_K}\|f-f^*\|_\rho^2$.

We do not assume $f^*\in\mathcal{H}_K$ but rather the weaker Assumption 1. If the former holds, we immediately have $f_H=f^*$. Since $\mathcal{H}_K$ is convex, existence of $f_H$ also ensures its uniqueness (Cucker and Smale, 2002). As discussed in Rudi and Rosasco (2017), existence of $f_H$ can be ensured if $\mathcal{H}_K$ in Assumption 1 is replaced by $\{f\in\mathcal{H}_K : \|f\|_K\le R\}$ for any finite radius $R>0$. Alternatively, we can choose $f_{H,\lambda} = \arg\min_{f\in\mathcal{H}_K}\{\|f-f^*\|_\rho^2+\lambda\|f\|_K^2\}$ which, for $\lambda>0$, always exists and is unique (Cucker and Smale, 2002). Notably, our results presented below can be applied to any function $f\in\mathcal{H}_K$ in place of $f_H$, including these mentioned examples.

By choosing $f=f_H$ in (17), the second term on the right-hand side (RHS) of (17) represents the irreducible error from two different sources. To see this, introduce
$$\Delta_{\widehat g} := \big\|K_Z - K_{\widehat g(X)}\big\|_K^2. \tag{18}$$
For any $\theta\ge1$, the elementary inequality $(a+b)^2\le(1+\theta)a^2+\frac{1+\theta}{\theta}b^2$ gives
$$\mathbb{E}\big[f^*(Z)-(f_H\circ\widehat g)(X)\big]^2 \le (1+\theta)\,\mathbb{E}\big[f_H(Z)-(f_H\circ\widehat g)(X)\big]^2 + \frac{1+\theta}{\theta}\|f_H-f^*\|_\rho^2$$
$$= (1+\theta)\,\mathbb{E}\big\langle f_H,\,K_Z-K_{\widehat g(X)}\big\rangle_K^2 + \frac{1+\theta}{\theta}\|f_H-f^*\|_\rho^2 \qquad \text{by (11)}$$
$$\le (1+\theta)\,\|f_H\|_K^2\,\mathbb{E}\Delta_{\widehat g} + \frac{1+\theta}{\theta}\|f_H-f^*\|_\rho^2. \tag{19}$$
The last step uses the Cauchy-Schwarz inequality. The quantity $\mathbb{E}\Delta_{\widehat g}$ is referred to as the kernel-related latent error, representing the error in using $\widehat g(X)$ to predict the latent factor $Z$, composited with the kernel function $K$. The term $\|f_H-f^*\|_\rho^2$ on the other hand represents the approximation error due to model misspecification when $f^*\notin\mathcal{H}_K$. We provide detailed discussion on both terms after stating our main result in Theorem 2 below. Going back to the risk decomposition in (17), the remaining term with $f_H$ in place of $f$, namely $\mathbb{E}[(Y-(\widehat f\circ\widehat g)(X))^2-(Y-(f_H\circ\widehat g)(X))^2]$, represents the prediction risk of $\widehat f\circ\widehat g$ relative to $f_H\circ\widehat g$. Analyzing this term is the main challenge in our analysis, and is detailed in the next section.
3.2 Non-asymptotic upper bounds of the excess risk

In this section we establish non-asymptotic upper bounds of the excess risk $\mathcal{E}(\widehat f\circ\widehat g)$ in (16). An important component of our analysis is the empirical process theory based on local Rademacher complexities developed in Bartlett et al. (2005), building on a line of previous works such as Bousquet et al. (2002); Koltchinskii and Panchenko (2000); Lugosi and Wegkamp (2004); Massart (2000). In our context, local Rademacher complexities translate into the complexity of the RKHS. We begin with a brief review of the basics of RKHS, followed by a discussion of its complexity measures and the assumptions used in our analysis.

In this paper, we make the following assumption on the kernel function. It is satisfied by many widely used kernel functions including the Gaussian and Laplacian kernels.

Assumption 2. The kernel function $K$ is continuous, symmetric and positive semi-definite.¹ Moreover, there exists some positive constant $\kappa<\infty$ such that $\sup_{z,z'\in\mathcal{Z}}K(z,z')\le\kappa^2$.

A key element for analyzing $\widehat f$ in (12) is the integral operator $L_K : L^2(\rho)\to L^2(\rho)$, defined as
$$L_K f := \int_{\mathcal{Z}} K_z\, f(z)\,\mathrm{d}\rho(z). \tag{20}$$
Our next condition assumes that the integral operator $L_K$ can be eigen-decomposed.

Assumption 3. There exist a sequence of
eigenvalues $\{\mu_j\}_{j=1}^\infty$ arranged in non-increasing order and corresponding eigenfunctions $\{\phi_j\}_{j=1}^\infty$ that form an orthonormal basis of $L^2(\rho)$, such that $L_K = \sum_{j=1}^\infty \mu_j\langle\phi_j,\cdot\rangle_\rho\,\phi_j$.

Assumption 3 is common in the literature on kernel methods. It is guaranteed, for instance, by the celebrated Mercer's theorem (Mercer, 1909) when the domain of the kernel function $K$ is compact, together with Assumption 2, although it can also hold in more general settings (see, e.g., Wainwright (2019, Theorem 12.20)). Under Assumption 3, the kernel function $K$ admits
$$K(z,z') = \sum_{j=1}^\infty \mu_j\,\phi_j(z)\phi_j(z'), \quad \forall z,z'\in\mathcal{Z}. \tag{21}$$

¹We say $K$ is positive semi-definite if for all finite sets $\{z_1,\ldots,z_m\}\subset\mathcal{Z}$ the $m\times m$ matrix whose $(i,j)$ entry is $K(z_i,z_j)$ is positive semi-definite.

In the case $\mu_j>0$ for all $j\ge1$, the RKHS can be explicitly written as
$$\mathcal{H}_K = \Big\{f = \sum_{j=1}^\infty \gamma_j\phi_j \;:\; \sum_{j=1}^\infty \frac{\gamma_j^2}{\mu_j} < \infty\Big\}.$$
For any $f_1 = \sum_{j=1}^\infty \alpha_j\phi_j$ with $\alpha_j = \langle f_1,\phi_j\rangle_\rho$ and $f_2 = \sum_{j=1}^\infty \beta_j\phi_j$ with $\beta_j = \langle f_2,\phi_j\rangle_\rho$, the equipped inner product of $\mathcal{H}_K$ is
$$\langle f_1, f_2\rangle_K = \sum_{j=1}^\infty \frac{\alpha_j\beta_j}{\mu_j}.$$
If for some $k\in\mathbb{N}$, $\mu_j>0$ for $j\le k$ and $\mu_j=0$ for all $j>k$, the RKHS reduces to a $k$-dimensional function space spanned by $\{\phi_1,\ldots,\phi_k\}$. In the rest of this paper, we are only interested in non-degenerate kernels, that is, $\mu_1>0$.

In the classical literature of learning theory, risk bounds are determined by the richness of the function class. Classical measures include the Vapnik-Chervonenkis dimension (Vapnik and Chervonenkis, 2015), metric entropy (Pollard, 2012) and (local) Rademacher complexity (Bartlett et al., 2005; Koltchinskii, 2001; Koltchinskii and Panchenko, 2000; Mendelson, 2002); see also the references therein. In our case the function class is $\mathcal{H}_K$, and its complexity is commonly measured by the following kernel complexity function based on the eigenvalues of the integral operator $L_K$:
$$\mathcal{R}(\delta) = \Big(\frac{1}{n}\sum_{j=1}^\infty \min\{\delta,\mu_j\}\Big)^{1/2}, \quad \forall\delta\ge0. \tag{22}$$
As detailed in Section 3.4, the (local) Rademacher complexity turns out to be closely related with $\mathcal{R}(\delta)$ (Mendelson, 2002).
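The complexity function (22) is simple to evaluate numerically once the eigenvalue sequence is truncated. The sketch below also locates the positive solution of $\mathcal{R}(\delta)=\delta$ (the critical radius defined in the next paragraph) by bisection, and checks the statistical-dimension heuristic $\delta_n\approx d(\delta_n)/n$ discussed in Section 3.2.2; the eigenvalue choice $\mu_j=j^{-2}$ is an assumed example, not from the paper.

```python
import numpy as np

def kernel_complexity(delta, mu, n):
    """R(delta) = sqrt((1/n) * sum_j min(delta, mu_j)), cf. (22),
    with the eigenvalue sequence truncated to the entries of `mu`."""
    return np.sqrt(np.sum(np.minimum(delta, mu)) / n)

def critical_radius(mu, n, tol=1e-12):
    """Positive solution of R(delta) = delta. Since R is sub-root,
    R(delta) - delta changes sign exactly once on (0, inf)."""
    lo, hi = 0.0, kernel_complexity(np.inf, mu, n) + 1.0  # R(hi) < hi here
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if kernel_complexity(mid, mu, n) > mid else (lo, mid)
    return 0.5 * (lo + hi)

# Polynomially decaying eigenvalues mu_j = j^(-2) (alpha = 1), n = 1000:
mu = np.arange(1, 2001, dtype=float) ** -2.0
n = 1000
delta_n = critical_radius(mu, n)
d_stat = int(np.sum(mu >= delta_n))   # statistical dimension d(delta_n)
```

For this regular kernel, `delta_n * n` is of the same order as `d_stat`, illustrating the relation $\delta_n\asymp d(\delta_n)/n$.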
An important quantity in our excess risk bounds is the critical radius $\delta_n$, defined as the fixed point of $\mathcal{R}(\delta)$, i.e., the positive solution to
$$\mathcal{R}(\delta) = \delta. \tag{23}$$
It can be verified that $\mathcal{R}(\delta)$ is a sub-root function of $\delta$, so that both existence and uniqueness of its fixed point $\delta_n$ are guaranteed (Bartlett et al., 2005, Lemma 3.2). Here we recall that

Definition 1. A function $\psi:[0,\infty)\to[0,\infty)$ is called sub-root if it is non-decreasing, and if $\delta\mapsto\psi(\delta)/\sqrt\delta$ is non-increasing for $\delta>0$.

More basic properties of sub-root functions are reviewed in Appendix C.1. Finally, we assume the regression error in model (4) to have sub-Gaussian tails. This simplifies our analysis and can be relaxed to sub-exponential tails or bounded finite moments.

Assumption 4. There exists some constant $\gamma_\epsilon<\infty$ such that $\mathbb{E}[\exp(t\epsilon)]\le\exp(t^2\gamma_\epsilon^2)$ for all $t\in\mathbb{R}$.

The following theorem is our main result, which provides non-asymptotic upper bounds of the excess risk of $\widehat f\circ\widehat g$, given in (16).

Theorem 2. Grant model (4) and Assumptions 1–4. For any $\eta\in(0,1)$, by choosing $\lambda$ in (12) such that
$$\lambda = C\Big(\delta_n\log(1/\eta) + \mathbb{E}\Delta_{\widehat g} + \|f_H-f^*\|_\rho^2 + \frac{\log(1/\eta)}{n}\Big), \tag{24}$$
with probability at least $1-\eta$, one has
$$\mathcal{E}(\widehat f\circ\widehat g) \le C'\Big(\delta_n\log(1/\eta) + \mathbb{E}\Delta_{\widehat g} + \|f_H-f^*\|_\rho^2 + \frac{\log(1/\eta)}{n}\Big). \tag{25}$$
Here both positive constants $C$ and $C'$ depend only on
$\kappa$, $\|f^*\|_\infty$, $\|f_H\|_K$ and $\gamma_\epsilon^2$. The explicit dependence of $C$ and $C'$ on $\kappa$, $\|f^*\|_\infty$, $\|f_H\|_K$ and $\gamma_\epsilon^2$ can be found in the proof.

Since the proof of Theorem 2 is rather long, we defer its full statement to Appendix B.2. In Section 3.4 we offer a proof sketch and highlight the main technical difficulties. As shown in Theorem 2, the excess risk is essentially determined by the critical radius $\delta_n$, the kernel-related latent error $\mathbb{E}\Delta_{\widehat g}$ and the approximation error $\|f_H-f^*\|_\rho^2$. We discuss each term in detail shortly. It is worth emphasizing that Theorem 2 does not impose any assumptions on the magnitudes of either $\mathbb{E}\Delta_{\widehat g}$ or $\|f_H-f^*\|_\rho^2$. Remarkably, regardless of its magnitude, $\mathbb{E}\Delta_{\widehat g}$ enters the risk bound only additively.

Remark 2 (Comparison with the existing theory of KRR in the classical setting). Although Theorem 2 is derived for the case where the feature inputs are predicted, it is insightful to compare it with existing results for KRR in the classical setting. When $Z_1,\ldots,Z_n$ are directly observed and used in (1), setting $\mathbb{E}\Delta_{\widehat g}=0$ with $\widehat g(z)=z$ for $z\in\mathcal{Z}$ simplifies the risk bound (25) to
$$\mathcal{E}(\widehat f\circ\widehat g) \lesssim \delta_n\log(1/\eta) + \frac{\log(1/\eta)}{n} + \|f_H-f^*\|_\rho^2 \tag{26}$$
which, to the best of our knowledge, is also new in the literature on kernel learning methods. As mentioned in Section 1.1, most existing analyses of KRR cannot handle arbitrary model misspecification without requiring regularity and decay properties of the eigenvalues $\mu_j$. When the model is correctly specified, our risk bound in (26) matches that in (7), derived by Bartlett et al. (2005). However, their analysis requires both the response variable and the RKHS to be bounded, which we do not assume. Compared to the integral operator approach, our analysis does not impose any restrictive conditions on the eigenvalues $\mu_j$, which are often required in the existing literature (Caponnetto and De Vito, 2007; Fischer and Steinwart, 2020; Lin and Zhou, 2018; Steinwart et al., 2009).
For instance, Caponnetto and De Vito (2007); Rudi and Rosasco (2017) require $\mu_1\ge\lambda$ in their analysis, a condition that is somewhat counterintuitive. Such refinement of our result stems from a distinct proof strategy, contributing to a new analytical framework for kernel-based learning. We provide further discussion of the technical details in Section 3.4.

Remark 3 (Discussion on the approximation error). If we choose $K$ among the universal kernels, such as the Gaussian kernel or Laplacian kernel, it is known that the induced RKHS is dense in the space of bounded continuous functions $\mathcal{C}(\mathcal{Z})$ under the infinity norm (Micchelli et al., 2006). If we further assume $f^*\in\mathcal{C}(\mathcal{Z})$, then $\|f_H-f^*\|_\rho=0$ for universal kernels. Indeed, for any $\vartheta>0$, there must exist some $f_\vartheta\in\mathcal{H}_K$ such that $\|f_\vartheta-f^*\|_\infty\le\vartheta$. This implies that
$$\|f_H-f^*\|_\rho \le \|f_\vartheta-f^*\|_\rho \le \|f_\vartheta-f^*\|_\infty \le \vartheta.$$
The leftmost inequality uses the definition of $f_H$ in Assumption 1.

3.2.1 Discussion on the kernel-related latent error

Recall from (18) that the term $\mathbb{E}\Delta_{\widehat g}$ reflects the error of $\widehat g(X)$ in predicting $Z$ through the kernel function $K$. To obtain a more transparent expression, we need the following Lipschitz property of the map $\mathcal{Z}\to\mathcal{H}_K$, $z\mapsto K_z$ for all $z\in\mathcal{Z}$.

Assumption 5. There exists some positive constant $C_K$ such that
$$\|K_z - K_{z'}\|_K \le C_K\|z-z'\|_2, \quad \forall z,z'\in\mathcal{Z}.$$
Assumption 5 can be verified for various kernels. One sufficient condition is the following uniform boundedness of the mixed partial derivatives of the kernel function (Blanchard et al., 2011): there exists some constant $C>0$ such that
$$\sup_{z,z'\in\mathcal{Z}}\Big\|\frac{\partial^2 K(z,z')}{\partial z\,\partial z'}\Big\|_{\mathrm{op}} \le C. \tag{27}$$
Condition (27) holds for several commonly used kernels, such as the linear kernel and the Gaussian kernel (Blanchard and Zadorozhnyi, 2019; Sun et al., 2022). Under Assumption 5, the kernel-related latent error can be bounded by
$$\mathbb{E}\Delta_{\widehat g} \le C_K^2\,\mathbb{E}\|\widehat g(X)-Z\|_2^2, \tag{28}$$
entailing consistent prediction of $Z$ itself by using $\widehat g(X)$. We remark that this can be relaxed when the kernel function $K$ is invariant to orthogonal transformations, that is, $K(z,z')=K(Qz,Qz')$ for any $z,z'\in\mathcal{Z}$ and any $Q\in\mathbb{O}^{r\times r}$. When this is the case, we only need consistent prediction of $Z$ up to an orthogonal transformation, in the precise sense that
$$\mathbb{E}\Delta_{\widehat g} \le C_K^2\,\inf_{Q\in\mathbb{O}^{r\times r}}\mathbb{E}\|\widehat g(X)-QZ\|_2^2. \tag{29}$$
This claim is proved in Appendix B.4. We give below two common classes of kernel functions that are invariant to orthogonal transformations. Let $h:\mathbb{R}\to\mathbb{R}$ be some kernel-specific function.

(1) Inner product kernel: $K(z,z') = h(\langle z,z'\rangle)$, e.g. linear kernels, polynomial kernels;
(2) Radial basis kernel: $K(z,z') = h(\|z-z'\|_2)$, e.g. exponential kernels, Gaussian kernels.

3.2.2 Discussion on the critical radius

The critical radius $\delta_n$ defined in (23) is an important factor in the risk bound (25). Finding its exact expression is difficult in general. To quantify its rate of convergence, by the sub-root property of $\mathcal{R}(\delta)$ and Assumption 2, we prove in Appendix B.5 the following slow rate for $\delta_n$:
$$\delta_n \le \kappa/\sqrt n. \tag{30}$$
Together with Theorem 2, we have the following slow rate of convergence of the excess risk.

Corollary 1 (Slow rate of the excess risk). Grant model (4) and Assumptions 1–5.
For any $\eta\in(0,1)$ with $\log(1/\eta)\le\sqrt n$, by choosing $\lambda$ as in (24) with $\delta_n$ replaced by $\kappa/\sqrt n$, with probability at least $1-\eta$, one has
$$\mathcal{E}(\widehat f\circ\widehat g) \le C'\Big(\frac{\log(1/\eta)}{\sqrt n} + \mathbb{E}\|\widehat g(X)-Z\|_2^2 + \|f_H-f^*\|_\rho^2\Big). \tag{31}$$
Here the positive constant $C'$ depends only on $\kappa$, $\|f^*\|_\infty$, $\|f_H\|_K$, $\gamma_\epsilon^2$ and $C_K$.

The first, $n^{-1/2}$-type rate is well-known in the literature on kernel methods when $Z_1,\ldots,Z_n$ are observed and no assumptions on the smoothness of $f^*\in\mathcal{H}_K$ or decay properties of the eigenvalues $\mu_j$ are made. See, for instance, Corollary 5 in Smale and Zhou (2007) and Caponnetto and De Vito (2007). Our new result additionally accounts for both the model misspecification and the error due to predicting feature inputs.

In many situations, however, fast rates for $\delta_n$ are obtainable, as we discuss below. For ease of presentation, assume $f_H=f^*$ for now. To explain the role of $\delta_n$, we follow Yang et al. (2017) by introducing the concept of statistical dimension, which is defined as the last index for which the kernel eigenvalues $\{\mu_j\}_{j\ge1}$ exceed $\delta_n$, that is,
$$d(\delta_n) = \max\{j\ge0 : \mu_j\ge\delta_n\} \quad \text{with } \mu_0 := \infty.$$
In view of the expression of $\mathcal{R}(\delta)$ in (22), this implies
$$\delta_n = \mathcal{R}(\delta_n) = \Big(\frac{d(\delta_n)}{n}\,\delta_n + \frac{1}{n}\sum_{j>d(\delta_n)}\mu_j\Big)^{1/2}.$$
If $\sum_{j>d(\delta_n)}\mu_j \lesssim d(\delta_n)\,\delta_n$ holds, we can conclude that $\delta_n\asymp d(\delta_n)/n$, hence the name statistical dimension for $d(\delta_n)$. Any kernel whose eigenvalues satisfy this condition is said to be regular (Yang et al., 2017). More importantly, for any regular kernel, Yang
et al. (2017, Theorem 1) show that in the fixed design setting ($Z_i$'s are treated as deterministic), an empirical version of $\delta_n$ serves as a fundamental lower bound on the in-sample prediction risk of any estimator:
$$\inf_{\widetilde f}\ \sup_{\|f^*\|_K\le1}\ \mathbb{E}_n\big[\widetilde f(Z)-f^*(Z)\big]^2 \gtrsim \widehat\delta_n,$$
where the infimum is taken over all estimators $\widetilde f$ based on $\{Y_i,Z_i\}_{i=1}^n$, and $\widehat\delta_n$ is the positive solution to $\widehat{\mathcal{R}}(\delta)=\delta$, with $\widehat{\mathcal{R}}(\delta)$ being the empirical kernel complexity function given in (44). In the random design setting, we prove in Theorem 3 that $\widehat\delta_n$ in the lower bound is indeed replaced by $\delta_n$. In the next section we derive explicit expressions of $\delta_n$ for some specific kernels.

3.3 Explicit excess risk bounds for specific kernel classes

To establish explicit rates for $\delta_n$, we consider three common classes of kernels: linear kernels, kernels with polynomially decaying eigenvalues, and kernels with exponentially decaying eigenvalues. We also present simplified excess risk bounds of Theorem 2 for each class. For ease of presentation, we assume $f_H=f^*$ and $\kappa=1$ in the rest of this section.

3.3.1 Linear kernels

Consider $\mathcal{Z}\subseteq\mathbb{R}^r$ for some $r\in\mathbb{N}$. We start with the linear kernel
$$K(z,z') = z^\top z', \quad \forall z,z'\in\mathcal{Z}.$$
Denote by $\Sigma_Z$ the second moment matrix of $Z$ and write its eigen-decomposition as $\Sigma_Z = \sum_{j=1}^r \sigma_j v_j v_j^\top$ where the eigenvalues are arranged in non-increasing order. For simplicity, we assume $\mathrm{rank}(\Sigma_Z)=r$, whence the induced $\mathcal{H}_K$ also has rank $r$. For each $j\in[r]$, define $\phi_j:\mathcal{Z}\to\mathbb{R}$ as
$$\phi_j(z) := \sigma_j^{-1/2}\,v_j^\top z, \quad \forall z\in\mathcal{Z}.$$
It is easy to verify that
$$K(z,z') = z^\top z' = z^\top\Big(\sum_{j=1}^r v_j v_j^\top\Big)z' = \sum_{j=1}^r \sigma_j\,\phi_j(z)\phi_j(z'), \quad \forall z,z'\in\mathcal{Z}. \tag{32}$$
The following corollary states the simplified excess risk bound of Theorem 2 for the linear kernel. Its proof is deferred to Appendix B.6. The notation $\lesssim_\eta$ means that the inequality holds up to a factor of $\log(1/\eta)$.

Corollary 2. Consider the linear kernel. Grant model (4) with $f^*\in\mathcal{H}_K$ and Assumptions 1–5. For any $\eta\in(0,1)$, with probability at least $1-\eta$, one has
$$\mathcal{E}(\widehat f\circ\widehat g) \lesssim_\eta \frac{r}{n} + \mathbb{E}\|\widehat g(X)-Z\|_2^2.$$
(33)

The term $r/n$ is as expected for the prediction risk under linear regression models using $\{(Y_i,Z_i)\}_{i=1}^n$. Note that it depends only on the dimension of the latent factor $Z$ rather than on the ambient dimension of $X$. The second term, $\mathbb{E}\|\widehat g(X)-Z\|_2^2$, represents the error in using $\widehat g(X)$ to predict $Z$. When $\widehat g$ is chosen as PCA (see Section 4.2 for details), the predictor $\widehat f\circ\widehat g$ coincides with Principal Component Regression (PCR). Our risk bound aligns with existing work on PCR (Bing et al., 2021; Stock and Watson, 2002), although it holds for a more general predictor $\widehat g$.

3.3.2 Kernels with polynomially decaying eigenvalues

We consider the kernel class satisfying the $\alpha$-polynomial decay condition on the eigenvalues: there exist some $\alpha>1/2$ and some absolute constant $C>0$ such that $\mu_j\le Cj^{-2\alpha}$ for all $j\ge1$. Several widely used kernels, including the Sobolev kernel and the Laplacian kernel, belong to this class; see Chapters 12 and 13 in Wainwright (2019) for more examples. The quantity $\alpha$ controls the complexity of $\mathcal{H}_K$, with a smaller $\alpha$ indicating a more complex $\mathcal{H}_K$.

The following corollary states simplified risk bounds of Theorem 2 for such kernels with polynomially decaying eigenvalues. Its proof can be found in Appendix B.7.

Corollary 3.
Consider the polynomial decay kernels. Grant model (4) with $f^*\in\mathcal{H}_K$ and Assumptions 1–5. For any $\eta\in(0,1)$, the following holds with probability at least $1-\eta$:
$$\mathcal{E}(\widehat f\circ\widehat g) \lesssim_\eta n^{-\frac{2\alpha}{2\alpha+1}} + \mathbb{E}\|\widehat g(X)-Z\|_2^2. \tag{34}$$
The first term in the risk bound (34) corresponds to the minimax optimal prediction risk established in Caponnetto and De Vito (2007) when $n$ i.i.d. pairs $(Y_i,Z_i)$ are observed. As $\alpha$ decreases and the corresponding $\mathcal{H}_K$ becomes more complex, the first term decays more slowly, while the term $\mathbb{E}\|\widehat g(X)-Z\|_2^2$ becomes easier to absorb into the first term.

An important example of an RKHS induced by polynomial decay kernels is the well-known Sobolev RKHS. Specifically, consider the case where $\mathcal{Z}\subset\mathbb{R}^r$ is open and bounded, and $\rho$ has a density with respect to the uniform distribution on $\mathcal{Z}$ (Steinwart et al., 2009). For any integer $s>r/2$, let $W^s(\mathcal{Z})$ denote the Sobolev space of smoothness $s$. When $s=r=1$, the Sobolev space $W^1(\mathcal{Z})$ is an RKHS induced by $K(z,z')=\min\{z,z'\}$. For more general settings, we refer the reader to Novak et al. (2018) for the explicit form of the kernel function that induces $W^s(\mathcal{Z})$. As described by Edmunds and Triebel (1996, Equation 4 on page 119), the eigenvalues $\{\mu_j\}_{j\ge1}$ corresponding to $W^s(\mathcal{Z})$ satisfy the polynomial decay condition with $\alpha=s/r>1/2$. As a result, the excess risk bound in Corollary 3 becomes
$$\mathcal{E}(\widehat f\circ\widehat g) \lesssim_\eta n^{-\frac{2s}{2s+r}} + \mathbb{E}\|\widehat g(X)-Z\|_2^2.$$
The first term follows the classical nonparametric rate but depends only on $r$, the dimension of the latent factor $Z$, rather than the ambient dimension of $X$. It tends to zero as long as $r=o(\log n)$ for any fixed smoothness $s$. By contrast, when applying KRR directly to predict the response from $X_1,\ldots,X_n\in\mathcal{X}\subseteq\mathbb{R}^p$, the excess risk bound would be $n^{-\frac{2s}{2s+p}}$, which does not vanish once $\log n = O(p)$. When $r\ll p$, reducing to a lower dimension before applying KRR can lead to a significant advantage for prediction. See a concrete example in Section 4.
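The dimension gap between the two Sobolev rates is easy to see numerically; the values of $n$, $s$, $r$ and $p$ below are illustrative assumptions chosen for the example, not taken from the paper.

```python
# Sobolev-type rates: KRR on the r latent factors attains n^(-2s/(2s+r)),
# while KRR on the p ambient features only attains n^(-2s/(2s+p)).
# Illustrative (assumed) choices: s = 2, r = 3, p = 500.
n, s, r, p = 1_000_000, 2, 3, 500
rate_latent = n ** (-2 * s / (2 * s + r))    # n^(-4/7), vanishes quickly
rate_ambient = n ** (-2 * s / (2 * s + p))   # n^(-4/504), barely decays
```

Even with a million samples, the ambient-dimension rate remains close to one, while the latent-dimension rate is already below $10^{-3}$.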
3.3.3 Kernels with exponentially decaying eigenvalues

We consider in this section RKHSs induced by kernels whose eigenvalues exhibit exponential decay, that is, there exists some $\gamma>0$ such that $\mu_j\le\exp(-\gamma j)$ for all $j\ge1$. It is well known that the eigenvalues of the Gaussian kernel satisfy this exponential decay condition. The following corollary states the simplified risk bounds of Theorem 2 for such kernels with exponentially decaying eigenvalues. Its proof can be found in Appendix B.8.

Corollary 4. Consider the exponential decay kernels. Grant model (4) with $f^*\in\mathcal{H}_K$ and Assumptions 1–5. For any $\eta\in(0,1)$, the following holds with probability at least $1-\eta$:
$$\mathcal{E}(\widehat f\circ\widehat g) \lesssim_\eta \frac{\log n}{n} + \mathbb{E}\|\widehat g(X)-Z\|_2^2. \tag{35}$$
The first term in the bound (35) coincides with the minimax optimal rate in the classical setting for an exponentially decaying kernel (see, for instance, Example 2 in Yang et al. (2017) under the fixed design setting). It is interesting to note that absorbing the error from predicting feature inputs into the first term requires higher accuracy of $\widehat g(X)$ in predicting $Z$ for exponentially decaying kernels than for polynomially decaying kernels.

3.4 Proof techniques of Theorem 2

In this section,
we provide a proof sketch of Theorem 2 and highlight its main challenges. The proof consists of three main components, which are discussed separately in the following subsections.

3.4.1 Bounding the empirical process with a reduction argument

Recall that the excess loss function of $f\circ\widehat g$ relative to $f_H\circ\widehat g$ is
$$\ell_{f\circ\widehat g}(y,x) := \big(y-(f\circ\widehat g)(x)\big)^2 - \big(y-(f_H\circ\widehat g)(x)\big)^2, \quad \forall x\in\mathcal{X},\ y\in\mathbb{R}.$$
The first component of our proof employs a delicate reduction argument, showing that proving Theorem 2 can be reduced to bounding from above the empirical process
$$\sup_{f\in\mathcal{F}_b}\Big\{\mathbb{E}[\ell_{f\circ\widehat g}(Y,X)] - 2\,\mathbb{E}_n[\ell_{f\circ\widehat g}(Y,X)]\Big\} \tag{36}$$
where $\mathbb{E}_n$ denotes the expectation with respect to the empirical measure, and $\mathcal{F}_b$ is the local RKHS ball around $f_H$, given by
$$\mathcal{F}_b := \big\{f\in\bar{\mathcal{F}}_b : \mathbb{E}[\ell_{f\circ\widehat g}(Y,X)]\le\lambda\big\}, \qquad \bar{\mathcal{F}}_b := \big\{f\in\mathcal{H}_K : \|f-f_H\|_K\le 3\|f_H\|_K\big\} \tag{37}$$
with $\lambda$ given in (24). Since $\mathcal{F}_b\subseteq\bar{\mathcal{F}}_b$ and $\bar{\mathcal{F}}_b$ is a bounded function class, as can be seen from
$$\|f\|_\infty \le 4\kappa\|f_H\|_K, \quad \forall f\in\bar{\mathcal{F}}_b,$$
by the reproducing property, this reduction argument allows us later to apply existing empirical process theory which is only applicable to bounded function classes. Moreover, the reduction argument ensures that our bound in Theorem 2 depends only on $\|f_H\|_K^2$, a finite quantity that can be much smaller than $\sup_{f\in\mathcal{H}_K}\|f\|_K^2$, which could be infinite.

The second part of this component is to bound from above the empirical process (36). Since the response variable $Y$ is not assumed to be bounded, existing empirical process theory for bounded functions cannot be directly applied. Instead we rewrite the excess loss function as
$$\mathbb{E}[\ell_{f\circ\widehat g}(Y,X)] = \mathbb{E}\Big[\big(f^*(Z)-(f\circ\widehat g)(X)\big)^2 - \big(f^*(Z)-(f_H\circ\widehat g)(X)\big)^2\Big] =: \mathbb{E}\,h_{f\circ\widehat g}(Z,X)$$
for the function $h_{f\circ\widehat g}:\mathcal{Z}\times\mathcal{X}\to\mathbb{R}$. Also note that its empirical version satisfies
$$\mathbb{E}_n[\ell_{f\circ\widehat g}(Y,X)] = \mathbb{E}_n\,h_{f\circ\widehat g}(Z,X) + 2\,\mathbb{E}_n\big[\epsilon(f_H\circ\widehat g)(X) - \epsilon(f\circ\widehat g)(X)\big].$$
Our proof in this step relies on the local Rademacher complexity, defined for any $\delta\ge0$ as
$$\psi_x(\delta) := R_n\Big\{f\circ\widehat g - f_H\circ\widehat g : f\in\bar{\mathcal{F}}_b,\ \mathbb{E}\big[(f\circ\widehat g)(X)-(f_H\circ\widehat g)(X)\big]^2\le\delta\Big\}. \tag{38}$$
The detailed definition of $R_n$ can be found in Appendix B.2.1. Let $\delta_x$ be the fixed point of $\psi_x$, such that $\psi_x(\delta_x)=\delta_x$.
A key step is to establish the following result, which holds uniformly over $f\in\bar{\mathcal{F}}_b$:
$$\mathbb{E}\,h_{f\circ\widehat g}(Z,X) \lesssim \mathbb{E}_n\,h_{f\circ\widehat g}(Z,X) + \delta_x + \mathbb{E}\Delta_{\widehat g} + \|f^*-f_H\|_\rho^2. \tag{39}$$
This is proved in Lemma 3, where we apply the empirical process result based on the local Rademacher complexity in Bartlett et al. (2005). A notable difficulty in our case is to take into account both the approximation error and the error of predicting feature inputs.

The next step towards bounding the empirical process (36) is to bound from above the following cross-term uniformly over $f\in\bar{\mathcal{F}}_b$:
$$\mathbb{E}_n\big[\epsilon(f_H\circ\widehat g)(X) - \epsilon(f\circ\widehat g)(X)\big].$$
This also turns out to be quite challenging due to the presence of $\widehat g(X_1),\ldots,\widehat g(X_n)$, and we sketch its proof separately in Section 3.4.2. Note that this difficulty could have been avoided if the response $Y$ were assumed to be bounded. Combining (39) with the bound of the cross-term stated below yields upper bounds for the empirical process (36).

3.4.2 Uniformly bounding the cross-term

The proof of bounding the cross-term over the function space $\bar{\mathcal{F}}_b$ consists of three steps. The first step reduces bounding the cross-term to bounding $\mathbb{E}_n[(f\circ\widehat g)(X)-(f_H\circ\widehat g)(X)]^2$ through the empirical kernel complexity function $\widehat{\mathcal{R}}_x(\delta)$, defined as
$$\widehat{\mathcal{R}}_x(\delta) = \Big(\frac{1}{n}\sum_{j=1}^n \min\{\delta,\widehat\mu_{x,j}\}\Big)^{1/2}, \quad \forall\delta\ge0. \tag{40}$$
Here we use $\widehat\mu_{x,1}\ge\cdots\ge\widehat\mu_{x,n}$ to
denote the eigenvalues of the kernel matrix $\mathbf{K}_x$ with entries $n^{-1}K(\widehat g(X_i),\widehat g(X_j))$ for $i,j\in[n]$. Since $\widehat{\mathcal{R}}_x(\delta)$ depends only on the predicted inputs, its introduction is pivotal not only for uniformly bounding the cross-term but also, as detailed in the next section, for determining the rate of $\delta_x$, the fixed point of the local Rademacher complexity in (38).

We show in Lemma 5 of Appendix B.2.3 that for any $q>0$, with probability $1-\eta$, the following holds uniformly over the function class $\{f\in\bar{\mathcal{F}}_b : \mathbb{E}_n[(f\circ\widehat g)(X)-(f_H\circ\widehat g)(X)]^2\le q\}$:
$$\mathbb{E}_n\big[\epsilon(f\circ\widehat g)(X) - \epsilon(f_H\circ\widehat g)(X)\big] \lesssim \sqrt{\log(1/\eta)}\,\widehat{\mathcal{R}}_x(q).$$
The next step connects $\mathbb{E}_n[(f\circ\widehat g)(X)-(f_H\circ\widehat g)(X)]^2$ to its population-level counterpart. By leveraging the empirical process result in Bartlett et al. (2005), we show in Lemma 6 that with probability $1-\eta$ and uniformly over $f\in\bar{\mathcal{F}}_b$,
$$\mathbb{E}_n\big[(f\circ\widehat g)(X)-(f_H\circ\widehat g)(X)\big]^2 \lesssim \mathbb{E}\big[(f\circ\widehat g)(X)-(f_H\circ\widehat g)(X)\big]^2 + \delta_x + \frac{\log(1/\eta)}{n}.$$
Finally, as $\mathcal{F}_b$ is defined via the excess loss function $\mathbb{E}[\ell_{f\circ\widehat g}(Y,X)]$, we further prove that
$$\mathbb{E}\big[(f\circ\widehat g)(X)-(f_H\circ\widehat g)(X)\big]^2 \lesssim \mathbb{E}[\ell_{f\circ\widehat g}(Y,X)] + \mathbb{E}\Delta_{\widehat g} + \|f_H-f^*\|_\rho^2,$$
so that combining the three displays above yields the final uniform bound for the cross-term, stated in the following lemma, with the detailed proof provided in Appendix B.2.3.

Lemma 1. Grant model (4) with Assumptions 1–4. Fix any $\eta\in(0,1)$. With probability at least $1-\eta$, the following holds uniformly over $f\in\mathcal{F}_b$:
$$\mathbb{E}_n\big[\epsilon(f_H\circ\widehat g)(X) - \epsilon(f\circ\widehat g)(X)\big] \lesssim \sqrt{\log(1/\eta)}\,\widehat{\mathcal{R}}_x(\lambda+\delta_x).$$
It thus remains to derive upper bounds for $\delta_x$ and $\widehat{\mathcal{R}}_x(\cdot)$, which are outlined in the next section.

3.4.3 Relating the local Rademacher complexity to the kernel complexity

The two previous components necessitate studying the fixed point $\delta_x$ in (39), which in turn requires relating the local Rademacher complexity $\psi_x(\delta)$ to the kernel complexity function $\mathcal{R}(\delta)$ in (22). This difficulty arises exclusively from using predicted feature inputs and is highly non-trivial without imposing any assumptions on the error of predicting features. To appreciate the difficulty, we start by revisiting the existing analysis in the classical setting.
When $\{Z_i\}_{i=1}^n$ are observed and used as the input vectors in the empirical risk minimization (12), the local Rademacher complexity becomes
$$\psi_z(\delta) := R_n\Big\{f - f_H : f\in\bar{\mathcal{F}}_b,\ \mathbb{E}\big[f(Z)-f_H(Z)\big]^2\le\delta\Big\}, \quad \forall\delta\ge0.$$
Mendelson (2002, Lemma 42) shows that $\psi_z(\delta)$ is sandwiched by the kernel complexity $\mathcal{R}(\delta)$ in (22), up to constants that depend only on the radius of $\bar{\mathcal{F}}_b$. Armed with this result, it is easy to prove that the fixed point $\delta_z$ of $\psi_z(\delta)$ satisfies $\delta_z\lesssim\delta_n$. One key argument in Mendelson (2002) is the fact that $\mathbb{E}[\phi_j^2(Z_i)]=1$ for all $i\in[n]$ and $j\ge1$, as $\{\phi_j\}_{j=1}^\infty$, defined in Assumption 3, form an orthonormal basis of the $L^2$ space induced by the probability measure $\rho$ of $Z$.

Returning to our case, the intrinsic difficulty arises from the mismatch between the predicted features $\widehat g(X)$, which are used in the regression, and the latent factor $Z$, whose probability measure $\rho$ induces the integral operator $L_K$ and hence the kernel complexity function $\mathcal{R}(\delta)$. Indeed, if we directly characterize the complexity of $\mathcal{H}_K$ via the integral operator $L_{x,K}$ induced by the probability measure $\rho_x$ of $\widehat g(X)$, we need to assume that $L_{x,K}$ admits the Mercer decomposition as in Assumption 3 with eigenvalues $\{\mu_{x,j}\}_{j\ge1}$ (note that this is already difficult to justify due to its dependence on $\rho_x$ and $\widehat g$). Then, by repeating the argument in Mendelson (2002), the local Rademacher complexity $\psi_x(\delta)$ in (38) is bounded in order by
$$\mathcal{R}_x(\delta) = \Big(\frac{1}{n}\sum_{j=1}^\infty \min\{\delta,\mu_{x,j}\}\Big)^{1/2}.$$
Establishing a direct
relationship between $\mathcal{R}_x(\delta)$ and $\mathcal{R}(\delta)$, however, is generally intractable without imposing strong assumptions on the relationship between $\rho_x$ and $\rho$.

To deal with this mismatch issue, we have to work with the empirical counterparts of the aforementioned quantities. Start with the empirical counterpart of $\psi_x(\delta)$, defined for any $\delta\ge0$ as
$$\widehat\psi_x(\delta) := \widehat R_n\Big\{f\circ\widehat g - f_H\circ\widehat g : f\in\bar{\mathcal{F}}_b,\ \mathbb{E}_n\big[(f\circ\widehat g)(X)-(f_H\circ\widehat g)(X)\big]^2\le\delta\Big\}. \tag{41}$$
By borrowing the results in Bartlett et al. (2005); Boucheron et al. (2000, 2003), we first establish the connection between the population and empirical local Rademacher complexities in Lemma 9 of Appendix B.3.1:
$$\psi_x(\delta) \lesssim \widehat\psi_x(\delta) + \frac{\log(1/\eta)}{n}, \tag{42}$$
provided that $\delta$ is not too small. Subsequently, we relate the empirical local Rademacher complexity $\widehat\psi_x(\delta)$ to the empirical kernel complexity $\widehat{\mathcal{R}}_x(\delta)$ in (40). Both quantities depend on the predicted inputs $\widehat g(X_1),\ldots,\widehat g(X_n)$. By following the argument used in the proof of Lemma 6.6 of Bartlett et al. (2005), we prove in Lemma 10 of Appendix B.3.2:
$$\widehat\psi_x(\delta) \lesssim \widehat{\mathcal{R}}_x(\delta), \quad \forall\delta\ge0. \tag{43}$$
We proceed to derive an upper bound for $\widehat{\mathcal{R}}_x(\delta)$, a quantity that also appears in the bound of the cross-term in Section 3.4.2. Analyzing $\widehat{\mathcal{R}}_x(\delta)$ is crucial for quantifying the increased complexity of $\mathcal{H}_K$ caused by the predicted feature inputs. Indeed, when $\widehat g(X_i)$ predicts $Z_i$ accurately, the kernel matrix $\mathbf{K}_x$ should be close to the kernel matrix $\mathbf{K}$, whose entries are $n^{-1}K(Z_i,Z_j)$ for $i,j\in[n]$. As a result, the empirical kernel complexity $\widehat{\mathcal{R}}_x(\delta)$ approximates
$$\widehat{\mathcal{R}}(\delta) = \Big(\frac{1}{n}\sum_{j=1}^n \min\{\delta,\widehat\mu_j\}\Big)^{1/2}, \quad \forall\delta\ge0, \tag{44}$$
with $\widehat\mu_1\ge\cdots\ge\widehat\mu_n$ being the eigenvalues of $\mathbf{K}$. Note that $\widehat{\mathcal{R}}(\delta)$ can be regarded as the empirical counterpart of $\mathcal{R}(\delta)$, and it is expected by concentration that $\widehat{\mathcal{R}}(\delta)$ and $\mathcal{R}(\delta)$ have the same order. However, when the prediction error of $\widehat g(X_i)$ is not negligible, it should manifest in the gap between $\widehat{\mathcal{R}}_x(\delta)$ and $\widehat{\mathcal{R}}(\delta)$.
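This gap can be observed numerically: perturbing the inputs flattens the spectrum of the kernel matrix, which tends to increase the empirical complexity. A small simulation sketch, where the Gaussian kernel, the noise level, and $\delta$ are all assumed choices for illustration:

```python
import numpy as np

def emp_kernel_complexity(K_mat, delta):
    """sqrt((1/n) * sum_j min(delta, mu_j)) with mu_j the eigenvalues of
    the normalized matrix K_mat / n, cf. (40) and (44)."""
    n = K_mat.shape[0]
    mu = np.clip(np.linalg.eigvalsh(K_mat / n), 0.0, None)  # guard round-off
    return np.sqrt(np.sum(np.minimum(delta, mu)) / n)

def gaussian_gram(U):
    sq = ((U[:, None, :] - U[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / 2)

rng = np.random.default_rng(0)
n = 200
Z = rng.normal(size=(n, 2))                  # latent factors
Z_hat = Z + 0.5 * rng.normal(size=(n, 2))    # predicted inputs with error
delta = 0.05
r_hat = emp_kernel_complexity(gaussian_gram(Z), delta)    # based on K
r_x = emp_kernel_complexity(gaussian_gram(Z_hat), delta)  # based on K_x
```

On simulations of this kind the noisy inputs typically yield the larger complexity, in line with the inflation characterized in (45) below. Both quantities are bounded by $\sqrt\delta$, since each of the $n$ summands is at most $\delta$.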
Our results in Lemmas 11 and 12 and Corollary 8 of Appendix B.3.3 characterize how the error of $\widehat g(X_i)$ in predicting $Z_i$ inflates the empirical kernel complexity:
$$\widehat{\mathcal{R}}_x(\delta) \lesssim \widehat{\mathcal{R}}(\delta) + \sqrt{\frac{\mathbb{E}\Delta_{\widehat g}}{n}} + \frac{\log(1/\eta)}{n}. \tag{45}$$
The final step is to bound from above $\widehat{\mathcal{R}}(\delta)$ by its population-level counterpart $\mathcal{R}(\delta)$. This is proved in Lemma 13 of Appendix B.3.4:
$$\widehat{\mathcal{R}}(\delta) \lesssim \mathcal{R}(\delta) + \frac{\sqrt{\log(1/\eta)}}{n}. \tag{46}$$
By collecting the results in (42), (43), (45) and (46), we conclude that
$$\psi_x(\delta) \lesssim \mathcal{R}(\delta) + \sqrt{\frac{\mathbb{E}\Delta_{\widehat g}}{n}} + \frac{\sqrt{\log(1/\eta)}}{n}, \tag{47}$$
from which we finally derive the order of $\delta_x$ in Lemma 4 of Appendix B.2.2 as
$$\delta_x \lesssim \delta_n + \mathbb{E}\Delta_{\widehat g} + \frac{\sqrt{\log(1/\eta)}}{n}.$$

4 Application to factor-based nonparametric regression models

In this section, to provide a concrete application of the theory developed in Section 3, we consider the latent factor model (3) in which the ambient features $X\in\mathcal{X}=\mathbb{R}^p$ are linearly related to the latent factor $Z\in\mathcal{Z}=\mathbb{R}^r$ with $r\ll p$. Under such a model, it is reasonable to choose $\widehat g$ as a linear function to predict $Z$, as detailed in Section 4.1. In Section 4.2 we state the upper bound of the excess risk of the corresponding predictor, while in Section 4.3 we prove a matching minimax lower bound, thereby establishing the minimax optimality of the proposed predictor.

4.1 Prediction of latent factors using PCA under model (3)

We first discuss the choice of $\widehat g$ to predict the latent factor $Z$. As mentioned in Remark 1, we
https://arxiv.org/abs/2505.20022v1
construct such a predictor from auxiliary data $X' \in \mathbb R^{n' \times p}$ (for simplicity, we assume $n' = n$) that is independent of the training data $\mathcal D$. Under the factor model (3), the $n \times p$ matrix $X' = (X'_1, \ldots, X'_n)^\top$ satisfies
$$X' = Z' A^\top + W', \qquad (48)$$
with $Z' = (Z'_1, \ldots, Z'_n)^\top$ and $W' = (W'_1, \ldots, W'_n)^\top$. Prediction of $Z'$ and estimation of $A$ under (48) have been extensively studied in the factor model literature. One classical approach solves the following constrained least squares problem:
$$(\widehat Z', \widehat A) = \arg\min_{Z' \in \mathbb R^{n \times r},\, A \in \mathbb R^{p \times r}} \frac{1}{np} \big\| X' - Z' A^\top \big\|_F^2, \quad \text{subject to } \frac{1}{n} Z'^\top Z' = I_r,\ \frac{1}{p} A^\top A \text{ is diagonal}.$$
The above formulation uses $r$, the true number of latent factors. In the factor model literature, several methods are provably consistent for selecting $r$; see, for instance, Ahn and Horenstein (2013); Bai and Ng (2002); Bing et al. (2021, 2022); Buja and Eyuboglu (1992); Lam and Yao (2012). Write the singular value decomposition (SVD) of the normalized $X'$ as
$$\frac{1}{\sqrt{np}} X' = \sum_{i=1}^{p} d_i u_i v_i^\top. \qquad (49)$$
Let $U_r = (u_1, \ldots, u_r) \in \mathbb O^{n \times r}$, $V_r = (v_1, \ldots, v_r) \in \mathbb O^{p \times r}$ and $D_r = \mathrm{diag}(d_1, \ldots, d_r)$ contain the largest $r$ singular values. It is known (see, for instance, Bai (2003)) that the above constrained optimization admits the closed-form solution
$$\widehat A = \sqrt{p}\, V_r D_r, \qquad \widehat Z' = \sqrt{n}\, U_r = X' V_r D_r^{-1} / \sqrt{p} =: X' \widehat B^\top. \qquad (50)$$
Given $\widehat B \in \mathbb R^{r \times p}$ as above, we can predict $Z_1, \ldots, Z_n$ in the training data by $\widehat B X_1, \ldots, \widehat B X_n$ and then proceed with the kernel ridge regression in (12) of Section 3 to obtain the regression predictor $\widehat f$. For a new data point $X$ from model (3), the final prediction is
$$(\widehat f \circ \widehat B)(X) = \frac{1}{\sqrt n} \sum_{i=1}^n \widehat\alpha_i K(\widehat B X_i, \widehat B X).$$
We state its excess risk bound in the next section, as an application of our theory in Section 3.

4.2 Upper bounds on the excess risk of $\widehat f \circ \widehat B$

From our theory in Section 3, the prediction error of $\widehat g(X) = \widehat B X$ enters the excess risk bound through
$$\mathbb E\|Z - \widehat B X\|_2^2 = \mathbb E\big\| (I_r - \widehat B A) Z - \widehat B W \big\|_2^2 = \big\| \Sigma_Z^{1/2} (\widehat B A - I_r)^\top \big\|_F^2 + \big\| \Sigma_W^{1/2} \widehat B^\top \big\|_F^2, \qquad (51)$$
where we write $\Sigma_Z = \mathrm{Cov}(Z)$ and $\Sigma_W = \mathrm{Cov}(W)$. The second equality above is due to the independence between $Z$ and $W$.
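The SVD construction in (49)–(50) can be sketched in a few lines. This is a minimal illustration assuming $r$ is known; the toy data and noise level are chosen arbitrarily, and the final check verifies the identity $\widehat Z' = X' \widehat B^\top$ from (50).

```python
import numpy as np

def pca_factor_predictor(X_aux, r):
    """Construct \hat B (r x p) from auxiliary data X' (n x p) via the SVD
    of X'/sqrt(np), following (49)-(50); Z is then predicted by \hat B x."""
    n, p = X_aux.shape
    U, d, Vt = np.linalg.svd(X_aux / np.sqrt(n * p), full_matrices=False)
    Ur, Dr, Vr = U[:, :r], d[:r], Vt[:r].T        # top-r singular triplets
    B = (Vr / Dr).T / np.sqrt(p)                  # \hat B^T = V_r D_r^{-1} / sqrt(p)
    A_hat = np.sqrt(p) * Vr * Dr                  # \hat A = sqrt(p) V_r D_r
    Z_hat_aux = np.sqrt(n) * Ur                   # \hat Z' = sqrt(n) U_r
    return B, A_hat, Z_hat_aux

# Toy check under model (48): X' = Z'A^T + W'.
rng = np.random.default_rng(1)
n, p, r = 500, 300, 3
Zp = rng.standard_normal((n, r))
A = np.sqrt(5.0) * rng.standard_normal((p, r))
Xp = Zp @ A.T + 0.5 * rng.standard_normal((n, p))
B, A_hat, Z_hat = pca_factor_predictor(Xp, r)
```

The identity $\widehat Z' = X' \widehat B^\top$ holds exactly (up to floating-point error), since $X' V_r D_r^{-1}/\sqrt p = \sqrt n\, U_r$ by the SVD.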
In view of (51), the matrix $\widehat B$ needs to ensure that $\|\widehat B A - I_r\|_F$ vanishes. Since this essentially requires identifying $A$, we adopt the following identifiability and regularity conditions that are commonly used in the factor model literature; see, for instance, Bai (2003); Bai and Ng (2020); Fan et al. (2013); Stock and Watson (2002). Throughout this section, we also treat the number of latent factors $r$ as fixed.

Assumption 6. The matrices $A$ and $\Sigma_Z$ satisfy $\Sigma_Z = I_r$ and $p^{-1} A^\top A = \mathrm{diag}(\lambda_1, \ldots, \lambda_r)$ with $\lambda_1, \ldots, \lambda_r$ being distinct. Moreover, there exist absolute constants $0 < c \le C < \infty$ such that $\max_{j \in [p]} \|[\Sigma_W]_{j\cdot}\|_1 < C$ and $c \le \lambda_r \le \lambda_1 \le C$.

Assumption 7. Both random vectors $Z$ and $W$ under model (3) have sub-Gaussian tails, that is, $\mathbb E[\exp(u^\top Z)] \le \exp(\|u\|_2^2 \gamma_z^2)$ and $\mathbb E[\exp(v^\top W)] \le \exp(\|v\|_2^2 \gamma_w^2)$ for all $u \in \mathbb R^r$ and $v \in \mathbb R^p$.

Both Assumption 6 and Assumption 7 are commonly adopted in the literature on factor models with the number of features allowed to diverge (Bai, 2003; Bai and Ng, 2006; Fan et al., 2013;
Stock and Watson, 2002). The requirement in Assumption 6 that $\lambda_1, \ldots, \lambda_r$ be distinct is used to ensure that the factor $Z$ can be consistently predicted. We remark that this requirement can be removed once kernels that are invariant to orthogonal transformations are used (see, also, Section 3.2.1). Under Assumptions 6 and 7, the existing literature on approximate factor models (see, for instance, Bai and Ng (2020)) ensures that $\mathbb E\|Z - \widehat B X\|_2^2 = O_P(n^{-1} + p^{-1})$. Consequently, Theorem 2 together with $\mathbb E\Delta_{\widehat g} \lesssim \mathbb E\|Z - \widehat B X\|_2^2$ yields the following excess risk bounds for $\widehat f \circ \widehat B$.

Corollary 5. Under the conditions of Theorem 2, further grant model (3) with Assumptions 6 and 7. By choosing
$$\lambda \asymp \delta_n \log(1/\eta) + \frac{\log(1/\eta)}{n} + \frac{1}{p} + \|f_H - f^*\|_\rho^2$$
in (12), one has
$$\mathcal E(\widehat f \circ \widehat B) = O_P\Big( \delta_n + \frac{1}{n} + \frac{1}{p} + \|f_H - f^*\|_\rho^2 \Big),$$
where $O_P$ is with respect to the law of both $\mathcal D$ and $X'$.

The term $1/p$ arises exclusively from using PCA to predict the latent factors under Assumptions 6 and 7. For linear kernels and $f^* \in \mathcal H_K$, we reduce to the factor-based linear regression models (2) and (3), and our results coincide with the excess risk bounds of PCR derived in Bing et al. (2021). When $f^*$ belongs to a hierarchical composition of functions in the Hölder class with some smoothness parameter $\gamma > 0$, Fan and Gu (2024) derived the optimal prediction risk $(\log n / n)^{2\gamma/(2\gamma+1)} + \log(p)/n + 1/p$, achieved by regressing the response onto the leading PCs via deep neural nets. Corollary 5, on the other hand, provides the excess risk bound of kernel ridge regression using the leading PCs under a broader function class. The obtained rate is further shown in the next section to be minimax optimal over a wide range of kernel functions.

4.3 Minimax lower bound of the excess risk

To benchmark the upper bounds obtained in Corollary 5, we establish minimax lower bounds on the excess risk $\mathcal E(h)$ for any measurable function $h: \mathbb R^p \to \mathbb R$ under models (3) and (4).
Although the factor model (3) is a particular instance for modeling the relationship between $X$ and $Z$, the obtained minimax lower bounds remain valid for the space of joint distributions of $Z$ and $X$ under more general dependence. To specify the RKHS for model (4), we consider the following class of kernel functions:
$$\mathcal K := \big\{ K : K \text{ is regular, universal, and satisfies Assumptions 2 and 3} \big\}.$$
Regular kernels were already required in the past work of Yang et al. (2017) to derive minimax lower bounds for estimating $f^*$ in the classical KRR setting with a fixed design. In our case, we further need the kernel function to be universal in order to characterize the effect of predicting $Z$. As mentioned in Remark 3, there is no approximation error for universal kernels. Our lower bounds below hold pointwise for each $K \in \mathcal K$. For the purpose of establishing minimax lower bounds, it suffices to choose $\epsilon \sim N(0, \sigma^2)$ under model (4) and $(Z, W)$ jointly Gaussian under model (3). For this setting, we use $\mathbb P_\theta$ to denote the set of all distributions of $\mathcal D$ as well as the test data point $(Y, X)$, parametrized by
$$\theta \in \Theta := \big\{ (A, \Sigma_Z, \Sigma_W, f^*) : A, \Sigma_Z \text{ and } \Sigma_W \text{ satisfy Assumption 6},\ f^* \in \mathcal H_K \big\}.$$
Let $\mathbb E_\theta$ denote its corresponding
expectation. The following theorem states the minimax lower bounds on the excess risk under the above specification. Its proof can be found in Appendix B.9.

Theorem 3. Under models (3) and (4), consider any kernel function $K \in \mathcal K$. Assume $n$ is sufficiently large such that $d(\delta_n) \ge 128 \log 2$. There exist a constant $c > 0$, depending only on $\sigma^2$, and an absolute constant $c' > 0$ such that
$$\inf_h \sup_{\theta \in \Theta} \mathbb E_\theta [f^*(Z) - h(X)]^2 \ge c\, \delta_n + \frac{c'}{p}. \qquad (52)$$
The infimum is over all measurable functions $h: \mathcal X \to \mathbb R$ constructed from $\mathcal D$.

The minimax lower bound in (52) consists of two terms: the first term $\delta_n$ accounts for the model complexity of $\mathcal H_K$, which depends on the kernel function $K$, while the second term $1/p$ reflects the irreducible error due to not observing $Z$. In conjunction with Corollary 5, by noting that $\delta_n \asymp d(\delta_n)/n \ge 1/n$ and $\|f_H - f^*\|_\rho = 0$, the lower bound in Theorem 3 is tight and the predictor $\widehat f \circ \widehat B$ is minimax optimal. Finally, we remark that although our lower bound in Theorem 3 is stated for universal kernels, inspecting its proof reveals that the same result holds for kernels that induce an RKHS containing all linear functions.

5 Extension to other loss functions

Although we have so far focused on KRR with predicted inputs under the squared loss, our analytical framework can be adapted to general loss functions that are strongly convex and smooth. In this section, we detail this extension. Specifically, let $L(\cdot, \cdot): \mathbb R \times \mathbb R \to \mathbb R$ be a general loss function with $f^*$ being the best predictor of $Y$ over all measurable functions $f: \mathcal Z \to \mathbb R$, that is,
$$f^*(z) = \arg\min_f \mathbb E[L(Y, f(z)) \mid Z = z], \quad \forall z \in \mathcal Z.$$
Fix any measurable function $\widehat g: \mathcal X \to \mathcal Z$ that is used to predict the feature inputs $Z_1, \ldots, Z_n$. For the specified loss function $L$ and the predicted inputs $\widehat g(X_1), \ldots, \widehat g(X_n)$, we solve the following optimization problem:
$$\widehat f = \arg\min_{f \in \mathcal H_K} \Big\{ \frac{1}{n} \sum_{i=1}^n L\big(Y_i, (f \circ \widehat g)(X_i)\big) + \lambda \|f\|_K^2 \Big\}. \qquad (53)$$
For a new data point $X \in \mathcal X$, we predict its corresponding response $Y$ by $(\widehat f \circ \widehat g)(X)$.
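For intuition, (53) can be solved numerically through the representer parametrization $f = \sum_i \alpha_i K(\widehat g(X_i), \cdot)$, under which the objective becomes $\frac{1}{n}\sum_i L(y_i, (K\alpha)_i) + \lambda\, \alpha^\top K \alpha$ with $K_{ij} = K(\widehat g(X_i), \widehat g(X_j))$. Below is a sketch for the logistic loss using plain gradient descent; the kernel choice, the step-size rule, the omission of the paper's $\sqrt n$ normalization, and the simulated data are all illustrative assumptions, not the paper's procedure.

```python
import numpy as np

def krr_general_loss(K, y, lam, loss_grad, n_iter=2000):
    """Minimize (1/n) sum_i L(y_i, (K alpha)_i) + lam * alpha^T K alpha over
    the representer coefficients alpha, by gradient descent (a sketch)."""
    n = K.shape[0]
    top = np.linalg.eigvalsh(K)[-1]
    # Step size below the smoothness constant (assumes curvature of L in its
    # second argument is at most 1/4, which holds for the logistic loss).
    lr = 1.0 / (0.25 * top**2 / n + 2 * lam * top)
    alpha = np.zeros(n)
    for _ in range(n_iter):
        f = K @ alpha
        grad = K @ loss_grad(y, f) / n + 2 * lam * (K @ alpha)
        alpha -= lr * grad
    return alpha

def logistic_grad(y, u):
    # d/du log(1 + exp(-y u)) for labels y in {-1, +1}.
    return -y / (1.0 + np.exp(y * u))

rng = np.random.default_rng(2)
n = 200
Zhat = rng.uniform(-1, 1, size=(n, 2))   # stand-in for predicted inputs g_hat(X_i)
sq = np.sum((Zhat[:, None] - Zhat[None, :]) ** 2, axis=-1)
K = np.exp(-sq)                           # Gaussian kernel matrix
y = np.sign(Zhat[:, 0] + 0.1 * rng.standard_normal(n))
alpha = krr_general_loss(K, y, lam=1e-3, loss_grad=logistic_grad)
train_acc = np.mean(np.sign(K @ alpha) == y)
```

Since the loss is convex in $\alpha$ and the step size is below the inverse smoothness constant, the iterates converge to the regularized minimizer.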
Our goal in this section is to bound from above the excess risk of $\widehat f \circ \widehat g$,
$$\mathcal E(\widehat f \circ \widehat g) := \mathbb E\Big[ L\big(Y, (\widehat f \circ \widehat g)(X)\big) - L\big(Y, f^*(Z)\big) \Big].$$
In addition to Assumptions 1–3 in the main paper, our analysis for general loss functions requires the following two assumptions. Recall that $\mathcal F_b$ is given in (37).

Assumption 8. For any $y \in \mathbb R$, the loss function $L(y, \cdot)$ is convex. Moreover, there exists a constant $C_\ell$ such that, for any $y \in \mathbb R$ and $f, f' \in \mathcal F_b$,
$$\big| L\big(y, (f \circ \widehat g)(x)\big) - L\big(y, (f' \circ \widehat g)(x)\big) \big| \le C_\ell \big| (f \circ \widehat g)(x) - (f' \circ \widehat g)(x) \big|, \quad \forall x \in \mathcal X.$$

Many commonly used loss functions satisfy Assumption 8. For example, it holds for the logistic loss and the exponential loss, both commonly used in classification problems. Other examples include the check loss used in quantile regression, the hinge loss used in margin-based classification, and the Huber loss adopted in robust regression. As a result of Assumption 8 together with the boundedness of $\mathcal F_b$ in (37), the quantity $|L(y, (f \circ \widehat g)(x)) - L(y, (f' \circ \widehat g)(x))|$ is bounded uniformly over $f, f' \in \mathcal F_b$, $x \in \mathcal X$ and $y \in \mathbb R$.

Assumption 9. There exist two constants $0 < C_L \le C_U < \infty$ such that for all $f \in \mathcal F_b$ and $\widehat g$,
$$C_L\, \mathbb E[(f \circ \widehat g)(X) - f^*(Z)]^2 \le \mathcal E(f \circ \widehat g) \le C_U\, \mathbb E[(f \circ \widehat g)(X) - f^*(Z)]^2. \qquad (54)$$

The first inequality in (54) is the strong convexity condition, while the second corresponds to the smoothness condition. When $Z$ is
observable, Assumption 9 reduces to
$$\mathbb E[L(Y, f(Z)) - L(Y, f^*(Z))] \asymp \|f - f^*\|_\rho^2, \quad \forall f \in \mathcal F_b,$$
a condition, or its variants, frequently adopted in the existing literature on analyzing general loss functions (Farrell et al., 2021; Li et al., 2019; Steinwart and Christmann, 2008; Wei et al., 2017). It can be seen from (63) that Assumption 9 holds for the squared loss with $C_L = C_U = 1$. For more general losses, Assumption 9 essentially requires that the excess risk maintain a curvature similar to that of the $L_2$-norm around the minimizer $f^*$. Indeed, by following the same argument as in Wei et al. (2017), it can be verified that Assumption 9 holds for both the logistic loss and the exponential loss, with constants $C_L$ and $C_U$ depending only on $\kappa$, $\|f^*\|_\infty$ and the radius of $\mathcal F_b$. For the check loss, Assumption 9 is satisfied when the conditional distribution of $Y$ given $Z$ is well behaved, for instance, when for any $z \in \mathcal Z$ and $y \in \mathbb R$,
$$C_L |y| \le \big| F_{Y|Z=z}(f^*(z) + y) - F_{Y|Z=z}(f^*(z)) \big| \le C_U |y|, \qquad (55)$$
where $F_{Y|Z=z}$ denotes the conditional distribution function of $Y$ given $Z = z$. The fact that (55) implies Assumption 9 can be verified by using Knight's identity (Knight, 1998) and adapting the argument in Belloni and Chernozhukov (2011). Table 1 summarizes some common loss functions that satisfy Assumptions 8 and 9.

Table 1: Losses that satisfy Assumptions 8 and 9

  Loss function   Exponential   Logistic   Check
  Assumption 8    ✓             ✓          ✓
  Assumption 9    ✓             ✓          under (55)

Recall the kernel-related latent error $\mathbb E\Delta_{\widehat g}$ from (18) and $\delta_n$, the fixed point of $R(\delta)$ in (22). The following theorem provides non-asymptotic upper bounds on the excess risk of $\widehat f \circ \widehat g$ with $\widehat f$ given in (53).

Theorem 4. Grant Assumptions 1–3, 8 and 9. For any $\eta \in (0, 1)$, by choosing $\lambda$ in (53) such that
$$\lambda = C \Big( \delta_n \log(1/\eta) + \mathbb E\Delta_{\widehat g} + \|f_H - f^*\|_\rho^2 + \frac{\log(1/\eta)}{n} \Big), \qquad (56)$$
with probability at least $1 - \eta$, one has
$$\mathcal E(\widehat f \circ \widehat g) \le C' \Big( \delta_n \log(1/\eta) + \mathbb E\Delta_{\widehat g} + \|f_H - f^*\|_\rho^2 + \frac{\log(1/\eta)}{n} \Big). \qquad (57)$$
Here both positive constants $C$ and $C'$ depend only on $\kappa$, $\|f_H\|_K$, $C_\ell$, $C_L$ and $C_U$.

Proof. Its proof can be found in Appendix B.10.
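The Lipschitz property of Assumption 8 can be checked numerically for specific losses. The sketch below (not from the paper) verifies the difference-quotient bound in the second argument for the logistic loss (constant at most 1) and the check loss with $\tau = 0.5$ (constant at most $\max(\tau, 1-\tau)$); the loss implementations are standard, the data are arbitrary.

```python
import numpy as np

def logistic_loss(y, u):
    # L(y, u) = log(1 + exp(-y u)), labels y in {-1, +1}; 1-Lipschitz in u.
    return np.log1p(np.exp(-y * u))

def check_loss(y, u, tau=0.5):
    # Quantile (check) loss; Lipschitz in u with constant max(tau, 1 - tau).
    r = y - u
    return np.maximum(tau * r, (tau - 1.0) * r)

rng = np.random.default_rng(0)
u, v = rng.uniform(-3, 3, 10_000), rng.uniform(-3, 3, 10_000)
mask = np.abs(u - v) > 1e-3          # avoid 0/0-style roundoff in the ratio
y_cls = rng.choice([-1.0, 1.0], size=10_000)
y_reg = rng.uniform(-3, 3, 10_000)

ratio_logistic = (np.abs(logistic_loss(y_cls, u) - logistic_loss(y_cls, v))
                  / np.abs(u - v))[mask]
ratio_check = (np.abs(check_loss(y_reg, u) - check_loss(y_reg, v))
               / np.abs(u - v))[mask]
```

Both maximal ratios stay below the corresponding Lipschitz constants, consistent with Assumption 8 holding with $C_\ell = 1$ for the logistic loss and $C_\ell = 1/2$ for the median check loss.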
The risk bound in (57) of Theorem 4 consists of the same components as in (25) of Theorem 2. Under the factor model (3), it readily yields the risk bound when $\widehat g$ is chosen as the linear predictor obtained by PCA.

Remark 4. By inspecting the proof of Theorem 4, it can be seen that, under the classical setting where $Z$ is observable and $f^* \in \mathcal H_K$, the smoothness condition (the second inequality in Assumption 9) can be dropped.

References

Amirabbas Afzali, Hesam Hosseini, Mohmmadamin Mirzai, and Arash Amini. Clustering time series data with Gaussian mixture embeddings in a graph autoencoder framework. arXiv preprint arXiv:2411.16972, 2024.
Seung C. Ahn and Alex R. Horenstein. Eigenvalue ratio test for the number of factors. Econometrica, 81(3):1203–1227, 2013.
T. W. Anderson. An Introduction to Multivariate Statistical Analysis. Wiley, New York, 3rd edition, 2003. ISBN 978-0-471-36091-9.
Francis Bach and Michael Jordan. Kernel independent component analysis. The Journal of Machine Learning Research, 3:1–48, 2002.
Jushan Bai. Inferential theory for factor models of large dimensions. Econometrica, 71(1):135–171, 2003.
Jushan Bai and Serena Ng. Determining the number of factors in approximate factor models. Econometrica, 70(1):191–221, 2002.
Jushan Bai and Serena Ng.
Confidence intervals for diffusion index forecasts and inference for factor-augmented regressions. Econometrica, 74(4):1133–1150, 2006.
Jushan Bai and Serena Ng. Simpler proofs for approximate factor models of large dimensions. arXiv preprint arXiv:2008.00254, 2020.
D. Bank, N. Koenigstein, and R. Giryes. Machine Learning for Data Science Handbook: Data Mining and Knowledge Discovery Handbook. Springer, 2023.
Peter L. Bartlett, Olivier Bousquet, and Shahar Mendelson. Local Rademacher complexities. The Annals of Statistics, 33(4):1497–1537, 2005.
Alexandre Belloni and Victor Chernozhukov. l1-penalized quantile regression in high dimensional sparse models. The Annals of Statistics, 39(1):82–130, 2011.
Alexandre Belloni, Daniel Chen, Victor Chernozhukov, and Christian Hansen. Sparse models and methods for optimal instruments with an application to eminent domain. Econometrica, 80(6):2369–2429, 2012.
Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798–1828, 2013.
Kamal Berahmand, Fatemeh Daneshfar, Elaheh Sadat Salehi, Yuefeng Li, and Yue Xu. Autoencoders and their applications in machine learning: A survey. Artificial Intelligence Review, 57(2):28, 2024.
Sergei Bernstein. The Theory of Probabilities. Gastehizdat Publishing House, Moscow, 1946.
Bartosz Bieganowski and Robert Ślepaczuk. Supervised autoencoder MLP for financial time series forecasting. arXiv preprint arXiv:2404.01866, 2024.
Xin Bing and Marten Wegkamp. Optimal discriminant analysis in high-dimensional latent factor models. The Annals of Statistics, 51(3):1232–1257, 2023.
Xin Bing, Florentina Bunea, Seth Strimas-Mackey, and Marten Wegkamp. Prediction under latent factor regression: Adaptive PCR, interpolating predictors and beyond. The Journal of Machine Learning Research, 22(177):1–50, 2021.
Xin Bing, Yang Ning, and Yaosheng Xu.
Adaptive estimation in multivariate response regression with hidden variables. The Annals of Statistics, 50(2):640–672, 2022.
Gilles Blanchard and Nicole Mücke. Optimal rates for regularization of statistical inverse learning problems. Foundations of Computational Mathematics, 18(4):971–1013, 2018.
Gilles Blanchard and Oleksandr Zadorozhnyi. Concentration of weakly dependent Banach-valued sums and applications to statistical learning methods. Bernoulli, 25(4B):3421–3458, 2019.
Gilles Blanchard, Gyemin Lee, and Clayton Scott. Generalizing from several related classification tasks to a new unlabeled sample. Advances in Neural Information Processing Systems, 24, 2011.
Stéphane Boucheron, Gábor Lugosi, and Pascal Massart. A sharp concentration inequality with applications. Random Structures & Algorithms, 16(3):277–292, 2000.
Stéphane Boucheron, Gábor Lugosi, and Pascal Massart. Concentration inequalities using the entropy method. The Annals of Probability, 31(3):1583–1614, 2003.
Olivier Bousquet, Vladimir Koltchinskii, and Dmitriy Panchenko. Some local measures of complexity of convex hulls and generalization bounds. In Computational Learning Theory: 15th Annual Conference on Computational Learning Theory, COLT 2002, Sydney, Australia, July 8–10, 2002, Proceedings, pages 59–73. Springer, 2002.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. Advances in Neural Information
Processing Systems (NeurIPS), 2020.
Andreas Buja and Nermin Eyuboglu. Remarks on parallel analysis. Multivariate Behavioral Research, 27(4):509–540, 1992.
Andrea Caponnetto and Ernesto De Vito. Optimal rates for the regularized least-squares algorithm. Foundations of Computational Mathematics, 7:331–368, 2007.
G. Chamberlain and M. Rothschild. Arbitrage, factor structure, and mean-variance analysis on large asset markets. Econometrica, 51(5):1281–1304, 1983.
Liuyi Chen, Bocheng Han, Xuesong Wang, Jiazhen Zhao, Wenke Yang, and Zhengyi Yang. Machine learning methods in weather and climate applications: A survey. Applied Sciences, 13(21):12019, 2023.
Victor Chernozhukov, Denis Chetverikov, Mert Demirer, Esther Duflo, Christian Hansen, Whitney Newey, and James Robins. Double/debiased machine learning for treatment and structural parameters. The Econometrics Journal, 21(1):C1–C68, 2018.
Felipe Cucker and Steve Smale. On the mathematical foundations of learning. Bulletin of the American Mathematical Society, 39(1):1–49, 2002.
Hugo Cui, Bruno Loureiro, Florent Krzakala, and Lenka Zdeborová. Generalization error rates in kernel regression: The crossover from the noiseless to noisy regime. Advances in Neural Information Processing Systems, 34:10131–10143, 2021.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 4171–4186, 2019.
Yaqi Duan, Chi Jin, and Zhiyuan Li. Risk bounds and Rademacher complexity in batch reinforcement learning. In International Conference on Machine Learning, pages 2892–2902. PMLR, 2021.
Yaqi Duan, Mengdi Wang, and Martin J. Wainwright. Optimal policy evaluation using kernel-based temporal difference methods. The Annals of Statistics, 52(5):1927–1952, 2024.
Mona Eberts and Ingo Steinwart. Optimal regression rates for SVMs using Gaussian kernels. Electronic Journal of Statistics, 7:1–42, 2013. doi: 10.1214/12-EJS760.
David Eric Edmunds and Hans Triebel. Function Spaces, Entropy Numbers, Differential Operators. Cambridge University Press, 1996.
Jianqing Fan and Yihong Gu. Factor augmented sparse throughput deep ReLU neural networks for high dimensional regression. Journal of the American Statistical Association, 119(548):2680–2694, 2024.
Jianqing Fan, Yuan Liao, and Martina Mincheva. Large covariance estimation by thresholding principal orthogonal complements. Journal of the Royal Statistical Society Series B: Statistical Methodology, 75(4):603–680, 2013.
Max H. Farrell, Tengyuan Liang, and Sanjog Misra. Deep neural networks for estimation and inference. Econometrica, 89(1):181–213, 2021.
Simon Fischer and Ingo Steinwart. Sobolev norm learning rates for regularized least-squares algorithms. The Journal of Machine Learning Research, 21(1):8464–8501, 2020.
Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016.
Harold Hotelling. The relations of the newer multivariate statistical methods to factor analysis. British Journal of Statistical Psychology, 10(2):69–79, 1957.
Daniel Hsu, Sham Kakade, and Tong Zhang. Random design analysis of ridge regression. Foundations of Computational Mathematics, 14:569–600, 2014.
Bryan Kelly and Seth Pruitt. The three-pass regression filter: A new approach to forecasting using many predictors. Journal of Econometrics, 186(2):294–316, 2015.
George Kimeldorf and Grace Wahba. Some results on Tchebycheffian spline functions. Journal of Mathematical Analysis and Applications, 33(1):82–95, 1971.
Keith Knight. Limiting distributions for l1 regression
estimators under general conditions. The Annals of Statistics, pages 755–770, 1998.
Vladimir Koltchinskii. Rademacher penalties and structural risk minimization. IEEE Transactions on Information Theory, 47(5):1902–1914, 2001.
Vladimir Koltchinskii and Dmitriy Panchenko. Rademacher processes and bounding the risk of function learning. In High Dimensional Probability II, pages 443–457. Springer, 2000.
Clifford Lam and Qiwei Yao. Factor modeling for high-dimensional time series: Inference for the number of factors. The Annals of Statistics, 40(2):694–726, 2012.
D. N. Lawley and A. E. Maxwell. Factor analysis as a statistical method. Journal of the Royal Statistical Society. Series D (The Statistician), 12(3):209–229, 1962.
Michel Ledoux and Michel Talagrand. Probability in Banach Spaces: Isoperimetry and Processes. Springer Science & Business Media, 2013.
Zhu Li, Jean-Francois Ton, Dino Oglic, and Dino Sejdinovic. Towards a unified analysis of random Fourier features. In International Conference on Machine Learning, pages 3905–3914. PMLR, 2019.
Junhong Lin and Volkan Cevher. Convergences of regularized algorithms and stochastic gradient methods with random projections. The Journal of Machine Learning Research, 21(1):690–733, 2020.
Shao-Bo Lin and Ding-Xuan Zhou. Distributed kernel-based gradient descent algorithms. Constructive Approximation, 47(2):249–276, 2018.
Shao-Bo Lin, Di Wang, and Ding-Xuan Zhou. Distributed kernel ridge regression with communications. The Journal of Machine Learning Research, 21:1–38, 2020.
Gábor Lugosi and Marten Wegkamp. Complexity regularization via localized random penalties. The Annals of Statistics, 32(4):1679–1697, 2004. doi: 10.1214/009053604000000463.
Cong Ma, Reese Pathak, and Martin Wainwright. Optimally tackling covariate shift in RKHS-based nonparametric regression. The Annals of Statistics, 51(2):738–761, 2023.
Pascal Massart.
Some applications of concentration inequalities to statistics. In Annales de la Faculté des sciences de Toulouse: Mathématiques, volume 9, pages 245–303, 2000.
Shahar Mendelson. Geometric parameters of kernel machines. In International Conference on Computational Learning Theory, pages 29–43. Springer, 2002.
Shahar Mendelson and Joseph Neeman. Regularization in kernel learning. The Annals of Statistics, 38(1):526–565, 2010.
James Mercer. Functions of positive and negative type, and their connection with the theory of integral equations. Philosophical Transactions of the Royal Society of London. Series A, Containing Papers of a Mathematical or Physical Character, 209(441-458):415–446, 1909.
Charles A. Micchelli, Yuesheng Xu, and Haizhang Zhang. Universal kernels. The Journal of Machine Learning Research, 7(12), 2006.
Stanislav Minsker. On some extensions of Bernstein's inequality for self-adjoint operators. Statistics & Probability Letters, 127:111–119, 2017.
Sayan Mukherjee, Qiang Wu, and Ding-Xuan Zhou. Learning gradients on manifolds. Bernoulli, 16(1):181–207, 2010.
Erich Novak, Mario Ullrich, Henryk Woźniakowski, and Shun Zhang. Reproducing kernels of Sobolev spaces on R^d and applications to embedding constants and tractability. Analysis and Applications, 16(05):693–715, 2018.
David Pollard. Convergence of Stochastic Processes. Springer Science & Business Media, 2012.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. Proceedings of the International Conference on Machine Learning (ICML), 2021.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang,
Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(140):1–67, 2020.
Alessandro Rudi and Lorenzo Rosasco. Generalization properties of learning with random features. Advances in Neural Information Processing Systems, 30, 2017.
Alessandro Rudi, Raffaello Camoriano, and Lorenzo Rosasco. Less is more: Nyström computational regularization. Advances in Neural Information Processing Systems, 28, 2015.
Steve Smale and Ding-Xuan Zhou. Learning theory estimates via integral operators and their approximations. Constructive Approximation, 26(2):153–172, 2007.
I. Steinwart. Consistency of support vector machines and other regularized kernel classifiers. IEEE Transactions on Information Theory, 51:128–142, 2005.
Ingo Steinwart and Andreas Christmann. Support Vector Machines. Springer Science & Business Media, 2008.
Ingo Steinwart, Clint Scovel, et al. Optimal rates for regularized least squares regression. In Proceedings of the Annual Conference on Learning Theory, 2009, pages 79–93, 2009.
James H. Stock and Mark W. Watson. Forecasting using principal components from a large number of predictors. Journal of the American Statistical Association, 97(460):1167–1179, 2002.
Zirui Sun, Mingwei Dai, Yao Wang, and Shao-Bo Lin. Nyström regularization for time series forecasting. The Journal of Machine Learning Research, 23(1):14082–14123, 2022.
Vladimir N. Vapnik and A. Ya. Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. In Measures of Complexity: Festschrift for Alexey Chervonenkis, pages 11–30. Springer, 2015.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems (NeurIPS), 30, 2017.
Martin Wainwright.
High-Dimensional Statistics: A Non-Asymptotic Viewpoint, volume 48. Cambridge University Press, 2019.
Yuting Wei, Fanny Yang, and Martin Wainwright. Early stopping for kernel boosting algorithms: A general analysis with localized complexities. Advances in Neural Information Processing Systems, 30, 2017.
Yun Yang, Mert Pilanci, and Martin Wainwright. Randomized sketches for kernels: Fast and optimal nonparametric regression. The Annals of Statistics, 45(3):991–1023, 2017.
Vadim Yurinsky. Sums and Gaussian Vectors. Springer, 1995.
Haobo Zhang, Yicheng Li, Weihao Lu, and Qian Lin. On the optimality of misspecified kernel ridge regression. In Proceedings of the 40th International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, pages 41331–41353, 2023.
Ding-Xuan Zhou. The covering number in learning theory. Journal of Complexity, 18(3):739–767, 2002.

Appendix

Appendix A contains numerical studies of the kernel ridge regression with features predicted by principal component analysis under models (3) and (4). The main proofs are collected in Appendix B and presented on a section-by-section basis. Auxiliary lemmas are stated in Appendix C.

A Numerical experiments

In this section, we evaluate the numerical performance of the proposed algorithm in Section 2 for KRR with predicted feature inputs, and compare it with several competitors. Specifically, we consider the following nonparametric latent factor model:
$$Y = f^*(Z) + \epsilon, \qquad (58)$$
$$X = AZ + W, \qquad (59)$$
where the latent vector $Z = (Z_1, Z_2, Z_3)^\top$, with $r = 3$, is generated with entries independently sampled from Unif(0, 1), the
true regression function $f^*$ is set as
$$f^*(Z) = 2 \sin(3\pi Z_1) + 3 |Z_1 - 0.5| - \exp(Z_2^2 - Z_3^2),$$
the regression error $\epsilon \sim N(0, 0.8^2)$, the loading matrix $A$ is generated with rows independently sampled from $N(0, \mathrm{diag}(10, 5.5, 1))$, and $W$ is generated with entries independently sampled from $N(0, 1.5^2)$. In each setting, we generate the training data $\mathcal T = \{(Y_i, X_i, Z_i)\}_{i=1}^n$ i.i.d. according to models (58) and (59). We also generate auxiliary data $X' = (X'_1, \ldots, X'_n)^\top$ independently from model (59). Under model (59), as illustrated in Section 4, the latent factors can be predicted by $\widehat Z' = (\widehat B' X_1, \ldots, \widehat B' X_n)^\top$, where $\widehat B'$ is constructed from $X'$ via PCA as in (50). According to our theory, the auxiliary data $X'$ is needed to construct $\widehat B'$ so that it is independent of $\mathcal T$. Numerically, instead of using $X'$, we can also predict $Z$ by $\widehat Z = (\widehat B X_1, \ldots, \widehat B X_n)^\top$, where $\widehat B$ is obtained by applying PCA to $X = (X_1, \ldots, X_n)^\top$ itself. For comparison, we consider the following predictors:

• KRR-Ẑ′: KRR in (12) with predicted inputs using $\widehat Z'$;
• KRR-Ẑ: KRR in (12) with predicted inputs using $\widehat Z$;
• KRR-Z: KRR in (12) using the true $Z$;
• KRR-X: classic KRR obtained from (12) with $X_i$ in place of $\widehat g(X_i)$;
• LR-Z: regressing $Y$ onto $Z$ by OLS.

KRR-X is included to compare the proposed KRR-Ẑ′ with the classic KRR method. The linear predictor LR-Z is considered to illustrate the benefit of using non-linear predictors. KRR-Z uses the unobserved latent factors $Z$ to illustrate the additional prediction error of KRR-Ẑ′ due to predicting $Z$. Finally, KRR-Ẑ is included to examine the effect of using an auxiliary data set to compute $\widehat B'$. For the kernel-based methods, the regularization parameter $\lambda$ in (12) is chosen via 3-fold cross-validation, and we choose the RKHS induced by the Gaussian kernel, $K(z, z') = \exp(-\|z - z'\|_2^2 / \sigma_n^2)$, where $\sigma_n$ is set as the median of all pairwise distances among sample points, as suggested in Mukherjee et al. (2010).
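The data-generating mechanism and the median-heuristic bandwidth described above can be sketched as follows. The helper names are hypothetical, and the median is taken over plain (unsquared) pairwise Euclidean distances, following the plain reading of the text.

```python
import numpy as np

def generate_data(n, p, rng):
    # Simulate models (58)-(59) with the settings described in the text.
    Z = rng.uniform(0, 1, size=(n, 3))
    f_star = (2 * np.sin(3 * np.pi * Z[:, 0]) + 3 * np.abs(Z[:, 0] - 0.5)
              - np.exp(Z[:, 1] ** 2 - Z[:, 2] ** 2))
    Y = f_star + rng.normal(0, 0.8, size=n)
    A = rng.multivariate_normal(np.zeros(3), np.diag([10, 5.5, 1]), size=p)
    X = Z @ A.T + rng.normal(0, 1.5, size=(n, p))
    return Y, X, Z

def median_heuristic_bandwidth(Z):
    # sigma_n = median of all pairwise distances among sample points.
    d = np.sqrt(np.sum((Z[:, None] - Z[None, :]) ** 2, axis=-1))
    return np.median(d[np.triu_indices_from(d, k=1)])

rng = np.random.default_rng(0)
Y, X, Z = generate_data(n=200, p=500, rng=rng)
sigma_n = median_heuristic_bandwidth(Z)
K = np.exp(-np.sum((Z[:, None] - Z[None, :]) ** 2, axis=-1) / sigma_n ** 2)
```

From here, the kernel matrix `K` (or its analogue built from predicted inputs) feeds into the KRR solver of (12).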
To evaluate the numerical performance, a test dataset $\mathcal T_t = \{(Y_i^t, X_i^t, Z_i^t)\}_{i=1}^m$ is also generated independently from models (58) and (59). For any predictor $\widehat f: \mathbb R^p \to \mathbb R$, we evaluate its predictive performance by
$$\mathbb E_m[Y - \widehat f(X)]^2 := \frac{1}{m} \sum_{i=1}^m \big( Y_i^t - \widehat f(X_i^t) \big)^2.$$
For any predictor $\widehat f: \mathbb R^r \to \mathbb R$ using the true latent factor $Z$, we evaluate its performance by
$$\mathbb E_m[Y - \widehat f(Z)]^2 := \frac{1}{m} \sum_{i=1}^m \big( Y_i^t - \widehat f(Z_i^t) \big)^2.$$
These metrics are computed based on 100 repetitions in each setting.

A.1 The effect of training sample size

We first study how the predictive performance of the different predictors changes when the number of training samples varies as $n \in \{200, 600, 1000, 1400, 1800, 2200\}$. For each $n$, we fix $p = 2000$ and $n = m$. The averaged empirical prediction errors, along with the corresponding standard deviations (in parentheses), are reported in Table 2.

Table 2: The averaged performance of all the predictors with varying n (standard deviations in parentheses).

  n        200          600          1000         1400         1800         2200
  KRR-Ẑ′   1.59 (0.32)  0.90 (0.06)  0.82 (0.04)  0.79 (0.04)  0.77 (0.03)  0.75 (0.02)
  KRR-Ẑ    1.63 (0.32)  0.90 (0.07)  0.81 (0.04)  0.79 (0.03)  0.77 (0.03)  0.75 (0.02)
  KRR-Z    1.08 (0.19)  0.77 (0.05)  0.72 (0.03)  0.71 (0.03)  0.70 (0.03)  0.68 (0.02)
  KRR-X    1.99 (0.19)  1.91 (0.10)  1.89 (0.08)  1.85 (0.07)  1.83 (0.06)  1.82 (0.05)
  LR-Z     3.61 (0.29)  3.58 (0.16)  3.62 (0.13)  3.60 (0.10)  3.61 (0.11)  3.60 (0.08)

As shown in Table 2, the proposed KRR-Ẑ′ outperforms all other predictors except the oracle approach, KRR-Z, and its variant KRR-Ẑ that does not use the auxiliary data. KRR-Ẑ′ performs increasingly close to KRR-Z as the sample size grows. This is in line with our risk bound in Corollary 5, where the extra prediction error of KRR-Ẑ′ relative to KRR-Z is quantified by $O_P(1/p + 1/n)$, which reduces to $1/p$ as $n$ increases. Compared to KRR-Ẑ, using auxiliary data only seems to be beneficial for small sample sizes. Compared to KRR-Ẑ′ and KRR-Z, increasing the sample size has a much smaller effect in reducing the prediction error of KRR-X, indicating a slower rate of convergence of the latter in the sample size. This is expected, as KRR-X does not exploit the low-dimensional structure. Finally, compared to the predictor based on OLS in the second step, KRR-Ẑ′ has a substantial predictive advantage, implying the benefit of using KRR to capture the non-linear relationship between the response and the latent factor.

A.2 The effect of dimensionality

We next examine the predictors when the number of features varies as $p \in \{50, 100, 500, 1000, 1500, 3000\}$. For each $p$, we set $n = m = 800$. As shown in Table 3, the predictive performances of KRR-Ẑ′ and its variant, KRR-Ẑ, are nearly identical. KRR-Ẑ′ also performs close to KRR-Z, and their difference gets smaller as $p$ increases. Indeed, as shown in Corollary 5, the prediction error of KRR-Ẑ′ due to predicting $Z$ contains the term $1/p$, which decreases as $p$ grows. Since KRR-Z and LR-Z use $Z$ instead of $X$, their prediction errors remain the same as $p$ increases. We also see that the performance of KRR-X slightly improves as $p$ grows. One explanation could be that more useful information for predicting $Y$ can be found in $X$ as $p$ increases.
The linear predictor that uses PCs still suffers from underfitting.

Table 3: The averaged performance of all the predictors with varying $p$.

Dimension $p$     50           100          500          1000         1500         3000
KRR-$\hat Z'$     1.85 (0.15)  1.44 (0.12)  0.95 (0.06)  0.89 (0.06)  0.87 (0.05)  0.84 (0.05)
KRR-$\hat Z$      1.86 (0.16)  1.43 (0.12)  0.94 (0.06)  0.89 (0.06)  0.87 (0.05)  0.83 (0.05)
KRR-$Z$           0.75 (0.04)  0.74 (0.04)  0.74 (0.04)  0.75 (0.03)  0.75 (0.04)  0.74 (0.04)
KRR-$X$           2.32 (0.17)  2.14 (0.13)  1.95 (0.10)  1.94 (0.12)  1.90 (0.10)  1.83 (0.09)
LR-$Z$            3.60 (0.15)  3.60 (0.14)  3.59 (0.14)  3.62 (0.12)  3.60 (0.15)  3.60 (0.14)

A.3 The effect of signal-to-noise ratio

We investigate how the signal-to-noise ratio (SNR) for predicting the latent factor $Z$, defined as
\[
\mathrm{SNR} := \lambda_r(A \Sigma_Z A^\top) / \|\Sigma_W\|_{\mathrm{op}},
\]
affects the prediction performance of the different predictors. A higher SNR means that $X$ contains more information on $Z$, which leads to better prediction of $Z$ and in turn improves the prediction of $Y$. It is therefore expected that only KRR-$\hat Z'$, KRR-$\hat Z$ and KRR-$X$ behave differently as the SNR changes. In this experiment, we vary the SNR by multiplying $A$ by a scalar $\alpha \in \{0.1, 0.2, 0.3, 0.5, 0.75, 1, 1.5\}$. In all settings, we fix $n = m = 800$ and $p = 2000$. The obtained results are reported in Table 4.
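Since $\lambda_r(\alpha A \Sigma_Z (\alpha A)^\top) = \alpha^2 \lambda_r(A \Sigma_Z A^\top)$, rescaling $A$ by $\alpha$ scales the SNR by $\alpha^2$. A quick numerical check of the definition (the choices of $A$, $\Sigma_Z$ and $\Sigma_W$ below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
p, r = 100, 3
A = rng.normal(size=(p, r))
Sigma_Z = np.eye(r)        # assumed latent-factor covariance
Sigma_W = np.eye(p)        # assumed noise covariance

def snr(alpha):
    M = (alpha * A) @ Sigma_Z @ (alpha * A).T
    lam_r = np.linalg.eigvalsh(M)[-r]            # r-th largest eigenvalue
    return lam_r / np.linalg.norm(Sigma_W, 2)    # ||Sigma_W||_op

# Rescaling A by alpha multiplies the SNR by alpha^2.
```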
As expected, increasing the SNR leads to better prediction performance for KRR-$\hat Z'$, KRR-$\hat Z$ and KRR-$X$, while the prediction errors of the other predictors remain unchanged. However, KRR-$X$ continues to perform poorly even in the settings with large SNR, due to underfitting. On the contrary, when the SNR is sufficiently large, the prediction error of KRR-$\hat Z'$ becomes very small, approaching that of KRR-$Z$. We also observe from the comparison between KRR-$\hat Z'$ and KRR-$\hat Z$ that the benefit of using auxiliary data becomes more visible for small SNRs.

Table 4: The averaged performance of all the predictors with varying $\alpha$.

SNR parameter $\alpha$  0.1          0.2          0.3          0.5          0.75         1            1.5
KRR-$\hat Z'$           2.59 (0.13)  1.46 (0.10)  1.16 (0.07)  0.94 (0.06)  0.87 (0.06)  0.85 (0.05)  0.82 (0.05)
KRR-$\hat Z$            2.83 (0.15)  1.56 (0.13)  1.14 (0.06)  0.93 (0.06)  0.86 (0.05)  0.84 (0.05)  0.82 (0.05)
KRR-$Z$                 0.74 (0.04)  0.75 (0.04)  0.74 (0.04)  0.74 (0.04)  0.74 (0.04)  0.74 (0.04)  0.74 (0.04)
KRR-$X$                 3.79 (0.16)  3.74 (0.16)  3.47 (0.14)  2.53 (0.11)  2.11 (0.10)  1.88 (0.08)  1.39 (0.07)
LR-$Z$                  3.62 (0.14)  3.59 (0.15)  3.61 (0.14)  3.59 (0.14)  3.58 (0.13)  3.61 (0.15)  3.61 (0.15)

B Main Proofs

Notation. Throughout this appendix, given i.i.d. copies $U_1, \dots, U_n$ of any random element $U$, for any measurable function $f$ we denote the empirical mean by
\[
\mathbb{E}_n[f(U)] := \frac{1}{n} \sum_{i=1}^n f(U_i).    (60)
\]
We use $\mathrm{Tr}(\cdot)$ to denote the trace of any trace-class operator. In addition to denoting the operator norm of real-valued matrices, we use $\|\cdot\|_{\mathrm{op}}$ to denote the operator norm of operators from $\mathcal{H}_K$ to $\mathcal{H}_K$. For any Hilbert–Schmidt operator $A : \mathcal{H}_K \to \mathcal{H}_K$, we write $\|A\|_{\mathrm{HS}}$ for the Hilbert–Schmidt norm, given by $\|A\|_{\mathrm{HS}}^2 = \mathrm{Tr}(A^\top A)$. The notation $I$ is used for the identity operator, whose domain and range may vary with the context. Unless otherwise specified, we use $\mathbb{E}$ either for the expectation conditional on the observations $\mathcal{D} := \{(Y_i, X_i)\}_{i=1}^n$ and $\hat g$, or for the expectation over all involved random quantities when the context is clear.
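For matrices (finite-dimensional operators) these quantities are directly computable, and $\|A\|_{\mathrm{op}} \le \|A\|_{\mathrm{HS}}$ always holds; a small sanity check (the matrix below is an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(6, 6))

op_norm = np.linalg.norm(A, 2)          # operator norm = largest singular value
hs_norm = np.sqrt(np.trace(A.T @ A))    # Hilbert-Schmidt norm, sqrt(Tr(A^T A))

# The HS norm coincides with the Frobenius norm and dominates the operator norm.
```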
Recall that the $\ell_2$ space is
\[
\ell_2(\mathbb{N}) := \Big\{ x = (x_1, x_2, \dots)^\top \in \mathbb{R}^\infty : \sum_{i=1}^\infty x_i^2 < \infty \Big\},
\]
equipped with the inner product $\langle u, v \rangle_2 = \sum_{i=1}^\infty u_i v_i$. For any two elements $a, b \in \ell_2(\mathbb{N})$, we regard $ab^\top$ as an operator from $\ell_2(\mathbb{N})$ to $\ell_2(\mathbb{N})$, defined as $(ab^\top) v := a \langle b, v \rangle_2$ for any $v \in \ell_2(\mathbb{N})$.

Throughout, we fix the generic predictor $\hat g : \mathcal{X} \to \mathcal{Z}$ that is constructed independently of $\mathcal{D}$. All our statements, unless otherwise specified, should be understood to hold almost surely, conditional on $\hat g$. For any $f \in L_2(\rho)$, define the function $h_f := h_{f \circ \hat g} : \mathcal{Z} \times \mathcal{X} \to \mathbb{R}$ as
\[
h_f(z, x) = \big( f^*(z) - (f \circ \hat g)(x) \big)^2 - \big( f^*(z) - (f_{\mathcal{H}} \circ \hat g)(x) \big)^2.    (61)
\]
Further define $\ell_f := \ell_{f \circ \hat g} : \mathbb{R} \times \mathcal{X} \to \mathbb{R}$ as
\[
\ell_f(y, x) = \big( y - (f \circ \hat g)(x) \big)^2 - \big( y - (f_{\mathcal{H}} \circ \hat g)(x) \big)^2.    (62)
\]
From (66), we know that for any $f \in L_2(\rho)$,
\[
\mathbb{E}\big[ (Y - (f \circ \hat g)(X))^2 - (Y - f^*(Z))^2 \big] = \mathbb{E}\big[ (f \circ \hat g)(X) - f^*(Z) \big]^2    (63)
\]
so that
\begin{align*}
\mathbb{E}[\ell_f(Y, X)] &= \mathbb{E}\big[ (Y - (f \circ \hat g)(X))^2 - (Y - (f_{\mathcal{H}} \circ \hat g)(X))^2 \big] \\
&= \mathbb{E}\big[ (Y - (f \circ \hat g)(X))^2 - (Y - f^*(Z))^2 \big] + \mathbb{E}\big[ (Y - f^*(Z))^2 - (Y - (f_{\mathcal{H}} \circ \hat g)(X))^2 \big] \\
&= \mathbb{E}\big[ \big( f^*(Z) - (f \circ \hat g)(X) \big)^2 - \big( f^*(Z) - (f_{\mathcal{H}} \circ \hat g)(X) \big)^2 \big] \\
&= \mathbb{E}[h_f(Z, X)].    (64)
\end{align*}
Additionally, the empirical counterpart of the estimation error can be decomposed as
\begin{align*}
\mathbb{E}_n[\ell_f(Y, X)] &= \mathbb{E}_n\big[ (Y - (f \circ \hat g)(X))^2 - (Y - (f_{\mathcal{H}} \circ \hat g)(X))^2 \big] \\
&= \mathbb{E}_n[h_f(Z, X)] + \frac{2}{n} \sum_{i=1}^n \epsilon_i \big( (f_{\mathcal{H}} \circ \hat g)(X_i) - (f \circ \hat g)(X_i) \big) \\
&= \mathbb{E}_n[h_f(Z, X)] + 2 \mathbb{E}_n\big[ \epsilon (f_{\mathcal{H}} \circ \hat g)(X) - \epsilon (f \circ \hat g)(X) \big],    (65)
\end{align*}
where the $\epsilon_i$'s are i.i.d. copies of the random noise term defined in (4).

B.1 Proof of the excess risk decomposition in (17)

Proof. Pick any deterministic $f : \mathcal{Z} \to \mathbb{R}$. By
adding and subtracting terms, we have
\[
\mathcal{E}(\hat f \circ \hat g) = \mathbb{E}\big( Y - (\hat f \circ \hat g)(X) \big)^2 - \mathbb{E}\big[ Y - (f \circ \hat g)(X) \big]^2 + \mathbb{E}\big[ Y - (f \circ \hat g)(X) \big]^2 - \sigma^2.
\]
Note that
\begin{align*}
\mathbb{E}\big[ Y - (f \circ \hat g)(X) \big]^2 - \sigma^2 &= \mathbb{E}\big[ \big( Y - f^*(Z) + f^*(Z) - (f \circ \hat g)(X) \big)^2 \big] - \sigma^2 \\
&= \mathbb{E}\big[ f^*(Z) - (f \circ \hat g)(X) \big]^2 + 2 \mathbb{E}\big[ \epsilon \big( f^*(Z) - (f \circ \hat g)(X) \big) \big] \\
&= \mathbb{E}\big[ f^*(Z) - (f \circ \hat g)(X) \big]^2,    (66)
\end{align*}
where the penultimate step uses $Y - f^*(Z) = \epsilon$ and the fact that $\epsilon$ is independent of $Z$, $X$ and $\hat g$.

B.2 Proof of Theorem 2: bounding the excess risk

Before proving Theorem 2, we first give a useful lemma that quantifies the relative loss function in (62) along the line segment between any $f \in \mathcal{H}_K$ and $f_{\mathcal{H}}$.

Lemma 2. For any $f \in \mathcal{H}_K$ and $\alpha \in [0, 1]$, define $\tilde f = \alpha f + (1 - \alpha) f_{\mathcal{H}}$. Then we have
\[
\mathbb{E}_n[\ell_{\tilde f}(Y, X)] + \lambda \|\tilde f\|_K^2 - \lambda \|f_{\mathcal{H}}\|_K^2 \le \alpha \big( \mathbb{E}_n[\ell_f(Y, X)] + \lambda \|f\|_K^2 - \lambda \|f_{\mathcal{H}}\|_K^2 \big).
\]

Proof. The proof of Lemma 2 is provided after the main proof of Theorem 2.

Recall $\Delta_{\hat g}$ from (18) and that $\delta_n$ is defined by $\mathcal{R}(\delta_n) = \delta_n$. Define
\[
\delta_{n, \hat g} := (C_0^2 \vee 1)(\gamma_\epsilon^2 \vee 1) \Big( \delta_n \log(1/\eta) + \mathbb{E}\Delta_{\hat g} + \frac{\kappa^2 \vee 1}{n} \log\frac{\kappa^2 \vee 1}{\eta} \Big),    (67)
\]
with $\gamma_\epsilon$ introduced in Assumption 4 and $C_0 > 0$ some sufficiently large constant. Further define the scalars
\[
V_K := \max\{\kappa^2 \|f_{\mathcal{H}}\|_K^2, 1\} (\|f_{\mathcal{H}}\|_K^2 \vee 1),    (68)
\]
\[
\Lambda := \max\{\kappa^2 \|f_{\mathcal{H}}\|_K^2 + \|f^*\|_\infty^2, 1\} \, V_K \, \delta_{n, \hat g}.    (69)
\]
Note that both $\delta_{n, \hat g}$ and $\Lambda$ can be simplified to
\[
C \Big( \delta_n \log(1/\eta) + \mathbb{E}\Delta_{\hat g} + \frac{\log(1/\eta)}{n} \Big),    (70)
\]
for some $C = C(\kappa, \gamma_\epsilon, \|f^*\|_\infty, \|f_{\mathcal{H}}\|_K)$, as stated in Theorem 2. Below we prove Theorem 2 in terms of $\delta_{n, \hat g}$ and $\Lambda$.

Proof of Theorem 2. By the optimality of $\hat f$ and the feasibility of $f_{\mathcal{H}}$ in (12), we have
\[
\mathbb{E}_n[\ell_{\hat f}(Y, X)] + \lambda \|\hat f\|_K^2 - \lambda \|f_{\mathcal{H}}\|_K^2 \le 0.    (71)
\]
In the case $\|f_{\mathcal{H}}\|_K = 0$, choosing $\lambda \to \infty$ together with (71) leads to $\|\hat f\|_K = 0$. By the reproducing property in $\mathcal{H}_K$ and Assumption 2, we have that for any $z \in \mathcal{Z}$,
\[
f_{\mathcal{H}}(z) = \langle f_{\mathcal{H}}, K_z \rangle_K \le \|K_z\|_K \|f_{\mathcal{H}}\|_K = \sqrt{K(z, z)} \, \|f_{\mathcal{H}}\|_K \le \kappa \|f_{\mathcal{H}}\|_K = 0,
\]
so that $f_{\mathcal{H}} \equiv 0$. The same argument yields $\hat f \equiv 0$. Consequently, $\mathbb{E}[\ell_{\hat f}(Y, X)] = 0$ when $\|f_{\mathcal{H}}\|_K = 0$. In the following we focus on the case $\|f_{\mathcal{H}}\|_K > 0$ with the choice of $\lambda$ such that
\[
\lambda \|f_{\mathcal{H}}\|_K^2 > C \Lambda + 4 \|f_{\mathcal{H}} - f^*\|_\rho^2.    (72)
\]
Recall $h_f$ from (61) and $\ell_f$ from (62). Define the local RKHS ball
\[
\mathcal{F}_b := \{ f \in \mathcal{H}_K : \|f - f_{\mathcal{H}}\|_K \le 3 \|f_{\mathcal{H}}\|_K \}    (73)
\]
and recall from (74) that
\[
\bar{\mathcal{F}}_b = \big\{ f \in \mathcal{F}_b : \mathbb{E}[\ell_f(Y, X)] \le 6 \lambda \|f_{\mathcal{H}}\|_K^2 \big\}.    (74)
\]
We first prove a uniform result over $\bar{\mathcal{F}}_b$.
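The relations (63)–(66) are purely algebraic consequences of $Y = f^*(Z) + \epsilon$; for instance, the pointwise identity underlying (65), namely $\ell_f(y, x) = h_f(z, x) + 2 \epsilon \big( (f_{\mathcal{H}} \circ \hat g)(x) - (f \circ \hat g)(x) \big)$ when $y = f^*(z) + \epsilon$, can be verified numerically. The functions below are arbitrary stand-ins for $f^*$, $f$ and $f_{\mathcal{H}}$, and $\hat g$ is taken to be the identity:

```python
import numpy as np

rng = np.random.default_rng(3)
z = rng.normal(size=100)            # latent points (1-d for simplicity)
x = z + 0.1 * rng.normal(size=100)  # observed inputs; hat g is the identity here
eps = rng.normal(size=100)

f_star = np.sin                     # arbitrary stand-ins for f*, f, f_H
f = np.cos
f_H = np.tanh
y = f_star(z) + eps

ell = (y - f(x)) ** 2 - (y - f_H(x)) ** 2                  # (62)
h = (f_star(z) - f(x)) ** 2 - (f_star(z) - f_H(x)) ** 2    # (61)
cross = 2 * eps * (f_H(x) - f(x))

assert np.allclose(ell, h + cross)  # the pointwise decomposition behind (65)
```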
Consider the events
\[
\mathcal{E}(h) := \bigcap_{f \in \mathcal{F}_b} \Big\{ \mathbb{E}[h_f(Z, X)] \le 2 \mathbb{E}_n[h_f(Z, X)] + C \Lambda + 4 \|f_{\mathcal{H}} - f^*\|_\rho^2 \Big\},
\]
\[
\mathcal{E}(\epsilon) := \bigcap_{f \in \bar{\mathcal{F}}_b} \Big\{ 4 \big| \mathbb{E}_n[\epsilon (f_{\mathcal{H}} \circ \hat g)(X) - \epsilon (f \circ \hat g)(X)] \big| \le \lambda \|f_{\mathcal{H}}\|_K^2 \Big\}.
\]
By Corollary 6 and Corollary 7, the event $\mathcal{E}(h) \cap \mathcal{E}(\epsilon)$ holds with probability at least $1 - \eta$. In the rest of the proof we work on the event $\mathcal{E}(h) \cap \mathcal{E}(\epsilon)$. For any $f \in \bar{\mathcal{F}}_b$, we have
\begin{align*}
\mathbb{E}[\ell_f(Y, X)] &= \mathbb{E}[h_f(Z, X)] && \text{by (64)} \\
&\le 2 \mathbb{E}_n[h_f(Z, X)] + C \Lambda + 4 \|f_{\mathcal{H}} - f^*\|_\rho^2 && \text{by } \mathcal{E}(h) \\
&= 2 \mathbb{E}_n[\ell_f(Y, X)] - 4 \mathbb{E}_n[\epsilon (f_{\mathcal{H}} \circ \hat g)(X) - \epsilon (f \circ \hat g)(X)] + C \Lambda + 4 \|f_{\mathcal{H}} - f^*\|_\rho^2 && \text{by (65)} \\
&\le 2 \mathbb{E}_n[\ell_f(Y, X)] + 2 \lambda \|f_{\mathcal{H}}\|_K^2 && \text{by } \mathcal{E}(\epsilon) \text{ and } (72).    (75)
\end{align*}
Note that the conclusion follows trivially if $\hat f \in \bar{\mathcal{F}}_b$. It remains to argue by contradiction when $\hat f \notin \bar{\mathcal{F}}_b$. The latter consists of two cases: (1) $\hat f \in \mathcal{F}_b \setminus \bar{\mathcal{F}}_b$; (2) $\hat f \notin \mathcal{F}_b$.

Case (1). For any $\alpha \in [0, 1]$, let $f_\alpha := \alpha \hat f + (1 - \alpha) f_{\mathcal{H}}$. Since $\mathbb{E}[\ell_{f_{\mathcal{H}}}(Y, X)] = 0$ and $\mathbb{E}[\ell_{\hat f}(Y, X)] > 6 \lambda \|f_{\mathcal{H}}\|_K^2$ (as $\hat f \in \mathcal{F}_b \setminus \bar{\mathcal{F}}_b$), the fact that
\[
\mathbb{E}[\ell_{f_\alpha}(Y, X)] = \mathbb{E}\big[ (Y - (f_\alpha \circ \hat g)(X))^2 - (Y - (f_{\mathcal{H}} \circ \hat g)(X))^2 \big]
\]
is continuous in $\alpha$ implies the existence of $\hat\alpha \in (0, 1)$ such that
\[
\mathbb{E}[\ell_{f_{\hat\alpha}}(Y, X)] = 6 \lambda \|f_{\mathcal{H}}\|_K^2.    (76)
\]
By the convexity of $\mathcal{F}_b$, we have $f_{\hat\alpha} \in \mathcal{F}_b$, hence $f_{\hat\alpha} \in \bar{\mathcal{F}}_b$. Applying Lemma 2 with $\alpha = \hat\alpha$, $\tilde f = f_{\hat\alpha}$ and $f = \hat f$ yields
\[
\mathbb{E}_n[\ell_{f_{\hat\alpha}}(Y, X)] + \lambda \|f_{\hat\alpha}\|_K^2 - \lambda \|f_{\mathcal{H}}\|_K^2 \le \hat\alpha \big\{ \mathbb{E}_n[\ell_{\hat f}(Y, X)] + \lambda \|\hat f\|_K^2 - \lambda \|f_{\mathcal{H}}\|_K^2 \big\}.
\]
Since $\hat\alpha > 0$, if we could show that
\[
\mathbb{E}_n[\ell_{f_{\hat\alpha}}(Y, X)] + \lambda \|f_{\hat\alpha}\|_K^2 - \lambda \|f_{\mathcal{H}}\|_K^2 > 0,    (77)
\]
then (77) also holds for $\hat f$ in lieu of $f_{\hat\alpha}$, which contradicts (71). To prove (77), recalling that $f_{\hat\alpha} \in \bar{\mathcal{F}}_b$, invoking the uniform bound (75) with $f = f_{\hat\alpha}$ gives
\begin{align*}
- 2 \mathbb{E}_n[\ell_{f_{\hat\alpha}}(Y, X)] + 2 \lambda \|f_{\mathcal{H}}\|_K^2 - 2 \lambda \|f_{\hat\alpha}\|_K^2 &\le - \mathbb{E}[\ell_{f_{\hat\alpha}}(Y, X)] + 4 \lambda \|f_{\mathcal{H}}\|_K^2 - 2 \lambda \|f_{\hat\alpha}\|_K^2 \\
&= - 2 \lambda \|f_{\mathcal{H}}\|_K^2 - 2 \lambda \|f_{\hat\alpha}\|_K^2 && \text{by (76)} \\
&< 0.
\end{align*}

Case (2). Note that $\hat f \notin \mathcal{F}_b$ implies $\|\hat f - f_{\mathcal{H}}\|_K > 3 \|f_{\mathcal{H}}\|_K$. By letting
\[
\tilde\alpha = \frac{3 \|f_{\mathcal{H}}\|_K}{\|\hat f - f_{\mathcal{H}}\|_K} \in (0, 1),
\]
the function
\[
f_{\tilde\alpha} := \tilde\alpha \hat f + (1 - \tilde\alpha) f_{\mathcal{H}} = f_{\mathcal{H}} + \frac{3 \|f_{\mathcal{H}}\|_K}{\|\hat f - f_{\mathcal{H}}\|_K} (\hat f - f_{\mathcal{H}})
\]
also belongs to $\mathcal{H}_K$, due to the convexity of $\mathcal{H}_K$. (The RKHS $\mathcal{H}_K$ is the completion of the linear span of the functions $K(v, \cdot)$ for $v \in \mathcal{Z}$, hence must be convex.) Moreover, we have $\|f_{\tilde\alpha} - f_{\mathcal{H}}\|_K = 3 \|f_{\mathcal{H}}\|_K$, so that $f_{\tilde\alpha} \in \mathcal{F}_b$. Similarly to case (1), it suffices to prove
\[
\mathbb{E}_n[\ell_{f_{\tilde\alpha}}(Y, X)] + \lambda \|f_{\tilde\alpha}\|_K^2 - \lambda \|f_{\mathcal{H}}\|_K^2 > 0,    (78)
\]
which, by applying Lemma 2 with $\alpha = \tilde\alpha$, $\tilde f = f_{\tilde\alpha}$ and $f = \hat f$, would imply that (78) also holds for $\hat f$ in lieu of $f_{\tilde\alpha}$, thereby establishing a contradiction with (71). If $f_{\tilde\alpha} \notin \bar{\mathcal{F}}_b$, then repeating the arguments used in case (1) proves (78). It remains to prove (78) for $f_{\tilde\alpha} \in \bar{\mathcal{F}}_b$. Observe that
\begin{align*}
- \mathbb{E}[\ell_{f_{\tilde\alpha}}(Y, X)] &= - \mathbb{E}[h_{f_{\tilde\alpha}}(Z, X)] && \text{by (64)} \\
&= \mathbb{E}\big[ (f_{\mathcal{H}} \circ \hat g)(X) - f^*(Z) \big]^2 - \mathbb{E}\big[ (f_{\tilde\alpha} \circ \hat g)(X) - f^*(Z) \big]^2 \\
&\le 2 \|f_{\mathcal{H}}\|_K^2 \mathbb{E}\Delta_{\hat g} + 2 \|f_{\mathcal{H}} - f^*\|_\rho^2 && \text{by (19) with } \theta = 1.    (79)
\end{align*}
Meanwhile, since $f_{\tilde\alpha} \in \bar{\mathcal{F}}_b$, invoking (75) with $f = f_{\tilde\alpha}$ gives
\begin{align*}
- 2 \mathbb{E}_n[\ell_{f_{\tilde\alpha}}(Y, X)] + 2 \lambda \|f_{\mathcal{H}}\|_K^2 - 2 \lambda \|f_{\tilde\alpha}\|_K^2 &\le - \mathbb{E}[\ell_{f_{\tilde\alpha}}(Y, X)] + 4 \lambda \|f_{\mathcal{H}}\|_K^2 - 2 \lambda \|f_{\tilde\alpha}\|_K^2 \\
&\le 2 \|f_{\mathcal{H}}\|_K^2 \mathbb{E}\Delta_{\hat g} + 2 \|f_{\mathcal{H}} - f^*\|_\rho^2 + 4 \lambda \|f_{\mathcal{H}}\|_K^2 - 2 \lambda \|f_{\tilde\alpha}\|_K^2 && \text{by (79)} \\
&\le 5 \lambda \|f_{\mathcal{H}}\|_K^2 - 2 \lambda \|f_{\tilde\alpha}\|_K^2 && \text{by (72)} \\
&< 0,
\end{align*}
where the last step follows from $\|f_{\tilde\alpha}\|_K \ge \|f_{\tilde\alpha} - f_{\mathcal{H}}\|_K - \|f_{\mathcal{H}}\|_K = 2 \|f_{\mathcal{H}}\|_K > 0$. This proves (78), thereby completing the proof.

Proof of Lemma 2. For any $f_1, f_2 \in L_2(\rho)$ and $\alpha \in [0, 1]$, by the convexity of the quadratic loss function, we have
\begin{align*}
&\big( Y - ((\alpha f_1 + (1 - \alpha) f_2) \circ \hat g)(X) \big)^2 - \big( Y - (f_{\mathcal{H}} \circ \hat g)(X) \big)^2 \\
&\le \alpha \big( Y - (f_1 \circ \hat g)(X) \big)^2 + (1 - \alpha) \big( Y - (f_2 \circ \hat g)(X) \big)^2 - \big( Y - (f_{\mathcal{H}} \circ \hat g)(X) \big)^2 \\
&= \alpha \Big[ \big( Y - (f_1 \circ \hat g)(X) \big)^2 - \big( Y - (f_{\mathcal{H}} \circ \hat g)(X) \big)^2 \Big] + (1 - \alpha) \Big[ \big( Y - (f_2 \circ \hat g)(X) \big)^2 - \big( Y - (f_{\mathcal{H}} \circ \hat g)(X) \big)^2 \Big] \\
&= \alpha \ell_{f_1}(Y, X) + (1 - \alpha) \ell_{f_2}(Y, X).
\end{align*}
Applying this inequality with $f_1 = f$ and $f_2 = f_{\mathcal{H}}$ and taking $\mathbb{E}_n$ on both sides, we obtain
\[
\mathbb{E}_n[\ell_{\tilde f}(Y, X)] \le \alpha \mathbb{E}_n[\ell_f(Y, X)].    (80)
\]
On the other hand, note that
\begin{align*}
\|\tilde f\|_K^2 = \|\alpha f + (1 - \alpha) f_{\mathcal{H}}\|_K^2 &\le \big( \alpha \|f\|_K + (1 - \alpha) \|f_{\mathcal{H}}\|_K \big)^2 && \text{by the triangle inequality} \\
&\le \alpha \|f\|_K^2 + (1 - \alpha) \|f_{\mathcal{H}}\|_K^2 && \text{by the convexity of the quadratic function}.    (81)
\end{align*}
Combining (80) and (81) completes the proof.

B.2.1 Uniform upper bounds of $\mathbb{E}[h_f(Z, X)]$ over $f \in \mathcal{F}_b$

In this section, we apply Lemma 22 to provide uniform control of $\mathbb{E}[h_f(Z, X)]$ over $f \in \mathcal{F}_b$ by using the technique of local Rademacher complexity, developed by Bartlett et al. (2005). For a generic function space $\mathcal{F}$ from $\mathbb{R}^r$ to $\mathbb{R}$ and $n$ fixed points $\{v_1, \dots, v_n\}$, the empirical Rademacher complexity is defined as
\[
\hat{\mathcal{R}}_n(\mathcal{F}) := \mathbb{E}_\varepsilon \Big[ \sup_{f \in \mathcal{F}} \frac{1}{n} \sum_{i=1}^n \varepsilon_i f(v_i) \Big],    (82)
\]
where the expectation $\mathbb{E}_\varepsilon$ is taken over $n$ i.i.d. Rademacher variables $\varepsilon_1, \dots, \varepsilon_n$ with $\mathbb{P}(\varepsilon_1 = 1) = \mathbb{P}(\varepsilon_1 = -1) = 1/2$. Suppose that $\{v_1, \dots, v_n\}$ are independently generated according to some distribution $\varrho$. The Rademacher complexity is defined as
\[
\mathcal{R}_n(\mathcal{F}) := \mathbb{E}_\varrho\big[ \hat{\mathcal{R}}_n(\mathcal{F}) \big],
\]
where $\mathbb{E}_\varrho$ is taken with respect to $\{v_1, \dots, v_n\}$. Let $T(\cdot) : \mathcal{F} \to \mathbb{R}^+$ be some functional. Our results will be based on the local Rademacher complexity, which typically takes the form
\[
\mathcal{R}_n(\{ f \in \mathcal{F} : T(f) \le \delta \}), \quad \text{for all } \delta \ge 0.    (83)
\]
The sub-root function is an essential ingredient for establishing fast rates in
statistical learning problems. Precisely, when deriving the bound on the error of a learning algorithm, the key step is to select an appropriate sub-root function, and its corresponding fixed point will appear in the prediction error bound. In Appendix C.1 we review the definition and some useful properties of sub-root functions.

For any $f \in L_2(\rho)$, define the function $\xi_f := \xi_{f \circ \hat g} : \mathcal{X} \to \mathbb{R}$ as
\[
\xi_f(x) = (f \circ \hat g)(x) - (f_{\mathcal{H}} \circ \hat g)(x)    (84)
\]
and the function class $\Xi_b$ as
\[
\Xi_b := \{ \xi_f : f \in \mathcal{F}_b \}.    (85)
\]
Consider the function
\[
\psi_x(\delta) := \mathcal{R}_n\big( \{ \xi_f \in \Xi_b : \mathbb{E}[\xi_f^2(X)] \le \delta \} \big), \quad \text{for all } \delta \ge 0.    (86)
\]
It is verified in Lemma 21 that $\psi_x(\delta)$ is a sub-root function. Let $\delta_x$ be its fixed point, that is, $\psi_x(\delta_x) = \delta_x$. The following lemma bounds $\mathbb{E}[h_f(Z, X)]$ from above by its empirical counterpart $\mathbb{E}_n[h_f(Z, X)]$, the fixed point $\delta_x$, the kernel-related latent error $\mathbb{E}\Delta_{\hat g}$ and the approximation error $\|f_{\mathcal{H}} - f^*\|_\rho^2$, uniformly over $f \in \mathcal{F}_b$.

Lemma 3. Grant Assumptions 1 and 2. Fix any $\eta \in (0, 1)$. With probability at least $1 - \eta$, for any $f \in \mathcal{F}_b$, one has
\[
\mathbb{E}[h_f(Z, X)] \le 2 \mathbb{E}_n[h_f(Z, X)] + c_1 \max\{\kappa^2 \|f_{\mathcal{H}}\|_K^2 + \|f^*\|_\infty^2, 1\} \delta_x + c_2 \kappa \|f_{\mathcal{H}}\|_K (\kappa \|f_{\mathcal{H}}\|_K + \|f^*\|_\infty) \frac{\log(1/\eta)}{n} + 4 \|f_{\mathcal{H}}\|_K^2 \mathbb{E}\Delta_{\hat g} + 4 \|f_{\mathcal{H}} - f^*\|_\rho^2.
\]
Here $c_1$ and $c_2$ are some absolute positive constants.

Proof. Recall $h_f$ from (61), with its corresponding function class
\[
\mathcal{F}_h := \{ h_f : f \in \mathcal{F}_b \}.
\]
Define the scalar $V := 3 \kappa \|f_{\mathcal{H}}\|_K (5 \kappa \|f_{\mathcal{H}}\|_K + 2 \|f^*\|_\infty)$. Further define the functional $T : \mathcal{F}_h \to \mathbb{R}^+$ as
\[
T(h_f) := V \big( \mathbb{E}[h_f(Z, X)] + 4 \|f_{\mathcal{H}}\|_K^2 \mathbb{E}\Delta_{\hat g} + 4 \|f_{\mathcal{H}} - f^*\|_\rho^2 \big)    (87)
\]
as well as the local Rademacher complexity of $\mathcal{F}_h$ as $\mathcal{R}_n(\{ h_f \in \mathcal{F}_h : T(h_f) \le \delta \})$. We aim to apply the first statement in Lemma 22 with $\mathcal{F} = \mathcal{F}_h$, $T = T$, $L = 4 \|f_{\mathcal{H}}\|_K^2 \mathbb{E}\Delta_{\hat g} + 4 \|f_{\mathcal{H}} - f^*\|_\rho^2$ and $B = -a = b = V$. The remaining proof is divided into four steps.

Step 1: Verification of the boundedness of $\mathcal{F}_h$. By the reproducing property, for every $z \in \mathcal{Z}$ and $f \in \mathcal{H}_K$, we have
\[
|f(z)| = |\langle f, K_z \rangle_K| \le \|K_z\|_K \|f\|_K \le \kappa \|f\|_K,
\]
where $K_z = K(\cdot, z) \in \mathcal{H}_K$ and the last inequality holds due to $\|K_z\|_K^2 = K(z, z) \le \kappa^2$ from Assumption 2. Moreover, for any $f \in \mathcal{F}_b$,
\[
\|f\|_K \le \|f_{\mathcal{H}}\|_K + \|f - f_{\mathcal{H}}\|_K \le 4 \|f_{\mathcal{H}}\|_K.
\]
Then the function class $\mathcal{F}_b$ can be uniformly bounded as
\[
\|f\|_\infty \le \kappa \|f\|_K \le 4 \kappa \|f_{\mathcal{H}}\|_K, \quad \text{for any } f \in \mathcal{F}_b.    (88)
\]
For every $f \in \mathcal{F}_b$, we also have
\[
\|f - f_{\mathcal{H}}\|_\infty \le \kappa \|f - f_{\mathcal{H}}\|_K \le 3 \kappa \|f_{\mathcal{H}}\|_K.    (89)
\]
Combining all these bounds, for every $f \in \mathcal{F}_b$ and $(z, x) \in \mathcal{Z} \times \mathcal{X}$, we have
\begin{align*}
|h_f(z, x)| &= \big| \big( f^*(z) - (f \circ \hat g)(x) \big)^2 - \big( f^*(z) - (f_{\mathcal{H}} \circ \hat g)(x) \big)^2 \big| \\
&\le \big| (f_{\mathcal{H}} \circ \hat g)(x) - (f \circ \hat g)(x) \big| \, \big( |(f \circ \hat g)(x)| + |(f_{\mathcal{H}} \circ \hat g)(x)| + 2 |f^*(z)| \big) \\
&\le 3 \kappa \|f_{\mathcal{H}}\|_K (5 \kappa \|f_{\mathcal{H}}\|_K + 2 \|f^*\|_\infty).    (90)
\end{align*}
This shows that the function class $\mathcal{F}_h$ is uniformly bounded within $[-V, V]$.

Step 2: Bounding the variance of $h_f(Z, X)$. To apply Lemma 22, we need to bound the variance of $h_f(Z, X)$ from above by $T(h_f)$. To this end, note that
\begin{align*}
\mathrm{Var}(h_f(Z, X)) &= \mathrm{Var}\Big( \big( f^*(Z) - (f \circ \hat g)(X) \big)^2 - \big( f^*(Z) - (f_{\mathcal{H}} \circ \hat g)(X) \big)^2 \Big) \\
&\le \mathbb{E}\Big[ \big( f^*(Z) - (f \circ \hat g)(X) \big)^2 - \big( f^*(Z) - (f_{\mathcal{H}} \circ \hat g)(X) \big)^2 \Big]^2 \\
&\le V \, \mathbb{E}\Big| \big( f^*(Z) - (f \circ \hat g)(X) \big)^2 - \big( f^*(Z) - (f_{\mathcal{H}} \circ \hat g)(X) \big)^2 \Big| && \text{by (90)}.
\end{align*}
Since
\begin{align*}
\mathbb{E}\Big| \big( f^*(Z) - (f \circ \hat g)(X) \big)^2 - \big( f^*(Z) - (f_{\mathcal{H}} \circ \hat g)(X) \big)^2 \Big| &\le \mathbb{E}\Big[ \big( f^*(Z) - (f \circ \hat g)(X) \big)^2 - \big( f^*(Z) - (f_{\mathcal{H}} \circ \hat g)(X) \big)^2 \Big] + 2 \mathbb{E}\big[ f^*(Z) - (f_{\mathcal{H}} \circ \hat g)(X) \big]^2 \\
&= \mathbb{E}[h_f(Z, X)] + 2 \mathbb{E}\big[ f^*(Z) - (f_{\mathcal{H}} \circ \hat g)(X) \big]^2 && \text{by (61)} \\
&\le \mathbb{E}[h_f(Z, X)] + 4 \|f_{\mathcal{H}}\|_K^2 \mathbb{E}\Delta_{\hat g} + 4 \mathbb{E}\big[ f_{\mathcal{H}}(Z) - f^*(Z) \big]^2,
\end{align*}
where the last step uses (19) with $\theta = 1$, we have
\[
\mathrm{Var}(h_f(Z, X)) \le V \big( \mathbb{E}[h_f(Z, X)] + 4 \|f_{\mathcal{H}}\|_K^2 \mathbb{E}\Delta_{\hat g} + 4 \|f_{\mathcal{H}} - f^*\|_\rho^2 \big) \overset{(87)}{=} T(h_f).
\]
This further implies that the variance condition in Lemma 22 holds with $B = V$ and $L = 4 \|f_{\mathcal{H}}\|_K^2 \mathbb{E}\Delta_{\hat g} + 4 \|f_{\mathcal{H}} - f^*\|_\rho^2$ for the functional $T$ in (87).

Step 3: Choosing a suitable sub-root function. To apply Lemma 22, we need to find a suitable sub-root function $\psi(\delta)$, with fixed point $\delta^\star$, such that for all $\delta \ge \delta^\star$,
\[
\psi(\delta) \ge V \mathcal{R}_n\big( \{ h_f \in \mathcal{F}_h : T(h_f) \le \delta \} \big).    (91)
\]
To this end, we notice that for
any $f \in \mathcal{F}_b$,
\begin{align*}
\frac{V}{2} \mathbb{E}[\xi_f^2(X)] &= \frac{V}{2} \mathbb{E}\big[ (f \circ \hat g)(X) - (f_{\mathcal{H}} \circ \hat g)(X) \big]^2 \\
&\le V \mathbb{E}\big[ (f \circ \hat g)(X) - f^*(Z) \big]^2 + V \mathbb{E}\big[ f^*(Z) - (f_{\mathcal{H}} \circ \hat g)(X) \big]^2 \\
&= V \mathbb{E}[h_f(Z, X)] + 2 V \mathbb{E}\big[ f^*(Z) - (f_{\mathcal{H}} \circ \hat g)(X) \big]^2 \\
&\le V \big( \mathbb{E}[h_f(Z, X)] + 4 \|f_{\mathcal{H}}\|_K^2 \mathbb{E}\Delta_{\hat g} + 4 \|f_{\mathcal{H}} - f^*\|_\rho^2 \big) && \text{by (19) with } \theta = 1 \\
&\overset{(87)}{=} T(h_f).
\end{align*}
It follows that for any $\delta \ge 0$,
\begin{align*}
\mathcal{R}_n\big( \{ h_f \in \mathcal{F}_h : T(h_f) \le \delta \} \big) &\le \mathcal{R}_n\big( \{ h_f \in \mathcal{F}_h : V \mathbb{E}[\xi_f^2(X)] \le 2 \delta \} \big) \\
&= \mathbb{E}\Big[ \sup_{f \in \mathcal{F}_b, \, V \mathbb{E}[\xi_f^2(X)] \le 2\delta} \frac{1}{n} \sum_{i=1}^n \varepsilon_i h_f(Z_i, X_i) \Big].    (92)
\end{align*}
Observe that, for any $z \in \mathcal{Z}$, $x \in \mathcal{X}$ and $f_1, f_2 \in \mathcal{F}_b$,
\begin{align*}
|h_{f_1}(z, x) - h_{f_2}(z, x)| &= \big| \big( f^*(z) - (f_1 \circ \hat g)(x) \big)^2 - \big( f^*(z) - (f_2 \circ \hat g)(x) \big)^2 \big| \\
&\le \big| (f_1 \circ \hat g)(x) + (f_2 \circ \hat g)(x) - 2 f^*(z) \big| \, \big| (f_1 \circ \hat g)(x) - (f_2 \circ \hat g)(x) \big| \\
&\le (8 \kappa \|f_{\mathcal{H}}\|_K + 2 \|f^*\|_\infty) \big| (f_1 \circ \hat g)(x) - (f_{\mathcal{H}} \circ \hat g)(x) - (f_2 \circ \hat g)(x) + (f_{\mathcal{H}} \circ \hat g)(x) \big| \\
&= (8 \kappa \|f_{\mathcal{H}}\|_K + 2 \|f^*\|_\infty) \, |\xi_{f_1}(x) - \xi_{f_2}(x)|.    (93)
\end{align*}
This implies that $h_f(Z_i, X_i)$ is $(8 \kappa \|f_{\mathcal{H}}\|_K + 2 \|f^*\|_\infty)$-Lipschitz in $\xi_f(X_i)$ for all $i \in [n]$. We next apply the Ledoux–Talagrand contraction inequality in Lemma 23 with $d = n$,
\[
\theta_j = \xi_f(X_j) = (f \circ \hat g)(X_j) - (f_{\mathcal{H}} \circ \hat g)(X_j),
\]
\[
\psi_j(\theta_j) = h_f(Z_j, X_j) = \big( f^*(Z_j) - (f_{\mathcal{H}} \circ \hat g)(X_j) - \theta_j \big)^2 - \big( f^*(Z_j) - (f_{\mathcal{H}} \circ \hat g)(X_j) \big)^2
\]
for all $j \in [n]$, as well as $\mathcal{T} \subset \mathbb{R}^n$ chosen as
\[
\big\{ (\xi_f(X_1), \dots, \xi_f(X_n))^\top : f \in \mathcal{F}_b, \ V \mathbb{E}[\xi_f^2(X)] \le 2 \delta \big\}.
\]
Note that (93) ensures that $\psi_j(\theta_j)$ is $(8 \kappa \|f_{\mathcal{H}}\|_K + 2 \|f^*\|_\infty)$-Lipschitz. Invoking Lemma 23 yields
\[
\mathbb{E}\Big[ \sup_{f \in \mathcal{F}_b, \, V \mathbb{E}[\xi_f^2(X)] \le 2\delta} \frac{1}{n} \sum_{i=1}^n \varepsilon_i h_f(Z_i, X_i) \Big] \le (8 \kappa \|f_{\mathcal{H}}\|_K + 2 \|f^*\|_\infty) \, \mathbb{E}\Big[ \sup_{f \in \mathcal{F}_b, \, V \mathbb{E}[\xi_f^2(X)] \le 2\delta} \frac{1}{n} \sum_{i=1}^n \varepsilon_i \xi_f(X_i) \Big]    (94)
\]
so that, together with (92),
\[
\mathcal{R}_n\big( \{ h_f \in \mathcal{F}_h : T(h_f) \le \delta \} \big) \le (8 \kappa \|f_{\mathcal{H}}\|_K + 2 \|f^*\|_\infty) \, \mathcal{R}_n\big( \{ \xi_f : f \in \mathcal{F}_b, \ V \mathbb{E}[\xi_f^2(X)] \le 2 \delta \} \big).    (95)
\]
To find a sub-root function that meets requirement (91), recall that the function $\psi_x(\cdot)$ in (86) is sub-root. It follows from (95) that
\[
V \mathcal{R}_n\big( \{ h_f \in \mathcal{F}_h : T(h_f) \le \delta \} \big) \le (8 \kappa \|f_{\mathcal{H}}\|_K + 2 \|f^*\|_\infty) V \psi_x\Big( \frac{2 \delta}{V} \Big) =: \psi(\delta)    (96)
\]
for any $\delta \ge 0$. According to Lemma 19, the function $\psi(\delta)$ is also sub-root in $\delta$.

Step 4: Bounding the fixed point of the sub-root function in (96). By using Lemma 19 again, the fixed point of (96), defined by $\delta^\star = \psi(\delta^\star) = (8 \kappa \|f_{\mathcal{H}}\|_K + 2 \|f^*\|_\infty) V \psi_x(2 \delta^\star / V)$, satisfies
\[
\delta^\star \le \max\big\{ (16 \kappa \|f_{\mathcal{H}}\|_K + 4 \|f^*\|_\infty)^2, 1 \big\} \, \frac{V}{2} \, \delta_x,
\]
where $\delta_x$ is the fixed point of $\psi_x(\cdot)$ in (86). Finally, the proof is completed by invoking Lemma 22 with $-a = b = B = V$, $L = 4 \|f_{\mathcal{H}}\|_K^2 \mathbb{E}\Delta_{\hat g} + 4 \|f_{\mathcal{H}} - f^*\|_\rho^2$ and $\theta = 2$.
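The empirical Rademacher complexity (82) can be estimated by Monte Carlo over the Rademacher signs; the sketch below does so for a small finite function class. The class and the sample points are illustrative assumptions (for the RKHS balls used in this appendix one would instead exploit the kernel expression derived in Appendix B.3):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
v = rng.normal(size=n)                 # fixed sample points v_1, ..., v_n

# A small function class F = {sin, cos, tanh, identity} evaluated on v.
F = np.stack([np.sin(v), np.cos(v), np.tanh(v), v])   # shape (|F|, n)

def empirical_rademacher(F, n_mc=2000):
    total = 0.0
    for _ in range(n_mc):
        eps = rng.choice([-1.0, 1.0], size=F.shape[1])
        total += np.max(F @ eps) / F.shape[1]  # sup_f (1/n) sum_i eps_i f(v_i)
    return total / n_mc

R_hat = empirical_rademacher(F)
# For a finite class, R_hat is of order sqrt(log|F| / n).
```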
B.2.2 Bounding from above the fixed point of the local Rademacher complexity $\psi_x(\delta)$ by that of the kernel complexity $\mathcal{R}_x(\delta)$

This section derives an upper bound on $\delta_x$, the fixed point of $\psi_x(\cdot)$ in (86), via a series of useful lemmas in Appendix B.3, followed by a corollary of Lemma 3. Recall $\Delta_{\hat g}$ from (18). Further recall $\delta_{n, \hat g}$ and $V_K$ from (67) and (68), respectively.

Lemma 4. Grant Assumptions 1, 2 and 3. For any $\eta \in (0, 1)$, with probability at least $1 - \eta$, one has $\delta_x \le C V_K \delta_{n, \hat g}$.

Proof. Recalling $\hat{\mathcal{R}}_n$ from (82), we first link $\psi_x(\delta)$ to its data-dependent version
\[
\hat\psi_x(\delta) = \hat{\mathcal{R}}_n\big( \{ \xi_f \in \Xi_b : \mathbb{E}_n[\xi_f^2(X)] \le \delta \} \big).    (97)
\]
By (89), the function class $\Xi_b$ is uniformly bounded, with $\|\xi_f\|_\infty \le 3 \kappa \|f_{\mathcal{H}}\|_K$ for any $f \in \mathcal{F}_b$. Then, applying Lemma 9 to $\Xi_b$ with $G = 3 \kappa \|f_{\mathcal{H}}\|_K$, for any $\eta \in (0, 1)$ it holds with probability at least $1 - \eta$ that
\[
\psi_x(\delta) \le 2 \sqrt{2} \, \hat\psi_x(\delta) + 6 \kappa \|f_{\mathcal{H}}\|_K \frac{\log(2/\eta)}{n}    (98)
\]
whenever
\[
\delta \ge c_1 \kappa \|f_{\mathcal{H}}\|_K \psi_x(\delta) + c_2 \kappa^2 \|f_{\mathcal{H}}\|_K^2 \frac{\log(2/\eta)}{n},    (99)
\]
where $c_1$ and $c_2$ are two absolute constants. Second, recall $\hat{\mathcal{R}}_x(\delta)$ from (40) and $\hat{\mathcal{R}}(\delta)$ from (44). Applying Lemma 10 gives
\[
\hat\psi_x(\delta) \le \sqrt{10} \, (\|f_{\mathcal{H}}\|_K \vee 1) \hat{\mathcal{R}}_x(\delta), \quad \forall \delta > 0,    (100)
\]
which holds almost surely on $X_1, \dots, X_n$. Subsequently, by applying Corollary 8, for any $\eta \in (0, 1)$, whenever
\[
30 \mathbb{E}\Delta_{\hat g} + \frac{560 \kappa^2 \log(2/\eta)}{3 n} \le \delta,    (101)
\]
with probability at least $1 - \eta$, one has
\[
\hat{\mathcal{R}}_x(\delta) \le C \Big( \hat{\mathcal{R}}(\delta) + \sqrt{\frac{\mathbb{E}\Delta_{\hat g}}{n}} + \frac{\kappa \sqrt{\log(2/\eta)}}{n} \Big).    (102)
\]
To proceed, by applying Lemma 13, for any $\eta \in (0, 1)$, it holds with probability at least $1 - \eta$ that
\[
\hat{\mathcal{R}}(\delta) \le C \Big( \mathcal{R}(\delta) + \frac{\kappa \sqrt{\log(2/\eta)}}{n} \Big),    (103)
\]
provided that
\[
n \delta \ge 32 \kappa^2 \log\frac{256 (\kappa^2 \vee 1)}{\eta}    (104)
\]
and
\[
\mathcal{R}(\delta) \le \delta.    (105)
\]
Collecting the bounds (98), (100), (102) and (103) yields
\[
\psi_x(\delta) \le C (\|f_{\mathcal{H}}\|_K \vee 1) \Big( \mathcal{R}(\delta) + \sqrt{\frac{\mathbb{E}\Delta_{\hat g}}{n}} + \frac{\kappa \sqrt{\log(2/\eta)}}{n} \Big)    (106)
\]
provided that $\delta$ satisfies (99), (101), (104) and (105). For any $\delta \ge 0$, define
\[
\psi_1(\delta) := c_1 \max\{\kappa \|f_{\mathcal{H}}\|_K, 1\} \psi_x(\delta) + c_2 \kappa^2 \|f_{\mathcal{H}}\|_K^2 \frac{\log(2/\eta)}{n},
\]
\[
\psi_2(\delta) := C \max\{\kappa \|f_{\mathcal{H}}\|_K, 1\} (\|f_{\mathcal{H}}\|_K \vee 1) \Big( \mathcal{R}(\delta) + \sqrt{\frac{\mathbb{E}\Delta_{\hat g}}{n}} + \frac{\kappa \log(2/\eta)}{n} \Big).
\]
It is easy to see that both $\psi_1$ and $\psi_2$ are sub-root. Write the fixed point of $\psi_1$ as $\delta_1$. Since $\psi_x(\delta) \le \psi_1(\delta)$ for any $\delta \ge 0$, invoking Lemma 18 ensures
\[
\delta_x \le \delta_1.    (107)
\]
So it suffices to derive an upper bound for $\delta_1$. To this end, note that $\delta_{n, \hat g}$ in (67) satisfies conditions (101) and (104) for sufficiently large $C_0$ in (67). Since $\delta_{n, \hat g} \ge \delta_n$, applying Lemma 18 yields $\mathcal{R}(\delta_{n, \hat g}) \le \delta_{n, \hat g}$, so that $\delta_{n, \hat g}$ also satisfies condition (105). By definition of the fixed point, $\delta_1$ satisfies condition (99). If $\delta_1 \le \delta_{n, \hat g}$, the proof is complete. Otherwise, $\delta_1$ satisfies (99), (101), (104) and (105), so that inequality (106) holds for $\delta_1$. By adding and multiplying terms on both sides of (106), we find that
\[
\delta_1 = \psi_1(\delta_1) \le \psi_2(\delta_1).    (108)
\]
Furthermore, the fact that $\mathcal{R}(\delta_{n, \hat g}) \le \delta_{n, \hat g}$, together with the definition of $\delta_{n, \hat g}$ in (67), leads to
\[
\psi_2(\delta_{n, \hat g}) \le C \max\{\kappa \|f_{\mathcal{H}}\|_K, 1\} (\|f_{\mathcal{H}}\|_K \vee 1) \delta_{n, \hat g}.    (109)
\]
Since $\psi_2(\delta) / \sqrt{\delta}$ is non-increasing in $\delta$, we find that
\[
\sqrt{\delta_1} \overset{(108)}{\le} \frac{\psi_2(\delta_1)}{\sqrt{\delta_1}} \le \frac{\psi_2(\delta_{n, \hat g})}{\sqrt{\delta_{n, \hat g}}} \overset{(109)}{\le} C \max\{\kappa \|f_{\mathcal{H}}\|_K, 1\} (\|f_{\mathcal{H}}\|_K \vee 1) \sqrt{\delta_{n, \hat g}},
\]
so that $\delta_1 \le C V_K \delta_{n, \hat g}$, which concludes the bound in Lemma 4.

Combining Lemmas 3 and 4 yields the following corollary.

Corollary 6. Grant Assumptions 1, 2 and 3. Fix any $\eta \in (0, 1)$. With probability at least $1 - \eta$, for any $f \in \mathcal{F}_b$, one has
\[
\mathbb{E}[h_f(Z, X)] \le 2 \mathbb{E}_n[h_f(Z, X)] + C \max\{\kappa^2 \|f_{\mathcal{H}}\|_K^2 + \|f^*\|_\infty^2, 1\} V_K \delta_{n, \hat g} + 4 \|f_{\mathcal{H}} - f^*\|_\rho^2.
\]

Proof. By noting that
\[
\kappa \|f_{\mathcal{H}}\|_K (\kappa \|f_{\mathcal{H}}\|_K + \|f^*\|_\infty) \frac{\log(1/\eta)}{n} + \|f_{\mathcal{H}}\|_K^2 \mathbb{E}\Delta_{\hat g} \le C \max\{\kappa^2 \|f_{\mathcal{H}}\|_K^2 + \|f^*\|_\infty^2, 1\} V_K \delta_{n, \hat g},
\]
we complete the proof by applying Lemmas 3 and 4.
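A sub-root function $\psi$ (nonnegative, nondecreasing, with $\psi(\delta)/\sqrt{\delta}$ nonincreasing) has a unique positive fixed point, and the simple iteration $\delta \leftarrow \psi(\delta)$ converges to it. The sketch below uses an illustrative kernel-complexity-type $\psi$ with an assumed eigenvalue decay $\mu_j = j^{-2}$; it mirrors how fixed points such as $\delta_x$ and $\delta_1$ are defined, and is not any specific quantity from the proofs:

```python
import numpy as np

# Illustrative sub-root function of kernel-complexity type:
# psi(d) = sqrt( (1/n) * sum_j min(d, mu_j) ), with assumed decay mu_j = j^{-2}.
mu = 1.0 / np.arange(1, 201) ** 2
n = 500

def psi(d):
    return np.sqrt(np.sum(np.minimum(d, mu)) / n)

# Fixed-point iteration: since psi(d)/sqrt(d) is nonincreasing, the iteration
# delta <- psi(delta) converges to the unique positive fixed point.
d = 1.0
for _ in range(200):
    d = psi(d)
```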
B.2.3 Local uniform upper bounds of the cross-term

In this section we establish a uniform bound on the cross-term $\mathbb{E}_n[\epsilon (f_{\mathcal{H}} \circ \hat g)(X) - \epsilon (f \circ \hat g)(X)]$ over a local ball. Recall the definitions of the function $\xi_f$ from (84) and the function class $\Xi_b$ from (85). To facilitate the proofs in this section, introduce the empirical covariance operator $\hat T_{x, K}$ with respect to the predicted inputs $\hat Z_i = \hat g(X_i)$ for $i \in [n]$, defined as
\[
\hat T_{x, K} : \mathcal{H}_K \to \mathcal{H}_K, \qquad \hat T_{x, K} f := \frac{1}{n} \sum_{i=1}^n f(\hat Z_i) K(\hat Z_i, \cdot).    (110)
\]
The operator $\hat T_{x, K}$ has the following useful properties. First, by the reproducing property in $\mathcal{H}_K$, for any $f, g \in \mathcal{H}_K$, we have $\langle f, \hat T_{x, K} g \rangle_K = \frac{1}{n} \sum_{i=1}^n f(\hat Z_i) g(\hat Z_i)$, so that
\[
\mathbb{E}_n\big[ (f \circ \hat g)(X) \big]^2 = \frac{1}{n} \sum_{i=1}^n f^2(\hat Z_i) = \langle f, \hat T_{x, K} f \rangle_K, \quad \text{for any } f \in \mathcal{H}_K.    (111)
\]
Second, by Lemma 8, the largest $n$ eigenvalues of $\hat T_{x, K}$ coincide with those of $K_x$, while the remaining eigenvalues are all zero. Write $\{\hat\mu_{x, j}\}_{j=1}^\infty$ for the eigenvalues of $\hat T_{x, K}$ arranged in descending order, with $\hat\mu_{x, j} = 0$ for $j > n$, and $\{\hat\phi_{x, j}\}_{j=1}^\infty$ for the associated eigenfunctions, which form an orthonormal basis of $\mathcal{H}_K$.

Lemma 5. Grant model (4) with Assumptions 1–4. Fix any $\eta \in (0, 1)$. For any $q > 0$, with probability $1 - \eta$, the following holds uniformly over $\{ f \in \mathcal{F}_b : \mathbb{E}_n[\xi_f^2(X)] \le q \}$:
\[
\mathbb{E}_n[\epsilon (f_{\mathcal{H}} \circ \hat g)(X) - \epsilon (f \circ \hat g)(X)] \le C \gamma_\epsilon (\|f_{\mathcal{H}}\|_K \vee 1) \sqrt{\log(1/\eta)} \, \hat{\mathcal{R}}_x(q).
\]

Proof. Start by noting that any function $f \in \mathcal{H}_K$ can be expanded as $f = \sum_{j=1}^\infty \alpha_j \hat\phi_{x, j}$ with $\alpha_j = \langle f, \hat\phi_{x, j} \rangle_K$. Similarly, $f_{\mathcal{H}}$ can be written as $f_{\mathcal{H}} = \sum_{j=1}^\infty \alpha_j^* \hat\phi_{x, j}$ with $\alpha_j^* = \langle f_{\mathcal{H}}, \hat\phi_{x, j} \rangle_K$. Write $\beta_j = \alpha_j - \alpha_j^*$ for $j = 1, 2, \dots$,
so that
\[
\|f - f_{\mathcal{H}}\|_K^2 = \sum_{j=1}^\infty \beta_j^2.
\]
Additionally, from (111), we have
\[
\mathbb{E}_n[\xi_f^2(X)] = \mathbb{E}_n\big[ (f \circ \hat g)(X) - (f_{\mathcal{H}} \circ \hat g)(X) \big]^2 = \langle f - f_{\mathcal{H}}, \hat T_{x, K}(f - f_{\mathcal{H}}) \rangle_K = \sum_{j=1}^\infty \hat\mu_{x, j} \beta_j^2.
\]
This implies that $f \in \mathcal{F}_b$ satisfies $\mathbb{E}_n[\xi_f^2(X)] \le q$ if and only if the vector $\beta = (\beta_1, \beta_2, \dots)^\top \in \mathbb{R}^\infty$ satisfies
\[
\sum_{j=1}^\infty \hat\mu_{x, j} \beta_j^2 \le q, \qquad \sum_{j=1}^\infty \beta_j^2 \le 9 \|f_{\mathcal{H}}\|_K^2.
\]
Additionally, any vector $\beta$ satisfying the above constraints must belong to the ellipsoid class
\[
\mathcal{E} := \Big\{ \beta \in \mathbb{R}^\infty : \sum_{j=1}^\infty \beta_j^2 \nu_j \le 1 + 9 \|f_{\mathcal{H}}\|_K^2 \Big\},    (112)
\]
where we write $\nu_j = \max\{\hat\mu_{x, j} / q, 1\}$ for $j = 1, 2, \dots$. Observe that
\begin{align*}
\sup_{f \in \mathcal{F}_b, \, \mathbb{E}_n[\xi_f^2(X)] \le q} \Big| \frac{1}{n} \sum_{i=1}^n \epsilon_i \big( (f_{\mathcal{H}} \circ \hat g)(X_i) - (f \circ \hat g)(X_i) \big) \Big| &\le \sup_{\beta \in \mathcal{E}} \Big| \frac{1}{n} \sum_{i=1}^n \epsilon_i \sum_{j=1}^\infty \beta_j (\hat\phi_{x, j} \circ \hat g)(X_i) \Big| \\
&\le \sup_{\beta \in \mathcal{E}} \frac{1}{n} \Big( \sum_{j=1}^\infty \beta_j^2 \nu_j \Big)^{1/2} \Big( \sum_{j=1}^\infty \frac{1}{\nu_j} \Big( \sum_{i=1}^n \epsilon_i (\hat\phi_{x, j} \circ \hat g)(X_i) \Big)^2 \Big)^{1/2} && \text{by the Cauchy–Schwarz inequality} \\
&\le \frac{\sqrt{1 + 9 \|f_{\mathcal{H}}\|_K^2}}{n} \Big( \sum_{j=1}^\infty \frac{1}{\nu_j} \Big( \sum_{i=1}^n \epsilon_i (\hat\phi_{x, j} \circ \hat g)(X_i) \Big)^2 \Big)^{1/2} && \text{by } \mathcal{E}.    (113)
\end{align*}
To proceed, let $H \in \mathbb{R}^{n \times n}$ with
\[
[H]_{ab} = \sum_{j=1}^\infty \frac{1}{\nu_j} (\hat\phi_{x, j} \circ \hat g)(X_a) (\hat\phi_{x, j} \circ \hat g)(X_b), \quad \forall a, b \in [n].
\]
Then, with $\epsilon = (\epsilon_1, \dots, \epsilon_n)^\top$, we have
\[
\sum_{j=1}^\infty \frac{1}{\nu_j} \Big( \sum_{i=1}^n \epsilon_i (\hat\phi_{x, j} \circ \hat g)(X_i) \Big)^2 = \epsilon^\top H \epsilon.    (114)
\]
Applying Lemma 27 with $A = H$ and $\xi = \epsilon$ gives that for any $\eta \in (0, 1)$,
\[
\mathbb{P}\Big\{ \epsilon^\top H \epsilon \le \gamma_\epsilon^2 \Big( \mathrm{Tr}(H) + 2 \sqrt{\mathrm{Tr}(H^2) \log(1/\eta)} + 2 \|H\|_{\mathrm{op}} \log(1/\eta) \Big) \Big\} \ge 1 - \eta,    (115)
\]
where $\gamma_\epsilon^2$ is the sub-Gaussian constant in Assumption 4. Note that $\mathrm{Tr}(H^2) \le \|H\|_{\mathrm{op}} \mathrm{Tr}(H)$ and
\begin{align*}
\|H\|_{\mathrm{op}} \le \mathrm{Tr}(H) &= \sum_{i=1}^n \sum_{j=1}^\infty \frac{1}{\nu_j} \big( (\hat\phi_{x, j} \circ \hat g)(X_i) \big)^2 = n \sum_{j=1}^\infty \frac{1}{\nu_j} \mathbb{E}_n\big[ (\hat\phi_{x, j} \circ \hat g)(X) \big]^2 \\
&= n \sum_{j=1}^\infty \frac{1}{\nu_j} \big\langle \hat\phi_{x, j}, \hat T_{x, K} \hat\phi_{x, j} \big\rangle_K && \text{by (111)} \\
&= n \sum_{j=1}^\infty \frac{\hat\mu_{x, j}}{\nu_j} = n \sum_{j=1}^\infty \min\{q, \hat\mu_{x, j}\} = n^2 \hat{\mathcal{R}}_x^2(q),
\end{align*}
where the last step follows from the definition of $\hat{\mathcal{R}}_x(\delta)$ in (40) and Lemma 8. By combining with (114) and (115), for any $\eta \in (0, 1)$, with probability at least $1 - \eta$, we have
\[
\frac{1}{n} \Big( \sum_{j=1}^\infty \frac{1}{\nu_j} \Big( \sum_{i=1}^n \epsilon_i (\hat\phi_{x, j} \circ \hat g)(X_i) \Big)^2 \Big)^{1/2} \le C \gamma_\epsilon \sqrt{\log(1/\eta)} \, \hat{\mathcal{R}}_x(q).    (116)
\]
Since (116) does not depend on $f \in \mathcal{F}_b$, combining with (113) implies
\[
\sup_{f \in \mathcal{F}_b, \, \mathbb{E}_n[\xi_f^2(X)] \le q} \Big| \frac{1}{n} \sum_{i=1}^n \epsilon_i \big( (f_{\mathcal{H}} \circ \hat g)(X_i) - (f \circ \hat g)(X_i) \big) \Big| \le C \gamma_\epsilon (\|f_{\mathcal{H}}\|_K \vee 1) \sqrt{\log(1/\eta)} \, \hat{\mathcal{R}}_x(q),
\]
which completes the proof.

The following lemma bounds $\mathbb{E}_n[\xi_f^2(X)]$ by its population-level counterpart.

Lemma 6. Grant Assumptions 1 and 2. Fix any $\eta \in (0, 1)$.
Then with probability at least $1 - \eta$, for any $f \in \mathcal{F}_b$, one has
\[
\mathbb{E}_n\big[ \xi_f(X)^2 \big] \le 2 \mathbb{E}\big[ \xi_f(X)^2 \big] + C \Big( \max\{\kappa^2 \|f_{\mathcal{H}}\|_K^2, 1\} \delta_x + \kappa^2 \|f_{\mathcal{H}}\|_K^2 \frac{\log(1/\eta)}{n} \Big).
\]

Proof. Define the function class
\[
\Xi_b^2 := \{ \xi_f^2 : \xi_f \in \Xi_b \}    (117)
\]
and the scalar $V := 9 \kappa^2 \|f_{\mathcal{H}}\|_K^2$. Moreover, define the functional $T : \Xi_b^2 \to \mathbb{R}^+$ as $T(\xi_f^2) := V \mathbb{E}[\xi_f^2(X)]$ and the local Rademacher complexity of $\Xi_b^2$ as $\mathcal{R}_n(\{ \xi_f^2 \in \Xi_b^2 : T(\xi_f^2) \le \delta \})$. Below we apply Lemma 22 to the function class $\Xi_b^2$ with $T = T$, $L = 0$ and $B = -a = b = V$.

Step 1: Verification of the boundedness of $\Xi_b^2$. By (89), the function class $\Xi_b^2$ is uniformly bounded with range $[-V, V]$.

Step 2: Bounding the variance of $\xi_f^2(X)$. For any $\xi_f^2 \in \Xi_b^2$, we have
\[
\mathrm{Var}(\xi_f^2(X)) \le \mathbb{E}\big[ \xi_f^4(X) \big] \le V \mathbb{E}\big[ \xi_f^2(X) \big] = T(\xi_f^2),
\]
which verifies the variance condition with $T = T$, $B = V$ and $L = 0$ required in Lemma 22.

Step 3: Choosing a suitable sub-root function. Define $\psi(\delta) := 8 \kappa \|f_{\mathcal{H}}\|_K V \psi_x(\delta / V)$ for any $\delta \ge 0$, where $\psi_x$ is the sub-root function defined in (86). Using Lemma 19 ensures that $\psi(\delta)$ is sub-root in $\delta$. Let $\delta^\star$ be the fixed point of $\psi(\cdot)$. By applying (88), for any $f_1, f_2 \in \mathcal{F}_b$ and $x \in \mathcal{X}$, we have
\[
\big| \xi_{f_1}^2(x) - \xi_{f_2}^2(x) \big| = \big| (f_1 \circ \hat g)(x) + (f_2 \circ \hat g)(x) - 2 (f_{\mathcal{H}} \circ \hat g)(x) \big| \, \big| \xi_{f_1}(x) - \xi_{f_2}(x) \big| \le 8 \kappa \|f_{\mathcal{H}}\|_K \big| \xi_{f_1}(x) - \xi_{f_2}(x) \big|,
\]
which implies that $\xi_f^2(X_i)$ is $(8 \kappa \|f_{\mathcal{H}}\|_K)$-Lipschitz in $\xi_f(X_i)$
for all $i \in [n]$. Then, by repeating arguments similar to those proving (94), we can apply the Ledoux–Talagrand contraction inequality in Lemma 23 to obtain
\[
\mathcal{R}_n\big( \{ \xi_f^2 \in \Xi_b^2 : T(\xi_f^2) \le \delta \} \big) \le 8 \kappa \|f_{\mathcal{H}}\|_K \mathcal{R}_n\big( \{ \xi_f \in \Xi_b : T(\xi_f^2) \le \delta \} \big) = 8 \kappa \|f_{\mathcal{H}}\|_K \psi_x\Big( \frac{\delta}{V} \Big),
\]
so that for every $\delta \ge \delta^\star$,
\[
\psi(\delta) \ge V \mathcal{R}_n\big( \{ \xi_f^2 \in \Xi_b^2 : T(\xi_f^2) \le \delta \} \big).
\]

Step 4: Bounding $\delta^\star$ by $\delta_x$. Using Lemma 19, one can deduce
\[
\delta^\star \le \max\{64 \kappa^2 \|f_{\mathcal{H}}\|_K^2, 1\} \, V \delta_x,
\]
with $\delta_x$ the fixed point of $\psi_x$. Finally, applying the second statement in Lemma 22 with $\theta = 2$ yields that for any $\eta \in (0, 1)$, with probability at least $1 - \eta$, the following inequality holds uniformly over $f \in \mathcal{F}_b$:
\[
\mathbb{E}_n[\xi_f^2(X)] \le 2 \mathbb{E}[\xi_f^2(X)] + C \max\{\kappa^2 \|f_{\mathcal{H}}\|_K^2, 1\} \delta_x + C \kappa^2 \|f_{\mathcal{H}}\|_K^2 \frac{\log(1/\eta)}{n}.    (118)
\]
This completes the proof.

For any $q > 0$, let
\[
\mathcal{F}_b(q) := \{ f \in \mathcal{F}_b : \mathbb{E}[\ell_f(Y, X)] \le q \}.    (119)
\]
The following lemma gives a uniform bound on the cross-term over $\mathcal{F}_b(q)$ by combining Lemma 5 and Lemma 6, together with relating $\mathbb{E}[\xi_f^2(X)]$ to $\mathbb{E}[\ell_f(Y, X)]$.

Lemma 7. Grant model (4) with Assumptions 1–4. Fix any $\eta \in (0, 1)$ and any $q > 0$. With probability at least $1 - \eta$, the following holds for any $f \in \mathcal{F}_b(q)$:
\[
\Big| \frac{1}{n} \sum_{i=1}^n \epsilon_i \big( (f_{\mathcal{H}} \circ \hat g)(X_i) - (f \circ \hat g)(X_i) \big) \Big| \le C \gamma_\epsilon (\|f_{\mathcal{H}}\|_K \vee 1) \sqrt{\log(1/\eta)} \, \hat{\mathcal{R}}_x(\bar q),
\]
where $C \ge 0$ is some absolute constant and
\[
\bar q := 4 q + 16 \|f_{\mathcal{H}}\|_K^2 \mathbb{E}\Delta_{\hat g} + 16 \|f_{\mathcal{H}} - f^*\|_\rho^2 + C \max\{\kappa^2 \|f_{\mathcal{H}}\|_K^2, 1\} \delta_x + C \kappa^2 \|f_{\mathcal{H}}\|_K^2 \frac{\log(1/\eta)}{n}.    (120)
\]

Proof. Observe that
\begin{align*}
\mathbb{E}[\xi_f^2(X)] &= \mathbb{E}\big[ (f \circ \hat g)(X) - (f_{\mathcal{H}} \circ \hat g)(X) \big]^2 \\
&\le 2 \mathbb{E}\big[ (f \circ \hat g)(X) - f^*(Z) \big]^2 + 2 \mathbb{E}\big[ f^*(Z) - (f_{\mathcal{H}} \circ \hat g)(X) \big]^2 \\
&= 2 \mathbb{E}\big[ (f \circ \hat g)(X) - f^*(Z) \big]^2 - 2 \mathbb{E}\big[ f^*(Z) - (f_{\mathcal{H}} \circ \hat g)(X) \big]^2 + 4 \mathbb{E}\big[ f^*(Z) - (f_{\mathcal{H}} \circ \hat g)(X) \big]^2 \\
&= 2 \mathbb{E}\big[ (Y - (f \circ \hat g)(X))^2 - (Y - (f_{\mathcal{H}} \circ \hat g)(X))^2 \big] + 4 \mathbb{E}\big[ f^*(Z) - (f_{\mathcal{H}} \circ \hat g)(X) \big]^2 && \text{by (63)}.    (121)
\end{align*}
By (19) with $\theta = 1$, we have
\[
\mathbb{E}\big[ f^*(Z) - (f_{\mathcal{H}} \circ \hat g)(X) \big]^2 \le 2 \|f_{\mathcal{H}} - f^*\|_\rho^2 + 2 \|f_{\mathcal{H}}\|_K^2 \mathbb{E}\Delta_{\hat g}.
\]
Combining the above results, for $f \in \mathcal{F}_b$, we have
\[
\mathbb{E}[\xi_f^2(X)] \le 2 \mathbb{E}[\ell_f(Y, X)] + 8 \|f_{\mathcal{H}}\|_K^2 \mathbb{E}\Delta_{\hat g} + 8 \|f_{\mathcal{H}} - f^*\|_\rho^2,
\]
so that applying Lemma 6 yields that for any $\eta \in (0, 1)$, with probability at least $1 - \eta$,
\[
\mathbb{E}_n[\xi_f^2(X)] \le \bar q, \quad \text{for any } f \in \mathcal{F}_b(q).
\]
Since $\hat{\mathcal{R}}_x(\delta)$ is non-decreasing in $\delta$, using Lemma 5 yields that the following inequality holds for any $f \in \mathcal{F}_b(q)$:
\[
\big| \mathbb{E}_n[\epsilon (f_{\mathcal{H}} \circ \hat g)(X) - \epsilon (f \circ \hat g)(X)] \big| \le C \gamma_\epsilon (\|f_{\mathcal{H}}\|_K \vee 1) \sqrt{\log(1/\eta)} \, \hat{\mathcal{R}}_x(\bar q).
\]
So the proof is complete.
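Lemma 27, used in the proof of Lemma 5, is a Hanson–Wright-type tail bound for quadratic forms in sub-Gaussian noise. For standard Gaussian noise (so $\gamma_\epsilon = 1$) and a PSD matrix, the bound in (115) can be sanity-checked by Monte Carlo; the matrix $H$ below is an arbitrary assumption, not the specific $H$ of the proof:

```python
import numpy as np

rng = np.random.default_rng(5)
n, eta = 50, 0.05
B = rng.normal(size=(n, n))
H = B @ B.T / n                     # an illustrative PSD matrix

tr = np.trace(H)
tr2 = np.trace(H @ H)
op = np.linalg.norm(H, 2)
bound = tr + 2 * np.sqrt(tr2 * np.log(1 / eta)) + 2 * op * np.log(1 / eta)

# Fraction of Gaussian draws where eps^T H eps exceeds the 1 - eta bound.
draws = rng.normal(size=(20000, n))
q = np.einsum('ij,jk,ik->i', draws, H, draws)
violation = np.mean(q > bound)      # should be at most eta (typically far below)
```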
As a consequence of Lemma 7, the following corollary is useful for proving Theorem 2. Recall $\bar{\mathcal{F}}_b$ from (74).

Corollary 7. Grant the conditions in Lemma 7. Fix any $\eta \in (0, 1)$. Then with probability at least $1 - \eta$, the following holds uniformly over $f \in \bar{\mathcal{F}}_b$:
\[
4 \big| \mathbb{E}_n[\epsilon (f_{\mathcal{H}} \circ \hat g)(X) - \epsilon (f \circ \hat g)(X)] \big| \le \lambda \|f_{\mathcal{H}}\|_K^2.
\]

Proof. Recall from (120) the definition of $\bar q$. In this proof, define
\[
q_\lambda := 24 \lambda \|f_{\mathcal{H}}\|_K^2 + 16 \|f^* - f_{\mathcal{H}}\|_\rho^2 + C V_K \max\{\kappa^2 \|f_{\mathcal{H}}\|_K^2, 1\} \delta_{n, \hat g},    (122)
\]
with $V_K$ and $\delta_{n, \hat g}$ given in (68) and (67). Invoking Lemma 4 yields that for any $\eta \in (0, 1)$, with probability at least $1 - \eta$, we have $\bar q \le q_\lambda$. Since $\hat{\mathcal{R}}_x(\delta)$ is non-decreasing in $\delta$, applying Lemma 7 shows that for any $\eta \in (0, 1)$, uniformly over $f \in \bar{\mathcal{F}}_b$,
\[
\mathbb{P}\Big( 4 \big| \mathbb{E}_n[\epsilon (f_{\mathcal{H}} \circ \hat g)(X) - \epsilon (f \circ \hat g)(X)] \big| \le C (\|f_{\mathcal{H}}\|_K \vee 1) \gamma_\epsilon \sqrt{\log(1/\eta)} \, \hat{\mathcal{R}}_x(q_\lambda) \Big) \ge 1 - \eta.
\]
Below we apply Corollary 8 and Lemma 13 to derive an upper bound for $\hat{\mathcal{R}}_x(q_\lambda)$. Define the function
\[
\psi(\delta) := C_0 (\|f_{\mathcal{H}}\|_K \vee 1) \gamma_\epsilon \sqrt{\log(1/\eta)} \, \mathcal{R}(\delta), \quad \forall \delta \ge 0,
\]
with $C_0$ given in (67). Recall that $\delta_n$ is the fixed point of $\mathcal{R}(\delta)$. Using the second statement of Lemma 19 ensures that $\psi(\cdot)$ is a sub-root function and that its fixed point $\delta^\star$ satisfies
\[
\delta^\star \le (C_0^2 \vee 1)(\|f_{\mathcal{H}}\|_K^2 \vee 1)(\gamma_\epsilon^2 \vee 1) \log(1/\eta) \, \delta_n.
\]
By the definition of $q_\lambda$, it is easy to see that
\[
q_\lambda \ge (C_0^2 \vee 1)(\|f_{\mathcal{H}}\|_K^2 \vee 1)(\gamma_\epsilon^2 \vee 1) \log(1/\eta) \, \delta_n,
\]
so that invoking Lemma 18 gives
\[
\psi(q_\lambda) = C_0 (\|f_{\mathcal{H}}\|_K \vee 1) \gamma_\epsilon \sqrt{\log(1/\eta)} \, \mathcal{R}(q_\lambda) \le q_\lambda.    (123)
\]
Observe that
\begin{align*}
C_0 (\|f_{\mathcal{H}}\|_K \vee 1) \gamma_\epsilon \sqrt{\log(1/\eta)} \, \hat{\mathcal{R}}_x(q_\lambda) &\le C C_0 (\|f_{\mathcal{H}}\|_K \vee 1) \gamma_\epsilon \sqrt{\log(1/\eta)} \Big( \hat{\mathcal{R}}(q_\lambda) + \sqrt{\frac{\mathbb{E}\Delta_{\hat g}}{n}} + \frac{\kappa \sqrt{\log(2/\eta)}}{n} \Big) && \text{by Corollary 8} \\
&\le C' C_0 (\|f_{\mathcal{H}}\|_K \vee 1) \gamma_\epsilon \sqrt{\log(1/\eta)} \Big( \mathcal{R}(q_\lambda) + \sqrt{\frac{\mathbb{E}\Delta_{\hat g}}{n}} + \frac{\kappa \sqrt{\log(2/\eta)}}{n} \Big) && \text{by Lemma 13}
\end{align*}
\begin{align*}
&\le C' q_\lambda + C' C_0 (\|f_{\mathcal{H}}\|_K \vee 1) \gamma_\epsilon \Big( \sqrt{\frac{\mathbb{E}\Delta_{\hat g} \log(1/\eta)}{n}} + \frac{\kappa \log(2/\eta)}{n} \Big) && \text{by (123)} \\
&\le C' q_\lambda + C' C_0 (\|f_{\mathcal{H}}\|_K \vee 1) \gamma_\epsilon \Big( \frac{1}{2} \mathbb{E}\Delta_{\hat g} + \frac{3 (\kappa \vee 1) \log(2/\eta)}{2 n} \Big) \\
&\le C'' q_\lambda && \text{by (122)}.
\end{align*}
Finally, since $q_\lambda \lesssim \lambda \|f_{\mathcal{H}}\|_K^2$ by (72), choosing $C_0$ in (67) sufficiently large and using (72) complete the proof.

The following lemma, used in the proofs of this section, states an essential connection between the eigenvalues of $\hat T_{x, K}$ and $K_x$.

Lemma 8. The number of non-zero eigenvalues of $\hat T_{x, K}$ is at most $n$. Additionally, the eigenvalues of $K_x$ are the same as the $n$ largest eigenvalues of $\hat T_{x, K}$.

Proof. The proof largely follows the argument used in the proof of Lemma 6.6 in Bartlett et al. (2005). First, the non-zero eigenvalues of $\hat T_{x, K}$ are at most $n$ in number, because the range of $\hat T_{x, K}$ is at most $n$-dimensional. To prove the second statement of Lemma 8, it suffices to consider the non-zero eigenvalues. To this end, for any $f \in \mathcal{H}_K$, write $f_n := (f(\hat Z_1), \dots, f(\hat Z_n))^\top \in \mathbb{R}^n$. Suppose that $\mu$ is a non-zero eigenvalue of $\hat T_{x, K}$ and $f$ is an associated non-trivial eigenfunction, that is, $\langle f, \hat T_{x, K} f \rangle_K > 0$. Then for all $j \in [n]$,
\[
\mu f(\hat Z_j) = (\hat T_{x, K} f)(\hat Z_j) \overset{(110)}{=} \frac{1}{n} \sum_{i=1}^n f(\hat Z_i) K(\hat Z_j, \hat Z_i),
\]
implying that $K_x f_n = \mu f_n$. Using (111) to deduce $\|f_n\|_2^2 = n \langle f, \hat T_{x, K} f \rangle_K > 0$ implies that $\mu$ is also an eigenvalue of $K_x$. On the other hand, suppose that $v$ is a non-trivial eigenvector of $K_x$ (that is, $\|K_x v\|_2^2 > 0$) with non-zero eigenvalue $\mu$. Then, letting $f_v := n^{-1/2} \sum_{j=1}^n v_j K(\hat Z_j, \cdot)$, we find that
\[
\hat T_{x, K} f_v = \frac{1}{\sqrt{n}} \sum_{j=1}^n v_j \Big( \frac{1}{n} \sum_{i=1}^n K(\hat Z_i, \hat Z_j) K(\hat Z_i, \cdot) \Big) = \frac{1}{\sqrt{n}} \sum_{i=1}^n K(\hat Z_i, \cdot) [K_x v]_i = \frac{\mu}{\sqrt{n}} \sum_{i=1}^n v_i K(\hat Z_i, \cdot) = \mu f_v,
\]
implying that either $f_v = 0$ or $\mu$ is an eigenvalue of $\hat T_{x, K}$. The former is impossible, since
\[
\langle f_v, f_v \rangle_K = \Big\| \frac{1}{\sqrt{n}} \sum_{j=1}^n v_j K(\hat Z_j, \cdot) \Big\|_K^2 = v^\top K_x v > 0
\]
by the reproducing property in $\mathcal{H}_K$. This concludes the proof.

B.3 Technical lemmas for proving Lemma 4

Recall that $\delta_x$ is the fixed point of $\psi_x$ in (86). This section provides some useful lemmas for proving Lemma 4, which states an upper bound on $\delta_x$.
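The eigenvalue correspondence in Lemma 8 can be verified directly for a kernel with a finite-dimensional feature map. With the linear kernel $K(u, v) = u^\top v$ (an illustrative assumption; any finite-rank kernel behaves the same way), $K_x$ has entries $n^{-1} \hat Z_i^\top \hat Z_j$, and the role of $\hat T_{x, K}$ is played by the $r \times r$ empirical second-moment matrix $n^{-1} \sum_i \hat Z_i \hat Z_i^\top$:

```python
import numpy as np

rng = np.random.default_rng(6)
n, r = 30, 5
Zh = rng.normal(size=(n, r))       # stand-ins for the predicted inputs hat Z_i

Kx = Zh @ Zh.T / n                 # [Kx]_ij = K(Zh_i, Zh_j) / n, linear kernel
T = Zh.T @ Zh / n                  # empirical covariance operator in feature space

eig_Kx = np.sort(np.linalg.eigvalsh(Kx))[::-1]
eig_T = np.sort(np.linalg.eigvalsh(T))[::-1]

# The r nonzero eigenvalues agree; the remaining n - r eigenvalues of Kx vanish.
```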
B.3.1 Relating local Rademacher complexity to its empirical counterpart

The following lemma states that the local Rademacher complexity of any bounded function class can be bounded in terms of its empirical counterpart, in probability.

Lemma 9. Let $\mathcal{F}$ be a function class with ranges in $[-G, G]$ and let $U_1, \dots, U_n$ be i.i.d. copies of some random variable $U$. Fix any $t \ge 0$. For any $\delta$ such that
\[
\delta \ge 10 G \, \mathcal{R}_n\big( \{ f \in \mathcal{F} : \mathbb{E}[f^2(U)] \le \delta \} \big) + 11 G^2 t,
\]
with probability at least $1 - 2 e^{-nt}$, one has
\[
\mathcal{R}_n\big( \{ f \in \mathcal{F} : \mathbb{E}[f^2(U)] \le \delta \} \big) \le 2 \sqrt{2} \, \hat{\mathcal{R}}_n\big( \{ f \in \mathcal{F} : \mathbb{E}_n[f^2(U)] \le \delta \} \big) + 2 G t.
\]

Proof. For brevity, for any $\delta \ge 0$, we write
\[
A(\delta) := \mathcal{R}_n\big( \{ f \in \mathcal{F} : \mathbb{E}[f^2(U)] \le \delta \} \big); \qquad \hat A(\delta) := \hat{\mathcal{R}}_n\big( \{ f \in \mathcal{F} : \mathbb{E}_n[f^2(U)] \le \delta \} \big).
\]
For any $\delta \ge 0$, define the intermediate quantity
\[
\bar A(\delta) := \hat{\mathcal{R}}_n\big( \{ f \in \mathcal{F} : \mathbb{E}[f^2(U)] \le \delta \} \big) = \mathbb{E}_\varepsilon \Big[ \sup_{f \in \mathcal{F}, \, \mathbb{E}[f^2(U)] \le \delta} \frac{1}{n} \sum_{i=1}^n \varepsilon_i f(U_i) \Big].
\]
Taking expectations on both sides of the above equality gives $A(\delta) = \mathbb{E}[\bar A(\delta)]$. From Lemma A.4 in Bartlett et al. (2005), for any $t \ge 0$, with probability at least $1 - e^{-nt}$, one has
\[
A(\delta) \le 2 \bar A(\delta) + 2 G t.    (124)
\]
Additionally, Corollary 2.2 in Bartlett et al. (2005) implies that whenever $\delta \ge 10 G A(\delta) + 11 G^2 t$, with probability at least $1 - 2 e^{-nt}$, one has $\bar A(\delta) \le \hat A(2\delta)$. Together with (124), this yields
\[
A(\delta) \le 2 \hat A(2\delta) + 2 G t.    (125)
\]
Finally, using Lemma 20 ensures that $\hat A(\delta)$ is sub-root in $\delta$. By the definition of a sub-root function, $\hat A(\delta) / \sqrt{\delta}$ is non-increasing in $\delta$, so that
\[
\frac{\hat A(2\delta)}{\sqrt{2\delta}} \le \frac{\hat A(\delta)}{\sqrt{\delta}},
\]
which, combined with (125), completes the proof.

B.3.2 Relating the empirical local Rademacher complexity in (41) to the empirical kernel complexity $\hat{\mathcal{R}}_x(\delta)$

This section bounds the empirical Rademacher complexity in (41) from above. Recall the definitions of $\xi_f$ from (84) and $\Xi_b$ from (85). The empirical local Rademacher complexity in (41) can be written, for any $\delta \ge 0$, as
\begin{align*}
\hat{\mathcal{R}}_n\big( \{ \xi_f \in \Xi_b : \mathbb{E}_n[\xi_f^2(X)] \le \delta \} \big) &= \mathbb{E}_\varepsilon \Big[ \sup_{\xi_f \in \Xi_b, \, \mathbb{E}_n[\xi_f^2(X)] \le \delta} \frac{1}{n} \sum_{i=1}^n \varepsilon_i \xi_f(X_i) \Big] \\
&= \mathbb{E}_\varepsilon \Big[ \sup_{f \in \mathcal{F}_b, \, \mathbb{E}_n[\xi_f^2(X)] \le \delta} \frac{1}{n} \sum_{i=1}^n \varepsilon_i \big( (f \circ \hat g)(X_i) - (f_{\mathcal{H}} \circ \hat g)(X_i) \big) \Big],    (126)
\end{align*}
where the expectation $\mathbb{E}_\varepsilon$ is taken over $n$ i.i.d. Rademacher variables $\varepsilon_1, \dots, \varepsilon_n$.

Lemma 10. Grant Assumption 3. Then the following holds almost surely on $X_1, \dots, X_n$:
\[
\hat{\mathcal{R}}_n\big( \{ \xi_f \in \Xi_b : \mathbb{E}_n[\xi_f^2(X)] \le \delta \} \big) \le \sqrt{10} \, (\|f_{\mathcal{H}}\|_K \vee 1) \hat{\mathcal{R}}_x(\delta).
\]

Proof. Define the scalar $V := 1 + 9 \|f_{\mathcal{H}}\|_K^2$. By repeating the argument used to prove (113), we have
\[
\mathbb{E}_\varepsilon \Big[ \sup_{f \in \mathcal{F}_b, \, \mathbb{E}_n[\xi_f^2(X)] \le \delta} \frac{1}{n} \sum_{i=1}^n \varepsilon_i \big( (f \circ \hat g)(X_i) - (f_{\mathcal{H}} \circ \hat g)(X_i) \big) \Big] \le \mathbb{E}_\varepsilon \Big[ \frac{\sqrt{V}}{n} \Big( \sum_{j=1}^\infty \frac{1}{\nu_j} \Big( \sum_{i=1}^n \varepsilon_i (\hat\phi_{x, j} \circ \hat g)(X_i) \Big)^2 \Big)^{1/2} \Big],    (127)
\]
where $\mathcal{E}$ is defined in (112) and $\nu_j = \max\{\hat\mu_{x, j} / \delta, 1\}$ for $j = 1, 2, \dots$. Applying Jensen's inequality gives
\[
\mathbb{E}_\varepsilon \Big[ \frac{\sqrt{V}}{n} \Big( \sum_{j=1}^\infty \frac{1}{\nu_j} \Big( \sum_{i=1}^n \varepsilon_i (\hat\phi_{x, j} \circ \hat g)(X_i) \Big)^2 \Big)^{1/2} \Big] \le \frac{\sqrt{V}}{n} \Big( \sum_{j=1}^\infty \frac{1}{\nu_j} \mathbb{E}_\varepsilon \Big( \sum_{i=1}^n \varepsilon_i (\hat\phi_{x, j} \circ \hat g)(X_i) \Big)^2 \Big)^{1/2}.    (128)
\]
Since $\varepsilon_1, \dots, \varepsilon_n$ are Rademacher variables, for any $j$,
\[
\mathbb{E}_\varepsilon \Big( \sum_{i=1}^n \varepsilon_i (\hat\phi_{x, j} \circ \hat g)(X_i) \Big)^2 = \sum_{i=1}^n (\hat\phi_{x, j}^2 \circ \hat g)(X_i).
\]
Then we have
\begin{align*}
\sum_{j=1}^\infty \frac{1}{\nu_j} \mathbb{E}_\varepsilon \Big( \sum_{i=1}^n \varepsilon_i (\hat\phi_{x, j} \circ \hat g)(X_i) \Big)^2 &= \sum_{j=1}^\infty \frac{1}{\nu_j} \Big( \sum_{i=1}^n (\hat\phi_{x, j}^2 \circ \hat g)(X_i) \Big) = n \sum_{j=1}^\infty \frac{1}{\nu_j} \big\langle \hat T_{x, K} \hat\phi_{x, j}, \hat\phi_{x, j} \big\rangle_K && \text{by (111)} \\
&= n \sum_{j=1}^\infty \frac{\hat\mu_{x, j}}{\nu_j} = n \sum_{j=1}^\infty \min\{\delta, \hat\mu_{x, j}\}.    (129)
\end{align*}
Collecting (127), (128) and (129) yields
\[
\mathbb{E}_\varepsilon \Big[ \sup_{f \in \mathcal{F}_b, \, \mathbb{E}_n[\xi_f^2(X)] \le \delta} \frac{1}{n} \sum_{i=1}^n \varepsilon_i \big( (f \circ \hat g)(X_i) - (f_{\mathcal{H}} \circ \hat g)(X_i) \big) \Big] \le \sqrt{\frac{V}{n} \sum_{j=1}^\infty \min\{\delta, \hat\mu_{x, j}\}}.
\]
By the definition of $\hat{\mathcal{R}}_x(\delta)$ in (40) and using Lemma 8 to deduce
\[
\sqrt{\frac{1}{n} \sum_{j=1}^\infty \min\{\delta, \hat\mu_{x, j}\}} = \hat{\mathcal{R}}_x(\delta),
\]
we complete the proof.

B.3.3 Relating $\hat{\mathcal{R}}_x(\delta)$ to $\hat{\mathcal{R}}(\delta)$

Recall $\hat Z_i = \hat g(X_i)$ for any $i \in [n]$.
By Mercer's expansion of $K$ in (21), for each $j\in[n]$ we have
$$[K_x]_{jj}=\frac1n\Phi(\widehat Z_j)^\top\Phi(\widehat Z_j)\quad\text{and}\quad[K]_{jj}=\frac1n\Phi(Z_j)^\top\Phi(Z_j),\qquad(130)$$
where we write $\Phi(z)=(\sqrt{\mu_1}\,\phi_1(z),\sqrt{\mu_2}\,\phi_2(z),\dots)^\top$ for all $z\in\mathcal Z$. By Assumption 2 and (21), it is easy to check that $\Phi(z)\in\ell_2(\mathbb N)$ for any $z\in\mathcal Z$. Further define, for any $\delta>0$,
$$
\widehat D_x(\delta):=\operatorname{Tr}\bigl((K_x+\delta I_n)^{-1}K_x\bigr)=\sum_{j=1}^n\frac{\widehat\mu_{x,j}}{\widehat\mu_{x,j}+\delta},\qquad
\widehat D(\delta):=\operatorname{Tr}\bigl((K+\delta I_n)^{-1}K\bigr)=\sum_{j=1}^n\frac{\widehat\mu_j}{\widehat\mu_j+\delta}.\qquad(131)
$$
Observe that for any positive values $a$ and $b$,
$$\frac{a\wedge b}2=\frac1{2\max\{1/a,1/b\}}\le\frac{ab}{a+b}\le\frac1{\max\{1/a,1/b\}}=a\wedge b.\qquad(132)$$
As a result, for any $\delta>0$, we have
$$\frac12\widehat D_x(\delta)\le\frac n\delta\widehat R_x^2(\delta)\le\widehat D_x(\delta)\quad\text{and}\quad\frac12\widehat D(\delta)\le\frac n\delta\widehat R^2(\delta)\le\widehat D(\delta).\qquad(133)$$
For each $i\in[n]$, write
$$\Delta_{i,\widehat g}:=\bigl\|K(Z_i,\cdot)-K(\widehat g(X_i),\cdot)\bigr\|_K^2.\qquad(134)$$
Note that $\Delta_{1,\widehat g},\dots,\Delta_{n,\widehat g}$ are i.i.d. copies of $\Delta_{\widehat g}$ in (18). Let $\bar\Delta_{\widehat g}$ be their average, that is,
$$\bar\Delta_{\widehat g}=\frac1n\sum_{i=1}^n\Delta_{i,\widehat g}.$$

Lemma 11. Grant Assumptions 2 and 3. On the event $\{20\bar\Delta_{\widehat g}\le\delta\}$, we have
$$\widehat R_x(\delta)\le C\,\widehat R(\delta)+C\sqrt{\frac{\bar\Delta_{\widehat g}}n},$$
where $C>0$ is some absolute constant.

Remark 5. The result stated in Lemma 11 is conditional on both $X_1,\dots,X_n$ and $Z_1,\dots,Z_n$, and all statements in this section should be understood to hold almost surely
on them. Proof. In view of (133), we turn to focus on bounding the difference between bDx(δ) andbD(δ). To this end, define the operators Σx:ℓ2(N)→ℓ2(N), Σx=1 nnX i=1Φ(bZi)Φ(bZi)⊤; Σ :ℓ2(N)→ℓ2(N), Σ =1 nnX i=1Φ(Zi)Φ(Zi)⊤.(135) For short, for any δ >0, we write A(δ) = (Σ + δI)−1 2(Σ−Σx)(Σ + δI)−1 2; B(δ) = (Σ + δI)−1(Σ−Σx)(Σ + δI)−1. It follows from (130) that |bDx(δ)−bD(δ)|= Tr Σx+δI−1Σx− Σ +δI−1Σ . Since bDx(δ) = Tr (Σx+δI)−1Σx = Tr I−δ(Σx+δI)−1 , and similarly, bD(δ) = Tr I−δ(Σ + δI)−1 , we have bDx(δ)−bD(δ) =δ Tr (Σ + δI)−1−(Σx+δI)−1 . 56 Further using the equality A−1−B−1=A−1(B−A)B−1for any invertible operators AandB gives bDx(δ)−bD(δ) =δ Tr (Σ + δI)−1(Σx−Σ)(Σ x+δI)−1 . (136) On the event {∥A(δ)∥op<1}, we find that (Σx+δI)−1= (Σ + δI+ Σ x−Σ)−1 = (Σ + δI)1 2(I−(Σ + δI)−1 2(Σ−Σx)(Σ + δI)−1 2)(Σ + δI)1 2−1 = (Σ + δI)1 2(I−A(δ))(Σ + δI)1 2−1 = (Σ + δI)−1 2(I−A(δ))−1(Σ + δI)−1 2 so that together with (136) gives bDx(δ)−bD(δ) =δ Tr (Σ + δI)−1(Σx−Σ)(Σ + δI)−1 2(I−A(δ))−1(Σ + δI)−1 2 =δ Tr (Σ + δI)−1 2A(δ)(I−A(δ))−1(Σ + δI)−1 2 =δ Tr (Σ + δI)−1 2A(δ)(I−A(δ))−1(I−A(δ) +A(δ))(Σ + δI)−1 2 ≤δ Tr B(δ) +δ Tr (Σ + δI)−1 2A(δ)(I−A(δ))−1A(δ)(Σ + δI)−1 2 .(137) By the definition of the trace of an operator, we have δTr (Σ + δI)−1 2A(δ)(I−A(δ))−1A(δ)(Σ + δI)−1 2 =δ (Σ + δI)−1 2A(δ)(I−A(δ))−1 2 2 HS (i) ≤δ∥(Σ + δI)−1∥op∥A(δ)∥2 HS∥(I−A(δ))−1∥op (ii) ≤∥A(δ)∥2 HS 1− ∥A(δ)∥op,(138) where (i) follows from the fact that ∥AB∥HS≤ ∥A∥op∥B∥HS, (ii) holds since ∥(I−A(δ))−1∥op=1 λmin(I−A(δ))=1 1− ∥A(δ)∥op. By combining (137) and (138), on the event {∥A(δ)∥op<1}, we have the upper bound on |bDx(δ)− bD(δ)|of form bDx(δ)−bD(δ) ≤δ Tr(B(δ)) +∥A(δ)∥2 HS 1− ∥A(δ)∥op. (139) It remains to bound from above ∥A(δ)∥op,δ|Tr(B(δ))|and∥A(δ)∥HSseparately. 57 To this end, start with the following decomposition using (135). 
Σx−Σ =1 nnX i=1n Φ(bZi)−Φ(Zi) Φ(Zi)⊤+ Φ(Zi) Φ(bZi)−Φ(Zi)⊤ + Φ(bZi)−Φ(Zi) Φ(bZi)−Φ(Zi)⊤o .(140) Observe that for each i∈[n], Φ(bZi)−Φ(Zi) 2 2=∞X j=1µj ϕj(bZi)−ϕj(Zi)2 =∞X j=1µjϕj(Zi)ϕj(Zi)−2∞X j=1µjϕj(Zi)ϕj(bZi) +∞X j=1µjϕj(bZi)ϕj(bZi) =K(Zi, Zi)−2K(Zi,bZi) +K(bZi,bZi) by (21). Combining with reproducing property and (134) to deduce K(Zi, Zi)−2K(Zi,bZi) +K(bZi,bZi) =⟨K(Zi,·), K(Zi,·)⟩K−2⟨K(Zi,·), K(bZi,·)⟩K+⟨K(bZi,·), K(bZi,·)⟩K = K(Zi,·)−K(bZi,·) 2 K= ∆ i,bg, we have Φ(bZi)−Φ(Zi) 2 2= ∆ i,bg. (141) Bounding ∥A(δ)∥op.For any u∈ℓ2(N), by writing T1(δ) :=1 nnX i=1 Σ +δI−1 2Φ(Zi) Φ(bZi)−Φ(Zi)⊤ Σ +δI−1 2,Ia:=u⊤T1(δ)u; T′ 1(δ) :=1 nnX i=1 Σ +δI−1 2 Φ(bZi)−Φ(Zi) Φ(Zi)⊤ Σ +δI−1 2,I′ a:=u⊤T′ 1(δ)u; T2(δ) :=1 nnX i=1 Σ +δI−1 2 Φ(bZi)−Φ(Zi) Φ(bZi)−Φ(Zi)⊤ Σ +δ−1 2,IIa:=u⊤T2(δ)u, it follows from (140) and the triangle inequality that ∥A(δ)∥op≤ ∥T1(δ)∥op+∥T′ 1(δ)∥op+∥T2(δ)∥op. (142) 58 For I a, by using Cauchy-Schwarz inequality, observe that Ia≤1 n nX i=1 u⊤ Σ +δI−1 2Φ(Zi)2!1 2 nX i=1 Φ(bZi)−Φ(Zi)⊤ Σ +δI−1 2u2!1 2 ≤1 n nX i=1 u⊤ Σ +δI−1 2Φ(Zi)2!1 2 nX i=1 Φ(bZi)−Φ(Zi) 2 2 Σ +δI−1 2u 2 2!1 2 ≤1 n nX i=1 u⊤ Σ +δI−1 2Φ(Zi)2!1 2 1 δnX i=1∆i,bg∥u∥2 2!1 2 by (141) =s ¯∆bg δ∥u∥2 1 nnX i=1u⊤ Σ +δI−1 2Φ(Zi)Φ(Zi)⊤ Σ +δI−1 2u!1 2 =s ¯∆bg δ∥u∥2 u⊤ Σ +δI−1 2Σ Σ +δI−1 2u1 2by (135)
≤s ¯∆bg δ∥u∥2 2, implying ∥T1(δ)∥op≤q ¯∆bg/δ. (143) Repeating similar arguments gives ∥T′ 1(δ)∥op≤q ¯∆bg/δ. (144) For II a, by using Cauchy-Schwarz inequality and (141), we have IIa≤1 nnX i=1 Φ(bZi)−Φ(Zi)⊤ Σ +δI−1 2u2 ≤1 nδnX i=1∆i,bg∥u∥2 2=1 δ¯∆bg∥u∥2 2, implying ∥T2(δ)∥op≤¯∆bg/δ. (145) Combining (142), (143), (144) and (145) yields ∥A(δ)∥op≤2q ¯∆bg/δ+¯∆bg/δ. Under the condition that 20 ¯∆bg≤δ, one can deduce ∥A(δ)∥op≤1/2. 59 Bounding δ|Tr(B(δ))|.By (140) and writing Ib:= 1 nnX i=1Tr Σ +δI−1Φ(Zi) Φ(bZi)−Φ(Zi)⊤ Σ +δI−1 I′ b:= 1 nnX i=1Tr Σ +δI−1 Φ(bZi)−Φ(Zi) Φ(Zi)⊤ Σ +δI−1 IIb:=1 nnX i=1Tr Σ +δI−1 Φ(bZi)−Φ(Zi) Φ(bZi)−Φ(Zi)⊤ Σ +δI−1 , we have δ|Tr(B(δ))| ≤ δ Ib+ I′ b+ II b . For any elements a1, . . . , a n∈ℓ2(N) and b1, . . . , b n∈ℓ2(N), applying Cauchy-Schwarz inequality twice yields nX i=1Tr(aib⊤ i)≤nX i=1∥ai∥2∥bi∥2≤ nX i=1∥ai∥2 2!1 2 nX i=1∥bi∥2 2!1 2 . (146) Regarding I b, observe that Ib≤1 n nX i=1 Σ +δI−1 Φ(bZi)−Φ(Zi) 2 2!1 2 nX i=1 Σ +δI−1Φ(Zi)⊤ 2 2!1 2 by (146) ≤1 nδ nX i=1 Φ(bZi)−Φ(Zi) 2 2!1 2 nX i=1 Σ +δI−1Φ(Zi)⊤ 2 2!1 2 ≤s ¯∆bg δ2 1 nnX i=1 Σ +δI−1Φ(Zi)⊤ 2 2!1 2 by (141) =s ¯∆bg δ2 1 nnX i=1Tr Σ +δI−1Φ(Zi)Φ(Zi)⊤ Σ +δI−1!1 2 ≤s ¯∆bg δ2r 1 δ 1 nnX i=1Tr Σ +δI−1 2Φ(Zi)Φ(Zi)⊤ Σ +δI−1 2!1 2 =s ¯∆bg δ2s bD(δ) δby (135) . Repeating similar arguments gives I′ b≤s ¯∆bg δ2s bD(δ) δ. 60 For II b, we note that Tr T2(δ)) =1 nnX i=1Tr Σ +δI−1 2 Φ(bZi)−Φ(Zi) Φ(bZi)−Φ(Zi)⊤ Σ +δI−1 2 ≤1 nnX i=1 Σ +δI−1 2 Φ(bZi)−Φ(Zi) 2 2by (146) ≤¯∆bg δby (141). (147) Invoking Von Neumann’s trace inequality gives IIb= Tr Σ +δI−1 2T2(δ) Σ +δI−1 2 ≤¯∆bg δ2. Summing the bounds of I b,I′ b,IIbgives δ|Tr(B(δ))| ≤2s ¯∆bgbD(δ) δ+¯∆bg δ. Bounding ∥A(δ)∥2 HS. Start by noting that ∥A(δ)∥2 HS= Tr (Σ + δI)−1 2(Σ−Σx)(Σ + δI)−1(Σ−Σx)(Σ + δI)−1 2 . 
Further by applying (140), we have ∥A(δ)∥2 HS≤ Tr T1(δ)T1(δ) + Tr T′ 1(δ)T′ 1(δ) + Tr T2(δ)T2(δ) + 2 Tr T1(δ)T′ 1(δ) + 2 Tr T1(δ)T2(δ) + 2 Tr T′ 1(δ)T2(δ) .(148) For Tr T1(δ)T1(δ) , it follows from (146) that Tr T1(δ)T1(δ) ≤1 n nX i=1∥T1(δ)∥2 op Σ +δI−1 2 Φ(bZi)−Φ(Zi) 2 2!1 2 nX i=1 Σ +δI−1 2Φ(Zi) 2 2!1 2 ≤s ¯∆bg δ 1 nnX i=1 Σ +δI−1 2 Φ(bZi)−Φ(Zi) 2 2!1 2 1 nnX i=1 Σ +δI−1 2Φ(Zi) 2 2!1 2 by (143) ≤¯∆bg δ 1 nnX i=1 Σ +δI−1 2Φ(Zi) 2 2!1 2 by (141) ≤¯∆bgbD(δ) δby (135) . For Tr T′ 1(δ)T′ 1(δ) and Tr T1(δ)T′ 1(δ) , repeating the similar arguments, and together with (144), leads to Tr T′ 1(δ)T′ 1(δ) ≤¯∆bg δbD(δ) 61 and Tr T1(δ)T′ 1(δ) ≤¯∆bg δbD(δ). For Tr T2(δ)T2(δ) , we have Tr T2(δ)T2(δ) ≤ ∥T2(δ)∥opTr T2(δ) by Von Neumann’s trace inequality ≤¯∆bg δTr T2(δ) by (145) ≤¯∆bg δ2 by (147) . For Tr T1(δ)T2(δ) , by (146), (135), (141) and (145), observe that Tr T1(δ)T2(δ)
≤1 n nX i=1∥T2(δ)∥2 op Σ +δI−1 2 Φ(bZi)−Φ(Zi) 2 2!1 2 nX i=1 Σ +δI−1 2Φ(Zi) 2 2!1 2 ≤¯∆bg δ3 2bD(δ). Repeating similar arguments gives Tr T′ 1(δ)T2(δ) ≤¯∆bg δ3 2bD(δ). Summing the bounds for each term in the right-hand side of (148) yields ∥A(δ)∥2 HS≤4¯∆bgbD(δ) δ+ 4¯∆bg δ3 2bD(δ) +¯∆bg δ2 . Finally, from (139), summarizing the bounds of ∥A(δ)∥op,δ|Tr(B(δ))|and∥A(δ)∥HSconcludes that bDx(δ)−bD(δ) ≤2s ¯∆bgbD(δ) δ+¯∆bg δ+ 8¯∆bgbD(δ) δ+ 8¯∆bg δ3 2bD(δ) + 2¯∆bg δ2 (149) so that combining with the elementary inequality 2√ ab≤a+bfor any non-negative numbers a, b gives bDx(δ)≤2bD(δ) +2¯∆bg δ+ 8¯∆bgbD(δ) δ+ 8¯∆bg δ3 2bD(δ) + 2¯∆bg δ2 . So on the event {20¯∆bg≤δ}, it is easy to deduce bDx(δ)≤3bD(δ) +3¯∆bg δ. 62 Together with (133) yields bRx(δ)≤s δbDx(δ) n≤s 3δbD(δ) n+3¯∆bg n≤s 6bR2(δ) +3¯∆bg n≤C bR(δ) +s ¯∆bg n , hence we complete the proof. The following lemma bounds from above ¯∆bg= (1/n)Pn i=1∆i,bg. Lemma 12. Grant Assumption 2. Fix any η∈(0,1). Then with probability at least 1−η, one has ¯∆bg≤3 2E∆bg+28κ2log(2/η) 3n. Proof. For each i∈[n], observe that ∆i,bg=∥K(Zi,·)−K(bg(Xi),·)∥2 K =⟨K(Zi,·), K(Zi,·)⟩K−2⟨K(Zi,·), K(bg(Xi),·)⟩K+⟨K(bg(Xi),·), K(bg(Xi),·)⟩K =K(Zi, Zi)−2K(Zi,bg(Xi)) +K(bg(Xi),bg(Xi)), where the last step holds by reproducing property in HK. Then by Assumption 2, one has the following upper bound uniformly over i∈[n]. ∆i,bg≤4κ2. It follows that max i∈[n]E[∆2 i,bg]≤4κ2E∆bg. Invoking Bernstein’s inequality in Lemma 24 implies that for any η∈(0,1), it holds with probability at least 1 −ηthat 1 nnX i=1 ∆i,bg−E[∆i,bg] ≤16κ2log(2/η) 3n+r 8κ2E∆bglog(2/η) n. Rearranging the terms and using the elementary inequality 2√ ab≤a+bgives ¯∆bg≤E∆bg+16κ2log(2/η) 3n+r 8κ2E∆bglog(2/η) n ≤3 2E∆bg+28κ2log(2/η) 3n. This completes the proof. Combining Lemmas 11 and 12 gives the following corollary. Corollary 8. Grant Assumptions 2 and 3. Fix any η∈(0,1). For any δsuch that 30E∆bg+560κ2log(2/η) 3n ≤δ, with probability at least 1−η, we have bRx(δ)≤C bR(δ) +r E∆bg n+κp log(2/η) n! 
, where C >0is some absolute constant. 63 B.3.4 Relating bR(δ)toR(δ) Recall the definitions of R(δ) from (22) and bR(δ) from (44). In this part, we aim to derive an upper bound for bR(δ) in terms of R(δ). Introduce the empirical covariance operator bTKwith respect to Z1, . . . , Z n, defined as bTK:HK→ H K,bTKf:=1 nnX i=1f(Zi)K(Zi,·). By repeating the same argument of proving (111), we have En[f(Z)] =⟨f,bTKf⟩K,∀f∈ H K. Furthermore, by repeating the same argument as that in the proof of Lemma 8, we claim that the eigenvalues of Kare the same as the nlargest eigenvalues of bTK, and the remaining eigenvalues of bTKare all zero. We are therefore allowed to write the associated eigenvalues with bTKas{bµj}∞ j=1, arranged in descending order. Note that bTKdepends on Z1, . . . , Z n, and its population counterpart is known as the covariance operator TK, defined as TK:HK→ H K, T Kf:=Z ZKzf(z) dρ(z). The operator TKis defined in the same way as the integral operator LKin (20) except that their domains and ranges are distinct. It is worth mentioning that LKandTKshare the same eigenvalues {µj}j≥1, as discussed in Caponnetto and De Vito (2007). Recall the definition of bD(δ) from (131). Sincebµj=
0 for all j > n , for any δ >0, we have bD(δ) =nX j=1bµj bµj+δ=∞X j=1bµj bµj+δ= Tr(( bTK+δI)−1bTK). In the proof of this section, define for any δ >0, D(δ) := Tr(( TK+δI)−1TK) =∞X j=1µj µj+δ. Then by (132), we have 1 2D(δ)≤nR2(δ) δ≤D(δ). (150) Lemma 13. Grant Assumptions 2 and 3. Fix any η∈(0,1). For any δ >0such that R(δ)≤δ and nδ≥32κ2log256(κ2∨1) η , (151) with probability at least 1−η, one has bR(δ)≤C R(δ) +κp log(2/η) n! , where C >0is some absolute constant. 64 Proof. We split the proof into two cases: µ1< δandµ1≥δ. Case (1): Start by observing that bD(δ) = Tr (bTK+δI)−1bTK ≤1 δTr(bTK) by Von Neumann’s trace inequality ≤1 δ Tr(TK) + Tr(TK)−Tr(bTK) .(152) Applying Lemma 14 yields that for any η∈(0,1), with probability at least 1 −η, Tr(TK)−Tr(bTK) ≤1 2Tr(TK) +7κ2log(2/η) 3n. Together with (152) gives bD(δ)≤1 δ3 2Tr(TK) +7κ2log(2/η) 3n By combining with the fact µ1< δandµ1≥µ2≥. . .to deduce Tr(TK) =∞X j=1µj=∞X j=1min{µj, δ}=nR2(δ), (153) we have bD(δ)≤1 δ3 2nR2(δ) +7κ2log(2/η) 3n so that combining with (133) leads to bR(δ)≤s δbD(δ) n≤r 3 2R2(δ) +7κ2log(2/η) 3n2≤C R(δ) +κp log(2/η) n! . Case (2): To be short, for any δ >0, we write A(δ) = (TK+δI)−1 2(TK−bTK)(TK+δI)−1 2; B(δ) = (TK+δI)−1(TK−bTK)(TK+δI)−1. By repeating the same argument of proving Lemma 11, on the event {∥A(δ)∥op≤1 2}, we have |bD(δ)−D(δ)|= δTr(B(δ)) + A(δ) 2 HS 1− ∥A(δ)∥op. (154) We proceed to bound ∥A(δ)∥op,∥A(δ)∥2 HSand|δTr(B(δ))|. To this end, for any z∈ Z, define the operator Kzas Kz:R→ H K, y 7→yK(·, z), for any y∈R. 65 LetK⊤ z:HK→Rdenote the adjoint operator of Kzthat satisfies K⊤ zf=⟨K(z,·), f⟩K=f(z), for any f∈ H K. According to the definitions of TKandbTK, we find that TKf=Z ZKzK⊤ zfdρ(z) =Z ZKzf(z) dρ(z) (155) and bTKf=1 nnX i=1KZiK⊤ Zif. 
(156) Bounding ∥A(δ)∥op.Applying Lemma 15 yields that for any η∈(0,1), with probability at least 1−η, we have (TK+δI)−1 2(TK−bTK)(TK+δI)−1 2 op≤4κ2 3nδlog4δD(δ) η(µ1∧δ) +s 2κ2 nδlog4δD(δ) η(µ1∧δ) .(157) Observe that log4δD(δ) η(µ1∧δ) µ1≥δ= log4D(δ) η(150) ≤log8nR2(δ) ηδ ≤log8nδ η byR(δ)≤δ, so that together with (157) gives (TK+δI)−1 2(TK−bTK)(TK+δI)−1 2 op≤4κ2 3nδlog8nδ η +s 2κ2 nδlog8nδ η . By noting that the right hand side of the above inequality decreases with nδ, choosing any δsuch that (151) yields ∥A(δ)∥op= 1 nnX i=1 E[ζ(Zi)]−ζ(Zi) op≤1 12+1 2√ 2<1 2. (158) Bounding ∥A(δ)∥HS. For any z∈ Z, define the operator ζ(z) as ζ(z) = (TK+δI)−1 2KzK⊤ z(TK+δI)−1 2. It can be verified by applying (155) that Eh (TK+δI)−1 2KZK⊤ Z(TK+δI)−1 2i = (TK+δI)−1 2TK(TK+δI)−1 2. 66 Together with (156), we can write A(δ) = (TK+δI)−1 2(TK−bTK)(TK+δI)−1 2=1 nnX i=1 E[ζ(Zi)]−ζ(Zi) . (159) For any z∈ Z, we have Tr(KzK⊤ z) = Tr( K⊤ zKz) =K(z, z)≤κ2(160) so that ∥ζ(z)∥2 HS= Tr (TK+δI)−1 2KzK⊤ z(TK+δI)−1 22 ≤κ4 δ2. Furthermore, we have E ∥ζ(Z)∥2 HS =Eh Tr (TK+δI)−1 2KZK⊤ Z(TK+δI)−1 22i ≤sup z∈ZTr K⊤ z(TK+δI)−1Kz Eh Tr (TK+δI)−1 2KZK⊤ Z(TK+δI)−1 2i ≤κ2 δEh Tr (TK+δI)−1 2KZK⊤ Z(TK+δI)−1 2i by (160) =κ2D(δ) δby (155) . Then, by applying Lemma 25 for the the Hilbert space of Hilbert-Schmidt
operators on HKwith H= 2κ2δ−1andS=κ2δ−1D(δ), for any η∈(0,1), the following holds with probability at least 1−η, ∥A(δ)∥HS(159)= 1 nnX i=1 E[ζ(Zi)]−ζ(Zi) HS≤4κ2 nδlog2 η+s 2κ2D(δ) nδlog2 η. (161) Bounding δ|Tr(B(δ))|.For any z∈ Z, define the random variable ξ(z) as ξ(z) :=δTr (TK+δI)−1KzK⊤ z(TK+δI)−1 , and it is clear that E[ξ(Z)] =δTr((TK+δI)−1TK(TK+δI)−1). Therefore, we have δ|Tr(B(δ))|= 1 nnX i=1 E[ξ(Zi)]−ξ(Zi) . For any z∈ Z, observe that |ξ(z)|= δTr((TK+δI)−1KzK⊤ z(TK+δI)−1) ≤Tr((TK+δI)−1KzK⊤ z) by Von Neumann’s trace inequality ≤κ2 δby (160),(162) 67 which, by using Jensen’s inequality, further implies E[ξ(Z)] ≤κ2δ−1. Then we have sup i∈[n] E[ξ(Zi)]−ξ(Zi) ≤2κ2 δ. Further observe that E[E[ξ(Z)]−ξ(Z)]2 ≤ sup z∈Zξ(z) E ξ(Z) ≤κ2 δEh δTr (TK+δI)−1KZK⊤ Z(TK+δI)−1i by (162) ≤κ2 δEh Tr((TK+δI)−1KZK⊤ Z)i by Von Neumann’s trace inequality =κ2D(δ) δby (155). Then by applying Lemma 24 with H= 2κ2δ−1andS=κ2δ−1D(δ), for any η∈(0,1), the following holds with probability at least 1 −η, δ|Tr(B(δ))|= 1 nnX i=1(E[ξ(Zi)]−ξ(Zi)) ≤4κ2 3nδlog2 η+s 2κ2D(δ) nδlog2 η, (163) Combining (154), (158), (161) and (163) gives |bD(δ)−D(δ)| ≤4κ2 nδlog2 η+s 2κ2D(δ) nδlog2 η+ 2 4κ2 3nδlog2 η+s 2κ2D(δ) nδlog2 η!2 (164) so that (151) implies bD(δ)≤C D(δ) +κ2 nδlog2 η Together with (150) and repeating the same argument in Case 1 gives bR(δ)≤C R(δ) +κp log(2/η) n! . Combining both cases completes the proof. The following two lemmas are used in the proof of Lemma 13. The first lemma bounds from above the trace of TK−bTK. Lemma 14. Grant Assumption 2. For any fixed η∈(0,1), the following inequality holds with probability at least 1−η, |Tr(TK)−Tr(bTK)| ≤1 2Tr(TK) +7κ2log(2/η) 3n. 68 Proof. By applying (155) and (156), we can write |Tr(TK)−Tr(bTK)|= 1 nnX i=1 E[Tr(KZiKZi)]−Tr(KZiKZi) . (165) For each i∈[n], according to Assumption 2, we have Tr(KZiK⊤ Zi) = Tr( K⊤ ZiKZi) =K(Zi, Zi)≤κ2, which further implies that sup i∈[n]E Tr KZiK⊤ Zi2 ≤κ2Eh Tr KZiK⊤ Zii =κ2Tr(TK). 
Invoking Bernstein’s inequality in Lemma 24 implies that for any η∈(0,1), it holds with probability at least 1 −ηthat |Tr(TK)−Tr(bTK)|= 1 nnX i=1 E[Tr(KZiKZi)]−Tr(KZiKZi) by (165) ≤4κ2log(2/η) 3n+r 2κ2Tr(TK) log(2 /η) n. Using the elementary inequality 2√ ab≤a+bfor any two non-negative numbers a, bcompletes the proof. The next lemma bounds from above the operator norm of ( TK+δI)−1 2(TK−bTK)(TK+δI)−1 2. Lemma 15. Grant Assumptions 2 and 3. Fix any η∈(0,1)andδ >0. Then with probability at least 1−η, one has (TK+δI)−1 2(TK−bTK)(TK+δI)−1 2 op≤4κ2 3nδlog4δD(δ) η(µ1∧δ) +s 2κ2 nδlog4δD(δ) η(µ1∧δ) .(166) Proof. We prove this lemma by following the same notation in the proof of Lemma 13. For any z∈ Z, we have ∥ζ(z)∥op= (TK+δI)−1 2KzK⊤ z(TK+δI)−1 2 op ≤1 δ KzK⊤ z op≤1 δTr(KzK⊤ z)(160) ≤κ2 δ.(167) It follows from Jensen’s inequality that E[ζ(Z)] op≤Eh ζ(Z) opi ≤κ2 δ. Combining the bounds of ∥ζ(z)∥opand∥E[ζ(Z)]∥opgives sup i∈[n] E[ζ(Zi)]−ζ(Zi) op≤2κ2 δ. 69 Regarding E[Eζ(Z)−ζ(Z)]2, observe that E[Eζ(Z)−ζ(Z)]2≤E ζ(Z)2 ≤ sup z∈Zζ(z) E[ζ(Z)] ≤κ2 δEh (TK+δI)−1 2KZK⊤ Z(TK+δI)−1 2i by (167) =κ2 δ(TK+δI)−1 2TK(TK+δI)−1 2 by (155) . By applying Lemma 26 with H= 2κ2δ−1andS=κ2δ−1(TK+δI)−1 2TK(TK+δI)−1 2and noting that ∥S∥op=κ2 δ (TK+δI)−1 2TK(TK+δI)−1 2 op≤κ2 δ, for any η∈(0,1), the following holds with probability at least 1 −η, 1 nnX i=1
E[ζ(Zi)]−ζ(Zi) op≤4κ2 3nδlog2Tr(S) η∥S∥op +s 2κ2 nδlog2Tr(S) η∥S∥op . (168) By using (132), we find that (TK+δI)−1 2TK(TK+δI)−1 2 op=µ1 µ1+δ≥µ1∧δ 2δ so that Tr(S) ∥S∥op=D(δ) ∥(TK+δI)−1 2TK(TK+δI)−1 2∥op≤2δD(δ) (µ1∧δ). Combining with (168) completes the proof. Remark 6. It is worth mentioning that bounding the LHS in (166) is a key step in the proof of some recent work on the kernel-based methods when adopting the operator technique, as seen, for instance, Lin and Cevher (2020); Lin et al. (2020). However, the proof in Lin et al. (2020) requires µ1≥cδfor some constant c >0, a lower bound condition on µ1, and the derived upper bound in Lin and Cevher (2020) is slightly weaker than ours in Lemma 15. Precisely, Lemma 23 in Lin and Cevher (2020) states that the LHS in (166) can be bounded from above (in probability) by 4κ2 3nδlog4(D(δ) + 1) ηµ1 +s 2κ2 nδlog4(D(δ) + 1) ηµ1 . By comparison, we observe that if µ1≤δ, the term δD(δ)/(µ1∧δ) in our bound in Lemma 15 reduces toδD(δ)/µ1, which is much smaller ( D(δ) + 1) /µ1since when applying Lemma 15, we typically choose δto depend on nsuch that δ→0 asn→ ∞ . Ifµ1> δ, the term δD(δ)/(µ1∧δ) reduces to D(δ) which is also much smaller ( D(δ) + 1) /µ1when µ1→0 (note that µ1≤Tr(TK)≤κ2). 70 B.4 Proof of the claim in Eq. (29) Proof. To verify the statements, by using (13) and (14), we first note that (bf◦bg)(X) =1 nnX i=1e⊤ i(Kx+λIn)−1YK(bg(Xi),bg(X)) with [ Kx]ij=n−1K(bg(Xi),bg(Xj)) for i, j∈[n]. For Kthat is invariant to orthogonal transforma- tions, we have (bf◦bg)(X) = (bf◦(Qbg))(X) for all fixed Q∈Or×r. Therefore, repeating the arguments in (17) and (19) gives E(bf◦bg)≤E (Y−(bf◦Qbg)(X))2−(Y−(fH◦Qbg)(X))2 + (1 + θ)E[fH(Z)−(fH◦(Qbg))(X)]2+1 +θ θ∥fH−f∗∥2 ρ for any θ≥1. Under Assumption 5, we further have E[fH(Z)−(fH◦(Qbg))(X)]2≤C2 K∥fH∥2 KE ∥Z−Qbg(X)∥2 2 . Since Qis taken arbitrarily, the claim follows from inspecting the proof of Theorem 2. B.5 Proof of Corollary 1 Proof. 
Under Assumptions 2 and 3, we have nR2(δ) =∞X j=1min{δ, µj} ≤∞X j=1µjE ϕ2 j(Z)(21)=E[K(Z, Z)]≤κ2. Then δ=κ/√nmust satisfy R(δ)≤δ. Applying Lemma 18 yields δn≤κ/√n. B.6 Proof of Corollary 2 Proof. First note that for any δ≥0, R(δ)2=1 nrX j=1min{δ, σj} ≤r nδ. The statement then follows by using Lemma 18 to deduce δn≤r n and invoking Theorem 2. 71 B.7 Proof of Corollary 3 Proof. First, we note that R2(δ) =1 n∞X j=1min δ, µj ≤1 n∞X j=1µj=1 n∞X j=1µjE[ϕ2 j(Z)](21)=1 nE[K(Z, Z)]≤1 n, where the last step follows from Assumption 2 with κ= 1. By Lemma 19, we must have δn≤1√n. Now fix any 0 < δ≤1/√nand recall that d(δ) = max {j≥0 :µj≥δ}with µ0=∞. We claim that d(δ) + 1≤C′δ−1/(2α)(169) for some constant C′=C′(C, α)>0. This follows trivially by using α > 1/2 ifd(δ) = 0. For d(δ)≥1, by definition of d(δ) and µj≤Cj−2αfor all j≥1, we have δ≤µd(δ)≤Cd(δ)−2α from which (169) follows. To derive the rate of δn, we have R2(δ) =1 n∞X j=1min δ, µj ≤δ n(d(δ) + 1) +1 n∞X j=d(δ)+2µj ≤C′ nδ2α−1 2α+C n∞X j=d(δ)+2j−2α ≤C′ nδ2α−1 2α+C nZ∞ d(δ)+1t−2αdt =C′ nδ2α−1 2α+C
n1 2α−1(d(δ) + 1)1−2α ≤Cα nδ2α−1 2α, where Cαis some constant depending only on α. Then solvingp Cα/n δ2α−1 4α=δand applying Lemma 18 yield δn≤C′ αn−2α 2α+1. Invoking Theorem 2 gives the bound (in order) n−2α 2α+1log (1 /η) +E∥bg(X)−Z∥2 2+log(1/η) n. This completes the proof. 72 B.8 Proof of Corollary 4 Proof. Fix any 0 < δ≤1/√n. Repeating the arguments in the proof of Corollary 4 gives that the statistical dimension d(δ) under µj≤exp(−γj) for j≥1 satisfies d(δ) + 1≤1 γlog(C/δ). (170) Also by similar reasoning in the proof of Corollary 4, we have R2(δ)≤δ n(d(δ) + 1) +1 nZ∞ d(δ)+1exp(−γt)dt ≤δ n(d(δ) + 1) +1 γnexp(−γ(d(δ) + 1))) ≲δ nlog 1/δ by (170) . This leads to δn≲logn n. Invoking Theorem 2 gives the bound (in order) lognlog (1 /η) n+E∥bg(X)−Z∥2 2+log(1/η) n. This completes the proof. B.9 Proof of Theorem 3: the minimax lower bound Proof. To establish the claimed result, it suffices to prove each term in the minimax lower bounds separately. Step 1: To prove the term δnin the lower bound, we consider the model X=AZfor some deterministic matrix A∈Rp×randY=f∗(Z) +ϵwith ϵ∼N(0, σ2). In this case, we observe that inf hsup θ∈ΘEθ[f∗(Z)−h(X)]2≥inf hsup θ∈ΘEθ[f∗(Z)−(h◦A)(Z)]2 ≥inf fsup f∗∈HK,∥f∗∥K≤1∥f−f∗∥2 ρ(171) where the infimum is over all measurable function f:Rr→Rconstructed from {(Yi, Zi)}n i=1. To proceed, for any δ >0, define the ellipse E(δ) := θ∈R∞:∞X j=1θ2 j min{δ, µj}≤1 . By applying Lemma 16, there exists a (√ δ/2)-separated collection of points {p(1), . . . , p(m)}inE(δ) under the metric ∥ · ∥ 2such that log m≥d(δ)/64. We can therefore construct f(1), . . . , f(m)∈ H K as f(k)=∞X j=1p(k) jϕj. 73 By construction, we have for any pair ( i, j)∈[m]×[m], ∥f(i)−f(j)∥2 ρ=∥p(i)−p(j)∥2 2, implying that f(1), . . . , f(m)is also (√ δ/2)-separated under the metric ∥ · ∥ ρ. For each k∈[m], let ρ(k)correspond to the underlying distribution of the collected data {(Yi, Zi)}n i=1when f∗=f(k)in (4). Let KL( · ∥ ·) denote the KL divergence between two distributions. 
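The next display relies on the closed-form KL divergence between two Gaussians with common variance, $\mathrm{KL}\bigl(N(\mu_1,\sigma^2)\,\|\,N(\mu_2,\sigma^2)\bigr)=(\mu_1-\mu_2)^2/(2\sigma^2)$. As an illustrative sanity check (the particular means and variance below are arbitrary choices, not from the paper), this identity can be verified by direct numerical integration:

```python
import numpy as np

# Check KL(N(m1, s^2) || N(m2, s^2)) = (m1 - m2)^2 / (2 s^2) numerically.
m1, m2, s = 0.3, -0.5, 1.7
x = np.linspace(-20.0, 20.0, 400001)   # wide grid; Gaussian tails negligible
p = np.exp(-(x - m1) ** 2 / (2 * s * s)) / (s * np.sqrt(2 * np.pi))
q = np.exp(-(x - m2) ** 2 / (2 * s * s)) / (s * np.sqrt(2 * np.pi))
f = p * np.log(p / q)                  # KL integrand
# Trapezoidal rule, written out to stay compatible across NumPy versions.
kl_numeric = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))
kl_closed = (m1 - m2) ** 2 / (2 * s * s)
assert abs(kl_numeric - kl_closed) < 1e-6
```

Summing this identity over the $n$ independent observations via the chain rule gives exactly the factor $n/(2\sigma^2)$ appearing below.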
For any ( i, j)∈[m]×[m], observe that KL ρ(i)∥ρ(j) =nX k=1Eh KL N(f(i)(Zk), σ2)∥N(f(j)(Zk), σ2)i =n 2σ2∥f(i)−f(j)∥2 ρ ≤2nδ σ2, where the first equality follows from the chain rule of KL divergence and the last second step follows from the fact that KL (ρ1∥ρ2) =(µ1−µ2)2 2σ2 for any ρ1=N(µ1, σ2) and ρ2=N(µ2, σ2). Applying Fano’s lemma yields the lower bound as inf fsup f∗∈HK,∥f∗∥K≤1∥f−f∗∥2 ρ≥δ, (172) provided that 2nδ/σ2+ log 2 d(δ)/64<1. (173) By the definition of regular kernel, we have d(δn)≥cnδnfor some positive constant c. Rewriting δ′=c1δnfor some sufficiently small c1=c1(σ2), by d(δn)≥128 log 2, we find d(c1δn) 64≥d(δn) 64≥cnδn 128+ log 2 , implying that (173) holds for δ′with c1being sufficiently small. Then using (172) with δ=δ′gives inf fsup f∗∈HK,∥f∗∥K≤1∥f−f∗∥2 ρ≥c1δn. Together with (171) yields inf hsup θ∈ΘEθ[f∗(Z)−h(X)]2≥c1δn. Step 2: We prove the appearance of the second term in the lower bound by using the property the universality of the RKHS HKinduced by some universal kernel K. Specifically, fix any ϑ >0. For anyβ∈Sr−1, the unit sphere of Rr,
define the function fβasfβ(z) =z⊤β. By the universality of the kernel K, for any β∈Sr−1, there must exist a function fϑ,β∈ H Ksuch that ∥fϑ,β−fβ∥ρ≤ ∥fϑ,β−fβ∥∞≤ϑ. 74 For any h:Rp→R, observe that sup θ∈ΘEθ[f∗(Z)−h(X)]2≥sup β∈Sr−1Eθ[fϑ,β(Z)−h(X)]2 ≥sup β∈Sr−11 2Eθ[fβ(Z)−h(X)]2−Eθ[fβ(Z)−fϑ,β(Z)]2 ≥sup β∈Sr−11 2Eθ[fβ(Z)−h(X)]2−ϑ2. (174) Since ϑcan be arbitrarily small and the leftmost side of (174) is independent of ϑ, we obtain inf hsup θ∈ΘEθ[f∗(Z)−h(X)]2≥1 2inf hsup β∈Sr−1Eθh Z⊤β−h(X)i2 =1 2sup β∈Sr−1Eθh Z⊤β−E Z⊤β|Xi2 =1 2sup β∈Sr−1β⊤ΣZ|Xβ =1 2∥ΣZ|X∥op where we write Σ Z|X=E[Cov( Z|X)]. Since model (3) with Z∼N(0r,Ir) and W∼N(0p,Ip) implies ΣZ|X=Ir−A⊤(AA⊤+Ip)−1A= (A⊤A+Ir)−1. Invoking Assumption 6 yields the second term 1 /pin the lower bound, thereby completing the proof. Our proof of lower bound is based on the following lemma which is proved in Lemma 4 of Yang et al. (2017) for the fixed design setting. Since their argument can be used for the random design as well, we omit the proof. Lemma 16. For any δ >0, there is a collection of (√ δ/2)-separated points {p(1), . . . , p(m)}inE(δ) under the metric ∥ · ∥ 2such that logm≥d(δ) 64. B.10 Proofs of Section 5: general convex loss functions For any f:Z →R, letℓf:R× X → R+be the loss function (relative to fH) ℓf(y, x) :=ℓf◦bg(y, x) :=L(y,(f◦bg)(x))−L(y,(fH◦bg)(x)). For any measurable function f:Z →R, the excess risk can be decomposed as E(f◦bg) =E[L(Y,(f◦bg)(X))−L(Y, f∗(Z))] =E[ℓf(Y, X)] +E[L(Y,(fH◦bg)(X))−L(Y, f∗(Z))] ≤E[ℓf(Y, X)] +CUE[(fH◦bg)(X)−f∗(Z)]2by Assumption 9 ≤E[ℓf(Y, X)] + 2 CU∥fH∥2 KE∆bg+ 2CU∥fH−f∗∥2 ρ by (19) with θ= 1. 75 The above error decomposition, with bfin place of f, reveals that the magnitude of the excess risk ofbf◦bgis jointly determined by the estimation error E[ℓbf(Y, X)], the kernel-related latent error E∆bgand the approximation error ∥fH−f∗∥2 ρ. B.10.1 Proof of Theorem 4 Proof of Theorem 4. The proof largely follows the same arguments used in the proof of Theorem 2. 
Here we only mention some key changes. First, a similar result for general loss functions to Lemma 2 follows from the convexity of L(y,·) by Assumption 8. Second, by using Corollary 9 stated and proved in the next section, for any η∈(0,1), with probability at least 1 −η, the following holds uniformly over f∈ Fb, E[ℓf(Y, X)]≤2En[ℓf(Y, X)] + 2 λ∥fH∥2 K. This result replaces (75) in the proof of Theorem 2. For Case (1) considered in the proof of Theorem 2, the bαdefined therein exists due to the convexity of L(y,·). So the proof follows analogously. For Case (2), the chain of inequalities in (79) can be replaced by −E[ℓfeα(Y, X)] =−E[L(Y,(feα◦bg)(X))−L(Y,(fH◦bg)(X))] =−E[L(Y,(feα◦bg)(X))−L(Y, f∗(Z))] +E[L(Y,(fH◦bg)(X))−L(Y, f∗(Z))] ≤ −CLE[(feα◦bg)(X)−f∗(Z)]2+CUE[(fH◦bg)(X)−f∗(Z)]2by Assumption 9 ≤CUE[(fH◦bg)(X)−f∗(Z)]2 ≤2CU∥fH∥2 KE∆bg+ 2CU∥fH−f∗∥2 ρ by (19). Reasoning as in the proof of Theorem 2 completes the proof. B.10.2 Uniform upper bounds of E[ℓf(Y, X)]over f∈ Fb The following lemma, which is an adaptation of Lemma 3 for general loss functions, bounds from above E[ℓf(Y, X)] by its empirical counterpart En[ℓf(Y, X)], the fixed point δxofψx(·) in (86), the kernel-related latent error E∆bgand the approximation error ∥fH−f∗∥2 ρ, uniformly over f∈ F b. Recall the definitions of ξffrom (84) and Ξ ffrom (85). Lemma
17. Grant Assumptions 1, 2, 8 and 9. Fix any η∈(0,1). With probability at least 1−η, for any f∈ Fb, one has E[ℓf(Y, X)]≤2En[ℓf(Y, X)] +c1CLmax C2 ℓC−2 L,1 δx +c2 C2 ℓC−1 L+Cℓκ∥fH∥Klog(1/η) n+ 4CU∥fH∥2 KE∆bg+ 4CU∥fH−f∗∥2 ρ. Here c1andc2are some absolute positive constants. Proof. Define the scalar V:= 2C2 ℓC−1 Land the function class Fℓ:=n ℓf:f∈ Fbo . 76 Further define the functional T:Fℓ→R+as T(ℓf) :=V E[ℓf(Y, X)] + 4 CU ∥fH∥2 KE∆bg+∥fH−f∗∥2 ρ (175) as well as the local Rademacher complexity of Fℓas Rn ℓf∈ Fℓ:T(ℓf)≤δ . The remaining proof is completed by splitting into 4 steps. Step 1: Verification of the boundedness of Fℓ.Using some results in Step (1) of the proof of Lemma 3 yields for every f∈ Fband ( y, x)∈R× X, we have |ℓf(y, x)|=|L(y,(f◦bg)(x))−L(y,(fH◦bg)(x))| ≤Cℓ (f◦bg)(x)−(fH◦bg)(x) by Assumption 8 ≤3Cℓκ∥fH∥K (176) so that the function class Fℓis uniformly bounded within [ −3Cℓκ∥fH∥K,3Cℓκ∥fH∥K]. Step 2: Bounding the variance of ℓf(Y, X).Note that Var (ℓf(Y, X)) ≤C2 ℓE[(f◦bg)(X)−(fH◦bg)(X)]2by Assumption 8 ≤2C2 ℓ E[(f◦bg)(X)−f∗(Z)]2+E[f∗(Z)−(fH◦bg)(X)]2 ≤2C2 ℓE[(f◦bg)(X)−f∗(Z)]2+ 4C2 ℓ∥fH∥2 KE∆bg+ 4C2 ℓ∥fH−f∗∥2 ρby (19) .(177) Applying Assumption 9 leads to E[(f◦bg)(X)−f∗(Z)]2 ≤C−1 LE[L(Y,(f◦bg)(X))−L(Y, f∗(Z))] =C−1 L(E[ℓf(Y, X)] +E[L(Y,(fH◦bg)(X))−L(Y, f∗(Z))]) =C−1 L E[ℓf(Y, X)] +CUE[f∗(Z)−(fH◦bg)(X)]2 =C−1 L E[ℓf(Y, X)] + 2 CU∥fH∥2 KE∆bg+ 2CU∥fH−f∗∥2 ρ by (19) .(178) Together with (177) yields Var (ℓf(Y, X))≤V E[ℓf(Y, X)] + 4 V CU ∥fH∥2 KE∆bg+∥fH−f∗∥2 ρ(175)=T(ℓf). Step 3: Choosing a suitable sub-root function. By inspecting the argument of proving (177) and (178), we have that for any f∈ Fb, V CL 2E[ξ2 f(X)] =V CL 2E[(f◦bg)(X)−(fH◦bg)(X)]2 ≤V E[ℓf(Y, X)] + 4 CU∥fH∥2 KE∆bg+ 4CU∥fH−f∗∥2 ρ(87)=T(ℓf) 77 so that for any δ≥0, Rn ℓf∈ Fℓ:T(ℓf)≤δ ≤ R n ℓf∈ Fℓ:V CLE[ξ2 f(X)]≤2δ . 
(179) By Assumption 8, for any f1, f2∈ Fband ( y, x)∈R× X, we have |ℓf1(y, x)−ℓf1(y, x)|=|L(y,(f1◦bg)(x))−L(y,(f2◦bg)(x))| ≤Cℓ ξf1(x)−ξf2(x) which implies that ℓf(Yi, Xi) is ( Cℓ)-Lipschitz in ξf(Xi) for all i∈[n]. Then, by repeating the arguments of proving (94), and together with (179), we conclude Rn ℓf∈ Fℓ:T(ℓf)≤δ ≤CℓRn ξf∈Ξb:V CLE[ξ2 f(X)]≤2δ =Cℓψx2δ V CL so that by choosing ψ(δ) =V Cℓψx 2δ V CL (Lemma 19 ensures it is sub-root in δ), for every δ≥δ⋆, ψ(δ)≥VRn ℓf∈ Fℓ:T(ℓf)≤δ . Step 4: Bounding the fixed point of the sub-root function ψ(·).Write the fixed point of ψ(·) asδ⋆. By using Lemma 19, δ⋆can be bounded as δ⋆≤max 4C2 ℓC−2 L,1 V CL 2δx. Finally, the proof is completed by invoking Lemma 22 with −a=b= 3Cℓκ∥fH∥K, B=V, L= 4CU ∥fH∥2 KE∆bg+∥fH−f∗∥2 ρ andθ= 2. Combining Lemmas 4 and 17 readily yields the following corollary where we bound from above δxbyδn,bg, with the latter given in (67). Corollary 9. Grant Assumptions 1–3, 8 and 9. Fix any η∈(0,1). With probability at least 1−η, for any f∈ Fb, one has E[ℓf(Z, X)]≤2En[ℓf(Y, X)] +c1CLmax C2 ℓC−2 L,1 VKδn,bg +c2 C2 ℓC−1 L+Cℓκ∥fH∥Klog(1/η) n+ 4CU∥fH∥2 KE∆bg+ 4CU∥fH−f∗∥2 ρ. C Auxilliary Lemmas In this section, we summarize the supporting lemmas used in our proof. 78 C.1 Definition and properties of the sub-root functions Definition 2. A function
$\psi:[0,\infty)\to[0,\infty)$ is a sub-root function if it is nonnegative, nondecreasing, and if $\delta\mapsto\psi(\delta)/\sqrt\delta$ is nonincreasing for $\delta>0$.

A useful property of sub-root functions is stated as follows.

Lemma 18 (Lemma 3.2 in Bartlett et al. (2005)). If $\psi:[0,\infty)\to[0,\infty)$ is a nontrivial (i.e., other than the constant function $\psi\equiv0$) sub-root function, then it is continuous on $[0,\infty)$ and the equation $\psi(\delta)=\delta$ has a unique positive solution $\delta^\star$. Moreover, for all $\delta>0$, $\delta\ge\psi(\delta)$ if and only if $\delta^\star\le\delta$. Here, $\delta^\star$ is referred to as the fixed point.

The following lemma lists some useful properties of sub-root functions and their fixed points; its proof can be found in Duan et al. (2021).

Lemma 19. If $\psi:[0,\infty)\to[0,\infty)$ is a nontrivial sub-root function and $\delta^\star$ is its positive fixed point, then
(i) for any $c>0$, $\widetilde\psi(\delta):=c\,\psi(c^{-1}\delta)$ is sub-root and its positive fixed point $\widetilde\delta^\star$ satisfies $\widetilde\delta^\star=c\,\delta^\star$;
(ii) for any $C>0$, $\widetilde\psi(\delta):=C\psi(\delta)$ is sub-root and its positive fixed point $\widetilde\delta^\star$ satisfies $\widetilde\delta^\star\le(C^2\vee1)\,\delta^\star$.

C.2 Local Rademacher complexity

The following lemma shows that the local Rademacher complexity can be taken to be a sub-root function.

Lemma 20. If the class $\mathcal F$ is star-shaped (i.e., for any $f\in\mathcal F$ and $\alpha\in[0,1]$, the scaled function $\alpha f$ also belongs to $\mathcal F$), and $T:\mathcal F\to\mathbb R_+$ is a (possibly random) function that satisfies $T(\alpha f)\le\alpha^2T(f)$ for any $f\in\mathcal F$ and any $\alpha\in[0,1]$, then the (random) function $\psi$ defined for $\delta\ge0$ by
$$\psi(\delta)=\widehat{\mathcal R}_n\bigl(\{f\in\mathcal F:T(f)\le\delta\}\bigr)$$
is sub-root. Furthermore, the (random) function $\psi'$ defined for $\delta\ge0$ by
$$\psi'(\delta)=\mathcal R_n\bigl(\{f\in\mathcal F:T(f)\le\delta\}\bigr)$$
is also sub-root.

Proof. See the proof of Lemma 3.4 in Bartlett et al. (2005).

Recall that $\mathcal F_b:=\{f\in\mathcal H_K:\|f-f_{\mathcal H}\|_K\le3\|f_{\mathcal H}\|_K\}$.

Lemma 21 (Corollary of Lemma 20). The function
$$\psi(\delta)=\mathcal R_n\bigl(\bigl\{x\mapsto(f\circ\widehat g)(x)-(f_{\mathcal H}\circ\widehat g)(x):f\in\mathcal F_b,\ \mathbb E[(f\circ\widehat g)(X)-(f_{\mathcal H}\circ\widehat g)(X)]^2\le\delta\bigr\}\bigr)$$
is sub-root.

Proof. By translating the function class, it suffices to show that
$$\psi(\delta)=\mathcal R_n\bigl(\bigl\{x\mapsto(f\circ\widehat g)(x):f\in\mathcal H_K,\ \|f\|_K\le3\|f_{\mathcal H}\|_K,\ \mathbb E[(f\circ\widehat g)(X)]^2\le\delta\bigr\}\bigr)$$
is sub-root. To show this, we take $T(f)=\mathbb E[(f\circ\widehat g)(X)]^2$. We find
$$T(\alpha f)=\mathbb E[(\alpha f\circ\widehat g)(X)]^2=\alpha^2\,\mathbb E[(f\circ\widehat g)(X)]^2=\alpha^2T(f).$$
Moreover, it is clear that the function class $\{x\mapsto(f\circ\widehat g)(x):f\in\mathcal H_K,\ \|f\|_K\le3\|f_{\mathcal H}\|_K\}$ is star-shaped. Invoking Lemma 20 completes the proof.

The following lemma is a variant of Theorem 3.3 in Bartlett et al. (2005), which is useful for proving uniform concentration and can be found in Duan et al. (2021).

Lemma 22 (Corollary of Theorem 3.3 in Bartlett et al. (2005)). Let $\mathcal F$ be a class of functions with ranges in $[a,b]$ and let $U_1,\dots,U_n$ be i.i.d. copies of some random variable $U$. Assume that there are a functional $T:\mathcal F\to\mathbb R_+$ and constants $B$ and $L$ such that for every $f\in\mathcal F$,
$$\operatorname{Var}[f(U)]\le T(f)\le B\bigl(\mathbb Ef(U)+L\bigr).\qquad\text{(Variance condition)}$$
Let $\psi$ be a sub-root function and let $\delta^\star$ be the fixed point of $\psi$. Assume that $\psi$ satisfies, for any $\delta\ge\delta^\star$,
$$\psi(\delta)\ge B\,\mathcal R_n\bigl(\{f\in\mathcal F:T(f)\le\delta\}\bigr).$$
Then, for any $\theta>1$ and $\eta\in(0,1)$, with probability at least $1-\eta$,
$$\mathbb Ef(U)\le\frac{\theta}{\theta-1}\,\mathbb E_nf(U)+c_1\theta B\delta^\star+\bigl(c_2(b-a)+c_3B\theta\bigr)\frac{\log(1/\eta)}n+\frac L{\theta-1},\quad\text{for any }f\in\mathcal F.$$
Also, with probability at least $1-\eta$,
$$\mathbb E_nf(U)\le\frac{\theta+1}{\theta}\,\mathbb Ef(U)+c_1\theta B\delta^\star+\bigl(c_2(b-a)+c_3B\theta\bigr)\frac{\log(1/\eta)}n+\frac L\theta,\quad\text{for any }f\in\mathcal F.$$
Here, $c_1,c_2,c_3>0$ are some universal constants.

The following lemma is contained in Ledoux and Talagrand (2013); it allows us to utilize the symmetrization technique for families of Lipschitz functions.

Lemma 23 (Ledoux–Talagrand contraction inequality). For any set $\mathcal T\subseteq\mathbb R^d$, let $\{\psi_j:\mathbb R\to\mathbb R,\ j=1,\dots,d\}$ be any
https://arxiv.org/abs/2505.20022v1
family of $B$-Lipschitz functions. Then, we have
$\mathbb E\Big[\sup_{\theta\in\mathcal T}\sum_{j=1}^d \varepsilon_j\,\psi_j(\theta_j)\Big]\le B\,\mathbb E\Big[\sup_{\theta\in\mathcal T}\sum_{j=1}^d \varepsilon_j\,\theta_j\Big]$.

C.3 Auxiliary concentration inequalities

This section provides some useful concentration and tail inequalities that are used in the proofs of the Appendix. The first lemma states a variant of the classical Bernstein inequality for sums of random variables (Bernstein, 1946).

Lemma 24. Let $\xi_1,\dots,\xi_n$ be a sequence of independent and identically distributed real-valued random variables with zero mean. If there exist $H,S>0$ such that $\max_{i\in[n]}|\xi_i|\le H$ almost surely and $\mathbb E\xi_i^2\le S$ for any $i\in[n]$, then for any $\eta\in(0,1)$, the following holds with probability at least $1-\eta$:
$\Big|\dfrac{1}{n}\sum_{i=1}^n\xi_i\Big|\le \dfrac{2H}{3n}\log\dfrac{2}{\eta}+\sqrt{\dfrac{2S}{n}\log\dfrac{2}{\eta}}$.

Proof. See, for instance, Proposition 2.14 in Wainwright (2019) for a proof of Bernstein's inequality. The form of Bernstein's inequality in Lemma 24 then follows after some simple algebra.

The following concentration inequality is used for bounding sums of random variables taking values in a real separable Hilbert space, and is based on Theorem 3.3.4 in Yurinsky (1995). See also Caponnetto and De Vito (2007); Lin and Cevher (2020); Rudi and Rosasco (2017); Rudi et al. (2015).

Lemma 25. Let $\mathcal F$ be a real separable Hilbert space equipped with norm $\|\cdot\|_v$ and let $\xi\in\mathcal F$ be a random element. Assume that there exist constants $H,S>0$ such that
$\mathbb E\big[\|\xi-\mathbb E\xi\|_v^{\,k}\big]\le \tfrac{1}{2}\,k!\,S\,H^{k-2}$, for any $k\in\mathbb N$, $k\ge 2$. (180)
Let $\xi_1,\dots,\xi_n\in\mathcal F$ be i.i.d. copies of $\xi$. Then for any $\eta\in(0,1)$, the following holds with probability at least $1-\eta$:
$\Big\|\dfrac{1}{n}\sum_{i=1}^n\xi_i-\mathbb E\xi\Big\|_v\le \dfrac{2H}{n}\log\dfrac{2}{\eta}+\sqrt{\dfrac{2S}{n}\log\dfrac{2}{\eta}}$.
In particular, (180) is satisfied if $\|\xi\|_v\le H/2$ almost surely and $\mathbb E[\|\xi\|_v^2]\le S$.

The following is Bernstein's inequality for sums of random self-adjoint operators, which was proved by Minsker (2017). See also Lin and Cevher (2020); Rudi and Rosasco (2017); Rudi et al. (2015).

Lemma 26. Let $\mathcal F$ be a separable Hilbert space and let $\xi_1,\dots$
, $\xi_n$ be a sequence of independent and identically distributed self-adjoint random operators on $\mathcal F$, endowed with the operator norm $\|\cdot\|_{\mathrm{op}}$. Assume that $\mathbb E\xi_i=0$ and $\|\xi_i\|_{\mathrm{op}}\le H$ almost surely for some $H>0$ and any $i\in[n]$. Let $S$ be a positive operator such that $\mathbb E[\xi_i^2]\preceq S$. Then for any $\eta\in(0,1)$, the following holds with probability at least $1-\eta$:
$\Big\|\dfrac{1}{n}\sum_{i=1}^n\xi_i\Big\|_{\mathrm{op}}\le \dfrac{2H\beta}{3n}+\sqrt{\dfrac{2\|S\|_{\mathrm{op}}\beta}{n}}$, where $\beta=\log\dfrac{2\,\mathrm{Tr}(S)}{\|S\|_{\mathrm{op}}\,\eta}$.

The following lemma states a tail inequality for quadratic forms of a sub-Gaussian random vector, which comes from Lemma 30 in Hsu et al. (2014).

Lemma 27. Let $\xi$ be a random vector taking values in $\mathbb R^n$ such that for some $c\ge 0$,
$\mathbb E[\exp(\langle u,\xi\rangle)]\le \exp\big(c\|u\|_2^2/2\big)$, $\forall u\in\mathbb R^n$.
For any symmetric positive semidefinite matrix $A\succeq 0$ and any $t>0$,
$\mathbb P\big[\xi^\top A\xi > c\big(\mathrm{Tr}(A)+2\sqrt{\mathrm{Tr}(A^2)\,t}+2\|A\|_{\mathrm{op}}\,t\big)\big]\le e^{-t}$.
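The scalar Bernstein bound of Lemma 24 is easy to sanity-check by simulation. The sketch below (illustrative values of our own choosing, not from the paper) draws bounded zero-mean variables and counts how often the sample mean exceeds the bound at level $\eta$; the lemma guarantees this happens with probability at most $\eta$.

```python
import math
import random

def bernstein_bound(n, H, S, eta):
    # Right-hand side of the deviation bound in Lemma 24
    return (2 * H / (3 * n)) * math.log(2 / eta) + math.sqrt((2 * S / n) * math.log(2 / eta))

random.seed(0)
# xi_i uniform on [-1, 1]: zero mean, |xi_i| <= H = 1, E[xi_i^2] = 1/3 <= S
n, H, S, eta = 2000, 1.0, 1.0 / 3.0, 0.05
trials = 500
violations = 0
for _ in range(trials):
    sample_mean = sum(random.uniform(-1.0, 1.0) for _ in range(n)) / n
    if abs(sample_mean) > bernstein_bound(n, H, S, eta):
        violations += 1

# Lemma 24 guarantees the bound fails with probability at most eta
print(violations / trials)
```

In this configuration the bound evaluates to roughly $0.036$, about $2.8$ standard deviations of the sample mean, so violations are rare and the observed frequency sits well below $\eta$.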
arXiv:2505.20151v1 [stat.AP] 26 May 2025

The evolving categories multinomial distribution: introduction with applications to movement ecology and vote transfer

Ricardo Carrizo Vergara¹, Marc Kéry¹, Trevor Hefley²
¹Population Biology Group, Swiss Ornithological Institute, 6204 Sempach, Switzerland. ²Department of Statistics, Kansas State University, Manhattan, KS 66506, USA.

Abstract

We introduce the evolving categories multinomial (ECM) distribution for multivariate count data taken over time. This distribution models the counts of individuals following iid stochastic dynamics among categories, with the number and identity of the categories also evolving over time. We specify the one-time and two-times marginal distributions of the counts and the first- and second-order moments. When the total number of individuals is unknown, placing a Poisson prior on it yields a new distribution (ECM-Poisson), whose main properties we also describe. Since likelihoods are intractable or impractical, we propose two estimating functions for parameter estimation: a Gaussian pseudo-likelihood and a pairwise composite likelihood. We show two application scenarios: the inference of movement parameters of animals moving continuously in space-time with irregular survey regions, and the inference of vote transfer in two-round elections. We give three illustrations: a simulation study with Ornstein-Uhlenbeck moving individuals, paying special attention to the autocorrelation parameter; the inference of movement and behavior parameters of lesser prairie-chickens; and the estimation of vote transfer in the 2021 Chilean presidential election.

Keywords: Evolving categories multinomial (ECM) distribution, ECM-Poisson, composite likelihood, abundance modeling, movement ecology, ecological inference.

1 Introduction

Count data are ubiquitous in statistics.
Probability distributions for modeling integer-valued data have been developed since the very beginning of probability theory and are used in virtually every discipline that applies probability and statistics. One particular scenario often present in applications is when count data are taken over time. In addition, counts often refer to entities (e.g., particles, animals, people, or individuals, to use a general term) whose attributes evolve over time. In this context, a certain multivariate distribution arises as a conceptually simple model for such counts: we call it here the evolving categories multinomial (ECM) distribution. It considers individuals moving from category to category over time, with the additional feature that the categories may also evolve over time. The categories are abstract and may mean whatever is required in an application context where individuals have to be categorized and counted dynamically: e.g., spatial volumes for gas particles; survey landscapes for animal abundance; political preference, age group or professional activity sector of humans; seasonal consumption goods preferred by customers; health state of agents during pandemics, to mention but a few examples. The ECM distribution is the joint distribution followed by the counts per category and time of individuals independently following the same dynamics across the evolving categories. It can be used to infer movement properties using the counts as the sole information. While the construction of this multivariate discrete distribution is simple and general, we are unaware of any work focusing on its general abstract form or mathematical properties. Technically, any application with counts of independently moving individuals has used this probabilistic structure, so in that
https://arxiv.org/abs/2505.20151v1
sense the distribution is not new. Our contribution is to pay special attention to this mathematical object, giving it a name, studying its basic properties, developing methods for statistical inference and demonstrating its utility by addressing important questions in scientific fields such as ecology and sociology. One important aspect of this abstract construction is that the exact individual "position" at each time is lost, and an aggregate becomes the sole available information. This key issue is present in every application where the individual information is unavailable because it is too expensive, sensitive or even physically impossible to collect. Statistical mechanics (Gallavotti, 2013; Pathria, 2017), which studies broad-scale properties of many-particle systems, is perhaps the classical example. In the social sciences, some aggregate information about people's behavior can be available to some institutions (e.g., market and educational institutions, social media, governments) or even publicly available (e.g., vote counts in transparent democracies), but sensitive individual information is often protected. In ecology, both kinds of information are often available: animal count data from surveys (Seber, 1986) and telemetry data of animals' positions from tracking devices (Hooten et al., 2017). But tracking animals has inconveniences such as biased sampling or expensive tracking technology, while count data are very widely available, usually cheaper to collect, and can be obtained from citizen-science data collection initiatives (Sullivan et al., 2009). In this work we study the minimal mathematical properties of this distribution so that we can apply it to model important phenomena and provide fitting techniques. We aim at two application scenarios, from ecology and sociology. In the ecological scenario, we infer animal movement parameters from survey data (spatio-temporal counts) taken at irregular times in continuous space-time.
There is a substantial literature on the modeling of animal movement (Turchin, 1998; Grimm & Railsback, 2013) and of abundance (Andrewartha & Birch, 1954; Thorson & Kristensen, 2024) separately, but the two concepts have hardly ever been united in a single framework, either in practice with integrated data scenarios or in theory by studying the statistical properties arising from the conceptual connection between movement and counts. We refer to (Chandler et al., 2022; Roques et al., 2022; Potts & Börger, 2023; Buderman et al., 2025) for some recent works using, either explicitly or implicitly, the connection between movement and broad-scale counts. An important and hitherto neglected aspect is the space-time autocorrelation induced by movement. Here we exploit it to infer movement parameters that do not appear in the mean (intensity) field. We illustrate with a simulation study of steady-state Ornstein-Uhlenbeck moving individuals. We also show how the ECM distribution is useful in classification problems, where the proportions of animals following different movement behaviors must be estimated; we illustrate with an analysis of translocated lesser prairie-chickens (Tympanuchus pallidicinctus) in the U.S. (Berigan et al., 2024). In the sociological scenario, we study vote transfer between election rounds, which is an ecological inference problem (Schuessler, 1999; King et al., 2004). Here, the ECM distribution offers a different approach to the existing methodology commonly found in the literature.
We illustrate by inferring vote transfer during the 2021 Chilean presidential election. Our work is organized as follows. In Section 2 we introduce the ECM distribution with its main properties: characteristic function, one-time and two-times marginal distributions, and first- and second-order moments. In Section 3 we Poissonize the ECM distribution in order to handle cases with an unknown total number of individuals, thus describing a new distribution here called ECM-Poisson. Since likelihoods are intractable, in Section 4 we propose two ad hoc estimating functions for inference, which allow the estimation of parameters occurring in the mean and the autocorrelation of the counts. The first is a Gaussian pseudo-likelihood respecting the mean and covariance, justified by the central limit theorem and conceived for cases with a huge number of individuals. The second is a pairwise composite likelihood, conceived for cases where the number of individuals is too low for the Gaussian pseudo-likelihood to be reliable. The distributions of the pairs are related to the bivariate binomial and Poisson distributions (Kawamura, 1973). Applications are shown in Section 5. In Section 6 we discuss similarities and differences with Poisson processes and Markov models, which can be retrieved as particular cases. We conclude in Section 7 with ideas for exploring the ECM distribution further and additional potential applications. Most mathematical results are explained intuitively in the main body of this paper. Mathematical proofs, which rely mainly on characteristic functions, are given in Appendix A. Reproducible R routines are openly available at the GitHub repository https://github.com/CarrizoV/Introducing-the-ECM-distribution .

2 The ECM distribution

We consider an individual following a stochastic dynamics across evolving categories: at each time the individual belongs to a single category among many possible ones, and the categories may also change with time.
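To fix ideas before the formal construction, the aggregation step that produces ECM-type counts can be sketched in a few lines. The numbers below are hypothetical, and the dynamics is a deliberately trivial placeholder (uniform and independent across times); any common stochastic dynamics shared by all individuals would do.

```python
import random
from collections import Counter

random.seed(0)

N = 1000                 # number of individuals (hypothetical)
n_cats = [2, 3, 2]       # m_k: the number of categories may change over time

def one_individual_path():
    # Placeholder dynamics: the category at each time is drawn uniformly and
    # independently. In a real ECM model the path follows a common stochastic
    # dynamics with dependence across times.
    return [random.randrange(m) for m in n_cats]

counts = [Counter() for _ in n_cats]
for _ in range(N):
    for k, l in enumerate(one_individual_path()):
        counts[k][l] += 1

# Exhaustive categories: the counts at each time sum to N
print([sum(c.values()) for c in counts])   # → [1000, 1000, 1000]
```

Only the per-category counts are retained; the individual paths are discarded, which is exactly the information loss discussed in the Introduction.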
Let $n\in\mathbb N$ be the number of time steps¹. For each $k=1,\dots,n$ we consider $m_k\in\mathbb N$ exhaustive and exclusive categories (or classes, or boxes, or states, or bins, etc.) to which an individual may belong at time $k$. The stochastic dynamics is completely determined by the full-path probabilities:
$p^{(1,\dots,n)}_{l_1,\dots,l_n} :=$ "probability of belonging to categories $l_1,\dots,l_n$ at times $1,\dots,n$ respectively", (2.1)
for every $(l_1,\dots,l_n)\in\{1,\dots,m_1\}\times\dots\times\{1,\dots,m_n\}$. There are $\big(\prod_{k=1}^n m_k\big)-1$ free probabilities to determine, since they must sum to one due to the exhaustiveness of the categories at each time:
$\sum_{l_1=1}^{m_1}\cdots\sum_{l_n=1}^{m_n} p^{(1,\dots,n)}_{l_1,\dots,l_n}=1.$ (2.2)
From the full-path probabilities one can obtain, by marginalization, the sub-path probabilities, indicating the probability of belonging to some categories at some times, not necessarily considering all $n$ time steps. In this work we focus mainly on the one-time and two-times probabilities: $p^{(k)}_l$ denotes the probability of belonging to category $l$ at time $k$; $p^{(k,k')}_{l,l'}$ denotes the probability of belonging to category $l$ at time $k$ and to category $l'$ at time $k'$. As explained further on, these probabilities will be enough to describe important properties and to develop ad hoc fitting methods. We now consider $N$ individuals independently following this stochastic dynamics across the evolving categories. Our focus is
on the number of individuals counted in each category at each time. For each $k\in\{1,\dots,n\}$ and $l\in\{1,\dots,m_k\}$, we define the random variable
$Q^{(k)}_l :=$ "number of individuals belonging to category $l$ at time $k$". (2.3)
We assemble the variables (2.3) into a single random arrangement
$Q := \big(Q^{(k)}_l\big)_{k\in\{1,\dots,n\},\,l\in\{1,\dots,m_k\}}.$ (2.4)

Definition 2.1. Under the previously defined conditions, we say that the random arrangement $Q$ follows an evolving categories multinomial distribution (abbreviated ECM distribution).

There are many possible ways to arrange the variables in $Q$. One useful manner is to see $Q$ as a list or collection of random vectors, by writing $Q=(\vec Q^{(k)})_{k\in\{1,\dots,n\}}$, with
$\vec Q^{(k)}=(Q^{(k)}_1,\dots,Q^{(k)}_{m_k}),\quad\forall k\in\{1,\dots,n\}.$ (2.5)
Note that $Q$ is not a random matrix unless $m_1=\dots=m_n$. The arrangement $Q$ could be wrapped into a traditional random vector with $\sum_{k=1}^n m_k$ components, but for ease of presentation we maintain the indexing scheme in (2.4). We describe the distribution of an ECM random arrangement through its characteristic function.

Proposition 2.1. The characteristic function $\varphi_Q:\mathbb R^{m_1+\dots+m_n}\to\mathbb C$ of $Q$ is
$\varphi_Q(\xi)=\Big[\sum_{l_1=1}^{m_1}\cdots\sum_{l_n=1}^{m_n} p^{(1,\dots,n)}_{l_1,\dots,l_n}\, e^{i(\xi^{(1)}_{l_1}+\dots+\xi^{(n)}_{l_n})}\Big]^N,$ (2.6)
with $\xi=\big(\xi^{(k)}_l\big)_{k\in\{1,\dots,n\},\,l\in\{1,\dots,m_k\}}\in\mathbb R^{m_1+\dots+m_n}$.²

¹ The construction of this distribution is done in discrete index time. These index times need not refer to equally spaced chronological time points. What the index time refers to depends on the application scenario. For example, index times may represent moments in a continuous-time interval (possibly with irregular lags between them), long periods of time (days, years, centuries, etc.), or particular events (election rounds, selling seasons, concerts, etc.).
² For ease of exposition, we consider arrangements of the form $(q^{(k)}_l)_{k\in\{1,\dots,n\},\,l\in\{1,\dots,m_k\}}$ as members of the space $\mathbb R^{m_1+\dots+m_n}$, although technically their indexing differs from the traditional one used for vectors.
In any case, the vector space of such arrangements is isomorphic to $\mathbb R^{m_1+\dots+m_n}$.

The following Proposition 2.2 describes the one-time marginals, that is, the distribution of the random vectors $\vec Q^{(k)}$ defined in (2.5). The word multinomial in the name "ECM" comes from this result.

Proposition 2.2. Let $k\in\{1,\dots,n\}$. Then,
$\vec Q^{(k)}\sim \mathrm{Multinomial}\big(N,\,(p^{(k)}_1,\dots,p^{(k)}_{m_k})\big).$ (2.7)

This result can be seen intuitively: for a given time $k$ we assign $N$ individuals, independently and with common probabilities, to the $m_k$ categories, so $\vec Q^{(k)}$ is a multinomial random vector. The two-times conditional distribution is also related to the multinomial distribution. Let us introduce the two-times conditional probabilities for $k\neq k'$:
$p^{(k'|k)}_{l'|l}:=p^{(k,k')}_{l,l'}\big/p^{(k)}_l,\quad\forall (l,l')\in\{1,\dots,m_k\}\times\{1,\dots,m_{k'}\}.$ (2.8)

Proposition 2.3. Let $k,k'\in\{1,\dots,n\}$, $k\neq k'$. Then, the random vector $\vec Q^{(k')}$ conditioned on $\vec Q^{(k)}$ has the distribution of a sum of $m_k$ independent $m_{k'}$-dimensional multinomial random vectors:
$\vec Q^{(k')}\,\big|\,\vec Q^{(k)}\sim \sum_{l=1}^{m_k}\mathrm{Multinomial}\big(Q^{(k)}_l,\,(p^{(k'|k)}_{1|l},\dots,p^{(k'|k)}_{m_{k'}|l})\big)\quad(\text{indep. sum}).$ (2.9)

Proposition 2.3 can also be interpreted intuitively: the individuals in each category at time $k$ generate a multinomial vector when moving to the categories at time $k'$, with the corresponding conditional probabilities. Since individuals move independently, the aggregate of these independent random vectors yields the final count at time $k'$. This structure with a sum of independent multinomial random vectors is a particular case of the Poisson-multinomial distribution (Daskalakis et al., 2015; Lin et
al., 2023), which is constructed as the sum of independent multinomial random vectors of size 1 with different bin probabilities. Propositions 2.2 and 2.3 allow us to compute the first- and second-moment structures of $Q$. We use the Kronecker delta notation: $\delta_{l,l'}=1$ if $l=l'$, and $\delta_{l,l'}=0$ if $l\neq l'$.

Proposition 2.4. The mean and covariance structures of $Q$ are given respectively by
$\mathbb E\big(Q^{(k)}_l\big)=N\,p^{(k)}_l,$ (2.10)
$\mathrm{Cov}\big(Q^{(k)}_l,Q^{(k')}_{l'}\big)=\begin{cases} N\big(p^{(k,k')}_{l,l'}-p^{(k)}_l\,p^{(k')}_{l'}\big) & \text{if } k\neq k',\\[2pt] N\big(\delta_{l,l'}\,p^{(k)}_l-p^{(k)}_l\,p^{(k')}_{l'}\big) & \text{if } k=k',\end{cases}$ (2.11)
for every $k,k'\in\{1,\dots,n\}$ and $(l,l')\in\{1,\dots,m_k\}\times\{1,\dots,m_{k'}\}$.

3 Unknown $N$: the ECM-Poisson distribution

In many applications, the total number of individuals $N$ will be unknown, and methods are then needed to use the ECM distribution in such scenarios. Here we explore the option of assuming that $N$ is random and follows a Poisson distribution. This provides a connection with Poisson count-type models. Let $\lambda>0$ be a parameter (the size rate) and let $N\sim\mathrm{Poisson}(\lambda)$. We use the same notation and indexing as in Section 2 for the number of time steps, number of categories, and path probabilities. Let $Q$ be a random arrangement whose distribution, conditioned on $N$, is an ECM distribution with $N$ individuals. Then $Q$ follows a new distribution which we call the ECM-Poisson distribution.

Proposition 3.1. The characteristic function $\varphi_Q$ of an ECM-Poisson random arrangement $Q$ is
$\varphi_Q(\xi)=\exp\Big\{\lambda\Big(\sum_{l_1=1}^{m_1}\cdots\sum_{l_n=1}^{m_n} p^{(1,\dots,n)}_{l_1,\dots,l_n}\, e^{i(\xi^{(1)}_{l_1}+\dots+\xi^{(n)}_{l_n})}-1\Big)\Big\},$ (3.1)
for every $\xi=\big(\xi^{(k)}_l\big)_{k\in\{1,\dots,n\},\,l\in\{1,\dots,m_k\}}\in\mathbb R^{m_1+\dots+m_n}$.

The following result for the one-time marginal distribution derives immediately from the connection between the Poisson and multinomial distributions. We use the tensor product notation: if $\vec X=(X_1,\dots,X_{\tilde n})$ is a random vector, we write $\vec X\sim\bigotimes_{j=1}^{\tilde n}\mathcal L_j$ to say that it consists of independent components, each $X_j$ following the distribution $\mathcal L_j$.

Proposition 3.2. Let $k\in\{1,\dots,n\}$. Then, $\vec Q^{(k)}$ has independent Poisson components:
$\vec Q^{(k)}\sim\bigotimes_{l=1}^{m_k}\mathrm{Poisson}\big(\lambda\,p^{(k)}_l\big).$
(3.2)
The ECM-Poisson case thus contrasts with the ECM case, where $\vec Q^{(k)}$ has correlated binomial components. For the two-times distribution, there is a particular subtlety to take into account. Since the categories are exhaustive at each time, knowing $\vec Q^{(k)}$ implies that we know the value of $N=\sum_{l=1}^{m_k}Q^{(k)}_l$. Therefore, the distribution of $\vec Q^{(k')}$ given $\vec Q^{(k)}$ for $k'\neq k$ is simply the same as in Proposition 2.3. However, it is more enlightening to study the case where only some components of $\vec Q^{(k)}$ are known, implying that we do not know $N$. For every $k$, denote $\tilde m_k=m_k-1$ (assuming $m_k\ge 2$) and let $\vec{\tilde Q}^{(k)}=(Q^{(k)}_1,\dots,Q^{(k)}_{\tilde m_k})$ be the sub-vector of $\vec Q^{(k)}$ restricted to the categories up to $\tilde m_k$. Suppose that we know $\vec{\tilde Q}^{(k)}$ for a given $k$, and take $k'\neq k$. Then, each quantity $Q^{(k)}_l$ is redistributed among the $m_{k'}$ categories at time $k'$ as in Proposition 2.3, producing a sum of independent multinomial contributions to the vector $\vec Q^{(k')}$. In addition, the unknown quantity $Q^{(k)}_{m_k}$ generates independent Poisson counts, similarly as in Proposition 3.2, which are added to $\vec{\tilde Q}^{(k')}$. This intuition is precisely stated in Proposition 3.3.

Proposition 3.3. Let $k,k'\in\{1,\dots,n\}$, $k\neq k'$. Then, $\vec Q^{(k')}$ conditioned on $\vec{\tilde Q}^{(k)}$ has the distribution of a sum of $\tilde m_k$ independent $m_{k'}$-dimensional multinomial random vectors plus an independent random vector with $m_{k'}$ independent Poisson components:
$\vec Q^{(k')}\,\big|\,\vec{\tilde Q}^{(k)}\sim \bigotimes_{l'=1}^{m_{k'}}\mathrm{Poisson}\big(\lambda\,p^{(k,k')}_{m_k,l'}\big)\ +\ \sum_{l=1}^{\tilde m_k}\mathrm{Multinomial}\big(Q^{(k)}_l,\,(p^{(k'|k)}_{1|l},$
$\dots,p^{(k'|k)}_{m_{k'}|l})\big)\quad(\text{indep. sums}).$ (3.3)

Similar distributions have been worked out in the context of evolving Poisson point processes of moving particles. See (Roques et al., 2022, Section 4), where a future point process conditioned on a past Poisson process is described as the union of an inhomogeneous Poisson process and independent thinned binomial point processes. Proposition 3.3 gives the resulting distribution of the counts of such processes over different spatial areas (see Section 5.1). The second-order moments of an ECM-Poisson random arrangement are simpler than in the ECM case.

Proposition 3.4. The mean and covariance structures of $Q$ are given respectively by
$\mathbb E\big(Q^{(k)}_l\big)=\lambda\,p^{(k)}_l,$ (3.4)
$\mathrm{Cov}\big(Q^{(k)}_l,Q^{(k')}_{l'}\big)=\begin{cases}\lambda\,p^{(k)}_l\,\delta_{l,l'} & \text{if } k=k',\\[2pt] \lambda\,p^{(k,k')}_{l,l'} & \text{if } k\neq k',\end{cases}$ (3.5)
for every $k,k'\in\{1,\dots,n\}$ and $(l,l')\in\{1,\dots,m_k\}\times\{1,\dots,m_{k'}\}$.

4 Inference methods

Consider the problem of computing the probability $\mathbb P(Q=q)$ for an ECM random arrangement $Q$ and a consistent arrangement $q$. For the case $n=2$, one could write
$\mathbb P(Q=q)=\mathbb P\big(\vec Q^{(2)}=\vec q^{(2)}\,\big|\,\vec Q^{(1)}=\vec q^{(1)}\big)\,\mathbb P\big(\vec Q^{(1)}=\vec q^{(1)}\big),$ (4.1)
and use Propositions 2.2 and 2.3. $\mathbb P(\vec Q^{(1)}=\vec q^{(1)})$ is a simple multinomial probability, and $\mathbb P(\vec Q^{(2)}=\vec q^{(2)}\,|\,\vec Q^{(1)}=\vec q^{(1)})$ is a Poisson-multinomial probability which can be obtained as a convolution of multinomial probabilities. Under this approach, one must convolve $m_1$ arrays of dimension $m_2$, the $l$-th array having values ranging from $0$ to $q^{(1)}_l$. For $m_1$ and $m_2$ of the order of $10^2$ (which is common in ecological applications with a domain split into many subdomains), this convolution becomes intractable both in computation time and memory use, getting worse when the values in $q$ are high (likely when $N$ is high). A computing method using fast Fourier transforms of the characteristic function is explored in (Lin et al., 2023, Section 4.4). That study only goes up to $m_2\le 5$, since the algorithm computes arrays of dimension $(N+1)^{m_2-1}$ (in their setting $N=m_1$), becoming impractical even for small values of $m_2$.
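For very small $m_1$ and $m_2$ the convolution described above is perfectly feasible. The sketch below (toy counts and transition probabilities of our own choosing) computes the exact conditional pmf of the count of category 1 at time 2 given the counts at time 1, in the special case $m_2=2$, by convolving binomial pmfs as in Proposition 2.3.

```python
import math

def binom_pmf(k, n, p):
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def conditional_pmf(q1, trans):
    """Exact conditional pmf of Q^(2)_1 given Q^(1) = q1, for m_2 = 2 categories.
    Each of the q1[l] individuals in category l at time 1 moves to category 1
    at time 2 with probability trans[l]; the per-category binomial counts
    convolve across l (Proposition 2.3)."""
    pmf = {0: 1.0}
    for n_l, p_l in zip(q1, trans):
        new = {}
        for s, ps in pmf.items():
            for k in range(n_l + 1):
                new[s + k] = new.get(s + k, 0.0) + ps * binom_pmf(k, n_l, p_l)
        pmf = new
    return pmf

# Toy example: 3 individuals in category 1 (move with prob. 0.5) and 2 in
# category 2 (move with prob. 0.2)
pmf = conditional_pmf([3, 2], [0.5, 0.2])
```

The dictionary size grows with the total count, which illustrates why this direct convolution, harmless here, becomes intractable at the scales discussed above.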
Besides the convolutional formulation, no closed-form expression for the Poisson-multinomial part is known. The case $n\ge 3$ is even more intricate, since the Poisson-multinomial structure is no longer available and the cost of Fourier transforms of the characteristic function (2.6) is more sensitive to $m_1$, $m_2$, $N$ than when $n=2$. The use of the full-path probabilities (2.1) requires storing $\prod_{k=1}^n m_k$ values, which in certain scenarios are not given but must be computed (see Section 5.1). The explicit ECM and ECM-Poisson likelihoods are thus considered currently intractable, and fitting methods requiring likelihood evaluation cannot be applied. For this reason, we explore two suitable estimating functions for inference: a Gaussian pseudo-likelihood and a pairwise composite likelihood. Only the one-time and two-times probabilities are required for these techniques. These methods are immediately applicable from our developments and can be used to infer parameters occurring in the autocorrelation structure of the counts. The basic parameters of the ECM distribution are the full-path probabilities (2.1). Since there are $\big(\prod_{k=1}^n m_k\big)-1$ of them but only $\sum_{k=1}^n(m_k-1)$ informative data points, it is impossible to obtain precise estimates from a single realization. One option, which we exploit in Section 5.2, is to consider a sufficient number of independent replicates. The other option, explored in Section 5.1 and intended for the
single-realization case, is to express the path probabilities as functions of a much smaller set of parameters for which inference is possible. The methods presented here help in both scenarios.

4.1 Maximum Gaussian Likelihood Estimation (MGLE)

If $Q^{(N)}$ is an ECM random arrangement of $N$ individuals, it can be expressed as a sum of $N$ iid ECM random arrangements with $1$ individual. Indeed, the characteristic function of $Q^{(N)}$ in Proposition 2.1 is the $N$-th power of a given characteristic function. This allows us to invoke a multivariate central limit theorem and claim that $Q^{(N)}$ is asymptotically Gaussian as $N\to\infty$. Precisely, let $m_{Q^{(1)}}$ and $\Sigma_{Q^{(1)}}$ be respectively the mean arrangement and the covariance tensor of an ECM arrangement with $1$ individual. Then, as $N\to\infty$,
$\dfrac{Q^{(N)}-N\,m_{Q^{(1)}}}{\sqrt N}\ \xrightarrow{\ \mathcal L\ }\ \mathcal N\big(0,\Sigma_{Q^{(1)}}\big),$ (4.2)
where $\xrightarrow{\ \mathcal L\ }$ denotes convergence in distribution. The idea is simply to replace the original likelihood with a Gaussian one with identical mean and covariance (Proposition 2.4) when $N$ is large. For the ECM-Poisson case, the classical Gaussian approximation of the Poisson distribution for large rates can also be applied for large $\lambda$, replacing $N$ with $\lambda$ in formula (4.2) and using the corresponding mean and covariance given in Proposition 3.4 (with $\lambda=1$). Replacing the exact likelihood with a Gaussian one through central limit theorems is common practice in statistics; see (Daskalakis et al., 2015; Lin et al., 2023) for the Poisson-multinomial distribution. Here, this method requires the one-time and two-times probabilities. If they are not available, they must be computed and then used to construct the mean vector and the covariance matrix. The computational cost of the Gaussian likelihood then depends on the number of counts in $Q$, as is typical for Gaussian vectors.

4.2 Maximum Composite Likelihood Estimation (MCLE)

Composite likelihoods are estimating functions constructed from likelihoods of subsets of the variables.
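The MGLE idea of Section 4.1 can be sketched in the simplest possible case: a single count $Q\sim\mathrm{Binomial}(N,p)$, whose exact likelihood is replaced by a Gaussian with matching mean $Np$ and variance $Np(1-p)$, then maximized over $p$ by grid search. The numbers are toy values of our own choosing, not from the paper.

```python
import math
import random

random.seed(3)

# Toy data: a single binomial count with a known true parameter
N, p_true = 5000, 0.3
q = sum(1 for _ in range(N) if random.random() < p_true)

def gauss_loglik(p):
    # Gaussian pseudo-log-likelihood: mean and variance matched to the
    # binomial model, as MGLE matches mean and covariance for the ECM
    mu, var = N * p, N * p * (1 - p)
    return -0.5 * math.log(2 * math.pi * var) - (q - mu) ** 2 / (2 * var)

grid = [i / 1000 for i in range(1, 1000)]
p_hat = max(grid, key=gauss_loglik)
```

With $N$ this large the Gaussian approximation is accurate and `p_hat` lands very close to the true value, which is the regime MGLE is designed for; the full ECM version only differs in that the mean vector and covariance matrix come from Propositions 2.4 or 3.4.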
They are used for parameter estimation in models whose likelihood is too costly to evaluate (Lindsay, 1988; Varin et al., 2011). MCLE provides less efficient estimates than proper maximum likelihood estimation, but ones for which an asymptotic theory is well developed. In particular, consistency and asymptotic Gaussianity (given appropriate conditions) are common properties of these estimates (Godambe, 1960; Kent, 1982; Cox & Reid, 2004). The most commonly used composite likelihood is the pairwise composite likelihood (Varin & Vidoni, 2006; Caragea & Smith, 2007; Padoan et al., 2010; Bevilacqua & Gaetan, 2015). Let $(X_1,\dots,X_{\tilde n})$ be a random vector with $\tilde n$ components whose distribution depends on a parameter $\theta$. The estimating function is given by the product of the likelihoods of the pairs:
$cL_{X_1,\dots,X_{\tilde n}}(\theta)=\prod_{j=1}^{\tilde n-1}\prod_{k=j+1}^{\tilde n} L_{X_j,X_k}(\theta).$ (4.3)
This function is rich enough to provide estimates for parameters occurring in the bivariate marginals, particularly those in the covariance structure. This is the approach we explore here. Consider an ECM or ECM-Poisson random arrangement $Q$. We express the pairwise composite log-likelihood of $Q$ for a given parameter $\theta$ as
$c\ell_Q(\theta)=\sum_{k=1}^{n}\sum_{l=1}^{m_k-1}\sum_{l'=l+1}^{m_k}\ell_{Q^{(k)}_l,Q^{(k)}_{l'}}(\theta)\ +\ \sum_{k=1}^{n-1}\sum_{k'=k+1}^{n}\sum_{l=1}^{m_k}\sum_{l'=1}^{m_{k'}}\ell_{Q^{(k)}_l,Q^{(k')}_{l'}}(\theta).$ (4.4)
The triple sum in (4.4) includes the pairs at the same time, while the quadruple sum includes the pairs at different times. The results in Sections 2
and 3 allow us to obtain the log-likelihoods of the pairs $\ell_{Q^{(k)}_l,Q^{(k')}_{l'}}(\theta)$. For the ECM case, the pairs at the same time $k=k'$ are sub-vectors of a multinomial vector, so their distribution is simple. When $k\neq k'$, a special structure appears in $(Q^{(k)}_l,Q^{(k')}_{l'})$. In the case with just $1$ individual, it is a random binary vector following a bivariate Bernoulli distribution (Kawamura, 1973, Section 2.1). For $N$ individuals, $(Q^{(k)}_l,Q^{(k')}_{l'})$ is the sum of $N$ independent bivariate Bernoulli vectors, yielding a bivariate binomial vector (Kawamura, 1973, Section 2.2).

Proposition 4.1. Let $Q$ be an ECM random arrangement. Then, for $k\neq k'$, $(Q^{(k)}_l,Q^{(k')}_{l'})$ follows a bivariate binomial distribution of size $N$ with joint success probability $p^{(k,k')}_{l,l'}$ and respective marginal success probabilities $p^{(k)}_l$ and $p^{(k')}_{l'}$.

For the ECM-Poisson case with $k=k'$, $(Q^{(k)}_l,Q^{(k)}_{l'})$ has independent Poisson components. For $k\neq k'$, each component can be written as the sum of two independent Poisson variables: $Q^{(k)}_l$ is the count of those belonging to $l$ at $k$ and to $l'$ at $k'$, plus those belonging to $l$ at $k$ and not to $l'$ at $k'$ ($Q^{(k')}_{l'}$ is analogous). The count of those belonging to $l$ at $k$ and to $l'$ at $k'$ occurs in both sums. The resulting law is known as the bivariate Poisson distribution (Kawamura, 1973, Section 3).

Proposition 4.2. Let $Q$ be an ECM-Poisson random arrangement. Then, for $k\neq k'$, $(Q^{(k)}_l,Q^{(k')}_{l'})$ follows a bivariate Poisson distribution with joint rate $\lambda\,p^{(k,k')}_{l,l'}$ and respective marginal rates $\lambda\,p^{(k)}_l$ and $\lambda\,p^{(k')}_{l'}$.

In Appendices A.9 and A.10 we give more details on the bivariate binomial and Poisson distributions, including formulae to compute the pair likelihoods ((A.34), (A.38)). For MCLE, the one-time and two-times probabilities must be known, as in MGLE. The bivariate binomial and Poisson distributions require extra computations of sums whose lengths increase with the values in $Q$, getting larger for large $N$ or $\lambda$.
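The pair likelihoods of Proposition 4.2 are cheap to evaluate directly. Below is a minimal sketch (with toy rates of our own choosing) of the bivariate Poisson pmf, using the standard representation $(X,Y)=(U+W,\,V+W)$ with $U$, $V$, $W$ independent Poisson; the sum over the shared component $W$ is the extra computation mentioned above.

```python
import math

def pois_pmf(k, mu):
    return math.exp(-mu) * mu**k / math.factorial(k)

def bivariate_poisson_pmf(x, y, lam1, lam2, lam12):
    """pmf of (X, Y) = (U + W, V + W) with U, V, W independent Poisson with
    rates lam1 - lam12, lam2 - lam12 and lam12, so that the marginal rates are
    lam1 and lam2 and the joint rate is lam12 (as in Proposition 4.2)."""
    return sum(
        pois_pmf(w, lam12)
        * pois_pmf(x - w, lam1 - lam12)
        * pois_pmf(y - w, lam2 - lam12)
        for w in range(min(x, y) + 1)
    )

# Toy rates standing in for lambda*p^(k)_l, lambda*p^(k')_l', lambda*p^(k,k')_{l,l'}
p = bivariate_poisson_pmf(2, 1, 1.2, 0.8, 0.5)
```

Summing this pmf over one argument recovers the corresponding Poisson marginal, which is a convenient correctness check when assembling the pairwise composite log-likelihood (4.4).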
MCLE is thus more costly than MGLE but, as shown in Section 5, it can provide better estimates when $N$ or $\lambda$ is too low for the Gaussian pseudo-likelihood to be reliable.

5 Applications

We propose two standalone application scenarios with count data: one for inferring movement parameters in ecology, and the other for inferring vote transfer in sociology. Readers interested in one field are not required to read the application in the other field. The models in our illustrations are rather simplistic proofs of concept, and their specific assumptions may not be entirely biologically or sociologically realistic. The conclusions should not be seen as definitive.

5.1 Counting moving individuals

An underlying continuous space-time movement of individuals induces particular statistical properties on space-time survey counts. Here we show how to use the ECM distribution to analyze those properties and to learn about movement parameters from aggregated count data. Let $t_0$ be an initial time. Consider $N$ individuals moving continuously and randomly over $\mathbb R^d$. We denote by $X_j=(X_j(t))_{t\ge t_0}$ the $\mathbb R^d$-valued stochastic process describing the trajectory of individual $j$. We define the abundance random measure as the space-time (generalized) random field identified with the family of random variables $\Phi=(\Phi_t(A))_{A\in\mathcal B(\mathbb R^d),\,t\ge t_0}$ representing the population abundance:
$\Phi_t(A):=$ "number of individuals in region $A$ at time $t$". (5.1)
Note that $A$ represents a (Borel) survey region $A\subset\mathbb R^d$, not a single point of $\mathbb R^d$. We use the term random measure since for every $t$ the map $A\mapsto\Phi_t(A)$ is a
random measure (Kallenberg, 2017), more particularly a spatial point process (Daley & Vere-Jones, 2006). Let us assume the $N$ individuals follow iid trajectories, that is, the processes $(X_j)_j$ are iid with the distribution of a reference process $X=(X(t))_{t\ge t_0}$. Let $t_1,\dots,t_n\ge t_0$ be survey time points (possibly irregularly sampled in continuous time). For each time $t_k$, let $A^{(k)}_1,\dots,A^{(k)}_{m_k}$ be a collection of $m_k\ge 1$ survey regions forming a partition of $\mathbb R^d$. The categories are simply given by
individual $j$ belongs to category $l$ at time $k$ $\iff$ individual $j$ is in $A^{(k)}_l$ at time $t_k$, (5.2)
which are exhaustive and exclusive since the survey regions form a partition. We define the random variables
$Q^{(k)}_l:=\Phi_{t_k}\big(A^{(k)}_l\big),\quad l\in\{1,\dots,m_k\},\ k\in\{1,\dots,n\}.$ (5.3)
The arrangement $Q=(Q^{(k)}_l)_{k,l}$ contains the aggregated counts of $N$ individuals following iid movement dynamics across evolving categories: $Q$ follows the ECM distribution. The path probabilities are determined by the distribution of $X$:
$p^{(1,\dots,n)}_{l_1,\dots,l_n}=\mathbb P\big(X(t_1)\in A^{(1)}_{l_1},\dots,X(t_n)\in A^{(n)}_{l_n}\big).$ (5.4)
The sub-path probabilities can be obtained by marginalization, replacing some of the sets $A^{(k)}_{l_k}$ in (5.4) by $\mathbb R^d$. In particular, the one-time probabilities are determined by the one-time marginal distributions of the process $X$, and the two-times probabilities by the bivariate two-times marginals. That the path probabilities are completely determined by the distribution of $X$ has two consequences. First, the parameters governing the distribution of $Q$ are now the same as those in the distribution of $X$, which are likely much fewer in number than the total amount of path probabilities in a general ECM case. This may allow inference even in the single-realization scenario. Second, we can analyze a variety of trajectory models with relative freedom.
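When no closed form or library routine is at hand, the path probabilities (5.4) can always be approximated by Monte Carlo from simulated trajectories of $X$. A one-dimensional sketch with standard Brownian motion (our own toy choice of process and survey region, not from the paper):

```python
import math
import random

random.seed(1)

def brownian_path(times):
    # One standard Brownian motion path sampled at the given increasing times
    x, t_prev, path = 0.0, 0.0, []
    for t in times:
        x += math.sqrt(t - t_prev) * random.gauss(0.0, 1.0)
        path.append(x)
        t_prev = t
    return path

# Monte Carlo estimates of the one-time probability P(X(1) <= 0) and the
# two-times path probability P(X(1) <= 0, X(2) <= 0), survey region (-inf, 0]
n_paths = 20000
hits1 = hits12 = 0
for _ in range(n_paths):
    x1, x2 = brownian_path([1.0, 2.0])
    if x1 <= 0.0:
        hits1 += 1
        if x2 <= 0.0:
            hits12 += 1
p1, p12 = hits1 / n_paths, hits12 / n_paths
# Exact values for Brownian motion: p1 = 1/2 and, by the Gaussian orthant
# formula with correlation 1/sqrt(2), p12 = 1/4 + arcsin(1/sqrt(2))/(2*pi) = 3/8
```

For Gaussian $X$ and rectangular regions the dedicated routines cited above are far more accurate, but the Monte Carlo route applies to any simulable trajectory model, Markov or not.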
For instance, if $X$ is any Gaussian process and the sets $A^{(k)}_l$ are rectangles, many available libraries in various programming languages allow computation of the path probabilities (Genz & Bretz, 2009; Botev, 2017; Cao et al., 2022). Note also that, in principle, no Markov assumption is required on the process $X$, in contrast to the common case of interacting particle systems in statistical mechanics (Martin-Löf, 1976; Liggett, 1985).

The case where $N$ is unknown provides a particular connection with Poisson point processes. If $N \sim \mathrm{Poisson}(\lambda)$ and if we define $\Phi$ considering $N$ independent moving individuals as above, then $Q$ is ECM-Poisson. At each time $t_k$, the counts $(\Phi_{t_k}(A^{(k)}_1), \ldots, \Phi_{t_k}(A^{(k)}_{m_k}))$ are independent Poisson variables with respective rates $\lambda\,\mathbb{P}(X(t_k) \in A^{(k)}_l)$ (Proposition 3.2), and this structure holds for any possible choice of disjoint survey regions $(A^{(k)}_l)_l$. In other words, for every $t$, $\Phi_t$ is an inhomogeneous spatial Poisson process with intensity $A \mapsto \lambda\,\mathbb{P}(X(t) \in A)$. Thus, $Q$ contains the counts of an evolution of inhomogeneous Poisson processes, with a particular temporal dependence induced by the movement, involving for instance the bivariate Poisson distribution.

5.1.1 Illustration 1: simulation studies on steady-state Ornstein-Uhlenbeck trajectories

To show that movement parameters can be inferred from count data, we qualitatively investigate both the non-asymptotic and asymptotic properties of the estimates (as the number of individuals grows) using a simulation study with individuals moving according to a steady-state Ornstein-Uhlenbeck (OU) process, which is a widely
used trajectory model (Uhlenbeck & Ornstein, 1930; Gardiner, 2009, Section 3.8). One parameter (here $\sigma$) occurs only in the covariance of the movement, and thus in the space-time autocorrelation of the counts, but not in the mean. It is therefore impossible to estimate it using only the intensity information. For instance, an inhomogeneous space-time Poisson process model for the counts cannot be used to estimate it. Here we show that, with our approach, even such parameters can be estimated from the count data alone.

A 2-dimensional (isotropic) steady-state OU process is a Gaussian process $X = (X(t))_{t \ge t_0}$ with values in $\mathbb{R}^2$, say $X(t) = (X_1(t), X_2(t))$, such that
$$\mathbb{E}(X(t)) = z; \quad \mathrm{Cov}(X_j(t), X_k(s)) = \delta_{j,k}\,\tau^2 e^{-\frac{1}{2}\frac{\sigma^2}{\tau^2}|t-s|}, \quad \forall t, s \ge t_0, \tag{5.5}$$
where $z = (z_1, z_2) \in \mathbb{R}^2$ and $\sigma, \tau > 0$. In other words, $X$ is a stationary Gaussian process with constant mean and exponential covariance function, as widely used in time-series modeling (Allen, 2010; Hamilton, 2020). OU processes have been used in ecology to model animal movement around an activity center (Fleming et al., 2014; Eisaguirre et al., 2021; Gurarie et al., 2017). The OU parameters can be interpreted biologically: since $X_j(t) \sim N(z_j, \tau^2)$, $z$ is the activity center and $\tau$ controls the extent of the home range (Börger et al., 2008; Powell & Mitchell, 2012); $\sigma$ controls the instantaneous quadratic variation of the process, which can be interpreted as how "fast" an individual moves:
$$\lim_{\Delta t \to 0^+} \frac{\mathrm{Var}\big(X(t + \Delta t) - X(t)\big)}{\Delta t} = \sigma^2. \tag{5.6}$$
In Section 5.1.2 we shall also use the non-steady-state version of an OU process. The simplest manner of defining such a process is as a steady-state OU process conditioned on $X(t_0) = x_0$ for an initial time $t_0$ and an initial position $x_0 \in \mathbb{R}^2$. The process then describes an individual starting at $x_0$ and traveling towards $z$ in a roughly direct manner (exponentially fast on average), to end up in an approximately steady-state OU process around $z$ (convergence to the steady state as $t \to \infty$).
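Since the steady-state OU process is a stationary Gaussian process with exponential covariance, it can be simulated exactly on an irregular time grid through its AR(1) transition. The following sketch (our own, not the paper's code) uses the parameter values of this section and checks the stationary moments empirically:

```python
# Sketch: exact simulation of the 2-d steady-state OU process (5.5) on an
# irregular time grid, via the AR(1) property of a stationary Gaussian process.
import numpy as np

rng = np.random.default_rng(1)

tau, sigma = 0.4, 0.4 * np.sqrt(0.002)    # values used in Section 5.1.1
theta = 0.5 * sigma**2 / tau**2           # mean-reversion rate theta = sigma^2 / (2 tau^2)
z = np.array([-0.2, 0.1])                 # activity center

times = np.sort(rng.uniform(0.0, 10.0, size=10))   # irregular survey times
N = 20000                                 # many individuals, to check moments

X = z + tau * rng.standard_normal((N, 2))          # stationary start: X(t_1) ~ N(z, tau^2 I)
snapshots = [X]
for dt in np.diff(times):
    rho = np.exp(-theta * dt)             # exact transition correlation over the gap dt
    X = z + rho * (X - z) + tau * np.sqrt(1.0 - rho**2) * rng.standard_normal((N, 2))
    snapshots.append(X)
snapshots = np.stack(snapshots)           # shape (n, N, 2): positions X_j(t_k)

# Stationarity check: every marginal should be close to N(z, tau^2 I).
print(snapshots.mean(axis=1)[0])          # approximately z
print(snapshots[:, :, 0].var(axis=1))     # approximately tau^2 = 0.16 at every time
```

Passing the simulated positions through a survey-region partition, as in the previous sketch, yields the count fields used in the simulation study.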
Another common, equivalent way of defining an OU process is through a stochastic differential equation with a different parametrization, setting $\theta = \frac{1}{2}\frac{\sigma^2}{\tau^2}$ and focusing on $(\sigma, \theta)$ as parameters.

We consider a toy space-time setting in 2D. We set $t_0 = 0$ and consider $n = 10$ count times sampled uniformly at random on the interval $[0, 10]$. At each time, a random number (between 10 and 50) of disjoint squares of side length $\Delta x = 0.1$, randomly placed in the domain $[-1, 1]^2$, are used as survey areas; see Figure 6 in Appendix B. We simulate steady-state OU moving individuals with parameters $\tau = 0.4$, $\sigma = 0.4\sqrt{0.002} \approx 0.0179$ (that is, $\theta = 0.001$) and $z = (-0.2, 0.1)$, and we obtain the count fields. We simulate 1000 data sets per setting and estimate the parameters using MGLE and MCLE. We study the asymptotics of the estimates as the number of individuals grows, with $N = 10^2, 10^3, 10^4$ for ECM and $\lambda = 10^2, 10^3, 10^4$ for ECM-Poisson. The estimations are done on the log scale for $\tau$, $\sigma$ and $\lambda$.

Table 1 contains a bias analysis of the results. Figures 1 and 2 contain box plots of the estimates of the movement parameters and of the size rates (in the ECM-Poisson case), respectively. Correlograms of the estimates are presented in Appendix B. Consistency is empirically supported: both MGLE and MCLE provide estimates which get closer to the true values as $N$ or $\lambda$ grows. This includes the estimate of $\lambda$ itself in the ECM-Poisson case and the autocorrelation parameter $\sigma$,
which seems to be the most difficult to estimate in terms of variance reduction when adding extra information. In general, MCLE has less bias and variance than MGLE, especially for $\tau$ and $\lambda$, where the former is under- and the latter overestimated. MCLE shows little correlation among estimates. The Gaussian approximation in MGLE seems too crude for lower sizes, especially in the ECM-Poisson case, for which the estimation was highly erratic; see further comments in Appendix B. For $N, \lambda \ge 10^3$ both methods provide reasonable estimates of the movement parameters. For $N, \lambda = 10^4$ both are essentially unbiased and show similar variability, although with some undesired behavior for MCLE in the ECM-Poisson case, presumably due to a multi-modal estimating function. While in general knowledge of $N$ helps provide better estimates, for $N, \lambda \ge 10^3$ the bias and variance are not substantially smaller when $N$ is known: as long as $N$ is large enough, precise knowledge of $N$ is not required for reasonable estimation.

In summary: consistent estimation of movement parameters, including $\sigma$, is possible given a suitable space-time count setting and enough individuals; MCLE performs better than MGLE in terms of bias and variability, but for a large number of individuals MGLE provides adequate estimates with a less erratic and less computationally expensive estimating function.

FIGURE 1: Estimates of steady-state OU parameters $(\tau, \sigma, z_1, z_2)$ in the simulation study, with variable number of individuals $N$ and size rate $\lambda$ for ECM and ECM-Poisson (ECM-P) simulations, respectively. Red box plots correspond to MGLE, blue ones to MCLE. True parameter value indicated with a dotted gold line. Sample size 1000 in every setting (except MGLE ECM-P $\lambda = 10^2$, see Appendix B).
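As a rough illustration of the Gaussian pseudo-likelihood idea behind MGLE (a sketch of ours, not the authors' implementation), the multinomial moments of Proposition 2.4, mean $Np$ and covariance $N(\mathrm{diag}(p) - pp^\top)$, can be plugged into a Gaussian log-density for a single count vector, dropping one category so that the covariance is nonsingular:

```python
# Sketch: Gaussian pseudo-log-likelihood (MGLE-style) for one multinomial count
# vector. The last category is dropped so that the covariance is invertible.
# Illustrative only; the paper's MGLE uses the full ECM mean and covariance.
import numpy as np

def gaussian_pseudo_loglik(q, p, N):
    """q: observed counts (m,); p: category probabilities (m,); N: total count."""
    q, p = q[:-1].astype(float), p[:-1]            # drop the last category
    mean = N * p
    cov = N * (np.diag(p) - np.outer(p, p))        # multinomial covariance
    resid = q - mean
    sign, logdet = np.linalg.slogdet(cov)
    return -0.5 * (logdet + resid @ np.linalg.solve(cov, resid)
                   + len(q) * np.log(2.0 * np.pi))

p_true = np.array([0.5, 0.3, 0.2])
rng = np.random.default_rng(2)
q = rng.multinomial(1000, p_true)

# The pseudo-likelihood clearly prefers the true probabilities over a wrong guess.
print(gaussian_pseudo_loglik(q, p_true, 1000) >
      gaussian_pseudo_loglik(q, np.array([0.2, 0.3, 0.5]), 1000))   # True
```

Maximizing such an objective over the model parameters (here, over $p$, which in the movement setting is a function of $\tau, \sigma, z$) is the MGLE-type estimation discussed above.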
TABLE 1: Results from the simulation study quantifying bias in MGLE and MCLE of parameters of ECM and ECM-Poisson (ECM-P) counts induced by a steady-state OU movement process. Each cell reports the average of the estimates, in parentheses the difference between this average and the theoretical value, and after the slash the square root of the mean squared error. Sample size is 1000 in every setting (except for MGLE ECM-P $\lambda = 10^2$, marked *, see Appendix B).

| Distribution | Method | Size | $\log\hat\tau$ ($\log\tau_{\mathrm{theo}} \approx -0.916$) | $\log\hat\sigma$ ($\log\sigma_{\mathrm{theo}} \approx -4.024$) | $\hat z_1$ ($z_{1,\mathrm{theo}} = -0.2$) | $\hat z_2$ ($z_{2,\mathrm{theo}} = 0.1$) | $\log\hat\lambda$ |
|---|---|---|---|---|---|---|---|
| ECM | MGLE | $N = 10^2$ | −1.053 (−0.137) / 0.196 | −4.010 (0.013) / 0.617 | −0.210 (−0.010) / 0.100 | 0.099 (−0.001) / 0.099 | — |
| ECM | MCLE | $N = 10^2$ | −0.934 (−0.018) / 0.092 | −4.063 (−0.039) / 0.352 | −0.198 (0.002) / 0.065 | 0.098 (−0.002) / 0.063 | — |
| ECM-P | MGLE* | $\lambda_{\mathrm{theo}} = 10^2$ ($\log\lambda_{\mathrm{theo}} \approx 4.605$) | −1.315 (−0.399) / 0.409 | −4.488 (−0.464) / 1.184 | −0.226 (−0.026) / 0.102 | 0.125 (0.025) / 0.110 | 6.126 (1.521) / 1.523 |
| ECM-P | MCLE | $\lambda_{\mathrm{theo}} = 10^2$ | −0.930 (−0.014) / 0.093 | −4.075 (0.009) / 0.406 | −0.201 (−0.001) / 0.066 | 0.098 (−0.002) / 0.067 | 4.599 (−0.006) / 0.151 |
| ECM | MGLE | $N = 10^3$ | −0.933 (−0.017) / 0.046 | −4.043 (−0.020) / 0.259 | −0.203 (−0.003) / 0.025 | 0.098 (−0.002) / 0.027 | — |
| ECM | MCLE | $N = 10^3$ | −0.918 (−0.002) / 0.028 | −4.035 (−0.011) / 0.203 | −0.201 (−0.001) / 0.020 | 0.099 (−0.001) / 0.021 | — |
| ECM-P | MGLE | $\lambda_{\mathrm{theo}} = 10^3$ ($\log\lambda_{\mathrm{theo}} \approx 6.908$) | −0.949 (−0.032) / 0.056 | −4.194 (−0.171) / 0.344 | −0.205 (−0.005) / 0.028 | 0.107 (0.007) / 0.032 | 7.169 (0.262) / 0.264 |
| ECM-P | MCLE | $\lambda_{\mathrm{theo}} = 10^3$ | −0.918 (−0.001) / 0.028 | −4.033 (−0.010) / 0.213 | −0.200 (0.000) / 0.020 | 0.100 (0.000) / 0.021 | 6.906 (−0.001) / 0.046 |
| ECM | MGLE | $N = 10^4$ | −0.917 (−0.001) / 0.010 | −4.044 (−0.020) / 0.175 | −0.200 (−0.000) / 0.006 | 0.100 (−0.000) / 0.006 | — |
| ECM | MCLE | $N = 10^4$ | −0.916 (−0.000) / 0.009 | −4.020 (−0.010) / 0.266 | −0.200 (0.000) / 0.006 | 0.100 (−0.000) / 0.006 | — |
| ECM-P | MGLE | $\lambda_{\mathrm{theo}} = 10^4$ ($\log\lambda_{\mathrm{theo}} \approx 9.210$) | −0.915 (−0.001) / 0.010 | −4.069 (−0.045) / 0.182 | −0.200 (−0.000) / 0.006 | 0.101 (0.001) / 0.007 | 9.242 (0.032) / 0.035 |
| ECM-P | MCLE | $\lambda_{\mathrm{theo}} = 10^4$ | −0.916 (−0.000) / 0.009 | −4.137 (−0.114) / 0.379 | −0.200 (0.000) / 0.006 | 0.100 (−0.000) / 0.006 | 9.211 (0.001) / 0.014 |

5.1.2 Illustration 2: are translocated lesser prairie-chickens behaving in an exploratory or sedentary manner?
We consider data on $N = 93$ lesser prairie-chickens, an endangered species in the United States. For conservation and study purposes, the individuals were translocated from northwest Kansas to southeastern Colorado and southwestern Kansas in 2016–2019, and telemetry devices were attached to each bird (Berigan et al., 2024). To show that our methodology allows one to
infer movement parameters even under loss of information, we transformed the tracking data into counts to be modeled with the ECM distribution. Although individuals were released at different places and on different days, the telemetry data were translated in space-time so that all individuals start at an origin $x_0 = (0, 0)$ at a common release time $t_0$. Due to the information loss, $t_0$ is unknown and must be estimated. The individuals share $n = 10$ location-record times, which are thus used as survey times, with $t_1 = 0$ and the other 9 times forming an irregular grid within a two-day horizon after $t_1$. The recorded positions are also scaled so that they are contained in the domain $[-1, 1]^2$. At each survey time, we emulate a count in 1 to 30 randomly selected squares of side length $\Delta x = 0.1$ inside $[-0.6, 0.6]^2$. Since a central square around $x_0$ likely provides valuable information, we added such a square as a survey count area for times $t_1$ and $t_4$. See Figure 3 for the resulting count data.

FIGURE 2: Estimates of the size rates $\lambda$ in ECM-Poisson simulation studies with steady-state OU underlying movement. Red box plots correspond to MGLE, blue ones to MCLE. True parameter value indicated with a dotted gold line. Sample size 1000 in every setting (except MGLE $\lambda = 10^2$, see Appendix B).

FIGURE 3: Lesser prairie-chicken abundance data constructed from telemetry data. Each plot shows the survey sub-squares of size $\Delta x = 0.1$ inside the domain $[-0.6, 0.6]^2$ where the underlying moving individuals are counted at a given survey time $t_k$, $k = 1, \ldots, 10$. Inside each sub-square, a number indicates the quantity of individuals counted there.

One important biological question is whether this released population shows exploratory or sedentary movement behavior. Bird populations usually show a mixture of behaviors among individuals, sometimes related to sex (Berigan et al., 2024). Here we use the ECM distribution to infer both the movement parameters and the proportion of the population with exploratory movement type. A sedentary movement is modeled as a (non-steady-state) OU process around $x_0$, with $\tau, \sigma$ to be estimated. The exploratory movement is modeled as a Brownian motion with standard deviation $\sigma$. Note that formula (5.6) also holds for a Brownian motion, so it makes sense to interpret $\sigma$ as a "speed" parameter for either movement behavior. We introduce the parameter $\alpha \in [0, 1]$, the probability that a randomly selected individual is an explorer. The "trajectory" model $X$ is therefore a mixture:
$$X \sim \begin{cases} X_B := \mathrm{Brown}(\sigma), \; X_B(t_0) = x_0 & \text{with probability } \alpha, \\ X_O := \mathrm{OU}(\tau, \sigma), \; X_O(t_0) = x_0 & \text{with probability } 1 - \alpha. \end{cases} \tag{5.7}$$
By conditioning, one obtains that the one-time and two-time marginals of the overall mixed trajectory model $X$ are given by the corresponding linear combinations of the marginals associated with each movement:
$$\mathbb{P}(X(t) \in A) = \alpha\,\mathbb{P}(X_B(t) \in A) + (1 - \alpha)\,\mathbb{P}(X_O(t) \in A), \tag{5.8}$$
$$\mathbb{P}(X(t) \in A,\, X(s) \in B) = \alpha\,\mathbb{P}(X_B(t) \in A,\, X_B(s) \in B) + (1 - \alpha)\,\mathbb{P}(X_O(t) \in A,\, X_O(s) \in B). \tag{5.9}$$
The one-time and two-time probabilities of the generated ECM counts then follow. From the simulation studies in Section 5.1.1, we know that MCLE gives more adequate estimates than MGLE when $N \approx 10^2$, so we apply it to our case with $N = 93$. We estimate the movement parameters $\sigma, \tau$, the initial time $t_0$ and the probability $\alpha$. Table 2 presents the point estimates and 95% confidence intervals
obtained through parametric bootstrapping. Bootstrap sampling histograms are presented in Figure 4, and Figure 5 contains the associated correlograms (with $\sigma, \tau, -t_0$ in log scale and $\alpha$ in logit scale).

TABLE 2: Point estimates and 95% confidence intervals (in brackets) for lesser prairie-chicken movement parameters, obtained with pairwise MCLE. Confidence intervals obtained from parametric bootstrapping with 1000 samples. The composite likelihood at the estimate is shown in column $c\ell$.

| Pairwise MCLE results | $\hat\tau$ (dist-unit) | $\hat\sigma$ (dist-unit/day$^{1/2}$) | $\hat t_0$ (days) | $\hat\alpha$ | $c\ell$ |
|---|---|---|---|---|---|
| Estimate | 0.0324 | 0.1085 | −0.1362 | 0.3682 | −8524.08 |
| 95% bootstrap CI | [0.0228, 0.0419] | [0.0752, 0.1419] | [−0.2658, −0.0333] | [0.1696, 0.5697] | — |

FIGURE 4: Parametric bootstrap frequency histograms for the estimates of $(\tau, \sigma, t_0, \alpha)$, with 1000 samples. Point estimates in red, sample mean in blue, 95% confidence interval limits in dotted black.

The exploratory probability was estimated at $\hat\alpha \approx 0.37$. There is some uncertainty in it, but nearly 90% of the bootstrap samples lie below 0.5, suggesting a sedentary tendency during the initial two days. The parameters $\tau, \sigma, t_0$ are estimated with adequate uncertainty, especially $\tau$, whose low estimate suggests that the sedentary population tends to stay close to the release point. The correlograms in Figure 5 show strong correlations among estimates, suggesting that a suitable re-parametrization may be better for statistical purposes, and that knowledge of $t_0$ would further improve the estimation of the other parameters.

FIGURE 5: Correlogram of bootstrap samples of the estimates of lesser prairie-chicken movement parameters $(\tau, \sigma, t_0, \alpha)$. In the histograms, the true value is marked with a dotted red line and the sample mean with a dotted blue line. In the point clouds, the true pair of values is marked with a red point and the sample mean with an empty-interior blue circle. Sample size of 1000.

5.2 Estimating vote transfer

One-final-option elections are usually held in at least two rounds. In the first, many options are available, while in the second only those candidates who satisfied a certain criterion in the first round can compete. One question that arises is whether it is possible to infer, using only the vote counts from both rounds, the vote transfer, that is, the proportions of voters preferring the different options in the second round given their preferences in the first. The problem of finding patterns of individual behavior given aggregate counts is known in sociology as the ecological inference problem (Schuessler, 1999). Many techniques have been developed for this problem. Commonly used methods are ecological regression (Goodman, 1953, 1959), where the target proportions are linear regression coefficients, and King's method (King, 2013), where a truncated Gaussian prior distribution on the proportions is used for Bayesian analysis. A variation of King's method (Rosen et al., 2001; King et al., 2004) appears to be the most used, and it has already been applied to inferring vote transfer (Audemard, 2024). Here we explain how to use the ECM distribution for this aim, providing a methodology different from those previously mentioned.³

We consider $n = 2$ times representing the two election rounds. The categories at each time correspond to the possible options the voters have. Suppose there are $\tilde m_1$ candidates in the first round, $\tilde m_2 = 2$ in the second, and that abstention is possible during
both rounds, resulting in $m_1 = \tilde m_1 + 1$ categories at time 1 and $m_2 = 3$ at time 2. Assume there are $n_D$ electoral districts and that each district $j \in \{1, \ldots, n_D\}$ has a known number of voters, denoted $N_j$. Since we focus on vote transfer, the one-time probabilities at the first round are not important; the focus is on the conditional probabilities $p^{(2|1)}_{l'|l}$ of transition from one option to another (Eq. (2.8)). Our main simplifying assumption is that each district has independent ECM vote counts with identical transition probabilities, and our objective is to estimate them. Since $\sum_{l'=1}^{3} p^{(2|1)}_{l'|l} = 1$, there are $2(\tilde m_1 + 1)$ different conditional probabilities to estimate. The independent-replicates assumption makes this inference possible as long as $n_D$ is large enough.

Let us denote by ${}_j Q = ({}_j \vec Q^{(k)})_{k=1,2}$ the ECM random arrangement containing the counts in district $j$. From Proposition 2.3, in each district the second-round count ${}_j \vec Q^{(2)}$ is Poisson-Multinomial when conditioned on the first-round results ${}_j \vec Q^{(1)}$. The log-likelihood of the transition probabilities $(p^{(2|1)}_{l'|l})_{l',l}$ given a possible national-level result $q = ({}_j q)_{j=1,\ldots,n_D} = \big(({}_j \vec q^{(k)})_{k=1,2}\big)_{j=1,\ldots,n_D}$ is given by
$$\ell\Big((p^{(2|1)}_{l'|l})_{l',l} \,\big|\, ({}_j q)_j\Big) = \sum_{j=1}^{n_D} \ell_{\mathrm{SIM}(m_1,\, {}_j \vec q^{(1)})}\Big((p^{(2|1)}_{l'|l})_{l',l} \,\big|\, {}_j \vec q^{(2)}\Big), \tag{5.10}$$
where $\ell_{\mathrm{SIM}(m_1,\, {}_j \vec q^{(1)})}$ denotes the log-likelihood of the sum of $m_1$ independent multinomials with sizes given by the components of the vector ${}_j \vec q^{(1)}$.

The Poisson-Multinomial likelihood has already been used in ecological inference problems, in the context of predicting machine failure (Lin et al., 2023). In our context, the explicit computation of the log-likelihood requires the evaluation of $m_1$ convolutions of dimension 2, which is tractable for reasonable values of $m_1$.

³Practically speaking, our methodology is an ecological regression where the variances of the residuals also depend on the parameters to be estimated.
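The conditional structure behind (5.10) is easy to simulate: given the first-round counts, the second-round counts are a sum of independent multinomials, one per first-round category. A minimal sketch (the first-round counts and the transition matrix are made up for illustration):

```python
# Sketch: second-round counts as a sum of independent multinomials given the
# first-round counts (cf. Proposition 2.3). Values are illustrative only.
import numpy as np

rng = np.random.default_rng(3)

q1 = np.array([400, 250, 350])          # first-round counts in m1 = 3 categories
P = np.array([[0.6, 0.2, 0.2],          # P[l, l'] = p^{(2|1)}_{l'|l}; rows sum to 1
              [0.1, 0.7, 0.2],
              [0.3, 0.3, 0.4]])

# Draw the second-round counts conditionally on q1: one multinomial per row of P,
# with size given by the corresponding first-round count.
q2 = sum(rng.multinomial(n, P[l]) for l, n in enumerate(q1))

print(q2, q2.sum())   # the total count is conserved: q2.sum() == q1.sum()
```

Repeating such draws over districts, with a common transition matrix $P$, generates data with exactly the likelihood structure of (5.10).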
However, since the number of voters per district is usually large in national elections, we assume MGLE is adequate for estimation purposes. Hence, we replace the log-likelihood (5.10) with a Gaussian pseudo-likelihood with the corresponding mean and covariance.

5.2.1 Illustration: Chilean 2021 presidential election

The first round of the 2021 Chilean presidential election comprised 7 candidates ranging from far left to far right. Since no candidate obtained 50% + 1 of the valid votes in the first round, a second round was held between the left-wing candidate Gabriel Boric and the far-right candidate José Kast. Two main questions arise in this context: did the vote transfer occur according to the usual left/right classification? And did the increase in participation favor Boric, who went on to win the second round with 55.87% of the valid votes? See Table 3 for the election results. The vote count information is publicly available on the Chilean election qualifying court's website.⁴

TABLE 3: Results of the 2021 Chilean presidential election in number of votes (percentages in parentheses).

| Option | Political tendency | Results 1st round | Results 2nd round |
|---|---|---|---|
| José Kast | Far right | 1 961 779 (13.05%) | 3 650 662 (24.29%) |
| Sebastián Sichel | Center-right | 898 635 (5.98%) | — |
| Franco Parisi | Dissident/right | 900 064 (5.99%) | — |
| Yasna Provoste | Center-left | 815 563 (5.43%) | — |
| Gabriel Boric | Left | 1 815 024 (12.08%) | 4 621 231 (30.74%) |
| Marco Enríquez-Ominami | Dissident/left | 534 383 (3.56%) | — |
| Eduardo Artés | Far left | 102 897 (0.68%) | — |
| Abstention (incl. null/blank) | — | 8 002 629 (53.24%) | 6 759 081 (44.97%) |

We applied our methodology with MGLE for inference. The number of people with the right to
vote is $N = 15\,030\,974$ for both rounds. As districts, we considered the territorial division of Chile into comunas. Voters abroad were grouped into an extra fictional district. This resulted in $n_D = 347$ districts with numbers of voters ranging from 233 to 403 129. Null and blank votes are counted as abstention. We obtained estimates of the vote transfer proportions and quantified uncertainty with parametric bootstrapping. The results are presented in Table 4. The confidence intervals are quite narrow, presumably due to the strong and questionable iid assumption among and within districts.

Under the assumed model, we learn, first, that the reduction in abstention did favor Boric, but Kast also benefited from it. The most crucial vote transfer for Boric's victory may have been from Parisi voters. Parisi is commonly perceived as right or center-right (Quiroga, 2015), but above all as a dissident from the traditional Chilean political system. His voters seem to have followed this tendency, with huge abstention during the second round. Surprisingly, however, our estimate of Parisi voters' support for the leftist Boric was nearly twice their support for Kast. This suggests that the left/right classification of candidates may not be suitable for classifying their voters. The results for the more centrist Sichel and Provoste also deserve comment: while the vote transfer roughly respects the left/right tendency, there is a significant shift to the opposite side and to abstention in both cases.

⁴See https://tribunalcalificador.cl/resultados_de_elecciones/, last consulted on 1 May 2025.
TABLE 4: Point estimates and 95% confidence intervals (in brackets) of the vote transfer proportions during the 2021 Chilean presidential election, obtained with MGLE (in percentages). Rows: preferred option at the second round. Columns: preferred option at the first round. Confidence intervals obtained from parametric bootstrapping with 1000 samples.

| 2nd round \ 1st round | Kast | Boric | Parisi | Sichel | Provoste | ME-O | Artés | Abstention |
|---|---|---|---|---|---|---|---|---|
| Boric | ≈0% [0, 0.28] | ≈100% [99.80, 100] | 31.04% [30.41, 31.63] | 19.80% [19.21, 20.28] | 52.13% [50.98, 53.29] | ≈100% [99.25, 100] | ≈100% [96.24, 100] | 16.34% [16.26, 16.57] |
| Kast | 97.54% [97.15, 97.94] | ≈0% [0, 0.01] | 16.11% [15.57, 16.69] | 68.24% [67.55, 68.91] | 30.18% [29.21, 31.18] | ≈0% [0, 0.28] | ≈0% [0, 2.69] | 9.33% [9.19, 9.45] |
| Abstention | 2.47% [1.99, 2.84] | ≈0% [0, 0.17] | 52.84% [52.16, 53.57] | 11.96% [11.28, 12.77] | 17.69% [16.42, 18.90] | ≈0% [0, 0.50] | ≈0% [0, 0.78] | 74.28% [74.12, 74.43] |

6 Comparison with other models of counts

The ECM distribution is connected to many models used in different fields. For the sake of bibliographic completeness, this section is devoted to showing the similarities with two widely used kinds of models: Poisson processes and Markov models.

6.1 Counts of Poisson point processes

Poisson point processes have applications in many fields (Ross, 1995; Moller & Waagepetersen, 2003; Illian et al., 2008). Their simplicity stems from the need to model only an intensity field, which gives the expected count over a space/time region. The counts are independent Poisson variables around this average. The intensity can be modeled quite freely, sometimes relating it to covariates (Baddeley et al., 2005; Thurman & Zhu, 2014) or taking it to be the solution of a suitable differential equation (Wikle, 2003; Soubeyrand & Roques, 2014; Hefley & Hooten, 2016; Zamberletti et al., 2022).

We can retrieve a structure of independent Poisson counts with the ECM-Poisson distribution as the particular scenario where the path probabilities satisfy
$$p^{(1,\ldots,n)}_{l_1,\ldots,l_n} = p^{(1)}_{l_1} \cdots p^{(n)}_{l_n}. \tag{6.1}$$
As a result,
counts of space-time Poisson processes can be subsumed as a special case of this distribution. In that case, the time index $k = 1, \ldots, n$ refers to disjoint time intervals, and the categories at each $k$ represent disjoint spatial count regions. The one-time probabilities govern the intensity. If we apply this logic to moving individuals as in Section 5.1, where the time index $k$ refers to a specific time point $t_k$, we obtain that the continuous-time stochastic processes satisfying (6.1) for any possible choice of count times are those with independence at different times, no matter how small the time lag. Such trajectory models do not have continuous paths, so they are not realistic for physical scenarios. As explained in Section 5.1, the abundance random measure $\Phi$ in (5.1) is a continuous-time evolution of inhomogeneous spatial Poisson processes, but it is not a space-time point process.

One of the disadvantages of Poisson processes is the lack of autocorrelation in the count field. The most widely used point process model which includes dependence among counts is the Cox process (Møller et al., 1998; Diggle et al., 2010). It is a Poisson process conditioned on a random intensity, which is often modeled phenomenologically in order to fit data or to relate the counts to covariates, but we are unaware of Cox-dependence modeling through the movement of underlying individuals. Our approach tackles this problem by considering the natural autocorrelation induced by the movement.

In summary, the ECM-related models developed here are arguably statistically richer than space-time Poisson processes. Their advantages include the explicit description of the temporal dependence between counts, which can be interpreted in a continuous-time setting, and the fact that both the intensity and the space-time autocorrelation are related to individual movement, allowing one to infer movement parameters that are unidentifiable in a purely intensity-based model.
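The dependence that the factorization (6.1) rules out is easy to exhibit numerically. In this sketch (ours, with arbitrary values), Brownian movers are counted in the same region at two nearby times; across replicates the two counts are strongly correlated, whereas under a space-time Poisson model with factorized path probabilities they would be independent:

```python
# Sketch: movement induces temporal dependence between counts that an
# independent-Poisson (factorized) model cannot capture. Illustrative values.
import numpy as np

rng = np.random.default_rng(4)

N, reps, dt = 200, 2000, 0.001
x1 = rng.normal(0.0, 1.0, size=(reps, N))               # positions at time t1
x2 = x1 + rng.normal(0.0, np.sqrt(dt), size=(reps, N))  # slightly later positions

in_A1 = (np.abs(x1) < 0.5).sum(axis=1)   # count in the region A = (-0.5, 0.5) at t1
in_A2 = (np.abs(x2) < 0.5).sum(axis=1)   # count in A at t2

corr = np.corrcoef(in_A1, in_A2)[0, 1]
print(corr)   # close to 1 for small dt; it would be approximately 0 under (6.1)
```

Shrinking `dt` drives the correlation towards 1, reflecting the continuity of the underlying trajectories.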
6.2 Markov models

The ECM distribution does not assume any Markovian structure. Markov models can nonetheless be related to a particular case. Suppose the path probabilities can be written as
$$p^{(1,\ldots,n)}_{l_1,\ldots,l_n} = p^{(n|n-1)}_{l_n|l_{n-1}}\, p^{(n-1|n-2)}_{l_{n-1}|l_{n-2}} \cdots p^{(2|1)}_{l_2|l_1}\, p^{(1)}_{l_1}. \tag{6.2}$$
In other words, the individuals follow a Markov chain across the categories. Then, the count vectors $\vec Q^{(1)}, \vec Q^{(2)}, \ldots, \vec Q^{(n)}$ follow a Hidden Markov Model (HMM) (Rabiner & Juang, 1986; Zucchini & MacDonald, 2009). The counts are not themselves Markov in general, due to lumping (Gurvits & Ledoux, 2005; Kemeny et al., 1969), but they are constructed from an underlying Markov chain, the vector $\vec Q^{(k)}$ containing information on the chain only at time $k$. HMMs are very widely used in applied probability and are a reference for modeling counts in ecology (Knape & De Valpine, 2012; Glennie et al., 2023). Several techniques for treating the complicated likelihoods involved have been developed: the forward algorithm, the Viterbi algorithm and particular filtering techniques (Smyth et al., 1997; Doucet et al., 2001). Thus, the likelihood of the ECM distribution in the Markovian case (6.2) can become tractable in some scenarios. Conversely, the fitting methods presented here may be added to the toolbox of inference methods for certain HMMs, which can also present intractable likelihoods (Dean et al.,
2014).

One issue is still present when considering an underlying continuous Markov process $X$ in the scenario of Section 5.1. The Markovianity of $X$ does not imply the Markovianity of the counts. This is a form of continuous-to-discrete lumping, similar to that encountered in discrete-valued Markov chains (Gurvits & Ledoux, 2005). So, even with Markov trajectories such as Brownian motion, OU processes, or more generally Itô diffusions, the scenario is, strictly speaking, outside the scope of condition (6.2). Nonetheless, a Markov assumption on the path probabilities may still be used as a simplified ECM model, and the associated, more tractable likelihood can be used as an estimating function or for Bayesian inference. The conditions under which this approximation produces adequate estimates remain to be explored.

7 Perspectives

The properties of the ECM distribution explored here are modest, in the sense that we only studied the minimum necessary to approach practical problems. Many additional properties, such as higher-order marginals, Fisher information, Kullback-Leibler divergences with respect to other distributions, or properties of diverse estimators, remain to be explored. A more theoretical study of the Gaussian approximation exploited in the MGLE estimator could be approached using classical error-bounding results (Valiant & Valiant, 2010, 2011; Daskalakis et al., 2015). Asymptotic properties and uncertainty quantification for MGLE and MCLE may also be explored theoretically (Godambe, 1991).

One interesting question, especially in ecological applications, is the estimation of the number of individuals $N$ when it is unknown. We have shown that we can estimate $\lambda$ for the ECM-Poisson distribution if enough information is available, but an estimator of $\lambda$ is not an estimator of $N$. One could proceed by studying the conditional distribution of $N$ given the information in $Q$, which by Bayes' rule is given by
$$\mathbb{P}(N = x \mid Q = q) = \frac{\mathbb{P}(Q = q \mid N = x)\,\mathbb{P}(N = x)}{\mathbb{P}(Q = q)}. \tag{7.1}$$
Note that $\mathbb{P}(Q = q \mid N = x)$ is an ECM probability, $\mathbb{P}(N = x)$ is a Poisson probability, and $\mathbb{P}(Q = q)$ is an ECM-Poisson probability. Thus, one can either use point estimates for $\lambda$ and the other parameters and describe this distribution, or place a prior on $\lambda$ and derive the posterior of $N$ given $Q$ and $\lambda$ using Bayesian inference. If $\lambda$ is large enough, the intractable probabilities in (7.1) can be approximated by Gaussian probabilities with continuity correction. For small $\lambda$, the conditions under which the ECM and ECM-Poisson probabilities can be replaced by corresponding composite likelihoods remain to be studied (Pauli et al., 2011; Ribatet et al., 2012; Syring & Martin, 2019).

It is likely that future applications or variants of the ECM distribution will also present likelihood intractability. New ad hoc fitting methods should be explored. Likelihood-free methods (Drovandi & Frazier, 2022) are an interesting option. Since simulating an ECM or ECM-Poisson arrangement is easy as long as the individual movement is easy to simulate, approximate Bayesian computation (Sisson et al., 2018) can be used for Bayesian analyses, but case-specific summary statistics and discrepancies need to be chosen. Neural estimators (Lenzi et al., 2023; Zammit-Mangion et al., 2024) could also be explored, since they are versatile enough to be adapted to more complex but easily simulated scenarios, such as individuals reproducing and dying or non-independent movement.

The iid assumption among individuals is clearly a strong hypothesis for biological or
sociological applications. However, the ECM distribution can still be used as a basis for building more complex models. For example, counts of individuals with two or more different behaviors (such as in Section 5.1.2, but knowing the number of exploratory individuals) could be modeled as the sum of independent ECM arrangements, one for each type of movement. This avoids the identically-distributed assumption, but requires studying the properties of sums of independent ECM arrangements. To avoid the independence assumption, one could condition on some external variable that influences the individuals' behavior and then model their movement by invoking conditional independence. The counts are then conditionally ECM, but not marginally. This also applies to the case where some individuals have a strong influence on others. For example, a leader could follow a certain movement, and the other individuals would follow conditionally independent movements given the leader's. This idea can be applied either to physical movement as in Section 5.1 or to sociological scenarios as in Section 5.2, in which case it would refer to opinion-leader effects.

Acknowledgments

We warmly thank Liam A. Berigan and David Haukos for allowing us to use their data on lesser prairie-chickens. We also thank Denis Allard for his suggestions on the use of composite likelihood estimators and their application. This research has been funded by the Swiss National Science Foundation, grant number 310030_207603.

Appendices

A Proofs

A.1 Proof of Proposition 2.1

Let us define the $n$-dimensional random array $X = (X_{l_1,\ldots,l_n})_{l_1,\ldots,l_n}$ with dimensions $m_1 \times \ldots \times m_n$, in which each $X_{l_1,\ldots,l_n}$ means, for every $(l_1, \ldots, l_n) \in \{1, \ldots, m_1\} \times \ldots \times \{1, \ldots, m_n\}$,
$$X_{l_1,\ldots,l_n} := \text{"number of individuals following the full path } l_1, \ldots, l_n\text{"}. \tag{A.1}$$
Since the $N$ individuals independently follow the same dynamics across categories, the array $X$ can be wrapped into a random vector which follows a classical multinomial distribution, each component with success probability given by the corresponding full-path probability. The characteristic function of $X$ is thus a multinomial one:
$$\varphi_X(\eta) = \Bigg[\sum_{l_1=1}^{m_1} \cdots \sum_{l_n=1}^{m_n} p^{(1,\ldots,n)}_{l_1,\ldots,l_n}\, e^{i\eta_{l_1,\ldots,l_n}}\Bigg]^N, \tag{A.2}$$
for every consistent $n$-dimensional array of real values $\eta = (\eta_{l_1,\ldots,l_n})_{l_1,\ldots,l_n}$. Now, the variables in the ECM random arrangement $Q$ are, by definition, the marginalization of the array $X$ over each category and time, that is,
$$Q^{(k)}_l = \sum_{\substack{l_1,\ldots,l_n \\ l_k = l}} X_{l_1,\ldots,l_n}, \quad \forall k, l. \tag{A.3}$$
Therefore, we can compute $\varphi_Q$ from $\varphi_X$. Let $\xi = (\xi^{(k)}_l)_{l,k}$ be an arrangement in $\mathbb{R}^{m_1 + \ldots + m_n}$. Substituting (A.3) and grouping terms, we have
$$\varphi_Q(\xi) = \mathbb{E}\Big(e^{i\sum_{k=1}^{n} \sum_{l=1}^{m_k} Q^{(k)}_l \xi^{(k)}_l}\Big) = \mathbb{E}\Big(e^{i\sum_{l_1=1}^{m_1} \cdots \sum_{l_n=1}^{m_n} \left(\sum_{k=1}^{n} \xi^{(k)}_{l_k}\right) X_{l_1,\ldots,l_n}}\Big) = \varphi_X(\eta), \tag{A.4}$$
where $\eta = (\eta_{l_1,\ldots,l_n})_{l_1,\ldots,l_n}$ is the $n$-dimensional array given by $\eta_{l_1,\ldots,l_n} = \sum_{k=1}^{n} \xi^{(k)}_{l_k}$. From (A.2) it follows that
$$\varphi_Q(\xi) = \Bigg[\sum_{l_1=1}^{m_1} \cdots \sum_{l_n=1}^{m_n} p^{(1,\ldots,n)}_{l_1,\ldots,l_n}\, e^{i(\xi^{(1)}_{l_1} + \ldots + \xi^{(n)}_{l_n})}\Bigg]^N. \;\blacksquare \tag{A.5}$$

A.2 Proof of Proposition 2.2

Let $\vec\xi = (\xi_1, \ldots, \xi_{m_k}) \in \mathbb{R}^{m_k}$. Let $\xi = (\xi^{(k')}_l)_{l,k'}$ be the arrangement in $\mathbb{R}^{m_1 + \ldots + m_n}$ such that $\xi^{(k')}_l = 0$ if $k' \ne k$, and $\xi^{(k)}_l = \xi_l$ for every $l \in \{1, \ldots, m_k\}$. Then the characteristic function of the random vector $\vec Q^{(k)}$ satisfies
$$\varphi_{\vec Q^{(k)}}(\vec\xi) = \varphi_Q(\xi) = \Bigg[\sum_{l_1=1}^{m_1} \cdots \sum_{l_n=1}^{m_n} p^{(1,\ldots,n)}_{l_1,\ldots,l_n}\, e^{i(\xi^{(1)}_{l_1} + \ldots + \xi^{(n)}_{l_n})}\Bigg]^N = \Bigg[\sum_{l_1=1}^{m_1} \cdots \sum_{l_n=1}^{m_n} p^{(1,\ldots,n)}_{l_1,\ldots,l_n}\, e^{i\xi_{l_k}}\Bigg]^N = \Bigg[\sum_{l_k=1}^{m_k} p^{(k)}_{l_k}\, e^{i\xi_{l_k}}\Bigg]^N, \tag{A.6}$$
which is the characteristic function of a multinomial random vector. Note that we have
https://arxiv.org/abs/2505.20151v1
used marginalization for the one-time probabilities

$$p^{(k)}_l = \sum_{\substack{l_1,\dots,l_n \\ l_k = l}} p^{(1,\dots,n)}_{l_1,\dots,l_n}. \quad\blacksquare \tag{A.7}$$

A.3 Proof of Proposition 2.3

Recall that for two random vectors $\vec{X}, \vec{Y}$ with values in $\mathbb{R}^m$ and $\mathbb{R}^n$ respectively, the conditional characteristic function $\varphi_{\vec{X}\mid\vec{Y}=\vec{y}}$ satisfies

$$\varphi_{\vec{X},\vec{Y}}(\vec{\xi},\vec{\eta}) = \int_{\mathbb{R}^n} \varphi_{\vec{X}\mid\vec{Y}=\vec{y}}(\vec{\xi})\, e^{i\vec{\eta}^T\vec{y}}\, d\mu_{\vec{Y}}(\vec{y}), \quad \vec{\xi} \in \mathbb{R}^m,\ \vec{\eta} \in \mathbb{R}^n, \tag{A.8}$$

where $\mu_{\vec{Y}}$ denotes the distribution measure of $\vec{Y}$. In our case, let $k' \neq k$. Since $\vec{Q}^{(k)}$ is multinomial (Proposition 2.2), we use (A.8) and conclude

$$\varphi_{\vec{Q}^{(k')},\vec{Q}^{(k)}}(\vec{\xi},\vec{\eta}) = \sum_{\substack{\vec{q}\in\{0,\dots,N\}^{m_k} \\ \mathrm{sum}(\vec{q})=N}} \varphi_{\vec{Q}^{(k')}\mid\vec{Q}^{(k)}=\vec{q}}(\vec{\xi})\, e^{i\vec{\eta}^T\vec{q}}\, \mathbb{P}\big(\vec{Q}^{(k)}=\vec{q}\big) = \sum_{\substack{\vec{q}\in\{0,\dots,N\}^{m_k} \\ \mathrm{sum}(\vec{q})=N}} \varphi_{\vec{Q}^{(k')}\mid\vec{Q}^{(k)}=\vec{q}}(\vec{\xi})\, e^{i\vec{\eta}^T\vec{q}}\, \frac{N!}{q_1!\cdots q_{m_k}!}\big(p^{(k)}_1\big)^{q_1}\cdots\big(p^{(k)}_{m_k}\big)^{q_{m_k}}, \tag{A.9}$$

for every $\vec{\xi} = (\xi_1,\dots,\xi_{m_{k'}}) \in \mathbb{R}^{m_{k'}}$ and every $\vec{\eta} = (\eta_1,\dots,\eta_{m_k}) \in \mathbb{R}^{m_k}$. Here we denoted $\mathrm{sum}(\vec{q}) = q_1 + \dots + q_{m_k}$. On the other hand, $\varphi_{\vec{Q}^{(k')},\vec{Q}^{(k)}}$ can be computed similarly to the marginal characteristic function $\varphi_{\vec{Q}^{(k)}}$ obtained in the proof of Proposition 2.2, by considering $\varphi_Q$ evaluated at an arrangement with convenient null components and using marginalization for the two-times probabilities

$$p^{(k,k')}_{l,l'} = \sum_{\substack{l_1,\dots,l_n \\ l_k = l,\ l_{k'} = l'}} p^{(1,\dots,n)}_{l_1,\dots,l_n}. \tag{A.10}$$

The final result is (details are left to the reader)

$$\varphi_{\vec{Q}^{(k')},\vec{Q}^{(k)}}(\vec{\xi},\vec{\eta}) = \left[\sum_{l_{k'}=1}^{m_{k'}}\sum_{l_k=1}^{m_k} p^{(k,k')}_{l_k,l_{k'}}\, e^{i(\xi_{l_{k'}}+\eta_{l_k})}\right]^N. \tag{A.11}$$

Using the factorization $p^{(k,k')}_{l_k,l_{k'}} = p^{(k'|k)}_{l_{k'}|l_k}\, p^{(k)}_{l_k}$ and the multinomial theorem, we obtain

$$\varphi_{\vec{Q}^{(k')},\vec{Q}^{(k)}}(\vec{\xi},\vec{\eta}) = \left[\sum_{l_k=1}^{m_k} p^{(k)}_{l_k}\, e^{i\eta_{l_k}} \sum_{l_{k'}=1}^{m_{k'}} p^{(k'|k)}_{l_{k'}|l_k}\, e^{i\xi_{l_{k'}}}\right]^N = \sum_{\substack{\vec{q}\in\{0,\dots,N\}^{m_k} \\ \mathrm{sum}(\vec{q})=N}} \frac{N!}{q_1!\cdots q_{m_k}!}\, e^{i\vec{\eta}^T\vec{q}}\, \big(p^{(k)}_1\big)^{q_1}\cdots\big(p^{(k)}_{m_k}\big)^{q_{m_k}} \prod_{l_k=1}^{m_k}\left[\sum_{l_{k'}=1}^{m_{k'}} p^{(k'|k)}_{l_{k'}|l_k}\, e^{i\xi_{l_{k'}}}\right]^{q_{l_k}}. \tag{A.12}$$

Comparing expressions (A.9) and (A.12) term-by-term, we conclude

$$\varphi_{\vec{Q}^{(k')}\mid\vec{Q}^{(k)}=\vec{q}}(\vec{\xi}) = \prod_{l_k=1}^{m_k}\left[\sum_{l_{k'}=1}^{m_{k'}} p^{(k'|k)}_{l_{k'}|l_k}\, e^{i\xi_{l_{k'}}}\right]^{q_{l_k}}, \tag{A.13}$$

which is the product of $m_k$ multinomial characteristic functions, each one with the desired size and probability vector.
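These marginalizations, together with the moment identities they deliver (the multinomial marginal mean of Proposition 2.2 and the non-centered cross-moment proved in Appendix A.4), can be verified by exact enumeration in a tiny case. The sketch below is not from the paper; the full-path probabilities are hypothetical values chosen for illustration, with $n = 2$ times and $m_1 = m_2 = 2$ categories:

```python
import itertools
import numpy as np

# Hypothetical full-path probabilities p^{(1,2)}_{l1,l2}; N individuals
# follow i.i.d. paths (l1, l2) with these probabilities.
P = np.array([[0.1, 0.3],
              [0.4, 0.2]])
N = 3

p1 = P.sum(axis=1)  # one-time marginals p^{(1)}_l, as in (A.7)
p2 = P.sum(axis=0)  # one-time marginals p^{(2)}_l

paths = list(itertools.product(range(2), repeat=2))

# Exact E[Q^{(1)}_l] and E[Q^{(1)}_l Q^{(2)}_{l'}] by enumerating all
# possible assignments of paths to the N individuals.
EQ1 = np.zeros(2)
EQ1Q2 = np.zeros((2, 2))
for assignment in itertools.product(paths, repeat=N):
    w = np.prod([P[a] for a in assignment])  # probability of this assignment
    Q1 = np.zeros(2)
    Q2 = np.zeros(2)
    for l1, l2 in assignment:
        Q1[l1] += 1
        Q2[l2] += 1
    EQ1 += w * Q1
    EQ1Q2 += w * np.outer(Q1, Q2)

# Multinomial marginal mean (Proposition 2.2) and the cross-moment
# N p^{(1,2)}_{l,l'} + N(N-1) p^{(1)}_l p^{(2)}_{l'} (Appendix A.4).
assert np.allclose(EQ1, N * p1)
assert np.allclose(EQ1Q2, N * P + N * (N - 1) * np.outer(p1, p2))
```

Since each individual contributes one count per time, the cross-moment matrix sums to $N^2$, which gives an additional sanity check on the enumeration.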
Note that the term-by-term comparison is justified by the linear independence of the functions $\eta \in \mathbb{R}^{m_k} \mapsto e^{i\eta^T\vec{q}}$ for $\vec{q} \in \{0,\dots,N\}^{m_k}$. ■

A.4 Proof of Proposition 2.4

Since $\vec{Q}^{(k)}$ is multinomial (Proposition 2.2), one has immediately $\mathbb{E}\big(Q^{(k)}_l\big) = N p^{(k)}_l$. Using the covariance structure of a multinomial vector, we obtain for $k = k'$

$$\mathbb{E}\big(Q^{(k)}_l Q^{(k)}_{l'}\big) = \delta_{l,l'}\, N p^{(k)}_l - N p^{(k)}_l p^{(k)}_{l'} + N^2 p^{(k)}_l p^{(k)}_{l'} = \delta_{l,l'}\, N p^{(k)}_l + N(N-1)\, p^{(k)}_l p^{(k)}_{l'}. \tag{A.14}$$

For the case $k \neq k'$, we use Proposition 2.3 and the properties of conditional expectation:

$$\mathbb{E}\big(Q^{(k)}_l Q^{(k')}_{l'}\big) = \mathbb{E}\Big(Q^{(k)}_l\,\mathbb{E}\big(Q^{(k')}_{l'} \mid \vec{Q}^{(k)}\big)\Big) = \mathbb{E}\Big(Q^{(k)}_l \sum_{l''=1}^{m_k} Q^{(k)}_{l''}\, p^{(k'|k)}_{l'|l''}\Big) \quad\text{(expectation of a sum of indep.\ multinomials)}$$
$$= \sum_{l''=1}^{m_k}\Big(\delta_{l,l''}\, N p^{(k)}_l + N(N-1)\, p^{(k)}_l p^{(k)}_{l''}\Big) p^{(k'|k)}_{l'|l''} \quad\text{(expression (A.14))}$$
$$= N p^{(k)}_l\, p^{(k'|k)}_{l'|l} + N(N-1)\, p^{(k)}_l \sum_{l''=1}^{m_k} p^{(k',k)}_{l',l''} \quad\text{(split the sum and use conditional probability products)}$$
$$= N p^{(k,k')}_{l,l'} + N(N-1)\, p^{(k)}_l\, p^{(k')}_{l'}. \quad\text{(conditional probability products and marginalization (A.10))} \tag{A.15}$$

With the non-centered second moments computed, the covariance (2.11) follows. ■

A.5 Proof of Proposition 3.1

Using that $Q \mid N \sim \mathrm{ECM}$ and the properties of conditional expectation, we have

$$\varphi_Q(\xi) = \mathbb{E}\big(e^{i\xi^T Q}\big) = \mathbb{E}\Big(\mathbb{E}\big(e^{i\xi^T Q} \mid N\big)\Big) = \mathbb{E}\left(\left[\sum_{l_1=1}^{m_1}\cdots\sum_{l_n=1}^{m_n} e^{i(\xi^{(1)}_{l_1}+\dots+\xi^{(n)}_{l_n})}\, p^{(1,\dots,n)}_{l_1,\dots,l_n}\right]^N\right), \tag{A.16}$$

where we have used the characteristic function of an ECM random arrangement (2.6). Using the probability generating function of a Poisson random variable, $G_N(z) = \mathbb{E}(z^N) = e^{\lambda(z-1)}$, we have

$$\varphi_Q(\xi) = \exp\left\{\lambda\left(\sum_{l_1=1}^{m_1}\cdots\sum_{l_n=1}^{m_n} p^{(1,\dots,n)}_{l_1,\dots,l_n}\, e^{i(\xi^{(1)}_{l_1}+\dots+\xi^{(n)}_{l_n})} - 1\right)\right\}. \quad\blacksquare \tag{A.17}$$

A.6 Proof of Proposition 3.2

Take $\xi = \big(\xi^{(k')}_l\big)_{l,k'}$ such that $\xi^{(k')}_l = 0$ for every $k' \neq k$, and set $\vec{\xi} = \big(\xi^{(k)}_1,\dots,\xi^{(k)}_{m_k}\big)$. Then,

$$\varphi_{\vec{Q}^{(k)}}(\vec{\xi}) = \varphi_Q(\xi) = \exp\left\{\lambda\left(\sum_{l_1=1}^{m_1}\cdots\sum_{l_n=1}^{m_n} p^{(1,\dots,n)}_{l_1,\dots,l_n}\, e^{i\xi^{(k)}_{l_k}} - 1\right)\right\} = \exp\left\{\lambda\left(\sum_{l_k=1}^{m_k} p^{(k)}_{l_k}\, e^{i\xi^{(k)}_{l_k}} - 1\right)\right\}, \tag{A.18}$$

where we have used the marginalization (A.7). Considering that $\sum_{l=1}^{m_k} p^{(k)}_l = 1$, we obtain

$$\varphi_{\vec{Q}^{(k)}}(\vec{\xi}) = \prod_{l=1}^{m_k} \exp\big\{\lambda\, p^{(k)}_l\big(e^{i\xi^{(k)}_l} - 1\big)\big\}, \tag{A.19}$$

which is a product of Poisson characteristic functions. ■

A.7 Proof of Proposition 3.3

Let $\vec{\xi} = (\xi_1,\dots,\xi_{m_{k'}}) \in \mathbb{R}^{m_{k'}}$ and $\vec{\eta} = (\eta_1,\dots,\eta_{\tilde m_k}) \in \mathbb{R}^{\tilde m_k}$. We consider an arrangement $\xi = \big(\xi^{(k'')}_l\big)_{l\in\{1,\dots,m_{k''}\},\,k''\in\{1,\dots,n\}}$ such that

$$\xi^{(k'')}_l = \begin{cases} \xi_l & \text{if } k'' = k' \\ \eta_l & \text{if } k'' = k \text{ and } l \leq \tilde m_k \\ 0 & \text{otherwise.} \end{cases} \tag{A.20}$$

Then, the joint characteristic function of $\big(\vec{Q}^{(k')}, \vec{\tilde{Q}}^{(k)}\big)$ is given by

$$\varphi_{\vec{Q}^{(k')},\vec{\tilde{Q}}^{(k)}}(\vec{\xi},\vec{\eta}) = \varphi_Q(\xi) = \exp\left\{\lambda\left(\sum_{l_1=1}^{m_1}\cdots\sum_{l_n=1}^{m_n} p^{(1,\dots,n)}_{l_1,\dots,l_n}\, e^{i(\xi^{(k')}_{l_{k'}}+\xi^{(k)}_{l_k})} - 1\right)\right\} = \exp\left\{\lambda\left(\sum_{l'=1}^{m_{k'}}\sum_{l=1}^{m_k} p^{(k,k')}_{l,l'}\, e^{i(\xi^{(k)}_l+\xi^{(k')}_{l'})} - 1\right)\right\} = \exp\left\{\lambda\left(\sum_{l'=1}^{m_{k'}}\sum_{l=1}^{\tilde m_k} p^{(k,k')}_{l,l'}\, e^{i(\eta_l+\xi_{l'})} + \sum_{l'=1}^{m_{k'}} p^{(k,k')}_{m_k,l'}\, e^{i\xi_{l'}} - 1\right)\right\} = \exp\left\{\lambda\left(\sum_{l'=1}^{m_{k'}} p^{(k,k')}_{m_k,l'}\, e^{i\xi_{l'}} - 1\right)\right\} \prod_{l=1}^{\tilde m_k} \exp\left\{e^{i\eta_l}\,\lambda \sum_{l'=1}^{m_{k'}} p^{(k,k')}_{l,l'}\, e^{i\xi_{l'}}\right\}. \tag{A.21}$$

On the other hand, using the same arguments as in Proof A.3 (Eq. (A.8)), and considering that $\vec{\tilde{Q}}^{(k)}$ has independent Poisson components, we have

$$\varphi_{\vec{Q}^{(k')},\vec{\tilde{Q}}^{(k)}}(\vec{\xi},\vec{\eta}) = \sum_{q_1=0}^{\infty}\cdots\sum_{q_{\tilde m_k}=0}^{\infty} \varphi_{\vec{Q}^{(k')}\mid\vec{\tilde{Q}}=(q_1,\dots,q_{\tilde m_k})}(\vec{\xi})\, e^{i\vec{\eta}^T\vec{q}} \prod_{l=1}^{\tilde m_k} e^{-\lambda p^{(k)}_l} \frac{\big[\lambda p^{(k)}_l\big]^{q_l}}{q_l!}. \tag{A.22}$$

Now, the product of exponentials in the last line of (A.21) can be expressed as

$$\prod_{l=1}^{\tilde m_k} \exp\left\{e^{i\eta_l}\,\lambda \sum_{l'=1}^{m_{k'}} p^{(k,k')}_{l,l'}\, e^{i\xi_{l'}}\right\} = \prod_{l=1}^{\tilde m_k} \sum_{q=0}^{\infty} \frac{\big[e^{i\eta_l}\,\lambda \sum_{l'=1}^{m_{k'}} p^{(k,k')}_{l,l'}\, e^{i\xi_{l'}}\big]^{q}}{q!} = \sum_{q_1=0}^{\infty}\cdots\sum_{q_{\tilde m_k}=0}^{\infty} \prod_{l=1}^{\tilde m_k} \frac{\big[e^{i\eta_l}\,\lambda \sum_{l'=1}^{m_{k'}} p^{(k,k')}_{l,l'}\, e^{i\xi_{l'}}\big]^{q_l}}{q_l!} = \sum_{q_1=0}^{\infty}\cdots\sum_{q_{\tilde m_k}=0}^{\infty} e^{i\vec{\eta}^T\vec{q}} \prod_{l=1}^{\tilde m_k} \frac{\big[\lambda \sum_{l'=1}^{m_{k'}} p^{(k,k')}_{l,l'}\, e^{i\xi_{l'}}\big]^{q_l}}{q_l!}. \tag{A.23}$$

We therefore have the equalities

$$\varphi_{\vec{Q}^{(k')},\vec{\tilde{Q}}^{(k)}}(\vec{\xi},\vec{\eta}) = \sum_{q_1=0}^{\infty}\cdots\sum_{q_{\tilde m_k}=0}^{\infty} \varphi_{\vec{Q}^{(k')}\mid\vec{\tilde{Q}}=(q_1,\dots,q_{\tilde m_k})}(\vec{\xi})\, e^{i\vec{\eta}^T\vec{q}} \prod_{l=1}^{\tilde m_k} e^{-\lambda p^{(k)}_l} \frac{\big[\lambda p^{(k)}_l\big]^{q_l}}{q_l!} = \sum_{q_1=0}^{\infty}\cdots\sum_{q_{\tilde m_k}=0}^{\infty} \exp\left\{\lambda\left(\sum_{l'=1}^{m_{k'}} p^{(k,k')}_{m_k,l'}\, e^{i\xi_{l'}} - 1\right)\right\} e^{i\vec{\eta}^T\vec{q}} \prod_{l=1}^{\tilde m_k} \frac{\big[\lambda \sum_{l'=1}^{m_{k'}} p^{(k,k')}_{l,l'}\, e^{i\xi_{l'}}\big]^{q_l}}{q_l!}. \tag{A.24}$$

Comparing term-by-term, we must have for every $\vec{q} \in \mathbb{N}^{\tilde m_k}$

$$\varphi_{\vec{Q}^{(k')}\mid\vec{\tilde{Q}}=\vec{q}}(\vec{\xi}) \prod_{l=1}^{\tilde m_k} e^{-\lambda p^{(k)}_l} \frac{\big[\lambda p^{(k)}_l\big]^{q_l}}{q_l!} = \exp\left\{\lambda\left(\sum_{l'=1}^{m_{k'}} p^{(k,k')}_{m_k,l'}\, e^{i\xi_{l'}} - 1\right)\right\} \prod_{l=1}^{\tilde m_k} \frac{\big[\lambda \sum_{l'=1}^{m_{k'}} p^{(k,k')}_{l,l'}\, e^{i\xi_{l'}}\big]^{q_l}}{q_l!}. \tag{A.25}$$

Isolating $\varphi_{\vec{Q}^{(k')}\mid\vec{\tilde{Q}}=\vec{q}}(\vec{\xi})$, we obtain
$$\varphi_{\vec{Q}^{(k')}\mid\vec{\tilde{Q}}=\vec{q}}(\vec{\xi}) = \exp\left\{\lambda\left(\sum_{l'=1}^{m_{k'}} p^{(k,k')}_{m_k,l'}\, e^{i\xi_{l'}} - 1\right)\right\} \prod_{l=1}^{\tilde m_k} \frac{\big[\lambda \sum_{l'=1}^{m_{k'}} p^{(k,k')}_{l,l'}\, e^{i\xi_{l'}}\big]^{q_l}}{q_l!}\, \frac{e^{\lambda p^{(k)}_l}\, q_l!}{\big[\lambda p^{(k)}_l\big]^{q_l}} = \exp\left\{\lambda\left(\sum_{l'=1}^{m_{k'}} p^{(k,k')}_{m_k,l'}\, e^{i\xi_{l'}} - 1 + \sum_{l=1}^{\tilde m_k} p^{(k)}_l\right)\right\} \prod_{l=1}^{\tilde m_k} \left[\sum_{l'=1}^{m_{k'}} p^{(k'|k)}_{l'|l}\, e^{i\xi_{l'}}\right]^{q_l}. \tag{A.26}$$

Finally, we consider the sum-to-one condition on (all) the one-time probabilities at time $k$ and, by marginalization, we have

$$\sum_{l=1}^{\tilde m_k} p^{(k)}_l = 1 - p^{(k)}_{m_k} = 1 - \sum_{l'=1}^{m_{k'}} p^{(k,k')}_{m_k,l'}. \tag{A.27}$$

Putting this in (A.26), we conclude

$$\varphi_{\vec{Q}^{(k')}\mid\vec{\tilde{Q}}=\vec{q}}(\vec{\xi}) = \exp\left\{\lambda \sum_{l'=1}^{m_{k'}} p^{(k,k')}_{m_k,l'}\big(e^{i\xi_{l'}} - 1\big)\right\} \prod_{l=1}^{\tilde m_k} \left[\sum_{l'=1}^{m_{k'}} p^{(k'|k)}_{l'|l}\, e^{i\xi_{l'}}\right]^{q_l} = \left\{\prod_{l'=1}^{m_{k'}} e^{\lambda p^{(k,k')}_{m_k,l'}(e^{i\xi_{l'}}-1)}\right\} \prod_{l=1}^{\tilde m_k} \left[\sum_{l'=1}^{m_{k'}} p^{(k'|k)}_{l'|l}\, e^{i\xi_{l'}}\right]^{q_l}, \tag{A.28}$$

which is indeed the product of the characteristic function of a random vector with independent Poisson components and a product of $\tilde m_k$ multinomial characteristic functions, all of them with their desired parameters. ■

A.8 Proof of Proposition 3.4

The mean and the covariance at the same time $k = k'$ come directly from Proposition 3.2. For $k \neq k'$, the non-centered second-order moment can be obtained by conditioning on $N$ and using the ECM second-moment formula (A.15):

$$\mathbb{E}\big(Q^{(k)}_l Q^{(k')}_{l'}\big) = \mathbb{E}\Big(\mathbb{E}\big(Q^{(k)}_l Q^{(k')}_{l'} \mid N\big)\Big) = \mathbb{E}\Big(N p^{(k,k')}_{l,l'} + (N^2 - N)\, p^{(k)}_l p^{(k')}_{l'}\Big) = \lambda\, p^{(k,k')}_{l,l'} + (\lambda^2 + \lambda - \lambda)\, p^{(k)}_l p^{(k')}_{l'} = \lambda\, p^{(k,k')}_{l,l'} + \lambda^2\, p^{(k)}_l p^{(k')}_{l'}, \tag{A.29}$$

where we have used that $N \sim \mathrm{Poisson}(\lambda)$. The covariance follows immediately. ■

A.9 Proof of Proposition 4.1

We first recall how to construct a bivariate binomial distribution (Kawamura, 1973). Consider a random vector $(X, Y)$ with binary components (bivariate Bernoulli). Its distribution can be parametrized by three quantities: the joint success probability $p_{X,Y} = \mathbb{P}(X=1, Y=1)$ and the marginal success probabilities $p_X = \mathbb{P}(X=1)$, $p_Y = \mathbb{P}(Y=1)$. In such a case we have

$$\mathbb{P}(X=1,Y=1) = p_{X,Y}; \quad \mathbb{P}(X=1,Y=0) = p_X - p_{X,Y}; \quad \mathbb{P}(X=0,Y=1) = p_Y - p_{X,Y}; \quad \mathbb{P}(X=0,Y=0) = 1 - p_X - p_Y + p_{X,Y}. \tag{A.30}$$

The characteristic function of $(X, Y)$ is easily computed:

$$\varphi_{X,Y}(\xi,\eta) = e^{i(\xi+\eta)}\, p_{X,Y} + e^{i\xi}(p_X - p_{X,Y}) + e^{i\eta}(p_Y - p_{X,Y}) + 1 - p_X - p_Y + p_{X,Y}. \tag{A.31}$$

Suppose now $(X, Y)$ is the sum of $N$ i.i.d. bivariate Bernoulli random vectors with joint success probability $p_{X,Y}$ and marginal success probabilities $p_X, p_Y$. Then $(X, Y)$ is said to follow a bivariate binomial distribution. Its characteristic function is then given by

$$\varphi_{X,Y}(\xi,\eta) = \Big[e^{i(\xi+\eta)}\, p_{X,Y} + e^{i\xi}(p_X - p_{X,Y}) + e^{i\eta}(p_Y - p_{X,Y}) + 1 - p_X - p_Y + p_{X,Y}\Big]^N. \tag{A.32}$$

Consider the components $\big(Q^{(k')}_{l'}, Q^{(k)}_l\big)$ of an ECM random arrangement, with $k \neq k'$. Their characteristic function $\varphi_{Q^{(k')}_{l'},Q^{(k)}_l}$ can be obtained by evaluating the characteristic function $\varphi_{\vec{Q}^{(k')},\vec{Q}^{(k)}}$ (Eq. (A.11)) at $\vec{\xi}$ and $\vec{\eta}$ such that $\xi_{l'} = \xi$, $\eta_l = \eta$, and the rest of their components are null. We obtain

$$\varphi_{Q^{(k')}_{l'},Q^{(k)}_l}(\xi,\eta) = \left[\sum_{l_{k'}=1}^{m_{k'}}\sum_{l_k=1}^{m_k} e^{i\xi\delta_{l',l_{k'}}}\, e^{i\eta\delta_{l,l_k}}\, p^{(k,k')}_{l_k,l_{k'}}\right]^N = \left[e^{i(\xi+\eta)}\, p^{(k,k')}_{l,l'} + e^{i\eta}\sum_{\substack{l_{k'}=1 \\ l_{k'}\neq l'}}^{m_{k'}} p^{(k,k')}_{l,l_{k'}} + e^{i\xi}\sum_{\substack{l_k=1 \\ l_k\neq l}}^{m_k} p^{(k,k')}_{l_k,l'} + \sum_{\substack{l_k,l_{k'} \\ l_k\neq l,\ l_{k'}\neq l'}} p^{(k,k')}_{l_k,l_{k'}}\right]^N = \left[e^{i(\xi+\eta)}\, p^{(k,k')}_{l,l'} + e^{i\eta}\big(p^{(k)}_l - p^{(k,k')}_{l,l'}\big) + e^{i\xi}\big(p^{(k')}_{l'} - p^{(k,k')}_{l,l'}\big) + 1 - p^{(k)}_l - p^{(k')}_{l'} + p^{(k,k')}_{l,l'}\right]^N, \tag{A.33}$$

which is indeed the characteristic function of a bivariate binomial random vector. The formula for the distribution of the pairs (Kawamura, 1973) is given in Eq. (A.34). We use the notation $a \vee b = \max\{a,b\}$ and $a \wedge b = \min\{a,b\}$.

$$\mathbb{P}\big(Q^{(k)}_l = q,\ Q^{(k')}_{l'} = q'\big) = \sum_{j = 0 \vee (q+q'-N)}^{q \wedge q'} \frac{N!}{(N-q-q'+j)!\,(q-j)!\,(q'-j)!\,j!}\, \Big[p^{(k,k')}_{l,l'}\Big]^j \Big[p^{(k)}_l - p^{(k,k')}_{l,l'}\Big]^{q-j} \Big[p^{(k')}_{l'} - p^{(k,k')}_{l,l'}\Big]^{q'-j} \Big[1 - p^{(k)}_l - p^{(k')}_{l'} + p^{(k,k')}_{l,l'}\Big]^{N-q-q'+j}. \quad\blacksquare \tag{A.34}$$

A.10 Proof of Proposition 4.2

We first recall how to construct a bivariate Poisson distribution (Kawamura, 1973). Consider three independent Poisson random variables $U, V, W$ with rates $\lambda_U, \lambda_V, \lambda_W$. Construct a random vector $(X, Y)$ as

$$X = W + U, \qquad Y = W + V. \tag{A.35}$$

We say then that $(X, Y)$ follows a bivariate Poisson distribution. This distribution can be parametrized by three quantities: the joint rate $\lambda_{X,Y} = \lambda_W$, and the marginal rates $\lambda_X = \lambda_W + \lambda_U$ and $\lambda_Y = \lambda_W + \lambda_V$. Each component of $(X, Y)$ is Poisson and $\mathrm{Cov}(X, Y) = \lambda_{X,Y}$. The characteristic function can be obtained using the independent sum $(X, Y) = (W, W) + (U, V)$:

$$\varphi_{X,Y}(\xi,\eta) = \varphi_{W,W}(\xi,\eta)\,\varphi_{U,V}(\xi,\eta) = \varphi_W(\xi+\eta)\,\varphi_U(\xi)\,\varphi_V(\eta)$$
$$= \exp\big\{\lambda_{X,Y}\big(e^{i(\xi+\eta)} - 1\big) + (\lambda_X - \lambda_{X,Y})\big(e^{i\xi} - 1\big) + (\lambda_Y - \lambda_{X,Y})\big(e^{i\eta} - 1\big)\big\}. \tag{A.36}$$

Now we proceed analogously to Proof A.9 of Proposition 4.1. The components $\big(Q^{(k')}_{l'}, Q^{(k)}_l\big)$ of an ECM-Poisson random arrangement, with $k \neq k'$, have joint characteristic function (Eq. (A.21))

$$\varphi_{Q^{(k')}_{l'},Q^{(k)}_l}(\xi,\eta) = \exp\left\{\lambda \sum_{l_{k'}=1}^{m_{k'}}\sum_{l_k=1}^{m_k}\Big(e^{i\xi\delta_{l',l_{k'}}}\, e^{i\eta\delta_{l,l_k}} - 1\Big)\, p^{(k,k')}_{l_k,l_{k'}}\right\} = \exp\left\{\big(e^{i(\xi+\eta)} - 1\big)\,\lambda\, p^{(k,k')}_{l,l'} + \big(e^{i\eta} - 1\big)\,\lambda \sum_{\substack{l_{k'}=1 \\ l_{k'}\neq l'}}^{m_{k'}} p^{(k,k')}_{l,l_{k'}} + \big(e^{i\xi} - 1\big)\,\lambda \sum_{\substack{l_k=1 \\ l_k\neq l}}^{m_k} p^{(k,k')}_{l_k,l'}\right\} = \exp\left\{\big(e^{i(\xi+\eta)} - 1\big)\,\lambda\, p^{(k,k')}_{l,l'} + \big(e^{i\eta} - 1\big)\,\lambda\big(p^{(k)}_l - p^{(k,k')}_{l,l'}\big) + \big(e^{i\xi} - 1\big)\,\lambda\big(p^{(k')}_{l'} - p^{(k,k')}_{l,l'}\big)\right\}, \tag{A.37}$$

which is indeed the characteristic function of a bivariate Poisson random vector with joint rate $\lambda\, p^{(k,k')}_{l,l'}$ and marginal rates $\lambda\, p^{(k')}_{l'}$ and $\lambda\, p^{(k)}_l$. The expression for the distribution of the pairs (Kawamura, 1973) is given by

$$\mathbb{P}\big(Q^{(k)}_l = q,\ Q^{(k')}_{l'} = q'\big) = \sum_{j=0}^{q \wedge q'} e^{-\lambda(p^{(k)}_l + p^{(k')}_{l'} - p^{(k,k')}_{l,l'})}\, \frac{\big[\lambda\big(p^{(k)}_l - p^{(k,k')}_{l,l'}\big)\big]^{q-j}}{(q-j)!}\, \frac{\big[\lambda\big(p^{(k')}_{l'} - p^{(k,k')}_{l,l'}\big)\big]^{q'-j}}{(q'-j)!}\, \frac{\big[\lambda\, p^{(k,k')}_{l,l'}\big]^j}{j!}. \quad\blacksquare \tag{A.38}$$

B Further details on simulation studies

The simulation studies were implemented in R, using the L-BFGS-B method of the optim function for the optimizations. The optimization domain for the movement parameters is $(\log(\tau), \log(\sigma), z_1, z_2) \in [-8, 6] \times [-8, 10] \times [-1, 1] \times [-1, 1]$. In each optimization, three different initial parameter settings (par0) were provided, to help ensure a good optimization over the given domain. We set $\tau_0 = \tau_{\mathrm{theo}}/2$, $z_{1,0} = 0$, $z_{2,0} = 0$, and we try three different initial values for the autocorrelation parameter $\sigma$, by taking $\theta_0 \in \big\{\tfrac{1}{10}\tfrac{\theta_{\mathrm{theo}}}{2},\ \tfrac{\theta_{\mathrm{theo}}}{2},\ 10\,\tfrac{\theta_{\mathrm{theo}}}{2}\big\}$ and then $\sigma_0 = \tau_0\sqrt{2\theta_0}$ (see the $(\theta, \sigma)$-parametrization of OU processes explained in Section 5.1.1). In the ECM-Poisson case, which is usually more complicated, we used four initial values for the autocorrelation parameter, adding an extra order of magnitude: $\theta_0 \in \big\{\tfrac{1}{10}\tfrac{\theta_{\mathrm{theo}}}{2},\ \tfrac{\theta_{\mathrm{theo}}}{2},\ 10\,\tfrac{\theta_{\mathrm{theo}}}{2},\ 100\,\tfrac{\theta_{\mathrm{theo}}}{2}\big\}$. In that case, the domain for $\log(\lambda)$ is $[\log(\lambda_{\mathrm{theo}}/10),\ \log(10\,\lambda_{\mathrm{theo}})]$ (except for the MGLE $\lambda = 10^2$ case, see further below), and the initial value is $\lambda_0 = \lambda_{\mathrm{theo}}/2$.

For each $N$ and $\lambda$, and each method (MGLE and MCLE), 1045 simulation-estimations were performed, in order to discard potentially erratic scenarios while still aiming for a total of 1000 samples. By "erratic" we mean the following. The Hessians at the optimum values proposed by optim are computed using the hessian function from the numDeriv library. Scenarios where the minimum eigenvalue of the Hessian is smaller than $10^{-12}$ are considered erratic in terms of optimization (minimization) and are removed from the samples. It turns out that almost no erratic scenarios were present in most settings, the only notable exception being MGLE applied to the ECM-Poisson distribution with $\lambda = 10^2$, where more than 50% of the samples were erratic. This required special attention: the domain for $\log(\lambda)$ had to be reduced to $[\log(\lambda_{\mathrm{theo}}/5),\ \log(6\,\lambda_{\mathrm{theo}})]$, and 2090 simulation-estimations were tried out, yielding only 800 non-erratic scenarios. Those 800 samples are the ones presented in Section 5.1.1 and in the correlograms shown in this Appendix. The erratic estimates often lay on the boundary of the optimization domains. This erratic behavior is likely due to the crude Gaussian approximation obtained when replacing the ECM-Poisson distribution for small $\lambda$. We strongly advise against using MGLE in such settings.

FIGURE 6: Survey squares and times used in simulation studies (Section 5.1.1).

The correlograms for all settings are presented in Figures 7, 8 and 9, in every case accompanied by a visual comparison of bias. There is an undesired complex scenario with MCLE for $N = 10^4$ and $\lambda = 10^4$, with multi-modal results. Contrary to the MGLE ECM-Poisson $\lambda = 10^2$ case, these are not erratic in the optimization sense specified above, but rather indicate multi-modality of the composite likelihood in this scenario. We remark, nonetheless, that the undesired lines formed around the theoretical values in the point clouds are still within a reasonable scale range of the theoretical value.
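Returning to the erraticness screen described above: the paper's implementation relies on R's numDeriv::hessian, but the rule itself (flag any reported optimum whose numerical Hessian has a minimum eigenvalue below a tolerance) is easy to state in any language. A minimal Python sketch, with helper names of our own choosing and toy objectives in place of the actual likelihoods:

```python
import numpy as np

def num_hessian(f, x, h=1e-5):
    """Central finite-difference Hessian, a rough analogue of numDeriv::hessian."""
    d = len(x)
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            e_i = np.zeros(d); e_i[i] = h
            e_j = np.zeros(d); e_j[j] = h
            H[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                       - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4 * h * h)
    return H

def is_erratic(f, x_opt, tol=1e-12):
    """Flag a minimiser whose Hessian has minimum eigenvalue below tol,
    i.e. a flat or non-convex direction at the reported optimum."""
    eigmin = np.linalg.eigvalsh(num_hessian(f, x_opt)).min()
    return eigmin < tol

# A strictly convex quadratic has a well-conditioned minimum...
f_good = lambda x: x[0]**2 + 2 * x[1]**2
# ...while an objective that is flat in one direction is flagged as erratic.
f_flat = lambda x: x[0]**2

assert not is_erratic(f_good, np.array([0.0, 0.0]))
assert is_erratic(f_flat, np.array([0.0, 0.0]))
```

The flat direction mimics an estimate stuck on the boundary of the optimization domain, where the curvature of the (composite) likelihood carries no information about the parameter.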
It is expected that for larger $\lambda$ or $N$, similar behavior will be present but inside a finer region around the theoretical value, respecting the asymptotic convergence.

FIGURE 7: Correlograms of MGLE and MCLE estimates of steady-state OU parameters $(\tau, \sigma, z_1, z_2)$ in simulation studies. Panels: (A) MGLE for ECM, $N = 10^2$; (B) MCLE for ECM, $N = 10^2$; (C) MGLE for ECM-Poisson*, $\lambda = 10^2$; (D) MCLE for ECM-Poisson, $\lambda = 10^2$. In histograms, the true parameter value is marked with a dotted red line and the sample mean with a dotted blue line. In point clouds, the theoretical value is indicated with a red point and the sample mean with an empty blue circle. Sample size 1000 in all settings, except MGLE ECM-Poisson, in which it is 800 (see Appendix B).

FIGURE 8: As Figure 7, with $N = 10^3$ for ECM and $\lambda = 10^3$ for ECM-Poisson. Panels: (A) MGLE for ECM, $N = 10^3$; (B) MCLE for ECM, $N = 10^3$; (C) MGLE for ECM-Poisson, $\lambda = 10^3$; (D) MCLE for ECM-Poisson, $\lambda = 10^3$. Sample size 1000 in all settings.

FIGURE 9: As Figure 7, with $N = 10^4$ for ECM and $\lambda = 10^4$ for ECM-Poisson. Panels: (A) MGLE for ECM, $N = 10^4$; (B) MCLE for ECM, $N = 10^4$; (C) MGLE for ECM-Poisson, $\lambda = 10^4$; (D) MCLE for ECM-Poisson, $\lambda = 10^4$. Sample size 1000 in all settings.

References

Allen, L. J. (2010). An introduction to stochastic processes with applications to biology. CRC Press.
Andrewartha, H. G., & Birch, L. C. (1954). The distribution and abundance of animals (No. Edn 1). Univ. Chicago Press.
Audemard, J. (2024). Understanding vote transfers in two-round elections without resorting to declared data. The contribution of ecological inference, consolidated with factual information from a case study of the 2014 municipal elections in Montpellier. Bulletin of Sociological Methodology/Bulletin de Méthodologie Sociologique, 162(1), 13–34.
Baddeley, A., Turner, R., Møller, J., & Hazelton, M. (2005). Residual analysis for spatial point processes (with discussion). Journal of the Royal Statistical Society Series B: Statistical Methodology, 67(5), 617–666.
Berigan, L. A., Aulicky, C. S., Teige, E. C., Sullins, D. S., Fricke, K. A., Reitz, J. H., ... others (2024). Lesser prairie-chicken dispersal after translocation: Implications for restoration and population connectivity. Ecology and Evolution, 14(2), e10871.
Bevilacqua, M., & Gaetan, C. (2015). Comparing composite likelihood methods based on pairs for spatial Gaussian random fields. Statistics and Computing, 25, 877–892.
Börger, L., Dalziel, B. D., & Fryxell, J. M. (2008). Are there general mechanisms of animal home range behaviour? A review and prospects for future research. Ecology Letters, 11(6), 637–650.
Botev, Z. I. (2017). The normal law under linear restrictions: simulation and estimation via minimax tilting. Journal of the Royal Statistical Society Series B: Statistical Methodology, 79(1), 125–148.
Buderman, F. E., Hanks, E. M., Ruiz-Gutierrez, V., Shull, M., Murphy, R. K., & Miller, D. A. (2025). Integrated movement models for individual tracking and species distribution data. Methods in Ecology and Evolution, 16(2), 345–361.
Cao, J., Genton, M. G., Keyes, D. E., & Turkiyyah, G. M. (2022). tlrmvnmvt: Computing high-dimensional multivariate normal and Student-t probabilities with low-rank methods in R. Journal of Statistical Software, 101, 1–25.
Caragea, P. C., & Smith, R. L. (2007). Asymptotic properties of computationally efficient alternative estimators for a class of multivariate normal models. Journal of Multivariate Analysis, 98(7), 1417–1440.
Chandler, R. B., Crawford, D. A., Garrison, E. P., Miller, K. V., & Cherry, M. J. (2022). Modeling abundance, distribution, movement and space use with camera and telemetry data. Ecology, 103(10), e3583.
Cox, D. R., & Reid, N. (2004). A note on pseudolikelihood constructed from marginal densities.
Biometrika, 91(3), 729–737.
Daley, D. J., & Vere-Jones, D. (2006). An introduction to the theory of point processes: Volume I: Elementary theory and methods. Springer Science & Business Media.
Daskalakis, C., Kamath, G., & Tzamos, C. (2015). On the structure, covering, and learning of Poisson multinomial distributions. In 2015 IEEE 56th Annual Symposium on Foundations of Computer Science (pp. 1203–1217).
Dean, T. A., Singh, S. S., Jasra, A., & Peters, G. W. (2014). Parameter estimation for hidden Markov models with intractable likelihoods. Scandinavian Journal of Statistics, 41(4), 970–987.
Diggle, P. J., Kaimi, I., & Abellana, R. (2010). Partial-likelihood analysis of spatio-temporal point-process data. Biometrics, 66(2), 347–354.
Doucet, A., De Freitas, N., Gordon, N. J., et al. (2001). Sequential Monte Carlo methods in practice (Vol. 1) (No. 2). Springer.
Drovandi, C., & Frazier, D. T. (2022). A comparison of likelihood-free methods with and without summary statistics. Statistics and Computing, 32(3), 42.
Eisaguirre, J. M., Booms, T. L., Barger, C. P., Goddard, S. D., & Breed, G. A. (2021). Multistate Ornstein–Uhlenbeck approach for practical estimation of movement and resource selection around central places. Methods in Ecology and Evolution, 12(3), 507–519.
Fleming, C. H., Calabrese, J. M., Mueller, T., Olson, K. A., Leimgruber, P., & Fagan, W. F. (2014). From fine-scale foraging to home ranges: a semivariance approach to identifying movement modes across spatiotemporal scales. The American Naturalist, 183(5), E154–E167.
Gallavotti, G. (2013). Statistical mechanics: A short treatise. Springer Science & Business Media.
Gardiner, C. (2009). Stochastic methods (Vol. 4). Springer Berlin Heidelberg.
Genz, A., & Bretz, F. (2009). Computation of multivariate normal and t probabilities (Vol. 195). Springer Science & Business Media.
Glennie, R., Adam, T., Leos-Barajas, V., Michelot, T., Photopoulou, T., & McClintock, B. T. (2023). Hidden Markov models: Pitfalls and opportunities in ecology. Methods in Ecology and Evolution, 14(1), 43–56.
Godambe, V. P. (1960). An optimum property of regular maximum likelihood estimation. The Annals of Mathematical Statistics, 31(4), 1208–1211.
Godambe, V. P. (1991). Estimating functions. Oxford University Press.
Goodman, L. A. (1953). Ecological regressions and behavior of individuals. American Sociological Review, 18(6).
Goodman, L. A. (1959). Some alternatives to ecological correlation. American Journal of Sociology, 64(6), 610–625.
Grimm, V., & Railsback, S. F. (2013). Individual-based modeling and ecology. Princeton University Press.
Gurarie, E., Fleming, C. H., Fagan, W. F., Laidre, K. L., Hernández-Pliego, J., & Ovaskainen, O. (2017). Correlated velocity models as a fundamental unit of animal movement: synthesis and applications. Movement Ecology, 5, 1–18.
Gurvits, L., & Ledoux, J. (2005). Markov property for a function of a Markov chain: A linear algebra approach. Linear Algebra and its Applications, 404, 85–117.
Hamilton, J. D. (2020). Time series analysis. Princeton University Press.
Hefley, T. J., & Hooten, M. B. (2016). Hierarchical species distribution models. Current Landscape Ecology Reports, 1, 87–97.
Hooten, M. B., Johnson, D. S., McClintock, B. T., & Morales, J. M. (2017). Animal movement: statistical models for telemetry data. CRC Press.
Illian, J., Penttinen, A., Stoyan, H., & Stoyan, D. (2008). Statistical analysis and modelling
of spatial point patterns. John Wiley & Sons.
Kallenberg, O. (2017). Random measures, theory and applications. Springer.
Kawamura, K. (1973). The structure of bivariate Poisson distribution. In Kodai Mathematical Seminar Reports (Vol. 25, pp. 246–256).
Kemeny, J. G., Snell, J. L., et al. (1969). Finite Markov chains (Vol. 26). Van Nostrand, Princeton, NJ.
Kent, J. T. (1982). Robust properties of likelihood ratio tests. Biometrika, 69(1), 19–27.
King, G. (2013). A solution to the ecological inference problem: Reconstructing individual behavior from aggregate data. Princeton University Press.
King, G., Tanner, M. A., & Rosen, O. (2004). Ecological inference: New methodological strategies. Cambridge University Press.
Knape, J., & De Valpine, P. (2012). Fitting complex population models by combining particle filters with Markov chain Monte Carlo. Ecology, 93(2), 256–263.
Lenzi, A., Bessac, J., Rudi, J., & Stein, M. L. (2023). Neural networks for parameter estimation in intractable models. Computational Statistics & Data Analysis, 185, 107762.
Liggett, T. M. (1985). Interacting particle systems (Vol. 2). Springer.
Lin, Z., Wang, Y., & Hong, Y. (2023). The computing of the Poisson multinomial distribution and applications in ecological inference and machine learning. Computational Statistics, 38(4), 1851–1877.
Lindsay, B. G. (1988). Composite likelihood methods. Contemporary Mathematics, 80(1), 221–239.
Martin-Löf, A. (1976). Limit theorems for the motion of a Poisson system of independent Markovian particles with high density. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, 34(3), 205–223.
Møller, J., Syversveen, A. R., & Waagepetersen, R. P. (1998). Log Gaussian Cox processes. Scandinavian Journal of Statistics, 25(3), 451–482.
Møller, J., & Waagepetersen, R. P. (2003). Statistical inference and simulation for spatial point processes. CRC Press.
Padoan, S. A., Ribatet, M., & Sisson, S. A. (2010). Likelihood-based inference for max-stable processes. Journal of the American Statistical Association, 105(489), 263–277.
Pathria, R. K. (2017). Statistical Mechanics: International Series of Monographs in Natural Philosophy (Vol. 45). Elsevier.
Pauli, F., Racugno, W., & Ventura, L. (2011). Bayesian composite marginal likelihoods. Statistica Sinica, 149–164.
Potts, J. R., & Börger, L. (2023). How to scale up from animal movement decisions to spatiotemporal patterns: An approach via step selection. Journal of Animal Ecology, 92(1), 16–29.
Powell, R. A., & Mitchell, M. S. (2012). What is a home range? Journal of Mammalogy, 93(4), 948–958.
Quiroga, M. M. (2015). Debut y despedida. La derecha chilena en las elecciones presidenciales 2013 [Debut and farewell: The Chilean right in the 2013 presidential elections]. Revista de Estudios Políticos, (168), 261–290.
Rabiner, L., & Juang, B. (1986). An introduction to hidden Markov models. IEEE ASSP Magazine, 3(1), 4–16.
Ribatet, M., Cooley, D., & Davison, A. C. (2012). Bayesian inference from composite likelihoods, with an application to spatial extremes. Statistica Sinica, 813–845.
Roques, L., Allard, D., & Soubeyrand, S. (2022). Spatial statistics and stochastic partial differential equations: A mechanistic viewpoint. Spatial Statistics, 50, 100591.
Rosen, O., Jiang, W., King, G., & Tanner, M. A. (2001). Bayesian and frequentist inference for ecological inference: The R×C case. Statistica Neerlandica, 55(2), 134–156.
Ross, S. M. (1995). Stochastic processes. John Wiley & Sons.
Schuessler, A. A. (1999). Ecological inference. Proceedings of the National Academy of
Sciences, 96(19), 10578–10581.
Seber, G. A. (1986). A review of estimating animal abundance. Biometrics, 267–292.
Sisson, S. A., Fan, Y., & Beaumont, M. (2018). Handbook of approximate Bayesian computation. CRC Press.
Smyth, P., Heckerman, D., & Jordan, M. I. (1997). Probabilistic independence networks for hidden Markov probability models. Neural Computation, 9(2), 227–269.
Soubeyrand, S., & Roques, L. (2014). Parameter estimation for reaction-diffusion models of biological invasions. Population Ecology, 56, 427–434.
Sullivan, B. L., Wood, C. L., Iliff, M. J., Bonney, R. E., Fink, D., & Kelling, S. (2009). eBird: A citizen-based bird observation network in the biological sciences. Biological Conservation, 142(10), 2282–2292.
Syring, N., & Martin, R. (2019). Calibrating general posterior credible regions. Biometrika, 106(2), 479–486.
Thorson, J., & Kristensen, K. (2024). Spatio-Temporal Models for Ecologists. CRC Press.
Thurman, A. L., & Zhu, J. (2014). Variable selection for spatial Poisson point processes via a regularization method. Statistical Methodology, 17, 113–125.
Turchin, P. (1998). Quantitative analysis of movement: measuring and modeling population redistribution in animals and plants. Massachusetts, United States: Sinauer Associates.
Uhlenbeck, G. E., & Ornstein, L. S. (1930). On the theory of the Brownian motion. Physical Review, 36(5), 823.
Valiant, G., & Valiant, P. (2010). A CLT and tight lower bounds for estimating entropy. In Electronic Colloquium on Computational Complexity (ECCC) (Vol. 17, p. 9).
Valiant, G., & Valiant, P. (2011). Estimating the unseen: an n/log(n)-sample estimator for entropy and support size, shown optimal via new CLTs. In Proceedings of the Forty-Third Annual ACM Symposium on Theory of Computing (pp. 685–694).
Varin, C., Reid, N., & Firth, D. (2011). An overview of composite likelihood methods. Statistica Sinica, 5–42.
Varin, C., & Vidoni, P. (2006). Pairwise likelihood inference for ordinal categorical time series. Computational Statistics & Data Analysis, 51(4), 2365–2373.
Wikle, C. K. (2003). Hierarchical Bayesian models for predicting the spread of ecological processes. Ecology, 84(6), 1382–1394.
Zamberletti, P., Papaïx, J., Gabriel, E., & Opitz, T. (2022). Understanding complex spatial dynamics from mechanistic models through spatio-temporal point processes. Ecography, 2022(5), e05956.
Zammit-Mangion, A., Sainsbury-Dale, M., & Huser, R. (2024). Neural methods for amortized inference. Annual Review of Statistics and Its Application, 12.
Zucchini, W., & MacDonald, I. L. (2009). Hidden Markov models for time series: an introduction using R. Chapman and Hall/CRC.
arXiv:2505.20153v1 [math.ST] 26 May 2025

Submitted to the Annals of Statistics

SIMPLE, EFFICIENT ENTROPY ESTIMATION USING HARMONIC NUMBERS

BY OCTAVIO CÉSAR MESNER (Brown School, Washington University in St. Louis; mesner@wustl.edu)

The estimation of entropy, a fundamental measure of uncertainty, is central to diverse data applications. For discrete random variables, however, efficient entropy estimation presents challenges, particularly when the cardinality of the support set is large relative to the available sample size. This is because, without other assumptions, there may be insufficient data to adequately characterize a probability mass function. Further complications stem from the dependence among transformations of empirical frequencies within the sample.

This paper demonstrates that a simple entropy estimator based on the harmonic number function achieves asymptotic efficiency for discrete random variables with tail probabilities satisfying $p_j = o(j^{-2})$ as $j \to \infty$. This result renders statistical inference newly feasible for a broad class of distributions. Further, the proposed estimator has superior mean squared error bounds compared to the classical plug-in estimator, while retaining its computational simplicity, offering practical and theoretical advantages.

1. Introduction. Originally developed by Shannon in A Mathematical Theory of Communication [40], entropy was intended to characterize the expected amount of information contained in a message (or random variable). The informational value of a particular message corresponds to its rarity: a message indicating a less likely event contains more information than a message on a common event. For example, a news alert that a solar eclipse will be visible in Saint Louis tomorrow is more informative than an alert about no solar eclipse. Researchers use entropy estimation to quantify uncertainty and information in data, enabling them to glean insight from data in various settings.
Unfortunately, statisticians have only shown efficient estimation in settings where the number of sample points is large in comparison to the support size of the random variable. Currently, no results on the asymptotic distribution of entropy estimators exist for discrete random variables with infinite support. This paper bounds the mean squared error of a mathematically simple and computationally inexpensive entropy estimator based on the harmonic number function, and shows that the estimator is efficient for a large class of discrete random variables with countable support, namely those with probability mass functions $p$ such that $p_j = o(j^{-2})$. This development may be particularly useful for providing Wald-style hypothesis tests and confidence intervals for entropy and other information-theoretic metrics such as mutual information.

1.1. Background. Let $X$ be a discrete random variable with probability mass function (PMF) $p$ over the natural numbers, $\mathbb{N} := \{1, 2, 3, \dots\}$, such that $p_j := p(j) = \mathbb{P}(X = j)$.¹ Without loss of generality, assume that $p$ is monotonically decreasing. Entropy maps $X$ to a non-negative real number quantifying its expected degree of uncertainty, with greater values corresponding to greater uncertainty. The informational value of $X = j$ is quantified as $-\log p_j$, which increases as $p_j$ decreases.

DEFINITION 1.1. The entropy of $X$ is

$$H(X) := -\mathbb{E}[\log p(X)] = -\sum_{j \in \mathbb{N}} p_j \log p_j \tag{1.1}$$

where $0 \log 0 := 0$ and $\log$ is the natural logarithm.²

Keywords and phrases: Entropy estimation, Information Theory, Central Limit Theorem, Efficient Entropy Estimation.
¹ If $X$ is finite, then eventually $p_j = 0$ for all $j$ beyond some value.

In most settings, $p$ is
unknown. As entropy is a functional whose domain is the space of PMFs, in order to accurately estimate the entropy of $X$, one must typically estimate $p_j$, or some transformation of it, for each $j \in S := \{j \in \mathbb{N} : p_j > 0\}$, the support of $p$. The earliest method for entropy estimation was the plug-in (or maximum likelihood) estimator. While universally consistent [1] and asymptotically efficient for random variables with finite support [4], it is considerably biased [7, 14], especially with small samples on random variables with large support [28], due in part to support values not observed in the sample used for estimation. Much of the recent development within the frequentist paradigm on entropy estimation directly addresses this issue [21, 31, 41, 47]. Interestingly, Bayesian approaches to entropy estimation can potentially circumvent the problem of missing support points with a well-chosen prior [2, 30, 46]. There is a rich body of literature on improved methods for entropy estimation, briefly reviewed in Section 2. Most of these methods focus on the bounded-support setting, with recent advances for cases where the sample size is small relative to the support size [21, 47]. However, there has been some progress on entropy estimation for discrete random variables with unbounded support as well [2].

In practice, entropy estimation is common in statistics, machine learning, data science, and many other disciplines involving data. In fact, introductory courses in machine learning [9, 29] often discuss the topic early, suggesting its importance to the field. For example, entropy is commonly used for building decision trees [35–37] and feature selection [24, 32, 48]. In some cases, such as the Chow–Liu algorithm [11] for building graphical dependence trees, mutual information, closely related to entropy, is the only suitable metric.
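For concreteness, the plug-in estimator mentioned above is simply the entropy of the empirical PMF. A minimal sketch (the function name is ours, not the paper's):

```python
from collections import Counter
import math

def plugin_entropy(sample):
    """Plug-in (maximum-likelihood) entropy estimate in nats: the entropy
    of the empirical PMF. Known to be biased downward for small n."""
    n = len(sample)
    return -sum((c / n) * math.log(c / n) for c in Counter(sample).values())

# A fair four-sided die observed exactly uniformly gives entropy log(4).
sample = [1, 2, 3, 4] * 25
assert abs(plugin_entropy(sample) - math.log(4)) < 1e-12
```

Support values that never appear in the sample contribute nothing to this estimate, which is the source of its downward bias discussed above.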
However, the concept arises in other areas as well, from probability theory [13] to neuroscience [12] to genetics [18] to physics [42] to algorithmic fairness [20].

1.2. Main Results. To introduce the entropy estimator proposed here, define the $m$th harmonic number³ as $J(m) := \sum_{k=1}^{m} \frac{1}{k}$, with $J(0) := 0$. Let $X_n := (X^{(1)}, \ldots, X^{(n)})$ be an independent sample generated from $p$. Define the sample histogram⁴ as the sequence $m(X_n) := (m_j \in \mathbb{N} : j \in \mathbb{N})$, where $m_j = m_j(X_n) := \sum_{i=1}^{n} I(X^{(i)} = j)$ is the number of sample points in $X_n$ equal to $j \in \mathbb{N}$. To simplify notation, let $m^{(i)} := m_{X^{(i)}}$. This paper estimates the entropy of $X$ as

(1.2)    $\hat{H}(X_n) = J(n) - \frac{1}{n} \sum_{i=1}^{n} J(m^{(i)})$.

Other authors have proposed similar estimators using the digamma function in place of harmonic numbers. The estimator proposed here is equal to $\hat{H}_\psi(X_n) := \psi(n+1) - \frac{1}{n} \sum_{i=1}^{n} \psi(m^{(i)} + 1)$, where $\psi(z) := \frac{d}{dz} \log \Gamma(z)$ is the digamma function.⁵

² Shannon [40] defines entropy using $\log_2$.
³ Many books and papers represent the $m$th harmonic number as $H_m$; to avoid confusion with notation for entropy, this paper uses $J(m)$.
⁴ A minimal sufficient statistic for entropy estimation [31, Prop. 11].
⁵ It is straightforward to show that $\psi(z) = J(z-1) - \gamma$ for $z \in \mathbb{N}$, where $\gamma$ is the Euler-Mascheroni constant.

Grassberger [17] and Schürmann [38] develop estimators similar but not equal to $\hat{H}_\psi$, with variations using $\psi(m^{(i)})$. Subsequently, Zhang [49] develops another entropy estimator equal [39] to those of Grassberger [17] and Schürmann [38]. Wolpert and Wolf [46], working from a Bayesian paradigm, propose another similar estimator, which is equal to $\hat{H}_\psi$ if prior offsets are all set to zero. Section 2.1 provides more background on previous estimators.

The first main result bounds the estimator's mean squared error, combining bounds on the estimator's bias and variance, from Theorems 3.1 and 3.3 respectively, characterized by a PMF's tail behavior.

THEOREM 1.2 (Mean Squared Error). Assume $X$ is a discrete random variable with PMF $p$, and $X_n := (X^{(1)}, \ldots, X^{(n)})$ is an independent random sample from $p$. If $p_j = O(j^{-1/\alpha})$ with $\alpha \in [0, 1)$, then

(1.3)    $\mathbb{E}\big[\hat{H}(X_n) - H(X)\big]^2 = O\Big(\frac{1}{n^{2(1-\alpha)}}\Big) + O\Big(\frac{1}{n}\Big)$.

We define $\alpha := 0$ if $p_j = o(j^{-1/\alpha})$ for all $\alpha \in (0, 1)$. PMFs with exponential or Poisson tail decay, or with finite support, are examples where $\alpha = 0$. This result expands mean squared error analysis of entropy estimators to a broad range of PMFs, namely any PMF bounded by a power law. Moreover, for all $p_j = o(j^{-2})$, this estimator's mean squared error remains $O(1/n)$.

The second main result establishes the estimator's limiting distribution under particular tail conditions.

THEOREM 1.3 (Central Limit). Assume $X$ is a discrete random variable with PMF $p$, and $X_n := (X^{(1)}, \ldots, X^{(n)})$ is an independent random sample from $p$. If $p_j = o(j^{-2})$, then

(1.4)    $\sqrt{n}\big(\hat{H}(X_n) - H(X)\big) \rightsquigarrow \mathrm{Gaussian}(0, \mathrm{Var}[\log p(X)])$ as $n \to \infty$,

where $\rightsquigarrow$ indicates convergence in distribution.

Other estimators, including the plug-in [1] and that of Zhang [49], bound their estimator variance by $O\big(\frac{\log n}{n}\big)$ under similar conditions, rendering asymptotic efficiency beyond reach. In contrast, this paper shows that its estimator's variance is $\mathrm{Var}[\log p_X]/n + o(1/n)$ for $p_j = o(j^{-2})$; see Theorem 3.3.

2. Previous Estimation Techniques. This section provides a concise overview of the literature on discrete entropy estimation, split into frequentist and Bayesian methods. For a more complete review of entropy estimators, see reference [33].

2.1. Frequentist Estimators.
The plug-in entropy estimator, defined as

(2.1)    $\hat{H}_{\mathrm{Plug}}(X_n) = -\sum_{j \in \mathbb{N}} \hat{p}_j \log \hat{p}_j$

where $\hat{p}_j = m_j / n$, is likely the most used entropy estimator to date, probably because of its simplicity. Basharin [4] shows that if $p$ has finite support, $s := |\{j \in \mathbb{N} : p_j > 0\}| < \infty$, then the plug-in estimator is asymptotically Gaussian. For cases when $s = \infty$, Antos and Kontoyiannis [1] show that the plug-in estimator is consistent for all PMFs and determine the root mean squared error convergence rates for power-law PMFs, i.e. $p_j / j^{-1/\alpha} \to 1$, written $p_j \asymp j^{-1/\alpha}$.

THEOREM 2.1. [1, Thm. 7] If $p_j \asymp j^{-1/\alpha}$ for $j \in \mathbb{N}$ and $\alpha \in (\frac{1}{2}, 1)$, then

(2.2)    $n^{-(1-\alpha)} \lesssim \mathbb{E}\big|\hat{H}_{\mathrm{Plug}}(X_n) - H(X)\big| \leq \sqrt{\mathbb{E}\big[\hat{H}_{\mathrm{Plug}}(X_n) - H(X)\big]^2} \lesssim n^{-(1-\alpha)}$

and, for $\alpha \in (0, 1/2]$,

(2.3)    $\sqrt{\mathbb{E}\big[\hat{H}_{\mathrm{Plug}}(X_n) - H(X)\big]^2} \lesssim n^{-1/2} \log n$.

Theorem 2.1 eliminates the possibility of the plug-in estimator being asymptotically efficient for PMFs that converge more slowly than $p_j \asymp j^{-2}$. Moreover, reference [1] also shows that some PMFs may converge arbitrarily slowly, making it impossible to provide a general convergence bound for all discrete PMFs using the plug-in entropy estimator.

The plug-in estimator suffers from bias which grows with support size: using a Taylor expansion, it is easy to see that if $X$ takes $s$ possible distinct values, then

(2.4)    $\mathbb{E}\Big[-\sum_j \hat{p}_j \log \hat{p}_j\Big] = H(X) - \frac{s-1}{2n} + O(n^{-2})$.

To adjust for this bias, Miller [28] proposes adding $\frac{s-1}{2n}$ to the plug-in estimator. When the support size $s$ is unknown, as is often the case, some recent methods [21, 47] use a variation of this adjustment, estimating $\widehat{-p_j \log p_j} = -\hat{p}_j \log \hat{p}_j - \frac{1}{2n}$ for some values of $j$ appearing in the sample.

Paninski [31] develops the Best Upper Bound Estimator (BUB) by minimizing the squared bias and variance of a linear function of the sequence $(|\{j \in \mathbb{N} : m_j = k\}| : k \in \mathbb{N})$, where the $k$th entry is the number of distinct values in the sample occurring $k$ times. BUB requires an upper bound on the support of $p$ to be known; Vu [43] suggests that its performance deteriorates when this bound is misspecified. Importantly, Paninski [31, Prop. 8] also proves that no unbiased estimators of entropy exist.

Valiant and Valiant [41] address the issue of estimating the probability of unobserved support points, $p_j$ for $j \notin X_n$. As such, the paper attempts to estimate a histogram representing the entire support of the PMF, including support points not appearing in the sample. Specifically, reference [41] develops a PMF estimate whose relative earth mover distance [41, Def. 6] to the true PMF is bounded under some conditions, including finite support, but does not require a specified bound for the support size $s$. The paper estimates entropy using this PMF estimate.

Again, in settings where $s$ is finite but possibly unknown, both Jiao and others [21] and Wu and others [47] take a two-step approach, first estimating the size of $p_j$, then using a bias-adjusted plug-in entropy estimator for large values of $p_j$, where $p_j \gg \frac{\log n}{n}$, and a polynomial approximation otherwise. Both papers [21, 47] provide the minimax mean squared error for entropy estimators over PMFs of size $s$ (and minimax estimators):

THEOREM 2.2. If $n \gtrsim \frac{s}{\log s}$ and $\mathcal{P}_s$ is the set of PMFs of size $s$, then

(2.5)    $\inf_{\hat{H}(X_n)} \sup_{X \in \mathcal{P}_s} \mathbb{E}\big[\hat{H}(X_n) - H(X)\big]^2 \asymp \frac{s^2}{(n \log n)^2} + \frac{(\log s)^2}{n}$.
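As a concrete sketch (illustrative code added here, not from the paper), the plug-in estimator (2.1) and its Miller-corrected variant can be written as:

```python
import math
from collections import Counter

def plug_in_entropy(sample):
    """Plug-in estimator (2.1): entropy of the empirical PMF p_hat_j = m_j / n."""
    n = len(sample)
    return -sum((m / n) * math.log(m / n) for m in Counter(sample).values())

def miller_entropy(sample, support_size):
    """Miller-corrected plug-in: add (s - 1) / (2n) to offset the leading bias in (2.4)."""
    return plug_in_entropy(sample) + (support_size - 1) / (2 * len(sample))

sample = [1, 1, 1, 2, 2, 3]          # empirical PMF (1/2, 1/3, 1/6)
print(plug_in_entropy(sample))
print(miller_entropy(sample, support_size=3))
```

Note that the correction requires the support size $s$, which is exactly the quantity that is unknown in the settings targeted by the methods above.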
Taking a different approach from the other methods in this section, Grassberger [17] develops several estimators by approximating $\mathbb{E}[m_j] \log \mathbb{E}[m_j] \approx m_j \phi(m_j)$, where $\phi$ is determined to be the digamma function, $\psi(z) := \frac{d}{dz} \log \Gamma(z)$, so that:

(2.6)    $\hat{H}_G(X_n) := \log n - \frac{1}{n} \sum_j m_j \psi(m_j)$.

Grassberger [17] shows empirically that this estimator is more accurate than the Miller-corrected plug-in estimator, with the further advantage that $s$ need not be known.

Zhang [49] develops an entropy estimator for random variables with countable support whose mean squared error is $O(1/n)$ under certain conditions characterized by the expected missing mass, $\mathbb{E}[(1 - p_X)^k]$. This estimator is based on a representation of entropy using the Taylor expansion of the natural logarithm:

(2.7)    $H(X) = \sum_{k=1}^{\infty} \frac{\mathbb{E}[(1 - p_X)^k]}{k}$.

Zhang [49] estimates entropy by plugging in an estimate of the missing mass. Schürmann [39] demonstrates that Zhang's estimator [49] is equal to that of Grassberger [17]. As previously mentioned, the estimator proposed here is very similar to those of both Grassberger [17] and Zhang [49].

2.2. Bayesian Estimators. Bayesian entropy estimation assumes that the PMF itself is a random sequence, $p = (p_j : j \in \mathbb{N})$, generated by some prior process.⁶ Here, entropy too is a random variable, as it depends on $p$. Generally, Bayesian estimation methods compute the Bayes estimator for entropy as

(2.8)    $\hat{H}_{\mathrm{Bayes}}(X_n) := \int H(p) \, P(p \mid X_n) \, dp$

where $H(p) := -\sum_j p_j \log p_j$, and $P(p \mid X_n)$ is the posterior distribution of the PMF, $p$, given the sample, $X_n$, derived using the prior and Bayes' Theorem.

Wolpert and Wolf [46] apply a Dirichlet prior with parameter $\delta := (\delta_1, \ldots, \delta_s)$ to the PMF, yielding a Bayes estimator for entropy of

(2.9)    $\hat{H}_{WW}(X_n) := \psi\Big(1 + \sum_{j=1}^{s} (m_j + \delta_j)\Big) - \sum_{j=1}^{s} \frac{m_j + \delta_j}{\sum_{j'=1}^{s} (m_{j'} + \delta_{j'})} \, \psi(1 + m_j + \delta_j)$.

This estimator is equal to the proposed estimator if $\delta = 0$. For entropy estimation, especially when $n \ll s$, this model will rely largely on the prior for unobserved support points. Nemenman and others [30] address this issue by giving $p$ a Dirichlet-mixture prior. This choice reduces the influence of the prior on the resulting estimate compared to a Dirichlet prior. Jiao and others [22] show that applying a Dirichlet prior generally does not improve consistency from a frequentist paradigm.

Moving to the setting where $p$ can potentially have infinite support, Archer and others [2] expand on reference [30], using a Pitman-Yor process [34] mixture as a prior. Before this, most theoretical work on entropy estimation, Bayesian and otherwise, focused on PMFs with finite support. A challenge encountered with the use of a Dirichlet (or related) prior, as $s$ increases, is the need to integrate over its $s$-dimensional parameter space to compute the Bayes estimator. Use of the Pitman-Yor process prior, a two-parameter stochastic process, greatly simplifies this computation. However, the authors note that the resulting integral may not be computationally tractable, and they provide a heuristic for its estimation. The authors establish that the estimator is consistent by showing it converges in probability to the plug-in estimator under mild conditions, which was shown to be consistent by reference [1].

⁶ If $p$ has finite support, $p$ can be thought of as a random vector generated by a prior distribution.

3. Analysis of Proposed Estimator. This section details the techniques for establishing the main results from Section 1.2.
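For reference, the proposed estimator (1.2) takes only a few lines of code; the sketch below (an illustration, not the authors' implementation) uses harmonic numbers directly:

```python
from collections import Counter

def harmonic(m):
    """J(m): the m-th harmonic number, with J(0) = 0."""
    return sum(1.0 / k for k in range(1, m + 1))

def proposed_entropy(sample):
    """Estimator (1.2): J(n) minus the average of J(m_i) over sample points,
    where m_i is the multiplicity of the i-th point's value in the sample."""
    n = len(sample)
    counts = Counter(sample)
    return harmonic(n) - sum(c * harmonic(c) for c in counts.values()) / n

# With sample (1, 1, 2, 2): n = 4, every point has multiplicity 2,
# so the estimate is J(4) - J(2) = 1/3 + 1/4.
print(proposed_entropy([1, 1, 2, 2]))
```

By footnote 5, replacing $J(\cdot)$ with $\psi(\cdot + 1)$ (e.g. via `scipy.special.digamma`) yields the numerically identical $\hat{H}_\psi$.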
This paper uses the harmonic number function to estimate entropy because of its relationship to the natural logarithm and the exact algebraic simplification, via mathematical induction, of expressions involving harmonic numbers and binomial coefficients, as Knuth [25, Sec. 1.2.7] shows. The primary reason for avoiding the digamma function, despite its frequent use in the entropy estimation literature, is that the analysis then requires no mathematics more complex than harmonic numbers.

3.1. Bias. The algebraic approach taken here links the estimator's expected value with its theoretical target through the Taylor approximation of the natural logarithm. In doing so, the Taylor remainder coincides with the estimator's bias. Claim 1 underscores the relationship that makes the harmonic number function ideal for entropy estimation.

CLAIM 1. Let $M$ be a binomial random variable of $n \in \mathbb{N}$ trials, each with probability $p \in (0, 1]$. Then

(3.1)    $J(n) - \mathbb{E}[J(M)] = \sum_{m=1}^{n} \frac{(1-p)^m}{m} = -\log p - \sum_{m=n+1}^{\infty} \frac{(1-p)^m}{m}$.

PROOF. Proceeding with mathematical induction, let $S_n := \sum_{m=0}^{n} J(m) \binom{n}{m} p^m (1-p)^{n-m}$ represent the expected value for $n$ trials. Using the identity $\binom{n+1}{m} = \binom{n}{m} + \binom{n}{m-1}$, the binomial theorem, and $J(m+1) = J(m) + \frac{1}{m+1}$, with re-indexing,

(3.2)    $S_{n+1} = \sum_{m=0}^{n} J(m) \binom{n}{m} p^m (1-p)^{n+1-m} + \sum_{m=0}^{n} J(m+1) \binom{n}{m} p^{m+1} (1-p)^{n-m}$

(3.3)    $= S_n + \frac{1}{n+1} \sum_{m=1}^{n+1} \binom{n+1}{m} p^m (1-p)^{n+1-m} = S_n + \frac{1 - (1-p)^{n+1}}{n+1}$.

Because $S_1 = p$, the statement is true for $n = 1$. These together are sufficient to show that

(3.4)    $\mathbb{E}[J(M)] = S_n = J(n) - \sum_{m=1}^{n} \frac{(1-p)^m}{m}$.

Recall that the Taylor series for the natural logarithm is $\log p = -\sum_{m=1}^{\infty} \frac{(1-p)^m}{m}$, which converges for $p \in (0, 1]$, to conclude the proof.

Many papers have indicated that $S \setminus X_n$, the support points of $X$ not in the sample, contribute to estimator bias [21, 41, 44]. A distinctive feature of this estimator is that its bias can be expressed using $P(X^{(n+1)} \in S \setminus X_n) = \mathbb{E}[(1 - p_X)^n]$. Specifically, support points not included in the sample contribute to the estimator's negative bias: for each $j \in S \setminus X_n$, $\hat{H}(X_n)$ will miss an estimate for $-p_j \log p_j$. As a result, $\hat{H}(X_n)$ will typically underestimate $H(X)$ when the sample misses support points. Building on the prior claim, Theorem 3.1 provides the estimator's exact bias and its bound.

THEOREM 3.1 (Bias). Let $X_n := (X^{(1)}, \ldots, X^{(n)})$ be an independent, identically distributed (i.i.d.) sample from $p$. If $H(X) < \infty$, then

(3.5)    $\mathbb{E}\big[\hat{H}(X_n)\big] - H(X) = -\sum_{k=n+1}^{\infty} \frac{\mathbb{E}[(1-p_X)^k]}{k}$.

If $p_j = O(j^{-1/\alpha})$ for $\alpha \in [0, 1)$, then

(3.6)    $\mathbb{E}\big[\hat{H}(X_n)\big] - H(X) = O\Big(\frac{1}{n^{1-\alpha}}\Big)$.

PROOF. Using Claim 1,

(3.7)    $\mathbb{E}\big[\hat{H}(X_n)\big] = J(n) - \mathbb{E}[\mathbb{E}[J(m_X) \mid X]] = -\mathbb{E}\Big[\log p_X + \sum_{k=n+1}^{\infty} \frac{(1-p_X)^k}{k}\Big]$

(3.8)    $= H(X) - \sum_{k=n+1}^{\infty} \frac{\mathbb{E}[(1-p_X)^k]}{k}$.

Using Lemma 3.2, for large $n$,

(3.9)    $\sum_{k=n+1}^{\infty} \frac{\mathbb{E}[(1-p_X)^k]}{k} \leq \sum_{k=n+1}^{\infty} \frac{M}{k^{2-\alpha}} \leq M \int_n^{\infty} x^{\alpha-2} \, dx = O\Big(\frac{1}{n^{1-\alpha}}\Big)$.

Lemma 3.2 provides bounds for $\mathbb{E}[(1-p_X)^k]$ and related sums as $k \to \infty$. See Appendix A for the proof.

LEMMA 3.2. If $p_j = O(j^{-1/\alpha})$ for $\alpha \in [0, 1]$, and $m \in \mathbb{N}$, then, as $k \to \infty$,

(3.10)    $\sum_j p_j^m (1-p_j)^k = O\Big(\frac{1}{k^{m-\alpha}}\Big)$.

Methods for estimating the missing mass, the probability that a new sample point is not in the current sample, $P(X^{(n+1)} \notin X_n)$, famously date back to Alan Turing and I. J. Good [16]. Since then, other papers [5, 6, 15, 19, 23] characterizing the missing mass and other related quantities have followed. While the bound in Lemma 3.2 does not rely on this literature, it is possible to use these ideas for a more nuanced characterization of this estimator's bias. For example, reference [6, Theorem 1] can be used to show exponential decay in bias when $n \geq s$.
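Claim 1 can be checked numerically for small $n$ by computing $\mathbb{E}[J(M)]$ exactly over the Binomial$(n, p)$ PMF; the sketch below (illustrative, not from the paper) verifies the identity $J(n) - \mathbb{E}[J(M)] = \sum_{m=1}^{n} (1-p)^m / m$:

```python
import math

def harmonic(m):
    # J(m), with J(0) = 0 (the empty sum).
    return sum(1.0 / k for k in range(1, m + 1))

def claim1_lhs(n, p):
    """Exact J(n) - E[J(M)] for M ~ Binomial(n, p), summing over the binomial PMF."""
    e_jm = sum(
        math.comb(n, m) * p**m * (1 - p)**(n - m) * harmonic(m)
        for m in range(n + 1)
    )
    return harmonic(n) - e_jm

n, p = 12, 0.3
lhs = claim1_lhs(n, p)
rhs = sum((1 - p)**m / m for m in range(1, n + 1))  # the partial sum in (3.1)
print(lhs - rhs)  # agrees to floating-point precision
```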
Moreover, using these ideas, one could project the values of $\mathbb{E}[(1-p_X)^k]$ for $k \geq n+1$ to augment the estimator and reduce its bias, but this falls beyond the scope of this paper, whose goal is to build and analyze a simple entropy estimator.

3.2. Variance. Similar to the approach for characterizing the bias, this paper begins by using mathematical induction to characterize the variance as well. Because the estimator can be written as an average of local entropy estimates at each sample point, it is straightforward to decompose the estimator variance into the variance of a local estimate and the covariance between the local estimates of any two sample points. From here, mathematical induction allows exact algebraic simplification of the two components into the variance and a remainder, though the calculations for the covariance are somewhat more involved.

For reference on other variance results, the plug-in estimator's variance is bounded by $O((\log n)^2 / n)$ [1]. Other estimators achieve a variance bound of $O(1/n)$ on random variables with finite support [21, 47]. Under conditions related to the convergence of $\mathbb{E}[(1-p_X)^k]$, Zhang's estimator has a variance bound of $O(\log n / n)$ [49, Corollary 2]. However, that paper shows a mean squared error of $O(1/n)$ for $p_j = o(1/j^2)$, which implies the same bound for its variance [49, Example 1]. To this author's knowledge, this is the first entropy estimator whose variance is calculated explicitly under the mild condition that $p_j = o(1/j)$.

THEOREM 3.3. Let $X_n := (X^{(1)}, \ldots, X^{(n)})$ be an i.i.d. random sample from $p$ where $p_j = O(j^{-1/\alpha})$ for $\alpha \in [0, 1)$. Then

(3.11)    $\mathrm{Var}\big[\hat{H}(X_n)\big] = \frac{\mathrm{Var}[\log p_X]}{n} + o\Big(\frac{1}{n}\Big) + O\Big(\frac{1}{n^{2(1-\alpha)}}\Big)$.

PROOF OF THEOREM 3.3. For brevity, let $X := X^{(1)}$ and $Y := X^{(2)}$. Using basic variance properties, one can decompose the estimator variance as

(3.12)    $\mathrm{Var}\big[\hat{H}(X_n)\big] = \mathrm{Var}\Big[\frac{1}{n} \sum_{i=1}^{n} \big[J(n) - J(m^{(i)})\big]\Big]$

(3.13)    $= \frac{1}{n} \mathrm{Var}[J(n) - J(m_X)] + \frac{n-1}{n} \mathrm{Cov}[J(n) - J(m_X), J(n) - J(m_Y)]$.

Beginning with the variance term of line 3.13, use Lemma B.1 for the first expected value of line 3.14 and Claim 1 for the second. From there, recognize that the double sum of line 3.15 is the first $n$ terms of the Cauchy product of the squared Taylor series for the natural logarithm. With these,

(3.14)    $\mathrm{Var}[J(n) - J(m_X)] = \mathbb{E}\big[(J(n) - J(m_X))^2\big] - \big(\mathbb{E}[J(n) - J(m_X)]\big)^2$

(3.15)    $= \sum_{m=2}^{n} \sum_{k=1}^{m-1} \frac{\mathbb{E}[(1-p_X)^m]}{k(m-k)} + o(1) - \Big(\sum_{m=1}^{n} \frac{\mathbb{E}[(1-p_X)^m]}{m}\Big)^2$

(3.16)    $= \mathbb{E}\big[(\log p_X)^2\big] - \sum_{m=n+1}^{\infty} \sum_{k=1}^{m-1} \frac{\mathbb{E}[(1-p_X)^m]}{k(m-k)} + o(1) - (\mathbb{E}[\log p_X])^2$

(3.17)    $\quad + 2 \sum_{m=n+1}^{\infty} \sum_{k=1}^{n} \frac{\mathbb{E}[(1-p_X)^m] \, \mathbb{E}[(1-p_X)^k]}{mk} + \Big(\sum_{m=n+1}^{\infty} \frac{\mathbb{E}[(1-p_X)^m]}{m}\Big)^2$

(3.18)    $= \mathrm{Var}[\log p_X] + o(1)$.

The double summation from line 3.16 corresponds to the terms added to complete the Cauchy product for $(\log p_X)^2$, and the additional terms from line 3.17 correspond to those added to complete the logarithm Taylor series within the preceding squared term. With Lemma 3.2, it is easy to see that these terms converge to zero. Lemma 3.4 completes the proof, bounding the covariance.

Bounding the covariance is more complicated, so its proof and supporting claims are left to Appendix B, involving algebraic simplification using mathematical induction as before (Claim 5), then an integral representation of these expressions to uncover their relationship to the dilogarithm function (Claim 6). Below is a sketch of the proof after these details have been sorted.
Critical to this bound is recognizing that the positive sum from terms where $X = Y$ and the negative terms from where $X \neq Y$, both of which are not $o(1/n)$, combine to converge at an $o(1/n)$ rate. Details can be found in the proof of Lemma 3.4 in Appendix B.

LEMMA 3.4. Let $X_n, X, Y$ be an i.i.d. sample from $p$ where $p_j = O(j^{-1/\alpha})$ for $\alpha \in [0, 1)$. Then

(3.19)    $\mathrm{Cov}[J(n) - J(m_X), J(n) - J(m_Y)] = o\Big(\frac{1}{n}\Big) + O\Big(\frac{1}{n^{2(1-\alpha)}}\Big)$.

SKETCH OF PROOF. Using the law of total covariance and the independence of $X$ and $Y$,

(3.20)    $\mathrm{Cov}[J(n) - J(m_X), J(n) - J(m_Y)]$

(3.21)    $= \mathbb{E}_{X,Y}\big[\mathrm{Cov}[J(n) - J(m_X), J(n) - J(m_Y) \mid X, Y]\big]$

(3.22)    $\quad + \mathrm{Cov}\big[\mathbb{E}[J(n) - J(m_X) \mid X, Y], \mathbb{E}[J(n) - J(m_Y) \mid X, Y]\big]$

(3.23)    $= \sum_i p_i^2 \mathrm{Var}[J(n) - J(m_i)] + \sum_i \sum_{j \neq i} p_i p_j \mathrm{Cov}[J(n) - J(m_i), J(n) - J(m_j)]$

(3.24)    $= \sum_{m=1}^{n} \sum_{k=n-m+1}^{n} \frac{\sum_i p_i^2 (1-p_i)^m}{mk} - \sum_{m=n+1}^{\infty} \frac{\sum_i p_i (1-p_i)}{m^2} + O\Big(\frac{1}{n^{2(1-\alpha)}}\Big)$.

The first sum is from the variance (Claim 3) and the second from the covariance (Claim 4), after aggregating the remaining convergent terms. The sums can be combined, using the fact that $\sum_{k=n-m+1}^{n} \frac{1}{k} = \sum_{k=n+1}^{\infty} \frac{m}{k(k-m)}$ on the first term and the geometric series identity on the second. Together these sums converge at an $o(1/n)$ rate.

3.3. Central Limit. This section supports the claims of Theorem 1.3. As Theorem 1.2 states, the estimator's mean squared error is sensitive to a PMF's tail behavior. Theorem 1.3 shows that the estimator's distribution is asymptotically Gaussian as the sample size increases when a PMF's tail converges at a rate bounded by $p_j = o(j^{-2})$. Relying on Theorems 3.1 and 3.3, the proof shows that the bounded Lipschitz distance between $\sqrt{n}\big[\hat{H}(X_n) - H(X)\big]$ and a Gaussian-distributed random variable converges to zero, which implies weak convergence of the former to the latter.

PROOF OF THEOREM 1.3. For random variables $U$ and $V$, define the bounded Lipschitz metric as

(3.25)    $d(U, V) := \sup_{f \in \mathrm{BL}(\mathbb{R})} |\mathbb{E}[f(U)] - \mathbb{E}[f(V)]|$

where $\mathrm{BL}(\mathbb{R}) := \big\{f : \mathbb{R} \to \mathbb{R} : \sup_{x \in \mathbb{R}} |f(x)| \leq 1, \; \sup_{x \neq y} \frac{|f(x) - f(y)|}{|x - y|} \leq 1\big\}$. Define the random variable $Z \sim \mathrm{Gaussian}(0, \mathrm{Var}[\log p(X)])$ and the oracle estimator as $H^*(X_n) := -\frac{1}{n} \sum_{i=1}^{n} \log p(X^{(i)})$. Using the triangle inequality,

(3.26)    $d\big(\sqrt{n}[\hat{H}(X_n) - H(X)], Z\big) \leq d\big(\sqrt{n}[\hat{H}(X_n) - H(X)], \sqrt{n}[H^*(X_n) - H(X)]\big)$

(3.27)    $\quad + d\big(\sqrt{n}[H^*(X_n) - H(X)], Z\big)$.

We proceed by showing that each term converges to zero. Beginning with the first term, using the Lipschitz property,

(3.28)    $d\big(\sqrt{n}[\hat{H}(X_n) - H(X)], \sqrt{n}[H^*(X_n) - H(X)]\big)$

(3.29)    $\leq \sqrt{n} \, \mathbb{E}\big|\hat{H}(X_n) - H^*(X_n)\big| \leq \sqrt{n \, \mathbb{E}\big[\hat{H}(X_n) - H^*(X_n)\big]^2}$

(3.30)    $= \sqrt{n} \sqrt{\big(\mathbb{E}[\hat{H}(X_n)] - H(X)\big)^2 + \mathrm{Var}\big[\hat{H}(X_n) - H^*(X_n)\big]}$

(3.31)    $= \sqrt{n} \sqrt{O\big(n^{2(\alpha - 1)}\big) + o\big(n^{-1}\big)} \to 0$.

The last equality follows from the bound on bias, Theorem 3.1, and Claim 7, and converges to zero when $\alpha \in [0, \frac{1}{2})$. Now we consider the second distance term. With a Berry-Esseen-style Wasserstein central limit theorem [3, Theorem 3.2],

(3.32)    $d\big(\sqrt{n}[H^*(X_n) - H(X)], Z\big) \leq \frac{3 \, \mathbb{E}|{-\log p_X} - H(X)|^3}{\sqrt{n} \, \mathrm{Var}[\log p_X]} \to 0$.

To show convergence to zero, it suffices to prove that $\mathbb{E}|{-\log p(X)} - H(X)|^3 < \infty$, which holds for $\alpha \in [0, \frac{1}{2})$. Together, these show that $d\big(\sqrt{n}[\hat{H}(X_n) - H(X)], Z\big) \to 0$ as $n \to \infty$, which implies that $\sqrt{n}\big[\hat{H}(X_n) - H(X)\big] \rightsquigarrow Z$ using [10, Thm. 8.3.2].

Theorem 1.3 makes statistical inference viable for sufficiently large $n$ using the Wald test [45, Section 10.1] on entropy when $p_j = o(j^{-2})$. While $\mathrm{Var}[\log p(X)]$ is unknown, one can estimate its value as

(3.33)    $\widehat{\mathrm{Var}}[\log p(X)] := \frac{1}{n} \sum_{i=1}^{n} \big[J(n) - J(m^{(i)})\big]^2 - \big[\hat{H}(X_n)\big]^2$

and estimate the standard error as $\widehat{se} := \sqrt{\widehat{\mathrm{Var}}[\log p(X)]}$. For example, an asymptotic $(1-q)$-level confidence interval for $H(X)$ is

(3.34)    $\big(\hat{H}(X_n) - z_{q/2} \, \widehat{se}, \; \hat{H}(X_n) + z_{q/2} \, \widehat{se}\big)$

where $z_{q/2}$ is the $(1 - q/2)$th quantile of the standard Gaussian distribution.

4. Experiments.
This section illustrates the empirical performance of the proposed estimator compared to other estimators on simulated data. While the primary goal of this paper is theory, empirical performance may help readers choose an estimation method based on their setting and specific needs. The simulations include the proposed estimator (labeled Proposed), the plug-in (labeled Plug-In), the Miller-corrected plug-in (labeled Miller), the estimator for random variables with finite support from Jiao and others [21] (labeled JVHW), and the Bayesian estimator for random variables with countable support from Archer and others [2] (labeled PYM). The JVHW estimator has theoretical guarantees only for random variables with finite support, but it is included in all simulations, on finitely and infinitely supported random variables, for comparison. The PYM estimator was developed within a Bayesian paradigm, which assumes that entropy is itself a random variable; the simulations presented here follow a frequentist model assuming that entropy is fixed. This section leaves out the estimators of Grassberger [17] and Zhang [49] because of their similarity to the proposed estimator.

To simulate various settings, datasets are repeatedly generated from four discrete PMFs with varying sample sizes. Sample sizes range from 200 data points to 2,000 in increments of 200. The PMFs presented here represent (1) a small, finite-support PMF, (2) a large, finite-support PMF, (3) an unbounded PMF with exponential decay, and (4) an unbounded PMF with power-law decay. For each PMF and sample size, the simulation generates 100 datasets, then applies all estimation methods to each dataset.

The estimation results are displayed as empirical mean squared error (MSE) and in a series of violin plots⁷ representing the distribution of the entropy estimation methods by sample size. Violin plots provide additional information on estimator density, bias, and variance not available in the MSE plots. Thicker regions of the violin plot correspond to a greater density of entropy estimates out of the 100 total for that sample size. The "–" markers above and below each violin indicate the maximum and minimum estimates for each violin. The "×" marker within each violin indicates the mean value of the 100 entropy estimates; mean values of each estimator are connected to better visualize change as the sample size increases.

⁷ Mirrored kernel density estimation plots.

FIG 1. The figure above shows plots for a Multinomial PMF with probability vector (0.5, 0.2, 0.15, 0.1, 0.05). On the left are average mean squared errors (MSE) by sample size and on the right are corresponding violin plots.

Figure 1 shows empirical MSE and violin plots for entropy estimates where the underlying PMF has five support points with $p_1 = 0.5$, $p_2 = 0.2$, $p_3 = 0.15$, $p_4 = 0.1$, $p_5 = 0.05$ and entropy $H(X) \approx 1.333074$. Using Theorem 1.2, the mean squared error convergence rate for the proposed estimator is $O(1/n)$. Because the support is small in comparison to each sample size, each estimation method is fairly accurate at all sample sizes.
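A stripped-down version of this simulation can be sketched as follows (illustrative code with a fixed seed and a single replication; the paper's experiments used 100 replications per setting). It draws one sample of size 2,000 from the five-point PMF and compares the proposed estimate with the true entropy:

```python
import math
import random
from collections import Counter

def harmonic(m):
    return sum(1.0 / k for k in range(1, m + 1))

def proposed_entropy(sample):
    """Estimator (1.2) via harmonic numbers."""
    n = len(sample)
    counts = Counter(sample)
    return harmonic(n) - sum(c * harmonic(c) for c in counts.values()) / n

pmf = [0.5, 0.2, 0.15, 0.1, 0.05]
true_entropy = -sum(p * math.log(p) for p in pmf)  # ≈ 1.3331

rng = random.Random(0)  # fixed seed so the run is reproducible
sample = rng.choices(range(1, 6), weights=pmf, k=2000)
print(abs(proposed_entropy(sample) - true_entropy))  # small at n = 2000
```

Repeating the draw 100 times per sample size and averaging the squared errors reproduces the kind of MSE curves shown in the figures.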
While there are some differences in mean entropy between methods for smaller samples, the large amount of overlap between the different violins for each sample size indicates that all methods have similar performance for this PMF.

FIG 2. The figure above shows plots for the uniform PMF with $s = 500$. On the left are average mean squared errors (MSE) by sample size and on the right are corresponding violin plots.

Figure 2 shows simulation plots for uniformly distributed data over a support of 500 points, with a true entropy of $H(X) \approx 6.214608$. Again, the MSE convergence rate for the proposed estimator is $O(1/n)$. This PMF illustrates the challenge of estimating entropy when the sample size is smaller than the support size, and the accuracy of the JVHW estimator when the sample is small relative to the support. While the proposed estimator underestimates entropy for smaller samples, it quickly converges to the true entropy and eventually overtakes JVHW in empirical MSE.

FIG 3. The figure above shows plots for the geometric PMF with parameter $p = 0.1$. On the left are average mean squared errors (MSE) by sample size and on the right are corresponding violin plots.

In Figure 3, the underlying data come from the geometric distribution with parameter $p = 0.1$. The geometric distribution is exponentially decreasing; because of this, $\alpha = 0$ for any geometric distribution, so the mean squared error convergence rate for the proposed estimator is $O(1/n)$. For a geometric distribution with parameter $p$, the entropy is

(4.1)    $H(X) = \frac{-(1-p)\log(1-p) - p \log p}{p}$.

With parameter $p = 0.1$, $H(X) \approx 3.250830$. In this setting, all estimators other than the plug-in perform similarly, with a relatively small bias at all sample sizes and shrinking variance as the sample size increases.

FIG 4. The figure above shows plots for the zeta PMF with parameter $\gamma = 2$. On the left are average mean squared errors (MSE) by sample size and on the right are corresponding violin plots.

Figure 4 shows the case where the underlying data are generated from a zeta distribution, $p_j = \frac{j^{-\gamma}}{\zeta(\gamma)}$, where $\zeta(\gamma) = \sum_{j=1}^{\infty} j^{-\gamma}$ is the Riemann zeta function serving as a normalizing constant. The data are generated with parameter $\gamma = 2$, so that $p_j$ is a regularly varying function with $\alpha = 1/2$. In this case, the tail of the PMF converges too slowly for the conditions of the central limit theorem (Theorem 1.3) to be met for the proposed estimator. The entropy for a zeta-distributed random variable with parameter $\gamma = 2$ is

(4.2)    $H(X) = \sum_{j=1}^{\infty} \frac{j^{-2} \log(j^2 \zeta(2))}{\zeta(2)} \approx 1.637622$.
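The closed form (4.1) can be cross-checked against a direct (truncated) evaluation of $-\sum_j p_j \log p_j$ with $p_j = p(1-p)^{j-1}$; the sketch below is an illustration added here, not from the paper:

```python
import math

def geometric_entropy(p):
    """Closed form (4.1) for the entropy of a Geometric(p) variable, in nats."""
    return (-(1 - p) * math.log(1 - p) - p * math.log(p)) / p

p = 0.1
direct = -sum(
    p * (1 - p) ** (j - 1) * math.log(p * (1 - p) ** (j - 1))
    for j in range(1, 2000)  # truncation; the neglected tail is negligible here
)
print(geometric_entropy(p), direct)  # both ≈ 3.2508
```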
For this PMF, all methods underestimate entropy on average, with PYM consistently closest in mean to the true value of entropy and the plug-in farthest away.

FIG 5. The figure above shows the median run times by sample size for each method to complete one estimate. PYM is not shown because of its significantly greater run time.

Figure 5 shows the median run times by sample size for all methods except PYM. Run times for the plug-in and proposed estimators were similarly fast, at a median of 0.09 ms over all sample sizes. The median run time for JVHW is 0.66 ms, using code provided by the authors on their GitHub page. The median run time for PYM is 32.48 ms; however, this was obtained by interfacing MATLAB through Python, which further increases run time. All others were run directly in Python. All estimates for the proposed entropy estimator were calculated as $\hat{H}_\psi(X_n)$ because the digamma function is readily available in Python, whereas harmonic numbers are not.⁸ All simulations were run on 2.9 GHz Quad-Core Intel Core i7 processors.

⁸ Recall that $\hat{H}(X_n) = \hat{H}_\psi(X_n)$.

5. Conclusion. This paper shows that an entropy estimator using the harmonic number function is asymptotically Gaussian for a large class of discrete random variables and provides its mean squared error. The algebraic approach used throughout much of the paper provides the mathematical precision necessary to establish these findings. This development allows for statistical inference on entropy and other information-theoretic measures for unbounded random variables, which was previously limited. Moreover, the experiments demonstrate that applications currently using the plug-in estimator would be well served by replacing it with the proposed estimator, as it compromises neither estimator complexity nor run time and nearly always either improves estimation or is no worse.

Finally, the use of mathematical induction for bounding estimator bias and variance may also yield new results for nearest-neighbor entropy estimators on continuous random variables [8, 26, 50]. Interestingly, a result similar to Claim 1 already exists for nearest-neighbor order statistics [50, Line 36]. Another benefit of the estimator's simplicity is its potential for theory on mixed, continuous and discrete, data as a nearest-neighbor estimator for other information-theoretic measures, similar to the mixed conditional mutual information estimator of Mesner and Shalizi [27].

APPENDIX A: AUXILIARY PROOFS

PROOF OF LEMMA 3.2. Define $\nu : \mathcal{P}(\mathbb{R}) \to \mathbb{N}$ as $\nu(A) = \sum_j I(p_j \in A)$ for $A \subseteq \mathbb{R}$, the counting measure defined by $p$. Further, let $\vec{\nu}(x) := |\{j \in \mathbb{N} : p_j > x\}|$ be the number of probability mass points of $p$ greater than $x$. Because $p_j = O(j^{-1/\alpha})$, there is some $j_0 \in \mathbb{N}$ such that for all $j \geq j_0$, $p_j \leq M j^{-1/\alpha}$. If $x \leq p_{j_0}$, then, because $p_j$ is decreasing,

(A.1)    $\vec{\nu}(x) := |\{j \in \mathbb{N} : p_j > x\}| \leq \big|\{j \in \mathbb{N} : M j^{-1/\alpha} > x\}\big| \leq M^{\alpha} x^{-\alpha}$.

Let $u \leq p_{j_0}$. Using integration by parts,

(A.2)    $\sum_{j : p_j < u} p_j^m = \int_0^u x^m \, d\nu(x) = -u^m \vec{\nu}(u) + m \int_0^u x^{m-1} \vec{\nu}(x) \, dx$

(A.3)    $\leq M^{\alpha} m \int_0^u x^{m-1-\alpha} \, dx = \frac{M^{\alpha} m \, u^{m-\alpha}}{m - \alpha}$.

Now, define $\nu_m(A) := \sum_j p_j^m I(p_j \in A)$. Then $\nu_m([0, u]) \leq \frac{M^{\alpha} m \, u^{m-\alpha}}{m - \alpha}$ for $u \in [0, u_0]$.
Again, with integration by parts,

(A.4)    $\sum_j p_j^m (1-p_j)^k = \int_0^\infty x^m (1-x)^k \, d\nu(x)$

(A.5)    $= \int_0^\infty (1-x)^k \, d\nu_m(x) \leq \int_0^\infty e^{-kx} \, d\nu_m(x)$

(A.6)    $= \big[e^{-kx} \nu_m([0, x])\big]_0^\infty + \int_0^\infty k e^{-kx} \nu_m([0, x]) \, dx$

(A.7)    $= \int_0^{p_{j_0}} k e^{-kx} \nu_m([0, x]) \, dx + \int_{p_{j_0}}^\infty k e^{-kx} \nu_m([0, x]) \, dx$

(A.8)    $\leq \frac{M^{\alpha} m k}{m - \alpha} \int_0^\infty x^{m-\alpha} e^{-kx} \, dx + \int_{p_{j_0}}^\infty k e^{-kx} \, dx$

(A.9)    $= \frac{M^{\alpha} m}{(m-\alpha) k^{m-\alpha}} \int_0^\infty x^{m-\alpha} e^{-x} \, dx + e^{-k p_{j_0}} = \frac{M^{\alpha} m \, \Gamma(m - \alpha)}{k^{m-\alpha}} + e^{-k p_{j_0}}$

where the last step uses the definition of the gamma function and its property that $\Gamma(x+1) = x \Gamma(x)$.

CLAIM 2. If $n \in \mathbb{N}$ and $p, q \in \mathbb{R}$, then

(A.10)    $\sum_{k=1}^{n} \frac{1}{k} \binom{n}{k} p^k q^{n-k} = \sum_{k=1}^{n} \frac{(p+q)^k q^{n-k} - q^n}{k}$.

PROOF. Let $S_n = \sum_{k=1}^{n} \frac{1}{k} \binom{n}{k} p^k q^{n-k}$. Using the identity $\binom{n+1}{k} = \binom{n}{k} + \binom{n}{k-1}$ and the binomial theorem,

(A.11)    $S_{n+1} = \sum_{k=1}^{n+1} \frac{1}{k} \binom{n+1}{k} p^k q^{n+1-k}$

(A.12)    $= \sum_{k=1}^{n+1} \frac{1}{k} \binom{n}{k} p^k q^{n+1-k} + \sum_{k=1}^{n+1} \frac{1}{k} \binom{n}{k-1} p^k q^{n+1-k}$

(A.13)    $= \sum_{k=1}^{n} \frac{1}{k} \binom{n}{k} p^k q^{n+1-k} + \frac{1}{n+1} \sum_{k=1}^{n+1} \binom{n+1}{k} p^k q^{n+1-k}$

(A.14)    $= q S_n + \frac{(p+q)^{n+1} - q^{n+1}}{n+1}$.

Dividing by $q^{n+1}$, we have

(A.15)    $\frac{S_{n+1}}{q^{n+1}} = \frac{S_n}{q^n} + \frac{(p+q)^{n+1} q^{-(n+1)} - 1}{n+1}$.

Because $S_1 = p$, using mathematical induction, we conclude that

(A.16)    $S_n = \sum_{k=1}^{n} \frac{(p+q)^k q^{n-k} - q^n}{k}$.

APPENDIX B: VARIANCE PROOFS

LEMMA B.1. Let $M$ be a binomial random variable of $n$ trials, each with probability $p$. Then

(B.1)    $\mathbb{E}[J(n) - J(M)]^2 = \sum_{m=2}^{n} \sum_{k=1}^{m-1} \frac{(1-p)^m}{k(m-k)} + \sum_{m=1}^{n} [J(n) - J(n-m)] \frac{(1-p)^m}{m}$

(B.2)    $= \sum_{m=2}^{n} \sum_{k=1}^{m-1} \frac{(1-p)^m}{k(m-k)} + o(1)$

as $n \to \infty$.

PROOF. Using mathematical induction to simplify the first term, let

(B.3)    $S_n := \sum_{m=0}^{n} [J(n) - J(m)]^2 \binom{n}{m} p^m (1-p)^{n-m}$.

Using the identities $J(n+1) = J(n) + \frac{1}{n+1}$ and $\binom{n+1}{m} = \binom{n}{m} + \binom{n}{m-1}$, and summation re-indexing,

(B.4)    $S_{n+1} = \sum_{m=0}^{n+1} [J(n+1) - J(m)]^2 \binom{n+1}{m} p^m (1-p)^{n+1-m}$

(B.5)    $= \sum_{m=0}^{n+1} \Big([J(n) - J(m)]^2 + \frac{1}{(n+1)^2} + \frac{2[J(n) - J(m)]}{n+1}\Big) \binom{n+1}{m} p^m (1-p)^{n+1-m}$

(B.6)    $= S_n - \frac{(1-p)^{n+1}}{n+1} \sum_{k=1}^{n+1} \frac{(1-p)^{-k} - 1}{k} + \frac{2 J(n+1)(1-p)^{n+1}}{n+1} + \frac{1 - 2(1-p)^{n+1}}{(n+1)^2}$

(B.7)    $= S_n + \frac{3 J(n+1)(1-p)^{n+1}}{n+1} - \frac{1}{n+1} \sum_{k=1}^{n+1} \frac{(1-p)^{n+1-k}}{k} + \frac{1 - 2(1-p)^{n+1}}{(n+1)^2}$,

using Claim 2 on line B.6. Together with $S_1 = 1 - p$, then

(B.8)    $S_n = \sum_{m=1}^{n} \sum_{k=1}^{m} \frac{3(1-p)^m - (1-p)^{m-k}}{mk} + \sum_{m=1}^{n} \frac{1 - 2(1-p)^m}{m^2}$

(B.9)    $= \sum_{m=1}^{n} \sum_{k=1}^{m} \frac{2(1-p)^m}{mk} - \sum_{m=1}^{n} \frac{2(1-p)^m}{m^2}$

(B.10)    $\quad + \sum_{m=1}^{n} \sum_{k=1}^{m} \frac{(1-p)^m - (1-p)^{m-k}}{mk} + \sum_{m=1}^{n} \frac{1}{m^2}$.

Working with these lines separately: on line B.9, re-indexing, canceling, and using the fact that $\frac{1}{mk} + \frac{1}{m(m-k)} = \frac{1}{k(m-k)}$, we have

(B.11)    $\sum_{m=1}^{n} \sum_{k=1}^{m} \frac{2(1-p)^m}{mk} - \sum_{m=1}^{n} \frac{2(1-p)^m}{m^2} = \sum_{m=2}^{n} \sum_{k=1}^{m-1} \frac{(1-p)^m}{k(m-k)}$.

On line B.10, with re-indexing and canceling,

(B.12)    $\sum_{m=1}^{n} \sum_{k=1}^{m} \frac{(1-p)^m - (1-p)^{m-k}}{mk} + \sum_{m=1}^{n} \frac{1}{m^2} = \sum_{m=1}^{n} [J(n) - J(n-m)] \frac{(1-p)^m}{m}$.

Using the monotone convergence theorem,

(B.13)    $\lim_{n \to \infty} \sum_{m=1}^{\infty} [J(n) - J(n-m)] \frac{(1-p)^m}{m} = \sum_{m=1}^{\infty} \frac{(1-p)^m}{m} \lim_{n \to \infty} [J(n) - J(n-m)] = 0$.

PROOF OF LEMMA 3.4. Using the law of total covariance and the independence of $X$ and $Y$,

(B.14)    $\mathrm{Cov}[J(n) - J(m_X), J(n) - J(m_Y)]$

(B.15)    $= \mathbb{E}_{X,Y}\big[\mathrm{Cov}[J(n) - J(m_X), J(n) - J(m_Y) \mid X, Y]\big]$

(B.16)    $\quad + \mathrm{Cov}\big[\mathbb{E}[J(n) - J(m_X) \mid X, Y], \mathbb{E}[J(n) - J(m_Y) \mid X, Y]\big]$

(B.17)    $= \sum_i p_i^2 \mathrm{Var}[J(n) - J(m_i)] + \sum_i \sum_{j \neq i} p_i p_j \mathrm{Cov}[J(n) - J(m_i), J(n) - J(m_j)]$.

Beginning with the covariance from line B.17, with Claim 4, $\sum_{j \neq i} p_j = 1 - p_i$, and re-indexing $i$ and $j$, we have

(B.18)    $\sum_i \sum_{j \neq i} p_i p_j \mathrm{Cov}[J(n) - J(m_i), J(n) - J(m_j)]$

(B.19)    $= -\sum_{m=n+1}^{\infty} \sum_{k=n+1}^{\infty} \frac{\sum_i p_i (1-p_i)^m \sum_{j \neq i} p_j (1-p_j)^k}{mk}$

(B.20)    $\quad - \sum_{m=n+1}^{\infty} \Big[\frac{\sum_i p_i (1-p_i) - 2 p_i (1-p_i)^{m+1}}{m^2} + \frac{2 \sum_i p_i (1-p_i)^{m+1} \log(1-p_i)}{m}\Big]$

(B.21)    $\quad + 2 \sum_{m=n+1}^{\infty} \sum_{k=m+1}^{\infty} \frac{\sum_i p_i (1-p_i)^{m-k} \sum_{j \neq i} p_j (1 - p_i - p_j)^k}{mk}$.

Working through the terms, we use Lemma 3.2 and integral comparisons to bound most sums:

(B.22)    $\sum_{m=n+1}^{\infty} \sum_{k=n+1}^{\infty} \frac{\sum_i p_i (1-p_i)^m \sum_{j \neq i} p_j (1-p_j)^k}{mk}$

(B.23)    $= \sum_{m=n+1}^{\infty} \sum_{k=n+1}^{\infty} \frac{\sum_i p_i (1-p_i)^m \sum_j p_j (1-p_j)^k - \sum_i p_i^2 (1-p_i)^{m+k}}{mk}$

(B.24)    $= \sum_{m=n+1}^{\infty} \sum_{k=n+1}^{\infty} O\Big(\frac{1}{m^{2-\alpha} k^{2-\alpha}}\Big) = \Big(\int_n^{\infty} O\big(x^{\alpha-2}\big) \, dx\Big)^2 = O\Big(\frac{1}{n^{2(1-\alpha)}}\Big)$.

Next,

(B.25)    $\sum_{m=n+1}^{\infty} \frac{\sum_i p_i (1-p_i)^{m+1}}{m^2} = \sum_{m=n+1}^{\infty} O\Big(\frac{1}{m^{3-\alpha}}\Big) = O\Big(\frac{1}{n^{2-\alpha}}\Big)$.

Additionally, using the fact that $-\log(1-x) \leq \frac{x}{1-x}$ for $x \in [0, 1)$,

(B.26)    $-\sum_{m=n+1}^{\infty} \frac{\sum_i p_i (1-p_i)^{m+1} \log(1-p_i)}{m} \leq \sum_{m=n+1}^{\infty} \frac{\sum_i p_i^2 (1-p_i)^m}{m}$

(B.27)    $= \sum_{m=n+1}^{\infty} O\Big(\frac{1}{m^{3-\alpha}}\Big) = O\Big(\frac{1}{n^{2-\alpha}}\Big)$.

Last,

(B.28)    $\sum_{m=n+1}^{\infty} \sum_{k=m+1}^{\infty} \frac{\sum_i p_i (1-p_i)^{m-k} \sum_{j \neq i} p_j (1 - p_i - p_j)^k}{mk}$

(B.29)    $\leq \sum_{m=n+1}^{\infty} \sum_{k=m+1}^{\infty} \frac{\sum_i p_i (1-p_i)^m \sum_j p_j (1-p_j)^k}{mk}$

$= \sum_{m=n+1}^{\infty} \sum_{k=m+1}^{\infty} O\Big(\frac{1}{m^{2-\alpha} k^{2-\alpha}}\Big) = O\Big(\frac{1}{n^{2(1-\alpha)}}\Big)$.
(B.30) Together, we have (B.31) X iX j̸=ipipjCov J(n)−J(mi),J(n)−J(mj) =−∞X m=n+1P ipi(1−pi) m2+O1 n2(1−α) . Using Claim 3, we move to the variance term from line B.17: X ip2 iVar[J(n)−J(mi)] =nX m=1nX k=n−m+1P ip2 i(1−pi)m−P ip2 i(1−pi)m+k mk.(B.32) Working on the second term in the numerator above, we re-index, use partial fraction decom- position, then use Lemma 3.2: nX m=1nX k=n−m+1P ip2 i(1−pi)m+k mk=2nX m=n+1nX k=m−nP ip2 i(1−pi)m k(m−k)(B.33) 18 =2nX m=n+1P ip2 i(1−pi)m mnX k=m−n1 k+1 m−k (B.34) = 22nX m=n+1P ip2 i(1−pi)m mnX k=m−n1 k= 2nX k=11 kn+kX m=n+1P ip2 i(1−pi)m m(B.35) ≤2[log( n) + 1]Z∞ nO1 x3−α dx=Olog(n) n2−α . (B.36) Now, X ip2 iVar[J(n)−J(mi)] =nX m=1nX k=n−m+1P ip2 i(1−pi)m mk+Olog(n) n2−α . (B.37) Combining the two sums from the variance on line B.37 and covariance on line B.31, we use the fact thatPn k=n−m+11 k=P∞ k=n+11 k(k−m)on the first term and the geometric series on the second: nX m=1nX k=n−m+1P ip2 i(1−pi)m mk−∞X k=n+1P ipi(1−pi) k2(B.38) =nX m=1∞X k=n+1P ip2 i(1−pi)m k(k−m)−∞X m=1∞X k=n+1P ip2 i(1−pi)m k2(B.39) =nX m=1∞X k=n+1mP ip2 i(1−pi)m k2(k−m)−∞X m=n+1X ip2 i(1−pi)m∞X k=n+11 k2(B.40) ≤nX m=1X ip2 i(1−pi)mZ∞ nm x2(x−m)dx−∞X m=n+1X ip2 i(1−pi)mZ∞ n1 x2dx (B.41) =nX m=1X ip2 i(1−pi)m −1 mlog 1−m n −1 n −1 n∞X m=n+1O1 m2−α (B.42) =nX m=1P ip2 i(1−pi)m m∞X k=21 km nk +O1 n2−α =o1 n . (B.43) To show the last bound, we separate the small and large values of m. For small values of m, saym≤m0for some m0≪n, P
ip2 i(1−pi)m m∞X k=21 km nk =1 nX ip2 i(1−pi)m∞X k=11 k+ 1m nk (B.44) ≤ −P ip2 i(1−pi)m nlog 1−m n =o1 n . (B.45) For all values m≥m0+ 1, we use Lemma 3.2, nX m=m0+1P ip2 i(1−pi)m m∞X k=21 km nk =nX m=m0+1∞X k=2O mk−3+α knk(B.46) ≤∞X k=0Zn m0O xk−1+α (k+ 2)nk+2dx . (B.47) 19 Ifα∈(0,1), then ∞X k=0Zn m0O xk−1+α (k+ 2)nk+2dx≤ O1 n2−α∞X k=01 (k+α)(k+ 2)=O1 n2−α . (B.48) Otherwise, if α= 0, then ∞X k=0Zn m0O xk−1+α (k+ 2)nk+2dx=Zn m0O x−1 2n2dx+∞X k=1Zn m0O xk−1 (k+ 2)nk+2dx (B.49) ≤O(log(n)) 2n2+O1 n2∞X k=11 k(k+ 2)=Olog(n) n2 . (B.50) CLAIM 3.LetMbe a binomial random variable of n∈Ntrials each with probability p∈(0,1). Then (B.51) Var[J(n)−J(M)] =nX m=1nX k=n−m+1(1−p)m 1−(1−p)k mk. PROOF . Using Lemma B.1 and Claim 1, Var[J(n)−J(M)] (B.52) =Eh [J(n)−J(M)]2i −[E[J(n)−J(M)]]2(B.53) =nX m=1n−mX k=1(1−p)m+k mk+nX m=1nX k=n−m+1(1−p)m mk−nX m=1nX k=1(1−p)m+k mk(B.54) =nX m=1nX k=n−m+1(1−p)m mk−nX m=1nX k=n−m+1(1−p)m+k mk. (B.55) CLAIM 4.Let(M,K,n −M−K)be a multinomial random vector of ntrials with probabilities (p,q,1−p−q). Then Cov [( J(n)−J(M))(J(n)−J(K))] (B.56) =−∞X m=n+1∞X k=n+1(1−p)m(1−q)k mk−∞X m=n+11−(1−p)m−(1−q)m m2(B.57) −∞X m=n+1(1−p)mlog(1−p) + (1−q)mlog(1−q) m(B.58) +∞X m=n+1∞X k=m+1(1−p)m 1−q 1−pk + (1−q)m 1−p 1−qk mk. (B.59) 20 PROOF . 
Using Claims 5 and 1 on line B.61, then Claim 6 on line B.65, and canceling like terms, we have Cov [J(n)−J(M),J(n)−J(K)] (B.60) =E[[J(n)−J(M)][J(n)−J(K)]]−E[J(n)−J(M)]E[J(n)−J(K)] (B.61) =nX m=1mX k=1(1−p)m 1−q 1−pk + (1−q)m 1−p 1−qk mk(B.62) +nX m=11−(1−p)m−(1−q)m m2−nX m=1(1−p)m mnX k=1(1−q)k k(B.63) =nX m=1∞X k=1(1−p)m 1−q 1−pk + (1−q)m 1−p 1−qk mk(B.64) −nX m=1∞X k=m+1(1−p)m 1−q 1−pk + (1−q)m 1−p 1−qk mk(B.65) +nX m=11−(1−p)m−(1−q)m m2−nX m=1(1−p)m mnX k=1(1−q)k k(B.66) =−nX m=1(1−p)m[log(q)−log(1−p)] + (1 −q)m[log(p)−log(1−q)] m(B.67) −log(p)log( q) + log( p)log(1 −p) + log( q)log(1 −q) (B.68) −ζ(2) + Li 2(1−p) + Li 2(1−q) (B.69) +∞X m=n+1∞X k=m+1(1−p)m 1−q 1−pk + (1−q)m 1−p 1−qk mk(B.70) +nX m=11−(1−p)m−(1−q)m m2−nX m=1(1−p)m mnX k=1(1−q)k k(B.71) =−∞X m=n+1∞X k=n+1(1−p)m(1−q)k mk−∞X m=n+11−(1−p)m−(1−q)m m2(B.72) −∞X m=n+1(1−p)mlog(1−p) + (1−q)mlog(1−q) m(B.73) +∞X m=n+1∞X k=m+1(1−p)m 1−q 1−pk + (1−q)m 1−p 1−qk mk. (B.74) CLAIM 5.Let(M,K,n −M−K)be a multinomial random vector of ntrials with probabilities (p,q,1−p−q). Then E[[J(n)−J(M)][J(n)−J(K)]] (B.75) 21 =nX m=1mX k=1(1−q)m 1−p 1−qk + (1−p)m 1−q 1−pk mk(B.76) +nX m=11−(1−p)m−(1−q)m m2. (B.77) SKETCH OF PROOF . Using Claim 1, E[[J(n)−J(M)][J(n)−J(K)]] (B.78) =nX m=0n−mX k=0[J(n)−J(m)][J(n)−J(k)]n m,k pmqk(1−p−q)n−m−k(B.79) =nX m=0[J(n)−J(m)] J(n)−n−mX k=11− 1−q 1−pk k n m pm(1−p)n−m. (B.80) LetSnbe the statement above, then, with tedious but straightforward calculations, using mathematical induction with the fact that S1= 1−p−q, Sn=nX m=1mX k=1(1−q)m 1−p 1−qk + (1−p)m 1−q 1−pk mk(B.81) +nX m=11−(1−p)m−(1−q)m m2. (B.82) CLAIM 6.Ifp,q∈(0,1)andn∈N, then −nX m=1∞X k=m+1(1−p)m 1−q 1−pk + (1−q)m 1−p 1−qk mk(B.83) =−log(p)log( q) + log( q)log(1 −q) + log( p)log(1 −p) (B.84) −ζ(2) + Li 2(1−p) + Li 2(1−q) (B.85) +∞X m=n+1∞X k=m+1 1−q 1−pk (1−p)m mk+∞X m=n+1∞X k=m+1 1−p 1−qk (1−q)m mk. (B.86) PROOF . 
Representing the left-hand summation as an integral and using the geometric series to get lines B.89, B.92, and B.93, we have −nX m=1∞X k=m+1(1−p)m 1−q 1−pk mk(B.87) =nX m=1∞X k=m+1Zp 1−q q(1−z)m−2 1−q 1−zk−1 m+(1−z)m−1 1−q 1−zk k dz (B.88) 22 =Zp 1−qnX m=1 (1−z)m−1 1−q 1−zm m+∞X
k=m+1(1−z)m−1 1−q 1−zk k dz (B.89) =Zp 1−qnX m=1∞X k=m(1−z)m−1 1−q 1−zk kdz (B.90) =−Zp 1−qnX m=1∞X k=mZq 1−z(1−z)m−2 1−y 1−zk−1 dy dz (B.91) =−Zp 1−qZq 1−znX m=1(1−z)m−1 1−y 1−zm−1 ydy dz (B.92) =−Zp 1−qZq 1−znX m=1(1−y−z)m−1 ydy dz =−Zp 1−qZq 1−z1−(1−y−z)n y(y+z)dy dz . (B.93) Now, focusing on part of line B.93, and using the geometric series again, then switching the order of integration, Zp 1−qZq 1−z(1−y−z)n y(y+z)dy dz =Zp 1−qZq 1−z∞X m=0(1−y−z)n+m ydy dz (B.94) =Zq 1−pZp 1−y∞X m=0(1−y−z)n+m ydz dy =−Zq 1−p∞X m=n+1(1−y−p)m mydy (B.95) =−Zq 1−p∞X m=n+1mX k=0m k(−1)kyk−1(1−p)m−k mdy (B.96) =−∞X m=n+1[log(q)−log(1−p)](1−p)m m(B.97) −∞X m=n+1mX k=1m k(−1)kh qk−(1−p)ki (1−p)m−k mk(B.98) =−∞X m=n+1log q 1−p (1−p)m m−∞X m=n+1mX k=1(1−p−q)k(1−p)m−k mk(B.99) =∞X m=n+1∞X k=1 1−q 1−pk (1−p)m mk−∞X m=n+1mX k=1 1−q 1−pk (1−p)m mk(B.100) =∞X m=n+1∞X k=m+1(1−p)m 1−q 1−pk mk(B.101) where we used Claim 2 on line B.98 twice. Now, switching the order of integration for the second integral on the line below, −Zp 1−qZq 1−z1 y(y+z)dy dz−Zq 1−pZp 1−y1 z(y+z)dz dy (B.102) 23 =−Zp 1−qZq 1−z1 y(y+z)dy dz−Zp 1−qZq 1−z1 z(y+z)dy dz (B.103) =−Zp 1−qZq 1−z1 yzdy dz =−Zp 1−qlog(q)−log(1−z) zdz (B.104) =−log(p)log( q) + log( q)log(1 −q)−Li2(p) + Li 2(1−q) (B.105) =−log(p)log( q) + log( q)log(1 −q) + log( p)log(1 −p) (B.106) −ζ(2) + Li 2(1−p) + Li 2(1−q) (B.107) using the dilogarithm reflection identity: (B.108) Li2(z) + Li 2(1−z) =ζ(2)−log(z)log(1 −z) where Li2(z) :=P∞ m=1zm m2is the dilogarithm function and ζ(z) :=P∞ m=1m−zis the zeta function. Putting this together, −nX m=1∞X k=m+1(1−p)m 1−q 1−pk + (1−q)m 1−p 1−qk mk(B.109) =−Zp 1−qZq 1−z1−(1−y−z)n y(y+z)dy dz−Zq 1−pZp 1−y1−(1−y−z)n z(y+z)dz dy (B.110) =−log(p)log( q) + log( q)log(1 −q) + log( p)log(1 −p) (B.111) −ζ(2) + Li 2(1−p) + Li 2(1−q) (B.112) +∞X m=n+1∞X k=m+1(1−p)m 1−q 1−pk mk+∞X m=n+1∞X k=m+1(1−q)m 1−p 1−qk mk. 
(B.113)

APPENDIX C: CENTRAL LIMIT PROOFS

CLAIM 7. Assuming the conditions from Theorem 1.3,
$$\mathrm{Var}\big[\widehat{H}(X_n) - H^{*}(X_n)\big] = o\Big(\frac{1}{n}\Big). \tag{C.1}$$

PROOF. Using a variance property, we can separate terms into variance and covariance:
\begin{align}
&\mathrm{Var}\big[\widehat{H}(X_n) - H^{*}(X_n)\big] \tag{C.2}\\
&= \mathrm{Var}\big[\widehat{H}(X_n)\big] + \mathrm{Var}[H^{*}(X_n)] - 2\,\mathrm{Cov}\big[\widehat{H}(X_n), H^{*}(X_n)\big]. \tag{C.3}
\end{align}
The two variance terms are straightforward: from Theorem 3.3,
$$\mathrm{Var}\big[\widehat{H}(X_n)\big] = \frac{\mathrm{Var}[\log p_X]}{n} + o\Big(\frac{1}{n}\Big). \tag{C.4}$$
Because the sample is independent,
$$\mathrm{Var}[H^{*}(X_n)] = \mathrm{Var}\bigg[-\frac{1}{n}\sum_{i=1}^{n}\log p_{X^{(i)}}\bigg] = \frac{1}{n}\mathrm{Var}[\log p_X]. \tag{C.5}$$
Working on the covariance terms, recall that $m^{(i)} := \sum_{j=1}^{n} \mathbb{1}\{X^{(i)} = X^{(j)}\}$; then
\begin{align}
&\mathrm{Cov}\big[\widehat{H}(X_n), H^{*}(X_n)\big] \tag{C.6}\\
&= \frac{1}{n^2}\sum_{i=1}^{n}\sum_{k=1}^{n}\mathrm{Cov}\big[J(m^{(i)}) - J(n),\, \log p_{X^{(k)}}\big] \tag{C.7}\\
&= \frac{1}{n}\,\mathrm{Cov}\big[J(m^{(1)}) - J(n),\, \log p_{X^{(1)}}\big] \tag{C.8}\\
&\quad + \frac{n-1}{n}\,\mathrm{Cov}\big[J(m^{(1)}) - J(n),\, \log p_{X^{(2)}}\big]. \tag{C.9}
\end{align}
Considering the covariance term of line C.8, we begin with the joint expectation. Using Claim 1,
\begin{align}
&\mathbb{E}\big[\big(J(m^{(1)}) - J(n)\big)\log p_{X^{(1)}}\big] \tag{C.10}\\
&= \mathbb{E}_{X^{(1)}}\Big[\mathbb{E}_{X^{(2)},\dots,X^{(n)}}\big[J(m^{(1)}) - J(n)\,\big|\,X^{(1)}\big]\log p_{X^{(1)}}\Big] \tag{C.11}\\
&= \mathbb{E}_{X^{(1)}}\bigg[\bigg(\log p_{X^{(1)}} - \sum_{k=n+1}^{\infty}\frac{(1-p_{X^{(1)}})^k}{k}\bigg)\log p_{X^{(1)}}\bigg] \tag{C.12}\\
&= \mathbb{E}\big[(\log p_X)^2\big] - \sum_{k=n+1}^{\infty}\frac{\mathbb{E}\big[(1-p_X)^k\log p_X\big]}{k}. \tag{C.13}
\end{align}
With this, and using Claim 1 again along with Lemma 3.2,
\begin{align}
&\mathrm{Cov}\big[J(m^{(1)}) - J(n),\,\log p_{X^{(1)}}\big] \tag{C.14}\\
&= \mathbb{E}\big[\big(J(m^{(1)}) - J(n)\big)\log p_{X^{(1)}}\big] - \mathbb{E}\big[J(m^{(1)}) - J(n)\big]\,\mathbb{E}\big[\log p_{X^{(1)}}\big] \tag{C.15}\\
&= \mathbb{E}\big[(\log p_X)^2\big] - [H(X)]^2 - \sum_{m=n+1}^{\infty}\frac{\mathbb{E}[(1-p_X)^m\log p_X] + H(X)\,\mathbb{E}[(1-p_X)^m]}{m} \tag{C.16}\\
&= \mathrm{Var}[\log p_X] + O\Big(\frac{1}{n^{1-\alpha}}\Big). \tag{C.17}
\end{align}
Moving to the covariance term of line
C.9, let $m^{(2)}_{-1} := \sum_{i=2}^{n} \mathbb{1}\{X^{(2)} = X^{(i)}\}$. If $X^{(1)} = X^{(2)}$, then $m^{(2)} = m^{(2)}_{-1} + 1$, and if $X^{(1)} \neq X^{(2)}$, then $m^{(2)} = m^{(2)}_{-1}$. Recall that $J(z+1) = J(z) + \frac{1}{z}$ for $z \in \mathbb{N}$. Then, using the law of total expectation to separate and then again to join, together with Claim 1,
\begin{align}
&\mathbb{E}\big[\big(J(m^{(2)}) - J(n)\big)\log p_{X^{(1)}}\big] \tag{C.18}\\
&= \mathbb{P}\big(X^{(1)}=X^{(2)}\big)\,\mathbb{E}\big[\big(J(m^{(2)}_{-1}+1) - J(n)\big)\log p_{X^{(1)}} \,\big|\, X^{(1)}=X^{(2)}\big] \tag{C.19}\\
&\quad + \mathbb{P}\big(X^{(1)}\neq X^{(2)}\big)\,\mathbb{E}\big[\big(J(m^{(2)}_{-1}) - J(n)\big)\log p_{X^{(1)}} \,\big|\, X^{(1)}\neq X^{(2)}\big] \tag{C.20}\\
&= \mathbb{P}\big(X^{(1)}=X^{(2)}\big)\,\mathbb{E}\Big[\Big(J(m^{(2)}_{-1}) - J(n) + \tfrac{1}{m^{(2)}_{-1}+1}\Big)\log p_{X^{(1)}} \,\Big|\, X^{(1)}=X^{(2)}\Big] \tag{C.21}\\
&\quad + \mathbb{P}\big(X^{(1)}\neq X^{(2)}\big)\,\mathbb{E}\big[\big(J(m^{(2)}_{-1}) - J(n)\big)\log p_{X^{(1)}} \,\big|\, X^{(1)}\neq X^{(2)}\big] \tag{C.22}\\
&= \mathbb{E}\big[\big(J(m^{(2)}_{-1}) - J(n)\big)\log p_{X^{(1)}}\big] \tag{C.23}\\
&\quad + \mathbb{P}\big(X^{(1)}=X^{(2)}\big)\,\mathbb{E}\bigg[\frac{\log p_{X^{(1)}}}{m^{(2)}_{-1}+1} \,\bigg|\, X^{(1)}=X^{(2)}\bigg] \tag{C.24}\\
&= H(X)\bigg(\sum_{m=1}^{n-1}\frac{\mathbb{E}[(1-p_X)^m]}{m} + \frac{1}{n}\bigg) - \frac{H(X) + \mathbb{E}[(1-p_X)^n\log(p_X)]}{n} \tag{C.25}\\
&= H(X)\sum_{m=1}^{n-1}\frac{\mathbb{E}[(1-p_X)^m]}{m} - \frac{\mathbb{E}[(1-p_X)^n\log(p_X)]}{n}, \tag{C.26}
\end{align}
where, using the fact that $\mathbb{E}\big[\frac{1}{U+1}\big] = \frac{1-(1-p)^{n+1}}{(n+1)p}$ for $U \sim \mathrm{Binomial}(n,p)$,
\begin{align}
&\mathbb{P}\big(X^{(1)}=X^{(2)}\big)\,\mathbb{E}\bigg[\frac{\log p_{X^{(1)}}}{m^{(2)}_{-1}+1}\,\bigg|\,X^{(1)}=X^{(2)}\bigg] \tag{C.27}\\
&= \sum_i p_i^2 \log(p_i) \sum_{m=0}^{n-1} \frac{1}{m+1}\binom{n-1}{m} p_i^m (1-p_i)^{n-1-m} \tag{C.28}\\
&= \sum_i p_i \log(p_i)\,\frac{1-(1-p_i)^n}{n} = -\frac{H(X) + \mathbb{E}[(1-p_X)^n\log(p_X)]}{n}. \tag{C.29}
\end{align}
Using the joint expectation calculation, together with Lemma 3.2, we have
\begin{align}
&\mathrm{Cov}\big[J(m^{(2)}) - J(n),\, \log p_{X^{(1)}}\big] \tag{C.30}\\
&= \mathbb{E}\big[\big(J(m^{(2)}) - J(n)\big)\log p_{X^{(1)}}\big] - \mathbb{E}\big[J(m^{(2)}) - J(n)\big]\,\mathbb{E}\big[\log p_{X^{(1)}}\big] \tag{C.31}\\
&= H(X)\sum_{m=1}^{n-1}\frac{\mathbb{E}[(1-p_X)^m]}{m} - \frac{\mathbb{E}[(1-p_X)^n\log(p_X)]}{n} - H(X)\sum_{m=1}^{n}\frac{\mathbb{E}[(1-p_X)^m]}{m} \tag{C.32}\\
&= -\frac{H(X)\,\mathbb{E}[(1-p_X)^n]}{n} - \frac{\mathbb{E}[(1-p_X)^n\log(p_X)]}{n} = O\Big(\frac{1}{n^{2-\alpha}}\Big). \tag{C.33}
\end{align}
Collecting all of these calculations,
\begin{align}
&\mathrm{Var}\big[\widehat{H}(X_n) - H^{*}(X_n)\big] \tag{C.34}\\
&= \mathrm{Var}\big[\widehat{H}(X_n)\big] + \mathrm{Var}[H^{*}(X_n)] - 2\,\mathrm{Cov}\big[\widehat{H}(X_n), H^{*}(X_n)\big] \tag{C.35}\\
&= \frac{\mathrm{Var}[\log p_X]}{n} + o\Big(\frac{1}{n}\Big) + \frac{\mathrm{Var}[\log p_X]}{n} - \frac{2\,\mathrm{Var}[\log p_X]}{n} + O\Big(\frac{1}{n^{2-\alpha}}\Big) = o\Big(\frac{1}{n}\Big). \tag{C.36}
\end{align}

Acknowledgments. The author would like to thank Liza Levina and Ji Zhu for their support with this project.

Funding. The author was supported by NSF RTG Grant DMS-1646108 and NIH NIDA Grant U01 DA03926.
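Two classical identities invoked in the appendix proofs can be spot-checked numerically: the dilogarithm reflection formula $\mathrm{Li}_2(z) + \mathrm{Li}_2(1-z) = \zeta(2) - \log(z)\log(1-z)$ (proof of Claim 6) and the binomial expectation $\mathbb{E}[1/(U+1)] = \frac{1-(1-p)^{n+1}}{(n+1)p}$ for $U \sim \mathrm{Binomial}(n,p)$ (proof of Claim 7). A minimal Python sketch; the truncation level of the $\mathrm{Li}_2$ series is an arbitrary choice:

```python
import math
from fractions import Fraction

def li2(z, terms=4000):
    """Truncated dilogarithm series Li2(z) = sum_{m>=1} z^m / m^2 (for |z| < 1)."""
    return sum(z**m / m**2 for m in range(1, terms + 1))

def reflection_gap(z):
    """Li2(z) + Li2(1-z) - (zeta(2) - log(z) log(1-z)); should vanish on (0,1)."""
    return li2(z) + li2(1 - z) - (math.pi**2 / 6 - math.log(z) * math.log(1 - z))

def mean_inverse_successes_plus_one(n, p):
    """Exact E[1/(U+1)] for U ~ Binomial(n, p), by direct summation."""
    return sum(Fraction(math.comb(n, m)) * p**m * (1 - p)**(n - m) / (m + 1)
               for m in range(n + 1))

# reflection identity holds up to series-truncation error
for z in (0.2, 0.5, 0.7):
    assert abs(reflection_gap(z)) < 1e-9

# binomial identity holds exactly in rational arithmetic
n, p = 10, Fraction(1, 3)
assert mean_inverse_successes_plus_one(n, p) == (1 - (1 - p)**(n + 1)) / ((n + 1) * p)
```

The exact check with `fractions.Fraction` avoids any floating-point ambiguity in the binomial identity, while the dilogarithm check is necessarily approximate because the series is truncated.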
arXiv:2505.20157v1 [math.ST] 26 May 2025

Gaussian Process Methods for Covariate-Based Intensity Estimation

Patric Dolmeta and Matteo Giordano
ESOMAS Department, University of Turin

Abstract. We study nonparametric Bayesian inference for the intensity function of a covariate-driven point process. We extend recent results from the literature, showing that a wide class of Gaussian priors, combined with flexible link functions, achieves minimax optimal posterior contraction rates. Our result includes widespread prior choices such as the popular Matérn processes, with the standard exponential (and sigmoid) link, and implies that the resulting methodologically attractive procedures optimally solve the statistical problem at hand, in the increasing domain asymptotics and under the common assumption in spatial statistics that the covariates are stationary and ergodic.

Keywords. Cox process; frequentist analysis of Bayesian procedures; Gaussian prior; minimax optimality

Contents
1 Introduction
2 Posterior contraction rates for covariate-based intensity estimation
  2.1 Observation model
  2.2 Assumptions on the covariates
  2.3 Prior specification
  2.4 Main result
3 Concluding discussion
4 Proof

1 Introduction

In the statistical analysis of many point patterns, a central issue is to determine the influence of covariates on the point distribution. We refer to [2], and references therein, for an overview of the problem and applications. The established framework to tackle this scenario is to model the point pattern via a Cox process [4], namely a 'doubly-stochastic' point process with covariate-dependent intensity function, cf. Section 2.1 below.
The resulting statistical problem is then to estimate the intensity from observations of the points and covariates.

There is a vast literature on the parametric approach to this issue. For example, the log-Gaussian Cox model [17] is widely used; see further the monograph [6]. Existing nonparametric frequentist approaches largely rely on kernel-type estimators, e.g. [2]. These methods were first shown to be asymptotically consistent in a seminal paper by Guan [13], under the typical 'large domain' framework of spatial statistics and the assumption that the covariates are stationary and ergodic.

On the other hand, nonparametric Bayesian intensity estimation has so far been considered almost exclusively in models without covariates; see [17, 1, 3, 14, 7] for methodology and theoretical results, as well as for further references. The first frequentist asymptotic analysis of posterior distributions for covariate-driven point processes was developed in the recent paper by Giordano et al. [10], in an increasing domain framework analogous to the one considered in [13]. Among their results, they showed that truncated Gaussian wavelet priors, combined with suitable positive link functions, achieve minimax-optimal rates of posterior contraction towards the ground truth, in $L^1$-distance.

In this paper, we build on the investigation of [10], obtaining in the same setting optimal posterior contraction rates for a much wider class of (rescaled) Gaussian priors, including widespread choices such as the popular Matérn processes (e.g. [8, Section 11.4.4]); see Theorem 3. The proof of our result follows the general program set forth in [10], based on the derivation of preliminary convergence properties in a covariate-dependent metric, cf. (7), to be combined with concentration inequalities for integral functionals of stationary processes. To obtain the required uniformity over the prior support beyond the specific wavelet structure considered in [10], we then employ techniques from the statistical theory of inverse problems, e.g. [11, 12, 18].

Another novelty in our result is that, compared to [10], we also significantly weaken the assumptions on the link function, which we only require to be locally Lipschitz (and bijective). Among other things, this allows us to incorporate into the procedure the standard exponential link, as well as the sigmoid one, which had previously been employed for efficient posterior sampling in non-covariate-based intensity estimation [1]; see also [14]. Thus, while theoretical, our result also has potentially interesting methodological implications. We discuss these and other related research questions in the concluding discussion in Section 3.

2 Posterior contraction rates for covariate-based intensity estimation

2.1 Observation model

Throughout, we take $\mathbb{R}^D$ ($D \in \mathbb{N}$) to be the ambient space. Let the covariates be given by a $d$-dimensional ($d \in \mathbb{N}$) random field $Z := (Z(x),\, x \in \mathbb{R}^D)$, and let $(\mathcal{W}_n)_{n \in \mathbb{N}}$ be a sequence of compact sets in $\mathbb{R}^D$ satisfying $\mathcal{W}_n \subseteq \mathcal{W}_{n+1}$ for all $n$.
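A point process whose intensity is a transformation of an observed covariate field, as in the model of this section, can be simulated by Poisson thinning. A minimal one-dimensional sketch; the covariate field `Z`, the exponential link, and all numerical values below are illustrative stand-ins (in particular, the toy `Z` is not the transformed Gaussian field required later by Condition 1):

```python
import math
import random

def knuth_poisson(rng, lam):
    """Sample Poisson(lam) via Knuth's multiplication method (fine for lam < ~700)."""
    limit, k, prod = math.exp(-lam), 0, rng.random()
    while prod > limit:
        k += 1
        prod *= rng.random()
    return k

def simulate_cox_1d(rho, Z, window=(0.0, 10.0), seed=0):
    """Thin a homogeneous Poisson process of rate lam_max on the window, keeping a
    point at x with probability rho(Z(x)) / lam_max; the survivors form an
    inhomogeneous Poisson process with intensity rho(Z(x))."""
    rng = random.Random(seed)
    a, b = window
    # crude upper bound for rho(Z(.)) taken over a fine grid (ok for smooth toys)
    lam_max = 1.01 * max(rho(Z(a + (b - a) * t / 1000)) for t in range(1001))
    n_prop = knuth_poisson(rng, lam_max * (b - a))
    proposals = [a + (b - a) * rng.random() for _ in range(n_prop)]
    return [x for x in proposals if rng.random() < rho(Z(x)) / lam_max]

Z = lambda x: math.sin(x)          # toy covariate field, d = D = 1
rho = lambda z: math.exp(0.8 * z)  # exponential link, a stand-in for the unknown rho
points = simulate_cox_1d(rho, Z)
```

Replacing the toy `Z` by a realisation of a stationary Gaussian process (pushed through the standard normal CDF) would yield a covariate-driven Cox process of the kind analysed below.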
For some $n \in \mathbb{N}$, the data consist of an observed realisation of the covariates over the window $\mathcal{W}_n$, denoted by $Z^{(n)} := (Z(x),\, x \in \mathcal{W}_n)$, and of a random point process $N^{(n)}$ on $\mathcal{W}_n$ which, conditionally given $Z^{(n)}$, is of inhomogeneous Poisson type with intensity
$$\lambda^{(n)}_{\rho}(x) := \rho(Z(x)), \quad x \in \mathcal{W}_n, \tag{1}$$
for some unknown bounded function $\rho : \mathbb{R}^d \to [0,\infty)$. Formally, we may write
$$N^{(n)} \stackrel{d}{=} \{X_1, \dots, X_{N_n}\}, \quad N_n \,|\, Z^{(n)} \sim \mathrm{Po}\big(\Lambda^{(n)}_{\rho}\big), \quad X_i \,|\, Z^{(n)} \stackrel{iid}{\sim} \frac{\lambda^{(n)}_{\rho}(x)\,dx}{\Lambda^{(n)}_{\rho}}, \tag{2}$$
with $\Lambda^{(n)}_{\rho} := \int_{\mathcal{W}_n} \lambda^{(n)}_{\rho}(x)\,dx$. In other words, $N^{(n)}$ is a Cox process on $\mathcal{W}_n$ directed by the random measure $\lambda^{(n)}_{\rho}(x)\,dx$ [4]. The statistical problem is to estimate nonparametrically $\rho$ from data $(Z^{(n)}, N^{(n)})$. We will denote by $P^{(n)}_{\rho}$ the joint law of $(N^{(n)}, Z^{(n)})$.

By standard theory for Poisson processes, e.g. [15], $P^{(n)}_{\rho}$ is absolutely continuous with respect to the law $P^{(n)}_{1}$ corresponding to the homogeneous case, with log-likelihood equal to
$$\ell_n(\rho) := \log \frac{dP^{(n)}_{\rho}}{dP^{(n)}_{1}}\big(D^{(n)}\big) = \int_{\mathcal{W}_n} \log(\rho(Z(x)))\,dN^{(n)}(x) - \int_{\mathcal{W}_n} \rho(Z(x))\,dx, \tag{3}$$
cf. [15, Theorem 1.3]. We will write $E^{(n)}_{\rho}$ for the expectation with respect to $P^{(n)}_{\rho}$.

2.2 Assumptions on the covariates

We consider the important case where the covariates are given by a Gaussian process, according to the following assumption. Under the latter, the point process (2) is then a nonparametric generalisation of the celebrated log-Gaussian Cox process [17], wherein $\rho$ in (1) takes the parametric form $\rho(z) = \exp(\beta^{T} z)$ for some $\beta \in \mathbb{R}^d$. Gaussian processes are among the most ubiquitously employed models for spatial random fields in statistical applications, e.g. [5, Section 2.3].

Condition 1. Let $\tilde{Z}^{(h)} := (\tilde{Z}^{(h)}(x),\, x \in \mathbb{R}^D)$, for $h = 1, \dots, d$, be independent, almost surely locally bounded, centred and stationary Gaussian processes with integrable covariance functions. Further assume, without loss of generality, that $\mathrm{Var}[\tilde{Z}^{(h)}(x)] = 1$ for all $h$ and $x$. Let $Z$ be given by
$$Z(x) := \big[\phi(\tilde{Z}^{(1)}(x)), \dots, \phi(\tilde{Z}^{(d)}(x))\big], \quad x \in \mathbb{R}^D, \tag{4}$$
where $\phi$ is the standard normal cumulative distribution function.

Under Condition 1, $\tilde{Z}(x) := (\tilde{Z}^{(1)}(x), \dots, \tilde{Z}^{(d)}(x)) \sim N_d(0, I_d)$ for all $x \in \mathbb{R}^D$, which amounts to the common practice of standardising the covariates, cf. [13, Section 3.2.1]. The component-wise application of $\phi$ in (4) is without loss of generality in view of its invertibility. It can be thought of as a convenient 'pre-processing' step that implies, in particular, that $Z$ takes values in the compact set $[0,1]^d$. The integrability of the covariances is a mild requirement that is satisfied under the (often realistic, e.g. [5, Section 2.3]) assumption that the correlation between covariates at distant locations decays sufficiently fast. This condition implies that $Z$ is ergodic, which, similarly to [13], crucially enables in our analysis a sufficient 'accumulation of information' from distinct points in the observation window with similar covariate values, allowing for consistent inference in the increasing domain asymptotics.

2.3 Prior specification

Gaussian processes are among the most universal models for prior distributions in function spaces, and have been successfully employed in related nonparametric Bayesian intensity estimation problems; see [17, 1, 14] and references therein. In the present setting, under Condition 1, we place on the unknown function $\rho : [0,1]^d \to [0,\infty)$ in (1) an ($n$-dependent) log-Gaussian prior $\Pi_n$, constructed starting from a base distribution that we require to satisfy the following mild regularity condition.

Condition 2. Let $\tilde{\Pi}$ be a centred Gaussian probability measure on the Banach space $C([0,1]^d)$ that is supported on a linear subspace of $C^1([0,1]^d)$.
Further assume that, for some $\alpha > 0$, the reproducing kernel Hilbert space (RKHS) $\tilde{\mathcal{H}}$ of $\tilde{\Pi}$ is equal to the Sobolev space $H^{\alpha}([0,1]^d)$, with RKHS norm satisfying $\|\cdot\|_{\tilde{\mathcal{H}}} \simeq \|\cdot\|_{H^{\alpha}}$.

Condition 2 is satisfied by a wide variety of Gaussian processes, including by stationary ones with polynomially-tailed spectral measures (e.g. the popular Matérn processes, cf. [8, Section 11.4.4]), as well as by series priors defined on bases spanning the scale of Sobolev spaces (e.g. [8, Section 11.4.3]). Given $\tilde{\Pi}$ satisfying Condition 2, we then construct the prior $\Pi_n$ as the law of the random function
$$R_n(z) := \exp\big(n^{-d/(4\alpha+2d)}\,\tilde{W}(z)\big), \quad z \in [0,1]^d, \quad \tilde{W} \sim \tilde{\Pi}. \tag{5}$$
The introduction of the exponential link in (5) is motivated by the positivity constraint on $\rho$. This particular choice was made for concreteness, and we note that any other locally Lipschitz and bijective link function may be used in the result to follow, including for example the sigmoid one used in [1, 14]. The $n$-dependent rescaling of the base Gaussian prior is a common technique in the statistical theory for inverse problems, e.g. [11, 12, 18]. In the proof, it implies a bound for the norm of the draws from $\Pi_n$, which is key to obtaining certain uniform concentration inequalities, going beyond the specific wavelet structure considered in [10]. Non-asymptotically, the rescaling amounts to a simple adjustment of the covariance function of $\tilde{\Pi}$, and does not require further tuning in practice.

2.4 Main result

In our main result, we characterise the speed of asymptotic concentration of the posterior distributions $\Pi_n(\cdot\,|\,N^{(n)}, Z^{(n)})$ arising from the log-Gaussian priors (5), under the frequentist assumption that the data $(N^{(n)}, Z^{(n)})$ have been generated by some fixed ground truth $\rho_0$. We work in the increasing domain asymptotics, widely adopted in spatial statistics, namely $\mathrm{vol}(\mathcal{W}_n) \to \infty$ as $n \to \infty$, which reflects the common situation where a single realisation of the points and covariates is observed over a large window. Similarly to [13], our proof requires $\mathcal{W}_n$ to grow uniformly in all directions, and for concreteness we restrict to square domains
$$\mathcal{W}_n = \Big[-\tfrac{1}{2}n^{1/D},\, \tfrac{1}{2}n^{1/D}\Big]^D, \tag{6}$$
whence $\mathrm{vol}(\mathcal{W}_n) = n$. Extensions to other domains containing the above sets are straightforward.

Theorem 3. Let $\rho_0 = \exp \circ\, w_0$, for some $w_0 \in C^{\beta}([0,1]^d) \cap H^{\beta}([0,1]^d)$ and some $\beta > \min(1, d/2)$. Consider data $(N^{(n)}, Z^{(n)}) \sim P^{(n)}_{\rho_0}$ arising from the observation model (2) with $\rho = \rho_0$, $\mathcal{W}_n$ as in (6) and with $Z$ a random field satisfying Condition 1. Let the prior $\Pi_n$ be as in (5), with $\tilde{\Pi}$ satisfying Condition 2 with $\alpha = \beta$. Then, for $M > 0$ large enough, as $n \to \infty$,
$$E^{(n)}_{\rho_0}\Big[\Pi_n\big(\rho : \|\rho - \rho_0\|_{L^1} > M n^{-\beta/(2\beta+d)}\,\big|\,N^{(n)}, Z^{(n)}\big)\Big] \to 0.$$

Theorem 3 asserts that the posterior distribution $\Pi_n(\cdot\,|\,N^{(n)}, Z^{(n)})$ asymptotically concentrates its mass over small neighbourhoods of the ground truth, whose $L^1$-radius shrinks at rate $n^{-\beta/(2\beta+d)}$. The latter is known to be minimax for $\beta$-smooth intensities, e.g. [15], implying the optimality of Theorem 3. The proof of the result, presented in Section 4, is based on recent work by Giordano et al. [10], who developed a general program to derive posterior contraction rates in covariate-based intensity estimation. In particular, the argument also implies that $\Pi_n(\cdot\,|\,N^{(n)}, Z^{(n)})$ concentrates at the same speed in the 'empirical' (i.e. covariate-dependent) $L^1$-metric
$$\frac{1}{n}\big\|\lambda^{(n)}_{\rho} - \lambda^{(n)}_{\rho_0}\big\|_{L^1(\mathcal{W}_n)} = \frac{1}{|\mathcal{W}_n|}\int_{\mathcal{W}_n} |\rho(Z(x)) - \rho_0(Z(x))|\,dx, \tag{7}$$
cf. (8) below.
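To put numbers on the exponents (the values of $n$, $\beta$, $d$ below are hypothetical): with $\alpha = \beta$, the rescaling $n^{-d/(4\alpha+2d)}$ in (5) coincides with $1/(\sqrt{n}\,\varepsilon_n)$ for the contraction rate $\varepsilon_n = n^{-\beta/(2\beta+d)}$ of Theorem 3, and the rate slows as the covariate dimension $d$ grows:

```python
import math

def contraction_rate(n, beta, d):
    """Rate eps_n = n^(-beta / (2*beta + d)) from Theorem 3."""
    return n ** (-beta / (2 * beta + d))

def prior_rescaling(n, alpha, d):
    """Rescaling factor n^(-d / (4*alpha + 2*d)) applied to the base draw in (5)."""
    return n ** (-d / (4 * alpha + 2 * d))

n, beta, d = 10_000, 2.0, 2  # hypothetical sample size, smoothness, covariate dim
eps = contraction_rate(n, beta, d)

# with alpha = beta, the rescaling equals 1 / (sqrt(n) * eps_n), as used in the proof
assert math.isclose(prior_rescaling(n, beta, d), 1 / (math.sqrt(n) * eps))

# the rate deteriorates as the covariate dimension d grows
rates = [contraction_rate(n, beta, dd) for dd in (1, 2, 5, 10)]
assert all(r1 < r2 for r1, r2 in zip(rates, rates[1:]))
```

The exponent identity follows from $\frac{1}{2} - \frac{\beta}{2\beta+d} = \frac{d}{2(2\beta+d)}$, which is how the rescaled law $\tilde{\Pi}_n$ is introduced in the proof of Section 4.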
In particular, the proposed nonparametric Bayesian methodology thus also optimally solves the 'prediction' problem of inferring the overall spatial intensity function $\lambda^{(n)}_{\rho}$ in (1).

3 Concluding discussion

In this article, we have considered Gaussian process methods for estimating the intensity function of a covariate-driven point process. In a realistic increasing domain framework with ergodic covariates, our main result, Theorem 3, shows that the posterior distributions arising from a large class of (rescaled) Gaussian priors concentrate at an optimal rate around the ground truth. This is based on recent work by [10], and extends a result obtained in the latter reference for the specific case of truncated Gaussian wavelet priors.

We conclude by mentioning some related interesting research questions. Firstly, Theorem 3 is non-adaptive, in that the prior specification under which optimal rates are obtained requires knowledge of the regularity $\beta$ of the ground truth. Extensions to adaptive versions of the rescaled priors (5) in the present setting are of primary interest, but this is known to be an involved issue in the broader literature, cf. [11, 12].

Implementation of posterior inference with the considered Gaussian priors (as well as the other priors studied in [10]) is also of great importance. In ongoing work, we are exploring extensions to the problem at hand of the existing efficient posterior sampling methodologies for non-covariate-based Bayesian intensity estimation [1].

Lastly, beyond the large domain framework considered in the present article, 'in-fill asymptotics' are also of interest in spatial statistics. For covariate-based intensity estimation, this setting can be formulated as having access to repeated (and possibly independent) realisations of the covariates and the points over a fixed observation window. We refer to [10] for possible extensions of the proof techniques employed here to such situations. This is beyond the scope of the present article, but represents an interesting avenue for future research.

4 Proof

We employ techniques from [10], to which we refer for the necessary background. We begin by proving posterior contraction in the empirical distance (7). This is then combined with an exponential concentration inequality for $n^{-1}\|\lambda^{(n)}_{\rho} - \lambda^{(n)}_{\rho_0}\|_{L^1(\mathcal{W}_n)}$ around its 'ergodic average' $\|\rho - \rho_0\|_{L^1}$, which allows us to extend the obtained rates to the latter metric, concluding the proof.

Set $\varepsilon_n := n^{-\beta/(2\beta+d)}$. For $\tilde{\Pi}$ satisfying Condition 2, let $\tilde{W} \sim \tilde{\Pi}$ and denote by $\tilde{\Pi}_n$ the law of the random function $(\sqrt{n}\,\varepsilon_n)^{-1}\tilde{W} = n^{-d/(4\alpha+2d)}\tilde{W}$. Using standard techniques from the posterior contraction rate theory for (rescaled) Gaussian priors, e.g. [11, 12, 18], for all $K_1 > 0$ there exist a sufficiently large constant $M_1 > 0$ and a set $\mathcal{F}_n \subset \{f \in C^1([0,1]^d) : \|f\|_{C^1} \le M_1\}$ such that, for all $n$ large enough,
$$\tilde{\Pi}_n(\mathcal{F}_n^c) \le e^{-K_1 n \varepsilon_n^2}; \qquad \log N(\varepsilon_n; \mathcal{F}_n, \|\cdot\|_{L^\infty}) \lesssim n\varepsilon_n^2.$$
See e.g. Lemma 5 in [12] and its proof. Set $\mathcal{R}_n := \{\exp \circ f,\ f \in \mathcal{F}_n\}$. Then, by construction of $\Pi_n$, cf. (5), the first inequality in the last display implies that $\Pi_n(\mathcal{R}_n^c) \le e^{-K_1 n \varepsilon_n^2}$. Further, since $\mathcal{F}_n$ is contained in a sup-norm ball, $\|\exp \circ f_1 - \exp \circ f_2\|_{L^\infty} \lesssim \|f_1 - f_2\|_{L^\infty}$ for all $f_1, f_2 \in \mathcal{F}_n$. It follows that $\log N(\varepsilon_n; \mathcal{R}_n, \|\cdot\|_{L^\infty}) \lesssim \log N(\varepsilon_n; \mathcal{F}_n, \|\cdot\|_{L^\infty}) \lesssim n\varepsilon_n^2$. Lastly, using Corollary 2.6.18 of [9], since $w_0 \in H^{\beta}([0,1]^d) = \tilde{\mathcal{H}}$ by assumption, we have
$$\tilde{\Pi}_n(w : \|w - w_0\|_{L^\infty} \le \varepsilon_n) \ge e^{-\frac{1}{2}\|w_0\|^2_{\tilde{\mathcal{H}}}\, n \varepsilon_n^2}\, \tilde{\Pi}\big(w : \|w\|_{\infty} \le \sqrt{n}\,\varepsilon_n^2\big),$$
which, by the metric entropy estimate in Theorem 4.3.36 of [9] and the centred small ball inequality in Theorem 1.2 of [16], is lower bounded by $e^{-\frac{1}{2}\|w_0\|^2_{\tilde{\mathcal{H}}}\, n \varepsilon_n^2}\, e^{-c_1 n \varepsilon_n^2} = e^{-c_2 n \varepsilon_n^2}$ for some $c_1, c_2 > 0$.
Arguing as in the proof of Theorem 3.2 in [10], this implies that for all sufficiently large $M_2 > 0$, as $n \to \infty$,
\[
E^{(n)}_{\rho_0}\big[\Pi(\mathcal S_n \mid N^{(n)}, Z^{(n)})\big] \to 1, \qquad (8)
\]
where $\mathcal S_n := \{\rho \in \mathcal R_n : n^{-1}\|\lambda^{(n)}_\rho - \lambda^{(n)}_{\rho_0}\|_{L^1(W_n)} \le M_2\varepsilon_n\}$. Now, for $M > 0$ to be chosen below, set $\mathcal T_n := \{\rho \in C^1([0,1]^d) : \|\rho - \rho_0\|_{L^1} \le M\varepsilon_n\}$. The proof is then concluded by showing that $E^{(n)}_{\rho_0}[\Pi(\mathcal T_n^c \mid N^{(n)}, Z^{(n)})] \to 0$ as $n \to \infty$. To do so, note that by (8),
\[
\Pi(\mathcal T_n^c \mid N^{(n)}, Z^{(n)}) = \Pi(\mathcal T_n^c \cap \mathcal S_n \mid N^{(n)}, Z^{(n)}) + o_{P^{(n)}_{\rho_0}}(1)
= \frac{\int_{\mathcal T_n^c \cap \mathcal S_n} e^{\ell_n(\rho) - \ell_n(\rho_0)}\, d\Pi(\rho)}{\int_{C([0,1]^d)} e^{\ell_n(\rho) - \ell_n(\rho_0)}\, d\Pi(\rho)} + o_{P^{(n)}_{\rho_0}}(1).
\]
Denote by $D_n$ the denominator in the previous display. The proof of Theorem 3.2 in [10] shows that $P^{(n)}_{\rho_0}(D_n \le e^{-K_1 n\varepsilon_n^2}) = o(1)$ as $n \to \infty$ for some constant $K_1 > 0$, so that by Fubini's theorem,
\[
E^{(n)}_{\rho_0}\big[\Pi(\mathcal T_n^c \mid N^{(n)}, Z^{(n)})\big]
\le e^{K_1 n\varepsilon_n^2} \int_{\mathcal T_n^c \cap \{\rho \in \mathcal R_n : \|\rho\|_{C^1} \le M_1\}} \Pr\big(\|\lambda^{(n)}_\rho - \lambda^{(n)}_{\rho_0}\|_{L^1(W_n)} \le M_2 n\varepsilon_n\big)\, d\Pi(\rho) + o(1).
\]
Fix any $\rho \in \mathcal T_n^c \cap \{\rho \in \mathcal R_n : \|\rho\|_{C^1} \le M_1\}$. Then, if $\|\lambda^{(n)}_\rho - \lambda^{(n)}_{\rho_0}\|_{L^1(W_n)} \le M_2 n\varepsilon_n$, necessarily
\[
\Delta^{(n)}(\rho) := \|\rho - \rho_0\|_{L^1} - \frac{1}{n}\|\lambda^{(n)}_\rho - \lambda^{(n)}_{\rho_0}\|_{L^1(W_n)} > (M - M_2)\varepsilon_n \ge \frac{M_2}{2}\varepsilon_n
\]
upon taking $M > 2M_2$. For all such $M$, it follows that the expectation of interest is upper bounded by
\[
e^{K_1 n\varepsilon_n^2} \int_{\mathcal T_n^c \cap \{\rho \in \mathcal R_n : \|\rho\|_{C^1} \le M_1\}} \Pr\big(\Delta^{(n)}(\rho) > M_2\varepsilon_n/2\big)\, d\Pi(\rho) + o(1).
\]
The concentration inequality in Proposition D.1 of [10], applied with $f := |\rho - \rho_0|$ for $\rho \in C^1([0,1]^d)$, whose (weak) gradient satisfies $\|\nabla f\|_{L^\infty} \le \|\rho\|_{C^1} + \|\rho_0\|_{C^1} \le 2M_1$, now gives that
\[
\sup_{\rho \in \mathcal T_n^c \cap \{\rho \in \mathcal R_n : \|\rho\|_{C^1} \le M_1\}} \Pr\big(\Delta^{(n)}(\rho) > M_2\varepsilon_n/2\big) \le e^{-K_2 M_2^2 n\varepsilon_n^2}
\]
for some $K_2 > 0$. The claim follows by taking $M_2 > 0$ large enough and combining the last two displays.

Acknowledgement

M.G. has been partially supported by MUR, PRIN project 2022CLTYP4.

References

[1] Adams, R. P., Murray, I., and MacKay, D. J. C. Tractable nonparametric Bayesian inference in Poisson processes with Gaussian process intensities. In Proceedings of the 26th Annual International Conference on Machine Learning (ICML '09), Association for Computing Machinery, New York, NY, USA, 2009, pp. 9–16.
[2] Baddeley, A., Chang, Y.-M., Song, Y., and Turner, R. Nonparametric estimation of the dependence of a spatial point process on spatial covariates. Stat. Interface 5, 2 (2012), 221–236.
[3] Belitser, E., Serra, P., and van Zanten, H. Rate-optimal Bayesian intensity smoothing for inhomogeneous Poisson processes. J. Statist. Plann. Inference 166 (2015), 24–35.
[4] Cox, D. R. Some statistical methods connected with series of events. J. Roy. Statist. Soc. Ser. B 17 (1955), 129–157; discussion, 157–164.
[5] Cressie, N. Statistics for Spatial Data. John Wiley & Sons, 2015.
[6] Diggle, P. J. Statistical Analysis of Spatial and Spatio-Temporal Point Patterns, third ed., vol. 128 of Monographs on Statistics and Applied Probability. CRC Press, Boca Raton, FL, 2014.
[7] Donnet, S., Rivoirard, V., Rousseau, J., and Scricciolo, C. Posterior concentration rates for counting processes with Aalen multiplicative intensities. Bayesian Anal. 12, 1 (2017), 53–87.
[8] Ghosal, S., and van der Vaart, A. W. Fundamentals of Nonparametric Bayesian Inference. Cambridge University Press, New York, 2017.
[9] Giné, E., and Nickl, R. Mathematical Foundations of Infinite-Dimensional Statistical Models. Cambridge University Press, New York, 2016.
[10] Giordano, M., Kirichenko, A., and Rousseau, J. Nonparametric Bayesian intensity estimation for covariate-driven inhomogeneous point processes. arXiv preprint arXiv:2312.14073 (2023).
[11] Giordano, M., and Nickl, R. Consistency of Bayesian inference with Gaussian process priors in an elliptic inverse problem. Inverse Problems 36, 8 (2020), 085001.
[12] Giordano, M., and Ray, K. Nonparametric Bayesian inference for reversible multidimensional diffusions. Ann. Statist. 50, 5 (2022), 2872–2898.
[13] Guan, Y. On consistent nonparametric intensity estimation for inhomogeneous spatial point processes. J. Amer. Statist. Assoc. 103, 483 (2008), 1238–1247.
[14] Kirichenko, A., and van Zanten, H. Optimality of Poisson processes intensity learning with Gaussian processes. J. Mach. Learn. Res. 16 (2015), 2909–2919.
[15] Kutoyants, Y. A. Statistical Inference for Spatial Poisson Processes, vol. 134 of Lecture Notes in Statistics. Springer-Verlag, New York, 1998.
[16] Li, W. V., and Linde, W. Approximation, metric entropy and small ball estimates for Gaussian measures. Ann. Probab. 27, 3 (1999), 1556–1578.
[17] Møller, J., Syversveen, A. R., and Waagepetersen, R. P. Log Gaussian Cox processes. Scand. J. Statist. 25, 3 (1998), 451–482.
[18] Nickl, R. Bayesian Non-linear Statistical Inverse Problems. EMS Press, 2023.
Strong Low Degree Hardness for the Number Partitioning Problem

Rushil Mallarapu*   Mark Sellke†

Abstract

In the number partitioning problem (NPP) one aims to partition a given set of $N$ real numbers into two subsets with approximately equal sums. The NPP is a well-studied optimization problem and is famous for possessing a statistical-to-computational gap: when the $N$ numbers to be partitioned are i.i.d. standard Gaussian, the optimal discrepancy is $2^{-\Theta(N)}$ with high probability, but the best known polynomial-time algorithms only find solutions with a discrepancy of $2^{-\Theta(\log^2 N)}$. This gap is a common feature in optimization problems over random combinatorial structures, and indicates the need for a study that goes beyond worst-case analysis.

We provide evidence of a nearly tight algorithmic barrier for the number partitioning problem. Namely, we consider the family of low coordinate degree algorithms (with randomized rounding into the Boolean cube), and show that degree $D$ algorithms fail to solve the NPP to accuracy beyond $2^{-\tilde{O}(D)}$. According to the low degree heuristic, this suggests that simple brute-force search algorithms are nearly unimprovable, given any allotted runtime between polynomial and exponential in $N$. Our proof combines the isolation of solutions in the landscape with a conditional form of the overlap gap property: given a good solution to an NPP instance, slightly noising the instance typically leaves no good solutions near the original one. In fact, our analysis applies whenever the $N$ numbers to be partitioned are independent with uniformly bounded density.

Contents

1 Introduction
1.1 Statements of Main Results
https://arxiv.org/abs/2505.20607v1
1.2 Heuristic Optimality of Theorem 1.4
1.3 Notations and Preliminaries
1.4 Stability of Low Degree Algorithms
2 Hardness for Low Degree Algorithms
2.1 Preliminary Estimates
2.2 Conditional Landscape Obstruction
2.3 Hardness for LDP Algorithms
2.4 Hardness for Non-Rounded LCD Algorithms
2.5 Locally Improving Algorithms
2.6 Truly Random Rounding
References

* Harvard University. Email: rushil.mallarapu@pm.me
† Department of Statistics, Harvard University. Email: msellke@fas.harvard.edu

1 Introduction

The number partitioning problem (NPP) asks to partition $N$ given real numbers into two parts with approximately equal sums. That is, given real numbers $g_1, \ldots, g_N$, one aims to find the subset $A$ of $[N] := \{1, 2, \ldots, N\}$ which minimizes the discrepancy $|\sum_{i \in A} g_i - \sum_{i \notin A} g_i|$. Identifying the input $g_1, \ldots, g_N$ with a vector $g \in \mathbf{R}^N$, this is equivalent to choosing a point $x$ in the $N$-dimensional binary hypercube $\Sigma_N := \{\pm 1\}^N$ which minimizes the discrepancy:
\[
\min_{x \in \Sigma_N} |\langle g, x \rangle|. \qquad (1.1)
\]
Rephrased as a decision problem – whether there exists a subset $A \subseteq [N]$ (or a point $x \in \Sigma_N$) such that the discrepancy is zero, or sufficiently small – the NPP is NP-complete, as can be shown by reduction from subset sum. In fact, the NPP is one of the six basic NP-complete problems of Garey and Johnson, and of those, the only one involving numbers [GJ79, §3.1].

Solving the NPP has a number of practical applications. Together with a multiway generalization, it was first formulated by Graham, who considered it in the context of multiprocessor scheduling [Gra69]. Later work by Coffman, Garey, and
Johnson, as well as Tsai, looked at utilizing algorithms for the NPP for designing multiprocessor schedulers or large integrated circuits [CGJ78], [Tsa92]. Coffman and Lueker further discuss how the NPP can be used for resource allocation problems [CL91]. The NPP was also used to design an early public key cryptography scheme [MH78], later shown to be insecure by [Sha82].

An important application of the NPP to statistics comes from the design of randomized controlled trials [KAK19], [Har+24]. Consider $N$ individuals, each with a covariate vector $g_i \in \mathbf{R}^d$. The problem is then to divide them into a treatment group (denoted $A^+$) and a control group (denoted $A^-$), while ensuring that the covariates across both groups are balanced. In our notation, this amounts to finding an $A^+$ (with $A^- := [N] \setminus A^+$) minimizing
\[
\min_{A^+ \subseteq [N]} \Big\| \sum_{i \in A^+} g_i - \sum_{i \in A^-} g_i \Big\|_\infty. \qquad (1.2)
\]
This vector extension of the NPP (the $d = 1$ case) is called the vector balancing problem (VBP).

Another major source of interest in the NPP, as well as of potential explanations for its algorithmic hardness, comes from statistical physics. In the 1980s, Derrida introduced the eponymous random energy model (REM), a simplified example of a spin glass in which, unlike e.g. the Sherrington–Kirkpatrick model, the energy levels of different states are independent of each other [Der80], [Der81], [BFM04]. Despite this simplicity, this model enabled heuristic analyses of the Parisi theory for mean field spin glasses, and it was suspected that very general disordered systems would locally behave like the REM and its generalizations [BM04], [Kis15]. The NPP was the first system for which this local REM conjecture was confirmed [Bor+09a], [Bor+09b]. Relatedly, in the case when the $g_i$ are chosen
independently and uniformly from the discrete set $\{1, 2, 3, \ldots, 2^M\}$, Gent and Walsh conjectured that the hardness of finding perfect partitions (i.e., with discrepancy zero if $\sum_i g_i$ is even, and one else) was controlled by the parameter $\kappa := M/N$ [GW98], [GW00]. Mertens soon gave a nonrigorous statistical mechanics argument suggesting the existence of a phase transition from $\kappa < 1$ to $\kappa > 1$; that is, while solutions exist in the low $\kappa$ regime, they stop existing in the high $\kappa$ regime [Mer01]. It was also observed that this phase transition coincided with the empirical onset of computational hardness for typical algorithms, and Borgs, Chayes, and Pittel proved the existence of this phase transition soon after [Hay02], [BCP01].

The Statistical-to-Computational Gap. Many problems involving searches over random combinatorial structures (i.e., throughout high-dimensional statistics) exhibit a statistical-to-computational gap: the optimal values which are known to exist via non-constructive, probabilistic methods are far better than those achievable by state-of-the-art algorithms. In the pure optimization setting, examples of such gaps are found in random constraint satisfaction [MMZ05], [AC08], [Kot+17], finding maximal independent sets in sparse random graphs [GS14], [CE15], the largest submatrix problem [GL18], [GJS21], and the $p$-spin and diluted $p$-spin models [GJ21], [Che+19]. These gaps also arise in various planted models, such as matrix or tensor PCA [BR13], [LKZ15a], [LKZ15b], [HSS15], [Hop+17], [AGJ20], high-dimensional regression [GZ22], or the infamously hard planted clique problem [Jer92], [DM15], [MPW15], [Bar+19], [GZ24].
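For intuition, the perfect-partition criterion from the Gent–Walsh discussion above (discrepancy zero if $\sum_i g_i$ is even, one otherwise) can be checked by exhaustive search for small $N$; a minimal Python sketch (the function name is ours, for illustration):

```python
import itertools

def has_perfect_partition(g):
    """Brute-force check for a 'perfect' partition: achieved discrepancy
    0 if sum(g) is even, or 1 if sum(g) is odd.  Exponential in len(g);
    intended only for small instances."""
    target = 0 if sum(g) % 2 == 0 else 1
    best = min(
        abs(sum(s * v for s, v in zip(signs, g)))
        for signs in itertools.product((-1, 1), repeat=len(g))
    )
    return best <= target
```

Sampling the $g_i$ uniformly from $\{1, \ldots, 2^M\}$ and varying $\kappa = M/N$ with such a checker is one way to observe the conjectured phase transition empirically on small instances.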
These indicate that such problems are "hard" beyond worst-case NP-hardness: algorithms fail even on average-case instances. The NPP is
no exception, and exhibits an exponentially wide statistical-to-computational gap. For the global optimum, Karmarkar, Karp, Lueker, and Odlyzko showed that when the $g_i$ are i.i.d. random variables with a sufficiently nice distribution,¹ the minimum discrepancy of (1.1) is $\Theta(\sqrt{N}\, 2^{-N})$ with high probability as $N \to \infty$ [Kar+86]. Their result also extends to even partitions, where the sizes of the two subsets are equal (i.e., for $N$ even), worsening only to $\Theta(N 2^{-N})$.

Yet the best known algorithms cannot achieve discrepancies anywhere close to this in polynomial time. A simple greedy algorithm based on sorting $g_1, \ldots, g_N$ and grouping adjacent pairs is known to achieve discrepancy $O(N^{-1})$ (see [Mer06]), and more recently [KAK19] gave another algorithm achieving discrepancy $O(N^{-2})$ for balanced partitions. The best known algorithm, however, is due to Karmarkar and Karp [KK83] from 1983, and achieves discrepancy $O(N^{-O(\log N)}) = 2^{-O(\log^2 N)}$. See also [Lue87] and [Yak96] for performance bounds on the related partial differencing method (PDM) and largest differencing method (LDM), and [BM08] for a conjecturally sharper description of the performance of the latter. The situation for the multidimensional VBP is similar: restricting for simplicity to fixed $d$ as $N$ grows, the minimum discrepancy is typically $\Theta(\sqrt{N}\, 2^{-N/d})$ [TMR20], with an extension of the Karmarkar–Karp algorithm achieving discrepancy $2^{-\Theta(\log^2 N)/d}$.

Algorithmic Hardness and Landscape Obstructions. A broadly successful approach to random optimization problems has emerged recently, based on analyzing the geometry of the solution landscape.
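The Karmarkar–Karp largest differencing method mentioned above admits a compact implementation; the following Python sketch (ours, for illustration) returns the achieved discrepancy, though not the sign vector itself:

```python
import heapq

def karmarkar_karp(g):
    """Largest differencing method (Karmarkar-Karp): repeatedly commit
    the two largest remaining values to opposite sides of the partition,
    re-inserting their difference.  Returns the final discrepancy
    |<g, x>| for the implicit signing x produced by this process."""
    heap = [-abs(v) for v in g]  # max-heap via negated values
    heapq.heapify(heap)
    while len(heap) > 1:
        a, b = -heapq.heappop(heap), -heapq.heappop(heap)  # a >= b >= 0
        heapq.heappush(heap, -(a - b))
    return -heap[0]
```

Recovering the signing $x$ additionally requires tracking which pairs were differenced (the merge tree); on i.i.d. Gaussian inputs this heuristic typically attains discrepancy $2^{-O(\log^2 N)}$, far above the statistical optimum $\Theta(\sqrt{N}\, 2^{-N})$.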
Many "hard" random optimization problems have a certain disconnectivity property, in which solutions tend to live in isolated, separated clusters [MMZ05], [ACR11], [AC08], [AMS25], [GJK23]. In the seminal work [GS14], Gamarnik and Sudan showed how to use a strong form of this disconnectivity to deduce rigorous hardness results for classes of stable algorithms, in what has become known as the overlap gap property (OGP) [Gam21]. In its simplest form, an OGP states that for certain constants $0 \le \nu_1 < \nu_2$, any two near-optimal states $x, x'$ for a particular instance $g$ of the problem either have $d(x, x') < \nu_1$ or $d(x, x') > \nu_2$. That is, pairs of solutions are either close to each other, or much further away – the condition that $\nu_1 < \nu_2$ ensures that the "diameter" of solution clusters is smaller than the separation between these clusters.

¹ Specifically, having bounded density and finite fourth moment.

The original work [GS14] studied maximal independent sets in sparse random graphs, obtaining suboptimal bounds later sharpened by [RV17], [Wei22]. However, the method has been greatly developed since then, finding applications to a host of other problems such as the largest average submatrix [GL18], mean-field spin glasses and related combinatorial optimization problems [GJ21], [GJW24], [Che+19], [HS25a], [HS23], [Jon+23], graph alignment [DGH25], (Not-All-Equal)-$k$-SAT [GS17], [BH21], and the random perceptron [Gam+22], [Gam+23], [LSZ24].

For the NPP, it was expected for decades that the "brittleness" of the solution landscape would be a central barrier to finding successful algorithms that close the statistical-to-computational gap. Mertens wrote in 2001 that any local heuristics, which only look at fractions of the domain, would fail to outperform random search [Mer01, §4.3]. This was backed up by the failure of many algorithms based on locally refining Karmarkar–Karp-optimal solutions, such as
simulated annealing [AFG96], [SFD96], [Joh+89], [Joh+91], [Ali+05]. Previously, Gamarnik and Kızıldağ applied the OGP methodology to the NPP, proving that a more complicated multi-OGP holds for discrepancies of $2^{-\Theta(N)}$ (i.e., at the statistical near-optimum), but is absent at discrepancies $2^{-E_N}$ with $\omega(1) \le E_N \le o(N)$ [GK23]. However, they were able to prove that for $\varepsilon \in (0, 1/5)$, no stable algorithm (suitably defined) can find solutions with discrepancy $2^{-E_N}$ for $\omega(N \log^{-1/5 + \varepsilon} N) \le E_N \le o(N)$ [GK23, Thm. 3.2]. These results point to the efficacy of using landscape obstructions to show algorithmic hardness for the NPP, which we will take advantage of in Section 2.

Other evidence of hardness for the NPP has been provided by [Hob+17], who showed that a polynomial-time approximation oracle achieving discrepancy $O(2^{\sqrt{N} - N})$ could give polynomial-time approximations in Minkowski's theorem on symmetric convex bodies. Very recently, Vafa and Vaikuntanathan showed that the Karmarkar–Karp algorithm's performance is nearly optimal among polynomial-time algorithms, assuming the worst-case hardness of the shortest vector problem on lattices [VV25].

1.1 Statements of Main Results

We will use the following convenient terminology for the quality of an NPP solution.

Definition 1.1. Let $g \in \mathbf{R}^N$ be an instance of the NPP, and let $x \in \Sigma_N$. The energy of $x$ is $E(x; g) := -\log_2 |\langle g, x \rangle|$. The solution set $S(E; g)$ is the set of all $x \in \Sigma_N$ that have energy at least $E$, i.e., that satisfy $|\langle g, x \rangle| \le 2^{-E}$.

Observe here that minimizing the discrepancy $|\langle g, x \rangle|$ corresponds to maximizing the energy $E(x; g)$; this terminology is motivated by the statistical physics literature [Mer01]. Rephrasing the preceding discussion, the statistically optimal energy level is $E = \Theta(N)$, while the best computational energy level currently known to be achievable in polynomial time is $E = \Theta(\log^2 N)$.

For our purposes, a randomized algorithm is a measurable function $\mathcal{A} : (g, \omega) \mapsto x \in \Sigma_N$, where $\omega \in \Omega_N$ is an independent random variable in some Polish space $\Omega_N$. Such an $\mathcal{A}$ is deterministic if it does not depend on $\omega$. For our main analysis, considering deterministic $\Sigma_N$-valued algorithms will suffice. We consider two classes of low degree algorithms, given by either low degree polynomials or by functions with low coordinate degree. Their study is motivated by the well-established low degree heuristic: degree $D$ algorithms (in either sense) are believed to be good proxies for the broader class of $e^{\tilde{O}(D)}$-time algorithms [Hop18], [KWB19]. At the same time, these classes are known to enjoy favorable stability properties making them amenable to rigorous analysis.

Our results show strong low degree hardness for the NPP at essentially all energy levels between the statistical and computational thresholds, in the sense of [HS25b].

Definition 1.2 (Strong Low Degree Hardness [HS25b, Def. 3]). A sequence of random search problems, that is, an $N$-indexed sequence of random input vectors $g_N \in \mathbf{R}^{d_N}$ and random subsets $S_N = S_N(g_N) \subseteq \Sigma_N$, exhibits strong low degree hardness (SLDH) up to degree $D \le o(D_N)$ if, for all sequences of degree $o(D_N)$ algorithms $\mathcal{A}_N : (g, \omega) \mapsto x$ with $\mathbf{E}\|\mathcal{A}(y_N)\|^2 \le O(N)$, we have $\mathbf{P}(\mathcal{A}_N(g_N, \omega) \in S_N) \le o(1)$.

There are two related notions of degree which we want to consider
in Definition 1.2. The first is traditional polynomial degree, applicable to algorithms given in each coordinate by low degree polynomial functions of the inputs. In this case, we show:

Theorem 1.3 (Hardness for LDP Algorithms). Let $g \sim \mathcal{N}(0, I_N)$ be a standard normal random vector. Let $\mathcal{A}$ be any $\Sigma_N$-valued algorithm with $\mathbf{E}\|\mathcal{A}(g, \omega)\|^2 \le CN$ which obeys the stability estimate (1.5) below with degree $D$ in the LDP case. Assume $\omega(\log N) \le E \le N$ and $D \le o(2^{E/4})$. Then $\mathbf{P}(\mathcal{A}(g, \omega) \in S(E; g)) = o(1)$.

Let us make a few comments about this result. First, the phrasing does not actually require $\mathcal{A}$ to be a polynomial, because a finite-degree polynomial cannot actually map standard Gaussian inputs to $\Sigma_N$. This aside, however, the implications of Theorem 1.3 under the low degree heuristic are unreasonably pessimistic, and primarily indicate that low degree polynomials are a poor proxy for efficient algorithms. For example, Theorem 1.3 rules out $N^{\Omega(\log N)}$-degree polynomial algorithms from even achieving the same energy threshold as the (polynomial-time) Karmarkar–Karp algorithm, but the low degree heuristic suggests that such polynomials constitute a good proxy for all exponential-time algorithms. At the other extreme, the low degree heuristic suggests that polynomial algorithms require doubly exponential time to achieve the statistically optimal discrepancy $E = \Theta(N)$, while brute-force search in fact requires at most exponential time.

Thus we turn to the second, more expressive notion of coordinate degree: a function $f : \mathbf{R}^N \to \mathbf{R}$ has coordinate degree $D$ if it can be expressed as a linear combination of functions each depending on no more than $D$ coordinates. As coordinate degree is always at most the polynomial degree, this enables us