$P_n$ and $P'_n$, respectively. Similarly, $Pf$ represents the expectation of $f$ under $P(x,y) = F(x)G(y)$. Thus, the expressions are given by

$$P_n f = \frac{1}{n}\sum_{i=1}^n f(X_i, Y_i), \quad P'_n f = \frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^n f(X_i, Y_j), \quad Pf = \int f \, dP. \tag{A.1}$$

We choose

$$f(x,y) = |F(x) - G(y)| \quad \text{and} \quad f_n(x,y) = |F_n(x) - G_n(y)|, \tag{A.2}$$

where $F_n(x) = \frac{1}{n}\sum_{k=1}^n I(X_k \le x)$ and $G_n(y) = \frac{1}{n}\sum_{k=1}^n I(Y_k \le y)$. Let $\mathcal{F}_0$ be the collection of cumulative distribution functions of all univariate continuous variables, and let

$$\mathcal{F} = \{|F(x) - G(y)| \in \mathbb{R} : F, G \in \mathcal{F}_0 \text{ and } x, y \in \mathbb{R}\}.$$

Example 19.6 of Van der Vaart (2000) shows that $\mathcal{F}_0$ is a $P$-Donsker class. Thus, for all $(x,y) \in \mathbb{R}^2$ and $F, G \in \mathcal{F}_0$, by the preservation results on page 19 of Kosorok (2008), together with the identity $|a-b| = \max\{a,b\} - \min\{a,b\}$, $\mathcal{F}$ is also a $P$-Donsker class.

By the law of large numbers, $F_n(x) \xrightarrow{a.s.} F(x)$ and $G_n(y) \xrightarrow{a.s.} G(y)$ for every $x$ and $y$, so $f_n(x,y) \xrightarrow{a.s.} f(x,y)$; hence, for some $f \in L_2(P)$, $\int (f_n - f)^2 \, dP \xrightarrow{p} 0$ follows.

Then, define $\mathbb{G}_n = \sqrt{n}(P_n - P)$ and $\mathbb{G}'_n = \sqrt{n}(P'_n - P)$. The empirical processes evaluated at $f$ are $\mathbb{G}_n f = \sqrt{n}(P_n f - Pf)$ and $\mathbb{G}'_n f = \sqrt{n}(P'_n f - Pf)$. By Lemma 19.24 in Van der Vaart (2000), one has

$$\mathbb{G}'_n(f_n - f) = o_p(1), \quad \mathbb{G}_n(f_n - f) = o_p(1),$$

i.e., $(P'_n - P)(f_n - f) = o_p(n^{-1/2})$ and $(P_n - P)(f_n - f) = o_p(n^{-1/2})$. Further expanding these two expressions leads to

$$P'_n f_n - P'_n f - P f_n + P f = o_p(n^{-1/2}), \quad P_n f_n - P_n f - P f_n + P f = o_p(n^{-1/2}).$$

Note that the combination of equations (A.1) and (A.2) yields the following forms:

$$P'_n f_n = \frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^n |F_n(X_i) - G_n(Y_j)|, \quad P_n f_n = \frac{1}{n}\sum_{i=1}^n |F_n(X_i) - G_n(Y_i)|,$$

$$P'_n f = \frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^n |F(X_i) - G(Y_j)| = \frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^n |U_i - V_j|,$$

$$P_n f = \frac{1}{n}\sum_{i=1}^n |F(X_i) - G(Y_i)| = \frac{1}{n}\sum_{i=1}^n |U_i - V_i|,$$

$$P f_n = \mathrm{E}|F_n(X_i) - G_n(Y_i)|, \quad P f = \mathrm{E}|F(X_i) - G(Y_i)| = \mathrm{E}|U_i - V_i|.$$
Thus,

$$\frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^n |F_n(X_i) - G_n(Y_j)| = \frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^n |U_i - V_j| + Pf_n - Pf + o_p(n^{-1/2}),$$

$$\frac{1}{n}\sum_{i=1}^n |F_n(X_i) - G_n(Y_i)| = \frac{1}{n}\sum_{i=1}^n |U_i - V_i| + Pf_n - Pf + o_p(n^{-1/2}).$$

Combining the above equations, we obtain

$$\begin{aligned}
\phi_n &= 1 - \frac{3}{n^2-1}\sum_{i=1}^n |R_i - S_i| = \frac{3}{n^2-1}\left(\frac{n^2-1}{3} - \sum_{i=1}^n |R_i - S_i|\right) \\
&= \frac{3}{n^2-1}\left(\frac{1}{n}\sum_{i=1}^n\sum_{j=1}^n |R_i - S_j| - \sum_{i=1}^n |R_i - S_i|\right) \\
&= \frac{3n^2}{n^2-1}\left(\frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^n |F_n(X_i) - G_n(Y_j)| - \frac{1}{n}\sum_{i=1}^n |F_n(X_i) - G_n(Y_i)|\right) \\
&= \frac{3n^2}{n^2-1}\left(\frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^n |U_i - V_j| - \frac{1}{n}\sum_{i=1}^n |U_i - V_i|\right) + o_p(n^{-1/2}) \\
&= \widetilde{\phi}_n + o_p(n^{-1/2}).
\end{aligned}$$

Next, relying on the facts stated in Lemma A.1, we deal with the expectation and variance of $\widetilde{\phi}_n$. Let $C_1 = \frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^n |U_i - V_j|$ and $C_2 = \frac{1}{n}\sum_{i=1}^n |U_i - V_i|$; it is easy to calculate that $\mathrm{E}\widetilde{\phi}_n = 0$. Furthermore, routine calculation yields

$$\mathrm{Var}(C_1) = \frac{1}{n^4}\left\{n^2\,\mathrm{Var}(|U_1 - V_1|) + 2n^2(n-1)\,\mathrm{Cov}(|U_1 - V_1|, |U_1 - V_2|)\right\} = \frac{1}{n^4}\left\{\frac{n^2}{18} + \frac{2n^2(n-1)}{180}\right\} = \frac{n+4}{90n^2},$$

$$\mathrm{Var}(C_2) = \frac{1}{n^2}\cdot n\,\mathrm{Var}(|U_1 - V_1|) = \frac{1}{18n},$$

$$\mathrm{Cov}(C_1, C_2) = \frac{1}{n^3}\left\{n\,\mathrm{Var}(|U_1 - V_1|) + 2n(n-1)\,\mathrm{Cov}(|U_1 - V_1|, |U_1 - V_2|)\right\} = \frac{1}{n^3}\left\{\frac{n}{18} + \frac{2n(n-1)}{180}\right\} = \frac{n+4}{90n^2}.$$

Ultimately, it is derived that

$$\mathrm{Var}(\widetilde{\phi}_n) = \left(\frac{3n^2}{n^2-1}\right)^2\left[\mathrm{Var}(C_1) + \mathrm{Var}(C_2) - 2\,\mathrm{Cov}(C_1, C_2)\right] = \left(\frac{3n^2}{n^2-1}\right)^2\left(\frac{n+4}{90n^2} + \frac{1}{18n} - \frac{2(n+4)}{90n^2}\right) = \frac{2n^2}{5(n+1)^2(n-1)}.$$

The proof of Theorem 2.1 is now complete.
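As a quick sanity check on the centering and variance formula above, one can simulate $\widetilde{\phi}_n$ directly: under independence, $U_i = F(X_i)$ and $V_i = G(Y_i)$ are independent Uniform(0,1) draws. The following Python sketch (the simulation sizes are arbitrary choices of ours) compares the Monte Carlo mean and variance of $\widetilde{\phi}_n$ against $0$ and $2n^2/(5(n+1)^2(n-1))$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 20, 20000
theo_var = 2 * n**2 / (5 * (n + 1)**2 * (n - 1))

vals = np.empty(reps)
for r in range(reps):
    U = rng.random(n)                              # U_i = F(X_i) ~ Uniform(0,1)
    V = rng.random(n)                              # V_i = G(Y_i) ~ Uniform(0,1)
    C1 = np.abs(U[:, None] - V[None, :]).mean()    # (1/n^2) sum_{i,j} |U_i - V_j|
    C2 = np.abs(U - V).mean()                      # (1/n)   sum_i   |U_i - V_i|
    vals[r] = 3 * n**2 / (n**2 - 1) * (C1 - C2)    # tilde-phi_n

print(vals.mean())                 # close to 0
print(vals.var(), theo_var)        # empirical vs. theoretical variance
```

The empirical moments agree with the derivation to within Monte Carlo error.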
Proof of Theorem 2.2. Rewrite $\widetilde{\phi}_n$ as

$$\begin{aligned}
\widetilde{\phi}_n &= \frac{3}{n^2-1}\left(\sum_{i=1}^n\sum_{j=1}^n |U_i - V_j| - n\sum_{i=1}^n |U_i - V_i|\right) \\
&= \frac{3}{n^2-1}\left(\sum_{i\neq j} |U_i - V_j| - (n-1)\sum_{i=1}^n |U_i - V_i|\right) \\
&= \frac{3}{n^2-1}\left(\frac{n(n-1)}{2}T_1 - (n-1)T_2\right), \tag{A.3}
\end{aligned}$$

where $T_1 = \frac{2}{n(n-1)}\sum_{i\neq j} |U_i - V_j|$ and $T_2 = \sum_{i=1}^n |U_i - V_i|$. Since $T_2$ is already a sum of independent and identically distributed terms, its Hájek projection remains itself. Thus, we only need to calculate the Hájek representation of $T_1$. In fact, $T_1$ is a U-statistic that can be expressed as

$$T_1 = \frac{2}{n(n-1)}\sum_{i\neq j} |U_i - V_j| = \frac{2}{n(n-1)}\sum_{i<j} h\left((U_i, V_i)^\top, (U_j, V_j)^\top\right),$$

where the symmetric kernel function is taken as

$$h\left((u_1, v_1)^\top, (u_2, v_2)^\top\right) = |u_1 - v_2| + |u_2 - v_1|.$$

It is evident that the variance of $T_1$ exists. Let $\theta = \mathrm{E}\left[h\left((U_i, V_i)^\top, (U_j, V_j)^\top\right)\right]$ and $h_1\left((u, v)^\top\right) = \mathrm{E}\left[h\left((u, v)^\top, (U_2, V_2)^\top\right)\right] - \theta$. According to Lemma A.1 and through simple derivation, we have

$$\theta = \mathrm{E}(|U_1 - V_2| + |U_2 - V_1|) = \frac{2}{3} \quad \text{and} \quad h_1\left((u, v)^\top\right) = \frac{1}{3} - u(1-u) - v(1-v).$$

The projection of $T_1 - \frac{2}{3}$ is then given by

$$\widetilde{T}_1 := \frac{2}{n}\sum_{i=1}^n\left[\frac{1}{3} - U_i(1-U_i) - V_i(1-V_i)\right].$$

Then, by applying Lemma 12.3 from Van der Vaart (2000), we obtain $T_1 - \frac{2}{3} = \widetilde{T}_1 + O_p(n^{-1})$, i.e.,

$$T_1 = \frac{2}{n}\sum_{i=1}^n\left[\frac{1}{3} - U_i(1-U_i) - V_i(1-V_i)\right] + \frac{2}{3} + O_p(n^{-1}).$$
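The closed form of $h_1$ can be verified numerically: for $V \sim$ Uniform(0,1), $\mathrm{E}|u - V| = \tfrac{1}{2} - u(1-u)$, so $\mathrm{E}\left[h((u,v)^\top, (U_2,V_2)^\top)\right] - \tfrac{2}{3} = \tfrac{1}{3} - u(1-u) - v(1-v)$. A quick midpoint-rule check in Python (the grid size and test points are arbitrary choices of ours):

```python
import numpy as np

grid = (np.arange(200_000) + 0.5) / 200_000   # midpoint rule on [0, 1]

def h1_numeric(u, v):
    # E[|u - V| + |U - v|] - theta, with U, V ~ Uniform(0,1) and theta = 2/3
    return np.abs(u - grid).mean() + np.abs(grid - v).mean() - 2 / 3

def h1_closed(u, v):
    return 1 / 3 - u * (1 - u) - v * (1 - v)

for u, v in [(0.2, 0.7), (0.5, 0.5), (0.9, 0.1)]:
    print(h1_numeric(u, v), h1_closed(u, v))   # the two columns agree
```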
https://arxiv.org/abs/2505.01825v1
Substituting this result into Equation (A.3), we get

$$\widetilde{\phi}_n = \frac{3}{n+1}\sum_{i=1}^n\left(\frac{2}{3} - |U_i - V_i| - U_i(1-U_i) - V_i(1-V_i)\right) + O_p(n^{-1}) = \widehat{\phi}_n + O_p(n^{-1}).$$

Additionally, utilizing the results from Lemma A.1, it is easy to derive that $\mathrm{E}\widehat{\phi}_n = 0$ and

$$\begin{aligned}
\mathrm{Var}(\widehat{\phi}_n) &= \left(\frac{3}{n+1}\right)^2 \cdot n\,\mathrm{Var}\left(\frac{2}{3} - |U_i - V_i| - U_i(1-U_i) - V_i(1-V_i)\right) \\
&= n\left(\frac{3}{n+1}\right)^2\left[\mathrm{Var}(|U_1 - V_1|) + 2\,\mathrm{Var}(U_1(1-U_1)) + 4\,\mathrm{Cov}(|U_1 - V_1|, U_1(1-U_1))\right] \\
&= n\left(\frac{3}{n+1}\right)^2\left(\frac{1}{18} + 2\times\frac{1}{180} + 4\times\left(-\frac{1}{180}\right)\right) = \frac{2n}{5(n+1)^2}.
\end{aligned}$$

Thus, this proof is complete.

Proof of Theorem 2.3. $\widehat{\phi}_n$ can be expressed as a sum of independent and identically distributed random variables with finite second moment, so its asymptotic normality follows from the ordinary central limit theorem. By further utilizing the asymptotic representations of Theorem 2.1 and Theorem 2.2, the asymptotic normality of $\widetilde{\phi}_n$ and $\phi_n$ is also apparent.

References

Bukovšek, D. K., Mojškerc, B., 2022. On the exact region determined by Spearman's footrule and Gini's gamma. Journal of Computational and Applied Mathematics 410, 114212.

Chen, C., Xu, W., Zhang, W., Zhu, H., Dai, J., 2023. Asymptotic properties of Spearman's footrule and Gini's gamma in bivariate normal model. Journal of the Franklin Institute 360 (13), 9812–9843.

Chen, L. H., Fang, X., Shao, Q.-M., 2013. From Stein identities to moderate deviations. The Annals of Probability 41 (1), 262–293.

Diaconis, P., Graham, R. L., 1977. Spearman's footrule as a measure of disarray. Journal of the Royal Statistical Society Series B: Statistical Methodology 39 (2), 262–268.

Hoeffding, W., 1951. A combinatorial central limit theorem. The Annals of Mathematical Statistics, 558–566.

Kosorok, M. R., 2008. Introduction to Empirical Processes and Semiparametric Inference. Vol. 61. Springer.

Nelsen, R. B., 2006. An Introduction to Copulas, 2nd Edition. Springer, New York.

Pérez, A., Prieto-Alaiz, M., Chamizo, F., Liebscher, E., Úbeda-Flores, M., 2023. Nonparametric estimation of the multivariate Spearman's footrule: a further discussion. Fuzzy Sets and Systems 467, 108489.

Sen, P. K., Salama, I. A., 1983. The Spearman footrule and a Markov chain property. Statistics & Probability Letters 1 (6), 285–289.

Shi, X., Xu, M., Du, J., 2023. Max-sum test based on Spearman's footrule for high-dimensional independence tests. Computational Statistics & Data Analysis 185, 107768.

Shi, X., Zhang, W., Du, J., Kwessi, E., 2024. Testing independence based on Spearman's footrule in high dimensions. Communications in Statistics - Theory and Methods, 1–18.

Small, C. G., 2010. Expansions and Asymptotics for Statistics. Chapman and Hall/CRC.

Spearman, C., 1906. Footrule for measuring correlation. British Journal of Psychology 2 (1), 89.

Van der Vaart, A. W., 2000. Asymptotic Statistics. Vol. 3. Cambridge University Press.
Sharp empirical Bernstein bounds for the variance of bounded random variables

Diego Martinez-Taboada^1 and Aaditya Ramdas^{1,2}

^1 Department of Statistics & Data Science
^2 Machine Learning Department
Carnegie Mellon University
{diegomar,aramdas}@andrew.cmu.edu

May 6, 2025

Abstract

Much recent effort has focused on deriving "empirical Bernstein" confidence sets for the mean $\mu$ of bounded random variables that adapt to the unknown variance $V(X)$. In this paper, we provide fully empirical upper and lower confidence sets for the variance $V(X)$ that adapt to both the unknown $\mu$ and $V[(X-\mu)^2]$. Our bounds hold under constant conditional variance and mean, making them suitable for sequential decision making contexts. The results are instantiated for both the batch setting (where the sample size is fixed) and the sequential setting (where the sample size is a stopping time). We show that the first order width of the confidence intervals exactly matches that of the oracle Bernstein inequality; thus, our empirical Bernstein bounds are "sharp". We compare our bounds to a widely used concentration inequality based on self-bounding random variables, showing both the theoretical gains and improved empirical performance of our approach.

1 Introduction

Providing finite-sample confidence intervals for the variance of a random variable is a fundamental problem in statistical inference, as variance quantifies the dispersion of data and directly influences uncertainty in decision-making. Exact confidence intervals are particularly important because approximate methods can lead to misleading inferences, especially when sample sizes are small, data distributions are skewed, or underlying assumptions are violated.
For example, exact confidence intervals for the variance have been widely exploited in empirical Bernstein inequalities (Audibert et al., 2009; Maurer and Pontil, 2009), which give way to a variety of applications including adaptive statistical learning (Mnih et al., 2008), high-confidence policy evaluation in reinforcement learning (Thomas et al., 2015), off-policy risk assessment (Huang et al., 2022), and inference for the average treatment effect (Howard et al., 2021). But in all these works, the focus was the mean, offering limited inferential tools for the variance (although all such contributions would benefit from a refined analysis of the variance).

arXiv:2505.01987v1 [math.ST] 4 May 2025

This contribution centers on the study of the variance of bounded random variables. The primary goal of this work is to derive confidence intervals for the variance that both work well in practice and are asymptotically equivalent to those derived from the oracle Bernstein inequality. More specifically, we seek confidence intervals whose width's leading term matches that of the oracle Bernstein confidence interval, i.e., $\sqrt{2V[(X_i-\mu)^2]\log(1/\alpha)}$. We refer to such confidence intervals as sharp.

To elaborate, when estimating the mean, Bernstein inequalities (Bernstein, 1927; Bennett, 1962) are widely known for leading to closed-form, tight confidence intervals. However, their practicality is limited, as they require knowing a bound on the variance of the random variables (that is better than the trivial bound implied by the bounds on the random variables). For this reason, they establish a natural "oracle" benchmark for fully empirical confidence sets that only exploit knowledge of the bound on the random variables.

Furthermore, some of the aforementioned applications actually rely on confidence sequences, which are anytime-valid counterparts
https://arxiv.org/abs/2505.01987v1
of confidence intervals. A $(1-\alpha)$-confidence interval $C_{CI}$ for a target parameter $\theta$ is a random set such that $P(\theta \in C_{CI}) \ge 1-\alpha$, where $C_{CI}$ is built after having observed a fixed number of observations. In contrast, a $(1-\alpha)$-confidence sequence $C_{CS,t}$ provides a high-probability guarantee that the parameter $\theta$ is contained in the sequence at all time points, i.e.,

$$P(\forall t \ge 1 : \theta \in C_{CS,t}) \ge 1-\alpha,$$

with $t$ representing the number of observations collected sequentially. Confidence sequences are of key importance in online settings, where data is observed sequentially and probabilistic guarantees that hold at stopping times are often desired: confidence sequences allow for sequential procedures that are continuously monitored and adaptively adjusted. For instance, optimal adaptive Neyman allocation for randomized control trials in the context of causal inference and off-policy evaluation (Neopane et al., 2025) are based on confidence sequences for the variance.

In many such online settings, the assumption that data points are independent and identically distributed (iid) is often too strong and unrealistic due to the dynamic and evolving nature of data streams. Unlike traditional offline analyses where data can be assumed to come from a fixed distribution, online environments involve sequentially arriving data that may exhibit temporal dependencies. Thus, we seek to develop concentration inequalities that only require the following assumption to hold (where $E_t$ and $V_t$ denote conditional expectations and variances, respectively; these definitions are later formalized in Section 3).

Assumption 1.1. The stream of random variables $X_1, X_2, \ldots$ is such that $X_t \in [0,1]$, $E_{t-1}X_t = \mu$, $V_{t-1}X_t = \sigma^2$.

Note that all (rescaled) iid bounded random variables attain Assumption 1.1. More generally, Assumption 1.1 is rather weak, avoiding assuming independence of the random variables.
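To illustrate that Assumption 1.1 genuinely goes beyond the iid setting, consider a toy stream (our own hypothetical construction, not from the paper) in which the support of $X_t$ depends on the past while the conditional mean and variance stay fixed: a two-point distribution on $\{\mu - a_t, \mu + b_t\}$ with $P(X_t = \mu + b_t \mid \mathcal{F}_{t-1}) = a_t/(a_t + b_t)$ has conditional mean $\mu$ and conditional variance $a_t b_t$, so any predictable $a_t$ with $b_t = \sigma^2/a_t$ works:

```python
import numpy as np

mu, sigma2 = 0.5, 1 / 16   # target conditional mean and variance

def spread(prev_x):
    # predictable (history-dependent) half-width: the support of X_t
    # depends on the past, yet the conditional moments do not
    return 0.25 if prev_x >= mu else 0.125

rng = np.random.default_rng(0)
x_prev, xs = mu, []
for _ in range(10):
    a = spread(x_prev)
    b = sigma2 / a
    p_up = a / (a + b)                 # P(X_t = mu + b | past)
    x_prev = mu + b if rng.random() < p_up else mu - a
    xs.append(x_prev)

# check the two-point law has the right conditional moments, on both branches
for a in (0.25, 0.125):
    b = sigma2 / a
    p = a / (a + b)
    mean = p * (mu + b) + (1 - p) * (mu - a)
    var = p * (mu + b - mean) ** 2 + (1 - p) * (mu - a - mean) ** 2
    print(mean, var)   # 0.5 and 0.0625 in both cases
```

The stream is dependent (its support moves with the history) yet satisfies Assumption 1.1 with $\mu = 1/2$ and $\sigma^2 = 1/16$.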
Any bounded random variable can be rescaled to belong to $[0,1]$, and so the first of the conditions can be assumed without loss of generality in the bounded setting. The constant conditional mean and variance are arguably the least we may assume if we wish to estimate "the variance".

We revisit the problem of providing confidence intervals and sequences for the variance of bounded random variables under Assumption 1.1. Our contributions are three-fold:

• We provide novel confidence sequences for the variance. We instantiate the results for the batch setting (where the confidence sequence reduces to a confidence interval) and the sequential setting. Confidence sequences for the standard deviation (std) can also be immediately derived by taking the square root of the confidence sequences for the variance.

• Theoretically, we prove the sharpness of our inequalities by showing that the first order term of the novel confidence interval exactly matches that of the oracle Bernstein inequality.

• Empirically, we illustrate how our proposed inequalities substantially outperform those of Maurer and Pontil (2009, Theorem 10), which constitute the current state-of-the-art inequalities for the standard deviation, to the best of our knowledge.

Paper outline. We present related work in Section 2, followed by preliminaries in Section 3. Section 4 exhibits the main results of this contribution, namely an empirical Bernstein inequality for the variance. Section 5 displays experimental results, showing how our proposed inequalities outperform state-of-the-art alternatives. We conclude with some remarks in Section 6.

2 Related Work

Current concentration inequalities for the variance. Upper and lower inequalities for the variance were presented in Maurer and Pontil
(2009, Theorem 10). They are based on the concentration of self-bounding random variables (Maurer, 2006). Another concentration inequality for the variance can be found in the proof of Audibert et al. (2009, Theorem 1), which decouples the analysis into those of the mean and second centered moment, in a similar spirit to our contribution. However, both inequalities rely on conservatively upper bounding the variance of the empirical variance, thus being loose. In contrast, our inequalities empirically estimate such a variance, resulting in tighter confidence sets.

Less closely related to our work, other inequalities rely on the Kolmogorov-Smirnov (KS) distance between the empirical distribution and a second distribution. For instance, Austern and Mackey (2022, Lemma 2) leveraged the concentration inequalities from Romano and Wolf (2000), which build on the KS distance between the empirical distribution and the actual distribution, in order to derive inequalities for the variance. Austern and Mackey (2022, Appendix D) elucidated the use of these inequalities for variance estimation in the case of independent and identically distributed (iid) random variables. However, these methods strongly rely on independence assumptions and are also tailored to the batch setting, where the sample size is fixed in advance.

Empirical Bernstein inequalities for the mean. Based on combining Bennett's inequality and upper concentration inequalities for the variance, Maurer and Pontil (2009, Theorem 11) proposed a well known empirical Bernstein inequality for the mean, improving a similar inequality presented in Audibert et al. (2009, Theorem 1). These inequalities are not sharp (in that the first order limiting width does not match that of the oracle Bernstein inequality, including constants), and they are empirically significantly looser than those presented in Howard et al. (2021, Theorem 4) and Waudby-Smith and Ramdas (2024, Theorem 2), which are known to be sharp.
The latter inequalities were extended to 2-smooth Banach spaces in Martinez-Taboada and Ramdas (2024, Theorem 1), and to matrices in Wang and Ramdas (2024, Theorem 4.2), both of which are also sharp.

Time-uniform Chernoff inequalities. Our work falls under the time-uniform Chernoff inequalities umbrella of Howard et al. (2020, 2021); Waudby-Smith and Ramdas (2024). A key proof technique of this line of work is the derivation of sophisticated nonnegative supermartingales, followed by an application of Ville's inequality (Ville, 1939), an anytime-valid version of Markov's inequality.

3 Background

Let us start by presenting the concepts of filtration and supermartingale, which will be heavily exploited in this work to go beyond the iid setting. Consider a filtered measurable space $(\Omega, \mathcal{F})$, where the filtration $\mathcal{F} = (\mathcal{F}_t)_{t\ge 0}$ is a sequence of $\sigma$-algebras such that $\mathcal{F}_t \subseteq \mathcal{F}_{t+1}$, $t \ge 0$. The canonical filtration $\mathcal{F}_t = \sigma(X_1, \ldots, X_t)$, with $\mathcal{F}_0$ being trivial, is considered throughout. A stochastic process $M \equiv (M_t)_{t\ge 0}$ is a sequence of random variables that are adapted to $(\mathcal{F}_t)_{t\ge 0}$, i.e., $M_t$ is $\mathcal{F}_t$-measurable for all $t$. $M$ is called predictable if $M_t$ is $\mathcal{F}_{t-1}$-measurable for all $t$. An integrable stochastic process $M$ is a supermartingale if $E[M_{t+1} \mid \mathcal{F}_t] \le M_t$ for all $t$. We use $E_t[\cdot]$ and $V_t[\cdot]$ as shorthand for $E[\cdot \mid \mathcal{F}_t]$ and $V[\cdot \mid \mathcal{F}_t]$, respectively. Inequalities between random variables are always interpreted to hold almost surely.

As exhibited in later sections, our concentration inequalities will be derived as Chernoff inequalities. In contrast to more classical inequalities, our results come with anytime validity (that is, they
hold at any stopping time), derived using the following anytime-valid version of Markov's inequality.

Theorem 3.1 (Ville's inequality). For any nonnegative supermartingale $(M_t)_{t\ge 0}$ and $x > 0$,

$$P(\exists t \ge 0 : M_t \ge x) \le \frac{EM_0}{x}.$$

Powerful nonnegative supermartingale constructions are usually at the heart of anytime valid concentration inequalities. For example, the following sharp empirical Bernstein inequality from Howard et al. (2021) and Waudby-Smith and Ramdas (2024) is derived from a nonnegative supermartingale.

Theorem 3.2 (Empirical Bernstein inequality). Let $X_1, X_2, \ldots$ be a stream of random variables such that, for all $t \ge 1$, it holds that $X_t \in [0,1]$ and $E_{t-1}X_t = \mu$. Let $\psi_E(\lambda) = -\log(1-\lambda) - \lambda$. For any $[0,1)$-valued predictable sequence $(\lambda_i)_{i\ge 1}$ such that $\lambda_1 > 0$, it holds that

$$\left(\frac{\sum_{i=1}^n \lambda_i X_i}{\sum_{i=1}^n \lambda_i} \pm \frac{\log\frac{2}{\delta} + \sum_{i=1}^n \psi_E(\lambda_i)(X_i - \hat{\mu}_{i-1})^2}{\sum_{i=1}^n \lambda_i}\right)$$

is a $1-\delta$ confidence sequence for $\mu$.

We will modify Theorem 3.2 in later sections in order to derive our results. The sequence $(\lambda_i)_{i\ge 1}$ is referred to as 'predictable plug-ins'. They play the role of the parameter $\lambda$ that naturally appears in all Chernoff inequality derivations; nevertheless, instead of being equal for each $i$ and theoretically optimized, they are chosen empirically and sequentially. The choice of the predictable plug-ins is key to the performance of the inequalities, and will be discussed throughout our work. Besides making use of predictable plug-ins in empirical Bernstein-type supermartingales, we will also exploit them in the following anytime valid version of Bennett's inequality.

Theorem 3.3 (Anytime valid Bennett's inequality). Let $X_1, X_2, \ldots$ be a stream of random variables such that, for all $t \ge 1$, it holds that $X_t \in [0,1]$, $E_{t-1}X_t = \mu$, and $V_{t-1}X_t = \sigma^2$. Let $\psi_P(\lambda) = \exp(\lambda) - \lambda - 1$. For any $\mathbb{R}^+$-valued predictable sequence $(\tilde{\lambda}_i)_{i\ge 1}$, it holds that

$$\left(\frac{\sum_{i\le t} \tilde{\lambda}_i X_i}{\sum_{i\le t} \tilde{\lambda}_i} \pm \frac{\log(2/\delta) + \sigma^2 \sum_{i\le t} \psi_P(\tilde{\lambda}_i)}{\sum_{i\le t} \tilde{\lambda}_i}\right)$$

is a $1-\delta$ confidence sequence for $\mu$.
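When $\sigma^2$ is known, the anytime valid Bennett's inequality can be exercised directly. The sketch below (our own illustrative code; the predictable sequence, shrinking like $\sqrt{2\log(2/\delta)/(\sigma^2 t \log(1+t))}$ and capped at 2, is an assumption of ours) tracks the confidence sequence for the mean of iid Uniform(0,1) observations:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma2 = 0.5, 1 / 12              # iid Uniform(0,1): sigma^2 is known here
delta, n = 0.05, 5000

psi_p = lambda lam: np.exp(lam) - lam - 1
t = np.arange(1, n + 1)
# predictable plug-ins: deterministic, hence F_{t-1}-measurable; cap is arbitrary
lam = np.minimum(np.sqrt(2 * np.log(2 / delta) / (sigma2 * t * np.log(1 + t))), 2.0)

x = rng.random(n)
den = np.cumsum(lam)
center = np.cumsum(lam * x) / den
radius = (np.log(2 / delta) + sigma2 * np.cumsum(psi_p(lam))) / den

print(center[-1], radius[-1])   # center near 0.5, radius shrinking with t
```

The radius shrinks as data accrues, and the guarantee holds simultaneously over all $t$, which is exactly what a stopping-time analysis needs.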
While this result is technically novel (and so we present a proof in Appendix D.1), it can be derived using the techniques in Howard et al. (2020); Waudby-Smith and Ramdas (2024). It would generally lack any practical use, given that $\sigma$ is typically unknown. Nonetheless, we will invoke it in combination with an empirical Bernstein inequality for $\sigma^2$, thus making it actionable.

4 Main results

In Section 4.1, we present the theoretical foundation of all the inequalities derived thereafter, namely a novel nonnegative supermartingale construction and its corollary. Section 4.2 and Section 4.3 make use of such theoretical tools to derive upper and lower confidence sequences, respectively. Section 4.4 instantiates such confidence sequences in the (more classical) batch setting, where they reduce to confidence intervals. Lastly, Section 4.5 extends these results to Hilbert spaces.

4.1 A nonnegative supermartingale construction

We begin by introducing two nonnegative supermartingale constructions that serve as the theoretical foundation for the inequalities derived in this work. The proof may be found in Appendix D.2.

Theorem 4.1. Let Assumption 1.1 hold. For a $[0,1]$-valued predictable sequence $(\hat{\mu}_i)_{i\ge 1}$, denote $\tilde{\sigma}_i^2 = \sigma^2 + (\hat{\mu}_i - \mu)^2$. For any $[0,1]$-valued predictable sequence $(\hat{\sigma}_i)_{i\ge 1}$ and any $[0,1)$-valued predictable sequence $(\lambda_i)_{i\ge 1}$, the processes $(S_t^+)_{t\ge 0}$ and $(S_t^-)_{t\ge 0}$, with $S_0^+ := 1$, $S_0^- := 1$, and

$$S_t^+ := \exp\left(\sum_{i\le t} \left[\lambda_i\left((X_i - \hat{\mu}_i)^2 - \tilde{\sigma}_i^2\right) - \psi_E(\lambda_i)\left((X_i - \hat{\mu}_i)^2 - \hat{\sigma}_i^2\right)^2\right]\right), \quad t \ge 1,$$

$$S_t^- := \exp\left(\sum_{i\le t} \left[\lambda_i\left(\tilde{\sigma}_i^2 - (X_i - \hat{\mu}_i)^2\right) - \psi_E(\lambda_i)\left((X_i - \hat{\mu}_i)^2 - \hat{\sigma}_i^2\right)^2\right]\right), \quad t \ge 1,$$

are nonnegative supermartingales.

Theorem 4.1 modifies the
supermartingales that give way to Theorem 3.2. However, in contrast to Theorem 3.2, the conditional mean of the random variables $(X_i - \hat{\mu}_i)^2$ under study is not constant. This opens the door to providing concentration for the variance. Denoting

$$R_{t,\alpha} := \frac{\log(1/\alpha) + \sum_{i\le t} \psi_E(\lambda_i)\left((X_i - \hat{\mu}_i)^2 - \hat{\sigma}_i^2\right)^2}{\sum_{i\le t} \lambda_i},$$

$$D_t := \frac{\sum_{i\le t} \lambda_i (X_i - \hat{\mu}_i)^2}{\sum_{i\le t} \lambda_i}, \quad E_t := \frac{\sum_{i\le t} \lambda_i (\hat{\mu}_i - \mu)^2}{\sum_{i\le t} \lambda_i},$$

the following corollary is a direct consequence of Theorem 4.1. We defer its proof to Appendix D.3.

Corollary 4.2. Let Assumption 1.1 hold. For any $[0,1)$-valued predictable sequence $(\lambda_i)_{i\ge 1}$ and any $[0,1]$-valued predictable sequences $(\hat{\mu}_i)_{i\ge 1}$ and $(\hat{\sigma}_i)_{i\ge 1}$, it holds that

$$\left(D_t - E_t \pm R_{t,\alpha/2}\right)$$

is a $1-\alpha$ confidence sequence for $\sigma^2$.

The confidence sequence provided by Corollary 4.2 cannot be invoked in practice: since $\mu$ is unknown, $E_t$ is also unknown.

4.2 Upper confidence sequence for the variance

In spite of $E_t$ being unknown, this term poses no challenge for the upper confidence sequence, as we can simply lower bound it by 0. That is, if $(-\infty, D_t - E_t + R_{t,\alpha})$ is an $\alpha$-level upper confidence sequence for the variance, so is $(-\infty, D_t + R_{t,\alpha})$, given that $E_t$ is nonnegative. We formalize this observation in the following corollary.

Corollary 4.3 (Upper empirical Bernstein for the variance). Let Assumption 1.1 hold. For any $[0,1]$-valued predictable sequences $(\hat{\mu}_i)_{i\ge 1}$ and $(\hat{\sigma}_i)_{i\ge 1}$, and any $[0,1)$-valued predictable sequence $(\lambda_i)_{i\ge 1}$, it holds that $(-\infty, U_t)$ is a $1-\alpha$ upper confidence sequence for $\sigma^2$, where $U_t = D_t + R_{t,\alpha}$.

It remains to discuss the choice of predictable sequences. We propose to take¹

$$\hat{\sigma}_t^2 := \frac{c_3 + \sum_{i\le t-1}(X_i - \bar{\mu}_i)^2}{t}, \quad \bar{\mu}_t := \frac{c_4 + \sum_{i\le t-1} X_i}{t},$$

where $c_3, c_4 \in [0,1]$ are constant, as well as $\hat{\mu}_t = \bar{\mu}_t$. Following the discussion from Waudby-Smith and Ramdas (2024, Section 3.3) for confidence sequences, we propose to take the predictable plug-ins

$$\lambda_{t,u,\alpha}^{CS} := \sqrt{\frac{2\log(1/\alpha)}{\hat{m}_{4,t}^2 \, t \log(1+t)}} \wedge c_1, \quad \text{where} \quad \hat{m}_{4,t}^2 := \frac{c_2 + \sum_{i\le t-1}\left((X_i - \hat{\mu}_i)^2 - \hat{\sigma}_i^2\right)^2}{t},$$

with $c_1 \in (0,1)$ and $c_2 \in [0,1]$. Reasonable defaults are $c_1 = \frac{1}{2}$, $c_2 = \frac{1}{2^4}$, $c_3 = \frac{1}{2^2}$, and $c_4 = \frac{1}{2}$.
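To make the recipe concrete, here is a sketch of the upper confidence sequence $U_t = D_t + R_{t,\alpha}$ with the predictable plug-ins above. This is our own illustrative implementation under the stated defaults; the function name and the simulation at the end are ours:

```python
import numpy as np

def upper_cs_variance(x, alpha=0.05, c1=0.5, c2=1/16, c3=0.25, c4=0.5):
    """Sketch of Corollary 4.3 with the Section 4.2 predictable plug-ins."""
    psi_e = lambda lam: -np.log1p(-lam) - lam
    sum_lam = sum_num = sum_pen = 0.0      # running pieces of D_t and R_{t,alpha}
    sum_x = sum_sq = sum_m4 = 0.0          # running sums for the predictable estimates
    uppers = []
    for t, xt in enumerate(x, start=1):
        mu_hat = (c4 + sum_x) / t          # predictable mean estimate
        sig2_hat = (c3 + sum_sq) / t       # predictable variance estimate
        m4_hat = (c2 + sum_m4) / t         # predictable proxy for V[(X - mu)^2]
        lam = min(np.sqrt(2 * np.log(1 / alpha) / (m4_hat * t * np.log(1 + t))), c1)
        d = (xt - mu_hat) ** 2
        sum_lam += lam
        sum_num += lam * d
        sum_pen += psi_e(lam) * (d - sig2_hat) ** 2
        uppers.append(sum_num / sum_lam + (np.log(1 / alpha) + sum_pen) / sum_lam)
        sum_x += xt                        # updates only affect *future* estimates
        sum_sq += d
        sum_m4 += (d - sig2_hat) ** 2
    return np.array(uppers)

rng = np.random.default_rng(0)
u = upper_cs_variance(rng.random(2000))    # iid Uniform(0,1): sigma^2 = 1/12
print(u[-1])                               # compare with sigma^2 = 1/12
```

Note that every estimate used at time $t$ depends only on $X_1, \ldots, X_{t-1}$, which is what makes the sequences predictable.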
¹These specific choices are proposed due to their computational simplicity; they are not necessarily better than reasonable alternatives such as empirical or running averages.

The reason for these predictable plug-ins is simply that these choices lead to sharpness.

4.3 Lower confidence sequence for the variance

In order to provide a lower confidence sequence, we must control the term $E_t$, which depends on the terms $|\mu - \hat{\mu}_i|$, with $i \le t$. This can be done if $(\hat{\mu}_t)_{t\ge 1}$ is such that a confidence sequence for $|\hat{\mu}_i - \mu|$ can be provided (we ought to use confidence sequences instead of confidence intervals in order to avoid union bounding over all $i \le t$). If $|\hat{\mu}_i - \mu| \le \tilde{R}_{i,\delta}$ for all $i \ge 1$ with probability $1-\delta$, then

$$D_t - \frac{\sum_{i\le t} \lambda_i \tilde{R}_{i,\alpha_1}^2}{\sum_{i\le t} \lambda_i} - R_{t,\alpha_2} \tag{4.1}$$

yields a $(1-\alpha)$-lower confidence sequence for $\sigma^2$, with $\alpha_1 + \alpha_2 = \alpha$. We propose to obtain $\tilde{R}_{i,\delta}$ based on the anytime valid Bennett's inequality presented in Theorem 3.3. That is, take

$$\hat{\mu}_t = \frac{\sum_{i=1}^{t-1} \tilde{\lambda}_i X_i}{\sum_{i=1}^{t-1} \tilde{\lambda}_i}, \quad \tilde{R}_{t,\alpha_1} = \frac{\log(2/\alpha_1) + \sigma^2 \sum_{i=1}^{t-1} \psi_P(\tilde{\lambda}_i)}{\sum_{i=1}^{t-1} \tilde{\lambda}_i}, \tag{4.2}$$

for $t \ge 2$, as well as $\hat{\mu}_1 = \frac{1}{2}$ and $\tilde{R}_{1,\alpha_1} = \frac{1}{2}$. Substituting (4.2) in (4.1) leads to a quadratic polynomial in $\sigma^2$. Equating $\sigma^2$ to such a polynomial and solving for $\sigma^2$ yields our lower confidence sequence. In order to formalize this, denote

$$\tilde{A}_t := \frac{\left(\sum_{i=1}^{t-1} \psi_P(\tilde{\lambda}_i)\right)^2}{\left(\sum_{i=1}^{t-1} \tilde{\lambda}_i\right)^2}, \quad \tilde{B}_{t,\delta} := \frac{2\log(2/\delta)\sum_{i=1}^{t-1} \psi_P(\tilde{\lambda}_i)}{\left(\sum_{i=1}^{t-1} \tilde{\lambda}_i\right)^2}, \quad \tilde{C}_{t,\delta} := \frac{\log^2(2/\delta)}{\left(\sum_{i=1}^{t-1} \tilde{\lambda}_i\right)^2},$$

as well as

$$A_t := \frac{\sum_{i\le t} \lambda_i \tilde{A}_i}{\sum_{i\le t} \lambda_i}, \quad B_{t,\delta} := 1 + \frac{\sum_{i\le t} \lambda_i \tilde{B}_{i,\delta}}{\sum_{i\le t} \lambda_i}, \quad C_{t,\delta} := \frac{\sum_{i\le t} \lambda_i \tilde{C}_{i,\delta}}{\sum_{i\le t} \lambda_i}.$$

Under this notation, we are ready to present Corollary 4.4, a lower confidence sequence for the variance. Its proof has been deferred
to Appendix D.4.

Corollary 4.4 (Lower empirical Bernstein for the variance). Let Assumption 1.1 hold. For $(\hat{\mu}_i)_{i\ge 1}$ defined as in (4.2), any $[0,1]$-valued predictable sequence $(\hat{\sigma}_i^2)_{i\ge 1}$, any $[0,1)$-valued predictable sequence $(\lambda_i)_{i\ge 1}$, and any $[0,\infty)$-valued predictable sequence $(\tilde{\lambda}_i)_{i\ge 1}$, it holds that $(L_t, \infty)$ is a $1-\alpha$ lower confidence sequence for $\sigma^2$, where $\alpha_1 + \alpha_2 = \alpha$ and

$$L_t := \frac{-B_{t,\alpha_1} + \sqrt{B_{t,\alpha_1}^2 + 4A_t\left(D_t - C_{t,\alpha_1} - R_{t,\alpha_2}\right)}}{2A_t}. \tag{4.3}$$

It remains to discuss the choice of predictable plug-ins. Analogously to the upper inequality plug-ins, it would be natural to take $\lambda_{t,l,\alpha_2}^{CS} = \lambda_{t,u,\alpha_2}^{CS}$. However, the lower inequality includes the extra terms $\tilde{R}_{i,\alpha_1}$ that ought to be accounted for. Taking $\lambda_i > 0$ for $i$ such that $\tilde{R}_{i,\alpha_1} > 1$ would add a summand that is vacuous.² For this reason, we propose to take

$$\lambda_{t,l,\alpha_2}^{CS} := \begin{cases} \lambda_{t,u,\alpha_2}^{CS}, & \text{if } t \ge 2 \text{ and } \dfrac{\log(2/\alpha_1) + \hat{\sigma}_t^2 \sum_{i=1}^{t-1}\psi_P(\tilde{\lambda}_i)}{\sum_{i=1}^{t-1}\tilde{\lambda}_i} \le 1, \\ 0, & \text{otherwise.} \end{cases}$$

Note that the threshold is only an approximation of $\tilde{R}_{i,\alpha_1}$, given that the latter is unknown in practice. Seeking a confidence sequence for $\mu$, we propose to take $(\tilde{\lambda}_i)_{i\ge 1}$ as

$$\tilde{\lambda}_t := \sqrt{\frac{2\log(2/\alpha)}{\hat{\sigma}_t^2 \, t \log(1+t)}} \wedge c_5, \tag{4.4}$$

with $c_5$ being a constant in $(0,\infty)$, a sensible default being $c_5 = 2$. The choice of the split of $\alpha$ into $\alpha_1$ and $\alpha_2$ is also of importance. In the next section, we analyze specific splits that retrieve optimal asymptotic behavior. In practice, the split $\alpha_1 = \alpha_2 = \frac{\alpha}{2}$ generally works well.

²It would also be reasonable to take the minimum of $\tilde{R}_{i,\alpha_1}$ and 1. We explore this alternative in Appendix F.

4.4 Upper and lower confidence intervals

In the more classical batch setting, we observe a fixed number of observations $X_1, \ldots, X_n$, with $n$ known in advance. Given that confidence sequences are, in particular, confidence intervals for a fixed $t = n$, both Corollary 4.3 and Corollary 4.4 immediately establish confidence intervals. However, the choice of predictable plug-ins used in such corollaries should now be driven by minimizing the expected interval width at a specific $t \equiv n$, rather than being tight uniformly over $t$.
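Before moving on, note that the closed form (4.3) above is just the positive root of the quadratic $A_t s^2 + B_{t,\alpha_1} s - (D_t - C_{t,\alpha_1} - R_{t,\alpha_2}) = 0$ in $s = \sigma^2$ that results from substituting (4.2) into (4.1). A small sketch (the numeric values of $A_t, B_t, C_t, D_t, R_t$ are hypothetical, chosen only for illustration) confirming that the root satisfies the quadratic:

```python
import math

def lower_root(A, B, D, C, R):
    # L_t from (4.3): positive root of A*s^2 + B*s - (D - C - R) = 0 in s = sigma^2
    return (-B + math.sqrt(B**2 + 4 * A * (D - C - R))) / (2 * A)

# hypothetical values of A_t, B_{t,a1}, C_{t,a1}, D_t, R_{t,a2} for illustration
A, B, C, D, R = 0.8, 1.2, 0.001, 0.09, 0.01
L = lower_root(A, B, D, C, R)
print(L, A * L**2 + B * L - (D - C - R))   # residual of the quadratic is ~0
```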
For this reason, in order to optimize the upper confidence interval for a fixed $t \equiv n$, we take

$$\lambda_{i,u,\alpha}^{CI} := \sqrt{\frac{2\log(1/\alpha)}{\hat{m}_{4,i}^2 \, n}} \wedge c_1. \tag{4.5}$$

Following the same line of reasoning as in Section 4.3, the plug-ins for the lower confidence intervals are defined as a slight modification of those for the upper confidence sequence. Accordingly, we take

$$\lambda_{t,l,\alpha_2}^{CI} := \begin{cases} \lambda_{t,u,\alpha_2}^{CI}, & \text{if } t \ge 2 \text{ and } \dfrac{\log(2/\alpha_1) + \hat{\sigma}_t^2 \sum_{i=1}^{t-1}\psi_P(\tilde{\lambda}_i)}{\sum_{i=1}^{t-1}\tilde{\lambda}_i} \le 1, \\ 0, & \text{otherwise.} \end{cases}$$

The remaining parameters and estimators are defined in the same manner as in the preceding sections.

An analysis of the widths for the variance. In order to draw comparisons with related inequalities, we analyze the asymptotic first order term of the novel confidence intervals for the variance. As emphasized in Section 1, in the event of $(X_i - \mu)^2$ having constant conditional variance, the benchmark for the first order terms is that of the oracle Bernstein confidence intervals, i.e., $\sqrt{2V[(X_i-\mu)^2]\log(1/\alpha)}$. That is, we seek to prove that both $\sqrt{n}(U_n - D_n)$ and $\sqrt{n}(D_n - L_n)$ converge almost surely to this quantity. Accordingly, we make the following assumption throughout.

Assumption 4.5. $X_1, X_2, \ldots$ is such that $V_{i-1}\left[(X_i - \mu)^2\right]$ is constant across $i$.

We will implicitly also assume that the predictable sequences $(\hat{\mu}_i)_{i\in[n]}$ and $(\hat{\sigma}_i^2)_{i\in[n]}$ are defined as in Section 4.2 or Section
4.3, with $c_2 \wedge c_3 > 0$. The condition $c_2 \wedge c_3 > 0$ is not necessary for the proofs to hold, but they follow more cleanly with it. For simplicity, we will focus on the asymptotic behavior of both

$$\sqrt{n}R_{n,\alpha}, \quad \sqrt{n}\left(\frac{\sum_{i\le t} \lambda_{i,\alpha_{2,n}}\tilde{R}_{i,\alpha_{1,n}}^2}{\sum_{i\le t} \lambda_{i,\alpha_{2,n}}} + R_{n,\alpha_{2,n}}\right),$$

where we define $\lambda_{i,\alpha_{2,n}} = \lambda_{t,u,\alpha_{2,n}}^{CI}$. These two quantities correspond to the first order widths of the upper and lower confidence intervals above and below the estimate $D_t$, respectively, when taking the plug-ins $\lambda_{i,\alpha_{2,n}}$.³ Note that, in contrast to the previous section, we emphasize the dependence of the split $\alpha = \alpha_{1,n} + \alpha_{2,n}$ on $n$, which we will exploit to recover optimal first order terms.

We decouple the analysis into two parts, one involving the $R_i$'s and the other involving the $\tilde{R}_i$'s. We start by establishing that the former converges almost surely to the oracle Bernstein first order term for the right choices of $\alpha_{2,n}$. The proof can be found in Appendix D.5.

Theorem 4.6. Let $(\delta_n)_{n\ge 1}$ be a deterministic sequence such that $\delta_n > 0$ and $\delta_n \nearrow \delta > 0$. If Assumption 1.1 and Assumption 4.5 hold, then

$$\sqrt{n}R_{n,\delta_n} \xrightarrow{a.s.} \sqrt{2V[(X_i-\mu)^2]\log(1/\delta)}.$$

Second, we prove that the extra term that appears in the lower confidence interval converges to zero almost surely for the right choices of $\alpha_{1,n}$. The proof can be found in Appendix D.6.

Theorem 4.7. Let $\alpha = \alpha_{1,n} + \alpha_{2,n}$ be such that $\alpha_{1,n} = \Omega\left(\frac{1}{\log(n)}\right)$ and $\alpha_{2,n} \to \alpha$. If Assumption 1.1 and Assumption 4.5 hold, then

$$\sqrt{n}\,\frac{\sum_{i\le t} \lambda_{i,\alpha_{2,n}}\tilde{R}_{i,\alpha_{1,n}}^2}{\sum_{i\le t} \lambda_{i,\alpha_{2,n}}} \xrightarrow{a.s.} 0.$$

Taking $\delta_n = \alpha$, it immediately follows from Theorem 4.6 that the upper confidence interval's first order term is asymptotically almost surely equal to that of the oracle Bernstein confidence interval. To derive the analogous conclusion for the lower confidence interval, it suffices to take

$$\alpha_{1,n} = \frac{\alpha}{\log n}, \quad \delta_n = \alpha_{2,n} = \frac{\log(n) - 1}{\log(n)}\,\alpha \tag{4.6}$$

in Theorem 4.7 and Theorem 4.6, respectively. These claims are formalized in the following corollary.

³Note that the plug-ins proposed in Section 4.4 are slightly different (they may take the value 0 for small enough $t$).
However, the proofs can be easily extended to $\lambda_{t,l,\alpha_2}^{CI}$ after realizing that these two plug-ins are equal almost surely for big enough $t$; we believe the simplification is convenient for ease of presentation.

Corollary 4.8 (Sharpness). Let the predictable sequences $(\hat{\mu}_i)_{i\in[n]}$ and $(\hat{\sigma}_i^2)_{i\in[n]}$ be defined as in Section 4.2 or Section 4.3, with $c_2 \wedge c_3 > 0$. Let Assumption 1.1 and Assumption 4.5 hold. If $\alpha = \alpha_{1,n} + \alpha_{2,n}$ as defined in (4.6), then

$$\sqrt{n}(U_n - D_n) \xrightarrow{a.s.} \sqrt{2V[(X_i-\mu)^2]\log(1/\alpha)}, \quad \sqrt{n}(D_n - L_n) \xrightarrow{a.s.} \sqrt{2V[(X_i-\mu)^2]\log(1/\alpha)}.$$

An analysis of the widths for the standard deviation. We devote this section to showing that the confidence intervals for the std also achieve $1/\sqrt{n}$ rates. To see this, let $(L_n, U_n)$ be the confidence interval for the variance obtained in Section 4. We just showed that, for big $n$,

$$L_n \approx \sigma^2 - 2\sqrt{\frac{cV[(X_i-\mu)^2]}{n}}, \quad U_n \approx \sigma^2 + 2\sqrt{\frac{cV[(X_i-\mu)^2]}{n}}$$

with $c = \frac{\log(1/\alpha)}{2}$. At first glance, it could seem like $\sqrt{L_n}$ and $\sqrt{U_n}$ approach $\sigma$ at $n^{-1/4}$ rates, instead of $n^{-1/2}$ rates. Nonetheless, let us note that, since $(X_i-\mu)^4 \le (X_i-\mu)^2$ for $|X_i - \mu| \le 1$,

$$V\left[(X_i-\mu)^2\right] \le E\left[(X_i-\mu)^2\right] - E^2\left[(X_i-\mu)^2\right] = \sigma^2(1-\sigma^2) \le \sigma^2.$$

Thus,

$$\sqrt{U_n} \lessapprox \sqrt{\sigma^2 + 2\sigma\sqrt{\frac{c}{n}}} = \sqrt{\left(\sigma + \sqrt{\frac{c}{n}}\right)^2 - \frac{c}{n}} \le \sigma + \sqrt{\frac{c}{n}},$$

and so $\sqrt{U_n}$ approaches $\sigma$ at an $n^{-1/2}$ rate. Analogously, $\sqrt{L_n}$ also approaches $\sigma$ at the same rate.

4.5 An extension to Hilbert spaces

Our inequalities naturally extend to separable Hilbert spaces. In these more abstract spaces, we shall assume that our
https://arxiv.org/abs/2505.01987v1
random variables lie in a ball of diameter 1, instead of a unit-length interval. Similarly, the concept of variance involves norms, instead of just squares of scalars.

Assumption 4.9. The stream of random variables $X_1,X_2,\ldots$ belongs to a separable Hilbert space $H$, and is such that
$$\|X_t\|\in\left[0,\tfrac12\right],\qquad \mathbb{E}_{t-1}X_t=\mu,\qquad \mathbb{E}_{t-1}\|X_t-\mu\|^2=\sigma^2.$$
Under Assumption 4.9, all the concentration inequalities for $\sigma^2$ previously presented for the one-dimensional case still hold if replacing $(X_i-\bar\mu_i)^2$ and $(X_i-\hat\mu_i)^2$ by $\|X_i-\bar\mu_i\|^2$ and $\|X_i-\hat\mu_i\|^2$, respectively. The main technical obstacle of this extension is the generalization of Theorem 3.3 to multivariate settings, which we formalize next. Contrary to its one-dimensional counterpart, its proof builds on more sophisticated techniques from Pinelis (1994); we defer such a proof to Appendix D.7.

Theorem 4.10 (Vector-valued anytime-valid Bennett's inequality). Let $X_1,X_2,\ldots$ be a stream of random variables belonging to a separable Hilbert space $H$ such that $\|X_t\|\le\frac12$, $\mathbb{E}_{t-1}X_t=\mu$, and $\mathbb{E}_{t-1}\|X_t-\mu\|^2=\sigma^2$, for all $t\ge1$. For any $\mathbb{R}_+$-valued predictable sequence $(\tilde\lambda_i)_{i\ge1}$, the sequence of sets
$$\left\{x\in H:\left\|x-\frac{\sum_{i\le t}\tilde\lambda_iX_i}{\sum_{i\le t}\tilde\lambda_i}\right\|\le\frac{\log(2/\delta)+\sigma^2\sum_{i\le t}\psi_P(\tilde\lambda_i)}{\sum_{i\le t}\tilde\lambda_i}\right\}$$
is a $1-\delta$ confidence sequence for $\mu$.

The remainder of the one-dimensional results can be extended with relative ease, and so we emphasize once again that the concentration inequalities previously introduced still hold in Hilbert spaces. We defer the formal presentation of such extensions to Appendix A. We highlight that the bounds are also sharp in this vector-valued setting, with the analysis conducted in Section 4.4 naturally extending to Hilbert spaces.

5 Experiments

We devote this section to exploring the empirical performance of both the upper and lower confidence intervals presented in this paper. We compare them with the inequalities from Maurer and Pontil (2009, Theorem 10), which currently constitute the state of the art for the standard deviation.
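Before turning to the experiments, the width in Theorem 4.10 above can be sketched numerically. The following is an illustrative sketch only: $\psi_P(\lambda)=e^\lambda-\lambda-1$ matches the series $\sum_{k\ge2}\lambda^k/k!$ used in the proofs, while the constant predictable sequence and the toy values of $\sigma^2$ and $\delta$ are assumptions for illustration.

```python
import math

def psi_P(lam):
    """psi_P(lambda) = exp(lambda) - lambda - 1 = sum_{k>=2} lambda^k / k!."""
    return math.exp(lam) - lam - 1

def bennett_radius(lams, sigma2, delta):
    """Width of the anytime-valid Bennett confidence sequence:
    (log(2/delta) + sigma2 * sum_i psi_P(lam_i)) / sum_i lam_i."""
    return (math.log(2 / delta) + sigma2 * sum(psi_P(l) for l in lams)) / sum(lams)

# toy run with a constant (hypothetical) predictable sequence lambda_i = 0.1
r1000 = bennett_radius([0.1] * 1000, sigma2=1 / 12, delta=0.05)
r4000 = bennett_radius([0.1] * 4000, sigma2=1 / 12, delta=0.05)
assert 0 < r4000 < r1000  # the radius shrinks as data accrues
```
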
Note that, in order to obtain confidence intervals for the standard deviation using our approach, it suffices to take the square root of the upper and lower confidence intervals for the variance presented in Section 4. Figure 1 displays the average upper and lower confidence intervals for the standard deviation for different sample sizes and three different bounded distributions; our inequalities consistently demonstrate improved empirical performance in all evaluated scenarios. We defer a comparison with other alternatives, such as a double empirical Bernstein inequality on the first and second moments, to Appendix F. In all the experiments, we take $\alpha=0.05$, and the constants $c_1=\frac12$, $c_2=\frac1{24}$, $c_3=\frac1{22}$, $c_4=\frac12$, $c_5=2$. The code may be found in the supplementary material.

6 Conclusion

We have provided novel concentration inequalities for the variance of bounded random variables under mild assumptions, instantiating them for both the batch and sequential settings. We have shown their theoretical sharpness, asymptotically matching the first order term of the oracle Bernstein's inequality. Furthermore, our empirical findings demonstrate that they significantly outperform the widely adopted inequalities presented by Maurer and Pontil (2009, Theorem 10). There are several possible avenues for future work. In Section 4.5 and Appendix A, we show how the results naturally extend to Hilbert spaces. The proof of Theorem A.1 implicitly exploits the inner product structure of the Hilbert space, which cannot be done in arbitrary Banach spaces. While this challenge may be circumvented by means of the triangle inequality, that approach leads to inflated constants; exploring tighter alternatives for general or smooth Banach spaces would be of interest. Furthermore, the analysis in Section 4.5 exploits 'one-dimensional variances'. Extending our work to covariance matrices or operators would also be a natural direction to follow.

Figure 1: Average confidence intervals over 100 simulations for the std $\sigma$ for (I) the uniform distribution in $(0,1)$, (II) the beta distribution with parameters $(2,6)$, and (III) the beta distribution with parameters $(5,5)$. For each of the inequalities, the $0.95$-empirical quantiles are also displayed. The Maurer-Pontil (MP) inequality (Maurer and Pontil, 2009, Theorem 10) is compared against our proposal (EB). We highlight the improved empirical performance of our methods in all scenarios.

Acknowledgments

DMT thanks Ben Chugg, Ojash Neopane, and Tomás González for insightful conversations. DMT gratefully acknowledges that the project that gave rise to these results received the support of a fellowship from 'la Caixa' Foundation (ID 100010434). The fellowship code is LCF/BQ/EU22/11930075. AR was funded by NSF grant DMS-2310718.

References

Audibert, J.-Y., Munos, R., and Szepesvári, C. (2009). Exploration-exploitation tradeoff using variance estimates in multi-armed bandits. Theoretical Computer Science, 410(19):1876-1902.

Austern, M. and Mackey, L. (2022). Efficient concentration with Gaussian approximation. arXiv preprint arXiv:2208.09922.

Bennett, G. (1962). Probability inequalities for the sum of independent random variables. Journal of the American Statistical Association, 57(297):33-45.

Bernstein, S. (1927). Theory of Probability. Gastehizdat Publishing House.

Fan, X., Grama, I., and Liu, Q. (2015). Exponential inequalities for martingales with applications. Electronic Journal of Probability, 20(1):1-22.

Hall, P. and Heyde, C. C. (2014). Martingale Limit Theory and its Application. Academic Press.
Howard, S. R., Ramdas, A., McAuliffe, J., and Sekhon, J. (2020). Time-uniform Chernoff bounds via nonnegative supermartingales. Probability Surveys, 17:257-317.

Howard, S. R., Ramdas, A., McAuliffe, J., and Sekhon, J. (2021). Time-uniform, nonparametric, nonasymptotic confidence sequences. The Annals of Statistics, 49(2):1055-1080.

Huang, A., Leqi, L., Lipton, Z., and Azizzadenesheli, K. (2022). Off-policy risk assessment for Markov decision processes. In International Conference on Artificial Intelligence and Statistics, pages 5022-5050. PMLR.

Martinez-Taboada, D. and Ramdas, A. (2024). Empirical Bernstein in smooth Banach spaces. arXiv preprint arXiv:2409.06060.

Maurer, A. (2006). Concentration inequalities for functions of independent variables. Random Structures & Algorithms, 29(2):121-138.

Maurer, A. and Pontil, M. (2009). Empirical Bernstein bounds and sample-variance penalization. In Conference on Learning Theory. PMLR.

Mnih, V., Szepesvári, C., and Audibert, J.-Y. (2008). Empirical Bernstein stopping. In Proceedings of the 25th International Conference on Machine Learning, pages 672-679.

Neopane, O., Ramdas, A., and Singh, A. (2025). Optimistic algorithms for adaptive estimation of the average treatment effect. In International Conference on Machine Learning. PMLR.

Pinelis, I. (1994). Optimum bounds for the distributions of martingales in Banach spaces. The Annals of Probability, pages 1679-1706.

Romano, J. P. and Wolf, M. (2000). Finite sample nonparametric inference and large sample efficiency. Annals of Statistics, pages 756-778.

Stout, W. F. (1970). A martingale analogue of Kolmogorov's law of the iterated logarithm. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete, 15(4):279-290.

Thomas, P., Theocharous, G., and Ghavamzadeh, M. (2015). High-confidence off-policy evaluation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 29.

Ville, J. (1939). Etude Critique de la Notion de Collectif. Gauthier-Villars, Paris.

Wang, H. and Ramdas, A. (2024). Sharp matrix empirical Bernstein inequalities. arXiv preprint arXiv:2411.09516.

Waudby-Smith, I. and Ramdas, A. (2024). Estimating means of bounded random variables by betting. Journal of the Royal Statistical Society Series B: Statistical Methodology, 86(1):1-27.

Appendix outline

We organize the appendices as follows. We devote Appendix A to the formalization of the extension of the results to Hilbert spaces. Section B presents auxiliary lemmata that are exploited in the remaining proofs. These are mostly analytic or simple probabilistic results that can be skipped on a first pass. Section C contains more involved technical propositions that are later combined to yield the proofs of Theorem 4.6 and Theorem 4.7. The proofs of such propositions are deferred to Appendix E. Appendix D displays the proofs of the theoretical results exhibited in the main body of the paper. Lastly, Appendix F exhibits potential alternative approaches to the proposed empirical Bernstein inequality, illustrating the empirical benefits of the latter.

Throughout, we denote the probability space on which the random variables are defined by $(\Omega,\mathcal{F},P)$. Furthermore, we use the standard asymptotic big-oh notations. Given functions $f$ and $g$, we write $f(n)=O(g(n))$ if there exist constants $C,n_0>0$ such that $|f(n)|\le C|g(n)|$ for all $n\ge n_0$. We write $f(n)=\widetilde{O}(g(n))$ if $f(n)=O(g(n)\,\mathrm{polylog}(n))$, where $\mathrm{polylog}(n)$ denotes a polylogarithmic factor in $n$. Finally, we use $f(n)=\Omega(g(n))$ to denote that there exist constants $c,n_0>0$ such that $f(n)\ge c\,g(n)$ for all $n\ge n_0$.

A Extension to Hilbert spaces

Throughout, let $H$ be a separable Hilbert space and denote $B_r(x)=\{y\in H:\|y-x\|\le r\}$.
We remind the reader that the theoretical foundations of the results from Section 4 are namely the scalar-valued anytime-valid Bennett's inequality (Theorem 3.3) and the supermartingale construction from Theorem 4.1. Theorem 4.10 extended the former to the multivariate setting. The remaining foundational piece is the extension of the supermartingale construction from Theorem 4.1, which we present next⁴ and whose proof we defer to Appendix D.8.

Theorem A.1. Let Assumption 4.9 hold. For any $B_{\frac12}(0)$-valued predictable sequence $(\hat\mu^{HS}_i)_{i\ge1}$, define $\tilde\sigma^2_i=\sigma^2+\|\hat\mu^{HS}_i-\mu\|^2$. For any $[0,1]$-valued predictable sequence $(\hat\sigma_i)_{i\ge1}$ and any $[0,1)$-valued predictable sequence $(\lambda_i)_{i\ge1}$, the processes
$$S^{\pm,HS}_t=\exp\left(\sum_{i\le t}\lambda_i\left[\pm\|X_i-\hat\mu^{HS}_i\|^2\mp\tilde\sigma^2_i\right]-\psi_E(\lambda_i)\left[\|X_i-\hat\mu^{HS}_i\|^2-\hat\sigma^2_i\right]^2\right)$$
for $t\ge1$, and $S^{\pm,HS}_0=1$, are nonnegative supermartingales.

This theorem implies that the upper and lower inequalities previously derived for one-dimensional processes equally apply to Hilbert spaces. Denoting
$$R^{HS}_{t,\alpha}:=\frac{\log(1/\alpha)+\sum_{i\le t}\psi_E(\lambda_i)\left(\|X_i-\hat\mu^{HS}_i\|^2-\hat\sigma^2_i\right)^2}{\sum_{i\le t}\lambda_i},\qquad D^{HS}_t:=\frac{\sum_{i\le t}\lambda_i\|X_i-\hat\mu^{HS}_i\|^2}{\sum_{i\le t}\lambda_i},\qquad E^{HS}_t:=\frac{\sum_{i\le t}\lambda_i\|\hat\mu^{HS}_i-\mu\|^2}{\sum_{i\le t}\lambda_i},$$
the following corollary is a direct consequence of Theorem A.1, whose proof is analogous to that of Corollary 4.2.

Corollary A.2. Let Assumption 1.1 hold. For any $[0,1)$-valued predictable sequence $(\lambda_i)_{i\ge1}$, any $B_{\frac12}(0)$-valued predictable sequence $(\hat\mu^{HS}_i)_{i\ge1}$, and any $[0,1]$-valued predictable sequence $(\hat\sigma_i)_{i\ge1}$, the sequence of sets
$$\left(D^{HS}_t-E^{HS}_t-R^{HS}_{t,\frac\alpha2},\;D^{HS}_t-E^{HS}_t+R^{HS}_{t,\frac\alpha2}\right)$$
is a $1-\alpha$ confidence sequence for $\sigma^2$.

From Corollary A.2, upper and lower inequalities for the variance can be derived analogously to those presented in Section 4. That is, in order to derive an upper inequality for the variance, it suffices to ignore the term $E^{HS}_t$.

Corollary A.3 (Vector-valued upper empirical Bernstein for the variance). Let Assumption 1.1 hold. For any $B_{\frac12}(0)$-valued predictable sequence $(\hat\mu^{HS}_i)_{i\ge1}$, any $[0,1]$-valued predictable sequence $(\hat\sigma_i)_{i\ge1}$, and any $[0,1)$-valued predictable sequence $(\lambda_i)_{i\ge1}$, it holds that $\left(-\infty,\,U^{HS}_t\right)$ is a $1-\alpha$ upper confidence sequence for $\sigma^2$, where $U^{HS}_t:=D^{HS}_t+R^{HS}_{t,\alpha}$.

In order to derive $\tilde R_{i,\delta}$ such that $\|\hat\mu_i-\mu\|\le\tilde R_{i,\delta}$ for all $i\ge1$, we propose to use the vector-valued anytime-valid Bennett's inequality from Theorem 4.10. That is, take
$$\hat\mu^{HS}_t=\frac{\sum_{i=1}^{t-1}\tilde\lambda_iX_i}{\sum_{i=1}^{t-1}\tilde\lambda_i},\qquad \tilde R_{t,\delta}=\frac{\log(2/\delta)+\sigma^2\sum_{i=1}^{t-1}\psi_P(\tilde\lambda_i)}{\sum_{i=1}^{t-1}\tilde\lambda_i}.\tag{A.1}$$
These choices of $\hat\mu^{HS}_t$ and $\tilde R_{t,\delta}$ lead to the same exact definitions of $\tilde A_t$, $\tilde B_{t,\delta}$, $\tilde C_{t,\delta}$, $A_t$, $B_{t,\delta}$, and $C_{t,\delta}$ from Section 4. The following corollary follows analogously to its one-dimensional counterpart.

Corollary A.4 (Vector-valued lower empirical Bernstein for the variance). Let Assumption 1.1 hold. For the $B_{\frac12}(0)$-valued predictable sequence $(\hat\mu^{HS}_i)_{i\ge1}$ defined in (4.2), any $[0,1]$-valued predictable sequence $(\hat\sigma^2_i)_{i\ge1}$, any $[0,1)$-valued predictable sequence $(\lambda_i)_{i\ge1}$, and any $[0,\infty)$-valued predictable sequence $(\tilde\lambda_i)_{i\ge1}$, it holds that $\left[L^{HS}_t,\,\infty\right)$ is a $1-\alpha$ lower confidence sequence for $\sigma^2$, where $\alpha_1+\alpha_2=\alpha$ and
$$L^{HS}_t:=\frac{-B_{t,\alpha_1}+\sqrt{B^2_{t,\alpha_1}+4A_t\left(D^{HS}_t-C_{t,\alpha_1}-R^{HS}_{t,\alpha_2}\right)}}{2A_t}.$$
We propose to take the plug-ins $(\lambda_i)_{i\ge1}$ and $(\tilde\lambda_i)_{i\ge1}$ analogously to Section 4. These choices require that the definitions of $\hat m^2_{4,t}$ and $\hat\sigma^2_t$ from Section 4 naturally replace the squares by the squares of the norms.

⁴Throughout, we emphasize the fact that the estimator of the mean belongs to the Hilbert space (in contrast to the estimator of the variance, which still belongs to the real line) by means of the notation $\hat\mu^{HS}_i$.
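The lower endpoint $L^{HS}_t$ above is the positive root of a quadratic, just as in the one-dimensional case. A quick numerical sanity check, with purely hypothetical values standing in for $A_t$, $B_{t,\alpha_1}$, $D^{HS}_t$, $C_{t,\alpha_1}$, and $R^{HS}_{t,\alpha_2}$:

```python
import math

def lower_root(A, B, D, C, R):
    """Positive root of A x^2 + B x - (D - C - R) = 0, i.e. the lower
    endpoint in Corollary A.4 (all arguments are hypothetical plug-ins)."""
    disc = B * B + 4 * A * (D - C - R)
    return (-B + math.sqrt(disc)) / (2 * A)

# sanity check: the root solves x = D - A x^2 - (B - 1) x - C - R,
# the fixed-point equation appearing in the proof of Corollary 4.4
A, B, D, C, R = 0.5, 1.2, 0.3, 0.05, 0.02
x = lower_root(A, B, D, C, R)
assert abs(x - (D - A * x**2 - (B - 1) * x - C - R)) < 1e-9
```
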
Similarly to Section 4, the choice of the split of $\alpha$ into $\alpha_1$ and $\alpha_2$ is also of importance. In practice, we propose to take $\alpha_1$ and $\alpha_2$ analogously to Section 4. The optimality of the results from Section 4.4 for specific choices of $\alpha_1$ and $\alpha_2$ extends analogously to Hilbert spaces. However, Assumption 4.5 ought to be replaced by the following assumption.

Assumption A.5. $X_1,X_2,\ldots$ is such that $\mathbb{V}_{i-1}\left[\|X_i-\mu\|^2\right]$ is constant across $i$.

Under Assumption 4.9 and Assumption A.5, the first order term of the width of the confidence intervals can be compared with that of the oracle Bernstein-type inequality, i.e., $\sqrt{2\,\mathbb{V}[\|X_i-\mu\|^2]\log(1/\alpha)}$. The following corollary establishes that the first order width of the confidence intervals does indeed match this oracle benchmark. Its proof is not provided, given that it is completely analogous to that of Section 4.4.

Corollary A.6 (Sharpness). Let Assumption 4.9 and Assumption A.5 hold. If $\alpha=\alpha_{1,n}+\alpha_{2,n}$ as defined in (4.6), then
$$\sqrt{n}\left(U^{HS}_n-D^{HS}_n\right)\overset{a.s.}{\to}\sqrt{2\,\mathbb{V}[\|X_i-\mu\|^2]\log(1/\alpha)},\qquad \sqrt{n}\left(D^{HS}_n-L^{HS}_n\right)\overset{a.s.}{\to}\sqrt{2\,\mathbb{V}[\|X_i-\mu\|^2]\log(1/\alpha)}.$$

B Auxiliary lemmata

Lemma B.1. For $a\in[0,1]$ and $b\ge0$,
$$\psi_P(ab)\le a^2\psi_P(b),\qquad \psi_E(ab)\le a^2\psi_E(b).$$
Proof. It suffices to observe that
$$\psi_P(ab)=\sum_{k=2}^{\infty}\frac{(ab)^k}{k!}\overset{(i)}{\le}a^2\sum_{k=2}^{\infty}\frac{b^k}{k!}=a^2\psi_P(b),$$
as well as
$$\psi_E(ab)=\sum_{k=2}^{\infty}\frac{(ab)^k}{k}\overset{(i)}{\le}a^2\sum_{k=2}^{\infty}\frac{b^k}{k}=a^2\psi_E(b),$$
where in both cases (i) follows from $|a|\le1$.

Lemma B.2. Let $\psi_N(\lambda)=\frac{\lambda^2}{2}$ and $\psi_E(\lambda)=-\log(1-\lambda)-\lambda$. The function $\lambda\in[0,1)\mapsto\frac{\psi_E(\lambda)}{\psi_N(\lambda)}$ is increasing.
Proof. It suffices to observe that
$$\frac{\psi_E(\lambda)}{\psi_N(\lambda)}=\frac{\sum_{k\ge2}\frac{\lambda^k}{k}}{\frac{\lambda^2}{2}}=2\sum_{k\ge2}\frac{\lambda^{k-2}}{k},$$
which is clearly increasing in $\lambda$.

Lemma B.3. It holds that
$$\sum_{i=1}^n\frac1{\sqrt i}\in\left[2\sqrt n-2,\,2\sqrt n-1\right],\qquad\text{and so}\qquad \frac1{\sqrt n}\sum_{i=1}^n\frac1{\sqrt i}\to2.$$
Proof. Given that $x\mapsto\frac1{\sqrt x}$ is a decreasing function, it follows that
$$\int_1^n\frac1{\sqrt x}\,dx\le\sum_{i=1}^n\frac1{\sqrt i}\le1+\int_1^n\frac1{\sqrt x}\,dx,$$
with $\int_1^n\frac1{\sqrt x}\,dx=2\sqrt n-2$. In order to conclude the proof, it suffices to note that
$$\frac1{\sqrt n}\sum_{i=1}^n\frac1{\sqrt i}\in\left[2-\frac2{\sqrt n},\,2-\frac1{\sqrt n}\right],\qquad 2-\frac2{\sqrt n}\to2,\qquad 2-\frac1{\sqrt n}\to2,$$
and invoke the sandwich theorem.

Lemma B.4. It holds that
$$\sum_{i=1}^n\frac1i\in[\log n,\,\log n+1],\qquad\text{and so}\qquad \frac1{\log n}\sum_{i=1}^n\frac1i\to1.$$
Proof. Given that $x\mapsto\frac1x$ is a decreasing function, it follows that
$$\int_1^n\frac1x\,dx\le\sum_{i=1}^n\frac1i\le1+\int_1^n\frac1x\,dx,$$
with $\int_1^n\frac1x\,dx=\log n$. In order to conclude the proof, it suffices to note that
$$\frac1{\log n}\sum_{i=1}^n\frac1i\in\left[1,\,1+\frac1{\log n}\right],\qquad 1+\frac1{\log n}\to1,$$
and invoke the sandwich theorem.

Lemma B.5. It holds that
$$\sum_{i=1}^n\sqrt i\in\left[\frac13+\frac23n^{3/2},\,-\frac23+\frac23(n+1)^{3/2}\right].$$
Proof. Given that $x\mapsto\sqrt x$ is an increasing function, it follows that
$$1+\int_1^n\sqrt x\,dx\le\sum_{i=1}^n\sqrt i\le\int_1^{n+1}\sqrt x\,dx,$$
with $\int_1^n\sqrt x\,dx=\frac23n^{3/2}-\frac23$.

Lemma B.6. It holds that
$$\sum_{i=2}^{\infty}\frac1{i\log i}=\infty,\qquad\text{and so}\qquad \sum_{i=1}^{\infty}\frac1{i\log(i+1)}=\infty.$$
Proof. Given that $x\mapsto\frac1{x\log x}$ is a decreasing function, it follows that
$$\sum_{i=2}^{n-1}\frac1{i\log i}\ge\int_2^n\frac1{x\log x}\,dx\overset{(i)}{=}\int_{\log2}^{\log n}\frac1u\,du,$$
where we have used the change of variable $u=\log x$ in (i). Thus
$$\sum_{i=2}^{\infty}\frac1{i\log i}=\lim_{n\to\infty}\sum_{i=2}^{n-1}\frac1{i\log i}\ge\lim_{n\to\infty}\int_{\log2}^{\log n}\frac1u\,du=\infty.$$
It remains to note that
$$\sum_{i=1}^{\infty}\frac1{i\log(i+1)}\ge\sum_{i=1}^{\infty}\frac1{(i+1)\log(i+1)}=\sum_{i=2}^{\infty}\frac1{i\log i}.$$

Lemma B.7. Let $(a_n)_{n\ge0}$ be a deterministic sequence such that $a_0\ge2$ and $a_n\in[0,1]$ for $n\ge1$. Then
$$\sum_{i=1}^n\frac{a_i}{\sum_{j\le i-1}a_j}\le\log\left(\sum_{i=0}^na_i\right).$$
Proof. Denoting $s_i=\sum_{j=0}^ia_j$, it follows that
$$\sum_{i=1}^n\frac{a_i}{\sum_{j\le i-1}a_j}=\sum_{i=1}^n\frac{s_i-s_{i-1}}{s_{i-1}}.$$
We now note that $\frac{s_i-s_{i-1}}{s_{i-1}}$ is the area of a rectangle with width $s_i-s_{i-1}$ and height $\frac1{s_{i-1}}$. Define the function $f(x):=\frac1{x-1}$, which is decreasing in $x$ and satisfies
$$f(s_i)=\frac1{s_i-1}\overset{(i)}{\ge}\frac1{(s_{i-1}+1)-1}=\frac1{s_{i-1}},$$
where (i) follows from $a_i\in[0,1]$. Thus
$$\sum_{i=1}^n\frac{s_i-s_{i-1}}{s_{i-1}}\le\int_{s_0}^{s_n}f(x)\,dx=\int_{s_0}^{s_n}\frac1{x-1}\,dx=\log(s_n-1)-\log(a_0-1)\overset{(i)}{\le}\log(s_n),$$
where (i) follows from $a_0\ge2$, thus concluding the result.

Lemma B.8. Let $(a_n)_{n\ge1}$ be a deterministic sequence such that $a_n\to a$. Then
$$\frac1n\sum_{i\le n}a_i\overset{n\to\infty}{\to}a.$$
Further, if $a_n\to0$ and $|b_n|<C$, then
$$\frac1n\sum_{i\le n}a_ib_i\overset{n\to\infty}{\to}0.$$
Proof. Let $\epsilon>0$.
We want to show that there exists $M\in\mathbb{N}$ such that
$$\left|a-\frac1n\sum_{i\le n}a_i\right|\le\epsilon.$$
• Given that $a_n\to a$, there exists $M_1\in\mathbb{N}$ such that $|a_n-a|\le\frac\epsilon2$ for all $n\ge M_1$.
• Further, there exists $M_2\in\mathbb{N}$ such that $\frac1n\sum_{i=1}^{M_1-1}|a_i-a|\le\frac\epsilon2$ for all $n\ge M_2$.
Taking $M=\max\{M_1,M_2\}$, it follows that
$$\left|a-\frac1n\sum_{i\le n}a_i\right|\le\frac1n\sum_{i\le n}|a-a_i|=\frac1n\sum_{i=1}^{M_1-1}|a-a_i|+\frac1n\sum_{i=M_1}^n|a-a_i|\le\frac\epsilon2+\frac1n\sum_{i=M_1}^n\frac\epsilon2\le\frac\epsilon2+\frac1n\sum_{i=1}^n\frac\epsilon2=\epsilon,$$
thus concluding the first result. The second result trivially follows after observing
$$\left|\frac1n\sum_{i\le n}a_ib_i\right|\le C\,\frac1n\sum_{i\le n}|a_i|,$$
and the right hand side converges to $0$ in view of the first result.

Lemma B.9. Let $(a_n)_{n\ge1}$ and $(b_n)_{n\ge1}$ be two deterministic sequences such that
$$a_n\overset{n\to\infty}{\to}a,\qquad b_i\ge0,\qquad \frac1n\sum_{i=1}^nb_i\overset{n\to\infty}{\to}b.$$
Then
$$\frac1n\sum_{i=1}^na_ib_i\overset{n\to\infty}{\to}ab.$$
Proof. Let $\epsilon\in(0,1)$ be arbitrary. It suffices to show that there exists $M\in\mathbb{N}$ such that
$$\left|\frac1n\sum_{i=1}^na_ib_i-ab\right|\le\epsilon$$
for all $n\ge M$. Given that $a_n\to a$, there exists $M_1\in\mathbb{N}$ such that $\sup_{i\ge M_1}|a_i-a|\le\frac{\epsilon}{3(b+1)}$. Furthermore, $\frac1n\sum_{i=1}^nb_i\to b$ implies the existence of $M_2\in\mathbb{N}$ such that
$$\left|\frac1n\sum_{i=1}^nb_i-b\right|\le\frac{\epsilon}{3(|a|+1)}.$$
Lastly, there exists $M_3\in\mathbb{N}$ such that
$$\frac1n\sup_{i\le M'-1}|a_i-a|\sum_{i=1}^{M'-1}b_i\le\frac\epsilon3,$$
where $M'=M_1\vee M_2$. Taking $M=M'\vee M_3$, it follows that
$$\left|\frac1n\sum_{i=1}^na_ib_i-ab\right|=\left|\frac1n\sum_{i=1}^n\left(a_ib_i-ab_i+ab_i-ab\right)\right|\le\frac1n\sup_{i\le n}|a_i-a|\sum_{i=1}^nb_i+|a|\left|\frac1n\sum_{i=1}^nb_i-b\right|\le\frac1n\sup_{i\le n}|a_i-a|\sum_{i=1}^nb_i+|a|\,\frac{\epsilon}{3(|a|+1)}\le\frac1n\sup_{i\le M'-1}|a_i-a|\sum_{i=1}^{M'-1}b_i+\frac1n\sup_{M'\le i\le n}|a_i-a|\sum_{i=M'}^nb_i+\frac\epsilon3\le\frac\epsilon3+\sup_{i\ge M'}|a_i-a|\,\frac1n\sum_{i=1}^nb_i+\frac\epsilon3\le\frac\epsilon3+\frac{\epsilon}{3(b+1)}\left(\frac{\epsilon}{3(|a|+1)}+b\right)+\frac\epsilon3\le\frac\epsilon3+\frac\epsilon3+\frac\epsilon3=\epsilon.$$

Lemma B.10. Let $(a_{n,i})_{n\ge1,i\in[n]}$ and $(b_n)_{n\ge1}$ be two deterministic sequences such that
$$a_{n,n}\overset{n\to\infty}{\to}a,\qquad |a_{n,i}-a|\le|a_{i,i}-a|,\qquad b_i\ge0,\qquad \frac1n\sum_{i=1}^nb_i\overset{n\to\infty}{\to}b.$$
Then
$$\frac1n\sum_{i=1}^na_{n,i}b_i\overset{n\to\infty}{\to}ab.$$
Proof. Let $\epsilon\in(0,1)$ be arbitrary. It suffices to show that there exists $M\in\mathbb{N}$ such that
$$\left|\frac1n\sum_{i=1}^na_{n,i}b_i-ab\right|\le\epsilon$$
for all $n\ge M$. Given that $a_{n,n}\to a$, there exists $M_1\in\mathbb{N}$ such that $\sup_{i\ge M_1}|a_{i,i}-a|\le\frac{\epsilon}{3(b+1)}$. Furthermore, $\frac1n\sum_{i=1}^nb_i\to b$ implies the existence of $M_2\in\mathbb{N}$ such that
$$\left|\frac1n\sum_{i=1}^nb_i-b\right|\le\frac{\epsilon}{3(|a|+1)}.$$
Lastly, there exists $M_3\in\mathbb{N}$ such that
$$\frac1n\sup_{i\le M'-1}|a_{i,i}-a|\sum_{i=1}^{M'-1}b_i\le\frac\epsilon3,$$
where $M'=M_1\vee M_2$. Taking $M=M'\vee M_3$, it follows that
$$\left|\frac1n\sum_{i=1}^na_{n,i}b_i-ab\right|=\left|\frac1n\sum_{i=1}^n\left(a_{n,i}b_i-ab_i+ab_i-ab\right)\right|\le\frac1n\sup_{i\le n}|a_{n,i}-a|\sum_{i=1}^nb_i+|a|\left|\frac1n\sum_{i=1}^nb_i-b\right|\le\frac1n\sup_{i\le n}|a_{i,i}-a|\sum_{i=1}^nb_i+|a|\,\frac{\epsilon}{3(|a|+1)}\le\frac1n\sup_{i\le M'-1}|a_{i,i}-a|\sum_{i=1}^{M'-1}b_i+\frac1n\sup_{M'\le i\le n}|a_{n,i}-a|\sum_{i=M'}^nb_i+\frac\epsilon3\le\frac\epsilon3+\sup_{i\ge M'}|a_{i,i}-a|\,\frac1n\sum_{i=1}^nb_i+\frac\epsilon3\le\frac\epsilon3+\frac{\epsilon}{3(b+1)}\left(\frac{\epsilon}{3(|a|+1)}+b\right)+\frac\epsilon3\le\epsilon.$$

Lemma B.11. Let $(a_{n,i})_{n\ge1,i\in[n]}$ be such that
$$a_{n,i}\ge0,\qquad \sum_{i=1}^na_{n,i}\le C\ \text{for some}\ C<\infty,\qquad a_{n,i}\overset{n\to\infty}{\to}0\quad\forall i\ge1,$$
and let $(b_n)_{n\ge1}$ be such that $b_n\overset{n\to\infty}{\to}0$. Then
$$\sum_{i=1}^na_{n,i}b_i\overset{n\to\infty}{\to}0.$$
Proof. Let $\epsilon>0$. We want to show that there exists $M\in\mathbb{N}$ such that $\sum_{i=1}^na_{n,i}b_i\le\epsilon$ for all $n\ge M$.
• Given that $b_n\to0$, there exists $M_1\in\mathbb{N}$ such that $|b_n|\le\frac{\epsilon}{2C}$ for all $n>M_1$.
• Further, there exists $M_2\in\mathbb{N}$ such that $\sum_{i=1}^{M_1}a_{n,i}b_i\le\frac\epsilon2$ for all $n\ge M_2$. Such an $M_2$ exists, as it suffices to take $M_2=\max\{M_{2,i}:i\in[M_1]\}$, where $M_{2,i}$ is such that $a_{n,i}b_i\le\frac{\epsilon}{2M_1}$ for $n\ge M_{2,i}$ (whose existence is granted by $a_{n,i}\to0$ as $n\to\infty$ for any fixed $i$).
Taking $M=\max\{M_1,M_2\}$, it follows that
$$\sum_{i=1}^na_{n,i}b_i=\sum_{i=1}^{M_1}a_{n,i}b_i+\sum_{i=M_1+1}^na_{n,i}b_i\le\frac\epsilon2+\frac{\epsilon}{2C}\sum_{i=M_1+1}^na_{n,i}\overset{(i)}{\le}\frac\epsilon2+\frac{\epsilon}{2C}\sum_{i=1}^na_{n,i}\le\frac\epsilon2+\frac{\epsilon}{2C}\,C=\epsilon,$$
where (i) follows from $a_{n,i}\ge0$, thus concluding the result.

Lemma B.12. Let $a>0$ and $b>0$. If $Z_n>0$ a.s. and $Z_n\to b$ a.s., then
$$\inf_{n\ge1}\left\{\frac{a}{n+1}+\frac{n}{n+1}Z_n\right\}$$
is strictly positive almost surely.
Proof. Given that $Z_n>0$ a.s. and $Z_n\to b$ a.s., there exists $A\in\mathcal{F}$ such that $P(A)=1$ and $Z_n(\omega)>0$ for all $n$, as well as $Z_n(\omega)\to b$ as $n\to\infty$, for all $\omega\in A$. It suffices to show that, for $\omega\in A$,
$$\inf_{n\ge1}\left\{\frac{a}{n+1}+\frac{n}{n+1}Z_n(\omega)\right\}>0.$$
In order to see this, observe that $Z_n(\omega)\to b$ implies that there exists $m\in\mathbb{N}$ such that $Z_n(\omega)>\frac b2$ for all $n\ge m$. Given that the function $x\mapsto x/(x+1)$ is increasing, for all $n\ge m$,
$$\frac{a}{n+1}+\frac{n}{n+1}Z_n(\omega)\ge\frac{n}{n+1}Z_n(\omega)\ge\frac{m}{m+1}Z_n(\omega)\ge\frac{m}{m+1}\,\frac b2.$$
Given that $Z_n(\omega)>0$, for all $n<m$,
$$\frac{a}{n+1}+\frac{n}{n+1}Z_n(\omega)\ge\frac{a}{n+1}\ge\frac am.$$
From these two inequalities, we conclude that
$$\inf_{n\ge1}\left\{\frac{a}{n+1}+\frac{n}{n+1}Z_n(\omega)\right\}\ge\frac am\wedge\frac{m}{m+1}\,\frac b2$$
for all $\omega\in A$.

C Auxiliary propositions

The proofs of the propositions exhibited herein are deferred to Appendix E. We start by presenting the almost sure convergence of the fourth moment estimator used throughout. Its proof can be found in Appendix E.1.

Proposition C.1. Let $X_1,\ldots,X_n$ fulfill Assumption 1.1 and Assumption 4.5. Let $(\hat\mu_i)_{i\in[n]}$ and $(\hat\sigma^2_i)_{i\in[n]}$ be $[0,1]$-valued predictable sequences. If
$$\hat\mu_n\overset{a.s.}{\to}\mu,\qquad \hat\sigma^2_n\overset{a.s.}{\to}\sigma^2,$$
then
$$\frac1n\sum_{i=1}^n\left[(X_i-\hat\mu_i)^2-\hat\sigma^2_i\right]^2\overset{a.s.}{\to}\mathbb{V}\left[(X-\mu)^2\right],$$
which implies $\hat m^2_{4,n}\overset{a.s.}{\to}\mathbb{V}\left[(X-\mu)^2\right]$.

If $\mathbb{V}\left[(X-\mu)^2\right]=0$, then the fourth moment estimator does not only converge to $0$ almost surely, but it also does so at a $\widetilde O\left(\frac1t\right)$ rate. We start by formalizing this result when $(\hat m^2_{4,i})_{i\in[n]}$ is defined as in Section 4.2. Its proof may be found in Appendix E.2.

Proposition C.2. Let $X_1,\ldots,X_n$ fulfill Assumption 1.1 and Assumption 4.5 such that $\mathbb{V}\left[(X-\mu)^2\right]=0$. Let $(\hat\mu_i)_{i\in[n]}$, $(\hat\sigma^2_i)_{i\in[n]}$, and $(\hat m^2_{4,i})_{i\in[n]}$ be defined as in Section 4.2. Then
$$\hat m^2_{4,t}=\widetilde O\left(\frac1t\right)$$
almost surely.

The result also extends to the estimator $(\hat m^2_{4,i})_{i\in[n]}$ defined in Section 4.3. We present such an extension next, whose proof we defer to Appendix E.3.

Proposition C.3. Let $X_1,\ldots,X_n$ fulfill Assumption 1.1 and Assumption 4.5 such that $\mathbb{V}\left[(X-\mu)^2\right]=0$. Let $(\hat\mu_i)_{i\in[n]}$, $(\hat\sigma^2_i)_{i\in[n]}$, and $(\hat m^2_{4,i})_{i\in[n]}$ be defined as in Section 4.3. If $\log(1/\alpha_{1,n})=\widetilde O(1)$ and $0<\alpha_{1,n}\le\alpha$, then
$$\hat m^2_{4,t}=\widetilde O\left(\frac1t\right)$$
almost surely.

If $\mathbb{V}\left[(X-\mu)^2\right]>0$, the (normalized) sum of the plug-ins converges almost surely to a tractable quantity. We present this result next, and defer the proof to Appendix E.4.

Proposition C.4. Let $X_1,\ldots,X_n$ fulfill Assumption 1.1 and Assumption 4.5 such that $\mathbb{V}\left[(X_i-\mu)^2\right]>0$. Let $(\delta_n)_{n\ge1}$ be a deterministic sequence such that
$$\delta_n\to\delta>0,\qquad \delta_n>0.$$
Define
$$\lambda_{t,\delta_n}:=\sqrt{\frac{2\log(1/\delta_n)}{\hat m^2_{4,t}\,n}}\wedge c_1,$$
with $c_1\in(0,1]$ and $\hat m^2_{4,t}$ defined as in Section 4 with $c_2>0$. Then
$$\frac1{\sqrt n}\sum_{i=1}^n\lambda_{i,\delta_n}\overset{a.s.}{\to}\sqrt{\frac{2\log(1/\delta)}{\mathbb{V}[(X_i-\mu)^2]}}.$$

If $\mathbb{V}\left[(X-\mu)^2\right]=0$, we study the inverse of the (normalized) sum of the plug-ins. In the next proposition, we prove that such a quantity converges almost surely to $0$ at a $\widetilde O\left(\frac1{\sqrt n}\right)$ rate. Its proof may be found in Appendix E.6.

Proposition C.5. Let $X_1,\ldots,X_n$ fulfill Assumption 1.1 and Assumption 4.5 such that $\mathbb{V}\left[(X_i-\mu)^2\right]=0$. Let $(\delta_n)_{n\ge1}$ be a deterministic sequence such that
$$\delta_n\to\delta>0,\qquad \delta_n>0.$$
Define
$$\lambda_{t,\delta_n}:=\sqrt{\frac{2\log(1/\delta_n)}{\hat m^2_{4,t}\,n}}\wedge c_1,$$
with $c_1\in(0,1)$. Then
$$\frac{1}{\frac1{\sqrt n}\sum_{i=1}^n\lambda_{i,\delta_n}}=\widetilde O\left(\frac1{\sqrt n}\right)$$
almost surely.

We now analyze the almost sure convergence of the sum of the product of two sequences of random variables, one involving the plug-ins through the function $\psi_E$. Its proof may be found in Appendix E.5.

Proposition C.6. Let $X_1,\ldots,X_n$ fulfill Assumption 1.1 and Assumption 4.5 such that $\mathbb{V}\left[(X_i-\mu)^2\right]>0$.
Let (δn)n≥1be a deterministic sequence such that δn↗δ >0, δ n>0. Define λt,δn:=s 2 log(1 /δn) bm2 4,tn∧c1, with c1>0, andbm2 4,tdefined as in Section 4 with c2>0. Let (Zn)n≥1be such that Zi≥0,1 nnX i=1Zia.s.→a, 27 witha∈R. Then nX i=1ψE(λi,δn)Zia.s.→alog(1/δ) V[(Xi−µ)2]. Lastly, we present a technical proposition that will be used in the proof of Theorem 4.7. We defer its proof to Appendix E.7. Proposition C.7. Letα1,n= Ω 1 log(n) andbσkbe defined as in Section 4, with c3>0. Then X 2≤i≤nn log(2/α1,n) +σ2Pi−1 k=1ψPq 2 log(2 /α1,n) bσ2 kklog(1+ k)∧c5o2 Pi−1 k=1q 2 log(2 /α1,n) bσ2 kklog(1+ k)∧c52=˜O(1)(C.1) almost surely. D Main proofs D.1 Proof of Theorem 3.3 Fixtand observe Et−1exp ˜λt(Xt−µ) =Et−1∞X k=0 ˜λt(Xt−µ)k k! =∞X k=0˜λk tEt−1 (Xt−µ)k k! (i)= 1 +∞X k=2˜λk tEt−1 (Xt−µ)k k! (ii) ≤1 +Et−1 (Xt−µ)2∞X k=2˜λk t k! (iii)= 1 + σ2ψP(˜λt) (iv) ≤exp σ2ψP(˜λt) , where(i)followsfrom EXt=µ,(ii)from |Xt−µ| ≤1,(iii)from Et−1 (Xt−µ)2 = σ2, and (iv) from 1 +x≤exp(x)for all x∈R. It thus follows that S′ t= exp X i≤t˜λi(Xi−µ)−σ2X i≤tψP(˜λi) t≥1,
|
https://arxiv.org/abs/2505.01987v1
|
S′ 0= 1, 28 is a nonnegative supermartingale. In view of Ville’s inequality (Theorem 3.1), we observe that P exp X i≤t˜λi(Xi−µ)−σ2X i≤tψP(˜λi) ≥2/δ ≤δ 2, thus P X i≤t˜λi(Xi−µ)−σ2X i≤tψP(˜λi)≥log(2/δ) ≤δ 2, and so P µ≤P i≤t˜λiXi−σ2P i≤tψP(˜λi)−log(2/δ) P i≤t˜λi! ≤δ 2. Arguing analogously replacing Xi−µforµ−Xiand taking the union bound concludes the proof. D.2 Proof of Theorem 4.1 The processes are clearly nonnegative, so it remains to prove that they are supermartingales. Let us begin by showing that S+ tis indeed a supermartingale, i.e., Et−1expn λt (Xt−bµt)2−˜σ2 t −ψE(λt) (Xt−bµt)2−bσ2 t2o ≤1(D.1) for any t≥1. In order to see this, denote Yt= (Xt−bµt)2−˜σ2 t, δ t=bσ2 t−˜σ2 t, and restate (D.1) as Et−1expn λtYt−ψE(λt) (Yt−δt)2o ≤1. (D.2) FromFanetal.(2015,Proposition4.1), whichestablishesthat exp ξλ−ξ2ψE(λ) ≤ 1 +ξλfor any λ∈[0,1)andξ≥ −1, it follows that Et−1expn λtYt−ψE(λt) (Yt−δt)2o = exp( λtδt)Et−1expn λt(Yt−δt)−ψE(λt) (Yt−δt)2o ≤exp(λtδt)Et−1[1 +λt(Yt−δt)] (i)= exp( λtδt) (1−λtδt) (ii) ≤exp(λtδt) exp (−λtδt) = 1, 29 where (i) is obtained given that Et−1Yt= 0, and (ii) from 1 +x≤exfor all x∈R. Showing that S− tis a supermartingale follows analogously, but replacing Yt andδtby −(Xt−bµt)2+ ˜σ2 t,−bσ2 t+ ˜σ2 t. Note that this proof is analogous to the proof of Waudby-Smith and Ramdas (2024, Theorem 2), but with non-constant conditional expectations ˜σ2 i. D.3 Proof of Corollary 4.2 In view of Ville’s inequality (Theorem 3.1) and Theorem 4.1, the probability of the event exp X i≤tλi ±(Xi−bµi)2∓˜σ2 i −ψE(λi) (Xi−bµi)2−bσ2 i2 ≥2/δ uniformly over tis upper bounded byδ 2, and so is X i≤tλi ±(Xi−bµi)2∓˜σ2 i −ψE(λi) (Xi−bµi)2−bσ2 i2≥log(2/δ) uniformly over t. Thus P sup t∓P i≤tλi˜σ2 iP i≤tλi±Dt−Rt,α 2≥0! ≤δ 2. From P i≤tλi˜σ2 iP i≤tλi=P i≤tλi σ2+ (bµi−µ)2 P i≤tλi=σ2+Et, it follows that P sup t∓σ2∓Et±Dt−Rt,α 2≥0 ≤δ 2, which allows to conclude that P σ /∈ Dt−Et−Rt,α 2, Dt−Et+Rt,α 2 ≤δ 2, uniformly over t. 
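The key pointwise bound from Fan et al. (2015, Proposition 4.1) invoked in the proof of Theorem 4.1 above can be probed numerically. This is a sketch over an arbitrary grid, not a proof; note that equality holds at $\xi=-1$ and $\xi=0$, so a small tolerance is needed.

```python
import math

def psi_E(lam):
    """psi_E(lambda) = -log(1 - lambda) - lambda, for lambda in [0, 1)."""
    return -math.log(1 - lam) - lam

# grid check of exp(xi*lam - xi^2 * psi_E(lam)) <= 1 + xi*lam
# for lam in [0, 1) and xi >= -1
for lam in (0.01, 0.1, 0.5, 0.9):
    for xi in (-1.0, -0.5, 0.0, 0.5, 1.0, 3.0):
        lhs = math.exp(xi * lam - xi * xi * psi_E(lam))
        assert lhs <= 1 + xi * lam + 1e-9
```
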
D.4 Proof of Corollary 4.4

As exhibited in Section 4.3,
$$\sigma^2\ge D_t-\frac{\sum_{i\le t}\lambda_i\tilde R^2_{i,\alpha_1}}{\sum_{i\le t}\lambda_i}-R_{t,\alpha_2}$$
uniformly over $t$ with probability at least $1-\alpha$, where $\alpha_1+\alpha_2=\alpha$. Taking $\tilde R^2_{i,\alpha_1}$ as in (4.2) leads to
$$\sigma^2\ge D_t-\frac{\sum_{i\le t}\lambda_i\left(\tilde A_t\sigma^4+\tilde B_{t,\alpha_1}\sigma^2+\tilde C_{t,\alpha_1}\right)}{\sum_{i\le t}\lambda_i}-R_{t,\alpha_2},$$
i.e.,
$$\sigma^2\ge D_t-A_t\sigma^4-(B_{t,\alpha_1}-1)\sigma^2-C_{t,\alpha_1}-R_{t,\alpha_2}.$$
Thus, it suffices to consider $\sigma^2\ge\sigma^2_{l,t}$, where $\sigma^2_{l,t}$ is such that
$$\sigma^2_{l,t}=D_t-A_t\sigma^4_{l,t}-(B_{t,\alpha_1}-1)\sigma^2_{l,t}-C_{t,\alpha_1}-R_{t,\alpha_2}.$$
Clearly, solving this quadratic polynomial leads to (4.3).

D.5 Proof of Theorem 4.6

We proceed differently for the cases $\mathbb{V}\left[(X_i-\mu)^2\right]=0$ and $\mathbb{V}\left[(X_i-\mu)^2\right]>0$.

Case 1: $\mathbb{V}\left[(X_i-\mu)^2\right]=0$. Note that
$$\sqrt n\,R_{n,\delta_n}=\frac{\log(1/\delta_n)+\sum_{i\le n}\psi_E(\lambda_{i,\delta_n})\left[(X_i-\hat\mu_i)^2-\hat\sigma^2_i\right]^2}{\frac1{\sqrt n}\sum_{i\le n}\lambda_{i,\delta_n}}.$$
Denote $\nu^2_i:=\left[(X_i-\hat\mu_i)^2-\hat\sigma^2_i\right]^2$. In view of Proposition C.2 or Proposition C.3, it follows that $n\,\hat m^2_{4,n}=\widetilde O(1)$ almost surely, and so there exists $A\in\mathcal{F}$ with $P(A)=1$ such that $n\,\hat m^2_{4,n}(\omega)=\widetilde O(1)$ for all $\omega\in A$. For $\omega\in A$, it may be that
$$\sum_{i=1}^{\infty}\nu^2_i(\omega)=:M<\infty\tag{D.3}$$
or
$$\sum_{i=1}^{\infty}\nu^2_i(\omega)=\infty.\tag{D.4}$$
If (D.3) holds, then
$$\sum_{i\le n}\psi_E(\lambda_{i,\delta_n}(\omega))\nu^2_i(\omega)\le\psi_E(c_1)\sum_{i\le n}\nu^2_i(\omega)\le\psi_E(c_1)M,$$
and so $\log(1/\delta_n)+\sum_{i\le n}\psi_E(\lambda_{i,\delta_n}(\omega))\nu^2_i(\omega)$ is upper bounded by $\log(1/l)+\psi_E(c_1)M$, where $l=\inf_{n\in\mathbb{N}}\delta_n$ (which is strictly positive given that $\delta_n\to\delta>0$ and $\delta_n>0$). If (D.4) holds, then there exists $m(\omega)\in\mathbb{N}$ such that, for $t\ge m(\omega)$,
$$\hat m^2_{4,t}(\omega)\,t=c_2+\sum_{i=1}^{t-1}\nu^2_i(\omega)\ge\frac{2\log(1/l)}{c^2_1}.$$
Thus
$$\sqrt{\frac{2\log(1/\delta_n)}{\hat m^2_{4,t}(\omega)\,n}}\le\sqrt{\frac{2\log(1/l)}{\hat m^2_{4,t}(\omega)\,t}}\le c_1,\qquad\text{and so}\qquad \lambda_{i,\delta_n}(\omega)=\sqrt{\frac{2\log(1/\delta_n)}{\hat m^2_{4,i}(\omega)\,n}}$$
for $i\ge m(\omega)$. Denote
$$(\mathrm{I}_n(\omega)):=\log(1/\delta_n)+\sum_{i<m(\omega)}\psi_E(\lambda_{i,\delta_n}(\omega))\nu^2_i(\omega),\qquad (\mathrm{II}_n(\omega)):=\sum_{i=m(\omega)}^{n}\psi_E(\lambda_{i,\delta_n}(\omega))\nu^2_i(\omega).$$
We observe that
$$(\mathrm{I}_n(\omega))\le\log(1/l)+\psi_E(c_1)\sum_{i<m(\omega)}\nu^2_i(\omega),$$
and so it is bounded. Furthermore,
$$(\mathrm{II}_n(\omega))=\sum_{i=m(\omega)}^{n}\psi_E\left(\sqrt{\frac{2\log(1/\delta_n)}{\hat m^2_{4,i}(\omega)\,n}}\right)\nu^2_i(\omega)\overset{(i)}{\le}\frac{2\log(1/\delta_n)\psi_E(c_1)}{c^2_1}\sum_{i=m(\omega)}^{n}\frac{\nu^2_i(\omega)}{\hat m^2_{4,i}(\omega)\,n}\le\frac{2\log(1/l)\psi_E(c_1)}{c^2_1}\sum_{i=m(\omega)}^{n}\frac{\nu^2_i(\omega)}{i\,\hat m^2_{4,i}(\omega)}=\frac{2\log(1/l)\psi_E(c_1)}{c^2_1}\sum_{i=m(\omega)}^{n}\frac{\nu^2_i(\omega)}{c_2+\sum_{j\le i-1}\nu^2_j(\omega)}\overset{(ii)}{\le}\frac{2\log(1/l)\psi_E(c_1)}{c^2_1}\log\left(c_2+\sum_{i=1}^{n}\nu^2_i(\omega)\right)=\frac{2\log(1/l)\psi_E(c_1)}{c^2_1}\log\left(\hat m^2_{4,n}(\omega)\,n\right),$$
where (i) follows from Lemma B.1 and $c_1\in(0,1)$, and (ii) follows from Lemma B.7. Given that $\hat m^2_{4,n}(\omega)\,n=\widetilde O(1)$, it follows from the previous inequalities that $(\mathrm{I}_n(\omega))+(\mathrm{II}_n(\omega))$ is also $\widetilde O(1)$. Consequently, we have shown that, regardless of (D.3) or (D.4) holding,
$$\sum_{i\le n}\psi_E(\lambda_{i,\delta_n}(\omega))\nu^2_i(\omega)=\widetilde O(1)$$
for all $\omega\in A$, with $P(A)=1$. That is,
$$\log(1/\delta_n)+\sum_{i\le n}\psi_E(\lambda_{i,\delta_n})\left[(X_i-\hat\mu_i)^2-\hat\sigma^2_i\right]^2=\widetilde O(1)$$
almost surely. Further, by Proposition C.5 and $\delta_n\to\delta$, it also follows that
$$\frac{1}{\frac1{\sqrt n}\sum_{i=1}^n\lambda_{i,\delta_n}}=\widetilde O\left(\frac1{\sqrt n}\right)$$
almost surely. Thus, it is concluded that $\sqrt n\,R_{n,\delta_n}=\widetilde O\left(\frac1{\sqrt n}\right)$ almost surely, and so it converges to $0$ almost surely.

Case 2: $\mathbb{V}\left[(X_i-\mu)^2\right]>0$. By Proposition C.4,
$$\frac1{\sqrt n}\sum_{i=1}^n\lambda_{i,\delta_n}\overset{a.s.}{\to}\sqrt{\frac{2\log(1/\delta)}{\mathbb{V}[(X_i-\mu)^2]}}.$$
In view of
$$\frac1n\sum_{i\le n}\left[(X_i-\hat\mu_i)^2-\hat\sigma^2_i\right]^2\to\mathbb{V}\left[(X_i-\mu)^2\right]$$
almost surely (Proposition C.1) and Proposition C.6, it follows that
$$\sum_{i\le n}\psi_E(\lambda_{i,\delta_n})\left[(X_i-\hat\mu_i)^2-\hat\sigma^2_i\right]^2\to\mathbb{V}\left[(X_i-\mu)^2\right]\frac{\log(1/\delta)}{\mathbb{V}[(X_i-\mu)^2]}=\log(1/\delta).$$
We thus conclude that
$$\sqrt n\,R_{n,\delta_n}\to\frac{\log(1/\delta)+\log(1/\delta)}{\sqrt{\frac{2\log(1/\delta)}{\mathbb{V}[(X_i-\mu)^2]}}}=\sqrt{2\,\mathbb{V}[(X_i-\mu)^2]\log(1/\delta)}$$
almost surely.

D.6 Proof of Theorem 4.7

We differentiate the cases $\mathbb{V}\left[(X_i-\mu)^2\right]=0$ and $\mathbb{V}\left[(X_i-\mu)^2\right]>0$.

Case 1: $\mathbb{V}\left[(X_i-\mu)^2\right]=0$. By Proposition C.5 and $\alpha_{2,n}\to\alpha$,
$$\frac{1}{\frac1{\sqrt n}\sum_{i=1}^n\lambda_{i,\alpha_{2,n}}}=\widetilde O\left(\frac1{\sqrt n}\right)$$
almost surely.
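The last step of Case 2 in the proof of Theorem 4.6 above rests on an algebraic identity, $\frac{2\log(1/\delta)}{\sqrt{2\log(1/\delta)/V}}=\sqrt{2V\log(1/\delta)}$, which can be checked numerically. The toy values of $V$ and $\delta$ below are arbitrary.

```python
import math

def oracle_first_order(V, delta):
    """sqrt(2 * V[(X - mu)^2] * log(1/delta)), the oracle Bernstein width."""
    return math.sqrt(2 * V * math.log(1 / delta))

# numerator of the limit: 2 log(1/delta);
# denominator: the limit sqrt(2 log(1/delta) / V) of n^{-1/2} sum_i lambda_i
V, d = 1 / 12, 0.05
lam_limit = math.sqrt(2 * math.log(1 / d) / V)
assert abs(2 * math.log(1 / d) / lam_limit - oracle_first_order(V, d)) < 1e-9
```
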
Furthermore, in view of Proposition C.7,
$$\sum_{i\le n}\frac{\left\{\log(2/\alpha_{1,n})+\sigma^2\sum_{k=1}^{i-1}\psi_P\left(\sqrt{\frac{2\log(2/\alpha_{1,n})}{\hat\sigma^2_k\,k\log(1+k)}}\wedge c_5\right)\right\}^2}{\left(\sum_{k=1}^{i-1}\sqrt{\frac{2\log(2/\alpha_{1,n})}{\hat\sigma^2_k\,k\log(1+k)}}\wedge c_5\right)^2}$$
is $\widetilde O(1)$ almost surely. Thus, the product of both is $\widetilde O\left(\frac1{\sqrt n}\right)$ almost surely, which further implies that it converges to $0$ almost surely.

Case 2: $\mathbb{V}\left[(X_i-\mu)^2\right]>0$. By Proposition C.4 and $\alpha_{2,n}\to\alpha$,
$$\frac1{\sqrt n}\sum_{i\le n}\lambda_{i,\alpha_{2,n}}\overset{a.s.}{\to}\sqrt{\frac{2\log(1/\alpha)}{\mathbb{V}[(X_i-\mu)^2]}}.$$
Thus, it suffices to prove
$$\sum_{i\le n}\lambda_{i,\alpha_{2,n}}\tilde R^2_{i,\alpha_{1,n}}\overset{a.s.}{\to}0\tag{D.5}$$
to conclude the proof. We note that $\sum_{i\le n}\lambda_{i,\alpha_{2,n}}\tilde R^2_{i,\alpha_{1,n}}$ is equal to
$$\frac1{\sqrt n}\sum_{i\le n}\sqrt n\,\lambda_{i,\alpha_{2,n}}\frac{\left\{\log(2/\alpha_{1,n})+\sigma^2\sum_{k=1}^{i-1}\psi_P\left(\sqrt{\frac{2\log(2/\alpha_{1,n})}{\hat\sigma^2_k\,k\log(1+k)}}\wedge c_5\right)\right\}^2}{\left(\sum_{k=1}^{i-1}\sqrt{\frac{2\log(2/\alpha_{1,n})}{\hat\sigma^2_k\,k\log(1+k)}}\wedge c_5\right)^2}.$$
By Proposition C.7,
$$\frac1{\sqrt n}\sum_{i\le n}\frac{\left\{\log(2/\alpha_{1,n})+\sigma^2\sum_{k=1}^{i-1}\psi_P\left(\sqrt{\frac{2\log(2/\alpha_{1,n})}{\hat\sigma^2_k\,k\log(1+k)}}\wedge c_5\right)\right\}^2}{\left(\sum_{k=1}^{i-1}\sqrt{\frac{2\log(2/\alpha_{1,n})}{\hat\sigma^2_k\,k\log(1+k)}}\wedge c_5\right)^2}$$
is $\widetilde O\left(\frac1{\sqrt n}\right)$ almost surely, and so it suffices to show that $\sup_{n\in\mathbb{N}}\sup_{i\le n}\sqrt n\,\lambda_{i,\alpha_{2,n}}$ is bounded almost surely in order to conclude the result (in view of Hölder's inequality, (D.5) will follow). Analogously to the proof of Proposition C.4, there exist $m_\omega\in\mathbb{N}$ and $u(\omega)<\infty$ such that
$$\lambda_{t,\alpha_{2,n}}(\omega)=\sqrt{\frac{2\log(1/\alpha_{2,n})}{\hat m^2_{4,t}(\omega)\,n}},\qquad \frac{1}{\hat m^2_{4,n}(\omega)}\le u(\omega),$$
for $n\ge m_\omega$ and $\omega\in A$, with $P(A)=1$. Given that $\alpha_{2,n}\to\alpha>0$ and $\alpha_{2,n}>0$, then $l:=\inf_n\alpha_{2,n}>0$, and so we observe that
$$\sqrt n\,\lambda_{t,\alpha_{2,n}}(\omega)=\sqrt{\frac{2\log(1/\alpha_{2,n})}{\hat m^2_{4,t}(\omega)}}\le\sqrt{2\log(1/l)\,u(\omega)}$$
for $n\ge m_\omega$. It follows that
$$\sup_{n\in\mathbb{N}}\sup_{i\le n}\sqrt n\,\lambda_{i,\alpha_{2,n}}(\omega)\le\sqrt{2\log(1/l)\,u(\omega)}\vee\sup_{n<m_\omega}\sup_{i\le n}\sqrt n\,\lambda_{i,\alpha_{2,n}}(\omega)<\infty,$$
and thus the result is concluded by Hölder's inequality.

D.7 Proof of Theorem 4.10

Denote $f_t=\sum_{i\le t}\tilde\lambda_i(X_i-\mu)$. Pinelis (1994, Theorem 3.2) showed that
$$\mathbb{E}_{t-1}\cosh(\|f_t\|)\le\left[1+\mathbb{E}_{t-1}\psi_P\left(\tilde\lambda_t\|X_t-\mu\|\right)\right]\cosh(\|f_{t-1}\|).$$
Similarly to the proof of Theorem 3.3, it now follows that
$$1+\mathbb{E}_{t-1}\psi_P\left(\tilde\lambda_t\|X_t-\mu\|\right)=1+\mathbb{E}_{t-1}\sum_{k=2}^{\infty}\frac{\left(\tilde\lambda_t\|X_t-\mu\|\right)^k}{k!}=1+\sum_{k=2}^{\infty}\frac{\tilde\lambda^k_t\,\mathbb{E}_{t-1}\left[\|X_t-\mu\|^k\right]}{k!}\overset{(i)}{\le}1+\mathbb{E}_{t-1}\left[\|X_t-\mu\|^2\right]\sum_{k=2}^{\infty}\frac{\tilde\lambda^k_t}{k!}\overset{(ii)}{=}1+\sigma^2\psi_P(\tilde\lambda_t)\overset{(iii)}{\le}\exp\left(\sigma^2\psi_P(\tilde\lambda_t)\right),$$
where (i) follows from $\|X_t-\mu\|\le1$, (ii) from $\mathbb{E}_{t-1}\|X_t-\mu\|^2=\sigma^2$, and (iii) from $1+x\le\exp(x)$ for all $x\in\mathbb{R}$. Thus, the process
$$S'_t=\cosh\left(\left\|\sum_{i\le t}\tilde\lambda_i(X_i-\mu)\right\|\right)\exp\left(-\sigma^2\sum_{i\le t}\psi_P(\tilde\lambda_i)\right)$$
for $t\ge1$, and $S'_0=1$, is a nonnegative supermartingale. In view of Ville's inequality (Theorem 3.1) we observe that
$$P\left(\cosh\left(\left\|\sum_{i\le t}\tilde\lambda_i(X_i-\mu)\right\|\right)\exp\left(-\sigma^2\sum_{i\le t}\psi_P(\tilde\lambda_i)\right)\ge\frac1\delta\right)\le\delta,$$
and, from $\exp x\le2\cosh x$ for $x\in\mathbb{R}$, it follows that
$$P\left(\exp\left(\left\|\sum_{i\le t}\tilde\lambda_i(X_i-\mu)\right\|\right)\exp\left(-\sigma^2\sum_{i\le t}\psi_P(\tilde\lambda_i)\right)\ge\frac2\delta\right)\le\delta.$$
Thus
$$P\left(\left\|\sum_{i\le t}\tilde\lambda_i(X_i-\mu)\right\|-\sigma^2\sum_{i\le t}\psi_P(\tilde\lambda_i)\ge\log(2/\delta)\right)\le\delta,$$
and so
$$P\left(\left\|\mu-\frac{\sum_{i\le t}\tilde\lambda_iX_i}{\sum_{i\le t}\tilde\lambda_i}\right\|\ge\frac{\sigma^2\sum_{i\le t}\psi_P(\tilde\lambda_i)+\log(2/\delta)}{\sum_{i\le t}\tilde\lambda_i}\right)\le\delta.$$

D.8 Proof of Theorem A.1

The proof is analogous to that of Theorem 4.1, replacing $Y_t=(X_t-\hat\mu_t)^2-\tilde\sigma^2_t$ by $Y_t=\|X_t-\hat\mu^{HS}_t\|^2-\tilde\sigma^2_t$.

E Proofs of auxiliary propositions

E.1 Proof of Proposition C.1

Denote $\tilde m^2_{4,n}:=\frac1n\sum_{i=1}^n\left[(X_i-\hat\mu_i)^2-\hat\sigma^2_i\right]^2$. Then
$$\tilde m^2_{4,n}=\frac1n\sum_{i=1}^n\left[(X_i-\hat\mu_i)^2-\sigma^2+\sigma^2-\hat\sigma^2_i\right]^2=\underbrace{\frac1n\sum_{i=1}^n\left[(X_i-\hat\mu_i)^2-\sigma^2\right]^2}_{(\mathrm{I}_n)}+\underbrace{\frac2n\sum_{i=1}^n\left[(X_i-\hat\mu_i)^2-\sigma^2\right]\left[\sigma^2-\hat\sigma^2_i\right]}_{(\mathrm{II}_n)}+\underbrace{\frac1n\sum_{i=1}^n\left[\sigma^2-\hat\sigma^2_i\right]^2}_{(\mathrm{III}_n)}.$$
It suffices to prove that $(\mathrm{I}_n)$ converges to $\mathbb{V}\left[(X-\mu)^2\right]$ almost surely, and $(\mathrm{II}_n)$ and $(\mathrm{III}_n)$ converge to $0$ almost surely.
• Denoting $\gamma_i=(\mu-\hat\mu_i)^2+2(X_i-\mu)(\mu-\hat\mu_i)$, it follows that
$$(\mathrm{I}_n)=\frac1n\sum_{i=1}^n\left[(X_i-\mu+\mu-\hat\mu_i)^2-\sigma^2\right]^2=\frac1n\sum_{i=1}^n\left[(X_i-\mu)^2-\sigma^2+\gamma_i\right]^2=\frac1n\sum_{i=1}^n\left[(X_i-\mu)^2-\sigma^2\right]^2+\frac2n\sum_{i=1}^n\left[(X_i-\mu)^2-\sigma^2\right]\gamma_i+\frac1n\sum_{i=1}^n\gamma^2_i.$$
The first of these three summands converges to $\mathbb{V}\left[(X-\mu)^2\right]$ by the scalar martingale strong law of large numbers (Hall and Heyde, 2014, Theorem 2.19). Given that $(\hat\mu_i-\mu)\to0$ almost surely, $\gamma_i\to0$ almost surely as well.
Thus, the latter summands converge to $0$ almost surely: the second summand converges to $0$ in view of Lemma B.8 and the fact that the $(X_i-\mu)^2-\sigma^2$ are bounded; the third summand converges to $0$ almost surely also in view of Lemma B.8.

• Given that $\big((X_i-\hat\mu_i)^2-\sigma^2\big)$ is bounded and $\big(\sigma^2-\hat\sigma^2_i\big)\to0$ almost surely, $(II_n)$ converges to $0$ almost surely by Lemma B.8.

• Given that $\big(\sigma^2-\hat\sigma^2_i\big)\to0$ almost surely, $\big(\sigma^2-\hat\sigma^2_i\big)^2\to0$ almost surely, and so $(III_n)$ converges to $0$ almost surely by Lemma B.8.

Thus, $\tilde m^2_{4,n}\to\mathbb V[(X-\mu)^2]$ almost surely. Given that $\hat m^2_{4,n}=\frac{c_2}{n}+\frac{n-1}{n}\tilde m^2_{4,n-1}$, this also implies that $\hat m^2_{4,n}\to\mathbb V[(X-\mu)^2]$ almost surely.

E.2 Proof of Proposition C.2

Denote $v_i=(X_i-\hat\mu_i)^2-\hat\sigma^2_i$, so that

\[
m^2_{4,t}=\frac{c_2+\sum_{i\le t-1}v_i^2}{t}.
\]

If $\sigma^2=0$, then $X_i=\mu$ for all $i$. In that case,

\[
\hat\mu_i=\frac{c_4}{i}+\frac{i-1}{i}\mu,\qquad
\hat\sigma^2_i=\frac{c_3}{i}+\frac{(c_4-\mu)^2\sum_{j\le i-1}\frac{1}{j^2}}{i}.
\]

Note that

\[
\hat\sigma^2_i\le\frac{c_3}{i}+\frac{(c_4-\mu)^2\sum_{j=1}^\infty\frac{1}{j^2}}{i}=\frac{c_3+(c_4-\mu)^2\frac{\pi^2}{6}}{i},
\]

and so

\[
v_i^2=\Big[\Big(\frac{c_4-\mu}{i}\Big)^2-\hat\sigma^2_i\Big]^2
\overset{(i)}{\le}2\Big(\frac{c_4-\mu}{i}\Big)^4+2\hat\sigma^4_i
\le\frac{2(c_4-\mu)^4}{i^2}+2\hat\sigma^4_i
\le\frac{\kappa_1}{i^2},
\]

where $\kappa_1:=2(c_4-\mu)^4+2\big(c_3+(c_4-\mu)^2\frac{\pi^2}{6}\big)^2$, and (i) follows from $(a-b)^2\le2a^2+2b^2$. Thus

\[
m^2_{4,t}\le\frac{c_2+\sum_{i\le t-1}\frac{\kappa_1}{i^2}}{t}\le\frac{c_2+\sum_{i=1}^\infty\frac{\kappa_1}{i^2}}{t}=\frac{c_2+\kappa_1\frac{\pi^2}{6}}{t}=O\Big(\frac1t\Big).
\]

If $\sigma^2>0$, note that

\[
v_i=(X_i-\hat\mu_i)^2-\hat\sigma^2_i
=(X_i-\mu)^2-\sigma^2+2(X_i-\mu)(\mu-\hat\mu_i)+(\mu-\hat\mu_i)^2+\sigma^2-\hat\sigma^2_i
\overset{(i)}{=}2(X_i-\mu)(\mu-\hat\mu_i)+(\mu-\hat\mu_i)^2+\sigma^2-\hat\sigma^2_i,
\]

where (i) follows from $(X_i-\mu)^2=\sigma^2$, and so $|v_i|\le3|\mu-\hat\mu_i|+|\sigma^2-\hat\sigma^2_i|$. The martingale analogue of Kolmogorov's law of the iterated logarithm (Stout, 1970) establishes that

\[
\limsup_{i\to\infty}\frac{|\hat\mu_i-\mu|\,\sqrt i}{\sqrt{2\sigma^2\log\log(i\sigma^2)}}=1
\]

almost surely. That implies that there exists $A\in\mathcal F$ such that $P(A)=1$ and, for all $\omega\in A$,

\[
|\hat\mu_i(\omega)-\mu|\le\sqrt{C(\omega)}\,\frac{\sqrt{2\sigma^2\log\log(i\sigma^2)}}{\sqrt i}
\]

for some $C(\omega)<\infty$. Furthermore,

\[
\hat\sigma^2_i=\frac{c_3+\sum_{j\le i-1}(X_j-\bar\mu_j)^2}{i}
=\frac{c_3+\sum_{j\le i-1}\big[(X_j-\mu)^2+2(X_j-\mu)(\mu-\bar\mu_j)+(\mu-\bar\mu_j)^2\big]}{i}
\overset{(i)}{=}\frac{c_3}{i}+\frac{i-1}{i}\sigma^2+\frac{\sum_{j\le i-1}\big[2(X_j-\mu)(\mu-\bar\mu_j)+(\mu-\bar\mu_j)^2\big]}{i},
\]

where (i) follows from $(X_j-\mu)^2=\sigma^2$, and so

\[
\sigma^2-\hat\sigma^2_i=\frac{\sigma^2}{i}-\frac{c_3}{i}-\frac{\sum_{j\le i-1}\big[2(X_j-\mu)(\mu-\bar\mu_j)+(\mu-\bar\mu_j)^2\big]}{i},
\]

which implies

\[
|\sigma^2-\hat\sigma^2_i|\le\frac{c_3}{i}+\frac{\sigma^2}{i}+\frac{3\sum_{j\le i-1}|\mu-\bar\mu_j|}{i}\le\frac2i+\frac{3\sum_{j\le i-1}|\mu-\bar\mu_j|}{i}.
\]

Thus, for $\omega\in A$,

\[
|v_i(\omega)|\le3|\mu-\hat\mu_i(\omega)|+|\sigma^2-\hat\sigma^2_i(\omega)|
\le3|\mu-\hat\mu_i(\omega)|+\frac2i+\frac{3\sum_{j\le i-1}|\mu-\bar\mu_j(\omega)|}{i}
\le\frac{3\sqrt{2C(\omega)\sigma^2\log\log(i\sigma^2)}}{\sqrt i}+\frac2i+\frac{3\sqrt{2C(\omega)\sigma^2\log\log(i\sigma^2)}\sum_{j\le i-1}\frac{1}{\sqrt j}}{i}
\overset{(i)}{\le}\frac{3\sqrt{2C(\omega)\sigma^2\log\log(i\sigma^2)}}{\sqrt i}+\frac{2}{\sqrt i}+6\sqrt{2C(\omega)\sigma^2\log\log(i\sigma^2)}\,\frac{1}{\sqrt i}
=\big(2+9\sqrt{2C(\omega)\sigma^2\log\log(i\sigma^2)}\big)\frac{1}{\sqrt i},
\]

where (i) follows from Lemma B.3. Thus,

\[
(v_i(\omega))^2\le\big(2+9\sqrt{2C(\omega)\sigma^2\log\log(i\sigma^2)}\big)^2\frac1i.
\]

From here, it follows that, for $\omega\in A$,

\[
m^2_{4,t}(\omega)=\frac{c_2+\sum_{i\le t-1}v_i^2(\omega)}{t}
\le\frac{c_2+\sum_{i\le t-1}\big(2+9\sqrt{2C(\omega)\sigma^2\log\log(i\sigma^2)}\big)^2\frac1i}{t}
\le\frac{c_2+\big(2+9\sqrt{2C(\omega)\sigma^2\log\log(t\sigma^2)}\big)^2\sum_{i\le t-1}\frac1i}{t}
\overset{(i)}{\le}\frac{c_2+\big(2+9\sqrt{2C(\omega)\sigma^2\log\log(t\sigma^2)}\big)^2(1+\log t)}{t}
=\tilde O\Big(\frac1t\Big),
\]

where (i) follows from Lemma B.4. Noting that $P(A)=1$ concludes the result.

E.3 Proof of Proposition C.3

The proof follows analogously to that of Proposition C.2 as soon as we show that

• if $\sigma=0$, then $(X_i-\hat\mu_i)^2=O(1/i^2)$;

• if $\sigma>0$, then $|\hat\mu_i(\omega)-\mu|=\tilde O(1/\sqrt i)$ almost surely.

Let us now prove each of the statements. If $\sigma=0$, then

\[
\hat\mu_t=\frac{\sum_{i=1}^{t-1}\tilde\lambda_iX_i}{\sum_{i=1}^{t-1}\tilde\lambda_i}=\frac{\sum_{i=1}^{t-1}\tilde\lambda_i\mu}{\sum_{i=1}^{t-1}\tilde\lambda_i}=\mu,
\]

and we are done.
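As a brief numerical aside before the $\sigma>0$ case: the estimator consistency established in Proposition C.1, which these proofs use repeatedly, is easy to check by simulation. The sketch below is our own illustration, not code from the paper; it uses simplified predictable running estimates $\hat\mu_i$, $\hat\sigma^2_i$ (the regularizing constants $c_2,c_3,c_4$ are replaced by fixed initial values) on $U(0,1)$ draws, for which $\mathbb V[(X-\mu)^2]=1/180$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
X = rng.uniform(0, 1, n)                 # for U(0,1): V[(X−µ)²] = 1/180
idx = np.arange(1, n + 1)
mu_hat = np.cumsum(X) / idx
mu_pred = np.concatenate(([0.5], mu_hat[:-1]))       # predictable µ̂_i (c-constants dropped)
sig2_run = np.cumsum((X - mu_pred)**2) / idx
sig2_pred = np.concatenate(([1/12], sig2_run[:-1]))  # predictable σ̂²_i
m4_tilde = np.mean(((X - mu_pred)**2 - sig2_pred)**2)   # ˜m²_{4,n}
print(m4_tilde, 1/180)                   # both approximately 0.00556
```

The two printed values agree to a few decimal places, matching $\tilde m^2_{4,n}\to\mathbb V[(X-\mu)^2]$ almost surely.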
If $\sigma>0$, define

\[
\iota_i=\sqrt{\frac{2}{\hat\sigma^2_i\,i\log(i+1)}}.
\]

We shall start by studying the growth of $\sum_{i\le n}\iota_iX_i$. In view of

\[
\sum_{i=1}^\infty\mathbb E_{i-1}\big[\iota_i^2(X_i-\mu)^2\big]=\sigma^2\sum_{i=1}^\infty\iota_i^2=\sigma^2\sum_{i=1}^\infty\frac{2}{\hat\sigma^2_i\,i\log(i+1)}\ge2\sigma^2\sum_{i=1}^\infty\frac{1}{i\log(i+1)}\overset{(i)}{=}\infty,
\]

where (i) follows from Lemma B.6, as well as $|\iota_i(X_i-\mu)|\le\iota_i$ with $\iota_i\to0$ almost surely, we can apply the martingale analogue of Kolmogorov's law of the iterated logarithm (Stout, 1970). Thus, defining

\[
S^2_n:=\sum_{i=1}^n\mathbb E_{i-1}\big[\iota_i^2(X_i-\mu)^2\big]=\sigma^2\sum_{i=1}^n\iota_i^2,
\]

it follows that

\[
\limsup_{n\to\infty}\frac{\sum_{i\le n}\iota_i(X_i-\mu)}{\sqrt{2S^2_n\log\log(S^2_n)}}=1
\]

almost surely. Hence, there exists $A_1\in\mathcal F$ with $P(A_1)=1$ such that

\[
\Big|\sum_{i\le n}\iota_i(\omega)\big(X_i(\omega)-\mu\big)\Big|\le C(\omega)\sqrt{2S^2_n\log\log(S^2_n)}
\]

for all $\omega\in A_1$. Given that $\hat\sigma_n\to\sigma$ almost surely, there exists $A_2\in\mathcal F$ with $P(A_2)=1$ such that $\hat\sigma_n(\omega)\to\sigma$. Hence, for each $\omega\in A_2$, there exists $m(\omega)\in\mathbb N$ such that $\hat\sigma_i\ge\sigma/2$ for all $i\ge m(\omega)$. Thus, for $\omega\in A_2$,

\[
S^2_n(\omega)=\sigma^2\sum_{i=1}^n\frac{2}{\hat\sigma^2_i(\omega)\,i\log(i+1)}\le\sum_{i=1}^n\frac{8}{i\log(i+1)}\le\sum_{i=1}^n\frac{16}{i}\overset{(i)}{\le}16(\log n+1),
\]

where (i) is obtained in view of Lemma B.4. Hence, for $\omega\in A:=A_1\cap A_2$,

\[
\Big|\sum_{i\le n}\iota_i(\omega)\big(X_i(\omega)-\mu\big)\Big|\le C(\omega)\sqrt{32(\log n+1)\log\log\big(16(\log n+1)\big)},
\]

which is $\tilde O(1)$. That is, $\sum_{i\le n}\iota_i(X_i-\mu)=\tilde O(1)$ almost surely. Let us now show that

\[
\sum_{i\le n}\tilde\lambda_{i,\alpha_{1,n}}(X_i-\mu)=\tilde O(1)\qquad(E.1)
\]

almost surely as well. For $i\ge m(\omega)$,

\[
\sqrt{\frac{2\log(2/\alpha_{1,n})}{\hat\sigma^2_i\,i\log(i+1)}}\le\frac{2\sqrt2}{\sigma}\sqrt{\frac{\log(2/\alpha_{1,n})}{i\log(i+1)}},
\]

and so, for $i\ge m(\omega)\vee\sigma\sqrt{\log(2/\alpha_{1,n})}\,c_5$, it holds that $\tilde\lambda_{i,\alpha_{1,n}}=\sqrt{\log(2/\alpha_{1,n})}\,\iota_i$. Given that we will let $n$ tend to $\infty$, we can assume without loss of generality that $m(\omega)<\sigma\sqrt{\log(2/\alpha_{1,n})}\,c_5=:t_n$. In that case,

\[
\sum_{i\le n}\tilde\lambda_{i,\alpha_{1,n}}(X_i-\mu)
=\sum_{i<t_n}\tilde\lambda_{i,\alpha_{1,n}}(X_i-\mu)+\sum_{i=t_n}^{n}\tilde\lambda_{i,\alpha_{1,n}}(X_i-\mu)
=\sum_{i<t_n}\tilde\lambda_{i,\alpha_{1,n}}(X_i-\mu)+\sqrt{\log(2/\alpha_{1,n})}\sum_{i=t_n}^{n}\iota_i(X_i-\mu)
=\sum_{i<t_n}\big(\tilde\lambda_{i,\alpha_{1,n}}-\sqrt{\log(2/\alpha_{1,n})}\,\iota_i\big)(X_i-\mu)+\sqrt{\log(2/\alpha_{1,n})}\sum_{i\le n}\iota_i(X_i-\mu).
\]

Now note that the absolute value of the first summand is upper bounded by

\[
\sum_{i<t_n}\big|\tilde\lambda_{i,\alpha_{1,n}}-\sqrt{\log(2/\alpha_{1,n})}\,\iota_i\big|\le\sum_{i<t_n}\Big(c_5+\sqrt{\log(2/\alpha_{1,n})}\,\sup_i\iota_i\Big).
\]

Given that $\iota_n\to0$ almost surely and $c_2>0$, $\sup_i\iota_i$ is almost surely bounded, and thus such a first summand is upper bounded by $t_n\big(c_5+\sqrt{\log(2/\alpha_{1,n})}\sup_i\iota_i\big)$, which is also $\tilde O(1)$ a.s., in view of $\log(1/\alpha_{1,n})=\tilde O(1)$. Consequently, we have shown the validity of (E.1). Lastly, we observe that

\[
\sum_{i\le n}\tilde\lambda_{i,\alpha_{1,n}}=\sum_{i\le n}\Big(\sqrt{\frac{2\log(2/\alpha_{1,n})}{\hat\sigma^2_i\,i\log(1+i)}}\wedge c_5\Big)
\ge\frac{1}{\sqrt{\log(1+n)}}\sum_{i\le n}\Big(\sqrt{\frac{2\log(2/\alpha)}{i}}\wedge c_5\Big)
\ge\frac{\sqrt{2\log(2/\alpha)}\wedge c_5}{\sqrt{\log(1+n)}}\sum_{i\le n}\frac{1}{\sqrt i}
\overset{(i)}{\ge}\frac{\sqrt{2\log(2/\alpha)}\wedge c_5}{\sqrt{\log(1+n)}}\big(2\sqrt n-2\big),
\]

which is $\tilde\Omega(\sqrt n)$, where (i) is obtained in view of Lemma B.3. We thus conclude that

\[
|\hat\mu_t-\mu|=\frac{\big|\sum_{i=1}^{t-1}\tilde\lambda_i(X_i-\mu)\big|}{\sum_{i=1}^{t-1}\tilde\lambda_i}=\frac{\tilde O(1)}{\tilde\Omega(\sqrt t)}=\tilde O\Big(\frac{1}{\sqrt t}\Big)
\]

almost surely.

E.4 Proof of Proposition C.4

Given that $m^2_{4,n}\to\mathbb V[(X_i-\mu)^2]$ a.s. (in view of Proposition C.1), there exists $A\in\mathcal F$ such that $P(A)=1$ and $\hat m^2_{4,n}(\omega)\to\mathbb V[(X_i-\mu)^2]$ for all $\omega\in A$. Based on Lemma B.12 and $c_2>0$,

\[
\frac{1}{\hat m^2_{4,n}(\omega)}\le u(\omega)<\infty
\]

for all $\omega\in A$. Given that $\delta_n\to\delta>0$ and $\delta_n>0$, then $l:=\inf_n\delta_n>0$, and so we observe that

\[
\sqrt{\frac{2\log(1/\delta_n)}{\hat m^2_{4,t}(\omega)\,n}}\le\sqrt{\frac{2\log(1/l)\,u(\omega)}{n}},
\]

which implies the existence of $m_\omega\in\mathbb N$ such that $\sqrt{2\log(1/l)u(\omega)/n}\le c_1$ for all $n\ge m_\omega$. Hence

\[
\lambda_{t,\delta_n}(\omega)=\sqrt{\frac{2\log(1/\delta_n)}{\hat m^2_{4,t}(\omega)\,n}}
\]

for $n\ge m_\omega$.
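As an aside, the limit that these weights produce can be checked numerically. The sketch below is our own illustration, not the paper's code: it builds a simplified running plug-in for $\hat m^2_{4,i}$ (initialized at the true value and using the true $\mu$, $\sigma^2$, which is an assumption for simplicity) and compares $\frac{1}{\sqrt n}\sum_i\lambda_{i,\delta}$ with $\sqrt{2\log(1/\delta)/\mathbb V[(X-\mu)^2]}$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, delta, c1 = 100_000, 0.05, 0.5
X = rng.uniform(0, 1, n)
V = 1/180                                # V[(X−µ)²] for U(0,1)
# simplified running plug-in for m̂²_{4,i} (true µ, σ² used; V as initialization)
m4_run = np.cumsum(((X - 0.5)**2 - 1/12)**2) / np.arange(1, n + 1)
m4 = np.concatenate(([V], m4_run[:-1]))
lam = np.minimum(np.sqrt(2*np.log(1/delta)/(m4 * n)), c1)   # λ_{i,δ} ∧ c1
print(lam.sum()/np.sqrt(n), np.sqrt(2*np.log(1/delta)/V))   # the two values agree
```

The truncation at $c_1$ only bites for the first few indices, where the plug-in is still noisy, which is consistent with the finitely-many-terms argument in the proof.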
It follows that

\[
\frac{1}{\sqrt n}\sum_{i=1}^n\lambda_{i,\delta_n}(\omega)=\frac{1}{\sqrt n}\sum_{i=1}^{m_\omega-1}\lambda_{i,\delta_n}(\omega)+\frac{1}{\sqrt n}\sum_{i=m_\omega}^n\lambda_{i,\delta_n}(\omega).
\]

Clearly, the first term converges to $0$, and so it suffices to show that

\[
\frac{1}{\sqrt n}\sum_{i=m_\omega}^n\lambda_{i,\delta_n}(\omega)\ \overset{a.s.}{\longrightarrow}\ \sqrt{\frac{2\log(1/\delta)}{\mathbb V[(X_i-\mu)^2]}}.
\]

To see this, note that

\[
\frac{1}{\sqrt n}\sum_{i=m_\omega}^n\lambda_{i,\delta_n}(\omega)=\frac{1}{\sqrt n}\sum_{i=m_\omega}^n\sqrt{\frac{2\log(1/\delta_n)}{\hat m^2_{4,i}(\omega)\,n}}
=\underbrace{\frac{n-m_\omega}{n}}_{(I_n)}\ \underbrace{\frac{1}{n-m_\omega}\sum_{i=m_\omega}^n\sqrt{\frac{2\log(1/\delta_n)}{\hat m^2_{4,i}(\omega)}}}_{(II_n(\omega))}.
\]

Clearly, $(I_n)\to1$ as $n\to\infty$. Furthermore,

\[
(II_n(\omega))\ \underset{n\to\infty}{\longrightarrow}\ \sqrt{\frac{2\log(1/\delta)}{\mathbb V[(X_i-\mu)^2]}}
\]

in view of $m^2_{4,n}(\omega)\to\mathbb V[(X_i-\mu)^2]$, $\delta_n\to\delta$, and Lemma B.8. Hence

\[
\frac{1}{\sqrt n}\sum_{i=m_\omega}^n\lambda^{\mathrm{CI}}_{i,\delta_n}(\omega)\ \underset{n\to\infty}{\longrightarrow}\ \sqrt{\frac{2\log(1/\delta)}{\mathbb V[(X_i-\mu)^2]}}
\]

for $\omega\in A$, with $P(A)=1$, thus concluding the proof.

E.5 Proof of Proposition C.6

Analogously to the first part of the proof of Proposition C.4, there exists $A\in\mathcal F$ such that $P(A)=1$,

\[
\hat m^2_{4,n}(\omega)\to\mathbb V[(X_i-\mu)^2]\ \ \forall\omega\in A,\qquad \frac1n\sum_{i=1}^nZ_i(\omega)\to a\ \ \forall\omega\in A,\qquad(E.2)
\]

and there exists $m_\omega$ such that

\[
\lambda_{t,\delta_n}(\omega)=\sqrt{\frac{2\log(1/\delta_n)}{\hat m^2_{4,t}(\omega)\,n}}
\]

for $n\ge m_\omega$, for all $\omega\in A$. Observe that

\[
\sum_{i=1}^n\psi_E\big(\lambda_{i,\delta_n}(\omega)\big)Z_i(\omega)=\underbrace{\sum_{i=1}^{m_\omega-1}\psi_E\big(\lambda_{i,\delta_n}(\omega)\big)Z_i(\omega)}_{(I_n(\omega))}+\underbrace{\sum_{i=m_\omega}^n\psi_E\big(\lambda_{i,\delta_n}(\omega)\big)Z_i(\omega)}_{(II_n(\omega))}.
\]

Clearly, $(I_n(\omega))\to0$, given that it is a linear combination of terms $\psi_E(\lambda_{i,\delta_n}(\omega))$, with $\lambda_{i,\delta_n}(\omega)\searrow0$ as $n\to\infty$ and $\psi_E(\lambda)\to0$ as $\lambda\to0$. Let us now prove that

\[
(II_n(\omega))\ \longrightarrow\ \frac{a\log(1/\delta)}{\mathbb V[(X_i-\mu)^2]}\qquad(E.3)
\]

for $\omega\in A$. Denoting $\psi_N(\lambda)=\frac{\lambda^2}{2}$, as well as

\[
\xi_{n,i}(\omega):=\frac{\psi_E\big(\lambda_{i,\delta_n}(\omega)\big)}{\psi_N\big(\lambda_{i,\delta_n}(\omega)\big)},
\]

it follows that

\[
(II_n(\omega))=\sum_{i=m_\omega}^n\psi_E\big(\lambda_{i,\delta_n}(\omega)\big)Z_i(\omega)=\sum_{i=m_\omega}^n\psi_N\big(\lambda_{i,\delta_n}(\omega)\big)\xi_{n,i}(\omega)Z_i(\omega)
=\frac{\log(1/\delta_n)(n-m_\omega+1)}{n}\cdot\frac{1}{n-m_\omega+1}\sum_{i=m_\omega}^n\frac{1}{\hat m^2_{4,i}(\omega)}\xi_{n,i}(\omega)Z_i(\omega).
\]

In view of (E.2), Lemma B.9 yields

\[
\frac{1}{n-m_\omega+1}\sum_{i=m_\omega}^n\frac{Z_i(\omega)}{\hat m^2_{4,i}(\omega)}\longrightarrow\frac{a}{\mathbb V[(X_i-\mu)^2]}.
\]

Noting that $\frac{\psi_E(\lambda)}{\psi_N(\lambda)}\to1$ as $\lambda\to0$, $\lambda_{i,\delta_i}(\omega)\ge\lambda_{i,\delta_n}(\omega)$ for $n\ge i$, and

\[
\lim_{n\to\infty}\lambda_{n,\delta_n}(\omega)=\lim_{n\to\infty}\sqrt{\frac{2\log(1/\delta_n)}{\hat m^2_{4,n}(\omega)\,n}}=\lim_{n\to\infty}\sqrt{\frac1n}\ \cdot\ \lim_{n\to\infty}\sqrt{\frac{2\log(1/\delta_n)}{\hat m^2_{4,n}(\omega)}}=0\cdot\sqrt{\frac{2\log(1/\delta)}{\mathbb V[(X_i-\mu)^2]}}=0,
\]

we observe that $\xi_{n,n}(\omega)\to1$ and $\xi_{i,i}(\omega)\ge\xi_{n,i}(\omega)\ge1$, where the latter inequality follows from Lemma B.2. Invoking Lemma B.10 with $a_{n,i}=\xi_{n,i}(\omega)$, $b_i=\frac{Z_i(\omega)}{\hat m^2_{4,i}(\omega)}$, it follows that

\[
\frac{1}{n-m_\omega+1}\sum_{i=m_\omega}^n\frac{1}{\hat m^2_{4,i}(\omega)}\xi_{n,i}(\omega)Z_i(\omega)\longrightarrow\frac{a}{\mathbb V[(X_i-\mu)^2]}.
\]

It suffices to observe that $\frac{\log(1/\delta_n)(n-m_\omega+1)}{n}\to\log(1/\delta)$ to conclude the proof.

E.6 Proof of Proposition C.5

In view of Proposition C.2 or Proposition C.3, there exists $A\in\mathcal F$ such that $P(A)=1$ and

\[
m^2_{4,t}(\omega)=\tilde O\Big(\frac1t\Big)\qquad(E.4)
\]

for all $\omega\in A$. For $\omega\in A$, it may be that

\[
\limsup_{t\to\infty}\ t\,m^2_{4,t}(\omega)=:M<\infty\qquad(E.5)
\]

or

\[
\lim_{t\to\infty}\ t\,m^2_{4,t}(\omega)=\infty.\qquad(E.6)
\]

Denote $L:=\sup_{n\in\mathbb N}\delta_n$, as well as $\kappa:=\sqrt{\frac{2\log(1/L)}{M}}\wedge c_1$. If (E.5) holds, then

\[
\lambda_{t,\delta_n}(\omega)=\sqrt{\frac{2\log(1/\delta_n)}{\hat m^2_{4,t}(\omega)\,n}}\wedge c_1
=\sqrt{\frac{2\log(1/\delta_n)}{\hat m^2_{4,t}(\omega)\,t}\cdot\frac tn}\wedge c_1
\ge\sqrt{\frac{2\log(1/L)}{M}\cdot\frac tn}\wedge c_1
\overset{(i)}{\ge}\kappa\sqrt{\frac tn},
\]

where (i) follows from $t/n\le1$. Thus

\[
\frac{1}{\sqrt n}\sum_{i=1}^n\lambda_{i,\delta_n}(\omega)\ge\frac{\kappa}{\sqrt n}\sum_{i=1}^n\sqrt{\frac in}=\frac{\kappa}{n}\sum_{i=1}^n\sqrt i\overset{(i)}{\ge}\frac{2\kappa}{3n}\,n^{3/2}=\frac{2\kappa}{3}\,n^{1/2},
\]

where (i) follows from Lemma B.5. If (E.6) holds, then there exists $m(\omega)\in\mathbb N$ such that, for $t\ge m(\omega)$,

\[
\hat m^2_{4,t}\,t\ge\frac{2\log(1/l)}{c_1^2},
\]

where $l=\inf_{n\in\mathbb N}\delta_n$, which is strictly positive given that $\delta_n\to\delta>0$ and $\delta_n>0$. Thus

\[
\sqrt{\frac{2\log(1/\delta_n)}{\hat m^2_{4,t}\,n}}\le\sqrt{\frac{2\log(1/l)}{\hat m^2_{4,t}\,t}}\le c_1,
\]

and so $\lambda_{i,\delta_n}(\omega)=\sqrt{\frac{2\log(1/\delta_n)}{\hat m^2_{4,i}\,n}}$ for $i\ge m(\omega)$. It follows that

\[
\frac{1}{\frac{1}{\sqrt n}\sum_{i=1}^n\lambda_{i,\delta_n}(\omega)}\le\frac{1}{\frac{1}{\sqrt n}\sum_{i=m(\omega)}^n\lambda_{i,\delta_n}(\omega)}
=\frac{n}{(n-m(\omega)+1)\sqrt{2\log(1/\delta_n)}}\cdot\frac{n-m(\omega)+1}{\sum_{i=m(\omega)}^n\frac{1}{\hat m_{4,i}(\omega)}}
\overset{(i)}{\le}\underbrace{\frac{n}{(n-m(\omega)+1)\sqrt{2\log(1/\delta_n)}}}_{(I_n(\omega))}\ \underbrace{\sqrt{\frac{\sum_{i=m(\omega)}^n\hat m^2_{4,i}(\omega)}{n-m(\omega)+1}}}_{(II_n(\omega))},
\]

where (i) follows from the harmonic-quadratic means inequality. Now note that $(I_n(\omega))\to\frac{1}{\sqrt{2\log(1/\delta)}}$ as $n\to\infty$.
In view of (E.4),

\[
(II_n(\omega))=\sqrt{\frac{\sum_{i=m(\omega)}^n\tilde O\big(\frac1i\big)}{n-m(\omega)+1}}=\tilde O\Big(\frac{1}{\sqrt n}\Big).
\]

We have shown that, regardless of (E.5) or (E.6) holding,

\[
\frac{1}{\frac{1}{\sqrt n}\sum_{i=1}^n\lambda_{i,\delta_n}(\omega)}=\tilde O\Big(\frac{1}{\sqrt n}\Big)
\]

for all $\omega\in A$. Given that $P(A)=1$, the proof is concluded.

E.7 Proof of Proposition C.7

We will conclude the proof in two steps. First, we will prove that

\[
(I_n)=\sup_{i\le n}\Big\{\log(2/\alpha_{1,n})+\sigma^2\sum_{k=1}^{i-1}\psi_P\Big(\sqrt{\frac{2\log(2/\alpha_{1,n})}{\hat\sigma^2_k\,k\log(1+k)}}\wedge c_5\Big)\Big\}^2
\]

scales polylogarithmically with $n$ almost surely. Second, we will show that

\[
(II_n)=\sum_{i\le n}\frac{1}{\Big\{\sum_{k=1}^{i-1}\Big(\sqrt{\frac{2\log(2/\alpha_{1,n})}{\hat\sigma^2_k\,k\log(1+k)}}\wedge c_5\Big)\Big\}^2}
\]

also scales polylogarithmically with $n$ almost surely. Thus, by Hölder's inequality and these two steps, it will follow that

\[
\sum_{i\le n}\frac{\Big\{\log(2/\alpha_{1,n})+\sigma^2\sum_{k=1}^{i-1}\psi_P\Big(\sqrt{\frac{2\log(2/\alpha_{1,n})}{\hat\sigma^2_k\,k\log(1+k)}}\wedge c_5\Big)\Big\}^2}{\Big\{\sum_{k=1}^{i-1}\Big(\sqrt{\frac{2\log(2/\alpha_{1,n})}{\hat\sigma^2_k\,k\log(1+k)}}\wedge c_5\Big)\Big\}^2}\qquad(E.7)
\]

scales polylogarithmically with $n$ almost surely. That is, it is $\tilde O(1)$ almost surely.

Step 1. If $\sigma=0$, then $(I_n)=\log^2(2/\alpha_{1,n})$, which scales at most logarithmically with $n$ given that $1/\alpha_{1,n}=O(\log n)$, which follows from $\alpha_{1,n}=\Omega\big(\frac{1}{\log(n)}\big)$. If $\sigma>0$, then in view of $(a+b)^2\le2a^2+2b^2$, it follows that

\[
(I_n)\le\sup_{i\le n}\bigg[2\log^2(2/\alpha_{1,n})+2\sigma^4\Big\{\sum_{k=1}^{i-1}\psi_P\Big(\sqrt{\frac{2\log(2/\alpha_{1,n})}{\hat\sigma^2_k\,k\log(1+k)}}\wedge c_5\Big)\Big\}^2\bigg].
\]

We observe that

\[
\psi_P\Big(\sqrt{\frac{2\log(2/\alpha_{1,n})}{\hat\sigma^2_k\,k\log(1+k)}}\wedge c_5\Big)
\le\psi_P\Big(\sqrt{\frac{2\log(2/\alpha_{1,n})}{\hat\sigma^2_k\,k\log(1+k)}}\Big)
\overset{(i)}{\le}\Big(\frac{1}{\sqrt{2k\log(1+k)}}\Big)^2\psi_P\Big(\sqrt{\frac{4\log(2/\alpha_{1,n})}{\hat\sigma^2_k}}\Big)
=\frac{1}{2k\log(1+k)}\psi_P\Big(\sqrt{\frac{4\log(2/\alpha_{1,n})}{\hat\sigma^2_k}}\Big)
\overset{(ii)}{\le}\frac{1}{2k\log(1+k)}\exp\Big(\sqrt{\frac{4\log(2/\alpha_{1,n})}{\hat\sigma^2_k}}\Big)
\le\frac1k\exp\Big(\sqrt{\frac{4\log(2/\alpha_{1,n})}{\hat\sigma^2_k}}\Big)
\overset{(iii)}{\le}\frac1k\exp\Big(\frac{4\log(2/\alpha_{1,n})}{\hat\sigma^2_k}\Big)
=\frac1k\Big(\frac{2}{\alpha_{1,n}}\Big)^{4/\hat\sigma^2_k},
\]

where (i) follows from Lemma B.1 and $2k\log(1+k)\ge1$ for all $k\ge1$, (ii) follows from $\psi_P(x)=\exp(x)-x-1\le\exp(x)$ for all $x\ge0$, and (iii) follows from $\hat\sigma_k\in[0,1]$ and $\sqrt x\le x$ for all $x\ge1$. Given Proposition C.1 and Lemma B.12 (in view of $c_3>0$), there exists $A\in\mathcal F$ such that $P(A)=1$ and

\[
\hat\sigma^2_k(\omega)\to\sigma^2,\qquad \inf_k\hat\sigma_k(\omega)\ge\kappa(\omega)>0,
\]

for all $\omega\in A$. For $\omega\in A$ and $k\in\mathbb N$,

\[
\psi_P\Big(\sqrt{\frac{2\log(2/\alpha_{1,n})}{\hat\sigma^2_k(\omega)\,k\log(1+k)}}\Big)\le\frac1k\Big(\frac{2}{\alpha_{1,n}}\Big)^{4/\kappa^2(\omega)},
\]

and so

\[
\sup_{i\le n}\sum_{k=1}^{i-1}\psi_P\Big(\sqrt{\frac{2\log(2/\alpha_{1,n})}{\hat\sigma^2_k(\omega)\,k\log(1+k)}}\Big)
\le\Big(\frac{2}{\alpha_{1,n}}\Big)^{4/\kappa^2(\omega)}\sum_{k=1}^{i-1}\frac1k
\overset{(i)}{\le}\Big(\frac{2}{\alpha_{1,n}}\Big)^{4/\kappa^2(\omega)}(\log i+1)
\le\Big(\frac{2}{\alpha_{1,n}}\Big)^{4/\kappa^2(\omega)}(\log n+1),
\]

where (i) is obtained in view of Lemma B.4. Thus

\[
(I_n(\omega))\le2\log^2(2/\alpha_{1,n})+2\sigma^4\Big(\frac{2}{\alpha_{1,n}}\Big)^{8/\kappa^2(\omega)}(\log n+1)^2,
\]

which scales polynomially with $\log n$ in view of $1/\alpha_{1,n}=O(\log n)$.

Step 2. Denoting $\kappa=\sqrt{4\log(2/\alpha)}\wedge c_5$, it follows that

\[
\sum_{k=1}^{i-1}\Big(\sqrt{\frac{2\log(2/\alpha_{1,n})}{\hat\sigma^2_k\,k\log(1+k)}}\wedge c_5\Big)
\ge\sum_{k=1}^{i-1}\Big(\sqrt{\frac{2\log(2/\alpha)}{k\log(1+k)}}\wedge c_5\Big)
\overset{(i)}{\ge}\kappa\sum_{k=1}^{i-1}\sqrt{\frac{1}{2k\log(1+k)}}
=\frac{\kappa}{\sqrt2}\sum_{k=1}^{i-1}\sqrt{\frac{1}{k\log(1+k)}}
\ge\frac{\kappa}{\sqrt{2\log(i)}}\sum_{k=1}^{i-1}\sqrt{\frac1k}
\overset{(ii)}{\ge}\frac{2\kappa}{\sqrt{2\log(i)}}\big[(\sqrt{i-1}-1)\vee1\big],
\]

where (i) follows from $2k\log(1+k)\ge1$ for $k\ge1$, and (ii) is obtained in view of Lemma B.3. It follows that

\[
(II_n)\le\sum_{2\le i\le n}\frac{\log(i)}{2\kappa^2\big[(\sqrt{i-1}-1)\vee1\big]^2}
\le\frac{\log(n)}{2\kappa^2}\sum_{2\le i\le n}\frac{1}{\big[(\sqrt{i-1}-1)\vee1\big]^2}
=\frac{\log(n)}{2\kappa^2}\Big(2+\sum_{4\le i\le n}\frac{1}{(\sqrt{i-1}-1)^2}\Big)
\overset{(i)}{\le}\frac{\log(n)}{2\kappa^2}\Big(2+\sum_{4\le i\le n}\frac9i\Big)
\le\frac{\log(n)}{2\kappa^2}\Big(2+\sum_{2\le i\le n}\frac9i\Big)
\overset{(ii)}{\le}\frac{\log(n)}{2\kappa^2}(2+9\log n),
\]

where (i) follows from $\sqrt{i-1}-1\ge\frac{\sqrt i}{3}$ for all $i\ge4$, and (ii) follows from Lemma B.4. Thus, $(II_n)$ also scales polylogarithmically with $n$.

F Alternative approaches to the proposed empirical Bernstein inequality

We present in this appendix two alternative approaches to that proposed in Section 4.
F.1 Decoupling the inequality into first and second moment inequalities

We start by presenting a naive approach to the problem using two empirical Bernstein inequalities, which may be the most natural starting point. However, this approach will prove suboptimal, both theoretically and empirically.

F.1.1 Confidence sequences obtained using two empirical Bernstein inequalities

We start by noting that $\sigma^2=\mathbb EX_i^2-\mathbb E^2X_i$. Thus, in order to give an upper confidence sequence for $\sigma^2$, it suffices to derive an upper confidence sequence for $\mathbb EX_i^2$ and a lower confidence sequence for $\mathbb EX_i$. Consider

\[
U_{1,\alpha_1,t}:=\frac{\sum_{i\le t}\lambda_iX_i^2}{\sum_{i\le t}\lambda_i}+\frac{\log(1/\alpha_1)+\sum_{i\le t}\psi_E(\lambda_i)\big(X_i^2-\widehat{m}_{2,i}\big)^2}{\sum_{i\le t}\lambda_i}
\]

as the upper confidence sequence for $\mathbb EX_i^2$ (which follows from the empirical Bernstein inequality), and

\[
L_{2,\alpha_2,t}:=\frac{\sum_{i\le t}\tilde\lambda_iX_i}{\sum_{i\le t}\tilde\lambda_i}-\frac{\log(1/\alpha_2)+\sum_{i\le t}\psi_E(\tilde\lambda_i)(X_i-\hat\mu_i)^2}{\sum_{i\le t}\tilde\lambda_i}
\]

as the lower confidence sequence for $\mathbb EX_i$ (which follows from the empirical Bernstein inequality), so that $\alpha_1+\alpha_2=\alpha$. Now we take

\[
\sigma^2\le U_{1,\alpha_1,t}-L^2_{2,\alpha_2,t}\qquad(F.1)
\]

as the upper confidence sequence for $\sigma^2$. Similarly, in order to derive lower inequalities, define

\[
L_{1,\alpha_1,t}:=\frac{\sum_{i\le t}\lambda_iX_i^2}{\sum_{i\le t}\lambda_i}-\frac{\log(1/\alpha_1)+\sum_{i\le t}\psi_E(\lambda_i)\big(X_i^2-\widehat{m}_{2,i}\big)^2}{\sum_{i\le t}\lambda_i}
\]

as the lower confidence sequence for $\mathbb EX_i^2$ (which follows from the empirical Bernstein inequality), and

\[
U_{2,\alpha_2,t}:=\frac{\sum_{i\le t}\tilde\lambda_iX_i}{\sum_{i\le t}\tilde\lambda_i}+\frac{\log(1/\alpha_2)+\sum_{i\le t}\psi_E(\tilde\lambda_i)(X_i-\hat\mu_i)^2}{\sum_{i\le t}\tilde\lambda_i}
\]

as the upper confidence sequence for $\mathbb EX_i$ (which follows from the empirical Bernstein inequality), so that $\alpha_1+\alpha_2=\alpha$. Now we take

\[
\sigma^2\ge L_{1,\alpha_1,t}-U^2_{2,\alpha_2,t}\qquad(F.2)
\]

as the lower confidence sequence for $\sigma^2$.

F.1.2 Theoretical and empirical suboptimality of the approach

Ideally, we would expect the width of the confidence interval for $\sigma^2$ to scale as $\sqrt{2\,\mathbb V[(X-\mu)^2]\log(1/\alpha)/t}$ (i.e., the first-order term in Bennett's inequality). However, we see that the term in $U_{1,\alpha_1,t}$

\[
\frac{\log(1/\alpha_1)+\sum_{i\le t}\psi_E(\lambda_i)\big(X_i^2-\widehat{m}_{2,i}\big)^2}{\sum_{i\le t}\lambda_i}
\]

scales as $\sqrt{2\,\mathbb V[X^2]\log(1/\alpha)/t}$. It suffices to observe that

\[
\mathbb V[(X-\mu)^2]=\mathbb E(X-\mu)^4-\mathbb E^2(X-\mu)^2
=\mathbb E\big[X^4-4X^3\mu+6X^2\mu^2-4X\mu^3+\mu^4\big]-\big(\mathbb E^2X^2+\mu^4-2\mu^2\mathbb EX^2\big)
=\big(\mathbb EX^4-\mathbb E^2X^2\big)+\mathbb E\big[-4X^3\mu+8X^2\mu^2-4X\mu^3\big]
=\big(\mathbb EX^4-\mathbb E^2X^2\big)-4\mu\,\mathbb E\big[\big(X^{3/2}-X^{1/2}\mu\big)^2\big]
=\mathbb V[X^2]-4\mu\,\mathbb E\big[\big(X^{3/2}-X^{1/2}\mu\big)^2\big]\le\mathbb V[X^2],
\]

where the last inequality follows from $\mu\in(0,1)$, to conclude that the first-order term of this confidence interval will generally dominate that of Bennett's inequality.

We also clearly see the suboptimality of the approach empirically. Figure 2 exhibits the upper and lower inequalities proposed in Section 4 alongside those derived in this appendix for all the scenarios considered in Section 5, illustrating the poor performance of the latter.

Figure 2: Average confidence intervals over 100 simulations for the std $\sigma$ for (I) the uniform distribution on (0,1), (II) the beta distribution with parameters (2,6), and (III) the beta distribution with parameters (5,5). For each of the inequalities, the 0.95-empirical quantiles are also displayed. The decoupling approach (this appendix) is compared against EB (our proposal). EB clearly outperforms the decoupled approach in all the scenarios.
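The inequality $\mathbb V[(X-\mu)^2]\le\mathbb V[X^2]$ that drives this suboptimality is easy to confirm by simulation. The sketch below is our own illustrative check for scenario (II) above; it simply compares the two variances that govern the first-order widths.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.beta(2, 6, 1_000_000)            # scenario (II): Beta(2, 6)
v_centered = ((X - X.mean())**2).var()   # V[(X−µ)²]: target (Bennett/EB) scaling
v_raw = (X**2).var()                     # V[X²]: scaling of the decoupled bound
print(v_centered, v_raw)                 # the raw-moment variance is larger
```

Since $\mu>0$ here, the gap $4\mu\,\mathbb E[(X^{3/2}-X^{1/2}\mu)^2]$ is strictly positive, so the decoupled width is strictly wider to first order.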
F.2 Upper bounding the error term instead of taking negligible plug-ins

In Section 4.3, we proposed to take $\lambda_{t,l,\alpha/2}=0$ if

\[
\frac{\log(2/\alpha_1)+\hat\sigma^2_t\sum_{i=1}^{t-1}\psi_P(\tilde\lambda_i)}{\sum_{i=1}^{t-1}\tilde\lambda_i}\le1.
\]

A reasonable alternative would be to avoid defining $\lambda_{t,l,\alpha/2}$ as $0$ (i.e., always define $\lambda_{t,l,\alpha/2}:=\lambda_{t,u,\alpha/2}$), and to take

\[
\tilde R_{t,\delta}=\begin{cases}\dfrac{\log(2/\delta)+\sigma^2\sum_{i=1}^{t-1}\psi_P(\tilde\lambda_i)}{\sum_{i=1}^{t-1}\tilde\lambda_i},&\text{if }\dfrac{\log(2/\delta)+\hat\sigma^2_{t-1}\sum_{i=1}^{t-1}\psi_P(\tilde\lambda_i)}{\sum_{i=1}^{t-1}\tilde\lambda_i}\le1,\\[2mm]1,&\text{otherwise}.\end{cases}
\]

In order to formalize this, denote

\[
\Upsilon_t:=\bigg\{i\in[t]:\frac{\log(2/\delta)+\hat\sigma^2_{i-1}\sum_{k=1}^{i-1}\psi_P(\tilde\lambda_k)}{\sum_{k=1}^{i-1}\tilde\lambda_k}\le1\bigg\},\qquad\Upsilon^c_t:=[t]\setminus\Upsilon_t.
\]

Taking

\[
A_t:=\frac{\sum_{i\in\Upsilon_t}\lambda_i\tilde A_i}{\sum_{i\le t}\lambda_i},\qquad
B_{t,\delta}:=1+\frac{\sum_{i\in\Upsilon_t}\lambda_i\tilde B_{i,\delta}}{\sum_{i\le t}\lambda_i},\qquad
C_{t,\delta}:=\frac{\sum_{i\in\Upsilon_t}\lambda_i\tilde C_{i,\delta}+\sum_{i\in\Upsilon^c_t}\lambda_i}{\sum_{i\le t}\lambda_i},
\]

in Section 4.3, Corollary 4.4 also holds. Figure 3 exhibits the empirical performance of this choice of plug-ins and that of Section 4.3, in the three scenarios from Section 5. The figure shows the slight advantage of considering the plug-ins from Section 4.3.

Figure 3: Average confidence intervals over 100 simulations for the std $\sigma$ for (I) the uniform distribution on (0,1), (II) the beta distribution with parameters (2,6), and (III) the beta distribution with parameters (5,5). For each of the inequalities, the 0.95-empirical quantiles are also displayed. The EB lower confidence intervals with the plug-ins from Section 4.3 (our proposal) are compared against the EB lower confidence intervals with the plug-ins proposed in this appendix (alternative). Despite the expected similar outcomes, the plug-ins from Section 4.3 lead to slightly sharper bounds.
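The capping rule behind $\tilde R_{t,\delta}$ can be sketched in a few lines. The helper name, the plug-in $\hat\sigma^2$, and the constant weights below are our own placeholders for illustration, not the paper's implementation.

```python
import numpy as np

def psi_P(x):
    return np.expm1(x) - x               # ψ_P(x) = e^x − x − 1

def R_tilde(lam, sigma2_hat, delta):
    """Capped error term ˜R_{t,δ}: use the plug-in ratio when it is ≤ 1, else 1.
    (Illustrative helper; names and inputs are ours.)"""
    ratio = (np.log(2/delta) + sigma2_hat * psi_P(lam).sum()) / lam.sum()
    return ratio if ratio <= 1 else 1.0

lam = np.full(100, 0.1)                  # placeholder weights λ̃_i
print(R_tilde(lam, 0.08, 0.05))          # ratio ≤ 1, so the ratio itself is used
print(R_tilde(lam[:5], 0.08, 0.05))      # few observations: ratio > 1, cap at 1.0
```

With many observations the denominator $\sum_i\tilde\lambda_i$ grows and the uncapped ratio is used; with few observations the bound would be vacuous, and the cap at $1$ takes over.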
arXiv:2505.02002v1 [math.OC] 4 May 2025

JOTA manuscript No. (will be inserted by the editor)

Sharp bounds in perturbed smooth optimization

Vladimir Spokoiny

Received: date / Accepted: date

Abstract This paper studies the problem of perturbed convex and smooth optimization. The main results describe how the solution and the value of the problem change if the objective function is perturbed. Examples include linear, quadratic, and smooth additive perturbations. Such problems naturally arise in statistics and machine learning, stochastic optimization, stability and robustness analysis, inverse problems, optimal control, etc. The results provide accurate expansions for the difference between the solution of the original problem and its perturbed counterpart with an explicit error term.

Keywords self-concordance · third and fourth order expansions

Mathematics Subject Classification (2010) 65K10 · 90C31 · 90C25 · 90C15

1 Introduction

For a smooth convex function $f(\cdot)$, consider an optimization problem

\[
\upsilon^*=\operatorname*{argmin}_{\upsilon\in\Upsilon}f(\upsilon).\qquad(1)
\]

Here $\Upsilon$ is an open subset of $\mathbb R^p$ for $p\le\infty$ and we focus on unconstrained optimization. This paper studies the following question: how do $\upsilon^*$ and $f(\upsilon^*)$ change if the objective function $f$ is perturbed in some special way? We show how any smooth perturbation can be reduced to a linear one with $g(\upsilon)=f(\upsilon)+\langle\upsilon,A\rangle$.

Weierstrass Institute and HU Berlin, HSE and IITP RAS, Mohrenstr. 39, 10117 Berlin, Germany, spokoiny@wias-berlin.de

For $\upsilon^\circ=\operatorname*{argmin}_\upsilon g(\upsilon)$, we describe the differences $\upsilon^\circ-\upsilon^*$ and $g(\upsilon^\circ)-g(\upsilon^*)$ in terms of the Newton correction $\mathbb F^{-1}A$ with $\mathbb F=\nabla^2f(\upsilon^*)$ or $\mathbb F=\nabla^2f(\upsilon^\circ)$.

Motivation 1: Statistical inference for SLS models. Most statistical estimation procedures can be represented as "empirical risk minimization". This includes least squares, maximum likelihood, minimum contrast, and many other methods. Let $L(\upsilon)$ be a random loss function (or empirical risk) and $\mathbb EL(\upsilon)$ its expectation, $\upsilon\in\Upsilon\subseteq\mathbb R^p$, $p<\infty$.
Consider

\[
\tilde\upsilon=\operatorname*{argmin}_{\upsilon\in\Upsilon}L(\upsilon);\qquad
\upsilon^*=\operatorname*{argmin}_{\upsilon\in\Upsilon}\mathbb EL(\upsilon).
\]

Aim: describe the estimation loss $\tilde\upsilon-\upsilon^*$ and the prediction loss (excess) $L(\tilde\upsilon)-L(\upsilon^*)$. Ostrovskii and Bach (2021) provide some finite-sample accuracy guarantees under self-concordance of $L(\upsilon)$. Spokoiny (2025) introduced a special class of stochastically linear smooth (SLS) models for which the stochastic term $\zeta(\upsilon)=L(\upsilon)-\mathbb EL(\upsilon)$ depends linearly on $\upsilon$ and thus its gradient $\nabla\zeta$ does not depend on $\upsilon$: $\zeta(\upsilon)-\zeta(\upsilon^\circ)=\langle\upsilon-\upsilon^\circ,\nabla\zeta\rangle$. This means that $L(\upsilon)$ is a linear perturbation of $f(\upsilon)=\mathbb EL(\upsilon)$. This reduces the study of the maximum likelihood estimator $\tilde\upsilon$ to linear perturbation analysis.

Motivation 2: Uncertainty quantification by smooth penalization. Given a smooth penalty $\operatorname{pen}_G(\upsilon)$ indexed by parameter $G$, define

\[
f_G(\upsilon)=f(\upsilon)+\operatorname{pen}_G(\upsilon).
\]

A leading example is given by a quadratic penalty $\|G\upsilon\|^2/2$:

\[
f_G(\upsilon)=f(\upsilon)+\|G\upsilon\|^2/2.
\]

Again, the question under study is the corresponding change of the solution $\upsilon^*_G=\operatorname*{argmin}_\upsilon f_G(\upsilon)$ and the value $f_G(\upsilon^*_G)$ of this problem. Adding a quadratic penalty changes the curvature of the objective function but does not change its smoothness. It is important to establish an expansion whose remainder is only sensitive to the smoothness of $f$. Similar questions arise in Bayesian inference and high-dimensional Laplace approximation; see e.g. Stuart (2010), Nickl (2022), Katsevich (2023), and references therein. For applications to Bayesian variational inference see Katsevich and Rigollet (2023).

Motivation 3: Quasi-Newton iterations. Consider an optimization routine for (1) delivering $\upsilon_k$ as a solution after the step $k$. We want to measure the accuracy $\upsilon_k-\upsilon^*$ and the deficiency $f(\upsilon_k)-f(\upsilon^*)$. Define

\[
f_k(\upsilon)\stackrel{\mathrm{def}}{=}f(\upsilon)-\langle\upsilon-\upsilon_k,\nabla f(\upsilon_k)\rangle.
\]

Then $\nabla f_k(\upsilon_k)=0$. As this function is strongly convex, it holds $\upsilon_k=\operatorname*{argmin}_\upsilon f_k(\upsilon)$, while $f$ is a linear perturbation of $f_k$ with $A=\nabla f(\upsilon_k)$.
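The construction in Motivation 3 can be illustrated on a toy strongly convex objective (our own choice, not an example from the paper): $\nabla f_k(\upsilon_k)=0$ by design, and $f-f_k$ is exactly the linear term $\langle\upsilon-\upsilon_k,A\rangle$.

```python
import numpy as np

# toy strongly convex objective (an illustrative assumption, not from the paper)
def f(v):      return np.sum(np.cosh(v)) + 0.5 * np.sum(v**2)
def grad_f(v): return np.sinh(v) + v

v_k = np.array([0.3, -0.2])
A = grad_f(v_k)

def f_k(v):
    # f_k(υ) = f(υ) − ⟨υ − υ_k, ∇f(υ_k)⟩
    return f(v) - (v - v_k) @ A

print(grad_f(v_k) - A)                 # ∇f_k(υ_k) = 0: υ_k minimizes f_k
v = np.array([0.7, 0.1])
print(f(v) - f_k(v), (v - v_k) @ A)    # f − f_k is the linear term ⟨υ−υ_k, A⟩
```

So analyzing $\upsilon_k-\upsilon^*$ is the same linear-perturbation problem as before, with perturbation vector $A=\nabla f(\upsilon_k)$.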
The obtained results describe $\upsilon^*-\upsilon_k$ in terms of $\nabla f(\upsilon_k)$ and $\nabla^2f(\upsilon_k)$. Similar derivations for tensor methods in convex optimization and further references can be found in Doikov and Nesterov (2021, 2022, 2024) and Rodomanov and Nesterov (2022, 2021). An accurate estimate of $\upsilon_k-\upsilon^*$ and of $f_k(\upsilon_k)-f_k(\upsilon^*)$ can be used for the analysis of many iterative procedures in optimization and stochastic approximation; see Polyak and Juditsky (1992), Nesterov and Nemirovskii (1994), Boyd and Vandenberghe (2004), Nesterov (2018), Polyak (2007), among many others.

There exists a vast literature on sensitivity analysis in perturbed optimization focusing on the asymptotic setup with a small parametric perturbation of a non-smooth objective function in Hilbert/Banach spaces under non-smooth constraints; see e.g. Bonnans and Shapiro (2000), Bonnans and Cominetti (1996), and Bonnans and Shapiro (1998), among many others. For geometric properties of perturbed optimization with applications to machine learning, see Berthet et al. (2020) and references therein. For our motivating examples, the assumption of a small perturbation is too restrictive. Instead, we limit ourselves to a finite-dimensional optimization setup and to a smooth perturbation. This enables us to derive accurate non-asymptotic closed-form results.

This paper's contribution. The main results describe the leading terms and the remainder of the expansion for the solution of the perturbed optimization problem for the cases of a linear, quadratic, and smooth perturbation. The accuracy of the expansion strongly depends on the smoothness of the perturbed objective function, given in terms of a metric tensor $\mathbb D$ with $\mathbb D^2\lesssim\mathbb F$ for $\mathbb F\stackrel{\mathrm{def}}{=}\nabla^2f(\upsilon^*)$. This enables us to obtain sharp bounds for a quadratic or smooth perturbation.
Under second-order smoothness, we show that

\[
\|\mathbb D(\upsilon^\circ-\upsilon^*+\mathbb F^{-1}A)\|\le\frac{2\sqrt\omega}{1-\omega}\|\mathbb D\mathbb F^{-1}A\|,\qquad
\big|2g(\upsilon^\circ)-2g(\upsilon^*)+\|\mathbb F^{-1/2}A\|^2\big|\le\frac{\omega}{1-\omega}\|\mathbb D\mathbb F^{-1}A\|^2,
\]

where $\|\cdot\|$ means the standard Euclidean norm and $\omega$ describes the relative error of the second-order Taylor approximation; see Theorem 3.2. Further, a third-order self-concordance condition with a small constant $\tau_3$ helps to substantially improve the accuracy:

\[
\|\mathbb D^{-1}\mathbb F(\upsilon^\circ-\upsilon^*+\mathbb F^{-1}A)\|\le0.75\,\tau_3\|\mathbb D\mathbb F^{-1}A\|^2,\qquad
\big|2g(\upsilon^\circ)-2g(\upsilon^*)+\|\mathbb F^{-1/2}A\|^2\big|\le0.5\,\tau_3\|\mathbb D\mathbb F^{-1}A\|^3;
\]

see Theorems 3.3 and 3.4. All these bounds are non-asymptotic, the remainder is given in a closed form via the norm of the Newton correction $\mathbb F^{-1}A$, and it is nearly optimal. The results only involve the smoothness constant $\tau_3$. Even more, the fourth-order self-concordance yields

\[
\|\mathbb D^{-1}\mathbb F\{\upsilon^\circ-\upsilon^*+\mathbb F^{-1}A+\mathbb F^{-1}\nabla T(\mathbb F^{-1}A)\}\|\le(\tau_4/2+\tau_3^2)\|\mathbb D\mathbb F^{-1}A\|^3
\]

and

\[
\Big|g(\upsilon^\circ)-g(\upsilon^*)+\frac12\|\mathbb F^{-1/2}A\|^2+T(\mathbb F^{-1}A)\Big|\le\frac{\tau_4+4\tau_3^2}{8}\|\mathbb D\mathbb F^{-1}A\|^4+\frac{(\tau_4+2\tau_3^2)^2}{4}\|\mathbb D\mathbb F^{-1}A\|^6
\]

with the third-order tensor $T=\nabla^3f(\upsilon^*)$; see Theorem 3.5. The skewness correction term $\mathbb F^{-1}\nabla T(\mathbb F^{-1}A)$ naturally arises in variational inference and sharp Laplace approximation; see Katsevich (2024). We also explain how the case of any smooth perturbation can be reduced to a linear one.

Organization of the paper. Section 2 introduces some concepts of local smoothness in the sense of the self-concordance condition from Nesterov and Nemirovskii (1994). Section 3 collects the main results on linearly perturbed optimization.
The obtained results are extended to the case of a quadratic perturbation in Section 4 and of a smooth perturbation in Section 5.

2 Gateaux smoothness and self-concordance

Below we assume the function $f(\upsilon)$, $\upsilon\in\Upsilon\subseteq\mathbb R^p$, to be strongly convex with a positive definite Hessian $\mathbb F(\upsilon)\stackrel{\mathrm{def}}{=}\nabla^2f(\upsilon)$. Also, assume $f(\upsilon)$ three or sometimes even four times Gateaux differentiable in $\upsilon\in\Upsilon$. Local smoothness of $f$ will be described by the relative error of the Taylor expansion of the third or fourth order. Define

\[
\delta_3(\upsilon,u)=f(\upsilon+u)-f(\upsilon)-\langle\nabla f(\upsilon),u\rangle-\frac12\langle\nabla^2f(\upsilon),u^{\otimes2}\rangle,
\]
\[
\delta'_3(\upsilon,u)=\langle\nabla f(\upsilon+u),u\rangle-\langle\nabla f(\upsilon),u\rangle-\langle\nabla^2f(\upsilon),u^{\otimes2}\rangle,
\]

and

\[
\delta_4(\upsilon,u)=f(\upsilon+u)-f(\upsilon)-\langle\nabla f(\upsilon),u\rangle-\frac12\langle\nabla^2f(\upsilon),u^{\otimes2}\rangle-\frac16\langle\nabla^3f(\upsilon),u^{\otimes3}\rangle.
\]

Now, for each $\upsilon$, suppose to be given a positive symmetric matrix $\mathbb D(\upsilon)$ defining a local metric and a local vicinity around $\upsilon$: for some radius $r$,

\[
\mathcal U_r(\upsilon)=\big\{u\in\mathbb R^p:\|\mathbb D(\upsilon)u\|\le r\big\}.
\]

Local smoothness properties of $f$ at $\upsilon$ are given via the quantities

\[
\omega(\upsilon)\stackrel{\mathrm{def}}{=}\sup_{u:\|\mathbb D(\upsilon)u\|\le r}\frac{2|\delta_3(\upsilon,u)|}{\|\mathbb D(\upsilon)u\|^2},\qquad
\omega'(\upsilon)\stackrel{\mathrm{def}}{=}\sup_{u:\|\mathbb D(\upsilon)u\|\le r}\frac{|\delta'_3(\upsilon,u)|}{\|\mathbb D(\upsilon)u\|^2}.\qquad(2)
\]

The definition yields for any $u$ with $\|\mathbb D(\upsilon)u\|\le r$

\[
|\delta_3(\upsilon,u)|\le\frac{\omega(\upsilon)}{2}\|\mathbb D(\upsilon)u\|^2,\qquad
|\delta'_3(\upsilon,u)|\le\omega'(\upsilon)\|\mathbb D(\upsilon)u\|^2.\qquad(3)
\]

The approximation results can be improved under a third-order upper bound on the error of the Taylor expansion.

(T3) For some $\tau_3$,

\[
|\delta_3(\upsilon,u)|\le\frac{\tau_3}{6}\|\mathbb D(\upsilon)u\|^3,\qquad
|\delta'_3(\upsilon,u)|\le\frac{\tau_3}{2}\|\mathbb D(\upsilon)u\|^3,\qquad u\in\mathcal U_r(\upsilon).
\]

(T4) For some $\tau_4$,

\[
|\delta_4(\upsilon,u)|\le\frac{\tau_4}{24}\|\mathbb D(\upsilon)u\|^4,\qquad u\in\mathcal U_r(\upsilon).
\]

We also present a version of (T3) resp. (T4) in terms of the third (resp. fourth) derivative of $f$.

(T$^*_3$) $f(\upsilon)$ is three times differentiable and

\[
\sup_{u:\|\mathbb D(\upsilon)u\|\le r}\ \sup_{z\in\mathbb R^p}\frac{\big|\langle\nabla^3f(\upsilon+u),z^{\otimes3}\rangle\big|}{\|\mathbb D(\upsilon)z\|^3}\le\tau_3.
\]

(T$^*_4$) $f(\upsilon)$ is four times differentiable and

\[
\sup_{u:\|\mathbb D(\upsilon)u\|\le r}\ \sup_{z\in\mathbb R^p}\frac{\big|\langle\nabla^4f(\upsilon+u),z^{\otimes4}\rangle\big|}{\|\mathbb D(\upsilon)z\|^4}\le\tau_4.
\]

By Banach's characterization (Banach, 1938), (T$^*_3$) implies

\[
\big|\langle\nabla^3f(\upsilon+u),z_1\otimes z_2\otimes z_3\rangle\big|\le\tau_3\,\|\mathbb D(\upsilon)z_1\|\,\|\mathbb D(\upsilon)z_2\|\,\|\mathbb D(\upsilon)z_3\|\qquad(4)
\]

for any $u$ with $\|\mathbb D(\upsilon)u\|\le r$ and all $z_1,z_2,z_3\in\mathbb R^p$.
Similarly, under (T$^*_4$),

\[
\big|\langle\nabla^4f(\upsilon+u),z_1\otimes z_2\otimes z_3\otimes z_4\rangle\big|\le\tau_4\,\|\mathbb D(\upsilon)z_1\|\,\|\mathbb D(\upsilon)z_2\|\,\|\mathbb D(\upsilon)z_3\|\,\|\mathbb D(\upsilon)z_4\|,\qquad z_1,z_2,z_3,z_4\in\mathbb R^p.\qquad(5)
\]

Lemma 2.1 Under (T3) or (T$^*_3$), it holds for $\omega(\upsilon)$ and $\omega'(\upsilon)$ from (2)

\[
\omega(\upsilon)\le\frac{\tau_3r}{3},\qquad\omega'(\upsilon)\le\frac{\tau_3r}{2}.
\]

Proof For any $u\in\mathcal U_r(\upsilon)$ with $\|\mathbb D(\upsilon)u\|\le r$,

\[
|\delta_3(\upsilon,u)|\le\frac{\tau_3}{6}\|\mathbb D(\upsilon)u\|^3\le\frac{\tau_3r}{6}\|\mathbb D(\upsilon)u\|^2,
\]

and the bound for $\omega(\upsilon)$ follows. The proof for $\omega'(\upsilon)$ is similar. Under (T$^*_3$), apply the third-order Taylor expansion to the univariate function $f(\upsilon+tu)$ of $t$ and (T$^*_3$) with $z\equiv u$.

The values $\tau_3$ and $\tau_4$ are usually very small. Some quantitative bounds are given later in this section under the assumption that the function $f(\upsilon)$ can be written in the form $f(\upsilon)=nh(\upsilon)$ for a fixed smooth function $h(\upsilon)$ with the Hessian $\nabla^2h(\upsilon)$. The factor $n$ has the meaning of the sample size.

(S$^*_3$) $f(\upsilon)=nh(\upsilon)$ for $h(\upsilon)$ three times differentiable and, for a metric tensor $\mathbb M(\upsilon)$,

\[
\sup_{u:\|\mathbb M(\upsilon)u\|\le r/\sqrt n}\frac{\big|\langle\nabla^3h(\upsilon+u),u^{\otimes3}\rangle\big|}{\|\mathbb M(\upsilon)u\|^3}\le c_3.
\]

(S$^*_4$) The function $h(\cdot)$ satisfies (S$^*_3$) and

\[
\sup_{u:\|\mathbb M(\upsilon)u\|\le r/\sqrt n}\frac{\big|\langle\nabla^4h(\upsilon+u),u^{\otimes4}\rangle\big|}{\|\mathbb M(\upsilon)u\|^4}\le c_4.
\]

(S$^*_3$) and (S$^*_4$) are local versions of the so-called self-concordance condition; see Nesterov and Nemirovskii (1994) and Ostrovskii and Bach (2021). In fact, they require that each univariate function $h(\upsilon+tu)$ of $t\in\mathbb R$ is self-concordant with some universal constants $c_3$ and $c_4$. Under (S$^*_3$) and (S$^*_4$), with $\mathbb D^2(\upsilon)=n\,\mathbb M^2(\upsilon)$, the values $\delta_3(\upsilon,u)$, $\delta_4(\upsilon,u)$, and $\omega(\upsilon)$, $\omega'(\upsilon)$ can be well bounded.

Lemma 2.2 Suppose (S$^*_3$). Then (T3) follows with $\tau_3=c_3n^{-1/2}$. Moreover, for $\omega(\upsilon)$ and $\omega'(\upsilon)$ from (2), it holds

\[
\omega(\upsilon)\le\frac{c_3r}{3n^{1/2}},\qquad\omega'(\upsilon)\le\frac{c_3r}{2n^{1/2}}.\qquad(6)
\]

Also (T4) follows from (S$^*_4$) with $\tau_4=c_4n^{-1}$.

Proof For any $u\in\mathcal U_r(\upsilon)$, by the third-order Taylor expansion,

\[
|\delta_3(\upsilon,u)|\le\sup_{t\in[0,1]}\frac16\big|\langle\nabla^3f(\upsilon+tu),u^{\otimes3}\rangle\big|
=\frac n6\sup_{t\in[0,1]}\big|\langle\nabla^3h(\upsilon+tu),u^{\otimes3}\rangle\big|
\le\frac{nc_3}{6}\|\mathbb M(\upsilon)u\|^3
=\frac{n^{-1/2}c_3}{6}\|\mathbb D(\upsilon)u\|^3
\le\frac{n^{-1/2}c_3r}{6}\|\mathbb D(\upsilon)u\|^2.
\]

This implies (T3) as well as (6); see (3). The statement about (T4) is similar.

3 Linearly perturbed optimization

Let $f(\upsilon)$ be a smooth convex function,

\[
\upsilon^*=\operatorname*{argmin}_\upsilon f(\upsilon),\qquad f(\upsilon^*)=\min_\upsilon f(\upsilon),\qquad\mathbb F=\nabla^2f(\upsilon^*).
\]

Later we study the question of how the point of minimum and the value of minimum of $f$ change if we add a linear component to $f$. More precisely, let another function $g(\upsilon)$ satisfy for some vector $A$

\[
g(\upsilon)-g(\upsilon^*)=\langle\upsilon-\upsilon^*,A\rangle+f(\upsilon)-f(\upsilon^*).\qquad(7)
\]

Define

\[
\upsilon^\circ\stackrel{\mathrm{def}}{=}\operatorname*{argmin}_\upsilon g(\upsilon),\qquad g(\upsilon^\circ)=\min_\upsilon g(\upsilon).\qquad(8)
\]

The aim of the analysis is to evaluate the quantities $\upsilon^\circ-\upsilon^*$ and $g(\upsilon^\circ)-g(\upsilon^*)$. First, we consider the case of a quadratic function $f$.

Lemma 3.1 Let $f(\upsilon)$ be quadratic with $\nabla^2f(\upsilon)\equiv\mathbb F$ and let $g(\upsilon)$ be from (7). Then

\[
\upsilon^\circ-\upsilon^*=-\mathbb F^{-1}A,\qquad g(\upsilon^\circ)-g(\upsilon^*)=-\frac12\|\mathbb F^{-1/2}A\|^2.
\]

Proof If $f(\upsilon)$ is quadratic, then under (7), $g(\upsilon)$ is quadratic as well with $\nabla^2g(\upsilon)\equiv\mathbb F$. This implies $\nabla g(\upsilon^*)-\nabla g(\upsilon^\circ)=\mathbb F(\upsilon^*-\upsilon^\circ)$. Further, (7) and $\nabla f(\upsilon^*)=0$ yield $\nabla g(\upsilon^*)=A$. Together with $\nabla g(\upsilon^\circ)=0$, this implies $\upsilon^\circ-\upsilon^*=-\mathbb F^{-1}A$. The Taylor expansion of $g$ at $\upsilon^\circ$ yields by $\nabla g(\upsilon^\circ)=0$

\[
g(\upsilon^*)-g(\upsilon^\circ)=\frac12\|\mathbb F^{1/2}(\upsilon^\circ-\upsilon^*)\|^2=\frac12\|\mathbb F^{-1/2}A\|^2,
\]

and the assertion follows.

3.1 Local concentration

Let $f$ satisfy (2) at $\upsilon^*$ with $\mathbb D(\upsilon^*)=\mathbb D\le\kappa\,\mathbb F^{1/2}$ for some $\kappa>0$. The latter means that the matrix $\mathbb F-\kappa^{-2}\mathbb D^2$ is positive definite. The next result describes the concentration properties of $\upsilon^\circ$ from (8) in a local elliptic set

\[
\mathcal A(r)\stackrel{\mathrm{def}}{=}\{\upsilon:\|\mathbb F^{1/2}(\upsilon-\upsilon^*)\|\le r\},\qquad(9)
\]

where $r$ is slightly larger than $\|\mathbb F^{-1/2}A\|$.

Theorem 3.1 Let $f(\upsilon)$ be a strongly convex function with $f(\upsilon^*)=\min_\upsilon f(\upsilon)$ and $\mathbb F=\nabla^2f(\upsilon^*)$. Let, further, $g(\upsilon)$ and $f(\upsilon)$ be related by (7) with some vector $A$. Fix $\nu<1$ and $r$ such that $\|\mathbb F^{-1/2}A\|\le\nu r$. Suppose now that $f(\upsilon)$ satisfies (2) for $\upsilon=\upsilon^*$, $\mathbb D(\upsilon^*)=\mathbb D\le\kappa\,\mathbb F^{1/2}$ with some $\kappa>0$ and $\omega'$ such that

\[
1-\nu-\omega'\kappa^2>0.\qquad(10)
\]

Then for $\upsilon^\circ$ from (8), it holds $\|\mathbb F^{1/2}(\upsilon^\circ-\upsilon^*)\|\le r$ and $\|\mathbb D(\upsilon^\circ-\upsilon^*)\|\le\kappa r$.

Proof Define $\mathbb D_\kappa=\kappa^{-1}\mathbb D$. Then $\mathbb D_\kappa\le\mathbb F^{1/2}$ and the use of $\mathbb D_\kappa$ in place of $\mathbb D$ yields (2) with $\omega'\kappa^2$ in place of $\omega'$. This allows us to reduce the proof to $\kappa=1$. The bound $\|\mathbb F^{-1/2}A\|\le\nu r$ implies for any $u$

\[
|\langle A,u\rangle|=\big|\langle\mathbb F^{-1/2}A,\mathbb F^{1/2}u\rangle\big|\le\nu r\,\|\mathbb F^{1/2}u\|.
\]

Let $\upsilon$ be a point on the boundary of the set $\mathcal A(r)$ from (9). We also write $u=\upsilon-\upsilon^*$. The idea is to show that the derivative $\frac{d}{dt}g(\upsilon^*+tu)$ is positive for $t>1$. Then all the extreme points of $g(\upsilon)$ are within $\mathcal A(r)$. We use the decomposition

\[
g(\upsilon^*+tu)-g(\upsilon^*)=\langle A,u\rangle t+f(\upsilon^*+tu)-f(\upsilon^*).
\]

With $h(t)=f(\upsilon^*+tu)-f(\upsilon^*)+\langle A,u\rangle t$, it holds

\[
\frac{d}{dt}f(\upsilon^*+tu)=-\langle A,u\rangle+h'(t).\qquad(11)
\]

By definition of $\upsilon^*$, it also holds $h'(0)=\langle A,u\rangle$. The identity $\nabla^2f(\upsilon^*)=\mathbb F$ yields $h''(0)=\|\mathbb F^{1/2}u\|^2$. Bound (3) implies for $|t|\le1$

\[
\big|h'(t)-h'(0)-th''(0)\big|\le t\,\|\mathbb Du\|^2\,\omega'.
\]
For $t = 1$, we obtain by (10)
\[
h'(1) \ge \langle A, u \rangle + \|\mathbb{F}^{1/2} u\|^2 - \|\mathbb{D} u\|^2 \omega' \ge \|\mathbb{F}^{1/2} u\|^2 (1 - \omega' - \nu) > 0.
\]
Moreover, convexity of $h(t)$ implies that $h'(t) - h'(0)$ increases in $t$ for $t > 1$. Further, summing up the above derivation yields
\[
\frac{d}{dt} g(\upsilon^* + tu) \Big|_{t=1} \ge \|\mathbb{F}^{1/2} u\|^2 (1 - \nu - \omega') > 0.
\]
As $\frac{d}{dt} g(\upsilon^* + tu)$ increases with $t \ge 1$ together with $h'(t)$ due to (11), the same applies to all such $t$. This implies the assertion.

3.2 Second-order expansions

The result of Theorem 3.1 allows us to localize the point $\upsilon^\circ = \operatorname{argmin}_{\upsilon} g(\upsilon)$ in the local vicinity $\mathcal{A}(r)$ of $\upsilon^*$. The use of smoothness properties of $g$ or, equivalently, of $f$, in this vicinity helps to obtain rather sharp expansions for $\upsilon^\circ - \upsilon^*$ and for $g(\upsilon^\circ) - g(\upsilon^*)$.

Theorem 3.2. Assume the conditions of Theorem 3.1 with $\omega \le 1/3$. Then
\[
-\frac{\omega}{1 - \kappa^2 \omega} \|\mathbb{D} \mathbb{F}^{-1} A\|^2 \le 2 g(\upsilon^\circ) - 2 g(\upsilon^*) + \|\mathbb{F}^{-1/2} A\|^2 \le \frac{\omega}{1 + \kappa^2 \omega} \|\mathbb{D} \mathbb{F}^{-1} A\|^2. \tag{12}
\]
Also
\[
\|\mathbb{D}(\upsilon^\circ - \upsilon^* + \mathbb{F}^{-1} A)\| \le \frac{2 \sqrt{\omega}}{1 - \kappa^2 \omega} \|\mathbb{D} \mathbb{F}^{-1} A\|, \qquad
\|\mathbb{D}(\upsilon^\circ - \upsilon^*)\| \le \frac{1 + 2 \sqrt{\omega}}{1 - \kappa^2 \omega} \|\mathbb{D} \mathbb{F}^{-1} A\|. \tag{13}
\]

Proof. As in the proof of Theorem 3.1, replacing $\mathbb{D}$ with $\mathbb{D}_\kappa = \kappa^{-1} \mathbb{D}$ reduces the statement to $\kappa = 1$ in view of $\kappa^2 \omega \mathbb{D}_\kappa^2 = \omega \mathbb{D}^2$.
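The quadratic baseline of Lemma 3.1, to which the proofs above repeatedly reduce, is easy to confirm numerically. The following sketch (assuming numpy is available; the randomly generated test problem is illustrative, not from the paper) checks both identities of the lemma: the perturbed minimizer is $\upsilon^* - \mathbb{F}^{-1}A$ and the value drop is $-\tfrac12\|\mathbb{F}^{-1/2}A\|^2$.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 5
M = rng.standard_normal((p, p))
F = M @ M.T + p * np.eye(p)          # random positive definite Hessian (illustrative)
ups_star = rng.standard_normal(p)    # minimizer of the quadratic f
A = rng.standard_normal(p)           # linear perturbation vector

def f(v):                            # quadratic f with Hessian F and minimum at ups_star
    d = v - ups_star
    return 0.5 * d @ F @ d

def g(v):                            # linearly perturbed objective as in (7)
    return (v - ups_star) @ A + f(v)

# claimed minimizer of g from Lemma 3.1
ups_circ = ups_star - np.linalg.solve(F, A)

# first identity: the gradient of g vanishes at ups_circ
assert np.allclose(A + F @ (ups_circ - ups_star), 0)

# second identity: the value drop equals -0.5 * ||F^{-1/2} A||^2
drop = g(ups_circ) - g(ups_star)
assert np.isclose(drop, -0.5 * A @ np.linalg.solve(F, A))
```

Since both sides are exact for quadratic $f$, the assertions hold up to floating-point error for any seed.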
By (2), for any $\upsilon \in \mathcal{A}(r)$,
\[
\Bigl| f(\upsilon) - f(\upsilon^*) - \tfrac{1}{2} \|\mathbb{F}^{1/2}(\upsilon - \upsilon^*)\|^2 \Bigr| \le \frac{\omega}{2} \|\mathbb{D}(\upsilon - \upsilon^*)\|^2. \tag{14}
\]
Further,
\[
g(\upsilon) - g(\upsilon^*) + \tfrac{1}{2} \|\mathbb{F}^{-1/2} A\|^2
= \langle \upsilon - \upsilon^*, A \rangle + f(\upsilon) - f(\upsilon^*) + \tfrac{1}{2} \|\mathbb{F}^{-1/2} A\|^2
= \tfrac{1}{2} \bigl\| \mathbb{F}^{1/2}(\upsilon - \upsilon^*) + \mathbb{F}^{-1/2} A \bigr\|^2 + f(\upsilon) - f(\upsilon^*) - \tfrac{1}{2} \|\mathbb{F}^{1/2}(\upsilon - \upsilon^*)\|^2. \tag{15}
\]
As $\upsilon^\circ \in \mathcal{A}(r)$ and it minimizes $g(\upsilon)$, we derive by (14)
\[
g(\upsilon^\circ) - g(\upsilon^*) + \tfrac{1}{2} \|\mathbb{F}^{-1/2} A\|^2
= \min_{\upsilon \in \mathcal{A}(r)} \Bigl\{ g(\upsilon) - g(\upsilon^*) + \tfrac{1}{2} \|\mathbb{F}^{-1/2} A\|^2 \Bigr\}
\ge \min_{\upsilon \in \mathcal{A}(r)} \Bigl\{ \tfrac{1}{2} \bigl\| \mathbb{F}^{1/2}(\upsilon - \upsilon^*) + \mathbb{F}^{-1/2} A \bigr\|^2 - \frac{\omega}{2} \|\mathbb{D}(\upsilon - \upsilon^*)\|^2 \Bigr\}.
\]
Denote $u = \mathbb{F}^{1/2}(\upsilon - \upsilon^*)$, $\xi = \mathbb{F}^{-1/2} A$, and $\mathbb{B} = \mathbb{F}^{-1/2} \mathbb{D}^2 \mathbb{F}^{-1/2}$. As $\mathbb{D}^2 \le \mathbb{F}$, $\|\xi\| < r$, and $\omega < 1$, it holds $\|\mathbb{B}\| \le 1$ for the operator norm of $\mathbb{B}$ and
\[
\min_{\upsilon \in \mathcal{A}(r)} \bigl\{ \bigl\| \mathbb{F}^{1/2}(\upsilon - \upsilon^*) + \mathbb{F}^{-1/2} A \bigr\|^2 - \omega \|\mathbb{D}(\upsilon - \upsilon^*)\|^2 \bigr\}
= \min_{\|u\| \le r} \bigl\{ \|u + \xi\|^2 - \omega u^\top \mathbb{B} u \bigr\}
= -\xi^\top \bigl\{ (I - \omega \mathbb{B})^{-1} - I \bigr\} \xi
\ge -\frac{\omega}{1 - \omega} \xi^\top \mathbb{B} \xi,
\]
with $I$ the unit matrix in $\mathbb{R}^p$. This yields
\[
g(\upsilon^\circ) - g(\upsilon^*) + \tfrac{1}{2} \|\mathbb{F}^{-1/2} A\|^2 \ge -\frac{\omega}{2(1 - \omega)} \|\mathbb{D} \mathbb{F}^{-1} A\|^2.
\]
Similarly
\[
g(\upsilon^\circ) - g(\upsilon^*) + \tfrac{1}{2} \|\mathbb{F}^{-1/2} A\|^2
\le \min_{\upsilon \in \mathcal{A}(r)} \Bigl\{ \tfrac{1}{2} \bigl\| \mathbb{F}^{1/2}(\upsilon - \upsilon^*) + \mathbb{F}^{-1/2} A \bigr\|^2 + \frac{\omega}{2} \|\mathbb{D}(\upsilon - \upsilon^*)\|^2 \Bigr\}
\le \tfrac{1}{2} \xi^\top \bigl\{ I - (I + \omega \mathbb{B})^{-1} \bigr\} \xi
\le \frac{\omega}{2(1 + \omega)} \|\mathbb{D} \mathbb{F}^{-1} A\|^2. \tag{16}
\]
These bounds imply (12). Now we derive similarly to (15) that for $\upsilon \in \mathcal{A}(r)$
\[
g(\upsilon) - g(\upsilon^*) \ge \langle \upsilon - \upsilon^*, A \rangle + \tfrac{1}{2} \|\mathbb{F}^{1/2}(\upsilon - \upsilon^*)\|^2 - \frac{\omega}{2} \|\mathbb{D}(\upsilon - \upsilon^*)\|^2.
\]
A particular choice $\upsilon = \upsilon^\circ$ yields
\[
g(\upsilon^\circ) - g(\upsilon^*) \ge \langle \upsilon^\circ - \upsilon^*, A \rangle + \tfrac{1}{2} \|\mathbb{F}^{1/2}(\upsilon^\circ - \upsilon^*)\|^2 - \frac{\omega}{2} \|\mathbb{D}(\upsilon^\circ - \upsilon^*)\|^2.
\]
Combining this inequality with (16) allows us to bound
\[
\langle \upsilon^\circ - \upsilon^*, A \rangle + \tfrac{1}{2} \|\mathbb{F}^{1/2}(\upsilon^\circ - \upsilon^*)\|^2 - \frac{\omega}{2} \|\mathbb{D}(\upsilon^\circ - \upsilon^*)\|^2 \le -\tfrac{1}{2} \xi^\top (I + \omega \mathbb{B})^{-1} \xi.
\]
With $u^\circ = \mathbb{F}^{1/2}(\upsilon^\circ - \upsilon^*)$, this implies
\[
2 \langle u^\circ, \xi \rangle + u^{\circ\top} (I - \omega \mathbb{B}) u^\circ \le -\xi^\top (I + \omega \mathbb{B})^{-1} \xi,
\]
and hence
\[
\bigl\{ u^\circ + (I - \omega \mathbb{B})^{-1} \xi \bigr\}^\top (I - \omega \mathbb{B}) \bigl\{ u^\circ + (I - \omega \mathbb{B})^{-1} \xi \bigr\}
\le \xi^\top \bigl\{ (I - \omega \mathbb{B})^{-1} - (I + \omega \mathbb{B})^{-1} \bigr\} \xi
\le \frac{2\omega}{(1 + \omega)(1 - \omega)} \xi^\top \mathbb{B} \xi.
\]
Introduce $\|\cdot\|_{\mathbb{V}}$ by $\|x\|_{\mathbb{V}}^2 \stackrel{\mathrm{def}}{=} x^\top (I - \omega \mathbb{B}) x$, $x \in \mathbb{R}^p$. Then
\[
\bigl\| u^\circ + (I - \omega \mathbb{B})^{-1} \xi \bigr\|_{\mathbb{V}}^2 \le \frac{2\omega}{1 - \omega^2} \xi^\top \mathbb{B} \xi.
\]
As
\[
\bigl\| \xi - (I - \omega \mathbb{B})^{-1} \xi \bigr\|_{\mathbb{V}}^2 = \omega^2 (\mathbb{B}\xi)^\top (I - \omega \mathbb{B})^{-1} \mathbb{B}\xi \le \frac{\omega^2}{1 - \omega} \|\mathbb{B}\xi\|^2 \le \frac{\omega^2}{1 - \omega} \xi^\top \mathbb{B} \xi,
\]
we conclude for $\omega \le 1/3$ by the triangle inequality
\[
\|u^\circ + \xi\|_{\mathbb{V}} \le \biggl( \sqrt{\frac{\omega^2}{1 - \omega}} + \sqrt{\frac{2\omega}{1 - \omega^2}} \biggr) \sqrt{\xi^\top \mathbb{B} \xi} \le 2 \sqrt{\frac{\omega}{1 - \omega}} \sqrt{\xi^\top \mathbb{B} \xi},
\]
and (13) follows by $I - \omega \mathbb{B} \ge (1 - \omega) I$.

Remark 3.1. The roles of the functions $f$ and $g$ are exchangeable. In particular, the results from (13) apply with $\mathbb{F} = \nabla^2 g(\upsilon^\circ) = \nabla^2 f(\upsilon^\circ)$ provided that (2) is fulfilled at $\upsilon = \upsilon^\circ$.

3.3 Expansions under third order smoothness

The results of Theorem 3.2 can be refined if $f$ satisfies condition $(T_3)$.

Theorem 3.3. Let $f(\upsilon)$ be a strongly convex function with $f(\upsilon^*) = \min_{\upsilon} f(\upsilon)$ and $\mathbb{F} = \nabla^2 f(\upsilon^*)$. Let $g(\upsilon)$ fulfill (7) with some vector $A$. Suppose that $f(\upsilon)$ follows $(T_3)$ at $\upsilon^*$ with $\mathbb{D}^2$, $r$, and $\tau_3$ such that
\[
\mathbb{D}^2 \le \kappa^2 \mathbb{F}, \qquad r \ge \frac{4\kappa}{3} \|\mathbb{F}^{-1/2} A\|, \qquad \kappa^3 \tau_3 \|\mathbb{F}^{-1/2} A\| < \frac{1}{4}. \tag{17}
\]
Then $\upsilon^\circ = \operatorname{argmin}_{\upsilon} g(\upsilon)$ satisfies
\[
\|\mathbb{F}^{1/2}(\upsilon^\circ - \upsilon^*)\| \le \frac{4}{3} \|\mathbb{F}^{-1/2} A\|, \qquad \|\mathbb{D}(\upsilon^\circ - \upsilon^*)\| \le \frac{4\kappa}{3} \|\mathbb{F}^{-1/2} A\|.
\]
Moreover,
\[
\bigl| 2 g(\upsilon^\circ) - 2 g(\upsilon^*) + \|\mathbb{F}^{-1/2} A\|^2 \bigr| \le \frac{\tau_3}{2} \|\mathbb{D} \mathbb{F}^{-1} A\|^3. \tag{18}
\]

Proof. W.l.o.g. assume $\kappa = 1$ and replace $r$ with $\frac{4}{3} \|\mathbb{F}^{-1/2} A\|$. By (17), $\tau_3 r \le 1/3$, and (17) implies (10). Now $(T_3)$ and Lemma 2.1 ensure (2) with $\omega'(\upsilon) = \tau_3 r / 2$, and the first statement follows from Theorem 3.1 with $\nu = 3/4$. As $\nabla f(\upsilon^*) = 0$, $(T_3)$ implies for any $\upsilon \in \mathcal{A}(r)$
\[
\bigl| f(\upsilon) - f(\upsilon^*) - \tfrac{1}{2} \|\mathbb{F}^{1/2}(\upsilon - \upsilon^*)\|^2 \bigr| \le \frac{\tau_3}{6} \|\mathbb{D}(\upsilon - \upsilon^*)\|^3. \tag{19}
\]
Further,
\[
g(\upsilon) - g(\upsilon^*) + \tfrac{1}{2} \|\mathbb{F}^{-1/2} A\|^2
= \langle \upsilon - \upsilon^*, A \rangle + f(\upsilon) - f(\upsilon^*) + \tfrac{1}{2} \|\mathbb{F}^{-1/2} A\|^2
= \tfrac{1}{2} \bigl\| \mathbb{F}^{1/2}(\upsilon - \upsilon^*) + \mathbb{F}^{-1/2} A \bigr\|^2 + f(\upsilon) - f(\upsilon^*) - \tfrac{1}{2} \|\mathbb{F}^{1/2}(\upsilon - \upsilon^*)\|^2.
\]
As $\upsilon^\circ \in \mathcal{A}(r)$ and it minimizes $g(\upsilon)$, we derive by (19) and Lemma 3.2 with $U = \mathbb{F}^{1/2} \mathbb{D}^{-1}$ and $s = \mathbb{D} \mathbb{F}^{-1} A$
\[
2 g(\upsilon^\circ) - 2 g(\upsilon^*) + \|\mathbb{F}^{-1/2} A\|^2
= \min_{\upsilon \in \mathcal{A}(r)} \Bigl\{ 2 g(\upsilon) - 2 g(\upsilon^*) + \|\mathbb{F}^{-1/2} A\|^2 \Bigr\}
\ge \min_{\upsilon \in \mathcal{A}(r)} \Bigl\{ \bigl\| \mathbb{F}^{1/2}(\upsilon - \upsilon^*) + \mathbb{F}^{-1/2} A \bigr\|^2 - \frac{\tau_3}{3} \|\mathbb{D}(\upsilon - \upsilon^*)\|^3 \Bigr\}
\ge -\frac{\tau_3}{2} \|\mathbb{D} \mathbb{F}^{-1} A\|^3.
\]
Similarly
\[
2 g(\upsilon^\circ) - 2 g(\upsilon^*) + \|\mathbb{F}^{-1/2} A\|^2
\le \min_{\upsilon \in \mathcal{A}(r)} \Bigl\{ \bigl\| \mathbb{F}^{1/2}(\upsilon - \upsilon^*) + \mathbb{F}^{-1/2} A \bigr\|^2 + \frac{\tau_3}{3} \|\mathbb{D}(\upsilon - \upsilon^*)\|^3 \Bigr\}
\le \frac{\tau_3}{2} \|\mathbb{D} \mathbb{F}^{-1} A\|^3.
\]
This implies (18).

Lemma 3.2. Let $U \ge I$. Fix some $r$ and let $s \in \mathbb{R}^p$ satisfy $(3/4) r \le \|s\| \le r$. If $\tau r \le 1/3$, then
\[
\max_{\|u\| \le r} \Bigl( \frac{\tau}{3} \|u\|^3 - (u - s)^\top U (u - s) \Bigr) \le \frac{\tau}{2} \|s\|^3, \tag{20}
\]
\[
\min_{\|u\| \le r} \Bigl( \frac{\tau}{3} \|u\|^3 + (u - s)^\top U (u - s) \Bigr) \le \frac{\tau}{2} \|s\|^3. \tag{21}
\]

Proof. Replacing $\|u\|^3$ with $r \|u\|^2$ reduces the problem to quadratic programming. It holds with $\rho \stackrel{\mathrm{def}}{=} \tau r / 3$ and $s_\rho \stackrel{\mathrm{def}}{=} (U - \rho I)^{-1} U s$
\[
\frac{\tau}{3} \|u\|^3 - (u - s)^\top U (u - s)
\le \frac{\tau r}{3} \|u\|^2 - (u - s)^\top U (u - s)
= -u^\top (U - \rho I) u + 2 u^\top U s - s^\top U s
= -(u - s_\rho)^\top (U - \rho I)(u - s_\rho) + s_\rho^\top (U - \rho I) s_\rho - s^\top U s
\le s^\top \bigl\{ U (U - \rho I)^{-1} U - U \bigr\} s
= \rho\, s^\top U (U - \rho I)^{-1} s.
\]
This implies in view of $U \ge I$, $r \le (4/3) \|s\|$, and $\rho = \tau r / 3 \le 1/9$
\[
\max_{\|u\| \le r} \Bigl( \frac{\tau}{3} \|u\|^3 - (u - s)^\top U (u - s) \Bigr)
\le \frac{\rho}{1 - \rho} \|s\|^2 \le \frac{\tau r}{3 (1 - \rho)} \|s\|^2 \le \frac{4 \tau}{9 (1 - \rho)} \|s\|^3 \le \frac{\tau}{2} \|s\|^3,
\]
and (20) follows. Further, $\tau \|u\|^3 / 3 \le \tau r \|u\|^2 / 3 = \rho \|u\|^2$ for $\|u\| \le r$, and
\[
\frac{\tau}{3} \|u\|^3 + (u - s)^\top U (u - s) \le \rho \|u\|^2 + (u - s)^\top U (u - s).
\]
The global minimum of the latter function is attained at $u_\rho \stackrel{\mathrm{def}}{=} (U + \rho I)^{-1} U s$.
As $\|u_\rho\| \le \|s\| \le r$ and $(3/4) r \le \|s\|$, this implies
\[
\min_{\|u\| \le r} \Bigl( \rho \|u\|^2 + (u - s)^\top U (u - s) \Bigr)
= \frac{\tau r}{3} \|u_\rho\|^2 + (u_\rho - s)^\top U (u_\rho - s)
\le s^\top \bigl\{ U - U (U + \rho I)^{-1} U \bigr\} s
= \rho\, s^\top U (U + \rho I)^{-1} s
\le \rho \|s\|^2 \le \frac{4 \tau}{9} \|s\|^3,
\]
and (21) follows as well.

3.4 Advanced approximation under locally uniform smoothness

The bounds of Theorem 3.3 can be made more accurate if $f$ follows $(T_3^*)$ and $(T_4^*)$ and one can apply Taylor's expansion around any point close to $\upsilon^*$. In particular, the improved results do not involve the value $\|\mathbb{F}^{-1/2} A\|$, which can be large or even infinite in some situations; see Section 4.

Theorem 3.4. Let $f(\upsilon)$ be a strongly convex function with $f(\upsilon^*) = \min_{\upsilon} f(\upsilon)$ and $\mathbb{F} = \nabla^2 f(\upsilon^*)$. Assume $(T_3^*)$ at $\upsilon^*$ with $\mathbb{D}^2$, $r$, and $\tau_3$ such that
\[
\mathbb{D}^2 \le \kappa^2 \mathbb{F}, \qquad r \ge \frac{3}{2} \|\mathbb{D} \mathbb{F}^{-1} A\|, \qquad \kappa^2 \tau_3 \|\mathbb{D} \mathbb{F}^{-1} A\| < \frac{4}{9}.
\]
Then
\[
\|\mathbb{D}(\upsilon^\circ - \upsilon^*)\| \le \frac{3}{2} \|\mathbb{D} \mathbb{F}^{-1} A\|, \tag{22}
\]
\[
\|\mathbb{D}^{-1} \mathbb{F} (\upsilon^\circ - \upsilon^* + \mathbb{F}^{-1} A)\| \le \frac{3 \tau_3}{4} \|\mathbb{D} \mathbb{F}^{-1} A\|^2. \tag{23}
\]

Proof. W.l.o.g. assume $\kappa = 1$. If the function $f$ is quadratic and convex with the minimum at $\upsilon^*$, then the linearly perturbed function $g$ is also quadratic and convex with the minimum at $\breve{\upsilon} = \upsilon^* - \mathbb{F}^{-1} A$. In general, the point $\breve{\upsilon}$ is not the minimizer of $g$; however, it is very close to $\upsilon^\circ$. We use that $\nabla f(\upsilon^*) = 0$ and $\nabla^2 f(\upsilon^*) = \mathbb{F}$. The main step of the proof is given by the next lemma.

Lemma 3.3. Assume $(T_3^*)$ at $\upsilon$ and let $\mathcal{U}_r = \{u : \|\mathbb{D} u\| \le r\}$.
Then
\[
\bigl\| \mathbb{D}^{-1} \bigl\{ \nabla f(\upsilon + u) - \nabla f(\upsilon) - \langle \nabla^2 f(\upsilon), u \rangle \bigr\} \bigr\| \le \frac{\tau_3}{2} \|\mathbb{D} u\|^2, \qquad u \in \mathcal{U}_r. \tag{24}
\]
Also, for all $u, u_1 \in \mathcal{U}_r$,
\[
\bigl\| \mathbb{D}^{-1} \bigl\{ \nabla^2 f(\upsilon + u_1) - \nabla^2 f(\upsilon + u) \bigr\} \mathbb{D}^{-1} \bigr\| \le \tau_3 \|\mathbb{D}(u_1 - u)\| \tag{25}
\]
and
\[
\bigl\| \mathbb{D}^{-1} \bigl\{ \nabla f(\upsilon + u_1) - \nabla f(\upsilon + u) - \nabla^2 f(\upsilon)(u_1 - u) \bigr\} \bigr\| \le \frac{3 \tau_3}{2} \|\mathbb{D}(u_1 - u)\|^2. \tag{26}
\]
Moreover, under $(T_4^*)$, for any $u \in \mathcal{U}_r$,
\[
\bigl\| \mathbb{D}^{-1} \bigl\{ \nabla f(\upsilon + u) - \nabla f(\upsilon) - \langle \nabla^2 f(\upsilon), u \rangle - \tfrac{1}{2} \langle \nabla^3 f(\upsilon), u^{\otimes 2} \rangle \bigr\} \bigr\| \le \frac{\tau_4}{6} \|\mathbb{D} u\|^3. \tag{27}
\]

Proof. Fix $u \in \mathcal{U}_r$ and any vector $w \in \mathbb{R}^p$. For $t \in [0,1]$, denote
\[
h(t) \stackrel{\mathrm{def}}{=} \bigl\langle \nabla f(\upsilon + tu) - \nabla f(\upsilon) - t \langle \nabla^2 f(\upsilon), u \rangle,\; w \bigr\rangle.
\]
Then $h(0) = 0$, $h'(0) = 0$, and $(T_3^*)$ and (4) imply
\[
|h''(t)| = \bigl| \langle \nabla^3 f(\upsilon + tu), u^{\otimes 2} \otimes w \rangle \bigr| \le \tau_3 \|\mathbb{D} u\|^2 \|\mathbb{D} w\|.
\]
With $a \stackrel{\mathrm{def}}{=} \nabla f(\upsilon + u) - \nabla f(\upsilon) - \langle \nabla^2 f(\upsilon), u \rangle$, this yields
\[
|\langle a, w \rangle| = |h(1)| \le \frac{\tau_3}{2} \|\mathbb{D} u\|^2 \|\mathbb{D} w\|, \qquad
\|\mathbb{D}^{-1} a\| = \sup_{\|w\| = 1} |\langle \mathbb{D}^{-1} a, w \rangle| = \sup_{\|w\| = 1} |\langle a, \mathbb{D}^{-1} w \rangle| \le \frac{\tau_3}{2} \|\mathbb{D} u\|^2,
\]
and the first statement follows. For (27), apply
\[
a \stackrel{\mathrm{def}}{=} \nabla f(\upsilon + u) - \nabla f(\upsilon) - \langle \nabla^2 f(\upsilon), u \rangle - \tfrac{1}{2} \langle \nabla^3 f(\upsilon), u^{\otimes 2} \rangle,
\]
\[
h(t) \stackrel{\mathrm{def}}{=} \Bigl\langle \nabla f(\upsilon + tu) - \nabla f(\upsilon) - t \langle \nabla^2 f(\upsilon), u \rangle - \frac{t^2}{2} \langle \nabla^3 f(\upsilon), u^{\otimes 2} \rangle,\; w \Bigr\rangle,
\]
and use $(T_4^*)$ and (5) instead of $(T_3^*)$ and (4). Further, with $B_1 \stackrel{\mathrm{def}}{=} \nabla^2 f(\upsilon + u_1) - \nabla^2 f(\upsilon + u)$ and $\Delta = u_1 - u$, by $(T_3^*)$, for any $w \in \mathbb{R}^p$ and some $t \in [0,1]$,
\[
\bigl| \langle \mathbb{D}^{-1} \{ \nabla^2 f(\upsilon + u_1) - \nabla^2 f(\upsilon + u) \} \mathbb{D}^{-1}, w^{\otimes 2} \rangle \bigr|
= \bigl| \langle B_1, (\mathbb{D}^{-1} w)^{\otimes 2} \rangle \bigr|
= \bigl| \langle \nabla^3 f(\upsilon + u + t \Delta), \Delta \otimes (\mathbb{D}^{-1} w)^{\otimes 2} \rangle \bigr|
\le \tau_3 \|\mathbb{D} \Delta\| \|w\|^2.
\]
This proves (25).
Similarly, for some $t \in [0,1]$,
\[
\bigl| \bigl\langle \mathbb{D}^{-1} \{ \nabla f(\upsilon + u_1) - \nabla f(\upsilon + u) - \nabla^2 f(\upsilon + u) \Delta \}, w \bigr\rangle \bigr|
= \frac{1}{2} \bigl| \langle \nabla^3 f(\upsilon + u + t \Delta), \Delta \otimes \Delta \otimes \mathbb{D}^{-1} w \rangle \bigr|
\le \frac{\tau_3}{2} \|\mathbb{D} \Delta\|^2 \|w\|,
\]
and with $B = \nabla^2 f(\upsilon + u) - \nabla^2 f(\upsilon)$, by (25),
\[
\|\mathbb{D}^{-1} B \Delta\| \le \|\mathbb{D}^{-1} B \mathbb{D}^{-1}\| \|\mathbb{D} \Delta\| \le \tau_3 \|\mathbb{D} \Delta\|^2.
\]
This completes the proof of (26).

Now we prove (23). W.l.o.g. assume $\|\mathbb{D} \mathbb{F}^{-1} A\| = 2r/3$. Lemma 3.3, (24), applied with $\upsilon = \upsilon^*$ and $u = -\mathbb{F}^{-1} A$ yields for $\breve{\upsilon} = \upsilon^* - \mathbb{F}^{-1} A$
\[
\|\mathbb{D}^{-1} \nabla g(\breve{\upsilon})\| = \bigl\| \mathbb{D}^{-1} \{ \nabla f(\upsilon^* - \mathbb{F}^{-1} A) - \nabla f(\upsilon^*) + A \} \bigr\| \le \frac{\tau_3}{2} \|\mathbb{D} \mathbb{F}^{-1} A\|^2. \tag{28}
\]
As $\|\mathbb{D} \mathbb{F}^{-1} A\| = 2r/3$, condition $(T_3^*)$ can be applied in the $r/3$-vicinity of $\breve{\upsilon}$. We show that $g(\upsilon)$ attains its minimum within this vicinity. Fix any $\upsilon$ on its boundary, i.e. with $\|\mathbb{D}(\upsilon - \breve{\upsilon})\| = r/3$, and denote $\Delta = \upsilon - \breve{\upsilon}$. By (26),
\[
\bigl\| \mathbb{D}^{-1} \{ \nabla g(\upsilon) - \nabla g(\breve{\upsilon}) - \mathbb{F} \Delta \} \bigr\| = \bigl\| \mathbb{D}^{-1} \{ \nabla f(\upsilon) - \nabla f(\breve{\upsilon}) - \mathbb{F} \Delta \} \bigr\| \le \frac{3 \tau_3}{2} \|\mathbb{D} \Delta\|^2.
\]
In particular, this and (28) yield
\[
\bigl\| \mathbb{D}^{-1} \{ \nabla g(\breve{\upsilon} + \Delta) - \mathbb{F} \Delta \} \bigr\| \le 2 \tau_3 \|\mathbb{D} \Delta\|^2.
\]
For any $u$ with $\|u\| = 1$, this implies
\[
\bigl| \bigl\langle \nabla g(\breve{\upsilon} + \Delta) - \mathbb{F} \Delta, \mathbb{D}^{-1} u \bigr\rangle \bigr| \le 2 \tau_3 \|\mathbb{D} \Delta\|^2. \tag{29}
\]
Now consider the function $h(t) = g(\breve{\upsilon} + t \Delta)$.
Then $h'(t) = \langle \nabla g(\breve{\upsilon} + t\Delta), \Delta \rangle$, and (29) implies with $u = \mathbb{D} \Delta / \|\mathbb{D} \Delta\|$
\[
\bigl| \langle \nabla g(\breve{\upsilon} + \Delta), \Delta \rangle - \|\mathbb{F}^{1/2} \Delta\|^2 \bigr| \le 2 \tau_3 \|\mathbb{D} \Delta\|^3.
\]
As $\mathbb{F} \ge \mathbb{D}^2$, this yields
\[
h'(1) \ge \|\mathbb{D} \Delta\|^2 - 2 \tau_3 \|\mathbb{D} \Delta\|^3. \tag{30}
\]
Similarly $-h'(-1) \ge \|\mathbb{D} \Delta\|^2 - 2 \tau_3 \|\mathbb{D} \Delta\|^3$, and (28) yields by $\|\mathbb{D} \mathbb{F}^{-1} A\| = \frac{2}{3} r$
\[
|h'(0)| = |\langle \nabla g(\breve{\upsilon}), \Delta \rangle| \le \frac{\tau_3}{2} \|\mathbb{D} \mathbb{F}^{-1} A\|^2 \|\mathbb{D} \Delta\| \le \frac{2 \tau_3}{9} r^2 \|\mathbb{D} \Delta\|. \tag{31}
\]
Due to (30), (31), $\|\mathbb{D} \Delta\| = r/3$, $\tau_3 \|\mathbb{D} \mathbb{F}^{-1} A\| \le 4/9$, and $\|\mathbb{D} \mathbb{F}^{-1} A\| = 2r/3$,
\[
h'(1) - |h'(0)| \ge \|\mathbb{D} \Delta\|^2 - \frac{2 \tau_3}{9} r^2 \|\mathbb{D} \Delta\| - 2 \tau_3 \|\mathbb{D} \Delta\|^3
= \|\mathbb{D} \Delta\|\, r \Bigl( \frac{1}{3} - \frac{2 \tau_3 r}{9} - \frac{2 \tau_3 r}{9} \Bigr) > 0.
\]
Similarly $-h'(-1) > |h'(0)|$, and convexity of $g(\cdot)$ ensures that $t^* = \operatorname{argmin}_t h(t)$ satisfies $|t^*| \le 1$. We summarize that $\upsilon^\circ = \operatorname{argmin}_{\upsilon} g(\upsilon)$ satisfies $\|\mathbb{D}(\upsilon^\circ - \breve{\upsilon})\| \le r/3$ while $\|\mathbb{D}(\breve{\upsilon} - \upsilon^*)\| = \|\mathbb{D} \mathbb{F}^{-1} A\| = 2r/3$, thus yielding (22). We can now apply $(T_3^*)$ at $\upsilon^\circ$ for checking (23). As $\nabla g(\upsilon^\circ) = 0$, by (28),
\[
\bigl\| \mathbb{D}^{-1} \{ \nabla g(\upsilon^* - \mathbb{F}^{-1} A) - \nabla g(\upsilon^\circ) \} \bigr\| \le \frac{\tau_3}{2} \|\mathbb{D} \mathbb{F}^{-1} A\|^2. \tag{32}
\]
By (26) of Lemma 3.3, it holds with $\Delta = \upsilon^* - \mathbb{F}^{-1} A - \upsilon^\circ$
\[
\bigl\| \mathbb{D}^{-1} \{ \nabla g(\upsilon^* - \mathbb{F}^{-1} A) - \nabla g(\upsilon^\circ) - \nabla^2 g(\upsilon^*) \Delta \} \bigr\| \le \frac{3 \tau_3}{2} \|\mathbb{D} \Delta\|^2.
\]
Combining with (32) yields
\[
\|\mathbb{D}^{-1} \mathbb{F} \Delta\| \le \frac{3 \tau_3}{2} \|\mathbb{D} \Delta\|^2 + \frac{\tau_3}{2} \|\mathbb{D} \mathbb{F}^{-1} A\|^2 \le \frac{3 \tau_3}{2} \|\mathbb{D}^{-1} \mathbb{F} \Delta\|^2 + \frac{\tau_3}{2} \|\mathbb{D} \mathbb{F}^{-1} A\|^2.
\]
As $2x \le \alpha x^2 + \beta$ with $\alpha = 3 \tau_3$, $\beta = \tau_3 \|\mathbb{D} \mathbb{F}^{-1} A\|^2$, and $x = \|\mathbb{D}^{-1} \mathbb{F} \Delta\| \in (0, 1/\alpha)$ implies $x \le \beta / (2 - \alpha \beta)$, this yields
\[
\|\mathbb{D}^{-1} \mathbb{F} (\upsilon^\circ - \upsilon^* + \mathbb{F}^{-1} A)\| \le \frac{\tau_3}{2 - 3 \tau_3^2 \|\mathbb{D} \mathbb{F}^{-1} A\|^2} \|\mathbb{D} \mathbb{F}^{-1} A\|^2
\]
and (23) follows by $\tau_3 \|\mathbb{D} \mathbb{F}^{-1} A\| \le 4/9$.

Remark 3.2. As in Remark 3.1, $f$ and $g$ can be exchanged. In particular, (23) applies with $\mathbb{F} = \nabla^2 f(\upsilon^\circ)$ provided that $(T_3^*)$ is fulfilled at $\upsilon^\circ$.

3.5 Fourth-order expansions

Under the fourth-order condition $(T_4^*)$, expansion (23) can be further refined.

Theorem 3.5. Let $f(\upsilon)$ be a strongly convex function satisfying $(T_3^*)$ and $(T_4^*)$ at $\upsilon^* = \operatorname{argmin}_{\upsilon} f(\upsilon)$ with some $\mathbb{D}^2$, $\tau_3$, $\tau_4$, and $r$ such that
\[
\mathbb{D}^2 \le \kappa^2 \mathbb{F}, \quad r \ge \frac{3}{2} \|\mathbb{D} \mathbb{F}^{-1} A\|, \quad \kappa^2 \tau_3 \|\mathbb{D} \mathbb{F}^{-1} A\| < \frac{4}{9}, \quad \kappa^2 \tau_4 \|\mathbb{D} \mathbb{F}^{-1} A\|^2 < \frac{1}{3}. \tag{33}
\]
Let $g(\upsilon)$ fulfill (7) with some vector $A$ and $g(\upsilon^\circ) = \min_{\upsilon} g(\upsilon)$. Then $\|\mathbb{D}(\upsilon^\circ - \upsilon^*)\| \le (3/2) \|\mathbb{D} \mathbb{F}^{-1} A\|$. Further, define
\[
a = -\mathbb{F}^{-1} \{ A + \nabla T(\mathbb{F}^{-1} A) \}, \tag{34}
\]
where $T(u) = \frac{1}{6} \langle \nabla^3 f(\upsilon^*), u^{\otimes 3} \rangle$ for $u \in \mathbb{R}^p$. Then
\[
\|\mathbb{D}^{-1} \mathbb{F} (\upsilon^\circ - \upsilon^* - a)\| \le (\tau_4 / 2 + \kappa^2 \tau_3^2) \|\mathbb{D} \mathbb{F}^{-1} A\|^3. \tag{35}
\]
Also
\[
\Bigl| g(\upsilon^\circ) - g(\upsilon^*) + \frac{1}{2} \|\mathbb{F}^{-1/2} A\|^2 + T(\mathbb{F}^{-1} A) \Bigr|
\le \frac{\tau_4 + 4 \kappa^2 \tau_3^2}{8} \|\mathbb{D} \mathbb{F}^{-1} A\|^4 + \frac{\kappa^2 (\tau_4 + 2 \kappa^2 \tau_3^2)^2}{4} \|\mathbb{D} \mathbb{F}^{-1} A\|^6 \tag{36}
\]
and
\[
|T(\mathbb{F}^{-1} A)| \le \frac{\tau_3}{6} \|\mathbb{D} \mathbb{F}^{-1} A\|^3. \tag{37}
\]

Proof. W.l.o.g. assume $\kappa = 1$ and $\upsilon^* = 0$. Theorem 3.4 yields (23). Later we use that $T(-u) = -T(u)$ while $\nabla T(-u) = \nabla T(u)$.
By $(T_3^*)$ and (4),
\[
\|\mathbb{D}^{-1} \mathbb{F} (a + \mathbb{F}^{-1} A)\| = \|\mathbb{D}^{-1} \nabla T(\mathbb{F}^{-1} A)\|
= \sup_{\|u\| = 1} 3 \bigl| \langle T, \mathbb{F}^{-1} A \otimes \mathbb{F}^{-1} A \otimes \mathbb{D}^{-1} u \rangle \bigr|
\le \frac{\tau_3}{2} \|\mathbb{D} \mathbb{F}^{-1} A\|^2. \tag{38}
\]
As $\mathbb{D}^{-1} \mathbb{F} \ge \mathbb{F}^{1/2} \ge \mathbb{D}$, this implies by $\tau_3 \|\mathbb{D} \mathbb{F}^{-1} A\| \le 4/9$
\[
\|\mathbb{D} a\| \le \|\mathbb{D} \mathbb{F}^{-1} A\| + \|\mathbb{D} \mathbb{F}^{-1} \nabla T(\mathbb{F}^{-1} A)\|
\le \Bigl( 1 + \frac{\tau_3}{2} \|\mathbb{D} \mathbb{F}^{-1} A\| \Bigr) \|\mathbb{D} \mathbb{F}^{-1} A\|
\le \frac{11}{9} \|\mathbb{D} \mathbb{F}^{-1} A\| \tag{39}
\]
and
\[
\|\mathbb{F}^{1/2} a + \mathbb{F}^{-1/2} A\| \le \frac{\tau_3}{2} \|\mathbb{D} \mathbb{F}^{-1} A\|^2. \tag{40}
\]
Next, again by $(T_3^*)$, for any $w$,
\[
\|\mathbb{D}^{-1} \nabla^2 T(w) \mathbb{D}^{-1}\| = \sup_{\|u\| = 1} 6 \bigl| \langle T, w \otimes (\mathbb{D}^{-1} u)^{\otimes 2} \rangle \bigr| \le \tau_3 \|\mathbb{D} w\|.
\]
The tensor $\nabla^2 T(u)$ is linear in $u$, hence $\|\nabla^2 T(u)\|$ is convex in $u$ and
\[
\sup_{t \in [0,1]} \|\mathbb{D}^{-1} \nabla^2 T(t a - (1 - t) \mathbb{F}^{-1} A) \mathbb{D}^{-1}\|
= \max \bigl\{ \|\mathbb{D}^{-1} \nabla^2 T(-\mathbb{F}^{-1} A) \mathbb{D}^{-1}\|, \|\mathbb{D}^{-1} \nabla^2 T(a) \mathbb{D}^{-1}\| \bigr\}
\le \tau_3 \max \bigl\{ \|\mathbb{D} \mathbb{F}^{-1} A\|, \|\mathbb{D} a\| \bigr\}.
\]
Based on (39), assume $\|\mathbb{D} \mathbb{F}^{-1} A\| \le \|\mathbb{D} a\| \le (11/9) \|\mathbb{D} \mathbb{F}^{-1} A\|$. Then (38) yields by $\nabla T(u) = \nabla T(-u)$
\[
\|\mathbb{D}^{-1} \nabla T(a) - \mathbb{D}^{-1} \nabla T(\mathbb{F}^{-1} A)\|
= \|\mathbb{D}^{-1} \nabla T(a) - \mathbb{D}^{-1} \nabla T(-\mathbb{F}^{-1} A)\|
\le \sup_{t \in [0,1]} \|\mathbb{D}^{-1} \nabla^2 T(t a - (1 - t) \mathbb{F}^{-1} A) \mathbb{D}^{-1}\| \, \|\mathbb{D}(a + \mathbb{F}^{-1} A)\|
\le \frac{\tau_3^2}{2} \|\mathbb{D} \mathbb{F}^{-1} A\|^2 \|\mathbb{D} a\|
\le \frac{2 \tau_3^2}{3} \|\mathbb{D} \mathbb{F}^{-1} A\|^3.
\]
As $\nabla^2 f(0) = \mathbb{F}$ and $\nabla T(a) = \frac{1}{2} \langle \nabla^3 f(0), a \otimes a \rangle$, by (27) with $\upsilon = 0$ and $u = a$ and by (39),
\[
\|\mathbb{D}^{-1} \{ \nabla f(a) - \mathbb{F} a - \nabla T(a) \}\|
\le \frac{\tau_4}{6} \|\mathbb{D} a\|^3
\le \frac{(11/9)^3 \tau_4}{6} \|\mathbb{D} \mathbb{F}^{-1} A\|^3
\le \frac{\tau_4}{3} \|\mathbb{D} \mathbb{F}^{-1} A\|^3.
\]
Next we bound $\|\mathbb{D}^{-1} \{ \nabla g(a) - \nabla g(\upsilon^\circ) \}\|$. As $\nabla g(\upsilon^\circ) = 0$, (7) and (34) imply
\[
\|\mathbb{D}^{-1} \{ \nabla g(a) - \nabla g(\upsilon^\circ) \}\| = \|\mathbb{D}^{-1} \nabla g(a)\|
= \|\mathbb{D}^{-1} \{ \nabla g(a) - \mathbb{F} a - \nabla T(\mathbb{F}^{-1} A) - A \}\|
\le \|\mathbb{D}^{-1} \{ \nabla f(a) - \mathbb{F} a - \nabla T(a) \}\| + \|\mathbb{D}^{-1} \{ \nabla T(a) - \nabla T(\mathbb{F}^{-1} A) \}\|
\le \diamond_1, \tag{41}
\]
where
\[
\diamond_1 \stackrel{\mathrm{def}}{=} \frac{\tau_4 + 2 \tau_3^2}{3} \|\mathbb{D} \mathbb{F}^{-1} A\|^3,
\]
and by (33)
\[
3 \tau_3 \diamond_1 = \tau_3 \|\mathbb{D} \mathbb{F}^{-1} A\| \bigl( \tau_4 \|\mathbb{D} \mathbb{F}^{-1} A\|^2 + 2 \tau_3^2 \|\mathbb{D} \mathbb{F}^{-1} A\|^2 \bigr)
\le \frac{4}{9} \Bigl( \frac{1}{3} + \frac{32}{81} \Bigr) < \frac{1}{3}. \tag{42}
\]
Further, $\nabla^2 g(0) = \nabla^2 f(0) = \mathbb{F}$, and (26) of Lemma 3.3 implies
\[
\|\mathbb{D}^{-1} \{ \nabla g(a) - \nabla g(\upsilon^\circ) - \mathbb{F}(a - \upsilon^\circ) \}\|
= \|\mathbb{D}^{-1} \{ \nabla f(a) - \nabla f(\upsilon^\circ) - \mathbb{F}(a - \upsilon^\circ) \}\|
\le \frac{3 \tau_3}{2} \|\mathbb{D}(a - \upsilon^\circ)\|^2.
\]
Combining with (41) yields in view of $\mathbb{D}^2 \le \mathbb{F}$
\[
\|\mathbb{D}^{-1} \mathbb{F}(a - \upsilon^\circ)\| \le \frac{3 \tau_3}{2} \|\mathbb{D}(a - \upsilon^\circ)\|^2 + \diamond_1 \le \frac{3 \tau_3}{2} \|\mathbb{D}^{-1} \mathbb{F}(a - \upsilon^\circ)\|^2 + \diamond_1.
\]
As $2x \le \alpha x^2 + \beta$ with $\alpha = 3 \tau_3$, $\beta = 2 \diamond_1$, and $x \in (0, 1/\alpha)$ implies $x \le \beta / (2 - \alpha \beta)$, we conclude by (42)
\[
\|\mathbb{D}^{-1} \mathbb{F}(a - \upsilon^\circ)\| \le \frac{\diamond_1}{1 - 3 \tau_3 \diamond_1} \le \frac{\tau_4 + 2 \tau_3^2}{2} \|\mathbb{D} \mathbb{F}^{-1} A\|^3, \tag{43}
\]
and (35) follows. Next we bound $g(\upsilon^\circ) - g(0)$. By (40) and $\mathbb{D}^2 \le \mathbb{F}$,
\[
\frac{1}{2} \|\mathbb{F}^{-1/2} A\|^2 + \langle A, a \rangle + \frac{1}{2} \|\mathbb{F}^{1/2} a\|^2
= \frac{1}{2} \|\mathbb{F}^{1/2} a + \mathbb{F}^{-1/2} A\|^2
\le \frac{\tau_3^2}{8} \|\mathbb{D} \mathbb{F}^{-1} A\|^4.
\]
This together with $\nabla f(0) = 0$, $\nabla^2 f(0) = \mathbb{F}$, $(T_4^*)$, and (39) implies
\[
\Bigl| g(a) - g(0) + \frac{1}{2} \|\mathbb{F}^{-1/2} A\|^2 - T(a) \Bigr|
= \Bigl| f(a) - f(0) + \langle A, a \rangle + \frac{1}{2} \|\mathbb{F}^{-1/2} A\|^2 - T(a) \Bigr|
\le \Bigl| f(a) - f(0) - \frac{1}{2} \|\mathbb{F}^{1/2} a\|^2 - T(a) \Bigr| + \frac{\tau_3^2}{8} \|\mathbb{D} \mathbb{F}^{-1} A\|^4
\le \frac{\tau_4}{24} \|\mathbb{D} a\|^4 + \frac{\tau_3^2}{8} \|\mathbb{D} \mathbb{F}^{-1} A\|^4
\le \Bigl( \frac{\tau_4}{10} + \frac{\tau_3^2}{8} \Bigr) \|\mathbb{D} \mathbb{F}^{-1} A\|^4.
\]
Further, by $\nabla g(\upsilon^\circ) = 0$ and $\nabla^2 g(\cdot) \equiv \nabla^2 f(\cdot)$, it holds for some $\upsilon \in [a, \upsilon^\circ]$
\[
2 |g(a) - g(\upsilon^\circ)| = \bigl| \langle \nabla^2 f(\upsilon), (a - \upsilon^\circ)^{\otimes 2} \rangle \bigr|.
\]
As $\|\mathbb{D} a\| \le r = \frac{3}{2} \|\mathbb{D} \mathbb{F}^{-1} A\|$ and $\|\mathbb{D} \upsilon^\circ\| \le r$, also $\|\mathbb{D} \upsilon\| \le r$.
The use of $\nabla^2 f(0) = \mathbb{F} \ge \mathbb{D}^2$ and (25) yields by $\tau_3 \|\mathbb{D} \mathbb{F}^{-1} A\| < 4/9$ and (43)
\[
2 |g(a) - g(\upsilon^\circ)|
\le \|\mathbb{F}^{1/2}(a - \upsilon^\circ)\|^2 + \bigl| \langle \nabla^2 f(\upsilon) - \nabla^2 f(0), (a - \upsilon^\circ)^{\otimes 2} \rangle \bigr|
\le (1 + \tau_3 r) \|\mathbb{F}^{1/2}(a - \upsilon^\circ)\|^2
\le \frac{(5/3)(\tau_4 + 2 \tau_3^2)^2}{4} \|\mathbb{D} \mathbb{F}^{-1} A\|^6.
\]
As $T(a) = -T(-a)$, it holds with $\Delta \stackrel{\mathrm{def}}{=} \mathbb{F}^{-1} \nabla T(\mathbb{F}^{-1} A)$ for some $t \in [0,1]$
\[
|T(a) + T(\mathbb{F}^{-1} A)| = |T(\mathbb{F}^{-1} A + \Delta) - T(\mathbb{F}^{-1} A)|
= |\langle \nabla T(\mathbb{F}^{-1} A + t \Delta), \Delta \rangle|
\le \frac{\tau_3}{2} \|\mathbb{D}(\mathbb{F}^{-1} A + t \Delta)\|^2 \|\mathbb{D} \Delta\|
= \frac{\tau_3}{2} \|\mathbb{D} \mathbb{F}^{-1} A + t \mathbb{D} \Delta\|^2 \|\mathbb{D} \Delta\|.
\]
Similarly to (38), it holds $\|\mathbb{D} \Delta\| \le \|\mathbb{D}^{-1} \nabla T(\mathbb{F}^{-1} A)\| \le (\tau_3 / 2) \|\mathbb{D} \mathbb{F}^{-1} A\|^2$, and by $\tau_3 \|\mathbb{D} \mathbb{F}^{-1} A\| \le 1/2$
\[
|T(a) + T(\mathbb{F}^{-1} A)| \le \frac{(5/4)^2 \tau_3^2}{4} \|\mathbb{D} \mathbb{F}^{-1} A\|^4.
\]
Summing up the obtained bounds yields (36). (37) follows from $(T_3^*)$.

4 Quadratic penalization

Here we discuss the case when $g(\upsilon) - f(\upsilon)$ is quadratic. The general case can be reduced to the situation with $g(\upsilon) = f(\upsilon) + \|G \upsilon\|^2 / 2$. To make the dependence on $G$ more explicit, denote
\[
f_G(\upsilon) \stackrel{\mathrm{def}}{=} f(\upsilon) + \|G \upsilon\|^2 / 2, \qquad
\upsilon^* = \operatorname{argmin}_{\upsilon} f(\upsilon), \qquad
\upsilon^*_G = \operatorname{argmin}_{\upsilon} f_G(\upsilon) = \operatorname{argmin}_{\upsilon} \bigl\{ f(\upsilon) + \|G \upsilon\|^2 / 2 \bigr\}. \tag{44}
\]
We study the bias $\upsilon^*_G - \upsilon^*$ induced by this penalization. To get some intuition, consider first the case of a quadratic function $f(\upsilon)$.

Lemma 4.1. Let $f(\upsilon)$ be quadratic with $\mathbb{F} \equiv \nabla^2 f(\upsilon)$. Denote $\mathbb{F}_G = \mathbb{F} + G^2$. Then $\upsilon^*_G$ from (44) satisfies
\[
\upsilon^*_G - \upsilon^* = -\mathbb{F}_G^{-1} G^2 \upsilon^*, \qquad
f_G(\upsilon^*_G) - f_G(\upsilon^*) = -\frac{1}{2} \|\mathbb{F}_G^{-1/2} G^2 \upsilon^*\|^2.
\]
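Before the proof, both identities of Lemma 4.1 can be verified numerically. A minimal sketch (assuming numpy; the random quadratic $f$ and the diagonal penalty matrix $G$ are illustrative choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
p = 4
M = rng.standard_normal((p, p))
F = M @ M.T + p * np.eye(p)              # Hessian of the quadratic f (illustrative)
ups_star = rng.standard_normal(p)        # minimizer of f
G = np.diag(rng.uniform(0.5, 2.0, p))    # symmetric penalty matrix (assumption)
G2 = G @ G
FG = F + G2                              # Hessian of the penalized f_G

def f(v):
    d = v - ups_star
    return 0.5 * d @ F @ d

def fG(v):                               # f_G(v) = f(v) + ||G v||^2 / 2 as in (44)
    return f(v) + 0.5 * np.linalg.norm(G @ v) ** 2

# claimed bias: ups*_G - ups* = -FG^{-1} G^2 ups*
bias = -np.linalg.solve(FG, G2 @ ups_star)
ups_G = ups_star + bias

# stationarity of f_G at the claimed minimizer
assert np.allclose(F @ (ups_G - ups_star) + G2 @ ups_G, 0)

# value identity: f_G(ups*_G) - f_G(ups*) = -0.5 * ||FG^{-1/2} G^2 ups*||^2
s = G2 @ ups_star
assert np.isclose(fG(ups_G) - fG(ups_star), -0.5 * s @ np.linalg.solve(FG, s))
```

For quadratic $f$ both relations are exact, so the assertions pass up to rounding for any seed and any positive definite choices of $F$ and $G$.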
Proof. By definition, $f_G(\upsilon)$ is quadratic with $\nabla^2 f_G(\upsilon) \equiv \mathbb{F}_G$ and
\[
\nabla f_G(\upsilon^*_G) - \nabla f_G(\upsilon^*) = \mathbb{F}_G(\upsilon^*_G - \upsilon^*).
\]
Further, $\nabla f(\upsilon^*) = 0$ yields $\nabla f_G(\upsilon^*) = G^2 \upsilon^*$. Together with $\nabla f_G(\upsilon^*_G) = 0$, this implies $\upsilon^*_G - \upsilon^* = -\mathbb{F}_G^{-1} G^2 \upsilon^*$. The Taylor expansion of $f_G$ at $\upsilon^*_G$ yields
\[
f_G(\upsilon^*) - f_G(\upsilon^*_G) = \frac{1}{2} \|\mathbb{F}_G^{1/2}(\upsilon^* - \upsilon^*_G)\|^2 = \frac{1}{2} \|\mathbb{F}_G^{-1/2} G^2 \upsilon^*\|^2
\]
and the assertion follows.

Now we turn to the general case with $f$ satisfying $(T_3^*)$. Define
\[
\mathbb{F}_G \stackrel{\mathrm{def}}{=} \nabla^2 f_G(\upsilon^*), \qquad b_G \stackrel{\mathrm{def}}{=} \|\mathbb{D} \mathbb{F}_G^{-1} G^2 \upsilon^*\|. \tag{45}
\]

Theorem 4.1. Let $f_G(\upsilon) = f(\upsilon) + \|G \upsilon\|^2 / 2$ be strongly convex and follow $(T_3^*)$ with some $\mathbb{D}^2$, $\tau_3$, and $r$ satisfying for $\kappa > 0$
\[
\mathbb{D}^2 \le \kappa^2 \mathbb{F}_G, \qquad r \ge 3 b_G / 2, \qquad \kappa^2 \tau_3 b_G < 4/9.
\]
Then
\[
\|\mathbb{D}(\upsilon^*_G - \upsilon^*)\| \le 3 b_G / 2. \tag{46}
\]
Moreover,
\[
\|\mathbb{D}^{-1} \mathbb{F}_G(\upsilon^*_G - \upsilon^* + \mathbb{F}_G^{-1} G^2 \upsilon^*)\| \le \frac{3 \tau_3}{4} b_G^2, \qquad
\bigl| 2 f_G(\upsilon^*_G) - 2 f_G(\upsilon^*) + \|\mathbb{F}_G^{-1/2} G^2 \upsilon^*\|^2 \bigr| \le \frac{\tau_3}{2} b_G^3.
\]

Proof. Define $g_G(\upsilon)$ by
\[
g_G(\upsilon) - g_G(\upsilon^*_G) = f_G(\upsilon) - f_G(\upsilon^*_G) - \langle G^2 \upsilon^*, \upsilon - \upsilon^*_G \rangle. \tag{47}
\]
The function $f_G$ is convex, and the same holds for $g_G$ from (47). Moreover, $\nabla f_G(\upsilon^*) = G^2 \upsilon^*$ yields $\nabla g_G(\upsilon^*) = -G^2 \upsilon^* + G^2 \upsilon^* = 0$. Hence, $\upsilon^* = \operatorname{argmin}_{\upsilon} g_G(\upsilon)$ and $f_G(\upsilon)$ is a linear perturbation (7) of $g_G$ with $A = G^2 \upsilon^*$. Now the results follow from Theorem 3.4 and (18) of Theorem 3.3 applied with $f(\upsilon) = f_G(\upsilon) - \langle A, \upsilon \rangle$, $g(\upsilon) = f_G(\upsilon)$, $A = G^2 \upsilon^*$, and $\nabla^2 f(\upsilon^*) = \mathbb{F}_G$.

The bound can be further improved under fourth-order smoothness of $f$ using the results of Theorem 3.5.

Theorem 4.2. Let $f(\upsilon)$ be strongly convex and $\upsilon^* = \operatorname{argmin}_{\upsilon} f(\upsilon)$. Let also $f(\upsilon)$ follow $(T_3^*)$ and $(T_4^*)$ with $\mathbb{F}_G = \nabla^2 f(\upsilon^*) + G^2$ and some $\mathbb{D}^2$, $\tau_3$, $\tau_4$, and $r$ satisfying
\[
\mathbb{D}^2 \le \kappa^2 \mathbb{F}_G, \qquad r = \frac{3}{2} b_G, \qquad \kappa^2 \tau_3 b_G < \frac{4}{9}, \qquad \kappa^2 \tau_4 b_G^2 < \frac{1}{3}
\]
for $b_G$ from (45). Then (46) holds. Furthermore, define
\[
\mu_G = -\mathbb{F}_G^{-1} \{ G^2 \upsilon^* + \nabla T(\mathbb{F}_G^{-1} G^2 \upsilon^*) \}
\]
with $T(u) = \frac{1}{6} \langle \nabla^3 f(\upsilon^*), u^{\otimes 3} \rangle$ and $\nabla T(u) = \frac{1}{2} \langle \nabla^3 f(\upsilon^*), u^{\otimes 2} \rangle$. Then
\[
\|\mathbb{D}(\mu_G + \mathbb{F}_G^{-1} G^2 \upsilon^*)\| \le \frac{\tau_3}{2} b_G^2,
\]
and
\[
\|\mathbb{D}^{-1} \mathbb{F}_G(\upsilon^*_G - \upsilon^* - \mu_G)\| \le \frac{\tau_4 + 2 \kappa^2 \tau_3^2}{2} b_G^3,
\]
\[
\Bigl| f_G(\upsilon^*_G) - f_G(\upsilon^*) + \frac{1}{2} \|\mathbb{F}_G^{-1/2} G^2 \upsilon^*\|^2 + T(\mathbb{F}_G^{-1} G^2 \upsilon^*) \Bigr|
\le \frac{\tau_4 + 4 \kappa^2 \tau_3^2}{8} b_G^4 + \frac{\kappa^2 (\tau_4 + 2 \kappa^2 \tau_3^2)^2}{4} b_G^6.
\]

Proof. We apply Theorem 3.5 and use that for $a$ from (34), it holds $a = \mu_G$. Also use that $\nabla^3 f(\upsilon^*) = \nabla^3 f_G(\upsilon^*) = \nabla^3 g_G(\upsilon^*)$.

5 A smooth penalty

The case of a general smooth penalty $\operatorname{pen}_G(\upsilon)$ can be studied similarly to the quadratic case. Denote
\[
f_G(\upsilon) \stackrel{\mathrm{def}}{=} f(\upsilon) + \operatorname{pen}_G(\upsilon), \qquad
\upsilon^* = \operatorname{argmin}_{\upsilon} f(\upsilon), \qquad
\upsilon^*_G = \operatorname{argmin}_{\upsilon} f_G(\upsilon) = \operatorname{argmin}_{\upsilon} \bigl\{ f(\upsilon) + \operatorname{pen}_G(\upsilon) \bigr\}.
\]
We study the bias $\upsilon^*_G - \upsilon^*$ induced by this penalization. The statement of Theorem 4.1 and its proof can easily be extended to this situation. Define
\[
\mathbb{F}_G \stackrel{\mathrm{def}}{=} \nabla^2 f_G(\upsilon^*), \qquad b_G \stackrel{\mathrm{def}}{=} \|\mathbb{D} \mathbb{F}_G^{-1} M_G\|, \qquad M_G \stackrel{\mathrm{def}}{=} \nabla \operatorname{pen}_G(\upsilon^*).
\]

Theorem 5.1. Let $f_G(\upsilon) = f(\upsilon) + \operatorname{pen}_G(\upsilon)$ be strongly convex and follow $(T_3^*)$ at $\upsilon^*$ with some $\mathbb{D}^2$, $\tau_3$, and $r$ satisfying for $\kappa > 0$
\[
\mathbb{D}^2 \le \kappa^2 \mathbb{F}_G, \qquad r \ge 3 b_G / 2, \qquad \kappa^2 \tau_3 b_G < 4/9.
\]
Then $\|\mathbb{D}(\upsilon^*_G - \upsilon^*)\| \le 3 b_G / 2$. Moreover,
\[
\|\mathbb{D}^{-1} \mathbb{F}_G(\upsilon^*_G - \upsilon^* + \mathbb{F}_G^{-1} M_G)\| \le \frac{3 \tau_3}{4} b_G^2, \qquad
\bigl| 2 f_G(\upsilon^*_G) - 2 f_G(\upsilon^*) + \|\mathbb{F}_G^{-1/2} M_G\|^2 \bigr| \le \frac{\tau_3}{2} b_G^3.
\]
If, in addition, $f_G(\upsilon)$ satisfies $(T_4^*)$ and $\kappa^2 \tau_4 b_G^2 < \frac{1}{3}$, then with $T_G(u) = \frac{1}{6} \langle \nabla^3 f_G(\upsilon^*), u^{\otimes 3} \rangle$, $\nabla T_G(u) = \frac{1}{2} \langle \nabla^3 f_G(\upsilon^*), u^{\otimes 2} \rangle$, and
\[
\mu_G = -\mathbb{F}_G^{-1} \{ M_G + \nabla T_G(\mathbb{F}_G^{-1} M_G) \},
\]
it holds
\[
\|\mathbb{D}^{-1} \mathbb{F}_G(\upsilon^*_G - \upsilon^* - \mu_G)\| \le \frac{\tau_4 + 2 \kappa^2 \tau_3^2}{2} b_G^3,
\]
\[
\Bigl| f_G(\upsilon^*_G) - f_G(\upsilon^*) + \frac{1}{2} \|\mathbb{F}_G^{-1/2} M_G\|^2 + T_G(\mathbb{F}_G^{-1} M_G) \Bigr|
\le \frac{\tau_4 + 4 \kappa^2 \tau_3^2}{8} b_G^4 + \frac{\kappa^2 (\tau_4 + 2 \kappa^2 \tau_3^2)^2}{4} b_G^6.
\]

Proof. Consider $g_G(\upsilon) = f_G(\upsilon) - \langle \nabla \operatorname{pen}_G(\upsilon^*), \upsilon \rangle$. Then $g_G$ is strongly convex and $\nabla g_G(\upsilon^*) = 0$, yielding $\upsilon^* = \operatorname{argmin}_{\upsilon} g_G(\upsilon)$. Also, $f_G(\upsilon)$ is a linear perturbation of $g_G(\upsilon)$ with $A = M_G = \nabla \operatorname{pen}_G(\upsilon^*)$. Now all the statements of Theorem 4.1 and Theorem 4.2 apply to $\upsilon^*_G$ with obvious changes.

Conclusion

The paper systematically studies the effect of changing the objective function by a linear, quadratic, or smooth perturbation. The obtained results provide careful expansions for the solution and the value of the perturbed optimization problem. These expansions can be used as building blocks in different areas including statistics and machine learning, quasi-Newton optimization, and uncertainty quantification for inverse problems, among many others.
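As a closing numerical illustration, the first-order bias expansion $\upsilon^*_G - \upsilon^* \approx -\mathbb{F}_G^{-1} G^2 \upsilon^*$ of Theorem 4.1 can be observed on a genuinely non-quadratic strongly convex objective. The sketch below (numpy assumed) uses a softplus-type $f$ with a small ridge penalty $G^2 = \gamma I$; the objective, the Newton solver, and the tolerance are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(2)
p = 6
b = rng.uniform(0.2, 0.8, p)
alpha, gamma = 0.5, 0.05          # strong-convexity and ridge levels (assumptions)

sig = lambda v: 1.0 / (1.0 + np.exp(-v))

# f(v) = sum_i log(1 + e^{v_i}) - b^T v + (alpha/2) ||v||^2, penalized by (extra/2) ||v||^2
def newton(extra):
    v = np.zeros(p)
    for _ in range(60):           # Newton iterations on a smooth strongly convex problem
        grad = sig(v) - b + (alpha + extra) * v
        hess = np.diag(sig(v) * (1 - sig(v))) + (alpha + extra) * np.eye(p)
        v = v - np.linalg.solve(hess, grad)
    return v

ups_star = newton(0.0)            # unpenalized minimizer
ups_G = newton(gamma)             # ridge-penalized minimizer

# first-order bias predicted by Theorem 4.1 with G^2 = gamma * I
FG = np.diag(sig(ups_star) * (1 - sig(ups_star))) + (alpha + gamma) * np.eye(p)
bias_hat = -np.linalg.solve(FG, gamma * ups_star)

# the remainder is higher order in the bias, so it should be small relative to it
err = np.linalg.norm(ups_G - ups_star - bias_hat)
assert err < 0.1 * np.linalg.norm(bias_hat)
```

Since the remainder is of the order $b_G^2$, shrinking $\gamma$ should make the relative error `err / norm(bias_hat)` smaller still.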
|
https://arxiv.org/abs/2505.02002v1
|
Central limit theorems under non-stationarity via relative weak convergence

Nicolai Palm and Thomas Nagler

May 6, 2025 (arXiv:2505.02197v1 [math.ST] 4 May 2025)

Statistical inference for non-stationary data is hindered by the lack of classical central limit theorems (CLTs), not least because there is no fixed Gaussian limit to converge to. To address this, we introduce relative weak convergence, a mode of convergence that compares a statistic or process to a sequence of evolving processes. Relative weak convergence retains the main consequences of classical weak convergence while accommodating time-varying distributional characteristics. We develop concrete relative CLTs for random vectors and empirical processes, along with sequential, weighted, and bootstrap variants, paralleling the state-of-the-art in stationary settings. Our framework and results offer simple, plug-in replacements for classical CLTs whenever stationarity is untenable, as illustrated by applications in nonparametric trend estimation and hypothesis testing.

Contents

1. Introduction
2. Relative Weak Convergence and CLTs
 2.1. Background and notation
 2.2. Non-stationary CLTs via weak convergence along subsequences
 2.3. Relative weak convergence
 2.4. Relative central limit theorems
3. Relative CLTs for non-stationary time-series
 3.1. Multivariate relative CLT
 3.2. Asymptotic tightness under bracketing entropy conditions
 3.3. Weighted uniform relative CLT
 3.4. Sequential relative CLT
4. Bootstrap inference
 4.1. Bootstrap consistency and relative weak convergence
 4.2. Multiplier bootstrap
 4.3. Practical inference
5. Applications
 5.1. Uniform confidence bands for a nonparametric trend
 5.2. Testing for time series characteristics
A. Proofs for relative weak convergence and CLTs
 A.1. Relative weak convergence in $\ell^\infty(T)$
 A.2. Relative central limit theorems
 A.3. Existence and tightness of corresponding GPs
 A.4. Relative Lindeberg CLT
B. Tightness under bracketing entropy conditions
 B.1. Coupling
 B.2. Bernstein inequality
 B.3. Chaining
 B.4. Proof of Theorem 3.4
C. Proofs for relative CLTs under mixing conditions
 C.1. Proof of Theorem 3.2
 C.2. Proof of Theorem 3.6
 C.3. Asymptotic tightness of the multiplier empirical process
D. Proofs for the bootstrap
 D.1. Proof of Proposition 4.1 and a corollary
 D.2. Proof of Proposition 4.2
 D.3. Proof of Proposition 4.4
 D.4. Proof of Proposition 4.5
 D.5. Proof of Proposition 4.6
 D.6. Proof of Proposition 4.7
 D.7. A useful lemma
E. Proofs for the applications
 E.1. Proof of Corollary 5.1
 E.2. Proof of Corollary 5.2
 E.3. Proof of Corollary 5.5
F. Auxiliary results
 F.1. Bracketing numbers under non-stationarity

1. Introduction

In many application areas, data are naturally recorded over time. The processes generating these data often have evolving characteristics. Examples are environmental data, which are subject to evolving climatic conditions; economic and financial data, which are influenced by changing market fundamentals and the political environment; and network data in epidemiology and the social sciences, which are subject to changes in societal behavior. Given the significant role these data play in our society, there is a strong need for statistical methods that can handle such non-stationary data with formal guarantees.

Most inferential procedures in statistics rely on central limit theorems (CLTs) as a fundamental principle. This includes simple low-dimensional problems such as testing for equality of means, as well as modern high-dimensional and infinite-dimensional regression problems, where empirical process theory (including related uniform CLTs via weak convergence) has become the workhorse (Van der Vaart and Wellner, 2023, Dehling and Philipp, 2002, Kosorok, 2008). However, weak convergence and, by extension, CLTs are fundamentally tied to stationarity. In general non-stationary settings, there may be no fixed Gaussian limit to converge to because the data-generating process evolves over time.
Existing approaches. A common ad-hoc fix to this conundrum is to de-trend, difference, or otherwise transform the data to remove the most glaring effects of non-stationarity (e.g., Shumway et al., 2000). The pre-processed data are assumed to be stationary, and classical CLTs are applied. This approach is sometimes successful in practice, but it has its pitfalls. The pre-processing is unlikely to remove all stationarity issues, and potential uncertainty and data dependence in the pre-processing are not accounted for in the subsequent analysis. A likely reason for the popularity of these heuristics is the apparent scarcity of limit theorems for non-stationary data that could support the development of inferential methods.

A few approaches have been proposed to establish CLTs for non-stationary processes, but they have significant limitations. Merlevede and Peligrad (2020) establish a univariate (uniform-in-time) CLT by standardizing the sample average by its standard deviation, which ensures a fixed standard Gaussian limit. Extending this approach to multivariate settings is straightforward by additionally de-correlating the coordinates. However, this is doomed to fail in infinite-dimensional settings, where a decorrelated Gaussian limit process has unbounded sample paths. Another recent line of work (Karmakar and Wu, 2020, Mies and Steland, 2023, Bonnerjee et al., 2024) couples the sample average to an explicit sequence of Gaussian variables on an enriched probability space. Such results are stronger than necessary for most statistical applications and require substantially more effort to establish than classical CLTs. Furthermore, they apply only to multivariate sample averages, with extensions to more complex quantities such as empirical processes being an open problem. The most developed approach thus far relies on the local stationarity assumption, as summarized in Dahlhaus (2012) and recently extended to empirical processes by Phandoidaen and Richter (2022). Here, asymptotic results become possible by considering a hypothetical sequence of data-generating processes providing increasingly many, increasingly stationary observations in a given time window. This approach is tailored to procedures that localize estimates in a small window, and the asymptotics do not concern the actual process generating the data. It does not apply, for example, to a simple (non-localized) sample average over non-stationary random variables.
Contribution 1: Relative weak convergence. This paper takes a different approach. Fundamentally, a CLT is a comparison between the distribution of a sample quantity and a fixed Gaussian distribution. Since, in the case of non-stationary data, there is no fixed distribution to converge to, we may simply compare to an appropriate sequence of Gaussian distributions. The approximating Gaussians can vary with the sample size, reflecting the evolving structure of the underlying process. This approach is strong enough to substitute for classical (uniform) CLTs in statistical methods and simple enough that the required conditions are not substantially stronger than those required for stationary CLTs. To formalize the idea in general terms, we introduce the following concept.

Definition. We say $X_n, Y_n$ are relatively weakly convergent if
\[ |\mathbb{E}^*[f(X_n)] - \mathbb{E}^*[f(Y_n)]| \to 0 \]
for all $f$ bounded and continuous.

Note that if $Y_n = Y$ is constant, this is simply the definition of weak convergence. If $T$ is finite-dimensional and the distributions of $Y_n$ suitably continuous, relative weak convergence is equivalent to convergence of the difference of distribution functions. Our general notion of asymptotic normality looks as follows.

Definition. We say that a sequence $\{Y_n(t) : t \in T\}$ satisfies a relative central limit theorem (relative CLT) if there exists a relatively compact sequence of centered, tight, and Borel measurable Gaussian processes $N_{Y_n}$ with $\mathrm{Cov}[Y_n(s), Y_n(t)] = \mathrm{Cov}[N_{Y_n}(s), N_{Y_n}(t)]$ such that $Y_n$ and $N_{Y_n}$ are relatively weakly convergent.

Let us explain the definition. With its covariance structure copied from $Y_n$, the GP $N_{Y_n}$ is the natural Gaussian process to compare $Y_n$ to. Relatively compact sequences are the natural analog of tight limits in classical weak convergence and are defined formally in Section 2. Under relative compactness, relative weak convergence is equivalent to weak convergence on subsequences, which makes it straightforward to transfer many useful tools, such as the continuous mapping theorem or the functional delta method, to relative weak convergence.

Contribution 2: Relative CLTs for sample averages and empirical processes. To make the general theory useful, we need to provide concrete relative CLTs under verifiable conditions. It turns out that both finite- and infinite-dimensional relative CLTs hold under high-level assumptions similar to those in classical stationary CLTs. One of our main results looks as follows (Theorem 3.6).

Theorem. Under some moment, mixing, and bracketing entropy conditions, the weighted empirical process $\mathbb{G}_n \in \ell^\infty(S \times \mathcal{F})$ defined by
\[ \mathbb{G}_n(s, f) = \frac{1}{\sqrt{k_n}} \sum_{i=1}^{k_n} w_{n,i}(s) \big( f(X_{n,i}) - \mathbb{E}[f(X_{n,i})] \big) \]
satisfies a relative CLT.

The weights $w_{n,i}$ can be specified to cover a variety of different CLT flavors. For example, taking $w_{n,i}(s) = \mathbb{1}\{i \le \lfloor s k_n \rfloor\}$ yields a relative sequential CLT (Corollary 3.8); taking $w_{n,i}(s) = K((i/k_n - s)/b_n)$ for some kernel $K$ and bandwidth $b_n$, we obtain localized CLTs similar to those established under local stationarity by Phandoidaen and Richter (2022). To prove the theorem, we rely on a characterization of relative CLTs in terms of relative compactness and marginal relative CLTs of $\mathbb{G}_n$, similar to classical empirical process theory. Regarding the marginals, we provide a relative version of Lyapunov's multivariate CLT (Theorem 3.2). Relative compactness can be guaranteed using chaining and coupling arguments under a bracketing condition tailored to non-stationary $\beta$-mixing sequences (Theorem 3.4). Generally, such entropy conditions also ensure the existence of the approximating sequence of Gaussian processes.
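The defining comparison $|\mathbb{E}[f(X_n)] - \mathbb{E}[f(Y_n)]|$ can be probed numerically. The sketch below uses a toy triangular array of our own making (signed terms whose scales alternate over dyadic blocks, so that $\mathrm{Var}[S_n]$ oscillates and never converges; this construction is an illustrative assumption, not one from the paper) and checks that the scaled sample average nevertheless stays close to its matched Gaussian $N(0, \mathrm{Var}[S_n])$ under the bounded continuous test function $f = \cos$:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, reps=10_000):
    """Monte Carlo check of relative weak convergence for a toy triangular
    array (an assumption for illustration, not the paper's construction)."""
    i = np.arange(1, n + 1)
    # dyadic blocks alternate between sd 1 and sd 2, so Var[S_n] oscillates
    sd = np.where(np.floor(np.log2(i)) % 2 == 0, 1.0, 2.0)
    signs = rng.choice([-1.0, 1.0], size=(reps, n))
    S = (signs * sd).sum(axis=1) / np.sqrt(n)      # scaled sample average S_n
    var = float(np.mean(sd ** 2))                  # = Var[S_n], exact
    G = rng.normal(0.0, np.sqrt(var), size=reps)   # matched Gaussian N(0, Var[S_n])
    # bounded continuous test function f = cos from the definition
    gap = abs(np.cos(S).mean() - np.cos(G).mean())
    return var, gap

v1, g1 = simulate(512)
v2, g2 = simulate(1024)
print(round(v1, 2), round(v2, 2))   # variances do not settle: 2.0 vs 3.0
```

With the seed fixed, the two reported variances show that no single limiting Gaussian exists, while both Monte Carlo gaps stay at noise level, exactly the behavior the definition formalizes.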
Contribution 3: Bootstrap inference under non-stationarity. Many inferential methods rely on a tractable Gaussian approximation in a weak sense, but whether that Gaussian changes with the sample size plays a subordinate role. A key difficulty remains, however: the covariance structure of the approximating Gaussian process is not known in practice and is difficult to estimate. The bootstrap is a common solution to this problem. Considerable effort has been devoted to deriving consistent bootstrap schemes for special cases of non-stationary data. To the best of our knowledge, no bootstrap scheme currently exists for general non-stationary settings. Extending the work of Bücher and Kojadinovic (2019), bootstrap consistency can be characterized in terms of relative weak convergence. Using our relative CLTs, we prove consistency of generic multiplier bootstrap procedures for non-stationary time series solely under moment and mixing assumptions.

Summary and outline. In summary, our main contributions are as follows:

• We introduce a new mode of convergence, which is general enough to explain the asymptotics of non-stationary processes and convenient to use in statistical applications.
• We derive asymptotic normality of empirical processes for non-stationary time series under the same high-level assumptions as for classical stationary CLTs.
• We facilitate the practical use of our theory by establishing consistency of a multiplier bootstrap for non-stationary time series.

This provides a new perspective on the asymptotic theory of non-stationary processes and a comprehensive framework of concepts and tools for statistical inference with general non-stationary data. Our results are applicable to a wide range of statistical methods and are effectively usable as drop-in replacements for classical CLTs when data are not stationary.

The remainder of this paper is structured as follows. Relative weak convergence and CLTs are developed in Section 2. We provide tools for proving relative CLTs and foster their intuition by connecting relative with classical CLTs. Section 3 develops relative CLTs for non-stationary $\beta$-mixing sequences. Section 4 develops multiplier bootstrap consistency for non-stationary time series, with applications to nonparametric trend estimation and hypothesis testing discussed in Section 5.

2. Relative Weak Convergence and CLTs

This section introduces definitions, characterizations, and basic properties of relative weak convergence and relative CLTs.

2.1. Background and notation

Throughout this paper, we make use of Hoffmann-Jørgensen's theory of weak convergence. For this reason, we recall some definitions and central statements about weak convergence in general metric spaces and in the space of bounded functions. The material and references of this section are found in Chapter 1 of Van der Vaart and Wellner (2023).

Weak convergence in general metric spaces. In what follows we denote by $X_n : \Omega_n \to \mathbb{D}$ sequences of (not necessarily measurable) maps with $\Omega_n$ probability spaces and $\mathbb{D}$ some metric space. We will assume $\Omega_n = \Omega$ without loss of generality (see the discussion above Theorem 1.3.4). To avoid clutter, we omit the domain $\Omega$ and write $X_n \in \mathbb{D}$ whenever it is clear from context.

Definition 2.1 (Definition 1.3.3). The sequence $X_n$ converges weakly to some Borel measurable map $X \in \mathbb{D}$ if
\[ \mathbb{E}^*[f(X_n)] \to \mathbb{E}[f(X)] \]
for all bounded and continuous functions $f : \mathbb{D} \to \mathbb{R}$, where $\mathbb{E}^*[f(X_n)]$ denotes the outer expectation of $f(X_n)$ (Chapter 1.2). We write $X_n \to_d X$.

Weak convergence of (measurable) random vectors agrees with the usual notion in terms of distribution functions.

Definition 2.2.
The net $Y_\alpha$ is asymptotically measurable if
\[ \mathbb{E}^*[f(Y_\alpha)] - \mathbb{E}_*[f(Y_\alpha)] \to 0 \]
for all $f : \mathbb{D} \to \mathbb{R}$ bounded and continuous, where $\mathbb{E}^*$ and $\mathbb{E}_*$ denote outer and inner expectation, respectively. The net is asymptotically tight if for all $\varepsilon > 0$ there exists a compact set $K \subseteq \mathbb{D}$ such that
\[ \liminf_\alpha \mathbb{P}_*(Y_\alpha \in K^\delta) \ge 1 - \varepsilon \]
for all $\delta > 0$, with $K^\delta = \{y \in \mathbb{D} : \exists x \in K \text{ s.t. } d(x, y) < \delta\}$, where $\mathbb{P}_*$ denotes the inner probability.

Weak convergence to some tight Borel measure implies asymptotic measurability and asymptotic tightness. Conversely, Prohorov's theorem asserts weak convergence along subsequences whenever the sequence is asymptotically tight and measurable.

Weak convergence of stochastic processes. A stochastic process indexed by a set $T$ is a collection $\{Y(t) : t \in T\}$ of random variables $Y(t) : \Omega \to \mathbb{R}$ defined on the same probability space. A Gaussian process (GP) is a stochastic process $\{N(t) : t \in T\}$ such that $(N(t_1), \ldots, N(t_k))$ is multivariate Gaussian for all $t_1, \ldots, t_k \in T$. Weak convergence of empirical processes is typically considered in (a subspace of) the space of bounded functions
\[ \ell^\infty(T) = \big\{ f : T \to \mathbb{R} : \|f\|_T = \sup_{t \in T} |f(t)| < \infty \big\} \]
equipped with the uniform metric $d(f, g) = \sup_{t \in T} |f(t) - g(t)| = \|f - g\|_T$. If the sample paths $t \mapsto Y(t)(\omega)$ of a stochastic process $Y$ are bounded for all $\omega \in \Omega$, it induces a map $Y : \Omega \to \ell^\infty(T)$, $\omega \mapsto (t \mapsto Y(t)(\omega))$. To abbreviate notation, we say $Y \in \ell^\infty(T)$ is a stochastic process (with bounded sample paths), indicating that the $Y(t) : \Omega \to \mathbb{R}$ are random variables with common domain.

A sequence $Y_n \in \ell^\infty(T)$ converges weakly to some tight and Borel measurable $Y \in \ell^\infty(T)$ if and only if $Y_n$ is asymptotically tight, asymptotically measurable, and all marginals converge weakly, i.e., $(Y_n(t_1), \ldots, Y_n(t_k)) \to_d (Y(t_1), \ldots, Y(t_k))$ in $\mathbb{R}^k$ for all $t_1, \ldots, t_k \in T$.

2.2. Non-stationary CLTs via weak convergence along subsequences

Let us first explain some difficulties with non-stationary CLTs. Consider a sequence of stochastic processes $X_i \in \ell^\infty(T)$. CLTs establish weak convergence of the empirical process $\mathbb{G}_n \in \ell^\infty(T)$ defined by
\[ \mathbb{G}_n(t) = \frac{1}{\sqrt{n}} \sum_{i=1}^n \big( X_i(t) - \mathbb{E}[X_i(t)] \big) \]
to some tight and measurable Gaussian process $\mathbb{G} \in \ell^\infty(T)$. The convergence is characterized in terms of asymptotic tightness and marginal CLTs. The former is independent of stationarity (e.g., Theorems 2.11.1 and 2.11.9 of Van der Vaart and Wellner (2023)). The latter involves weak convergence of the marginals; in particular, $\mathbb{G}_n(t) \to_d \mathbb{G}(t)$ for all $t \in T$. This also implies convergence of the variances under mild conditions. Here a certain degree of stationarity of the samples is required. For non-stationary observations, the convergence $\mathrm{Var}[\mathbb{G}_n(t)] \to \mathrm{Var}[\mathbb{G}(t)]$ fails in general.

Example 2.3. Assume $\mathrm{Var}[X_i(t)] \in \{\sigma_1^2, \sigma_2^2\}$ and the $X_i(t)$ are independent. Write $N_{k,n} = \#\{i \le n : \mathrm{Var}[X_i(t)] = \sigma_k^2\}$. Then,
\[ \mathrm{Var}[\mathbb{G}_n(t)] = n^{-1} \sum_{i=1}^n \mathrm{Var}[X_i(t)] = \sigma_1^2 \frac{N_{1,n}}{n} + \sigma_2^2 \frac{N_{2,n}}{n} = \sigma_2^2 + (\sigma_1^2 - \sigma_2^2) \frac{N_{1,n}}{n} \]
converges if and only if $\frac{N_{1,n}}{n}$ converges.

For this reason, some non-stationary CLTs rely on standardization such that the covariance matrix equals the identity. Such standardization can only work in finite dimensions, however. The intuitive reason is that a decorrelated Gaussian limit process $\mathbb{G}$ cannot have bounded sample paths (nor be tight) unless $T$ is finite.

Lemma 2.4. If a Gaussian process $\mathbb{G} \in \ell^\infty(T)$ satisfies $\mathrm{Cov}[\mathbb{G}(s), \mathbb{G}(t)] = 0$ for $s \ne t$ and $\mathrm{Var}[\mathbb{G}(s)] = 1$, then $T$ is finite.

Proof. Appendix F.

In summary, the problem with uniform CLTs lies in the fact that marginal CLTs generally require a degree of stationarity, which cannot be bypassed by (naive) standardization. The basic result which we want to put into a larger context is a version of Lyapunov's CLT. Write
\[ S_n = \frac{1}{\sqrt{k_n}} \sum_{i=1}^{k_n} \big( X_{n,i} - \mathbb{E}[X_{n,i}] \big) \]
for $X_{n,1},$
$\ldots, X_{n,k_n} \in \mathbb{R}$ a triangular array of independent random variables. Assume the general moment condition $\sup_{n,i} \mathbb{E}[|X_{n,i}|^{2+\delta}] < \infty$ for some $\delta > 0$. Lyapunov's CLT asserts asymptotic normality whenever variances converge.

Proposition 2.5. Suppose $\sup_{n,i} \mathbb{E}[|X_{n,i}|^{2+\delta}] < \infty$ for some $\delta > 0$. Then $\mathrm{Var}[S_n] \to \sigma_\infty^2$ if and only if $S_n \to_d N(0, \sigma_\infty^2)$.

Proof. The sufficiency is a special case of Proposition 2.27 of van der Vaart (2000), and the necessity is Example 1.11.4 of Van der Vaart and Wellner (2023).

When $\mathrm{Var}[S_n]$ does not converge, the moment condition still implies that $\mathrm{Var}[S_n]$ is uniformly bounded. Thus, every subsequence of $\mathrm{Var}[S_n]$ contains a converging subsequence $\mathrm{Var}[S_{n_k}]$, along which Lyapunov's CLT implies asymptotic normality of $S_{n_k}$. The weak limit of $S_{n_k}$, however, depends on the limit of $\mathrm{Var}[S_{n_k}]$. In other words, along subsequences $S_n$ converges weakly to some Gaussian, yet not globally. To obtain a global Gaussian to compare $S_n$ to, the complementary observation is the following fact:
\[ \mathrm{Var}[S_{n_k}] \to \sigma_\infty^2 \quad \text{if and only if} \quad N(0, \mathrm{Var}[S_{n_k}]) \to_d N(0, \sigma_\infty^2). \]
Thus, along subsequences, $S_n$ and $N(0, \mathrm{Var}[S_n])$ have the same weak limits. Because any subsequence of $S_n$ contains a weakly convergent subsequence, i.e., $S_n$ is relatively compact, some thought reveals that their difference of distribution functions converges to zero.

Proposition 2.6 (Relative Lyapunov CLT). Denote by $F_n$ resp. $\Phi_n$ the distribution function of $S_n$ resp. $N(0, \mathrm{Var}[S_n])$. Then, $|F_n(t) - \Phi_n(t)| \to 0$ for all $t \in [0,1]$.
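Proposition 2.6 lends itself to a quick simulation: even when $\mathrm{Var}[S_n]$ drifts with $n$, the empirical distribution function of $S_n$ tracks the moving Gaussian cdf $\Phi_n$. The triangular array below (centered exponentials with a smoothly drifting scale) is a hypothetical choice for illustration only:

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def cdf_gap(n, reps=10_000):
    """Monte Carlo estimate of max_t |F_n(t) - Phi_n(t)| over a grid, where
    F_n is the empirical cdf of S_n and Phi_n the cdf of N(0, Var[S_n])."""
    # hypothetical non-stationary array: centered exponentials, drifting scale
    scale = 1.5 + np.sin(2 * np.pi * np.arange(1, n + 1) / n)
    X = rng.exponential(scale, size=(reps, n)) - scale   # mean 0, Var = scale^2
    S = X.sum(axis=1) / np.sqrt(n)
    sigma = math.sqrt(float(np.mean(scale ** 2)))        # sd of N(0, Var[S_n])
    t = np.linspace(-4 * sigma, 4 * sigma, 81)
    F_n = (S[None, :] <= t[:, None]).mean(axis=1)        # empirical cdf of S_n
    Phi_n = np.array([0.5 * (1 + math.erf(x / (sigma * math.sqrt(2)))) for x in t])
    return float(np.abs(F_n - Phi_n).max())

# the gap stays small for large n even though Var[S_n] itself keeps drifting
print(round(cdf_gap(10), 3), round(cdf_gap(1000), 3))
```

No single limiting Gaussian is ever fixed here; the comparison target $\Phi_n$ moves with $n$, which is exactly the content of the proposition.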
Proof. This is a special case of Theorem A.7.

In other words, even in non-stationary settings, the scaled sample average remains approximately Gaussian, but with the notion of a limiting distribution replaced by a sequence of Gaussians. This mode of convergence and the resulting type of asymptotic normality extends naturally to stochastic processes.

2.3. Relative weak convergence

All proofs of the remaining section are found in Appendix A. Let $X_n, Y_n$ be sequences of arbitrary maps from probability spaces $\Omega_n, \Omega'_n$ into a metric space $\mathbb{D}$. Throughout the rest of this paper it is tacitly understood that all such maps have a common probability space as their domain.

Definition 2.7 (Relative weak convergence). We say $X_n$ and $Y_n$ are relatively weakly convergent if
\[ |\mathbb{E}^*[f(X_n)] - \mathbb{E}^*[f(Y_n)]| \to 0 \]
for all $f : \mathbb{D} \to \mathbb{R}$ bounded and continuous. We write $X_n \leftrightarrow_d Y_n$.

Remark 2.8. For $X$ a Borel law, $X_n \leftrightarrow_d X$ if and only if $X_n \to_d X$.

Contrary to weak convergence, relative weak convergence implies neither measurability nor tightness. Any sequence is relatively weakly convergent to itself. For the purpose of this paper, we restrict to relative weak convergence to relatively compact sequences. Those turn out to be the natural analog of tight, measurable limits in classical weak convergence.

Definition 2.9 (Relative asymptotic tightness and compactness). We call $X_n$ relatively asymptotically tight if every subsequence contains a further subsequence which is asymptotically tight. We call $X_n$ relatively compact if every subsequence contains a further subsequence which converges weakly to a tight Borel law.

From the definition we see that if $X_n \leftrightarrow_d Y_n$ and $Y_n \to_d Y$, then $X_n \to_d Y$. Thus, if $Y_n$ is relatively compact, $X_n \leftrightarrow_d Y_n$ essentially states that $X_n$ and $Y_n$ have the same weak limits along subsequences.

Proposition 2.10. Assume that $X_n$ is relatively compact. The following are equivalent:
(i) $X_n \leftrightarrow_d Y_n$.
(ii) For all subsequences $n_k$ such that $X_{n_k} \to_d X$ with $X$ a tight Borel law, it follows that $Y_{n_k} \to_d X$.
(iii) For all subsequences $n_k$ there exists a further subsequence $n_{k_i}$ such that both $X_{n_{k_i}}$ and $Y_{n_{k_i}}$ converge weakly to the same tight Borel law.
In such case, $Y_n$ is relatively compact as well.

The characterization of relative weak convergence via weak convergence on subsequences is very convenient. In particular, many useful properties of weak convergence can be transferred to relative weak convergence. The following results are particularly relevant for statistical applications and indicate that relative weak convergence allows for similar conclusions as weak convergence.

Proposition 2.11 (Relative continuous mapping). If $X_n \leftrightarrow_d Y_n$ and $g : \mathbb{D} \to \mathbb{E}$ is continuous, then $g(X_n) \leftrightarrow_d g(Y_n)$.

Proposition 2.12 (Extended relative continuous mapping). Let $g_n : \mathbb{D} \to \mathbb{E}$ be a sequence of functions with $\mathbb{E}$ another metric space. Assume that for all subsequences of $n$ there exists another subsequence $n_k$ and some $g : \mathbb{D} \to \mathbb{E}$ such that $g_{n_k}(x_k) \to g(x)$ for all $x_k \to x$ in $\mathbb{D}$. Let $Y_n$ be relatively compact. Then,
(i) $g_n(Y_n)$ is relatively compact.
(ii) if $X_n \leftrightarrow_d Y_n$, then $g_n(X_n) \leftrightarrow_d g_n(Y_n)$.

Proposition 2.13 (Relative delta-method). Let $\mathbb{D}, \mathbb{E}$ be metrizable topological vector spaces and $\theta_n \in \mathbb{D}$ be relatively compact. Let $\phi : \mathbb{D} \to \mathbb{E}$ be continuously Hadamard-differentiable (see Definition A.2) in an open subset $\mathbb{D}_0 \subset \mathbb{D}$ with $\theta_n \in \mathbb{D}_0$ for all $n$. Assume
\[ r_n(X_n - \theta_n) \leftrightarrow_d Y_n \]
for some sequence of constants $r_n \to \infty$ with $Y_n$ relatively compact. Then,
\[ r_n \big( \phi(X_n) - \phi(\theta_n) \big) \leftrightarrow_d \phi'_{\theta_n}(Y_n). \]

Let $S^\delta = \{x \in \mathbb{D} : d(x, S) < \delta\}$ denote the $\delta$-enlargement and $\partial S$ the boundary (closure minus interior) of a set $S$.

Proposition 2.14. Let $X_n \leftrightarrow_d Y_n$ with $Y_n$ relatively compact, and let $S_n$ be Borel sets such that every subsequence of $S_n$ contains a further subsequence $S_{n_k}$ with $\lim_{k \to \infty} S_{n_k} = S$ for some $S$ with
\[ \liminf_{\delta \to 0} \limsup_{k \to \infty} \mathbb{P}^*\big(Y_{n_k} \in (\partial S)^\delta\big) = 0. \tag{1} \]
Then,
\[ \lim_{n \to \infty} |\mathbb{P}^*(X_n \in S_n) - \mathbb{P}^*(Y_n \in S_n)| = 0. \]

Condition (1) ensures that the $S_n$ are continuity sets of $Y_n$ asymptotically in a strong form. This prevents the laws from accumulating too much mass near the boundary of $S_n$. In statistical applications, $Y_n$ is typically (the supremum of) a non-degenerate Gaussian process, for which appropriate anti-concentration properties can be guaranteed for any set $S$ (e.g., Giessing, 2023).

A fundamental result in empirical process theory is the characterization of weak convergence in terms of asymptotic tightness and marginal weak convergence (Van der Vaart and Wellner, 2023, Theorem 1.5.7). This characterization underpins uniform CLTs. Because relative weak convergence to some relatively compact sequence is equivalent to weak convergence along subsequences, a similar result holds for relative weak convergence.

Theorem 2.15. Let $X_n, Y_n \in \ell^\infty(T)$ with $Y_n$ relatively compact. The following are equivalent:
(i) $X_n \leftrightarrow_d Y_n$.
(ii) $X_n$ is relatively asymptotically tight and all marginals satisfy
\[ \big( X_n(t_1), \ldots, X_n(t_d) \big) \leftrightarrow_d \big( Y_n(t_1), \ldots, Y_n(t_d) \big) \]
for all finite subsets $\{t_1, \ldots, t_d\} \subset T$.

2.4. Relative central limit theorems

Fix a sequence of stochastic processes $Y_n \in \ell^\infty(T)$ with $\sup_{t \in T} \mathbb{E}[Y_n(t)^2] < \infty$. CLTs assert weak convergence $Y_n \to_d N$ with $N$ some tight and measurable GP. Naturally, we define (relative) asymptotic normality as $Y_n \leftrightarrow_d N_n$ with $N_n$ some sequence of GPs. Contrary to CLTs with a fixed limit, there is no unique sequence of 'limiting' GPs, however. It shall be convenient to specify the covariance structure of $N_n$ to mirror that of $Y_n$. This allows for a tractable Gaussian approximation.

Definition 2.16 (Corresponding GP). Let $Y$ be a map with values in $\ell^\infty(T)$. A Gaussian process (GP) corresponding to $Y$ is a map $N_Y$ with values in $\ell^\infty(T)$ such that $\{N_Y(t) : t \in T\}$ is a centered GP with covariance function given by $(s, t) \mapsto \mathrm{Cov}[Y(s), Y(t)]$.
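On a finite grid $t_1 < \cdots < t_d$, the corresponding GP of Definition 2.16 is simply a centered multivariate Gaussian whose covariance matrix copies that of $Y_n$. A minimal sketch, for a toy partial-sum process with time-varying variances (our assumption for illustration, not a construction from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

n, d, reps = 200, 5, 4000
grid = np.linspace(0.2, 1.0, d)                            # points t_1 < ... < t_d
sd = 1.0 + 0.5 * np.cos(np.pi * np.arange(1, n + 1) / n)   # time-varying sds

# exact covariance of the toy process Y_n(t) = n^{-1/2} sum_{i <= t*n} X_{n,i}:
# Cov[Y_n(s), Y_n(t)] = (1/n) * sum_{i <= n*min(s,t)} sd_i^2
cum = np.concatenate([[0.0], np.cumsum(sd ** 2)]) / n
Sigma = np.array([[cum[int(n * min(s, t))] for t in grid] for s in grid])

# the corresponding GP N_{Y_n} on the grid: draw N(0, Sigma) via Cholesky
L = np.linalg.cholesky(Sigma)
paths = rng.standard_normal((reps, d)) @ L.T

# sanity check: empirical covariance of the GP draws reproduces Sigma
emp = np.cov(paths, rowvar=False)
print(round(float(np.abs(emp - Sigma).max()), 2))
```

As $n$ varies, `Sigma` varies with it; the sequence of such Gaussians, not any fixed one, is the comparison object in a relative CLT.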
Similar to classical CLTs and in view of Proposition 2.10, we further restrict to relatively compact sequences of tight and Borel measurable GPs and define a relative CLT as follows.

Definition 2.17 (Relative CLT). We say that the sequence $Y_n$ satisfies a relative central limit theorem if there exists a relatively compact sequence of tight and Borel measurable GPs $N_{Y_n}$ corresponding to $Y_n$ with $Y_n \leftrightarrow_d N_{Y_n}$.

Theorem 2.15 characterizes relative CLTs in terms of marginal relative CLTs.

Corollary 2.18. The sequence $Y_n$ satisfies a relative CLT if and only if
(i) there exist tight and Borel measurable GPs $N_{Y_n}$ corresponding to $Y_n$,
(ii) $Y_n$ and $N_{Y_n}$ are relatively asymptotically tight, and
(iii) all marginals $(Y_n(t_1), \ldots, Y_n(t_k)) \in \mathbb{R}^k$ satisfy a relative CLT.

The restriction to relatively compact sequences of tight and Borel measurable GPs enables inference just as in classical weak convergence theory (see the previous section and Section 5). Such sequences exist under mild assumptions. In finite dimensions, corresponding tight Gaussians always exist. Any such sequence converges iff its covariances converge. Hence, a sequence of Gaussians is relatively compact iff covariances converge along subsequences. Equivalently, the sequences of variances are uniformly bounded (Corollary A.6). As a result, relative CLTs can be characterized as follows.

Proposition 2.19. Let $Y_n$ be a sequence of $\mathbb{R}^d$-valued random variables. Denote by $\Sigma_n$ the covariance matrix of $Y_n$. Then, the following are equivalent:
(i) $Y_n$ satisfies a relative CLT.
(ii) for all subsequences $n_k$ such that $\Sigma_{n_k} \to \Sigma$ converges, it holds that $Y_{n_k} \to_d N(0, \Sigma)$, and $\sup_{n \in \mathbb{N}, i \le d} \mathrm{Var}[Y_n^{(i)}] < \infty$, where $Y_n^{(i)}$ denotes the $i$-th component of $Y_n$.
(iii) all subsequences $n_k$ contain a subsequence $n_{k_i}$ such that $\Sigma_{n_{k_i}} \to \Sigma$ and $Y_{n_{k_i}} \to_d N(0, \Sigma)$.

In other words, multivariate relative CLTs are essentially equivalent to CLTs along subsequences where covariances converge. From this it is straightforward to generalize classical multivariate CLTs to relative multivariate CLTs (e.g., Lindeberg's CLT, Theorem A.7).

For infinite-dimensional index sets $T$, Kolmogorov's extension theorem implies existence of GPs $\{N_n(t) : t \in T\}$ with $\mathrm{Cov}[N_n(s), N_n(t)] = \mathrm{Cov}[Y_n(s), Y_n(t)]$, but possibly unbounded sample paths. Tightness can be guaranteed under entropy conditions, as shown in the following. Define the covering number $N(\epsilon, T, d)$ with respect to some semi-metric space $(T, d)$ as the minimal number of $\epsilon$-balls needed to cover $T$. Denote by
\[ \rho_n(s, t) = \mathrm{Var}[Y_n(s) - Y_n(t)]^{1/2}, \quad s, t \in T, \]
the standard deviation semi-metric on $T$ induced by $Y_n$.

Proposition 2.20. If for all $n$ it holds that
\[ \int_0^\infty \sqrt{\ln N(\epsilon, T, \rho_n)} \, d\epsilon < \infty, \]
there exists a sequence of tight and Borel measurable GPs $N_{Y_n}$ corresponding to $Y_n$.

Proposition 2.21. Let $N_{Y_n}$ be a sequence of Borel measurable GPs corresponding to $Y_n$. Assume that there exists a semi-metric $d$ on $T$ such that
(i) $(T, d)$ is totally bounded.
(ii) $\lim_{n \to \infty} \int_0^{\delta_n} \sqrt{\ln N(\epsilon, T, \rho_n)} \, d\epsilon = 0$ for all $\delta_n \downarrow 0$.
(iii) $\lim_{n \to \infty} \sup_{d(s,t) < \delta_n} \rho_n(s, t) = 0$ for every $\delta_n \downarrow 0$.
If further $\sup_n \mathrm{Var}[Y_n(t)] < \infty$ for all $t \in T$, the sequence $N_{Y_n}$ is asymptotically tight.

Condition (iii) requires the existence of a global semi-metric with respect to which the sequence of standard deviation semi-metrics is asymptotically uniformly continuous. In many practical settings, one can simply take $d(t, s) = \sup_n \rho_n(t, s)$.

3. Relative CLTs for non-stationary time-series

Now that we know what relative CLTs are, this section provides specific instances of such results for non-stationary time-series. The assumptions of these CLTs are relatively weak, making them applicable in a wide range of statistical problems.
Of course, no CLT can be expected to hold under arbitrary dependence structures. Several measures exist to constrain the dependence between observations, such as $\alpha$-mixing or $\phi$-mixing coefficients (Bradley, 2005), or the functional dependence measure of Wu (2005). In the following, we will focus on $\beta$-mixing, because it is widely applicable and allows for sharp coupling inequalities.

Definition 3.1. Let $(\Omega, \mathcal{A}, P)$ be a probability space and $\mathcal{A}_1, \mathcal{A}_2 \subset \mathcal{A}$ sub-$\sigma$-algebras. Define the $\beta$-mixing coefficient
\[ \beta(\mathcal{A}_1, \mathcal{A}_2) = \frac{1}{2} \sup \sum_{(i,j) \in I \times J} |P(A_i \cap B_j) - P(A_i) P(B_j)|, \]
where the supremum is taken over all finite partitions $\cup_{i \in I} A_i = \cup_{j \in J} B_j = \Omega$ with $A_i \in \mathcal{A}_1$, $B_j \in \mathcal{A}_2$. For $p, n \in \mathbb{N}$ and $p < k_n$ define
\[ \beta_n(p) = \sup_{k \le k_n - p} \beta\big( \sigma(X_{n,1}, \ldots, X_{n,k}), \sigma(X_{n,k+p}, \ldots, X_{n,k_n}) \big). \]

The $\beta$-mixing coefficients quantify how independent events become as one moves further into the sequence. If the $\beta$-mixing coefficients become zero as $n$ and $p$ approach infinity, the events become close to independent. Note that the mixing coefficients themselves are indexed by $n$, because they may change over time in the non-stationary setting.

3.1. Multivariate relative CLT

We start with a multivariate relative CLT for triangular arrays of random variables. Proposition 2.19 extends classical to relative multivariate CLTs. The following result builds on Lyapunov's CLT in combination with a coupling argument. Let $X_{n,1}, \ldots, X_{n,k_n}$ be a triangular array of $\mathbb{R}^d$-valued random variables.

Theorem 3.2 (Multivariate relative CLT). Let $\alpha \in [0, 1/2)$ and $1 + (1 - 2\alpha)^{-1} < \gamma$. Assume
(i) $k_n^{-1} \sum_{i,j=1}^{k_n} |\mathrm{Cov}[X_{n,i}^{(l_1)}, X_{n,j}^{(l_2)}]| \le K$ for all $n$ and $l_1, l_2 = 1, \ldots, d$.
(ii) $\sup_{n,i} \mathbb{E}\big[ |X_{n,i}^{(l)}|^\gamma \big] < \infty$ for all $l = 1, \ldots, d$.
(iii) $k_n \beta_n(k_n^\alpha)^{\frac{\gamma - 2}{\gamma}} \to 0$.
Then, the scaled sample average $k_n^{-1/2} \sum_{i=1}^{k_n} (X_{n,i} - \mathbb{E}[X_{n,i}])$ satisfies a relative CLT.

Proof. Appendix C.1.

The summability condition (i) on the covariances can be seen as a minimal requirement for any general CLT under mixing conditions (Bradley, 1999). Conditions (i) and (iii) restrict the dependence, where (i) essentially bounds the variances of the scaled sample average. Condition (ii) excludes heavy tails of $X_{n,i}$. Note that (ii) and (iii) exhibit a trade-off: universal bounds on higher moments weaken the conditions on the mixing coefficients' decay rate.

3.2. Asymptotic tightness under bracketing entropy conditions

In order to extend the multivariate CLT to an empirical process CLT, we need a way to ensure relative compactness. Consider the following general setup:
• $X_{n,1}, \ldots, X_{n,k_n}$ is a triangular array of random variables in some Polish space $\mathcal{X}$.
• $\mathcal{F}_n = \{f_{n,t} : t \in T\}$ is a set of measurable functions from $\mathcal{X}$ to $\mathbb{R}$ for all $n$.
• $\cup_{n \in \mathbb{N}} \mathcal{F}_n$ admits a finite envelope function $F : \mathcal{X} \to \mathbb{R}$, i.e., $\sup_{n \in \mathbb{N}, f \in \mathcal{F}_n} |f(x)| \le F(x)$ for all $x \in \mathcal{X}$.
Define the empirical process $\mathbb{G}_n$ on $T$ by
\[ \mathbb{G}_n(t) = \frac{1}{\sqrt{k_n}} \sum_{i=1}^{k_n} \big( f_{n,t}(X_{n,i}) - \mathbb{E}[f_{n,t}(X_{n,i})] \big). \]
Because we have an envelope, $\mathbb{G}_n$ has bounded sample paths, and we obtain a map $\mathbb{G}_n$ with values in $\ell^\infty(T)$. This empirical process can be seen as a triangular version of the (classical) empirical process indexed by a single set of functions. To ensure asymptotic tightness, we rely on bracketing entropy conditions, with norms tailored to the non-stationary time-series setting. Given a semi-norm $\|\cdot\|$ on (an extension of) $\mathcal{F}$, recall that the bracketing numbers $N_{[\,]}(\epsilon, \mathcal{F}, \|\cdot\|) \in \mathbb{N}$ are the minimal number $N_\epsilon$ of brackets
\[ [l_i, u_i] = \{f \in \mathcal{F} : l_i \le f \le u_i\} \]
such that $\mathcal{F} = \cup_{i=1}^{N_\epsilon} [l_i, u_i]$ with $l_i, u_i : \mathcal{X} \to \mathbb{R}$ measurable and $\|l_i - u_i\| < \epsilon$.
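For two discrete random variables, the supremum in Definition 3.1 over finite partitions is attained at the atomic partition, so $\beta(\sigma(X), \sigma(Y))$ reduces to half the total variation distance between the joint law and the product of the marginals. A minimal sketch of that special case (our simplification for intuition, not part of the paper's development):

```python
import numpy as np

def beta_coefficient(joint):
    """beta(sigma(X), sigma(Y)) for discrete X, Y with joint[i, j] = P(X=i, Y=j):
    half the summed absolute deviation of the joint from the product of marginals."""
    px = joint.sum(axis=1, keepdims=True)   # marginal of X
    py = joint.sum(axis=0, keepdims=True)   # marginal of Y
    return 0.5 * np.abs(joint - px * py).sum()

# independent coordinates -> beta = 0
indep = np.outer([0.3, 0.7], [0.5, 0.5])
# perfectly coupled coordinates -> beta = 0.5
coupled = np.array([[0.5, 0.0], [0.0, 0.5]])

print(round(beta_coefficient(indep), 10), round(beta_coefficient(coupled), 10))  # 0.0 0.5
```

In the time-series setting of Definition 3.1, `joint` would be replaced by the joint law of a past block $(X_{n,1}, \ldots, X_{n,k})$ and a future block $(X_{n,k+p}, \ldots, X_{n,k_n})$, with $\beta_n(p)$ the supremum over split points $k$.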
For stationary observations $X_{n,i} \sim P$, bracketing entropy is usually measured with respect to an $L^p(P)$-norm, $p \ge 2$. In the case of non-stationary observations, the brackets need to be measured with respect to all underlying laws of the samples. It turns out that a scaled average of $L^p(P^{X_{n,i}})$-norms is sufficient.

Definition 3.3. Let $\gamma \ge 1$. Define the semi-norms $\|\cdot\|_{\gamma,n}$ resp. $\|\cdot\|_{\gamma,\infty}$ on $\mathcal{F}$ by
\[
\|h\|_{\gamma,n} = \left(\frac{1}{k_n} \sum_{i=1}^{k_n} \mathbb{E}[|h(X_{n,i})|^{\gamma}]\right)^{1/\gamma}, \qquad \|h\|_{\gamma,\infty} = \sup_{n \in \mathbb{N}} \|h\|_{\gamma,n}.
\]

Note that $\|\cdot\|_{\gamma,n}$ is a composition of semi-norms, hence a semi-norm itself (Lemma B.3). The following result shows that $\|\cdot\|_{\gamma,n}$-bracketing entropy conditions imply asymptotic tightness of $\mathbb{G}_n$ under mixing assumptions (Appendix B). Because $\|\cdot\|_{\gamma,n}$-bracketing entropy bounds covering entropy (Remark B.8), the existence of an approximating sequence of GPs is also guaranteed.

Theorem 3.4. Assume that for some $\gamma > 2$:

(i) $\|F\|_{\gamma,\infty} < \infty$.
(ii) $\sup_{n \in \mathbb{N}} \max_{m \le k_n} m^{\rho} \beta_n(m) < \infty$ for some $\rho > \gamma/(\gamma-2)$.
(iii) $\int_0^{\delta_n} \sqrt{\ln N_{[\,]}(\epsilon, \mathcal{F}_n, \|\cdot\|_{\gamma,n})}\, d\epsilon \to 0$ for all $\delta_n \downarrow 0$, and the integrals are finite for all $n$.

Denote by $d_n(s,t) = \|f_{n,s} - f_{n,t}\|_{\gamma,n}$ for $s, t \in T$. Assume that there exists a semi-metric $d$ on $T$ such that
\[
\lim_{n \to \infty} \sup_{d(s,t) < \delta_n} d_n(s,t) = 0
\]
for all $\delta_n \downarrow 0$ and $(T, d)$ is totally bounded. Then,

• $\mathbb{G}_n$ is asymptotically tight;
• there exists an asymptotically tight sequence of tight Borel measurable GPs $N_n$ corresponding to $\mathbb{G}_n$.

The proof, given in Appendix B, relies on a chaining
argument with adaptive coupling and truncation. An important intermediate step is a new maximal inequality (see Theorem B.6) that may be of independent interest:

Theorem 3.5. Let $\mathcal{F}$ be a class of functions $f : \mathcal{X} \to \mathbb{R}$ with envelope $F$, and
\[
\|f\|_{\gamma,n} \le \delta, \qquad \frac{1}{n} \sum_{i,j=1}^{n} |\mathrm{Cov}[h(X_i), h(X_j)]| \le K_1 \|h\|_{\gamma,n}^2
\]
for some $\gamma > 2$, all $f \in \mathcal{F}$ and $h : \mathcal{X} \to \mathbb{R}$ bounded and measurable. Suppose that $\sup_n \beta_n(m) \le K_2 m^{-\rho}$ for some $\rho \ge \gamma/(\gamma-2)$. Then, for any $n \ge 5$ and $\delta \in (0,1)$,
\[
\mathbb{E}\|\mathbb{G}_n\|_{\mathcal{F}} \lesssim \int_0^{\delta} \sqrt{\ln_+ N_{[\,]}(\epsilon)}\, d\epsilon + \|F\|_{\gamma,n}\, [\ln N_{[\,]}(\delta)]^{[1 - 1/(\rho+1)](1 - 1/\gamma)}\, n^{-1/2 + [1 - 1/(\rho+1)](1 - 1/\gamma)} + \sqrt{n}\, N_{[\,]}^{-1}(e_n),
\]
where $N_{[\,]}(\epsilon) = N_{[\,]}(\epsilon, \mathcal{F}, \|\cdot\|_{\gamma,n})$ and $\ln_+ x = \max(\ln x, 0)$.

In many applications, $\|\cdot\|_{\gamma,n}$-bracketing numbers can be replaced by $L^{\gamma}(Q)$-bracketing numbers whenever all $P^{X_{n,i}}$ are simultaneously dominated by some measure $Q$ (Appendix F.1), or simply by $L^{\infty}$-bracketing numbers (which coincide with $L^{\infty}$-covering numbers) whenever the function class is uniformly bounded. Many bounds on $L^{\gamma}(Q)$-bracketing numbers are well known (Section 2.7 of Van der Vaart and Wellner (2023)).

3.3. Weighted uniform relative CLT

With a multivariate relative CLT and conditions for asymptotic tightness, we have all we need to establish relative empirical process CLTs. Our main result below should cover the vast majority of statistical applications.

Let $\mathcal{F}$ be a set of measurable functions with finite envelope $F$. Recall that an envelope of $\mathcal{F}$ is a function $F : \mathcal{X} \to \mathbb{R}$ with $\sup_{f \in \mathcal{F}} |f(x)| \le F(x)$ for all $x \in \mathcal{X}$. Assume the family of weights $w_{n,i} : S \to \mathbb{R}$, $n \in \mathbb{N}$, $i \le k_n$, satisfies $\sup_{n,i,s} |w_{n,i}(s)| < \infty$, and recall the definition of the weighted empirical process $\mathbb{G}_n \in \ell^{\infty}(S \times \mathcal{F})$,
\[
\mathbb{G}_n(s, f) = \frac{1}{\sqrt{k_n}} \sum_{i=1}^{k_n} w_{n,i}(s)\bigl(f(X_{n,i}) - \mathbb{E}[f(X_{n,i})]\bigr).
\]
Note that the envelope implies bounded sample paths of $\mathbb{G}_n$. A relative CLT for $\mathbb{G}_n$ can be established in terms of bracketing entropy for the function class
\[
\mathcal{W}_n = \{g_{n,s} : \{1, \dots, k_n\} \to \mathbb{R},\ i \mapsto w_{n,i}(s) : s \in S\}.
\]
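As a numerical illustration (our sketch, not from the paper; the function names are hypothetical, and the expectations in Definition 3.3 are replaced by Monte-Carlo or plug-in values), the averaged semi-norm and the weighted empirical process can be computed as follows:

```python
import numpy as np

def seminorm_gamma_n(h_values, gamma):
    """Monte-Carlo version of ||h||_{gamma,n}: h_values[i] holds draws of
    h(X_{n,i}); the outer mean runs over i = 1..k_n, the inner over draws."""
    return np.mean([np.mean(np.abs(v) ** gamma) for v in h_values]) ** (1 / gamma)

def weighted_empirical_process(X, w, f, mu):
    """G_n(s, f) = k_n^{-1/2} * sum_i w_i(s) (f(X_{n,i}) - E[f(X_{n,i})]),
    with mu[i] = E[f(X_{n,i})] supplied by the user (known or estimated)."""
    k_n = len(X)
    return (w * (f(X) - mu)).sum() / np.sqrt(k_n)

rng = np.random.default_rng(1)
k_n = 500
means = np.linspace(0, 1, k_n)
X = rng.normal(loc=means)          # nonstationary mean sequence
f = np.square
mu = 1 + means ** 2                # E[X_i^2] = 1 + mean_i^2 for unit variance
w = np.ones(k_n)                   # constant weights w_{n,i}(s) = 1
G = weighted_empirical_process(X, w, f, mu)
```

For constant functions the semi-norm reduces to the usual power mean, which gives a quick sanity check of the implementation.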
Denote by $d_n^w$ the semi-metrics on $S$ defined by
\[
d_n^w(s,t) = \|g_{n,s} - g_{n,t}\|_{\gamma,n} = \left(\frac{1}{k_n} \sum_{i=1}^{k_n} |w_{n,i}(s) - w_{n,i}(t)|^{\gamma}\right)^{1/\gamma},
\]
and assume the following entropy conditions on the weights:

W1 $\int_0^{\delta_n} \sqrt{\ln N_{[\,]}(\epsilon, \mathcal{W}_n, \|\cdot\|_{\gamma,n})}\, d\epsilon \to 0$ for all $\delta_n \downarrow 0$, and the integrals are finite for all $n$.
W2 There exists a semi-metric $d^w$ on $S$ such that for all $\delta_n \downarrow 0$, $\lim_{n \to \infty} \sup_{d^w(s,t) < \delta_n} d_n^w(s,t) = 0$.
W3 $(S, d^w)$ is totally bounded.

These assumptions cover the constant case $w_{n,i}(s) = 1$ as well as more sophisticated weights; see the next sections.

Theorem 3.6 (Weighted relative CLT). Assume W1–W3 and the following hold:

(i) $\|F\|_{\gamma,\infty} < \infty$.
(ii) $\sup_{n \in \mathbb{N}} \max_{m \le k_n} m^{\rho} \beta_n(m) < \infty$ for some $\rho > 2\gamma(\gamma-1)/(\gamma-2)^2$.
(iii) $\int_0^{\infty} \sqrt{\ln N_{[\,]}(\epsilon, \mathcal{F}, \|\cdot\|_{\gamma,\infty})}\, d\epsilon < \infty$.

Then, $\mathbb{G}_n$ satisfies a relative CLT in $\ell^{\infty}(S \times \mathcal{F})$.

Proof. Appendix C.2.

The moment condition (i) ensures that all $\gamma$-moments $\mathbb{E}[|f(X_{n,i})|^{\gamma}]$ are uniformly bounded. Again, (i) and (ii) display a trade-off: higher moments allow for a slower polynomial decay of the mixing coefficients.

Example 3.7. Assuming the moment condition with $\gamma = 4$, the $\beta$-mixing coefficients must decay polynomially with degree greater than 6, i.e., $\sup_n \beta_n(m) \le C m^{-\rho}$ for some $\rho > 6$ and all $m$.

3.4. Sequential relative CLT

A famous result by Donsker asserts asymptotic normality of the partial-sum process over iid data. A generalization to sequential empirical processes $\mathbb{Z}_n \in \ell^{\infty}([0,1] \times \mathcal{F})$ indexed by functions, defined as
\[
\mathbb{Z}_n(s, f) = \frac{1}{\sqrt{k_n}} \sum_{i=1}^{\lfloor s k_n \rfloor} f(X_{n,i}) - \mathbb{E}[f(X_{n,i})],
\]
can be found in, e.g., Theorem 2.12.1 of Van der Vaart and Wellner (2023). Beyond the iid case, asymptotic normality of $\mathbb{Z}_n$ is hard to prove and requires additional technical assumptions even for finite function classes (Dahlhaus et al., 2019, Merlevède et al., 2019).
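For a finite sample, $\mathbb{Z}_n$ can be evaluated on a grid of time fractions via cumulative sums. The following sketch is ours, not the paper's; the true means $\mathbb{E}[f(X_{n,i})]$ are assumed known here.

```python
import numpy as np

def sequential_empirical_process(X, f, mu, s_grid):
    """Z_n(s, f) = k_n^{-1/2} sum_{i <= floor(s*k_n)} (f(X_{n,i}) - E[f(X_{n,i})]),
    evaluated on a grid of time fractions s in [0, 1]."""
    k_n = len(X)
    centered = f(X) - mu
    csum = np.concatenate([[0.0], np.cumsum(centered)])  # partial sums S_0..S_{k_n}
    idx = np.floor(np.asarray(s_grid) * k_n).astype(int)
    return csum[idx] / np.sqrt(k_n)

rng = np.random.default_rng(2)
X = rng.normal(size=1000)
Z = sequential_empirical_process(X, f=lambda x: x, mu=0.0,
                                 s_grid=np.linspace(0, 1, 101))
# Z[0] equals 0.0: the sum over i <= floor(0 * k_n) is empty
```

Evaluating the whole path costs only one cumulative sum, which matters when the process is recomputed for many bootstrap replicates.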
Specifying $w_{n,i}(s) = \mathbb{1}\{i \le \lfloor sn \rfloor\}$, relative sequential CLTs are simple corollaries of weighted relative CLTs.

Corollary 3.8 (Sequential relative CLT). Under conditions (i)–(iii) of Theorem 3.6, the sequential empirical process $\mathbb{Z}_n \in \ell^{\infty}([0,1] \times \mathcal{F})$ satisfies a relative CLT.

Proof. Appendix C.2.

4. Bootstrap inference

To make practical use of relative CLTs, we need a way to approximate the distribution of limiting GPs. Their covariance operators are a moving target, however, and generally difficult to estimate. The bootstrap is a convenient way to approximate the distribution of limiting GPs, and easy to implement in practice. This section provides some general results on the consistency of multiplier bootstrap schemes for non-stationary time series.

4.1. Bootstrap consistency and relative weak convergence

To define what bootstrap consistency means in the context of empirical processes, we follow the setup and notation of Bücher and Kojadinovic (2019). In fact, we shall see that the usual definition of bootstrap consistency can be equivalently expressed in terms of relative weak convergence. Let $X_n$ be some sequence of random variables with values in $\mathcal{X}_n$ and $W_n$ an additional sequence of random variables, independent of $X_n$, with values in $\mathcal{W}_n$, with $W_n^{(j)}$ denoting independent copies of $W_n$. Denote by $\mathbb{G}_n = \mathbb{G}_n(X_n)$ resp. $\mathbb{G}_n^{(j)} = \mathbb{G}_n(X_n, W_n^{(j)})$ a sequence of maps constructed from $X_n$ resp. $X_n, W_n^{(j)}$ with values in $\ell^{\infty}(T)$ such that each $\mathbb{G}_n(t)$, $\mathbb{G}_n^{(j)}(t)$ is measurable.

Proposition 4.1. Assume that $\mathbb{G}_n$ is relatively compact. Then, the following are equivalent:

(i) For $n \to \infty$,
\[
\sup_{h \in \mathrm{BL}_1(\ell^{\infty}(\mathcal{F}))} \bigl|\mathbb{E}\bigl[h(\mathbb{G}_n^{(1)}) \mid X_n\bigr] - \mathbb{E}[h(\mathbb{G}_n)]\bigr| \overset{P^*}{\to} 0
\]
and $\mathbb{G}_n^{(1)}$ is asymptotically measurable.
(ii) It holds
\[
\bigl(\mathbb{G}_n, \mathbb{G}_n^{(1)}, \mathbb{G}_n^{(2)}\bigr) \leftrightarrow_d \mathbb{G}_n^{\otimes 3},
\]

where $\mathrm{BL}_1(\ell^{\infty}(\mathcal{F}))$ denotes the set of 1-Lipschitz continuous functions from $\ell^{\infty}(\mathcal{F})$ to $\mathbb{R}$.

Call $\mathbb{G}_n^{(j)}$ a consistent bootstrap scheme in any such case. Classically, $\mathbb{G}_n$ is some (transformation of an) empirical process and consistency of the bootstrap is derived from CLTs for $\mathbb{G}_n$ and $\mathbb{G}_n^{(j)}$.
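As a minimal sketch of such a bootstrap scheme (ours, not the paper's implementation; the function name is hypothetical), the following draws multiplier replicates of a weighted empirical process at a fixed $s$, using iid standard normal multipliers as a simple special case; for dependent data one would substitute dependent block multipliers such as those of Example 4.3 below.

```python
import numpy as np

def multiplier_bootstrap(X, f, mu, w, n_boot=999, rng=None):
    """Draws bootstrap replicates of the weighted empirical process
    G_n^(j)(s, f) = k_n^{-1/2} sum_i V_i^(j) w_i(s) (f(X_i) - mu_i),
    here with iid N(0, 1) multipliers V_i^(j)."""
    rng = rng or np.random.default_rng()
    k_n = len(X)
    centered = w * (f(X) - mu)                        # w_i(s)(f(X_i) - E f(X_i))
    V = rng.standard_normal((n_boot, k_n))            # multipliers V^(1..B)
    return (V * centered).sum(axis=1) / np.sqrt(k_n)  # one replicate per row

rng = np.random.default_rng(3)
X = rng.normal(size=400)
reps = multiplier_bootstrap(X, f=lambda x: x, mu=0.0, w=np.ones(400), rng=rng)
q95 = np.quantile(np.abs(reps), 0.95)  # bootstrap critical value for |G_n|
```

The data enter every replicate only through the fixed vector `centered`, so the conditional law of the replicates given the sample is driven entirely by the multipliers, which is exactly what condition (i) of Proposition 4.1 compares to the law of $\mathbb{G}_n$.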
In view of Proposition 4.1, this approach generalizes to relative CLTs (Corollary D.1).

4.2. Multiplier bootstrap

Now fix some triangular array $X_n = (X_{n,1}, \dots, X_{n,k_n}) \in \mathcal{X}_n$ of random variables with values in a Polish space $\mathcal{X}$ and some family of uniformly bounded functions $w_{n,i} : S \to \mathbb{R}$. Let $\mathcal{F}$ be a set of measurable functions from $\mathcal{X}$ to $\mathbb{R}$ with finite envelope $F$. Denote by $V_n = (V_{n,1}, \dots, V_{n,k_n}) \in \mathbb{R}^{k_n}$ a triangular array of random variables and by $V_n^{(j)} = (V_{n,1}^{(j)}, \dots, V_{n,k_n}^{(j)})$ independent copies of $V_n$. Define $\mathbb{G}_n, \mathbb{G}_n^{(j)} \in \ell^{\infty}(S \times \mathcal{F})$ by
\[
\mathbb{G}_n(s,f) = \frac{1}{\sqrt{k_n}} \sum_{i=1}^{k_n} w_{n,i}(s)\bigl(f(X_{n,i}) - \mathbb{E}[f(X_{n,i})]\bigr),
\]
\[
\mathbb{G}_n^{(j)}(s,f) = \frac{1}{\sqrt{k_n}} \sum_{i=1}^{k_n} V_{n,i}^{(j)} w_{n,i}(s)\bigl(f(X_{n,i}) - \mathbb{E}[f(X_{n,i})]\bigr).
\]

Proposition 4.2. Let $X_{n,i}$ satisfy the conditions of Theorem 3.6 for some $\gamma > 2$ and $\rho$. For every $\epsilon > 0$, let $\nu_n(\epsilon)$ be such that $\max_{|i-j| \le \nu_n(\epsilon)} |\mathrm{Cov}[V_{n,i}, V_{n,j}] - 1| \le \epsilon$. Assume that

(i) $V_{n,1}, \dots, V_{n,k_n}$ are identically distributed and independent of $(X_{n,i})_{i \in \mathbb{N}}$.
(ii) $\mathbb{E}[V_{n,i}] = 0$, $\mathrm{Var}[V_{n,i}] = 1$, and $\sup_n \mathbb{E}[|V_{n,i}|^{\gamma}] < \infty$.
(iii) $k_n \beta_n^X(\nu_n(\epsilon))^{\frac{\gamma-2}{\gamma}},\ k_n \beta_n^V(k_n^{\alpha})^{\frac{\gamma-2}{\gamma}} \to 0$ for every $\epsilon > 0$ and some $1 + (1-2\alpha)^{-1} < \gamma$.

Then, $\mathbb{G}_n^{(j)}$ is a consistent bootstrap scheme.

Proof. Appendix D.2.

Example 4.3 (Block bootstrap with exponential weights). Let $\xi_i \sim \mathrm{Exp}(1)$ be iid for $i \in \mathbb{Z}$ and define
\[
V_{n,i} = \frac{1}{\sqrt{2m_n}} \sum_{j=i-m_n}^{i+m_n} (\xi_j - 1).
\]
Then the $V_{n,i}$ are $m_n$-dependent and it holds $|\mathrm{Cov}[V_{n,i}, V_{n,j}] - 1| \le |i-j|/m_n$. Choosing $\nu_n(\epsilon) = \lfloor \epsilon m_n \rfloor$, we see that if

(i) $m_n < k_n^{\alpha}$ for some $1 + (1-2\alpha)^{-1} < \gamma$ and
(ii) $k_n \beta_n^X(\epsilon m_n)^{(\gamma-2)/\gamma} \to 0$ for every $\epsilon > 0$,

conditions
(i)–(iii) of Proposition 4.2 are satisfied. Continuing Example 3.7 with $\gamma = 4$ and $\sup_n \beta_n(m) \le C m^{-\rho}$ for some $\rho > 6$, we can pick $m_n = k_n^{1/3}$.

4.3. Practical inference

The bootstrap process $\mathbb{G}_n^{(j)}$ in the previous section depends on the unknown quantity $\mu_n(i, f) = \mathbb{E}[f(X_{n,i})]$. In many testing applications, we have $\mathbb{E}[f(X_{n,i})] = 0$, at least under the null hypothesis; see Section 5. If this is not the case, estimating $\mu_n(i, f)$ consistently may still be possible in simple problems (e.g., a fixed-degree polynomial trend), or under triangular array asymptotics where $\mu_n(i, f)$ approaches a simple function (e.g., local stationarity).

For a general, observed non-stationary process $(X_i)_{i \in \mathbb{N}}$, it is impossible to distinguish a random series $(X_i)_{i \in \mathbb{N}}$ with $\mathbb{E}[f(X_i)] = 0$ from a deterministic one with $\mathbb{E}[X_i] = \mu(i)$. As a consequence, it is generally impossible to quantify the uncertainty in $\mathbb{G}_n$ consistently. This is a fundamental problem in non-stationary time series analysis, which the relative CLT framework makes transparent. A modified bootstrap can still provide valid, but possibly conservative, inference in practice.

Let $\hat{\mu}_n(i, f)$ be a potentially non-consistent estimator of $\mu_n(i, f)$, $\bar{\mu}_n(i, f) = \mathbb{E}[\hat{\mu}_n(i, f)]$ its expectation, and define the processes
\[
\hat{\mathbb{G}}_n^*(s,f) = \frac{1}{\sqrt{n}} \sum_{i=1}^{n} V_{n,i}\, w_{n,i}(s)\bigl(f(X_i) - \hat{\mu}_n(i, f)\bigr),
\]
\[
\mathbb{G}_n^*(s,f) = \frac{1}{\sqrt{n}} \sum_{i=1}^{n} V_{n,i}\, w_{n,i}(s)\bigl(f(X_i) - \bar{\mu}_n(i, f)\bigr).
\]
We assume that $\hat{\mu}_n(i, f)$ converges to $\bar{\mu}_n(i, f)$ in the following sense.

Proposition 4.4. Suppose the conditions of Proposition 4.2 are satisfied, $\hat{\mathbb{G}}_n^* - \mathbb{G}_n^*$ is relatively compact, and for every $\epsilon > 0$,
\[
\sup_{f \in \mathcal{F}} \max_{1 \le i \le n} \mathrm{Var}[\hat{\mu}_n(i, f)] = o(\nu_n(\epsilon)^{-1}).
\]
Then $\|\hat{\mathbb{G}}_n^* - \mathbb{G}_n^*\|_{S \times \mathcal{F}} \to_p 0$.

The variance condition is fairly mild: it must vanish, but can do so at a rate much slower than $1/n$. If the bias also vanishes at an appropriate rate, the approximated bootstrap process $\hat{\mathbb{G}}_n^*$ is, in fact, consistent.

Proposition 4.5.
Suppose the conditions of Proposition 4.2 and Proposition 4.4 are satisfied, $\mathbb{G}_n^*$ is relatively compact, and
\[
\sup_{f \in \mathcal{F}} \max_{1 \le i \le n} |\bar{\mu}_n(i, f) - \mu_n(i, f)|^2 = o(\nu_n(\epsilon)^{-1}).
\]
Then $\hat{\mathbb{G}}_n^*$ is consistent for $\mathbb{G}_n$.

In particular, this allows us to derive bootstrap consistency under local stationarity asymptotics under standard conditions. As explained above, this type of consistency should not be expected for the asymptotics of the observed process. Even without consistency, the bootstrap still provides valid, but conservative, inference, as shown in the following propositions.

Proposition 4.6. Define $\hat{q}_{n,\alpha}^*$ as the $(1-\alpha)$-quantile of $\|\hat{\mathbb{G}}_n^*\|_{S \times \mathcal{F}}$. Suppose that the conditions of Proposition 4.4 hold, and that $\mathbb{G}_n^*$ is relatively compact and satisfies a relative CLT with tight corresponding GPs and $\mathrm{Var}[\mathbb{G}_n^*(s,f)], \mathrm{Var}[\mathbb{G}_n(s,f)] \ge \sigma > 0$ for all $(s,f) \in S \times \mathcal{F}$ and $n$ large. Then,
\[
\liminf_{n \to \infty} P\bigl(\|\mathbb{G}_n\|_{S \times \mathcal{F}} \le \hat{q}_{n,\alpha}^*\bigr) \ge 1 - \alpha.
\]

If, on the other hand, $\mathbb{G}_n^*$ is not relatively compact, it usually holds
\[
P\bigl(\|\mathbb{G}_n^*\|_{S \times \mathcal{F}} > t_n\bigr) \to 1, \tag{2}
\]
for some $t_n \to \infty$. In this case, the bootstrap is over-conservative.

Proposition 4.7. If (2) and the conditions of Theorem 3.6 and Proposition 4.4 hold, then
\[
\lim_{n \to \infty} P\bigl(\|\mathbb{G}_n\|_{S \times \mathcal{F}} \le \hat{q}_{n,\alpha}^*\bigr) = 1.
\]

Although the bootstrap quantiles may be conservative, they are still informative: as $n$ tends to infinity, it usually holds $\hat{q}_{n,\alpha}^*/\sqrt{n} \to 0$ (see the proof of Lemma D.3). In this sense, the bootstrap quantiles yield potentially conservative, yet asymptotically vanishing, uniform confidence intervals for the weighted mean $n^{-1} \sum_{i=1}^n w_{n,i}\, \mu(i, \cdot)$.

5. Applications

To end, we explore some exemplary applications that
cannot be handled by previous results. The methods are illustrated by monthly mean temperature anomalies in the Northern Hemisphere from 1880 to 2024, provided by NASA (GISTEMP Team, 2025, Lenssen et al., 2024) and shown in Fig. 1.

5.1. Uniform confidence bands for a nonparametric trend

Nonparametric estimation of the trend function $\mu(i) = \mathbb{E}[X_i]$ is a key problem in non-stationary time series analysis. As explained in Section 4, $\mu(i) = \mathbb{E}[X_i]$ is not estimable consistently in general, at least not under the asymptotics of the observed process $(X_i)_{i \in \mathbb{N}}$. The local stationarity assumption (Dahlhaus, 2012) resolves this issue by considering the asymptotics of a hypothetical sequence of time series $(X_{n,i})_{i \in \mathbb{N}}$ in which $\mu_n(i) = \mathbb{E}[X_{n,i}]$ becomes flat as $n \to \infty$.

[Figure 1: Monthly mean temperature anomalies in the Northern Hemisphere from 1880 to 2024 (dots), kernel estimate of the trend (solid line), and uniform 90%-confidence interval (shaded area).]

The relative CLT framework allows us to consider the asymptotics of the observed process $(X_i)_{i \in \mathbb{N}}$, but we have to accept the fact that $\mu(i) = \mathbb{E}[X_i]$ cannot be estimated consistently. Instead, we aim at estimating the smoothed mean
\[
\mu_b(i) = \frac{1}{nb} \sum_{j=1}^{n} K\Bigl(\frac{j-i}{nb}\Bigr)\mu(j) \qquad \text{by} \qquad \hat{\mu}_b(i) = \frac{1}{nb} \sum_{j=1}^{n} K\Bigl(\frac{j-i}{nb}\Bigr) X_j,
\]
where $K$ is a kernel function supported on $[-1,1]$ and $b > 0$ is the bandwidth parameter. In our setting, the bandwidth $b$ plays a different role than in the local stationarity framework. Since $\mathbb{E}[\hat{\mu}_b(i)] = \mu_b(i)$ for every value of $b$, we do not require $b \to 0$ as $n \to \infty$. Instead, we may choose $b$ to be a fixed value, quantifying the scale (as a fraction of the overall sample) at which we look at the series. Fig. 1 shows the kernel estimate of the trend function $\mu_b(i)$ for $b = 0.05$ as a solid line. Here, $b = 0.05$ means that the smoothing window spans $0.1 \times n$ months, or roughly 15 years. Holding $b$ fixed conveniently allows for uniform-in-time asymptotics, as shown in the following.

Corollary 5.1.
Suppose the kernel $K$ is a four times continuously differentiable probability density function supported on $[-1,1]$, $\mathbb{E}[|X_i|^5] < \infty$, and $\sup_n \beta_n^X(k) \lesssim k^{-7}$. Then for any $b > 0$, the process
\[
s \mapsto \sqrt{n}\bigl(\hat{\mu}_b(sn) - \mu_b(sn)\bigr)
\]
satisfies a relative CLT in $\ell^{\infty}([0,1])$.

To quantify the uncertainty of the estimator, we can use the bootstrap. Specifically, let
\[
\hat{\mu}_b^*(i) = \frac{1}{nb} \sum_{j=1}^{n} K\Bigl(\frac{j-i}{nb}\Bigr) V_{n,j}\bigl(X_j - \hat{\mu}_b(j)\bigr),
\]
with block multipliers $V_{n,j}$ as in Example 4.3. Let $\hat{q}_{n,\alpha}$ be the $(1-\alpha)$-quantile of the distribution of $\sup_{s \in [0,1]} \sqrt{n}\,|\hat{\mu}_b^*(sn)|$. Then, for $\alpha \in (0,1)$, we can construct a uniform confidence interval for $\mu_b(sn)$ by
\[
\hat{C}_n(\alpha) = \bigl[\hat{\mu}_b - \hat{q}_{n,\alpha}/\sqrt{n},\ \hat{\mu}_b + \hat{q}_{n,\alpha}/\sqrt{n}\bigr].
\]

Corollary 5.2. Suppose the conditions of Corollary 5.1 hold, $\sup_i |\mu(i)| < \infty$, and there is $s \in [0,1]$ such that
\[
\sigma_n^2(s) = \mathrm{Var}\Biggl[\frac{1}{\sqrt{n}} \sum_{i=1}^{n} V_{n,i}\, K\Bigl(\frac{i - sn}{nb}\Bigr)\bigl(\mu_b(i) - \mu(i)\bigr)\Biggr] \to \infty.
\]
Then $\liminf P\bigl(\mu_b \in \hat{C}_n(\alpha)\bigr) \ge 1 - \alpha$.

The condition on $\sigma_n^2(s)$ is usually satisfied in the time series setting, where a diverging number of the covariances $\mathrm{Cov}[V_{n,i}, V_{n,j}]$ are close to 1. If this is not the case, a similar result could be established using Proposition 4.6. The confidence interval $\hat{C}_n(\alpha)$ is shown as a shaded area in Fig. 1. The confidence interval is uniformly valid for all $s \in [0,1]$ and shows a significant, strongly
increasing trend in the last 50 years.

5.2. Testing for time series characteristics

Suppose $Z_1, Z_2, \dots$ is a non-stationary time series and we want to test
\[
H_0: \sup_i \sup_{f \in \mathcal{F}} |\mathbb{E}[f(Z_i)]| = 0 \qquad \text{against} \qquad H_1: \sup_i \sup_{f \in \mathcal{F}} |\mathbb{E}[f(Z_i)]| \ne 0.
\]
The functions $f \in \mathcal{F}$ determine which characteristics of the time series we want to control. This framework includes many important applications, two of which are discussed below.

Example 5.3 (Equal characteristics of two series). Suppose $Z_i = (X_i, Y_i)$, $i \in \mathbb{N}$, and we want to test whether the two time series $X_1, X_2, \dots$ and $Y_1, Y_2, \dots$ have the same characteristics. To do so, let
\[
\mathcal{F} = \{f : g(x) - g(y),\ g \in \mathcal{G}\},
\]
so that
\[
H_0: \sup_i \sup_{g \in \mathcal{G}} |\mathbb{E}[g(X_i)] - \mathbb{E}[g(Y_i)]| = 0.
\]
Here $\mathcal{G}$ describes the characteristics of the two time series $X_i, Y_i$ that we want to match. Common choices are monomials or indicator functions, for testing equality of moments or distributions, respectively.

Example 5.4 (Deterministic trends). Suppose we want to test for a deterministic trend in a time series $(X_i)_{i \in \mathbb{N}}$. Let $\Delta_h X_i = X_{i+h} - X_i$ be the $h$-step forward difference operator, and define $\Delta_h^r X_i = \Delta_h^{r-1} X_{i+h} - \Delta_h^{r-1} X_i$ for $r \ge 2$. The null hypothesis is
\[
H_0: \mathbb{E}[\Delta_h^r X_i] = 0 \quad \text{for all } i \in \mathbb{N} \text{ and } 1 \le r \le R,
\]
with fixed $h, R \in \mathbb{N}$. The parameter $R$ determines the order of the polynomial trend we want to test for. The step size $h$ allows focusing on long-term trends in the presence of deterministic seasonality. This fits into the above framework by letting $Z_i = (\Delta_h X_i, \dots, \Delta_h^R X_i)$, with the convention $\Delta_h^r X_i = 0$ for $hr \ge i$, and
\[
\mathcal{F} = \{f : f(z_1, \dots, z_R) = z_r,\ 1 \le r \le R\}.
\]

The multiplier bootstrap allows us to construct a test for the general null hypothesis above. Define the test statistic and its bootstrap version
\[
T_n = \sup_{s \in [0,1],\, f \in \mathcal{F}} \Biggl|\frac{1}{\sqrt{n}} \sum_{i=1}^{n} w_{n,i}(s) f(Z_i)\Biggr|, \qquad
T_n^* = \sup_{s \in [0,1],\, f \in \mathcal{F}} \Biggl|\frac{1}{\sqrt{n}} \sum_{i=1}^{n} V_{n,i}\, w_{n,i}(s) f(Z_i)\Biggr|,
\]
where the $w_{n,i}(s)$ are some weights. For example, $w_{n,i}(s) = K((i-sn)/(nb))$ allows focusing on time-local deviations from the null hypothesis. Let $\alpha \in (0,1)$, and let $c_n^*(\alpha)$ be the $(1-\alpha)$-quantile of the distribution of $T_n^*$. We reject $H_0$ iff $T_n > c_n^*(\alpha)$.
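A simplified implementation of this test can be sketched as follows (our sketch, not the paper's code; it assumes scalar $Z_i$, the single function class $\mathcal{F} = \{\mathrm{id}\}$, an Epanechnikov kernel, grid sizes of our choosing, and the block multipliers of Example 4.3 with $m_n = k_n^{1/3}$; all function names are hypothetical):

```python
import numpy as np

def block_multipliers(n, m, rng):
    """V_{n,i} = (2m)^{-1/2} * sum_{j=i-m}^{i+m} (xi_j - 1), xi_j ~ Exp(1) iid,
    as in Example 4.3; the resulting V_{n,i} are m-dependent with mean zero."""
    xi = rng.exponential(size=n + 2 * m) - 1.0
    csum = np.concatenate([[0.0], np.cumsum(xi)])
    return (csum[2 * m + 1:] - csum[:n]) / np.sqrt(2 * m)

def trend_test(Z, b=0.1, n_boot=999, alpha=0.05, seed=0):
    """Multiplier-bootstrap test of H0: E[Z_i] = 0 for all i, with kernel
    weights w_{n,i}(s) = K((i - sn)/(nb)). Returns (T_n, c, reject)."""
    rng = np.random.default_rng(seed)
    n = len(Z)
    K = lambda u: 0.75 * np.clip(1 - u ** 2, 0, None)    # Epanechnikov kernel
    s_grid = np.linspace(0, 1, 51)
    i = np.arange(1, n + 1)
    W = K((i[None, :] - s_grid[:, None] * n) / (n * b))  # weights, shape (|s|, n)
    T_n = np.max(np.abs(W @ Z)) / np.sqrt(n)
    m = max(1, int(n ** (1 / 3)))                        # block size m_n
    T_star = np.empty(n_boot)
    for k in range(n_boot):
        V = block_multipliers(n, m, rng)
        T_star[k] = np.max(np.abs(W @ (V * Z))) / np.sqrt(n)
    c = np.quantile(T_star, 1 - alpha)                   # bootstrap critical value
    return T_n, c, T_n > c

rng = np.random.default_rng(42)
T_n, crit, reject = trend_test(rng.normal(size=240), b=0.1, n_boot=199)
```

Richer classes $\mathcal{F}$ (e.g., the indicator class of the Kolmogorov–Smirnov-type statistic below) amount to an additional supremum over a grid of functions in both $T_n$ and $T_n^*$.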
Level and consistency of the test can be straightforwardly derived from our general results.

Corollary 5.5. Let the sequence of weights $w_{n,i}(s)$ and $\mathcal{F}$ satisfy the conditions of Theorem 3.6. It holds $P(T_n > c_n^*(\alpha)) \to \alpha$ under $H_0$, and $P(T_n > c_n^*(\alpha)) \to 1$ whenever
\[
\liminf_{n \to \infty} \sup_{s \in S,\, f \in \mathcal{F}} \Biggl|\frac{1}{n} \sum_{i=1}^{n} w_{n,i}(s)\, \mathbb{E}[f(Z_i)]\Biggr| = \delta > 0.
\]

Because $\sup_i |\mathbb{E}[f(Z_i)]| \ne 0$ under $H_1$, the distribution of the bootstrap statistic $T_n^*$ does not resemble the distribution of $T_n$ under the alternative. Consistency still follows from the fact that $T_n/\sqrt{n}$ is bounded away from zero with probability tending to 1, and $T_n^*/\sqrt{n} \to_p 0$. The power of the test can be improved if we center by some (non-consistent) estimator $\hat{\mu}_n(i, f)$, as discussed in Section 4.

As an illustration, we apply the above procedure to test for nonstationarity of the monthly mean anomalies. For example, let $Z_i = (X_i, X_{i-120})$ be a pair of anomalies 10 years apart,
\[
\mathcal{F} = \{f : f(x,y) = \mathbb{1}(x < t) - \mathbb{1}(y < t) : t \in [-5,5]\},
\]
and $w_{n,i}(s) = K((i-sn)/(nb))$. This gives a Kolmogorov–Smirnov-type statistic
\[
T_n = \sup_{t \in [-5,5],\, s \in [0,1]} \Biggl|\frac{1}{\sqrt{n}} \sum_{i=1}^{n} K\Bigl(\frac{i - sn}{nb}\Bigr)\bigl(\mathbb{1}\{X_i < t\} - \mathbb{1}\{X_{i-120} < t\}\bigr)\Biggr|.
\]
It is straightforward to show that the conditions of Corollary 5.5 hold, and we can use the multiplier
bootstrap to construct a test for the null hypothesis of stationarity. Using $b = 0.05$ and a kernel estimator for $\hat{\mu}_n(s, f)$ as in the previous section, we get $T_n = 0.69$ and $c_n^*(0.05) = 0.30$, and a bootstrapped p-value smaller than $0.0001$, providing strong evidence against the null hypothesis of stationarity.

References

Bonnerjee, S., Karmakar, S., and Wu, W. B. (2024). Gaussian approximation for non-stationary time series with optimal rate and explicit construction. The Annals of Statistics, 52(5):2293–2317.

Bradley, R. C. (1999). On the growth of variances in a central limit theorem for strongly mixing sequences. Bernoulli, 5(1):67–80.

Bradley, R. C. (2005). Basic properties of strong mixing conditions. A survey and some open questions. Probability Surveys, 2:107–144.

Bücher, A. and Kojadinovic, I. (2019). A note on conditional versus joint unconditional weak convergence in bootstrap consistency results. Journal of Theoretical Probability, 32(3):1145–1165.

Dahlhaus, R. (2012). Locally stationary processes. In Handbook of Statistics, volume 30, pages 351–413. Elsevier.

Dahlhaus, R., Richter, S., and Wu, W. B. (2019). Towards a general theory for nonlinear locally stationary processes. Bernoulli, 25(2):1013–1044.

Dehling, H. and Philipp, W. (2002). Empirical process techniques for dependent data. In Empirical Process Techniques for Dependent Data, pages 3–113. Springer.

Doukhan, P. (2012). Mixing: Properties and Examples. Lecture Notes in Statistics. Springer New York.

Giessing, A. (2023). Anti-concentration of suprema of Gaussian processes and Gaussian order statistics.

GISTEMP Team (2025). GISS Surface Temperature Analysis (GISTEMP), version 4. https://data.giss.nasa.gov/gistemp/. Dataset accessed 2025-05-01.

Karmakar, S. and Wu, W. B. (2020). Optimal Gaussian approximation for multiple time series. Statistica Sinica, 30(3):1399–1417.

Kosorok, M. R. (2008).
Introduction to Empirical Processes and Semiparametric Inference, volume 61. Springer.

Ledoux, M. and Talagrand, M. (1991). Probability in Banach Spaces: Isoperimetry and Processes, volume 23. Springer Science & Business Media.

Lenssen, N., Schmidt, G. A., Hendrickson, M., Jacobs, P., Menne, M., and Ruedy, R. (2024). A GISTEMPv4 observational uncertainty ensemble. J. Geophys. Res. Atmos., 129(17):e2023JD040179.

Merlevède, F. and Peligrad, M. (2020). Functional CLT for nonstationary strongly mixing processes. Statistics & Probability Letters, 156:108581.

Merlevède, F., Peligrad, M., and Utev, S. (2019). Functional CLT for martingale-like nonstationary dependent structures. Bernoulli, 25(4B):3203–3233.

Mies, F. and Steland, A. (2023). Sequential Gaussian approximation for nonstationary time series in high dimensions. Bernoulli, 29(4):3114–3140.

Phandoidaen, N. and Richter, S. (2022). Empirical process theory for locally stationary processes. Bernoulli, 28(1):453–480.

Rio, E. (2017). Asymptotic Theory of Weakly Dependent Random Processes. Probability Theory and Stochastic Modelling. Springer Berlin Heidelberg.

Shumway, R. H., Stoffer, D. S., and Stoffer, D. S. (2000). Time Series Analysis and Its Applications, volume 3. Springer.

van der Vaart, A. (2000). Asymptotic Statistics. Cambridge University Press.

Van der Vaart, A. and Wellner, J. (2023). Weak Convergence and Empirical Processes: With Applications to Statistics. Springer Series in Statistics. Springer International Publishing.

Wu, W. B. (2005). Nonlinear system theory: Another look at dependence. Proceedings of the National Academy of
Sciences, 102(40):14150–14154.

A. Proofs for relative weak convergence and CLTs

Lemma A.1. The sequence $X_n$ is relatively compact if and only if it is asymptotically measurable and relatively asymptotically tight.

Proof. If $X_n$ is relatively compact, it converges to some tight Borel law along subsequences. Along such subsequences $n_k$, $X_{n_k}$ is asymptotically tight and measurable by Lemma 1.3.8 of Van der Vaart and Wellner (2023). We obtain the sufficiency. Note that asymptotic measurability along subsequences implies (global) asymptotic measurability.

For the necessity, for any subsequence there exists a further subsequence $n_k$ such that $X_{n_k}$ is asymptotically tight and measurable. By Prohorov's theorem (Van der Vaart and Wellner, 2023, Theorem 1.3.9), there exists a further subsequence $n_{k_i}$ such that $X_{n_{k_i}}$ converges weakly to some tight Borel law. This implies relative compactness of $X_n$.

Proof of Proposition 2.10. Recall that $X_n$ converges weakly to a Borel law $X$ iff $\mathbb{E}^*[f(X_n)] \to \mathbb{E}[f(X)]$ for all $f : \mathbb{D} \to \mathbb{R}$ bounded and continuous.

1. $\Rightarrow$ 2.: Assume that $X_n \leftrightarrow_d Y_n$. Then, for all $f : \mathbb{D} \to \mathbb{R}$ bounded and continuous,
\[
|\mathbb{E}^*[f(Y_{n_k})] - \mathbb{E}[f(X)]| \le |\mathbb{E}^*[f(X_{n_k})] - \mathbb{E}^*[f(Y_{n_k})]| + |\mathbb{E}^*[f(X_{n_k})] - \mathbb{E}[f(X)]| \to 0
\]
whenever $X_{n_k} \to_d X$ and $X$ is Borel measurable.

2. $\Rightarrow$ 3.: Since $X_n$ is relatively compact, every subsequence $X_{n_k}$ contains a weakly convergent subsequence $X_{n_{k_i}} \to_d X$ with $X$ tight and Borel measurable. By assumption, also $Y_{n_{k_i}} \to_d X$.

3. $\Rightarrow$ 1.: Given $f$, it suffices to prove that for all subsequences $n_k$ there exists a further subsequence $n_{k_i}$ such that $|\mathbb{E}^*[f(X_{n_{k_i}})] - \mathbb{E}^*[f(Y_{n_{k_i}})]| \to 0$. Pick $n_{k_i}$ such that both $X_{n_{k_i}} \to_d X$ and $Y_{n_{k_i}} \to_d X$ with $X$ tight and Borel measurable. Then,
\[
|\mathbb{E}^*[f(X_{n_{k_i}})] - \mathbb{E}^*[f(Y_{n_{k_i}})]| \le |\mathbb{E}[f(X)] - \mathbb{E}^*[f(Y_{n_{k_i}})]| + |\mathbb{E}^*[f(X_{n_{k_i}})] - \mathbb{E}[f(X)]| \to 0.
\]
At last, characterization (iii) implies relative compactness of $Y_n$.

Proof of Proposition 2.11. For all $f : \mathbb{E} \to \mathbb{R}$ bounded and continuous, the composition $f \circ g : \mathbb{D} \to \mathbb{R}$ is bounded and continuous. Thus,
\[
|\mathbb{E}^*[f \circ g(X_n)] - \mathbb{E}^*[f \circ g(Y_n)]| \to 0
\]
for all such $f$ by definition of $X_n \leftrightarrow_d Y_n$. This yields the claim.

Proof of Proposition 2.12.
Any subsequence of $n$ contains a further subsequence such that $Y_{n_k} \to_d Y$ and there exists $g : \mathbb{D} \to \mathbb{E}$ such that $g_{n_k}(x_k) \to g(x)$ for all $x_k \to x$ in $\mathbb{D}$. Theorem 1.11.1 of Van der Vaart and Wellner (2023) implies $g_{n_k}(Y_{n_k}) \to_d g(Y)$. In particular, $g_n(Y_n)$ is relatively compact, and Proposition 2.10 yields the second claim.

Proof of Proposition 2.14. We prove this statement by contradiction. Suppose that
\[
\limsup_{n \to \infty}\, P^*(X_n \in S_n) - P^*(Y_n \in S_n) > 0.
\]
Then there is a subsequence $n_k$ of $n$ such that
\[
\lim_{i \to \infty}\, P^*(X_{n_{k_i}} \in S_{n_{k_i}}) - P^*(Y_{n_{k_i}} \in S_{n_{k_i}}) > 0, \tag{3}
\]
for every subsequence $n_{k_i}$ of $n_k$. By Proposition 2.10, $n_k$ has a subsequence $n_{k_i}$ on which $X_{n_{k_i}} \to_d Y$ and $Y_{n_{k_i}} \to_d Y$ for some tight Borel law $Y$. We may further assume that $S_{n_{k_i}}$ converges to $S$ on the same subsequence. It holds
\[
\limsup_{i \to \infty}\, P^*(X_{n_{k_i}} \in S_{n_{k_i}}) - P^*(Y_{n_{k_i}} \in S_{n_{k_i}})
\le \limsup_{i \to \infty} P^*(X_{n_{k_i}} \in S_{n_{k_i}}) - \liminf_{i \to \infty} P^*(Y_{n_{k_i}} \in S_{n_{k_i}})
\le \limsup_{i \to \infty} P^*\Bigl(X_{n_{k_i}} \in \limsup_{i \to \infty} S_{n_{k_i}}\Bigr) - \liminf_{i \to \infty} P^*\Bigl(Y_{n_{k_i}} \in \liminf_{i \to \infty} S_{n_{k_i}}\Bigr)
= \limsup_{i \to \infty} P^*(X_{n_{k_i}} \in S) - \liminf_{i \to \infty} P^*(Y_{n_{k_i}} \in S).
\]
Further, the Portmanteau theorem (Van der Vaart and Wellner, 2023, Theorem 1.3.4) gives
\[
P^*(Y \in \partial S) \le P^*(Y \in (\partial S)^{\delta}) \le \limsup_{i \to \infty} P^*(Y_{n_{k_i}} \in (\partial S)^{\delta}).
\]
Taking $\delta \to 0$, we obtain $P(Y \in \partial S) = 0$, so $S$ is a continuity set of $Y$. Now the Portmanteau theorem implies
\[
\limsup_{i \to \infty} P^*(X_{n_{k_i}} \in S) - \liminf_{i \to \infty} P^*(Y_{n_{k_i}} \in S) = P^*(Y \in S) - P^*(Y \in S) = 0,
\]
which contradicts (3). The case where (3) holds with reverse sign is treated analogously.

Definition A.2. Let $\mathbb{D}, \mathbb{E}$ be metrizable topological vector spaces, that is, metric spaces such that addition and scalar multiplication are continuous. A map $\phi : \mathbb{D} \to \mathbb{E}$ is called Hadamard-differentiable at $\theta \in \mathbb{D}$ if there exists $\phi'_{\theta} : \mathbb{D} \to \mathbb{E}$ continuous and linear such that
\[
\frac{\phi(\theta + t_n h_n) - \phi(\theta)}{t_n} \to \phi'_{\theta}(h)
\]
for all $t_n \to 0$ in $\mathbb{R}$ and $h_n \to h$ such that $\theta + t_n h_n \in \mathbb{D}$ for all $n$. $\phi$ is continuously Hadamard-differentiable in an open subset $U \subset \mathbb{D}$ if $\phi$ is Hadamard-differentiable for all $\theta \in U$ and $\phi'_{\theta}$ is continuous in $\theta \in U$.

Proof of Proposition 2.13. Note that $g_n : \mathbb{D} \to \mathbb{E}$,
$x \mapsto \phi'_{\theta_n}(x)$ satisfies the condition of Proposition 2.12, since $\phi$ has continuous Hadamard-differentials. Thus, $\phi'_{\theta_n}(Y_n)$ is relatively compact since $Y_n$ is. By (iii) of Proposition 2.10 and descending to subsequences, we may assume $Y_n \to_d Y$, $\theta_n \to \theta$ and $\phi'_{\theta_n}(Y_n) \to_d \phi'_{\theta}(Y)$. By Theorem 3.10.4 in Van der Vaart and Wellner (2023), we obtain
\[
r_n(\phi(X_n) - \phi(\theta_n)) \to_d \phi'_{\theta}(Y).
\]
Then, (iii) of Proposition 2.10 yields the claim.

A.1. Relative weak convergence in $\ell^{\infty}(T)$

Proof of Theorem 2.15. If $X_n \leftrightarrow_d Y_n$, then $X_n$ is relatively compact by Proposition 2.10. This is equivalent to relative asymptotic tightness and asymptotic measurability by Lemma A.1. The continuous mapping theorem then implies marginal relative weak convergence.

For the reverse direction, let $n_k$ be a subsequence. Let $n_{k_i}$ be a subsequence of $n_k$ such that $X_{n_{k_i}} \to_d X$ with $X$ a tight Borel law and $Y_{n_{k_i}}$ is asymptotically tight. Since all marginals of $X_n$ and $Y_n$ are relatively weakly convergent, this implies the convergence of all marginals of $Y_{n_{k_i}}$ to the marginals of $X$ by characterization (ii) of Proposition 2.10. Note that all marginals of $X_n$ are relatively compact, e.g., by the continuous mapping theorem. Together with asymptotic tightness of $Y_{n_{k_i}}$, this implies the convergence $Y_{n_{k_i}} \to_d X$ by Theorem 1.5.4 of Van der Vaart and Wellner (2023). By characterization (iii) of Proposition 2.10, we obtain $X_n \leftrightarrow_d Y_n$.

Lemma A.3. The sequence $X_n$ is relatively compact if and only if it is relatively asymptotically tight and $X_n(t)$ is asymptotically measurable for all $t \in T$.

Proof. By Lemma A.1, $X_n$ is relatively compact if and only if it is relatively asymptotically tight and asymptotically measurable. By definition, any sequence $X_n$ is asymptotically measurable if and only if any subsequence $n_k$ contains a further subsequence $n_{k_i}$ such that $X_{n_{k_i}}$ is asymptotically measurable. By Lemma 1.5.2 of Van der Vaart and Wellner (2023), being asymptotically measurable is equivalent to $X_n(t)$ being asymptotically measurable for all $t \in T$ whenever $X_n$ is relatively asymptotically tight.
All together, this implies the equivalence.

Corollary A.4 (Relative Cramér–Wold device). Let $X_n$ and $Y_n$ be two sequences of $\mathbb{R}^d$-valued random variables. If $X_n$ is uniformly tight, then $X_n \leftrightarrow_d Y_n$ if and only if
\[
t^{\mathsf{T}} X_n \leftrightarrow_d t^{\mathsf{T}} Y_n
\]
for all $t \in \mathbb{R}^d$.

Proof. Restricting to functions of the form $f(t^{\mathsf{T}} \cdot)$ with $f : \mathbb{R} \to \mathbb{R}$ bounded and continuous, the only-if part follows by definition.

For the other direction, assume $t^{\mathsf{T}} X_n \leftrightarrow_d t^{\mathsf{T}} Y_n$ for all $t \in \mathbb{R}^d$. Note that $t^{\mathsf{T}} X_n$ is uniformly tight for all $t \in \mathbb{R}^d$. We use characterization (iii) of Proposition 2.10. Let $n_k$ be a subsequence. Since $X_n$ is uniformly tight, there exists a subsequence $X_{n_{k_j}} \to_d X$. Then, also $t^{\mathsf{T}} X_{n_{k_j}} \to_d t^{\mathsf{T}} X$. By characterization (ii) of Proposition 2.10 and $t^{\mathsf{T}} X_n \leftrightarrow_d t^{\mathsf{T}} Y_n$, it follows that $t^{\mathsf{T}} Y_{n_{k_j}} \to_d t^{\mathsf{T}} X$ for all $t \in \mathbb{R}^d$. By the Cramér–Wold device, we derive $Y_{n_{k_j}} \to_d X$. This proves the claim by characterization (iii) of Proposition 2.10.

A.2. Relative central limit theorems

Proof of Corollary 2.18. Note that $(Y_n(t_1), \dots, Y_n(t_k))$ satisfies a relative CLT if and only if
\[
(Y_n(t_1), \dots, Y_n(t_k)) \leftrightarrow_d (N_{Y_n}(t_1), \dots, N_{Y_n}(t_k)),
\]
since corresponding Gaussians are unique in distribution. The necessity then follows by Lemma A.3 (for (ii)) and Theorem 2.15 (for (iii)).

For the sufficiency, observe that $N_{Y_n}(t)$ and $Y_n(t)$ are measurable by assumption. Thus, (ii) is equivalent to $Y_n$ and $N_{Y_n}$ being relatively compact by Lemma A.3. Then, Theorem 2.15 provides the claim.

Proof of Proposition 2.19. By Corollary A.6, there exists an asymptotically tight
sequence of tight Borel measurable GPs corresponding to $Y_n$ if and only if $\sup_{n \in \mathbb{N},\, i \le d} \mathrm{Var}[Y_n^{(i)}] < \infty$; equivalently, if all subsequences $n_k$ contain a further subsequence $n_{k_i}$ such that $\Sigma_{n_{k_i}}$ converges. Combined with the fact that a sequence of centered Gaussians converges weakly if and only if its corresponding sequence of covariances converges, the equivalence follows from Proposition 2.10.

A.3. Existence and tightness of corresponding GPs

Proposition A.5. If there exists a relatively asymptotically tight sequence of tight and Borel measurable GPs corresponding to $Y_n$, then every subsequence of $n$ contains a further subsequence $n_{k_i}$ such that $\mathrm{Cov}[Y_{n_{k_i}}(t), Y_{n_{k_i}}(s)]$ converges for all $s, t \in T$.

Proof. Denote by $N_{Y_n}$ a relatively asymptotically tight sequence of GPs corresponding to $Y_n$. Any subsequence contains a further subsequence $n_{k_i}$ such that $N_{Y_{n_{k_i}}}$ converges weakly to some tight GP. In particular, all marginals of $N_{Y_{n_{k_i}}}$ converge weakly. Recall that a sequence of centered multivariate Gaussians converges weakly if and only if their corresponding covariances converge. Thus, we obtain convergence of all covariances $\mathrm{Cov}[N_{Y_{n_{k_i}}}(t), N_{Y_{n_{k_i}}}(s)] = \mathrm{Cov}[Y_{n_{k_i}}(t), Y_{n_{k_i}}(s)]$.

Corollary A.6. If $T$ is finite, the following are equivalent:

(i) There exists a relatively asymptotically tight sequence of tight and Borel measurable GPs corresponding to $Y_n$.
(ii) $\sup_n \mathrm{Var}[Y_n(t)] < \infty$ for all $t \in T$.

Proof. For the sufficiency, Proposition A.5 implies that all sequences of covariances $\mathrm{Cov}[Y_n(s), Y_n(t)]$ are relatively compact or, equivalently, bounded.

For the necessity, identify $Y_n$ with $(Y_n(t_1), \dots, Y_n(t_d))$ for $T = \{t_1, \dots, t_d\}$. Construct $N_{Y_n} \sim \mathcal{N}(0, \Sigma_n)$ with $\Sigma_n$ the covariance matrix of $Y_n$. Then, each $N_{Y_n}$ is measurable and tight. Moreover, $\sup_n \mathrm{Var}[Y_n(t)] < \infty$ implies that all covariances $\mathrm{Cov}[Y_n(s), Y_n(t)]$ are relatively compact. Thus, every subsequence $n_k$ contains a further subsequence $n_{k_i}$ such that all covariances $\mathrm{Cov}[Y_{n_{k_i}}(s), Y_{n_{k_i}}(t)]$ converge. This is equivalent to the weak convergence of $N_{Y_{n_{k_i}}}$.
We obtain relative compactness, hence asymptotic tightness, of $N_{Y_n}$.

Proof of Proposition 2.20. By Kolmogorov's extension theorem, there exist centered GPs $\{N_{Y_n}(t) : t \in T\}$ with covariance function given by $(s,t) \mapsto \mathrm{Cov}[Y_n(s), Y_n(t)]$. Since $(T, \rho_n)$ is totally bounded (by finiteness of the covering numbers), $(T, \rho_n)$ is separable and thus there exists a separable version of $\{N_{Y_n}(t) : t \in T\}$ with the same marginal distributions (Section 2.3.3 of Van der Vaart and Wellner (2023)). Without loss of generality, assume that $\{N_{Y_n}(t) : t \in T\}$ is separable. Then,
\[
\mathbb{E}[\|N_{Y_n}\|_T] \le C \int_0^{\infty} \sqrt{\ln N(\epsilon, T, \rho_n)}\, d\epsilon < \infty,
\]
\[
\mathbb{E}\Bigl[\sup_{\rho_n(s,t) \le \delta} |N_{Y_n}(t) - N_{Y_n}(s)|\Bigr] \le C \int_0^{\delta} \sqrt{\ln N(\epsilon, T, \rho_n)}\, d\epsilon
\]
for some constant $C$, by Corollary 2.2.9 of Van der Vaart and Wellner (2023). The first inequality implies that each $\{N_{Y_n}(t) : t \in T\}$ has bounded sample paths almost surely; hence, without loss of generality, $\{N_{Y_n}(t) : t \in T\}$ induces a map $N_{Y_n}$ with values in $\ell^{\infty}(T)$. The second inequality implies that all sample paths of $N_{Y_n}$ are uniformly $\rho_n$-equicontinuous in probability. Hence, each $N_{Y_n}$ is tight and Borel measurable (Example 1.5.10 of Van der Vaart and Wellner (2023)).

Proof of Proposition 2.21. We derive
\[
\mathbb{E}\Bigl[\sup_{\rho_n(s,t) \le \delta} |N_{Y_n}(t) - N_{Y_n}(s)|\Bigr] \le C \int_0^{\delta} \sqrt{\ln N(\epsilon, T, \rho_n)}\, d\epsilon
\]
for some constant $C$ independent of $n$, by Corollary 2.2.9 of Van der Vaart and Wellner (2023). By (iii), for every sequence $\delta \to 0$ there exists $\epsilon(\delta) \to 0$ such that $d(s,t) < \delta$ implies $\rho_n(s,t) < \epsilon(\delta)$. Accordingly,
\[
\limsup_n \mathbb{E}\Bigl[\sup_{d(s,t) \le \delta} |N_{Y_n}(t) - N_{Y_n}(s)|\Bigr]
\le \limsup_n \mathbb{E}\Bigl[\sup_{\rho_n(s,t) \le \epsilon(\delta)} |N_{Y_n}(t) - N_{Y_n}(s)|\Bigr]
\le \limsup_n C \int_0^{\epsilon(\delta)} \sqrt{\ln N(\epsilon, T,
|
https://arxiv.org/abs/2505.02197v1
|
ρ_n)) dϵ. Taking the limit δ → 0, the right-hand side converges to zero by (ii). Together with Markov's inequality, we obtain that NY_n is asymptotically uniformly d-equicontinuous in probability. By Corollary A.6, for fixed t ∈ T the sequence NY_n(t) is relatively asymptotically tight for all t ∈ T if sup_n Var[Y_n(t)] < ∞. Since this sequence is R-valued, relative asymptotic tightness, relative compactness and asymptotic tightness agree. Then, Theorem 1.5.7 of Van der Vaart and Wellner (2023) proves that NY_n is asymptotically tight.

A.4. Relative Lindeberg CLT

Theorem A.7 (Relative Lindeberg CLT). Let X_{n,1}, . . . , X_{n,k_n} be a triangular array of independent random vectors with finite variance. Assume

(1/k_n) Σ_{i=1}^{k_n} E[∥X_{n,i}∥² 1{∥X_{n,i}∥² > k_n ϵ}] → 0

for all ϵ > 0, and for all n ∈ N, l = 1, . . . , d,

(1/k_n) Σ_{i=1}^{k_n} Var[X^{(l)}_{n,i}] ≤ K ∈ R.

Then, the scaled sample average √k_n (X̄_n − E[X̄_n]) satisfies a relative CLT.

Proof. Let k_{l_n} be a subsequence of k_n such that

(1/k_{l_n}) Σ_{i=1}^{k_{l_n}} Cov[X_{l_n,i}] → Σ

converges. Observe that the Lindeberg condition implies

(1/k_{l_n}) Σ_{i=1}^{k_{l_n}} E[∥X_{l_n,i}∥² 1{∥X_{l_n,i}∥² > k_{l_n} ϵ}] → 0

for all ϵ > 0. We apply Proposition 2.27 of Van der Vaart (2000) to the triangular array Y_{n,1}, . . . , Y_{n,k_{l_n}} with Y_{n,i} = k_{l_n}^{−1/2} X_{l_n,i} to derive

√k_{l_n} (X̄_{l_n} − E[X̄_{l_n}]) →_d N(0, Σ).

By (ii) of Proposition 2.19 we derive the claim.

B. Tightness under bracketing entropy conditions

The proof of Theorem 3.4 is based on a long sequence of well-known arguments: We group the observations X_{n,i} in alternating blocks of equal size and apply maximal coupling. This yields random variables X*_{n,i} whose corresponding blocks are independent, and we obtain the empirical process G*_n where X_{n,i} is replaced by X*_{n,i}. Because G*_n consists of independent blocks, we can derive a Bernstein-type inequality bounding the first moment of G*_n in terms of F_n, provided that F_n is finite (Lemma B.2).
For any fixed n, we use a chaining argument in order to reduce to finite Fnwhich, in combination with the Bernstein inequality, yields a bound of the first moment of G∗ nin terms of the bracketing entropy (Theorem B.6). Under the conditions of Theorem 3.4, this yields asymptotic equicontinuity of Gnwhich implies relative compactness of Gn. B.1. Coupling Letmnbe a sequence in N. Suppose for simplicity that knis a multiple of 2 mnand group the observations Xn,1, . . . , X n,knin alternating blocks of size mn. By maximal coupling (Rio, 2017, Theorem 5.1), there are random vectors U∗ n,j= X∗ n,(j−1)mn+1, . . . , X∗ n,jm n ∈ Xn such that •Un,j= Xn,(j−1)mn+1, . . . , X n,jm nd=U∗ n,jfor every j= 1, . . . , m n, •each of the sequences U∗ n,2j j=1,...,n/ (2mn)and U∗ n,2j−1 j=1,...,n/ (2mn)are indepen- dent, •P Xn,j̸=X∗ n,j ≤βn(mn) for all j, •P ∃j:Un,j̸=U∗ n,j ≤(kn/mn)βn(mn). 34 Define the coupled empirical process G∗ n∈ℓ∞(T) asGn, but with all Xn,jreplaced by X∗ n,j. In what follows, we will replace E∗(indicating the outer expectation) by Efor better readability. We will provide an upper bound on E∥G∗ n∥Tfor any fixed n. In such case, we identify G∗ nwith the empirical process G∗ n(f) =1√knknX i=1f X∗ n,i −E f X∗ n,i indexed by the function class Fn.
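The alternating-block grouping described above can be sketched concretely; the following is an illustrative helper (the names and the explicit even/odd split are ours, not the paper's), assuming as in the text that n is a multiple of 2m:

```python
# Sketch of the alternating-block grouping used before maximal coupling.
# After coupling, the blocks within each of the two collections below are
# independent, which is what the Bernstein bound of Lemma B.2 exploits.

def alternating_blocks(n, m):
    """Split indices 0..n-1 into blocks of length m; return (odd, even) block lists."""
    assert n % (2 * m) == 0, "the text assumes n is a multiple of 2m"
    blocks = [list(range(j * m, (j + 1) * m)) for j in range(n // m)]
    odd_blocks = blocks[0::2]   # U_{n,1}, U_{n,3}, ... (1-based odd blocks)
    even_blocks = blocks[1::2]  # U_{n,2}, U_{n,4}, ...
    return odd_blocks, even_blocks

odd, even = alternating_blocks(n=12, m=3)
# odd  -> [[0, 1, 2], [6, 7, 8]]
# even -> [[3, 4, 5], [9, 10, 11]]
```

Each collection contains n/(2m) blocks, matching the index ranges j = 1, …, n/(2m) in the coupling construction above.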
For fixed n, we drop the index n, i.e., write F_n = F, X_{n,i} = X_i, m_n = m, etc., and assume without loss of generality that k_n = n.

Lemma B.1. For any class F of functions f : X → R with sup_{f∈F} ∥f∥_∞ ≤ B and any integer 1 ≤ m ≤ n/2, it holds that

E∥G_n∥_F ≤ E∥G*_n∥_F + B√n β_n(m).

Proof. We have

E∥G_n∥_F ≤ E sup_{f∈F} |G*_n f| + B√n E[(1/n) Σ_{i=1}^n 1(X_i ≠ X*_i)] ≤ E∥G*_n∥_F + B√n β_n(m).

B.2. Bernstein inequality

Lemma B.2. Let F be a finite set of functions f : X → R with ∥f∥_∞ ≤ B and

(1/n) Σ_{i,j=1}^n |Cov[f(X_i), f(X_j)]| ≤ Kδ²

for all f ∈ F. Then, for any 1 ≤ m ≤ n/2 it holds that

E∥G_n∥_F ≲ δ√(ln_+(|F|)) + mB ln_+(|F|)/√n + B√n β_n(m),

where the constant only depends on K and ln_+(x) = ln(1 + x).

Proof. We have E∥G_n∥_F ≤ E∥G*_n∥_F + B√n β_n(m) by Lemma B.1. Defining

A_{j,f} = Σ_{i=1}^m [f(X*_{(j−1)m+i}) − E f(X*_{(j−1)m+i})],

we can write

√n |G*_n(f)| = |Σ_{j=1}^{n/m} A_{j,f}| ≤ |Σ_{j=1}^{n/(2m)} A_{2j,f}| + |Σ_{j=1}^{n/(2m)} A_{2j−1,f}|.

The random variables in the sequence (A_{2j,f})_{j=1}^{n/(2m)} are independent and so are those in (A_{2j−1,f})_{j=1}^{n/(2m)}. We apply Bernstein's inequality (Van der Vaart and Wellner, 2023, Lemma 2.2.10). Note that |A_{j,f}| ≤ 2mB, hence E[|A_{j,f}|^k] ≤ (2mB)^{k−2} Var[A_{j,f}] for k ≥ 2. We obtain

(2m/n) Σ_{j=1}^{n/(2m)} E[|A_{j,f}|^k] ≤ (2mB)^{k−2} (2m/n) Σ_{j=1}^{n/(2m)} Var[A_{j,f}] ≤ (2mB)^{k−2} (2m/n) Σ_{i,j=1}^n |Cov[f(X_i), f(X_j)]| ≤ (2mB)^{k−2} 2Kmδ².

Using Bernstein's inequality for independent random variables gives

P(|Σ_{j=1}^{n/(2m)} A_{2j,f}| > t) ≤ 2 exp(−(1/2) t² / (Knδ² + 2tmB)),

and the same bound holds for the odd sums. Altogether we get

P(|G*_n(f)| > t) ≤ P(|Σ_j A_{2j,f}| > t√n/2) + P(|Σ_j A_{2j−1,f}| > t√n/2) ≤ 4 exp(−(1/8) t² / (Kδ² + tmB/√n)).

The result follows upon converting this to a bound on the expectation (e.g., Van der Vaart and Wellner, 2023, Lemma 2.2.13).

B.3. Chaining

We will abbreviate

∥f∥_{a,n} = ((1/n) Σ_{i=1}^n E[|f(X_i)|^a])^{1/a}, N_[](ϵ) = N_[](ϵ, F, ∥·∥_{γ,n}).

Let us first collect some properties of ∥·∥_{γ,n} and ∥·∥_{γ,∞}.

Lemma B.3. The following holds:

(i) ∥·∥_{a,n} defines a semi-norm.
(ii) ∥·∥_{a,n} ≤ ∥·∥_{b,n} for a ≤ b.
(iii) ∥f 1_{|f|>K}∥_{a,n} ≤ K^{(a−b)/a} ∥f∥^{b/a}_{b,n} for all K > 0 and a ≤ b.

Proof.
Positivity and homogeneity of ∥·∥_{γ,n} follow clearly, and the triangle inequality follows by

∥h + g∥_{γ,n} = ((1/n) Σ_{i=1}^n ∥(h+g)(X_i)∥_γ^γ)^{1/γ} ≤ ((1/n) Σ_{i=1}^n (∥h(X_i)∥_γ + ∥g(X_i)∥_γ)^γ)^{1/γ} ≤ ((1/n) Σ_{i=1}^n ∥h(X_i)∥_γ^γ)^{1/γ} + ((1/n) Σ_{i=1}^n ∥g(X_i)∥_γ^γ)^{1/γ} (Minkowski's inequality) = ∥h∥_{γ,n} + ∥g∥_{γ,n}

for all h, g. Next,

∥f∥_{a,n} = ((1/n) Σ_{i=1}^n ∥f(X_i)∥_a^a)^{1/a} ≤ ((1/n) Σ_{i=1}^n ∥f(X_i)∥_b^a)^{1/a} ≤ ((1/n) Σ_{i=1}^n ∥f(X_i)∥_b^b)^{1/b}

by Jensen's inequality. Lastly, note K^{b−a} |f(X_i) 1_{|f(X_i)|>K}|^a ≤ |f(X_i)|^b. Thus,

∥f 1_{|f|>K}∥_{a,n} = ((1/n) Σ_{i=1}^n E[|f(X_i) 1_{|f(X_i)|>K}|^a])^{1/a} ≤ K^{(a−b)/a} ((1/n) Σ_{i=1}^n E[|f(X_i)|^b])^{1/a} = K^{(a−b)/a} ∥f∥^{b/a}_{b,n}.

Theorem B.4. Let F be a class of functions f : X → R with envelope F and, for some γ ≥ 2,

∥f∥_{γ,n} ≤ δ and (1/n) Σ_{i,j=1}^n |Cov[h(X_i), h(X_j)]| ≤ K_1 ∥h∥²_{γ,n}

for all f ∈ F and all bounded measurable h : X → R. Suppose that sup_n β_n(m) ≤ K_2 m^{−ρ} for some ρ ≥ γ/(γ−2). Then, for any n ≥ 5, m ≥ 1, B ∈ (0,∞),

E∥G_n∥_F ≲ ∫_0^δ √(ln_+ N_[](ϵ)) dϵ + mB ln_+ N_[](δ)/√n + √n B β_n(m) + √n ∥F 1{F > B}∥_{1,n} + √n N_[]^{−1}(e^n)

with constants only depending on K_1, K_2. If the integral is finite, then √n N_[]^{−1}(e^n) → 0 for n → ∞.

Let us first derive some useful corollaries.

Corollary B.5. Let F be a class of functions f : X → R with envelope F, the X_{n,i} are independent, and ∥f∥_{2,n} ≤ δ for all f ∈ F. Then, for any n ≥ 5, B ∈ (0,∞),

E∥G_n∥_F ≲ ∫_0^δ √(ln_+ N_[](ϵ, F, ∥·∥_{2,n})) dϵ + B ln_+ N_[](δ, F, ∥·∥_{2,n})/√n + √n ∥F 1{F > B}∥_{1,n} + √n N_[]^{−1}(e^n)

with constants only depending on K_1, K_2.

Proof. It holds that

(1/n) Σ_{i,j=1}^n |Cov[h(X_i), h(X_j)]| = (1/n) Σ_{i=1}^n Var[h(X_i)] ≤ ∥h∥²_{2,n},

and the β-coefficients are 0 for
all m≥1. Applying Theorem B.4 with m= 1 yields the claim. Theorem B.6. LetFbe a class of functions f:X →Rwith envelope Fand for some γ >2, ∥f∥γ,n≤δ1 nnX i,j=1|Cov[h(Xi), h(Xj)]| ≤K1∥h∥2 γ,n 38 for all f∈ F andh:X →Rbounded and measurable. Suppose that max nβn(m)≤ K2m−ρfor some ρ≥γ/(γ−2). Then, for any n≥5, E∥Gn∥F≲Zδ 0q ln+N[](ϵ)dϵ+∥F∥γ,n[lnN[](δ)][1−1/(ρ+1)](1 −1/γ) n−1/2+[1−1/(ρ+1)](1 −1/γ)+√nN−1 [](en). with constants only depending on K1, K2. In particular, if the integral is finite, ∥F∥γ,∞<∞,ρ > γ/ (γ−2)andK1, K2can be chosen independent of n, then lim sup n→∞E∥Gn∥F≲Zδ 0q lnN[](ϵ)dϵ. Proof. It holds ∥F1{F > B }∥1,n≤√n∥F∥γ γ,n Bγ−1. By Theorem B.4 E[∥Gn∥F]≲Zδ 0q ln+N[](ϵ)dϵ+mBln+N[](δ)√n+√nBβ n(m) +√n∥F∥γ γ,n Bγ−1+√nN−1 [](en). Choose m= (n/ln+N[](δ))1/(ρ+1), which gives mln+N[](δ)√n+√nβn(m)≲n−1/2+1/(ρ+1)[ln+N[](δ)]1−1/(ρ+1), and, thus, E[∥Gn∥F]≲Zδ 0q ln+N[](ϵ)dϵ+B[ln+N[](δ)]1−1/(ρ+1) n1/2−1/(ρ+1)+√n∥F∥γ γ,n Bγ−1+√nN−1 [](en). Next, choose B= n[1−1/(ρ+1)]∥F∥γ γ,n [ln+N[](δ)]1−1/(ρ+1)!1/γ This gives B[ln+N[](δ)]1−1/(ρ+1) n1/2−1/(ρ+1)+√n∥F∥γ γ,n Bγ−1=∥F∥γ,n[ln+N[](δ)][1−1/(ρ+1)](1 −1/γ) n−1/2+[1−1/(ρ+1)](1 −1/γ). Lastly, ρ > γ/ (γ−2), then −1/2 + [1 −1/(ρ+ 1)](1 −1/γ)>0, so the second term in the first statement vanishes as n→ ∞ and the last term vanishes since the bracketing integral is finite. Proof of Theorem B.4. Let us first deduce the last statement. If the bracketing integral exists, then it must holdp ln+N[](δ)≲δ−1/(1 +|ln(δ)|) for δ→0, because the upper bound is not integrable. For δ−1=√ nlnn, we have ln +N[](δ)≲nlnn/(1 + ln n)2= o(n). So for large n, it must hold N−1 [](en)≲1/√ nlnn→0. We now turn to the proof of the first statement. 39 Truncation We first truncate the function class Fin order to apply Bernstein’s inequality in com- bination with a chaining argument. It holds E∥Gn∥F≤E ∥Gn(f1{|F| ≤B})∥F +E ∥Gn(f1{|F|> B})∥F , and E ∥Gn(f1{|F|> B})∥F ≤21√nnX i=1E F(Xi)1{|F(Xi)|> B} = 2√n∥F1{F > B }∥1,n. 
In summary,

E∥G_n(f)∥_F ≤ E∥G_n(f 1{|F| ≤ B})∥_F + 2√n ∥F 1{F > B}∥_{1,n}.

Note that |f 1{|F| ≤ B}| ≤ F 1{|F| ≤ B} ≤ B. By replacing F with F_trun = {f 1{|F| ≤ B} : f ∈ F}, we may without loss of generality assume that F has an envelope with ∥F∥_∞ ≤ B. Observe that the conditions of the theorem remain true for F_trun and that the bracketing numbers with respect to F_trun are bounded above by the bracketing numbers with respect to F. Lastly, we may assume ln_+ N_{r_0} ≤ n: if n < ln_+ N_{r_0} ≤ ln_+ N_[](δ), then

E∥G_n∥_F ≲ √n B ≤ mB ln_+ N_[](δ)/√n,

which still implies the claim.

Chaining setup

Fix integers r_0 ≤ r_1 such that 2^{−r_0−1} < δ ≤ 2^{−r_0}. For r ≥ r_0 we construct a nested sequence of partitions F = ∪_{k=1}^{N_r} F_{r,k} of F into N_r disjoint subsets such that for each r ≥ r_0

sup_{f,f′ ∈ F_{r,k}} ∥f − f′∥_{γ,n} < 2^{−r}.

Clearly, we can choose the partition such that N_{r_0} ≤ N_[](2^{−r_0}) ≤ N_[](δ). As explained in the proof of Theorem 2.5.8 of Van der Vaart and Wellner (2023), we may assume without loss of generality that

√(ln_+ N_r) ≤ Σ_{k=r_0}^r √(ln_+ N_[](2^{−k})).

Then, by reindexing the double sum,

Σ_{r=r_0}^{r_1} 2^{−r} √(ln_+ N_r) ≤ Σ_{r=r_0}^{r_1} 2^{−r} Σ_{k=r_0}^r √(ln_+ N_[](2^{−k})) = Σ_{k=r_0}^{r_1} √(ln_+ N_[](2^{−k})) Σ_{r=k}^{r_1} 2^{−r} = Σ_{k=r_0}^{r_1} 2^{−k} √(ln_+ N_[](2^{−k})) Σ_{r=k}^{r_1} 2^{−(r−k)} ≲ Σ_{k=r_0}^{r_1} 2^{−k} √(ln_+ N_[](2^{−k})) ≲ ∫_0^δ √(ln_+ N_[](ϵ)) dϵ.

Decomposition

For a given f, suppose that F_{r,k} is the element of the partition that contains f. Note that such F_{r,k} is unique since F_{r,1}, . . . , F_{r,N_r} are disjoint. Define π_r(f) as some fixed element of this set and define ∆_r(f) = sup_{f_1,f_2 ∈ F_{r,k}} |f_1 − f_2|. Set

τ_r = (2^{−r}/m_{r+1}) √(n/ln_+ N_{r+1}), m_r = min(√(ln_+ N_r/n), 1)^{−(γ−2)/(γ−1)}, (4)

and r_1 = −log_2 N_[]^{−1}(e^n). Then ln
_+ N_r ≤ n for all r ≤ r_1. We will frequently apply Bernstein's inequality with m = m_r. Here, note m_r ≤ √(n/ln_+ N_r) ≤ √(n/ln 2) ≤ n/2 for all n ≥ 5. The following (in)equalities are the reason for the choices of τ_r and m_r: for r such that ln_+ N_{r+1} ≤ n, i.e., r < r_1, it holds that

m_r τ_{r−1} ln_+ N_r / √n = 2^{−r+1} √(ln_+ N_r),

√n 2^{−rγ} τ_r^{−(γ−1)} = √n 2^{−rγ} 2^{r(γ−1)} m_{r+1}^{γ−1} (ln_+ N_{r+1}/n)^{(γ−1)/2} = 2^{−r} √(ln_+ N_{r+1}) (√(ln_+ N_{r+1}/n))^{γ−2} m_{r+1}^{γ−1} = 2^{−r} √(ln_+ N_{r+1}),

√n τ_{r−1} β_n(m_r) = √n 2^{−r+1} (1/m_r) √(n/ln_+ N_r) β_n(m_r) ≲ 2^{−r+1} (1/m_r) n (ln_+ N_r)^{−1/2} m_r^{−ρ} = 2^{−r+1} √(ln_+ N_r) (n/ln_+ N_r) m_r^{−ρ−1} = 2^{−r+1} √(ln_+ N_r) m_r^{2(γ−1)/(γ−2)} m_r^{−ρ−1} ≤ 2^{−r+1} √(ln_+ N_r),

where the last inequality holds because m_r ≥ 1 and ρ ≥ γ/(γ−2), hence m_r^{2(γ−1)/(γ−2)−ρ−1} ≤ 1. Decompose

f = π_{r_0}(f) + [f − π_{r_0}(f)] 1{∆_{r_0}(f)/τ_{r_0} > 1}
+ Σ_{r=r_0+1}^{r_1} [f − π_r(f)] 1{max_{r_0≤k<r} ∆_k(f)/τ_k ≤ 1, ∆_r(f)/τ_r > 1}
+ Σ_{r=r_0+1}^{r_1} [π_r(f) − π_{r−1}(f)] 1{max_{r_0≤k<r} ∆_k(f)/τ_k ≤ 1}
+ [f − π_{r_1}(f)] 1{max_{r_0≤k≤r_1} ∆_k(f)/τ_k ≤ 1}
= T_1(f) + T_2(f) + T_3(f) + T_4(f).

To see this, note that if ∆_{r_0}(f)/τ_{r_0} > 1, all terms but T_1(f) vanish and T_1(f) = f. Otherwise, define r̂ as the maximal number r_0 ≤ r ≤ r_1 such that max_{r_0≤k≤r} ∆_k(f)/τ_k ≤ 1. Then, T_1(f) = π_{r_0}(f), and if r̂ < r_1, then

T_2(f) = f − π_{r̂+1}(f), T_3(f) = π_{r̂+1}(f) − π_{r_0}(f), T_4(f) = 0.

If r̂ = r_1, then

T_2(f) = 0, T_3(f) = π_{r_1}(f) − π_{r_0}(f), T_4(f) = f − π_{r_1}(f).

We prove the theorem by separately bounding the four terms E∥G_n T_j∥_F. Note that G_n is additive by construction, i.e., G_n(f + g) = G_n(f) + G_n(g).

Bounding T_1

Note that for every |g| ≤ h it follows that |G_n(g)| ≤ |G_n(h)| + 2√n ∥h∥_{1,n}. In combination with the triangle inequality we obtain

∥G_n T_1∥_F ≤ ∥G_n π_{r_0}∥_F + ∥G_n ∆_{r_0}∥_F + 2√n sup_{f∈F} ∥∆_{r_0}(f) 1{∆_{r_0}(f)/τ_{r_0} > 1}∥_{1,n}.

The sets {∆_{r_0}(f) : f ∈ F} and {π_{r_0}(f) : f ∈ F} contain at most N_{r_0} different functions each. The construction implies

∥π_{r_0}(f)∥_{γ,n} ≤ δ, ∥π_{r_0}(f)∥_∞ ≲ B, ∥∆_{r_0}(f)∥_{γ,n} ≤ 2δ, ∥∆_{r_0}(f)∥_∞ ≲ B.

Now the Bernstein bound from Lemma B.2 gives

E∥G_n π_{r_0}∥_F + E∥G_n ∆_{r_0}∥_F ≲ δ√(ln_+ N_{r_0}) + mB ln_+ N_{r_0}/√n + √n B β_n(m) ≤ δ√(ln_+ N_[](δ)) + mB ln_+ N_[](δ)/√n + √n B β_n(m).

Since the bracketing numbers are decreasing,

√(ln_+ N_[](δ)) ≤ δ^{−1} ∫_0^δ √(ln_+ N_[](ϵ)) dϵ,

so

E∥G_n π_{r_0}∥_F + E∥G_n ∆_{r_0}∥_F ≲ ∫_0^δ √(ln_+ N_[](ϵ)) dϵ + mB ln_+ N_[](δ)/√n + √n B β_n(m).
Recall ln +Nr+1≤nfor any r < r 1. For any such r, (iii) of Lemma B.3 gives √nsup f∈F∥∆r(f)1{∆r(f)/τr>1}∥1,n≤√nτ−(γ−1) r sup f∈F∥∆r(f)∥γ γ,n ≤√nτ−(γ−1) r 2−rγ so that the final upper bound becomes √nsup f∈F∥∆r(f)1{∆r(f)/τr>1}∥1,n≲2−rp ln+Nr+1, (5) 43 for any r < r 1. In particular, using δ≤2−r0, we get √nsup f∈F∥∆r0(f)1{∆r0(f)/τr0>1}∥1,n≲δq ln+N[](δ)≤Zδ 0q ln+N[](ϵ)dϵ. Combined, E∥GnT1∥F≲Zδ 0q ln+N[](ϵ)dϵ+mBln+N[](δ)√n+√nBβ n(m). Bounding T2 Next, E∥GnT2∥F≤r1X r=r0+1E Gn∆r1 max r0≤k<r∆k/τk≤1,∆r/τr>1 F + 2√nr1X r=r0+1sup f∈F ∆r(f)1 max r0≤k<r∆k(f)/τk≤1,∆r/τr>1 1,n =T2,1+T2,2. We start by bounding the first term. It holds ∆r(f)1 max r0≤k<r∆k(f)/τk≤1,∆r(f)/τr>1 γ,n≤2−r by construction of ∆ r(f). Since the partitions are nested ∆ r≤∆r−1. Thus, ∆r1 max r0≤k<r∆k/τk≤1,∆r/τr>1 F≤τr−1. Since there are at most Nrfunctions in {∆r(f) :f∈ F} , the Bernstein bound from Lemma B.2 yields T2,1≲r1X r=r0+1 2−rp ln+Nr+mrτr−1√nln+Nr+√nτr−1βn(mr) ≲r1X r=r0+12−rp ln+Nr ≲Zδ 0q ln+N[](ϵ)dϵ. Further, (5) gives √nsup f∈F ∆r(f)1 max r0≤k<r∆k(f)/τk≤1,∆r/τr>1 1,n≲2−rp ln+Nr. 44 forr < r 1and √nsup f∈F ∆r1(f)1 max r0≤k<r 1∆k(f)/τk≤1,∆r1/τr1>1 1,n≤√nsup f∈F∥∆r1(f)∥γ γ,n ≤√n2−r1 =√nN−1 [](en) by the definition of r1. Thus, T2,2≲r1−1X r=r0+12−rp ln+Nr+√nN−1 [](en)≲Zδ 0q ln+N[](ϵ)dϵ+√nN−1 [](en), and, in summary, E∥GnT2∥F≲Zδ 0q ln+N[](ϵ)dϵ+√nN−1 [](en). Bounding T3 Next, E∥GnT3∥F≤r1X r=r0+1E Gn[πr−πr−1]1 max r0≤k<r∆k/τk≤1 F. There are at most Nrfunctions πr(f) and at most Nr−1functions πr−1(f) asfranges overF. Since the partitions are nested, |πr(f)−πr−1(f)| ≤∆r−1(f) and |πr(f)−πr−1(f)|1 max r0≤k<r∆k(f)/τk≤1 ≤ |∆r−1(f)|1{∆r−1(f)/τr−1≤1} ≤τr−1. Further, ∥πr(f)−πr−1(f)∥γ,n≤ ∥∆r−1(f)∥γ,n≤2−r+1. Just as for T2,1, the Bernstein bound (Lemma B.2) gives E∥GnT3∥F≲Zδ 0q ln+N[](ϵ)dϵ.
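Bounds such as the one for T_{2,1} above reduce to dyadic sums of the form Σ_r 2^{−r}√(ln_+ N_r), which the chaining setup dominates by the entropy integral. The following numerical check of this comparison uses the illustrative polynomial assumption N_[](ε) = ε^{−V} (a standard example, not assumed anywhere in the text):

```python
import math

# Dyadic entropy sum vs. entropy integral for the polynomial bracketing bound
# N(eps) = eps**(-V).  V, r0, r1 and the Riemann grid are illustrative choices.

V, r0, r1 = 2.0, 1, 40
delta = 2.0 ** (-r0)

# sum_{r=r0}^{r1} 2^-r * sqrt(ln N(2^-r)), with ln N(2^-r) = V * r * ln 2
dyadic = sum(2.0 ** (-r) * math.sqrt(V * r * math.log(2)) for r in range(r0, r1 + 1))

# midpoint Riemann sum for int_0^delta sqrt(ln N(eps)) d(eps)
steps = 100_000
h = delta / steps
integral = sum(math.sqrt(-V * math.log((k + 0.5) * h)) * h for k in range(steps))

# the dyadic sum is dominated by the integral up to a universal constant
assert dyadic <= 4 * integral
```

The underlying argument is the one used in the chaining setup: each term 2^{−r}√(ln N(2^{−r})) is at most twice the integral of √(ln N(ε)) over [2^{−r−1}, 2^{−r}], since N is decreasing.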
Bounding T4 Finally, E∥GnT4∥F=E Gn[f−πr1(f)]1 max r0≤k≤r1∆k(f)/τk≤1 f∈F ≲E∥Gn∆r11{∆r1≤τr1}∥F+√nsup f∈F∥∆r1(f)1{∆r1(f)≤τr1}∥1,n ≲E∥Gn∆r0∥F+√nτr1. 45 and the first term is bounded by T1. Finally, observe that, by the definition of r1, √nτr1≤√n2−r1=√nN−1 [](en). Lemma B.7. Forγ >2and nX i=1βn(i)γ−2 γ≤K it holds 1 nnX i,j=1|Cov[h(Xi), h(Xj)]| ≤4K∥h∥2 γ,n for all h:X →Rmeasurable. Furthermore, if supn∈Nmax m≤nmρβn(m)<∞for some ρ > γ/ (γ−2), then, sup nnX i=1βn(i)γ−2 γ<∞. Proof. Let us first prove the latter claim. Note that if supn∈Nmax m≤nmρβn(m)<∞ thenPn i=1βn(i)γ−2 γ≲Pn i=1m−ργ−2 γand the latter display converges for ρ > γ/ (γ−2). For the first claim, it holds 1 nnX i,j=1|Cov[h(Xi), h(Xj)]| ≤1 nnX i,j=1βn(|i−j|)γ−2 γ∥h(Xi)∥γ∥h(Xj)∥γ ≤1 nnX i,j=1βn(|i−j|)γ−2 γ(∥h(Xi)∥2 γ+∥h(Xj)∥2 γ) ≤1 nnX i=1∥h(Xi)∥2 γnX j=1βn(|i−j|)γ−2 γ+1 nnX j=1∥h(Xj)∥2 γnX i=1βn(|i−j|)γ−2 γ ≤4K nnX i=1∥h(Xi)∥2 γ ≤4K∥h∥2 γ,n. by Theorem 3 of Doukhan (2012) and where the last inequality follows from 1 nnX i=1∥h(Xi)∥2 γ!1/2 = 1 nnX i=1E[|h(Xi)|γ]2/γ!1/2 ≤ 1 nnX i=1E[|h(Xi)|γ]!1/γ by 2/γ≤1 and Jensen’s inequality. 46 Remark B.8. Given a semi-metric donFinduced by a semi-norm ∥ · ∥ satisfying |f| ≤ |g| ⇒ ∥ f∥ ≤ ∥ g∥, any2ε-bracket [f, g]is contained in the ϵ-ball around (f−g)/2. Then, N(ε,F, d)≤N[](2ε,F,∥ · ∥). Both, ∥ · ∥ γ,nand∥ · ∥ γ,∞, satisfy this property. B.4. Proof of Theorem 3.4 We first prove the existence of an asymptotically tight sequence of GPs. We will conclude by Propositions 2.20 and 2.21: There exists K∈Rsuch that knX i=1βn(i)γ−2 γ≤K for all nby Lemma B.7. It holds ρn(s, t)2=Var[Gn(s)−Gn(t)] =1 knVar"knX i=1(fn,s−fn,t)(Xn,i)# ≤1 knknX i,j=1|Cov [(fn,s−fn,t)(Xn,i),(fn,s−fn,t)(Xn,j)]| ≤4K∥fn,s−fn,t∥2 γ,n = 4Kdn(s, t)2 by Lemma B.7. Thus, N(2√ Kϵ, T, ρ n)≤N(ϵ, T, d n)≤N[](2ϵ,Fn,∥ · ∥ γ,n) by Remark B.8. Next, observe that lnN[](ϵ,Fn,∥ · ∥ γ,n) = 0 for all ε≥2∥F∥γ,n. Thus, Z∞ 0q lnN[](ϵ,Fn,∥ · ∥ γ,n)dϵ <∞ for all nby the entropy condition (iii). In summary, 47 •(T, d) is totally bounded. 
•limn→∞Rδn 0p lnN(ϵ, T, ρ n)dϵ≲limn→∞Rδn 0p lnN[](ϵ,Fn,∥ · ∥ γ,n)dϵ= 0 for all δn↓0. •limn→∞supd(s,t)<δnρn(s, t)≤limn→∞supd(s,t)<δn2√ Kdn(s, t) = 0 for every δn↓0. •R∞ 0p lnN(ϵ, T, ρ n)dϵ≲R∞ 0p lnN[](ϵ,Fn,∥ · ∥ γ,n)dϵ <∞for all n. by the assumptions. Lastly, sup nVar[Gn(t)]≲sup n∥F∥2 γ,n=∥F∥2 γ,∞<∞ for all t∈Tby the same argument as above. Combined, we derive the claim by Propositions 2.20 and 2.21. Next, we prove asymptotic tightness of Gn. We derive that supnE |Gn(s)|2 <∞for alls∈Tagain by the moment condition (i) and the summability condition Lemma B.7. Thus, each Gn(s) is asymptotically tight. By Markov’s inequality and Theorem 1.5.7 of Van der Vaart and Wellner (2023), it suffices to prove uniform d-equicontinuity, i.e. that lim sup n→∞E∗sup d(f,g)<δn|Gn(s)−Gn(t)|= 0 for all δn↓0. By lim n→∞sup d(s,t)<δndn(s, t) = 0 for all δn↓0, for every sequence δ→0 there exists a sequence ϵ(δ)→0 such that d(s, t)< δimplies dn(s, t)< ϵ(δ). Thus, lim sup n→∞E∗sup d(s,t)<δn|Gn(s)−Gn(t)| ≤lim sup n→∞E∗sup dn(s,t)<ϵ(δn)|Gn(s)−Gn(t)|. Accordingly, it suffices to prove that lim sup n→∞E∗sup dn(s,t)<δn|Gn(s)−Gn(t)|= 0. For fixed nwe again identify Gnwith the empirical process Gnindexed by Fnand similarly for dn. Note that Gn(f)−Gn(g) =Gn(f−g) and the bracketing number with respect to the function class
Fn,δ={f−g:f, g∈ F n,∥f−g∥γ,n< δ} satisfies N[](ϵ,Fn,δ,∥ · ∥ γ,n)≤N[](ϵ/2,Fn,∥ · ∥ γ,n)2. Indeed, given ϵ/2 brackets [ lf, uf] and [ lg, ug] forfandg, [lf−ug, uf−lg] is an ε-bracket forf−g. By Theorem B.6, lim sup nE∗sup dn(s,t)<δn|Gn(s)−Gn(t)|≲lim n→∞Z2δn 0q lnN[](ϵ,Fn,∥ · ∥ γ,n)dϵ = 0 by (iii) which proves the claim. 48 C. Proofs for relative CLTs under mixing conditions C.1. Proof of Theorem 3.2 We first restrict to univariate random variables which, in combination with the relative Cramer-Wold device (Corollary A.4), yields Theorem 3.2. The idea is to split the scaled sample average 1√nnX i=1(Xn,i−E[Xn,i]) =1√nrnX i=1 Zn,i−E[Zn,i] +˜Zn,i−Eh ˜Zn,ii into alternating long and short block sums. By considering a small enough length of the short blocks, the short block sum are asymptotically negligible. It then suffices to prove a relative CLT for the sequence of long block sums (Lemma C.1) . By maximal coupling and Lemma C.1, the sequence of long block sums can be considered independent and Lindeberg’s CLT (Theorem A.7) applies. Lemma C.1. LetYnandY∗ nbe sequences of random variables in Rsuch that (i)|Yn−Y∗ n|P→0, (ii)supnVar [Yn]<∞and (iii)|Var [Yn]−Var [Y∗ n]| →0. Then, Ynsatisfies a relative CLT if and only if Y∗ ndoes. Proof. It suffices to prove the if direction since the statement is symmetric. Let knbe a subsequence of nandlnbe a further subsequence such that Y∗ ln→dNconverges weakly to some Gaussian with Var Y∗ ln →Var [N]. Such lnexists by (iii) of Proposition 2.19. Since |Yn−Y∗ n|P→0, we obtain Yln→dN. Note Var [N] = lim n→∞Var Y∗ ln = lim n→∞Var [Yln] by assumption. Thus, Ynsatisfies a relative CLT by (iii) of Proposition 2.19. Theorem C.2 (Univariate relative CLT) .LetXn,1, . . . , X n,knbe a triangular array of univariate random variables. Let α∈[0,1/2)and1 + (1 −2α)−1< γ. Assume that (i)k−1 nPkn i,j=1|Cov [Xn,i, Xn,j]| ≤Kfor all n. (ii)supn,iE[|Xn,i|γ]<∞. (iii) knβn(kα n)γ−2 γ→0. 49 Then, the scaled sample average√kn Xn−E Xn satisfies a relative CLT. 
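The proof below splits the k_n observations into r_n pairs of a long block of length p_n and a short block of length q_n. A small sketch with illustrative exponents (α, δ chosen to satisfy 0 < δ < 1/2 − α; the grid of sample sizes is arbitrary) shows that the blocks exhaust the sample and that the short-block fraction vanishes:

```python
# Block sizes from the proof of Theorem C.2; alpha, delta and the k_n grid are
# illustrative placeholders satisfying 0 < delta < 1/2 - alpha.

alpha, delta = 0.2, 0.1
fractions = []
for k in (10**4, 10**6, 10**8):
    q = k ** alpha               # short block length q_n = k_n**alpha
    p = k ** (0.5 - delta) - q   # long block length p_n = k_n**(1/2 - delta) - q_n
    r = k ** (0.5 + delta)       # number of long/short block pairs r_n
    assert abs(r * (p + q) - k) < 1e-6 * k   # the r_n block pairs exhaust the sample
    fractions.append(r * q / k)  # fraction of the sample in short blocks

# r_n q_n / k_n = k_n**(1/2 + delta + alpha - 1): a negative exponent here,
# so the short-block contribution is asymptotically negligible.
assert fractions[0] > fractions[1] > fractions[2]
```

This is exactly the O(k_n^{1/2+δ+α−1}) = o(1) computation used in the proof to discard the short-block sums.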
Proof. There exists some δwith 0 < δ < 1/2−αand 1 + (1 −2α)−1<1 + (2 δ)−1< γ. Define qn=kα n,pn=k1/2−δ n−qnandrn=k1/2+δ n. Group the observations in alternating blocks of size pnresp. qn, i.e. Un,i= Xn,1+(i−1)(pn+qn), . . . , X n,pn+(i−1)(pn+qn) ∈Rpn(long blocks) ˜Un,i= Xn,1+ipn+(i−1)qn, . . . , X n,qn+ipn+(i−1)qn ∈Rqn(short blocks) Define Zn,i=pnX j=1U(j) n,i (long block sums) ˜Zn,i=qnX j=1˜U(j) n,i (short block sums) where the upper index jdenotes the j-th component. Then, knX i=1(Xn,i−E[Xn,i]) =rnX i=1 Zn,i−E[Zn,i] +˜Zn,i−Eh ˜Zn,ii and it holds k−1 nVar"knX i=1Xn,i# ≤k−1 nknX i,j=1|Cov [Xn,i, Xn,j]| ≤K k−1 nVar"rnX i=1Zn,i# ≤K k−1 nCov"rnX i=1˜Zn,i,rnX i=1Zn,i# =O(rnqn/kn) =O k1/2+δ+α−1 n =o(1) k−1 nVar"rnX i=1˜Zn,i# =O(rnqn/kn) =o(1) by assumption. Thus, k−1/2 nrnX i=1 ˜Zn,i−Eh ˜Zn,iiP→0 by Markov’s inequality. Hence, k−1/2 nknX i=1 Xn,i−E[Xn,i] −k−1/2 nrnX i=1 Zn,i−E[Zn,i] P→0. 50 Furthermore, Var" k−1/2 nknX i=1Xn,i# −Var" k−1/2 nrnX i=1Zn,i# =k−1 n Var"rnX i=1˜Zn,i# −2Cov"rnX i=1˜Zn,i,rnX i=1Zn,i# →0. By the previous lemma, k−1/2 nPkn i=1(Xn,i−E[Xn,i]) satisfies a relative CLT if and only ifk−1/2 nPrn i=1Zn,i−E[Zn,i] does. By maximal coupling (Theorem 5.1 of Rio (2017)), for all i= 1, . . . , r nthere exist random vectors
U∗ n,i∈Rpnsuch that •Un,id=U∗ n,i. •the sequence U∗ n,iis independent. •P ∃Un,i̸=U∗ n,i ≤rnβn(qn). Define the coupled long block sums Z∗ n,i=pnX j=1U∗(j) n,i. For all ε >0 we obtain P k−1/2 n rnX i=1Z∗ n,i−rnX i=1Zn,i > ϵ! ≤P ∃Un,i̸=U∗ n,i ≤rnβn(qn) =k1/2+δ nβn(kα n) ≤knβn(kα n)→0. Next, Var"rnX i=1Zn,i# =Var"rnX i=1Z∗ n,i# +rnX i̸=jCov [Zn,i, Zn,j] by independence of Z∗ n,iandPZn,i=PZ∗ n,i. Since supn,k∥Xn,k∥γ<∞and |Cov [Xn,i, Xn,j]| ≤βn(|i−j|)γ−2 γsup n,k∥Xn,k∥γ by Theorem 3. of Doukhan (2012), for i̸=jwe obtain |Cov [Zn,i, Zn,j]| ≤ O p2 nβn(qn)γ−2 γ 1 knrnX i̸=jCov [Zn,i, Zn,j] ≤ O knβn(qn)γ−2 γ =o(1). 51 Thus, 1 kn Var"rnX i=1Zn,i# −Var"rnX i=1Z∗ n,i# →0. Combined with PZn,i=PZ∗ n,i, hence k−1 nVarPrn i=1Z∗ n,i ≤KandE[Zn,i] =E Z∗ n,i , the previous lemma yields that k−1/2 nPrn i=1Zn,i−E[Zn,i] satisfies a relative CLT if and only if k−1/2 nPrn i=1Z∗ n,i−E Z∗ n,i does. Next, the moment assumption together with PZn,i=PZ∗ n,iimply that the sequence (rn/kn)1/2Z∗ n,isatisfies the Lindeberg condition given in Theorem A.7. More specifically, 1 rnrnX i=1Eh |(rn/kn)1/2Z∗ n,i|21{|(rn/kn)1/2Zn,i|2>rnε2}i =1 knrnX i=1E |Zn,i|21{|Zn,i|2>knε2} ≤ϵ1−γ/21 kγ/2 nrnX i=1E[|Zn,i|γ] ≤ϵ1−γ/2Crnpγ n kγ/2 n forC= supiE[|Xn,i|γ]<∞where we used E[|Zn,1|γ] =∥Zn,1∥γ γ = pnX i=1Xn,i γ γ ≤ pnX i=1∥Xn,i∥γ!γ ≤Cpγ n and similarly E[|Zn,i|γ]≤Cpγ n. It holds rnpγ n kγ/2 n≤rnpγ n (rnpn)γ/2=pγ/2 n rγ/2−1 n≤k1/2+δ−δγ n →0. Thus, k−1/2 nrnX i=1 Z∗ n,i−E Z∗ n,i =r−1/2 nrnX i=1(rn/kn)1/2 Z∗ n,i−E Z∗ n,i satisfies a relative CLT which finishes the proof. Proof of Theorem 3.2. Write Sn=1√knknX i=1(Xn,i−E[Xn,i]) 52 for the scaled sample average and Σ nfor its covariance matrix. Let Nn∼ N (0,Σn). By assumption, Σ nis componentwise a bounded sequence, hence, Nnis relatively com- pact. Thus, it suffices to prove Sn↔dNn. By Corollary A.4, this is equivalent to tTSn↔dtTNnfor all t∈Rd. Note that tTNn∼ N 0, tTΣn andtTSnis the scaled sample average associated to tTXn,i. 
Accordingly, it suffices to check the conditions of Theorem C.2 for tTXn,i: The moment and mixing conditions ((ii) and (iii)) of Theorem C.2 follow by assumption. Lastly, k−1 nknX i,j=1 Cov tTXn,i, tTXn,j ≤tTtmax l1,l2k−1 nknX i,j=1 Covh X(l1) n,i, X(l2) n,ji ≤tTtK for all n. This proves (i) of Theorem C.2 and combined we derive the claim. C.2. Proof of Theorem 3.6 We derive Theorem 3.6 from a more general result. Fix some triangular array Xn,1, . . . , X n,kn of random variables with values in a Polish space X. For each n∈Nlet Fn={fn,t:t∈T} be a set of measurable functions from XtoR. Assume that ∪n∈NFnadmits a finite envelope F:X →R. Theorem C.3. Assume that for some γ >2 (i)∥F∥γ,∞<∞. (ii)supn∈Nmax m≤knmρβn(m)<∞for some ρ >2γ(γ−1)/(γ−2)2 (iii)Rδn 0p lnN[](ϵ,Fn,∥ · ∥ γ,n)dϵ→0for all δn↓0and are finite for all n. Denote by dn(s, t) =∥fn,s−fn,t∥γ,n fors, t∈T. Assume that there exists a semi-metric donTsuch that lim n→∞sup d(s,t)<δndn(s, t) = 0 for all δn↓0and(T, d)is totally bounded. Then, the empirical process Gndefined by Gn(t) =1√knknX i=1fn,t(Xn,i)−E[fn,t(Xn,i)] satisfies a relative CLT in ℓ∞(T). 53 Proof. We apply Theorem 3.4 to derive relative compactness of Gnand the existence of an asymptotically tight sequence of tight Borel
measurable GPs NGncorrespond- ing to Gn. Note that each NGn(s) is measurable for all s∈T. Accordingly, NGnis asymptotically measurable, hence, relatively compact by Lemma A.3. According to Corollary 2.18, it remains to prove relative CLTs of the marginals (Gn(t1), . . . ,Gn(td)) for all d∈N,t1, . . . , t d∈T. We apply Theorem 3.2 to the triangular array Yn,1, . . . , Y n,knwith Yn,k= (fn,t1(Xk), . . . , f n,td(Xk)). (ii) of Theorem 3.2 follows by ∥F∥γ,∞<∞. Next, pick ρ−1γ γ−2< α <γ−2 2(γ−1). Such αexists since ρ−1γ γ−2<γ−2 2(γ−1). Then, 1 + (1 −2α)−1< γandknβn(kα n)γ−2 γ≲k1−ραγ−2 γ n →0 sinceγ γ−2< ρα . Lastly, the summability condition on the covariances follows by the summability condition on the β-mixing coefficients (Lemma B.7). Combined, we obtain the claim. Proof of Theorem 3.6. Define the random variables Yn,i= (Xn,i, i)∈ X × N. Note that X ×Nis Polish since XandNare. Define T=S× F,Fn={hn,t:t∈T}with hn,(s,f):X ×N→R,(x, k)7→wn,k(s)f(x) for every ( s, f)∈Twith wn,k= 0 for k > k n. Note that each hn,(s,f)is measurable. Then, the empirical process associated to Yn,iandTis given by Gn. We apply Theorem C.3: set K= max {supn,i,x|wn,i(x)|,∥F∥γ,∞}. Note that F′:X ×N→R,(x, k)7→KF(x) is an envelope of ∪n∈NFnsatisfying condition (i) of Theorem C.3. Since we have σ(Xn,i) =σ((Xn,i, i)), the β-mixing coefficients w.r.t. Yn,iare equal to the β-mixing coefficients w.r.t. Xn,i. Thus, (ii) of Theorem C.3 are satisfied by assumption. Define the semi-metric donTby d (s1, f1),(s2, f2) =dw(s1, s2) +∥f1−f2∥γ,∞. By the entropy condition (iii) and W3 we derive that ( T, d) is totally bounded. By 54 Minkowski’s inequality, we get dn (s1, f1),(s2, f2) =∥hn,(s1,f1)−hn,(s2,f2)∥γ,n = 1 knknX i=1∥wn,i(s1)f1(Xn,i)−wn,i(s2)f2(Xn,i)∥γ γ!1/γ ≤ 1 knknX i=1 ∥F(Xn,i)∥γ|wn,i(s1)−wn,i(s2)|+∥wn,i∥∞∥(f1−f2)(Xn,i)∥γγ!1/γ ≤K 1 knknX i=1|wn,i(s1)−wn,i(s2)|γ!1/γ +K 1 knknX i=1∥(f1−f2)(Xn,i)∥γ γ!1/γ ≤Kdw n(s1, s2) +K∥f1−f2∥γ,∞ which implies lim n→∞sup d(s,t)<δndn(s, t) = 0 for all δn↓0. 
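For the sequential indicator weights w_{n,i}(s) = 1{i ≤ ⌊s k_n⌋} used in Corollary 3.8 below, the weight distance d^w_n can be evaluated directly. The following sketch (illustrative k, γ, and s, t chosen with k|s − t| ≥ 1, where the bound (2|s − t|)^{1/γ} applies) checks the inequality used there:

```python
# d_w^n(s, t) = ( (1/k) * sum_i |w_i(s) - w_i(t)|**gamma )**(1/gamma)
# for the sequential-indicator weights of Corollary 3.8.  Values illustrative;
# s and t are chosen so that s*k and t*k are exact in floating point.

def d_w(s, t, k, gamma):
    diff = sum(abs((i <= int(s * k)) - (i <= int(t * k))) ** gamma for i in range(1, k + 1))
    return (diff / k) ** (1 / gamma)

k, gamma = 100, 3.0
s, t = 0.75, 0.25
dist = d_w(s, t, k, gamma)

# |floor(s*k) - floor(t*k)| <= 2*k*|s - t| once k*|s - t| >= 1, giving:
assert dist <= (2 * abs(s - t)) ** (1 / gamma)
```

Here exactly ⌊sk⌋ − ⌊tk⌋ of the weight differences are nonzero, so d^w_n(s, t) = ((⌊sk⌋ − ⌊tk⌋)/k)^{1/γ}, matching the (2|s − t|)^{1/γ} bound in the proof of Corollary 3.8.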
Define gn,s(i) =wn,i(s) and Gn={gn,s:N→R:s∈S}. Given f∈ F, s∈Sand ε-brackets g≤gn,s≤gandf≤f≤f, set the centers fc= (f+f)/2 and gc= (g+g)/2. Then, |fgn,s−fcgc| ≤ |fgn,s−fgc|+|fgc−fcgc| ≤Fg−g 2+Kf−f 2. Thus, we obtain a bracket fcgc− Fg−g 2+Kf−f 2! ≤fgn,s≤fcgc+ Fg−g 2+Kf−f 2! with ∥F(g−g) +K(f−f)∥γ,n≤ ∥F∥γ,∞∥g−g∥γ,n+K∥f−f∥γ,n ≤2Kε hence, an 2 Kε-bracket for fgn,s. This implies N[](2Kϵ,Fn,∥ · ∥ γ,n)≤N[](ϵ,F,∥ · ∥ γ,∞)N[](ϵ, S,∥ · ∥ γ,n). Since Zδn 0q lnN[](ϵ,F,∥ · ∥ γ,∞)dϵ,Zδn 0p lnN(ϵ, S, d )dϵ→0 for all δn↓0, this implies the entropy condition (iii) of Theorem C.3 and, combined, we derive the claim. 55 Proof of Corollary 3.8. In Theorem 3.6 set wn,i: [0,1]→ {0,1}, s7→1{i≤ ⌊sn⌋}. Now note that |wn,i(s)−wn,i(t)| ≤1 can only be non-zero for |⌊skn⌋ − ⌊ tkn⌋| ≤ 2kn|s−t| many i’s. Thus, dw n(s, t) = 1 knknX i=1|wn,i(s)−wn,i(t)|γ!1/γ ≤(2|s−t|)1/γ andW2andW3are satisfied for dw(s, t) =|s−t|1/γ. In combination with wn,i(s)≤ wn,i(t) for all s≤t, N[](ϵ,Wn,∥ · ∥ γ,n)≤ ⌊1/ε⌋1/γ which implies W1. Theorem 3.6 gives the claim. C.3. Asymptotic tightness of the multiplier empirical process LetVn,1, . . . , V n,knbe a triangular array of identically distributed random variables. De- fineGV n∈ℓ∞(S× F) by GV n(s, f) =1√knknX i=1Vn,iwn,i(s) f(Xn,i)−E[f(Xn,i)] . We will derive asymptotic tightness of the
multiplier empirical process G^V_n in terms of bracketing entropy conditions with respect to F. Here, we apply a coupling argument and apply Corollary B.5 to the coupled empirical process. We will argue similarly to the proof of Theorem C.3. To avoid clutter we assume w_{n,i} = 1, but the argument is similar for the general case. Set f_V : R × X → R, (v, x) ↦ v f(x). For f ∈ F define

f_{V,+,n} : (R × X)^{m_n} → R, (v, x) ↦ m_n^{−1/2} Σ_{i=1}^{m_n} f_V(v_i, x_i).

Define F_{V,+,n} = {f_{V,+,n} : f ∈ F}. We use U_{n,i} resp. U*_{n,i} from the maximal coupling paragraph, but with X_{n,i} replaced by (V_{n,i}, X_{n,i}). Define r_n = k_n/(2m_n) and G_{n,1}, G_{n,2} ∈ ℓ^∞(F) by

G_{n,1}(f) = (1/√r_n) Σ_{i=1}^{r_n} f_{V,+,n}(U_{n,2i−1}) − E[f_{V,+,n}(U_{n,2i−1})],
G_{n,2}(f) = (1/√r_n) Σ_{i=1}^{r_n} f_{V,+,n}(U_{n,2i}) − E[f_{V,+,n}(U_{n,2i})],

and G*_{n,j} as G_{n,j} but with U_{n,i} replaced by U*_{n,i}. Note G^V_n = G_{n,1} + G_{n,2}.

Lemma C.4. Denote by β^{(V,X)}_n the β-coefficients associated with the triangular array (V_{n,i}, X_{n,i}). If

(k_n/m_n) β^{(V,X)}_n(m_n) → 0

and G*_{n,1}, G*_{n,2} are asymptotically tight, then G^V_n is asymptotically tight.

Proof. It holds that

P*(∥G_{n,1} − G*_{n,1}∥_F ≠ 0) ≤ P(∃i : U*_{n,i} ≠ U_{n,i}) ≤ (k_n/m_n) β^{(V,X)}_n(m_n) → 0,

and similarly for G_{n,2}. Thus, G_{n,1} − G*_{n,1} →_{P*} 0, and if G*_{n,1}, G*_{n,2} are asymptotically tight, so are G_{n,1}, G_{n,2}. Since finite sums of asymptotically tight sequences remain asymptotically tight, we obtain that G^V_n = G_{n,1} + G_{n,2} is asymptotically tight.

Theorem C.5. For some γ > 2 and α < (γ−2)/(2(γ−1)), assume

(i) ∥F∥_{γ,∞} < ∞.
(ii) sup_n ∥V_{n,1}∥_γ < ∞.
(iii) k_n^{1−α} β^V_n(k_n^α) + k_n^{1−α} β^X_n(k_n^α) → 0, where β^V_n denotes the β-coefficients of the V_{n,i}.
(iv) sup_n Σ_{i=1}^{k_n} β^X_n(i)^{(γ−2)/γ} < ∞.
(v) ∫_0^{δ_n} √(ln N_[](ϵ, F_n, ∥·∥_{γ,∞})) dϵ → 0 for all δ_n → 0.

Then, G^V_n is asymptotically tight.

Proof. First, we may without loss of generality assume E[f(X_{n,i})] = 0. To see this, define the function class G_n = {g_f : f ∈ F} with

g_f : X × {1, . . . , k_n} → R, (x, i) ↦ f(x) − E[f(X_{n,i})].

For Y_{n,i} = (X_{n,i}, i) it holds that E[g_f(Y_{n,i})] = 0. Note that X × {1, . . . , k_n} remains Polish and the β-mixing coefficients with respect to Y_{n,i} and X_{n,i} are equal.
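A minimal numerical sketch of the multiplier process (with w_{n,i} ≡ 1 as in the simplification above; the data, multipliers, and centering constants are synthetic placeholders) illustrates the construction G^V_n(f) = k_n^{−1/2} Σ_i V_{n,i}(f(X_{n,i}) − E[f(X_{n,i})]) and its additivity in f:

```python
import math, random

# G^V_n(f) = (1/sqrt(n)) * sum_i V_i * (f(X_i) - E[f(X_i)]); the expectation is
# replaced by the known centering under N(0, 1) for illustration only.

random.seed(0)
n = 500
X = [random.gauss(0.0, 1.0) for _ in range(n)]
V = [random.choice([-1.0, 1.0]) for _ in range(n)]   # Rademacher multipliers

def G_V(f, mean_f):
    return sum(v * (f(x) - mean_f) for v, x in zip(V, X)) / math.sqrt(n)

f = lambda x: x
g = lambda x: x * x
# additivity in the index function: G^V(f + g) = G^V(f) + G^V(g)
lhs = G_V(lambda x: f(x) + g(x), 0.0 + 1.0)   # E[X] = 0, E[X^2] = 1 under N(0, 1)
rhs = G_V(f, 0.0) + G_V(g, 1.0)
assert abs(lhs - rhs) < 1e-9
```

The same additivity is what allows the proof below to pass from G*_{n,1}(f_V) − G*_{n,1}(g_V) to the difference class F_{V,δ}.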
Further, given an ϵ-bracket [l_f, u_f] with respect to ∥·∥_{γ,n}, define

u(x, i) = u_f(x) − E[l_f(X_{n,i})], l(x, i) = l_f(x) − E[u_f(X_{n,i})].

Clearly, f ∈ [l_f, u_f] implies g_f ∈ [l, u]. Further,

|u − l| ≤ |u_f − l_f| + E[|u_f − l_f|] ≤ |u_f − l_f| + ∥u_f − l_f∥_{γ,n} ≤ |u_f − l_f| + ϵ.

Thus, ∥u − l∥_{γ,n} ≤ 2ϵ and N_[](2ϵ, G_n, ∥·∥_{γ,n}) ≤ N_[](ϵ, F, ∥·∥_{γ,n}). Replacing F by G_n and X_{n,i} by Y_{n,i}, we may assume E[f(X_{n,i})] = 0. Next, recall the function class F_V = {f_V : f ∈ F} with f_V : R × X → R, (v, x) ↦ v f(x), and set Y_{n,i} = (V_{n,i}, X_{n,i}). Then, the empirical process G^V_n ∈ ℓ^∞(F) is the empirical process associated to F_V and Y_{n,i}. Again, R × X is Polish, and the β-coefficients β^Y_n associated to Y_{n,i} satisfy

β^Y_n(m) ≤ β^X_n(m) + β^V_n(m)

by Theorem 5.1 (c) of Bradley (2005). Set m_n = k_n^α. Then,

(k_n/m_n) β^Y_n(m_n) = k_n^{1−α} β^Y_n(k_n^α) → 0.

We apply Lemma C.4 to Y_{n,i} and F_V. Thus, it suffices to show that G*_{n,1} and G*_{n,2} are asymptotically tight. We will prove the statement for G*_{n,1}; the arguments for G*_{n,2} are the same. Similarly to the proof of Theorem 3.4, it suffices to show

lim sup_{δ→0} lim sup_{n→∞} E*∥G*_{n,1}∥_{F_{V,δ}} = 0,

where F_{V,δ} = {f_V − g_V : f, g ∈ F, ∥f − g∥_{γ,∞} < δ}. Recall

G*_{n,1}(f_V) = (1/√r_n) Σ_{i=1}^{r_n} f_{V,+,n}(U*_{n,2i−1})

since E[f(X_{n,i})] = 0. Note that for fixed n, G*_{n,1} can be identified with an empirical process G*_{n,1} ∈ ℓ^∞(F_{V,+,n}) indexed by the function class F_{V,+,n} = {f_{V,+,n} : f ∈ F}. Denote by ∥·∥_{2,n,+} the ∥·∥_{2,n}-seminorm on F_{V,+,n} induced by U*_{n,i}. Next, let f ∈ F and an ϵ-bracket f ∈ [l_f, u_f] with respect to
∥ · ∥ γ,nbe given. Clearly fV,+,n∈[fV,+,n,fV,+,n]. Since E[f(Xn,i)] = 0 it holds ∥fV,+,n∥2 2,n,+=1 rnrnX i=1E[fV,+,n(U∗ n,i)2] =1 rnrnX i=1Var[fV,+,n(U∗ n,i)] =1 mnrnrnX i=1pnX j,k=1Cov[Vn,(i−1)mn+jf(Xn,(i−1)mn+j), Vn,(i−1)mn+kf(Xn,(i−1)mn+k)] ≲2 knknX i,j=1|Cov[f(Xn,i), f(Xn,j)] ≲∥f∥2 γ,n, 58 by Lemma B.7. This yields N[](Cε,FV,+,n,∥ · ∥ 2,n,+)≤N[](ε,F,∥ · ∥ γ,n), for some constant Cindependent of nand ∥fV,+,n−gV,+,n∥2,n,+≤C∥f−g∥γ,n≤Cδ, for all fV−gV∈ F V,δ.Without loss of generality, C= 1. Lastly, note that FV,+,nare envelopes for FV,+,n. Putting everything together, and in combination with Corollary B.5, we obtain E∥Gn,1∥FV,δ≲Z2δ 0q ln+N[](ϵ)dϵ+Bln+N[](δ)√rn+√rn∥FV,+,n1{FV,+,n> B}∥1,n+√rnN−1 [](ern) with N[](ϵ) =N[](ϵ,F,∥ · ∥ γ,∞). Let B=an√rn, with an→0 arbitrarily slowly. We obtain Bln+N[](δ)√rn=anln+N[](δ)→0, for every fixed δ >0 due to condition (v). Further, √rn∥FV,+,n1{FV,+,n> a n√rn}∥1,n≤√rnmn∥FV1{FV> a np rn/mn}∥1,n =p rnmn(rn/mn)1−γ∥FV∥γ γ,na1−γ n =p rnmn(rn/mn)1−γ∥Vn,1∥γ∥F∥γ γ,na1−γ n ≲s mγ n rγ−2 n∥F∥γ γ,na1−γ n ≲s mγ+γ−2 n kγ−2 n∥F∥γ γ,na1−γ n =q k2α(γ−1)−(γ−2) n ∥F∥γ γ,na1−γ n →0, foran→0 sufficiently slowly, where we used that Vn,iare identically distributed with supn∥Vn,i∥γ<∞in the third and fourth step, and our condition on αand (i) in the last. Combined, we obtain lim sup nE∥Gn,1∥FV,δ≲Zδ 0q ln+N[](ϵ)dϵ for all δ >0. With δn→0, we obtain lim sup nE∥Gn,1∥FV,δn= lim sup nZδn 0q ln+N[](ϵ,Fn,∥ · ∥ γ,∞)dϵ, and the right hand side converges to 0 as δ→0 by condition (v), completing the proof. 59 D. Proofs for the bootstrap We will use the notation introduced in Section 4 without further mentioning. D.1. Proof of Proposition 4.1 and a corollary Proof of Proposition 4.1. SinceGnis relatively compact, so is G⊗3 n(Van der Vaart and Wellner, 2023, Example 1.4.6). Because GnandG⊗3 nare relatively compact, both state- ments can be checked at the level of subsequences and we may assume that Gn→dG converges weakly to some tight Borel law. 
By (asymptotic) independence, we obtain G^{⊗3}_n →_d G^{⊗3}. Then (ii) is equivalent to

(G_n, G^{(1)}_n, G^{(2)}_n) →_d G^{⊗3}

in ℓ^∞(F)^3. Thus, (ii) is equivalent to (a) of Lemma 3.1 in Bücher and Kojadinovic (2019) and we obtain the claim.

Corollary D.1. Assume that G_n satisfies a relative CLT and the G^{(i)}_n are relatively compact. Then, G^{(1)}_n is a consistent bootstrap scheme if

(i) all marginals of G_n, G^{(1)}_n, G^{(2)}_n satisfy a relative CLT.
(ii) for n → ∞,

Cov[G^{(i)}_n(s), G^{(i)}_n(t)] − Cov[G_n(s), G_n(t)] → 0, Cov[G^{(i)}_n(s), G^{(j)}_n(t)] → 0

for all i, j = 0, 1, 2, i ≠ j, where we set G^{(0)}_n = G_n.

Proof. We conclude by Proposition 4.1, i.e., we prove (G_n, G^{(1)}_n, G^{(2)}_n) →_d G^{⊗3}_n. Since G_n satisfies a relative CLT resp. G^{(i)}_n is relatively compact and satisfies marginal relative CLTs, any subsequence of n contains a further subsequence such that both G_n and G^{(i)}_n converge weakly to some tight and measurable GP. By Proposition 2.10 and (ii) of Proposition 4.1, we may assume that G_n and G^{(i)}_n converge weakly to some tight and measurable GP. By the latter condition, such limiting GPs are equal in distribution, i.e., G_n, G^{(i)}_n →_d N with N some tight and measurable GP. Denote by N^{(i)} iid copies of N. It suffices to prove (G_n, G^{(1)}_n, G^{(2)}_n) →_d N^{⊗3}. By the middle condition,

(G_n(t_1), . . . , G_n(t_k), G^{(1)}_n(t_{k+1}), . . . , G^{(1)}_n(t_{k+m}), G^{(2)}_n(t_{k+m+1}), . . . , G^{(2)}_n(t_{m+k+l}))

converges weakly to