$P_n$ and $P'_n$, respectively. Similarly, $Pf$ represents the expectation of $f$ under $P(x,y)=F(x)G(y)$. Thus, the expressions are given by
$$P_n f = \frac{1}{n}\sum_{i=1}^n f(X_i,Y_i),\qquad P'_n f = \frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^n f(X_i,Y_j),\qquad Pf = \int f\,dP. \tag{A.1}$$
We choose
$$f(x,y)=|F(x)-G(y)| \quad\text{and}\quad f_n(x,y)=|F_n(x)-G_n(y)|, \tag{A.2}$$
where $F_n(x)=\frac{1}{n}\sum_{k=1}^n I(X_k\leqslant x)$ and $G_n(y)=\frac{1}{n}\sum_{k=1}^n I(Y_k\leqslant y)$. Let $\mathcal{F}_0$ be the collection of cumulative distribution functions of all univariate continuous variables, and let $\mathcal{F}=\{|F(x)-G(y)|\in\mathbb{R}: F,G\in\mathcal{F}_0 \text{ and } x,y\in\mathbb{R}\}$. Example 19.6 of Van der Vaart (2000) shows that $\mathcal{F}_0$ is a $P$-Donsker class. Thus, for all $(x,y)\in\mathbb{R}^2$ and $F,G\in\mathcal{F}_0$, according to the information on page 19 of Kosorok (2008), along with the fact that $|a-b|=\max\{a,b\}-\min\{a,b\}$, $\mathcal{F}$ is also a $P$-Donsker class. By the law of large numbers, for every $x$ and $y$ it is apparent that $F_n(x)\xrightarrow{a.s.}F(x)$ and $G_n(y)\xrightarrow{a.s.}G(y)$ hold, thus resulting in $f_n(x,y)\xrightarrow{a.s.}f(x,y)$; hence, for some $f\in L_2(P)$, $\int(f_n-f)^2\,dP\xrightarrow{p}0$ follows. Then define $\mathbb{G}_n=\sqrt{n}(P_n-P)$ and $\mathbb{G}'_n=\sqrt{n}(P'_n-P)$. The empirical processes evaluated at $f$ are $\mathbb{G}_n f=\sqrt{n}(P_n f-Pf)$ and $\mathbb{G}'_n f=\sqrt{n}(P'_n f-Pf)$. By Lemma 19.24 in Van der Vaart (2000), one has
$$\mathbb{G}'_n(f_n-f)=o_p(1),\qquad \mathbb{G}_n(f_n-f)=o_p(1),$$
i.e., $(P'_n-P)(f_n-f)=o_p(n^{-1/2})$ and $(P_n-P)(f_n-f)=o_p(n^{-1/2})$. Further expanding these two expressions leads to the following forms:
$$P'_n f_n - P'_n f - Pf_n + Pf = o_p(n^{-1/2}),\qquad P_n f_n - P_n f - Pf_n + Pf = o_p(n^{-1/2}).$$
Note that the combination of equations (A.1) and (A.2) yields
$$P'_n f_n = \frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^n |F_n(X_i)-G_n(Y_j)|,\qquad P_n f_n = \frac{1}{n}\sum_{i=1}^n |F_n(X_i)-G_n(Y_i)|,$$
$$P'_n f = \frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^n |F(X_i)-G(Y_j)| = \frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^n |U_i-V_j|,$$
$$P_n f = \frac{1}{n}\sum_{i=1}^n |F(X_i)-G(Y_i)| = \frac{1}{n}\sum_{i=1}^n |U_i-V_i|,$$
$$Pf_n = \mathrm{E}\,|F_n(X_i)-G_n(Y_i)|,\qquad Pf = \mathrm{E}\,|F(X_i)-G(Y_i)| = \mathrm{E}\,|U_i-V_i|.$$
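Since the population quantities involve the unobservable $U_i=F(X_i)$, while $F_n(X_i)$ equals the rank $R_i$ of $X_i$ divided by $n$, the empirical quantities above are really rank statistics. A quick plain-Python sanity check of that reduction (the sample size $n=200$ is an arbitrary choice, not from the paper):

```python
import random

random.seed(0)
n = 200
X = [random.random() for _ in range(n)]
Y = [random.random() for _ in range(n)]

def ranks(v):
    # 1-based ranks; ties are almost surely absent for continuous draws
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0] * len(v)
    for pos, i in enumerate(order):
        r[i] = pos + 1
    return r

def ecdf_at_sample(v):
    # F_n(v_i) = (1/n) * #{k : v_k <= v_i}
    return [sum(1 for vk in v if vk <= vi) / len(v) for vi in v]

R, S = ranks(X), ranks(Y)
FnX, GnY = ecdf_at_sample(X), ecdf_at_sample(Y)

# the empirical CDF evaluated at the sample equals ranks over n
assert all(abs(FnX[i] - R[i] / n) < 1e-12 for i in range(n))

# P'_n f_n and P_n f_n rewritten purely in terms of ranks
Pnp_fn = sum(abs(FnX[i] - GnY[j]) for i in range(n) for j in range(n)) / n**2
Pn_fn = sum(abs(FnX[i] - GnY[i]) for i in range(n)) / n
rank_dbl = sum(abs(R[i] - S[j]) for i in range(n) for j in range(n)) / n**3
rank_sgl = sum(abs(R[i] - S[i]) for i in range(n)) / n**2
```

The check confirms that $P'_n f_n$ and $P_n f_n$ can be computed from the ranks alone, which is what the subsequent derivation exploits.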
Thus,
$$\frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^n |F_n(X_i)-G_n(Y_j)| = \frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^n |U_i-V_j| + Pf_n - Pf + o_p(n^{-1/2}),$$
$$\frac{1}{n}\sum_{i=1}^n |F_n(X_i)-G_n(Y_i)| = \frac{1}{n}\sum_{i=1}^n |U_i-V_i| + Pf_n - Pf + o_p(n^{-1/2}).$$
Combining the above equations, we obtain
$$\phi_n = 1 - \frac{3}{n^2-1}\sum_{i=1}^n |R_i-S_i| = \frac{3}{n^2-1}\left(\frac{n^2-1}{3} - \sum_{i=1}^n |R_i-S_i|\right) = \frac{3}{n^2-1}\left(\frac{1}{n}\sum_{i=1}^n\sum_{j=1}^n |R_i-S_j| - \sum_{i=1}^n |R_i-S_i|\right)$$
$$= \frac{3n^2}{n^2-1}\left(\frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^n |F_n(X_i)-G_n(Y_j)| - \frac{1}{n}\sum_{i=1}^n |F_n(X_i)-G_n(Y_i)|\right)$$
$$= \frac{3n^2}{n^2-1}\left(\frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^n |U_i-V_j| - \frac{1}{n}\sum_{i=1}^n |U_i-V_i| + o_p(n^{-1/2})\right) = \widetilde{\phi}_n + o_p(n^{-1/2}).$$
Next, relying on the facts stated in Lemma A.1, we deal with the expectation and variance of $\widetilde{\phi}_n$. Let $C_1=\frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^n |U_i-V_j|$ and $C_2=\frac{1}{n}\sum_{i=1}^n |U_i-V_i|$; it is easy to calculate that $\mathrm{E}\,\widetilde{\phi}_n=0$. Furthermore, routine calculation yields
$$\mathrm{Var}(C_1) = \frac{1}{n^4}\left\{n^2\,\mathrm{Var}(|U_1-V_1|) + 2n^2(n-1)\,\mathrm{Cov}(|U_1-V_1|,|U_1-V_2|)\right\} = \frac{1}{n^4}\left\{\frac{n^2}{18} + \frac{2n^2(n-1)}{180}\right\} = \frac{n+4}{90n^2},$$
$$\mathrm{Var}(C_2) = \frac{1}{n^2}\cdot n\,\mathrm{Var}(|U_1-V_1|) = \frac{1}{18n},$$
$$\mathrm{Cov}(C_1,C_2) = \frac{1}{n^3}\,\mathrm{Cov}\!\left(\sum_{i=1}^n\sum_{j=1}^n |U_i-V_j|,\ \sum_{i=1}^n |U_i-V_i|\right) = \frac{1}{n^3}\left\{n\,\mathrm{Var}(|U_1-V_1|) + 2n(n-1)\,\mathrm{Cov}(|U_1-V_1|,|U_1-V_2|)\right\} = \frac{1}{n^3}\left\{\frac{n}{18} + \frac{2n(n-1)}{180}\right\} = \frac{n+4}{90n^2}.$$
Ultimately, it is derived that
$$\mathrm{Var}(\widetilde{\phi}_n) = \left(\frac{3n^2}{n^2-1}\right)^2\left[\mathrm{Var}(C_1)+\mathrm{Var}(C_2)-2\,\mathrm{Cov}(C_1,C_2)\right] = \left(\frac{3n^2}{n^2-1}\right)^2\left(\frac{n+4}{90n^2}+\frac{1}{18n}-2\cdot\frac{n+4}{90n^2}\right) = \frac{2n^2}{5(n+1)^2(n-1)}.$$
The proof of Theorem 2.1 is now complete.
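Two ingredients above are easy to verify numerically: the Lemma A.1 constants $\mathrm{Var}(|U_1-V_1|)=1/18$ and $\mathrm{Cov}(|U_1-V_1|,|U_1-V_2|)=1/180$, and the exact identity $\sum_{i,j=1}^n|i-j|=n(n^2-1)/3$ behind the $(n^2-1)/3$ step. A plain-Python check (the grid resolution $m=400$ and $n=30$ are arbitrary choices):

```python
# exact integer identity: sum_{i,j=1}^n |i - j| = n(n^2 - 1)/3
n = 30
total = sum(abs(i - j) for i in range(1, n + 1) for j in range(1, n + 1))
assert 3 * total == n * (n * n - 1)

# midpoint-rule quadrature for the Lemma A.1 constants (U, V iid Uniform[0,1])
m = 400
pts = [(k + 0.5) / m for k in range(m)]
E_abs = sum(abs(u - v) for u in pts for v in pts) / m**2       # E|U-V| = 1/3
E_abs2 = sum((u - v) ** 2 for u in pts for v in pts) / m**2    # E|U-V|^2 = 1/6
var_abs = E_abs2 - E_abs ** 2                                  # -> 1/18
# g(u) = E|u - V| = u^2 - u + 1/2, so Cov(|U1-V1|, |U1-V2|) = E[g(U)^2] - (1/3)^2
E_g2 = sum((u * u - u + 0.5) ** 2 for u in pts) / m            # -> 7/60
cov = E_g2 - E_abs ** 2                                        # -> 1/180
```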
Proof of Theorem 2.2. Rewrite $\widetilde{\phi}_n$ as
$$\widetilde{\phi}_n = \frac{3}{n^2-1}\left(\sum_{i=1}^n\sum_{j=1}^n |U_i-V_j| - n\sum_{i=1}^n |U_i-V_i|\right) = \frac{3}{n^2-1}\left(\sum_{i\neq j} |U_i-V_j| - (n-1)\sum_{i=1}^n |U_i-V_i|\right) = \frac{3}{n^2-1}\left(\frac{n(n-1)}{2}T_1 - (n-1)T_2\right), \tag{A.3}$$
where $T_1=\frac{2}{n(n-1)}\sum_{i\neq j}|U_i-V_j|$ and $T_2=\sum_{i=1}^n |U_i-V_i|$. Since $T_2$ is already a sum of independent and identically distributed terms, its Hájek projection remains itself. Thus, we only need to calculate the Hájek representation for $T_1$. In fact, $T_1$ is a U-statistic that can be expressed as
$$T_1 = \frac{2}{n(n-1)}\sum_{i\neq j}|U_i-V_j| = \frac{2}{n(n-1)}\sum_{i<j} h\!\left((U_i,V_i)^\top,(U_j,V_j)^\top\right),$$
where the symmetric kernel function is taken as $h\!\left((u_1,v_1)^\top,(u_2,v_2)^\top\right)=|u_1-v_2|+|u_2-v_1|$. It is evident that the variance of $T_1$ exists. Let $\theta=\mathrm{E}\!\left[h\!\left((U_i,V_i)^\top,(U_j,V_j)^\top\right)\right]$ and $h_1\!\left((u,v)^\top\right)=\mathrm{E}\!\left[h\!\left((u,v)^\top,(U_2,V_2)^\top\right)\right]-\theta$. According to Lemma A.1 and through simple derivation, we have
$$\theta = \mathrm{E}(|U_1-V_2|+|U_2-V_1|) = \frac{2}{3}\qquad\text{and}\qquad h_1\!\left((u,v)^\top\right) = \frac{1}{3}-u(1-u)-v(1-v).$$
The projection of $T_1-\frac{2}{3}$ is then given by
$$\widetilde{T}_1 := \frac{2}{n}\sum_{i=1}^n\left[\frac{1}{3}-U_i(1-U_i)-V_i(1-V_i)\right].$$
Then, by applying Lemma 12.3 from Van der Vaart (2000), we obtain $T_1-\frac{2}{3}=\widetilde{T}_1+O_p(n^{-1})$, i.e.,
$$T_1 = \frac{2}{n}\sum_{i=1}^n\left[\frac{1}{3}-U_i(1-U_i)-V_i(1-V_i)\right] + \frac{2}{3} + O_p(n^{-1}).$$
https://arxiv.org/abs/2505.01825v1
Substituting this result into Equation (A.3), we get
$$\widetilde{\phi}_n = \frac{3}{n+1}\sum_{i=1}^n\left(\frac{2}{3}-|U_i-V_i|-U_i(1-U_i)-V_i(1-V_i)\right) + O_p(n^{-1}) = \widehat{\phi}_n + O_p(n^{-1}).$$
Additionally, utilizing the results from Lemma A.1, it is easy to derive that $\mathrm{E}\,\widehat{\phi}_n=0$ and
$$\mathrm{Var}(\widehat{\phi}_n) = \left(\frac{3}{n+1}\right)^2 n\,\mathrm{Var}\!\left(\frac{2}{3}-|U_i-V_i|-U_i(1-U_i)-V_i(1-V_i)\right)$$
$$= n\left(\frac{3}{n+1}\right)^2\left[\mathrm{Var}(|U_1-V_1|) + 2\,\mathrm{Var}(U_1(1-U_1)) + 4\,\mathrm{Cov}(|U_1-V_1|,U_1(1-U_1))\right] = n\left(\frac{3}{n+1}\right)^2\left(\frac{1}{18}+\frac{2}{180}-\frac{4}{180}\right) = \frac{2n}{5(n+1)^2}.$$
Thus, this proof is complete.

Proof of Theorem 2.3. $\widehat{\phi}_n$ can be expressed as a sum of independent and identically distributed random variables with finite second moment, so its asymptotic normality can be easily obtained through the ordinary central limit theorem. By further utilizing the asymptotic representations of Theorem 2.1 and Theorem 2.2, the asymptotic normality of $\widetilde{\phi}_n$ and $\phi_n$ is also apparent.

References

Bukovšek, D. K., Mojškerc, B., 2022. On the exact region determined by Spearman's footrule and Gini's gamma. Journal of Computational and Applied Mathematics 410, 114212.

Chen, C., Xu, W., Zhang, W., Zhu, H., Dai, J., 2023. Asymptotic properties of Spearman's footrule and Gini's gamma in bivariate normal model. Journal of the Franklin Institute 360 (13), 9812–9843.

Chen, L. H., Fang, X., Shao, Q.-M., 2013. From Stein identities to moderate deviations. The Annals of Probability 41 (1), 262–293.

Diaconis, P., Graham, R. L., 1977. Spearman's footrule as a measure of disarray. Journal of the Royal Statistical Society Series B: Statistical Methodology 39 (2), 262–268.

Hoeffding, W., 1951. A combinatorial central limit theorem. The Annals of Mathematical Statistics, 558–566.

Kosorok, M. R., 2008. Introduction to empirical processes and semiparametric inference. Vol. 61. Springer.

Nelsen, R. B., 2006.
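The variance constant can be double-checked directly: with $g(u,v)=\frac{2}{3}-|u-v|-u(1-u)-v(1-v)$, one has $\mathrm{E}\,g(U,V)=0$ and $\mathrm{Var}(g(U,V))=\frac{1}{18}+\frac{2}{180}-\frac{4}{180}=\frac{2}{45}$, so that $n\left(\frac{3}{n+1}\right)^2\cdot\frac{2}{45}=\frac{2n}{5(n+1)^2}$. A midpoint-quadrature check in plain Python (the grid size is an arbitrary choice):

```python
m = 300
pts = [(k + 0.5) / m for k in range(m)]

def g(u, v):
    # summand of phi_hat_n (before the 3/(n+1) scaling)
    return 2.0 / 3.0 - abs(u - v) - u * (1 - u) - v * (1 - v)

# E g(U,V) and E g(U,V)^2 for U, V iid Uniform[0,1]
Eg = sum(g(u, v) for u in pts for v in pts) / m**2
Eg2 = sum(g(u, v) ** 2 for u in pts for v in pts) / m**2
# the individual Lemma A.1 constants combine as 1/18 + 2/180 - 4/180 = 2/45
combined = 1 / 18 + 2 / 180 - 4 / 180
```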
An introduction to copulas, 2nd Edition. Springer, New York.

Pérez, A., Prieto-Alaiz, M., Chamizo, F., Liebscher, E., Úbeda-Flores, M., 2023. Nonparametric estimation of the multivariate Spearman's footrule: a further discussion. Fuzzy Sets and Systems 467, 108489.

Sen, P. K., Salama, I. A., 1983. The Spearman footrule and a Markov chain property. Statistics & Probability Letters 1 (6), 285–289.

Shi, X., Xu, M., Du, J., 2023. Max-sum test based on Spearman's footrule for high-dimensional independence tests. Computational Statistics & Data Analysis 185, 107768.

Shi, X., Zhang, W., Du, J., Kwessi, E., 2024. Testing independence based on Spearman's footrule in high dimensions. Communications in Statistics - Theory and Methods, 1–18.

Small, C. G., 2010. Expansions and asymptotics for statistics. Chapman and Hall/CRC.

Spearman, C., 1906. Footrule for measuring correlation. British Journal of Psychology 2 (1), 89.

Van der Vaart, A. W., 2000. Asymptotic statistics. Vol. 3. Cambridge University Press.
Sharp empirical Bernstein bounds for the variance of bounded random variables

Diego Martinez-Taboada¹ and Aaditya Ramdas¹²
¹Department of Statistics & Data Science
²Machine Learning Department
Carnegie Mellon University
{diegomar,aramdas}@andrew.cmu.edu

May 6, 2025

Abstract

Much recent effort has focused on deriving "empirical Bernstein" confidence sets for the mean $\mu$ of bounded random variables that adapt to the unknown variance $\mathbb{V}(X)$. In this paper, we provide fully empirical upper and lower confidence sets for the variance $\mathbb{V}(X)$ that adapt to both the unknown $\mu$ and $\mathbb{V}[(X-\mu)^2]$. Our bounds hold under constant conditional variance and mean, making them suitable for sequential decision making contexts. The results are instantiated for both the batch setting (where the sample size is fixed) and the sequential setting (where the sample size is a stopping time). We show that the first order width of the confidence intervals exactly matches that of the oracle Bernstein inequality; thus, our empirical Bernstein bounds are "sharp". We compare our bounds to a widely used concentration inequality based on self-bounding random variables, showing both the theoretical gains and improved empirical performance of our approach.

1 Introduction

Providing finite-sample confidence intervals for the variance of a random variable is a fundamental problem in statistical inference, as variance quantifies the dispersion of data and directly influences uncertainty in decision-making. Exact confidence intervals are particularly important because approximate methods can lead to misleading inferences, especially when sample sizes are small, data distributions are skewed, or underlying assumptions are violated.
For example, exact confidence intervals for the variance have been widely exploited in empirical Bernstein inequalities (Audibert et al., 2009; Maurer and Pontil, 2009), which give way to a variety of applications including adaptive statistical learning (Mnih et al., 2008), high-confidence policy evaluation in reinforcement learning (Thomas et al., 2015), off-policy risk assessment (Huang et al., 2022), and inference for the average treatment effect (Howard et al., 2021). But in all these works, the focus was the mean, offering limited inferential tools for the variance (although all such contributions would benefit from a refined analysis of the variance). This contribution centers on the study of the variance of bounded random variables. The primary goal of this work is to derive confidence intervals for the variance that both work well in practice and are asymptotically equivalent to those derived from the oracle Bernstein inequality. More specifically, we seek confidence intervals whose width's leading term matches that of the oracle Bernstein confidence interval, i.e., $\sqrt{2\mathbb{V}[(X_i-\mu)^2]\log(1/\alpha)}$. We refer to such confidence intervals as sharp. To elaborate, when estimating the mean, Bernstein inequalities (Bernstein, 1927; Bennett, 1962) are widely known for leading to closed-form, tight confidence intervals. However, their practicality is limited, as they require knowing a bound on the variance of the random variables (that is better than the trivial bound implied by the bounds on the random variables). For this reason, they establish a natural "oracle" benchmark for fully empirical confidence sets that only exploit knowledge of the bound on the random variables. Furthermore, some of the aforementioned applications actually rely on confidence sequences, which are anytime-valid counterparts
https://arxiv.org/abs/2505.01987v1
of confidence intervals. A $(1-\alpha)$-confidence interval $C_{\mathrm{CI}}$ for a target parameter $\theta$ is a random set such that $\mathbb{P}(\theta\in C_{\mathrm{CI}})\geq 1-\alpha$, where $C_{\mathrm{CI}}$ is built after having observed a fixed number of observations. In contrast, a $(1-\alpha)$-confidence sequence $C_{\mathrm{CS},t}$ provides a high-probability guarantee that the parameter $\theta$ is contained in the sequence at all time points, i.e., $\mathbb{P}(\forall t\geq 1: \theta\in C_{\mathrm{CS},t})\geq 1-\alpha$, with $t$ representing the number of observations collected sequentially. Confidence sequences are of key importance in online settings, where data is observed sequentially and probabilistic guarantees that hold at stopping times are often desired: confidence sequences allow for sequential procedures that are continuously monitored and adaptively adjusted. For instance, optimal adaptive Neyman allocation for randomized control trials in the context of causal inference and off-policy evaluation (Neopane et al., 2025) are based on confidence sequences for the variance. In many such online settings, the assumption that data points are independent and identically distributed (iid) is often too strong and unrealistic due to the dynamic and evolving nature of data streams. Unlike traditional offline analyses where data can be assumed to come from a fixed distribution, online environments involve sequentially arriving data that may exhibit temporal dependencies. Thus, we seek to develop concentration inequalities that only require the following assumption to hold (where $\mathbb{E}_t$ and $\mathbb{V}_t$ denote conditional expectations and variances, respectively; these definitions are later formalized in Section 3).

Assumption 1.1. The stream of random variables $X_1, X_2, \ldots$ is such that $X_t\in[0,1]$, $\mathbb{E}_{t-1}X_t=\mu$, $\mathbb{V}_{t-1}X_t=\sigma^2$.

Note that all (rescaled) iid bounded random variables satisfy Assumption 1.1. More generally, Assumption 1.1 is rather weak, avoiding assuming independence of the random variables.
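To see that Assumption 1.1 is strictly weaker than iid, here is a small constructed example (the support $\{0,1/3,2/3,1\}$ and the two probability vectors are my own illustrative choices, not from the paper): a dependent stream whose conditional law switches with the previous observation, yet whose conditional mean and variance never change.

```python
import random

support = [0.0, 1.0 / 3.0, 2.0 / 3.0, 1.0]
# two DIFFERENT pmfs with identical mean 1/2 and identical second moment 7/18
q_a = [0.25, 0.25, 0.25, 0.25]
q_b = [0.20, 0.40, 0.10, 0.30]

def moments(p):
    mean = sum(pi * x for pi, x in zip(p, support))
    var = sum(pi * x * x for pi, x in zip(p, support)) - mean ** 2
    return mean, var

m_a, v_a = moments(q_a)
m_b, v_b = moments(q_b)

# generate a dependent stream: the conditional law switches with the last value
random.seed(0)
x, stream = 0.0, []
for _ in range(1000):
    p = q_a if x >= 0.5 else q_b
    x = random.choices(support, weights=p)[0]
    stream.append(x)
```

Both pmfs share mean $1/2$ and variance $5/36$, so the stream satisfies Assumption 1.1 while being genuinely non-iid.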
Any bounded random variable can be rescaled to belong to $[0,1]$, and so the first of the conditions can be assumed without loss of generality in the bounded setting. The constant conditional mean and variance are arguably the least we may assume if we wish to estimate "the variance". We revisit the problem of providing confidence intervals and sequences for the variance of bounded random variables under Assumption 1.1. Our contributions are three-fold:

•We provide novel confidence sequences for the variance. We instantiate the results for the batch setting (where the confidence sequence reduces to a confidence interval) and the sequential setting. Confidence sequences for the standard deviation (std) can also be immediately derived by taking the square root of the confidence sequences for the variance.

•Theoretically, we prove the sharpness of our inequalities by showing that the first order term of the novel confidence interval exactly matches that of the oracle Bernstein inequality.

•Empirically, we illustrate how our proposed inequalities substantially outperform those of Maurer and Pontil (2009, Theorem 10), which constitute the current state-of-the-art inequalities for the standard deviation, to the best of our knowledge.

Paper outline. We present related work in Section 2, followed by preliminaries in Section 3. Section 4 exhibits the main results of this contribution, namely an empirical Bernstein inequality for the variance. Section 5 displays experimental results, showing how our proposed inequalities outperform state-of-the-art alternatives. We conclude with some remarks in Section 6.

2 Related Work

Current concentration inequalities for the variance. Upper and lower inequalities for the variance were presented in Maurer and Pontil
(2009, Theorem 10). They are based on the concentration of self-bounding random variables (Maurer, 2006). Another concentration inequality for the variance can be found in the proof of Audibert et al. (2009, Theorem 1), which decouples the analysis into those of the mean and second centered moment, in a similar spirit to our contribution. However, both inequalities rely on conservatively upper bounding the variance of the empirical variance, thus being loose. In contrast, our inequalities empirically estimate such a variance, resulting in tighter confidence sets. Less closely related to our work, other inequalities rely on the Kolmogorov-Smirnov (KS) distance between the empirical distribution and a second distribution. For instance, Austern and Mackey (2022, Lemma 2) leveraged the concentration inequalities from Romano and Wolf (2000), which build on the KS distance between the empirical distribution and the actual distribution in order to derive inequalities for the variance. Austern and Mackey (2022, Appendix D) elucidated the use of these inequalities for variance estimation in the case of independent and identically distributed (iid) random variables. However, these methods strongly rely on independence assumptions and are also tailored to the batch setting, where the sample size is fixed in advance.

Empirical Bernstein inequalities for the mean. Based on combining Bennett's inequality and upper concentration inequalities for the variance, Maurer and Pontil (2009, Theorem 11) proposed a well-known empirical Bernstein inequality for the mean, improving a similar inequality presented in Audibert et al. (2009, Theorem 1). These inequalities are not sharp (in that the first order limiting width does not match that of the oracle Bernstein inequality, including constants), and they are empirically significantly looser than those presented in Howard et al. (2021, Theorem 4) and Waudby-Smith and Ramdas (2024, Theorem 2), which are known to be sharp.
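The decoupling mentioned above ultimately rests on the elementary bias identity $\mathbb{E}[(X-b)^2]=\sigma^2+(b-\mu)^2$ for any fixed guess $b$, the same identity that later produces $\tilde{\sigma}_i^2$ in Theorem 4.1. A quadrature sanity check for $X\sim\mathrm{Uniform}[0,1]$ (the guesses $b$ are arbitrary):

```python
# E[(X - b)^2] = Var(X) + (b - mu)^2, checked by midpoint quadrature on [0, 1]
m = 100000
mu, sigma2 = 0.5, 1.0 / 12.0      # mean and variance of Uniform[0,1]
max_err = 0.0
for b in (0.0, 0.3, 0.5, 0.9):
    e = sum(((k + 0.5) / m - b) ** 2 for k in range(m)) / m
    max_err = max(max_err, abs(e - (sigma2 + (b - mu) ** 2)))
```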
The latter inequalities were extended to 2-smooth Banach spaces in Martinez-Taboada and Ramdas (2024, Theorem 1), and to matrices in Wang and Ramdas (2024, Theorem 4.2), both of which are also sharp.

Time-uniform Chernoff inequalities. Our work falls under the time-uniform Chernoff inequalities umbrella from Howard et al. (2020, 2021); Waudby-Smith and Ramdas (2024). A key proof technique of this line of work is the derivation of sophisticated nonnegative supermartingales, followed by an application of Ville's inequality (Ville, 1939), an anytime-valid version of Markov's inequality.

3 Background

Let us start by presenting the concepts of filtration and supermartingale, which will be heavily exploited in this work to go beyond the iid setting. Consider a filtered measurable space $(\Omega,\mathcal{F})$, where the filtration $\mathbb{F}=(\mathcal{F}_t)_{t\geq 0}$ is a sequence of $\sigma$-algebras such that $\mathcal{F}_t\subseteq\mathcal{F}_{t+1}$, $t\geq 0$. The canonical filtration $\mathcal{F}_t=\sigma(X_1,\ldots,X_t)$, with $\mathcal{F}_0$ being trivial, is considered throughout. A stochastic process $M\equiv(M_t)_{t\geq 0}$ is a sequence of random variables that are adapted to $(\mathcal{F}_t)_{t\geq 0}$, i.e., $M_t$ is $\mathcal{F}_t$-measurable for all $t$. $M$ is called predictable if $M_t$ is $\mathcal{F}_{t-1}$-measurable for all $t$. An integrable stochastic process $M$ is a supermartingale if $\mathbb{E}[M_{t+1}\,|\,\mathcal{F}_t]\leq M_t$ for all $t$. We use $\mathbb{E}_t[\cdot]$ and $\mathbb{V}_t[\cdot]$ in short for $\mathbb{E}[\cdot\,|\,\mathcal{F}_t]$ and $\mathbb{V}[\cdot\,|\,\mathcal{F}_t]$, respectively. Inequalities between random variables are always interpreted to hold almost surely. As exhibited in later sections, our concentration inequalities will be derived as Chernoff inequalities. In contrast to more classical inequalities, our results come with anytime validity (that is, they
hold at any stopping time), derived using the following anytime-valid version of Markov's inequality.

Theorem 3.1 (Ville's inequality). For any nonnegative supermartingale $(M_t)_{t\geq 0}$ and $x>0$,
$$\mathbb{P}(\exists t\geq 0: M_t\geq x)\leq \frac{\mathbb{E}M_0}{x}.$$

Powerful nonnegative supermartingale constructions are usually at the heart of anytime-valid concentration inequalities. For example, the following sharp empirical Bernstein inequality from Howard et al. (2021) and Waudby-Smith and Ramdas (2024) is derived from a nonnegative supermartingale.

Theorem 3.2 (Empirical Bernstein inequality). Let $X_1, X_2, \ldots$ be a stream of random variables such that, for all $t\geq 1$, it holds that $X_t\in[0,1]$ and $\mathbb{E}_{t-1}X_t=\mu$. Let $\psi_E(\lambda)=-\log(1-\lambda)-\lambda$. For any $[0,1)$-valued predictable sequence $(\lambda_i)_{i\geq 1}$ such that $\lambda_1>0$, it holds that
$$\left(\frac{\sum_{i=1}^n \lambda_i X_i}{\sum_{i=1}^n \lambda_i} \pm \frac{\log\frac{2}{\delta} + \sum_{i=1}^n \psi_E(\lambda_i)(X_i-\hat{\mu}_{i-1})^2}{\sum_{i=1}^n \lambda_i}\right)$$
is a $1-\delta$ confidence sequence for $\mu$.

We will modify Theorem 3.2 in later sections in order to derive our results. The sequence $(\lambda_i)_{i\geq 1}$ is referred to as 'predictable plug-ins'. They play the role of the parameter $\lambda$ that naturally appears in all the Chernoff inequality derivations; however, instead of being equal for each $i$ and theoretically optimized, they are chosen empirically and sequentially. The choice of the predictable plug-ins is key to the performance of the inequalities, and will be discussed throughout our work. Besides making use of predictable plug-ins in empirical Bernstein-type supermartingales, we will also exploit them in the following anytime-valid version of Bennett's inequality.

Theorem 3.3 (Anytime-valid Bennett's inequality). Let $X_1, X_2, \ldots$ be a stream of random variables such that, for all $t\geq 1$, it holds that $X_t\in[0,1]$, $\mathbb{E}_{t-1}X_t=\mu$, and $\mathbb{V}_{t-1}X_t=\sigma^2$. Let $\psi_P(\lambda)=\exp(\lambda)-\lambda-1$. For any $\mathbb{R}^+$-valued predictable sequence $(\tilde{\lambda}_i)_{i\geq 1}$, it holds that
$$\left(\frac{\sum_{i\leq t} \tilde{\lambda}_i X_i}{\sum_{i\leq t} \tilde{\lambda}_i} \pm \frac{\log(2/\delta) + \sigma^2\sum_{i\leq t} \psi_P(\tilde{\lambda}_i)}{\sum_{i\leq t} \tilde{\lambda}_i}\right)$$
is a $1-\delta$ confidence sequence for $\mu$.
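As a concrete illustration of a Theorem 3.2-style interval, here is a minimal sketch assuming $\psi_E(\lambda)=-\log(1-\lambda)-\lambda$; the initial guesses $(\hat{\mu}_0,\hat{\sigma}_0^2)=(1/2,1/4)$ and the plug-in schedule are simplified choices of mine in the spirit of Section 4.2, not the paper's exact ones.

```python
import math, random

def psi_e(lam):
    return -math.log(1.0 - lam) - lam

def emp_bernstein_cs(xs, delta=0.05, c1=0.5):
    mu_hat, var_hat = 0.5, 0.25          # predictable initial guesses
    s = s2 = lam_sum = wsum = slack = 0.0
    for i, x in enumerate(xs, start=1):
        # predictable plug-in, capped below c1 < 1
        lam = min(c1, math.sqrt(2.0 * math.log(1.0 / delta)
                                / (var_hat * i * math.log(1.0 + i))))
        wsum += lam * x
        lam_sum += lam
        slack += psi_e(lam) * (x - mu_hat) ** 2
        s += x
        s2 += (x - mu_hat) ** 2
        mu_hat = (0.5 + s) / (i + 1)     # updated AFTER use: predictability
        var_hat = (0.25 + s2) / (i + 1)
    center = wsum / lam_sum
    radius = (math.log(2.0 / delta) + slack) / lam_sum
    return center - radius, center + radius

random.seed(1)
xs = [random.random() for _ in range(5000)]   # iid Uniform[0,1], mu = 0.5
lo, hi = emp_bernstein_cs(xs)
```

For iid Uniform[0,1] data the resulting interval concentrates around the true mean $1/2$, with width shrinking roughly like $\sqrt{\log n/n}$.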
While this result is technically novel (and so we present a proof in Appendix D.1), it can be derived using the techniques in Howard et al. (2020); Waudby-Smith and Ramdas (2024). It would generally lack any practical use, given that $\sigma$ is typically unknown. Nonetheless, we will invoke it in combination with an empirical Bernstein inequality for $\sigma^2$, thus making it actionable.

4 Main results

In Section 4.1, we present the theoretical foundation of all the inequalities derived thereafter, namely a novel nonnegative supermartingale construction and its corollary. Section 4.2 and Section 4.3 make use of such theoretical tools to derive upper and lower confidence sequences, respectively. Section 4.4 instantiates such confidence sequences in the (more classical) batch setting, where they reduce to confidence intervals. Lastly, Section 4.5 extends these results to Hilbert spaces.

4.1 A nonnegative supermartingale construction

We begin by introducing two nonnegative supermartingale constructions that serve as the theoretical foundation for the inequalities derived in this work. Their proof may be found in Appendix D.2.

Theorem 4.1. Let Assumption 1.1 hold. For a $[0,1]$-valued predictable sequence $(\hat{\mu}_i)_{i\geq 1}$, denote $\tilde{\sigma}_i^2=\sigma^2+(\hat{\mu}_i-\mu)^2$. For any $[0,1]$-valued predictable sequence $(\hat{\sigma}_i)_{i\geq 1}$ and any $[0,1)$-valued predictable sequence $(\lambda_i)_{i\geq 1}$, the processes $(S_t^+)_{t\geq 0}$ and $(S_t^-)_{t\geq 0}$, with $S_0^+:=1$ and $S_0^-:=1$, and
$$S_t^+ := \exp\left(\sum_{i\leq t}\left[\lambda_i\left((X_i-\hat{\mu}_i)^2-\tilde{\sigma}_i^2\right)-\psi_E(\lambda_i)\left((X_i-\hat{\mu}_i)^2-\hat{\sigma}_i^2\right)^2\right]\right),\quad t\geq 1,$$
$$S_t^- := \exp\left(\sum_{i\leq t}\left[\lambda_i\left(\tilde{\sigma}_i^2-(X_i-\hat{\mu}_i)^2\right)-\psi_E(\lambda_i)\left((X_i-\hat{\mu}_i)^2-\hat{\sigma}_i^2\right)^2\right]\right),\quad t\geq 1,$$
are nonnegative supermartingales.

Theorem 4.1 modifies the
supermartingales that give way to Theorem 3.2. However, in contrast to Theorem 3.2, the conditional means of the random variables $(X_i-\hat{\mu}_i)^2$ under study are not constant. This opens the door to providing concentration for the variance. Denoting
$$R_{t,\alpha} := \frac{\log(1/\alpha)+\sum_{i\leq t}\psi_E(\lambda_i)\left((X_i-\hat{\mu}_i)^2-\hat{\sigma}_i^2\right)^2}{\sum_{i\leq t}\lambda_i},$$
$$D_t := \frac{\sum_{i\leq t}\lambda_i(X_i-\hat{\mu}_i)^2}{\sum_{i\leq t}\lambda_i},\qquad E_t := \frac{\sum_{i\leq t}\lambda_i(\hat{\mu}_i-\mu)^2}{\sum_{i\leq t}\lambda_i},$$
the following corollary is a direct consequence of Theorem 4.1. We defer its proof to Appendix D.3.

Corollary 4.2. Let Assumption 1.1 hold. For any $[0,1)$-valued predictable sequence $(\lambda_i)_{i\geq 1}$ and any $[0,1]$-valued predictable sequences $(\hat{\mu}_i)_{i\geq 1}$ and $(\hat{\sigma}_i)_{i\geq 1}$, it holds that
$$\left(D_t-E_t\pm R_{t,\alpha/2}\right)$$
is a $1-\alpha$ confidence sequence for $\sigma^2$.

The confidence sequence provided by Corollary 4.2 cannot be invoked in practice: since $\mu$ is unknown, $E_t$ is also unknown.

4.2 Upper confidence sequence for the variance

In spite of $E_t$ being unknown, this term poses no challenge for the upper confidence sequence, as we can simply lower bound it by $0$. That is, if $(-\infty,\, D_t-E_t+R_{t,\alpha})$ is an $\alpha$-level upper confidence sequence for the variance, so is $(-\infty,\, D_t+R_{t,\alpha})$, given that $E_t$ is nonnegative. We formalize such an observation in the following corollary.

Corollary 4.3 (Upper empirical Bernstein for the variance). Let Assumption 1.1 hold. For any $[0,1]$-valued predictable sequences $(\hat{\mu}_i)_{i\geq 1}$ and $(\hat{\sigma}_i)_{i\geq 1}$, and any $[0,1)$-valued predictable sequence $(\lambda_i)_{i\geq 1}$, it holds that $(-\infty, U_t)$ is a $1-\alpha$ upper confidence sequence for $\sigma^2$, where $U_t=D_t+R_{t,\alpha}$.

It remains to discuss the choice of predictable sequences. We propose to take¹
$$\hat{\sigma}_t^2 := \frac{c_3+\sum_{i\leq t-1}(X_i-\bar{\mu}_i)^2}{t},\qquad \bar{\mu}_t := \frac{c_4+\sum_{i\leq t-1}X_i}{t},$$
where $c_3, c_4\in[0,1]$ are constant, as well as $\hat{\mu}_t=\bar{\mu}_t$. Following the discussion from Waudby-Smith and Ramdas (2024, Section 3.3) for confidence sequences, we propose to take the predictable plug-ins
$$\lambda_{t,u,\alpha}^{\mathrm{CS}} := \sqrt{\frac{2\log(1/\alpha)}{\hat{m}_{4,t}^2\,t\log(1+t)}}\wedge c_1,\qquad\text{where}\qquad \hat{m}_{4,t}^2 := \frac{c_2+\sum_{i\leq t-1}\left((X_i-\hat{\mu}_i)^2-\hat{\sigma}_i^2\right)^2}{t},$$
with $c_1\in(0,1)$ and $c_2\in[0,1]$. Reasonable defaults are $c_1=\frac{1}{2}$, $c_2=\frac{1}{2^4}$, $c_3=\frac{1}{2^2}$, and $c_4=\frac{1}{2}$.
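Corollary 4.3, together with the plug-ins just described, can be sketched in a few lines; the initializations follow the stated defaults, while the uniform data and the sample size are arbitrary illustrative choices of mine.

```python
import math, random

def psi_e(lam):
    return -math.log(1.0 - lam) - lam

c1, c2, c3, c4 = 0.5, 1.0 / 2**4, 1.0 / 2**2, 0.5
alpha = 0.05

def upper_cs(xs):
    sx = s2 = s4 = 0.0               # running sums over x_1 .. x_{t-1}
    lam_sum = num_d = slack = 0.0
    for t, x in enumerate(xs, start=1):
        mu_bar = (c4 + sx) / t       # predictable: uses past data only
        sig2_hat = (c3 + s2) / t
        m4_hat = (c2 + s4) / t
        lam = min(c1, math.sqrt(2.0 * math.log(1.0 / alpha)
                                / (m4_hat * t * math.log(1.0 + t))))
        z = (x - mu_bar) ** 2
        num_d += lam * z
        lam_sum += lam
        slack += psi_e(lam) * (z - sig2_hat) ** 2
        sx += x                      # update the running sums last
        s2 += z
        s4 += (z - sig2_hat) ** 2
    d = num_d / lam_sum
    r = (math.log(1.0 / alpha) + slack) / lam_sum
    return d, d + r                  # (D_t, U_t = D_t + R_{t,alpha})

random.seed(7)
xs = [random.random() for _ in range(4000)]   # sigma^2 = 1/12
D, U = upper_cs(xs)
```

Here $D$ tracks the ($\lambda$-weighted) empirical variance and $U = D + R$ is the upper bound; for Uniform[0,1] data both settle near $\sigma^2 = 1/12 \approx 0.083$.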
The reason for these predictable plug-ins is simply that these choices lead to sharpness.

4.3 Lower confidence sequence for the variance

In order to provide a lower confidence sequence, we must control the term $E_t$, which depends on the terms $|\mu-\hat{\mu}_i|$, with $i\leq t$. This can be done if $(\hat{\mu}_t)_{t\geq 1}$ is such that a confidence sequence for $|\hat{\mu}_i-\mu|$ can be provided (we ought to use confidence sequences instead of confidence intervals in order to avoid union bounding over all $i\leq t$). If $|\hat{\mu}_i-\mu|\leq\tilde{R}_{i,\delta}$ for all $i\geq 1$ with probability $1-\delta$, then
$$D_t - \frac{\sum_{i\leq t}\lambda_i\tilde{R}_{i,\alpha_1}^2}{\sum_{i\leq t}\lambda_i} - R_{t,\alpha_2} \tag{4.1}$$
yields a $(1-\alpha)$-lower confidence sequence for $\sigma^2$ with $\alpha_1+\alpha_2=\alpha$. We propose to obtain $\tilde{R}_{i,\delta}$ based on the anytime-valid Bennett's inequality presented in Theorem 3.3. That is, take
$$\hat{\mu}_t = \frac{\sum_{i=1}^{t-1}\tilde{\lambda}_i X_i}{\sum_{i=1}^{t-1}\tilde{\lambda}_i},\qquad \tilde{R}_{t,\alpha_1} = \frac{\log(2/\alpha_1) + \sigma^2\sum_{i=1}^{t-1}\psi_P(\tilde{\lambda}_i)}{\sum_{i=1}^{t-1}\tilde{\lambda}_i}, \tag{4.2}$$
for $t\geq 2$, as well as $\hat{\mu}_1=\frac{1}{2}$ and $\tilde{R}_{1,\alpha_1}=\frac{1}{2}$. Substituting (4.2) in (4.1) leads to a quadratic polynomial in $\sigma^2$. Equating $\sigma^2$ to such a polynomial and solving for $\sigma^2$ yields our lower confidence sequence. In order to formalize this, denote
$$\tilde{A}_t := \frac{\left(\sum_{i=1}^{t-1}\psi_P(\tilde{\lambda}_i)\right)^2}{\left(\sum_{i=1}^{t-1}\tilde{\lambda}_i\right)^2},\qquad \tilde{B}_{t,\delta} := \frac{2\log(2/\delta)\sum_{i=1}^{t-1}\psi_P(\tilde{\lambda}_i)}{\left(\sum_{i=1}^{t-1}\tilde{\lambda}_i\right)^2},\qquad \tilde{C}_{t,\delta} := \frac{\log^2(2/\delta)}{\left(\sum_{i=1}^{t-1}\tilde{\lambda}_i\right)^2},$$
as well as
$$A_t := \frac{\sum_{i\leq t}\lambda_i\tilde{A}_i}{\sum_{i\leq t}\lambda_i},\qquad B_{t,\delta} := 1 + \frac{\sum_{i\leq t}\lambda_i\tilde{B}_{i,\delta}}{\sum_{i\leq t}\lambda_i},\qquad C_{t,\delta} := \frac{\sum_{i\leq t}\lambda_i\tilde{C}_{i,\delta}}{\sum_{i\leq t}\lambda_i}.$$
Under this notation, we are ready to present Corollary 4.4, a lower confidence sequence for the variance. Its proof has been deferred

¹These specific choices are proposed due to their computational simplicity; they are not necessarily better than reasonable alternatives such as empirical or running averages.
to Appendix D.4.

Corollary 4.4 (Lower empirical Bernstein for the variance). Let Assumption 1.1 hold. For $(\hat{\mu}_i)_{i\geq 1}$ defined as in (4.2), any $[0,1]$-valued predictable sequence $(\hat{\sigma}_i^2)_{i\geq 1}$, any $[0,1)$-valued predictable sequence $(\lambda_i)_{i\geq 1}$, and any $[0,\infty)$-valued predictable sequence $(\tilde{\lambda}_i)_{i\geq 1}$, it holds that $(L_t,\infty)$ is a $1-\alpha$ lower confidence sequence for $\sigma^2$, where $\alpha_1+\alpha_2=\alpha$ and
$$L_t := \frac{-B_{t,\alpha_1} + \sqrt{B_{t,\alpha_1}^2 + 4A_t\left(D_t - C_{t,\alpha_1} - R_{t,\alpha_2}\right)}}{2A_t}. \tag{4.3}$$

It remains to discuss the choice of predictable plug-ins. Analogously to the upper inequality plug-ins, it would be natural to take $\lambda_{t,l,\alpha_2}^{\mathrm{CS}}=\lambda_{t,u,\alpha_2}^{\mathrm{CS}}$. However, the lower inequality includes the extra terms $\tilde{R}_{i,\alpha_1}$ that ought to be accounted for. Taking $\lambda_i>0$ for $i$ such that $\tilde{R}_{i,\alpha_1}>1$ would add a summand that is vacuous.² For this reason, we propose to take
$$\lambda_{t,l,\alpha_2}^{\mathrm{CS}} := \begin{cases}\lambda_{t,u,\alpha_2}^{\mathrm{CS}}, & \text{if } t\geq 2 \text{ and } \dfrac{\log(2/\alpha_1)+\hat{\sigma}_t^2\sum_{i=1}^{t-1}\psi_P(\tilde{\lambda}_i)}{\sum_{i=1}^{t-1}\tilde{\lambda}_i}\leq 1,\\ 0, & \text{otherwise}.\end{cases}$$
Note that the threshold is only an approximation of $\tilde{R}_{i,\alpha_1}$, given that the latter is unknown in practice. Seeking a confidence sequence for $\mu$, we propose to take $(\tilde{\lambda}_i)_{i\geq 1}$ as
$$\tilde{\lambda}_t := \sqrt{\frac{2\log(2/\alpha)}{\hat{\sigma}_t^2\,t\log(1+t)}}\wedge c_5, \tag{4.4}$$
with $c_5$ being a constant in $(0,\infty)$, a sensible default being $c_5=2$. The choice of the split of $\alpha$ into $\alpha_1$ and $\alpha_2$ is also of importance. In the next section, we analyze specific splits for retrieving optimal asymptotic behavior. In practice, the split $\alpha_1=\alpha_2=\frac{\alpha}{2}$ works generally well.

4.4 Upper and lower confidence intervals

In the more classical batch setting, we observe a fixed number of observations $X_1,\ldots,X_n$, with $n$ known in advance. Given that confidence sequences are, in particular, confidence intervals for a fixed $t=n$, both Corollary 4.3 and Corollary 4.4 immediately establish confidence intervals.

²It would also be reasonable to take the minimum of $\tilde{R}_{i,\alpha_1}$ and $1$. We explore this alternative in Appendix F.
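The algebra behind (4.3) is worth making explicit: substituting (4.2) into (4.1) turns the requirement $\sigma^2\geq D_t-\sum_{i\leq t}\lambda_i\tilde{R}^2_{i,\alpha_1}/\sum_{i\leq t}\lambda_i-R_{t,\alpha_2}$ into $A_t\sigma^4+B_{t,\alpha_1}\sigma^2-(D_t-C_{t,\alpha_1}-R_{t,\alpha_2})\geq 0$, and $L_t$ is exactly its nonnegative root. A quick check with hypothetical values of the aggregates (these numbers are illustrative only, not computed from data):

```python
import math

# hypothetical aggregate values A_t, B_t, C_t, D_t, R_t (illustrative only)
A, B, C, D, R = 0.8, 1.3, 0.01, 0.09, 0.02

# L_t from equation (4.3)
L = (-B + math.sqrt(B * B + 4.0 * A * (D - C - R))) / (2.0 * A)

# L must be exactly the value of sigma^2 at which the quadratic
# lower-bound condition A*s^2 + B*s - (D - C - R) >= 0 becomes tight
residual = A * L ** 2 + B * L - (D - C - R)
```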
However, the choice of predictable plug-ins used in such corollaries should now be driven by minimizing the expected interval width at a specific $t\equiv n$, rather than being tight uniformly over $t$. For this reason, in order to optimize the upper confidence interval for a fixed $t\equiv n$, we take
$$\lambda_{i,u,\alpha}^{\mathrm{CI}} := \sqrt{\frac{2\log(1/\alpha)}{\hat{m}_{4,i}^2\,n}}\wedge c_1. \tag{4.5}$$
Following the same line of reasoning as in Section 4.3, the plug-ins for the lower confidence intervals are defined as a slight modification of those for the upper confidence sequence. Accordingly, we take
$$\lambda_{t,l,\alpha_2}^{\mathrm{CI}} := \begin{cases}\lambda_{t,u,\alpha_2}^{\mathrm{CI}}, & \text{if } t\geq 2 \text{ and } \dfrac{\log(2/\alpha_1)+\hat{\sigma}_t^2\sum_{i=1}^{t-1}\psi_P(\tilde{\lambda}_i)}{\sum_{i=1}^{t-1}\tilde{\lambda}_i}\leq 1,\\ 0, & \text{otherwise}.\end{cases}$$
The remaining parameters and estimators are defined in the same manner as in the preceding sections.

An analysis of the widths for the variance

In order to draw comparisons with related inequalities, we analyze the asymptotic first order term of the novel confidence intervals for the variance. As emphasized in Section 1, in the event of $(X_i-\mu)^2$ having constant conditional variance, the benchmark for the first order terms are those of oracle Bernstein confidence intervals, i.e., $\sqrt{2\mathbb{V}[(X_i-\mu)^2]\log(1/\alpha)}$. That is, we seek to prove that both $\sqrt{n}(U_n-D_n)$ and $\sqrt{n}(D_n-L_n)$ converge almost surely to such a quantity. Accordingly, we make the following assumption throughout.

Assumption 4.5. $X_1, X_2, \ldots$ is such that $\mathbb{V}_{i-1}\left[(X_i-\mu)^2\right]$ is constant across $i$.

We will implicitly also assume that the predictable sequences $(\hat{\mu}_i)_{i\in[n]}$ and $(\hat{\sigma}_i^2)_{i\in[n]}$ are defined as in Section 4.2 or Section
4.3, with $c_2\wedge c_3>0$. The condition $c_2\wedge c_3>0$ is not necessary for the proofs to hold, but they follow more cleanly with it. For simplicity, we will focus on the asymptotic behavior of both
$$\sqrt{n}\,R_{n,\alpha},\qquad \sqrt{n}\left(\frac{\sum_{i\leq t}\lambda_{i,\alpha_{2,n}}\tilde{R}_{i,\alpha_{1,n}}^2}{\sum_{i\leq t}\lambda_{i,\alpha_{2,n}}} + R_{n,\alpha_{2,n}}\right),$$
where we define $\lambda_{i,\alpha_{2,n}}=\lambda_{t,u,\alpha_{2,n}}^{\mathrm{CI}}$. These two quantities correspond to the first order widths of the upper and lower confidence intervals above and below the estimate $D_t$, respectively, if taking the plug-ins $\lambda_{i,\alpha_{2,n}}$.³ Note that, in contrast to the previous section, we emphasize the dependence of the split $\alpha=\alpha_{1,n}+\alpha_{2,n}$ on $n$, which we will exploit to recover optimal first order terms. We decouple the analysis in two parts, one involving the $R_i$'s and the other involving the $\tilde{R}_i$'s. We start by establishing that the former converges almost surely to the oracle Bernstein first order term for the right choices of $\alpha_{2,n}$. The proof can be found in Appendix D.5.

Theorem 4.6. Let $(\delta_n)_{n\geq 1}$ be a deterministic sequence such that $\delta_n>0$ and $\delta_n\nearrow\delta>0$. If Assumption 1.1 and Assumption 4.5 hold, then
$$\sqrt{n}\,R_{n,\delta_n}\xrightarrow{a.s.}\sqrt{2\mathbb{V}[(X_i-\mu)^2]\log(1/\delta)}.$$

Second, we prove that the extra term that appears in the lower confidence interval converges to zero almost surely for right choices of $\alpha_{1,n}$. The proof can be found in Appendix D.6.

Theorem 4.7. Let $\alpha=\alpha_{1,n}+\alpha_{2,n}$ be such that $\alpha_{1,n}=\Omega\left(\frac{1}{\log(n)}\right)$ and $\alpha_{2,n}\to\alpha$. If Assumption 1.1 and Assumption 4.5 hold, then
$$\sqrt{n}\,\frac{\sum_{i\leq t}\lambda_{i,\alpha_{2,n}}\tilde{R}_{i,\alpha_{1,n}}^2}{\sum_{i\leq t}\lambda_{i,\alpha_{2,n}}}\xrightarrow{a.s.}0.$$

Taking $\delta_n=\alpha$, it immediately follows from Theorem 4.6 that the upper confidence interval's first order term is asymptotically almost surely equal to that of the oracle Bernstein confidence interval. To derive the analogous conclusion for the lower confidence interval, it suffices to take
$$\alpha_{1,n}=\frac{1}{\log n}\,\alpha,\qquad \delta_n=\alpha_{2,n}=\frac{\log(n)-1}{\log(n)}\,\alpha \tag{4.6}$$
in Theorem 4.7 and Theorem 4.6, respectively. These claims are formalized in the following corollary.

³Note that the plug-ins proposed in Section 4.4 are slightly different (they may take the value $0$ for small enough $t$).
However, the proofs can be easily extended to $\lambda_{t,l,\alpha_2}^{\mathrm{CI}}$ after realizing that these two plug-ins are equal almost surely for big enough $t$; we believe the simplification is convenient for ease of presentation.

Corollary 4.8 (Sharpness). Let the predictable sequences $(\hat{\mu}_i)_{i\in[n]}$ and $(\hat{\sigma}_i^2)_{i\in[n]}$ be defined as in Section 4.2 or Section 4.3, with $c_2\wedge c_3>0$. Let Assumption 1.1 and Assumption 4.5 hold. If $\alpha=\alpha_{1,n}+\alpha_{2,n}$ as defined in (4.6), then
$$\sqrt{n}(U_n-D_n)\xrightarrow{a.s.}\sqrt{2\mathbb{V}[(X_i-\mu)^2]\log(1/\alpha)},\qquad \sqrt{n}(D_n-L_n)\xrightarrow{a.s.}\sqrt{2\mathbb{V}[(X_i-\mu)^2]\log(1/\alpha)}.$$

An analysis of the widths for the standard deviation

We devote this section to showing that the confidence intervals for the std also achieve $1/\sqrt{n}$ rates. To see this, let $(L_n,U_n)$ be the confidence interval for the variance obtained in Section 4. We just showed that, for big $n$,
$$L_n\approx\sigma^2-2\sqrt{\frac{c\,\mathbb{V}[(X_i-\mu)^2]}{n}},\qquad U_n\approx\sigma^2+2\sqrt{\frac{c\,\mathbb{V}[(X_i-\mu)^2]}{n}},$$
with $c=\frac{\log(1/\alpha)}{2}$. At first glance, it could seem like $\sqrt{L_n}$ and $\sqrt{U_n}$ approach $\sigma$ at $n^{-1/4}$ rates, instead of $n^{-1/2}$ rates. Nonetheless, let us note that
$$\mathbb{V}\left[(X_i-\mu)^2\right]\leq\mathbb{E}\left[(X_i-\mu)^2\right]-\mathbb{E}^2\left[(X_i-\mu)^2\right]=\sigma^2(1-\sigma^2)\leq\sigma^2,$$
where the first inequality holds since $(X_i-\mu)^4\leq(X_i-\mu)^2$ for $X_i\in[0,1]$. Thus,
$$\sqrt{U_n}\lessapprox\sqrt{\sigma^2+2\sigma\sqrt{\frac{c}{n}}}=\sqrt{\left(\sigma+\sqrt{\frac{c}{n}}\right)^2-\frac{c}{n}}\leq\sigma+\sqrt{\frac{c}{n}},$$
and so $\sqrt{U_n}$ approaches $\sigma$ at an $n^{-1/2}$ rate. Analogously, $\sqrt{L_n}$ also approaches $\sigma$ at the same rate.

4.5 An extension to Hilbert spaces

Our inequalities naturally extend to separable Hilbert spaces. In these more abstract spaces, we shall assume that our
https://arxiv.org/abs/2505.01987v1
random variables lie in a ball of diameter $1$, instead of an interval of unit length. Similarly, the concept of variance involves norms, instead of just squares of scalars.

Assumption 4.9. The stream of random variables $X_1, X_2, \ldots$ belongs to a separable Hilbert space $H$, and is such that $\|X_t\| \in [0, \frac{1}{2}]$, $\mathbb{E}_{t-1}X_t = \mu$, $\mathbb{E}_{t-1}\|X_t - \mu\|^2 = \sigma^2$.

Under Assumption 4.9, all the concentration inequalities for $\sigma^2$ previously presented for the one-dimensional case still hold if replacing $(X_i - \bar\mu_i)^2$ and $(X_i - \hat\mu_i)^2$ by $\|X_i - \bar\mu_i\|^2$ and $\|X_i - \hat\mu_i\|^2$, respectively. The main technical obstacle of this extension is the generalization of Theorem 3.3 to multivariate settings, which we formalize next. Contrary to its one-dimensional counterpart, its proof builds on more sophisticated techniques from Pinelis (1994); we defer such a proof to Appendix D.7.

Theorem 4.10 (Vector-valued anytime-valid Bennett's inequality). Let $X_1, X_2, \ldots$ be a stream of random variables belonging to a separable Hilbert space $H$ such that $\|X_t\| \le \frac{1}{2}$, $\mathbb{E}_{t-1}X_t = \mu$, and $\mathbb{E}_{t-1}\|X_t - \mu\|^2 = \sigma^2$, for all $t \ge 1$. For any $\mathbb{R}_+$-valued predictable sequence $(\tilde\lambda_i)_{i\ge1}$, the sequence of sets
$$\left\{x \in H : \left\|x - \frac{\sum_{i\le t}\tilde\lambda_i X_i}{\sum_{i\le t}\tilde\lambda_i}\right\| \le \frac{\log(2/\delta) + \sigma^2\sum_{i\le t}\psi_P(\tilde\lambda_i)}{\sum_{i\le t}\tilde\lambda_i}\right\}$$
is a $1-\delta$ confidence sequence for $\mu$.

The remainder of the one-dimensional results can be extended with relative ease, and so we emphasize once again that the concentration inequalities previously introduced still hold in Hilbert spaces. We defer the formal presentation of such extensions to Appendix A. We highlight that the bounds are also sharp in this vector-valued setting, with the analysis conducted in Section 4.4 naturally extending to Hilbert spaces.

5 Experiments

We devote this section to exploring the empirical performance of both the upper and lower confidence intervals presented in this paper. We compare them with the inequalities from Maurer and Pontil (2009, Theorem 10), which currently constitute the state of the art for the standard deviation.
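As explained next, confidence intervals for the standard deviation are obtained from the variance intervals by taking square roots of the endpoints; monotonicity of $x \mapsto \sqrt{x}$ preserves coverage. A minimal sketch of ours, where clipping a negative lower endpoint at zero is our own defensive choice:

```python
import math

def std_interval(var_lo, var_hi):
    """Map a confidence interval for the variance to one for the std.

    x -> sqrt(x) is monotone on [0, inf), so coverage is preserved;
    a (possibly) negative lower endpoint is clipped to 0 first.
    """
    return (math.sqrt(max(var_lo, 0.0)), math.sqrt(var_hi))

print(std_interval(0.04, 0.16))
print(std_interval(-0.01, 0.09))  # lower endpoint clipped to 0.0
```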
Note that, in order to obtain confidence intervals for the standard deviation using our approach, it suffices to take the square root of the upper and lower confidence intervals for the variance presented in Section 4. Figure 1 displays the average upper and lower confidence intervals for the standard deviation for different sample sizes and three different bounded distributions; our inequalities consistently demonstrate improved empirical performance in all evaluated scenarios. We defer a comparison with other alternatives, such as a double empirical Bernstein inequality on the first and second moments, to Appendix F. In all the experiments, we take $\alpha = 0.05$, and the constants $c_1 = \frac{1}{2}$, $c_2 = \frac{1}{24}$, $c_3 = \frac{1}{22}$, $c_4 = \frac{1}{2}$, $c_5 = 2$. The code may be found in the supplementary material.

Figure 1: Average confidence intervals over 100 simulations for the std $\sigma$ for (I) the uniform distribution in $(0,1)$, (II) the beta distribution with parameters $(2,6)$, and (III) the beta distribution with parameters $(5,5)$. For each of the inequalities, the $0.95$-empirical quantiles are also displayed. The Maurer–Pontil (MP) inequality (Maurer and Pontil, 2009, Theorem 10) is compared against our proposal (EB). We highlight the improved empirical performance of our methods in all scenarios.

6 Conclusion

We have provided novel concentration inequalities for the variance of bounded random variables under mild assumptions, instantiating them for both the batch and sequential settings. We have shown their theoretical sharpness, asymptotically matching the first-order term of the oracle Bernstein inequality. Furthermore, our empirical findings demonstrate that they significantly outperform the widely adopted inequalities presented by Maurer and Pontil (2009, Theorem 10). There are several possible avenues for future work. In Section 4.5 and Appendix A, we show how the results naturally extend to Hilbert spaces. The proof of Theorem A.1 implicitly exploits the inner product structure of the Hilbert space, which cannot be done in arbitrary Banach spaces. While this challenge may be circumvented by means of the triangle inequality, that approach leads to inflated constants; exploring tighter alternatives for general or smooth Banach spaces would be of interest. Furthermore, the analysis in Section 4.5 exploits 'one-dimensional variances'. Extending our work to covariance matrices or operators would also be a natural direction to follow.

Acknowledgments

DMT thanks Ben Chugg, Ojash Neopane, and Tomás González for insightful conversations. DMT gratefully acknowledges that the project that gave rise to these results received the support of a fellowship from 'la Caixa' Foundation (ID 100010434). The fellowship code is LCF/BQ/EU22/11930075. AR was funded by NSF grant DMS-2310718.

References

Audibert, J.-Y., Munos, R., and Szepesvári, C. (2009). Exploration–exploitation tradeoff using variance estimates in multi-armed bandits. Theoretical Computer Science, 410(19):1876–1902.

Austern, M. and Mackey, L. (2022). Efficient concentration with Gaussian approximation. arXiv preprint arXiv:2208.09922.

Bennett, G. (1962). Probability inequalities for the sum of independent random variables. Journal of the American Statistical Association, 57(297):33–45.

Bernstein, S. (1927). Theory of Probability. Gastehizdat Publishing House.

Fan, X., Grama, I., and Liu, Q. (2015). Exponential inequalities for martingales with applications. Electronic Journal of Probability, 20(1):1–22.

Hall, P. and Heyde, C. C. (2014). Martingale Limit Theory and its Application. Academic Press.
Howard, S. R., Ramdas, A., McAuliffe, J., and Sekhon, J. (2020). Time-uniform Chernoff bounds via nonnegative supermartingales. Probability Surveys, 17:257–317.

Howard, S. R., Ramdas, A., McAuliffe, J., and Sekhon, J. (2021). Time-uniform, nonparametric, nonasymptotic confidence sequences. The Annals of Statistics, 49(2):1055–1080.

Huang, A., Leqi, L., Lipton, Z., and Azizzadenesheli, K. (2022). Off-policy risk assessment for Markov decision processes. In International Conference on Artificial Intelligence and Statistics, pages 5022–5050. PMLR.

Martinez-Taboada, D. and Ramdas, A. (2024). Empirical Bernstein in smooth Banach spaces. arXiv preprint arXiv:2409.06060.

Maurer, A. (2006). Concentration inequalities for functions of independent variables. Random Structures & Algorithms, 29(2):121–138.

Maurer, A. and Pontil, M. (2009). Empirical Bernstein bounds and sample-variance penalization. In Conference on Learning Theory. PMLR.

Mnih, V., Szepesvári, C., and Audibert, J.-Y. (2008). Empirical Bernstein stopping. In Proceedings of the 25th International Conference on Machine Learning, pages 672–679.

Neopane, O., Ramdas, A., and Singh, A. (2025). Optimistic algorithms for adaptive estimation of the average treatment effect. In International Conference on Machine Learning. PMLR.

Pinelis, I. (1994). Optimum bounds for the distributions of martingales in Banach spaces. The Annals of Probability, pages 1679–1706.

Romano, J. P. and Wolf, M. (2000). Finite sample nonparametric inference and large sample efficiency. Annals of Statistics, pages 756–778.

Stout, W. F. (1970). A martingale analogue of Kolmogorov's law of the iterated logarithm. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete, 15(4):279–290.

Thomas, P., Theocharous, G., and Ghavamzadeh, M. (2015). High-confidence off-policy evaluation. In Proceedings of
the AAAI Conference on Artificial Intelligence, volume 29.

Ville, J. (1939). Étude Critique de la Notion de Collectif. Gauthier-Villars, Paris.

Wang, H. and Ramdas, A. (2024). Sharp matrix empirical Bernstein inequalities. arXiv preprint arXiv:2411.09516.

Waudby-Smith, I. and Ramdas, A. (2024). Estimating means of bounded random variables by betting. Journal of the Royal Statistical Society Series B: Statistical Methodology, 86(1):1–27.

Appendix outline

We organize the appendices as follows. We devote Appendix A to the formalization of the extension of the results to Hilbert spaces. Appendix B presents auxiliary lemmata that are exploited in the remaining proofs. These are mostly analytic or simple probabilistic results that can be skipped on a first pass. Appendix C contains more involved technical propositions that are later combined to yield the proofs of Theorem 4.6 and Theorem 4.7. The proofs of such propositions are deferred to Appendix E. Appendix D displays the proofs of the theoretical results exhibited in the main body of the paper. Lastly, Appendix F exhibits potential alternative approaches to the proposed empirical Bernstein inequality, illustrating the empirical benefits of the latter.

Throughout, we denote the probability space on which the random variables are defined by $(\Omega, \mathcal{F}, P)$. Furthermore, we use the standard asymptotic big-oh notations. Given functions $f$ and $g$, we write $f(n) = O(g(n))$ if there exist constants $C, n_0 > 0$ such that $|f(n)| \le C|g(n)|$ for all $n \ge n_0$. We write $f(n) = \widetilde{O}(g(n))$ if $f(n) = O(g(n)\,\mathrm{polylog}(n))$, where $\mathrm{polylog}(n)$ denotes a polylogarithmic factor in $n$. Finally, we use $f(n) = \Omega(g(n))$ to denote that there exist constants $c, n_0 > 0$ such that $f(n) \ge c\,g(n)$ for all $n \ge n_0$.

A Extension to Hilbert spaces

Throughout, let $H$ be a separable Hilbert space and denote $B_r(x) = \{y \in H : \|y - x\| \le r\}$.
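Before the formal extension, a quick numeric sanity check of ours (not from the paper) of the two functions the appendix manipulates throughout, $\psi_P(\lambda) = e^\lambda - \lambda - 1$ and $\psi_E(\lambda) = -\log(1-\lambda) - \lambda$, against their power-series forms $\sum_{k\ge2}\lambda^k/k!$ and $\sum_{k\ge2}\lambda^k/k$ (truncation levels are ad hoc):

```python
import math

def psi_P(lam):
    """psi_P(lambda) = exp(lambda) - lambda - 1 = sum_{k>=2} lambda^k / k!."""
    return math.exp(lam) - lam - 1.0

def psi_E(lam):
    """psi_E(lambda) = -log(1 - lambda) - lambda = sum_{k>=2} lambda^k / k."""
    return -math.log(1.0 - lam) - lam

def series(lam, denom, terms):
    # Truncated power series sum_{k=2}^{terms-1} lam^k / denom(k).
    return sum(lam ** k / denom(k) for k in range(2, terms))

for lam in (0.1, 0.5, 0.9):
    assert abs(psi_P(lam) - series(lam, math.factorial, 60)) < 1e-12
    assert abs(psi_E(lam) - series(lam, float, 4000)) < 1e-9

# Lemma B.2 (below): psi_E / psi_N is increasing on [0, 1), with psi_N = lam^2 / 2.
ratios = [psi_E(l) / (l * l / 2.0) for l in (0.1, 0.3, 0.5, 0.7, 0.9)]
assert ratios == sorted(ratios)
print("ok")
```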
We remind the reader that the theoretical foundations of the results from Section 4 are the scalar-valued anytime-valid Bennett's inequality (Theorem 3.3) and the supermartingale construction from Theorem 4.1. Theorem 4.10 extended the former to the multivariate setting. The remaining foundational piece is the extension of the supermartingale construction from Theorem 4.1, which we present next⁴ and whose proof we defer to Appendix D.8.

Theorem A.1. Let Assumption 4.9 hold. For any $B_{1/2}(0)$-valued predictable sequence $(\hat\mu^{HS}_i)_{i\ge1}$, define $\tilde\sigma^2_i = \sigma^2 + \|\hat\mu^{HS}_i - \mu\|^2$. For any $[0,1]$-valued predictable sequence $(\hat\sigma_i)_{i\ge1}$ and any $[0,1)$-valued predictable sequence $(\lambda_i)_{i\ge1}$, the processes
$$S^{\pm,HS}_t = \exp\left\{\sum_{i\le t}\left(\lambda_i\left[\pm\|X_i - \hat\mu^{HS}_i\|^2 \mp \tilde\sigma^2_i\right] - \psi_E(\lambda_i)\left[\|X_i - \hat\mu^{HS}_i\|^2 - \hat\sigma^2_i\right]^2\right)\right\}$$
for $t \ge 1$, and $S^{\pm,HS}_0 = 1$, are nonnegative supermartingales.

⁴Throughout, we emphasize the fact that the estimator of the mean belongs to the Hilbert space (in contrast to the estimator of the variance, which still belongs to the real line) by means of the notation $\hat\mu^{HS}_i$.

This theorem implies that the upper and lower inequalities previously derived for one-dimensional processes equally apply to Hilbert spaces. Denoting
$$R^{HS}_{t,\alpha} := \frac{\log(1/\alpha) + \sum_{i\le t}\psi_E(\lambda_i)\left[\|X_i - \hat\mu^{HS}_i\|^2 - \hat\sigma^2_i\right]^2}{\sum_{i\le t}\lambda_i}, \qquad D^{HS}_t := \frac{\sum_{i\le t}\lambda_i\|X_i - \hat\mu^{HS}_i\|^2}{\sum_{i\le t}\lambda_i}, \qquad E^{HS}_t := \frac{\sum_{i\le t}\lambda_i\|\hat\mu^{HS}_i - \mu\|^2}{\sum_{i\le t}\lambda_i},$$
the following corollary is a direct consequence of Theorem A.1, whose proof is analogous to that of Corollary 4.2.

Corollary A.2. Let Assumption 1.1 hold.
For any $[0,1)$-valued predictable sequence $(\lambda_i)_{i\ge1}$, any $B_{1/2}(0)$-valued predictable sequence $(\hat\mu^{HS}_i)_{i\ge1}$, and any $[0,1]$-valued predictable sequence $(\hat\sigma_i)_{i\ge1}$, the sequence of sets
$$\left[D^{HS}_t - E^{HS}_t - R^{HS}_{t,\alpha/2},\; D^{HS}_t - E^{HS}_t + R^{HS}_{t,\alpha/2}\right]$$
is a $1-\alpha$ confidence sequence for $\sigma^2$.

From Corollary A.2, upper and lower inequalities for the variance can be derived analogously to those presented in Section 4. That is, in order to derive an upper inequality for the variance, it suffices to ignore the term $E^{HS}_t$.

Corollary A.3 (Vector-valued upper empirical Bernstein for the variance). Let Assumption 1.1 hold. For any $B_{1/2}(0)$-valued predictable sequence $(\hat\mu^{HS}_i)_{i\ge1}$, any $[0,1]$-valued predictable sequence $(\hat\sigma_i)_{i\ge1}$, and any $[0,1)$-valued predictable sequence $(\lambda_i)_{i\ge1}$, it holds that $\left(-\infty, U^{HS}_t\right]$ is a $1-\alpha$ upper confidence sequence for $\sigma^2$, where $U^{HS}_t := D^{HS}_t + R^{HS}_{t,\alpha}$.

In order to derive $\tilde R_{i,\delta}$ such that $\|\hat\mu_i - \mu\| \le \tilde R_{i,\delta}$ for all $i \ge 1$, we propose to use the vector-valued anytime-valid Bennett's inequality from Theorem 4.10. That is, take
$$\hat\mu^{HS}_t = \frac{\sum_{i=1}^{t-1}\tilde\lambda_i X_i}{\sum_{i=1}^{t-1}\tilde\lambda_i}, \qquad \tilde R_{t,\delta} = \frac{\log(2/\delta) + \sigma^2\sum_{i=1}^{t-1}\psi_P(\tilde\lambda_i)}{\sum_{i=1}^{t-1}\tilde\lambda_i}. \qquad (A.1)$$
These choices of $\hat\mu^{HS}_t$ and $\tilde R_{t,\delta}$ lead to the exact same definitions of $\tilde A_t$, $\tilde B_{t,\delta}$, $\tilde C_{t,\delta}$, $A_t$, $B_{t,\delta}$, and $C_{t,\delta}$ from Section 4. The following corollary follows analogously to its one-dimensional counterpart.

Corollary A.4 (Vector-valued lower empirical Bernstein for the variance). Let Assumption 1.1 hold. For the $B_{1/2}(0)$-valued predictable sequence $(\hat\mu^{HS}_i)_{i\ge1}$ defined in (4.2), any $[0,1]$-valued predictable sequence $(\hat\sigma^2_i)_{i\ge1}$, any $[0,1)$-valued predictable sequence $(\lambda_i)_{i\ge1}$, and any $[0,\infty)$-valued predictable sequence $(\tilde\lambda_i)_{i\ge1}$, it holds that $\left[L^{HS}_t, \infty\right)$ is a $1-\alpha$ lower confidence sequence for $\sigma^2$, where $\alpha_1 + \alpha_2 = \alpha$ and
$$L^{HS}_t := \frac{-B_{t,\alpha_1} + \sqrt{B^2_{t,\alpha_1} + 4A_t\left(D^{HS}_t - C_{t,\alpha_1} - R^{HS}_{t,\alpha_2}\right)}}{2A_t}.$$

We propose to take the plug-ins $(\lambda_i)_{i\ge1}$ and $(\tilde\lambda_i)_{i\ge1}$ analogously to Section 4. These choices require that the definitions of $\hat m^2_{4,t}$ and $\hat\sigma^2_t$ from Section 4 naturally replace the squares by the squares of the norms.
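A minimal sketch of ours of the plug-in choices in (A.1), using vectors in R² as a stand-in for $H$; the data and the weights $\tilde\lambda_i = 1/\sqrt{i}$ are illustrative only:

```python
import math

def psi_P(lam):
    return math.exp(lam) - lam - 1.0

def mu_hat_HS(xs, lams):
    """Weighted running mean (sum_i lam_i * X_i) / (sum_i lam_i), componentwise."""
    s = sum(lams)
    return tuple(sum(l * x[j] for l, x in zip(lams, xs)) / s for j in range(len(xs[0])))

def R_tilde(lams, sigma2, delta):
    """Radius (log(2/delta) + sigma^2 * sum_i psi_P(lam_i)) / sum_i lam_i as in (A.1)."""
    return (math.log(2.0 / delta) + sigma2 * sum(psi_P(l) for l in lams)) / sum(lams)

# Toy data in R^2 (a separable Hilbert space).
xs = [(0.1, 0.2), (0.3, 0.0), (0.2, 0.1), (0.0, 0.3)]
lams = [1.0 / math.sqrt(i) for i in range(1, len(xs) + 1)]

print(mu_hat_HS(xs, lams))
print(R_tilde(lams, sigma2=0.01, delta=0.05))
```

With equal weights, the estimate reduces to the plain componentwise average, and the radius shrinks as more weight accumulates in the denominator.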
Similarly to Section 4, the choice of the split of $\alpha$ into $\alpha_1$ and $\alpha_2$ is also of importance. In practice, we propose to take $\alpha_1$ and $\alpha_2$ analogously to Section 4. The optimality of the results from Section 4.4 for specific choices of $\alpha_1$ and $\alpha_2$ extends analogously to Hilbert spaces. However, Assumption 4.5 ought to be replaced by the following assumption.

Assumption A.5. $X_1, X_2, \ldots$ is such that $\mathbb{V}_{i-1}\big[\|X_i - \mu\|^2\big]$ is constant across $i$.

Under Assumption 4.9 and Assumption A.5, the first-order term of the width of the confidence intervals can be compared with that from the oracle Bernstein-type inequality, i.e., $\sqrt{2\,\mathbb{V}[\|X_i-\mu\|^2]\log(1/\alpha)}$. The following corollary establishes that the first-order width of the confidence intervals does indeed match this oracle benchmark. Its proof is not provided given that it is completely analogous to that of Section 4.4.

Corollary A.6 (Sharpness). Let Assumption 4.9 and Assumption A.5 hold. If $\alpha = \alpha_{1,n} + \alpha_{2,n}$ as defined in (4.6), then
$$\sqrt{n}\left(U^{HS}_n - D^{HS}_n\right) \overset{a.s.}{\to} \sqrt{2\,\mathbb{V}[\|X_i-\mu\|^2]\log(1/\alpha)}, \qquad \sqrt{n}\left(D^{HS}_n - L^{HS}_n\right) \overset{a.s.}{\to} \sqrt{2\,\mathbb{V}[\|X_i-\mu\|^2]\log(1/\alpha)}.$$

B Auxiliary lemmata

Lemma B.1. For $a \in [0,1]$ and $b \ge 0$,
$$\psi_P(ab) \le a^2\psi_P(b), \qquad \psi_E(ab) \le a^2\psi_E(b).$$

Proof. It suffices to observe that
$$\psi_P(ab) = \sum_{k=2}^{\infty}\frac{(ab)^k}{k!} \overset{(i)}{\le} a^2\sum_{k=2}^{\infty}\frac{b^k}{k!} = a^2\psi_P(b),$$
as well as
$$\psi_E(ab) = \sum_{k=2}^{\infty}\frac{(ab)^k}{k} \overset{(i)}{\le} a^2\sum_{k=2}^{\infty}\frac{b^k}{k} = a^2\psi_E(b),$$
where in both cases (i) follows from $|a| \le 1$.

Lemma B.2. Let $\psi_N(\lambda) = \frac{\lambda^2}{2}$ and $\psi_E(\lambda) = -\log(1-\lambda) - \lambda$. The function $\lambda \in [0,1) \mapsto \frac{\psi_E(\lambda)}{\psi_N(\lambda)}$ is increasing.

Proof. It suffices to observe that
$$\frac{\psi_E(\lambda)}{\psi_N(\lambda)} = \frac{\sum_{k\ge2}\frac{\lambda^k}{k}}{\frac{\lambda^2}{2}} = 2\sum_{k\ge2}\frac{\lambda^{k-2}}{k},$$
which is clearly increasing in
λ. Lemma B.3. It holds that nX i=11√ i∈ 2√n−2,2√n−1 , and so 1√nnX i=11√ i→2. Proof.Given that x7→1√xis a decreasing function, it follows that Zn 11√xdx≤nX i=11√ i≤1 +Zn 11√xdx, withRn 11√xdx= 2√n−2. In order to conclude the proof, it suffices to note that 1√nnX i=11√ i∈ 2−2√n,2−1√n ,2−2√n→2,2−1√n→2, and invoke the sandwich theorem. Lemma B.4. It holds that nX i=11 i∈[logn,logn+ 1], and so 1 lognnX i=11 i→1. 19 Proof.Given that x7→1 xis a decreasing function, it follows that Zn 11 xdx≤nX i=11 i≤1 +Zn 11 xdx, withRn 11 xdx= log n. In order to conclude the proof, it suffices to note that 1 lognnX i=11 i∈ 1,1 +1 logn ,1 +1 logn→1, and invoke the sandwich theorem. Lemma B.5. It holds that nX i=1√ i∈1 3+2 3n3 2,−2 3+2 3(n+ 1)3 2 . Proof.Given that x7→√xis an increasing function, it follows that 1 +Zn 1√xdx≤nX i=1√ i≤Zn+1 1√xdx, withRn 1√xdx=2 3n3 2. Lemma B.6. It holds that ∞X i=21 ilogi=∞, and so ∞X i=11 ilog(i+ 1)=∞. Proof.Given that x7→1 xlogxis a decreasing function, it follows that n−1X i=21 ilogi≥Zn 21 xlogxdx(i)=Zlogn log 21 udu, where we have used the change of variable u= log xin (i). Thus ∞X i=21 ilogi= lim n→∞n−1X i=21 ilogi≥lim n→∞Zlogn log 21 udu=∞. It remains to note that ∞X i=11 ilog(i+ 1)≥∞X i=11 (i+ 1) log( i+ 1)=∞X i=21 ilogi. 20 Lemma B.7. Let(an)n≥0be a deterministic sequence such that a0≥2and an∈[0,1]forn≥1. Then 1 nnX i=1aiP j≤i−1aj≤log nX i=0ai! Proof.Denoting si=Pi j=0aj, it follows that 1 nnX i=1aiP j≤i−1aj=1 nnX i=1si−si−1 si−1. We now note thatsi−si−1 si−1is the area of a rectangle with width si−si−1and height1 si−1. Define the function f(x) :=1 x−1, which is decreasing on xand f(si) =1 si−1(i) ≥1 (si−1+ 1)−1=1 si−1, where (i) follows from ai∈[0,1]. Thus 1 nnX i=1si−si−1 si−1≤Zsn s0f(x)dx=Zsn s01 x−1dx= log( sn−1)−log(a0−1) (i) ≤log(sn), where (i)follows from a0≥2, thus concluding the result. Lemma B.8. Let(an)n≥1be a deterministic sequence such that an→a. Then 1 nX i≤nain→∞→a. Further, if an→0and|bn|< C, then 1 nX i≤naibin→∞→0. Proof.Letϵ >0. 
We want to show that there exists M∈Nsuch that a−1 nX i≤nai ≤ϵ. •Given that an→a, there exists M1∈Nsuch that |an−a| ≤ϵ 2for all n≥M1. •Further, there exists M2∈Nsuch that1 nPM1−1 i=1|ai−a| ≤ϵ 2for all n≥M2. 21 Taking M= max {M1, M2}, it follows that a−1 nX i≤nai ≤1 nX i≤n|a−ai| =1 nM1−1X i=1|a−ai|+1 nnX i=M1|a−ai| ≤ϵ 2+1 nnX i=M1ϵ 2≤ϵ 2+1 nnX i=1ϵ 2=ϵ, thus concluding the first result. The second result trivially follows after observing 1 nX i≤naibi ≤C1 nX i≤n|ai|, and the right hand side converges to 0in view of the first result. Lemma B.9. Let(an)n≥1and(bn)n≥1be two deterministic sequences such that ann→∞→a, b i≥0,1 nnX i=1bin→∞→b. Then 1 nnX i=1aibin→∞→ab Proof.Letϵ∈(0,1)be arbitrary. It suffices to show that there exists M∈N such that 1 nnX i=1aibi−ab ≤ϵ for all n≥M. Given that an→a, there exists M1∈Nsuch that supi≥M1|ai− a| ≤ϵ 3(b+1). Furthermore,1 nPn i=1bi→bimplies the existence of M2∈Nsuch that 1 nnX i=1bi−b ≤ϵ 3(|a|+ 1). Lastly, there exists M3∈Nsuch that 1 nsup i≤M′−1|ai−a|M′−1X i=1bi≤ϵ 3, 22 where M′=M1∨M2. Taking M=M′∨M3,
it follows that 1 nnX i=1aibi−ab = 1 nnX i=1aibi−abi+abi−ab ≤1 nsup i≤n|ai−a|nX i=1bi+|a| 1 nnX i=1bi−b ≤1 nsup i≤n|ai−a|nX i=1bi+|a|ϵ 3(|a|+ 1) ≤1 nsup i≤M′−1|ai−a|M′−1X i=1bi+1 nsup M′≤i≤n|ai|nX i=M′bi+ϵ 3 ≤ϵ 3+ sup i≥M′|ai−a|1 nnX i=1bi+ϵ 3 ≤ϵ 3+ϵ 3(b+ 1)ϵ 3(|a|+ 1)+b +ϵ 3≤ϵ 3+ϵ 3+ϵ 3=ϵ. Lemma B.10. Let(an,i)n≥1,i∈[n]and(bn)n≥1be two deterministic sequences such that an,nn→∞→a,|an,i−a| ≤ |ai,i−a|, b i≥0,1 nnX i=1bin→∞→b. Then 1 nnX i=1an,ibin→∞→ab Proof.Letϵ∈(0,1)be arbitrary. It suffices to show that there exists M∈N such that 1 nnX i=1an,ibi−ab ≤ϵ forall n≥M. Giventhat an,n→a, thereexists M1∈Nsuchthat supi≥M1|ai,i− a| ≤ϵ 3(b+1). Furthermore,1 nPn i=1bi→bimplies the existence of M2∈Nsuch that 1 nnX i=1bi−b ≤ϵ 3(|a|+ 1). Lastly, there exists M3∈Nsuch that 1 nsup i≤M′−1|ai,i−a|M′−1X i=1bi≤ϵ 3, 23 where M′=M1∨M2. Taking M=M′∨M3, it follows that 1 nnX i=1an,ibi−ab = 1 nnX i=1an,ibi−abi+abi−ab ≤1 nsup i≤n|an,i−a|nX i=1bi+|a| 1 nnX i=1bi−b ≤1 nsup i≤n|ai,i−a|nX i=1bi+|a|ϵ 3(|a|+ 1) ≤1 nsup i≤M′−1|ai,i−a|M′−1X i=1bi+1 nsup M′≤i≤n|ai|nX i=M′bi+ϵ 3 ≤ϵ 3+ sup i≥M′|ai,i−a|1 nnX i=1bi+ϵ 3 ≤ϵ 3+ϵ 3(b+ 1)ϵ 3(|a|+ 1)+b +ϵ 3≤ϵ 3+ϵ 3+ϵ 3=ϵ. Lemma B.11. Let(an,i)n≥1,i∈[n]such that an,i≥0,Pn i=1an,i≤Cfor some C <∞, an,in→∞→0∀i≥1, and(bn)n≥1such that bnn→∞→0. Then nX i=1an,ibin→∞→0. Proof.Letϵ > 0. We want to show that there exists M∈Nsuch thatPn i=1an,ibi≤ϵfor all n≥M. •Given that bn→0, there exists M1∈Nsuch that |bn| ≤ϵ 2Cfor all n > M 1. •Further, there exists M2∈Nsuch thatPM1 i=1an,ibi≤ϵ 2for all n≥M2. Such an M2exists, as it suffices to take M2=max{M2,i:i∈[M1]}, where M2,iis such that an,ibi≤ϵ 2M1(whose existence is granted by an,i→0as n→ ∞for any fixed i). 24 Taking M= max {M1, M2}, it follows that nX i=1an,ibi=M1X i=1an,ibi+nX i=M1+1an,ibi ≤ϵ 2+ϵ 2CnX i=M1+1an,i (i) ≤ϵ 2+ϵ 2CnX i=1an,i≤ϵ 2+ϵ 2CC=ϵ, where (i)follows from an,i≥0, thus concluding the result. Lemma B.12. Leta >0andb >0. IfZn>0a.s. and Zn→ba.s., then inf n≥1a n+ 1+n n+ 1Zn is strictly positive almost surely. 
Proof.Given that Zn>0a.s. and Zn→ba.s., there exists A∈ Fsuch that P(A) = 1andZn(ω)>0for all n, as well as Zn(ω)→bwith n→ ∞, for all ω∈A. It suffices to show that, for ω∈A, inf n≥1a n+ 1+n n+ 1Zn(ω)>0. In order to see this, observe that Zn(ω)→bimplies that there exists m∈Nsuch thatZn>b 2for all n≥m. Given that the function x7→x/(x+ 1)is increasing onn, for all n≥m, a n+ 1+n n+ 1Zn(ω)≥n n+ 1Zn(ω)≥m m+ 1Zn(ω)≥m m+ 1b 2. IfZn(w)>0, then for all n < m, a n+ 1+n n+ 1Zn(ω)≥a n+ 1≥a m. From these two inequalities, we conclude that inf n≥1a n+ 1+n n+ 1Zn(ω)≥a m∧m m+ 1b 2 for all ω∈A. C Auxiliary propositions The proofs of the propositions exhibited herein are deferred to Appendix E. We start by presenting a proof of the almost sure convergence of the fourth moment estimator used throughout. Its proof can be found in Appendix E.1 25 Proposition C.1. LetX1, . . . , X nfulfill Assumption 1.1 and Assumption 4.5. Let(bµi)i∈[n]and(bσ2 i)i∈[n]be[0,1]-valued predictable sequences. If bµna.s.→µ,bσ2 na.s.→σ2, then 1 nnX i=1 (Xi−bµi)2−bσ2 i2a.s.→V (X−µ)2 , which implies bm2 4,na.s.→V (X−µ)2 . IfV (X−µ)2 = 0, thenthefourthmomentestimatordoesnotonlyconverge to0almost surely, but it also does it at a ˜O(1 t)rate. We start by formalizing
this result when (bm2 4,i)i∈[n]is defined as in Section 4.2. Its proof may be found in Appendix E.2 Proposition C.2. LetX1, . . . , X nfulfill Assumption 1.1 and Assumption 4.5 such that V (X−µ)2 = 0. Let (bµi)i∈[n],(bσ2 i)i∈[n], and (bm2 4,i)i∈[n]be defined as in Section 4.2. Then bm2 4,t=˜O1 t almost surely. The result also extends to the estimator (bm2 4,i)i∈[n]defined in Section 4.3. We present such an extension next, whose proof we defer to Appendix E.3. Proposition C.3. LetX1, . . . , X nfulfill Assumption 1.1 and Assumption 4.5 such that V (X−µ)2 = 0. Let (bµi)i∈[n],(bσ2 i)i∈[n], and (bm2 4,i)i∈[n]be defined as in Section 4.3. If log(1/α1,n) =˜O(1)and0< α 1,n≤α, then bm2 4,t=˜O1 t almost surely. IfV (X−µ)2 = 0, the (normalized) sum of the plug-ins also converges almost surely to a tractable quantity. We present this result next, and defer the proof to Appendix E.4. Proposition C.4. LetX1, . . . , X nfulfill Assumption 1.1 and Assumption 4.5 such that V (Xi−µ)2 >0. Let (δn)n≥1be a deterministic sequence such that δn→δ >0, δ n>0. 26 Define λt,δn:=s 2 log(1 /δn) bm2 4,tn∧c1, withc1∈(0,1]andbm2 4,tdefined as in Section 4 with c2>0. Then 1√nnX i=1λi,δna.s.→s 2 log(1 /δ) V[(Xi−µ)2]. IfV (X−µ)2 >0, we study the inverse of the (normalized) sum of the plug-ins. In the next proposition, we prove that such a quantity converges almost surely to 0at a ˜O(1√n)rate. Its proof may be found in Appendix E.6. Proposition C.5. LetX1, . . . , X nfulfill Assumption 1.1 and Assumption 4.5 such that V (Xi−µ)2 = 0. Let (δn)n≥1be a deterministic sequence such that δn→δ >0, δ n>0. Define λt,δn:=s 2 log(1 /δn) bm2 4,tn∧c1, withc1∈(0,1). Then 1 1√nPn i=1λi,δn=˜O1√n almost surely. We now analyze the almost sure converge of the sum of the product of two sequences of random variables, one involving the plug-ins through the function ψE. Its proof may be found in Appendix E.5 Proposition C.6. LetX1, . . . , X nfulfill Assumption 1.1 and Assumption 4.5 such that V (Xi−µ)2 >0. 
Let (δn)n≥1be a deterministic sequence such that δn↗δ >0, δ n>0. Define λt,δn:=s 2 log(1 /δn) bm2 4,tn∧c1, with c1>0, andbm2 4,tdefined as in Section 4 with c2>0. Let (Zn)n≥1be such that Zi≥0,1 nnX i=1Zia.s.→a, 27 witha∈R. Then nX i=1ψE(λi,δn)Zia.s.→alog(1/δ) V[(Xi−µ)2]. Lastly, we present a technical proposition that will be used in the proof of Theorem 4.7. We defer its proof to Appendix E.7. Proposition C.7. Letα1,n= Ω 1 log(n) andbσkbe defined as in Section 4, with c3>0. Then X 2≤i≤nn log(2/α1,n) +σ2Pi−1 k=1ψPq 2 log(2 /α1,n) bσ2 kklog(1+ k)∧c5o2 Pi−1 k=1q 2 log(2 /α1,n) bσ2 kklog(1+ k)∧c52=˜O(1)(C.1) almost surely. D Main proofs D.1 Proof of Theorem 3.3 Fixtand observe Et−1exp ˜λt(Xt−µ) =Et−1∞X k=0 ˜λt(Xt−µ)k k! =∞X k=0˜λk tEt−1 (Xt−µ)k k! (i)= 1 +∞X k=2˜λk tEt−1 (Xt−µ)k k! (ii) ≤1 +Et−1 (Xt−µ)2∞X k=2˜λk t k! (iii)= 1 + σ2ψP(˜λt) (iv) ≤exp σ2ψP(˜λt) , where(i)followsfrom EXt=µ,(ii)from |Xt−µ| ≤1,(iii)from Et−1 (Xt−µ)2 = σ2, and (iv) from 1 +x≤exp(x)for all x∈R. It thus follows that S′ t= exp X i≤t˜λi(Xi−µ)−σ2X i≤tψP(˜λi) t≥1,
S′ 0= 1, 28 is a nonnegative supermartingale. In view of Ville’s inequality (Theorem 3.1), we observe that P exp  X i≤t˜λi(Xi−µ)−σ2X i≤tψP(˜λi)  ≥2/δ ≤δ 2, thus P X i≤t˜λi(Xi−µ)−σ2X i≤tψP(˜λi)≥log(2/δ) ≤δ 2, and so P µ≤P i≤t˜λiXi−σ2P i≤tψP(˜λi)−log(2/δ) P i≤t˜λi! ≤δ 2. Arguing analogously replacing Xi−µforµ−Xiand taking the union bound concludes the proof. D.2 Proof of Theorem 4.1 The processes are clearly nonnegative, so it remains to prove that they are supermartingales. Let us begin by showing that S+ tis indeed a supermartingale, i.e., Et−1expn λt (Xt−bµt)2−˜σ2 t −ψE(λt) (Xt−bµt)2−bσ2 t2o ≤1(D.1) for any t≥1. In order to see this, denote Yt= (Xt−bµt)2−˜σ2 t, δ t=bσ2 t−˜σ2 t, and restate (D.1) as Et−1expn λtYt−ψE(λt) (Yt−δt)2o ≤1. (D.2) FromFanetal.(2015,Proposition4.1), whichestablishesthat exp ξλ−ξ2ψE(λ) ≤ 1 +ξλfor any λ∈[0,1)andξ≥ −1, it follows that Et−1expn λtYt−ψE(λt) (Yt−δt)2o = exp( λtδt)Et−1expn λt(Yt−δt)−ψE(λt) (Yt−δt)2o ≤exp(λtδt)Et−1[1 +λt(Yt−δt)] (i)= exp( λtδt) (1−λtδt) (ii) ≤exp(λtδt) exp (−λtδt) = 1, 29 where (i) is obtained given that Et−1Yt= 0, and (ii) from 1 +x≤exfor all x∈R. Showing that S− tis a supermartingale follows analogously, but replacing Yt andδtby −(Xt−bµt)2+ ˜σ2 t,−bσ2 t+ ˜σ2 t. Note that this proof is analogous to the proof of Waudby-Smith and Ramdas (2024, Theorem 2), but with non-constant conditional expectations ˜σ2 i. D.3 Proof of Corollary 4.2 In view of Ville’s inequality (Theorem 3.1) and Theorem 4.1, the probability of the event exp  X i≤tλi ±(Xi−bµi)2∓˜σ2 i −ψE(λi) (Xi−bµi)2−bσ2 i2  ≥2/δ uniformly over tis upper bounded byδ 2, and so is X i≤tλi ±(Xi−bµi)2∓˜σ2 i −ψE(λi) (Xi−bµi)2−bσ2 i2≥log(2/δ) uniformly over t. Thus P sup t∓P i≤tλi˜σ2 iP i≤tλi±Dt−Rt,α 2≥0! ≤δ 2. From P i≤tλi˜σ2 iP i≤tλi=P i≤tλi σ2+ (bµi−µ)2 P i≤tλi=σ2+Et, it follows that P sup t∓σ2∓Et±Dt−Rt,α 2≥0 ≤δ 2, which allows to conclude that P σ /∈ Dt−Et−Rt,α 2, Dt−Et+Rt,α 2 ≤δ 2, uniformly over t. 
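Two elementary inequalities do the heavy lifting in the proofs of Theorem 4.1 (above) and Theorem 4.10 (Appendix D.7): $\exp(\xi\lambda - \xi^2\psi_E(\lambda)) \le 1 + \xi\lambda$ for $\lambda \in [0,1)$ and $\xi \ge -1$ (Fan et al., 2015, Proposition 4.1), and $\exp(x) \le 2\cosh(x)$. A brute-force grid check of ours (grid bounds and resolution are arbitrary; the tolerance absorbs the equality case at $\xi = -1$):

```python
import math

def psi_E(lam):
    return -math.log(1.0 - lam) - lam

# Fan et al. (2015): exp(xi*lam - xi^2 * psi_E(lam)) <= 1 + xi*lam
# for lam in [0, 1) and xi >= -1 (with equality at xi = -1).
fan_ok = all(
    math.exp(xi * lam - xi * xi * psi_E(lam)) <= 1.0 + xi * lam + 1e-12
    for lam in (l / 100.0 for l in range(0, 99))
    for xi in (x / 10.0 for x in range(-10, 51))
)

# exp(x) <= 2*cosh(x) = exp(x) + exp(-x), used in the proof of Theorem 4.10.
cosh_ok = all(math.exp(x) <= 2.0 * math.cosh(x) for x in (j / 10.0 for j in range(-50, 51)))

print(fan_ok, cosh_ok)  # True True
```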
30 D.4 Proof of Corollary 4.4 As exhibited in Section 4.3, σ2≥Dt−P i≤tλi˜R2 i,α1P i≤tλi−Rt,α2 uniformly over twith probability α1+α2=α. Taking ˜R2 i,α1as in(4.2)leads to σ2≥Dt−P i≤tλi ˜Atσ4+˜Bt,α1σ2+˜Ct,α1 P i≤tλi−Rt,α2, i.e., σ2≥Dt−Atσ4−(Bt,α1−1)σ2−Ct,α1−Rt,α2. Thus, it suffices to consider σ2≥σ2 l,t, where σ2 l,tis such that σ2 l,t=Dt−Atσ4 l,t−(Bt,α1−1)σ2 l,t−Ct,α1−Rt,α2. Clearly, solving for this quadratic polynomial leads to (4.3). D.5 Proof of Theorem 4.6 We proceed differently for the cases V (Xi−µ)2 = 0andV (Xi−µ)2 >0. Case 1: V (Xi−µ)2 = 0.Note that √nRn,δn=log(1/δn) +P i≤nψE(λi,δn) (Xi−bµi)2−bσ2 i2 1√nP i≤nλi,δn. Denote ν2 i:= (Xi−bµi)2−bσ2 i2. In view of Proposition C.2 or Proposition C.3, it follows that nbm2 4,n=˜O(1) almost surely, and so there exists A∈ Fwith P(A) = 1such that nbm2 4,n(ω) = ˜O(1)for all ω∈A. For ω∈A, it may be that ∞X i=1ν2 i(ω) =:M <∞ (D.3) or ∞X i=1ν2 i(ω) =∞. (D.4) 31 If (D.3) holds, then X i≤nψE(λi,δn(ω)) (Xi(ω)−bµi(ω))2−bσ2 i(ω)2=X i≤nψE(λi,δn(ω))ν2 i(ω) ≤ψE(c1)X i≤nν2 i(ω) ≤ψE(c1)M, andso log(1/δn)+P i≤nψE(λi,δn(ω)) (Xi(ω)−bµi(ω))2−bσ2 i(ω)2isupperbounded bylog(1/l) +ψE(c1)M, where l=infn∈Nδn(which is strictly positive given that δn→δ >0andδn>0). If(D.4)holds, then there exists m(ω)∈Nsuch that, fort≥m(ω), bm2 4,t(ω)t=c2+t−1X i=1ν2 i(ω)≥2 log(1 /l) c2 1. Thus s 2 log(1 /δn) bm2 4,t(ω)n≤s 2 log(1 /l) bm2 4,t(ω)t≤c1, and so λi,δn(ω) =s 2
log(1 /δn) bm2 4,t(ω)n fori≥m(ω). Denote (In(ω)) := log(1 /δn) +X i<m(ω)ψE(λi,δn(ω)) (Xi(ω)−bµi(ω))2−bσ2 i(ω)2, (IIn(ω)) :=nX i=m(ω)ψE(λi,δn) (Xi(ω)−bµi(ω))2−bσ2 i(ω)2. We observe that (In(ω))≤log(1/l) +ψE(c1)X i<m(ω) (Xi(ω)−bµi(ω))2−bσ2 i(ω)2, 32 and so it is bounded. Furthermore, (IIn(ω)) =nX i=m(ω)ψE s 2 log(1 /δn) bm2 4,i(ω)n! (Xi(ω)−bµi(ω))2−bσ2 i(ω)2 (i) ≤2 log(1 /δn)ψE(c1) c2 1nX i=m(ω)1 bm2 4,i(ω)n (Xi(ω)−bµi(ω))2−bσ2 i(ω)2 ≤2 log(1 /l)ψE(c1) c2 1nX i=m(ω)1 bm2 4,i(ω)n (Xi(ω)−bµi(ω))2−bσ2 i(ω)2 ≤2 log(1 /l)ψE(c1) c2 1nX i=m(ω)1 ibm2 4,i(ω) (Xi(ω)−bµi(ω))2−bσ2 i(ω)2 =2 log(1 /l)ψE(c1) c2 1nX i=m(ω)1 c2+Pi−1 i=1ν2 i(ω)ν2 i(ω) (ii) ≤2 log(1 /l)ψE(c1) c2 1log c2+nX i=1ν2 i(ω)! =2 log(1 /l)ψE(c1) c2 1log bm2 4,n(ω)n where (i) follows from Lemma B.1 and c1∈(0,1), and (ii) follows from Lemma B.7. Given that m2 4,n(ω)n=˜O(1), it follows from the previous in- equalities that (In(ω)) + ( IIn(ω))is also ˜O(1). Consequently, we have shown that regardless of (D.3) or (D.4) holding, it follows that X i≤nψE(λi,δn(ω)) (Xi(ω)−bµi(ω))2−bσ2 i(ω)2=˜O(1) for all ω∈A, with P(A) = 1. That is, log(1/δn) +X i≤nψE(λi,δn) (Xi−bµi)2−bσ2 i2=˜O(1) almost surely. Further, by Proposition C.5 and δn→δ, it also follows that 1 1√nPn i=1λi,δn=˜O1√n almost surely. Thus, it is concluded that √nRn,δn=˜O1√n almost surely, and so it converges to 0almost surely. 33 Case 2: V (Xi−µ)2 >0.By Proposition C.4, 1√nnX i=1λi,δna.s.→s 2 log(1 /δ) V[(Xi−µ)2]. In view of 1 nX i≤n (Xi−bµi)2−bσ2 i2→V (Xi−µ)2 almost surely (Proposition C.1) and Proposition C.6, it follows that X i≤nψE(λi) (Xi−bµi)2−bσ2 i2→V (Xi−µ)2 log(1/δ) V[(Xi−µ)2]= log(1 /δ). We thus conclude that √nRn,δn→log(1/δ) + log(1 /δ)q 2 log(1 /δ) V[(Xi−µ)2]=p 2V[(Xi−µ)2] log(1 /δ) almost surely. D.6 Proof of Theorem 4.7 We differentiate the cases V (Xi−µ)2 = 0andV (Xi−µ)2 >0. Case 1: V (Xi−µ)2 = 0.By Proposition C.5 and α2,n→α, 1 1√nPn i=1λi,α2,n=˜O1√n almost surely. 
Furthermore, in view of Proposition C.7, X i≤nn log(2/α1,n) +σ2Pi−1 k=1ψPq 2 log(2 /α1,n) bσ2 kklog(1+ k)∧c5o2 Pi−1 k=1q 2 log(2 /α1,n) bσ2 kklog(1+ k)∧c52. is˜O(1)almost surely. Thus, the product of both is ˜O 1√n almost surely, which further implies that it converges to 0almost surely. Case 2: V (Xi−µ)2 >0.By Proposition C.4 and α2,n→α, 1√nX i≤nλi,α2,na.s.→s 2 log(1 /δ) V[(Xi−µ)2]. Thus, it suffices to prove X i≤nλi,α2,nR2 i,α1,na.s.→0 (D.5) 34 to conclude the proof. We note that λi,α2,nR2 i,α1,nis equal to 1√nX i≤n√nλi,α2,nn log(2/α1,n) +σ2Pi−1 k=1ψPq 2 log(2 /α1,n) bσ2 kklog(1+ k)∧c5o2 Pi−1 k=1q 2 log(2 /α1,n) bσ2 kklog(1+ k)∧c52. By Proposition C.7, 1√nX i≤nn log(2/α1,n) +σ2Pi−1 k=1ψPq 2 log(2 /α1,n) bσ2 kklog(1+ k)∧c5o2 Pi−1 k=1q 2 log(2 /α1,n) bσ2 kklog(1+ k)∧c52. is˜O(1√n)almost surely, and so it suffices to show that sup n∈Nsup i≤n√nλi,α2,n is bounded almost surely in order to conclude the result (in view of Hölder’s inequality, (D.5)will follow). Analogously to the proof of Proposition C.4, there exist mω∈Nandu(ω)<∞such that λt,α2,n(ω) =s 2 log(1 /α2,n) bm2 4,t(ω)n,1 bm2 4,n(ω)≤u(ω), forn≥mωandω∈A, with P(A) = 1. Given that α2,n→α >0andα2,n>0, then l:= inf nα2,n>0, and so we observe that √nλt,α2,n(ω) =s 2 log(1 /α2,n) bm2 4,t(ω)≤p 2 log(1 /l)u(ω) forn≥mω. It follows that sup n∈Nsup i≤n√nλi,α2,n(ω)≤p 2 log(1 /l)u(ω) ∨ sup n<m ωsup i≤n√nλi,α2,n(ω) <∞, and thus the result is concluded by Hölder’s
inequality. D.7 Proof of Theorem 4.10 Denote ft=X i≤t˜λi(Xi−µ). Pinelis (1994, Theorem 3.2) showed that Et−1cosh (∥ft∥)≤ 1 +Et−1ψP ˜λt∥Xt−µ∥ cosh (∥ft−1∥). 35 Similarly to the proof of Theorem 3.3, it now follows that 1 +Et−1ψP ˜λt∥Xt−µ∥ = 1 +Et−1∞X k=2 ˜λt∥Xt−µ∥k k! = 1 +∞X k=2˜λk tEt−1h ∥Xt−µ∥ki k! (i)= 1 +∞X k=2˜λk tEt−1h ∥Xt−µ∥ki k! (ii) ≤1 +Et−1h ∥Xt−µ∥2i∞X k=2˜λk t k! (iii)= 1 + σ2ψP(˜λt)(iv) ≤exp σ2ψP(˜λt) , where (i) follows from EXt=µ, (ii) follows from ∥Xt−µ∥ ≤1, (iii) follows from Et−1∥Xt−µ∥2=σ2, and (iv) follows from 1 +x≤exp(x)for all x∈R. Thus, the process S′ t= cosh  X i≤t˜λi(Xi−µ)  exp −σ2X i≤tψP(˜λi) , fort≥1, and S′ 0= 1, is a nonnegative supermartingale. In view of Ville’s inequality (Theorem 3.1) we observe that P cosh  X i≤t˜λi(Xi−µ)  exp −σ2X i≤tψP(˜λi) ≥1 δ ≤δ, and, from expx≤2 cosh xforx∈R, it follows that P exp  X i≤t˜λi(Xi−µ) −σ2X i≤tψP(˜λi) ≥2 δ ≤δ. Thus P  X i≤t˜λi(Xi−µ) −σ2X i≤tψP(˜λi)≥log(2/δ) ≤δ 2, and so P µ−P i≤t˜λiXiP i≤t˜λi ≤σ2P i≤tψP(˜λi) + log(2 /δ) P i≤t˜λi! ≤δ 2. 36 D.8 Proof of Theorem A.1 The proof is analogous to that of Theorem 4.1, replacing Yt= (Xt−bµt)2−˜σ2 t byYt=∥Xt−bµt∥2−˜σ2 t. E Proofs of auxiliary propositions E.1 Proof of Proposition C.1 Denote ˜m2 4,n:=1 nPn i=1 (Xi−bµi)2−bσ2 i2. Then ˜m2 4,n=1 nnX i=1 (Xi−bµi)2−σ2+σ2−bσ2 i2 =1 nnX i=1 (Xi−bµi)2−σ22 | {z } (In)−2 nnX i=1 (Xi−bµi)2−σ2 σ2−bσ2 i | {z } (IIn) +1 nnX i=1 σ2−bσ2 i2 | {z } (IIIn). It suffices to prove that (In)converges to V (X−µ)2 almost surely, and (IIn) and(IIIn)converge to 0almost surely. •Denoting γi= (µ−bµi)2+ 2(Xi−µ)(µ−bµi), it follows that (In) =1 nnX i=1 (Xi−µ+µ−bµi)2−σ22 =1 nnX i=1 (Xi−µ)2−σ2+ (µ−bµi)2+ 2(Xi−µ)(µ−bµi)2 =1 nnX i=1 (Xi−µ)2−σ2+γi2 =1 nnX i=1 (Xi−µ)2−σ22+2 nnX i=1 (Xi−µ)2−σ2 γi+1 nnX i=1γ2 i. The first of these three summands converges to V (X−µ)2 by the scalar martingale strong law of large numbers (Hall and Heyde, 2014, Theorem 2.1). Given that (bµi−µ)→0almost surely, γi→0almost surely as well. 
Thus, the latter summands converge to 0 almost surely: the second summand converges to 0 in view of Lemma B.8 and the fact that the (X_i − µ)² − σ² are bounded; the third summand converges to 0 almost surely, also in view of Lemma B.8.

• Given that ((X_i − µ̂_i)² − σ²) is bounded and (σ² − σ̂²_i) → 0 almost surely, (II_n) converges to 0 almost surely by Lemma B.8.

• Given that (σ² − σ̂²_i) → 0 almost surely, (σ² − σ̂²_i)² → 0 almost surely, and so (III_n) converges to 0 almost surely by Lemma B.8.

Thus, m̃²_{4,n} → V[(X − µ)²] almost surely. Given that m̂²_{4,n} = c₂/n + ((n−1)/n) m̃²_{4,n−1}, this also implies that m̂²_{4,n} → V[(X − µ)²] almost surely.

E.2 Proof of Proposition C.2

Denote v_i = (X_i − µ̂_i)² − σ̂²_i, so that m²_{4,t} = (c₂ + Σ_{i≤t−1} v_i²)/t.

If σ² = 0, then X_i = µ for all i. In that case,

µ̂_i = c₄/i + ((i−1)/i)µ,  σ̂²_i = (c₃ + (c₄ − µ)² Σ_{j≤i−1} 1/j²)/i.

Note that

σ̂²_i ≤ (c₃ + (c₄ − µ)² Σ_{j=1}^∞ 1/j²)/i = (c₃ + (c₄ − µ)²π²/6)/i,

and so

v_i² = [((c₄ − µ)/i)² − σ̂²_i]² (i)≤ 2((c₄ − µ)/i)⁴ + 2σ̂⁴_i ≤ 2(c₄ − µ)⁴/i² + 2σ̂⁴_i ≤ κ₁/i²,

where κ₁ := 2(c₄ − µ)⁴ + 2(c₃ + (c₄ − µ)²π²/6)², and (i) follows from (a − b)² ≤ 2a² + 2b². Thus

m²_{4,t} ≤ (c₂ + Σ_{i≤t−1} κ₁/i²)/t ≤ (c₂ + Σ_{i=1}^∞ κ₁/i²)/t = (c₂ + κ₁π²/6)/t = O(1/t).

If σ² > 0, note that

v_i = (X_i − µ̂_i)² − σ̂²_i = (X_i − µ)² − σ² + 2(X_i − µ)(µ − µ̂_i) + (µ − µ̂_i)² + σ² − σ̂²_i
(i)= 2(X_i − µ)(µ − µ̂_i) + (µ − µ̂_i)² + σ² − σ̂²_i,

where (i) follows from (X_i − µ)² = σ², and so |v_i| ≤ 3|µ − µ̂_i| + |σ² − σ̂²_i|. The martingale analogue of Kolmogorov's law of the iterated logarithm (Stout, 1970) establishes that

lim sup_{i→∞} |µ̂_i − µ| √i / √(2σ² log log(iσ²)) = 1

almost surely. That implies that there exists A ∈ F such that P(A) = 1 and, for all ω ∈ A,

|µ̂_i(ω) − µ| ≤ √(C(ω)) √(2σ² log log(iσ²)) / √i

for some C(ω) < ∞. Furthermore,

σ̂²_i = (c₃ + Σ_{j≤i−1} (X_j − µ̄_j)²)/i
= (c₃ + Σ_{j≤i−1} {(X_j − µ)² + 2(X_j − µ)(µ − µ̄_j) + (µ − µ̄_j)²})/i
(i)= c₃/i + ((i−1)/i)σ² + Σ_{j≤i−1} {2(X_j − µ)(µ − µ̄_j) + (µ − µ̄_j)²}/i,

where (i) follows from (X_j − µ)² = σ², and so

σ² − σ̂²_i = σ²/i − c₃/i − Σ_{j≤i−1} {2(X_j − µ)(µ − µ̄_j) + (µ − µ̄_j)²}/i,

which implies

|σ² − σ̂²_i| ≤ c₃/i + σ²/i + |Σ_{j≤i−1} {2(X_j − µ)(µ − µ̄_j) + (µ − µ̄_j)²}|/i
≤ c₃/i + σ²/i + 3 Σ_{j≤i−1} |µ − µ̄_j|/i
≤ 2/i + 3 Σ_{j≤i−1} |µ − µ̄_j|/i.

Thus, for ω ∈ A,

|v_i(ω)| ≤ 3|µ − µ̂_i(ω)| + |σ² − σ̂²_i(ω)|
≤ 3|µ − µ̂_i(ω)| + 2/i + 3 Σ_{j≤i−1} |µ − µ̄_j(ω)|/i
≤ 3√(2C(ω)σ² log log(iσ²))/√i + 2/i + 3 Σ_{j≤i−1} √(2C(ω)σ² log log(jσ²))/√j / i
≤ 3√(2C(ω)σ² log log(iσ²))/√i + 2/i + 3√(2C(ω)σ² log log(iσ²)) (Σ_{j≤i−1} 1/√j)/i
(i)≤ 3√(2C(ω)σ² log log(iσ²))/√i + 2/√i + 6√(2C(ω)σ² log log(iσ²))/√i
= (2 + 9√(2C(ω)σ² log log(iσ²))) (1/√i),

where (i) follows from Lemma B.3. Thus,

v_i(ω)² ≤ (2 + 9√(2C(ω)σ² log log(iσ²)))² (1/i).

From here, it follows that, for ω ∈ A,

m²_{4,t}(ω) = (c₂ + Σ_{i≤t−1} v_i²(ω))/t
≤ (c₂ + Σ_{i≤t−1} (2 + 9√(2C(ω)σ² log log(iσ²)))²/i)/t
≤ (c₂ + (2 + 9√(2C(ω)σ² log log(tσ²)))² Σ_{i≤t−1} 1/i)/t
(i)≤ (c₂ + (2 + 9√(2C(ω)σ² log log(tσ²)))² (1 + log t))/t
= Õ(1/t),

where (i) follows from Lemma B.4. Noting that P(A) = 1 concludes the result.

E.3 Proof of Proposition C.3

The proof follows analogously to that of Proposition C.2 as soon as we show that

• if σ = 0, then (X_i − µ̂_i)² = O(1/i²);
• if σ > 0, then |µ̂_i(ω) − µ| = Õ(1/√i) almost surely.

Let us now prove each of the statements. If σ = 0, then

µ̂_t = Σ_{i=1}^{t−1} λ̃_i X_i / Σ_{i=1}^{t−1} λ̃_i = Σ_{i=1}^{t−1} λ̃_i µ / Σ_{i=1}^{t−1} λ̃_i = µ,

and we are done.
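The σ² = 0 computations in the proofs of Propositions C.2 and C.3 above can be spot-checked numerically. The sketch below assumes illustrative values for the regularization constants c₂, c₃, c₄ (their exact values are not specified in this excerpt) and verifies the bound t·m²_{4,t} ≤ c₂ + κ₁π²/6 along a constant sequence X_i ≡ µ:

```python
import math

# Hypothetical constants c2, c3, c4 and a degenerate stream X_i = mu (sigma^2 = 0).
c2, c3, c4, mu = 1.0, 1.0, 0.5, 0.2
n = 10_000

kappa1 = 2 * (c4 - mu) ** 4 + 2 * (c3 + (c4 - mu) ** 2 * math.pi ** 2 / 6) ** 2
bound = c2 + kappa1 * math.pi ** 2 / 6      # t * m^2_{4,t} <= c2 + kappa_1 * pi^2/6

sum_x, sum_sq, sum_v2 = c4, c3, 0.0         # running sums behind mu_hat, sigma_hat^2
for i in range(1, n + 1):
    mu_hat = sum_x / i                      # mu_hat_i = (c4 + sum_{j<i} X_j) / i
    sig2_hat = sum_sq / i                   # sigma_hat^2_i = (c3 + sum_{j<i} (X_j - mu_hat_j)^2) / i
    x = mu                                  # sigma = 0: every observation equals mu
    v = (x - mu_hat) ** 2 - sig2_hat        # v_i = (X_i - mu_hat_i)^2 - sigma_hat^2_i
    m2_4 = (c2 + sum_v2) / i                # m^2_{4,i} = (c2 + sum_{j<i} v_j^2) / i
    assert i * m2_4 <= bound + 1e-9         # the O(1/t) bound from the proof
    sum_x += x
    sum_sq += (x - mu_hat) ** 2
    sum_v2 += v * v
```

Any choice of positive constants with µ ∈ (0,1) should pass this check, since the bound in the proof is deterministic in the σ² = 0 case.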
If σ > 0, define ι_i = √(2/(σ̂²_i i log(i+1))). We shall start by studying the growth of Σ_{i≤n} ι_i X_i. In view of

Σ_{i=1}^∞ E_{i−1}[ι_i²(X_i − µ)²] = σ² Σ_{i=1}^∞ ι_i² = σ² Σ_{i=1}^∞ 2/(σ̂²_i i log(i+1)) ≥ 2σ² Σ_{i=1}^∞ 1/(i log(i+1)) (i)= ∞,

where (i) follows from Lemma B.6, as well as |ι_i(X_i − µ)| ≤ ι_i with ι_i → 0 almost surely, we can apply the martingale analogue of Kolmogorov's law of the iterated logarithm (Stout, 1970). Thus, defining

S²_n := Σ_{i=1}^n E_{i−1}[ι_i²(X_i − µ)²] = σ² Σ_{i=1}^n ι_i²,

it follows that

lim sup_{n→∞} Σ_{i≤n} ι_i(X_i − µ) / √(2S²_n log log(S²_n)) = 1

almost surely. Hence, there exists A₁ ∈ F with P(A₁) = 1 such that

|Σ_{i≤n} ι_i(ω)(X_i(ω) − µ)| ≤ C(ω)√(2S²_n log log(S²_n))

for all ω ∈ A₁. Given that σ̂_n → σ almost surely, there exists A₂ ∈ F with P(A₂) = 1 such that σ̂_n(ω) → σ. Hence, for each ω ∈ A₂, there exists m(ω) ∈ ℕ such that σ̂_i ≥ σ/2 for all i ≥ m(ω). Thus, for ω ∈ A₂,

S²_n(ω) = σ² Σ_{i=1}^n 2/(σ̂²_i(ω) i log(i+1)) ≤ Σ_{i=1}^n 8/(i log(i+1)) ≤ Σ_{i=1}^n 16/i (i)≤ 16(log n + 1),

where (i) is obtained in view of Lemma B.4. Hence, for ω ∈ A := A₁ ∩ A₂,

|Σ_{i≤n} ι_i(ω)(X_i(ω) − µ)| ≤ C(ω)√(32(log n + 1) log log(16(log n + 1))),

which is Õ(1). That is, Σ_{i≤n} ι_i(X_i − µ) = Õ(1) almost surely. Let us now show that

Σ_{i≤n} λ̃_{i,α_{1,n}}(X_i − µ) = Õ(1)  (E.1)

almost surely as well. For i ≥ m(ω),

√(2 log(2/α_{1,n})/(σ̂²_i i log(i+1))) ≤ (2√2/σ) √(log(2/α_{1,n})/(i log(i+1))),

and so, for i ≥ m(ω) ∨ σ√(log(2/α_{1,n})) c₅, it holds that λ̃_{i,α_{1,n}} = √(log(2/α_{1,n})) ι_i. Given that we will let n tend to ∞, we can assume without loss of generality that m(ω) < σ√(log(2/α_{1,n})) c₅ =: t_n. In that case,

Σ_{i≤n} λ̃_{i,α_{1,n}}(X_i − µ)
= Σ_{i<t_n} λ̃_{i,α_{1,n}}(X_i − µ) + Σ_{i=t_n}^n λ̃_{i,α_{1,n}}(X_i − µ)
= Σ_{i<t_n} λ̃_{i,α_{1,n}}(X_i − µ) + √(log(2/α_{1,n})) Σ_{i=t_n}^n ι_i(X_i − µ)
= Σ_{i<t_n} (λ̃_{i,α_{1,n}} − √(log(2/α_{1,n})) ι_i)(X_i − µ) + √(log(2/α_{1,n})) Σ_{i≤n} ι_i(X_i − µ).

Now note that the absolute value of the first summand is upper bounded by

Σ_{i<t_n} |λ̃_{i,α_{1,n}} − √(log(2/α_{1,n})) ι_i| ≤ Σ_{i<t_n} (c₅ + √(log(2/α_{1,n})) sup_i ι_i).

Given that ι_n → 0 almost surely and c₂ > 0, sup_i ι_i is almost surely bounded, and thus this first summand is upper bounded by t_n (c₅ + √(log(2/α_{1,n})) sup_i ι_i), which is also Õ(1) a.s., in view of log(1/α_{1,n}) = Õ(1). Consequently, we have shown the validity of (E.1). Lastly, we observe that

Σ_{i≤n} λ̃_{i,α_{1,n}} = Σ_{i≤n} [√(2 log(2/α_{1,n})/(σ̂²_i i log(1+i))) ∧ c₅]
≥ (1/√(log(1+n))) Σ_{i≤n} [√(2 log(2/α)/i) ∧ c₅]
≥ (1/√(log(1+n))) [√(2 log(2/α)) ∧ c₅] Σ_{i≤n} 1/√i
(i)≥ (1/√(log(1+n))) [√(2 log(2/α)) ∧ c₅] (2√n − 2),

which is Ω̃(√n), where (i) is obtained in view of Lemma B.3. We thus conclude that

|µ̂_t − µ| = |Σ_{i=1}^{t−1} λ̃_i(X_i − µ)| / Σ_{i=1}^{t−1} λ̃_i = Õ(1)/Ω̃(√t) = Õ(1/√t)

almost surely.

E.4 Proof of Proposition C.4

Given that m²_{4,n} → V[(X_i − µ)²] a.s. (in view of Proposition C.1), there exists A ∈ F such that P(A) = 1 and m̂²_{4,n}(ω) → V[(X_i − µ)²] for all ω ∈ A. Based on Lemma B.12 and c₂ > 0,

1/m̂²_{4,n}(ω) ≤ u(ω) < ∞

for all ω ∈ A. Given that δ_n → δ > 0 and δ_n > 0, then l := inf_n δ_n > 0, and so we observe that

√(2 log(1/δ_n)/(m̂²_{4,t}(ω) n)) ≤ √(2 u(ω) log(1/l)/n),

which implies the existence of m_ω ∈ ℕ such that √(2 u(ω) log(1/l)/n) ≤ c₁ for all n ≥ m_ω. Hence

λ_{t,δ_n}(ω) = √(2 log(1/δ_n)/(m̂²_{4,t}(ω) n))

for n ≥ m_ω.
It follows that

(1/√n) Σ_{i=1}^n λ_{i,δ_n}(ω) = (1/√n) Σ_{i=1}^{m_ω−1} λ_{i,δ_n}(ω) + (1/√n) Σ_{i=m_ω}^n λ_{i,δ_n}(ω).

Clearly, the first term converges to 0, and so it suffices to show that

(1/√n) Σ_{i=m_ω}^n λ_{i,δ_n}(ω) → √(2 log(1/δ)/V[(X_i − µ)²]) a.s.

To see this, note that

(1/√n) Σ_{i=m_ω}^n λ_{i,δ_n}(ω) = (1/√n) Σ_{i=m_ω}^n √(2 log(1/δ_n)/(m̂²_{4,i}(ω) n))
= [(n − m_ω)/n] × [1/(n − m_ω)] Σ_{i=m_ω}^n √(2 log(1/δ_n)/m̂²_{4,i}(ω)) =: (I_n) × (II_n(ω)).

Clearly, (I_n) → 1 as n → ∞. Furthermore,

(II_n(ω)) → √(2 log(1/δ)/V[(X_i − µ)²])

as n → ∞, in view of m²_{4,n}(ω) → V[(X_i − µ)²], δ_n → δ, and Lemma B.8. Hence

(1/√n) Σ_{i=1}^n λ^{CI}_{i,δ_n}(ω) → √(2 log(1/δ)/V[(X_i − µ)²])

for ω ∈ A, with P(A) = 1, thus concluding the proof.

E.5 Proof of Proposition C.6

Analogously to the first part of the proof of Proposition C.4, there exists A ∈ F such that P(A) = 1,

m̂²_{4,n}(ω) → V[(X_i − µ)²] ∀ω ∈ A,  (1/n) Σ_{i=1}^n Z_i(ω) → a ∀ω ∈ A,  (E.2)

and there exists m_ω such that λ_{t,δ_n}(ω) = √(2 log(1/δ_n)/(m̂²_{4,t}(ω) n)) for n ≥ m_ω, for all ω ∈ A. Observe that

Σ_{i=1}^n ψ_E(λ_{i,δ_n}(ω)) Z_i(ω) = Σ_{i=1}^{m_ω−1} ψ_E(λ_{i,δ_n}(ω)) Z_i(ω) + Σ_{i=m_ω}^n ψ_E(λ_{i,δ_n}(ω)) Z_i(ω) =: (I_n(ω)) + (II_n(ω)).

Clearly, (I_n(ω)) → 0, given that it is a linear combination of terms ψ_E(λ_{i,δ_n}(ω)), with λ_{i,δ_n}(ω) ↘ 0 as n → ∞ and ψ_E(λ) → 0 as λ → 0. Let us now prove that

(II_n(ω)) → a log(1/δ)/V[(X_i − µ)²]  (E.3)

for ω ∈ A. Denoting ψ_N(λ) = λ²/2, as well as ξ_{n,i}(ω) := ψ_E(λ_{i,δ_n}(ω))/ψ_N(λ_{i,δ_n}(ω)), it follows that

(II_n(ω)) = Σ_{i=m_ω}^n ψ_E(λ_{i,δ_n}(ω)) Z_i(ω) = Σ_{i=m_ω}^n ψ_N(λ_{i,δ_n}(ω)) ξ_{n,i}(ω) Z_i(ω)
= [log(1/δ_n)(n − m_ω + 1)/n] × [1/(n − m_ω + 1)] Σ_{i=m_ω}^n (1/m̂²_{4,i}(ω)) ξ_{n,i}(ω) Z_i(ω).

In view of (E.2), Lemma B.9 yields

[1/(n − m_ω + 1)] Σ_{i=m_ω}^n (1/m̂²_{4,i}(ω)) Z_i(ω) → a/V[(X_i − µ)²].

Noting that ψ_E(λ)/ψ_N(λ) → 1 as λ → 0, that λ_{i,δ_i}(ω) ≥ λ_{i,δ_n}(ω) for n ≥ i, and that

lim_{n→∞} λ_{n,δ_n}(ω) = lim_{n→∞} √(2 log(1/δ_n)/(m̂²_{4,n}(ω) n)) = lim_{n→∞} √(1/n) · lim_{n→∞} √(2 log(1/δ_n)/m̂²_{4,n}(ω)) = 0 · √(2 log(1/δ)/V[(X_i − µ)²]) = 0,

we observe that

ξ_{n,n}(ω) → 1,  ξ_{i,i}(ω) ≥ ξ_{n,i}(ω) ≥ 1,

where the latter inequality follows from Lemma B.2. Invoking Lemma B.10 with a_{n,i} = ξ_{n,i}(ω), b_i = (1/m̂²_{4,i}(ω)) Z_i(ω), it follows that

[1/(n − m_ω + 1)] Σ_{i=m_ω}^n (1/m̂²_{4,i}(ω)) ξ_{n,i}(ω) Z_i(ω) → a/V[(X_i − µ)²].

It suffices to observe that log(1/δ_n)(n − m_ω + 1)/n → log(1/δ) to conclude the proof.

E.6 Proof of Proposition C.5

In view of Proposition C.2 or Proposition C.3, there exists A ∈ F such that P(A) = 1 and

m²_{4,t}(ω) = Õ(1/t)  (E.4)

for all ω ∈ A. For ω ∈ A, it may be that

lim sup_{t→∞} t m²_{4,t}(ω) =: M < ∞  (E.5)

or

lim_{t→∞} t m²_{4,t}(ω) = ∞.  (E.6)

Denote L := sup_{n∈ℕ} δ_n, as well as κ := √(2 log(1/L)/M) ∧ c₁. If (E.5) holds, then

λ_{t,δ_n}(ω) = √(2 log(1/δ_n)/(m̂²_{4,t}(ω) n)) ∧ c₁
= √((2 log(1/δ_n)/(m̂²_{4,t}(ω) t)) (t/n)) ∧ c₁
≥ √((2 log(1/L)/M)(t/n)) ∧ c₁
(i)≥ κ √(t/n),

where (i) follows from t/n ≤ 1. Thus

(1/√n) Σ_{i=1}^n λ_{i,δ_n}(ω) ≥ (κ/√n) Σ_{i=1}^n √(i/n) = (κ/n) Σ_{i=1}^n √i (i)≥ (2κ/(3n)) n^{3/2} = (2κ/3) n^{1/2},

where (i) follows from Lemma B.5. If (E.6) holds, then there exists m(ω) ∈ ℕ such that, for t ≥ m(ω),

m̂²_{4,t} t ≥ 2 log(1/l)/c₁²,

where l = inf_{n∈ℕ} δ_n, which is strictly positive given that δ_n → δ > 0 and δ_n > 0. Thus

√(2 log(1/δ_n)/(m̂²_{4,i} n)) ≤ √(2 log(1/l)/(m̂²_{4,i} i)) ≤ c₁,

and so λ_{i,δ_n}(ω) = √(2 log(1/δ_n)/(m̂²_{4,i} n)) for i ≥ m(ω). It follows that

1/[(1/√n) Σ_{i=1}^n λ_{i,δ_n}(ω)] ≤ 1/[(1/√n) Σ_{i=m(ω)}^n λ_{i,δ_n}(ω)]
= [n/((n − m(ω) + 1)√(2 log(1/δ_n)))] × [(n − m(ω) + 1)/Σ_{i=m(ω)}^n (1/m̂_{4,i}(ω))]
(i)≤ [n/((n − m(ω) + 1)√(2 log(1/δ_n)))] × √(Σ_{i=m(ω)}^n m̂²_{4,i}(ω)/(n − m(ω) + 1)) =: (I_n(ω)) × (II_n(ω)),

where (i) follows from the harmonic-quadratic means inequality. Now note that (I_n(ω)) → 1/√(2 log(1/δ)) as n → ∞.
In view of (E.4),

(II_n(ω)) = √(Σ_{i=m(ω)}^n Õ(1/i)/(n − m(ω) + 1)) = Õ(1/√n).

We have shown that, regardless of whether (E.5) or (E.6) holds,

1/[(1/√n) Σ_{i=1}^n λ_{i,δ_n}(ω)] = Õ(1/√n)

for all ω ∈ A. Given that P(A) = 1, the proof is concluded.

E.7 Proof of Proposition C.7

We will conclude the proof in two steps. First, we will prove that

(I_n) = sup_{i≤n} {log(2/α_{1,n}) + σ² Σ_{k=1}^{i−1} ψ_P(√(2 log(2/α_{1,n})/(σ̂²_k k log(1+k))) ∧ c₅)}²

scales polylogarithmically with n almost surely. Second, we will show that

(II_n) = Σ_{i≤n} 1/[Σ_{k=1}^{i−1} (√(2 log(2/α_{1,n})/(σ̂²_k k log(1+k))) ∧ c₅)]²

also scales polylogarithmically with n almost surely. Thus, by Hölder's inequality and these two steps, it will follow that

Σ_{i≤n} {log(2/α_{1,n}) + σ² Σ_{k=1}^{i−1} ψ_P(√(2 log(2/α_{1,n})/(σ̂²_k k log(1+k))) ∧ c₅)}² / [Σ_{k=1}^{i−1} (√(2 log(2/α_{1,n})/(σ̂²_k k log(1+k))) ∧ c₅)]²  (E.7)

scales polylogarithmically with n almost surely. That is, it is Õ(1) almost surely.

Step 1. If σ = 0, then (I_n) = log²(2/α_{1,n}), which scales at most polylogarithmically with n given that 1/α_{1,n} = O(log n), which follows from α_{1,n} = Ω(1/log(n)). If σ > 0, then in view of (a + b)² ≤ 2a² + 2b², it follows that

(I_n) ≤ sup_{i≤n} {2 log²(2/α_{1,n}) + 2σ⁴ [Σ_{k=1}^{i−1} ψ_P(√(2 log(2/α_{1,n})/(σ̂²_k k log(1+k))) ∧ c₅)]²}.

We observe that

ψ_P(√(2 log(2/α_{1,n})/(σ̂²_k k log(1+k))) ∧ c₅)
≤ ψ_P(√(2 log(2/α_{1,n})/(σ̂²_k k log(1+k))))
(i)≤ (1/√(2k log(1+k)))² ψ_P(√(4 log(2/α_{1,n})/σ̂²_k))
= [1/(2k log(1+k))] ψ_P(√(4 log(2/α_{1,n})/σ̂²_k))
(ii)≤ [1/(2k log(1+k))] exp(√(4 log(2/α_{1,n})/σ̂²_k))
≤ (1/k) exp(√(4 log(2/α_{1,n})/σ̂²_k))
(iii)≤ (1/k) exp(4 log(2/α_{1,n})/σ̂²_k) = (1/k)(2/α_{1,n})^{4/σ̂²_k},

where (i) follows from Lemma B.1 and 2k log(1+k) ≥ 1 for all k ≥ 1, (ii) follows from ψ_P(x) = exp(x) − x − 1 ≤ exp(x) for all x ≥ 0, and (iii) follows from σ̂_k ∈ [0,1] and √x ≤ x for all x ≥ 1. Given Proposition C.1 and Lemma B.12 (in view of c₃ > 0), there exists A ∈ F such that P(A) = 1 and

σ̂²_k(ω) → σ²,  inf_k σ̂_k(ω) ≥ κ(ω) > 0,

for all ω ∈ A. For ω ∈ A and k ∈ ℕ,

ψ_P(√(2 log(2/α_{1,n})/(σ̂²_k(ω) k log(1+k)))) ≤ (1/k)(2/α_{1,n})^{4/κ²(ω)},

and so

sup_{i≤n} Σ_{k=1}^{i−1} ψ_P(√(2 log(2/α_{1,n})/(σ̂²_k(ω) k log(1+k))))
≤ (2/α_{1,n})^{4/κ²(ω)} Σ_{k=1}^{n−1} 1/k
(i)≤ (2/α_{1,n})^{4/κ²(ω)} (log n + 1),

where (i) is obtained in view of Lemma B.4. Thus

(I_n(ω)) ≤ 2 log²(2/α_{1,n}) + 2σ⁴ [(2/α_{1,n})^{4/κ²(ω)} (log n + 1)]²,

which scales polynomially with log n in view of 1/α_{1,n} = O(log n).

Step 2. Denoting κ = √(4 log(2/α)) ∧ c₅, it follows that

Σ_{k=1}^{i−1} [√(2 log(2/α_{1,n})/(σ̂²_k k log(1+k))) ∧ c₅]
≥ Σ_{k=1}^{i−1} [√(2 log(2/α)/(k log(1+k))) ∧ c₅]
(i)≥ κ Σ_{k=1}^{i−1} √(1/(2k log(1+k)))
= (κ/√2) Σ_{k=1}^{i−1} √(1/(k log(1+k)))
≥ (κ/√(2 log i)) Σ_{k=1}^{i−1} √(1/k)
(ii)≥ (2κ/√(2 log i)) [(√(i−1) − 1) ∨ 1],

where (i) follows from 2k log(1+k) ≥ 1 for k ≥ 1, and (ii) is obtained in view of Lemma B.3. It follows that

(II_n) ≤ Σ_{2≤i≤n} 2 log(i)/(κ² [(√(i−1) − 1) ∨ 1]²)
≤ (2 log(n)/κ²) Σ_{2≤i≤n} 1/[(√(i−1) − 1) ∨ 1]²
= (2 log(n)/κ²) (2 + Σ_{4≤i≤n} 1/(√(i−1) − 1)²)
(i)≤ (2 log(n)/κ²) (2 + Σ_{4≤i≤n} 9/i)
≤ (2 log(n)/κ²) (2 + Σ_{2≤i≤n} 9/i)
(ii)≤ (2 log(n)/κ²) (2 + 9 log n),

where (i) follows from √(i−1) − 1 ≥ √i/3 for all i ≥ 4, and (ii) follows from Lemma B.4. Thus, (II_n) also scales polylogarithmically with n.

F Alternative approaches to the proposed empirical Bernstein inequality

We present in this appendix two alternative approaches to that proposed in Section 4.
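Before turning to these alternatives, the elementary inequalities used in Step 2 of the proof of Proposition C.7 above can be spot-checked numerically over a finite (hypothetical) range of indices; the constants κ and c₅ cancel out of these checks:

```python
import math

# sqrt(i-1) - 1 >= sqrt(i)/3 for all i >= 4 (used in step (i) of the (II_n) bound)
for i in range(4, 2001):
    assert math.sqrt(i - 1) - 1 >= math.sqrt(i) / 3

# 2 k log(1+k) >= 1 for all k >= 1 (used in step (i) of the lower bound on the sum)
for k in range(1, 2001):
    assert 2 * k * math.log(1 + k) >= 1

# end-to-end check of the harmonic comparison behind the final (II_n) bound
n = 2000
lhs = sum(1 / max(math.sqrt(i - 1) - 1, 1) ** 2 for i in range(2, n + 1))
rhs = 2 + sum(9 / i for i in range(2, n + 1))
assert lhs <= rhs
```

These are finite-range sanity checks only; the proof itself establishes the inequalities for all indices.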
F.1 Decoupling the inequality into first and second moment inequalities

We start by presenting a naive approach to the problem using two empirical Bernstein inequalities, which may be the most natural starting point. However, this approach will prove suboptimal, both theoretically and empirically.

F.1.1 Confidence sequences obtained using two empirical Bernstein inequalities

We start by noting that σ² = 𝔼X²_i − 𝔼²X_i. Thus, in order to give an upper confidence sequence for σ², it suffices to derive an upper confidence sequence for 𝔼X²_i and a lower confidence sequence for 𝔼X_i. Consider

U_{1,α₁,t} := Σ_{i≤t} λ_i X²_i / Σ_{i≤t} λ_i + [log(1/α₁) + Σ_{i≤t} ψ_E(λ_i)(X²_i − m̂_{2,i})²] / Σ_{i≤t} λ_i

as the upper confidence sequence for 𝔼X²_i (which follows from the empirical Bernstein inequality), and

L_{2,α₂,t} := Σ_{i≤t} λ̃_i X_i / Σ_{i≤t} λ̃_i − [log(1/α₂) + Σ_{i≤t} ψ_E(λ̃_i)(X_i − µ̂_i)²] / Σ_{i≤t} λ̃_i

as the lower confidence sequence for 𝔼X_i (which also follows from the empirical Bernstein inequality), so that α₁ + α₂ = α. Now we take

σ² ≤ U_{1,α₁,t} − L²_{2,α₂,t}  (F.1)

as the upper confidence sequence for σ². Similarly, in order to derive lower inequalities, define

L_{1,α₁,t} := Σ_{i≤t} λ_i X²_i / Σ_{i≤t} λ_i − [log(1/α₁) + Σ_{i≤t} ψ_E(λ_i)(X²_i − m̂_{2,i})²] / Σ_{i≤t} λ_i

as the lower confidence sequence for 𝔼X²_i (which follows from the empirical Bernstein inequality), and

U_{2,α₂,t} := Σ_{i≤t} λ̃_i X_i / Σ_{i≤t} λ̃_i + [log(1/α₂) + Σ_{i≤t} ψ_E(λ̃_i)(X_i − µ̂_i)²] / Σ_{i≤t} λ̃_i

as the upper confidence sequence for 𝔼X_i (which follows from the empirical Bernstein inequality), so that α₁ + α₂ = α. Now we take

σ² ≥ L_{1,α₁,t} − U²_{2,α₂,t}  (F.2)

as the lower confidence sequence for σ².

F.1.2 Theoretical and empirical suboptimality of the approach

Ideally, we would expect the width of the confidence interval for σ² to scale as √(2V[(X−µ)²] log(1/α)/t) (i.e., the first-order term in Bennett's inequality). However, we see that the term in U_{1,α₁,t}

[log(1/α₁) + Σ_{i≤t} ψ_E(λ_i)(X²_i − m̂_{2,i})²] / Σ_{i≤t} λ_i

scales as √(2V[X²] log(1/α)/t). It suffices to observe that

V[(X−µ)²] = 𝔼(X−µ)⁴ − 𝔼²(X−µ)²
= 𝔼[X⁴ − 4X³µ + 6X²µ² − 4Xµ³ + µ⁴] − [𝔼²X² + µ⁴ − 2µ²𝔼X²]
= [𝔼X⁴ − 𝔼²X²] + 𝔼[−4X³µ + 8X²µ² − 4Xµ³]
= [𝔼X⁴ − 𝔼²X²] − 4µ𝔼(X^{3/2} − X^{1/2}µ)²
= V[X²] − 4µ𝔼(X^{3/2} − X^{1/2}µ)²
≤ V[X²],

where the last inequality follows from µ ∈ (0,1), to conclude that the first-order term of this confidence interval will generally dominate that of Bennett's inequality. We also clearly see the suboptimality of the approach empirically. Figure 2 compares the upper and lower inequalities proposed in Section 4 with those derived in this appendix for all the scenarios considered in Section 5, illustrating the poor performance of the latter.

Figure 2: Average confidence intervals over 100 simulations for the std σ for (I) the uniform distribution on (0,1), (II) the beta distribution with parameters (2,6), and (III) the beta distribution with parameters (5,5). For each of the inequalities, the 0.95-empirical quantiles are also displayed. The decoupling approach (this appendix) is compared against EB (our proposal). EB clearly outperforms the decoupled approach in all the scenarios.
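A minimal numerical sketch of this decoupled construction follows. It is not the paper's exact recipe: the tuning sequences are fixed at λ ≡ λ̃ ≡ 0.1, the predictable centres m̂_{2,i} and µ̂_i are lagged running means started at 1/2, and ψ_E(λ) = −log(1−λ) − λ is assumed; all of these are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 5000)          # X_i in [0,1]; true variance is 1/12
alpha1 = alpha2 = 0.025              # alpha1 + alpha2 = alpha = 0.05

def psi_E(lam):
    return -np.log1p(-lam) - lam     # assumed psi_E, defined for 0 <= lam < 1

def eb_bounds(z, lam, alpha):
    """Sketch of empirical-Bernstein upper/lower confidence bounds for E[z]."""
    zbar = np.cumsum(lam * z) / np.cumsum(lam)      # weighted running mean
    zhat = np.concatenate(([0.5], zbar[:-1]))       # lagged predictable centre
    slack = (np.log(1 / alpha)
             + np.cumsum(psi_E(lam) * (z - zhat) ** 2)) / np.cumsum(lam)
    return zbar + slack, zbar - slack

lam = np.full(len(x), 0.1)           # fixed tuning sequence (assumption)
u1, _ = eb_bounds(x ** 2, lam, alpha1)   # upper CS for E[X^2]
_, l2 = eb_bounds(x, lam, alpha2)        # lower CS for E[X]
upper_var = u1 - np.clip(l2, 0, 1) ** 2  # sigma^2 <= U_1 - L_2^2, cf. (F.1)
```

On this uniform example the final `upper_var` sits noticeably above the true variance 1/12, illustrating the slack that the decoupling introduces relative to the EB approach of Section 4.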
F.2 Upper bounding the error term instead of taking negligible plug-ins

In Section 4.3, we proposed to take λ_{t,l,α/2} = 0 if

[log(2/α₁) + σ̂²_t Σ_{i=1}^{t−1} ψ_P(λ̃_i)] / Σ_{i=1}^{t−1} λ̃_i ≤ 1.

A reasonable alternative would be to avoid defining λ_{t,l,α/2} as 0 (i.e., always define λ_{t,l,α/2} := λ_{t,u,α/2}), and to take

R̃_{t,δ} = [log(2/δ) + σ² Σ_{i=1}^{t−1} ψ_P(λ̃_i)] / Σ_{i=1}^{t−1} λ̃_i,  if [log(2/δ) + σ̂²_{t−1} Σ_{i=1}^{t−1} ψ_P(λ̃_i)] / Σ_{i=1}^{t−1} λ̃_i ≤ 1,
R̃_{t,δ} = 1,  otherwise.

In order to formalize this, denote

Υ_t := {i ∈ [t] : [log(2/δ) + σ̂²_{t−1} Σ_{i=1}^{t−1} ψ_P(λ̃_i)] / Σ_{i=1}^{t−1} λ̃_i ≤ 1},  Υᶜ_t := [t] \ Υ_t.

Taking

A_t := Σ_{i∈Υ_t} λ_i Ã_i / Σ_{i≤t} λ_i,  B_{t,δ} := 1 + Σ_{i∈Υ_t} λ_i B̃_{i,δ} / Σ_{i≤t} λ_i,  C_{t,δ} := [Σ_{i∈Υ_t} λ_i C̃_{i,δ} + Σ_{i∈Υᶜ_t} λ_i] / Σ_{i≤t} λ_i,

in Section 4.3, Corollary 4.4 also holds. Figure 3 exhibits the empirical performance of this choice of plug-ins and that of Section 4.3, in the three scenarios from Section 5. The figure shows the slight advantage of considering the plug-ins from Section 4.3.

Figure 3: Average confidence intervals over 100 simulations for the std σ for (I) the uniform distribution on (0,1), (II) the beta distribution with parameters (2,6), and (III) the beta distribution with parameters (5,5). For each of the inequalities, the 0.95-empirical quantiles are also displayed. The EB lower confidence intervals with the plug-ins from Section 4.3 (our proposal) are compared against the EB lower confidence intervals with the plug-ins proposed in this appendix (alternative). Despite the expected similar outcomes, the plug-ins from Section 4.3 lead to slightly sharper bounds.
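The case split defining R̃_{t,δ} above can be written compactly. The following hypothetical sketch takes the sequences (λ̃_i) and (ψ_P(λ̃_i)) up to t−1 as inputs, together with the oracle variance σ² used in the first branch and its running estimate σ̂²_{t−1} used in the condition:

```python
import math

def r_tilde(delta, lam, psi_P_of_lam, sigma2, sigma2_hat):
    """Sketch of R~_{t,delta}: the capped error term, cf. the definition above.

    lam, psi_P_of_lam: lists holding lambda~_i and psi_P(lambda~_i), i < t.
    sigma2: (oracle) variance; sigma2_hat: its running estimate sigma_hat^2_{t-1}.
    """
    s_lam, s_psi = sum(lam), sum(psi_P_of_lam)
    if (math.log(2 / delta) + sigma2_hat * s_psi) / s_lam <= 1:
        return (math.log(2 / delta) + sigma2 * s_psi) / s_lam
    return 1.0
```

Note that the condition is checked with the empirical σ̂²_{t−1} while the returned value involves σ², mirroring the display above; in practice σ² is then upper bounded, which is precisely where this alternative differs from the plug-ins of Section 4.3.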
arXiv:2505.02002v1 [math.OC] 4 May 2025

JOTA manuscript No. (will be inserted by the editor)

Sharp bounds in perturbed smooth optimization

Vladimir Spokoiny

Received: date / Accepted: date

Abstract This paper studies the problem of perturbed convex and smooth optimization. The main results describe how the solution and the value of the problem change if the objective function is perturbed. Examples include linear, quadratic, and smooth additive perturbations. Such problems naturally arise in statistics and machine learning, stochastic optimization, stability and robustness analysis, inverse problems, optimal control, etc. The results provide accurate expansions for the difference between the solution of the original problem and its perturbed counterpart with an explicit error term.

Keywords self-concordance · third and fourth order expansions

Mathematics Subject Classification (2010) 65K10 · 90C31 · 90C25 · 90C15

1 Introduction

For a smooth convex function f(·), consider an optimization problem

υ∗ = argmin_{υ∈Υ} f(υ).  (1)

Here Υ is an open subset of ℝᵖ for p ≤ ∞ and we focus on unconstrained optimization. This paper studies the following question: how do υ∗ and f(υ∗) change if the objective function f is perturbed in some special way? We show how any smooth perturbation can be reduced to a linear one with g(υ) = f(υ) + ⟨υ, A⟩.

Weierstrass Institute and HU Berlin, HSE and IITP RAS, Mohrenstr. 39, 10117 Berlin, Germany. spokoiny@wias-berlin.de

For υ◦ = argmin_υ g(υ), we describe the differences υ◦ − υ∗ and g(υ◦) − g(υ∗) in terms of the Newton correction 𝔽⁻¹A with 𝔽 = ∇²f(υ∗) or 𝔽 = ∇²f(υ◦).

Motivation 1: Statistical inference for SLS models. Most statistical estimation procedures can be represented as "empirical risk minimization". This includes least squares, maximum likelihood, minimum contrast, and many other methods. Let L(υ) be a random loss function (or empirical risk) and 𝔼L(υ) its expectation, υ ∈ Υ ⊆ ℝᵖ, p < ∞.
Consider

υ̃ = argmin_{υ∈Υ} L(υ);  υ∗ = argmin_{υ∈Υ} 𝔼L(υ).

Aim: describe the estimation loss υ̃ − υ∗ and the prediction loss (excess) L(υ̃) − L(υ∗). Ostrovskii and Bach (2021) provide some finite sample accuracy guarantees under self-concordance of L(υ). Spokoiny (2025) introduced a special class of stochastically linear smooth (SLS) models for which the stochastic term ζ(υ) = L(υ) − 𝔼L(υ) depends linearly on υ and thus its gradient ∇ζ does not depend on υ: ζ(υ) − ζ(υ◦) = ⟨υ − υ◦, ∇ζ⟩. This means that L(υ) is a linear perturbation of f(υ) = 𝔼L(υ). This reduces the study of the maximum likelihood estimator υ̃ to linear perturbation analysis.

Motivation 2: Uncertainty quantification by smooth penalization. Given a smooth penalty pen_G(υ) indexed by a parameter G, define

f_G(υ) = f(υ) + pen_G(υ).

A leading example is given by a quadratic penalty ‖Gυ‖²/2:

f_G(υ) = f(υ) + ‖Gυ‖²/2.

Again, the question under study is the corresponding change of the solution υ∗_G = argmin_υ f_G(υ) and the value f_G(υ∗_G) of this problem. Adding a quadratic penalty changes the curvature of the objective function but does not change its smoothness. It is important to establish an expansion whose remainder is only sensitive to the smoothness of f. Similar questions arise in Bayesian inference and high-dimensional Laplace approximation; see e.g. Stuart (2010), Nickl (2022), Katsevich (2023), and references therein. For applications to Bayesian variational inference see Katsevich and Rigollet (2023).

Motivation 3: Quasi-Newton iterations. Consider an optimization routine for (1) delivering υ_k as a solution after step k. We want to measure the accuracy υ_k − υ∗ and the deficiency f(υ_k) − f(υ∗). Define

f_k(υ) := f(υ) − ⟨υ − υ_k, ∇f(υ_k)⟩.

Then ∇f_k(υ_k) = 0. As this function is strongly convex, it holds that υ_k = argmin_υ f_k(υ), while f is a linear perturbation of f_k with A = ∇f(υ_k).
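This quasi-Newton view can be illustrated in one dimension. The sketch below uses the arbitrary strongly convex choice f(v) = eᵛ + e⁻ᵛ (minimizer v∗ = 0), which is not from the paper, and checks that the Newton correction υ_k − ∇²f(υ_k)⁻¹∇f(υ_k) reduces the error to the order |υ_k − υ∗|²:

```python
import math

# Illustrative choice f(v) = e^v + e^{-v}, so v* = 0 (assumption, not the paper's f)
f1 = lambda v: math.exp(v) - math.exp(-v)   # f'(v)
f2 = lambda v: math.exp(v) + math.exp(-v)   # f''(v)

v_k = 0.1                                   # current iterate
newton = v_k - f1(v_k) / f2(v_k)            # v_k - f''(v_k)^{-1} f'(v_k)
# first-order error |v_k - v*| = 0.1, corrected error is at most its square:
assert abs(newton) <= abs(v_k) ** 2
```

The residual after the correction is exactly the object the expansions of Section 3 control in closed form.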
The obtained results describe υ∗ − υ_k in terms of ∇f(υ_k) and ∇²f(υ_k). Similar derivations for tensor methods in convex optimization and further references can be found in Doikov and Nesterov (2021, 2022, 2024) and Rodomanov and Nesterov (2022, 2021). An accurate estimate of υ_k − υ∗ and of f_k(υ_k) − f_k(υ∗) can be used for the analysis of many iterative procedures in optimization and stochastic approximation; see Polyak and Juditsky (1992), Nesterov and Nemirovskii (1994), Boyd and Vandenberghe (2004), Nesterov (2018), Polyak (2007), among many others.

There exists a vast literature on sensitivity analysis in perturbed optimization focusing on the asymptotic setup with a small parametric perturbation of a non-smooth objective function in Hilbert/Banach spaces under non-smooth constraints; see e.g. Bonnans and Shapiro (2000), Bonnans and Cominetti (1996), Bonnans and Shapiro (1998), among many others. For geometric properties of perturbed optimization with applications to machine learning see Berthet et al. (2020) and references therein. For our motivating examples, the assumption of a small perturbation is too restrictive. Instead, we limit ourselves to a finite dimensional optimization setup and to a smooth perturbation. This enables us to derive accurate non-asymptotic closed form results.

This paper's contribution. The main results describe the leading terms and the remainder of the expansion for the solution of the perturbed optimization problem for the cases of a linear, quadratic, and smooth perturbation. The accuracy of the expansion strongly depends on the smoothness of the perturbed objective function, given in terms of a metric tensor 𝔻 with 𝔻² ≲ 𝔽 for 𝔽 := ∇²f(υ∗). This enables us to obtain sharp bounds for a quadratic or smooth perturbation.
Under second-order smoothness, we show that

‖𝔻(υ◦ − υ∗ + 𝔽⁻¹A)‖ ≤ (2√ω/(1−ω)) ‖𝔻𝔽⁻¹A‖,
|2g(υ◦) − 2g(υ∗) + ‖𝔽^{−1/2}A‖²| ≤ (ω/(1−ω)) ‖𝔻𝔽⁻¹A‖²,

where ‖·‖ means the standard Euclidean norm and ω describes the relative error of the second-order Taylor approximation; see Theorem 3.2. Further, a third-order self-concordance condition with a small constant τ₃ helps to substantially improve the accuracy:

‖𝔻⁻¹𝔽(υ◦ − υ∗ + 𝔽⁻¹A)‖ ≤ 0.75 τ₃ ‖𝔻𝔽⁻¹A‖²,
|2g(υ◦) − 2g(υ∗) + ‖𝔽^{−1/2}A‖²| ≤ 0.5 τ₃ ‖𝔻𝔽⁻¹A‖³;

see Theorems 3.3 and 3.4. All these bounds are non-asymptotic, the remainder is given in a closed form via the norm of the Newton correction 𝔽⁻¹A, and it is nearly optimal. The results only involve the smoothness constant τ₃. Even more, fourth-order self-concordance yields

‖𝔻⁻¹𝔽{υ◦ − υ∗ + 𝔽⁻¹A + 𝔽⁻¹∇T(𝔽⁻¹A)}‖ ≤ (τ₄/2 + τ₃²) ‖𝔻𝔽⁻¹A‖³

and

|g(υ◦) − g(υ∗) + ½‖𝔽^{−1/2}A‖² + T(𝔽⁻¹A)| ≤ ((τ₄ + 4τ₃²)/8) ‖𝔻𝔽⁻¹A‖⁴ + ((τ₄ + 2τ₃²)²/4) ‖𝔻𝔽⁻¹A‖⁶

with the third-order tensor T = ∇³f(υ∗); see Theorem 3.5. The skewness correction term 𝔽⁻¹∇T(𝔽⁻¹A) naturally arises in variational inference and sharp Laplace approximation; see Katsevich (2024). We also explain how the case of any smooth perturbation can be reduced to a linear one.

Organization of the paper. Section 2 introduces some concepts of local smoothness in the sense of the self-concordance condition from Nesterov and Nemirovskii (1994). Section 3 collects the main results on linearly perturbed optimization.
The obtained results are extended to the case of a quadratic perturbation in Section 4 and of a smooth perturbation in Section 5.

2 Gateaux smoothness and self-concordance

Below we assume the function f(υ), υ ∈ Υ ⊆ ℝᵖ, to be strongly convex with a positive definite Hessian 𝔽(υ) := ∇²f(υ). Also, assume f(υ) three or sometimes even four times Gateaux differentiable in υ ∈ Υ. Local smoothness of f will be described by the relative error of the Taylor expansion of the third or fourth order. Define

δ₃(υ,u) = f(υ+u) − f(υ) − ⟨∇f(υ), u⟩ − ½⟨∇²f(υ), u⊗²⟩,
δ′₃(υ,u) = ⟨∇f(υ+u), u⟩ − ⟨∇f(υ), u⟩ − ⟨∇²f(υ), u⊗²⟩,

and

δ₄(υ,u) = f(υ+u) − f(υ) − ⟨∇f(υ), u⟩ − ½⟨∇²f(υ), u⊗²⟩ − (1/6)⟨∇³f(υ), u⊗³⟩.

Now, for each υ, suppose to be given a positive symmetric matrix 𝔻(υ) defining a local metric and a local vicinity around υ: for some radius r,

U_r(υ) = {u ∈ ℝᵖ : ‖𝔻(υ)u‖ ≤ r}.

Local smoothness properties of f at υ are given via the quantities

ω(υ) := sup_{u: ‖𝔻(υ)u‖ ≤ r} 2|δ₃(υ,u)| / ‖𝔻(υ)u‖²,  ω′(υ) := sup_{u: ‖𝔻(υ)u‖ ≤ r} |δ′₃(υ,u)| / ‖𝔻(υ)u‖².  (2)

The definition yields for any u with ‖𝔻(υ)u‖ ≤ r

|δ₃(υ,u)| ≤ (ω(υ)/2)‖𝔻(υ)u‖²,  |δ′₃(υ,u)| ≤ ω′(υ)‖𝔻(υ)u‖².  (3)

The approximation results can be improved under a third-order upper bound on the error of the Taylor expansion.

(T3) For some τ₃,

|δ₃(υ,u)| ≤ (τ₃/6)‖𝔻(υ)u‖³,  |δ′₃(υ,u)| ≤ (τ₃/2)‖𝔻(υ)u‖³,  u ∈ U_r(υ).

(T4) For some τ₄,

|δ₄(υ,u)| ≤ (τ₄/24)‖𝔻(υ)u‖⁴,  u ∈ U_r(υ).

We also present a version of (T3) resp. (T4) in terms of the third (resp. fourth) derivative of f.

(T∗₃) f(υ) is three times differentiable and

sup_{u: ‖𝔻(υ)u‖ ≤ r} sup_{z ∈ ℝᵖ} |⟨∇³f(υ+u), z⊗³⟩| / ‖𝔻(υ)z‖³ ≤ τ₃.

(T∗₄) f(υ) is four times differentiable and

sup_{u: ‖𝔻(υ)u‖ ≤ r} sup_{z ∈ ℝᵖ} |⟨∇⁴f(υ+u), z⊗⁴⟩| / ‖𝔻(υ)z‖⁴ ≤ τ₄.

By Banach's characterization (Banach, 1938), (T∗₃) implies

|⟨∇³f(υ+u), z₁⊗z₂⊗z₃⟩| ≤ τ₃ ‖𝔻(υ)z₁‖ ‖𝔻(υ)z₂‖ ‖𝔻(υ)z₃‖  (4)

for any u with ‖𝔻(υ)u‖ ≤ r and all z₁, z₂, z₃ ∈ ℝᵖ.
Similarly, under (T∗₄),

|⟨∇⁴f(υ+u), z₁⊗z₂⊗z₃⊗z₄⟩| ≤ τ₄ ‖𝔻(υ)z₁‖ ‖𝔻(υ)z₂‖ ‖𝔻(υ)z₃‖ ‖𝔻(υ)z₄‖,  z₁, z₂, z₃, z₄ ∈ ℝᵖ.  (5)

Lemma 2.1 Under (T3) or (T∗₃), it holds for ω(υ) and ω′(υ) from (2) that

ω(υ) ≤ τ₃r/3,  ω′(υ) ≤ τ₃r/2.

Proof For any u ∈ U_r(υ) with ‖𝔻(υ)u‖ ≤ r,

|δ₃(υ,u)| ≤ (τ₃/6)‖𝔻(υ)u‖³ ≤ (τ₃r/6)‖𝔻(υ)u‖²,

and the bound for ω(υ) follows. The proof for ω′(υ) is similar. Under (T∗₃), apply the third-order Taylor expansion to the univariate function f(υ+tu) of t and (T∗₃) with z ≡ u.

The values τ₃ and τ₄ are usually very small. Some quantitative bounds are given later in this section under the assumption that the function f(υ) can be written in the form f(υ) = n h(υ) for a fixed smooth function h(υ) with Hessian ∇²h(υ). The factor n has the meaning of the sample size.

(S∗₃) f(υ) = n h(υ) for h(υ) three times differentiable and, for a metric tensor 𝕞(υ),

sup_{u: ‖𝕞(υ)u‖ ≤ r/√n} |⟨∇³h(υ+u), u⊗³⟩| / ‖𝕞(υ)u‖³ ≤ c₃.

(S∗₄) The function h(·) satisfies (S∗₃) and

sup_{u: ‖𝕞(υ)u‖ ≤ r/√n} |⟨∇⁴h(υ+u), u⊗⁴⟩| / ‖𝕞(υ)u‖⁴ ≤ c₄.

(S∗₃) and (S∗₄) are local versions of the so-called self-concordance condition; see Nesterov and Nemirovskii (1994) and Ostrovskii and Bach (2021). In fact, they require that each univariate function h(υ+tu) of t ∈ ℝ is self-concordant with some universal constants c₃ and c₄. Under (S∗₃) and (S∗₄), with 𝔻²(υ) = n𝕞²(υ), the values δ₃(υ,u), δ₄(υ,u), and ω(υ), ω′(υ) can be well bounded.

Lemma 2.2 Suppose (S∗₃). Then (T3) follows with τ₃ = c₃n^{−1/2}. Moreover, for ω(υ) and ω′(υ) from (2), it holds that

ω(υ) ≤ c₃r/(3n^{1/2}),  ω′(υ) ≤ c₃r/(2n^{1/2}).  (6)

Also, (T4) follows from (S∗₄) with τ₄ = c₄n⁻¹.

Proof For any u ∈ U_r(υ), by the third-order Taylor expansion,

|δ₃(υ,u)| ≤ sup_{t∈[0,1]} (1/6)|⟨∇³f(υ+tu), u⊗³⟩| = (n/6) sup_{t∈[0,1]} |⟨∇³h(υ+tu), u⊗³⟩|
≤ (nc₃/6)‖𝕞(υ)u‖³ = (n^{−1/2}c₃/6)‖𝔻(υ)u‖³ ≤ (n^{−1/2}c₃r/6)‖𝔻(υ)u‖².

This implies (T3) as well as (6); see (3). The statement about (T4) is similar.

3 Linearly perturbed optimization

Let f(υ) be a smooth convex function,

υ∗ = argmin_υ f(υ),  f(υ∗) = min_υ f(υ),  𝔽 = ∇²f(υ∗).

Later we study the question of how the point of minimum and the value of the minimum of f change if we add a linear component to f. More precisely, let another function g(υ) satisfy, for some vector A,

g(υ) − g(υ∗) = ⟨υ − υ∗, A⟩ + f(υ) − f(υ∗).  (7)

Define

υ◦ := argmin_υ g(υ),  g(υ◦) = min_υ g(υ).  (8)

The aim of the analysis is to evaluate the quantities υ◦ − υ∗ and g(υ◦) − g(υ∗). First, we consider the case of a quadratic function f.

Lemma 3.1 Let f(υ) be quadratic with ∇²f(υ) ≡ 𝔽 and let g(υ) be from (7). Then

υ◦ − υ∗ = −𝔽⁻¹A,  g(υ◦) − g(υ∗) = −½‖𝔽^{−1/2}A‖².

Proof If f(υ) is quadratic, then under (7), g(υ) is quadratic as well with ∇²g(υ) ≡ 𝔽. This implies ∇g(υ∗) − ∇g(υ◦) = 𝔽(υ∗ − υ◦). Further, (7) and ∇f(υ∗) = 0 yield ∇g(υ∗) = A. Together with ∇g(υ◦) = 0, this implies υ◦ − υ∗ = −𝔽⁻¹A. The Taylor expansion of g at υ◦ yields, by ∇g(υ◦) = 0,

g(υ∗) − g(υ◦) = ½‖𝔽^{1/2}(υ◦ − υ∗)‖² = ½‖𝔽^{−1/2}A‖²,

and the assertion follows.

3.1 Local concentration

Let f satisfy (2) at υ∗ with 𝔻(υ∗) = 𝔻 ≤ κ𝔽^{1/2} for some κ > 0. The latter means that the matrix 𝔽 − κ⁻²𝔻² is positive semidefinite. The next result describes the concentration properties of υ◦ from (8) in a local elliptic set

A(r) := {υ : ‖𝔽^{1/2}(υ − υ∗)‖ ≤ r},  (9)

where r is slightly larger than ‖𝔽^{−1/2}A‖.

Theorem 3.1 Let f(υ) be a strongly convex function with f(υ∗) = min_υ f(υ) and 𝔽 = ∇²f(υ∗). Let, further, g(υ) and f(υ) be related by (7) with some vector A. Fix ν < 1 and r such that ‖𝔽^{−1/2}A‖ ≤ νr. Suppose now that f(υ) satisfies (2) for υ = υ∗, 𝔻(υ∗) = 𝔻 ≤ κ𝔽^{1/2} with some κ > 0, and ω′ such that

1 − ν − ω′κ² > 0.  (10)

Then for υ◦ from (8), it holds that ‖𝔽^{1/2}(υ◦ − υ∗)‖ ≤ r and ‖𝔻(υ◦ − υ∗)‖ ≤ κr.

Proof Define 𝔻_κ = κ⁻¹𝔻. Then 𝔻_κ ≤ 𝔽^{1/2}, and the use of 𝔻_κ in place of 𝔻 yields (2) with ω′κ² in place of ω′. This allows us to reduce the proof to κ = 1. The bound ‖𝔽^{−1/2}A‖ ≤ νr implies for any u

|⟨A, u⟩| = |⟨𝔽^{−1/2}A, 𝔽^{1/2}u⟩| ≤ νr‖𝔽^{1/2}u‖.

Let υ be a point on the boundary of the set A(r) from (9). We also write u = υ − υ∗. The idea is to show that the derivative (d/dt)g(υ∗ + tu) is positive for t > 1. Then all the extreme points of g(υ) are within A(r). We use the decomposition

g(υ∗ + tu) − g(υ∗) = ⟨A, u⟩t + f(υ∗ + tu) − f(υ∗).

With h(t) = f(υ∗ + tu) − f(υ∗) + ⟨A, u⟩t, it holds

(d/dt) f(υ∗ + tu) = −⟨A, u⟩ + h′(t).  (11)

By definition of υ∗, it also holds h′(0) = ⟨A, u⟩. The identity ∇²f(υ∗) = 𝔽 yields h″(0) = ‖𝔽^{1/2}u‖². Bound (3) implies for |t| ≤ 1

|h′(t) − h′(0) − t h″(0)| ≤ t‖𝔻u‖²ω′.
For $t = 1$, we obtain by (10)
\[
h'(1) \ge \langle A, u\rangle + \|\mathbb{F}^{1/2} u\|^2 - \|\mathbb{D} u\|^2 \omega' \ge \|\mathbb{F}^{1/2} u\|^2 (1 - \omega' - \nu) > 0.
\]
Moreover, convexity of $h(t)$ implies that $h'(t) - h'(0)$ increases in $t$ for $t > 1$. Further, summing up the above derivation yields
\[
\frac{d}{dt} g(\upsilon^* + tu)\Big|_{t=1} \ge \|\mathbb{F}^{1/2} u\|^2 (1 - \nu - \omega') > 0.
\]
As $\frac{d}{dt} g(\upsilon^* + tu)$ increases with $t \ge 1$ together with $h'(t)$ due to (11), the same applies to all such $t$. This implies the assertion.

3.2 Second-order expansions

The result of Theorem 3.1 allows us to localize the point $\upsilon^\circ = \operatorname{argmin}_{\upsilon} g(\upsilon)$ in the local vicinity $\mathcal{A}(r)$ of $\upsilon^*$. The use of smoothness properties of $g$ or, equivalently, of $f$, in this vicinity helps to obtain rather sharp expansions for $\upsilon^\circ - \upsilon^*$ and for $g(\upsilon^\circ) - g(\upsilon^*)$.

Theorem 3.2 Assume the conditions of Theorem 3.1 with $\omega \le 1/3$. Then
\[
-\frac{\omega}{1 - \kappa^2 \omega}\,\|\mathbb{D}\mathbb{F}^{-1} A\|^2 \;\le\; 2 g(\upsilon^\circ) - 2 g(\upsilon^*) + \|\mathbb{F}^{-1/2} A\|^2 \;\le\; \frac{\omega}{1 + \kappa^2 \omega}\,\|\mathbb{D}\mathbb{F}^{-1} A\|^2 . \qquad (12)
\]
Also
\[
\|\mathbb{D}(\upsilon^\circ - \upsilon^* + \mathbb{F}^{-1} A)\| \le \frac{2\sqrt{\omega}}{1 - \kappa^2 \omega}\,\|\mathbb{D}\mathbb{F}^{-1} A\|, \qquad
\|\mathbb{D}(\upsilon^\circ - \upsilon^*)\| \le \frac{1 + 2\sqrt{\omega}}{1 - \kappa^2 \omega}\,\|\mathbb{D}\mathbb{F}^{-1} A\|. \qquad (13)
\]

Proof As in the proof of Theorem 3.1, replacing $\mathbb{D}$ with $\mathbb{D}_\kappa = \kappa^{-1}\mathbb{D}$ reduces the statement to $\kappa = 1$ in view of $\kappa^2 \omega \mathbb{D}_\kappa^2 = \omega \mathbb{D}^2$.
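Before refining the quadratic picture, the explicit identities of Lemma 3.1 are easy to sanity-check numerically. The following minimal sketch (assuming numpy; all variable names are illustrative) builds a random positive definite Hessian and verifies that the minimizer of the linearly perturbed function shifts by $-\mathbb{F}^{-1}A$ while the minimum drops by $\frac12\|\mathbb{F}^{-1/2}A\|^2$:

```python
# Numerical sanity check of Lemma 3.1 (illustrative, assuming numpy).
import numpy as np

rng = np.random.default_rng(0)
p = 5
M = rng.standard_normal((p, p))
F = M @ M.T + p * np.eye(p)          # Hessian F = Hess f(v*), positive definite
v_star = rng.standard_normal(p)
A = rng.standard_normal(p)

def f(v):                            # quadratic f with minimum at v_star
    d = v - v_star
    return 0.5 * d @ F @ d

def g(v):                            # linear perturbation as in (7)
    return f(v) + A @ (v - v_star)

v_circ = v_star - np.linalg.solve(F, A)   # claimed minimizer of g
grad = F @ (v_circ - v_star) + A          # gradient of g at v_circ
assert np.allclose(grad, 0)

drop = g(v_circ) - g(v_star)              # value drop of the perturbed minimum
assert np.isclose(drop, -0.5 * A @ np.linalg.solve(F, A))
print("Lemma 3.1 identities verified")
```

For non-quadratic $f$, Theorem 3.2 above bounds exactly how far these two identities can fail.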
By (2), for any $\upsilon \in \mathcal{A}(r)$
\[
\Bigl| f(\upsilon) - f(\upsilon^*) - \tfrac{1}{2}\|\mathbb{F}^{1/2}(\upsilon - \upsilon^*)\|^2 \Bigr| \le \frac{\omega}{2}\|\mathbb{D}(\upsilon - \upsilon^*)\|^2. \qquad (14)
\]
Further,
\[
g(\upsilon) - g(\upsilon^*) + \tfrac{1}{2}\|\mathbb{F}^{-1/2} A\|^2
= \langle \upsilon - \upsilon^*, A\rangle + f(\upsilon) - f(\upsilon^*) + \tfrac{1}{2}\|\mathbb{F}^{-1/2} A\|^2
= \tfrac{1}{2}\bigl\|\mathbb{F}^{1/2}(\upsilon - \upsilon^*) + \mathbb{F}^{-1/2} A\bigr\|^2 + f(\upsilon) - f(\upsilon^*) - \tfrac{1}{2}\|\mathbb{F}^{1/2}(\upsilon - \upsilon^*)\|^2. \qquad (15)
\]
As $\upsilon^\circ \in \mathcal{A}(r)$ and it minimizes $g(\upsilon)$, we derive by (14)
\[
g(\upsilon^\circ) - g(\upsilon^*) + \tfrac{1}{2}\|\mathbb{F}^{-1/2} A\|^2
= \min_{\upsilon \in \mathcal{A}(r)}\Bigl\{ g(\upsilon) - g(\upsilon^*) + \tfrac{1}{2}\|\mathbb{F}^{-1/2} A\|^2 \Bigr\}
\ge \min_{\upsilon \in \mathcal{A}(r)}\Bigl\{ \tfrac{1}{2}\bigl\|\mathbb{F}^{1/2}(\upsilon - \upsilon^*) + \mathbb{F}^{-1/2} A\bigr\|^2 - \frac{\omega}{2}\|\mathbb{D}(\upsilon - \upsilon^*)\|^2 \Bigr\}.
\]
Denote $u = \mathbb{F}^{1/2}(\upsilon - \upsilon^*)$, $\xi = \mathbb{F}^{-1/2} A$, and $\mathbb{B} = \mathbb{F}^{-1/2}\mathbb{D}^2\mathbb{F}^{-1/2}$. As $\mathbb{D}^2 \le \mathbb{F}$, $\|\xi\| < r$, and $\omega < 1$, it holds $\|\mathbb{B}\| \le 1$ for the operator norm of $\mathbb{B}$ and
\[
\min_{\upsilon \in \mathcal{A}(r)}\bigl\{ \bigl\|\mathbb{F}^{1/2}(\upsilon - \upsilon^*) + \mathbb{F}^{-1/2} A\bigr\|^2 - \omega\|\mathbb{D}(\upsilon - \upsilon^*)\|^2 \bigr\}
= \min_{\|u\| \le r}\bigl\{ \|u + \xi\|^2 - \omega\, u^\top \mathbb{B} u \bigr\}
= -\xi^\top\bigl\{ (I - \omega\mathbb{B})^{-1} - I \bigr\}\xi
\ge -\frac{\omega}{1 - \omega}\,\xi^\top \mathbb{B}\xi
\]
with $I$ being the unit matrix in $\mathbb{R}^p$. This yields
\[
g(\upsilon^\circ) - g(\upsilon^*) + \tfrac{1}{2}\|\mathbb{F}^{-1/2} A\|^2 \ge -\frac{\omega}{2(1 - \omega)}\,\|\mathbb{D}\mathbb{F}^{-1} A\|^2.
\]
Similarly
\[
g(\upsilon^\circ) - g(\upsilon^*) + \tfrac{1}{2}\|\mathbb{F}^{-1/2} A\|^2
\le \min_{\upsilon \in \mathcal{A}(r)}\Bigl\{ \tfrac{1}{2}\bigl\|\mathbb{F}^{1/2}(\upsilon - \upsilon^*) + \mathbb{F}^{-1/2} A\bigr\|^2 + \frac{\omega}{2}\|\mathbb{D}(\upsilon - \upsilon^*)\|^2 \Bigr\}
\le \tfrac{1}{2}\xi^\top\bigl\{ I - (I + \omega\mathbb{B})^{-1} \bigr\}\xi
\le \frac{\omega}{2(1 + \omega)}\,\|\mathbb{D}\mathbb{F}^{-1} A\|^2. \qquad (16)
\]
These bounds imply (12). Now we derive similarly to (15) that for $\upsilon \in \mathcal{A}(r)$
\[
g(\upsilon) - g(\upsilon^*) \ge \langle \upsilon - \upsilon^*, A\rangle + \tfrac{1}{2}\|\mathbb{F}^{1/2}(\upsilon - \upsilon^*)\|^2 - \frac{\omega}{2}\|\mathbb{D}(\upsilon - \upsilon^*)\|^2.
\]
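The minimization step above reduces to an explicit quadratic program. Its closed-form value, $\min_u \{\|u+\xi\|^2 - \omega\, u^\top \mathbb{B} u\} = -\xi^\top\{(I-\omega\mathbb{B})^{-1} - I\}\xi$ when $I - \omega\mathbb{B}$ is positive definite, can be checked numerically; a small sketch assuming numpy:

```python
# Numerical check of the explicit quadratic minimization used in the proof
# (illustrative, assuming numpy): with 0 <= B <= I and omega < 1,
#   min_u { ||u + xi||^2 - omega u' B u } = -xi' { (I - omega B)^{-1} - I } xi.
import numpy as np

rng = np.random.default_rng(1)
p = 4
Q = np.linalg.qr(rng.standard_normal((p, p)))[0]
B = Q @ np.diag(rng.uniform(0, 1, p)) @ Q.T   # 0 <= B <= I in operator order
xi = rng.standard_normal(p)
omega = 0.3

I = np.eye(p)
M = I - omega * B                             # positive definite
u_min = -np.linalg.solve(M, xi)               # stationary point of the objective
value = (u_min + xi) @ (u_min + xi) - omega * u_min @ B @ u_min
claimed = -xi @ (np.linalg.inv(M) - I) @ xi
assert np.isclose(value, claimed)
print("quadratic minimization identity verified")
```

The unconstrained minimizer lies inside the ball $\|u\| \le r$ in the regime of the theorem, which is why the constraint can be ignored in this step.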
A particular choice $\upsilon = \upsilon^\circ$ yields
\[
g(\upsilon^\circ) - g(\upsilon^*) \ge \langle \upsilon^\circ - \upsilon^*, A\rangle + \tfrac{1}{2}\|\mathbb{F}^{1/2}(\upsilon^\circ - \upsilon^*)\|^2 - \frac{\omega}{2}\|\mathbb{D}(\upsilon^\circ - \upsilon^*)\|^2.
\]
Combining this inequality with (16) allows us to bound
\[
\langle \upsilon^\circ - \upsilon^*, A\rangle + \tfrac{1}{2}\|\mathbb{F}^{1/2}(\upsilon^\circ - \upsilon^*)\|^2 - \frac{\omega}{2}\|\mathbb{D}(\upsilon^\circ - \upsilon^*)\|^2 \le -\tfrac{1}{2}\xi^\top (I + \omega\mathbb{B})^{-1}\xi.
\]
With $u^\circ = \mathbb{F}^{1/2}(\upsilon^\circ - \upsilon^*)$, this implies
\[
2\langle u^\circ, \xi\rangle + u^{\circ\top}(I - \omega\mathbb{B}) u^\circ \le -\xi^\top (I + \omega\mathbb{B})^{-1}\xi,
\]
and hence,
\[
\bigl\{ u^\circ + (I - \omega\mathbb{B})^{-1}\xi \bigr\}^\top (I - \omega\mathbb{B}) \bigl\{ u^\circ + (I - \omega\mathbb{B})^{-1}\xi \bigr\}
\le \xi^\top\bigl\{ (I - \omega\mathbb{B})^{-1} - (I + \omega\mathbb{B})^{-1} \bigr\}\xi
\le \frac{2\omega}{(1 + \omega)(1 - \omega)}\,\xi^\top \mathbb{B}\xi.
\]
Introduce $\|\cdot\|_{\mathbb{V}}$ by $\|x\|_{\mathbb{V}}^2 \stackrel{\mathrm{def}}{=} x^\top(I - \omega\mathbb{B})x$, $x \in \mathbb{R}^p$. Then
\[
\bigl\| u^\circ + (I - \omega\mathbb{B})^{-1}\xi \bigr\|_{\mathbb{V}}^2 \le \frac{2\omega}{1 - \omega^2}\,\xi^\top \mathbb{B}\xi.
\]
As
\[
\bigl\| \xi - (I - \omega\mathbb{B})^{-1}\xi \bigr\|_{\mathbb{V}}^2 = \omega^2 (\mathbb{B}\xi)^\top (I - \omega\mathbb{B})^{-1}\mathbb{B}\xi \le \frac{\omega^2}{1 - \omega}\|\mathbb{B}\xi\|^2 \le \frac{\omega^2}{1 - \omega}\,\xi^\top \mathbb{B}\xi,
\]
we conclude for $\omega \le 1/3$ by the triangle inequality
\[
\| u^\circ + \xi \|_{\mathbb{V}} \le \Bigl( \sqrt{\frac{\omega^2}{1 - \omega}} + \sqrt{\frac{2\omega}{1 - \omega^2}} \Bigr)\sqrt{\xi^\top \mathbb{B}\xi} \le 2\sqrt{\frac{\omega}{1 - \omega}}\,\sqrt{\xi^\top \mathbb{B}\xi},
\]
and (13) follows by $I - \omega\mathbb{B} \ge (1 - \omega) I$.

Remark 3.1 The roles of the functions $f$ and $g$ are exchangeable. In particular, the results from (13) apply with $\mathbb{F} = \nabla^2 g(\upsilon^\circ) = \nabla^2 f(\upsilon^\circ)$ provided that (2) is fulfilled at $\upsilon = \upsilon^\circ$.

3.3 Expansions under third order smoothness

The results of Theorem 3.2 can be refined if $f$ satisfies condition $(T_3)$.

Theorem 3.3 Let $f(\upsilon)$ be a strongly convex function with $f(\upsilon^*) = \min_{\upsilon} f(\upsilon)$ and $\mathbb{F} = \nabla^2 f(\upsilon^*)$. Let $g(\upsilon)$ fulfill (7) with some vector $A$. Suppose that $f(\upsilon)$ follows $(T_3)$ at $\upsilon^*$ with $\mathbb{D}^2$, $r$, and $\tau_3$ such that
\[
\mathbb{D}^2 \le \kappa^2 \mathbb{F}, \qquad r \ge \frac{4\kappa}{3}\|\mathbb{F}^{-1/2} A\|, \qquad \kappa^3 \tau_3 \|\mathbb{F}^{-1/2} A\| < \frac{1}{4}. \qquad (17)
\]
Then $\upsilon^\circ = \operatorname{argmin}_{\upsilon} g(\upsilon)$ satisfies
\[
\|\mathbb{F}^{1/2}(\upsilon^\circ - \upsilon^*)\| \le \frac{4}{3}\|\mathbb{F}^{-1/2} A\|, \qquad \|\mathbb{D}(\upsilon^\circ - \upsilon^*)\| \le \frac{4\kappa}{3}\|\mathbb{F}^{-1/2} A\|.
\]
Moreover,
\[
\bigl| 2 g(\upsilon^\circ) - 2 g(\upsilon^*) + \|\mathbb{F}^{-1/2} A\|^2 \bigr| \le \frac{\tau_3}{2}\|\mathbb{D}\mathbb{F}^{-1} A\|^3. \qquad (18)
\]

Proof W.l.o.g. assume $\kappa = 1$ and replace $r$ with $\frac{4}{3}\|\mathbb{F}^{-1/2} A\|$. By (17), $\tau_3 r \le 1/3$, and (17) implies (10). Now $(T_3)$ and Lemma 2.1 ensure (2) with $\omega'(\upsilon) = \tau_3 r/2$, and the first statement follows from Theorem 3.1 with $\nu = 3/4$.

As $\nabla f(\upsilon^*) = 0$, $(T_3)$ implies for any $\upsilon \in \mathcal{A}(r)$
\[
\Bigl| f(\upsilon) - f(\upsilon^*) - \tfrac{1}{2}\|\mathbb{F}^{1/2}(\upsilon - \upsilon^*)\|^2 \Bigr| \le \frac{\tau_3}{6}\|\mathbb{D}(\upsilon - \upsilon^*)\|^3. \qquad (19)
\]
Further,
\[
g(\upsilon) - g(\upsilon^*) + \tfrac{1}{2}\|\mathbb{F}^{-1/2} A\|^2
= \langle \upsilon - \upsilon^*, A\rangle + f(\upsilon) - f(\upsilon^*) + \tfrac{1}{2}\|\mathbb{F}^{-1/2} A\|^2
= \tfrac{1}{2}\bigl\|\mathbb{F}^{1/2}(\upsilon - \upsilon^*) + \mathbb{F}^{-1/2} A\bigr\|^2 + f(\upsilon) - f(\upsilon^*) - \tfrac{1}{2}\|\mathbb{F}^{1/2}(\upsilon - \upsilon^*)\|^2.
\]
As $\upsilon^\circ \in \mathcal{A}(r)$ and it minimizes $g(\upsilon)$, we derive by (19) and Lemma 3.2 with $\mathbb{U} = \mathbb{F}^{1/2}\mathbb{D}^{-1}$ and $s = \mathbb{D}\mathbb{F}^{-1} A$
\[
2 g(\upsilon^\circ) - 2 g(\upsilon^*) + \|\mathbb{F}^{-1/2} A\|^2
= \min_{\upsilon \in \mathcal{A}(r)}\Bigl\{ 2 g(\upsilon) - 2 g(\upsilon^*) + \|\mathbb{F}^{-1/2} A\|^2 \Bigr\}
\ge \min_{\upsilon \in \mathcal{A}(r)}\Bigl\{ \bigl\|\mathbb{F}^{1/2}(\upsilon - \upsilon^*) + \mathbb{F}^{-1/2} A\bigr\|^2 - \frac{\tau_3}{3}\|\mathbb{D}(\upsilon - \upsilon^*)\|^3 \Bigr\}
\ge -\frac{\tau_3}{2}\|\mathbb{D}\mathbb{F}^{-1} A\|^3.
\]
Similarly
\[
2 g(\upsilon^\circ) - 2 g(\upsilon^*) + \|\mathbb{F}^{-1/2} A\|^2
\le \min_{\upsilon \in \mathcal{A}(r)}\Bigl\{ \bigl\|\mathbb{F}^{1/2}(\upsilon - \upsilon^*) + \mathbb{F}^{-1/2} A\bigr\|^2 + \frac{\tau_3}{3}\|\mathbb{D}(\upsilon - \upsilon^*)\|^3 \Bigr\}
\le \frac{\tau_3}{2}\|\mathbb{D}\mathbb{F}^{-1} A\|^3.
\]
This implies (18).

Lemma 3.2 Let $U \ge I$. Fix some $r$ and let $s \in \mathbb{R}^p$ satisfy $(3/4) r \le \|s\| \le r$. If $\tau r \le 1/3$, then
\[
\max_{\|u\| \le r}\Bigl( \frac{\tau}{3}\|u\|^3 - (u - s)^\top U (u - s) \Bigr) \le \frac{\tau}{2}\|s\|^3, \qquad (20)
\]
\[
\min_{\|u\| \le r}\Bigl( \frac{\tau}{3}\|u\|^3 + (u - s)^\top U (u - s) \Bigr) \le \frac{\tau}{2}\|s\|^3. \qquad (21)
\]

Proof Replacing $\|u\|^3$ with $r\|u\|^2$ reduces the problem to quadratic programming. It holds with $\rho \stackrel{\mathrm{def}}{=} \tau r/3$ and $s_\rho \stackrel{\mathrm{def}}{=} (U - \rho I)^{-1} U s$
\[
\frac{\tau}{3}\|u\|^3 - (u - s)^\top U (u - s)
\le \frac{\tau r}{3}\|u\|^2 - (u - s)^\top U (u - s)
= -u^\top (U - \rho I) u + 2 u^\top U s - s^\top U s
= -(u - s_\rho)^\top (U - \rho I)(u - s_\rho) + s_\rho^\top (U - \rho I) s_\rho - s^\top U s
\le s^\top\bigl\{ U (U - \rho I)^{-1} U - U \bigr\} s
= \rho\, s^\top U (U - \rho I)^{-1} s.
\]
This implies in view of $U \ge I$, $r \le (4/3)\|s\|$, and $\rho = \tau r/3 \le 1/9$
\[
\max_{\|u\| \le r}\Bigl( \frac{\tau}{3}\|u\|^3 - (u - s)^\top U (u - s) \Bigr)
\le \frac{\rho}{1 - \rho}\|s\|^2 \le \frac{\tau r}{3(1 - \rho)}\|s\|^2 \le \frac{4\tau}{9(1 - \rho)}\|s\|^3 \le \frac{\tau}{2}\|s\|^3,
\]
and (20) follows. Further, $\tau\|u\|^3/3 \le \tau r\|u\|^2/3 = \rho\|u\|^2$ for $\|u\| \le r$, and
\[
\frac{\tau}{3}\|u\|^3 + (u - s)^\top U (u - s) \le \rho\|u\|^2 + (u - s)^\top U (u - s).
\]
The global minimum of the latter function is attained at $u_\rho \stackrel{\mathrm{def}}{=} (U + \rho I)^{-1} U s$.
As $\|u_\rho\| \le \|s\| \le r$ and $(3/4) r \le \|s\|$, this implies
\[
\min_{\|u\| \le r}\Bigl( \rho\|u\|^2 + (u - s)^\top U (u - s) \Bigr)
= \frac{\tau r}{3}\|u_\rho\|^2 + (u_\rho - s)^\top U (u_\rho - s)
\le s^\top\bigl\{ U - U (U + \rho I)^{-1} U \bigr\} s
= \rho\, s^\top U (U + \rho I)^{-1} s
\le \rho\|s\|^2 \le \frac{4\tau}{9}\|s\|^3,
\]
and (21) follows as well.

3.4 Advanced approximation under locally uniform smoothness

The bounds of Theorem 3.3 can be made more accurate if $f$ follows $(T_3^*)$ and $(T_4^*)$ and one can apply Taylor's expansion around any point close to $\upsilon^*$. In particular, the improved results do not involve the value $\|\mathbb{F}^{-1/2} A\|$, which can be large or even infinite in some situations; see Section 4.

Theorem 3.4 Let $f(\upsilon)$ be a strongly convex function with $f(\upsilon^*) = \min_{\upsilon} f(\upsilon)$ and $\mathbb{F} = \nabla^2 f(\upsilon^*)$. Assume $(T_3^*)$ at $\upsilon^*$ with $\mathbb{D}^2$, $r$, and $\tau_3$ such that
\[
\mathbb{D}^2 \le \kappa^2 \mathbb{F}, \qquad r \ge \frac{3}{2}\|\mathbb{D}\mathbb{F}^{-1} A\|, \qquad \kappa^2 \tau_3 \|\mathbb{D}\mathbb{F}^{-1} A\| < \frac{4}{9}.
\]
Then
\[
\|\mathbb{D}(\upsilon^\circ - \upsilon^*)\| \le \frac{3}{2}\|\mathbb{D}\mathbb{F}^{-1} A\|, \qquad (22)
\]
\[
\|\mathbb{D}^{-1}\mathbb{F}(\upsilon^\circ - \upsilon^* + \mathbb{F}^{-1} A)\| \le \frac{3\tau_3}{4}\|\mathbb{D}\mathbb{F}^{-1} A\|^2. \qquad (23)
\]

Proof W.l.o.g. assume $\kappa = 1$. If the function $f$ is quadratic and convex with the minimum at $\upsilon^*$, then the linearly perturbed function $g$ is also quadratic and convex with the minimum at $\breve{\upsilon} = \upsilon^* - \mathbb{F}^{-1} A$. In general, the point $\breve{\upsilon}$ is not the minimizer of $g$; however, it is very close to $\upsilon^\circ$. We use that $\nabla f(\upsilon^*) = 0$ and $\nabla^2 f(\upsilon^*) = \mathbb{F}$. The main step of the proof is given by the next lemma.

Lemma 3.3 Assume $(T_3^*)$ at $\upsilon$ and let $\mathcal{U}_r = \{u : \|\mathbb{D} u\| \le r\}$.
Then
\[
\bigl\| \mathbb{D}^{-1}\bigl\{ \nabla f(\upsilon + u) - \nabla f(\upsilon) - \langle \nabla^2 f(\upsilon), u\rangle \bigr\} \bigr\| \le \frac{\tau_3}{2}\|\mathbb{D} u\|^2, \qquad u \in \mathcal{U}_r. \qquad (24)
\]
Also for all $u, u_1 \in \mathcal{U}_r$
\[
\bigl\| \mathbb{D}^{-1}\bigl\{ \nabla^2 f(\upsilon + u_1) - \nabla^2 f(\upsilon + u) \bigr\} \mathbb{D}^{-1} \bigr\| \le \tau_3 \|\mathbb{D}(u_1 - u)\| \qquad (25)
\]
and
\[
\bigl\| \mathbb{D}^{-1}\bigl\{ \nabla f(\upsilon + u_1) - \nabla f(\upsilon + u) - \nabla^2 f(\upsilon)(u_1 - u) \bigr\} \bigr\| \le \frac{3\tau_3}{2}\|\mathbb{D}(u_1 - u)\|^2. \qquad (26)
\]
Moreover, under $(T_4^*)$, for any $u \in \mathcal{U}_r$,
\[
\bigl\| \mathbb{D}^{-1}\bigl\{ \nabla f(\upsilon + u) - \nabla f(\upsilon) - \langle \nabla^2 f(\upsilon), u\rangle - \tfrac{1}{2}\langle \nabla^3 f(\upsilon), u^{\otimes 2}\rangle \bigr\} \bigr\| \le \frac{\tau_4}{6}\|\mathbb{D} u\|^3. \qquad (27)
\]

Proof Fix $u \in \mathcal{U}_r$ and any vector $w \in \mathbb{R}^p$. For $t \in [0,1]$, denote
\[
h(t) \stackrel{\mathrm{def}}{=} \bigl\langle \nabla f(\upsilon + tu) - \nabla f(\upsilon) - t\langle \nabla^2 f(\upsilon), u\rangle, w \bigr\rangle.
\]
Then $h(0) = 0$, $h'(0) = 0$, and $(T_3^*)$ and (4) imply
\[
|h''(t)| = |\langle \nabla^3 f(\upsilon + tu), u^{\otimes 2}\otimes w\rangle| \le \tau_3 \|\mathbb{D} u\|^2 \|\mathbb{D} w\|.
\]
With $a \stackrel{\mathrm{def}}{=} \nabla f(\upsilon + u) - \nabla f(\upsilon) - \langle \nabla^2 f(\upsilon), u\rangle$, this yields
\[
|\langle a, w\rangle| = |h(1)| \le \frac{\tau_3}{2}\|\mathbb{D} u\|^2 \|\mathbb{D} w\|, \qquad
\|\mathbb{D}^{-1} a\| = \sup_{\|w\| = 1}|\langle \mathbb{D}^{-1} a, w\rangle| = \sup_{\|w\| = 1}|\langle a, \mathbb{D}^{-1} w\rangle| \le \frac{\tau_3}{2}\|\mathbb{D} u\|^2,
\]
and the first statement follows. For (27), apply
\[
a \stackrel{\mathrm{def}}{=} \nabla f(\upsilon + u) - \nabla f(\upsilon) - \langle \nabla^2 f(\upsilon), u\rangle - \tfrac{1}{2}\langle \nabla^3 f(\upsilon), u^{\otimes 2}\rangle, \qquad
h(t) \stackrel{\mathrm{def}}{=} \bigl\langle \nabla f(\upsilon + tu) - \nabla f(\upsilon) - t\langle \nabla^2 f(\upsilon), u\rangle - \tfrac{t^2}{2}\langle \nabla^3 f(\upsilon), u^{\otimes 2}\rangle, w \bigr\rangle,
\]
and use $(T_4^*)$ and (5) instead of $(T_3^*)$ and (4). Further, with $B_1 \stackrel{\mathrm{def}}{=} \nabla^2 f(\upsilon + u_1) - \nabla^2 f(\upsilon + u)$ and $\Delta = u_1 - u$, by $(T_3^*)$, for any $w \in \mathbb{R}^p$ and some $t \in [0,1]$,
\[
\bigl| \bigl\langle \mathbb{D}^{-1}\bigl\{ \nabla^2 f(\upsilon + u_1) - \nabla^2 f(\upsilon + u) \bigr\} \mathbb{D}^{-1}, w^{\otimes 2} \bigr\rangle \bigr|
= |\langle B_1, (\mathbb{D}^{-1} w)^{\otimes 2}\rangle|
= \bigl| \bigl\langle \nabla^3 f(\upsilon + u + t\Delta), \Delta\otimes(\mathbb{D}^{-1} w)^{\otimes 2} \bigr\rangle \bigr|
\le \tau_3 \|\mathbb{D}\Delta\| \|w\|^2.
\]
This proves (25).
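Bound (24) controls the first-order Taylor remainder of the gradient, which scales like $\|u\|^2$. This scaling can be illustrated by finite differences; the following toy sketch (assuming numpy, with a hypothetical smooth convex choice of $f$) checks that halving $u$ reduces the remainder by roughly a factor of four:

```python
# Finite-difference illustration of the quadratic scaling in (24)
# (illustrative sketch, assuming numpy; f is a hypothetical toy function).
import numpy as np

rng = np.random.default_rng(2)
p = 3
w = rng.standard_normal(p)

def grad_f(v):            # f(v) = sum_i exp(w_i v_i), a smooth convex toy example
    return w * np.exp(w * v)

def hess_f(v):
    return np.diag(w**2 * np.exp(w * v))

v = rng.standard_normal(p)
u = 1e-3 * rng.standard_normal(p)

def remainder(u):         # || grad f(v+u) - grad f(v) - Hess f(v) u ||
    return np.linalg.norm(grad_f(v + u) - grad_f(v) - hess_f(v) @ u)

ratio = remainder(u) / remainder(u / 2)
assert 3.5 < ratio < 4.5  # remainder shrinks like ||u||^2
print("remainder ratio:", ratio)
```

The same experiment with the third-order remainder of (27) would show a factor of roughly eight under halving, matching the cubic rate.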
Similarly, for some $t \in [0,1]$
\[
\bigl| \bigl\langle \mathbb{D}^{-1}\bigl\{ \nabla f(\upsilon + u_1) - \nabla f(\upsilon + u) - \nabla^2 f(\upsilon + u)\Delta \bigr\}, w \bigr\rangle \bigr|
= \tfrac{1}{2}\bigl| \bigl\langle \nabla^3 f(\upsilon + u + t\Delta), \Delta\otimes\Delta\otimes\mathbb{D}^{-1} w \bigr\rangle \bigr|
\le \frac{\tau_3}{2}\|\mathbb{D}\Delta\|^2 \|w\|
\]
and with $B = \nabla^2 f(\upsilon + u) - \nabla^2 f(\upsilon)$, by (25),
\[
\bigl\| \mathbb{D}^{-1} B \Delta \bigr\| \le \|\mathbb{D}^{-1} B \mathbb{D}^{-1}\| \,\|\mathbb{D}\Delta\| \le \tau_3 \|\mathbb{D}\Delta\|^2.
\]
This completes the proof of (26).

Now we prove (23). W.l.o.g. assume $\|\mathbb{D}\mathbb{F}^{-1} A\| = 2r/3$. Lemma 3.3, (24), applied with $\upsilon = \upsilon^*$ and $u = -\mathbb{F}^{-1} A$ yields for $\breve{\upsilon} = \upsilon^* - \mathbb{F}^{-1} A$
\[
\bigl\| \mathbb{D}^{-1}\nabla g(\breve{\upsilon}) \bigr\| = \bigl\| \mathbb{D}^{-1}\{ \nabla f(\upsilon^* - \mathbb{F}^{-1} A) - \nabla f(\upsilon^*) + A \} \bigr\| \le \frac{\tau_3}{2}\|\mathbb{D}\mathbb{F}^{-1} A\|^2. \qquad (28)
\]
As $\|\mathbb{D}\mathbb{F}^{-1} A\| = 2r/3$, condition $(T_3^*)$ can be applied in the $r/3$-vicinity of $\breve{\upsilon}$. We show that $g(\upsilon)$ attains its minimum within this vicinity. Fix any $\upsilon$ on its boundary, i.e. with $\|\mathbb{D}(\upsilon - \breve{\upsilon})\| = r/3$, and denote $\Delta = \upsilon - \breve{\upsilon}$. By (26)
\[
\bigl\| \mathbb{D}^{-1}\{ \nabla g(\upsilon) - \nabla g(\breve{\upsilon}) - \mathbb{F}\Delta \} \bigr\| = \bigl\| \mathbb{D}^{-1}\{ \nabla f(\upsilon) - \nabla f(\breve{\upsilon}) - \mathbb{F}\Delta \} \bigr\| \le \frac{3\tau_3}{2}\|\mathbb{D}\Delta\|^2.
\]
In particular, this and (28) yield
\[
\bigl\| \mathbb{D}^{-1}\{ \nabla g(\breve{\upsilon} + \Delta) - \mathbb{F}\Delta \} \bigr\| \le 2\tau_3 \|\mathbb{D}\Delta\|^2.
\]
For any $u$ with $\|u\| = 1$, this implies
\[
\bigl| \bigl\langle \nabla g(\breve{\upsilon} + \Delta) - \mathbb{F}\Delta, \mathbb{D}^{-1} u \bigr\rangle \bigr| \le 2\tau_3 \|\mathbb{D}\Delta\|^2. \qquad (29)
\]
Now consider the function $h(t) = g(\breve{\upsilon} + t\Delta)$.
Then
\[
h'(t) = \langle \nabla g(\breve{\upsilon} + t\Delta), \Delta\rangle
\]
and (29) implies with $u = \mathbb{D}\Delta/\|\mathbb{D}\Delta\|$
\[
\bigl| \langle \nabla g(\breve{\upsilon} + \Delta), \Delta\rangle - \|\mathbb{F}^{1/2}\Delta\|^2 \bigr| \le 2\tau_3 \|\mathbb{D}\Delta\|^3.
\]
As $\mathbb{F} \ge \mathbb{D}^2$, this yields
\[
h'(1) \ge \|\mathbb{D}\Delta\|^2 - 2\tau_3 \|\mathbb{D}\Delta\|^3. \qquad (30)
\]
Similarly $-h'(-1) \ge \|\mathbb{D}\Delta\|^2 - 2\tau_3 \|\mathbb{D}\Delta\|^3$, and (28) yields by $\|\mathbb{D}\mathbb{F}^{-1} A\| = \frac{2}{3} r$
\[
|h'(0)| = |\langle \nabla g(\breve{\upsilon}), \Delta\rangle| \le \frac{\tau_3}{2}\|\mathbb{D}\mathbb{F}^{-1} A\|^2 \|\mathbb{D}\Delta\| \le \frac{2\tau_3}{9} r^2 \|\mathbb{D}\Delta\|. \qquad (31)
\]
Due to (30), (31), $\|\mathbb{D}\Delta\| = r/3$, $\tau_3 \|\mathbb{D}\mathbb{F}^{-1} A\| \le 4/9$, and $\|\mathbb{D}\mathbb{F}^{-1} A\| = 2r/3$,
\[
h'(1) - |h'(0)| \ge \|\mathbb{D}\Delta\|^2 - \frac{2\tau_3}{9} r^2 \|\mathbb{D}\Delta\| - 2\tau_3 \|\mathbb{D}\Delta\|^3
= \|\mathbb{D}\Delta\|\, r\Bigl( \frac{1}{3} - \frac{2\tau_3 r}{9} - \frac{2\tau_3 r}{9} \Bigr) > 0.
\]
Similarly $-h'(-1) > |h'(0)|$, and convexity of $g(\cdot)$ ensures that $t^* = \operatorname{argmin}_t h(t)$ satisfies $|t^*| \le 1$. We summarize that $\upsilon^\circ = \operatorname{argmin}_{\upsilon} g(\upsilon)$ satisfies $\|\mathbb{D}(\upsilon^\circ - \breve{\upsilon})\| \le r/3$ while $\|\mathbb{D}(\breve{\upsilon} - \upsilon^*)\| = \|\mathbb{D}\mathbb{F}^{-1} A\| = 2r/3$, thus yielding (22).

We can now apply $(T_3^*)$ at $\upsilon^\circ$ for checking (23). As $\nabla g(\upsilon^\circ) = 0$, by (28)
\[
\bigl\| \mathbb{D}^{-1}\{ \nabla g(\upsilon^* - \mathbb{F}^{-1} A) - \nabla g(\upsilon^\circ) \} \bigr\| \le \frac{\tau_3}{2}\|\mathbb{D}\mathbb{F}^{-1} A\|^2. \qquad (32)
\]
By (26) of Lemma 3.3, it holds with $\Delta = \upsilon^* - \mathbb{F}^{-1} A - \upsilon^\circ$
\[
\bigl\| \mathbb{D}^{-1}\{ \nabla g(\upsilon^* - \mathbb{F}^{-1} A) - \nabla g(\upsilon^\circ) - \nabla^2 g(\upsilon^*)\Delta \} \bigr\| \le \frac{3\tau_3}{2}\|\mathbb{D}\Delta\|^2.
\]
Combining with (32) yields
\[
\|\mathbb{D}^{-1}\mathbb{F}\Delta\| \le \frac{3\tau_3}{2}\|\mathbb{D}\Delta\|^2 + \frac{\tau_3}{2}\|\mathbb{D}\mathbb{F}^{-1} A\|^2 \le \frac{3\tau_3}{2}\|\mathbb{D}^{-1}\mathbb{F}\Delta\|^2 + \frac{\tau_3}{2}\|\mathbb{D}\mathbb{F}^{-1} A\|^2.
\]
As $2x \le \alpha x^2 + \beta$ with $\alpha = 3\tau_3$, $\beta = \tau_3\|\mathbb{D}\mathbb{F}^{-1} A\|^2$, and $x = \|\mathbb{D}^{-1}\mathbb{F}\Delta\| \in (0, 1/\alpha)$ implies $x \le \beta/(2 - \alpha\beta)$, this yields
\[
\|\mathbb{D}^{-1}\mathbb{F}(\upsilon^\circ - \upsilon^* + \mathbb{F}^{-1} A)\| \le \frac{\tau_3}{2 - 3\tau_3^2\|\mathbb{D}\mathbb{F}^{-1} A\|^2}\,\|\mathbb{D}\mathbb{F}^{-1} A\|^2
\]
and (23) follows by $\tau_3\|\mathbb{D}\mathbb{F}^{-1} A\| \le 4/9$.

Remark 3.2 As in Remark 3.1, $f$ and $g$ can be exchanged. In particular, (23) applies with $\mathbb{F} = \mathbb{F}(\upsilon^\circ)$ provided that $(T_3^*)$ is fulfilled at $\upsilon^\circ$.

3.5 Fourth-order expansions

Under the fourth-order condition $(T_4^*)$, expansion (23) can further be refined.

Theorem 3.5 Let $f(\upsilon)$ be a strongly convex function satisfying $(T_3^*)$ and $(T_4^*)$ at $\upsilon^* = \operatorname{argmin}_{\upsilon} f(\upsilon)$ with some $\mathbb{D}^2$, $\tau_3$, $\tau_4$, and $r$ such that
\[
\mathbb{D}^2 \le \kappa^2 \mathbb{F}, \quad r \ge \frac{3}{2}\|\mathbb{D}\mathbb{F}^{-1} A\|, \quad \kappa^2 \tau_3 \|\mathbb{D}\mathbb{F}^{-1} A\| < \frac{4}{9}, \quad \kappa^2 \tau_4 \|\mathbb{D}\mathbb{F}^{-1} A\|^2 < \frac{1}{3}. \qquad (33)
\]
Let $g(\upsilon)$ fulfill (7) with some vector $A$ and $g(\upsilon^\circ) = \min_{\upsilon} g(\upsilon)$. Then $\|\mathbb{D}(\upsilon^\circ - \upsilon^*)\| \le (3/2)\|\mathbb{D}\mathbb{F}^{-1} A\|$. Further, define
\[
a = -\mathbb{F}^{-1}\bigl\{ A + \nabla T(\mathbb{F}^{-1} A) \bigr\}, \qquad (34)
\]
where $T(u) = \frac{1}{6}\langle \nabla^3 f(\upsilon^*), u^{\otimes 3}\rangle$ for $u \in \mathbb{R}^p$. Then
\[
\|\mathbb{D}^{-1}\mathbb{F}(\upsilon^\circ - \upsilon^* - a)\| \le (\tau_4/2 + \kappa^2 \tau_3^2)\,\|\mathbb{D}\mathbb{F}^{-1} A\|^3. \qquad (35)
\]
Also
\[
\Bigl| g(\upsilon^\circ) - g(\upsilon^*) + \tfrac{1}{2}\|\mathbb{F}^{-1/2} A\|^2 + T(\mathbb{F}^{-1} A) \Bigr|
\le \frac{\tau_4 + 4\kappa^2 \tau_3^2}{8}\|\mathbb{D}\mathbb{F}^{-1} A\|^4 + \frac{\kappa^2(\tau_4 + 2\kappa^2 \tau_3^2)^2}{4}\|\mathbb{D}\mathbb{F}^{-1} A\|^6 \qquad (36)
\]
and
\[
\bigl| T(\mathbb{F}^{-1} A) \bigr| \le \frac{\tau_3}{6}\|\mathbb{D}\mathbb{F}^{-1} A\|^3. \qquad (37)
\]

Proof W.l.o.g. assume $\kappa = 1$ and $\upsilon^* = 0$. Theorem 3.4 yields (23). Later we use that $T(-u) = -T(u)$ while $\nabla T(-u) = \nabla T(u)$.
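The proof repeatedly uses that the cubic term $T(u) = \frac16\langle \nabla^3 f(\upsilon^*), u^{\otimes 3}\rangle$ is odd while its gradient $\nabla T(u) = \frac12\langle \nabla^3 f(\upsilon^*), u^{\otimes 2}\rangle$ is even. A quick check with a random symmetric third-order tensor standing in for $\nabla^3 f(\upsilon^*)$ (an illustrative sketch, assuming numpy):

```python
# Symmetry properties of the cubic term T and its gradient
# (illustrative sketch, assuming numpy; D3 stands in for the third derivative).
import numpy as np

rng = np.random.default_rng(3)
p = 4
G = rng.standard_normal((p, p, p))
# symmetrize over all index permutations, as for a third derivative tensor
D3 = (G + G.transpose(0, 2, 1) + G.transpose(1, 0, 2)
        + G.transpose(1, 2, 0) + G.transpose(2, 0, 1) + G.transpose(2, 1, 0)) / 6

def T(u):                                    # T(u) = <D3, u x u x u> / 6
    return np.einsum('ijk,i,j,k->', D3, u, u, u) / 6

def grad_T(u):                               # grad T(u) = <D3, u x u> / 2
    return np.einsum('ijk,j,k->i', D3, u, u) / 2

u = rng.standard_normal(p)
assert np.isclose(T(-u), -T(u))              # T is odd
assert np.allclose(grad_T(-u), grad_T(u))    # grad T is even
# grad_T really is the gradient of T: finite-difference check
e = 1e-6 * rng.standard_normal(p)
assert np.isclose(T(u + e) - T(u), grad_T(u) @ e, rtol=1e-3)
print("cubic term symmetries verified")
```

The factor $1/2$ in $\nabla T$ comes from differentiating the symmetric trilinear form once, which collects three identical terms from the factor $1/6$.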
By $(T_3^*)$ and (4),
\[
\|\mathbb{D}^{-1}\mathbb{F}(a + \mathbb{F}^{-1} A)\| = \|\mathbb{D}^{-1}\nabla T(\mathbb{F}^{-1} A)\|
= \sup_{\|u\| = 1} 3\,\bigl| \langle T, \mathbb{F}^{-1} A\otimes\mathbb{F}^{-1} A\otimes\mathbb{D}^{-1} u\rangle \bigr|
\le \frac{\tau_3}{2}\|\mathbb{D}\mathbb{F}^{-1} A\|^2. \qquad (38)
\]
As $\mathbb{D}^{-1}\mathbb{F} \ge \mathbb{F}^{1/2} \ge \mathbb{D}$, this implies by $\tau_3\|\mathbb{D}\mathbb{F}^{-1} A\| \le 4/9$
\[
\|\mathbb{D} a\| \le \|\mathbb{D}\mathbb{F}^{-1} A\| + \|\mathbb{D}\mathbb{F}^{-1}\nabla T(\mathbb{F}^{-1} A)\|
\le \Bigl( 1 + \frac{\tau_3}{2}\|\mathbb{D}\mathbb{F}^{-1} A\| \Bigr)\|\mathbb{D}\mathbb{F}^{-1} A\|
\le \frac{11}{9}\|\mathbb{D}\mathbb{F}^{-1} A\| \qquad (39)
\]
and
\[
\|\mathbb{F}^{1/2} a + \mathbb{F}^{-1/2} A\| \le \frac{\tau_3}{2}\|\mathbb{D}\mathbb{F}^{-1} A\|^2. \qquad (40)
\]
Next, again by $(T_3^*)$, for any $w$
\[
\|\mathbb{D}^{-1}\nabla^2 T(w)\mathbb{D}^{-1}\| = \sup_{\|u\| = 1} 6\,\bigl| \langle T, w\otimes(\mathbb{D}^{-1} u)^{\otimes 2}\rangle \bigr| \le \tau_3\|\mathbb{D} w\|.
\]
The tensor $\nabla^2 T(u)$ is linear in $u$, hence $\|\nabla^2 T(u)\|$ is convex in $u$ and
\[
\sup_{t\in[0,1]}\|\mathbb{D}^{-1}\nabla^2 T(ta - (1-t)\mathbb{F}^{-1} A)\mathbb{D}^{-1}\|
= \max\bigl\{ \|\mathbb{D}^{-1}\nabla^2 T(-\mathbb{F}^{-1} A)\mathbb{D}^{-1}\|,\ \|\mathbb{D}^{-1}\nabla^2 T(a)\mathbb{D}^{-1}\| \bigr\}
\le \tau_3 \max\{ \|\mathbb{D}\mathbb{F}^{-1} A\|, \|\mathbb{D} a\| \}.
\]
Based on (39), assume $\|\mathbb{D}\mathbb{F}^{-1} A\| \le \|\mathbb{D} a\| \le (11/9)\|\mathbb{D}\mathbb{F}^{-1} A\|$. Then (38) yields by $\nabla T(u) = \nabla T(-u)$
\[
\|\mathbb{D}^{-1}\nabla T(a) - \mathbb{D}^{-1}\nabla T(\mathbb{F}^{-1} A)\| = \|\mathbb{D}^{-1}\nabla T(a) - \mathbb{D}^{-1}\nabla T(-\mathbb{F}^{-1} A)\|
\le \sup_{t\in[0,1]}\|\mathbb{D}^{-1}\nabla^2 T(ta - (1-t)\mathbb{F}^{-1} A)\mathbb{D}^{-1}\|\; \|\mathbb{D}(a + \mathbb{F}^{-1} A)\|
\le \frac{\tau_3^2}{2}\|\mathbb{D}\mathbb{F}^{-1} A\|^2 \|\mathbb{D} a\|
\le \frac{2\tau_3^2}{3}\|\mathbb{D}\mathbb{F}^{-1} A\|^3.
\]
As $\nabla^2 f(0) = \mathbb{F}$ and $\nabla T(a) = \frac{1}{2}\langle \nabla^3 f(0), a\otimes a\rangle$, by (27) with $\upsilon = 0$ and $u = a$ and by (39)
\[
\bigl\| \mathbb{D}^{-1}\{ \nabla f(a) - \mathbb{F} a - \nabla T(a) \} \bigr\|
\le \frac{\tau_4}{6}\|\mathbb{D} a\|^3 \le \frac{(11/9)^3 \tau_4}{6}\|\mathbb{D}\mathbb{F}^{-1} A\|^3 \le \frac{\tau_4}{3}\|\mathbb{D}\mathbb{F}^{-1} A\|^3.
\]
Next we bound $\|\mathbb{D}^{-1}\{\nabla g(a) - \nabla g(\upsilon^\circ)\}\|$. As $\nabla g(\upsilon^\circ) = 0$, (7) and (34) imply
\[
\bigl\| \mathbb{D}^{-1}\{ \nabla g(a) - \nabla g(\upsilon^\circ) \} \bigr\| = \|\mathbb{D}^{-1}\nabla g(a)\|
= \bigl\| \mathbb{D}^{-1}\{ \nabla g(a) - \mathbb{F} a - \nabla T(\mathbb{F}^{-1} A) - A \} \bigr\|
\le \bigl\| \mathbb{D}^{-1}\{ \nabla f(a) - \mathbb{F} a - \nabla T(a) \} \bigr\| + \|\mathbb{D}^{-1}\{ \nabla T(a) - \nabla T(\mathbb{F}^{-1} A) \}\|
\le \diamondsuit_1, \qquad (41)
\]
where
\[
\diamondsuit_1 \stackrel{\mathrm{def}}{=} \frac{\tau_4 + 2\tau_3^2}{3}\|\mathbb{D}\mathbb{F}^{-1} A\|^3
\]
and by (33)
\[
3\tau_3 \diamondsuit_1 = \tau_3\|\mathbb{D}\mathbb{F}^{-1} A\|\Bigl( \tau_4\|\mathbb{D}\mathbb{F}^{-1} A\|^2 + 2\tau_3^2\|\mathbb{D}\mathbb{F}^{-1} A\|^2 \Bigr) \le \frac{4}{9}\Bigl( \frac{1}{3} + \frac{32}{81} \Bigr) < \frac{1}{3}. \qquad (42)
\]
Further, $\nabla^2 g(0) = \nabla^2 f(0) = \mathbb{F}$, and (26) of Lemma 3.3 implies
\[
\bigl\| \mathbb{D}^{-1}\{ \nabla g(a) - \nabla g(\upsilon^\circ) - \mathbb{F}(a - \upsilon^\circ) \} \bigr\|
= \bigl\| \mathbb{D}^{-1}\{ \nabla f(a) - \nabla f(\upsilon^\circ) - \mathbb{F}(a - \upsilon^\circ) \} \bigr\|
\le \frac{3\tau_3}{2}\|\mathbb{D}(a - \upsilon^\circ)\|^2.
\]
Combining with (41) yields in view of $\mathbb{D}^2 \le \mathbb{F}$
\[
\|\mathbb{D}^{-1}\mathbb{F}(a - \upsilon^\circ)\| \le \frac{3\tau_3}{2}\|\mathbb{D}(a - \upsilon^\circ)\|^2 + \diamondsuit_1 \le \frac{3\tau_3}{2}\|\mathbb{D}^{-1}\mathbb{F}(a - \upsilon^\circ)\|^2 + \diamondsuit_1.
\]
As $2x \le \alpha x^2 + \beta$ with $\alpha = 3\tau_3$, $\beta = 2\diamondsuit_1$, and $x \in (0, 1/\alpha)$ implies $x \le \beta/(2 - \alpha\beta)$, we conclude by (42)
\[
\|\mathbb{D}^{-1}\mathbb{F}(a - \upsilon^\circ)\| \le \frac{\diamondsuit_1}{1 - 3\tau_3\diamondsuit_1} \le \frac{\tau_4 + 2\tau_3^2}{2}\|\mathbb{D}\mathbb{F}^{-1} A\|^3, \qquad (43)
\]
and (35) follows.

Next we bound $g(\upsilon^\circ) - g(0)$. By (40) and $\mathbb{D}^2 \le \mathbb{F}$,
\[
\tfrac{1}{2}\|\mathbb{F}^{-1/2} A\|^2 + \langle A, a\rangle + \tfrac{1}{2}\|\mathbb{F}^{1/2} a\|^2 = \tfrac{1}{2}\|\mathbb{F}^{1/2} a + \mathbb{F}^{-1/2} A\|^2 \le \frac{\tau_3^2}{8}\|\mathbb{D}\mathbb{F}^{-1} A\|^4.
\]
This together with $\nabla f(0) = 0$, $\nabla^2 f(0) = \mathbb{F}$, $(T_4^*)$, and (39) implies
\[
\Bigl| g(a) - g(0) + \tfrac{1}{2}\|\mathbb{F}^{-1/2} A\|^2 - T(a) \Bigr|
= \Bigl| f(a) - f(0) + \langle A, a\rangle + \tfrac{1}{2}\|\mathbb{F}^{-1/2} A\|^2 - T(a) \Bigr|
\le \Bigl| f(a) - f(0) - \tfrac{1}{2}\|\mathbb{F}^{1/2} a\|^2 - T(a) \Bigr| + \frac{\tau_3^2}{8}\|\mathbb{D}\mathbb{F}^{-1} A\|^4
\le \frac{\tau_4}{24}\|\mathbb{D} a\|^4 + \frac{\tau_3^2}{8}\|\mathbb{D}\mathbb{F}^{-1} A\|^4
\le \Bigl( \frac{\tau_4}{10} + \frac{\tau_3^2}{8} \Bigr)\|\mathbb{D}\mathbb{F}^{-1} A\|^4.
\]
Further, by $\nabla g(\upsilon^\circ) = 0$ and $\nabla^2 g(\cdot) \equiv \nabla^2 f(\cdot)$, it holds for some $\upsilon \in [a, \upsilon^\circ]$
\[
2\bigl| g(a) - g(\upsilon^\circ) \bigr| = \bigl| \langle \nabla^2 f(\upsilon), (a - \upsilon^\circ)^{\otimes 2}\rangle \bigr|.
\]
As $\|\mathbb{D} a\| \le r = \frac{3}{2}\|\mathbb{D}\mathbb{F}^{-1} A\|$ and $\|\mathbb{D}\upsilon^\circ\| \le r$, also $\|\mathbb{D}\upsilon\| \le r$.
The use of $\nabla^2 f(0) = \mathbb{F} \ge \mathbb{D}^2$ and (25) yields by $\tau_3\|\mathbb{D}\mathbb{F}^{-1} A\| < \frac{4}{9}$ and (43)
\[
2\bigl| g(a) - g(\upsilon^\circ) \bigr|
\le \|\mathbb{F}^{1/2}(a - \upsilon^\circ)\|^2 + \bigl| \langle \nabla^2 f(\upsilon) - \nabla^2 f(0), (a - \upsilon^\circ)^{\otimes 2}\rangle \bigr|
\le (1 + \tau_3 r)\|\mathbb{F}^{1/2}(a - \upsilon^\circ)\|^2
\le \frac{(5/3)(\tau_4 + 2\tau_3^2)^2}{4}\|\mathbb{D}\mathbb{F}^{-1} A\|^6.
\]
As $T(a) = -T(-a)$, it holds with $\Delta \stackrel{\mathrm{def}}{=} \mathbb{F}^{-1}\nabla T(\mathbb{F}^{-1} A)$ for some $t \in [0,1]$
\[
\bigl| T(a) + T(\mathbb{F}^{-1} A) \bigr| = \bigl| T(\mathbb{F}^{-1} A + \Delta) - T(\mathbb{F}^{-1} A) \bigr| = \bigl| \langle \nabla T(\mathbb{F}^{-1} A + t\Delta), \Delta\rangle \bigr|
\le \frac{\tau_3}{2}\|\mathbb{D}(\mathbb{F}^{-1} A + t\Delta)\|^2 \|\mathbb{D}\Delta\| = \frac{\tau_3}{2}\|\mathbb{D}\mathbb{F}^{-1} A + t\mathbb{D}\Delta\|^2 \|\mathbb{D}\Delta\|.
\]
Similarly to (38), it holds $\|\mathbb{D}\Delta\| \le \|\mathbb{D}^{-1}\nabla T(\mathbb{F}^{-1} A)\| \le (\tau_3/2)\|\mathbb{D}\mathbb{F}^{-1} A\|^2$, and by $\tau_3\|\mathbb{D}\mathbb{F}^{-1} A\| \le 1/2$
\[
\bigl| T(a) + T(\mathbb{F}^{-1} A) \bigr| \le \frac{(5/4)^2 \tau_3^2}{4}\|\mathbb{D}\mathbb{F}^{-1} A\|^4.
\]
Summing up the obtained bounds yields (36). (37) follows from $(T_3^*)$.

4 Quadratic penalization

Here we discuss the case when $g(\upsilon) - f(\upsilon)$ is quadratic. The general case can be reduced to the situation with $g(\upsilon) = f(\upsilon) + \|G\upsilon\|^2/2$. To make the dependence on $G$ more explicit, denote
\[
f_G(\upsilon) \stackrel{\mathrm{def}}{=} f(\upsilon) + \|G\upsilon\|^2/2, \qquad
\upsilon^* = \operatorname{argmin}_{\upsilon} f(\upsilon), \qquad
\upsilon^*_G = \operatorname{argmin}_{\upsilon} f_G(\upsilon) = \operatorname{argmin}_{\upsilon}\bigl\{ f(\upsilon) + \|G\upsilon\|^2/2 \bigr\}. \qquad (44)
\]
We study the bias $\upsilon^*_G - \upsilon^*$ induced by this penalization. To get some intuition, consider first the case of a quadratic function $f(\upsilon)$.

Lemma 4.1 Let $f(\upsilon)$ be quadratic with $\mathbb{F} \equiv \nabla^2 f(\upsilon)$. Denote $\mathbb{F}_G = \mathbb{F} + G^2$. Then $\upsilon^*_G$ from (44) satisfies
\[
\upsilon^*_G - \upsilon^* = -\mathbb{F}_G^{-1} G^2 \upsilon^*, \qquad
f_G(\upsilon^*_G) - f_G(\upsilon^*) = -\tfrac{1}{2}\|\mathbb{F}_G^{-1/2} G^2 \upsilon^*\|^2.
\]
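Both formulas of Lemma 4.1 can be verified directly for a random quadratic $f$ and a concrete penalty matrix. A minimal sketch (assuming numpy; the diagonal choice of $G^2$ is purely illustrative):

```python
# Numerical check of Lemma 4.1: ridge-type penalization bias for quadratic f
# (illustrative sketch, assuming numpy).
import numpy as np

rng = np.random.default_rng(4)
p = 5
M = rng.standard_normal((p, p))
F = M @ M.T + np.eye(p)                   # Hessian of the quadratic f
G2 = np.diag(rng.uniform(0.1, 1.0, p))    # G^2 for a diagonal penalty matrix G
v_star = rng.standard_normal(p)

def f_G(v):                               # penalized objective f(v) + ||G v||^2 / 2
    d = v - v_star
    return 0.5 * d @ F @ d + 0.5 * v @ G2 @ v

FG = F + G2
v_star_G = np.linalg.solve(FG, F @ v_star)     # argmin of f_G
bias = v_star_G - v_star
assert np.allclose(bias, -np.linalg.solve(FG, G2 @ v_star))

drop = f_G(v_star_G) - f_G(v_star)
w = np.linalg.solve(FG, G2 @ v_star)           # F_G^{-1} G^2 v*
assert np.isclose(drop, -0.5 * (G2 @ v_star) @ w)   # = -||F_G^{-1/2} G^2 v*||^2 / 2
print("Lemma 4.1 bias formulas verified")
```

Theorems 4.1 and 4.2 below quantify how far a non-quadratic $f$ can deviate from these exact identities.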
Proof By definition, $f_G(\upsilon)$ is quadratic with $\nabla^2 f_G(\upsilon) \equiv \mathbb{F}_G$ and
\[
\nabla f_G(\upsilon^*_G) - \nabla f_G(\upsilon^*) = \mathbb{F}_G(\upsilon^*_G - \upsilon^*).
\]
Further, $\nabla f(\upsilon^*) = 0$ yielding $\nabla f_G(\upsilon^*) = G^2 \upsilon^*$. Together with $\nabla f_G(\upsilon^*_G) = 0$, this implies $\upsilon^*_G - \upsilon^* = -\mathbb{F}_G^{-1} G^2 \upsilon^*$. The Taylor expansion of $f_G$ at $\upsilon^*_G$ yields
\[
f_G(\upsilon^*) - f_G(\upsilon^*_G) = \tfrac{1}{2}\|\mathbb{F}_G^{1/2}(\upsilon^* - \upsilon^*_G)\|^2 = \tfrac{1}{2}\|\mathbb{F}_G^{-1/2} G^2 \upsilon^*\|^2
\]
and the assertion follows.

Now we turn to the general case with $f$ satisfying $(T_3^*)$. Define
\[
\mathbb{F}_G \stackrel{\mathrm{def}}{=} \nabla^2 f_G(\upsilon^*), \qquad b_G \stackrel{\mathrm{def}}{=} \|\mathbb{D}\mathbb{F}_G^{-1} G^2 \upsilon^*\|. \qquad (45)
\]

Theorem 4.1 Let $f_G(\upsilon) = f(\upsilon) + \|G\upsilon\|^2/2$ be strongly convex and follow $(T_3^*)$ with some $\mathbb{D}^2$, $\tau_3$, and $r$ satisfying for $\kappa > 0$
\[
\mathbb{D}^2 \le \kappa^2 \mathbb{F}_G, \qquad r \ge 3 b_G/2, \qquad \kappa^2 \tau_3 b_G < 4/9.
\]
Then
\[
\|\mathbb{D}(\upsilon^*_G - \upsilon^*)\| \le 3 b_G/2. \qquad (46)
\]
Moreover,
\[
\bigl\| \mathbb{D}^{-1}\mathbb{F}_G(\upsilon^*_G - \upsilon^* + \mathbb{F}_G^{-1} G^2 \upsilon^*) \bigr\| \le \frac{3\tau_3}{4} b_G^2, \qquad
\Bigl| 2 f_G(\upsilon^*_G) - 2 f_G(\upsilon^*) + \|\mathbb{F}_G^{-1/2} G^2 \upsilon^*\|^2 \Bigr| \le \frac{\tau_3}{2} b_G^3.
\]

Proof Define $g_G(\upsilon)$ by
\[
g_G(\upsilon) - g_G(\upsilon^*_G) = f_G(\upsilon) - f_G(\upsilon^*_G) - \langle G^2 \upsilon^*, \upsilon - \upsilon^*_G\rangle. \qquad (47)
\]
The function $f_G$ is convex; the same holds for $g_G$ from (47). Moreover, $\nabla f_G(\upsilon^*) = G^2 \upsilon^*$ yields $\nabla g_G(\upsilon^*) = -G^2 \upsilon^* + G^2 \upsilon^* = 0$. Hence, $\upsilon^* = \operatorname{argmin} g_G(\upsilon)$ and $f_G(\upsilon)$ is a linear perturbation (7) of $g_G$ with $A = G^2 \upsilon^*$. Now the results follow from Theorem 3.4 and (18) of Theorem 3.3 applied with $f(\upsilon) = f_G(\upsilon) - \langle A, \upsilon\rangle$, $g(\upsilon) = f_G(\upsilon)$, $A = G^2 \upsilon^*$, and $\nabla^2 f(\upsilon^*) = \mathbb{F}_G$.

The bound can be further improved under fourth-order smoothness of $f$ using the results of Theorem 3.5.

Theorem 4.2 Let $f(\upsilon)$ be strongly convex and $\upsilon^* = \operatorname{argmin}_{\upsilon} f(\upsilon)$. Let also $f(\upsilon)$ follow $(T_3^*)$ and $(T_4^*)$ with $\mathbb{F}_G = \nabla^2 f(\upsilon^*) + G^2$ and some $\mathbb{D}^2$, $\tau_3$, $\tau_4$, and $r$ satisfying
\[
\mathbb{D}^2 \le \kappa^2 \mathbb{F}_G, \qquad r = \frac{3}{2} b_G, \qquad \kappa^2 \tau_3 b_G < \frac{4}{9}, \qquad \kappa^2 \tau_4 b_G^2 < \frac{1}{3}
\]
for $b_G$ from (45). Then (46) holds. Furthermore, define
\[
\mu_G = -\mathbb{F}_G^{-1}\bigl\{ G^2 \upsilon^* + \nabla T(\mathbb{F}_G^{-1} G^2 \upsilon^*) \bigr\}
\]
with $T(u) = \frac{1}{6}\langle \nabla^3 f(\upsilon^*), u^{\otimes 3}\rangle$ and $\nabla T(u) = \frac{1}{2}\langle \nabla^3 f(\upsilon^*), u^{\otimes 2}\rangle$. Then
\[
\|\mathbb{D}(\mu_G + \mathbb{F}_G^{-1} G^2 \upsilon^*)\| \le \frac{\tau_3}{2} b_G^2,
\]
and
\[
\|\mathbb{D}^{-1}\mathbb{F}_G(\upsilon^*_G - \upsilon^* - \mu_G)\| \le \frac{\tau_4 + 2\kappa^2 \tau_3^2}{2} b_G^3,
\]
\[
\Bigl| f_G(\upsilon^*_G) - f_G(\upsilon^*) + \tfrac{1}{2}\|\mathbb{F}_G^{-1/2} G^2 \upsilon^*\|^2 + T(\mathbb{F}_G^{-1} G^2 \upsilon^*) \Bigr|
\le \frac{\tau_4 + 4\kappa^2 \tau_3^2}{8} b_G^4 + \frac{\kappa^2(\tau_4 + 2\kappa^2 \tau_3^2)^2}{4} b_G^6.
\]

Proof We apply Theorem 3.5 and use that for $a$ from (34), it holds $a = \mu_G$. Also use that $\nabla^3 f(\upsilon^*) = \nabla^3 f_G(\upsilon^*) = \nabla^3 g_G(\upsilon^*)$.

5 A smooth penalty

The case of a general smooth penalty $\operatorname{pen}_G(\upsilon)$ can be studied similarly to the quadratic case. Denote
\[
f_G(\upsilon) \stackrel{\mathrm{def}}{=} f(\upsilon) + \operatorname{pen}_G(\upsilon), \qquad
\upsilon^* = \operatorname{argmin}_{\upsilon} f(\upsilon), \qquad
\upsilon^*_G = \operatorname{argmin}_{\upsilon} f_G(\upsilon) = \operatorname{argmin}_{\upsilon}\bigl\{ f(\upsilon) + \operatorname{pen}_G(\upsilon) \bigr\}.
\]
We study the bias $\upsilon^*_G - \upsilon^*$ induced by this penalization. The statement of Theorem 4.1 and its proof can be easily extended to this situation. Define
\[
\mathbb{F}_G \stackrel{\mathrm{def}}{=} \nabla^2 f_G(\upsilon^*), \qquad b_G \stackrel{\mathrm{def}}{=} \|\mathbb{D}\mathbb{F}_G^{-1} M_G\|, \qquad M_G \stackrel{\mathrm{def}}{=} \nabla\operatorname{pen}_G(\upsilon^*).
\]

Theorem 5.1 Let $f_G(\upsilon) = f(\upsilon) + \operatorname{pen}_G(\upsilon)$ be strongly convex and follow $(T_3^*)$ at $\upsilon^*$ with some $\mathbb{D}^2$, $\tau_3$, and $r$ satisfying for $\kappa > 0$
\[
\mathbb{D}^2 \le \kappa^2 \mathbb{F}_G, \qquad r \ge 3 b_G/2, \qquad \kappa^2 \tau_3 b_G < 4/9.
\]
Then $\|\mathbb{D}(\upsilon^*_G - \upsilon^*)\| \le 3 b_G/2$. Moreover,
\[
\bigl\| \mathbb{D}^{-1}\mathbb{F}_G(\upsilon^*_G - \upsilon^* + \mathbb{F}_G^{-1} M_G) \bigr\| \le \frac{3\tau_3}{4} b_G^2, \qquad
\Bigl| 2 f_G(\upsilon^*_G) - 2 f_G(\upsilon^*) + \|\mathbb{F}_G^{-1/2} M_G\|^2 \Bigr| \le \frac{\tau_3}{2} b_G^3.
\]
If, in addition, $f_G(\upsilon)$ satisfies $(T_4^*)$ and $\kappa^2 \tau_4 b_G^2 < \frac{1}{3}$, then with $T_G(u) = \frac{1}{6}\langle \nabla^3 f_G(\upsilon^*), u^{\otimes 3}\rangle$, $\nabla T_G(u) = \frac{1}{2}\langle \nabla^3 f_G(\upsilon^*), u^{\otimes 2}\rangle$, and
\[
\mu_G = -\mathbb{F}_G^{-1}\bigl\{ M_G + \nabla T_G(\mathbb{F}_G^{-1} M_G) \bigr\},
\]
it holds
\[
\|\mathbb{D}^{-1}\mathbb{F}_G(\upsilon^*_G - \upsilon^* - \mu_G)\| \le \frac{\tau_4 + 2\kappa^2 \tau_3^2}{2} b_G^3,
\]
\[
\Bigl| f_G(\upsilon^*_G) - f_G(\upsilon^*) + \tfrac{1}{2}\|\mathbb{F}_G^{-1/2} M_G\|^2 + T_G(\mathbb{F}_G^{-1} M_G) \Bigr|
\le \frac{\tau_4 + 4\kappa^2 \tau_3^2}{8} b_G^4 + \frac{\kappa^2(\tau_4 + 2\kappa^2 \tau_3^2)^2}{4} b_G^6.
\]

Proof Consider $g_G(\upsilon) = f_G(\upsilon) - \langle \nabla\operatorname{pen}_G(\upsilon^*), \upsilon\rangle$. Then $g_G$ is strongly convex and $\nabla g_G(\upsilon^*) = 0$, yielding $\upsilon^* = \operatorname{argmin}_{\upsilon} g_G(\upsilon)$. Also, $f_G(\upsilon)$ is a linear perturbation of $g_G(\upsilon)$ with $A = M_G = \nabla\operatorname{pen}_G(\upsilon^*)$. Now all the statements of Theorem 4.1 and Theorem 4.2 apply to $\upsilon^*_G$ with obvious changes.

Conclusion

The paper systematically studies the effect of changing the objective function by a linear, quadratic, or smooth perturbation. The obtained results provide careful expansions for the solution and the value of the perturbed optimization problem. These expansions can be used as building blocks in different areas including statistics and machine learning, quasi-Newton optimization, and uncertainty quantification for inverse problems, among many others.
Central limit theorems under non-stationarity via relative weak convergence

Nicolai Palm and Thomas Nagler

May 6, 2025

arXiv:2505.02197v1 [math.ST] 4 May 2025

Statistical inference for non-stationary data is hindered by the lack of classical central limit theorems (CLTs), not least because there is no fixed Gaussian limit to converge to. To address this, we introduce relative weak convergence, a mode of convergence that compares a statistic or process to a sequence of evolving processes. Relative weak convergence retains the main consequences of classical weak convergence while accommodating time-varying distributional characteristics. We develop concrete relative CLTs for random vectors and empirical processes, along with sequential, weighted, and bootstrap variants, paralleling the state-of-the-art in stationary settings. Our framework and results offer simple, plug-in replacements for classical CLTs whenever stationarity is untenable, as illustrated by applications in nonparametric trend estimation and hypothesis testing.

Contents

1. Introduction
2. Relative Weak Convergence and CLTs
2.1. Background and notation
2.2. Non-stationary CLTs via weak convergence along subsequences
2.3. Relative weak convergence
2.4. Relative central limit theorems
3. Relative CLTs for non-stationary time-series
3.1. Multivariate relative CLT
3.2. Asymptotic tightness under bracketing entropy conditions
3.3. Weighted uniform relative CLT
3.4. Sequential relative CLT
4. Bootstrap inference
4.1. Bootstrap consistency and relative weak convergence
4.2. Multiplier bootstrap
4.3. Practical inference
5. Applications
5.1. Uniform confidence bands for a nonparametric trend
5.2. Testing for time series characteristics
A. Proofs for relative weak convergence and CLTs
A.1. Relative weak convergence in ℓ∞(T)
A.2. Relative central limit theorems
A.3. Existence and tightness of corresponding GPs
A.4. Relative Lindeberg CLT
B. Tightness under bracketing entropy conditions
B.1. Coupling
B.2. Bernstein inequality
B.3. Chaining
B.4. Proof of Theorem 3.4
C. Proofs for relative CLTs under mixing conditions
C.1. Proof of Theorem 3.2
C.2. Proof of Theorem 3.6
C.3. Asymptotic tightness of the multiplier empirical process
D. Proofs for the bootstrap
D.1. Proof of Proposition 4.1 and a corollary
D.2. Proof of Proposition 4.2
D.3. Proof of Proposition 4.4
D.4. Proof of Proposition 4.5
D.5. Proof of Proposition 4.6
D.6. Proof of Proposition 4.7
D.7. A useful lemma
E. Proofs for the applications
E.1. Proof of Corollary 5.1
E.2. Proof of Corollary 5.2
E.3. Proof of Corollary 5.5
F. Auxiliary results
F.1. Bracketing numbers under non-stationarity

1. Introduction

In many application areas, data are naturally recorded over time. The processes generating these data often have evolving characteristics. Examples are environmental data, which are subject to evolving climatic conditions; economic and financial data, which are influenced by changing market fundamentals and political environment; and network data in epidemiology and social sciences, which are subject to changes in societal behavior. Given the significant role these data play in our society, there is a strong need for statistical methods that can handle such non-stationary data with formal guarantees.

Most inferential procedures in statistics rely on central limit theorems (CLTs) as a fundamental principle. This includes simple low-dimensional problems such as testing for equality of means, as well as modern high-dimensional and infinite-dimensional regression problems, where empirical process theory (including related uniform CLTs via weak convergence) has become the workhorse (Van der Vaart and Wellner, 2023, Dehling and Philipp, 2002, Kosorok, 2008). However, weak convergence and, by extension, CLTs are fundamentally tied to stationarity. In general non-stationary settings, there may be no fixed Gaussian limit to converge to because the data-generating process evolves over time.
Existing approaches

A common ad-hoc fix to this conundrum is to de-trend, difference, or otherwise transform the data to remove the most glaring effects of non-stationarity (e.g., Shumway et al., 2000). The pre-processed data are assumed to be stationary, and classical CLTs are applied. This approach is sometimes successful in practice, but it has its pitfalls. The pre-processing is unlikely to remove all non-stationarity, and potential uncertainty and data dependence in the pre-processing are not accounted for in the subsequent analysis. A likely
reason for the popularity of these heuristics is the apparent scarcity of limit theorems for non-stationary data that could support the development of inferential methods.

A few approaches have been proposed to establish CLTs for non-stationary processes, but they have significant limitations. Merlevede and Peligrad (2020) establish a univariate (uniform-in-time) CLT by standardizing the sample average by its standard deviation, which ensures a fixed standard Gaussian limit. Extending this approach to multivariate settings is straightforward by additionally de-correlating the coordinates. However, this is doomed to fail in infinite-dimensional settings, where a decorrelated Gaussian limit process has unbounded sample paths. Another recent line of work (Karmakar and Wu, 2020, Mies and Steland, 2023, Bonnerjee et al., 2024) couples the sample average to an explicit sequence of Gaussian variables on an enriched probability space. Such results are stronger than necessary for most statistical applications and require substantially more effort to establish than classical CLTs. Furthermore, they apply only to multivariate sample averages, with extensions to more complex quantities such as empirical processes being an open problem. The most developed approach thus far relies on the local stationarity assumption, as summarized in Dahlhaus (2012) and recently extended to empirical processes by Phandoidaen and Richter (2022). Here, asymptotic results become possible by considering a hypothetical sequence of data-generating processes providing increasingly many, increasingly stationary observations in a given time window. This approach is tailored to procedures that localize estimates in a small window, and the asymptotics do not concern the actual process generating the data. It does not apply, for example, to a simple (non-localized) sample average over non-stationary random variables.
Contribution 1: Relative weak convergence

This paper takes a different approach. Fundamentally, a CLT is a comparison between the distribution of a sample quantity and a fixed Gaussian distribution. Since, in the case of non-stationary data, there is no fixed distribution to converge to, we may simply compare to an appropriate sequence of Gaussian distributions. The approximating Gaussians can vary with the sample size, reflecting the evolving structure of the underlying process. This approach is strong enough to substitute classical (uniform) CLTs in statistical methods and simple enough that the required conditions are not substantially stronger than those required for stationary CLTs. To formalize the idea in general terms, we introduce the following concept.

Definition. We say $X_n, Y_n$ are relatively weakly convergent if
$$|E^*[f(X_n)] - E^*[f(Y_n)]| \to 0$$
for all $f$ bounded and continuous.

Note that if $Y_n = Y$ is constant, this is simply the definition of weak convergence. If $T$ is finite-dimensional and the distributions of $Y_n$ suitably continuous, relative weak convergence is equivalent to convergence of the difference of distribution functions. Our general notion of asymptotic normality looks as follows.

Definition. We say that a sequence $\{Y_n(t) : t \in T\}$ satisfies a relative central limit theorem (relative CLT) if there exists a relatively compact sequence of centered, tight, and Borel measurable Gaussian processes $N^{Y_n}$ with $\mathrm{Cov}[Y_n(s), Y_n(t)] = \mathrm{Cov}[N^{Y_n}(s), N^{Y_n}(t)]$ such that $Y_n$ and $N^{Y_n}$ are relatively weakly convergent.

Let us explain the definition. A centered GP $N^{Y_n}$ with the covariance structure of $Y_n$ is the natural Gaussian process to compare $Y_n$ to. Relatively compact sequences are the natural analog of tight limits in classical weak convergence and are defined formally in Section 2. Under relative compactness, relative weak convergence is equivalent to weak convergence on subsequences, which makes it straightforward to transfer many useful tools, such as the continuous mapping theorem or the functional delta method, to relative weak convergence.

Contribution 2: Relative CLTs for sample averages and empirical processes

To make the general theory useful, we need to provide concrete relative CLTs under verifiable conditions. It turns out that both finite- and infinite-dimensional relative CLTs hold under high-level assumptions similar to those in classical stationary CLTs. One of our main results looks as follows (Theorem 3.6).

Theorem. Under some moment, mixing, and bracketing entropy conditions, the weighted empirical process $\mathbb{G}_n \in \ell^\infty(S \times \mathcal{F})$ defined by
$$\mathbb{G}_n(s, f) = \frac{1}{\sqrt{k_n}} \sum_{i=1}^{k_n} w_{n,i}(s) \big( f(X_{n,i}) - E[f(X_{n,i})] \big)$$
satisfies a relative CLT.

The weights $w_{n,i}$ can be specified to cover a variety of different CLT flavors. For example, taking $w_{n,i}(s) = \mathbf{1}\{i \le \lfloor s k_n \rfloor\}$ yields a relative sequential CLT (Corollary 3.8); taking $w_{n,i}(s) = K((i/k_n - s)/b_n)$ for some kernel $K$ and bandwidth $b_n$, we obtain localized CLTs similar to those established under local stationarity by Phandoidaen and Richter (2022). To prove the theorem, we rely on a characterization of relative CLTs in terms of relative compactness and marginal relative CLTs of $\mathbb{G}_n$, similar to classical empirical process theory. Regarding the marginals, we provide a relative version of Lyapunov's multivariate CLT (Theorem 3.2). Relative compactness can be established using chaining and coupling arguments under a bracketing condition tailored to non-stationary $\beta$-mixing sequences (Theorem 3.4). Generally, such entropy conditions also ensure the existence of the approximating sequence of Gaussian processes.
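The two weight choices above can be made concrete in a short simulation. The following sketch is my own illustration, not code from the paper; the linear mean trend and the Epanechnikov kernel are arbitrary assumptions. It evaluates the weighted empirical process at the single function $f(x) = x$, first with sequential weights and then with kernel weights.

```python
import numpy as np

def weighted_empirical_process(X, w, means):
    """G_n(s) = k_n^{-1/2} * sum_i w_{n,i}(s) * (X_i - E[X_i]) for a fixed f(x) = x."""
    kn = len(X)
    return (w * (X - means)).sum(axis=-1) / np.sqrt(kn)

rng = np.random.default_rng(0)
kn = 500
means = np.linspace(0.0, 1.0, kn)           # evolving (non-stationary) means
X = rng.normal(loc=means, scale=1.0)        # independent, non-identically distributed

# Sequential weights w_{n,i}(s) = 1{i <= floor(s * k_n)} on a grid of s values.
s_grid = np.linspace(0.0, 1.0, 11)
W_seq = (np.arange(1, kn + 1)[None, :] <= np.floor(s_grid[:, None] * kn)).astype(float)
G_seq = weighted_empirical_process(X, W_seq, means)   # one value per s

# Kernel weights w_{n,i}(s) = K((i/k_n - s)/b_n) with an Epanechnikov kernel.
def K(u):
    return 0.75 * np.clip(1.0 - u**2, 0.0, None)

bn = 0.1
U = (np.arange(1, kn + 1) / kn)[None, :]
W_ker = K((U - s_grid[:, None]) / bn)
G_ker = weighted_empirical_process(X, W_ker, means)   # localized partial sums
```

At $s = 0$ the sequential weights vanish, so the process starts at zero, while the kernel version averages only observations near $s$, mirroring the localized CLTs mentioned above.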
Contribution 3: Bootstrap inference under non-stationarity

Many inferential methods rely on a tractable Gaussian approximation in a weak sense, but whether that Gaussian changes with the sample size plays a subordinate role. A key difficulty remains, however: the covariance structure of the approximating Gaussian process is not known in practice and difficult to estimate. The bootstrap is a common solution to this problem. Considerable effort has been devoted to deriving consistent bootstrap schemes for special cases of non-stationary data. To the best of our knowledge, no bootstrap scheme currently exists for general non-stationary settings. Extending the work of Bücher and Kojadinovic (2019), bootstrap consistency can be characterized in terms of relative weak convergence. Using our relative CLTs, we prove consistency of generic multiplier bootstrap procedures for non-stationary time series solely under moment and mixing assumptions.

Summary and outline

In summary, our main contributions are as follows:
•We introduce a new mode of convergence, which is general enough to explain the asymptotics of non-stationary processes and convenient to use in statistical applications.
•We derive asymptotic normality of empirical processes for non-stationary time series under the same high-level assumptions as for classical stationary CLTs.
•We facilitate the practical use of our theory by establishing consistency of a multiplier bootstrap for non-stationary time series.
This provides a new perspective on the asymptotic theory of non-stationary processes and a comprehensive framework of concepts and tools for statistical inference with general non-stationary data. Our results
are applicable to a wide range of statistical methods and effectively usable as drop-in replacements for classical CLTs when data are not stationary.

The remainder of this paper is structured as follows. Relative weak convergence and CLTs are developed in Section 2. We provide tools for proving relative CLTs and foster their intuition by connecting relative with classical CLTs. Section 3 develops relative CLTs for non-stationary β-mixing sequences. Section 4 develops multiplier bootstrap consistency for non-stationary time series, with applications to non-parametric trend estimation and hypothesis testing discussed in Section 5.

2. Relative Weak Convergence and CLTs

This section introduces definitions, characterizations, and basic properties of relative weak convergence and relative CLTs.

2.1. Background and notation

Throughout this paper, we make use of Hoffmann-Jørgensen's theory of weak convergence. For this reason, we recall some definitions and central statements about weak convergence in general metric spaces and in the space of bounded functions. The material and references of this section are found in Chapter 1 of Van der Vaart and Wellner (2023).

Weak convergence in general metric spaces

In what follows we denote by $X_n : \Omega_n \to D$ sequences of (not necessarily measurable) maps with $\Omega_n$ probability spaces and $D$ some metric space. We will assume $\Omega_n = \Omega$ without loss of generality (see the discussion above Theorem 1.3.4). To avoid clutter, we omit the domain $\Omega$ and write $X_n \in D$ whenever it is clear from context.

Definition 2.1 (Definition 1.3.3). The sequence $X_n$ converges weakly to some Borel measurable map $X \in D$ if
$$E^*[f(X_n)] \to E[f(X)]$$
for all bounded and continuous functions $f : D \to \mathbb{R}$, where $E^*[f(X_n)]$ denotes the outer expectation of $f(X_n)$ (Chapter 1.2). We write $X_n \to_d X$.

Weak convergence of (measurable) random vectors agrees with the usual notion in terms of distribution functions.

Definition 2.2.
The net $Y_\alpha$ is asymptotically measurable if
$$E^*[f(Y_\alpha)] - E_*[f(Y_\alpha)] \to 0$$
for all $f : D \to \mathbb{R}$ bounded and continuous, where $E^*$ and $E_*$ denote outer and inner expectation, respectively. The net is asymptotically tight if for all $\varepsilon > 0$ there exists a compact set $K \subseteq D$ such that
$$\liminf_\alpha P_*(Y_\alpha \in K^\delta) \ge 1 - \varepsilon$$
for all $\delta > 0$, with $K^\delta = \{y \in D : \exists x \in K \text{ s.t. } d(x, y) < \delta\}$, where $P_*$ denotes the inner probability.

Weak convergence to some tight Borel measure implies asymptotic measurability and asymptotic tightness. Conversely, Prohorov's theorem asserts weak convergence along subsequences whenever the sequence is asymptotically tight and measurable.

Weak convergence of stochastic processes

A stochastic process indexed by a set $T$ is a collection $\{Y(t) : t \in T\}$ of random variables $Y(t) : \Omega \to \mathbb{R}$ defined on the same probability space. A Gaussian process (GP) is a stochastic process $\{N(t) : t \in T\}$ such that $(N(t_1), \dots, N(t_k))$ is multivariate Gaussian for all $t_1, \dots, t_k \in T$. Weak convergence of empirical processes is typically considered in (a subspace of) the space of bounded functions
$$\ell^\infty(T) = \big\{ f : T \to \mathbb{R} : \|f\|_T = \sup_{t \in T} |f(t)| < \infty \big\}$$
equipped with the uniform metric $d(f, g) = \sup_{t \in T} |f(t) - g(t)| = \|f - g\|_T$. If the sample paths $t \mapsto Y(t)(\omega)$ of a stochastic process $Y$ are bounded for all $\omega \in \Omega$, it induces a map $Y : \Omega \to \ell^\infty(T)$, $\omega \mapsto (t \mapsto Y(t)(\omega))$. To abbreviate notation, we say $Y \in \ell^\infty(T)$ is a stochastic process (with bounded sample paths), indicating that the $Y(t) : \Omega \to \mathbb{R}$ are random variables with common domain.

A sequence $Y_n \in \ell^\infty(T)$ converges weakly to some tight and Borel measurable $Y \in \ell^\infty(T)$ if and only if $Y_n$ is asymptotically tight, asymptotically
measurable, and all marginals converge weakly, i.e., $(Y_n(t_1), \dots, Y_n(t_k)) \to_d (Y(t_1), \dots, Y(t_k))$ in $\mathbb{R}^k$ for all $t_1, \dots, t_k \in T$.

2.2. Non-stationary CLTs via weak convergence along subsequences

Let us first explain some difficulties with non-stationary CLTs. Consider a sequence of stochastic processes $X_i \in \ell^\infty(T)$. CLTs establish weak convergence of the empirical process $\mathbb{G}_n \in \ell^\infty(T)$ defined by
$$\mathbb{G}_n(t) = \frac{1}{\sqrt{n}} \sum_{i=1}^{n} \big( X_i(t) - E[X_i(t)] \big)$$
to some tight and measurable Gaussian process $\mathbb{G} \in \ell^\infty(T)$. The convergence is characterized in terms of asymptotic tightness and marginal CLTs. The former is independent of stationarity (e.g., Theorems 2.11.1 and 2.11.9 of Van der Vaart and Wellner (2023)). The latter involves weak convergence of the marginals; in particular, $\mathbb{G}_n(t) \to_d \mathbb{G}(t)$ for all $t \in T$. This also implies convergence of the variances under mild conditions. Here a certain degree of stationarity of the samples is required. For non-stationary observations, the convergence $\mathrm{Var}[\mathbb{G}_n(t)] \to \mathrm{Var}[\mathbb{G}(t)]$ fails in general.

Example 2.3. Assume $\mathrm{Var}[X_i(t)] \in \{\sigma_1^2, \sigma_2^2\}$ and the $X_i(t)$ are independent. Write $N_{k,n} = \#\{i \le n : \mathrm{Var}[X_i(t)] = \sigma_k^2\}$. Then,
$$\mathrm{Var}[\mathbb{G}_n(t)] = n^{-1} \sum_{i=1}^{n} \mathrm{Var}[X_i(t)] = \sigma_1^2 \frac{N_{1,n}}{n} + \sigma_2^2 \frac{N_{2,n}}{n} = \sigma_2^2 + (\sigma_1^2 - \sigma_2^2) \frac{N_{1,n}}{n}$$
converges if and only if $N_{1,n}/n$ converges.

For this reason, some non-stationary CLTs rely on standardization such that the covariance matrix equals the identity. Such standardization can only work in finite dimensions, however. The intuitive reason is that a decorrelated Gaussian limit process $\mathbb{G}$ cannot have bounded sample paths (nor be tight) unless $T$ is finite.

Lemma 2.4. If a Gaussian process $\mathbb{G} \in \ell^\infty(T)$ satisfies $\mathrm{Cov}[\mathbb{G}(s), \mathbb{G}(t)] = 0$ for $s \ne t$ and $\mathrm{Var}[\mathbb{G}(s)] = 1$, then $T$ is finite.

Proof. Appendix F.

In summary, the problem with uniform CLTs lies in the fact that marginal CLTs generally require a degree of stationarity, which cannot be bypassed by (naive) standardization. The basic result which we want to put into a larger context is a version of Lyapunov's CLT. Write
$$S_n = \frac{1}{\sqrt{k_n}} \sum_{i=1}^{k_n} \big( X_{n,i} - E[X_{n,i}] \big)$$
for $X_{n,1}, \dots, X_{n,k_n} \in \mathbb{R}$ a triangular array of independent random variables. Assume the general moment condition $\sup_{n,i} E[|X_{n,i}|^{2+\delta}] < \infty$ for some $\delta > 0$. Lyapunov's CLT asserts asymptotic normality whenever variances converge.

Proposition 2.5. Suppose $\sup_{n,i} E[|X_{n,i}|^{2+\delta}] < \infty$ for some $\delta > 0$. Then $\mathrm{Var}[S_n] \to \sigma_\infty^2$ if and only if $S_n \to_d N(0, \sigma_\infty^2)$.

Proof. The sufficiency is a special case of Proposition 2.27 of van der Vaart (2000) and the necessity is Example 1.11.4 of Van der Vaart and Wellner (2023).

When $\mathrm{Var}[S_n]$ does not converge, the moment condition still implies that $\mathrm{Var}[S_n]$ is uniformly bounded. Thus, every subsequence of $\mathrm{Var}[S_n]$ contains a converging subsequence $\mathrm{Var}[S_{n_k}]$ along which Lyapunov's CLT implies asymptotic normality of $S_{n_k}$. The weak limit of $S_{n_k}$, however, depends on the limit of $\mathrm{Var}[S_{n_k}]$. In other words, $S_n$ converges weakly to some Gaussian along subsequences, yet not globally. To obtain a global Gaussian to compare $S_n$ to, the complementary observation is the following fact:
$$\mathrm{Var}[S_{n_k}] \to \sigma_\infty^2 \quad \text{if and only if} \quad N(0, \mathrm{Var}[S_{n_k}]) \to_d N(0, \sigma_\infty^2).$$
Thus, along subsequences, $S_n$ and $N(0, \mathrm{Var}[S_n])$ have the same weak limits. Because any subsequence of $S_n$ contains a weakly convergent subsequence, i.e., $S_n$ is relatively compact, some thought reveals that their difference of distribution functions converges to zero.

Proposition 2.6 (Relative Lyapunov CLT). Denote by $F_n$ resp. $\Phi_n$ the distribution function of $S_n$ resp. $N(0, \mathrm{Var}[S_n])$. Then, $|F_n(t) - \Phi_n(t)| \to 0$ for all $t \in \mathbb{R}$.
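Proposition 2.6 can be illustrated numerically. The sketch below is my own construction, not from the paper; the block-alternating variance profile is a hypothetical example chosen so that $\mathrm{Var}[S_n]$ oscillates without a limit. It measures the Kolmogorov distance between the law of $S_n$ and the moving Gaussian $N(0, \mathrm{Var}[S_n])$ at two sample sizes with visibly different variances.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def sigma2(i):
    # Blocks [2^k, 2^(k+1)) alternate between variance 1 and variance 4,
    # so Var[S_n] (the mean of sigma2(1..n)) oscillates in n and has no limit.
    return np.where(np.floor(np.log2(i)).astype(int) % 2 == 0, 1.0, 4.0)

def kolmogorov_distance(n, n_rep=10000):
    s2 = sigma2(np.arange(1, n + 1))
    # Centered uniform steps scaled to variance sigma2(i) (non-Gaussian).
    X = rng.uniform(-1.0, 1.0, size=(n_rep, n)) * np.sqrt(3.0 * s2)
    Sn = X.sum(axis=1) / np.sqrt(n)
    var_Sn = s2.mean()                               # Var[S_n] for independent steps
    t = np.linspace(-6.0, 6.0, 201)
    Fn = (Sn[:, None] <= t[None, :]).mean(axis=0)    # empirical cdf of S_n
    Phin = stats.norm.cdf(t, scale=np.sqrt(var_Sn))  # cdf of N(0, Var[S_n])
    return np.max(np.abs(Fn - Phin)), var_Sn

d1, v1 = kolmogorov_distance(2**8)   # Var[S_n] close to 3
d2, v2 = kolmogorov_distance(2**9)   # Var[S_n] close to 2
```

Both distances come out small even though `v1` and `v2` differ markedly, which is exactly the content of the relative CLT: there is no single limiting Gaussian, but $N(0, \mathrm{Var}[S_n])$ tracks $S_n$ at every $n$.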
Proof. This is a special case of Theorem A.7.

In other words, even in non-stationary settings, the scaled sample average remains approximately Gaussian, but with the notion of a limiting distribution replaced by a sequence of Gaussians. This mode of convergence and the resulting type of asymptotic normality extend naturally to stochastic processes.

2.3. Relative weak convergence

All proofs of the remaining section are found in Appendix A. Let $X_n, Y_n$ be sequences of arbitrary maps from probability spaces $\Omega_n, \Omega_n'$ into a metric space $D$. Throughout the rest of this paper it is tacitly understood that all such maps have a common probability space as their domain.

Definition 2.7 (Relative weak convergence). We say $X_n$ and $Y_n$ are relatively weakly convergent if
$$|E^*[f(X_n)] - E^*[f(Y_n)]| \to 0$$
for all $f : D \to \mathbb{R}$ bounded and continuous. We write $X_n \leftrightarrow_d Y_n$.

Remark 2.8. For $X$ a Borel law, $X_n \leftrightarrow_d X$ if and only if $X_n \to_d X$.

Contrary to weak convergence, relative weak convergence implies neither measurability nor tightness. Any sequence is relatively weakly convergent to itself. For the purpose of this paper, we restrict to relative weak convergence to relatively compact sequences. Those turn out to be the natural analog of tight, measurable limits in classical weak convergence.

Definition 2.9 (Relative asymptotic tightness and compactness). We call $X_n$ relatively asymptotically tight if every subsequence contains a further subsequence which is asymptotically tight. We call $X_n$ relatively compact if every subsequence contains a further subsequence which converges weakly to a tight Borel law.

From the definition we see that if $X_n \leftrightarrow_d Y_n$ and $Y_n \to_d Y$, then $X_n \to_d Y$. Thus, if $Y_n$ is relatively compact, $X_n \leftrightarrow_d Y_n$ essentially states that $X_n$ and $Y_n$ have the same weak limits along subsequences.

Proposition 2.10. Assume that $X_n$ is relatively compact. The following are equivalent:
(i) $X_n \leftrightarrow_d Y_n$.
(ii) For all subsequences $n_k$ such that $X_{n_k} \to_d X$ with $X$ a tight Borel law, it follows that $Y_{n_k} \to_d X$.
(iii) For all subsequences $n_k$ there exists a further subsequence $n_{k_i}$ such that both $X_{n_{k_i}}$ and $Y_{n_{k_i}}$ converge weakly to the same tight Borel law.
In such case, $Y_n$ is relatively compact as well.

The characterization of relative weak convergence via weak convergence on subsequences is very convenient. In particular, many useful properties of weak convergence can be transferred to relative weak convergence. The following results are particularly relevant for statistical applications and indicate that relative weak convergence allows for similar conclusions as weak convergence.

Proposition 2.11 (Relative continuous mapping). If $X_n \leftrightarrow_d Y_n$ and $g : D \to E$ is continuous, then $g(X_n) \leftrightarrow_d g(Y_n)$.

Proposition 2.12 (Extended relative continuous mapping). Let $g_n : D \to E$ be a sequence of functions with $E$ another metric space. Assume that for all subsequences of $n$ there exists another subsequence $n_k$ and some $g : D \to E$ such that $g_{n_k}(x_k) \to g(x)$ for all $x_k \to x$ in $D$. Let $Y_n$ be relatively compact. Then,
(i) $g_n(Y_n)$ is relatively compact.
(ii) If $X_n \leftrightarrow_d Y_n$, then $g_n(X_n) \leftrightarrow_d g_n(Y_n)$.

Proposition 2.13 (Relative delta method). Let $D, E$ be metrizable topological vector spaces and $\theta_n \in D$ be relatively compact. Let $\phi : D \to E$ be continuously Hadamard-differentiable (see Definition A.2) in an open subset $D_0 \subset D$ with $\theta_n \in D_0$ for all $n$. Assume
$$r_n(X_n - \theta_n) \leftrightarrow_d Y_n$$
for some sequence of constants $r_n \to \infty$ with $Y_n$ relatively compact. Then,
$$r_n\big( \phi(X_n) - \phi(\theta_n) \big) \leftrightarrow_d \phi_{\theta_n}'(Y_n).$$

Let $S^\delta = \{x \in D : d(x, S) < \delta\}$ denote the $\delta$-enlargement and $\partial S$ the boundary (closure minus interior) of a set $S$.

Proposition 2.14. Let $X_n \leftrightarrow_d Y_n$ with $Y_n$ relatively compact, and let $S_n$ be Borel sets such that every subsequence of $S_n$ contains a further subsequence $S_{n_k}$ with $\lim_{k \to \infty} S_{n_k} = S$ for some $S$ satisfying
$$\liminf_{\delta \to 0} \limsup_{k \to \infty} P^*\big( Y_{n_k} \in (\partial S)^\delta \big) = 0. \quad (1)$$
Then
$$\lim_{n \to \infty} |P^*(X_n \in S_n) - P^*(Y_n \in S_n)| = 0.$$

Condition (1) ensures that the $S_n$ are continuity sets of $Y_n$ asymptotically in a strong form. This prevents the laws from accumulating too much mass near the boundary of $S_n$. In statistical applications, $Y_n$ is typically (the supremum of) a non-degenerate Gaussian process, for which appropriate anti-concentration properties can be guaranteed for any set $S$ (e.g., Giessing, 2023).

A fundamental result in empirical process theory is the characterization of weak convergence in terms of asymptotic tightness and marginal weak convergence (Van der Vaart and Wellner, 2023, Theorem 1.5.7). This characterization underpins uniform CLTs. Because relative weak convergence to some relatively compact sequence is equivalent to weak convergence along subsequences, a similar result holds for relative weak convergence.

Theorem 2.15. Let $X_n, Y_n \in \ell^\infty(T)$ with $Y_n$ relatively compact. The following are equivalent:
(i) $X_n \leftrightarrow_d Y_n$.
(ii) $X_n$ is relatively asymptotically tight and all marginals satisfy
$$\big( X_n(t_1), \dots, X_n(t_d) \big) \leftrightarrow_d \big( Y_n(t_1), \dots, Y_n(t_d) \big)$$
for all finite subsets $\{t_1, \dots, t_d\} \subset T$.

2.4. Relative central limit theorems

Fix a sequence of stochastic processes $Y_n \in \ell^\infty(T)$ with $\sup_{t \in T} E[Y_n(t)^2] < \infty$. CLTs assert weak convergence $Y_n \to_d N$ with $N$ some tight and measurable GP. Naturally, we define (relative) asymptotic normality as $Y_n \leftrightarrow_d N_n$ with $N_n$ some sequence of GPs. Contrary to CLTs with a fixed limit, however, there is no unique sequence of 'limiting' GPs. It shall be convenient to specify the covariance structure of $N_n$ to mirror that of $Y_n$. This allows for a tractable Gaussian approximation.

Definition 2.16 (Corresponding GP). Let $Y$ be a map with values in $\ell^\infty(T)$. A Gaussian process (GP) corresponding to $Y$ is a map $N^Y$ with values in $\ell^\infty(T)$ such that $\{N^Y(t) : t \in T\}$ is a centered GP with covariance function given by $(s, t) \mapsto \mathrm{Cov}[Y(s), Y(t)]$.
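On a finite grid, a GP corresponding to $Y_n$ can be constructed directly from the covariance in Definition 2.16. The sketch below is my own illustration; the partial-sum process with a linearly growing step scale is a hypothetical $Y_n$. It estimates $\mathrm{Cov}[Y_n(s), Y_n(t)]$ by simulation and draws the corresponding centered Gaussian via a jittered Cholesky factor.

```python
import numpy as np

rng = np.random.default_rng(2)

# A hypothetical Y_n on a finite grid: partial-sum process of independent,
# non-stationary steps with time-varying standard deviations.
n = 200
grid = np.linspace(0.1, 1.0, 10)
sd = 1.0 + np.arange(1, n + 1) / n

def draw_Yn(n_rep):
    steps = rng.normal(0.0, sd, size=(n_rep, n))
    csum = np.cumsum(steps, axis=1) / np.sqrt(n)     # Y_n(t) = n^{-1/2} sum_{i<=tn} X_i
    idx = (np.ceil(grid * n) - 1).astype(int)
    return csum[:, idx]

Y = draw_Yn(5000)
C = np.cov(Y, rowvar=False)                          # empirical Cov[Y_n(s), Y_n(t)]

# Corresponding GP N^{Y_n}: centered Gaussian with the same covariance,
# drawn via a (jittered) Cholesky factor of C.
L = np.linalg.cholesky(C + 1e-10 * np.eye(len(grid)))
N = L @ rng.normal(size=(len(grid), 5000))           # columns are draws of N^{Y_n}
```

By construction the draws in `N` are centered and reproduce the covariance structure of $Y_n$ on the grid, up to Monte Carlo error, which is all Definition 2.16 asks for.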
Similar to classical CLTs and in view of Proposition 2.10, we further restrict to relatively compact sequences of tight and Borel measurable GPs and define a relative CLT as follows.

Definition 2.17 (Relative CLT). We say that the sequence $Y_n$ satisfies a relative central limit theorem if there exists a relatively compact sequence of tight and Borel measurable GPs $N^{Y_n}$ corresponding to $Y_n$ with $Y_n \leftrightarrow_d N^{Y_n}$.

Theorem 2.15 characterizes relative CLTs in terms of marginal relative CLTs.

Corollary 2.18. The sequence $Y_n$ satisfies a relative CLT if and only if
(i) there exist tight and Borel measurable GPs $N^{Y_n}$ corresponding to $Y_n$,
(ii) $Y_n$ and $N^{Y_n}$ are relatively asymptotically tight, and
(iii) all marginals $(Y_n(t_1), \dots, Y_n(t_k)) \in \mathbb{R}^k$ satisfy a relative CLT.

The restriction to relatively compact sequences of tight and Borel measurable GPs enables inference just as in classical weak convergence theory (see the previous section and Section 5). Such sequences exist under mild assumptions. In finite dimensions, corresponding tight Gaussians always exist. Any such sequence converges iff its covariances converge. Hence, a sequence of Gaussians is relatively compact iff covariances converge along subsequences. Equivalently, the sequences of variances are uniformly bounded (Corollary A.6). As a result, relative CLTs can be characterized as follows.

Proposition 2.19. Let $Y_n$ be a sequence of $\mathbb{R}^d$-valued random variables. Denote by $\Sigma_n$ the covariance matrix of $Y_n$. Then, the following are equivalent:
(i) $Y_n$ satisfies a relative CLT.
(ii) For all subsequences $n_k$ such that $\Sigma_{n_k} \to \Sigma$ converges, it holds that $Y_{n_k} \to_d N(0, \Sigma)$, and $\sup_{n \in \mathbb{N}, i \le d} \mathrm{Var}[Y_n^{(i)}] < \infty$, where $Y_n^{(i)}$ denotes
the $i$-th component of $Y_n$.
(iii) All subsequences $n_k$ contain a subsequence $n_{k_i}$ such that $\Sigma_{n_{k_i}} \to \Sigma$ and $Y_{n_{k_i}} \to_d N(0, \Sigma)$.

In other words, multivariate relative CLTs are essentially equivalent to CLTs along subsequences where covariances converge. From this it is straightforward to generalize classical multivariate CLTs to relative multivariate CLTs (e.g., Lindeberg's CLT, Theorem A.7). For infinite-dimensional index sets $T$, Kolmogorov's extension theorem implies existence of GPs $\{N_n(t) : t \in T\}$ with $\mathrm{Cov}[N_n(s), N_n(t)] = \mathrm{Cov}[Y_n(s), Y_n(t)]$, but possibly unbounded sample paths. Tightness can be guaranteed under entropy conditions, as shown in the following. Define the covering number $N(\epsilon, T, d)$ with respect to some semi-metric space $(T, d)$ as the minimal number of $\epsilon$-balls needed to cover $T$. Denote by
$$\rho_n(s, t) = \mathrm{Var}[Y_n(s) - Y_n(t)]^{1/2}, \quad s, t \in T,$$
the standard deviation semi-metric on $T$ induced by $Y_n$.

Proposition 2.20. If for all $n$ it holds that
$$\int_0^\infty \sqrt{\ln N(\epsilon, T, \rho_n)} \, d\epsilon < \infty,$$
there exists a sequence of tight and Borel measurable GPs $N^{Y_n}$ corresponding to $Y_n$.

Proposition 2.21. Let $N^{Y_n}$ be a sequence of Borel measurable GPs corresponding to $Y_n$. Assume that there exists a semi-metric $d$ on $T$ such that
(i) $(T, d)$ is totally bounded.
(ii) $\lim_{n \to \infty} \int_0^{\delta_n} \sqrt{\ln N(\epsilon, T, \rho_n)} \, d\epsilon = 0$ for all $\delta_n \downarrow 0$.
(iii) $\lim_{n \to \infty} \sup_{d(s,t) < \delta_n} \rho_n(s, t) = 0$ for every $\delta_n \downarrow 0$.
If further $\sup_n \mathrm{Var}[Y_n(t)] < \infty$ for all $t \in T$, the sequence $N^{Y_n}$ is asymptotically tight.

Condition (iii) requires the existence of a global semi-metric with respect to which the sequence of standard deviation semi-metrics is asymptotically uniformly continuous. In many practical settings, one can simply take $d(t, s) = \sup_n \rho_n(t, s)$.

3. Relative CLTs for non-stationary time-series

Now that we know what relative CLTs are, this section provides specific instances of such results for non-stationary time-series. The assumptions of these CLTs are relatively weak, making them applicable in a wide range of statistical problems.
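For a concrete feel for the entropy conditions in Propositions 2.20 and 2.21, the sketch below (my own example, not from the paper) computes covering numbers of $T = [0, 1]$ under the semi-metric $d(s, t) = |s - t|^\alpha$, for which $N(\epsilon, T, d) = \lceil \tfrac{1}{2}\epsilon^{-1/\alpha} \rceil$, and approximates the entropy integral numerically.

```python
import numpy as np

def covering_number(eps, alpha=0.5):
    # An eps-ball for d(s,t) = |s-t|**alpha is an interval of length
    # 2 * eps**(1/alpha), so this many balls cover [0, 1].
    return int(np.ceil(0.5 * eps ** (-1.0 / alpha)))

def entropy_integral(upper, alpha=0.5, n_grid=20000):
    # Trapezoidal approximation of  int_0^upper sqrt(ln N(eps)) d(eps).
    eps = np.linspace(upper / n_grid, upper, n_grid)
    vals = np.sqrt(np.log([covering_number(e, alpha) for e in eps]))
    return float(np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(eps)))

J_full = entropy_integral(1.0)    # finite: the condition of Proposition 2.20
J_small = entropy_integral(0.05)  # shrinks as the upper limit decreases
```

Because $\sqrt{\ln N(\epsilon)}$ grows only like $\sqrt{\ln(1/\epsilon)}$ near zero, the integral is finite, and truncating it at a small upper limit makes it small, which is the flavor of condition (ii) in Proposition 2.21.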
Of course, no CLT can be expected to hold under arbitrary dependence structures. Several measures exist to constrain the dependence between observations, such as $\alpha$-mixing or $\phi$-mixing coefficients (Bradley, 2005), or the functional dependence measure of Wu (2005). In the following, we will focus on $\beta$-mixing, because it is widely applicable and allows for sharp coupling inequalities.

Definition 3.1. Let $(\Omega, \mathcal{A}, P)$ be a probability space and $\mathcal{A}_1, \mathcal{A}_2 \subset \mathcal{A}$ sub-$\sigma$-algebras. Define the $\beta$-mixing coefficient
$$\beta(\mathcal{A}_1, \mathcal{A}_2) = \frac{1}{2} \sup \sum_{(i,j) \in I \times J} |P(A_i \cap B_j) - P(A_i) P(B_j)|,$$
where the supremum is taken over all finite partitions $\cup_{i \in I} A_i = \cup_{j \in J} B_j = \Omega$ with $A_i \in \mathcal{A}_1$, $B_j \in \mathcal{A}_2$. For $p, n \in \mathbb{N}$ and $p < k_n$ define
$$\beta_n(p) = \sup_{k \le k_n - p} \beta\big( \sigma(X_{n,1}, \dots, X_{n,k}),\, \sigma(X_{n,k+p}, \dots, X_{n,k_n}) \big).$$

The $\beta$-mixing coefficients quantify how independent events become as one moves further apart in the sequence. If the $\beta$-mixing coefficients become zero as $n$ and $p$ approach infinity, the events become close to independent. Note that the mixing coefficients themselves are indexed by $n$, because they may change over time in the non-stationary setting.

3.1. Multivariate relative CLT

We start with a multivariate relative CLT for triangular arrays of random variables. Proposition 2.19 extends classical to relative multivariate CLTs. The following result builds on Lyapunov's CLT in combination with a coupling argument. Let $X_{n,1}, \dots, X_{n,k_n}$ be a triangular array of $\mathbb{R}^d$-valued random variables.

Theorem 3.2 (Multivariate relative CLT). Let $\alpha \in [0, 1/2)$ and $\gamma > 1 + (1 - 2\alpha)^{-1}$. Assume
(i) $k_n^{-1} \sum_{i,j=1}^{k_n} |\mathrm{Cov}[X_{n,i}^{(l_1)}, X_{n,j}^{(l_2)}]| \le K$ for all $n$ and $l_1, l_2 = 1, \dots, d$.
(ii) $\sup_{n,i} E[|X_{n,i}^{(l)}|^\gamma] < \infty$ for all $l = 1, \dots, d$.
(iii) $k_n \beta_n(k_n^\alpha)^{\frac{\gamma - 2}{\gamma}} \to 0$.
Then, the scaled sample average $k_n^{-1/2} \sum_{i=1}^{k_n} (X_{n,i} - E[X_{n,i}])$ satisfies a relative CLT.

Proof. Appendix C.1.

The summability condition (i) on the covariances can be seen as a minimal requirement for any general CLT under mixing conditions (Bradley, 1999). Conditions (i) and (iii) restrict the dependence, where (i) essentially bounds the variances of the scaled sample average. Condition (ii) excludes heavy tails of $X_{n,i}$. Note that (ii) and (iii) exhibit a trade-off: universal bounds on higher moments weaken the conditions on the mixing coefficients' decay rate.

3.2. Asymptotic tightness under bracketing entropy conditions

In order to extend the multivariate CLT to an empirical process CLT, we need a way to ensure relative compactness. Consider the following general setup:
•$X_{n,1}, \dots, X_{n,k_n}$ is a triangular array of random variables in some Polish space $\mathcal{X}$.
•$\mathcal{F}_n = \{f_{n,t} : t \in T\}$ is a set of measurable functions from $\mathcal{X}$ to $\mathbb{R}$ for all $n$.
•$\cup_{n \in \mathbb{N}} \mathcal{F}_n$ admits a finite envelope function $F : \mathcal{X} \to \mathbb{R}$, i.e., $\sup_{n \in \mathbb{N}, f \in \mathcal{F}_n} |f(x)| \le F(x)$ for all $x \in \mathcal{X}$.
Define the empirical process $\mathbb{G}_n$ on $T$ by
$$\mathbb{G}_n(t) = \frac{1}{\sqrt{k_n}} \sum_{i=1}^{k_n} \big( f_{n,t}(X_{n,i}) - E[f_{n,t}(X_{n,i})] \big).$$
Because we have an envelope, $\mathbb{G}_n$ has bounded sample paths and we obtain a map $\mathbb{G}_n$ with values in $\ell^\infty(T)$. This empirical process can be seen as a triangular version of the (classical) empirical process indexed by a single set of functions. To ensure asymptotic tightness, we rely on bracketing entropy conditions, with norms tailored to the non-stationary time-series setting. Given a semi-norm $\| \cdot \|$ on (an extension of) $\mathcal{F}$, recall that the bracketing numbers $N_{[]}(\epsilon, \mathcal{F}, \| \cdot \|) \in \mathbb{N}$ are the minimal number of brackets
$$[l_i, u_i] = \{f \in \mathcal{F} : l_i \le f \le u_i\}$$
such that $\mathcal{F} = \cup_{i=1}^{N_\epsilon} [l_i, u_i]$ with $l_i, u_i : \mathcal{X} \to \mathbb{R}$ measurable and $\|l_i - u_i\| < \epsilon$.
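In the discrete case, the supremum in Definition 3.1 is attained at the finest partitions, so the coefficient reduces to half the $L_1$ distance between the joint law and the product of its marginals. A minimal sketch, with a helper of my own, assuming discrete $X$ and $Y$ given by a joint pmf table:

```python
import numpy as np

def beta_mixing(joint):
    """beta coefficient between sigma(X) and sigma(Y) for discrete X, Y,
    given the joint pmf as a 2-D array: half the L1 distance between the
    joint law and the product of its marginals."""
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1, keepdims=True)   # marginal of X (column vector)
    py = joint.sum(axis=0, keepdims=True)   # marginal of Y (row vector)
    return 0.5 * np.abs(joint - px * py).sum()

# Independent coordinates: joint = product of marginals, so beta = 0.
indep = np.outer([0.3, 0.7], [0.5, 0.5])

# Perfectly coupled fair coins: beta = (2*|0.5-0.25| + 2*|0-0.25|)/2 = 0.5.
coupled = np.array([[0.5, 0.0], [0.0, 0.5]])
```

The two extremes bracket the general behavior: the coefficient vanishes under independence and grows with the strength of the coupling, which is exactly what the mixing conditions in the theorems below control along the sequence.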
For stationary observations X_{n,i} ∼ P, bracketing entropy is usually measured with respect to an L_p(P)-norm, p ≥ 2. In the case of non-stationary observations, the brackets need to be measured with respect to all underlying laws of the samples. It turns out that a scaled average of L_p(P^{X_{n,i}})-norms is sufficient.

Definition 3.3. Let γ ≥ 1. Define the semi-norms ∥·∥_{γ,n} resp. ∥·∥_{γ,∞} on F by

    ∥h∥_{γ,n} = ( (1/k_n) Σ_{i=1}^{k_n} E[|h(X_{n,i})|^γ] )^{1/γ},    ∥h∥_{γ,∞} = sup_{n∈ℕ} ∥h∥_{γ,n}.

Note that ∥·∥_{γ,n} is a composition of semi-norms, hence a semi-norm itself (Lemma B.3). The following result shows that ∥·∥_{γ,n}-bracketing entropy conditions imply asymptotic tightness of G_n under mixing assumptions (Appendix B). Because ∥·∥_{γ,n}-bracketing entropy bounds covering entropy (Remark B.8), the existence of an approximating sequence of GPs is also guaranteed.

Theorem 3.4. Assume that for some γ > 2

(i) ∥F∥_{γ,∞} < ∞.
(ii) sup_{n∈ℕ} max_{m≤k_n} m^ρ β_n(m) < ∞ for some ρ > γ/(γ−2).
(iii) ∫_0^{δ_n} √(ln N_{[]}(ϵ, F_n, ∥·∥_{γ,n})) dϵ → 0 for all δ_n ↓ 0, and the integrals are finite for all n.

Denote by d_n(s, t) = ∥f_{n,s} − f_{n,t}∥_{γ,n} for s, t ∈ T. Assume that there exists a semi-metric d on T such that

    lim_{n→∞} sup_{d(s,t)<δ_n} d_n(s, t) = 0

for all δ_n ↓ 0 and (T, d) is totally bounded. Then,

•G_n is asymptotically tight.
•there exists an asymptotically tight sequence of tight Borel measurable GPs N_n corresponding to G_n.

The proof, given in Appendix B, relies on a chaining
argument with adaptive coupling and truncation. An important intermediate step is a new maximal inequality (see Theorem B.6) that may be of independent interest:

Theorem 3.5. Let F be a class of functions f : X → ℝ with envelope F, and suppose that

    ∥f∥_{γ,n} ≤ δ,    (1/n) Σ_{i,j=1}^n |Cov[h(X_i), h(X_j)]| ≤ K_1 ∥h∥²_{γ,n}

for some γ > 2, all f ∈ F, and all h : X → ℝ bounded and measurable. Suppose that sup_n β_n(m) ≤ K_2 m^{−ρ} for some ρ ≥ γ/(γ−2). Then, for any n ≥ 5 and δ ∈ (0, 1),

    E∥G_n∥_F ≲ ∫_0^δ √(ln_+ N_{[]}(ϵ)) dϵ + ∥F∥_{γ,n} [ln N_{[]}(δ)]^{[1−1/(ρ+1)](1−1/γ)} n^{−1/2+[1−1/(ρ+1)](1−1/γ)} + √n N_{[]}(e_n)^{−1},

where N_{[]}(ϵ) = N_{[]}(ϵ, F, ∥·∥_{γ,n}) and ln_+ x = max(ln x, 0).

In many applications, ∥·∥_{γ,n}-bracketing numbers can be replaced by L_γ(Q)-bracketing numbers whenever all P^{X_{n,i}} are simultaneously dominated by some measure Q (Appendix F.1), or simply by L_∞-bracketing numbers (which coincide with L_∞-covering numbers) whenever the function class is uniformly bounded. Many bounds on L_γ(Q)-bracketing numbers are well known (Section 2.7 of Van der Vaart and Wellner (2023)).

3.3. Weighted uniform relative CLT

With a multivariate relative CLT and conditions for asymptotic tightness, we have all we need to establish relative empirical process CLTs. Our main result below should cover the vast majority of statistical applications. Let F be a set of measurable functions with finite envelope F. Recall that an envelope of F is a function F : X → ℝ with sup_{f∈F} |f(x)| ≤ F(x) for all x ∈ X. Assume the family of weights w_{n,i} : S → ℝ, n ∈ ℕ, i ≤ k_n, satisfies sup_{n,i,x} |w_{n,i}(x)| < ∞, and recall the definition of the weighted empirical process G_n ∈ ℓ^∞(S × F),

    G_n(s, f) = (1/√k_n) Σ_{i=1}^{k_n} w_{n,i}(s) ( f(X_{n,i}) − E[f(X_{n,i})] ).

Note that the envelope implies bounded sample paths of G_n. A relative CLT for G_n can be established in terms of bracketing entropy for the function class

    W_n = { g_{n,s} : {1, …, k_n} → ℝ, i ↦ w_{n,i}(s) : s ∈ S }.
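To make the definition of the weighted empirical process concrete, the following sketch (an illustration of the formula only, not the paper's implementation) evaluates G_n(s, f) on a finite grid of s-values for a single f, assuming the means E[f(X_{n,i})] are known:

```python
import numpy as np

def weighted_empirical_process(x, means, weights, f):
    """G_n(s, f) = k_n^{-1/2} * sum_i w_{n,i}(s) * (f(X_{n,i}) - E[f(X_{n,i})]).

    x       : array of length k_n with the observations X_{n,1}, ..., X_{n,k_n}
    means   : array of length k_n with the (assumed known) E[f(X_{n,i})]
    weights : (num_s, k_n) matrix; row s holds w_{n,1}(s), ..., w_{n,k_n}(s)
    f       : vectorized function applied to the observations
    """
    k_n = len(x)
    centred = f(np.asarray(x)) - np.asarray(means)   # f(X_{n,i}) - E[f(X_{n,i})]
    return np.asarray(weights) @ centred / np.sqrt(k_n)

# Sequential weights w_{n,i}(s) = 1{i <= floor(s * k_n)} on a grid of s-values.
k_n, grid = 100, np.linspace(0, 1, 5)
w = (np.arange(1, k_n + 1)[None, :] <= np.floor(grid[:, None] * k_n)).astype(float)
vals = weighted_empirical_process(np.ones(k_n), np.zeros(k_n), w, lambda z: z)
# vals[j] = floor(s_j * k_n) / sqrt(k_n); for s = 1 this equals sqrt(k_n) = 10.
```

The deterministic toy input (all observations equal to 1, means set to 0) makes the partial-sum structure of the sequential weights visible directly.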
Denote by d^w_n the semi-metrics on S defined by

    d^w_n(s, t) = ∥g_{n,s} − g_{n,t}∥_{γ,n} = ( (1/k_n) Σ_{i=1}^{k_n} |w_{n,i}(s) − w_{n,i}(t)|^γ )^{1/γ},

and assume the following entropy conditions on the weights:

W1 ∫_0^{δ_n} √(ln N_{[]}(ϵ, W_n, ∥·∥_{γ,n})) dϵ → 0 for all δ_n ↓ 0, and the integrals are finite for all n.
W2 there exists a semi-metric d^w on S such that for all δ_n ↓ 0

    lim_{n→∞} sup_{d^w(s,t)<δ_n} d^w_n(s, t) = 0.

W3 (S, d^w) is totally bounded.

These assumptions cover the constant case w_{n,i}(s) = 1 as well as more sophisticated weights; see the next sections.

Theorem 3.6 (Weighted relative CLT). Assume W1–W3 and the following hold:

(i) ∥F∥_{γ,∞} < ∞.
(ii) sup_{n∈ℕ} max_{m≤k_n} m^ρ β_n(m) < ∞ for some ρ > 2γ(γ−1)/(γ−2)².
(iii) ∫_0^∞ √(ln N_{[]}(ϵ, F, ∥·∥_{γ,∞})) dϵ < ∞.

Then, G_n satisfies a relative CLT in ℓ^∞(S × F).

Proof. Appendix C.2.

The moment condition (i) ensures that all γ-moments E[|f(X_{n,i})|^γ] are uniformly bounded. Again, (i) and (ii) display a trade-off: higher moments allow for a slower polynomial decay of the mixing coefficients.

Example 3.7. Assuming the moment condition with γ = 4, the β-mixing coefficients must decay polynomially with degree greater than 6, i.e., sup_n β_n(m) ≤ C m^{−ρ} for some ρ > 6 and all m.

3.4. Sequential relative CLT

A famous result by Donsker asserts asymptotic normality of the partial-sum process over iid data. A generalization to sequential empirical processes Z_n ∈ ℓ^∞([0,1] × F) indexed by functions, defined as

    Z_n(s, f) = (1/√k_n) Σ_{i=1}^{⌊s k_n⌋} ( f(X_{n,i}) − E[f(X_{n,i})] ),

can be found in, e.g., Theorem 2.12.1 of Van der Vaart and Wellner (2023). Beyond the iid case, asymptotic normality of Z_n is hard to prove and requires additional technical assumptions even for finite function classes (Dahlhaus et al., 2019, Merlevède et al., 2019).
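For the sequential weights w_{n,i}(s) = 1{i ≤ ⌊sn⌋} used in the sequential CLT below, the semi-metric d^w_n has a simple closed form, which makes conditions W1–W3 easy to check. A small sketch (our own illustration, with γ as a parameter):

```python
from math import floor

def d_w(s, t, n, gamma):
    """Semi-metric d^w_n(s, t) for the sequential weights w_{n,i}(s) = 1{i <= floor(s*n)}:
    |w_{n,i}(s) - w_{n,i}(t)| equals 1 for exactly |floor(s*n) - floor(t*n)| indices i,
    so the gamma-average reduces to (|floor(s*n) - floor(t*n)| / n)^(1/gamma)."""
    return (abs(floor(s * n) - floor(t * n)) / n) ** (1 / gamma)

# d^w_n(s, t) is approximately |s - t|^(1/gamma); e.g., with gamma = 4:
print(d_w(0.5, 0.25, 100, 4))  # (25/100)^(1/4) ≈ 0.707
```

In particular, d^w_n(s, t) ≈ |s − t|^{1/γ}, so a semi-metric d^w as in W2 exists and (S, d^w) = ([0,1], d^w) is totally bounded.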
Specifying w_{n,i}(s) = 1{i ≤ ⌊sn⌋}, relative sequential CLTs are simple corollaries of weighted relative CLTs.

Corollary 3.8 (Sequential relative CLT). Under conditions (i)–(iii) of Theorem 3.6, the sequential empirical process Z_n ∈ ℓ^∞([0,1] × F) satisfies a relative CLT.

Proof. Appendix C.2.

4. Bootstrap inference

To make practical use of relative CLTs, we need a way to approximate the distribution of limiting GPs. Their covariance operators are a moving target, however, and generally difficult to estimate. The bootstrap is a convenient way to approximate the distribution of limiting GPs, and easy to implement in practice. This section provides some general results on the consistency of multiplier bootstrap schemes for non-stationary time series.

4.1. Bootstrap consistency and relative weak convergence

To define what bootstrap consistency means in the context of empirical processes, we follow the setup and notation of Bücher and Kojadinovic (2019). In fact, we shall see that the usual definition of bootstrap consistency can be equivalently expressed in terms of relative weak convergence. Let X_n be some sequence of random variables with values in X_n and W_n an additional sequence of random variables, independent of X_n, with values in W_n, with W^{(j)}_n denoting independent copies of W_n. Denote by G_n = G_n(X_n) resp. G^{(j)}_n = G_n(X_n, W^{(j)}_n) a sequence of maps constructed from X_n resp. (X_n, W^{(j)}_n) with values in ℓ^∞(T) such that each G_n(t), G^{(j)}_n(t) is measurable.

Proposition 4.1. Assume that G_n is relatively compact. Then, the following are equivalent:

(i) for n → ∞,

    sup_{h ∈ BL_1(ℓ^∞(F))} | E[h(G^{(1)}_n) | X_n] − E[h(G_n)] | →_{P*} 0

and G^{(1)}_n is asymptotically measurable;
(ii) it holds that

    (G_n, G^{(1)}_n, G^{(2)}_n) ↔_d G_n^{⊗3},

where BL_1(ℓ^∞(F)) denotes the set of 1-Lipschitz continuous functions from ℓ^∞(F) to ℝ. Call G^{(j)}_n a consistent bootstrap scheme in any such case.

Classically, G_n is some (transformation of an) empirical process, and consistency of the bootstrap is derived from CLTs for G_n and G^{(j)}_n.
In view of Proposition 4.1, this approach generalizes to relative CLTs (Corollary D.1).

4.2. Multiplier bootstrap

Now fix some triangular array X_n = (X_{n,1}, …, X_{n,k_n}) ∈ X_n of random variables with values in a Polish space X and some family of uniformly bounded functions w_{n,i} : S → ℝ. Let F be a set of measurable functions from X to ℝ with finite envelope F. Denote by V_n = (V_{n,1}, …, V_{n,k_n}) ∈ ℝ^{k_n} a triangular array of random variables and by V^{(j)}_n = (V^{(j)}_{n,1}, …, V^{(j)}_{n,k_n}) independent copies of V_n. Define G_n, G^{(j)}_n ∈ ℓ^∞(S × F) by

    G_n(s, f) = (1/√k_n) Σ_{i=1}^{k_n} w_{n,i}(s) ( f(X_{n,i}) − E[f(X_{n,i})] ),
    G^{(j)}_n(s, f) = (1/√k_n) Σ_{i=1}^{k_n} V^{(j)}_{n,i} w_{n,i}(s) ( f(X_{n,i}) − E[f(X_{n,i})] ).

Proposition 4.2. Let X_{n,i} satisfy the conditions of Theorem 3.6 for some γ > 2 and ρ. For every ϵ > 0, let ν_n(ϵ) be such that max_{|i−j| ≤ ν_n(ϵ)} |Cov[V_{n,i}, V_{n,j}] − 1| ≤ ϵ. Assume that

(i) V_{n,1}, …, V_{n,k_n} are identically distributed and independent of (X_{n,i})_{i∈ℕ}.
(ii) E[V_{n,i}] = 0, Var[V_{n,i}] = 1, and sup_n E[|V_{n,i}|^γ] < ∞.
(iii) k_n β^X_n(ν_n(ϵ))^{(γ−2)/γ}, k_n β^V_n(k_n^α)^{(γ−2)/γ} → 0 for every ϵ > 0 and some 1 + (1−2α)^{−1} < γ.

Then, G^{(j)}_n is a consistent bootstrap scheme.

Proof. Appendix D.2.

Example 4.3 (Block bootstrap with exponential weights). Let ξ_i ∼ Exp(1) be iid for i ∈ ℤ and define

    V_{n,i} = (1/√(2m_n)) Σ_{j=i−m_n}^{i+m_n} (ξ_j − 1).

Then the V_{n,i} are m_n-dependent and it holds that |Cov[V_{n,i}, V_{n,j}] − 1| ≤ |i−j|/m_n. Choosing ν_n(ϵ) = ⌊ϵ m_n⌋, we see that if

(i) m_n < k_n^α for some 1 + (1−2α)^{−1} < γ, and
(ii) k_n β^X_n(ϵ m_n)^{(γ−2)/γ} → 0 for every ϵ > 0,

conditions
(i)–(iii) of Proposition 4.2 are satisfied. Continuing Example 3.7 with γ = 4 and sup_n β_n(m) ≤ C m^{−ρ} for some ρ > 6, we can pick m_n = k_n^{1/3}.

4.3. Practical inference

The bootstrap process G^{(j)}_n in the previous section depends on the unknown quantity μ_n(i, f) = E[f(X_{n,i})]. In many testing applications, we have E[f(X_{n,i})] = 0, at least under the null hypothesis; see Section 5. If this is not the case, estimating μ_n(i, f) consistently may still be possible in simple problems (e.g., a fixed-degree polynomial trend), or under triangular array asymptotics where μ_n(i, f) approaches a simple function (e.g., local stationarity). For a general, observed non-stationary process (X_i)_{i∈ℕ}, it is impossible to distinguish a random series (X_i)_{i∈ℕ} with E[f(X_i)] = 0 from a deterministic one with E[X_i] = μ(i). As a consequence, it is generally impossible to quantify the uncertainty in G_n consistently. This is a fundamental problem in non-stationary time series analysis, which the relative CLT framework makes transparent.

A modified bootstrap can still provide valid, but possibly conservative, inference in practice. Let \hat μ_n(i, f) be a potentially non-consistent estimator of μ_n(i, f), \bar μ_n(i, f) = E[\hat μ_n(i, f)] its expectation, and define the processes

    \hat G^*_n(s, f) = (1/√n) Σ_{i=1}^n V_{n,i} w_{n,i}(s) ( f(X_i) − \hat μ_n(i, f) ),
    G^*_n(s, f) = (1/√n) Σ_{i=1}^n V_{n,i} w_{n,i}(s) ( f(X_i) − \bar μ_n(i, f) ).

We assume that \hat μ_n(i, f) converges to μ_n(i, f) in the following sense.

Proposition 4.4. Suppose the conditions of Proposition 4.2 are satisfied, \hat G^*_n − G^*_n is relatively compact, and for every ϵ > 0,

    sup_{f∈F} max_{1≤i≤n} Var[\hat μ_n(i, f)] = o(ν_n(ϵ)^{−1}).

Then ∥\hat G^*_n − G^*_n∥_{S×F} →_p 0.

The variance condition is fairly mild: it must vanish, but can do so at a rate much slower than 1/n. If the bias also vanishes at an appropriate rate, the approximated bootstrap process \hat G^*_n is, in fact, consistent.

Proposition 4.5.
Suppose the conditions of Proposition 4.2 and Proposition 4.4 are satisfied, G^*_n is relatively compact, and

    sup_{f∈F} max_{1≤i≤n} |\bar μ_n(i, f) − μ_n(i, f)|² = o(ν_n(ϵ)^{−1}).

Then \hat G^*_n is consistent for G_n.

In particular, this allows us to derive bootstrap consistency under local stationarity asymptotics under standard conditions. As explained above, this type of consistency should not be expected for the asymptotics of the observed process. Even without consistency, the bootstrap still provides valid, but conservative, inference, as shown in the following propositions.

Proposition 4.6. Define \hat q^*_{n,α} as the (1−α)-quantile of ∥\hat G^*_n∥_{S×F}. Suppose that the conditions of Proposition 4.4 hold, and that G^*_n is relatively compact and satisfies a relative CLT with tight corresponding GPs and Var[G^*_n(s, f)], Var[G_n(s, f)] ≥ σ > 0 for all (s, f) ∈ S × F and n large. Then,

    lim inf_{n→∞} P( ∥G_n∥_{S×F} ≤ \hat q^*_{n,α} ) ≥ 1 − α.

If, on the other hand, G^*_n is not relatively compact, it usually holds that

    P( ∥G^*_n∥_{S×F} > t_n ) → 1,    (2)

for some t_n → ∞. In this case, the bootstrap is over-conservative.

Proposition 4.7. If (2) and the conditions of Theorem 3.6 and Proposition 4.4 hold, then

    lim_{n→∞} P( ∥G_n∥_{S×F} ≤ \hat q^*_{n,α} ) = 1.

Although the bootstrap quantiles may be conservative, they are still informative: as n tends to infinity, it usually holds that \hat q^*_{n,α}/√n → 0 (see the proof of Lemma D.3). In this sense, the bootstrap quantiles yield potentially conservative, yet asymptotically vanishing, uniform confidence intervals for the weighted mean n^{−1} Σ_{i=1}^n w_{n,i} μ(i, ·).

5. Applications

To end, we explore some exemplary applications that
cannot be handled by previous results. The methods are illustrated by monthly mean temperature anomalies in the Northern Hemisphere from 1880 to 2024, provided by NASA (GISTEMP Team, 2025, Lenssen et al., 2024) and shown in Fig. 1.

5.1. Uniform confidence bands for a nonparametric trend

Nonparametric estimation of the trend function μ(i) = E[X_i] is a key problem in non-stationary time series analysis. As explained in Section 4, μ(i) = E[X_i] is not estimable consistently in general, at least not under the asymptotics of the observed process (X_i)_{i∈ℕ}. The local stationarity assumption (Dahlhaus, 2012) resolves this issue by considering the asymptotics of a hypothetical sequence of time series (X_{n,i})_{i∈ℕ} in which μ_n(i) = E[X_{n,i}] becomes flat as n → ∞.

Figure 1: Monthly mean temperature anomalies in the Northern Hemisphere from 1880 to 2024 (dots), kernel estimate of the trend (solid line), and uniform 90%-confidence interval (shaded area).

The relative CLT framework allows us to consider the asymptotics of the observed process (X_i)_{i∈ℕ}, but we have to accept the fact that μ(i) = E[X_i] cannot be estimated consistently. Instead, we aim at estimating the smoothed mean

    μ_b(i) = (1/(nb)) Σ_{j=1}^n K( (j−i)/(nb) ) μ(j)

by

    \hat μ_b(i) = (1/(nb)) Σ_{j=1}^n K( (j−i)/(nb) ) X_j,

where K is a kernel function supported on [−1, 1] and b > 0 is the bandwidth parameter. In our setting, the bandwidth b plays a different role than in the local stationarity framework. Since E[\hat μ_b(i)] = μ_b(i) for every value of b, we do not require b → 0 as n → ∞. Instead, we may choose b to be a fixed value, quantifying the scale (as a fraction of the overall sample) at which we look at the series. Fig. 1 shows the kernel estimate of the trend function μ_b(i) for b = 0.05 as a solid line. Here, b = 0.05 means that the smoothing window spans 0.1 × n months, or roughly 15 years. Holding b fixed conveniently allows for uniform-in-time asymptotics, as shown in the following.

Corollary 5.1.
Suppose the kernel K is a four times continuously differentiable probability density function supported on [−1, 1], E[|X_i|^5] < ∞, and sup_n β^X_n(k) ≲ k^{−7}. Then, for any b > 0, the process

    s ↦ √n ( \hat μ_b(sn) − μ_b(sn) )

satisfies a relative CLT in ℓ^∞([0, 1]).

To quantify the uncertainty of the estimator, we can use the bootstrap. Specifically, let

    \hat μ^*_b(i) = (1/(nb)) Σ_{j=1}^n K( (j−i)/(nb) ) V_{n,j} ( X_j − \hat μ_b(j) ),

with block multipliers V_{n,j} as in Example 4.3. Let \hat q_{n,α} be the (1−α)-quantile of the distribution of sup_{s∈[0,1]} √n |\hat μ^*_b(sn)|. Then, for α ∈ (0, 1), we can construct a uniform confidence interval for μ_b(sn) by

    \hat C_n(α) = [ \hat μ_b − \hat q_{n,α}/√n, \hat μ_b + \hat q_{n,α}/√n ].

Corollary 5.2. Suppose the conditions of Corollary 5.1 hold, sup_i |μ(i)| < ∞, and there is s ∈ [0, 1] such that

    σ²_n(s) = Var[ (1/√n) Σ_{i=1}^n V_{n,i} K( (i−sn)/(nb) ) ( μ_b(i) − μ(i) ) ] → ∞.

Then lim inf P( μ_b ∈ \hat C_n(α) ) ≥ 1 − α.

The condition on σ²_n(s) is usually satisfied in the time series setting, where a diverging number of the covariances Cov[V_{n,i}, V_{n,j}] are close to 1. If this is not the case, a similar result could be established using Proposition 4.6. The confidence interval \hat C_n(α) is shown as a shaded area in Fig. 1. The confidence interval is uniformly valid for all s ∈ [0, 1] and shows a significant, strongly
increasing trend in the last 50 years.

5.2. Testing for time series characteristics

Suppose Z_1, Z_2, … is a non-stationary time series and we want to test

    H_0 : sup_i sup_{f∈F} |E[f(Z_i)]| = 0    against    H_1 : sup_i sup_{f∈F} |E[f(Z_i)]| ≠ 0.

The functions f ∈ F determine which characteristics of the time series we want to control. This framework includes many important applications, two of which are discussed below.

Example 5.3 (Equal characteristics of two series). Suppose Z_i = (X_i, Y_i), i ∈ ℕ, and we want to test whether the two time series X_1, X_2, … and Y_1, Y_2, … have the same characteristics. To do so, let

    F = { f : f(x, y) = g(x) − g(y), g ∈ G },

so that

    H_0 : sup_i sup_{g∈G} |E[g(X_i)] − E[g(Y_i)]| = 0.

Here G describes the characteristics of the two time series X_i, Y_i that we want to match. Common choices are monomials or indicator functions for testing equality of moments or distribution, respectively.

Example 5.4 (Deterministic trends). Suppose we want to test for a deterministic trend in a time series (X_i)_{i∈ℕ}. Let Δ_h X_i = X_{i+h} − X_i be the h-step forward difference operator, and define

    Δ^r_h X_i = Δ^{r−1}_h X_{i+h} − Δ^{r−1}_h X_i    for r ≥ 2.

The null hypothesis is

    H_0 : E[Δ^r_h X_i] = 0 for all i ∈ ℕ and 1 ≤ r ≤ R,

for fixed h, R ∈ ℕ. The parameter R determines the order of the polynomial trend we want to test for. The step size h allows focusing on long-term trends in the presence of deterministic seasonality. This fits into the above framework by letting Z_i = (Δ_h X_i, …, Δ^R_h X_i), with the convention Δ^r_h X_i = 0 for hr ≥ i, and

    F = { f : f(z_1, …, z_R) = z_r, 1 ≤ r ≤ R }.

The multiplier bootstrap allows us to construct a test for the general null hypothesis above. Define the test statistic and its bootstrap version

    T_n = sup_{s∈[0,1], f∈F} | (1/√n) Σ_{i=1}^n w_{n,i}(s) f(Z_i) |,
    T^*_n = sup_{s∈[0,1], f∈F} | (1/√n) Σ_{i=1}^n V_{n,i} w_{n,i}(s) f(Z_i) |,

where the w_{n,i}(s) are some weights. For example, w_{n,i}(s) = K((i−sn)/(nb)) allows focusing on time-local deviations from the null hypothesis. Let α ∈ (0, 1), and let c^*_n(α) be the (1−α)-quantile of the distribution of T^*_n. We reject H_0 iff T_n > c^*_n(α).
Level and consistency of the test can be straightforwardly derived from our general results.

Corollary 5.5. Let the sequence of weights w_{n,i}(s) and F satisfy the conditions of Theorem 3.6. It holds that P(T_n > c^*_n(α)) → α under H_0, and P(T_n > c^*_n(α)) → 1 whenever

    lim inf_{n→∞} sup_{s∈S, f∈F} | (1/n) Σ_{i=1}^n w_{n,i}(s) E[f(Z_i)] | = δ > 0.

Because sup_i |E[f(Z_i)]| ≠ 0 under H_1, the distribution of the bootstrap statistic T^*_n does not resemble the distribution of T_n under the alternative. Consistency still follows from the fact that T_n/√n is bounded away from zero with probability tending to 1, and T^*_n/√n →_p 0. The power of the test can be improved if we center by some (non-consistent) estimator \hat μ_n(i, f), as discussed in Section 4.

As an illustration, we apply the above procedure to test for nonstationarity of the monthly mean anomalies. For example, let Z_i = (X_i, X_{i−120}) be a pair of anomalies 10 years apart, F = { f : f(x, y) = 1(x < t) − 1(y < t) : t ∈ [−5, 5] }, and w_{n,i}(s) = K((i−sn)/(nb)). This gives a Kolmogorov-Smirnov-type statistic

    T_n = sup_{t∈[−5,5], s∈[0,1]} | (1/√n) Σ_{i=1}^n K( (i−sn)/(nb) ) ( 1_{X_i<t} − 1_{X_{i−120}<t} ) |.

It is straightforward to show that the conditions of Corollary 5.5 hold, and we can use the multiplier
bootstrap to construct a test for the null hypothesis of stationarity. Using b = 0.05 and a kernel estimator for \hat μ_n(s, f) as in the previous section, we get T_n = 0.69 and c^*_n(0.05) = 0.30, with a bootstrapped p-value smaller than 0.0001, providing strong evidence against the null hypothesis of stationarity.

References

Bonnerjee, S., Karmakar, S., and Wu, W. B. (2024). Gaussian approximation for non-stationary time series with optimal rate and explicit construction. The Annals of Statistics, 52(5):2293–2317.

Bradley, R. C. (1999). On the growth of variances in a central limit theorem for strongly mixing sequences. Bernoulli, 5(1):67–80.

Bradley, R. C. (2005). Basic properties of strong mixing conditions. A survey and some open questions. Probability Surveys, 2:107–144.

Bücher, A. and Kojadinovic, I. (2019). A note on conditional versus joint unconditional weak convergence in bootstrap consistency results. Journal of Theoretical Probability, 32(3):1145–1165.

Dahlhaus, R. (2012). Locally stationary processes. In Handbook of Statistics, volume 30, pages 351–413. Elsevier.

Dahlhaus, R., Richter, S., and Wu, W. B. (2019). Towards a general theory for nonlinear locally stationary processes. Bernoulli, 25(2):1013–1044.

Dehling, H. and Philipp, W. (2002). Empirical process techniques for dependent data. In Empirical Process Techniques for Dependent Data, pages 3–113. Springer.

Doukhan, P. (2012). Mixing: Properties and Examples. Lecture Notes in Statistics. Springer New York.

Giessing, A. (2023). Anti-concentration of suprema of Gaussian processes and Gaussian order statistics.

GISTEMP Team (2025). GISS Surface Temperature Analysis (GISTEMP), version 4. https://data.giss.nasa.gov/gistemp/. Dataset accessed 2025-05-01.

Karmakar, S. and Wu, W. B. (2020). Optimal Gaussian approximation for multiple time series. Statistica Sinica, 30(3):1399–1417.

Kosorok, M. R. (2008).
Introduction to Empirical Processes and Semiparametric Inference, volume 61. Springer.

Ledoux, M. and Talagrand, M. (1991). Probability in Banach Spaces: Isoperimetry and Processes, volume 23. Springer Science & Business Media.

Lenssen, N., Schmidt, G. A., Hendrickson, M., Jacobs, P., Menne, M., and Ruedy, R. (2024). A GISTEMPv4 observational uncertainty ensemble. J. Geophys. Res. Atmos., 129(17):e2023JD040179.

Merlevède, F. and Peligrad, M. (2020). Functional CLT for nonstationary strongly mixing processes. Statistics & Probability Letters, 156:108581.

Merlevède, F., Peligrad, M., and Utev, S. (2019). Functional CLT for martingale-like nonstationary dependent structures. Bernoulli, 25(4B):3203–3233.

Mies, F. and Steland, A. (2023). Sequential Gaussian approximation for nonstationary time series in high dimensions. Bernoulli, 29(4):3114–3140.

Phandoidaen, N. and Richter, S. (2022). Empirical process theory for locally stationary processes. Bernoulli, 28(1):453–480.

Rio, E. (2017). Asymptotic Theory of Weakly Dependent Random Processes. Probability Theory and Stochastic Modelling. Springer Berlin Heidelberg.

Shumway, R. H. and Stoffer, D. S. (2000). Time Series Analysis and Its Applications, volume 3. Springer.

van der Vaart, A. (2000). Asymptotic Statistics. Cambridge University Press.

Van der Vaart, A. and Wellner, J. (2023). Weak Convergence and Empirical Processes: With Applications to Statistics. Springer Series in Statistics. Springer International Publishing.

Wu, W. B. (2005). Nonlinear system theory: Another look at dependence. Proceedings of the National Academy of
Sciences, 102(40):14150–14154.

A. Proofs for relative weak convergence and CLTs

Lemma A.1. The sequence X_n is relatively compact if and only if it is asymptotically measurable and relatively asymptotically tight.

Proof. If X_n is relatively compact, it converges to some tight Borel law along subsequences. Along such subsequences n_k, X_{n_k} is asymptotically tight and measurable by Lemma 1.3.8 of Van der Vaart and Wellner (2023). We obtain the sufficiency. Note that asymptotic measurability along subsequences implies (global) asymptotic measurability. For the necessity, for any subsequence there exists a further subsequence n_k such that X_{n_k} is asymptotically tight and measurable. By Prohorov's theorem (Van der Vaart and Wellner, 2023, Theorem 1.3.9), there exists a further subsequence n_{k_i} such that X_{n_{k_i}} converges weakly to some tight Borel law. This implies relative compactness of X_n.

Proof of Proposition 2.10. Recall that X_n converges weakly to a Borel law X iff E*[f(X_n)] → E[f(X)] for all f : D → ℝ bounded and continuous.

1. ⇒ 2.: Assume that X_n ↔_d Y_n. Then, for all f : D → ℝ bounded and continuous,

    |E*[f(Y_{n_k})] − E[f(X)]| ≤ |E*[f(X_{n_k})] − E*[f(Y_{n_k})]| + |E*[f(X_{n_k})] − E[f(X)]| → 0

whenever X_{n_k} →_d X and X is Borel measurable.

2. ⇒ 3.: Since X_n is relatively compact, every subsequence X_{n_k} contains a weakly convergent subsequence X_{n_{k_i}} →_d X with X tight and Borel measurable. By assumption, also Y_{n_{k_i}} →_d X.

3. ⇒ 1.: Given f, it suffices to prove that for all subsequences n_k there exists a further subsequence n_{k_i} such that |E*[f(X_{n_{k_i}})] − E*[f(Y_{n_{k_i}})]| → 0. Pick n_{k_i} such that both X_{n_{k_i}} →_d X and Y_{n_{k_i}} →_d X with X tight and Borel measurable. Then,

    |E*[f(X_{n_{k_i}})] − E*[f(Y_{n_{k_i}})]| ≤ |E[f(X)] − E*[f(Y_{n_{k_i}})]| + |E*[f(X_{n_{k_i}})] − E[f(X)]| → 0.

At last, characterization (iii) implies relative compactness of Y_n.

Proof of Proposition 2.11. For all f : E → ℝ bounded and continuous, the composition f∘g : D → ℝ is bounded and continuous. Thus,

    |E*[f∘g(X_n)] − E*[f∘g(Y_n)]| → 0

for all such f by definition of X_n ↔_d Y_n. This yields the claim.

Proof of Proposition 2.12.
Any subsequence of n contains a further subsequence such that Y_{n_k} →_d Y and there exists g : D → E such that g_{n_k}(x_k) → g(x) for all x_k → x in D. Theorem 1.11.1 of Van der Vaart and Wellner (2023) implies g_{n_k}(Y_{n_k}) →_d g(Y). In particular, g_n(Y_n) is relatively compact and Proposition 2.10 yields the second claim.

Proof of Proposition 2.14. We prove this statement by contradiction. Suppose that

    lim sup_{n→∞} P*(X_n ∈ S_n) − P*(Y_n ∈ S_n) > 0.

Then there is a subsequence n_k of n such that

    lim_{i→∞} P*(X_{n_{k_i}} ∈ S_{n_{k_i}}) − P*(Y_{n_{k_i}} ∈ S_{n_{k_i}}) > 0,    (3)

for every subsequence n_{k_i} of n_k. By Proposition 2.10, n_k has a subsequence n_{k_i} on which X_{n_{k_i}} →_d Y and Y_{n_{k_i}} →_d Y for some tight Borel law Y. We may further assume that S_{n_{k_i}} converges to S on the same subsequence. It holds that

    lim sup_{i→∞} P*(X_{n_{k_i}} ∈ S_{n_{k_i}}) − P*(Y_{n_{k_i}} ∈ S_{n_{k_i}})
    ≤ lim sup_{i→∞} P*(X_{n_{k_i}} ∈ S_{n_{k_i}}) − lim inf_{i→∞} P*(Y_{n_{k_i}} ∈ S_{n_{k_i}})
    ≤ lim sup_{i→∞} P*( X_{n_{k_i}} ∈ lim sup_{i→∞} S_{n_{k_i}} ) − lim inf_{i→∞} P*( Y_{n_{k_i}} ∈ lim inf_{i→∞} S_{n_{k_i}} )
    = lim sup_{i→∞} P*(X_{n_{k_i}} ∈ S) − lim inf_{i→∞} P*(Y_{n_{k_i}} ∈ S).

Further, the Portmanteau theorem (Van der Vaart and Wellner, 2023, Theorem 1.3.4) gives

    P*(Y ∈ ∂S) ≤ P*(Y ∈ (∂S)^δ) ≤ lim sup_{i→∞} P*(Y_{n_{k_i}} ∈ (∂S)^δ).

Taking δ → 0, we obtain P(Y ∈ ∂S) = 0, so S is a continuity set of Y. Now the Portmanteau theorem implies

    lim sup_{i→∞} P*(X_{n_{k_i}} ∈ S) − lim inf_{i→∞} P*(Y_{n_{k_i}} ∈ S) = P*(Y ∈ S) − P*(Y ∈ S) = 0,

which contradicts (3). The case where (3) holds with reverse sign is treated analogously.

Definition A.2. Let D, E be metrizable topological vector spaces, that is, metric spaces such that addition and scalar multiplication are continuous. A map ϕ : D → E is called Hadamard-differentiable at θ ∈ D if there exists ϕ′_θ : D → E continuous and linear such that

    ( ϕ(θ + t_n h_n) − ϕ(θ) ) / t_n → ϕ′_θ(h)

for all t_n → 0 in ℝ and h_n → h such that θ + t_n h_n ∈ D for all n. ϕ is continuously Hadamard-differentiable in an open subset U ⊂ D if ϕ is Hadamard-differentiable for all θ ∈ U and ϕ′_θ is continuous in θ ∈ U.

Proof of Proposition 2.13. Note that g_n : D → E,
x ↦ ϕ′_{θ_n}(x), satisfies the condition of Proposition 2.12, since ϕ has continuous Hadamard-differentials. Thus, ϕ′_{θ_n}(Y_n) is relatively compact since Y_n is. By (iii) of Proposition 2.10 and descending to subsequences, we may assume Y_n →_d Y, θ_n → θ and ϕ′_{θ_n}(Y_n) →_d ϕ′_θ(Y). By Theorem 3.10.4 in Van der Vaart and Wellner (2023), we obtain

    r_n( ϕ(X_n) − ϕ(θ_n) ) →_d ϕ′_θ(Y).

Then, (iii) of Proposition 2.10 yields the claim.

A.1. Relative weak convergence in ℓ^∞(T)

Proof of Theorem 2.15. If X_n ↔_d Y_n, then X_n is relatively compact by Proposition 2.10. This is equivalent to relative asymptotic tightness and asymptotic measurability by Lemma A.1. The continuous mapping theorem then implies marginal relative weak convergence.

For the reverse direction, let n_k be a subsequence. Let n_{k_i} be a subsequence of n_k such that X_{n_{k_i}} →_d X with X a tight Borel law and Y_{n_{k_i}} asymptotically tight. Since all marginals of X_n and Y_n are relatively weakly convergent, this implies the convergence of all marginals of Y_{n_{k_i}} to the marginals of X by characterization (ii) of Proposition 2.10. Note that all marginals of X_n are relatively compact, e.g., by the continuous mapping theorem. Together with asymptotic tightness of Y_{n_{k_i}}, this implies the convergence Y_{n_{k_i}} →_d X by Theorem 1.5.4 of Van der Vaart and Wellner (2023). By characterization (iii) of Proposition 2.10, we obtain X_n ↔_d Y_n.

Lemma A.3. The sequence X_n is relatively compact if and only if it is relatively asymptotically tight and X_n(t) is asymptotically measurable for all t ∈ T.

Proof. By Lemma A.1, X_n is relatively compact if and only if it is relatively asymptotically tight and asymptotically measurable. By definition, any sequence X_n is asymptotically measurable if and only if any subsequence n_k contains a further subsequence n_{k_i} such that X_{n_{k_i}} is asymptotically measurable. By Lemma 1.5.2 of Van der Vaart and Wellner (2023), being asymptotically measurable is equivalent to X_n(t) being asymptotically measurable for all t ∈ T whenever X_n is relatively asymptotically tight.
Altogether, this implies the equivalence.

Corollary A.4 (Relative Cramér-Wold device). Let X_n and Y_n be two sequences of ℝ^d-valued random variables. If X_n is uniformly tight, then X_n ↔_d Y_n if and only if

    t^T X_n ↔_d t^T Y_n

for all t ∈ ℝ^d.

Proof. Restricting to functions of the form f(t^T ·) with f : ℝ → ℝ bounded and continuous, the only-if part follows by definition. For the other direction, assume t^T X_n ↔_d t^T Y_n for all t ∈ ℝ^d. Note that t^T X_n is uniformly tight for all t ∈ ℝ^d. We use characterization (iii) of Proposition 2.10. Let n_k be a subsequence. Since X_n is uniformly tight, there exists a subsequence X_{n_{k_j}} →_d X. Then, also t^T X_{n_{k_j}} →_d t^T X. By characterization (ii) of Proposition 2.10 and t^T X_n ↔_d t^T Y_n, it follows that t^T Y_{n_{k_j}} →_d t^T X for all t ∈ ℝ^d. By the Cramér-Wold device, we derive Y_{n_{k_j}} →_d X. This proves the claim by characterization (iii) of Proposition 2.10.

A.2. Relative central limit theorems

Proof of Corollary 2.18. Note that (Y_n(t_1), …, Y_n(t_k)) satisfies a relative CLT if and only if

    (Y_n(t_1), …, Y_n(t_k)) ↔_d (N_{Y_n}(t_1), …, N_{Y_n}(t_k)),

since corresponding Gaussians are unique in distribution. The necessity then follows by Lemma A.3 (for (ii)) and Theorem 2.15 (for (iii)). For the sufficiency, observe that N_{Y_n}(t) and Y_n(t) are measurable by assumption. Thus, (ii) is equivalent to Y_n and N_{Y_n} being relatively compact by Lemma A.3. Then, Theorem 2.15 provides the claim.

Proof of Proposition 2.19. By Corollary A.6, there exists an asymptotically tight
sequence of tight Borel measurable GPs corresponding to Y_n if and only if

    sup_{n∈ℕ, i≤d} Var[Y^{(i)}_n] < ∞,

equivalently, if all subsequences n_k contain a further subsequence n_{k_i} such that Σ_{n_{k_i}} converges. Combined with the fact that a sequence of centered Gaussians converges weakly if and only if its corresponding sequence of covariances converges, the equivalence follows from Proposition 2.10.

A.3. Existence and tightness of corresponding GPs

Proposition A.5. If there exists a relatively asymptotically tight sequence of tight and Borel measurable GPs corresponding to Y_n, then every subsequence of n contains a further subsequence n_{k_i} such that Cov[Y_{n_{k_i}}(t), Y_{n_{k_i}}(s)] converges for all s, t ∈ T.

Proof. Denote by N_{Y_n} a relatively asymptotically tight sequence of GPs corresponding to Y_n. Any subsequence contains a further subsequence n_{k_i} such that N_{Y_{n_{k_i}}} converges weakly to some tight GP. In particular, all marginals of N_{Y_{n_{k_i}}} converge weakly. Recall that a sequence of centered multivariate Gaussians converges weakly if and only if their corresponding covariances converge. Thus, we obtain convergence of all covariances

    Cov[N_{Y_{n_{k_i}}}(t), N_{Y_{n_{k_i}}}(s)] = Cov[Y_{n_{k_i}}(t), Y_{n_{k_i}}(s)].

Corollary A.6. If T is finite, the following are equivalent:

(i) there exists a relatively asymptotically tight sequence of tight and Borel measurable GPs corresponding to Y_n.
(ii) sup_n Var[Y_n(t)] < ∞ for all t ∈ T.

Proof. For the sufficiency, Proposition A.5 implies that all sequences of covariances Cov[Y_n(s), Y_n(t)] are relatively compact or, equivalently, bounded. For the necessity, identify Y_n with (Y_n(t_1), …, Y_n(t_d)) for T = {t_1, …, t_d}. Construct N_{Y_n} ∼ N(0, Σ_n) with Σ_n the covariance matrix of Y_n. Then, each N_{Y_n} is measurable and tight. Further, sup_n Var[Y_n(t)] < ∞ implies that all covariances Cov[Y_n(s), Y_n(t)] are relatively compact. Thus, every subsequence n_k contains a further subsequence n_{k_i} such that all covariances Cov[Y_{n_{k_i}}(s), Y_{n_{k_i}}(t)] converge. This is equivalent to the weak convergence of N_{Y_{n_{k_i}}}.
We obtain relative compactness, hence asymptotic tightness, of N_{Y_n}.

Proof of Proposition 2.20. By Kolmogorov's extension theorem, there exist centered GPs {N_{Y_n}(t) : t ∈ T} with covariance function given by (s, t) ↦ Cov[Y_n(s), Y_n(t)]. Since (T, ρ_n) is totally bounded (by finiteness of the covering numbers), (T, ρ_n) is separable and thus there exists a separable version of {N_{Y_n}(t) : t ∈ T} with the same marginal distributions (Section 2.3.3 of Van der Vaart and Wellner (2023)). Without loss of generality, assume that {N_{Y_n}(t) : t ∈ T} is separable. Then,

    E[∥N_{Y_n}∥_T] ≤ C ∫_0^∞ √(ln N(ϵ, T, ρ_n)) dϵ < ∞,
    E[ sup_{ρ_n(s,t)≤δ} |N_{Y_n}(t) − N_{Y_n}(s)| ] ≤ C ∫_0^δ √(ln N(ϵ, T, ρ_n)) dϵ,

for some constant C, by Corollary 2.2.9 of Van der Vaart and Wellner (2023). The first inequality implies that each {N_{Y_n}(t) : t ∈ T} has bounded sample paths almost surely; hence, without loss of generality, {N_{Y_n}(t) : t ∈ T} induces a map N_{Y_n} with values in ℓ^∞(T). The second inequality implies that all sample paths of N_{Y_n} are uniformly ρ_n-equicontinuous in probability. Hence, each N_{Y_n} is tight and Borel measurable (Example 1.5.10 of Van der Vaart and Wellner (2023)).

Proof of Proposition 2.21. We derive

    E[ sup_{ρ_n(s,t)≤δ} |N_{Y_n}(t) − N_{Y_n}(s)| ] ≤ C ∫_0^δ √(ln N(ϵ, T, ρ_n)) dϵ

for some constant C independent of n by Corollary 2.2.9 of Van der Vaart and Wellner (2023). By (iii), for every sequence δ → 0 there exists ϵ(δ) → 0 such that d(s, t) < δ implies ρ_n(s, t) < ϵ(δ). Accordingly,

    lim sup_n E[ sup_{d(s,t)≤δ} |N_{Y_n}(t) − N_{Y_n}(s)| ] ≤ lim sup_n E[ sup_{ρ_n(s,t)≤ϵ(δ)} |N_{Y_n}(t) − N_{Y_n}(s)| ] ≤ lim sup_n C ∫_0^{ϵ(δ)} √(ln N(ϵ, T,
https://arxiv.org/abs/2505.02197v1
ρ n)dϵ Taking the limit δ→0 the right hand side converges to zero by (ii). Together with Markov’s inequality, we obtain that NYnis asymptotically uniformly d-equicontinuous in probability. By Corollary A.6 for fixed t∈Tthe sequence NYn(t) is relatively asymptotically tight for all t∈Tif supnVar[Yn(t)]<∞. Since this sequence is R-valued, relative asymptotic tightness, relative compactness and asymptotic tightness agree. Then, Theorem 1.5.7 of Van der Vaart and Wellner (2023) proves that NYnis asymptotically tight. A.4. Relative Lindeberg CLT Theorem A.7 (Relative Lindeberg CLT) .LetXn,1, . . . , X n,knbe a triangular array of independent random vectors with finite variance. Assume 1 knknX i=1E ∥Xn,i∥21{∥Xn,i∥2>knϵ} →0 for all ε >0and for all n∈N, l= 1, . . . , d 1 knknX i=1Varh X(l) n,ii ≤K∈R. Then, the scaled sample average√kn Xn−E Xn satisfies a relative CLT. Proof. Letklnbe a subsequence of knsuch that 1 klnklnX i=1Cov[Xln,i]→Σ 33 converges. Observe that the Lindeberg condition implies 1 klnklnX i=1E ∥Xln,i∥21{∥Xln,i∥2>klnϵ} →0 for all ϵ >0. We apply Proposition 2.27 of van der Vaart (2000) to the triangular array Yn,1, . . . Y n,klnwith Yn,i=k−1/2 lnXln,ito derive p kln Xln−E Xln →dN(0,Σ). By (ii) of Proposition 2.19 we derive the claim. B. Tightness under bracketing entropy conditions The proof of Theorem 3.4 is based on a long sequence of well-known arguments: We group the observations Xn,iin alternating blocks of equal size and apply maximal cou- pling. This yields random variables X∗ n,iwhich corresponding blocks are independent and we obtain the empirical process G∗ nwhere Xn,iis replaced by X∗ n,i. Because G∗ n consists of independent blocks, we can derive a Bernstein type inequality bounding the first moment of G∗ nin terms of Fnprovided that Fnis finite (Lemma B.2). 
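As a toy numerical illustration of the Lindeberg condition in Theorem A.7 (an assumption-laden sketch, not the paper's setting): for a uniformly bounded triangular array, the truncated second moments vanish identically as soon as $k_n \epsilon$ exceeds the bound on $\|X_{n,i}\|^2$.

```python
import numpy as np

rng = np.random.default_rng(0)

def lindeberg_term(xs, eps):
    # Monte Carlo version of (1/k_n) sum_i E[||X_{n,i}||^2 1{||X_{n,i}||^2 > k_n eps}],
    # using one draw per index i of the triangular array (rows of xs).
    k_n = len(xs)
    sq = np.sum(xs ** 2, axis=1)
    return np.mean(sq * (sq > k_n * eps))

# Uniform(-1,1)^2 rows: ||X||^2 <= 2, so the term is exactly 0 once k_n * eps > 2.
terms = [lindeberg_term(rng.uniform(-1, 1, size=(k_n, 2)), eps=0.5)
         for k_n in (10, 100, 1000)]
```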
For any fixed n, we use a chaining argument in order to reduce to finite Fnwhich, in combination with the Bernstein inequality, yields a bound of the first moment of G∗ nin terms of the bracketing entropy (Theorem B.6). Under the conditions of Theorem 3.4, this yields asymptotic equicontinuity of Gnwhich implies relative compactness of Gn. B.1. Coupling Letmnbe a sequence in N. Suppose for simplicity that knis a multiple of 2 mnand group the observations Xn,1, . . . , X n,knin alternating blocks of size mn. By maximal coupling (Rio, 2017, Theorem 5.1), there are random vectors U∗ n,j= X∗ n,(j−1)mn+1, . . . , X∗ n,jm n ∈ Xn such that •Un,j= Xn,(j−1)mn+1, . . . , X n,jm nd=U∗ n,jfor every j= 1, . . . , m n, •each of the sequences U∗ n,2j j=1,...,n/ (2mn)and U∗ n,2j−1 j=1,...,n/ (2mn)are indepen- dent, •P Xn,j̸=X∗ n,j ≤βn(mn) for all j, •P ∃j:Un,j̸=U∗ n,j ≤(kn/mn)βn(mn). 34 Define the coupled empirical process G∗ n∈ℓ∞(T) asGn, but with all Xn,jreplaced by X∗ n,j. In what follows, we will replace E∗(indicating the outer expectation) by Efor better readability. We will provide an upper bound on E∥G∗ n∥Tfor any fixed n. In such case, we identify G∗ nwith the empirical process G∗ n(f) =1√knknX i=1f X∗ n,i −E f X∗ n,i indexed by the function class Fn.
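The alternating-block grouping described above can be sketched as follows (hypothetical data; the helper name is illustrative, and $k_n$ is assumed to be a multiple of $2m_n$ as in the text):

```python
import numpy as np

def alternating_blocks(x, m):
    # Split x[0], ..., x[k-1] into consecutive blocks U_1, U_2, ... of size m
    # and separate the odd- and even-indexed block families, as in the
    # maximal-coupling construction (k assumed to be a multiple of 2m).
    k = len(x)
    assert k % (2 * m) == 0
    blocks = x.reshape(k // m, m)
    return blocks[0::2], blocks[1::2]  # (U_{2j-1})_j and (U_{2j})_j

x = np.arange(24)
odd, even = alternating_blocks(x, m=3)
```

After maximal coupling, each of the two families can be treated as an independent sequence of blocks, which is what the Bernstein bound in Lemma B.2 exploits.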
For fixed n, we drop the index n, i.e., write Fn=F, Xn,i=Xi,mn=metc. and assume without loss of generality that kn=n. Lemma B.1. For any class Fof functions f:X →Rwith supf∈F∥f∥∞≤Band any integer 1≤m≤n/2, it holds E∥Gn∥F≤E∥G∗ n∥F+B√nβn(m). Proof. We have E∥Gn∥F≤Esup f∈F|G∗ nf|+B√nE"nX i=11(Xj̸=X∗ j)# ≤E∥G∗ n∥F+B√nβn(m). B.2. Bernstein inequality Lemma B.2. LetFbe a finite set of functions f:X →Rwith ∥f∥∞≤B,1 nnX i,j=1|Cov[f(Xi), f(Xj)]| ≤Kδ2 for all f∈ F. Then, for any 1≤m≤n/2it holds E∥Gn∥F≲δp ln+(|F|) +mBln+(|F|)√n+B√nβn(m) where the constant only depends on Kandln+(x) = ln(1 + x). Proof. We have E∥Gn∥F≤E∥G∗ n∥F+B√nβn(m), by Lemma B.1. 35 Defining Aj,f=mX i=1f X∗ (j−1)m+i −E f X∗ (j−1)m+i , we can write √n|G∗ n(f)|= n/mX j=1Aj,f ≤ n/(2m)X j=1A2j,f + n/(2m)X j=1A2j−1,f . The random variables in the sequence ( A2j,f)n/(2m) j=1are independent and so are those in (A2j−1,f)n/(2m) j=1. We apply Bernstein’s inequality (Van der Vaart and Wellner, 2023, Lemma 2.2.10). Note that |Aj,f| ≤2mB, hence E |Aj,f|k ≤(2mB)k−2Var [Aj,f] for k≥2. We obtain 2m nn/(2m)X i=1E |Aj,f|k ≤(2mB)k−22m nn/(2m)X i=1Var [Aj,f] ≤(2mB)k−22m nnX i=1|Cov[f(Xi), f(Xj)]| ≤(2mB)k−22Kmδ2. Using Bernstein’s inequality for independent random variables gives P  n/(2m)X j=1A2j,f > t ≤2 exp −1 2t2 Knδ2+ 2tmB and the same bounds holds for the odd sums. Altogether we get P(|G∗ n(f)|> t)≤P  n/(2m)X j=1A2j,f > t√n/2 +P  n/(2m)X j=1A2j−1,f > t√n/2  ≤4 exp −1 8t2 Kδ2+tmB/√n The result follows upon converting this to a bound on the expectation (e.g., Van der Vaart and Wellner, 2023, Lemma 2.2.13). B.3. Chaining We will abbreviate ∥f∥a,n= 1 nnX i=1E[|f(Xi)|a]!1/a N[](ϵ) =N[](ϵ,F,∥ · ∥ γ,n). Let us first collect some properties of ∥ · ∥ γ,nand∥ · ∥ γ,∞. 36 Lemma B.3. The following holds: (i)∥ · ∥ a,ndefines a semi-norm. (ii)∥ · ∥ a,n≤ ∥ · ∥ b,nfora≤b. (iii)∥f1|f|>K∥a,n≤Ka−b∥f∥b/a b,nfor all K > 0anda≤b. Proof. 
Positivity and homogeneity of $\|\cdot\|_{a,n}$ are clear, and the triangle inequality follows from
$$\|h+g\|_{a,n} = \Big(\frac{1}{n}\sum_{i=1}^n \|(h+g)(X_i)\|_a^a\Big)^{1/a} \le \Big(\frac{1}{n}\sum_{i=1}^n \big(\|h(X_i)\|_a + \|g(X_i)\|_a\big)^a\Big)^{1/a} \le \Big(\frac{1}{n}\sum_{i=1}^n \|h(X_i)\|_a^a\Big)^{1/a} + \Big(\frac{1}{n}\sum_{i=1}^n \|g(X_i)\|_a^a\Big)^{1/a} = \|h\|_{a,n} + \|g\|_{a,n}$$
for all $h, g$, where the second inequality is Minkowski's inequality. Next,
$$\|f\|_{a,n} = \Big(\frac{1}{n}\sum_{i=1}^n \|f(X_i)\|_a^a\Big)^{1/a} \le \Big(\frac{1}{n}\sum_{i=1}^n \|f(X_i)\|_b^a\Big)^{1/a} \le \Big(\frac{1}{n}\sum_{i=1}^n \|f(X_i)\|_b^b\Big)^{1/b}$$
by Jensen's inequality. Lastly, note that $K^{b-a} |f(X_i) 1_{|f(X_i)|>K}|^a \le |f(X_i)|^b$. Thus,
$$\|f 1_{|f|>K}\|_{a,n} = \Big(\frac{1}{n}\sum_{i=1}^n \mathbb{E}\big[|f(X_i) 1_{|f(X_i)|>K}|^a\big]\Big)^{1/a} \le K^{(a-b)/a} \Big(\frac{1}{n}\sum_{i=1}^n \mathbb{E}[|f(X_i)|^b]\Big)^{1/a} = K^{(a-b)/a} \|f\|_{b,n}^{b/a}.$$

Theorem B.4. Let $\mathcal{F}$ be a class of functions $f : \mathcal{X} \to \mathbb{R}$ with envelope $F$ such that, for some $\gamma \ge 2$,
$$\|f\|_{\gamma,n} \le \delta, \qquad \frac{1}{n}\sum_{i,j=1}^n |\operatorname{Cov}[h(X_i), h(X_j)]| \le K_1 \|h\|_{\gamma,n}^2$$
for all $f \in \mathcal{F}$ and all bounded and measurable $h : \mathcal{X} \to \mathbb{R}$. Suppose that $\sup_n \beta_n(m) \le K_2 m^{-\rho}$ for some $\rho \ge \gamma/(\gamma-2)$. Then, for any $n \ge 5$, $m \ge 1$, $B \in (0, \infty)$,
$$\mathbb{E}\|\mathbb{G}_n\|_{\mathcal{F}} \lesssim \int_0^\delta \sqrt{\ln_+ N_{[\,]}(\epsilon)}\, d\epsilon + \frac{m B \ln_+ N_{[\,]}(\delta)}{\sqrt{n}} + \sqrt{n}\, B \beta_n(m) + \sqrt{n}\, \|F 1\{F > B\}\|_{1,n} + \sqrt{n}\, N_{[\,]}^{-1}(e^n)$$
with constants only depending on $K_1, K_2$. If the integral is finite, then $\sqrt{n}\, N_{[\,]}^{-1}(e^n) \to 0$ for $n \to \infty$.

Let us first derive some useful corollaries.

Corollary B.5. Let $\mathcal{F}$ be a class of functions $f : \mathcal{X} \to \mathbb{R}$ with envelope $F$, let the $X_{n,i}$ be independent, and let $\|f\|_{2,n} \le \delta$ for all $f \in \mathcal{F}$. Then, for any $n \ge 5$, $B \in (0, \infty)$,
$$\mathbb{E}\|\mathbb{G}_n\|_{\mathcal{F}} \lesssim \int_0^\delta \sqrt{\ln_+ N_{[\,]}(\epsilon, \mathcal{F}, \|\cdot\|_{2,n})}\, d\epsilon + \frac{B \ln_+ N_{[\,]}(\delta, \mathcal{F}, \|\cdot\|_{2,n})}{\sqrt{n}} + \sqrt{n}\, \|F 1\{F > B\}\|_{1,n} + \sqrt{n}\, N_{[\,]}^{-1}(e^n)$$
with constants only depending on $K_1, K_2$.

Proof. It holds
$$\frac{1}{n}\sum_{i,j=1}^n |\operatorname{Cov}[h(X_i), h(X_j)]| = \frac{1}{n}\sum_{i=1}^n \operatorname{Var}[h(X_i)] \le \|h\|_{2,n}^2$$
and the $\beta$-coefficients are 0 for
all m≥1. Applying Theorem B.4 with m= 1 yields the claim. Theorem B.6. LetFbe a class of functions f:X →Rwith envelope Fand for some γ >2, ∥f∥γ,n≤δ1 nnX i,j=1|Cov[h(Xi), h(Xj)]| ≤K1∥h∥2 γ,n 38 for all f∈ F andh:X →Rbounded and measurable. Suppose that max nβn(m)≤ K2m−ρfor some ρ≥γ/(γ−2). Then, for any n≥5, E∥Gn∥F≲Zδ 0q ln+N[](ϵ)dϵ+∥F∥γ,n[lnN[](δ)][1−1/(ρ+1)](1 −1/γ) n−1/2+[1−1/(ρ+1)](1 −1/γ)+√nN−1 [](en). with constants only depending on K1, K2. In particular, if the integral is finite, ∥F∥γ,∞<∞,ρ > γ/ (γ−2)andK1, K2can be chosen independent of n, then lim sup n→∞E∥Gn∥F≲Zδ 0q lnN[](ϵ)dϵ. Proof. It holds ∥F1{F > B }∥1,n≤√n∥F∥γ γ,n Bγ−1. By Theorem B.4 E[∥Gn∥F]≲Zδ 0q ln+N[](ϵ)dϵ+mBln+N[](δ)√n+√nBβ n(m) +√n∥F∥γ γ,n Bγ−1+√nN−1 [](en). Choose m= (n/ln+N[](δ))1/(ρ+1), which gives mln+N[](δ)√n+√nβn(m)≲n−1/2+1/(ρ+1)[ln+N[](δ)]1−1/(ρ+1), and, thus, E[∥Gn∥F]≲Zδ 0q ln+N[](ϵ)dϵ+B[ln+N[](δ)]1−1/(ρ+1) n1/2−1/(ρ+1)+√n∥F∥γ γ,n Bγ−1+√nN−1 [](en). Next, choose B= n[1−1/(ρ+1)]∥F∥γ γ,n [ln+N[](δ)]1−1/(ρ+1)!1/γ This gives B[ln+N[](δ)]1−1/(ρ+1) n1/2−1/(ρ+1)+√n∥F∥γ γ,n Bγ−1=∥F∥γ,n[ln+N[](δ)][1−1/(ρ+1)](1 −1/γ) n−1/2+[1−1/(ρ+1)](1 −1/γ). Lastly, ρ > γ/ (γ−2), then −1/2 + [1 −1/(ρ+ 1)](1 −1/γ)>0, so the second term in the first statement vanishes as n→ ∞ and the last term vanishes since the bracketing integral is finite. Proof of Theorem B.4. Let us first deduce the last statement. If the bracketing integral exists, then it must holdp ln+N[](δ)≲δ−1/(1 +|ln(δ)|) for δ→0, because the upper bound is not integrable. For δ−1=√ nlnn, we have ln +N[](δ)≲nlnn/(1 + ln n)2= o(n). So for large n, it must hold N−1 [](en)≲1/√ nlnn→0. We now turn to the proof of the first statement. 39 Truncation We first truncate the function class Fin order to apply Bernstein’s inequality in com- bination with a chaining argument. It holds E∥Gn∥F≤E ∥Gn(f1{|F| ≤B})∥F +E ∥Gn(f1{|F|> B})∥F , and E ∥Gn(f1{|F|> B})∥F ≤21√nnX i=1E F(Xi)1{|F(Xi)|> B} = 2√n∥F1{F > B }∥1,n. 
because In summary, E∥Gn(f)∥F≤E ∥Gn(f1{|F| ≤B})∥F + 2√n∥F1{F > B }∥1,n. Note that |f1{|F| ≤B}| ≤F1{|F| ≤B} ≤B. By replacing Fwith Ftrun={f1{|F| ≤B}:f∈ F} , we may without loss of generality assume that Fhas an envelope with ∥F∥∞≤B. Observe that the conditions of the theorem remain true for Ftrunand that the bracketing numbers with respect to Ftrunare bounded above by the bracketing numbers with respect toF. Lastly, we may assume ln +Nr0≤n: Ifn <ln+Nr0≤ln+N[](δ) then E∥Gn∥F≲√nB≤mBln+N[](δ)√n which still implies the claim. Chaining setup Fix integers r0≤r1such that 2−r0−1< δ≤2−r0. For r≥r0we construct a nested sequence of partitions F=SNr k=1Fr,kofFintoNrdisjoint subsets such that for each r≥r0 sup f,f′∈Fr,k|f−f′| γ,n<2−r. Clearly, we can choose the partition such that Nr0≤N[] 2−r0 ≤N[](δ). As explained in the proof of Theorem 2.5.8 of Van der Vaart and Wellner (2023), we may assume without loss of generality that p ln+Nr≤rX k=r0q ln+N[](2−k). 40 Then by reindexing the double sum, r1X r=r02−rp ln+Nr≤r1X r=r02−rrX k=r0q ln+N[](2−k) =r1X k=r0q ln+N[](2−k)r1X r=k2−r =r1X k=r02−kq ln+N[](2−k)r1X r=k2−(r−k) ≲r1X k=r02−kq ln+N[](2−k) ≲Zδ 0q ln+N[](ϵ)dϵ. Decomposition For a given f, suppose that Fr,kis the element of the partition that contains f. Note that such Fr,kis unique since all Fr,1, . . . ,Fr,Nrare disjoint. Define πr(f) as some fixed element of this set and define ∆r(f) = sup f1,f2∈Fr,k|f1−f2|. Set τr=2−r mr+1rn ln+Nr+1, m r= min(r ln+Nr n,1)−(γ−2)/(γ−1) , (4) and r1=−log2N−1 [](en). Then ln
+Nr≤nfor all r≤r1. We will frequently apply Bernstein’s inequality with m=mr. Here, note mr≤p n/ln+Nr≤p n/ln(2)≤n/2 for all n≥5. The following (in-)equalities are the reason for the choices of τrandmr: for rsuch 41 thatp ln+Nr+1≤n, i.e., r < r 1it holds mrτr−1√n= 2−r+1p ln+Nr √n2−rγτ−(γ−1) r = 2−r√n1 mrrn ln+Nr+1−(γ−1) = 2−r√n2−γp ln+Nr+1γ−1 mγ−1 r+1 = 2−rp ln+Nr+1 r ln+Nr+1 n!γ−2 mγ−1 r+1 = 2−rp ln+Nr+1 √nτr−1βn(mr) =√n2−r+11 mrrn ln+Nrβn(mr) ≲2−r+11 mrnp ln+Nrm−ρ r = 2−r+1p ln+Nrn ln+Nrm−ρ−1 r = 2−r+1p ln+Nrm2(γ−1) γ−2 r m−ρ−1 r ≤2−r+1p ln+Nr where the last inequality holds because 1 ≤mrandγ/(γ−2)≥ρ, hence, m2(γ−1) γ−2−ρ−1 r ≤ 1. Decompose f=πr0(f) + [f−πr0(f)]1{∆r0(f)/τr0>1} +r1X r=r0+1[f−πr(f)]1 max r0≤k<r∆k(f)/τk≤1,∆r(f)/τr>1 +r1X r=r0+1[πr(f)−πr−1(f)]1 max r0≤k<r∆k(f)/τk≤1 + [f−πr1(f)]1 max r0≤k≤r1∆k(f)/τk≤1 =T1(f) +T2(f) +T3(f) +T4(f). To see this, note that if ∆ r0(f)/τr0>1 all terms but T1(f) vanish and T1(f) =f. Otherwise, define ˆ ras the maximal number r0≤r≤r1such that max r0≤k≤r∆k(f)/τk≤ 1. Then, T1(f) =πr0(f) and if ˆ r < r 1, then, T2(f) =f−πˆr+1(f) T3(f) =πˆr+1(f)−πr0(f) T4(f) = 0 . 42 If ˆr=r1, then, T2(f) = 0 T3(f) =πr1(f)−πr0(f) T4(f) =f−πr1(f). We prove the theorem by separately bounding the four terms E∥GnTj∥F. Note that Gnis additive by construction, i.e., Gn(f+g) =Gn(f) +Gn(g). Bounding T1 Note that for every |g| ≤hit follows |Gn(g)| ≤ |Gn(h)|+ 2√n∥h∥1,n. In combination with the triangle inequality we obtain ∥GnT1∥F≤ ∥Gnπr0∥F+∥Gn∆r0∥F+ 2√nsup f∈F∥∆r0(f)1{∆r0(f)/τr0>1}∥1,n. The sets {∆r0(f) :f∈ F} and{πr0(f) :f∈ F} contain at most Nr0different functions each. The construction implies ∥πr0(f)∥γ,n≤δ,∥πr0(f)∥∞≲B,∥∆r0(f)∥γ,n≤2δ,∥∆r0(f)∥∞≲B. Now the Bernstein bound from Lemma B.2 gives E∥Gnπr0∥F+E∥Gn∆r0∥F≲δp ln+Nr0+mB nln+Nr0+√nBβ n(m). ≤δq ln+N[](δ) +mB nln+N[](δ) +√nBβ n(m). Since the bracketing numbers are decreasing, q ln+N[](δ)≤δ−1Zδ 0q ln+N[](ϵ)dϵ so E∥Gnπr0∥F+E∥Gn∆r0∥F≲Zδ 0q ln+N[](ϵ)dϵ+mBln+N[](δ)√n+√nBβ n(m). 
Recall ln +Nr+1≤nfor any r < r 1. For any such r, (iii) of Lemma B.3 gives √nsup f∈F∥∆r(f)1{∆r(f)/τr>1}∥1,n≤√nτ−(γ−1) r sup f∈F∥∆r(f)∥γ γ,n ≤√nτ−(γ−1) r 2−rγ so that the final upper bound becomes √nsup f∈F∥∆r(f)1{∆r(f)/τr>1}∥1,n≲2−rp ln+Nr+1, (5) 43 for any r < r 1. In particular, using δ≤2−r0, we get √nsup f∈F∥∆r0(f)1{∆r0(f)/τr0>1}∥1,n≲δq ln+N[](δ)≤Zδ 0q ln+N[](ϵ)dϵ. Combined, E∥GnT1∥F≲Zδ 0q ln+N[](ϵ)dϵ+mBln+N[](δ)√n+√nBβ n(m). Bounding T2 Next, E∥GnT2∥F≤r1X r=r0+1E Gn∆r1 max r0≤k<r∆k/τk≤1,∆r/τr>1 F + 2√nr1X r=r0+1sup f∈F ∆r(f)1 max r0≤k<r∆k(f)/τk≤1,∆r/τr>1 1,n =T2,1+T2,2. We start by bounding the first term. It holds ∆r(f)1 max r0≤k<r∆k(f)/τk≤1,∆r(f)/τr>1 γ,n≤2−r by construction of ∆ r(f). Since the partitions are nested ∆ r≤∆r−1. Thus, ∆r1 max r0≤k<r∆k/τk≤1,∆r/τr>1 F≤τr−1. Since there are at most Nrfunctions in {∆r(f) :f∈ F} , the Bernstein bound from Lemma B.2 yields T2,1≲r1X r=r0+1 2−rp ln+Nr+mrτr−1√nln+Nr+√nτr−1βn(mr) ≲r1X r=r0+12−rp ln+Nr ≲Zδ 0q ln+N[](ϵ)dϵ. Further, (5) gives √nsup f∈F ∆r(f)1 max r0≤k<r∆k(f)/τk≤1,∆r/τr>1 1,n≲2−rp ln+Nr. 44 forr < r 1and √nsup f∈F ∆r1(f)1 max r0≤k<r 1∆k(f)/τk≤1,∆r1/τr1>1 1,n≤√nsup f∈F∥∆r1(f)∥γ γ,n ≤√n2−r1 =√nN−1 [](en) by the definition of r1. Thus, T2,2≲r1−1X r=r0+12−rp ln+Nr+√nN−1 [](en)≲Zδ 0q ln+N[](ϵ)dϵ+√nN−1 [](en), and, in summary, E∥GnT2∥F≲Zδ 0q ln+N[](ϵ)dϵ+√nN−1 [](en). Bounding T3 Next, E∥GnT3∥F≤r1X r=r0+1E Gn[πr−πr−1]1 max r0≤k<r∆k/τk≤1 F. There are at most Nrfunctions πr(f) and at most Nr−1functions πr−1(f) asfranges overF. Since the partitions are nested, |πr(f)−πr−1(f)| ≤∆r−1(f) and |πr(f)−πr−1(f)|1 max r0≤k<r∆k(f)/τk≤1 ≤ |∆r−1(f)|1{∆r−1(f)/τr−1≤1} ≤τr−1. Further, ∥πr(f)−πr−1(f)∥γ,n≤ ∥∆r−1(f)∥γ,n≤2−r+1. Just as for T2,1, the Bernstein bound (Lemma B.2) gives E∥GnT3∥F≲Zδ 0q ln+N[](ϵ)dϵ.
Bounding T4 Finally, E∥GnT4∥F=E Gn[f−πr1(f)]1 max r0≤k≤r1∆k(f)/τk≤1 f∈F ≲E∥Gn∆r11{∆r1≤τr1}∥F+√nsup f∈F∥∆r1(f)1{∆r1(f)≤τr1}∥1,n ≲E∥Gn∆r0∥F+√nτr1. 45 and the first term is bounded by T1. Finally, observe that, by the definition of r1, √nτr1≤√n2−r1=√nN−1 [](en). Lemma B.7. Forγ >2and nX i=1βn(i)γ−2 γ≤K it holds 1 nnX i,j=1|Cov[h(Xi), h(Xj)]| ≤4K∥h∥2 γ,n for all h:X →Rmeasurable. Furthermore, if supn∈Nmax m≤nmρβn(m)<∞for some ρ > γ/ (γ−2), then, sup nnX i=1βn(i)γ−2 γ<∞. Proof. Let us first prove the latter claim. Note that if supn∈Nmax m≤nmρβn(m)<∞ thenPn i=1βn(i)γ−2 γ≲Pn i=1m−ργ−2 γand the latter display converges for ρ > γ/ (γ−2). For the first claim, it holds 1 nnX i,j=1|Cov[h(Xi), h(Xj)]| ≤1 nnX i,j=1βn(|i−j|)γ−2 γ∥h(Xi)∥γ∥h(Xj)∥γ ≤1 nnX i,j=1βn(|i−j|)γ−2 γ(∥h(Xi)∥2 γ+∥h(Xj)∥2 γ) ≤1 nnX i=1∥h(Xi)∥2 γnX j=1βn(|i−j|)γ−2 γ+1 nnX j=1∥h(Xj)∥2 γnX i=1βn(|i−j|)γ−2 γ ≤4K nnX i=1∥h(Xi)∥2 γ ≤4K∥h∥2 γ,n. by Theorem 3 of Doukhan (2012) and where the last inequality follows from 1 nnX i=1∥h(Xi)∥2 γ!1/2 = 1 nnX i=1E[|h(Xi)|γ]2/γ!1/2 ≤ 1 nnX i=1E[|h(Xi)|γ]!1/γ by 2/γ≤1 and Jensen’s inequality. 46 Remark B.8. Given a semi-metric donFinduced by a semi-norm ∥ · ∥ satisfying |f| ≤ |g| ⇒ ∥ f∥ ≤ ∥ g∥, any2ε-bracket [f, g]is contained in the ϵ-ball around (f−g)/2. Then, N(ε,F, d)≤N[](2ε,F,∥ · ∥). Both, ∥ · ∥ γ,nand∥ · ∥ γ,∞, satisfy this property. B.4. Proof of Theorem 3.4 We first prove the existence of an asymptotically tight sequence of GPs. We will conclude by Propositions 2.20 and 2.21: There exists K∈Rsuch that knX i=1βn(i)γ−2 γ≤K for all nby Lemma B.7. It holds ρn(s, t)2=Var[Gn(s)−Gn(t)] =1 knVar"knX i=1(fn,s−fn,t)(Xn,i)# ≤1 knknX i,j=1|Cov [(fn,s−fn,t)(Xn,i),(fn,s−fn,t)(Xn,j)]| ≤4K∥fn,s−fn,t∥2 γ,n = 4Kdn(s, t)2 by Lemma B.7. Thus, N(2√ Kϵ, T, ρ n)≤N(ϵ, T, d n)≤N[](2ϵ,Fn,∥ · ∥ γ,n) by Remark B.8. Next, observe that lnN[](ϵ,Fn,∥ · ∥ γ,n) = 0 for all ε≥2∥F∥γ,n. Thus, Z∞ 0q lnN[](ϵ,Fn,∥ · ∥ γ,n)dϵ <∞ for all nby the entropy condition (iii). In summary, 47 •(T, d) is totally bounded. 
•limn→∞Rδn 0p lnN(ϵ, T, ρ n)dϵ≲limn→∞Rδn 0p lnN[](ϵ,Fn,∥ · ∥ γ,n)dϵ= 0 for all δn↓0. •limn→∞supd(s,t)<δnρn(s, t)≤limn→∞supd(s,t)<δn2√ Kdn(s, t) = 0 for every δn↓0. •R∞ 0p lnN(ϵ, T, ρ n)dϵ≲R∞ 0p lnN[](ϵ,Fn,∥ · ∥ γ,n)dϵ <∞for all n. by the assumptions. Lastly, sup nVar[Gn(t)]≲sup n∥F∥2 γ,n=∥F∥2 γ,∞<∞ for all t∈Tby the same argument as above. Combined, we derive the claim by Propositions 2.20 and 2.21. Next, we prove asymptotic tightness of Gn. We derive that supnE |Gn(s)|2 <∞for alls∈Tagain by the moment condition (i) and the summability condition Lemma B.7. Thus, each Gn(s) is asymptotically tight. By Markov’s inequality and Theorem 1.5.7 of Van der Vaart and Wellner (2023), it suffices to prove uniform d-equicontinuity, i.e. that lim sup n→∞E∗sup d(f,g)<δn|Gn(s)−Gn(t)|= 0 for all δn↓0. By lim n→∞sup d(s,t)<δndn(s, t) = 0 for all δn↓0, for every sequence δ→0 there exists a sequence ϵ(δ)→0 such that d(s, t)< δimplies dn(s, t)< ϵ(δ). Thus, lim sup n→∞E∗sup d(s,t)<δn|Gn(s)−Gn(t)| ≤lim sup n→∞E∗sup dn(s,t)<ϵ(δn)|Gn(s)−Gn(t)|. Accordingly, it suffices to prove that lim sup n→∞E∗sup dn(s,t)<δn|Gn(s)−Gn(t)|= 0. For fixed nwe again identify Gnwith the empirical process Gnindexed by Fnand similarly for dn. Note that Gn(f)−Gn(g) =Gn(f−g) and the bracketing number with respect to the function class
Fn,δ={f−g:f, g∈ F n,∥f−g∥γ,n< δ} satisfies N[](ϵ,Fn,δ,∥ · ∥ γ,n)≤N[](ϵ/2,Fn,∥ · ∥ γ,n)2. Indeed, given ϵ/2 brackets [ lf, uf] and [ lg, ug] forfandg, [lf−ug, uf−lg] is an ε-bracket forf−g. By Theorem B.6, lim sup nE∗sup dn(s,t)<δn|Gn(s)−Gn(t)|≲lim n→∞Z2δn 0q lnN[](ϵ,Fn,∥ · ∥ γ,n)dϵ = 0 by (iii) which proves the claim. 48 C. Proofs for relative CLTs under mixing conditions C.1. Proof of Theorem 3.2 We first restrict to univariate random variables which, in combination with the relative Cramer-Wold device (Corollary A.4), yields Theorem 3.2. The idea is to split the scaled sample average 1√nnX i=1(Xn,i−E[Xn,i]) =1√nrnX i=1 Zn,i−E[Zn,i] +˜Zn,i−Eh ˜Zn,ii into alternating long and short block sums. By considering a small enough length of the short blocks, the short block sum are asymptotically negligible. It then suffices to prove a relative CLT for the sequence of long block sums (Lemma C.1) . By maximal coupling and Lemma C.1, the sequence of long block sums can be considered independent and Lindeberg’s CLT (Theorem A.7) applies. Lemma C.1. LetYnandY∗ nbe sequences of random variables in Rsuch that (i)|Yn−Y∗ n|P→0, (ii)supnVar [Yn]<∞and (iii)|Var [Yn]−Var [Y∗ n]| →0. Then, Ynsatisfies a relative CLT if and only if Y∗ ndoes. Proof. It suffices to prove the if direction since the statement is symmetric. Let knbe a subsequence of nandlnbe a further subsequence such that Y∗ ln→dNconverges weakly to some Gaussian with Var Y∗ ln →Var [N]. Such lnexists by (iii) of Proposition 2.19. Since |Yn−Y∗ n|P→0, we obtain Yln→dN. Note Var [N] = lim n→∞Var Y∗ ln = lim n→∞Var [Yln] by assumption. Thus, Ynsatisfies a relative CLT by (iii) of Proposition 2.19. Theorem C.2 (Univariate relative CLT) .LetXn,1, . . . , X n,knbe a triangular array of univariate random variables. Let α∈[0,1/2)and1 + (1 −2α)−1< γ. Assume that (i)k−1 nPkn i,j=1|Cov [Xn,i, Xn,j]| ≤Kfor all n. (ii)supn,iE[|Xn,i|γ]<∞. (iii) knβn(kα n)γ−2 γ→0. 49 Then, the scaled sample average√kn Xn−E Xn satisfies a relative CLT. 
Proof. There exists some δwith 0 < δ < 1/2−αand 1 + (1 −2α)−1<1 + (2 δ)−1< γ. Define qn=kα n,pn=k1/2−δ n−qnandrn=k1/2+δ n. Group the observations in alternating blocks of size pnresp. qn, i.e. Un,i= Xn,1+(i−1)(pn+qn), . . . , X n,pn+(i−1)(pn+qn) ∈Rpn(long blocks) ˜Un,i= Xn,1+ipn+(i−1)qn, . . . , X n,qn+ipn+(i−1)qn ∈Rqn(short blocks) Define Zn,i=pnX j=1U(j) n,i (long block sums) ˜Zn,i=qnX j=1˜U(j) n,i (short block sums) where the upper index jdenotes the j-th component. Then, knX i=1(Xn,i−E[Xn,i]) =rnX i=1 Zn,i−E[Zn,i] +˜Zn,i−Eh ˜Zn,ii and it holds k−1 nVar"knX i=1Xn,i# ≤k−1 nknX i,j=1|Cov [Xn,i, Xn,j]| ≤K k−1 nVar"rnX i=1Zn,i# ≤K k−1 nCov"rnX i=1˜Zn,i,rnX i=1Zn,i# =O(rnqn/kn) =O k1/2+δ+α−1 n =o(1) k−1 nVar"rnX i=1˜Zn,i# =O(rnqn/kn) =o(1) by assumption. Thus, k−1/2 nrnX i=1˜Zn,i−Eh ˜Zn,iiP→0 by Markov’s inequality. Hence, k−1/2 nknX i=1 Xn,i−E[Xn,i] −k−1/2 nrnX i=1 Zn,i−E[Zn,i] P→0. 50 Furthermore, Var" k−1/2 nknX i=1Xn,i# −Var" k−1/2 nrnX i=1Zn,i# =k−1 n Var"rnX i=1˜Zn,i# −2Cov"rnX i=1˜Zn,i,rnX i=1Zn,i# →0. By the previous lemma, k−1/2 nPkn i=1(Xn,i−E[Xn,i]) satisfies a relative CLT if and only ifk−1/2 nPrn i=1Zn,i−E[Zn,i] does. By maximal coupling (Theorem 5.1 of Rio (2017)), for all i= 1, . . . , r nthere exist random vectors
U∗ n,i∈Rpnsuch that •Un,id=U∗ n,i. •the sequence U∗ n,iis independent. •P ∃Un,i̸=U∗ n,i ≤rnβn(qn). Define the coupled long block sums Z∗ n,i=pnX j=1U∗(j) n,i. For all ε >0 we obtain P k−1/2 n rnX i=1Z∗ n,i−rnX i=1Zn,i > ϵ! ≤P ∃Un,i̸=U∗ n,i ≤rnβn(qn) =k1/2+δ nβn(kα n) ≤knβn(kα n)→0. Next, Var"rnX i=1Zn,i# =Var"rnX i=1Z∗ n,i# +rnX i̸=jCov [Zn,i, Zn,j] by independence of Z∗ n,iandPZn,i=PZ∗ n,i. Since supn,k∥Xn,k∥γ<∞and |Cov [Xn,i, Xn,j]| ≤βn(|i−j|)γ−2 γsup n,k∥Xn,k∥γ by Theorem 3. of Doukhan (2012), for i̸=jwe obtain |Cov [Zn,i, Zn,j]| ≤ O p2 nβn(qn)γ−2 γ 1 knrnX i̸=jCov [Zn,i, Zn,j] ≤ O knβn(qn)γ−2 γ =o(1). 51 Thus, 1 kn Var"rnX i=1Zn,i# −Var"rnX i=1Z∗ n,i# →0. Combined with PZn,i=PZ∗ n,i, hence k−1 nVarPrn i=1Z∗ n,i ≤KandE[Zn,i] =E Z∗ n,i , the previous lemma yields that k−1/2 nPrn i=1Zn,i−E[Zn,i] satisfies a relative CLT if and only if k−1/2 nPrn i=1Z∗ n,i−E Z∗ n,i does. Next, the moment assumption together with PZn,i=PZ∗ n,iimply that the sequence (rn/kn)1/2Z∗ n,isatisfies the Lindeberg condition given in Theorem A.7. More specifically, 1 rnrnX i=1Eh |(rn/kn)1/2Z∗ n,i|21{|(rn/kn)1/2Zn,i|2>rnε2}i =1 knrnX i=1E |Zn,i|21{|Zn,i|2>knε2} ≤ϵ1−γ/21 kγ/2 nrnX i=1E[|Zn,i|γ] ≤ϵ1−γ/2Crnpγ n kγ/2 n forC= supiE[|Xn,i|γ]<∞where we used E[|Zn,1|γ] =∥Zn,1∥γ γ = pnX i=1Xn,i γ γ ≤ pnX i=1∥Xn,i∥γ!γ ≤Cpγ n and similarly E[|Zn,i|γ]≤Cpγ n. It holds rnpγ n kγ/2 n≤rnpγ n (rnpn)γ/2=pγ/2 n rγ/2−1 n≤k1/2+δ−δγ n →0. Thus, k−1/2 nrnX i=1 Z∗ n,i−E Z∗ n,i =r−1/2 nrnX i=1(rn/kn)1/2 Z∗ n,i−E Z∗ n,i satisfies a relative CLT which finishes the proof. Proof of Theorem 3.2. Write Sn=1√knknX i=1(Xn,i−E[Xn,i]) 52 for the scaled sample average and Σ nfor its covariance matrix. Let Nn∼ N (0,Σn). By assumption, Σ nis componentwise a bounded sequence, hence, Nnis relatively com- pact. Thus, it suffices to prove Sn↔dNn. By Corollary A.4, this is equivalent to tTSn↔dtTNnfor all t∈Rd. Note that tTNn∼ N 0, tTΣn andtTSnis the scaled sample average associated to tTXn,i. 
Accordingly, it suffices to check the conditions of Theorem C.2 for tTXn,i: The moment and mixing conditions ((ii) and (iii)) of Theorem C.2 follow by assumption. Lastly, k−1 nknX i,j=1 Cov tTXn,i, tTXn,j ≤tTtmax l1,l2k−1 nknX i,j=1 Covh X(l1) n,i, X(l2) n,ji ≤tTtK for all n. This proves (i) of Theorem C.2 and combined we derive the claim. C.2. Proof of Theorem 3.6 We derive Theorem 3.6 from a more general result. Fix some triangular array Xn,1, . . . , X n,kn of random variables with values in a Polish space X. For each n∈Nlet Fn={fn,t:t∈T} be a set of measurable functions from XtoR. Assume that ∪n∈NFnadmits a finite envelope F:X →R. Theorem C.3. Assume that for some γ >2 (i)∥F∥γ,∞<∞. (ii)supn∈Nmax m≤knmρβn(m)<∞for some ρ >2γ(γ−1)/(γ−2)2 (iii)Rδn 0p lnN[](ϵ,Fn,∥ · ∥ γ,n)dϵ→0for all δn↓0and are finite for all n. Denote by dn(s, t) =∥fn,s−fn,t∥γ,n fors, t∈T. Assume that there exists a semi-metric donTsuch that lim n→∞sup d(s,t)<δndn(s, t) = 0 for all δn↓0and(T, d)is totally bounded. Then, the empirical process Gndefined by Gn(t) =1√knknX i=1fn,t(Xn,i)−E[fn,t(Xn,i)] satisfies a relative CLT in ℓ∞(T). 53 Proof. We apply Theorem 3.4 to derive relative compactness of Gnand the existence of an asymptotically tight sequence of tight Borel
measurable GPs NGncorrespond- ing to Gn. Note that each NGn(s) is measurable for all s∈T. Accordingly, NGnis asymptotically measurable, hence, relatively compact by Lemma A.3. According to Corollary 2.18, it remains to prove relative CLTs of the marginals (Gn(t1), . . . ,Gn(td)) for all d∈N,t1, . . . , t d∈T. We apply Theorem 3.2 to the triangular array Yn,1, . . . , Y n,knwith Yn,k= (fn,t1(Xk), . . . , f n,td(Xk)). (ii) of Theorem 3.2 follows by ∥F∥γ,∞<∞. Next, pick ρ−1γ γ−2< α <γ−2 2(γ−1). Such αexists since ρ−1γ γ−2<γ−2 2(γ−1). Then, 1 + (1 −2α)−1< γandknβn(kα n)γ−2 γ≲k1−ραγ−2 γ n →0 sinceγ γ−2< ρα . Lastly, the summability condition on the covariances follows by the summability condition on the β-mixing coefficients (Lemma B.7). Combined, we obtain the claim. Proof of Theorem 3.6. Define the random variables Yn,i= (Xn,i, i)∈ X × N. Note that X ×Nis Polish since XandNare. Define T=S× F,Fn={hn,t:t∈T}with hn,(s,f):X ×N→R,(x, k)7→wn,k(s)f(x) for every ( s, f)∈Twith wn,k= 0 for k > k n. Note that each hn,(s,f)is measurable. Then, the empirical process associated to Yn,iandTis given by Gn. We apply Theorem C.3: set K= max {supn,i,x|wn,i(x)|,∥F∥γ,∞}. Note that F′:X ×N→R,(x, k)7→KF(x) is an envelope of ∪n∈NFnsatisfying condition (i) of Theorem C.3. Since we have σ(Xn,i) =σ((Xn,i, i)), the β-mixing coefficients w.r.t. Yn,iare equal to the β-mixing coefficients w.r.t. Xn,i. Thus, (ii) of Theorem C.3 are satisfied by assumption. Define the semi-metric donTby d (s1, f1),(s2, f2) =dw(s1, s2) +∥f1−f2∥γ,∞. By the entropy condition (iii) and W3 we derive that ( T, d) is totally bounded. By 54 Minkowski’s inequality, we get dn (s1, f1),(s2, f2) =∥hn,(s1,f1)−hn,(s2,f2)∥γ,n = 1 knknX i=1∥wn,i(s1)f1(Xn,i)−wn,i(s2)f2(Xn,i)∥γ γ!1/γ ≤ 1 knknX i=1 ∥F(Xn,i)∥γ|wn,i(s1)−wn,i(s2)|+∥wn,i∥∞∥(f1−f2)(Xn,i)∥γγ!1/γ ≤K 1 knknX i=1|wn,i(s1)−wn,i(s2)|γ!1/γ +K 1 knknX i=1∥(f1−f2)(Xn,i)∥γ γ!1/γ ≤Kdw n(s1, s2) +K∥f1−f2∥γ,∞ which implies lim n→∞sup d(s,t)<δndn(s, t) = 0 for all δn↓0. 
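The key pointwise inequality behind the last display, $|w_1 f_1 - w_2 f_2| \le F |w_1 - w_2| + K |f_1 - f_2|$ for $|f| \le F$ and $|w| \le K$, combined with Minkowski's inequality for $\|\cdot\|_{\gamma,n}$, can be checked on synthetic data (a toy sketch; all arrays below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
gamma = 3.0
F = 2.0                          # envelope bound: draw |f| <= F
K = 1.5                          # sup-norm bound on the weights: |w| <= K
f1, f2 = rng.uniform(-F, F, (2, n))
w1, w2 = rng.uniform(-K, K, (2, n))

def seminorm(x):
    # Empirical ||.||_{gamma,n} seminorm of the vector of evaluations.
    return float(np.mean(np.abs(x) ** gamma) ** (1 / gamma))

# d_n-type bound: ||w1 f1 - w2 f2|| <= F ||w1 - w2|| + K ||f1 - f2||.
lhs = seminorm(w1 * f1 - w2 * f2)
rhs = F * seminorm(w1 - w2) + K * seminorm(f1 - f2)
```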
Define gn,s(i) =wn,i(s) and Gn={gn,s:N→R:s∈S}. Given f∈ F, s∈Sand ε-brackets g≤gn,s≤gandf≤f≤f, set the centers fc= (f+f)/2 and gc= (g+g)/2. Then, |fgn,s−fcgc| ≤ |fgn,s−fgc|+|fgc−fcgc| ≤Fg−g 2+Kf−f 2. Thus, we obtain a bracket fcgc− Fg−g 2+Kf−f 2! ≤fgn,s≤fcgc+ Fg−g 2+Kf−f 2! with ∥F(g−g) +K(f−f)∥γ,n≤ ∥F∥γ,∞∥g−g∥γ,n+K∥f−f∥γ,n ≤2Kε hence, an 2 Kε-bracket for fgn,s. This implies N[](2Kϵ,Fn,∥ · ∥ γ,n)≤N[](ϵ,F,∥ · ∥ γ,∞)N[](ϵ, S,∥ · ∥ γ,n). Since Zδn 0q lnN[](ϵ,F,∥ · ∥ γ,∞)dϵ,Zδn 0p lnN(ϵ, S, d )dϵ→0 for all δn↓0, this implies the entropy condition (iii) of Theorem C.3 and, combined, we derive the claim. 55 Proof of Corollary 3.8. In Theorem 3.6 set wn,i: [0,1]→ {0,1}, s7→1{i≤ ⌊sn⌋}. Now note that |wn,i(s)−wn,i(t)| ≤1 can only be non-zero for |⌊skn⌋ − ⌊ tkn⌋| ≤ 2kn|s−t| many i’s. Thus, dw n(s, t) = 1 knknX i=1|wn,i(s)−wn,i(t)|γ!1/γ ≤(2|s−t|)1/γ andW2andW3are satisfied for dw(s, t) =|s−t|1/γ. In combination with wn,i(s)≤ wn,i(t) for all s≤t, N[](ϵ,Wn,∥ · ∥ γ,n)≤ ⌊1/ε⌋1/γ which implies W1. Theorem 3.6 gives the claim. C.3. Asymptotic tightness of the multiplier empirical process LetVn,1, . . . , V n,knbe a triangular array of identically distributed random variables. De- fineGV n∈ℓ∞(S× F) by GV n(s, f) =1√knknX i=1Vn,iwn,i(s) f(Xn,i)−E[f(Xn,i)] . We will derive asymptotic tightness of the
multiplier empirical process GV nin terms of bracketing entropy conditions with respect to F. Here, we apply a coupling argument and apply Corollary B.5 to the coupled empirical process. We will argue similar to the proof of Theorem C.3. To avoid clutter we assume wn,i= 1, but the argument is similar for the general case. Set fV:R×X → R,(v, x)7→vf(x). Forf∈ Fdefine fV,+,n: (R× X)mn→R,(v, x)7→m−1/2 nmnX i=1fV(vi, xi). Define FV,+,n={fV,+,n:f∈ F} . We use Un,iresp. U∗ n,ifrom the maximal coupling paragraph but with Xn,ireplaced by ( Vn,i, Xn,i). Define rn=kn/(2mn) andGn,1,Gn,2∈ ℓ∞(F) by Gn,1(f) =1√rnrnX i=1fV,+,n(Un,2i−1)−E[f+,n(Un,2i−1)], Gn,2(f) =1√rnrnX i=1fV,+,n(Un,2i)−E[f+,n(Un,2i)]. andG∗ n,jasGn,jbut with Un,ireplaced by U∗ n,i. Note GV n=Gn,1+Gn,2. 56 Lemma C.4. Denote by β(V,X) n theβ-coefficients associated with the triangular array (Vn,i, Xn,i)If kn mnβ(V,X) n(mn)→0, andG∗ n,1,G∗ n,2are asymptotically tight then, GV nis asymptotically tight. Proof. It holds P∗ ∥Gn,1−G∗ n,1∥F̸= 0 ≤rnP(∃i:U∗ i̸=Ui) ≤kn/mnβ(V,X) n(mn)→0 and similarly for Gn,2. Thus, Gn,1−G∗ n,1P∗ →0 and if G∗ n,1,G∗ n,2are asymptotically tight, so areGn,1,Gn,2. Since finite sums of asymptotically tight sequences remain asymptotically tight we obtain GV n=Gn,1+Gn,2is asymptotically tight. Theorem C.5. For some γ >2andα <(γ−2)/2(γ−1), assume (i)∥F∥γ,∞<∞. (ii)supn∥Vn,1∥γ<∞ (iii) k1−α nβV n(kα n) +k1−α nβX n(kα n)→0where βV ndenotes the β-coefficients of the Vn,i. (iv)supnPkn i=1βX n(i)γ−2 γ<∞. (v)Rδn 0p lnN[](ϵ,Fn,∥ · ∥ γ,∞)dϵ→0for all δn→0. Then, GV nis asymptotically tight. Proof. First, we may without loss of generality assume E[f(Xn,i)] = 0. To see this define the function class Gn={gf:f∈ F} with gf:X × { 1, . . . , k n} →R,(x, i)7→f(x)−E[f(Xn,i)]. ForYn,i= (Xn,i, i) it holds E[gf(Yn,i)] = 0. Note that X × { 1, . . . , k n}remains Polish and the β-mixing coefficients with respect to Yn,iandXn,iare equal. 
Further, given an ε-bracket [ f,f] with respect to ∥ · ∥ γ,n, define u(x, i) =f(x)−E[f(Xn,i)], l(x, i) =f−E[f(Xn,i)]. Clearly, f∈[f,f] implies gf∈[l, u]. Further, |u−l| ≤ |f−f|+E[|f−f|] ≤ |f−f|+∥f−f∥γ,n ≤ |f−f|+ε. 57 Thus, ∥u−l∥γ,n≤2εandN[](2ε,Gn,∥·∥ γ,n)≤N[](ε,F,∥·∥ γ,n). Replacing FbyGnand Xn,ibyYn,iwe may assume E[f(Xn,i)] = 0. Next, recall the function class FV={fV:f∈ F} withfV:R×X → R,(v, x)7→vf(x) and set Yn,i= (Vn,i, Xn,i). Then, the empirical process GV n∈ℓ∞(F) is the empirical process associated to FVandYn,i. Again, R× X is Polish and the β-coefficients βY n associated to Yn,isatisfy βY n(m)≤βX n(m) +βV n(m) by Theorem 5.1 (c) of Bradley (2005). Set mn=kα n. Then, kn mnβY n(mn) =k1−α nβY n(kα n)→0. We apply Lemma C.4 to Yn,iandFV. Thus, it suffices to show that G∗ n,1andG∗ n,1are asymptotically tight. We will prove the statement for G∗ n,1. The arguments for G∗ n,1are the same. Similar to the proof of Theorem 3.4, it suffices to show lim sup δ→0lim sup n→∞E∗∥G∗ n,1∥FV,δ= 0, where FV,δ={fV−gV:f, g∈ F,∥f−g∥γ,∞< δ}. Recall G∗ n,1(fV) =1 rnknX i=1fV,+,n(U∗ n,2i−1) sinceE[f(Xn,i)] = 0. Note that for fixed n,G∗ n,1can be identified with an empirical process G∗ n,1∈ℓ∞(FV,+,n) indexed by the function class FV,+,n={fV,+,n:f∈ F} . Denote by ∥ · ∥ 2,n,+the∥ · ∥ 2,n-seminorm on FV,+,ninduced by U∗ n,i. Next, let f∈ F and an ε-bracket f∈[f,f] with respect to
∥ · ∥ γ,nbe given. Clearly fV,+,n∈[fV,+,n,fV,+,n]. Since E[f(Xn,i)] = 0 it holds ∥fV,+,n∥2 2,n,+=1 rnrnX i=1E[fV,+,n(U∗ n,i)2] =1 rnrnX i=1Var[fV,+,n(U∗ n,i)] =1 mnrnrnX i=1pnX j,k=1Cov[Vn,(i−1)mn+jf(Xn,(i−1)mn+j), Vn,(i−1)mn+kf(Xn,(i−1)mn+k)] ≲2 knknX i,j=1|Cov[f(Xn,i), f(Xn,j)] ≲∥f∥2 γ,n, 58 by Lemma B.7. This yields N[](Cε,FV,+,n,∥ · ∥ 2,n,+)≤N[](ε,F,∥ · ∥ γ,n), for some constant Cindependent of nand ∥fV,+,n−gV,+,n∥2,n,+≤C∥f−g∥γ,n≤Cδ, for all fV−gV∈ F V,δ.Without loss of generality, C= 1. Lastly, note that FV,+,nare envelopes for FV,+,n. Putting everything together, and in combination with Corollary B.5, we obtain E∥Gn,1∥FV,δ≲Z2δ 0q ln+N[](ϵ)dϵ+Bln+N[](δ)√rn+√rn∥FV,+,n1{FV,+,n> B}∥1,n+√rnN−1 [](ern) with N[](ϵ) =N[](ϵ,F,∥ · ∥ γ,∞). Let B=an√rn, with an→0 arbitrarily slowly. We obtain Bln+N[](δ)√rn=anln+N[](δ)→0, for every fixed δ >0 due to condition (v). Further, √rn∥FV,+,n1{FV,+,n> a n√rn}∥1,n≤√rnmn∥FV1{FV> a np rn/mn}∥1,n =p rnmn(rn/mn)1−γ∥FV∥γ γ,na1−γ n =p rnmn(rn/mn)1−γ∥Vn,1∥γ∥F∥γ γ,na1−γ n ≲s mγ n rγ−2 n∥F∥γ γ,na1−γ n ≲s mγ+γ−2 n kγ−2 n∥F∥γ γ,na1−γ n =q k2α(γ−1)−(γ−2) n ∥F∥γ γ,na1−γ n →0, foran→0 sufficiently slowly, where we used that Vn,iare identically distributed with supn∥Vn,i∥γ<∞in the third and fourth step, and our condition on αand (i) in the last. Combined, we obtain lim sup nE∥Gn,1∥FV,δ≲Zδ 0q ln+N[](ϵ)dϵ for all δ >0. With δn→0, we obtain lim sup nE∥Gn,1∥FV,δn= lim sup nZδn 0q ln+N[](ϵ,Fn,∥ · ∥ γ,∞)dϵ, and the right hand side converges to 0 as δ→0 by condition (v), completing the proof. 59 D. Proofs for the bootstrap We will use the notation introduced in Section 4 without further mentioning. D.1. Proof of Proposition 4.1 and a corollary Proof of Proposition 4.1. SinceGnis relatively compact, so is G⊗3 n(Van der Vaart and Wellner, 2023, Example 1.4.6). Because GnandG⊗3 nare relatively compact, both state- ments can be checked at the level of subsequences and we may assume that Gn→dG converges weakly to some tight Borel law. 
By (asymptotic) independence, we obtain G⊗3 n→dG⊗3. Then (ii) is equivalent to Gn,G(1) n,G(2) n →dG⊗3 inℓ∞(F)3. Thus, (ii) is equivalent to (a) of Lemma 3.1 in B¨ ucher and Kojadinovic (2019) and we obtain the claim. Corollary D.1. Assume that Gnsatisfies a relative CLTs and G(i) nare relatively com- pact. Then, G(1) nis a consistent bootstrap scheme if (i) all marginals of Gn,G(1) n,G(2) n satisfy a relative CLT. (ii) for n→ ∞ Cov G(i) n(s),G(i) n(t) −Cov [Gn(s),Gn(t)]→0 Cov G(i) n(s),G(j) n(t) →0. for all i, j= 0,1,2,i̸=jwhere we set G(0) n=Gn. Proof. We conclude by Proposition 4.1, i.e., we prove Gn,G(1) n,G(2) n →dG⊗3 n. SinceGnsatisfies a relative CLTs resp. G(i) nis relatively compact and satisfies marginal relative CLTs, any subsequence of ncontains a further subsequence such that both, Gn andG(i) n, converge weakly to some tight and measurable GP. By Proposition 2.10 and (ii) of Proposition 4.1, we may assume that GnandG(i) nconverge weakly to some tight and measurable GP. By the latter condition, such limiting GPs are equal in distribution, i.e.,Gn,G(i) n→dNwith Nsome tight and measurable GP. Denote by N(i)iid copies of N. It suffices to prove Gn,G(1) n,G(2) n →dN⊗3. By the middle condition, Gn(t1), . . . ,Gn(tk),G(1) n(tk+1), . . . ,G(1) n(tk+m),G(2) n(tk+m+1), . . . ,G(2) n(tm+k+l) 60 converges weakly to
N(t1), . . . , N (tk), N(1)(tk+1), . . . , N(1)(tk+m), N(2)(tk+m+1), . . . , N(2)(tm+k+l) . Thus, Gn,G(1) n,G(2) n →dN⊗3 (Van der Vaart and Wellner, 2023, Section 1.5 Problem 3.) which finishes the proof. D.2. Proof of Proposition 4.2 Lemma D.2. For every ϵ >0define νn(ε)as the maximal natural number such that max |i−j|≤ν(ε)|Cov[Vn,i, Vn,j]−1| ≤ϵ. Assume that for some γ >2 (i)supn,iE[|F(Xn,i)|γ]<∞, (ii)knβX n νn(ε)γ−2 γ→0and (iii) k−1 nPkn i,j=1|Cov[f(Xn,i), g(Xn,j)]| ≤Kfor all f, g∈ F. Then, Cov G(j) n(s, f),G(j) n(t, g) −Cov Gn(s, f),Gn(t, g) →0 for all (s, f),(t, g)∈S× F. Proof. Recall |Cov[f(Xn,i), g(Xn,i)]| ≤sup n,iE[|F(Xn,i)|γ]2βX n(|i−j|)γ−2 γ by Theorem 3 of Doukhan (2012). Under the assumptions X i,j≤kn |i−j|≤νn(ε)[Cov[Vn,i, Vn,j]−1]Cov[f(Xn,i), g(Xn,j)] ≲ϵX i,j≤kn |i−j|≤νn(ε)|Cov[f(Xn,i), g(Xn,j)]| ≲knϵ k−1 n X i,j≤kn |i−j|>νn(ε)[Cov[Vn,i, Vn,j]−1]Cov[f(Xn,i), g(Xn,j)] ≤k−1 nX i,j≤kn |i−j|>νn(ε)|Cov[f(Xn,i), g(Xn,j)]| ≲knβX n νn(ε)γ−2 γ→0 61 Thus, lim sup n k−1 nknX i,j=1[Cov[Vn,i, Vn,j]−1]Cov[f(Xn,i), g(Xn,j)] ≲ϵ, with constant independent of ϵ. Hence, lim sup n Cov G(j) n(s, f),G(j) n(t, g) −Cov Gn(s, f),Gn(t, g) ≤sup n,i,s|wn,i(s)|2" lim sup n 1 knknX i,j=1[Cov[Vn,i, Vn,j]−1]Cov[f(Xn,i), g(Xn,j)] # ≲ϵ. Since this is true for all ϵ >0, taking ϵ→0 yields the claim. Proof of Proposition 4.2. We check the conditions of Corollary D.1. By Theorem 3.6 Gn satisfies a relative CLT. As in the proof of Theorem C.3, there exists some α′<γ−2 2(γ−1)such that knβn(kα n)γ−2 γ→ 0. Taking the maximum of αandα′we may assume α′=α. By Theorem C.5 G(j) nis asymptotically tight hence relatively compact. To prove marginal relative CLTs, assume without loss of generality E[f(Xn,i)] = 0 and note that the β-coefficients associated to the triangular arrays (wn,i(s1)f1(Xn,i), . . . , V(2) n,iwn,i(sk)fk(Xn,i)) are bounded above by βX n+βV nby Theorem 5.1 (c) of Bradley (2005). 
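As a numerical aside, the covariance structure assumed for the multipliers $V_{n,i}$ (mean zero, unit variance, $\mathrm{Cov}[V_{n,i},V_{n,j}]$ close to 1 for small lags, as quantified by $\nu_n(\varepsilon)$ in Lemma D.2) is easy to realize and check. The following sketch is an illustration only, not the construction used above: it builds $m$-dependent multipliers as normalized moving averages of i.i.d. standard normals, for which $\mathrm{Cov}[V_i,V_{i+h}] = \max(0,\,1-|h|/m)$, so that one may take $\nu(\varepsilon)=\lfloor \varepsilon m\rfloor$.

```python
import numpy as np

def dependent_multipliers(n, m, rng):
    """m-dependent multipliers with E[V_i] = 0 and Var[V_i] = 1, built as
    moving averages of i.i.d. standard normals scaled by 1/sqrt(m)."""
    eps = rng.standard_normal(n + m - 1)
    csum = np.concatenate(([0.0], np.cumsum(eps)))
    # V_i = (eps_i + ... + eps_{i+m-1}) / sqrt(m)
    return (csum[m:] - csum[:-m]) / np.sqrt(m)

def multiplier_cov(lag, m):
    """Exact Cov[V_i, V_{i+lag}] for this construction: a triangular kernel."""
    return max(0.0, 1.0 - abs(lag) / m)

rng = np.random.default_rng(0)
m, n, reps = 50, 200, 5000
V = np.stack([dependent_multipliers(n, m, rng) for _ in range(reps)])
emp_var = np.mean(V[:, 0] ** 2)        # Monte Carlo estimate of Var[V_0], near 1
emp_cov1 = np.mean(V[:, 0] * V[:, 1])  # estimate of Cov[V_0, V_1], near 1 - 1/m
```

For this construction, $|\mathrm{Cov}[V_i,V_j]-1| = |i-j|/m \le \varepsilon$ whenever $|i-j| \le \varepsilon m$, matching the role of $\nu_n(\varepsilon)$ above.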
In particular, the assumptions on the β-coefficients’ decay rate given in Theorem 3.2 is satisfied for the marginals. Next, E[|V(k) n,iwn,i(s)f(Xn,i)|γ] =E[|V(k) n,i|γ]E[|wn,i(s)f(Xn,i)|γ] by independence of XnandV(k) n. In particular, sup n,iE[|V(k) n,iwn,i(s)f(Xn,i)|γ]<∞ for all ( s, f)∈S× F by assumption. Note that by the law of total covariances and E[V(k) n,i] = 0 Cov V(k) n,if(Xn,i), V(k) n,jg(Xn,j) = Eh V(k) n,iV(k) n,ji Cov f(Xn,i), g(Xn,j) ≲ Cov f(Xn,i), g(Xn,j) . Thus, (i) of Theorem 3.2 follows by the summability of the β-coefficients of Xn,i(Lemma B.7). and Theorem 3.2 implies marginal relative CLTs of ( Gn,G(1) n,G(2) n). 62 By the law of total covariances, independence of V(k) n,XnandE[V(k) n,i] = 0 we derive Cov wn,i(s)f(Xn,i), V(k) n,jwn,j(t)g(Xn,j) = 0 Cov V(k) n,iwn,i(s)f(Xn,i), V(l) n,jwn,j(t)g(Xn,j) = 0 for all k̸=land ( s, f),(t, g)∈S× F. By the above computation we obtain Cov[G(k) n(s, f),G(l) n(t, g)] =Cov[Gn(s, f),G(k) n(t, g)] = 0 fork̸=l. Lastly, Cov[G(1) n(s, f),G(1) n(t, g)]−Cov[Gn(s, f),Gn(t, g)]→0 by Lemma D.2. Then, Corollary D.1 provides the claim. D.3. Proof of Proposition 4.4 Observe that for every ϵ >0, Var[bG∗ n(s, f)−G∗ n(s, f)] =Var" 1√nnX i=1Vn,iwn,i(s)(µn(i, f)−bµn(i, f))# =1 nnX i=1nX j=1Cov[Vn,i, Vn,j]wn,i(s)wn,j(s)Cov[(µn(i, f)−bµn(i, f)),(µn(j, f)−bµn(j, f))] ≲νn(ϵ) sup iVar[bµn(i, f)] +ϵ, using the same arguments as in the proof of Proposition
4.2. Taking ϵ→0 and our assumption on the variance give Var[bG∗ n(s, f)−G∗ n(s, f)]→p0. Since also E[bG∗ n(s, f)− G∗ n(s, f)] = 0, it follows bG∗ n(s, f)−G∗ n(s, f)→p0 for every s, f. Since bG∗ n−G∗ nis relatively compact, this implies ∥bG∗ n−G∗ n∥S×F→p0. D.4. Proof of Proposition 4.5 Similar to the proof of Proposition 4.4, we have Var[G∗ n(s, f)−G∗ n(s, f)] =Var" 1√nnX i=1Vn,iwn,i(s)(µn(i, f)−µn(i, f))# ≲νn(ϵ) sup i|µn(i, f)−µn(i, f)|2+ϵ, Now conclude by the same arguments. 63 D.5. Proof of Proposition 4.6 Define Z∗ nas the Gaussian process corresponding to G∗ nandq∗ n,αas the α-quantile of ∥Z∗ n∥S×F. For every ϵ >0, Proposition 4.4 yields lim sup n→∞P(∥bG∗ n∥S×F≤q∗ n,α−ϵ) ≤lim sup n→∞P(∥G∗ n∥S×F≤q∗ n,α) + lim sup n→∞P(∥bG∗ n−G∗ n∥S×F≥ϵ) ≤lim sup n→∞P(∥G∗ n∥S×F≤q∗ n,α), Taking ϵ→0 implies lim sup n→∞P(∥bG∗ n∥S×F<q∗ n,α)≤lim sup n→∞P(∥G∗ n∥S×F≤q∗ n,α). Next, observe that Giessing (2023, Proposition 1) and our assumption give lim inf n→∞Var[∥Z∗ n∥S×F]≳lim inf n→∞inf (s,f)∈S×FVar[G∗ n(s, f)]>0. Letnkbe any subsequence of such that Z∗ nkconverges weakly to a Gaussian process Z∗ with α-quantile qα. Then the above implies that q∗ nk,α→qα>0, and P(∥Z∗∥S×F= qα) = 0. Thus, Proposition 2.14 gives lim sup n→∞P(∥G∗ n∥S×F≤q∗ n,α)≤lim sup n→∞P(∥Z∗ n∥S×F≤q∗ n,α) = 1−α. We have shown that lim sup n→∞P(∥bG∗ n∥S×F<q∗ n,α)≤1−α, which implies bq∗ n,α≥q∗ n,αwith probability tending to 1. This further implies lim inf n→∞P(∥Gn∥S×F≤bq∗ n,α)≥lim inf n→∞P(∥Gn∥S×F≤q∗ n,α)≥lim inf n→∞P(∥Zn∥S×F≤q∗ n,α), usingGn↔dZnand a similar continuity argument as above. Decompose G∗ n(s, f)−G∗ n(t, g) =Gn(s, f)−Gn(t, g) + [G∗ n(s, f)−Gn(s, f)−G∗ n(t, g) +Gn(t, g)], and observe that Gn(s, f)−Gn(t, g) and [ G∗ n(s, f)−Gn(s, f)−G∗ n(t, g) +Gn(t, g)] are uncorrelated for every ( s, f),(t, g)∈ S × F . Thus, Var[Zn(s, f)−Zn(t, g)] =Var[Gn(s, f)−Gn(t, g)] ≤Var[Gn(s, f)−Gn(t, g)] +Var[G∗ n(s, f)−G∗ n(t, g)−Gn(s, f) +Gn(t, g)] =Var[G∗ n(s, f)−G∗ n(t, g)] =Var[Z∗ n(s, f)−Z∗ n(t, g)]. 
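To illustrate the quantile mechanism behind Propositions 4.4–4.6, the sketch below computes a multiplier-bootstrap estimate of the $(1-\alpha)$-quantile of a sup statistic. It is a simplified illustration, not the scheme analyzed above: the multipliers are i.i.d. Gaussian (the $m_n=1$ case) and the weight grid is hypothetical; with constant weights the statistic reduces to the studentized sample mean.

```python
import numpy as np

def bootstrap_sup_quantile(X, weights, n_boot, alpha, rng):
    """(1 - alpha)-quantile of the multiplier-bootstrap sup statistic
    sup_s |n^{-1/2} sum_i V_i w_i(s) (X_i - Xbar)| over a grid of s values.
    `weights` has shape (n_grid, n)."""
    n = X.shape[0]
    Xc = X - X.mean()                   # recentering at the estimated mean
    sups = np.empty(n_boot)
    for b in range(n_boot):
        V = rng.standard_normal(n)      # i.i.d. Gaussian multipliers
        G = weights @ (V * Xc) / np.sqrt(n)
        sups[b] = np.abs(G).max()
    return np.quantile(sups, 1.0 - alpha)

rng = np.random.default_rng(1)
n = 400
X = rng.standard_normal(n)
W = np.ones((25, n))                    # constant weights: sup reduces to |mean|
q = bootstrap_sup_quantile(X, W, 1000, 0.05, rng)
```

With constant weights and standard normal data, `q` lands near the Gaussian value $1.96\,\hat\sigma$; the point of the propositions above is that such bootstrap quantiles remain asymptotically valid for the dependent, non-stationary setting.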
64 Since both Zn(s, f) andZ∗ n(s, f) are asymptotically tight, they are separable for large enough n. The version of Fernique’s inequality given by Ledoux and Talagrand (1991, eq. 3.11 and following paragraph) implies lim inf n→∞P(∥Zn∥S×F≤q∗ n,α)≥lim inf n→∞P(∥Z∗ n∥S×F≤q∗ n,α) = 1−α. Altogether, we have shown that lim inf n→∞P(∥Gn∥S×F≤bq∗ n,α)≥1−α, as claimed. D.6. Proof of Proposition 4.7 Proposition 4.4 and (2) imply that bq∗ n,α≥tnwith probability tending to 1. Then every subsequence of the sets Sn={z:|z| ≤tn}converges to R, whose boundary has probability zero under every tight Gaussian law. Proposition 2.14 and Theorem 3.6 give lim inf n→∞P(∥Gn∥S×F≤bq∗ n,α)≥lim inf n→∞P(∥Gn∥S×F≤tn)≥lim inf n→∞P(∥Zn∥S×F≤tn) = 1 , as claimed. D.7. A useful lemma Lemma D.3. LetVn,1, . . . , V n,nbe a sequence of mn-dependent random variables with mn=o(n1/2),E[Vn,i] = 0 ,Var[Vn,i] = 1 , and supi,nE[|Vn,i|a]<∞for any a∈N. LetFnbe a sequence of functions satisfying conditions (i) and (iv) of Theorem 3.6 with γ= 1. Then sup t∈T 1 nnX i=1Vn,iE[fn,t(Xi)] →p0, Proof. LetN(ε) be the number of ϵ-brackets of Fnwith respect to the ∥ · ∥ n,1-norm. As in the proof of Theorem 3.2, we can construct
a Cε-bracketing with respect to the ∥ · ∥ n,1-norm for the class Gn={gn,t(v, i) =vE[fn,t(Xi)]:t∈T}, for some C <∞and size N(ε). Let Gn,k= [g(k) n,g(k) n] be the k-thCϵ-bracket. Recall that Pngn,t=1 nnX i=1gn,t(Vn,i, i) =1 nnX i=1Vn,iE[fn,t(Xi)], 65 and Pgn,t=1 nnX i=1E[gn,t(Vn,i, i)] =1 nnX i=1E[Vn,i]E[fn,t(Xi)] = 0 . We thus get sup t∈T|Pngn,t| = sup t∈T|(Pn−P)gn,t| ≤max 1≤k≤N(ε)|(Pn−P)g(k) n|+ sup g∈Gn,k|(Pn−P)g−(Pn−P)g(k) n| ≤max 1≤k≤N(ε)|(Pn−P)g(k) n|+ sup g∈Gn,k|Pn(g−g(k) n)|+ sup g∈Gn,k|P(g−g(k) n)| ≤max 1≤k≤N(ε)|(Pn−P)g(k) n|+|Pn(g(k) n−g(k) n)|+ sup g∈Gn,k|P(g−g(k) n)| ≤max 1≤k≤N(ε)|(Pn−P)g(k) n|+|(Pn−P)(g(k) n−g(k) n)|+ 2 sup g∈Gn,k|P(g−g(k) n)| ≤3 max 1≤k≤N(ε)|(Pn−P)g(k) n|+ 2Cε, where in the last step, we assumed without loss of generality that any upper bound of the brackets also appears as a lower bound. By the moment condition on Vn,i, we have max 1≤i≤n|Vn,i| ≤anwith probability tending to 1 for any an→ ∞ arbitrarily slowly. On this event, Bernstein’s inequality Lemma B.2 with m=mn+ 1 gives E max 1≤k≤N(ε)|(Pn−P)g(k) n| ≲r mnlnN(ε) n+anmnlnN(ε) n, where we used that 1 nnX i=1nX j=1Cov[Vn,i, Vn,j]E[fn,t(Xi)]E[fn,t(Xj)]≲1 nnX i=1nX j=1|Cov[Vn,i, Vn,j]|≲mn, where the constant depends on the envelope of Fn. Since ln N(ε)≤Cε−2, we can choose ε=n−1/5to get E max 1≤k≤N(ε)|(Pn−P)g(k) n| +ε→0. E. Proofs for the applications E.1. Proof of Corollary 5.1 Theorem 3.2 implies marginal relative CLTs for every s∈[0,1]. For the uniform result, we apply Theorem 3.6 holds. Condition (i)–(iii) follow immediately from the assump- tions. For (iii), consider the class of functions Fn= (j, x)7→1 bKj−sn nb x:s∈[0,1] . 66 Since the kernel is L-Lipschitz, we have sup j 1 bKj−sn nb −1 bKj−s′n nb ≤L|s−s′| b2, which also implies 1 nnX j=1E 1 bKj−sn nb Xj−1 bKj−s′n nb Xj γ!1/γ ≤C|s−s′| b2, forC=LsupjE[|Xi|γ]1/γ<∞,γ= 5. Let sk=kϵb2/Cfork= 1, . . . , N (ε) with N(ε) =⌈(ϵb2/C)−1⌉. 
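(As a numerical aside, the Lipschitz bound driving this construction can be checked directly. The sketch below is an illustration only, assuming a Gaussian kernel, whose Lipschitz constant is $L=\max|K'|=e^{-1/2}/\sqrt{2\pi}$; it verifies $\sup_j |b^{-1}K((j-sn)/(nb)) - b^{-1}K((j-s'n)/(nb))| \le L|s-s'|/b^2$, which is exactly what makes the grid $s_k = k\epsilon b^2/C$ fine enough.)

```python
import numpy as np

def gauss_kernel(u):
    """Standard Gaussian kernel K; max|K'| = exp(-1/2)/sqrt(2*pi) =: L."""
    return np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)

def class_sup_distance(s, s2, n, b):
    """sup_j |b^{-1} K((j - s*n)/(n*b)) - b^{-1} K((j - s2*n)/(n*b))|."""
    j = np.arange(1, n + 1)
    diffs = np.abs(gauss_kernel((j - s * n) / (n * b))
                   - gauss_kernel((j - s2 * n) / (n * b)))
    return np.max(diffs) / b

L = np.exp(-0.5) / np.sqrt(2.0 * np.pi)   # Lipschitz constant of the Gaussian kernel
n, b = 1000, 0.1
for s, s2 in [(0.40, 0.43), (0.70, 0.75)]:
    # the class distance is dominated by L |s - s'| / b^2, as claimed
    assert class_sup_distance(s, s2, n, b) <= L * abs(s - s2) / b ** 2
```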
Then the functions Kk(j) =1 bKj−skn nb −ϵ/2, Kk(j) =1 bKj−skn nb +ϵ/2, form an ϵ-bracketing of Fnwith respect to the ∥ · ∥ n,γ-norm. Thus, Zδn 0q lnN[](ϵ,F,∥ · ∥ n,γ)dϵ≤Zδn 0p −ln(ϵb2/C)dϵ≲δnp logδ−1 n, which implies that condition (iii) of Theorem 3.6 holds. E.2. Proof of Corollary 5.2 We first apply Proposition 4.4. By assumption, the function s7→µb(sn) =1 nbnX j=1Kj−sn nb′ µ(j), has two continuous derivatives, uniformly bounded by some C∈(0,∞). Further, the first two derivatives of Kare Lipschitz. Then the same arguments as in the proof of Corollary 5.1 show that sup s∈[0,1]|bµ′ b(sn)|+ sup s∈[0,1]|bµ′′ b(sn)|+ sup s∈[0,1]|bµ′′′ b(sn)| ≤4C. with probability tending to 1. Thus, the functions bµb(sn) are included in the smooth- ness class C3 4C[0,1] with probability tending to 1. This class has sup-norm log-covering numbers (and, hence, ∥·∥n,γ-log-bracketing numbers for any n, γ) of order ε−3/2(Van der Vaart and Wellner, 2023, Theorem 2.7.1). Combining this with the arguments of the previous propostion, we get that Fn= (j, x)7→1 bKj−sn nb (x−f(j/n)):s∈[0,1], f∈C3 4C[0,1] 67 has Zδn 0q lnN[](ϵ,Fn,∥ · ∥ n,γ)dϵ≲Zδn 0ε−3/4dϵ≲δ1/4 n→0, for every δn→0. For µ∗ b(sn) =1 nbnX j=1Vn,iKj−sn nb (Xi−µb(j)), Proposition 4.2 shows that√n(bµ∗ b(sn)−µ∗ b(sn)) satisfies a relative CLT in ℓ∞([0,1]). Since further, sup jVar [bµb(j)] =O(n−1), the conditions of Proposition 4.4 hold and √nsup
s∈[0,1]|bµ∗ b(sn)−µ∗ b(sn)| → p0. Now consider the process√nµ∗ b(sn). As in the proof of Proposition 4.6, we have Var√nµ∗ b(sn) −Var√nbµb(sn) =σ2 n(s)→ ∞ , for some s∈[0,1], which implies that Var√nµ∗ b(sn) /σ2 n(s)→1. The univariate relative CLT (Theorem 3.2) with α >1/3 gives √nµ∗ b(sn)/σn(s)→dN(0,1). Since sups∈[0,1]|µ∗ b(sn)| ≥ |µ∗ b(sn)|for any s∈[0,1], this implies that P(sups∈[0,1]√n|µ∗ b(sn)|> tn)→1 for some sequence tn→ ∞ . Now the result follows from Proposition 4.7. E.3. Proof of Corollary 5.5 Observe that Tn= sup f∈F,s∈S 1√nnX i=1wn,i(s)(f(Zi)−E[f(Zi)]) , The relative CLT (Theorem 3.6) gives 1√nnX i=1wn,i(s)(f(Zi)−E[f(Zi)])↔dZn(s, f) in ℓ∞(S × F ), 68 where {Zn(s, f):f∈ F} is a relatively compact, mean-zero Gaussian process. Under the null hypothesis, E[f(Zi)] = 0 for all f∈ F, and the relative bootstrap CLT (Theorem 3.6 and Example 4.3) gives 1√nnX i=1Vn,iwn,i(s)(f(Zi)−E[f(Zi)])↔dZn(s, f) in ℓ∞(S × F ), The relative continuous mapping theorem now implies Tn↔dsup f∈F,s∈S|Zn(s, f)| and T∗ n↔dsup f∈F,s∈S|Zn(s, f)|, which proves that P(Tn> c∗ n(α))→αunder H0. Under the alternative, we have Tn= sup f∈F,s∈S 1√nnX i=1wn,i(s)(f(Zi)−E[f(Zi)]) +1√nnX i=1wn,i(s)E[f(Zi)] ≥sup f∈F,s∈S 1√nnX i=1wn,i(s)E[f(Zi)] −sup f∈F,s∈S 1√nnX i=1wn,i(s)(f(Zi)−E[f(Zi)]) , which implies Tn/√n≥sup f∈F 1 nnX i=1wn,i(s)E[f(Zi)] ≥δ >0, with probability tending to 1. For the bootstrap statistic, T∗ n√n≤sup f∈F,s∈S 1 nnX i=1Vn,iwn,i(s)(f(Zi)−E[f(Zi)]) + sup f∈F,s∈S 1 nnX i=1Vn,iwn,i(s)E[f(Zi)] . The first term on the right converges to 0 in probability by Example 4.3, the second by Lemma D.3. Thus, T∗ n/√n→p0, which implies c∗ n(α)/√n→0 and the claim follows. F. Auxiliary results Proof of Lemma 2.4. For any M > 0 and t1, . . . , t n∈Tit holds P( max 1≤i≤n|G(ti)|< M) =nY i=1P(|G(ti)|< M) = [ϕ(M)−ϕ(−M)]n where ϕdenotes the standard normal CDF. For n→ ∞ , i.e., if Tis infinite, the right side converges to zero. 
Thus, $P^*(\|G\|_T < M) = 0$ for all $M$. In other words, $G$ does not have bounded sample paths.

F.1. Bracketing numbers under non-stationarity

Fix some triangular array $X_{n,1},\dots,X_{n,k_n}$ of random variables with values in a Polish space $\mathcal{X}$. Denote by $\mathcal{F}$ a set of measurable functions $f\colon \mathcal{X}\to\mathbb{R}$.

Lemma F.1. Assume that there exist some probability measure $Q$ on $\mathcal{X}$ and a constant $K\in\mathbb{R}$ such that $P^{X_{n,i}}(A)\le K\,Q(A)$ for all $i$ and all measurable sets $A$. Then
\[ \|f\|_{\gamma,n} \le \sup_{n\in\mathbb{N},\,i\le k_n} \|f(X_{n,i})\|_\gamma \le K^{1/\gamma}\,\|f\|_{L_\gamma(Q)} \]
for all $\gamma>0$. Hence,
\[ N_{[\,]}\bigl(\epsilon,\mathcal{F},\|\cdot\|_{\gamma,n}\bigr) \le N_{[\,]}\bigl(K^{-1/\gamma}\epsilon,\mathcal{F},\|\cdot\|_{L_\gamma(Q)}\bigr). \]

Proof. The condition $P^{X_{n,i}}(A)\le K\,Q(A)$ for all measurable sets $A$ is equivalent to every $P^{X_{n,i}}$ being absolutely continuous with respect to $Q$ with Radon–Nikodym derivative bounded by $K$, i.e.,
\[ \frac{\partial P^{X_{n,i}}}{\partial Q} \le K \quad (\text{up to null sets of } Q) \]
for all $i$. For all $i$ it holds that
\[ \|f(X_{n,i})\|_\gamma^\gamma = \int |f(X_{n,i})|^\gamma\,dP = \int |f|^\gamma\,\frac{\partial P^{X_{n,i}}}{\partial Q}\,dQ \le \int |f|^\gamma K\,dQ = K\,\|f\|_{L_\gamma(Q)}^\gamma. \]
This proves the claim.
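Lemma F.1 can be checked on a concrete instance (an illustration with hypothetical choices, not part of the result): take $Q$ the uniform law on $[0,1]$ and $P^{X_{n,i}}$ the Beta(2,2) law, whose density $6x(1-x)$ is bounded by $K = 3/2$, with $f(x)=x$ and $\gamma=2$. Both moments are available in closed form, so the bound $\|f(X_{n,i})\|_\gamma \le K^{1/\gamma}\|f\|_{L_\gamma(Q)}$ can be verified exactly.

```python
# Dominating measure Q = Uniform(0,1); law of X_{n,i} = Beta(2,2),
# whose density 6x(1-x) is bounded by K = 1.5 on [0,1].
K, gamma = 1.5, 2.0

def f_moment_beta22():
    """E|f(X)|^gamma for f(x) = x and X ~ Beta(2,2): E[X^2] = 1/4 + 1/20."""
    return 0.25 + 0.05

def f_moment_uniform():
    """||f||_{L_gamma(Q)}^gamma for f(x) = x: integral of x^2 over [0,1] = 1/3."""
    return 1.0 / 3.0

lhs = f_moment_beta22() ** (1.0 / gamma)                       # ||f(X_{n,i})||_gamma
rhs = K ** (1.0 / gamma) * f_moment_uniform() ** (1.0 / gamma) # K^{1/gamma} ||f||_{L_gamma(Q)}
assert lhs <= rhs   # Lemma F.1: the dominated-measure norm bound holds
```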
arXiv:2505.02302v1 [math.PR] 5 May 2025

Modelling with given reliability and accuracy in the space $L_p(T)$ of stochastic processes from $\mathrm{Sub}_\varphi(\Omega)$ decomposable in series with independent elements

Applied Statistics. Actuarial and Financial Mathematics. No. 2, 13–23, 2012

Oleksandr Mokliachuk¹,*
¹National Technical University of Ukraine "Igor Sikorsky Kyiv Polytechnic Institute", Kyiv, Ukraine
*Corresponding email: omoklyachuk@gmail.com

Abstract. Models that approximate stochastic processes from $\mathrm{Sub}_\varphi(\Omega)$ with given reliability and accuracy in $L_p(T)$ for some given $\varphi(t)$ are considered. We also study the construction of models of processes which can be decomposed into series with approximated elements. The Karhunen–Loève model is considered as an example of the application of the proposed construction.

Keywords: models of stochastic processes, Karhunen–Loève model, sub-Gaussian processes.

AMS 2010 subject classifications: Primary 60G07; Secondary 62M15, 46E30.

1 Introduction

Let $(\Omega,\mathcal{F},P)$ be a standard probability space, let $L_2^o(\Omega)$ be the space of centered random variables with finite second moment, $E\xi=0$, $E\xi^2<\infty$, and let $\{\Lambda,\mathcal{U},\mu\}$ be a measurable space with a $\sigma$-finite measure $\mu$. Let $L_p(\Lambda,\mu)$ be the Banach space of functions integrable to the power $p$ with respect to the measure $\mu$.

Definition 1. [1] A random variable $\xi$ is called sub-Gaussian if there exists $a\ge 0$ such that for all $\lambda\in\mathbb{R}$ the following inequality holds:
\[ E\exp\{\lambda\xi\} \le \exp\Bigl\{\frac{a^2\lambda^2}{2}\Bigr\}. \]
The characteristic of the random variable $\xi$ defined as
\[ \tau(\xi) = \inf\Bigl\{ a\ge 0 : E\exp\{\lambda\xi\} \le \exp\Bigl\{\frac{a^2\lambda^2}{2}\Bigr\},\ \lambda\in\mathbb{R} \Bigr\} \]
is called the sub-Gaussian standard of the random variable $\xi$.

Definition 2. [1] A continuous even convex function $\varphi=\{\varphi(x), x\in\mathbb{R}\}$ is called an N-Orlicz function if it is increasing on the domain $x>0$, $\varphi(0)=0$, $\varphi(x)>0$ for $x\ne 0$, and the following conditions hold:
\[ \lim_{x\to 0}\frac{\varphi(x)}{x} = 0 \quad\text{and}\quad \lim_{x\to\infty}\frac{\varphi(x)}{x} = \infty. \]

Definition 3. [1] Let $\varphi=\{\varphi(x), x\in\mathbb{R}\}$ be an N-Orlicz function.
The function
\[ \varphi^*(x) = \sup_{y\in\mathbb{R}} \bigl(xy - \varphi(y)\bigr) \]
is called the Young–Fenchel transform of the function $\varphi$.

Definition 4. [1] Let $\varphi$ be an Orlicz function such that
\[ \liminf_{x\to 0} \frac{\varphi(x)}{x^2} = c > 0. \]
A random variable $\xi$ belongs to the space $\mathrm{Sub}_\varphi(\Omega)$ if $E\xi=0$, $E\exp\{\lambda\xi\}$ exists for all $\lambda\in\mathbb{R}$, and there exists a number $a>0$ such that for all $\lambda\in\mathbb{R}$ the inequality $E\exp\{\lambda\xi\} \le \exp\{\varphi(\lambda a)\}$ is true.

Definition 5. [2] Let $X=\{X(t), t\in[0,T]\}$ be a stochastic process from the space $\mathrm{Sub}_\varphi(\Omega)$. A stochastic process $X_N=\{X_N(t), t\in[0,T]\}$ from the space $\mathrm{Sub}_\varphi(\Omega)$ is called a model that approximates the stochastic process $X$ with given reliability $1-\alpha$ and accuracy $\delta$ in the space $L_p(0,T)$ if
\[ P\Bigl(\Bigl(\int_0^T |X(t)-X_N(t)|^p\,dt\Bigr)^{1/p} > \delta\Bigr) \le \alpha. \]

The following theorem is proved in [3].

Theorem 1. [3] Let $\{T,\Lambda,M\}$ be a measurable space, let $X=\{X(t), t\in T\}\subset \mathrm{Sub}_\varphi(\Omega)$, and let $\tau_\varphi(t)=\tau_\varphi(X(t))$. Let the integral
\[ \int_T \bigl(\tau_\varphi(t)\bigr)^p\,d\mu(t) = c < \infty \]
exist. Then the integral $\int_T |X(t)|^p\,d\mu(t) < \infty$ exists with probability 1 and
\[ P\Bigl(\int_T |X(t)|^p\,d\mu(t) > \delta\Bigr) \le 2\exp\Bigl\{-\varphi^*\Bigl(\frac{\delta^{1/p}}{c^{1/p}}\Bigr)\Bigr\} \]
for all
\[ \delta > c\,\Bigl(f\Bigl(\frac{c^{1/p}\,p}{\delta^{1/p}}\Bigr)\Bigr)^p, \]
where $f$ is a function such that $\varphi(u)=\int_0^u f(v)\,dv$ for all $u>0$ and $\varphi^*$ is the Young–Fenchel transform of $\varphi$.

The following theorem is a direct corollary of the previous one.

Theorem 2. Let $\{T,\Lambda,M\}$ be a measurable space, let $X=\{X(t), t\in T\}\subset \mathrm{Sub}_\varphi(\Omega)$, and let $X_N$ be a model of the process $X$. Let $f$ be
a function such that φ(u) =Ru 0f(v)dv∀u >0. Let cN=Z T(τφ(X(t)−XN(t)))pdµ(t)<∞. The model XN(t)approximates the process X(t)with reliability 1−αand accuracy δin the Lp(T)space, if cN≤δ φ∗(−1) ln2 αp (1) and δ > c N f c1/p Np δ1/p!!p , (2) where φ∗is a Young-Fenchel transform of the function φ. Proof. Since|X(t)−XN(t)| ∈Subφ(Ω), Theorem 1 implies PZ T|X(t)|pdµ(t)> δ ≤2 exp( −φ∗ δ c1/p!) , therefore, PZ T|X(t)−XN(t)|pdµ(t)> δ ≤2 exp( −φ∗ δ cN1/p!) , so inequality (2) has to be true and 2 exp( −φ∗ δ cN1/p!) ≤α, whence φ∗ δ cN1/p! ≥ln2 α, and, finally, c1/p N≤δ1/p φ∗(−1)(ln2 α). 3 2 Estimation of reliability and accuracy of mod- elling of stochastic process in Lp(T)spaces Based on the Theorem 2, for functions φ(t) of specified forms the following theorems can be formulated. Theorem 3. Let a stochastic process X={X(t), t∈[0, T]}belong to the space Subφ(Ω), let φ(t) =tγ γ for1< γ≤2. Let cN=ZT 0(τφ(X(t)−XN(t)))pdµ(t)<∞. A model XN(t)approximates the process X(t)with reliability 1−αand accuracy δin the Lp(T)space, if  cN≤δ/(βln2 α)p/β cN< δ/pp(1−1/γ), where βis a number such that 1 β+1 γ= 1. Proof. The first inequality follows from Theorem 2 right away. Indeed, as φ(t) = tγ/γ, then φ∗(t) =tβ/β, which gives us the result required. Since α∈(0,1), then ln2 α>0. Let us consider the second inequality. Again, since φ(t) =tγ γ, then f(t) = tγ−1,t >0, and δ > c N  c1/p Np δ1/p!γ−1 p =cγ Npp(γ−1) δγ−1, whence cγ N<δγ pp(γ−1). Theorem 4. Let a stochastic process X={X(t), t∈[0, T]}belong to Subφ(Ω), let φ(t) =( t2 γ, t <1 tγ γ, t≥1, where γ >2. Let cN=ZT 0(τφ(X(t)−XN(t)))pdµ(t)<∞. 4 A model XN(t)approximates the process X(t)with reliability 1−αand accuracy δin the Lp(T)space, if cN≤δ/(βln2 α)p/β cN< δ/pp(1−1/γ), andβis a number such that 1 β+1 γ= 1. Proof. For the function f(t) we have f(−1)(t) =(γ 2t, t <2 γ t1 γ−1, t≥1. Let us consider φ∗(t). Ift >1, we have φ∗(t) =Z2/γ 0γ 2udu+Z1 2/γdu+Zt 1u1 γ−1du=γ 2u2 2 γ/2 0+ (1−2 γ) +u1 γ−1+1 t 1= =γ 21 22 γ2 + 1−2 γ+tβ β−1 β=tβ β. 
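The conjugacy $\varphi(t)=t^\gamma/\gamma \Leftrightarrow \varphi^*(t)=t^\beta/\beta$ with $1/\beta+1/\gamma=1$, used in Theorems 3 and 4, can be verified numerically by computing the supremum in the Young–Fenchel transform over a grid. A minimal sketch (the grid range and tolerance are ad hoc; the range assumes $x\le 2$ so that the maximizer $y^*=x^{1/(\gamma-1)}$ stays inside):

```python
import numpy as np

def young_fenchel(phi, x, y_grid):
    """Numerical Young-Fenchel transform: phi*(x) = sup_y (x*y - phi(y))."""
    return np.max(x * y_grid - phi(y_grid))

gamma = 1.5
beta = gamma / (gamma - 1.0)            # conjugate exponent: 1/beta + 1/gamma = 1
phi = lambda t: np.abs(t) ** gamma / gamma
y = np.linspace(0.0, 10.0, 1_000_001)   # for x > 0 the supremum sits at y >= 0

for x in (0.5, 1.0, 2.0):
    # grid supremum matches the closed form phi*(x) = x^beta / beta
    assert abs(young_fenchel(phi, x, y) - x ** beta / beta) < 1e-6
```

The same routine applied to the piecewise $\varphi$ of Theorem 4 reproduces $\varphi^*(t)=t^\beta/\beta$ for $t>1$, in agreement with the computation above.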
This result and Theorem 2 induce the first inequality of this Theorem 4. Let us consider the second inequality now. Let us first analyze the case when c1/p Np δ1/p>1. In such a case, f(x) =xγ−1, and δ > c N  c1/p Np δ1/p!γ−1 p , that is, cN<δ pp(1−1/γ), whenceδ pp< cN<δ pp(1−1/γ). For the case c1/p Np δ1/p>1 we have f(x) =x2 γ, and ( cN<δ pp cN<δ pp/2γ 2p/2. 5 Since γ >2 and p >1, we obtain cN<δ pp. Finally, we have cN<δ pp(1−1/γ). 3 Construction models of stochastic processes from Sub φ(Ω)that can be represented as a se- ries with independent elements. Assume we can represent a stochastic process X={X(t), t∈[0, T]}in the form of series X(t) =∞X k=1ξkak(t), (3) where ξk∈Subφ(Ω),ξkare independent, and the next property is true for this series:∞X k=1τ2 φ(ξk)a2 k(t)<∞. Usually, a sum of first Nelements of this representation is used as a model of such a process. However, functions ak(t) can
often be unfindable explicitly. In this case ˆ ak(t), approximations of function ak(t), may be used as elements of the model of a stochastic process after taking into account the impact of such approximation an accuracy and reliability of the process approximation with the model. Definition 6. We will call a model of a stochastic process X(t) the following expression: XN(t) =NX k=1ξkˆak(t), where ˆ ak(t) are approximations of function ak(t),ξk∈Subφ(Ω), ξkare inde- pendent. Let us introduce the following notations: δk(t) =|ak(t)−ˆak(t)|; ∆N(t) =|X(t)−XN(t)|= NX k=1ξkδk(t) +∞X k=N+1ξkak(t) . We will say that a model XNapproximates a stochastic process Xwith given accuracy and reliability in the space Lp[0, T], if P   ZT 0(∆N(t))pdt!1 p > δ  ≤α. Let us formulate a theorem for simulation of such processes in Lp(T). 6 Theorem 5. Let s stochastic process X={X(t), t∈[0, T]}belong to Subφ(Ω), letXNbe a model of the process X. Assume cN=ZT 0 τφ NX k=1ξkδk(t) +∞X k=N+1ξkak(t)!!p dt <∞. The model XN(t)approximates the stochastic process X(t)with reliability 1−α and accuracy δin the space Lp(T), when ( cN≤δ/(φ∗(−1)(ln2 α))p δ > c N(f(c1/p Np δ1/p))p, the function fis provided in Theorem 2, φ∗is a Young-Fenchel transform of φ. Proof. The proof of this Theorem 5 is similar to the proof of Theorem 2. For theorems that will follow we require the following statement. Theorem 6. [1] Let ξ1, ξ1, . . . , ξ n∈Subφ(Ω)be independent random variables. If a function φ(|x|1/s),x∈Ris convex for s∈(0,2], then τs φ nX k=1ξk! ≤nX k=1τs φ(ξk). Theorem 7. Let a stochastic process X={X(t), t∈[0, T]}belong to Subφ(Ω), φ(t) =( t2 γ, t <1 tγ γ, t≥1, where γ >2. Assume cN=ZT 0 NX k=1τ2 φ(ξk)δ2 k(t) +∞X k=N+1τ2 φ(ξk)a2 k(t)!p/2 dt <∞. The model XN(t)approximates the stochastic process X(t)with reliability 1−α and accuracy δin the space Lp(T), if cN≤δ/(βln2 α)p/β cN< δ/pp(1−1/γ), where βis such that 1/β+ 1/γ= 1. Proof. 
From Theorem 6 for s= 2 the next inequalities follow: cN=ZT 0(τφ(∆N(t)))pdµ(t) = =ZT 0 τφ NX k=1ξkδk(t) +∞X k=N+1ξkak(t)!!p dt≤ ≤ZT 0 NX k=1τ2 φ(ξk)δ2 k(t) +∞X k=N+1τ2 φ(ξk)a2 k(t)!p/2 dt. The last inequality follows from Theorem 6 and properties of the function τφ. 7 Theorem 8. Let a stochastic process X={X(t), t∈[0, T]}belong to Subφ(Ω), φ(t) =tγ γ,where 1< γ < 2. Assume cN=ZT 0 NX k=1τγ φ(ξk)δγ k(t) +∞X k=N+1τγ φ(ξk)aγ k(t)!p/γ dt <∞. The model XN(t)approximates the stochastic process X(t)with reliability 1−α and accuracy δin the space Lp(T), if cN≤δ/(βln2 α)p/β cN< δ/pp(1−1/γ), where βis such that 1/β+ 1/γ= 1. Proof. Theorem 6 for s=γimplies the following inequalities: cN=ZT 0(τφ(∆N(t)))pdµ(t) = =ZT 0 τφ NX k=1ξkδk(t) +∞X k=N+1ξkak(t)!!p dt≤ ≤ZT 0 NX k=1τγ φ(ξk)δγ k(t) +∞X k=N+1τγ φ(ξk)aγ k(t)!p/γ dt. The last inequality follows from Theorem 6 and properties of the function τφ. 4 Constructing models of stochastic processes inLp(0, T)using the Karhunen-Lo` eve decom- position As an example of application of the Theorems formulated above, we consider a well-known Karhunen-Lo` eve decomposition of stochastic processes. Let X= {X(t), t∈[0, T]}be continuous in a mean square stochastic process, EX(t) =
$0$, $t\in T$, and let $B(t,s)=EX(t)X(s)$ be the correlation function of this stochastic process. Since the process $X(t)$ is continuous in the mean square, the function $B(t,s)$ is continuous on $T\times T$. Consider the homogeneous Fredholm integral equation of the second kind
\[ a(t) = \lambda \int_T B(t,s)\,a(s)\,ds. \tag{4} \]
It is well known (see [4]) that this integral equation has an at most countable family of eigenvalues, and these numbers are non-negative. Let $\lambda_k$ be the eigenvalues and $a_k(t)$ the corresponding eigenfunctions, with the eigenvalues enumerated in ascending order:
\[ 0 \le \lambda_1 \le \lambda_2 \le \dots \le \lambda_k \le \lambda_{k+1} \le \dots. \]
It is well known that the $a_k(t)$ are orthogonal functions.

The main issue in constructing a Karhunen–Loève model is that the eigenvalues and eigenfunctions of such integral equations can be found explicitly only for a few correlation functions. Let $\hat a_k$ be approximations of the eigenfunctions of the integral equation and $\hat\lambda_k$ approximations of the corresponding eigenvalues. Such approximations can be constructed using, for example, the method described in [5].

Definition 7. Let $X=\{X(t), t\in[0,T]\}\subset \mathrm{Sub}_\varphi(\Omega)$ be a stochastic process with correlation function $B(t,s)=EX(t)X(s)$ that can be represented via the Karhunen–Loève decomposition. We call the process $X_N=\{X_N(t), t\in[0,T]\}\subset \mathrm{Sub}_\varphi(\Omega)$ the Karhunen–Loève model of this process if
\[ X_N(t) = \sum_{k=1}^{N} \hat a_k(t)\,\frac{\xi_k}{\sqrt{\hat\lambda_k}}, \]
where $\hat a_k(t)$ are approximations of the eigenfunctions of the homogeneous Fredholm integral equation of the second kind (4) and $\hat\lambda_k$ are approximations of the eigenvalues of the same equation.

Let us formulate theorems that allow one to construct models of stochastic processes using this decomposition in $L_p(T)$.

Theorem 9. Let a stochastic process $X=\{X(t), t\in[0,T]\}$ belong to $\mathrm{Sub}_\varphi(\Omega)$ with
\[ \varphi(t) = \begin{cases} t^2/\gamma, & t<1,\\ t^\gamma/\gamma, & t\ge 1, \end{cases} \]
for $\gamma>2$, and assume that $X$ can be represented via the Karhunen–Loève decomposition.
Assume cN=ZT 0 NX k=1τ2 φ(ξk) δ2 k(t) ˆλk−ηk+ ˆa2 k(t)(pˆλk−q ˆλk−ηk)2 ˆλk(ˆλk−ηk) + +∞X k=N+1τ2 φ(ξk)a2 k(t) λk!p/2 dt <∞, where δk(t) =|φk(t)−ˆφk(t)|is the error of approximation of k-th eigenfunction of equation (4),ˆλkis the approximation of the k-the eigenvalue, ηk=|λk− ˆλk|is the error of approximation of the k-the eigenvalue. The model XN(t) approximates the stochastic process X(t)with reliability 1−αand accuracy δ in the space Lp(T), when the following conditions are true: cN≤δ/(βln2 α)p/β cN< δ/pp(1−1/γ). Proof. The statement of the theorem follows from Theorem 7. Indeed, cN=ZT 0 τφ NX k=1ξk ak(t)√λk−ˆak(t)pˆλk! +∞X k=N+1ξkak(t)√λk!!p dt= 9 =ZT 0 τφ NX k=1ξk ak(t)√λk−ˆak(t)√λk+ˆak(t)√λk−ˆak(t)pˆλk! + +∞X k=N+1ξkak(t)√λk!!p dt=ZT 0 τφ NX k=1ξk1√λkδk(t)+ +NX k=1ξkˆak(t) 1√λk−1pˆλk! +∞X k=N+1τφ(ξk)ak(t)√λk!!p dt≤ ≤ZT 0 NX k=1τ2 φ(ξk) δ2 k(t) ˆλk−ηk+ ˆa2 k(t)(pˆλk−q ˆλk−ηk)2 ˆλk(ˆλk−ηk) + +∞X k=N+1τ2 φ(ξk)a2 k(t) λk!p/2 dt. In the case when τφ(ξk)≤τ∀kwe will have the next statement. Theorem 10. Let a stochastic process X={X(t), t∈[0, T]}belong to Subφ(Ω), φ(t) =( t2 γ, t <1 tγ γ, t≥1 forγ >2, and let ∀k:τφ(ξk) =τ. Assume the process Xcan be represented using the Karhunen-Lo` eve decomposition, and let cN=τp/2ZT 0 B(t, t)−NX k=1(ˆak(t)−δk(t))2 ˆλk+ηk! + +NX k=1 δ2 k(t) ˆλk−ηk+ ˆa2 k(t)(pˆλk−q ˆλk−ηk)2 ˆλk(ˆλk−ηk)  p/2 dt <∞, where δk(t) =|φk(t)−ˆφk(t)|is the error of approximation of
k-th eigenfunction of equation (4),ˆλkis the approximation of the k-the eigenvalue, ηk=|λk− ˆλk|is the error of approximation of the k-the eigenvalue. The model XN(t) approximates the stochastic process X(t)with reliability 1−αand accuracy δ in the space Lp(T), when the following conditions are true: cN≤δ/(βln2 α)p/β cN< δ/pp(1−1/γ). Proof. This statement follows from the Mercer’s Theorem [4] and the previous Theorem. Indeed, according to the Mercer’s Theorem, B(t, t) =∞X k=1a2 k(t) λk, 10 therefore cN=ZT 0 NX k=1τ2 δ2 k(t) ˆλk−ηk+ ˆa2 k(t)(pˆλk−q ˆλk−ηk)2 ˆλk(ˆλk−ηk) +∞X k=N+1τ2a2 k(t) λk p/2 dt= =τp/2ZT 0 NX k=1τ2 δ2 k(t) ˆλk−ηk+ ˆa2 k(t)pˆλk−q ˆλk−ηk2 ˆλk(ˆλk−ηk) +B(t, t)−NX k=1a2 k(t) λk p/2 dt≤ ≤τp/2ZT 0 B(t, t)−NX k=1(ˆak(t)−δk(t))2 ˆλk+ηk! + +NX k=1 δ2 k(t) ˆλk−ηk+ ˆa2 k(t)(pˆλk−q ˆλk−ηk)2 ˆλk(ˆλk−ηk)  p/2 dt. Theorem 11. Let a stochastic process X={X(t), t∈[0, T]}belong to Subφ(Ω), let φ(t) =tγ γ for1< γ < 2. Assume cN=ZT 0 NX k=1τγ φ(ξk) δγ k(t) (λk−ηk)γ+ ˆaγ k(t)(pˆλk−q ˆλk−ηk)γ (ˆλk(ˆλk−ηk))γ + +∞X k=N+1τγ φ(ξk)a2γ k(t) λγ k!p/γ dt, where δk(t) =|φk(t)−ˆφk(t)|is the error of approximation of k-th eigenfunction of equation (5), ˆλkis the approximation of the k-the eigenvalue, ηk=|λk− ˆλk|is the error of approximation of the k-the eigenvalue. The model XN(t) approximates the stochastic process X(t)with reliability 1−αand accuracy δ in the space Lp(T), when the following conditions are true: ( cN≤δ/(βln2 α)p/β cN< δ/pp(1−1/γ). Proof. he statement of this theorem follows from the Theorem 8 and the proof of the Theorem 10. 11 5 Conclusions This article discusses modeling methods with specified reliability and accu- racy of stochastic processes from Subφ(Ω) spaces in Lp(T) spaces that allow decomposition in series with independent elements. Theorems are proved that allow constructing models of such stochastic processes with given reliability and accuracy. 
Besides, theorems have been proved for stochastic processes whose expansion terms cannot be determined explicitly but can be approximated by certain functions, along with similar theorems for some $\mathrm{Sub}_\varphi(\Omega)$ spaces with given functions $\varphi$. As an example of the application of the results of the article, theorems have been proved that allow constructing the Karhunen–Loève model in cases where the integral equation corresponding to the model cannot be solved explicitly.

References

[1] V. V. Buldygin, Yu. V. Kozachenko. Metric Characterization of Random Variables and Random Processes. Providence: AMS, 260 p., 2000.
[2] Yu. V. Kozachenko, A. O. Pashko. Modelyuvannya vypadkovykh protsesiv [Modelling of Random Processes]. Kyiv: Vydavnychyj Tsentr "Kyivs'kyj Universytet", 223 p., 1999.
[3] Yu. V. Kozachenko, O. E. Kamenshchikova. Approximation of $\mathrm{SSub}_\varphi(\Omega)$ stochastic processes in the space $L_p(T)$. Theory Probab. Math. Stat. 79, 83–88, 2009.
[4] F. G. Tricomi. Integral Equations. New York: Interscience Publishers, 238 p., 1957.
[5] O. M. Mokliachuk. Simulation of random processes with known correlation function with the help of Karhunen–Loève decomposition. Theory of Stochastic Processes, Vol. 13, No. 4, 163–169, 2007.
Marginal minimization and sup-norm expansions in perturbed optimization Vladimir Spokoiny∗ Weierstrass Institute Berlin, HSE University and IITP RAS, Moscow, Mohrenstr. 39, 10117 Berlin, Germany spokoiny@wias-berlin.de May 6, 2025 Abstract Let the objective function fdepends on the target variable xalong with a nui- sance variable s:f(υ) =f(x,s) . The goal is to identify the marginal solution x∗= argminxminsf(x,s) . This paper discusses three related problems. The plugin approach widely used e.g. in inverse problems suggests to use a preliminary guess (pilot) bsand apply the solution of the partial optimization bx= argminxf(x,bs) . The main question to address within this approach is the required quality of the pilot ensuring the prescribed accuracy of bx. The popular alternating optimization approach suggests the following pro- cedure: given a starting guess x0, for t≥1 , define st= argminsf(xt−1,s) , and then xt= argminxf(x,st) . The main question here is the set of conditions ensuring a conver- gence of xttox∗. Finally, the paper discusses an interesting connection between marginal optimization and sup-norm estimation. The basic idea is to consider one component of the variable υas a target and the rest as nuisance. In all cases, we provide accurate closed form results under realistic assumptions. The results are illustrated by one numerical example for the BTL model. AMS 2010 Subject Classification: Primary 90C31 , 90C25. Secondary 90C15 Keywords : partial optimization, semiparametric bias; expansions in perturbed optimiza- tion ∗The article was prepared within the framework of the HSE University Basic Research Program 1arXiv:2505.02562v1 [math.OC] 5 May 2025 2 Marginal and alternating minimization, sup-norm bounds Contents 1 Introduction 2 2 Partial and marginal optimization 5 2.1 Partial optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 2.2 Conditional optimization under (semi)orthogonality . . . . . . . . . . . . 
2.3 Semiparametric bias. A linear approximation
2.4 A linear perturbation
3 Alternating optimization
4 Sup-norm expansions for linearly perturbed optimization
4.1 Conditions
4.2 A sup-norm expansion for a linear perturbation
4.3 Sup-norm expansions for a separable perturbation
5 Estimation for the Bradley-Terry-Luce model
A Optimization after linear perturbation
A.1 Smoothness conditions
A.2 A basic lemma
B Proofs
B.1 Partial and marginal optimization. Proofs
B.2 Sup-norm expansions. Proofs

1 Introduction

For a smooth convex function $f(\cdot)$, consider the unconstrained optimization problem
$$\upsilon^* = \operatorname{argmin}_{\upsilon} f(\upsilon).$$
Spokoiny (2025c) offered a number of rather precise results addressing the following question: how do $\upsilon^*$ and $f(\upsilon^*)$ change if the objective function $f$ is perturbed in some special way? This question originates from statistical and machine learning applications, with many particular examples and applications including uncertainty quantification, guarantees and rates for statistical learning, bias due to penalization, etc. The leading example is given by a linear perturbation; see (A.1). Somewhat surprisingly, any smooth perturbation can be treated in a similar way.

This paper studies a slightly different but related question. Suppose that the full variable $\upsilon$ is composed of two parts: $\upsilon = (x,s)$, where $x$ is of primary interest while $s$ is a nuisance variable. Of course, a global optimization $\upsilon^* = (x^*,s^*) = \operatorname{argmin}_{(x,s)} f(x,s)$ delivers the $x$-component of the solution. However, joint optimization can be a hard problem. Instead, one can consider two partial problems
$$x_s = \operatorname{argmin}_x f(x,s), \qquad s_x = \operatorname{argmin}_s f(x,s),$$
in which one component is fixed while the other is optimized. Each of these two problems can be much easier than the original full-dimensional optimization, especially if the objective function $f(x,s)$ is convex in $x$ with $s$ fixed and the other way around. We mention three different setups where partial optimization appears in a natural way.

Setup 1: Plug-in approach in semiparametric statistical inference. A pragmatic plug-in procedure is frequently used in inverse problems and semiparametric statistical estimation: with a pilot $\hat{s}$, apply $x^*(\hat{s})$ in place of $x^*$. We refer to Noskov et al.
(2025), Bartlett et al. (2020), and Cheng and Montanari (2022) for an overview and further references related to the plug-in approach in random design regression. A more general error-in-operator problem has been discussed in Spokoiny (2025b). The error of this plug-in method can be studied within a perturbed optimization problem: the true objective function $f(x,s^*)$ of the target variable $x$ is replaced with $f(x,\hat{s})$. We study the error $x^*(\hat{s}) - x^*$ of this perturbation later in Section 2 and show that this error is essentially linear in $\hat{s} - s^*$. Therefore, the plug-in approach requires a very good pilot guess.

Setup 2: Optimization with sup-norm error. The recent paper Spokoiny (2025a) introduces a so-called critical dimension condition for consistency of the maximum likelihood estimator, meaning the following: "the effective parameter dimension is smaller than the effective sample size", where the effective dimension $\mathbb{p}$ can be defined as the trace of the covariance matrix of the standardized score, while the effective
sample size $N$ corresponds to the smallest eigenvalue of the Fisher information matrix. Statistical inference on the parameter of interest requires the even stronger condition $\mathbb{p}^2 \ll N$. General results on Laplace approximation from Katsevich (2023) can be viewed as a lower bound in this problem and indicate that this condition is really unavoidable. Surprisingly, Gao et al. (2023) established very strong inference results for the Bradley-Terry-Luce (BTL) problem of ranking $p$ players from pairwise comparisons under the much weaker condition $N \gg \log(p)$. The distinguished feature of the study in Gao et al. (2023) is that each component of the parameter vector is analyzed separately, leading to sup-norm losses. One of the aims of this paper is to revisit the estimation problem with sup-norm losses. Section 4 explains how this problem can be studied within the AO approach.

Setup 3: Alternating optimization and EM-type procedures. Another widely used approach is so-called alternating optimization. Suppose we are given a starting guess $x_0$ and consider the following alternating optimization (AO) procedure: for $t \ge 1$,
$$s_t = \operatorname{argmin}_s f(x_{t-1},s), \qquad x_t = \operatorname{argmin}_x f(x,s_t).$$
There exists a vast literature on this subject. A comprehensive survey on coordinate descent and its variants, including alternating optimization, can be found e.g. in Wright (2015), Boyd and Vandenberghe (2004), and Bauschke and Combettes (2017). For an early theoretical analysis of AO convergence for convex problems, see Luo and Tseng (1992). Bezdek and Hathaway (2003) discuss convergence properties of AO in the context of clustering. Tseng (2001) establishes convergence guarantees for non-smooth problems. Block coordinate descent methods in the broader context of optimization are addressed in Bertsekas (1999). We also mention some applications in machine learning and statistics. Xu and Yin (2013) extend the AO theory to multi-convex problems.
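As a toy illustration (our own sketch, not from the paper), the AO iteration above can be run on a scalar strongly convex quadratic, where both partial minimizers are available in closed form and the linear rate of convergence is visible directly:

```python
# Toy sketch (not from the paper) of the AO procedure for
# f(x, s) = (x**2 + s**2 + 2*rho*x*s) / 2, which is strongly convex for
# |rho| < 1 with minimizer (0, 0); both partial minimizers are closed-form.

def ao(rho, x0, steps):
    """s_t = argmin_s f(x_{t-1}, s); x_t = argmin_x f(x, s_t)."""
    x = x0
    for _ in range(steps):
        s = -rho * x   # stationarity in s: s + rho*x = 0
        x = -rho * s   # stationarity in x: x + rho*s = 0
    return x

# Here the cross-coupling is P = rho (a scalar), so x_t = rho**2 * x_{t-1}:
# linear convergence at rate rho**2, matching the quadratic-case analysis
# of Section 3.
print(ao(rho=0.5, x0=1.0, steps=5))  # (0.25)**5 = 0.0009765625
```

The example also shows why $\|P\| < 1$ matters: for $|\rho| \ge 1$ the joint objective is no longer strongly convex and the iteration fails to contract.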
Non-negative matrix factorization by AO has been studied in Hsieh and Dhillon (2011). Mairal (2015) discusses AO in the context of majorization-minimization (MM) algorithms. Beck and Tetruashvili (2013) provide general convergence guarantees for AO. A general convergence theory of AO for non-smooth and non-convex problems has been studied in Razaviyayn et al. (2013) and in Xu and Yin (2017).

A common point of all the obtained results is local convergence: an AO procedure converges at a linear rate provided a good starting guess $x_0$. This seems to be an intrinsic feature of the approach. However, for practical application, one needs more specific conditions on this local vicinity, its shape, and its dependence on the curvature of the objective function $f$ and its smoothness properties. Section 3 addresses these questions and states an accurate result about convergence of the AO procedure to the solution $(x^*,s^*)$ as $t$ grows at the optimal linear rate.

This paper's contribution. It is well known that for a smooth function $f(x,s)$, the partial solution $x_s$ is nearly linear in $s$ in the form
$$x_s - x^* \approx -F_{xx}^{-1} F_{xs} (s - s^*).$$
However, accurate guarantees for this approximation are important. A specification of the main result of Section 2 can be written as
$$\bigl\| F_{xx}^{1/2} \bigl( x_s - x^* + F_{xx}^{-1} F_{xs} (s - s^*) \bigr) \bigr\| \le \tau_{\circ} \, \| F_{ss}^{1/2} (s - s^*) \|_{\circ}^2, \tag{1.1}$$
where $\tau_{\circ}$ is a small constant related to the self-concordance smoothness
condition on the function $f$, and $\| \cdot \|_{\circ}$ is a norm in $\mathbb{R}^q$ in which the smoothness of $f$ w.r.t. $s$ is measured. The closed form of the remainder is very important. In particular, it allows one to write the error in multiplicative form, which is crucial for many iterative algorithms. Later we apply this bound in two special cases. With the usual Euclidean norm, (1.1) will be used to provide accurate and sharp conditions for local convergence of AO-type methods; see Section 3. With the sup-norm $\| \cdot \|_{\circ} = \| \cdot \|_{\infty}$, (1.1) helps to derive sharp sup-norm bounds in perturbed optimization; see Section 4.

Organization of the paper. Section 2 studies the problem of partial and marginal optimization with a particular focus on the semiparametric bias caused by using a wrong value of the nuisance parameter. Section 4 applies the bound from Section 2 to the problem of perturbed optimization in sup-norm. The obtained results are used in Section 3 for proving convergence of the AO procedure. Section 5 presents a numerical example for the Bradley-Terry-Luce (BTL) model. Section A reviews one useful result from Spokoiny (2025c) on linearly perturbed optimization which is used in all our derivations. The proofs are collected in Sections B.1 and B.2.

2 Partial and marginal optimization

This section discusses the problem of conditional/partial and marginal optimization. Consider a function $f(\upsilon)$ of a parameter $\upsilon \in \mathbb{R}^{sp}$ which can be represented as $\upsilon = (x,s)$, where $x \in \mathbb{R}^p$ is the target subvector while $s \in \mathbb{R}^q$ is a nuisance variable, with total dimension $sp = p + q$. Our goal is to study the solution of the optimization problem $\upsilon^* = (x^*,s^*) = \operatorname{argmin}_{\upsilon} f(\upsilon)$ and, in particular, its target component $x^*$. Later we consider a localized setup with a local set $\Upsilon$ of pairs $\upsilon = (x,s)$ in a vicinity of $\upsilon^*$.

2.1 Partial optimization

For any fixed value of the nuisance variable $s \in \mathcal{S}$, consider $f_s(x) = f(x,s)$ as a function of $x$ only. Below we assume that $f_s(x)$ is convex in $x$ for any $s \in \mathcal{S}$.
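To make the notion concrete, here is a minimal numerical sketch (our own toy objective, not the paper's) of computing the partial minimizer $x_s = \operatorname{argmin}_x f(x,s)$ for a fixed nuisance value $s$ by gradient descent on $f_s$:

```python
import math

# Toy partial optimization: for fixed nuisance s, minimize f_s(x) = f(x, s)
# over x only. Here f(x, s) = cosh(x - s), a smooth strongly convex function
# whose partial minimizer is x_s = s; the objective is purely illustrative.

def partial_argmin_x(s, x0=0.0, lr=0.5, steps=200):
    x = x0
    for _ in range(steps):
        x -= lr * math.sinh(x - s)   # f_s'(x) = sinh(x - s)
    return x

print(abs(partial_argmin_x(1.3) - 1.3) < 1e-9)  # True: x_s = s for this f
```

For each fixed $s$ this is a one-dimensional convex problem, which is the point of the partial approach: the map $s \mapsto x_s$ is what the following results quantify.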
Define
$$x_s \stackrel{\mathrm{def}}{=} \operatorname{argmin}_{x \colon (x,s) \in \Upsilon} f_s(x).$$
Our goal is to describe the variability of the partial solution $x_s$ in $s$ in terms of $x_s - x^*$ and $f(\upsilon^*) - f_s(x_s)$. Introduce
$$A_s \stackrel{\mathrm{def}}{=} \nabla f_s(x^*) = \nabla_x f(x^*,s), \qquad \mathbb{F}_s \stackrel{\mathrm{def}}{=} \nabla^2 f_s(x^*) = \nabla^2_{xx} f(x^*,s). \tag{2.1}$$
Local smoothness of each function $f_s(\cdot)$ around $x_s$ can be well described under the self-concordance property. For any $s \in \mathcal{S}$, let some radius $r_s$ be fixed. We also assume that a local metric on $\mathbb{R}^p$ for the target variable $x$ is defined by a matrix $\mathbb{D}_s \in \mathcal{M}_p$ that may depend on $s \in \mathcal{S}$.

$(T_3^* | s)$ For $s \in \mathcal{S}$, it holds
$$\sup_{u \in \mathbb{R}^p \colon \|\mathbb{D}_s u\| \le r_s} \; \sup_{z \in \mathbb{R}^p} \frac{\bigl| \langle \nabla^3 f_s(x_s + u), z^{\otimes 3} \rangle \bigr|}{\|\mathbb{D}_s z\|^3} \le \tau_3.$$

Our first result describes the semiparametric bias $x_s - x^*$ caused by using the value $s$ of the nuisance variable in place of $s^*$.

Proposition 2.1. Let $f_s(x)$ be a strongly convex function with $f_s(x_s) = \min_x f_s(x)$ and $\mathbb{F}_s = \nabla^2 f_s(x_s)$. Let $A_s$ and $\mathbb{F}_s$ be given by (2.1). Assume $(T_3^* | s)$ at $x_s$ with $\mathbb{D}_s^2$, $r_s$, and $\tau_3$ such that
$$\mathbb{D}_s^2 \le \kappa^2 \mathbb{F}_s, \qquad r_s \ge \tfrac{3}{2} \|\mathbb{D}_s \mathbb{F}_s^{-1} A_s\|, \qquad \kappa^2 \tau_3 \|\mathbb{D}_s \mathbb{F}_s^{-1} A_s\| < \tfrac{4}{9}.$$
Then $\|\mathbb{D}_s (x_s - x^*)\| \le (3/2) \|\mathbb{D}_s \mathbb{F}_s^{-1} A_s\|$ and, moreover,
$$\bigl\| \mathbb{D}_s^{-1} \mathbb{F}_s \bigl( x_s - x^* + \mathbb{F}_s^{-1} A_s \bigr) \bigr\| \le \frac{3\tau_3}{4} \|\mathbb{D}_s \mathbb{F}_s^{-1} A_s\|^2 \le \frac{\tau_3 r_s}{2} \|\mathbb{D}_s \mathbb{F}_s^{-1} A_s\|. \tag{2.2}$$
Also,
$$\bigl| 2 f_s(x_s) - 2 f_s(x^*) + \|\mathbb{F}_s^{-1/2} A_s\|^2 \bigr| \le \frac{5\tau_3}{2} \|\mathbb{D}_s \mathbb{F}_s^{-1} A_s\|^3. \tag{2.3}$$

2.2 Conditional optimization under (semi)orthogonality

Here we study the variability of $x_s = \operatorname{argmin}_x f(x,s)$ w.r.t. the nuisance parameter $s$. It
appears that a local quadratic approximation of the function $f$ in a vicinity of the extreme point $\upsilon^*$ yields a nearly linear dependence of $x_s$ on $s$. We illustrate this fact in the case of a quadratic function $f(\cdot)$. Consider the Hessian $F = \nabla^2 f(\upsilon^*)$ in block form:
$$F \stackrel{\mathrm{def}}{=} \nabla^2 f(\upsilon^*) = \begin{pmatrix} F_{xx} & F_{xs} \\ F_{sx} & F_{ss} \end{pmatrix} \tag{2.4}$$
with $F_{sx} = F_{xs}^\top$. If $f(\upsilon)$ is quadratic then $F$ and its blocks do not depend on $\upsilon$.

Lemma 2.2. Let $f(\upsilon)$ be quadratic, strongly convex, and $\nabla f(\upsilon^*) = 0$. Then
$$x_s - x^* = -F_{xx}^{-1} F_{xs} (s - s^*). \tag{2.5}$$

Proof. The condition $\nabla f(\upsilon^*) = 0$ implies $f(\upsilon) = f(\upsilon^*) + (\upsilon - \upsilon^*)^\top F (\upsilon - \upsilon^*)/2$ with $F = \nabla^2 f(\upsilon^*)$. For $s$ fixed, the point $x_s$ minimizes $(x - x^*)^\top F_{xx} (x - x^*)/2 + (x - x^*)^\top F_{xs} (s - s^*)$ and thus $x_s - x^* = -F_{xx}^{-1} F_{xs} (s - s^*)$.

Observation (2.5) is a bit discouraging because the bias $x_s - x^*$ has the same magnitude as the nuisance perturbation $s - s^*$. However, the condition $F_{xs} = 0$ yields $x_s \equiv x^*$ and the bias vanishes. If $f(\upsilon)$ is not quadratic, the orthogonality condition $\nabla_s \nabla_x f(x,s) \equiv 0$ for all $(x,s) \in \Upsilon$ still ensures a vanishing bias.

Lemma 2.3. Let $f(x,s)$ be continuously differentiable and $\nabla_s \nabla_x f(x,s) \equiv 0$. Then the point $x_s = \operatorname{argmin}_x f(x,s)$ does not depend on $s$.

Proof. The condition $\nabla_s \nabla_x f(x,s) \equiv 0$ implies the decomposition $f(x,s) = f_1(x) + f_2(s)$ for some functions $f_1$ and $f_2$. This in turn yields $x_s \equiv x^*$.

In some cases, one can check the semi-orthogonality condition
$$\nabla_s \nabla_x f(x^*,s) = 0, \qquad \forall s \in \mathcal{S}. \tag{2.6}$$

Lemma 2.4. Assume (2.6). Then
$$\nabla_x f(x^*,s) \equiv 0, \qquad \nabla^2_{xx} f(x^*,s) \equiv \nabla^2_{xx} f(x^*,s^*), \qquad s \in \mathcal{S}. \tag{2.7}$$
Moreover, if $f(x,s)$ is convex in $x$ given $s$, then
$$x_s \stackrel{\mathrm{def}}{=} \operatorname{argmin}_x f(x,s) \equiv x^*, \qquad \forall s \in \mathcal{S}. \tag{2.8}$$

Proof. Consider $A_s = \nabla_x f(x^*,s)$. Obviously $A_{s^*} = 0$. Moreover, (2.6) implies that $A_s$ does not depend on $s$ and thus vanishes everywhere. As $f$ is convex in $x$, this implies $f(x^*,s) = \min_x f(x,s)$ and $x_s = \operatorname{argmin}_x f(x,s) \equiv x^*$. Similarly, by (2.6), it holds $\nabla_s \nabla^2_{xx} f(x^*,s) \equiv 0$ and (2.7) follows. Convexity of $f(x,s)$ in $x$ for $s$ fixed and (2.7) imply (2.8).

Orthogonality or semi-orthogonality (2.6) conditions are rather restrictive and fulfilled only in special situations.
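The bias formula (2.5) is easy to verify numerically. The sketch below (with illustrative matrices of our own choosing, not from the paper) computes $x_s$ by gradient descent on $x \mapsto f(x,s)$ and compares it with the closed-form expression:

```python
import numpy as np

# Numerical check of Lemma 2.2 for the quadratic f(v) = (v - v*)' F (v - v*)/2.
# The blocks below are illustrative; any strongly convex quadratic works.
F_xx = np.array([[3.0, 0.4], [0.4, 2.0]])
F_xs = np.array([[0.7], [0.2]])
x_star, s_star = np.array([1.0, -1.0]), np.array([0.5])

def x_s_numeric(s, steps=500, lr=0.2):
    """Minimize f(., s) over x by gradient descent."""
    x = np.zeros(2)
    for _ in range(steps):
        x -= lr * (F_xx @ (x - x_star) + F_xs @ (s - s_star))
    return x

s = np.array([0.9])
closed_form = x_star - np.linalg.solve(F_xx, F_xs @ (s - s_star))  # (2.5)
print(np.allclose(x_s_numeric(s), closed_form))  # True
```

Setting `F_xs` to zero in this sketch reproduces the orthogonality observation: the partial minimizer no longer moves with $s$.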
A weaker condition of one-point orthogonality means $\nabla_s \nabla_x f(x^*,s^*) = 0$. This condition is not restrictive and can always be enforced by a linear transform of the nuisance variable $s$.

2.3 Semiparametric bias. A linear approximation

This section presents a first-order expansion of the semiparametric bias $x_s - x^*$ under conditions on the cross-derivatives of $f(x,s)$ for $s = s^*$ or $x = x^*$. Suppose that the nuisance variable $s$ is already localized to a small vicinity $\mathcal{S}$ of $s^*$. A typical example is given by a level set $\mathcal{S}$ of the form
$$\mathcal{S} = \bigl\{ s \colon \|\mathbb{H}(s - s^*)\|_{\circ} \le r_{\circ} \bigr\}, \tag{2.9}$$
where $\mathbb{H}$ is a metric tensor in $\mathbb{R}^q$, $\| \cdot \|_{\circ}$ is a norm in $\mathbb{R}^q$, and $r_{\circ} > 0$. Often $\| \cdot \|_{\circ}$ is the usual $\ell_2$-norm. However, in some situations it is beneficial to use different topologies for the target parameter $x$ and the nuisance parameter $s$. One example is given by estimation in sup-norm with $\| \cdot \|_{\circ} = \| \cdot \|_{\infty}$. Assume the following condition.

$(T_3^*,\mathcal{S})$ It holds with some $\tau_{12}$, $\tau_{21}$
$$\sup_{s \in \mathcal{S}} \; \sup_{z \in \mathbb{R}^p, \, w \in \mathbb{R}^q} \frac{|\langle \nabla^3_{xss} f(x^*,s), z \otimes w^{\otimes 2} \rangle|}{\|\mathbb{D} z\| \, \|\mathbb{H} w\|_{\circ}^2} \le \tau_{12}, \tag{2.10}$$
$$\sup_{s \in \mathcal{S}} \; \sup_{z \in \mathbb{R}^p, \, w \in \mathbb{R}^q} \frac{|\langle \nabla^3_{xxs} f(x^*,s), z^{\otimes 2} \otimes w \rangle|}{\|\mathbb{D} z\|^2 \, \|\mathbb{H} w\|_{\circ}} \le \tau_{21}. \tag{2.11}$$

Remark 2.1. Condition $(T_3^*,\mathcal{S})$ only involves the mixed derivatives $\nabla^3_{xss} f(x^*,s)$ and $\nabla^3_{xxs} f(x^*,s)$ of $f(x^*,s)$ for the fixed value $x = x^*$, while condition $(T_3^*|s)$ only concerns smoothness of $f(x,s)$ w.r.t. $x$ for $s$ fixed. Therefore, the combination of $(T_3^*|s)$ and $(T_3^*,\mathcal{S})$ is much weaker than the full-dimensional condition $(T_3^*)$.

For the norm $\| \cdot \|_{\circ}$, let $\|B\|_*$ be the corresponding dual norm of an operator $B \colon \mathbb{R}^q \to \mathbb{R}^p$:
$$\|B\|_* = \sup_{z \colon \|z\|_{\circ} \le 1} \|B z\|. \tag{2.12}$$
If $p = 1$ and $\| \cdot \|_{\circ}$ is the sup-norm $\| \cdot \|_{\infty}$, then $\|B\|_* = \|B\|_1$. Define
$$\rho_* \stackrel{\mathrm{def}}{=} \|\mathbb{D}^{-1} F_{xs} \mathbb{H}^{-1}\|_*. \tag{2.13}$$
The next result provides an expansion of the semiparametric bias $x_s - x^*$ under $(T_3^*,\mathcal{S})$ and $(T_3^*|s)$ with a leading linear term as in the quadratic case of Lemma 2.2. For ease of formulation, assume that $(T_3^*|s)$ holds with $\mathbb{D}_s^2 \equiv \mathbb{D}^2 \le \mathbb{F}$ for $\mathbb{F} = F_{xx}$; see Remark 2.2 for an extension.

Theorem 2.5. Fix $\mathcal{S}$ from (2.9) and assume $(T_3^*,\mathcal{S})$ and $(T_3^*|s)$ for all $s \in \mathcal{S}$ with $\mathbb{D}_s^2 \equiv \mathbb{D}^2 \le \mathbb{F}$, $r_s \equiv r$, and some $\tau_3$. Let also
$$\omega \stackrel{\mathrm{def}}{=} \tau_{21} r_{\circ} < 1, \qquad r \ge \rho_2 \|\mathbb{H}(s - s^*)\|_{\circ}, \qquad \rho_2 \tau_3 r_{\circ} \le \tfrac{2}{3},$$
with
$$\rho_2 \stackrel{\mathrm{def}}{=} \frac{3}{2(1-\omega)} \Bigl( \rho_* + \frac{\tau_{12} r_{\circ}}{2} \Bigr) \tag{2.14}$$
and $\rho_*$ from (2.13). Then the partial solution $x_s$ obeys
$$\|\mathbb{D}(x_s - x^*)\| \le \rho_2 \|\mathbb{H}(s - s^*)\|_{\circ}. \tag{2.15}$$
Moreover, for any linear mapping $Q$ on $\mathbb{R}^p$,
$$\bigl\| Q \bigl\{ x_s - x^* + \mathbb{F}^{-1} F_{xs} (s - s^*) \bigr\} \bigr\| \le \|Q \mathbb{F}^{-1} \mathbb{D}\| \, \tau_{\circ} \|\mathbb{H}(s - s^*)\|_{\circ}^2, \tag{2.16}$$
$$\tau_{\circ} \stackrel{\mathrm{def}}{=} \frac{1}{1-\omega} \Bigl( \rho_* \tau_{21} + \frac{\tau_{12}}{2} + \frac{\rho_2^2 \tau_3}{3} \Bigr). \tag{2.17}$$

Remark 2.2. An extension to the case $\mathbb{D}^2 \le \kappa^2 \mathbb{F}$ can be done by replacing everywhere $\tau_3$, $\tau_{12}$, $\tau_{21}$, $\tau_{\circ}$, $\rho_*$ with $\kappa^3 \tau_3$, $\kappa \tau_{12}$, $\kappa^2 \tau_{21}$, $\kappa \tau_{\circ}$, $\kappa^{-1} \rho_*$, respectively.

2.4 A linear perturbation

Let $g(x,s)$ be a linear perturbation of $f(x,s)$:
$$g(x,s) - g(x^*,s^*) = f(x,s) - f(x^*,s^*) + \langle A, x - x^* \rangle + \langle C, s - s^* \rangle;$$
cf. (A.1). Let also $\mathcal{S}$ be given by (2.9) and $s \in \mathcal{S}$. We are interested in quantifying the distance between $x^*$ and $x^{\circ}_s$, where
$$x^{\circ}_s \stackrel{\mathrm{def}}{=} \operatorname{argmin}_x g(x,s).$$
The linear perturbation $\langle C, s - s^* \rangle$ does not depend on $x$ and can be ignored.

Proposition 2.6. For $s \in \mathcal{S}$, assume the conditions of Theorem 2.5 and let
$$\omega \stackrel{\mathrm{def}}{=} \tau_{21} \|\mathbb{H}(s - s^*)\|_{\circ} \le 1/4.$$
Then with $\rho_2$ from (2.14),
$$\|\mathbb{D}(x^{\circ}_s - x^*)\| \le \rho_2 \|\mathbb{H}(s - s^*)\|_{\circ} + \frac{3}{2(1-\omega)} \|\mathbb{D}^{-1} A\|. \tag{2.18}$$
Moreover, for any linear mapping $Q$, it holds with $\tau_{\circ}$ from (2.17)
$$\bigl\| Q \bigl\{ x^{\circ}_s - x^* + \mathbb{F}^{-1} F_{xs} (s - s^*) + \mathbb{F}^{-1} A \bigr\} \bigr\| \le \|Q \mathbb{F}^{-1} \mathbb{D}\| \Bigl( \tau_{\circ} \|\mathbb{H}(s - s^*)\|_{\circ}^2 + 2 \tau_3 \|\mathbb{D} \mathbb{F}^{-1} A\|^2 + \frac{4 \tau_{21}}{3} \|\mathbb{D} \mathbb{F}^{-1} A\| \, \|\mathbb{H}(s - s^*)\|_{\circ} \Bigr)$$
$$\le \|Q \mathbb{F}^{-1} \mathbb{D}\| \Bigl\{ (\tau_{\circ} + \tau_{21}) \|\mathbb{H}(s - s^*)\|_{\circ}^2 + (2 \tau_3 + \tau_{21}/2) \|\mathbb{D} \mathbb{F}^{-1} A\|^2 \Bigr\}. \tag{2.19}$$

3 Alternating optimization

Let $f(\upsilon) = f(x,s)$ depend on a target variable $x$ and a nuisance variable $s$.
Consider the full-dimensional optimization problem
$$\upsilon^* = (x^*,s^*) = \operatorname{argmin}_{(x,s)} f(x,s).$$
This problem is in general non-convex and can be hard to solve. Instead, we assume that a partial optimization of $f(x,s)$ w.r.t. $x$ with $s$ fixed, and similarly w.r.t. $s$ with $x$ fixed, is a simple convex program. Suppose we are given a starting guess $x_0$ and consider the following alternating optimization (AO) procedure: for $t \ge 1$,
$$s_t = \operatorname{argmin}_s f(x_{t-1},s), \qquad x_t = \operatorname{argmin}_x f(x,s_t). \tag{3.1}$$
The question under study is a set of conditions ensuring convergence of this procedure to the solution $(x^*,s^*)$ as $t$ grows. Let $F(\upsilon) = \nabla^2 f(\upsilon)$ and $F = F(\upsilon^*)$. We will use the block representation (2.4) of this matrix. The major condition for applicability of the AO method is that each diagonal block $F_{xx}$ and $F_{ss}$ is strongly positive and $\|P\| < 1$ for
$$P \stackrel{\mathrm{def}}{=} F_{xx}^{-1/2} F_{xs} F_{ss}^{-1/2}. \tag{3.2}$$
First, we explain the basic idea for the case of a quadratic function $f$ with $F \equiv \nabla^2 f$.

Lemma 3.1. Let $f$ be a quadratic function with $F \equiv \nabla^2 f$. For any $t \ge 1$, the value $x_t$ from (3.1) satisfies, with $P$ from (3.2),
$$F_{xx}^{1/2} (x_t - x^*) = P P^\top F_{xx}^{1/2} (x_{t-1} - x^*).$$
The proof follows from Lemma 2.2 applied twice, to $x_t - x^*$ and then to $s_t - s^*$. Successive application of this identity yields linear convergence of the AO procedure at the rate $\|P P^\top\|$.

Now we consider the general case and present conditions ensuring local convergence of the AO method. These conditions mainly concern the marginal behavior of $f$ in $x$ with $s$ fixed or the other way around. All
these conditions involve a metric tensor $\mathbb{D}$ for the $x$-component and $\mathbb{H}$ for the $s$-component. Consider a local set in $\Upsilon$ of the form $\mathcal{X} \times \mathcal{S}$, where
$$\mathcal{X} = \bigl\{ x \colon \|\mathbb{D}(x - x^*)\| \le r_x \bigr\}, \qquad \mathcal{S} = \bigl\{ s \colon \|\mathbb{H}(s - s^*)\| \le r_s \bigr\}.$$
Later we assume that $f(x,s)$ as a function of $x \in \mathcal{X}$ is strongly convex and smooth for $s \in \mathcal{S}$ fixed, and similarly in $s \in \mathcal{S}$ for $x \in \mathcal{X}$ fixed.

$(T_3^*,\mathcal{X},\mathcal{S})$ For any $s \in \mathcal{S}$, the function $f(x,s)$ is strongly convex in $x \in \mathcal{X}$ and it holds
$$\sup_{x \in \mathcal{X}} \sup_{u \in \mathbb{R}^p} \frac{|\langle \nabla^3_{xxx} f(x,s), u^{\otimes 3} \rangle|}{\|\mathbb{D} u\|^3} \le \tau_3.$$
For any $x \in \mathcal{X}$, the function $f(x,s)$ is strongly convex in $s$ and it holds
$$\sup_{s \in \mathcal{S}} \sup_{w \in \mathbb{R}^q} \frac{|\langle \nabla^3_{sss} f(x,s), w^{\otimes 3} \rangle|}{\|\mathbb{H} w\|^3} \le \tau_3.$$
For any $\upsilon = (x^*,s)$ with $s \in \mathcal{S}$ or $\upsilon = (x,s^*)$ with $x \in \mathcal{X}$, it holds
$$\sup_{z \in \mathbb{R}^p, \, w \in \mathbb{R}^q} \frac{|\langle \nabla^3_{xss} f(\upsilon), z \otimes w^{\otimes 2} \rangle|}{\|\mathbb{D} z\| \, \|\mathbb{H} w\|^2} \le \tau_{12}, \qquad \sup_{z \in \mathbb{R}^p, \, w \in \mathbb{R}^q} \frac{|\langle \nabla^3_{xxs} f(\upsilon), z^{\otimes 2} \otimes w \rangle|}{\|\mathbb{D} z\|^2 \, \|\mathbb{H} w\|} \le \tau_{21}.$$

If all the constants $\tau_3$, $\tau_{12}$, and $\tau_{21}$ are of the same order, then this condition is nearly equivalent to the full-dimensional condition $(T_3^*)$. Later we assume that $\|P\| < 1$ and that the quantities $\tau_{12}, \tau_3$ are sufficiently small to ensure $(\tau_{12} + \tau_3)(r_x + r_s) \ll 1$. For ease of formulation, we also assume $\tau_{12} = \tau_{21}$. In the next result, we discuss local convergence of the AO procedure started at a point $x_0 \in \mathcal{X}$.

Theorem 3.2. Assume $(T_3^*,\mathcal{X},\mathcal{S})$ with $\mathbb{D}^2 \le F_{xx}$, $\mathbb{H}^2 \le F_{ss}$, and let $\tau_{12} (r_s \vee r_x) \le \omega < 1$. Define
$$\rho_2 \stackrel{\mathrm{def}}{=} \frac{3}{2(1-\omega)} \Bigl( \|\mathbb{D}^{-1} F_{xs} \mathbb{H}^{-1}\| + \frac{\omega}{2} \Bigr), \qquad \tau_{\circ} \stackrel{\mathrm{def}}{=} \frac{1}{1-\omega} \Bigl( \tau_{12} \|\mathbb{D}^{-1} F_{xs} \mathbb{H}^{-1}\| + \frac{\tau_{12}}{2} + \frac{\tau_3 \rho_2^2}{3} \Bigr).$$
Let
$$r_x \ge \rho_2^2 \|\mathbb{D}(x_0 - x^*)\|, \qquad r_s \ge \rho_2 \|\mathbb{D}(x_0 - x^*)\|, \qquad \rho_2 \tau_3 (r_x \vee r_s) \le \tfrac{2}{3}.$$
Let also $P = F_{xx}^{-1/2} F_{xs} F_{ss}^{-1/2}$ satisfy
$$(1 + \rho_2^2) \, \tau_{\circ} \|\mathbb{D}(x_0 - x^*)\| < 1 - \|P P^\top\|. \tag{3.3}$$
Then $F_{xx}^{1/2} (x_t - x^*)$ converges to zero at the linear rate $\|P P^\top\|$.

Proof. For every $t \ge 1$, by (2.15) of Theorem 2.5,
$$\|\mathbb{H}(s_t - s^*)\| \le \rho_2 \|\mathbb{D}(x_{t-1} - x^*)\|, \qquad \|\mathbb{D}(x_t - x^*)\| \le \rho_2 \|\mathbb{H}(s_t - s^*)\|. \tag{3.4}$$
In particular, for $t = 1$, $\|\mathbb{H}(s_1 - s^*)\| \le \rho_2 \|\mathbb{D}(x_0 - x^*)\|$. Further, denote
$$\varepsilon_t \stackrel{\mathrm{def}}{=} F_{xx}^{1/2} (x_t - x^*) - P F_{ss}^{1/2} (s_t - s^*), \qquad \alpha_t \stackrel{\mathrm{def}}{=} F_{ss}^{1/2} (s_t - s^*) - P^\top F_{xx}^{1/2} (x_{t-1} - x^*).$$
Then
$$F_{xx}^{1/2} (x_t - x^*) - P P^\top F_{xx}^{1/2} (x_{t-1} - x^*) = \varepsilon_t + P \alpha_t,$$
$$F_{ss}^{1/2} (s_t - s^*) - P^\top P F_{ss}^{1/2} (s_{t-1} - s^*) = \alpha_t + P^\top \varepsilon_{t-1}.$$
Expansion (2.16) of Theorem 2.5 with $Q = F_{xx}^{1/2}$ yields by (3.4)
$$\|\varepsilon_t\| \le \tau_{\circ} \|\mathbb{H}(s_t - s^*)\|^2 \le \tau_{\circ} \rho_2^2 \|\mathbb{D}(x_{t-1} - x^*)\|^2.$$
A similar result applies to $\alpha_t$:
$$\|\alpha_t\| \le \tau_{\circ} \|\mathbb{D}(x_{t-1} - x^*)\|^2 \le \tau_{\circ} \rho_2^2 \|\mathbb{H}(s_{t-1} - s^*)\|^2.$$
As $\|P\| < 1$ and $\mathbb{D}^2 \le F_{xx}$, we conclude that
$$\bigl\| F_{xx}^{1/2} (x_t - x^*) - P P^\top F_{xx}^{1/2} (x_{t-1} - x^*) \bigr\| = \|\varepsilon_t + P \alpha_t\| \le (1 + \rho_2^2) \tau_{\circ} \|\mathbb{D}(x_{t-1} - x^*)\|^2 \le (1 + \rho_2^2) \tau_{\circ} \|\mathbb{D}(x_{t-1} - x^*)\| \, \|F_{xx}^{1/2} (x_{t-1} - x^*)\|,$$
yielding
$$\|F_{xx}^{1/2} (x_t - x^*)\| \le \bigl\{ \|P P^\top\| + (1 + \rho_2^2) \tau_{\circ} \|\mathbb{D}(x_{t-1} - x^*)\| \bigr\} \, \|F_{xx}^{1/2} (x_{t-1} - x^*)\|.$$
This and (3.3) imply linear convergence of $\|F_{xx}^{1/2} (x_t - x^*)\|$ to zero at the rate $\|P P^\top\|$. Similarly,
$$\bigl\| F_{ss}^{1/2} (s_{t+1} - s^*) - P^\top P F_{ss}^{1/2} (s_t - s^*) \bigr\| = \|\alpha_{t+1} + P^\top \varepsilon_t\| \le (1 + \rho_2^2) \tau_{\circ} \|\mathbb{H}(s_t - s^*)\|^2 \le (1 + \rho_2^2) \tau_{\circ} \|\mathbb{H}(s_t - s^*)\| \, \|F_{ss}^{1/2} (s_t - s^*)\|.$$
This ensures linear convergence of $\|F_{ss}^{1/2} (s_t - s^*)\|$ to zero at the same rate $\|P P^\top\|$.

4 Sup-norm expansions for linearly perturbed optimization

Let $f(\upsilon)$ be a convex function and $\upsilon^* = \operatorname{argmin}_{\upsilon} f(\upsilon)$. Define $F = \nabla^2 f(\upsilon^*) = (F_{jm})$. Let also $g(\upsilon)$ be obtained by a linear perturbation of $f(\upsilon)$ as in (A.1) and let $\upsilon^{\circ} = \operatorname{argmin}_{\upsilon} g(\upsilon)$. We intend to bound the corresponding change $\upsilon^{\circ} - \upsilon^*$ in sup-norm. The strategy of the study is to fix one component $\upsilon_j$ of $\upsilon$ and treat the remaining entries as the nuisance parameter.

4.1 Conditions

Fix a metric tensor $\mathbb{D}$ in diagonal form: $\mathbb{D} \stackrel{\mathrm{def}}{=} \operatorname{diag}(\mathbb{D}_1, \ldots, \mathbb{D}_p)$. Later we assume $\mathbb{D}_j^2 = F_{jj}$. For each $j \le p$, we will use the representation $\upsilon = (\upsilon_j, s_j)$, where $s_j \in \mathbb{R}^{p-1}$ collects all the remaining entries of $\upsilon$. Similarly, define $\mathbb{D} = (\mathbb{D}_j, \mathbb{H}_j)$ with $\mathbb{H}_j = \operatorname{diag}(\mathbb{D}_m, \, m \ne j)$. Define also, for some radius $r_\infty$,
$$\Upsilon^{\circ} = \{\upsilon \colon \|\mathbb{D}(\upsilon - \upsilon^*)\|_\infty \le r_\infty\} = \bigl\{ \upsilon \colon \max_{j \le p} |\mathbb{D}_j (\upsilon_j - \upsilon_j^*)| \le r_\infty \bigr\}.$$
Assume the following condition.

$(T_\infty^*)$ For each $j \le p$, the function $f(\upsilon_j, s_j)$ fulfills
$$\sup_{s_j \colon \|\mathbb{H}_j (s_j - s_j^*)\|_\infty \le r_\infty} \; \sup_{\upsilon_j \colon \mathbb{D}_j |\upsilon_j - \upsilon_j^*| \le 2 r_\infty} \frac{\bigl| \nabla^{(3)}_{\upsilon_j \upsilon_j \upsilon_j} f(\upsilon_j, s_j) \bigr|}{\mathbb{D}_j^3} \le \tau_3, \tag{4.1}$$
and
$$\sup_{\upsilon = (\upsilon_j^*, s_j) \in \Upsilon^{\circ}} \; \sup_{z_j \in \mathbb{R}^{p-1}} \frac{|\langle \nabla^{(3)}_{\upsilon_j \upsilon_j s_j} f(\upsilon_j^*, s_j), z_j \rangle|}{\mathbb{D}_j^2 \, \|\mathbb{H}_j z_j\|_\infty} \le \tau_{21}, \qquad \sup_{\upsilon = (\upsilon_j^*, s_j) \in \Upsilon^{\circ}} \; \sup_{z_j \in \mathbb{R}^{p-1}} \frac{|\langle \nabla^{(3)}_{\upsilon_j s_j s_j} f(\upsilon_j^*, s_j), z_j^{\otimes 2} \rangle|}{\mathbb{D}_j \, \|\mathbb{H}_j z_j\|_\infty^2} \le \tau_{12}.$$

For the dual norm $\| \cdot \|_*$ from (2.12), define
$$\rho_1 \stackrel{\mathrm{def}}{=} \max_{j = 1, \ldots, p} \|\mathbb{D}_j^{-1} F_{\upsilon_j, s_j} \mathbb{H}_j^{-1}\|_* = \max_{j = 1, \ldots, p} \sup_{\|z\|_\infty \le 1} \|\mathbb{D}_j^{-1} F_{\upsilon_j, s_j} \mathbb{H}_j^{-1} z\|. \tag{4.2}$$

Lemma 4.1. It holds
$$\rho_1^2 \le \max_{j = 1, \ldots, p} \frac{1}{\mathbb{D}_j^2} \sum_{m \ne j} \frac{F_{jm}^2}{\mathbb{D}_m^2}.$$

Proof. By definition,
$$\rho_1^2 \le \max_{j = 1, \ldots, p} \sup_{\|z\|_\infty \le 1} \sum_{m \ne j} \frac{F_{jm}^2 z_m^2}{\mathbb{D}_j^2 \mathbb{D}_m^2} \le \max_{j = 1, \ldots, p} \frac{1}{\mathbb{D}_j^2} \sum_{m \ne j} \frac{F_{jm}^2}{\mathbb{D}_m^2},$$
as claimed.

It is mandatory for the proposed approach that $\rho_1 < 1$.

4.2 A sup-norm expansion for a linear perturbation

The next result provides an upper bound on $\|\upsilon^{\circ} - \upsilon^*\|_\infty$.

Proposition 4.2. Let $f(\upsilon)$ be a convex function and $g(\upsilon)$ a linear perturbation of $f(\upsilon)$ with a vector $A$. For a diagonal metric tensor $\mathbb{D}$, assume $\rho_1 < 1$; see (4.2). Fix
$$r_\infty = \frac{\sqrt{2}}{1 - \rho_1} \|\mathbb{D}^{-1} A\|_\infty.$$
Let $(T_\infty^*)$ hold with this $r_\infty$ and $\tau_{12}, \tau_{21}, \tau_3$ satisfying
$$\omega = \tau_{21} r_\infty \le 1/4, \qquad \tau_{12} r_\infty \le 1/4, \qquad \tau_\infty \|\mathbb{D}^{-1} A\|_\infty \le \sqrt{2} - 1, \tag{4.3}$$
where
$$\tau_{\circ} \stackrel{\mathrm{def}}{=} \frac{1}{1-\omega} \Bigl( \rho_1 \tau_{21} + \frac{\tau_{12}}{2} + \frac{3 (\rho_1 + \omega/2)^2 \tau_3}{4 (1-\omega)^2} \Bigr), \qquad \tau_\infty \stackrel{\mathrm{def}}{=} 2 \tau_3 + \frac{\tau_{21}}{2} + \frac{2 (\tau_{\circ} + \tau_{21})}{(1 - \rho_1)^2}. \tag{4.4}$$
Then $\upsilon^{\circ} = \operatorname{argmin}_{\upsilon} g(\upsilon)$ satisfies
$$\|\mathbb{D}(\upsilon^{\circ} - \upsilon^*)\|_\infty \stackrel{\mathrm{def}}{=} \max_{j \le p} |\mathbb{D}_j (\upsilon_j^{\circ} - \upsilon_j^*)| \le r_\infty. \tag{4.5}$$
Furthermore,
$$\bigl\| \mathbb{D}^{-1} \bigl( F(\upsilon^{\circ} - \upsilon^*) + A \bigr) \bigr\|_\infty \le \tau_\infty \|\mathbb{D}^{-1} A\|_\infty^2, \tag{4.6}$$
$$\bigl\| \mathbb{D} \bigl( \upsilon^{\circ} - \upsilon^* + F^{-1} A \bigr) \bigr\|_\infty \le \frac{\tau_\infty}{1 - \rho_1} \|\mathbb{D}^{-1} A\|_\infty^2,$$
and, with $\Delta \stackrel{\mathrm{def}}{=} I_p - \mathbb{D}^{-1} F \mathbb{D}^{-1}$,
$$\bigl\| \mathbb{D}(\upsilon^{\circ} - \upsilon^*) + \mathbb{D}^{-1} A \bigr\|_\infty \le \frac{\tau_\infty}{1 - \rho_1} \|\mathbb{D}^{-1} A\|_\infty^2 + \frac{\rho_1}{1 - \rho_1} \|\mathbb{D}^{-1} A\|_\infty,$$
$$\bigl\| \mathbb{D}(\upsilon^{\circ} - \upsilon^*) + (I_p + \Delta) \mathbb{D}^{-1} A \bigr\|_\infty \le \frac{\tau_\infty}{1 - \rho_1} \|\mathbb{D}^{-1} A\|_\infty^2 + \frac{\rho_1^2}{1 - \rho_1} \|\mathbb{D}^{-1} A\|_\infty.$$

Remark 4.1. To gain intuition about the result of the proposition, consider a typical situation with $\tau_{12} \le \tau_3$, $\tau_{21} \le \tau_3$, $\omega \le 1/4$, $1 - \rho_1 \ge 1/\sqrt{2}$. Then $\tau_{\circ}$ from (4.4) fulfills $\tau_{\circ} \le 1.37 \, \tau_3$, and it holds $\tau_\infty \le 12 \, \tau_3$.

4.3 Sup-norm expansions for a separable perturbation

A linear perturbation is a special case of a separable perturbation of the form $t(\upsilon) = \sum_j t_j(\upsilon_j)$.
Another popular example is Tikhonov regularization with $t_j(\upsilon_j) = \lambda \upsilon_j^2$ or, more generally, a Sobolev-type penalty $t_j(\upsilon_j) = g_j^2 \upsilon_j^2$ for a given sequence $g_j^2$. Later we assume each component $t_j(\upsilon_j)$ to be sufficiently smooth. This does not allow one to incorporate a sparse penalty $t_j(\upsilon_j) = \lambda |\upsilon_j|$ or a complexity penalty $t_j(\upsilon_j) = \lambda \, \mathbb{1}(\upsilon_j \ne 0)$.

Let $f(\upsilon)$ be convex, $\upsilon^* = \operatorname{argmin}_{\upsilon} f(\upsilon)$, and $F = \nabla^2 f(\upsilon^*) = (F_{jm})$. Define a separable perturbation $g(\upsilon)$ of $f(\upsilon)$ by smooth functions $t_j(\upsilon_j)$:
$$g(\upsilon) = f(\upsilon) + t(\upsilon) = f(\upsilon) + \sum_j t_j(\upsilon_j), \tag{4.7}$$
and let $\upsilon^{\circ} = \operatorname{argmin}_{\upsilon} g(\upsilon)$. We intend to bound the corresponding change $\upsilon^{\circ} - \upsilon^*$ in sup-norm. Separability of the penalty is very useful: the cross derivatives of $g$ are the same as for $f$. We only update the tensor $\mathbb{D}(\upsilon)$, assuming convexity of each $t_j(\cdot)$:
$$\mathbb{D}(\upsilon) \stackrel{\mathrm{def}}{=} \operatorname{diag}\{\mathbb{D}_1(\upsilon), \ldots, \mathbb{D}_p(\upsilon)\}, \qquad \mathbb{D}_j^2(\upsilon) \stackrel{\mathrm{def}}{=} F_{jj}(\upsilon) + t_j''(\upsilon_j). \tag{4.8}$$

Proposition 4.3. Let $f(\upsilon)$ be a convex function and $g(\upsilon)$ a separable perturbation of $f(\upsilon)$; see (4.7). For $\upsilon^{\circ} = \operatorname{argmin}_{\upsilon} g(\upsilon)$, define the diagonal metric tensor $\mathbb{D} = \mathbb{D}(\upsilon^{\circ})$; see (4.8). Let $\rho_1$ from (4.2) fulfill $\rho_1 < 1$. Fix $M \stackrel{\mathrm{def}}{=} \nabla t(\upsilon^*) = (t_j'(\upsilon_j^*))$ and
$$r_\infty = \frac{\sqrt{2}}{1 - \rho_1} \|\mathbb{D}^{-1} M\|_\infty.$$
Let $(T_\infty^*)$ hold for $g(\cdot)$ with this $r_\infty$ and $\tau_{12}, \tau_{21}, \tau_3$ satisfying
$$\omega = \tau_{21} r_\infty \le 1/4, \qquad \tau_{12} r_\infty \le 1/4, \qquad \tau_\infty \|\mathbb{D}^{-1} M\|_\infty \le \sqrt{2} - 1,$$
where
$$\tau_{\circ} \stackrel{\mathrm{def}}{=} \frac{1}{1-\omega} \Bigl( \rho_1 \tau_{21} + \frac{\tau_{12}}{2} + \frac{3 (\rho_1 + \omega/2)^2 \tau_3}{4 (1-\omega)^2} \Bigr), \qquad \tau_\infty \stackrel{\mathrm{def}}{=} 2 \tau_3 + \frac{\tau_{21}}{2} + \frac{2 (\tau_{\circ} + \tau_{21})}{(1 - \rho_1)^2}.$$
Then $\upsilon^{\circ} = \operatorname{argmin}_{\upsilon} g(\upsilon)$ satisfies
$$\|\mathbb{D}(\upsilon^{\circ} - \upsilon^*)\|_\infty \stackrel{\mathrm{def}}{=} \max_{j \le p} |\mathbb{D}_j (\upsilon_j^{\circ} - \upsilon_j^*)| \le r_\infty.$$
Furthermore,
$$\bigl\| \mathbb{D}^{-1} \bigl( F(\upsilon^{\circ} - \upsilon^*) + M \bigr) \bigr\|_\infty \le \tau_\infty \|\mathbb{D}^{-1} M\|_\infty^2,$$
$$\bigl\| \mathbb{D} \bigl( \upsilon^{\circ} - \upsilon^* + F^{-1} M \bigr) \bigr\|_\infty \le \frac{\tau_\infty}{1 - \rho_1} \|\mathbb{D}^{-1} M\|_\infty^2,$$
and, with $\Delta \stackrel{\mathrm{def}}{=} I_p - \mathbb{D}^{-1} F \mathbb{D}^{-1}$,
$$\bigl\| \mathbb{D}(\upsilon^{\circ} - \upsilon^*) + \mathbb{D}^{-1} M \bigr\|_\infty \le \frac{\tau_\infty}{1 - \rho_1} \|\mathbb{D}^{-1} M\|_\infty^2 + \frac{\rho_1}{1 - \rho_1} \|\mathbb{D}^{-1} M\|_\infty,$$
$$\bigl\| \mathbb{D}(\upsilon^{\circ} - \upsilon^*) + (I_p + \Delta) \mathbb{D}^{-1} M \bigr\|_\infty \le \frac{\tau_\infty}{1 - \rho_1} \|\mathbb{D}^{-1} M\|_\infty^2 + \frac{\rho_1^2}{1 - \rho_1} \|\mathbb{D}^{-1} M\|_\infty.$$

Remark 4.2. If each $t_j(\upsilon_j)$ is quadratic, then condition $(T_\infty^*)$ for $g(\upsilon)$ follows from the same condition for $f(\upsilon)$, as the third derivatives of these two functions coincide. For general smooth perturbations $t_j$, (4.1) should be checked for $\nabla^3_{\upsilon_j \upsilon_j \upsilon_j} g(\upsilon) = \nabla^3_{\upsilon_j \upsilon_j \upsilon_j} f(\upsilon) + t_j'''(\upsilon_j)$. The other conditions in $(T_\infty^*)$ can be checked for $f$.

5 Estimation for the Bradley-Terry-Luce model

Let $G = (V, E)$ stand for a comparison graph, where the vertex set $V = \{1, 2, \ldots, n\}$ represents the $n$ items of interest. The items $j$ and $m$ are compared if and only if $(jm) \in E$. One observes independent paired comparisons $Y_{jm}^{(\ell)}$, $\ell = 1, \ldots, N_{jm}$, with $Y_{jm}^{(\ell)} = 1 - Y_{mj}^{(\ell)}$. For modeling and risk analysis, the Bradley-Terry-Luce (BTL) model is frequently used; see Bradley and Terry (1952), Luce (1959). The chance of each item winning a paired comparison is determined by the relative scores:
$$P\bigl( \text{item } j \text{ is preferred over item } m \bigr) = P\bigl( Y_{jm}^{(\ell)} = 1 \bigr) = \frac{e^{\upsilon_j^*}}{e^{\upsilon_j^*} + e^{\upsilon_m^*}} = \frac{1}{1 + e^{\upsilon_m^* - \upsilon_j^*}}.$$
The goal is to recover the score vector $\upsilon = (\upsilon_1, \ldots, \upsilon_n)^\top$ and the top-$k$ items. Most of the theoretical results for the BTL model have been established under the following assumptions:

• $G$ is a random Erdős–Rényi graph with edge probability $p$; $p \ge C n^{-1} \log n$ ensures a connected graph with overwhelming probability;
• the values $N_{jm}$ are all the same: $N_{jm} \equiv L$ for all $(j,m) \in E$;
• for the ordered sequence $\upsilon_{(1)}^* \ge \upsilon_{(2)}^* \ge \ldots \ge \upsilon_{(n)}^*$, it holds $\upsilon_{(k)}^* - \upsilon_{(k+1)}^* > \Delta$;
• $\upsilon_{(1)}^* - \upsilon_{(n)}^* \le R$;

see e.g. Chen et al. (2020) and Gao et al. (2023) for a related discussion. Under such assumptions, Chen et al. (2019) and Chen et al. (2020) showed that the condition
$$\Delta^2 \ge \frac{C \log n}{n p L}$$
enables one to identify the top-$k$ set with high probability. Both regularized MLE and a spectral method are rate optimal.
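The BTL win probability above is easy to simulate. The following sketch (with made-up scores and sample sizes, not the paper's experiment) draws comparisons and checks the empirical win frequency against $1/(1 + e^{\upsilon_m^* - \upsilon_j^*})$:

```python
import math
import random

# Sketch of the BTL comparison model: item j beats item m with probability
# 1 / (1 + exp(v_m - v_j)). Scores and sample sizes are illustrative only.

def win_prob(v_j, v_m):
    return 1.0 / (1.0 + math.exp(v_m - v_j))

def simulate_wins(v_j, v_m, n_games, rng):
    """Number of wins of item j over item m in n_games comparisons."""
    return sum(rng.random() < win_prob(v_j, v_m) for _ in range(n_games))

rng = random.Random(0)
p = win_prob(2.0, 0.0)                      # about 0.881
wins = simulate_wins(2.0, 0.0, 10_000, rng)
print(abs(wins / 10_000 - p) < 0.02)        # empirical frequency matches
```

Note the model is invariant under a common shift of all scores, which is exactly the lack-of-identifiability issue addressed below by a constraint or penalty.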
We refer to Gao et al. (2023) for an extensive overview and recent results for the BTL model, including a non-asymptotic MLE expansion. Unfortunately, some of the mentioned assumptions can be very restrictive in practical applications. This especially concerns the graph structure and the design of the set of comparisons. The assumption of a bounded dynamic range $R$ is very useful for the theoretical study because it allows one to bound the success probability of each game away from zero and one. However, it seems questionable for many real-life applications. Our aim is to demonstrate that the general approach of the paper enables us to get accurate results under

• an arbitrary configuration of the graph $G$;
• heterogeneous numbers $N_{jm}$ of comparisons per edge;
• an unbounded range $\upsilon_{(1)}^* - \upsilon_{(n)}^*$.

We still assume that $G$ is connected; otherwise, each connected component should be considered separately. For $j < m$, denote $S_{jm} = \sum_{\ell=1}^{N_{jm}} Y_{jm}^{(\ell)}$, and $S_{jm} = 0$ if $N_{jm} = 0$. With $\phi(\upsilon) = \log(1 + e^{\upsilon})$, the negative log-likelihood for the parameter vector
$\upsilon^*$ reads as follows:
$$L(\upsilon) = -\sum_{m=1}^{n} \sum_{j=1}^{m-1} \bigl\{ (\upsilon_j - \upsilon_m) S_{jm} - N_{jm} \phi(\upsilon_j - \upsilon_m) \bigr\}, \tag{5.1}$$
leading to the MLE
$$\tilde{\upsilon} = \operatorname{argmin}_{\upsilon} L(\upsilon).$$
The function $\phi(\upsilon) = \log(1 + e^{\upsilon})$ is convex, hence $L(\upsilon)$ is convex as well. However, representation (5.1) reveals a lack-of-identifiability problem: $\tilde{\upsilon}$ is not unique, since any shift $\upsilon \to \upsilon + a e$ with $e = n^{-1/2}(1, \ldots, 1)^\top \in \mathbb{R}^n$ does not change $L(\upsilon)$. Therefore, the Fisher information matrix $F(\upsilon) = \nabla^2 L(\upsilon)$ is not positive definite, $L(\upsilon)$ is not strongly convex, and thus $\tilde{\upsilon}$ is not uniquely defined. For a connected graph $G$, assumed from now on, this issue can be resolved by fixing one component of $\upsilon$, e.g. $\upsilon_1 = 0$, or by the condition $\sum_j \upsilon_j = 0$. In general, one needs one condition per connected component of the graph $G$. Alternatively, one can use penalization with a quadratic penalty $\|G \upsilon\|^2 / 2$. A "non-informative" choice is $G^2 = g^2 I_n$; cf. Chen et al. (2019). Another option is to replace the constraint $\sum_j \upsilon_j = 0$ by the penalty $\|G \upsilon\|^2 = g^2 \langle \upsilon, e \rangle^2$ with $G^2 = g^2 e e^\top$. This penalization replaces the condition $\sum_j \upsilon_j = 0$ because it vanishes for any such $\upsilon$. If the true skill vector $\upsilon^*$ is defined under this condition, then $\upsilon_G^* = \operatorname{argmin}_{\upsilon} E L_G(\upsilon) = \upsilon^*$. Hence, the penalty $\|G \upsilon\|^2 / 2$ does not yield any estimation bias. The choice of the constant $g^2$ is not essential; we just assume it sufficiently large. One more benefit of using the penalty $\|G \upsilon\|^2$ is that it ensures the desired diagonally dominant structure of the Fisher information matrix: each diagonal element is at least the sum of the absolute values of the off-diagonal entries in its row. This property is important for proving concentration of the penalized MLE
$$\tilde{\upsilon}_G = \operatorname{argmin}_{\upsilon} L_G(\upsilon) = \operatorname{argmin}_{\upsilon} \bigl\{ L(\upsilon) + \|G \upsilon\|^2 / 2 \bigr\}.$$
The next lemma reveals some important features of the Fisher information matrix $F(\upsilon)$.

Lemma 5.1. With $\phi''(\upsilon) = \frac{e^{\upsilon}}{(1 + e^{\upsilon})^2}$, the entries $F_{jm}(\upsilon)$ of $F(\upsilon) = \nabla^2 E L(\upsilon)$ satisfy
$$F_{jj}(\upsilon) = \sum_{m \ne j} N_{jm} \phi''(\upsilon_j - \upsilon_m), \qquad F_{jm}(\upsilon) = -N_{jm} \phi''(\upsilon_j - \upsilon_m), \quad j \ne m.$$
This yields $F_{jj}(\upsilon) = -\sum_{m \ne j} F_{jm}(\upsilon)$. Moreover, each eigenvalue of $F(\upsilon)$ belongs to $[0, 2 F_{jj}(\upsilon)]$ for some $j \le n$.

Proof.
The structure of $F(\upsilon)$ follows directly from (5.1). Gershgorin's theorem implies the final statement.

The rigorous theoretical study, including the finite-sample expansions
$$\|D(\tilde{\upsilon}_G - \upsilon^* + F_G^{-1} \nabla \zeta)\|_\infty \le \frac{\tau_\infty}{1 - \rho_1} \|D^{-1} \nabla \zeta\|_\infty^2,$$
$$\|D(\tilde{\upsilon}_G - \upsilon^* + D^{-2} \nabla \zeta)\|_\infty \le \frac{\tau_\infty}{1 - \rho_1} \|D^{-1} \nabla \zeta\|_\infty^2 + \frac{\rho_1}{1 - \rho_1} \|D^{-1} \nabla \zeta\|_\infty, \tag{5.2}$$
and a check of all the related conditions, is deferred to the forthcoming paper Pankevich and Spokoiny (2025). Here we only present some numerical results illustrating the established theoretical guarantees.

We use the following experimental setup from Gao et al. (2023): an Erdős–Rényi comparison graph with $n$ vertices and edge probability $p$; the number $L$ of comparisons between pairs of items is homogeneous; the values $\upsilon_i^*$ are i.i.d. samples from $U[\upsilon_{\min}, \upsilon_{\max}]$, $R = \upsilon_{\max} - \upsilon_{\min}$. For the experiments, they used $L = 1$, $p = \log^3(n)/n$, $\upsilon_{\min} = 0$, $\upsilon_{\max} = 2$, and $n \in \{100, 200, 500, 1000, 2000\}$. The study presents the important value $\rho_1$ from (4.2) and checks the quality of the expansion (5.2).

Figure 5.1: Average $\rho_1$ value for different values of $n$.

Figure 5.1 depicts the value of $\rho_1$ for each $n$ after averaging over a few realizations of an Erdős–Rényi comparison graph with $L = 1$, $p = \log^3(n)/n$, and $\upsilon^*_j$
as i.i.d. samples from $U[0,2]$. The error bars represent three times the standard deviation. The results support the hypothesis that $\rho_1$ is small for Erdős–Rényi graphs and close to $\frac{1}{np} \ll 1$.

Figure 5.2 (left) presents the norms of the leading terms $\|F_G^{-1} \nabla \zeta\|_\infty$ and $\|D^{-2} \nabla \zeta\|_\infty$, while the right plot shows the errors $\|\tilde{\upsilon}_G - \upsilon^* + F_G^{-1} \nabla \zeta\|_\infty$ and $\|\tilde{\upsilon}_G - \upsilon^* + D^{-2} \nabla \zeta\|_\infty$ for $n = 100$. One can see that the remainder is much smaller in magnitude than the leading term, thus indicating a very good quality of the provided expansion.

Figure 5.2: Distribution of the leading term and the remainder for $n = 100$.

Similar results for $n \in \{100, 200, 500, 1000, 2000\}$ are collected in Figure 5.3.

Figure 5.3: Comparison of the leading term and the remainder for different $n$.

References

Banach, S. (1938). Über homogene Polynome in ($L^2$). Studia Mathematica, 7(1):36–44.

Bartlett, P. L., Long, P. M., Lugosi, G., and Tsigler, A. (2020). Benign overfitting in linear regression. Proceedings of the National Academy of Sciences, 117(48):30063–30070.

Bauschke, H. H. and Combettes, P. L. (2017). Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer.

Beck, A. and Tetruashvili, L. (2013). On the convergence of block coordinate descent type methods. SIAM Journal on Optimization, 23(4):2037–2060.

Bertsekas, D. P. (1999). Nonlinear Programming. Athena Scientific, 2nd edition.

Bezdek, J. C. and Hathaway, R. J. (2003). Convergence of alternating optimization. Neural, Parallel & Scientific Computations, 11(4):351–368.

Boyd, S. and Vandenberghe, L. (2004). Convex Optimization. Cambridge University Press.

Bradley, R. A. and Terry, M. E. (1952). Rank analysis of incomplete block designs: I. The method of paired comparisons. Biometrika, 39(3/4):324–345.

Chen, P., Gao, C., and Zhang, A. Y. (2020). Partial recovery for top-$k$ ranking: Optimality of MLE and sub-optimality of spectral method.

Chen, Y., Fan, J., Ma, C., and Wang, K. (2019). Spectral method and regularized MLE are both optimal for top-$K$ ranking.
Ann. Statist., 47(4):2204–2235.

Cheng, C. and Montanari, A. (2022). Dimension free ridge regression. arXiv:2210.08571.

Gao, C., Shen, Y., and Zhang, A. Y. (2023). Uncertainty quantification in the Bradley–Terry–Luce model. Information and Inference: A Journal of the IMA, 12(2):1073–1140.

Hsieh, C.-J. and Dhillon, I. S. (2011). Fast coordinate descent methods with variable selection for non-negative matrix factorization. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1064–1072. ACM.

Katsevich, A. (2023). Tight bounds on the Laplace approximation accuracy in high dimensions. arXiv:2305.17604.

Luce, R. (1959). Individual Choice Behavior: A Theoretical Analysis. Wiley.

Luo, Z.-Q. and Tseng, P. (1992). On the convergence of the coordinate descent method for convex differentiable minimization. Journal of Optimization Theory and Applications, 72(1):7–35.

Mairal, J. (2015). Incremental majorization-minimization optimization with application to large-scale machine learning. SIAM Journal on Optimization, 25(2):829–855.

Nesterov, Y. E. (1988). Polynomial methods in the linear and quadratic programming. Sov. J. Comput. Syst. Sci., 26(5):98–101.

Noskov, F., Puchkin, N., and Spokoiny, V. (2025). Dimension-free bounds in high-dimensional linear regression via error-in-operator approach. arXiv:2502.15437.

Ostrovskii, D. M. and Bach, F. (2021). Finite-sample analysis of M-estimators
https://arxiv.org/abs/2505.02562v1
using self-concordance. Electronic Journal of Statistics, 15(1):326–391.
Pankevich, S. and Spokoiny, V. (2025). Estimation and inference in BTL model. In preparation.
Razaviyayn, M., Hong, M., and Luo, Z.-Q. (2013). A unified convergence analysis of block successive minimization methods for nonsmooth optimization. SIAM Journal on Optimization, 23(2):1126–1153.
Spokoiny, V. (2025a). Estimation and inference for Deep Neuronal Networks. arxiv.org/abs/2305.08193.
Spokoiny, V. (2025b). Estimation and inference in error-in-operator model. arxiv.org/2504.11834.
Spokoiny, V. (2025c). Sharp bounds in perturbed smooth optimization. arxiv.org/2503.??
Tseng, P. (2001). Convergence of a block coordinate descent method for nondifferentiable minimization. Journal of Optimization Theory and Applications, 109(3):475–494.
Wright, S. J. (2015). Coordinate descent algorithms. Mathematical Programming, 151(1):3–34.
Xu, Y. and Yin, W. (2013). A block coordinate descent method for regularized multiconvex optimization with applications to nonnegative tensor factorization and completion. SIAM Journal on Imaging Sciences, 6(3):1758–1789.
Xu, Y. and Yin, W. (2017). A globally convergent algorithm for nonconvex optimization based on block coordinate update. Journal of Scientific Computing, 72(2):700–734.

A Optimization after linear perturbation

Here we overview the main results of Spokoiny (2025c). Let f(υ) be a smooth convex function, υ* = argmin_υ f(υ), and 𝔽 = ∇²f(υ*). Below we study the question of how the point of minimum and the value of minimum of f change if we add a linear or quadratic component to f. More precisely, let another function g(υ) satisfy, for some vector A,

g(υ) − g(υ*) = ⟨υ − υ*, A⟩ + f(υ) − f(υ*). (A.1)

A typical example corresponds to f(υ) = EL(υ) and g(υ) = L(υ) for a random function L(υ) with a linear stochastic component ζ(υ) = L(υ) − EL(υ). Then (A.1) is satisfied with A = ∇ζ.
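As a quick numerical sanity check of this setup, one can verify that adding a small linear term ⟨A, ·⟩ to a smooth strongly convex f shifts the minimizer from υ* to approximately υ* − 𝔽⁻¹A, with a remainder of order ‖A‖². The quartic f below is a hypothetical toy example, not from the paper:

```python
import numpy as np

# Toy check (hypothetical f, not from the paper): minimizing g = f + <A, .>
# moves the minimizer from v* to roughly v* - F^{-1} A, where F = Hess f(v*).
def grad_f(v):
    return v + 0.1 * v**3              # f(v) = ||v||^2/2 + 0.025 * sum(v^4), so v* = 0

def hess_f(v):
    return np.diag(1.0 + 0.3 * v**2)   # Hessian of f (diagonal for this toy f)

rng = np.random.default_rng(0)
dim = 5
v_star = np.zeros(dim)                 # argmin of f
A = 0.01 * rng.standard_normal(dim)    # small linear perturbation

v = v_star.copy()                      # Newton's method for argmin of g = f + <A, .>
for _ in range(50):
    v -= np.linalg.solve(hess_f(v), grad_f(v) + A)

leading = v_star - np.linalg.solve(hess_f(v_star), A)   # first-order expansion
remainder = np.linalg.norm(v - leading)                  # quadratically small in ||A||
```

Here the remainder is quadratically small in ‖A‖, in line with the τ₃-type bounds quantified later in this appendix.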
Define υ◦def= argmin υg(υ), g (υ◦) = min υg(υ). The aim of the analysis is to evaluate the quantities υ◦−υ∗andg(υ◦)−g(υ∗) . A.1 Smoothness conditions Assume f(υ) to be convex and three or sometimes even four times Gateaux differentiable inυ∈Υ. For a fixed υ∈Υ, we need such a condition uniformly in a local vicinity of a fixed point υ∈Υ. The notion of locality is given in terms of a metric tensor 𝔻(υ)∈Mp. In most of the results later on, one can use 𝔻(υ) =𝔽1/2(υ) , where 𝔽(υ) =∇2f(υ) . Introduce the following conditions. (T∗ 3)f(υ)is three times differentiable and sup u:∥𝔻(υ)u∥≤rsup z∈ Rp ⟨∇3f(υ+u),z⊗3⟩ ∥𝔻(υ)z∥3≤τ3. (T∗ 4)f(υ)is four times differentiable and sup u:∥𝔻(υ)u∥≤rsup z∈ Rp ⟨∇4f(υ+u),z⊗4⟩ ∥𝔻(υ)z∥4≤τ4. By Banach’s characterization Banach (1938), (T∗ 3)implies ⟨∇3f(υ+u),z1⊗z2⊗z3⟩ ≤τ3∥𝔻(υ)z1∥∥𝔻(υ)z2∥∥𝔻(υ)z3∥ 25 for any uwith∥𝔻(υ)u∥ ≤rand all z1,z2,z3∈ Rp. Similarly under (T∗ 4) ⟨∇4f(υ+u),z1⊗z2⊗z3⊗z4⟩ ≤τ44Y k=1∥𝔻(υ)zk∥,z1,z2,z3,z4∈ Rp. The values τ3andτ4are usually very small. Some quantitative bounds are given later in this section under the assumption that the function f(υ) can be written in the form f(υ) =nh(υ) for a fixed smooth function h(υ) with the Hessian ∇2h(υ) . The factor nhas meaning of the sample size. Define 𝕞(υ) by 𝔻2(υ) =n𝕞2(υ) . (S∗ 3)f(υ) =nh(υ)forh(υ)three times differentiable and sup u:∥𝕞(υ)u∥≤r/√n ⟨∇3h(υ+u),u⊗3⟩ ∥𝕞(υ)u∥3≤c3. (S∗ 4)the function h(·)satisfies (S∗ 3)and sup u:∥𝕞(υ)u∥≤r/√n ⟨∇4h(υ+u),u⊗4⟩ ∥𝕞(υ)u∥4≤c4. (S∗ 3)and(S∗ 4)are local versions of the so-called self-concordance condition; see Nes- terov (1988) and Ostrovskii and Bach
(2021). In fact, they require that each univariate function h(υ+tu) ,t∈ R, is self-concordant with some universal constants c3and c4. A.2 A basic lemma This section presents a basic lemma from Spokoiny (2025a). Proposition A.1. Letf(υ)be a strongly concave function with f(υ∗) = min υf(υ) and𝔽=∇2f(υ∗). Assume (T∗ 3)atυ∗with𝔻2,r, and τ3such that 𝔻2≤κ2𝔽,r≥3 2∥𝔻 𝔽−1A∥,κ2τ3∥𝔻 𝔽−1A∥<4 9. Then ∥𝔻(υ◦−υ∗)∥ ≤(3/2)∥𝔻 𝔽−1A∥and moreover, ∥𝔻−1𝔽(υ◦−υ∗+𝔽−1A)∥ ≤3τ3 4∥𝔻 𝔽−1A∥2. (A.2) Moreover, 2g(υ◦)−2g(υ∗) +∥𝔽−1/2A∥2 ≤τ3 2∥𝔻 𝔽−1A∥3. (A.3) Remark A.1. The roles of fandgcan be exchanged. In particular, (A.2) applies with 𝔽=𝔽(υ◦) provided that (T∗ 3)is also fulfilled at υ◦. 26 Marginal and alternating minimization, sup-norm bounds B Proofs Here we collect the proofs of the main statements of the paper. B.1 Partial and marginal optimization. Proofs Proof of Proposition 2.1. Define gs(x) = fs(x)− ⟨x,As⟩. Then gsis concave and ∇gs(x∗) = 0 yielding x∗= argmaxxgs(x) . Now Theorem A.1 yields (2.2). Also by (A.3) 2gs(x∗)−2gs(xs) +∥𝔽−1/2 sAs∥2 ≤τ3 2∥𝔻s𝔽−1 sAs∥3 yielding 2fs(x∗)−2fs(xs)−2⟨x∗−xs,As⟩+∥𝔽−1/2 sAs∥2 ≤τ3 2∥𝔻s𝔽−1 sAs∥3. (B.1) By (2.2) ⟨x∗−xs,As⟩+∥𝔽−1/2 sAs∥2 = ⟨𝔽1/2 s(x∗−xs),𝔽−1/2 sAs⟩+⟨𝔽−1/2 sAs,𝔽−1/2 sAs⟩ = ⟨𝔽1/2 s(x∗−xs+𝔽−1 sAs),𝔽−1/2 sAs⟩ ≤ ∥𝔻−1 s𝔽s(x∗−xs+𝔽−1 sAs)∥ ∥𝔻s𝔽−1 sAs∥ ≤3τ3 4∥𝔻s𝔽−1 sAs∥3 This and (B.1) imply (2.3). Proof of Theorem 2.5. First we bound the variability of 𝔽s=∇xsf(x∗,s) over S. Lemma B.1. Assume (T∗ 3,S). Then for any s∈ S, with 𝔽=𝔽s∗, ∥𝔻−1(𝔽s−𝔽)𝔻−1∥ ≤τ21∥ℍ(s−s∗)∥◦. (B.2) Proof. Lets∈ S. By (2.11), for any z∈ Rp, it holds 𝔻−1(𝔽s−𝔽)𝔻−1,z⊗2 = 𝔽s−𝔽,(𝔻−1z)⊗2 ≤sup t∈[0,1] ⟨∇3 xxsf(x∗,s∗+t(s−s∗)),(𝔻−1z)⊗2⊗(s−s∗)⟩ ≤τ21∥z∥2∥ℍ(s−s∗)∥◦. This yields (B.2). The next result describes some corollaries of (B.2). 27 Lemma B.2. Assume 𝔻2≤κ2𝔽and let some other matrix 𝔽1∈Mpsatisfy ∥𝔻−1(𝔽1−𝔽)𝔻−1∥ ≤κ−2ω (B.3) with ω <1. Then ∥𝔽−1/2(𝔽1−𝔽)𝔽−1/2∥ ≤ ω , (B.4) ∥𝔽1/2(𝔽−1 1−𝔽−1)𝔽1/2∥ ≤ω 1−ω, (B.5) and 1 1 +ω∥𝔻 𝔽−1𝔻∥ ≤ ∥𝔻 𝔽−1 1𝔻∥ ≤1 1−ω∥𝔻 𝔽−1𝔻∥. 
(B.6) Furthermore, for any vector u (1−ω)∥𝔻−1𝔽u∥ ≤ ∥ 𝔻−1𝔽1u∥ ≤(1 +ω)∥𝔻−1𝔽u∥, (B.7) 1−2ω 1−ω∥𝔻 𝔽−1u∥ ≤ ∥ 𝔻 𝔽−1 1u∥ ≤1 1−ω∥𝔻 𝔽−1u∥. (B.8) Proof. Statement (B.4) follows from (B.3) because of 𝔽−1≤κ2𝔻−2. Define now Udef= 𝔽−1/2(𝔽1−𝔽)𝔽−1/2. Then ∥U∥ ≤ωand ∥𝔽1/2(𝔽−1 1−𝔽−1)𝔽1/2∥=∥( I+U)−1− I∥ ≤1 1−ω∥U∥ yielding (B.5). Further, ∥𝔻(𝔽−1 1−𝔽−1)𝔻∥=∥𝔻 𝔽−1 1𝔽1(𝔽−1 1−𝔽−1)𝔽 𝔽−1𝔻∥ =∥𝔻 𝔽−1 1𝔻 𝔻−1(𝔽1−𝔽)𝔻−1𝔻 𝔽−1𝔻∥ ≤ ∥𝔻 𝔽−1 1𝔻∥∥𝔻 𝔽−1𝔻∥∥𝔻−1(𝔽1−𝔽)𝔻−1∥ ≤ω∥𝔻 𝔽−1 1𝔻∥. This implies (B.6). Also, by 𝔻2≤κ2𝔽 ∥𝔻−1𝔽1u∥ ≤ ∥ 𝔻−1𝔽u∥+∥𝔻−1(𝔽1−𝔽)𝔻−1𝔻u∥ ≤ ∥𝔻−1𝔽u∥+ω∥𝔻u∥ ≤ ∥𝔻−1𝔽u∥+ω∥𝔻−1𝔽u∥ ≤(1 +ω)∥𝔻−1𝔽u∥, and (B.7) follows. Similarly ∥𝔻(𝔽−1 1−𝔽−1)u∥=∥𝔻 𝔽−1 1(𝔽1−𝔽)𝔽−1u∥=∥𝔻 𝔽−1 1𝔻 𝔻−1(𝔽1−𝔽)𝔻−1𝔻 𝔽−1u∥ ≤ ∥𝔻−1(𝔽1−𝔽)𝔻−1∥ ∥𝔻 𝔽−1 1𝔻∥ ∥𝔻 𝔽−1u∥ ≤ω 1−ω∥𝔻 𝔽−1u∥ and (B.8) follows as well. 28 Marginal and alternating minimization, sup-norm bounds Definition (2.12) implies the following bound. Lemma B.3. For any s∈ S and any linear mapping Q, it holds ∥QFxs(s−s∗)∥ ≤ ∥ QFxsℍ−1∥∗∥ℍ(s−s∗)∥◦. The next lemma shows that As=∇fs(x∗) is nearly linear in s. Lemma B.4. Assume (2.10) . Then with ρ∗=∥𝔻−1Fxsℍ−1∥∗ ∥𝔻−1{As+Fxs(s−s∗)}∥ ≤τ12 2∥ℍ(s−s∗)∥2 ◦, (B.9) ∥𝔻−1As∥ ≤ρ∗∥ℍ(s−s∗)∥◦+τ12 2∥ℍ(s−s∗)∥2 ◦. (B.10) Proof. Fixs∈ S and define as(t)def=As∗+t(s−s∗). Then as(0) = As∗= 0 , as(1) = As, and as(1)−as(0) =Z1 0a′ s(t)dt , where a′ s(t) =d dtas(t) for t∈[0,1] . Similarly, by a′ s(0) =−Fxs(s−s∗) , we derive as(1)−as(0)−a′ s(0) =Z1 0(a′ s(t)−a′ s(0))dt=Z1 0(1−t)a′′ s(t)dt , where a′′ s(t) =d2 dt2as(t)
. By condition (T∗ 3,S) ⟨a′′ s(t),z⟩ = ∇3 xssf(x∗,s∗+t(s−s∗)),z⊗(s−s∗)⊗2 ≤τ12∥𝔻z∥∥ℍ(s−s∗)∥2 ◦ and hence, ∥𝔻−1a′′ s(t)∥= sup z:∥z∥≤1 ⟨𝔻−1a′′ s(t),z⟩ = sup z:∥z∥≤1 ⟨a′′ s(t),𝔻−1z⟩ ≤τ12sup z:∥z∥≤1∥z∥∥ℍ(s−s∗)∥2 ◦=τ12∥ℍ(s−s∗)∥2 ◦. This yields ∥𝔻−1{As+Fxs(s−s∗)}∥ ≤ τ12∥ℍ(s−s∗)∥2 ◦Z1 0(1−t)dt≤τ12∥ℍ(s−s∗)∥2 ◦ 2 as claimed in (B.9). Lemma B.3 implies (B.10). Now Proposition 2.1 helps to show (2.15) and to bound xs−x∗+𝔽−1 sAs. 29 Lemma B.5. Let∥ℍ(s−s∗)∥◦≤r◦andτ21r◦≤ω <1. Then with ρ2from (2.14) ∥𝔻(xs−x∗)∥ ≤3 2∥𝔻 𝔽−1 sAs∥ ≤ρ2∥ℍ(s−s∗)∥◦. (B.11) Moreover, ∥𝔻−1𝔽s(xs−x∗+𝔽−1 sAs)∥ ≤τ3ρ2 2 3∥ℍ(s−s∗)∥2 ◦. (B.12) Proof. By𝔻2≤𝔽, (B.2), (B.8) of Lemma B.2, and (B.10) ∥𝔻 𝔽−1 sAs∥ ≤ ∥ 𝔻 𝔽−1 s𝔻∥ ∥𝔻−1As∥ ≤1 1−ω∥𝔻−1As∥ ≤2ρ2 3∥ℍ(s−s∗)∥◦. This enables us to apply Proposition 2.1 which implies (B.11) and ∥𝔻−1𝔽s(xs−x∗+𝔽−1 sAs)∥ ≤3τ3 4∥𝔻 𝔽−1 sAs∥2≤ρ2 2 3∥ℍ(s−s∗)∥2 ◦. This completes the proof. Now we can finalize the proof of the main result. With ∆sdef=−Fxs(s−s∗) , it holds xs−x∗+𝔽−1∆s=xs−x∗+𝔽−1 sAs−𝔽−1 sAs+𝔽−1∆s =xs−x∗+𝔽−1 sAs−𝔽−1 s(As−∆s)−(𝔽−1 s−𝔽−1)∆s. The use of (B.2) and (2.13) yields ∥𝔻−1𝔽s(𝔽−1−𝔽−1 s)∆s∥=∥𝔻−1(𝔽−𝔽s)𝔽−1∆s∥ ≤ ∥𝔻−1(𝔽−𝔽s)𝔻−1∥ ∥𝔻 𝔽−1𝔻∥ ∥𝔻−1Fxsℍ−1∥∗∥ℍ(s−s∗)∥◦ ≤τ21∥ℍ(s−s∗)∥◦ρ∗∥ℍ(s−s∗)∥◦. This together with (B.10) and (B.12) implies ∥𝔻−1𝔽s(xs−x∗+𝔽−1∆s)∥ ≤nτ3ρ2 2 3+ρ∗τ21+τ12 2o ∥ℍ(s−s∗)∥2 ◦. The use of (B.7) allows to bound ∥𝔻−1𝔽(xs−x∗+𝔽−1∆s)∥ ≤1 1−ω∥𝔻−1𝔽s(xs−x∗+𝔽−1∆s)∥, and (2.16) follows. 30 Marginal and alternating minimization, sup-norm bounds Proof of Proposition 2.6. Proposition 2.1 applied to fs(x) and gs(x) =fs(x)− ⟨A,x⟩ implies by (B.8) ∥𝔻(x◦ s−xs)∥ ≤3 2∥𝔻 𝔽−1 sA∥ ≤3 2(1−ω)∥𝔻−1A∥. This and (2.15) imply (2.18). Next we check (2.19) using the decomposition x◦ s−x∗+𝔽−1Fxs(s−s∗) +𝔽−1A = (x◦ s−xs+𝔽−1 sA)−(𝔽−1 sA −𝔽−1A) +{xs−x∗+𝔽−1Fxs(s−s∗)}. Proposition 2.5 evaluates the last term. Lemma B.1 helps to bound ∥𝔻−1𝔽(𝔽−1 s−𝔽−1)𝔽s𝔻−1∥=∥𝔻−1(𝔽s−𝔽)𝔻−1∥ ≤τ21∥ℍ(s−s∗)∥◦. This yields ∥Q(𝔽−1 s−𝔽−1)A∥ ≤ ∥ Q𝔽−1𝔻∥∥𝔻−1𝔽s(𝔽−1 s−𝔽−1)𝔽 𝔻−1∥∥𝔻 𝔽−1 sA∥ ≤ ∥Q𝔽−1𝔻∥τ21∥ℍ(s−s∗)∥◦1 1−ω∥𝔻 𝔽−1A∥. 
Moreover, for εsdef=x◦ s−xs+𝔽−1 sA, it holds ∥𝔻−1𝔽sεs∥ ≤3τ3 4∥𝔻 𝔽−1 sA∥2≤3τ3 4(1−ω)2∥𝔻 𝔽−1A∥2, and by (B.8) and ω≤1/4 ∥Qεs∥ ≤ ∥ Q𝔽−1𝔻∥ ∥𝔻−1𝔽εs∥ ≤ ∥ Q𝔽−1𝔻∥1−ω 1−2ω∥𝔻−1𝔽sεs∥ ≤ ∥Q𝔽−1𝔻∥3τ3 4(1−ω)(1−2ω)∥𝔻 𝔽−1A∥2≤ ∥Q𝔽−1𝔻∥2τ3∥𝔻 𝔽−1A∥2. The obtained bounds imply (2.19) in view of (4 /3)ab≤a2+b2/2 for any a, b. B.2 Sup-norm expansions. Proofs Proof of Proposition 4.2. Let us fix any j≤p, e.g. j= 1 . Represent any υ∈Υ◦as υ= (υ1,s1) , where s1= (υ2, . . . , υ p)⊤. By (4.2) 𝔻1F−1 11Fυ1s1(s1−s∗ 1) ≤ 𝔻−1 1Fυ1s1(s1−s∗ 1) ≤ρ1∥ℍ1(s1−s∗ 1)∥∞. (B.13) For any η1, define υ◦ 1(η1)def= argmax υ1g(υ1,η1). 31 Now we apply Proposition 2.6 with Q=𝔻1,𝔽=F11,ℍ=ℍ1= diag( 𝔻2, . . . ,𝔻p) , and∥ℍ1(s1−s∗ 1)∥∞≤r∞. As𝔻1F−1 11≤𝔻−1 1, bound (2.19) yields 𝔻1{υ◦ 1(s1)−υ∗ 1−F−1 11A1+F−1 11Fυ1s1(s1−s∗ 1)} ≤(τ◦+τ21)∥ℍ1(s1−s∗ 1)∥2 ◦+ 2τ3+τ21 2 𝔻−2 1A2 1 ≤(τ◦+τ21)r2 ∞+ 2τ3+τ21 2 ∥𝔻−1A∥2 ∞=τ∞∥𝔻−1A∥2 ∞. This, (B.13), and (4.3) imply by r∞=c∥𝔻−1A∥∞and c=√ 2/(1−ρ1) 𝔻1{υ◦ 1(η)−υ∗ 1} ≤ ∥𝔻−1A∥∞+ρ1r∞+τ∞∥𝔻−1A∥2 ∞ ≤ 1 + cρ1+√ 2−1 ∥𝔻−1A∥∞=c∥𝔻−1A∥∞=r∞. The same bounds apply to each j≤p. Therefore, when started from any point υ∈Υ◦, the sequential optimization procedure which maximizes the objective function w.r.t. one coordinate while keeping all the remaining coordinates has its solution within Υ◦. As the function fis strongly concave, and its value improves at every step, the procedure converges to a unique solution υ◦∈Υ◦. This implies (4.5). Further, with υ=υ◦, F11{υ1(s1)−υ∗ 1}+Fυ1s1(s1−s∗ 1) =F(υ◦−υ∗). This yields (4.6). Moreover, with
udef=𝔻(υ◦−υ∗) and Bdef=𝔻−1F𝔻−1, it holds 𝔻−1 F(υ◦−υ∗) +A =B(u+𝔻F−1A), and by (4.6) and Lemma B.6 𝔻(υ◦−υ∗+F−1A) ∞= u+𝔻F−1A ∞ = B−1𝔻−1{F(υ◦−υ∗) +A} ∞≤τ∞ 1−ρ1∥𝔻−1A∥2 ∞. (B.14) Finally, by (B.16) of Lemma B.6 and by 𝔻F−1A=B−1𝔻−1A ∥𝔻F−1A −𝔻−1A∥∞= (B−1− Ip)𝔻−1A ∞≤ρ1 1−ρ1∥𝔻−1A ∞, ∥𝔻F−1A −( Ip+∆)𝔻−1A∥∞= (B−1− Ip−∆)𝔻−1A ∞≤ρ2 1 1−ρ1∥𝔻−1A ∞. This and (B.14) imply the final inequalities of the proposition. 32 Marginal and alternating minimization, sup-norm bounds Lemma B.6. LetB= (Bij)∈Mpwith Bii= 1and sup u:∥u∥∞≤1∥(B− Ip)u∥∞≤ρ1<1. (B.15) Then ∥Bu∥∞≤(1−ρ1)−1∥u∥∞for any u∈ Rp. Similarly, ∥(B−1− Ip)u∥∞≤ρ1 1−ρ1∥u∥∞,∥(B−1−2 Ip+B)u∥∞≤ρ2 1 1−ρ1∥u∥∞.(B.16) Proof. Represent B= Ip−∆. Then (B.15) implies ∥∆u∥∞≤ρ1∥u∥∞. The use of B−1= Ip+∆+∆2+. . .yields for any u∈ Rp ∥B−1u∥∞≤∞X m=0∥∆mu∥∞. Further, with umdef=∆mu, it holds ∥∆m+1u∥∞=∥∆um∥∞≤ρ1∥um∥∞. By induction, this yields ∥∆mu∥∞≤ρm 1∥u∥∞and thus, ∥B−1u∥∞≤∞X m=0ρm 1∥u∥∞=1 1−ρ1∥u∥∞ as claimed. The proof of (B.16) is similar in view of Ip+∆= 2 Ip−B.
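The Neumann-series bound of Lemma B.6 is easy to check numerically. The sketch below (illustrative dimensions, ρ1 = 0.3) builds B = I − Δ with unit diagonal and sup-norm operator bound ρ1 < 1:

```python
import numpy as np

# Numerical check of Lemma B.6: if B = I - Delta with zero-diagonal Delta and
# ||Delta u||_inf <= rho1 ||u||_inf for all u (rho1 < 1), then
# ||B^{-1} u||_inf <= ||u||_inf / (1 - rho1).  Sizes here are illustrative.
rng = np.random.default_rng(1)
p = 8
Delta = rng.uniform(-1.0, 1.0, (p, p))
np.fill_diagonal(Delta, 0.0)                     # keeps B_ii = 1
Delta *= 0.3 / np.abs(Delta).sum(axis=1).max()   # inf-operator norm = max abs row sum
rho1 = np.abs(Delta).sum(axis=1).max()           # now rho1 = 0.3

B = np.eye(p) - Delta
u = rng.standard_normal(p)
lhs = np.linalg.norm(np.linalg.solve(B, u), np.inf)
rhs = np.linalg.norm(u, np.inf) / (1.0 - rho1)   # Lemma B.6 upper bound
```

The ℓ∞ operator norm of Δ equals its maximum absolute row sum, which is why the rescaling above pins ρ1 exactly.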
arXiv:2505.02636v1 [math.OC] 5 May 2025

Nonconvex landscapes in phase retrieval and semidefinite low-rank matrix sensing with overparametrization

Andrew D. McRae*

May 6, 2025

Abstract

We study a nonconvex algorithmic approach to phase retrieval and the more general problem of semidefinite low-rank matrix sensing. Specifically, we analyze the nonconvex landscape of a quartic Burer–Monteiro factorized least-squares optimization problem. We develop a new analysis framework, taking advantage of the semidefinite problem structure, to understand the properties of second-order critical points—specifically, whether they (approximately) recover the ground truth matrix. We show that it can be helpful to overparametrize the problem, that is, to optimize over matrices of higher rank than the ground truth. We then apply this framework to several example problems: in addition to recovering existing state-of-the-art phase retrieval landscape guarantees (without overparametrization), we show that overparametrizing by a factor at most logarithmic in the dimension allows recovery with optimal statistical sample complexity for the well-known problems of (1) phase retrieval with sub-Gaussian measurements and (2) more general semidefinite matrix sensing with rank-1 Gaussian measurements. More generally, our analysis (optionally) uses the popular method of convex dual certificates, suggesting that our analysis could be applied to a much wider class of problems.

1 Introduction and result highlights

This paper considers the problem of estimating a positive semidefinite (real or complex) d×d matrix Z* from (real) measurements of the form

y_i ≈ ⟨A_i, Z*⟩, i = 1, ..., n,

where A_1, ..., A_n are known positive semidefinite matrices, and ⟨·,·⟩ denotes the elementwise Euclidean (Frobenius) matrix inner product. We will denote r = rank(Z*), which we typically assume to be much smaller than the dimension d.
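A minimal numerical instance of this measurement model can be generated as follows (dimensions chosen for illustration only); note that PSD measurement matrices applied to a PSD ground truth always produce nonnegative measurements:

```python
import numpy as np

# Toy instance of y_i = <A_i, Z*> with PSD rank-one measurements A_i = a_i a_i^T.
rng = np.random.default_rng(0)
d, r, n = 20, 2, 100
U = rng.standard_normal((d, r))
Z_star = U @ U.T                              # PSD ground truth with rank(Z*) = r
a = rng.standard_normal((n, d))
y = np.einsum('ni,ij,nj->n', a, Z_star, a)    # <a_i a_i^T, Z*> = a_i^T Z* a_i >= 0
```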
This is an instance of the well-studied problem of low-rank matrix sensing. However, our requirement that the measurement matrices {A_i} are positive semidefinite is quite particular and, as we will see, adds significant structure to the problem. Hence we refer to our problem as semidefinite low-rank matrix sensing. One important instance of this problem is phase retrieval, where we want to recover a vector x* from (approximate) magnitude measurements of the form |⟨a_i, x*⟩|, where a_1, ..., a_n are known measurement vectors (on vectors, ⟨·,·⟩ is the usual real or complex Euclidean inner product). Phase retrieval arises in many applications, particularly those involving estimation or image reconstruction from optical measurements (where we may observe the intensity but not the phase of an electromagnetic wave). See Section 2 for further reading. Phase retrieval can be cast as a semidefinite rank-one matrix sensing problem by noting that |⟨a_i, x*⟩|² = ⟨a_i a_i*, x* x**⟩.

To be more precise, let F be the set of real or complex numbers (i.e., F = R or F = C). We denote by H_d the set of Hermitian matrices in F^{d×d}. We want to recover a rank-r positive semidefinite (PSD) matrix Z* ∈ H_d from measurements of the form

y_i = ⟨A_i, Z*⟩ + ξ_i ∈ R, i = 1, ..., n, or y = A(Z*) + ξ, (1)

where A_1, ..., A_n ∈ H_d are known PSD matrices, we denote y = (y_1, ..., y_n) ∈ R^n, ξ = (ξ_1, ..., ξ_n) ∈ R^n, and the linear operator A: H_d → R^n is defined by A(S) = (⟨A_1, S⟩, ..., ⟨A_n, S⟩)ᵀ (note that the inner product of Hermitian matrices is always real).

*The author is with the Institute of Mathematics, EPFL, Lausanne, Switzerland. E-mail: andrew.mcrae@epfl.ch.

In the phase retrieval problem, for unknown x* ∈ F^d and known measurement vectors a_1, ..., a_n ∈ F^d,
https://arxiv.org/abs/2505.02636v1
we use the model ( 1) with Z∗=x∗x∗ ∗,andAi=aia∗ i, i= 1,...,n. (2) If, as is often the case in practice, x∗is real (F=R) but the measurements are complex, we can make everything real by taking Ai= Re(aia∗ i). Given measurements of the form ( 1), a natural question is how to estimate Z∗from the data {(yi,Ai)}n i=1. Many algorithms exist, particularly for the phase retrieval model (2) (see the surveys and other selected references in Section 2). However, most algorithms and/or their theoretical guar- antees (when those exist) are quite complicated, with elements suc h as special cost functions, careful initialization schemes, intricate analyses of iterative algorithms, or c ostly convex relaxations. This is es- pecially true once we move beyond the simplest problem instance of ph ase retrieval with highly idealized (e.g., Gaussian) measurements. A conceptually simple algorithmic approach is least-squares over the set of PSD low-rank matrices: for some search rank parameter p, we estimate Z∗with a solution to min Z/{ollowsequal0/ba∇dbly−A(Z)/ba∇dbl2s.t. rank( Z)≤p. (LR-LS) Here,/ba∇dbl·/ba∇dbldenotes the ordinary Euclidean ℓ2norm. However, it is not immediately clear how to solve this optimization problem. The feasible set is nonconvex and has a rather complicated nonsmooth structure. One popular approach is to relax the problem by dropping the rank co nstraint. This gives us the convex semidefinite program (SDP) min Z/{ollowsequal0/ba∇dbly−A(Z)/ba∇dbl2. (PhaseLift) This is one variant of the PhaseLift program introduced by [ 1] as a convex relaxation for phase retrieval (the originalPhaseLiftincluded the traceof Zto promotelowrank, but this wasshowntobe unnecessary by [2], [3], whose work we will later build on). Although ( PhaseLift ) is convex and well-studied, the feasible set has order d2degrees of freedom versus order pdfor (LR-LS), so for computational and storage purposes it is not very practical when dis large. 
We instead focus on a smooth parametrization of the feasible set of (LR-LS) that is amenable to local, derivative-based optimization algorithms. Every matrix Z/{ollowsequal0 of rank at most pcan be written Z=XX∗for some X∈Fd×p. Plugging this parametrization into ( LR-LS), we obtain min X∈Fd×pfp(X) where fp(X) =/ba∇dbly−A(XX∗)/ba∇dbl2. (BM-LS) This can alternatively be viewed as a Burer-Monteiro factorized for mulation of the SDP ( PhaseLift ). Although, from now on, we do not consider the nonsmooth low-rank problem ( LR-LS), we can still draw conclusions about it via this parametrization. In particular, via the w ork of Ha et al.[4], all guarantees that we prove in this paper for second-order critical points of ( BM-LS) will also hold for all local minima of (LR-LS). Considering ( BM-LS), two natural questions arise, answering which will be the main focu s of this paper: •Although the problem ( BM-LS) is smooth ( fpis a quartic polynomial in the elements1of the vari- ableX), it is still nonconvex, and thus, potentially, local algorithms could g et stuck in spurious local optima. Is this a problem? Many existing works overcome this in s ome cases with care- ful initialization schemes and/or algorithmic analyses, but we want so mething simpler and more general. 1The real and imaginary parts, in the complex case. 2 •How do
we choose the estimation rank p? The obvious choice is p=r= rank(Z∗) if it is known, but is this the best choice? For practical reasons we want pto be small (e.g., constant or at least ≪d). Nonconvex problems of the form ( BM-LS) have indeed been well studied in the low-rank matrix sensing literature, and there are many positive results showing tha t such problems can have a benign landscape : that every local minimum (or even second-order critical point2) is “good” in some sense (e.g., it is a global optimum or at least is close to the ground truth). This fits our purposes well, because it is well-known (and even rigorously proved—see, e.g., [ 5], [6] for merely two of many such results) that local search methods such as gradient descent or trust-region a lgorithms will find second-order critical points of problems like ( BM-LS). However,thevastmajorityofexistingresults(seeSection 2)ofthischaractermakestrongassumptions about the measurement operator A: specifically, that it has a restricted isometry property (RIP) in the sense that (up to rescaling)1 n/ba∇dblA(S)/ba∇dbl2≈/ba∇dblS/ba∇dbl2 Ffor allS∈Hdwith low rank (/ba∇dbl·/ba∇dblFdenotes the matrix Frobenius norm, i.e., the elementwise Euclidean norm). However, RIP is often an unreasonably strong condition. For example, wewill seesoonthat it does not hold forphas eretrievalwithout an unreasonably large number of measurements. Therefore, other approaches a re needed. In this paper, we develop a novel analysis framework of the noncon vex landscape of ( BM-LS). This framework does not require RIP and exploits the semidefinite proble m structure. We then use this framework to show that many popular instances of ( BM-LS) do indeed have a benign landscape in that every second-order critical point either recovers (in the absenc e of noise) the ground truth and is globally optimal or at least (with noise) gives a statistically accurate estimat or of the ground truth. 
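The factorized objective and its gradient are cheap to evaluate. The sketch below (real case, A_i = a_i a_iᵀ, toy sizes) also checks that an overparametrized lift of the ground truth, X* = [x*, 0], is a critical point with zero loss:

```python
import numpy as np

# The Burer-Monteiro objective f_p(X) = ||y - A(X X^T)||^2 and its gradient
# in the real case with A_i = a_i a_i^T (toy data; p = 2 > r = 1 on purpose).
rng = np.random.default_rng(0)
d, p, n = 10, 2, 50
x_star = rng.standard_normal(d)
a = rng.standard_normal((n, d))
y = (a @ x_star) ** 2                         # noiseless measurements of Z* = x* x*^T

def loss_and_grad(X):
    AX = a @ X                                # rows a_i^T X, shape (n, p)
    resid = y - (AX ** 2).sum(axis=1)         # y_i - <A_i, X X^T>
    grad = -4.0 * a.T @ (resid[:, None] * AX) # gradient of f_p at X
    return (resid ** 2).sum(), grad

X_star = np.zeros((d, p))
X_star[:, 0] = x_star                         # lift of x*: X* X*^T = x* x*^T
val, grad = loss_and_grad(X_star)             # both numerically zero
```

Any gradient-based local method can be run on `loss_and_grad`; the landscape results in this paper concern which of its second-order critical points such methods can end up at.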
In particular, our results reveal the benefits of rank overparametrization , that is, setting the search parameter pto be strictly larger than the ground truth rank r; for each of the specific problems we study, we obtain a benign landscape with optimal sample complexity with pchosen to be at most of order rlogd. In the rest of this introduction, we give a tour of some of the challen ges faced, some of the concrete implications of our analysis, and some future research directions fo r which we believe our framework will be helpful. 1.1 Phase retrieval with rank-one optimization To see better the challenges we face in trying to understand the lan dscape of ( BM-LS), we first consider the relatively simple and well-studied phase retrieval model ( 2). As the target matrix has rank 1, it is natural to consider ( BM-LS) with rank parameter p= 1. We then obtain the problem min x∈Fdf(x) where f(x) =/ba∇dbly−A(xx∗)/ba∇dbl2=n/summationdisplay i=1(yi−|/an}b∇acketle{tai,x/an}b∇acket∇i}ht|2)2. (PR-LS) This objective function was proposed for phase retrieval by Guiza r-Sicairos and Fienup [ 7] (as a special case of a larger family of loss functions) before being studied in more detail by Cand` es et al.[8]. One might hope to use something like the restricted isometry proper ty (RIP) mentioned before
to analyze ( PR-LS). For example, if the measurement vectors a1,...,a nare chosen as independent and independently distributed (i.i.d.) real or complex standard Gaussian r andom vectors, one can easily calculate that, for S∈Hd,Ain expectation satisfies E1 nA∗A(S) =m4S+(trS)Id=⇒E1 n/ba∇dblA(S)/ba∇dbl2=m4/ba∇dblS/ba∇dbl2 F+tr2(S), (3) whereA∗:Rn→Hdis the adjoint ofAgiven byA∗(z) =/summationtextn i=1ziAi, and we define m4:=E|z|4−1, wherezisarealorcomplexstandardnormalrandomvariable; withrealGau ssianmeasurements, m4= 2, and with complex Gaussian measurements, m4= 1. Thus, in expectation, up to the trace term and scaling,Ais an isometry. Unfortunately, even with Gaussian measurements, there is no hop e ofAhaving RIP without an unreasonably large (at least order d2) number of measurements: if we take S=1 /bardbla1/bardbl2a1a∗ 1, which is 2A point where the gradient is zero and the Hessian is positive semidefinite; we unpack this condition in more detail in Section3. 3 rank-one and has unit Frobenius norm, it is clear that3 1 n/ba∇dblA(S)/ba∇dbl2≥/ba∇dbla1/ba∇dbl4 n/greaterorsimilard2 n with high probability. This phenomenon was noted for more general m atrix sensing with rank-one measurement matrices in [ 9]. Nevertheless, with more specialized analysis, we can still say someth ing about the landscape of (PR-LS). Sunet al.[10], followed by Cai et al.[11] give positive results when the measurements are Gaussian. The following theorem, proved in Section 5, is a generalization of their results (here, /ba∇dbl·/ba∇dblℓ2 denotes the matrix ℓ2operator norm): Theorem 1. Consider the model (1)with rank-one Z∗=x∗x∗ ∗, and assume exact measurements, that is,ξ= 0. 1. SupposeAsatisfies, for some m,δL,δU>0 1 n/ba∇dblA∗A(x∗x∗ ∗)/ba∇dblℓ2≤(1+m+δU)/ba∇dblx∗/ba∇dbl2,and 1 n/ba∇dblA(xx∗−x∗x∗ ∗)/ba∇dbl2≥(1−δL)[m/ba∇dblxx∗−x∗x∗ ∗/ba∇dbl2 F+(/ba∇dblx/ba∇dbl2−/ba∇dblx∗/ba∇dbl2)2]for allx∈Fd, and suppose m2+2m−2>3(m2+2m)δL+2(m+1)δU. 
(4) Then every second-order critical point xof(PR-LS)satisfies xx∗=x∗x∗ ∗, that is, x=x∗sfor some unit-modulus s∈F. 2. For fixed x∗, ifAi=aia∗ i(orAi= Re(aia∗ i)ifF=R) for i.i.d. real or complex standard Gaussian vectorsa1,...,a n, then, for universal constants c1,c2>0, ifn≥c1dlogd, the conditions of part 1 (withm=m4) are satisfied with probability4at least1−c2n−2. Part2recovers the result of [ 11]. This was an improvement of the result of [ 10], which required n/greaterorsimilardlog3d(see Section 2.1for further discussion and related work). The arguments in those papers rely on fact that the measurements are Gaussian. The determinist ic condition of part 1is novel and has the benefit of applying to more general measurement ensembles. Note that in the real Gaussian measurement case, where m4= 2, condition ( 4) can be simplified to 2δL+δU<1. In the complex Gaussian measurement case, where m4= 1, condition ( 4) becomes 9δL+4δU<1. Comparingthe conditions on Ain Theorem 1with RIP and the expectation calculation ( 3), note that we have in part maintained the requirement for restricted lowerisometry (1 n/ba∇dblA(S)/ba∇dbl2/greaterorsimilar/ba∇dblS/ba∇dbl2 F), but the restricted upperisometrycondition (1 n/ba∇dblA(S)/ba∇dbl2/lessorsimilar/ba∇dblS/ba∇dbl2 F), which is what fails in the givencounterexample, has been relaxed. Instead, we only require (upper) concentratio n ofA∗Aon the specific input x∗x∗ ∗, which is easier to obtain when x∗is fixed independently of the random measurement vectors a1,...,a n. Unfortunately, the conditions of Theorem 1are still unreasonably strong in many cases. The de- terministic part 1is quite sensitive to the (almost-Gaussian) structure of A, and, even in the (often idealized) Gaussian case, the sample requirement n/greaterorsimilardlogdof part2(which cannot be improved if we want to satisfy the conditions of
part 1) is suboptimal by a logarithmic factor compared to the best- possible sample complexity n≈dthat is achieved by other methods such as PhaseLift [ 2] or various other, more elaborate schemes (see [ 12], [13] for an overview). There is some evidence that the landscape of (BM-LS) can be benign even with n≈dGaussian measurements [ 14], but this has not been proved rigorously. Thus more work is needed to show that ordinary least-squares opt imization (without, e.g., worrying about initialization) can solve phase retrieval with flexible measureme ntassumptions and optimal sample complexity. 3We write a/lessorsimilarb(equivalently, b/greaterorsimilara) to mean a≤cbfor some unspecified but universal constant c >0. We will similarly, on occasion, write A≼BorB/followsorequalAto denote the semidefinite ordering cB−A/followsequal0 for some c >0. We write a≈bto mean a/lessorsimilarbanda/greaterorsimilarbsimultaneously. 4Throughout this paper, we state probability bounds in the fo rm 1−cn−2for some c >0, but inspection of the proofs reveals that we can replace n−2withn−γfor any constant γ >0 with only a change in the other (unspecified) constants depending on γ. 4 1.2 The benefits and pitfalls of overparametrization Qualitatively, the condition on Ain Theorem 1, part1can be interpreted as a condition number require- ment. Similar requirements appear elsewhere in nonconvex matrix op timization; in particular, benign landscape results assuming the restricted isometry property (RI P) require the upper and lower isometry constants to be not too different. See [ 15] and the further references therein. However, a poor condition number can be mitigated in part by overparametrizing the optimization problem, that is, setting the optimization rank pto be strictly larger than the rank rof the ground truth or global optimum [ 16], [17]. This idea also appears in other, structurally quite different examples of low-rank matrix optimization: see Section 2.4. 
Our analysis framework in this paper shows that overparametrizat ion brings benefits even without RIP. Indeed, applied to the phase retrieval (or rank-onesemidefi nite matrix sensing) problem, this allows us, in part, to relax the conditions of Theorem 1: Theorem 2. Consider the model (1)with rank-one Z∗=x∗x∗ ∗andξ= 0. For rank parameter p≥1, consider the nonconvex least-squares problem (BM-LS). Suppose, for some constants α,β,L≥0, we have, for all X∈Fd×p, 1 n/ba∇dblA(XX∗−x∗x∗ ∗)/ba∇dbl2≥α/ba∇dblXX∗−x∗x∗ ∗/ba∇dbl2 F+β(/ba∇dblX/ba∇dbl2 F−/ba∇dblx∗/ba∇dbl2)2,and (5) 1 nA∗A(x∗x∗ ∗)/√∇ecedesequalL/ba∇dblx∗/ba∇dbl2Id. (6) Then, if (p+2)/parenleftbigg 1+β pβ+α/parenrightbigg α >2L, (7) every second-order critical point Xof(BM-LS)satisfies XX∗=x∗x∗ ∗. We prove this in Section 5. Theorem 1, part1is indeed a special case of this. It is tempting to say that, as long as α >0, we can always make plarge enough so that condition ( 7) is satisfied. In some cases, this may be true. However, the lower iso metry condition ( 5) depends on pin that it must hold for all X∈Fd×p. For Gaussian measurements, attempting to prove an inequality like (5) with the same methods that Sun et al.[10] use to prove a similar result for p= 1 (see Lemma 3in Section5) would require n/greaterorsimilarpd, which would defeat any sample-complexity benefit of Theorem 2over Theorem 1. Forgenerallow-rankmatrixsensingproblems,theimplicit dependen ceofthelowerisometryconstants on the optimization rank pis a fundamental limitation of the nonconvex optimization approach. For example,
evenwith thestrongerassumptionofRIP,RichardZhang hasgivenacounterexample(aversion of which appears in [ 17]) showing that, with excessive overparametrization, the problem ( BM-LS) may have spurious local minima very far from the ground truth even whe n, forpcloser to r, RIP holds and the landscape is indeed benign. However, in our case, the fact that both the ground truth Z∗and our measurement matrices {Ai}i are PSD allows us to overcome this limitation. 1.3 PSD measurements and universal lower isometry To reap the full benefits of a result like Theorem 2, we need the measurement operator Ato satisfy lower isometry in a relatively unrestricted sense: we want, for any Z/{ollowsequal0 (even of high rank), 1 n/ba∇dblA(Z−Z∗)/ba∇dbl2≥α/ba∇dblZ−Z∗/ba∇dbl2 F for some α >0. As discussed above, this is, in general, too much to ask even if Ahas RIP for suitably small ranks. However, the PSD structure of Z,Z∗, and the measurement matrices {Ai}iallows us to do more. Indeed, this has already been observed and studied (see be low) for certain variants of the convex relaxation ( PhaseLift ) for which there is no rank restriction or penalization. At a high level, the argument goes as follows. Given Z/{ollowsequal0, we can decompose the error H=Z−Z∗ into two components. We write H=H1+H2, whereH1indeed has low rank (of order r= rank(Z∗)), andH2may have large rank but is PSD, that is, H2/{ollowsequal0 (see, e.g. Section 7for a principled way to do this). 5 AsH1has low rank, we can reasonably hope to show that1 n/ba∇dblA(H1)/ba∇dbl2/greaterorsimilar/ba∇dblH1/ba∇dbl2 F. Furthermore, as H2and the matrices Aiare PSD, we have 1√n/ba∇dblA(H2)/ba∇dbl≥1 nn/summationdisplay i=1|/an}b∇acketle{tAi,H2/an}b∇acket∇i}ht| =/angbracketleftBigg 1 nn/summationdisplay i=1Ai,H2/angbracketrightBigg . The equality holds because /an}b∇acketle{tAi,H2/an}b∇acket∇i}ht≥0. If, for example, Ai=aia∗ ifor i.i.d. 
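One standard way to realize such a decomposition (sketched below with toy sizes; the construction in Section 7 of the paper may differ in its details) projects Z onto the orthogonal complement of col(Z*): H2 = P⊥ Z P⊥ is PSD whenever Z is, and H1 = H − H2 has rank at most 2 rank(Z*):

```python
import numpy as np

# Hypothetical illustration of the split H = H1 + H2 (not necessarily the
# paper's Section 7 construction): with P the orthogonal projector onto
# col(Z*) and P_perp = I - P, take H2 = P_perp Z P_perp and H1 = H - H2.
rng = np.random.default_rng(0)
d, r = 12, 2
U = rng.standard_normal((d, r))
Z_star = U @ U.T                             # PSD ground truth of rank r
V = rng.standard_normal((d, 5))
Z = V @ V.T                                  # an arbitrary PSD matrix of higher rank

P = U @ np.linalg.pinv(U)                    # orthogonal projector onto col(Z*)
P_perp = np.eye(d) - P
H = Z - Z_star
H2 = P_perp @ Z @ P_perp                     # PSD because Z is PSD
H1 = H - H2                                  # rank <= 2r: its column and row spaces meet col(Z*)
```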
standard Gaussian ai, standard concentration inequalities imply that, when n/greaterorsimilard,1 n/summationtextn i=1Ai/followsorequalId, in which case we obtain 1 n/ba∇dblA(H2)/ba∇dbl2/greaterorsimilartr2(H2) =/ba∇dblH2/ba∇dbl2 ∗≥/ba∇dblH2/ba∇dbl2 F, where tr(·) and/ba∇dbl·/ba∇dbl∗respectively denote the matrix trace and nuclear norm.5 It is thus relatively straightforward to show that, with H=H1+H2,1 n/ba∇dblA(H1)/ba∇dbl2/greaterorsimilar/ba∇dblH1/ba∇dbl2 Fand 1 n/ba∇dblA(H2)/ba∇dbl2/greaterorsimilar/ba∇dblH2/ba∇dbl2 F. However, showing that we can “combine” these to obtain1 n/ba∇dblA(H)/ba∇dbl2/greaterorsimilar/ba∇dblH/ba∇dbl2 Fis quite technical and requires additional tools. 1.3.1 Phase retrieval with general sub-Gaussian measureme nts We consider, in this work, two separate approaches to proving suc h lower isometry. One is based on the work of Krahmer and St¨ oger [ 18], which considers the case of ordinary phase retrieval ( r= 1). This analysis framework allows for more general sub-Gaussian measurements. Specifically, we assume the entries of the measurement vectors aiare i.i.d. copies of a random variable wwhich we assume to be zero-mean and to satisfy (without loss of generality) E|w|2= 1. We also assume that wis sub-Gaussian with parameter Kin the sense that6Ee|w|2/K2≤2. As noted by [ 18], [20], certain moments of ware critical for our ability to do phase retrieval with such measurement s: •IfE|w|4= 1, or, equivalently, |w|= 1 almost surely, then the standard basis vectors of Fdwill be indistinguishable under these measurements. In that case, we mus t assume that the ground truth x∗is not too “peaky” (i.e., that it is incoherent with respect to the standard basis). •Ifx∗is complex, and|Ew2|= 1 (i.e., almost surely, w=svfor some fixed s∈Cand
https://arxiv.org/abs/2505.02636v1
a real random variable $v$), then $x_*$ and its elementwise complex conjugate $\bar{x}_*$ will be indistinguishable. We must therefore rule out this case.

We can plug the lower isometry bounds of [18] into our Theorem 2 to obtain the following result (see Section 6 for details):

Theorem 3. Consider the model (1) with rank-one $Z_* = x_* x_*^*$ for nonzero $x_* \in \mathbb{F}^d$ and $\xi = 0$. Suppose $A_i = a_i a_i^*$, where $a_1,\dots,a_n$ are i.i.d. random vectors whose entries are i.i.d. copies of a random variable $w$. If $\mathbb{F} = \mathbb{R}$ but $w$ is complex, we can take $A_i = \mathrm{Re}(a_i a_i^*)$. There exists a universal constant $\mu > 0$ such that the following is true. Suppose $\mathbb{E}|w|^2 = 1$, $w$ is $K$–sub-Gaussian, and at least one of the following two statements holds:

1. $\mathbb{E}|w|^4 > 1$, or
2. $\|x_*\|_\infty \le \mu\|x_*\|$.

Furthermore, if $\mathbb{F} = \mathbb{C}$, assume that $|\mathbb{E} w^2| < 1$. Then there exist $c_1, c_2, c_3 > 0$ depending only on the properties of $w$ (not on the dimension $d$) such that, if $n \ge c_1 d$, with probability at least $1 - c_2 n^{-2}$, for all
$$p \ge c_3\Big(1 + \frac{d\log d}{n}\Big),$$
every second-order critical point $X$ of (BM-LS) satisfies $XX^* = x_* x_*^*$.

Footnote 5: The trace/nuclear norm term appearing here can be interpreted as "implicit regularization" arising from the semidefinite problem structure; see the discussion after Theorem 4.

Footnote 6: This is one of several equivalent (within constants) definitions of $K$–sub-Gaussianity. See, for example, [19, Sec. 2.5].

Thus we see that, even with $n$ of order $d$ (vs. $n \gtrsim d\log d$ as required by Theorem 1), we can obtain a benign landscape by choosing $p \approx \log d$. In terms of computational scaling, this is an improvement over the results of [18], which only proved exact recovery for a variant of (PhaseLift). See Section 2.2 for further relevant literature.

We have so far been unable to extend this analysis approach to larger ground truth ranks without introducing a suboptimal dependence on the rank $r$. We thus, in addition, consider another (and older) method.
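The degenerate case $\mathbb{E}|w|^4 = 1$ excluded by condition 1 can be seen concretely in a small numpy sketch (our illustration, not the paper's code): with symmetric Bernoulli measurement entries, $|w| = 1$ almost surely, and every standard basis vector produces the identical measurement vector.

```python
import numpy as np

# Illustration (ours): symmetric Bernoulli entries have |w| = 1 a.s., so
# E|w|^4 = 1, and every standard basis vector e_j gives the identical data
# y_i = |<a_i, e_j>|^2 = 1 -- the "peaky" signals e_j are indistinguishable.
rng = np.random.default_rng(0)
d, n = 8, 100
A = rng.choice([-1.0, 1.0], size=(n, d))  # rows a_i with i.i.d. +/-1 entries

def measurements(x):
    """y_i = |<a_i, x>|^2 for the rank-one PSD measurement A_i = a_i a_i^T."""
    return (A @ x) ** 2

# All standard basis vectors give y_i = 1 identically:
for j in range(d):
    assert np.allclose(measurements(np.eye(d)[:, j]), 1.0)

# A non-peaky (incoherent) vector produces non-degenerate measurements:
x = np.ones(d) / np.sqrt(d)
assert np.unique(measurements(x)).size > 1  # measurements actually vary
```

This is exactly the regime in which Theorem 3 falls back on the incoherence assumption $\|x_*\|_\infty \le \mu\|x_*\|$, which rules out the basis vectors above.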
1.3.2 Dual certificate approach with application to Gaussian measurements

Our other analysis technique is a dual certificate approach similar to that introduced by [2], [3] to analyze a variant of (PhaseLift) for phase retrieval. We defer the details to Section 7. A deterministic landscape result similar to Theorem 2 is given as Theorem 5 in that section. This result allows for measurement noise.

As an example application of the dual certificate approach, we consider again rank-1 Gaussian measurements. Although this approach could likely be adapted to the more general sub-Gaussian measurements of Theorem 3 (as is done in the real, $r = 1$ case by [20]), the dual certificate construction and analysis become more complicated, so for brevity we do not explore this further.

For $Z_*$ of rank $r$, we denote its nonzero eigenvalues by $\lambda_1(Z_*) \ge \cdots \ge \lambda_r(Z_*)$.

Theorem 4. Consider the model (1) with fixed rank-$r$ $Z_* \succeq 0$. Suppose $A_i = a_i a_i^*$ for i.i.d. standard (real or complex) Gaussian vectors $a_1,\dots,a_n$ (if $\mathbb{F} = \mathbb{R}$ but the measurements are complex, we can take $A_i = \mathrm{Re}(a_i a_i^*)$). For universal constants $c_1, c_2, c_3, c_4 > 0$, if $n \ge c_1 r d$, then, with probability at least $1 - c_2 n^{-2}$, for all optimization ranks
$$p \ge c_3\,\frac{\big(1 + \frac{d\log d}{n}\big)\,\mathrm{tr}\,Z_* + \frac{1}{n}\|\mathcal{A}^*(\xi)\|_{\ell_2}}{\lambda_r(Z_*)},$$
every second-order critical point $X$ of (BM-LS) satisfies
$$\|XX^* - Z_*\|_F \le c_4\sqrt{r}\,\frac{\|\mathcal{A}^*(\xi)\|_{\ell_2}}{n}.$$

The dependence of the error bound on the noise $\xi$ and the ground truth rank $r$ is identical to classical results in low-rank matrix sensing and is, in some cases, minimax-optimal. See, for example, [9], [21]–[23]. Usually,
however, without a hard estimator rank constraint, one must include a low-rank–inducing regularizer (e.g., trace/nuclear norm) to get such optimal dependence on $r$. The fact that we obtain this without any explicit regularizer illustrates the "implicit regularization" of the semidefinite problem structure (see the relevant footnote above).

In the case $r = 1$ and $\xi = 0$, the result reduces to that of Theorem 3. For general $r$, assuming for simplicity that $n \gtrsim d\log d$ and $\xi = 0$, the optimization rank condition becomes $p \gtrsim \frac{\mathrm{tr}\,Z_*}{\lambda_r(Z_*)}$. This is satisfied, for example, when $p \gtrsim \kappa r$, where $\kappa = \lambda_1(Z_*)/\lambda_r(Z_*)$. Thus we see that yet another "condition number" appears in a requirement on $p$. It is not clear whether the dependence on the eigenvalues of $Z_*$ is tight; related works assuming RIP (e.g., [17]) do not have such a dependence, but relaxing the RIP assumption as we do requires quite different proof techniques.

1.4 Potential future directions

Computational complexity guarantees. Much of the phase retrieval literature has carefully considered the problem of the computational cost of finding a solution (see, e.g., [24], [25]). We have not attempted to do something similar in the present work, but it should be possible.

A complicating factor for obtaining competitive computational guarantees is that, in the overparametrized case, the objective function is not locally strongly convex (even modulo the trivial action of the orthogonal/unitary group) about a rank-deficient minimizer. An overview of this issue and further reading is provided by Zhang et al. [26]. That work also proposes a solution via preconditioned gradient descent. It is likely that their results (e.g., their Cor. 9) could, with some additional calculations of properties of (BM-LS), give a computational complexity bound, but we do not pursue this here.
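The rank condition $p \gtrsim \mathrm{tr}\,Z_*/\lambda_r(Z_*)$ discussed above is always at most $\kappa r$, since $\mathrm{tr}\,Z_* = \sum_{i\le r}\lambda_i(Z_*) \le r\lambda_1(Z_*)$. A tiny numeric sketch (ours, with a hypothetical spectrum) makes the comparison concrete:

```python
import numpy as np

# Sketch (ours): tr(Z*)/lambda_r(Z*) <= kappa * r, since
# tr(Z*) = sum of the r nonzero eigenvalues <= r * lambda_1(Z*).
eigvals = np.array([10.0, 4.0, 1.0, 0.5])    # hypothetical spectrum of Z*
r = eigvals.size
trace_ratio = eigvals.sum() / eigvals.min()  # tr(Z*) / lambda_r(Z*)
kappa = eigvals.max() / eigvals.min()        # condition number of Z*
assert trace_ratio <= kappa * r              # so p >~ kappa*r always suffices
```

When the spectrum is flat ($\kappa \approx 1$) the two sides nearly coincide, whereas for ill-conditioned $Z_*$ the trace ratio can be much smaller than $\kappa r$.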
Further applications. We believe that the dual certificate approach of Section 7 can, with additional work, be applied to other (non-Gaussian) measurements that arise in applications. For example, many papers (e.g., [27]–[31]) consider coded diffraction patterns, which come from optical imaging; in particular, the data has the form of optical diffraction images produced with a number of randomly-generated masks. Certain of these works prove exact recovery results for the semidefinite relaxation PhaseLift via a dual certificate similar to what we use in this paper. However, for technical reasons, we cannot simply plug their intermediate results into our framework, so additional work is needed to obtain theoretical landscape guarantees for such a measurement model.

1.5 Paper outline and additional notation

The rest of this paper is organized as follows:

• Section 2 gives additional background and related work.
• Section 3 presents the second-order criticality conditions of (BM-LS) and derives a fundamental deterministic inequality (Lemma 1) that will be foundational for all the results in this paper.
• Section 4 states and proves a probabilistic concentration result (Lemma 2) for the quantity $\mathcal{A}^*\mathcal{A}(Z_*)$ that appears in Lemma 1 and is thus critical to our subsequent results.
• Section 5 proves the results for Gaussian(-like) measurements (Theorems 1 and 2) introduced in Sections 1.1 and 1.2.
• Section 6 gives a proof (based on results from [18]) of the phase retrieval landscape result Theorem 3 for sub-Gaussian measurements given in Section 1.3.
• Section 7 describes in detail the theoretical machinery of PhaseLift dual certificates (mentioned in Section 1.3) and states and proves
our main deterministic theoretical result for this analysis (Theorem 5). We then apply this to the Gaussian measurement ensemble to prove Theorem 4, which was given in Section 1.3.

For convenience, we collect here some (standard) notation that we use throughout the paper. If $x$ is a vector, we denote its Euclidean ($\ell_2$), $\ell_1$ and $\ell_\infty$ norms by $\|x\|$, $\|x\|_1$, and $\|x\|_\infty$, respectively. If $X$ is a matrix, we denote its operator, Frobenius (elementwise Euclidean) and nuclear norms by $\|X\|_{\ell_2}$, $\|X\|_F$, and $\|X\|_*$, respectively. Given $A, B \in \mathbb{H}^d$ (the set of $d\times d$ Hermitian matrices, real or complex according to context), we write $A \preceq B$ (or $B \succeq A$) to mean $B - A \succeq 0$. We denote by $I_d$ the $d\times d$ identity matrix. If $X$ is a matrix of rank $r$, we denote its nonzero singular values by $\sigma_1(X) \ge \cdots \ge \sigma_r(X)$. If $X$ is Hermitian and positive semidefinite, in which case the singular values are the eigenvalues, we may instead write $\lambda_1(X) \ge \cdots \ge \lambda_r(X)$.

2 Additional background and related work

The phase retrieval literature is vast, and we can only cover a small portion of it that is most relevant to our work. For further reading, Shechtman et al. [32] give an accessible introduction from an optics/image processing point of view. The recent survey of Dong et al. [12] has a more statistical perspective. Fannjiang and Strohmer [13] provide a much longer and more technically detailed overview, including many convex and nonconvex algorithms and their theoretical guarantees.

We also do not attempt to survey the literature on nonconvex optimization and benign landscapes for general low-rank matrix sensing. Outside of phase retrieval (see below) and certain other highly problem-specific results (see, e.g., [33] for matrix completion and robust principal component analysis), all the global landscape results we are aware of assume some form of restricted isometry property (RIP).
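The three matrix norms in the notation above can all be read off the singular values; the following numpy helper (our sketch, with names of our choosing) matches the conventions used throughout the paper.

```python
import numpy as np

# Helper sketch (ours) for the notation above: operator norm ||X||_l2,
# Frobenius norm ||X||_F, and nuclear norm ||X||_* from the singular values.
def matrix_norms(X):
    s = np.linalg.svd(X, compute_uv=False)
    return {"op": s.max(), "fro": np.sqrt((s ** 2).sum()), "nuc": s.sum()}

X = np.diag([3.0, 4.0])
norms = matrix_norms(X)
assert np.isclose(norms["op"], 4.0)   # largest singular value
assert np.isclose(norms["fro"], 5.0)  # sqrt(3^2 + 4^2)
assert np.isclose(norms["nuc"], 7.0)  # 3 + 4
# For any X: ||X||_l2 <= ||X||_F <= ||X||_*, an ordering used implicitly later.
```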
For state-of-the-art results and further references, see, for example, [15], [17].

2.1 Nonconvex optimization landscapes for phase retrieval

For the quartic objective function (PR-LS), primarily the Gaussian measurement case has been studied. The optimal sample-complexity threshold for obtaining a benign landscape is an open question. Sun et al. [10] showed that $n \gtrsim d\log^3 d$ suffices. Cai et al. [11] subsequently improved this requirement to $n \gtrsim d\log d$ (our Theorem 1, part 2 recovers this result). Sarao Mannelli et al. [14] provide numerical evidence and heuristic (statistical physics) arguments that the landscape indeed becomes benign when $n/d$ passes a constant threshold. However, Liu et al. [34] study in detail the landscape when $d$ is large and $d \lesssim n \ll d\log d$ and show that local convexity near the global optimum $x_*$ (which is a key part of the arguments of [10], [11]) breaks down in this regime.

Other works have considered different objective functions. Davis et al. [35] study the nonsmooth variant of (PR-LS), $\min_x \sum_i \big|y_i - |\langle a_i, x\rangle|^2\big|$. They study the locations of critical points but do not obtain a global benign landscape result. The recent series of papers [36]–[39] considers a variety of loss functions which combine features of (PR-LS) with truncation and/or features of the nonsmooth amplitude-based loss $\sum_i (\sqrt{y_i} - |\langle a_i, x\rangle|)^2$. In each case, they show that, with $n \gtrsim d$ Gaussian measurements, the nonconvex landscape is benign in the sense that every second-order critical point gives exact recovery of the ground truth.

The literature on more general nonconvex optimization formulations and algorithms for phase retrieval is vast,
and we do not attempt to cover it here. Most existing theoretical results consider initialization and local convergence of iterative algorithms. See the above-mentioned surveys and the recent papers [25], [30], [40] for further background and references.

2.2 Phase retrieval with sub-Gaussian measurements

For the case of phase retrieval with general sub-Gaussian measurements (like in our Theorem 3), Eldar and Mendelson [41], considering only the real case, first showed a universal lower ("stability") bound on (in our notation) $\|\mathcal{A}(uu^* - vv^*)\|_1$ over $u, v \in \mathbb{R}^d$ (or subsets thereof). Although their analysis framework is quite general, their concrete examples assume a "small-ball" condition on the $a_i$'s that rules out, for example, measurement vectors composed of i.i.d. symmetric Bernoulli (zero-mean $\pm1$-valued) random variables (hence this is qualitatively similar to the fourth-moment assumption $\mathbb{E}|w|^4 > 1$ of Theorem 3). Krahmer and Liu [20] build on that analysis framework and show that we can relax the small-ball (or moment) assumption if we assume that the ground truth vector is not too "peaky"; this is the assumption $\|x_*\|_\infty \le \mu\|x_*\|$ of Theorem 3. They furthermore show, via a dual certificate approach similar to [2], [3], that, under similar assumptions as our Theorem 3, a variant of (PhaseLift) gives exact recovery. Krahmer and Stöger [18] extend this to the complex case (albeit without using dual certificates).

Independently, Gao et al. [42], under measurement moment assumptions similar to those of Theorem 3, showed that a spectral initialization plus gradient descent algorithm gives exact recovery when $n \gtrsim d\log^2 d$. Recently, Peng et al. [40], with a similar setup as [18] (and thus Theorem 3), use an intricate leave-one-out analysis to show that spectral initialization plus gradient descent (with much larger step size than the result of [42] allows) gives exact recovery.
Their guarantees require $n \gtrsim d\log^3 d$ measurements. They comment that, before their work, there was no non-convex algorithm theoretically guaranteed to solve phase retrieval under such assumptions (e.g., symmetric Bernoulli measurements). Our Theorem 3 gives another nonconvex approach with improved sample complexity via a benign landscape of the least-squares problem (BM-LS).

2.3 Semidefinite low-rank matrix sensing (generalized phase retrieval)

The more general semidefinite low-rank matrix sensing problem we present in Section 1, that is, recovery of a matrix $Z_* \succeq 0$ from measurements of the form $\langle A_i, Z_*\rangle$ for positive semidefinite (PSD) measurement matrices $A_i \succeq 0$, is sometimes called generalized phase retrieval. However, this term is not entirely well defined in the literature. Wang and Xu [43] (and certain follow-up works), for example, use it to denote a variety of problems, including quite general linear matrix sensing. However, they primarily use this term to mean recovery of a vector $x_*$ from quadratic measurements of the form $\langle A_i, x_* x_*^*\rangle$ for general (not necessarily PSD) $A_i \in \mathbb{H}^d$. In this section, we only consider cases where both $Z_*$ and the $A_i$'s are PSD.

Chi and Lu [44] propose and study numerically an iterative (Kaczmarz) algorithm for recovery of a low-rank PSD matrix from rank-1 PSD measurements. They do not provide theoretical guarantees; existing theoretical analyses of similar algorithms (e.g., in [45]) only consider the ordinary phase retrieval
case $r = 1$.

For the same problem, Chen et al. [46] analyse a trace-regularized variant of (PhaseLift), though they note that their techniques could extend beyond the case $Z_* \succeq 0$ to recovery of general Hermitian matrices. Indeed, Kueng et al. [47] later do exactly this (with some additional extensions). Both works show that, if $r = \mathrm{rank}(Z_*)$, then $n \gtrsim rd$ Gaussian measurements suffice to ensure recovery with semidefinite programming. Their analysis depends on the nuclear norm penalty and does not take advantage of PSD structure as we (and, for example, [2], [3], [18]) do.

Balan and Dock [48] study loss functions of the form (BM-LS) as well as "amplitude"–based loss functions of the form
$$\sum_i \big(\langle A_i, XX^*\rangle^{1/2} - \langle A_i, Z_*\rangle^{1/2}\big)^2$$
for general PSD matrices $A_i$. They focus on explicit calculation of upper and lower isometry constants of these loss functions with respect to certain natural metrics ($\alpha$ from our Theorem 2 is one example of such a constant).

2.4 Overparametrization and condition numbers in low-rank matrix optimization

We have seen in Section 1.2 that we can view overparametrization as a way to overcome the poor condition number (in a restricted isometry sense) of the measurement operator $\mathcal{A}$. More broadly, overparametrization can be a useful tool to solve general SDPs (see footnote 7) with linear objective and constraints of the form
$$\min_{Z \succeq 0} \langle C, Z\rangle \quad \text{s.t.} \quad \mathcal{A}(Z) = y, \tag{8}$$
where, for some dimensions $d', n'$, $C \in \mathbb{H}^{d'}$, $y \in \mathbb{R}^{n'}$, and $\mathcal{A}: \mathbb{H}^{d'} \to \mathbb{R}^{n'}$ is linear. Parametrizing $Z$ by a Burer–Monteiro factorization of the form $XX^*$ for $X \in \mathbb{F}^{d'\times p}$, the resulting nonlinear constraint $\mathcal{A}(XX^*) = y$ becomes, under certain conditions, a Riemannian manifold constraint [49].
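Two basic features of the Burer–Monteiro substitution just described can be checked directly in numpy (a minimal sketch of ours): the factorization $Z = XX^T$ enforces positive semidefiniteness for free, and it caps the rank of $Z$ at the chosen parameter $p$.

```python
import numpy as np

# Sketch (ours): replacing the PSD variable Z by X X^T enforces Z >= 0
# automatically, while the linear constraints A(Z) = y become nonlinear in X.
rng = np.random.default_rng(1)
dprime, p = 6, 3
X = rng.standard_normal((dprime, p))
Z = X @ X.T

# Z is automatically PSD: all eigenvalues nonnegative (up to roundoff)...
assert np.linalg.eigvalsh(Z).min() >= -1e-10
# ...and rank(Z) <= p, the chosen factorization rank.
assert np.linalg.matrix_rank(Z) <= p
```

This is why the choice of $p$ matters: too small and the global optimum of (8) may be infeasible for the factorized problem; large enough (as in the results cited above) and the nonconvex landscape can become benign.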
Under these conditions, it is known that the problem (8) always has a solution of rank $\approx \sqrt{n'}$, and, indeed, if the optimization rank parameter $p$ is chosen to be at least this rank bound, then, for generic cost matrices $C$, the optimization landscape is benign, though pathological cases exist where this fails. See [49], [50] for relevant results and further reading.

However, for certain problems, we can choose $p$ much smaller than $\sqrt{n'}$. In addition to the matrix sensing problems we have already discussed (see Section 1.2), this phenomenon is well studied for synchronization problems. For certain instances, the optimization landscape is again tied to a condition number (that of a dual certificate matrix to (8)), and overparametrizing the problem (i.e., choosing the rank parameter $p$ of the Burer–Monteiro factorization to be larger than the rank of the global optimum of (8)) can compensate when the condition number is too large [51]–[54].

3 Criticality conditions and basic consequences

All of our theoretical guarantees concern second-order critical points of the smooth nonconvex problem (BM-LS). In the real case, $X$ is a second-order critical point if, at $X$, the gradient of the objective function $f_p$ is zero and the Hessian quadratic form is positive semidefinite, that is,
$$\nabla f_p(X) = 0, \quad \text{and} \quad \nabla^2 f_p(X)[\dot X, \dot X] \ge 0 \text{ for all } \dot X \in \mathbb{R}^{d\times p}. \tag{9}$$
In the complex case ($\mathbb{F} = \mathbb{C}$), the meaning is the same, but we must consider (BM-LS) to be an optimization problem over the real and imaginary parts of the complex variable $X$: that is, if $X = U + iV$ for $U, V \in \mathbb{R}^{d\times p}$, we calculate the gradient and Hessian in the variable $(U, V)$. We make this explicit in our calculations below. The main result of this section is the following
lemma, which is the foundation for every subsequent landscape result in this paper.

Lemma 1. Consider (BM-LS) under the measurement model (1). If $Z_* \succeq 0$ has rank $r \ge 1$, let $X_* \in \mathbb{F}^{d\times r}$ be such that $Z_* = X_* X_*^*$. Let $X \in \mathbb{F}^{d\times p}$ be a second-order critical point of (BM-LS). For any matrix $R \in \mathbb{F}^{p\times r}$, we have
$$\|\mathcal{A}(XX^* - Z_*)\|^2 \le \langle \xi, \mathcal{A}(XX^* - Z_*)\rangle + \frac{2}{p+2}\langle y, \mathcal{A}((X_* - XR)(X_* - XR)^*)\rangle \le \langle \xi, \mathcal{A}(XX^* - Z_*)\rangle + \frac{2\|\mathcal{A}^*(y)\|_{\ell_2}}{p+2}\|X_* - XR\|_F^2.$$

Footnote 7: The quadratic-cost program (PhaseLift) as well as the many variants in the literature can be put in this form, though most works do not do this.

One potential benefit of overparametrization is immediately clear: the larger $p$, the smaller the last term in the above inequality will be.

Proof. The second inequality of the result follows from
$$\langle \mathcal{A}^*(y), (X_* - XR)(X_* - XR)^*\rangle \le \|\mathcal{A}^*(y)\|_{\ell_2}\,\|(X_* - XR)(X_* - XR)^*\|_* = \|\mathcal{A}^*(y)\|_{\ell_2}\,\|X_* - XR\|_F^2.$$

We now turn to the first inequality. We first consider the real case $\mathbb{F} = \mathbb{R}$, and then we extend this to the complex case. Standard calculations give
$$\nabla f_p(X) = 4\mathcal{A}^*(\mathcal{A}(XX^T) - y)X = 4\mathcal{A}^*(\mathcal{A}(XX^T - Z_*) - \xi)X,$$
and
$$\nabla^2 f_p(X)[\dot X, \dot X] = 4\Big(\langle \mathcal{A}^*(\mathcal{A}(XX^T) - y), \dot X\dot X^T\rangle + \tfrac12\|\mathcal{A}(X\dot X^T + \dot X X^T)\|^2\Big) = 4\Big(\langle \mathcal{A}(XX^T - Z_*) - \xi, \mathcal{A}(\dot X\dot X^T)\rangle + \tfrac12\|\mathcal{A}(X\dot X^T + \dot X X^T)\|^2\Big).$$
We will drop the factor of 4 from now on, as it has no effect on the criticality conditions (9). In the case of rank-one measurements $A_i = a_i a_i^T$ and $p = 1$, we have the convenient identity
$$\tfrac12\|\mathcal{A}(X\dot X^T + \dot X X^T)\|^2 = 2\langle \mathcal{A}(XX^T), \mathcal{A}(\dot X\dot X^T)\rangle.$$
Outside this specific case, this equality does not hold in general, but it becomes an inequality that will still be useful.
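Both formulas above are easy to sanity-check numerically. The following numpy sketch (ours, with rank-one measurements $A_i = a_i a_i^T$ and no noise) compares the gradient formula $\nabla f_p(X) = 4\mathcal{A}^*(\mathcal{A}(XX^T) - y)X$ against finite differences, and verifies the $p = 1$ identity exactly:

```python
import numpy as np

# Sanity checks (ours) on the gradient formula and the p = 1 identity above.
rng = np.random.default_rng(2)
d, p, n = 5, 2, 12
a = rng.standard_normal((n, d))           # rows a_i; A_i = a_i a_i^T
xstar = rng.standard_normal((d, 1))       # rank-1 ground truth, xi = 0
y = (a @ xstar).ravel() ** 2              # y_i = <A_i, Z*>

def calA(Z):                              # A(Z)_i = <A_i, Z> = a_i^T Z a_i
    return np.einsum("id,de,ie->i", a, Z, a)

def f(X):                                 # f_p(X) = ||A(X X^T) - y||^2
    return np.sum((calA(X @ X.T) - y) ** 2)

def grad(X):                              # 4 A*(A(X X^T) - y) X
    w = calA(X @ X.T) - y
    return 4 * np.einsum("i,id,ie->de", w, a, a) @ X

# 1. Gradient formula vs. central finite differences.
X = rng.standard_normal((d, p))
G, eps = grad(X), 1e-6
for j in range(d):
    for k in range(p):
        E = np.zeros((d, p)); E[j, k] = eps
        fd = (f(X + E) - f(X - E)) / (2 * eps)
        assert abs(fd - G[j, k]) <= 1e-3 * max(1.0, abs(G[j, k]))

# 2. The p = 1 identity: (1/2)||A(X Xdot^T + Xdot X^T)||^2
#    = 2 <A(X X^T), A(Xdot Xdot^T)> for rank-one A_i.
X1, Xd = rng.standard_normal((d, 1)), rng.standard_normal((d, 1))
lhs = 0.5 * np.sum(calA(X1 @ Xd.T + Xd @ X1.T) ** 2)
rhs = 2.0 * np.dot(calA(X1 @ X1.T), calA(Xd @ Xd.T))
assert np.isclose(lhs, rhs)
```

The identity in part 2 holds term by term: $\langle a a^T, x\dot x^T + \dot x x^T\rangle = 2(a^Tx)(a^T\dot x)$, so squaring and halving gives exactly $2\langle aa^T, xx^T\rangle\langle aa^T, \dot x\dot x^T\rangle$.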
More precisely, for $A \succeq 0$, we have, by Cauchy–Schwarz, for any matrices $B, C$ of appropriate size,
$$\langle A, BC^T + CB^T\rangle^2 = 4\langle A^{1/2}B, A^{1/2}C\rangle^2 \le 4\|A^{1/2}B\|_F^2\,\|A^{1/2}C\|_F^2 = 4\langle A, BB^T\rangle\langle A, CC^T\rangle.$$
Therefore, applying this to each $A_i \succeq 0$, we have
$$\tfrac12\|\mathcal{A}(BC^T + CB^T)\|^2 \le 2\langle \mathcal{A}(BB^T), \mathcal{A}(CC^T)\rangle. \tag{10}$$

We will consider rank-one $\dot X$ of the form $\dot X = uv^T$ for $u \in \mathbb{R}^d$, $v \in \mathbb{R}^p$. Plugging this into the Hessian inequality of (9) and then applying (10) gives
$$0 \le \|v\|^2\langle \mathcal{A}(XX^T - Z_*) - \xi, \mathcal{A}(uu^T)\rangle + \tfrac12\|\mathcal{A}(Xvu^T + u(Xv)^T)\|^2 \le \|v\|^2\langle \mathcal{A}(XX^T - Z_*) - \xi, \mathcal{A}(uu^T)\rangle + 2\langle \mathcal{A}(Xvv^TX^T), \mathcal{A}(uu^T)\rangle.$$
Now, for fixed $u$, take $v = v_k$ for each $k = 1,\dots,p$, where $\{v_k\}_k$ is an orthonormal basis for $\mathbb{R}^p$; adding up the resulting inequalities gives
$$0 \le p\,\langle \mathcal{A}(XX^T - Z_*) - \xi, \mathcal{A}(uu^T)\rangle + 2\langle \mathcal{A}(XX^T), \mathcal{A}(uu^T)\rangle = (p+2)\langle \mathcal{A}(XX^T - Z_*) - \xi, \mathcal{A}(uu^T)\rangle + 2\langle \underbrace{\mathcal{A}(Z_*) + \xi}_{=\,y}, \mathcal{A}(uu^T)\rangle.$$
We will next take, for $R \in \mathbb{R}^{p\times r}$, $u = (X_* - XR)w_\ell$ for $\ell = 1,\dots,r$, where $\{w_\ell\}_\ell$ is an orthonormal basis for $\mathbb{R}^r$, to obtain, again summing up the resulting inequalities,
$$0 \le (p+2)\langle \mathcal{A}(XX^T - Z_*) - \xi, \mathcal{A}((X_* - XR)(X_* - XR)^T)\rangle + 2\langle y, \mathcal{A}((X_* - XR)(X_* - XR)^T)\rangle.$$
Finally, the zero-gradient condition $\mathcal{A}^*(\mathcal{A}(XX^T - Z_*) - \xi)X = 0$ implies
$$\langle \mathcal{A}(XX^T - Z_*) - \xi, \mathcal{A}((X_* - XR)(X_* - XR)^T)\rangle = \langle \mathcal{A}(XX^T - Z_*) - \xi, \mathcal{A}(X_*X_*^T - XX^T)\rangle = -\|\mathcal{A}(XX^T - Z_*)\|^2 + \langle \xi, \mathcal{A}(XX^T - Z_*)\rangle,$$
which we can plug in to the previous inequality to obtain
$$0 \le -(p+2)\|\mathcal{A}(XX^T - Z_*)\|^2 + (p+2)\langle \xi, \mathcal{A}(XX^T - Z_*)\rangle + 2\langle y, \mathcal{A}((X_* - XR)(X_* - XR)^T)\rangle.$$
This immediately implies the result in the case $\mathbb{F} = \mathbb{R}$.

Now, consider the complex case $\mathbb{F} = \mathbb{C}$. We rewrite the problem as one over real variables. Denote by $\mathbb{S}^{2d}$ the space of symmetric real $2d\times 2d$ matrices. We use the maps
$$\mathbb{H}^d \ni A = B + iC \mapsto \widetilde A = \begin{bmatrix} B & C^T \\ C & B \end{bmatrix} \in \mathbb{S}^{2d}, \qquad \mathbb{C}^{d\times p} \ni X = U + iV \mapsto \widetilde X = \begin{bmatrix} U \\ V \end{bmatrix} \in \mathbb{R}^{2d\times p}.$$
Direct calculation confirms that $\langle A, XX^*\rangle = \langle \widetilde A, \widetilde X\widetilde X^T\rangle$. An immediate consequence is that $A \succeq 0$ implies $\widetilde A \succeq 0$. Furthermore, setting
$$J = \begin{bmatrix} 0 & -I_d \\ I_d & 0 \end{bmatrix}$$
as the matrix representing multiplication by $i$ (with $J^T = -J$ representing multiplication by $-i$), we have $J^T\widetilde A J = \widetilde A$, which will be useful in the calculations to follow.

The complex problem (BM-LS) thus reduces to the real optimization problem
$$\min_{\widetilde X \in \mathbb{R}^{2d\times p}} \|\widetilde{\mathcal{A}}(\widetilde X\widetilde X^T) - y\|^2, \tag{11}$$
where $\widetilde{\mathcal{A}}: \mathbb{S}^{2d} \to \mathbb{R}^n$ is defined in the same manner as $\mathcal{A}$ with the real measurement matrices $\widetilde A_1,\dots,\widetilde A_n \in \mathbb{S}^{2d}$ formed from $A_1,\dots,A_n \in \mathbb{H}^d$. The result for the real case then implies that any second-order critical point $\widetilde X$ of (11) satisfies, for any $\widetilde R \in \mathbb{R}^{p\times r}$,
$$\|\widetilde{\mathcal{A}}(\widetilde X\widetilde X^T - \widetilde Z_*)\|^2 \le \langle \xi, \widetilde{\mathcal{A}}(\widetilde X\widetilde X^T - \widetilde Z_*)\rangle + \frac{2}{p+2}\langle y, \widetilde{\mathcal{A}}((\widetilde X_* - \widetilde X\widetilde R)(\widetilde X_* - \widetilde X\widetilde R)^T)\rangle,$$
where $\widetilde Z_* \in \mathbb{S}^{2d}$, $\widetilde X_* \in \mathbb{R}^{2d\times r}$ are defined in the obvious way. We immediately obtain, by reversing the complex-to-real transformation,
$$\|\mathcal{A}(XX^* - Z_*)\|^2 \le \langle \xi, \mathcal{A}(XX^* - Z_*)\rangle + \frac{2}{p+2}\langle y, \mathcal{A}((X_* - X\widetilde R)(X_* - X\widetilde R)^*)\rangle.$$
This is not quite what we want, because we had to assume $\widetilde R$ was real. We must therefore inspect further the transformed problem's structure
and consider how to extend the proof of the real case. If $R = R_1 + iR_2 \in \mathbb{C}^{p\times r}$ with $R_1, R_2 \in \mathbb{R}^{p\times r}$, we can replace, in the Hessian inequality calculations, $\widetilde X_* - \widetilde X\widetilde R$ by $\widetilde X_* - \widetilde XR_1 - J\widetilde XR_2$ without any problem. To use the zero-gradient condition, we need, in addition to the equality
$$\widetilde{\mathcal{A}}^*(\widetilde{\mathcal{A}}(\widetilde X\widetilde X^T - \widetilde Z_*) - \xi)\widetilde X = 0,$$
which is identical to the real case, the equality
$$\widetilde{\mathcal{A}}^*(\widetilde{\mathcal{A}}(\widetilde X\widetilde X^T - \widetilde Z_*) - \xi)J\widetilde X = 0,$$
which follows from the previous equality by the fact that, for each $i$,
$$J^T\widetilde A_i J = \widetilde A_i \iff \widetilde A_i J = J\widetilde A_i.$$
Finally, noting that $XR \leftrightarrow \widetilde XR_1 + J\widetilde XR_2$ under the complex-to-real transformation, we indeed obtain the claimed inequality.

4 Concentration of $\mathcal{A}^*\mathcal{A}(Z_*)$ for sub-Gaussian measurements

A key quantity in Lemma 1 in the previous section is $\|\mathcal{A}^*(y)\|_{\ell_2} = \|\mathcal{A}^*\mathcal{A}(Z_*) + \mathcal{A}^*(\xi)\|_{\ell_2}$. We do not consider noise in detail in this paper (the term $\mathcal{A}^*(\xi)$ has been studied in other works on low-rank matrix sensing and phase retrieval; see, e.g., [9], [21], [55]), but we still need to understand the spectral properties of the matrix $\mathcal{A}^*\mathcal{A}(Z_*)$. In this section, we provide a concentration result for this matrix when the measurements are sub-Gaussian; we will use this result in each of our applications.

We say that a zero-mean random vector $a \in \mathbb{C}^d$ is $K$–sub-Gaussian if, for every unit-norm $x \in \mathbb{C}^d$, $\mathbb{E}\, e^{|\langle a, x\rangle|^2/K^2} \le 2$. This is, in particular, true if the entries of $a$ are i.i.d. copies of a $K$–sub-Gaussian random variable $w$ in the sense given in Section 1.3.1. We explicitly consider the complex case, as it may be the case that the ground truth signal is real, i.e., $\mathbb{F} = \mathbb{R}$, while the measurements are complex. The above definition is still valid when $a$ is real.

The following concentration result is a straightforward extension of [10, Lem. 21]. For completeness, we provide a full proof.

Lemma 2.
Let $Z_* \succeq 0$ be fixed and of rank $r$. Let $a_1,\dots,a_n$ be i.i.d. copies of a $K$–sub-Gaussian vector $a \in \mathbb{C}^d$, and take $A_i = a_i a_i^*$. There exists a universal constant $c > 0$ such that, if $n \ge d$, with probability at least $1 - 3n^{-2}$,
$$\mathcal{A}^*\mathcal{A}(Z_*) \preceq \mathbb{E}\mathcal{A}^*\mathcal{A}(Z_*) + cK^2\Big(\sqrt{\frac{d + \log n}{n}} + \frac{(d + \log n)\log n}{n}\Big)(\mathrm{tr}\,Z_*)\,I_d.$$

Proof. We will consider the case $r = 1$ first and then use this to extend to general $r \ge 1$. Thus, for now, assume $Z_* = x_* x_*^*$, and, furthermore, assume without loss of generality that $\|x_*\| = 1$. Within this proof, we use the letters $c, c'$, etc. to denote universal positive constants that may change from one usage to another.

Note the following facts which follow from the sub-Gaussian assumption on $a$:

• For all unit-norm $x \in \mathbb{F}^d$, $\langle A, xx^*\rangle = |\langle a, x\rangle|^2 \ge 0$ satisfies $\mathbb{E}\, e^{\langle A, xx^*\rangle/K^2} \le 2$, which implies, for all integers $k \ge 1$,
$$\mathbb{E}\langle A, xx^*\rangle^k \le 2K^{2k}k!. \tag{12}$$

• With probability at least $1 - n^{-3}$,
$$\max_i \langle A_i, x_*x_*^*\rangle = \max_i |\langle a_i, x_*\rangle|^2 \le cK^2\log n =: \tau.$$
See, for example, [19, Ch. 2] for more details.

We then, purely for analysis purposes, truncate the terms of $\mathcal{A}^*\mathcal{A}(x_*x_*^*)$. With probability at least $1 - n^{-3}$, we have
$$\mathcal{A}^*\mathcal{A}(x_*x_*^*) = \sum_{i=1}^n \langle A_i, x_*x_*^*\rangle A_i = \sum_{i=1}^n \underbrace{\langle A_i, x_*x_*^*\rangle \mathbf{1}_{\{\langle A_i, x_*x_*^*\rangle \le \tau\}} A_i}_{=:\,G_i} =: Y_\tau.$$
For any unit-norm $x \in \mathbb{F}^d$, the i.i.d. and nonnegative random variables $\{\langle G_i, xx^*\rangle\}_i$ satisfy, for $k \ge 2$,
$$\mathbb{E}\langle G_i, xx^*\rangle^k = \mathbb{E}\big(\langle A_i, x_*x_*^*\rangle \mathbf{1}_{\{\langle A_i, x_*x_*^*\rangle \le \tau\}}\langle A_i, xx^*\rangle\big)^k \le \tau^{k-2}\,\mathbb{E}\big[\langle A_i, x_*x_*^*\rangle^2\langle A_i, xx^*\rangle^k\big] \le \tau^{k-2} \cdot 2K^{2(k+2)}(k+2)! \le c'(cK^2\tau)^{k-2}K^8 k!.$$
The second inequality comes from Hölder's inequality together with (12). For the last inequality, we have absorbed the factor of $(k+2)(k+1)$ into the constant in the exponential and the leading constant. This implies that the random variable $\langle Y_\tau, xx^*\rangle = \sum_{i=1}^n \langle G_i, xx^*\rangle$ is $(cK^4 n, c'K^2\tau)$–sub-exponential in the sense of [56, Sec. 2.1], so, for any $t \ge 0$, with probability at least $1 - 2e^{-t}$,
$$|\langle Y_\tau, xx^*\rangle - \mathbb{E}\langle Y_\tau, xx^*\rangle| \lesssim K^4\sqrt{nt} + K^2\tau t.$$
By a covering