text string | source string |
|---|---|
$P_n$ and $P'_n$, respectively. Similarly, $Pf$ represents the expectation of $f$ under $P(x,y)=F(x)G(y)$. Thus, the expressions are given by: $P_nf=\frac{1}{n}\sum_{i=1}^n f(X_i,Y_i)$, $P'_nf=\frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^n f(X_i,Y_j)$, $Pf=\int f\,dP$. (A.1) We choose $f(x,y)=|F(x)-G(y)|$ and $f_n(x,y)=|F_n(x)-G_n(y)|$, (A... | https://arxiv.org/abs/2505.01825v1
Substituting this result into Equation (A.3), we get $\tilde\varphi_n=\frac{3}{n+1}\sum_{i=1}^n\big(\frac{2}{3}-|U_i-V_i|-U_i(1-U_i)-V_i(1-V_i)\big)+O_p(\frac{1}{n})=\hat\varphi_n+O_p(\frac{1}{n})$. Additionally, utilizing the results from Lemma A.1, it is easy to derive that $E\hat\varphi_n=0$, $\mathrm{Var}(\hat\varphi_n)=\big(\frac{3}{n+1}\big)\ldots$ | https://arxiv.org/abs/2505.01825v1
Sharp empirical Bernstein bounds for the variance of bounded random variables. Diego Martinez-Taboada (1) and Aaditya Ramdas (1,2). (1) Department of Statistics & Data Science, (2) Machine Learning Department, Carnegie Mellon University. {diegomar,aramdas}@andrew.cmu.edu. May 6, 2025. Abstract: Much recent effort has focused on deriving "em... | https://arxiv.org/abs/2505.01987v1
of confidence intervals. A $(1-\alpha)$-confidence interval $C_{CI}$ for a target parameter $\theta$ is a random set such that $P(\theta\in C_{CI})\geq 1-\alpha$, where $C_{CI}$ is built after having observed a fixed number of observations. In contrast, a $(1-\alpha)$-confidence sequence $C_{CS,t}$ provides a high-probability guarantee that the parameter $\theta$ is contained in the sequ... | https://arxiv.org/abs/2505.01987v1
(2009, Theorem 10). They are based on the concentration of self-bounding random variables (Maurer, 2006). Another concentration inequality for the variance can be found in the proof of Audibert et al. (2009, Theorem 1), which decouples the analysis into those of the mean and second centered moment, in a similar spirit ... | https://arxiv.org/abs/2505.01987v1 |
hold at any stopping time), derived using the following anytime-valid version of Markov's inequality. Theorem 3.1 (Ville's inequality). For any nonnegative supermartingale $(M_t)_{t\geq 0}$ and $x>0$, $P(\exists t\geq 0: M_t\geq x)\leq\frac{EM_0}{x}$. Powerful nonnegative supermartingale constructions are usually at the heart of anytime-valid concentration i... | https://arxiv.org/abs/2505.01987v1
supermartingales that give way to Theorem 3.2. However, in contrast to Theorem 3.2, the conditional means of the random variables $(X_i-\hat\mu_i)^2$ under study are not constant. This opens the door to providing concentration for the variance. Denoting $R_{t,\alpha}:=\frac{\log(1/\alpha)+\sum_{i\leq t}\psi_E(\lambda_i)\big((X_i-\hat\mu_i)^2-\hat\sigma_i^2\big)^2}{\sum_{i\leq t}\lambda_i}$, $D_t:=\frac{\sum_{i\leq t}\lambda_i(X_i-\hat\mu_i)^2}{\sum\ldots}$ ... | https://arxiv.org/abs/2505.01987v1
to Appendix D.4. Corollary 4.4 (Lower empirical Bernstein for the variance). Let Assumption 1.1 hold. For $(\hat\mu_i)_{i\geq 1}$ defined as in (4.2), any $[0,1]$-valued predictable sequence $(\hat\sigma^2_i)_{i\geq 1}$, any $[0,1)$-valued predictable sequence $(\lambda_i)_{i\geq 1}$, and any $[0,\infty)$-valued predictable sequence $(\tilde\lambda_i)_{i\geq 1}$, it holds that $(L_t,\infty)$ is a $1-\alpha$ lower con... | https://arxiv.org/abs/2505.01987v1
4.3, with $c_2\wedge c_3>0$. The condition $c_2\wedge c_3>0$ is not necessary for the proofs to hold, but they follow more cleanly with it. For simplicity, we will focus on the asymptotic behavior of both $\sqrt{n}R_{n,\alpha}$ and $\frac{\sqrt{n}}{\sum_{i\leq t}\lambda_{i,\alpha_2,n}}\big(\tilde R^2_{i,\alpha_1,n}\sum_{i\leq t}\lambda_{i,\alpha_2,n}+R_{n,\alpha_2,n}\big)$, where we define $\lambda_{i,\alpha_2,n}=\lambda^{CI}_{t,u,\alpha_2,n}$. These two quantities correspond to the first ... | https://arxiv.org/abs/2505.01987v1
random variables lie in a ball of diameter 1, instead of a unit-long interval. Similarly, the concept of variance involves norms, instead of just squares of scalars. Assumption 4.9. The stream of random variables $X_1,X_2,\ldots$ belongs to a separable Hilbert space $H$, and is such that $\|X_t\|\in[0,\frac{1}{2}]$, $E_{t-1}X_t=\mu$, $E_{t-1}\|X_t-\mu\|^2=$... | https://arxiv.org/abs/2505.01987v1
uniform distribution in (0,1), (II) the beta distribution with parameters (2,6), and (III) the beta distribution with parameters (5,5). For each of the inequalities, the 0.95-empirical quantiles are also displayed. The Maurer–Pontil (MP) inequality (Maurer and Pontil, 2009, Theorem 10) is compared against our proposal... | https://arxiv.org/abs/2505.01987v1
the AAAI Conference on Artificial Intelligence , volume 29. Ville, J. (1939). Etude Critique de la Notion de Collectif . Gauthier-Villars Paris. Wang, H. and Ramdas, A. (2024). Sharp matrix empirical Bernstein inequalities. arXiv preprint arXiv:2411.09516 . Waudby-Smith, I. and Ramdas, A. (2024). Estimating means of bo... | https://arxiv.org/abs/2505.01987v1 |
For any $[0,1)$-valued predictable sequence $(\lambda_i)_{i\geq 1}$, any $B_{1/2}(0)$-valued predictable sequence $(\hat\mu^{HS}_i)_{i\geq 1}$, and any $[0,1]$-valued predictable sequence $(\hat\sigma_i)_{i\geq 1}$, the sequence of sets $D^{HS}_t-E^{HS}_t\pm R^{HS}_{t,\alpha/2}$ is a $1-\alpha$ confidence sequence for $\sigma^2$. From Corollary A.2, upper and lower inequalities for the variance can be derived a... | https://arxiv.org/abs/2505.01987v1
$\lambda$. Lemma B.3. It holds that $\sum_{i=1}^n\frac{1}{\sqrt{i}}\in[2\sqrt{n}-2,\,2\sqrt{n}-1]$, and so $\frac{1}{\sqrt{n}}\sum_{i=1}^n\frac{1}{\sqrt{i}}\to 2$. Proof. Given that $x\mapsto\frac{1}{\sqrt{x}}$ is a decreasing function, it follows that $\int_1^n\frac{1}{\sqrt{x}}dx\leq\sum_{i=1}^n\frac{1}{\sqrt{i}}\leq 1+\int_1^n\frac{1}{\sqrt{x}}dx$, with $\int_1^n\frac{1}{\sqrt{x}}dx=2\sqrt{n}-2$. In order to conclude the proof, it suffices to note that $\frac{1}{\sqrt{n}}\sum_{i=1}^n\frac{1}{\sqrt{i}}\in[2-\frac{2}{\sqrt{n}},\,2-\frac{1}{\sqrt{n}}]$, with $2-\frac{2}{\sqrt{n}}\to 2$ and $2-\frac{1}{\sqrt{n}}\to 2$, and invoke the... | https://arxiv.org/abs/2505.01987v1
it follows that $\big|\frac{1}{n}\sum_{i=1}^n a_ib_i-ab\big|=\big|\frac{1}{n}\sum_{i=1}^n(a_ib_i-ab_i+ab_i-ab)\big|\leq\frac{1}{n}\sup_{i\leq n}|a_i-a|\sum_{i=1}^n b_i+|a|\big|\frac{1}{n}\sum_{i=1}^n b_i-b\big|\leq\frac{1}{n}\sup_{i\leq n}|a_i-a|\sum_{i=1}^n b_i+\frac{|a|\epsilon}{3(|a|+1)}\leq\frac{1}{n}\sup_{i\leq M'-1}|a_i-a|\sum_{i=1}^{M'-1}b_i+\frac{1}{n}\sup_{M'\leq i\leq n}|a_i-a|\sum_{i=M'}^n b_i+\frac{\epsilon}{3}\leq\frac{\epsilon}{3}+\sup_{i\geq M'}|a_i-a|\frac{1}{n}\sum_{i=1}^n b_i+\frac{\epsilon}{3}\leq\frac{\epsilon}{3}+\frac{\epsilon}{3(b+1)}\big(\frac{\epsilon}{3(|a|+1)}+b\big)+\frac{\epsilon}{3}\leq\frac{\epsilon}{3}+\frac{\epsilon}{3}+\frac{\epsilon}{3}=\epsilon$. Lemma B.10. Let $(a_{n,i})_{n\geq 1,i}$... | https://arxiv.org/abs/2505.01987v1
this result when $(\hat m^2_{4,i})_{i\in[n]}$ is defined as in Section 4.2. Its proof may be found in Appendix E.2. Proposition C.2. Let $X_1,\ldots,X_n$ fulfill Assumption 1.1 and Assumption 4.5 such that $V[(X-\mu)^2]=0$. Let $(\hat\mu_i)_{i\in[n]}$, $(\hat\sigma^2_i)_{i\in[n]}$, and $(\hat m^2_{4,i})_{i\in[n]}$ be defined as in Section 4.2. Then $\hat m^2_{4,t}=\tilde O(\frac{1}{t})$ almost surely. The ... | https://arxiv.org/abs/2505.01987v1
$S'_0=1$, is a nonnegative supermartingale. In view of Ville's inequality (Theorem 3.1), we observe that $P\big(\exp\big(\sum_{i\leq t}\tilde\lambda_i(X_i-\mu)-\sigma^2\sum_{i\leq t}\psi_P(\tilde\lambda_i)\big)\geq 2/\delta\big)\leq\frac{\delta}{2}$, thus $P\big(\sum_{i\leq t}\tilde\lambda_i(X_i-\mu)-\sigma^2\sum_{i\leq t}\psi_P(\tilde\lambda_i)\geq\log(2/\delta)\big)\leq\frac{\delta}{2}$, and so $P\Big(\mu\leq\frac{\sum_{i\leq t}\tilde\lambda_iX_i-\sigma^2\sum_{i\leq t}\psi_P(\tilde\lambda_i)-\log(2/\delta)}{\sum_{i\leq t}\tilde\lambda_i}\Big)\leq\frac{\delta}{2}$. Arguing analogously replacing $X_i-\mu$ fo... | https://arxiv.org/abs/2505.01987v1
$\frac{\log(1/\delta_n)}{\hat m^2_{4,t}(\omega)n}$ for $i\geq m(\omega)$. Denote $(I_n(\omega)):=\log(1/\delta_n)+\sum_{i<m(\omega)}\psi_E(\lambda_{i,\delta_n}(\omega))\big((X_i(\omega)-\hat\mu_i(\omega))^2-\hat\sigma_i^2(\omega)\big)^2$, $(II_n(\omega)):=\sum_{i=m(\omega)}^n\psi_E(\lambda_{i,\delta_n})\big((X_i(\omega)-\hat\mu_i(\omega))^2-\hat\sigma_i^2(\omega)\big)^2$. We observe that $(I_n(\omega))\leq\log(1/l)+\psi_E(c_1)\sum_{i<m(\omega)}\big((X_i(\omega)-\hat\mu_i(\omega))^2-\hat\sigma_i^2(\omega)\big)^2$, and so it is bounded. Furthermore, $(II_n(\omega))=\sum_{i=m(\omega)}^n\psi_E\big(\sqrt{2\log(1\ldots)}\big)$... | https://arxiv.org/abs/2505.01987v1
inequality. D.7 Proof of Theorem 4.10. Denote $f_t=\sum_{i\leq t}\tilde\lambda_i(X_i-\mu)$. Pinelis (1994, Theorem 3.2) showed that $E_{t-1}\cosh(\|f_t\|)\leq\big(1+E_{t-1}\psi_P(\tilde\lambda_t\|X_t-\mu\|)\big)\cosh(\|f_{t-1}\|)$. Similarly to the proof of Theorem 3.3, it now follows that $1+E_{t-1}\psi_P(\tilde\lambda_t\|X_t-\mu\|)=1+E_{t-1}\sum_{k=2}^\infty\frac{(\tilde\lambda_t\|X_t-\mu\|)^k}{k!}=1+\sum_{k=2}^\infty\frac{\tilde\lambda_t^k\,E_{t-1}[\|X_t-\mu\|^k]}{k!}\overset{(i)}{=}1+\sum^{\infty}\ldots$ | https://arxiv.org/abs/2505.01987v1
$\frac{1}{i^2t}\leq\frac{c_2+\sum_{i=1}^\infty\kappa_1/i^2}{t}=\frac{c_2+\kappa_1\pi^2/6}{t}=O(\frac{1}{t})$. If $\sigma^2>0$, note that $v_i=(X_i-\hat\mu_i)^2-\hat\sigma^2_i=(X_i-\mu)^2-\sigma^2+2(X_i-\mu)(\mu-\hat\mu_i)+(\mu-\hat\mu_i)^2+\sigma^2-\hat\sigma^2_i\overset{(i)}{=}2(X_i-\mu)(\mu-\hat\mu_i)+(\mu-\hat\mu_i)^2+\sigma^2-\hat\sigma^2_i$, where (i) follows from $(X_i-\mu)^2=\sigma^2$, and so $|v_i|\leq 3|\mu-\hat\mu_i|+|\sigma^2-\hat\sigma^2_i|$. The martingale analogue of Kolmogorov's law of the iterated logarithm (Stout, 197... | https://arxiv.org/abs/2505.01987v1
obtained in view of Lemma B.4. Hence, for $\omega\in A:=A_1\cap A_2$, $\big|\sum_{i\leq n}\iota_i(\omega)(X_i(\omega)-\mu)\big|\leq C(\omega)\sqrt{32(\log n+1)\log\log(16(\log n+1))}$, which is $\tilde O(1)$. That is, $\sum_{i\leq n}\iota_i(X_i-\mu)=\tilde O(1)$ almost surely. Let us now show that $\sum_{i\leq n}\tilde\lambda_{i,\alpha_1,n}(X_i-\mu)=\tilde O(1)$ (E.1) almost surely as well. For $i\geq m(\omega)$, $\sqrt{\frac{2\log(2/\alpha_{1,n})}{\hat\sigma^2_i\,i\log(i+1)}}\leq\frac{\sqrt{2}}{\sigma}\sqrt{\frac{\log(2/\alpha_{1,n})}{i\lo\ldots}}$ | https://arxiv.org/abs/2505.01987v1
$V[(X_i-\mu)^2]$, (E.3) for $\omega\in A$. Denoting $\psi_N(\lambda)=\frac{\lambda^2}{2}$, as well as $\xi_{n,i}(\omega):=\frac{\psi_E(\lambda_{i,\delta_n}(\omega))}{\psi_N(\lambda_{i,\delta_n}(\omega))}$, it follows that $(II_n(\omega))=\sum_{i=m_\omega}^n\psi_E(\lambda_{i,\delta_n}(\omega))Z_i(\omega)=\sum_{i=m_\omega}^n\psi_N(\lambda_{i,\delta_n}(\omega))\xi_{n,i}(\omega)Z_i(\omega)=\frac{\log(1/\delta_n)(n-m_\omega+1)}{n}\cdot\frac{1}{n-m_\omega+1}\sum_{i=m_\omega}^n\frac{1}{\hat m^2_{4,i}(\omega)}\xi_{n,i}(\omega)Z_i(\omega)$. In view of (E.2), Lemma B.9 yields $\frac{1}{n-m_\omega+1}\sum_{i=m_\omega}^n\frac{1}{\hat m^2_{4,i}(\omega)}Z_i(\omega)\to_a V[(X_i-\ldots)]$ | https://arxiv.org/abs/2505.01987v1
it follows that $(I_n)\leq\sup_{i\leq n}\Big[2\log^2(2/\alpha_{1,n})+2\sigma^4\Big(\sum_{k=1}^{i-1}\psi_P\Big(\sqrt{\tfrac{2\log(2/\alpha_{1,n})}{\hat\sigma^2_k k\log(1+k)}}\wedge c_5\Big)\Big)^2\Big]$. We observe that $\psi_P\big(\sqrt{\tfrac{2\log(2/\alpha_{1,n})}{\hat\sigma^2_k k\log(1+k)}}\wedge c_5\big)\leq\psi_P\big(\sqrt{\tfrac{2\log(2/\alpha_{1,n})}{\hat\sigma^2_k k\log(1+k)}}\big)\overset{(i)}{\leq}\big(\tfrac{1}{\sqrt{2k\log(1+k)}}\big)^2\psi_P\big(\sqrt{\tfrac{4\log(2/\alpha_{1,n})}{\hat\sigma^2_k}}\big)=\tfrac{1}{2k\log(1+k)}\psi_P\big(\sqrt{\tfrac{4\log(2/\alpha_{1,n})}{\hat\sigma^2_k}}\big)\overset{(ii)}{\leq}\tfrac{1}{2k\log(1+k)}\exp\ldots$ | https://arxiv.org/abs/2505.01987v1
we take $\sigma^2\leq U_{1,\alpha_1,t}-L^2_{2,\alpha_2,t}$ (F.1) as the upper confidence sequence for $\sigma^2$. Similarly, in order to derive lower inequalities, define $L_{1,\alpha_1,t}:=\frac{\sum_{i\leq t}\lambda_i(X^2_i-\hat m_{2,i})^2}{\sum_{i\leq t}\lambda_i}-\frac{\log(1/\alpha_1)+\sum_{i\leq t}\psi_E(\lambda_i)(X^2_i-\hat m_{2,i})^2}{\sum_{i\leq t}\lambda_i}$ as the lower confidence sequence for $EX^2_i$ (which follows from the empirical Bernstein inequality), and $L_{2,\alpha_2,t}$... | https://arxiv.org/abs/2505.01987v1
arXiv:2505.02002v1 [math.OC] 4 May 2025. JOTA manuscript No. (will be inserted by the editor). Sharp bounds in perturbed smooth optimization. Vladimir Spokoiny. Received: date / Accepted: date. Abstract This paper studies the problem of perturbed convex and smooth optimization. The main results describe how the solution and ... | https://arxiv.org/abs/2505.02002v1
for tensor methods in convex optimization and further references can be found in Doikov and Nesterov (2021, 2022, 2024) and Rodomanov and Nesterov (2022, 2021). An accurate estimate of $\upsilon_k-\upsilon^*$ and of $f_k(\upsilon_k)-f_k(\upsilon^*)$ can be used for analysis of many iterative procedures in optimization and stochastic approximation; see Polyak and J... | https://arxiv.org/abs/2505.02002v1
properties of $f$ at $\upsilon$ are given via the quantities $\omega(\upsilon)\stackrel{\mathrm{def}}{=}\sup_{u:\|\mathbb{D}(\upsilon)u\|\leq r}\frac{2|\delta_3(\upsilon,u)|}{\|\mathbb{D}(\upsilon)u\|^2}$, $\omega'(\upsilon)\stackrel{\mathrm{def}}{=}\sup_{u:\|\mathbb{D}(\upsilon)u\|\leq r}\frac{|\delta'_3(\upsilon,u)|}{\|\mathbb{D}(\upsilon)u\|^2}$. (2) The definition yields for any $u$ with $\|\mathbb{D}(\upsilon)u\|\leq r$, $|\delta_3(\upsilon,u)|$... | https://arxiv.org/abs/2505.02002v1
means that the matrix $\mathbb{F}-\kappa^2\mathbb{D}^2$ is positive definite. The next result describes the concentration properties of $\upsilon^\circ$ from (8) in a local elliptic set $A(r)\stackrel{\mathrm{def}}{=}\{\upsilon:\|\mathbb{F}^{1/2}(\upsilon-\upsilon^*)\|\leq r\}$, (9) where $r$ is slightly larger than $\|\mathbb{F}^{-1/2}A\|$. Theorem 3.1. Let $f(\upsilon)$ be a strongly convex function with $f(\upsilon^*)$... | https://arxiv.org/abs/2505.02002v1
$\mathbb{z}=\omega^2(\mathbb{B}\xi)^\top(I-\omega\mathbb{B})^{-1}\mathbb{B}\xi\leq\frac{\omega^2}{1-\omega}\|\mathbb{B}\xi\|^2\leq\frac{\omega^2}{1-\omega}\xi^\top\mathbb{B}\xi$; we conclude for $\omega\leq 1/3$ by the triangle inequality $\|u^\circ+\xi\|_{\mathbb{z}}\leq\Big(\sqrt{\frac{\omega^2}{1-\omega}}+\sqrt{\frac{2\omega}{1-\omega^2}}\Big)\sqrt{\xi^\top\mathbb{B}\xi}\leq 2\sqrt{\frac{\omega}{1-\omega}}\sqrt{\xi^\top\mathbb{B}\xi}$, and (13) follows ... | https://arxiv.org/abs/2505.02002v1
$6\|\mathbb{D}u\|^3$. (27) Proof. Fix $u\in U_r$ and any vector $w\in\mathbb{R}^p$. For $t\in[0,1]$, denote $h(t)\stackrel{\mathrm{def}}{=}\langle\nabla f(\upsilon+tu)-\nabla f(\upsilon)-t\langle\nabla^2 f(\upsilon),u\rangle,w\rangle$. Then $h(0)=0$, $h'(0)=0$, and $(T^*_3)$ and (4) imply $|h''(t)|=|\langle\nabla^3 f(\upsilon+tu),u^{\otimes 2}\otimes w\rangle$... | https://arxiv.org/abs/2505.02002v1
on (39), assume $\|\mathbb{D}\mathbb{F}^{-1}A\|\leq\|\mathbb{D}a\|\leq(11/9)\|\mathbb{D}\mathbb{F}^{-1}A\|$. Then (38) yields by $\nabla T(u)=\nabla T(-u)$: $\|\mathbb{D}^{-1}\nabla T(a)-\mathbb{D}^{-1}\nabla T(\mathbb{F}^{-1}A)\|=\|\mathbb{D}^{-1}\nabla T(a)-\mathbb{D}^{-1}\nabla T(-\mathbb{F}^{-1}A)\|\leq\sup_{t\in[0,1]}\|\mathbb{D}^{-1}\nabla^2T(ta-(1-t)\mathbb{F}^{-1}A)\mathbb{D}^{-1}\|$... | https://arxiv.org/abs/2505.02002v1
some $\mathbb{D}^2,\tau_3,\tau_4$, and $r$ satisfying $\mathbb{D}^2\leq\kappa^2\mathbb{F}_G$, $r=\frac{3}{2}b_G$, $\kappa^2\tau_3b_G<\frac{4}{9}$, $\kappa^2\tau_4b_G^2<\frac{1}{3}$ for $b_G$ from (45). Then (46) holds. Furthermore, define $\mu_G=-\mathbb{F}_G^{-1}\{G^2\upsilon^*+\nabla T(\mathbb{F}_G^{-1}G^2\upsilon^*)\}$ with $T(u)=\frac{1}{6}\langle\nabla^3f(\upsilon^*),u^{\otimes 3}\rangle$ and $\nabla T=\frac{1}{2}\langle\nabla^3f(\upsilon^*),u^{\otimes 2}\rangle$. Then $\|\mathbb{D}(\mu_G-\mathbb{F}^{-1}\ldots)\|$ | https://arxiv.org/abs/2505.02002v1
a refined analysis and new skew adjustment. https://arxiv.org/abs/2306.07262. Katsevich, A. and Rigollet, P. (2023). On the approximation accuracy of Gaussian variational inference. Nesterov, Y. (2018). Lectures on convex optimization, volume 137. Springer. Nesterov, Y. and Nemirovskii, A. (1994). Interior-Point Polyn... | https://arxiv.org/abs/2505.02002v1
Central limit theorems under non-stationarity via relative weak convergence Nicolai Palm and Thomas Nagler May 6, 2025 Statistical inference for non-stationary data is hindered by the lack of classical central limit theorems (CLTs), not least because there is no fixed Gaussian limit to converge to. To address this, we ... | https://arxiv.org/abs/2505.02197v1 |
A. Proofs for relative weak convergence and CLTs; A.1. Relative weak convergence in ℓ∞(T); A.2. Relative central limit theorems; A.3. Existence and tightness of corresponding GPs; A.4. ... | https://arxiv.org/abs/2505.02197v1
of Proposition 4.6; D.6. Proof of Proposition 4.7; D.7. A useful lemma; E. Proofs for the applications; E.1. Proof of Corollary 5.1 ... | https://arxiv.org/abs/2505.02197v1
reason for the popularity of these heuristics is the apparent scarcity of limit theorems for non-stationary data that could support the development of inferential methods. A few approaches have been proposed to establish CLTs for non-stationary processes, but they have significant limitations. Merlevede and Peligrad (2... | https://arxiv.org/abs/2505.02197v1 |
natural analog of tight limits in classical weak convergence and defined formally in Section 2. Under relative compactness, relative weak convergence is equivalent to weak convergence on subsequences, which makes it straightforward to transfer many useful tools, such as the continuous mapping theorem or the functional ... | https://arxiv.org/abs/2505.02197v1 |
are applicable to a wide range of statistical meth- ods and effectively usable as drop-in replacements for classical CLTs when data are not stationary. The remainder of this paper is structured as follows. Relative weak convergence and CLTs are developed in Section 2. We provide tools for proving relative CLTs and fost... | https://arxiv.org/abs/2505.02197v1 |
measurable and all marginals converge weakly, i.e., $(Y_n(t_1),\ldots,Y_n(t_k))\to_d(Y(t_1),\ldots,Y(t_k))$ in $\mathbb{R}^k$ for all $t_1,\ldots,t_k\in T$. 2.2. Non-stationary CLTs via weak convergence along subsequences. Let us first explain some difficulties with non-stationary CLTs. Consider a sequence of stochastic processes $X_i\in\ell^\infty(T)$. ... | https://arxiv.org/abs/2505.02197v1
Proof. This is a special case of Theorem A.7. In other words, even in non-stationary settings, the scaled sample average remains approximately Gaussian, but with the notion of a limiting distribution replaced by a sequence of Gaussians. This mode of convergence and the resulting type of asymptotic normality extends nat... | https://arxiv.org/abs/2505.02197v1 |
Then $\lim_{n\to\infty}|P^*(X_n\in S_n)-P^*(Y_n\in S_n)|=0$. Condition (1) ensures that $S_n$ are continuity sets of $Y_n$ asymptotically in a strong form. This prevents the laws from accumulating too much mass near the boundary of $S_n$. In statistical applications, $Y_n$ is typically (the supremum of) a non-degenerate Gaussian process, for which appropri... | https://arxiv.org/abs/2505.02197v1
the $i$-th component of $Y_n$. (iii) all subsequences $n_k$ contain a subsequence $n_{k_i}$ such that $\Sigma_{n_{k_i}}\to\Sigma$ and $Y_{n_{k_i}}\to_d N(0,\Sigma)$. In other words, multivariate relative CLTs are essentially equivalent to CLTs along subsequences where covariances converge. From this it is straightforward to generalize classical multivariate CLTs to relati... | https://arxiv.org/abs/2505.02197v1
3.2 (Multivariate relative CLT). Let $\alpha\in[0,1/2)$ and $1+(1-2\alpha)^{-1}<\gamma$. Assume (i) $k_n^{-1}\sum_{i,j=1}^{k_n}|\mathrm{Cov}[X^{(l_1)}_{n,i},X^{(l_2)}_{n,j}]|\leq K$ for all $n$ and $l_1,l_2=1,\ldots,d$. (ii) $\sup_{n,i}E[|X^{(l)}_{n,i}|^\gamma]<\infty$ for all $l=1,\ldots,d$. (iii) $k_n\beta_n(k_n^\alpha)^{\frac{\gamma-2}{\gamma}}\to 0$. Then, the scaled sample average $k_n^{-1/2}\sum_{i=1}^{k_n}(X_{n,i}-E[X_{n,i}])$ satisfies a relative CLT. Pr... | https://arxiv.org/abs/2505.02197v1
argument with adaptive coupling and truncation. An important intermediate step is a new maximal inequality (see Theorem B.6) that may be of independent interest: Theorem 3.5. Let $\mathcal{F}$ be a class of functions $f:\mathcal{X}\to\mathbb{R}$ with envelope $F$, $\|f\|_{\gamma,n}\leq\delta$, and $\frac{1}{n}\sum_{i,j=1}^n|\mathrm{Cov}[h(X_i),h(X_j)]|\leq K_1\|h\|^2_{\gamma,n}$ for some $\gamma>2$, all $f\in\mathcal{F}$ and $h:\mathcal{X}\to\mathbb{R}$... | https://arxiv.org/abs/2505.02197v1
Specifying $w_{n,i}(s)=\mathbf{1}\{i\leq\lfloor sn\rfloor\}$, relative sequential CLTs are simple corollaries of weighted relative CLTs. Corollary 3.8 (Sequential relative CLT). Under conditions (i)–(iii) of Theorem 3.6, the sequential empirical process $Z_n\in\ell^\infty([0,1]\times\mathcal{F})$ satisfies a relative CLT. Proof. Appendix C.2. 4. Bootstrap inference. To make p... | https://arxiv.org/abs/2505.02197v1
(i)–(iii) of Proposition 4.2 are satisfied. Continuing Example 3.7 with $\gamma=4$ and $\sup_n\beta_n(m)\leq Cm^{-\rho}$ for some $\rho>6$, we can pick $m_n=k_n^{1/3}$. 4.3. Practical inference. The bootstrap process $G^{(j)}_n$ in the previous section depends on the unknown quantity $\mu_n(i,f)=E[f(X_{n,i})]$. In many testing applications, we have $E[f(X_{n,i})]=0$ a... | https://arxiv.org/abs/2505.02197v1
cannot be handled by previous results. The methods are illustrated by monthly mean temperature anomalies in the Northern Hemisphere from 1880 to 2024 provided by NASA (GISTEMP Team, 2025, Lenssen et al., 2024), and shown in Fig. 1. 5.1. Uniform confidence bands for a nonparametric trend Nonparametric estimation of the ... | https://arxiv.org/abs/2505.02197v1 |
increasing trend in the last 50 years. 5.2. Testing for time series characteristics. Suppose $Z_1,Z_2,\ldots$ is a non-stationary time series and we want to test $H_0:\sup_i\sup_{f\in\mathcal{F}}|E[f(Z_i)]|=0$ against $H_1:\sup_i\sup_{f\in\mathcal{F}}|E[f(Z_i)]|\neq 0$. The functions $f\in\mathcal{F}$ determine which characteristics of the time series we want to control. T... | https://arxiv.org/abs/2505.02197v1
bootstrap to construct a test for the null hypothesis of stationarity. Using $b=0.05$ and a kernel estimator for $\hat\mu_n(s,f)$ as in the previous section, we get $T_n=0.69$ and $c^*_n(0.05)=0.30$, and a bootstrapped p-value smaller than 0.0001, providing strong evidence against the null hypothesis of stationarity. R... | https://arxiv.org/abs/2505.02197v1
Sciences, 102(40):14150–14154. A. Proofs for relative weak convergence and CLTs. Lemma A.1. The sequence $X_n$ is relatively compact if and only if it is asymptotically measurable and relatively asymptotically tight. Proof. If $X_n$ is relatively compact, it converges to some tight Borel law along subsequences. Along such ... | https://arxiv.org/abs/2505.02197v1
$x\mapsto\phi'_{\theta_n}(x)$ satisfies the condition of Proposition 2.12 since $\phi$ has continuous Hadamard-differentials. Thus, $\phi'_{\theta_n}(Y_n)$ is relatively compact since $Y_n$ is. By (iii) of Proposition 2.10 and descending to subsequences, we may assume $Y_n\to_d Y$, $\theta_n\to\theta$ and $\phi'_{\theta_n}(Y_n)\to_d\phi'_\theta(Y)$. By Theorem 3.10.4 in Van der Vaart and Wellner (2023), we ... | https://arxiv.org/abs/2505.02197v1
sequence of tight Borel measurable GPs corresponding to $Y_n$ if and only if $\sup_{n\in\mathbb{N},i\leq d}\mathrm{Var}[Y^{(i)}_n]<\infty$. Equivalently, if all subsequences $n_k$ contain a further subsequence $n_{k_i}$ such that $\Sigma_{n_{k_i}}$ converges. Combined with the fact that a sequence of centered Gaussians converges weakly if and only if its corresponding sequence of c... | https://arxiv.org/abs/2505.02197v1
$\rho_n)\,d\epsilon$. Taking the limit $\delta\to 0$, the right-hand side converges to zero by (ii). Together with Markov's inequality, we obtain that $NY_n$ is asymptotically uniformly $d$-equicontinuous in probability. By Corollary A.6, for fixed $t\in T$ the sequence $NY_n(t)$ is relatively asymptotically tight for all $t\in T$ if $\sup_n\mathrm{Var}[Y_n(t)]<\infty$. Since this seq... | https://arxiv.org/abs/2505.02197v1
For fixed $n$, we drop the index $n$, i.e., write $\mathcal{F}_n=\mathcal{F}$, $X_{n,i}=X_i$, $m_n=m$, etc., and assume without loss of generality that $k_n=n$. Lemma B.1. For any class $\mathcal{F}$ of functions $f:\mathcal{X}\to\mathbb{R}$ with $\sup_{f\in\mathcal{F}}\|f\|_\infty\leq B$ and any integer $1\leq m\leq n/2$, it holds $E\|G_n\|_{\mathcal{F}}\leq E\|G^*_n\|_{\mathcal{F}}+B\sqrt{n}\beta_n(m)$. Proof. We have $E\|G_n\|_{\mathcal{F}}\leq E\sup_{f\in\mathcal{F}}|G^*_nf|+B\sqrt{n}\,E\big[\sum_{i=1}^n\mathbf{1}(X_i\neq X^*_i)\big]\leq E\|G^*_n\|_{\mathcal{F}}+B\sqrt{n}\beta_n(m)$... | https://arxiv.org/abs/2505.02197v1
all $m\geq 1$. Applying Theorem B.4 with $m=1$ yields the claim. Theorem B.6. Let $\mathcal{F}$ be a class of functions $f:\mathcal{X}\to\mathbb{R}$ with envelope $F$ and, for some $\gamma>2$, $\|f\|_{\gamma,n}\leq\delta$ and $\frac{1}{n}\sum_{i,j=1}^n|\mathrm{Cov}[h(X_i),h(X_j)]|\leq K_1\|h\|^2_{\gamma,n}$ for all $f\in\mathcal{F}$ and $h:\mathcal{X}\to\mathbb{R}$ bounded and measurable. Suppose that $\max_n\beta_n(m)\leq K_2m^{-\rho}$ for some $\rho\geq\gamma/(\gamma-2)$. Then, for any $n\geq 5$, $E\|G_n\|_{\mathcal{F}}\lesssim\int_0^\delta\sqrt{\ldots}$ | https://arxiv.org/abs/2505.02197v1
$+N_r\leq n$ for all $r\leq r_1$. We will frequently apply Bernstein's inequality with $m=m_r$. Here, note $m_r\leq\sqrt{n/(l_n+N_r)}\leq\sqrt{n/(l_n+N_2)}\leq n/2$ for all $n\geq 5$. The following (in)equalities are the reason for the choices of $\tau_r$ and $m_r$: for $r$ such that $\sqrt{l_n+N_{r+1}}\leq n$, i.e., $r<r_1$, it holds $\frac{m_r\tau_{r-1}}{\sqrt{n}}=2^{-r+1}\sqrt{\frac{l_n+N_r}{n}}$, $2^{-r\gamma}\tau_r^{-(\gamma-1)}=2^{-r}\sqrt{n}\,\frac{1}{m_r}\big(\sqrt{\tfrac{n}{l_n+N_{r+1}}}\big)^{-(\gamma-\ldots}$ | https://arxiv.org/abs/2505.02197v1
Bounding T4. Finally, $E\|G_nT_4\|_{\mathcal{F}}=E\big\|G_n[f-\pi_{r_1}(f)]\mathbf{1}\{\max_{r_0\leq k\leq r_1}\Delta_k(f)/\tau_k\leq 1\}\big\|_{\mathcal{F}}\lesssim E\|G_n\Delta_{r_1}\mathbf{1}\{\Delta_{r_1}\leq\tau_{r_1}\}\|_{\mathcal{F}}+\sqrt{n}\sup_{f\in\mathcal{F}}\|\Delta_{r_1}(f)\mathbf{1}\{\Delta_{r_1}(f)\leq\tau_{r_1}\}\|_{1,n}\lesssim E\|G_n\Delta_{r_0}\|_{\mathcal{F}}+\sqrt{n}\tau_{r_1}$, and the first term is bounded by T1. Finally, observe that, by the definition of $r_1$, $\sqrt{n}\tau_{r_1}\leq\sqrt{n}2^{-r_1}=\sqrt{n}N_{[\,]}^{-1}(e_n)$. Lemma B.7. For $\gamma>2$ and $\sum_{i=1}^n\beta_n(i)^{\frac{\gamma-2}{\gamma}}\leq K$ it holds $\frac{1}{n}\sum_{i,\ldots}$ | https://arxiv.org/abs/2505.02197v1
$\mathcal{F}_{n,\delta}=\{f-g:f,g\in\mathcal{F}_n,\|f-g\|_{\gamma,n}<\delta\}$ satisfies $N_{[\,]}(\epsilon,\mathcal{F}_{n,\delta},\|\cdot\|_{\gamma,n})\leq N_{[\,]}(\epsilon/2,\mathcal{F}_n,\|\cdot\|_{\gamma,n})^2$. Indeed, given $\epsilon/2$-brackets $[l_f,u_f]$ and $[l_g,u_g]$ for $f$ and $g$, $[l_f-u_g,u_f-l_g]$ is an $\epsilon$-bracket for $f-g$. By Theorem B.6, $\limsup_nE^*\sup_{d_n(s,t)<\delta_n}|G_n(s)-G_n(t)|\lesssim\lim_{n\to\infty}\int_0^{2\delta_n}\sqrt{\ln N_{[\,]}(\epsilon,\mathcal{F}_n,\|\cdot\|_{\gamma,n})}\,d\epsilon=0$ by (iii), which proves the claim. ... | https://arxiv.org/abs/2505.02197v1
$U^*_{n,i}\in\mathbb{R}^{p_n}$ such that • $U_{n,i}\stackrel{d}{=}U^*_{n,i}$; • the sequence $U^*_{n,i}$ is independent; • $P(\exists\,U_{n,i}\neq U^*_{n,i})\leq r_n\beta_n(q_n)$. Define the coupled long block sums $Z^*_{n,i}=\sum_{j=1}^{p_n}U^{*(j)}_{n,i}$. For all $\epsilon>0$ we obtain $P\big(k_n^{-1/2}\big|\sum_{i=1}^{r_n}Z^*_{n,i}-\sum_{i=1}^{r_n}Z_{n,i}\big|>\epsilon\big)\leq P(\exists\,U_{n,i}\neq U^*_{n,i})\leq r_n\beta_n(q_n)=k_n^{1/2+\delta}\beta_n(k_n^\alpha)\leq k_n\beta_n(k_n^\alpha)\to 0$. Next, $\mathrm{Var}\big[\sum_{i=1}^{r_n}Z_{n,i}\big]=\mathrm{Var}\big[\sum_{i=1}^{r_n}\ldots$ | https://arxiv.org/abs/2505.02197v1
measurable GPs $NG_n$ corresponding to $G_n$. Note that each $NG_n(s)$ is measurable for all $s\in T$. Accordingly, $NG_n$ is asymptotically measurable, hence relatively compact by Lemma A.3. According to Corollary 2.18, it remains to prove relative CLTs of the marginals $(G_n(t_1),\ldots,G_n(t_d))$ for all $d\in\mathbb{N}$, $t_1,\ldots,t_d\in T$. We apply ... | https://arxiv.org/abs/2505.02197v1
multiplier empirical process $G^V_n$ in terms of bracketing entropy conditions with respect to $\mathcal{F}$. Here, we apply a coupling argument and then Corollary B.5 to the coupled empirical process. We will argue similarly to the proof of Theorem C.3. To avoid clutter we assume $w_{n,i}=1$, but the argument is similar for the general ca... | https://arxiv.org/abs/2505.02197v1
$\|\cdot\|_{\gamma,n}$ be given. Clearly $f_{V,+,n}\in[\underline{f}_{V,+,n},\overline{f}_{V,+,n}]$. Since $E[f(X_{n,i})]=0$ it holds $\|\overline{f}_{V,+,n}\|^2_{2,n,+}=\frac{1}{r_n}\sum_{i=1}^{r_n}E[\overline{f}_{V,+,n}(U^*_{n,i})^2]=\frac{1}{r_n}\sum_{i=1}^{r_n}\mathrm{Var}[\overline{f}_{V,+,n}(U^*_{n,i})]=\frac{1}{m_nr_n}\sum_{i=1}^{r_n}\sum_{j,k=1}^{p_n}\mathrm{Cov}[V_{n,(i-1)m_n+j}f(X_{n,(i-1)m_n+j}),V_{n,(i-1)m_n+k}f(X_{n,(i-1)m_n+k})]\lesssim\frac{2}{k_n}\sum_{i,j=1}^{k_n}|\mathrm{Cov}[f(X_{n,i}),f(X_{n,j})]|\lesssim\|f\|^2_{\gamma,n}$, by Lemma B.7. This yields... | https://arxiv.org/abs/2505.02197v1
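One excerpt above quotes Lemma B.3 of arXiv:2505.01987, the integral-comparison bound $\sum_{i=1}^n 1/\sqrt{i}\in[2\sqrt{n}-2,\,2\sqrt{n}-1]$. A minimal numerical check of this bound (the helper name `harmonic_sqrt_sum` is our illustrative choice, not from the paper):

```python
import math

def harmonic_sqrt_sum(n: int) -> float:
    """Compute sum_{i=1}^n 1/sqrt(i)."""
    return sum(1.0 / math.sqrt(i) for i in range(1, n + 1))

for n in (10, 100, 10_000):
    s = harmonic_sqrt_sum(n)
    # Integral-comparison bounds stated in Lemma B.3
    assert 2 * math.sqrt(n) - 2 <= s <= 2 * math.sqrt(n) - 1
    # The normalized sum s / sqrt(n) tends to 2 as n grows
    print(n, s / math.sqrt(n))
```

The assertions pass for every `n`, matching the lemma's conclusion that the normalized sum is squeezed between $2-2/\sqrt{n}$ and $2-1/\sqrt{n}$.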
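Another excerpt quotes Ville's inequality: $P(\exists t\geq 0: M_t\geq x)\leq EM_0/x$ for a nonnegative supermartingale. A Monte Carlo sketch of the inequality under assumptions of our own: the multiplicative martingale with fair-coin $\pm 1$ factors, the finite horizon, and the function name `crossing_frequency` are all illustrative choices, not from the paper:

```python
import random

def crossing_frequency(x: float, horizon: int = 200, trials: int = 20_000,
                       lam: float = 0.5, seed: int = 0) -> float:
    """Estimate P(exists t <= horizon with M_t >= x) for the nonnegative
    martingale M_t = prod_{i<=t} (1 + lam * eps_i), eps_i = +/-1 fair coin,
    M_0 = 1 (so each factor is 0.5 or 1.5, with conditional mean 1)."""
    rng = random.Random(seed)
    crossings = 0
    for _ in range(trials):
        m = 1.0
        for _ in range(horizon):
            m *= 1.0 + lam * rng.choice((-1.0, 1.0))
            if m >= x:
                crossings += 1
                break
    return crossings / trials

for x in (2.0, 5.0, 10.0):
    freq = crossing_frequency(x)
    print(f"x={x}: crossing frequency {freq:.4f} <= Ville bound {1 / x:.4f}")
```

The empirical crossing frequency sits below $1/x$ for each level, as the inequality predicts; the gap reflects the overshoot of the martingale at the moment of crossing.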