P_n and P'_n, respectively. Similarly, Pf represents the expectation of f under P(x,y) = F(x)G(y). Thus, the expressions are given by
\[ P_n f = \frac{1}{n}\sum_{i=1}^n f(X_i,Y_i), \qquad P'_n f = \frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^n f(X_i,Y_j), \qquad Pf = \int f \, dP. \tag{A.1} \]
We choose f(x,y) = |F(x) − G(y)| and f_n(x,y) = |F_n(x) − G_n(y)|, (A...
https://arxiv.org/abs/2505.01825v1
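The paired average P_n f_n and the all-pairs average P'_n f_n above can be computed directly; the snippet below is an illustrative sketch under assumptions not in the excerpt (synthetic data, and empirical CDFs F_n, G_n standing in for the unknown F, G):

```python
import numpy as np

def empirical_cdf(sample):
    """Return x -> (1/n) * #{i : sample_i <= x}, vectorized via searchsorted."""
    s = np.sort(sample)
    return lambda x: np.searchsorted(s, x, side="right") / len(s)

rng = np.random.default_rng(0)
X, Y = rng.normal(size=50), rng.normal(size=50)
Fn, Gn = empirical_cdf(X), empirical_cdf(Y)
fn = lambda x, y: np.abs(Fn(x) - Gn(y))  # the chosen f_n(x,y) = |F_n(x) - G_n(y)|

# P_n f_n = (1/n) sum_i f_n(X_i, Y_i): average over paired observations
Pn = np.mean(fn(X, Y))
# P'_n f_n = (1/n^2) sum_{i,j} f_n(X_i, Y_j): average over all pairs (broadcast)
Ppn = np.mean(fn(X[:, None], Y[None, :]))
```

Since F_n and G_n take values in [0,1], both averages necessarily lie in [0,1].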
Substituting this result into Equation (A.3), we get
\[ \tilde\varphi_n = \frac{3}{n+1}\sum_{i=1}^n\left(\frac{2}{3} - |U_i - V_i| - U_i(1-U_i) - V_i(1-V_i)\right) + O_p\!\left(\frac{1}{n}\right) = \hat\varphi_n + O_p\!\left(\frac{1}{n}\right). \]
Additionally, utilizing the results from Lemma A.1, it is easy to derive that \( \mathbb{E}\,\hat\varphi_n = 0 \), \( \mathrm{Var}(\hat\varphi_n) = \left(\frac{3}{n+1}\right)\)...
Sharp empirical Bernstein bounds for the variance of bounded random variables. Diego Martinez-Taboada (Department of Statistics & Data Science) and Aaditya Ramdas (Department of Statistics & Data Science; Machine Learning Department), Carnegie Mellon University, {diegomar,aramdas}@andrew.cmu.edu. May 6, 2025. Abstract: Much recent effort has focused on deriving “em...
https://arxiv.org/abs/2505.01987v1
of confidence intervals. A (1−α)-confidence interval C_CI for a target parameter θ is a random set such that P(θ ∈ C_CI) ≥ 1−α, where C_CI is built after having observed a fixed number of observations. In contrast, a (1−α)-confidence sequence C_CS,t provides a high-probability guarantee that the parameter θ is contained in the sequ...
(2009, Theorem 10). They are based on the concentration of self-bounding random variables (Maurer, 2006). Another concentration inequality for the variance can be found in the proof of Audibert et al. (2009, Theorem 1), which decouples the analysis into those of the mean and second centered moment, in a similar spirit ...
hold at any stopping time), derived using the following anytime-valid version of Markov's inequality.
Theorem 3.1 (Ville's inequality). For any nonnegative supermartingale (M_t)_{t≥0} and x > 0,
\[ P(\exists\, t \ge 0 : M_t \ge x) \le \frac{\mathbb{E} M_0}{x}. \]
Powerful nonnegative supermartingale constructions are usually at the heart of anytime-valid concentration i...
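Ville's inequality can be checked empirically on a toy martingale; the sketch below (my own illustration, not from the paper) builds a nonnegative martingale with M_0 = 1 from mean-one multiplicative increments and estimates the probability that its running supremum ever exceeds x:

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, T, x = 4000, 200, 5.0
# Nonnegative martingale with M_0 = 1: cumulative product of i.i.d.
# mean-one positive multipliers (here Uniform(0, 2) increments).
mult = rng.uniform(0.0, 2.0, size=(n_paths, T))
M = np.cumprod(mult, axis=1)
running_max = np.maximum(1.0, M.max(axis=1))  # include M_0 = 1 in the supremum
crossing_freq = np.mean(running_max >= x)
ville_bound = 1.0 / x  # E[M_0]/x = 1/x
```

On any finite horizon the empirical crossing frequency stays below the bound E[M_0]/x = 1/x, as the theorem guarantees.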
supermartingales that give way to Theorem 3.2. However, in contrast to Theorem 3.2, the conditional means of the random variables (X_i − \hat\mu_i)^2 under study are not constant. This opens the door to providing concentration for the variance. Denoting
\[ R_{t,\alpha} := \frac{\log(1/\alpha) + \sum_{i\le t}\psi_E(\lambda_i)\left((X_i-\hat\mu_i)^2-\hat\sigma_i^2\right)^2}{\sum_{i\le t}\lambda_i}, \qquad D_t := \frac{\sum_{i\le t}\lambda_i (X_i-\hat\mu_i)^2}{\;...}
\]
to Appendix D.4. Corollary 4.4 (Lower empirical Bernstein for the variance). Let Assumption 1.1 hold. For (\hat\mu_i)_{i\ge 1} defined as in (4.2), any [0,1]-valued predictable sequence (\hat\sigma^2_i)_{i\ge 1}, any [0,1)-valued predictable sequence (\lambda_i)_{i\ge 1}, and any [0,∞)-valued predictable sequence (\tilde\lambda_i)_{i\ge 1}, it holds that (L_{t,\infty}) is a 1−α lower con...
4.3, with c_2 ∧ c_3 > 0. The condition c_2 ∧ c_3 > 0 is not necessary for the proofs to hold, but they follow more cleanly with it. For simplicity, we will focus on the asymptotic behavior of both
\[ \sqrt{n}\,R_{n,\alpha}, \qquad \sqrt{n}\left(\frac{\tilde R^2_{i,\alpha_1,n}}{\sum_{i\le t}\lambda_{i,\alpha_2,n}} + R_{n,\alpha_2,n}\right), \]
where we define \(\lambda_{i,\alpha_2,n} = \lambda^{CI}_{t,u,\alpha_2,n}\). These two quantities correspond to the first ...
random variables lie in a ball of diameter 1, instead of a unit-long interval. Similarly, the concept of variance involves norms, instead of just squares of scalars.
Assumption 4.9. The stream of random variables X_1, X_2, ... belongs to a separable Hilbert space H, and is such that \( \|X_t\| \in [0, \tfrac{1}{2}] \), \( \mathbb{E}_{t-1}X_t = \mu \), \( \mathbb{E}_{t-1}\|X_t-\mu\|^2 = \)...
uniform distribution on (0,1), (II) the beta distribution with parameters (2,6), and (III) the beta distribution with parameters (5,5). For each of the inequalities, the 0.95 empirical quantiles are also displayed. The Maurer-Pontil (MP) inequality (Maurer and Pontil, 2009, Theorem 10) is compared against our proposal...
the AAAI Conference on Artificial Intelligence, volume 29. Ville, J. (1939). Étude Critique de la Notion de Collectif. Gauthier-Villars, Paris. Wang, H. and Ramdas, A. (2024). Sharp matrix empirical Bernstein inequalities. arXiv preprint arXiv:2411.09516. Waudby-Smith, I. and Ramdas, A. (2024). Estimating means of bo...
For any [0,1)-valued predictable sequence (\lambda_i)_{i\ge 1}, any B_{1/2}(0)-valued predictable sequence (\hat\mu^{HS}_i)_{i\ge 1}, and any [0,1]-valued predictable sequence (\hat\sigma_i)_{i\ge 1}, the sequence of sets
\[ \left( D^{HS}_t - E^{HS}_t \pm R^{HS}_{t,\alpha/2} \right) \]
is a 1−α confidence sequence for σ². From Corollary A.2, upper and lower inequalities for the variance can be derived a...
λ. Lemma B.3. It holds that
\[ \sum_{i=1}^n \frac{1}{\sqrt{i}} \in \left[2\sqrt{n}-2,\; 2\sqrt{n}-1\right], \quad\text{and so}\quad \frac{1}{\sqrt{n}}\sum_{i=1}^n \frac{1}{\sqrt{i}} \to 2. \]
Proof. Given that x ↦ 1/√x is a decreasing function, it follows that
\[ \int_1^n \frac{dx}{\sqrt{x}} \le \sum_{i=1}^n \frac{1}{\sqrt{i}} \le 1 + \int_1^n \frac{dx}{\sqrt{x}}, \qquad\text{with } \int_1^n \frac{dx}{\sqrt{x}} = 2\sqrt{n}-2. \]
In order to conclude the proof, it suffices to note that
\[ \frac{1}{\sqrt{n}}\sum_{i=1}^n \frac{1}{\sqrt{i}} \in \left[2-\frac{2}{\sqrt{n}},\; 2-\frac{1}{\sqrt{n}}\right], \qquad 2-\frac{2}{\sqrt{n}} \to 2, \quad 2-\frac{1}{\sqrt{n}} \to 2, \]
and invoke the...
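The bracketing in Lemma B.3 is easy to verify numerically; a minimal sketch:

```python
import numpy as np

# Numerical check of Lemma B.3:
#   2*sqrt(n) - 2 <= sum_{i=1}^n i^(-1/2) <= 2*sqrt(n) - 1,
# hence n^(-1/2) * sum_{i=1}^n i^(-1/2) -> 2.
partial_sums = {n: float(np.sum(1.0 / np.sqrt(np.arange(1, n + 1))))
                for n in (10, 100, 10_000)}
scaled = {n: s / np.sqrt(n) for n, s in partial_sums.items()}
```

The scaled sums approach 2 from below at rate O(1/√n), exactly as the two endpoints 2 − 2/√n and 2 − 1/√n suggest.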
it follows that
\[ \left|\frac{1}{n}\sum_{i=1}^n a_i b_i - ab\right| = \left|\frac{1}{n}\sum_{i=1}^n (a_i b_i - a b_i + a b_i - ab)\right| \le \frac{1}{n}\sup_{i\le n}|a_i-a|\sum_{i=1}^n b_i + |a|\left|\frac{1}{n}\sum_{i=1}^n b_i - b\right| \]
\[ \le \frac{1}{n}\sup_{i\le n}|a_i-a|\sum_{i=1}^n b_i + \frac{|a|\,\epsilon}{3(|a|+1)} \le \frac{1}{n}\sup_{i\le M'-1}|a_i-a|\sum_{i=1}^{M'-1} b_i + \frac{1}{n}\sup_{M'\le i\le n}|a_i-a|\sum_{i=M'}^{n} b_i + \frac{\epsilon}{3} \]
\[ \le \frac{\epsilon}{3} + \sup_{i\ge M'}|a_i-a|\cdot\frac{1}{n}\sum_{i=1}^n b_i + \frac{\epsilon}{3} \le \frac{\epsilon}{3} + \frac{\epsilon}{3(b+1)}\left(\frac{\epsilon}{3(|a|+1)} + b\right) + \frac{\epsilon}{3} \le \frac{\epsilon}{3}+\frac{\epsilon}{3}+\frac{\epsilon}{3} = \epsilon. \]
Lemma B.10. Let (a_{n,i})_{n\ge 1, i...
this result when (\hat m^2_{4,i})_{i\in[n]} is defined as in Section 4.2. Its proof may be found in Appendix E.2.
Proposition C.2. Let X_1, ..., X_n fulfill Assumption 1.1 and Assumption 4.5 with \( V[(X-\mu)^2] = 0 \). Let (\hat\mu_i)_{i\in[n]}, (\hat\sigma^2_i)_{i\in[n]}, and (\hat m^2_{4,i})_{i\in[n]} be defined as in Section 4.2. Then \( \hat m^2_{4,t} = \tilde O(1/t) \) almost surely. The ...
S'_0 = 1, is a nonnegative supermartingale. In view of Ville's inequality (Theorem 3.1), we observe that
\[ P\left(\exp\left(\sum_{i\le t}\tilde\lambda_i(X_i-\mu) - \sigma^2\sum_{i\le t}\psi_P(\tilde\lambda_i)\right) \ge 2/\delta\right) \le \frac{\delta}{2}, \]
thus
\[ P\left(\sum_{i\le t}\tilde\lambda_i(X_i-\mu) - \sigma^2\sum_{i\le t}\psi_P(\tilde\lambda_i) \ge \log(2/\delta)\right) \le \frac{\delta}{2}, \]
and so
\[ P\left(\mu \le \frac{\sum_{i\le t}\tilde\lambda_i X_i - \sigma^2\sum_{i\le t}\psi_P(\tilde\lambda_i) - \log(2/\delta)}{\sum_{i\le t}\tilde\lambda_i}\right) \le \frac{\delta}{2}. \]
Arguing analogously replacing X_i − µ fo...
\(\log(1/\delta_n)\big/\big(\hat m^2_{4,t}(\omega)\,n\big)\) for i ≥ m(ω). Denote
\[ (I_n(\omega)) := \log(1/\delta_n) + \sum_{i<m(\omega)}\psi_E(\lambda_{i,\delta_n}(\omega))\left((X_i(\omega)-\hat\mu_i(\omega))^2-\hat\sigma_i^2(\omega)\right)^2, \]
\[ (II_n(\omega)) := \sum_{i=m(\omega)}^{n}\psi_E(\lambda_{i,\delta_n})\left((X_i(\omega)-\hat\mu_i(\omega))^2-\hat\sigma_i^2(\omega)\right)^2. \]
We observe that
\[ (I_n(\omega)) \le \log(1/l) + \psi_E(c_1)\sum_{i<m(\omega)}\left((X_i(\omega)-\hat\mu_i(\omega))^2-\hat\sigma_i^2(\omega)\right)^2, \]
and so it is bounded. Furthermore,
\[ (II_n(\omega)) = \sum_{i=m(\omega)}^{n}\psi_E\Big(\sqrt{2\log(1...}\Big) \]
inequality.
D.7 Proof of Theorem 4.10
Denote \( f_t = \sum_{i\le t}\tilde\lambda_i(X_i-\mu) \). Pinelis (1994, Theorem 3.2) showed that
\[ \mathbb{E}_{t-1}\cosh(\|f_t\|) \le \left(1 + \mathbb{E}_{t-1}\psi_P(\tilde\lambda_t\|X_t-\mu\|)\right)\cosh(\|f_{t-1}\|). \]
Similarly to the proof of Theorem 3.3, it now follows that
\[ 1 + \mathbb{E}_{t-1}\psi_P(\tilde\lambda_t\|X_t-\mu\|) = 1 + \mathbb{E}_{t-1}\sum_{k=2}^{\infty}\frac{(\tilde\lambda_t\|X_t-\mu\|)^k}{k!} = 1 + \sum_{k=2}^{\infty}\frac{\tilde\lambda_t^k\,\mathbb{E}_{t-1}\left[\|X_t-\mu\|^k\right]}{k!} \overset{(i)}{=} 1 + \sum_{k=2}^{\infty}... \]
\[ \le \frac{c_2 + \sum_{i=1}^{\infty}\kappa_1/i^2}{t} = \frac{c_2 + \kappa_1\pi^2/6}{t} = O\!\left(\frac{1}{t}\right). \]
If σ² > 0, note that
\[ v_i = (X_i-\hat\mu_i)^2 - \hat\sigma_i^2 = (X_i-\mu)^2 - \sigma^2 + 2(X_i-\mu)(\mu-\hat\mu_i) + (\mu-\hat\mu_i)^2 + \sigma^2 - \hat\sigma_i^2 \overset{(i)}{=} 2(X_i-\mu)(\mu-\hat\mu_i) + (\mu-\hat\mu_i)^2 + \sigma^2 - \hat\sigma_i^2, \]
where (i) follows from (X_i−µ)² = σ², and so \( |v_i| \le 3|\mu-\hat\mu_i| + |\sigma^2-\hat\sigma_i^2| \). The martingale analogue of Kolmogorov's law of the iterated logarithm (Stout, 197...
obtained in view of Lemma B.4. Hence, for ω ∈ A := A_1 ∩ A_2,
\[ \left|\sum_{i\le n}\iota_i(\omega)(X_i(\omega)-\mu)\right| \le C(\omega)\sqrt{32(\log n + 1)\log\log(16(\log n + 1))}, \]
which is \(\tilde O(1)\). That is, \(\sum_{i\le n}\iota_i(X_i-\mu) = \tilde O(1)\) almost surely. Let us now show that
\[ \sum_{i\le n}\tilde\lambda_{i,\alpha_1,n}(X_i-\mu) = \tilde O(1) \tag{E.1} \]
almost surely as well. For i ≥ m(ω),
\[ \sqrt{\frac{2\log(2/\alpha_{1,n})}{\hat\sigma_i^2\, i\log(i+1)}} \le \sigma\sqrt{2}\sqrt{\frac{\log(2/\alpha_{1,n})}{i\,\mathrm{lo}...}} \]
\( V[(X_i-\mu)^2] \), (E.3) for ω ∈ A. Denoting \(\psi_N(\lambda) = \lambda^2/2\), as well as \(\xi_{n,i}(\omega) := \psi_E(\lambda_{i,\delta_n}(\omega))/\psi_N(\lambda_{i,\delta_n}(\omega))\), it follows that
\[ (II_n(\omega)) = \sum_{i=m_\omega}^{n}\psi_E(\lambda_{i,\delta_n}(\omega))Z_i(\omega) = \sum_{i=m_\omega}^{n}\psi_N(\lambda_{i,\delta_n}(\omega))\,\xi_{n,i}(\omega)Z_i(\omega) = \frac{\log(1/\delta_n)(n-m_\omega+1)}{n}\cdot\frac{1}{n-m_\omega+1}\sum_{i=m_\omega}^{n}\frac{\xi_{n,i}(\omega)Z_i(\omega)}{\hat m^2_{4,i}(\omega)}. \]
In view of (E.2), Lemma B.9 yields
\[ \frac{1}{n-m_\omega+1}\sum_{i=m_\omega}^{n}\frac{Z_i(\omega)}{\hat m^2_{4,i}(\omega)} \to a\, V[(X_i-... \]
it follows that
\[ (I_n) \le \sup_{i\le n}\left\{2\log^2(2/\alpha_{1,n}) + 2\sigma^4\left(\sum_{k=1}^{i-1}\psi_P\left(\sqrt{\frac{2\log(2/\alpha_{1,n})}{\hat\sigma_k^2\,k\log(1+k)}}\wedge c_5\right)\right)^2\right\}. \]
We observe that
\[ \psi_P\left(\sqrt{\frac{2\log(2/\alpha_{1,n})}{\hat\sigma_k^2\,k\log(1+k)}}\wedge c_5\right) \le \psi_P\left(\sqrt{\frac{2\log(2/\alpha_{1,n})}{\hat\sigma_k^2\,k\log(1+k)}}\right) \overset{(i)}{\le} \left(\frac{1}{\sqrt{2k\log(1+k)}}\right)^2\psi_P\left(\sqrt{\frac{4\log(2/\alpha_{1,n})}{\hat\sigma_k^2}}\right) = \frac{1}{2k\log(1+k)}\,\psi_P\left(\sqrt{\frac{4\log(2/\alpha_{1,n})}{\hat\sigma_k^2}}\right) \overset{(ii)}{\le} \frac{1}{2k\log(1+k)}\exp... \]
we take
\[ \sigma^2 \le U_{1,\alpha_1,t} - L_{2,\alpha_2,t}^2 \tag{F.1} \]
as the upper confidence sequence for σ². Similarly, in order to derive lower inequalities, define
\[ L_{1,\alpha_1,t} := \frac{\sum_{i\le t}\lambda_i\left(X_i^2 - \widehat m_{2,i}\right)^2}{\sum_{i\le t}\lambda_i} - \frac{\log(1/\alpha_1) + \sum_{i\le t}\psi_E(\lambda_i)\left(X_i^2 - \widehat m_{2,i}\right)^2}{\sum_{i\le t}\lambda_i} \]
as the lower confidence sequence for \( \mathbb{E}X_i^2 \) (which follows from the empirical Bernstein inequality), and L_{2,α_2,t}...
arXiv:2505.02002v1 [math.OC] 4 May 2025. Sharp bounds in perturbed smooth optimization. Vladimir Spokoiny. Abstract: This paper studies the problem of perturbed convex and smooth optimization. The main results describe how the solution and ...
https://arxiv.org/abs/2505.02002v1
for tensor methods in convex optimization, and further references can be found in Doikov and Nesterov (2021, 2022, 2024) and Rodomanov and Nesterov (2021, 2022). An accurate estimate of υ_k − υ* and of f_k(υ_k) − f_k(υ*) can be used for the analysis of many iterative procedures in optimization and stochastic approximation; see Polyak and J...
properties of f at υ are given via the quantities
\[ \omega(\upsilon) := \sup_{u:\,\|\mathbb{D}(\upsilon)u\|\le r}\frac{2|\delta_3(\upsilon,u)|}{\|\mathbb{D}(\upsilon)u\|^2}, \qquad \omega'(\upsilon) := \sup_{u:\,\|\mathbb{D}(\upsilon)u\|\le r}\frac{|\delta_3'(\upsilon,u)|}{\|\mathbb{D}(\upsilon)u\|^2}. \tag{2} \]
The definition yields, for any u with \(\|\mathbb{D}(\upsilon)u\| \le r\), \( |\delta_3(\upsilon,u)| \)...
means that the matrix \(\mathbb{F} - \kappa^2\mathbb{D}^2\) is positive definite. The next result describes the concentration properties of υ° from (8) in a local elliptic set
\[ \mathcal{A}(r) := \{\upsilon : \|\mathbb{F}^{1/2}(\upsilon-\upsilon^*)\| \le r\}, \tag{9} \]
where r is slightly larger than \(\|\mathbb{F}^{-1/2}A\|\). Theorem 3.1. Let f(υ) be a strongly convex function with f(υ*...
\[ \mathbb{z} = \omega^2(\mathbb{B}\xi)^\top(I - \omega\mathbb{B})^{-1}\mathbb{B}\xi \le \frac{\omega^2}{1-\omega}\|\mathbb{B}\xi\|^2 \le \frac{\omega^2}{1-\omega}\,\xi^\top\mathbb{B}\xi, \]
we conclude for ω ≤ 1/3 by the triangle inequality
\[ \|u^\circ+\xi\|_{\mathbb{z}} \le \left(\sqrt{\frac{\omega^2}{1-\omega}} + \sqrt{\frac{2\omega}{1-\omega^2}}\right)\sqrt{\xi^\top\mathbb{B}\xi} \le 2\sqrt{\frac{\omega}{1-\omega}}\sqrt{\xi^\top\mathbb{B}\xi}, \]
and (13) follows ...
\(\frac{\|\mathbb{D}u\|^3}{6}\). (27)
Proof. Fix u ∈ U_r and any vector w ∈ ℝ^p. For t ∈ [0,1], denote
\[ h(t) := \left\langle \nabla f(\upsilon+tu) - \nabla f(\upsilon) - t\,\langle\nabla^2 f(\upsilon), u\rangle,\; w\right\rangle. \]
Then h(0) = 0, h'(0) = 0, and (T*_3) and (4) imply
\[ |h''(t)| = \left|\langle\nabla^3 f(\upsilon+tu),\, u^{\otimes 2}\otimes w\rangle\right|... \]
on (39), assume \(\|\mathbb{D}\mathbb{F}^{-1}A\| \le \|\mathbb{D}a\| \le (11/9)\|\mathbb{D}\mathbb{F}^{-1}A\|\). Then (38) yields, by ∇T(u) = ∇T(−u),
\[ \|\mathbb{D}^{-1}\nabla T(a) - \mathbb{D}^{-1}\nabla T(\mathbb{F}^{-1}A)\| = \|\mathbb{D}^{-1}\nabla T(a) - \mathbb{D}^{-1}\nabla T(-\mathbb{F}^{-1}A)\| \le \sup_{t\in[0,1]}\|\mathbb{D}^{-1}\nabla^2 T(ta-(1-t)\mathbb{F}^{-1}A)\mathbb{D}^{-1}\|... \]
some 𝔻², τ_3, τ_4, and r satisfying
\[ \mathbb{D}^2 \le \kappa^2\mathbb{F}_G, \qquad r = \tfrac{3}{2}b_G, \qquad \kappa^2\tau_3 b_G < \tfrac{4}{9}, \qquad \kappa^2\tau_4 b_G^2 < \tfrac{1}{3} \]
for b_G from (45). Then (46) holds. Furthermore, define
\[ \mu_G = -\mathbb{F}_G^{-1}\left\{G^2\upsilon^* + \nabla T(\mathbb{F}_G^{-1}G^2\upsilon^*)\right\} \]
with \( T(u) = \tfrac{1}{6}\langle\nabla^3 f(\upsilon^*), u^{\otimes 3}\rangle \) and \( \nabla T(u) = \tfrac{1}{2}\langle\nabla^3 f(\upsilon^*), u^{\otimes 2}\rangle \). Then \(\|\mathbb{D}(\mu_G - \mathbb{F}^{-1}\)...
a refined analysis and new skew adjustment. https://arxiv.org/abs/2306.07262. Katsevich, A. and Rigollet, P. (2023). On the approximation accuracy of Gaussian variational inference. Nesterov, Y. (2018). Lectures on Convex Optimization, volume 137. Springer. Nesterov, Y. and Nemirovskii, A. (1994). Interior-Point Polyn...
Central limit theorems under non-stationarity via relative weak convergence. Nicolai Palm and Thomas Nagler. May 6, 2025. Statistical inference for non-stationary data is hindered by the lack of classical central limit theorems (CLTs), not least because there is no fixed Gaussian limit to converge to. To address this, we ...
https://arxiv.org/abs/2505.02197v1
A. Proofs for relative weak convergence and CLTs
A.1. Relative weak convergence in ℓ∞(T)
A.2. Relative central limit theorems
A.3. Existence and tightness of corresponding GPs
A.4. ...
... of Proposition 4.6
D.6. Proof of Proposition 4.7
D.7. A useful lemma
E. Proofs for the applications
E.1. Proof of Corollary 5.1 ...
reason for the popularity of these heuristics is the apparent scarcity of limit theorems for non-stationary data that could support the development of inferential methods. A few approaches have been proposed to establish CLTs for non-stationary processes, but they have significant limitations. Merlevede and Peligrad (2...
natural analog of tight limits in classical weak convergence and defined formally in Section 2. Under relative compactness, relative weak convergence is equivalent to weak convergence on subsequences, which makes it straightforward to transfer many useful tools, such as the continuous mapping theorem or the functional ...
are applicable to a wide range of statistical methods and effectively usable as drop-in replacements for classical CLTs when data are not stationary. The remainder of this paper is structured as follows. Relative weak convergence and CLTs are developed in Section 2. We provide tools for proving relative CLTs and fost...
measurable and all marginals converge weakly, i.e., (Y_n(t_1), ..., Y_n(t_k)) →_d (Y(t_1), ..., Y(t_k)) in ℝ^k for all t_1, ..., t_k ∈ T.
2.2. Non-stationary CLTs via weak convergence along subsequences
Let us first explain some difficulties with non-stationary CLTs. Consider a sequence of stochastic processes X_i ∈ ℓ∞(T). ...
Proof. This is a special case of Theorem A.7. In other words, even in non-stationary settings, the scaled sample average remains approximately Gaussian, but with the notion of a limiting distribution replaced by a sequence of Gaussians. This mode of convergence and the resulting type of asymptotic normality extends nat...
Then
\[ \lim_{n\to\infty}\left|P^*(X_n\in S_n) - P^*(Y_n\in S_n)\right| = 0. \]
Condition (1) ensures that the S_n are continuity sets of Y_n asymptotically in a strong form. This prevents the laws from accumulating too much mass near the boundary of S_n. In statistical applications, Y_n is typically (the supremum of) a non-degenerate Gaussian process, for which appropri...
the i-th component of Y_n. (iii) All subsequences n_k contain a subsequence n_{k_i} such that Σ_{n_{k_i}} → Σ and Y_{n_{k_i}} →_d N(0,Σ). In other words, multivariate relative CLTs are essentially equivalent to CLTs along subsequences where covariances converge. From this it is straightforward to generalize classical multivariate CLTs to relati...
3.2 (Multivariate relative CLT). Let α ∈ [0, 1/2) and 1 + (1−2α)^{−1} < γ. Assume
(i) \( k_n^{-1}\sum_{i,j=1}^{k_n}|\mathrm{Cov}[X^{(l_1)}_{n,i}, X^{(l_2)}_{n,j}]| \le K \) for all n and l_1, l_2 = 1, ..., d.
(ii) \( \sup_{n,i}\mathbb{E}\left[|X^{(l)}_{n,i}|^\gamma\right] < \infty \) for all l = 1, ..., d.
(iii) \( k_n\beta_n(k_n^\alpha)^{\frac{\gamma-2}{\gamma}} \to 0 \).
Then the scaled sample average \( k_n^{-1/2}\sum_{i=1}^{k_n}(X_{n,i}-\mathbb{E}[X_{n,i}]) \) satisfies a relative CLT. Pr...
argument with adaptive coupling and truncation. An important intermediate step is a new maximal inequality (see Theorem B.6) that may be of independent interest:
Theorem 3.5. Let F be a class of functions f: X → ℝ with envelope F and ∥f∥_{γ,n} ≤ δ, and suppose
\[ \frac{1}{n}\sum_{i,j=1}^{n}|\mathrm{Cov}[h(X_i), h(X_j)]| \le K_1\|h\|^2_{\gamma,n} \]
for some γ > 2, all f ∈ F and h: X → ℝ...
Specifying w_{n,i}(s) = 1{i ≤ ⌊sn⌋}, relative sequential CLTs are simple corollaries of weighted relative CLTs.
Corollary 3.8 (Sequential relative CLT). Under conditions (i)–(iii) of Theorem 3.6, the sequential empirical process Z_n ∈ ℓ∞([0,1] × F) satisfies a relative CLT. Proof: Appendix C.2.
4. Bootstrap inference
To make p...
(i)–(iii) of Proposition 4.2 are satisfied. Continuing Example 3.7 with γ = 4 and sup_n β_n(m) ≤ Cm^{−ρ} for some ρ > 6, we can pick m_n = k_n^{1/3}.
4.3. Practical inference
The bootstrap process G^{(j)}_n in the previous section depends on the unknown quantity µ_n(i, f) = E[f(X_{n,i})]. In many testing applications, we have E[f(X_{n,i})] = 0 a...
cannot be handled by previous results. The methods are illustrated by monthly mean temperature anomalies in the Northern Hemisphere from 1880 to 2024 provided by NASA (GISTEMP Team, 2025, Lenssen et al., 2024), and shown in Fig. 1. 5.1. Uniform confidence bands for a nonparametric trend Nonparametric estimation of the ...
increasing trend in the last 50 years.
5.2. Testing for time series characteristics
Suppose Z_1, Z_2, ... is a non-stationary time series and we want to test
\[ H_0: \sup_i \sup_{f\in\mathcal{F}} |E[f(Z_i)]| = 0 \quad\text{against}\quad H_1: \sup_i \sup_{f\in\mathcal{F}} |E[f(Z_i)]| \ne 0. \]
The functions f ∈ F determine which characteristics of the time series we want to control. T...
bootstrap to construct a test for the null hypothesis of stationarity. Using b = 0.05 and a kernel estimator for \(\hat\mu_n(s, f)\) as in the previous section, we get T_n = 0.69 and c*_n(0.05) = 0.30, and a bootstrapped p-value smaller than 0.0001, providing strong evidence against the null hypothesis of stationarity. R...
Sciences, 102(40):14150–14154.
A. Proofs for relative weak convergence and CLTs
Lemma A.1. The sequence X_n is relatively compact if and only if it is asymptotically measurable and relatively asymptotically tight. Proof. If X_n is relatively compact, it converges to some tight Borel law along subsequences. Along such ...
x ↦ ϕ′_{θ_n}(x) satisfies the condition of Proposition 2.12 since ϕ has continuous Hadamard-differentials. Thus, ϕ′_{θ_n}(Y_n) is relatively compact since Y_n is. By (iii) of Proposition 2.10 and descending to subsequences, we may assume Y_n →_d Y, θ_n → θ and ϕ′_{θ_n}(Y_n) →_d ϕ′_θ(Y). By Theorem 3.10.4 in Van der Vaart and Wellner (2023), we ...
sequence of tight Borel measurable GPs corresponding to Y_n if and only if \( \sup_{n\in\mathbb{N}, i\le d}\mathrm{Var}[Y^{(i)}_n] < \infty \); equivalently, if all subsequences n_k contain a further subsequence n_{k_i} such that Σ_{n_{k_i}} converges. Combined with the fact that a sequence of centered Gaussians converges weakly if and only if its corresponding sequence of c...
... dε. Taking the limit δ → 0, the right-hand side converges to zero by (ii). Together with Markov's inequality, we obtain that NY_n is asymptotically uniformly d-equicontinuous in probability. By Corollary A.6, for fixed t ∈ T the sequence NY_n(t) is relatively asymptotically tight for all t ∈ T if sup_n Var[Y_n(t)] < ∞. Since this seq...
For fixed n, we drop the index n, i.e., write F_n = F, X_{n,i} = X_i, m_n = m, etc., and assume without loss of generality that k_n = n.
Lemma B.1. For any class F of functions f: X → ℝ with sup_{f∈F}∥f∥_∞ ≤ B and any integer 1 ≤ m ≤ n/2, it holds that
\[ \mathbb{E}\|\mathbb{G}_n\|_{\mathcal{F}} \le \mathbb{E}\|\mathbb{G}^*_n\|_{\mathcal{F}} + B\sqrt{n}\,\beta_n(m). \]
Proof. We have
\[ \mathbb{E}\|\mathbb{G}_n\|_{\mathcal{F}} \le \mathbb{E}\sup_{f\in\mathcal{F}}|\mathbb{G}^*_n f| + \frac{B}{\sqrt{n}}\,\mathbb{E}\left[\sum_{i=1}^n 1(X_i \ne X^*_i)\right] \le \mathbb{E}\|\mathbb{G}^*_n\|_{\mathcal{F}} + B\sqrt{n}\,\beta_n(m)... \]
all m ≥ 1. Applying Theorem B.4 with m = 1 yields the claim.
Theorem B.6. Let F be a class of functions f: X → ℝ with envelope F and, for some γ > 2, ∥f∥_{γ,n} ≤ δ and
\[ \frac{1}{n}\sum_{i,j=1}^{n}|\mathrm{Cov}[h(X_i), h(X_j)]| \le K_1\|h\|^2_{\gamma,n} \]
for all f ∈ F and h: X → ℝ bounded and measurable. Suppose that max_n β_n(m) ≤ K_2 m^{−ρ} for some ρ ≥ γ/(γ−2). Then, for any n ≥ 5,
\[ \mathbb{E}\|\mathbb{G}_n\|_{\mathcal{F}} \lesssim \int_0^{\delta}\sqrt{\;...} \]
+ N_r ≤ n for all r ≤ r_1. We will frequently apply Bernstein's inequality with m = m_r. Here, note \( m_r \le \sqrt{n/(l_n+N_r)} \le \sqrt{n/l_n}\,(2) \le n/2 \) for all n ≥ 5. The following (in)equalities are the reason for the choices of τ_r and m_r: for r such that \(\sqrt{l_n+N_{r+1}} \le n\), i.e., r < r_1, it holds
\[ m_r\tau_{r-1}\sqrt{n} = 2^{-r+1}\sqrt{l_n+N_r}\,\sqrt{n}\,2^{-r\gamma}\tau_r^{-(\gamma-1)} = 2^{-r}\sqrt{n}\,\frac{1}{m_r}\sqrt{\frac{n}{l_n+N_{r+1}}}^{-(\gamma-...} \]
Bounding T_4. Finally,
\[ \mathbb{E}\|\mathbb{G}_n T_4\|_{\mathcal{F}} = \mathbb{E}\left\|\mathbb{G}_n[f-\pi_{r_1}(f)]\,1\!\left\{\max_{r_0\le k\le r_1}\Delta_k(f)/\tau_k \le 1\right\}\right\|_{\mathcal{F}} \lesssim \mathbb{E}\|\mathbb{G}_n\Delta_{r_1}1\{\Delta_{r_1}\le\tau_{r_1}\}\|_{\mathcal{F}} + \sqrt{n}\sup_{f\in\mathcal{F}}\|\Delta_{r_1}(f)1\{\Delta_{r_1}(f)\le\tau_{r_1}\}\|_{1,n} \lesssim \mathbb{E}\|\mathbb{G}_n\Delta_{r_0}\|_{\mathcal{F}} + \sqrt{n}\,\tau_{r_1}, \]
and the first term is bounded by T_1. Finally, observe that, by the definition of r_1, \(\sqrt{n}\,\tau_{r_1} \le \sqrt{n}\,2^{-r_1} = \sqrt{n}\,N_{[]}^{-1}(e_n)\).
Lemma B.7. For γ > 2 and \(\sum_{i=1}^n \beta_n(i)^{\frac{\gamma-2}{\gamma}} \le K\) it holds that \(\frac{1}{n}\sum_{i,j}\)...
F_{n,δ} = {f − g : f, g ∈ F_n, ∥f−g∥_{γ,n} < δ} satisfies
\[ N_{[]}(\epsilon, \mathcal{F}_{n,\delta}, \|\cdot\|_{\gamma,n}) \le N_{[]}(\epsilon/2, \mathcal{F}_n, \|\cdot\|_{\gamma,n})^2. \]
Indeed, given ε/2-brackets [l_f, u_f] and [l_g, u_g] for f and g, [l_f − u_g, u_f − l_g] is an ε-bracket for f − g. By Theorem B.6,
\[ \limsup_n \mathbb{E}^*\sup_{d_n(s,t)<\delta_n}|\mathbb{G}_n(s)-\mathbb{G}_n(t)| \lesssim \lim_{n\to\infty}\int_0^{2\delta_n}\sqrt{\ln N_{[]}(\epsilon,\mathcal{F}_n,\|\cdot\|_{\gamma,n})}\,d\epsilon = 0 \]
by (iii), which proves the claim. ...
U*_{n,i} ∈ ℝ^{p_n} such that
• U_{n,i} =_d U*_{n,i};
• the sequence U*_{n,i} is independent;
• P(∃ U_{n,i} ≠ U*_{n,i}) ≤ r_n β_n(q_n).
Define the coupled long block sums \( Z^*_{n,i} = \sum_{j=1}^{p_n} U^{*(j)}_{n,i} \). For all ε > 0 we obtain
\[ P\left(k_n^{-1/2}\left|\sum_{i=1}^{r_n} Z^*_{n,i} - \sum_{i=1}^{r_n} Z_{n,i}\right| > \epsilon\right) \le P(\exists\, U_{n,i}\ne U^*_{n,i}) \le r_n\beta_n(q_n) = k_n^{1/2+\delta}\beta_n(k_n^\alpha) \le k_n\beta_n(k_n^\alpha) \to 0. \]
Next,
\[ \mathrm{Var}\left[\sum_{i=1}^{r_n} Z_{n,i}\right] = \mathrm{Var}\left[\sum_{i=1}^{r_n}...\right] \]
measurable GPs NG_n corresponding to G_n. Note that each NG_n(s) is measurable for all s ∈ T. Accordingly, NG_n is asymptotically measurable, hence relatively compact by Lemma A.3. According to Corollary 2.18, it remains to prove relative CLTs of the marginals (G_n(t_1), ..., G_n(t_d)) for all d ∈ ℕ, t_1, ..., t_d ∈ T. We apply ...
multiplier empirical process G^V_n in terms of bracketing entropy conditions with respect to F. Here, we apply a coupling argument and apply Corollary B.5 to the coupled empirical process. We will argue similarly to the proof of Theorem C.3. To avoid clutter we assume w_{n,i} = 1, but the argument is similar for the general ca...
∥·∥_{γ,n} be given. Clearly \( f_{V,+,n} \in [\underline f_{V,+,n}, \bar f_{V,+,n}] \). Since E[f(X_{n,i})] = 0 it holds that
\[ \|f_{V,+,n}\|^2_{2,n,+} = \frac{1}{r_n}\sum_{i=1}^{r_n}\mathbb{E}[f_{V,+,n}(U^*_{n,i})^2] = \frac{1}{r_n}\sum_{i=1}^{r_n}\mathrm{Var}[f_{V,+,n}(U^*_{n,i})] \]
\[ = \frac{1}{m_n r_n}\sum_{i=1}^{r_n}\sum_{j,k=1}^{p_n}\mathrm{Cov}\left[V_{n,(i-1)m_n+j}f(X_{n,(i-1)m_n+j}),\, V_{n,(i-1)m_n+k}f(X_{n,(i-1)m_n+k})\right] \lesssim \frac{2}{k_n}\sum_{i,j=1}^{k_n}|\mathrm{Cov}[f(X_{n,i}), f(X_{n,j})]| \lesssim \|f\|^2_{\gamma,n}, \]
by Lemma B.7. This yields...
(N(t_1), ..., N(t_k), N^{(1)}(t_{k+1}), ..., N^{(1)}(t_{k+m}), N^{(2)}(t_{k+m+1}), ..., N^{(2)}(t_{m+k+l})). Thus, (G_n, G^{(1)}_n, G^{(2)}_n) →_d N^{⊗3} (Van der Vaart and Wellner, 2023, Section 1.5, Problem 3), which finishes the proof.
D.2. Proof of Proposition 4.2
Lemma D.2. For every ε > 0 define ν_n(ε) as the maximal natural number such that max...
4.2. Taking ε → 0 and our assumption on the variance give Var[\hat G*_n(s,f) − G*_n(s,f)] →_p 0. Since also E[\hat G*_n(s,f) − G*_n(s,f)] = 0, it follows that \hat G*_n(s,f) − G*_n(s,f) →_p 0 for every s, f. Since \hat G*_n − G*_n is relatively compact, this implies ∥\hat G*_n − G*_n∥_{S×F} →_p 0.
D.4. Proof of Proposition 4.5
Similar to the proof of Propositio...
a Cε-bracketing with respect to the ∥·∥_{n,1}-norm for the class G_n = {g_{n,t}(v, i) = v·E[f_{n,t}(X_i)] : t ∈ T}, for some C < ∞ and size N(ε). Let \( \mathcal{G}_{n,k} = [\underline g^{(k)}_n, \bar g^{(k)}_n] \) be the k-th Cε-bracket. Recall that
\[ P_n g_{n,t} = \frac{1}{n}\sum_{i=1}^n g_{n,t}(V_{n,i}, i) = \frac{1}{n}\sum_{i=1}^n V_{n,i}\,E[f_{n,t}(X_i)], \]
and
\[ P g_{n,t} = \frac{1}{n}\sum_{i=1}^n \mathbb{E}[g_{n,t}(V_{n,i}, i)] = \frac{1}{n}\sum_{i=1}^n \mathbb{E}[V_{n,i}]\,\mathbb{E}[f_{n,t}(X_i)] = 0. \]
\(\sup_{s\in[0,1]}|\hat\mu^*_b(s_n) - \mu^*_b(s_n)| \to_p 0\). Now consider the process \(\sqrt{n}\,\mu^*_b(s_n)\). As in the proof of Proposition 4.6, we have
\[ \mathrm{Var}\left[\sqrt{n}\,\mu^*_b(s_n)\right] - \mathrm{Var}\left[\sqrt{n}\,\hat\mu_b(s_n)\right] = \sigma^2_n(s) \to \infty \]
for some s ∈ [0,1], which implies that \(\mathrm{Var}[\sqrt{n}\,\mu^*_b(s_n)]/\sigma^2_n(s) \to 1\). The univariate relative CLT (Theorem 3.2) with α > 1/3 gives \(\sqrt{n}\,\mu^*_b(s_n)/\sigma_n(s) \to_d N(0,1)\). Since sup...
arXiv:2505.02302v1 [math.PR] 5 May 2025. Modelling with given reliability and accuracy in the space L_p(T) of stochastic processes from Sub_φ(Ω) decomposable in series with independent elements. Applied Statistics. Actuarial and Financial Mathematics, No. 2, 13–23, 2012. Oleksandr Mokliachuk, National Technical Universit...
https://arxiv.org/abs/2505.02302v1
a function such that \(\varphi(u) = \int_0^u f(v)\,dv\) for all u > 0. Let
\[ c_N = \int_T \left(\tau_\varphi(X(t)-X_N(t))\right)^p d\mu(t) < \infty. \]
The model X_N(t) approximates the process X(t) with reliability 1−α and accuracy δ in the L_p(T) space if
\[ c_N \le \frac{\delta}{\left(\varphi^{*(-1)}\left(\ln\frac{2}{\alpha}\right)\right)^p} \tag{1} \]
and
\[ \delta > c_N\left(f\left(\frac{c_N^{1/p}}{\delta^{1/p}}\right)\right)^p, \tag{2} \]
where φ* is the Young–Fenchel transform of the function φ. Proof. Since |X(t)−X_N(...
often cannot be found explicitly. In this case \(\hat a_k(t)\), approximations of the functions a_k(t), may be used as elements of the model of a stochastic process, after taking into account the impact of such approximation on the accuracy and reliability of the approximation of the process by the model. Definition 6. We will call a model of a...
0, t ∈ T, let B(t,s) = E X(t)X(s) be the correlation function of this stochastic process. Since the process X(t) is continuous in the mean square, the function B(t,s) is continuous on T×T. Consider the homogeneous Fredholm integral equation of the second kind
\[ a(t) = \lambda\int_T B(t,s)\,a(s)\,ds. \tag{4} \]
It is well known (see [4]) that thi...
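Equation (4) can be solved numerically by discretization. The sketch below uses a midpoint-rule Nyström approximation; the kernel B(t,s) = min(t,s) on T = [0,1] is my own illustrative choice (Brownian-motion covariance, whose operator eigenvalues are known in closed form), not an example from the text:

```python
import numpy as np

# Nystrom discretization of a(t) = lambda * int_T B(t,s) a(s) ds.
# Illustrative kernel (an assumption, not from the text): B(t,s) = min(t,s)
# on T = [0,1], whose integral operator has eigenvalues mu_k = (2/((2k-1)pi))^2.
n = 2000
t = (np.arange(n) + 0.5) / n            # midpoint grid on [0,1]
B = np.minimum.outer(t, t)              # kernel matrix B(t_i, t_j)
mu = np.linalg.eigvalsh(B / n)          # operator eigenvalues (mu = 1/lambda)
mu_max = float(mu[-1])
mu_exact = (2.0 / np.pi) ** 2           # largest exact eigenvalue, approx. 0.405
```

The discretized eigenvalues converge to the operator spectrum as the grid is refined, which is how the approximations \(\hat\lambda_k\) mentioned above would be produced in practice.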
k-th eigenfunction of equation (4), \(\hat\lambda_k\) is the approximation of the k-th eigenvalue, and \(\eta_k = |\lambda_k - \hat\lambda_k|\) is the error of approximation of the k-th eigenvalue. The model X_N(t) approximates the stochastic process X(t) with reliability 1−α and accuracy δ in the space L_p(T) when the following conditions hold: \( c_N \le \delta/\left(\beta\ln\frac{2}{\alpha}\right)^{p/\beta} \) ...
Marginal minimization and sup-norm expansions in perturbed optimization. Vladimir Spokoiny, Weierstrass Institute Berlin, HSE University and IITP RAS, Moscow; Mohrenstr. 39, 10117 Berlin, Germany; spokoiny@wias-berlin.de. May 6, 2025. Abstract: Let the objective function f depend on the target variable x along with a nuisa...
https://arxiv.org/abs/2505.02562v1
A.2 A basic lemma
B Proofs
B.1 Partial and marginal optimization. Proofs
B.2 Sup-norm expansions. Proofs
1 Introduction
For a smooth convex f...
sample size N corresponds to the smallest eigenvalue of the Fisher information matrix. Statistical inference on the parameter of interest requires the even stronger condition 𝕡² ≪ N. General results on Laplace approximation from Katsevich (2023) can be viewed as a lower bound in ...
4 Marginal and alternating minimization, su...
condition on the function f, and ∥·∥∘ is a norm in ℝ^q in which the smoothness of f w.r.t. s is measured. The closed form of the remainder is very important. In particular, it allows one to write the error in a multiplicative way, which is crucial for many iterative algorithms. Later we apply this bound to two special cases. With ...
appears that local quadratic approximation of the function f in a vicinity of the extreme point υ* yields a nearly linear dependence of x_s on s. We illustrate this fact for the case of a quadratic function f(·). Consider the Hessian F = ∇²f(υ*) in block form:
\[ \mathbb{F} := \nabla^2 f(\upsilon^*) = \begin{pmatrix} F_{xx} & F_{xs} \\ F_{sx} & F_{ss} \end{pmatrix} \tag{2.4} \]
with \( F_{sx} = F_{xs}^\top \). If f(υ) is q...
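For an exactly quadratic f the partial minimizer in x is exactly linear in s: the first-order condition F_xx x(s) + F_xs s = 0 gives x(s) = −F_xx^{−1} F_xs s (taking υ* = 0 for simplicity). A small self-contained check of this standard fact, with a synthetic positive-definite Hessian and hypothetical helper names:

```python
import numpy as np

rng = np.random.default_rng(0)
p, q = 3, 2
# Random positive-definite Hessian in block form F = [[Fxx, Fxs], [Fsx, Fss]]
M = rng.normal(size=(p + q, p + q))
F = M @ M.T + (p + q) * np.eye(p + q)
Fxx, Fxs = F[:p, :p], F[:p, p:]

def x_of_s(s):
    """Partial minimizer of f(x, s) = 0.5 * [x; s]^T F [x; s] over x."""
    # First-order condition: Fxx x + Fxs s = 0  =>  x(s) = -Fxx^{-1} Fxs s
    return -np.linalg.solve(Fxx, Fxs @ s)

s_vec = rng.normal(size=q)
x_min = x_of_s(s_vec)
grad_x = Fxx @ x_min + Fxs @ s_vec  # gradient in x at the partial minimizer
```

The gradient vanishes at x(s), and x(s) is linear (in particular homogeneous) in s, which is the "nearly linear dependence" that the quadratic approximation produces locally for general smooth f.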
3). For the norm ∥·∥∘, let ∥B∥* be the corresponding dual norm of an operator B: ℝ^q → ℝ^p:
\[ \|B\|_* = \sup_{z:\|z\|_\circ \le 1}\|Bz\|. \tag{2.12} \]
If p = 1 and ∥·∥∘ is the sup-norm ∥·∥∞, then ∥B∥* = ∥B∥₁. Define
\[ \rho_* := \|\mathbb{D}^{-1}F_{xs}\mathbb{H}^{-1}\|_*. \tag{2.13} \]
The next result provides an expansion of the semiparametric bias x_s − x* under (T*_{3,S}) and (T*_{3|s}) with a leading ...
these conditions involve a metric tensor 𝔻 for the x-component and ℍ for the s-component. Consider a local set in Υ of the form X × S, where
\[ \mathcal{X} = \{x : \|\mathbb{D}(x-x^*)\| \le r_x\}, \qquad \mathcal{S} = \{s : \|\mathbb{H}(s-s^*)\| \le r_s\}. \]
Later we assume that f(x,s) as a function of x ∈ X is strongly convex and smooth for s ∈ S fixed, and similarly in s ∈ S for x ∈ X fixed. (T*_{3,...
𝔻_m) for m ≠ j. Define also, for some radius r_∞,
\[ \Upsilon^\circ = \{\upsilon : \|\mathbb{D}(\upsilon-\upsilon^*)\|_\infty \le r_\infty\} = \left\{\upsilon : \max_{j\le p}\left|\mathbb{D}_j(\upsilon_j-\upsilon^*_j)\right| \le r_\infty\right\}. \]
Assume the following condition.
(T*_∞) For each j ≤ p, the function f(υ_j, s_j) fulfills
\[ \sup_{s_j:\|\mathbb{H}_j(s_j-s^*_j)\|_\infty\le r_\infty}\;\sup_{\upsilon:\,\mathbb{D}_j|\upsilon-\upsilon^*_j|\le 2r_\infty}\frac{\left|\nabla^{(3)}_{\upsilon_j\upsilon_j\upsilon_j}f(\upsilon_j,s_j)\right|}{\mathbb{D}_j^3} \le \tau_3, \tag{4.1} \]
and
\[ \sup_{\upsilon=(\upsilon^*_j,s_j)\in\Upsilon^\circ}\;\sup_{z_j\in\mathbb{R}^{p-1}}\left|\langle\nabla^{(3)}_{\upsilon_j\upsilon_j s_j}f(\upsilon^*_j, s... \]
Furthermore,
\[ \left\|\mathbb{D}^{-1}\{\mathbb{F}(\upsilon^\circ-\upsilon^*) - M\}\right\|_\infty \le \tau_\infty\|\mathbb{D}^{-1}M\|^2_\infty, \qquad \left\|\mathbb{D}(\upsilon^\circ-\upsilon^*-\mathbb{F}^{-1}M)\right\|_\infty \le \frac{\tau_\infty}{1-\rho_1}\|\mathbb{D}^{-1}M\|^2_\infty, \]
and, with \( \Delta := I_p - \mathbb{D}^{-1}\mathbb{F}\mathbb{D}^{-1} \),
\[ \left\|\mathbb{D}(\upsilon^\circ-\upsilon^*) + \mathbb{D}^{-1}M\right\|_\infty \le \frac{\tau_\infty}{1-\rho_1}\|\mathbb{D}^{-1}M\|^2_\infty + \frac{\rho_1}{1-\rho_1}\|\mathbb{D}^{-1}M\|_\infty, \]
\[ \left\|\mathbb{D}(\upsilon^\circ-\upsilon^*) + (I_p+\Delta)\mathbb{D}^{-1}M\right\|_\infty \le \frac{\tau_\infty}{1-\rho_1}\|\mathbb{D}^{-1}M\|^2_\infty + \frac{\rho_1^2}{1-\rho_1}\|\mathbb{D}^{-1}M\|_\infty. \]
Remark 4.2. If each t_j(υ_j) is quadratic, then condition (T*_∞) for g(υ) follows from the sam...
υ* reads as follows:
\[ L(\upsilon) = -\sum_{m=1}^{n}\sum_{j=1}^{m-1}\left\{(\upsilon_j-\upsilon_m)S_{jm} - N_{jm}\,\phi(\upsilon_j-\upsilon_m)\right\}, \tag{5.1} \]
leading to the MLE \( \tilde\upsilon = \arg\min_\upsilon L(\upsilon) \). The function φ(υ) = log(1+e^υ) is convex; hence, L(υ) is concave. However, representation (5.1) reveals a lack-of-identifiability problem: \(\tilde\upsilon\) is not unique; any shift υ → υ + ae does not change L(υ), e = n^{−1/2}(1, ...
_j as i.i.d. samples from U[0,2]. The error bars represent three times the standard deviation. The results support the hypothesis that ρ_1 is small for Erdős–Rényi graphs, close to 1/(np) ≪ 1. Figure 5.2 (left) presents the norms of the leading terms \(\|\mathbb{F}_G^{-1}\nabla\zeta\|_\infty\) and \(\|D^{-2}\nabla\zeta\|_\infty\), while the right plot shows the errors \(\|\tilde\upsilon_G - \upsilon^* + \mathbb{F}^{-1}\)...
using self-concordance. Electronic Journal of Statistics, 15(1):326–391. Pankevich, S. and Spokoiny, V. (2025). Estimation and inference in the BTL model. In preparation. Razaviyayn, M., Hong, M., and Luo, Z.-Q. (2013). A unified convergence analysis of block successive minimization methods for nonsmooth optimization. S...
(2021). In fact, they require that each univariate function h(υ+tu), t ∈ ℝ, is self-concordant with some universal constants c_3 and c_4.
A.2 A basic lemma
This section presents a basic lemma from Spokoiny (2025a).
Proposition A.1. Let f(υ) be a strongly convex function with f(υ*) = min_υ f(υ) and 𝔽 = ∇²f(υ*). Assume (T*_3) at υ...
By condition (T*_{3,S}),
\[ \left|\langle a''_s(t), z\rangle\right| = \left|\left\langle\nabla^3_{xss}f(x^*, s^*+t(s-s^*)),\; z\otimes(s-s^*)^{\otimes 2}\right\rangle\right| \le \tau_{12}\,\|\mathbb{D}z\|\,\|\mathbb{H}(s-s^*)\|^2_\circ, \]
and hence
\[ \|\mathbb{D}^{-1}a''_s(t)\| = \sup_{z:\|z\|\le 1}\langle\mathbb{D}^{-1}a''_s(t), z\rangle = \sup_{z:\|z\|\le 1}\langle a''_s(t), \mathbb{D}^{-1}z\rangle \le \tau_{12}\sup_{z:\|z\|\le 1}\|z\|\,\|\mathbb{H}(s-s^*)\|^2_\circ = \tau_{12}\|\mathbb{H}(s-s^*)\|^2_\circ. \]
This yields
\[ \|\mathbb{D}^{-1}\{A_s + F_{xs}(s-s^*)\}\| \le \tau_{12}\|\mathbb{H}(s-s^*)\|^2_\circ\int_0^1(1-t)\,dt \le \frac{\tau_{12}\|\mathbb{H}(s-s^*)\|^2_\circ}{2}, \]
as claimed...
u := 𝔻(υ° − υ*) and B := 𝔻^{−1}𝔽𝔻^{−1}, it holds that
\[ \mathbb{D}^{-1}\{\mathbb{F}(\upsilon^\circ-\upsilon^*) + A\} = B(u + \mathbb{D}\mathbb{F}^{-1}A), \]
and by (4.6) and Lemma B.6
\[ \left\|\mathbb{D}(\upsilon^\circ-\upsilon^*+\mathbb{F}^{-1}A)\right\|_\infty = \|u + \mathbb{D}\mathbb{F}^{-1}A\|_\infty = \left\|B^{-1}\mathbb{D}^{-1}\{\mathbb{F}(\upsilon^\circ-\upsilon^*)+A\}\right\|_\infty \le \frac{\tau_\infty}{1-\rho_1}\|\mathbb{D}^{-1}A\|^2_\infty. \tag{B.14} \]
Finally, by (B.16) of Lemma B.6 and by \(\mathbb{D}\mathbb{F}^{-1}A = B^{-1}\mathbb{D}^{-1}A\),
\[ \|\mathbb{D}\mathbb{F}^{-1}A - \mathbb{D}^{-1}A\|_\infty = \|(B^{-1}-I_p)\mathbb{D}^{-1}A\|_\infty \le \frac{\rho_1}{1-\rho_1}\|\mathbb{D}^{-1}A\|_\infty, \]
\[ \|\mathbb{D}\mathbb{F}^{-1}A - (I_p+\Delta)\mathbb{D}^{-1}A\|_\infty = \|(B^{-1}-... \]
arXiv:2505.02636v1 [math.OC] 5 May 2025. Nonconvex landscapes in phase retrieval and semidefinite low-rank matrix sensing with overparametrization. Andrew D. McRae. May 6, 2025. Abstract: We study a nonconvex algorithmic approach to phase retrieval and the more general problem of semidefinite low-rank matrix sensing. Specific...
https://arxiv.org/abs/2505.02636v1
we use the model (1) with Z* = x*(x*)^* and A_i = a_i a_i^*, i = 1, ..., n. (2) If, as is often the case in practice, x* is real (F = ℝ) but the measurements are complex, we can make everything real by taking A_i = Re(a_i a_i^*). Given measurements of the form (1), a natural question is how to estimate Z* from the data {(y_i, A_i)}_{i=1}^n. Many...
we choose the estimation rank p? The obvious choice is p = r = rank(Z*) if it is known, but is this the best choice? For practical reasons we want p to be small (e.g., constant, or at least ≪ d). Nonconvex problems of the form (BM-LS) have indeed been well studied in the low-rank matrix sensing literature, and there are man...
to analyze (PR-LS). For example, if the measurement vectors a_1, ..., a_n are chosen as independent and identically distributed (i.i.d.) real or complex standard Gaussian random vectors, one can easily calculate that, for S ∈ H^d, A in expectation satisfies
\[ \mathbb{E}\,\frac{1}{n}\mathcal{A}^*\mathcal{A}(S) = m_4 S + (\operatorname{tr}S)\,I_d \;\Longrightarrow\; \mathbb{E}\,\frac{1}{n}\|\mathcal{A}(S)\|^2 = m_4\|S\|^2 \,... \]
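The displayed expectation identity is easy to verify by Monte Carlo. In the real Gaussian case the constant works out to m_4 = 2 (a standard Isserlis/Wick computation; this specific value is my own calculation, not taken from the excerpt):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 3, 200_000
S = np.array([[2.0, 0.3, 0.0],
              [0.3, 1.0, 0.5],
              [0.0, 0.5, 0.7]])               # symmetric test matrix
a = rng.normal(size=(n, d))                   # real standard Gaussian measurements
quad = np.einsum("ni,ij,nj->n", a, S, a)      # A(S)_i = a_i^T S a_i
# (1/n) A^* A (S) = (1/n) sum_i (a_i^T S a_i) a_i a_i^T
AsA = np.einsum("n,ni,nj->ij", quad, a, a) / n
expected = 2.0 * S + np.trace(S) * np.eye(d)  # m4 = 2 assumed for the real case
err = float(np.abs(AsA - expected).max())
```

The empirical average matches 2S + (tr S)I_d up to Monte Carlo error, consistent with the form m_4 S + (tr S)I_d above.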
part 1) is suboptimal by a logarithmic factor compared to the best-possible sample complexity n ≈ d that is achieved by other methods such as PhaseLift [2] or various other, more elaborate schemes (see [12], [13] for an overview). There is some evidence that the landscape of (BM-LS) can be benign even with n ≈ d Gaussian ...
Even with the stronger assumption of RIP, Richard Zhang has given a counterexample (a version of which appears in [17]) showing that, with excessive overparametrization, the problem (BM-LS) may have spurious local minima very far from the ground truth even when, for p closer to r, RIP holds and the landscape is indeed benign. How...
a real random variable v), then x* and its elementwise complex conjugate \(\bar x^*\) will be indistinguishable. We must therefore rule out this case. We can plug the lower isometry bounds of [18] into our Theorem 2 to obtain the following result (see Section 6 for details): Theorem 3. Consider the model (1) with rank-one Z* = x*(x*)^* for ...
however, without a hard estimator rank constraint, one must include a low-rank-inducing regularizer (e.g., trace/nuclear norm) to get such optimal dependence on r. The fact that we obtain this without any explicit regularizer illustrates the "implicit regularization" of the semidefinite problem structure (see the rele...
our main deterministic theoretical result for this analysis (Theorem 5). We then apply this to the Gaussian measurement ensemble to prove Theorem 4, which was given in Section 1.3. For convenience, we collect here some (standard) notation that we use throughout the paper. If x is a vector, we denote its Euclidean (ℓ...
and we do not attempt to cover it here. Most existing theoretical results consider initialization and local convergence of iterative algorithms. See the above-mentioned surveys and the recent papers [25], [30], [40] for further background and references.
2.2 Phase retrieval with sub-Gaussian measurements
For the case...
case r = 1. For the same problem, Chen et al. [46] analyse a trace-regularized variant of (PhaseLift), though they note that their techniques could extend beyond the case Z* ⪰ 0 to recovery of general Hermitian matrices. Indeed, Kueng et al. [47] later do exactly this (with some additional extensions). Both works show tha...
lemma, which is the foundation for every subsequent landscape result in this paper.
Lemma 1. Consider (BM-LS) under the measurement model (1). If Z* ⪰ 0 has rank r ≥ 1, let X* ∈ F^{d×r} be such that Z* = X*X*^*.
[Footnote 7: The quadratic-cost program (PhaseLift) as well as the many variants in the literature can be put in this f...]
and consider how to extend the proof of the real case. If R = R_1 + iR_2 ∈ ℂ^{r×p} with R_1, R_2 ∈ ℝ^{r×p}, we can replace, in the Hessian inequality calculations, \(\widetilde X^* - \widetilde X\widetilde R\) by \(\widetilde X^* - \widetilde X R_1 - J\widetilde X R_2\) without any problem. To use the zero-gradient condition, we need, in addition to the equality ...