can show $\max_{1\le s\le r} n^{\frac{r-s}{2}}(\log p)^{\frac{r+s}{2}} \big\| \max_{j\in[p]} M\big(P^{r-s}(\psi_j^2)\big) \big\|_{L^1(P)}^{1/2} \le C_r n^{\frac{r-1}{2}} \|M_r\|_{L^2(P)} \log^r(np)$, so (29) with $q=1$ refines (31). In applications, $\|M_r\|_{L^2(P)}$ is often...
https://arxiv.org/abs/2504.10866v1
Since $X_n$ is independent of $\mathcal{G} := \sigma(X_1,\ldots,X_{n-1})$, we have $I = E\big[\max_{i\in I_{n-1,r-1}} E[\psi_1(X_i,X_n)\mid \mathcal{G}]\big] \le E\big[\max_{i\in I_{n-1,r-1}} \psi_1(X_i,X_n)\big] = E[M(\psi_1)]$, (36) where the inequality is by Jensen's inequality. Meanwhile, we can rewrite $II$ as $II = \int_S E\big[\max_{i\in\ldots}$...
It remains to prove (40). Since $E[\|G\|_\infty \|D\|_\infty^3] \le n E\max_{j\in[p]}\big(\sum_{a=1}^{r}|D_{j,a}|\big)^4 \le C_r \max_{a\in[r]} n E\big[\max_{j\in[p]} D_{j,a}^4\big]$, it suffices to prove $n E\big[\max_{j\in[p]} D_{j,a}^4\big] \le C_r \Delta^2(a)$ (42) for all $a\in[r]$. Observ...
1. In this step, we bound $E[\Gamma_1]$ and $E[\Gamma_2]$. Observe that $\Gamma_1 = \max_{j\in[p]} \sum_{i=1}^{n} E\big[|\psi_{j,1}(X^*_i)-\psi_{j,1}(X_i)|^4 \mid X\big] \le 8\max_{j\in[p]}\Big( n\|\psi_{j,1}\|_{L^4(P)}^4 + \sum_{i=1}^{n}\psi_{j,1}(X_i)^4 \Big)$. By Lemma 9 in Chernozhukov et al. (2015) and (25), $E\big[\ldots$...
$\le \tilde\Delta^{(0)}_1 \log^3 p + \sqrt{\big\{\tilde\Delta^{(2)}_{2,*}(2) + \tilde\Delta^{(1)}_{2,*}(1)\big\}\log^5 p}$. Third, since $E[\pi_1\psi_j(X_1)] = 0$, inserting the expression (1) in $\pi_2\psi_k$ gives $\pi_1\psi_j \star^1_1 \pi_2\psi_k(v) = E\big[\pi_1\psi_j(X_1)\{\psi_k(X_1,v) - P\psi_k(X_1)\}\big] = \pi_1\psi_j \star^1_1 \psi_k(v) - P(\pi_1\psi_j \star^1_1 \psi_k)$. Hence $\|\pi_1\psi_j \star^1_1 \pi_2\psi_k\|_{L^2(P)} \le \|\pi_1\psi_j \star^1_1 \psi_k\|_{L^2(P)}$. Thus, Lemma 2...
gives $|II| \le \int_{\mathbb{R}^{3d}} |K_h(x-y)|\,|K(u)|\, f(x+uh') f(x) f(y)\, du\, dx\, dy$. Hence the above argument also shows $|II| \le h^{-d/2} \|K\|_{L^1} \|K\|_{L^2} \|f\|_{L^2}^3$. Altogether, we complete the proof of (56). Next, we prove (57). The upper bound follows from Lemma 8(b). Meanwhile, since $\int_{|u|\le a} K(u\ldots$...
vector $W$ in $\mathbb{R}^p$ as $W_j := \zeta^\top M_j \zeta - E[\zeta^\top M_j \zeta]$, $j\in[p]$. In addition, let $Z$ be a centered Gaussian vector in $\mathbb{R}^p$ such that $\sigma := \min_{j\in[p]} \|Z_j\|_{L^2(P)} > 0$. Then there exists a universal constant $C$ such that $\sup_{A\in\mathcal{R}^p} |P(W\in A) - P(Z\in A)| \le \frac{C}{\sigma}\Big( \sqrt{\|\mathrm{Cov}[W]-\mathrm{Cov}[Z]\|_\infty \log^2 p} + \big(\max_{j\in[p]} \kappa_4(W_j)\log^6 p\big)^{\ldots}$...
$=: I + II + III$, where $a_n := E_{f_n}[J_2(\hat\psi_{h_n}(\alpha_n))]/4$. Let us bound $I$. By the definition of $\hat c_\tau$, $I = P_{f_n}(P^*(T^*_n > a_n) > \tau)$. Hence, Markov's inequality gives $I \le \tau^{-1} E_{f_n}[P^*(T^*_n > a_n)]$. Recall that $\|f_0\|_{H^\gamma} < \infty$ for some $\gamma > 0$. Hence, $f_n \in H^{\alpha_0\wedge\gamma}_{R_1,b}$ with $R_1 := R + \|f_0\|_{H^\gamma}$ and $b := \|f_0\|_{L^2}^2/2$ for sufficiently large $n$, so $E_{f_n}[P^*(T^*\ldots$...
Therefore, (65) follows once we prove the following inequalities:
$|E[R_\varepsilon \cdot \nabla f(W)]| \lesssim E[\|R_\varepsilon\|_\infty] \frac{\sqrt{\log p}}{\sigma}$, (69)
$|E[\langle V_\varepsilon, \nabla^2 f(W)\rangle]| \lesssim \varepsilon^{-1} E[\|V_\varepsilon\|_\infty] \frac{(\log p)^{3/2}}{\sigma}$, (70)
$|\Delta| \lesssim \varepsilon^{-3} E[\Gamma_\varepsilon] \frac{(\log p)^{7/2}}{\sigma}$. (71)
Step 3. This step proves (69). We rewrite $E[R\ldots$...
natural extensions of those of Lemmas 8 and 9 in Chernozhukov et al. (2015), respectively. In particular, the starting point is a symmetrization argument, which is summarized in the following lemma. Lemma 12. Let $q\ge 1$ and $\psi_j \in L^q(P^r)$ ($j=1,\ldots,p$) be degenerate, symmetric kernels of order $r\ge 1$. Then there exists a constant $C_r$ d...
$= \binom{n}{r} P^r\psi_j \le n^r P^r\psi_j$, (30) holds with $c_r = 2 + K'_r$. Proof of Theorem 6. By Lemma 12 and Lyapunov's inequality, $\big\|\max_{j\in[p]} |J_r(\psi_j)|\big\|_{L^q(P)} \le C_r (q+\log p)^{r/2} \big\|\ldots$...
$\big\|\sup_{n\in[T]} |S'_{n,j}| \, 1\{T>0\}\big\|_{L^\alpha(P)} \lesssim \sqrt{\alpha}\,\Big\|\sqrt{T\ldots}$...
$\sum_{i=1}^{n}\varepsilon_i\Psi_i\big\|_B^m\Big]$, where $\varepsilon_1,\ldots,\varepsilon_n$ are i.i.d. Rademacher variables independent of $X$. With $\alpha := m + \log p$, we have by (25) $\Big( E\Big[ \big\| n^{-1}\sum_{i\ldots}$...
$\int_{S^{r-s}} \psi \star^s_s \phi(u,v)^2 \, P^{r-s}(du) \le \|\psi \star^s_s \psi\|_{L^2(P^{2r-2s})} \, P^s(\phi^2)(v)$. Proof. Property (a) immediately follows by definition. Property (b) follows from the Schwarz inequality. Let us prove property (c). Using Fubini's theorem repeatedly, we obtain for $P^{r'-s}$-a.s. $v$: $\int_{S^{r-s}} \psi \star^s_s \phi(u,v)^2 \, P^{r-s}(du) = \int_S\ldots$...
$\tfrac{1}{2}\int \pi_2\psi_j \star^1_1 \pi_2\psi_k(u,X_i)^2 \, P(du) + \tfrac{1}{2}\int \pi_2\psi_k \star^1_1 \pi_2\psi_j(u,X_i)^2 \, P(du)$. Hence, by Lemma 16(c), $P(|\widetilde{\pi_2\psi_j \star^1_1 \pi_2\psi_k}|^2)(X_i) \le \max_{j,k\in[p]} \|\pi_2\psi_j \star^1_1 \pi_2\psi_j\|_{L^2(P^2)} \, M\big(P(|\pi_2\psi_k|^2)\big)$. Therefore, the AM-GM inequality gives $II_1 \log^2 p \le \frac{n^2}{2} \max_{j\in[p]} \|\pi_2\psi_j \star^1_1 \pi_2\psi_j\|\ldots$...
Average density estimators: Efficiency and bootstrap consistency. Econometric Theory 38(6), 1140–1174.
Cattaneo, M. D., M. Jansson, and W. K. Newey (2018a). Alternative asymptotics and the partially linear model with many regressors. Econometric Theory 34(2), 277–301.
Cattaneo, M. D., M. Jansson, and W. K. Newe...
A. and S. Ng (1998). Parametric and nonparametric approaches to price and tax reform. Journal of the American Statistical Association 93(443), 900–909.
Di Nezza, E., G. Palatucci, and E. Valdinoci (2012). Hitchhiker's guide to the fractional Sobolev spaces. Bulletin des Sciences Mathématiques 136(5), 521–57...
Kontorovich, A. (2023). Decoupling maximal inequalities. arXiv preprint arXiv:2302.14150.
Li, T. and M. Yuan (2024). On the optimality of Gaussian kernel based nonparametric tests against smooth alternatives. Journal of Machine Learning Research 25(334), 1–62.
Meckes, E. (2009). On Stein's met...
arXiv:2504.11025v1 [math.ST] 15 Apr 2025
Optimal inference for the mean of random functions
Omar Kassi, Valentin Patilea
April 16, 2025
Abstract. We study estimation and inference for the mean of real-valued random functions defined on a hypercube. The independent random functions are observed on a discrete, random su...
https://arxiv.org/abs/2504.11025v1
the mean function with functional data have been proposed recently, with simulation-based methods being currently the most successful. See, for example, Degras (2011), Choi and Reimherr (2018), Dette et al. (2020), Wang et al. (2020), Telschow and Schwartzman (2022) and Liebl and Reimherr (2023). The latter referen...
functions. The critical values for the confidence regions can be derived from the asymptotic variance of the Gaussian approximation. Our estimation and inference approach for the mean function crucially depends on the random nature of the design. In this paper, we assume that we know the density of the design, but t...
heteroscedastic measurement errors. The design points $T_{i,m}$ are random copies of the vector $T\in\mathcal{T}$, which admits a density $f_T$ with respect to the Lebesgue measure. This is the so-called random design framework. Let $T^{\mathrm{obs}} = \bigcup_{1\le i\le N} \mathcal{T}_i$ with $\mathcal{T}_i = \{T_{i,m} : 1\le m\le M_i\}$. Finally, the number of all data points $(Y_{i,m}, T_{i,m})$ used to e...
see Timan (1994, Section 5.3.2), Handscomb (1966, equations (2) and (3), page 192), Mason (1980, equation (22)). For the univariate case ($D=1$), see DeVore and Lorentz (1993, Ch. 7) and Efromovich (1999b, Proposition 2.4.2).
3 A new approach to estimating Fourier coefficients
The Fourier coefficients $a_k$ are integrals that...
in the Supplement. We now present some properties of $\hat{c}_l$ and $\hat{d}_l$. Proposition 2. Let $S$ in Definition 2 be the independent sample $T_1,\ldots,T_n$ of a random vector $T\in\mathcal{T}$, and let $A(\cdot)$ be an integrable function of $T$. Then, $E[\hat{d}_l \mid T_l] = E[\hat{c}_l \mid T_l]$, $E[A(T_l)\hat{d}_l\ldots$...
observation points $\{T_{i,m}\}$ do not change across the trajectories $X_i$, and they are on a fixed, regular grid of, say, $m$ points, then the weights in (11) are all equal. For simplicity, let $D=1$. The estimate of the $k$-th Fourier coefficient then becomes $a_k = \frac{1}{Nm}\sum_{i=1}^{N}\sum_{m'=1}^{m} Y_{i,m'}\,\varphi_k\big(\tfrac{m'}{m}\big)\ldots$...
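Under the simplifying assumptions above (a fixed regular grid, equal weights, $D=1$), the equal-weight Fourier coefficient estimate is easy to sketch numerically. The cosine basis $\varphi_k(t)=\sqrt{2}\cos(\pi k t)$ and the test mean curve below are illustrative choices, not taken from the paper:

```python
import numpy as np

def simple_fourier_coeff(Y, k):
    """Equal-weight estimate of the k-th Fourier coefficient when all N
    curves are observed on the same regular grid of m points in (0, 1]."""
    N, m = Y.shape
    grid = np.arange(1, m + 1) / m                 # grid points m'/m
    phi_k = np.sqrt(2) * np.cos(np.pi * k * grid)  # illustrative cosine basis
    return float(Y.sum(axis=0) @ phi_k) / (N * m)

# Noiseless sanity check: if the mean curve is phi_1 itself, the
# estimated first coefficient should be numerically 1.
m, N = 1000, 5
grid = np.arange(1, m + 1) / m
mu = np.sqrt(2) * np.cos(np.pi * grid)   # mean curve = phi_1
Y = np.tile(mu, (N, 1))                  # N identical noiseless curves
a1 = simple_fourier_coeff(Y, 1)
```

With noisy observations $Y_{i,m'} = \mu(m'/m) + \text{error}$, the same average remains an unbiased estimate of the Riemann-sum approximation of $a_k$.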
construction of simple Fourier coefficient estimates, leading to a constant $K_1$ equal to the coefficient of difficulty, also seems to be a novelty in the problem of multivariate nonparametric heteroscedastic regression with random design, i.e., when $M_i = 1$ for all $i$. Remark 4. The rate of $\hat\mu^*$ is optimal over a large class of f...
where $\Sigma = E[W(d)W(d)^\top]$, $F$ is a constant which does not depend on $h$, and $|h|_2 := \sup_{p,q}\big\|\frac{\partial^2 h}{\partial x_p \partial x_q}\big\|_\infty$, $|h|_3 := \sup_{p,q,r}\big\|\frac{\partial^3 h}{\partial x_p \partial x_q \partial x_r}\big\|_\infty\ldots$...
the Supplement we propose an alternative method for constructing the confidence regions based on subsampling and without using the matrix $\Sigma$.
6 Regularity estimation
Estimation and inference have been constructed above assuming that the Hölder exponent $\alpha_\mu$ is given in Assumption (H3). We now want to provide an estimate ...
two examples is due to the presence of oscillations around 0, which after integration increase the pointwise Hölder exponent. In this section, we focus on the functions with a constant pointwise Hölder exponent. It is worth noting that the class of functions with constant pointwise Hölder exponent is rich. Indeed,...
of $\alpha$ from the logarithm of an estimate of $b_k$. Let $J$ be a large integer and define $\mathcal{H} = \mathcal{H}_J = \{H_j = j/J : 1\le j\le J\}$. (24) Then, for any $\mathcal{H}\ni H_j < \alpha$, it holds that $b_k + K^{-2H_j} \sim K^{-2H_j}$. This allows us to derive an estimating equation for $H_j$ from $b_k + K^{-2H_j}$, as long as $H_j < \alpha$. In other words, for any small $\epsilon > 0$, we have $-\frac{1}{2\log(K)}\log\big(b_k + K^{-2H\ldots}$...
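As a numerical illustration of this estimating equation (the values of $K$ and $\alpha$ and the stylized bias term $b_k \asymp K^{-2\alpha}$ below are hypothetical, not from the paper): when $H_j < \alpha$, the term $K^{-2H_j}$ dominates $b_k$ and the log-transform recovers $H_j$ up to a small error, while beyond $\alpha$ it saturates.

```python
import math

K = 1000
alpha = 0.8              # hypothetical true regularity
b_k = K ** (-2 * alpha)  # stylized bias term of order K^(-2*alpha)

def H_estimate(H_j):
    """Estimating equation: -log(b_k + K^(-2*H_j)) / (2*log K)."""
    return -math.log(b_k + K ** (-2 * H_j)) / (2 * math.log(K))

est = H_estimate(0.4)      # grid value below alpha: recovered accurately
est_sat = H_estimate(1.2)  # grid value above alpha: saturates near alpha
```

This mirrors the condition $H_j < \alpha$ in the text: the formula is informative only on that side of the true exponent.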
Then, for any $\eta > 0$,
$P\big(|\hat{H}_j - \tilde{H}_j| \ge \eta\big) \le 2\exp\Big(-\frac{k_1^2(K^D-1)N}{\log^2(N)+k_1^2 N g(K)} \times \frac{2\eta^2 g(K)\log^2(K) K^{-4\eta}}{1+\eta\log(K)K^{-2\eta}}\Big) + \frac{2}{N} + 2N\exp\big(-2C_0^2\sqrt{C_M}\,K^{-2D} m^{1/2}\big)$, (31)
where $k_1$ is a positive constant and $g(K)$ is defined as in (26). The last term...
, with $\alpha = \min\{\alpha_f, \alpha_\sigma, H_\gamma\} \in (0,1]$. We show in the Supplement that $\varrho(D) = 5/2$ if $D=1$. For $D>1$, Kabatjanskii and Levenstein (1978) have shown that $\hat{d}_{i,m} \lesssim K^D$, see also Henze (1987, Lemma 1.3), for some constant $R$. On the other hand, Devroye et al. (2017, Theorem 2.1) have shown that $M^2 E[V^2_{i,m}\mid T_{i,m}] \to \alpha(D) \in [\ldots$...
$\ldots_T(T_{i,m})\,\big\{\sigma^2(T_{i,m}) + \gamma(T_{i,m},T_{i,m}) + \eta^2_{i,m}\big\}\Big\}^2\Big] = \frac{3}{d^2 M^2}\sum_{(i,m)}\sum_{(j,m')} E\Big[\omega^2_{i,m}\omega^2_{j,m'}\, \frac{\varphi_k^2(T_{i,m})}{f_T^2(T_{i,m})}\, \frac{\varphi_k^2(T_{j,m'})}{f_T^2(T_{j,m'})}\, \big\{\sigma^2(T_{i,m})+\gamma(T_{i,m},T_{i,m})\big\} \times \big\{\sigma^2(T_{j,m'})+\gamma(T_{j,m'},T_{j,m'})\big\}\ldots$...
$\frac{M(M-1)}{p_k p_{k^+}} E\big[\gamma(T_1,T_2)\,\mathbf{1}\{(T_1,T_2)\in I_k\times I_{k^+}\}\big]\times\{1-\rho_3\} + \frac{2}{p_k p_{k^+}} E\big[\mu(T_1)\mu(T_2)\,\mathbf{1}\{(T_1,T_2)\in I_k\times I_{k^+}\}\big]\times\rho_3$, with $G_k$ as in Lemma 3 and $\rho_3 = (1-p_k)^M + (1-p_{k^+})^M - (1-p_k-p_{k^+})^M$. With similar calculations as used to bound $G_k$, we get the following bound for the remainder $G$: $|G|\lesssim\ldots$...
39(5):2330–2355.
Chatterjee, S. and Meckes, E. (2008). Multivariate normal approximation using exchangeable pairs. ALEA Lat. Am. J. Probab. Math. Stat., 4:257–283.
Choi, H. and Reimherr, M. (2018). A geometric approach to confidence regions and bands for functional parameters. J. R. Stat. Soc. Ser. B. Stat. Met...
A. (1995). Gaussian random functions, volume 322 of Mathematics and its Applications. Kluwer Academic Publishers, Dordrecht.
Mason, J. C. (1980). Near-best multivariate approximation by Fourier series, Chebyshev series and Chebyshev interpolation. J. Approx. Theory, 28(4):349–358.
Németh, Z. (2014). On multivaria...
arXiv:2504.11217v1 [math.ST] 15 Apr 2025
Is model selection possible for the ℓp-loss? PCO estimation for regression models
Claire Lacour¹, Pascal Massart², and Vincent Rivoirard³,²
¹Univ Gustave Eiffel, Univ Paris Est Creteil, CNRS, LAMA UMR 8050, F-77447, Marne-la-Vallée, France
²Université Paris-Saclay, CNRS, Inria...
https://arxiv.org/abs/2504.11217v1
risk minimization includes maximum likelihood and least squares estimation. The penalized empirical risk selection procedure consists in considering some proper penalty function $\mathrm{pen}: \mathcal{M}\to\mathbb{R}_+$ and taking $\hat{m}$ minimizing $L_n(\hat{f}_m) + \mathrm{pen}(m)$ (1) over $\mathcal{M}$. We can then define the selected model S...
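A minimal sketch of the selection rule (1) in a toy Gaussian-sequence setting; the data, the nested models, and the AIC-style penalty $\mathrm{pen}(m) = 2\sigma^2|m|$ below are illustrative stand-ins, not the PCO penalty studied in this paper:

```python
# Toy penalized empirical risk selection: nested models m_d keep the
# first d coordinates; the empirical risk is the residual sum of squares.
theta = [5.0, 5.0, 5.0, 0.0, 0.0, 0.0]     # true signal: 3 active coordinates
noise = [0.1, -0.2, 0.05, 0.3, -0.1, 0.2]  # fixed "noise", for reproducibility
y = [t + e for t, e in zip(theta, noise)]
sigma2 = 1.0

def criterion(d):
    risk = sum(v * v for v in y[d:])   # L_n(f_hat_m): energy left unexplained
    pen = 2.0 * sigma2 * d             # pen(m): AIC-style, illustrative
    return risk + pen

d_hat = min(range(len(y) + 1), key=criterion)  # selected model size
```

Here the criterion is minimized at $d=3$, the true number of active coordinates; a penalty that grows too slowly would instead keep the noise coordinates.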
power of wavelet thresholding for $L_p$-estimation over the scale of Besov classes: see [DJKP95], [DJKP96], [DJKP97], [DJ98]. Refinements have been proposed in [Jud97], [KPT96], [HKP99], [JS05], to name but a few. It must be noted that these thresholding methods generally suffer from logarithmic losses in the rates of c...
risk associated with this weighted $\ell_p$-norm. We assume in the sequel that $\|\theta\|_{\ell_p(w)} < \infty$. Given a collection of models $\mathcal{M}\subset\mathcal{P}(\Lambda(N))$, we wish to select $\hat{m}\in\mathcal{M}$ in the best possible way. For this purpose, as explained previously, we rely on the PCO criterion introduced by [LMR17, VLMR23]. The heuristic of thi...
involves the sum of two terms: a quadratic one, proportional to $\sqrt{x}$, which is classical, and a second one proportional to $x^{p/2}$. This last term is linear for $p=2$ but it raises many technical difficulties otherwise, in particular for $p>2$. This result then reveals an elbow phenomenon depending on whether $p$ is larger than 2 or...
except in Theorem 2.3 (for the first point), we assume that the $\xi_\lambda$'s are i.i.d. centered sub-Gaussian variables. Along this section, we denote $\sigma_q := \big(E[|\xi_\lambda|^q]\big)^{1/q}$, $1\le q<\infty$. We first derive in subsequent Theorem 2.1 a very general result which holds for any penalty function. Actually, by highlightin...
Gaussian i.i.d. case, when $p=1$, the Cirelson-Ibragimov-Sudakov inequality gives $c_{1,1}=\sqrt{2}$ and $c_{2,1}=0$ (see Theorem 3.4 in [Mas07]). When $p=2$, we retrieve the well-known inequality for chi-squared variables and $c_{1,2}=c_{2,2}=2$ works: see Lemma 1 of [LM00]. This also matches with the Hanson-Wright inequality with identity ...
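For reference, the chi-squared deviation bound behind $c_{1,2}=c_{2,2}=2$ has the following form (a sketch of Lemma 1 of [LM00], restated here from memory, so the exact constants should be checked against the original): if $\xi_1,\ldots,\xi_D$ are i.i.d. standard Gaussian, then for every $x>0$,

$$P\Big(\sum_{\lambda=1}^{D}\xi_\lambda^2 \ge D + 2\sqrt{Dx} + 2x\Big) \le e^{-x},$$

which exhibits exactly the two regimes discussed above: a quadratic term $2\sqrt{Dx}$ and a linear (in $x$) term $2x$.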
some $x_{m_j}\ge 1$,
$p_j(m_j) = \frac{3}{2}\sigma_p^p|m_j| + \kappa_p\, 2^{\frac{(p-2)_+}{2}}\, |m_j|^{(1-\frac{p}{2})_+}\, x_{m_j}^{\frac{p}{2}}$. (12)
Observe that $p_j(m_j) = \frac{3}{2}\sigma_p^p|m_j| + \kappa_p|m_j|^{1-\frac{p}{2}} x_{m_j}^{\frac{p}{2}}$ if $p\le 2$, and $p_j(m_j) = \frac{3}{2}\sigma_p^p|m_j| + \kappa_p 2^{\frac{p}{2}-1} x_{m_j}^{\frac{p}{2}}$ if $p\ge 2$.
The factor $x_{m_j}$, depending on $m_j$, will be specified later. But note that, mimicking the computations of Section 1.1, the...
$p_j(m_j)$ with the value $p=2$ and we introduce $p^{\#}_j(m_j) = \frac{3}{2}\sigma_2^2|m_j| + \kappa_2 x_{m_j}$. (19) We obtain the following theorem. Theorem 2.7. Let $q>1$ and $p\ge 2$. Let $m\in\mathcal{M}$. For $j\in\mathcal{J}$ and $m_j\in\mathcal{M}_j$, we take $x_{m_j}\ge 1$ and we consider the estimate $\tilde\theta = \hat\theta(\hat{m})$ associated with the penalty $\mathrm{pen}(m) := 2\varepsilon^p \sum_{j\in\mathcal{J}} \omega_j \min\big(\ldots$...
$\sum_{m_j\in\mathcal{M}_j,\, m_j\ne\emptyset} |m_j|^{(1-\frac{p}{2})_+} e^{-x_{m_j}} \le \sum_{d=1}^{+\infty} d^{(1-\frac{p}{2})_+} e^{-a\log(d)} < \infty$. In the previous inequality, we have used that $\mathcal{J}=\{1\}$ and $\Lambda_1 = \Lambda(N)$; therefore, $m_j = m$ for any $j$ and any $m\in\mathcal{M}$. Theorem 2.5 gives $E\big[\|\tilde\theta-\theta\|_p^p\big] \le \widetilde{M}_p \inf_{m\in\mathcal{M}}\big\{ E\big[\ldots$...
case $r > p/(2s+1)$ can be decomposed into two cases: the case $r\ge p$ and the case $p/(2s+1) < r < p$, referred to respectively as the homogeneous and intermediate cases. Finally, the case $r = p/(2s+1)$ will be called the frontier case. We refer to these zones for the upper bounds.
3.2.3 Upper bounds
Our aim is to obtain the rate of ...
case, except the adaptive procedure proposed in [Jud97] based on an involved combination of thresholding and Lepski-type procedures. In particular, [DJKP95] have a logarithmic loss, both in homogeneous and intermediate cases, with powers 2s+1. It is also the case in [LMS97] (for the study of the white noise model) o...
belongs to $\mathcal{B}^s_{r,q}$ if and only if $\sum_{j\ge -1} 2^{qj(s+\frac{1}{2}-\frac{1}{r})} \Big(\sum_{k\in K_j} |\langle g,\psi_{jk}\rangle|^r\Big)^{\frac{q}{r}} < \infty$. When $q=\infty$, this condition becomes $\sup_{j\ge-1} 2^{j(s+\frac{1}{2}-\frac{1}{r})} \Big(\sum_{k\in K_j} |\ldots$...
We first state the following lemma whose proof can be found in Section 5.3.3. Lemma 4.4. Let $1\le p<\infty$. For any function $g$, we have $\|g\|_{L_p} \le C\|g\|_{\mathcal{B}^0_{p,p\wedge 2}}$, for $C$ a constant and with $\|g\|^{p\wedge 2}_{\mathcal{B}^0_{p,p\wedge 2}} := \sum_{j\ge-1} 2^{j(p\wedge 2)(\frac{1}{2}-\frac{1}{p})} \Big(\sum_{k\in K_j} |\ldots$...
$|\theta_\lambda| < K\varepsilon|\xi_\lambda|$. Then $|S_\lambda| \le w_\lambda \big| |\theta_\lambda|^p + \alpha\varepsilon^p|\xi_\lambda|^p - |Y_\lambda|^p \big| \le C_{2p}(\alpha,K^{-1})\, w_\lambda |\varepsilon\xi_\lambda|^p$, using again Lemma 5.1 with $|\varepsilon\xi_\lambda| \ge K^{-1}|\theta_\lambda|$. Finally, $\sum_{\lambda\in\hat{m}^c\cap m} S_\lambda \le \big(1-\frac{\alpha}{2}\big) \sum_{\lambda\in\hat{m}^c} w_\lambda|\theta_\lambda|^p + C_{2p}(\alpha,K^{-1})\ldots$...
$\sum_{j\in\mathcal{J}} \omega_j \sum_{m_j\in\mathcal{M}_j,\, m_j\ne\emptyset} \Big[ \int_0^\infty P\big(Z(m_j) - p^1_j(m_j) > u\big)\,du + \int_0^\infty P\big(\{Z(m_j) - p^2_j(m_j) > u\}\cap\Omega(N,q)\big)\,du \Big]$. With the same computation as in the proof of Theorem 2.5, we have...
this section we assume that $r\ge p$. Let us recall our sub-collection of models. A model $m = \cup_{j=-1}^{J} m_j \in \mathcal{M} = \mathcal{M}_H$ if for some $0\le L\le J$: for all $j\le L$, $m_j = \Lambda_j = \{j\}\times K_j$, and for all $j>L$, $m_j = \emptyset$. Note that for any $m\in\mathcal{M}$, $E[V_p(m)] = \varepsilon^p\sigma_p^p \sum_{j\ge-1} \omega_j|m_j| = \varepsilon^p\sigma_p^p \sum_{j=-1}^{L} \omega_j 2^j \le C(p,\sigma_p)\,\varepsilon^p 2^{Lp/2}$, with $C(p,\sigma_p)$ a constant only depending on $p$ a...
$j = L+l$ is $\log|\mathcal{M}_j(L)| \le \log\binom{2^{L+l}}{|m_{L+l}|} \le |m_{L+l}|\log\Big(\frac{e 2^{L+l}}{|m_{L+l}|}\Big)$, where we have used the bound $\log\binom{c}{d} \le d\log\big(\frac{ec}{d}\big)$. Then $\log|\mathcal{M}_j(L)| \le 2^L 2^{-l} b(l+1)^{-3} \log\big(e 2^{L+l}\lfloor 2^{L+l}A(l)\rfloor^{-1}\big) \le 2^L 2^{-l} b(l+1)^{-3}\log\ldots$...
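The counting bound $\log\binom{c}{d} \le d\log(ec/d)$ invoked here follows from $\binom{c}{d} \le (ec/d)^d$ and can be sanity-checked exhaustively over small dyadic sizes (a standalone check, not code from the paper):

```python
import math

def bound_holds(c, d):
    """Check log(binom(c, d)) <= d * log(e * c / d)."""
    return math.log(math.comb(c, d)) <= d * math.log(math.e * c / d)

# Exhaustive check over a few dyadic sizes, mirroring the 2^(L+l) counts.
all_hold = all(bound_holds(c, d)
               for c in (8, 64, 1024)
               for d in range(1, c + 1))
```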
$\mathcal{M}_j, |m_j|=d\}\, e^{-Kdj}$. Now we use that $\log\binom{2^j}{d} \le d\Big(1+\log\Big(\frac{2^j}{d}\Big)\Big) \le 2dj$ to state $R(\mathcal{M}) \le \sum_{j=-1}^{J} \omega_j \sum_{d=1}^{2^j} e^{2dj} e^{-Kdj} \le \sum_{j=-1}^{J} 2^{j(\frac{p}{2}-1)} \sum_{d=1}^{\infty} e^{(2-K)dj} \le \sum_{j=-1}^{J} \omega_j e^{(2-K)j}\ldots$...
$\ldots(l+1)^{-3}\big)^{1-\frac{p}{r}} \le 2^{L(\frac{p}{2}-\frac{p}{r})} \sum_{l\ge 0} 2^{l(\frac{p}{2}-1)} \Big(\sum_k |\theta_{L+l,k}|^r\Big)^{\frac{p}{r}} \big(2^{l(1-p/2)}(l+1)^{-3}\big)^{1-\frac{p}{r}}$. Since $\theta\in\mathcal{B}^s_{r,\infty}(R)$, for all $l$, $\Big(\sum_k |\theta_{L+l,k}|^r\Big)^{1/r} \le R\, 2^{-(s+\frac{1}{2}-\frac{1}{r})(L+l)} \Rightarrow \Big(\sum_k |\theta_{L+l,k}|^r\Big)^{p/r}\ldots$...
we use the previous point. In Section 4, we are faced with non-identically distributed variables. In this case, we use the following result, derived from Theorem 2.3. Lemma 5.10. Let $p\ge 1$. Assume that the $\xi_\lambda$'s are centered independent sub-Gaussian variables. We assume that there exists a positive constant $\tau$ such that f...
partition, of order $B-A$, and the size of $I_{j\ell}$ is smaller than $|I_j|$. Now, we apply Lemma 5.10, with $\tau = \sup_{jk}\|\xi_{jk}\|_{\psi_2}$. Let $x\ge 1$. For any $1\le\ell\le K$,
$P\Big(\sum_{k\in I_{j\ell}} |\xi_{jk}|^p \ge \frac{3}{2}\sigma_p^p|I_{j\ell}| + \kappa_p|I_{j\ell}|^{(1-\frac{p}{2})_+} x^{\frac{p}{2}}\Big) \le 2e^{-x}$.
Then, with probability larger than $1-2Ke^{-x}$, $\sum_{k\in I_j}|\xi_{jk}|\ldots$...
only depending on $\varphi$, $\psi$, $s$ and $r$. Now, when $p\le 2$, the proof of Theorem 4.1 follows from (28) combined with Proposition 4.2, Theorem 3.2, Lemmas 5.12 and 4.3 and Inequality (29). When $p>2$, Inequality (29) does not hold, but Inequality (30) shows that we only have to bound $\sum_{j=-1}^{J}\Big( E\big[\ldots$...
$= \sum_{l\ge 0}\Big(2^{(L+l)(\frac{p}{2}-1)} \sum_{k>\lfloor 2^{L+l}\tilde{A}(l)\rfloor} |\theta_{L+l,(k)}|^p\Big)^{\frac{2}{p}} \le \sum_{l\ge 0}\Big(2^{(L+l)(\frac{p}{2}-1)} \Big(\sum_k |\theta_{L+l,k}|^r\Big)^{\frac{p}{r}} \big(\lfloor 2^{L+l}\tilde{A}(l)\rfloor+1\big)^{1-\frac{p}{r}}\Big)^{\frac{2}{p}} \le \sum_{l\ge 0}\Big(2^{(L+l)(\frac{p}{2}-1)} \Big(\sum_k |\theta_{L\ldots}$...
Johnstone, G. Kerkyacharian, and Dominique Picard. Universal near minimaxity of wavelet shrinkage. In Festschrift for Lucien Le Cam: Research Papers in Probability and Statistics, pages 183–218. Springer, 1997.
[DL96] Luc Devroye and Gábor Lugosi. A universally acceptable smoothing factor for kernel density estimates. A...
bandwidth selectors. The Annals of Statistics, pages 929–947, 1997.
[Mal73] Colin L. Mallows. Some comments on Cp. Technometrics, 15(4):661–675, 1973.
[Mas07] Pascal Massart. Concentration Inequalities and Model Selection (Saint-Flour Summer School on Probability Theory 2003, ed. J. Picard). Springer, 2007.
[Nem85] Arkad...
arXiv:2504.11269v1 [math.ST] 15 Apr 2025
Minimax asymptotics
Mika Meitz (University of Helsinki), Alexander Shapiro (Georgia Institute of Technology)
April 2025
Abstract. In this paper, we consider asymptotics of the optimal value and the optimal solutions of parametric minimax estimation problems. Specifically, we consider estimators ...
https://arxiv.org/abs/2504.11269v1
$\ldots\sup_{\xi\in\Xi} f(\gamma,\xi)$, $\hat\Gamma_N := \arg\min_{\gamma\in\Gamma}\sup_{\xi\in\Xi}\hat{f}_N(\gamma,\xi)$, $\hat\Xi_N(\gamma) := \arg\max_{\xi\in\Xi}\hat{f}_N(\gamma,\xi)$. The aim of this article is to study the asymptotic behavior, in particular the limiting distribution, of the optimal value $\hat\vartheta_N$ and the optimal $\gamma$-solutions, say $\hat\gamma_N\in\hat\Gamma_N$, as the sample size $N$ tends to infinity. Regarding previous ...
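A minimal numerical sketch of the sample minimax quantities $\hat\vartheta_N$ and $\hat\gamma_N$ via grid search; the objective $F(X,\gamma,\xi) = \xi(X-\gamma) - \xi^2/2$, the sets $\Gamma = \Xi = [-1,1]$, and the sample size are illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(loc=0.3, scale=1.0, size=2000)  # sample X_1, ..., X_N
xbar = float(X.mean())

gammas = np.linspace(-1.0, 1.0, 201)  # grid over Gamma
xis = np.linspace(-1.0, 1.0, 201)     # grid over Xi

def f_hat(gamma, xi):
    # Sample average of F(X, gamma, xi) = xi * (X - gamma) - xi**2 / 2,
    # which only depends on the data through the sample mean.
    return xi * (xbar - gamma) - 0.5 * xi ** 2

inner_max = np.array([max(f_hat(g, x) for x in xis) for g in gammas])
theta_hat = float(inner_max.min())             # sample optimal value
gamma_hat = float(gammas[inner_max.argmin()])  # an optimal gamma-solution
```

For this objective the inner maximum equals $(\bar X_N - \gamma)^2/2$ up to grid error, so $\hat\gamma_N$ tracks the sample mean and $\hat\vartheta_N$ is near zero.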
the functional delta method we are able to express the limiting distribution of the optimal value in two scenarios that are not convex-concave and that complement the earlier results obtained by Shapiro (2008) in the convex-concave case. No restrictions are imposed on the number of $\gamma$- and $\xi$-solutions. Section 3 consider...
We study asymptotics of the optimal value by first analyzing appropriate directional differentiability properties in Section 2.1, and then converting these results into asymptotics in Section 2.2.
2.1 Directional differentiability of the optimal value function
To obtain our results, we next consider a minimax problem more...
and (2.6) we have that $V(f+t_k\eta) = f(\gamma_k,\xi_k) + t_k\eta(\gamma_k,\xi_k)$. By passing to a subsequence if necessary we can assume that $\gamma_k$ converges to a point $\gamma^*\in\Gamma^*$ and $\xi_k$ converges to a point $\xi^*\in\Xi$. As the multifunction $(\varphi,\gamma)\mapsto\Xi^*_\varphi(\gamma)$ is upper semicontinuous, it follows that $\xi^*\in\Xi^*(\gamma^*)$, and hence $V(f) = f(\gamma^*,\xi^*)$. By condition (i) we have ...
function $V(\varphi)$ in the vicinity of the point $\varphi_0 = f$. To this end, we apply results from Bonnans and Shapiro (2000, Sections 5.4 and 4.3) in a differentiable case and make the following assumption. Assumption 2.2. (i) For almost every $X$ and all $\xi\in\Xi$ the function $F(X,\gamma,\xi)$ is differentiable in $\gamma$ with $\nabla_\gamma F(X,\gamma,\xi)$ continuous in ($\gamma\ldots$...
and Shapiro (2000). The upper bound (2.13) now follows from Proposition 5.121 (and the remarks that follow it) together with the form of the Lagrangian dual problem (2.12). The bound on the right-hand side of (2.13) can fail to be tight, i.e., it can be strictly larger than the respective $\liminf_{t\downarrow 0}$, even if $\Gamma^* = \{\gamma^*\}$ is the ...
multipliers $\Lambda(\gamma^*) = \{\mu(\gamma^*)\}$ is a singleton for every $\gamma^*\in\Gamma^*$. Conditions for uniqueness of Lagrange multipliers are given in Bonnans and Shapiro (2000, Theorem 5.114). In particular, the following condition is sufficient for the uniqueness: for every $\gamma^*\in\Gamma^*$, the set $\Xi^*(\gamma^*) = \{\xi^*_1,\ldots,\xi^*_k\}$ is finite ($k$ may depend on $\gamma^*$) and ...
singleton and $\Xi^*(\gamma^*) = \{\xi^*_1,\ldots,\xi^*_k\}$ is finite. (ii) The points $\gamma^*$ and $\xi^*_1,\ldots,\xi^*_k$ are interior points of the respective sets $\Gamma$ and $\Xi$. Part (i) of this assumption is admittedly quite restrictive but, together with part (ii), it allows us to write the original minimax problem (1.1) as a finite minimax problem. This in tur...
(iii) The following convergence results hold:
$\sup_{\gamma\in\mathcal{N}} |\hat\varphi_{iN}(\gamma)-\varphi_i(\gamma)| \to 0$ w.p.1, (3.11)
$\sup_{\gamma\in\mathcal{N}} \|\nabla\hat\varphi_{iN}(\gamma)-\nabla\varphi_i(\gamma)\| \to 0$ w.p.1, (3.12)
$\sup_{\gamma\in\mathcal{N}} \|\nabla^2\hat\varphi_{iN}(\gamma)-\nabla^2\varphi_i(\gamma)\| \to 0$ w.p.1, (3.13)
$\nabla\hat\varphi_{iN}(\gamma^*) = \nabla_\gamma \hat{f}_N(\gamma^*,\xi^*_i) + o_p(N^{-1/2})$, (3.14)
$\|\nabla\hat\varphi_{iN}(\gamma^*)-\nabla\varphi_i(\gamma^*)\| = O_p(N^{-1/2})$. (3.15)
Proof. (i) As $\xi^*_i$ is an interior...
by the mean value theorem and the definition of $B_{iN}$, w.p.1 for $N$ large enough
$\sup_{\xi\in B_{iN}} \big\|[\nabla_\xi\hat{f}_N(\gamma^*,\xi)-\nabla_\xi f(\gamma^*,\xi)] - [\nabla_\xi\hat{f}_N(\gamma^*,\xi^*_i)-\nabla_\xi f(\gamma^*,\xi^*_i)]\big\| \le C_N\|\hat\xi^*_{iN}-\xi^*_i\|$,
where $C_N := \sup_{\xi\in\Xi}\|\nabla^2_{\xi\xi}\hat{f}_N(\gamma^*,\xi)-\nabla^2_{\xi\xi}f(\gamma^*,\xi)\|$, and by (3.7) $C_N = o_p(1)$. Therefore $\sup_{\xi\in B_{iN}}\|\nabla_\xi\hat{f}_N(\gamma^*,\xi)-\nabla_\xi f(\gamma^*,\xi)\| \le o_p\ldots$...
$\max_{1\le i\le k}\varphi_i(\gamma)$ and $\varphi_i:\mathbb{R}^n\to\mathbb{R}$, $i=1,\ldots,k$, are twice continuously differentiable functions. Consider the $C^2$-norm of a twice continuously differentiable function $g:\mathbb{R}^n\to\mathbb{R}$:
$\|g\|_{C^2} := \sup_{\gamma\in\mathcal{N}} |g(\gamma)| + \sup_{\gamma\in\mathcal{N}} \|\nabla g(\gamma)\| + \sup_{\gamma\in\mathcal{N}} \|\nabla^2 g(\gamma)\|$,
where $\mathcal{N}$ is a compact neighborhood of $\gamma^*$. The following definition formalizes a...
perturbations and hence for such $h_m$ we have that $h_m^\top \nabla\varphi_i(\tilde\gamma_m) = 0$, $i\in\mathcal{I}_+(\lambda^*)$. It follows that $h^\top\nabla\varphi_i(\gamma^*) = 0$, $i\in\mathcal{I}_+(\lambda^*)$. Together with (3.29) this implies that $h\in\mathcal{L}$. By (3.28) this gives the contradiction with the second-order condition (3.25).
3.3 Quadratic approximation of the sample minimax problem
We next show that, w...
of $\psi_{iN}(\gamma)$,
$\nabla\hat\varphi_{iN}(\gamma) - \nabla\psi_{iN}(\gamma) = \nabla\hat\varphi_{iN}(\gamma) - \nabla\hat\varphi_{iN}(\gamma^*) - \nabla^2\varphi_i(\gamma^*)(\gamma-\gamma^*)$. (3.34)
As both $\|\hat\gamma_N-\gamma^*\|$ and $\|\bar\gamma_N-\gamma^*\|$ are $O_p(N^{-1/2})$, we can restrict the analysis to the neighborhood $B_{r_N}$, defined in (3.32), with $r_N = O_p(N^{-1/2})$. By (3.34) and the mean value theorem applied to $\nabla\hat\varphi_{iN}(\cdot)$ we have $\sup_{\gamma\in B_{r_N}} \|\nabla\hat\varphi_{iN}(\gamma)-\nabla\psi_{iN}\ldots$...
matrix $\Sigma$ given by the covariance matrix of the $nk\times 1$ vector $[\nabla_\gamma F(X,\gamma^*,\xi^*_1),\ldots,\nabla_\gamma F(X,\gamma^*,\xi^*_k)]$. By the derivations so far it holds that $\hat\gamma_N-\gamma^* = \bar\gamma_N-\gamma^* + o_p(N^{-1/2}) = \bar\gamma(Z_N)-\bar\gamma(0) + o_p(N^{-1/2})$. Finally, by applying the (finite-dimensional) Delta Theorem to problem (3.30) we obtain the main result of our paper, characteri...
the asymptotics of $\hat\gamma_N$ under a set of rather general assumptions. We close this section with remarks about two situations in which some changes to this result are needed. Remark 3.1. In Assumption 3.1 it was assumed that $\Xi^*(\gamma^*) = \{\xi^*_1,\ldots,\xi^*_k\}$ with the points $\xi^*_1,\ldots,\xi^*_k$ being interior points of $\Xi$. If, on the contrary...
to data-driven problems. Operations Research 58, 595–612.
Dem'yanov, V.F., Malozemov, V.N., 1974. Introduction to Minimax. Wiley, New York.
Diakonikolas, J., Daskalakis, C., Jordan, M.I., 2021. Efficient methods for structured nonconvex-nonconcave min-max optimization, in: Artificial Intelligence and Statistics, pp. 274...
Posterior Consistency in Parametric Models via a Tighter Notion of Identifiability Nicola Bariletto Bernardo Flores Stephen G. Walker The University of Texas at Austin May 27, 2025 Abstract We investigate Bayesian posterior consistency in parametric density models with proper priors, challenging the common perception t...
https://arxiv.org/abs/2504.11360v2
(see also Berk, 1970), which leveraged the extensive classical theory on the consistency of frequentist procedures, particularly maximum likelihood estimators (MLEs), to establish posterior consistency. These approaches involved imposing identifiability and regularity conditions on the parametric family of likelihoods—...
modeling framework: let the sample space be $(\mathbb{R},\mathcal{B}(\mathbb{R}))$, where $\mathcal{B}(T)$ denotes the Borel $\sigma$-algebra on a topological space $T$, and let $dx$ denote the Lebesgue measure on the real line. Define the statistical model $F_\Theta := \{f_\theta : \theta\in\Theta\}$, where $f_\theta := dF_\theta/dx$ is the density (Radon-Nikodym derivative) associated with a probability measure $F_\theta(dx) \ll\ldots$...
in that it requires only a well-specified prior and applies even in nonparametric models—i.e., when Θis infinite-dimensional rather than Euclidean. However, this type of consistency is not sufficient to characterize the posterior’s behavior with respect to neighborhoods of the true density, rather than of the true CDF....
consistency is recognized to be this extended notion of identifiability, which we later term sequential identifiability (see Definition 3 below), a fundamental discrepancy between parametric and nonparametric models becomes apparent. Nonparametric models, by their very nature, are inherently prone to a lack of sequenti...
is to bring identifiability back to the core of the analysis of posterior consistency, yielding substantially better conditions in the parametric setting. Moreover, as the example in Section 5 illustrates, our strengthened notion of identifiability is specifically tailored to Bayesian procedures and has no bearing on M...
found in the supplementary material at the end of the article.
2 Notation and basic definitions
Throughout, we use capital letters such as $F, G$, etc. to denote probability distributions, and lowercase letters $f, g$, etc. to denote the corresponding densities with respect to the Lebesgue measure. The standard Euclidean nor...
log-likelihood, requiring that log-likelihood ratios decay sufficiently fast—in an integrable manner—as the parameter value diverges, avoiding arbitrary peaks of the likelihood that may cause the MLE to overfit the data. Walker (1969) showed that assumptions W1–W3, together with a positive and continuous prior density,...
too weak to meaningfully capture closeness between densities. In particular, it is possible to construct sequences of densities that do not converge in the Hellinger sense, yet whose associated CDFs converge pointwise to that of a continuous random variable—i.e., they converge weakly. This phenomenon, which will be ana...
KL neighborhood of the true parameter, the latter being a standard well-specification assumption. As we illustrate in Section 5, this condition captures the core mechanism behind Bayesian consistency and, unlike the classical assumptions in the literature, is entirely decoupled from MLE convergence. In fact, our ill...
rather than drifting toward, for instance, infinity. In light of our discussion, such stability would provide evidence in support of consistency.
4.1 The role of oscillations
While sequential identifiability is the central concept of our theoretical analysis, it is crucial to understand the implications of its failu...
Weibull, and von Mises distributions—for which sequential identifiability (and therefore posterior consistency) is readily verified.
4.3 Example 2: uniform distribution
Another basic parametric model not expressible as an exponential family is the uniform distribution on $[0,\theta]$ for $\theta\in\Theta = (0,\infty)$. Specifically, assume $f_\theta(x) \propto 1_{[0,\theta]}(x)$. W...
$\infty$, $F_\theta \xrightarrow{w} F_0$, although $f_\theta$ does not converge to any density in the Hellinger sense. The first property confirms that the assumption of equivalence between the Euclidean and Hellinger metrics, made throughout this article, holds in this setting as well. This allows us to move freely between the two notions of convergence wit...
$\mu = 1$. The associated CDFs satisfy $H_{\lambda,\omega}(x) = \frac{x/\lambda + \mu\sin(\omega x)/\omega}{1 + \mu\sin(\omega\lambda)/\omega}$ for $x\in[0,\lambda]$, so that $H_{\lambda,0} = G_\lambda$ (by continuous extension) and $H_{\lambda,\omega} \xrightarrow{w} G_\lambda$ (while the densities do not converge) as $\omega\to\infty$, for all $\lambda\in\Lambda$. That is, by introducing an auxiliary oscillation parameter $\omega$, one is able to induce sequential unidentifiability at every mem...
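The phenomenon described here (weak convergence of $H_{\lambda,\omega}$ to $G_\lambda$ without convergence of the densities) can be checked numerically; the choice $\lambda=\mu=1$, so that $G_1$ is the uniform CDF on $[0,1]$, is an illustrative special case:

```python
import numpy as np

lam, mu = 1.0, 1.0
x = np.linspace(0.0, lam, 2001)

def H(omega):
    """CDF H_{lambda, omega} on [0, lambda]."""
    return (x / lam + mu * np.sin(omega * x) / omega) / (1.0 + mu * np.sin(omega * lam) / omega)

def h(omega):
    """Corresponding density: (1/lambda + mu*cos(omega*x)) / normalization."""
    return (1.0 / lam + mu * np.cos(omega * x)) / (1.0 + mu * np.sin(omega * lam) / omega)

G = x / lam                                         # uniform CDF on [0, 1]
cdf_gap = float(np.abs(H(1e4) - G).max())           # shrinks like 1/omega
dens_gap = float(np.abs(h(1e4) - 1.0 / lam).max())  # stays of order mu
```

Here cdf_gap is of order $1/\omega$ while dens_gap stays of order $\mu$: the CDFs merge while the densities keep oscillating, which is exactly the sequential-unidentifiability mechanism.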
any fixed $M>0$ can attain this lower bound. This establishes the asymptotic failure of the maximum likelihood principle—and, consequently, of classical approaches to posterior consistency—at all $\theta^\star \ge 0$.
5.3 Consistency at the uniform density
As we have already discussed, while all classical approaches fail to ensure post...
from the classical approach that links posterior consistency to the regular behavior of maximum likelihood estimation, our perspective builds on the foundational result of Schwartz (1964), which guarantees weak consistency under a mild Kullback-Leibler support condition. Sequential identifiability then arises as the mi...
concentrated. This is analogous to our calculations in Section 5, where we established posterior consistency at the only parameter value for which our illustrative model is not sequentially identifiable. Finally, our analysis—particularly the construction of a model based on cosine oscillations—sheds new light on a well-...
per contra that the posterior is not consistent in the Euclidean metric, so that there exist $\varepsilon>0$ and $\delta>0$ such that $F^\infty_{\theta^\star}\big(x_{1:\infty}\in\mathbb{R}^{\mathbb{N}} : \Pi(\{\theta\in\Theta : \|\theta^\star-\theta\| > \varepsilon\} \mid x_{1:n}) > \delta \text{ i.o.}\big) > 0$. Because $F^\infty_{\theta^\star}$ is a probability measure, the two above expressions imply that there exists some $x_{1:\infty}$ such that $\Pi(\{\theta\in\Theta : d_w(F_{\theta^\star},F_\theta) \le \varepsilon_j\} \mid x_{1:n}) > 1-\delta$ ultimatel...
this, it suffices to check that
$\frac{\partial}{\partial\theta}\sqrt{f_\theta(x)} = -\Big( \frac{x\sin(\theta x)}{2\sqrt{1+\cos(\theta x)}\sqrt{1+\sin(\theta)/\theta}} + \frac{\sqrt{1+\cos(\theta x)}\,(\theta\cos(\theta)-\sin(\theta))}{2\theta^2(1+\sin(\theta)/\theta)^{3/2}} \Big),$
which exists almost everywhere for all $x\in[0,1]$, is bounded above by 1 for all $x\in[0,1]$ and all $\theta\ge 0$ at which it exists, and moreover that $\theta\mapsto\sqrt{f_\theta(x)}$ is absolutely continuous on $[0,\infty)$ fo...
identifiability at all $\theta>0$, it is enough to observe that the model is identifiable and, as $\theta'\to\infty$, $F_{\theta'}$ does not converge weakly to $F_\theta$.
Proof of property 4. To prove that, as $\theta\to\infty$, $F_\theta \xrightarrow{w} F_0$, it is enough to observe that $\lim_{\theta\to\infty} F_\theta(x) = \lim_{\theta\to\infty}\frac{x+\sin(\theta x)/\theta}{1+\sin(\theta)/\theta} = x = F_0(x)$ for all $x\in[0,1]$. To show instead that there exist...
show that
$F^\infty_{\theta^\star}\Big(x_{1:\infty}\in[0,1]^{\mathbb{N}} : \max_{\theta\in[0,M]} \prod_{i=1}^{n} f_\theta(x_i) < 2^n \text{ ultimately}\Big) = 1.$
By step 1, this proves that the MLE is above $M$ for all large $n\in\mathbb{N}$ a.s.-$F^\infty_{\theta^\star}$, showing that (a) it is inconsistent at $\theta^\star$, and (b) it diverges to infinity a.s.-$F^\infty_{\theta^\star}$ (because $M$ is arbitrarily large).
Proof of step 1. Let $\delta>0$ be given, let $x_1,\ldots,x_n\in\ldots$...