| text string | source string |
|---|---|
ISSN: 1557-9654. DOI: 10.1109/18.135648. [31] Allan Gut. Probability: A Graduate Course. Vol. 5. Springer, 2005. [32] John Hajnal and Maurice S. Bartlett. "Weak ergodicity in non-homogeneous Markov chains". In: Mathematical Proceedings of the Cambridge Philosophical Society. Vol. 54. Cambridge University Press, 19... | https://arxiv.org/abs/2505.14458v1 |
Processes. Vol. 80. Probability Theory and Stochastic Modelling. Berlin, Heidelberg: Springer, 2017. ISBN: 978-3-662-54322-1, 978-3-662-54323-8. DOI: 10.1007/978-3-662-54323-8. [50] Sheldon M. Ross. Stochastic Processes. Wiley, 1983. ISBN: 978-0-471-09942-0. [51] Mathieu Sart. "Density estimation under ... | https://arxiv.org/abs/2505.14458v1 |
precise, m∨m′ = ⋃_{K′∈m′} m∨K′, (A.1) where m∨K′ is as defined in eq. (2.5). Then, |m∨m′| ≤ 2(|m| + |m′|). The rest of the second case can now be divided into the following 3 steps. Step I: Let m ∈ M_l be a partition and K ∈ m. Recall from Proposition 11 item 3 that there exist K_1, ..., K_l such that K ⊂ K_i. Let K_i = K_i^{(1)} K_i^{(2)} K_i^{(3... | https://arxiv.org/abs/2505.14458v1 |
(1/2n) ∑_{i=0}^{n−1} ∫_{χ×I×χ} 1_k(z_1, z_2, z_3) δ_{X_i,a_i}(dz_1, dz_2) µ_χ(dz_3) = ∫_χ (1/2n) ∑_{i=0}^{n−1} 1_k(X_i, a_i, y) µ_χ(dy). This completes our verification. Now using Lemma 12, we get E[H²(s, s̄)] ≤ E[2H²(s, V_m)]. Next, we produce an upper bound for the second term. Observe that we can expand the square in H²(s̄_m, ŝ_m) to get H²(s̄_m, ŝ_m) = (1/2n) ∑_{i=0}^{n−1... | https://arxiv.org/abs/2505.14458v1 |
= Tst/n ≤ 1 and, ∑_{i=0}^{Tst−1} 1_k(X_i, a_i, X_{i+1}) = 0 P-almost everywhere. So the first term of eq. (B.7) can be upper bounded by 1, and ∑_{i=0}^{Tst−1} 1_k(X_i, a_i, X_{i+1}) in the second term vanishes. Therefore, (1/2n) ∑_{k∈m} E[(√(∑_{i=0}^{n−1} E[1_k(X_i, a_i, X_{i+1}) | X_i, a_i]) − √(∑_{i=0}^{n−1} 1_k(X_i, a_i, X_{i+1})))²] ≤ (1/2n) ∑_{k∈m} (1 + E[√(∑_{i=Tst}^{n−1} E[1_k... | https://arxiv.org/abs/2505.14458v1 |
(1 + 2 log n) ∑_{j=0}^{n−2} P(Tst = j) + P(Tst = n−1)) ≤ (1/2n) ∑_{k∈m} (2 + (1 + 2 log n) P(Tst ≠ n−1)) ≤ (1/2n)|m|(3 + 2 log n). It now follows from eq. (B.5) that E[H²(s̄_m, ŝ_m)] ≤ (1/2n)|m|(3 + 2 log n), as required. CASE II. 1_k(X_Tst, a_Tst, X_Tst) = 1 and E[1_k(X_Tst, a_Tst, X_Tst) | X_Tst, a_Tst] < 1/n (B.12) For this case, we use the inequ... | https://arxiv.org/abs/2505.14458v1 |
∑_{f∈s_{m′}, m′∈M_l} P((3/4)(1 − 1/√2) H²(ŝ_m, f) + T(ŝ_m, f) − pen(m′) + pen(m) + 1/n ≤ (5/4)(1 + 1/√2) H²(s, f_1) + 2 pen(m) + ζ + 1/n) ≤ ∑_{f∈s_{m′}, m′∈M_l} exp(−n(pen(m) + pen(m′))/κ − nζ − 1). To calculate this sum, we now need to compute the cardinality of ⋃_{f∈s_{m′}, m′∈M_l} f. It follows by the construction in eq. (A.2) that the cardinality of the se... | https://arxiv.org/abs/2505.14458v1 |
the comparison fair, since otherwise we are comparing h² to h²_n. Let χ = I = [0,1/2)×[0,1/2) ∪ [1/2,1]×[1/2,1]. We set µ_χ and µ_I to be Lebesgue measures. Let the true s be such that s(x, l, y) = 2 if l, y ∈ [0,1/2); 2 if l, y ∈ [1/2,1]; 0 otherwise. Therefore, for all i ≥ 0 the states X_i are i.i.d. Uniform distributions over [0,1/2) or... | https://arxiv.org/abs/2505.14458v1 |
eq. 1.11] we get α_{i,j} ≤ ϕ_{i,j}. It is therefore sufficient to bound ϕ_{i,j}. Define the weak mixing coefficients θ̄_{i,j} as θ̄_{i,j} := sup_{s_1,s_2∈χ, l_1,l_2∈I} ∥P(X_j, a_j | X_i = s_1, a_i = l_1) − P(X_j, a_j | X_i = s_2, a_i = l_2)∥_TV, (B.15) and observe from [7, Lemma 1] that ϕ_{i,j} ≤ θ̄_{i,j}. Therefore, it is sufficient to prove θ̄_{i,j} ≤ (1 − Vol(χ_0)κ)^{j−i−1}. Let the dens... | https://arxiv.org/abs/2505.14458v1 |
p=1 τ(i,⋆,p) S. Recall from eq. (Fully Connected) that s(x, l, y) ≤ 1/ε_0. Therefore, for any time point p and any history ℏ_0^{p−1}, P(X_p ∈ S_χ | H_0^{p−1} = ℏ_0^{p−1}) ≤ Vol(S_χ)/ε_0, and (P.I) P(X_p ∉ S_χ | H_0^{p−1} = ℏ_0^{p−1}) ≤ 1 − ε_0 Vol(S^c_χ). (P.II) It follows from (P.I) that, A ≤ Vol(S_χ)/ε_0. Substituting this value in the right hand side of eq. ... | https://arxiv.org/abs/2505.14458v1 |
F_0^{p−1}, C ⊆ χ, D ⊆ [0,1/2): P(a_p ∈ D | X_p ∈ C, A) ≥ V(D). However, to show that (X_i, a_i) is not α-mixing, we note that for any p ≥ 1, P(a_p ∈ [1/2,1], a_0 ∈ [1/2,1]) = 7/16, and P(a_p ∈ [1/2,1]) P(a_0 ∈ [1/2,1]) = 11/16. Therefore, |P(a_p ∈ [1/2,1], a_0 ∈ [1/2,1]) − P(a_p ∈ [1/2,1]) P(a_0 ∈ [1/2,1])| = 1/4, which in turn implies that α_{i,j} = sup_{A,B} |P(H_0^i ∈ A, H_j^∞ ∈ B... | https://arxiv.org/abs/2505.14458v1 |
+ pen(m_2) + ζ. This completes the proof. B.11 Proof of Proposition 11. Proof. 1. That M_l ⊂ M_{l+1} is obvious by construction. We prove |m| ≤ 2^{l(2d_1+d_2)} by induction. It obviously is true for l = 0. Now let it be true for a given value l. Let m ∈ M_{l+1} be an element of M_{l+1}. From construction, either m ∈ M_l, or m ∈ ⋃_m ⋃_k S(m, k), where S... | https://arxiv.org/abs/2505.14458v1 |
by observing that for T(S) ≥ 1, n/T(S) − 1 ≥ n/(2T(S)). Fact 2. T(S̃⋆) ≥ 4^{ld−1}. Proof of Fact 2. This fact is proved using Fact 1. Summing over S ∈ m_ref^{(2)} on both sides of E[N_S] ≥ n/(2T(S)), we get that, ∑_{S∈m_ref^{(2)}} E[N_S] (=: LHS) ≥ ∑_{S∈m_ref^{(2)}} n/(2T(S)) ≥ ∑_{S∈m_ref^{(2)}} n/(2T(S̃⋆)) = 2^{l(d_1+d_2)} n/(2T(S̃⋆)) (=: RHS). Observing that LHS = ... | https://arxiv.org/abs/2505.14458v1 |
functions on the partition C × I × C. In other words, s(x, l, y) = M^{(l′)}_{i,j} for all x ∈ k_i^{(χ)}, y ∈ k_j^{(χ)}, l ∈ k_{l′}^{(I)}. We can represent M^{(l′)}_{i,j} by the following matrix, which depends only on ι and ξ(l′): M^{(l′)}_{ι,ξ(l′)} = d_1 × [C_ι, R_{ξ(l′)}; J_ι, L_ι], (B.27) where the blocks C_ι ∈ R^{d_1/3×d_1/3}, L_ι ∈ R^{2d_1/3×2d_1/3}, J_ι ∈ R^{2d_1/3×d_1/3}, and R_{ξ(l′)} ∈ R^{d_1/3×2... | https://arxiv.org/abs/2505.14458v1 |
large constant depending only upon ι. All we need to show now is that unless n ≥ C′_ι d_1 d_2 for some constant C′_ι, there exists no estimator ŝ such that P(d_2 h²_n(s, ŝ) > ε²) ≤ 1/(1 + π²). Separation of h²_n(·,·). Recall from the construction that χ = [0,1]^{d_1} and I = [0,1]^{d_2}. Furthermore, ι is known, and for all l ∈ k_j^{(I)}, j ∈ {1, ... | https://arxiv.org/abs/2505.14458v1 |
⋃_{q=0}^{p} {(X_q, a_q) ∈ k_i^{(χ)} × k_j^{(I)}} ≠ Ø. The following lemma establishes the lower bound on T. Its proof is given in Section B.21. Lemma 22. If n < d_1 d_2/(6ι) log(d_1 d_2/3) then, P(T > n) ≥ (1 + π²)^{−1}. We now have all the tools to derive the lower bound. Lower Bound on the Probability of Error. Throughout this part, we wi... | https://arxiv.org/abs/2505.14458v1 |
completes the proof of our lemma. B.16 Proof of Lemma 17. Proof. The proof of this lemma shares similarities with the proofs of Propositions 2 and 3 in [9] or that of Claim B3 in [52]. To begin, observe that it is enough to show H²(s, f_2) + T(f_1, f_2) − H²(s, f_1) ≤ (1/√2)(H²(s, f_2) + H²(s, f_1)) + (1/n) ∑_{i=0}^{n−1} Z_i(f_1, f_2). Starting ... | https://arxiv.org/abs/2505.14458v1 |
the proof follows. Turning to 2, let χ_0 = ⋃_{i=1}^{d_1/3} κ_i^{(χ)} and κ = 3ι. Observe that Vol(χ_0) = 1/3. Now using Lemma 6, we arrive at the conclusion. Turning to 3, we first recall the definition of ρ⋆ from Theorem 3: ρ⋆(S) = sup_i max(P((X_i, a_i) ∈ S), sup_{j>i} √(P((X_i, a_i) ∈ S, (X_j, a_j) ∈ S))). (B.37) Now we can upper bound each ter... | https://arxiv.org/abs/2505.14458v1 |
+ E[h²_n(s̄_m, ŝ) 1_Ψ] ≤ E[h²_n(s, s̄_m) 1_Ψ] + 2E[H²(s̄_m, ŝ) 1_Ψ] ≤ E[h²_n(s, s̄_m) 1_Ψ] + 2E[H²(s, ŝ) 1_Ψ] + 2E[H²(s̄_m, s) 1_Ψ] ≤ E[h²_n(s, s̄_m) 1_Ψ] + 2E[H²(s, ŝ)] + 2E[H²(s̄_m, s)]. We bound E[H²(s, ŝ)] ≤ inf_{m∈M_l} (E[H²(s, V_m)] + pen(m)) by Theorem 1. Term 2: Since h²_n(·,·) ≤ 1, the second term can be bounded as follows E[... | https://arxiv.org/abs/2505.14458v1 |
definition of r_n: r_n = ∥ν_n − ν∥_TV. It follows that |ν_n(A) − ν(A)| ≤ r_n for any measurable set A. Observe that this implies sup_A |ν²_n(A) − ν²(A)| = sup_A |ν_n(A) − ν(A)|(ν_n(A) + ν(A)) ≤ 2r_n. Consequently, sup_A {ν_n(A) − ν(A)} ≤ r_n and inf_A {ν²_n(A) − ν²(A)} ≥ −2r_n. Now substituting the above lower bounds for ν²_n(S_r) and ν_n(S_r), it follows that, R... | https://arxiv.org/abs/2505.14458v1 |
≥ 1/(1 + π/(log(d_1 d_2/3) + 1)²) > 1/(1 + π²). This proves the lemma. We now have all the tools to derive the lower bound. Lower Bound on the Probability of Error. Throughout this part, we will assume that n < d_1 d_2/(6ι) log(d_1 d_2/3), so that P(T > n) ≥ (1 + π²)^{−1}. Using eq. (B.30) and Lemma 22 we get, P(h²_n(s, ŝ) > ε² | T > n) P(T > n) > ... | https://arxiv.org/abs/2505.14458v1 |
arXiv:2505.14611v1 [cs.IT] 20 May 2025. Fisher-Rao distances between finite energy signals in noise. Franck Florin, Thales, France. Abstract: This paper proposes to represent finite-energy signals observed in a given bandwidth as parameters of a probability distribution, and use the information-geometrical framework to comp... | https://arxiv.org/abs/2505.14611v1 |
However, a large number of applications, including telecommunications, sonar, radar, electronic warfare, music, speech analysis, fault diagnosis, and others, are concerned with the acquisition of time series representing a finite energy signal observed through a noisy receiving channel. Moreover, often in these app... | https://arxiv.org/abs/2505.14611v1 |
The corresponding set of distributions constitutes a submanifold of the full manifold of all observed signals. A numerical example is presented, demonstrating that the corresponding submanifold is not a geodesic submanifold, which means that the Fisher-Rao distance in the submanifold is greater than the Fisher-Rao distance ... | https://arxiv.org/abs/2505.14611v1 |
a few families of probability distributions [7]. For instance, the Fisher-Rao distance between normal multivariate probability distributions has no explicit expression in general. However, in specific cases, such as normal multivariate probability distributions with the same covariance matrix, a closed form can be ob... | https://arxiv.org/abs/2505.14611v1 |
length of the interval ˇB). Thus, the observation takes the form: ∀ν ∈ B: x(ν) = s_ξ(ν) + n(ν). (5) In this equation, x(ν), s_ξ(ν), and n(ν) are complex numbers. The total vector of the observations x is composed of all the complex variables x(ν) in the observation bandwidth: x = (x(ν))_{ν∈B}. This vector has N_B complex coordinates. | https://arxiv.org/abs/2505.14611v1 |
in equation 7 the parameters are the components of the vector µ_ξ. In this case ξ is a vector with 2N_B components and ∀k = 1,...,2N_B: ξ_k = µ_k. In other words: Ξ = R^{2N_B}. This means that all the signal components s_ξ(ν) vary freely. The signal variations determine a manifold, which we call the L²(B) manifold. This corresponds to al... | https://arxiv.org/abs/2505.14611v1 |
end of the interval. To reconstruct a continuous phase with respect to ν, we can perform phase unwrapping. We denote the unwrapped phase by ψ̆_ϕ(ν). Consequently, the signal can be expressed as: s_ξ(ν) = ρ_φ(ν)·exp(ı ψ̆_ϕ(ν)). So the notation ψ̆_ϕ(ν) represents the unwrapped phase, which is continuous with respe... | https://arxiv.org/abs/2505.14611v1 |
are not all zero. In itself this property may not always mean much, because the numbers of coordinates of the gradients (P and N−P) are often lower than the number of gradient vectors (N_B). However, what is important in these expressions is that, for the geodesic, the coefficients are expressed as derivatives with respe... | https://arxiv.org/abs/2505.14611v1 |
phase allows the phase to be approximated as precisely as possible. This means that any observed phase function can be described by the model. Thus, the model applies to all signals with known spectrum magnitude and unknown phase. The phase of the model is the value modulo 2π of ψ̆_ϕ(ν) in the interval ]−π, π]. We note:... | https://arxiv.org/abs/2505.14611v1 |
(25) From equations 24 and 25, we derive: d²α/dς² = K/α³ (26) and ∀ν ∈ B: dψ_ϕ/dς(ν) = (1/α²)·c(ν), (27) with K = (1/ω_0) ∑_{ν∈B} (2/γ_0(ν)) (ρ_0(ν))² (c(ν))². The solutions to these geodesic equations are computed in Appendix C. 5.4 Expressions of the Fisher-Rao distance in the L²(B,α) manifold. Theorem 5. The Fisher-Rao distance... | https://arxiv.org/abs/2505.14611v1 |
the interval between −π and π. From equation 20, we expect the following property: d_{L²,B,α}(ξ_1,ξ_2)/d_{L²,B}(ξ_1,ξ_2) ≥ 1. (41) This will be verified in the figures. 5.7 Figures. We use the expressions of the distances from equation 40, with the reference signal-to-noise ratio SNR_1 = 1. Some of the parameters are fixed: N_B = 1000, ν_0 = 0.... | https://arxiv.org/abs/2505.14611v1 |
[Figure] Fig. 7: Comparison of the two distances as functions of B∆τ. The lowest distance is d_{L²,B}(ξ_1,ξ_2), shown with a dashed line. B = 0.25, ∆ψ_0 = 0, γ = 1. [Figure] Fig. 8: Ratio of the two distances as a function of B∆τ. B = 0.... | https://arxiv.org/abs/2505.14611v1 |
the signal to noise ratio. Another property is that the Fisher-Rao distance between two signals varies as a function of the ratio of their energies. In fact, the Fisher-Rao distance between two signals is controlled by the ratio of their respective energies. Additionally, the difference in the phase spectrum of two si... | https://arxiv.org/abs/2505.14611v1 |
is not zero: ∀u = 1,...,P, ∀q′,r′ = P+1,...,N: Γ_{q′r′,u} := g_{uv} Γ^v_{q′r′} = (1/2)(−∂_u g_{q′r′}), and also: ∀u = 1,...,P, ∀q,q′ = P+1,...,N: Γ_{uq′,q} := g_{qr} Γ^r_{uq′} = (1/2)(∂_u g_{q′q}), (B3) and: ∀u = 1,...,P, ∀q,q′ = P+1,...,N: Γ_{q′u,q} := g_{qr} Γ^r_{q′u} = (1/2)(∂_u g_{qq′}). (B4) We remark that equation B4 can be deduced from equation B3, if we consider the symmetry: ∀i,j,k = 1,...,N: Γ^k_{ij} = Γ^k_... | https://arxiv.org/abs/2505.14611v1 |
with the Fisher-Rao distance in the L²(B) manifold (Mahalanobis distance): d_{L²,B}(ξ_1,ξ_2) = √(∑_{ν∈B} (2/γ_0(ν)) (ρ_0(ν))² ((α_2)² + (α_1)² − 2α_2·α_1·cos(∆ψ(ν)))) (D32), d_{L²,B}(ξ_1,ξ_2) = √ω_0 · √((α_2)² + (α_1)² − 2α_2·α_1·(1/ω_0) ∑_{ν∈B} (2/γ_0(ν)) (ρ_0(ν))² cos(∆ψ(ν))). (D33) The ratio b... | https://arxiv.org/abs/2505.14611v1 |
arXiv:2505.15543v1 [math.ST] 21 May 2025. Submitted to Bernoulli. Heavy-tailed and Horseshoe priors for regression and sparse Besov rates. SERGIOS AGAPIOU¹, ISMAËL CASTILLO², PAUL EGELS³. ¹Department of Mathematics and Statistics, University of Cyprus, Nicosia, Cyprus. E-mail: agapiou.sergios@ucy.ac.cy. ²Sorbonne Université, LPS... | https://arxiv.org/abs/2505.15543v1 |
statistics and machine learning are Gaussian processes [42] (henceforth GPs). For a number of function classes, in particular ones for which the signal's smoothness is 'homogeneous' over the considered input space, one can show that GPs achieve optimal posterior contraction rates, provided their parameters are ... | https://arxiv.org/abs/2505.15543v1 |
to attain the minimax rate in mean square loss, over both classes of homogeneously smooth functions as well as classes permitting spatial inhomogeneities, see [3] and the recent preprint [21]. To make these priors adaptive, it is still necessary, however, and similar to GPs as discussed above, to draw at least one extr... | https://arxiv.org/abs/2505.15543v1 |
in terms of a specific loss, such as an Lp–loss. To exemplify this, let us consider the class of sieve priors with random truncation, namely priors defined as functions expanded on a finite number K of coefficients over a basis, with random coefficients, and where the cut-off K is itself drawn at random. The work [6], fo... | https://arxiv.org/abs/2505.15543v1 |
the following, we write X = X^{(n)} = (X_1, X_2, ...) for the corresponding sequence of observations. Frequentist analysis of posterior distributions. Consider data X generated from the sequence model (2) with true regression function f_0 ∈ L² identified to its coefficients (f_{0,k}) ∈ ℓ². In the following, this will be written as X = X^{(n)} ∼... | https://arxiv.org/abs/2505.15543v1 |
a mixture arising from λ ∼ C⁺(0,1) and f | λ ∼ N(0, τλ²), where C⁺(0,1) is the half-Cauchy distribution with location parameter 0 and scale 1. The Horseshoe distribution HS(τ) has a density h_τ satisfying, for all t ≠ 0 (see [9], Theorem 1.1), (1/((2π)^{3/2} τ)) log(1 + 4τ²/t²) ≤ h_τ(t) ≤ (1/((2π)^{3/2} τ)) log(1 + 2τ²/t²). (8) Horseshoe distributions are o... | https://arxiv.org/abs/2505.15543v1 |
For a suitable choice of orthonormal basis {φ_k}, the set S^β(F) corresponds to a ball of L²–functions with β derivatives in the L²–sense. A more general study of posterior contraction under Lp–losses and Besov smoothness is conducted in Section 3. 2.1. Upper bounds for OT and truncated Horseshoe priors. Our first result pro... | https://arxiv.org/abs/2505.15543v1 |
polynomially decaying scaling parameters, for some α > 0, σ_k = k^{−1/2−α}, for all k ≥ 1. (11) In [1], it was shown that this prior leads to adaptation in the over-smoothing case, that is, if α ≥ β, the posterior contraction rate is the minimax rate over S^β(F) (up to log factors). If, on the contrary, it turns out that α < β, the... | https://arxiv.org/abs/2505.15543v1 |
of freedom, the prior regularity condition becomes empty. 3. Sparse Besov rates using heavy-tailed priors. In this section we consider the problem of estimating, from the Bayesian nonparametrics point of view, a Besov function in Lp′–losses for p′ ≥ 1 in the white noise model, again understood as projected into sequence sp... | https://arxiv.org/abs/2505.15543v1 |
ensures that B^s_{pq} ⊂ Lp′ ∩ L², so that the problem of estimating the true unknown function f_0 ∈ B^s_{pq} in Lp′–norm is well-defined (also, recall that we only consider positive smoothness indices s > 0): s > (1/p − 1/(p′∨2))_+. (19) Minimax rates in this sequence setting and Besov smoothness were derived by Donoho and Johnstone [27] (ann... | https://arxiv.org/abs/2505.15543v1 |
estimate in the Lp′-norm exhibit highly localized irregularities, corresponding to only a few wavelet coefficients of large magnitude (hence the term "sparse"), which makes the statistical problem significantly harder. Theorem 3. Let 0 < s < S, p,q ∈ [1,∞], 1 ≤ p′ < ∞, F > 0 and suppose (19). Consider observations from model (18... | https://arxiv.org/abs/2505.15543v1 |
and illustrate our theory. Additional simulations illustrating the lower bound obtained in Theorem 2 are presented in Appendix A [2]. 4.1. White noise model regression under OT and Horseshoe priors. We consider the white noise regression model (1), expanded in the orthonormal basis φ_k(t) = √2 cos(π(k−1/2)t), leading to th... | https://arxiv.org/abs/2505.15543v1 |
they tend to preserve the data. Since for this prior all coefficients (even in low frequencies) correspond to a small scaling τ = 1/n, all observations in frequencies with small signal (how small depends only on n, and is independent of the frequency) are set very close to 0. As a result there is very little variance ... | https://arxiv.org/abs/2505.15543v1 |
Lp′–error of the posterior means, hence after averaging we estimate the error E_{f_0} ||f̂ − f_0||_{p′}, where f̂ is the posterior mean. The same type of error is computed for f̂ being the thresholding estimator. The second type of error, computed only in the two Bayesian settings, estimates E_{f_0} E^{Π[·|X]} ||f − f_0||_{p′}, where the inner expec... | https://arxiv.org/abs/2505.15543v1 |
wavelet coefficients only at the level j = 2i. Recalling the definition of the B^{3/2}_{1∞}–norm from Section 3, this means that each function has B^{3/2}_{1∞}–norm equal to 2^{2i} ∑_{k=0}^{2^{2i}−1} |f_{2i,k}|, i = 1,...,4. For k = 0,...,2^{2i}−1, we choose f_{2i,k} = 20·2^{−2i} w_{2i,k}, where the |w_{2i,k}| sum to 1. In this way, we ensure that each ||f^{(i)}_0||_{B^{3/2}_{1∞}}... | https://arxiv.org/abs/2505.15543v1 |
decrease as above for σ_k (slightly faster than any polynomial in k) is really needed in order to achieve full adaptation, since we prove that the HT(α) prior only achieves one-sided adaptation in the oversmoothing case α ≥ β, as also seen in our simulation study. The present paper also opens the door to the investigation o... | https://arxiv.org/abs/2505.15543v1 |
on submodels, as would be the case for model-selection type priors such as sieve priors with random cut-off or spike-and-slab priors. 6. Proofs of the results of Section 2. 6.1. Proof of Theorem 1. For simplicity we focus on the case ν = 1 for the OT scalings (6), the proof being similar for any other ν > 0. Also, part of t... | https://arxiv.org/abs/2505.15543v1 |
∑_{K_n<k≤n} n² σ_k √(M v_n) z_k + (1/√(M v_n)) ∑_{ℓ≥1} ∑_{ℓn<k≤(ℓ+1)n} (n(ℓ+1)²)² σ_k z_k^{−1/2} ≲ (n^{7/2} log n/√v_n) e^{−C log² n} + (n^{7/2}/√v_n) ∑_{ℓ≥1} (ℓ+1)⁴ √(ℓ+1) log((ℓ+1)n) e^{−log²(ℓn)}. Taking v_n = (log^δ n) K_n^{−2β} with δ > δ′ large enough, the sums in the last display all go to 0 when n → ∞. Finally from Lemma 4 we have P_{f_0}(A^c_n) → 0, which concludes the proof. 6.2. Proof of The... | https://arxiv.org/abs/2505.15543v1 |
non-negative real number such that |X| ≤ x and σ ≲ x, we have ∫ |θ|^m ϕ(√n(X−θ)) h(θ/σ) dθ ≳ σ^{m+1} ϕ(√n x). Proof. By symmetry of both h and ϕ, it is enough to focus on the case X ≥ 0. By restricting the domain of integration to the set [X−x, X+x], the integral of interest is greater than ϕ(√n x) ∫_{X−x}^{X+x} |θ|^m h(θ/σ) dθ. By assumption, X ≤ x, th... | https://arxiv.org/abs/2505.15543v1 |
, A. (2024). Adaptive inference over Besov spaces in the white noise model using p-exponential priors. Bernoulli 30, 2275–2300. MR4746608. [5] AGAPIOU, S. and WANG, S. (2024). Laplace priors and spatial inhomogeneity in Bayesian inverse problems. Bernoulli 30, 878–910. MR4699538. [6] ARBEL, J., GAYRAUD, G. and ROU... | https://arxiv.org/abs/2505.15543v1 |
, D. L., JOHNSTONE, I. M., KERKYACHARIAN, G. and PICARD, D. (1995). Wavelet shrinkage: asymptopia? J. Roy. Statist. Soc. Ser. B 57, 301–369. With discussion and a reply by the authors. MR1323344. [26] DONOHO, D. L., JOHNSTONE, I. M., KERKYACHARIAN, G. and PICARD, D. (1996). Density estimation by wavelet thre... | https://arxiv.org/abs/2505.15543v1 |
Griffin, Philip J. Brown, Chris Hans, Luis R. Pericchi, Christian P. Robert and Julyan Arbel. MR3204017. [42] RASMUSSEN, C. E. and WILLIAMS, C. K. I. (2006). Gaussian Processes for Machine Learning. Adaptive Computation and Machine Learning. MIT Press, Cambridge, MA. MR2514435. [43] ROČKOVÁ, V. and ROUSSEAU... | https://arxiv.org/abs/2505.15543v1 |
(cf. [1, Theorem 1]), while the two undersmoothing priors lead to too rough posterior means and perform very poorly (see Theorem 2). Figure A.6. White noise model: true function (black), posterior means (blue), 95% credible regions (grey), for n = 2×10² (top row) and n = 2×... | https://arxiv.org/abs/2505.15543v1 |
right hand side of the second to last display as ∑_{{k>K̃_α}∩N^c_n} σ²_k P_{f_0}(A_k) ≳ ∑_{{k>K̃_α}∩N^c_n} σ²_k. Recalling |N_n| = N_n ≤ K̃_α and using the fact that (σ_k) is non-increasing, we get ∑_{{k>K̃_α}∩N^c_n} σ²_k ≥ ∑_{k>2K̃_α} σ²_k ≳ K_α^{−2α}, where for the last inequality we use K̃_α ≍ K_α. Finally, since α < β, as n is large enough, E_{f_0} ∫ ∥f−f_0∥²_2 dΠ(f|X) ≳ K^{−2... | https://arxiv.org/abs/2505.15543v1 |
σ_j^{−1} M′ v_n 2^{−j} ≥ 1, so we use assumption (H3) and obtain H(σ_j^{−1} M′ v_n 2^{−j}) ≲ 2^j σ_j/(M′ v_n). To define the event A_n, first consider the sets of indices (the set Λ_n will be useful further down the proof for indices j < J_1): Λ_n := {(j,k) : J_0 < j < J_1, |f_{0,jk}| ≤ 1/√n} and I := Λ_n ∪ {(j,k) : j ≥ J_1}. (D.6) Now consider for any l ≥ 0: A_{jk,l} := {|X_{jk}| ≤ r... | https://arxiv.org/abs/2505.15543v1 |
last display, which leads to E_{f_0}[∫ |f_{jk} − X_{jk}|^{p′} dΠ(f|X)] ≲ n^{−p′/2} L_n. (D.13) Recall (D.1), the definition of J_0; plugging the previous bound in (D.12) leads to A ≲ L_n n^{−p′/2} ∑_{j≤J_0} 2^{jp′/2} ≲ L_n (2^{J_0}/n)^{p′/2} ≲ L_n n^{−rp′}. This last bound shows that A = O(v^{p′}_n) for δ > 0 large enough, ensuring that, as n → ∞, E_{f_0} Π({f : ∑_{j≤J_0} ∑_k 2^{j(p′/2−1)} |f_{jk} − f_0... | https://arxiv.org/abs/2505.15543v1 |
(12), we have ||g||_p = ∥∑_{j∈J} ∑_{k∈K_j} g_{jk} ψ_{jk}∥_p ≤ ∑_{j∈J} ∥∑_{k∈K_j} g_{jk} ψ_{jk}∥_p ≲ ∑_{j∈J} 2^{j(1/2−1/p)} ||g_{j·}||_p. Now simply apply Hölder's inequality with exponents p and p/(p−1) to get the desired result. Lemma E.3. Let A_n be the event defined in (D.7) and assume f_0 ∈ B^s_{pq}(F), with p,q ≥ 1 and s,F > 0. Then as n → ∞, we have P_{f_0}(A^c_n) → 0. Proof. We ... | https://arxiv.org/abs/2505.15543v1 |
arXiv:2505.15688v1 [cs.LG] 21 May 2025. A packing lemma for VCN_k-dimension and learning high-dimensional data. Leonardo N. Coregliano, Maryanthe Malliaris. May 22, 2025. Abstract: Recently, the authors introduced the theory of high-arity PAC learning, which is well-suited for learning graphs, hypergraphs and relational stru... | https://arxiv.org/abs/2505.15688v1 |
PAC paper, by closing an equivalence of the high-arity PAC theory, as we will explain below. 2 Technical preliminaries. In the PAC learning theory of Valiant [Val84] (see also [SSBD14] for a more thorough and modern introduction to the topic), an adversary picks a function F: X → Y from some family H ⊆ Y^X and a probabi... | https://arxiv.org/abs/2505.15688v1 |
theory; however, this yields a rather unnatural learning framework: the adversary is picking a measure µ over X×X and revealing several pairs (x_i, x′_i) drawn i.i.d. from µ along with their adjacency F(x_i, x′_i). Instead, the setup of 2-PAC learning is much more natural: the adversary picks a measure µ over X, draws m vertices (... | https://arxiv.org/abs/2505.15688v1 |
Haussler packing). Recall, the classic equivalence of the Haussler packing property with finiteness of VC-dimension can be proved by direct implications between the properties². In the present work, we find high-arity proofs of the implications k-PAC learnability ⟹ k-ary Haussler packing property and k-ary Haussler pa... | https://arxiv.org/abs/2505.15688v1 |
at most 1 hypotheses. We direct the curious reader to Appendix A for the general version of the same definitions. Definition 3.1 ([CM24, §3], simplified). By a Borel space, we mean a standard Borel space, i.e., a measurable space that is Borel-isomorphic to a Polish space when equipped with the σ-algebra of Borel sets. ... | https://arxiv.org/abs/2505.15688v1 |
that for every a ∈ A we have F_U(a) = f_{1[a∈U]}(a) = f_0(a) if a ∉ U, and f_1(a) if a ∈ U. 2. The Natarajan dimension of F is defined as Nat(F) := sup{|A| : A ⊆ X ∧ F Natarajan-shatters A}. 4 Simplified versions of new high-arity concepts. In this section, we formalize the high-arity version of the Haussler packing property in th... | https://arxiv.org/abs/2505.15688v1 |
with a k-PAC learner A, then H has the Haussler packing property with associated function m^{HP}_{H,ℓ}(ε) := min_{δ∈(0,1)} ⌈γ_H(⌈m^{PAC}_{H,ℓ,A}(ε/2,δ)⌉)/(1−δ)⌉ − 2 ≤ min_{δ∈(0,1)} ⌈|Λ|^{(⌈m^{PAC}_{H,ℓ,A}(ε/2,δ)⌉)_k}/(1−δ)⌉ − 2, (5.1) where (m)_k := m(m−1)···(m−k+1) denotes the falling factorial. Proof. First note th... | https://arxiv.org/abs/2505.15688v1 |
show that if x ∈ Ω^{k−1}, then Nat(H(x)) ≤ d. In turn, it suffices to show that if V ⊆ Ω is a (finite) set that is Natarajan-shattered by H(x) and n := |V|, then n ≤ d. Let µ ∈ Pr(Ω) be given by µ := (1/k)(ν_V + ∑_{j=1}^{k−1} δ_{x_j}), where ν_V is the uniform probability measure on V and δ_t is the Dirac delta concentrated on t. Since V is N... | https://arxiv.org/abs/2505.15688v1 |
Nikola Sandrić and Stjepan Šebek. Learning from non-irreducible Markov chains. J. Math. Anal. Appl., 523(2): Paper No. 127049, 14, 2023. [SW10] Hongwei Sun and Qiang Wu. Regularized least square regression with dependent samples. Adv. Comput. Math., 32(2):175–189, 2010. [Val84] L. G. Valiant. A theory of the le... | https://arxiv.org/abs/2505.15688v1 |
Ω to Λ, denoted F_k(Ω,Λ), is the set of (Borel) measurable functions from E_k(Ω) to Λ. 2. [3.2.2] A k-ary hypothesis class is a subset H of F_k(Ω,Λ) equipped with a σ-algebra such that: i. the evaluation map ev: H×E_k(Ω) → Λ given by ev(H,x) := H(x) is measurable; [Figure] Non-partite finite VCN k-dimension; Non-partite agnostically k-P... | https://arxiv.org/abs/2505.15688v1 |
Ω be a Borel template, Λ be a non-empty Borel space and H ⊆ F_k(Ω,Λ) be a k-ary hypothesis class. 1. [3.8.2] A (k-ary) learning algorithm for H is a measurable function A: ⋃_{m∈N} (E_m(Ω) × Λ^{([m])_k}) → H. 2. [3.8.3] We say that H is k-PAC learnable with respect to a k-ary loss function ℓ if there exist ... | https://arxiv.org/abs/2505.15688v1 |
3. [4.7.3] A k-partite loss function is: bounded if ∥ℓ∥_∞ < ∞; separated if s(ℓ) > 0 and ℓ(x,y,y) = 0 for every x ∈ E_1(Ω) and every y ∈ Λ. 4. [4.7.4] For a k-partite loss function ℓ, k-partite hypotheses F,H ∈ F_k(Ω,Λ) and a probability k-partite template µ ∈ Pr(Ω), the total loss of H with respect to µ, F and ℓ is L_{µ,F,ℓ}(H) := E_{x∼µ}[... | https://arxiv.org/abs/2505.15688v1 |
use the lemma below, which says that separated and bounded loss functions make the total loss satisfy a weak version of the triangle inequality with a rescaling factor. This justifies the usage of the name "(almost) metric" for losses that are either metric or both separated and bounded in Figure 1. Finally, we point out tha... | https://arxiv.org/abs/2505.15688v1 |
For each i ∈ [m+1], define the set C_i := {x ∈ E_{m̃}(Ω) | ∀y ∈ Y(x), L_{µ,F_i,ℓ}(A(x,y)) > s(ℓ)·ε/(2·∥ℓ∥_∞)}. Note that by taking y := (F_i)^*_m(x) ∈ Y(x) and using the fact that ℓ is separated so that F_i is realizable in H w.r.t. ℓ... | https://arxiv.org/abs/2505.15688v1 |
concentrated on x_f. • For each f ∈ R, µ_f is the Dirac delta concentrated on x′_f. • µ_{1{a}} is the uniform measure on V′. Since V′ is Natarajan-shattered by {H(x,x′,−) | H ∈ H}, there exist functions f_0, f_1: V′ → Λ with f_0(v) ≠ f_1(v) for every v ∈ V′ and there exists a family {F_U | U ⊆ V′} ⊆ H such that for every U ⊆ V′ and every v ∈ V′, we have F_U(x,x′,v)... | https://arxiv.org/abs/2505.15688v1 |
with (A.10) implies ⋃_{H∈H′} B′(H) = 2^{V′}. Since ε·k^k/(s(ℓ)·k!) < 1/2, by Lemma 5.2, we get n ≤ log₂|H′|/(1 − h₂(ε·k^k·n/(s(ℓ)·k!))) ≤ log₂ m/(1 − h₂(ε·k^k·n/(s(ℓ)·k!))), which yields n ≤ d as n is an integer. B The partization operation. In this section, we recall the definition of the partization operation from [CM24, Definition 4.20]... | https://arxiv.org/abs/2505.15688v1 |
arXiv:2505.15794v1 [math.ST] 21 May 2025. One-sample location tests based on center-outward signs and ranks. Daniel Hlubinka and Šárka Hudecová. Abstract: A multivariate one-sample location test based on the center-outward ranks and signs is considered, and two different testing procedures are proposed for centrally sym... | https://arxiv.org/abs/2505.15794v1 |
have not been studied in the literature. The regression framework from Hallin et al. (2022) is not directly applicable without some additional constraints, because the c-o ranks and signs are invariant with respect to a shift. Inspired by the univariate one-sample Wilcoxon test, one possible strategy is to assume som... | https://arxiv.org/abs/2505.15794v1 |
to satisfy the condition: The discrete uniform distribution on G_n converges weakly to U_d as n → ∞. One possibility is to take G_n = {g_ij, i = 1,...,n_R, j = 1,...,n_S}, for g_ij = r_i u_j, (1) where n = n_R n_S, r_i = i/(n_R + 1), and u_j are unit vectors, distributed approximately regularly over S_d, or a union of G_n from (1) a... | https://arxiv.org/abs/2505.15794v1 |
is the price we pay for the independence of the two samples X⁺ and X⁻. The main idea of the test with random signs is to test the hypothesis H_0: X =_d −X using the two-sample location test of Hallin et al. (2022) on the independent samples X⁺ and X⁻. The following proposition follows from (Hallin et al., 2022, Proposition 3.2... | https://arxiv.org/abs/2505.15794v1 |
is the well-known ARE of the one-sample Wilcoxon test with respect to the t-test. For d = 2, we get ARE 0.985, and the values decrease with increasing dimension d. For instance, for d = 10, ARE = 0.907. 3.2 Symmetrized sample. Consider now the set of size 2... | https://arxiv.org/abs/2505.15794v1 |
² →_D χ²_d, and 3d∥T_{F,a}∥² = (3d/n)‖∑_{i=1}^{n} F^{(n)}_{±,a}(X_i)‖² →_D χ²_d as n → ∞. Proof. Both statistics are special cases of T_a = (1/√n) ∑_{i=1}^{n} J(F^{(n)}_{±,a}(X_i)) = (1/√n) ∑_{i=1}^{n} J(∥F^{(n)}_{±,a}(X_i)∥) F^{(n)}_{±,a}(X_i)/∥F^{(n)}_{±,a}(X_i)∥, (4) where the score f... | https://arxiv.org/abs/2505.15794v1 |
from (5) that for X_i > 0 we have R_{±,a}(X_i) = (R_a(X_i) − (2n+1)/2) + 1/2 = (n + R_i − (2n+1)/2) + 1/2 = R_i = R_{±,a}(−X_i), and the same equality is shown analogously for X_i < 0. Further, S_{±,a}(X_i) = I[X_i > 0] − I[X_i < 0] = −S_{±,a}(−X_i), which finishes the proof. □ Since the distribution of X_i is continuous, we can assume without... | https://arxiv.org/abs/2505.15794v1 |
grid does not satisfy the factorization g = r_i u_j mentioned in Section 2), (H) grid with n_R spheres and n_S directions which are constructed from a Halton sequence of n_S points in R^{d−1}. For a given n_S and dimension d, the Halton sequence of points in R^{d−1} was generated using function halton from package randtoolbox, ... | https://arxiv.org/abs/2505.15794v1 |
not surprising that Hotelling's T² test leads to the largest power for the normal distribution. In this case, the power of c-o tests is generally comparable for d = 2 and for d = 4 with shift in direction s = (1,1,...,1)⊤. For d = 6, larger differences are observed, but they decrease with an increasing sample size n. Amo... | https://arxiv.org/abs/2505.15794v1 |
0)⊤ (dir=2, second row) for sample sizes n ∈ {150, 300} and various δ ≥ 0. Table (columns: d, n, RAN-R1, RAN-R2, RAN-H, SYM-R2, SYM-H, HOT, SPAT). Normal dist.: d=2, n=150: 0.040, 0.036, 0.056, 0.046, 0.050, 0.062, 0.052; d=2, n=300: 0.044, 0.040, 0.044, 0.044, 0.038, 0.060, 0.050; d=4, n=150: 0.036, 0.042, 0.048, 0.036, 0.040, 0.050, 0.048; d=4, n=300: 0.0... | https://arxiv.org/abs/2505.15794v1 |
to be quite robust with respect to small deviations from the symmetry, as they also reject the null in approximately 5% of cases for $\delta = 0$, and the proportion of rejections grows with $\delta$ in the expected way.

[Figure: empirical power curves, one panel per alpha ∈ {1, 3, 5}; horizontal axis $\delta \in [0, 0.3]$, vertical axis proportion of rejections (0.25 to 1.00).]
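Such rejection proportions are estimated by Monte Carlo. A minimal sketch for the Hotelling $T^2$ baseline (the c-o tests would additionally need the transport-based ranks; sample sizes, shift direction, and seed are illustrative):

```python
import numpy as np
from scipy.stats import f as f_dist

def hotelling_t2_pvalue(X):
    """One-sample Hotelling T^2 test of H0: mean = 0."""
    n, d = X.shape
    xbar = X.mean(axis=0)
    S = np.cov(X, rowvar=False)                  # sample covariance
    t2 = n * xbar @ np.linalg.solve(S, xbar)     # T^2 statistic
    stat = (n - d) / (d * (n - 1)) * t2          # ~ F(d, n - d) under H0
    return f_dist.sf(stat, d, n - d)

def empirical_power(delta, n=150, d=2, reps=500, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    s = np.ones(d) / np.sqrt(d)                  # unit shift direction (1,...,1)
    rejections = 0
    for _ in range(reps):
        X = rng.standard_normal((n, d)) + delta * s
        rejections += hotelling_t2_pvalue(X) < alpha
    return rejections / reps
```

With $\delta = 0$ the estimate should hover near the nominal 5% level, and it increases toward 1 as $\delta$ grows.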
and Šidák, Z. (1967). Theory of Rank Tests. Academic Press, New York.
Hallin, M. (2022). Measure transportation and statistical decision theory. Annual Review of Statistics and Its Application, 9:401–424.
Hallin, M., del Barrio, E., Cuesta-Albertos, J., and Matrán, C. (2021). Distribution and quantile functions, ranks and signs in dimension d: a measure transportation approach.
arXiv:2505.15969v1 [math.OC] 21 May 2025

Grassmann and Flag Varieties in Linear Algebra, Optimization, and Statistics: An Algebraic Perspective

Hannah Friedman and Serkan Hoşten

Abstract. Grassmann and flag varieties lead many lives in pure and applied mathematics. Here we focus on the algebraic complexity of solving v
compute multiple eigenvectors at once, where we replace the vector $z$ in (2) with an $n \times k$ matrix $Z$ consisting of orthonormal columns. We solve a quadratic optimization problem over the Stiefel manifold called the multi-eigenvector problem:
$$\min / \max_{Z^T Z = \mathrm{Id}_k} \operatorname{trace}(Z^T A Z). \quad (3)$$
The critical points of this problem are the sets of
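Problem (3) can be illustrated numerically: for symmetric $A$, the maximum of $\operatorname{trace}(Z^T A Z)$ over orthonormal $Z$ is the sum of the $k$ largest eigenvalues, attained when the columns of $Z$ span the corresponding eigenspace. A quick sketch (the particular $A$ is random and illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 5, 2
B = rng.standard_normal((n, n))
A = (B + B.T) / 2                             # symmetric matrix

eigvals, eigvecs = np.linalg.eigh(A)          # eigenvalues in ascending order
Z_top = eigvecs[:, -k:]                       # eigenvectors for the k largest
best = np.trace(Z_top.T @ A @ Z_top)
print(np.isclose(best, eigvals[-k:].sum()))   # True

# any other orthonormal Z gives a value no larger than `best`
Q, _ = np.linalg.qr(rng.standard_normal((n, k)))
print(np.trace(Q.T @ A @ Q) <= best + 1e-12)  # True
```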
the singular value decomposition of A. In the first case, we give a formula for the finite number of critical points, which correspond to pairs of left and right singular vectors of A (Theorem 5.1). The second problem can be naturally formulated over a product of Grassmannians, and we give a complete description of the c
flag manifold, which is
$$\binom{n}{2} - \binom{n-k}{2} = nk - \binom{k+1}{2}$$
by Proposition 2.1. The affine variety defined by $I_{St}$ is precisely $V_{k,n}$. Since $I_{St}$ has $\binom{k+1}{2}$ generators, it is a complete intersection. The special orthogonal group $\mathrm{SO}(n)$ acts on $V_{k,n}$ transitively for $k < n$, and this shows that $V_{k,n}$ is irreducible in this case. The variety
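The dimension count can be checked by expanding the binomial coefficients:

```latex
\binom{n}{2} - \binom{n-k}{2}
  = \frac{n(n-1)}{2} - \frac{(n-k)(n-k-1)}{2}
  = \frac{2nk - k^2 - k}{2}
  = nk - \frac{k(k+1)}{2}
  = nk - \binom{k+1}{2}.
```

This matches the complete-intersection statement: the ambient space of $n \times k$ matrices has dimension $nk$, and $I_{St}$ imposes $\binom{k+1}{2}$ equations.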
pair of $1 \le s < t \le r$, and where the sum is over all $(i', j')$ obtained by exchanging the first $m$ of the $j$-subscripts with $m$ of the $i$-subscripts while preserving their order; and given a permutation $\sigma \in S_{k_s}$,
$$x_{i_1,\dots,i_{k_s}} = \operatorname{sgn}(\sigma)\, x_{\sigma(i_1),\dots,\sigma(i_{k_s})}.$$
We now turn to realizations of Grassmann and flag varieties in applied settings where t
$k_i\}$ or $\ell, j \in \{k_i + 1, \dots, n\}$. Then
$$\frac{\partial (P_i^2 - P_i)_{\ell j}}{\partial (P_{i'})_{\ell' j'}}\bigg|_{E_k} = \begin{cases} \pm 1 & i' = i,\ \ell' = \ell,\ j' = j, \\ 0 & \text{else}, \end{cases}$$
where the sign is positive if $\ell, j \le k_i$ and negative if $k_i < \ell, j$. There are $\binom{n+1}{2} - k_i(n - k_i)$ choices for such pairs $\{\ell, j\}$. Next fix $i \in \{2, \dots, r\}$, let $j \in \{1, \dots, k_{i-1}\}$, and let $\ell \in \{k_i + 1, \dots, n\}$. Then ∂(P_i P_{i−1} −
the right-hand side ideal is smooth at every closed point. This proves smoothness of the variety and primeness of the ideal. The orthogonal group $\mathrm{O}(n)$ acts transitively on closed points by conjugation. Thus every closed point is conjugate to the matrix $X_0 = \operatorname{diag}(c_1, \dots, c_n)$. Hence, we check smoothness at $X_0$ using
isospectral coordinates. We append a column to $Z$ to form a $3 \times 3$ orthogonal matrix $\tilde{Z}$ and fix the generic vector $c = (c_1, c_2, c_3)$. The isospectral coordinates for $Z$ with respect to $c$ are given by the matrix $S = \tilde{Z}\operatorname{diag}(c_1, c_2, c_3)\tilde{Z}^T$, whose eigenvalues are $c_1, c_2, c_3$ and whose eigenvectors are the columns of $\tilde{Z}$. This matrix satisfies
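A quick numerical sketch of this construction (the particular $Z$ and $c$ are illustrative):

```python
import numpy as np

# Z: a 3x2 matrix with orthonormal columns; append a third orthonormal column
Z = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
z3 = np.cross(Z[:, 0], Z[:, 1])          # completes Z to an orthogonal matrix
Z_tilde = np.column_stack([Z, z3])

c = np.array([3.0, 2.0, 1.0])            # generic vector c = (c1, c2, c3)
S = Z_tilde @ np.diag(c) @ Z_tilde.T     # isospectral coordinates for Z

# eigenvalues of S are exactly c1, c2, c3; eigenvectors are columns of Z_tilde
print(np.allclose(np.sort(np.linalg.eigvalsh(S)), np.sort(c)))   # True
```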
The Multi-Eigenvector Problem

In this section, we give three different formulations of the multi-eigenvector problem (3). The first two are formulated as optimization problems over a Grassmannian in Stiefel and projection coordinates. The third one utilizes a flag variety in its isospectral formulation. In all three cases