https://arxiv.org/abs/2505.14458v1
ISSN: 1557-9654. DOI: 10.1109/18.135648.
[31] Allan Gut. Probability: a graduate course. Vol. 5. Springer, 2005.
[32] John Hajnal and Maurice S. Bartlett. “Weak ergodicity in non-homogeneous Markov chains”. In: Mathematical Proceedings of the Cambridge Philosophical Society. Vol. 54. Cambridge University Press, 1958, pp. 233–246.
[33] Onésimo Hernández-Lerma, Raúl Montes-de-Oca, and Rolando Cavazos-Cadena. “Recurrence conditions for Markov decision processes with Borel state space: a survey”. In: Annals of Operations Research 28.1 (1991), pp. 29–46.
[34] Rodrigo Toro Icarte et al. “Using Reward Machines for High-Level Task Specification and Decomposition in Reinforcement Learning”. In: Proceedings of the 35th International Conference on Machine Learning. PMLR, July 2018, pp. 2107–2116.
[35] Leonid Aryeh Kontorovich, Kavita Ramanan, et al. “Concentration inequalities for dependent random variables via the martingale method”. In: Annals of Probability 36.6 (2008), pp. 2126–2158.
[36] Vikram Krishnamurthy. Partially Observed Markov Decision Processes: From Filtering to Controlled Sensing. Cambridge: Cambridge University Press, 2016. ISBN: 978-1-107-13460-7. DOI: 10.1017/CBO9781316471104.
[37] Claire Lacour. “Adaptive estimation of the transition density of a Markov chain”. In: Annales de l'Institut Henri Poincaré (B) Probability and Statistics 43.5 (Sept. 2007), pp. 571–597. ISSN: 0246-0203. DOI: 10.1016/j.anihpb.2006.09.003.
[38] Sergey Levine et al. “Offline reinforcement learning: Tutorial, review, and perspectives on open problems”. In: arXiv preprint arXiv:2005.01643 (2020).
[39] Lennart Ljung. System identification (2nd ed.): theory for the user. NJ, USA: Prentice Hall PTR, 1999. ISBN: 978-0-13-656695-3.
[40] Matthias Löffler and Antoine Picard. Spectral thresholding for the estimation of Markov chain transition operators. Oct. 2021. DOI: 10.48550/arXiv.1808.08153.
[41] Horia Mania, Michael I. Jordan, and Benjamin Recht.
“Active learning for nonlinear system identification with guarantees”. In: arXiv preprint arXiv:2006.10277 (2020).
[42] Pascal Massart. Concentration Inequalities and Model Selection. Ed. by Jean Picard. Vol. 1896. Lecture Notes in Mathematics. Berlin, Heidelberg: Springer, 2007. ISBN: 978-3-540-48497-4. DOI: 10.1007/978-3-540-48503-2.
[43] Florence Merlevède, Magda Peligrad, and Costel Peligrad. “On the local limit theorems for lower psi-mixing Markov chains”. In: Latin American Journal of Probability and Mathematical Statistics 19.1 (2022), p. 1103. ISSN: 1980-0436. DOI: 10.30757/ALEA.v19-45.
[44] Florence Merlevède, Magda Peligrad, and Costel Peligrad. “On the local limit theorems for psi-mixing Markov chains”. In: Latin American Journal of Probability and Mathematical Statistics 18.1 (2021), p. 1221. ISSN: 1980-0436. DOI: 10.30757/ALEA.v18-45.
[45] Florence Merlevède, Magda Peligrad, Emmanuel Rio, et al. “Bernstein inequality and moderate deviations under strong mixing conditions”. In: High dimensional probability V: the Luminy volume 5 (2009), pp. 273–292.
[46] Sean P. Meyn and Richard L. Tweedie. Markov chains and stochastic stability. Springer Science & Business Media, 2012.
[47] Mirco Mutti, Riccardo De Santi, and Marcello Restelli. “The Importance of Non-Markovianity in Maximum State Entropy Exploration”. In: arXiv preprint arXiv:2202.03060 (2022).
[48] David Pollard. A User's Guide to Measure Theoretic Probability. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge: Cambridge University Press, 2001. ISBN: 978-0-521-80242-0. DOI: 10.1017/CBO9780511811555.
[49] Emmanuel Rio. Asymptotic Theory of Weakly Dependent Random Processes. Vol. 80. Probability Theory and Stochastic Modelling. Berlin, Heidelberg: Springer, 2017. ISBN: 978-3-662-54322-1, 978-3-662-54323-8. DOI: 10.1007/978-3-662-54323-8.
[50] Sheldon M. Ross. Stochastic Processes. Wiley, 1983. ISBN: 978-0-471-09942-0.
[51] Mathieu Sart. “Density estimation under local differential privacy and Hellinger loss”. In: Bernoulli 29.3 (Aug. 2023), pp. 2318–2341. ISSN: 1350-7265. DOI: 10.3150/22-BEJ1543.
[52] Mathieu Sart. “Estimation of the transition density of a Markov chain”. In: Annales de l'IHP Probabilités et statistiques. Vol. 50. 2014, pp. 1028–1068.
[53] Andrew J. Schaefer et al. “Modeling Medical Treatment Using Markov Decision Processes”. In: Operations Research and Health Care: A Handbook of Methods and Applications. Ed. by Margaret L. Brandeau, François Sainfort, and William P. Pierskalla. Boston, MA: Springer US, 2004, pp. 593–612. ISBN: 978-1-4020-8066-1. DOI: 10.1007/1-4020-8066-2_23.
[54] Richard F. Serfozo. “Semi-stationary processes”. In: Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete 23.2 (June 1972), pp. 125–132. ISSN: 1432-2064. DOI: 10.1007/BF00532855.
[55] Richard S. Sutton and Andrew G. Barto. Reinforcement learning: An introduction. MIT Press, 2018.
[56] Alexandre B. Tsybakov. Introduction to Nonparametric Estimation. Springer, 2009.
[57] Shengbo Wang et al. On the Foundation of Distributionally Robust Reinforcement Learning. Jan. 2024. DOI: 10.48550/arXiv.2311.09018.
[58] J. Wolfowitz. “Products of Indecomposable, Aperiodic, Stochastic Matrices”. In: Proceedings of the American Mathematical Society 14.5 (1963), pp. 733–737. ISSN: 0002-9939. DOI: 10.2307/2034984.
[59] Jing Yu, Varun Gupta, and Adam Wierman. “Online Adversarial Stabilization of Unknown Linear Time-Varying Systems”. In: 2023 62nd IEEE Conference on Decision and Control (CDC). Dec. 2023, pp. 8320–8327. DOI: 10.1109/CDC49753.2023.10383849.
A Sketch of Proof of Proposition 2

We first prove an auxiliary result comparing two different piecewise constant estimators on two different partitions $m_1$ and $m_2$. The proof of this proposition can be found in Section B.10.

Proposition 10. Let $m_1$ and $m_2$ be two different partitions belonging to $\mathcal{M}_l$ for some $l$, and let $f_1$ and $f_2$ be two piecewise constant functions on the two partitions respectively. Let $\kappa = (2 + 11\sqrt{2})/(2\sqrt{2} - 2)$. Then, for any $\zeta > 0$, it holds with probability at most $\exp\!\left(-n(\operatorname{pen}(m_1) - \operatorname{pen}(m_2))/\kappa - n\zeta\right)$ that
\[
\frac{3}{4}\left(1 - \frac{1}{\sqrt{2}}\right) H^2(s, f_2) + T(f_1, f_2) \le \frac{5}{4}\left(1 + \frac{1}{\sqrt{2}}\right) H^2(s, f_1) + \operatorname{pen}(m_1) + \operatorname{pen}(m_2) + \zeta.
\]

Let $m$ be any partition. Consider the following two cases.

CASE I ($T(\hat{s}_m, \hat{s}_{\hat{m}}) - \operatorname{pen}(\hat{m}) + \operatorname{pen}(m) \ge 0$): If $T(\hat{s}_m, \hat{s}_{\hat{m}}) - \operatorname{pen}(\hat{m}) + \operatorname{pen}(m) \ge 0$, then the conclusion follows readily from Proposition 10 and some algebra.

CASE II ($T(\hat{s}_m, \hat{s}_{\hat{m}}) - \operatorname{pen}(\hat{m}) + \operatorname{pen}(m) \le 0$): We first state the following proposition about dyadic partitions. Its proof uses the tree-like structure of dyadic cuts and can be found in Section B.11.

Proposition 11. Let $\mathcal{M}_l$ be the dyadic partitions of depth $l$ as in Definition 1. Then:

1. $\mathcal{M}_l \subset \mathcal{M}_{l+1}$ for any $l$. Furthermore, $\sum_{m \in \mathcal{M}_\infty} e^{-|m|} \le \sum_{l \ge 0} 2^{l(2d_1 + d_2)} e^{-2^{l(2d_1 + d_2)}} \le 15$, and for any $m \in \mathcal{M}_l$, $|m| \le 2^{l(2d_1 + d_2)}$, where $|m|$ is the cardinality of the partition $m$.

2. If $m \in \mathcal{M}_l \setminus \mathcal{M}_{l'}$, where $l' < l$, then $|m| > l'$.

3. If $K \in m \in \mathcal{M}_l$, then there exist $\{K_1, K_2, \ldots, K_l\} \in \bigcup_{m \in \mathcal{M}_l} m$ such that $K \subset K_i$ for $i \in \{1, \ldots, l\}$.

4. Define $m \vee m'$ as the set of non-empty intersections of $m'$ with the elements of $m$. To be precise,
\[
m \vee m' = \bigcup_{K' \in m'} m \vee K', \tag{A.1}
\]
where $m \vee K'$ is as defined in eq. (2.5). Then, $|m \vee m'| \le 2(|m| + |m'|)$.

The rest of the second case can now be divided into the following three steps.

Step I: Let $m \in \mathcal{M}_l$ be a partition and $K \in m$. Recall from Proposition 11, item 3, that there exist $K_1, \ldots, K_l$ such that $K \subset K_i$. Let $K_i = K^{(1)}_i \times K^{(2)}_i \times K^{(3)}_i$. We define the set $s_m$ to be
\[
s_m := \left\{ \sum_{K \in m} f_K \mathbb{1}_K : f_K \in \bigcup_{i=0}^{l} \left\{ \frac{a}{b\,\mu_I(K^{(2)}_i)\,\mu_\chi(K^{(3)}_i)} : a \in \{0, \ldots, n\},\ b \in \{1, \ldots, n\} \right\} \right\}; \tag{A.2}
\]
observe that $\left\{ \frac{a}{b}\left(\mu_I(K^{(2)}_i)\,\mu_\chi(K^{(3)}_i)\right)^{-1} : i,\ a \in \{0, \ldots, n\},\ b \in \{1, \ldots, n\} \right\}$ is the set of all the piecewise constant functions that can be made with $n$ sample points. We then prove the following result, which is formally stated in Section B.2 as Lemma 15:
\[
\sup_{m' \in \mathcal{M}_l} \left\{ \frac{3}{4}\left(1 - \frac{1}{\sqrt{2}}\right) H^2(\hat{s}_m, \hat{s}_{m'}) + T(\hat{s}_m, \hat{s}_{m'}) - \operatorname{pen}(m') + \operatorname{pen}(m) \right\} \le \gamma(m),
\]
\[
\gamma(m) \le \sup_{f \in s_{m'},\, m' \in \mathcal{M}_l} \left\{ \frac{3}{4}\left(1 - \frac{1}{\sqrt{2}}\right) H^2(\hat{s}_m, f) + T(\hat{s}_m, f) - \operatorname{pen}(m') + 2\operatorname{pen}(m) \right\}.
\]

Step II: Using this lemma, we upper bound the probability of
\[
C H^2(s, \hat{s}) \ge \inf_{m \in \mathcal{M}_l} \left( H^2(s, \hat{s}_m) + \operatorname{pen}(m) \right)
\]
by the probability of
\[
C H^2(s, \hat{s}) \ge \sup_{f \in s_{m'},\, m' \in \mathcal{M}_l} \left\{ \frac{3}{4}\left(1 - \frac{1}{\sqrt{2}}\right) H^2(\hat{s}_{\hat{m}}, f) + T(\hat{s}_{\hat{m}}, f) - \operatorname{pen}(m') + 2\operatorname{pen}(\hat{m}) \right\},
\]
where $s_{m'}$ is as defined in eq. (A.2).

Step III: We produce an upper bound on the preceding probability using Proposition 10 and appropriate union bounds.

B Proofs

B.1 Proof of Proposition 1

Proof. Construct the piecewise constant estimator
\[
\bar{s}_m := \sum_{k \in m} \frac{\sum_{i=0}^{n-1} \mathbb{E}\left[\mathbb{1}_k(X_i, a_i, X_{i+1}) \mid X_i, a_i\right]}{\sum_{i=0}^{n-1} \int_\chi \mathbb{1}_k(X_i, a_i, y)\, d\mu_\chi(y)}\, \mathbb{1}_k.
\]
Observe that by using the triangle inequality, we have
\[
\mathbb{E}\left[H^2(s, \hat{s}_m)\right] \le \mathbb{E}\left[H^2(s, \bar{s}_m)\right] + \mathbb{E}\left[H^2(\bar{s}_m, \hat{s}_m)\right]. \tag{B.1}
\]
We bound each term separately. For the purpose of bounding the first term, we require the following lemma. Let $f$ be an integrable function defined on a domain $\chi_\lambda$ with range $\mathbb{R}$, and let $\lambda$ be a measure on $\chi_\lambda$. We can then adapt Lemma 2 from [10] as:

Lemma 12. For any $m$, a finite partition of a subset $I$ of $\chi_f$, define
\[
\bar{f} := \sum_{k \in m} \left(\frac{\int_k f\, d\lambda}{\lambda(k)}\right) \mathbb{1}_k.
\]
Then $\mathbb{E}\left[H^2_\lambda(f, \bar{f})\right] \le \mathbb{E}\left[2 H^2_\lambda(f, V_m)\right]$, where $H_\lambda$ is the Hellinger distance defined according to the measure $\lambda$.
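Lemma 12 controls the error incurred by replacing $f$ with its cell-averaged histogram $\bar f$. One structural property of this projection, used repeatedly below, is that by construction $\bar f$ preserves the $\lambda$-integral of $f$ over every cell of the partition. The following self-contained Python sketch illustrates the construction on a hypothetical example, taking $\lambda$ to be Lebesgue measure on $[0,1]$, a uniform grid discretisation, and an equal-width partition (all of these are illustrative choices, not part of the paper's setup):

```python
import math

def project(f_vals, cells):
    """Replace f by its cell average: fbar = sum_k (int_k f dlambda / lambda(k)) 1_k.

    f_vals: grid values of f on a uniform grid of [0, 1];
    cells:  number of equal-width cells in the partition m.
    """
    n = len(f_vals)
    per_cell = n // cells
    fbar = []
    for c in range(cells):
        block = f_vals[c * per_cell:(c + 1) * per_cell]
        avg = sum(block) / per_cell  # (int_k f dlambda) / lambda(k) on this cell
        fbar.extend([avg] * per_cell)
    return fbar

# Arbitrary illustrative density-like function on a grid of [0, 1].
N = 1 << 10
f = [1.0 + 0.5 * math.sin(2 * math.pi * (i + 0.5) / N) for i in range(N)]
fbar = project(f, cells=8)

# The projection preserves the integral of f over each cell, hence over [0, 1].
h = 1.0 / N
assert abs(sum(f) * h - sum(fbar) * h) < 1e-9
print("total mass of f and fbar:", round(sum(f) * h, 6), round(sum(fbar) * h, 6))
```

The same cancellation of cell averages against cell measures is what collapses the denominators in the expansion of $H^2(\bar s_m, \hat s_m)$ later in the proof of Proposition 1.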
For the purposes of the lemma, we make explicit that the Hellinger distance $H$ is matched to the integrating measure $\lambda$ and to the projection $\bar{f}$. For the rest of the paper, this relationship is satisfied by construction, and we suppress this dependence. To use Lemma 12, we only need to verify that given $\lambda = \lambda_n$ (as defined in Remark 1), $f = s$, and $I = A$, we have
\[
\bar{f} = \frac{\frac{1}{2n}\sum_{i=0}^{n-1} \mathbb{E}\left[\mathbb{1}_k(X_i, a_i, X_{i+1}) \mid X_i, a_i\right]}{\frac{1}{2n}\sum_{i=0}^{n-1} \int_\chi \mathbb{1}_k(X_i, a_i, y)\, d\mu_\chi(y)} = \bar{s}_m.
\]
In other words, it is enough to show that for our given choice of $\lambda$, $f$, $I$,
\[
\frac{\int_k f\, d\lambda}{\lambda(k)} = \frac{\sum_{i=0}^{n-1} \mathbb{E}\left[\mathbb{1}_k(X_i, a_i, X_{i+1}) \mid X_i, a_i\right]}{\sum_{i=0}^{n-1} \int_\chi \mathbb{1}_k(X_i, a_i, y)\, d\mu_\chi(y)} = \frac{\frac{1}{2n}\sum_{i=0}^{n-1} \mathbb{E}\left[\mathbb{1}_k(X_i, a_i, X_{i+1}) \mid X_i, a_i\right]}{\frac{1}{2n}\sum_{i=0}^{n-1} \int_\chi \mathbb{1}_k(X_i, a_i, y)\, d\mu_\chi(y)}.
\]
We only verify that the denominators are equal; the numerators follow similarly. For any $k \subset \chi \times I \times \chi$ such that $k \in m$,
\[
\lambda(k) = \int_{(z_1, z_2, z_3) \in k} \lambda_n(dz_1, dz_2, dz_3) = \int_{(z_1, z_2, z_3) \in k} \frac{1}{2n}\sum_{i=0}^{n-1} \delta_{X_i, a_i}(dz_1, dz_2)\, \mu_\chi(dz_3)
= \int_{\chi \times I \times \chi} \frac{1}{2n}\sum_{i=0}^{n-1} \mathbb{1}_k(z_1, z_2, z_3)\, \delta_{X_i, a_i}(dz_1, dz_2)\, \mu_\chi(dz_3)
\]
\[
= \int_\chi \frac{1}{2n}\sum_{i=0}^{n-1} \int_{\chi \times I} \mathbb{1}_k(z_1, z_2, z_3)\, \delta_{X_i, a_i}(dz_1, dz_2)\, \mu_\chi(dz_3)
= \int_\chi \frac{1}{2n}\sum_{i=0}^{n-1} \mathbb{1}_k(X_i, a_i, y)\, \mu_\chi(dy).
\]
This completes our verification. Now using Lemma 12, we get $\mathbb{E}\left[H^2(s, \bar{s}_m)\right] \le \mathbb{E}\left[2 H^2(s, V_m)\right]$.

Next, we produce an upper bound for the second term. In the displays below we abbreviate $\mathbb{1}_k = \mathbb{1}_k(X_i, a_i, X_{i+1})$ inside sums over $i$ when the arguments are clear. Expanding the square in $H^2(\bar{s}_m, \hat{s}_m)$ and cancelling the denominators against the integrals in the numerators gives
\[
H^2(\bar{s}_m, \hat{s}_m) = \frac{1}{2n}\sum_{k \in m} \sum_{i=0}^{n-1} \mathbb{E}\left[\mathbb{1}_k \mid X_i, a_i\right] + \frac{1}{2n}\sum_{k \in m} \sum_{i=0}^{n-1} \mathbb{1}_k - 2C, \tag{B.2}
\]
where $C$ is the cross term
\[
C = \frac{1}{2n}\sum_{i=0}^{n-1} \int_\chi \sqrt{\left(\sum_{k \in m} b_k(y)\right)\left(\sum_{k \in m} b'_k(y)\right)}\, d\mu_\chi(y), \tag{B.3}
\]
with
\[
b_k(\cdot) = \frac{\sum_{i=0}^{n-1} \mathbb{E}\left[\mathbb{1}_k \mid X_i, a_i\right]}{\sum_{i=0}^{n-1} \int_\chi \mathbb{1}_k(X_i, a_i, t)\, d\mu_\chi(t)}\, \mathbb{1}_k(X_i, a_i, \cdot)
\quad\text{and}\quad
b'_k(\cdot) = \frac{\sum_{i=0}^{n-1} \mathbb{1}_k}{\sum_{i=0}^{n-1} \int_\chi \mathbb{1}_k(X_i, a_i, t)\, d\mu_\chi(t)}\, \mathbb{1}_k(X_i, a_i, \cdot).
\]
By the Cauchy–Schwarz inequality, $\sqrt{\left(\sum_k b_k\right)\left(\sum_k b'_k\right)} \ge \sum_k \sqrt{b_k b'_k}$. This in turn implies that
\[
\sum_{i=0}^{n-1} \int_\chi \sqrt{\left(\sum_{k \in m} b_k(y)\right)\left(\sum_{k \in m} b'_k(y)\right)}\, d\mu_\chi(y) \ge \sum_{k \in m} \int_\chi \sum_{i=0}^{n-1} \sqrt{b_k b'_k}\, d\mu_\chi. \tag{B.4}
\]
Substituting $b_k$ and $b'_k$ and cancelling the integral in the denominator against the one in the numerator, it follows that
\[
\frac{1}{2n}\sum_{i=0}^{n-1} \int_\chi \sum_{k \in m} \sqrt{b_k b'_k}\, d\mu_\chi = \frac{1}{2n}\sum_{k \in m} \sqrt{\left(\sum_{i=0}^{n-1} \mathbb{E}\left[\mathbb{1}_k \mid X_i, a_i\right]\right)\left(\sum_{i=0}^{n-1} \mathbb{1}_k\right)} = \frac{1}{2n}\sum_{k \in m} \sqrt{c_k c'_k},
\]
where $c_k = \sum_{i=0}^{n-1} \mathbb{E}\left[\mathbb{1}_k \mid X_i, a_i\right]$ and $c'_k = \sum_{i=0}^{n-1} \mathbb{1}_k$. Hence the right-hand side of eq. (B.4) is bounded below by $\sum_{k \in m} \sqrt{c_k c'_k}/2n$. Substituting this lower bound into eq. (B.2), we can now observe that
\[
H^2(\bar{s}_m, \hat{s}_m) \le \frac{1}{2n}\sum_{k \in m} \left(c_k + c'_k - 2\sqrt{c_k c'_k}\right) = \frac{1}{2n}\sum_{k \in m} \left(\sqrt{c_k} - \sqrt{c'_k}\right)^2.
\]
Taking expectations on both sides now yields
\[
\mathbb{E}\left[H^2(\bar{s}_m, \hat{s}_m)\right] \le \frac{1}{2n}\sum_{k \in m} \mathbb{E}\left[\left(\sqrt{\sum_{i=0}^{n-1} \mathbb{E}\left[\mathbb{1}_k \mid X_i, a_i\right]} - \sqrt{\sum_{i=0}^{n-1} \mathbb{1}_k}\right)^2\right]. \tag{B.5}
\]
We first bound from above each term inside the summand. Define the finite stopping time
\[
T_{st} := \arg\min\left\{ j \le n-1 : \left\{\mathbb{1}_k(X_j, a_j, X_{j+1}) = 1\right\} \cup \left\{\mathbb{E}\left[\mathbb{1}_k(X_j, a_j, X_{j+1}) \mid X_j, a_j\right] \ge n^{-1}\right\} \right\} \wedge n. \tag{B.6}
\]
For any three positive numbers $c_1, c_2, c_3$, we have the algebraic inequality
\[
\left(\sqrt{c_1 + c_2} - \sqrt{c_3}\right)^2 \le c_1 + \left(\sqrt{c_2} - \sqrt{c_3}\right)^2.
\]
By setting
\[
c_1 = \sum_{i=0}^{T_{st}-1} \mathbb{E}\left[\mathbb{1}_k \mid X_i, a_i\right], \qquad c_2 = \sum_{i=T_{st}}^{n-1} \mathbb{E}\left[\mathbb{1}_k \mid X_i, a_i\right], \qquad c_3 = \sum_{i=T_{st}}^{n-1} \mathbb{1}_k,
\]
we can write
\[
\left(\sqrt{\sum_{i=0}^{n-1} \mathbb{E}\left[\mathbb{1}_k \mid X_i, a_i\right]} - \sqrt{\sum_{i=0}^{n-1} \mathbb{1}_k}\right)^2 \le \sum_{i=0}^{T_{st}-1} \mathbb{E}\left[\mathbb{1}_k \mid X_i, a_i\right] + \left(\sqrt{\sum_{i=T_{st}}^{n-1} \mathbb{E}\left[\mathbb{1}_k \mid X_i, a_i\right]} - \sqrt{\sum_{i=0}^{n-1} \mathbb{1}_k}\right)^2. \tag{B.7}
\]
It follows from the definition of $T_{st}$ that
\[
\sum_{i=0}^{T_{st}-1} \mathbb{E}\left[\mathbb{1}_k \mid X_i, a_i\right] \le \sum_{i=0}^{T_{st}-1} \frac{1}{n} = \frac{T_{st}}{n} \le 1
\]
and $\sum_{i=0}^{T_{st}-1} \mathbb{1}_k = 0$ $\mathbb{P}$-almost everywhere. So the first term of eq. (B.7) can be upper bounded by $1$, and $\sum_{i=0}^{T_{st}-1} \mathbb{1}_k$ in the second term vanishes. Therefore,
\[
\frac{1}{2n}\sum_{k \in m} \mathbb{E}\left[\left(\sqrt{\sum_{i=0}^{n-1} \mathbb{E}\left[\mathbb{1}_k \mid X_i, a_i\right]} - \sqrt{\sum_{i=0}^{n-1} \mathbb{1}_k}\right)^2\right] \le \frac{1}{2n}\sum_{k \in m} \left(1 + \mathbb{E}\left[\left(\sqrt{\sum_{i=T_{st}}^{n-1} \mathbb{E}\left[\mathbb{1}_k \mid X_i, a_i\right]} - \sqrt{\sum_{i=T_{st}}^{n-1} \mathbb{1}_k}\right)^2\right]\right). \tag{B.8}
\]
The second term of the previous equation is now dealt with in two cases.

CASE I.
\[
\mathbb{E}\left[\mathbb{1}_k(X_{T_{st}}, a_{T_{st}}, X_{T_{st}+1}) \mid X_{T_{st}}, a_{T_{st}}\right] \ge \frac{1}{n}. \tag{B.9}
\]
Recall $(\sqrt{a} - \sqrt{b})^2 \le (a - b)^2/b$, the algebraic inequality obtained by rationalising $\sqrt{a} - \sqrt{b}$ for positive numbers $a, b$. We substitute $a = \sum_{i=T_{st}}^{n-1} \mathbb{1}_k$ and $b = \sum_{i=T_{st}}^{n-1} \mathbb{E}\left[\mathbb{1}_k \mid X_i, a_i\right]$ to get the following upper bound on the right-hand side of eq. (B.8):
\[
\frac{1}{2n}\sum_{k \in m} \mathbb{E}\left[\left(\sqrt{\sum_{i=0}^{n-1} \mathbb{E}\left[\mathbb{1}_k \mid X_i, a_i\right]} - \sqrt{\sum_{i=0}^{n-1} \mathbb{1}_k}\right)^2\right] \le \frac{1}{2n}\sum_{k \in m} \left(1 + \mathbb{E}\left[\frac{\left(\sum_{i=T_{st}}^{n-1} \mathbb{E}\left[\mathbb{1}_k \mid X_i, a_i\right] - \sum_{i=T_{st}}^{n-1} \mathbb{1}_k\right)^2}{\sum_{i=T_{st}}^{n-1} \mathbb{E}\left[\mathbb{1}_k \mid X_i, a_i\right]}\right]\right)
\]
\[
= \frac{1}{2n}\sum_{k \in m} \left(1 + \sum_{j=0}^{n-1} \mathbb{E}\left[\frac{\left(\sum_{i=T_{st}}^{n-1} \mathbb{E}\left[\mathbb{1}_k \mid X_i, a_i\right] - \sum_{i=T_{st}}^{n-1} \mathbb{1}_k\right)^2}{\sum_{i=T_{st}}^{n-1} \mathbb{E}\left[\mathbb{1}_k \mid X_i, a_i\right]}\, \mathbb{1}_{T_{st} = j}\right]\right), \tag{B.10}
\]
where the equality follows since $\sum_j \mathbb{1}_{T_{st} = j} = 1$. Observe that
\[
\frac{\left(\sum_{i=T_{st}}^{n-1} \mathbb{E}\left[\mathbb{1}_k \mid X_i, a_i\right] - \sum_{i=T_{st}}^{n-1} \mathbb{1}_k\right)^2}{\sum_{i=T_{st}}^{n-1} \mathbb{E}\left[\mathbb{1}_k \mid X_i, a_i\right]}\, \mathbb{1}_{T_{st} = j}
\]
is of the form $(\text{observed} - \text{expected})^2/\text{expected}$, which is the conditional variant of the well-known goodness-of-fit (G.O.F.) statistic. The following lemma provides an upper bound on this G.O.F. statistic.

Lemma 13. The G.O.F. statistic satisfies
\[
\mathbb{E}[\text{G.O.F.}] \le \mathbb{E}\left[\sum_{p=j}^{n-1} \frac{\mathbb{E}\left[\mathbb{1}_k(X_p, a_p, X_{p+1}) \mid X_p, a_p\right]}{\sum_{i=j}^{p} \mathbb{E}\left[\mathbb{1}_k(X_i, a_i, X_{i+1}) \mid X_i, a_i\right]}\, \mathbb{1}_{T_{st} = j}\right].
\]
Next, we state the following algebraic inequality for $n$ positive real numbers bounded by $1$.

Lemma 14. For any integer $j \le n$ and $n$ positive real numbers $z_i$ bounded by $1$,
\[
\sum_{p=j}^{n-1} \frac{z_p}{\sum_{i=j}^{p} z_i} \le 1 + \log n - \log z_j.
\]
The proofs of the previous two lemmas follow similarly to the proofs of Claims B.1 and B.2 in [52].
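Lemma 14 is a purely deterministic inequality, so it can be sanity-checked by brute force. The following self-contained Python sketch (the number of trials, the lower cutoff $10^{-4}$ on the $z_i$, and the random seed are arbitrary illustrative choices) compares both sides on random inputs $z_i \in (0, 1]$:

```python
import math
import random

def lemma14_lhs(z, j):
    """sum_{p=j}^{n-1} z_p / (z_j + ... + z_p) for a list z of length n."""
    total, partial = 0.0, 0.0
    for p in range(j, len(z)):
        partial += z[p]          # partial = sum_{i=j}^{p} z_i
        total += z[p] / partial
    return total

def lemma14_rhs(z, j):
    """1 + log n - log z_j."""
    return 1.0 + math.log(len(z)) - math.log(z[j])

random.seed(0)
for _ in range(2000):
    n = random.randint(1, 40)
    j = random.randint(0, n - 1)
    z = [random.uniform(1e-4, 1.0) for _ in range(n)]
    assert lemma14_lhs(z, j) <= lemma14_rhs(z, j) + 1e-9
print("Lemma 14 inequality held on all random trials")
```

The inequality follows from the telescoping bound $z_p/S_p \le \log S_p - \log S_{p-1}$ for the partial sums $S_p = \sum_{i=j}^{p} z_i$, together with $S_{n-1} \le n$ when the $z_i$ are bounded by $1$; the boundedness assumption is what the $(0,1]$ sampling range reflects.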
From an application of Lemmas 13 and 14, we get that for $j \le n-2$ (writing $\mathbb{1}_k$ for $\mathbb{1}_k(X_i, a_i, X_{i+1})$ inside sums over $i$),
\[
\mathbb{E}[\text{G.O.F.}] \le \mathbb{E}\left[\sum_{p=j}^{n-1} \frac{\mathbb{E}\left[\mathbb{1}_k(X_p, a_p, X_{p+1}) \mid X_p, a_p\right]}{\sum_{i=j}^{p} \mathbb{E}\left[\mathbb{1}_k \mid X_i, a_i\right]}\, \mathbb{1}_{T_{st} = j}\right] \le \mathbb{E}\left[\left(1 + \log n - \log \mathbb{E}\left[\mathbb{1}_k(X_j, a_j, X_{j+1}) \mid X_j, a_j\right]\right) \mathbb{1}_{T_{st} = j}\right]. \tag{B.11}
\]
But from eq. (B.9) we have $\mathbb{E}\left[\mathbb{1}_k(X_j, a_j, X_{j+1}) \mid X_j, a_j\right] \ge n^{-1}$. Thus, it follows that
\[
\mathbb{E}[\text{G.O.F.}] \le \mathbb{E}\left[\left(1 + \log n - \log\frac{1}{n}\right) \mathbb{1}_{T_{st} = j}\right] = (1 + 2\log n)\,\mathbb{P}(T_{st} = j).
\]
Substituting this upper bound into the right-hand side of eq. (B.10), it now follows that
\[
\mathbb{E}\left[H^2(\bar{s}_m, \hat{s}_m)\right] \le \frac{1}{2n}\sum_{k \in m}\left(1 + (1 + 2\log n)\sum_{j=0}^{n-2}\mathbb{P}(T_{st} = j) + \mathbb{E}\left[\frac{\left(\mathbb{E}\left[\mathbb{1}_k(X_{n-1}, a_{n-1}, X_n) \mid X_{n-1}, a_{n-1}\right] - \mathbb{1}_k(X_{n-1}, a_{n-1}, X_n)\right)^2}{\mathbb{E}\left[\mathbb{1}_k(X_{n-1}, a_{n-1}, X_n) \mid X_{n-1}, a_{n-1}\right]}\, \mathbb{1}_{T_{st} = n-1}\right]\right).
\]
But when $T_{st} = n-1$, $\mathbb{1}_k(X_{n-1}, a_{n-1}, X_n) = 0$, and using the fact that $\mathbb{E}\left[\mathbb{1}_k(X_{n-1}, a_{n-1}, X_n) \mid X_{n-1}, a_{n-1}\right] \in [0, 1]$, we get
\[
\left(\mathbb{E}\left[\mathbb{1}_k(X_{n-1}, a_{n-1}, X_n) \mid X_{n-1}, a_{n-1}\right]\right)^2 \le \mathbb{E}\left[\mathbb{1}_k(X_{n-1}, a_{n-1}, X_n) \mid X_{n-1}, a_{n-1}\right].
\]
Collecting all the previous facts and substituting them into the right-hand side of eq. (B.10), we now get
\[
\frac{1}{2n}\sum_{k \in m}\left(1 + \sum_{j=0}^{n-1}\mathbb{E}\left[\frac{\left(\sum_{i=T_{st}}^{n-1}\mathbb{E}\left[\mathbb{1}_k \mid X_i, a_i\right] - \sum_{i=T_{st}}^{n-1}\mathbb{1}_k\right)^2}{\sum_{i=T_{st}}^{n-1}\mathbb{E}\left[\mathbb{1}_k \mid X_i, a_i\right]}\, \mathbb{1}_{T_{st} = j}\right]\right)
\le \frac{1}{2n}\sum_{k \in m}\left(1 + (1 + 2\log n)\sum_{j=0}^{n-2}\mathbb{P}(T_{st} = j) + \mathbb{P}(T_{st} = n-1)\right)
\]
\[
\le \frac{1}{2n}\sum_{k \in m}\left(2 + (1 + 2\log n)\,\mathbb{P}(T_{st} \le n-2)\right) \le \frac{1}{2n}|m|(3 + 2\log n).
\]
It now follows from eq. (B.5) that $\mathbb{E}\left[H^2(\bar{s}_m, \hat{s}_m)\right] \le \frac{1}{2n}|m|(3 + 2\log n)$, as required.

CASE II.
\[
\mathbb{1}_k(X_{T_{st}}, a_{T_{st}}, X_{T_{st}+1}) = 1 \quad\text{and}\quad \mathbb{E}\left[\mathbb{1}_k(X_{T_{st}}, a_{T_{st}}, X_{T_{st}+1}) \mid X_{T_{st}}, a_{T_{st}}\right] < \frac{1}{n}. \tag{B.12}
\]
For this case, we use the inequality $(\sqrt{a} - \sqrt{b})^2 \le (a - b)^2/b$ with $b = \sum_{i=T_{st}}^{n-1} \mathbb{1}_k$ and $a = \sum_{i=T_{st}}^{n-1} \mathbb{E}\left[\mathbb{1}_k \mid X_i, a_i\right]$, and create the G.O.F.$_1$ statistic $(\text{observed} - \text{expected})^2/\text{observed}$. Then we proceed similarly as before to get the following counterpart to eq. (B.11):
\[
\mathbb{E}[\text{G.O.F.}_1] \le \mathbb{E}\left[\left(1 + \log n - \log \mathbb{1}_k(X_j, a_j, X_{j+1})\right)\mathbb{1}_{T_{st} = j}\right] = \mathbb{E}\left[(1 + \log n - \log 1)\,\mathbb{1}_{T_{st} = j}\right] = \mathbb{E}\left[(1 + \log n)\,\mathbb{1}_{T_{st} = j}\right],
\]
which in turn implies that $\mathbb{E}\left[H^2(\bar{s}_m, \hat{s}_m)\right] \le \frac{1}{2n}|m|(3 + \log n)$, which can be trivially upper bounded by $|m|(3 + 2\log n)/2n$. This completes the proof.

B.2 Proof of Proposition 2

Proof. We divide the proof of this proposition into two parts.

CASE I ($T(\hat{s}_m, \hat{s}_{\hat{m}}) - \operatorname{pen}(\hat{m}) + \operatorname{pen}(m) \ge 0$): Following Proposition 10, it holds with probability at most $\exp\!\left(-n(\operatorname{pen}(m) + \operatorname{pen}(\hat{m}))/\kappa - n\zeta\right)$ that
\[
\frac{3}{4}\left(1 - \frac{1}{\sqrt{2}}\right)H^2(\hat{s}_m, \hat{s}_{\hat{m}}) \le \frac{3}{4}\left(1 - \frac{1}{\sqrt{2}}\right)H^2(\hat{s}_m, \hat{s}_{\hat{m}}) + T(\hat{s}_m, \hat{s}_{\hat{m}}) - \operatorname{pen}(\hat{m}) + \operatorname{pen}(m)
\]
\[
\le \frac{5}{4}\left(1 + \frac{1}{\sqrt{2}}\right)H^2(s, \hat{s}_m) + 2\operatorname{pen}(m) \le \frac{5}{4}\left(1 + \frac{1}{\sqrt{2}}\right)H^2(s, \hat{s}_m) + 2\operatorname{pen}(m) + \zeta.
\]
Since $\exp\!\left(-n(\operatorname{pen}(m) + \operatorname{pen}(\hat{m}))/\kappa - n\zeta\right)$ can be upper bounded trivially by $6\exp(-n\zeta)$, the rest follows. We now proceed to address the other case.

CASE II ($T(\hat{s}_m, \hat{s}_{\hat{m}}) - \operatorname{pen}(\hat{m}) + \operatorname{pen}(m) \le 0$): Observe that $T(f_1, f_2) = -T(f_2, f_1)$. Therefore, $T(\hat{s}_{\hat{m}}, \hat{s}_m) + \operatorname{pen}(\hat{m}) - \operatorname{pen}(m) \ge 0$. This further implies that
\[
\frac{3}{4}\left(1 - \frac{1}{\sqrt{2}}\right)H^2(\hat{s}_m, \hat{s}_{\hat{m}}) \le \frac{3}{4}\left(1 - \frac{1}{\sqrt{2}}\right)H^2(\hat{s}_m, \hat{s}_{\hat{m}}) + T(\hat{s}_{\hat{m}}, \hat{s}_m) + \operatorname{pen}(\hat{m}) - \operatorname{pen}(m).
\]
We now require the following lemma, which provides an upper and a lower bound for $\gamma(m)$.

Lemma 15. Let $\gamma$ be as defined in eq. (2.4). Then
\[
\sup_{m' \in \mathcal{M}_l}\left\{\frac{3}{4}\left(1 - \frac{1}{\sqrt{2}}\right)H^2(\hat{s}_m, \hat{s}_{m'}) + T(\hat{s}_m, \hat{s}_{m'}) - \operatorname{pen}(m') + \operatorname{pen}(m)\right\} \le \gamma(m),
\]
\[
\gamma(m) \le \sup_{f \in s_{m'},\, m' \in \mathcal{M}_l}\left\{\frac{3}{4}\left(1 - \frac{1}{\sqrt{2}}\right)H^2(\hat{s}_m, f) + T(\hat{s}_m, f) - \operatorname{pen}(m') + 2\operatorname{pen}(m)\right\}.
\]
The proof of the first inequality uses Proposition 11, Item 4, and some careful book-keeping.
It follows similarly to that of Lemma B.2 in [52]. The proof of the second inequality can be found in Section B.14. Using Lemma 15, we get
\[
\frac{3}{4}\left(1 - \frac{1}{\sqrt{2}}\right)H^2(\hat{s}_m, \hat{s}_{\hat{m}}) + T(\hat{s}_{\hat{m}}, \hat{s}_m) + \operatorname{pen}(\hat{m}) - \operatorname{pen}(m) \le \gamma(\hat{m}) \le \gamma(m) + \frac{1}{n} \le \sup_{f \in s_{m'},\, m' \in \mathcal{M}_l}\left\{\frac{3}{4}\left(1 - \frac{1}{\sqrt{2}}\right)H^2(\hat{s}_m, f) + T(\hat{s}_m, f) - \operatorname{pen}(m') + 2\operatorname{pen}(m)\right\} + \frac{1}{n},
\]
where the second inequality follows from eq. (Contrast) and the last inequality follows from the fact that $\hat{s}_m \in \bigcup_{f \in s_{m'},\, m' \in \mathcal{M}_l} f$ for all $m$. It is now enough to show that it happens with low probability that
\[
\sup_{f \in s_{m'},\, m' \in \mathcal{M}_l}\left\{\frac{3}{4}\left(1 - \frac{1}{\sqrt{2}}\right)H^2(\hat{s}_m, f) + T(\hat{s}_m, f) - \operatorname{pen}(m') + \operatorname{pen}(m)\right\} + \frac{1}{n} \le \frac{5}{4}\left(1 + \frac{1}{\sqrt{2}}\right)H^2(s, f_1) + 2\operatorname{pen}(m) + \zeta + \frac{1}{n}.
\]
Taking a union bound over $f \in s_{m'}$, we get
\[
\mathbb{P}\left(\sup_{f \in s_{m'},\, m' \in \mathcal{M}_l}\left\{\frac{3}{4}\left(1 - \frac{1}{\sqrt{2}}\right)H^2(\hat{s}_m, f) + T(\hat{s}_m, f) - \operatorname{pen}(m') + \operatorname{pen}(m)\right\} + \frac{1}{n} \le \frac{5}{4}\left(1 + \frac{1}{\sqrt{2}}\right)H^2(s, f_1) + 2\operatorname{pen}(m) + \zeta + \frac{1}{n}\right) \tag{B.13}
\]
\[
\le \sum_{f \in s_{m'},\, m' \in \mathcal{M}_l}\mathbb{P}\left(\frac{3}{4}\left(1 - \frac{1}{\sqrt{2}}\right)H^2(\hat{s}_m, f) + T(\hat{s}_m, f) - \operatorname{pen}(m') + \operatorname{pen}(m) + \frac{1}{n} \le \frac{5}{4}\left(1 + \frac{1}{\sqrt{2}}\right)H^2(s, f_1) + 2\operatorname{pen}(m) + \zeta + \frac{1}{n}\right).
\]
We can now upper bound each probability using Proposition 10, with $\zeta$ replaced by $\zeta + n^{-1}$. We get
\[
\sum_{f \in s_{m'},\, m' \in \mathcal{M}_l}\mathbb{P}\left(\cdots\right) \le \sum_{f \in s_{m'},\, m' \in \mathcal{M}_l}\exp\!\left(-n(\operatorname{pen}(m) + \operatorname{pen}(m'))/\kappa - n\zeta - 1\right).
\]
To calculate this sum, we need to compute the cardinality of $\bigcup_{f \in s_{m'},\, m' \in \mathcal{M}_l} f$. It follows from the construction in eq. (A.2) that the cardinality of the set
\[
\bigcup_{i=0}^{l}\left\{\frac{a}{b\,\mu_I(K^{(2)}_i)\,\mu_\chi(K^{(3)}_i)} : a \in \{0, \ldots, n\},\ b \in \{1, \ldots, n\}\right\}
\]
is $l(n+1)n$. Since $l \le n$, we have $l(n+1)n \le n^2(n+1)$, which in turn can be upper bounded as $n^2(n+1) \le 1.5 n^3$ as long as $n \ge 3$. It follows that
\[
|s_{m'}| \le 1.5^{|m'|} n^{3|m'|} = \exp\!\left(|m'|\left(3\log n + \log 1.5\right)\right).
\]
Recall from Remark 2 that $\operatorname{pen}(m')$ was defined to be $L|m'|(1.5 + \log n)/n$ for some $L \ge 3$. It therefore follows that
\[
|s_{m'}|\exp\!\left(-22n\operatorname{pen}(m')/L\right) \le \exp\!\left(|m'|\left(3\log n + \log 1.5\right) - 22|m'|(1.5 + \log n)/L\right) \le \exp(-1.824|m'|) \le \exp(-|m'|).
\]
It therefore follows from Proposition 11, Item 1, that
\[
\sum_{f \in s_{m'},\, m' \in \mathcal{M}_l}\exp\!\left(-22n\operatorname{pen}(m')/L\right) \le \sum_{m' \in \mathcal{M}_l}\exp(-|m'|) \le 15.
\]
Trivially bounding $\exp\!\left(-22n\operatorname{pen}(m)/L\right)$ from above by $1$, we get
\[
\sum_{f \in s_{m'},\, m' \in \mathcal{M}_l}\exp\!\left(-22n(\operatorname{pen}(m) + \operatorname{pen}(m'))/L - n\zeta - 1\right) \le 15\exp(-n\zeta - 1) \le 6\exp(-n\zeta).
\]
This completes the proof.

B.3 Proof of Proposition 3

Proof. Let $l \ge n$ and $m_1, m_2 \in \mathcal{M}_\infty$, and let $K \in m_1$. We define $\gamma_K$ as
\[
\gamma_K(m_1, m_2) := \frac{\sqrt{2} - 1}{2\sqrt{2}}H^2(\hat{s}_{m_1}\mathbb{1}_K, \hat{s}_{m_2}\mathbb{1}_K) + T(\hat{s}_{m_1}\mathbb{1}_K, \hat{s}_{m_2}\mathbb{1}_K) - \operatorname{pen}(m_2 \vee K).
\]
$\gamma_K$ compares the relative performance of the histograms $\hat{s}_{m_1}$ and $\hat{s}_{m_2}$ on the set $K \in m_1$. Let $m^{\star}_2 := \operatorname{argmax}_{m_2 \in \mathcal{M}_\infty}\gamma_K(m_1, m_2)$. Using the facts that $H^2(\cdot, \cdot) \le 1$ and $|T(\cdot, \cdot)| \le 2$, we get
\[
-2 - \operatorname{pen}(\chi \times I \times \chi) \le \gamma_K(m_1, \chi \times I \times \chi) \le \gamma_K(m_1, m^{\star}_2) \le 3 - \operatorname{pen}(m^{\star}_2 \vee K),
\]
with the second inequality following by definition. Since $\operatorname{pen}(m) = L(1.5 + \log n)|m|/n$ and $|\chi \times I \times \chi| = 1$,
\[
-2 - L(1.5 + \log n)/n \le 3 - L(1.5 + \log n)|m^{\star}_2 \vee K|/n.
\]
This, with a bit of rearrangement, implies
\[
|m^{\star}_2 \vee K| \le 1 + \frac{5n}{L(1.5 + \log n)} \le n.
\]
Therefore, there exists $m^{\oplus}_2$ such that $m^{\oplus}_2 \in \mathcal{M}_n$ and $m^{\oplus}_2 \vee K = m^{\star}_2 \vee K$, which implies that $m^{\oplus}_2$ also maximises $\gamma_K(m_1, m_2)$. Therefore,
\[
\max_{m_2 \in \mathcal{M}_\infty}\gamma_K(m_1, m_2) = \max_{m_2 \in \mathcal{M}_n}\gamma_K(m_1, m_2).
\]
We define $m^{\star} := \operatorname{argmin}_{m \in \mathcal{M}_l}\gamma(m)$. It is obvious from the definition that $\gamma(m^{\star}) \le \gamma(\chi \times I \times \chi) \le 3 + L(1.5 + \log n)/n$.
We observe from Lemma 15 that
\[
\gamma(m^{\star}) \ge \sup_{m' \in \mathcal{M}_l}\sum_{K}\gamma_K(m^{\star}, m') + \operatorname{pen}(m^{\star}) \ge -2 - \operatorname{pen}(\chi \times I \times \chi) + \operatorname{pen}(m^{\star}) = -2 + \frac{L(1.5 + \log n)(|m^{\star}| - 1)}{n}.
\]
Some simple calculations now show that $|m^{\star}| \le 2 + 5n/(1.5 + \log n) \le n$, which implies $m^{\star} \in \mathcal{M}_n$.

B.4 Proof of Lemma 4

Proof. The basic idea is to create a recurring sequence whose Cesàro sum does not converge. We consider $I = \{-1, 1\}$, $\chi = \{-1, 1\}$, and let $\mu_I$ and $\mu_\chi$ be counting measures. Similar counterexamples can easily be constructed for more general spaces. Define
\[
\mathbb{P}\left(X_{i+1} = -1 \mid a_i = -1,\, X_i = s\right) := 1 \quad\text{and}\quad \mathbb{P}\left(X_{i+1} = 1 \mid a_i = 1,\, X_i = s\right) := 1 \qquad \forall\, i \ge 0,\ s \in \chi.
\]
Set the controls as $a_i = (-1)^{\lfloor \log_2(i) \rfloor}$. By construction, the waiting times are deterministic and finite, so that $T(S) < \infty$. A tedious but straightforward calculation shows that
\[
\nu_n((1, 1)) = \frac{4^{\lceil k/2 \rceil} - 1}{6n} + \frac{1 + (-1)^k}{4n}(r + 1), \qquad k = \lfloor \log_2 n \rfloor,\ r = n - 2^k.
\]
Hence, $\lim_{n \to \infty}\nu_n((1, 1))$ does not exist. The same argument applies to $\nu_n((-1, 1))$, $\nu_n((1, -1))$, and so on. This completes the proof.

B.5 Proof of Proposition 5

Proof. We actually compare the remainder terms obtained via Proposition 23. The only difference is that the extra $r_n$ term does not appear in $R^{(1)}(n)$. This makes the comparison fair, since otherwise we would be comparing $h^2$ to $h^2_n$. Let $\chi = I = [0, 1/2) \times [0, 1/2) \cup [1/2, 1] \times [1/2, 1]$. We set $\mu_\chi$ and $\mu_I$ to be Lebesgue measures. Let the true $s$ be such that
\[
s(x, l, y) = \begin{cases} 2 & \text{if } l, y \in [0, 1/2) \\ 2 & \text{if } l, y \in [1/2, 1] \\ 0 & \text{otherwise.} \end{cases}
\]
Therefore, for all $i \ge 0$ the states $X_i$ are i.i.d. uniform over $[0, 1/2)$ or $[1/2, 1]$ in accordance with the value of $l$. Observe that $s$ is a piecewise constant function on a dyadic partition, so it can be perfectly approximated by histograms on $\mathcal{M}_\infty$. Therefore,
\[
\inf_{m \in \mathcal{M}_\infty}\left(h^2(s, V_m) + \operatorname{pen}(m)\right) = \inf_{m \in \mathcal{M}_\infty}\left(h^2_n(s, V_m) + \operatorname{pen}(m)\right) = \frac{L(1.5 + \log n)}{n}
\]
for some universal constant $L$, with the minimum achieved by the partition $\chi \times I \times \chi$. The controls $a_i$ are as follows: for a fixed integer $i_0 \ge 1$ and $i \in \{0, \ldots, i_0 - 1\}$,
\[
a_i \sim \begin{cases} \text{Uniform}[0, 1/2) & \text{with probability } \frac{1}{i_0} \\ \text{Uniform}[1/2, 1] & \text{otherwise,} \end{cases}
\]
and for $i \ge i_0$,
\[
a_i \sim \begin{cases} \text{Uniform}[0, 1/2) & \text{with probability } \frac{1}{2} \\ \text{Uniform}[1/2, 1] & \text{otherwise.} \end{cases}
\]
In essence, $(X_i, a_i)$ is an i.n.i.d. sequence taking values in $[0, 1/2) \times [0, 1/2) \cup [1/2, 1] \times [1/2, 1]$. Let $s(\nu_n)$ denote the density of $\nu_n$ and $s(\nu)$ the density of $\nu$. The following form of the total variation distance will be useful. Let $A^+$ be any set such that $\inf_{(x, l) \in A^+}\{s(\nu)(x, l) - s(\nu_n)(x, l)\} \ge 0$ and $A^-$ be any set such that $\inf_{(x, l) \in A^-}\{s(\nu)(x, l) - s(\nu_n)(x, l)\} \le 0$. Note that
\[
\|\nu - \nu_n\|_{TV} = \max\left(\sup_{A^+}\int_{(x, l) \in A^+}\left(s(\nu)(x, l) - s(\nu_n)(x, l)\right)dx\,dl,\ \sup_{A^-}\int_{(x, l) \in A^-}\left(s(\nu_n)(x, l) - s(\nu)(x, l)\right)dx\,dl\right). \tag{B.14}
\]
We remark that we have suppressed the dependence of $A^+$ and $A^-$ on $n$ in the notation. Now we derive $s(\nu)$ and $s(\nu_n)$. It can easily be seen that $\nu$ is a uniform distribution on $\chi \times I$. We denote its density by $s(\nu)$, where
\[
s(\nu)(x, l) = \begin{cases} 2 & \text{if } (x, l) \in [0, 1/2) \times [0, 1/2) \\ 2 & \text{if } (x, l) \in [1/2, 1] \times [1/2, 1] \\ 0 & \text{otherwise.} \end{cases}
\]
We denote the density of $(X_0, a_0)$ by $s(\nu_0)$, where
\[
s(\nu_0)(x, l) = \begin{cases} \frac{4}{i_0} & \text{if } (x, l) \in [0, 1/2) \times [0, 1/2) \\ 4\left(1 - \frac{1}{i_0}\right) & \text{if } (x, l) \in [1/2, 1] \times [1/2, 1] \\ 0 & \text{otherwise.} \end{cases}
\]
For simplicity, let $n \ge i_0$ and recall that $r_n := \|\nu - \nu_n\|_{TV}$. Observe by linearity that $s(\nu_n) = \frac{i_0}{n}s(\nu_0) + \frac{n - i_0}{n}s(\nu)$. Using eq. (B.14), it is now easy to see that $r_n = \Theta(i_0/n)$.

We turn to $T(S)$. Let $S^{\dagger} \subseteq [0, 1/2) \times [0, 1/2)$. The $\mathbb{1}[(X_i, a_i) \in S^{\dagger}]$ are independent Bernoulli trials with probability of success $4\operatorname{Vol}(S^{\dagger})/i_0$ if $i \in \{0, \ldots, i_0 - 1\}$, and $4\operatorname{Vol}(S^{\dagger})$ if $i \ge i_0$. Consider $\tau^{(1)}_{S^{\dagger}}$. Therefore, $T(S^{\dagger}) = \mathbb{E}[\tau^{(1)}_{S^{\dagger}}] \ge i_0/\operatorname{Vol}(S^{\dagger})$.

Recall that the partition minimising the oracle risk was $\chi \times I \times \chi$. Therefore, $m^{(2)}_{\mathrm{ref}} = \chi \times I$, and we can write the following expressions for $R^{(1)}(n)$ and $R^{(2)}(n)$. The only important thing to note here is that the multiplicative term for $n$ in the numerator of the exponent is larger for $R^{(1)}(n)$ for all values of $i_0$. Define
\[
S^{\star} := \operatorname{argmax}_{S_r \in m^{(2)}_{\mathrm{ref}}}\exp\left(\frac{-C_p n\nu_n^2(S_r)}{4C_\Delta\sup_{i, j}\sqrt{\mathbb{P}\left((X_i, a_i) \in S_r,\ (X_j, a_j) \in S_r\right)} + 4n^{-1} + 2\nu_n(S_r)(\log n)^2}\right).
\]
Then,
\[
R^{(1)}(n) = 4\exp\left(\frac{-C_p n\nu^2(S^{\star}) - 2nC_p r_n}{4C_\Delta\rho^{\star}(S^{\star}) + 4n^{-1} + 2\nu(S^{\star})(\log n)^2 + 2r_n(\log n)^2}\right) = 4\exp\left(\frac{-4C_p n\operatorname{Vol}(S^{\star})^2 - 2i_0\eta_1 C_p}{4C_\Delta\rho^{\star}(S^{\star}) + 4n^{-1} + 4\operatorname{Vol}(S^{\star})(\log n)^2 + \frac{2\eta_2 i_0(\log n)^2}{n}}\right),
\]
where $\eta_1$ and $\eta_2$ are some positive constants. Similarly,
\[
R^{(2)}(n) = 4\exp\left(\frac{-C_p n\operatorname{Vol}(S^{\star})^2/(4i_0^2)}{4C_\Delta\rho^{\star}(S^{\star}) + \frac{(4 + (\log n)^2)\operatorname{Vol}(S^{\star})}{2i_0}}\right).
\]
Since the multiplicative term for $n$ in the numerator of $R^{(1)}(n)$ is larger, it immediately follows that $R^{(1)}(n)/R^{(2)}(n) \to 0$. Thus $R^{(1)}(n) = o\!\left(R^{(2)}(n)\right)$. Next, by setting $i_0 = \Theta\!\left(\sqrt{n(\log n)^2\log(n\log n)}\right)$, we get that $R^{(1)}(n) = O(\log n/n)$. The rest of the proof follows.

B.6 Proof of Lemma 6

Proof. Recall from [19, eq. 1.2] the definition of the $\phi$-mixing coefficients. Now using [19, eq. 1.11] we get $\alpha_{i,j} \le \phi_{i,j}$. It is therefore sufficient to bound $\phi_{i,j}$. Define the weak mixing coefficients $\bar{\theta}_{i,j}$ as
\[
\bar{\theta}_{i,j} := \sup_{s_1, s_2 \in \chi,\ l_1, l_2 \in I}\left\|\mathbb{P}\left(X_j, a_j \mid X_i = s_1,\, a_i = l_1\right) - \mathbb{P}\left(X_j, a_j \mid X_i = s_2,\, a_i = l_2\right)\right\|_{TV}, \tag{B.15}
\]
and observe from [7, Lemma 1] that $\phi_{i,j} \le \bar{\theta}_{i,j}$. Therefore, it is sufficient to prove $\bar{\theta}_{i,j} \le (1 - \operatorname{Vol}(\chi_0)\kappa)^{j - i - 1}$. Let the density of $a_i$ be denoted by $s^{(i)}(x, l')$, defined as $s^{(i)}(x, l') := \mathbb{P}\left(a_i \in dl' \mid X_i = x\right)$. We note that $(X_i, a_i)$ forms an inhomogeneous Markov chain in which the probability of transition from $(x, l)$ to $(y, l')$ at time point $i$ is $s(x, l, y)\,s^{(i)}(y, l')$. It follows from Hajnal and Bartlett [32, Theorem 2] that
\[
\bar{\theta}_{i,j} \le \prod_{p=i}^{j-1}\left(1 - \min_{(x_1, l_1), (x_2, l_2) \in \chi \times I}\int_{(t, l') \in \chi \times I}\min\left\{s(x_1, l_1, t)\,s^{(i)}(t, l'),\ s(x_2, l_2, t)\,s^{(i)}(t, l')\right\}dl'\,dt\right). \tag{B.16}
\]
Recall that by hypothesis $\min_{x \in \chi,\, l \in I}s(x, l, t) > \kappa$ for any $t \in \chi_0$. This implies that for all $t \in \chi_0$,
\[
\min\left\{s(x_1, l_1, t)\,s^{(i)}(t, l'),\ s(x_2, l_2, t)\,s^{(i)}(t, l')\right\} \ge \kappa\,s^{(i)}(t, l').
\]
Decomposing the integral over $(t, l') \in \chi \times I$ in eq. (B.16) into an integral over $(t, l') \in (\chi \setminus \chi_0) \times I$ and one over $(t, l') \in \chi_0 \times I$, and substituting $\kappa\,s^{(i)}(t, l')$ as the appropriate lower bound, we get
\[
\int_{(t, l') \in \chi \times I}\min\left\{s(x_1, l_1, t)\,s^{(i)}(t, l'),\ s(x_2, l_2, t)\,s^{(i)}(t, l')\right\}dl'\,dt \ge \int_{t \in \chi_0,\, l' \in I}\kappa\,s^{(i)}(t, l')\,dl'\,dt = \operatorname{Vol}(\chi_0)\,\kappa.
\]
It now follows that the right-hand side of eq. (B.16) can be upper bounded by
\[
\prod_{p=i}^{j-1}\left(1 - \operatorname{Vol}(\chi_0)\kappa\right) \le \left(1 - \operatorname{Vol}(\chi_0)\kappa\right)^{j - i - 1},
\]
which completes our initial claim.

B.7 Proof of Proposition 7

Proof. We begin by representing $\tau^{(i)}_S$ in terms of the $\tau^{(i, \star, j)}_S$. Observe that $\tau^{(i, \star, j)}_S$ is constructed so that $\tau^{(i)}_S$ is $\tau^{(i, \star, 1)}_S$ if the state at the corresponding time is inside $S_\chi$; it is $\tau^{(i, \star, 1)}_S + \tau^{(i, \star, 2)}_S$ if the state was not in $S_\chi$ after $\tau^{(i, \star, 1)}_S$ time points and was in $S_\chi$ after $\tau^{(i, \star, 1)}_S + \tau^{(i, \star, 2)}_S$ time points, and so on. Formally, this means
\[
\tau^{(i+1)}_S = \begin{cases}
\tau^{(i, \star, 1)}_S & \text{if } X_{\sum_{p=1}^{i}\tau^{(p)}_S + \tau^{(i, \star, 1)}_S} \in S_\chi \\[4pt]
\tau^{(i, \star, 1)}_S + \tau^{(i, \star, 2)}_S & \text{if } X_{\sum_{p=1}^{i}\tau^{(p)}_S + \tau^{(i, \star, 1)}_S} \notin S_\chi \text{ and } X_{\sum_{p=1}^{i}\tau^{(p)}_S + \tau^{(i, \star, 1)}_S + \tau^{(i, \star, 2)}_S} \in S_\chi \\[4pt]
\ \vdots &
\end{cases}
\]
Therefore, τ(i+1) S =τ(i,⋆,1) S 1h XPi p=1τ(p) S+τ(i,⋆,1) S∈ Sχi + τ(i,⋆,1) S +τ(i,⋆,2) S 1h XPi p=1τ(p) S+τ(i,⋆,1) S/∈ Sχ, XPi p=1τ(p) S+τ(i,⋆,1) S+τ(i,⋆,2) S∈ Sχi +. . . , and taking a conditional expectation on both side provides the following identity E[τ(i+1) S|FPi−1 p=1τ(p) S] =Eh τ(i,⋆,1) S 1h XPi p=1τ(p) S+τ(i,⋆,1) S∈ Sχi |FPi−1 p=1τ(p) Si +Eh τ(i,⋆,1) S +τ(i,⋆,2) S 1h XPi p=1τ(p) S+τ(i,⋆,1) S/∈ Sχ, XPi p=1τ(p) S+τ(i,⋆,1) S+τ(i,⋆,2) S∈ Sχi |FPi−1 p=1τ(p) Si +. . . =Term 1 +Term 2 +. . . . (B.17) To compute an upper bound to E[τ(i) S], it is thus sufficient to individually find an upper bound to each term of the summation in the right-hand side of the previous equation by a careful bookkeeping of the conditional expectations. Term 1: Applying the law of conditional expectation to the first term we get E τ(i,⋆,1) S1 XPi p=1τ(p) S+τ(i,⋆,1) S∈ Sχ |FPi−1 p=1τ(p) S =E E τ(i,⋆,1) S1 XPi p=1τ(p) S+τ(i,⋆,1) S∈ Sχ |τ(i,⋆,1) S |FPi−1 p=1τ(p) S =E τ(i,⋆,1) SP XPi p=1τ(p) S+τ(i,⋆,1) S∈ Sχ|τ(i,⋆,1) S | {z } =:A|FPi−1 p=1τ(p) S (B.18) 34 where the second equality follows from tower property since FPi−1 p=1τ(p) S⊆ FPi−1 p=1τ(p) S+Pj−1
https://arxiv.org/abs/2505.14458v1
p=1τ(i,⋆,p ) S. Recall from eq. (Fully Connected) that s(x, l, y )≤1/ε0. Therefore, for any time point pand any history ℏp−1 0, P Xp∈ Sχ|Hp−1 0=ℏp−1 0 ≤Vol(Sχ)/ε0,and (P.I) P Xp/∈ Sχ|Hp−1 0=ℏp−1 0 ≤1−ε0Vol(Sc χ). (P.II) It follows from (P.I) that, A≤Vol(Sχ)/ε0. Substituting this value in the right hand side of eq. (B.18), we get the following upper bound to Term 1 E τ(i,⋆,1) S1 XPi p=1τ(p) S+τ(i,⋆,1) S∈ S |FPi−1 p=1τ(p) S ≤E τ(i,⋆,1) SVol(Sχ) ε0|FPi−1 p=1τ(p) S ≤T⋆(S)Vol(Sχ) ε0. Term 2: We turn to Term 2. We introduce the notation E∗for convenience where E∗[·] =E[·|FPi−1 p=1τ(p) S] Term 2: We introduce some notation for convenience. Define E∗[·] :=E[·|FPi−1 p=1τ(p) S] and proceed similarly as before to get E∗h τ(i,⋆,1) S 1h XPi p=1τ(p) S+τ(i,⋆,1) S/∈ S, XPi p=1τ(p) S+τ(i,⋆,1) S+τ(i,⋆,2) S∈ Sii =E∗  τ(i,⋆,1) S P XPi p=1τ(p) S+τ(i,⋆,1) S/∈ S, XPi p=1τ(p) S+τ(i,⋆,1) S+τ(i,⋆,2) S∈ S|τ(i,⋆,1) S , τ(i,⋆,2) S | {z } =:B . (B.19) We decompose B into P XPi p=1τ(p) S+τ(i,⋆,1) S+τ(i,⋆,2) S∈ S|τ(i,⋆,1) S, τ(i,⋆,2) S, XPi p=1τ(p) S+τ(i,⋆,1) S/∈ S | {z } =:C ×P XPi p=1τ(p) S+τ(i,⋆,1) S/∈ S|τ(i,⋆,1) S, τ(i,⋆,2) S | {z } =:D We bound C using P.I, and D using P.II. This gives us Right hand side of eq. (B.19) ≤Vol(Sχ) ε0×(1−ε0Vol(Sχ))E∗[τ(i,⋆,1) S] ≤Vol(Sχ) ε0×(1−ε0Vol(Sχ))T(⋆)(S). 35 We similarly get E∗ τ(i,⋆,2) S 1 XPi p=1τ(p) S+τ(i,⋆,1) S/∈ S, XPi p=1τ(p) S+τ(i,⋆,1) S+τ(i,⋆,2) S∈ S ≤Vol(Sχ) (1−ε0Vol(Sχ)) ε0T(⋆)(S). Therefore, Term2 ≤2Vol(Sχ) ε0×(1−ε0Vol(Sχ))T(⋆)(S). Proceeding similarly, we can find an upper bound to each term. Substituting these terms back into eq. (B.17) we get E[τ(i+1) S|FPi−1 p=1τ(p) S]≤∞X j=1jVol(Sχ) ε0×(1−ε0Vol(Sχ))j−1T(⋆)(S). (B.20) By integrating the first inequality of eq. (Fully Connected) with respect to y∈χ, we have 0<Vol(χ)ε0≤1 Consequently, 1−Vol(χ)ε0<1and1−Vol(Sχ)ε0<1for all Sχ⊆χ. This makes the series in the right hand side of eq. 
(B.20) convergent and we finally get, ∞X j=1jVol(Sχ) ε0×(1−ε0Vol(Sχ))j−1T(⋆)(S) =T(⋆)(S) ε2 0∞X j=1jε0Vol(Sχ) (1−ε0Vol(Sχ))j−1 =T(⋆)(S) ε3 0Vol(Sχ). B.8 Proof of Proposition 8 Proof. Observe that under conditions described in equations (Fully Connected) and (Minorisation) P (Xp, ap)∈ S|Hp−1 0 > ε0ε1Vol(S) for any positive integer p. This implies, P (Xp, ap)/∈ S|Hp−1 0∈ℏp−1 0 <1−ε0ε1Vol(S). Using this fact recursively, we get P (Xp+q, ap+q)/∈ S, . . . , (Xp, ap)/∈ S|Hp−1 0∈ℏp−1 0 <(1−ε0ε1Vol(S))q+1 for any q≥0. Now, let pbe when Xi−1, ai−1∈ S, for the τ⋆-th time. Then, P τ(τ⋆) S> q|Hp 0∈ℏp 0 =P (Xp+q, ap+q)/∈ S, . . . , (Xp, ap)/∈ S|Hp−1 0∈ℏp−1 0 <(1−ε0ε1Vol(S))q+1. 36 It now follows that E[τ(τ⋆) S|Hp 0∈ℏp 0]≤X q≥1P τ(τ⋆) S> q|Hp 0∈ℏp 0 ≤X q≥1P (Xp+q, ap+q)/∈ S, . . . , (Xp, ap)/∈ S|Hp−1 0∈ℏp−1 0 ≤X q≥1(1−ε0ε1Vol(S))q+ 1 ≤ε0ε1Vol(S) 1−ε0ε1Vol(S)+ 1. This completes the proof. B.9 Proof of Lemma 9 Proof. Letχ=I= [0,1]. We assume {Xi}are i.i.d. uniform random variables on [0,1]. We also assume a0is uniformly distributed on [0,1]. For i≥1, we define {ai}independently of {Xi}through conditional densities sai, where sai(l|a0) =  1 ifl∈[0,1]anda0∈[0,1/2) 1 4ifl∈[0,1/2)anda0∈[1/2,1] 7 4ifl∈[1/2,1]anda0∈[1/2,1] Now, by setting V(D) =R D(1/4)µI(dl), one can see that for any A∈
https://arxiv.org/abs/2505.14458v1
Fp−1 0, C⊆χ, D⊆[0,1/2) P(ap∈D|Xp∈C, A)≥ V(D). However, to show that (Xi, ai)isnotα-mixing, we note that for any p≥1 P ap∈[1/2,1]\ a0∈[1/2,1] =7 16, and P(ap∈[1/2,1])P(a0∈[1/2,1]) =11 16. Therefore, P ap∈[1/2,1]\ a0∈[1/2,1] −P(ap∈[1/2,1])P(a0∈[1/2,1]) =1 4, which in turn implies that αi,j= sup A,B P Hi 0∈A\ H∞ j∈B −P Hi 0∈A P H∞ j∈B ≥1 4 for all 1≤i < j . This completes the proof. 37 B.10 Proof of Proposition 10 Proof. For notational clarity, we introduce two intermediate objects, ψ(c1, c2)and¯f, defined by ψ(c1, c2) :=1√ 2√c2−√c1√c2+c1(B.21) ¯f(x, l, y ) :=f1(x, l, y ) +f2(x, l, y ) 2. Next, for two functions f1andf2, define Ziby Zi(f1, f2) :=ψ(f1(Xi, ai, Xi+1), f2(Xi, ai, Xi+1))−E[ψ(f1(Xi, ai, Xi+1), f2(Xi, ai, Xi+1))|Xi, ai].(B.22) We can now state the lemma, whose proof is provided in Section B.15: Lemma 16. Z ψ(f1, f2)2s dλ n≤3h H2 s, f2 +H2 s, f1i . We also state the following lemma, proved by algebraic manipulations in Section B.16: Lemma 17. Recall from eq. (B.21) thatϕ(c1, c2) = (√c2−√c1)/p 2(c1+c2). Then 1−1√ 2 H2 s, f2 +T f1, f2 ≤ 1 +1√ 2 H2 s, f1 +1 nn−1X i=0Zi f1, f2 . To proceed with the proof, we first adopt from [52] the following iteration of Bernstein’s inequality. As before, let {Fi 0}i≥0be a filtration and |gi| ≤bbe a bounded random variable adapted to it. Then we have the following lemma. Lemma 18. Define the sum sn:=Pn i=0 gi−E[gi|Fi 0] andVn:=Pn i=0E[g2 i|Fi 0]. Then P sn≥Vn 2(κ−b)+xκ ≤exp (−x) (B.23) for all κ > b , and x >0. Using Zias in eq. (B.22), set sn=Pn−1 i=0Ziand gi=ψ f1(Xi, ai, Xi+1), f2(Xi, ai, Xi+1) . Then, Lemma 18 asserts P sn≥Vn 2(κ−b)+xκ ≤exp(−x). (B.24) A simple rearrangement shows Vnreduces to nR ψ f1, f22s dλ n. Lemma 16 then boundsR ψ f1, f22s dλ n byZ ψ f1, f22s dλ n≤3h H2 s, f2 +H2 s, f1i . From eq. (B.24), we obtain P sn≥3n H2(s, f2) +H2(s, f1) 2(κ−b)+xκ ≤exp(−x). 38 Equivalently, Psn n≥3 H2(s, f2) +H2(s, f1) 2(κ−b)+xκ n ≤exp(−x). (B.25) By Lemma 17, 1−1√ 2 H2 s, f2 +T f1, f2 − 1 +1√ 2 H2 s, f1 ≤sn n. 
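As a quick numerical sanity check of the pointwise inequality behind Lemma 16 (established later in Section B.15), the following sketch (not part of the paper; Python used purely for illustration) samples random positive values of s, f1, f2 and verifies (√f2 − √f1)² s / ¯f ≤ 3[(√s − √f2)² + (√s − √f1)²] with ¯f = (f1 + f2)/2:

```python
import random

def check_pointwise_bound(trials=10000, seed=0):
    """Randomly verify (sqrt(f2)-sqrt(f1))^2 * s / fbar
    <= 3 * [(sqrt(s)-sqrt(f2))^2 + (sqrt(s)-sqrt(f1))^2],
    with fbar = (f1+f2)/2, for positive s, f1, f2."""
    rng = random.Random(seed)
    for _ in range(trials):
        s = rng.uniform(1e-3, 10.0)
        f1 = rng.uniform(1e-3, 10.0)
        f2 = rng.uniform(1e-3, 10.0)
        fbar = 0.5 * (f1 + f2)
        lhs = (f2 ** 0.5 - f1 ** 0.5) ** 2 * s / fbar
        rhs = 3.0 * ((s ** 0.5 - f2 ** 0.5) ** 2 + (s ** 0.5 - f1 ** 0.5) ** 2)
        assert lhs <= rhs + 1e-9, (s, f1, f2)
    return True
```

In fact, a Cauchy–Schwarz argument shows the left side is at most two thirds of the right side, so the check passes with margin.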
Substituting this into eq. (B.25) yields, with probability at most exp −x , 1−1√ 2 H2 s, f2 +T f1, f2 − 1 +1√ 2 H2 s, f1 ≤3 H2(s, f2) +H2(s, f1) 2(κ−b)+xκ n. Next, observe that ψ≤1/√ 2. We set b= 1/√ 2, x =n pen(m1) +pen(m2) +κζ κ, κ =2 + 11√ 2 2√ 2−2, implying 1.5×(κ−b) = 1−1/√ 2 /4. Hence, with probability at most exp −npen(m1)+pen(m2) κ−n ζ , 1−1√ 2 H2 s, f2 +T f1, f2 − 1 +1√ 2 H2 s, f1 ≤1 4 1−1√ 2h H2 s, f2 +H2 s, f1i +xκ n. By rearranging terms and bounding 1−0.50.5 H2 s, f1 by 1 + 0 .50.5 H2 s, f1 , we conclude 3 4 1−1√ 2 H2 s, f2 +T f1, f2 ≤5 4 1 +1√ 2 H2 s, f1 +pen(m1)
+pen(m2) +ζ. This completes the proof. B.11 Proof of Proposition 11 Proof. 1.ThatMl⊂ M l+1is obvious by construction. We prove |m| ≤2l(2d1+d2)by induction. It obviously is true for l= 0. Now let it be true for a given value l. Let m∈ M l+1be an element of Ml+1. From construction, either m∈ M l, orm∈S mS kS(m, k)where S(m, k)is as in Definition 1. Ifm∈ M l+1then|m| ≤2l(2d1+d2)and we have proved the induction step. If m∈S mS kS(m, k), then |m| ≤2(l+1)(2 d1+d2)−1by construction the induction step is satisfied. Finally, we observe that X m∈M∞e−|m|=X m∈M∞ |m|=2l(2d1+d2)e−|m|=X m∈M∞ |m|=2l(2d1+d2)e−2l(2d1+d2)≤X l≥02l(2d1+d2)e−2l(2d1+d2)≤e e−1 Thate e−1≤15is obvious. 2.is an easy observation from construction. We prove 3.using induction. It holds trivially for l= 0. Let the statement be true for a given l. Now, let ml+1be an element of Ml+1. As previously, observe that either∃ml∈ M l+1\Mlsuch that K∈ml, or by Definition 1, K∈S(m, k)for some pair m, k. In the former case, ∃{K1, . . . , K l}such that K⊂Ki. We set Kl+1=Kl, completing the proof. 39 The later case can again be subdivided into two distinct cases. Either K∈m\k, in which case, the proof proceeds similarly to the previous step, or K∈ {k1, k2, . . . , k2d2+2d1}, in which case, we set Kl+1=kand the proof is complete. 4.We first recall the definition of m∨m′from eq. (A.1) m∨m′=[ K′∈m′ m∨K′ where m∨K′:= K′∩K:K∈m, K′∩K̸=Ø . For any two dyadic partitions mandm′let Sagree(m, m′) := K:K∈mandK∈m′ . Observe from Definition 1 that if K′∈m′andK′/∈m, the it is constructed by dyadically partitioning some element of m. Let that element be K, and we have K∩K′=K′. Observe that if there exists another K⋆∈msuch that K⋆∩K′=K′, then either K⊂K⋆orK⋆⊆K. To avoid overcounting, we always let Kbe the smallest such set and write following definition. Sdisagree (K, m′) := K′:K′∈m′, K′/∈mandK′⊂Kfor some smallest K∈m . Sdisagree (K′, m)can be defined similarly. 
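To illustrate the refinement m ∨ m′ just defined and the cardinality bound of part 4, here is a one-dimensional sketch (interval partitions of [0,1); the paper's partitions are dyadic cubes in higher dimension, and the breakpoints below are made-up examples):

```python
from fractions import Fraction

def refine(m, mp):
    """Common refinement m ∨ m': all non-empty pairwise
    intersections of intervals (given as (start, end)) from
    the two partitions."""
    out = []
    for a0, a1 in m:
        for b0, b1 in mp:
            lo, hi = max(a0, b0), min(a1, b1)
            if lo < hi:
                out.append((lo, hi))
    return out

def dyadic(breaks):
    """Partition of [0, 1) from sorted dyadic breakpoints."""
    pts = [Fraction(0)] + [Fraction(b) for b in breaks] + [Fraction(1)]
    return list(zip(pts[:-1], pts[1:]))

m = dyadic(["1/4", "1/2"])          # {[0,1/4), [1/4,1/2), [1/2,1)}
mp = dyadic(["1/2", "5/8", "3/4"])  # refines [1/2,1) differently
mv = refine(m, mp)
# cardinality bound in the spirit of Proposition 11, part 4:
assert len(mv) <= 2 * (len(m) + len(mp))
```

For interval partitions of [0,1) the refinement has at most |m| + |m′| − 1 pieces, so the bound holds comfortably.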
Since m∨m′is the set of non-empty intersections of m′with the elements of m, it follows that |m∨m′|=|Sagree(m, m′)|+ [ K∈m∩Sagree (m,m′)cSdisagree (K, m′) + [ K′∈m′∩Sagree (m,m′)cSdisagree (K′, m) We observe the following facts 1.|Sagree(m, m′)| ≤ |m|+|m′|, 2.| ∪K∈m∩Sagree (m,m′)cSdisagree (K, m′)| ≤ |m′|, 3.| ∪K′∈m′∩Sagree (m,m′)cSdisagree (K′, m)| ≤ |m|. This gives us the required result. B.12 Proposition 19 and proof of its upper bound Proposition 19. Assume the conditions of Theorem 3, and let ˜S⋆:= argmaxS∈m(2) refT(S),l≤n, and d1≥12. Then, 1. if n (logn)3≥cC−1 pT(S⋆)2 C∆ρ⋆(S⋆) +1 T(S⋆) log T ˜S⋆ . (B.26) Then,R(n)≤4/n 40 2. if n≤C−1 pT(S⋆)2 C∆ρ⋆(S⋆) +1 T(S⋆) , thenR(n)>1/2, and there exists a controlled Markov chain such that there exists no estimator ˆs satisfying E[h2 n(s,ˆs)]≤1 2(1 + π2). Broadly, our strategy is to pose the question of tightness of R(n)in terms of sample complexity, and then follow the usual techniques from [56] to show minimaxity. We first establish a few facts required for the proof: Fact 1. With NS:=Pn i=1 1[(Xi,ai)∈S],E[NS]≥n 2T(S). Proof of Fact 1. Recall from Lemma 25 that, E[NS]≥n T(S)−1 Since n≥2T ˜S⋆ , it follows from the definition of ˜S⋆thatn≥2T(S). The rest follows
by observing that for T(S)≥1,n/T(S)−1≥n/(2T(S)). Fact 2. T ˜S⋆ ≥4ld−1. Proof of Fact 2. This fact is proved using Fact 1. Summing over S ∈m(2) refon both sides of E[NS]≥n 2T(S), we get that, X S∈m(2) refE[NS] |{z} =:LHS≥X S∈m(2) refn 2T(S)≥X S∈m(2) refn 2T ˜S⋆= 2l(d1+d2)n 2T ˜S⋆ | {z } =:RHS. Observing that LHS =E X S∈m(2) refNS =E[n] =n, we can cancel nfrom both LHS and RHS to get T ˜S⋆ >2l(d1+d2)−1. The rest now follows. Fact 3. Cpn 4T(S⋆)2 4C∆ρ⋆(S⋆) +4+(log n)2 2T(S⋆)≥Cpn T(S⋆)2 (logn)2 C∆ρ⋆(S⋆) +1 T(S⋆) 41 Proof of Fact 3. We begin by observing that 4C∆ρ⋆(S⋆) +4 + (log n)2 2T(S⋆)=(logn)2 2 8 (logn)2C∆ρ⋆(S⋆) +8 (logn)2+ 1 T(S⋆)! ≤(logn)2 2 C∆ρ⋆(S⋆) +2 T(S⋆) ≤(logn)2 C∆ρ⋆(S⋆) +1 T(S⋆) , where the first inequality follows from the fact that 8/(logn)2≤1. The rest of the proof now follows. Proof of the Upper bound of Proposition 19. We first prove the first part. Let, n (logn)3≥cC−1 pT(S⋆)2 C∆ρ⋆(S⋆) +1 T(S⋆) log T ˜S⋆ . Then, n (logn)2≥cC−1 pT(S⋆)2 C∆ρ⋆(S⋆) +1 T(S⋆) log T ˜S⋆ logn ≥cC−1 pT(S⋆)2 C∆ρ⋆(S⋆) +1 T(S⋆) log T ˜S⋆ + log n =cC−1 pT(S⋆)2 C∆ρ⋆(S⋆) +1 T(S⋆) log nT ˜S⋆ This implies that Cpn T(S⋆)2 (logn)2 C∆ρ⋆(S⋆) +1 T(S⋆)≥log nT ˜S⋆ . Using Fact 3, we get Cpn 4T(S⋆)2 4C∆ρ⋆(S⋆) +4+(log n)2 2T(S⋆)≥log nT ˜S⋆ . Using Fact 2, we get Cpn 4T(S⋆)2 4C∆ρ⋆(S⋆) +4+(log n)2 2T(S⋆)≥log n2l(d1+d2)−1 . Now taking negative sign on both sides and exponentiating, we get 2l(d1+d2)exp −Cpn 4T(S⋆)2 4C∆ρ⋆(S⋆) +4+(log n)2 2T(S⋆) ≤4 n Now with R(n)as defined in Theorem 3, we get R(n)≤4/nwhich completes the proof. 42 B.13 Proof of the lower bound of Proposition 19 Assoud’s Reduction We begin with observing the simple fact that E[h2 n(s,ˆs)] =Z ε2∈(0,1)P(h2 n(s,ˆs)> ε2)dε2. So it is enough to show that without nsufficiently large and for any ε∈(0,1/32) P(h2 n(s,ˆs)> ε2)>1 2(1 + π2) for any estimator ˆsofs. We follow the recipe of Assoud’s reduction scheme [56, Chapter 2]. Without losing generality let χ×I= [0,1]d1+d2. 
LetDbe “some” class of controlled Markov chains (specified below). We use P to denote an element of D. One can write P= (s,{p(i)}i≥0), where sis the transition density and p(i)is the distribution of the control aiat time point igiven the previous history. Let ˆsbe any estimator of s. We will show that, as long as n≥cC−1 pT(S⋆)2 C∆ρ⋆(S⋆) +1 T(S⋆) , we have inf ˆssup P∈DP d2h2 n(ˆs, s)> ε2 >1 2(1 + π2). Construction of DLetd1be an even integer divisible by 3greater than 12. We simply let p(i)to be the uniform distribution on [0,1]d2. Now we carefully construct the transition densities. Let ιbe a known real number between 1/32and31/64and furthermore, let C={k(χ) 1, . . . , k(χ) d1}andI={k(I) 1, . . . , k(I) d2}be uniform partitions of χandIintod1andd2distinct cubes respectively. Let each integer l′such that k(I) l′∈ I, letξ(p)= (ξ(l′) 1, . . . , ξ(l′) d1/3)be some vector in {0,1}d1/3such that that ξ(l′)̸= (0, . . . , 0)for at least some l′. We consider s(x, l, y )to be piecewise constant
functions on the partition C × I × C . In other words, s(x, l, y ) =M(l′) i,jfor all x∈k(χ) i, y∈k(χ) j, l∈k(I) l′. We can represent M(l′) i,jby the following matrix which depends only on ιandξ(l′) M(l′) ι,ξ(l′)=d1×CιRξ(l′) JιLι , (B.27) where the blocks Cι∈Rd1/3×d1/3,Lι∈R2d1/3×2d1/3,Jι∈R2d1/3×d1/3, andRξ(l′)∈Rd1/3×2d1/3are given by Rξ(l′)=1 2 1 +ξ(l′) 1ε−2ι1−ξ(l′) 1ε−2ι3ι d1−33ι d1−3. . .3ι d1−3 3ι d1−33ι d1−31 +ξ(l′) 2ε−2ι1−ξ(l′) 2ε−2ι . . .3ι d1−3 .................. 3ι d1−3. . . . . . . . . 1 +ξ(l′) d1/3ε−2ι1−ξ(l′) d1/3ε−2ι , Lιis a matrix with every element equal to 3(1−ι)/2d1, and,CιandJιare matrices with every element equal to 3ι/d1. It can be verified by integrating that for each landx,s(x, l,·)is a valid transition density. 43 Some preliminary results Here, we derive some properties of CMC’s that are elements of Din the form of the following two results. Lemma 20. For each l∈k(I) l′, stationary distribution Π(l,ι)(·)of a Markov chain with transition density s(·, l,·)given in the previous construction is a piecewise constant function on C. Π(l,ι)(x) =  ι∀x∈Sd1/3 i=1k(χ) i ι(1−ξ(l′) 1ε−ι) 2+d1ι2 2(d1−3)+(1−ι)2 2∀x∈k(χ) d1/3+1 ι(1+ξ(l′) 1ε−ι) 2+d1ι2 2(d1−3)+(1−ι)2 2∀x∈k(χ) d1/3+2 ... ι(1−ξ(l′) d1/3ε−ι) 2+d1ι2 2(d1−3)+(1−ι)2 2∀x∈k(χ) d1−1 ι(1+ξ(l′) d1/3ε−ι) 2+d1ι2 2(d1−3)+(1−ι)2 2∀x∈k(χ) d1.(B.28) The proof follows by verifyingR Π(l,ι)(y)s(x, l, y )dy= Π(l,ι)(x)and is straightforward. Therefore, we omit it. Remark 9. Let(Xi, ai)be a controlled Markov chain with transition density sand the distribution over controls p(i)such that (s,{p(i)})∈ D. Since p(i)is uniform and independent of the history, one can easily see in the light of the previous lemma that the paired process (Xi, ai)forms a Markov chain with stationary distribution Π(x, l) = Π(l,ι)(x)for all x∈χandl∈I. Proposition 21. Let{(X0, a0), . . . , (Xn, an)}be a sample from a CMC which is an element of Dwith initial distribution Π(x, l) = Π(l,ι)(x). Then, 1. 
For any S ⊂ k(χ) i×k(I) jand any i∈ {1, . . . , d 1/3}, the expected return time Tas defined in definition 4 satisfies T(S) =4 5ι2Vol(S) 2. The α-coefficients of this controlled Markov chain satisfy αi,j≤(1−ι)j−i−1. In particular, cpas written in Assumption 1 is only depends upon ι. 3. Let Si,j=k(χ) i×k(I) jsuch that i∈ {1, . . . , d 1/3}. Then, ρ⋆(Si,j)(as defined in Theorem 3) satisfies ρ⋆(Si,j)<9(1−ι) 2d1d2. Simplification of the Sample Complexity We can now substitute upper bounds derived from Proposition 21 in the right hand side of eq. (3.4). For ease of perusal, we first rewrite the expression the right hand side of eq. (3.4) below C−1 pT(S⋆)2 C∆ρ⋆(S⋆) +1 T(S⋆) . We now note the following facts. 1.Cponly depends upon cpfrom Assumption 1, which in turn only depends upon ιfor the class of CMC’s we consider (by Proposition 21 part 2). 44 2.C∆only depends upon ι. 3. Since k(χ) i×k(I) jcreate d1d2uniform cubes of χ×I, for any Si,j=k(χ) i×k(I) j,Vol(Si,j) = (d1d2)−1. Using the previous facts, and substituting the bounds from Proposition 21 into the right hand side of eq. (3.4) we get C−1 pT(S⋆)2 C−1 pρ⋆(S⋆) +1 T(S⋆) ≤Cι C∆16d4 25ι4×9(1−ι) 2d2+4d2 5ι2. ≤Cιd1d2, where Cιis an appropriately
large constant depending only upon ι. All we need to show now is that unless n≥C′ ιd1d2for some constant C′ ι, there exists no estimator ˆssuch that P d2h2 n(s,ˆs)> ε2 ≤1 1 +π2. Separation of h2 n(·,·)Recall from the construction that χ= [0,1]d1andI= [0,1]d2. Furthermore, ιis known, and for all l∈k(I) j, j∈ {1, . . . , d 2}, the only unknown terms in the density s(x, l, y )are {ξ(j) 1, ξ(j) 2, . . . , ξ(j) d1/3}. Therefore, we only need to estimate d1d2/3many 0’s and 1’s. For ease of notation, we will use ξto denote this vector of d1d2/3many terms. To be precise ξ={ξ(1) 1, . . . , ξ(1) d1/3, . . . , ξ(d2) 1, . . . , ξ(d2) d1/3} Lets(ξ)to be the corresponding estimate of the density. Now let Ξto be another d1d2/3dimensional vector of 0’s and 1’s with corresponding density s(Ξ)such that ξ(l) 1̸= Ξ(l) 1 (B.29) for all l∈ {1, . . . , d }Now, we decompose h2 n. We write h2 n(s(ξ), s(Ξ)) =Z x,l,y∈[0,1]2d1+d2q s(ξ)(x, l, y )−q s(Ξ)(x, l, y )2 µχ(dy)νn(dx, dl ) >Z x∈[0,1]d1X j∈{1,...,d}Z l∈k(I) jZ y∈k(χ) 1q s(ξ)(x, l, y )−q s(Ξ)(x, l, y )2 µχ(dy) | {z } =:Aνn(dx, dl ). (B.30) We first carefully analyse the term Ain the previous expression. Z y∈k(χ) 1q s(ξ)(x, l, y )−q s(Ξ)(x, l, y )2 µχ(dy) =1 d1q d1(1 +ξ(1) 1ε−2ι)/2−q d1(1 + Ξ(1) 1ε−2ι)/22 +1 d1q d1(1−ξ(1) 1ε−2ι)/2−q d1(1−Ξ(1) 1ε−2ι)/22 . (B.31) Note the two following facts: 45 Fact 1.q d1(1 +ξ(1) 1ε−2ι)/2−q d1(1 + Ξ(1) 1ε−2ι)/22 >d1ε2 4. To show this fact, we write, q d1(1 +ξ(1) 1ε−2ι)/2−q d1(1 + Ξ(1) 1ε−2ι)/22 =d1 2q 1 +ξ(1) 1ε−2ι)−q 1 + Ξ(1) 1ε−2ι)2 =d1ε2(Ξ(1) j−ξ(1) 1)2 2q (1 +ξ(1) 1ε−2ι) +q (1 + Ξ(1) jε−2ι)2 =d1ε2 2q (1 +ξ(1) 1ε−2ι) +q (1 + Ξ(1) 1ε−2ι)2 >d1ε2 4, where the last line follows by the trivial inequalityp (1−2ι) +p (1 +ε−2ι)2 <2which holds for our admissible range of εandι. Fact 2. Similarly to Fact 1, q d1(1−ξ(1) 1ε−2ι)/2−q d1(1−Ξ(1) 1ε−2ι)/22 >d1ε2 4, Substituting this lower bound into the right hand side of eq. (B.31) we get A > d 1ε2/24, Substituting this lower bound of Ainto the right hand side of eq. 
(B.30) we get d2h2 n(s(ξ), s(Ξ))> d2Z x∈[0,1]d1X j∈{1,...,d}Z l∈k(I) 1A νn(dx, dl )≥X j∈{1,...,d}Z x∈[0,1]d1A νn(dx) =d2ε2 24. Letˆsbe any arbitrary estimate of sand let Ξ⋆∈ {0,1}d1d2/3such that Ξ⋆= argminΞh2 n(ˆs, s(Ξ)). For any Ξ0̸= Ξ ⋆satisfying eq. (B.29) d2ε2 24< d2h2 n(s(Ξ) 0, s(Ξ⋆))≤d2h2 n(s(Ξ) 0,ˆs) +d2h2 n(ˆs, s(Ξ⋆))≤2d2h2 n(s(Ξ) 0,ˆs) Therefore, {Ξ0: Ξ0̸= Ξ ⋆}|{z } =:E⊆ {h2 n(s(Ξ) 0,ˆs)> ε2/48}. (B.32) Lower Bounds on Touring Time One can see that for any random variable Tand a given number of samples n, P(h2 n(s,ˆs)> ε2)>P(h2 n(s,ˆs)> ε2|T> n)| {z } Probability of ErrorP(T> n) (B.33) 46 We define Tto be the first time all of the sets k(χ) i×k(I) j, i∈ {1, . . . , d 1/3}are visited. That is, T= min  p≥0 :\ i∈{1,...,d 1/3} 
p[ q=0n (Xq, aq)∈k(χ) i×k(I) jo  ̸=Ø  . The following lemma establishes the lower bound on T. Its proof is given in Section B.21. Lemma 22. Ifn < d 1d2/(6ι) log( d1d2/3)then,P(T> n)≥(1 +π2)−1. We now have all the tools to derive the lower bound. Lower Bound on the Probability of Error Throughout this part, we will assume that n < d 1d2/(6ι) log( d1d2/3), so thatP(T> n)≥(1 +π2)−1. Using eq. (B.30) and Lemma 22 we get, P(h2 n(s,ˆs)> ε2|T> n)P(T> n)>P(E|T> n)P(T> n) >1 1 +π2P(E|T> n) Now, if T> n, there exists i0, j0such thatPn i=1 1h (Xi,ai)∈k(χ) i0×k(I) j0i= 0. That is (Xi, ai)never visits the setk(χ) i0×k(I) j0during the first ntime points. Therefore, for any (x, y)∈k(χ) i0×k(I) j0the best estimate of s(x, l, y )is to choose uniformly over all possible values of ξ(j0) 1. Since {0,1}are the only two possibilities, P(E|T> n) =1 2. Therefore, P(h2 n(s,ˆs)> ε2|T> n)P(T> n)>1 2(1 + π2). The rest of the proof now follows. B.14 Proof of the upper bound in Lemma 15 Proof. We only prove need to prove sup m′∈M l3 4 1−1√ 2 H2(ˆsm,ˆsm′) +T(ˆsm,ˆsm′)−pen(m′) +pen(m)≤γ(m) γ(m)≤sup f∈sm′ m′∈M l3 4 1−1√ 2 H2(ˆsm, f) +T(ˆsm, f)−pen(m′) + 2pen(m), and the rest follows. The main objective of this proof is to construct a suitable set which allows us to exchange the order of the summation and the supremum in eq. (2.4). Let ˆsmbe the set of all piecewise constant functions on mwhose values matches with “some” histogram. Formally, ˆsm=(X K∈mˆsmK 1K,∀K∈m, m K∈ M l) . 47 Obviously, for every K∈mthere are multiple functions ˆf∈ˆsmwhich agree with ˆsmonK. The following procedure selects the coarsest one. For any function ˆf∈ˆsm, letmK(ˆf)be such that mK(ˆf) := argmin m′∈M ln |m′∨K|,ˆf 1K= ˆsm′ 1Ko . and set the partition m(ˆf) =S K∈mmK(ˆf). 
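The selection of the coarsest partition mK(ˆf) above can be illustrated in one dimension. The sketch below (an analogy, not the paper's exact construction) greedily merges equal-valued sibling dyadic cells to recover the coarsest dyadic partition on which a piecewise-constant function is unchanged:

```python
def coarsest_dyadic(values):
    """Given cell values on the finest dyadic grid of [0, 1)
    (length a power of two), greedily merge equal-valued sibling
    cells to obtain the coarsest dyadic partition carrying the
    same piecewise-constant function.
    Returns a list of (start, end, value) with float endpoints."""
    n = len(values)
    cells = [(i / n, (i + 1) / n, v) for i, v in enumerate(values)]
    merged = True
    while merged:
        merged, out, i = False, [], 0
        while i < len(cells):
            if i + 1 < len(cells):
                (a0, a1, v), (b0, b1, w) = cells[i], cells[i + 1]
                # siblings share a parent dyadic cell of twice the width
                same_parent = (a1 == b0 and a1 - a0 == b1 - b0
                               and (a0 / (a1 - a0)) % 2 == 0)
                if same_parent and v == w:
                    out.append((a0, b1, v)); i += 2; merged = True
                    continue
            out.append(cells[i]); i += 1
        cells = out
    return cells

# the histogram (1, 1, 2, 2) on [0, 1) collapses to two cells
assert coarsest_dyadic([1, 1, 2, 2]) == [(0.0, 0.5, 1), (0.5, 1.0, 2)]
```

The "smallest |m′ ∨ K|" criterion in the definition of mK(ˆf) plays the same role as the sibling-merging stopping rule here.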
We observe that γ(m) =X K∈msup m′∈M l3 4 1−1√ 2 H2(ˆsm 1K,ˆsm′ 1K) +T(ˆsm 1K,ˆsm′ 1K)−pen(m′∨K) + 2pen(m) = sup ˆf∈ˆsm3 4 1−1√ 2 H2(ˆsm 1K,ˆf 1K) +T(ˆsm 1K,ˆf 1K)−pen(m′) + 2pen(m) Furthermore, it follows by construction that if ˆf∈ˆsm, then ˆf∈sm(ˆf). Therefore, γ(m)≤sup f∈sm′ m′∈M l3 4 1−1√ 2 H2(ˆsm, f) +T(ˆsm, f)−pen(m′) + 2pen(m). B.15 Proof of Lemma 16 Proof. The proof will then follow by integrating both sides with respect to λn. It is enough to prove, √f2−√f1p¯f!2 s≤3√s−p f22 +√s−p f12 . This is equivalent to proving p f2−p f12 s≤3¯f√s−p f22 +√s−p f12 . It holds by algebra that s≤2h (√s−p¯f)2+¯fi . The left hand side can now be rewritten as p f2−p f12 s≤2p f2−p f12 (√s−q ¯f)2+¯f = 2¯fp f2−p f12" (√s−p¯f)2 ¯f+ 1# = 2¯f" (√s−p¯f)2 ¯fp f2−p f12 +p f2−p f12# (B.34) Observe that√f2−√f12/¯f≤(p max{f1, f2})2/¯fwhich in turn can be upper bounded by 2. Thus, (√s−p¯f)2 ¯fp f2−p f12 ≤2(√s−q ¯f)2 ≤2(√f2−√s)2+ (√f1−√s)2 2, 48 where the second inequality follows from the convexity of the function x→(√x−√s)2and Jensen’s inequality. Since the fact√f2−√f12≤2h√f2−√s2+√f1−√s2i holds algebraically, we now have (√s−p¯f)2 ¯fp f2−p f12 +p f2−p f12 ≤3h (p f2−√s)2+ (p f1−√s)2i . This, when combined with eq. (B.34)
completes the proof of our lemma. B.16 Proof of Lemma 17 Proof. The proof of this Lemma share similarities with the proofs of Propositions 2 and 3 in [9] or that of Claim B3 in [52]. To begin, observe that it is enough to show H2(s, f2) +T(f1, f2)− H2(s, f1)≤1√ 2 H2(s, f2) +H2(s, f1) +1 nn−1X i=0Zi(f1, f2). Starting from the left hand side, we substitute the expression for Tfrom eq. (2.3), expand all squares, and cancel relevant terms. To be precise, we can write, L.H.S =Zp f2−√s2 dλn−Zp f1−√s2 dλn+1 nn−1X i=0ψ(f1(Xi, ai, Xi+1), f2(Xi, ai, Xi+1)) +Zq ¯fp f2−p f1 dλn+Z (f1−f2)dλn. =−2ρ(f2, s) + 2ρ(f1, s) +1 nn−1X i=0ψ(f1(Xi, ai, Xi+1), f2(Xi, ai, Xi+1)) +Zq ¯fp f2−p f1 dλn =−2ρ(f2, s) + 2ρ(f1, s) +1 nn−1X i=0Zi(f1, f2) +Z ψ(f1, f2)s dλ n+Zq ¯fp f2−p f1 dλn All that is now left to show is −2ρ(f2, s) + 2ρ(f1, s) +Z ψ(f1, f2)dλn+Zq ¯fp f2−p f1 dλn can be bounded above from by 0.50.5 H2(s, f2) +H2(s, f1) . As before, we start with the left hand side and observe that −2ρ(f2, s) + 2ρ(f1, s) +Z ψ(f1, f2)s dλ n+Zq ¯fp f2−p f1 dλn =Z" −2p f2s+ 2p f1s+√f2−√f1p¯fs+q ¯fp f2−p f1# dλn =Z"s f2 ¯fq ¯f−√s2 −s f1 ¯fq ¯f−√s2# dλn ≤Zs f2 ¯fq ¯f−√s2 dλn ≤√ 2H2(¯f, s). 49 The first inequality follows trivially. The second inequality follows from the fact that f2/¯f≤2. Now, observe that the function x→(√x−√s)2is convex in xwhen x >0. Therefore, using Jensen’s inequality, we can write√ 2H2(¯f, s)≤ H2(f1, s) +H2(f2, s) /√ 2. This completes the proof. B.17 Sketch of Proofs of Corollaries 2 and 3 Proof. Corollary 2 is proved similarly to part 1 of the proof of [10, Proposition 3]. □ To prove Corollary 3, we first use Theorem 1 to get, CE H2(s,ˆs) ≤inf m∈M l E H2(s, Vm) +pen(m) . Now, it is easy to see that under part 1 of Assumption 2, EH2(s, Vm)≤ΓVol( A)d2 2(√s, Vm)where d2is theL2norm. Substituting this into the previous equation we get CE H2(s,ˆs) ≤inf m∈M l Vol(A)Γd2 2(√s, Vm) +pen(m) . 
(B.35) The rest of the proof follows similarly to part 2 of the proof of [10, Proposition 3] to prove Corollary 2. B.18 Proof of Proposition 21 Proof. We first prove 1. Recall the definition of atoms from [46] and observe that (Xi, ai)is a stationary Markov chain with atomsn k(χ) i×k(I) jo withi, j∈ {1, . . . , d }. It follows now from Kac’s theorem [46, Theorem 10.2.2] for any atom α, E[T(α)] =1R x,l∈αΠ(x, l)dxdl. (B.36) We simply verify that Π(x, l)>3ι/2for any (x, l)∈χ×I. Recall from hypothesis that ε <1/32. This implies that, for any ξ∈ {0,1} 1−ξε−ι >31/32−ι > ι whenever ι <31/64. Thus, 3(1−ξε−ι)ι 2>3ι2 2>3ι2 4. Similarly, for d≥12,d/(d−3)>1, and for ι∈(1/32,31/64),1−ι > ι . Thus dι2 2(d−3)>ι2 2>ι2 4, and,(1−ι)2 2>ι2 2>ι2 4. Finally, for ι∈(1/32,31/64),ι >5ι2/4. Thus, Π(·,·)>5ι2/4. Now, since any S ⊂αis also an atom (subsets of atoms are atoms by definition), the rest of
the proof follows. Turning to 2 let χ0=Sd1/3 i=1κ(χ) iandκ= 3ι. Observe that Vol(χ0) = 1 /3. Now using Lemma 6, we arrive at the conclusion. Turning to 3, we first recall the definition of ρ⋆from Theorem 3: ρ⋆(S) = sup imax( P((Xi, ai)∈ S),sup j>iq P((Xi, ai)∈ S,(Xj, aj)∈ S)) . (B.37) 50 Now we can upper bound each term separately. Fix i0andj0and consider the following joint probability P((Xi, ai)∈ Si0,j0,(Xj, aj)∈ Si0,j0) =P((Xj, aj)∈ Si0,j0|(Xi, ai)∈ Si0,j0)| {z } =:Term1P((Xi, ai)∈ Si0,j0)| {z } =:Term2 Since (Xi, ai)is a stationary Markov chain, it follows from Lemma 20 that Term2 = Π( Si0,j0) =Z x∈k(χ) i0,l∈κ(I) j0Π(ι,ξ(l))(x)dxdl <3(1 +1 32−ι) 2Z x∈k(χ) i0,l∈κ(I) j0dxdl =3(33−32ι) 64d1d2. For the Term1, we only show the case when j=i+ 1. When j > i + 1, the proof follows very similarly using Champman-Kolmogorov decompositions. There are 2possible combinations given by whether i0lies in the set {1, . . . , d 1/3}or not. Case 1. (i0≥d1/3 + 1) . Since ai+1is a uniform random variable independent of the history, P((Xi+1, ai+1)∈ Si0,j0|(Xi, ai)∈ Si0,j0) =Z l∈k(I) j0P Xi+1∈k(χ) i0|(Xi, ai)∈ Si0,j0 dl =P Xi+1∈k(χ) i0|(Xi, ai)∈ Si0,j0 d2. Next, we observe that the transition density s(x, l, y ) =3(1−ι) 2for all x, l∈ Si0,j0. In particular, it is independent of x, l. Thus, P Xi+1∈k(χ) i0|(Xi, ai)∈ Si0,j0 =Z x∈k(χ) i03(1−ι) 2dx=3(1−ι) 2d1. So we get, Term1 = 3(1 −ι)/(2d2)<9(1−ι)/(2d1d2)as required. Case 2. (i0≤d1/3). Similar to above, we only need to find P Xi+1∈k(χ) i0|(Xi, ai)∈ Si0,j0 . And by a reasoning similar to before, P Xi+1∈k(χ) i0|(Xi, ai)∈ Si0,j0 =3ι d−3<9(1−ι) 2d1d2 when ι∈(1/32,31/64)andd≥12. We finally get Term1 <9(1−ι)/2d1d2. This implies P((Xi, ai)∈ Si0,j0,(Xj, aj)∈ Si0,j0)<3(33−32ι) 64d1d2×9(1−ι) 2d1d2<9(1−ι) 2d1d22 in our given range of ιandd. It can be easily seen from the calculations of Case 1. that P((Xi, ai)∈ S)< 9(1−ι)/2d1d2. By substituting all upper bounds into eq. (B.37) that ρ⋆(Si0,j0)<9(1−ι) 2d1d2. 
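As a sanity check on the construction in eq. (B.27), one can verify numerically that each row of the block matrix sums to one, so that s(x, l, ·) = d1 · A integrates to a valid transition density over the d1 cells. The sketch below assumes a particular reading of the garbled display (entries 3ι/d1 in Cι and Jι, entries 3(1−ι)/(2d1) in Lι, and rows of Rξ with the pair (1 ± ξiε − 2ι)/2 and off-diagonal entries 3ι/(2(d1−3))); the numeric values of d1, ι, ε, ξ are illustrative only:

```python
def build_A(d1, iota, eps, xi):
    """Block matrix A with M = d1 * A as in eq. (B.27);
    xi is the {0,1}-vector of length d1 // 3."""
    k = d1 // 3
    off = 3 * iota / (d1 - 3)
    A = [[0.0] * d1 for _ in range(d1)]
    # C block (k x k) and R block (k x 2k)
    for i in range(k):
        for j in range(k):
            A[i][j] = 3 * iota / d1
        for j in range(2 * k):
            A[i][k + j] = 0.5 * off
        A[i][k + 2 * i] = 0.5 * (1 + xi[i] * eps - 2 * iota)
        A[i][k + 2 * i + 1] = 0.5 * (1 - xi[i] * eps - 2 * iota)
    # J block (2k x k) and L block (2k x 2k)
    for i in range(2 * k):
        for j in range(k):
            A[k + i][j] = 3 * iota / d1
        for j in range(2 * k):
            A[k + i][k + j] = 3 * (1 - iota) / (2 * d1)
    return A

d1, iota, eps = 12, 0.25, 0.02
A = build_A(d1, iota, eps, [1, 0, 1, 1])
for row in A:
    assert abs(sum(row) - 1.0) < 1e-12  # each s(x, l, .) integrates to 1
assert all(a >= 0 for row in A for a in row)
```

The row sums are 1 independently of ξ and ε, since the pair (1 + ξε − 2ι)/2 and (1 − ξε − 2ι)/2 always contributes 1 − 2ι.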
51 B.19 Proof of Theorem 2 We first prove the following proposition Proposition 23. Letm(2) refbe the partition of Ainto uniform cubes of edge length 2−l. Assume that {(Xi, ai)}n i=0is a sequence from a controlled Markov chain satisfying Assumption 1. Then, the histogram estimator ˆssatisfies the following risk bound CE h2 n(s,ˆs) ≤inf m∈M l h2 n(s, Vm) +pen(m) +R(n). where R(n) =X Sr∈m(2) refexp −Cpnν2 n(Sr) 4C∆supi,jp P((Xi, ai)∈ Sr,(Xj, aj)∈ Sr) + 4n−1+ 2νn(Sr)(log n)2! . is a remainder term. C∆is as in Assumption 1 and Cponly depends upon cpin Assumption 1 Proof. LetA′:={(x, l) :∃y∈χ,(x, l, y )∈A}. In words, A′is the set given by the first two coordinates of elements in A. Let m(1) refandm(2) refbe the partitions of AandA′into uniform cubes of edge-length 2l respectively. Let Ψbe the tail event given by Ψ ={∀f1, f2∈Vm(1) ref:h2 n(f1, f2)≤2H2(f1, f2).} We can decompose the risk as follows. E h2 n(s,ˆs) =E h2 n(s,ˆs) 1Ψ +E h2 n(s,ˆs) 1Ψc =Term 1 +Term 2 . Term 1: Observe that if m∈ M lthenVm⊆Vm(1) ref. Let ¯sm:= argminf1∈Vm{h2 n(s, f1)}. E h2 n(s,ˆs) 1Ψ ≤E h2 n(s,¯sm) 1Ψ
+E h2 n(¯sm,ˆs) 1Ψ ≤E h2 n(s,¯sm) 1Ψ + 2E H2(¯sm,ˆs) 1Ψ ≤E h2 n(s,¯sm) 1Ψ + 2E H2(s,ˆs) 1Ψ + 2E H2(¯sm, s) 1Ψ ≤E h2 n(s,¯sm) 1Ψ + 2E H2(s,ˆs) + 2E H2(¯sm, s) We bound E H2(s,ˆs) ≤infm∈M l E H2(s, Vm) +pen(m) by Theorem 1. Term 2: Since the h2 n(·,·)≤1, the second term can be bounded as follows E[ 1Ψc] =P(Ψc). Observe 52 that, Ψc={∃f1, f2∈Vm(1) ref:h2 n(f1, f2)≥2H2(f1, f2).} ⊆( ∃Sr∈m(2) ref:νn(Sr)≥2 nn−1X i=01Sr(Xi, ai)) ⊆[ Sr∈m(2) ref( νn(Sr)≥2 nn−1X i=01Sr(Xi, ai)) =[ Sr∈m(2) ref( −νn(Sr)≥2 nn−1X i=01Sr(Xi, ai)−2νn(Sr)) =[ Sr∈m(2) ref( −νn(Sr)≥2 nn−1X i=01Sr(Xi, ai)−2 nE"n−1X i=01Sr(Xi, ai)#) =[ Sr∈m(2) ref( −n 2νn(Sr)≥n−1X i=01Sr(Xi, ai)−E"n−1X i=01Sr(Xi, ai)#) . In the previous equation, the second equality follows since νn(Sr) =E{P1Sr(Xi, ai)/n}. Now it follows that, P(Ψc)≤X Sr∈m(2) refP −n 2νn(Sr)≥n−1X i=01Sr(Xi, ai)−E"n−1X i=01Sr(Xi, ai)#! . LetYi:= 1Sr(Xi, ai)−E[ 1Sr(Xi, ai)]and∨2:= supin Var(Yi) + 2P j≥iCov(Yi, Yj)o . Using the concentration inequality for α-mixing processes (Theorem 2) from [45] we get P(Ψc)≤X Sr∈m(2) refexp −Cpn2 4ν2 n(Sr) n∨2+1 +n 2νn(Sr)(log n)2! =X Sr∈m(2) refexp −Cpn2ν2 n(Sr) 4n∨2+4 + 2 nνn(Sr)(log n)2 =X Sr∈m(2) refexp −Cpnν2 n(Sr) 4∨2+4n−1+ 2νn(Sr)(log n)2 where Cpis a constant depending only upon cpas defined in Assumption 1. All that is left is to upper bound ∨2. We use the slightly stronger version of Davydov’s covariance bound for α-mixing processes. Its proof is in Section B.22. Lemma 24. IfY1andY2are two random variables adapted to Hi 0andH∞ i+j, such that I1= 1[Y1∈A]and I2= 1[Y2∈A]thenCov(I1, I2)≤p αi,jP(Y1∈A, Y 2∈A) Using Lemma 24, we get ∨2≤sup i  Var(Yi) + 2X j>iq αi,jP((Xi, ai)∈ Sr,(Xj, aj)∈ Sr)  . (B.38) 53 Since Yi= 1Sr(Xi, ai)−E[ 1Sr(Xi, ai)],Var(Yi)≤P((Xi, ai)∈ Sr) (1−P((Xi, ai)∈ Sr))≤P((Xi, ai)∈ Sr). It now follows from Assumption 1 that, ∨2≤ 1 +X j≥iαi,j sup imax( P((Xi, ai)∈ Sr),sup j≥iq P((Xi, ai)∈ Sr,(Xj, aj)∈ Sr)) ≤C∆ρ⋆(Sr). 
Therefore, P(Ψc)≤X Sr∈m(2) refexp −Cpnν2 n(Sr) 4C∆ρ⋆(Sr) + 4n−1+ 2νn(Sr)(log n)2 . This completes the proof. Proof of Theorem 2 Proof. We first upper bound h2(·,·). Letf, gbe two conditional densities. We observe that h2(f, g) =Z χ×I×χp f(x, l, y )−p g(x, l, y )2 ν(dx, dl )µχ(dy) =Z χ×I×χp f(x, l, y )−p g(x, l, y )2 (νn(dx, dl )−νn(dx, dl ) +ν(dx, dl ))µχ(dy) ≤Z χ×I2 (ν(dx, dl )−νn(dx, dl )) +Z χ×I×χp f(x, l, y )−p g(x, l, y )2 νn(dx, dl )µχ(dy) = Term1 + Term2 where the previous inequality follows from the trivial bound Z χp f(x, l, y )−p g(x, l, y )2 µχ(dy)≤2. Observe that Term1 =Z χ×I×χp f(x, l, y )−p g(x, l, y )2 νn(dx, dl )µχ(dy) =h2 n(f, g) Turning to Term2 , we write Term2 =Z χ×I(ν(dx, dl )−νn(dx, dl )) ≤Z {x,l:ν(dx,dl )−νn(dx,dl )>0}(ν(dx, dl )−νn(dx, dl )) ≤ ∥νn−ν∥TV=rn we get h2(f, g)≤h2 n(f, g) + 2rn Now following Proposition 23 we only need to upper bound R(n)where R(n) =X Sr∈m(2) refexp −Cpnν2 n(Sr) 4C∆supiP(Xi, ai∈ Sr) + 4n−1+ 2νn(Sr)(log n)2 . 54 Next, we produce a lower bound for νn. Recall from Definition 3 the
definition of rn rn=∥νn−ν∥TV. It follows that supA|νn(A)−ν(A)|=rnfor any measurable set A. Observe that this implies sup A|ν2 n(A)−ν2(A)|= sup A|νn(A)−ν(A)|(νn(A) +ν(A))≤2rn Consequently, sup A{νn(A)−ν(A)} ≤rnandinf A ν2 n(A)−ν2(A) ≥ −2rn. Now substituting the above lower bounds for ν2 n(Sr)andνn(Sr)it follows that, R(n)≤X Sr∈m(2) refexp −Cpnν2(Sr)−2nCprn 4C∆supiP(Xi, ai∈ Sr) + 4n−1+ 2ν(Sr)(log n)2+ 2rn(logn)2 . Therefore, we get R(n)≤X Sr∈m(2) refexp −Cpnν2(Sr)−2nCprn 4C∆supiP(Xi, ai∈ Sr) + 4n−1+ 2ν(Sr)(log n)2+ 2rn(logn)2 Observe that the term in the exponent of the right hand side of the previous equation is maximised by some small set Smin. Let Smin:= argmax Sr∈m(2) refexp −Cpnν2(Sr)−2nCprn 4C∆P((Xi, ai)∈ Sr) + 4n−1+ 2ν(Sr)(log n)2+ 2rn(logn)2 Then we get, R(n)≤2l(d1+d2)exp −Cpnν2(Smin)−2nCprn 4C∆P((Xi, ai)∈ Smin) + 4n−1+ 2ν(Smin)(log n)2+ 2rn(logn)2 where the inequality follows from the construction of m(2) ref. Observe that ν(Smin)≤1and(4+2(log n)2)n−1≤ 1forn≥5The rest of the proof follows using some simple algebra. B.20 Proof of Theorem 3 Proof. We first state the following lemma whose proof is in Section B.23. Recall the definition of T(·)in 3.3. Then we have, Lemma 25. For any S ⊆χ×I νn(S)≥1 T(S)−1 n. Using the previous lemma and the fact that n≥2T(S⋆)≥2T(Sr)for all Sr∈m(2) refwe get νn(Sr)≥1 T(Sr)−1 n≥1 2T(Sr). The rest of the proof follows by substituting the previous lower bound in Proposition 23. 55 B.21 Proof of Lemma 22 Proof. We introduce the notation χ′:=n (k(χ) 1×k(I) 1), . . . , (k(χ) d1/3×k(I) 1),(k(χ) 1×k(I) 2), . . . , (k(χ) d1/3×k(I) d2)o . Observe that Tcan be written as, T:=d1d2/3−1X Υ=0UΥ (B.39) where UΥis the time spent between the Υ-th and the Υ + 1 -th unique element visited in χ′. Next, we observe two facts. Firstly, observe that for any element (k(χ) t, k(I) l′)belonging to χ′we have P (Xi, ai)∈(k(χ) t, k(I) l′)|Hi−1 0=ℏi−1 0 =3ι d1d2 independent of any history Hi−1 0. 
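Since every cell of χ′ is hit with the same probability 3ι/(d1d2) regardless of history, T behaves like a coupon-collector time. The following sketch (with illustrative values of d1, d2, ι) computes E[T] exactly from the geometric decomposition of eq. (B.39) and checks the logarithmic lower bound used in eq. (B.41):

```python
import math

def expected_tour_time(d1, d2, iota):
    """Exact E[T] for the coupon-collector decomposition of
    eq. (B.39): U_Y ~ Geometric((K - Y) * 3*iota / (d1*d2))
    with K = d1*d2/3 cells left to collect."""
    K = d1 * d2 // 3
    p_hit = 3 * iota / (d1 * d2)  # chance of hitting any fixed cell
    return sum(1.0 / ((K - y) * p_hit) for y in range(K))

d1, d2, iota = 12, 6, 0.25
K = d1 * d2 // 3
ET = expected_tour_time(d1, d2, iota)
# harmonic-sum lower bound, as in eq. (B.41):
assert ET > (d1 * d2 / (3 * iota)) * math.log(K + 1)
```

Here E[T] = (d1d2/(3ι)) · H_K with H_K the K-th harmonic number, and H_K > log(K + 1) gives the bound.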
Secondly, observe that the probability of visiting a new state-control pair inχ′when Υunique states have already been visited is 3ι(d1d2/3−Υ)/d1d2. Together, these facts imply that UΥd=XΥwhere XΥ∼Geometricd1d2 3−Υ3ι d1d2 . (B.40) It follows from eq. (B.40) that, E[T] = d1d2 3ιd1d2/3−1X Υ=01 d1d2/3−Υ  where we have dropped the superscript lfrom Υ(l)for convenience. Rewriting the previous equation we get, E[T] =d1d2 3ιd1d2/3X Υ=11 Υ >d1d2 3ιlog (d1d2/3 + 1) . (B.41) where the last inequality follows from the Euler-Maclaurin (see for example, [2]) approximation of a sum by its integral. We also observe that, Var(UΥ) =d2k2 9ι2d1d2 3−Υ−2 1−d1d2 3−Υ3ι d1d2 . 56 The term inside the square brackets is a probability, and can be upper bounded by 1. Observe that when Υ≤d1d2/3−1we can upper bound Var(T)as Var(T)≤d1d2/3−1X Υ=0d2k2 9ι2d1d2 3−Υ−2 =d1d2/3X Υ=1d2k2 9ι21 Υ2 <d2k2 9ι2π2 6 <d2k2 9ι2π2 4. (B.42) where the second inequality follows from the fact thatP Υ≥11/Υ2=π2/6. Using Cantelli’s inequality [27, Equation 5], we obtain, for all 0< θ <E[T]/p Var(T), P T>d1d2 3ιlogd1d2 3+ 1 −θd1d2 3ιπ 2 ≥θ2 1 +θ2. From the equations B.41 and B.42, we get that E[T]/p Var(T) >(log(d1d2/3) + 1) /π. Substituting θ= (log( d1d2/3) + 1) /πwe get P T>d1d2 6ι logd1d2 3 +
1 ≥1 1 + π log(d1d2/3)+12>1 1 +π2. This proves the lemma. B.22 Proof of Lemma 24 Proof. Recall that we denoted our probability space by (Ω,F,F,P). For convenience of notation, we will denoteR ω∈Ω(·)P(dω)simply byR (·). We begin by writing Cov(I1, I2)explicitly and observing the upper bound Cov(I1, I2) =Z I1I2−Z I1Z I2 ≤Z I1I2=1 I1I2−Z I1Z I2 =Z I1I2 I1I2−Z I1Z I2 which follows because the term inside the square is negative unless I1I2= 1. The second inequality follows sinceR I1I2=1 I1I2−R I1R I2 ∈[0,1]. Similarly,  I1I2−Z I1Z I2 I1I2≤s I1I2−Z I1Z I2 I1I2. Now using the Cauchy-Schwarz inequality we get Zs I1I2−Z I1Z I2 I1I2≤sZ I1I2−Z I1Z I2Z (I1I2) The first term equals P(Y1∈A∩Y2∈A)−P(Y1∈A)P(Y2∈A), which can be trivially upper bounded by αi,j. This completes our proof. B.23 Proof of Lemma 25 Proof. We begin by fixing an S. Case I: (T(S) =∞)In this case, the left hand side is a positive real number and the right hand side becomes negative. Thus, the result holds trivially. We now turn to the non-trivial case. Case II: (T(S)<∞)Define the random variable {Z(p) S}and the filtration F′ pas, Z(0) S:= 0 Z(p) S:=Pp i=1τ(i) S T(S)−p F′ p:=FPp i=1τ(i) S.
Observe that E[Z(p) S|F′ p−1] =E[Pp i=1τ(i) S|F′ p−1] T(S)−p =E[Pp−1 i=1τ(i) S|F′ p−1] T(S)−(p−1) +E[τ(p) S|F′ p−1] T(S)−1 ≤E[Z(p−1) S|F′ p−1] +T(S) T(S)−1 =Z(p−1) S, 58 where the last inequality follows because E[τ(p) S|F′ p−1]≤T(S)by eq. (3.3) and the last equality follows because Z(p−1) SisF′ p−1measurable. It follows that, {Z(p) S}is a supermartingale. Now, define N:= min {p≤n+ 1 :pX i=1τ(i) S> n}. It can be seen easily that Nis a valid stopping time. Moreover, since the return times τ(i) S≥1P-almost everywhere, it easily follows that P(N≤n+ 1) = 1 . Therefore, it follows from Doob’s Optional Stopping Theorem for supermartingales [31, Theorem 7.1, page 495] that, E[ZN]≤E[Z0]. Since Z0= 0, we can write E"PN i=1τ(i) S T(S)−N# ≤0. This in turn implies E"PN i=1τ(i) S T(S)# ≤E[N]. LetNS:=Pn i=1 1[(Xi,ai)∈S]be the number of times the controlled Markov chain returned to the set Sinn time steps. Observe that we can write NS= max {p≤n:pX i=1τ(i) S≤n}. In other words, NS=N−1P-almost everywhere. It follows that, E"PN i=1τ(i) S T(S)# ≤E[NS] + 1. This in turn implies E"PN i=1τ(i) S T(S)#
https://arxiv.org/abs/2505.14458v1
arXiv:2505.14611v1 [cs.IT] 20 May 2025

Fisher-Rao distances between finite energy signals in noise

Franck Florin
Thales, France.

Abstract

This paper proposes to represent finite-energy signals observed in a given bandwidth as parameters of a probability distribution, and to use the information-geometrical framework to compute the Fisher-Rao distance between these signals, seen as distributions. The observations are represented by their discrete Fourier transform, which are modeled as complex Gaussian vectors with fixed diagonal covariance matrix and parametrized means. The parameters define the coordinate system of a statistical manifold. This work investigates the possibility of obtaining closed-form expressions for the Fisher-Rao distance. We study two cases: the general case representing any finite energy signal observed in a given bandwidth, and a parametrized example of observing an attenuated signal with a known magnitude spectrum and unknown phase spectrum; we calculate the Fisher-Rao distances for both cases. The finite energy signal manifold corresponds to the manifold of the Gaussian distribution with a known covariance matrix, and the manifold of known magnitude spectrum signals is a submanifold. We derive the expressions for the Christoffel symbols and the tensorial equations of the geodesics. This leads to geodesic equations expressed as second-order differential equations. We show that the tensor differential equations can be transformed into matrix equations. These equations depend on the parametric model but simplify to only two vectorial equations, which combine the magnitude and phase of the signal and their gradients with respect to the parameters. We compute closed-form expressions of the Fisher-Rao distances for both studied cases and show that the submanifold is non-geodesic, indicating that the Fisher-Rao distance measured within the submanifold is greater than in the full manifold.
Keywords: Fisher-Rao distance, Finite energy signal, Christoffel symbols, Fisher metric

1 Introduction

Time series analysis and signal processing are well-known applications of information geometry. Considering time series as realizations of statistical populations modeled by parametric probability distributions leads to the exploitation of the Fisher metric. The set of all the parametric probability distributions constitutes a statistical model which can be viewed as a statistical manifold. The Fisher metric relates the geometric structure of this manifold to the statistical estimation problem [1].

Stationary Gaussian time series are commonly studied through the statistical behavior of their Fourier expansion. Amari and Nagaoka derived the signal processing geometry from the spectral density function, assuming that the unweighted power cepstrum norm is finite [1]. Barbaresco reported information geometry of autoregressive (AR) models in their reflection coefficients [2]. Choi and Mullhaupt proved the correspondence between the information geometry of a signal filter and a Kähler manifold [3]. In the common approach, the signal is expressed as a stationary time series with constraints on its power spectrum. The constraints detail the conditions for which the signal is obtained as the output of a minimum-phase linear system. This includes AR and moving average (MA) models.
https://arxiv.org/abs/2505.14611v1
However, a large number of applications, including telecommunications, sonar, radar, electronic warfare, music, speech analysis, fault diagnosis, and others, are concerned with the acquisition of time series representing a finite energy signal observed through a noisy receiving channel. Moreover, often in these applications, the signal of interest not only has finite energy, but also has finite duration, which forbids any representation of it as a stationary random process.

In this work, we consider the problem of the parametric estimation of a finite energy signal mixed with additive stationary noise. In the considered applications, the background noise is stationary and Gaussian, but the signal can be considered as deterministic and unknown. Each signal has a parametric description that induces a unique statistical distribution in the statistical manifold. Let x be the vector of observations and ξ be the vector of signal parameters. The random stochastic behavior of the observations is described by the statistical distribution of the observations conditional on the parameters: p(x|ξ).

With the Fisher metric, it is possible to evaluate a statistical distance between two points represented by their respective parameters ξ1 and ξ2. This distance is known as the Fisher-Rao distance [4]. It provides an estimation of the dissimilarity between the two statistical populations p(x|ξ1) and p(x|ξ2). When the parameters are signal parameters, computing the Fisher-Rao distance between the two distributions aims at estimating the dissimilarity between the two corresponding signals, taking into account the distributions of the observations. If the distance is small, the distributions are similar and it is difficult to distinguish between the two signals. If the distance is large, the signals may be distinguished easily. Our objective is to compute the Fisher-Rao distance between two points ξ1 and ξ2 in the statistical manifold.
In the following, we use a concise expression and slightly imprecise language to refer to the Fisher-Rao distance between two statistical distributions corresponding to two different signals parameterized by two vectors ξ1 and ξ2. We will simply refer to it as the distance between the two signals ξ1 and ξ2.

Section 2 explains the method of information geometry commonly used to compute the Fisher-Rao distance based on the Fisher metric. This section references previous papers from the community [4] [5] [6] [7]. Although it may not be necessary to recall the method in detail, it also provides an opportunity to introduce some definitions and notational conventions.

Section 3 addresses the signal modeling and the expressions of the statistical distributions. The signal model has already been introduced in a previous paper [8]. The parametric modeling is discussed in detail. Two manifolds are considered: one is a submanifold of the other manifold.

Section 4 details the different steps to compute the Fisher-Rao distance, including the Christoffel symbols and the differential equations of the geodesic. This section provides the general expressions of the Fisher-Rao distances in the two manifolds identified in the previous section. Detailed calculation proofs are included in the appendices to facilitate understanding.

Section 5 examines a special situation involving the estimation of an attenuated signal with an unknown phase spectrum and a known magnitude spectrum.
The corresponding set of distributions constitutes a submanifold of the full manifold of all observed signals. A numerical example is presented, demonstrating that the corresponding submanifold is not a geodesic submanifold, which means that the Fisher-Rao distance in the submanifold is greater than the Fisher-Rao distance in the full manifold.

The conclusion restates the major findings, explains the relevance of the work and its limitations, and offers some perspectives for future research.

2 How to compute the Fisher-Rao distance

Before introducing the methodology to compute the Fisher-Rao distance, let's introduce some notational conventions that will be used throughout the paper. Vectors and matrices are highlighted in bold, while scalar values are not bold. [0]_P is the zero vector of size P.

We use the Einstein notation to suppress the summation symbols when dealing with tensor forms. This is of course related to the tensor operations. When the sum involves simple additions, such as over the frequency variable, we retain the sum symbol, as shown in equation 13.

Regarding the parameters, it is important to distinguish between ξ2, which refers to a specific value of the vector of parameters ξ, and ξ², which designates the second coordinate of the vector of parameters ξ. To avoid confusion between square expressions and the upper index 2, the square will be expressed after parentheses. For instance, (ξ²)² would be the square of the coordinate ξ².

When necessary, we will replace ξ = (φᵀ, ϕᵀ)ᵀ by φ and ϕ with their own indices. ξ is a real vector with N components. φ is a real vector with P components indexed by the symbols u or v, varying from 1 to P. ϕ is a real vector with N−P components indexed by the symbols q or r, varying from P+1 to N. This notation of indexes will allow us to use a Schouten-like convention for the notation of derivation, with: ∂_u = ∂/∂φ^u, ∂_v = ∂/∂φ^v, ∂_q = ∂/∂ϕ^q and ∂_r = ∂/∂ϕ^r.
At the receiver level, the signal is often time-delayed and attenuated. When we need to take these phenomena into account, we denote by α the attenuation coefficient, time and frequency independent, and by τ the time delay characterizing the reception at a reference location in the time frame of the receiver. As for the attenuation, the time delay is time and frequency independent, as no Doppler effect is assumed.

The method used to compute the Fisher-Rao distance is explained in several references [1] [4] [5] and is commonly used by the information geometry community [6] [7].

The first step is to get a parametric model describing the distribution of the observations p(x|ξ). The Fisher information matrix [g_ij] is derived from the log-likelihood as follows:

g_ij = E_ξ[ (∂/∂ξ^i) ln p(x|ξ) · (∂/∂ξ^j) ln p(x|ξ) ]   (1)

Then, based on the Fisher information matrix, the Christoffel symbols are computed from the equations:

∀m, i, j = 1, ..., N   Γ_{ij,m} := g_{mk} Γ^k_{ij} = (1/2) ( ∂g_{jm}/∂ξ^i + ∂g_{mi}/∂ξ^j − ∂g_{ij}/∂ξ^m )   (2)

Once the Christoffel symbols are available, they can be used in the following differential equations, whose solutions are the geodesics ˜ξ(ς) joining ξ1 = ˜ξ(0) and ξ2 = ˜ξ(1) (with ς ∈ [0, 1]):

∀m = 1, ..., N   d²ξ^m/dς² + Γ^m_{ij} (dξ^i/dς)(dξ^j/dς) = 0   (3)

Given the expressions of the geodesics ˜ξ(ς), the Fisher-Rao distance between the distribution at ξ1 = ˜ξ(0) and the distribution at ξ2 = ˜ξ(1) is expressed by the following integral:

d(ξ1, ξ2) = ∫₀¹ √( g_ij (d˜ξ^i/dς)(d˜ξ^j/dς) ) dς   (4)

Finding closed-form expressions for the Fisher-Rao distance is known to be a non-trivial task, and closed-form expressions are only available for
a few families of probability distributions [7]. For instance, the Fisher-Rao distance between normal multivariate probability distributions has no explicit expression in general. However, in specific cases, such as normal multivariate probability distributions with the same covariance matrix, a closed form can be obtained: it corresponds to the Mahalanobis distance between the means of the multivariate normal distributions [6].

Constraints on the parametric model of the probability distributions restrict the distributions to a submanifold, and the geodesic distance in the submanifold is generally bigger than the geodesic distance measured in the global manifold. An example is given in [6], where it is shown that the Mahalanobis distance between the means of two multivariate normal distributions with the same covariance matrix is greater than the Fisher-Rao distance between the same distributions assuming no constraint on the covariance matrix. This is due to the path between the distributions: it can be shortened when one does not assume that the covariance matrix is constant [6]. So, given a manifold and a submanifold within this manifold, we must consider that the geodesic distance in the submanifold is generally greater than the distance in the manifold. When examining the distance between two signals ξ1 and ξ2, we must carefully consider the constraints imposed by the parameters and the geometric structure of the manifold they induce.

In the following, we provide a model for the acquisition of a finite-energy signal through a noisy receiving channel. The model determines the statistical distributions of the observations. Based on this model, we develop the equations to obtain the Christoffel symbols. In particular, we use the block diagonal structure of the Fisher matrix, obtained from the signal model, to calculate simplified equations.
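The fixed-covariance Gaussian case just mentioned is simple enough to verify numerically: the Fisher metric of equation 1 is the constant Σ⁻¹, so every Christoffel symbol of equation 2 vanishes, the geodesics of equation 3 are straight lines in the mean parameter, and the length integral of equation 4 collapses to the Mahalanobis distance. A minimal sketch with assumed toy values (not from the paper):

```python
import numpy as np

# Sketch under the fixed-covariance assumption: for normal distributions with
# a common covariance Sigma and mean parameter mu, the Fisher metric is the
# constant g = Sigma^{-1}, all Christoffel symbols (eq. 2) vanish, and the
# geodesics (eq. 3) are the straight lines mu(s) = mu1 + s*(mu2 - mu1).

def fisher_rao_fixed_cov(mu1, mu2, Sigma, steps=1001):
    """Integrate eq. (4) numerically along the straight-line geodesic."""
    Sigma_inv = np.linalg.inv(Sigma)
    dmu = mu2 - mu1                      # constant tangent vector d(mu)/ds
    s_grid = np.linspace(0.0, 1.0, steps)
    speeds = np.array([np.sqrt(dmu @ Sigma_inv @ dmu) for _ in s_grid])
    return np.mean(speeds)               # integral of a constant over [0, 1]

mu1 = np.array([0.0, 1.0])               # assumed toy means
mu2 = np.array([2.0, -1.0])
Sigma = np.diag([0.5, 2.0])              # assumed common covariance

d_geodesic = fisher_rao_fixed_cov(mu1, mu2, Sigma)
d_mahalanobis = np.sqrt((mu2 - mu1) @ np.linalg.inv(Sigma) @ (mu2 - mu1))
print(np.isclose(d_geodesic, d_mahalanobis))  # prints True
```

The agreement of the two numbers is exactly the closed-form result quoted from [6] for the equal-covariance family.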
We show that, with additional hypotheses, in some cases, the geodesic equations can be solved to obtain a closed-form expression of the Fisher-Rao distance.

3 Signal parametric modeling

3.1 Observation of a finite energy signal

The model used to derive the expression of the statistical distribution has already been applied in previous work concerning the Cramer-Rao Lower Bound (CRLB) [8]. We detail here the various considered hypotheses and give the expression of the law of probability of the observations and their dependency on the parameters.

In the considered applications, the signal of interest depends on the parameter ξ and is a finite energy real signal over time t, for which we have ∫₋∞⁺∞ (s_ξ(t))² dt < ∞. Such a signal admits a Fourier expansion s_ξ(ν), depending on frequency ν. We examine the observations after applying the Fourier transform.

Because, for real signals, the Fourier transforms at negative frequencies are the complex conjugates of the Fourier transforms at positive frequencies, we limit the observations to the positive frequencies. The sensor has a limited bandwidth ˇB, which is supposed a continuous interval in R⁺. Due to the limited frequency bandwidth ˇB of the sensor, the time sampling effect and the Discrete Fourier Transform (DFT), the observation bandwidth is limited to a subset B of the positive frequencies: B ⊂ ˇB ⊂ R⁺. In practice this subset B is composed of N_B equispaced discrete values B = {ν1, ν2, ..., ν_{N_B}} and due to Parseval's theorem we can write ∑_{ν∈B} |s_ξ(ν)|² δν < ∞ (with δν = B/N_B; B is here the
length of the interval ˇB). Thus, the observation takes the form:

∀ν ∈ B   x(ν) = s_ξ(ν) + n(ν)   (5)

In this equation, x(ν), s_ξ(ν), and n(ν) are complex numbers. The total vector of the observations x is composed of all the complex variables x(ν) in the observation bandwidth: x = (x(ν))_{ν∈B}. This vector has N_B complex coordinates.

The signal is assumed to be deterministic, meaning that each (vector) value of the parameter ξ corresponds to only one function of frequency, s_ξ(ν).

Given a time sampling rate of the original time series, the number of frequencies N_B depends on the integration time, which is assumed to be greater than the signal duration or sufficiently long to capture all the information required from the signal. It is also assumed that the integration time is sufficiently long to assume the asymptotic independence, circularity and Gaussian behaviour of the noise samples.

Looking at the noise contributions in the frequency domain n(ν), we assume that the noise components after Fourier transform n(ν) are centered, circular Gaussian random variables, independent from one frequency to the other. The Gaussianity is true if the noise is naturally Gaussian, but it is also asymptotically true for other types of noise distributions. As examples, more general hypotheses are referred to in the following papers [9] [10]. The asymptotic independence for any pair of frequencies is also reported in [1]. We assume that the noise spectral power densities are known: ∀ν ∈ B γ0(ν) = E(n(ν)n*(ν)) (where * designates the complex conjugate). The hypothesis that the Gaussian complex noise is circular means: E(n(ν)n(ν)) = 0.

The statistical distributions are determined by the signal parameters and the noise characteristics. Assuming all the previous hypotheses, the observations x depend on the parameters through a parametric law of probability, where the parameters characterize the signal.
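The two defining properties of the circular Gaussian noise model above, E(n n*) = γ0 and E(n n) = 0, can be checked with a quick simulation. The sketch below assumes a single frequency bin with an arbitrary noise power γ0 = 2 (an illustrative value, not from the paper):

```python
import numpy as np

# Sanity check of the circular complex Gaussian noise model: independent
# real and imaginary parts, each of variance gamma0/2, give E[n n*] = gamma0
# (the known spectral power density) and E[n n] = 0 (circularity).
rng = np.random.default_rng(0)
gamma0 = 2.0                     # assumed noise power at one frequency bin
n_draws = 200_000

n = rng.normal(0, np.sqrt(gamma0 / 2), n_draws) \
    + 1j * rng.normal(0, np.sqrt(gamma0 / 2), n_draws)

power = np.mean(n * np.conj(n)).real   # estimates E[n n*], should be ~gamma0
pseudo = np.mean(n * n)                # estimates E[n n], should be ~0
print(round(power, 1), abs(pseudo) < 0.05)
```

Splitting γ0 evenly between the real and imaginary parts is exactly what makes the pseudo-covariance E(n n) vanish.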
The law of the total vector of the observations x = (x(ν))_{ν∈B} can be expressed as:

p(x|ξ) = ∏_{ν∈B} (1/(π γ0(ν))) exp( −|x(ν) − s_ξ(ν)|² / γ0(ν) )   (6)

The total vector of the observations x, which has N_B complex components, can be rewritten as a real vector X with 2N_B real components. With this notation, equation 6 is similar to the normal multivariate distribution as expressed in [6]:

p(X|ξ) = (1 / (2^{N_B} π^{N_B} √det(Σ))) exp( −(X − μ_ξ)ᵀ Σ⁻¹ (X − μ_ξ) / 2 )   (7)

where:
• N_B is the number of frequencies ν in B,
• Σ = diag(σ0²(n)) is a diagonal 2N_B × 2N_B matrix, such that: ∀k = 1, ..., N_B γ0(k) = 2σ0²(2k−1) = 2σ0²(2k),
• X is a 2N_B vector, with ∀k = 1, ..., N_B X_{2k−1} = Re{x(k)} and X_{2k} = Im{x(k)},
• μ_ξ is a 2N_B vector, with ∀k = 1, ..., N_B μ_{2k−1} = Re{s_ξ(k)} and μ_{2k} = Im{s_ξ(k)}.

3.2 Signal dependence on parameters

Equation 6 (or equivalently equation 7) defines a family of probability distributions S_Ξ when parameter ξ varies in a predefined set Ξ:

S_Ξ = { p_ξ = p(x|ξ) | ξ ∈ Ξ }   (8)

Additionally, we suppose that the model verifies the following regularity conditions: Ξ is an open subset of R^{2N_B}, the mapping ξ ↦ p_ξ is injective, and for any x ∈ C^{N_B} the function ξ ↦ p(x|ξ) is C^∞. With these hypotheses, we may consider S_Ξ as a statistical manifold (see [1], section 2.1, for more details). As a consequence, the geometry of the manifold lies in equation 6 (or in equation 7) and in the definition of the parameter set Ξ.

We look at two cases. The first case corresponds to the situation for which
in equation 7 the parameters are the components of the vector μ_ξ. In this case ξ is a vector with 2N_B components and ∀k = 1, ..., 2N_B ξ^k = μ_k. In other words: Ξ = R^{2N_B}. This means that all the signal components s_ξ(ν) vary freely. The signal variations determine a manifold, which we call the L²(B) manifold. This corresponds to all the signals with finite energy observed in the frequency bandwidth B. The dimension of the L²(B) manifold is 2N_B. The L²(B) manifold corresponds exactly to the manifold described by 2N_B multivariate normal distributions with the same diagonal covariance matrix Σ and different mean values.

The second case concerns a situation when ξ has N components, with N ≤ 2N_B, and the previous regularity conditions are verified. We call the manifold the L²(B,ξ) manifold. As ξ has N components, N is the dimension of the L²(B,ξ) manifold. The L²(B,ξ) manifold is a submanifold of the L²(B) manifold.

For all ν in B, the signal s_ξ(ν) is a complex value, which can be represented with a modulus (or magnitude) ρ_ξ(ν) and a phase ψ_ξ(ν):

s_ξ(ν) = ρ_ξ(ν) · exp(ı ψ_ξ(ν))   (9)

with (ı)² = −1, ψ_ξ(ν) ∈ ]−π, π], and ρ_ξ(ν) ∈ ]0, +∞[.

With expression 9, we see that the signal dependence on parameters affects magnitude and phase simultaneously. Each signal is characterized by its magnitude and phase as functions of frequency. Except when there is an additional constraint, such as minimum phase, in general, a signal cannot be uniquely determined solely by the phase or the magnitude of its Fourier transform [11]. Indeed, we assume that for the recovery of a signal, phase and magnitude spectrum functions can be combined freely. That is, any phase function ψ_ξ(ν) can be combined with any magnitude function ρ_{ξ′}(ν), with ξ and ξ′ being values arbitrarily chosen in Ξ. This allows us to particularize the model in the following way without loss of generality. The parameters ξ of the signal are split into two parts: ξ = (φᵀ, ϕᵀ)ᵀ.
One part, φ, is related to the magnitude ρ_φ(ν) of the signal, and the other part, ϕ, is related to the phase ψ_ϕ(ν) of the signal:

∀ν ∈ B   s_ξ(ν) = ρ_φ(ν) · exp(ı ψ_ϕ(ν))   (10)

In the following, the L²(B,ξ) manifold is described by equation 10, which refers to the case where ξ = (φᵀ, ϕᵀ)ᵀ. In this submanifold, the expression of the distributions is given by:

p(x|ξ) = ∏_{ν∈B} (1/(π γ0(ν))) exp( −|x(ν) − ρ_φ(ν) · exp(ı ψ_ϕ(ν))|² / γ0(ν) )   (11)

We have to assume that the model satisfies the regularity conditions. Ξ can be chosen to be an open subset of R^{2N_B} and the mapping ξ ↦ p_ξ can be injective. In addition, the model has to satisfy the hypothesis that for any x ∈ C^{N_B} the function ξ ↦ p(x|ξ) is C^∞. We can admit that the magnitude ρ_φ(ν) is C^∞ with respect to the parameters φ^u. We will also verify that the phase ψ_ϕ(ν) is C^∞ with respect to the parameters ϕ^r.

As we are examining the signal in the observation bandwidth B, it is important to consider the dependency of the phase ψ_ϕ(ν) on the frequency ν. Although the observation bandwidth B is assessed only for a finite number N_B of values of ν, it is possible to adopt a model valid for the continuous interval ˇB ⊃ B. A discontinuity in the phase with respect to ν can occur when the phase reaches an extreme value of the interval, −π or π, causing the phase to jump to the other
end of the interval. To reconstruct a continuous phase with respect to ν, we can perform phase unwrapping. We denote the unwrapped phase by ˘ψ_ϕ(ν). Consequently, the signal can be expressed as: s_ξ(ν) = ρ_φ(ν) · exp(ı ˘ψ_ϕ(ν)). So the notation ˘ψ_ϕ(ν) represents the unwrapped phase, which is continuous with respect to ν, and ψ_ϕ(ν) represents the same phase in the interval ]−π, π] (i.e. modulo 2π). The use of ˘ψ_ϕ(ν) is especially interesting when we want to adopt a polynomial model that is continuous and smooth with respect to ν.

4 General expressions of the Fisher-Rao distances

4.1 Fisher-Rao distance in the L²(B) manifold

It is known from [6] that, in the 2N_B-dimensional manifold composed of multivariate normal distributions with a common covariance matrix Σ, the Fisher-Rao distance between two distributions parametrized respectively by μ_{ξ1} and μ_{ξ2} is equal to the Mahalanobis distance:

d_M(μ_{ξ1}, μ_{ξ2}) = √( (μ_{ξ2} − μ_{ξ1})ᵀ Σ⁻¹ (μ_{ξ2} − μ_{ξ1}) )   (12)

With the notation of equations 7 and 9, the Mahalanobis distance corresponds to the Fisher-Rao distance in the L²(B) manifold. It can be rewritten as follows:

d_{L²,B}(ξ1, ξ2) = √( ∑_{ν∈B} (2/γ0(ν)) ( (ρ_{ξ2}(ν))² + (ρ_{ξ1}(ν))² − 2 ρ_{ξ2}(ν) · ρ_{ξ1}(ν) · cos(ψ_{ξ2}(ν) − ψ_{ξ1}(ν)) ) )   (13)

4.2 Geodesic equations in the L²(B,ξ) submanifold

To compute the geodesic equations in the L²(B,ξ) submanifold, we use equations 1, 2, 3 and 11. The expressions giving the Fisher metric and the equations giving the Christoffel symbols are detailed in the appendices A and B.
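Before turning to the submanifold, the equivalence of the Cartesian form of equation 12 and the magnitude/phase form of equation 13 can be checked numerically. The sketch below uses assumed random toy spectra, with Σ⁻¹ built from the relation γ0(k) = 2σ0²(2k−1) = 2σ0²(2k) of equation 7:

```python
import numpy as np

# Check that the Mahalanobis distance on the stacked real/imaginary vector X
# (eq. 12) equals the magnitude/phase expression (eq. 13) for the same signals.
rng = np.random.default_rng(1)
NB = 8
gamma0 = rng.uniform(0.5, 2.0, NB)                 # known noise densities
rho1, rho2 = rng.uniform(0.1, 1.0, (2, NB))        # toy magnitudes
psi1, psi2 = rng.uniform(-np.pi, np.pi, (2, NB))   # toy phases

s1 = rho1 * np.exp(1j * psi1)
s2 = rho2 * np.exp(1j * psi2)

# eq. (12): interleaved real vector, Sigma^{-1} diagonal entries are 2/gamma0
X1 = np.column_stack([s1.real, s1.imag]).ravel()
X2 = np.column_stack([s2.real, s2.imag]).ravel()
Sigma_inv_diag = np.repeat(2.0 / gamma0, 2)
d_mahalanobis = np.sqrt((X2 - X1) @ (Sigma_inv_diag * (X2 - X1)))

# eq. (13): magnitude/phase form
d_polar = np.sqrt(np.sum((2.0 / gamma0) *
                         (rho2**2 + rho1**2
                          - 2 * rho2 * rho1 * np.cos(psi2 - psi1))))
print(np.isclose(d_mahalanobis, d_polar))  # prints True
```

The identity behind the check is simply |s2(ν) − s1(ν)|² = ρ2² + ρ1² − 2ρ1ρ2 cos(ψ2 − ψ1) at each frequency bin.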
Theorem 1 (Linearly Dependent Gradients (LDG) theorem). The geodesic equations in the L²(B,ξ) submanifold reduce to the following two vectorial equations:

∑_{ν∈B} (2/γ0(ν)) ( d²ρ_φ/dς²(ν) − ρ_φ(ν) (dψ_ϕ/dς(ν))² ) ∇_φ ρ_φ(ν) = [0]_P   (14)

∑_{ν∈B} (2/γ0(ν)) ( (ρ_φ(ν))² d²ψ_ϕ/dς²(ν) + 2 ρ_φ(ν) (dρ_φ/dς(ν)) (dψ_ϕ/dς(ν)) ) ∇_ϕ ψ_ϕ(ν) = [0]_{N−P}   (15)

Proof of Theorem 1. Taking into account the notational convention from section 1, the equations of the geodesics 3 can be written as follows:

∀u = 1, ..., P   d²φ^u/dς² + Γ^u_{u′v′} (dφ^{u′}/dς)(dφ^{v′}/dς) + Γ^u_{q′r′} (dϕ^{q′}/dς)(dϕ^{r′}/dς) = 0

∀q = P+1, ..., N   d²ϕ^q/dς² + Γ^q_{q′r′} (dϕ^{q′}/dς)(dϕ^{r′}/dς) + 2·Γ^q_{q′u′} (dϕ^{q′}/dς)(dφ^{u′}/dς) = 0

We can multiply the first set of equations by [g_{uv}] and the second set by [g_{qr}], and we get:

∀u = 1, ..., P   [g_{uv}] d²φ^v/dς² + Γ_{u′v′,u} (dφ^{u′}/dς)(dφ^{v′}/dς) + Γ_{q′r′,u} (dϕ^{q′}/dς)(dϕ^{r′}/dς) = 0   (16)

∀q = P+1, ..., N   [g_{qr}] d²ϕ^r/dς² + Γ_{q′r′,q} (dϕ^{q′}/dς)(dϕ^{r′}/dς) + 2·Γ_{q′u′,q} (dϕ^{q′}/dς)(dφ^{u′}/dς) = 0   (17)

Taking into account the definition of the derivation, we can remark that, for all ν ∈ B:

∂_u ρ_φ(ν) · dφ^u/dς = (∂ρ_φ/∂φ^u)(ν) dφ^u/dς = dρ_φ/dς(ν)

∂_r ψ_ϕ(ν) · dϕ^r/dς = (∂ψ_ϕ/∂ϕ^r)(ν) dϕ^r/dς = dψ_ϕ/dς(ν)

And:

d²ρ_φ/dς²(ν) = d(dρ_φ/dς(ν))/dς = ∂_{v′}( dρ_φ/dς(ν) ) · dφ^{v′}/dς = ∂_{v′}( ∂_{u′} ρ_φ(ν) · dφ^{u′}/dς ) · dφ^{v′}/dς

which gives:

d²ρ_φ/dς²(ν) = ∂²_{v′u′} ρ_φ(ν) · (dφ^{v′}/dς)(dφ^{u′}/dς) + ∂_{u′} ρ_φ(ν) · d²φ^{u′}/dς²

And, in the same way, we can obtain the following equality:

d²ψ_ϕ/dς²(ν) = ∂²_{q′r′} ψ_ϕ(ν) · (dϕ^{q′}/dς)(dϕ^{r′}/dς) + ∂_{q′} ψ_ϕ(ν) · d²ϕ^{q′}/dς²

When replacing the expressions of g_ij and Γ_{ij,k} in equations 16 and 17 by their expressions from equations A1, A2 and B5, B6, B7, B8, and using the previous equalities, we obtain:

∀u = 1, ..., P   ∑_{ν∈B} (2/γ0(ν)) ∂_u ρ_φ(ν) ( d²ρ_φ/dς²(ν) − ρ_φ(ν) (dψ_ϕ/dς(ν))² ) = 0

∀q = P+1, ..., N   ∑_{ν∈B} (2/γ0(ν)) ∂_q ψ_ϕ(ν) ( (ρ_φ(ν))² d²ψ_ϕ/dς²(ν) + 2 ρ_φ(ν) (dρ_φ/dς(ν)) (dψ_ϕ/dς(ν)) ) = 0

These two equations can be rewritten with the gradients ∇_φ ρ_φ and ∇_ϕ ψ_ϕ:

∑_{ν∈B} (2/γ0(ν)) ( d²ρ_φ/dς²(ν) − ρ_φ(ν) (dψ_ϕ/dς(ν))² ) ∇_φ ρ_φ(ν) = [0]_P

∑_{ν∈B} (2/γ0(ν)) ( (ρ_φ(ν))² d²ψ_ϕ/dς²(ν) + 2 ρ_φ(ν) (dρ_φ/dς(ν)) (dψ_ϕ/dς(ν)) ) ∇_ϕ ψ_ϕ(ν) = [0]_{N−P}

□

Remark 1 (LDG theorem significance). These equations imply that, respectively for the magnitude and the phase, the gradients with respect to the parameters, expressed for all different frequencies, are linearly dependent: there exists a linear combination of the gradients that equals zero, with coefficients that
are not all zero. In itself, this property may not always mean much, because the numbers of coordinates of the gradients (P and N−P) are often lower than the number of gradient vectors (N_B). However, what is important in these expressions is that, for the geodesic, the coefficients are expressed as derivatives with respect to the curvilinear coordinate.

4.3 Fisher-Rao distance in the L²(B,ξ) submanifold

Proposition 2. With ˜ρ_φ and ˜ψ_ϕ solutions of the geodesic equations, the Fisher-Rao distance in the L²(B,ξ) submanifold can be computed as the following integral:

d_{L²,B,ξ}(ξ1, ξ2) = ∫₀¹ √( ∑_{ν∈B} (2/γ0(ν)) ( (d˜ρ_φ/dς(ν))² + ( ˜ρ_φ(ν) d˜ψ_ϕ/dς(ν) )² ) ) dς   (18)

Proof. We note ˜φ^u (or ˜φ^v) and ˜ϕ^q (or ˜ϕ^r) the solutions of the geodesic equations 14 and 15. They are expressed as functions of ς. From equation 4, taking into account that the Fisher matrix is a block diagonal matrix as demonstrated in appendix A, we derive the Fisher-Rao distance as the following integral:

d_{L²,B,ξ}(ξ1, ξ2) = ∫₀¹ √( g_{uv} (d˜φ^u/dς)(d˜φ^v/dς) + g_{qr} (d˜ϕ^q/dς)(d˜ϕ^r/dς) ) dς   (19)

We replace the g_{uv} and g_{qr} with their expressions from appendix A. Then we use the following equalities:

∂_u ρ_φ(ν) · dφ^u/dς = (∂ρ_φ/∂φ^u)(ν) dφ^u/dς = dρ_φ/dς(ν)

∂_r ψ_ϕ(ν) · dϕ^r/dς = (∂ψ_ϕ/∂ϕ^r)(ν) dϕ^r/dς = dψ_ϕ/dς(ν)

□

Remark 2. In order to compute the expression 18 and give a closed-form expression of the Fisher-Rao distance, it is necessary to solve equations 14 and 15. This can only be done when the parametric expressions of magnitude ρ_φ(ν) and phase ψ_ϕ(ν) are expressed in closed form, that is, when the model is particularized.

4.4 Comparison of the Fisher-Rao distances in the L²(B) manifold and in the L²(B,ξ) submanifold

We have to consider that the N-dimensional L²(B,ξ) submanifold is included in the 2N_B-dimensional manifold L²(B) composed of multivariate normal distributions with a common covariance matrix.
As a consequence, we expect that the Fisher-Rao distance d_{L²,B,ξ}(ξ1, ξ2) measured in the L²(B,ξ) submanifold is greater than or equal to the Fisher-Rao distance measured in the L²(B) manifold:

d_{L²,B,ξ}(ξ1, ξ2) ≥ d_{L²,B}(ξ1, ξ2)   (20)

When both quantities are equal, the submanifold L²(B,ξ) is said to be geodesic in the manifold L²(B).

5 Example with finite energy signals with known magnitude spectrum

5.1 Description of the modeling

To be able to solve the previous equations 14 and 15, we need to specify the dependencies of the magnitude and the phase on their parameters. We particularize the parametric description to define the L²(B,α) manifold, which is a specific case of the L²(B,ξ) manifold. We will have to make sure that the model verifies the regularity conditions.

In this example, we assume that the observation is within the frequency interval ˇB = ]ν0 − B/2, ν0 + B/2[. We suppose that the dependency of the magnitude ρ0(ν) ∀ν ∈ ˇB is known, except for a global attenuation α:

∀ν ∈ ˇB   ρ_φ(ν) = α · ρ0(ν)   (21)

Thus, there is only one parameter related to the magnitude: P = 1 and φ¹ = α.

In addition, we specify the model associated with the phase. The continuous phase is supposed to be polynomial for ν ∈ ˇB, that is:

∀ν ∈ ˇB   ˘ψ_ϕ(ν) = ∑_{r=2}^{N} ϕ^r · (ν)^{r−2}   (22)

We assume that the order of the polynomial, that is N−2, is lower than or equal to N_B − 1. It is sufficient to approximate all phase values in B as precisely as needed.

Remark 3. The number N−1 of coefficients ϕ^r of the polynomial function describing the
phase allows the phase to be approximated as precisely as possible. This means that any observed phase function can be described by the model. Thus, the model applies to all signals with known spectrum magnitude and unknown phase.

The phase of the model is the value modulo 2π of ˘ψ_ϕ(ν) in the interval ]−π, π]. We note: ψ_ϕ(ν) = ⟨˘ψ_ϕ(ν)⟩_{]−π,π]}. As the phase is defined modulo 2π, we limit the set of definition of the coefficient ϕ² to the interval ]−π, π]. The second coefficient is related to the time delay τ: ϕ³ = −2πτ.

The boundary conditions completing the geodesic equations are given by the known values of the parameters at the "points" ξ1 and ξ2, that are: α(0) = α1, ψ_ϕ(0) = ψ_{ϕ1}, α(1) = α2, ψ_ϕ(1) = ψ_{ϕ2}. ψ_{ϕ1} and ψ_{ϕ2} are polynomial functions of frequency ν.

Proposition 3. The model of the finite energy signals with known magnitude spectrum verifies the regularity conditions, proving that L²(B,α) is a manifold.

Proof. The number of parameters is N, with P = 1. The parameters belong to an open set:
• φ¹ = α ∈ ]0, +∞[, an open set.
• ∀r ∈ {3, ..., N} ϕ^r ∈ R, an open set.
• ϕ² ∈ ]−π, π]. It is not an open set, but we can limit the variation to the open set ]−π, π[ and treat the case ϕ² = π as a limit.

The mapping is injective:
• The mapping φ¹ ↦ ρ_φ(ν) for ν ∈ ˇB is injective.
• The mapping ϕ ↦ ˘ψ_ϕ(ν) for ν ∈ ˇB is injective.

The distribution is C^∞ with respect to the parameters:
• ρ_φ(ν) is C^∞ with respect to φ¹; ∂ρ_φ/∂φ¹(ν) = ρ0(ν); ∂²ρ_φ/∂²φ¹(ν) = 0.
• ˘ψ_ϕ(ν) is C^∞ with respect to any ϕ^r ∀r ∈ {2, ..., N}; ∂˘ψ_ϕ/∂ϕ^r(ν) = (ν)^{r−2}; ∂²˘ψ_ϕ/∂²ϕ^r(ν) = 0.

To totally fulfill the regularity conditions, we need to extend the properties of ˘ψ_ϕ(ν) to ψ_ϕ(ν), especially:
• ψ_ϕ(ν) is C^∞ with respect to any ϕ^r ∀r ∈ {2, ..., N};
• the mapping ϕ ↦ ψ_ϕ(ν) for ν ∈ B is injective.

In fact we have: ∀ν ∈ ˇB ∃k(ν) ∈ Z | ˘ψ_ϕ(ν) = ψ_ϕ(ν) + 2k(ν)π. This proves that the partial derivability properties with respect to ϕ are the same for ˘ψ_ϕ(ν) and ψ_ϕ(ν).
The injectivity won't be verified only if there exists a value ϕ0 for which we can find a value ϕ′0 such that: ∀ν ∈ ˇB ∃k(ν) ∈ N | ˘ψ_{ϕ0}(ν) = ˘ψ_{ϕ′0}(ν) + 2k(ν)π. This would mean that k(ν), considered as a function of ν, is polynomial for ν ∈ ˇB. The only solution is that k(ν) is constant. In this case, all the coefficients of the polynomial phases ˘ψ_{ϕ0}(ν) and ˘ψ_{ϕ′0}(ν) are the same except the lowest ones, which only differ modulo 2π: ϕ²₀ = ϕ′²₀ mod 2π. And by hypothesis, we have restricted the set of definition of ϕ²₀ and ϕ′²₀ to the interval ]−π, π]. □

5.2 Expression of the Fisher-Rao distance in the L²(B) manifold

Theorem 4. The Fisher-Rao distance in the L²(B) manifold, given by the Mahalanobis distance, takes the form:

d_{L²,B}(ξ1, ξ2) = √ω0 · √( (α2)² + (α1)² − 2 α2 · α1 · (1/ω0) ∑_{ν∈B} (2/γ0(ν)) (ρ0(ν))² cos(∆ψ(ν)) )   (23)

with:

ω0 = ∑_{ν∈B} (2/γ0(ν)) (ρ0(ν))²
∆ψ(ν) = ψ_{ϕ2}(ν) − ψ_{ϕ1}(ν)

Proof. Use equations 21 and 22 in equation 13. □

5.3 Geodesic equations in the L²(B,α) manifold

Based on equations 21 and 22, we get:

dρ_φ/dς(ν) = (dα/dς) · ρ0(ν)
d²ρ_φ/dς²(ν) = (d²α/dς²) · ρ0(ν)
∇_φ ρ_φ(ν) = ρ0(ν)
dψ_ϕ/dς(ν) = d˘ψ_ϕ/dς(ν) = ∑_{r=2}^{N} (dϕ^r/dς) · (ν)^{r−2}
d²ψ_ϕ/dς²(ν) = d²˘ψ_ϕ/dς²(ν) = ∑_{r=2}^{N} (d²ϕ^r/dς²) · (ν)^{r−2}
∇_ϕ ψ_ϕ(ν) = ∇_ϕ ˘ψ_ϕ(ν) = (1, ν, ..., ν^{N−2})ᵀ

With these hypotheses, the two geodesic equations become:

d²α/dς² − (α/ω0) ∑_{ν∈B} (2/γ0(ν)) (ρ0(ν))² (dψ_ϕ/dς(ν))² = 0   (24)

∀ν ∈ B   α d²ψ_ϕ/dς²(ν) + 2 (dα/dς) (dψ_ϕ/dς(ν)) = 0
https://arxiv.org/abs/2505.14611v1
(25)
From equations 24 and 25, we derive:
$$\frac{d^2\alpha}{d\varsigma^2}=\frac{K}{\alpha^3}\qquad(26)$$
$$\forall\nu\in B\qquad \frac{d\psi_\phi}{d\varsigma}(\nu)=\frac{1}{\alpha^2}\cdot c(\nu)\qquad(27)$$
with:
$$K=\frac{1}{\omega_0}\sum_{\nu\in B}\frac{2}{\gamma_0(\nu)}(\rho_0(\nu))^2(c(\nu))^2.$$
The solutions to these geodesic equations are computed in appendix C.

5.4 Expressions of the Fisher-Rao distance in the $L^2(B,\alpha)$ manifold

Theorem 5 The Fisher-Rao distance in the $L^2(B,\alpha)$ manifold is given by:
$$d_{L^2,B,\alpha}(\xi_1,\xi_2)=\sqrt{\omega_0}\cdot\sqrt{(\alpha_2)^2+(\alpha_1)^2-2\alpha_1\alpha_2\cos\left(\sqrt{\frac{1}{\omega_0}\sum_{\nu\in B}\frac{2}{\gamma_0(\nu)}(\rho_0(\nu))^2(\Delta\psi(\nu))^2}\right)}\qquad(28)$$
Proof See appendix D. $\square$

Remark 4 As the difference between both signal phases $\Delta\psi(\nu)$ belongs to the interval $]-\pi,\pi]$ ($\Delta\psi(\nu)=\langle\psi_{\xi_2}(\nu)-\psi_{\xi_1}(\nu)\rangle_{]-\pi,\pi]}$), we have to consider that $\delta=\sqrt{\frac{1}{\omega_0}\sum_{\nu\in B}\frac{2}{\gamma_0(\nu)}(\rho_0(\nu))^2(\Delta\psi(\nu))^2}$ belongs to the segment $[0,\pi]$. This is consistent with the fact that $\delta$ is also a difference of two arctan functions (see appendix C).

By rewriting the Fisher-Rao distances with the ratio of the magnitudes $\gamma=\frac{\alpha_2}{\alpha_1}$ and the signal-to-noise ratio of the signal at $\xi_1$, that is $SNR_1=\omega_0(\alpha_1)^2$, we derive the expressions of the Fisher-Rao distances in the $L^2(B)$ manifold of all finite energy signals and in the $L^2(B,\alpha)$ submanifold of finite energy signals with a known magnitude spectrum $\rho_0(\nu)$. From equations 23 and 28, we see that the Fisher-Rao distances in the $L^2(B)$ manifold and in the $L^2(B,\alpha)$ manifold are respectively:
$$d_{L^2,B}(\xi_1,\xi_2)=\sqrt{SNR_1}\cdot\sqrt{\gamma^2+1-2\gamma\cdot\frac{1}{\omega_0}\sum_{\nu\in B}\frac{2}{\gamma_0(\nu)}(\rho_0(\nu))^2\cos(\Delta\psi(\nu))}\qquad(29)$$
$$d_{L^2,B,\alpha}(\xi_1,\xi_2)=\sqrt{SNR_1}\cdot\sqrt{\gamma^2+1-2\gamma\cos\left(\sqrt{\frac{1}{\omega_0}\sum_{\nu\in B}\frac{2}{\gamma_0(\nu)}(\rho_0(\nu))^2(\Delta\psi(\nu))^2}\right)}\qquad(30)$$

5.5 Asymptotic behaviour of the Fisher-Rao distances

The Fisher-Rao distances are proportional to the square root of the signal-to-noise ratio. Therefore, we can study the geometry for a reference level $SNR_1=1$ (0 dB).
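As a sanity check (our own sketch, not part of the paper), equations 29 and 30 can be evaluated numerically; the function name and the uniform default weights are assumptions made here.

```python
import numpy as np

def fisher_rao_distances(snr1, gamma, dpsi, weights=None):
    """Evaluate equations (29) and (30) for phase differences dpsi over the
    bandwidth, with SNR at xi_1 equal to snr1 and gamma = alpha2/alpha1.
    weights holds the per-frequency terms 2*rho0(nu)^2/gamma0(nu)
    (uniform if None); dividing by their sum implements the 1/omega0 factor."""
    dpsi = np.asarray(dpsi, dtype=float)
    w = np.ones_like(dpsi) if weights is None else np.asarray(weights, float)
    w = w / w.sum()
    mean_cos = np.sum(w * np.cos(dpsi))       # weighted mean of cos(dpsi)
    delta = np.sqrt(np.sum(w * dpsi**2))      # weighted RMS phase difference
    d_l2b = np.sqrt(snr1 * (gamma**2 + 1 - 2*gamma*mean_cos))        # eq. (29)
    d_l2ba = np.sqrt(snr1 * (gamma**2 + 1 - 2*gamma*np.cos(delta)))  # eq. (30)
    return d_l2b, d_l2ba
```

For a constant phase difference the two values coincide, and for $|\Delta\psi(\nu)|\le\pi$ the second value is never smaller than the first.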
Additionally, the distances depend on the ratio $\gamma$ between the attenuations $\alpha_1$ and $\alpha_2$.

Proposition 6 When the phase differences $\Delta\psi(\nu)$ are small, both distances are equivalent:
$$d_{L^2,B,\alpha}(\xi_1,\xi_2)\underset{\Delta\psi(\nu)\to0}{\sim}d_{L^2,B}(\xi_1,\xi_2)\underset{\Delta\psi(\nu)\to0}{\sim}\sqrt{SNR_1}\cdot\sqrt{(\gamma-1)^2+\gamma\,\frac{1}{\omega_0}\sum_{\nu\in B}\frac{2}{\gamma_0(\nu)}(\rho_0(\nu))^2(\Delta\psi(\nu))^2}\qquad(31)$$
Proof Use a Taylor expansion. $\square$

Proposition 7 When the phase differences are constant across the bandwidth ($\Delta\psi(\nu)=\Delta\psi_0$), both distances are equal:
$$d_{L^2,B,\alpha}(\xi_1,\xi_2)=d_{L^2,B}(\xi_1,\xi_2)=\sqrt{SNR_1}\cdot\sqrt{\gamma^2+1-2\gamma\cos(\Delta\psi_0)}\qquad(32)$$
Proof Replace $\Delta\psi(\nu)$ by $\Delta\psi_0$ in equations 29 and 30. $\square$

Proposition 8 When the phase differences tend to a constant value across the bandwidth ($\Delta\psi(\nu)\to\Delta\psi_0$), both distances tend to the same value:
$$\lim_{\Delta\psi(\nu)\to\Delta\psi_0}d_{L^2,B,\alpha}(\xi_1,\xi_2)=\lim_{\Delta\psi(\nu)\to\Delta\psi_0}d_{L^2,B}(\xi_1,\xi_2)=\sqrt{SNR_1}\cdot\sqrt{\gamma^2+1-2\gamma\cos(\Delta\psi_0)}\qquad(33)$$
Proof Use $\Delta\psi(\nu)\to\Delta\psi_0$ in equations 29 and 30. $\square$

Proposition 9 For large phase variations, we have:
$$d_{L^2,B}(\xi_1,\xi_2)\approx\sqrt{SNR_1}\cdot\sqrt{\gamma^2+1}\qquad(34)$$
and:
$$d_{L^2,B,\alpha}(\xi_1,\xi_2)\approx\sqrt{SNR_1}\cdot\sqrt{\gamma^2+1-2\gamma\cos\left(\frac{\pi}{\sqrt3}\right)}\qquad(35)$$
Proof When the phase differences $\Delta\psi(\nu)$ vary quickly across the bandwidth, we can assume that:
$$\frac{1}{\omega_0}\sum_{\nu\in B}\frac{2}{\gamma_0(\nu)}(\rho_0(\nu))^2\cos(\Delta\psi(\nu))\approx E(\cos\theta)\qquad(36)$$
and
$$\cos\left(\sqrt{\frac{1}{\omega_0}\sum_{\nu\in B}\frac{2}{\gamma_0(\nu)}(\rho_0(\nu))^2(\Delta\psi(\nu))^2}\right)\approx\cos\left(\sqrt{E(\theta^2)}\right)\qquad(37)$$
where $\theta$ is a random angle between $-\pi$ and $\pi$. For $\theta$ uniform on $]-\pi,\pi]$, $E(\cos\theta)=0$ and $\sqrt{E(\theta^2)}=\pi/\sqrt3$, which gives equations 34 and 35.
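The limits in Proposition 9 can be checked by simulation; this is our own sketch, in which the uniform distribution of $\theta$ is the modelling assumption made explicit in the proof.

```python
import numpy as np

rng = np.random.default_rng(0)

def distances(gamma, dpsi):
    # Equations (29)-(30) with SNR_1 = 1 and uniform per-frequency weights.
    mean_cos = np.mean(np.cos(dpsi))
    delta = np.sqrt(np.mean(dpsi**2))
    d_l2b = np.sqrt(gamma**2 + 1 - 2*gamma*mean_cos)
    d_l2ba = np.sqrt(gamma**2 + 1 - 2*gamma*np.cos(delta))
    return d_l2b, d_l2ba

# Proposition 6: for small phase differences the two distances coincide.
small = 1e-4 * rng.uniform(-1, 1, 10_000)
d_b_small, d_ba_small = distances(1.0, small)

# Proposition 9: rapidly varying phases behave like uniform angles on ]-pi, pi],
# for which E cos(theta) = 0 and sqrt(E theta^2) = pi/sqrt(3).
theta = rng.uniform(-np.pi, np.pi, 1_000_000)
d_b_large, d_ba_large = distances(1.0, theta)
limit_ratio = np.sqrt(1 - np.cos(np.pi / np.sqrt(3)))   # about 1.11 for gamma = 1
```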
$\square$

5.6 Comparison of the Fisher-Rao distances with numerical applications

If we assume that the signal-to-noise ratio $\frac{2}{\gamma_0(\nu)}(\rho_0(\nu))^2$ is constant within the bandwidth, from equations 29 and 30 the ratio between the distances becomes:
$$\frac{d_{L^2,B,\alpha}(\xi_1,\xi_2)}{d_{L^2,B}(\xi_1,\xi_2)}=\sqrt{\frac{\gamma^2+1-2\gamma\cos\left(\sqrt{\frac{1}{N_B}\sum_{\nu\in B}(\Delta\psi(\nu))^2}\right)}{\gamma^2+1-2\gamma\,\frac{1}{N_B}\sum_{\nu\in B}\cos(\Delta\psi(\nu))}}\qquad(38)$$
For the numerical application, we specify the model associated with the phase. The signal is supposed to be time-delayed (with a phase varying linearly with the frequency), that is:
$$\breve\psi_\phi(\nu)=\psi_0-2\pi\tau\nu\qquad(39)$$
In addition, we assume that $\check B=\left[\nu_0-\frac{B}{2},\nu_0+\frac{B}{2}\right]$, and from equation 38 we get:
$$\frac{d_{L^2,B,\alpha}(\xi_1,\xi_2)}{d_{L^2,B}(\xi_1,\xi_2)}\approx\sqrt{\frac{\gamma^2+1-2\gamma\cos\left(\sqrt{\frac{1}{N_B}\sum_{\nu\in B}\left(\langle\Delta\psi_0-2\pi\nu\Delta\tau\rangle_{]-\pi,\pi]}\right)^2}\right)}{\gamma^2+1-2\gamma\,\frac{\sin(\pi B\Delta\tau)}{\pi B\Delta\tau}\cos(\Delta\psi_0-2\pi\nu_0\Delta\tau)}}\qquad(40)$$
where:
$$\Delta\psi_0=\psi_0(\xi_2)-\psi_0(\xi_1),\qquad \Delta\tau=\tau(\xi_2)-\tau(\xi_1),$$
and $\langle\Delta\psi_0-2\pi\nu\Delta\tau\rangle_{]-\pi,\pi]}$ is the angle restricted to
the interval between $-\pi$ and $\pi$. From equation 20, we expect the following property:
$$\frac{d_{L^2,B,\alpha}(\xi_1,\xi_2)}{d_{L^2,B}(\xi_1,\xi_2)}\geq1\qquad(41)$$
This will be verified in the figures.

5.7 Figures

We use the expressions of the distances from equation 40, with the reference signal-to-noise ratio $SNR_1=1$. Some of the parameters are fixed: $N_B=1000$, $\nu_0=0.25$. $\Delta\tau$ varies, and we draw the figures as functions of $B\Delta\tau$ (x-coordinate). For the other parameters, we study different cases:

• $B=0.5$, $\Delta\psi_0=0$, $\gamma=1$. This case focuses on wideband signals with the same attenuation and phase offset, but different time delays. As seen in equations 34 and 35, the ratio tends to a limit $\sqrt{1-\cos(\pi/\sqrt3)}\approx1.11$ for large delays. For small delays, the values of the distances remain the same.

[Fig. 1: Comparison of the two distances as functions of $B\Delta\tau$. The lowest distance is $d_{L^2,B}(\xi_1,\xi_2)$, dashed line. $B=0.5$, $\Delta\psi_0=0$, $\gamma=1$.]
[Fig. 2: Ratio of the two distances as a function of $B\Delta\tau$. $B=0.5$, $\Delta\psi_0=0$, $\gamma=1$.]

• $B=0.5$, $\Delta\psi_0=\pi/2$, $\gamma=1$. This case focuses on wideband signals with the same attenuation, but different time delays and phase offsets. For large delays, the ratio tends to 1.11 as in the previous case. For small delays, the values of the distances remain the same. Note that there is a decrease for some delays, due to the difference in phase offsets. In this case, the distances do not cancel out, because the mean value of the phase differences across the bandwidth never cancels. The minimum distance is not zero.

[Fig. 3: Comparison of the two distances as functions of $B\Delta\tau$. The lowest distance is $d_{L^2,B}(\xi_1,\xi_2)$, dashed line. $B=0.5$, $\Delta\psi_0=\pi/2$, $\gamma=1$.]
[Fig. 4: Ratio of the two distances as a function of $B\Delta\tau$. $B=0.5$, $\Delta\psi_0=\pi/2$, $\gamma=1$.]
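The sinc-type closed form in the denominator of equation 40 can be checked numerically (our own sketch, with arbitrary assumed parameter values):

```python
import numpy as np

# Average of cos(dpsi0 - 2*pi*nu*dtau) over nu in [nu0 - B/2, nu0 + B/2]
# versus the closed form sinc(B*dtau) * cos(dpsi0 - 2*pi*nu0*dtau),
# where sinc(x) = sin(pi*x)/(pi*x) is numpy's normalised sinc.
nu0, bandwidth, dpsi0, dtau = 0.25, 0.5, np.pi / 2, 3.0
nu = np.linspace(nu0 - bandwidth/2, nu0 + bandwidth/2, 100_001)
lhs = np.mean(np.cos(dpsi0 - 2*np.pi*nu*dtau))
rhs = np.sinc(bandwidth * dtau) * np.cos(dpsi0 - 2*np.pi*nu0*dtau)
```

The two agree to within the discretisation error of the frequency grid.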
• $B=0.5$, $\Delta\psi_0=0$, $\gamma=10$. This case focuses on wideband signals with the same phase offset, but different attenuations and time delays. As seen in equations 34 and 35, the ratio tends to a limit $\sqrt{1-\frac{20}{101}\cos(\pi/\sqrt3)}\approx1.02$ for large delays. For small delays, the values of the distances remain the same.

[Fig. 5: Comparison of the two distances as functions of $B\Delta\tau$. The lowest distance is $d_{L^2,B}(\xi_1,\xi_2)$, dashed line. $B=0.5$, $\Delta\psi_0=0$, $\gamma=10$.]
[Fig. 6: Ratio of the two distances as a function of $B\Delta\tau$. $B=0.5$, $\Delta\psi_0=0$, $\gamma=10$.]

• $B=0.25$, $\Delta\psi_0=0$, $\gamma=1$. This case focuses on lower-band signals with the same attenuation and phase offset, but different time delays. The ratio again tends to 1.11 for large delays. For small delays, the values of the distances remain the same.
[Fig. 7: Comparison of the two distances as functions of $B\Delta\tau$. The lowest distance is $d_{L^2,B}(\xi_1,\xi_2)$, dashed line. $B=0.25$, $\Delta\psi_0=0$, $\gamma=1$.]
[Fig. 8: Ratio of the two distances as a function of $B\Delta\tau$. $B=0.25$, $\Delta\psi_0=0$, $\gamma=1$.]

6 Conclusion

In this work, we focused on the parameter estimation of finite energy signals in the presence of additive noise. We assumed a scenario where the noise becomes asymptotically Gaussian after Fourier transform, and is independent from one frequency to another. The statistical observation vector consists of the Fourier transform outputs within a specified observation bandwidth. The noise spectral density is known, as is commonly the case in many applications. The observation vector depends on the noise random samples and on the signal parameters. The receiver's objective is to estimate these parameters and distinguish between two different signals. To assess the difficulty of distinguishing between two signals based on the statistical distributions of their observations, we considered the Fisher-Rao distance.

Based on the finite energy signal model, we obtained closed-form expressions for the Fisher-Rao distance. This result is significant because finding closed-form expressions for the Fisher-Rao distance is generally a non-trivial task. To achieve this, we utilized the block-diagonal structure of the Fisher matrix of the model. This matrix induces a Riemannian metric on the statistical manifold defined by the signal parameters. We estimated the Christoffel symbols of the Levi-Civita connection and the tensorial equations of the geodesics on the manifold. Using the signal modeling, we demonstrated that the geodesics are described by two combined partial differential equations involving the phase spectrum and the magnitude spectrum of the signal.
These equations come together as the LDG theorem, which expresses the linear dependence of the gradients of phase and magnitude relative to their respective parameters.

We examined two statistical situations corresponding to two statistical manifolds from the geometric science of information perspective. The first situation corresponds to the manifold of all possible finite energy signals observed in the given bandwidth, which we referred to as the $L^2(B)$ manifold. We demonstrated that this manifold corresponds to a multivariate normal distribution with the same covariance matrix but different mean values. For this case, the Fisher-Rao distance equates to the Mahalanobis distance. The second situation corresponds to the estimation of finite energy signals when the magnitude spectrum $\rho_0(\nu)$ is known, except for a global attenuation coefficient $\alpha$. This manifold was called the $L^2(B,\alpha)$ manifold. We demonstrated that this manifold is a non-geodesic submanifold of the $L^2(B)$ manifold.

The analysis of the Fisher-Rao distance expressions revealed noteworthy results regarding the estimation problem. The first simple property is that the Fisher-Rao distance is proportional to the square root of the signal-to-noise ratio at the chosen reference signal. This means that the geometric structure of the manifold is controlled by a homothetic transformation, scaled by
the square root of the signal-to-noise ratio. Another property is that the Fisher-Rao distance between two signals is controlled by the ratio of their respective energies. Additionally, the difference in the phase spectrum of two signals impacts their Fisher-Rao distance. When the difference in phase spectrum is small or does not vary throughout the bandwidth, the values of the distances remain the same in the $L^2(B,\alpha)$ submanifold and in the $L^2(B)$ manifold. This indicates that knowing the magnitude spectrum has a limited impact on phase parameter estimation. For a signal with an unknown constant phase, knowing the signal magnitude spectrum does not increase the Fisher-Rao distance; that is, it does not make the phase estimation easier. Conversely, when the difference in phase spectrum varies significantly throughout the bandwidth, the knowledge of the magnitude spectrum provides an advantage for signal parameter estimation. The constraint provided by the knowledge of the magnitude spectrum increases the Fisher-Rao distance. For signals with the same energy, the ratio is $\sqrt{1-\cos(\pi/\sqrt3)}\approx1.11$. This scenario occurs, for instance, when wide-band signals are separated by a sufficient time delay.

Although we obtained simplified equations for the geodesics, finding their general solution for any type of constraint in signal estimation remains an open problem. For instance, we did not consider specific constraints such as minimum phase, or specific parametric expressions inducing dependence between magnitude and phase. In the same way, other cases of controlled signal magnitude spectrum should be investigated.
Nevertheless, we anticipate that the Fisher-Rao distance between finite energy signals will help the characterization of signal databases. Additionally, the geodesic equations and the LDG theorem may provide valuable insights for some estimation techniques, such as analytic-informed neural networks.

Acknowledgements. Thank you to Antonin Bertrand and to Mona Florin for our positive conversations, and to the anonymous reviewers for their excellent feedback. These helped me a lot to clarify the presentation.

Declarations
• Funding: No funding was received for conducting this study.
• Conflict of interest/Competing interests: The author has no conflicts of interest to declare that are relevant to the content of this article.

Appendix A Fisher metric

From equations 1 and 10, we deduce:
$$g_{ij}=\sum_{\nu}\frac{2}{\gamma_0}\,\mathrm{Re}\left\{\partial_i\left[\rho_\varphi\cdot e^{-\imath\psi_\phi}\right]\,\partial_j\left[\rho_\varphi\cdot e^{\imath\psi_\phi}\right]\right\}$$
And we get:
$$g_{uv}=g_{vu}=\sum_{\nu}\frac{2}{\gamma_0}\,\partial_u\rho_\varphi\,\partial_v\rho_\varphi\qquad(A1)$$
$$g_{qr}=g_{rq}=\sum_{\nu}\frac{2}{\gamma_0}\,\rho_\varphi^2\cdot\partial_q\psi_\phi\cdot\partial_r\psi_\phi\qquad(A2)$$
$$g_{uq}=g_{qu}=0$$
This means that the Fisher matrix is a block-diagonal matrix:
$$[g_{ij}]=\begin{pmatrix}[g_{uv}]&[0]\\ [0]&[g_{qr}]\end{pmatrix}$$
In addition, we remark that:
$$\partial_q g_{uv}=\partial_r g_{uv}=0$$
and:
$$\partial_u g_{qr}=\partial_u g_{rq}=\sum_{\nu}\frac{4}{\gamma_0}\,\rho_\varphi\cdot\partial_u\rho_\varphi\cdot\partial_q\psi_\phi\cdot\partial_r\psi_\phi$$
(with the same expression replacing $u$ by $v$).

Appendix B Equations giving the Christoffel symbols

Combining equation 2 and equations A1, A2, we derive the equations of the Christoffel symbols. First, we obtain two equations corresponding to the Christoffel symbols related to the two diagonal blocks of the Fisher matrix:
$$\forall u,u',v'=1,\dots,P\qquad \Gamma_{u'v',u}:=g_{uv}\Gamma^v_{u'v'}=\frac12\left(\partial_{u'}g_{v'u}+\partial_{v'}g_{uu'}-\partial_u g_{u'v'}\right)$$
$$\forall q,q',r'=P+1,\dots,N\qquad \Gamma_{q'r',q}:=g_{qr}\Gamma^r_{q'r'}=\frac12\left(\partial_{q'}g_{r'q}+\partial_{r'}g_{qq'}-\partial_q g_{q'r'}\right)$$
In addition, we must also consider the case where non-zero Christoffel symbols are derived because $g_{qr}$ (which is related to the $\phi$ parameters) depends on $\varphi$. This leads to a derivative $\partial_u g_{qr}$ which
is not zero:
$$\forall u=1,\dots,P\quad\forall q',r'=P+1,\dots,N\qquad \Gamma_{q'r',u}:=g_{uv}\Gamma^v_{q'r'}=\frac12\left(-\partial_u g_{q'r'}\right)$$
and also:
$$\forall u=1,\dots,P\quad\forall q,q'=P+1,\dots,N\qquad \Gamma_{uq',q}:=g_{qr}\Gamma^r_{uq'}=\frac12\left(\partial_u g_{q'q}\right)\qquad(B3)$$
and:
$$\forall u=1,\dots,P\quad\forall q,q'=P+1,\dots,N\qquad \Gamma_{q'u,q}:=g_{qr}\Gamma^r_{q'u}=\frac12\left(\partial_u g_{qq'}\right)\qquad(B4)$$
We remark that equation B4 can be deduced from equation B3 if we consider the symmetry:
$$\forall i,j,k=1,\dots,N\qquad \Gamma^k_{ij}=\Gamma^k_{ji}$$
The total maximal number of non-zero Christoffel symbols in these equations is equal to $P^3+(N-P)^3+3P\cdot(N-P)^2$. This number is to be compared with $N^3$, the total number of Christoffel symbols. As an example, if we use 1 parameter for the magnitude and two parameters for the phase, we have a total of 27 Christoffel symbols, with a maximum of 21 non-zero symbols.

From the expression of the block-diagonal Fisher matrix, these equations can be written as two matrix linear equations:
$$[g_{uv}]\left[\Gamma^v_{u'v'}\,\middle|\,\Gamma^v_{q'r'}\right]=\left[\Gamma_{u'v',u}\,\middle|\,\Gamma_{q'r',u}\right]$$
$$[g_{qr}]\left[\Gamma^r_{q'r'}\,\middle|\,\Gamma^r_{q'u}\,\middle|\,\Gamma^r_{uq'}\right]=\left[\Gamma_{q'r',q}\,\middle|\,\Gamma_{q'u,q}\,\middle|\,\Gamma_{uq',q}\right]$$
In these expressions, the right-hand terms are given by ($\forall u,u',v'=1,\dots,P$, $\forall q,q',r'=P+1,\dots,N$):
$$\Gamma_{u'v',u}=\sum_{\nu\in B}\frac{2}{\gamma_0(\nu)}\,\partial_u\rho_\varphi(\nu)\cdot\partial^2_{u'v'}\rho_\varphi(\nu)\qquad(B5)$$
$$\Gamma_{q'r',u}=-\sum_{\nu\in B}\frac{2}{\gamma_0(\nu)}\,\rho_\varphi(\nu)\cdot\partial_u\rho_\varphi(\nu)\cdot\partial_{q'}\psi_\phi(\nu)\cdot\partial_{r'}\psi_\phi(\nu)\qquad(B6)$$
$$\Gamma_{q'r',q}=\sum_{\nu\in B}\frac{2}{\gamma_0(\nu)}\,(\rho_\varphi(\nu))^2\cdot\partial_q\psi_\phi(\nu)\cdot\partial^2_{q'r'}\psi_\phi(\nu)\qquad(B7)$$
$$\Gamma_{q'u,q}=\Gamma_{uq',q}=\sum_{\nu\in B}\frac{2}{\gamma_0(\nu)}\,\rho_\varphi(\nu)\cdot\partial_u\rho_\varphi(\nu)\cdot\partial_q\psi_\phi(\nu)\cdot\partial_{q'}\psi_\phi(\nu)\qquad(B8)$$
To solve these equations, it is only necessary to invert the two Fisher matrix blocks $[g_{uv}]$ and $[g_{qr}]$.
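A small numerical illustration (ours, using an assumed toy model and arbitrary parameter values): with $P=1$ magnitude parameter and two phase parameters, the Fisher matrix computed from equations A1, A2 is block diagonal, and the count formula above gives 21 possibly non-zero symbols out of $N^3=27$.

```python
import numpy as np

def max_nonzero_christoffel(n_params: int, p_params: int) -> int:
    # Maximal number of possibly non-zero Christoffel symbols for the
    # block-diagonal Fisher metric (P magnitude, N - P phase parameters).
    q = n_params - p_params
    return p_params**3 + q**3 + 3 * p_params * q**2

# Assumed toy model: s(nu) = alpha * rho0(nu) * exp(i * (phi2 + phi3 * nu)).
nu = np.linspace(0.0, 0.5, 64)
gamma0 = 1.0
rho0 = np.exp(-nu)                       # assumed reference magnitude spectrum
alpha, phi2, phi3 = 1.3, 0.4, -2.0
phase = np.exp(1j * (phi2 + phi3 * nu))
ds = [rho0 * phase,                      # derivative of s w.r.t. alpha
      1j * alpha * rho0 * phase,         # derivative of s w.r.t. phi2
      1j * nu * alpha * rho0 * phase]    # derivative of s w.r.t. phi3
g = np.array([[np.sum((2 / gamma0) * np.real(np.conj(di) * dj)) for dj in ds]
              for di in ds])
# The magnitude/phase cross terms vanish: g is block diagonal (1x1 and 2x2 blocks).
```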
From the previous equations, we can remark the following property:
$$\Gamma_{q'u,q}=\Gamma_{uq',q}=-\Gamma_{q'q,u}$$

Appendix C Detailed geodesic equations of the example

We first have to consider the equation:
$$\frac{d^2\alpha}{d\varsigma^2}=\frac{K}{\alpha^3}\qquad(C9)$$
The general solutions are:
$$\tilde\alpha(\varsigma)=\pm\frac{\sqrt{k_1^2\varsigma^2+2k_2k_1^2\varsigma+k_2^2k_1^2+K}}{\sqrt{k_1}}\qquad(C10)$$
which gives:
$$\frac{d\tilde\alpha}{d\varsigma}(\varsigma)=\pm\frac{k_1\sqrt{k_1}\,(k_2+\varsigma)}{\sqrt{k_1^2\varsigma^2+2k_2k_1^2\varsigma+k_2^2k_1^2+K}}\qquad(C11)$$
To determine the values of $k_1$ and $k_2$, we use the following equations:
$$\tilde\alpha(0)=\alpha_1\qquad(C12)$$
$$\tilde\alpha(1)=\alpha_2\qquad(C13)$$
These give the following:
$$\alpha_2^2k_1=k_1^2+2k_2k_1^2+k_2^2k_1^2+K\qquad(C14)$$
$$\alpha_1^2k_1=k_2^2k_1^2+K\qquad(C15)$$
We then look at the second equation:
$$\forall\nu\in B\qquad \frac{d\psi_\phi}{d\varsigma}=\frac{1}{\alpha^2}\cdot c(\nu)\qquad(C16)$$
Based on equation C10 and taking into account the boundary conditions $\psi_\phi(0)=\psi_{\phi 1}$, $\psi_\phi(1)=\psi_{\phi 2}$, we get, with $\Delta\psi_\phi(\nu)=\psi_{\phi 2}(\nu)-\psi_{\phi 1}(\nu)$:
$$\forall\nu\in B\qquad c(\nu)=\frac{\sqrt K\,\Delta\psi_\phi(\nu)}{\arctan\left(\frac{k_1}{\sqrt K}(k_2+1)\right)-\arctan\left(\frac{k_1}{\sqrt K}k_2\right)}\qquad(C17)$$
We may consider $\delta$, the weighted mean square difference of the phases (between points $\psi_{\phi 1}$ and $\psi_{\phi 2}$):
$$\delta=\sqrt{\frac{1}{\omega_0}\sum_{\nu\in B}\frac{2}{\gamma_0(\nu)}(\rho_0(\nu))^2(\Delta\psi(\nu))^2}.$$
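One can verify numerically (our own sketch, with arbitrary assumed constants) that $\tilde\alpha$ from equation C10 solves equation C9, by comparing a central finite-difference second derivative with $K/\tilde\alpha^3$:

```python
import math

# Arbitrary assumed constants for the check (any k1 > 0, K > 0 work).
k1, k2, K = 2.0, -0.3, 1.5

def alpha_geo(s: float) -> float:
    # Positive branch of the general solution (C10).
    return math.sqrt(k1**2*s**2 + 2*k2*k1**2*s + k2**2*k1**2 + K) / math.sqrt(k1)

def ode_residual(s: float, h: float = 1e-4) -> float:
    # Central finite-difference residual of d^2(alpha)/ds^2 = K/alpha^3  (C9).
    second = (alpha_geo(s + h) - 2*alpha_geo(s) + alpha_geo(s - h)) / h**2
    return second - K / alpha_geo(s)**3
```

The residual stays at the level of the finite-difference error along the whole geodesic.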
And we obtain the equation:
$$\left(\arctan\left(\frac{k_1}{\sqrt K}(k_2+1)\right)-\arctan\left(\frac{k_1}{\sqrt K}k_2\right)\right)^2=\delta^2\qquad(C18)$$
In order to compute the parameters $k_1$, $k_2$, $K$, we have to solve the equations:
$$\alpha_2^2-\alpha_1^2=k_1+2k_2k_1\qquad(C19)$$
$$K=\alpha_1^2k_1-k_2^2k_1^2\qquad(C20)$$
$$\arctan\left(\frac{k_1}{\sqrt K}(k_2+1)\right)-\arctan\left(\frac{k_1}{\sqrt K}k_2\right)=\delta\qquad(C21)$$
Using the formula $\tan(a-b)=(\tan a-\tan b)/(1+\tan a\tan b)$ on the last equation, C21, and then replacing $K$ from equation C20, we get:
$$\tan\delta=\frac{\frac{k_1}{\sqrt K}(k_2+1)-\frac{k_1}{\sqrt K}k_2}{1+\frac{k_1}{\sqrt K}(k_2+1)\cdot\frac{k_1}{\sqrt K}k_2}=\frac{k_1\sqrt K}{K+k_1^2k_2^2+k_1^2k_2}=\frac{k_1\sqrt K}{\alpha_1^2k_1-k_2^2k_1^2+k_1^2k_2^2+k_1^2k_2}=\frac{\sqrt K}{\alpha_1^2+k_1k_2}=\frac{\sqrt{\alpha_1^2k_1-k_2^2k_1^2}}{\alpha_1^2+k_1k_2}$$
Replacing $k_2$ with its value depending on $k_1$ and $\alpha_1,\alpha_2$, we obtain the equations:
$$(\tan\delta)^2\left(\alpha_1^2+\alpha_2^2-k_1\right)^2+\left(\alpha_2^2-\alpha_1^2-k_1\right)^2-4\alpha_1^2k_1=0\qquad(C22)$$
$$k_2=\frac{\alpha_2^2-\alpha_1^2}{2k_1}-\frac12\qquad(C23)$$
$$K=\alpha_1^2k_1-\frac14\left(\alpha_2^2-\alpha_1^2-k_1\right)^2\qquad(C24)$$
The solutions are:
$$k_1=\alpha_2^2+\alpha_1^2-2\alpha_1\alpha_2\cos\delta\qquad(C25)$$
$$k_2=\frac{-\alpha_1^2+\alpha_1\alpha_2\cos\delta}{\alpha_2^2+\alpha_1^2-2\alpha_1\alpha_2\cos\delta}\qquad(C26)$$
$$K=\alpha_1^2\alpha_2^2(\sin\delta)^2\qquad(C27)$$

Appendix D Fisher-Rao distances for the example

From equation 18, we derive the Fisher-Rao distance in the $L^2(B,\alpha)$ manifold:
$$d_{L^2,B,\alpha}(\xi_1,\xi_2)=\int_0^1\sqrt{\left(\frac{d\tilde\alpha}{d\varsigma}\right)^2\omega_0+\frac{1}{\tilde\alpha^2}\sum_{\nu\in B}\frac{2}{\gamma_0(\nu)}(\rho_0(\nu))^2(c(\nu))^2}\ d\varsigma\qquad(D28)$$
which becomes:
$$d_{L^2,B,\alpha}(\xi_1,\xi_2)=\sqrt{\omega_0}\int_0^1\sqrt{\left(\frac{d\tilde\alpha}{d\varsigma}\right)^2+\frac{K}{\tilde\alpha^2}}\ d\varsigma\qquad(D29)$$
and finally:
$$d_{L^2,B,\alpha}(\xi_1,\xi_2)=\sqrt{\omega_0k_1}\qquad(D30)$$
which is:
$$d_{L^2,B,\alpha}(\xi_1,\xi_2)=\sqrt{\omega_0}\cdot\sqrt{\alpha_2^2+\alpha_1^2-2\alpha_1\alpha_2\cos\left(\sqrt{\frac{1}{\omega_0}\sum_{\nu\in B}\frac{2}{\gamma_0(\nu)}(\rho_0(\nu))^2(\Delta\psi(\nu))^2}\right)}\qquad(D31)$$
This has to be compared
with the Fisher-Rao distance in the $L^2(B)$ manifold (Mahalanobis distance):
$$d_{L^2,B}(\xi_1,\xi_2)=\sqrt{\sum_{\nu\in B}\frac{2}{\gamma_0(\nu)}(\rho_0(\nu))^2\left(\alpha_2^2+\alpha_1^2-2\alpha_2\alpha_1\cos(\Delta\psi(\nu))\right)}\qquad(D32)$$
$$d_{L^2,B}(\xi_1,\xi_2)=\sqrt{\omega_0}\cdot\sqrt{\alpha_2^2+\alpha_1^2-2\alpha_2\alpha_1\cdot\frac{1}{\omega_0}\sum_{\nu\in B}\frac{2}{\gamma_0(\nu)}(\rho_0(\nu))^2\cos(\Delta\psi(\nu))}\qquad(D33)$$
The ratio between the Fisher-Rao distances is given by:
$$\frac{d_{L^2,B}(\xi_1,\xi_2)}{d_{L^2,B,\alpha}(\xi_1,\xi_2)}=\sqrt{\frac{\left(\frac{\alpha_2}{\alpha_1}\right)^2+1-2\frac{\alpha_2}{\alpha_1}\cdot\frac{1}{\omega_0}\sum_{\nu\in B}\frac{2}{\gamma_0(\nu)}(\rho_0(\nu))^2\cos(\Delta\psi(\nu))}{\left(\frac{\alpha_2}{\alpha_1}\right)^2+1-2\frac{\alpha_2}{\alpha_1}\cos\left(\sqrt{\frac{1}{\omega_0}\sum_{\nu\in B}\frac{2}{\gamma_0(\nu)}(\rho_0(\nu))^2(\Delta\psi(\nu))^2}\right)}}\qquad(D34)$$
This value depends on the ratio $\frac{\alpha_2}{\alpha_1}$. If we assume that the signal-to-noise ratio $\frac{2}{\gamma_0(\nu)}(\rho_0(\nu))^2$ is constant within the bandwidth, the ratio becomes:
$$\frac{d_{L^2,B}(\xi_1,\xi_2)}{d_{L^2,B,\alpha}(\xi_1,\xi_2)}=\sqrt{\frac{\left(\frac{\alpha_2}{\alpha_1}\right)^2+1-2\frac{\alpha_2}{\alpha_1}\cdot\frac{1}{N_B}\sum_{\nu\in B}\cos(\Delta\psi(\nu))}{\left(\frac{\alpha_2}{\alpha_1}\right)^2+1-2\frac{\alpha_2}{\alpha_1}\cos\left(\sqrt{\frac{1}{N_B}\sum_{\nu\in B}(\Delta\psi(\nu))^2}\right)}}\qquad(D35)$$

References

[1] Amari, S.I., Nagaoka, H.: Methods of Information Geometry. AMS, Providence, R.I. (2000)
[2] Barbaresco, F.: Information intrinsic geometric flows. In: Nielsen, F., Barbaresco, F. (eds.) Bayesian Inference and Maximum Entropy Methods in Science and Engineering. American Institute of Physics Conference Series, vol. 872, pp. 211–218 (2006). https://doi.org/10.1063/1.2423277
[3] Choi, J., Mullhaupt, A.P.: Kählerian information geometry for signal processing. Entropy 17(4), 1581–1605 (2015). https://doi.org/10.3390/e17041581
[4] Nielsen, F.: An elementary introduction to information geometry. Entropy 22(10), 1100 (2020). https://doi.org/10.3390/e22101100
[5] Amari, S.I.: Information Geometry and Its Applications. Springer, Tokyo (2016).
https://doi.org/10.1007/978-4-431-55978-8
[6] Pinele, J., Strapasson, J.E., Costa, S.: The Fisher-Rao distance between multivariate normal distributions: Special cases, bounds and applications. Entropy 22(4), 404 (2020)
[7] Miyamoto, H., Meneghetti, F., Pinele, J., Costa, S.: On closed-form expressions for the Fisher-Rao distance. Information Geometry 7(2), 311–354 (2024). https://doi.org/10.1007/s41884-024-00143-2
[8] Florin, F.: On Fisher information matrix, array manifold geometry and time delay estimation. In: Nielsen, F., Barbaresco, F. (eds.) Geometric Science of Information, GSI 2023, pp. 307–317. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-38271-0_30
[9] Brillinger, D.: Asymptotic normality of finite Fourier transforms of stationary generalized processes. Journal of Multivariate Analysis 12, 64–71 (1982)
[10] Peligrad, M., Wu, W.B.: Central limit theorem for Fourier transforms of stationary processes. The Annals of Probability 38(5), 2009–2022 (2010). https://doi.org/10.1214/10-AOP530
[11] Quatieri, T., Oppenheim, A.: Iterative techniques for minimum phase signal reconstruction from phase or magnitude. IEEE Trans. on ASSP ASSP-29(6), 1187–1193 (1981)
arXiv:2505.15543v1 [math.ST] 21 May 2025
Submitted to Bernoulli

Heavy-tailed and Horseshoe priors for regression and sparse Besov rates

SERGIOS AGAPIOU 1, ISMAËL CASTILLO 2, PAUL EGELS 3
1 Department of Mathematics and Statistics, University of Cyprus, Nicosia, Cyprus. E-mail: agapiou.sergios@ucy.ac.cy
2 Sorbonne Université, LPSM; 4, place Jussieu, 75005 Paris, France. E-mail: ismael.castillo@sorbonne-universite.fr
3 Sorbonne Université, LPSM; 4, place Jussieu, 75005 Paris, France. E-mail: paul.egels@sorbonne-universite.fr

The large variety of functions encountered in nonparametric statistics calls for methods that are flexible enough to achieve optimal or near-optimal performance over a wide variety of functional classes, such as Besov balls, as well as over a large array of loss functions. In this work, we show that a class of heavy-tailed prior distributions on basis function coefficients introduced in [1] and called Oversmoothed heavy-Tailed (OT) priors leads to Bayesian posterior distributions that satisfy these requirements; the case of horseshoe distributions is also investigated, for the first time in the context of nonparametrics, and we show that they fit into this framework. Posterior contraction rates are derived in two settings. The case of Sobolev-smooth signals and $L^2$-risk is considered first, along with a lower bound result showing that the form of the scalings imposed on prior coefficients by the OT prior is necessary to get full adaptation to smoothness. Second, the broader case of Besov-smooth signals with $L^{p'}$-risks, $p'\geq1$, is considered, and minimax posterior contraction rates, adaptive to the underlying smoothness, and including rates in the so-called sparse zone, are derived. We provide an implementation of the proposed method and illustrate our results through a simulation study.

Keywords: Bayesian nonparametrics, Besov spaces, frequentist analysis of posterior distributions, heavy-tailed prior distributions, horseshoe prior, sparse Besov rates

1.
Introduction

The ability of estimators to be flexible, in the sense that their properties remain excellent in a variety of contexts, is a particularly sought-after characteristic. In nonparametric statistics, where the quantity of interest is typically an unknown function (which can represent a signal, an image, etc. in practical applications), wavelet thresholding methods are a fundamental class that arguably satisfies the previous flexibility desideratum. Simple non-linear thresholding rules such as hard or soft thresholding are very broadly used in many areas of statistics and signal processing; from the mathematical perspective, they achieve asymptotic near-minimaxity over a very broad variety of functional classes and loss functions. While they are simple to implement and overall display very good empirical behaviour, wavelet thresholding methods still face a number of challenges numerically, in particular for low or moderate sample sizes, as one needs to find good values for thresholding constants, which may require some tuning. Bayesian nonparametric methods, on the other hand, have experienced rapid growth since the early 2000's, when the development of sampling algorithms to simulate from posterior distributions or approximations thereof (MCMC, ABC, Variational Bayes, to name just a few) has been paralleled by a progressive mathematical understanding of the properties needed for the prior distribution to achieve optimal or near-optimal convergence properties for the corresponding posterior distributions. We refer to [29, 12] for recent overviews of the field. Among the most popular classes of priors on functions in
statistics and machine learning are Gaussian processes [42] (henceforth GPs). For a number of function classes, in particular ones for which the signal's smoothness is 'homogeneous' over the considered input space, one can show that GPs achieve optimal posterior contraction rates, provided their parameters are well chosen [48, 10], and adaptation to smoothness can be achieved by, for example, drawing a scaling parameter at random [49]. However, for more 'inhomogeneous' signals, Gaussian processes can be shown to lead to suboptimal rates [5] for certain losses at least (e.g. quadratic) and, more generally, properties of posterior distributions for functions with 'heterogeneous' regularity across the input space are currently less understood; we review below a few recent results in this direction. Regarding optimality of rates, historically the first systematic study of non-matched situations, where the parameter of the loss function does not match the norm index defining the functional class, is due to Nemirovskii [40] and to Nemirovskii, Polyak and Tsybakov [38, 39], where in particular it was noticed that linear estimators can be suboptimal. Donoho and Johnstone [23] studied this type of question in sequence space, while Donoho, Johnstone, Kerkyacharian and Picard [26] investigated adaptive minimax rates in density estimation for Besov classes.
For such classes, it turns out that the information-theoretic boundaries can be described by three 'zones', depending on the loss function and the characteristics of the Besov space (we refer to [33] for an in-depth discussion): a regular zone, where best linear estimators achieve the minimax rate and the rate is the classical $n^{-\beta/(2\beta+1)}$ rate in terms of regularity $\beta$ and number of observations $n$; an intermediate zone, where the rate is still the same but linear estimators are suboptimal; and a sparse zone, where the rate changes and can be expressed as a combination of the regularity and loss function parameters. This illustrates the delicate interplay between measures of loss and regularity, which in particular leads to significant, polynomial-in-$n$ differences in convergence rates. Aside from wavelet methods, an adaptive local bandwidth selector based on Lepski's method was developed for kernel-based estimators in [36]. While working on this paper, we learned of the very recent work [35], where the authors derive oracle inequalities and convergence rates in non-matched situations for penalisation methods. Coming back to Bayesian methods, as noted above, Gaussian processes can be suboptimal when estimating functions with inhomogeneous smoothness (a fact formally proved in [5] in the white noise model, with mean square loss). Recently, it has been noted that taking heavier tails than Gaussian in defining priors can help in obtaining more diverse behaviour. In [3], p-exponential priors on coefficients of a basis expansion are considered; while the posterior convergence rates in mean square loss improve in a range of aspects compared to the Gaussian case, they are still not adaptive to smoothness. A main advantage is that a wider range of Besov spaces is accessible with optimal contraction rates with these priors, for well-chosen parameters. For example, unlike GP priors, priors with exponential tails can be tuned
to attain the minimax rate in mean square loss, over both classes of homogeneously smooth functions as well as classes permitting spatial inhomogeneities; see [3] and the recent preprint [21]. To make these priors adaptive, it is still necessary, however, and similarly to GPs as discussed above, to draw at least one extra parameter at random; such adaptive counterparts have been obtained in [4] for the white noise model, while [32] considered density estimation. In [1], the following (surprising at first) result is obtained: putting heavy-tailed priors (in the sense that they have polynomially decreasing tails) on coefficients can lead to automatic adaptation to smoothness. By taking well-chosen (but, crucially, deterministic and universal) scalings on heavy-tailed coefficients, defining so-called Oversmoothed heavy-Tailed (OT) priors, the authors in [1] derive adaptation, up to a logarithmic factor, to homogeneous and inhomogeneous smoothness in the $L^2$-norm, and in the $L^\infty$ (supremum) norm over Hölder classes. Results cover Gaussian white noise regression using standard posterior distributions and priors with at least two moments (excluding e.g. Cauchy or horseshoe priors), while extensions are provided to other models such as density estimation and binary classification using fractional posteriors. The work [13] uses related heavy-tailed priors on coefficients of deep neural networks to derive automatic adaptation to smoothness and compositional structures or to smoothness and geometric structures, as well as adaptation to some anisotropic Besov classes, but results therein are confined to the $L^2$-loss.

We now review a few of the existing results for Bayesian posteriors for 'non-canonical distances' (in the loose sense of distances for which one cannot directly apply a generic contraction rate theorem as in [28]).
For the Bayesian approach, which is likelihood-based, the study of convergence in terms of certain loss functions, such as $L^p$ ($p>2$) or $L^\infty$-losses, is notoriously difficult. The first systematic study of posterior contraction in such norms is due to Giné and Nickl [30], where consistency (in general sub-minimax) rates are obtained under generic conditions. An approach to get posterior contraction at the minimax rate in the supremum norm is introduced in [11]. Adaptive rates have been obtained for specific priors and models [34, 14, 16, 37, 1]. The work [17] shows contraction under the (global) supremum norm for tree-type BCART priors, while [43] proves local rates for spike-and-slab priors and BCART priors. Yet, to the best of our knowledge, there is no systematic study of Bayesian posterior distributions for Besov regularities and loss functions other than $L^2$ and $L^\infty$, and one of the paper's aims is to fill this gap in a regression setting. To reduce technicalities, we will do so in the Gaussian white noise model, which can be seen as a prototypical nonparametric model [31]. We also underline that, while 'adaptive' priors (in the sense that, for example, they adapt to the unknown underlying smoothness) can sometimes be obtained by drawing certain parameters in the prior at random, in general (especially for distances for which one cannot apply a generic result as in [28]) it is unclear how to get adaptation to smoothness
https://arxiv.org/abs/2505.15543v1
in terms of a specific loss, such as an $L^p$–loss. To exemplify this, let us consider the class of sieve priors with random truncation, namely priors defined as functions expanded on a finite number $K$ of coefficients over a basis, with random coefficients, and where the cut-off $K$ is itself drawn at random. The work [6], for instance, shows that such priors are typically adaptive in terms of the $L^2$–loss in regression, while a certain sub-optimality is noted for the posterior mean in pointwise loss. In their study of Bayesian trees, the authors in [17] prove a lower bound result (Theorem 5 in that source) showing that the posterior contraction rate of a fully-grown tree with random depth $K$ (which is a regular histogram prior and can be viewed as a sieve with cut-off $K$), while optimal for the $L^2$–loss (up to a logarithmic term), is polynomially sub-optimal in terms of a supremum-type loss. This illustrates that the choice of the prior to target adaptation with respect to a certain loss must be made with particular care. Regarding specific priors, it has been reported in the literature that horseshoe priors [9, 41], which are a very popular choice in the context of sparse high-dimensional models (and can be shown to display near-optimal behaviour in these sparse settings [47, 46]), can also be deployed in nonparametrics with empirically very good behaviour, but up to now there is no theory backing up these empirical findings.

The contributions of this paper are threefold:

1. we consider the question of obtaining minimax convergence rates for posterior distributions over a wide variety of Besov classes and losses. We show that the oversmoothed heavy-tailed (OT) priors recently introduced in [1] (and featuring scaling parameters with a specific deterministic decrease, slightly faster than polynomial in $k$) achieve adaptive (near)-minimax rates in the three aforementioned zones.
This includes the 'sparse' zone, with convergence rates featuring indices relative to both the functional class and the loss function. We also derive a sharpness result showing that scalings with exactly polynomial decrease can lead to suboptimal rates;

2. for the first time, we obtain posterior contraction rates for horseshoe prior distributions in the context of nonparametric estimation; doing so is nontrivial, since this prior both features very heavy (Cauchy) tails and a density going to infinity at zero. We also obtain rates for OT priors without the two-moment assumption imposed in [1];

3. we implement our method and show that, in terms of statistical risk, it is at least comparable to, and sometimes improves upon, state-of-the-art software.

Gaussian white noise model. For a given true regression function $f_0 \in L^2 := L^2([0,1])$ and $n \ge 1$, the data $Y^{(n)}$ will be generated from the following model
$$ dY^{(n)}(t) = f_0(t)\,dt + \frac{1}{\sqrt{n}}\,dW(t), \qquad t \in [0,1], \tag{1} $$
where $W$ is a standard Brownian motion. Given $\{\varphi_k : k \ge 1\}$ an orthonormal basis of $L^2$ for the canonical inner product $\langle\cdot,\cdot\rangle$, we denote by $f_{0,k} := \langle f_0, \varphi_k\rangle$ the coefficients of $f_0$ (which are square-summable, $(f_{0,k}) \in \ell^2$), so that $f_0 = \sum_{k\ge1} f_{0,k}\varphi_k$ holds in the $L^2$–sense. Projecting (1) onto $\{\varphi_k\}$ and denoting $X_k := \int_0^1 \varphi_k(t)\,dY^{(n)}(t)$, one gets the (infinite) normal sequence model
$$ X_k = f_{0,k} + \frac{1}{\sqrt{n}}\,\xi_k, \qquad k \ge 1, \tag{2} $$
where $\{\xi_k\}_{k\ge1}$ are independent $N(0,1)$ variables. In
the following, we write $X = X^{(n)} = (X_1, X_2, \dots)$ for the corresponding sequence of observations.

Frequentist analysis of posterior distributions. Consider data $X$ generated from the sequence model (2) with true regression function $f_0 \in L^2$, identified with its coefficients $(f_{0,k}) \in \ell^2$. In the following, this will be written as $X = X^{(n)} \sim P^{(n)}_{f_0}$, and one denotes by $E_{f_0}$ the expectation under this distribution. The reconstruction of $f_0$ from the data $X$ will be conducted through a Bayesian analysis: from a prior distribution $\Pi$ on $\ell^2$ to be chosen below, one constructs a data-dependent probability measure on $\ell^2$, called the posterior distribution, given, for any measurable $B \subset \ell^2$, by
$$ \Pi[B \mid X] := \frac{\int_B \exp\{-\frac{n}{2}\sum_{k\ge1}(X_k - f_k)^2\}\,d\Pi(f)}{\int \exp\{-\frac{n}{2}\sum_{k\ge1}(X_k - f_k)^2\}\,d\Pi(f)}. \tag{3} $$
For a given distance $d$, we say that the posterior contracts around $f_0$ at rate $\varepsilon_n \to 0$ in $d$–loss if
$$ E_{f_0}\Pi[d(f, f_0) \le M\varepsilon_n \mid X] \to 1 \quad \text{as } n \to \infty, \tag{4} $$
with $M > 0$ a sufficiently large constant.

Heavy-tailed series priors. For a function $f \in L^2$, with $f = \sum_{k\ge1} f_k\varphi_k$, we define a prior $\Pi$ on $f$ by drawing the coefficients $f_k$ independently using heavy-tailed density functions. Here we consider heavy-tailed priors as in [1] of the form
$$ f_k = \sigma_k \zeta_k, \tag{5} $$
where the $\sigma_k$ are deterministic decaying coefficients and the $\zeta_k$ are independent identically distributed (i.i.d.) random variables with a given heavy-tailed density $h$ on $\mathbb{R}$. In particular, we assume that $h$ satisfies the following conditions: there exist constants $c_1, c_2 > 0$ and $\kappa \ge 0$ such that

(H1) $h$ is symmetric, positive, and decreasing on $[0, +\infty)$;

(H2) for all $x \ge 0$, $\log(1/h(x)) \le c_1(1 + \log^{1+\kappa}(1 + x))$;

(H3) for all $x \ge 1$, $H(x) := \int_x^{+\infty} h(u)\,du \le c_2/x$.

Note that, compared to [1], the density $h$ is not required to be bounded and must only satisfy the tail condition (H3). The latter condition is very mild and includes distributions without integer moments. For instance, $\kappa = 0$ in (H2) allows for densities with polynomial tails, such as the Cauchy density.

Oversmoothed heavy-Tailed (OT) priors.
Following [1], the choice of scalings $(\sigma_k)$ in (5) we argue in favour of is a deterministic decay which is (slightly) faster than any polynomial in $k$: for some $\nu > 0$ and all $k \ge 1$, we set
$$ \sigma_k = e^{-(\log k)^{1+\nu}}. \tag{6} $$
For any specific choice of the density $h$ satisfying assumptions (H1)–(H2)–(H3), and any fixed $\nu > 0$, the resulting prior is an instance of what we call the OT prior. The idea behind this prior is to shrink small values of the observed noisy coefficients through the deterministically small scaling factors $\sigma_k$, while the heavy tails of the $\zeta_k$ enable the prior to capture substantial signals with high probability; see [1, Section 2] for more detailed intuition. Although the results below hold for any $\nu > 0$, for simplicity we restrict the proofs to the case $\nu = 1$ (the proofs stay the same for other values of $\nu$). The value of the hyper-parameter $\nu$ has relatively little effect on the non-asymptotic behavior of the posterior: across all simulations presented in Section 4, we take the same value $\nu = 1/2$.

Student and Horseshoe priors. We consider two specific classes of distributions for $\zeta_k$ in (5). The first is the class of Student distributions with at least one degree of freedom, which verify (H1)–(H2)–(H3). The second is the Horseshoe prior with parameter $\sigma_k > 0$,
$$ f_k \sim \mathrm{HS}(\sigma_k) \quad \text{independently across } k \ge 1; \tag{7} $$
for any $\tau > 0$, the distribution of $f \sim \mathrm{HS}(\tau)$ is
a scale mixture arising from $\lambda \sim C^+(0,1)$ and $f \mid \lambda \sim N(0, \tau^2\lambda^2)$, where $C^+(0,1)$ is the half-Cauchy distribution with location parameter $0$ and scale $1$. The Horseshoe distribution $\mathrm{HS}(\tau)$ has a density $h_\tau$ satisfying, for all $t \ne 0$ (see [9], Theorem 1.1),
$$ \frac{1}{(2\pi)^{3/2}\tau}\log\Big(1 + \frac{4\tau^2}{t^2}\Big) \;\le\; h_\tau(t) \;\le\; \frac{1}{(2\pi)^{3/2}\tau}\log\Big(1 + \frac{2\tau^2}{t^2}\Big). \tag{8} $$
Horseshoe distributions are of the form (5): indeed, if $f_k \sim \mathrm{HS}(\sigma_k)$, one can write $f_k = \sigma_k\zeta^1_k$, where $\zeta^1_k \sim \mathrm{HS}(1)$. A random variable $\zeta^1 \sim \mathrm{HS}(1)$ is symmetric and has Cauchy-like tails (this readily follows from (8)); its density $h_1$ satisfies conditions (H1), (H2) with $\kappa = 0$, and (H3). A variant of the Horseshoe prior in common use in high-dimensional models is a truncated version with uniform rescaling (with respect to $k$): for some $\tau > 0$,
$$ f_k \overset{\text{i.i.d.}}{\sim} \mathrm{HS}(\tau) \ \text{ for } k \le n, \qquad f_k = 0 \ \text{ for } k > n. \tag{9} $$
This prior matches the definition of the heavy-tailed prior (5) with $\sigma_k = \tau$ for all $k$ up to the truncation point $k = n$, after which it sets all coefficients to zero. The uniform rescaling $\tau$ is typically taken to be of the order of a negative power of $n$; see the discussion in Section 5 and Theorem 1 below.

Outline. In Section 2, we establish posterior contraction rates in the $L^2$–norm for Hilbert–Sobolev truths. Specifically, in Section 2.1 we show that OT priors on basis coefficients, as well as truncated horseshoe priors, achieve (nearly) minimax-optimal rates. In Section 2.2, we derive lower bounds (both in probability and in expectation) for heavy-tailed priors with the (so far slightly more common in the literature on series priors) choice of polynomially decaying scalings of the form $\sigma_k = k^{-\alpha-1/2}$. In particular, for $\beta$-regular Sobolev truths, while this choice was shown in [1] to lead to (nearly) minimax rates in the over-smoothing regime $\alpha \ge \beta$, here we show that it gives rise to polynomially slower-than-minimax rates in the under-smoothing regime $\alpha < \beta$, hence establishing that such scalings cannot be fully adaptive to smoothness, which gives another rationale for choosing OT scalings as in (6).
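The scale-mixture representation of $\mathrm{HS}(\tau)$ given earlier lends itself to direct simulation. Below is a minimal sketch (the helper name `sample_horseshoe` is ours, not from the paper): it draws $\lambda \sim C^+(0,1)$ and then $f \mid \lambda \sim N(0, \tau^2\lambda^2)$, so that $f = \tau\zeta$ with $\zeta \sim \mathrm{HS}(1)$, as used in the text.

```python
import numpy as np

def sample_horseshoe(tau, size, seed=0):
    """Draw f ~ HS(tau) via the mixture lambda ~ C+(0,1), f | lambda ~ N(0, tau^2 lambda^2)."""
    rng = np.random.default_rng(seed)
    lam = np.abs(rng.standard_cauchy(size))   # half-Cauchy local scales
    return tau * lam * rng.standard_normal(size)

draws = sample_horseshoe(tau=1.0, size=100_000)
```

With a fixed seed the calls for different $\tau$ consume the same underlying uniforms, which makes the scale-family relation $f = \tau\zeta$ exact draw by draw; the Cauchy-like tails of $\mathrm{HS}(1)$ show up in the extreme sample quantiles.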
Section 3 considers a much broader setting of functional classes, allowing for non-homogeneous smoothness and non-matched losses, showing that OT heavy-tailed priors adapt to the Besov regularity of the truth and achieve (nearly) minimax rates for general $L^p$ losses, in all three zones: "regular", "intermediate" and "sparse". Section 4 provides numerical experiments along with implementation details, illustrating a range of rate behaviors across different signal classes. Section 5 provides a brief discussion putting the results of the paper into perspective. Sections 6.1 and 6.2 contain the proofs of Theorems 1 and 2 respectively. All other proofs and additional technical material are contained in the supplement [2].

2. Contraction in $L^2$–loss for Hilbert–Sobolev spaces

The rate at which the posterior distribution (3) concentrates around the truth $f_0$, as in (4), crucially depends on both the regularity of the truth and the loss $d$. As a warm-up to the more elaborate results considered in the next section with double-indexed wavelet bases, we consider in this section the simplest case of Hilbert–Sobolev-type balls and the $L^2$–loss, with expansions in a simple single-index orthonormal basis (e.g. the Fourier basis). Recalling that $f_k = \langle f, \varphi_k\rangle$, for $\beta, F > 0$, denote
$$ \mathcal{S}^\beta(F) = \Big\{ f \in L^2([0,1]) : \sum_{k\ge1} k^{2\beta} f_k^2 \le F^2 \Big\}. \tag{10} $$
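The ellipsoid functional defining (10) is straightforward to evaluate numerically; the sketch below (helper name ours) truncates the sum at a finite $K$.

```python
import numpy as np

def sobolev_norm_sq(coefs, beta):
    """Truncated Sobolev ellipsoid functional sum_k k^{2 beta} f_k^2, cf. (10)."""
    f = np.asarray(coefs, dtype=float)
    k = np.arange(1, f.size + 1, dtype=float)
    return float(np.sum(k ** (2.0 * beta) * f ** 2))

# Example: f_k = k^{-2} gives, for beta = 1, sum_k k^{-2} -> pi^2/6 ~ 1.645,
# so this f belongs to S^1(F) for any F > 1.29 (and to S^beta for every beta < 3/2).
K = 200_000
f = 1.0 / np.arange(1.0, K + 1) ** 2
norm_sq = sobolev_norm_sq(f, beta=1.0)
```

The truncation error here is of order $1/K$, negligible next to the limit $\pi^2/6$.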
For a suitable choice of orthonormal basis $\{\varphi_k\}$, the set $\mathcal{S}^\beta(F)$ corresponds to a ball of $L^2$–functions with $\beta$ derivatives in the $L^2$–sense. A more general study of posterior contraction under $L^p$–losses and Besov smoothness is conducted in Section 3.

2.1. Upper bounds for OT and truncated Horseshoe priors

Our first result provides an upper bound on the contraction rate for OT priors, where the decay sequence is given by (6). Up to a logarithmic factor (a logarithmic loss is in fact unavoidable for separable estimators, as shown in [8]), such priors achieve the minimax convergence rate in terms of the $L^2$–loss over the above class. In all the results to follow, when we refer to priors given by (5), it will be understood that the prior density $h$ of the variables $\zeta_k$ satisfies conditions (H1)–(H2)–(H3).

Theorem 1. Suppose, in the sequence model (2), that $f_0 \in \mathcal{S}^\beta(F)$, and let $\Pi$ be the OT series prior with independent coefficients $(f_k)$ given by (5) (in particular, the prior (7) is admissible) and scalings given by (6). As $n \to \infty$,
$$ E_{f_0}\Pi\big[\{f : \|f - f_0\|_2^2 > L_n n^{-2\beta/(2\beta+1)}\} \mid X\big] \to 0, $$
where $L_n = \log^\delta n$, for some $\delta > 0$. The same conclusion holds if the prior $\Pi$ is the truncated Horseshoe prior (9) with uniform scaling $\tau = n^{-4}$, up to setting $L_n = \log^{2+\delta} n$.

The proof is given in Section 6.1. Theorem 1 establishes that, for OT priors, the posterior contraction rate is nearly minimax over Sobolev balls $\mathcal{S}^\beta(F)$. Among special cases of OT priors, one can choose for instance a Cauchy density for $h$, as well as the horseshoe density $h_\tau$. Further, the truncated version of the horseshoe prior (9) also leads to an optimal rate (up to slightly worse log-factors). This result strengthens Theorem 1 in [1], where a similar upper bound was obtained for OT priors, but with bounded densities and a finite second moment.
The key difference arises from our proof technique: rather than applying Markov's inequality and working under the posterior expectation, we work directly at the level of the posterior probability. This approach allows us to replace the finite second moment condition by the weaker tail condition (H3), enabling the use of heavier-tailed priors, which may have unbounded expectation, such as the Cauchy and Horseshoe priors. A simulation study for these heavy-tailed priors is provided in Section 4.1. We also note that a result for Cauchy priors was provided in [1], but only for fractional posteriors, not for the standard posterior considered here.

Remark 1. When using the truncated Horseshoe in Theorem 1, we take $\tau = n^{-b}$ with $b = 4$. This is for technical reasons, and the proof goes through similarly for any power $b \ge 4$. In practice, unless $n$ is very large, one may prefer a less conservative choice, e.g. $\tau = 1/n$ (as we do in the simulations section). We conjecture that the conclusion of Theorem 1 still holds for this choice.

2.2. Lower bound for polynomially decaying scalings

Another possible choice of prior to recover $f_0 \in \mathcal{S}^\beta(F)$ is the HT($\alpha$) prior, which is defined as follows. It is a heavy-tailed prior (5) defined in a similar way to the OT prior, but with
polynomially decaying scaling parameters: for some $\alpha > 0$,
$$ \sigma_k = k^{-1/2-\alpha}, \qquad \text{for all } k \ge 1. \tag{11} $$
In [1], it was shown that this prior leads to adaptation in the over-smoothing case: if $\alpha \ge \beta$, the posterior contraction rate is the minimax rate over $\mathcal{S}^\beta(F)$ (up to log factors). If, on the contrary, $\alpha < \beta$, then since parameter classes such as $\mathcal{S}^\beta(F)$ are nested (i.e. $\mathcal{S}^\beta(F) \subset \mathcal{S}^\alpha(F)$ if $\alpha < \beta$), we have $f_0 \in \mathcal{S}^\alpha(F)$, so by applying the over-smoothing result with $\alpha$ in place of $\beta$ (which we can, since we are now in the case of 'matched' regularity of prior and truth), one gets that the posterior contracts at rate (at least) $n^{-\alpha/(2\alpha+1)}$ up to a log factor. However, this does not rule out that the rate is in fact faster. Our next result shows that it is not, by providing a matching lower bound on the posterior contraction rate in the under-smoothing case $\alpha < \beta$.

Theorem 2. Let $f_0 \in \mathcal{S}^\beta(F)$ and let $\Pi$ be the HT($\alpha$) series prior with $1/2 < \alpha < \beta$. For $\delta > 0$, define $\varphi_n := n^{-\alpha/(2\alpha+1)}\log^{-\delta} n$. Then, for $\delta$ large enough, as $n \to \infty$,
$$ E_{f_0}\Pi[\{f : \|f - f_0\|_2 \le \varphi_n\} \mid X] \to 0. $$
Under the weaker smoothness assumption $0 < \alpha < \beta$, the following in-expectation bound holds, as $n \to \infty$,
$$ E_{f_0}\int \|f - f_0\|_2^2\,d\Pi(f \mid X) \gtrsim n^{-2\alpha/(2\alpha+1)}. $$

The proof is provided in Section 6.2 and Section C of the supplement [2]. Theorem 2 shows that in the under-smoothing case $\alpha < \beta$, the posterior cannot concentrate at a rate faster than $n^{-\alpha/(2\alpha+1)}$, which is slower than the minimax rate over $\mathcal{S}^\beta(F)$. This is a sharpness (i.e. lower-bound) result demonstrating that the adaptive property of the HT($\alpha$) prior is fundamentally one-sided: $\alpha$ must be chosen large enough to ensure adaptation. This fact provides a strong incentive to use the OT prior (6), which, as shown in Theorem 1, always guarantees adaptation by ensuring that one is 'always in the over-smoothing case' (hence the name). Also, the lower bound result holds for any given function in $\mathcal{S}^\beta(F)$, and not just for some 'least-favourable' ones.
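Taken together, Theorems 1 and 2 say that for the HT($\alpha$) prior the effective smoothness is $\min(\alpha, \beta)$, up to log factors. A small sketch (the function name is ours, a compressed summary rather than a statement from the paper):

```python
def ht_rate_exponent(alpha, beta):
    """Exponent r in the contraction rate n^{-r} (log factors ignored) for the
    HT(alpha) prior when f0 has Sobolev regularity beta: the effective smoothness
    is min(alpha, beta), by the upper bound of [1] (alpha >= beta) and the lower
    bound of Theorem 2 (alpha < beta)."""
    a = min(alpha, beta)
    return a / (1.0 + 2.0 * a)
```

For $\beta = 2$, over-smoothing with $\alpha = 3$ yields the minimax exponent $2/5$, while under-smoothing with $\alpha = 1$ is stuck at $1/3$; the OT scalings (6) avoid the choice of $\alpha$ altogether.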
The first part of Theorem 2 provides a lower bound for the in-probability contraction rate, expressed in the same form as the in-probability upper bound of Theorem 1. By an application of a Markov-like inequality, it is not hard to check that this result leads to a lower bound for the contraction rate 'in expectation' $E_{f_0}\int \|f - f_0\|_2^2\,d\Pi(f \mid X)$, at rate $\varphi_n^2$ (including a logarithmic factor). The second part of the statement provides a more precise lower bound, without the logarithmic factor. Simulations regarding contraction in the under-smoothing case are provided in Section A of the supplement [2].

Remark 2. The additional prior condition $\alpha > 1/2$ appearing in the in-probability lower bound arises from our proof technique, based on Lemma 5, which assumes that the density $h$ in our definition of the heavy-tailed series priors possesses certain fractional moments. An inspection of the proof of Theorem 2 reveals that this condition can be relaxed to $\alpha > (1/\rho - 1/2) \vee 0$ if we slightly strengthen assumption (H3) to $H(x) \lesssim x^{-\rho}$ with $\rho > 1$. For instance, for a Student distribution with at least two degrees
of freedom, the prior regularity condition becomes empty.

3. Sparse Besov rates using heavy-tailed priors

In this section we consider the problem of estimating, from the Bayesian nonparametrics point of view, a Besov function in $L^{p'}$–losses for $p' \ge 1$ in the white noise model, again understood as projected into sequence space; we wish to achieve this through a method (prior) that leads to a posterior distribution adaptive to the smoothness level of the unknown regression function. To the best of our knowledge, this question has not been addressed for a Bayesian method (or, more generally, for any likelihood-based approach). We first recall the definition of Besov spaces in terms of wavelet coefficients and then introduce the statistical problem and core result of this paper.

3.1. Besov classes, notation

Let $S > 0$, and let $J$ be the smallest integer satisfying $2^J \ge 2S$. Consider $\{\phi_{Jk} : k \in \{0,\dots,2^J-1\}\} \cup \{\psi_{jk} : j \ge J,\ k \in \{0,\dots,2^j-1\}\}$, an orthonormal, boundary-corrected, $S$-regular wavelet basis of $L^2([0,1])$ (see [18] for a construction). For any $f \in L^p$ (if $p < \infty$) or $f$ continuous (if $p = \infty$), we have
$$ f = \sum_{k=0}^{2^J-1} \langle f, \phi_{Jk}\rangle\,\phi_{Jk} + \sum_{j\ge J}\sum_{k=0}^{2^j-1} \langle f, \psi_{jk}\rangle\,\psi_{jk} \qquad \text{in } L^p,\ 1 \le p \le \infty. $$
To simplify the notation and to avoid splitting the study of the scaling and wavelet coefficients in the statistical results to follow, whenever $j \ge J$ we write $f_{jk} := \langle f, \psi_{jk}\rangle$, and whenever $j < J$ we write
$$ f_{jk} := \langle f, \phi_{J,2^j+k}\rangle, \quad 0 \le j < J,\ 0 \le k < 2^j, \qquad f_{-1,0} := \langle f, \phi_{J0}\rangle. $$
The $\ell^p$–norm of the sequence $(f_{jk})_k$ with $k \in \{0,\dots,2^j-1\}$ is denoted by
$$ \|f_{j\cdot}\|_p := \Big( \sum_{k=0}^{2^j-1} |f_{jk}|^p \Big)^{1/p}. $$
Under conditions on the wavelet basis (all satisfied by a boundary-corrected $S$-regular Daubechies system), we have a Parseval-like quasi-equality for $L^p$–norms, $p \ge 1$ (see Proposition 4.2.8 in [31], and a modification thereof for boundary-corrected bases, p. 357):
$$ \Big\| \sum_{k=0}^{2^j-1} f_{jk}\psi_{jk} \Big\|_p \simeq 2^{j(1/2-1/p)}\,\|f_{j\cdot}\|_p. \tag{12} $$
Here '$\simeq$' means equality up to a constant (depending only on $p$ and the wavelet system).
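For $p = 2$, (12) is an exact Parseval identity, which we can check numerically. The sketch below uses the Haar system for concreteness (an assumption of ours; the paper works with smoother $S$-regular boundary-corrected bases, for which (12) takes the same form with constants):

```python
import numpy as np

def haar_psi(j, k, x):
    """Haar wavelet psi_{jk}(x) = 2^{j/2} (1 on the left half, -1 on the right half
    of the interval [k 2^{-j}, (k+1) 2^{-j}))."""
    y = 2.0 ** j * x - k
    return 2.0 ** (j / 2.0) * (((y >= 0) & (y < 0.5)).astype(float)
                               - ((y >= 0.5) & (y < 1.0)).astype(float))

j, p = 4, 2.0
rng = np.random.default_rng(1)
fj = rng.standard_normal(2 ** j)                  # one level of coefficients (f_{jk})_k
x = (np.arange(2 ** 12) + 0.5) / 2 ** 12          # midpoint quadrature grid on [0, 1]
g = sum(fj[k] * haar_psi(j, k, x) for k in range(2 ** j))
lhs = np.mean(np.abs(g) ** p) ** (1 / p)          # L^p norm of sum_k f_{jk} psi_{jk}
rhs = 2.0 ** (j * (0.5 - 1 / p)) * np.sum(np.abs(fj) ** p) ** (1 / p)   # RHS of (12)
```

For the Haar system, wavelets at a fixed level have disjoint supports, so (12) actually holds with equality for every $p$; for general wavelet systems the overlap of supports is precisely why only a two-sided constant survives.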
To describe the regularity of a function $f$ we use the decay of the level-wise norms appearing in (12): for $p, q \in [1,\infty)$ and $0 < s < S$, define the norm
$$ \|f\|_{B^s_{pq}} := \Big( \sum_{j\ge-1} 2^{qj(s+1/2-1/p)}\,\|f_{j\cdot}\|_p^q \Big)^{1/q}, \tag{13} $$
with the usual adaptation whenever $p = \infty$ or $q = \infty$. A Besov-type ball is defined, for any $p, q \in [1,\infty]$, $F > 0$ and $0 < s < S$, as
$$ B^s_{pq}(F) = B^s_{pq}([0,1], F) := \{ f \in L^p([0,1]) : \|f\|_{B^s_{pq}} \le F \}. \tag{14} $$
The corresponding Besov space $B^s_{pq}$ is defined as
$$ B^s_{pq} := \bigcup_{F>0} B^s_{pq}(F). \tag{15} $$
If the wavelet system is chosen smooth enough, the space $B^s_{pq}$ corresponds to the usual Besov space defined in terms of moduli of continuity (see Section 4.3 in [31] for a proof). From the definition of Besov spaces in terms of wavelet coefficients as in (13), one can deduce the following embedding properties:
$$ B^s_{pq} \subset B^{s'}_{p'q} \quad \text{whenever } p' > p \text{ and } s' - 1/p' = s - 1/p, \tag{16} $$
$$ B^s_{pq} \subset B^{s'}_{pq'} \quad \text{whenever } s' < s, \text{ or when } s' = s \text{ and } q' \ge q. \tag{17} $$
In particular, if $s - 1/p > 0$, then $B^s_{pq} \subset B^{s-1/p}_{\infty\infty}$ is included in the space of continuous functions.

3.2. Regression in Besov spaces, sparse rates, contraction for OT decay

Following the same approach as in the Hilbert–Sobolev case, if the signal belongs to a Besov space $B^s_{pq}$, from the definition of the space in terms of wavelet coefficients it is natural to estimate it from the projections of the white noise model onto the (now double-indexed wavelet) basis, that is,
$$ X_{jk} = f_{0,jk} + \frac{1}{\sqrt{n}}\,\xi_{jk}, \qquad j \ge -1,\ 0 \le k < 2^j, \tag{18} $$
where $\{\xi_{jk}\}$ are independent $N(0,1)$ variables. The next condition
ensures that $B^s_{pq} \subset L^{p'} \cap L^2$, so that the problem of estimating the true unknown function $f_0 \in B^s_{pq}$ in $L^{p'}$–norm is well-defined (recall also that we only consider positive smoothness indices $s > 0$):
$$ s > (1/p - 1/(p' \vee 2))_+. \tag{19} $$
Minimax rates in this sequence setting under Besov smoothness were derived by Donoho and Johnstone [27] (announced in [25]), building on the seminal work of Nemirovskii [40]; the case of the density estimation model was treated in [26]. We refer to the monograph [33], Section 10.4, for a detailed discussion. As it turns out, the minimax rate for this estimation problem crucially depends on the parameter $p'$ of the loss function. To describe this rate, we now introduce some notation; we note that, for simplicity, we will not keep track of the precise power in the logarithmic factors appearing in the rates, which slightly simplifies the discussion compared to [27] (who also treat the even more general case of Besov losses). We distinguish two regions where the statistical behavior differs:
$$ \mathcal{R} := \{(p, p') : p' < (2s+1)p\} \quad \text{and} \quad \mathcal{S} := \{(p, p') : p' \ge (2s+1)p\}. $$
The following index $\eta$ specifies in which region the indices $(p, p')$ lie:
$$ \eta := sp - \frac{p' - p}{2}, \tag{20} $$
as indeed $\eta > 0$ if and only if $(p, p') \in \mathcal{R}$, and $\eta \le 0$ if and only if $(p, p') \in \mathcal{S}$. Consider
$$ s' := s - 1/p + 1/p', \tag{21} $$
which is positive by (19), and define the rate $\varepsilon_n := n^{-r}$, where, for $\eta$ as in (20),
$$ r := \begin{cases} s/(1+2s), & \text{if } \eta > 0, \\ s'/(1 + 2(s - 1/p)), & \text{if } \eta \le 0. \end{cases} \tag{22} $$
In the following lines, we provide some insight into the appearance of the elbow in the rate (22) and the distinction between the two regions. Note that this rate is known to be minimax (up to logarithmic factors) over $B^s_{pq}$ in the continuous case $s > 1/p$. We remark that functions in $B^s_{pq}$ can be "non-homogeneously" smooth; that is, as $p'$ increases, it becomes possible to induce increasingly large perturbations of their $L^{p'}$–norm using only a small number of wavelet coefficients.
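In code, the index (20) and the rate (22) read as follows (the function name is ours; log factors ignored):

```python
def besov_minimax_exponent(s, p, p_prime):
    """Exponent r with eps_n = n^{-r} from (22), selected via the index eta of (20)."""
    eta = s * p - (p_prime - p) / 2.0        # eta > 0  <=>  p' < (2s+1) p, i.e. region R
    if eta > 0:
        return s / (1.0 + 2.0 * s)           # regular region R
    s_dash = s - 1.0 / p + 1.0 / p_prime     # s' of (21)
    return s_dash / (1.0 + 2.0 * (s - 1.0 / p))   # sparse region S
```

For instance, with $s = 3/2$, $p = 1$: the loss $p' = 2$ is regular, giving $r = 3/8$, while $p' = 6$ falls in the sparse region, where $r = (3/2 - 1 + 1/6)/(1 + 2\cdot\tfrac12) = 1/3$; at the boundary $p' = (2s+1)p$ the two expressions of (22) coincide, so the rate has a continuous elbow.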
Consequently, the statistical problem of reconstructing $f_0 \in B^s_{pq}$ from noisy observations of its wavelet coefficients becomes more difficult as $p'$ increases. The region $\mathcal{R}$ is referred to as the regular region: for $(p, p')$ in $\mathcal{R}$, the rate (22) corresponds to the standard minimax rate encountered in nonparametric statistics. It is worth noting also that $\mathcal{R}$ can be further divided into two zones. The first is the 'homogeneous' zone $p' \le p$, where the aforementioned perturbative effect does not occur; therein, linear estimators, understood as linear functionals of the empirical measure $n^{-1}\sum \delta_{X_i}$, are minimax-optimal. The second is the 'non-homogeneous' regular zone $p' > p$, where the perturbative effect does take place; therein, linear estimators are known to be suboptimal; in fact, the linear-minimax rate here is polynomially slower than the global minimax rate, and is of order $n^{-s'/(2s'+1)}$, for $s'$ as defined in (21). When $p' < (2s+1)p$, the perturbative effect is not strong enough to cause a discrepancy in the global minimax rate. In contrast, in the so-called sparse region $\mathcal{S}$, defined by $p' \ge (2s+1)p$, the minimax rate becomes polynomially slower than the usual nonparametric rate from the regular region (although still faster than the corresponding linear-minimax rate for $p' < \infty$). In $\mathcal{S}$, the most difficult functions in $B^s_{pq}$ to
estimate in the $L^{p'}$–norm exhibit highly localized irregularities, corresponding to only a few wavelet coefficients of large magnitude (hence the term "sparse"), which makes the statistical problem significantly harder.

Theorem 3. Let $0 < s < S$, $p, q \in [1,\infty]$, $1 \le p' < \infty$, $F > 0$, and suppose (19) holds. Consider observations from model (18) with an unknown function $f_0 \in B^s_{pq}(F)$ for some $F > 0$. Let $\Pi$ be the OT wavelet series prior sampling the coefficients $f_{jk}$ independently as
$$ f_{jk} = 2^{-j^2}\,\zeta_{jk}, \tag{23} $$
where $(\zeta_{jk})$ are i.i.d. copies of a heavy-tailed random variable $\zeta$ satisfying conditions (H1)–(H2)–(H3). Then, for $r$ given by (22),
$$ E_{f_0}\Pi\big[\{f : \|f - f_0\|_{p'} \ge L_n n^{-r}\} \mid X\big] \to 0, \qquad \text{as } n \to \infty, $$
where $L_n = \log^\delta n$ for some $\delta > 0$.

The proof of this result is provided in Section D of the supplement [2]. Theorem 3 shows that OT series priors achieve complete minimax adaptation, up to logarithmic factors, over Besov-type balls, simultaneously over the regular ($\mathcal{R}$) and sparse ($\mathcal{S}$) regions. To the best of our knowledge, this is the first Bayesian procedure proven to achieve such rates. In contrast, random Gaussian series, similarly to linear estimators, are known to be suboptimal in this setting (see [3], Theorem 4.1). Our results also significantly extend previous work [1] and [13] on heavy-tailed priors, which, beyond classical Sobolev/Hölder spaces, considered only the case of Besov spaces $B^s_{pp}$ and anisotropic Besov spaces $B^s_{pp}([0,1]^d)$ (this time in dimension $d$) with $p < p' = 2$, in particular being restricted to the (homogeneous part of the) regular region $\mathcal{R}$, where linear and global minimax rates coincide and are equal to the usual rate.

Remark 3. The scaling coefficient $2^{-j^2}$ in (23) is the analogue, in double-index notation, of the scaling $e^{-(\log k)^{1+\nu}}$ with $\nu = 1$ therein, up to a constant factor in the exponent.
As anticipated below (6), and similarly to the single-index OT prior considered in Section 2, the results of Theorem 3 still hold, with a similar proof, when the scaling coefficients $2^{-j^2}$ in the two-index version (23) are replaced by $2^{-j^{1+\nu}}$ for some $\nu > 0$; for simplicity, and to keep the notation minimal, we provide the proof only for the former.

Remark 4. For simplicity, in the present paper we do not consider the case of the supremum norm ($p' = \infty$). Let us briefly sketch how one can extend our results to this case, following the approach for the supremum norm introduced in [1] for Hölder-type spaces. For any $(p, q)$, the embedding $B^s_{p,q} \subset B^{s'}_{\infty,\infty}$, given by (16) and (17), shows that estimating a $B^s_{pq}$ function in $L^\infty$–norm is not harder than estimating a $B^{s'}_{\infty\infty}$ function. Noting that $B^{s'}_{\infty\infty}$ corresponds to the usual Hölder–Zygmund space whenever $s' \notin \mathbb{N}$, one can use the techniques of [1], Theorem 4 (contraction in sup-norm for Hölder-smooth functions), to obtain the desired rate. This rate corresponds, up to a logarithmic factor, to $n^{-s'/(2s'+1)}$, regardless of $p$ and $q$ (since $p' = \infty$, we have $s' = s - 1/p$, which is positive by (19), and $\eta < 0$ for all $p \ge 1$; thus the target rate, up to log terms, defined in (22) is $n^{-s'/(2s'+1)}$). Note that following the argument in [1] requires the additional moment assumption $E|\zeta| < \infty$; we believe that this condition could possibly be omitted, albeit via a slightly more technical argument.

4. A simulation study

In this section, we present numerical simulations that corroborate
and illustrate our theory. Additional simulations illustrating the lower bound obtained in Theorem 2 are presented in Appendix A [2].

4.1. White noise model regression under OT and Horseshoe priors

We consider the white noise regression model (1), expanded in the orthonormal basis $\varphi_k(t) = \sqrt{2}\cos(\pi(k - 1/2)t)$, leading to the normal sequence model (2). As underlying truth, we use a function with coefficients with respect to $(\varphi_k)$ given by $f_{0,k} = k^{-3/2}\sin(k)$. In particular, this true function can be thought of as having Sobolev regularity (almost) $\beta = 1$. We consider four priors on the coefficients of the unknown. The first three are of the form $f_k = \sigma_k\zeta_k$ for i.i.d. $\zeta_k$, with the same choice of scalings $\sigma_k = e^{-(\log k)^{3/2}}$, but different distributions of $\zeta_1$:

• Student distribution with 3 degrees of freedom, leading to the Student OT prior;
• Cauchy distribution, leading to the Cauchy OT prior;
• Horseshoe distribution, leading to the Horseshoe OT prior.

The fourth prior is the truncated Horseshoe prior as in (9), with $\sigma_k = \tau = 1/n$, where $n$ is the noise precision parameter in (2). Note that, for the OT priors, we use $3/2$ instead of $2$ in the exponent of the logarithm in the definition (6) used in our analysis. As noted in the discussion following (6), the contraction rates are in fact identical for any exponent strictly larger than $1$; however, we found the finite-$n$ behaviour to be slightly better for $3/2$ compared to $2$ (although the difference seemed relatively small in all conducted experiments), so we kept this choice throughout the simulations. Due to independence, the posterior decomposes into an infinite product of univariate posteriors. For all considered priors, we use Stan, with random initialization uniformly on the interval $(-2, 2)$, to sample each of the univariate posteriors [45]. One could also use simple random-walk-type algorithms; however, Stan is particularly convenient due to its simple implementation and adaptive tuning.
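Since the posterior factorizes over coordinates, each univariate posterior can also be computed by brute-force quadrature rather than MCMC. The sketch below (ours, not the Stan implementation used in the paper) evaluates the posterior mean of a single coefficient under a Cauchy prior with a small scaling $\sigma$, illustrating the shrink-small/keep-large behaviour of scaled heavy-tailed priors discussed in the next paragraphs:

```python
import numpy as np

def univariate_posterior_mean(X, sigma, n, half_width=10.0, num=400_001):
    """Posterior mean of f in the one-coordinate model X = f + xi/sqrt(n),
    with prior f = sigma * zeta, zeta standard Cauchy, via grid quadrature."""
    f = np.linspace(min(0.0, X) - half_width, max(0.0, X) + half_width, num)
    log_prior = -np.log(1.0 + (f / sigma) ** 2)   # Cauchy(0, sigma), up to constants
    log_lik = -0.5 * n * (X - f) ** 2
    log_w = log_prior + log_lik
    w = np.exp(log_w - log_w.max())               # stabilized unnormalized weights
    return float(np.sum(f * w) / np.sum(w))

n, sigma = 100.0, 0.01
m_small = univariate_posterior_mean(X=0.05, sigma=sigma, n=n)   # |X| at noise level: shrunk
m_large = univariate_posterior_mean(X=5.0, sigma=sigma, n=n)    # clear signal: preserved
```

With noise level $1/\sqrt{n} = 0.1$, the observation $X = 0.05$ is indistinguishable from noise and its posterior mean is shrunk close to $0$, while $X = 5$ is essentially retained, the Cauchy tails being nearly flat at that scale.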
In all four cases, we truncate at $K = 200$, which, for the considered regularities of the truth and the priors, suffices for the truncation error to be of lower order compared to the estimation error, for the considered noise levels, which range from $n = 10^3$ to $n = 10^5$ (in the specific case of the truncated Horseshoe prior with $\tau = 1/n$, this truncation point in fact occurs somewhat earlier than the truncation point $K = n$ allowed in the last part of Theorem 1). In Figure 1, we present the posterior means as well as 95% credible regions for various noise levels, computed by taking the 95% of the 4000 draws (after burn-in/warm-up) that are closest to the mean in the $L^2$–sense. The three OT priors appear to perform very well at all noise levels, with the two heavier-tailed priors (Cauchy and Horseshoe) leading to slightly broader credible regions. The Horseshoe prior with scaling $\tau = 1/n$ appears to be slightly overconfident at all but the lowest noise levels. This was in fact expected based on the intuition on scaled heavy-tailed priors outlined in [1, Section 2] in the univariate normal mean model: for small scalings, posterior means tend to shrink strongly towards $0$, while for large scalings
they tend to preserve the data. Since for this prior all coefficients (even at low frequencies) correspond to a small scaling $\tau = 1/n$, all observations in frequencies with small signal (how small depends only on $n$, and is independent of the frequency) are set very close to $0$. As a result, there is very little variance in the posterior. On the contrary, the OT-type priors' scalings for the first few frequencies remain large, before the faster-than-polynomial decay in the scaling parameters eventually kicks in, leading to important frequencies receiving significant values under the posterior (see also [1, Section 4] for more discussion on this). This in turn leads to posteriors exhibiting more variability in the OT-prior case.

Figure 1. White noise model: true function (black), posterior means (blue), 95% credible regions (grey), for $n = 10^3, 10^4, 10^5$ top to bottom, and for the four considered priors left to right.

4.2. Performance for spatially inhomogeneous truths when varying the loss

In this section we consider four spatially inhomogeneous true functions introduced in [23] for testing wavelet thresholding algorithms. We consider the full range of $L^{p'}$–losses, thereby considerably extending the simulation setting for OT priors in [1, Section E.2], where only the $L^2$–loss was considered. The four functions can be seen in Figure 2. We expand the functions in the Daubechies-8 maximally symmetric wavelet basis [20] (Symmlet-8) and add standard normal noise to each wavelet coefficient. We use 2048 coefficients and coarse level 5, while each function has been appropriately rescaled to obtain a signal-to-noise ratio (as captured by the ratio of the $L^2$–norms of the function and the noise) approximately equal to 7, as in [23]. Hence, we have a normal sequence model as in (18) with $n = 1$. Analysis and synthesis of the wavelet expansions are performed in Wavelab850 [22].
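The rescaling to a target signal-to-noise ratio can be sketched as follows (helper name ours): with i.i.d. unit-variance noise on each of the $N$ wavelet coefficients, the expected squared noise norm is $N$, so we match $\|f\|_2$ to $7\sqrt{N}$.

```python
import numpy as np

def rescale_to_snr(coefs, target_snr=7.0, noise_sd=1.0):
    """Rescale wavelet coefficients so that ||f||_2 / sqrt(E ||xi||_2^2) = target_snr,
    for i.i.d. N(0, noise_sd^2) noise on each coefficient (cf. Section 4.2)."""
    f = np.asarray(coefs, dtype=float)
    expected_noise_norm = noise_sd * np.sqrt(f.size)   # sqrt of E ||xi||_2^2
    return f * (target_snr * expected_noise_norm / np.linalg.norm(f))

# Illustrative synthetic coefficient vector (not one of the paper's test signals):
rng = np.random.default_rng(2)
f = rescale_to_snr(rng.standard_normal(2048) * np.exp(-np.arange(2048) / 300.0))
snr = np.linalg.norm(f) / np.sqrt(2048)
```

By construction, the rescaled vector has `snr` equal to the target up to floating-point error.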
Identically to [1, Section E.2], we consider priors on the wavelet coefficients of the form $f_{jk} = \sigma_j\zeta_{jk}$ for i.i.d. $\zeta_{jk}$, with the following choices of the scalings $\sigma_j$ and the distribution of $\zeta_{00}$:

• Gaussian hierarchical prior: $\sigma_j = \tau 2^{-j(1/2+\alpha)}$ with $\tau \sim$ Inv-Gamma$(1,1)$, $\alpha \sim$ Exp$(1)$, $\zeta_{00}$ standard normal;
• Cauchy OT prior: $\sigma_j = 2^{-j^{1+\nu}}$, with $\nu = 1/2$, $\zeta_{00}$ distributed according to the standard Cauchy distribution.

Figure 2. Spatially inhomogeneous true functions: Blocks, Bumps, HeaviSine and Doppler.

In addition, we consider frequentist estimation using the hybrid version of SureShrink, a soft wavelet thresholding algorithm developed by Donoho and Johnstone to be optimally smoothness-adaptive over Besov spaces, even for extremely sparse signals; see [24, 23]. We refer to [1, Section E.2] for details on the Markov chain Monte Carlo algorithms employed to sample the posteriors, as well as for the resulting visualizations. The implementation of hybrid SureShrink was done using the WaveShrink function of Wavelab850 [22]. To estimate the errors of the three considered methods, we averaged errors over 100 realizations of the data. In particular, for the two priors, we consider two types of errors. The first one is the
https://arxiv.org/abs/2505.15543v1
$L_{p'}$-error of the posterior means: after averaging we estimate the error $E_{f_0}\|\hat f - f_0\|_{p'}$, where $\hat f$ is the posterior mean. The same type of error is computed for $\hat f$ the thresholding estimator. The second type of error, computed only in the two Bayesian settings, estimates $E_{f_0} E^{\Pi[\cdot|X]}\|f - f_0\|_{p'}$, where the inner expectation is estimated by taking the average of the $L_{p'}$-errors of the Markov chain samples after burn-in, and the outer by averaging over the 100 data realizations. The latter error captures the contraction of the whole posterior around the truth.

In Figure 3 we show estimation errors in $L_{p'}$-loss on the left and contraction-type errors on the right, for $p' = 1, 2, 3, 4, 6$. Errors in supremum-loss are displayed in Table 1. The Cauchy OT prior convincingly outperforms the Gaussian hierarchical prior in all losses for the more 'jumpy' truths (Blocks and Bumps).

Figure 3. Average errors in $L_{p'}$ for $p' = 1, 2, 3, 4, 6$, for the four spatially inhomogeneous truths. Signal-to-noise ratio approximately 7 for all truths, errors averaged over 100 realizations of the noise. Errors for posterior means on the left, contraction-type errors on the right, for the Cauchy OT prior (blue plus markers) and the Gaussian hierarchical prior (red cross markers). In the left plot, the black circles are hybrid SureShrink estimation errors.

For the smoother truths (Doppler and HeaviSine), the Cauchy prior has significantly better errors for $L_{p'}$-losses with smaller $p'$ (especially contraction-type errors), while as $p'$ increases the errors become more even, with the Gaussian prior slightly outperforming Cauchy OT in supremum-loss (this is, we believe, only true for some specific signals, and we conjecture that the hierarchical Gaussian procedure is in fact suboptimal for minimax adaptation in supremum norm, based on the negative result from [17] mentioned in the introduction).
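The two error types can be sketched as follows (an illustration with synthetic posterior draws, not the paper's actual MCMC output). By convexity of the norm, the contraction-type error always dominates the posterior-mean error, consistent with the parenthesised values in Table 1 being larger:

```python
import numpy as np

rng = np.random.default_rng(1)
f0 = np.sin(np.linspace(0.0, 3.0, 256))                  # stand-in truth
draws = f0 + 0.3 * rng.standard_normal((500, 256))       # stand-in posterior samples

def lp_norm(v, p):
    return np.sum(np.abs(v) ** p) ** (1.0 / p)

p = 2
# Type 1: error of the posterior mean, ||E[f|X] - f0||_p'.
post_mean_err = lp_norm(draws.mean(axis=0) - f0, p)
# Type 2: contraction-type error, E[||f - f0||_p' | X], averaged over draws.
contraction_err = np.mean([lp_norm(d - f0, p) for d in draws])

# Triangle inequality: ||mean(d) - f0|| <= mean ||d - f0||.
print(post_mean_err <= contraction_err)  # True
```

In the paper both quantities are additionally averaged over 100 independent data realizations.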
This is despite the fact that, as can be seen in [1, Figures E.6 and E.7], the Gaussian prior fails to denoise the signals while the Cauchy-based posteriors are visually significantly better. The performance of the Cauchy OT prior matches and often surpasses that of the hybrid SureShrink algorithm.

Table 1. White noise model with spatially inhomogeneous truths: $L_\infty$ average errors of posterior means (contraction-type errors in parentheses), under the Cauchy OT and Gaussian hierarchical priors. Hybrid SureShrink estimation error also displayed.

            Cauchy OT     Gaussian hierarchical   SureShrink
Blocks      4.20 (4.46)   5.08 (5.41)             4.52
Bumps       3.65 (4.08)   5.92 (6.21)             5.64
Doppler     2.85 (3.02)   2.41 (2.91)             2.86
HeaviSine   2.51 (2.69)   2.36 (2.46)             2.18

4.3. Comparison of the performance of OT priors to wavelet thresholding algorithms in sparse Besov spaces

In this subsection we construct a sequence of true functions belonging to $B^{3/2}_{1\infty}$ which are close to being 'least favourable', in the sense that they are among the most difficult to estimate in that Besov class. Our construction follows the strategy used to establish lower bounds on the estimation rate over (sparse) Besov spaces in density estimation, as outlined e.g. in [33]. We construct four functions $f^{(i)}_0 : [0,1] \to \mathbb{R}$, $i = 1, \ldots, 4$, defined via their Symmlet-8 wavelet coefficients. Each function has non-zero wavelet coefficients only at one level: the function $f^{(i)}_0$ has non-zero
wavelet coefficients only at the level $j = 2i$. Recalling the definition of the $B^{3/2}_{1\infty}$-norm from Section 3, this means that each function has $B^{3/2}_{1\infty}$-norm equal to
$$2^{2i} \sum_{k=0}^{2^{2i}-1} |f_{2i,k}|, \qquad i = 1, \ldots, 4.$$
For $k = 0, \ldots, 2^{2i}-1$, we choose $f_{2i,k} = 20 \cdot 2^{-2i} w_{2i,k}$, where the $|w_{2i,k}|$ sum to 1. In this way we ensure that $\|f^{(i)}_0\|_{B^{3/2}_{1\infty}} = 20$ for each $i = 1, \ldots, 4$. The $w_{2i,k}$ are drawn via a 'stick-breaking' construction based on uniform random variables, see [29, Section 3.3.2]; to avoid all significant signal being on the left of the unit interval, we randomly permute the sticks. The resulting functions can be seen in Figure 4. Finally, we add standard normal noise scaled by $1/\sqrt{n}$ for $n = 10^{i+1}$ to the coefficients of each truth, to define the noisy observations. For each truth, we generate 100 realizations of the observation.

We consider the Cauchy OT prior and the hybrid SureShrink estimator. The implementation of the inference is performed in the same way as in the previous subsection. In Figure 5, we show the logarithms of the $L_{p'}$-errors of the posterior means and the thresholding estimators as functions of $\log n$. Here, for each $p' = 1, 2, 3, 4, 6, \infty$, we have four data points, one for each $f^{(i)}_0$, with corresponding noise precision level $n = 10^{i+1}$. The errors for each $i$ are averaged over the 100 realizations of the data before we apply the logarithm. According to the minimax rates discussed in Section 3, over $B^{3/2}_{1\infty}$, for $p' \le 4$ the minimax rate is the usual rate (here $n^{-3/8}$), while for $p' > 4$ we are in the sparse zone and the minimax rate is slower (here $n^{-1/4 - 1/(2p')}$). Indeed, in Figure 5 we observe that for stronger norms the errors for both the Cauchy OT prior and SureShrink appear to decay at a slower rate with $n$. Furthermore, the performance of the Cauchy OT prior again appears to be on par with (hybrid) SureShrink.

5.
Discussion

In this work we introduce a number of nonparametric priors that are flexible enough to reach (near-) minimax optimality over Besov spaces and over a whole spectrum of loss functions. The prior we recommend is the OT (Oversmoothed heavy-Tailed) prior with scale parameters $\sigma_k = \exp\{-(\log k)^{1+\nu}\}$. We recommend the universal choice $\nu = 1/2$, which yields uniformly excellent results in practice, although nearby choices such as $\nu = 1$ perform quite similarly; as for the type of heavy-tailed prior, one may take a Cauchy distribution or a Student distribution with e.g. $p = 3$ degrees of freedom (the difference in performance between the two is, again, quite mild). The new lower

Figure 4. 'Least favourable' truths $f^{(i)}_0$, $i = 1, \ldots, 4$, of unit $B^{3/2}_{1\infty}$-norm, constructed to have non-zero wavelet coefficients only at level $j = 2i$, with non-zero coefficients obtained via randomly permuted stick-breaking, scaled by $2^{-2i}$.

Figure 5. Log average errors for the four 'least favourable' truths from Figure 4, with corresponding $n = 10^{i+1}$. $L_{p'}$-errors of the posterior mean for the Cauchy OT prior (left) and for the hybrid SureShrink estimator (right), for $p' = 1, 2, 3, 4, 6, \infty$.

bounds results of the present paper (Theorem 2) also confirm that the special type of
decrease as above for $\sigma_k$ (slightly faster than any polynomial in $k$) is really needed in order to achieve full adaptation, since we prove that the HT($\alpha$) prior only achieves one-sided adaptation in the oversmoothing case $\alpha \ge \beta$, as also seen in our simulation study.

The present paper also opens the door to the investigation of (very) heavy-tailed priors for nonparametric inference, including the classical Cauchy prior, or the more elaborate horseshoe distribution. The simulations in Section 4 reveal a number of differences between the OT prior with Cauchy or Student distributions and the horseshoe prior. From the theoretical perspective, the horseshoe prior fits the main Theorem in a similar way as the Cauchy. Empirically, if we take a series prior with scalings $\sigma_k$ as in the OT prior and horseshoe distributions, we obtain roughly similar behaviour as for the Cauchy. Algorithmically, although in the present setting the difference in computing time is not very significant, the fact that the horseshoe models a near-spike at zero (in order to mimic sparse vectors) is in principle not necessary in the present purely nonparametric context. For more complex models (such as density estimation or classification, not considered here, but studied in terms of certain losses in [1]), one may expect that modelling a near-spike will be more costly computationally.

Another interesting point of comparison is the truncated horseshoe prior. Such a prior is generally used in the context of sparsity, for instance for sparse sequences or high-dimensional linear regression [9]; similarly, a Cauchy prior with common scale parameter was proposed for high-dimensional linear regression in [19]. While such a prior can be used in the present nonparametric context as well, it behaves somewhat similarly to a hard-thresholding estimator and so seems less flexible than e.g.
the SureShrink algorithm that we compare to in Section 4; indeed, as one would expect, its performance for nonparametric function estimation is not as favourable (we did not try to optimise over the parameter $\tau$, which in principle could be done e.g. via empirical Bayes, but the corresponding procedure seems overall more suited to purely sparse classes rather than nonparametric ones). Part of the problem with this type of prior is the choice of $\tau$; we expect something similar to occur with spike-and-slab priors as considered in [34]. Indeed, although these priors can be conjectured to achieve optimality results over Besov classes similar to those established in the present work, one may expect that good performance in practice requires some tuning of the weight of the spike (which plays a role similar to that of $\tau$ for the horseshoe distribution). Also, as discussed in [1], deploying spike-and-slab priors in more complex models is expected to lead to computational difficulties in terms of exploration of the sets of submodels (i.e. those corresponding to how zeros are distributed under the prior/posterior); on the other hand, OT priors retain a form of computational tractability beyond regression models, as their built-in deterministic shrinkage (through the non-random scales $\sigma_k$) makes it possible to use MCMC without having to sample posteriors
on submodels, as would be the case for model-selection-type priors such as sieve priors with random cut-off or spike-and-slab priors.

6. Proofs of the results of Section 2

6.1. Proof of Theorem 1

For simplicity we focus on the case $\nu = 1$ for the OT scalings (6), the proof being similar for any other $\nu > 0$. The part of the statement involving the truncated horseshoe as in (9) is proven in the supplementary material [2].

Let $K_n := n^{1/(2\beta+1)}$ and $v_n = K_n^{-2\beta}\log^\delta n$, with $\delta > 0$ to be chosen large enough below. For $f \in L^2$, let $f^{[K_n]}$ denote its orthogonal projection onto the linear span of the first $K_n$ basis vectors and set $f^{[K_n^c]} := f - f^{[K_n]}$. A union bound leads to
$$\Pi\big[\{f : \|f - f_0\|_2^2 > v_n\} \mid X\big] \le \Pi\big[\{f : \|f^{[K_n^c]} - f_0^{[K_n^c]}\|_2^2 > v_n/2\} \mid X\big] + \Pi\big[\{f : \|f^{[K_n]} - f_0^{[K_n]}\|_2^2 > v_n/2\} \mid X\big].$$

Let us first deal with the coefficients $k \le K_n$. Using Markov's inequality and next splitting the sum with $(a+b)^2 \le 2a^2 + 2b^2$, one can bound the second term in the last display by $(2/v_n)$ times
$$\int \|f^{[K_n]} - f_0^{[K_n]}\|_2^2 \, d\Pi(f|X) \le 2 \sum_{k \le K_n} \int (f_k - X_k)^2 \, d\Pi(f|X) + 2 \sum_{k \le K_n} (X_k - f_{0,k})^2.$$
Under $E_{f_0}$, using the definition of the model (2), the second sum on the right-hand side is smaller than $K_n/n = o(v_n)$. It now suffices to show that
$$\sum_{k \le K_n} E_{f_0} \int (f_k - X_k)^2 \, d\Pi(f|X) = o(v_n).$$
Combining Lemma 2 with $p = 2$ and Lemma 1 on coordinate $k$, and noting that $|f_{0,k}| \le F$ for all $k$ follows from (10), we get, for any $t \in \mathbb{R}$,
$$n E_{f_0} \int (f_k - X_k)^2 \, d\Pi(f|X) \lesssim t^{-2}\Big( 1 + \log^2(\sigma_k \sqrt{n}) + t^4 + \log^{2(1+\kappa)}\Big(1 + \frac{F + 1/\sqrt{n}}{\sigma_k}\Big) \Big).$$
Since $\sigma_k = e^{-\log^2 k}$ and $k \le K_n$, we have $\log \sigma_k^{-1} \lesssim \log^2 n$. Taking $t^2 \asymp \log^{2(1+\kappa)} n$, the bound of the last display is of order $\log^{2(1+\kappa)} n =: \log^{\delta'} n$. Therefore the sum in the last-but-one display is of order $(\log^{\delta'} n) K_n / n = o(v_n)$ if $\delta$ is chosen larger than $\delta'$.

For the remaining terms $k > K_n$: since $f_0 \in \mathcal{S}^\beta(F)$, when $n$ is large enough $\sum_{k > K_n} |f_{0,k}|^2 \le v_n/8$, so
$$E_{f_0} \Pi\Big[\{f : \sum_{k > K_n} |f_k - f_{0,k}|^2 > v_n/2\} \mid X\Big] \le E_{f_0} \Pi\Big[\{f : \sum_{k > K_n} |f_k|^2 > v_n/8\} \mid X\Big].$$
Let us introduce the sequence $z_k := k^{-1}\log^{-2} k$.
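The next step of the argument uses the summability of $(z_k)$; a quick numeric sanity check (illustration only) confirms that the partial sums of $z_k = 1/(k \log^2 k)$ stay bounded, in line with the integral bound $\sum_{k \ge 3} z_k \le \int_2^\infty dx/(x \log^2 x) = 1/\log 2$:

```python
import math

def partial_sum(K):
    """Partial sum of z_k = 1/(k * log^2 k) for 2 <= k <= K."""
    return sum(1.0 / (k * math.log(k) ** 2) for k in range(2, K + 1))

s4, s5 = partial_sum(10**4), partial_sum(10**5)
# The tail beyond K is below the integral 1/log(K), so the partial sums
# increase toward a finite limit bounded by z_2 + 1/log(2) ~ 2.49.
print(s5 - s4 < 0.1, s5 < 2.49)
```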
Using in order the summability of $(z_k)$ and a union bound, we have, for a suitable constant $M > 0$,
$$E_{f_0} \Pi\Big[\{f : \sum_{k > K_n} |f_k|^2 > v_n/8\} \mid X\Big] \le E_{f_0} \Pi\Big[\{f : \max_{k > K_n} z_k^{-1} f_k^2 \ge M v_n\} \mid X\Big].$$
For any event $A_n$, the upper bound of the last display is further bounded by
$$\sum_{k > K_n} E_{f_0}\Big( \Pi\big[\{f : f_k^2 \ge M z_k v_n\} \mid X\big] 1_{A_n} \Big) + P_{f_0}(A_n^c). \tag{24}$$
Let us define the event
$$A_n := \bigcap_{K_n < k \le n,\, k \in N} A_{k,0} \;\cap\; \bigcap_{\ell \ge 1}\; \bigcap_{\ell n < k \le (\ell+1)n,\, k \in N} A_{k,\ell}, \tag{25}$$
where we have set
$$A_{k,\ell} := \Big\{ |X_k| \le \underbrace{\sqrt{\tfrac{4\log(n(\ell+1)^2)}{n}}}_{y_{k,\ell}} \Big\}; \qquad N := \{k : |f_{0,k}| \le 1/\sqrt{n}\}.$$
For any $\ell, k$ such that $\ell n < k \le (\ell+1)n$ we set $x_k = y_{k,\ell}$. With such a choice of $(x_k)$ the conditions of Lemma 3 are satisfied on the event $A_n$. Writing $\varphi$ for the density of the standard Gaussian distribution, using $\varphi \lesssim 1$ for the numerator and, for the denominator, Lemma 3 with $m = 0$ on coordinate $k$, we have
$$E_{f_0}\Big( \Pi\big[\{f : f_k^2 \ge M z_k v_n\} \mid X\big] 1_{A_n} \Big) \le E_{f_0}\Bigg( \frac{\int_{|\theta| > \sqrt{M z_k v_n}} \varphi(\sqrt{n}(X_k - \theta)) h(\theta/\sigma_k)\, d\theta}{\int \varphi(\sqrt{n}(X_k - \theta)) h(\theta/\sigma_k)\, d\theta}\, 1_{A_n} \Bigg) \lesssim E_{f_0}\Big( \frac{1}{\varphi(\sqrt{n}x_k)}\, 2H\big(\sigma_k^{-1}\sqrt{M v_n z_k}\big)\, 1_{A_n} \Big).$$
Now for any $k > K_n$ and $n$ large enough, we have $\sigma_k^{-1}\sqrt{M v_n z_k} \ge 1$; therefore, using assumption (H3),
$$H\big(\sigma_k^{-1}\sqrt{M v_n z_k}\big) \lesssim \frac{\sigma_k}{\sqrt{M v_n z_k}}.$$
Noting that for $y_{k,\ell}$ as defined above we have $\varphi(\sqrt{n}y_{k,\ell}) \gtrsim (n(\ell+1)^2)^{-2}$, the sum in (24) is bounded as
$$\sum_{k > K_n} E_{f_0}\Big( \Pi\big[\{f : f_k^2 \ge M z_k v_n\} \mid X\big] 1_{A_n} \Big) \lesssim \sum_{K_n < k \le n} \frac{n^2\sigma_k}{\sqrt{M v_n z_k}} + \frac{1}{\sqrt{M v_n}}\sum_{\ell \ge 1}\sum_{\ell n < k \le (\ell+1)n} (n(\ell+1)^2)^2\,\sigma_k\, z_k^{-1/2} \lesssim \frac{n^{7/2}\log n}{\sqrt{v_n}}\, e^{-C\log^2 n} + \frac{n^{7/2}}{\sqrt{v_n}}\sum_{\ell \ge 1} (\ell+1)^4\sqrt{\ell+1}\,\log((\ell+1)n)\, e^{-\log^2(\ell n)}.$$
Taking $v_n = (\log^\delta n)K_n^{-2\beta}$ with $\delta > \delta'$ large enough, the sums in the last display all go to 0 as $n \to \infty$. Finally, from Lemma 4 we have $P_{f_0}(A_n^c) \to 0$, which concludes the proof.

6.2. Proof of Theorem 2

Here we only show the proof of the in-probability lower bound; the proof of the in-expectation version can be found in Section C of the supplementary material [2]. Let $K_\alpha := n^{1/(2\alpha+1)}$ and $\varepsilon_n := K_\alpha^{-\alpha}\log^\gamma n$, where $\gamma > 0$ is to be chosen below. It is enough to show (see for instance [10], in particular Lemma 1 therein) that, as $n \to \infty$,
$$\frac{\Pi(\|f - f_0\|_2 \le \varphi_n)}{e^{-2n\varepsilon_n^2}\,\Pi(\|f - f_0\|_2 \le \varepsilon_n)} \to 0.$$
Since $\alpha < \beta$, we have $f_0 \in S^\beta(F) \subset S^\alpha(F)$, and thanks to [1], Theorem 6, as soon as $\gamma > 0$ is taken large enough, the HT($\alpha$) prior satisfies the following prior mass condition, for $n$ large enough:
$$\Pi(\|f - f_0\|_2 \le \varepsilon_n) \ge e^{-n\varepsilon_n^2}. \tag{26}$$
For the numerator, looking at the orthogonal projection on coordinates $k > K_\alpha$, we have
$$\Pi(\|f - f_0\|_2 \le \varphi_n) \le \Pi\big(\|f^{[K_\alpha^c]} - f_0^{[K_\alpha^c]}\|_2 \le \varphi_n\big).$$
Since $f_0 \in S^\beta(F)$ and $\alpha < \beta$, for $n$ large enough we have $\|f_0^{[K_\alpha^c]}\|_2 \le F K_\alpha^{-\beta} \le \varphi_n$ by definition of $\varphi_n$. Thus the triangle inequality leads to
$$\Pi\big(\|f^{[K_\alpha^c]} - f_0^{[K_\alpha^c]}\|_2 \le \varphi_n\big) \le \Pi\big(\|f^{[K_\alpha^c]}\|_2 \le 2\varphi_n\big).$$
To bound the last term, apply Lemma 5 with $\mu := 1/2 + \alpha$, $K_n = K_\alpha$ and $\lambda := n(\log n)^{(2\gamma+1)(2\alpha+1)}$, so that $\mu > 1$ and $K_n\lambda^{-1/(2\mu)} \le 1$ when $n$ is large enough. Thanks to hypothesis (H3), the condition $E(|\zeta|^{1/\mu}) < \infty$ is satisfied for $\mu > 1$ (hence the prior regularity assumption $\alpha > 1/2$, see also Remark 2); therefore we obtain
$$\log \Pi\big(\|f^{[K_\alpha^c]}\|_2 \le 2\varphi_n\big) \le 4\lambda\varphi_n^2 - C\lambda^{1/(2\alpha+1)}.$$
If $\delta > 0$ is large enough, we have $4\lambda\varphi_n^2 \le (C/2)\lambda^{1/(2\alpha+1)}$ and the previous bound is smaller than
$$-(C/2)\, n^{1/(2\alpha+1)}(\log n)^{2\gamma+1} \lesssim -n\varepsilon_n^2\log n.$$
Combining this bound and the prior mass lower bound (26) leads to
$$\frac{\Pi(\|f - f_0\|_2 \le \varphi_n)}{e^{-2n\varepsilon_n^2}\,\Pi(\|f - f_0\|_2 \le \varepsilon_n)} \le e^{-C' n\varepsilon_n^2\log n} \to 0,$$
as $n \to \infty$, which concludes the proof of the in-probability bound.

6.3. Technical lemmas

Lemma 1. Let $\theta \sim \pi$ and $X \mid \theta \sim \mathcal{N}(\theta, 1/n)$.
Suppose $\pi$ is the law of $\sigma \cdot \zeta$, where $\sigma > 0$ and $\zeta$ has a heavy-tailed density satisfying (H1)–(H2). Then, for all $t \in \mathbb{R}$, $\theta_0 \in \mathbb{R}$, $\sigma > 0$,
$$E_{\theta_0} E^\pi\big[ e^{t\sqrt{n}(\theta - X)} \mid X \big] \lesssim \sigma\sqrt{n}\, e^{t^2/2}\, e^{c_1\log^{1+\kappa}\big(1 + \frac{|\theta_0| + 1/\sqrt{n}}{\sigma}\big)}.$$

Proof. For any $t \in \mathbb{R}$, denoting by $h$ the density of $\zeta$, we have
$$E_{\theta_0} E^\pi\big[ e^{t\sqrt{n}(\theta - X)} \mid X \big] = E_{\theta_0}\frac{\int \exp(t\sqrt{n}(\theta - X))\varphi(\sqrt{n}(X - \theta)) h(\theta/\sigma)\, d\theta}{\int \varphi(\sqrt{n}(X - \theta)) h(\theta/\sigma)\, d\theta} = E_{\xi \sim \mathcal{N}(0,1)}\frac{\int e^{t(v - \xi) - \frac{(v - \xi)^2}{2}}\, h\big(\frac{\theta_0 + v/\sqrt{n}}{\sigma}\big)\, dv}{\int e^{-\frac{(v - \xi)^2}{2}}\, h\big(\frac{\theta_0 + v/\sqrt{n}}{\sigma}\big)\, dv}.$$
The integral in the numerator is bounded as follows, using that $h$ is a density function:
$$\int e^{t(v - \xi) - \frac{(v - \xi)^2}{2}}\, h\big(\tfrac{\theta_0 + v/\sqrt{n}}{\sigma}\big)\, dv = e^{t^2/2}\int e^{-\frac{(t - (v - \xi))^2}{2}}\, h\big(\tfrac{\theta_0 + v/\sqrt{n}}{\sigma}\big)\, dv \le \sigma\sqrt{n}\, e^{t^2/2}.$$
For the denominator, using that $h$ is symmetric and decreasing on $(0, \infty)$, along with condition (H2), one gets
$$\int e^{-\frac{(v - \xi)^2}{2}}\, h\big(\tfrac{\theta_0 + v/\sqrt{n}}{\sigma}\big)\, dv \gtrsim \int_{-1}^{1} e^{-\frac{(v - \xi)^2}{2}}\, e^{-c_1\log^{1+\kappa}\big(1 + \frac{\theta_0 + v/\sqrt{n}}{\sigma}\big)}\, dv \gtrsim e^{-c_1\log^{1+\kappa}\big(1 + \frac{|\theta_0| + 1/\sqrt{n}}{\sigma}\big)}\int_{-1}^{1} e^{-\frac{(v - \xi)^2}{2}}\, dv.$$
Using [15], pages 2015–2016, there exists a universal constant $K > 0$ such that
$$E_{\xi \sim \mathcal{N}(0,1)}\Big[\Big(\int_{-1}^{1} e^{-\frac{(v - \xi)^2}{2}}\, dv\Big)^{-1}\Big] \le K.$$
Combining the previous inequalities leads to the desired result.

Lemma 2. Let $Y$ be a real-valued random variable. Then for $t > 0$, $p \ge 1$ and $L(t) := E(\exp(t|Y|))$,
$$E(|Y|^p) \le \frac{2^{p-1}}{t^p}\big(p^p + \log^p L(t)\big).$$

Proof. Write $E(|Y|^p) = t^{-p}E[\log^p(\exp(t|Y|))]$. Using concavity of the map $x \mapsto \log^p x$ for $x > e^{p-1}$, by Jensen's inequality one may write
$$E[\log^p(\exp(t|Y|))] \le E[\log^p(e^{p-1} + \exp(t|Y|))] \le \log^p(e^{p-1} + L(t)) \le (p + \log L(t))^p \le 2^{p-1}(p^p + \log^p L(t)),$$
where the second-to-last inequality uses $\log(e^{p-1} + x) \le p + \log x$, valid for $x \ge 1$.

Lemma 3 (Lower bounds for posterior integrals). Let $\varphi$ be the density function of the standard Gaussian distribution. Let $\sigma > 0$, $m \ge 0$ and let $h$ be a density function satisfying (H1). Let $X$ be a random variable and $x$ a deterministic
non-negative real number such that $|X| \le x$ and $\sigma \lesssim x$. Then
$$\int |\theta|^m\varphi(\sqrt{n}(X - \theta)) h(\theta/\sigma)\, d\theta \gtrsim \sigma^{m+1}\varphi(\sqrt{n}x).$$

Proof. By symmetry of both $h$ and $\varphi$, it is enough to focus on the case $X \ge 0$. Restricting the domain of integration to the set $[X - x, X + x]$, the integral of interest is greater than
$$\varphi(\sqrt{n}x)\int_{X - x}^{X + x} |\theta|^m h(\theta/\sigma)\, d\theta.$$
Since by assumption $X \le x$, and recalling $X \ge 0$, the integral in the last display can be further bounded from below by $\int_0^x |\theta|^m h(\theta/\sigma)\, d\theta = \sigma^{m+1}\int_0^{x/\sigma} |u|^m h(u)\, du$. Using $\sigma \lesssim x$ as well as the positivity and monotonicity of $h$, the latter integral is further bounded from below, for some $c > 0$, by $\int_0^c |u|^m h(u)\, du \gtrsim 1$. Putting everything together and using symmetry, one gets the desired result.

Lemma 4. Let $A_n$ be the event defined in (25) and assume $f_0 \in \mathcal{S}^\beta(F)$, with $\beta, F > 0$. Then $P_{f_0}(A_n^c) \to 0$ as $n \to \infty$.

Proof. For any $k \in N$, by definition we have $|f_{0,k}| \le 1/\sqrt{n}$; thus for any $\ell \ge 0$,
$$P_{f_0}(A_{k,\ell}^c) = P_{f_0}\Big( \Big|f_{0,k} + \frac{\xi_k}{\sqrt{n}}\Big| > \sqrt{\frac{4\log(n(\ell+1)^2)}{n}} \Big) \le P\big( |\mathcal{N}(0,1)| > \sqrt{4\log(n(\ell+1)^2)} - 1 \big).$$
As $n$ gets large enough, this Gaussian tail probability is further upper-bounded, for any $\ell \ge 0$, by
$$P\big( |\mathcal{N}(0,1)| > \sqrt{3\log(n(\ell+1)^2)} \big) \le (n(\ell+1)^2)^{-3/2}.$$
Combined with the definition (25) of the event $A_n$ and a union bound, we obtain
$$P_{f_0}(A_n^c) \le \sum_{\ell \ge 0} (\ell+1)n \times (n(\ell+1)^2)^{-3/2} \le n^{-1/2}\sum_{\ell \ge 0} (\ell+1)^{-2} \lesssim 1/\sqrt{n},$$
which ensures $P_{f_0}(A_n^c) \to 0$ as $n \to \infty$.

Lemma 5. Consider a random sum of the form
$$S_n := \sum_{k > K_n} \sigma_k^2\zeta_k^2,$$
where $\sigma_k = k^{-\mu}$ for some $\mu > 1/2$ and the $\zeta_k$ are independent and identically distributed copies of a real random variable $\zeta$ satisfying $E[|\zeta|^{1/\mu}] < \infty$. Then for any $\varepsilon > 0$, it holds that
$$\log P\big(S_n \le \varepsilon^2\big) \le \lambda\varepsilon^2 - C\lambda^{\frac{1}{2\mu}}, \tag{27}$$
for any $\lambda > 0$ such that $K_n\lambda^{-\frac{1}{2\mu}} \le 1$, where $C$ is a positive constant (depending on $\mu$ and the distribution of $\zeta$).

Proof. The proof follows the ideas of [7]. First, notice that for any $\lambda > 0$, by the (exponential) Markov inequality,
$$\log P(S_n \le \varepsilon^2) \le \lambda\varepsilon^2 + \log E e^{-\lambda S_n}.$$
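This exponential-Markov step can be checked numerically. The sketch below uses Gaussian $\zeta$ (an arbitrary illustrative choice; all parameter values are likewise arbitrary), for which $E e^{-\lambda\sigma^2\zeta^2} = (1 + 2\lambda\sigma^2)^{-1/2}$ in closed form:

```python
import math
import random

random.seed(0)
mu, K_n, K_max, lam, eps2 = 1.0, 5, 200, 50.0, 0.02
sigma2 = [k ** (-2 * mu) for k in range(K_n + 1, K_max + 1)]

# Right-hand side: lambda*eps^2 + log E exp(-lambda*S_n), exact for Gaussian zeta.
log_mgf = -0.5 * sum(math.log(1.0 + 2.0 * lam * s2) for s2 in sigma2)
bound = lam * eps2 + log_mgf

# Left-hand side: Monte Carlo estimate of P(S_n <= eps^2),
# floored at 1/n_sim to keep the logarithm finite.
n_sim = 5000
hits = 0
for _ in range(n_sim):
    s = sum(s2 * random.gauss(0.0, 1.0) ** 2 for s2 in sigma2)
    hits += (s <= eps2)
prob = max(hits / n_sim, 1.0 / n_sim)
print(math.log(prob) <= bound + 1.0)  # Markov bound holds, up to MC noise
```

The truncation at `K_max` stands in for the infinite sum; the neglected tail only makes the event easier, so the inequality is conservative.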
By independence, the second term equals
$$\log\prod_{k > K_n} E e^{-\lambda\sigma_k^2\zeta_k^2} = \sum_{k > K_n}\log E e^{-\lambda\sigma_k^2\zeta_k^2} \le \int_{K_n}^{\infty}\log E e^{-\lambda x^{-2\mu}\zeta^2}\, dx,$$
where in the last bound we have used the comparison of the sum with an integral (taking advantage of the fact that the expectation in the integrand is increasing in $x$). Changing variables twice, first setting $\sqrt{\lambda}x^{-\mu} = y$ and then $y = z^{-\mu}$, we get that the last integral is equal to
$$\frac{1}{\mu}\lambda^{\frac{1}{2\mu}}\int_0^{\sqrt{\lambda}K_n^{-\mu}}\log E e^{-y^2\zeta^2}\, y^{-\frac{1}{\mu}-1}\, dy = \lambda^{\frac{1}{2\mu}}\int_{K_n\lambda^{-\frac{1}{2\mu}}}^{\infty}\log E e^{-z^{-2\mu}\zeta^2}\, dz.$$
Noting that the integrand on the right-hand side is non-positive, under the assumption $K_n\lambda^{-\frac{1}{2\mu}} \le 1$ the right-hand side is upper bounded by
$$\lambda^{\frac{1}{2\mu}}\int_1^{\infty}\log E e^{-z^{-2\mu}\zeta^2}\, dz,$$
where [7, Lemma 4.3] shows that the last integral is finite if and only if $E[|\zeta|^{1/\mu}] < \infty$, as assumed. The claimed bound (27) follows by combining the above considerations.

References

[1] Agapiou, S. and Castillo, I. (2024). Heavy-tailed Bayesian nonparametric adaptation. Ann. Statist. 52, 1433–1459. MR4804815
[2] Agapiou, S., Castillo, I. and Egels, P. Supplement to "Heavy-tailed and Horseshoe priors for regression and sparse Besov rates".
[3] Agapiou, S., Dashti, M. and Helin, T. (2021). Rates of contraction of posterior distributions based on p-exponential priors. Bernoulli 27, 1616–1642. MR4278794
[4] Agapiou, S. and Savva
, A. (2024). Adaptive inference over Besov spaces in the white noise model using p-exponential priors. Bernoulli 30, 2275–2300. MR4746608
[5] Agapiou, S. and Wang, S. (2024). Laplace priors and spatial inhomogeneity in Bayesian inverse problems. Bernoulli 30, 878–910. MR4699538
[6] Arbel, J., Gayraud, G. and Rousseau, J. (2013). Bayesian Optimal Adaptive Estimation Using a Sieve Prior. Scandinavian Journal of Statistics 40, 549–570.
[7] Aurzada, F. (2007). On the lower tail probabilities of some random sequences in lp. Journal of Theoretical Probability 20, 843–858.
[8] Cai, T. T. (2008). On information pooling, adaptability and superefficiency in nonparametric function estimation. Journal of Multivariate Analysis 99, 421–436.
[9] Carvalho, C. M., Polson, N. G. and Scott, J. G. (2010). The horseshoe estimator for sparse signals. Biometrika 97, 465–480. MR2650751
[10] Castillo, I. (2008). Lower bounds for posterior rates with Gaussian process priors. Electron. J. Stat. 2, 1281–1299. MR2471287
[11] Castillo, I. (2014). On Bayesian supremum norm contraction rates. Ann. Statist. 42, 2058–2091. MR3262477
[12] Castillo, I. (2024). Bayesian nonparametric statistics, St Flour lecture notes LI. Springer.
[13] Castillo, I. and Egels, P. (2024). Posterior and variational inference for deep neural networks with heavy-tailed weights. arXiv eprint 2406.03369.
[14] Castillo, I. and Mismer, R. (2021). Spike and slab Pólya tree posterior densities: adaptive inference. Ann. Inst. Henri Poincaré Probab. Stat. 57, 1521–1548. MR4291462
[15] Castillo, I. and Nickl, R. (2013). Nonparametric Bernstein–von Mises theorems in Gaussian white noise. The Annals of Statistics 41, 1999–2028.
[16] Castillo, I. and Randrianarisoa, T. (2022). Optional Pólya trees: posterior rates and uncertainty quantification. Electron. J. Stat. 16, 6267–6312. MR4515717
[17] Castillo, I. and Ročková, V. (2021). Uncertainty quantification for Bayesian CART. Ann. Statist. 49, 3482–3509.
MR4352538
[18] Cohen, A., Daubechies, I. and Vial, P. (1993). Wavelets on the Interval and Fast Wavelet Transforms. Applied and Computational Harmonic Analysis 1, 54–81.
[19] Dalalyan, A. S. and Tsybakov, A. B. (2012). Sparse regression learning by aggregation and Langevin Monte-Carlo. J. Comput. System Sci. 78, 1423–1443. MR2926142
[20] Daubechies, I. (1992). Ten lectures on wavelets. CBMS-NSF Regional Conference Series in Applied Mathematics 61. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA. MR1162107
[21] Dolera, E., Favaro, S. and Giordano, M. (2024). On strong posterior contraction rates for Besov-Laplace priors in the white noise model. arXiv preprint arxiv:2411.06981.
[22] Donoho, D., Maleki, A., Shahram, M. et al. (2006). Wavelab 850. Software toolkit for time-frequency analysis.
[23] Donoho, D. L. and Johnstone, I. M. (1994). Ideal spatial adaptation by wavelet shrinkage. Biometrika 81, 425–455. MR1311089
[24] Donoho, D. L. and Johnstone, I. M. (1995). Adapting to unknown smoothness via wavelet shrinkage. J. Amer. Statist. Assoc. 90, 1200–1224. MR1379464
[25] Donoho
, D. L., Johnstone, I. M., Kerkyacharian, G. and Picard, D. (1995). Wavelet shrinkage: asymptopia? J. Roy. Statist. Soc. Ser. B 57, 301–369. With discussion and a reply by the authors. MR1323344
[26] Donoho, D. L., Johnstone, I. M., Kerkyacharian, G. and Picard, D. (1996). Density estimation by wavelet thresholding. Ann. Statist. 24, 508–539. MR1394974
[27] Donoho, D. L., Johnstone, I. M., Kerkyacharian, G. and Picard, D. (1997). Universal near minimaxity of wavelet shrinkage. In Festschrift for Lucien Le Cam, 183–218. Springer, New York. MR1462946
[28] Ghosal, S., Ghosh, J. K. and van der Vaart, A. W. (2000). Convergence rates of posterior distributions. Ann. Statist. 28, 500–531. MR1790007 (2001m:62065)
[29] Ghosal, S. and van der Vaart, A. (2017). Fundamentals of nonparametric Bayesian inference. Cambridge Series in Statistical and Probabilistic Mathematics 44. Cambridge University Press, Cambridge. MR3587782
[30] Giné, E. and Nickl, R. (2011). Rates of contraction for posterior distributions in Lr-metrics, 1 ≤ r ≤ ∞. Ann. Statist. 39, 2883–2911. MR3012395
[31] Giné, E. and Nickl, R. (2015). Mathematical foundations of infinite-dimensional statistical models 40. Cambridge University Press.
[32] Giordano, M. (2023). Besov-Laplace priors in density estimation: optimal posterior contraction rates and adaptation. Electron. J. Stat. 17, 2210–2249. MR4649387
[33] Härdle, W., Kerkyacharian, G., Picard, D. and Tsybakov, A. (1998). Wavelets, approximation, and statistical applications. Lecture Notes in Statistics 129. Springer-Verlag, New York. MR1618204
[34] Hoffmann, M., Rousseau, J. and Schmidt-Hieber, J. (2015). On adaptive posterior concentration rates. Ann. Statist. 43, 2259–2295. MR3396985
[35] Lacour, C., Massart, P. and Rivoirard, V. (2025). Is model selection possible for the ℓp-loss? PCO estimation for regression models. arXiv preprint arXiv:2504.11217.
[36] Lepski, O. V., Mammen, E. and Spokoiny, V. G.
(1997). Optimal spatial adaptation to inhomogeneous smoothness: an approach based on kernel estimates with variable bandwidth selectors. Ann. Statist. 25, 929–947. MR1447734
[37] Naulet, Z. (2022). Adaptive Bayesian density estimation in sup-norm. Bernoulli 28, 1284–1308. MR4388939
[38] Nemirovskiĭ, A. S., Polyak, B. T. and Tsybakov, A. B. (1983). Estimates of the maximum likelihood type for a nonparametric regression. Dokl. Akad. Nauk SSSR 273, 1310–1314. MR731296
[39] Nemirovskiĭ, A. S., Polyak, B. T. and Tsybakov, A. B. (1985). The rate of convergence of nonparametric estimates of maximum likelihood type. Problemy Peredachi Informatsii 21, 17–33. MR820705
[40] Nemirovskiy, A. S. (1985). Nonparametric estimation of smooth regression functions. Soviet J. Comput. Systems Sci. 23, 1–11. MR844292
[41] Polson, N. G. and Scott, J. G. (2011). Shrink globally, act locally: sparse Bayesian regularization and prediction. In Bayesian statistics 9, 501–538. Oxford Univ. Press, Oxford. With discussions by Bertrand Clark, C. Severinski, Merlise A. Clyde, Robert L. Wolpert, Jim E.
Griffin, Philip J. Brown, Chris Hans, Luis R. Pericchi, Christian P. Robert and Julyan Arbel. MR3204017
[42] Rasmussen, C. E. and Williams, C. K. I. (2006). Gaussian processes for machine learning. Adaptive Computation and Machine Learning. MIT Press, Cambridge, MA. MR2514435
[43] Ročková, V. and Rousseau, J. (2024). Ideal Bayesian spatial adaptation. J. Amer. Statist. Assoc. 119, 2078–2091. MR4797924
[44] Szabó, B. T., van der Vaart, A. W. and van Zanten, J. H. (2013). Empirical Bayes scaling of Gaussian priors in the white noise model. Electron. J. Stat. 7, 991–1018. MR3044507
[45] Stan Development Team (2024). Stan Modelling Language Users Guide and Reference Manual v. 2.34.
[46] van der Pas, S., Szabó, B. and van der Vaart, A. (2017). Uncertainty quantification for the horseshoe (with discussion). Bayesian Anal. 12, 1221–1274. With a rejoinder by the authors. MR3724985
[47] van der Pas, S. L., Kleijn, B. J. K. and van der Vaart, A. W. (2014). The horseshoe estimator: posterior concentration around nearly black vectors. Electron. J. Stat. 8, 2585–2618. MR3285877
[48] van der Vaart, A. W. and van Zanten, J. H. (2008). Rates of contraction of posterior distributions based on Gaussian process priors. Ann. Statist. 36, 1435–1463. MR2418663
[49] van der Vaart, A. W. and van Zanten, J. H. (2009). Adaptive Bayesian estimation using a Gaussian random field with inverse gamma bandwidth. Ann. Statist. 37, 2655–2675. MR2541442

SUPPLEMENTARY MATERIAL

This supplement contains additional proofs and simulations. Appendix A presents simulations regarding the lower bound results. The proof of Theorem 1 for the truncated horseshoe prior distribution is presented in Appendix B, and the proof of the in-expectation lower bound of Theorem 2 in Appendix C. The proof of the key main result, Theorem 3, providing rates under Besov smoothness in $L_{p'}$-loss, can be found in Appendix D. Finally, Appendix E contains a number of additional technical lemmas.

A.
Additional simulations: suboptimality of HT($\alpha$) priors in the undersmoothing case

We again consider the white noise regression model (1), this time expanded in the orthonormal basis $\varphi_k(t) = \sqrt{2}\sin(\pi k t)$, leading to the normal sequence model (2). As underlying truth, we use a function whose coefficients with respect to $(\varphi_k)$ are given by $f_{0,k} = k^{-2.25}\sin(10k)$. This is the setting studied in [44, Section 3]. In particular, this true function can be thought of as having Sobolev regularity (almost) $\beta = 1.75$.

We consider HT($\alpha$) priors (given by (11)) based on the Student distribution with 3 degrees of freedom, for four choices of the regularity $\alpha$: 0.75 and 1.25 (undersmoothing), 1.75 (match) and 2.75 (oversmoothing). Similarly to the previous subsection, we again exploit the independence structure of the model and employ Stan to sample the one-dimensional posteriors. In all cases we again truncate at $K = 200$.

In Figure A.6 we present posterior sample means as well as 95% credible regions for noise precision parameters $n = 2 \times 10^2$ (top row) and $n = 2 \times 10^4$ (bottom row). As expected from the theory, the matched and oversmoothed priors perform very well
https://arxiv.org/abs/2505.15543v1
(cf. [1, Theorem 1]), while the two undersmooth- ing priors lead to too rough posterior means and perform very poorly (see Theorem 2). Heavy-tailed priors and sparse Besov rates 27 Figure A.6 . White noise model: true function (black), posterior means (blue), 95% credible regions (grey), for n= 2×102(top row) and n= 2×104(bottom row). Student HT( α) prior with ν= 3degrees of freedom, for α= 0.75,1.25,1.75,2.75left to right, where the Sobolev regularity of the truth βis (almost) 1.75. B. Proof of Theorem 1 for the truncated Horseshoe Following the proof of Theorem 1, recall Kn:=n1/(2β+1)andvn=K−2β nlogδnwithδ >0to be chosen large enough below. Taking care first of indices k≤Knwe only need to show Ef0X k≤KnZ (fk−Xk)2dΠ(f|X) =o(vn). Recalling that Fis a bound on f0, applying Lemma 2 with p= 2and Lemma E.1 on coordinate kwe get, for any t∈R nEf0Z (fk−Xk)2dΠ(f|X)≲t−2(1 +t4+ log2 τ√n log 1 +4τ2 (F+1)2 ). Since τ=n−awitha >0, whenever nis large enough such that 4τ2<(F+ 1)2, the right hand side of the last display is bounded (up to constant) by t−2(1 +t4+ log2(τ−1√n)). This bound is optimized by taking tof the order of√logn, which leads to Ef0X k≤KnZ (fk−Xk)2dΠ(f|X)≲X k≤Knlogn n≲Kn nlogn, 28 this last bound is o(vn)forδ >1. Now for the terms k > K n, since the Horseshoe prior is truncated afterk=n, this term reduces to Ef0Π {f:X Kn<k≤n|fk|2> vn/8}|X . We can follow the rest of the proof of Theorem 1 with σkreplaced by τ, noting that the Horseshoe HS(1) satisfies condition (H3) thanks to the inequality (8). Therefore, as long as τ−1√Mvnzk≥1for n≥k > K n, we have Ef0Π {f:X Kn<k≤n|fk|2> vn/8}|X ≲X Kn<k≤nn2 τ√Mvnzk+Pf0(Ac n), Where An=T Kn<k≤n,k∈NAk,0is the event as in (25). The sum in the last display goes to 0when n→ ∞ forτ=n−4andδ >2or for τ=n−aanda >4. C. Proof of the in-expectation lower bound in Theorem 2 To show the in–expectation bound, first define the set Nn={k:|f0,k|>1/√n}. 
Denoting the cardinality of $\mathcal{N}_n$ by $N_n$, we have
$$F^2 \ge \sum_{k \in \mathcal{N}_n} k^{2\beta}|f_{0,k}|^2 \ge n^{-1}\sum_{k \in \mathcal{N}_n} k^{2\beta} \ge n^{-1}\sum_{k=1}^{N_n} k^{2\beta} \ge n^{-1}(2\beta+1)^{-1}(N_n)^{2\beta+1}.$$
Combining this with the assumption $\alpha < \beta$, we get
$$N_n \le (nF^2(2\beta+1))^{1/(2\beta+1)} \le ((2\beta+1)F^2)^{1/(2\beta+1)} K_\alpha =: \tilde K_\alpha.$$
For any $k$, define the event $A_k := \{|X_k| \le 2/\sqrt{n}\}$. To establish the in-expectation lower bound, we first restrict the set of indices to
$$\{k > \tilde K_\alpha\} \cap \mathcal{N}_n^c = \{k > \tilde K_\alpha : |f_{0,k}| \le 1/\sqrt{n}\}.$$
Using, for all $k$, $|f_k - f_{0,k}|^2 \ge |f_k|^2/2 - |f_{0,k}|^2$, we get
$$E_{f_0}\int \|f - f_0\|_2^2\, d\Pi(f|X) \ge \sum_{\{k > \tilde K_\alpha\}\cap\mathcal{N}_n^c} E_{f_0}\int |f_k - f_{0,k}|^2\, d\Pi(f|X) \ge \sum_{\{k > \tilde K_\alpha\}\cap\mathcal{N}_n^c} E_{f_0}\Big(\int \tfrac12 |f_k|^2\, d\Pi(f|X)\, 1_{A_k}\Big) - \sum_{\{k > \tilde K_\alpha\}\cap\mathcal{N}_n^c} |f_{0,k}|^2.$$
First taking care of the non-stochastic sum: since $f_0 \in S^\beta(F)$ and $\tilde K_\alpha \asymp K_\alpha$, we have
$$\sum_{\{k > \tilde K_\alpha\}\cap\mathcal{N}_n^c} |f_{0,k}|^2 \le \sum_{k > \tilde K_\alpha} |f_{0,k}|^2 \lesssim K_\alpha^{-2\beta}.$$
For the remaining sum, recalling that $\varphi$ denotes the standard Gaussian density function, we now study
$$E_{f_0}\Big(\int |f_k|^2\, d\Pi(f|X)\, 1_{A_k}\Big) = E_{f_0}\Big( \frac{\int \theta^2\varphi(\sqrt{n}(X_k - \theta)) h(\theta/\sigma_k)\, d\theta}{\int \varphi(\sqrt{n}(X_k - \theta)) h(\theta/\sigma_k)\, d\theta}\, 1_{A_k} \Big) =: E_{f_0}\Big( \frac{T}{B}\, 1_{A_k} \Big).$$
Using $\varphi \lesssim 1$, we can bound the denominator as $B \lesssim \sigma_k$. To bound the numerator from below, we set $x_k := 2/\sqrt{n}$, so that for all $k > \tilde K_\alpha$ we have $\sigma_k \le \tilde K_\alpha^{-\alpha - 1/2} \lesssim x_k$. We can then use Lemma 3 with $m = 2$ on the event $A_k = \{|X_k| \le x_k\}$, obtaining $T \gtrsim \varphi(\sqrt{n}x_k)\sigma_k^3 \gtrsim \sigma_k^3$. Therefore, we obtain
$$\sum_{\{k > \tilde K_\alpha\}\cap\mathcal{N}_n^c} E_{f_0}\Big(\int \tfrac12 |f_k|^2\, d\Pi(f|X)\, 1_{A_k}\Big) \gtrsim \sum_{\{k > \tilde K_\alpha\}\cap\mathcal{N}_n^c} \sigma_k^2\, P_{f_0}(A_k).$$
For any $k \in \mathcal{N}_n^c$ we have $|f_{0,k}| \le 1/\sqrt{n}$; for such $k$'s, one has $P_{f_0}(A_k^c) \le \Pr(|\mathcal{N}(0,1)| > 1) \le 1/2$. This allows to bound the
https://arxiv.org/abs/2505.15543v1
right hand side of the second to last display as X {k>eKα}∩Ncnσ2 kPf0(Ak)≳X {k>eKα}∩Ncnσ2 k. Recalling |Nn|=Nn≤eKαand using the fact that (σk)is non-increasing, we get X {k>eKα}∩Ncnσ2 k≥X k>2eKασ2 k≳K−2α α, where for the last inequality we use eKα≍Kα. Finally, since α < β , asnis large enough, Ef0Z ∥f−f0∥2 2dΠ(f|X)≳K−2α α−K−2β α≳K−2α α. D. Proof for sparse Besov rate Theorem 3 Throughout this proof we might write Lnfor logarithmic factors of the form logDnand some constant D >0, which may change from one line to another. Because we look at functions on [0,1], theLp′ spaces get smaller with p′. Since the minimax rate (given by (22)) is independent of p′in the regular region {p′≤p}we can reduce the case p′≤pto only p′=p. In the following of the proof we focus on the case p≤p′<∞.Recalling the definitions (20) of η, (21) of s′and (22) of r, we consider 2J0:=n1−2rand 2J1:= (nF2)1 2(s−1/p)+1, (D.1) 30 Note that since s−1/p+ 1/2>0, we have J0≤J1fornlarge enough. Also note that in the case η≤0we have 2J1= (nF2)r/s′. Simple algebraic manipulations give (p′−p)/2 +η(1−2r) =rp′ifη >0, (D.2) (p′−p)/2 +ηr/s′=rp′ifη≤0. (D.3) For a function f∈L2denote by f[J1]andf[Jc 1]the projections of fonto the span of the wavelets {ψjk}j<J 1,kand{ψjk}j≥J1,krespectively. Consider vn:=n−rlogδnthe targeted rate, with δ >0to be chosen large enough below. Using the triangle inequality for the Lp′-norm and a union bound yields Π {f:||f−f0||p′> vn}|X ≤Πh {f:||f[Jc 1]−f[Jc 1] 0||p′> vn/2}|Xi + Πh {f:||f[J1]−f[J1] 0||p′> vn/2}|Xi . We show that under Ef0both of the previous displayed terms go to 0withn→ ∞ . Let us first take care of the indices j≥J1. Using Lemma E.2 for f0withJ−=J1andJ+=∞, we get ||f[Jc 1] 0||p′≲X j≥J12j(1/2−1/p′)||f0,j·||p′. Using s−1/p=s′−1/p′and the embedding ℓp⊂ℓp′, available for p′≥p ||f[Jc 1] 0||p′≲X j≥J12−js′2j(1/2+s−1/p)||f0,j·||p≲sup j≥J1{2j(1/2+s−1/p)||f0,j·||p}X j≥J12−js′. 
The embedding Bspq(F)⊂Bsp∞(F)shows that the supremum in the last display is bounded by Fand ||f[Jc 1] 0||p′≲F2−J1s′. When η≤0the previous bound is equal to F(nF2)−r≲n−r. When η >0, noting that 1/(1 + 2( s− 1/p))≥s/(s′(2s+ 1)) leads to 2−J1s′= (nF2)−s′ 1+2(s−1/p)≲n−s 2s+1=n−r. Therefore, it holds for nlarge enough that ||f[Jc 1] 0||p′≤vn/4. So, Πh {f:||f[Jc 1]−f[Jc 1] 0||p′> vn/2}|Xi ≤Πh {f:||f[Jc 1]||p′> vn/4}|Xi . Once again, applying Lemma E.2 to f[Jc 1]withp′shows that it is sufficient to control, for a large enough constant M > 0, Ef0Π {f:X j≥J12j(1/2−1/p′)||fj·||p′> Mv n}|X . (D.4) Heavy-tailed priors and sparse Besov rates 31 Using the summability of (2−j/2)j(note that this sequence plays the role of ( zk) in the proof of Theorem 1), we get the following bound X j≥J12j(1/2−1/p′)||fj·||p′=X j≥J12j(1/2−1/p′)(X k|fjk|p′)1/p′≤X j≥J12j/2sup k|fjk|≲sup j≥J1,k{2j|fjk|}. Plugging the previous inequality in (D.4) and applying a union bound, we are left to control, for a large enough constant M′>0, and any event An, Ef0Π" {f: sup j≥J1,k{2j|fjk|> M′vn}|X# ≤X j≥J1X kEf0 Πh {f:|fjk|> M′2−jvn}|Xi 1An +Pf0(Ac n). Writing ϕfor the standard Gaussian density and following the proof of Theorem 1, we can employ Lemma 3 with m= 0on coordinates (j,k)forxjkto be chosen below, and obtain Ef0 Πh {f:|fjk|> M′2−jvn}|Xi 1An ≲Ef01 ϕ(√nxjk)2H σ−1 jM′vn2−j 1An .(D.5) Now for any j≥J1≳log2n, since σj= 2−j2, fornlarge enough we have σ−1
jM′vn2−j≥1so we use assumption (H3) and obtain H σ−1 jM′vn2−j ≲2jσj M′vn. To define the event An, first consider the sets of indices (the set Λnwill be usefull further down the proof for indices j < J 1) Λn:= (j,k) :J0< j < J 1,|f0,jk| ≤1/√n and I:= Λ n∪ {(j,k) :j≥J1}.(D.6) Now consider for any l≥0 Ajk,l:=( |Xjk| ≤r 4(l+ 1)log n n) , and define the event An:=\ J0<j≤log2n,(j,k)∈IAjk,0∩\ l≥1\ llog2n<j≤(l+1)log2n,(j,k)∈IAjk,l. (D.7) Forl≥0, let us set xjk:=p (4(l+ 1)log n)/nwhenever llog2(n)< j≤(l+ 1)log2n. For every j > J 0we have σj≲1/√nwhen nis large enough, therefore the constrains |Xjk| ≤xjkandσj≲xjk are satisfied on the event An. We can use Lemma 3 with m= 0on the event Anto obtain the bound in (D.5). Noting that ϕ(√nxjk)≳n−2(l+1)when llog2(n)< j≤(l+ 1)log2nleads to X j≥J1X kEf0 Πh {f:|fjk|> M′2−jvn}|Xi 1An ≲X J0<j≤log2nX kn22jσj M′vn+X l≥1(l+1)log2nX j=llog2n−1X kn2(l+1)2jσj M′vn. 32 Recalling σj= 2−j2,vn=n−rlogδnandJ0≍log2nthe sums of the last display satisfy, as n→ ∞ X J0<j≤log2nX kn22jσj M′vn=X J0<j≤log2nn222j2−j2 M′vn≤n4 M′vn2−J2 0log2n=o(1), X l≥1(l+1)log2nX j=llog2n−1X kn2(l+1)2jσj M′vn=X l≥1(l+1)log2nX j=llog2n−1n2(l+1)22j2−j2 M′vn ≤1 M′vnX l≥1n4(l+1)2−l2log2 2n=o(1). Using Lemma 4 gives Pf0(Acn)→0, asn→ ∞ , we obtain Ef0Πh {f:||f[Jc 1]−f[Jc 1] 0||p′> vn/2}|Xi =o(1). We are now left to control the indices j < J 1. Applying Lemma E.2 with the Lp′norm, J−=−1and J+=J1and noting that J1≲logn, leads to ||f[J1]−f[J1] 0||p′≲(logn)1−1/p′ X j<J 1X k2j(p′/2−1)|fjk−f0,jk|p′ 1/p′ . Ifδ >0is large enough, it is then sufficient to control Ef0Π {f:X j<J 1X k2j(p′/2−1)|fjk−f0,jk|p′> Mvp′ n}|X . (D.8) LetNnbe the set of indices (j,k)with resolution level j > J 0and high valued signal coefficients Nn:= (j,k) :j > J 0,|f0,jk|>1/√n . (D.9) Forf0= (f0,jk)∈Bspq(F), and any (j,k)we have |f0,jk| ≤ ||f0,j·||p≤F2−j(s−1/p+1/2). Therefore since s−1/p+ 1/2>0, we have Nn⊂ {(j,k) :J0< j < J 1}. 
(D.10) Recall (D.6) the definition of the set Λn, we have the decomposition {(j,k) :j < J 1}={(j,k) :j≤J0} ∪ N n∪Λn. Using the triangle inequality and a union bound we split the sum in (D.8) into three terms according to the previous decomposition. The first of these three terms is Ef0Π {f:X j<J 0X k2j(p′/2−1)|fjk−f0,jk|p′>(M/3)vp′ n}|X . Heavy-tailed priors and sparse Besov rates 33 Applying Markov’s inequality reduces the study of this first term to obtain a bound on A:=X j≤J0X k2j(p′/2−1)Ef0Z |fjk−f0,jk|p′dΠ(f|X). (D.11) From the convexity inequality |x+y|p′≲|x|p′+|y|p′, one gets |fjk−f0,jk|p′≲|fjk−Xjk|p′+|Xjk−f0,jk|p′. Under Ef0, the observation (Xjk)in model (18) is distributed as N(f0,jk,1/n), therefore Ef0(|Xjk− f0,jk|p′)≲n−p′/2. Along with 1An≤1, this leads to the bound, A≲n−p′/2X j≤J02jp′/2+X j≤J0X k2j(p′/2−1)Ef0Z |fjk−Xjk|p′dΠ(f|X) . (D.12) Applying Lemma 2 with p′on coordinates {(j,k) :j≤J0}, gives, for any t >0 Ef0Z |fjk−Xjk|p′dΠ(f|X) ≲(t√n)−p′ 1 + logp′Ef0Z et√n|Xjk−f0,jk|dΠ(f|X) . Note that since f0∈Bspq(F), we have |f0,jk| ≤F, using Lemma 1 to bound the Laplace transform of the posterior with prior fjk=σjζjkand using |x+y|p′≲|x|p′+|y|p′, Ef0Z |fjk−Xjk|p′dΠ(f|X) ≲(√nt)−p′ 1 + logp′(σj√n) +t2p′+ logp′(1+κ) 1 +F+ 1/√n σj . Now since σj= 2−j2andj≤J0≍lognwe have σ−1 j≤eClog2nfor some C >0. The bound then becomes, as ngets large enough, Ef0Z |fjk−Xjk|p′dΠ(f|X) ≲(√nt)−p′ 1 +t2p′+ log2p′(1+κ)n . This bound is optimized by t2p′of the order of the log factor in the
last display which leads to Ef0Z |fjk−Xjk|p′dΠ(f|X) ≲n−p′/2Ln. (D.13) Recall (D.1) the definition of J0, plugging the previous bound in (D.12) leads to A≲Lnn−p′/2X j≤J02jp′/2≲Ln 2J0 n!p′/2 ≲Lnn−rp′. This last bound shows that A=O(vp′ n)forδ >0large enough, ensuring that, as n→ ∞ , Ef0Π {f:X j≤J0X k2j(p′/2−1)|fjk−f0,jk|p′>(M/3)vp′ n}|X =o(1) 34 and thus taking care of the first term in the decomposition of (D.8). The second term we need to bound is the sum restricted on Nn(defined in (D.9)). Applying again Markov’s inequality leave us to bound B:=X (j,k)∈Nn2j(p′/2−1)Ef0Z |fjk−f0,jk|p′dΠ(f|X). (D.14) On the set Nn⊂ {(j,k) :J0< j < J 1}, using the monotonicity of (σj)and recalling the definition (D.1) of J1, we obtain log(σ−1 j)≤log(σ−1 J1)≤log(2J2 1)≲log2n. We now do the same split as in the term Aand use Lemmas 2 and 1 to obtain the bound (D.13) in the term B. Moreover, since |f0,jk|>1/√nonNnwe have B≲Lnn−p′/2X (j,k)∈Nn2j(p′/2−1)≲Lnn(p−p′)/2X (j,k)∈Nn2j(p′/2−1)|f0,jk|p. Using first Nn⊂ {(j,k) :J0< j < J 1}and then ||f0,j·||p p≤Fp2−jp(s−1/p+1/2), we get B≲Lnn(p−p′)/2X J0<j<J 12j(p′/2−1)||f0,j·||p p≲Lnn(p−p′)/2X J0<j<J 12−jη, where ηis the index defined in (20). If η <0, recalling that in this case 2J1= (nF2)r/s′(see (D.10) the definition of J1) we employ equation (D.3) and obtain B≲Lnn(p−p′)/22−J1η≲Lnn(p−p′)/2n−ηr/s′≲Lnn−rp′. Ifη >0, applying equation (D.2) we find B≲Lnn(p−p′)/22−J0η≲Lnn−rp′. Finally, when η= 0, we have p′= (2s+ 1)pand therefore (p′−p)/2 =sp=rp′such that B≲ Lnn−rp′(note that in the case η= 0there is an extra log factor in Ln). We have shown that B=O(vp′ n) and thus the second term in the decomposition of (D.8) satisfies, as n→ ∞ , Ef0Π {f:X (j,k)∈Nn2j(p′/2−1)|fjk−f0,jk|p′>(M/3)vp′ n}|X =o(1). The last term we need to control is the probability involving the sum restricted to Λn. 
Recalling that |f0,jk| ≤1/√nonΛnand using p′≥p, we can see that X (j,k)∈Λn2j(p′/2−1)|f0,jk|p′≤n(p−p′)/2X J0<j<J 1X k2j(p′/2−1)|f0,jk|p, which is bounded as in the study of the Bterm by Lnn−rp′≤vp′ nforδ >0large enough. Using |fjk−f0,jk|p′≲|fjk|p′+|f0,jk|p′, it is then sufficient to control Ef0Π {f:X (j,k)∈Λn2j(p′/2−1)|fjk|p′>(M/3)vp′ n}|X . Heavy-tailed priors and sparse Besov rates 35 A union bound can be used as in the previous study of the indices j > J 1, this leaves us to control X (j,k)∈ΛnEf0Πh {f:|fjk|> M′2−jvn}|Xi . Noting that Λn⊂ {(j,k) :J0< j < J 1}and that J0≍logn, we follow the same steps as in the aforementioned study and show that the quantity in the last display goes to 0withn→ ∞ , achieving the control of the three terms and ensuring that, as n→ ∞ , Ef0Πh {f:||f[J1]−f[J1] 0||p′> vn/2}|Xi =o(1), which concludes the proof. E. Additional Lemmas Lemma E.1. Letθ∼HS(τ), forτ >0andX|θ∼ N(θ,1/n). Then for some C >0, it holds, for all t∈R,θ0∈R,τ >0, Eθ0Eθ∼HS(τ)h et√n(θ−X)|Xi ≤Cτ√n log 1 +4τ2 (|θ0|+1)2et2/2 Proof. One can follow the proof of Lemma 1 and employ the sandwich inequality (8) for the Horseshoe to bound the denominator instead of the general heavy-tail lower bound (H2). Lemma E.2 (Bounding Lp–norms in terms of wavelet coefficients) .LetJ+≥J−≥ −1. Denote J:={j:J−≤j≤J+}andKj:={k: 0≤k <2j}for any j≥ −1. Consider g:=X j∈JX k∈Kjgjkψjk. For any p≥1, we have ||g||p≲X j∈J2j(1/2−1/p)||gj·||p and ||g||p p≲(J+−J−)p−1X j∈J2j(p/2−1)||gj·||p p Proof. Using the triangle inequality and the Parseval-like equality
(12), we have ||g||p=∥X j∈JX k∈Kjgjkψjk∥p≤X j∈J∥X k∈Kjgjkψjk∥p≲X j∈J2j(1/2−1/p)||gj·||p. Now simply apply Hölder’s inequality with exponents pandp/(p−1)to get the desired result. Lemma E.3. LetAnbe the event defined in (D.7) and assume f0∈Bspq(F), with p,q≥1and s,F > 0. Then as n→ ∞ , we have Pf0(Ac n)→0. 36 Proof. We notice that for any (j,k)∈ I, by definition, |f0,jk| ≤1/√n. Therefore the proof is similar to the single-index version of this lemma, Lemma 4, with ltaking the role of logℓtherein, we have in particular, as ngets large enough, for all (j,k)∈ Iandl≥0 Pf0(Ac l,jk)≤Pn |N(0,1)|>p 3(l+ 1)log no ≤n−3 2(l+1). The last bound along with a union bound yield Pf0(Ac n)≲X l≥02(l+1)log2nn−3 2(l+1)≲n−1/2X l≥0n−l/2≲1/√n, ensuring Pf0(Acn)→0asn→ ∞ .
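The geometric decay driving this last union bound can be sanity-checked numerically. The sketch below (an illustration, not part of the proof) sums the series Σ_{l≥0} 2^{(l+1)log₂n} · n^{-3(l+1)/2} = Σ_{l≥0} n^{-(l+1)/2} and confirms it is of order n^{-1/2}:

```python
import math

def union_bound_total(n: float, terms: int = 200) -> float:
    # each shell l contributes at most 2^((l+1) log2 n) = n^(l+1) events,
    # each of probability at most n^(-3(l+1)/2); the exponents combine to
    # n^(-(l+1)/2), a geometric series in l
    return sum(n ** (-0.5 * (l + 1)) for l in range(terms))

for n in (1e2, 1e4, 1e6):
    total = union_bound_total(n)
    # geometric series bound: n^(-1/2) / (1 - n^(-1/2)) <= 2 / sqrt(n) for n >= 4
    assert total <= 2 / math.sqrt(n)
```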
arXiv:2505.15688v1 [cs.LG] 21 May 2025

A packing lemma for VCN_k-dimension and learning high-dimensional data

Leonardo N. Coregliano and Maryanthe Malliaris∗

May 22, 2025

Abstract. Recently, the authors introduced the theory of high-arity PAC learning, which is well-suited for learning graphs, hypergraphs and relational structures. In the same initial work, the authors proved a high-arity analogue of the Fundamental Theorem of Statistical Learning that almost completely characterizes all notions of high-arity PAC learning in terms of a combinatorial dimension, called the Vapnik–Chervonenkis–Natarajan (VCN_k) k-dimension, leaving as an open problem only the characterization of non-partite, non-agnostic high-arity PAC learnability. In this work, we complete this characterization by proving that non-partite non-agnostic high-arity PAC learnability implies a high-arity version of the Haussler packing property, which in turn implies finiteness of VCN_k-dimension. This is done by obtaining direct proofs that classic PAC learnability implies the classic Haussler packing property, which in turn implies finite Natarajan dimension, and noticing that these direct proofs nicely lift to high arity.

1 Introduction

Consider the following formulation of the question “are there few convex sets?”: Given k ∈ N and ε > 0, does there exist m = m(ε) such that for every probability measure µ over R, there exist m convex sets A_1, …, A_m ⊆ R^k such that every convex set B ⊆ R^k is ε-close to one of the A_i in the natural sense of the product measure µ^k, i.e., µ^k(A_i △ B) < ε (note that m depends only on ε and not on µ)? More generally, we could ask:

Question 1.1. Which classes H of subsets of R^k admit such a “compression property”?

The main result of this paper provides a complete answer to the general question in terms of a new notion of combinatorial dimension (in particular, the convex sets do have this “compression property”).
To continue to illustrate this using the example of convex sets, the reader familiar with the Haussler packing property might want to first consider the case k = 1, in which the class of convex sets amounts to the class of intervals; this class has finite VC dimension and the result above is exactly the Haussler packing property. However, when k ≥ 2, the class of convex sets no longer has finite VC dimension (even in the plane, notice that n points along a circle are easily shattered by convex sets), so usual Haussler theory provably does not apply. In the present work, we will indeed use a general combinatorial dimension (VCN_k-dimension) which specializes to VC dimension in the k = 1 case, and we will obtain a characterization of this “compression property”, which we call the high-arity Haussler packing property, in terms of finiteness of this dimension. In order to explain what is new and non-trivial about this result, let us explain some ingredients of the argument.

∗Research partially supported by NSF-BSF 2051825.

The proof builds on a recent breakthrough technology of high-arity PAC learning [CM24]. We explain this formally below, but briefly, this theory allows for statistical learning to happen in much more complex settings than classical PAC theory. The key ingredient is understanding how to leverage structured correlation in high-dimensional data to make new kinds of learning possible. In fact, the results of the present paper answer a major open question of the high-arity
PAC paper, by closing an equivalence of the high-arity PAC theory as we will explain below. 2 Technical preliminaries In the PAC learning theory of Valiant [ Val84 ] (see also [ SSBD14 ] for a more thorough and modern introduction to the topic), an adversary picks a function F:X→Yout from some family H⊆YX and a probability measure µoverXand we are tasked to learn Ffrom i.i.d. samples of the form (xi,F(xi))m i=1, where each xiis drawn from µ; our answer is required to be probably approximately correct (PAC) in the sense of having small total loss with high probability over the sample x. The Fundamental Theorem of Statistical Learning characterizes PAC learnability of a family Hin terms of finiteness of: •its Vapnik–Chervonenkis (VC) dimension [V ˇC71, BEHW89, VC15] when |Y|= 2, •its Natarajan dimension [Nat89] when Yis finite, •its Daniely–Shalev-Shwartz (DS) dimension [DSS14, BCD+22] in the general case. See Theorem A below for the case when Yis finite. In fact, Haussler [ Hau92 ] showed (when Yis finite, but the result was later extended to the general setting [ BCD+22]) that the above is also equivalent to agnostic PAC learnability of H, that is, even if we allow our adversary to pick instead a probability measure νoverX×Yand provide us i.i.d. samples from ν, then we can at least be competitive in the sense that our answer will have total loss close to the best performing member of H(with high probability over the sample). In a different work, Haussler [ Hau95 , Corollary 1, Theorem 2] also showed that finiteness of the Natarajan dimension of a family H⊆YX(withYfinite) is equivalent to the following packing property1: for everyε>0, there exists m=m(ε) such that for every probability measure µoverX, there existH′⊆H with|H′|≤msuch that every F∈H isε-close to some H∈H′in the sense that µ({x∈X|F(x)̸=H(x)})≤ε. 
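The packing property just stated can be made concrete with a small brute-force sketch. The code below (an illustration, not from the paper) greedily extracts centers for a toy class — indicators of intervals over a finite domain, an assumed example — under a given measure µ; by construction the centers are pairwise ε-separated and every member of the class is ε-close to some center, the packing/covering duality behind Haussler's result.

```python
import itertools

def greedy_haussler_centers(H, mu, eps):
    """Greedily build H' subset of H so that every F in H ends up within
    mu({x : F(x) != C(x)}) <= eps of some center C in H'."""
    def dist(F, G):
        return sum(m for x, m in mu.items() if F[x] != G[x])
    centers = []
    for F in H:
        if all(dist(F, C) > eps for C in centers):
            centers.append(F)  # F is eps-far from every current center
    return centers

# toy class (chosen for illustration): indicators of intervals [a, b)
# over X = {0, ..., 9}, under the uniform measure
X = range(10)
mu = {x: 0.1 for x in X}
H = [{x: int(a <= x < b) for x in X} for a, b in itertools.combinations(range(11), 2)]
centers = greedy_haussler_centers(H, mu, eps=0.2)
```

Note that in the packing property the bound m(ε) must not depend on µ; the greedy procedure here only exhibits a cover for one fixed µ.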
Collectively, these works yield the Fundamental Theorem of Statistical Learning: Theorem A ([VˇC71,BEHW89 ,Nat89 ,Hau92 ,Hau95 ,VC15 ]).The following are equivalent for a familyH⊆YXof functions X→YwithYfinite: i. The Natarajan dimension of His finite. ii.Hhas the uniform convergence property. 1This is a slight generalization and reformulation of Haussler’s original work. Instead, Haussler’s original work frames this in terms of ε-separated sets and considers only the case Y={0,1}(which yields a characterization in terms of the VC-dimension). 2 iii.His agnostically PAC learnable. iv.His PAC learnable. v.Hhas the Haussler packing property. One of the main criticisms of classic PAC learning theory is that it strongly relies on the independence of its samples. While there has been considerable work [ HL94 ,AV95 ,Gam03 , ZZX09 ,SW10 ,ZXC12 ,ZLX12 ,ZXX14 ,BGS18 ,SvS23 ] in extending PAC theory to allow for some correlation, these works see correlation as an obstacle to learning. More recently, the authors introduced the theory of high-arity PAC learning [ CM24 ], in which correlation is leveraged to increase the learning power. High-arity PAC learning theory is heavily inspired by the problem of learning a graph over a setX(but more generally, also covers hypergraphs and even relational structures). By encoding such a graph by its adjacency matrix F:X×X→{0,1}, one could simply apply classic PAC learning
theory; however, this yields a rather unnatural learning framework: the adversary is picking a measureµoverX×Xand revealing several pairs ( xi,x′ i) drawn i.i.d. from µalong with their adjacencyF(xi,x′ i). Instead, the setup of 2-PAC learning is much more natural: the adversary picks a measure µover X, drawsmvertices ( xi)m i=1i.i.d. fromµand reveals all the adjacency information between these vertices: (F(xi,xj))m i,j=1(see Section 3 for a formal, but simplified definition and see Appendix A for the full definitions). In high-arity, there is also a natural partite framework. Namely, if we were trying to learn a bipartite graph F:X1×X2→{0,1}with a known bipartition ( X1,X2), we could instead allow the adversary to pick different measures µ1andµ2overX1andX2, respectively, draw ( x1 i)m i=1 i.i.d. fromµ1and ( x2 i)m i=1i.i.d. fromµ2, independently from the previous points and provide us all adjacency information: ( F(x1 i,x2 j))m i,j=1. Every (not necessarily bipartite) graph F:X×X→{0,1}can be interpreted as a bipartite graphF2 -part:X1×X2→{0,1}in whichX1=X2=X(combinatorially, this is doubling the vertices of F), but a priori 2-PAC learnability of a family H⊆{ 0,1}X×Xis not necessarily the same as partite 2-PAC learnability of its partization H2 -part def={F2 -part|F∈H} . The interplay between a high-arity hypothesis class Hand its partization Hk-partplays a crucial role in proving Theorem B below, which is high-arity analogue of Theorem A (we direct the reader again to Section 3 and Appendix A or to the original paper [ CM24 ] for the precise definitions of the concepts in this theorem). To illustrate the non-triviality of the interplay between partite and non-partite, consider the partite version of Question 1.1 in Section 1: if we change the setup to allow for product probability measures of the form µ1⊗···⊗µk, of not necessarily the same measure, why should we expect the resulting property to be equivalent to the one only for measures of the formµk? 
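The 2-PAC sampling protocol described above can be sketched in a few lines. The snippet below (an illustration with an assumed toy target, not code from the paper) draws m vertices i.i.d. from µ and reveals the full adjacency pattern among them, in contrast with classic PAC over X × X, which would reveal independently drawn pairs only:

```python
import random

def two_pac_sample(F, draw, m, rng):
    # draw m vertices i.i.d. from mu (via `draw`), then reveal ALL pairwise
    # adjacency information among them, as in the 2-PAC setup
    xs = [draw(rng) for _ in range(m)]
    adjacency = {(i, j): F(xs[i], xs[j]) for i in range(m) for j in range(m)}
    return xs, adjacency

# assumed toy target: the graph on X = [0, 1] with an edge iff x + y > 1,
# with vertices sampled under the uniform measure on [0, 1]
rng = random.Random(0)
F = lambda x, y: int(x + y > 1)
xs, adj = two_pac_sample(F, lambda r: r.random(), m=5, rng=rng)
# adj contains all 25 labels F(x_i, x_j), not just independent pairs
```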
Theorem B ([CM24, Theorems 1.1 and 5.1]). Let k ∈ N_+. The following are equivalent for a family H ⊆ Y^{X^k} of functions X^k → Y with Y finite and its partization H_{k-part}:

i. The Vapnik–Chervonenkis–Natarajan k-dimension of H is finite.
ii. The partite Vapnik–Chervonenkis–Natarajan k-dimension of H_{k-part} is finite.
iii. H_{k-part} has the uniform convergence property.
iv. H is agnostically k-PAC learnable.
v. H_{k-part} is partite agnostically k-PAC learnable.
vi. H_{k-part} is partite k-PAC learnable.

Furthermore, any of the items above implies the following:

vii. H is k-PAC learnable.

(The statement above is the informal version [CM24, Theorem 1.1] that covers only the 0/1-loss function; the formal version [CM24, Theorem 5.1] covers general loss functions, under mild assumptions.)

It turns out that the implication from PAC learnability to finite Natarajan dimension (usually called the No Free Lunch Theorem) of classic PAC theory only lifts naturally to the partite setting. The authors were able to partially solve this issue by providing a “departization” operation that allows non-partite agnostic k-PAC learnability to imply its partite analogue. However, this left open the question of whether non-partite k-PAC learnability (item (vii) in Theorem B) can also be included in the equivalence list. In this paper, we prove that this is indeed the case by developing a high-arity version of the Haussler packing property (which we refer to here as k-ary
Haussler packing). Recall, the classic equivalence of the Haussler packing property with finiteness of VC-dimension can be proved by direct implications between the properties². In the present work, we find high-arity proofs of the implications k-PAC learnability ⟹ k-ary Haussler packing property and k-ary Haussler packing property ⟹ finite VCN_k-dimension (Theorem C below). The reader should note that, when specialized to k = 1, these yield the expected direct implications from classic PAC to Haussler packing and from Haussler packing to finite Natarajan dimension.

²In fact, this essentially goes back to Haussler [Hau95]; however, he does not prove that the packing property implies finite VC-dimension; he instead shows tightness of an analogous bound when the domain X is finite, but one can derive the implication from his results.

Theorem C (informal version of Theorems 5.1 and 5.3 of the present paper). Let k ∈ N_+. Then the following hold for a family H ⊆ Y^{X^k} of functions X^k → Y with Y finite:

i. If H is k-PAC learnable, then H has the k-ary Haussler packing property.
ii. If H has the k-ary Haussler packing property, then H has finite VCN_k-dimension.

Note, at the risk of stating the obvious, that two things have happened here: there is a new high-arity notion defined, that of k-ary Haussler packing, and two new direct implications that close the loop of equivalences when put together with the earlier Theorem B. Thus an immediate consequence of the main result in this paper is the following summary theorem (see Figure 1 for a pictorial view of the implications in this work and of [CM24]):

Theorem D. Let k ∈ N_+. The following are equivalent for a family H ⊆ Y^{X^k} of functions X^k → Y with Y finite and its partization H_{k-part}:

i. The Vapnik–Chervonenkis–Natarajan k-dimension of H is finite.
ii. The partite Vapnik–Chervonenkis–Natarajan k-dimension of H_{k-part} is finite.
iii. H_{k-part} has the uniform convergence property.
iv. H is agnostically k-PAC learnable.
v. H_{k-part} is partite agnostically k-PAC learnable.
vi. H is k-PAC learnable.
vii. H_{k-part} is partite k-PAC learnable.
viii. H has the k-ary Haussler packing property.
ix. H_{k-part} has the k-ary Haussler packing property.

In particular, the equivalence of items (i) and (viii) above completely answers Question 1.1.

The paper is organized as follows. First, since our main goal is to complete the characterization of (non-partite) k-PAC learnability, we opt to provide in the main text only a simplified version of the high-arity PAC definitions in [CM24] and of our argument, one that covers only “rank at most 1” hypotheses in the non-partite setting, ignoring the subtlety of “high-order variables” (as these are mostly used in agnostic high-arity PAC, which is not needed for our results); we provide the full version of the high-arity definitions and of our argument in the appendices. Sections 3, 4 and 5 contain the simplified versions of the definitions from [CM24], of our new definitions and of our main results, respectively. Appendix A contains the full version of the same content and also covers a partite version of the Haussler packing property. Appendix B relates the non-partite and partite Haussler packing properties directly through the partization operation.

3 Simplified high-arity PAC definitions

In this section, we lay out the definitions of high-arity PAC theory of [CM24] that we will need in a simplified manner that covers only rank
at most 1 hypotheses. We direct the curious reader to Appendix A for the general version of the same definitions. Definition 3.1 ([CM24 ,§3] simplified) .By a Borel space, we mean a standard Borel space, i.e., a measurable space that is Borel-isomorphic to a Polish space when equipped with the σ-algebra of Borel sets. The space of probability measures on a Borel space Λ is denoted Pr(Λ). Let Ω = (X,B) and Λ = ( Y,B′) be non-empty Borel spaces and k∈N+. 1.The set ofk-ary hypotheses from Ω to Λ, denoted Fk(Ω,Λ), is the set of (Borel) measurable functions from Ωkto Λ. 2. Ak-ary hypothesis class is a subsetHofFk(Ω,Λ) equipped with a σ-algebra such that i. the evaluation map ev: H×Ωk→Λ given by ev( H,x)def=H(x) is measurable; ii. for every H∈H, the set{H}is measurable; iii.for every Borel space Υ and every measurable set A⊆H× Υ, the projection of Aonto Υ, i.e., the set {υ∈Υ|∃H∈H,(H,υ)∈A}, 5 is universally measurable3(i.e., measurable in every completion of a probability measure on Υ). 3.Ak-ary loss function over Λ is a measurable function ℓ:Ωk×ΛSk×ΛSk→R≥0, whereSkis the symmetric group on [ k]def={1,...,k}. We further define ∥ℓ∥∞def= sup x∈Ωk y,y′∈ΛSkℓ(x,y,y′), s (ℓ)def= inf x∈Ωk y,y′∈ΛSk y̸=y′ℓ(x,y,y′), and we say that ℓis: bounded if∥ℓ∥∞<∞. separated ifs(ℓ)>0 andℓ(x,y,y ) = 0 for every x∈Ωkand everyy∈ΛSk. If we are further given k-ary hypotheses F,H∈Fk(Ω,Λ) and a probability measure µ∈Pr(Ω), then we define the total loss ofHwith respect to µ,Fandℓby Lµ,F,ℓ(H)def=Ex∼µk/bracketleftbigg ℓ/parenleftig x,/parenleftbigH(xσ(1),...,xσ(k))/parenrightbig σ∈Sk,/parenleftbigF(xσ(1),...,xσ(k))/parenrightbig σ∈Sk/parenrightig/bracketrightbigg . 4.We say that F∈Fk(Ω,Λ) is realizable inH⊆Fk(Ω,Λ) with respect to a k-ary loss function ℓ andµ∈Pr(Ω) if inf H∈HLµ,F,ℓ(H) = 0. 5. Thek-ary 0/1-loss function over Λ is defined as ℓ0/1(x,y,y′)def= 1[y̸=y′]. 6. 
A (k-ary) learning algorithm for ak-ary hypothesis class His a measurable function A:/uniondisplay m∈N/parenleftbigΩm×Λ([m])k/parenrightbig→H, where ([m])kdenotes the set of injections [ k]→[m]. 7.We say that a k-ary hypothesis class is k-PAC learnable with respect to a k-ary loss function ℓif there exists a learning algorithm AforHand a function mPAC H,ℓ,A:(0,1)2→R≥0such that for everyε,δ∈(0,1), everyµ∈Pr(Ω) and every F∈Fk(Ω,Λ) that is realizable in Hwith respect toℓandµ, we have Px∼µm/bracketleftigg Lµ,F,ℓ/parenleftbigg A/parenleftig x,/parenleftbigF(xα(1),...,xα(k))/parenrightbig α∈([m])k/parenrightig/parenrightbigg ≤ε/bracketrightigg ≥1−δ for every integer m≥mPAC H,ℓ,A(ε,δ). A learning algorithm Asatisfying the above is called a k-PAC learner forHwith respect to ℓ. 3This assumption about hypothesis classes is not made in [ CM24 ], but it is clearly needed for uniform convergence to make sense; in this document, we will not need this. Also, note that if His equipped with a σ-algebra that turns it into a standard Borel space, then this hypothesis is immediately satisfied as Suslin sets are universally measurable. 6 8. For ak-ary hypothesis H∈Fk(Ω,Λ) andx∈Ωk−1, we letHx: Ω→ΛSkbe defined by Hx(xk)σdef=H(xσ(1),...,xσ(k)) (xk∈Ω,σ∈Sk). For ak-ary hypothesis class H, the Vapnik–Chervonenkis–Natarajan k-dimension ofH(VCNk- dimension ) is defined as VCNk(H)def= sup x∈Ωk−1Nat/parenleftbigH(x)/parenrightbig, where H(x)def={Hx|H∈H} and Nat is the Natarajan-dimension (see Definition 3.2 below). Definition 3.2 (Natarajan dimension [ Nat89 ]).LetFbe a collection of functions of the form X→Yand letA⊆X. 1. We say thatF(Natarajan-)shatters Aif there exist functions f0,f1:A→Ysuch that i. for every a∈A, we havef0(a)̸=f1(a), ii. for every U⊆A, there exists FU∈F such
that for every a ∈ A we have

F_U(a) = f_{1[a∈U]}(a) = { f_0(a) if a ∉ U;  f_1(a) if a ∈ U }.

2. The Natarajan dimension of F is defined as

Nat(F) := sup{ |A| : A ⊆ X and F Natarajan-shatters A }.

4 Simplified versions of new high-arity concepts

In this section, we formalize the high-arity version of the Haussler packing property in the same simplified manner as in Section 3, together with the notion of a metric loss (also in a simplified manner); again, we direct the curious reader to Appendix A.3 for the general version of the same definitions.

Definition 4.1 (k-ary Haussler packing property). Let k ∈ N_+, let Ω and Λ be non-empty Borel spaces, let H ⊆ F_k(Ω, Λ) be a k-ary hypothesis class and let ℓ: Ω^k × Λ^{S_k} × Λ^{S_k} → R_{≥0} be a k-ary loss function. We say that H has the k-ary Haussler packing property⁴ with respect to ℓ if there exists a function m^{HP}_{H,ℓ}: (0,1) → R_{≥0} such that for every ε ∈ (0,1) and every µ ∈ Pr(Ω), there exists H′ ⊆ H with |H′| ≤ m^{HP}_{H,ℓ}(ε) such that for every F ∈ H, there exists H ∈ H′ such that L_{µ,F,ℓ}(H) ≤ ε. We refer to elements of H′ as k-ary Haussler centers of H at precision ε with respect to µ and ℓ.

⁴It would be perhaps more fitting to call this the Haussler covering property, but we opt to retain the historical name of its unary version.

We point out that the usual Haussler packing property does a priori make sense in the high-arity setting but would be relative to a probability measure ν on Ω^k and hence would be applicable only to very special classes. By contrast, k-ary Haussler packing considers product measures µ^k for a probability measure µ over Ω. Recall that our goal is to fully characterize which hypothesis classes admit such a property. We will see in the course of the proofs that the equivalence of k-ary Haussler packing to k-ary PAC learning requires interesting mathematical tools. For simplicity and readability, in the remainder of this paper we may drop “k-ary” from the terminology.

Definition 4.2 (Metric loss functions). Let k ∈ N_+ and let Ω and Λ be non-empty Borel spaces.
We say that a k-ary loss function ℓ:Ωk×ΛSk×ΛSk→R≥0ismetric if for every x∈Ωk, the functionℓ(x,−,−) is a metric on ΛSkin the usual sense, that is, the following hold for every x∈Ωk and everyy,y′,y′′∈ΛSk: i. We have ℓ(x,y,y′) =ℓ(x,y′,y). ii. We have ℓ(x,y,y′) = 0 if and only if y=y′. iii. We have ℓ(x,y,y′′)≤ℓ(x,y,y′) +ℓ(x,y′,y′′). 5 Main results In this section we prove that k-PAC learnability implies the Haussler packing property (Theorem 5.1), which in turn implies finite VCNk-dimension (Theorem 5.3). The former theorem is done under the assumption that the loss function ℓis metric, but we point out that one could have assumed instead that ℓis separated and bounded with a slight change in the argument (this is featured in the non-simplified version of the implication, Theorem A.14). Finally, let us point out that the 0 /1-loss functionℓ0/1satisfies all hypotheses of Theorems 5.1 and 5.3 and [CM24, Theorems 1.1 and 5.1]. Theorem 5.1 (k-PAC learnability implies Haussler packing property, simplified) .Letk∈N+, let ΩandΛbe non-empty Borel spaces with Λfinite, letH⊆Fk(Ω,Λ)be ak-ary hypothesis class and letℓ: Ωk×ΛSk×ΛSk→R≥0be ak-ary loss function. Let also γH(m)def= sup x∈Ωm/vextendsingle/vextendsingle/vextendsingle/braceleftig/parenleftbigH(xα(1),...,xα(k))/parenrightbig α∈([m])k/vextendsingle/vextendsingle/vextendsingleH∈H/bracerightig/vextendsingle/vextendsingle/vextendsingle be the maximum number of different patterns (in Λ([m])k) that can be obtained by considering a fixedx∈Ωmand plugging in each injective k-tuple as an input of some H∈H. Ifℓis metric andHisk-PAC learnable
with a k-PAC learnerA, thenHhas the Haussler packing property with associated function mHP H,ℓ(ε)def= min δ∈(0,1)/ceilingleftiggγH(⌈mPAC H,ℓ,A(ε/2,δ)⌉) 1−δ/ceilingrightigg −2≤min δ∈(0,1) |Λ|(⌈mPAC H,ℓ,A(ε/2,δ)⌉)k 1−δ −2, (5.1) where (m)kdef=m(m−1)···(m−k+ 1) denotes the falling factorial. Proof. First note that the inequality in (5.1) follows from the trivial bound γH(m)≤|Λ|(m)k. Second, note that due to the ceilings on the expressions in (5.1), the minima are indeed attained as the functions only take values in N. 8 Suppose for a contradiction that the result does not hold, that is, there exists ε∈(0,1) and µ∈Pr(Ω) such that if mis given by the first minimum in (5.1), then for every H′⊆H with |H′|≤m, there exists F∈H such thatLµ,F,ℓ(H)>εfor everyH∈H′. By repeatedly applying this property, it follows that there exist F1,...,Fm+1∈H such that for every i,j∈[m+ 1] distinct, we haveLµ,Fi,ℓ(Fj)>ε(recall that ℓis metric, so Lµ,Fi,ℓ(Fj) =Lµ,Fj,ℓ(Fi)). Letδ∈(0,1) attain the first minimum in (5.1) and let /tildewidemdef=/ceilingleftbigg mPAC H,ℓ,A/parenleftbiggε 2,δ/parenrightbigg/ceilingrightbigg . For eachx∈Ω/tildewidem, let Y(x)def=/braceleftig/parenleftbigH(xα(1),...,xα(k))/parenrightbig α∈([/tildewidem])k/vextendsingle/vextendsingle/vextendsingleH∈H/bracerightig ⊆Λ([/tildewidem])k and note that|Y(x)|≤γH(/tildewidem). For eachi∈[m+ 1], define the set Cidef=/braceleftbigg x∈Ω/tildewidem/vextendsingle/vextendsingle/vextendsingle/vextendsingle∀y∈Y(x),Lµ,Fi,ℓ/parenleftbigA(x,y)/parenrightbig>ε 2/bracerightbigg . Note that by taking ydef=/parenleftbigFi(xα(1),...,xα(k))/parenrightbig α∈([m])k∈Y(x) and using the fact that Lµ,Fi,ℓ(Fi) = 0 (asℓis metric) so that Fiis realizable inHw.r.t.ℓandµ, PAC learnability implies that µ(Ci)≤δ. 
Define now the function $G\colon \Omega^{\widetilde{m}} \to \mathbb{R}_{\geq 0}$ by
\[
G(x) \overset{\mathrm{def}}{=} \sum_{i=1}^{m+1} \mathbb{1}_{C_i}(x) = \left|\left\{ i\in[m+1] \;\middle|\; \forall y\in Y(x),\; L_{\mu,F_i,\ell}\bigl(\mathcal{A}(x,y)\bigr) > \frac{\varepsilon}{2} \right\}\right|.
\]
We claim that for every $x\in\Omega^{\widetilde{m}}$ and every $y\in Y(x)$, there exists at most one $i\in[m+1]$ such that $L_{\mu,F_i,\ell}(\mathcal{A}(x,y)) \leq \varepsilon/2$. Indeed, if not, then for some distinct $i,j\in[m+1]$, we would get
\[
L_{\mu,F_i,\ell}(F_j) \leq L_{\mu,F_i,\ell}\bigl(\mathcal{A}(x,y)\bigr) + L_{\mu,F_j,\ell}\bigl(\mathcal{A}(x,y)\bigr) \leq \varepsilon,
\]
where the first inequality follows since $\ell$ is metric; the above would then contradict $L_{\mu,F_i,\ell}(F_j) > \varepsilon$. Thus, we conclude that for every $x\in\Omega^{\widetilde{m}}$, we have
\[
G(x) \geq m+1-|Y(x)| \geq m+1-\gamma_{\mathcal{H}}(\widetilde{m}). \tag{5.2}
\]
On the other hand, since $\mu(C_i)\leq\delta$ for every $i\in[m+1]$, we get
\[
\int_{\Omega^{\widetilde{m}}} G(x)\,d\mu^{\widetilde{m}}(x) \leq (m+1)\delta,
\]
which together with (5.2) implies
\[
m \leq \frac{\gamma_{\mathcal{H}}(\widetilde{m})}{1-\delta} - 1,
\]
contradicting the definitions of $m$, $\delta$ and $\widetilde{m}$.

For the next theorem, we will need the following standard combinatorial result about covers of the power set of $[n]$.

Lemma 5.2. Let $c\in(0,1/2)$, let $n\in\mathbb{N}$ and let $\mathcal{C}\subseteq 2^{[n]}$ be a collection of subsets of $[n]$. Suppose that for every $U\subseteq[n]$, there exists $V\in\mathcal{C}$ such that $|U\triangle V|\leq cn$. Then
\[
n \leq \frac{\log_2|\mathcal{C}|}{1-h_2(c)},
\]
where
\[
h_2(t) \overset{\mathrm{def}}{=} t\log_2\frac{1}{t} + (1-t)\log_2\frac{1}{1-t}
\]
denotes the binary entropy.

Proof. For each $V\in\mathcal{C}$, let
\[
B(V) \overset{\mathrm{def}}{=} \{U\subseteq[n] \mid |U\triangle V|\leq cn\}
\]
and note that
\[
|B(V)| = \sum_{i=0}^{\lfloor cn\rfloor} \binom{n}{i} \leq 2^{h_2(c)\cdot n},
\]
where the last inequality is the standard upper bound on the volume of a Hamming ball in terms of binary entropy (see e.g. [Ash65, Lemma 4.7.2]). Since $\bigcup_{V\in\mathcal{C}} B(V) = 2^{[n]}$, we conclude that $|\mathcal{C}|\cdot 2^{h_2(c)\cdot n} \geq 2^n$, from which the result follows.
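Lemma 5.2 is elementary enough to check numerically. The Python sketch below (the parameters $n = 10$ and $c = 0.3$ are illustrative assumptions) verifies the Hamming-ball volume bound $\sum_{i\leq\lfloor cn\rfloor}\binom{n}{i}\leq 2^{h_2(c)n}$ and the resulting counting inequality for the smallest possible cover size $2^n/|B(V)|$:

```python
# Numeric sanity check of Lemma 5.2 (illustrative parameters n = 10, c = 0.3).
import math

def h2(t):
    # binary entropy, with the convention h2(0) = h2(1) = 0
    if t in (0.0, 1.0):
        return 0.0
    return t * math.log2(1 / t) + (1 - t) * math.log2(1 / (1 - t))

def ball_volume(n, radius):
    # number of U ⊆ [n] with |U ∆ V| <= radius, for any fixed V ⊆ [n]
    return sum(math.comb(n, i) for i in range(radius + 1))

n, c = 10, 0.3
r = math.floor(c * n)

# Hamming-ball volume bound used in the proof ([Ash65, Lemma 4.7.2])
assert ball_volume(n, r) <= 2 ** (h2(c) * n)

# Counting step: any cover C of 2^[n] at radius cn has |C| >= 2^n / |B(V)|,
# so n <= log2|C| / (1 - h2(c)) already holds for this lower bound on |C|.
min_cover = math.ceil(2 ** n / ball_volume(n, r))
assert n <= math.log2(min_cover) / (1 - h2(c)) + 1e-9
```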
Theorem 5.3 (Haussler packing property implies finite VCN$_k$-dimension, simplified). Let $k\in\mathbb{N}_+$, let $\Omega$ and $\Lambda$ be non-empty Borel spaces with $\Lambda$ finite, let $\mathcal{H}\subseteq\mathcal{F}_k(\Omega,\Lambda)$ be a $k$-ary hypothesis class and let $\ell\colon\Omega^k\times\Lambda^{S_k}\times\Lambda^{S_k}\to\mathbb{R}_{\geq 0}$ be a $k$-ary loss function. If $\ell$ is separated and $\mathcal{H}$ has the Haussler packing property, then
\[
\mathrm{VCN}_k(\mathcal{H}) \leq \min_{\varepsilon\in(0,\min\{s(\ell)\cdot k!/(2k^k),\,1\})} \left\lfloor \frac{\log_2\lfloor m^{\mathrm{HP}}_{\mathcal{H},\ell}(\varepsilon)\rfloor}{1-h_2(\varepsilon\cdot k^k/(s(\ell)\cdot k!))} \right\rfloor, \tag{5.3}
\]
where
\[
h_2(t) \overset{\mathrm{def}}{=} t\log_2\frac{1}{t} + (1-t)\log_2\frac{1}{1-t}
\]
denotes the binary entropy.

Proof. First, note that the minimum in (5.3) is indeed attained, as the function only takes values in $\mathbb{N}\cup\{-\infty\}$, so let $\varepsilon\in(0,\min\{s(\ell)\cdot k!/(2k^k),1\})$ attain the minimum, let $d$ be the value of the minimum and let $m \overset{\mathrm{def}}{=} \lfloor m^{\mathrm{HP}}_{\mathcal{H},\ell}(\varepsilon)\rfloor$, so that
\[
d = \left\lfloor \frac{\log_2 m}{1-h_2(\varepsilon\cdot k^k/(s(\ell)\cdot k!))} \right\rfloor.
\]
When $\mathcal{H}$ is empty, the result is trivial as $\mathrm{VCN}_k(\mathcal{H}) = -\infty$, so suppose $\mathcal{H}$ is non-empty (hence $m\geq 1$ and $d\geq 0$). By the definition of $\mathrm{VCN}_k$-dimension, we have to
show that if $x\in\Omega^{k-1}$, then $\mathrm{Nat}(\mathcal{H}(x))\leq d$. In turn, it suffices to show that if $V\subseteq\Omega$ is a (finite) set that is Natarajan-shattered by $\mathcal{H}(x)$ and $n \overset{\mathrm{def}}{=} |V|$, then $n\leq d$.

Let $\mu\in\Pr(\Omega)$ be given by
\[
\mu \overset{\mathrm{def}}{=} \frac{1}{k}\left(\nu_V + \sum_{j=1}^{k-1}\delta_{x_j}\right),
\]
where $\nu_V$ is the uniform probability measure on $V$ and $\delta_t$ is the Dirac delta concentrated on $t$. Since $V$ is Natarajan-shattered by $\mathcal{H}(x)$, there exist functions $f_0,f_1\colon V\to\Lambda^{S_k}$ with $f_0(v)\neq f_1(v)$ for every $v\in V$ and there exists a family $\{F^U \mid U\subseteq V\}\subseteq\mathcal{H}$ such that for every $U\subseteq V$ and every $v\in V$, we have $F^U_x(v) = f_{\mathbb{1}[v\in U]}(v)$.

For $F,F'\in\mathcal{H}$, let us define the set
\[
D(F,F') \overset{\mathrm{def}}{=} \{v\in V \mid F_x(v)\neq F'_x(v)\}.
\]
Clearly, for every $U,U'\subseteq V$, we have $D(F^U,F^{U'}) = U\triangle U'$. Note also that by the definition of $\mu$, for $F,F'\in\mathcal{H}$, we have
\[
L_{\mu,F,\ell}(F') \geq \mathbb{P}_{z\sim\mu^k}\bigl[\exists\sigma\in S_k,\ \forall j\in[k-1],\ z_{\sigma(j)} = x_j \wedge z_{\sigma(k)}\in D(F,F')\bigr]\cdot s(\ell) \geq \frac{s(\ell)\cdot k!}{k^k\cdot n}\cdot|D(F,F')|. \tag{5.4}
\]
Since $m$ is defined via the Haussler packing property, we know that there exists $\mathcal{H}'\subseteq\mathcal{H}$ such that $|\mathcal{H}'|\leq m$ and for every $U\subseteq V$, there exists $H\in\mathcal{H}'$ such that $L_{\mu,F^U,\ell}(H)\leq\varepsilon$. For each $H\in\mathcal{H}'$, let
\begin{align*}
U_H &\overset{\mathrm{def}}{=} \{v\in V \mid H_x(v) = f_1(v)\},\\
B(H) &\overset{\mathrm{def}}{=} \{U\subseteq V \mid L_{\mu,F^U,\ell}(H)\leq\varepsilon\},\\
B'(H) &\overset{\mathrm{def}}{=} \left\{U\subseteq V \;\middle|\; |U\triangle U_H| \leq \frac{\varepsilon\cdot k^k\cdot n}{s(\ell)\cdot k!}\right\}.
\end{align*}
The Haussler packing property assumption implies
\[
\bigcup_{H\in\mathcal{H}'} B(H) = 2^V. \tag{5.5}
\]
On the other hand, by (5.4), for every $U\subseteq V$ and every $H\in\mathcal{H}'$, we have
\[
L_{\mu,F^U,\ell}(H) \geq \frac{s(\ell)\cdot k!}{k^k\cdot n}\cdot|D(F^U,H)| \geq \frac{s(\ell)\cdot k!}{k^k\cdot n}\cdot|D(F^U,F^{U_H})| = \frac{s(\ell)\cdot k!}{k^k\cdot n}\cdot|U\triangle U_H|,
\]
hence $B(H)\subseteq B'(H)$, which along with (5.5) implies $\bigcup_{H\in\mathcal{H}'} B'(H) = 2^V$. Since $\varepsilon\cdot k^k/(s(\ell)\cdot k!) < 1/2$, by Lemma 5.2, we get
\[
n \leq \frac{\log_2|\mathcal{H}'|}{1-h_2(\varepsilon\cdot k^k/(s(\ell)\cdot k!))} \leq \frac{\log_2 m}{1-h_2(\varepsilon\cdot k^k/(s(\ell)\cdot k!))},
\]
which yields $n\leq d$, as $n$ is an integer and $d$ is the floor of the right-hand side of the above.

References

[Ash65] Robert Ash. Information theory, volume 19 of Interscience Tracts in Pure and Applied Mathematics. Interscience Publishers John Wiley & Sons, New York-London-Sydney, 1965.

[AV95] David Aldous and Umesh Vazirani. A Markovian extension of Valiant's learning model. Inform. and Comput., 117(2):181–186, 1995.
[BCD+22] Nataly Brukhim, Daniel Carmon, Irit Dinur, Shay Moran, and Amir Yehudayoff. A characterization of multiclass learnability. In 2022 IEEE 63rd Annual Symposium on Foundations of Computer Science (FOCS 2022), pages 943–955. IEEE Computer Soc., Los Alamitos, CA, 2022.

[BEHW89] Anselm Blumer, Andrzej Ehrenfeucht, David Haussler, and Manfred K. Warmuth. Learnability and the Vapnik–Chervonenkis dimension. J. Assoc. Comput. Mach., 36(4):929–965, 1989.

[BGS18] Guy Bresler, David Gamarnik, and Devavrat Shah. Learning graphical models from the Glauber dynamics. IEEE Trans. Inform. Theory, 64(6):4072–4080, 2018.

[CM24] Leonardo N. Coregliano and Maryanthe Malliaris. High-arity PAC learning via exchangeability, 2024.

[DSS14] Amit Daniely and Shai Shalev-Shwartz. Optimal learners for multiclass problems. In Maria Florina Balcan, Vitaly Feldman, and Csaba Szepesvári, editors, Proceedings of The 27th Conference on Learning Theory, volume 35 of Proceedings of Machine Learning Research, pages 287–316, Barcelona, Spain, June 2014. PMLR.

[Gam03] David Gamarnik. Extension of the PAC framework to finite and countable Markov chains. IEEE Trans. Inform. Theory, 49(1):338–345, 2003.

[Hau92] David Haussler. Decision-theoretic generalizations of the PAC model for neural net and other learning applications. Inform. and Comput., 100(1):78–150, 1992.

[Hau95] David Haussler. Sphere packing numbers for subsets of the Boolean n-cube with bounded Vapnik–Chervonenkis dimension. J. Combin. Theory Ser. A, 69(2):217–232, 1995.

[HL94] D. P. Helmbold and P. M. Long. Tracking drifting concepts by minimizing disagreements. Machine Learning, 14:27–45, 1994.

[Nat89] Balas K. Natarajan. On learning sets and functions. Machine Learning, 4(1):67–97, 1989.

[SSBD14] Shai Shalev-Shwartz and Shai Ben-David. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014.

[SvS23]
Nikola Sandrić and Stjepan Šebek. Learning from non-irreducible Markov chains. J. Math. Anal. Appl., 523(2):Paper No. 127049, 14 pp., 2023.

[SW10] Hongwei Sun and Qiang Wu. Regularized least square regression with dependent samples. Adv. Comput. Math., 32(2):175–189, 2010.

[Val84] L. G. Valiant. A theory of the learnable. Commun. ACM, 27(11):1134–1142, November 1984.

[VČ71] V. N. Vapnik and A. Ja. Červonenkis. The uniform convergence of frequencies of the appearance of events to their probabilities. Teor. Verojatnost. i Primenen., 16:264–279, 1971.

[VC15] V. N. Vapnik and A. Ya. Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. In Measures of Complexity, pages 11–30. Springer, Cham, 2015. Reprint of Theor. Probability Appl. 16 (1971), 264–280.

[ZLX12] Bin Zou, Luoqing Li, and Zongben Xu. Generalization performance of least-square regularized regression algorithm with Markov chain samples. J. Math. Anal. Appl., 388(1):333–343, 2012.

[ZXC12] Bin Zou, Zongben Xu, and Xiangyu Chang. Generalization bounds of ERM algorithm with V-geometrically ergodic Markov chains. Adv. Comput. Math., 36(1):99–114, 2012.

[ZXX14] Bin Zou, Zong-ben Xu, and Jie Xu. Generalization bounds of ERM algorithm with Markov chain samples. Acta Math. Appl. Sin. Engl. Ser., 30(1):223–238, 2014.

[ZZX09] Bin Zou, Hai Zhang, and Zongben Xu. Learning from uniformly ergodic Markov chains. J. Complexity, 25(2):188–200, 2009.

A High-arity PAC with high-order variables

In this section we lay out the definitions from high-arity PAC learning [CM24] in their full generality, including high-order variables, together with the new definitions, theorems and proofs from this paper in the same full generality.
The definitions from [CM24] are presented in a streamlined manner, so we refer the reader to the original paper for a more thorough treatment accompanied by intuition; to facilitate the process, numbers in square brackets below refer to the exact location of the corresponding definition in [CM24].

A.1 Definitions in the non-partite setting

Definition A.1 (Borel templates [3.1]). By a Borel space, we mean a standard Borel space, i.e., a measurable space that is Borel-isomorphic to a Polish space when equipped with the $\sigma$-algebra of Borel sets. The space of probability measures on a Borel space $\Lambda$ is denoted $\Pr(\Lambda)$.

1. [3.1.1] A Borel template is a sequence $\Omega = (\Omega_i)_{i\in\mathbb{N}_+}$, where $\Omega_i = (X_i,\mathcal{B}_i)$ is a non-empty Borel space.

2. [3.1.2] A probability template on a Borel template $\Omega$ is a sequence $\mu = (\mu_i)_{i\in\mathbb{N}_+}$, where $\mu_i\in\Pr(\Omega_i)$ is a probability measure on $\Omega_i$. We denote the space of probability templates on a Borel template $\Omega$ by $\Pr(\Omega)$.

3. [3.1.4] For a (finite or) countable set $V$ and a Borel template $\Omega$, we define
\[
\mathcal{E}_V(\Omega) \overset{\mathrm{def}}{=} \prod_{A\in r(V)} X_{|A|},
\]
equipping it with the product $\sigma$-algebra, where
\[
r(V) \overset{\mathrm{def}}{=} \{A\subseteq V \mid A \text{ finite non-empty}\}. \tag{A.1}
\]
If $\mu\in\Pr(\Omega)$ is a probability template on $\Omega$, then we let $\mu^V \overset{\mathrm{def}}{=} \bigotimes_{A\in r(V)} \mu_{|A|}$ be the product measure. We use the shorthands $r(m) \overset{\mathrm{def}}{=} r([m])$, $\mathcal{E}_m(\Omega) \overset{\mathrm{def}}{=} \mathcal{E}_{[m]}(\Omega)$ and $\mu^m \overset{\mathrm{def}}{=} \mu^{[m]}$ when $m\in\mathbb{N}$ (where $[m] \overset{\mathrm{def}}{=} \{1,\ldots,m\}$).

4. [3.1.5] For an injective function $\alpha\colon U\to V$ between countable sets, we contra-variantly define the map $\alpha^*\colon\mathcal{E}_V(\Omega)\to\mathcal{E}_U(\Omega)$ by
\[
\alpha^*(x)_A \overset{\mathrm{def}}{=} x_{\alpha(A)} \qquad (x\in\mathcal{E}_V(\Omega),\ A\in r(U)).
\]

Definition A.2 (Hypotheses [3.2, 3.5]). Let $\Omega$ be a Borel template, $\Lambda = (Y,\mathcal{B}')$ be a non-empty Borel space and $k\in\mathbb{N}_+$.

1. [3.2.1] The set of $k$-ary hypotheses from
$\Omega$ to $\Lambda$, denoted $\mathcal{F}_k(\Omega,\Lambda)$, is the set of (Borel) measurable functions from $\mathcal{E}_k(\Omega)$ to $\Lambda$.

2. [3.2.2] A $k$-ary hypothesis class is a subset $\mathcal{H}$ of $\mathcal{F}_k(\Omega,\Lambda)$ equipped with a $\sigma$-algebra such that:

i. the evaluation map $\mathrm{ev}\colon\mathcal{H}\times\mathcal{E}_k(\Omega)\to\Lambda$ given by $\mathrm{ev}(H,x) \overset{\mathrm{def}}{=} H(x)$ is measurable;

[Figure 1 here: diagram of implications between the non-partite and partite notions of finite VCN$_k$-dimension, (agnostic) $k$-PAC learnability, uniform convergence and the Haussler packing property; see the caption below.]

Figure 1: Diagram of implications between different high-arity PAC learning notions. Labels on arrows contain the number of the theorem (or the specific proposition in [CM24]) that contains the proof of the implication and the extra hypotheses needed. In the above, "$\ell$ (almost) metric" means that either $\ell$ is metric or $\ell$ is separated and bounded. Arrows with two heads (↠) are tight in some sense, with an obvious proof of tightness. Dashed arrows involve a construction (meaning that the hypothesis class and/or the loss function changes) due to being in different settings; this also means that objects on one of the sides of the implication might not be completely general (as they are required to be in the image of the construction). Arrows with tails (↣) mean that exactly one of the sides involves a loss function. Under appropriate hypotheses, all items are proved equivalent (for example, if $\Lambda$ is finite and the loss is the $0/1$-loss $\ell_{0/1}$).

ii.
for every $H\in\mathcal{H}$, the set $\{H\}$ is measurable;

iii. for every Borel space $\Upsilon$ and every measurable set $A\subseteq\mathcal{H}\times\Upsilon$, the projection of $A$ onto $\Upsilon$, i.e., the set $\{\upsilon\in\Upsilon \mid \exists H\in\mathcal{H},\ (H,\upsilon)\in A\}$, is universally measurable⁵ (i.e., measurable in every completion of a probability measure on $\Upsilon$).

(⁵ Footnote 3 also applies here.)

3. [3.2.3] Given $F\in\mathcal{F}_k(\Omega,\Lambda)$ and $m\in\mathbb{N}$, we define the function $F^*_m\colon\mathcal{E}_m(\Omega)\to\Lambda^{([m])_k}$ by
\[
F^*_m(x)_\alpha \overset{\mathrm{def}}{=} F(\alpha^*(x)) \qquad (x\in\mathcal{E}_m(\Omega),\ \alpha\in([m])_k),
\]
where $([m])_k$ is the set of injections $[k]\to[m]$; when $k = m$, we have $F^*_k\colon\mathcal{E}_k(\Omega)\to\Lambda^{S_k}$, where $S_k = ([k])_k$ is the symmetric group on $[k]$.

4. [3.5.1] The rank of a $k$-ary hypothesis $F\in\mathcal{F}_k(\Omega,\Lambda)$, denoted $\mathrm{rk}(F)$, is the minimum $r\in\mathbb{N}$ such that $F$ factors as
\[
F(x) = F'\bigl((x_A)_{A\in r(k),|A|\leq r}\bigr) \qquad (x\in\mathcal{E}_k(\Omega))
\]
for some function $F'\colon\prod_{A\in r(k),|A|\leq r} X_{|A|}\to\Lambda$.

5. [3.5.2] The rank of a $k$-ary hypothesis class $\mathcal{H}\subseteq\mathcal{F}_k(\Omega,\Lambda)$ is defined as $\mathrm{rk}(\mathcal{H}) \overset{\mathrm{def}}{=} \sup_{F\in\mathcal{H}}\mathrm{rk}(F)$.

Definition A.3 (Loss functions [3.7]). Let $\Omega$ be a Borel template, $\Lambda$ be a non-empty Borel space and $k\in\mathbb{N}_+$.

1. [3.7.1] A $k$-ary loss function over $\Lambda$ is a measurable function $\ell\colon\mathcal{E}_k(\Omega)\times\Lambda^{S_k}\times\Lambda^{S_k}\to\mathbb{R}_{\geq 0}$.

2. [3.7.2] For a $k$-ary loss function $\ell$, we define
\[
\|\ell\|_\infty \overset{\mathrm{def}}{=} \sup_{\substack{x\in\mathcal{E}_k(\Omega)\\ y,y'\in\Lambda^{S_k}}} \ell(x,y,y'), \qquad s(\ell) \overset{\mathrm{def}}{=} \inf_{\substack{x\in\mathcal{E}_k(\Omega)\\ y,y'\in\Lambda^{S_k}\\ y\neq y'}} \ell(x,y,y').
\]

3. [3.7.3, simplified] A $k$-ary loss function is: bounded if $\|\ell\|_\infty < \infty$; separated if $s(\ell) > 0$ and $\ell(x,y,y) = 0$ for every $x\in\mathcal{E}_k(\Omega)$ and every $y\in\Lambda^{S_k}$.

4. [3.7.4] For a $k$-ary loss function $\ell$, hypotheses $F,H\in\mathcal{F}_k(\Omega,\Lambda)$ and a probability template $\mu\in\Pr(\Omega)$, the total loss of $H$ with respect to $\mu$, $F$ and $\ell$ is
\[
L_{\mu,F,\ell}(H) \overset{\mathrm{def}}{=} \mathbb{E}_{x\sim\mu^k}\Bigl[\ell\bigl(x,H^*_k(x),F^*_k(x)\bigr)\Bigr].
\]

5. [3.7.5] We say that $F\in\mathcal{F}_k(\Omega,\Lambda)$ is realizable in $\mathcal{H}\subseteq\mathcal{F}_k(\Omega,\Lambda)$ with respect to a $k$-ary loss function $\ell$ and $\mu\in\Pr(\Omega)$ if $\inf_{H\in\mathcal{H}} L_{\mu,F,\ell}(H) = 0$.

Definition A.4 ($k$-PAC learnability [3.8]). Let
$\Omega$ be a Borel template, $\Lambda$ be a non-empty Borel space and $\mathcal{H}\subseteq\mathcal{F}_k(\Omega,\Lambda)$ be a $k$-ary hypothesis class.

1. [3.8.2] A ($k$-ary) learning algorithm for $\mathcal{H}$ is a measurable function
\[
\mathcal{A}\colon\bigcup_{m\in\mathbb{N}}\bigl(\mathcal{E}_m(\Omega)\times\Lambda^{([m])_k}\bigr)\to\mathcal{H}.
\]

2. [3.8.3] We say that $\mathcal{H}$ is $k$-PAC learnable with respect to a $k$-ary loss function $\ell$ if there exist a learning algorithm $\mathcal{A}$ for $\mathcal{H}$ and a function $m^{\mathrm{PAC}}_{\mathcal{H},\ell,\mathcal{A}}\colon(0,1)^2\to\mathbb{R}_{\geq 0}$ such that for every $\varepsilon,\delta\in(0,1)$, every $\mu\in\Pr(\Omega)$ and every $F\in\mathcal{F}_k(\Omega,\Lambda)$ that is realizable in $\mathcal{H}$ with respect to $\ell$ and $\mu$, we have
\[
\mathbb{P}_{x\sim\mu^m}\Bigl[L_{\mu,F,\ell}\bigl(\mathcal{A}(x,F^*_m(x))\bigr)\leq\varepsilon\Bigr]\geq 1-\delta
\]
for every integer $m\geq m^{\mathrm{PAC}}_{\mathcal{H},\ell,\mathcal{A}}(\varepsilon,\delta)$. A learning algorithm $\mathcal{A}$ satisfying the above is called a $k$-PAC learner for $\mathcal{H}$ with respect to $\ell$.

Definition A.5 (VCN$_k$-dimension [3.14]). Let $\Omega$ be a Borel template, $\Lambda$ be a non-empty Borel space, $k\in\mathbb{N}_+$ and $\mathcal{H}\subseteq\mathcal{F}_k(\Omega,\Lambda)$ be a $k$-ary hypothesis class.

1. [3.14.1] For $H\in\mathcal{F}_k(\Omega,\Lambda)$ and $x\in\mathcal{E}_{k-1}(\Omega)$, let
\[
H^*_k(x,-)\colon\prod_{A\in r(k)\setminus r(k-1)} X_{|A|}\to\Lambda^{S_k}
\]
be the function obtained from $H^*_k$ by fixing its $\mathcal{E}_{k-1}(\Omega)$ arguments to be $x$ and let
\[
\mathcal{H}(x) \overset{\mathrm{def}}{=} \{H^*_k(x,-) \mid H\in\mathcal{H}\}.
\]

2. [3.14.2] The Vapnik–Chervonenkis–Natarajan $k$-dimension of $\mathcal{H}$ (VCN$_k$-dimension) is defined as
\[
\mathrm{VCN}_k(\mathcal{H}) \overset{\mathrm{def}}{=} \sup_{x\in\mathcal{E}_{k-1}(\Omega)}\mathrm{Nat}\bigl(\mathcal{H}(x)\bigr).
\]

A.2 Definitions in the partite setting

Definition A.6 (Borel $k$-partite templates [4.1]). Let $k\in\mathbb{N}_+$.

1. [4.1.1] A Borel $k$-partite template is a sequence $\Omega = (\Omega_A)_{A\in r(k)}$, where $\Omega_A = (X_A,\mathcal{B}_A)$ is a non-empty (standard) Borel space and $r(k) = r([k])$ is given by (A.1).

2. [4.1.2] A probability $k$-partite template on a Borel $k$-partite template $\Omega$ is a sequence $\mu = (\mu_A)_{A\in r(k)}$, where $\mu_A$ is a probability measure on $\Omega_A$. The space of probability $k$-partite templates on $\Omega$ is denoted $\Pr(\Omega)$.

3. [4.1.4, simplified] For a Borel $k$-partite template $\Omega$, a non-empty Borel space $\Lambda$ and $m\in\mathbb{N}_+$, we define
\[
\mathcal{E}_m(\Omega) \overset{\mathrm{def}}{=} \prod_{f\in r_k(m)} X_{\mathrm{dom}(f)},
\]
equipping it with the product $\sigma$-algebra, where
\[
r_k(m) \overset{\mathrm{def}}{=} \{f\colon A\to[m] \mid A\in r(k)\} = \bigcup_{A\in r(k)}[m]^A.
\]
If $\mu\in\Pr(\Omega)$ is a probability $k$-partite template on $\Omega$, we let $\mu^m \overset{\mathrm{def}}{=} \bigotimes_{f\in r_k(m)}\mu_{\mathrm{dom}(f)}$ be the product measure.

4. [4.1.5, simplified] For $\alpha\in[m]^k$, we define the map $\alpha^*\colon\mathcal{E}_m(\Omega)\to\mathcal{E}_1(\Omega)$ by
\[
\alpha^*(x)_f \overset{\mathrm{def}}{=} x_{\alpha|_{\mathrm{dom}(f)}} \qquad (x\in\mathcal{E}_m(\Omega),\ f\in r_k(1)).
\]

Definition A.7 ($k$-partite hypotheses [4.2, 4.5]). Let $k\in\mathbb{N}_+$, let $\Omega$ be a Borel $k$-partite template and let $\Lambda = (Y,\mathcal{B}')$ be a non-empty Borel space.

1. [4.2.1] The set of $k$-partite hypotheses from $\Omega$ to $\Lambda$, denoted $\mathcal{F}_k(\Omega,\Lambda)$, is the set of (Borel) measurable functions from $\mathcal{E}_1(\Omega)$ to $\Lambda$.

2. [4.2.2] A $k$-partite hypothesis class is a subset $\mathcal{H}$ of $\mathcal{F}_k(\Omega,\Lambda)$ equipped with a $\sigma$-algebra such that:

i. the evaluation map $\mathrm{ev}\colon\mathcal{H}\times\mathcal{E}_1(\Omega)\to\Lambda$ given by $\mathrm{ev}(H,x) \overset{\mathrm{def}}{=} H(x)$ is measurable;

ii. for every $H\in\mathcal{H}$, the set $\{H\}$ is measurable;

iii. for every Borel space $\Upsilon$ and every measurable set $A\subseteq\mathcal{H}\times\Upsilon$, the projection of $A$ onto $\Upsilon$, i.e., the set $\{\upsilon\in\Upsilon \mid \exists H\in\mathcal{H},\ (H,\upsilon)\in A\}$, is universally measurable⁶ (i.e., measurable in every completion of a probability measure on $\Upsilon$).

(⁶ Footnote 3 also applies here.)

3. [4.2.3, simplified] For a $k$-partite hypothesis $F\in\mathcal{F}_k(\Omega,\Lambda)$, we let $F^*_m\colon\mathcal{E}_m(\Omega)\to\Lambda^{[m]^k}$ be given by
\[
F^*_m(x)_\alpha \overset{\mathrm{def}}{=} F(\alpha^*(x)) \qquad (x\in\mathcal{E}_m(\Omega),\ \alpha\in[m]^k).
\]

4. [4.5.1] The rank of a $k$-partite hypothesis $F\in\mathcal{F}_k(\Omega,\Lambda)$, denoted $\mathrm{rk}(F)$, is the minimum $r\in\mathbb{N}$ such that $F$ factors as
\[
F(x) = F'\bigl((x_f)_{f\in r_k(1),|\mathrm{dom}(f)|\leq r}\bigr) \qquad (x\in\mathcal{E}_1(\Omega))
\]
for some function $F'\colon\prod_{f\in r_k(1),|\mathrm{dom}(f)|\leq r} X_{\mathrm{dom}(f)}\to\Lambda$.

5. [4.5.2] The rank of a $k$-partite hypothesis class $\mathcal{H}\subseteq\mathcal{F}_k(\Omega,\Lambda)$ is defined as $\mathrm{rk}(\mathcal{H}) \overset{\mathrm{def}}{=} \sup_{F\in\mathcal{H}}\mathrm{rk}(F)$.

Definition A.8 ($k$-partite loss functions [4.7]). Let $k\in\mathbb{N}_+$, let $\Omega$ be a Borel $k$-partite template and let $\Lambda$ be a non-empty Borel space.

1. [4.7.1] A $k$-partite loss function over $\Lambda$ is a measurable function $\ell\colon\mathcal{E}_1(\Omega)\times\Lambda\times\Lambda\to\mathbb{R}_{\geq 0}$.

2. [4.7.2] For a $k$-partite loss function $\ell$, we define
\[
\|\ell\|_\infty \overset{\mathrm{def}}{=} \sup_{\substack{x\in\mathcal{E}_1(\Omega)\\ y,y'\in\Lambda}} \ell(x,y,y'), \qquad s(\ell) \overset{\mathrm{def}}{=} \inf_{\substack{x\in\mathcal{E}_1(\Omega)\\ y,y'\in\Lambda\\ y\neq y'}} \ell(x,y,y').
\]
3. [4.7.3] A $k$-partite loss function is: bounded if $\|\ell\|_\infty < \infty$; separated if $s(\ell) > 0$ and $\ell(x,y,y) = 0$ for every $x\in\mathcal{E}_1(\Omega)$ and every $y\in\Lambda$.

4. [4.7.4] For a $k$-partite loss function $\ell$, $k$-partite hypotheses $F,H\in\mathcal{F}_k(\Omega,\Lambda)$ and a probability $k$-partite template $\mu\in\Pr(\Omega)$, the total loss of $H$ with respect to $\mu$, $F$ and $\ell$ is
\[
L_{\mu,F,\ell}(H) \overset{\mathrm{def}}{=} \mathbb{E}_{x\sim\mu^1}\Bigl[\ell\bigl(x,H(x),F(x)\bigr)\Bigr].
\]

5. [4.7.5] We say that $F\in\mathcal{F}_k(\Omega,\Lambda)$ is realizable in $\mathcal{H}\subseteq\mathcal{F}_k(\Omega,\Lambda)$ with respect to a $k$-partite loss function $\ell$ and $\mu\in\Pr(\Omega)$ if $\inf_{H\in\mathcal{H}} L_{\mu,F,\ell}(H) = 0$.

6. [4.7.6] The $k$-partite $0/1$-loss function over $\Lambda$ is defined as $\ell_{0/1}(x,y,y') \overset{\mathrm{def}}{=} \mathbb{1}[y\neq y']$.

Definition A.9 (Partite $k$-PAC learnability [4.8]). Let $k\in\mathbb{N}_+$, let $\Omega$ be a Borel $k$-partite template, let $\Lambda$ be a non-empty Borel space and let $\mathcal{H}\subseteq\mathcal{F}_k(\Omega,\Lambda)$ be a $k$-partite hypothesis class.

1. [4.8.2] A ($k$-partite) learning algorithm for $\mathcal{H}$ is a measurable function
\[
\mathcal{A}\colon\bigcup_{m\in\mathbb{N}}\bigl(\mathcal{E}_m(\Omega)\times\Lambda^{[m]^k}\bigr)\to\mathcal{H}.
\]

2. [4.8.3] We say that $\mathcal{H}$ is $k$-PAC learnable with respect to a $k$-partite loss function $\ell$ if there exist a learning algorithm $\mathcal{A}$ for $\mathcal{H}$ and a function $m^{\mathrm{PAC}}_{\mathcal{H},\ell,\mathcal{A}}\colon(0,1)^2\to\mathbb{R}_{\geq 0}$ such that for every $\varepsilon,\delta\in(0,1)$, every $\mu\in\Pr(\Omega)$ and every $F\in\mathcal{F}_k(\Omega,\Lambda)$ that is realizable in $\mathcal{H}$ with respect to $\ell$ and $\mu$, we have
\[
\mathbb{P}_{x\sim\mu^m}\Bigl[L_{\mu,F,\ell}\bigl(\mathcal{A}(x,F^*_m(x))\bigr)\leq\varepsilon\Bigr]\geq 1-\delta
\]
for every integer $m\geq m^{\mathrm{PAC}}_{\mathcal{H},\ell,\mathcal{A}}(\varepsilon,\delta)$. A learning algorithm $\mathcal{A}$ satisfying the above is called a $k$-PAC learner for $\mathcal{H}$ with respect to $\ell$.

Definition A.10 (Partite VCN$_k$-dimension [4.13]). Let $k\in\mathbb{N}_+$, let $\Omega$ be a Borel $k$-partite template, let $\Lambda$ be a non-empty Borel space and let $\mathcal{H}\subseteq\mathcal{F}_k(\Omega,\Lambda)$ be a $k$-partite hypothesis class.

1. [4.13.1] For $A\in\binom{[k]}{k-1}$, let $r_{k,A} \overset{\mathrm{def}}{=} \{f\in r_k(1) \mid \mathrm{dom}(f)\subseteq A\}$; for $x\in\prod_{f\in r_{k,A}} X_{\mathrm{dom}(f)}$ and $H\in\mathcal{F}_k(\Omega,\Lambda)$, let
\[
H(x,-)\colon\prod_{f\in r_k(1)\setminus r_{k,A}} X_{\mathrm{dom}(f)}\to\Lambda
\]
be the function obtained from $H$ by fixing its arguments in $\prod_{f\in r_{k,A}} X_{\mathrm{dom}(f)}$ to be $x$ and let
\[
\mathcal{H}(x) \overset{\mathrm{def}}{=} \{H(x,-) \mid H\in\mathcal{H}\}.
\]
2. [4.13.2] The Vapnik–Chervonenkis–Natarajan $k$-dimension of $\mathcal{H}$ (VCN$_k$-dimension) is defined as
\[
\mathrm{VCN}_k(\mathcal{H}) \overset{\mathrm{def}}{=} \sup_{\substack{A\in\binom{[k]}{k-1}\\ x\in\prod_{f\in r_{k,A}} X_{\mathrm{dom}(f)}}}\mathrm{Nat}\bigl(\mathcal{H}(x)\bigr).
\]

A.3 New high-arity PAC definitions

In this subsection, we lay out the new high-arity definitions of this paper in full generality.

Definition A.11 ($k$-ary Haussler packing property). Let $k\in\mathbb{N}_+$, let $\Omega$ be a Borel ($k$-partite, respectively) template, let $\Lambda$ be a non-empty Borel space, let $\mathcal{H}\subseteq\mathcal{F}_k(\Omega,\Lambda)$ be a $k$-ary ($k$-partite, respectively) hypothesis class and let $\ell\colon\mathcal{E}_k(\Omega)\times\Lambda^{S_k}\times\Lambda^{S_k}\to\mathbb{R}_{\geq 0}$ ($\ell\colon\mathcal{E}_1(\Omega)\times\Lambda\times\Lambda\to\mathbb{R}_{\geq 0}$, respectively) be a $k$-ary ($k$-partite, respectively) loss function.

We say that $\mathcal{H}$ has the $k$-ary Haussler packing property with respect to $\ell$ if there exists a function $m^{\mathrm{HP}}_{\mathcal{H},\ell}\colon(0,1)\to\mathbb{R}_{\geq 0}$ such that for every $\varepsilon\in(0,1)$ and every $\mu\in\Pr(\Omega)$, there exists $\mathcal{H}'\subseteq\mathcal{H}$ with $|\mathcal{H}'|\leq m^{\mathrm{HP}}_{\mathcal{H},\ell}(\varepsilon)$ such that for every $F\in\mathcal{H}$, there exists $H\in\mathcal{H}'$ such that $L_{\mu,F,\ell}(H)\leq\varepsilon$. We refer to the elements of $\mathcal{H}'$ as $k$-ary Haussler centers of $\mathcal{H}$ at precision $\varepsilon$ with respect to $\mu$ and $\ell$.

Definition A.12 (Metric loss functions). Let $k\in\mathbb{N}_+$, let $\Omega$ be a Borel ($k$-partite, respectively) template and let $\Lambda$ be a non-empty Borel space.

We say that a $k$-ary ($k$-partite, respectively) loss function $\ell\colon\mathcal{E}_k(\Omega)\times\Lambda^{S_k}\times\Lambda^{S_k}\to\mathbb{R}_{\geq 0}$ ($\ell\colon\mathcal{E}_1(\Omega)\times\Lambda\times\Lambda\to\mathbb{R}_{\geq 0}$, respectively) is metric if for every $x\in\mathcal{E}_k(\Omega)$ ($x\in\mathcal{E}_1(\Omega)$, respectively), the function $\ell(x,-,-)$ is a metric on $\Lambda^{S_k}$ ($\Lambda$, respectively) in the usual sense, that is, the following hold for every $x\in\mathcal{E}_k(\Omega)$ and $y,y',y''\in\Lambda^{S_k}$ ($x\in\mathcal{E}_1(\Omega)$ and $y,y',y''\in\Lambda$, respectively):

i. We have $\ell(x,y,y') = \ell(x,y',y)$.

ii. We have $\ell(x,y,y') = 0$ if and only if $y = y'$.

iii. We have $\ell(x,y,y'') \leq \ell(x,y,y') + \ell(x,y',y'')$.

A.4 Main results

In this section, we prove the main results in full generality, including high-order variables. For the particular case of the counterpart of Theorem 5.1, Theorem A.14, we will also show that instead of assuming that the loss function $\ell$ is metric, one could assume that $\ell$ is separated and bounded; for this, we will
use the lemma below, which says that for separated and bounded loss functions the total loss satisfies a weak version of the triangle inequality with a rescaling factor. This justifies the usage of the name "(almost) metric" in Figure 1 for losses that are either metric or both separated and bounded. Finally, we point out that the $0/1$-loss function $\ell_{0/1}$ satisfies all hypotheses of Theorems A.14, A.16 and A.17.

Lemma A.13. Let $\Omega$ be a Borel ($k$-partite, respectively) template, let $\Lambda$ be a non-empty Borel space, let $\mathcal{H}\subseteq\mathcal{F}_k(\Omega,\Lambda)$ be a $k$-ary ($k$-partite, respectively) hypothesis class, let $\ell$ be a $k$-ary ($k$-partite, respectively) loss function and let $\mu\in\Pr(\Omega)$ be a probability ($k$-partite, respectively) template. For each $F,H\in\mathcal{H}$, let
\[
D(F,H) \overset{\mathrm{def}}{=} \begin{cases} \{x\in\mathcal{E}_k(\Omega) \mid F^*_k(x)\neq H^*_k(x)\}, & \text{in the non-partite case},\\ \{x\in\mathcal{E}_1(\Omega) \mid F(x)\neq H(x)\}, & \text{in the partite case}. \end{cases}
\]
Then the following hold:

i. We have $s(\ell)\cdot M(F,H) \leq L_{\mu,F,\ell}(H) \leq \|\ell\|_\infty\cdot M(F,H)$, where
\[
M(F,H) \overset{\mathrm{def}}{=} \begin{cases} \mu^k(D(F,H)), & \text{in the non-partite case},\\ \mu^1(D(F,H)), & \text{in the partite case}. \end{cases}
\]

ii. If $\ell$ is separated and $F,F',H\in\mathcal{H}$, then
\[
L_{\mu,F,\ell}(F') \leq \frac{\|\ell\|_\infty}{s(\ell)}\bigl(L_{\mu,F,\ell}(H) + L_{\mu,F',\ell}(H)\bigr).
\]

Proof. Item (i) follows since
\[
L_{\mu,F,\ell}(H) = \begin{cases} \mathbb{E}_{x\sim\mu^k}\bigl[\ell\bigl(x,H^*_k(x),F^*_k(x)\bigr)\bigr], & \text{in the non-partite case},\\ \mathbb{E}_{x\sim\mu^1}\bigl[\ell\bigl(x,H(x),F(x)\bigr)\bigr], & \text{in the partite case}. \end{cases}
\]
For item (ii), by item (i), we have
\[
L_{\mu,F,\ell}(F') \leq \|\ell\|_\infty\cdot M(F,F') \leq \|\ell\|_\infty\cdot\bigl(M(F,H) + M(F',H)\bigr) \leq \frac{\|\ell\|_\infty}{s(\ell)}\cdot\bigl(L_{\mu,F,\ell}(H) + L_{\mu,F',\ell}(H)\bigr),
\]
where the second inequality is the (usual) triangle inequality.

We now prove the counterpart of Theorem 5.1, both in the non-partite and partite settings.

Theorem A.14 ($k$-PAC learnability implies Haussler packing property). Let $k\in\mathbb{N}_+$, let $\Omega$ be a Borel ($k$-partite, respectively) template, let $\Lambda$ be a finite non-empty Borel space, let $\mathcal{H}\subseteq\mathcal{F}_k(\Omega,\Lambda)$ be a $k$-ary ($k$-partite, respectively) hypothesis class and let $\ell$ be a $k$-ary ($k$-partite, respectively) loss function.
Suppose that $\mathcal{H}$ is $k$-PAC learnable with a $k$-PAC learner $\mathcal{A}$. Let also
\[
\gamma_{\mathcal{H}}(m) \overset{\mathrm{def}}{=} \sup_{x\in\mathcal{E}_m(\Omega)}|\{H^*_m(x) \mid H\in\mathcal{H}\}|
\]
be the maximum number of different patterns (in $\Lambda^{([m])_k}$ in the non-partite case, or in $\Lambda^{[m]^k}$ in the partite case) that can be obtained by evaluating all elements of $\mathcal{H}$ at a fixed $x\in\mathcal{E}_m(\Omega)$. Then the following hold:

i. If $\ell$ is separated and bounded, then $\mathcal{H}$ has the Haussler packing property with associated function
\[
m^{\mathrm{HP}}_{\mathcal{H},\ell}(\varepsilon) \overset{\mathrm{def}}{=} \min_{\delta\in(0,1)}\left\lceil\frac{\gamma_{\mathcal{H}}\bigl(\lceil m^{\mathrm{PAC}}_{\mathcal{H},\ell,\mathcal{A}}(s(\ell)\varepsilon/(2\|\ell\|_\infty),\delta)\rceil\bigr)}{1-\delta}\right\rceil - 2 \leq \begin{cases} \displaystyle\min_{\delta\in(0,1)}\left\lceil\frac{|\Lambda|^{(\lceil m^{\mathrm{PAC}}_{\mathcal{H},\ell,\mathcal{A}}(s(\ell)\varepsilon/(2\|\ell\|_\infty),\delta)\rceil)_k}}{1-\delta}\right\rceil - 2, & \text{non-partite case},\\[2ex] \displaystyle\min_{\delta\in(0,1)}\left\lceil\frac{|\Lambda|^{\lceil m^{\mathrm{PAC}}_{\mathcal{H},\ell,\mathcal{A}}(s(\ell)\varepsilon/(2\|\ell\|_\infty),\delta)\rceil^k}}{1-\delta}\right\rceil - 2, & \text{partite case}. \end{cases} \tag{A.2}
\]

ii. If $\ell$ is metric, then $\mathcal{H}$ has the Haussler packing property with associated function
\[
m^{\mathrm{HP}}_{\mathcal{H},\ell}(\varepsilon) \overset{\mathrm{def}}{=} \min_{\delta\in(0,1)}\left\lceil\frac{\gamma_{\mathcal{H}}\bigl(\lceil m^{\mathrm{PAC}}_{\mathcal{H},\ell,\mathcal{A}}(\varepsilon/2,\delta)\rceil\bigr)}{1-\delta}\right\rceil - 2 \leq \begin{cases} \displaystyle\min_{\delta\in(0,1)}\left\lceil\frac{|\Lambda|^{(\lceil m^{\mathrm{PAC}}_{\mathcal{H},\ell,\mathcal{A}}(\varepsilon/2,\delta)\rceil)_k}}{1-\delta}\right\rceil - 2, & \text{non-partite case},\\[2ex] \displaystyle\min_{\delta\in(0,1)}\left\lceil\frac{|\Lambda|^{\lceil m^{\mathrm{PAC}}_{\mathcal{H},\ell,\mathcal{A}}(\varepsilon/2,\delta)\rceil^k}}{1-\delta}\right\rceil - 2, & \text{partite case}. \end{cases} \tag{A.3}
\]

Proof. For item (i), note that due to the ceilings on the right-hand sides of (A.2), the minima are indeed attained, as the functions only take values in $\mathbb{N}$. The inequalities also clearly follow from the trivial bound
\[
\gamma_{\mathcal{H}}(m) \leq \begin{cases} |\Lambda|^{(m)_k}, & \text{in the non-partite case},\\ |\Lambda|^{m^k}, & \text{in the partite case}. \end{cases}
\]
Suppose for a contradiction that the result does not hold, that is, there exist $\varepsilon\in(0,1)$ and $\mu\in\Pr(\Omega)$ such that if $m$ is given by the right-hand side of (A.2), then for every $\mathcal{H}'\subseteq\mathcal{H}$ with $|\mathcal{H}'|\leq m$, there exists $F\in\mathcal{H}$ such that $L_{\mu,F,\ell}(H) > \varepsilon$ for every $H\in\mathcal{H}'$. By starting with $\mathcal{H}' \overset{\mathrm{def}}{=} \varnothing$ and inductively applying this property with $\mathcal{H}' \overset{\mathrm{def}}{=} \{F_1,\ldots,F_t\}$ to produce $F_{t+1}$, it follows that there exist $F_1,\ldots,F_{m+1}\in\mathcal{H}$ such that for every $i,j\in[m+1]$ with $i<j$, we have $L_{\mu,F_i,\ell}(F_j) > \varepsilon$.

Let $\delta\in(0,1)$ attain the first minimum in (A.2) and let
\[
\widetilde{m} \overset{\mathrm{def}}{=} \left\lceil m^{\mathrm{PAC}}_{\mathcal{H},\ell,\mathcal{A}}\left(\frac{s(\ell)\cdot\varepsilon}{2\cdot\|\ell\|_\infty},\delta\right)\right\rceil.
\]
For each $x\in\mathcal{E}_{\widetilde{m}}(\Omega)$, let $Y(x) \overset{\mathrm{def}}{=} \{H^*_{\widetilde{m}}(x) \mid H\in\mathcal{H}\}$ and note that $|Y(x)|\leq\gamma_{\mathcal{H}}(\widetilde{m})$.
For each $i\in[m+1]$, define the set
\[
C_i \overset{\mathrm{def}}{=} \left\{x\in\mathcal{E}_{\widetilde{m}}(\Omega) \;\middle|\; \forall y\in Y(x),\; L_{\mu,F_i,\ell}\bigl(\mathcal{A}(x,y)\bigr) > \frac{s(\ell)\cdot\varepsilon}{2\cdot\|\ell\|_\infty}\right\}.
\]
Note that by taking $y \overset{\mathrm{def}}{=} (F_i)^*_{\widetilde{m}}(x)\in Y(x)$ and using the fact that $\ell$ is separated, so that $F_i$ is realizable in $\mathcal{H}$ w.r.t. $\ell$ and $\mu$, PAC learnability implies that $\mu(C_i)\leq\delta$.

Define now the function $G\colon\mathcal{E}_{\widetilde{m}}(\Omega)\to\mathbb{R}_{\geq 0}$ by
\[
G(x) \overset{\mathrm{def}}{=} \sum_{i=1}^{m+1}\mathbb{1}_{C_i}(x) = \left|\left\{i\in[m+1] \;\middle|\; \forall y\in Y(x),\; L_{\mu,F_i,\ell}\bigl(\mathcal{A}(x,y)\bigr) > \frac{s(\ell)\cdot\varepsilon}{2\cdot\|\ell\|_\infty}\right\}\right|.
\]
We claim that for every $x\in\mathcal{E}_{\widetilde{m}}(\Omega)$ and every $y\in Y(x)$, there exists at most one $i\in[m+1]$ such that $L_{\mu,F_i,\ell}(\mathcal{A}(x,y)) \leq s(\ell)\varepsilon/(2\|\ell\|_\infty)$. Indeed, if not, then for some $i,j\in[m+1]$ with $i<j$, we would get
\[
\frac{s(\ell)\cdot\varepsilon}{\|\ell\|_\infty} \geq L_{\mu,F_i,\ell}\bigl(\mathcal{A}(x,y)\bigr) + L_{\mu,F_j,\ell}\bigl(\mathcal{A}(x,y)\bigr) \geq \frac{s(\ell)}{\|\ell\|_\infty}\cdot L_{\mu,F_i,\ell}(F_j),
\]
where the last inequality follows from Lemma A.13(ii); the above would then contradict $L_{\mu,F_i,\ell}(F_j) > \varepsilon$. Thus, we conclude that
\[
G(x) \geq m+1-|Y(x)| \geq m+1-\gamma_{\mathcal{H}}(\widetilde{m}) \tag{A.4}
\]
for every $x\in\mathcal{E}_{\widetilde{m}}(\Omega)$. On the other hand, since $\mu(C_i)\leq\delta$ for every $i\in[m+1]$, we get
\[
\int_{\mathcal{E}_{\widetilde{m}}(\Omega)} G(x)\,d\mu^{\widetilde{m}}(x) \leq (m+1)\delta,
\]
which together with (A.4) implies
\[
m \leq \frac{\gamma_{\mathcal{H}}(\widetilde{m})}{1-\delta} - 1,
\]
contradicting the definitions of $m$, $\delta$ and $\widetilde{m}$.

The proof of item (ii) is completely analogous (see the proof of Theorem 5.1), except that instead of using Lemma A.13 we use the triangle inequality directly, since $\ell$ is metric; this allows us to improve the bound from (A.2) to (A.3) (also note that the fact that $\ell$ is metric implies that $L_{\mu,F_i,\ell}(F_i) = 0$).

Remark A.15. The final bound provided in Theorem A.14 is provably not tight when $\mathrm{rk}(\mathcal{H})\leq 1$ and $\ell$ is bounded.
This is because, a posteriori, we know that all the results of high-arity PAC in [CM24], along with Theorem A.17, prove that the notions considered are also equivalent to finiteness of the VCN$_k$-dimension, which in turn implies that
\[
\gamma_{\mathcal{H}}(m) \leq \begin{cases} \displaystyle (m+1)^{\mathrm{VCN}_k(\mathcal{H})\cdot\binom{m}{k-1}}\cdot\binom{|\Lambda|}{2}^{\mathrm{VCN}_k(\mathcal{H})\cdot\binom{m}{k-1}}, & \text{in the non-partite case},\\[2ex] \displaystyle (m+1)^{\mathrm{VCN}_k(\mathcal{H})\cdot m^{k-1}}\cdot\binom{|\Lambda|}{2}^{\mathrm{VCN}_k(\mathcal{H})\cdot m^{k-1}}, & \text{in the partite case}, \end{cases} \leq \left(\frac{|\Lambda|^2\cdot(m+1)}{2}\right)^{\mathrm{VCN}_k(\mathcal{H})\cdot m^{k-1}}, \tag{A.5}
\]
instead of the trivial bounds for $\gamma_{\mathcal{H}}(m)$ used in Theorem A.14.

For the counterpart of Theorem 5.3, it will be more convenient to split it into a partite version (Theorem A.16) and a non-partite version (Theorem A.17). We start with the easier one: the partite version.

Theorem A.16 (Haussler packing property implies finite VCN$_k$-dimension, partite version). Let $k\in\mathbb{N}_+$, let $\Omega$ be a Borel $k$-partite template, let $\Lambda$ be a finite non-empty Borel space, let $\mathcal{H}\subseteq\mathcal{F}_k(\Omega,\Lambda)$ be a $k$-partite hypothesis class and let $\ell\colon\mathcal{E}_1(\Omega)\times\Lambda\times\Lambda\to\mathbb{R}_{\geq 0}$ be a $k$-partite loss function. Suppose $\ell$ is separated and $\mathrm{rk}(\mathcal{H})\leq 1$. If $\mathcal{H}$ has the Haussler packing property, then
\[
\mathrm{VCN}_k(\mathcal{H}) \leq \min_{\varepsilon\in(0,\min\{s(\ell)/2,1\})}\left\lfloor\frac{\log_2\lfloor m^{\mathrm{HP}}_{\mathcal{H},\ell}(\varepsilon)\rfloor}{1-h_2(\varepsilon/s(\ell))}\right\rfloor, \tag{A.6}
\]
where
\[
h_2(t) \overset{\mathrm{def}}{=} t\log_2\frac{1}{t} + (1-t)\log_2\frac{1}{1-t}
\]
denotes the binary entropy.

Proof. Note that the minimum in (A.6) is indeed attained, as the function only takes values in $\mathbb{N}\cup\{-\infty\}$, so let $\varepsilon\in(0,\min\{s(\ell)/2,1\})$ attain the minimum, let $d$ be the value of the minimum and let $m \overset{\mathrm{def}}{=} \lfloor m^{\mathrm{HP}}_{\mathcal{H},\ell}(\varepsilon)\rfloor$, so that
\[
d = \left\lfloor\frac{\log_2 m}{1-h_2(\varepsilon/s(\ell))}\right\rfloor.
\]
When $\mathcal{H}$ is empty, the result is trivial as $\mathrm{VCN}_k(\mathcal{H}) = -\infty$, so suppose $\mathcal{H}$ is non-empty (hence $m\geq 1$ and $d\geq 0$).

By the definition of VCN$_k$-dimension, we have to show that if $A\in\binom{[k]}{k-1}$ and $x\in\prod_{f\in r_{k,A}} X_{\mathrm{dom}(f)}$, then $\mathrm{Nat}(\mathcal{H}(x))\leq d$. In turn, it suffices to show that if $V\subseteq\prod_{f\in r_k(1)\setminus r_{k,A}} X_{\mathrm{dom}(f)}$ is a (finite) set that is Natarajan-shattered by $\mathcal{H}(x)$, then $|V|\leq d$.
Let
\[
R \overset{\mathrm{def}}{=} r_k(1)\setminus\bigl(r_{k,A}\cup\{1_{\{a\}}\}\bigr),
\]
where $a$ is the unique element of $[k]\setminus A$ and $1_{\{a\}}$ is the unique function $\{a\}\to[1]$, and let $x'\in\prod_{f\in R} X_{\mathrm{dom}(f)}$ be any fixed point. Since $\mathrm{rk}(\mathcal{H})\leq 1$, it follows that the projection $V'$ of $V$ onto the coordinate indexed by $1_{\{a\}}$ is Natarajan-shattered by $\{H(x,x',-) \mid H\in\mathcal{H}\}$. Let $n \overset{\mathrm{def}}{=} |V|$ (which is equal to $|V'|$, as $\mathrm{rk}(\mathcal{H})\leq 1$ and $V$ is Natarajan-shattered by $\mathcal{H}(x)$) and let $\mu\in\Pr(\Omega)$ be the probability $k$-partite template given by:

• For each $f\in r_{k,A}$, $\mu_f$ is the Dirac delta
concentrated on $x_f$.

• For each $f\in R$, $\mu_f$ is the Dirac delta concentrated on $x'_f$.

• $\mu_{1_{\{a\}}}$ is the uniform measure on $V'$.

Since $V'$ is Natarajan-shattered by $\{H(x,x',-) \mid H\in\mathcal{H}\}$, there exist functions $f_0,f_1\colon V'\to\Lambda$ with $f_0(v)\neq f_1(v)$ for every $v\in V'$ and there exists a family $\{F^U \mid U\subseteq V'\}\subseteq\mathcal{H}$ such that for every $U\subseteq V'$ and every $v\in V'$, we have $F^U(x,x',v) = f_{\mathbb{1}[v\in U]}(v)$.

Using now our definition of $m$ via the Haussler packing property, we know that there exists $\mathcal{H}'\subseteq\mathcal{H}$ such that $|\mathcal{H}'|\leq m$ and for every $U\subseteq V'$, there exists $H\in\mathcal{H}'$ such that $L_{\mu,F^U,\ell}(H)\leq\varepsilon$. For each $H\in\mathcal{H}'$, let
\begin{align*}
U_H &\overset{\mathrm{def}}{=} \{v\in V' \mid H(x,x',v) = f_1(v)\},\\
B(H) &\overset{\mathrm{def}}{=} \{U\subseteq V' \mid L_{\mu,F^U,\ell}(H)\leq\varepsilon\},\\
B'(H) &\overset{\mathrm{def}}{=} \left\{U\subseteq V' \;\middle|\; |U\triangle U_H|\leq\frac{\varepsilon\cdot n}{s(\ell)}\right\}.
\end{align*}
The Haussler packing property assumption implies
\[
\bigcup_{H\in\mathcal{H}'} B(H) = 2^{V'}. \tag{A.7}
\]
Since $\ell$ is separated, by Lemma A.13(i), for every $H\in\mathcal{H}'$ and every $U\subseteq V'$, we have
\[
L_{\mu,F^U,\ell}(H) \geq s(\ell)\cdot\mu^1(D(F^U,H)) \geq s(\ell)\cdot\mu^1(D(F^U,F^{U_H})) \geq \frac{s(\ell)}{n}\cdot|U\triangle U_H|,
\]
hence $B(H)\subseteq B'(H)$, which together with (A.7) implies $\bigcup_{H\in\mathcal{H}'} B'(H) = 2^{V'}$. Since $\varepsilon/s(\ell) < 1/2$, by Lemma 5.2, we get
\[
n \leq \frac{\log_2|\mathcal{H}'|}{1-h_2(\varepsilon/s(\ell))} \leq \frac{\log_2 m}{1-h_2(\varepsilon/s(\ell))},
\]
which yields $n\leq d$, as $n$ is an integer.

Theorem A.17 (Haussler packing property implies finite VCN$_k$-dimension, non-partite version). Let $k\in\mathbb{N}_+$, let $\Omega$ be a Borel template, let $\Lambda$ be a finite non-empty Borel space, let $\mathcal{H}\subseteq\mathcal{F}_k(\Omega,\Lambda)$ be a $k$-ary hypothesis class and let $\ell\colon\mathcal{E}_k(\Omega)\times\Lambda^{S_k}\times\Lambda^{S_k}\to\mathbb{R}_{\geq 0}$ be a $k$-ary loss function. Suppose that $\ell$ is separated and $\mathrm{rk}(\mathcal{H})\leq 1$. If $\mathcal{H}$ has the Haussler packing property, then
\[
\mathrm{VCN}_k(\mathcal{H}) \leq \min_{\varepsilon\in(0,\min\{s(\ell)\cdot k!/(2k^k),1\})}\left\lfloor\frac{\log_2\lfloor m^{\mathrm{HP}}_{\mathcal{H},\ell}(\varepsilon)\rfloor}{1-h_2(\varepsilon\cdot k^k/(s(\ell)\cdot k!))}\right\rfloor, \tag{A.8}
\]
where
\[
h_2(t) \overset{\mathrm{def}}{=} t\log_2\frac{1}{t} + (1-t)\log_2\frac{1}{1-t}
\]
denotes the binary entropy.

Proof. Note that the minimum in (A.8) is indeed attained, as the function only takes values in $\mathbb{N}\cup\{-\infty\}$, so let $\varepsilon\in(0,\min\{s(\ell)k!/(2k^k),1\})$ attain the minimum, let $d$ be the value of the minimum and let $m \overset{\mathrm{def}}{=} \lfloor m^{\mathrm{HP}}_{\mathcal{H},\ell}(\varepsilon)\rfloor$, so that
\[
d = \left\lfloor\frac{\log_2 m}{1-h_2(\varepsilon\cdot k^k/(s(\ell)\cdot k!))}\right\rfloor.
\]
When $\mathcal{H}$ is empty, the result is trivial as $\mathrm{VCN}_k(\mathcal{H}) = -\infty$, so suppose $\mathcal{H}$ is non-empty (hence $m\geq 1$ and $d\geq 0$). By the definition of VCN$_k$-dimension, we have to show that if $x\in\mathcal{E}_{k-1}(\Omega)$, then $\mathrm{Nat}(\mathcal{H}(x))\leq d$. In turn, it suffices to show that if $V\subseteq\prod_{A\in r(k)\setminus r(k-1)} X_{|A|}$ is a (finite) set that is Natarajan-shattered by $\mathcal{H}(x)$, then $|V|\leq d$.

Let
\[
R \overset{\mathrm{def}}{=} r(k)\setminus\bigl(r(k-1)\cup\{\{k\}\}\bigr)
\]
and let $x'\in\prod_{A\in R} X_{|A|}$ be any fixed point. Since $\mathrm{rk}(\mathcal{H})\leq 1$, it follows that the projection $V'$ of $V$ onto the coordinate indexed by $\{k\}$ is Natarajan-shattered by $\{H^*_k(x,x',-) \mid H\in\mathcal{H}\}$. Let $n \overset{\mathrm{def}}{=} |V|$ (which is equal to $|V'|$, as $\mathrm{rk}(\mathcal{H})\leq 1$ and $V$ is Natarajan-shattered by $\mathcal{H}(x)$) and let $\mu\in\Pr(\Omega)$ be the probability template given by letting $\mu_i$ be any probability measure on $\Omega_i$ for each $i\in\mathbb{N}$ with $i\geq 2$ and letting
\[
\mu_1 \overset{\mathrm{def}}{=} \frac{1}{k}\left(\nu_{V'} + \sum_{j=1}^{k-1}\delta_{x_{\{j\}}}\right),
\]
where $\nu_{V'}$ is the uniform probability measure on $V'$ and $\delta_t$ is the Dirac delta concentrated on $t$.

Since $V'$ is Natarajan-shattered by $\{H^*_k(x,x',-) \mid H\in\mathcal{H}\}$, there exist functions $f_0,f_1\colon V'\to\Lambda^{S_k}$ with $f_0(v)\neq f_1(v)$ for every $v\in V'$ and there exists a family $\{F^U \mid U\subseteq V'\}\subseteq\mathcal{H}$ such that for every $U\subseteq V'$ and every $v\in V'$, we have $(F^U)^*_k(x,x',v) = f_{\mathbb{1}[v\in U]}(v)$.

Recall that for $H_1,H_2\in\mathcal{H}$, Lemma A.13 defines $D(H_1,H_2) \overset{\mathrm{def}}{=} \{x\in\mathcal{E}_k(\Omega) \mid (H_1)^*_k(x)\neq(H_2)^*_k(x)\}$. Let us also define
\[
D'(H_1,H_2) \overset{\mathrm{def}}{=} \{v\in V' \mid (H_1)^*_k(x,x',v)\neq(H_2)^*_k(x,x',v)\}.
\]
Clearly, for every $U,U'\subseteq V'$, we have $D'(F^U,F^{U'}) = U\triangle U'$. Note that
\[
\mu^k(D(H_1,H_2)) \geq \mathbb{P}_{z\sim\mu^k}\bigl[\exists\sigma\in S_k,\ \forall j\in[k-1],\ z_{\{\sigma(j)\}} = x_{\{j\}} \wedge z_{\{\sigma(k)\}}\in D'(H_1,H_2)\bigr] \geq \frac{k!}{k^k\cdot n}\cdot|D'(H_1,H_2)|. \tag{A.9}
\]
(This is true even if there are repetitions among $x_{\{1\}},\ldots,x_{\{k-1\}}$ and even if $V'$ contains some of these elements.)

Using now our definition of $m$ via the Haussler packing property, we know that there exists $\mathcal{H}'\subseteq\mathcal{H}$ such that $|\mathcal{H}'|\leq m$ and for every $U\subseteq V'$, there exists $H\in\mathcal{H}'$ such that $L_{\mu,F^U,\ell}(H)\leq\varepsilon$.
For each $H\in\mathcal{H}'$, let
\[
U_H \coloneqq \{v\in V'\mid H(x,x',v)=f_1(v)\},\qquad
B(H) \coloneqq \{U\subseteq V'\mid L_{\mu,F_U,\ell}(H)\le\varepsilon\},\qquad
B'(H) \coloneqq \Bigl\{U\subseteq V'\,\Bigm|\,|U\mathbin{\triangle}U_H|\le \frac{\varepsilon\cdot k^k\cdot n}{s(\ell)\cdot k!}\Bigr\}.
\]
The Haussler packing property assumption implies
\[
\bigcup_{H\in\mathcal{H}'} B(H) = 2^{V'}. \tag{A.10}
\]
Since $\ell$ is separated, by Lemma A.13(i), for every $H\in\mathcal{H}'$ and every $U\subseteq V'$, we have
\[
L_{\mu,F_U,\ell}(H) \ge s(\ell)\cdot\mu^k(D(F_U,H)) \ge \frac{s(\ell)\cdot k!}{k^k\cdot n}\cdot|D'(F_U,H)| \ge \frac{s(\ell)\cdot k!}{k^k\cdot n}\cdot|D'(F_U,F_{U_H})| = \frac{s(\ell)\cdot k!}{k^k\cdot n}\cdot|U\mathbin{\triangle}U_H|,
\]
where the second inequality follows from (A.9), hence $B(H)\subseteq B'(H)$, which together
https://arxiv.org/abs/2505.15688v1
with (A.10) implies $\bigcup_{H\in\mathcal{H}'} B'(H)=2^{V'}$. Since $\varepsilon\cdot k^k/(s(\ell)\cdot k!)<1/2$, by Lemma 5.2, we get
\[
n \le \frac{\log_2|\mathcal{H}'|}{1-h_2\bigl(\varepsilon\cdot k^k/(s(\ell)\cdot k!)\bigr)} \le \frac{\log_2 m}{1-h_2\bigl(\varepsilon\cdot k^k/(s(\ell)\cdot k!)\bigr)},
\]
which yields $n\le d$ as $n$ is an integer. $\square$

B The partization operation

In this section, we recall the definition of the partization operation from [CM24, Definition 4.20] and we prove that the partite Haussler packing property of $\mathcal{H}^{k\text{-part}}$ implies the non-partite Haussler packing property of $\mathcal{H}$. Similarly to Section A, numbers in square brackets in the definition below refer to the exact location of the concept in [CM24].

Definition B.1 (Partization [4.20]). Let $k\in\mathbb{N}_+$, let $\Omega$ be a Borel template and let $\Lambda$ be a non-empty Borel space.

1. [4.20.1] The $k$-partite version of $\Omega$ is the Borel $k$-partite template $\Omega^{k\text{-part}}$ given by $\Omega^{k\text{-part}}_A \coloneqq \Omega_{|A|}$ ($A\in r(k)$).

2. [4.20.2] For $\mu\in\mathrm{Pr}(\Omega)$, the $k$-partite version of $\mu$ is $\mu^{k\text{-part}}\in\mathrm{Pr}(\Omega^{k\text{-part}})$ given by $\mu^{k\text{-part}}_A \coloneqq \mu_{|A|}$ ($A\in r(k)$).

3. [4.20.3] For a hypothesis $F\in\mathcal{F}_k(\Omega,\Lambda)$, the $k$-partite version of $F$ is the $k$-partite hypothesis $F^{k\text{-part}}\in\mathcal{F}_k(\Omega^{k\text{-part}},\Lambda^{S_k})$ given by $F^{k\text{-part}}(x) \coloneqq F^*_k(\iota^{k\text{-part}}(x))$ ($x\in E_1(\Omega^{k\text{-part}})$), where $\iota^{k\text{-part}}\colon E_1(\Omega^{k\text{-part}})\to E_k(\Omega)$ is given by
\[
\iota^{k\text{-part}}(x)_A \coloneqq x_{1_A} \quad (x\in E_1(\Omega^{k\text{-part}}),\ A\in r(k)) \tag{B.1}
\]
and $1_A\in r_k(1)$ is the unique function $A\to[1]$.

4. [4.20.4] For a hypothesis class $\mathcal{H}\subseteq\mathcal{F}_k(\Omega,\Lambda)$, the $k$-partite version of $\mathcal{H}$ is $\mathcal{H}^{k\text{-part}} \coloneqq \{H^{k\text{-part}}\mid H\in\mathcal{H}\}$, equipped with the pushforward $\sigma$-algebra of the one of $\mathcal{H}$. In [CM24, Lemma 8.1] (see Lemma B.2 below), it is shown that $\iota^{k\text{-part}}$ is a Borel isomorphism, which in turn implies that $\mathcal{H}\ni F\mapsto F^{k\text{-part}}\in\mathcal{H}^{k\text{-part}}$ is a bijection (so singletons of $\mathcal{H}^{k\text{-part}}$ are indeed measurable) and that $\mathcal{H}\mapsto\mathcal{H}^{k\text{-part}}$ is an injection. We denote by $\mathcal{H}^{k\text{-part}}\ni G\mapsto G^{k\text{-part},-1}\in\mathcal{H}$ the inverse of $\mathcal{H}\ni F\mapsto F^{k\text{-part}}\in\mathcal{H}^{k\text{-part}}$.

5. [4.20.5] For a $k$-ary loss function $\ell$ over $\Lambda$, the $k$-partite version of $\ell$ is $\ell^{k\text{-part}}\colon E_1(\Omega^{k\text{-part}})\times\Lambda^{S_k}\times\Lambda^{S_k}\to\mathbb{R}_{\ge 0}$ given by
\[
\ell^{k\text{-part}}(x,y,y') \coloneqq \ell\bigl(\iota^{k\text{-part}}(x),y,y'\bigr) \quad (x\in E_1(\Omega^{k\text{-part}}),\ y,y'\in\Lambda^{S_k}).
\]
Lemma B.2 (Partization basics [CM24, Lemma 8.1]). Let $\Omega$ be a Borel template, let $k\in\mathbb{N}_+$ and let $\Lambda$ be a non-empty Borel space. Then the following hold:

i. For $\mu\in\mathrm{Pr}(\Omega)$ and $m\in\mathbb{N}$, the function $\phi_m\colon E_m(\Omega)\to E_{\lfloor m/k\rfloor}(\Omega^{k\text{-part}})$ given by
\[
\phi_m(x)_f \coloneqq x_{\{(i-1)\lfloor m/k\rfloor + f(i) \mid i\in\operatorname{dom}(f)\}} \quad (f\in r_k(\lfloor m/k\rfloor)) \tag{B.2}
\]
is measure-preserving with respect to $\mu^m$ and $(\mu^{k\text{-part}})^{\lfloor m/k\rfloor}$. Furthermore, if $m$ is divisible by $k$, then $\phi_m$ is a measure-isomorphism. Moreover, we have $\phi_k^{-1}=\iota^{k\text{-part}}$, where $\iota^{k\text{-part}}$ is given by (B.1).

ii. For $m\in\mathbb{N}$, $F\in\mathcal{F}_k(\Omega,\Lambda)$ and $\Phi_m\colon\Lambda^{([m])_k}\to(\Lambda^{S_k})^{[\lfloor m/k\rfloor]^k}$ given by
\[
(\Phi_m(y)_\alpha)_\tau \coloneqq y_{\beta_\alpha\circ\tau} \quad \bigl(\alpha\in[\lfloor m/k\rfloor]^k,\ \tau\in S_k\bigr), \tag{B.3}
\]
where $\beta_\alpha\in([m])_k$ is given by
\[
\beta_\alpha(i) \coloneqq (i-1)\left\lfloor\frac{m}{k}\right\rfloor + \alpha(i) \quad \bigl(\alpha\in[\lfloor m/k\rfloor]^k,\ i\in[k]\bigr), \tag{B.4}
\]
the diagram
\[
\begin{array}{ccc}
E_m(\Omega) & \xrightarrow{\;F^*_m\;} & \Lambda^{([m])_k} \\[2pt]
\downarrow{\scriptstyle\phi_m} & & \downarrow{\scriptstyle\Phi_m} \\[2pt]
E_{\lfloor m/k\rfloor}(\Omega^{k\text{-part}}) & \xrightarrow{\;(F^{k\text{-part}})^*_{\lfloor m/k\rfloor}\;} & (\Lambda^{S_k})^{[\lfloor m/k\rfloor]^k}
\end{array}
\]
commutes, where $\phi_m$ is given by (B.2).

The following lemma is equation (8.7) within the proof of [CM24, Proposition 8.4]. For completeness purposes, we restate this below with a self-contained proof.

Lemma B.3. Let $\Omega$ be a Borel template, let $k\in\mathbb{N}_+$, let $\Lambda$ be a non-empty Borel space, let $\ell\colon E_k(\Omega)\times\Lambda^{S_k}\times\Lambda^{S_k}\to\mathbb{R}_{\ge 0}$ be a $k$-ary loss function and let $F,H\in\mathcal{F}_k(\Omega,\Lambda)$ be hypotheses. Then
\[
L_{\mu,F,\ell}(H) = L_{\mu^{k\text{-part}},F^{k\text{-part}},\ell^{k\text{-part}}}(H^{k\text{-part}}).
\]
Proof. This follows directly from
\begin{align*}
L_{\mu,F,\ell}(H) &= \mathbb{E}_{x\sim\mu^k}[\ell(x,H^*_k(x),F^*_k(x))] \\
&= \mathbb{E}_{x\sim\mu^k}[\ell^{k\text{-part}}(\phi_k(x),H^{k\text{-part}}(\phi_k(x)),F^{k\text{-part}}(\phi_k(x)))] \\
&= \mathbb{E}_{z\sim(\mu^{k\text{-part}})^1}[\ell^{k\text{-part}}(z,H^{k\text{-part}}(z),F^{k\text{-part}}(z))] \\
&= L_{\mu^{k\text{-part}},F^{k\text{-part}},\ell^{k\text{-part}}}(H^{k\text{-part}}),
\end{align*}
where the second equality follows from the definition of $\ell^{k\text{-part}}$ and Lemma B.2(ii), and the third equality follows since $\phi_k$ is measure-preserving by Lemma B.2(i).

Theorem B.4 (Haussler property: partite to non-partite). Let $\Omega$ be a Borel template, let $k\in\mathbb{N}_+$, let $\Lambda$ be a non-empty Borel space, let $\mathcal{H}\subseteq\mathcal{F}_k(\Omega,\Lambda)$ be a $k$-ary hypothesis class and let $\ell\colon E_k(\Omega)\times\Lambda^{S_k}\times\Lambda^{S_k}\to\mathbb{R}_{\ge 0}$ be a $k$-ary loss function.
If $\mathcal{H}^{k\text{-part}}$ has the Haussler packing property with respect to $\ell^{k\text{-part}}$, then $\mathcal{H}$ has the Haussler packing property with respect to $\ell$ with associated function $m^{\mathrm{HP}}_{\mathcal{H},\ell} \coloneqq m^{\mathrm{HP}}_{\mathcal{H}^{k\text{-part}},\ell^{k\text{-part}}}$.

Proof. For $\varepsilon\in(0,1)$ and $\mu\in\mathrm{Pr}(\Omega)$, we know that there exists $\widetilde{\mathcal{H}}\subseteq\mathcal{H}^{k\text{-part}}$ with $|\widetilde{\mathcal{H}}|\le m^{\mathrm{HP}}_{\mathcal{H}^{k\text{-part}},\ell^{k\text{-part}}}(\varepsilon)$ such that
\[
\forall F\in\mathcal{H}^{k\text{-part}},\ \exists H\in\widetilde{\mathcal{H}},\quad L_{\mu^{k\text{-part}},F,\ell^{k\text{-part}}}(H)\le\varepsilon.
\]
Letting $\mathcal{H}'\coloneqq\widetilde{\mathcal{H}}^{k\text{-part},-1}$, the above implies
\[
\forall F\in\mathcal{H},\ \exists H\in\mathcal{H}',\quad L_{\mu^{k\text{-part}},F^{k\text{-part}},\ell^{k\text{-part}}}(H^{k\text{-part}})\le\varepsilon,
\]
which by Lemma B.3 yields
\[
\forall F\in\mathcal{H},\ \exists H\in\mathcal{H}',\quad L_{\mu,F,\ell}(H)\le\varepsilon,
\]
as desired. $\square$
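As a quick numerical illustration of the bound (A.8), its right-hand side can be evaluated directly once the packing number is known. The sketch below is hypothetical: the values of $s(\ell)$, $\varepsilon$ and the packing number $m$ are made up for the example.

```python
import math

def h2(t):
    """Binary entropy h2(t) = t*log2(1/t) + (1-t)*log2(1/(1-t))."""
    if t in (0.0, 1.0):
        return 0.0
    return t * math.log2(1.0 / t) + (1.0 - t) * math.log2(1.0 / (1.0 - t))

def vcn_bound(m, eps, s_ell, k):
    """Evaluate floor(log2(m) / (1 - h2(eps * k^k / (s_ell * k!)))) as in (A.8)."""
    t = eps * k**k / (s_ell * math.factorial(k))
    assert t < 0.5, "epsilon outside the admissible range of (A.8)"
    return math.floor(math.log2(m) / (1.0 - h2(t)))

# Hypothetical example: k = 2, s(l) = 1, m = 1024, eps = 0.1, so t = 0.2
# and the bound is floor(10 / (1 - h2(0.2))) = 35.
d = vcn_bound(m=1024, eps=0.1, s_ell=1.0, k=2)
```

The assertion inside `vcn_bound` mirrors the requirement $\varepsilon\cdot k^k/(s(\ell)\cdot k!)<1/2$ under which Lemma 5.2 applies.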
arXiv:2505.15794v1 [math.ST] 21 May 2025

One-sample location tests based on center-outward signs and ranks

Daniel Hlubinka and Šárka Hudecová

Abstract A multivariate one-sample location test based on the center-outward ranks and signs is considered, and two different testing procedures are proposed for centrally symmetric distributions. The first test is based on a random division of the data into two samples, while the second one uses a symmetrized sample. The asymptotic distributions of the proposed tests are provided. For univariate data, two variants of the symmetrized test statistic are shown to be equivalent to the standard sign and Wilcoxon tests, respectively. The small-sample behavior of the proposed techniques is illustrated by a simulation study that also provides a power comparison for various transportation grids.

Key words: center-outward ranks and signs, multivariate one-sample location test, distribution-freeness

1 Introduction

Testing a location for a given sample is one of the basic problems in theoretical and applied statistics. It finds a number of applications, including the comparison of paired data. For univariate observations, the one-sample t-test is a basic technique which is valid asymptotically under a broad class of alternatives, but which may be far from optimal for distributions far from Gaussian. There are several well-known nonparametric tests of location, the sign test and the one-sample Wilcoxon test being among the most common, see Hájek and Šidák (1967).
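For concreteness, both classical univariate tests can be carried out in a few lines; a minimal sketch on simulated data (using SciPy; the shift 0.3 and the $t_3$ distribution are arbitrary choices for the illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.standard_t(df=3, size=200) + 0.3   # sample whose true location is 0.3

# Sign test of H0: median = 0; the number of positive observations
# is Binomial(n, 1/2) under the null.
n_pos = int(np.sum(x > 0))
sign_p = stats.binomtest(n_pos, n=len(x), p=0.5).pvalue

# One-sample Wilcoxon signed-rank test of H0: distribution symmetric around 0.
wilcoxon_p = stats.wilcoxon(x).pvalue
```

Both tests use only the signs, respectively the signed ranks, of the observations, which is exactly the structure that resists a direct multivariate generalization.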
Daniel Hlubinka, Charles University, Faculty of Mathematics and Physics, Department of Probability and Statistics, e-mail: daniel.hlubinka@matfyz.cuni.cz

Šárka Hudecová, Charles University, Faculty of Mathematics and Physics, Department of Probability and Statistics, e-mail: hudecova@karlin.mff.cuni.cz

Both these tests use the notion of ranks and signs, which is, unfortunately, not directly extendable to a multivariate setting. For a multivariate one-sample location problem, the classical procedure is Hotelling's $T^2$ test, which is optimal under the Gaussian distribution but which can fail for non-elliptical distributions or distributions with heavy tails. There have been several attempts to define ranks for multivariate observations, see Hallin et al. (2021) for an overview. Oja and Randles (2004) proposed a one-sample location test based on spatial signs and spatial ranks, while Randles (1989) and Oja and Paindaveine (2005) studied tests defined via hyperplane-based signs and ranks, see also recent computational improvements in Hudecová and Šiman (2022). The mentioned multivariate rank concepts require an additional assumption that the underlying distribution is elliptical, which can be limiting in applications.

Recently, Chernozhukov et al. (2017) and Hallin et al. (2021) proposed a concept of multivariate ranks and signs derived from the theory of optimal measure transportation. The corresponding center-outward (c-o) ranks and signs have been applied in various multivariate statistical problems, see (Hallin, 2022; Ghosal and Sen, 2022; Shi et al., 2022; Hallin et al., 2023) and further references therein. Among them, Hallin et al. (2022) proposed rank tests for multiple-output regression, which involve a two-sample location problem as a special case. The main advantage of these c-o rank tests is that they are fully distribution-free and asymptotically efficient.
The one-sample location test based on c-o ranks has not been studied in the literature. The regression framework from Hallin et al. (2022) is not directly applicable without some additional constraints, because the c-o ranks and signs are invariant with respect to a shift. Inspired by the univariate one-sample Wilcoxon test, one possible strategy is to assume some kind of symmetry of the underlying multivariate distribution and to derive a location test which employs this property. Two possible approaches are explored in this contribution for centrally symmetric multivariate distributions.

The paper is organized as follows. Section 2 defines the c-o ranks and signs. Subsequently, two rank tests are proposed: the first one, discussed in Section 3.1, is based on a random division of the data into two samples and the two-sample location test from Hallin et al. (2022). The second approach, introduced in Section 3.2, uses a symmetrized sample, and two test statistics are suggested. The asymptotic distributions of the corresponding test statistics are derived. Section 3.3 shows that for univariate data the tests are equivalent to the sign test and the Wilcoxon test, respectively. The finite-sample behavior of the proposed methods is investigated in a Monte Carlo study in Section 4, which also compares the power for various transportation grids.

2 Preliminaries

Let $\mathbf{X}$ be a $d$-dimensional random vector with an absolutely continuous distribution $P_{\mathbf{X}}$. Let $\mathbb{B}^d$ be the closed unit ball in $\mathbb{R}^d$ and $\mathcal{S}_d=\{\boldsymbol{z}\in\mathbb{R}^d;\ \|\boldsymbol{z}\|=1\}$ be the unit sphere. Denote by $U_d$ the distribution of a random vector $\boldsymbol{U}=R\mathbf{Z}$ for $R$ and $\mathbf{Z}$ independent, $R$ uniformly distributed on $[0,1]$, and $\mathbf{Z}$ uniformly distributed on $\mathcal{S}_d$. We say that a map $T$ pushes a distribution $P_1$ forward to $P_2$ if for $\mathbf{Z}\sim P_1$ it holds that $T(\mathbf{Z})\sim P_2$. For the univariate case $d=1$, $U_1$ is simply the uniform distribution on the interval $[-1,1]$.
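The spherical uniform distribution $U_d$ can be sampled directly from its definition $\boldsymbol{U}=R\mathbf{Z}$; a small illustrative sketch (normalizing a Gaussian vector is a standard way of drawing $\mathbf{Z}$ uniformly on the sphere):

```python
import numpy as np

def sample_Ud(n, d, rng):
    """Draw n points from U_d: R ~ Unif[0,1] times Z uniform on the unit sphere."""
    z = rng.standard_normal((n, d))
    z /= np.linalg.norm(z, axis=1, keepdims=True)  # uniform directions on the sphere
    r = rng.uniform(size=(n, 1))                   # uniform radii
    return r * z

rng = np.random.default_rng(1)
u = sample_Ud(10000, 3, rng)
# All points lie in the closed unit ball, and the radius ||U|| is Unif[0,1].
```

Note that the radius is uniform on $[0,1]$ rather than the density of a uniform distribution on the ball, which is what makes $\|\boldsymbol{U}\|$ directly usable as a rank scale.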
The center-outward distribution function (c-o df) $\mathbf{F}_{\pm}\colon\mathbb{R}^d\to\mathbb{B}^d$ of $\mathbf{X}$ is defined as the gradient of a convex function pushing $P_{\mathbf{X}}$ forward to $U_d$. Under the existence of finite second moments, $\mathbf{F}_{\pm}$ is the $L_2$-optimal Monge-Kantorovich transportation, see Hallin et al. (2021); Hallin (2022) for more details.

Remark 1 The motivation for the definition of the c-o df $\mathbf{F}_{\pm}$ comes from the univariate situation. Let $X$ be a random variable with an absolutely continuous distribution with a cumulative distribution function (cdf) $F_X$. Then $X$ can be transformed monotonically to a random variable $W\sim U_1$, namely $W=2F_X(X)-1=F_{\pm}(X)$. Moreover, if $\mathbb{E}X^2$ is finite, then the $L_2$ transport cost is minimal in the sense that $\min_{V\sim U_1}\mathbb{E}(X-V)^2=\mathbb{E}(X-F_{\pm}(X))^2$. This property can be directly extended to an absolutely continuous random vector. The gradient of a convex function on $\mathbb{R}^d$ is a $d$-variate analogue of a monotonic function on $\mathbb{R}$. It follows from McCann's theorem (McCann, 1995) that there is a $P_{\mathbf{X}}$-almost surely unique gradient $\mathbf{F}_{\pm}$ of a convex function such that $\mathbf{F}_{\pm}(\mathbf{X})$ follows the uniform distribution $U_d$. Clearly, without the assumption of absolute continuity of $\mathbf{X}$ there exists no transformation to a uniformly distributed random variable, even for $d=1$. Assuming $\mathbb{E}\|\mathbf{X}\|^2<\infty$, the center-outward distribution function minimizes the mean square distance $\mathbb{E}\|\mathbf{X}-\mathbf{F}_{\pm}(\mathbf{X})\|^2$ over all $U_d$-distributed random vectors.

Let $\mathbf{X}_1,\ldots,\mathbf{X}_n$ be a random sample from $P_{\mathbf{X}}$. The empirical center-outward distribution function $\mathbf{F}^{(n)}_{\pm}$ is the mapping of $\mathbf{X}_1,\ldots,\mathbf{X}_n$ to a regular grid $\mathcal{G}_n$ of $n$ points in the unit ball $\mathbb{B}^d$ such that $\sum_{i=1}^n\|\mathbf{F}^{(n)}_{\pm}(\mathbf{X}_i)-\mathbf{X}_i\|$ is minimal. In the following, the grid $\mathcal{G}_n$ is always assumed
to satisfy the following condition: the discrete uniform distribution on $\mathcal{G}_n$ converges weakly to $U_d$ as $n\to\infty$. One possibility is to take
\[
\mathcal{G}_n=\{\boldsymbol{g}_{ij},\ i=1,\ldots,n_R,\ j=1,\ldots,n_S\},\quad\text{for }\boldsymbol{g}_{ij}=r_i\mathbf{u}_j, \tag{1}
\]
where $n=n_R n_S$, $r_i=i/(n_R+1)$, and $\mathbf{u}_j$ are unit vectors distributed approximately regularly over $\mathcal{S}_d$, or a union of $\mathcal{G}_n$ from (1) and $n_0$ replications of $\{0\}$; in that case $n=n_0+n_R n_S$. For $d=2$ it is possible to construct a completely regular grid for any given $n_S$. For $d>2$ the unit vectors can be obtained as a suitable transformation of a low-discrepancy sequence in $\mathbb{R}^{d-1}$, such as Halton or Sobol, see Fang and Wang (1994) and our Section 4 for more details, where some alternative grids are considered as well. If $n_0=1$, then $(\mathbf{F}^{(n)}_{\pm})^{-1}(0)$ can be considered as an estimate of the sample location (sample median), see also the discussion in (Hallin et al., 2021, Section 5).

Finally, the center-outward rank $R_i$ and sign $\mathbf{S}_i$ of $\mathbf{X}_i$ are defined as
\[
R_i=(n_R+1)\|\mathbf{F}^{(n)}_{\pm}(\mathbf{X}_i)\|,\qquad
\mathbf{S}_i=\mathbb{1}[\mathbf{F}^{(n)}_{\pm}(\mathbf{X}_i)\neq 0]\,\frac{\mathbf{F}^{(n)}_{\pm}(\mathbf{X}_i)}{\|\mathbf{F}^{(n)}_{\pm}(\mathbf{X}_i)\|},
\]
so $\mathbf{F}^{(n)}_{\pm}(\mathbf{X}_i)=R_i/(n_R+1)\cdot\mathbf{S}_i$.

The center-outward distribution function $\mathbf{F}_{\pm}$ is invariant to a shift and it is equivariant to orthogonal transformations. The empirical version $\mathbf{F}^{(n)}_{\pm}$ is also invariant to a shift and it is equivariant to an orthogonal transformation given by an orthogonal matrix $\mathbf{O}$ if the grid $\mathcal{G}_n$ is replaced by the transformed grid $\mathbf{O}\mathcal{G}_n$, see Proposition 2.2 in Hallin et al. (2022). In that case, the orthogonal transformation preserves the ranks and the angles among the signs. This is not the case for a general scaling transformation.

3 Test of a center of symmetry

Consider a $d$-dimensional random vector $\mathbf{X}$ with a distribution which is centrally symmetric around a point $\boldsymbol{\mu}_S$, i.e., $\mathbb{P}[\mathbf{X}-\boldsymbol{\mu}_S\in A]=\mathbb{P}[\boldsymbol{\mu}_S-\mathbf{X}\in A]$ for any Borel set $A\in\mathcal{B}(\mathbb{R}^d)$, see also Serfling (2006). The hypothesis of interest is $H_0\colon\boldsymbol{\mu}_S=\boldsymbol{\mu}_0$ for some specified $\boldsymbol{\mu}_0\in\mathbb{R}^d$. It can be assumed without loss of generality that $\boldsymbol{\mu}_0=\mathbf{0}$, so
\[
H_0\colon\boldsymbol{\mu}_S=\mathbf{0}\quad\text{against}\quad H_1\colon\boldsymbol{\mu}_S\neq\mathbf{0}.
\]
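Computing $\mathbf{F}^{(n)}_{\pm}$ is an optimal-assignment problem between the sample and the grid. A minimal sketch for $d=2$, assuming a squared-distance cost and using SciPy's Hungarian-method solver as a stand-in for the `solve_LSAP` routine mentioned later in Section 4 (the grid follows (1) with random directions):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def polar_grid(n_R, n_S, rng):
    """Grid (1) for d = 2: n_R spheres of radii i/(n_R+1) times n_S random unit directions."""
    u = rng.standard_normal((n_S, 2))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    radii = np.arange(1, n_R + 1) / (n_R + 1)
    return np.vstack([r * u for r in radii])       # n_R * n_S grid points

def empirical_co_df(x, grid):
    """Assign each X_i a grid point, minimizing the total squared distance."""
    cost = cdist(x, grid) ** 2
    rows, cols = linear_sum_assignment(cost)       # Hungarian method
    f = np.empty_like(x)
    f[rows] = grid[cols]
    return f

rng = np.random.default_rng(2)
n_R, n_S = 6, 25
x = rng.standard_normal((n_R * n_S, 2))
f = empirical_co_df(x, polar_grid(n_R, n_S, rng))
ranks = (n_R + 1) * np.linalg.norm(f, axis=1)          # center-outward ranks
signs = f / np.linalg.norm(f, axis=1, keepdims=True)   # center-outward signs
```

With this grid each rank takes one of the $n_R$ values $1,\ldots,n_R$, and each value occurs exactly $n_S$ times, which is the source of the distribution-freeness of the resulting tests.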
The next proposition shows that under $H_0$ the center-outward distribution function is also centrally symmetric around zero.

Proposition 1 Under $H_0$ it holds that $\mathbf{F}_{\pm}(-\mathbf{x})=-\mathbf{F}_{\pm}(\mathbf{x})$ for all $\mathbf{x}\in\mathbb{R}^d$.

Proof Recall the construction of $\mathbf{F}_{\pm}$ from Hallin et al. (2021): $\mathbf{F}_{\pm}$ is $\nabla\varphi(\mathbf{x})$ for $\varphi(\mathbf{x})=\sup_{\mathbf{u}\in\mathcal{S}_d}(\mathbf{u}^\top\mathbf{x}-\psi(\mathbf{u}))$, where $\psi(\mathbf{u})$ is the unique convex function such that $\nabla\psi$ pushes the distribution $U_d$ forward to $P_{\mathbf{X}}$ and $\psi(\mathbf{0})=0$. It follows from the symmetry of both distributions that $-\nabla\psi(-\mathbf{u})$ also pushes $U_d$ forward to $P_{\mathbf{X}}$. The uniqueness of $\psi$ gives $\psi(-\mathbf{u})=\psi(\mathbf{u})$. From this we conclude that $\varphi(-\mathbf{x})=\sup_{\mathbf{u}\in\mathcal{S}_d}(-\mathbf{u}^\top(-\mathbf{x})-\psi(-\mathbf{u}))=\varphi(\mathbf{x})$ and consequently $\mathbf{F}_{\pm}(\mathbf{x})=\nabla\varphi(\mathbf{x})=-\nabla\varphi(-\mathbf{x})=-\mathbf{F}_{\pm}(-\mathbf{x})$, which completes the proof. $\square$

In the following, consider a random sample $\mathcal{X}_n=(\mathbf{X}_1,\ldots,\mathbf{X}_n)$ from a distribution which is centrally symmetric around $\boldsymbol{\mu}_S$ and which has c-o df $\mathbf{F}_{\pm}$. Two approaches for testing $H_0$ are proposed in the next two sections.

3.1 Test with random signs

Assume that $H_0$ holds. Then the distributions of $\mathbf{X}$ and $-\mathbf{X}$ coincide, and, moreover, if a random variable $Y$ is independent of $\mathbf{X}$ and distributed uniformly on $\{-1,1\}$, that is, $\mathbb{P}[Y=1]=\mathbb{P}[Y=-1]=1/2$, denoted as $\mathcal{U}\{-1,1\}$, then the random variables $\mathbf{X}$ and $\mathbf{X}^\circ=Y\mathbf{X}$ follow the same distribution.

Consider a sequence $Y_1,\ldots,Y_n$ of iid $\mathcal{U}\{-1,1\}$ random variables independent of the random sample $\mathcal{X}_n$. Then the distribution of $\mathcal{X}_n$ is the same as the distribution of $\mathcal{X}^\circ_n=(\mathbf{X}^\circ_1,\ldots,\mathbf{X}^\circ_n)$, for $\mathbf{X}^\circ_i=Y_i\mathbf{X}_i$, and the center-outward distribution functions of $\mathcal{X}_n$ and $\mathcal{X}^\circ_n$ are equal. In particular, denoting $\mathcal{I}_+=\{i;\ Y_i=1\}$, $n_+=|\mathcal{I}_+|$, and $\mathcal{X}_+=\{\mathbf{X}_i;\ i\in\mathcal{I}_+\}$, $\mathcal{X}_-=\{\mathbf{X}_i;\ i\notin\mathcal{I}_+\}$, it follows that $\mathcal{X}_+$ and $\mathcal{X}_-$ are two independent samples from the same distribution. Note that the sample size $n_+$ is random and follows the binomial distribution with parameters $(n,1/2)$, which
is the price we pay for the independence of the two samples $\mathcal{X}_+$ and $\mathcal{X}_-$.

The main idea of the test with random signs is to test the hypothesis $\mathrm{H}_0\colon\mathbf{X}\stackrel{d}{=}-\mathbf{X}$ using the two-sample location test of Hallin et al. (2022) on the independent samples $\mathcal{X}_+$ and $\mathcal{X}_-$. The following proposition follows from (Hallin et al., 2022, Proposition 3.2).

Proposition 2 Let $\mathbf{F}^{(n)}_{\pm}$ be the sample c-o distribution function computed for $\mathcal{X}^\circ_n$ and a grid $\mathcal{G}_n$, and let $R_i$ and $\mathbf{S}_i$ be the corresponding rank and sign of $\mathbf{X}^\circ_i$. Let $J\colon[0,1)\to\mathbb{R}$ be a score function continuous on $(0,1)$ with a bounded variation and $\int_0^1 J^2(u)\,\mathrm{d}u<\infty$, and define $\boldsymbol{J}(\mathbf{u})=\mathbb{1}[\mathbf{u}\neq 0]\,J(\|\mathbf{u}\|)\cdot\mathbf{u}/\|\mathbf{u}\|$ for $\mathbf{u}\in\mathbb{B}^d$. Denote as
\[
\boldsymbol{K}_J(\mathcal{G}_n)=\sum_{\boldsymbol{g}\in\mathcal{G}_n}\boldsymbol{J}(\boldsymbol{g}).
\]
Then
\[
\boldsymbol{T}=\sqrt{\frac{n}{n_+(n-n_+)}}\left[\sum_{i\in\mathcal{I}_+}J\Bigl(\frac{R_i}{n_R+1}\Bigr)\mathbf{S}_i-\frac{n_+}{n}\boldsymbol{K}_J(\mathcal{G}_n)\right]
\]
converges in distribution for $n\to\infty$ to a centred normal distribution with variance matrix $(1/d)\cdot\int_0^1 J^2(u)\,\mathrm{d}u\cdot\boldsymbol{I}_d$ with probability one, where $\boldsymbol{I}_d$ is the identity matrix.

Remark 2 The convergence in distribution holds with probability one since $n_+$ is a random variable. According to Hallin et al. (2022), $n_+/n$ must converge to a constant $c\in(0,1)$, as $n\to\infty$, to ensure asymptotic normality. Since the a.s. convergence of $n_+/n$ to $1/2$ follows from the strong law of large numbers and the construction of $\mathcal{X}^\circ_n$, we get the asymptotic normality with probability one along the sequence of realizations of $\{Y_i\}$.

The hypothesis $H_0$ is rejected if
\[
Q=\frac{d}{\int_0^1 J^2(u)\,\mathrm{d}u}\,\|\boldsymbol{T}\|^2
\]
exceeds the $1-\alpha$ quantile of the $\chi^2_d$ distribution.

Note that if the null hypothesis $H_0$ is violated, then the distributions of $\mathbf{X}$ and $\mathbf{X}^\circ$ differ, i.e., $\mathcal{X}_+$ and $\mathcal{X}_-$ are not sampled from the same distribution. Under $H_1$, the distribution of $\mathbf{X}$ is not symmetric with respect to the origin, while the distribution of $\mathbf{X}^\circ$ is symmetric around $\mathbf{0}$ regardless of the validity of $H_0$. Monte Carlo simulations presented in Section 4 illustrate that in such a case the test reveals the difference in the locations of the two samples $\mathcal{X}_+$ and $\mathcal{X}_-$.
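The randomization step of the procedure is straightforward to set up; a hypothetical sketch of preparing the two samples (the two-sample c-o test of Hallin et al. (2022) would then be applied to $\mathcal{X}_+$ and $\mathcal{X}_-$, which is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 300, 2
x = rng.standard_normal((n, d))   # sample to be tested for H0: mu_S = 0

y = rng.choice([-1, 1], size=n)   # iid uniform random signs Y_i
x_circ = y[:, None] * x           # X_i deg = Y_i X_i, symmetric around 0 by construction
x_plus = x[y == 1]                # sample X+ (indices with Y_i = 1)
x_minus = x[y == -1]              # sample X-
n_plus = len(x_plus)              # Binomial(n, 1/2) under any distribution of X
```

Under $H_0$ the two sub-samples share one distribution; under $H_1$ they differ in location, which is what the two-sample test then detects.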
Remark 3 The asymptotic distribution of the test statistic $Q$ under a sequence of local alternatives can be derived from the two-sample situation studied in (Hallin et al., 2022, Proposition 5.1 and Section 5.3.1). We present the result for the special case of spherically symmetric distributions. Let the distribution of $\mathbf{X}$ be spherically symmetric with a radial density $g$, i.e., the density $f$ of $\mathbf{X}$ is proportional to $g(\sqrt{\mathbf{x}^\top\mathbf{x}})$. Denote by $f_r$ and $F_r$ the density and cumulative distribution function of $\|\mathbf{X}\|$, respectively. Assume that $g$ is mean square differentiable and $\varphi_g=-(g^{1/2})'/g^{1/2}$ is such that $\int_0^1\varphi_g(F_r^{-1}(u))\,\mathrm{d}u<\infty$. Let $\boldsymbol{h}\in\mathbb{R}^d$ be some non-zero vector. Under the sequence of local alternatives $H_1\colon\boldsymbol{\theta}=\boldsymbol{h}/\sqrt{n}$, the test statistic $Q$ has asymptotically for $n\to\infty$ a non-central $\chi^2_d$ distribution with a non-centrality parameter
\[
q=\frac{\Bigl[\int_0^1 J(u)\,\varphi_g\bigl(F_r^{-1}(u)\bigr)\,\mathrm{d}u\Bigr]^2}{d\int_0^1 J^2(u)\,\mathrm{d}u}\,\boldsymbol{h}^\top\boldsymbol{h}.
\]
It easily follows from the latter formula that the test based on the score function $J_0=\varphi_g\circ F_r^{-1}$ is locally asymptotically optimal, cf. (Hallin et al., 2022, Corollary 5.2(ii)). Moreover, the asymptotic relative efficiency (ARE) of the test with respect to Hotelling's $T^2$ test is given as
\[
\mathrm{ARE}=\frac{\Bigl[\int_0^1 J(u)\,\varphi_g\bigl(F_r^{-1}(u)\bigr)\,\mathrm{d}u\Bigr]^2\cdot\mathbb{E}\|\mathbf{X}\|^2}{d^2\int_0^1 J^2(u)\,\mathrm{d}u}.
\]
If $J=J_0$ then the ARE simplifies to $\int_0^1 J_0^2(u)\,\mathrm{d}u\cdot\mathbb{E}\|\mathbf{X}\|^2/d^2$. If $\mathbf{X}$ follows the normal distribution $\mathcal{N}(\boldsymbol{\theta},\boldsymbol{I}_d)$ then the optimal score function corresponds to the van der Waerden scores $J_0(u)=\sqrt{G_d^{-1}(u)}$, where $G_d$ is the cdf of the $\chi^2_d$ distribution, and in that case the ARE equals 1. If $J(u)=u$ (Wilcoxon scores) then straightforward calculations give that
\[
\mathrm{ARE}=\frac{3}{d}\Bigl[\int_0^\infty\sqrt{y}\,G_d(y)\,\mathrm{d}G_d(y)\Bigr]^2.
\]
This for $d=1$ gives $\mathrm{ARE}=3/\pi$, which
is the well-known ARE of the one-sample Wilcoxon test with respect to the t-test. For $d=2$ we get $\mathrm{ARE}\approx 0.985$, and the values decrease with increasing dimension $d$; for instance, for $d=10$, $\mathrm{ARE}=0.907$.

3.2 Symmetrized sample

Consider now the set of size $2n$ of observations $\mathcal{X}_a=\{\mathbf{X}_1,\ldots,\mathbf{X}_n,-\mathbf{X}_1,\ldots,-\mathbf{X}_n\}$. Recall that under the null hypothesis $-\mathbf{X}_1$ has the same distribution as $\mathbf{X}_1$, so all the random vectors from $\mathcal{X}_a$ have the same distribution under $H_0$. Let $\mathbf{F}^{(n)}_{\pm,a}$ be the sample center-outward distribution function computed from $\mathcal{X}_a$, which takes values in a grid $\mathcal{G}_{2n}$ of $2n$ points satisfying
\[
\boldsymbol{g}\in\mathcal{G}_{2n}\ \Rightarrow\ -\boldsymbol{g}\in\mathcal{G}_{2n}. \tag{2}
\]
Hence, $\mathbf{F}^{(n)}_{\pm,a}(\mathbf{X}_i)$ stands for the value corresponding to $\mathbf{X}_i$, and $R_{\pm,a}(\mathbf{X}_i)$ and $\mathbf{S}_{\pm,a}(\mathbf{X}_i)$ are its sample c-o rank and sign, respectively. This slightly different notation is used in order to distinguish these quantities from those introduced in Section 2.

Proposition 3 Under $H_0$, $\mathbf{F}^{(n)}_{\pm,a}(\mathbf{X}_i)=-\mathbf{F}^{(n)}_{\pm,a}(-\mathbf{X}_i)$.

Proof Recall that $\mathbf{F}^{(n)}_{\pm,a}$ is a mapping $g\colon\mathcal{X}_a\to\mathcal{G}_{2n}$ which minimizes
\[
\sum_{i=1}^n\|\mathbf{X}_i-g(\mathbf{X}_i)\|^2+\sum_{i=1}^n\|-\mathbf{X}_i-g(-\mathbf{X}_i)\|^2,
\]
or, equivalently, maximizes
\[
\sum_{i=1}^n\mathbf{X}_i^\top\bigl[g(\mathbf{X}_i)-g(-\mathbf{X}_i)\bigr]. \tag{3}
\]
To prove the claim, let us first consider a norm ordering of the sample $\mathbf{X}_1,\ldots,\mathbf{X}_n$ such that $\|\mathbf{X}_{1:n}\|\ge\|\mathbf{X}_{2:n}\|\ge\cdots\ge\|\mathbf{X}_{n:n}\|$, and note that the inequalities are a.s. strict since the underlying distribution of $\mathbf{X}$ is absolutely continuous. Recall that for two ordered sets of real numbers $a_1<a_2<\cdots<a_n$ and $b_1<b_2<\cdots<b_n$ it holds that $\sum a_i b_{\pi(i)}\le\sum a_i b_i$, and the inequality is strict if $\pi(i)\neq i$ for some $i$. A similar inequality holds for the sum of inner products (3). The sum is maximized if we match the "large" values of $\|\mathbf{X}\|$ with $\boldsymbol{g}\in\mathcal{G}_{2n}$ with as similar a direction as possible. In other words, find $\boldsymbol{g}\in\mathcal{G}_{2n}$ such that the inner product $\langle\mathbf{X}_{1:n},\boldsymbol{g}\rangle$ is maximal. Such a $\boldsymbol{g}$ is unique a.s. (see the remark below). Set $g(\mathbf{X}_{1:n})=\boldsymbol{g}=-g(-\mathbf{X}_{1:n})$. Clearly $\langle\mathbf{X}_{1:n},2\boldsymbol{g}\rangle>\langle\mathbf{X}_{1:n},\boldsymbol{g}-\boldsymbol{h}\rangle$ for any $\boldsymbol{h}\neq-\boldsymbol{g}$. Remove $\mathbf{X}_{1:n}$ from the sample and $\boldsymbol{g}$ from the grid and continue with $\mathbf{X}_{2:n}$ up to $\mathbf{X}_{n:n}$.
Hence, with probability 1 we obtain $\mathbf{F}^{(n)}_{\pm,a}(\mathbf{X}_i)=-\mathbf{F}^{(n)}_{\pm,a}(-\mathbf{X}_i)$. $\square$

Remark 4 If there are two grid points $\boldsymbol{g}$ and $\boldsymbol{g}'$ such that $\langle\mathbf{X}_{\ell:n},\boldsymbol{g}\rangle=\langle\mathbf{X}_{\ell:n},\boldsymbol{g}'\rangle$, then the choice between $\boldsymbol{g}$ and $\boldsymbol{g}'$ depends on the sample points $\mathbf{X}_{i:n}$, $i=\ell+1,\ldots,n$, with probability 1. Hence, there is a sample point $\mathbf{X}_{i:n}$ associated, in the sense of the proof above, with (without loss of generality) $\boldsymbol{g}'$ such that $\langle\mathbf{X}_{i:n},\boldsymbol{g}'\rangle>\langle\mathbf{X}_{i:n},\boldsymbol{g}\rangle$. Since
\[
\langle\mathbf{X}_{\ell:n},2\boldsymbol{g}\rangle+\langle\mathbf{X}_{i:n},2\boldsymbol{g}'\rangle>\langle\mathbf{X}_{\ell:n},\boldsymbol{g}+\boldsymbol{g}'\rangle+\langle\mathbf{X}_{i:n},\boldsymbol{g}'+\boldsymbol{g}\rangle,
\]
it becomes clear that the maximum of (3) is attained if $\mathbf{F}^{(n)}_{\pm,a}(\mathbf{X}_i)=-\mathbf{F}^{(n)}_{\pm,a}(-\mathbf{X}_i)$.

Proposition 4 Under $H_0$, $\mathbf{F}^{(n)}_{\pm,a}(\mathbf{X}_1)\xrightarrow{P}\mathbf{F}_{\pm}(\mathbf{X}_1)$ as $n\to\infty$.

Proof The proof goes along the lines of the proof of Proposition 3.3 in (Hallin et al., 2021, Appendix F.3) if we prove the weak convergence of the empirical distribution $\widehat{P}_{2n,a}$ of $\mathcal{X}_a$ to the distribution $P_{\mathbf{X}}$. The latter convergence follows by the Portmanteau theorem, as for any closed set $F$ it holds that
\[
\limsup_{n\to\infty}\widehat{P}_{2n,a}(F)=\limsup_{n\to\infty}\Bigl(\frac{1}{2n}\sum_{i=1}^n\mathbb{1}[\mathbf{X}_i\in F]+\frac{1}{2n}\sum_{i=1}^n\mathbb{1}[-\mathbf{X}_i\in F]\Bigr)\le\frac{1}{2}P_{\mathbf{X}}(F)+\frac{1}{2}P_{-\mathbf{X}}(F),
\]
since the empirical distributions of $(\mathbf{X}_1,\ldots,\mathbf{X}_n)$ and $(-\mathbf{X}_1,\ldots,-\mathbf{X}_n)$ converge weakly to $P_{\mathbf{X}}$ and $P_{-\mathbf{X}}$, respectively. As $P_{\mathbf{X}}(F)=P_{-\mathbf{X}}(F)$ under the null hypothesis, we get for any closed set $F$ the inequality $\limsup_{n\to\infty}\widehat{P}_{2n,a}(F)\le P_{\mathbf{X}}(F)$, which implies $\widehat{P}_{2n,a}\xrightarrow{w}P_{\mathbf{X}}$. $\square$

Recall that the grid $\mathcal{G}_{2n}$ is assumed to satisfy the condition (2), which implies that $\sum_{\boldsymbol{g}\in\mathcal{G}_{2n}}\boldsymbol{g}=\mathbf{0}$. Consider two test statistics:
\[
\boldsymbol{T}_{S,a}=\frac{1}{\sqrt{n}}\sum_{i=1}^n\mathbf{S}_{\pm,a}(\mathbf{X}_i),\qquad
\boldsymbol{T}_{F,a}=\frac{1}{\sqrt{n}}\sum_{i=1}^n\mathbf{F}^{(n)}_{\pm,a}(\mathbf{X}_i)=\frac{1}{\sqrt{n}}\sum_{i=1}^n\frac{R_{\pm,a}(\mathbf{X}_i)}{n_R+1}\,\mathbf{S}_{\pm,a}(\mathbf{X}_i).
\]
The test statistic $\boldsymbol{T}_{S,a}$ is based on signs only, while $\boldsymbol{T}_{F,a}$ is a Wilcoxon-type statistic.

Proposition 5 Under $H_0$,
\[
d\|\boldsymbol{T}_{S,a}\|^2=\frac{d}{n}\Bigl\|\sum_{i=1}^n\mathbf{S}_{\pm,a}(\mathbf{X}_i)\Bigr\|^2\xrightarrow{D}\chi^2_d
\quad\text{and}\quad
3d\|\boldsymbol{T}_{F,a}\|^2=\frac{3d}{n}\Bigl\|\sum_{i=1}^n\mathbf{F}^{(n)}_{\pm,a}(\mathbf{X}_i)\Bigr\|^2\xrightarrow{D}\chi^2_d
\]
as $n\to\infty$.

Proof Both statistics are special cases of
\[
\boldsymbol{T}_a=\frac{1}{\sqrt{n}}\sum_{i=1}^n\boldsymbol{J}(\mathbf{F}^{(n)}_{\pm,a}(\mathbf{X}_i))=\frac{1}{\sqrt{n}}\sum_{i=1}^n J(\|\mathbf{F}^{(n)}_{\pm,a}(\mathbf{X}_i)\|)\,\frac{\mathbf{F}^{(n)}_{\pm,a}(\mathbf{X}_i)}{\|\mathbf{F}^{(n)}_{\pm,a}(\mathbf{X}_i)\|}, \tag{4}
\]
where the score function $J$ is chosen as $J(r)=1$ for $\boldsymbol{T}_{S,a}$ and $J(r)=r$ for $\boldsymbol{T}_{F,a}$. In both cases $\sum_{\boldsymbol{g}\in\mathcal{G}_{2n}}\boldsymbol{J}(\boldsymbol{g})=\sum_{\boldsymbol{g}\in\mathcal{G}_{2n}}J(\|\boldsymbol{g}\|)\,\boldsymbol{g}/\|\boldsymbol{g}\|=\mathbf{0}$. In the following, we restrict $J$ to the two functions above. It follows from the property of the grid $\mathcal{G}_{2n}$ that
\[
\frac{1}{\sqrt{2n}}\Bigl(\sum_{i=1}^n\boldsymbol{J}(\mathbf{F}^{(n)}_{\pm,a}(\mathbf{X}_i))-\sum_{i=1}^n\boldsymbol{J}(\mathbf{F}^{(n)}_{\pm,a}(-\mathbf{X}_i))\Bigr)=\frac{2}{\sqrt{2n}}\sum_{i=1}^n\boldsymbol{J}(\mathbf{F}^{(n)}_{\pm,a}(\mathbf{X}_i))-\mathbf{0}=\sqrt{2}\,\boldsymbol{T}_a.
\]
Define
\[
\boldsymbol{T}^e_a=\frac{1}{\sqrt{2n}}\Bigl(\sum_{i=1}^n\boldsymbol{J}(\mathbf{F}_{\pm}(\mathbf{X}_i))-\sum_{i=1}^n\boldsymbol{J}(\mathbf{F}_{\pm}(-\mathbf{X}_i))\Bigr).
\]
Then, using Proposition 1 and since $\boldsymbol{J}(\mathbf{x})=-\boldsymbol{J}(-\mathbf{x})$,
\[
\boldsymbol{T}^e_a=\frac{\sqrt{2}}{\sqrt{n}}\sum_{i=1}^n\boldsymbol{J}(\mathbf{F}_{\pm}(\mathbf{X}_i))
\quad\text{and}\quad
\sqrt{2}\,\boldsymbol{T}_a-\boldsymbol{T}^e_a=\frac{1}{\sqrt{2n}}\sum_{i=1}^n(\mathbf{a}_i-\mathbf{a}^-_i),
\]
where $\mathbf{a}_i=\boldsymbol{J}(\mathbf{F}^{(n)}_{\pm,a}(\mathbf{X}_i))-\boldsymbol{J}(\mathbf{F}_{\pm}(\mathbf{X}_i))$ and $\mathbf{a}^-_i=\boldsymbol{J}(\mathbf{F}^{(n)}_{\pm,a}(-\mathbf{X}_i))-\boldsymbol{J}(\mathbf{F}_{\pm}(-\mathbf{X}_i))=-\mathbf{a}_i$ for $i=1,\ldots,n$. Then
\[
\mathbb{E}\|\sqrt{2}\,\boldsymbol{T}_a-\boldsymbol{T}^e_a\|^2=\frac{1}{2n}\Bigl(4\sum_{i=1}^n\mathbb{E}\|\mathbf{a}_i\|^2+\mathop{\sum\sum}_{j\neq i}\mathbb{E}(\mathbf{a}_i-\mathbf{a}^-_i)^\top(\mathbf{a}_j-\mathbf{a}^-_j)\Bigr).
\]
Because $\mathbf{X}_1,\ldots,\mathbf{X}_n$ have the same distribution, $\mathbb{E}\|\mathbf{a}_i\|^2=\mathbb{E}\|\mathbf{a}_1\|^2$ for all $i=1,\ldots,n$. Furthermore,
\[
\mathbb{E}(\mathbf{a}_i-\mathbf{a}^-_i)^\top(\mathbf{a}_j-\mathbf{a}^-_j)=\mathbb{E}\mathbf{a}_i^\top\mathbf{a}_j-\mathbb{E}\mathbf{a}^{-\top}_i\mathbf{a}_j-\mathbb{E}\mathbf{a}_i^\top\mathbf{a}^-_j+\mathbb{E}\mathbf{a}^{-\top}_i\mathbf{a}^-_j=0.
\]
The last equality follows from the fact that $(\mathbf{X}_i,\mathbf{X}_j)\stackrel{D}{=}(-\mathbf{X}_i,\mathbf{X}_j)$, hence $(\mathbf{a}_i,\mathbf{a}_j)\stackrel{D}{=}(\mathbf{a}^-_i,\mathbf{a}_j)$ and thus all the expectations are the same. Hence, it suffices to show that $\mathbb{E}\|\mathbf{a}_1\|^2=\mathbb{E}\|\boldsymbol{J}(\mathbf{F}^{(n)}_{\pm,a}(\mathbf{X}_1))-\boldsymbol{J}(\mathbf{F}_{\pm}(\mathbf{X}_1))\|^2\to 0$. Recall the convergence $\mathbf{F}^{(n)}_{\pm,a}(\mathbf{X}_1)-\mathbf{F}_{\pm}(\mathbf{X}_1)\xrightarrow{P}\mathbf{0}$ proved in Proposition 4. Then it follows by the continuity of $J$ that $\boldsymbol{J}(\mathbf{F}^{(n)}_{\pm,a}(\mathbf{X}_1))-\boldsymbol{J}(\mathbf{F}_{\pm}(\mathbf{X}_1))\xrightarrow{P}\mathbf{0}$ and $\mathbb{E}\|\boldsymbol{J}(\mathbf{F}^{(n)}_{\pm,a}(\mathbf{X}_1))\|^2\to\mathbb{E}\|\boldsymbol{J}(\mathbf{F}_{\pm}(\mathbf{X}_1))\|^2$. This shows that $\sqrt{2}\,\boldsymbol{T}_a-\boldsymbol{T}^e_a=o_{q.m.}(1)$, and thus the asymptotic distribution of $\boldsymbol{T}_a$ is the same as the asymptotic distribution of $\boldsymbol{T}^e_a/\sqrt{2}$.

Recall that $\mathbf{F}_{\pm}(\mathbf{X}_i)$ has the same distribution as $\mathbf{V}=U\cdot\mathbf{W}$, where $U$ is uniformly distributed on $[0,1]$, $\mathbf{W}$ is uniformly distributed on the unit sphere in $\mathbb{R}^d$, and $U$ and $\mathbf{W}$ are independent. Then $\mathbb{E}\mathbf{V}=\mathbf{0}$ and
\[
\operatorname{var}\mathbf{V}=\mathbb{E}U^2\,\mathbb{E}\mathbf{W}\mathbf{W}^\top=\frac{1}{3}\operatorname{var}\mathbf{W}=\frac{1}{3d}\boldsymbol{I}_d.
\]
The asymptotic distribution of $\boldsymbol{T}^e_a$ follows from the central limit theorem.
If $J(r)=r$, then
\[
\frac{\boldsymbol{T}^e_a}{\sqrt{2}}=\frac{1}{\sqrt{n}}\sum_{i=1}^n\mathbf{F}_{\pm}(\mathbf{X}_i)\xrightarrow{D}\mathcal{N}_d\Bigl(\mathbf{0},\frac{1}{3d}\boldsymbol{I}_d\Bigr)
\]
and thus $3d\|\boldsymbol{T}_{F,a}\|^2\xrightarrow{D}\chi^2_d$. If $J(r)=1$, then
\[
\frac{\boldsymbol{T}^e_a}{\sqrt{2}}=\frac{1}{\sqrt{n}}\sum_{i=1}^n\frac{\mathbf{F}_{\pm}(\mathbf{X}_i)}{\|\mathbf{F}_{\pm}(\mathbf{X}_i)\|}\xrightarrow{D}\mathcal{N}_d\Bigl(\mathbf{0},\frac{1}{d}\boldsymbol{I}_d\Bigr)
\]
and thus $d\|\boldsymbol{T}_{S,a}\|^2\xrightarrow{D}\chi^2_d$. $\square$

Remark 5 The proof reveals that one could consider the general test statistic $\boldsymbol{T}_a$ defined in (4) via a general score function $J$ satisfying the assumptions of Proposition 2 and such that $J(0)=0$. The discussion of the optimal choice of $J$ and the AREs of the test would be analogous to the one provided in Remark 3. We rather show in the next Section 3.3 that the tests based on the two test statistics from Proposition 5 are, for $d=1$, asymptotically equivalent to the sign test and the one-sample Wilcoxon test, respectively.

3.3 One-dimensional analogue of the symmetrized test

Let $X_1,\ldots,X_n$ be a random sample from a univariate continuous distribution, and consider the two test statistics proposed in Section 3.2.

Proposition 6 Let $R_1,\ldots,R_n$ be the univariate ranks of $|X_1|,\ldots,|X_n|$. Then
\[
R_{\pm,a}(X_i)=R_i=R_{\pm,a}(-X_i),\qquad
S_{\pm,a}(X_i)=\mathbb{1}[X_i>0]-\mathbb{1}[X_i<0]=-S_{\pm,a}(-X_i)
\]
and
\[
F^{(n)}_{\pm,a}(X_i)=\frac{R_i}{n+1}\bigl(\mathbb{1}[X_i>0]-\mathbb{1}[X_i<0]\bigr).
\]
Proof It is shown in Appendix B of Hallin et al. (2021) that if $Z_1,\ldots,Z_N$ is a random sample from a continuous univariate distribution and $R^Z_1,\ldots,R^Z_N$ are the classical univariate ranks of $Z_1,\ldots,Z_N$, then for even $N$
\[
R_{\pm}(Z_i)=\Bigl|R^Z_i-\frac{N+1}{2}\Bigr|+\frac{1}{2},\qquad
S_{\pm}(Z_i)=\mathbb{1}\Bigl[R^Z_i>\frac{N+1}{2}\Bigr]-\mathbb{1}\Bigl[R^Z_i<\frac{N+1}{2}\Bigr], \tag{5}
\]
and $F^{(n)}_{\pm}(Z_i)=R_{\pm}(Z_i)S_{\pm}(Z_i)(N/2+1)^{-1}$.

Let $R_a(X_i)$ stand for the (classical univariate) rank of $X_i$ in the augmented sample $\mathcal{X}_a=(X_1,\ldots,X_n,-X_1,\ldots,-X_n)$. It is easy to see that $R_a(X_i)+R_a(-X_i)=2n+1$, with
\[
R_a(X_i)=\begin{cases} n+R_i & X_i>0,\\ n-R_i+1 & X_i<0.\end{cases}
\]
Moreover, $R_a(X_i)>n+1/2$ if and only if $X_i>0$. Then it follows from (5) that for $X_i>0$ we have
\[
R_{\pm,a}(X_i)=\Bigl|R_a(X_i)-\frac{2n+1}{2}\Bigr|+\frac{1}{2}=\Bigl|n+R_i-\frac{2n+1}{2}\Bigr|+\frac{1}{2}=R_i=R_{\pm,a}(-X_i),
\]
and the same equality is shown analogously for $X_i<0$. Further, $S_{\pm,a}(X_i)=\mathbb{1}[X_i>0]-\mathbb{1}[X_i<0]=-S_{\pm,a}(-X_i)$, which finishes the proof. $\square$

Since the distribution of $X_i$ is continuous, we can assume without loss of generality that $X_i\neq 0$ for all $i=1,\ldots,n$. It follows from Proposition 6 that
\[
\sqrt{n}\,T_{S,a}=\sum_{i=1}^n\bigl(\mathbb{1}[X_i>0]-\mathbb{1}[X_i<0]\bigr)=2\sum_{i=1}^n\mathbb{1}[X_i>0]-n=2W_S-n,
\]
where $W_S=\sum_{i=1}^n\mathbb{1}[X_i>0]$ is the test statistic of the univariate sign test. Hence, for $d=1$ the test based on $T_{S,a}$ is exactly the univariate sign test. Note that it follows from the well-known properties of $W_S$ that $T_{S,a}\xrightarrow{D}\mathcal{N}(0,1)$ and $T^2_{S,a}\xrightarrow{D}\chi^2_1$, which is exactly what is claimed in Proposition 5 for $d=1$.

Similarly, for the Wilcoxon test statistic $T_{F,a}$, it follows that
\[
\sqrt{n}\,T_{F,a}=\frac{1}{n+1}\Bigl(\sum_{i:X_i>0}R_i-\sum_{i:X_i<0}R_i\Bigr)=\frac{1}{n+1}\Bigl(2\sum_{i:X_i>0}R_i-\frac{n(n+1)}{2}\Bigr)=\frac{2W_W-\frac{n(n+1)}{2}}{n+1},
\]
where $W_W=\sum_{i:X_i>0}R_i$ is the test statistic of the one-sample Wilcoxon test. It is well known that
\[
\frac{2W_W-\frac{n(n+1)}{2}}{\sqrt{n^3/3}}=\frac{\sqrt{3}}{\sqrt{n}}\cdot\frac{2W_W-\frac{n(n+1)}{2}}{n}\xrightarrow{D}\mathcal{N}(0,1).
\]
Hence, for $d=1$ the test based on $T_{F,a}$ is equivalent to the one-sample Wilcoxon test, $\sqrt{3}\,T_{F,a}\xrightarrow{D}\mathcal{N}(0,1)$ and $3\|T_{F,a}\|^2\xrightarrow{D}\chi^2_1$. This again coincides with the asymptotics claimed in Proposition 5 for $d=1$.

4 Simulations

The performance of the two proposed approaches is explored in a Monte Carlo study conducted in R, R Core Team (2022). The power of each test is studied with respect to various distributional alternatives, and also with respect to various constructions of the grid for the empirical c-o distribution function. The following $d$-dimensional distributions were considered under the null hypothesis:

(a) the multivariate centered normal distribution with identity variance matrix,
(b) the multivariate $t$ distribution with $df=1$ degree of freedom and identity scale matrix,
(c) the mixture of $\mathcal{L}(\mathbf{X})$ and $\mathcal{L}(-\mathbf{X})$ with equal weights, where $\mathbf{X}$ is a $d$-dimensional vector with independent components with marginal exponential distribution with mean 1.
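The three null distributions are easy to simulate; a sketch of the generators (the sample size and dimension are arbitrary; note that (b) can be drawn as a normal vector divided by an independent $\sqrt{\chi^2_1}$ variable):

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 300, 4

# (a) standard multivariate normal
x_a = rng.standard_normal((n, d))

# (b) multivariate t with 1 degree of freedom (Cauchy): normal / sqrt(chi2_1 / 1)
x_b = rng.standard_normal((n, d)) / np.sqrt(rng.chisquare(df=1, size=(n, 1)))

# (c) "double exponential": mixture of L(X) and L(-X), X with iid Exp(1) components
sign = rng.choice([-1.0, 1.0], size=(n, 1))   # mixture label, weight 1/2 each
x_c = sign * rng.exponential(scale=1.0, size=(n, d))
```

All three are centrally symmetric around the origin, so they satisfy $H_0$; shifted alternatives are obtained by adding $\delta\cdot\boldsymbol{s}$ to each row.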
The multivariate Cauchy $t_1$ distribution in (b) is used for its heavy tails, while the distribution (c), in the following referred to as "double exponential", represents a non-elliptical distribution. Notice that multivariate $t$-distributions with $df>1$ were also considered; the corresponding results lie somewhere in between the results obtained for (a) and (b), so they are not presented in detail.

Under the alternative, the above distributions were shifted by the vector $\delta\cdot\boldsymbol{s}$, where $\delta\in\{0.0,0.05,0.10,0.15,0.20,0.25,0.30\}$ and $\boldsymbol{s}$ is a directional vector considered as either $\boldsymbol{s}=(1,1,\ldots,1)^\top$ or $\boldsymbol{s}=(1,0,\ldots,0)^\top$. Notice that both distributions (a) and (b) use the identity scale matrix, which makes it easier to compare the impact of shifts in the two different directions. The dimension was set as $d\in\{2,4,6\}$. The sample size was chosen as $n\in\{150,300\}$ and the grid was constructed with $n_R=6$, so $\|\mathbf{F}^{(n)}_{\pm}(\mathbf{X}_i)\|$ can take $n_R$ different values. The choice $n_R=15$ was considered as well, but the conclusions regarding the power comparison are analogous to those for $n_R=6$, so these results are not presented for the sake of brevity.

For the one-sample c-o test with random signs from Section 3.1, the grid $\mathcal{G}_n$ was constructed as one of the following:

(R1) a regular random grid with $n_R$ spheres and $n_S=n/n_R$ directions which are generated randomly from the uniform distribution on $\mathcal{S}^{d-1}$,
(R2) a random grid with $n_R$ spheres and $n_S=n/n_R$ directions which are drawn randomly from $\mathcal{S}^{d-1}$ for each sphere separately (so the resulting grid does not satisfy the factorization $\boldsymbol{g}=r_i\boldsymbol{u}_j$ mentioned in Section 2),
(H) a grid with $n_R$ spheres and $n_S$ directions which are constructed from a Halton sequence of $n_S$ points in $\mathbb{R}^{d-1}$.

For a given $n_S$ and dimension $d$, the Halton sequence of points in $\mathbb{R}^{d-1}$ was generated using the function halton from the package randtoolbox, see Dutang and Savicky (2022). Subsequently, these points were transformed to $\mathcal{S}^{d-1}$ using polar coordinates and the corresponding suitable transformation described in (Fang and Wang, 1994, Section 1.5.3). The empirical c-o distribution function $\mathbf{F}^{(n)}_{\pm}$ was computed using the so-called Hungarian method with the help of the function solve_LSAP from the package clue, see Hornik (2005).

The symmetrized one-sample c-o test from Section 3.2 requires a symmetric grid of $2n$ points satisfying (2). It was constructed as

(R2*) the union of $\mathcal{G}$ and $-\mathcal{G}$, where $\mathcal{G}$ is obtained by (R2),
(H*) a grid with $n_R$ spheres, $n_S$ directions $\boldsymbol{s}$ which are constructed from a Halton sequence of $n_S$ points in $\mathbb{R}^{d-1}$ transformed to $\mathcal{S}^{d-1}\cap(\mathbb{R}^{d-1}\times[0,\infty))$, and the corresponding $n_S$ directions $-\boldsymbol{s}$.

All tests were computed for Wilcoxon scores with $J(u)=u$. For some other possible choices of $J$ see (Hallin et al., 2022, Section 6), in particular their Section 6.2 for the discussion of the choice of the score function and the efficiency of the test; for a comparison in the context of two-sample tests see Hallin and Mordant (2023). The power of the c-o rank tests is compared to the benchmark one-sample Hotelling's $T^2$ test and also to the one-sample spatial rank test from Oja and Randles (2004), see Sirkia et al. (2021). To sum up, there are 7 tests to be compared: 3 based on the c-o test with random signs and with various grids (abbreviated as RAN-R1, RAN-R2, RAN-H), 2 based on the symmetrized sample and various grids (SYM-R2, SYM-H), the Hotelling's $T^2$ test (HOT) and the spatial rank test (SPAT). The power of all tests is computed from 500 replications.
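As an example of the (H)-type construction, the sketch below maps a Halton sequence to unit directions. Instead of the polar-coordinate transform of Fang and Wang (1994) used in the paper, it applies Gaussian quantiles followed by normalization, a common alternative, so the resulting grid is illustrative rather than identical to (H):

```python
import numpy as np
from scipy.stats import norm, qmc

def halton_directions(n_S, d, seed=0):
    """Map a scrambled Halton sequence in (0,1)^d to unit vectors on the sphere.

    Note: a simpler stand-in for the polar-coordinate construction of
    Fang and Wang (1994); inverse-normal transform, then normalization.
    """
    h = qmc.Halton(d=d, scramble=True, seed=seed).random(n_S)
    z = norm.ppf(np.clip(h, 1e-12, 1 - 1e-12))  # quasi-random Gaussian points
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def grid_H(n_R, n_S, d, seed=0):
    """Grid of n_R spheres with radii i/(n_R+1) times n_S low-discrepancy directions."""
    u = halton_directions(n_S, d, seed)
    radii = np.arange(1, n_R + 1) / (n_R + 1)
    return np.vstack([r * u for r in radii])

g = grid_H(n_R=6, n_S=25, d=3)
```

A symmetric (H*)-style grid could then be obtained by restricting the directions to a half-sphere and adjoining their negatives, as described above.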
(d=2, dir=2)(d=4, dir=2)(d=6, dir=2)(d=2, dir=1)(d=4, dir=1)(d=6, dir=1) 0.0 0.1 0.2 0.3 0.0 0.1 0.2 0.3 0.0 0.1 0.2 0.30.0 0.1 0.2 0.3 0.0 0.1 0.2 0.3 0.0 0.1 0.2 0.30.000.250.500.751.00 0.000.250.500.751.000.000.250.500.751.00 0.000.250.500.751.000.000.250.500.751.00 0.000.250.500.751.00 δn 150 300 Statistic RAN-R1 RAN-R2 RAN-H SYM-R2 SYM-H HOT SPATNormal dist. Fig. 1 Power of the considered tests for the standard normal distribution N𝑑(0,𝑰)inR𝑑for𝑑∈ {2,4,6}(in columns) shifted by 𝛿·𝒔for𝒔=(1,..., 1)⊤(dir=1, first row) or 𝒔=(1,0,..., 0)⊤ (dir=2, second row) for sample sizes 𝑛∈{150,300}and various𝛿≥0. 14 Daniel Hlubinka and ˇS´arka Hudecov ´a 4.1 Results The obtained results are provided in Figures 1–3 in forms of power functions de- pending on the size of the shift 𝛿for the significance level 𝛼=0.05. In each plot, the various considered tests are distinguished by colors and the sample sizes by line types (dotted-dashed for 𝑛=150, and solid for 𝑛=300). The three considered dimensions 𝑑=2,4,6 are provided in the three plot columns, while the rows correspond to the two directional vector 𝒔, with direction 1 and 2 corresponding to 𝒔=(1,1,..., 1)⊤ and𝒔=(1,0,..., 0)⊤respectively. In addition, the size of the tests is summarized numerically in Table 1. All considered c-o tests satisfy the prescribed significance level 𝛼=0.05 with a tendency to be slightly undersized (conservative) in some settings. Regarding the power, it is
https://arxiv.org/abs/2505.15794v1
not surprising that Hotelling's 𝑇2 test leads to the largest power for the normal distribution. In this case, the power of the c-o tests is generally comparable for 𝑑 = 2 and for 𝑑 = 4 with the shift in direction 𝒔 = (1, 1, ..., 1)⊤. For 𝑑 = 6, larger differences are observed, but they decrease with an increasing sample size 𝑛. Among the c-o tests, the symmetrized version with the randomized grid (R2) performs the best. Hotelling's 𝑇2 test completely fails for the 𝑡1 distribution, while the test based on spatial ranks performs the best. Differences in power between this test and the c-o tests seem to increase with a growing dimension 𝑑. Finally, the double exponential distribution illustrates the benefits of the c-o rank-based tests: in this case, all the proposed c-o tests completely outperform Hotelling's 𝑇2 test as well as the spatial rank test if the shift is in direction 𝒔 = (1, 1, ..., 1)⊤. For a shift in direction 𝒔 = (1, 0, ..., 0)⊤, the symmetrized test performs the best for dimension 𝑑 ≤ 4, but it is outperformed by the spatial rank test for 𝑑 = 6. Comparing the two proposed c-o approaches (random signs and symmetrized), it is not possible to say that the symmetrized test leads to a larger power than the randomized test in all cases, but it could be slightly preferred. Its disadvantage, compared to the randomized test, is that it uses a transport of 2𝑛 points, so it is more computationally demanding for larger sample sizes 𝑛. Regarding the choice of the grid, for both the randomized and the symmetrized test, the random grid (R2) seems to lead to the overall best results compared to the random grid (R1) and to the fixed regular grid (H).

4.2 Sensitivity to the assumption of central symmetry

The proposed c-o tests rely on the assumption of central symmetry of the true distribution. One may ask how the tests perform if this assumption is violated.
Of course, if the distribution of X is not centrally symmetric, then there is no true point 𝝁𝑆 to be tested, and, in fact, it is not clear what the theoretical counterpart of the location estimator (𝐹(𝑛)±)−1(0) is. However, it can still be of interest to explore the behavior of the proposed test for small deviations from symmetry.

Fig. 2 Power of the considered tests for the 𝑡1 distribution in R𝑑 for 𝑑 ∈ {2, 4, 6} (in columns) shifted by 𝛿·𝒔 for 𝒔 = (1, ..., 1)⊤ (dir=1, first row) or 𝒔 = (1, 0, ..., 0)⊤ (dir=2, second row) for sample sizes 𝑛 ∈ {150, 300} and various 𝛿 ≥ 0.

Fig. 3 Power for the "double exponential" distribution in R𝑑 for 𝑑 ∈ {2, 4, 6} (in columns) shifted by 𝛿·𝒔 for 𝒔 = (1, ..., 1)⊤ (dir=1, first row) or 𝒔 = (1, 0, ...,
0)⊤ (dir=2, second row) for sample sizes 𝑛 ∈ {150, 300} and various 𝛿 ≥ 0.

𝑑  𝑛    RAN-R1  RAN-R2  RAN-H   SYM-R2  SYM-H   HOT     SPAT
Normal dist.
2  150   0.040   0.036   0.056   0.046   0.050   0.062   0.052
2  300   0.044   0.040   0.044   0.044   0.038   0.060   0.050
4  150   0.036   0.042   0.048   0.036   0.040   0.050   0.048
4  300   0.040   0.040   0.024   0.030   0.026   0.046   0.044
6  150   0.044   0.040   0.042   0.034   0.034   0.044   0.058
6  300   0.050   0.040   0.030   0.050   0.040   0.060   0.062
𝑡1 dist.
2  150   0.030   0.038   0.036   0.034   0.030   0.016   0.052
2  300   0.028   0.028   0.030   0.022   0.026   0.012   0.038
4  150   0.054   0.042   0.038   0.036   0.046   0.024   0.044
4  300   0.034   0.034   0.030   0.038   0.044   0.008   0.040
6  150   0.050   0.042   0.040   0.032   0.036   0.018   0.060
6  300   0.030   0.020   0.034   0.024   0.034   0.016   0.054
Double exp. dist.
2  150   0.036   0.044   0.048   0.048   0.044   0.046   0.042
2  300   0.042   0.046   0.046   0.046   0.040   0.040   0.044
4  150   0.046   0.040   0.042   0.034   0.036   0.046   0.040
4  300   0.044   0.050   0.042   0.050   0.048   0.054   0.050
6  150   0.040   0.038   0.044   0.034   0.034   0.040   0.068
6  300   0.032   0.038   0.032   0.022   0.028   0.042   0.052

Table 1 Empirical size of the considered tests for the three 𝑑-dimensional distributions: the randomized one-sample test with various grids (RAN-R1, RAN-R2, and RAN-H), the symmetrized test with two different grids (SYM-R2 and SYM-H), Hotelling's 𝑇2 test (HOT), and the spatial rank test (SPAT).

For this purpose, we use a bivariate skew normal distribution SN2(0, 𝑰, 𝜶) for 𝜶 = (𝛼, 𝛼)⊤ with 𝛼 ∈ R, where the parametrization is taken from (Azzalini and Capitanio, 2014, Chapter 5), i.e., the density of this distribution is 𝑓(𝒙) = 2𝜑(𝒙)Φ(𝜶⊤𝒙), where 𝜑 is the density of the bivariate normal distribution N2(0, 𝑰) and Φ is the cumulative distribution function of the univariate N(0, 1). The parameter 𝜶 is referred to as the slant parameter, and it drives the skewness of the distribution. The mean of this distribution is 𝝁 = (𝜇, 𝜇)⊤ for 𝜇 = √(2/𝜋) 𝛼/√(1 + 2𝛼²). We simulate data as X = 𝒀 − 𝝁 + 𝛿·𝒔 for 𝒀 ∼ SN2(0, 𝑰, 𝜶), a directional vector 𝒔 = (1, 0)⊤, and a shift size 𝛿 ≥ 0.
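A numpy sketch of this simulation step, using the standard conditioning representation of the skew normal distribution. The function name `rskewnormal` and the sign-flip construction are choices of this sketch; the authors' R implementation may differ:

```python
import numpy as np

def rskewnormal(n, alpha, rng):
    """Draw n samples from SN_d(0, I, alpha): if (X0, X) is N_{1+d}(0, Omega) with
    Omega = [[1, delta^T], [delta, I_d]] and delta = alpha / sqrt(1 + alpha.alpha),
    then sign(X0) * X has the skew normal distribution with slant alpha."""
    alpha = np.asarray(alpha, dtype=float)
    d = alpha.size
    delta = alpha / np.sqrt(1.0 + alpha @ alpha)
    cov = np.eye(d + 1)
    cov[0, 1:] = delta
    cov[1:, 0] = delta
    z = rng.multivariate_normal(np.zeros(d + 1), cov, size=n)
    return np.sign(z[:, :1]) * z[:, 1:]

# centred, shifted data X = Y - mu + delta * s as in the text, for alpha = 3
a, shift = 3.0, 0.1
mu = np.sqrt(2 / np.pi) * a / np.sqrt(1 + 2 * a**2)   # per-coordinate mean of SN2
rng = np.random.default_rng(0)
X = rskewnormal(100_000, (a, a), rng) - mu + shift * np.array([1.0, 0.0])
```

With 𝜶 = (𝛼, 𝛼)⊤ the representation gives 𝛿ⱼ = 𝛼/√(1 + 2𝛼²) per coordinate, so the sample mean of the centred, shifted data is approximately (𝛿, 0)⊤, matching the mean formula quoted above.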
For each sample, we computed the following tests of zero location: the test based on random signs (RAN, grid (R2)), the test based on a symmetrized sample (SYM, grid (R2)), Hotelling's 𝑇2 test (HOT), and the spatial rank test (SPAT). Here, "location" is the mean for Hotelling's 𝑇2 test, while it is more complicated to define the theoretical location for the three remaining tests for skewed data. Subsequently, proportions of rejections at significance level 0.05 were calculated; they are plotted in Figure 4 for 𝑛 ∈ {150, 300} and 𝛼 ∈ {1, 3, 5}. For 𝛿 = 0 and 𝛼 = 1, all four procedures reject the null hypothesis of zero location in approximately 5 % of the cases, while for 𝛿 = 0 and 𝛼 > 1 this is the case only for the first three tests. This is not surprising for Hotelling's 𝑇2 test, because for 𝛿 = 0 we have EX = 0 regardless of the value of the slant parameter 𝛼, so the null hypothesis holds and the test keeps the nominal level asymptotically. In addition, EX ≠ 0 for 𝛿 > 0, the test is consistent, and it leads to the largest percentage of rejections for any fixed 𝛿 > 0. It is interesting to see that the two proposed c-o tests (RAN and SYM) seem
to be quite robust with respect to small deviations from symmetry, as they also reject the null in approximately 5 % of the cases for 𝛿 = 0, and the proportion of rejections grows with 𝛿 in the expected way.

Fig. 4 Proportion of rejections of the null hypothesis of zero location for the c-o test with random signs (RAN), the symmetrized sample (SYM), Hotelling's 𝑇2 test (HOT), and the spatial rank test (SPAT) for skew data in dimension 𝑑 = 2. The columns correspond to the distributional parameter 𝛼: the larger 𝛼, the more skewed the distribution; see details in the main text.

4.3 Comparison with marginal one-sample test

One of the reviewers raised an interesting question regarding the benefits of the multivariate one-sample test with respect to 𝑑 univariate tests with a correction for multiple testing. The corresponding power comparison is provided in Figure 5, where the power of the symmetrized c-o test with a random grid (R2) and the power of the spatial rank test are compared to the power of 𝑑 univariate Wilcoxon tests (conducted for each coordinate) combined via the Bonferroni correction. Results for the double exponential distribution are presented, but the overall conclusion is the same for the normal and 𝑡1 distributions. The results reveal that the comparison depends on the dimension 𝑑 and on the shift direction, while differences between the sample sizes 𝑛 are not so visible. Indeed, there are situations (a shift in a suitable direction) where it is more beneficial to simply conduct 𝑑 univariate marginal tests instead of one multivariate test (spatial or c-o), but the opposite holds in other settings. In practice, one typically does not have prior information about the direction of the possible shift, so a multivariate test is a more general choice.
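The Bonferroni-combined marginal benchmark can be sketched in Python with scipy. The helper name `marginal_wilcoxon` is an assumption of this sketch, not from the paper:

```python
import numpy as np
from scipy.stats import wilcoxon

def marginal_wilcoxon(X, level=0.05):
    """d univariate one-sample Wilcoxon signed-rank tests of zero location,
    one per coordinate, combined via the Bonferroni correction: reject the
    multivariate null if some marginal p-value is at most level / d."""
    d = X.shape[1]
    pvals = [wilcoxon(X[:, j]).pvalue for j in range(d)]
    return bool(min(pvals) <= level / d)

rng = np.random.default_rng(0)
sample = rng.laplace(size=(300, 4))       # centred "double exponential" data
shifted = sample + 0.8                    # strong shift in direction (1, ..., 1)
```

A shift in direction (1, ..., 1)⊤ moves every coordinate, so each marginal test sees it; a shift along a single coordinate leaves 𝑑 − 1 of the marginal tests powerless, which is one source of the direction dependence discussed above.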
Fig. 5 Power curves of two multivariate one-sample tests (symmetrized c-o test and spatial rank test) compared to the power of 𝑑 univariate marginal Wilcoxon tests (MARG) combined via the Bonferroni correction for the "double exponential" distribution in R𝑑 for 𝑑 ∈ {2, 4, 6} (in columns). A centered sample is shifted by 𝛿·𝒔 for 𝒔 = (1, ..., 1)⊤ (dir=1, first row) or 𝒔 = (1, 0, ..., 0)⊤ (dir=2, second row) for sample sizes 𝑛 ∈ {150, 300}.

References

Azzalini, A. and Capitanio, A. (2014). The Skew-Normal and Related Families. Cambridge University Press, New York.
Chernozhukov, V., Galichon, A., Hallin, M., and Henry, M. (2017). Monge–Kantorovich depth, quantiles, ranks and signs. Ann. Statist., 45(1):223–256.
Dutang, C. and Savicky, P. (2022). randtoolbox: Generating and Testing Random Numbers. R package version 2.0.3.
Fang, K. and Wang, Y. (1994). Number-theoretic Methods in Statistics. Chapman & Hall.
Ghosal, P. and Sen, B. (2022). Multivariate ranks and quantiles using optimal transport: Consistency, rates and nonparametric testing. Ann. Statist., 50(2):1012–1037.
Hájek, J.
and Šidák, Z. (1967). Theory of Rank Tests. Academic Press, New York.
Hallin, M. (2022). Measure transportation and statistical decision theory. Annual Review of Statistics and Its Application, 9:401–424.
Hallin, M., del Barrio, E., Cuesta-Albertos, J., and Matrán, C. (2021). Distribution and quantile functions, ranks, and signs in R𝑑: A measure transportation approach. Ann. Statist., 49(2):1139–1165.
Hallin, M., Hlubinka, D., and Hudecová, Š. (2022). Efficient fully distribution-free center-outward rank tests for multiple-output regression and MANOVA. JASA. doi 10.1080/01621459.2021.2021921.
Hallin, M., La Vecchia, D., and Liu, H. (2023). Rank-based testing for semiparametric VAR models: A measure transportation approach. Bernoulli, 29(1):229–273.
Hallin, M. and Mordant, G. (2023). On the finite-sample performance of measure-transportation-based multivariate rank tests. In Yi, M. and Nordhausen, K., editors, Robust and Multivariate Statistical Methods: Festschrift in Honor of David E. Tyler, pages 87–119. Springer International Publishing, Cham.
Hornik, K. (2005). A CLUE for CLUster Ensembles. Journal of Statistical Software, 14(12).
Hudecová, Š. and Šiman, M. (2022). Multivariate ranks based on randomized lift-interdirections. Computational Statistics & Data Analysis, 172.
McCann, R. J. (1995). Existence and uniqueness of monotone measure-preserving maps. Duke Mathematical Journal, 80:309–323.
Oja, H. and Paindaveine, D. (2005). Optimal signed-rank tests based on hyperplanes. Journal of Statistical Planning and Inference, 135:300–323.
Oja, H. and Randles, R. (2004). Multivariate nonparametric tests. Statistical Science, 19:598–605.
R Core Team (2022). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria.
Randles, R. (1989). A distribution-free multivariate sign test based on interdirections.
Journal of the American Statistical Association, 84:1045–1050.
Serfling, R. (2006). Multivariate symmetry and asymmetry. In Kotz, S., Balakrishnan, N., Read, C., and Vidakovic, B., editors, Encyclopedia of Statistical Sciences, volume 8, pages 5338–5345. Wiley, 2nd edition.
Shi, H., Hallin, M., Drton, M., and Han, F. (2022). On universally consistent and fully distribution-free rank tests of vector independence. Ann. Statist., 50(4):1933–1959.
Sirkiä, S., Miettinen, J., Nordhausen, K., Oja, H., and Taskinen, S. (2021). SpatialNP: Multivariate Nonparametric Methods Based on Spatial Signs and Ranks. R package version 1.1-5.
arXiv:2505.15969v1 [math.OC] 21 May 2025

Grassmann and Flag Varieties in Linear Algebra, Optimization, and Statistics: An Algebraic Perspective

Hannah Friedman and Serkan Hoşten

Abstract

Grassmann and flag varieties lead many lives in pure and applied mathematics. Here we focus on the algebraic complexity of solving various problems in linear algebra and statistics as optimization problems over these varieties. The measure of the algebraic complexity is the number of complex critical points of the corresponding optimization problem. After an exposition of different realizations of these manifolds as algebraic varieties, we present a sample of optimization problems over them and compute their algebraic complexity.

1 Introduction

A flag manifold or flag variety parametrizes nested sequences of subspaces of fixed dimensions, known as flags, of a finite-dimensional vector space V. In this article, we take this vector space to be V = R^n or V = C^n. We fix r positive integers 1 ≤ k1 < k2 < ··· < kr ≤ n and consider

Fl(k; n) = Fl(k1, ..., kr; n) = {W1 ⊂ W2 ⊂ ··· ⊂ Wr : Wi ⊆ V, dim(Wi) = ki, 1 ≤ i ≤ r}.

The set of flags Fl(k; n) is both a (real or complex) manifold and a (real or complex) algebraic variety. As an algebraic variety, it lives in some affine space A^N or projective space P^N. In this case, we would like to know the defining equations, i.e., the polynomials whose common zero set is Fl(k; n). We also want to know algebraic invariants of Fl(k; n), such as the dimension and the degree, where the latter depends on the particular embedding. In Section 2, we introduce and discuss several realizations of flag varieties, namely the Stiefel coordinates, the Plücker embedding, the projection embedding, and the isospectral model. The Grassmannian Gr(k, n) of k-dimensional subspaces of V is a flag variety with r = 1. It also appears with different realizations, such as its Plücker and projection embeddings [5, 6].
For each realization of the flag variety, we prove that the natural equations defining these varieties generate prime ideals. We discuss the relationships between these different embeddings. All of these realizations have appeared in the literature; see [5, 6, 9, 17]. We collect them in one place to help anyone, especially those from the applied and computational algebraic geometry and algebraic statistics communities, get a foot in the door.

Our theme is optimization on Grassmann and flag varieties. As a starting point, consider the eigenvalue problem for symmetric matrices. Let A be an n×n real symmetric matrix and consider the problem of minimizing or maximizing the Rayleigh quotient:

min/max_{z ≠ 0} (z^T A z)/(z^T z).   (1)

It is known that the optimal value of this eigenvalue problem is the minimum (respectively, maximum) eigenvalue of A. A formulation that avoids the fraction in the objective function is the quadratic optimization problem over S^{n−1}, the sphere of dimension n−1:

min/max_{z ∈ S^{n−1}} z^T A z.   (2)

The optimal solution to the eigenvalue problems (1) and (2) is a unit eigenvector corresponding to the minimum/maximum eigenvalue. The critical points of this constrained optimization problem are precisely the unit eigenvectors of A. The choice of sign for each eigenvector gives us 2n critical points. The eigenvalue problem has the following generalization to
compute multiple eigenvectors at once, where we replace the vector z in (2) with an n×k matrix Z consisting of orthonormal columns. We solve a quadratic optimization problem over the Stiefel manifold called the multi-eigenvector problem:

min/max_{Z^T Z = Id_k} trace(Z^T A Z).   (3)

The critical points of this problem are the sets of orthogonal k-frames consisting of unit eigenvectors of A, up to the action of the orthogonal group O(k) over the complex numbers.

Theorem 1.1. Let A be a generic real symmetric n×n matrix and let Z be an n×k variable matrix. The algebraic set of complex critical points of the multi-eigenvector problem (3) is

⊔_{ {i1,...,ik} ⊆ [n] } { [u_{i1} u_{i2} ··· u_{ik}] Q : Q ∈ O(k) },

where u1, ..., un is an orthonormal eigenbasis of A. This algebraic set is a disjoint union of (n choose k) varieties isomorphic to O(k); it has dimension dim(O(k)) = (k choose 2) and degree deg(O(k))·(n choose k).

The degree of the special orthogonal group was computed in [1, Theorem 1.1]. Because the multi-eigenvector problem is O(k)-invariant, it is more naturally considered over the Grassmannian. In Section 3, we restate this problem over the Grassmannian in projection coordinates [5], where it has (n choose k) critical points (Theorem 3.3). To formulate this problem over the flag variety, one can use isospectral coordinates [9]. In the isospectral formulation, the set of critical points is a disjoint union of products of smaller orthogonal groups (Theorem 3.5). In both Theorems 3.3 and 3.5, we give explicit descriptions of the critical points. When the set of critical points is finite, its size, the algebraic degree, provides a measure of the complexity of the problem. In particular, any optimal solution is an algebraic function of the input data of this degree; see [12, 13, 16].

In Section 4, we study the heterogeneous quadratics minimization problem, which is used as a benchmark for testing numerical methods for Riemannian optimization [15]. Given symmetric n×n matrices A1, . . .
, Ak, we seek to find Z = [Z1 ··· Zk] such that

min/max_{Z^T Z = Id_k} Σ_{i=1}^{k} Z_i^T A_i Z_i.   (4)

If A1 = ··· = Ak, one recovers the multi-eigenvector problem (3). However, for a generic choice of A1, ..., Ak, this problem is not an optimization problem over the Grassmannian, but rather over the flag variety Fl(1, ..., k; n). We formulate the problem in projection coordinates and observe that in this formulation the heterogeneous quadratics minimization problem is a generic linear optimization problem over the flag variety. We use numerical methods to compute the critical point counts for small values of k (Table 1). We also report our observation, based on computational experiments, that taking A1, ..., Ak to be generic diagonal matrices does not change the number of critical points. For n = 3 and k = 2, we show that this number is 40 (Proposition 4.2). The code for these computations is available at https://github.com/hannahfriedman/flag_optimization_algebraic_perspective.

In Section 5, we examine two problems from statistics [17]: canonical correlation analysis and correspondence analysis. In these problems, we consider a rectangular matrix A instead of a symmetric matrix, and the critical points of the problems are given by
the singular value decomposition of A. In the first case, we give a formula for the finite number of critical points, which correspond to pairs of left and right singular vectors of A (Theorem 5.1). The second problem can be naturally formulated over a product of Grassmannians, and we give a complete description of the critical points in this case as well (Theorem 5.2).

2 The Many Lives of the Flag Variety

We start with the observation that the partial flags 0 = W0 ⊂ W1 ⊂ W2 ⊂ ··· ⊂ Wr ⊂ V with dim(Wi) = ki are in bijection with the sequences of subspaces (W1/W0, W2/W1, W3/W2, ..., Wr/Wr−1), where Wi/Wi−1 is viewed as a subspace of V/Wi−1 for i = 1, ..., r. This gives a formula for the dimension of Fl(k; n) = Fl(k1, ..., kr; n).

Proposition 2.1. The dimension of Fl(k; n) equals Σ_{i=1}^{r} (ki − ki−1)(n − ki). In particular, the dimension of the complete flag variety Fl(1, 2, ..., n−1; n) is n(n−1)/2.

Proof. The above bijection implies that there is a bijection from Fl(k; n) to the points of Gr(k1, n) × Gr(k2−k1, n−k1) × ··· × Gr(kr−kr−1, n−kr−1). Since the ith Grassmannian above has dimension (ki − ki−1)(n − ki), the formula follows. For the complete flag variety, ki − ki−1 = 1 for all i = 1, ..., r, and again the formula follows.

The flag variety Fl(k; n) is first and foremost conceived as a real or complex manifold. We can associate to a flag W1 ⊂ W2 ⊂ ··· ⊂ Wr ⊂ V an orthogonal matrix Q whose first ki columns comprise an orthonormal basis of Wi for i = 1, ..., r. The smaller orthogonal group O(ki − ki−1) acts on the columns Q_{ki−1+1}, ..., Q_{ki} by right multiplication without altering the flag. Therefore

Fl(k1, ..., kr; n) ≃ O(n) / O(k1) × O(k2−k1) × ··· × O(n−kr).

It is instructive to work out that the dimension of the quotient on the right-hand side,

(n choose 2) − (k1 choose 2) − (k2−k1 choose 2) − ··· − (kr−kr−1 choose 2) − (n−kr choose 2),

gives the formula for the dimension of Fl(k; n) in Proposition 2.1. A related formulation utilizes the Stiefel manifold V_{k,n}.

Definition 2.2.
The Stiefel manifold V_{k,n} is the set of orthonormal k-frames. Namely, it consists of n×k matrices Z with Z^T Z = Id_k.

The Stiefel manifold V_{k,n} is an affine algebraic variety in A^{nk}. When k = n, V_{k,n} = O(n) and it has two irreducible components. Some of the contents of the following theorem are known; see, for example, [3, Lemma 2.4].

Theorem 2.3. The Stiefel manifold V_{k,n} has dimension

dim(V_{k,n}) = nk − (k+1 choose 2) = (n choose 2) − (n−k choose 2).

The Stiefel ideal I_St = I(V_{k,n}) = ⟨Z^T Z − Id_k⟩ is a complete intersection. When k < n, V_{k,n} is irreducible and I_St is prime.

Proof. For the dimension, we observe that Fl(1, 2, ..., k; n) ≃ V_{k,n} / Π_{i=1}^{k} O(1), since each orthonormal frame Z corresponds to a flag W1 ⊂ W2 ⊂ ··· ⊂ Wk with dim(Wi) = i up to the sign of each column of Z. This gives a finite map from V_{k,n} to Fl(1, 2, ..., k; n) of degree 2^k. Hence, the dimension of V_{k,n} is equal to the dimension of this
flag manifold, which is (n choose 2) − (n−k choose 2) = nk − (k+1 choose 2) by Proposition 2.1. The affine variety defined by I_St is precisely V_{k,n}. Since I_St has (k+1 choose 2) generators, it is a complete intersection. The special orthogonal group SO(n) acts transitively on V_{k,n} for k < n, and this shows that V_{k,n} is irreducible in this case. The variety V_{k,n} is the orbit of [e1 ··· ek] under this action, where e_j is the jth standard unit vector in C^n. We show that the affine scheme defined by I_St is reduced by showing that the above point is smooth. The Jacobian of I_St is the matrix Jac(Z)^T whose row for a diagonal constraint Z_i^T Z_i − 1 has the block 2Z_i in position i, and whose row for an off-diagonal constraint Z_i^T Z_j (i < j) has the block Z_j in position i and the block Z_i in position j, (5) and Jac([e1 ··· ek])^T has rank (k+1 choose 2) since all columns have different supports. This shows that I_St is a radical ideal in general and a prime ideal for k < n.

Now, we can associate to a flag W1 ⊂ W2 ⊂ ··· ⊂ Wr ⊂ V with dim(Wi) = ki an orthonormal kr-frame Z together with the orthogonal groups O(ki − ki−1) acting on the columns Z_{ki−1+1}, ..., Z_{ki} by right multiplication. In other words,

Fl(k1, ..., kr; n) ≃ V_{kr,n} / O(k1) × O(k2−k1) × ··· × O(kr−kr−1).

In particular, the Grassmannian Gr(k, n) is identified with V_{k,n}/O(k). For both the flag and Grassmann manifolds, we call their representation as a quotient of the Stiefel variety the representation in Stiefel coordinates. We will use the Stiefel coordinates for the Grassmannian to formulate the multi-eigenvector problem.

The above introduction views Fl(k; n) and Gr(k, n) primarily as manifolds. As algebraic varieties, they have different embeddings. The classical embedding is in projective space. The case of the Grassmannian Gr(k, n) is well known. A k-dimensional subspace W of V is identified with a k×n matrix A of rank k whose rows form a basis of W. This identification is up to the action of GL(k) by left multiplication, to account for all possible bases of W. Then we consider the map

Gr(k, n) → P^{(n choose k)−1},  A ↦ (det(A_I))_I,

where I ranges over the k-element subsets of [n] and A_I is the k×k submatrix of A whose columns are indexed by I.
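As a quick numerical illustration (a sketch, not from the paper), the maximal minors of a random 2×4 matrix satisfy the single Plücker quadric of Gr(2, 4), p_{12}p_{34} − p_{13}p_{24} + p_{14}p_{23} = 0:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 4))        # generic rank-2 matrix: a point of Gr(2, 4)
# Plücker coordinates: one 2x2 minor per 2-element subset of {0, 1, 2, 3}
p = {I: np.linalg.det(A[:, I]) for I in combinations(range(4), 2)}
# the single quadratic Plücker relation for Gr(2, 4)
rel = p[(0, 1)] * p[(2, 3)] - p[(0, 2)] * p[(1, 3)] + p[(0, 3)] * p[(1, 2)]
```

Rescaling the rows of A by GL(2) multiplies all six minors by the same determinant, so the point in projective space, and the vanishing of the relation, is unchanged.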
The image is an irreducible projective variety of dimension k(n−k) defined by the quadratic Plücker relations [6, Section 9.1] in the coordinates x_I of P^{(n choose k)−1}. In a similar fashion, we embed Fl(k1, ..., kr; n) into P^{(n choose k1)−1} × ··· × P^{(n choose kr)−1}. This time we consider the map

Fl(k; n) → P^{(n choose k1)−1} × ··· × P^{(n choose kr)−1},  A ↦ ((det(A_{I1}))_{I1}, ..., (det(A_{Ir}))_{Ir}),   (6)

where A is a kr×n matrix of rank kr, Ij ranges over the kj-element subsets of [n], and A_{Ij} is the kj×kj submatrix of the first kj rows of A with columns indexed by Ij. The image is an irreducible projective variety of dimension Σ_{i=1}^{r} (ki−ki−1)(n−ki), also defined by quadratic relations in the coordinates x_I, where I is a subset of [n] of cardinality equal to one of k1, k2, ..., kr. The equations below express the fact that Ws ⊂ Wt for every pair of subspaces in the flag whenever s < t.

Theorem 2.4 (Proposition 9.1.1, [6]). The variety Fl(k1, ..., kr; n) ⊆ P^{(n choose k1)−1} × ··· × P^{(n choose kr)−1} in Plücker coordinates is defined by the prime ideal generated by the quadrics

x_{i1,...,i_{ks}} x_{j1,...,j_{kt}} − Σ x_{i′1,...,i′_{ks}} x_{j′1,...,j′_{kt}}

for every
pair 1 ≤ s < t ≤ r, where the sum is over all (i′, j′) obtained by exchanging the first m of the j-subscripts with m of the i-subscripts while preserving their order, and, given a permutation σ ∈ S_{ks}, x_{i1,...,i_{ks}} = sgn(σ) x_{σ(i1),...,σ(i_{ks})}.

We now turn to realizations of Grassmann and flag varieties in applied settings, where they are represented by symmetric matrices. We give two such representations of flag varieties. The first one uses projection matrices. One can uniquely represent a k-dimensional subspace of R^n by the orthogonal projection matrix P onto that subspace. The symmetric matrix P satisfies P^2 = P and trace(P) = k. These equations realize the Grassmannian as an affine variety in A^{(n+1 choose 2)}, which we call the projection Grassmannian pGr(k, n) [5, 8].

Proposition 2.5 (Theorem 5.2, [5]). The projection Grassmannian pGr(k, n) is an irreducible variety defined by the prime ideal ⟨P^2 − P⟩ + ⟨trace(P) − k⟩ ⊂ C[P].

We extend this definition to flag varieties. Indeed, if P and P′ are projection matrices with rank(P) ≤ rank(P′), then im(P) ⊆ im(P′) precisely when P′P = P. We therefore define the projection flag variety as

pFl(k; n) = {(P1, ..., Pr) ∈ Sym(C^n)^r : Pi Pj = Pj, trace(Pi) = ki for 1 ≤ j ≤ i ≤ r}.   (7)

This definition may be found in [17, Proposition 17]. The affine variety pFl(k; n) is irreducible: given an orthonormal frame Z in the Stiefel variety V_{kr,n}, we set Pi = [Z1 ··· Z_{ki}][Z1 ··· Z_{ki}]^T. This yields projection matrices satisfying Pi Pj = Pj for j ≤ i and trace(Pi) = ki. Conversely, given such projection matrices, the column span of Pi is the subspace Wi in the flag. Hence, we can construct an orthonormal frame Z ∈ V_{kr,n} such that Z1, ..., Z_{ki} is an orthonormal basis of Wi, and then Pi = [Z1 ··· Z_{ki}][Z1 ··· Z_{ki}]^T. Since pFl(k; n) is given by this parametrization, we obtain the result:

Proposition 2.6. The projection flag variety pFl(k; n) is an irreducible affine variety.

The equations in (7) define the projection flag variety set-theoretically. We now prove that they generate the prime ideal I(pFl(k; n)).
We note that in (7) it suffices to retain the relations Pi^2 = Pi for i = 1, ..., r and Pi Pi−1 = Pi−1 for i = 2, ..., r.

Theorem 2.7. Let Fl(k1, ..., kr; n) be a flag variety and let P1, ..., Pr be n×n symmetric matrices of unknowns. Then pFl(k; n) is smooth and has prime ideal

⟨Pi Pi−1 − Pi−1 : 2 ≤ i ≤ r⟩ + ⟨Pi^2 − Pi, trace(Pi) − ki : 1 ≤ i ≤ r⟩ ⊂ C[P1, ..., Pr].

Proof. We follow the proof of the Grassmannian case in [5, Theorem 5.2]. Since the variety is irreducible and the generators of the ideal cut out the variety set-theoretically, it suffices to prove that the scheme of the ideal is smooth. As in the Grassmannian case, there is a transitive O(n) action on tuples (P1, ..., Pr), and hence we need only check smoothness at the point E_k = (E_{k1}, ..., E_{kr}), where E_i is the n×n matrix with (E_i)_{ℓj} = 1 if ℓ = j ≤ i and zero otherwise. The codimension of pFl(k; n) is N = Σ_{i=1}^{r} ((n+1 choose 2) − ki(n−ki)) + Σ_{i=2}^{r} ki−1(n−ki). We prove that the rank of the Jacobian is at least N by describing N linearly independent rows of the Jacobian evaluated at E_k. Fix i ∈ {1, ..., r} and choose ℓ, j ∈ {1, ...,
ki} or ℓ, j ∈ {ki+1, ..., n}. Then

∂(Pi^2 − Pi)_{ℓj} / ∂(P_{i′})_{ℓ′j′} |_{E_k} = ±1 if i′ = i, ℓ′ = ℓ, j′ = j, and 0 otherwise,

where the sign is positive if ℓ, j ≤ ki and negative if ki < ℓ, j. There are (n+1 choose 2) − ki(n−ki) choices for such pairs {ℓ, j}.

Next fix i ∈ {2, ..., r}, let j ∈ {1, ..., ki−1}, and let ℓ ∈ {ki+1, ..., n}. Then

∂(Pi Pi−1 − Pi−1)_{ℓj} / ∂(P_{i′})_{ℓ′j′} |_{E_k} = 1 if i′ = i, ℓ′ = ℓ, j′ = j; −1 if i′ = i−1, ℓ′ = ℓ, j′ = j; and 0 otherwise.

There are ki−1(n−ki) choices for the pairs {ℓ, j}. Ranging over all choices of i yields N rows of the Jacobian with disjoint supports. Thus the Jacobian has the desired rank.

The second representation of Grassmann and flag varieties using symmetric matrices is known as the isospectral model [9]. This representation encodes a flag with a single matrix. To construct this matrix, we represent a flag in Fl(k; n) by an equivalence class of orthogonal matrices Q ∈ O(n) modulo an O(k1) × ··· × O(kr−kr−1) × O(n−kr) action. We then define a diagonal matrix X0 = diag(c1, ..., cn) such that c1 = ··· = c_{k1}, c_{k1+1} = ··· = c_{k2}, etc. The symmetric matrix X = Q^T X0 Q does not depend on the choice of representative Q and therefore uniquely encodes the flag. So the flag variety Fl(k; n) is embedded as the variety of diagonalizable complex symmetric matrices with spectrum c = (c1, ..., cn).

Definition 2.8. Suppose c_{ki+1} = ··· = c_{k_{i+1}} for i = 0, ..., r, where k0 = 0 and k_{r+1} = n, and let X0 = diag(c). The isospectral model for the flag variety Fl_c(k; n) is the image of the map

SO(n) → C^{n×n},  Q ↦ Q X0 Q^T.   (8)

From this definition, we see that the isospectral model is irreducible of the correct dimension. A complex symmetric matrix X is diagonalizable precisely when its minimal polynomial has no repeated factors. By [7, Theorem XI.4], diagonalizable implies orthogonally diagonalizable, so the condition that the minimal polynomial of X has no repeated factors suffices to prove that X is in Fl_c(k; n) for some choice of c.
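These defining conditions are easy to check numerically. The following sketch builds a point of the isospectral model for Fl(1, 3; 4) and verifies the minimal-polynomial and trace equations (the specific spectrum is an arbitrary generic choice for this illustration):

```python
import numpy as np

n = 4
c = np.array([2.0, -1.0, -1.0, 5.0])   # c1; c2 = c3; c4: eigenvalue blocks for Fl(1, 3; 4)
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # a random orthogonal matrix
X = Q @ np.diag(c) @ Q.T               # a point of the isospectral model
I = np.eye(n)
# product of (X - a Id) over the distinct eigenvalues a of the spectrum
min_poly = (X - 2.0 * I) @ (X + 1.0 * I) @ (X - 5.0 * I)
```

Conjugating diag(c) by any orthogonal Q kills each diagonal entry in one of the factors, so the product vanishes, while the trace records the multiplicities for generic c.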
Suppose X satisfies the polynomial Π_{j=1}^{r+1} (X − c_{kj} Id_n); then we know the eigenvalues of X are among the c_{kj}, but we do not know their multiplicities. For this task, we need trace(X) = Σ_{j=1}^{n} cj. To identify the multiplicities for non-generic c, one must consider the traces of powers of X as well. However, it is shown in [9] that for generic c, trace(X) gives enough information to determine the multiplicities of the eigenvalues. Thus a matrix X is in Fl_c(k; n) precisely when Π_{j=1}^{r+1} (X − c_{kj} Id_n) = 0 and trace(X) = Σ_{j=1}^{n} cj. This discussion shows that these equations cut out the isospectral model over C. We now prove that they generate a prime ideal.

Theorem 2.9. Let Fl(k; n) be a flag variety and let X be a symmetric matrix of unknowns. Given a generic choice of c1, ..., cn satisfying c_{kj+1} = ··· = c_{k_{j+1}} for j = 0, ..., r, the variety Fl_c(k; n) is smooth and its prime ideal is

I(Fl_c(k; n)) = ⟨ Π_{j=1}^{r+1} (X − c_{kj} Id_n), trace(X) − Σ_{j=1}^{n} cj ⟩.

Proof. We proceed as in the proofs of Theorems 2.3 and 2.7 and show that the affine scheme defined by
the right-hand-side ideal is smooth at every closed point. This proves smoothness of the variety and primeness of the ideal.

The orthogonal group O(n) acts transitively on closed points by conjugation. Thus every closed point is conjugate to the matrix X0 = diag(c1, ..., cn). Hence, we check smoothness at X0 using the Jacobian criterion. It suffices to check that the Jacobian J(c) evaluated at X0 has rank at least (k1+1 choose 2) + ··· + (n−kr+1 choose 2), the number of entries (i, j) with i ≤ j and ci = cj. The rows and columns of J(c) are both indexed by the entries (i, j) of the matrix X. We will show that the entry J(c)_{(i,j),(i′,j′)} is nonzero precisely when i = i′, j = j′, and ci = cj. With the correct ordering of variables, J(c) is diagonal with diagonal entries of the form J(c)_{(i,j),(i,j)}. It is then clear from counting that this matrix has the correct rank.

We first prove that the off-diagonal entries of J(c) are 0. If {i, j} ∩ {i′, j′} = ∅, by the product rule

∂(X^m)_{ij} / ∂x_{i′j′} |_{X=X0} = Σ_{ℓ=1}^{n} x_{iℓ} ∂(X^{m−1})_{ℓj} / ∂x_{i′j′} |_{X=X0} = ci ∂(X^{m−1})_{ij} / ∂x_{i′j′} |_{X=X0} = 0,

where the second equality comes from evaluating at X = X0 and the last equality is induction on m. Since the derivatives of all powers of X vanish, J(c)_{(i,j),(i′,j′)} = 0 if {i, j} ∩ {i′, j′} = ∅. Next suppose i = i′ and j ≠ j′. Again using the product rule,

∂(X^m)_{ij} / ∂x_{ij′} |_{X=X0} = (X^{m−1})_{j′j} + Σ_{ℓ=1}^{n} x_{iℓ} ∂(X^{m−1})_{ℓj} / ∂x_{ij′} |_{X=X0} = ci ∂(X^{m−1})_{ij} / ∂x_{ij′} |_{X=X0} = 0,

where the last equality is by induction on m. As above, we have J(c)_{(i,j),(i,j′)} = 0 for j ≠ j′. This concludes the proof that J(c) is a diagonal matrix.

Finally, we prove that the diagonal entries J(c)_{(i,j),(i,j)} are nonzero if ci = cj. We compute

∂(X^m)_{ij} / ∂x_{ij} |_{X=X0} = (X^{m−1})_{jj} + Σ_{ℓ=1}^{n} x_{iℓ} ∂(X^{m−1})_{ℓj} / ∂x_{ij} |_{X=X0} = c_j^{m−1} + ci ∂(X^{m−1})_{ij} / ∂x_{ij} |_{X=X0}.

By induction we see that ∂(X^m)_{ij} / ∂x_{ij} |_{X=X0} = Σ_{p=0}^{m−1} c_i^p c_j^{m−1−p}. If ci = cj, then ∂(X^m)_{ij} / ∂x_{ij} = m c_i^{m−1}, and so J(c)_{(i,j),(i,j)} = f′(ci), where f(z) = (z − c_{k1}) ··· (z − c_{k_{r+1}}) is the minimal polynomial of X. Since ci is a root of f and f has no double roots, J(c)_{(i,j),(i,j)} ≠ 0.
Since there are at least $\binom{k_1+1}{2} + \cdots + \binom{n-k_r+1}{2}$ nonzero entries on the diagonal of $J(c)$ and all off-diagonal entries are zero, we conclude that $J(c)$ has rank at least $\binom{k_1+1}{2} + \cdots + \binom{n-k_r+1}{2}$.

Example 2.10. We show the different ways of representing the flag variety $\mathrm{Fl}(1,2;3)$. In Stiefel coordinates, this flag variety is realized as $V_{2,3}/\mathrm{O}(1)\times\mathrm{O}(1)$ via a $3 \times 2$ matrix
$$Z = \begin{pmatrix} z_{11} & z_{12} \\ z_{21} & z_{22} \\ z_{31} & z_{32} \end{pmatrix}$$
with orthonormal columns. We identify such matrices up to the sign of each column. To obtain the Plücker coordinates for the flag represented by $Z$, we take minors. This yields six Plücker coordinates that realize this complete flag variety in $\mathbb{P}^2 \times \mathbb{P}^2$:
$$x_1 = z_{11}, \quad x_2 = z_{21}, \quad x_3 = z_{31},$$
$$x_{12} = z_{11}z_{22} - z_{12}z_{21}, \quad x_{13} = z_{11}z_{32} - z_{12}z_{31}, \quad x_{23} = z_{21}z_{32} - z_{22}z_{31}.$$
By Theorem 2.4 these coordinates satisfy a single relation $x_1 x_{23} - x_2 x_{13} + x_3 x_{12} = 0$. To represent the flag in projection coordinates, let $P_1$ be the unique orthogonal projection matrix onto the space spanned by the first column of $Z$ and $P_2$ be the unique orthogonal projection matrix onto the column space of $Z$:
$$P_1 = \begin{pmatrix} z_{11} \\ z_{21} \\ z_{31} \end{pmatrix}\begin{pmatrix} z_{11} & z_{21} & z_{31} \end{pmatrix}, \qquad P_2 = ZZ^T.$$
One checks that these matrices satisfy the relations
$$P_1^2 = P_1, \quad P_2^2 = P_2, \quad P_2 P_1 = P_1, \quad \mathrm{trace}(P_1) = 1, \quad \mathrm{trace}(P_2) = 2.$$
Finally, we can also represent this flag with a single matrix via
isospectral coordinates. We append a column to $Z$ to form a $3 \times 3$ orthogonal matrix $\tilde{Z}$ and fix the generic vector $c = (c_1, c_2, c_3)$. The isospectral coordinates for $Z$ with respect to $c$ are given by the matrix
$$S = \tilde{Z}\,\mathrm{diag}(c_1, c_2, c_3)\,\tilde{Z}^T,$$
whose eigenvalues are $c_1, c_2, c_3$ and whose eigenvectors are the columns of $\tilde{Z}$. This matrix satisfies its minimal polynomial and has the correct trace:
$$(S - c_1\mathrm{Id}_3)(S - c_2\mathrm{Id}_3)(S - c_3\mathrm{Id}_3) = 0, \qquad \mathrm{trace}(S) = c_1 + c_2 + c_3.$$

From this example, we see that it is not difficult to write the Plücker, projection, and isospectral coordinates once we have Stiefel coordinates for a flag. The following proposition summarizes what is known about how to write different coordinates for the flag variety in terms of other coordinates.

Proposition 2.11. The relationships between the different lives of the flag variety $\mathrm{Fl}(k_1, k_2, \ldots, k_r; n)$ are captured in Figure 1. In the figure, $Z$ is an $n \times k_r$ matrix in $V_{k_r,n}$, $\tilde{Z}$ is an $n \times n$ orthogonal matrix whose first $k_r$ columns comprise $Z$, and $Z_{I,J}$ denotes the submatrix of $Z$ with rows indexed by the set $I$ and columns indexed by the set $J$. The variables $x_{i_1,\ldots,i_{k_s}}$ indicate Plücker coordinates and $X_i$ denotes the cocircuit matrix for $\mathrm{Gr}(k_i, n)$ as in [5, Corollary 2.5]. The matrix $P_i$ is a projection matrix in the tuple $(P_1, \ldots, P_r)$. Finally, $S$ is an $n \times n$ symmetric matrix with eigenvalues $c = (c_1, \ldots, c_n)$.

Proof. The map from Stiefel to Plücker coordinates is as in (6). The map from Plücker to projection coordinates is explained for the Grassmannian in [5, Corollary 2.5] and extends naturally to the flag variety. To move from Stiefel coordinates to projection coordinates, one takes projection matrices onto the spans of the appropriate submatrices of $Z$. To go the other way, one takes a spectral decomposition of $P_r = \tilde{Z} E_{k_r} \tilde{Z}^T$, where $E_i$ is the $n \times n$ matrix whose $(1,1), \ldots, (i,i)$ entries are $1$ and all other entries are zero. Set $Z$ to be the first $k_r$ columns of $\tilde{Z}$.
The map from isospectral to Stiefel coordinates is similar: take the spectral decomposition of $S = \tilde{Z}\,\mathrm{diag}(c_1, \ldots, c_n)\,\tilde{Z}^T$ and remove the last $n - k_r$ columns from $\tilde{Z}$. Given Stiefel coordinates $Z$, one forms $\tilde{Z}$ by extending the columns of $Z$ to an orthonormal basis for $\mathbb{C}^n$. The equivalence between these coordinates follows from the definitions of the Stiefel and isospectral coordinates. To move from projection coordinates to isospectral coordinates, one composes the maps passing through the Stiefel coordinates. Write the diagonal matrix
$$\mathrm{diag}(c_1, \ldots, c_n) = c_n\mathrm{Id}_n + (c_{k_r} - c_n)E_{k_r} + \cdots + (c_{k_1} - c_{k_2})E_{k_1}.$$
Conjugating with the augmented Stiefel coordinates $\tilde{Z}$ yields the expression for $S$ in terms of projection coordinates.

[Figure 1: Diagram explaining how to move from one life of the flag variety to another, with nodes Stiefel, Plücker, Projection, and Isospectral. If $A \to B$ in the diagram, the edge label explains how to write the $B$ coordinates in terms of the $A$ coordinates. The edge labels are $x_{i_1,\ldots,i_{k_s}} = \det(Z_{i_1\cdots i_{k_s},\,1\cdots k_s})$, $P_i = Z_{1\cdots n,\,1\cdots k_i} Z_{1\cdots n,\,1\cdots k_i}^T$, $S = \tilde{Z}\,\mathrm{diag}(c_1, \ldots, c_n)\,\tilde{Z}^T$, $S = c_n\mathrm{Id}_n + (c_{k_r} - c_n)P_r + \cdots + (c_{k_1} - c_{k_2})P_1$, and $P_i = \frac{k_i}{\mathrm{trace}(X_i^2)}\,X_i^2$. Two of the arrows are bidirectional, meaning that one direction comes from matrix multiplication and the other comes from a matrix factorization.]
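The coordinate changes of Figure 1 can be checked numerically on the running example $\mathrm{Fl}(1,2;3)$. The following sketch (illustrative code, not from the paper; the random seed and eigenvalues are ours) builds projection and isospectral coordinates from random Stiefel coordinates and verifies the projection-to-isospectral edge label:

```python
# Sanity check (illustrative, not from the paper) of the coordinate changes
# for Fl(1,2;3): from Stiefel coordinates Z, build P1, P2, S and verify
# S = c_3*Id + (c_2 - c_3)*P_2 + (c_1 - c_2)*P_1.
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # random orthogonal matrix Z~
Z = Q[:, :2]                                      # Stiefel coordinates
c = np.array([2.0, 3.0, 7.0])                     # generic eigenvalues

P1 = np.outer(Z[:, 0], Z[:, 0])   # projection onto the first column of Z
P2 = Z @ Z.T                      # projection onto the column space of Z
S = Q @ np.diag(c) @ Q.T          # isospectral coordinates

assert np.allclose(P2 @ P1, P1)                      # flag relation P2 P1 = P1
assert np.allclose(np.trace(S), c.sum())             # correct trace
assert np.allclose(S, c[2] * np.eye(3) + (c[1] - c[2]) * P2 + (c[0] - c[1]) * P1)
```

The last assertion is exactly the conjugated form of $\mathrm{diag}(c_1, c_2, c_3) = c_3\mathrm{Id}_3 + (c_2 - c_3)E_2 + (c_1 - c_2)E_1$ from the proof above.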
3 The Multi-Eigenvector Problem

In this section, we give three different formulations of the multi-eigenvector problem (3). The first two are formulated as optimization problems over a Grassmannian in Stiefel and projection coordinates. The third one utilizes a flag variety in its isospectral formulation. In all three cases we describe the sets of critical points of these optimization problems, where in the Stiefel case we describe the $\mathrm{O}(k)$-orbit of the critical points in $V_{k,n}$.

3.1 Critical points in Stiefel coordinates

The goal of this section is to prove Theorem 1.1, which characterizes the critical points of the multi-eigenvector problem in Stiefel coordinates. We first prove the following lemma.

Lemma 3.1. Suppose $A \in \mathbb{C}^{n \times n}$ is orthogonally diagonalizable and has full rank. Let $Z \in V_{k,n}$. If $M \in \mathbb{C}^{k \times k}$ is such that $AZ = ZM$, then $M$ is also orthogonally diagonalizable.

Proof. Let $A = U\Lambda U^T$ be a factorization of $A$ with $\Lambda$ diagonal and $U \in \mathrm{O}(n)$. Then $AZ = U\Lambda U^T Z = ZM$, and rearranging we have $M = (Z^T U)\Lambda(U^T Z)$, where $(Z^T U)(U^T Z) = \mathrm{Id}_k$. This factorization implies that for every diagonal entry $\lambda_i$ of $\Lambda$, $M(Z^T U)_i = \lambda_i (Z^T U)_i$. Since $M$ has at most $k$ eigenvalues and the $\lambda_i$ are nonzero, $n - k$ of the columns of $Z^T U$ are forced to be zero. This yields an orthogonal diagonalization for $M$.

Proof of Theorem 1.1. Write $Z_i$ for the $i$th column of $Z$. Note first that $\mathrm{trace}(Z^T A Z) = \sum_{i=1}^{k} Z_i^T A Z_i$. This allows us to compute
$$\nabla\,\mathrm{trace}(Z^T A Z) = 2\begin{pmatrix} Z_1^T A & Z_2^T A & \cdots & Z_k^T A \end{pmatrix},$$
which is a row vector with $nk$ entries. The Jacobian of the polynomials defined by $Z^T Z = \mathrm{Id}_k$ is $\mathrm{Jac}(Z)$ as in (5). A critical point is a vector $Z$ satisfying $Z^T Z = \mathrm{Id}_k$ where $\nabla\,\mathrm{trace}(Z^T A Z)$ is in the row span of $\mathrm{Jac}(Z)$. In particular, there exist $\mu_{ij} \in \mathbb{C}$, $1 \leq i, j \leq k$, with $\mu_{ij} = \mu_{ji}$ such that $AZ_i = \sum_{j=1}^{k} \mu_{ij} Z_j$. These $\mu_{ij}$ can be assembled into a $k \times k$ symmetric matrix $M$ satisfying $AZ = ZM$. So $Z$ is a critical point if and only if it satisfies $Z^T Z = \mathrm{Id}_k$ and there exists a symmetric matrix $M \in \mathrm{Sym}(\mathbb{C}^k)$ such that $AZ = ZM$. If such a matrix $M$ exists, then by Lemma 3.1, it is orthogonally diagonalizable.
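As a numerical aside (our illustrative sketch over the reals, not code from the paper), the condition just derived is easy to test: a frame of eigenvectors of $A$ satisfies $AZ = ZM$ with $M = Z^T A Z$ symmetric, while a generic orthonormal frame does not.

```python
# Sketch (illustrative, not from the paper) of the first-order condition:
# Z is critical for trace(Z^T A Z) on V_{k,n} iff A Z = Z M with
# M = Z^T A Z symmetric. We work over the reals for simplicity.
import numpy as np

rng = np.random.default_rng(1)
n, k = 5, 2
B = rng.standard_normal((n, n))
A = B + B.T                      # real symmetric, hence orthogonally diagonalizable

w, U = np.linalg.eigh(A)
Zcrit = U[:, :k]                 # columns: two eigenvectors of A
M = Zcrit.T @ A @ Zcrit
assert np.allclose(A @ Zcrit, Zcrit @ M)   # critical: A Z = Z M
assert np.allclose(M, M.T)                 # M symmetric (here even diagonal)

Zrand, _ = np.linalg.qr(rng.standard_normal((n, k)))   # generic point of V_{k,n}
resid = A @ Zrand - Zrand @ (Zrand.T @ A @ Zrand)
assert np.linalg.norm(resid) > 1e-6        # a generic frame is not critical
```

Here the residual $AZ - Z(Z^T A Z) = (\mathrm{Id}_n - ZZ^T)AZ$ vanishes exactly when the column space of $Z$ is $A$-invariant, matching the characterization in the proof.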
Therefore we replace the condition $AZ = ZM$ with the condition $AZ = ZQ^T\Lambda Q$, where $Q \in \mathrm{O}(k)$ and $\Lambda$ is diagonal. Thus $Z$ is a critical point if and only if it satisfies $Z^T Z = \mathrm{Id}_k$ and $AZQ^T = ZQ^T\Lambda$. These conditions hold precisely when the columns of $ZQ^T$ are a subset of an orthonormal eigenbasis for $A$, which proves that $Z$ has the desired form.

3.2 Critical points in projection coordinates

In Stiefel coordinates, we work with an orbit of critical points in $V_{k,n}$ which corresponds to a $k$-dimensional subspace spanned by $k$ eigenvectors of $A$. It is desirable to formulate this problem so that it has finitely many critical points, because equivalence classes are difficult to compute with. We therefore transform the multi-eigenvector problem into projection coordinates for the Grassmannian as in Section 2. If $Z$ is $n \times k$ and $Z^T Z = \mathrm{Id}_k$, then $P = ZZ^T$ projects onto the column space of $Z$. Then using the trace trick, we have $\mathrm{trace}(Z^T A Z) = \mathrm{trace}(AP)$, and we can reformulate our problem as
$$\min/\max_{\substack{P^2 = P \\ \mathrm{trace}(P) = k}} \mathrm{trace}(AP). \tag{9}$$

We first prove a lemma relating critical points under a polynomial map. We will use this result a few times in the paper.

Lemma 3.2. Let $V \subset \mathbb{C}^s$ and $W \subset \mathbb{C}^t$ be smooth