argument (see, e.g., [19, Ch. 4]), we then have, with probability at least $1 - 2e^{-t}$,
\[
\|Y_\tau - \mathbb{E}Y_\tau\|_{\ell_2} \le c\Big(K^4\sqrt{n(d+t)} + K^2\tau(d+t)\Big) \le cK^4\Big(\sqrt{n(d+t)} + (\log n)(d+t)\Big).
\]
Taking $t = 3\log n$ and noting that $\mathcal{A}^*\mathcal{A}(x_*x_*^*) \succeq Y_\tau$, which implies $\mathbb{E}\mathcal{A}^*\mathcal{A}(x_*x_*^*) \succeq \mathbb{E}Y_\tau$, we have, with probability at least $1 - 3n^{-3}$,
\[
\mathcal{A}^*\mathcal{A}(x_*x_*^*) - \mathbb{E}\mathcal{A}^*\mathcal{A}(x_*x_*^*) \preceq Y_\tau - \mathbb{E}Y_\tau \preceq cK^4\Big(\sqrt{n(d+\log n)} + (d+\log n)\log n\Big) I_d.
\]
Rescaling by $1/n$ gives the result for $r = 1$. Applying it to each term in the eigenvalue decomposition
\[
Z_* = \sum_{k=1}^r \lambda_k(Z_*)\, u_k u_k^*
\]
and taking a union bound gives the result with probability at least $1 - 3rn^{-3} \ge 1 - 3n^{-2}$.

5 Analysis with restricted lower isometry

In this section, we provide proofs of Theorems 1 and 2. These results assume no noise ($\xi = 0$) and a rank-one $Z_* = x_*x_*^*$ for some $x_* \in \mathbb{F}^d$.

Before continuing to the full proofs, to see the benefits of overparametrization, it is helpful to consider how one would prove a simplified version of Theorem 2. In the notation of that result, suppose we ignore the trace term in (5) (setting $\beta$ to zero) and obtain from (7) the condition $(p+2)\alpha > 2L$. If $X$ is a second-order critical point of (BM-LS), combining the assumed inequalities (5) and (6) with Lemma 1 (with $\xi = 0$) gives, for any $u \in \mathbb{F}^p$,
\[
(p+2)\alpha\|XX^* - x_*x_*^*\|_F^2 \le (p+2)\|\mathcal{A}(XX^* - x_*x_*^*)\|^2 \le 2\|\mathcal{A}^*\mathcal{A}(x_*x_*^*)\|_{\ell_2}\|x_* - Xu\|^2 \le 2L\|x_*\|^2\|x_* - Xu\|^2.
\]
An obvious choice of $u$ is one that minimizes $\|x_* - Xu\|^2$. This also means that $x_* - Xu \in \operatorname{range}(X)^\perp$, which implies
\[
\begin{aligned}
\|XX^* - x_*x_*^*\|_F^2 &= \|X(X - x_*u^*)^* - (x_* - Xu)x_*^*\|_F^2 \\
&= \|X(X - x_*u^*)^*\|_F^2 + \|(x_* - Xu)x_*^*\|_F^2 \\
&\ge \|x_*\|^2\|x_* - Xu\|^2.
\end{aligned}
\]
We thus obtain the inequality
\[
(p+2)\alpha\|x_*\|^2\|x_* - Xu\|^2 \le 2L\|x_*\|^2\|x_* - Xu\|^2.
\]
Because we assumed $(p+2)\alpha > 2L$, we must have $\|x_* - Xu\| = 0$.
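The Pythagorean splitting in the last display (valid because $x_* - Xu \perp \operatorname{range}(X)$) is easy to sanity-check numerically; a minimal sketch in the real case, with hypothetical small dimensions and $u$ computed by least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
d, p = 6, 3  # hypothetical small dimensions
X = rng.standard_normal((d, p))
x_star = rng.standard_normal(d)

# u minimizing ||x_star - X u|| makes the residual orthogonal to range(X)
u = np.linalg.lstsq(X, x_star, rcond=None)[0]
res = x_star - X @ u
assert np.allclose(X.T @ res, 0)  # residual lies in range(X)^perp

lhs = np.linalg.norm(X @ X.T - np.outer(x_star, x_star), "fro") ** 2
term1 = np.linalg.norm(X @ (X - np.outer(x_star, u)).T, "fro") ** 2
term2 = np.linalg.norm(np.outer(res, x_star), "fro") ** 2
assert np.isclose(lhs, term1 + term2)  # Pythagorean identity
# lower bound ||x_*||^2 ||x_* - X u||^2 from dropping the first term
assert lhs + 1e-9 >= np.linalg.norm(x_star) ** 2 * np.linalg.norm(res) ** 2
```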
https://arxiv.org/abs/2505.02636v1

Tracing back through our inequalities then shows $\|XX^* - x_*x_*^*\|_F = 0$, that is, $XX^* = x_*x_*^*$.

Unfortunately, even for Gaussian measurements, considering the approximate values of $\alpha$ and $L$ suggested by the expectation (3), the condition $(p+2)\alpha > 2L$ will not be satisfied when $p = 1$; the above simple analysis is too loose in this case. We therefore need a more careful analysis that includes the trace term (corresponding to the parameter $\beta$ of Theorem 2) of (3).

We begin with a proof of Theorem 1, which only considers the optimization rank parameter $p = 1$. Although part 1 is a direct consequence of the more general result Theorem 2, we provide a full proof for pedagogy and motivation, as the additional calculations necessary to incorporate the trace (i.e., $\beta$) term of (5) are simplest in the case $p = 1$. First, in order to prove part 2, we need, along with Lemma 2 above, another concentration result for Gaussian measurements.

Lemma 3 ([10, Lemma 22]). Consider Gaussian measurements of the form $A_i = a_i a_i^*$, where $a_1, \dots, a_n$ are i.i.d. real or complex standard Gaussian vectors. For a function $c_1(\delta) > 0$ depending only on $\delta$ and a universal constant $c_2 > 0$, for any $\delta \in (0,1)$, if $n \ge c_1(\delta)\, d\log d$, then, with probability at least $1 - c_2 n^{-2}$, uniformly over $x, z \in \mathbb{F}^d$,
\[
\frac{1}{n}\|\mathcal{A}(xx^* - zz^*)\|^2 \ge (1-\delta)\,\mathbb{E}\frac{1}{n}\|\mathcal{A}(xx^* - zz^*)\|^2 = (1-\delta)\big[m_4\|xx^* - zz^*\|_F^2 + (\|x\|^2 - \|z\|^2)^2\big].
\]
The cited lemma is precisely the above in the complex case with some simplifications in the statement; the real case holds by the same arguments.

Proof of Theorem 1. First, we show how to obtain part 2 from part 1. Fix $\delta_U, \delta_L > 0$ sufficiently small for the assumption (4) to hold with $m = m_4$. The expectation calculation (3) gives $\frac{1}{n}\|\mathbb{E}\mathcal{A}^*\mathcal{A}(x_*x_*^*)\|_{\ell_2} = (1+m_4)\|x_*\|^2$. By Lemma 2 (we can take $K \approx 1$), if $n \gtrsim \delta_U^{-2}d + \delta_U^{-1}d\log d$, we have, with probability at least $1 - 3n^{-2}$,
\[
\frac{1}{n}\|\mathcal{A}^*\mathcal{A}(x_*x_*^*)\|_{\ell_2} \le \frac{1}{n}\|\mathbb{E}\mathcal{A}^*\mathcal{A}(x_*x_*^*)\|_{\ell_2} + \delta_U\|x_*\|^2 = (1 + m_4 + \delta_U)\|x_*\|^2.
\]
For the lower isometry assumption, we can directly apply Lemma 3; if $n \ge c_1(\delta_L)\, d\log d$, the assumption holds with probability at least $1 - c_2 n^{-2}$. Combining the failure probabilities with a union bound gives the result.

We now turn to proving part 1. We adopt the cleaner notation of Theorem 2 and set $\alpha = (1-\delta_L)m$, $\beta = 1-\delta_L$, and $L = 1+m+\delta_U$. Lemma 1 then implies, for any $s \in \mathbb{F}$,
\[
3\alpha\|xx^* - x_*x_*^*\|_F^2 + 3\beta(\|x\|^2 - \|x_*\|^2)^2 \le 2L\|x_*\|^2\|x_* - sx\|^2. \tag{13}
\]
The obvious choice of $s$ is the one that minimizes $\|x_* - sx\|$, which, by standard linear algebra calculations, is such that $x_* - sx \perp x$ and (if $x \ne 0$)
\[
\|x_* - sx\|^2 = \|x_*\|^2 - \frac{|\langle x, x_*\rangle|^2}{\|x\|^2} = (1-\rho^2)\|x_*\|^2, \tag{14}
\]
where
\[
\rho^2 := \frac{|\langle x, x_*\rangle|^2}{\|x\|^2\|x_*\|^2}
\]
is the absolute squared correlation between $x$ and $x_*$. If $x = 0$, the same holds with $\rho^2 = 0$. As $x_* - sx \perp x$, we additionally have
\[
\begin{aligned}
\|xx^* - x_*x_*^*\|_F^2 &= \|x(x - s^*x_*)^* - (x_* - sx)x_*^*\|_F^2 \\
&= \|x(x - s^*x_*)^*\|_F^2 + \|(x_* - sx)x_*^*\|_F^2 \\
&= \|x\|^2\|x - s^*x_*\|^2 + \|x_*\|^2\|x_* - sx\|^2 \\
&\ge (\|x\|^4 + \|x_*\|^4)(1-\rho^2). \tag{15}
\end{aligned}
\]
The last inequality uses (cf. (14))
\[
\|x - s^*x_*\|^2 \ge \min_{s' \in \mathbb{F}}\|x - s'x_*\|^2 = (1-\rho^2)\|x\|^2.
\]
Plugging (14) and (15) into (13), we obtain
\[
3\alpha(\|x\|^4 + \|x_*\|^4)(1-\rho^2) + 3\beta(\|x\|^2 - \|x_*\|^2)^2 \le 2L\|x_*\|^4(1-\rho^2).
\]
If $\rho^2 = 1$, then, as $\beta = 1-\delta_L > 0$, we must have $\|x\|^2 = \|x_*\|^2$, and we are done. If $\rho^2 < 1$, then we can divide by $1-\rho^2$ and obtain the (weaker) inequality
\[
3\alpha(\|x\|^4 + \|x_*\|^4) + 3\beta(\|x\|^2 - \|x_*\|^2)^2 \le 2L\|x_*\|^4.
\]
Now assume, without loss of generality, that $\|x_*\| = 1$, and set $t = \|x\|^2$.
The above inequality can be rearranged to obtain
\[
0 \ge 3(\alpha+\beta)t^2 - 6\beta t + 3(\alpha+\beta) - 2L \ge -\frac{3\beta^2}{\alpha+\beta} + 3(\alpha+\beta) - 2L,
\]
where the second inequality comes from minimizing the previous expression over $t$ with $t = \frac{\beta}{\beta+\alpha}$. Multiplying by $\alpha+\beta$ and rearranging gives
\[
3(\alpha^2 + 2\beta\alpha) \le 2L(\alpha+\beta).
\]
Plugging in our values of $\alpha$, $\beta$, and $L$ gives
\[
3(1-\delta_L)^2(m^2 + 2m) \le 2(1+m+\delta_U)(1-\delta_L)(1+m).
\]
Some algebra gives
\[
m^2 + 2m - 2 \le 3(m^2 + 2m)\delta_L + 2(m+1)\delta_U,
\]
which the condition (4) contradicts. This completes the proof.

With this as a warmup, we now continue to the slightly more complicated general case $p \ge 1$:

Proof of Theorem 2. The inequalities (5) and (6) and Lemma 1 imply, for any $u \in \mathbb{F}^p$,
\[
\alpha\|XX^* - x_*x_*^*\|_F^2 + \beta(\|X\|_F^2 - \|x_*\|^2)^2 \le \frac{2L}{p+2}\|x_*\|^2\|x_* - Xu\|^2.
\]
We again choose $u$ to minimize $\|x_* - Xu\|^2$. Explicitly, we take $u = X^\dagger x_*$, where $X^\dagger$ is the Moore-Penrose pseudoinverse of $X$. Again, this ensures that $x_* - Xu \in \operatorname{range}(X)^\perp$, so
\[
\|XX^* - x_*x_*^*\|_F^2 = \|X(X - x_*u^*)^* - (x_* - Xu)x_*^*\|_F^2 = \|X(X - x_*u^*)^*\|_F^2 + \|x_*\|^2\|x_* - Xu\|^2.
\]
Combined with the previous inequality, we obtain
\[
\alpha\|X(X - x_*u^*)^*\|_F^2 + \beta(\|X\|_F^2 - \|x_*\|^2)^2 \le \left(\frac{2L}{p+2} - \alpha\right)\|x_*\|^2\|x_* - Xu\|^2.
\]
We now set
\[
\rho^2 := \frac{\langle P_X, x_*x_*^*\rangle}{\|x_*\|^2} = \frac{\|P_X x_*\|^2}{\|x_*\|^2},
\]
where $P_X = XX^\dagger$ is the orthogonal projection matrix onto $\operatorname{range}(X)$. Note that in the case $p = 1$, this reduces to the same quantity as in the proof of Theorem 1 above. Due to the choice $u = X^\dagger x_*$, we have
\[
\|x_* - Xu\|^2 = \|x_*\|^2 - \|Xu\|^2 = \|x_*\|^2 - \|P_X x_*\|^2 = \|x_*\|^2(1-\rho^2).
\]
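The identities around the choice $u = X^\dagger x_*$ can be checked numerically; a minimal sketch in the real case, with hypothetical small dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)
d, p = 7, 3  # hypothetical small dimensions
X = rng.standard_normal((d, p))
x_star = rng.standard_normal(d)

u = np.linalg.pinv(X) @ x_star   # u = X^dagger x_*
P_X = X @ np.linalg.pinv(X)      # orthogonal projector onto range(X)
rho2 = np.linalg.norm(P_X @ x_star) ** 2 / np.linalg.norm(x_star) ** 2

assert np.allclose(X.T @ (x_star - X @ u), 0)  # x_* - X u in range(X)^perp
res2 = np.linalg.norm(x_star - X @ u) ** 2
# ||x_* - X u||^2 = ||x_*||^2 (1 - rho^2)
assert np.isclose(res2, np.linalg.norm(x_star) ** 2 * (1 - rho2))
```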
Furthermore,
\[
\begin{aligned}
\|X(X - x_*u^*)^*\|_F^2 &= \|XX^* - Xu x_*^*\|_F^2 \\
&= \|XX^*\|_F^2 + \|XX^\dagger x_* x_*^*\|_F^2 - 2\langle XX^*, XX^\dagger x_* x_*^*\rangle \\
&= \|XX^*\|_F^2 + \|x_*\|^2\|P_X x_*\|^2 - 2\langle XX^*, x_* x_*^*\rangle \\
&\ge \|XX^*\|_F^2 + \|x_*\|^2\|P_X x_*\|^2 - 2\|XX^*\|_F\|P_X x_*\|^2 \\
&= \|XX^*\|_F^2 + \|x_*\|^4\rho^2 - 2\|XX^*\|_F\|x_*\|^2\rho^2 \\
&= (1-\rho^2)\|XX^*\|_F^2 + \rho^2(\|XX^*\|_F - \|x_*\|^2)^2 \\
&\ge (1-\rho^2)\|XX^*\|_F^2.
\end{aligned}
\]
We thus obtain
\[
\alpha\|XX^*\|_F^2(1-\rho^2) + \beta(\|X\|_F^2 - \|x_*\|^2)^2 \le \left(\frac{2L}{p+2} - \alpha\right)\|x_*\|^4(1-\rho^2).
\]
If $\rho^2 = 1$, then tracing through our inequalities reveals (as $\alpha > 0$) $\|XX^* - x_*x_*^*\|_F = 0$. Otherwise, dividing through by $1-\rho^2$ and noting that $\|X\|_F^2 = \operatorname{tr}(XX^*) \le \sqrt{p}\,\|XX^*\|_F$, we obtain the weaker inequality
\[
\frac{\alpha}{p}\|X\|_F^4 + \beta(\|X\|_F^2 - \|x_*\|^2)^2 \le \left(\frac{2L}{p+2} - \alpha\right)\|x_*\|^4.
\]
The rest is similar to the proof of Theorem 1 above. Assume, without loss of generality, that $\|x_*\| = 1$, and set $t = \|X\|_F^2$. The last inequality can be rewritten as
\[
\begin{aligned}
0 &\ge \frac{\alpha}{p}t^2 + \beta(t-1)^2 - \left(\frac{2L}{p+2} - \alpha\right) \\
&= \left(\frac{\alpha}{p} + \beta\right)t^2 - 2\beta t + \alpha + \beta - \frac{2L}{p+2} \\
&\ge -\frac{\beta^2}{\beta + \alpha/p} + \alpha + \beta - \frac{2L}{p+2} \\
&= \alpha + \frac{\alpha\beta}{p\beta + \alpha} - \frac{2L}{p+2}.
\end{aligned}
\]
The second inequality comes from minimization over $t$ with $t = \frac{\beta}{\beta + \alpha/p}$. The condition (7) implies that this last expression is strictly positive, giving a contradiction.

6 Sub-Gaussian measurements

In this section, we prove Theorem 3. We will do this with the help of the following technical lemma:

Lemma 4 ([18]). Under the conditions of Theorem 3, there exist constants $c_1, c_2, c_3, c_4 > 0$ depending only on the properties of $w$ such that, for $n \ge c_1 d$, with probability at least $1 - c_2 e^{-c_3 n}$, for all $Z \succeq 0$,
\[
\frac{1}{\sqrt{n}}\|\mathcal{A}(Z - Z_*)\| \ge \frac{1}{n}\|\mathcal{A}(Z - Z_*)\|_1 \ge c_4\|Z - Z_*\|_*.
\]
This summarizes several intermediate results of [18] (in particular, their Lemmas 3, 4, and 5). With this, we can continue to the main proof:

Proof of Theorem 3. We will use $c$, $c'$, etc. to denote positive constants, depending only on the properties of $w$, which may change from one use to another. Expectation calculations (e.g., [18, Lem. 9]) give
\[
\mathbb{E}\frac{1}{n}\mathcal{A}^*\mathcal{A}(Z_*) = (\operatorname{tr} Z_*)I_d + Z_* + (|\mathbb{E}w^2|^2)\overline{Z_*} + (\mathbb{E}|w|^4 - 2 - |\mathbb{E}w^2|^2)\operatorname{ddiag}(Z_*) \preceq c\|x_*\|^2 I_d,
\]
where $\overline{Z_*}$ is the elementwise complex conjugate of $Z_*$, and $\operatorname{ddiag}\colon \mathbb{H}^d \to \mathbb{H}^d$ extracts the diagonal entries of a matrix. Together with Lemma 2, we obtain, with probability at least $1 - cn^{-2}$,
\[
\frac{1}{n}\|\mathcal{A}^*\mathcal{A}(Z_*)\|_{\ell_2} \le c\left(1 + \frac{d\log n}{n}\right)\|x_*\|^2 \le c\left(1 + \frac{d\log d}{n}\right)\|x_*\|^2.
\]
The second inequality follows from the observation that, for $n \ge d$, $\frac{d\log n}{n} \lesssim \max\{1, \frac{d\log d}{n}\}$. Next, Lemma 4 (we relax the probability bound) gives, with probability at least $1 - cn^{-2}$, for all $Z \succeq 0$,
\[
\frac{1}{n}\|\mathcal{A}(Z - Z_*)\|^2 \ge c\|Z - Z_*\|_*^2 \ge c\|Z - Z_*\|_F^2.
\]
Along with a union bound for the probabilities, we can apply Theorem 2 with $\alpha = c$, $\beta = 0$, and $L = c'\big(1 + \frac{d\log d}{n}\big)$ to obtain the result.

7 PhaseLift dual certificate

In this section, we develop our landscape analysis of (BM-LS) by the method of dual certificates. We fix the rank-$r$ ground-truth matrix $Z_* = X_*X_*^*$, where $X_* \in \mathbb{F}^{d\times r}$. Let $Z_*$ have eigenvalue decomposition $Z_* = U\Lambda U^*$, where $U \in \mathbb{F}^{d\times r}$ with $U^*U = I_r$, and $\Lambda$ is an $r \times r$ diagonal matrix with diagonal entries $\lambda_1(Z_*) \ge \cdots \ge \lambda_r(Z_*) > 0$. Note that, for $k = 1, \dots, r$, $\lambda_k(Z_*) = \sigma_k^2(X_*)$. We write $P_U := UU^*$ and $P_U^\perp := I_d - P_U$ for the orthogonal projection matrices onto $\operatorname{range}(Z_*)$ and its orthogonal complement, respectively. We denote by $T$ the tangent space of rank-$r$ matrices at $Z_*$, given by
\[
T = \{UB^* + BU^* : B \in \mathbb{F}^{d\times r}\} \subset \mathbb{H}^d.
\]
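Elements of $T$ are Hermitian and have rank at most $2r$, which is what drives the $\sqrt{2r}$ factors appearing later; a quick check in the real symmetric case, with hypothetical sizes:

```python
import numpy as np

rng = np.random.default_rng(2)
d, r = 9, 2  # hypothetical sizes
U = np.linalg.qr(rng.standard_normal((d, r)))[0]  # orthonormal columns
B = rng.standard_normal((d, r))

T_elem = U @ B.T + B @ U.T   # generic element U B^* + B U^* of T
assert np.allclose(T_elem, T_elem.T)           # symmetric (Hermitian)
assert np.linalg.matrix_rank(T_elem) <= 2 * r  # rank at most 2r
nuc = np.linalg.norm(T_elem, "nuc")
fro = np.linalg.norm(T_elem, "fro")
assert nuc <= np.sqrt(2 * r) * fro + 1e-9      # ||.||_* <= sqrt(2r) ||.||_F on T
```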
We denote by $T^\perp$ its orthogonal complement in $\mathbb{H}^d$ (with respect to the Frobenius inner product). The orthogonal projections onto $T$ and $T^\perp$ are given, for $S \in \mathbb{H}^d$, by
\[
P_T(S) = SP_U + P_U S P_U^\perp = P_U S + P_U^\perp S P_U \quad\text{and}\quad P_{T^\perp}(S) = P_U^\perp S P_U^\perp.
\]
For a deterministic landscape result, we will make two key assumptions, which resemble those made and, for certain measurement models, proved in papers studying PhaseLift such as [2], [3], [27].

Assumption 1 (Dual certificate). For some $\epsilon \ge 0$, there exists $\lambda \in \mathbb{R}^n$ such that $Y := \mathcal{A}^*(\lambda)$ satisfies
\[
P_{T^\perp}(Y) \succeq P_U^\perp \quad\text{and}\quad \|P_T(Y)\|_F \le \epsilon.
\]
This is simply a higher-rank analog of the inexact dual certificate introduced in [2], [3]. The quantity $\|\lambda\|$ will be important in our analysis.

Assumption 2 (Approximate isometry). For some $\mu_T, L_T > 0$,
\[
\frac{1}{\sqrt{n}}\|\mathcal{A}(H)\| \ge \mu_T\|P_T(H)\|_F - L_T\operatorname{tr}(P_{T^\perp}(H))
\]
for all $H \in \mathbb{H}^d$ with $P_{T^\perp}(H) \succeq 0$.

The papers [2], [3] instead used the separate assumptions $\frac{1}{n}\|\mathcal{A}(H)\|_1 \ge \mu_T\|H\|_F$ for all $H \in T$ and $\frac{1}{n}\|\mathcal{A}(H)\|_1 \le L_T\operatorname{tr} H$ for all $H \succeq 0$. The combination of these, together with the norm inequality $\|\mathcal{A}(H)\| \ge \frac{1}{\sqrt{n}}\|\mathcal{A}(H)\|_1$, immediately implies Assumption 2, but this separation turns out to be suboptimal for our derived results.

We can now state our main deterministic result:

Theorem 5. Suppose Assumptions 1 and 2 hold with $\mu_T > L_T\epsilon$, and suppose the rank parameter $p$ in (BM-LS) satisfies
\[
p > \tau := 2\left(\frac{1 + L_T\sqrt{n}\|\lambda\|}{\mu_T - L_T\epsilon}\right)^2 \frac{\|\mathcal{A}^*(y)\|_{\ell_2}}{n\lambda_r(Z_*)} - 2. \tag{16}
\]
Then every second-order critical point $X$ of (BM-LS) satisfies
\[
\|XX^* - Z_*\|_F \le \|P_T(XX^* - Z_*)\|_F + \operatorname{tr} P_{T^\perp}(XX^*) \le \frac{p+2}{p-\tau}\left(\frac{1 + \epsilon + \sqrt{n}\|\lambda\|(L_T + \mu_T)}{\mu_T - L_T\epsilon}\right)^2 \frac{\sqrt{2r}\,\|\mathcal{A}^*(\xi)\|_{\ell_2}}{n}.
\]

Proof. Let $X$ be a second-order critical point of (BM-LS). Then, for any matrix $R \in \mathbb{F}^{r\times p}$, Lemma 1 gives
\[
\|\mathcal{A}(XX^* - Z_*)\|^2 \le \langle\xi, \mathcal{A}(XX^* - Z_*)\rangle + \frac{2\|\mathcal{A}^*(y)\|_{\ell_2}}{p+2}\|X_* - XR\|_F^2.
\]
Set $H = XX^* - Z_*$.
We rearrange the previous inequality as
\[
\frac{p-\tau}{p+2}\|\mathcal{A}(H)\|^2 \le \langle\xi, \mathcal{A}(H)\rangle + \frac{1}{p+2}\Big[2\|\mathcal{A}^*(y)\|_{\ell_2}\|X_* - XR\|_F^2 - (\tau+2)\|\mathcal{A}(H)\|^2\Big]. \tag{17}
\]
We first consider the second term on the right-hand side of (17), showing that it cannot be positive. We need to lower bound $\|\mathcal{A}(H)\|$. We will denote, for brevity, $H_T = P_T(H)$ and $H_{T^\perp} = P_{T^\perp}(H) = P_U^\perp XX^* P_U^\perp \succeq 0$. From Cauchy-Schwarz and Assumption 1, we have
\[
\|\lambda\|\|\mathcal{A}(H)\| \ge \langle Y, H\rangle = \langle P_{T^\perp}(Y), H_{T^\perp}\rangle + \langle P_T(Y), H_T\rangle \ge \operatorname{tr} H_{T^\perp} - \epsilon\|H_T\|_F.
\]
We can add this (scaled by $L_T$) to the inequality from Assumption 2 to obtain
\[
\left(\frac{1}{\sqrt{n}} + L_T\|\lambda\|\right)\|\mathcal{A}(H)\| \ge (\mu_T - L_T\epsilon)\|H_T\|_F,
\]
which implies
\[
\|\mathcal{A}(H)\|^2 \ge n\left(\frac{\mu_T - L_T\epsilon}{1 + L_T\sqrt{n}\|\lambda\|}\right)^2\|H_T\|_F^2. \tag{18}
\]
Now, choose $R \in \mathbb{F}^{r\times p}$ such that $(X_* - XR)^*X = 0$. We then have
\[
\|H_T\|_F \ge \|P_U H\|_F \ge \|P_U H P_X^\perp\|_F = \|X_*(X_* - XR)^*\|_F \ge \sigma_r(X_*)\|X_* - XR\|_F, \tag{19}
\]
where $P_X^\perp$ is the orthogonal projection matrix onto $\operatorname{range}(X)^\perp \subseteq \mathbb{F}^d$. Combining (18) and (19) and recalling that $\lambda_r(Z_*) = \sigma_r^2(X_*)$, we obtain
\[
\|\mathcal{A}(H)\|^2 \ge n\left(\frac{\mu_T - L_T\epsilon}{1 + L_T\sqrt{n}\|\lambda\|}\right)^2\lambda_r(Z_*)\|X_* - XR\|_F^2 = \frac{2}{\tau+2}\|\mathcal{A}^*(y)\|_{\ell_2}\|X_* - XR\|_F^2.
\]
Using this to simplify (17), we obtain
\[
\frac{p-\tau}{p+2}\|\mathcal{A}(H)\|^2 \le \langle\xi, \mathcal{A}(H)\rangle \le \|\mathcal{A}^*(\xi)\|_{\ell_2}\|H\|_*. \tag{20}
\]
The previous inequalities $\|\lambda\|\|\mathcal{A}(H)\| \ge \operatorname{tr} H_{T^\perp} - \epsilon\|H_T\|_F$ and $\frac{1}{\sqrt{n}}\|\mathcal{A}(H)\| \ge \mu_T\|H_T\|_F - L_T\operatorname{tr} H_{T^\perp}$, combined in different proportions than before, give
\[
\left((L_T + \mu_T)\|\lambda\| + \frac{1+\epsilon}{\sqrt{n}}\right)\|\mathcal{A}(H)\| \ge (\mu_T - L_T\epsilon)(\|H_T\|_F + \operatorname{tr} H_{T^\perp}),
\]
which implies
\[
\|\mathcal{A}(H)\|^2 \ge n\left(\frac{\mu_T - L_T\epsilon}{1 + \epsilon + \sqrt{n}\|\lambda\|(L_T + \mu_T)}\right)^2(\|H_T\|_F + \operatorname{tr} H_{T^\perp})^2. \tag{21}
\]
On the other hand,
\[
\|H\|_* \le \|H_T\|_* + \|H_{T^\perp}\|_* \le \sqrt{2r}\,\|H_T\|_F + \operatorname{tr} H_{T^\perp}.
\]
Combining this with (20) and (21), we obtain
\[
\frac{p-\tau}{p+2}\, n\left(\frac{\mu_T - L_T\epsilon}{1 + \epsilon + \sqrt{n}\|\lambda\|(L_T + \mu_T)}\right)^2(\|H_T\|_F + \operatorname{tr} H_{T^\perp})^2 \le \|\mathcal{A}^*(\xi)\|_{\ell_2}\big(\sqrt{2r}\,\|H_T\|_F + \operatorname{tr} H_{T^\perp}\big),
\]
from which the result easily follows.
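The projection formulas for $P_T$ and $P_{T^\perp}$ and the nuclear-norm splitting used in the proof can be verified numerically; a sketch in the real symmetric case, with hypothetical sizes:

```python
import numpy as np

rng = np.random.default_rng(3)
d, r = 8, 2  # hypothetical sizes
U = np.linalg.qr(rng.standard_normal((d, r)))[0]
P_U = U @ U.T
P_U_perp = np.eye(d) - P_U

def P_T(S):       # S P_U + P_U S P_U^perp  (projection onto T)
    return S @ P_U + P_U @ S @ P_U_perp

def P_T_perp(S):  # P_U^perp S P_U^perp     (projection onto T^perp)
    return P_U_perp @ S @ P_U_perp

S = rng.standard_normal((d, d))
S = S + S.T  # random element of H^d (real symmetric)
HT, HTp = P_T(S), P_T_perp(S)
assert np.allclose(HT + HTp, S)         # T and T^perp decompose H^d
assert np.allclose(P_T(HT), HT)         # P_T is idempotent
assert np.isclose(np.sum(HT * HTp), 0)  # the two parts are Frobenius-orthogonal
nuc = np.linalg.norm(S, "nuc")
bound = np.sqrt(2 * r) * np.linalg.norm(HT, "fro") + np.linalg.norm(HTp, "nuc")
assert nuc <= bound + 1e-9  # ||H||_* <= sqrt(2r) ||H_T||_F + ||H_T_perp||_*
```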
7.1 Application: Gaussian measurements

In this section, we show how Theorem 5 can be applied with the Gaussian measurement model to prove Theorem 4. In this model, the measurement matrices are $A_i = a_i a_i^*$ for i.i.d. standard real or complex Gaussian vectors $a_1, \dots, a_n$. In this section, we will use liberally the notation $a \lesssim b$ (or $b \gtrsim a$) to mean that $a \le Cb$ for some unspecified but universal constant $C > 0$. Similarly, the constant $c$ that appears in the probability estimates will not depend on the problem parameters but can change from one usage to another.

We need several supporting lemmas showing that the conditions of Theorem 5 are satisfied with high probability.

Lemma 5. For fixed rank-$r$ $Z_* \succeq 0$, if $n \gtrsim rd$, then, with probability at least $1 - cn^{-2}$, the Gaussian measurement ensemble satisfies Assumption 1 with
\[
\|\lambda\| \lesssim \sqrt{\frac{r}{n}} \quad\text{and}\quad \epsilon \lesssim \sqrt{\frac{r^2(d+\log n)}{n}}.
\]
This is a straightforward generalization of [2, Lemma 2.3] and [3, Theorem 1], which only consider $r = 1$ and do not bound $\|\lambda\|$. We provide a proof below in Section 7.2.

Lemma 6. For fixed rank-$r$ $Z_*$, if $n \gtrsim rd$, with probability at least $1 - 2n^{-2}$, the Gaussian measurement ensemble satisfies Assumption 2 with
\[
\mu_T \gtrsim 1 \quad\text{and}\quad L_T \lesssim \sqrt{\frac{d}{n}}.
\]
We provide a proof below in Section 7.2. The methods of [2], [3] would provide a similar result with $L_T \approx 1$, but, considering the fact that the bounds on $\epsilon$ and $\|\lambda\|$ in Lemma 5 increase with $r$, this is suboptimal for larger $r$.

With these tools, we can proceed to the main proof:

Proof of Theorem 4. The failure probabilities of the supporting lemmas are of order $n^{-2}$, so, taking a union bound, the final result has failure probability of the same order.
By the expectation calculation (3), Lemma 2, and the fact that $n \gtrsim d$, we have, similarly to the proof of Theorem 3 in Section 6,
\[
\frac{1}{n}\|\mathcal{A}^*\mathcal{A}(Z_*)\|_{\ell_2} \lesssim \left(1 + \frac{d\log d}{n}\right)\operatorname{tr} Z_*.
\]
Lemmas 5 and 6 imply that Assumptions 1 and 2 hold with
\[
\|\lambda\| \lesssim \sqrt{\frac{r}{n}}, \quad \epsilon \lesssim \sqrt{\frac{r^2(d+\log n)}{n}}, \quad \mu_T \gtrsim 1, \quad\text{and}\quad L_T \lesssim \sqrt{\frac{d}{n}},
\]
so
\[
L_T\sqrt{n}\|\lambda\| \lesssim \sqrt{\frac{rd}{n}} \quad\text{and}\quad L_T\epsilon \lesssim \frac{r(d+\log n)}{n}.
\]
With $n \gtrsim rd$ with a large enough constant, we will have $L_T\sqrt{n}\|\lambda\| \le 1/2$ and $L_T\epsilon \le \mu_T/2$, so the quantity $\tau$ from Theorem 5 can be upper bounded as
\[
\tau \le \frac{18}{\mu_T^2}\,\frac{\|\mathcal{A}^*\mathcal{A}(Z_*)\|_{\ell_2} + \|\mathcal{A}^*(\xi)\|_{\ell_2}}{n\lambda_r(Z_*)} - 2 \lesssim \frac{\big(1 + \frac{d\log d}{n}\big)\operatorname{tr} Z_* + \frac{1}{n}\|\mathcal{A}^*(\xi)\|_{\ell_2}}{\lambda_r(Z_*)}.
\]
We then apply Theorem 5.

7.2 Proofs of auxiliary lemmas

In this section we provide proofs of Lemmas 5 and 6, which we used to prove Theorem 4.

Proof of Lemma 5. If the matrix $U$ has columns $u_1, \dots, u_r$ (these are the nontrivial eigenvectors of $Z_*$), set $E_k = u_k u_k^*$. We will set, for constants $\alpha, \beta, \gamma > 0$ that we will tune,
\[
\lambda_i = \frac{1}{n}\left(\alpha - \beta\sum_{k=1}^r \langle A_i, E_k\rangle\,\mathbf{1}\{\langle A_i, E_k\rangle \le \gamma\}\right).
\]
By construction and the properties of Gaussian random vectors, note that, for each $i$, the $r$ random variables $\{\langle A_i, E_k\rangle\}_k$ are i.i.d. with the distribution of $|z|^2$, where $z$ is a standard normal random variable (real or complex, as appropriate). Then, we can calculate, by similar methods as for (3),
\[
\mathbb{E}Y = n\,\mathbb{E}\lambda_1 A_1 = \alpha I_d - \beta(m_4^\gamma P_U + r m_2^\gamma I_d) = (\alpha - \beta m_4^\gamma - \beta r m_2^\gamma)P_U + (\alpha - \beta r m_2^\gamma)P_U^\perp,
\]
where
\[
m_2^\gamma := \mathbb{E}\big[|z|^2\mathbf{1}\{|z|^2 \le \gamma\}\big] \quad\text{and}\quad m_4^\gamma := \mathbb{E}\big[|z|^4\mathbf{1}\{|z|^2 \le \gamma\}\big] - m_2^\gamma.
\]
Setting $\alpha = (m_4^\gamma + r m_2^\gamma)\beta$, we obtain $\mathbb{E}Y = \beta m_4^\gamma P_U^\perp$. We can then set $\gamma$ to be a moderate constant (say, 10) so that $m_4^\gamma \gtrsim 1$ and then set $\beta = (m_4^\gamma)^{-1}$ to obtain $\mathbb{E}Y = P_U^\perp$.

It will be useful to bound certain moments of the i.i.d. random variables $\lambda_1, \dots, \lambda_n$. By construction, $\mathbb{E}\lambda_1 = \frac{1}{n}$. Note, furthermore, that we can write
\[
\lambda_1 = \mathbb{E}\lambda_1 + \frac{\beta}{n}\sum_{k=1}^r \underbrace{\big(m_2^\gamma - \langle A_1, E_k\rangle\,\mathbf{1}\{\langle A_1, E_k\rangle \le \gamma\}\big)}_{=:\,\varepsilon_k}.
\]
Recall from above that, because $a_1$ is Gaussian, $\varepsilon_1, \dots, \varepsilon_r$ are i.i.d. zero-mean random variables. Furthermore, $\mathbb{E}\varepsilon_1^2 \lesssim 1$ and $\mathbb{E}\varepsilon_1^4 \lesssim 1$. We can therefore estimate (noting that we have chosen $\beta \lesssim 1$)
\[
\mathbb{E}\lambda_1^2 = (\mathbb{E}\lambda_1)^2 + \frac{\beta^2}{n^2}\sum_{k=1}^r \mathbb{E}\varepsilon_k^2 \lesssim \frac{r}{n^2},
\]
and
\[
\mathbb{E}\lambda_1^4 \lesssim (\mathbb{E}\lambda_1)^4 + \frac{\beta^4}{n^4}\,\mathbb{E}\left(\sum_{k=1}^r \varepsilon_k\right)^4 \lesssim \frac{1}{n^4} + \frac{\beta^4}{n^4}\sum_{k,\ell=1}^r \mathbb{E}[\varepsilon_k^2\varepsilon_\ell^2] \lesssim \frac{r^2}{n^4}.
\]
We now bound $\|\lambda\|$. Note that $\mathbb{E}\|\lambda\|^2 = n\,\mathbb{E}\lambda_1^2 \lesssim \frac{r}{n}$, so, by Jensen's inequality, $\mathbb{E}\|\lambda\| \lesssim \sqrt{\frac{r}{n}}$.
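The tuning of $\alpha$, $\beta$, $\gamma$ so that $\mathbb{E}Y = P_U^\perp$ can be sanity-checked by Monte Carlo in the real case; a sketch with hypothetical small dimensions, in which the truncated moments $m_2^\gamma$, $m_4^\gamma$ are themselves estimated from samples:

```python
import numpy as np

rng = np.random.default_rng(4)
d, r, n, gamma = 5, 2, 500_000, 10.0  # hypothetical small instance
U = np.linalg.qr(rng.standard_normal((d, r)))[0]

# truncated moments of |z|^2 for a real standard normal z, by Monte Carlo
s = rng.standard_normal(1_000_000) ** 2
m2 = np.mean(s * (s <= gamma))
m4 = np.mean(s ** 2 * (s <= gamma)) - m2

beta = 1.0 / m4
alpha = (m4 + r * m2) * beta

a = rng.standard_normal((n, d))
S = (a @ U) ** 2  # <A_i, E_k> = <a_i, u_k>^2 for Gaussian a_i
lam = (alpha - beta * np.sum(S * (S <= gamma), axis=1)) / n
Y = (a * lam[:, None]).T @ a  # Y = A^*(lambda) = sum_i lambda_i a_i a_i^T

P_U_perp = np.eye(d) - U @ U.T
assert np.linalg.norm(Y - P_U_perp) < 0.2  # E[Y] = P_U^perp, up to sampling error
```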
Noting furthermore that, by construction, each $|\lambda_i| \lesssim \frac{r}{n}$ almost surely, a standard concentration inequality for Lipschitz functions of independent and bounded random variables (e.g., [57, Thm. 6.10]) gives, for $t \ge 0$, with probability at least $1 - e^{-t^2/2}$,
\[
\|\lambda\| - \mathbb{E}\|\lambda\| \lesssim \frac{r}{n}t.
\]
Then, choosing $t = 2\sqrt{\log n}$, we obtain, with probability at least $1 - n^{-2}$,
\[
\|\lambda\| \lesssim \sqrt{\frac{r}{n}} + \frac{r}{n}\sqrt{\log n} \lesssim \sqrt{\frac{r}{n}},
\]
where the last inequality uses the fact that, for $n \gtrsim rd$, $\frac{r\log n}{n} \le \max\{\frac{rd}{n}, \frac{rd}{e^d}\} \lesssim 1$.

We now turn to the concentration of $Y = \mathcal{A}^*(\lambda)$ about its mean. We use a similar approach as in the proof of Lemma 2. Again, $c$, $c'$, etc. denote universal positive constants which may change from one appearance to another. For fixed unit-norm $x \in \mathbb{F}^d$,
\[
\langle Y, xx^*\rangle = \sum_{i=1}^n \lambda_i|\langle a_i, x\rangle|^2.
\]
Noting that $|\lambda_i| \le \frac{cr}{n}$ almost surely, we can bound the moments of each (i.i.d.) term in the sum as, for $k \ge 2$,
\[
\begin{aligned}
\mathbb{E}\big|\lambda_i|\langle a_i, x\rangle|^2\big|^k &\le \Big(\frac{cr}{n}\Big)^{k-2}\mathbb{E}\big[\lambda_i^2|\langle a_i, x\rangle|^{2k}\big] \\
&\le \Big(\frac{cr}{n}\Big)^{k-2}(\mathbb{E}\lambda_i^4)^{1/2}\big(\mathbb{E}|\langle a_i, x\rangle|^{4k}\big)^{1/2} \\
&\le c'\Big(\frac{cr}{n}\Big)^{k-2}\cdot\frac{r}{n^2}\cdot\big((c'')^{2k}\,(2k)!\big)^{1/2} \\
&\le c'\Big(\frac{cr}{n}\Big)^{k-2}\cdot\frac{r}{n^2}\cdot k!.
\end{aligned}
\]
The third inequality uses a standard Gaussian moment bound (see, e.g., the proof of Lemma 2) along with our estimate of $\mathbb{E}\lambda_1^4$. The last inequality absorbs the $(c'')^k$ term into the others and also uses the fact (e.g., by Stirling's approximation) that $\sqrt{(2k)!} \le c'c^k k!$, again consolidating the constants.
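The combinatorial fact used in the last step holds, for instance, with $c' = 1$ and $c = 2$, since $(2k)! = (k!)^2\binom{2k}{k}$ and $\binom{2k}{k} \le 4^k$; a quick exact-integer check:

```python
import math

# (2k)! <= (2^k * k!)^2, i.e. sqrt((2k)!) <= 2^k * k!
for k in range(1, 40):
    assert math.factorial(2 * k) <= (2 ** k * math.factorial(k)) ** 2
    assert math.comb(2 * k, k) <= 4 ** k  # central binomial coefficient bound
```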
Then, following similar steps as in the proof of Lemma 2, we obtain, with probability at least $1 - 2n^{-2}$,
\[
\|Y - \mathbb{E}Y\|_{\ell_2} \le c\left(\sqrt{\frac{r(d+\log n)}{n}} + \frac{r(d+\log n)}{n}\right) \le c\sqrt{\frac{r(d+\log n)}{n}} =: \delta.
\]
Note that we can then take
\[
\epsilon := \sqrt{2r}\,\delta \lesssim \sqrt{\frac{r^2(d+\log n)}{n}}.
\]
We have only proved that, on this event, $P_{T^\perp}(Y) \succeq (1-\delta)P_U^\perp$. However, choosing $n \gtrsim rd$ with a large enough constant ensures, say, $\delta \le 1/2$, so rescaling $Y$ by $(1-\delta)^{-1} \le 2$ gives $P_{T^\perp}(Y) \succeq P_U^\perp$, only changing the other bounds by a constant. This completes the proof.

Proof of Lemma 6. Let $H \in \mathbb{H}^d$. Note that
\[
\|H\|_* \le \|P_T(H)\|_* + \|P_{T^\perp}(H)\|_* \le \sqrt{2r}\,\|P_T(H)\|_F + \|P_{T^\perp}(H)\|_*.
\]
Arguments (which we omit) identical to those in [18, Sec. 6] or [55, App. B] give, for $n \gtrsim d$, with probability at least $1 - 2n^{-2}$, for all $H \in \mathbb{H}^d$,
\[
\frac{1}{\sqrt{n}}\|\mathcal{A}(H)\| \ge \frac{1}{n}\|\mathcal{A}(H)\|_1 \ge c_1\|H\|_F - c_2\sqrt{\frac{d}{n}}\|H\|_*
\]
for universal constants $c_1, c_2 > 0$ that will remain fixed for the rest of this proof. On this event we then have
\[
\frac{1}{\sqrt{n}}\|\mathcal{A}(H)\| \ge c_1\|P_T(H)\|_F - c_2\sqrt{\frac{d}{n}}\|H\|_* \ge \left(c_1 - c_2\sqrt{\frac{2rd}{n}}\right)\|P_T(H)\|_F - c_2\sqrt{\frac{d}{n}}\|P_{T^\perp}(H)\|_*.
\]
With $n \gtrsim rd$, we have
\[
\mu_T := c_1 - c_2\sqrt{\frac{2rd}{n}} \gtrsim 1,
\]
and we set $L_T := c_2\sqrt{\frac{d}{n}}$. This completes the proof.

Acknowledgements

This work was supported by the Swiss State Secretariat for Education, Research and Innovation (SERI) under contract MB22.00027. The author thanks Jonathan Dong, Richard Zhang, Nicolas Boumal, Christopher Criscitiello, Andreea Muşat, and Quentin Rebjock for helpful inspiration, discussions, and suggestions.

References

[1] E. J. Candès, T. Strohmer, and V. Voroninski, "PhaseLift: Exact and stable signal recovery from magnitude measurements via convex programming," Commun. Pure Appl. Math., vol. 66, no. 8, pp. 1241–1274, 2013.
[2] E. J. Candès and X. Li, "Solving quadratic equations via PhaseLift when there are about as many equations as unknowns," Found. Comput. Math., vol. 14, no. 5, pp. 1017–1026, 2013.
[3] L. Demanet and P. Hand, "Stable optimizationless recovery from phaseless linear measurements," J. Fourier Anal. Appl., vol. 20, pp. 199–221, 2014, issn: 1531-5851.
[4] W. Ha, H. Liu, and R. F. Barber, "An equivalence between critical points for rank constraints versus low-rank factorizations," SIAM J. Optim., vol. 30, no. 4, pp. 2927–2955, 2020.
[5] F. E. Curtis, Z. Lubberts, and D. P. Robinson, "Concise complexity analyses for trust region methods," Optim. Lett., vol. 12, pp. 1713–1724, 2018.
[6] J. D. Lee, I. Panageas, G. Piliouras, M. Simchowitz, M. I. Jordan, and B.
Recht, "First-order methods almost always avoid strict saddle points," Math. Program., vol. 176, pp. 311–337, 2019.
[7] M. Guizar-Sicairos and J. R. Fienup, "Phase retrieval with transverse translation diversity: A nonlinear optimization approach," Opt. Express, vol. 16, no. 10, 2008.
[8] E. J. Candès, X. Li, and M. Soltanolkotabi, "Phase retrieval via Wirtinger flow: Theory and algorithms," IEEE Trans. Inf. Theory, vol. 61, no. 4, pp. 1985–2007, 2015.
[9] T. T. Cai and A. Zhang, "ROP: Matrix recovery via rank-one projections," Ann. Stat., vol. 43, no. 1, pp. 102–138, 2015.
[10] J. Sun, Q. Qu, and J. Wright, "A geometric analysis of phase retrieval," Found. Comput. Math., vol. 18, no. 5, pp. 1131–1198, 2018.
[11] J.-F. Cai, M. Huang, D. Li, and Y. Wang, "Nearly optimal bounds for the global geometric landscape of phase retrieval," Inverse Probl., vol. 39, no. 7, 2023.
[12] J. Dong, L. Valzania, A. Maillard, T.-a. Pham, S. Gigan, and M. Unser, "Phase retrieval: From computational imaging to machine learning: A tutorial," IEEE Signal Process. Mag., vol. 40, no. 1, pp. 45–57, 2023.
[13] A. Fannjiang and T. Strohmer, "The numerics of phase retrieval," Acta Numer., vol. 29, pp. 125–228, 2020.
[14] S. Sarao Mannelli, G. Biroli, C. Cammarota, F. Krzakala, P. Urbani, and L. Zdeborová, "Complex dynamics in simple neural networks: Understanding gradient flow in phase retrieval," in Proc. Conf. Neural Inf. Process. Syst. (NeurIPS), vol. 33, Virtual conference, Dec. 2020, pp. 3265–3274.
[15] Y. Bi, H. Zhang, and J. Lavaei, "Local and global linear convergence of general low-rank matrix recovery problems," in Proc. AAAI Conf. Artif. Intell. (AAAI), vol. 36, Virtual conference, Feb. 2022, pp. 10129–10137.
[16] Z. Ma, Y. Bi, J. Lavaei, and S. Sojoudi, "Geometric analysis of noisy low-rank matrix recovery in the exact parametrized and the overparametrized regimes," INFORMS J. Opt., vol. 5, no. 4, pp. 356–375, 2023.
[17] R. Y. Zhang, "Improved global guarantees for the nonconvex Burer-Monteiro factorization via rank overparameterization," Math. Program., 2024.
[18] F. Krahmer and D. Stöger, "Complex phase retrieval from subgaussian measurements," J. Fourier Anal. Appl., vol. 26, no. 89, 2020.
[19] R. Vershynin, High-Dimensional Probability: An Introduction with Applications in Data Science. Cambridge University Press, 2018, 296 pp., isbn: 1108415199.
[20] F. Krahmer and Y.-K. Liu, "Phase retrieval without small-ball probability assumptions," IEEE Trans. Inf. Theory, vol. 64, no. 1, pp. 485–500, 2018.
[21] E. J. Candès and Y. Plan, "Tight oracle inequalities for low-rank matrix recovery from a minimal number of noisy random measurements," IEEE Trans. Inf. Theory, vol. 57, no. 4, pp. 2342–2359, 2011.
[22] S. Negahban and M. J. Wainwright, "Estimation of (near) low-rank matrices with noise and high-dimensional scaling," Ann. Stat., vol. 39, no. 2, 2011.
[23] A. Rohde and A. B. Tsybakov, "Estimation of high-dimensional low-rank matrices," Ann. Stat., vol. 39, no. 2, 2011.
[24] G. Wang, G. B. Giannakis, and Y. C. Eldar, "Solving systems of random quadratic equations via truncated amplitude flow," IEEE Trans. Inf. Theory, vol. 64, no. 2, pp. 773–794, 2018.
[25] S. Kim and K. Lee, "Robust phase retrieval by alternating minimization," IEEE Trans. Signal Process., vol. 73, pp. 40–54, 2025.
[26] G. Zhang, S. Fattahi, and R. Y.
Zhang, "Preconditioned gradient descent for overparameterized nonconvex Burer-Monteiro factorization with global optimality certification," J. Mach. Learn. Res., vol. 24, no. 163, pp. 1–55, 2023.
[27] E. J. Candès, X. Li, and M. Soltanolkotabi, "Phase retrieval from coded diffraction patterns," Appl. Comput. Harmon. Anal., vol. 39, no. 2, pp. 277–299, 2015.
[28] D. Gross, F. Krahmer, and R. Kueng, "Improved recovery guarantees for phase retrieval from coded diffraction patterns," Appl. Comput. Harmon. Anal., vol. 42, no. 1, pp. 37–64, 2017.
[29] H. Li, S. Li, and Y. Xia, "Sampling complexity on phase retrieval from masked Fourier measurements via Wirtinger flow," Inverse Probl., vol. 38, no. 10, 2022.
[30] H. Li and J. Li, "Truncated amplitude flow with coded diffraction patterns," Inverse Probl., vol. 41, no. 1, 2025.
[31] Z. Hu, J. Tachella, M. Unser, and J. Dong, "Structured random model for fast and robust phase retrieval," in Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing (ICASSP), Hyderabad, India, Apr. 2025.
[32] Y. Shechtman, Y. C. Eldar, O. Cohen, H. N. Chapman, J. Miao, and M. Segev, "Phase retrieval with application to optical imaging: A contemporary overview," IEEE Signal Process. Mag., vol. 32, no. 3, pp. 87–109, 2015.
[33] R. Ge, C. Jin, and Y. Zheng, "No spurious local minima in nonconvex low rank problems: A unified geometric analysis," in Proc. Int. Conf. Mach. Learn. (ICML), Sydney, Australia, Aug. 2017, pp. 1233–1242.
[34] K. Liu, Z. Wang, and L. Wu, "The local landscape of phase retrieval under limited samples," IEEE Trans. Inf. Theory, vol. 70, no. 12, pp. 9012–9035, 2024.
[35] D. Davis, D. Drusvyatskiy, and C. Paquette, "The nonsmooth landscape of phase retrieval," IMA J. Numer. Anal., vol. 40, no. 4, pp. 2652–2695, 2020.
[36] Z. Li, J.-F. Cai, and K. Wei, "Toward the optimal construction of a loss function without spurious local minima for solving quadratic equations," IEEE Trans. Inf. Theory, vol. 66, no. 5, pp. 3242–3260, 2020.
[37] J.-F. Cai, M. Huang, D. Li, and Y. Wang, "Solving phase retrieval with random initial guess is nearly as good as by spectral initialization," Appl. Comput. Harmon. Anal., vol. 58, pp. 60–84, 2022.
[38] J.-F. Cai, M. Huang, D. Li, and Y. Wang, "The global landscape of phase retrieval I: Perturbed amplitude models," Ann. Appl. Math., vol. 37, no. 4, pp. 437–512, 2021.
[39] J.-F. Cai, M. Huang, D. Li, and Y. Wang, "The global landscape of phase retrieval II: Quotient intensity models," Ann. Appl. Math., vol. 38, no. 1, pp. 62–114, 2022.
[40] H. Peng, D. Han, L. Li, and M. Huang, "Noisy phase retrieval from subgaussian measurements," 2024. arXiv: 2412.07401 [math.OC].
[41] Y. C. Eldar and S. Mendelson, "Phase retrieval: Stability and recovery guarantees," Appl. Comput. Harmon. Anal., vol. 36, pp. 473–494, 2014.
[42] B. Gao, H. Liu, and Y. Wang, "Phase retrieval for sub-Gaussian measurements," Appl. Comput. Harmon. Anal., vol. 53, pp. 95–115, 2021.
[43] Y. Wang and Z. Xu, "Generalized phase retrieval: Measurement number, matrix recovery and beyond," Appl. Comput. Harmon. Anal., vol. 47, no. 2, pp.
423–446, 2019. [44] Y. Chi and Y. M. Lu, “Kaczmarzmethod for solving quadratic eq uations,” IEEE Signal Processing Letters, vol. 23, no. 9, pp. 1183–1187, 2016. [45] Y. S. Tan and R. Vershynin, “Phase retrieval via randomized Ka czmarz: Theoretical guarantees,” Inform. Inference. , vol. 8, pp. 97–123, 2019. [46] Y. Chen, Y. Chi, and A. J. Goldsmith, “Exact and stable covarian ce estimation from quadratic sampling via convex programming,” IEEE Trans. Inf. Theory , vol. 61, no. 7, pp. 4034–4059, 2015. [47] R. Kueng, H. Rauhut, and U. Terstiege, “Low rank matrix reco very from rank one measurements,” Appl. Comput. Harmon. Anal. , vol. 42, no. 1, pp. 88–116, 2017. [48] R. Balan and C. B. Dock, “Lipschitz analysis of generalized phase retrievable matrix frames,” SIAM J. Matrix Anal. Appl. , vol. 43, no. 3, pp. 1518–1571, 2022. [49] N. Boumal, V. Voroninski, and A. S. Bandeira, “Deterministic gua rantees for Burer-Monteiro fac- torizations of smooth semidefinite programs,” Commun. Pure Appl. Math. , vol. 73, no. 3, pp. 581– 608, 2019. [50] L. O’Carroll, V. Srinivas, and A.
Vijayaraghavan, “The Burer-Monteiro SDP method can fail even above the Barvinok-Pataki bound,” in Proc. Conf. Neural Inf. Process. Syst. (NeurIPS), New Orleans, Louisiana, Dec. 2022, pp. 31254–31264.
[51] P. Abdalla, A. S. Bandeira, M. Kassabov, V. Souza, S. H. Strogatz, and A. Townsend, “Expander graphs are globally synchronising,” Oct. 23, 2022. arXiv: 2210.12788 [math.CO].
[52] S. Ling, “Local geometry determines global landscape in low-rank factorization for synchronization,” Found. Comput. Math., 2025.
[53] F. Rakoto Endor and I. Waldspurger, “Benign landscape for Burer-Monteiro factorizations of MaxCut-type semidefinite programs,” 2024. arXiv: 2411.03103 [math.OC].
[54] A. D. McRae, “Benign landscapes for synchronization on spheres via normalized Laplacian matrices,” arXiv: 2503.18801 [math.OC].
[55] A. D. McRae, J. Romberg, and M. A. Davenport, “Optimal convex lifted sparse phase retrieval and PCA with an atomic matrix norm regularizer,” IEEE Trans. Inf. Theory, vol. 69, no. 3, pp. 1866–1882, 2023.
[56] M. J. Wainwright, High-Dimensional Statistics: A Non-Asymptotic Viewpoint. Cambridge University Press, 2019, 568 pp., isbn: 1108498027.
[57] S. Boucheron, G. Lugosi, and P. Massart, Concentration Inequalities. Oxford University Press, 2013.
Hierarchical random measures without tables

Marta Catalano¹ and Claudio Del Sole²
¹Luiss University, Rome, Italy, mcatalano@luiss.it
²University of Milan - Bicocca, Milan, Italy, claudio.delsole@unimib.it

Abstract

The hierarchical Dirichlet process is the cornerstone of Bayesian nonparametric multilevel models. Its generative model can be described through a set of latent variables, commonly referred to as tables within the popular restaurant franchise metaphor. The latent tables simplify the expression of the posterior and allow for the implementation of a Gibbs sampling algorithm to approximately draw samples from it. However, managing their assignments can become computationally expensive, especially as the size of the dataset and the number of levels increase. In this work, we identify a prior for the concentration parameter of the hierarchical Dirichlet process that (i) induces a quasi-conjugate posterior distribution, and (ii) removes the need for tables, leading to more interpretable expressions for the posterior, with both a faster and an exact algorithm to sample from it. Remarkably, this construction extends beyond the Dirichlet process, leading to a new framework for defining normalized hierarchical random measures and a new class of algorithms to sample from their posteriors. The key analytical tool is the independence of multivariate increments, that is, their representation as completely random vectors.

Keywords: Bayesian nonparametrics, Completely random measure, Dirichlet process, Hierarchical model, Multilevel model, Partial exchangeability.

1 Introduction

Historically, Bayesian nonparametric and hierarchical models have addressed data complexity in different ways.
arXiv:2505.02653v1 [math.ST] 5 May 2025

Nonparametric models ensure full flexibility to the marginal distribution of the observations by increasing the dimensionality of the parameter space, while hierarchical models focus on the interactions between the observations, grouping them and modeling their dependencies through shared parameters or latent features. The interaction between parameters is recursively modeled in a similar fashion, defining a hierarchical structure that enables pooling of information across different groups while preserving their distinct characteristics – all within the principled framework of Bayesian inference. The hierarchical Dirichlet process (Teh et al., 2006) represented a significant breakthrough, demonstrating the advantages of combining the two approaches through the sharing of infinite-dimensional parameters. Since then, it has proven effective in an impressive number of contexts, including natural language processing (Teh et al., 2006; Zavitsanos et al., 2011), genomics (Sohn and Xing, 2009; Elliott et al., 2019; Liu et al., 2024), computer vision (Sudderth et al., 2008; Haines and Xiang, 2011), music segmentation and speaker diarisation (Ren et al., 2008; Fox et al., 2011), cognitive science (Griffiths et al., 2007), robotics (Nakamura et al., 2011; Taniguchi et al., 2018), and network analysis (Durante et al., 2025).

The computational feasibility of the hierarchical Dirichlet process in Teh et al. (2006) is strictly linked to a compelling posterior representation via latent variables, often referred to as tables in the restaurant franchise metaphor. The need for tables arises from the nature of the infinite-dimensional parameter, which is an almost surely discrete random probability $\tilde P = \sum_{i \ge 1} J_i \delta_{\theta_i}$, characterized by a countably infinite number of jumps $J_i$ and atoms $\theta_i$. In a Bayesian nonparametric setting, a single group of observations
is often modelled as conditionally independent and identically distributed from $\tilde P$, where $\tilde P$ is e.g. a Dirichlet process (Ferguson, 1973) with diffuse mean measure $P_0$. In this case, since the atoms $\theta_i$ are independent and identically distributed from $P_0$, two observations coincide if and only if they share the same atom $\theta_i$. Conversely, in the hierarchical Dirichlet process, the distribution $P_0$ of the atoms is itself a latent parameter, which is shared across different groups and likewise modelled as an almost surely discrete random probability. Hence, in contrast with the exchangeable case, the atoms display ties with positive probability, and two observations coincide either if they share the same atom or if their respective atoms have identical values. This complicates the posterior representation, unless one keeps track of the atom associated to each observation, which is precisely the role of the latent tables, as discussed in Catalano et al. (2024b). For $n$ observations, the tables are $n$ dependent latent variables with possible ties, inducing a distribution on the set of partitions of $n$. The dimensionality of the parameter space increases with the number of observations, and one typically needs a Gibbs sampling algorithm to sample from it. The most popular implementation for the hierarchical Dirichlet process is the Gibbs sampler based on the restaurant franchise metaphor (Teh et al., 2006), often addressed as marginal or collapsed Gibbs sampler. Despite the convenient expression of its full conditionals and its remarkable flexibility, it is not an exact algorithm, since it recovers the posterior distribution only asymptotically, and there are other well-known drawbacks: it involves a considerable amount of bookkeeping and scales poorly as the number of observations $n$ increases, as it relies on a sequential updating scheme for the $n$ latent tables.
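To make the bookkeeping concrete, here is a minimal Python sketch of prior simulation from the restaurant franchise metaphor (no data likelihood, and all variable names are ours): every single seating step consults and mutates shared state, namely the per-restaurant table occupancies and the franchise-wide count of tables serving each dish, which is exactly the state the collapsed sampler must keep synchronised at every sequential update.

```python
import random

def seat_franchise(group_sizes, alpha, alpha0, seed=0):
    """Simulate prior table/dish assignments in the restaurant franchise:
    one restaurant per group, tables drawn from a local CRP with
    concentration alpha, dishes for new tables drawn from a
    franchise-level CRP with concentration alpha0."""
    rng = random.Random(seed)
    dish_tables = []          # m_k: number of tables (across restaurants) serving dish k
    assignments = []
    for n in group_sizes:
        table_counts = []     # customers per table in this restaurant
        table_dish = []       # dish index served at each table
        local = []
        for _ in range(n):
            # existing table w.p. proportional to occupancy, new table w.p. prop. alpha
            weights = table_counts + [alpha]
            t = rng.choices(range(len(weights)), weights=weights)[0]
            if t == len(table_counts):             # open a new table ...
                dweights = dish_tables + [alpha0]  # ... and draw its dish top-level
                k = rng.choices(range(len(dweights)), weights=dweights)[0]
                if k == len(dish_tables):
                    dish_tables.append(0)
                dish_tables[k] += 1                # one more table serves dish k
                table_counts.append(0)
                table_dish.append(k)
            table_counts[t] += 1
            local.append(table_dish[t])            # observation inherits the table's dish
        assignments.append(local)
    return assignments, dish_tables

assignments, dish_tables = seat_franchise([30, 30], alpha=1.0, alpha0=1.0)
print(len(dish_tables), dish_tables)  # number of distinct dishes, tables per dish
```

The sequential dependence is visible in the code: the choice made for each customer depends on `table_counts` and `dish_tables` as left by all previous customers, which is what prevents parallelisation in the collapsed sampler.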
Moreover, each table allocation depends on all the other allocations, inducing high autocorrelation in the Markov chain, slow mixing, and preventing parallelisation. Such limitations have been recognized by several works; see e.g. Teh et al. (2006); Teh and Jordan (2010); Williamson et al. (2013); Lijoi et al. (2020); Das et al. (2024). Accordingly, a plethora of alternative sampling-based strategies have been proposed to reduce computational time and improve the mixing properties and scalability of the standard implementation. These are typically Gibbs samplers adopting a conditional or blocked approach, that is, instantiating the random measures with some finite-dimensional approximation of the posterior. Direct assignment schemes instantiate the jumps of the common random probability, often addressed as global weights, through the stick-breaking construction (Teh et al., 2006) or a finite-dimensional Dirichlet approximation (Fox et al., 2011), while variational methods (Teh et al., 2007; Wang et al., 2011; Bryant and Sudderth, 2012) construct mean field approximations of stick-breaking ratios at both levels of the hierarchy. Nevertheless, their full conditional distributions still depend on the nested partition induced by the tables, and thus do not completely avoid some of its inherent drawbacks. Interestingly, the conditional Gibbs samplers proposed in Lijoi et al. (2020) and Das et al. (2024) obviate the need for tables by considering a finite-dimensional approximation of the model, i.e. by truncating the
common random probability a priori.

In this work we pursue a different strategy that eliminates the need for tables while preserving the infinite-dimensionality of the model, thanks to a specific gamma hyperprior for the shared concentration parameter of the Dirichlet processes. Intuitively, the hierarchical Dirichlet process defines a vector of dependent random probabilities with independent jumps and common atoms with ties. Our hyperprior allows for an alternative representation as dependent random probabilities with dependent jumps and common atoms without ties, thereby making the tables superfluous. A fundamental result of this work shows that this representation is the normalization of a specific completely random vector (Catalano et al., 2021), that is, a vector of dependent random measures with jointly independent increments. This is the natural multivariate extension of a completely random measure (Kingman, 1967), whose normalization is very popular in the Bayesian nonparametric literature (Regazzini et al., 2003; James et al., 2006, 2009). In particular, James et al. (2009) derive an almost conjugate representation of the posterior, conditionally on a real-valued latent variable $U$ amenable to standard approximate sampling schemes, such as random walk Metropolis-Hastings. In this work, we extend the posterior representation to any normalized completely random vector, which in principle could be applied to many other well-established models, including the normalization of GM-dependent measures (Lijoi et al., 2014), Lévy copulas (Epifani and Lijoi, 2010), compound random measures (Griffin and Leisen, 2017), and thinned random measures (Lau and Cripps, 2022). The characterisation as a normalised completely random vector allows one to identify a novel posterior representation for the hierarchical Dirichlet process with hyperprior that does not require the latent tables.
The price to pay for casting off the tables is the introduction of a latent vector and $k$ jump vectors, where $k$ is the total number of distinct observations, typically much smaller than the number of observations $n$. In principle, the dimension of the latent and jump vectors coincides with the number of groups $d$, which could be potentially large, thus slowing down the posterior inference algorithm. However, through an in-depth analysis of their distributions, we reduce the non-standard sampling steps to sampling a random vector supported on $[0,\infty)^{2k+1}$. This approach offers three fundamental advantages: (i) the random vector lives in a standard space, and thus can be approximately sampled with standard techniques, shows better mixing, and can be analysed with a well-established set of diagnostics; (ii) its dimension does not increase with the number of observations $n$ nor with the number of groups $d$, which leads to consistently faster algorithms whenever $n$ or $d$ increases but $k$ remains sufficiently small; (iii) when $n$ is small, we devise an exact sampling algorithm, thus avoiding the approximation error of Gibbs sampling procedures. In summary, our novel posterior representation allows for the development of a faster algorithm when $n$ or $d$ are large but $k$ is not, and of an exact algorithm when $n$ is small. It should be noted that $k$ consistently smaller than $n$ is the most common setting where the model is or should be applied, since, e.g., it is well known that for the hierarchical Dirichlet process
$k$ is of the same order as $\log(\log(n))$.

The hierarchical Dirichlet process with hyperprior is only a particular case of the more general class of nonparametric hierarchical models discussed in this work. These models arise as the normalization of a vector $(\tilde\mu_1, \dots, \tilde\mu_d)$ of conditionally independent completely random measures, given another completely random measure $\tilde\mu_0$. We prove that these dependent random measures are completely random vectors and, for this reason, we term them hierarchical completely random vectors. Such hierarchical structures have been used to model dependent hazards (Camerlenghi et al., 2021; Del Sole et al., 2025) and beta processes (Thibaux and Jordan, 2007; Masoero et al., 2018; James et al., 2024), but have never been considered for normalization. This novel specification merges two compelling properties for the first time: the naturalness of the hierarchical construction and the analytical tractability of multivariate independent increments. The former leads to simple representations and draws interesting parallels with popular hierarchical models in the literature, such as Teh et al. (2006); Camerlenghi et al. (2018). The latter enhances the theoretical investigation of the model, leading to the almost-conjugate posterior representation we have already discussed, and playing a crucial role in deriving closed-form expressions for the moments and dependence structure of the prior. In extensive simulation studies, we show how their prior elicitation is crucial to control borrowing of information and shrinkage. Moreover, we set the rules for a fair comparison with the hierarchical Dirichlet process, by observing that fixing unequal marginal variance or unequal correlation for the two models can completely alter the conclusions. The take-home message is that, when possible, the comparison between performances of any two dependent priors should entail setting the same first two moments and measure of dependence.
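As an aside on the claim that $k$ is typically much smaller than $n$: the slow growth of the number of distinct values is easy to see empirically even one level down the hierarchy. For a single Dirichlet process, a Chinese-restaurant simulation (a sketch with illustrative parameters; the $\log(\log(n))$ rate above is the two-level, HDP-specific refinement) gives a number of clusters of order $\alpha \log n$, already far smaller than $n$:

```python
import random

def crp_num_clusters(n, alpha, seed=1):
    """Number of distinct values (clusters) among n sequential draws from a
    single Chinese restaurant process with concentration alpha."""
    rng = random.Random(seed)
    counts = []  # cluster sizes
    for i in range(n):
        # the (i+1)-th customer starts a new cluster w.p. alpha / (alpha + i)
        if rng.random() < alpha / (alpha + i):
            counts.append(1)
        else:
            j = rng.choices(range(len(counts)), weights=counts)[0]
            counts[j] += 1
    return len(counts)

for n in (100, 1_000, 10_000):
    print(n, crp_num_clusters(n, alpha=1.0))
```

For $\alpha = 1$ the expected number of clusters after $n$ draws is the harmonic number $H_n \approx \log n$, so even $n = 10{,}000$ observations produce only a handful of distinct values on average.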
In summary, this work introduces normalized hierarchical completely random vectors, which provide a new way to build dependent priors that combines the naturalness of hierarchical structures with a convenient posterior representation due to its multivariate infinite divisibility. On the one hand, they boost the applicability of existing models, such as the hierarchical Dirichlet process (Teh et al., 2006); on the other hand, they provide a new general recipe to build dependent priors. As such, normalized hierarchical completely random vectors enter the flourishing context of partially exchangeable models, which includes, among others, dependent stick-breaking constructions (MacEachern, 1999, 2000; Dunson and Park, 2008; Horiguchi et al., 2024), additive random measures (Müller et al., 2004; Lijoi et al., 2014), nested structures (Rodríguez et al., 2008; Camerlenghi et al., 2019a; Beraha et al., 2021; Lijoi et al., 2023), Lévy copulas (Epifani and Lijoi, 2010; Riva-Palacio and Leisen, 2018), compound random measures (Griffin and Leisen, 2017), dependent Pólya trees (Christensen and Ma, 2020), Furbi priors (Ascolani et al., 2024); see Quintana et al. (2022) for a recent review.

Notation. The Cartesian product of $d$ copies of a set $A$ is denoted as $A^d = A \times \cdots \times A$. The product measure of $d$ probabilities $P_1, \dots, P_d$ is $\prod_{i=1}^d P_i$, and if $P_1 = \cdots = P_d = P$ it also appears
as $P^d$. Bold symbols indicate vectors, e.g. $\mathbf{s} = (s_1, \dots, s_d)$, and $d\mathbf{s} = ds_1 \cdots ds_d$ is (a restriction of) the Lebesgue measure on $\mathbb{R}^d$. We repeatedly use the notation $\Omega_d = [0, +\infty)^d \setminus \{\mathbf{0}\}$, and the abbreviation a.s. for almost surely. If $f : \mathbb{X} \to \mathbb{Y}$ is a measurable map and $\rho$ is a positive measure on $\mathbb{X}$, then $f_\#\rho$ is the pushforward measure on $\mathbb{Y}$ defined as $(f_\#\rho)(B) = \rho(f^{-1}(B))$, for any measurable set $B \subseteq \mathbb{Y}$. We use $\tilde{\ }$ to underline the randomness of a random measure, e.g. $\tilde\mu$ is a random measure and $\mu$ is a deterministic measure. The notation $N(\mu, \sigma^2)$ stands for the normal distribution with mean $\mu$ and variance $\sigma^2$.

2 Hierarchical completely random vectors

In this section, we recall the definition of hierarchical random measures, highlight that they are homogeneous completely random vectors, and recover both their multivariate Laplace exponent and their Lévy measure. The proofs follow a structure similar to the subordination of Lévy processes, first defined in Bochner (1955), and beautifully described in Bertoin (1996) and Sato (1999). Their joint Lévy measure displays some similarities with that of compound random measures (Griffin and Leisen, 2017), but remarkably there is no intersection between the two classes. The specification of the law of a hierarchical completely random vector involves an outer and an inner Lévy measure, whose identifiability is studied with relevant examples.

A vector of random measures $\tilde{\boldsymbol\mu} = (\tilde\mu_1, \dots, \tilde\mu_d)$ is a measurable function on $M_{\mathbb{X}}^d$, where $M_{\mathbb{X}}$ denotes the space of boundedly finite measures on a Polish space $\mathbb{X}$. We recall the definition of a completely random vector (Catalano et al., 2021), which is the natural multivariate generalization of a completely random measure (CRM) defined in Kingman (1967). For a Borel set $A$ of $\mathbb{X}$, we use the notation $\tilde{\boldsymbol\mu}(A) = (\tilde\mu_1(A), \dots, \tilde\mu_d(A))$.

Definition 1. A vector of random measures $\tilde{\boldsymbol\mu} = (\tilde\mu_1, \dots, \tilde\mu_d)$ is a completely random vector (CRV) if, given pairwise disjoint Borel sets $A_1, \dots, A_k$ of $\mathbb{X}$, $\tilde{\boldsymbol\mu}(A_1), \dots,$
$\tilde{\boldsymbol\mu}(A_k)$ are mutually independent random vectors in $\mathbb{R}^d$.

We refer to Appendix A for a brief and self-contained account on completely random measures, Lévy measures, Lévy intensities, Laplace exponents, and their multivariate extension to completely random vectors. Henceforth, $\tilde\mu \sim \mathrm{CRM}(\rho \otimes P_0)$ denotes a CRM with product Lévy intensity $d\rho(s)\, dP_0(x)$, and $\mathrm{ID}(\rho)$ indicates a pure-jump infinitely divisible distribution with Lévy measure $\rho$, whose expression in integrals is $dP_{\mathrm{ID}(\rho)}$ and whose probability density function (p.d.f.), if it exists, is $f_{\mathrm{ID}(\rho)}(s)$.

Definition 2. Let $\rho_0$ and $\rho$ be Lévy measures on $(0, +\infty)$ and let $P_0$ be an atomless measure. We say that $\tilde{\boldsymbol\mu} = (\tilde\mu_1, \dots, \tilde\mu_d) \sim \mathrm{hCRV}(\rho, \rho_0, P_0)$ is a hierarchical CRV with idiosyncratic Lévy measure $\rho$, base Lévy measure $\rho_0$, and base measure $P_0$ if
$$\tilde\mu_1, \dots, \tilde\mu_d \mid \tilde\mu_0 \sim \mathrm{CRM}(\rho \otimes \tilde\mu_0); \qquad \tilde\mu_0 \sim \mathrm{CRM}(\rho_0 \otimes P_0).$$

The next proposition shows that $\tilde{\boldsymbol\mu} \sim \mathrm{hCRV}(\rho, \rho_0, P_0)$ is in fact a completely random vector in the sense of Definition 1. We provide the expression of its multivariate Laplace functional
through the Laplace exponent and derive its multivariate Lévy intensity.

Theorem 1. Let $\tilde{\boldsymbol\mu} \sim \mathrm{hCRV}(\rho, \rho_0, P_0)$ with $P_0$ a probability measure. Then $\tilde{\boldsymbol\mu}$ is a homogeneous CRV with Laplace exponent $\psi_h : \Omega_d \to (0, +\infty)$ and Lévy intensity $\nu_h = \rho_h \otimes P_0$ such that, for every $\boldsymbol\lambda = (\lambda_1, \dots, \lambda_d)$ and $\mathbf{s} = (s_1, \dots, s_d) \in \Omega_d$,
$$\psi_h(\boldsymbol\lambda) = \psi_0\Big(\sum_{i=1}^d \psi(\lambda_i)\Big), \qquad d\rho_h(\mathbf{s}) = \int_0^{+\infty} \prod_{i=1}^d dP_{\mathrm{ID}(t\rho)}(s_i)\, d\rho_0(t).$$

In particular, if $\mathrm{ID}(t\rho)$ has p.d.f. $f_{\mathrm{ID}(t\rho)}$, then $\rho_h$ has a Lévy density
$$\rho_h(s_1, \dots, s_d) = \int_0^{+\infty} \prod_{i=1}^d f_{\mathrm{ID}(t\rho)}(s_i)\, d\rho_0(t).$$

A sufficient condition for $\mathrm{ID}(t\rho)$ to have a probability density function is that the Lévy measure $\rho$ has infinite total mass and is absolutely continuous (Sato, 1999, Theorem 27.7). The structure of the Lévy densities above resembles the ones of compound random measures introduced in Griffin and Leisen (2017),
$$\rho(s_1, \dots, s_d) = \int_0^{+\infty} \frac{1}{t^d}\, H\Big(\frac{s_1}{t}, \dots, \frac{s_d}{t}\Big)\, d\rho_0(t), \qquad (1)$$
where $H$ is a p.d.f. on $[0, +\infty)^d$. However, there is no intersection between the two classes.

Lemma 1. The Lévy density $\rho_h$ of $\tilde{\boldsymbol\mu} \sim \mathrm{hCRV}(\rho, \rho_0, P_0)$ cannot be expressed as (1).

The next result studies the identifiability of the parameters of a hierarchical CRV. For $c \in \mathbb{R}$, denote by $c_\#$ the pushforward of the multiplication map $s \mapsto cs$.

Theorem 2. Let $\tilde{\boldsymbol\mu}^{(i)} \sim \mathrm{hCRV}(\rho^{(i)}, \rho_0^{(i)}, P_0^{(i)})$, for $i = 1, 2$. Then $\tilde{\boldsymbol\mu}^{(1)} = \tilde{\boldsymbol\mu}^{(2)}$ in distribution if and only if there exists $c > 0$ such that
$$\rho_0^{(2)} = (c^{-1})_\# \rho_0^{(1)}, \qquad \rho^{(2)} = c\rho^{(1)}, \qquad P_0^{(1)} = P_0^{(2)}.$$
In particular, if $\rho^{(i)}$ and $\rho_0^{(i)}$ have Lévy densities, for $i = 1, 2$, this is equivalent to
$$\rho_0^{(2)}(s) = c\rho_0^{(1)}(cs), \qquad \rho^{(2)}(s) = c\rho^{(1)}(s), \qquad P_0^{(1)} = P_0^{(2)}.$$

The last condition can be easily checked on specific classes of models. For example, if we restrict to $\rho_0 = \alpha\rho$ for some $\alpha > 0$, then $\rho$ and $P_0$ are identifiable. To help in the interpretation of the identifiability conditions, we observe that $\rho_0^{(2)}(s) = c\rho_0^{(1)}(cs)$ if and only if $\tilde\mu_0^{(1)} = c\,\tilde\mu_0^{(2)}$. We conclude with two leading examples of hierarchical CRVs.

Example 1.
We term $\tilde{\boldsymbol\mu}$ a gamma-gamma hierarchical CRV if there exist shape parameters $\alpha, \alpha_0 > 0$, rate parameters $b, b_0 > 0$, and a base measure $P_0$ such that
$$\tilde\mu_1, \dots, \tilde\mu_d \mid \tilde\mu_0 \sim \mathrm{CRM}\Big(\alpha \frac{e^{-bs}}{s}\, ds \otimes \tilde\mu_0\Big); \qquad \tilde\mu_0 \sim \mathrm{CRM}\Big(\alpha_0 \frac{e^{-b_0 s}}{s}\, ds \otimes P_0\Big).$$
By applying Proposition 1 and the expression of the 1-dimensional Laplace exponent in Definition 5, when $P_0$ is a probability measure, the multivariate Laplace exponent of $\tilde{\boldsymbol\mu}$ is
$$\psi_h(\lambda_1, \dots, \lambda_d) = \alpha_0 \log\Big(1 + \frac{\alpha}{b_0} \sum_{i=1}^d \log\Big(1 + \frac{\lambda_i}{b}\Big)\Big).$$
Therefore, two gamma-gamma hCRVs coincide in distribution if they have the same ratio $\alpha/b_0$. Moreover, since $t\rho = t\alpha\, s^{-1} e^{-bs}$ is the Lévy measure of a gamma CRM with shape parameter $t\alpha$ and scale parameter $b$, by Proposition 1 the Lévy density of $\tilde{\boldsymbol\mu}$ is
$$\rho_h(s_1, \dots, s_d) = \alpha_0\, e^{-b \sum_{i=1}^d s_i} \int_0^{+\infty} \frac{b^{d t\alpha}}{\Gamma(t\alpha)^d} \prod_{i=1}^d s_i^{t\alpha - 1}\, \frac{e^{-b_0 t}}{t}\, dt.$$

Example 2. We term $\tilde{\boldsymbol\mu}$ a stable-stable hierarchical CRV if there exist shape parameters $\alpha, \alpha_0 > 0$, discount parameters $\sigma, \sigma_0 \in (0, 1)$, and a base measure $P_0$ such that
$$\tilde\mu_1, \dots, \tilde\mu_d \mid \tilde\mu_0 \sim \mathrm{CRM}\Big(\frac{\alpha\sigma}{\Gamma(1-\sigma)} \frac{1}{s^{1+\sigma}}\, ds \otimes \tilde\mu_0\Big); \qquad \tilde\mu_0 \sim \mathrm{CRM}\Big(\frac{\alpha_0\sigma_0}{\Gamma(1-\sigma_0)} \frac{1}{s^{1+\sigma_0}}\, ds \otimes P_0\Big).$$
Proposition 1 and Definition 6
imply that the multivariate Laplace exponent is
$$\psi_h(\lambda_1, \dots, \lambda_d) = \alpha_0 \alpha^{\sigma_0} (\lambda_1^\sigma + \cdots + \lambda_d^\sigma)^{\sigma_0}.$$
Hence, two stable-stable hCRVs coincide in distribution if they have the same value for $\alpha_0 \alpha^{\sigma_0}$. For $d = 1$ we obtain the Laplace functional of the marginal $\tilde\mu_i$, namely $\psi(\lambda) = \alpha_0 \alpha^{\sigma_0} \lambda^{\sigma\sigma_0}$, which is the Laplace exponent of a stable CRM with shape $\alpha_0 \alpha^{\sigma_0}$ and discount parameter $\sigma\sigma_0$. Remarkably, we recover the well-known fact that the subordination of a stable Lévy process with a stable process is again a stable process (Bertoin, 1996; Sato, 1999; Camerlenghi et al., 2018). An explicit expression for the multivariate Lévy density is only available when $\sigma = 1/2$, since we need the density of the stable infinitely divisible distribution. In such case, setting $C = \alpha_0 (2\pi)^{-d/2} \sigma_0 \Gamma(1-\sigma_0)^{-1} 2^{(d-\sigma_0)/2 - 1} \Gamma((d-\sigma_0)/2)$,
$$\rho_h(s_1, \dots, s_d) = C \prod_{i=1}^d s_i^{-3/2} \Big(\sum_{i=1}^d \frac{1}{s_i}\Big)^{-\frac{d-\sigma_0}{2}}.$$

3 Normalization of hierarchical CRVs

One of the most common uses of completely random measures in Bayesian statistics is their normalization (Regazzini et al., 2003), which defines random probabilities whose law can act as nonparametric priors. The same construction can be extended to vectors of dependent random measures, such as hierarchical CRVs. In this section, we provide conditions for the normalization to be well-defined and investigate connections with popular models, such as the hierarchical Dirichlet process (Teh et al., 2006; Camerlenghi et al., 2019b) and the hierarchical normalized $\sigma$-stable process (Camerlenghi et al., 2019b). The final part of the section discusses general techniques to measure dependence of normalized hierarchical CRVs.

For the hierarchical CRV $\tilde{\boldsymbol\mu} \sim \mathrm{hCRV}(\rho, \rho_0, P_0)$, we derive a vector of dependent random probabilities as
$$\frac{\tilde{\boldsymbol\mu}}{\tilde{\boldsymbol\mu}(\mathbb{X})} := \Big(\frac{\tilde\mu_1}{\tilde\mu_1(\mathbb{X})}, \dots, \frac{\tilde\mu_d}{\tilde\mu_d(\mathbb{X})}\Big), \qquad (2)$$
which is well-defined if $0 < \tilde\mu_i(\mathbb{X}) < +\infty$ a.s., for $i = 1, \dots, d$. The upper bound forces $P_0$ to be a finite measure, and thus we can assume without loss of generality that $P_0$ is a probability measure.
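Returning for a moment to Examples 1 and 2: both closed-form exponents can be checked numerically against the composition rule $\psi_h(\boldsymbol\lambda) = \psi_0(\sum_i \psi(\lambda_i))$ of Theorem 1, using the one-dimensional Laplace exponents $\psi(\lambda) = \alpha \log(1 + \lambda/b)$ for a gamma CRM and $\psi(\lambda) = \alpha \lambda^\sigma$ for a stable CRM (a sketch; the parameter values below are arbitrary):

```python
import math

def psi_gamma(lam, a, b):        # Laplace exponent of a gamma CRM: a*log(1 + lam/b)
    return a * math.log(1.0 + lam / b)

def psi_stable(lam, a, sigma):   # Laplace exponent of a sigma-stable CRM: a*lam**sigma
    return a * lam ** sigma

def hcrv_exponent(lams, psi0, psi):
    # Theorem 1 composition rule: psi_h(lam) = psi0(sum_i psi(lam_i))
    return psi0(sum(psi(l) for l in lams))

a, b, a0, b0 = 2.0, 1.5, 3.0, 0.7
lams = [0.3, 1.2, 2.5]

# gamma-gamma (Example 1): alpha0 * log(1 + (alpha/b0) * sum_i log(1 + lam_i/b))
lhs = hcrv_exponent(lams, lambda u: psi_gamma(u, a0, b0), lambda l: psi_gamma(l, a, b))
rhs = a0 * math.log(1.0 + (a / b0) * sum(math.log(1.0 + l / b) for l in lams))
print(abs(lhs - rhs))

# stable-stable (Example 2): alpha0 * alpha**sigma0 * (sum_i lam_i**sigma)**sigma0
s, s0 = 0.4, 0.6
lhs2 = hcrv_exponent(lams, lambda u: psi_stable(u, a0, s0), lambda l: psi_stable(l, a, s))
rhs2 = a0 * a ** s0 * sum(l ** s for l in lams) ** s0
print(abs(lhs2 - rhs2))
```

Both differences vanish up to floating-point rounding, matching the two displayed formulas.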
The lower bound forces each $\tilde\mu_i$ to be infinitely active, that is, the corresponding Lévy measures to have infinite mass.

Lemma 2. Let $\tilde{\boldsymbol\mu} \sim \mathrm{hCRV}(\rho, \rho_0, P_0)$. Then each $\tilde\mu_i$ is infinitely active if and only if
$$\int_0^{+\infty} d\rho_0(t) = \int_0^{+\infty} d\rho(t) = +\infty.$$

Lemma 2 shows that $\tilde\mu_i$ is infinitely active if and only if both $\tilde\mu_0$ and $\tilde\mu_i \mid \tilde\mu_0$ are infinitely active.

Remark 1. The main subtlety of the proof of Lemma 2 is that $\rho$ has a finite mass if and only if $\mathrm{ID}(t\rho)$ gives positive probability to $\{0\}$, as nicely shown in Regazzini et al. (2003). In this case,
$$\int_{\Omega_d} dP^d_{\mathrm{ID}(t\rho)} \neq \int_{(0,+\infty)^d} dP^d_{\mathrm{ID}(t\rho)} = 1.$$
Therefore, we need $\rho$ to have infinite mass to conclude that $\int_{\Omega_d} d\rho_h(\mathbf{s}) = \int_0^{+\infty} d\rho_0(t)$ by Proposition 1 and the Fubini-Tonelli theorem.

The construction in (2) is similar to the normalized hierarchical model in Camerlenghi et al. (2019b); Catalano et al. (2024a), where however the base random measure is normalized as well, that is,
$$\tilde\mu_1, \dots, \tilde\mu_d \mid \tilde\mu_0 \sim \mathrm{CRM}\Big(\rho \otimes \frac{\tilde\mu_0}{\tilde\mu_0(\mathbb{X})}\Big); \qquad \tilde\mu_0 \sim \mathrm{CRM}(\rho_0 \otimes P_0).$$
This slight modification is crucial from the point of view of the overall law of $\tilde{\boldsymbol\mu}$, which is no longer a CRV. In particular,
the marginal random measures $\tilde\mu_i$'s are not CRMs.

Interestingly, at least two popular hierarchical specifications can be expressed in terms of a normalized hierarchical CRV. Recall that $\tilde{\mathbf P} = (\tilde P_1, \dots, \tilde P_d) \sim \mathrm{HDP}(\alpha, \alpha_0, P_0)$ is a hierarchical Dirichlet process (Teh et al., 2006) with concentration parameters $\alpha, \alpha_0 > 0$ and base probability $P_0$ if
$$\tilde P_1, \dots, \tilde P_d \mid \tilde P_0 \overset{iid}{\sim} \mathrm{DP}(\alpha \tilde P_0); \qquad \tilde P_0 \sim \mathrm{DP}(\alpha_0 P_0), \qquad (3)$$
where $\mathrm{DP}(\alpha_0 P_0)$ denotes a Dirichlet process (Ferguson, 1973) with base measure $\alpha_0 P_0$. We show that the normalization of the gamma-gamma hCRV in Example 1 recovers the HDP with a specific gamma prior on the concentration parameter. Here, $\mathrm{Gamma}(a, b)$ denotes the gamma distribution with shape $a$ and rate $b$.

Proposition 1. For parameters $\alpha, \alpha_0 > 0$ and $b, b_0 > 0$, and base measure $P_0$, let $\tilde{\boldsymbol\mu}$ and $\tilde{\mathbf P}$ be vectors of random measures such that
$$\tilde\mu_1, \dots, \tilde\mu_d \mid \tilde\mu_0 \overset{iid}{\sim} \mathrm{CRM}\Big(\alpha \frac{e^{-bs}}{s}\, ds \otimes \tilde\mu_0\Big), \qquad \tilde\mu_0 \sim \mathrm{CRM}\Big(\alpha_0 \frac{e^{-b_0 s}}{s}\, ds \otimes P_0\Big),$$
$$\tilde P_1, \dots, \tilde P_d \mid \tilde\alpha \sim \mathrm{HDP}(\tilde\alpha, \alpha_0, P_0), \qquad \tilde\alpha \sim \mathrm{Gamma}(\alpha_0, b_0/\alpha).$$
Then, using the notation in (2), it holds that $\tilde{\boldsymbol\mu}/\tilde{\boldsymbol\mu}(\mathbb{X}) \overset{d}{=} \tilde{\mathbf P}$.

Remarkably, the distribution of a normalized gamma-gamma hCRV depends in fact only on $\alpha_0$ and $\alpha/b_0$. Indeed, the role of the ratio $\alpha/b_0$ for identifiability is highlighted in Example 1, while $b$ is a scale parameter for $\tilde{\boldsymbol\mu}$ and disappears with the normalization; see also Figure 1 and Appendix C.6. In practice, one may restrict to $b = b_0 = 1$, without loss of generality.

Moreover, when the idiosyncratic component is a stable CRM, the normalized hierarchical model from Camerlenghi et al. (2019b) can be expressed as a normalized hierarchical CRV. This generalizes, with a different technique, a result of Camerlenghi et al. (2018), which assumes the base CRM to be a stable CRM as well. This fact is also observed for Lévy processes in Bertoin (1996). The key point is that, for a stable CRM $\tilde\mu$ with Lévy measure $\rho$, any proportional measure $c\tilde\mu$ with $c > 0$ is again a stable CRM with proportional Lévy measure $c'\rho$, for some $c' > 0$.

Proposition 2.
Let $\tilde{\boldsymbol\mu}^{(1)}$ and $\tilde{\boldsymbol\mu}^{(2)}$ be two vectors of random measures defined by
$$\tilde\mu^{(1)}_1, \dots, \tilde\mu^{(1)}_d \mid \tilde\mu_0 \overset{iid}{\sim} \mathrm{CRM}\Big(\frac{\alpha\sigma}{\Gamma(1-\sigma)} \frac{1}{s^{\sigma+1}}\, ds \otimes \tilde\mu_0\Big),$$
$$\tilde\mu^{(2)}_1, \dots, \tilde\mu^{(2)}_d \mid \tilde\mu_0 \overset{iid}{\sim} \mathrm{CRM}\Big(\frac{\alpha\sigma}{\Gamma(1-\sigma)} \frac{1}{s^{\sigma+1}}\, ds \otimes \frac{\tilde\mu_0}{\tilde\mu_0(\mathbb{X})}\Big),$$
where $\alpha > 0$, $0 < \sigma < 1$, and $\tilde\mu_0$ is an infinitely active CRM. Then,
$$\frac{\tilde{\boldsymbol\mu}^{(1)}}{\tilde{\boldsymbol\mu}^{(1)}(\mathbb{X})} \overset{d}{=} \frac{\tilde{\boldsymbol\mu}^{(2)}}{\tilde{\boldsymbol\mu}^{(2)}(\mathbb{X})}.$$

The simplest and most widely used quantity to measure the dependence of random probabilities $\tilde{\mathbf P} = \tilde{\boldsymbol\mu}/\tilde{\boldsymbol\mu}(\mathbb{X})$ is the pairwise linear correlation $\mathrm{Corr}(\tilde P_i(A), \tilde P_j(A))$, for a Borel set $A$. At the level of the random measures $\tilde{\boldsymbol\mu}$, its computation is a slight modification of the results in Catalano et al. (2024a, Section 8), where the expressions are derived by leveraging on the conditional independence structure. We report these results in Appendix B (Proposition 13) and present an alternative proof that builds on their jointly infinitely divisible structure. Interestingly, this second technique greatly simplifies the derivation in the normalized case.

Proposition 3. Let $\tilde{\boldsymbol\mu} \sim \mathrm{hCRV}(\rho, \rho_0, P_0)$, and let $\psi$ and $\psi_0$ denote the Laplace exponents of $\rho$ and $\rho_0$, respectively. For any Borel set $A$ s.t. $P_0(A) \neq 0, 1$, and for every $i \neq j$, the normalization $\tilde{\mathbf P} = \tilde{\boldsymbol\mu}/\tilde{\boldsymbol\mu}(\mathbb{X})$ satisfies $\mathrm E(\tilde P_i(A)) = P_0(A)$ and
$$\mathrm{Var}(\tilde P_i(A)) = -P_0(A)(1 - P_0(A)) \int_0^{+\infty} u\, e^{-\psi_0(\psi(u))} (\psi_0 \circ \psi)''(u)\, du,$$
$$\mathrm{Cov}(\tilde P_i(A), \tilde P_j(A)) = \mathrm{Var}\Big(\frac{\tilde\mu_0}{\tilde\mu_0(\mathbb{X})}(A)\Big) = -P_0(A)(1 - P_0(A)) \int_0^{+\infty} u\, e^{-\psi_0(u)} \psi_0''(u)\, du,$$
where $(\psi_0 \circ \psi)''(u) = \psi_0''(\psi(u))\psi'(u)^2 + \psi_0'(\psi(u))\psi''(u)$. In particular, for every $i \neq j$,
$$\mathrm{Corr}(\tilde P_i(A), \tilde P_j(A)) = \frac{\int_0^{+\infty} u\, e^{-\psi_0(u)} \psi_0''(u)\, du}{\int_0^{+\infty} u\, e^{-\psi_0(\psi(u))} (\psi_0 \circ \psi)''(u)\, du}.$$

Example 3. Consider $\tilde{\boldsymbol\mu}$ a gamma-gamma hCRV as in Example 1 with $b = b_0 = 1$ and let $A$ be a Borel set such that $P_0(A) \neq 0, 1$. The marginal and mixed moments of $\tilde{\boldsymbol\mu}$ and $\tilde{\mathbf P} = \tilde{\boldsymbol\mu}/\tilde{\boldsymbol\mu}(\mathbb{X})$ are
$$\mathrm E(\tilde\mu_i(A)) = \alpha_0\alpha\, P_0(A), \qquad \mathrm E(\tilde P_i(A)) = P_0(A),$$
$$\mathrm{Var}(\tilde\mu_i(A)) = \alpha_0\alpha(1+\alpha)\, P_0(A), \qquad \mathrm{Var}(\tilde P_i(A)) = \Big(1 + \frac{\alpha_0}{\alpha}\, e^{1/\alpha} E_{\alpha_0}(1/\alpha)\Big) \frac{P_0(A)(1 - P_0(A))}{1 + \alpha_0},$$
$$\mathrm{Cov}(\tilde\mu_i(A), \tilde\mu_j(A)) = \alpha_0\alpha^2\, P_0(A), \qquad \mathrm{Cov}(\tilde P_i(A), \tilde P_j(A)) = \frac{P_0(A)(1 - P_0(A))}{1 + \alpha_0},$$
$$\mathrm{Corr}(\tilde\mu_i(A), \tilde\mu_j(A)) = \frac{\alpha}{1 + \alpha}, \qquad \mathrm{Corr}(\tilde P_i(A), \tilde P_j(A)) = \Big(1 + \frac{\alpha_0}{\alpha}\, e^{1/\alpha} E_{\alpha_0}(1/\alpha)\Big)^{-1},$$
where $E_n(x) = \int_1^{+\infty} t^{-n} e^{-tx}\, dt = x^{n-1}\Gamma(1-n, x)$ is the generalized exponential integral function.

The expression of the correlation $\mathrm{Corr}(\tilde\mu_i(A), \tilde\mu_j(A))$ stands out for its tractability and interpretability (see Proposition 13). However, it is limited to pairwise comparisons, takes into account only the first two moments of the vector of random measures, and in principle does not detect independence. To overcome these limitations, we consider the Wasserstein index of dependence of Catalano et al. (2024b), which is defined for any homogeneous CRV with finite second moments and equal marginals. Remarkably, it equals 1 if and only if the random measures are equal almost surely and equals 0 if and only if the random measures are independent. This index is suitable for our hierarchical random measures, since we have proved that they are CRVs; however, it is not defined for their normalization.
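The unnormalized moments in Example 3 are easy to confirm by Monte Carlo, since for a fixed Borel set $A$ the gamma-gamma hierarchy reduces to nested gamma draws: $\tilde\mu_0(A) \sim \mathrm{Gamma}(\alpha_0 P_0(A), 1)$ and, given $\tilde\mu_0$, $\tilde\mu_i(A) \sim \mathrm{Gamma}(\alpha\tilde\mu_0(A), 1)$ independently across groups. A sketch with $b = b_0 = 1$ and illustrative parameter values:

```python
import random

def simulate_masses(alpha, alpha0, p0A, n_sims, seed=2):
    """Draws of (mu_1(A), mu_2(A)) for a gamma-gamma hCRV with b = b0 = 1."""
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(n_sims):
        m0 = rng.gammavariate(alpha0 * p0A, 1.0)      # mu_0(A)
        xs.append(rng.gammavariate(alpha * m0, 1.0))  # mu_1(A) | mu_0
        ys.append(rng.gammavariate(alpha * m0, 1.0))  # mu_2(A) | mu_0
    return xs, ys

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

alpha, alpha0, p0A = 2.0, 3.0, 0.5
xs, ys = simulate_masses(alpha, alpha0, p0A, 100_000)
print(sum(xs) / len(xs))   # theory: E mu_i(A) = alpha0 * alpha * P0(A) = 3
print(corr(xs, ys))        # theory: Corr(mu_i(A), mu_j(A)) = alpha / (1 + alpha) = 2/3
```

The sample mean and sample correlation land close to the theoretical values $\alpha_0\alpha P_0(A)$ and $\alpha/(1+\alpha)$; note the latter is free of $\alpha_0$ and $P_0(A)$, as the closed form predicts.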
Specifically, for $\tilde{\boldsymbol\mu} \sim \mathrm{CRV}(\rho \otimes P_0)$, where $\rho$ is a multivariate Lévy measure with identical marginals $\bar\rho$, the index is
$$I_W(\tilde{\boldsymbol\mu}) = 1 - \frac{W_*(\rho, \rho_{co})^2}{W_*(\rho_\perp, \rho_{co})^2},$$
where $W_*$ is the extended Wasserstein distance of order 2 (Figalli and Gigli, 2010; Guillen et al., 2019; Catalano et al., 2024b), $\rho_{co}$ is the comonotonic Lévy measure, supported on the diagonal, and $\rho_\perp$ is the independent Lévy measure, supported on the axes, that is,
$$d\rho_{co}(\mathbf s) = d\bar\rho(s_1) \prod_{i=2}^d d\delta_{s_1}(s_i), \qquad d\rho_\perp(\mathbf s) = \sum_{j=1}^d d\bar\rho(s_j) \prod_{i \neq j} d\delta_0(s_i). \qquad (4)$$
A general expression for $I_W(\tilde{\boldsymbol\mu})$ in terms of the tail integral of $\bar\rho$ and the pushforward measure $\Sigma_\#\rho$, where $\Sigma(\mathbf s) = \sum_{i=1}^d s_i$, can be found in Catalano et al. (2024b). Here we give a non-trivial specialization for the hierarchical case $\tilde{\boldsymbol\mu} \sim \mathrm{hCRV}(\rho, \rho_0, P_0)$ in terms of $\rho$ and $\rho_0$. To this end, denote by $M_r(\rho) = \int s^r\, d\rho(s)$ the $r$th moment of the Lévy measure $\rho$, and define $U_{\rho,\rho_0}(u) = \int_0^{+\infty} (1 - F_{\mathrm{ID}(t\rho)}(u))\, d\rho_0(t)$, where $F_P$ is the c.d.f. of a distribution $P$ on the real line.

Proposition 4. Let $\tilde{\boldsymbol\mu} \sim \mathrm{hCRV}(\rho, \rho_0, P_0)$ such that $\mathrm{ID}(\rho)$ has a p.d.f. Then $I_W(\tilde{\boldsymbol\mu})$ equals
$$1 - \frac{M_1(\rho_0)M_2(\rho) + M_2(\rho_0)M_1(\rho)^2 - \frac{1}{d} \int_0^{+\infty}\!\!\int_0^{+\infty} s\, U^{-1}_{\rho,\rho_0}\big(U_{d\rho,\rho_0}(s)\big)\, f_{\mathrm{ID}(dt\rho)}(s)\, ds\, d\rho_0(t)}{M_1(\rho_0)M_2(\rho) + M_2(\rho_0)M_1(\rho)^2 - \int_0^{+\infty}\!\!\int_0^{+\infty} s\, U^{-1}_{\rho,\rho_0}\big(d\, U_{\rho,\rho_0}(s)\big)\, f_{\mathrm{ID}(t\rho)}(s)\, ds\, d\rho_0(t)}.$$

4 Posterior analysis

Vectors of dependent random probability measures are commonly employed in Bayesian statistics to model partially exchangeable observations. Indeed, any infinitely active CRV can be used to this scope through normalization (2). Many models in the literature fall within this framework, including GM-dependent measures (Lijoi and Nipoti, 2014), compound random measures (Griffin and Leisen, 2017), Lévy copulas (Epifani and Lijoi, 2010), and thinned random measures (Lau and Cripps, 2022). In this section, we derive the expression of the posterior distribution for
a generic CRV, which can be seen as a multivariate extension of James et al. (2009) and a special case of FuRBI random measures (Ascolani et al., 2024) with shared atoms. We later specialize this result to hierarchical CRVs and explore the structure of the posterior in further detail.

Let $\mathbf X_i = (X_{i1}, \dots, X_{in_i})$ be the $i$th group of observations, for $i = 1, \dots, d$, modelled as
$$\mathbf X_1, \dots, \mathbf X_d \mid \tilde{\boldsymbol\mu} \sim \Big(\frac{\tilde\mu_1}{\tilde\mu_1(\mathbb{X})}\Big)^{n_1} \times \cdots \times \Big(\frac{\tilde\mu_d}{\tilde\mu_d(\mathbb{X})}\Big)^{n_d}, \qquad \tilde{\boldsymbol\mu} \sim \mathrm{CRV}(\nu), \qquad (5)$$
where $P^n$ denotes the $n$-fold product measure and $d\nu(\mathbf s, x) = d\rho_x(\mathbf s)\, dP_0(x)$ is a multivariate Lévy intensity with $P_0$ a diffuse probability. In the following, we often use the compact notation $\mathbf X_{1:d} = (\mathbf X_1, \dots, \mathbf X_d)$.

The next theorem provides a general expression for the posterior distribution $\tilde{\boldsymbol\mu} \mid \mathbf X_{1:d}$, showing that it preserves the CRV property, conditionally on a set of dependent latent variables. For this purpose, let $\mathbf X^* = (X^*_1, \dots, X^*_k)$ be the distinct observations in $\mathbf X_{1:d}$, that is, for every $j = 1, \dots, k$, there exists $X_{ih}$ such that $X_{ih} = X^*_j$, and $X^*_j \neq X^*_\ell$ for every $j \neq \ell$. Moreover, for every $i = 1, \dots, d$ and $j = 1, \dots, k$, denote by $n_{ij}$ the number of observations in $\mathbf X_i$ equal to $X^*_j$. This implies that, for every $i = 1, \dots, d$, the decomposition $n_i = n_{i1} + \cdots + n_{ik}$ holds. Finally, define the $d$ latent variables $\mathbf U = (U_1, \dots, U_d)$ with joint density
$$f_{\mathbf U}(\mathbf u) \propto \prod_{i=1}^d u_i^{n_i - 1}\, e^{-\psi(\mathbf u)} \prod_{j=1}^k \tau_{n_{1j}, \dots, n_{dj} \mid X^*_j}(\mathbf u), \qquad (6)$$
where $\psi$ is the Laplace exponent of $\tilde{\boldsymbol\mu}$ and $\tau_{\mathbf m \mid x}(\mathbf u) = \int_{\Omega_d} e^{-\mathbf u \cdot \mathbf s} \prod_{i=1}^d s_i^{m_i}\, d\rho_x(\mathbf s)$, for $\mathbf m \in \mathbb N^d$ and $\mathbf u \in \Omega_d$.

Theorem 3. Let $\mathbf X_1, \dots, \mathbf X_d$ follow model (5) with $d\nu(\mathbf s, x) = d\rho_x(\mathbf s)\, dP_0(x)$, where $P_0$ is a diffuse probability and $\rho_x$ is an infinitely active Lévy measure $P_0$-a.s. Then there exists $\mathbf U$ with p.d.f. (6) such that
$$\tilde{\boldsymbol\mu} \mid \mathbf X_{1:d} \overset{d}{=} \tilde{\boldsymbol\mu}^* + \sum_{j=1}^k \mathbf J_j \delta_{X^*_j},$$
where $\tilde{\boldsymbol\mu}^*$ and $(\mathbf J_1, \dots, \mathbf J_k)$ are conditionally independent random quantities, given $\mathbf U$, such that
(i) $\tilde{\boldsymbol\mu}^* \mid \mathbf U$ is a CRV with Lévy intensity $d\nu^*_{\mathbf U}(\mathbf s, x) = e^{-\mathbf U \cdot \mathbf s}\, d\nu(\mathbf s, x)$;
(ii) for each $j = 1, \dots, k$,
, J dj)is the vector of jumps at the fixed point of discontinuity X∗ j, shared among the groups, whose conditional law satisfies dPJj|U(s)∝dY i=1snij ie−U·sdρX∗ j(s). (7) We observe that ρxdoes not need a L´ evy density on Ω d. This can be seen as a tech- nical detail on (0 ,+∞) but it is actually relevant on Ω d, as some popular models, such as GM-dependent measures, do not have a L´ evy density. We refer to Catalano et al. (2024b, Lemma 6) for an expression of the multivariate L´ evy measure corresponding to GM-dependence measures, which is a weighted combination of the independent and comonotonic L´ evy measure in (4). We now specialize Theorem 3 to the case of hierarchical CRVs. For simplicity, we assume that ρandρ0have L´ evy densities on (0 ,+∞), denoted with the same nota- tion. In particular, this implies
that both ID(ρ) and ID(ρ0) have a p.d.f. (Sato, 1999, Theorem 27.7). The first nontrivial result characterizes the distribution of the vector of random measures ˜µ*, showing the conditional quasi-conjugacy of the model. Indeed, conditionally on U, we can interpret ˜µ* as a hierarchical CRV with heterogeneous marginal distributions.

Proposition 5. Let ρ and ρ0 have Lévy densities on (0, +∞) and let ˜µ ∼ hCRV(ρ, ρ0, P0). Conditionally on U, the CRV ˜µ* in Theorem 3 satisfies
$$\tilde\mu^* \mid \tilde\mu^*_0, U \sim \prod_{i=1}^{d}\mathrm{CRM}\big(e^{-U_i s}\rho(s)\,\mathrm{d}s \otimes \tilde\mu^*_0\big), \qquad \tilde\mu^*_0 \mid U \sim \mathrm{CRM}\big(e^{-\sum_{i=1}^{d}\psi(U_i)\, s}\rho_0(s)\,\mathrm{d}s \otimes P_0\big).$$

In order to sample from the posterior distribution of Theorem 3, one also has to sample the jumps J_j at fixed locations, and the latent variables U, which are d-dimensional random variables. The case in which d is small does not pose particular problems, and the sampling task can be performed via d-dimensional rejection sampling or approximated by a random walk Metropolis-Hastings scheme. However, when d is moderate or large, reducing the dimension of the proposal becomes essential. We describe a technique that reduces the sampling of the jumps to the sampling of 1-dimensional random variables. To this end, define
$$\bar\tau_m(u, t) = \int_0^{+\infty} s^m e^{-us} f_{ID(t\rho)}(s)\,\mathrm{d}s. \tag{8}$$

Proposition 6. Let ρ and ρ0 have Lévy densities on (0, +∞) and let ˜µ ∼ hCRV(ρ, ρ0, P0). Conditionally on the latent variables U, for every j = 1, ..., k, the jumps J_j = (J_{1j}, ..., J_{dj}) in Theorem 3 satisfy
$$J_{1j}, \dots, J_{dj} \mid U, J_{0j} \sim f_{J_j\mid U, J_{0j}}(s) = \prod_{i=1}^{d}\frac{s_i^{n_{ij}}\, e^{-U_i s_i}\, f_{ID(J_{0j}\rho)}(s_i)}{\bar\tau_{n_{ij}}(U_i, J_{0j})},$$
where J_{0j} is a random variable having p.d.f. f_{J_{0j}|U}(t) ∝ ∏_{i=1}^d ¯τ_{n_{ij}}(U_i, t) ρ0(t). Proposition 6 reduces the sampling of k d-dimensional jumps to the easier task of sampling k(d + 1) 1-dimensional variables. These random variables can be sampled exactly in the case of gamma-gamma hCRVs, as detailed in Section 5. For the latent variables U, the evaluation of their density (6) up to a normalizing constant requires computing τ_{m|x}.
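For a concrete instance, consider the gamma specification of Example 1: if ID(tρ) is a Gamma(αt, b) distribution (a reading of ours, consistent with the conditionally gamma jumps in Proposition 9(b) below), the integral (8) has the closed form ¯τ_m(u, t) = ((αt))_m · b^{αt}/(u + b)^{αt+m}. The following sketch checks this identity numerically; the function names are ours:

```python
import math

def tau_bar_gamma(m, u, t, alpha=1.0, b=1.0):
    """Assumed closed form of (8) when ID(t*rho) = Gamma(alpha*t, b):
    tau_bar_m(u, t) = ((alpha*t))_m * b**(alpha*t) / (u + b)**(alpha*t + m),
    evaluated on the log scale for numerical stability."""
    a = alpha * t
    return math.exp(math.lgamma(a + m) - math.lgamma(a)
                    + a * math.log(b) - (a + m) * math.log(u + b))

def tau_bar_numeric(m, u, t, alpha=1.0, b=1.0, n=200_000, upper=50.0):
    """Crude midpoint-rule evaluation of the defining integral (8):
    integral of s**m * exp(-u*s) * f_{Gamma(alpha*t, b)}(s) ds."""
    a = alpha * t
    h = upper / n
    total = 0.0
    for i in range(n):
        s = (i + 0.5) * h
        log_f = a * math.log(b) - math.lgamma(a) + (a - 1) * math.log(s) - b * s
        total += s ** m * math.exp(-u * s + log_f) * h
    return total
```

The closed form is what makes the normalizing constants ¯τ_{n_{ij}}(U_i, J_{0j}) of Proposition 6 cheap to evaluate in the gamma-gamma case.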
Although this task can be tackled directly on a case-by-case basis, the d-dimensional integral in the definition can be reduced to a 1-dimensional integral for hierarchical CRVs. Given its homogeneity, τ_{m|x} = τ_m does not depend on x.

Proposition 7. Let ρ and ρ0 have Lévy densities on (0, +∞) and let ˜µ ∼ hCRV(ρ, ρ0, P0). For m ∈ N^d and u ∈ Ω_d,
$$\tau_m(u) = \int_0^{+\infty}\prod_{i=1}^{d}\bar\tau_{m_i}(u_i, t)\,\rho_0(t)\,\mathrm{d}t.$$
Interestingly, τ_{n_{1j},...,n_{dj}}(U) is the normalizing constant for the jump J_{0j} in Proposition 6. This quantity can be computed in closed form in some cases, such as the gamma-gamma hCRV, and used as a building block for any computational method approximating the d-dimensional distribution of U. However, when d is large, reducing to lower-dimensional distributions becomes essential. Therefore, we propose an alternative representation of the latent variables U as conditionally independent gamma random variables with random rate parameters, which may be of interest also in the univariate case. For m ∈ N^k and t ∈ (0, +∞)^{k+1}, set m_• = Σ_{i=1}^k m_i and define
$$C(m; t) = \int_{(0,+\infty)^{k+1}} (s_0 + s_1 + \cdots + s_k)^{-m_\bullet}\, f_{ID(t_0\rho)}(s_0)\prod_{j=1}^{k} s_j^{m_j}\, f_{ID(t_j\rho)}(s_j)\,\mathrm{d}s. \tag{9}$$

Proposition 8. Let ρ and ρ0 have Lévy densities and let ˜µ ∼ hCRV(ρ, ρ0, P0). Then
$$U_1, \dots, U_d \mid \beta_1, \dots, \beta_d \sim \prod_{i=1}^{d}\mathrm{Gamma}(n_i, \beta_i).$$
Moreover, β_i = S_{i0} + S_{i1} + · · · + S_{ik}, for each i = 1, . . .
, d, where S_i = (S_{i0}, ..., S_{ik}) are conditionally independent vectors given T = (T_0, T_1, ..., T_k), whose p.d.f.s satisfy
$$f_{S_i\mid T}(s_{i0}, \dots, s_{ik}) \propto (s_{i0} + s_{i1} + \cdots + s_{ik})^{-n_i}\, f_{ID(T_0\rho)}(s_{i0})\prod_{j=1}^{k} s_{ij}^{n_{ij}}\, f_{ID(T_j\rho)}(s_{ij}),$$
$$f_T(t_0, \dots, t_k) \propto \prod_{i=1}^{d} C(n_{i1}, \dots, n_{ik}; t)\, f_{ID(\rho_0)}(t_0)\prod_{j=1}^{k}\rho_0(t_j).$$
Proposition 8 reduces the sampling of the d dependent latent variables U to the sampling of d conditionally independent 1-dimensional random variables β_1, ..., β_d, which represent the scale parameters in the distributions of U_1, ..., U_d. This sampling task can be approached through standard computational methods or further simplified, as described in the following section.

5 The gamma-gamma hierarchical CRV

5.1 Posterior representation

This section specializes the results of Section 4 to the fundamental example of gamma-gamma hCRVs, and provides some details for the subsequent implementation of posterior sampling algorithms. In the following, let ((t))_n = Γ(t + n)/Γ(t) denote the ascending factorial.

Proposition 9. Let ˜µ be a gamma-gamma hCRV, as in Example 1, and define the quantity
$$\lambda(U) = \frac{b_0}{\alpha} + \sum_{i=1}^{d}\log(1 + U_i/b).$$
Then, conditionally on the latent variables U, and for each j = 1, ..., k,
(a) the CRV ˜µ* in Proposition 5 is a hierarchy of conditionally gamma CRMs,
$$\tilde\mu^*_1, \dots, \tilde\mu^*_d \mid \tilde\mu^*_0, U \sim \prod_{i=1}^{d}\mathrm{CRM}\big(\alpha\, s^{-1} e^{-b(1 + U_i/b)s}\,\mathrm{d}s \otimes \tilde\mu^*_0\big); \qquad \tilde\mu^*_0 \mid U \sim \mathrm{CRM}\big(\alpha_0\, s^{-1} e^{-\alpha\,\lambda(U)\, s}\,\mathrm{d}s \otimes P_0\big);$$
(b) the jumps J_j in Proposition 6 are conditionally independent and, for i = 1, ..., d,
$$J_{ij} \mid U_i, J_{0j} \sim \mathrm{Gamma}\big(\alpha J_{0j} + n_{ij},\; b(1 + U_i/b)\big);$$
(c) the density of the rescaled random variable αJ_{0j} is proportional to
$$f_{\alpha J_{0j}\mid U}(t) \propto t^{-1} e^{-\lambda(U)t}\prod_{i=1}^{d}((t))_{n_{ij}}. \tag{10}$$
Details on sampling algorithms for the hierarchy in (a) are in Appendix C.5, where we describe an efficient implementation of Newton's method for inverting the exponential integral, based on a logarithmic transformation that guarantees convergence for each starting point.
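Part (b), together with the gamma cascade for U in Proposition 10 below, reduces one sweep of the posterior updates to plain gamma draws. A minimal sketch, assuming the current value of αT and the rescaled jumps αJ_{0j} are available from the other sampling steps, and reading the second Gamma argument throughout as a rate parameter (function and variable names are ours):

```python
import math
import random

def sample_latents(alpha_T, group_sizes, b0_over_alpha, rng=random):
    """Gamma cascade for the scaled latents U_i/b (cf. Proposition 10(a)):
    (b*beta_i) | alpha*T ~ Gamma(alpha*T, 1), then
    (U_i/b) | (b*beta_i) ~ Gamma(n_i, b*beta_i); finally lambda(U)."""
    U_over_b = []
    for ni in group_sizes:
        b_beta = rng.gammavariate(alpha_T, 1.0)
        U_over_b.append(rng.gammavariate(ni, 1.0 / b_beta))  # scale = 1/rate
    lam = b0_over_alpha + sum(math.log1p(u) for u in U_over_b)
    return U_over_b, lam

def sample_jumps(alpha_J0, U_over_b, counts, b, rng=random):
    """Independent draws J_ij | U_i, J_0j ~ Gamma(alpha*J_0j + n_ij,
    b*(1 + U_i/b)) from part (b); counts is the d x k matrix (n_ij)."""
    J = []
    for i, u in enumerate(U_over_b):
        rate = b * (1.0 + u)
        J.append([rng.gammavariate(alpha_J0[j] + counts[i][j], 1.0 / rate)
                  for j in range(len(alpha_J0))])
    return J
```

Both loops factorize over groups and distinct values, which is what allows the parallelization across groups and across distinct values discussed in Section 5.3.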
The nontrivial task of sampling the jumps J_j's is here reduced to sampling the variables αJ_{0j}'s in (c), whose densities can be computed up to normalizing constants. A convenient option is to resort to computational methods that generate approximate samples, such as a random walk Metropolis-Hastings algorithm; cf. Appendix C.1. Remarkably, we also develop an exact sampler in Section 5.2 below. The last step to sample from the posterior is sampling the latent U, whose characterization in Proposition 8 can be simplified through the change of variables T = T_0 + · · · + T_k and V_j = T_j/T.

Proposition 10. Let ˜µ be a gamma-gamma hierarchical CRV. The distribution of the latent variables U = (U_1, ..., U_d) can be decomposed as follows:
(a) for each i = 1, ..., d, one has U_i | β_i ∼ Gamma(n_{i•}, β_i), with β_i | T ∼ Gamma(αT, b);
(b) the joint density of αT and a random vector V = (V_0, ..., V_k) of auxiliary latent variables, supported on the k-dimensional unit simplex ∆_k, is proportional to
$$f_{\alpha T, V}(t, v) \propto t^{\alpha_0 - 1} e^{-(b_0/\alpha)t}\, v_0^{\alpha_0 - 1}\prod_{i=1}^{d}\frac{1}{((t))_{n_i}}\prod_{j=1}^{k}\Big(v_j^{-1}\prod_{i=1}^{d}((t v_j))_{n_{ij}}\Big). \tag{11}$$
For convenience of implementation, one can directly sample U_i/b. Indeed, these are the relevant quantities in Proposition 9 for sampling the jumps J_{i1}, ..., J_{ik} and random measure
˜µ*_i, and computing λ(U). Notably, the non-standard step for sampling U is the marginal sampling of the variable αT in (11), whose joint density with the auxiliary vector V is known up to a normalizing constant. In Appendix C.1 we describe a Metropolis-Hastings within Gibbs algorithm to sample from the marginal distribution of αT, while alternative procedures to obtain exact samples from αT are considered in Section 5.2 hereunder.

5.2 Exact sampling

As highlighted by Propositions 9-10, posterior sampling from the gamma-gamma hCRV mainly consists in sampling gamma random variables, except for two critical steps, namely the sampling of the random variables αJ_{01}, ..., αJ_{0k} in (10) and the marginal sampling of the random variable αT in (11). In both cases, one may resort to MCMC algorithms, based on Metropolis-Hastings steps, to get approximate samples from such distributions. In the following, we propose alternative algorithms based on analytical manipulations of their density functions, which instead allow for exact sampling. For this purpose, we introduce the coefficients S(q_1, ..., q_d; h), defined by the recursive relation
$$S(q_1, \dots, q_\ell + 1, \dots, q_d; h) = q_\ell\, S(q_1, \dots, q_d; h) + S(q_1, \dots, q_d; h - 1), \tag{12}$$
with boundary conditions S(0, ..., 0; 0) = 1 and S(q_1, ..., q_d; 0) = S(0, ..., 0; h) = 0 for q_• > 0 or h > 0. Remarkably, these coefficients can be seen as the natural generalization of the unsigned Stirling numbers of the first kind to blocked permutations. Indeed, S(q_1, ..., q_d; h) represents the number of permutations with h cycles in the Young subgroup of S_{q_•} (the group of permutations of q_• elements) induced by the integer partition (q_1, ..., q_d). For convenience of notation, let m_{ij} ∈ {0, 1} be the indicator for n_{ij} > 0, that is, m_{ij} = min(1, n_{ij}), and define m_{•j} = Σ_{i=1}^d m_{ij}.

Proposition 11. For each j = 1, ..., k, the density of αJ_{0j} in (10) can be rewritten as
$$f_{\alpha J_{0j}\mid U}(t) \propto \sum_{h_j = m_{\bullet j}}^{n_{\bullet j}} S(n_{1j}, \dots, n_{dj}; h_j)\, t^{h_j - 1} e^{-\lambda(U) t},$$
where the coefficients S(q_1, ..., q_d; h) are defined in (12). Therefore, the random variable αJ_{0j} is in fact a finite mixture of gamma distributions, with mixing weights p_{j h_j} ∝ λ(U)^{-h_j} Γ(h_j) S(n_{1j}, ..., n_{dj}; h_j), for h_j = m_{•j}, ..., n_{•j}. Note that since n_{•j} > 0, we have m_{•j} > 0, so the gamma distributions and mixing weights are properly defined. The evaluation of S(n_{1j}, ..., n_{dj}; h_j) via the recursive relation in (12) has computational cost O(n²_{•j}).

Remark 2. The proof of Proposition 11 shows that
$$S(n_{1j}, \dots, n_{dj}; h_j) = \sum_{(h_{1j},\dots,h_{dj})\in H}\prod_{i=1}^{d} S(n_{ij}, h_{ij}),$$
where H = {(h_{1j}, ..., h_{dj}) : h_{1j} + · · · + h_{dj} = h_j, m_{ij} ≤ h_{ij} ≤ n_{ij}}. A naive approach to their evaluation would involve d nested cycles, with a computational cost of O(∏_{i=1}^d n_{ij}), for each j = 1, ..., k. Remarkably, recognizing the recursive structure in (12) substantially reduces the computational cost to quadratic in the number of observations
n_{•j}. Following similar steps, and further marginalizing with respect to the auxiliary variables V, we can derive the marginal density of αT up to a normalizing constant.

Proposition 12. The density of the random variable αT in (11) can be written as
$$f_{\alpha T}(t) \propto t^{\alpha_0 - 1} e^{-(b_0/\alpha)t}\prod_{i=1}^{d}\frac{1}{((t))_{n_i}}\Big(\sum_{h=m}^{n}\frac{c_h}{((\alpha_0))_h}\, t^h\Big), \tag{13}$$
where m = Σ_{j=1}^k m_{•j} and the coefficients c_h are defined, for S(q_1, ..., q_d; h) as in (12), by
$$c_h = \sum_{\substack{h_1 + \cdots + h_k = h \\ m_{\bullet j} \le h_j \le n_{\bullet j}}}\;\prod_{j=1}^{k}\Gamma(h_j)\, S(n_{1j}, \dots, n_{dj}; h_j).$$
The evaluation of c_h, for h = m, ..., n, from the expression above may seem computationally expensive when k is large, as a naive approach would involve k nested cycles. However, the c_h's can be obtained through a sequence of discrete convolutions and computed at cost O(Σ_{j<ℓ} n_{•j} n_{•ℓ}), as detailed in Appendix C.2. Exact samples from the distribution of αT can be obtained by resorting to rejection sampling algorithms. Indeed, considering a real parameter r, (13) can be rewritten as
$$f_{\alpha T}(t) \propto t^{\alpha_0 + r - 1} e^{-(b_0/\alpha)t}\big(t^{-r} R(t)\big), \qquad R(t) = \prod_{i=1}^{d}\frac{1}{((t))_{n_i}}\Big(\sum_{h=m}^{n}\frac{c_h}{((\alpha_0))_h}\, t^h\Big),$$
where R(t) is a ratio of polynomials and a continuous function for t ≥ 0. The distribution of αT is thus absolutely continuous with respect to a Gamma(α_0 + r, b_0/α), with Radon-Nikodym derivative proportional to t^{-r}R(t). We construct a rejection sampling algorithm by proposing values from this gamma distribution and accepting with probability proportional to t^{-r}R(t). A necessary condition is for the Radon-Nikodym derivative to be bounded above: this is guaranteed in our setting when 0 ≤ r ≤ m − d. Within this interval, we choose the value of r that maximizes the acceptance probability, which is equivalent to maximizing r log t*(r) − log R(t*(r)) + (α_0 + r) log(b_0/α) − log Γ(α_0 + r), where t*(r) is the value of t that maximizes t^{-r}R(t). Details on the optimal choice of r can be found in Appendix C.3. The adaptive rejection sampling algorithm of Gilks and Wild (1992) considerably improves the acceptance rate of our rejection sampler.
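The coefficient computations behind Propositions 11-12 are easy to sketch. Below, `gen_stirling` implements the recursion (12) by growing one block at a time, `sample_alpha_J0` draws αJ_{0j} exactly from its gamma-mixture representation, and `coefficients_c` obtains the c_h's through the sequence of discrete convolutions just mentioned; all function names are ours:

```python
import math
import random

def gen_stirling(q):
    """S(q_1,...,q_d; h) for h = 0,...,q_bullet, via the recursion (12):
    growing one block from s to s+1 elements maps the array S(.; h)
    to s*S(.; h) + S(.; h-1), starting from S(0,...,0; 0) = 1."""
    S = [1.0]
    for ql in q:
        for s in range(ql):
            S = [(s * S[h] if h < len(S) else 0.0)
                 + (S[h - 1] if h >= 1 else 0.0)
                 for h in range(len(S) + 1)]
    return S

def sample_alpha_J0(n_j, lam, rng=random):
    """Exact draw of alpha*J_0j (Proposition 11): a finite mixture of
    Gamma(h, lam) with weights prop. to lam**(-h)*Gamma(h)*S(n_1j,...,n_dj; h)."""
    S = gen_stirling(n_j)
    m = sum(1 for nij in n_j if nij > 0)              # m_bullet_j
    w = [lam ** (-h) * math.factorial(h - 1) * S[h] for h in range(m, len(S))]
    u, acc = rng.random() * sum(w), 0.0
    for h, wh in enumerate(w, start=m):
        acc += wh
        if u <= acc:
            return rng.gammavariate(h, 1.0 / lam)     # rate lam -> scale 1/lam
    return rng.gammavariate(len(S) - 1, 1.0 / lam)

def coefficients_c(counts):
    """c_h of Proposition 12 by convolving, over j, the per-value arrays
    a_j[h] = Gamma(h)*S(n_1j,...,n_dj; h); counts is the d x k matrix (n_ij)."""
    c = [1.0]                                          # neutral for convolution
    for j in range(len(counts[0])):
        Sj = gen_stirling([row[j] for row in counts])
        aj = [math.factorial(h - 1) * Sj[h] if h >= 1 else 0.0
              for h in range(len(Sj))]
        c = [sum(c[p] * aj[h - p]
                 for p in range(max(0, h - len(aj) + 1), min(h + 1, len(c))))
             for h in range(len(c) + len(aj) - 1)]
    return c
```

For d = 1 the recursion reproduces the unsigned Stirling numbers of the first kind, and summing S(q_1, ..., q_d; h) over h recovers the order ∏ q_i! of the Young subgroup, which gives quick sanity checks on an implementation.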
Although computationally more convenient, theoretical guarantees are only available if the density of αT is logarithmically concave, which we are unable to prove in general. For numerical experiments, we consider the adaptive rejection sampler as an alternative to perform exact sampling of αT, and perform empirical checks for log-concavity.

5.3 Posterior sampling algorithms

Collecting the distributional results described above, as well as the details in Appendix C, we obtain complete algorithms to sample the jumps J at fixed locations in the posterior distribution (Theorem 5) induced by the gamma-gamma hCRV. The common structure of conditional dependencies within the sampling algorithms is summarized in Figure 1, where the computational bottlenecks are highlighted by red circles. The proposed algorithms allow for parallelization at different stages. Specifically, after the initial sampling of αT, one can parallelize across groups to sample the latent variables U (Proposition 10). Moreover, after the computation of λ(U), one can parallelize across distinct values to sample the conditionally independent variables αJ_{01}, ..., αJ_{0k}, and further parallelize across both groups and distinct values to sample the jumps J (Proposition 9). The MCMC step for posterior sampling of the jumps J is summarized in Algorithm 1. The state of the Markov chain consists of the latent variable
αT, auxiliary variables V, and jumps αJ_{01}, ..., αJ_{0k}, thus having dimension 2k + 1.

Figure 1: Conditional dependencies between random variables in Algorithms 1 and 2. Red circles represent the computational bottlenecks; quantities in empty circles are sampled from gamma distributions. Model parameters and observations enter the sampling scheme at different steps. For simplicity, variables are reported up to scaling w.r.t. model parameters, e.g. T instead of αT.

The exact posterior sampling procedure developed in Section 5.2 is instead summarized in Algorithm 2. This scheme outputs i.i.d. samples from the posterior, but requires a non-trivial initialization. The main computational bottleneck is represented by the rejection sampling step for the latent variable αT, which may suffer from low acceptance rates. Numerical illustrations demonstrating the effectiveness of the proposed algorithms are discussed in Appendix C.7. As highlighted in Proposition 1, the normalized gamma-gamma hCRV model coincides with the hierarchical Dirichlet process (HDP) with a particular gamma prior on the concentration parameter. Therefore, the algorithms described above can be compared with the standard marginal Gibbs sampling scheme for posterior analysis of the HDP, based on the restaurant franchise metaphor (Teh et al., 2006). In fact, our proposals present some interesting advantages. The MCMC approach in Algorithm 1 relies on a state of dimension 2k + 1, which is typically much smaller than n, the number of latent variables (allocations to tables) in the marginal algorithm for the HDP, thus potentially speeding up mixing. Moreover, running a Markov chain on the Euclidean space (0, ∞)^{k+1} × ∆_k is more convenient than working on the space of partitions, as many standard tools for MCMC diagnostics are designed for Euclidean spaces. On the other hand, Algorithm 2 allows for exact i.i.d. sampling from the posterior, thus avoiding any issue implied by MCMC schemes.
Algorithm 1 MCMC posterior sampling algorithm for jumps J
1: Current state: αT, V, (αJ_{01}, ..., αJ_{0k})
2: V ← Metropolis-Hastings step from the distribution of V | (αT)
3: (αT) ← Metropolis-Hastings step from the distribution of (αT) | V
4: for i = 1, ..., d do
5:   (bβ_i) | (αT) ∼ Gamma(αT, 1)
6:   (U_i/b) | (bβ_i) ∼ Gamma(n_i, bβ_i)
7: end for
8: Compute λ(U) = b_0/α + Σ_{i=1}^d log(1 + U_i/b)
9: for j = 1, ..., k do
10:   (αJ_{0j}) ← Metropolis-Hastings step from the distribution of (αJ_{0j}) | λ(U)
11: end for
12: for i = 1, ..., d do
13:   for j = 1, ..., k do
14:     J_{ij} | (αJ_{0j}), (U_i/b) ∼ Gamma(n_{ij} + αJ_{0j}, b(1 + U_i/b))
15:   end for
16: end for

5.4 Simulation studies

In the following, we carry out extensive simulation studies to compare the different algorithms in terms of computational time, as the dimensions of the input data grow. Specifically, we compare five algorithms: (i) the MCMC sampler in Algorithm 1 with gamma proposals (MH); (ii) the same MCMC sampler with Gaussian proposals on the log-scale (MHlog); (iii) the exact sampler in Algorithm 2 (exact); (iv) the alternative exact sampler based on adaptive rejection sampling (ARS); (v) the marginal Gibbs sampler of Teh et al. (2006) for the HDP with gamma prior (HDPpr). Algorithms are compared in terms of CPU time per effective sample,
averaging over 25 simulated datasets for each experimental setting. The burn-in time for algorithms (i) and (ii) and the initialization time for algorithms (iii) and (iv) are deducted from the total execution times.

Algorithm 2 Exact posterior sampling algorithm for jumps J
1: Initialize: Compute coefficients in Propositions 11-12, optimal parameter r, and upper bound t*(r)
2: while proposal is not accepted do
3:   Propose αT ∼ Gamma(α_0 + r, b_0/α)
4:   Accept proposal with probability proportional to (αT)^{-r} R(αT) (Section 5.2)
5: end while
6: for i = 1, ..., d do
7:   (bβ_i) | (αT) ∼ Gamma(αT, 1)
8:   (U_i/b) | (bβ_i) ∼ Gamma(n_i, bβ_i)
9: end for
10: Compute λ(U) = b_0/α + Σ_{i=1}^d log(1 + U_i/b)
11: for j = 1, ..., k do
12:   H_j | λ ∼ Categorical(p_{j m_{•j}}, ..., p_{j n_{•j}}) from precomputed weights (Section 5.2)
13:   (αJ_{0j}) | H_j, λ ∼ Gamma(H_j, λ)
14: end for
15: for i = 1, ..., d do
16:   for j = 1, ..., k do
17:     J_{ij} | (αJ_{0j}), (U_i/b) ∼ Gamma(n_{ij} + αJ_{0j}, b(1 + U_i/b))
18:   end for
19: end for

Figure 2a compares the algorithms for a growing number of groups d, while the number of observations per group n_i = 10 is fixed. Similarly, Figure 2b compares the algorithms for a growing number of observations per group n_i, while the number of groups d = 4 is fixed. In both cases, observations are sampled from a hierarchical Dirichlet process with concentration parameters α = 5 and α_0 = 3. Note that the parameter α_0 impacts the number of columns in the counts matrix (n_{ij})_{ij}, while the parameter α controls its sparsity. For the standard Gibbs sampler, the CPU time grows nearly linearly in the total number of observations, as expected given its sequential allocation structure. On the other hand, the exact sampling algorithms show an exponential growth rate: we believe this is due to the deterioration of the proposal in the rejection sampling step, which leads to increasingly lower acceptance rates. The adaptive rejection sampling approach consistently outperforms the scheme in Algorithm 2. Despite the lack of theoretical guarantees (Section 5.2), empirical checks on simulated datasets have found only sporadic violations of the log-concavity assumption, possibly due to numerical instability. Finally, the MCMC scheme in Algorithm 1 also seems to grow linearly with the number of groups (Figure 2a), but with a smaller slope than the standard Gibbs sampler, and appears even less affected by the number of observations per group (Figure 2b). We argue that the CPU time for MCMC algorithms may instead be affected by the number of clusters k, which is slowly increasing in the experimental settings considered above, as displayed in the bottom panels of Figure 2. To support this claim, we conduct a further simulation study fixing both the number of groups d = 5 and the number of observations per group n_i = 12, while allowing the number of clusters k to vary by sampling observations from the HDP model with concentration parameter α_0 ranging in the interval [2, 4]. In this setting, the exact algorithms display negligible dependence on the number of clusters, while the CPU time for MCMC schemes grows nearly linearly with k (Figure 3): this is in line with
the fact that the state of the Markov chain has dimension 2k + 1. Interestingly, this linear growth is partially mitigated by the better mixing due to the decrease in the number of possible configurations of the latent tables as k grows, for fixed n. This fact is also reflected by the slight decrease in CPU time for the standard Gibbs sampler.

Figure 2: (Top) CPU time per effective sample for different algorithms with (a) growing number of groups d, with n_i = 10 observations per group, and (b) growing number of observations per group n_i, with d = 4 groups. (Bottom) Number of clusters for each experimental setting.

In conclusion, the MCMC schemes should be preferred when the number of clusters k is small compared with the total number of observations n, regardless of the number of groups. In contrast, exact sampling algorithms should be preferred when few data are involved, as they provide exact samples from the posterior within a CPU time that is comparable to the alternative MCMC-based algorithms. Furthermore, our simulation studies show that the MCMC algorithm with Gaussian proposals on the log-scale should be preferred to the alternative with gamma proposals. Similarly, exact sampling based on ARS is preferable to standard rejection sampling, with the proviso that log-concavity should be empirically checked.

Figure 3: CPU time per effective sample for the different algorithms, with growing number of clusters k. The number of groups d = 5 and observations per group n_i = 12 are fixed.

6 Eliciting the dependence structure

Normalized hierarchical CRVs induce dependence between the marginal random probabilities, which in turn regulates the borrowing of information across different groups. Hierarchical models are a natural way to induce positive dependence, especially in a Bayesian setting. However, as highlighted in Catalano et al. (2024a), the elicitation of the dependence is more difficult than in other proposals because model parameters typically affect both the marginal distribution and the dependence across groups. For this reason, Catalano et al. (2024a) propose two kinds of (weak) flexibility for partially exchangeable models: (i) for every ρ ∈ [0, 1], there exists a specification of the parameters such that Corr(˜P_i(A), ˜P_j(A)) = ρ, where A is a fixed set s.t. E(˜P_i(A)) ≠ 0, 1; (ii) for every ρ ∈ [0, 1] and for every fixed value of the marginal mean E(˜P_i(A)) and variance Var(˜P_i(A)), there exists a specification of the parameters s.t. Corr(˜P_i(A), ˜P_j(A)) = ρ. The expressions in Example 3 show that the normalized gamma-gamma hCRV achieves the flexibility of the second kind. We now investigate the role of the parameter values in the borrowing of information, for fixed values of the marginal mean and variance.

Table 1: Limiting behaviours of variance and correlation parameters for the normalized gamma-gamma hCRV as a function of its parameters α, α_0 > 0, with Var(˜P_i(A)) = σ²P_0(A)(1 − P_0(A)) and Corr(˜P_i(A), ˜P_j(A)) = ρ.

                        Fixed α > 0                 Fixed α_0 > 0
                        α_0 → 0    α_0 → +∞         α → 0           α → +∞
  σ² = σ²(α, α_0)       1          0                1               1
  ρ  = ρ(α, α_0)        1          α/(1 + α)        1/(1 + α_0)     1

Let ˜µ be a gamma-gamma hCRV as in Example 1 and let A be a Borel set s.t. P_0(A) ≠ 0, 1. From Example 3, it is
apparent that correlation is not affected by the mean E(˜P_i(A)) = P_0(A) but is deeply related to the variance; we refer to Table 1 for some limiting behaviours. Since α and α_0 impact both the variance and the dependence structure, it may be difficult to elicit them in practice. From the point of view of the practitioner, it is certainly more intuitive to elicit mean, variance and correlation independently. Hence, one can fix σ², ρ ∈ (0, 1) such that Var(˜P_i(A)) = σ²P_0(A)(1 − P_0(A)) and Corr(˜P_i(A), ˜P_j(A)) = ρ, and then find the corresponding values of α and α_0 by solving the system of non-linear equations
$$\rho\Big(1 + \frac{\alpha_0}{\alpha}\, e^{1/\alpha}\, E_{\alpha_0}(1/\alpha)\Big) - 1 = 0, \qquad \sigma^2(1 + \alpha_0) - \frac{1}{\rho} = 0,$$
through standard numerical methods. Figure 4 displays the values of α, α_0 that correspond to a range of values for σ² and ρ.

Figure 4: Values of α and α_0 for the gamma-gamma hCRV corresponding to fixed values of variance σ² and correlation ρ, when the set A satisfies P_0(A) = 1/2.

To visualize the effect of the borrowing, we consider d = 3 groups of independent Poisson observations, each of size n_i = 10, with means equal to 2, 3 and 4. Figure 5 displays the expected values of the posterior random means E(∫ x d˜P_i(x) | X), which coincide with the means of the predictive distributions for the three groups, as we vary the prior pairwise correlation coefficient ρ ∈ (0, 1) and the prior variance through σ² ∈ (0, 1), keeping the base measure P_0 = N(10, 1) fixed. Posterior samples are obtained using the exact algorithm. For a fixed variance σ², a higher correlation ρ induces more borrowing, and thus the posterior means are closer to each other. Depending on σ², the estimates are closer to the prior (which pushes them towards higher values, being centered at 10) or to their empirical means. On the other hand, for a fixed correlation ρ, a higher variance σ² reduces the weight of the prior, and thus pushes the estimates towards their empirical means.
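This elicitation step is straightforward to carry out numerically. A small sketch under our reading of the system, in which E_ν denotes the generalized exponential integral E_ν(x) = ∫_1^∞ t^{-ν} e^{-xt} dt: α_0 follows in closed form from the second equation, and α is found by bisection on the first; all names, the quadrature, and the bisection strategy are ours:

```python
import math

def exp_integral_term(alpha, alpha0, n=20_000):
    """e**(1/alpha) * E_{alpha0}(1/alpha) via the substitution u = t - 1:
    integral over (0, inf) of (1 + u)**(-alpha0) * exp(-u/alpha) du,
    truncated where the exponential factor is negligible (midpoint rule)."""
    upper = 60.0 * alpha
    h = upper / n
    return sum((1.0 + (i - 0.5) * h) ** (-alpha0) * math.exp(-(i - 0.5) * h / alpha)
               for i in range(1, n + 1)) * h

def gg_params(sigma2, rho, lo=1e-4, hi=1e4):
    """Solve the elicitation system for the gamma-gamma hCRV:
    alpha0 from sigma2*(1 + alpha0) = 1/rho, then alpha by bisection on
    g(alpha) = rho*(1 + alpha0/alpha * e**(1/alpha)*E_{alpha0}(1/alpha)) - 1,
    which decreases from 1/sigma2 - 1 > 0 to rho - 1 < 0."""
    alpha0 = 1.0 / (rho * sigma2) - 1.0
    g = lambda a: rho * (1.0 + alpha0 / a * exp_integral_term(a, alpha0)) - 1.0
    for _ in range(60):
        mid = math.sqrt(lo * hi)          # bisection on the log scale
        if g(mid) > 0.0:
            lo = mid                       # root lies above mid
        else:
            hi = mid
    return math.sqrt(lo * hi), alpha0
```

As a design note, bisecting on the log scale is convenient because α may span several orders of magnitude across the (σ², ρ) grid of Figure 4.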
Interestingly, lower values of ρ can induce estimates that are closer to each other than higher values of ρ, depending on the value of σ². Indeed, low values of σ² will force estimates to be closer to the prior, and thus also closer to each other; e.g., compare ρ = σ² = 0.1 vs. ρ = 0.5 and σ² = 0.9. This is the effect of shrinkage, which can sometimes be difficult to distinguish from the borrowing of information, especially if the prior mean is close to the grand mean of the observations.

7 A fair comparison with the HDP

In this section, we compare the normalized gamma-gamma hCRV with the hierarchical Dirichlet process (3). Proposition 1 shows that the normalized gamma-gamma hCRV is equivalent to the HDP with a particular gamma prior on the concentration parameter θ. Hence, it might seem natural to compare the normalized gamma-gamma hCRV with the HDP by fixing the same parameters (α, α_0, P_0), as in Figure 6 (left plot), where P_0 = N(5, 1) and the data are the same as in Section 6. However, this approach may alter the comparison, since fixing the same parameters can entail very different marginal variance and correlation structures. In
particular, for the HDP(α_0, α, P_0), following Camerlenghi et al. (2019b),
$$E(\tilde P_i(A)) = P_0(A), \quad \mathrm{Var}(\tilde P_i(A)) = \frac{1 + \alpha + \alpha_0}{(1 + \alpha_0)(1 + \alpha)}\, P_0(A)(1 - P_0(A)), \quad \mathrm{Corr}(\tilde P_i(A), \tilde P_j(A)) = \frac{1 + \alpha}{1 + \alpha + \alpha_0}.$$
Table 2 reports some limiting behaviours, which notably differ from those of the gamma-gamma hCRV in Table 1 when α → ∞. Rather than considering the same values of (α, α_0), a fair comparison should consider the same values of the mean, variance and correlation, which are achieved by fixing
$$\alpha = \frac{1}{1 - \rho}\Big(\frac{1}{\sigma^2} - 1\Big), \qquad \alpha_0 = \frac{1}{\rho\,\sigma^2} - 1,$$
in the HDP; Figure 7 pictures the values of (α, α_0) as a function of (σ², ρ).

Figure 5: Posterior expected random means of the gamma-gamma hCRV model, for three groups of independent Poisson observations with means equal to 2, 3 and 4, each of size n_i = 10. Top: each plot has fixed variance parameter σ² and increasing correlation ρ. Bottom: each plot has fixed correlation and increasing variance. The prior mean P_0 = N(10, 1) is fixed for all plots. Posterior samples are obtained using the exact algorithm. The horizontal lines denote the empirical means.

Table 2: Limiting behaviours of variance and correlation parameters for the HDP as a function of its parameters α, α_0 > 0, with Var(˜P_i(A)) = σ²P_0(A)(1 − P_0(A)) and Corr(˜P_i(A), ˜P_j(A)) = ρ.

                        Fixed α > 0                 Fixed α_0 > 0
                        α_0 → 0    α_0 → +∞         α → 0           α → +∞
  σ² = σ²(α, α_0)       1          1/(1 + α)        1               1/(1 + α_0)
  ρ  = ρ(α, α_0)        1          0                1/(1 + α_0)     1

The right plot of Figure 6 compares the gamma-gamma hCRV with the HDP fixing the same mean, variance, and correlation. In contrast, the middle plot illustrates a common approach in which the correlation is fixed without accounting for variance.

Figure 6: Comparison between the gamma-gamma hCRV and the HDP models with three groups of independent Poisson observations with means equal to 2, 3 and 4, each of size n_i = 10. The two models have: the same hyperparameters α, α_0 (left); the same correlation and different variances, with σ² equal to 0.2 and 0.8, respectively (middle); the same variance and correlation (right). The prior mean P_0 = N(5, 1) is fixed for all plots. Posterior samples are obtained using the exact algorithm.

In conclusion, when fixing the same parameters α, α_0, one might mistakenly conclude that the gamma-gamma hCRV borrows much more information than the HDP. A similar misinterpretation arises when fixing the correlation without adjusting for variance. Ultimately, only by matching both variance and correlation does it become clear that the gamma-gamma hCRV exhibits only slightly more borrowing of information. This increase can be attributed to the hierarchical CRV leveraging not only the base measure but also the information contained in the concentration parameter.

References

Ascolani, F., Franzolini, B., Lijoi, A., and Prünster, I. (2024). Nonparametric priors with full-range borrowing of information. Biometrika, 111(3):945–969.
Barrios, E., Lijoi, A., Nieto-Barajas, L. E., and Prünster, I. (2013). Modeling with normalized random measure mixture models. Statistical Science, 28(3):313–334.
Beraha, M., Guglielmi, A., and Quintana, F. A. (2021). The semi-hierarchical Dirichlet process and its application to clustering homogeneous distributions. Bayesian Analysis,
16(4):1187–1219.
Bertoin, J. (1996). Lévy Processes. Cambridge Tracts in Mathematics. Cambridge University Press.

Figure 7: Values of α and α_0 for the HDP corresponding to fixed values of variance σ² and correlation ρ, when the set A satisfies P_0(A) = 1/2.

Bochner, S. (1955). Harmonic Analysis and the Theory of Probability. University of California Press, 1st edition.
Bryant, M. and Sudderth, E. (2012). Truly nonparametric online variational inference for hierarchical Dirichlet processes. In Advances in Neural Information Processing Systems, volume 25. Curran Associates, Inc.
Camerlenghi, F., Dunson, D. B., Lijoi, A., Prünster, I., and Rodriguez, A. (2019a). Latent nested nonparametric priors. Bayesian Analysis, 14(4):1303–1356.
Camerlenghi, F., Lijoi, A., Orbanz, P., and Prünster, I. (2019b). Distribution theory for hierarchical processes. The Annals of Statistics, 47:67–92.
Camerlenghi, F., Lijoi, A., and Prünster, I. (2021). Survival analysis via hierarchically dependent mixture hazards. The Annals of Statistics, 49(2):863–884.
Camerlenghi, F., Lijoi, A., and Prünster, I. (2018). Bayesian nonparametric inference beyond the Gibbs-type framework. Scandinavian Journal of Statistics, 45(4):1062–1091.
Campbell, T., Huggins, J. H., How, J. P., and Broderick, T. (2019). Truncated random measures. Bernoulli, 25(2):1256–1288.
Catalano, M., Del Sole, C., Lijoi, A., and Prünster, I. (2024a). A unified approach to hierarchical random measures. Sankhya A, 86(S1):255–287.
Catalano, M., Lavenant, H., Lijoi, A., and Prünster, I. (2024b). A Wasserstein index of dependence for random measures. Journal of the American Statistical Association, 119(547):2396–2406.
Catalano, M., Lijoi, A., and Prünster, I. (2021). Measuring dependence in the Wasserstein distance for Bayesian nonparametric models. The Annals of Statistics, 49:2916–2947.
Christensen, J. and Ma, L. (2020). A Bayesian hierarchical model for related densities by using Pólya trees.
Journal of the Royal Statistical Society. Series B (Statistical Methodology), 82(1):127–153.
Constantine, G. M. and Savits, T. H. (1996). A multivariate Faà di Bruno formula with applications. Transactions of the American Mathematical Society, 348(2):503–520.
Daley, D. and Vere-Jones, D. (2002). An Introduction to the Theory of Point Processes: Volume I: Elementary Theory and Methods. Probability and Its Applications. Springer.
Daley, D. and Vere-Jones, D. (2007). An Introduction to the Theory of Point Processes: Volume II: General Theory and Structure. Probability and Its Applications. Springer New York.
Das, S., Niu, Y., Ni, Y., Mallick, B. K., and Pati, D. (2024). Blocked Gibbs sampler for hierarchical Dirichlet processes. Journal of Computational and Graphical Statistics.
Del Sole, C., Lijoi, A., and Prünster, I. (2025). Principled estimation and prediction with competing risks: a Bayesian nonparametric approach. Submitted.
Dunson, D. B. and Park, J.-H. (2008). Kernel stick-breaking processes. Biometrika, 95(2):307–323.
Durante, D., Gaffi, F., Lijoi, A., and Prünster, I. (2025). Partially exchangeable stochastic block models for multilayer networks. Journal of the American Statistical Association, to appear.
Elliott, L. T., De Iorio, M., Favaro, S., Adhikari, K., and Teh, Y. W. (2019). Modeling population structure under hierarchical Dirichlet processes. Bayesian Analysis, 14(2):313–339.
Epifani, I. and Lijoi, A. (2010). Nonparametric priors for vectors of survival functions. Statist. Sinica, 20:1455–1484.
https://arxiv.org/abs/2505.02653v1
Ferguson, T. S. (1973). A Bayesian analysis of some nonparametric problems. The Annals of Statistics, 1(2):209–230.
Ferguson, T. S. and Klass, M. J. (1972). A representation of independent increment processes without Gaussian components. The Annals of Mathematical Statistics, 43(5):1634–1643.
Figalli, A. and Gigli, N. (2010). A new transportation distance between non-negative measures, with applications to gradients flows with Dirichlet boundary conditions. Journal de Mathématiques Pures et Appliquées, 94:107–130.
Fox, E. B., Sudderth, E. B., Jordan, M. I., and Willsky, A. S. (2011). A sticky HDP-HMM with application to speaker diarization. The Annals of Applied Statistics, 5(2A):1020–1056.
Gilks, W. R. and Wild, P. (1992). Adaptive rejection sampling for Gibbs sampling. Journal of the Royal Statistical Society. Series C (Applied Statistics), 41(2):337–348.
Griffin, J. E. and Leisen, F. (2017). Compound random measures and their use in Bayesian non-parametrics. Journal of the Royal Statistical Society Series B, 79:525–545.
Griffiths, T., Canini, K., Sanborn, A., and Navarro, D. (2007). Unifying rational models of categorization via the hierarchical Dirichlet process. In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 29.
Guillen, N., Mou, C., and Święch, A. (2019). Coupling Lévy measures and comparison principles for viscosity solutions. Transactions of the American Mathematical Society, 372:7327–7370.
Haines, T. S. and Xiang, T. (2011). Delta-dual hierarchical Dirichlet processes: A pragmatic abnormal behaviour detector. In 2011 IEEE International Conference on Computer Vision, pages 2198–2205.
Hardy, M. (2006). Combinatorics of partial derivatives. The Electronic Journal of Combinatorics, 13(1): Research Paper R1, 13 pp.
Horiguchi, A., Chan, C., and Ma, L. (2024).
A tree perspective on stick-breaking models in covariate-dependent mixtures. Bayesian Analysis, pages 1–28.
James, L. F., Lee, J., and Pandey, A. (2024). Bayesian analysis of generalized hierarchical Indian buffet processes for within and across group sharing of latent features. arXiv:2304.05244.
James, L. F., Lijoi, A., and Prünster, I. (2006). Conjugacy as a distinctive feature of the Dirichlet process. Scandinavian Journal of Statistics, 33(1):105–120.
James, L. F., Lijoi, A., and Prünster, I. (2009). Posterior analysis for normalized random measures with independent increments. Scandinavian Journal of Statistics, 36(1):76–97.
Kingman, J. F. C. (1967). Completely random measures. Pacific Journal of Mathematics, 21:59–78.
Lau, J. W. and Cripps, E. (2022). Thinned completely random measures with applications in competing risks models. Bernoulli, 28(1):638–662.
Lijoi, A. and Nipoti, B. (2014). A class of hazard rate mixtures for combining survival data from different experiments. Journal of the American Statistical Association, 109:802–814.
Lijoi, A., Nipoti, B., and Prünster, I. (2014). Bayesian inference with dependent normalized completely random measures. Bernoulli, 20:1260–1291.
Lijoi, A., Prünster, I., and Rigon, T. (2020). Sampling hierarchies of discrete random structures. Statistics and Computing, 30:1591–1607.
Lijoi, A., Prünster, I., and Rebaudo, G. (2023). Flexible clustering via hidden hierarchical Dirichlet priors. Scandinavian Journal of Statistics, 50(1):213–234.
Liu, J., Wade, S., and Bochkina, N. (2024). Shared differential clustering across
single-cell RNA sequencing datasets with the hierarchical Dirichlet process. Econometrics and Statistics.
MacEachern, S. N. (1999). Dependent nonparametric processes. In ASA Proceedings of the Section on Bayesian Statistical Science. Alexandria, VA: American Statistical Association.
MacEachern, S. N. (2000). Dependent Dirichlet processes. Technical report, Ohio State University.
Masoero, L., Camerlenghi, F., Favaro, S., and Broderick, T. (2018). Posterior representations of hierarchical completely random measures in trait allocation models. In NeurIPS Workshop on All of Bayesian Nonparametrics.
Müller, P., Quintana, F., and Rosner, G. (2004). A method for combining inference across related nonparametric Bayesian models. Journal of the Royal Statistical Society Series B, 66:735–749.
Nakamura, T., Nagai, T., and Iwahashi, N. (2011). Multimodal categorization by hierarchical Dirichlet process. In 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 1520–1525.
Quintana, F. A., Müller, P., Jara, A., and MacEachern, S. N. (2022). The dependent Dirichlet process and related models. Statistical Science, 37:24–41.
Regazzini, E., Lijoi, A., and Prünster, I. (2003). Distributional results for means of normalized random measures with independent increments. The Annals of Statistics, 31:560–585.
Ren, L., Dunson, D. B., and Carin, L. (2008). The dynamic hierarchical Dirichlet process. In Proceedings of the 25th International Conference on Machine Learning, ICML '08, pages 824–831, New York, NY, USA. Association for Computing Machinery.
Riva-Palacio, A. and Leisen, F. (2018). Bayesian nonparametric estimation of survival functions with multiple-samples information. Electronic Journal of Statistics, 12(1):1330–1357.
Rodríguez, A., Dunson, D. B., and Gelfand, A. E. (2008). The nested Dirichlet process. Journal of the American Statistical Association, 103(483):1131–1154.
Sato, K. (1999).
Lévy Processes and Infinitely Divisible Distributions. Cambridge University Press.
Schilling, R. L., Song, R., and Vondracek, Z. (2012). Bernstein Functions. De Gruyter, Berlin, Boston.
Sohn, K.-A. and Xing, E. P. (2009). A hierarchical Dirichlet process mixture model for haplotype reconstruction from multi-population data. The Annals of Applied Statistics, 3(2):791–821.
Sudderth, E. B., Torralba, A., Freeman, W. T., and Willsky, A. S. (2008). Describing visual scenes using transformed objects and parts. International Journal of Computer Vision, 77:291–330.
Taniguchi, T., Yoshino, R., and Takano, T. (2018). Multimodal hierarchical Dirichlet process-based active perception by a robot. Frontiers in Neurorobotics, 12:22.
Teh, Y. W. and Jordan, M. I. (2010). Hierarchical Bayesian nonparametric models with applications. In Hjort, N. L., Holmes, C. C., Müller, P., and Walker, S. G., editors, Bayesian Nonparametrics, pages 158–207. Cambridge University Press.
Teh, Y. W., Jordan, M. I., Beal, M. J., and Blei, D. M. (2006). Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566–1581.
Teh, Y. W., Kurihara, K., and Welling, M. (2007). Collapsed variational inference for HDP. In Platt, J., Koller, D., Singer, Y., and Roweis, S., editors, Advances in Neural Information Processing Systems, volume 20. Curran Associates, Inc.
Thibaux, R. and Jordan, M. I. (2007). Hierarchical beta processes and the Indian buffet process. In Meila, M. and Shen, X., editors, Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, volume 2
of Proceedings of Machine Learning Research, pages 564–571, San Juan, Puerto Rico. PMLR.
Vershik, A., Yor, M., and Tsilevich, N. (2004). On the Markov–Krein identity and quasi-invariance of the gamma process. Journal of Mathematical Sciences, 121:2303–2310.
Walker, S. and Damien, P. (2000). Representations of Lévy processes without Gaussian components. Biometrika, 87(2):477–483.
Wang, C., Paisley, J., and Blei, D. M. (2011). Online variational inference for the hierarchical Dirichlet process. In Gordon, G., Dunson, D., and Dudík, M., editors, Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, volume 15 of Proceedings of Machine Learning Research, pages 752–760.
Williamson, S., Dubey, A., and Xing, E. (2013). Parallel Markov chain Monte Carlo for nonparametric mixture models. In Proceedings of the 30th International Conference on Machine Learning, volume 28 of Proceedings of Machine Learning Research, pages 98–106. PMLR.
Wolpert, R. L. and Ickstadt, K. (1998). Simulation of Lévy random fields. In Dey, D., Müller, P., and Sinha, D., editors, Practical Nonparametric and Semiparametric Bayesian Statistics, pages 227–242. Springer New York.
Zavitsanos, E., Paliouras, G., and Vouros, G. A. (2011). Non-parametric estimation of topic hierarchies from texts with hierarchical Dirichlet processes. Journal of Machine Learning Research, 12(83):2749–2775.
Zhang, J. and Dassios, A. (2024). Posterior sampling from truncated Ferguson–Klass representation of normalised completely random measure mixtures. Bayesian Analysis, pages 1–31.

A Background on completely random measures

This appendix contains a concise account of completely random measures (Kingman, 1967) and their multivariate extension to vectors of random measures. In particular, we introduce the key notions of Lévy measure and Laplace exponent, which are frequently used in this work. Let $(\mathbb{X}, d_{\mathbb{X}})$ be a Polish space endowed with a distance $d_{\mathbb{X}}$.
The space $M_{\mathbb{X}}$ of boundedly finite measures on $\mathbb{X}$ is a Borel space with the topology of weak$^{\#}$ convergence (Daley and Vere-Jones, 2002). A random measure is a measurable function $\tilde\mu\colon \Omega \to M_{\mathbb{X}}$ from some probability space $\Omega$.

Definition 3. A random measure $\tilde\mu\colon \Omega \to M_{\mathbb{X}}$ is a completely random measure (CRM) if, given a finite collection of pairwise disjoint and bounded Borel sets $A_1, \dots, A_k$ of $\mathbb{X}$, the random variables $\tilde\mu(A_1), \dots, \tilde\mu(A_k)$ are mutually independent.

Kingman (1967) provides a remarkable representation theorem that decomposes any CRM $\tilde\mu$ into a unique sum $\tilde\mu = \mu_0 + \tilde\mu_f + \tilde\mu_c$ of three independent components: (i) a deterministic measure $\mu_0$, (ii) a random measure with fixed atoms $\tilde\mu_f$, and (iii) an a.s. discrete random measure without fixed atoms $\tilde\mu_c$. The use of CRMs as priors in Bayesian nonparametric models usually restricts to the third component. For this reason, unless otherwise specified, we only focus on this class of CRMs and henceforth assume $\tilde\mu = \tilde\mu_c$. Moreover, Kingman (1967) shows that for any CRM $\tilde\mu$ there exists a Poisson random measure $\tilde N$ on $(0,+\infty)\times\mathbb{X}$ such that, for any Borel set $A$ of $\mathbb{X}$,
$$\tilde\mu(A) = \iint_{(0,\infty)\times A} s \, \mathrm{d}\tilde N(s, x). \tag{14}$$
The mean measure $\nu$ of the Poisson random measure $\tilde N$ characterizes the law of $\tilde\mu$ and is termed the Lévy measure of the CRM, justifying the notation $\tilde\mu \sim \mathrm{CRM}(\nu)$. This measure
on $(0,+\infty)\times\mathbb{X}$ can have infinite mass on sets of the form $(0,\epsilon)\times A$. However, for every $\epsilon > 0$ and for every bounded Borel set $A$, it must satisfy the following constraints:

C1. the jump component is bounded away from the origin: $\nu((\epsilon,+\infty)\times A) < +\infty$;
C2. the jump component is integrable at the origin: $\iint_{(0,\epsilon)\times A} s \, \mathrm{d}\nu(s,x) < +\infty$;
C3. the atom component has no point masses: for every $x\in\mathbb{X}$, $\nu((0,+\infty)\times\{x\}) = 0$.

Details can be found in Daley and Vere-Jones (2007, Theorem 10.1.III). The identity (14) is used in Kingman (1967) to derive a Lévy–Khintchine representation of the Laplace transform of $\tilde\mu(A)$,
$$\log \mathbb{E}\left(e^{-\lambda\tilde\mu(A)}\right) = -\iint_{(0,+\infty)\times A} (1 - e^{-\lambda s}) \, \mathrm{d}\nu(s,x), \tag{15}$$
for every $\lambda > 0$. Therefore $\tilde\mu(A)$ has a pure-jump infinitely divisible distribution with Lévy measure $\mathrm{d}\rho_A(s) = \int_A \mathrm{d}\nu(s,x)$, which we compactly write as $\tilde\mu(A) \sim \mathrm{ID}(\rho_A)$; refer to Sato (1999, Theorem 8.1) for further details. For $A = \mathbb{X}$, the expression in (15) provides the Laplace exponent of the CRM, that is, $\psi(\lambda) = -\log \mathbb{E}(e^{-\lambda\tilde\mu(\mathbb{X})})$. Remarkably, thanks to the independence of the evaluations on disjoint sets, the Laplace transform characterizes the law of the CRM. The usual specification of Lévy intensities is through the disintegration
$$\mathrm{d}\nu(s,x) = \mathrm{d}\rho_x(s) \, \mathrm{d}P_0(x), \tag{16}$$
where $P_0$ is a $\sigma$-finite atomless measure (cf. C3) on $\mathbb{X}$ and $\rho_x$ is a $\sigma$-finite measure on $(0,+\infty)$, $P_0$-a.e., such that, for every $\epsilon > 0$ (cf. C1 and C2),
$$\text{(i)}\ \rho_x((\epsilon,+\infty)) < +\infty, \qquad \text{(ii)}\ \int_{(0,\epsilon)} s \, \mathrm{d}\rho_x(s) < +\infty. \tag{17}$$
Note that $P_0$ does not have to be a probability measure nor a measure with finite mass, unless we extend condition C1 to unbounded Borel sets. In that case, we can assume $P_0$ to be a probability measure, and the disintegration (16) is unique. Conditions (17) imply that $\rho_x$ is a Lévy measure on $(0,+\infty)$; when $\rho_x$ is absolutely continuous with respect to the Lebesgue measure, we term its Radon–Nikodym derivative the Lévy density. In our context, we are interested in two additional conditions.
Firstly, we consider Lévy measures $\rho_x$ that have infinite mass near the origin, that is, such that $\rho_x((0,+\infty)) = \infty$; we term such Lévy measures infinitely active. Secondly, we consider disintegrations in product form, which lead to the definition of a homogeneous CRM.

Definition 4. Let $\tilde\mu \sim \mathrm{CRM}(\nu)$ such that $\nu$ satisfies (16). We refer to $\tilde\mu$ as a homogeneous CRM if $\rho_x = \rho$ $P_0$-a.e., and write $\tilde\mu \sim \mathrm{CRM}(\rho \otimes P_0)$.

For a homogeneous $\tilde\mu \sim \mathrm{CRM}(\rho \otimes P_0)$ the Lévy–Khintchine representation in (15) simplifies considerably. Indeed, under homogeneity and assuming $P_0$ to be a probability measure, the Laplace exponent is
$$\psi(\lambda) = \int_{(0,+\infty)} (1 - e^{-\lambda s}) \, \mathrm{d}\rho(s),$$
and $\log \mathbb{E}(e^{-\lambda\tilde\mu(A)}) = -P_0(A)\psi(\lambda)$. We observe that $\psi$ is a non-negative and infinitely differentiable function whose derivative is completely monotone. Moreover, it vanishes at $0$ and its derivative vanishes at $+\infty$. Thanks to the Lévy–Khintchine representation of Bernstein functions (see, e.g., Schilling et al. (2012, Theorem 3.2)), any such function is the Laplace exponent of a CRM. The two fundamental examples are the gamma CRM and the stable CRM. We recall their definitions, which are used as building blocks throughout the manuscript.

Definition 5. A random measure $\tilde\mu \sim \mathrm{CRM}(\rho \otimes P_0)$ is a gamma CRM of shape $\alpha > 0$ and rate
$b > 0$ if $\rho$ has Lévy density and Laplace exponent equal to, respectively,
$$\rho(s) = \alpha \frac{e^{-bs}}{s}; \qquad \psi(\lambda) = \alpha \log\!\left(1 + \frac{\lambda}{b}\right).$$

Definition 6. A random measure $\tilde\mu \sim \mathrm{CRM}(\rho \otimes P_0)$ is a stable CRM of shape $\alpha > 0$ and discount parameter $\sigma \in (0,1)$ if $\rho$ has Lévy density and Laplace exponent equal to, respectively,
$$\rho(s) = \frac{\alpha\sigma}{\Gamma(1-\sigma)} \frac{1}{s^{1+\sigma}}; \qquad \psi(\lambda) = \alpha\lambda^{\sigma}.$$

The notion of completely random measure can be naturally extended to vectors $\tilde{\boldsymbol\mu} = (\tilde\mu_1, \dots, \tilde\mu_d)$ of random measures, as in Definition 1. Following Catalano et al. (2021), we term them completely random vectors (CRVs). The representation in (14) can be extended to CRVs by considering a multivariate Lévy intensity $\nu$ on $\Omega_d \times \mathbb{X}$, where $\Omega_d = [0,+\infty)^d \setminus \{\mathbf{0}\}$. Similarly, the Laplace transform of the random vector $\tilde{\boldsymbol\mu}(A)$ is characterized by
$$\log \mathbb{E}\left(e^{-\boldsymbol\lambda \cdot \tilde{\boldsymbol\mu}(A)}\right) = -\iint_{\Omega_d \times A} \left(1 - e^{-\boldsymbol\lambda \cdot \mathbf{s}}\right) \mathrm{d}\nu(\mathbf{s}, x),$$
where $\boldsymbol\lambda = (\lambda_1, \dots, \lambda_d)$, $\mathbf{s} = (s_1, \dots, s_d) \in \Omega_d$, and $\cdot$ denotes the scalar product. The multivariate Laplace exponent is defined as
$$\psi(\boldsymbol\lambda) = -\log \mathbb{E}\left(e^{-\boldsymbol\lambda \cdot \tilde{\boldsymbol\mu}(\mathbb{X})}\right) = \iint_{\Omega_d \times \mathbb{X}} \left(1 - e^{-\boldsymbol\lambda \cdot \mathbf{s}}\right) \mathrm{d}\nu(\mathbf{s}, x). \tag{18}$$
Consistently with the univariate case, a CRV is homogeneous if $\nu = \rho \otimes P_0$, where $P_0$ is an atomless measure on $\mathbb{X}$ and $\rho$ is a $d$-dimensional Lévy measure on $\Omega_d$ such that
$$\text{(i)}\ \rho((\epsilon,+\infty)^d) < +\infty, \qquad \text{(ii)}\ \int_{[0,\epsilon)^d \setminus \{\mathbf{0}\}} \|\mathbf{s}\| \, \mathrm{d}\rho(\mathbf{s}) < +\infty.$$
Under homogeneity, if $P_0$ is a probability measure, the multivariate Laplace exponent is $\psi(\boldsymbol\lambda) = \int_{\Omega_d} (1 - e^{-\boldsymbol\lambda \cdot \mathbf{s}}) \, \mathrm{d}\rho(\mathbf{s})$ and $\log \mathbb{E}(e^{-\boldsymbol\lambda \cdot \tilde{\boldsymbol\mu}(A)}) = -P_0(A)\psi(\boldsymbol\lambda)$ for $\boldsymbol\lambda \in \Omega_d$. Finally, if $\tilde{\boldsymbol\mu} \sim \mathrm{CRV}(\nu)$, it easily follows from the definition that the marginal random measures $\tilde\mu_i$ are CRMs. Their Lévy measure can be obtained by marginalization of $\nu$ as $\mathrm{d}\nu_i(s_i, x) = \int_{\mathbf{s}_{-i} \in [0,+\infty)^{d-1}} \mathrm{d}\nu(\mathbf{s}, x)$, where $\mathbf{s}_{-i} = (s_1, \dots, s_{i-1}, s_{i+1}, \dots, s_d)$.

B Proofs

In this section we often use $\mathcal{L}(X)$ to denote the probability law of a random object $X$.

Proof of Theorem 1

Step 1. Show that $\tilde{\boldsymbol\mu}$ is a CRV. Let $\tilde{\boldsymbol\mu}(A) = (\tilde\mu_1(A), \dots, \tilde\mu_d(A))$. We prove that, for every $A_1, \dots, A_k$ mutually disjoint sets of $\mathbb{X}$, the random vectors $\tilde{\boldsymbol\mu}(A_1), \dots, \tilde{\boldsymbol\mu}(A_k)$ are mutually independent.
Specifically, we show that the joint Laplace transform factorizes over the sets, that is, for coefficients $\lambda_{ij} > 0$, with $i = 1, \dots, d$ and $j = 1, \dots, k$,
$$\mathbb{E}\left(e^{-\sum_{j=1}^k \sum_{i=1}^d \lambda_{ij}\tilde\mu_i(A_j)}\right) = \prod_{j=1}^k \mathbb{E}\left(e^{-\sum_{i=1}^d \lambda_{ij}\tilde\mu_i(A_j)}\right).$$
This identity is proved exploiting the following properties: (i) $\tilde\mu_1, \dots, \tilde\mu_d$ are conditionally independent given $\tilde\mu_0$; (ii) since $\tilde\mu_i \mid \tilde\mu_0$ is a CRM, its evaluations on disjoint sets are independent; (iii) by the specification of the model, $\tilde\mu_i(A_j) \mid \tilde\mu_0 \sim \mathrm{ID}(\tilde\mu_0(A_j)\rho)$, where $\mathrm{ID}(\rho)$ denotes a pure-jump infinitely divisible distribution with Lévy measure $\rho$, and thus $\mathcal{L}(\tilde\mu_i(A_j) \mid \tilde\mu_0) = \mathcal{L}(\tilde\mu_i(A_j) \mid \tilde\mu_0(A_j))$; (iv) since $\tilde\mu_0$ is a CRM, $\tilde\mu_0(A_1), \dots, \tilde\mu_0(A_k)$ are independent; (v) again, $\tilde\mu_1, \dots, \tilde\mu_d$ are conditionally independent given $\tilde\mu_0$. This entails
$$\begin{aligned}
\mathbb{E}\left(e^{-\sum_{j=1}^k \sum_{i=1}^d \lambda_{ij}\tilde\mu_i(A_j)}\right)
&\overset{\text{(i)}}{=} \mathbb{E}\prod_{i=1}^d \mathbb{E}\left(e^{-\sum_{j=1}^k \lambda_{ij}\tilde\mu_i(A_j)} \,\middle|\, \tilde\mu_0\right)
\overset{\text{(ii)}}{=} \mathbb{E}\prod_{i=1}^d \prod_{j=1}^k \mathbb{E}\left(e^{-\lambda_{ij}\tilde\mu_i(A_j)} \,\middle|\, \tilde\mu_0\right) \\
&\overset{\text{(iii)}}{=} \mathbb{E}\prod_{i=1}^d \prod_{j=1}^k \mathbb{E}\left(e^{-\lambda_{ij}\tilde\mu_i(A_j)} \,\middle|\, \tilde\mu_0(A_j)\right)
\overset{\text{(iv)}}{=} \prod_{j=1}^k \mathbb{E}\prod_{i=1}^d \mathbb{E}\left(e^{-\lambda_{ij}\tilde\mu_i(A_j)} \,\middle|\, \tilde\mu_0(A_j)\right) \\
&\overset{\text{(v)}}{=} \prod_{j=1}^k \mathbb{E}\left(e^{-\sum_{i=1}^d \lambda_{ij}\tilde\mu_i(A_j)}\right).
\end{aligned}$$

Step 2. Derive the Laplace transform. From the specification of the model, it follows that $\tilde\mu_i(A) \mid \tilde\mu_0 \overset{\text{iid}}{\sim} \mathrm{ID}(\tilde\mu_0(A)\rho)$. Therefore,
$$\log \mathbb{E}\left(e^{-\boldsymbol\lambda \cdot \tilde{\boldsymbol\mu}(A)}\right) = \log \mathbb{E}\left(\mathbb{E}\left(e^{-\boldsymbol\lambda \cdot \tilde{\boldsymbol\mu}(A)} \,\middle|\, \tilde\mu_0\right)\right) = \log \mathbb{E}\prod_{i=1}^d e^{-\psi(\lambda_i)\tilde\mu_0(A)} =$$
$$\log \mathbb{E}\left(e^{-\tilde\mu_0(A)\sum_{i=1}^d \psi(\lambda_i)}\right) = -P_0(A)\,\psi_0\!\left(\sum_{i=1}^d \psi(\lambda_i)\right).$$

Step 3. Determine homogeneity and the Laplace exponent. The multivariate Lévy measure $\nu_h$ and the Laplace exponent $\psi_h$ of $\tilde{\boldsymbol\mu}$ are uniquely characterized by the Laplace transform. Given the product form of Step 2, it follows that $\nu_h = \rho_h \otimes P_0$ for some multivariate $\rho_h$, and the Laplace exponent satisfies
$$\psi_h(\boldsymbol\lambda) = \psi_0\!\left(\sum_{i=1}^d \psi(\lambda_i)\right).$$

Step 4. Determine the Lévy intensity. The Lévy intensity $\rho_h$ is uniquely characterized by the Laplace exponent
$$\psi_h(\boldsymbol\lambda) = \int_{\Omega_d} \left(1 - e^{-\boldsymbol\lambda \cdot \mathbf{s}}\right) \mathrm{d}\rho_h(\mathbf{s}).$$
We use the notation $L_\rho(\lambda) = \mathbb{E}(e^{-\lambda X}) = e^{-\psi(\lambda)}$, where $\lambda > 0$ and $X \sim \mathrm{ID}(\rho)$, to denote the Laplace transform of an infinitely divisible random variable with Lévy measure $\rho$ and Laplace exponent $\psi$. Notice that $L_\rho(\lambda)^t = L_{t\rho}(\lambda)$. Defining $X_i \overset{\text{iid}}{\sim} \mathrm{ID}(t\rho)$, one has
$$\begin{aligned}
\psi_h(\boldsymbol\lambda) = \psi_0\!\left(\sum_{i=1}^d \psi(\lambda_i)\right)
&= \int_0^{+\infty} \left(1 - e^{-\sum_{i=1}^d \psi(\lambda_i)\,t}\right) \mathrm{d}\rho_0(t)
= \int_0^{+\infty} \left(1 - \prod_{i=1}^d e^{-\psi(\lambda_i)t}\right) \mathrm{d}\rho_0(t) \\
&= \int_0^{+\infty} \left(1 - \prod_{i=1}^d L_\rho(\lambda_i)^t\right) \mathrm{d}\rho_0(t)
= \int_0^{+\infty} \left(1 - \prod_{i=1}^d L_{t\rho}(\lambda_i)\right) \mathrm{d}\rho_0(t) \\
&= \int_0^{+\infty} \left(1 - \prod_{i=1}^d \mathbb{E}_{X_i}\!\left(e^{-\lambda_i X_i}\right)\right) \mathrm{d}\rho_0(t)
= \int_0^{+\infty} \mathbb{E}_{\mathbf{X}}\!\left(1 - e^{-\sum_{i=1}^d \lambda_i X_i}\right) \mathrm{d}\rho_0(t) \\
&= \int_0^{+\infty} \int_{[0,+\infty)^d} \left(1 - e^{-\sum_{i=1}^d \lambda_i s_i}\right) \prod_{i=1}^d \mathrm{d}P_{\mathrm{ID}(t\rho)}(s_i) \, \mathrm{d}\rho_0(t) \\
&= \int_0^{+\infty} \int_{\Omega_d} \left(1 - e^{-\sum_{i=1}^d \lambda_i s_i}\right) \prod_{i=1}^d \mathrm{d}P_{\mathrm{ID}(t\rho)}(s_i) \, \mathrm{d}\rho_0(t) \\
&= \int_{\Omega_d} \left(1 - e^{-\boldsymbol\lambda \cdot \mathbf{s}}\right) \left(\int_0^{+\infty} \prod_{i=1}^d \mathrm{d}P_{\mathrm{ID}(t\rho)}(s_i) \, \mathrm{d}\rho_0(t)\right),
\end{aligned}$$
by exchanging the order of the integrals thanks to the Fubini–Tonelli theorem, and observing that the integrand vanishes at $\mathbf{s} = \mathbf{0}$. If $\mathrm{ID}(t\rho)$ has density $f_{\mathrm{ID}(t\rho)}$ with respect to the Lebesgue measure, then this is equal to
$$\int_{\Omega_d} \left(1 - e^{-\boldsymbol\lambda \cdot \mathbf{s}}\right) \left(\int_0^{+\infty} \prod_{i=1}^d f_{\mathrm{ID}(t\rho)}(s_i) \, \mathrm{d}\rho_0(t)\right) \mathrm{d}\mathbf{s}.$$

Proof of Lemma 1

We reason by contradiction. Since $\rho_h$ and $\rho_0$ are $\sigma$-finite measures, by the uniqueness of the disintegration of measures, $\rho_0$-a.e. it holds that
$$\frac{1}{t^d}\, H\!\left(\frac{s_1}{t}, \dots, \frac{s_d}{t}\right) \mathrm{d}\mathbf{s} = \prod_{i=1}^d f_{\mathrm{ID}(t\rho)}(s_i) \, \mathrm{d}\mathbf{s}.$$
This forces $H$ to be in product form, and thus there exists a density $H_1$ on $[0,+\infty)$ such that
$$\frac{1}{t}\, H_1\!\left(\frac{s}{t}\right) \mathrm{d}s = f_{\mathrm{ID}(t\rho)}(s) \, \mathrm{d}s, \quad \rho_0\text{-a.e.}$$
If $X$ is a random variable with density $H_1$, then $tX$ has density $f(s) = H_1(s/t)/t$. From the identity above, $tX$ has an infinitely divisible distribution with Lévy density $t\rho(s)$. This implies that $X$ is also infinitely divisible with Lévy density $\rho(s/t)$.
Therefore, its density depends on $t$, since $\rho(s)$ cannot be constant by (17). This contradicts $X \sim H_1$.

Proof of Theorem 2

Step 1. Express the condition in terms of Laplace exponents. Since hierarchical CRVs are CRVs, their law is uniquely determined by their multivariate Laplace exponent. The one-dimensional Laplace exponents $\psi_0^{(1)}$ and $\psi^{(2)}$, associated to $\rho_0^{(1)}$ and $\rho^{(2)}$ respectively, are strictly increasing by definition, which implies that they are invertible. Therefore, from the expression of Proposition 1, $\tilde{\boldsymbol\mu}_1$ and $\tilde{\boldsymbol\mu}_2$ have the same multivariate Laplace exponent if and only if
$$\sum_{i=1}^d \psi^{(1)} \circ (\psi^{(2)})^{-1}(s_i) = (\psi_0^{(1)})^{-1} \circ \psi_0^{(2)}\!\left(\sum_{i=1}^d s_i\right), \qquad P_0^{(1)} = P_0^{(2)},$$
for every $(s_1, \dots, s_d) \in \Omega_d$. Define $f = \psi^{(1)} \circ (\psi^{(2)})^{-1}$ and $g = (\psi_0^{(1)})^{-1} \circ \psi_0^{(2)}$. Since the inverse of a continuous function is continuous, $f, g\colon (0,+\infty) \to (0,+\infty)$ are continuous functions such that $f(0) = g(0) = 0$ and
$$f(s_1) + \cdots + f(s_d) = g(s_1 + \cdots + s_d). \tag{19}$$

Step 2. Show that $f = g$ and that both are linear functions. Let $s_1 = s$ and $s_2 = \cdots = s_d = 0$. Then, since $f(0) = 0$, we have $g(s) = f(s) + f(0) + \cdots + f(0) = f(s)$, and thus $g = f$. To prove linearity, we show that $f(s_1 + s_2) = f(s_1) + f(s_2)$ for every $s_1, s_2 > 0$ and $f(cs) = cf(s)$ for every $c, s > 0$. The first property follows by taking $s_3 = \cdots = s_d = 0$ in (19). As for the second, we prove it first for $c$ a natural number and then
for $c$ a rational number. We conclude by density, thanks to the continuity of $f$. For $n \in \mathbb{N}$ a natural number, by (19), $f(ns) = f(s + \cdots + s) = nf(s)$, while for $q = n/m$ a rational number, with $n, m \in \mathbb{N}$, $nf(s) = f(ns) = f(qms) = mf(qs)$, which implies $f(qs) = nf(s)/m = qf(s)$.

Step 3. Conclusion. We have proved that there exists $c > 0$ such that, for every $s > 0$, $\psi^{(1)} \circ (\psi^{(2)})^{-1}(s) = (\psi_0^{(1)})^{-1} \circ \psi_0^{(2)}(s) = cs$. Thus for every $s > 0$,
$$\psi_0^{(2)}(s) = \psi_0^{(1)}(cs), \qquad \psi^{(2)}(s) = \frac{1}{c}\,\psi^{(1)}(s), \qquad P_0^{(1)} = P_0^{(2)}.$$
Using a change of variable, this condition can be equivalently expressed in terms of the Lévy measures as
$$\rho_0^{(2)} = c_\#\rho_0^{(1)}; \qquad \rho^{(2)}(s) = \frac{1}{c}\,\rho^{(1)}(s); \qquad P_0^{(1)} = P_0^{(2)}.$$
If the Lévy measures have densities, this is equivalent to
$$\rho_0^{(2)}(s) = \frac{1}{c}\,\rho_0^{(1)}\!\left(\frac{s}{c}\right); \qquad \rho^{(2)}(s) = \frac{1}{c}\,\rho^{(1)}(s); \qquad P_0^{(1)} = P_0^{(2)},$$
which coincides with the statement of the proposition by substituting $c$ with $1/c$.

Proof of Lemma 2

Step 1. Find the Lévy measure $\rho_1$ of $\tilde\mu_i$. By Proposition 1, the multivariate Lévy measure of $\tilde{\boldsymbol\mu}$ is the measure on $\Omega_d$ satisfying
$$\mathrm{d}\rho_h(s_1, \dots, s_d) = \int_0^{+\infty} \prod_{i=1}^d \mathrm{d}P_{\mathrm{ID}(t\rho)}(s_i) \, \mathrm{d}\rho_0(t).$$
The marginal Lévy measure of each $\tilde\mu_i$ coincides with that of $\tilde\mu_1$, which is expressed as
$$\mathrm{d}\rho_1(s) = \int_{[0,+\infty)^{d-1}} \int_0^{+\infty} \mathrm{d}P_{\mathrm{ID}(t\rho)}(s) \prod_{i=2}^d \mathrm{d}P_{\mathrm{ID}(t\rho)}(s_i) \, \mathrm{d}\rho_0(t) = \int_0^{+\infty} \mathrm{d}P_{\mathrm{ID}(t\rho)}(s) \, \mathrm{d}\rho_0(t),$$
where we have exchanged the order of integration thanks to the Fubini–Tonelli theorem (the measures are positive and $\sigma$-finite).

Step 2. Show that if $\rho$ and $\rho_0$ are infinitely active, then $\rho_1$ is infinitely active. If $\rho$ is infinitely active, then $\mathrm{ID}(t\rho)$ gives zero probability to the origin, thus
$$\int_{\Omega_1} \mathrm{d}\rho_1(s) = \int_{\Omega_1} \int_0^{+\infty} \mathrm{d}P_{\mathrm{ID}(t\rho)}(s) \, \mathrm{d}\rho_0(t) = \int_0^{+\infty} \mathrm{d}\rho_0(t),$$
where again we have used the Fubini–Tonelli theorem to exchange the order of the integrals. Since $\rho_0$ is infinitely active, the last integral is $+\infty$.

Step 3. If $\rho_0$ is finitely active, then $\rho_1$ is finitely active. Since $\mathrm{ID}(t\rho)$ is a probability measure on $[0,+\infty)$, its mass on $(0,+\infty)$ is smaller than or equal to $1$. Thus, by Fubini–Tonelli,
$$\int_{\Omega_1} \mathrm{d}\rho_1(s) = \int_0^{+\infty} \int_{\Omega_1} \mathrm{d}P_{\mathrm{ID}(t\rho)}(s) \, \mathrm{d}\rho_0(t) \le \int_0^{+\infty} \mathrm{d}\rho_0(t) < +\infty,$$
where the last inequality is due to the finite activity of $\rho_0$.
Step 4. If $\rho$ is finitely active, then $\rho_1$ is finitely active. If $\rho$ is finitely active with total mass $a$, then $Y \sim \mathrm{ID}(t\rho)$ has positive mass at $0$, namely $\mathbb{P}(Y = 0) = e^{-ta}$. This follows by observing that $Y$ has a compound Poisson distribution, and $\mathbb{P}(Y = 0)$ coincides with the probability mass at $0$ of a Poisson random variable with rate $\lambda = at$. Thus, by the Fubini–Tonelli theorem,
$$\int_{\Omega_1} \mathrm{d}\rho_1(s) = \int_0^{+\infty} \int_{\Omega_1} \mathrm{d}P_{\mathrm{ID}(t\rho)}(s) \, \mathrm{d}\rho_0(t) = \int_0^{+\infty} \left(1 - e^{-at}\right) \mathrm{d}\rho_0(t) = \psi_0(a) < +\infty.$$

Proof of Proposition 1

The Dirichlet process arises as the normalization of a gamma CRM (Ferguson, 1973), thus
$$\left(\frac{\tilde\mu_1}{\tilde\mu_1(\mathbb{X})}, \dots, \frac{\tilde\mu_d}{\tilde\mu_d(\mathbb{X})}\right) \Bigg|\; \tilde\mu_0 \overset{\text{iid}}{\sim} \mathrm{DP}(\alpha\tilde\mu_0) = \mathrm{DP}\!\left(\alpha\tilde\mu_0(\mathbb{X})\, \frac{\tilde\mu_0}{\tilde\mu_0(\mathbb{X})}\right).$$
For a gamma CRM, the total mass $\tilde\mu_0(\mathbb{X})$ is independent from the normalization (Vershik et al., 2004, Lemma 1). Moreover, $\tilde\mu_0(\mathbb{X}) \sim \mathrm{Gamma}(\alpha_0, b_0)$ and thus $\alpha\tilde\mu_0(\mathbb{X}) \sim \mathrm{Gamma}(\alpha_0, b_0/\alpha)$ is independent of $\tilde P_0 = \tilde\mu_0/\tilde\mu_0(\mathbb{X})$.

Proof of Proposition 2

We prove that, conditionally on $\tilde\mu_0$, it holds that $\tilde{\boldsymbol\mu}^{(1)} \overset{d}{=} c(\tilde\mu_0)\,\tilde{\boldsymbol\mu}^{(2)}$ for some positive value $c(\tilde\mu_0)$ depending on $\tilde\mu_0$. This implies that the normalizations of $\tilde{\boldsymbol\mu}^{(1)}$ and $\tilde{\boldsymbol\mu}^{(2)}$ are equal in distribution.
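The gamma-CRM fact used in the proof of Proposition 1 (the total mass is independent of the normalization, which is Dirichlet distributed) can be checked empirically on a finite partition, where the evaluations of a gamma CRM reduce to independent gamma random variables. A minimal numerical sketch; the parameter values and the three-set partition are illustrative assumptions, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha0, b0 = 2.0, 1.0            # assumed shape/rate of the gamma CRM
p = np.array([0.2, 0.3, 0.5])    # assumed base probabilities P0(A_j) of a partition

# Evaluations of a gamma CRM on disjoint sets are independent gamma variables:
# mu0(A_j) ~ Gamma(alpha0 * P0(A_j), b0).
G = rng.gamma(shape=alpha0 * p, scale=1.0 / b0, size=(100_000, 3))
T = G.sum(axis=1)                # total mass mu0(X) ~ Gamma(alpha0, b0)
W = G / T[:, None]               # normalized weights ~ Dirichlet(alpha0 * p)

# Total mass is (empirically) uncorrelated with each normalized weight,
# consistent with the gamma-Dirichlet independence used in the proof.
for j in range(3):
    assert abs(np.corrcoef(T, W[:, j])[0, 1]) < 0.02
# The total mass has the expected mean alpha0 / b0.
assert abs(T.mean() - alpha0 / b0) < 0.05
```

The same gamma-to-Dirichlet reduction underlies the finite-dimensional marginals of the Dirichlet process.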
Conditionally on $\tilde\mu_0$, both $\tilde{\boldsymbol\mu}^{(1)}$ and $\tilde{\boldsymbol\mu}^{(2)}$ are CRMs, and we can express the condition $\tilde{\boldsymbol\mu}^{(1)} \overset{d}{=} c(\tilde\mu_0)\,\tilde{\boldsymbol\mu}^{(2)}$ in terms of their conditional Lévy measures $\rho^{(1)}$ and $\rho^{(2)}$ as
$$\rho^{(2)}(s) \otimes \frac{\tilde\mu_0}{\tilde\mu_0(\mathbb{X})} \overset{d}{=} c(\tilde\mu_0)\,\rho^{(1)}(c(\tilde\mu_0)s) \otimes \tilde\mu_0.$$
Plugging in the expressions for the stable Lévy measures, we need to find $c(\tilde\mu_0)$ such that
$$\frac{\alpha\sigma}{\Gamma(1-\sigma)} \frac{1}{s^{\sigma+1}} \otimes \frac{\tilde\mu_0}{\tilde\mu_0(\mathbb{X})} \overset{d}{=} \frac{\alpha\sigma}{\Gamma(1-\sigma)} \frac{1}{c(\tilde\mu_0)^\sigma} \frac{1}{s^{\sigma+1}} \otimes \tilde\mu_0.$$
The proof is concluded by choosing $c(\tilde\mu_0) = \tilde\mu_0(\mathbb{X})^{1/\sigma}$.

Proof of Proposition 3

We first state and prove a related result concerning the unnormalized random measures.

Proposition 13. Let $\tilde{\boldsymbol\mu} \sim \mathrm{hCRV}(\rho, \rho_0, P_0)$ and let $A$ be a Borel set such that $P_0(A) \neq 0, 1$. Then, for every $i \neq j$,
$$\begin{aligned}
\mathbb{E}(\tilde\mu_i(A)) &= P_0(A)\,M_1(\rho_0)\,M_1(\rho), \\
\mathrm{Var}(\tilde\mu_i(A)) &= P_0(A)\,M_1(\rho_0)\,M_2(\rho) + P_0(A)\,M_2(\rho_0)\,M_1(\rho)^2, \\
\mathrm{Cov}(\tilde\mu_i(A), \tilde\mu_j(A)) &= P_0(A)\,M_2(\rho_0)\,M_1(\rho)^2.
\end{aligned}$$
In particular, for every $i \neq j$,
$$\mathrm{cor}(\tilde\mu_i(A), \tilde\mu_j(A)) = \frac{M_2(\rho_0)\,M_1(\rho)^2}{M_1(\rho_0)\,M_2(\rho) + M_2(\rho_0)\,M_1(\rho)^2}.$$

Proof. The expressions can be derived (i) through the hierarchical structure, using the tower property and the laws of total (co)variance, or (ii) using the expression of the moments of (jointly) infinitely divisible distributions in terms of their Lévy measures. We provide a proof exploiting both techniques. Recall that, by Campbell's theorem, the mean and variance of an infinitely divisible random variable $X \sim \mathrm{ID}(\rho)$ satisfy
$$\mathbb{E}(X) = \int_0^{+\infty} s \, \mathrm{d}\rho(s) = M_1(\rho), \qquad \mathrm{Var}(X) = \int_0^{+\infty} s^2 \, \mathrm{d}\rho(s) = M_2(\rho).$$

Proof through the hierarchical structure. Since $\tilde\mu_i(A) \mid \tilde\mu_0 \sim \mathrm{ID}(\tilde\mu_0(A)\rho)$, by the tower property,
$$\mathbb{E}(\tilde\mu_i(A)) = \mathbb{E}(\mathbb{E}(\tilde\mu_i(A) \mid \tilde\mu_0)) = \mathbb{E}\!\left(\tilde\mu_0(A) \int_0^{+\infty} s \, \mathrm{d}\rho(s)\right) = P_0(A)\,M_1(\rho_0)\,M_1(\rho).$$
Similarly, by the law of total variance,
$$\begin{aligned}
\mathrm{Var}(\tilde\mu_i(A)) &= \mathbb{E}(\mathrm{Var}(\tilde\mu_i(A) \mid \tilde\mu_0)) + \mathrm{Var}(\mathbb{E}(\tilde\mu_i(A) \mid \tilde\mu_0)) \\
&= \mathbb{E}\!\left(\tilde\mu_0(A) \int_0^{+\infty} s^2 \, \mathrm{d}\rho(s)\right) + \mathrm{Var}\!\left(\tilde\mu_0(A) \int_0^{+\infty} s \, \mathrm{d}\rho(s)\right) \\
&= P_0(A)\,M_1(\rho_0)\,M_2(\rho) + P_0(A)\,M_2(\rho_0)\,M_1(\rho)^2.
\end{aligned}$$
Finally, by the law of total covariance and thanks to the conditional independence of the random measures $\tilde\mu_i$ and $\tilde\mu_j$ for $i \neq j$, given $\tilde\mu_0$,
$$\mathrm{Cov}(\tilde\mu_i(A), \tilde\mu_j(A)) = \mathrm{Cov}(\mathbb{E}(\tilde\mu_i(A) \mid \tilde\mu_0), \mathbb{E}(\tilde\mu_j(A) \mid \tilde\mu_0)) = \mathrm{Var}\!\left(\tilde\mu_0(A) \int_0^{+\infty} s \, \mathrm{d}\rho(s)\right) = P_0(A)\,M_2(\rho_0)\,M_1(\rho)^2.$$

Proof through joint infinite divisibility.
Since $\tilde\mu_i$ is a CRM, by considering its Lévy measure in Proposition 1,
$$\begin{aligned}
\mathbb{E}(\tilde\mu_i(A)) &= P_0(A) \int_0^{+\infty} s \int_0^{+\infty} \mathrm{d}P_{\mathrm{ID}(t\rho)}(s) \, \mathrm{d}\rho_0(t)
= P_0(A) \int_0^{+\infty} \int_0^{+\infty} s \, \mathrm{d}P_{\mathrm{ID}(t\rho)}(s) \, \mathrm{d}\rho_0(t) \\
&= P_0(A) \int_0^{+\infty} \mathbb{E}_{X \sim \mathrm{ID}(t\rho)}(X) \, \mathrm{d}\rho_0(t)
= P_0(A)\,M_1(\rho) \int_0^{+\infty} t \, \mathrm{d}\rho_0(t)
= P_0(A)\,M_1(\rho)\,M_1(\rho_0).
\end{aligned}$$
Similarly, for the variance,
$$\begin{aligned}
\mathrm{Var}(\tilde\mu_i(A)) &= P_0(A) \int_0^{+\infty} s^2 \int_0^{+\infty} \mathrm{d}P_{\mathrm{ID}(t\rho)}(s) \, \mathrm{d}\rho_0(t)
= P_0(A) \int_0^{+\infty} \int_0^{+\infty} s^2 \, \mathrm{d}P_{\mathrm{ID}(t\rho)}(s) \, \mathrm{d}\rho_0(t) \\
&= P_0(A) \int_0^{+\infty} \mathbb{E}_{X \sim \mathrm{ID}(t\rho)}(X^2) \, \mathrm{d}\rho_0(t)
= P_0(A) \int_0^{+\infty} \left(\mathrm{Var}_{X \sim \mathrm{ID}(t\rho)}(X) + \mathbb{E}_{X \sim \mathrm{ID}(t\rho)}(X)^2\right) \mathrm{d}\rho_0(t) \\
&= P_0(A) \int_0^{+\infty} \left(t M_2(\rho) + t^2 M_1(\rho)^2\right) \mathrm{d}\rho_0(t)
= P_0(A)\,M_2(\rho)\,M_1(\rho_0) + P_0(A)\,M_1(\rho)^2\,M_2(\rho_0).
\end{aligned}$$
For the covariance, we observe that, for any $(X_1, X_2) \sim \mathrm{ID}(\rho)$ jointly infinitely divisible random variables with a multivariate Lévy measure $\rho$, it holds that $\mathrm{Cov}(X_1, X_2) = \int_{\Omega_2} s_1 s_2 \, \mathrm{d}\rho(s_1, s_2)$; see, e.g., Sato (1999, 25.8). Thus,
$$\begin{aligned}
\mathrm{Cov}(\tilde\mu_i(A), \tilde\mu_j(A)) &= P_0(A) \int_{\Omega_2} s_1 s_2 \, \mathrm{d}\rho_h(s_1, s_2)
= P_0(A) \int_0^{+\infty} \int_{\Omega_2} s_1 s_2 \, \mathrm{d}P_{\mathrm{ID}(t\rho)}(s_1) \, \mathrm{d}P_{\mathrm{ID}(t\rho)}(s_2) \, \mathrm{d}\rho_0(t) \\
&= P_0(A) \int_0^{+\infty} \mathbb{E}_{X \sim \mathrm{ID}(t\rho)}(X)^2 \, \mathrm{d}\rho_0(t)
= P_0(A)\,M_1(\rho)^2 \int_0^{+\infty} t^2 \, \mathrm{d}\rho_0(t)
= P_0(A)\,M_1(\rho)^2\,M_2(\rho_0).
\end{aligned}$$

We now prove Proposition 3. Similarly to the proof of Proposition 13, we could derive the desired results in two ways: (i) exploiting the hierarchical structure, or (ii) using the fact that we are normalizing a CRV. When focusing on the normalization, exploiting the CRV structure is particularly convenient for deriving the mean and variance, whereas using the hierarchical structure leads to a straightforward calculation for the covariance. The univariate results in James et al. (2006) show that if $\tilde\mu \sim \mathrm{CRM}(\rho \otimes P_0)$, where $P_0$ is a probability measure and $\psi$ denotes its Laplace exponent, the normalization $\tilde P = \tilde\mu/\tilde\mu(\mathbb{X})$ satisfies
$$\mathbb{E}(\tilde P(A)) = P_0(A), \qquad \mathrm{Var}(\tilde P(A)) = -P_0(A)(1 - P_0(A)) \int_0^{+\infty} u\, e^{-\psi(u)}\, \psi''(u) \, \mathrm{d}u.$$
Since $\tilde P_i(A)$ is the normalization of $\tilde\mu_i \sim \mathrm{CRM}(\rho_h \otimes P_0)$, this immediately implies that $\mathbb{E}(\tilde P_i(A)) = P_0(A)$. Moreover, by considering the Laplace exponent of $\tilde\mu_i$ in Proposition 1,
$$\mathrm{Var}(\tilde P_i(A)) = -P_0(A)(1 - P_0(A)) \int_0^{+\infty} u\, e^{-\psi_0(\psi(u))}\, (\psi_0 \circ \psi)''(u) \, \mathrm{d}u.$$
By the law of total covariance and the conditional independence of the random probability measures $\tilde P_i$ and $\tilde P_j$ for $i \neq j$, given $\tilde\mu_0$,
$$\mathrm{Cov}(\tilde P_i(A), \tilde P_j(A)) = \mathrm{Cov}(\mathbb{E}(\tilde P_i(A) \mid \tilde\mu_0), \mathbb{E}(\tilde P_j(A) \mid \tilde\mu_0)) = \mathrm{Var}\!\left(\frac{\tilde\mu_0(A)}{\tilde\mu_0(\mathbb{X})}\right) = -P_0(A)(1 - P_0(A)) \int_0^{+\infty} u\, e^{-\psi_0(u)}\, \psi_0''(u) \, \mathrm{d}u.$$

Proof of Proposition 4

The starting point of the proof is Theorem 5 of Catalano et al. (2024b). For completeness, we report the statement with our notation. In the following, $\pi_i$ denotes the $i$-th projection map, so that the marginal $\rho_i$ of a multivariate Lévy measure $\rho$ can be expressed as the pushforward measure $\pi_{i\#}\rho$. The second moment of a univariate measure $\bar\rho$ is denoted by $M_2(\bar\rho) = \int_{\Omega_1} s^2 \, \mathrm{d}\bar\rho(s)$, whereas $U_{\bar\rho}(s) = \bar\rho((s,+\infty))$ denotes its tail integral, whose generalized inverse $U_{\bar\rho}^{-1}\colon t \mapsto \inf\{x > 0 : U_{\bar\rho}(x) < t\}$ coincides with the usual inverse if $U_{\bar\rho}$ is injective. In the following, $\rho^{\mathrm{co}}$ and $\rho^{\perp}$ denote the comonotonic and independent Lévy measures in (4).

Theorem 4 (Theorem 5, Catalano et al. (2024b)). Let $\rho$ be a multivariate Lévy measure on $\Omega_d$ with finite second moments and equal marginals $\pi_{i\#}\rho = \bar\rho$, for $i = 1, \dots, d$. Denote by $\rho^+ = \Sigma_\#\rho$ the (univariate) pushforward measure of $\Sigma(\mathbf{s}) = \sum_{i=1}^d s_i$. Then
$$\begin{aligned}
\mathcal{W}_*(\rho, \rho^{\mathrm{co}})^2 &= 2d\,M_2(\bar\rho) - 2\int_0^{+\infty} U_{\rho^+}^{-1}(s)\,U_{\bar\rho}^{-1}(s) \, \mathrm{d}s, \\
\mathcal{W}_*(\rho^{\perp}, \rho^{\mathrm{co}})^2 &= 2d\,M_2(\bar\rho) - 2d\int_0^{+\infty} s\,U_{\bar\rho}^{-1}(d\,U_{\bar\rho}(s)) \, \mathrm{d}\bar\rho(s).
\end{aligned}$$

In order to prove Proposition 4, we need to compute the relevant quantities in the case $\tilde{\boldsymbol\mu} \sim \mathrm{hCRV}(\rho, \rho_0, P_0)$, whose multivariate Lévy measure is given in Proposition 1. The marginal Lévy measure $\bar\rho$ coincides with $\rho_h$ when $d = 1$, that is, $\bar\rho(s) = \int_0^{+\infty} f_{\mathrm{ID}(t\rho)}(s) \, \mathrm{d}\rho_0(t)$. It follows that $M_2(\bar\rho) = \mathrm{Var}(\tilde\mu_i(\mathbb{X}))$ in Proposition 13. Moreover, the tail integral of $\bar\rho$ is
$$U_{\bar\rho}(u) = \int_u^{+\infty} \bar\rho(s) \, \mathrm{d}s = \int_0^{+\infty} \int_u^{+\infty} f_{\mathrm{ID}(t\rho)}(s) \, \mathrm{d}s \, \mathrm{d}\rho_0(t) = \int_0^{+\infty} \left(1 - F_{\mathrm{ID}(t\rho)}(u)\right) \mathrm{d}\rho_0(t),$$
which coincides with the definition of $U_{\rho,\rho_0}(u)$ in the statement. As for the tail integral $U_{\rho^+}$, by the definition of pushforward measure, since $\rho_h$ is continuous,
$$U_{\rho^+}(u) = \int_{(0,+\infty)^d} \mathbb{1}_{(u,+\infty)}(s_1 + \cdots + s_d)\,\rho_h(\mathbf{s}) \, \mathrm{d}\mathbf{s} = \int_0^{+\infty} \left(\int_{(0,+\infty)^d} \mathbb{1}_{(u,+\infty)}(s_1 + \cdots + s_d) \prod_{i=1}^d f_{\mathrm{ID}(t\rho)}(s_i) \, \mathrm{d}\mathbf{s}\right) \mathrm{d}\rho_0(t).$$
The expression inside the parentheses coincides with the survival function of the sum of $d$ independent infinitely divisible random variables with Lévy measure $t\rho$, which is infinitely divisible with Lévy measure $dt\rho$; it follows that
$$U_{\rho^+}(u) = \int_0^{+\infty} \left(1 - F_{\mathrm{ID}(dt\rho)}(u)\right) \mathrm{d}\rho_0(t) = U_{d\rho,\rho_0}(u).$$
The proof is concluded by considering the change of variable $s' = U_{\rho^+}^{-1}(s)$, and observing that the derivative of $U_{\rho^+}(s)$ is equal to $-\int_0^{+\infty} f_{\mathrm{ID}(dt\rho)}(s) \, \mathrm{d}\rho_0(t)$.

Proof of Theorem 3

Since the random probabilities $\tilde\mu_i/\tilde\mu_i(\mathbb{X})$ are a.s. discrete with random atoms from a continuous distribution, the model for the observations $X_{1:d} \mid \tilde{\boldsymbol\mu}$ is non-dominated. Therefore, we cannot rely on Bayes' theorem to find the posterior distribution, and we should use an alternative strategy, based on the multivariate Laplace functional. This proof can be seen as the multivariate extension of the proof in James et al. (2009). The multivariate Laplace functional
$$\mathbb{E}\exp\!\left(-\sum_{i=1}^d \int f_i \, \mathrm{d}\tilde\mu_i\right)$$
is defined for any non-negative measurable functions $f_1, \dots, f_d$ and characterizes the law of any vector of random measures. In particular, when $\tilde{\boldsymbol\mu} \sim \mathrm{CRV}(\nu)$, it satisfies
$$\log \mathbb{E}\exp\!\left(-\sum_{i=1}^d \int f_i \, \mathrm{d}\tilde\mu_i\right) = -\int_{\Omega_d \times \mathbb{X}} \left(1 - e^{-\sum_{i=1}^d s_i f_i(x)}\right) \mathrm{d}\nu(\mathbf{s}, x). \tag{20}$$
Our goal is to find an appropriate expression of $\mathbb{E}(e^{-\sum_{i=1}^d \int f_i \, \mathrm{d}\tilde\mu_i} \mid X_{1:d} = x_{1:d})$, for any non-negative measurable functions $f_1, \dots, f_d$.
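For a homogeneous gamma CRM, identity (20) can be sanity-checked numerically on a simple function $f = \sum_j c_j \mathbb{1}_{A_j}$: the right-hand side reduces to $-\sum_j P_0(A_j)\,\psi(c_j)$ with $\psi(\lambda) = \alpha\log(1+\lambda/b)$, while the left-hand side can be estimated by Monte Carlo from the independent gamma evaluations $\tilde\mu(A_j)$. A sketch under assumed parameter values (the values of $\alpha$, $b$, the partition, and the $c_j$ are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, b = 3.0, 2.0              # assumed gamma-CRM shape and rate
p = np.array([0.5, 0.3, 0.2])    # assumed P0(A_j) for a finite partition
c = np.array([0.7, 1.5, 0.4])    # assumed values of the simple function f on A_j

# Closed-form right-hand side of (20): -sum_j P0(A_j) * psi(c_j),
# with psi(lam) = alpha * log(1 + lam / b) the gamma Laplace exponent.
rhs = -np.sum(p * alpha * np.log1p(c / b))

# Monte Carlo left-hand side: mu(A_j) ~ Gamma(alpha * P0(A_j), b), independent
# across the disjoint sets A_j by complete randomness.
mu = rng.gamma(shape=alpha * p, scale=1.0 / b, size=(200_000, 3))
lhs = np.log(np.mean(np.exp(-(mu * c).sum(axis=1))))

# The two sides agree up to Monte Carlo error.
assert abs(lhs - rhs) < 0.01
```

The agreement reflects the fact that the Laplace functional of a homogeneous CRM factorizes over disjoint sets, which is exactly the independence property used throughout the proofs.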
Throughout the proof, we exploit the following properties of the conditional expectation, which hold for any random variables $X, Y$ and for any Borel set $A$ such that $\mathbb{P}(Y \in A) > 0$:

a. conditional expectation with respect to events: $\mathbb{E}(X \mid Y \in A) = \mathbb{E}(X\,\mathbb{1}_A(Y))/\mathbb{P}(Y \in A)$;
b. tower property: $\mathbb{E}(X) = \mathbb{E}(\mathbb{E}(X \mid Y))$ and $\mathbb{P}(A) = \mathbb{E}(\mathbb{1}_A) = \mathbb{E}(\mathbb{P}(A \mid Y))$.

Step 1. Express the conditional expectation in terms of events. By the dominated convergence theorem and the exchangeability of the observations,
$$\mathbb{E}\!\left(e^{-\sum_{i=1}^d \int f_i \mathrm{d}\tilde\mu_i} \,\middle|\, X_{1:d} = x_{1:d}\right) = \lim_{\epsilon \to 0} \mathbb{E}\!\left(e^{-\sum_{i=1}^d \int f_i \mathrm{d}\tilde\mu_i} \,\middle|\, X_{1:d} \in \prod_{i=1}^d \prod_{j=1}^k B_\epsilon^{n_{ij}}(x^*_j)\right),$$
where $\mathbf{x}^* = (x^*_1, \dots, x^*_k)$ are the unique values in the observations, with multiplicities $n_{i1}, \dots, n_{ik}$ for each group $i = 1, \dots, d$; see Section 4. Here, $B_\epsilon(x) = \{\omega : d(x, \omega) \le \epsilon\}$ denotes the ball of radius $\epsilon > 0$ centered at $x$, and we use $B_\epsilon^m(x) = B_\epsilon(x) \times \cdots \times B_\epsilon(x)$ for its $m$-fold Cartesian product. Without loss of generality, we always consider $\epsilon$ sufficiently small for the balls $\{B_\epsilon(x^*_j)\}_j$ to be pairwise disjoint.

Step 2. Condition with respect to events. Using property (a.),
$$\mathbb{E}\!\left(e^{-\sum_{i=1}^d \int f_i \mathrm{d}\tilde\mu_i} \,\middle|\, X_{1:d} = x_{1:d}\right) = \lim_{\epsilon \to 0} \frac{\mathbb{E}\!\left(e^{-\sum_{i=1}^d \int f_i \mathrm{d}\tilde\mu_i}\,\mathbb{1}_{\prod_{i=1}^d \prod_{j=1}^k B_\epsilon^{n_{ij}}(x^*_j)}(X_{1:d})\right)}{\mathbb{P}\!\left(X_{1:d} \in \prod_{i=1}^d \prod_{j=1}^k B_\epsilon^{n_{ij}}(x^*_j)\right)} =: \lim_{\epsilon \to 0} \frac{N_\epsilon(\mathbf{x}^*)}{D_\epsilon(\mathbf{x}^*)},$$
where we have introduced the notation $N_\epsilon(\mathbf{x}^*)$ for the numerator and $D_\epsilon(\mathbf{x}^*)$ for the denominator. In the next steps (3–6) we show that both numerator and denominator decrease at the same speed as $\epsilon \to 0$, namely
$$N_\epsilon(\mathbf{x}^*) = C_N \prod_{j=1}^k P_0(B_\epsilon(x^*_j)) + o\!\left(\prod_{j=1}^k P_0(B_\epsilon(x^*_j))\right), \qquad D_\epsilon(\mathbf{x}^*) = C_D \prod_{j=1}^k P_0(B_\epsilon(x^*_j)) + o\!\left(\prod_{j=1}^k P_0(B_\epsilon(x^*_j))\right),$$
for some constants $C_N, C_D > 0$, where $P_0$ is the diffuse base probability of the CRV. It then follows that the limit above coincides with $C_N/C_D$. Since the denominator is a special case of the numerator when $f_1 = \cdots = f_d = 0$, we focus on finding $C_N$ and then specialize the result to $C_D$.

Step 3.
Express the numerator in terms of $\tilde\mu$. By the tower property (b.),
$$N_\epsilon(x^*) = \mathbb{E}\Bigl(e^{-\sum_{i=1}^d\int f_i\, d\tilde\mu_i}\,\mathbb{P}\bigl(X_{1:d}\in\prod_{i=1}^d\prod_{j=1}^k B^{n_{ij}}_\epsilon(x^*_j)\,\big|\,\tilde\mu\bigr)\Bigr) = \mathbb{E}\Bigl(e^{-\sum_{i=1}^d\int f_i\, d\tilde\mu_i}\prod_{i=1}^d\prod_{j=1}^k\frac{\tilde\mu_i(B_\epsilon(x^*_j))^{n_{ij}}}{\tilde\mu_i(\mathbb{X})^{n_{ij}}}\Bigr) = \mathbb{E}\prod_{i=1}^d\frac{1}{\tilde\mu_i(\mathbb{X})^{n_i}}\, e^{-\int f_i\, d\tilde\mu_i}\prod_{j=1}^k\tilde\mu_i(B_\epsilon(x^*_j))^{n_{ij}}.$$

Step 4. Use the gamma trick to separate the integrand into independent components. Using the density of a gamma random variable with shape $n_i$ and rate $\tilde\mu_i(\mathbb{X})$, we rewrite
$$\frac{1}{\tilde\mu_i(\mathbb{X})^{n_i}} = \frac{1}{\Gamma(n_i)}\int_0^{+\infty} u_i^{n_i-1}\, e^{-\tilde\mu_i(\mathbb{X})u_i}\, du_i.$$
Henceforth, we adopt the compact notation $B_j = B_\epsilon(x^*_j)$ and $B_0 = \mathbb{X}\setminus\{B_1\sqcup\cdots\sqcup B_k\}$, with the convention $n_0 = n_{i0} = 0$ for $i=1,\dots,d$. Using the Fubini–Tonelli theorem and the independence property of a CRV on disjoint set-wise evaluations,
$$N_\epsilon(x^*) = \mathbb{E}\prod_{i=1}^d\frac{1}{\Gamma(n_i)}\int_0^{+\infty} u_i^{n_i-1}\prod_{j=0}^k e^{-\tilde\mu_i(B_j)u_i}\, du_i\prod_{j=0}^k e^{-\int_{B_j} f_i\, d\tilde\mu_i}\,\tilde\mu_i(B_j)^{n_{ij}}$$
$$= \frac{1}{\prod_{i=1}^d\Gamma(n_i)}\int_{(0,+\infty)^d}\prod_{i=1}^d u_i^{-1}\prod_{j=0}^k\mathbb{E}\prod_{i=1}^d u_i^{n_{ij}}\, e^{-\int_{B_j}(f_i(x)+u_i)\, d\tilde\mu_i(x)}\,\tilde\mu_i(B_j)^{n_{ij}}\, du = \frac{1}{\prod_{i=1}^d\Gamma(n_i)}\int_{(0,+\infty)^d}\prod_{i=1}^d u_i^{-1}\, n_\epsilon(u;x^*)\, du,$$
where
$$n_\epsilon(u;x^*) = \prod_{j=0}^k\mathbb{E}\prod_{i=1}^d e^{-\int_{B_j}(f_i(x)+u_i)\, d\tilde\mu_i(x)}\,\bigl(u_i\,\tilde\mu_i(B_j)\bigr)^{n_{ij}}.$$
We now study the asymptotic behaviour of the quantity $n_\epsilon(u;x^*)$.

Step 5. Express the integrand $n_\epsilon(u;x^*)$ in terms of the derivative of the multivariate Laplace functional. Let $\eta_1,\dots,\eta_d\ge 1$ be auxiliary quantities, such that $n_\epsilon(u;x^*)$ can be written as
$$n_\epsilon(u;x^*) = \prod_{j=0}^k\mathbb{E}\prod_{i=1}^d\lim_{\eta_i\to 1^+} e^{-\int_{B_j}(f_i(x)+\eta_i u_i)\, d\tilde\mu_i(x)}\,\bigl(u_i\,\tilde\mu_i(B_j)\bigr)^{n_{ij}} = \lim_{\eta\to 1^+}\prod_{j=0}^k\mathbb{E}\prod_{i=1}^d e^{-\int_{B_j}(f_i(x)+\eta_i u_i)\, d\tilde\mu_i(x)}\,\bigl(u_i\,\tilde\mu_i(B_j)\bigr)^{n_{ij}},$$
where $\eta\to 1^+$ is a compact notation for $\eta_i\to 1^+$ for
each $i=1,\dots,d$, and we have exchanged limit and expectation by the monotone convergence theorem. We observe that, for some $\ell\in\{1,\dots,d\}$,
$$\frac{\partial}{\partial\eta_\ell}\prod_{i=1}^d e^{-\int_{B_j}(f_i(x)+\eta_i u_i)\, d\tilde\mu_i(x)} = -u_\ell\,\tilde\mu_\ell(B_j)\prod_{i=1}^d e^{-\int_{B_j}(f_i(x)+\eta_i u_i)\, d\tilde\mu_i(x)}.$$
This formula can be applied recursively, using the convention $d^0/du^0 = \mathrm{Id}$, for $\mathrm{Id}$ the identity operator. Specifically,
$$n_\epsilon(u;x^*) = \lim_{\eta\to 1^+}\prod_{j=0}^k(-1)^{n_{\bullet j}}\,\mathbb{E}\,\frac{\partial^{n_{\bullet j}}}{\partial\eta_1^{n_{1j}}\cdots\partial\eta_d^{n_{dj}}}\prod_{i=1}^d e^{-\int_{B_j}(f_i(x)+\eta_i u_i)\, d\tilde\mu_i(x)},$$
where $n_{\bullet j} = n_{1j}+\cdots+n_{dj}$. Since $\eta_i\ge 1$ and $f_i\ge 0$, the derivative is bounded above by the product $\prod_{i=1}^d\bigl(u_i\,\tilde\mu_i(B_j)\bigr)^{n_{ij}}\exp(-u_i\,\tilde\mu_i(B_j))$, which has finite mean since the exponential decay of $\exp(-\sum_{i=1}^d u_i\,\tilde\mu_i(B_j))$ is not compromised by the slower polynomial growth of $\prod_{i=1}^d\bigl(u_i\,\tilde\mu_i(B_j)\bigr)^{n_{ij}}$. Therefore, the derivative is uniformly integrable in $\eta_1,\dots,\eta_d$ and we can exchange derivative and expectation. Remarkably, we need to introduce $\eta$ and cannot differentiate the expressions directly with respect to $u$: otherwise, the derivative would not be uniformly integrable for CRMs with unbounded moment measures, such as the $\sigma$-stable CRM. Using the expression (18) of the multivariate Laplace functional of a CRV, we obtain
$$n_\epsilon(u;x^*) = \lim_{\eta\to 1^+}\prod_{j=0}^k(-1)^{n_{\bullet j}}\frac{\partial^{n_{\bullet j}}}{\partial\eta_1^{n_{1j}}\cdots\partial\eta_d^{n_{dj}}}\,\mathbb{E}\,e^{-\sum_{i=1}^d\int_{B_j}(f_i(x)+\eta_i u_i)\, d\tilde\mu_i(x)} = \lim_{\eta\to 1^+}\prod_{j=0}^k(-1)^{n_{\bullet j}}\frac{\partial^{n_{\bullet j}}}{\partial\eta_1^{n_{1j}}\cdots\partial\eta_d^{n_{dj}}}\exp\Bigl(-\int_{\Omega_d\times B_j}\bigl(1-e^{-s\cdot(f(x)+\eta u)}\bigr)\, d\nu(s,x)\Bigr),$$
where we use the compact notation $f(x) = (f_1(x),\dots,f_d(x))$ and $\eta u = (\eta_1 u_1,\dots,\eta_d u_d)$. For $j=0,\dots,k$, define the function
$$g_j(\eta,u) = -\int_{\Omega_d\times B_j}\bigl(1-e^{-s\cdot(f(x)+\eta u)}\bigr)\, d\nu(s,x).$$

Step 6. Determine the asymptotic behaviour of the partial derivatives of $e^{-g_j}$. For $j=1,\dots,k$, the partial derivatives of $g_j$ satisfy, as $\epsilon\to 0$,
$$\frac{\partial^{n_{\bullet j}}}{\partial\eta_1^{n_{1j}}\cdots\partial\eta_d^{n_{dj}}}\, g_j(\eta,u) = (-1)^{n_{\bullet j}}\prod_{i=1}^d u_i^{n_{ij}}\int_{\Omega_d\times B_j}\prod_{i=1}^d s_i^{n_{ij}}\, e^{-s\cdot(f(x)+\eta u)}\, d\nu(s,x) = (-1)^{n_{\bullet j}}\prod_{i=1}^d u_i^{n_{ij}}\int_{\Omega_d}\prod_{i=1}^d s_i^{n_{ij}}\, e^{-s\cdot(f(x^*_j)+\eta u)}\, d\rho_{x^*_j}(s)\, P_0(B_j) + o(P_0(B_j)).$$
The first equality follows from uniform integrability of the derivative, which allows exchanging the integral and derivative operators; the second follows from the Lebesgue differentiation theorem for atomless measures on Polish spaces. The multivariate Faà di Bruno formula (Constantine and Savits, 1996) allows one to express multiple partial derivatives of a composite function. We introduce the notation as in Hardy (2006), who observed that partial derivatives of the type $\partial^{n_\bullet}/\partial\eta_1^{n_1}\cdots\partial\eta_d^{n_d}$ can be treated as maximally mixed partial derivatives of the type $\partial^{n_\bullet}/\partial v_1\cdots\partial v_{n_\bullet}$ by allowing for ties among the variables, which leads to more compact expressions. In particular, define $v_1=\cdots=v_{n_{1j}}=\eta_1$, and $v_{\sum_{i'=1}^{i-1}n_{i'j}+1}=\cdots=v_{\sum_{i'=1}^{i}n_{i'j}}=\eta_i$ for $i=2,\dots,d$, so that
$$(-1)^{n_{\bullet j}}\frac{\partial^{n_{\bullet j}}}{\partial\eta_1^{n_{1j}}\cdots\partial\eta_d^{n_{dj}}}\, e^{-g_j(\eta,u)} = e^{-g_j(\eta,u)}\sum_\pi\prod_{A\in\pi}\frac{\partial^{|A|}}{\prod_{i\in A}\partial v_i}\, g_j(\eta,u), \qquad (21)$$
where the sum is over all partitions $\pi$ of the numbers $\{1,\dots,n_{\bullet j}\}$. From this expression, one can derive a formula in terms of $\eta_1,\dots,\eta_d$ through appropriate combinatorial coefficients, retrieving the one in Constantine and Savits (1996). However, in our case, the combinatorial formulation above is not necessary, as we are only interested in the asymptotic behaviour as $\epsilon\to 0$. From the previous discussion, all terms
$\partial^{|A|}/(\prod_{i\in A}\partial v_i)\, g_j(\eta,u)$ are asymptotically equivalent up to a constant. Hence, from (21), the asymptotically slowest term is the summand corresponding to the partition $\pi$ with a minimal number of blocks, that is, $\pi = \{\{1,\dots,n_{\bullet j}\}\}$. Therefore, as $\epsilon\to 0$, (21) reads
$$(-1)^{n_{\bullet j}}\frac{\partial^{n_{\bullet j}}}{\partial\eta_1^{n_{1j}}\cdots\partial\eta_d^{n_{dj}}}\, e^{-g_j(\eta,u)} = \exp\Bigl(-\int_{\Omega_d\times B_j}\bigl(1-e^{-s\cdot(f(x)+\eta u)}\bigr)\, d\nu(s,x)\Bigr)\times\prod_{i=1}^d u_i^{n_{ij}}\int_{\Omega_d}\prod_{i=1}^d s_i^{n_{ij}}\, e^{-s\cdot(f(x^*_j)+\eta u)}\, d\rho_{x^*_j}(s)\, P_0(B_j) + o(P_0(B_j)).$$

Step 7. Determine the asymptotic behaviour of $n_\epsilon(u;x^*)$. Considering also the term $e^{-g_0}$, which does not vanish as $\epsilon\to 0$, we obtain that $n_\epsilon(u;x^*)$ is asymptotically equal to
$$n_\epsilon(u;x^*) = \exp\Bigl(-\int_{\Omega_d\times\mathbb{X}}\bigl(1-e^{-s\cdot(f(x)+u)}\bigr)\, d\nu(s,x)\Bigr)\times\prod_{i=1}^d u_i^{n_i}\prod_{j=1}^k\int_{\Omega_d}\prod_{i=1}^d s_i^{n_{ij}}\, e^{-s\cdot(f(x^*_j)+u)}\, d\rho_{x^*_j}(s)\prod_{j=1}^k P_0(B_j) + o\Bigl(\prod_{j=1}^k P_0(B_j)\Bigr),$$
where we have computed the limit for $\eta\to 1^+$ under the integral sign using monotone convergence.

Step 8. Determine the asymptotic behaviour of the numerator $N_\epsilon(x^*)$. From the relation between $n_\epsilon(u;x^*)$ and $N_\epsilon(x^*)$ in Step 4, by the monotone convergence theorem, we have that $N_\epsilon(x^*) = C_N\prod_{j=1}^k P_0(B_j) + o\bigl(\prod_{j=1}^k P_0(B_j)\bigr)$, where $C_N$ equals
$$\frac{1}{\prod_{i=1}^d\Gamma(n_i)}\int_{(0,+\infty)^d}\prod_{i=1}^d u_i^{n_i-1}\exp\Bigl(-\int_{\Omega_d\times\mathbb{X}}\bigl(1-e^{-s\cdot(f(x)+u)}\bigr)\, d\nu(s,x)\Bigr)\times\prod_{j=1}^k\int_{\Omega_d}\prod_{i=1}^d s_i^{n_{ij}}\, e^{-s\cdot(f(x^*_j)+u)}\, d\rho_{x^*_j}(s)\, du.$$

Step 9. Determine the expression of the Laplace functional a posteriori. By specializing the formula in Step 8 for $C_N$ to $f=0$, we determine the value of $C_D$, which can be conveniently expressed in terms of the multivariate Laplace exponent $\psi$ and cumulants $\tau_{n_{1j},\dots,n_{dj}|x^*_j}$ as
$$C_D = \frac{1}{\prod_{i=1}^d\Gamma(n_i)}\int_{(0,+\infty)^d}\prod_{i=1}^d u_i^{n_i-1}\, e^{-\psi(u)}\prod_{j=1}^k\tau_{n_{1j},\dots,n_{dj}|x^*_j}(u)\, du.$$
It follows that the Laplace functional a posteriori is equal to
$$\mathbb{E}\bigl(e^{-\sum_{i=1}^d\int f_i\, d\tilde\mu_i}\mid X_{1:d}=x_{1:d}\bigr) = \frac{\int_{(0,+\infty)^d}\prod_{i=1}^d u_i^{n_i-1}\, e^{-\int_{\Omega_d\times\mathbb{X}}(1-e^{-s\cdot(f(x)+u)})\, d\nu(s,x)}\prod_{j=1}^k\int_{\Omega_d}\prod_{i=1}^d s_i^{n_{ij}}\, e^{-s\cdot(f(x^*_j)+u)}\, d\rho_{x^*_j}(s)\, du}{\int_{(0,+\infty)^d}\prod_{i=1}^d u_i^{n_i-1}\, e^{-\psi(u)}\prod_{j=1}^k\tau_{n_{1j},\dots,n_{dj}|x^*_j}(u)\, du}.$$

Step 10. Interpret the denominator as the normalizing constant of latent random variables $U_1,\dots,U_d$. The multivariate Laplace exponent $\psi(u)$ can be retrieved in the numerator by multiplying and dividing by $e^{-\psi(u)}$.
Indeed,
$$\int_{\Omega_d\times\mathbb{X}}\bigl(1-e^{-s\cdot(f(x)+u)}\bigr)\, d\nu(s,x) - \psi(u) = \int_{\Omega_d\times\mathbb{X}}\bigl(e^{-s\cdot u}-e^{-s\cdot(f(x)+u)}\bigr)\, d\nu(s,x) = \int_{\Omega_d\times\mathbb{X}}\bigl(1-e^{-s\cdot f(x)}\bigr)\, e^{-s\cdot u}\, d\nu(s,x).$$
Therefore, the posterior Laplace functional is rewritten as
$$\mathbb{E}\bigl(e^{-\sum_{i=1}^d\int f_i\, d\tilde\mu_i}\mid X_{1:d}=x_{1:d}\bigr) = \mathbb{E}\Bigl(e^{-\int_{\Omega_d\times\mathbb{X}}(1-e^{-s\cdot f(x)})\, e^{-s\cdot U}\, d\nu(s,x)}\prod_{j=1}^k\frac{\int_{\Omega_d}\prod_{i=1}^d s_i^{n_{ij}}\, e^{-s\cdot(f(x^*_j)+U)}\, d\rho_{x^*_j}(s)}{\tau_{n_{1j},\dots,n_{dj}|x^*_j}(U)}\Bigr),$$
where $U = (U_1,\dots,U_d)$ is a vector of random variables with joint p.d.f.
$$f_U(u) \propto \prod_{i=1}^d u_i^{n_i-1}\, e^{-\psi(u)}\prod_{j=1}^k\tau_{n_{1j},\dots,n_{dj}|x^*_j}(u).$$
This implies that there exist latent variables $U = (U_1,\dots,U_d)$ such that the posterior Laplace functional of $\tilde\mu$, conditionally on $U$, satisfies
$$\mathbb{E}\bigl(e^{-\sum_{i=1}^d\int f_i\, d\tilde\mu_i}\mid X_{1:d}=x_{1:d}, U\bigr) = \mathbb{E}\bigl(e^{-\sum_{i=1}^d\int f_i\, d\tilde\mu^*_i}\mid U\bigr)\prod_{j=1}^k\mathbb{E}\bigl(e^{-f(x^*_j)\cdot J_j}\mid U\bigr),$$
where, conditionally on $U$, the measure $\tilde\mu^*$ is a CRV with Lévy measure $e^{-s\cdot U}\, d\nu(s,x)$ and $J_j$ is a vector of jumps with distribution
$$dP_{J_j|U}(s) \propto \prod_{i=1}^d s_i^{n_{ij}}\, e^{-s\cdot U}\, d\rho_{x^*_j}(s).$$
If we define the random elements $\tilde\mu^*$ and $J_j$ to be conditionally independent given $U$, the Laplace functional of their sum is the product of their Laplace functionals, and thus
$$\mathbb{E}\bigl(e^{-\sum_{i=1}^d\int f_i\, d\tilde\mu_i}\mid X_{1:d}=x_{1:d}, U\bigr) = \mathbb{E}\bigl(e^{-\sum_{i=1}^d\int f_i\, d(\tilde\mu^*_i+\sum_{j=1}^k J_{ij}\delta_{x^*_j})}\mid U\bigr).$$
By uniqueness of the Laplace functional, this implies that
$$\mathcal{L}(\tilde\mu\mid X_{1:d}=x_{1:d}) = \mathcal{L}\Bigl(\tilde\mu^* + \sum_{j=1}^k J_j\,\delta_{x^*_j}\Bigr).$$

Proof of Proposition 5

We first state and prove a preliminary lemma on the exponential tilting of a Lévy measure.

Lemma 3. Let $\rho$ be a Lévy measure on
$(0,\infty)$ with Laplace exponent $\psi$, such that $\mathrm{ID}(\rho)$ has a p.d.f. denoted by $f_{\mathrm{ID}(\rho)}$. For $u>0$, define $d\rho_u(s) = e^{-us}\, d\rho(s)$, the exponential tilting of $\rho$. Then for $s,t,u>0$,
$$e^{-us}\, f_{\mathrm{ID}(t\rho)}(s) = e^{-t\psi(u)}\, f_{\mathrm{ID}(t\rho_u)}(s).$$

Proof. Let $X\sim\mathrm{ID}(t\rho)$ and let $X_u\sim\mathrm{ID}(t\rho_u)$. By the uniqueness of the Laplace transform, it is enough to show that, for every $\lambda>0$,
$$\mathbb{E}\bigl(e^{-\lambda X_u}\bigr) = \int_0^\infty e^{-\lambda s}\, f_{\mathrm{ID}(t\rho_u)}(s)\, ds = e^{t\psi(u)}\int_0^\infty e^{-(\lambda+u)s}\, f_{\mathrm{ID}(t\rho)}(s)\, ds = e^{t\psi(u)}\,\mathbb{E}\bigl(e^{-(\lambda+u)X}\bigr).$$
Indeed, the Laplace transform of $X_u$ is equal to
$$\mathbb{E}\bigl(e^{-\lambda X_u}\bigr) = e^{-\int_0^{+\infty}(1-e^{-\lambda s})\, t e^{-us}\, d\rho(s)} = e^{-t\int_0^{+\infty}(e^{-us}-e^{-(\lambda+u)s})\, d\rho(s)} = e^{t\psi(u)-t\psi(\lambda+u)} = e^{t\psi(u)}\int_0^\infty e^{-(\lambda+u)s}\, f_{\mathrm{ID}(t\rho)}(s)\, ds = e^{t\psi(u)}\,\mathbb{E}\bigl(e^{-(\lambda+u)X}\bigr).$$

Considering $\tilde\mu\sim\mathrm{hCRV}(\rho,\rho_0,P_0)$, the expression of the Lévy measure in Theorem 3(i) is given by
$$d\nu^*_U(s,x) = e^{-U\cdot s}\, d\nu_h(s,x) = e^{-U\cdot s}\,\rho_h(s)\, ds\, dP_0(x),$$
where we have used that $\mathrm{ID}(t\rho)$ has a p.d.f. on $(0,+\infty)$. Exploiting Lemma 3 for the exponential tilting, we obtain
$$e^{-U\cdot s}\rho_h(s) = \int_0^{+\infty}\prod_{i=1}^d\bigl(e^{-U_i s_i}\, f_{\mathrm{ID}(t\rho)}(s_i)\bigr)\,\rho_0(t)\, dt = \int_0^{+\infty}\prod_{i=1}^d\bigl(e^{-t\psi(U_i)}\, f_{\mathrm{ID}(t\rho_{U_i})}(s_i)\bigr)\,\rho_0(t)\, dt = \int_0^{+\infty}\prod_{i=1}^d f_{\mathrm{ID}(t\rho_{U_i})}(s_i)\; e^{-t\sum_{i=1}^d\psi(U_i)}\,\rho_0(t)\, dt.$$
In analogy with Theorem 1, this expression can be interpreted as the Lévy intensity of a hierarchical CRV with heterogeneous marginal distributions, characterized by the Lévy measures
$$d\rho^*_i(s) = d\rho_{U_i}(s) = e^{-U_i s}\rho(s)\, ds, \qquad d\rho^*_0(t) = e^{-t\sum_{i=1}^d\psi(U_i)}\,\rho_0(t)\, dt.$$

Proof of Proposition 6

Since $\rho$ and $\rho_0$ have Lévy densities, the distribution of the vector of jumps $J_j$ in (7), given the latent variables $U$, has p.d.f. proportional to
$$\prod_{i=1}^d s_i^{n_{ij}}\, e^{-U\cdot s}\,\rho_h(s) = \prod_{i=1}^d s_i^{n_{ij}}\, e^{-U_i s_i}\int_0^{+\infty}\prod_{i=1}^d f_{\mathrm{ID}(t\rho)}(s_i)\,\rho_0(t)\, dt = \int_0^{+\infty}\prod_{i=1}^d s_i^{n_{ij}}\, e^{-U_i s_i}\, f_{\mathrm{ID}(t\rho)}(s_i)\,\rho_0(t)\, dt = \int_0^{+\infty}\prod_{i=1}^d\frac{s_i^{n_{ij}}\, e^{-U_i s_i}\, f_{\mathrm{ID}(t\rho)}(s_i)}{\bar\tau_{n_{ij}}(U_i,t)}\prod_{i=1}^d\bar\tau_{n_{ij}}(U_i,t)\,\rho_0(t)\, dt,$$
where $\bar\tau_m(u,t)$ is defined in (8). Therefore, the distribution of $J_j\mid U$ is a mixture of conditionally independent random variables $J_{1j},\dots,J_{dj}$ with densities
$$f_{J_{ij}|U_i,J_{0j}}(s) = \frac{s^{n_{ij}}\, e^{-U_i s}\, f_{\mathrm{ID}(t\rho)}(s)}{\bar\tau_{n_{ij}}(U_i,t)}, \qquad i=1,\dots,d,$$
given a mixing random variable $J_{0j}$ having p.d.f. proportional to $\prod_{i=1}^d\bar\tau_{n_{ij}}(U_i,t)\,\rho_0(t)$.
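The exponential-tilting identity of Lemma 3 can be checked numerically in the gamma case. The sketch below assumes (consistently with Example 1 later in this document) that $\rho(s) = a\,e^{-bs}/s$, so that $\mathrm{ID}(t\rho)$ is $\mathrm{Gamma}(at, b)$, the tilted measure $\rho_u$ yields $\mathrm{Gamma}(at, b+u)$, and $\psi(u) = a\log(1+u/b)$; the parameter values are arbitrary.

```python
# Numerical check of Lemma 3 in the gamma case: with rho(s) = a*exp(-b*s)/s,
# ID(t*rho) is Gamma(a*t, rate b), tilting by exp(-u*s) gives Gamma(a*t, b+u),
# and psi(u) = a*log(1 + u/b). Parameter values below are arbitrary choices.
import math
from scipy.stats import gamma

a, b, t, u = 1.3, 2.0, 0.7, 0.9
psi = lambda x: a * math.log(1.0 + x / b)

for s in (0.1, 0.5, 1.0, 3.0):
    lhs = math.exp(-u * s) * gamma.pdf(s, a * t, scale=1.0 / b)
    rhs = math.exp(-t * psi(u)) * gamma.pdf(s, a * t, scale=1.0 / (b + u))
    assert abs(lhs - rhs) < 1e-12  # e^{-us} f_{ID(t rho)}(s) = e^{-t psi(u)} f_{ID(t rho_u)}(s)
```

Both sides reduce to $b^{at}s^{at-1}e^{-(b+u)s}/\Gamma(at)$, which is exactly the cancellation exploited in the proof of Proposition 9 below.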
Proof of Proposition 7

Substituting the expression for $\rho_h$ as in the proof of Proposition 6, we use the Fubini–Tonelli theorem as in Remark 1 to obtain
$$\tau_m(u) = \int_{\Omega_d}\prod_{i=1}^d s_i^{m_i}\, e^{-u\cdot s}\,\rho_h(s)\, ds = \int_0^{+\infty}\int_{\Omega_d}\prod_{i=1}^d s_i^{m_i}\, e^{-u_i s_i}\, f_{\mathrm{ID}(t\rho)}(s_i)\, ds\,\rho_0(t)\, dt = \int_0^{+\infty}\prod_{i=1}^d\int_0^{+\infty} s_i^{m_i}\, e^{-u_i s_i}\, f_{\mathrm{ID}(t\rho)}(s_i)\, ds_i\,\rho_0(t)\, dt = \int_0^{+\infty}\prod_{i=1}^d\bar\tau_{m_i}(u_i,t)\,\rho_0(t)\, dt.$$

Proof of Proposition 8

Starting from the expression of the distribution of the latent variables $U$ in (6), specialized to $\tilde\mu\sim\mathrm{hCRV}(\rho,\rho_0,P_0)$,
$$f_U(u) \propto \prod_{i=1}^d u_i^{n_i-1}\, e^{-\psi_0(\sum_{i=1}^d\psi(u_i))}\prod_{j=1}^k\tau_{n_{1j},\dots,n_{dj}}(u) = \prod_{i=1}^d u_i^{n_i-1}\, e^{-\psi_0(\sum_{i=1}^d\psi(u_i))}\prod_{j=1}^k\int_0^{+\infty}\prod_{i=1}^d\bar\tau_{n_{ij}}(u_i,t_j)\,\rho_0(t_j)\, dt_j,$$
where we have substituted the expression of $\tau_m(u)$ obtained in Proposition 7. For each $j=1,\dots,k$, exploiting the definition of $\bar\tau_m(u,t)$ in (8), we obtain
$$\int_0^{+\infty}\prod_{i=1}^d\bar\tau_{n_{ij}}(u_i,t_j)\,\rho_0(t_j)\, dt_j = \int_0^{+\infty}\prod_{i=1}^d\int_0^\infty s_{ij}^{n_{ij}}\, e^{-u_i s_{ij}}\, f_{\mathrm{ID}(t_j\rho)}(s_{ij})\, ds_{ij}\,\rho_0(t_j)\, dt_j = \int_{\Omega_d}\prod_{i=1}^d e^{-u_i s_{ij}}\int_0^{+\infty}\prod_{i=1}^d s_{ij}^{n_{ij}}\, f_{\mathrm{ID}(t_j\rho)}(s_{ij})\,\rho_0(t_j)\, dt_j\, ds_{\bullet j},$$
where we have exchanged the integrals thanks to the Fubini–Tonelli theorem. Recall that, conditionally on $\tilde\mu_0$, the distribution of $\tilde\mu_i(\mathbb{X})$ is $\mathrm{ID}(\tilde\mu_0(\mathbb{X})\rho)$, while at the root of the hierarchy $\tilde\mu_0(\mathbb{X})\sim\mathrm{ID}(\rho_0)$. Therefore, exploiting the definition of the multivariate Laplace exponent,
$$\prod_{i=1}^d u_i^{n_i-1}\, e^{-\psi_0(\sum_{i=1}^d\psi(u_i))} = \prod_{i=1}^d u_i^{n_i-1}\,\mathbb{E}\bigl(e^{-u\cdot\tilde\mu(\mathbb{X})}\bigr) = \int_{\Omega_d}\prod_{i=1}^d u_i^{n_i-1}\, e^{-u_i y_i}\int_0^\infty\prod_{i=1}^d f_{\mathrm{ID}(z\rho)}(y_i)\, f_{\mathrm{ID}(\rho_0)}(z)\, dz\, dy.$$
In conclusion, the density of the vector of latent variables $U$ is proportional to
$$f_U(u) \propto \int_{\Omega_d^{k+1}}\prod_{i=1}^d u_i^{n_i-1}\, e^{-u_i(s_{i0}+s_{i1}+\cdots+s_{ik})}\int_{(0,+\infty)^{k+1}}\prod_{i=1}^d f_{\mathrm{ID}(t_0\rho)}(s_{i0})\, f_{\mathrm{ID}(\rho_0)}(t_0)\prod_{j=1}^k s_{ij}^{n_{ij}}\, f_{\mathrm{ID}(t_j\rho)}(s_{ij})\,\rho_0(t_j)\, dt\, ds,$$
where we have renamed the integration variables $y_i$ as $s_{i0}$ and $z$ as $t_0$. Therefore, conditionally on
a vector $\beta = (\beta_1,\dots,\beta_d)$ of dependent random variables, the latent variables $U_1,\dots,U_d$ are independent and gamma distributed, with $U_i\sim\mathrm{Gamma}(n_i,\beta_i)$, for $i=1,\dots,d$. Moreover, each $\beta_i = S_{i0}+S_{i1}+\cdots+S_{ik}$, where the joint density of $S = (S_{ij})_{ij}$ is proportional to
$$f_S(s) \propto \prod_{i=1}^d(s_{i0}+s_{i1}+\cdots+s_{ik})^{-n_i}\times\int_{(0,+\infty)^{k+1}}\prod_{i=1}^d f_{\mathrm{ID}(t_0\rho)}(s_{i0})\, f_{\mathrm{ID}(\rho_0)}(t_0)\prod_{j=1}^k s_{ij}^{n_{ij}}\, f_{\mathrm{ID}(t_j\rho)}(s_{ij})\,\rho_0(t_j)\, dt.$$
Therefore, conditionally on $T = (T_0,\dots,T_k)$, the random vectors $S_i = (S_{i0},S_{i1},\dots,S_{ik})$, for each $i=1,\dots,d$, are independent, with density proportional to
$$f_{S_i|T}(s_i) \propto (s_{i0}+s_{i1}+\cdots+s_{ik})^{-n_i}\, f_{\mathrm{ID}(T_0\rho)}(s_{i0})\prod_{j=1}^k s_{ij}^{n_{ij}}\, f_{\mathrm{ID}(T_j\rho)}(s_{ij}).$$
Finally, the density of the vector $T$ is proportional to
$$f_T(t) \propto \prod_{i=1}^d C(n_{i1},\dots,n_{ik};t)\; f_{\mathrm{ID}(\rho_0)}(t_0)\prod_{j=1}^k\rho_0(t_j),$$
where $C$ is defined in (9) and represents the normalizing constant for the distribution of $S_i\mid T$.

Proof of Proposition 9

From Example 1, the gamma-gamma hCRV is characterized by
$$\rho(s) = \alpha\,\frac{e^{-bs}}{s}, \qquad \rho_0(s) = \alpha_0\,\frac{e^{-b_0 s}}{s},$$
where $\alpha,\alpha_0>0$ are shape parameters and $b,b_0>0$ rate parameters. The Laplace exponent in Definition 5 is $\psi(\lambda) = \alpha\log(1+\lambda/b)$. The rest of the proof follows from Propositions 5 and 6.

(a) From Proposition 5, and substituting the expressions for $\rho$, $\rho_0$ and $\psi$,
$$e^{-U_i s}\rho(s) = \alpha s^{-1} e^{-bs-U_i s} = \alpha s^{-1} e^{-b(1+U_i/b)s}, \qquad e^{-\sum_{i=1}^d\psi(U_i)s}\rho_0(s) = \alpha_0 s^{-1} e^{-b_0 s-\alpha\sum_{i=1}^d\log(1+U_i/b)s} = \alpha_0 s^{-1} e^{-\alpha\lambda(U)s},$$
where
$$\lambda(U) = \frac{b_0}{\alpha}+\sum_{i=1}^d\log(1+U_i/b).$$

(b) From Proposition 6, for each $j=1,\dots,k$, the jumps $J_{1j},\dots,J_{dj}$ are conditionally independent, given $U$ and $J_{0j}$. Moreover, from the specification of $\rho$ above, the random variable $\mathrm{ID}(t\rho)$ has gamma distribution with shape parameter $\alpha t$ and rate parameter $b$. Hence, for each $i=1,\dots,d$, the jump $J_{ij}$ has density proportional to
$$s^{n_{ij}}\, e^{-U_i s}\, f_{\mathrm{ID}(J_{0j}\rho)}(s) \propto s^{\alpha J_{0j}+n_{ij}-1}\, e^{-bs-U_i s},$$
which is the density of a gamma random variable with shape $\alpha J_{0j}+n_{ij}$ and rate $b+U_i$. The normalizing constant $\bar\tau_{n_{ij}}(U_i,t)$ in (8) is
$$\bar\tau_{n_{ij}}(U_i,t) = \int_0^{+\infty} s^{n_{ij}}\, e^{-U_i s}\, f_{\mathrm{ID}(t\rho)}(s)\, ds = \frac{b^{\alpha t}}{(b+U_i)^{n_{ij}+\alpha t}}\,\frac{\Gamma(n_{ij}+\alpha t)}{\Gamma(\alpha t)}.$$

(c) Again from Proposition 6, for $j=1,\dots$
$,k$, the density of $J_{0j}$, given $U$, is proportional to
$$f_{J_{0j}|U}(t) \propto \prod_{i=1}^d\bar\tau_{n_{ij}}(U_i,t)\,\rho_0(t) \propto \prod_{i=1}^d\Bigl(\Bigl(\frac{1}{1+U_i/b}\Bigr)^{\alpha t}\,((\alpha t))_{n_{ij}}\Bigr)\,\frac{\alpha_0\, e^{-b_0 t}}{t} \propto t^{-1}\, e^{-b_0 t-\alpha t\sum_{i=1}^d\log(1+U_i/b)}\prod_{i=1}^d((\alpha t))_{n_{ij}} \propto t^{-1}\, e^{-\alpha\lambda(U)t}\prod_{i=1}^d((\alpha t))_{n_{ij}},$$
where $((\alpha t))_n = \Gamma(\alpha t+n)/\Gamma(\alpha t)$ denotes the ascending factorial. The result is obtained by computing the density of the linear transformation $\alpha J_{0j}$.

Proof of Proposition 10

Recall that, from the specification of $\rho$ in Example 1, the random variable $\mathrm{ID}(t\rho)$ has gamma distribution with shape parameter $\alpha t$ and rate parameter $b$. The rest of the proof follows from Proposition 8.

(a) For each $i=1,\dots,d$, the density of $S_i = (S_{i0},\dots,S_{ik})$, given $T = (T_0,\dots,T_k)$, is proportional to
$$f_{S_i|T}(s_{i0},\dots,s_{ik}) \propto (s_{i0}+s_{i1}+\cdots+s_{ik})^{-n_{i\bullet}}\, f_{\mathrm{ID}(T_0\rho)}(s_{i0})\prod_{j=1}^k s_{ij}^{n_{ij}}\, f_{\mathrm{ID}(T_j\rho)}(s_{ij}) \propto (s_{i0}+s_{i1}+\cdots+s_{ik})^{-n_{i\bullet}}\, e^{-b(s_{i0}+s_{i1}+\cdots+s_{ik})}\, s_{i0}^{\alpha T_0-1}\prod_{j=1}^k s_{ij}^{\alpha T_j+n_{ij}-1}.$$
Applying the change of variables $\beta_i = S_{i0}+\cdots+S_{ik}$ and $W_{ij} = S_{ij}/\beta_i$, for $j=0,\dots,k$, the joint density of $\beta_i\ge 0$ and $W_i = (W_{i0},\dots,W_{ik})\in\Delta_k$, where $\Delta_k$ is the $k$-dimensional unit simplex, is
$$f_{\beta_i,W_i|T}(z_i,w_i) \propto z_i^{\alpha(T_0+T_1+\cdots+T_k)-1}\, e^{-b z_i}\, w_{i0}^{\alpha T_0-1}\prod_{j=1}^k w_{ij}^{\alpha T_j+n_{ij}-1}.$$
Therefore $\beta_i = S_{i0}+\cdots+S_{ik}$ is independent of $W_i$ and has gamma distribution with shape $\alpha(T_0+\cdots+T_k)$ and rate $b$.
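As a numerical sanity check, the closed form of $\bar\tau_n(u,t)$ derived in the proof of Proposition 9(b) can be compared against direct quadrature of its defining integral; the parameter values in this sketch are our own arbitrary choices.

```python
# Check of the closed form tau_bar_n(u, t): with ID(t*rho) = Gamma(a*t, rate b),
#   int_0^inf s^n exp(-u s) f_{Gamma(a t, b)}(s) ds
#     = b^{a t} / (b + u)^{n + a t} * Gamma(n + a t) / Gamma(a t).
# Parameter values are arbitrary.
import math
from scipy.integrate import quad
from scipy.stats import gamma

a, b, t, u, n = 1.5, 2.0, 0.8, 1.1, 3

numerical, _ = quad(
    lambda s: s**n * math.exp(-u * s) * gamma.pdf(s, a * t, scale=1.0 / b),
    0.0, math.inf)
closed = b**(a * t) / (b + u)**(n + a * t) * math.gamma(n + a * t) / math.gamma(a * t)
assert abs(numerical - closed) < 1e-7
```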
Moreover, the quantity $C(m;t)$ in (9) is given by
$$C(m;t) = \int_{(0,+\infty)^{k+1}}(s_0+\cdots+s_k)^{-m_\bullet}\, f_{\mathrm{ID}(t_0\rho)}(s_0)\prod_{j=1}^k s_j^{m_j}\, f_{\mathrm{ID}(t_j\rho)}(s_j)\, ds = \frac{b^{\alpha(t_0+\cdots+t_k)}}{\Gamma(\alpha t_0)\prod_{j=1}^k\Gamma(\alpha t_j)}\int_0^{+\infty} z^{\alpha(t_0+\cdots+t_k)-1}\, e^{-bz}\, dz\int_{\Delta_k} w_0^{\alpha t_0-1}\prod_{j=1}^k w_j^{\alpha t_j+m_j-1}\, dw = \frac{\Gamma(\alpha(t_0+\cdots+t_k))}{\Gamma(\alpha(t_0+\cdots+t_k)+m_\bullet)}\prod_{j=1}^k\frac{\Gamma(\alpha t_j+m_j)}{\Gamma(\alpha t_j)}.$$

(b) From the specification of $\rho_0$ in Example 1, the random variable $\mathrm{ID}(\rho_0)$ has gamma distribution with shape parameter $\alpha_0$ and rate parameter $b_0$. Therefore, the density of $T = (T_0,\dots,T_k)$ is proportional to
$$f_T(t) \propto \prod_{i=1}^d C(n_{i1},\dots,n_{ik};t)\, f_{\mathrm{ID}(\rho_0)}(t_0)\prod_{j=1}^k\rho_0(t_j) \propto \prod_{i=1}^d\Bigl(\frac{\Gamma(\alpha(t_0+\cdots+t_k))}{\Gamma(\alpha(t_0+\cdots+t_k)+n_{i\bullet})}\prod_{j=1}^k\frac{\Gamma(\alpha t_j+n_{ij})}{\Gamma(\alpha t_j)}\Bigr)\, t_0^{\alpha_0-1}\, e^{-b_0 t_0}\prod_{j=1}^k t_j^{-1}\, e^{-b_0 t_j} \propto \prod_{i=1}^d\frac{1}{((\alpha(t_0+\cdots+t_k)))_{n_{i\bullet}}}\; t_0^{\alpha_0-1}\, e^{-b_0 t_0}\prod_{j=1}^k\Bigl(t_j^{-1}\, e^{-b_0 t_j}\prod_{i=1}^d((\alpha t_j))_{n_{ij}}\Bigr),$$
where $((\alpha t))_n = \Gamma(\alpha t+n)/\Gamma(\alpha t)$ is the ascending factorial. Since we only need the distribution of $\alpha T = \alpha(T_0+\cdots+T_k)$ to sample the random variables $\beta_1,\dots,\beta_d$, we apply the change of variables
$$\alpha T = \alpha(T_0+\cdots+T_k), \qquad V_j = T_j/T, \quad j=0,\dots,k,$$
and obtain the joint density of $\alpha T$ and the vector $V = (V_0,\dots,V_k)\in\Delta_k$ of auxiliary latent variables, supported on the $k$-dimensional unit simplex $\Delta_k$:
$$f_{\alpha T,V}(t,v) \propto t^{\alpha_0-1}\, e^{-(b_0/\alpha)t}\, v_0^{\alpha_0-1}\prod_{i=1}^d\frac{1}{((t))_{n_{i\bullet}}}\prod_{j=1}^k\Bigl(v_j^{-1}\prod_{i=1}^d((t v_j))_{n_{ij}}\Bigr).$$
Note that this function is integrable in $t$; indeed, it holds that $((s))_q\sim s$ for $s\to 0$ when $q$ is a positive integer. Therefore, for $t\to 0$,
$$f_{\alpha T,V}(t,v) \sim t^{\alpha_0-1}\prod_{i=1}^d\frac{1}{((t))_{n_{i\bullet}}}\prod_{j=1}^k\prod_{i=1}^d((t v_j))_{n_{ij}} \sim t^{\alpha_0-1}\prod_{i=1}^d t^{m_{i\bullet}-1} \sim t^{\alpha_0+m-d-1},$$
and since we are assuming $n_{i\bullet}>0$, then $m_{i\bullet}\ge 1$ and therefore $m\ge d$.

Proof of Proposition 11

Recall that the ascending factorial $((s))_q$, for integer $q$, can be written as
$$((s))_q = \sum_{h=0}^q S(q,h)\, s^h,$$
where the $S(q,h)$ are the unsigned Stirling numbers of the first kind, defined through the recursive relation $S(q+1,h) = q\, S(q,h)+S(q,h-1)$, with initial conditions $S(0,0) = 1$ and $S(q,0) = S(0,h) = 0$ for $q>0$ or $h>0$. Since $S(q,0) = 0$ whenever $q>0$, the summation above can start from 1 if $q$ is strictly positive. From Proposition 9, the density of $\alpha J_{0j}$, for each $j=1,\dots$
$,k$, can be rewritten as
$$f_{\alpha J_{0j}|U}(t) \propto t^{-1}\, e^{-\lambda(U)t}\prod_{i=1}^d\Bigl(\sum_{h_{ij}=m_{ij}}^{n_{ij}} S(n_{ij},h_{ij})\, t^{h_{ij}}\Bigr) \propto \sum_{h_j=m_{\bullet j}}^{n_{\bullet j}}\Bigl(\sum_{\substack{h_{1j}+\cdots+h_{dj}=h_j\\ m_{ij}\le h_{ij}\le n_{ij}}}\prod_{i=1}^d S(n_{ij},h_{ij})\Bigr)\, t^{h_j-1}\, e^{-\lambda(U)t},$$
where $m_{ij}\in\{0,1\}$ is the indicator of $n_{ij}>0$, that is, $m_{ij} = \min(1,n_{ij})$, and $m_{\bullet j} = \sum_{i=1}^d m_{ij}$. Moreover, for each $j=1,\dots,k$, define the coefficients $S(n_{1j},\dots,n_{dj};h_j)$, where
$$S(q_1,\dots,q_d;h) = \sum_{\substack{h_1+\cdots+h_d=h\\ 0\le h_i\le q_i}}\prod_{i=1}^d S(q_i,h_i).$$
Exploiting the recursive relation for the unsigned Stirling numbers of the first kind, we obtain
$$S(q_1,\dots,q_\ell+1,\dots,q_d;h) = \sum_{\substack{h_1+\cdots+h_d=h\\ 0\le h_i\le q_i,\ 0\le h_\ell\le q_\ell+1}} S(q_\ell+1,h_\ell)\prod_{\substack{i=1\\ i\ne\ell}}^d S(q_i,h_i) = q_\ell\sum_{\substack{h_1+\cdots+h_d=h\\ 0\le h_i\le q_i}} S(q_\ell,h_\ell)\prod_{\substack{i=1\\ i\ne\ell}}^d S(q_i,h_i) + \sum_{\substack{h_1+\cdots+h_d=h-1\\ 0\le h_i\le q_i}} S(q_\ell,h_\ell)\prod_{\substack{i=1\\ i\ne\ell}}^d S(q_i,h_i) = q_\ell\, S(q_1,\dots,q_d;h)+S(q_1,\dots,q_d;h-1).$$
Here we have used the recursion $S(q_\ell+1,h_\ell) = q_\ell\, S(q_\ell,h_\ell)+S(q_\ell,h_\ell-1)$, together with the fact that $S(q_\ell,q_\ell+1) = 0$ (first term) and the change of index $h_\ell\mapsto h_\ell+1$ (second term). Therefore, the coefficients above satisfy the recurrence relation in
(12).

Proof of Proposition 12

From Proposition 10, the joint density of $\alpha T$ and the random vector $V = (V_0,\dots,V_k)\in\Delta_k$ of auxiliary variables can be rewritten as
$$f_{\alpha T,V}(t,v) \propto t^{\alpha_0-1}\, e^{-(b_0/\alpha)t}\, v_0^{\alpha_0-1}\prod_{i=1}^d\frac{1}{((t))_{n_{i\bullet}}}\prod_{j=1}^k\Bigl(v_j^{-1}\prod_{i=1}^d\sum_{h_{ij}=m_{ij}}^{n_{ij}} S(n_{ij},h_{ij})\, v_j^{h_{ij}}\, t^{h_{ij}}\Bigr) \propto t^{\alpha_0-1}\, e^{-(b_0/\alpha)t}\, v_0^{\alpha_0-1}\prod_{i=1}^d\frac{1}{((t))_{n_{i\bullet}}}\prod_{j=1}^k\sum_{h_j=m_{\bullet j}}^{n_{\bullet j}} S(n_{1j},\dots,n_{dj};h_j)\, v_j^{h_j-1}\, t^{h_j}$$
$$\propto t^{\alpha_0-1}\, e^{-(b_0/\alpha)t}\prod_{i=1}^d\frac{1}{((t))_{n_{i\bullet}}}\sum_{h=m}^{n} t^h\sum_{\substack{h_1+\cdots+h_k=h\\ m_{\bullet j}\le h_j\le n_{\bullet j}}}\Bigl(\prod_{j=1}^k S(n_{1j},\dots,n_{dj};h_j)\Bigr)\, v_0^{\alpha_0-1}\prod_{j=1}^k v_j^{h_j-1},$$
where $m = \sum_{j=1}^k m_{\bullet j}$. The marginal density of $\alpha T$ is obtained by integration with respect to the auxiliary vector $V$:
$$f_{\alpha T}(t) \propto t^{\alpha_0-1}\, e^{-(b_0/\alpha)t}\prod_{i=1}^d\frac{1}{((t))_{n_{i\bullet}}}\sum_{h=m}^{n} t^h\sum_{\substack{h_1+\cdots+h_k=h\\ m_{\bullet j}\le h_j\le n_{\bullet j}}}\prod_{j=1}^k S(n_{1j},\dots,n_{dj};h_j)\int_{\Delta_k} v_0^{\alpha_0-1}\prod_{j=1}^k v_j^{h_j-1}\, dv \propto t^{\alpha_0-1}\, e^{-(b_0/\alpha)t}\prod_{i=1}^d\frac{1}{((t))_{n_{i\bullet}}}\sum_{h=m}^{n}\Bigl(\sum_{\substack{h_1+\cdots+h_k=h\\ m_{\bullet j}\le h_j\le n_{\bullet j}}}\prod_{j=1}^k\Gamma(h_j)\, S(n_{1j},\dots,n_{dj};h_j)\Bigr)\frac{\Gamma(\alpha_0)}{\Gamma(\alpha_0+h)}\, t^h.$$
Note that this marginalization is possible since we are restricting to $h_j>0$, and thus the integral is finite.

C Posterior sampling for the gamma-gamma hCRV

This section provides additional details on the practical implementation of posterior sampling algorithms for the gamma-gamma hCRV described in Section 5, and contains numerical illustrations supporting their effectiveness. Section C.1 analyzes the Metropolis–Hastings steps of Algorithm 1, outlining two alternative proposal distributions. The initialization of Algorithm 2 is detailed in Sections C.2 and C.3, which respectively discuss the computation of the coefficients $c_h$ in Proposition 12 and the optimization of the parameter $r$ within the rejection sampling scheme. Section C.4 describes the inverse Lévy measure algorithm to obtain a truncated sample from the hierarchy of gamma CRMs in Proposition 11. For this purpose, an efficient procedure to sequentially invert the exponential integral function is outlined in Section C.5.
Section C.6 characterizes the posterior random probabilities $\tilde p_i = \tilde\mu_i/\tilde\mu_i(\mathbb{X})\mid X_{1:d}$ as conditionally normalized gamma CRMs, i.e. conditionally Dirichlet processes, and thus provides an alternative approach to directly sample their posterior normalized jumps. This allows for a straightforward comparison with the marginal Gibbs sampler of Teh et al. (2006) for the HDP with gamma prior, which targets the same posterior distributions (Proposition 1). Numerical illustrations of the effectiveness of the proposed algorithms in terms of mixing properties and posterior accuracy are provided in Section C.7.

C.1 Metropolis–Hastings steps in Algorithm 1

The non-standard steps in the posterior sampling algorithms of Section 5.3 are: (i) the marginal sampling of the random variable $\alpha T$ in (11), whose joint density with the auxiliary vector $V$ is known up to a normalizing constant; (ii) the sampling of the random variables $\alpha J_{01},\dots,\alpha J_{0k}$ in (10), whose densities are again known up to normalizing constants. A natural approach to obtain samples from non-standard densities is to resort to MCMC schemes. In Algorithm 1, we consider a blocked Gibbs sampler with Metropolis–Hastings steps. The full conditional distributions are derived from Propositions 9 and 10, namely
$$(V_j,V_0)\mid V_{-j,0},\ (\alpha T)=t \;\sim\; v_0^{\alpha_0-1}\, v_j^{-1}\prod_{i=1}^d((t v_j))_{n_{ij}} \qquad (j=1,\dots,k),$$
$$(\alpha T)\mid V \;\sim\; t^{\alpha_0-1}\, e^{-(b_0/\alpha)t}\prod_{i=1}^d\frac{1}{((t))_{n_{i\bullet}}}\prod_{j=1}^k\prod_{i=1}^d((t v_j))_{n_{ij}},$$
$$(\alpha J_{0j})\mid U \;\sim\; t^{-1}\, e^{-\lambda(U)t}\prod_{i=1}^d((t))_{n_{ij}} \qquad (j=1,\dots,k),$$
where
$V_{-j,0}$ denotes the vector $V\in\Delta_k$ with components $V_j$ and $V_0$ removed. Moreover, the variables $(V_j,V_0)$ are additionally subject to the constraint $\sum_{j=0}^k V_j = 1$, while the variables $\alpha T$ and $\alpha J_{0j}$ can take any positive value. At each iteration of the Gibbs sampler, we sequentially propose new values for each variable, and accept or reject the proposal according to the Metropolis–Hastings ratio. The rest of the section focuses on the possible proposals.

A simple but effective symmetric proposal for the pair of auxiliary variables $(V_j,V_0)$, for each $j=1,\dots,k$, is
$$v^*_j = \varepsilon\,(v_0+v_j), \qquad v^*_0 = (1-\varepsilon)(v_0+v_j),$$
where $v_0$ and $v_j$ are the current values of $V_0$ and $V_j$ and $\varepsilon\sim\mathcal{U}(0,1)$. The proposal is accepted with log-probability
$$\log(r) = \min\Bigl(0,\ (\alpha_0-1)(\log v^*_0-\log v_0)-(\log v^*_j-\log v_j)+\sum_{i=1}^d\bigl(\log((t v^*_j))_{n_{ij}}-\log((t v_j))_{n_{ij}}\bigr)\Bigr).$$
For the positive variables $\alpha T$ and the $\alpha J_{0j}$'s, we consider two alternative proposals, which we first describe for a generic positive random variable with density $f(x)$.

(a) Gamma proposal. Following the approach discussed in Barrios et al. (2013), we propose a new value $x^*$ from a gamma distribution centered at the current value $x$,
$$x^* \sim \mathrm{Gamma}(\delta,\delta/x),$$
where $\delta>0$ controls the variance of the proposal. The proposed value $x^*$ is accepted with log-probability
$$\min\bigl(0,\ \log f(x^*)-\log f(x)+\log\mathrm{Gamma}(x;\delta,\delta/x^*)-\log\mathrm{Gamma}(x^*;\delta,\delta/x)\bigr) = \min\Bigl(0,\ \log f(x^*)-\log f(x)+(2\delta-1)(\log x-\log x^*)+\delta\Bigl(\frac{x^*}{x}-\frac{x}{x^*}\Bigr)\Bigr),$$
where $x\mapsto\log\mathrm{Gamma}(x;a,b)$ is the log-density of a gamma distribution with shape $a>0$ and rate $b>0$. For the practical implementation, we fix $\delta=2$, as Barrios et al. (2013) suggest taking $\delta\ge 1$.

(b) Random walk on the log-transform. For a positive random variable, we can target the density of its log-transform, which is $t\mapsto f(e^t)\, e^t$. In this case, we resort to a random walk on the log-scale, and propose a new value $x^*$ from a log-normal distribution centered at the current value $x$,
$$x^* = x\, e^{\varepsilon}, \qquad \varepsilon\sim\mathcal{N}(0,\sigma^2),$$
where $\sigma^2$ controls the variance of the proposal.
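A minimal sketch of the random-walk-on-log-scale step (b) for a generic positive target known up to a constant through its log-density; the target chosen here (an unnormalized $\mathrm{Gamma}(3,1)$ log-density), the step size, and the function names are illustrative assumptions, not the paper's implementation.

```python
# One Metropolis-Hastings update on the log-scale: propose x* = x * exp(eps)
# with eps ~ N(0, sigma^2); the extra "+ eps" in the log acceptance ratio is
# the Jacobian of the log transform. Target and sigma are illustrative.
import math
import random

def mh_lognormal_step(x, log_f, sigma=0.5, rng=random):
    eps = rng.gauss(0.0, sigma)
    x_star = x * math.exp(eps)
    log_r = min(0.0, log_f(x_star) - log_f(x) + eps)
    return x_star if math.log(rng.random()) < log_r else x

# illustrative run targeting an unnormalized Gamma(3, 1) density f(x) = x^2 e^{-x}
log_f = lambda x: 2.0 * math.log(x) - x
random.seed(0)
chain, x = [], 1.0
for _ in range(20000):
    x = mh_lognormal_step(x, log_f)
    chain.append(x)
# the sample mean should be near the true mean 3 of Gamma(3, 1)
assert abs(sum(chain) / len(chain) - 3.0) < 0.4
```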
The proposed value $x^*$ is accepted with log-probability
$$\log(r) = \min\bigl(0,\ (\log f(x^*)+\log x^*)-(\log f(x)+\log x)\bigr) = \min\bigl(0,\ \log f(x^*)-\log f(x)+\varepsilon\bigr).$$
Both approaches require the evaluation of the logarithm of the target density at the current and proposed values, up to normalizing constants. For our purposes, we have
$$\log f_{\alpha T|V}(t) = (\alpha_0-1)\log t-(b_0/\alpha)t+\sum_{j=1}^k\sum_{i=1}^d\log((t v_j))_{n_{ij}}-\sum_{i=1}^d\log((t))_{n_{i\bullet}},$$
$$\log f_{\alpha J_{0j}|U}(t) = \sum_{i=1}^d\log((t))_{n_{ij}}-\lambda(U)\, t-\log t \qquad (j=1,\dots,k).$$
Note that a convenient simplification is available for the random variables $\alpha J_{0j}$ such that $n_{1j},\dots,n_{dj}\le 1$. Indeed, they are gamma distributed and can be sampled exactly:
$$\alpha J_{0j}\mid U \sim \mathrm{Gamma}\Bigl(\sum_{i=1}^d n_{ij},\ \lambda(U)\Bigr).$$

C.2 Computing the coefficients in Proposition 12

In this section, we prove that the coefficients $c_h$, for $h=m,\dots,n$, defined in Proposition 12, can be computed through a sequence of discrete convolutions. For each $\ell=1,\dots,k$, define the vector $c(\ell) = (c(\ell;h))_h$ with
$$c(\ell;h) = \sum_{\substack{h_1+\cdots+h_\ell=h\\ m_{\bullet j}\le h_j\le n_{\bullet j}}}\prod_{j=1}^\ell a(j;h_j), \qquad h = \sum_{j=1}^\ell m_{\bullet j},\dots,\sum_{j=1}^\ell n_{\bullet j},$$
where $a(j;h) = \Gamma(h)\, S(n_{1j},\dots,n_{dj};h)$, for $h = m_{\bullet j},\dots,n_{\bullet j}$ and $j=1,\dots,\ell$. Thus, the $c_h$'s are such that $c_h = c(k;h)$,
that is, they coincide with $c(k)$. The entries of the vector $c(\ell)$ can be computed from the vector $c(\ell-1)$ as follows:
$$c(\ell;h) = \sum_{\substack{h_1+\cdots+h_\ell=h\\ m_{\bullet j}\le h_j\le n_{\bullet j}}}\prod_{j=1}^\ell a(j;h_j) = \sum_{h_\ell=m_{\bullet\ell}}^{n_{\bullet\ell}}\Bigl(\sum_{\substack{h_1+\cdots+h_{\ell-1}=h-h_\ell\\ m_{\bullet j}\le h_j\le n_{\bullet j}}}\prod_{j=1}^{\ell-1} a(j;h_j)\Bigr)\, a(\ell;h_\ell) = \sum_{h_\ell=m_{\bullet\ell}}^{n_{\bullet\ell}} c(\ell-1;h-h_\ell)\, a(\ell;h_\ell).$$
In other words, the vector $c(\ell)$ is obtained by convolution of the vector $c(\ell-1)$ with the vector $a(\ell) = (a(\ell;h))_h$. The computational cost of this operation is $O\bigl(n_{\bullet\ell}\sum_{j=1}^{\ell-1} n_{\bullet j}\bigr)$. Therefore, the coefficients $(c_h)_h = c(k)$ can be computed through the recursive relation
$$c(0) = (1), \qquad c(\ell) = c(\ell-1)*a(\ell) \quad (\ell=1,\dots,k),$$
or, equivalently, $c(k) = a(1)*\cdots*a(k)$, where $*$ is the convolution operator. The total computational cost is $O\bigl(\sum_{j<\ell} n_{\bullet j}\, n_{\bullet\ell}\bigr)$.

C.3 Optimal choice of parameter r

Section 5.2 outlines a rejection sampling algorithm for sampling the latent variable $\alpha T$ from its density (13). Specifically, values are proposed from $\mathrm{Gamma}(\alpha_0+r,\, b_0/\alpha)$ and accepted with probability proportional to $t^{-r}R(t)$, where $r$ is a real parameter and
$$R(t) = \prod_{i=1}^d\frac{1}{((t))_{n_{i\bullet}}}\Bigl(\sum_{h=m}^{n}\frac{c_h}{((\alpha_0))_h}\, t^h\Bigr).$$
A necessary condition for the rejection sampling scheme is that $t^{-r}R(t)$ be bounded above for $t\ge 0$. Since $R(t)$ is a ratio of polynomials, both having degree $n$ and non-negative coefficients, it is a continuous function for $t>0$. Moreover, when $q$ is a positive integer, $((s))_q\sim s$ for $s\to 0$ while $((s))_q\sim s^q$ for $s\to\infty$. Hence $R(t)\sim t^{m-d}$ for $t\to 0$ and $R(t)\to c_n/((\alpha_0))_n$ for $t\to\infty$, which implies that $t^{-r}R(t)$ is continuous and bounded for $t\ge 0$ when $0\le r\le m-d$. Our goal is to choose the value of $r$ within this interval such that the acceptance probability is maximized. For this purpose, let $t^*(r)$ be the value of $t$ that maximizes $t^{-r}R(t)$ for $t\ge 0$. The overall acceptance probability is
$$\mathbb{E}_T\Bigl(\frac{t^*(r)^r}{R(t^*(r))}\,\frac{R(T)}{T^r}\Bigr) = \frac{t^*(r)^r}{R(t^*(r))}\int_0^\infty t^{-r}R(t)\,\Bigl(\frac{b_0}{\alpha}\Bigr)^{\alpha_0+r}\Gamma(\alpha_0+r)^{-1}\, t^{\alpha_0+r-1}\, e^{-(b_0/\alpha)t}\, dt = \frac{t^*(r)^r}{R(t^*(r))}\Bigl(\frac{b_0}{\alpha}\Bigr)^{\alpha_0+r}\Gamma(\alpha_0+r)^{-1}\int_0^\infty t^{\alpha_0-1}\, e^{-(b_0/\alpha)t}\, R(t)\, dt.$$
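The coefficient computation of Section C.2 can be sketched as follows. The helper names are ours; for simplicity each vector is indexed from $h=0$ (ignoring the offsets $m_{\bullet j}$, whose low-order coefficients vanish anyway when the counts are positive), the univariate Stirling rows are built with the recursion $S(q+1,h)=q\,S(q,h)+S(q,h-1)$ from Proposition 11, the multivariate coefficients $S(q_1,\dots,q_d;h)$ arise as convolutions of those rows, and `np.convolve` plays the role of the $*$ operator.

```python
# Sketch (helper names are assumptions) of: Stirling rows, multivariate
# coefficients S(q_1,...,q_d; h), and the folding c(k) = a(1) * ... * a(k)
# with a(j; h) = Gamma(h) * S(n_1j,...,n_dj; h).
import math
import numpy as np

def stirling_row(q):
    """Coefficients S(q, h), h = 0..q, of the ascending factorial ((s))_q."""
    row = np.array([1.0])                 # S(0, 0) = 1
    for m in range(q):                    # build S(m+1, .) from S(m, .)
        new = np.zeros(m + 2)
        new[1:] += row                    # S(m, h-1) term
        new[: m + 1] += m * row           # m * S(m, h) term
        row = new
    return row

def multivariate_row(qs):
    """Coefficients S(q_1, ..., q_d; h) as a convolution of univariate rows."""
    out = np.array([1.0])
    for q in qs:
        out = np.convolve(out, stirling_row(q))
    return out

def coefficients(groups):
    """Fold a(1), ..., a(k) by discrete convolution; groups[j] = (n_1j, ..., n_dj)."""
    c = np.array([1.0])                   # c(0) = (1)
    for qs in groups:
        sj = multivariate_row(qs)
        aj = np.array([math.gamma(h) * sj[h] if h > 0 else 0.0
                       for h in range(len(sj))])
        c = np.convolve(c, aj)
    return c

# sanity check: sum_h S(q, h) s^h reproduces the ascending factorial ((s))_q
s = 1.7
for q in range(1, 6):
    asc = math.gamma(s + q) / math.gamma(s)
    assert abs(sum(co * s**h for h, co in enumerate(stirling_row(q))) - asc) < 1e-9

c = coefficients([(2, 1), (1, 1)])        # k = 2 groups with d = 2 counts each
assert len(c) == 6 and c[0] == 0.0        # degrees 0..5; the h = 0 term vanishes
```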
Finding the value of $0\le r\le m-d$ that maximizes the quantity above is equivalent to finding the value maximizing its logarithm; discarding terms not depending on $r$, that is,
$$r^* = \arg\max_r\,\bigl\{r\log t^*(r)-\log R(t^*(r))+(\alpha_0+r)\log(b_0/\alpha)-\log\Gamma(\alpha_0+r)\bigr\}.$$
This maximization problem can be further simplified by restricting to a finite set of potentially maximizing values. Indeed, for $0<r<m-d$, the function $R(t)$ is continuous and differentiable for $t>0$, and such that $t^{-r}R(t)\to 0$ for both $t\to 0$ and $t\to\infty$. Hence, $t^*(r)$ is a stationary point of $t^{-r}R(t)$, which implies $R'(t^*(r))\, t^*(r) = r\, R(t^*(r))$. Moreover, by the implicit function theorem, $t^*(r)$ is a continuous and differentiable function of $r$. Therefore, the objective function is continuous and differentiable for $0<r<m-d$, and the set of potentially maximizing points in $(0,m-d)$ may be restricted to the stationary points (if any), satisfying
$$\log t^*(r)+\log(b_0/\alpha)-\psi(\alpha_0+r) = 0,$$
where $\psi$ here denotes the digamma function $\psi(x) = \frac{d}{dx}\log\Gamma(x)$, and we have used that $t^*(r)$ is a stationary point of $t^{-r}R(t)$ and thus $R'(t^*(r))\, t^*(r) = r\, R(t^*(r))$. Remarkably, this stationarity condition is also satisfied at the boundary of the maximization set. Indeed, for $r=0$, the optimal $t^*(r)$ may be either a stationary point or $+\infty$, for which $R'(t^*(r)) = 0$ necessarily holds. On the other hand, if $r=m-d$, the optimal $t^*(r)$ may be either a
stationary point or $0$, for which $R(t^*(r)) = 0$ holds.

C.4 Sampling from the hierarchy of gamma CRMs

From Proposition 9(a), the residual component $\tilde\mu^*$ of the posterior distribution of $\tilde\mu$ retains a hierarchical structure, conditionally on the latent variables $U$:
$$\tilde\mu^*_1,\dots,\tilde\mu^*_d\mid\tilde\mu^*_0,U \sim \prod_{i=1}^d\mathrm{CRM}\bigl(\alpha\, s^{-1}\, e^{-b(1+U_i/b)s}\, ds\otimes\tilde\mu^*_0\bigr); \qquad \tilde\mu^*_0\mid U \sim \mathrm{CRM}\bigl(\alpha_0\, s^{-1}\, e^{-\alpha\lambda(U)s}\, ds\otimes P_0\bigr).$$
Therefore, one needs to sample from a hierarchy of gamma CRMs in order to obtain complete samples from the posterior. At the root of the hierarchy, approximate posterior samples from the rescaled gamma random measure $\alpha\tilde\mu^*_0\mid U$ can be obtained through the Ferguson–Klass representation (Ferguson and Klass, 1972). This amounts to sampling the largest $L$ jumps of an infinitely active random measure in decreasing order, and thus provides its best finite-dimensional approximation: for a fixed truncation level, the approximation error is minimized. A straightforward approach uses the inverse Lévy measure algorithm (Wolpert and Ickstadt, 1998; Walker and Damien, 2000). For $\ell=1,\dots,L$, let $\omega_{0\ell}\ge 0$ be the value solving the equation
$$\frac{\xi_\ell}{\alpha_0} = \int_{\omega_{0\ell}}^\infty s^{-1}\, e^{-\alpha\lambda(U)s}\, ds = E_1(\alpha\lambda(U)\,\omega_{0\ell}) \qquad (\ell=1,\dots,L),$$
where $E_1$ is the exponential integral function and $\xi_1<\cdots<\xi_L$ a.s. are the first $L$ jump times of a unit-rate Poisson process; that is, $\xi_0 = 0$ and the inter-arrival times satisfy $\xi_\ell-\xi_{\ell-1}\overset{iid}{\sim}\mathrm{Exp}(1)$, for $\ell=1,\dots,L$. Then the random measure $\alpha\tilde\mu^*_0\mid U$ can be approximated as
$$\alpha\tilde\mu^*_0\mid U \approx \sum_{\ell=1}^L(\alpha\,\omega_{0\ell})\,\delta_{Y_\ell},$$
where $Y_1,\dots,Y_L\overset{iid}{\sim}P_0$ are independent of the Poisson process $(\xi_1,\dots,\xi_L)$. This algorithm requires sequentially inverting the exponential integral function numerically, which is a nontrivial albeit much investigated task. Details about our implementation are provided in the following Section C.5. Alternative approaches for sampling the largest $L$ jumps of a random measure in decreasing order, and their specifications for the gamma process, are explored in Campbell et al. (2019) and Zhang and Dassios (2024).
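Under arbitrary parameter choices of our own, the root-measure truncation just described can be sketched with SciPy's `exp1` and a bracketed root finder in place of the Newton scheme of Section C.5:

```python
# Sketch of the inverse Levy measure step for the root gamma CRM: the jumps
# omega_l solve xi_l / alpha0 = E1(c * omega_l), with c = alpha * lambda(U) and
# xi_l the arrival times of a unit-rate Poisson process. Since E1 is strictly
# decreasing, the solutions come out in decreasing order. Parameters are ours.
import numpy as np
from scipy.special import exp1
from scipy.optimize import brentq

def gamma_crm_jumps(alpha0, c, L, rng):
    xi = np.cumsum(rng.exponential(1.0, size=L))   # Poisson arrival times
    jumps = []
    for x in xi:
        target = x / alpha0
        lo, hi = 1e-300, 1.0                       # E1 decreases from +inf to 0
        while exp1(c * hi) > target:               # grow the bracket to the right
            hi *= 2.0
        jumps.append(brentq(lambda w: exp1(c * w) - target, lo, hi))
    return np.array(jumps)

rng = np.random.default_rng(1)
w = gamma_crm_jumps(alpha0=2.0, c=1.5, L=25, rng=rng)
assert np.all(w > 0) and np.all(np.diff(w) < 0)    # positive, decreasing jumps
```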
The random measures in $\tilde\mu^*$ are conditionally independent given $\alpha\tilde\mu^*_0$ and $U$, and each component $\tilde\mu^*_i\mid\alpha\tilde\mu^*_0,U$ can be approximately sampled from a gamma CRM having Lévy measure
$$\nu^*_i(ds,dx) = \sum_{\ell=1}^L(\alpha\,\omega_{0\ell})\, s^{-1}\, e^{-b(1+U_i/b)s}\, ds\,\delta_{Y_\ell}(dx).$$
The additive components of the Lévy measure represent independent summands of $\tilde\mu^*_i$ concentrated at distinct fixed locations. Hence, $\tilde\mu^*_i\mid\alpha\tilde\mu^*_0,U$ can be approximated as
$$\tilde\mu^*_i\mid\alpha\tilde\mu^*_0,U \approx \sum_{\ell=1}^L\omega_{i\ell}\,\delta_{Y_\ell},$$
where $\omega_{i1},\dots,\omega_{iL}$ are independent random variables with $\omega_{i\ell}\sim\mathrm{Gamma}\bigl(\alpha\,\omega_{0\ell},\, b(1+U_i/b)\bigr)$, for $\ell=1,\dots,L$. The sampling procedure to obtain an approximation of $\tilde\mu$ by truncation of its infinite sequence of jumps is summarized in Figure 8; the similarities with the sampling algorithms for the jumps $J$, summarized in Figure 1, are evident.

Figure 8: Conditional dependencies between random variables involved in sampling the hierarchy of gamma CRMs. Red circles represent the computational bottlenecks. The sampling scheme for the latent variables U is outlined in Section 5.1 and Figure 1. For simplicity, variables are reported up to scaling w.r.t. model parameters.

Note that the total mass of each random measure $\tilde\mu^*_i$, that is, the mass
of the posterior random measure $\tilde\mu_i\mid X_{1:d}$ not assigned to fixed locations, can instead be sampled exactly from a hierarchy of gamma random variables.

C.5 Inverting the exponential integral

The implementation of the inverse Lévy measure algorithm of Walker and Damien (2000) for the gamma CRM requires inverting the tail integrals of its Lévy density (Section C.4). This amounts to inverting the exponential integral function $E_1$, defined for $x>0$ as
$$E_1(x) = \int_x^{+\infty} s^{-1}\, e^{-s}\, ds.$$

Figure 9: Iterations of Newton's method for solving the equation $E_1(x) = 2.0$, with different starting points. Starting on the right of the solution may lead the algorithm outside the domain of the function (orange); starting on the left guarantees convergence (green). The right panel zooms in on the converging iterations.

Note that $E_1$ is a strictly decreasing function, and thus invertible. A convenient approach to find its inverse $E_1^{-1}(y)$, for a given value $y>0$, is to determine the unique root of the function $x\mapsto E_1(x)-y$, exploiting root-finding algorithms such as Newton's method. This method requires the evaluation of the derivative $E_1'$ of the exponential integral, which is available in closed form, as it equals the negative of the integrand function,
$$E_1'(x) = -x^{-1}\, e^{-x}.$$
A well-known limitation of Newton's method is the possibility of obtaining iteration values that fall outside the domain of the function, where its evaluation is not possible. This situation is particularly relevant to inverse Lévy measure algorithms, as the tail integrals of infinitely active Lévy densities diverge to $+\infty$ as the lower bound of the integration interval goes to 0. A simple workaround that typically solves this issue is choosing a starting point $x_0$ for Newton's algorithm which falls on the left of the solution, that is, $E_1(x_0)\ge y$. This can be achieved by iteratively halving an initial guess.
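The safeguarded scheme just described can be sketched as follows; the function name and tolerances are our own, and the update uses $x_{n+1} = x_n - (E_1(x_n)-y)/E_1'(x_n) = x_n + (E_1(x_n)-y)\,x_n\,e^{x_n}$.

```python
# Newton's method for E1(x) = y with the halving safeguard: first halve the
# initial guess until E1(x0) >= y (i.e. x0 lies left of the solution), then
# iterate; since E1 is decreasing and convex, iterates stay left of the root
# and increase monotonically. Function name and tolerances are ours.
import math
from scipy.special import exp1

def inv_exp1(y, x0=1.0, tol=1e-12, max_iter=100):
    x = x0
    while exp1(x) < y:                           # halving safeguard
        x *= 0.5
    for _ in range(max_iter):
        step = (exp1(x) - y) * x * math.exp(x)   # -(E1(x) - y) / E1'(x)
        x += step
        if abs(step) <= tol * x:
            break
    return x

y = 2.0
x = inv_exp1(y)
assert abs(exp1(x) - y) < 1e-10
```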
Remarkably, within the sequential approach required by the algorithm in Section C.4, the standard choice for the initial guess is the solution at the previous step. However, this falls on the right of the current solution, and thus halving is always necessary. Convergence of this algorithm is guaranteed whenever the Lévy density is a decreasing function, and thus its tail integral is decreasing and convex. Figure 9 shows an illustration of the different behaviours of Newton's method, depending on the starting point x_0.

Instead, we consider an alternative approach that improves the efficiency of Newton's method. Specifically, we redefine the exponential integral through the logarithm of its argument; in other words, we invert the function
$$f(z) = E_1(e^z) = \int_{e^z}^{+\infty} s^{-1} e^{-s}\, ds,$$
whose derivative is given by f'(z) = E_1'(e^z) e^z = −exp(−e^z). This formulation has the advantage of not being restricted to positive values, thus preventing iterations from falling outside the domain. Note that f is decreasing and convex, which guarantees convergence of Newton's method for every starting point z_0. A decisive advantage of this approach is the asymptotic linearity of the function f as z diverges to −∞. Specifically, f(z) ≈ −γ − z for z → −∞, where γ is the Euler-Mascheroni constant. This property is crucial for speeding up convergence of the numerical scheme, as the rate of convergence of Newton's
method is proportional to the second derivative around the solution. The asymptotic expansion is also useful to improve the numerical stability of function evaluations. We argue that redefining the tail integral as a function on the whole real line, removing the constraint to positive values, may be a general technique to enhance the performance of Newton's method for computing the inverse Lévy measure, beyond the gamma process case.

C.6 Posterior distribution of normalized random measures

The posterior random probabilities arising from model (5) are the normalization of the posterior random measures characterized in Theorem 3 and specialized in Proposition 9 for the gamma-gamma hCRV. The simplest approach to obtain posterior samples is thus normalizing the posterior samples from the corresponding random measures. In this section, we show that such posterior random probabilities are distributed as conditionally Dirichlet processes with discrete base measures. Therefore, their probability weights can alternatively be sampled from a Dirichlet distribution. This allows for a straightforward comparison with alternative samplers for the HDP, such as the marginal Gibbs sampler of Teh et al. (2006); see the following Section C.7. For this purpose, denote by π and π∗ the normalized jumps of ˜µ_i | X_{1:d}, for i = 1, ..., d,
$$\pi_{ij} = \frac{J_{ij}}{\sum_{j=1}^{k} J_{ij} + \tilde\mu^*_i(\mathbb{X})}, \qquad \pi^*_{i\ell} = \frac{\omega_{i\ell}}{\sum_{j=1}^{k} J_{ij} + \tilde\mu^*_i(\mathbb{X})},$$
where the J_{ij}'s are jumps at fixed locations and the ω_{iℓ}'s are the jumps of ˜µ∗_i, arranged in decreasing order; see Section C.4 for details on sampling from ˜µ∗_i. From Theorem 3 it follows that the random probabilities ˜P = ˜µ/˜µ(X) are distributed, a posteriori, as
$$\tilde P_i = \frac{\tilde\mu_i}{\tilde\mu_i(\mathbb{X})} \,\Big|\, X_{1:d} \overset{d}{=} \sum_{j=1}^{k} \pi_{ij}\, \delta_{X^*_j} + \sum_{\ell \ge 1} \pi^*_{i\ell}\, \delta_{Y_\ell} \qquad (i = 1, \dots, d). \tag{22}$$
For the gamma-gamma hCRV considered in this section, the conditional distribution of each posterior random probability is a Dirichlet process with discrete base measure.

Proposition 14. Let ˜P be a normalized gamma-gamma hCRV.
A posteriori, the random probabilities ˜P | X_{1:d} are conditionally independent, given the variables αJ_{01}, ..., αJ_{0k} and the random measure ˜µ∗_0 in Proposition 9, and distributed as
$$\tilde P_i = \frac{\tilde\mu_i}{\tilde\mu_i(\mathbb{X})} \,\Big|\, X_{1:d}, \alpha J_0, \tilde\mu^*_0 \overset{ind}{\sim} \mathrm{DP}\Big(\alpha\tilde\mu^*_0 + \sum_{j=1}^{k} (n_{ij} + \alpha J_{0j})\, \delta_{X^*_j}\Big) \qquad (i = 1, \dots, d).$$

Proof. By Proposition 9, independently for each i = 1, ..., d, the random measure ˜µ∗_i | ˜µ∗_0, U_i is a gamma CRM with shape α˜µ∗_0(X) and rate b + U_i. Moreover, for each j = 1, ..., k, the random jump J_{ij} | αJ_{0j}, U_i is independently gamma distributed with shape n_{ij} + αJ_{0j} and rate b + U_i. Therefore, the posterior distribution of ˜µ_i | αJ_0, ˜µ∗_0, U_i is that of a gamma CRM, being a superposition of independent gamma processes with the same rate. Indeed, its Lévy measure is
$$s^{-1} e^{-b(1+U_i/b)s} \Big( \alpha\tilde\mu^*_0 + \sum_{j=1}^{k} (n_{ij} + \alpha J_{0j})\, \delta_{X^*_j} \Big) \qquad (i = 1, \dots, d).$$
The normalization of a gamma CRM is then a Dirichlet process (Ferguson, 1973).

Remarkably, the posterior distribution of ˜P does not depend on the prior rate parameter b, as already discussed right after Proposition 1. Moreover, it is conditionally independent of the latent variables U, given αJ_0 and ˜µ∗_0. This result is particularly relevant from the algorithmic point of view, as anticipated above.
Indeed, after obtaining samples from the variables αJ_{01}, ..., αJ_{0k} and an approximation by truncation of α˜µ∗_0, as described in Section C.4, the probability weights in (22) can be sampled from a (k + L)-dimensional Dirichlet distribution. In particular, for each i = 1, ..., d and L ∈ N,
$$\pi_{i1}, \dots, \pi_{ik}, \pi^*_{i1}, \dots, \pi^*_{iL} \mid X_{1:d}, J_0, \tilde\mu^*_0 \sim \mathrm{Dirichlet}\big(n_{i1} + \alpha J_{01}, \dots, n_{ik} + \alpha J_{0k}, \alpha\omega_{01}, \dots, \alpha\omega_{0L}\big).$$
The conditional independence from U allows further parallelization of the posterior sampling schemes. The resulting structure of conditional dependencies within the complete sampling algorithms for the normalized posterior random measures is summarized in Figure 10.

Figure 10: Conditional dependencies between random variables in the sampling algorithms for posterior normalized random measures. Red circles represent the computational bottlenecks; quantities enclosed in blue boxes are sampled from Dirichlet distributions. For simplicity, variables are reported up to scaling w.r.t. model parameters.

C.7 Numerical illustrations

This section contains numerical illustrations of the proposed algorithms for posterior inference. We consider the following algorithms: the MCMC sampler with Metropolis-Hastings steps in Algorithm 1, using gamma proposals (MH) and random walk on log-scale (MHlog) (cf. Section C.1), the exact sampler in Algorithm 2 (exact), and its alternative version based on adaptive rejection sampling (ARS). As a reference to evaluate their effectiveness, we consider the marginal Gibbs sampler of Teh et al. (2006) for the HDP, with a gamma prior on the concentration parameter (HDPpr). Note that the resampling step for the concentration parameter can be performed either via standard Metropolis-Hastings, or by exploiting auxiliary beta random variables. These algorithms target the same posterior distribution for the random probabilities, hence their posterior estimates should coincide.
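The (k + L)-dimensional Dirichlet sampling of the posterior weights can be sketched as follows (illustrative code, not the paper's implementation; the counts, jumps, and truncation level below are made-up inputs, and the Dirichlet draw is realized through normalized independent gamma variables, mirroring the fact that a normalized gamma CRM is a Dirichlet process):

```python
import random

def dirichlet(params):
    """Dirichlet draw via normalized independent Gamma(a_j, 1) variables."""
    g = [random.gammavariate(a, 1.0) for a in params]
    total = sum(g)
    return [v / total for v in g]

def posterior_weights(n_i, alpha, J0, omega0):
    """Posterior weights (pi_i1, ..., pi_ik, pi*_i1, ..., pi*_iL) for one group i.

    n_i    : counts n_ij at the k fixed locations
    J0     : root jumps J_01, ..., J_0k
    omega0 : truncated jumps omega_01, ..., omega_0L of mu*_0
    """
    params = [n + alpha * j for n, j in zip(n_i, J0)] + [alpha * w for w in omega0]
    return dirichlet(params)

random.seed(1)
w = posterior_weights(n_i=[24, 15], alpha=1.0, J0=[0.9, 0.6], omega0=[0.5, 0.3, 0.1])
```

The returned vector sums to one, with the first k entries attached to the fixed locations and the remaining L entries to the truncated jumps.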
We consider d = 4 groups of observations, each of size n_i = 25, sampled from independent Poisson distributions with means 2, 3, 4 and 5. The number of clusters in the simulated dataset is k = 12. Model parameters are fixed at α_0 = α = 1 and b_0 = b = 1. We draw 1,000 posterior samples for each algorithm; for the MCMC schemes, we consider a burn-in of 100 steps and a thinning factor of 10. Figure 11 summarizes the diagnostics and posterior distribution for the random variable αT. The MCMC algorithms show good mixing, with acceptance rates of 0.37 and 0.30, and effective sample sizes after thinning of 722 and 703. The envelopes considered for rejection sampling in the exact approaches prove to be tight, with acceptance rates of 0.71 and 0.91 respectively. For computational convenience, in the adaptive rejection sampling algorithm, we update the piecewise linear envelope only if the acceptance probability is smaller than 0.8. The optimal value for r is 6.95; see Section C.3. Similarly, Figure 12 summarizes the diagnostics and posterior distributions for the random variables J_{01}, ..., J_{0k}. The value of the latent variables U is fixed at their posterior means, so that λ(U) = 7.46. For this illustration, we consider the latent jumps at values 2 and 3, with
counts n_{•j} equal to 24 and 15, respectively. Again, the MCMC algorithms show good mixing, with acceptance rates between 0.42 and 0.52 and effective sample sizes after thinning above 700. The posterior distributions for the random jumps J_{ij} at values 2 and 3, for the d = 4 groups, are displayed in Figure 13; the effective sample sizes after thinning are all above 700. Note that the estimates of the posterior distributions are obtained via Gaussian kernel smoothing. Finally, we perform a visual comparison with the marginal Gibbs sampler of Teh et al. (2006) for the HDP in terms of accuracy of posterior distributions. Since this algorithm outputs posterior probability weights, we sample from the posterior random probabilities according to the procedure outlined in Section C.6. Considering the same experimental setting introduced above, we compare the MCMC sampler with random walk Metropolis-Hastings steps (MHlog), the exact sampler (exact), and the marginal Gibbs sampler for the HDP, with or without a gamma prior on the concentration parameter (HDPpr and HDP, respectively). The posterior distributions for the probability weights π_{ij} at values 2 and 3 are displayed in Figure 14; again, the algorithms show good mixing, with effective sample sizes after thinning consistently exceeding 800. As expected, estimates of the posterior distributions for the HDP model without prior visibly differ from those obtained with the other algorithms, which instead target the same posterior distributions (Proposition 1). This difference may likewise be observed in the right plot of Figure 6.

Figure 11: Diagnostics and posterior distribution for the random variable αT. Top: traceplots for MCMC algorithms. Middle: envelopes for rejection sampling in exact algorithms. Bottom: estimates of the posterior distribution with Gaussian kernel smoothing.

Figure 12: Diagnostics and posterior distribution for the latent jumps variables J_{0j} at values 2 and 3. Top: traceplots for MCMC algorithms.
Bottom: estimates of the posterior distributions with Gaussian kernel smoothing.

Figure 13: Posterior distributions for the jumps J_{ij} at values 2 and 3, for the d = 4 groups.

Figure 14: Posterior distributions for the probability weights π_{ij} at values 2 and 3, for the d = 4 groups. The HDP model without prior targets different posterior distributions (black).
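The effective sample sizes reported in these diagnostics can be computed, for instance, with the standard initial-positive-sequence truncation of the autocorrelation sum (a generic sketch, not the code used for these experiments):

```python
def effective_sample_size(chain):
    """ESS = n / (1 + 2 * sum of autocorrelations), with the sum truncated at the
    first non-positive pair of adjacent autocovariances (Geyer's initial positive
    sequence criterion for reversible chains)."""
    n = len(chain)
    mean = sum(chain) / n
    var = sum((x - mean) ** 2 for x in chain) / n
    if var == 0.0:
        return float(n)

    def autocov(k):
        return sum((chain[i] - mean) * (chain[i + k] - mean) for i in range(n - k)) / n

    s, k = 0.0, 1
    while k + 1 < n:
        pair = autocov(k) + autocov(k + 1)
        if pair <= 0.0:
            break
        s += pair
        k += 2
    return n / (1.0 + 2.0 * s / var)
```

A strongly autocorrelated chain yields an ESS much smaller than its length, while a well-mixing chain stays close to it.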
Transformers for Learning on Noisy and Task-Level Manifolds: Approximation and Generalization Insights

Zhaiming Shen∗  Alex Havrilla∗  Rongjie Lai†  Alexander Cloninger‡  Wenjing Liao∗

May 7, 2025

Abstract

Transformers serve as the foundational architecture for large language and video generation models, such as GPT, BERT, SORA and their successors. Empirical studies have demonstrated that real-world data and learning tasks exhibit low-dimensional structures, along with some noise or measurement error. The performance of transformers tends to depend on the intrinsic dimension of the data/tasks, though theoretical understanding of this dependence remains largely unexplored for transformers. This work establishes a theoretical foundation by analyzing the performance of transformers for regression tasks involving noisy input data on a manifold. Specifically, the input data lie in a tubular neighborhood of a manifold, while the ground truth function depends on the projection of the noisy data onto the manifold. We prove approximation and generalization errors which crucially depend on the intrinsic dimension of the manifold. Our results demonstrate that transformers can leverage low-complexity structures in the learning task even when the input data are perturbed by high-dimensional noise. Our novel proof technique constructs representations of basic arithmetic operations by transformers, which may hold independent interest.

1 Introduction

The transformer architecture, introduced in Vaswani et al. [2017], has reshaped the landscape of machine learning, enabling unprecedented advancements in natural language processing (NLP), computer vision, and beyond. In transformers, traditional recurrent and convolutional architectures are replaced by an attention mechanism. Transformers have achieved remarkable success in large language models (LLMs) and video generation, such as GPT [Achiam et al., 2023], BERT [Devlin, 2018], SORA [Brooks et al., 2024] and their successors.
∗{zshen49, ahavrilla3, wliao60}@gatech.edu. School of Mathematics, Georgia Institute of Technology, Atlanta, GA 30332
†lairj@purdue.edu. Department of Mathematics, Purdue University, West Lafayette, IN 47907
‡acloninger@ucsd.edu. Department of Mathematics and Halicioğlu Data Science Institute, University of California, San Diego, La Jolla, CA 92093

arXiv:2505.03205v1 [cs.LG] 6 May 2025

Despite the success of transformers, their approximation and generalization capabilities remain less explored compared to other network architectures, such as feedforward and convolutional neural networks. Some theoretical investigations of transformers can be found in Jelassi et al. [2022]; Yun et al. [2019]; Edelman et al. [2022]; Wei et al. [2022]; Takakura and Suzuki [2023]; Gurevych et al. [2022]; Bai et al. [2023]. Specifically, Yun et al. [2019] proved that transformer models can universally approximate continuous sequence-to-sequence functions on a compact support, while the network size grows exponentially with respect to the sequence dimension. Edelman et al. [2022] evaluated the capacity of Transformer networks and derived the sample complexity to learn sparse Boolean functions. Takakura and Suzuki [2023] studied the approximation and estimation ability of Transformers as sequence-to-sequence functions with anisotropic smoothness on infinite dimensional input. Gurevych et al. [2022] studied binary classification with transformers when the posterior probability function exhibits a hierarchical composition model with Hölder smoothness. Jelassi et al. [2022] analyzed a simplified version of vision transformers and showed that they can learn the spatial structure and generalize. Lai et al. [2024] established a connection between transformers and smooth cubic splines. Bai et al. [2023]
https://arxiv.org/abs/2505.03205v1
proved the in-context learning ability of transformers for least squares, ridge regression, Lasso and generalized linear models.

Compared to transformers, feedforward and convolutional neural networks are significantly better understood in terms of approximation [Cybenko, 1989; Hornik et al., 1989; Leshno et al., 1993; Mhaskar, 1993; Bach, 2017; Maiorov, 1999; Pinkus, 1999; Petrushev, 1998; Yarotsky, 2017; Lu et al., 2021; Oono and Suzuki, 2019; Lai and Shen, 2021, 2024; Zhou, 2020] and generalization [Kohler and Mehnert, 2011; Schmidt-Hieber, 2020; Oono and Suzuki, 2019] theories. Theoretical results in Yarotsky [2017]; Lu et al. [2021]; Oono and Suzuki [2019]; Schmidt-Hieber [2020] addressed function approximation and estimation in a Euclidean space. For functions supported on a low-dimensional manifold, approximation and generalization theories were established for feedforward neural networks in Chui and Mhaskar [2018]; Shaham et al. [2018]; Chen et al. [2019]; Schmidt-Hieber [2019]; Nakada and Imaizumi [2020]; Chen et al. [2022] and for convolutional residual neural networks in Liu et al. [2021]. To relax the exact manifold assumption and allow for noise on the input data, Cloninger and Klock [2021] studied approximation properties of feedforward neural networks under an inexact manifold assumption, i.e., data are in a tubular neighborhood of a manifold and the ground truth function depends on the projection of the noisy data onto the manifold. This relaxation accommodates input data with noise and accounts for the low complexity of the learning task beyond the low intrinsic dimension of the input data, making the theory applicable to a wider range of practical scenarios for feedforward neural networks.
In the application of transformers, empirical studies have demonstrated that image, video and text data, as well as learning tasks, tend to exhibit low-dimensional structures [Pope et al., 2021; Sharma and Kaplan, 2022; Havrilla and Liao, 2024], along with some noise or measurement error in real-world data sets. The performance of transformers tends to depend on the intrinsic dimension of the data/tasks [Sharma and Kaplan, 2022; Havrilla and Liao, 2024; Razzhigaev et al., 2023; Min et al., 2023; Aghajanyan et al., 2020]. Specifically, Aghajanyan et al. [2020] empirically showed that common pre-trained models in NLP have a very low intrinsic dimension. Pope et al. [2021]; Razzhigaev et al. [2023]; Havrilla and Liao [2024] investigated the intrinsic dimension of token embeddings in transformer architectures, and obtained a significantly lower intrinsic dimension than the token dimension. Despite the empirical findings connecting the performance of transformers with the low intrinsic dimension of data/tasks, theoretical understanding of how transformers adapt to low-dimensional data/task structures and build robust predictions against noise is largely open. Havrilla and Liao [2024] analyzed the approximation and generalization capability of transformers for regression tasks when the input data exactly lie on a low-dimensional manifold. However, the setup in Havrilla and Liao [2024] does not account for noisy data concentrated near a low-dimensional manifold and low complexity in the regression function. In this paper, we bridge this theoretical gap by analyzing the approximation and generalization error of transformers for regression of functions on a tubular neighborhood of a manifold. To leverage the low-dimensional structures in the learning task, the function depends
on the projection of the input onto the manifold. Specifically, let M ⊆ [0,1]^D be a compact, connected d-dimensional Riemannian manifold isometrically embedded in R^D with a positive reach τ_M, and let M(q) be a tubular region around the manifold M with local tube radius given by q ∈ [0,1) times the local reach (see Definitions 1 and 4).

Figure 1: The tubular region around the manifold M and the orthogonal projection π_M.

We consider a function f : M(q) → R of the form
$$f(x) = g(\pi_{\mathcal{M}}(x)), \quad \forall x \in \mathcal{M}(q), \tag{1}$$
where
$$\pi_{\mathcal{M}}(x) = \arg\min_{z \in \mathcal{M}} \|x - z\|_2 \tag{2}$$
is the orthogonal projection onto the manifold M, and g : M → R is an unknown α-Hölder function on the manifold M. An illustration of the tubular region and the orthogonal projection onto the manifold is shown in Figure 1. The regression model in (1) covers a variety of interesting scenarios: 1) Noisy Input Data: The input x is a perturbation of its clean counterpart π_M(x) on the manifold M. One can access the input and output pairs, i.e. (x, f(x)), but the clean counterpart π_M(x) is not available in this learning task. 2) Low Intrinsic Dimension in the Machine Learning Task: The input data live in a high-dimensional space R^D, but the regression or inference task has a low complexity. In other words, the output f(x) locally depends on d tangential directions on the task manifold M, and the function is locally invariant along the D − d normal directions on the manifold. The model in (1) is also general enough to include many interesting special cases. For example, when M is a linear subspace, the model in (1) becomes the well-known multi-index model [Cook and Li, 2002]. When q = 0, one recovers the exact manifold regression model where functions are supported exactly on a low-dimensional manifold. In this paper, we establish novel mathematical approximation and statistical estimation (or generalization) theories for functions in (1) via transformer neural networks.
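A minimal instance of model (1), with hypothetical choices of M and g for illustration only: take M to be the unit circle in R², so that π_M(x) = x/∥x∥₂ for x ≠ 0 and the reach is τ_M = 1; then f = g ∘ π_M is invariant along the normal (radial) direction within the tube:

```python
import math

def project_to_circle(x):
    """Orthogonal projection pi_M onto the unit circle M in R^2 (defined for x != 0)."""
    r = math.hypot(x[0], x[1])
    return (x[0] / r, x[1] / r)

def g(v):
    """An alpha-Holder (here Lipschitz) function on M: the first coordinate, cos(theta)."""
    return v[0]

def f(x):
    """Target f(x) = g(pi_M(x)): depends on x only through its projection onto M."""
    return g(project_to_circle(x))

theta = 0.7
clean = (math.cos(theta), math.sin(theta))    # a point on M
noisy = (1.3 * clean[0], 1.3 * clean[1])      # radial perturbation, distance 0.3 < tau_M
```

Here f(noisy) = f(clean): the label is unchanged by normal perturbations, which is exactly the denoising structure the theory exploits.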
Approximation Theory: Under proper assumptions on M, for any ϵ > 0, there exists a transformer neural network that universally approximates any function f in (1) up to ϵ accuracy (Theorem 1). The width of this transformer network is of order Dϵ^{−d/α} and the depth is of order d + ln(ln(ϵ^{−1})). Note that d is the intrinsic dimension of the manifold M and α represents the Hölder smoothness of g. In this result, the network complexity crucially depends on the intrinsic dimension.

Generalization Theory: When n i.i.d. training samples {(x_i, f(x_i))}_{i=1}^n are given, we consider the empirical risk minimizer ˆT defined in (10). Theorem 2 shows that the squared generalization error of ˆT is upper bounded in the order of n^{−2α/(2α+d)}. In the exact manifold case when q = 0, Theorem 2 gives rise to the minimax regression error [Györfi et al., 2006]. In the noisy case when q ∈ (0,1), Theorem 2 demonstrates a denoising phenomenon given by transformers such that, when the sample size n increases, the generalization error converges to 0 at a fast rate depending on the intrinsic dimension d.

Basic Arithmetic Operations Implemented by Transformers: In addition, our proof explicitly constructs transformers to implement basic arithmetic operations, such as addition, constant
multiplication, product, division, etc. Such implementations can be done efficiently (e.g., in parallel) on different tokens. These results can be applied individually as building blocks for approximation studies using Transformers.

This paper is organized as follows. In Section 2, we introduce some preliminary definitions. In Section 3, we present our main results, including the approximation and generalization error bounds achieved by transformer networks. In Section 4, we provide a proof sketch of our main results. In Section 6, we conclude and discuss the impact of this work.

2 Preliminaries

2.1 Manifold

Definition 1 (Manifold) A d-dimensional manifold M is a topological space where each point has a neighborhood that is homeomorphic to an open subset of R^d. Further, distinct points in M can be separated by disjoint neighborhoods, and M has a countable basis for its topology.

Definition 2 (Medial Axis) Let M ⊆ R^D be a connected and compact d-dimensional submanifold. Its medial axis is defined as
$$\mathrm{Med}(\mathcal{M}) := \{x \in \mathbb{R}^D \mid \exists\, p \ne q \in \mathcal{M},\ \|p - x\|_2 = \|q - x\|_2 = \inf_{z \in \mathcal{M}} \|z - x\|_2\},$$
which contains all points x ∈ R^D with set-valued orthogonal projection π_M(x) = arg min_{z∈M} ∥x − z∥_2.

Definition 3 (Local Reach and Reach of a Manifold) The local reach for v ∈ M is defined as
$$\tau_{\mathcal{M}}(v) := \inf_{z \in \mathrm{Med}(\mathcal{M})} \|v - z\|_2,$$
which describes the minimum distance needed to travel from v to the closure of the medial axis. The smallest local reach τ_M := inf_{v∈M} τ_M(v) is called the reach of M.

Definition 4 (Tubular Region around a Manifold) Let q ∈ [0,1). The tubular region around the manifold M with local tube radius qτ_M(v) is defined as
$$\mathcal{M}(q) := \{x \in \mathbb{R}^D \mid x = v + u,\ v \in \mathcal{M},\ u \in \ker(P(v)^\top),\ \|u\|_2 < q\tau_{\mathcal{M}}(v)\}, \tag{3}$$
where the columns of P(v) ∈ R^{D×d} represent an orthonormal basis of the tangent space of M at v.

Definition 5 (Geodesic Distance) The geodesic distance between v, v' ∈ M is defined as
$$d_{\mathcal{M}}(v, v') := \inf\{|\gamma| : \gamma \in C^1([t, t']),\ \gamma : [t, t'] \to \mathcal{M},\ \gamma(t) = v,\ \gamma(t') = v'\},$$
where the length is defined by |γ| := ∫_t^{t'} ∥γ'(s)∥_2 ds.
The existence of a length-minimizing geodesic γ : [t, t'] → M between any two points v = γ(t), v' = γ(t') is guaranteed by the Hopf–Rinow theorem [Hopf and Rinow, 1931].

Definition 6 (δ-Separated and Maximal Separated Set) Let S be a set associated with a metric d. We say Z ⊆ S is δ-separated if for any z, z' ∈ Z, we have d(z, z') > δ. We say Z ⊆ S is a maximal separated δ-net if adding another point to Z destroys the δ-separated property.

Definition 7 (Covering Number) Let (H, ρ) be a metric space, where H is the set of objects and ρ is a metric. For a given ϵ > 0, the covering number N(ϵ, H, ρ) is the smallest number of balls of radius ϵ (with respect to ρ) needed to cover H. More precisely,
$$N(\epsilon, \mathcal{H}, \rho) := \min\{N \in \mathbb{N} \mid \exists \{h_1, h_2, \dots, h_N\} \subseteq \mathcal{H},\ \forall h \in \mathcal{H},\ \exists h_i \text{ such that } \rho(h, h_i) \le \epsilon\}.$$

Let d_M be the geodesic metric defined on M. We can extend d_M to the tubular region M(q) such that d_{M(q)}(u, v) := d_M(π_M(u), π_M(v)), provided that u, v ∈ M(q) have unique orthogonal projections onto M. According to Cloninger and Klock [2021, Lemma 2.1], for any x ∈ M(q) with q ∈ [0,1), x has a unique projection onto M such that π_M(x) = v.

2.2 Transformer Network Class

Definition 8 (Feed-forward Network Class) The feed-forward neural network (FFN) class with weights θ is
FFN(L_FFN, w_FFN) = {FFN(θ; ·) | FFN(θ; ·) is a FFN with at most L_FFN layers and
width w_FFN}. We use the ReLU function σ(x) = max(x, 0) as the activation function in the feed-forward network. Note that each feed-forward layer is applied tokenwise to an embedding matrix H.

Definition 9 (Attention and Multi-head Attention) The attention with the query, key, value matrices Q, K, V ∈ R^{d_embed×d_embed} is
$$A_{K,Q,V}(H) = V H \sigma\big((KH)^\top QH\big). \tag{4}$$
It is worthwhile to note that the following formulation is convenient when analyzing the interaction between a pair of tokens, which is more relevant to us:
$$A(h_i) = \sum_{j=1}^{\ell} \sigma(\langle Q h_i, K h_j \rangle)\, V h_j. \tag{5}$$
The multi-head attention (MHA) with m heads is
$$\mathrm{MHA}(H) = \sum_{j=1}^{m} V_j H \sigma\big((K_j H)^\top Q_j H\big). \tag{6}$$
Note that in this paper, we consider ReLU as the activation function in the attention, rather than Softmax.

Definition 10 (Transformer Block) The transformer block is a residual composition of the form
$$B(H) = \mathrm{FFN}(\mathrm{MHA}(H) + H) + \mathrm{MHA}(H) + H. \tag{7}$$

Figure 2: Transformer architecture constructed to approximate ˆf (the purple component implements each of the ˜η_i, the red component approximates 1/∥˜η∥_1, the yellow component approximates each of the η_i(x), and then approximates ˆf).

Definition 11 (Transformer Block Class) The transformer block class with weights θ is
B(m, L_FFN, w_FFN) = {B(θ; ·) | B(θ; ·) has a MHA with m attention heads, and a FFN layer with depth L_FFN and width w_FFN}.

Definition 12 (Transformer Network) A transformer network T(θ; ·) with weights θ is a composition of an embedding layer, a positional encoding matrix, a sequence of transformer blocks, and a decoding layer, i.e.,
$$T(\theta; x) := \mathrm{DE} \circ B_{L_T} \circ \cdots \circ B_1 \circ (\mathrm{PE} + \mathrm{E}(x)), \tag{8}$$
where x ∈ R^D is the input, E : R^D → R^{d_embed×ℓ} is the linear embedding, and PE ∈ R^{d_embed×ℓ} is the positional encoding. B_1, ..., B_{L_T} : R^{d_embed×ℓ} → R^{d_embed×ℓ} are the transformer blocks, where each block consists of the residual composition of multi-head attention layers and feed-forward layers. DE : R^{d_embed×ℓ} → R is the decoding layer, which outputs the first element in the last column.
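To make Definitions 9 and 10 concrete, here is a small pure-Python rendering of the ReLU attention (4) and the residual block (7) for a single head (an illustration with toy matrices, not the constructions used in the proofs):

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(M):
    return [list(col) for col in zip(*M)]

def relu(M):
    return [[max(v, 0.0) for v in row] for row in M]

def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def attention(Q, K, V, H):
    """A_{K,Q,V}(H) = V H sigma((K H)^T Q H), as in (4); H is d_embed x ell."""
    scores = relu(matmul(transpose(matmul(K, H)), matmul(Q, H)))  # ell x ell
    return matmul(matmul(V, H), scores)

def transformer_block(Q, K, V, ffn, H):
    """B(H) = FFN(MHA(H) + H) + MHA(H) + H, as in (7), with one attention head."""
    Z = add(attention(Q, K, V, H), H)
    return add(ffn(Z), Z)

# Toy usage: d_embed = 2, ell = 3 tokens, identity weights, tokenwise ReLU FFN.
I2 = [[1.0, 0.0], [0.0, 1.0]]
H = [[1.0, 0.0, 2.0], [0.0, 1.0, 0.0]]
out = transformer_block(I2, I2, I2, lambda Z: relu(matmul(I2, Z)), H)
```

The output has the same d_embed × ℓ shape as H, so blocks compose as in (8).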
In our analysis, we utilize the well-known sinusoidal positional encoding I_j ∈ R^2, which can be interpreted as rotations of a unit vector e_1 within the first quadrant of the unit circle. More precisely, for an embedding matrix H = PE + E(x) given in (13), the first two rows are the data terms, which are used to approximate the target function. The third and fourth rows are interaction terms with I_j = (cos(jπ/(2ℓ)), sin(jπ/(2ℓ)))^⊤, determining when each token embedding will interact with another in the attention mechanism, where ℓ is the number of hidden tokens. The last (fifth) row contains constant terms.

Definition 13 (Transformer Network Class) The transformer network class with weights θ is
T(L_T, m_T, d_embed, ℓ, L_FFN, w_FFN, R, κ) = {T(θ; ·) | T(θ; ·) has the form (8) with L_T transformer blocks, at most m_T attention heads in each block, embedded dimension d_embed, number of hidden tokens ℓ, and L_FFN layers of feed-forward networks with hidden width w_FFN, with output ∥T(θ; ·)∥_{L∞(R^D)} ≤ R and weight magnitude ∥θ∥_∞ ≤ κ}.
Here ∥θ∥_∞ represents the maximum magnitude of the network parameters. When there is no ambiguity in the context, we will shorten the notation T(L_T, m_T, d_embed, ℓ, L_FFN, w_FFN, R, κ) as T. Throughout the paper, we use x = (x_1, ..., x_D) as the input variable, with each x_i being the i-th component of x. We summarize the notations in Table 2 in Appendix
A.

3 Transformer Approximation and Generalization Theory

We next present our main results about approximation and generalization theories for estimating functions in (1).

3.1 Assumptions

Assumption 1 (Manifold) Let M ⊆ [0,1]^D be a non-empty, compact, connected d-dimensional Riemannian manifold isometrically embedded in R^D with a positive reach τ_M > 0. The tubular region M(q) defined in (3) satisfies q ∈ [0,1) and M(q) ⊆ [0,1]^D.

Assumption 2 (Target function) The target function f : M(q) → R can be written as in (1), such that f := g ∘ π_M and g : M → R is α-Hölder continuous with Hölder exponent α ∈ (0,1] and Hölder constant L > 0:
$$|g(z) - g(z')| \le L\, d_{\mathcal{M}}^{\alpha}(z, z') \quad \text{for all } z, z' \in \mathcal{M}.$$
In addition, we assume ∥f∥_{L∞(M(q))} ≤ R for some R > 0.

3.2 Transformer Approximation Theory

Our first contribution is a universal approximation theory for functions satisfying Assumption 2 by a transformer network.

Theorem 1 Suppose Assumption 1 holds. For any ϵ ∈ (0, min{1, (τ_M/2)^α}), there exists a transformer network T(θ; ·) ∈ T(L_T, m_T, d_embed, ℓ, L_FFN, w_FFN, R, κ) with parameters
$$L_T = O(d + \ln(\ln(\epsilon^{-1}))), \quad m_T = O(D\epsilon^{-\frac{d}{\alpha}}), \quad d_{embed} = O(1), \quad \ell = O(D\epsilon^{-\frac{d}{\alpha}}),$$
$$L_{FFN} = O(1), \quad w_{FFN} = O(1), \quad \kappa = O(D^2 \epsilon^{-\frac{2d+8}{\alpha}})$$
such that, for any f satisfying Assumption 2, if the network parameters θ are properly chosen, the network yields a function T(θ; ·) with
$$\|T(\theta; \cdot) - f\|_{L^\infty(\mathcal{M}(q))} \le \epsilon. \tag{9}$$
The notation O(·) in m_T, ℓ, κ hides terms dependent on d, q, τ_M, L, R, Vol(M), while O(·) in the others hides terms dependent on q, τ_M, L, R, Vol(M).

The proof of Theorem 1 is provided in Section 4, and a flow chart of our transformer network is illustrated in Figure 2. One notable feature of Theorem 1 is that the network is shallow. It only requires near-constant depth O(d + ln(ln(ϵ^{−1}))) to approximate the function f defined on the noisy manifold with any accuracy ϵ. This highlights a key advantage of Transformers over feed-forward ReLU networks, which require substantially more layers, e.g., O(ln(1/ϵ)), to achieve the same accuracy [Yarotsky, 2017].
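The construction behind Theorem 1 covers M with local charts at a resolution tied to ϵ; the covering machinery of Definitions 6 and 7 can be sketched with a greedy pass (illustrative code on a finite point set; a maximal δ-separated subset is automatically a δ-cover, since any point farther than δ from all chosen centers could still be added, contradicting maximality):

```python
import math

def maximal_separated_net(points, delta):
    """Greedy maximal delta-separated subset of a finite point set in R^n."""
    net = []
    for p in points:
        if all(math.dist(p, c) > delta for c in net):
            net.append(p)
    return net

# Toy usage: points on a line, separation threshold 1.5.
pts = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
net = maximal_separated_net(pts, 1.5)
```

Every point of the input set is within δ of some center, so the net's cardinality upper-bounds the covering number of the set at radius δ.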
3.3 Transformer Generalization Theory

Theorem 1 focuses on the existence of a transformer network class which universally approximates all target functions satisfying Assumption 2. However, it does not yield a computational strategy to obtain the network parameters for any specific function. In practice, the network parameters are obtained by empirical risk minimization. Suppose {x_i}_{i=1}^n are n i.i.d. samples from a distribution P supported on M(q), and their corresponding function values are {f(x_i)}_{i=1}^n. Given n training samples {(x_i, f(x_i))}_{i=1}^n, we consider the empirical risk minimizer ˆT_n such that
$$\hat{T}_n := \arg\min_{T \in \mathcal{T}} \frac{1}{n} \sum_{i=1}^{n} (T(x_i) - f(x_i))^2, \tag{10}$$
where T is a transformer network class. The squared generalization error of ˆT_n is
$$\mathbb{E}\|\hat{T}_n - f\|^2_{L^2(P)} = \mathbb{E}\int_{\mathcal{M}(q)} (\hat{T}_n(x) - f(x))^2\, dP, \tag{11}$$
where the expectation is taken over {x_i}_{i=1}^n. Our next result establishes a generalization error bound for the regression of f.

Theorem 2 Suppose Assumptions 1 and 2 hold. Let {(x_i, f(x_i))}_{i=1}^n be n training samples, where {x_i}_{i=1}^n are n i.i.d. samples of a distribution P supported on M(q). If the transformer network class T(L_T, m_T, d_embed, ℓ, L_FFN, w_FFN, R, κ) has parameters
$$L_T = O(d + \ln(\ln(n^{\frac{\alpha}{2\alpha+d}}))), \quad m_T = O(Dn^{\frac{d}{2\alpha+d}}), \quad d_{embed} = O(1), \quad \ell = O(Dn^{\frac{d}{2\alpha+d}}),$$
$$L_{FFN} = O(1), \quad w_{FFN} = O(1), \quad \kappa = O(D^2 n^{\frac{2d+8}{2\alpha+d}}),$$
where the O(·) in m_T, ℓ, κ hides terms dependent on d, q, τ_M, L, R, Vol(M), while O(·) in the others hides terms dependent on q, τ_M, L, R, Vol(M), then the empirical risk minimizer ˆT_n given by (10)
satisfies
$$\mathbb{E}\|\hat{T}_n - f\|^2_{L^2(P)} \le \tilde{O}\big(D^2 d^3 n^{-\frac{2\alpha}{2\alpha+d}}\big), \tag{12}$$
where ˜O(·) hides logarithmic terms dependent on D, d, q, n, α, L, R, τ_M, Vol(M), and constant terms dependent on d and Vol(M).

The proof of Theorem 2 is provided in Section 4. Theorem 2 shows that the squared generalization error of ˆT is upper bounded in the order of n^{−2α/(2α+d)}. In the exact manifold case when q = 0, Theorem 2 gives rise to the minimax regression error [Györfi et al., 2006]. In the noisy case when q ∈ (0,1), Theorem 2 demonstrates a denoising phenomenon given by transformers such that, when the sample size n increases, the generalization error converges to 0 at a fast rate depending on the intrinsic dimension d.

4 Proof of Main Results

4.1 Basic Arithmetic Operations via Transformer

To prove our main results, let us first construct transformers to implement basic arithmetic operations such as addition, constant multiplication, product, division, etc. All the basic arithmetic operations are proved in detail in Appendix B.2. The proofs utilize the Interaction Lemma 8 [Havrilla and Liao, 2024], which states that we can construct an attention head such that one token interacts with exactly one other token in the embedding matrix. This allows efficient parallel implementation of these fundamental arithmetic operations. For convenience, we summarize all the operations implemented via transformers in Table 1. These basic operations can also serve as building blocks for other tasks of independent interest.

Table 1: The bound on each parameter in the transformer network class to implement certain operations for input x = (x_1, ..., x_D) ∈ R^D and y = (y_1, ..., y_D) ∈ R^D. The notation ⊙ stands for componentwise product and ∘r stands for componentwise r-th power. Note that the map x_1 ↦ 1/x_1 requires x_1 bounded above and bounded away from zero if x_1 > 0, and x_1 bounded below and bounded away from zero if x_1 < 0. The tolerance for the last operation is measured in the ∥·∥_1 norm while the others are measured in the ∥·∥_∞ norm.
Operations                  | L_T               | m_T         | L_FFN | w_FFN | Tolerance | Reference
x ↦ Σ_{i=1}^D x_i           | O(1)              | O(D)        | O(1)  | O(1)  | 0         | Lemma 1
x ↦ x + c                   | O(1)              | O(D)        | O(1)  | O(1)  | 0         | Lemma 2
x ↦ cx                      | O(1)              | O(D)        | O(1)  | O(1)  | 0         | Lemma 3
x ↦ x ⊙ x                   | O(1)              | O(D)        | O(1)  | O(1)  | 0         | Lemma 4
(x, y) ↦ x ⊙ y              | O(1)              | O(D)        | O(1)  | O(1)  | 0         | Lemma 5
x ↦ x^{◦r}                  | O(ln r)           | O(rD)       | O(1)  | O(1)  | 0         | Lemma 6
x_1 ↦ 1/x_1                 | O(ln ln(1/ϵ))     | O(ln(1/ϵ))  | O(1)  | O(1)  | ϵ         | Lemma 7
x ↦ η̃_i(x)                  | O(d)              | O(D)        | O(1)  | O(1)  | 0         | Proposition 1
x ↦ (η_1(x), ···, η_K(x))   | O(d + ln ln(1/ϵ)) | O(Dϵ^{-d})  | O(1)  | O(1)  | ϵ         | Proposition 2

Lemma 1 (Sum of Tokens) Let d_embed = 5, M > 0, and let x = (x_1, ···, x_D) be a vector in R^D such that ∥x∥_∞ ≤ M. Let H be an embedding matrix of the form

H = \begin{pmatrix} x_1 & \cdots & x_D & 0 & \cdots & 0 \\ 0 & \cdots & \cdots & \cdots & \cdots & 0 \\ I_1 & \cdots & \cdots & \cdots & \cdots & I_\ell \\ 1 & \cdots & \cdots & \cdots & \cdots & 1 \end{pmatrix} \in \mathbb{R}^{d_{embed} \times \ell},   (13)

where ℓ ≥ D + 1. Then there exists a transformer block B ∈ B(D, 3, d_embed) such that

B(H) = \begin{pmatrix} x_1 & \cdots & x_D & x_1 + \cdots + x_D & 0 & \cdots & 0 \\ 0 & \cdots & \cdots & \cdots & \cdots & \cdots & 0 \\ I_1 & \cdots & \cdots & \cdots & \cdots & \cdots & I_\ell \\ 1 & \cdots & \cdots & \cdots & \cdots & \cdots & 1 \end{pmatrix}   (14)

with ∥θ_B∥_∞ ≤ O(ℓ² M² ∥H∥²_{∞,∞}). We say B implements the sum of tokens in x.

Lemma 2 (Constant Addition) Let d_embed = 5, M > 0, and let c = (c_1, ···, c_D) and x = (x_1, ···, x_D) be vectors in R^D such that ∥x∥_∞ + ∥c∥_∞ ≤ M. Let H be an embedding matrix of the form
H= x1··· xD0 0··· ··· 0 I1··· ··· I ℓ 1··· ··· 1 ∈Rdembed×ℓ, where ℓ≥2D. Then there exists a transformer block B∈ B(D,3, dembed )such that B(H) = x1··· xDx1+c1··· xD+cD0 0··· ··· ··· ··· ··· 0 I1··· ··· ··· ··· ··· I ℓ 1··· ··· ··· ··· ··· 1 (15) with∥θB∥∞≤O(ℓ2M2∥H∥2 ∞,∞). We say Bimplements the addition of ctox. 9 Lemma 3 (Constant Multiplication) LetM > 0, and c= (c1,···, cD)andx= (x1,···, xD)be vectors in RDsuch that ∥c⊙x∥∞≤M. Let Hbe an embedding matrix of the form H= x1··· xD0 0··· ··· 0 I1··· ··· I ℓ 1··· ··· 1 ∈Rdembed×ℓ, where ℓ≥2D. Then there exists a transformer block B∈ B(D,3, dembed )such that B(H) = x1··· xDc1x1··· cDxD0 0··· ··· ··· ··· ··· 0 I1··· ··· ··· ··· ··· I ℓ 1··· ··· ··· ··· ··· 1 . (16) with∥θB∥∞≤O(ℓ2M2∥H∥2 ∞,∞). We say Bimplements the multiplication of ctoxcomponentwisely. Lemma 4 (Squaring) LetM > 0, and x= (x1,···, xD)be vector in RDsuch that ∥x∥∞≤M. Let Hbe an embedding matrix of the form H= x1··· xD0 0··· ··· 0 I1··· ··· I ℓ 1··· ··· 1 ∈Rdembed×ℓ, where ℓ≥2D. Then there exist three transformer blocks B1, B2, B3∈ B(D,3, dembed )such that B3◦B2◦B1(H) = x1··· xD(x1)2··· (xD)20 0··· ··· ··· ··· ··· 0 I1··· ··· ··· ··· ··· I ℓ 1··· ··· ··· ··· ··· 1 (17) with∥θB∥∞≤O(ℓ2M2∥H∥2 ∞,∞). We say B1, B2, B3implements the square of x. Lemma 5 (Componentwise Product) LetM > 0,x= (x1,···, xD)andy= (y1,···, yD)be vectors in RDbe such that ∥x⊙y∥∞+∥x∥∞+∥y∥∞≤M. Let Hbe an embedding matrix of the form H= x1··· xDy1··· yD0 0··· ··· ··· ··· ··· 0 I1··· ··· ··· ··· ··· I ℓ 1··· ··· ··· ··· ··· 1 ∈Rdembed×ℓ, (18) where ℓ≥3D. Then there exist three transformer blocks B1, B2, B3∈ B(D,3, dembed )such that B3◦B2◦B1(H) = x1··· xDy1··· yDx1y1··· xDyD0 0··· ··· ··· ··· ··· ··· ··· ··· 0 I1··· ··· ··· ··· ··· ··· ··· ··· I ℓ 1··· ··· ··· ··· ··· ··· ··· ··· 1 (19) with∥θB∥∞≤O(ℓ2M2∥H∥2 ∞,∞). We say B1, B2, B3implements the componentwise product between xandy. 
10 Lemma 6 (Componentwise r-th Power) LetM > 0, and rbe some integer such that 2s−1< r≤ 2sfor some integer s≥1. Let x= (x1,···, xD)∈RDsuch that max i,j=1,···,r{∥x∥i ∞+∥x∥j ∞}< M , andHbe an embedding matrix of the form H= x1··· xD0 0··· ··· 0 I1··· ··· I ℓ 1··· ··· 1 ∈Rdembed×ℓ, where ℓ≥2sD. Then there exists a sequence of transformer blocks Bi∈ B(2⌊(i−1)/3⌋D,3, dembed ), i= 1,···,3s, such that B3s◦B3s−1◦ ··· ◦ B1(H) = x1··· xD··· (x1)r··· (xD)r0 0··· ··· ··· ··· ··· ··· 0 I1··· ··· ··· ··· ··· ··· I ℓ 1··· ··· ··· ··· ··· ··· 1 ∈Rdembed×ℓ(20) with∥θB∥∞≤O(ℓ2M2∥H∥2 ∞,∞). We say B1,···, B3simplements the componentwise r-th power of x. Lemma 7 (Power Series and Division) LetM > 0, and rbe some integer such that 2s−1< r≤2s for some integer s≥1. Let x= (x1)∈Rsuch that max i,j=1,···,r{|x|i+|x|j}< M , and Hbe an embedding matrix of the form H= x10··· 0 0··· ··· 0 I1··· ··· I ℓ
1··· ··· 1 ∈Rdembed×ℓ, where ℓ≥2s. Then there exists a sequence of transformer blocks Bi∈ B(2⌊(i−1)/3⌋,3, dembed ),i= 1,···,3s,B3s+1∈ B(r,3, dembed )such that B3s+1◦ ··· ◦ B1(H) = (x1)1··· (x1)rPr i=1(x1)i0 0··· ··· ··· 0 I1··· ··· ··· I ℓ 1··· ··· ··· 1  with∥θB∥∞≤O(ℓ2M2∥H∥2 ∞,∞). We say B1,···, B3s+1implements power series of scalar xup to r term. Moreover, if x1∈[c1, c2]with 0< c 1< c 2. Let cbe a constant such that 1−cx1∈(−1,1). Then there exists a sequence of transformer blocks B1, B2, B3s+4, B3s+5∈ B(1,3, dembed ),B3s+3∈ B(r,3, dembed ), and Bi∈ B(2⌊(i−3)/3⌋,3, dembed ), for i= 3,···,3s+ 2, such that B3s+5◦ ··· ◦ B1(H) = x1−cx11−cx1···Pr i=0(1−cx1)icPr i=0(1−cx1)i0 0··· ··· ··· ··· ··· 0 I1··· ··· ··· ··· ··· I ℓ 1··· ··· ··· ··· ··· 1  with∥θB∥∞≤O(ℓ2M2∥H∥2 ∞,∞). We say B1,···, B3s+5approximate the division over xwith tolerance (1−cx1)r+1/x1, i.e., 1 x−crX i=0(1−cx1)i ≤(1−cx1)r+1 x1. With these basic arithmetic operations, we can prove our main results. 11 4.2 Proof of Theorem 1 We prove Theorem 1 in two steps. The first step is to approximate fby a piecewise-constant or- acle approximator denoted by ˆf. The second step is to implement the oracle approximator ˆfby a transformer neural network. Proof. [Proof of Theorem 1] •Oracle Approximator In this proof, we consider the piecewise constant oracle approximator constructed by Cloninger and Klock [2021]. Let Z={z1,···, zK}be a maximal separated δnet of Mwith respect to dM. According to Cloninger and Klock [2021, Lemma 6.1], K≤3dVol(M)dd 2δ−d. We define the geodesic ball as Ui:={z∈ M :dM(z, zi)≤δ}. Then the collection {Ui}K i=1covers Mand the preimages {π−1 M(Ui)}K i=1 covers the approximation domain M(q). For any partition of unity {ηi(x)}K i=1subordinate to the cover {π−1 M(Ui)}K i=1, we can decompose f asf(x) =PK i=1f(x)ηi(x).Following the idea in Cloninger and Klock [2021], we approximate fby the piecewise-constant function ˆf(x) =PK i=1g(zi)ηi(x). 
(21)

where each η_i is constructed as follows. Let P(v) ∈ R^{D×d} be the matrix whose columns form an orthonormal basis of the tangent space of M at v. Let p := (1 + q)/2 and h := \frac{6}{1-q} p^{-1}. Define

\tilde{\eta}_i(x) := \sigma\left(1 - \frac{\|x - z_i\|^2}{p\,\tau_M(z_i)^2} - \frac{\|P(z_i)^\top (x - z_i)\|^2}{h\,\delta^2}\right), \qquad \eta_i(x) := \tilde{\eta}_i(x)/\|\tilde{\eta}(x)\|_1   (22)

for i = 1, ···, K, and define the vectors

\tilde{\eta}(x) = (\tilde{\eta}_1(x), \cdots, \tilde{\eta}_K(x)), \qquad \eta(x) = (\eta_1(x), \cdots, \eta_K(x)).   (23)

The ellipsoidal regions on which η̃_i > 0 are illustrated in Figure 3. In this construction, {η_i}_{i=1}^K forms a partition of unity subordinate to the cover {π_M^{-1}(U_i)}_{i=1}^K of M. It is proved in Cloninger and Klock [2021, Proposition 6.3] that {η_i}_{i=1}^K satisfies the localization property

\sup_{x \in M(q),\, \eta_i(x) \neq 0} d_{M(q)}(x, z_i) \le O(\delta),   (24)

where O(·) hides a constant depending on q. Furthermore, ∥η̃(x)∥_1 is uniformly bounded above and bounded away from zero. This property is useful when estimating the depth of the transformer network (see Remark 4). We then have

|f(x) - \hat{f}(x)| = \Big|\sum_{i=1}^K g(\pi_M(x))\,\eta_i(x) - \sum_{i=1}^K g(z_i)\,\eta_i(x)\Big| \le \sum_{i=1}^K |g(\pi_M(x)) - g(z_i)|\,\eta_i(x) \le L \sum_{i=1}^K d_M^\alpha(\pi_M(x), z_i)\,\eta_i(x) \le O(\delta^\alpha),

where O(·) hides constants depending on q and L.

Figure 3: The covering of the tubular region M(q), where each ellipsoid represents the region {x : η̃_i(x) > 0}.

• Implementing the Oracle Approximator by Transformers

Since each η̃_i(x) in (22) is a composition of basic arithmetic operations, we can represent it without error by a transformer network.
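As a concrete illustration, the weights in (22)–(23) can be evaluated directly. Below is a minimal numerical sketch, assuming σ is the ReLU and reading h as 6 p^{-1}/(1−q); the net points z_i, tangent basis P, reach τ_M, radius δ, and test point are toy choices for illustration, not the paper's construction.

```python
import numpy as np

def relu(v):
    return np.maximum(v, 0.0)

# Toy setup: the manifold is a segment of the x-axis in R^3 (so D = 3, d = 1),
# with net points z_i along it and a common tangent basis P.
D, d = 3, 1
q = 0.5
p = 0.5 * (1.0 + q)                   # p = (1 + q)/2
h = 6.0 / ((1.0 - q) * p)             # h = 6 p^{-1} / (1 - q), as read from (22)
tau = 1.0                             # local reach tau_M(z_i), constant here
delta = 0.4

Z = np.array([[0.0, 0.0, 0.0], [0.3, 0.0, 0.0], [0.6, 0.0, 0.0]])
P = np.array([[1.0], [0.0], [0.0]])   # columnwise orthonormal tangent basis

def eta_tilde(x):
    out = []
    for z in Z:
        r2 = np.sum((x - z) ** 2)             # ||x - z_i||^2
        t2 = np.sum((P.T @ (x - z)) ** 2)     # ||P^T (x - z_i)||^2
        out.append(relu(1.0 - r2 / (p * tau**2) - t2 / (h * delta**2)))
    return np.array(out)

def eta(x):
    w = eta_tilde(x)
    return w / np.sum(w)              # normalization makes a partition of unity

x = np.array([0.25, 0.05, 0.0])       # a point in the tubular neighborhood
w = eta(x)
assert np.isclose(w.sum(), 1.0)       # weights sum to one
assert np.all(w >= 0)                 # and are nonnegative
```

Note that only the z_i near x receive nonzero weight, which is the localization property (24) in miniature.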
The first result in this subsection establishes the exact representation of each η̃_i(x).

Proposition 1 Suppose Assumption 1 holds. Let {η̃_i(x)}_{i=1}^K be defined as in (22). Then for each fixed i, there exists a transformer network T(θ; ·) ∈ T(L_T, m_T, d_embed, ℓ, L_FFN, w_FFN, R, κ) with parameters

L_T = O(d),  m_T = O(D),  d_embed = O(1),  ℓ ≥ O(D),  L_FFN = O(1),  w_FFN = O(1),  κ = O(D² δ^{-8})

such that

T(θ; x) = η̃_i(x)   (25)

for any x ∈ [0, 1]^D. The notation O(·) in m_T, ℓ, κ hides terms depending on d, q, τ_M, while O(·) elsewhere hides terms depending on q, τ_M.

The proof of Proposition 1 is deferred to Appendix C. The main idea of the proof is that, from (22), η̃_i(x) is built from a sequence of basic arithmetic operations such as constant addition, constant multiplication, squaring, etc. Each of these operations is implemented in Table 1. By chaining these operations sequentially, we obtain the corresponding T(θ; ·) representing η̃_i(·). Once each η̃_i is represented by T(θ; ·), we can apply Lemma 7 to construct another transformer network that implements η_i(x) = η̃_i(x)/∥η̃(x)∥_1, i = 1, ···, K, and η(x) = (η_1(x), ···, η_K(x)), within some tolerance. We then take a linear combination of the η_i(x) to approximate f̂. Note that we need δ ∈ (0, τ_M/2) in order to have cardinality K = O(δ^{-d}) (see Lemma 6.1 in [Cloninger and Klock, 2021]), where O(·) hides terms depending on d and the volume Vol(M) of the manifold.

The approximation result for η_i(x) is presented in Proposition 2; its proof is deferred to Appendix C.

Proposition 2 Suppose Assumption 1 holds. Let Z = {z_1, ···, z_K} be a maximal separated δ-net of M with respect to d_M such that δ ∈ (0, τ_M/2), and define η according to (23). Then for any ϵ ∈ (0, 1), there exists T(θ; ·) = (T_1(θ; ·), ···, T_K(θ; ·)) with each T_i(θ; ·) ∈ T(L_T, m_T, d_embed, ℓ, L_FFN, w_FFN, R, κ) such that for any x ∈ M(q),

|T_i(θ; x) − η_i(x)| ≤ ϵ η_i(x).   (26)

Consequently, T(θ; ·) satisfies

sup_{x ∈ M(q)} ∥T(θ; x) − η(x)∥_1 ≤ ϵ.
(27) The network T(θ;·)has parameters LT=O(d+ ln(ln( ϵ−1))), mT=O(Dδ−d), dembed =O(1), ℓ≥O(Dδ−d), LFFN=O(1), wFFN=O(1), κ=O(D2δ−2d−8) where O(·)inmT,ℓ,κhides the terms which are dependent on d, q, τ M,Vol(M), while O(·)in others hides the terms which are dependent on q, τM,Vol(M). With Proposition 2, we can approximates the ˆfin (21) easily by scaling down the tolerance with the supremum norm of g. Let T 1(θ;·) := (T1 1(θ;·),···,TK 1(θ;·)) where each Ti 1approximates ηisuch that supx∈M(q)∥T1(θ;x)−η(x)∥1≤ϵ/∥g∥L∞(M). Then by Lemma 3 with constant c= (g(z1),···g(zK)) and Lemma 1, we can construct B 1,B2∈ B(K,3, dembed ) such that T 2:= B 2◦B1implements the approximation ofPK i=1g(zi)Ti 1(θ;x), where T2hasLT2=O(1) and mT2=K=O(δ−d). Let T := T 2◦T1, then for any x∈ M (q), we have |T(θ;x)−ˆf(x)|=|KX i=1g(zi)Ti 1(θ;x)−KX i=1g(zi)ηi(x)| ≤ ∥g∥L∞(M)∥T1(θ;x)−η(x)∥1=ϵ. An illustration of the constructed transformer network architecture for approximating ˆfis provided in Figure 2. •Putting Error Bounds Together For any partition of unity {ηi(x)}K i=1subject to the covering {π−1 M(Ui)}K i=1, we can write fasf(x) =PK i=1f(x)ηi(x). We consider the following piecewise constant approximation of f: f(x) =KX i=1f(x)ηi(x)≈ˆf(x) :=KX i=1g(zi)ηi(x), (28) By the triangle inequality, for any x∈ M
(q), |f(x)−T(θ;x)| ≤ |f(x)−ˆf(x)|+|ˆf(x)−T(θ;x)|. For the first term, we have |f(x)−ˆf(x)|= KX i=1g(πM(x))ηi(x)−KX i=1g(zi)ηi(x) ≤KX i=1|g(πM(x))−g(zi)|ηi(x) ≤LKX i=1dα M(πM(x), zi)ηi(x)≤LKX i=172δ (1−q)2α ηi(x) =L72δ (1−q)2α . 14 The last equality is due to partition of unity, and the inequality before the last equality is from Proposition 6.3 in [Cloninger and Klock, 2021]. For the second term, by Proposition 2 and its discussion, there exists a transformer network T(θ;·)∈ T with parameters LT=O(d+ ln(ln( δ−α))),mT=O(Dδ−d),dembed =O(1),ℓ=O(Dδ−d), LFFN =O(1),wFFN =O(1),κ=O(D2δ−2d−8), such that ∥T(θ;·)−ˆf∥L∞(M(q))≤δα. (29) Thus |T(θ;x)−f(x)| ≤L72δ (1−q)2α +δα= 1 +L72 (1−q)2α δα. By choosing δsuch that 1 +L 72 (1−q)2α δα=ϵ, we get δ=O(ϵ1/α) and |T(θ;x)−f(x)| ≤ϵ. Such a transformer network T(θ;·)∈ T has parameters LT=O(d+ ln(ln( ϵ−1))),mT=O(Dϵ−d α), dembed =O(1),ℓ≥O(Dϵ−d α),LFFN =O(1),wFFN =O(1),κ=O(D2ϵ−2d+8 α). 2 4.3 Proof of Theorem 2 Theorem 2 is proved via a bias-variance decomposition. The bias reflects the approximation error of f by a constructed transformer network, while the variance captures the stochastic error in estimating the parameters of the constructed transformer network. For the bias term, we can bound it by using the approximation error bound in Theorem 1. The variance term can be bounded using the covering number of transformers (see Lemma 10). Proof. [Proof of Theorem 2] By adding and subtracting the twice of the bias term, we can rewrite the squared generalization error as E∥ˆTn−f∥2 L2(P)=EZ M(q)(ˆTn(x)−f(x))2dP =E" 2 nnX i=1(ˆTn(xi)−f(xi))2# +EZ M(q)(ˆTn(x)−f(x))2dP−E" 2 nnX i=1(ˆTn(xi)−f(xi))2# . By Jensen’s inequality, the bias term satisfies E" 1 nnX i=1(ˆTn(xi)−f(xi))2# =Einf T∈T" 1 nnX i=1(T(xi)−f(xi))2# ≤inf T∈TE" 1 nnX i=1(T(xi)−f(xi))2# = inf T∈TZ M(q)(T(x)−f(x))2dP≤inf T∈TZ M(q)∥T−f∥2 L∞(M(q))dP = inf T∈T∥T−f∥2 L∞(M(q))≤O(ϵ2). 
15 By Lemma 6 in [Chen et al., 2022], the variance term has the bound EZ M(q)(ˆTn(x)−f(x))2dP−E" 2 nnX i=1(ˆTn(xi)−f(xi))2# ≤inf δ>0104R2 3nlnNδ 4R,T,∥ · ∥∞ + 4 +1 2R δ ≤104R2 3nlnN1 4nR,T,∥ · ∥∞ + 4 +1 2R1 n where Nδ 4R,T,∥ · ∥∞ is the covering number (defined in Definition 7) of transformer network class Twith L∞norm. By Lemma 10, we get lnN1 4nR,T,∥ · ∥∞ ≤ln 2LT+3nRL FFNd18L2 T embedw18L2 TLFFN FFNκ6L2 TLFFNmL2 T TℓL2 T4d2 embedw2 FFND(mT+LFFN)LT ≤(4d2 embed w2 FFND(mT+LFFN)LT)(18L2 TLFFNln(2nRL FFNdembed wFFNκmTℓ)) ≤72 ln(2 nRL FFNdembed wFFNκmTℓ)d2 embed w2 FFNDm TL3 TL2 FFN. For target accuracy ϵ, we know from Theorem 1 that LT=O(d+ ln(ln( ϵ−1))),mT=O(Dϵ−d α), dembed =O(1),ℓ=O(Dϵ−d α),LFFN=O(1),wFFN=O(1),κ=O(D2ϵ−2d+8 α), where O(·) hides the constant terms in d, q, τ M,Vol(M). This simplifies the above to lnN1 4nR,T,∥ · ∥∞ ≤˜O(D2d3ϵ−d α) where ˜O(·) hides the logarithmic terms in D, d, q, n, ϵ, α, L, R, τ M,Vol(M) and constant terms in dand Vol(M). Thus, the variance term is bounded by EZ M(q)(ˆTn(x)−f(x))2dP−E" 2 nnX i=1(ˆTn(xi)−f(xi))2# ≤˜O D2d3ϵ−d α n! . Putting the bias and variance together, we get E∥ˆTn−f∥2 L2(P)≤˜O ϵ2+D2d3ϵ−d α n! . (30) By balancing the bias and variance, i.e., setting ϵ2=ϵ−d α n, we get ϵ=n−α 2α+d. This yields E∥ˆTn−f∥2 L2(P)≤˜O D2d3n−2α 2α+d (31) as desired. 2 Remark 1 It is worth pointing out that the factor of two included in the proof is intended to enhance the rate of convergence of
the statistical error.

Figure 4: Left subplot: Estimated intrinsic dimension (ID) of pixel and embedded image representations with various amounts of isotropic Gaussian noise. Noise added to pixels quickly distorts low-dimensional structures. Embedding with the pre-trained model demonstrates a denoising effect, recovering the original ID at all noise levels. Right subplot: Estimated intrinsic dimension of water buffalo images and embeddings across various noise levels.

5 Experiments

Our theoretical results show that transformers can recover low-dimensional structures even when the training data do not lie exactly on a low-dimensional manifold. To validate these findings, we conduct a series of experiments measuring the intrinsic dimension of common computer vision datasets under various levels of isotropic Gaussian noise. We then embed the noisy image data using a pre-trained vision transformer (ViT) [Dosovitskiy et al., 2021] and measure the intrinsic dimension of the resulting embeddings.

Setup. We use the validation split of ImageNet-1k [Deng et al., 2009]. We first pre-process images by rescaling to D = 224 × 224 dimensions and normalizing pixel values to lie in the [−1, 1]^D cube. We use the pre-trained google/vit-base-patch16-224 model to produce image embeddings of size 196 × 768. To measure intrinsic dimension, we use the MLE estimator [Levina and Bickel, 2004] with K = 30 neighbors and batch size 4096, averaged over 50,000 images. We flatten all images beforehand.

Results. Figure 4 shows that, with no noise, the intrinsic dimension of this dataset in both pixel and embedding space is measured to be 25. As isotropic Gaussian noise with increasing variance is added, the intrinsic dimension of the pixel data quickly increases. However, the intrinsic dimension of the embedded noisy pixel data remains constant, demonstrating the strong denoising effect of the vision transformer.
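The MLE intrinsic-dimension estimator used above admits a compact implementation: for each point, with T_j denoting the distance to its j-th nearest neighbor, the local estimate with k neighbors is m̂_k(x) = [(1/(k−1)) Σ_{j=1}^{k−1} ln(T_k(x)/T_j(x))]^{-1} (one common normalization), averaged over all points. Below is a minimal sketch with illustrative parameters, not the exact experimental configuration of this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mle_intrinsic_dim(X, k=20):
    # pairwise squared Euclidean distances via the Gram-matrix identity
    sq = np.sum(X ** 2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * (X @ X.T), 0.0)
    dist = np.sqrt(d2)
    # sort each row; column 0 is the point itself (distance 0), so take 1..k
    nn = np.sort(dist, axis=1)[:, 1 : k + 1]      # T_1, ..., T_k per point
    logs = np.log(nn[:, -1:] / nn[:, :-1])        # log(T_k / T_j), j < k
    m_local = (k - 1) / np.sum(logs, axis=1)      # local MLE at each point
    return float(np.mean(m_local))

# Illustrative data: a 5-dimensional linear subspace embedded in R^50.
n, d_int, D_amb = 1000, 5, 50
X = rng.standard_normal((n, d_int)) @ rng.standard_normal((d_int, D_amb))

est = mle_intrinsic_dim(X, k=20)
assert 3.0 < est < 8.0   # the estimate recovers roughly d_int = 5
```

Adding high-dimensional isotropic noise to X inflates this estimate, which is the pixel-space effect seen in Figure 4.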
Figure 4 also measures the intrinsic dimension of the water buffalo subset of ImageNet (class 346) across various noise levels. The estimated image dimension is around 15 while the estimated embedding dimension is around 18. However, adding isotropic Gaussian noise quickly increases the intrinsic dimension of the images while having a negligible effect on the intrinsic dimension of the embeddings.

6 Conclusion and Discussion

This paper establishes approximation and generalization bounds of transformers for functions which depend on the projection of the input onto a low-dimensional manifold. This regression model is interesting in machine learning applications where the input data contain noise or the function has low complexity depending on a low-dimensional task manifold. Our theory justifies the capability of transformers in handling noisy data and adapting to low-dimensional structures in prediction tasks. This work considers Hölder functions with Hölder index α ∈ (0, 1]. How to estimate this Hölder index is a practically interesting problem; how to extend the theory to more regular functions with α > 1 is a theoretically interesting one. More broadly, our work improves the fundamental understanding of transformers and our ability to theoretically and safely predict future capabilities.

Acknowledgments

Rongjie Lai's research is supported in part by NSF DMS-2401297. Zhaiming Shen and Wenjing Liao are partially supported by NSF DMS-2145167 and DOE SC0024348. Alex Cloninger is supported in part by NSF CISE-2403452.

References

Josh Achiam, Steven Adler, Sandhini
Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774 , 2023. Armen Aghajanyan, Luke Zettlemoyer, and Sonal Gupta. Intrinsic dimensionality explains the effec- tiveness of language model fine-tuning. ArXiv , abs/2012.13255, 2020. Francis Bach. Breaking the curse of dimensionality with convex neural networks. Journal of Machine Learning Research , 18:1–53, 2017. Yu Bai, Fan Chen, Huan Wang, Caiming Xiong, and Song Mei. Transformers as statisticians: Provable in-context learning with in-context algorithm selection, 2023. Tim Brooks, Bill Peebles, Connor Holmes, Will DePue, Yufei Guo, Li Jing, David Schnurr, Joe Taylor, Troy Luhman, Eric Luhman, Clarence Ng, Ricky Wang, and Aditya Ramesh. Video generation models as world simulators. 2024. Minshuo Chen, Haoming Jiang, Wenjing Liao, and Tuo Zhao. Efficient approximation of deep relu networks for functions on low dimensional manifolds. Advances in neural information processing systems , 32, 2019. Minshuo Chen, Haoming Jiang, Wenjing Liao, and Tuo Zhao. Nonparametric regression on low- dimensional manifolds using deep relu networks: Function approximation and statistical recovery. Information and Inference: A Journal of the IMA , 11(4):1203–1253, 2022. Charles K. Chui and Hrushikesh N. Mhaskar. Deep nets for local manifold learning. Frontiers in Applied Mathematics and Statistics , 4:12, 2018. Alexander Cloninger and Timo Klock. A deep network construction that adapts to intrinsic dimen- sionality beyond the domain. Neural Networks , 141:404–419, 2021. R Dennis Cook and Bing Li. Dimension reduction for conditional mean in regression. The Annals of Statistics , 30(2):455–474, 2002. 18 George Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems , 2(4):303–314, 1989. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 
Imagenet: A large-scale hier- archical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition , pages 248–255, 2009. Jacob Devlin. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 , 2018. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale, 2021. Benjamin L Edelman, Surbhi Goel, Sham Kakade, and Cyril Zhang. Inductive biases and variable creation in self-attention mechanisms. In International Conference on Machine Learning , pages 5793–5831. PMLR, 2022. Iryna Gurevych, Michael Kohler, and G¨ ozde G¨ ul S ¸ahin. On the rate of convergence of a classifier based on a transformer encoder. IEEE Transactions on Information Theory , 68(12):8139–8155, 2022. L´ aszl´ o Gy¨ orfi, Michael Kohler, Adam Krzyzak, and Harro Walk. A distribution-free theory of non- parametric regression . Springer Science & Business Media, 2006. Alex Havrilla and Wenjing Liao. Predicting scaling laws with statistical and approximation theory for transformer neural networks on intrinsically low-dimensional data. In Advances in Neural Informa- tion Processing Systems , 2024. Heinz Hopf and W. Rinow. ¨Uber den begriff der vollst¨ andigen differentialgeometrischen fl¨ ache. Com- mentarii Mathematici Helvetici , 3:209–225, 1931. Reprinted in Selecta Heinz Hopf , Herausgegeben zu seinem 70. Geburtstag von der Eidgen¨
ossischen Technischen Hochschule Z¨ urich, 1964, pp. 64–79. Kurt Hornik, Maxwell Stinchcombe, and Halbert White. Multilayer feedforward networks are universal approximators. Neural Networks , 2(5):359–366, 1989. Samy Jelassi, Michael Sander, and Yuanzhi Li. Vision transformers provably learn spatial structure. Advances in Neural Information Processing Systems , 35:37822–37836, 2022. Michael Kohler and Jens Mehnert. Analysis of the rate of convergence of least squares neural network regression estimates in case of measurement errors. Neural Networks , 24(3):273–279, 2011. Ming-Jun Lai and Zhaiming Shen. The kolmogorov superposition theorem can break the curse of dimensionality when approximating high dimensional functions. arXiv preprint arXiv:2112.09963 , 2021. Ming-Jun Lai and Zhaiming Shen. The optimal rate for linear kb-splines and lkb-splines approximation of high dimensional continuous functions and its application. arXiv preprint , arXiv:2401.03956, 2024. Zehua Lai, Lek-Heng Lim, and Yucong Liu. Attention is a smoothed cubic spline. arXiv preprint arXiv:2408.09624 , 2024. 19 Moshe Leshno, Vladimir Ya Lin, Allan Pinkus, and Shimon Schocken. Multilayer feedforward networks with a nonpolynomial activation function can approximate any function. Neural Networks , 6(6): 861–867, 1993. Elizaveta Levina and Peter Bickel. Maximum likelihood estimation of intrinsic dimension. In L. Saul, Y. Weiss, and L. Bottou, editors, Advances in Neural Information Processing Systems , volume 17. MIT Press, 2004. Hao Liu, Minshuo Chen, Tuo Zhao, and Wenjing Liao. Besov function approximation and binary classification on low-dimensional manifolds using convolutional residual networks. In International Conference on Machine Learning , pages 6770–6780. PMLR, 2021. Jianfeng Lu, Zuowei Shen, Haizhao Yang, and Shijun Zhang. Deep network approximation for smooth functions. SIAM Journal on Mathematical Analysis , 53(5):5465–5506, 2021. V. E. Maiorov. 
On best approximation by ridge functions. Journal of Approximation Theory , 99: 68–94, 1999. Hrushikesh N. Mhaskar. Approximation properties of a multilayered feedforward artificial neural network. Advances in Computational Mathematics , 1(1):61–80, 1993. Zeping Min, Qian Ge, and Zhong Li. An intrinsic dimension perspective of transformers for sequential modeling, 2023. Ryumei Nakada and Masaaki Imaizumi. Adaptive approximation and generalization of deep neural network with intrinsic dimensionality. Journal of Machine Learning Research , 21(174):1–38, 2020. Kenta Oono and Taiji Suzuki. Approximation and non-parametric estimation of resnet-type convolu- tional neural networks. In International conference on machine learning , pages 4922–4931. PMLR, 2019. P. P. Petrushev. Approximation by ridge functions and neural networks. SIAM Journal on Mathe- matical Analysis , 30:155–189, 1998. Allan Pinkus. Approximation theory of the mlp model in neural networks. Acta Numerica , 8:143–195, 1999. Phillip E. Pope, Chen Zhu, Ahmed Abdelkader, Micah Goldblum, and Tom Goldstein. The intrinsic dimension of images and its impact on learning. ArXiv , abs/2104.08894, 2021. Anton Razzhigaev, Matvey Mikhalchuk, Elizaveta Goncharova, Ivan Oseledets, Denis Dimitrov, and Andrey Kuznetsov. The shape of learning: Anisotropy and intrinsic dimensions in transformer-based models. ArXiv , abs/2311.05928, 2023. Johannes Schmidt-Hieber. Deep relu network approximation of functions on a manifold. arXiv preprint arXiv:1908.00695 , 2019. Johannes Schmidt-Hieber. Nonparametric regression using deep neural networks with relu activation function. The Annals of Statistics , 48(4), 2020. Uri Shaham, Alexander Cloninger, and Ronald R Coifman. Provable approximation properties for deep neural networks. Applied and Computational Harmonic Analysis , 44(3):537–557,
2018. 20 Utkarsh Sharma and Jared Kaplan. Scaling laws from the data manifold dimension. J. Mach. Learn. Res., 23(1), jan 2022. ISSN 1532-4435. Shokichi Takakura and Taiji Suzuki. Approximation and estimation ability of transformers for sequence-to-sequence functions with infinite dimensional input. In International Conference on Machine Learning , pages 33416–33447. PMLR, 2023. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems , 2017. Colin Wei, Yining Chen, and Tengyu Ma. Statistically meaningful approximation: a case study on approximating turing machines with transformers. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems , volume 35, pages 12071–12083. Curran Associates, Inc., 2022. Dmitry Yarotsky. Error bounds for approximations with deep relu networks. Neural Networks , 94: 103–114, 2017. Chulhee Yun, Srinadh Bhojanapalli, Ankit Singh Rawat, Sashank Reddi, and Sanjiv Kumar. Are transformers universal approximators of sequence-to-sequence functions? In International Confer- ence on Learning Representations , 2019. Ding-Xuan Zhou. Universality of deep convolutional neural networks. Applied and computational harmonic analysis , 48(2):787–794, 2020. 21 A Table of Notations Our notations are summarized in Table 2. Table 2: Table of notations Symbol Interpretation M A manifold Vol(M) Volume of the manifold M Med(M) medial axis of a manifold M τ(v) local reach at of Matv τM local reach of M πM(x) projection of x∈ M (q) onto M P(v) D×dmatrix consists of orthonormal basis of the tangent space of Matv. 
dM(x, x′) geodesic distance between xandx′ dM(q)(v, v′) tubular geodesic distance between vandv′ {z1,···, zK} a maximal separated δ-net of Mwith respect to dM x= (x1,···, xD) input variable H embedding matrix dembed embedding dimension T a transformer network B a transformer block LT number of transformer blocks in T mT maximum number of attention heads in each block of T ℓ number of hidden tokens Ij interaction term (cos(jπ 2ℓ),sin(jπ 2ℓ))⊤ Hi,j the (i, j)-th entry of H HJ,: submatrix of Hwith rows with row index in Jand all the columns H:,J submatrix of Hwith all the rows and columns with column index in J x⊙x componentwise product, i.e., x⊙x= ((x1)2,···,(xD)2) x◦rcomponentwise r-th power, i.e., x◦r= ((x1)r,···,(xD)r) ∥x∥1 ℓ1norm of a vector x ∥x∥∞ maximum norm of a vector x ∥M∥∞,∞ maximum norm of a matrix M B Implementing Basic Arithmetic Operations by Transformers B.1 Interaction Lemma and Gating Lemma We first present two lemmas which will be useful when building the arithmetic operations. The first lemma is called Interaction Lemma. Lemma 8 (Interaction Lemma) LetH= [ht]1≤t≤ℓ∈Rdembed×ℓbe an embedding matrix such that h(dembed−2):(dembed−1) t =Itandhdembed t = 1. Fix 1≤t1, t2≤ℓ,1≤i≤dembed , and ℓ∈N. Suppose dembed≥5and∥H∥∞,∞< M for some M > 0, and the data kernels Qdata(first two rows in the query matrix Q) and Kdata(first two rows in the key matrix K) satisfy max{∥Qdata∥∞,∞,∥Kdata∥∞,∞} ≤µ. 22 Then we can construct an attention head Awith∥θA∥∞=O(d4 embedµ2ℓ2M2)such that A(ht) =( σ(⟨Qdataht, Kdataht2⟩)eiift=t1, 0 otherwise . Proof. We refer its proof to Lemma 3
in [Havrilla and Liao, 2024]. 2 Remark 2 The significance of the Interaction Lemma is that we can find an attention head such that one token interacts with exactly another token in the embedding matrix. This property facilitates the flexible implementation of fundamental arithmetic operations, such as addition, multiplication, squaring, etc., while also supporting efficient parallelization. The next lemma shows the way to zero out the contiguous tokens in the embedding matrix while keep other tokens unchanged via a feed-forward network. Lemma 9 (Gating Lemma) Letdembed≥5andH= [ht]1≤t≤ℓ∈Rdembed×ℓ, be an embedding matrix such that h(dembed−2):(dembed−1) t =Itandhdembed t = 1. Then there exists a two-layer feed-forward network (FFN) such that FFN( ht) =  ht ift∈ {1,···, k} 0 I1 t I2 t 1 otherwise Additionally, we have ∥θFFN∥∞≤O(ℓ∥H∥∞). Proof. We refer its proof to Lemma 4 in [Havrilla and Liao, 2024]. 2 B.2 Proof of Basic Arithmetic Operations B.2.1 Proof of Lemma 1 Proof. [Proof of Lemma 1] Let us define each attention head Ai, 1≤i≤D, with the data kernel in the form Qdata i=0 0 0 0 1 0 0 0 0 1 Kdata i=1 0 0 0 0 0 0 0 0 M . Lethidenote the i-th column of H, 1≤i≤ℓ. By Interaction Lemma 8, we can construct Ai, 1≤i≤D, such that hD+1interacts with hionly, i.e., Ai(hD+1) =σ(⟨Qdata ihD+1, Kdata ihi⟩)e1=σ(xi+M)e1= (xi+M)e1, andAi(ht) = 0 when t̸=D+ 1. Then the residual multi-head attention yields MHA( H) +H= x1··· xDx1+···+xD+DM 0 0··· ··· ··· 0 I1··· ··· ··· I ℓ 1··· ··· ··· 1 . 23 Then we apply the Gating Lemma 9 to have a FFN (3) to subtract off the constant DM in the D+ 1-th column. Thus B(H) = x1··· xDx1+···+xD0 0··· ··· ··· 0 I1··· ··· ··· I ℓ 1··· ··· ··· 1  as desired. The weights ∥θB∥∞≤O(ℓ2M2∥H∥2 ∞,∞) follows from Interaction Lemma 8. 2 Remark 3 By reexamining the proof, it is easy to see that the summation term x1+···+xDcan be put in any column of the first row, not necessarily the D+ 1-th column. 
This provides a lot of flexibility when parallelizing different basic operations in one transformer block. B.2.2 Proof of Lemma 2 Proof. [Proof of Lemma 2] Let us define the each attention head Ai, 1≤i≤D, with the data kernel in the form Qdata i=0 0 0 0 0 0 0 0 0 1 Kdata i=0 0 0 0 0 1 0 0 0 ci+M . By Interaction Lemma 8, we can construct Aisuch that hD+iinteracts with hionly, i.e., Ai(hD+i) =σ(⟨Qdata ihD+i, Kdata ihi⟩)e1=σ(xi+ci+M)e1= (xi+ci+M)e1, andAi(ht) = 0 when t̸=D+i. Then the residual multi-head attention yields MHA( H) +H= x1··· xDx1+c1+M··· xD+cD+M0 0··· ··· ··· ··· ··· 0 I1··· ··· ··· ··· ··· I ℓ 1··· ··· ··· ··· ··· 1 . Then we apply the Gating Lemma 9 to have a FFN (3) to subtract off the constant Monly from columns D+ 1 to 2 D. Therefore, we have B(H) = x1··· xDx1+c1··· xD+cD0 0··· ··· ··· ··· ··· 0 I1··· ··· ··· ··· ··· I ℓ 1··· ··· ··· ···
··· 1  as desired. The weights ∥θB∥∞≤O(ℓ2M2∥H∥2 ∞,∞) follows from Interaction Lemma 8. 2 B.2.3 Proof of Lemma 3 Proof. [Proof of Lemma 3] Let us define the each attention head Ai, 1≤i≤D, with the data kernel in the form Qdata i=0 0 0 0 ci 0 0 0 0 1 Kdata i=1 0 0 0 0 0 0 0 0 M . Then by Interaction Lemma 8, we can construct Aisuch that hD+iinteracts with hionly, i.e., Ai(hD+i) =σ(⟨Qdata ihD+i, Kdata ihi⟩)e1=σ(cixi+M)e1= (cixi+M)e1, andAi(ht) = 0 when t̸=D+i. Then the residual multi-head attention yields 24 MHA( H) +H= x1··· xDc1x1+M··· cDxD+M0 0··· ··· ··· ··· ··· 0 I1··· ··· ··· ··· ··· I ℓ 1··· ··· ··· ··· ··· 1 . Then we apply the Gating Lemma 9 to have a FFN (3) to subtract off the constant Monly from columns D+ 1 to 2 D. Thus B(H) = x1··· xDc1x1··· cDxD0 0··· ··· ··· ··· ··· 0 I1··· ··· ··· ··· ··· I ℓ 1··· ··· ··· ··· ··· 1 . as desired. The weights ∥θB∥∞≤O(ℓ2M2∥H∥2 ∞,∞) follows from Interaction Lemma 8. 2 B.2.4 Proof of Lemma 4 Proof. [Proof of Lemma 4] First, applying Lemma 3 with multiplication constant c= (1,···,1), we can construct the transformer block B1∈ B(D,3, dembed ) so that it copies the first Delements in the first row from columns 1 ,···, Dto columns D+ 1,···,2D, i.e., H1:=B1(H) = x1··· xDx1··· xD0 0··· ··· ··· ··· ··· 0 I1··· ··· ··· ··· ··· I ℓ 1··· ··· ··· ··· ··· 1 . ForB2, let us define each attention head Ai, 1≤i≤D, with the data kernel in the form Qdata i=1 0 0 0 0 0 0 0 0 0 Kdata i=1 0 0 0 0 0 0 0 0 0 . Leth1,idenote the i-th column of H1, 1≤i≤ℓ. By Interaction Lemma 8, we can construct Ai, 1≤i≤D, such that h1,D+iinteracts with h1,ionly, i.e., Ai(h1,D+i) =σ(⟨Qdata ih1,D+i, Kdata ih1,i⟩)e1=σ((xi)2)e1= (xi)2e1, andAi(h1,t) = 0 when t̸=D+i. Then the residual multi-head attention yields MHA( H1) +H1= x1··· xD(x1)2+x1··· (xD)2+xD0 0··· ··· ··· ··· ··· 0 I1··· ··· ··· ··· ··· I ℓ 1··· ··· ··· ··· ··· 1 . LetH2:=B2(H1) = MHA( H1) +H1, and we use h2,ito denote the i-th column of H2, 1≤ i≤ℓ. 
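Before continuing with B3, note that the key identity realized by B2 can be checked numerically. In the sketch below (an illustrative simplification, assuming σ is the ReLU), the 2 × d_embed data kernels are abbreviated to their two active data rows: with Q^data = K^data = e_1 e_1^⊤, the attention score between the copied token and the original token is ⟨Q h, K h′⟩ = x_i · x_i, and σ(x_i²) = x_i² because squares are nonnegative, so the residual stream gains (x_i)² exactly.

```python
import numpy as np

def relu(v):
    return np.maximum(v, 0.0)

Q = np.array([[1.0, 0.0], [0.0, 0.0]])   # active entries of Q_i^data in B2
K = np.array([[1.0, 0.0], [0.0, 0.0]])   # active entries of K_i^data in B2

for xi in [-3.0, -0.5, 0.0, 2.0]:
    h_copy = np.array([xi, 0.0])         # data rows of column D+i (the copy)
    h_orig = np.array([xi, 0.0])         # data rows of column i
    score = relu((Q @ h_copy) @ (K @ h_orig))
    assert np.isclose(score, xi ** 2)    # relu(x_i^2) = x_i^2, no error

# The residual connection then stores x_i^2 + x_i in column D+i, and the
# third block subtracts x_i (Lemma 3 with c = -1) to leave (x_i)^2.
xi = -3.0
stored = relu(xi * xi) + xi              # after B2's residual: x_i^2 + x_i
assert np.isclose(stored - xi, xi ** 2)  # after B3: (x_i)^2
```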
Now again by Lemma 3 with multiplication constant c= (−1,···,−1), we can construct B3∈ B(D,3, dembed ) with each attention head ˜Ai, 1≤i≤D, such that h2,D+iinteracts with h2,ionly. Let the data kernel of each ˜Aiin the form Qdata i=0 0 0 0 −1 0 0 0 0 1 Kdata i=1 0 0 0 0 0 0 0 0 M . 25 By Interaction Lemma 8, we have ˜Ai(h2,D+i) =σ(⟨Qdata ih2,D+i, Kdata ih2,i⟩)e1=σ(−xi+M)e1= (−xi+M)e1, and ˜Ai(h2,t) = 0 when t̸=D+i. Thus, the residual multi-head attention yields MHA( H2) +H2= x1··· xD(x1)2+M··· (xD)2+M0 0··· ··· ··· ··· ··· 0 I1··· ··· ··· ··· ··· I ℓ 1··· ··· ··· ··· ··· 1 . Then we apply the Gating Lemma 9 to have a FFN (3) to subtract off the constant Monly from columns D+
1 to $2D$. Therefore, we have
$$B_3\circ B_2\circ B_1(H)=B_3(H_2)=\begin{pmatrix}x_1&\cdots&x_D&(x_1)^2&\cdots&(x_D)^2&0&\cdots&0\\0&\cdots& & & & & &\cdots&0\\I_1&\cdots& & & & & &\cdots&I_\ell\\1&\cdots& & & & & &\cdots&1\end{pmatrix}$$
as desired. The weights bound $\|\theta_B\|_\infty \le O(\ell^2 M^2 \|H\|_{\infty,\infty}^2)$ follows from Interaction Lemma 8. □

B.2.5 Proof of Lemma 5

Proof. [Proof of Lemma 5] First, applying Lemma 3 with multiplication constant $c=(1,\cdots,1)$, we can construct the transformer block $B_1\in\mathcal{B}(D,3,d_{\mathrm{embed}})$ so that it copies the first $D$ elements of the first row from columns $1,\cdots,D$ to columns $2D+1,\cdots,3D$, i.e.,
$$H_1:=B_1(H)=\begin{pmatrix}x_1&\cdots&x_D&y_1&\cdots&y_D&x_1&\cdots&x_D&0&\cdots&0\\0&\cdots& & & & & & & & &\cdots&0\\I_1&\cdots& & & & & & & & &\cdots&I_\ell\\1&\cdots& & & & & & & & &\cdots&1\end{pmatrix}.$$
For $B_2$, let us define each attention head $A_i$, $1\le i\le D$, with the data kernel in the form
$$Q^{\mathrm{data}}_i=\begin{pmatrix}1&0&0\\0&0&0\\0&0&1\end{pmatrix},\qquad K^{\mathrm{data}}_i=\begin{pmatrix}1&0&0\\0&0&0\\0&0&M\end{pmatrix}.\qquad(32)$$
By Interaction Lemma 8, we can construct $A_i$, $1\le i\le D$, such that $h_{1,2D+i}$ interacts with $h_{1,D+i}$ only, i.e.,
$$A_i(h_{1,2D+i})=\sigma(\langle Q^{\mathrm{data}}_i h_{1,2D+i}, K^{\mathrm{data}}_i h_{1,D+i}\rangle)e_1=\sigma(x_iy_i+M)e_1=(x_iy_i+M)e_1,$$
and $A_i(h_{1,t})=0$ when $t\ne 2D+i$. Then the residual multi-head attention yields
$$\mathrm{MHA}(H_1)+H_1=\begin{pmatrix}x_1&\cdots&x_D&y_1&\cdots&y_D&x_1y_1+x_1+M&\cdots&x_Dy_D+x_D+M&0&\cdots&0\\0&\cdots& & & & & & & & &\cdots&0\\I_1&\cdots& & & & & & & & &\cdots&I_\ell\\1&\cdots& & & & & & & & &\cdots&1\end{pmatrix}.$$
Then we apply the Gating Lemma 9 to obtain an FFN (3) that subtracts off the constant $M$ only from columns $2D+1$ to $3D$. Thus, we have
$$H_2:=B_2\circ B_1(H)=B_2(H_1)=\begin{pmatrix}x_1&\cdots&x_D&y_1&\cdots&y_D&x_1y_1+x_1&\cdots&x_Dy_D+x_D&0&\cdots&0\\0&\cdots& & & & & & & & &\cdots&0\\I_1&\cdots& & & & & & & & &\cdots&I_\ell\\1&\cdots& & & & & & & & &\cdots&1\end{pmatrix}.$$
Now again by Lemma 3 with multiplication constant $c=(-1,\cdots,-1)$, we can construct $B_3\in\mathcal{B}(D,3,d_{\mathrm{embed}})$ with each attention head $\tilde A_i$, $1\le i\le D$, such that $h_{2,2D+i}$ interacts with $h_{2,i}$ only. Let the data kernel of each $\tilde A_i$ be of the form
$$Q^{\mathrm{data}}_i=\begin{pmatrix}0&0&0\\0&-1&0\\0&0&1\end{pmatrix},\qquad K^{\mathrm{data}}_i=\begin{pmatrix}1&0&0\\0&0&0\\0&0&M\end{pmatrix}.$$
By Interaction Lemma 8, we have
$$\tilde A_i(h_{2,2D+i})=\sigma(\langle Q^{\mathrm{data}}_i h_{2,2D+i}, K^{\mathrm{data}}_i h_{2,i}\rangle)e_1=\sigma(-x_i+M)e_1=(-x_i+M)e_1,$$
and $\tilde A_i(h_{2,t})=0$ when $t\ne 2D+i$.
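As an aside, the bilinear score in (32) can be verified on a 3-dimensional toy embedding (value, padding, constant-1 entry): with $Q=\mathrm{diag}(1,0,1)$ and $K=\mathrm{diag}(1,0,M)$, the score $\langle Qh, Kh'\rangle$ equals $xy+M$, and the same shift/ReLU/subtract pattern as in Lemma 3 recovers the product. The function names below are illustrative.

```python
import numpy as np

# Sketch of the data kernel (32): Q = diag(1,0,1), K = diag(1,0,M) acting
# on columns of the form (value, 0, 1) give <Q h, K h'> = x*y + M.
def head_product(x, y, M=100.0):
    Q = np.diag([1.0, 0.0, 1.0])
    K = np.diag([1.0, 0.0, M])
    h_q = np.array([x, 0.0, 1.0])   # column carrying the copied value x
    h_k = np.array([y, 0.0, 1.0])   # column carrying y
    score = float((Q @ h_q) @ (K @ h_k))  # = x*y + M
    return max(score, 0.0) - M            # ReLU exact once M > |x*y|

print(head_product(3.0, -2.0))  # -6.0
```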
Thus, the residual multi-head attention yields MHA( H2) +H2= x1··· xDy1··· yDx1y1+M··· xDyD+M0 0··· ··· ··· ··· ··· ··· ··· ··· 0 I1··· ··· ··· ··· ··· ··· ··· ··· I ℓ 1··· ··· ··· ··· ··· ··· ··· ··· 1  Then we apply the Gating Lemma 9 to have a FFN (3) to subtract off the constant Monly from columns 2 D+ 1 to 3 D. Therefore, we have B3◦B2◦B1(H) =B3(H2) = x1··· xDy1··· yDx1y1··· xDyD0 0··· ··· ··· ··· ··· ··· ··· ··· 0 I1··· ··· ··· ··· ··· ··· ··· ··· I ℓ 1··· ··· ··· ··· ··· ··· ··· ··· 1  as desired. The weights ∥θB∥∞≤O(ℓ2M2∥H∥2 ∞,∞) follows from Interaction Lemma 8. 2 B.2.6 Proof of Lemma 6 Proof. [Proof
of Lemma 6] It suffices to show for the case r= 2s. Let us proceed by induction on s. First, suppose B1, B2, B3∈ B(D,3, dembed ) implements the squaring operation as shown in Lemma 4, i.e., H3:=B3◦B2◦B1(H) = x1··· xD(x1)2··· (xD)20 0··· ··· ··· ··· ··· 0 I1··· ··· ··· ··· ··· I ℓ 1··· ··· ··· ··· ··· 1 . For the next three blocks B4, B5, B6, we can apply Lemma 3 with c= (1 ,···,1) on B4∈ B(2D,3, dembed ) to copy the nonzero elements in the first row from columns 1 ,···,2Dto columns 2D+1,···,4D. Apply Lemma 5 on B5∈ B(2D,3, dembed ) such that h4,2D+iinteracts only with h4,D+i, andh4,3D+iinteracts only with h4,D+i, 1≤i≤D. Then apply Lemma 3 with c= (−1,···,−1) on B6∈ B(2D,3, dembed ) such that h5,2D+iinteracts only with h5,iandh5,3D+iinteracts only with h5,D+i, 1≤i≤D. 27 Then we have H6:=B6◦B5◦ ··· ◦ B1(H) = x1··· xD··· (x1)4··· (xD)40 0··· ··· ··· ··· ··· ··· 0 I1··· ··· ··· ··· ··· ··· I ℓ 1··· ··· ··· ··· ··· ··· 1 . Now suppose in the ( s−1)-th step, we have H3s−3:=B3s−3◦ ··· ◦ B1(H) = x1··· xD··· (x1)2s−1··· (xD)2s−10 0··· ··· ··· ··· ··· ··· 0 I1··· ··· ··· ··· ··· ··· I ℓ 1··· ··· ··· ··· ··· ··· 1 . Then we can apply Lemma 3 with c= (1,···,1) on B3s−2∈ B(2s−1D,3, dembed ) to copy the nonzero elements in the first row from columns 1 ,···,2s−1Dto columns 2s−1D+ 1,···,2sD. Apply Lemma 5 onB3s−1∈ B(2s−1D,3, dembed ) to build 2s−1Dattention heads such that h3s−2,(2s−1+j−1)D+iinteracts only with h3s−2,(2s−1−1)D+i, for 1 ≤j≤2s−1and 1 ≤i≤D. Apply Lemma 3 with c= (−1,···,−1) onB3s∈ B(2s−1D,3, dembed ) to build 2s−1Dattention heads such that h3s−1,2s−1D+iinteracts only with h3s−1,i, for 1 ≤i≤2s−1D. Therefore, we get B3s◦B3s−1◦ ··· ◦ B1(H) =B3s◦B3s−1◦B3s−2(H3s−3) = x1··· xD··· (x1)2s··· (xD)2s0 0··· ··· ··· ··· ··· ··· 0 I1··· ··· ··· ··· ··· ··· I ℓ 1··· ··· ··· ··· ··· ··· 1 , as desired. The weights ∥θB∥∞≤O(ℓ2M2∥H∥2 ∞,∞) follows from Interaction Lemma 8. 
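The induction above is ordinary repeated squaring: each round of three blocks squares the newest powers, so reaching the power $r=2^s$ costs $s$ rounds, i.e. $O(\log r)$ depth rather than $O(r)$. A minimal numerical sketch (function name illustrative):

```python
# Sketch of the Lemma 6 induction: one copy/multiply/clean-up round of
# three transformer blocks corresponds to one squaring step, so after s
# rounds the first row holds x**(2**s).
def power_by_repeated_squaring(x, s):
    value = x                  # round 0: x = x**(2**0)
    for _ in range(s):
        value = value * value  # one three-block round doubles the power
    return value               # x**(2**s)

print(power_by_repeated_squaring(3.0, 3))  # 6561.0 == 3**8
```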
By reexamining the proof, the total number of attention heads needed in this implementation is 3·2D(1 + 2 + ···+ 2s−1) = 6 D(2s−1) = 6 D(r−1). 2 B.2.7 Proof of Lemma 7 Proof. [Proof of Lemma 7] For power series, it suffices to show for the case r= 2s. First, by Lemma 6, we can construct Bi∈ B(2⌊i/2⌋,3, dembed ), 1≤i≤3s, such that H3s:=B3s◦ ··· ◦ B1(H) = (x1)1··· (x1)r0 0··· ··· 0 I1··· ··· I ℓ 1··· ··· 1 . Then by Lemma 1, we can construct B3s+1∈ B(r,3, dembed ) such that B3s+1(H3s) =B3s+1◦ ··· ◦ B1(H) = (x1)1··· (x1)rPr i=1(x1)i0 0··· ··· ··· 0 I1··· ··· ··· I ℓ 1··· ··· ··· 1 . 28 For division, it suffices to show for the case r= 2sas well. First, by Lemma 3 and Lemma 2, we can construct B1, B2∈ B(1,3, dembed ) such that B2◦B1(H) = x1−cx11−cx10 0··· ··· 0 I1··· ··· I ℓ 1··· ··· 1 . Then by the first part of this proof, we can construct
Bi∈ B(2⌊(i−3)/3⌋,3, dembed ), 3≤i≤3s+ 2, to implement all the i-th power of (1 −cx1)i, 1≤i≤r. Then we can construct B3s+3∈ B(r,3, dembed ) to add up all the powers, i.e., B3s+3◦ ··· ◦ B1(H) = x1−cx11−cx1(1−cx1)2···Pr i=1(1−cx1)i0 0··· ··· ··· ··· ··· 0 I1··· ··· ··· ··· ··· I ℓ 1··· ··· ··· ··· ··· 1 . Then, we apply Lemma 2 and Lemma 3 to construct B3s+4, B3s+5∈ B(1,3, dembed ) to add the constant 1 into the power series and multiply the constant crespectively, i.e., B3s+5◦ ··· ◦ B1(H) = x1−cx11−cx1···Pr i=0(1−cx1)icPr i=0(1−cx1)i0 0··· ··· ··· ··· ··· 0 I1··· ··· ··· ··· ··· I ℓ 1··· ··· ··· ··· ··· 1 . Since 1 x1−crX i=0(1−cx1)i = c∞X i=r+1(1−cx1)i = (1−cx1)r+1 x1 , we get the desired approximation result. The weights ∥θB∥∞≤O(ℓ2M2∥H∥2 ∞,∞) follows from Inter- action Lemma 8. 2 Remark 4 For any x∈[c1, c2]with 0< c 1< c 2, i.e., xis bounded above and bounded away from 0, we can find some csuch that 1−cx∈(−1,1). Given any prescribed tolerance ϵ >0, by solving (1−cx)r+1/x≤ϵ, we get r=O(ln(1 ϵ)). This is useful when calculating the depth LTand token number mTof each block in the transformer network when approximating each ηi(x)in Proposition 2. C Proof of Proposition 1 and 2 Proof. [Proof of Proposition 1] Notice that the two key components in ˜ ηi(x): −∥P(zi)⊤(x−zi)∥2 hδ2 and −∥x−zi∥2 pτM(zi)2 have no interaction between each other, therefore can be built in parallel using the same number of transformer blocks. Let us focus on implementing − ∥P(zi)⊤(x−zi)∥2 hϵ2 . Letx∈RD, for each i= 1,···, K, we first embed xinto the embedding matrix Hwhere H= x1··· xD0 0··· ··· 0 I1··· ··· I ℓ 1··· ··· 1 ∈Rdembed×ℓ. 29 •Implementation of x−zi By Lemma 2, we can construct B1∈ B(D,3, dembed ) so that it implements the constant addition x−ziin the first row from columns D+ 1 to 2 D, i.e., H1:=B1(H) = x1··· xDx1−(zi)1··· xD−(zi)D0 0··· ··· ··· ··· ··· 0 I1··· ··· ··· ··· ··· I ℓ 1··· ··· ··· ··· ··· 1 . 
•Implementation of P(zi)⊤(x−zi) By Lemma 3, we can sequentially construct B2, B3,···, Bd+1∈ B(D,3, dembed ) so that each of them implements the constant multiplication with cj= (P(zi)⊤ j,1,···, P(zi)⊤ j,D) = ( P(zi)1,j,···, P(zi)D,j) forj= 1,···, d. For each j= 1,···, d, we put the constant multiplication results P(zi)1,j(x1−(zi)1),···, P(zi)D,j(xD−(zi)D) in the first row from columns ( j+ 1)D+ 1 to ( j+ 2)D, i.e., Hd+1:=Bd+1◦···◦ B1(H) = (H1):,I1P(zi)1,1(x1−(zi)1)··· ··· P(zi)D,d(xD−(zi)D)0 0 ··· ··· ··· 0 I2D+1 ··· ··· ··· I ℓ 1 ··· ··· ··· 1 , where I1={1,···,2D}. The notation ( H1):,I1denotes the submatrix of H1with all the rows and columns with column index in I1. Next, by Lemma 1, we can construct Bd+2∈ B(D,3, dembed ) so that it implements the sum of the terms in the first row of Hd+1block by block, where each block is a sum of Dterms, and we put the dsums in the first row from columns ( d+ 2)D+ 1 to ( d+ 2)D+d.
More precisely, we have Hd+2: =Bd+2(Hd+1) = (Hd+1):,Id+1PD j=1P(zi)j,1(xj−(zi)j)···PD j=1P(zi)j,d(xj−(zi)j)0 0 ··· ··· 0 I(d+2)D+1 ··· ··· I ℓ 1 ··· ··· 1 , where Id+1={1,···,(d+ 2)D}. •Implementation of − ∥P(zi)⊤(x−zi)∥2 hδ2 Then by Lemma 4, we can construct Bd+3∈ B(D,3, dembed ) so that it implements the square of those sums in the first row of Hd+2, and we put the corresponding squares in the first row from columns ( d+ 2)D+d+ 1 to ( d+ 2)D+ 2d. Thus, Hd+3: =Bd+3(Hd+2) = (Hd+2):,Id+2PD j=1P(zi)j,1(xj−(zi)j)2 ···PD j=1P(zi)j,d(xj−(zi)j)2 0 0 ··· ··· 0 I(d+2)D+d+1 ··· ··· I ℓ 1 ··· ··· 1 , 30 where Id+2={1,···,(d+ 2)D+d}. Finally, by Lemma 1, we can construct Bd+4∈ B(D,3, dembed ) and Bd+5∈ B(1,3, dembed ) so that Bd+4implements the sum of those squares in Hd+3, i.e., it computes the square of 2-norm of the term ∥P(zi)⊤(x−zi)∥2 2, and Bd+5implements the constant −1/(hδ)2multiplication. Therefore, Hd+5:=Bd+5◦Bd+4(Hd+3) = (Hd+3):,Id+3∥P(zi)⊤(x−zi)∥2 2− ∥P(zi)⊤(x−zi)∥2 hδ2 0 0 ··· 0 I(d+2)D+2d+1 ··· I ℓ 1 ··· 1 , where Id+3={1,···,(d+ 2)D+ 2d}. The total number hidden tokens is on the order of O(Dd). •Implementation of − ∥x−zi∥2 pτM(zi)2 For the implementation of − ∥x−zi∥2 pτM(zi)2 , we need Dmore tokens to save the values (x1−(zi)1)2,···,(xD−(zi)D)2, one more token to save the 2-norm square ∥x−zi∥2 2=PD j=1(xj−(zi)j)2, and one more token to save the constant multiplication with constant −1/(pτM(zi))2. By the Interaction Lemma 8, we can implement all these operation in parallel within transformer blocks Bd+3, Bd+4, Bd+5for the implementation of − ∥P(zi)⊤(x−zi)∥2 hδ2 . We need D+ 2 more tokens for this. So far, after bringing the implementation of− ∥x−zi∥2 pτM(zi)2 , we have Hd+5= (Hd+4):,Id+4− ∥x−zi∥2 pτM(zi)2 − ∥P(zi)⊤(x−zi)∥2 hδ2 0 0 ··· 0 I(d+3)D+2d+3 ··· I ℓ 1 ··· 1 , where Id+4={1,···,(d+ 3)D+ 2d+ 2}. 
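The two quantities assembled so far can be computed directly for comparison. In the sketch below, `P_z` stands in for $P(z_i)$ (a $D\times d$ matrix), `h_delta` for $h\delta$, and `p_tau` for $p\,\tau_M(z_i)$; all names are placeholders, not the paper's notation.

```python
import numpy as np

# Direct computation of the two terms the blocks above build up:
#   -||P(z)^T (x - z)||^2 / (h*delta)^2   and   -||x - z||^2 / (p*tau)^2
def eta_tilde_terms(x, z, P_z, h_delta, p_tau):
    diff = x - z
    proj_term = -np.sum((P_z.T @ diff) ** 2) / h_delta ** 2
    dist_term = -np.sum(diff ** 2) / p_tau ** 2
    return float(proj_term), float(dist_term)

x = np.array([1.0, 2.0, 0.0])
z = np.zeros(3)
P_z = np.eye(3)[:, :2]  # toy projector onto the first two coordinates
print(eta_tilde_terms(x, z, P_z, h_delta=1.0, p_tau=1.0))  # (-5.0, -5.0)
```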
•Implementation of 1− ∥x−zi∥2 pτM(zi)2 − ∥P(zi)⊤(x−zi)∥2 hδ2 Furthermore, we need Bd+6∈ B(2,3, dembed ) to take the sum of − ∥x−zi∥2 pτM(zi)2 and− ∥P(zi)⊤(x−zi)∥2 hδ2 , andBd+7∈ B(1,3, dembed ) to add constant 1, i.e., Hd+7: =Bd+7◦Bd+6(Hd+5) = (Hd+5):,Id+5− ∥x−zi∥2 pτM(zi)2 − ∥P(zi)⊤(x−zi)∥2 hδ2 1− ∥x−zi∥2 pτM(zi)2 − ∥P(zi)⊤(x−zi)∥2 hδ2 0 0 ··· 0 I(d+3)D+2d+5 ··· I ℓ 1 ··· 1 , where Id+5={1,···,(d+ 3)D+ 2d+ 4}. •Implementation of ˜ηi(x) 31 Finally, we need one block Bd+8to implement the ReLU function. This can be achieved by the similar spirit as the proof of Lemma 3. ForBd+8, let us define an attention head Awith the data kernel in the form Qdata i=0 0 0 0 1 0 0 0 0 1 Kdata i=1 0 0 0 0 0 0 0 0 0 . By Interaction Lemma 8, we can construct Ain such a way that hd+7,(d+3)D+2d+7interacts with hd+7,(d+3)D+2d+6only, i.e., A(hd+7,(d+3)D+2d+7) =σ(⟨Qdata ihd+7,(d+3)D+2d+7, Kdata ihd+7,(d+3)D+2d+6⟩)e1 =σ 1−∥x−zi∥2 pτM(zi)2 −∥P(zi)⊤(x−zi)∥2 hδ2! e1 = ˜ηi(x)e1, andAi(hd+7,t) = 0 when t̸= (d+ 3)D+ 2d+ 7. For the feed-forward layer of B8, we take the weight matrix equals to identity and bias equals to zero, so that it implements the identity operation. It is easy to see Bd+8∈ B(1,1, dembed ) and Hd+8:=Bd+8(Hd+7) = (Hd+7):,Id+7˜ηi(x) 0 0 0 I(d+3)D+2d+7Iℓ
1 1 , where $I_{d+7}=\{1,\cdots,(d+3)D+2d+6\}$. By reexamining the proof, we get $L_T=O(d)$, $m_T=O(D)$, $d_{\mathrm{embed}}=O(1)$, $\ell\ge O(Dd)$, $L_{\mathrm{FFN}}=O(1)$, $w_{\mathrm{FFN}}=O(1)$, $\kappa=O(D^2d^6\delta^{-8})$. Hiding the dependency on $d$ when it is not the dominating term, we have $L_T=O(d)$, $m_T=O(D)$, $d_{\mathrm{embed}}=O(1)$, $\ell\ge O(D)$, $L_{\mathrm{FFN}}=O(1)$, $w_{\mathrm{FFN}}=O(1)$, $\kappa=O(D^2\delta^{-8})$. □

Remark 5 The above procedure implements one $\tilde\eta_i(x)$, for $i=1,\cdots,K$. To implement all of $\tilde\eta_1(x),\cdots,\tilde\eta_K(x)$ in parallel, we can start with a large $\ell$ and partition the matrix into $K$ chunks, where each chunk implements one $\tilde\eta_i(x)$. Such an implementation is possible because of the Interaction Lemma 8. Moreover, as discussed in Remark 3, each intermediate output can be put into any column of the matrix without affecting the final result. This flexibility also facilitates parallelization.

Proof. [Proof of Proposition 2] First, we parallelize (see also Remark 5) the application of Proposition 1 to implement $\tilde\eta_1(x),\cdots,\tilde\eta_K(x)$ simultaneously. Let $H$ be an embedding matrix of the form
$$H=\begin{pmatrix}(H_{d+7})_{:,I^1_{d+7}}&\cdots&(H_{d+7})_{:,I^K_{d+7}}&\tilde\eta_1(x)&\cdots&\tilde\eta_K(x)&\|\tilde\eta(x)\|_1&0&\cdots&0\\0&\cdots& & & & & & &\cdots&0\\I_{((d+3)D+2d+6)K+1}&\cdots& & & & & & &\cdots&I_\ell\\1&\cdots& & & & & & &\cdots&1\end{pmatrix}.$$
From Theorem 2.2 in [Cloninger and Klock, 2021], we know $K=O(\delta^{-d})$, where $O(\cdot)$ hides the dependency on $d$ and on the volume $\mathrm{Vol}(\mathcal{M})$ of the manifold. Thus, there exists $T_1(\theta;\cdot)\in\mathcal{T}$ with $L_T=O(d)$, $m_T=O(KD)=O(D\delta^{-d})$, $d_{\mathrm{embed}}=O(1)$, $\ell\ge O(KD)=O(D\delta^{-d})$, $L_{\mathrm{FFN}}=O(1)$, $w_{\mathrm{FFN}}=O(1)$, $\kappa=O(D^2\delta^{-2d})$ such that $T_1(\theta;\cdot)$ can exactly represent $H$. Then, by Lemma 7, we can construct transformer blocks $B_1,\cdots,B_{3s+5}$, with the maximum number of attention heads within each block equal to $r$, to approximate $1/\|\tilde\eta(x)\|_1$ up to tolerance $(1-c\|\tilde\eta(x)\|_1)^{r+1}/\|\tilde\eta(x)\|_1$, where $c$ is some constant such that $1-c\|\tilde\eta(x)\|_1\in(-1,1)$. As shown in Proposition 6.3 of [Cloninger and Klock, 2021], $1-q\lesssim\|\tilde\eta(x)\|_1\lesssim d^{d/2}(1-q)^{-2d}$, where $\lesssim$ hides some uniform constants. Therefore we can find some $c$ such that $1-c\|\tilde\eta(x)\|_1\in(-1,1)$.
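The normalization step can be checked numerically: the reciprocal $1/\|\tilde\eta(x)\|_1$ is replaced by the truncated geometric series $c\sum_{k\le r}(1-cs)^k$, valid whenever $1-cs\in(-1,1)$, with truncation error $(1-cs)^{r+1}/s$ as in (33). The sketch below uses illustrative names and toy values.

```python
import numpy as np

# Sketch of the geometric-series reciprocal used for the normalization:
# approximate 1/s by c * sum_{k=0..r} (1 - c*s)**k and scale eta_tilde.
def approx_normalize(eta_tilde, c, r):
    s = float(np.sum(eta_tilde))  # ||eta_tilde||_1 (entries are >= 0 here)
    inv_approx = c * sum((1.0 - c * s) ** k for k in range(r + 1))
    return eta_tilde * inv_approx

eta = np.array([0.5, 1.0, 0.5])           # ||eta||_1 = 2
out = approx_normalize(eta, c=0.4, r=40)  # 1 - c*s = 0.2, inside (-1, 1)
print(np.max(np.abs(out - eta / 2.0)))    # truncation error ~ 0.2**41 / 2
```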
More precisely, we have, H3s+5: =B3s+5◦ ··· ◦ B1(H) = ···˜η1(x) ··· ˜ηK(x)∥˜η(x)∥1··· cPr k=0(1−c∥˜η(x)∥1)k0 0 ··· ··· ··· ··· ··· 0 I((d+3)D+2d+6)K+1··· ··· ··· ··· ··· I ℓ 1 ··· ··· ··· ··· ··· 1 . Then, by Lemma 5, for each fixed i= 1,···, K, we can construct Bi 3s+6∈ B(1,3, dembed ) such that it implements the pairwise multiplication between cPr k=0(1−c∥˜η(x)∥1)kand ˜ηi(x), i.e., Hi 3s+6:=Bi 3s+6(H3s+5) = ···cPr k=0(1−c∥˜η(x)∥1)kc˜ηi(x)Pr k=0(1−c∥˜η(x)∥1)k0 0 ··· 0 I((d+3)D+2d+6)K+K+r+4 ··· I ℓ 1 ··· 1 . Since1 t=cP∞ k=0(1−ct)kfor 1−ct∈(−1,1), we can truncate the approximation of1 ∥˜η(x)∥1up to r-th power such that 1 ∥˜η(x)∥1−crX k=0(1−c∥˜η(x)∥1)k = c∞X k=r+1(1−c∥˜η(x)∥1)k = (1−c∥˜η(x)∥1)r+1 ∥˜η(x)∥1 ≤ϵ ∥˜η(x)∥1.(33) Therefore ηi(x)−c˜ηi(x)rX k=0(1−c∥˜η(x)∥1)k = ˜ηi(x) ∥˜η(x)∥1−c˜ηi(x)rX k=0(1−c∥˜η(x)∥1)k ≤ϵ˜ηi(x) ∥˜η(x)∥1=ϵηi(x). From the last inequality of (33), we get r=O(ln(1 ϵ)) (See also Remark 4). Let Ti 2implements the sequence B3s+6◦ ··· ◦ B1for each fixed i, then each Ti 2satisfies LTi 2=O(ln(r)) = O(ln(ln(1 ϵ))) and mTi 2=r=O(ln(1 ϵ)). LetT2:= (T1 2,···, TK 2), then T2satisfies LT2=O(ln(r)) =O(ln(ln(1 ϵ))) and mT2=O(ln(1 ϵ) +K). LetT:=T2◦T1, then we have sup x∈M(q)∥T(θ;x)−η(x)∥1= sup x∈M(q)KX i=1 ηi(x)−c˜ηi(x)rX k=0(1−c∥˜η(x)∥1)k ≤sup x∈M(q)KX i=1ϵ˜ηi(x) ∥˜η(x)∥1=ϵ, as desired. By reexamining the proof, we get LT=LT1+LT2=O(d+ ln(ln(1 ϵ))),mT= max( mT1, mT2) = O(max( Dδ−d,ln(1 ϵ)+K)) =O(Dδ−d),dembed =O(1),ℓ≥O(Dδ−d+ln(1 ϵ)+K) =O(Dδ−d),LFFN = O(1),wFFN =O(1),κ=O(D2δ−2d−8). 2 33 Remark 6 When calculating the transformer network parameters, we make the assumption that the
Depth based trimmed means Alejandro Cholaquidis†, Ricardo Fraiman†, Leonardo Moreno‡, Gonzalo Perera§1 †Centro de Matemáticas, Facultad de Ciencias, Universidad de la República, Uruguay. ‡Instituto de Estadística, Departamento de Métodos Cuantitativos, FCEA, Universidad de la República, Uruguay. §Centro Universitario Regional del Este - Universidad de la República, Uruguay. Abstract Robust estimation of location is a fundamental problem in statistics, particularly in scenarios where data contamination by outliers or model misspecification is a concern. In univariate settings, methods such as the sample median and trimmed means balance robustness and efficiency by mitigating the influence of extreme observations. This paper extends these robust techniques to the multivariate context through the use of data depth functions, which provide a natural means to order and rank multidimensional data. We review several depth measures and discuss their role in generalizing trimmed mean estimators beyond one dimension. Our main contributions are twofold: first, we prove the almost sure consistency of the multivariate trimmed mean estimator under mixing conditions; second, we establish a general limit distribution theorem for a broad family of depth-based estimators, encompassing popular examples such as Tukey’s and projection depth. These theoretical advancements not only enhance the understanding of robust location estimation in high-dimensional settings but also offer practical guidelines for applications in areas such as machine learning, economic analysis, and financial risk assessment. A small example with simulated data is performed, varying the depth measure used and the percentage of trimmed data. 1 Introduction Robust estimation of location is a cornerstone in statistics, especially when data are contami- nated by outliers or when the underlying model is misspecified. 
In univariate analysis, the sample median is often preferred over the sample mean because of its superior breakdown point of 1/2, see Hampel et al. (1986). However, despite its robustness, the median can be relatively inefficient under light-tailed distributions such as the normal distribution, see Huber (1981). To balance robustness and efficiency, trimmed means have been extensively studied and applied in practice, see Tukey (1977) and Wilcox (1997). Trimming, in essence, involves discarding a percentage of the extreme values from both ends of a dataset. This procedure is particularly useful when outliers might significantly distort results or lead to misleading conclusions. In practice, trimmed means are used across various fields. For instance, in economic analysis, this technique mitigates the undue influence of extreme events or anomalies that could skew indicators and trends, see Wilcox (2011). It has been proven very recently that, in the one-dimensional case, the trimmed mean achieves a Gaussian-type concentration around the mean, see Oliveira et al. (2025). Moreover, under slightly stronger assumptions, they also show that both tails of the trimmed mean satisfy a strong ratio-type approximation. In data science, trimming ensures that machine learning models are trained on cleaner, more representative data, thereby improving predictive performance, see Rousseeuw and Croux (1993).¹

¹ In memory: this work began with the collaboration of Gonzalo Perera, who unfortunately passed away recently.

arXiv:2505.03523v1 [math.ST] 6 May 2025

Similarly, in financial risk assessment, trimming helps reduce the impact of anomalous data points, resulting in more reliable models for forecasting and
https://arxiv.org/abs/2505.03523v1
decision-making, see Barnett et al. (1994). In medical research, trimming is applied to clinical datasets to eliminate outliers that may result from measurement errors or rare occurrences that do not reflect the broader population, see Farcomeni and Ventura. Extending these robust estimation ideas to the multivariate context presents additional challenges because there is no natural ordering in higher dimensions. In this setting, data depth functions have emerged as a powerful tool for ranking multivariate observations and constructing robust location estimators. The concept of data depth was first introduced to statistics by Tukey in the 1970s, see Tukey (1975), as a means of analyzing multivariate data. In $\mathbb{R}^d$, the Tukey depth function, denoted by $\mathrm{TD}(x,P)$ for a probability measure $P$, is defined as
$$\mathrm{TD}(x,P):=\inf\{P(H): H \text{ is a closed halfspace with } x\in H\}.$$
In simple terms, a depth function measures how close a point is to the center of a distribution. The greater the depth, the more central the point. This concept is pivotal because it extends familiar ideas such as medians, ranks, and order statistics to multidimensional settings. Building on Tukey's groundbreaking work, several alternative notions of depth have been developed. Two prominent examples are the simplicial depth and the spatial depth. The simplicial depth, introduced by Liu, see Liu (1990, 1992); Liu and Singh (1992), is defined by
$$\mathrm{SD}(x,P):=P\big(x\in S[X_1,\ldots,X_{d+1}]\big),$$
where $X_1,\ldots,X_{d+1}$ are independent and identically distributed random vectors in $\mathbb{R}^d$ with common distribution $P$, and $S[X_1,\ldots,X_{d+1}]$ denotes the $d$-dimensional simplex formed by these points, that is, the set of all convex combinations of the vertices.
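As a concrete illustration, the empirical Tukey depth can be approximated by a Monte Carlo search over halfspace directions; the sketch below (all names illustrative) minimizes, over random unit directions $u$, the fraction of sample points in the closed halfspace $\{y:\langle y,u\rangle\ge\langle x,u\rangle\}$.

```python
import numpy as np

# Monte Carlo sketch of empirical Tukey depth: with both u and -u
# effectively covered by random sampling, the minimum over directions
# approximates the infimum over closed halfspaces containing x.
def tukey_depth(x, sample, n_dirs=2000, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.normal(size=(n_dirs, sample.shape[1]))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    frac = (sample @ u.T >= x @ u.T).mean(axis=0)  # halfspace mass at x
    return float(frac.min())

rng = np.random.default_rng(1)
pts = rng.normal(size=(500, 2))
print(tukey_depth(np.zeros(2), pts))             # close to 1/2 at the centre
print(tukey_depth(np.array([10.0, 10.0]), pts))  # near 0 far from the data
```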
In contrast, the spatial depth, proposed by Serfling, see Serfling (2002); Vardi and Zhang (2000), is given by SpD(x, P):= 1− EP S(x−X) , where for any nonzero vector x,S(x)denotes its projection onto the unit sphere, with the convention that S(0) = 0, and Xis a random vector with distribution P. In addition to these measures, other depth functions have been suggested for various types of data, including the convex hull peeling depth, see Barnett (1976), Oja depth, see Oja (1983), projection depth, see Liu (1992), and spherical depth, see Elmore et al. (2006). While some of these, like Tukey’s depth, can be naturally extended to infinite-dimensional settings, others, such as Liu’s simplicial depth, do not adapt as easily. In these cases, researchers have proposed alternative notions of depth (see, for example, Fraiman and Muniz (2001); López-Pintado and Romo (2009); Claeskens et al. (2014); Cuevas and Fraiman (2009)). More recently, the concept of depth has been explored within broader frameworks, including non-Euclidean metric spaces, see Fraiman et al. (2019). For a discussion on the essential properties that a depth function should satisfy (such as vanishing at infinity), see Serfling and Zuo (2000). The concept of statistical depth has also been generalized to data on Riemannian manifolds and other metric spaces. For example, Cholaquidis et al. (2020) introduced the weighted lens depth—an extension of the classical lens depth that incorporates the geometry of Riemannian manifolds—and demonstrated its utility
in supervised classification. In a subsequent work, 2 Cholaquidis et al. (2023) examined level sets of depth measures providing uniform consistency results and new insights into central dispersion in general metric spaces. An early approach to the trimmed mean for multivariate data was introduced by Gordaliza (1991), who proposed impartial trimming—a data-driven procedure for discarding a small fraction of observations deemed “outlying.” This approach laid the foundation for robust clustering methods, such as the impartial trimmed k-means method introduced by Cuesta- Albertos et al. (1997). The robustness of this method was later examined in detail by García-Escudero et al. (1999), demonstrating its effectiveness in the presence of outliers. Subsequently, García-Escudero et al. (2008) extended the clustering procedure to adjust clusters with varying dispersion and weight. More recently, the concept of impartial trimming has been adapted to functional data analysis, see Cuesta-Albertos and Fraiman (2007), thereby broadening its applicability. The extension of L-estimators to the multivariate case, as discussed in Fraiman et al. (1999), provided a powerful tool for generalizing the concept of trimmed means to higher-dimensional contexts—even for infinite-dimensional data, see also Fraiman and Muniz (2001). The key idea is that depth measures in multivariate data facilitate the generalization of the trimmed mean concept to Rd, as proposed in Zuo (2002). In that work it is obtained the consistency and asymptotic normality of a smoothed version of the trimmed mean estimator, albeit under restrictive assumptions regarding the depth measure and the underlying data distribution, see Remark 1 for a detailed comparison with our results. Subsequent research has extended these ideas, investigating various properties of multivariate trimmed means—such as robustness, consistency, and limiting distributions—for specific depth measures. 
For example, Zuo (2006) focuses on projection depth, showing that it leads to a Gaussian limit distribution, while Massé (2004, 2009) studied Tukey's depth. Stigler (1973) proved that in the one-dimensional case the limit distribution can be non-Gaussian. In particular, in Ilienko et al (1991), the authors explore empirical Tukey depth and prove strong limit theorems similar to the Marcinkiewicz–Zygmund law. They study the convergence of empirical depth-trimmed regions to the theoretical ones, focusing on uniform distributions on convex bodies, with convergence rates of $(n^{-1}\log\log n)^{1/2}$. These results highlight the diverse behaviours of multivariate trimmed means, depending on the depth measure employed. The main contribution of our work is twofold. First, we prove the almost sure consistency of the multivariate trimmed mean estimator in the case of mixing data. Second, we obtain the asymptotic distribution of the trimmed mean for a broad family of depth measures, which includes Tukey's depth and projection depth as special cases; see also Remark 1 for a detailed comparison of this result with a quite restrictive one existing in the literature.

1.1 The multivariate trimmed mean

In what follows, we investigate multivariate trimmed means defined via general depth functions under relatively weak conditions. Specifically, we assume the existence of a depth function $D:\mathbb{R}^d\to(0,T]$, and consider a sequence $\{\hat f_n\}_n$ of density estimators, which are assumed to be continuous functions. For instance, given a suitably chosen continuous kernel $K_{h_n}$, which depends on a bandwidth $h_n$, one may take $\hat f_n(x)=(K_{h_n}\ast F_n)(x)$, where $F_n$ is the empirical distribution
function and $\ast$ denotes the convolution. Let $\mathcal{F}$ be the set of continuous and bounded functions on $\mathbb{R}^d$. Consider the map $D=D(x,g):\mathbb{R}^d\times\mathcal{F}\to\mathbb{R}$, where $D(x,g)$ is the depth corresponding to the density $g$. The multivariate trimmed mean is defined as
$$\Pi(D,f):=\int_{\mathbb{R}^d} t\,\mathbb{I}\{D(t)\ge a\}\,f(t)\,dt,\qquad(1)$$
while, if $D_n$ denotes an estimator of $D$ based on a sample $X_1,\ldots,X_n$ of a random vector with density $f$, the empirical counterpart of $\Pi$ is
$$\Pi(D_n,\hat f_n):=\int_{\mathbb{R}^d} t\,\mathbb{I}\{D_n(t)\ge a\}\,\hat f_n(t)\,dt.\qquad(2)$$
To prove the almost sure (a.s.) consistency of (1), $D_n$ can be any estimator that converges a.s. to $D$, see (H'4) below. However, the limiting distribution is obtained for a particular choice of $D_n$: we replace the empirical version of the depth function, $D_n(x)=D(x,F_n)$, by a smooth version given by $D_n(x)=D(x,\hat f_n)$. For instance, for Liu's simplicial depth we can take a regular version in place of the empirical one, i.e., $D(x,g)=P_G(x\in\mathrm{Simplex})$, where $G$ has density $g$. That is, $D(x,g)=P_G(x\in\mathrm{Simplex})$ as defined in Liu's paper, but $D_n$ will be $P_{G_n}(x\in\mathrm{Simplex})$ with $G_n\sim\hat f_n$, instead of $P_{F_n}(x\in\mathrm{Simplex})$. The main questions we will address correspond to the strong consistency and the asymptotic distribution of $\Pi(D_n,\hat f_n)$.

Remark 1. Although a general Central Limit Theorem can be found in Zuo (2002) (see Theorem 3.1), the assumptions made in that work include the Hadamard differentiability of the operator (1) (see hypothesis C4 in the mentioned document). In our work, this is not assumed as a hypothesis; instead, the asymptotic distribution of (2) is obtained as a consequence of the regularity properties of $D$ (see hypotheses (H2) and (H3) below), which constitutes a considerable leap in generality. Furthermore, hypothesis C5 of Zuo (2002) requires that the operator (1) be close to (2) at a rate proportional to $F_n-F$. Again, that hypothesis is not required here; instead, such consistency is derived separately. Notably, until now, that work has been the only study in the literature addressing the asymptotic distribution of the estimator in the general case.
Furthermore, in the aforementioned paper the asymptotic distribution is always normal, a property that does not generally hold; see Stigler (1973). For Tukey depth, an asymptotic distribution theorem is established in Massé (2004) when the trimmed mean (see Equation 1) is replaced by a smoothed version, in which the indicator function is substituted with a continuously differentiable weight $W$. A similar result was previously obtained for simplicial depth in Dümbgen (1992), again by replacing the trimmed mean with a smoothed version.

1.2 Road map

The remainder of the paper is organized as follows. In Section 3 we study some topological and geometric properties of the level sets of $D$ which will be used throughout the manuscript, especially to obtain the limiting distribution theorem for trimmed means. The almost sure consistency of the estimator is obtained in Section 4. Section 5 is quite technical and is devoted to proving the differentiability, in the Hadamard sense, of the functional $\Pi(D,f)$ with respect to its second component. This is a key ingredient in the limiting distribution theorem, obtained in Section 6. At the end of that section, a small simulation example is conducted
to illustrate the shape of the limiting distribution.

2 Main hypotheses

Let $a\in(0,T)$ be fixed, and let $D:\mathbb{R}^d\to(0,T]$ be a depth function. The following set of hypotheses will be considered throughout the manuscript.

(H1) $D$ is continuous in $\mathbb{R}^d$, and $\lim_{\|t\|\to+\infty}D(t)=0$.

(H2) There exist $\delta>0$ and a finite number of points $\mu_1,\ldots,\mu_\ell$, with $\mu_i\in\mathbb{R}^d$ for all $i=1,\ldots,\ell$, such that $D(\mu_i)>a$. Also, for each $\mu_i$ and each $\vec n\in S^{d-1}$ there exists a real value $\gamma_i(a,\vec n)\in\mathbb{R}^+$ such that the functions
$$\lambda\in(-\delta,\delta)\mapsto D\big(\mu_i+(\lambda+\gamma_i(a,\vec n))\vec n\big)$$
are strictly decreasing, and $\gamma_i(a,\vec n)$ is such that $D(\mu_i+\vec n\,\gamma_i(a,\vec n))=a$.

As we will see, (H2) allows us to define $\ell$ functions $\tau_i:\Omega_i\subset(0,T]\times S^{d-1}\to\mathbb{R}^d\setminus\{\mu_i\}$ by setting $\tau_i(s,\vec n)=\mu_i+(\lambda_s+\gamma_i(a,\vec n))\vec n$, where $\lambda_s\in(-\delta,\delta)$ is the unique value for which $D(\tau_i(s,\vec n))=s$. See Section 3 for details.

(H3) $D$ is of class $C^p$, $1<p\le\infty$, and for all $i=1,\ldots,\ell$, $d\tau_i^{-1}$ has no singular points in $\tau_i(\Omega_i)$ (i.e., $d_x\tau_i^{-1}$ has full rank for all $x\in\tau_i(\Omega_i)$, with $\Omega_i$ as in (H2)).

(H4) $f$ is a probability density, continuous on an open set $U$ containing $\{t:D(t)\ge a\}\cup_i\tau_i(\Omega_i)$.

Remark 2. Hypothesis (H1) holds for a wide range of depth measures, including lens depth, simplicial depth, $L^1$-depth, and Tukey depth, among many others. In these cases, no additional assumptions on $X$ are required beyond the existence of a density, see Serfling and Zuo (2000). Hypothesis (H2) is satisfied, for example, when $X$ is symmetric with respect to a single point (see Serfling and Zuo (2000), p. 464) for simplicial depth, Tukey depth, and $L^1$-depth. Moreover, this hypothesis is considerably more general since it permits a finite number of modes. Hypothesis (H3) is used to obtain the asymptotic distribution in Theorem 13 below; for this purpose, (H3) with $p=2$ is enough. It ensures that each $\tau_i$ is a diffeomorphism, see Lemma 5. Hypothesis (H4) is a quite weak assumption that includes most well-known densities.
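Before turning to the level sets, the estimator (2) can be sketched as a plug-in Monte Carlo computation against the sample itself. Note that (1) is not renormalized by the trimmed mass, so the analogue below divides by the full sample size $n$, not by the number of retained points; `toy_depth` is a hypothetical stand-in for an estimator $D_n$, not a depth studied in the paper.

```python
import numpy as np

# Plug-in sketch of the depth-trimmed mean (2): keep the sample points
# whose estimated depth exceeds the threshold a, and divide by n.
def trimmed_mean(sample, depth_fn, a):
    depths = np.array([depth_fn(x, sample) for x in sample])
    keep = depths >= a
    return sample[keep].sum(axis=0) / len(sample)

def toy_depth(x, sample):
    # hypothetical depth, decreasing in distance to the coordinatewise median
    return 1.0 / (1.0 + np.linalg.norm(x - np.median(sample, axis=0)))

rng = np.random.default_rng(0)
pts = rng.normal(size=(1000, 2))
print(trimmed_mean(pts, toy_depth, a=0.5))  # near the origin for symmetric data
```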
3 On the level sets of D Given a∈(0, T), we associate to aits nearest centre µi, as introduced in (H2)); if two or more centres are equidistant from a, one is chosen arbitrarily. We then define τi(a,⃗ n) =µi+γi(a,⃗ n)⃗ n, so that, by definition, D τi(a,⃗ n) =a. Note that, under hypothesis (H2), the mapping λ∈(−δ, δ)7−→D µi+ (λ+γi(a,⃗ n))⃗ n , is strictly decreasing. Consequently, if λ̸= 0, then D(µi+ (λ+γi(a,⃗ n))⃗ n)̸=a. More generally, we define the functions τi: Ωi⊂(0, T]×Sd−1→Rd\ {µi},by setting τi(s,⃗ n) =µi+ λs+γi(a,⃗ n) ⃗ n, where λs∈(−δ, δ)is the unique value for which D(τi(s,⃗ n)) =s. 5 Since Dis a continuous function and D(µi)> afor all i= 1. . . . , ℓ, we can take δsmall enough such that D(µi+ (−δ+γi(a,⃗ n))⃗ n))< µ i (3) for all ⃗ n, and τi(Ωi)∩τj(Ωj) =∅ ∀ i̸=j. (4) In what follows we assume that δis small enough to guarantee (3) and (4). Lemma3.For each i= 1, . . . , ℓ,τiis a homeomorphism over its image, τi(Ωi), whose inverse is given by: τ−1 i(x) = D(x),x−µi ∥x−µi∥ ,∀x∈τi(Ωi). (5) Proof.We remove, for ease of writing, the dependence on ion the subindexes. Letνbe the function defined
https://arxiv.org/abs/2505.03523v1
by (5), i.e., ν(x) = (D(x), (x − µ)/∥x − µ∥). Observe that, by (3), µ ∉ τ(Ω). For x ∈ τ(Ω), set y = τ(ν(x)). Then D(y) = D(x), and there exists λ such that y = µ + λ (x − µ)/∥x − µ∥. But, from

x = µ + ∥x − µ∥ (x − µ)/∥x − µ∥

and the strict monotonicity in (H2), it follows that λ = ∥x − µ∥. Then x = y. This proves that τ(ν(x)) = x and

τ ∘ ν = Id(τ(Ω)),   (6)

where Id(·) denotes the identity map. On the other hand, if ⃗n ∈ Sᵈ⁻¹, then by definition of ν,

ν(τ(a, ⃗n)) = (a, (τ(a, ⃗n) − µ)/∥τ(a, ⃗n) − µ∥) = (a, ⃗n),

which implies

ν ∘ τ = Id(Ω).   (7)

From (6) and (7) we deduce that τ : Ω → τ(Ω) is bijective and that τ⁻¹ = ν. Since τ⁻¹ is bijective and continuous, and its domain is an open set of Rᵈ, by the Invariance of Domain Theorem, see Engelking et al. (1992), we conclude that τ⁻¹ is a homeomorphism.

Lemma 4. If Ca = {x ∈ Rᵈ : D(x) = a}, then Ca is a union of ℓ disjoint sets, C1_a, . . . , Cℓ_a, each of them homeomorphic to Sᵈ⁻¹.

Proof. It is clear that Ca = ∪ℓ_{i=1} {x ∈ Rᵈ : x = τi(a, ⃗n) for some ⃗n ∈ Sᵈ⁻¹} = ∪ℓ_{i=1} Ci_a. Since 0 < a < D(µi), Ci_a ≠ ∅, and, from (3), µi ∉ Ci_a. Consider φi_a : Sᵈ⁻¹ → Ci_a given by φi_a(⃗n) = τi(a, ⃗n); φi_a is a continuous bijection with continuous inverse (φi_a)⁻¹(x) = (x − µi)/∥x − µi∥, hence φi_a is a homeomorphism. By (4) the sets Ci_a are disjoint.

Lemma 5. Under (H1) to (H3), τi is a Cᵖ-diffeomorphism and Ci_a is Cᵖ-diffeomorphic to Sᵈ⁻¹ for all i = 1, . . . , ℓ.

Proof. τi⁻¹ is of class Cᵖ on the open set τi(Ωi), it is bijective, and dτi⁻¹ has no singular points. By the Inverse Mapping Theorem, see Lang (2012), τi⁻¹ is a diffeomorphism of class Cᵖ, hence so is τi. For Ci_a, both φi_a(·) = τi(a, ·) and (φi_a)⁻¹(·) = (· − µi)/∥· − µi∥ are Cᵖ.

4 Consistency

In this section we consider, as before, a data-depth function D : Rᵈ → (0, T], a ∈ (0, T), and any estimator of D, denoted by Dn, which is constructed from a sample X1, X2, . . . , Xn of a random vector X. We assume that the common distribution of X has a density f but do not require the sample to be iid. In fact, the consistency results will be obtained from hypothesis (H2) and the following four weak hypotheses.

(H’1) E(∥X∥) < ∞.
(H’2) Let f̂n be a sequence of estimators of f (based on X1, X2, . . . , Xn) such that f̂n → f almost surely, uniformly on compact sets.

(H’3) P(D(X) = a) = 0.

(H’4) The estimator Dn is such that

Sn := sup_{x∈Rᵈ} |Dn(x) − D(x)| →a.s. 0,

where, in what follows, →a.s. denotes almost sure convergence.

Remark 6. 1. Hypothesis (H’2) is satisfied by kernel-based density estimators, see Silverman (1978). Property (H’2) even holds under some conditions of sample dependence, see Roussas (1991); Giné and Guillou (2002); Einmahl and Mason (2005).
2. Hypothesis (H’3) is met if D(X) has a density or, as follows from Lemma 4, if X has a density and hypotheses (H1) and (H2) hold.
3. If Dn is the empirical version of D (based on an iid sample), then the uniform convergence (H’4) is ensured for many depth notions, see Remark A.3 of Serfling and Zuo (2000). Donoho and Gasko (1992) proved it for the sample half-space depth (Tukey’s depth). The convergence also holds for the simplicial depth, see Liu (1990), Dümbgen (1992), and Arcones
and Giné (1993). Moreover, other works show further depth measures for which hypothesis (H’4) is satisfied (in the iid case), see Cholaquidis et al. (2023); Liu and Singh (1993); Zuo and Serfling (2000). For the simplicial depth, (H’4) holds even when X1, . . . , Xn are not iid, by Theorem 1 in Dümbgen (1992), when Dn(x) = D(x, Pn) with Pn any sequence such that Pn → P weakly. The same holds for Tukey’s depth, see Donoho and Gasko (1992), pp. 1816–1817.

The following theorem states that the trimmed mean estimator (2) converges almost surely to (1).

Theorem 7. Assume (H2) and (H’1) to (H’4). Then, Π(Dn, f̂n) →a.s. Π(D, f) as n → ∞.

Proof. Let Π(D, f̂n) = ∫_{Rᵈ} t 1{D(t)≥a} f̂n(t) dt. Then,

Π(D, f̂n) − Π(D, f) = ∫_{Rᵈ} t 1{D(t)≥a} f̂n(t) dt − ∫_{Rᵈ} t 1{D(t)≥a} f(t) dt.

From (H’2) and (H2), Π(D, f̂n) →a.s. Π(D, f) as n → ∞. Let us write

∥Π(Dn, f̂n) − Π(D, f)∥ ≤ ∥Π(Dn, f̂n) − Π(D, f̂n)∥ + ∥Π(D, f̂n) − Π(D, f)∥.

For the first term,

∥Π(Dn, f̂n) − Π(D, f̂n)∥ ≤ ∫_{Rᵈ} ∥t∥ |1{D(t)≥a} − 1{Dn(t)≥a}| f̂n(t) dt.

From (H’4), D(t) − Sn ≤ Dn(t) ≤ D(t) + Sn. Hence,

∥Π(Dn, f̂n) − Π(D, f̂n)∥ ≤ ∫_{Rᵈ} ∥t∥ |1{(D(t)+Sn)≥a} − 1{(D(t)−Sn)≥a}| 1{Sn≤δ} f̂n(t) dt + ∫_{Rᵈ} ∥t∥ |1{(D(t)+Sn)≥a} − 1{(D(t)−Sn)≥a}| 1{Sn≥δ} f̂n(t) dt =: An(δ) + Bn(δ).

On one hand,

Bn(δ) ≤ ∫_{Rᵈ} ∥t∥ f̂n(t) dt 1{Sn≥δ} →a.s. 0,

since 1{Sn≥δ} →a.s. 0 and ∫_{Rᵈ} ∥t∥ f̂n(t) dt →a.s. E∥X1∥. On the other hand,

An(δ) ≤ ∫_{Rᵈ} ∥t∥ |1{(D(t)+δ)≥a} − 1{(D(t)−δ)≥a}| f̂n(t) dt → E(∥X1∥ |1{(D(X1)+δ)≥a} − 1{(D(X1)−δ)≥a}|) =: A(δ), as n → ∞,

so lim supₙ An(δ) ≤ A(δ). Let us prove that A(δ) → 0 as δ → 0+, using the Dominated Convergence Theorem. First,

∥X1∥ |1{(D(X1)+δ)≥a} − 1{(D(X1)−δ)≥a}| ≤ ∥X1∥ ∈ L1.

From (H’3), with probability one, 1{(D(X1)+δ)≥a} − 1{(D(X1)−δ)≥a} = 0 for δ small enough. Hence A(δ) → 0 as δ → 0+.

5 On the Hadamard differentiability of trimmed means

Let us consider D = {y : Rᵈ → R continuous and bounded}, and U as in (H4). Define

DG = {g : Rᵈ → R continuous on U, bounded, g ∈ L1(Rᵈ)}.

Let D ∈ D and f ∈ DG such that (H1) to (H4) hold. Let 0 < a < T and a1 < a < a2 < T such that K = {t ∈ Rᵈ : a1 ≤ D(t) < a2} ⊂ ∪ℓ_{i=1} τi(Ωi) and (3) and (4) hold. Then,

T(y, g) = ∫_K t 1{y(t)>a} g(t) dt for any (y, g) ∈ D × DG.
The aim of this section is prove that Tis Hadamard differentiable at (D, f)and T ′(y, g) =ℓX i=(Z Sd−1τi(a,⃗ n)f τi(a,⃗ n) det d(a,⃗ n)τi y τi(a,⃗ n) d⃗ n+Z Rdt1{D(t)≥a}g(t)dt) , see Theorem 11 below. We will prove the result for ℓ= 1but the general proof follows exactly the same ideas, so in what follows we denote just τinstead of τi. The proof is based on some technical results. Lemma8.LetG:I→Rdbe a continuous and bounded function on the bounded interval I⊂R; assume that H:I→Ris uniformly continuous and set, for a∈◦ I: Vε(a) =Z IG(δ) 1{H(δ)≥a−δ ε}−1{δ≥a}dδ ε. Then, limε→0+Vε(a) =G(a)H(a). Proof.Pick η >0; take I1, . . . , I Nηas a partition of Iinto disjoint intervals such that sup x,y∈Ii|H(x)−H(y)|< η, ∀1≤i≤Nη, 9 and that a∈◦ Iiis an interior point of Ii. Define Hias the value of Hat an arbitrary point of Iiand set: Hη=NηX i=1Hi1Ii,and Vη ε(a) =Z IG(δ) 1{Hη(δ)≥a−δ ε}−1{δ≥a}dδ ε. Then, Vη ε(a) =NηX i=1Z IiG(δ) 1{Hi≥a−δ ε}−1{δ≥a}dδ ε. For0< ε < ε ηsuch that (a−εη(∥H∥∞+η), a+εη(∥H∥∞+η))⊂Iia, Vη ε(a) =NηX i=1Z Ii∩[a−εHi,a]G(δ)dδ ε. Observe that, Z Iia∩[a−εHia,a)G(δ) ϵdδ=Za a−εHiaG(δ) ϵdδ=−HiaZa a−εHiaG(δ) −ϵHiadδ→HiaG(a) =Hη(a)G(a). Then, Vη ε(a)−→Hη(a)G(a)asε→0+. (A) It is obvious that: Hη(a)G(a)−→H(a)G(a)asη→0+. (B) In addition: ∥Vε(a)−Vn ε(a)∥= Z IG(δ) 1{H(δ)>a−δ ε}−1{Hη(δ)>a−δ ε} dδ ≤Z I∥G(δ)∥1{δ>a−εH(δ)}∆{δ>a−εHη(δ)} εdδ =Z I∥G(δ)∥1{a−εH(δ)>δ>a−εHη(δ)} εdδ +Z I∥G(δ)∥1{a−εHη(δ)>δ>a−εH(δ)} εdδ ≤Z I∥G(δ)∥1{a−ε(Hη(δ)−η)>δ>a−εHη(δ)}+1{a−εHη(δ)>δ>a−ε(Hη(δ)+η)} εdδ =NηX i=1Z Ii∥G(δ)∥1{a−εHi+εη>δ>a −εHi−εη} εdδ =Z Ia i∩(a−εHia−εη,a−εHia+εη)∥G(δ)∥ εdδfor0< ε < ε∗ γ = (η−Hia)Za+(η−Hia)ε a∥G(δ)∥dδ+ (Hia+η)Za−ε(η+Hia) a∥G(δ)∥ −ε(η+Hia)dδ −→ ε→0+∥G(δ)∥ (η−Hia) + (Hia+η) = 2η∥G(a)∥. Then, lim ε→0+∥Vε(a)−Vn ε(a)∥
≤2η∥G(a)∥. (C) 10 From (A), (B), and (C), we get: lim ε→0+∥Vε(a)−H(a)G(a)∥ ≤lim ε→0+∥Vε(a)−Vη ε(a)∥ +lim ε→0+∥Vη ε(a)−Hη(a)G(a)∥+lim ε→0+∥Hη(a)G(a)−H(a)G(a)∥ ≤2η∥G(a)∥+ 0 + 0 . Taking η↓0+, we get: limε→0+∥Vε(a)−H(a)G(a)∥= 0.Thus, the Lemma follows. Proposition 9.Assume that (H1)to(H4)hold. Let 0< a < T,r >0, andUas in (H4). Define, B(r) ={Y:Rd→Rcontinuous on Uand bounded, with ∥Y∥∞≤r}, and ∆ε(Y) =Z Rdt1{|D(t)+εY(t)|≥a}−1{|D(t)|≥a} ε f(t)dt. Then, lim ε→0+sup Y∈X∥∆ε(Y)−∆(Y)∥= 0,where Xis any ∥ · ∥∞-compact subset of B(r), and ∆(Y) =Z Sd−1τ(a,⃗ n)|det(dτ(a,⃗ n))|Y(τ(a,⃗ n))f(τ(a,⃗ n))d⃗ n. Proof.IfY∈ B(τ)andD(t)< a−ετ, then D(t) +εY(t)< a, and if D(t)> a+ετ, then D(t) +εY(t)> a. In both cases, the integrator in ∆ε(Y)equals 0. By taking εsmall enough, we can reduce the integral in ∆ε(Y)to the compact set: K={t∈Rd:a2≥D(t)> a1}, where 0< a1< a < a 2< Tare fixed from now on. Therefore, for 0< ε < ε 0, ∆ε(Y) =Z Kt 1{D(t)+εY(t)≥a}−1{D(t)≥a} εf(t)dt. (8) Denoting by νxthe first coordinate of the vector ν: ∆ε(Y) =Z Kτ(τ−1(t)) 1{(τ−1(t))x+εY(τ(τ−1(t)))≥a}−1{(τ−1(t))x≥a} f(τ(τ−1(t))) × det(dτ−1 t) det(dτ)τ−1(t) dt. Using change of variables (δ,⃗ n) =τ−1(t), see Lang (2012): ∆ε(Y) =Z τ−1(K)τ(δ,⃗ n)1{δ+εY(τ(δ,⃗ n))>a}−1{δ>a} εf(τ(δ,⃗ n))× det(dτ)(δ,⃗ n) dδd⃗ n. (9) Then, ∆ε(Y)equals, Z Sd−1Za2 a1τ(δ,⃗ n)1{δ+εY(τ(δ,⃗ n))≥a}−1{δ≥a} εf(τ(δ,⃗ n)) det(dτ)(δ,⃗ n) dδd⃗ n =Z Sd−1Iε(⃗ n)d⃗ n. 11 Fix⃗ n∈Sd−1. Define, G(δ) =τ(δ,⃗ n)f(τ(δ,⃗ n)) det(dτ)(δ,⃗ n) , H (δ) =Y(τ(δ,⃗ n)),and I= [a1, a2]. It is clear that Lemma 8 applies, and we deduce, lim ε→0+Iε(⃗ n) =τ(a,⃗ n)f(τ(a,⃗ n)) det(dτ)(a,⃗ n) Y(τ(a,⃗ n)) := I(⃗ n),∀⃗ n∈Sd−1.(10) On the other hand Iε(⃗ n) ≤Za2 a1 τ(δ,⃗ n) 1{{δ+ε Y(τ(δ,⃗ n))≥a}△{δ,a}}f τ(δ,⃗ n) det dτ (δ,⃗ n) dδ ds ≤Za2 a1 τ(δ,⃗ n) 1{δ−ε r<a≤δ+ε r}f τ(δ,⃗ n) det dτ (δ,⃗ n) dδ ds. Where we used that ∥Y∥∞≤r. If0< ε < (1/r) min{a2−a, a−a1}, then Iε(⃗ n) ≤Za+εr a−εr τ(δ,⃗ n) det dτ (δ,⃗ n) f τ(δ,⃗ n) dδ=Mε(⃗ n). 
Moreover, as ε→0+,Mε(⃗ n)−→ 2r τ(a,⃗ n) det dτ (a,⃗ n) f τ(a,⃗ n) =M(⃗ n),and by Tonelli’s theorem, Z Sd−1Mε(⃗ n)d⃗ n=1 εZa+εr a−εrZ Sd−1 τ(δ,⃗ n) f τ(δ,⃗ n) det dτ (δ,⃗ n) dδ ds. Since for δ∈[a1, a2], the map (δ,⃗ n)7−→ ∥ τ(δ,⃗ n)∥f τ(δ,⃗ n) det(dτ) (δ,⃗ n),is continuous on the compact domain [a1, a2]×Sd−1, so its integral is a continuous function of δ. Then, lim ε→0+2rZ Sd−1Mε(⃗ n)d⃗ n=Z Sd−1M(⃗ n)d⃗ n. From where it follows that, (i)∥Iε(⃗ n)∥ ≤ Mε(⃗ n)∀⃗ n∈Sd−1,∀0< ε < ε 1. (ii)Mε(⃗ n)− − − → ε→0+M(⃗ n)∀⃗ n∈Sd−1. (iii)Z Sd−1Mε(⃗ n)d⃗ n− − − → ε→0+Z Sd−1M(⃗ n)d⃗ n. (B) From (A) and (B) it follows that Z Sd−1Iε(⃗ n)d⃗ n− − − → ε→0+Z Sd−1I(⃗ n)d⃗ n, which concludes the proof that ∆ε(Y)− − − → ε→0+∆(Y)∀Y∈ X,(C). Now we will show that this convergence is uniform over X. Take η > 0and choose Y1, . . . , YNη∈ Xsuch that sup Y∈Xmin 1≤i≤Nη∥Y−Yi∥ ≤ η. 12 It is clear that ∥∆(Y)−∆(Yi)∥ ≤Z Sd−1∥τ(ai,⃗ n)∥ det(dτ) (ai,⃗ n)f τ(ai,⃗ n) d⃗ n · ∥Y−Yi∥. Since the integral above is finite (denote it by C), we obtain ∥∆(Y)−∆(Yi)∥ ≤
C∥Y−Yi∥ ∀ Y∈ X,1≤i≤Nη. (D) On the other hand, ∥∆ε(Y)−∆ε(Y(i))∥=1 ε Z Kt 1{D(t)+ε Y(i)(t)> a}−1{D(t)+ε Y(t)> a} f(t)dt ≤Z K∥t∥∥f(t)∥1{a−ε Y(i)(t)−ε∥Y(i)−Y∥∞≤D(t)< a−ε Y(i)(t) +ε∥Y(i)−Y∥∞ dt ε ≤Z K∥t∥∥f(t)∥1{a−ε Y(i)(t)−ε∥Y(i)−Y∥∞<D(t)≤a−ε Y(i)(t)+ε∥Y(i)−Y∥∞ dt ε. Then, if we use the same change of variable as in (9), ∥∆ε(Y)−∆ε(Y(i))equals, Z Sd−1Za2 a1∥τ(δ,⃗ n)∥f τ(δ,⃗ n) det(dτ) (ai,⃗ n)1{a−ε Y(i)(τ(δ,⃗ n))−δ≤∥Y(i)−Y∥∞}dδ d⃗ n. Observe that {a−ε Y(i)−δ∥={Yi+∥Yi−Y∥∞≥(a−δ)/ϵ} − { Yi− ∥Yi−Y∥∞≥(a−δ)/ϵ}. Then, ∥∆ε(Y)−∆ε(Y(i))∥=Z Sd−1Za2 a1∥τ(δ,⃗ n)∥f τ(δ,⃗ n) det(dτ) (ai,⃗ n)1{Yi(τ(δ,⃗ n))+∥Yi−Y∥∞≥(a−δ)/ϵ}−1{δ≥a} ϵdδ d⃗ n −Z Sd−1Za2 a1∥τ(δ,⃗ n)∥f τ(δ,⃗ n) det(dτ) (ai,⃗ n)1{Yi(τ(δ,⃗ n))−∥Yi−Y∥∞≥(a−δ)/ϵ}−1{δ≥a} ϵdδ d⃗ n =:Bε. Applying Lemma 8 to G(δ) =∥τ(δ,⃗ n)∥f(τ(δ,⃗ n)) det(dτ) (δ,⃗ n), H∗(δ) =Y(i)(τ(δ,⃗ n)) +∥Y(i)−Y∥∞,and H(δ) =Y(i)(τ(δ,⃗ n)) +∥Y(i)−Y∥∞, and dominating the integrals, we deduce that Bεconverges, as ε→0+, to Z Sd−1∥τ(ai,⃗ n)∥f(τ(ai,⃗ n)) det(dτ) (ai,⃗ n) Yi(τ(ai,⃗ n))+∥Yi−Y∥∞−Yi(τ(ai,⃗ n))+∥Yi−Y∥∞ d⃗ n, which implies that lim ε→0∥∆ε(Y)−∆ε(Yi)∥ ≤ C∥Yi−Y∥∞,∀Y∈ X,1≤i≤Nη, but, more precisely: (using the monotony of Bεwith respect to ∥Yi−Y∥∞) If∥Yi−Y∥∞< handα < ε < ε (h)then ∥∆ε(Y)−∆ε(Yi)∥ ≤ C h. (E) 13 Consider now any Y∈ X; take Yisuch that ∥Yi−Y∥∞< γ, then, ∥∆(Y)−∆ε(Y)∥ ≤ ∥ ∆(Y)−∆(Yi)∥+∥∆(Yi)−∆ε(Yi)∥+∥∆ε(Yi)−∆ε(Y)∥. Using (D) and (E) we get that, if 0< ε < ε (γ)then ∥∆(Y)−∆ε(Y)∥ ≤ C γ+∥∆(Yi)−∆ε(Yi)∥. Furthermore, sup Y∈X∥∆(Y)−∆ε(Y)∥ ≤ C γ+ max 1≤i≤Nγ∥∆(Yi)−∆ε(Yi)∥. Since ∆ε(Yi)→∆(Yi)asε→0+∀1≤i≤NηandNηis finite, lim ε→0sup Y∈X∥∆(Y)−∆ε(Y)∥ ≤ C γ. Taking now γ→0+, the result follows. Proposition 10.LetGbe a compact subset of BPD(r) =n g:Rd→R:g∈L1(Rd)is bounded and continuous on U,∥g∥∞≤ro . Assume (H1)to(H4)and let Y∈ X,Xbe as in Proposition 9. Let ∇ε(g) =Z Rdt1{D(t)+ε Y(t)≥a}g(t)dt,and ∇(g) =Z Rdt1{D(t)≥a}g(t)dt. Then, sup Y∈X, g∈G∥∇ε(g)− ∇ (g)∥ − − − → ε→0+0. 
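Proposition 10's convergence ∇ε(g) → ∇(g) can be checked numerically in a simple one-dimensional instance. In the sketch below, D, Y, g, and the level a are illustrative stand-ins (not from the paper): D is unimodal with D ≥ a on a compact interval, Y is bounded, g is integrable, and the integrals are approximated on a fine grid.

```python
import numpy as np

# Toy 1-d instances: D depth-like, Y a bounded perturbation, g in L^1.
t = np.linspace(-10.0, 10.0, 400001)
dt = t[1] - t[0]
D = np.exp(-np.abs(t))                 # unimodal, vanishes at infinity
Y = np.cos(t)                          # bounded perturbation, ||Y||_inf <= 1
g = np.exp(-t**2 / 2) / np.sqrt(2 * np.pi) * (1 + 0.3 * np.sin(t))
a = 0.3

def nabla_eps(eps):
    """Grid approximation of  integral of t * 1{D(t) + eps*Y(t) >= a} * g(t) dt."""
    return np.sum(t * (D + eps * Y >= a) * g) * dt

limit = nabla_eps(0.0)                 # nabla(g) = integral over {D >= a}
errs = [abs(nabla_eps(eps) - limit) for eps in (0.1, 0.01, 0.001)]
assert errs[0] > errs[-1]              # the error shrinks as eps -> 0
assert errs[-1] < 1e-2
```

The error is driven by the O(ε) displacement of the boundary of {D ≥ a}, in line with the change-of-variables argument used in the proofs of Propositions 9 and 10.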
Proof.Using again the remark that the integral can be reduced to a compact set, for a fixed g,∇ε(g)→ ∇(g)asε→0+, as a trivial consequence of Dominated Convergence Theorem. Consider now η >0, and g1, . . . , g Nη∈ Gsuch that sup g∈Gmin 1≤i≤Nη∥g−gi∥∞< η. Clearly, ∥∇(g)− ∇(gi)∥ ≤ C∥g−gi∥∞. LetKbe the compact set {t:D(t)∈[a1, a2] 0< a1< a < a 2< T}, and so on. Then ∥∇ε(g)− ∇ ε(gi)∥ ≤Z K∥t∥1{D(t)+ε Y(t)≥a}dt∥g−gi∥ ≤Z K∥t∥1{D(t)+εr≥a}dt∥g−gi∥ ≤C(r)∥g−gi∥. It follows that, for εsmall, ∥∇ε(g)− ∇ (g)∥ ≤ C(r)∥g−gi∥+∥∇ε(gi)− ∇ (gi)∥+C(r)∥g−gi∥ =C(r)η+∥∇ε(gi)− ∇ (gi)∥. Then, sup g∈G∥∇ε(g)− ∇ (g)∥ ≤ C(r)η+ max 1≤i≤N2∥∇ε(gi)− ∇ (gi)∥. From where it follows, lim ε→0sup g∈G∥∇ε(g)− ∇ (g)∥ ≤ Q(r)η. Taking η→0+, the result follows. 14 Theorem 11. LetUas in (H4)and D={y:Rd→Rcontinuous on U,and bounded }, DG={g:Rd→Rcontinuous on U,and bounded , g∈L1(Rd)}. LetD∈ Dandf∈ DGsuch that (H1)to(H4)hold. Let 0< a < T anda1< a < a 2< T such that K={t∈Rd:a1≤D(t)< a2} ⊂ ∪ℓ i=1Ωi,(3)and(4)holds. The functional T(y, g) =Z Kt1{y(t)>a}g(t)dtfor any (y, g)∈ D × DG , is Hadamard differentiable in (D, f), and T ′(y, g) =ℓX i=1(Z Sd−1τ(a,⃗ n)f τi(a,⃗ n) det d(a,⃗ n)τi y τi(a,⃗ n) d⃗ n+Z Rdt1{D(t)≥a}g(t)dt) . Proof.As before, we will assume ℓ= 1, the general case is proven exactly the same. T D+εy, f +εg − T(D, f) ε=Z Kt1{D(t)+εy(t)≥a}−1{D(t)≥a} εf(t)dt+ Z Kt1{D(t)+εy(t)≥a}g(t)dt. The first term, as ε→0, it converges uniformly on compact sets to Z Sd−1τ(a,⃗ n)f τ(a,⃗ n) det dτ(a,⃗ n)
y τ(a,⃗ n) d⃗ n, by Proposition 9. The second term, it converges to Z Rdt1{D(t)≥a}g(t)dt. by Proposition 10. Hence the limit of the difference is exactly the derivative claimed. Remark 12.If∥Y∥∞≤r, then for εsmall enough, T D+εY, f +εg − T(D, f) ε=Z Rdt1{D(t)+εY(t)≥a}−1{D(t)≥a} εf(t)dt+ Z Rdt1{D(t)+εY(t)≥a}g(t)dt. 6 Asymptotic distribution for trimmed means In this section, we derive the asymptotic distribution of the trimmed mean estimator (2)for the case Dn(x) =D(x,ˆfn). As before, the theorem is stated for the case ℓ= 1; however, the proof follows the same ideas in the general case. Hypothesis (H4)is assumed to hold for p= 2. The following theorem assumes that ˆfnis a sequence of continuous functions that serve as estimators of f(not necessarily kernel-based). 15 Theorem 13. LetDbe such that (H1)to(H4) holds. Assume also that (i)Dis Hadamard differentiable with respect to (x, f), and that ∂D ∂g(x, g), is continuous as a function of (x, g)in a neighborhood of f. (ii)∥ˆfn−f∥∞→0a.s on compact sets, and (iii)√anˆfn−f (t)converges weakly to b(t), on{t:D(t)≥a},b(t)being a stochastic process indexed on {t:D(t)≥a}. Let us denote Π(Dn,ˆfn) =Z Rdt1{Dn(t)≥a}ˆfn(t)dtand Π(D, f) =Z Rdt1{D(t)≥a}f(t)dt, then√an Π(Dn, fn)−Π(D, f) converges weakly to Z Sd−1τ(a,⃗ n)f τ(a,⃗ n)∂D ∂g τ(a,⃗ n), f b(τ(a,⃗ n)) det dτ(a,⃗ n) d⃗ n+Z Rdt1{D(t)≥a}b(t)dt. (11) Remark 14.Assumptions (H1)through (H4)were discussed in Remark 2. Assumption (i) is satisfied, for example, by Tukey’s depth (see Lemma 5.6 in Massé (2009)). Assumption (ii) is discussed in detail in point 1 of Remark 2. Regarding assumption (iii) see Corollary in Rosenblatt (1976). Proof.Let us observe that, Π(Dn,ˆfn)−Π(D, f) =Z Rdt1{Dn(t)≥a}ˆfn(t)dt−Z Rdt1{D(t)≥a}f(t)dt =Z Rdt(1{Dn(t)≥a}−1{D(t)≥a})f(t)dt+Z Rd1{Dn(t)≥a}(ˆfn(t)−f(t))dt. By Remark 12, for nlarge enough, Π(Dn,ˆfn)−Π(D, f) =T(Dn, fn)− T(D, f).By the Hadamard differentiability of Tat(D, f), T(Dn,ˆfn)− T(D, f) =T ′(D, f) Dn−D,ˆfn−f +o ∥Dn−D∥∞+∥ˆfn−f∥∞ . 
Since D is Hadamard differentiable w.r.t. the second component, and ∂D/∂g(x, g) is a continuous function of (x, g) in a neighborhood of f, we have ∥Dn − D∥∞ ≈ C∥f̂n − f∥∞, where the norm is taken on compact sets. Then,

√an (T(Dn, f̂n) − T(D, f)) ≈ √an T′(D, f)(Dn − D, f̂n − f).

By Theorem 11,

√an (T(Dn, f̂n) − T(D, f)) ≈ √an ∫_{Sᵈ⁻¹} τ(a, ⃗n) f(τ(a, ⃗n)) |det dτ(a, ⃗n)| (Dn − D)(τ(a, ⃗n)) d⃗n + √an ∫_{Rᵈ} t 1{D(t)>a} (f̂n(t) − f(t)) dt.

Since (Dn − D)(x) = D(x, f̂n) − D(x, f) = ∂D/∂g(x, f)(f̂n − f) + o(∥f̂n − f∥∞), it follows that

√an (T(Dn, f̂n) − T(D, f)) ≈ √an ∫_{Sᵈ⁻¹} τ(a, ⃗n) f(τ(a, ⃗n)) |det dτ(a, ⃗n)| [∂D/∂g(τ(a, ⃗n), f)(f̂n − f)](τ(a, ⃗n)) d⃗n + √an ∫_{Rᵈ} t 1{D(t)>a} (f̂n(t) − f(t)) dt.

We define two linear operators,

L1(g) = ∫_{Sᵈ⁻¹} w1(⃗n) g(τ(a, ⃗n)) d⃗n, and L2(g) = ∫_{Rᵈ} w2(t) g(t) dt,

with w1 and w2 being the (continuous) weight functions. Since √an(f̂n − f) converges weakly, uniformly, to a process b defined on the compact set {t : D(t) ≥ a},

L1(√an(f̂n − f)) →L L1(b) and L2(√an(f̂n − f)) →L L2(b),

where →L denotes weak convergence. Since L1 and L2 are continuous linear functionals defined on the same space, the vector (L1(√an(f̂n − f)), L2(√an(f̂n − f))) converges in distribution to (L1(b), L2(b)). Then, by applying the continuous mapping theorem,

L1(√an(f̂n − f)) + L2(√an(f̂n − f)) →L L1(b) + L2(b).

6.1 Simulations

This subsection presents a simulation example that illustrates the limiting distribution described in Theorem 13. Because of the complexity of the limit, we visualize its shape by simulating values of Rn = √an(Π(Dn, fn) − Π(D, f)). For
each measure, 500 values are simulated from the distribution of Rn using two values of a (selected based on the range of values for each depth measure) and assuming an = √n. Each simulation of Rn is based on a sample of size n = 500 drawn from a bivariate Beta distribution with independent components, where each component follows a Beta(2, 2) distribution. The density estimate f̂n is computed using kernel density estimation, with the bandwidth h chosen according to Silverman’s rule of thumb, see Silverman (1986). In all cases we choose the Gaussian kernel. Three classic depth measures are considered: Liu’s depth, Tukey’s depth, and a projection-based depth measure. Figures 1, 2, and 3 display the estimated densities and their corresponding contour curves for each depth measure and each level of a.

Figure 1: Density estimation (left panel) and corresponding level sets (right panel) of the distribution of √an(Π(Dn, fn) − Π(D, f)) for a sample of size 500, computed using Liu depth for different values of a ((a) a = 0.05, (b) a = 0.1).

Figure 2: Density estimation (left panel) and corresponding level sets (right panel) of the distribution of √an(Π(Dn, fn) − Π(D, f)) for a sample of size 500, computed using Tukey depth for different values of a ((a) a = 0.1, (b) a = 0.2).

Figure 3: Density estimation (left panel) and corresponding level sets (right panel) of the distribution of √an(Π(Dn, fn) − Π(D, f)) for a sample of size 500, computed using Projection depth for different values of a ((a) a = 0.1, (b) a = 0.2).

References

Arcones, M. A. and Giné, E. (1993). Limit theorems for U-processes. The Annals of Probability, pages 1494–1542.
Barnett, V. (1976). The ordering of multivariate data. Journal of the Royal Statistical Society, Series A (General), 139(3):318–355.
Barnett, V., Lewis, T., et al. (1994). Outliers in Statistical Data, volume 3. Wiley, New York.
Cholaquidis, A., Fraiman, R., Gamboa, F., and Moreno, L. (2020). Weighted lens depth: Some applications to supervised classification.
Journal of Computational and Graphical Statistics.
Cholaquidis, A., Fraiman, R., and Moreno, L. (2023). Level sets of depth measures and central dispersion in abstract spaces. Test, 32:942–957.
Claeskens, G., Hubert, M., Slaets, L., and Vakili, K. (2014). Multivariate functional halfspace depth. Journal of the American Statistical Association, 109(505):411–423.
Cuesta-Albertos, J. A. and Fraiman, R. (2007). Impartial trimmed k-means for functional data. Journal of Classification, 24(2):55–73.
Cuesta-Albertos, J. A., Gordaliza, A., and Matrán, C. (1997). Trimmed k-means: An attempt to robustify quantizers. Annals of Statistics, 25(2):553–576.
Cuevas, A. and Fraiman, R. (2009). On depth measures and dual statistics: A methodology for dealing with general data. Journal of Multivariate Analysis, 100(4):753–766.
Donoho, D. L. and Gasko, M. (1992). Breakdown properties of location estimates based on halfspace depth and projected outlyingness. The Annals of Statistics, pages 1803–1827.
Dümbgen, L. (1992). Limit theorems for the simplicial depth. Statistics & Probability Letters, 14(2):119–128.
Einmahl, U. and Mason, D. M. (2005). Uniform in bandwidth consistency of kernel-type function estimators. The Annals of Statistics, 33(3):1380–1403.
Elmore, R. T., Hettmansperger, T. P., and Xuan, F. (2006). Spherical data depth and a multivariate median. DIMACS Series in Discrete Mathematics and Theoretical Computer Science, 72:87.
Engelking, R., Sieklucki, K., and Ostaszewski, A. (1992). Topology: A Geometric Approach.
Farcomeni, A. and Ventura, L. (2012). An overview of robust methods in medical research. Statistical Methods in Medical Research, 21(2):111–133.
Fraiman, R., Gamboa, F., and Moreno, L. (2019). Connecting pairwise geodesic spheres by depth: Dcops. Journal of Multivariate Analysis,
169:81–94.
Fraiman, R., Meloche, J., García-Escudero, L. A., Gordaliza, A., He, X., Maronna, R., Yohai, V. J., Sheather, S. J., McKean, J. W., Small, C. G., et al. (1999). Multivariate L-estimation. Test, 8:255–317.
Fraiman, R. and Muniz, G. (2001). Trimmed means for functional data. Test, 10(2):419–440.
García-Escudero, L. A., Gordaliza, A., and Matrán, C. (1999). Robustness properties of k-means and trimmed k-means. Journal of the American Statistical Association, 94(447):956–969.
García-Escudero, L. A., Gordaliza, A., Matrán, C., and Mayo-Iscar, A. (2008). A general trimming approach to robust cluster analysis. Annals of Statistics, 36(1):1324–1345.
Giné, E. and Guillou, A. (2002). Rates of strong uniform consistency for multivariate kernel density estimators. In Annales de l’Institut Henri Poincaré (B) Probability and Statistics, volume 38, pages 907–921. Elsevier.
Gordaliza, A. (1991). Best approximations to random variables based on trimming procedures. Journal of Approximation Theory, 64(2):162–180.
Hampel, F. R., Ronchetti, E. M., Rousseeuw, P. J., and Stahel, W. A. (1986). Robust Statistics: The Approach Based on Influence Functions. Wiley.
Huber, P. J. (1981). Robust Statistics. Wiley.
Ilienko, A., Molchanov, I., and Turin, R. (2025). Strong limit theorems for empirical halfspace depth trimmed regions. Bernoulli, 31(1):816–841.
Lang, S. (2012). Real and Functional Analysis, volume 142. Springer Science & Business Media.
Liu, R. Y. (1990). On a notion of data depth based on random simplices. The Annals of Statistics, 18(1):405–414.
Liu, R. Y. (1992). Data depth and multivariate rank tests. In Dodge, Y., editor, L1-Statistical Analysis and Related Methods, pages 279–294.
Liu, R. Y. and Singh, K. (1992). Ordering directional data: Concepts of data depth on circles and spheres. Annals of Statistics, 20(3):1468–1484.
Liu, R. Y. and Singh, K. (1993).
A quality index based on data depth and multivariate rank tests. Journal of the American Statistical Association, 88(421):252–260.
López-Pintado, S. and Romo, J. (2009). On the concept of depth for functional data. Journal of the American Statistical Association, 104(486):718–734.
Massé, J.-C. (2004). Asymptotics for the Tukey depth process, with an application to a multivariate trimmed mean. Bernoulli, 10(3):397–419.
Massé, J.-C. (2009). Multivariate trimmed means based on the Tukey depth. Journal of Statistical Planning and Inference, 139(2):366–384.
Oja, H. (1983). Descriptive statistics for multivariate distributions. Statistics & Probability Letters, 1(6):327–332.
Oliveira, R. I., Orenstein, P., and Rico, Z. F. (2025). Finite-sample properties of the trimmed mean. arXiv preprint arXiv:2501.03694.
Rosenblatt, M. (1976). On the maximal deviation of k-dimensional density estimates. The Annals of Probability, 4(6):1009–1015.
Roussas, G. G. (1991). Kernel estimates under association: Strong uniform consistency. Statistics & Probability Letters, 12(5):393–403.
Rousseeuw, P. J. and Croux, C. (1993). Alternatives to the median absolute deviation. Journal of the American Statistical Association, 88(424):1273–1283.
Serfling, R. (2002). A depth function and a scale curve based on spatial quantiles. In Statistical Data Analysis Based on the L1-Norm and Related Methods, pages 25–38. Springer.
Serfling, R. and Zuo, Y. (2000). General notions of statistical depth function. The Annals of Statistics, 28(2):461–482.
Silverman, B. W. (1978). Weak and strong uniform consistency of the kernel estimate of a density and its derivatives. The Annals of Statistics, pages 177–184.
Silverman, B. W. (1986). The kernel method for univariate data. In Density Estimation for Statistics and Data Analysis, pages 34–74. Springer US.
Stigler, S. M. (1973). The asymptotic distribution of the trimmed mean. Annals of Statistics, 1(3):472–477.
Tukey, J. W. (1975). Mathematics and the picturing of data.
In Proceedings of the International Congress of Mathematicians, volume 2, pages 523–531.
Tukey, J. W. (1977). Exploratory Data Analysis. Addison-Wesley.
Vardi, Y. and Zhang, C.-H. (2000). The multivariate L1-median
Maximum likelihood estimation for the λ-exponential family

Xiwei Tian1, Ting-Kam Leonard Wong1 [0000-0001-5254-7305], Jiaowen Yang2, and Jun Zhang3

1 University of Toronto, Toronto, Ontario, Canada. xiwei.tian@mail.utoronto.ca, tkl.wong@utoronto.ca
2 Meta, Menlo Park, California, United States. jiaowen@meta.com
3 University of Michigan, Ann Arbor, Michigan, United States. junz@umich.edu

arXiv:2505.03582v1 [math.ST] 6 May 2025

Abstract. The λ-exponential family generalizes the standard exponential family via a generalized convex duality motivated by optimal transport. It is the constant-curvature analogue of the exponential family from the information-geometric point of view, but the development of computational methodologies is still in an early stage. In this paper, we propose a fixed point iteration for maximum likelihood estimation under i.i.d. sampling, and prove using the duality that the likelihood is monotone along the iterations. We illustrate the algorithm with the q-Gaussian distribution and the Dirichlet perturbation.

Keywords: λ-exponential family · λ-duality · maximum likelihood estimation · q-Gaussian distribution · Dirichlet perturbation.

1 Introduction

Let a state space X and a reference measure ν on X be given. For example, we may let X be a Euclidean space and ν be the Lebesgue measure.

Definition 1 (λ-exponential family). Let λ ∈ R \ {0} be a fixed constant. A d-dimensional λ-exponential family dominated by ν is a parameterized probability density p(x; θ), θ ∈ Θ ⊂ Rᵈ, given by

p(x; θ) = (1 + λ θ · F(x))₊^{1/λ} e^{−φ(θ)},  x ∈ X,   (1)

where z₊ = max{z, 0} is the positive part, F = (F1, . . . , Fd) : X → Rᵈ is a vector of statistics, and the primal potential function φ(θ) is defined by the normalization ∫_X p(x; θ) dν(x) = 1.

The λ-exponential family (defined by a divisive normalization) was introduced in [17] (also see [16]) as a natural extension of the standard exponential family

p(x; θ) = e^{θ · F(x) − ϕ(θ)},   (2)

which is recovered in (1) by letting λ → 0.
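The limit λ → 0 in (1) can be checked numerically: for fixed u = θ · F(x), the deformed factor (1 + λu)₊^{1/λ} approaches e^u. The values of u and λ below are arbitrary illustrative choices.

```python
import numpy as np

def lam_kernel(u, lam):
    """The deformed exponential (1 + lam*u)_+^(1/lam) appearing in (1)."""
    return np.maximum(1.0 + lam * u, 0.0) ** (1.0 / lam)

u = np.linspace(-2.0, 2.0, 9)          # sample values of theta . F(x)
gaps = [float(np.max(np.abs(lam_kernel(u, lam) - np.exp(u))))
        for lam in (-0.1, -0.01, -0.001)]
# the gap to the exponential factor of (2) shrinks as lam -> 0
assert gaps[0] > gaps[1] > gaps[2]
assert gaps[2] < 0.05
```

This is the familiar limit (1 + λu)^{1/λ} → e^u, which is why the standard exponential family sits at λ = 0 inside this one-parameter deformation.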
It also recovers, by switching to a subtractive normalization (see [17, Section III.A]), the q-exponential family with q = 1 − λ [3,4,12]. The q-Gaussian distribution (see Example 1) and the Dirichlet perturbation (see Section 3) are important examples of the λ-exponential family.

The key idea is a generalized convex duality, called the λ-duality, which is a special case of the c-duality in optimal transport [15]. Whereas the cumulant generating function ϕ in (2) is convex, when λ < 1 the function φ in (1) can be shown (under suitable regularity conditions) to have the property that (1/λ)(e^{λφ} − 1) is convex. This generalized convexity is captured by the λ-conjugate

f^{(λ)}(y) := sup_x { (1/λ) log(1 + λ x · y) − f(x) },   (3)

in which the pairing function (1/λ) log(1 + λ x · y) is used in place of the usual inner product x · y. Using the λ-duality based on (3), many information-geometric properties of the exponential family (obtained by applying the usual convex duality to the ϕ in (2)) have natural analogues. In particular, while an exponential family is dually flat [2], a λ-exponential family is dually projectively flat with constant sectional curvature λ. Further results about the λ-duality and its relations with the usual convex duality can be found in [18,19].

Several computational methodologies have been developed for the generalized exponential family (or its special cases such as the t-distribution) represented as (1) or as a q-exponential family: logistic regression [6], approximate inference [5], classification [8], nonlinear principal component analysis [14], dimension reduction [1], online estimation [10], maximum likelihood estimation and variational inference [9], as well as compositional data analysis [11]. In this paper, we study maximum likelihood estimation of the λ-exponential family under i.i.d. sampling, emphasizing the role played by the λ-duality.

For a (regular) exponential family (2), it is well
known (see [2, Section 2.8.3]) that the maximum likelihood estimate (MLE) θ̂ given data points x1, . . . , xn (under independent sampling) is characterized by

∇ϕ(θ̂) = (1/n) Σ_{i=1}^n F(xi).   (4)

That is, the MLE η̂ of the dual (expectation) parameter η = ∇ϕ(θ) is given simply by the sample mean of the sufficient statistic. For a λ-exponential family, the appropriate dual parameter is defined by the λ-gradient

η = ∇^{(λ)}φ(θ) := ∇φ(θ) / (1 − λ ∇φ(θ) · θ),   (5)

which can be interpreted probabilistically as an escort expectation. The inverse mapping is given by θ = ∇^{(λ)}ψ(η), where ψ = φ^{(λ)} is the dual potential given by (6) below, and can be expressed as a Rényi entropy [17, Theorem III.14]. In Section 2, we show that the MLE η̂ for η is a convex combination of F(x1), . . . , F(xn), where the weights depend on η̂ and the data. This relation suggests a natural fixed-point iteration (see Algorithm 1) to compute the MLE; it is different from the algorithm proposed recently in [9], which is based on optimizing a bound of the likelihood. In Theorem 1, we show that when λ < 0 the likelihood is monotone along the iterations of our algorithm. In Section 3, we illustrate this algorithm with the Dirichlet perturbation model. Further investigation of this algorithm, as well as comparison with alternative approaches, will be addressed in a follow-up paper.

2 MLE via fixed-point iteration

Let a λ-exponential family (1) be given. Throughout this paper, we assume that the family satisfies the following regularity conditions. It can be verified that these conditions hold for the examples considered in this paper.

Assumption 1. We assume that λ < 0, Θ = {θ ∈ Rᵈ : ∫ (1 + λ θ · F(x))₊^{1/λ} dν(x) < ∞} is the natural parameter set, and [17, Condition III.10] holds. Moreover, we assume that the dual parameter set Ξ := ∇^{(λ)}φ(Θ) is convex and contains the common support S := {x ∈ X : p(x; θ) > 0} of the family.

Let φ be the potential function in (1).
We define the dual potential ψ by the λ-conjugate (3) of φ:

ψ(η) := φ^{(λ)}(η) = sup_{θ′∈Θ} { (1/λ) log(1 + λ θ′ · η) − φ(θ′) },  η ∈ Ξ.   (6)

Under Assumption 1, (1/λ)(e^{λψ} − 1) is strictly convex on Ξ. Moreover, ∇^{(λ)}φ : Θ → Ξ is a diffeomorphism with inverse ∇^{(λ)}ψ. We define the dual parameter η by η := ∇^{(λ)}φ(θ), so that θ = ∇^{(λ)}ψ(η). From the definition of the λ-conjugate we have the following analogue of the Fenchel–Young identity:

φ(θ) + ψ(η) ≡ (1/λ) log(1 + λ θ · η),  η = ∇^{(λ)}φ(θ),   (7)

where 1 + λ θ · η can be shown to be strictly positive. The convexity of Ξ is related to the λ-analogue of convex functions of Legendre type; see [17, Section VI] for further discussion.

Let x1, . . . , xn ∈ X be n i.i.d. samples from the model (1) and write yi = F(xi) ∈ Rᵈ. We assume xi ∈ S for all i, as otherwise the likelihood is zero. The log-likelihood is then given by

ℓ(θ) = Σ_{i=1}^n log p(xi; θ) = Σ_{i=1}^n ( (1/λ) log(1 + λ θ · yi) − φ(θ) ).   (8)

Proposition 1 (First order condition). Suppose Assumption 1 holds. If θ̂ is an MLE, i.e. it maximizes ℓ(θ), then η̂ := ∇^{(λ)}φ(θ̂) satisfies the relation

η̂ = Σ_{i=1}^n wi(θ̂) yi,   (9)

where the weights wi are defined by

wi(θ) := (1/(1 + λ θ · yi)) / Σ_{j=1}^n (1/(1 + λ θ · yj)),  i = 1, . . . , n.   (10)

Proof. Differentiating (8) yields the first order condition ∇φ(θ̂) = (1/n) Σ_{i=1}^n yi/(1 + λ θ̂ · yi). Note that

1 − λ ∇φ(θ̂) · θ̂ = (1/n) Σ_{i=1}^n 1/(1 + λ θ̂ · yi).   (11)

We obtain (9) from the definition (5) of the dual parameter.

When λ → 0, (9) reduces to (4). In general, when λ ≠ 0 equation (9) does
not have a closed form solution. However, it suggests the following procedure for computing the MLE.

Algorithm 1 (Fixed-point iteration). Given data y1 = F(x1), . . . , yn = F(xn) and an initial guess θ^{(0)} ∈ Θ, define (η^{(k)})_{k≥1} ⊂ Ξ and (θ^{(k)})_{k≥1} ⊂ Θ by

η^{(k+1)} := Σ_{i=1}^n wi(θ^{(k)}) yi,  θ^{(k+1)} := ∇^{(λ)}ψ(η^{(k+1)}).   (12)

More compactly, we may express (12) by the following dynamical system on the dual parameter space Ξ:

η^{(k+1)} := T(η^{(k)}),  T(η) := Σ_{i=1}^n wi(η) yi,   (13)

where by an abuse of notation we denote wi(θ) = wi(∇^{(λ)}ψ(η)) also by wi(η). It is clear that if η^{(k)} already satisfies (9), then η^{(k+1)} = η^{(k)}. Since the convex hull of y1, . . . , yn is invariant under T, Brouwer’s fixed-point theorem implies that a fixed point exists (uniqueness is left for future research). We give a simple example to illustrate the algorithm.

Example 1 (q-Gaussian distribution). Let X = R and ν be the Lebesgue measure. For q ∈ (0, 3), the q-Gaussian distribution as a scale family can be expressed as a λ-exponential family with λ = 1 − q ∈ (−2, 1):

p(x; θ) = (1 − λ θ x²)^{1/λ} e^{−φ(θ)},  x ∈ R,   (14)

where θ ∈ Θ = (−∞, 0), F(x) = x², and φ(θ) = −(1/2) log(−θ) + Cλ for some constant Cλ (see [17, Example III.17] for details). The dual parameter is η = ∇^{(λ)}φ(θ) = (1/(2+λ)) (1/θ) ∈ Ξ = (−∞, 0), so θ = ∇^{(λ)}ψ(η) = (1/(2+λ)) (1/η).

In Figure 1 we illustrate Algorithm 1 with a simulated data-set with λ = −1.2 and n = 500. The true value of θ is −1 and we show two trajectories with different initial values. In both cases, the iterates converge quickly to the MLE.

Figure 1: Two trajectories (η^{(k)}) of Algorithm 1 for the q-Gaussian distribution in Example 1. The solid curve shows the graph of T(η) on Ξ = (−∞, 0) defined by (13) for the given data-set. The dashed line is the graph of the identity map. The fixed point (indicated by the cross) corresponds to the MLE.

We are now ready to state the main result of the paper whose proof is a novel application of the λ-duality.
In particular, we will use the λ-gradients and the strict convexity of (1/λ)(e^{λψ} − 1).

Theorem 1 (Monotonicity of likelihood). Under Assumption 1, we have ℓ(θ^{(k+1)}) > ℓ(θ^{(k)}) unless θ^{(k)} already satisfies the fixed-point condition (9).

Proof. We first express the weights w_i in terms of the dual potential ψ. Since θ = ∇^{(λ)}ψ(η), for each i we have

  1 + λθ·y_i = 1 + λ ( ∇ψ(η)/(1 − λ∇ψ(η)·η) )·y_i = ( 1 + λ∇ψ(η)·(y_i − η) ) / ( 1 − λ∇ψ(η)·η ).  (15)

Also note the identity

  1 − λ∇ψ(η)·η = 1/(1 + λθ·η) = 1 − λ∇φ(θ)·θ,  (16)

which can be verified by a similar computation. Let Ψ(η) := e^{λψ(η)}, which is positive and (since λ < 0) strictly concave on Ξ. For each i, define

  κ_i(η) := Ψ(η) + ∇Ψ(η)·(y_i − η).

Using the chain rule, (15) and (16), we have

  1 + λθ·y_i = ( κ_i(η)/Ψ(η) ) (1 + λθ·η) = κ_i(η) / e^{λψ(η) − log(1 + λθ·η)} = κ_i(η) e^{λφ(θ)}.  (17)

Thus κ_i(η) = p(x_i; θ)^λ and

  w_i(θ) = κ_i(η)^{−1} / ∑_{j=1}^n κ_j(η)^{−1}.  (18)

From (18), we observe that

  ℓ(θ) = (1/λ) log( ∏_{i=1}^n κ_i(η) ).  (19)

Since λ < 0, maximizing the likelihood over θ ∈ Θ is equivalent to minimizing the product ∏_{i=1}^n κ_i(η) over η = ∇^{(λ)}φ(θ) ∈ Ξ.

Now we make the key observation. Consider the k-th iterate (θ^{(k)}, η^{(k)}). For any η ∈ Ξ, we have

  ∑_{i=1}^n w_i(θ^{(k)}) κ_i(η) = ∑_{i=1}^n w_i(θ^{(k)}) ( Ψ(η) + ∇Ψ(η)·(y_i − η) )
    = Ψ(η) + ∇Ψ(η)·( ∑_{i=1}^n w_i(θ^{(k)}) y_i − η )
    = Ψ(η) + ∇Ψ(η)·(η^{(k+1)} − η),  (20)

where the last equality follows from (12). Letting η = η^{(k)} and η = η^{(k+1)} gives

  ∑_{i=1}^n w_i(θ^{(k)}) κ_i(η^{(k)}) = Ψ(η^{(k)}) + ∇Ψ(η^{(k)})·(η^{(k+1)} − η^{(k)})

and

  ∑_{i=1}^n w_i(θ^{(k)}) κ_i(η^{(k+1)}) = Ψ(η^{(k+1)}).

Since η^{(k)} ≠ η^{(k+1)} by assumption, the strict concavity of Ψ (its graph lies strictly below the tangent at η^{(k)}) implies that

  ∑_{i=1}^n w_i(θ^{(k)}) κ_i(η^{(k+1)}) < ∑_{i=1}^n w_i(θ^{(k)}) κ_i(η^{(k)}).  (21)

From (18), the last inequality is equivalent to

  ∑_{i=1}^n ( κ_i(η^{(k)})^{−1} / ∑_{j=1}^n κ_j(η^{(k)})^{−1} ) κ_i(η^{(k+1)}) < ∑_{i=1}^n ( κ_i(η^{(k)})^{−1} / ∑_{j=1}^n κ_j(η^{(k)})^{−1} ) κ_i(η^{(k)}),

which leads to

  (1/n) ∑_{i=1}^n κ_i(η^{(k+1)}) / κ_i(η^{(k)}) < 1.

By the inequality of arithmetic and geometric means, we have

  ∏_{i=1}^n κ_i(η^{(k+1)}) / ∏_{i=1}^n κ_i(η^{(k)}) ≤ ( (1/n) ∑_{i=1}^n κ_i(η^{(k+1)})/κ_i(η^{(k)}) )^n < 1.

From (19) (and recalling that λ < 0), we have
ℓ(θ^{(k+1)}) > ℓ(θ^{(k)}).  ∎

3 Dirichlet perturbation

The Dirichlet perturbation model is a multiplicative analogue of the normal location model⁴ and is a key example of the λ-exponential family [13,17]. Let Δ_d be the open unit simplex in R^{d+1} defined by

  Δ_d = { p = (p^0, ..., p^d) ∈ (0, 1)^{d+1} : p^0 + ··· + p^d = 1 }.

In this section, we use superscripts to denote the components. On Δ_d, introduce the perturbation operation

  p ⊕ q := ( p^0 q^0 / ∑_{j=0}^d p^j q^j, ..., p^d q^d / ∑_{j=0}^d p^j q^j ),

which is the addition under the Aitchison geometry [7]. This leads to the difference operation

  p ⊖ q := ( (p^0/q^0) / ∑_{j=0}^d p^j/q^j, ..., (p^d/q^d) / ∑_{j=0}^d p^j/q^j ).

In particular, for any p ∈ Δ_d, p ⊖ p = (1/(1+d), ..., 1/(1+d)) is the barycenter of the simplex (the uniform distribution). Fix σ > 0 and let D = (D^0, ..., D^d) be a Dirichlet random vector with parameters (1/(σ(1+d)), ..., 1/(σ(1+d))). We may regard σ as a noise parameter. Consider the distribution of

  Q := p ⊕ D,  (22)

where p ∈ Δ_d is regarded as the parameter. In [17, Proposition III.20], it was shown that for σ > 0 fixed, the parameterized distribution of Q on X = Δ_d can be expressed as a d-dimensional λ-exponential family with λ = −σ < 0,

  θ := ( p^0/(λp^1), ..., p^0/(λp^d) )  and  F(q) := ( q^1/q^0, ..., q^d/q^0 ).

The potential function is φ(θ) = (1/(λ(1+d))) ∑_{i=1}^d log(−θ^i), defined for θ ∈ Θ := (−∞, 0)^d. The dual parameter is then given by

  η := ∇^{(λ)}φ(θ) = (1/λ)( 1/θ^1, ..., 1/θ^d ) = ( p^1/p^0, ..., p^d/p^0 ) ∈ Ξ := (0, ∞)^d,  (23)

which is independent of σ or λ. In particular, we may recover p from η by

  p = (p^0, p^1, ..., p^d) = ( 1/(1 + ∑_{j=1}^d η^j), η^1/(1 + ∑_{j=1}^d η^j), ..., η^d/(1 + ∑_{j=1}^d η^j) ).  (24)

⁴ By this we mean the model Y = θ + ε, where θ ∈ R^d is the unknown mean, ε ∼ N(0, σ²I) is the noise, and σ > 0 is treated as a nuisance parameter.

Fig. 2. Left: true p (red ▲) and samples (+) from the Dirichlet perturbation model. Right: three trajectories (p^{(k)})_{k≥0} of the algorithm (25) with different initial values.
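In code, the perturbation and difference operations just defined, together with the resulting fixed-point update p ← p ⊕ ((1/n) ∑_i (q_i ⊖ p)), can be sketched in a few lines of pure Python. The sample points, initial value, and iteration count below are illustrative assumptions for the demo:

```python
def perturb(p, q):
    # Aitchison addition: (p ⊕ q)_j = p_j q_j / sum_k p_k q_k
    r = [pj * qj for pj, qj in zip(p, q)]
    s = sum(r)
    return [rj / s for rj in r]

def difference(p, q):
    # Aitchison subtraction: (p ⊖ q)_j = (p_j / q_j) / sum_k (p_k / q_k)
    r = [pj / qj for pj, qj in zip(p, q)]
    s = sum(r)
    return [rj / s for rj in r]

def mle_step(p, samples):
    # One fixed-point step: the Euclidean average of the residuals
    # q_i ⊖ p, perturbed back onto the current iterate p.
    n, d1 = len(samples), len(p)
    resid = [difference(q, p) for q in samples]
    m = [sum(r[j] for r in resid) / n for j in range(d1)]
    return perturb(p, m)

samples = [[0.2, 0.3, 0.5],
           [0.1, 0.6, 0.3],
           [0.4, 0.4, 0.2]]      # toy data on the simplex
p = [1/3, 1/3, 1/3]              # start at the barycenter
for _ in range(500):
    p = mle_step(p, samples)
```

Each step uses only the simplex operations and, notably, does not involve σ.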
They all converge quickly to the MLE (blue ▲).

With (24), we can express the update (13) in terms of the simplex parameter p^{(k)} rather than the dual parameter η^{(k)}. In the following proposition, we show that the update for the Dirichlet perturbation is, in fact, independent of λ. That is, the algorithm works the same way even if σ = −λ is unknown. This is analogous to maximum likelihood estimation in the normal location model, where the sample mean does not depend on the noise level. This is not the case in general.

Proposition 2. Let q_1, ..., q_n ∈ Δ_d be n i.i.d. samples from the Dirichlet perturbation model (22). Under the algorithm (13), we have

  p^{(k+1)} = p^{(k)} ⊕ ( (1/n) ∑_{i=1}^n (q_i ⊖ p^{(k)}) ).  (25)

We emphasize that (1/n) ∑_{i=1}^n denotes the Euclidean (not Aitchison) average on Δ_d.

Proof. We plug (23) and (24) into (13) and simplify using the simplex operations. The details are omitted due to space constraints.

Example 2. We consider a simulated data-set from the Dirichlet perturbation model with d = 2, p = (0.1, 0.4, 0.5), σ = 0.1 and n = 100 (chosen for visualization purposes; the algorithm works for much larger
values of d and n). In Figure 2 we plot the data and the output of the algorithm (25) with several initial values. In all cases, the iterates converge quickly to the MLE. In practice, we may initialize p^{(0)} with the sample mean (1/n) ∑_{i=1}^n q_i, as it tends to be close to the MLE.

Acknowledgments. The research of T.-K. L. Wong is partially supported by an NSERC Discovery Grant (RGPIN-2019-04419).

Disclosure of Interests. The authors have no competing interests to declare that are relevant to the content of this article.

References

1. Abe, M., Nomura, Y., Kurita, T.: Nonlinear dimensionality reduction with q-Gaussian distribution. Pattern Analysis and Applications 27(1), 26 (2024)
2. Amari, S.-I.: Information Geometry and Its Applications. Springer (2016)
3. Amari, S.-I., Ohara, A.: Geometry of q-exponential family of probability distributions. Entropy 13(6), 1170–1185 (2011)
4. Amari, S.-I., Ohara, A., Matsuzoe, H.: Geometry of deformed exponential families: Invariant, dually-flat and conformal geometries. Physica A: Statistical Mechanics and its Applications 391(18), 4308–4319 (2012)
5. Ding, N., Qi, Y., Vishwanathan, S.: t-divergence based approximate inference. Advances in Neural Information Processing Systems 24 (2011)
6. Ding, N., Vishwanathan, S.: t-logistic regression. In: Advances in Neural Information Processing Systems, vol. 23 (2010)
7. Egozcue, J.J., Pawlowsky-Glahn, V., Mateu-Figueras, G., Barcelo-Vidal, C.: Isometric logratio transformations for compositional data analysis. Mathematical Geology 35(3), 279–300 (2003)
8. Futami, F., Sato, I., Sugiyama, M.: Expectation propagation for t-exponential family using q-algebra. Advances in Neural Information Processing Systems 30 (2017)
9. Guilmeau, T., Chouzenoux, E., Elvira, V.: On variational inference and maximum likelihood estimation with the λ-exponential family. Foundations of Data Science 6(1), 85–123 (2024)
10.
Kainth, A.S., Wong, T.K.L., Rudzicz, F.: Conformal mirror descent with logarithmic divergences. Information Geometry 7(Suppl 1), 303–327 (2024)
11. Machado, A.F., Charpentier, A., Gallic, E.: Optimal transport on categorical data for counterfactuals using compositional data and Dirichlet transport. arXiv preprint arXiv:2501.15549 (2025)
12. Naudts, J.: Generalised Thermostatistics. Springer (2011)
13. Pal, S., Wong, T.K.L.: Multiplicative Schrödinger problem and the Dirichlet transport. Probability Theory and Related Fields 178(1), 613–654 (2020)
14. Tao, Z., Wong, T.K.L.: Projections with logarithmic divergences. In: Geometric Science of Information: 5th International Conference, GSI 2021, Paris, France, July 21–23, 2021, Proceedings 5, pp. 477–486. Springer (2021)
15. Villani, C.: Topics in Optimal Transportation. American Mathematical Society (2003)
16. Wong, T.K.L.: Logarithmic divergences from optimal transport and Rényi geometry. Information Geometry 1(1), 39–78 (2018)
17. Wong, T.K.L., Zhang, J.: Tsallis and Rényi deformations linked via a new λ-duality. IEEE Transactions on Information Theory 68(8), 5353–5373 (2022)
18. Zhang, J., Wong, T.K.L.: λ-deformed probability families with subtractive and divisive normalizations. In: Handbook of Statistics, vol. 45, pp. 187–215. Elsevier (2021)
19. Zhang, J., Wong, T.K.L.: λ-deformation: A canonical framework for statistical manifolds of constant curvature. Entropy 24(2), 193 (2022)
Learning Survival Distributions with the Asymmetric Laplace Distribution

Deming Sheng¹  Ricardo Henao¹

Abstract

Probabilistic survival analysis models seek to estimate the distribution of the future occurrence (time) of an event given a set of covariates. In recent years, these models have preferred nonparametric specifications that avoid directly estimating survival distributions via discretization. Specifically, they estimate the probability of an individual event at fixed times, or the time of an event at fixed probabilities (quantiles), using supervised learning. Borrowing ideas from the quantile regression literature, we propose a parametric survival analysis method based on the Asymmetric Laplace Distribution (ALD). This distribution allows for closed-form calculation of popular event summaries such as mean, median, mode, variation, and quantiles. The model is optimized by maximum likelihood to learn, at the individual level, the parameters (location, scale, and asymmetry) of the ALD. Extensive results on synthetic and real-world data demonstrate that the proposed method outperforms parametric and nonparametric approaches in terms of accuracy, discrimination, and calibration.

1. Introduction

Survival models (Nagpal et al., 2021), also known as time-to-event models, are statistical frameworks designed to predict the time until a specific event of interest occurs, given a set of covariates. These models are particularly valuable in situations where the timing of the event is crucial and often subject to censoring, which means that in some cases the event has not yet occurred or remains unobserved by the end of the data collection period. The flexibility and adaptability of survival models have led to their widespread application in various fields, including engineering (Lai & Xie, 2006), finance (Gepp & Kumar, 2008), marketing (Jung

¹Department of Electrical and Computer Engineering, Duke University.
et al., 2012), and, notably, healthcare (Zhang et al., 2017; Voronov et al., 2018; Lánczky & Győrffy, 2021; Emmerson & Brown, 2021).

(Correspondence to: Deming Sheng <deming.sheng@duke.edu>, Ricardo Henao <ricardo.henao@duke.edu>.)

Survival models can be broadly categorized into parametric, semiparametric, and nonparametric methods, each offering unique strengths depending on the characteristics of the data and the underlying assumptions. Parametric survival models assume that survival times follow a specific statistical distribution, enabling explicit mathematical modeling of the survival function. Common examples include the exponential distribution for constant hazard rates (Feigl & Zelen, 1965), the Weibull distribution for flexible hazard rate modeling (Scholz & Works, 1996), and the log-normal distribution for positively skewed survival times (Royston, 2001). Semiparametric methods, such as the Cox proportional hazards model (Cox, 1972), assume a proportional hazards structure without specifying a baseline hazard distribution, which offers robustness and interpretability. Nonparametric methods, including the Kaplan–Meier estimator (Kaplan & Meier, 1958) and the Nelson–Aalen estimator (Aalen, 1978), rely solely on observed data, avoiding distributional assumptions while directly estimating survival and hazards (risk) functions.

More recently, neural networks have significantly advanced survival models across parametric, semiparametric, and nonparametric settings. In parametric methods, LogNorm MLE (Hoseini et al., 2017) enhances parameter estimation for log-normal distributions. Semiparametric approaches, exemplified by DeepSurv (Katzman et al., 2018), integrate
https://arxiv.org/abs/2505.03712v2
neural networks to capture nonlinear relationships while preserving the structure of models such as the Cox proportional hazards model. Nonparametric approaches, such as DeepHit (Lee et al., 2018) and CQRNN (Pearce et al., 2022), leverage deep learning to directly estimate survival functions without relying on traditional assumptions. These advances allow survival models to handle complex, high-dimensional data with greater precision and flexibility.

Naturally, each approach has limitations that may affect its suitability for different applications. Parametric models rely on strong assumptions about the underlying distribution, which may not accurately capture true survival patterns. Semiparametric models depend on the proportional hazards assumption, which can be invalid in certain datasets.

arXiv:2505.03712v2 [cs.LG] 7 May 2025

Nonparametric models, such as DeepHit and CQRNN, tend to be computationally intensive and require large datasets for effective training, making them less practical in resource-constrained settings. Additionally, these models often produce discrete estimates, which may compromise interpretation and summarization flexibility compared to the continuous modeling offered predominantly by parametric models. To address these limitations, we propose a parametric survival analysis method based on the Asymmetric Laplace Distribution (ALD). Our contributions are listed below.

•We introduce a flexible parametric survival model based on the Asymmetric Laplace Distribution, which offers superior flexibility in capturing diverse survival patterns compared to other distributions (parametric methods).

•The continuous nature of the ALD-based approach offers great flexibility in summarizing distribution-based predictions, thus addressing the limitations of existing discretized nonparametric methods.
•Experiments on 14 synthetic datasets and 7 real-world datasets in terms of 9 performance metrics demonstrate that our proposed framework consistently outperforms both parametric and nonparametric approaches in terms of both discrimination and calibration. These results underscore the robust performance and generalizability of our method in diverse datasets.

2. Background

Survival Data. A survival dataset D is represented as a set of triplets {(x_n, y_n, e_n)}_{n=1}^N, where x_n ∈ R^d denotes the set of covariates in d dimensions, y_n = min(o_n, c_n) ∈ R_+ represents the observed time, and e_n is the event indicator. If the event of interest is observed, e.g., death, then o_n < c_n and the event indicator is set to e_n = 1; otherwise, the event is censored and e_n = 0. In this work, we make the common assumption that the distributions of observed and censored variables are conditionally independent given the covariates, i.e., o ⊥⊥ c | x. Moreover, while we primarily consider right-censored data, less common types of censoring can be readily implemented (Klein & Moeschberger, 2006); e.g., left-censored data can be handled by changing the likelihood accordingly (see Section 3.3 for an example of how the maximum likelihood loss proposed here can be adapted for such a case).

Survival and Hazard Functions. Survival and hazards functions are two fundamental concepts in survival analysis. The survival function is denoted as S(t) = P(T > t), which represents the probability that an individual has survived beyond time t. It can also be expressed in terms of the cumulative distribution function (CDF), F(t), which gives the probability that the event has occurred by the time t, as
https://arxiv.org/abs/2505.03712v2
S(t) = 1 − F(t). The hazards function, denoted as λ(t), describes the instantaneous risk that the event occurs at a specific time t, given that the individual has survived up to that point. Formally, it is defined as:

  λ(t) = lim_{∆t→0} P(t ≤ T < t + ∆t | T ≥ t) / ∆t.

The hazards function is related to the survival function through:

  λ(t) = −(d/dt) log S(t),  or  S(t) = exp( −∫_0^t λ(u) du ).

Furthermore, the probability density function (PDF), f(t), which represents the likelihood that the event occurs at time t, can be derived as:

  f(t) = −(d/dt) S(t) = λ(t) S(t).

These relationships establish a unified framework linking S(t), F(t), λ(t), and f(t), highlighting their interdependence in survival analysis. Importantly, for the purpose of making predictions, we are interested in distributions conditioned on observed covariates, namely S(t|x), F(t|x), λ(t|x) and f(t|x).

Survival Models. Survival models can be broadly classified into three main categories. Parametric models assume that the survival PDF follows a specific probability distribution, as described above. These models thus use a predefined closed-form distribution to describe f(t|x) and F(t|x), for which a model estimating its parameters can be specified. Alternatively, semiparametric models, such as the Cox proportional hazards model (Cox, 1972), first decompose the conditional hazards function as λ(t|x) = λ(t)λ(x), then estimate λ(t) from the data and specify a model for λ(x). In contrast, nonparametric models, such as DeepHit and CQRNN, circumvent directly modeling conditional distributions by discretizing f(t|x) (DeepHit, Lee et al., 2018), learning summaries of f(t|x) such as a fixed set of quantiles (CQRNN, Pearce et al., 2022), or even learning to sample from f(t|x) (Chapfuwa et al., 2018). More details can be found in Appendix A.2.

3. Methods

3.1. Asymmetric Laplace Distribution (ALD)

Definition 3.1 (Kotz et al. (2012)).
A random variable Y is said to have an asymmetric Laplace distribution with parameters (θ, σ, κ), if its PDF is:

  f_ALD(y; θ, σ, κ) = (√2/σ) · κ/(1 + κ²) · { exp( (√2 κ/σ)(θ − y) ),   if y ≥ θ,
                                               exp( (√2/(σκ))(y − θ) ),  if y < θ,   (1)

where θ, σ > 0, and κ > 0 are the location, scale, and asymmetry parameters. Moreover, its CDF can be expressed as:

  F_ALD(y; θ, σ, κ) = { 1 − (1/(1 + κ²)) exp( (√2 κ/σ)(θ − y) ),  if y ≥ θ,
                        (κ²/(1 + κ²)) exp( (√2/(σκ))(y − θ) ),    if y < θ.   (2)

We denote the distribution of Y as AL(y; θ, σ, κ).

Corollary 3.2. The Asymmetric Laplace Distribution, denoted as AL(θ, σ, κ), can be reparameterized as AL(θ, σ, q) to facilitate quantile regression (Yu & Moyeed, 2001), where q ∈ (0, 1) is the percentile parameter that represents the desired quantile. The relationship between q and κ is given by q = κ²/(κ² + 1). Additional details are provided in Appendix A.1.

3.2. Model for the ALD

The structure of the proposed model is illustrated in Figure 1, where a shared encoder is followed by three independent heads to estimate the parameters θ, σ, and κ of the ALD distribution.

Figure 1. The proposed neural network architecture for predicting the parameters of the Asymmetric Laplace Distribution AL(θ, σ, κ).

For the purpose of the experiments in Section 5 with structured data, we use fully connected layers with ReLU
https://arxiv.org/abs/2505.03712v2
activation functions. The outputs of the model connected to θ, σ and κ are further constrained to be non-negative through an exponential (Exp) activation. In addition, a residual connection is included to enhance gradient flow and improve model stability. See Appendix B.3 for more details about the architecture of the model.

3.3. Learning for the ALD

We propose learning the model for the ALD through maximum likelihood estimation (MLE). Since the event of interest can be either observed or censored, we specify separate objectives for these two types of data. For observed events, for which e = 1, we directly seek to optimize the parameters of the model to maximize f_ALD(t|x) in (1). Alternatively, for censored events, for which e = 0, we optimize the parameters of the model to maximize the survival function S_ALD(t|x) = 1 − F_ALD(t|x) in (2). In this manner, the ALD objective below accounts for both the occurrence of events and their respective timing, while explicitly incorporating the survival probability constraint for censored data:

  −L_ALD = ∑_{n∈D_O} log f_ALD(y_n|x_n) + ∑_{n∈D_C} log S_ALD(y_n|x_n),  (3)

where D_O and D_C are the subsets of D = D_O ∪ D_C for which e = 1 and e = 0, respectively. Detailed derivations of the objective in (3) can be found in Appendix A.1.

The simplicity of the objective in (3) is a consequence of the ability to write the relevant distributions, f_ALD(t|x) and S_ALD(t|x), in closed form. Moreover, we make the following remarks.

•The objective in (3) has the same form as the one used in other parametric approaches, for instance Royston (2001) for the log-normal distribution.

•We can readily adapt the loss for other forms of censoring; for instance, if events are left-censored, we only have to replace the second term of (3) by 1 − S_ALD(t|x).
•We do not consider additional loss terms, as is usually done for other approaches; e.g., DeepHit optimizes a form similar to (3), where the density function and cumulative distribution are replaced by discretized approximations, but also considers an additional loss term to improve discrimination (Lee et al., 2018).

•Although the ALD in (1) has support for t < 0, we have observed empirically that this is unlikely to happen, as we will demonstrate in the experiments.

3.4. Comparison between our Method and CQRNN

CQRNN (Pearce et al., 2022) adopts the widely used objective for quantile regression, which is also based on the Asymmetric Laplace Distribution AL(θ, σ, q), and uses the transformation in Corollary 3.2. Specifically, they use the maximum likelihood estimation approach to optimize the following objective:

  L_QR(y; θ_q, σ, q) = log σ − log[q(1 − q)] + (1/σ) · { q(y − θ_q),       if y ≥ θ_q,
                                                        (1 − q)(θ_q − y),  if y < θ_q.   (4)

Following the quantile regression framework, their approach optimizes a model to predict θ_q for a predefined collection of quantile values, e.g., q = {0.1, 0.2, ..., 0.9}. Effectively, and similarly to ours, Pearce et al. (2022) specify a shared encoder with multiple heads to predict {θ_q}_q. Note that the objective in (4) does not require one to specify σ, which results in the following simplified loss:

  L_QR(y; θ_q, q) = { q(y − θ_q),       if y ≥ θ_q,
                      (1 − q)(θ_q − y),  if y < θ_q,
                    = (y − θ_q)(q − I[θ_q > y]),   (5)

where I[·] is the indicator function. The formulation in (5) is also known
as the pinball or checkmark loss (Koenker & Bassett Jr, 1978), which is widely used in the quantile regression literature.

Importantly, unlike the objective for our approach in (3), CQRNN does not maximize the survival probability directly. Instead, they adopt the also widely used approach based on Portnoy's estimator (Neocleous et al., 2006), which optimizes an objective function tailored for censored quantile regression. Specifically, this approach introduces a re-weighting scheme to handle the censored data:

  L_CQR(y, y*; θ_q, q, w) = w L_QR(y; θ_q, q) + (1 − w) L_QR(y*; θ_q, q),  (6)

where y* is a pseudo value set to be "sufficiently" larger than all the observed values of y in the data. Specifically, in CQRNN (Pearce et al., 2022) it is defined as y* = 1.2 max_i y_i. However, Portnoy (Neocleous et al., 2006) indicates that y* could be set to any sufficiently large value approximating ∞. For example, Koenker (2022) sets y* = 1e6. This means that, in practice, this parameter often requires careful tuning based on the specific dataset, given that different datasets exhibit varying levels of sensitivity to it. In some cases, we have observed that small perturbations in y* can lead to considerable variation in performance metrics. Consequently, optimizing this parameter can be non-trivial, making the use of CQRNN, and other censored quantile regression methods, challenging.

The other parameter in (6) that requires attention is the weight w ∈ (0, 1), which is defined as w = (q − q_c)/(1 − q_c), where q_c is the quantile at which the data point was censored (e = 0, y = c), with respect to the observed value distribution, i.e., p(o < c|x). The challenge is that q_c is not known in practice. To address this issue, CQRNN proposes two strategies: a sequential grid algorithm and the quantile grid output algorithm.
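For reference, the pinball loss (5) and the Portnoy-style re-weighted loss (6) can be written in a few lines. The data values, y* heuristic (1.2·max y), and q_c below are illustrative assumptions matching the description above, not values from the paper's experiments:

```python
def pinball(y, theta_q, q):
    # Check/pinball loss (5): (y - theta_q) * (q - 1[theta_q > y])
    return (y - theta_q) * (q - float(theta_q > y))

def censored_pinball(y, theta_q, q, q_c, y_star):
    # Portnoy-style re-weighting (6) for a censored observation: a fraction w
    # of the mass stays at the censoring value y, the rest is pushed to y_star.
    w = (q - q_c) / (1.0 - q_c)
    return w * pinball(y, theta_q, q) + (1.0 - w) * pinball(y_star, theta_q, q)

ys = [2.1, 0.7, 3.4]
y_star = 1.2 * max(ys)   # CQRNN's pseudo-value heuristic
loss = censored_pinball(0.7, theta_q=1.0, q=0.5, q_c=0.2, y_star=y_star)
```

Note how the single tunable y* enters every censored term, which is the sensitivity discussed in the text.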
The core idea of both strategies is to approximate q_c by the proportion q corresponding to the quantile that is closest to the censoring value c under the distribution of observed events y, which are readily available. Even with this approach, q_c is an inherently inaccurate approximation. Its precision heavily depends on the initial grid of q values, specifically the intervals between consecutive q values. Consequently, smaller intervals provide finer granularity but increased computational cost, while larger intervals may lead to coarser approximations that tend to affect model performance. This means that in some cases, the model is sensitive to the choice of the grid of q values. In contrast, our approach enjoys a simple objective function resulting in parametric estimates of several distribution summaries, such as mean, median, standard deviation, and quantiles, without additional cost. Additional details of CQRNN are provided for completeness in Appendix A.2.

4. Related Work

Survival analysis is a fundamental area of study in statistics and machine learning, focusing on modeling time-to-event data while accounting for censoring. A wide range of models has been developed, spanning parametric, semiparametric, and nonparametric methods.

Parametric models assume a specific distribution for the time-to-event variable, providing a structured approach to modeling survival and hazards functions. Commonly used distributions include the exponential (Feigl & Zelen, 1965), Weibull (Scholz & Works, 1996), and the log-normal distribution (Royston, 2001). For example, the log-normal model
assumes that the logarithm of survival times follows a normal distribution, enabling straightforward parameterization of survival curves. In modern approaches (Hoseini et al., 2017), neural networks are employed to learn the parameters of the assumed distribution, e.g., the mean and variance for the log-normal. This combination allows the model to leverage the power of neural networks to capture complex, nonlinear relationships between covariates and survival times, while keeping the interpretability and structure inherent to the parametric framework. However, these models face challenges, despite their simplicity, when the true event distribution significantly deviates from that assumed.

Semiparametric methods strike a balance between flexibility and interpretability. One notable example is the Cox proportional hazards model (Cox, 1972), which assumes a multiplicative effect of covariates on the hazard function. Building on this foundation, DeepSurv (Katzman et al., 2018), a deep learning-based extension, replaces the linear assumption with neural network architectures to model complex feature interactions. DeepSurv has demonstrated improved performance in handling high-dimensional covariates while maintaining the interpretability of hazard ratios. However, semiparametric models face challenges in effectively handling censored data, particularly when censoring rates are very high. In such cases, the limited amount of usable information can lead to degraded performance and reduced reliability of the model's estimates.

The Kaplan–Meier (KM) estimator (Kaplan & Meier, 1958) is a widely used nonparametric method for survival analysis. It estimates the survival function directly from the data without assuming any underlying distribution. The KM estimator is particularly effective for visualizing survival curves and computing survival probabilities. However, its

Table 1.
Dataset summaries: number of features (Feats), training/test data size, and proportion of censored events (PropCens).

Dataset          Feats  Train data  Test data  PropCens
Type 1 – Synthetic target data with synthetic censoring
Norm linear      1      500         1000       0.20
Norm non-linear  1      500         1000       0.24
Exponential      1      500         1000       0.30
Weibull          1      500         1000       0.22
LogNorm          1      500         1000       0.21
Norm uniform     1      500         1000       0.62
Norm heavy       4      2000        1000       0.80
Norm med         4      2000        1000       0.49
Norm light       4      2000        1000       0.25
Norm same        4      2000        1000       0.50
LogNorm heavy    8      4000        1000       0.75
LogNorm med      8      4000        1000       0.52
LogNorm light    8      4000        1000       0.23
LogNorm same     8      4000        1000       0.50
Type 2 – Real target data with real censoring
METABRIC         9      1523        381        0.42
WHAS             6      1310        328        0.57
SUPPORT          14     7098        1775       0.32
GBSG             7      1785        447        0.42
TMBImmuno        3      1328        332        0.49
BreastMSK        5      1467        367        0.77
LGGGBM           5      510         128        0.60

inability to incorporate covariates limits its applicability in complex scenarios. More recent nonparametric approaches, such as DeepHit (Lee et al., 2018) and CQRNN (Pearce et al., 2022), leverage neural networks to predict survival probabilities or quantiles without imposing strong distributional assumptions. These methods are highly flexible, capturing nonlinear relationships between features and survival outcomes, making them particularly suited for high-dimensional and heterogeneous datasets. Nevertheless, a notable shortcoming of both DeepHit and CQRNN is that they produce piecewise constant or point-mass
https://arxiv.org/abs/2505.03712v2
distribution estimates, respectively, that lack continuity and smoothness, leading to survival estimates that can complicate summa- rization, interpretation, and downstream analysis. 5. Experiments 5.1. Datasets We utilize two types of datasets, following Pearce et al. (2022): (Type 1) synthetic data with synthetic censoring and (Type 2) real-world data with real censoring. Table 1 presents a summary of general statistics for all datasets. To account for training and model initialization variability, we run all experiments 10 times with random splits of the data with partitions consistent with Table 1. The source code required to reproduce the experiments presented in the following can be found in the Supplementary Material. For synthetic observed data with synthetic censoring, theinput features xare generated uniformly as x∼ U(0,2)d, where drepresents the number of features. The observed variable o∼p(o|x)and the censored variable c∼p(c|x) follow distinct distributions, with each distribution param- eterized differently, depending on the specific dataset con- figuration. This variability in distributions and parameters allows for the evaluation of the model’s robustness under diverse synthetic data scenarios. For real target data with real censoring, we utilize datasets that span various domains, characterized by distinct features, sample sizes, and censoring proportions. Four of these datasets: METABRIC, WHAS, SUPPORT, and GBSG, were retrieved from the DeepSurv GitHub repository1. Other details are available in Katzman et al. (2018). The remaining three datasets: TMBImmuno, BreastMSK, and LGGGBM were sourced from cBioPortal2for Cancer Ge- nomics. These datasets constitute a diverse benchmark across domains such as oncology and cardiology, allowing a comprehensive evaluation of survival analysis methods. Ad- ditional details of all datasets can be found in Appendix B.1. 5.2. 
Metrics

Predictive Accuracy Metrics: Mean Absolute Error (MAE) and Integrated Brier Score (IBS) (Graf et al., 1999) measure the accuracy of survival time predictions. MAE quantifies the average magnitude of errors between predicted and observed survival times ỹ_i and y_i, respectively. For synthetic data, ground-truth values are obtained directly from the observed distribution, while for real data, only observed events (e = 1) are considered. For the IBS calculation, we select 100 time points evenly from the 0.1 to 0.9 quantiles of the distribution for y in the training set.

Concordance Metrics: Harrell's C-Index (Harrell et al., 1982) and Uno's C-Index (Uno et al., 2011) evaluate the ability of the model to correctly order survival times in a pairwise manner, while accounting for censoring. Harrell's C-Index is known to be susceptible to bias when the censoring rate is high. This happens because censoring dominates the pairwise ranking when estimating the proportion of correctly ordered event pairs. Alternatively, Uno's C-Index adjusts for censoring by using inverse probability weighting, which provides a more robust estimate when the proportion of censored events is high.

Calibration Metrics: There are several metrics to assess calibration. We consider summaries (slope and intercept) of the calibration curves using the predicted PDF f(t|x) or the survival distribution S(t|x). Moreover, we use the censored D-Calibration (CensDcal) (Haider et al., 2020). For the former, Cal[f(t|x)], 9 prediction interval widths are

¹ https://github.com/jaredleekatzman/DeepSurv/
² https://www.cbioportal.org/
https://arxiv.org/abs/2505.03712v2
the Asymmetric Laplace Distribution Table 2. Summary of benchmarking results across 21 datasets. Each column group shows three figures: the number of datasets where our method significantly outperforms, underperforms or is comparable with the baseline indicated. The last two rows summarize the column totals and proportions to simplify the comparisons. For reference, the total number of comparisons is 189. Metricvs. CQRNN vs. LogNorm vs. DeepSurv vs. DeepHit Better Worse Same Better Worse Same Better Worse Same Better Worse Same MAE 6 8 7 10 3 8 6 8 7 12 6 3 IBS 19 1 1 21 0 0 21 0 0 21 0 0 Harrell’s C-Index 4 2 15 10 3 8 6 2 13 15 0 6 Uno’s C-Index 2 3 16 9 2 10 6 1 14 15 0 6 CensDcal 8 4 9 10 1 10 8 5 8 15 1 5 Cal[S(t|x)](Slope) 0 0 21 15 0 6 13 0 8 12 0 9 Cal[S(t|x)](Intercept) 0 0 21 14 0 7 0 11 10 16 0 5 Cal[f(t|x)](Slope) 4 0 17 14 0 7 9 0 12 14 0 7 Cal[f(t|x)](Intercept) 0 4 17 10 0 11 8 0 13 18 0 3 Total 43 / 189 22 / 189 124 / 189 113 / 189 9 / 189 67 / 189 77 / 189 27 / 189 85 / 189 138 / 189 7 / 189 44 / 189 Proportion 0.228 0.116 0.656 0.598 0.048 0.354 0.407 0.143 0.450 0.730 0.037 0.233 considered, e.g., 0.1 for [0.45,0.55], 0.2 for [0.4,0.6],etc. These are used to define the time ranges for each prediction, after which we calculate the proportion of test events that fall within each interval. The calculation of the proportion of censored and observed cases follows the methodology in Goldstein et al. (2020), with further details provided in Ap- pendix B.2. This calibration curve of expected vs. observed events is summarized with an ordinary least squares linear fit parameterized by its slope andintercept . A well-calibrated model is expected to have a unit slope and a zero intercept. For the survival distribution, Cal [S(t|x)], we follow a sim- ilar procedure, however, we consider 10 non-overlapping intervals in the range (0,1),i.e.,(0,0.1],(0.1,0.2],etcand then calculating the proportion of test events that fall within each interval. 
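As an illustration of the procedure above, the following sketch computes the Cal[S(t|x)] bin proportions, the OLS slope/intercept summary, and a CensDcal-style sum of squared residuals. This is a simplified version that handles observed events only; the censored-case weighting of Goldstein et al. (2020), described in Appendix B.2, is omitted, and fitting the OLS line on cumulative proportions is one reasonable reading of the summary described in the text, not the paper's exact implementation.

```python
import numpy as np

def calibration_summary(s_pred, n_bins=10):
    """Cal[S(t|x)]-style summary for observed events.

    `s_pred` holds the predicted survival probabilities S(y_i|x_i)
    evaluated at the observed event times. We bin these values into
    `n_bins` non-overlapping intervals in (0, 1), compute the observed
    proportion per interval, fit an OLS line to cumulative observed
    vs. cumulative expected proportions, and report the sum of squared
    per-bin residuals (a CensDcal-style statistic).
    """
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    # Proportion of events whose predicted survival value falls in
    # each interval (0, 0.1], (0.1, 0.2], ...
    counts, _ = np.histogram(s_pred, bins=edges)
    observed = counts / len(s_pred)
    expected = np.full(n_bins, 1.0 / n_bins)
    # OLS fit on cumulative proportions: a well-calibrated model gives
    # slope ~ 1 and intercept ~ 0 (the per-bin expected proportions are
    # constant, so the fit is done on cumulative values).
    slope, intercept = np.polyfit(np.cumsum(expected), np.cumsum(observed), 1)
    # Sum of squared residuals between observed and expected proportions.
    censdcal = float(np.sum((observed - expected) ** 2))
    return slope, intercept, censdcal
```

For perfectly calibrated predictions (survival values uniformly spread over the 10 bins), the sketch returns a unit slope, zero intercept, and zero residual, matching the description of a well-calibrated model above.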
The calculation of CensDcal starts with that of Cal[S(t|x)] and is followed by computing the sum of squared residuals between the observed and expected proportions, i.e., an expected proportion of 0.1 for each of the 10 intervals defined above. These three groups of metrics provide a robust framework for evaluating predictive accuracy, calibration, and concordance in survival analysis. For the results, we calculate averages and standard deviations for all metrics over 10 random test sets. The metrics that require a point estimate, i.e., MAE and C-Index, are obtained using the expected value of f(t|x), which can be calculated in closed form. More details about all metrics can be found in Appendix B.2.

5.3. Baselines

We compare the proposed method against four baselines representative of the related work to evaluate performance and effectiveness. LogNorm (Royston, 2001): A parametric survival model that assumes that the event times follow a log-normal distribution. DeepSurv (Katzman et al., 2018): A semi-parametric survival model based on the Cox proportional hazards framework, leveraging deep neural networks for the representation of the time-independent hazards. DeepHit (Lee et al., 2018): A deep learning-based survival model that predicts piece-wise probability distributions over event times using a fully neural network architecture. CQRNN (Pearce et al., 2022): A censored quantile regression model that employs a neural network architecture and whose objective is based on the Asymmetric Laplace Distribution. These baselines represent a mix of parametric, semi-parametric, and non-parametric survival modeling techniques, allowing us to provide a comprehensive benchmark for comparison. The implementation details, including model selection, of our method and the other baselines can be found in Appendix B.3.

5.4. Results

Table 2 provides a comprehensive summary of the comparisons between our model and the four baselines across 21 datasets and 9 evaluation metrics, i.e., 189 comparisons in total. When assessing the statistical significance of the different metrics, we use a Student's t-test, with p < 0.05 considered significant after correction for the false discovery rate using the Benjamini-Hochberg procedure (Benjamini & Hochberg, 1995). These results underscore several key insights:

Overall Superiority: Our model is significantly better than the baselines consistently more often. For instance, our model significantly outperforms CQRNN in 23% of the comparisons, while the opposite occurs in only 12%; these proportions are higher against the other baselines, namely 60%, 41%, and 73% for LogNorm, DeepSurv, and DeepHit, respectively.

Accuracy: Our model demonstrates significant improvements over the baselines in predictive accuracy, with a notable improvement in MAE compared to LogNorm and DeepHit.
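The significance testing described above pairs per-dataset t-tests with a Benjamini-Hochberg false discovery rate correction. A minimal sketch of the BH step-up rule is below; it assumes the raw p-values have already been produced by the per-dataset tests (e.g., with scipy.stats.ttest_rel over the 10 runs).

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: given raw p-values, return
    a boolean mask of which null hypotheses are rejected at FDR level
    `alpha`."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    ranked = p[order]
    # Find the largest k with p_(k) <= (k/m) * alpha, then reject the
    # k smallest p-values (step-up: everything at or below that rank).
    thresholds = (np.arange(1, m + 1) / m) * alpha
    below = ranked <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        reject[order[: k + 1]] = True
    return reject
```

Note the step-up behavior: a p-value above its own rank threshold can still be rejected if a larger p-value meets its threshold, which is what distinguishes BH from a simple per-comparison cutoff.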
[Figure 2. Performance on discrimination and calibration metrics: (a) concordance (Harrell's C-Index) and (b) calibration (CensDcal). Reported are test averages with standard deviations over 10 runs.]

Moreover, it consistently outperforms the baselines on nearly every dataset when evaluated with the IBS metric. This consistent superiority in IBS underscores our model's ability to provide accurate and reliable predictions across the entire time range, not just at specific time points. Table 4 and Figure 4 in the Appendix further support this, showing that our method achieves significantly lower IBS values, which reflects its effectiveness in learning from censored data without exacerbating bias in survival estimates.

Concordance: While Harrell's and Uno's C-Indices yield more balanced results across models, our model achieves a relatively higher number of wins compared to the baselines, especially in terms of Harrell's C-Index and against LogNorm and DeepHit, which underscores our model's ability to rank predictions effectively.

Calibration: Our model demonstrates strong performance in calibration metrics, particularly in CensDcal, Cal[S(t|x)] (slope), and Cal[f(t|x)] (slope). The results for CensDcal highlight its ability to effectively handle censored observations. Additionally, our model shows significant superiority in slope- and intercept-related metrics, particularly when compared to LogNorm and DeepHit. However, the improvement over CQRNN remains relatively subtle. These