text string | source string |
|---|---|
argument (see, e.g., [19, Ch. 4]), we then have, with probability at least 1 − 2e^{−t}, ‖Y_τ − E Y_τ‖_{ℓ2} ≤ c( K^4 √(n(d+t)) + K^2 τ(d+t) ) ≤ c K^4 ( √(n(d+t)) + (log n)(d+t) ). Taking t = 3 log n and noting that A^*A(x_* x_*^*) ≽ Y_τ, which implies E A^*A(x_* x_*^*) ≽ ... | https://arxiv.org/abs/2505.02636v1
(1 + m_4 + δ_U)‖x_*‖^2. For the lower isometry assumption, we can directly apply Lemma 3; if n ≥ c1(δ_L) d log d, the assumption holds with probability at least 1 − c2 n^{−2}. Combining the failure probabilities with a union bound gives the result. We now turn to proving part 1. We adopt the cleaner notation of Theorem 2 and se... | https://arxiv.org/abs/2505.02636v1
with probability at least 1 − c2 e^{−c3 n}, for all Z ≽ 0, (1/√n)‖A(Z − Z_*)‖ ≥ (1/n)‖A(Z − Z_*)‖_1 ≥ c4 ‖Z − Z_*‖_*. This summarizes several intermediate results of [18] (in particular, their Lemmas 3, 4, and 5). With this, we can continue to the main proof: Proof of Theorem 3. We will use c, c′, etc. ... | https://arxiv.org/abs/2505.02636v1
= P_U^⊥ X X^* P_U^⊥ ≽ 0. From Cauchy–Schwarz and Assumption 1, we have ‖λ‖ ‖A(H)‖ ≥ ⟨Y, H⟩ = ⟨P_{T^⊥}(Y), H_{T^⊥}⟩ + ⟨P_T(Y), H_T⟩ ≥ tr H_{T^⊥} − ε‖H_T‖_F. We can add this (scaled by L_T) to the inequality from Assumption 2... | https://arxiv.org/abs/2505.02636v1
U has columns u1, ..., u_r (these are the nontrivial eigenvectors of Z_*); set E_k = u_k u_k^*. We will set, for constants α, β, γ > 0 that we will tune, λ_i = (1/n)( α − β Σ_{k=1}^r ⟨A_i, E_k⟩ 1{⟨A_i, E_k⟩ ≤ γ} ). By construction and the properties of G... | https://arxiv.org/abs/2505.02636v1
App. B] give, for n ≳ d, with probability at least 1 − 2n^{−2}, for all H ∈ H_d, (1/√n)‖A(H)‖ ≥ (1/n)‖A(H)‖_1 ≥ c1‖H‖_F − c2 √(d/n) ‖H‖_* for universal constants c1, c2 > 0 that will remain fixed for the rest of this proof. On this event we then have (1/√n)‖A(H)‖ ≥ c1... | https://arxiv.org/abs/2505.02636v1
Process. Syst. (NeurIPS), vol. 33, Virtual conference, Dec. 2020, pp. 3265–3274. [15] Y. Bi, H. Zhang, and J. Lavaei, “Local and global linear convergence of general low-rank matrix recovery problems,” in Proc. AAAI Conf. Artif. Intell. (AAAI), vol. 36, Virtual conference, Feb. 2022, pp. 10129–10137. [16] Z. Ma, ... | https://arxiv.org/abs/2505.02636v1
with application to optical imaging: A contemporary overview,” IEEE Signal Process. Mag., vol. 32, no. 3, pp. 87–109, 2015. [33] R. Ge, C. Jin, and Y. Zheng, “No spurious local minima in nonconvex low rank problems: A unified geometric analysis,” in Proc. Int. Conf. Mach. Learn. (ICML), Sydney, Australia, Aug. 2017, pp. 1233... | https://arxiv.org/abs/2505.02636v1
Vijayaraghavan, “The Burer–Monteiro SDP method can fail even above the Barvinok–Pataki bound,” in Proc. Conf. Neural Inf. Process. Syst. (NeurIPS), New Orleans, Louisiana, Dec. 2022, pp. 31254–31264. [51] P. Abdalla, A. S. Bandeira, M. Kassabov, V. Souza, S. H. Strogatz, and A. Townsend, “Expander graphs are globall... | https://arxiv.org/abs/2505.02636v1
Hierarchical random measures without tables. Marta Catalano1 and Claudio Del Sole2. 1Luiss University, Rome, Italy, mcatalano@luiss.it. 2University of Milan - Bicocca, Milan, Italy, claudio.delsole@unimib.it. Abstract: The hierarchical Dirichlet process is the cornerstone of Bayesian nonparametric multilevel models. Its ge... | https://arxiv.org/abs/2505.02653v1
is often modelled as conditionally independent and identically distributed from ˜P, where ˜P is e.g. a Dirichlet process (Ferguson, 1973) with diffuse mean measure P0. In this case, since the atoms θ_i are independent and identically distributed from P0, two observations coincide if and only if they share the same atom θ... | https://arxiv.org/abs/2505.02653v1
common random probability a priori. In this work we pursue a different strategy that eliminates the need for tables while preserving the infinite-dimensionality of the model, thanks to a specific gamma hyperprior for the shared concentration parameter of the Dirichlet processes. Intuitively, the hierarchical Dirichle... | https://arxiv.org/abs/2505.02653v1
k is of the same order as log(log(n)). The hierarchical Dirichlet process with hyperprior is only a particular case of the more general class of nonparametric hierarchical models discussed in this work. These models arise as the normalization of a vector (˜µ1, . . . , ˜µ_d) of conditionally independent completely rando... | https://arxiv.org/abs/2505.02653v1
as P_d. Bold symbols indicate vectors, e.g. s = (s1, . . . , s_d), and ds = ds1 · ··· · ds_d is (a restriction of) the Lebesgue measure on R^d. We repeatedly use the notation Ω_d = [0,+∞)^d \ {0}, and the abbreviation a.s. for almost surely. If f : X → Y is a measurable map and ρ is a positive measure on X, then f#ρ is the pushforward ... | https://arxiv.org/abs/2505.02653v1
through the Laplace exponent and derive its multivariate Lévy intensity. Theorem 1. Let ˜µ ∼ hCRV(ρ, ρ0, P0) with P0 a probability measure. Then ˜µ is a homogeneous CRV with Laplace exponent ψ_h : Ω_d → (0,+∞) and Lévy intensity ν_h = ρ_h ⊗ P0 such that, for every λ = (λ1, . . . , λ_d) and s = (s1, . . . , s_d) ∈ Ω_d, ψ_h(λ) = ψ0( Σ_{i=1}^d ψ(λ_i... | https://arxiv.org/abs/2505.02653v1
imply that the multivariate Laplace exponent is ψ_h(λ1, . . . , λ_d) = α0 α^{σ0} (λ1^σ + ··· + λ_d^σ)^{σ0}. Hence, two stable-stable hCRVs coincide in distribution if they have the same value for α0 α^{σ0}. For d = 1 we obtain the Laplace functional of the marginal ˜µ_i, namely ψ(λ) = α0 α^{σ0} λ^{σσ0}, which is the Laplace exponent of a stable CRM ... | https://arxiv.org/abs/2505.02653v1
the marginal random measures ˜µ_i's are not CRMs. Interestingly, at least two popular hierarchical specifications can be expressed in terms of a normalized hierarchical CRV. Recall that ˜P = (˜P1, . . . , ˜P_d) ∼ HDP(α, α0, P0) is a hierarchical Dirichlet process (Teh et al., 2006) with concentration parameters α, α0 > 0 ... | https://arxiv.org/abs/2505.02653v1
˜P_i(A)) = −P0(A)(1 − P0(A)) ∫_0^{+∞} u e^{−ψ0(ψ(u))} (ψ0∘ψ)′′(u) du, Cov(˜P_i(A), ˜P_j(A)) = Var( ˜µ0(A)/˜µ0(X) ) = −P0(A)(1 − P0(A)) ∫_0^{+∞} u e^{−ψ0(u)} ψ0′′(u) du, where (ψ0∘ψ)′′(u) = ψ0′′(ψ(u)) ψ′(u)² + ψ0′(ψ(u)) ψ′′(u). In particular, for every i ≠ j, Corr(˜P_i(A), ˜P_j(A)) = [∫_0^{+∞} u e^{−ψ0(u)} ψ0′′(u) du] / [∫_0^{+∞} u e^{−ψ0(ψ(u))} (ψ0∘ψ)′′(u) du]. Example 3. Consider ... | https://arxiv.org/abs/2505.02653v1
a generic CRV, which can be seen as a multivariate extension of James et al. (2009) and a special case of FuRBI random measures (Ascolani et al., 2024) with shared atoms. We later specialize this result to hierarchical CRV and explore the structure of the posterior in further detail. Let X_i = (X_{i1}, . . . , X_{i n_i}) be the... | https://arxiv.org/abs/2505.02653v1
that both ID(ρ) and ID(ρ0) have a p.d.f. (Sato, 1999, Theorem 27.7). The first nontrivial result characterizes the distribution of the vector of random measures ˜µ*, showing the conditional quasi-conjugacy of the model. Indeed, conditionally on U, we can interpret ˜µ* as a hierarchical CRV with heterogeneous marg... | https://arxiv.org/abs/2505.02653v1
, d, where S_i = (S_{i0}, . . . , S_{ik}) are conditionally independent vectors given T = (T0, T1, . . . , T_k), whose p.d.f.s satisfy f_{S_i|T}(s_{i0}, . . . , s_{ik}) ∝ (s_{i0} + s_{i1} + ··· + s_{ik})^{−n_i} f_{ID(T0ρ)}(s_{i0}) Π_{j=1}^k s_{ij}^{n_{ij}} f_{ID(T_jρ)}(s_{ij}), f_T(t0, . . . , t_k) ∝ Π_{i=1}^d C(n_{i1}, . . . , n_{ik}; t) f_{ID(ρ0)}(t0) Π_{j=1}^k ρ0(t_j). Proposition 8 reduces the sampling ... | https://arxiv.org/abs/2505.02653v1
˜µ*_i, and computing λ(U). Notably, the non-standard step for sampling U is the marginal sampling of the variable αT in (11), whose joint density with the auxiliary vector V is known up to a normalizing constant. In Appendix C.1 we describe a Metropolis–Hastings within Gibbs algorithm to sample from the marginal distrib... | https://arxiv.org/abs/2505.02653v1
n_{•j}. Following similar steps, and further marginalizing with respect to the auxiliary variables V, we can derive the marginal density of αT up to a normalizing constant. Proposition 12. The density of the random variable αT in (11) can be written as f_{αT}(t) ∝ t^{α0−1} e^{−(b0/α)t} Π_{i=1}^d 1/((t))_{n_i} ( Σ_{h=m}^n c_h ((α0))_h t^h ), (13) where m = ... | https://arxiv.org/abs/2505.02653v1
αT, auxiliary variables V, and jumps αJ_{01}, . . . , αJ_{0k}, thus having dimension 2k + 1. The exact posterior sampling [Figure 1: Conditional dependencies between random variables in Algorithms 1 and 2. Red circles represent the computational bottlenecks; quantities in empty circles are sampled from gamma distributions... | https://arxiv.org/abs/2505.02653v1
averaging over 25 simulated datasets for each experimental setting. The burn-in time for algorithms (i) and (ii) and the initialization time for algorithms (iii) and (iv) are deducted from the total execution times. Figure 2a compares the algorithms for a growing number of groups d, while the number of observations per gr... | https://arxiv.org/abs/2505.02653v1
the fact that the state of the Markov chain has dimension 2k + 1. Interestingly, this linear growth is partially mitigated by the better mixing due to the decrease in the number of possible configurations of the latent tables as k grows, for fixed n. This fact is also reflected by the slight decrease in CPU time for the ... | https://arxiv.org/abs/2505.02653v1
apparent that correlation is not affected by the mean E(˜P_i(A)) = P0(A) but is deeply related to the variance; we refer to Table 1 for some limiting behaviours. Since α and α0 impact both the variance and the dependence structure, it may be difficult to elicit them in practice. From the point of view of the practitioner, it... | https://arxiv.org/abs/2505.02653v1
particular, for the HDP(α0, α, P0), following Camerlenghi [Figure 5: Posterior expected random means of the gamma-gamma hCRV model, for three groups of independent Poisson observations with means equal to 2, 3 and 4, each of size n_i = 10. Top: each plot has fixed variance parameter σ² and increasing correlation ρ. Bo... | https://arxiv.org/abs/2505.02653v1
16(4):1187–1219. Bertoin, J. (1996). Lévy Processes. Cambridge Tracts in Mathematics. Cambridge University Press. [Figure 7: Values of α and α0 for the HDP corresponding to fixed values of variance σ² and correlation ρ, when the set A satisfies P0(A) = 1/2.] Bochner, S. (1955). Harmonic Analysis and the Theory of Prob... | https://arxiv.org/abs/2505.02653v1
Ferguson, T. S. (1973). A Bayesian analysis of some nonparametric problems. The Annals of Statistics, 1(2):209–230. Ferguson, T. S. and Klass, M. J. (1972). A representation of independent increment processes without Gaussian components. The Annals of Mathematical Statistics, 43(5):1634–1643. Figalli, A. and Gigli, N... | https://arxiv.org/abs/2505.02653v1
single-cell RNA sequencing datasets with the hierarchical Dirichlet process. Econometrics and Statistics. MacEachern, S. N. (1999). Dependent nonparametric processes. In ASA Proceedings of the Section on Bayesian Statistical Science. Alexandria, VA: American Statistical Association. MacEachern, S. N. (2000).... | https://arxiv.org/abs/2505.02653v1
of Proceedings of Machine Learning Research, pages 564–571, San Juan, Puerto Rico. PMLR. Vershik, A., Yor, M., and Tsilevich, N. (2004). On the Markov–Krein identity and quasi-invariance of the gamma process. Journal of Mathematical Sciences, 121:2303–2310. Walker, S. and Damien, P. (2000). Representations of Lévy... | https://arxiv.org/abs/2505.02653v1
on (0,+∞) × X can have infinite mass on sets of the form (0, ϵ) × A. However, for every ϵ > 0 and for every bounded Borel set A, it must satisfy the following constraints: C1. the jump component is finite away from the origin, ν((ϵ,+∞) × A) < +∞; C2. the jump component is integrable at the origin, ∫∫_{(0,ϵ)×A} s dν(s, x) < +∞; C3. the ... | https://arxiv.org/abs/2505.02653v1
b > 0 if ρ has Lévy density and Laplace exponent equal to, respectively, ρ(s) = α e^{−bs}/s; ψ(λ) = α log(1 + λ/b). Definition 6. A random measure ˜µ ∼ CRM(ρ ⊗ P0) is a stable CRM of shape α > 0 and discount parameter σ ∈ (0,1) if ρ has Lévy density and Laplace exponent equal to, respectively, ρ(s) = (ασ/Γ(1−σ)) s^{−(1+σ)}; ψ(λ) = αλ^σ. The n... | https://arxiv.org/abs/2505.02653v1
log E[ e^{−˜µ0(A) Σ_{i=1}^d ψ(λ_i)} ] = −P0(A) ψ0( Σ_{i=1}^d ψ(λ_i) ). Step 3. Determine homogeneity and Laplace exponent. The multivariate Lévy measure ν_h and the Laplace exponent ψ_h of ˜µ are uniquely characterized by the Laplace transform. Given the product form of Step 2, it follows that ν_h = ρ_h ⊗ P0 for some multivariate ρ_h and the Lap... | https://arxiv.org/abs/2505.02653v1
for c a rational number. We conclude by density thanks to the continuity of f. For n ∈ N a natural number, by (19), f(ns) = f(s + ··· + s) = n f(s), while for q = n/m a rational number, with n, m ∈ N, n f(s) = f(ns) = f(qms) = m f(qs), which implies f(qs) = n f(s)/m = q f(s). Step 3. Conclusion. We have proved that there exists c > 0 such that, ... | https://arxiv.org/abs/2505.02653v1
Conditionally on ˜µ0, both ˜µ^(1) and ˜µ^(2) are CRMs, and we can express the condition ˜µ^(1) =_d c(˜µ0) ˜µ^(2) in terms of their conditional Lévy measures ρ^(1) and ρ^(2) as ρ^(2)(s) ⊗ ˜µ0/˜µ0(X) =_d c(˜µ0) ρ^(1)(c(˜µ0)s) ⊗ ˜µ0. Plugging in the expressions for the stable Lévy measures, we need to find c(˜µ0) such that (ασ/Γ(1−σ)) · 1/s^{σ+1}... | https://arxiv.org/abs/2505.02653v1
du. By the law of total covariance and the conditional independence of the random probability measures ˜P_i and ˜P_j for i ≠ j, given ˜µ0, Cov(˜P_i(A), ˜P_j(A)) = Cov( E(˜P_i(A) | ˜µ0), E(˜P_j(A) | ˜µ0) ) = Var( ˜µ0(A)/˜µ0(X) ) = −P0(A)(1 − P0(A)) ∫_0^{+∞} u e^{−ψ0(u)} ψ0′′(u) du. Proof of Proposition 4. The starting point of the proof is Theorem 5... | https://arxiv.org/abs/2505.02653v1
Throughout the proof, we exploit the following properties of the conditional expectation, which hold for any random variables X, Y and for any Borel set A such that P(Y ∈ A) > 0: a. conditional expectation with respect to events: E(X | Y ∈ A) = E(X 1_A(Y))/P(Y ∈ A); b. tower property: E(X) = E(E(X | Y)) and P(A) = E(1_A) = E(P(A | Y)). Ste... | https://arxiv.org/abs/2505.02653v1
each i = 1, . . . , d, and we have exchanged limit and expectation by the monotone convergence theorem. We observe that, for some ℓ ∈ {1, . . . , d}, ∂/∂η_ℓ Π_{i=1}^d e^{−∫_{B_j}(f_i(x)+η_i u_i) d˜µ_i(x)} = −u_ℓ ˜µ_ℓ(B_j) Π_{i=1}^d e^{−∫_{B_j}(f_i(x)+η_i u_i) d˜µ_i(x)}. This formula can be applied recursively, using the convention d^0/du^0 = Id, for Id the identit... | https://arxiv.org/abs/2505.02653v1
∂^{|A|}/(Π_{i∈A} ∂v_i) g(η, u) are asymptotically equivalent up to a constant. Hence, from (21), the asymptotically slowest term is the summand corresponding to the partition π with a minimal number of sets, that is, π = {1, . . . , n_{•j}}. Therefore, as ϵ → 0, (21) reads (−1)^{n_{•j}} ∂^{n_{•j}}/(∂η_1^{n_{1j}} ··· ∂η_d^{n_{dj}}) e^{−g_j(η,u)} = exp( −∫_{Ω_d×B_j} (1 − e^{−s... | https://arxiv.org/abs/2505.02653v1
(0,∞) with Laplace exponent ψ such that ID(ρ) has a p.d.f. denoted by f_{ID(ρ)}. For u > 0, define dρ_u(s) = e^{−us} dρ(s), the exponential tilting of ρ. Then for s, t, u > 0, e^{−us} f_{ID(tρ)}(s) = e^{−tψ(u)} f_{ID(tρ_u)}(s). Proof. Let X ∼ ID(tρ) and let X_u ∼ ID(tρ_u). By the uniqueness of the Laplace transform, it is enough to show that, for every λ > 0... | https://arxiv.org/abs/2505.02653v1
a vector β = (β1, . . . , β_d) of dependent random variables, the latent variables U1, . . . , U_d are independent and gamma distributed, with U_i ∼ Gamma(n_i, β_i), for i = 1, . . . , d. Moreover, each β_i = S_{i0} + S_{i1} + ··· + S_{ik}, where the joint density of S = (S_{ij})_{ij} is proportional to f_S(s) ∝ Π_{i=1}^d (s_{i0} + s_{i1} + ··· + s_{ik})^{−n_i} × ∫_{(0,+∞)^{k+1}} Π... | https://arxiv.org/abs/2505.02653v1
Moreover, the quantity C(m; t) in (9) is given by C(m; t) = ∫_{(0,+∞)^{k+1}} (s0 + ··· + s_k)^{−m•} f_{ID(t0ρ)}(s0) Π_{j=1}^k s_j^{m_j} f_{ID(t_jρ)}(s_j) ds = [ b^{α(t0+···+t_k)} / (Γ(αt0) Π_{j=1}^k Γ(αt_j)) ] ∫_{(0,+∞)} z^{α(t0+···+t_k)−1} e^{−bz} dz ∫_{∆_k} w_0^{αt0−1} Π_{j=1}^k w_j^{αt_j+m_j−1} dw = [ Γ(α(t0+···+t_k)) / Γ(α(t0+···+t_k) + m•) ] Π_{j=1}^k Γ(αt_j + m_j)/Γ(αt_j). (b) From the specification of ρ0 in Example 1,... | https://arxiv.org/abs/2505.02653v1
(12). Proof of Proposition 12. From Proposition 10, the joint density of αT and the random vector V = (V0, . . . , V_k) ∈ ∆_k of auxiliary variables can be rewritten as f_{αT,V}(t, v) ∝ t^{α0−1} e^{−(b0/α)t} v_0^{α0−1} Π_{i=1}^d 1/((t))_{n_{i•}} Π_{j=1}^k [ v_j^{−1} Π_{i=1}^d ( Σ_{h_{ij}=m_{ij}}^{n_{ij}} S(n_{ij}, h_{ij}) v_j^{h_{ij}} t^{h_{ij}} ) ] ∝ t^{α0−1} e^{−(b0/α)t} v_0^{α0−1} Π_{i=1}^d 1/((t))_{n_{i•}} Π_{j... | https://arxiv.org/abs/2505.02653v1
V_{−j,0} denotes the vector V ∈ ∆_k with components V_j and V0 removed. Moreover, the variables (V_j, V0) are additionally subject to the constraint Σ_{j=0}^k V_j = 1, while the variables αT and αJ_{0j} can take any positive value. At each iteration of the Gibbs sampler, we sequentially propose new values for each variable, and accept or reject the ... | https://arxiv.org/abs/2505.02653v1
that is, coincide with c(k). The entries of the vector c(ℓ) can be computed from the vector c(ℓ−1) as follows: c(ℓ; h) = Σ_{h1+···+h_ℓ=h, m•j≤h_j≤n•j} Π_{j=1}^ℓ a(j; h_j) = Σ_{h_ℓ=m•ℓ}^{n•ℓ} ( Σ_{h1+···+h_{ℓ−1}=h−h_ℓ, m•j≤h_j≤n•j} Π_{j=1}^{ℓ−1} a(j; h_j) ) a(ℓ; h_ℓ) = Σ_{h_ℓ=m•ℓ}^{n•ℓ} c(ℓ−1; h−h_ℓ) a(ℓ; h_ℓ). In other words, the vector c(ℓ) is obtained by convolution betwee... | https://arxiv.org/abs/2505.02653v1
stationary point or 0, for which R(t*(r)) = 0 holds. C.4 Sampling from the hierarchy of gamma CRMs. From Proposition 9(a), the residual component ˜µ* of the posterior distribution of ˜µ retains a hierarchical structure, conditionally on the latent variables U: ˜µ*_1, . . . , ˜µ*_d | ˜µ*_0, U ∼ Π_{i=1}^d CRM( α s^{−1} e^{−b(1+U_i/b)s} ds ⊗ ˜µ... | https://arxiv.org/abs/2505.02653v1
of the posterior random measure ˜µ_i | X_{1:d} not assigned to fixed locations can instead be sampled exactly from a hierarchy of gamma random variables. C.5 Inverting the exponential integral. The implementation of the inverse Lévy measure algorithm of Walker and Damien (2000) for the gamma CRM requires inverting the tail i... | https://arxiv.org/abs/2505.02653v1
method is proportional to the second derivative around the solution. The asymptotic expansion is also useful for improving the numerical stability of function evaluations. We argue that redefining the tail integral as a function on the whole real line, removing the restriction to positive values, may be a general technique... | https://arxiv.org/abs/2505.02653v1
Indeed, after obtaining samples from the variables αJ_{01}, . . . , αJ_{0k} and an approximation by truncation of α˜µ*_0, as described in Section C.4, the probability weights in (22) can be sampled from a (k+L)-dimensional Dirichlet distribution. In particular, for each i = 1, . . . , d and L ∈ N, π_{i1}, . . . , π_{ik}, π*_{i1}, . . ... | https://arxiv.org/abs/2505.02653v1
counts n_{•j} equal to 24 and 15, respectively. Again, the MCMC algorithms show good mixing, with acceptance rates between 0.42 and 0.52 and effective sample sizes after thinning above 700. Finally, the posterior distributions for the random jumps J_{ij} at values 2 and 3, for the d = 4 groups, are displayed in Figure 13; th... | https://arxiv.org/abs/2505.02653v1
Transformers for Learning on Noisy and Task-Level Manifolds: Approximation and Generalization Insights. Zhaiming Shen*, Alex Havrilla*, Rongjie Lai†, Alexander Cloninger‡, Wenjing Liao*. May 7, 2025. Abstract: Transformers serve as the foundational architecture for large language and video generation models, such as GPT, BERT, SO... | https://arxiv.org/abs/2505.03205v1
proved the in-context learning ability of transformers for least squares, ridge regression, Lasso and generalized linear models. Compared to transformers, feedforward and convolutional neural networks are significantly better understood in terms of approximation [Cybenko, 1989; Hornik et al., 1989; Leshno et al., 199... | https://arxiv.org/abs/2505.03205v1
on the projection of the input onto the manifold. Specifically, let M ⊆ [0,1]^D be a compact, connected d-dimensional Riemannian manifold isometrically embedded in R^D with a positive reach τ_M, and M(q) be a tubular region around the manifold M with local tube radius given by q ∈ [0,1) times the local reach (see Definitions 2... | https://arxiv.org/abs/2505.03205v1
multiplication, product, division, etc. Such implementations can be done efficiently (e.g., in parallel) on different tokens. These results can be applied individually as building blocks for approximation studies using Transformers. This paper is organized as follows. In Section 2, we introduce some preliminary defini... | https://arxiv.org/abs/2505.03205v1
width w_FFN}. We use the ReLU function σ(x) = max(x, 0) as the activation function in the feed-forward network. Note that each feed-forward layer is applied tokenwise to an embedding matrix H. Definition 9 (Attention and Multi-head Attention). The attention with the query, key, value matrices Q, K, V ∈ R^{d_embed×d_embed} is A_{K,Q,... | https://arxiv.org/abs/2505.03205v1
A. 3 Transformer Approximation and Generalization Theory. We next present our main results about approximation and generalization theories for estimating functions in (1). 3.1 Assumptions. Assumption 1 (Manifold). Let M ⊆ [0,1]^D be a non-empty, compact, connected d-dimensional Riemannian manifold isometrically embedded in... | https://arxiv.org/abs/2505.03205v1
satisfies E‖T̂_n − f‖²_{L²(P)} ≤ Õ( D² d³ n^{−2α/(2α+d)} ) (12), where Õ(·) hides the logarithmic terms dependent on D, d, q, n, α, L, R, τ_M, Vol(M), and constant terms dependent on d and Vol(M). The proof of Theorem 2 is provided in Section 4. Theorem 2 shows that the squared generalization error of T̂ is upper bounded in the order ... | https://arxiv.org/abs/2505.03205v1
H = [ x1 ··· xD 0 ··· 0; 0 ··· 0; I1 ··· Iℓ; 1 ··· 1 ] ∈ R^{d_embed×ℓ}, where ℓ ≥ 2D. Then there exists a transformer block B ∈ B(D, 3, d_embed) such that B(H) = [ x1 ··· xD x1+c1 ··· xD+cD 0 ··· 0; 0 ··· 0; I1 ··· Iℓ; 1 ··· 1 ] (15) with ‖θ_B‖_∞ ≤ O(ℓ²M²‖H‖²_{∞,∞}). We say B implements the additio... | https://arxiv.org/abs/2505.03205v1
1 ··· 1 ] ∈ R^{d_embed×ℓ}, where ℓ ≥ 2s. Then there exists a sequence of transformer blocks B_i ∈ B(2⌊(i−1)/3⌋, 3, d_embed), i = 1, ···, 3s, and B_{3s+1} ∈ B(r, 3, d_embed) such that B_{3s+1} ∘ ··· ∘ B_1(H) = [ (x1)_1 ··· (x1)_r Σ_{i=1}^r (x1)_i 0 ··· 0; 0 ··· 0; I1 ··· Iℓ; 1 ··· 1 ] with ‖θ_B‖_∞ ≤ O(ℓ²M²‖H‖²_{∞,∞}). We say B1, ···, B_{3s+1} imple... | https://arxiv.org/abs/2505.03205v1
The first result in this subsection establishes the representation of each ˜η_i(x). Proposition 1. Suppose Assumption 1 holds. Let {˜η_i(x)}_{i=1}^K be defined as in (22). Then for each fixed i, there exists a transformer network T(θ; ·) ∈ T(L_T, m_T, d_embed, ℓ, L_FFN, w_FFN, R, κ) with parameters L_T = O(d), m_T = O(D), d_emb... | https://arxiv.org/abs/2505.03205v1
(q), |f(x) − T(θ; x)| ≤ |f(x) − f̂(x)| + |f̂(x) − T(θ; x)|. For the first term, we have |f(x) − f̂(x)| = | Σ_{i=1}^K g(π_M(x)) η_i(x) − Σ_{i=1}^K g(z_i) η_i(x) | ≤ Σ_{i=1}^K |g(π_M(x)) − g(z_i)| η_i(x) ≤ L Σ_{i=1}^K d_M^α(π_M(x), z_i) η_i(x) ≤ L Σ_{i=1}^K (72δ/(1−q)²)^α η_i(x) = L (72δ/(1−q)²)^α. The last equality is due to the partition of unity, and the inequality before the last eq... | https://arxiv.org/abs/2505.03205v1
the statistical error. Figure 4: Left subplot: Estimated intrinsic dimension (ID) of pixel and embedded image representations with various amounts of isotropic Gaussian noise. Noise added on pixels quickly distorts low-dimensional structures. Embedding with the pre-trained model demonstrates a denoising effect, reco... | https://arxiv.org/abs/2505.03205v1
Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2023. Armen Aghajanyan, Luke Zettlemoyer, and Sonal Gupta. Intrinsic dimensionality explains the effectiveness of language model fi... | https://arxiv.org/abs/2505.03205v1
ossischen Technischen Hochschule Zürich, 1964, pp. 64–79. Kurt Hornik, Maxwell Stinchcombe, and Halbert White. Multilayer feedforward networks are universal approximators. Neural Networks, 2(5):359–366, 1989. Samy Jelassi, Michael Sander, and Yuanzhi Li. Vision transformers provably learn spatial structure. Advances... | https://arxiv.org/abs/2505.03205v1
2018. Utkarsh Sharma and Jared Kaplan. Scaling laws from the data manifold dimension. J. Mach. Learn. Res., 23(1), Jan. 2022. ISSN 1532-4435. Shokichi Takakura and Taiji Suzuki. Approximation and estimation ability of transformers for sequence-to-sequence functions with infinite dimensional input. In International Co... | https://arxiv.org/abs/2505.03205v1
in [Havrilla and Liao, 2024]. □ Remark 2. The significance of the Interaction Lemma is that we can find an attention head such that one token interacts with exactly one other token in the embedding matrix. This property facilitates the flexible implementation of fundamental arithmetic operations, such as addition, multipl... | https://arxiv.org/abs/2505.03205v1
··· 1 ] as desired. The weight bound ‖θ_B‖_∞ ≤ O(ℓ²M²‖H‖²_{∞,∞}) follows from Interaction Lemma 8. □ B.2.3 Proof of Lemma 3. Proof. [Proof of Lemma 3] Let us define each attention head A_i, 1 ≤ i ≤ D, with the data kernel of the form Q_i^data = [ 0 0 0; 0 c_i 0; 0 0 1 ], K_i^data = [ 1 0 0; 0 0 0; 0 0 M ]. Then by Interaction Lemma 8, we c... | https://arxiv.org/abs/2505.03205v1
1 to 2D. Therefore, we have B3 ∘ B2 ∘ B1(H) = B3(H2) = [ x1 ··· xD (x1)² ··· (xD)² 0 ··· 0; 0 ··· 0; I1 ··· Iℓ; 1 ··· 1 ] as desired. The weight bound ‖θ_B‖_∞ ≤ O(ℓ²M²‖H‖²_{∞,∞}) follows from Interaction Lemma 8. □ B.2.5 Proof of Lemma 5. Proof. [Proof of Lemma 5] First, applying Lemma 3 with mult... | https://arxiv.org/abs/2505.03205v1
of Lemma 6] It suffices to show the case r = 2^s. Let us proceed by induction on s. First, suppose B1, B2, B3 ∈ B(D, 3, d_embed) implement the squaring operation as shown in Lemma 4, i.e., H3 := B3 ∘ B2 ∘ B1(H) = [ x1 ··· xD (x1)² ··· (xD)² 0 ··· 0; 0 ··· 0; I1 ··· Iℓ; 1 ··· 1 ]. For th... | https://arxiv.org/abs/2505.03205v1
B_i ∈ B(2⌊(i−3)/3⌋, 3, d_embed), 3 ≤ i ≤ 3s+2, to implement all the powers (1 − cx1)^i, 1 ≤ i ≤ r. Then we can construct B_{3s+3} ∈ B(r, 3, d_embed) to add up all the powers, i.e., B_{3s+3} ∘ ··· ∘ B1(H) = [ x1 −cx1 1−cx1 (1−cx1)² ··· Σ_{i=1}^r (1−cx1)^i 0 ··· 0; 0 ··· 0; I1 ··· Iℓ; 1 ··· 1 ]. Then, we... | https://arxiv.org/abs/2505.03205v1
More precisely, we have H_{d+2} := B_{d+2}(H_{d+1}) = [ (H_{d+1})_{:,I_{d+1}} Σ_{j=1}^D (P_{z_i})_{j,1}(x_j − (z_i)_j) ··· Σ_{j=1}^D (P_{z_i})_{j,d}(x_j − (z_i)_j) 0 ··· 0; 0 ··· 0; I_{(d+2)D+1} ··· Iℓ; 1 ··· 1 ], where I_{d+1} = {1, ···, (d+2)D}. • Implementation of −‖P_{z_i}^⊤(x − z_i)‖²/(hδ²). Then by Lemma 4, we can construct B_{d+3} ∈ B(D, 3, d_embed) so that it implements the sq... | https://arxiv.org/abs/2505.03205v1
1 1 ], where I_{d+7} = {1, ···, (d+3)D + 2d + 6}. By reexamining the proof, we get L_T = O(d), m_T = O(D), d_embed = O(1), ℓ ≥ O(Dd), L_FFN = O(1), w_FFN = O(1), κ = O(D²d⁶δ⁻⁸). By hiding the dependency on d when it is not the dominating term, we have L_T = O(d), m_T = O(D), d_embed = O(1), ℓ ≥ O(D), L_FFN = O(1), w_FFN = O(1), κ = O(D²δ⁻⁸). □ Remark 5. The above pro... | https://arxiv.org/abs/2505.03205v1
Depth based trimmed means. Alejandro Cholaquidis†, Ricardo Fraiman†, Leonardo Moreno‡, Gonzalo Perera§. †Centro de Matemáticas, Facultad de Ciencias, Universidad de la República, Uruguay. ‡Instituto de Estadística, Departamento de Métodos Cuantitativos, FCEA, Universidad de la República, Uruguay. §Centro Universitario R... | https://arxiv.org/abs/2505.03523v1
decision-making, see Barnett et al. (1994). In medical research, trimming is applied to clinical datasets to eliminate outliers that may result from measurement errors or rare occurrences that do not reflect the broader population, see Farcomeni and Ventura. Extending these robust estimation ideas to the multivari... | https://arxiv.org/abs/2505.03523v1
in supervised classification. In a subsequent work, Cholaquidis et al. (2023) examined level sets of depth measures, providing uniform consistency results and new insights into central dispersion in general metric spaces. An early approach to the trimmed mean for multivariate data was introduced by Gordaliza (1991), w... | https://arxiv.org/abs/2505.03523v1
function and ∗ denotes the convolution. Let F be the set of continuous and bounded functions on R^d. Consider the map D = D(x, g) : R^d × F → R, where D(x, g) is the depth corresponding to the density g. The multivariate trimmed mean is defined as Π(D, f) := ∫_{R^d} t 1{D(t) ≥ a} f(t) dt, (1) while, if D_n denotes an estimator of D based on... | https://arxiv.org/abs/2505.03523v1
to illustrate the shape of the limiting distribution. 2 Main hypotheses. Let a ∈ (0, T) be fixed, and let D : R^d → (0, T] be a depth. The following set of hypotheses will be considered throughout the manuscript. (H1) D is continuous on R^d, and lim_{‖t‖→+∞} D(t) = 0. (H2) There exist δ > 0 and a finite number of points µ1, . . . , µ_ℓ, with µ_i ∈... | https://arxiv.org/abs/2505.03523v1
by (5), i.e., ν(x) = ( D(x), (x−µ)/‖x−µ‖ ). Observe that, by (3), µ ∉ τ(Ω). If x ≠ µ and y = τ(ν(x)), then D(y) = D(x), and there exists λ such that y = µ + λ (x−µ)/‖x−µ‖. But, from x = µ + ‖x−µ‖ (x−µ)/‖x−µ‖ and (H1), it follows that λ = ‖x−µ‖. Then x = y. This proves that τ(ν(x)) = x and τ∘ν = Id(τ(Ω)), (6) where Id(·) denotes the identity map.... | https://arxiv.org/abs/2505.03523v1
and Giné (1993). Moreover, other works exhibit further depth measures for which hypothesis (H'4) is satisfied (in the iid case); see Cholaquidis et al. (2023); Liu and Singh (1993); Zuo and Serfling (2000). For the simplicial depth, (H'4) holds even when X1, . . . , X_n are not iid, from Theorem 1 in Dümbgen (1992), when D_n(x) ... | https://arxiv.org/abs/2505.03523v1
≤ 2η‖G(a)‖. (C) From (A), (B), and (C), we get: lim_{ε→0+} ‖V_ε(a) − H(a)G(a)‖ ≤ lim_{ε→0+} ‖V_ε(a) − V_ε^η(a)‖ + lim_{ε→0+} ‖V_ε^η(a) − H^η(a)G(a)‖ + lim_{ε→0+} ‖H^η(a)G(a) − H(a)G(a)‖ ≤ 2η‖G(a)‖ + 0 + 0. Taking η ↓ 0+, we get: lim_{ε→0+} ‖V_ε(a) − H(a)G(a)‖ = 0. Thus, the Lemma follows. Proposition 9. Assume that (H1) to (H4) hold. Let 0 < a < T, r > 0, and U as in... | https://arxiv.org/abs/2505.03523v1
C‖Y − Y_i‖ ∀ Y ∈ X, 1 ≤ i ≤ N_η. (D) On the other hand, ‖∆_ε(Y) − ∆_ε(Y^(i))‖ = (1/ε) ‖ ∫_K t [ 1{D(t)+εY^(i)(t) > a} − 1{D(t)+εY(t) > a} ] f(t) dt ‖ ≤ ∫_K ‖t‖ f(t) 1{ a − εY^(i)(t) − ε‖Y^(i)−Y‖_∞ ≤ D(t) < a − εY^(i)(t) + ε‖Y^(i)−Y‖_∞ } dt/ε ≤ ∫_K ‖t‖ f(t) 1{ a − εY^(i)(t) − ε‖Y^(i)−Y‖_∞ < D(t) ≤ a − εY^(i)(t) + ε‖Y^(i)−Y‖_∞ } dt/ε. Then, if we use the same change of variable as in (9), ... | https://arxiv.org/abs/2505.03523v1
y τ(a, ⃗n) d⃗n, by Proposition 9. The second term converges to ∫_{R^d} t 1{D(t) ≥ a} g(t) dt, by Proposition 10. Hence the limit of the difference is exactly the derivative claimed. Remark 12. If ‖Y‖_∞ ≤ r, then for ε small enough, [ T(D + εY, f + εg) − T(D, f) ]/ε = ∫_{R^d} t [ 1{D(t)+εY(t) ≥ a} − 1{D(t) ≥ a} ]/ε f(t) dt + ∫_{R^d} t 1{D(t)+εY(t) ≥ a} g(t) dt. ... | https://arxiv.org/abs/2505.03523v1
each measure, 500 values are simulated from the distribution of R_n using two values of a (selected based on the range of values for each depth measure) and assuming a_n = √n. Each simulation of R_n is based on a sample of size n = 500 drawn from a bivariate Beta distribution with independent components, where each component fol... | https://arxiv.org/abs/2505.03523v1
169:81–94. Fraiman, R., Meloche, J., García-Escudero, L. A., Gordaliza, A., He, X., Maronna, R., Yohai, V. J., Sheather, S. J., McKean, J. W., Small, C. G., et al. (1999). Multivariate L-estimation. Test, 8:255–317. Fraiman, R. and Muniz, G. (2001). Trimmed means for functional data. Test, 10(2):419–440. García-Escu... | https://arxiv.org/abs/2505.03523v1
Maximum likelihood estimation for the λ-exponential family. Xiwei Tian1, Ting-Kam Leonard Wong1 [0000-0001-5254-7305], Jiaowen Yang2, and Jun Zhang3. 1University of Toronto, Toronto, Ontario, Canada, xiwei.tian@mail.utoronto.ca, tkl.wong@utoronto.ca. 2Meta, Menlo Park, California, United States, jiaowen@meta.com. 3University ... | https://arxiv.org/abs/2505.03582v1
known (see [2, Section 2.8.3]) that the maximum likelihood estimate (MLE) θ̂ given data points x1, . . . , x_n (under independent sampling) is characterized by ∇ϕ(θ̂) = (1/n) Σ_{i=1}^n F(x_i). (4) That is, the MLE η̂ of the dual (expectation) parameter η = ∇ϕ(θ) is given simply by the sample mean of the sufficient statistic. For a λ-... | https://arxiv.org/abs/2505.03582v1
not have a closed-form solution. However, it suggests the following procedure for computing the MLE. Algorithm 1 (Fixed-point iteration). Given data y1 = F(x1), . . . , y_n = F(x_n) and an initial guess θ^(0) ∈ Θ, define (η^(k))_{k≥1} ⊂ Ξ and (θ^(k))_{k≥1} ⊂ Θ by η^(k+1) := Σ_{i=1}^n w_i(θ^(k)) y_i, θ^(k+1) := ∇^(λ)ψ(η^(k+1)). (12) More compactly, we may expre... | https://arxiv.org/abs/2505.03582v1
ℓ(θ^(k+1)) > ℓ(θ^(k)). 3 Dirichlet perturbation. The Dirichlet perturbation model is a multiplicative analogue of the normal location model and is a key example of the λ-exponential family [13,17]. Let ∆_d be the open unit simplex in R^{d+1} defined by ∆_d = { p = (p0, . ... | https://arxiv.org/abs/2505.03582v1
values of d and n). In Figure 2 we plot the data and the output of the algorithm (25) with several initial values. In all cases, the iterates converge quickly to the MLE. In practice, we may initialize p^(0) by the sample mean (1/n) Σ_{i=1}^n q_i, as it tends to be close to the MLE. Acknowledgments. The research of T.-K. L. Wong is p... | https://arxiv.org/abs/2505.03582v1
Learning Survival Distributions with the Asymmetric Laplace Distribution. Deming Sheng1, Ricardo Henao1. Abstract: Probabilistic survival analysis models seek to estimate the distribution of the future occurrence (time) of an event given a set of covariates. In recent years, these models have preferred nonparametric s... | https://arxiv.org/abs/2505.03712v2
neural networks to capture nonlinear relationships while preserving the structure of models such as the Cox proportional hazards model. Nonparametric approaches, such as DeepHit (Lee et al., 2018) and CQRNN (Pearce et al., 2022), leverage deep learning to directly estimate survival functions without relying on tradit... | https://arxiv.org/abs/2505.03712v2
S(t) = 1 − F(t). The hazard function, denoted λ(t), describes the instantaneous risk that the event occurs at a specific time t, given that the individual has survived up to that point. Formally, it is defined as: λ(t) = lim_{∆t→0} P(t ≤ T < t + ∆t | T ≥ t)/∆t. The hazard function is related to the survival function through: ... | https://arxiv.org/abs/2505.03712v2
activation functions. The outputs of the model connected to θ, σ and κ are further constrained to be non-negative through an exponential (Exp) activation. In addition, a residual connection is included to enhance gradient flow and improve model stability. See Appendix B.3 for more details about the architecture of the mode... | https://arxiv.org/abs/2505.03712v2
as the pinball or checkmark loss (Koenker & Bassett Jr, 1978), which is widely used in the quantile regression literature. Importantly, unlike the objective for our approach in (3), CQRNN does not maximize the survival probability directly. Instead, it adopts the also widely used approach based on Portnoy's esti... | https://arxiv.org/abs/2505.03712v2
assumes that the logarithm of survival times follows a normal distribution, enabling straightforward parameterization of survival curves. In modern approaches (Hoseini et al., 2017), neural networks are employed to learn the parameters of the assumed distribution, e.g., the mean and variance for the log-normal. This ... | https://arxiv.org/abs/2505.03712v2
distribution estimates, respectively, that lack continuity and smoothness, leading to survival estimates that can complicate summarization, interpretation, and downstream analysis. 5. Experiments. 5.1. Datasets. We utilize two types of datasets, following Pearce et al. (2022): (Type 1) synthetic data with synthetic cen... | https://arxiv.org/abs/2505.03712v2
Table 2. Summary of benchmarking results across 21 datasets. Each column group shows three figures: the number of datasets where our method significantly outperforms, underperforms or is comparable with the baseline indicated. The last two rows summarize the column totals and proport... | https://arxiv.org/abs/2505.03712v2
survival model that assumes that the event times follow a log-normal distribution. DeepSurv (Katzman et al., 2018): A semi-parametric survival model based on the Cox proportional hazards framework, leveraging deep neural networks for the representation of the time-independent hazards. DeepHit (Lee et al., 2018): A d... | https://arxiv.org/abs/2505.03712v2
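
A few of the excerpts above describe computations concretely enough to sketch in code. The snippets below are illustrative Python templates, with all names, parameter values, and bracketing choices ours rather than the papers'. First, the arXiv:2505.02653 rows describe a Metropolis–Hastings within Gibbs algorithm for sampling αT, whose density is known only up to a normalizing constant; a generic coordinate-wise skeleton of that scheme:

```python
import numpy as np

rng = np.random.default_rng(1)

def mh_within_gibbs(log_target, x0, n_iter=1000, step=0.1):
    """Metropolis-Hastings within Gibbs with random-walk proposals.

    log_target: unnormalized log density (no normalizing constant needed,
    matching the situation for alpha*T in the excerpt); x0: initial state
    inside the support. Returns the chain as an (n_iter, dim) array.
    """
    x = np.asarray(x0, dtype=float)
    chain = np.empty((n_iter, x.size))
    for it in range(n_iter):
        for j in range(x.size):  # update one coordinate at a time
            prop = x.copy()
            prop[j] += step * rng.normal()
            # accept with probability min(1, target(prop)/target(x))
            if np.log(rng.uniform()) < log_target(prop) - log_target(x):
                x = prop
        chain[it] = x
    return chain
```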
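The same paper's appendix row states the recursion c(ℓ; h) = Σ_{h_ℓ} c(ℓ−1; h − h_ℓ) a(ℓ; h_ℓ), i.e., c(ℓ) is the convolution of c(ℓ−1) with a(ℓ; ·). A direct sketch, with each coefficient family a(j; ·) represented as a dict over its support (our representation):

```python
def convolve_coefficients(a):
    """Given a list a where a[j] is a dict {h: a(j+1; h)} supported on
    m_j <= h <= n_j, return c(k) = a(1) * ... * a(k) as a dict {h: c(k; h)},
    built by the recursion c(l; h) = sum_{h_l} c(l-1; h - h_l) * a(l; h_l)."""
    c = dict(a[0])  # base case: c(1; h) = a(1; h)
    for a_l in a[1:]:
        nxt = {}
        for h_prev, c_val in c.items():
            for h_l, a_val in a_l.items():
                nxt[h_prev + h_l] = nxt.get(h_prev + h_l, 0.0) + c_val * a_val
        c = nxt
    return c

# toy check: (0.5x + 0.5x^2) * (0.2 + 0.8x) = 0.1x + 0.5x^2 + 0.4x^3
print(convolve_coefficients([{1: 0.5, 2: 0.5}, {0: 0.2, 1: 0.8}]))
```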
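The arXiv:2505.02653 rows on Appendix C.5 note that the inverse Lévy measure algorithm of Walker and Damien (2000) requires inverting the tail integral of the gamma CRM. For the gamma Lévy density ρ(s) = α e^{−bs}/s quoted above, the tail integral is N(v) = ∫_v^∞ α e^{−bs}/s ds = α E1(bv), with E1 the exponential integral. A sketch of the inversion by bracketed root-finding; the paper discusses a numerically stabilized approach via asymptotic expansions, while the crude brackets below are our choice:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import exp1  # exponential integral E1

def invert_gamma_tail(xi, alpha=1.0, b=1.0):
    """Solve N(v) = alpha * E1(b * v) = xi for v > 0 (N is strictly decreasing)."""
    g = lambda v: alpha * exp1(b * v) - xi
    return brentq(g, 1e-300, 1e3)  # assumes N(1e3) < xi < N(1e-300)

# jump sizes of a gamma CRM, largest first, from a unit-rate Poisson process
rng = np.random.default_rng(0)
xi = np.cumsum(rng.exponential(size=5))
print([invert_gamma_tail(x) for x in xi])
```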
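The arXiv:2505.03582 row states Algorithm 1 (fixed-point iteration) for the λ-exponential family MLE: η^(k+1) := Σ_i w_i(θ^(k)) y_i followed by θ^(k+1) := ∇^(λ)ψ(η^(k+1)). A generic template; the callables `weights` and `grad_psi_lambda` stand in for the model-specific maps the paper derives, and are assumptions here:

```python
import numpy as np

def fixed_point_mle(y, theta0, weights, grad_psi_lambda, max_iter=500, tol=1e-10):
    """Iterate eta <- sum_i w_i(theta) y_i, theta <- grad^(lambda) psi(eta).

    y: (n, p) array of sufficient statistics y_i = F(x_i);
    weights(theta): length-n array of nonnegative weights summing to 1;
    grad_psi_lambda(eta): the dual-to-primal map of the family.
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        eta = weights(theta) @ y          # weighted mean of the y_i
        theta_new = grad_psi_lambda(eta)  # map back to the primal parameter
        if np.linalg.norm(theta_new - theta) < tol:
            break
        theta = theta_new
    return theta
```

With uniform weights the update reduces to the sample mean of the y_i, consistent with the classical characterization (4) quoted in the table above.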
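The arXiv:2505.03712 rows define the hazard λ(t) = lim_{∆t→0} P(t ≤ T < t + ∆t | T ≥ t)/∆t and refer (in a truncated passage) to its relation with the survival function; the standard identities are λ(t) = f(t)/S(t) and S(t) = exp(−∫_0^t λ(s) ds). A quick numerical check on an exponential distribution, which is our example rather than the paper's:

```python
import numpy as np

rate = 2.0
t = np.linspace(0.0, 3.0, 3001)
S = np.exp(-rate * t)          # survival function S(t) = 1 - F(t)
f = rate * np.exp(-rate * t)   # density
hazard = f / S                 # lambda(t) = f(t)/S(t); constant for exponentials
assert np.allclose(hazard, rate)

# S(t) = exp(-cumulative hazard), checked with a trapezoidal integral
dt = t[1] - t[0]
cum_hazard = np.concatenate([[0.0], np.cumsum((hazard[1:] + hazard[:-1]) / 2) * dt])
assert np.allclose(np.exp(-cum_hazard), S, atol=1e-6)
```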
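Finally, the same paper's rows contrast its asymmetric-Laplace likelihood with CQRNN's pinball (checkmark) loss of Koenker & Bassett Jr (1978). The standard definition is ρ_τ(u) = u(τ − 1{u < 0}); a sketch with names of our choosing:

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Mean check loss rho_tau(u) = u * (tau - 1{u < 0}), u = y_true - y_pred."""
    u = y_true - y_pred
    return np.mean(u * (tau - (u < 0).astype(float)))

# sanity check: at tau = 0.5 the check loss is half the mean absolute error
y = np.array([1.0, 2.0, 3.0])
q = np.array([1.5, 1.5, 1.5])
assert np.isclose(pinball_loss(y, q, 0.5), 0.5 * np.mean(np.abs(y - q)))
```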