text string | source string |
|---|---|
hypothesis of no signal. Consequently, we proceed by considering the next, larger, regression tree $T_{m'}$, $m' > m$, in the sequence of nested regression trees. If, when considering the regression tree $T_{m'}$, we find that $\sum_{k=1}^{m'-1} p_{\mathrm{obs},k} > \delta$, (8) then the procedure stops and the previous regression tree $T_m$ is selected as the optima... | https://arxiv.org/abs/2505.18769v1 |
n)). The approximation $p_n(u)$ from (10) corresponds to Eq. (2.5) on p. 345 in [Yao and Davis, 1986]. The true $p$-value is the function $u \mapsto P\big(U^{(n)}_{\max} > u\big)$ evaluated at the observed value for $U^{(n)}_{\max}$. The true $p$-value is approximated from above by $P^{(n)}_{\max} := \widehat{p}_n\big(U^{(n)}_{\max}\big)$. (11) We emphasise that given an observation $u_{\mathrm{obs}}$,... | https://arxiv.org/abs/2505.18769v1 |
Based on the notation in Section 2, the split is accepted at significance level $\varepsilon$ if $U^{(n)}_{\max} = \frac{S - (S_{\le r^*} + S_{> r^*})}{S/n} > u_\varepsilon$, (13) where "$*$" indicates that we consider the optimal split, and where $u_\varepsilon$ is the solution to $\widehat{p}_n(u_\varepsilon) = \varepsilon$, (14) where $p_n(u)$ is from (10). An equivalent rephrasing of (13) is $\mathrm{MSE}_1 - \mathrm{MSE}_2 - u_\varepsilon \widehat{\sigma}^2 > 0$, (15) where M... | https://arxiv.org/abs/2505.18769v1 |
be too surprising, since the above application of the $C_p$ statistic does not take into account that the candidate split point has been chosen by minimising an $L_2$ loss. For a $p$-parameter Gaussian model the $C_p$ statistic coincides with the Akaike information criterion (AIC); see e.g. [Hastie et al., 2009, Ch. 7.5, Eq. (7.29... | https://arxiv.org/abs/2505.18769v1 |
Middle table: the analogous quantiles for dependent normal covariates with a common pairwise correlation of $\rho = 0.8$ and standard variances. Right table: quantile approximation corresponding to (10). ... distribution of $U_{\max}$ under $H_0$ and that the quantile approximations provide good upper bounds. We now turn to assuming that ... | https://arxiv.org/abs/2505.18769v1 |
as a data set for pure out-of-sample testing. Figure 4: Left plot: MSEP (blue) and MSE (orange) for each tree in the nested sequence of cost-complexity-pruned subtrees. Right plot: cumulative $p$-value for each tree in the nested sequence of cost-complexity-... | https://arxiv.org/abs/2505.18769v1 |
the root node. Since cross-validation results in a non-deterministic $\vartheta$, we realise two distinct $\vartheta$ values corresponding to two distinct trees in the sequence of cost-complexity-pruned trees. [Tree diagram: split rules such as $X_1 \le 0.00$, $X_2 \le -0.01$, $X_3 \le 0.04$, $X_4 \le 0.17$, $X_7 \le 0.3$, each annotated with node values and $p$-values.] ... | https://arxiv.org/abs/2505.18769v1 |
CV, Panel (c) shows the tree obtained using the $p$-value method. Leaf values correspond to mean values. All RMSE values can be found in Table 2. ... levels of $\delta$. The value $\delta = \infty$ gives a boosted-trees procedure similar to the ABT machine from [Huyghe et al., 2024] in the case of $L_2$-boosting. It should be noted that the GBM stopping c... | https://arxiv.org/abs/2505.18769v1 |
variates. Sankhyā: The Indian Journal of Statistics, Series A, pages 339–353. A Proofs. A.1 Proof of Proposition 1. Before starting the proof of Proposition 1 we note the following: Remark 3. The distribution of the observed test statistic $U^{(n)}_{\max}$ does not depend on $\sigma$ under the alternative hypothesis. Under the alter... | https://arxiv.org/abs/2505.18769v1 |
that $\lim_{n\to\infty} P_{A(n)}(F_n < 1) = 0$. The proof is complete. Lemma 6. For every $\eta \in \mathbb{R}$, $\liminf_{n\to\infty} P_{A(n)}(T_n^2 > c_n^2) \ge \alpha + \Phi(\eta)(1-\alpha)$. Proof. Fix $\eta \in \mathbb{R}$. From the expression for the tail probability on page 350 in [Yao and Davis, 1986] we see that for each $n$, $P_{A(n)}(T_n^2 > c_n^2) \ge P(B_{n,1}\cap B_{n,2}\cup A_n(\theta_n))$. The events $B_{n,1}, B_{n,2}$ are independent of $\theta$... | https://arxiv.org/abs/2505.18769v1 |
arXiv:2505.19104v1 [math.ST] 25 May 2025. Distributional Limit Theory for Optimal Transport. Eustasio del Barrio, Alberto González-Sanz, Jean-Michel Loubes, David Rodríguez-Vítores. Abstract: Optimal Transport (OT) is a resource allocation problem with applications in biology, data science, economics and statistics, among ot... | https://arxiv.org/abs/2505.19104v1 |
above were the empirical versions of $T_c(P,Q)$: namely, given the observation of an i.i.d. sample $X_1,\dots,X_n \sim P$, we look at $T_c(P_n,Q)$, where $P_n$ denotes the empirical measure on the sample (for the sake of brevity we focus here on one-sample problems, but all the results that we present can be adapted to the two-sample case). Th... | https://arxiv.org/abs/2505.19104v1 |
possibility, for instance, to build asymptotically valid confidence intervals for the EOT cost in any dimension. Here we present a review of the best distributional limit theorems available and a discussion on the different proof techniques leading to them. An alternative way to benefit from OT data analysis tools whil... | https://arxiv.org/abs/2505.19104v1 |
features of the theory show up in this simpler setup: CLTs for the fluctuation of the OT cost with respect to its expected value hold with great generality; Gaussianity of the limiting distribution of the empirical OT cost is related to the regularity of the cost; limit theorems for the OT plans and maps require more r... | https://arxiv.org/abs/2505.19104v1 |
facts turn the study of distributional limit laws for empirical estimators of OT maps on the real line into exercises about weak convergence of empirical or quantile processes in some $L^p$ space. However, weak convergence of these processes typically requires stronger assumptions than weak convergence of the transportatio... | https://arxiv.org/abs/2505.19104v1 |
2.1 is that non-strictly-convex cost functions (the case $p = 1$) result in some differences with respect to the CLT. We see that non-Gaussian limiting distributions can show up. In fact, this will always be the case if $P = Q$. The limiting distribution will be the law of the centered version of the right-hand side in (1... | https://arxiv.org/abs/2505.19104v1 |
Theorem 2.3. Assume $P$ has a continuous density $f$ such that $f \circ F^{-1}$ is continuous in $(0,1)$ and monotonic in a neighbourhood of $0$ and $1$. If for some $p \ge 1$, $J_p(P) := \int_0^1 \frac{(t(1-t))^{p/2}}{f^p(F^{-1}(t))}\,dt < \infty$, (16) then $n^{p/2}\,T_p(P_n,P) \to_w \int_0^1 \frac{\lvert B(t)\rvert^p}{f^p(F^{-1}(t))}\,dt$, where $B$... | https://arxiv.org/abs/2505.19104v1 |
in turn, if and only if $P$ is supported on an interval and the absolutely continuous component of $P$ has on that interval an a.e. positive density (this is Proposition A.17 in [7]). In this case $p = 1$ it is of interest to observe that (11) is the necessary and sufficient condition for weak convergence of $\sqrt{n}(F_n - F)$ as a random e... | https://arxiv.org/abs/2505.19104v1 |
convergence of variances. We refer to [28] for details. Theorem 3.1 also holds for the quadratic cost and compactly supported measures in infinite-dimensional Hilbert spaces [57]. 3.2 Centering constants in dimension $d > 1$. As in the one-dimensional case, it would be of interest to obtain conditions ensuring that the bia... | https://arxiv.org/abs/2505.19104v1 |
smoothness of the cost. A similar result holds when the supports of $P$ and $Q$ are countable, under the additional assumption $\sum_{i=1}^{\infty} \lVert x_i \rVert^p \sqrt{p_i} < \infty$; see [111] for details. A central limit theorem for the OT plans in discrete spaces is derived in [78], where the corresponding limit distribution is usually non-Gauss... | https://arxiv.org/abs/2505.19104v1 |
transport potentials or plans? This remains an open problem, except in the semi-discrete case, which we will examine next. 3.5 Semi-discrete OT. An important application of Theorem 3.3 arises in the setting of semi-discrete optimal transport, where one of the probability measures, $P$ or $Q$, is discrete while ... | https://arxiv.org/abs/2505.19104v1 |
equilibrium $Q(\mathrm{Lag}_i(z^*)) = p_i$ is attained, for all $i = 1,\dots,N$. The nondegeneracy of the Hessian yields a Polyak-Łojasiewicz inequality for the dual functional. Then, due to the intrinsic parametric complexity of the problem, [29] proved that the rates of convergence of the empirical potential $z^{(n)}$ are parametric and deriv... | https://arxiv.org/abs/2505.19104v1 |
from zero. Then: 1. There exists a nonzero random variable $V_\gamma$ such that $n^{\frac{1}{2\gamma}} \lVert \nabla\varphi_n - \nabla\varphi^* \rVert_{L^\gamma(Q)} \to_w V_\gamma$. 2. There exists a tight random element $G$ in the dual Banach space $(C^\beta(\Omega';\mathbb{R}^d))'$ such that the element $C^\beta(\Omega';\mathbb{R}^d) \ni f \mapsto \sqrt{n}\,\langle \nabla\varphi_n - \nabla\varphi^*, f\rangle_{L^2(Q)}$ of $(C^\beta(\Omega';\mathbb{R}^d))'$ converges in distribution to $G$ in $(C^\beta(\Omega';\mathbb{R}^d))'$. Apart from these two points, [52] pr... | https://arxiv.org/abs/2505.19104v1 |
the curse of dimensionality for a fixed regularization parameter $\epsilon > 0$ [48, 90]. Recently, other types of penalizations for the OT problem have been proposed in the literature [94, 6, 97]. These approaches may be particularly applicable in cases where a sparse approximation of the OT plan is desired [60, 61, 117]. We do no... | https://arxiv.org/abs/2505.19104v1 |
to additive constants) of the empirical version of the Schrödinger system (27), that is, the version in which we replace $P$ with $P_n$. It is convenient at this point to introduce the notation $\Gamma(f,g) := \Big(\int \exp\big(\tfrac{f(x)+g(y)-c(x,y)}{\epsilon}\big)\,dQ(y),\ \int \exp\big($... | https://arxiv.org/abs/2505.19104v1 |
enables us to interpret the difference $(f_\epsilon - f_{\epsilon,n},\, g_\epsilon - g_{\epsilon,n})$ as an infinite-order V-statistic, facilitating the derivation of the central limit theorem in this more general setting. Theorem 4.4. Fix $\eta: \mathbb{R}^d\times\mathbb{R}^d\to\mathbb{R}$. Assume that the cost function $c: \mathbb{R}^d\times\mathbb{R}^d\to\mathbb{R}$ and $\eta$ are measurable in the support of $P\otimes Q$. Then it holds that $\sqrt{n}\int$... | https://arxiv.org/abs/2505.19104v1 |
hence, the test statistic $D_\epsilon(P_n,P)$ is not distribution-free. Finding consistent estimators of the sequence $\{\lambda_i\}_{i\in\mathbb{N}} \subset [0,\infty)$ is an open problem. We believe that the series expansion of the linearized operators provided in [58] might give a consistent estimator of these parameters. 4.2 Smooth optimal transport. The curse of dime... | https://arxiv.org/abs/2505.19104v1 |
null hypothesis $P = Q$, $\sqrt{n}\,W^\sigma_p(P_n,P)$ also has a dual norm as its limiting distribution [102, 95]. However, for $p = 1$ and under the alternative, the limiting distribution is in general not Gaussian, due to the non-uniqueness of the potentials. The proof technique follows the functional delta method and the fact that first ... | https://arxiv.org/abs/2505.19104v1 |
this linearized equation admits a unique classical solution (up to additive constants) for smooth data (hence, the linearized operator is invertible); see [86]. In the Euclidean case, the boundary conditions need to be linearized too. This is the first major difficulty for Euclidean data. However, it can be solv... | https://arxiv.org/abs/2505.19104v1 |
[3]. Theorem 4.8. Fix $d \ge 3$ and $s \ge d$. Let $p, q$ be densities in $\mathbb{R}^d/\mathbb{Z}^d$ such that $\log(p)$ and $\log(q)$ are $C^s$. Assume that $K \in C_c^\infty((0,1)^d)$ is an even kernel such that its Fourier transform $\mathcal{F}(K)$ satisfies $\sup_{x \ne 0} \frac{\lvert [\mathcal{F}(K)](x) - 1\rvert}{\lVert x\rVert^{s+1}} < \infty$. Then for any $x \in \mathbb{R}^d/\mathbb{Z}^d$ and $h = c\,n^{-\beta}$ for some $c > 0$ and $\frac{1}{d+4} < \beta <$ ... | https://arxiv.org/abs/2505.19104v1 |
the extended functional delta method under the assumption of compactly supported probabilities with convex support for $p > 1$, and under mild moment assumptions for $p = 1$. The assumptions required for $p = 1$ were further refined by [116]. Later, [74] provided a refined version of the results in [115] and [53], again for comp... | https://arxiv.org/abs/2505.19104v1 |
AI arises when a variable which characterizes a group of individuals systematically affects the behaviour of an algorithm. This implies that the decisions or the performance of the algorithm may differ across subgroups, leading to possible infringement of their fundamental rights and thus to discriminati... | https://arxiv.org/abs/2505.19104v1 |
element in $L^p(0,1)$? In the null case $P = Q$ and strictly convex cost ($p > 1$), the fluctuation CLT yields a null limiting distribution. With a faster rate there is still some potential room for a different CLT. Building upon earlier work by Ambrosio and coauthors (see [8... | https://arxiv.org/abs/2505.19104v1 |
[12] J. Cárcamo, A. Cuevas, and L.-A. Rodríguez. Directional differentiability for supremum-type functionals: Statistical applications. Bernoulli, 26:2143–2175, 2020. [13] G. Carlier. On the linear convergence of the multimarginal Sinkhorn algorithm. SIAM Journal on Optimization, 32(2):786–794, 2022. [14] G. Carlier an... | https://arxiv.org/abs/2505.19104v1 |
[32] E. Del Barrio, H. Inouzhe, J.-M. Loubes, C. Matrán, and A. Mayo-Íscar. optimalflow: Optimal transport approach to flow cytometry gating and population matching. BMC Bioinformatics, 21:1–25, 2020. [33] E. del Barrio and J. Loubes. Central limit theorems for empirical transportation cost in general dimens... | https://arxiv.org/abs/2505.19104v1 |
Conference on Artificial Intelligence and Statistics, pages 3327–3337. PMLR, 2020. [51] Z. Goldfeld, K. Greenewald, J. Niles-Weed, and Y. Polyanskiy. Convergence of smoothed empirical measures with applications to entropy estimation. IEEE Transactions on Information Theory, 66(7):4368–4391, 2020. [52] Z. Goldfeld, K. K... | https://arxiv.org/abs/2505.19104v1 |
Bernoulli, 30(4):2846–2877, 2024. [74] S. Hundrieser, G. Mordant, C. A. Weitkamp, and A. Munk. Empirical optimal transport under estimated costs: Distributional limits and statistical applications. Stochastic Processes and their Applications, 178:104462, 2024. [75] S. Hundrieser, T. Staudt, and A. Munk. Empirical opti... | https://arxiv.org/abs/2505.19104v1 |
Feb. 2017. [95] S. Nietert, Z. Goldfeld, and K. Kato. Smooth p-Wasserstein distance: Structure, empirical approximation, and statistical applications. In M. Meila and T. Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 81... | https://arxiv.org/abs/2505.19104v1 |
2022. [117] S. Zhang, G. Mordant, T. Matsumoto, and G. Schiebinger. Manifold learning with sparse regularised optimal transport. arXiv:2307.09816, 2023. 8 Supplement. Proof of Theorem 2.1. The convergence statement (7) is Theorem 2.1 in [31]. The limiting variance is defined as follows. We set $h_p(x) = \lvert x\rvert^p$. Then, $h'_p(x)$ ... | https://arxiv.org/abs/2505.19104v1 |
and $P'_n$ denotes the empirical d.f. on $X'_1, X_2,\dots,X_n$, with $X'_1$ a further independent observation from $P$. We observe that $\lvert Z - Z'\rvert \le \sup_{f:\,\lVert f\rVert_{\mathrm{Lip}}\le 1} \big\lvert \int f\,d(P_n - P'_n)\big\rvert \le \frac{1}{n}\lVert X_1 - X'_1\rVert$, ... | https://arxiv.org/abs/2505.19104v1 |
arXiv:2505.19351v1 [math.AC] 25 May 2025. Squared Linear Models. Hannah Friedman, Bernd Sturmfels and Maximilian Wiesmann. Abstract: We study statistical models that are parametrized by squares of linear forms. All critical points of the likelihood function are real and positive. There is one for each region of the projec... | https://arxiv.org/abs/2505.19351v1 |
summarize our main results. Given the model (1), we fix data $s_1,\dots,s_n \in \mathbb{R}_{>0}$. We are interested in the log-likelihood function $x \mapsto s_1\log\ell_1^2(x) + \cdots + s_n\log\ell_n^2(x) - (s_1+\cdots+s_n)\log\big(\ell_1^2(x)+\cdots+\ell_n^2(x)\big)$. (4) In Section 2 we prove that all complex critical points of (4) are real, and there is precisely one critical poi... | https://arxiv.org/abs/2505.19351v1 |
model. This variety is the complement in $\mathbb{P}^{d-1}$ of the $n$ hyperplanes $V(\ell_1),\dots,V(\ell_n)$ and the quadric $V(q)$ defined by $q = \ell_1^2+\cdots+\ell_n^2$. Its Euler characteristic is an upper bound on the number of real critical points. One argues that there is at least one critical point per region of $\mathbb{P}^{d-1}_{\mathbb{R}} \setminus \bigcup_{i=1}^n V(\ell_i)$. This gives a lower... | https://arxiv.org/abs/2505.19351v1 |
hypothesis. We conclude that the $\ell_i$ are in general position relative to the quadric $q$ in the complex projective space $\mathbb{P}^{d-1}$. Remark 2.3. The hypothesis that $\ell_1,\dots,\ell_n$ have real coefficients is necessary in Lemma 2.2. For example, if $d = 2$, $n = 3$ and $\ell_1 = \sqrt{-1}\cdot x$, $\ell_2 = y$, $\ell_3 = x+y$, then the quadric factors as $q = \ell_1^2+\ell_2^2+\ell_3^2$... | https://arxiv.org/abs/2505.19351v1 |
birational inverse from $X$ to $\mathbb{P}^{d-1}$ can be found using Gröbner bases, but its coordinates are generally complicated. For instance, here is such an inversion formula for Example 1.1: $\frac{x_1}{x_3} = \frac{p_1^2+p_2^2+p_3^2+p_4^2-2p_1p_2+6p_1p_3-2p_1p_4-2p_2p_3-2p_2p_4-2p_3p_4}{4p_3(-p_1+p_2-p_3+p_4)}$. If the matroid of $A$ is not connected, then the map $\mathbb{P}^{d-1}\to X$ is ... | https://arxiv.org/abs/2505.19351v1 |
is minimally generated by $n - \binom{d}{2}$ linear forms and $\frac{1}{12}(d+1)d^2(d-1)$ quadrics. The variety $X_{d,n}$ is isomorphic to the quadratic Veronese embedding of $\mathbb{P}^{d-1}$ into $\mathbb{P}^{N-1}$. Proof. The $(N+1)\times(N+1)$ minors of $[L\ p]$ give linear forms in $p$ that vanish on the variety $X_{d,n}$. Since $L$ has rank $N$, precisely $n-N$ of these linear forms are linear... | https://arxiv.org/abs/2505.19351v1 |
3.1. Not much is known about the ideals of such projections in general. The following result collects what we know in small dimensions. Proposition 3.4. Table 1 gives the degrees of the generators of $I_{d,n}$ for $d \le 6$ and $n < N$. [Table 1: pairs $(d,n)$ with degree data: (3,4) 41; (3,5) 37; (4,5) 81; (4,6) 4151062; (4,7) 445; (4,8) 2232041; (4,9) 210; (d,n) Deg...] | https://arxiv.org/abs/2505.19351v1 |
the variety of symmetric matrices with degenerate eigenvalues. We now turn to the question of whether the generic model $X_{d,n}$ is smooth. This happens when $n$ is large relative to $d$. For instance, if $d = 3$ then the model is singular for $n = 4$, by Example 1.1, but it is smooth for $n \ge 5$. This threshold is explained by the following... | https://arxiv.org/abs/2505.19351v1 |
in algebraic statistics [10, 12]. It captures the relationship between data and critical points of the log-likelihood function $\Lambda_A(s,x) = \sum_{i=1}^n s_i\log(p_i(x)) = \sum_{i=1}^n s_i\log(\ell_i^2(x)) - \log(q(x))\sum_{j=1}^n s_j$. (8) As before, $q = \ell_1^2+\cdots+\ell_n^2$ is the partition function, and the parameters $s_1,\dots,s_n$ represent the data. The partial ... | https://arxiv.org/abs/2505.19351v1 |
the arrangement $A'$ is strict normal crossing (SNC) in the sense of [12]. This means that all intersections of hypersurfaces in $A'$ are smooth of the expected dimension. Indeed, by Lemma 2.2, no intersection of the hyperplanes is tangent to $V(q)$. It remains to show that the hyperplanes meet $V(q)$ in the expected dimension. ... | https://arxiv.org/abs/2505.19351v1 |
ideal $I_A$ in $\mathbb{R}[s_1,s_2,s_3,s_4,x_1,x_2,x_3]$ is generated by the $3\times 3$ minors of $M = \begin{pmatrix} s_1 & s_2 & s_3 & s_4 \\ \ell_1^2 & \ell_2^2 & \ell_3^2 & \ell_4^2 \\ \ell_1 & \ell_2 & \ell_3 & -\ell_4 \end{pmatrix}$. The likelihood correspondence $\mathcal{L}_A$ is a threefold in $\mathbb{P}^3\times\mathbb{P}^2$. The map onto $\mathbb{P}^3$ is 7-to-1. We conclude with a remark on the non-generic case. If $A'$ is not SNC then the maximal minors of the matrices (9) and (1... | https://arxiv.org/abs/2505.19351v1 |
We also introduce the diagonal matrix $Y^{(i)} = \mathrm{diag}(y_1,\dots,y_{i-1},y_{i+1},\dots,y_n)$. Since $s = e_i$, the maximal minors of the matrix in (12) are precisely the maximal minors of $B^{(i)}Y^{(i)}$. After relabeling we may assume $i = n$. Each maximal minor of $B^{(n)}Y^{(n)}$ factors as $\det(B^{(n)}_C)\,y_{c_1}y_{c_2}\cdots y_{c_{n-d+1}}$, where $B^{(n)}_C$ is the submatrix... | https://arxiv.org/abs/2505.19351v1 |
Corollary 5.3. Fix a generic model $(A,B)$ and $i \in [n]$. Given data $s \in \mathbb{R}^n$, where $w = \mathrm{val}(s)$ satisfies $w_i < w_j$ for $j \in [n]\setminus\{i\}$, the $\mu$ tropical critical points $z$ are distinct. They are $z = \sum_{j\in J}(w_j - w_i)e_j$, (14) where $J$ runs over the $\mu$ subsets of cardinality at most $d-1$ in $[n]\setminus\{i\}$. Before we come to the proof, we go over a detailed example t... | https://arxiv.org/abs/2505.19351v1 |
so the sum vanishes. After tropicalization, the vanishing of the sum becomes the condition that the minimum $\min_{k\ne\ell\in C}(w_k - z_k + z_\ell)$ is attained twice. Since $z_k = 0$ for $k \in C\setminus\{j\}$, we have $w_k - z_k + z_\ell = w_k$ if $k,\ell \ne j$; $= w_j - z_j$ if $k = j$; $= w_k + z_j$ if $\ell = j$. Since $z_j > 0$, the minimum is never attained in the last case. By assumption, $w_k > w_i$.... | https://arxiv.org/abs/2505.19351v1 |
$(n-d+1)\times n$ matrix $\begin{pmatrix} y_1^2\ y_2^2\cdots y_n^2 \\ B\cdot Y \end{pmatrix} = \begin{pmatrix} y_1\ y_2\cdots y_n \\ B \end{pmatrix}\cdot Y = \begin{pmatrix} 1\ 1\cdots 1 \\ B\cdot Y^{-1} \end{pmatrix}\cdot Y^2$. (16) Here we abbreviate $Y = \mathrm{diag}(y_1,\dots,y_n)$. The $\binom{n}{d-1}$ maximal minors of (16) factor as follows: $y_{j_0}y_{j_1}\cdots y_{j_{n-d}}\cdot\det\begin{pmatrix} y_{j_0} & y_{j_1} & \cdots & y_{j_{n-d}} \\ b_{j_0} & b_{j_1} & \cdots & b_{j_{n-d}} \end{pmatrix}$ for $1 \le j_0 < j_1 < \cdots < j_{n-d} \le n$. (17) These determinants define $\binom{n}{d-1}$ hyperplanes in $\mathbb{P}^{d-1} = \mathbb{P}(\ker$... | https://arxiv.org/abs/2505.19351v1 |
all linear models. Their log-Voronoi cells were studied in detail by Alexandr in [2]. However, in general, the log-Voronoi cells are non-linear convex bodies that are strictly contained in their log-normal polytopes. For an illustration see [3, Figure 2]. In what follows, we initiate the study of log-Voronoi cells for ... | https://arxiv.org/abs/2505.19351v1 |
at $\tau_{ij}\sigma(y)$. Therefore $s \in S\cap S'$. Since $S$ is convex, $S\cap V(s_i - s_j) \subseteq S\cap S' \subseteq \partial S$ is a connected, linear piece of the boundary. If the proposition holds for sufficiently many triples $i, j, \sigma$, then $\partial S$ is piecewise linear. Example 6.5 ($d = 2$, $n = 4$). The model in Example 5.2 has $A = \{x_1,\ x_1+x_2,\ x_1+2x_2,\ x_2\}$. Fix $y = (3,2,1,-1)$. The log-normal p... | https://arxiv.org/abs/2505.19351v1 |
et al.), Lecture Notes in Mathematics, volume 2108, Springer, pages 63–117, 2014. [11] T. Kahle, L. Kühne, L. Mühlherr, B. Sturmfels and M. Wiesmann: Arrangements and likelihood, arXiv:2411.09508. [12] T. Kahle, H. Schenck, B. Sturmfels and M. Wiesmann: The likelihood correspondence, arXiv:2503.02536. [13] J.M.... | https://arxiv.org/abs/2505.19351v1 |
arXiv:2505.19359v1 [math.ST] 25 May 2025. Nonparametric Estimation of Sliced Inverse Regression by the k-Nearest Neighbors Kernel Method. Luran Bengono Mintogo, Emmanuel De Dieu Nkou, and Guy Martial Nkiet. Abstract: We investigate nonparametric estimation of sliced inverse regression (SIR) via the $k$-nearest neighbors appr... | https://arxiv.org/abs/2505.19359v1 |
Classification. Primary 62G05; Secondary 62J02. Key words and phrases: dimension reduction, $k$-nearest neighbors estimation, nonparametric regression, sliced inverse regression, asymptotic normality. ... methods, there is the sliced average variance estimation (SAVE) method... | https://arxiv.org/abs/2505.19359v1 |
the covariance matrix $\Lambda$ of the conditional expectation of $X$ given $Y$, denoted, as usual, by $E(X\mid Y)$; that is, $\Lambda = E\big(R(Y)R(Y)^\top\big)$, where $R(Y) = E(X\mid Y)$. Since $R(Y) = (R_1(Y),\dots,R_d(Y))^\top$, where $R_\ell(Y) = E(X_\ell\mid Y)$ for $\ell \in \{1,\dots,d\}$, it is clear that $\Lambda = (\lambda_{\ell j})_{1\le\ell,j\le d}$ with $\lambda_{\ell j} = E\big(R_\ell(Y)R_j(Y)\big)$. Estimation of the EDR space is, theref... | https://arxiv.org/abs/2505.19359v1 |
bandwidth estimators. In particular, they do not require a prior choice of the bandwidth, as is the case for fixed-bandwidth estimators. It is true that a choice of the number of neighbors must be made, but this is done over a finite set of integers, which makes the search easier. 3. Asymptotic properties. I... | https://arxiv.org/abs/2505.19359v1 |
ℓ(Y) < ∞, and if $\sqrt{n}\,E\big(\lvert R_{m,\ell}(Y)R_{s,j}(Y)\rvert\,\mathbf{1}\{f(Y)\le a_n\}\big) = o(1)$ for $(m,s)\in\{1,2\}^2$ and $a_n\sim b_n$, then $\sqrt{n}\,E\big(R_\ell(Y)R_j(Y)\,\mathbf{1}\{f(Y)\le b_n\}\big) = o(1)$ because $\sqrt{n}\,\big\lvert E\big(R_\ell(Y)R_j(Y)\,\mathbf{1}\{f(Y)\le b_n\}\big)\big\rvert \le \sqrt{n}\,E\big(\lvert R_\ell(Y)R_j(Y)\rvert\,\mathbf{1}\{f(Y)\le b_n\}\big) \le \sqrt{n}\,E\big(\lvert R_{1,\ell}(Y)R_{1,j}(Y)\rvert\,\mathbf{1}\{f(Y)\le b_n\}\big) + \sqrt{n}\,E\big(\lvert R_{1,\ell}(Y)R_{2,j}(Y)\rvert\,\mathbf{1}\{f(Y)\le b_n\}\big) + \sqrt{n}\,E\big(\lvert R_{2,\ell}(Y)R_{1,j}(Y)\rvert\,\mathbf{1}\{f(Y)\le b_n\}\big) + \sqrt{n}\,E\big(\lvert R_{2,\ell}(Y)R_{2,j}(Y)\rvert\,\mathbf{1}\{f(Y)\le b_...$ | https://arxiv.org/abs/2505.19359v1 |
were used with $b_n = n^{-0.09}$ and different kernels: Gaussian, Epanechnikov, biweight, triweight and triangular. For our method, we took $k_n$ as the integer part of $n^{0.85}$. The obtained results are reported in Table 1 and Table 2. It can be seen that both methods give equivalent results. Indee... | https://arxiv.org/abs/2505.19359v1 |
Table 2. Average and standard deviation of $D$ over 500 replicates, with sample size $n = 200, 400$. For each of Models 1–3 the table reports a $k$-NN column and a Kernel column. Gaussian: 0.266 vs 0.220, 1.146 vs 1.174, 1.228 vs 1.224, with standard deviations (0.092) vs (0.074), (0.186) vs (0.138), (0.160) vs (0.132). Epanechnikov: 0.230 vs 0.226, 1.228 vs 1.170, 1.218 vs 1.226, (0.08... | https://arxiv.org/abs/2505.19359v1 |
$\frac{g^{(m)}_{1,j}(y)}{m!}\big(\frac{k_n\sqrt{\beta_n}}{nf(y)}\big)^m \int t^m K(t)\,dt + \frac{1}{6}\big(\frac{k_n\sqrt{\beta_n}}{nf(y)}\big)^3 \int t^3 g^{(3)}_{1,j}\big(y + \theta\frac{k_n\sqrt{\beta_n}}{nf(y)}t\big)K(t)\,dt$, and since $K$ is of order 3 (see Assumption 3.8-(iv)), it follows that $\int\big(g_{1,j}\big(y + \frac{k_n\sqrt{\beta_n}}{nf(y)}t\big) - g_{1,j}(y)\big)K(t)\,dt = \frac{1}{6}\big(\frac{k_n\sqrt{\beta_n}}{nf(y)}\big)^3 \int t^3 g^{(3)}_{1,j}\big(y + \theta\frac{k_n\sqrt{\beta_n}}{nf(y)}t\big)K(t)\,dt = \frac{1}{6}\big(\frac{k_n\sqrt{\beta_n}}{nf(y)}\big)^3 \int t^3\big(g^{(3)}_{1,j}\big(y + \theta\frac{k_n\sqrt{\beta_n}}{nf(y)}t\big) - g^{(3)}_{1,j}(y)\big)K($... | https://arxiv.org/abs/2505.19359v1 |
$\}^2$: (i) $\frac{1}{\sqrt{n}}\sum_{i=1}^n\Big\{\frac{g_{1,\ell}(Y_i)}{f^2_{b_n}(Y_i)}\widehat{g}^-_{1,j,n}(Y_i) + \frac{g_{1,j}(Y_i)}{f^2_{b_n}(Y_i)}\widehat{g}^-_{1,\ell,n}(Y_i) - E\Big[\frac{g_{1,\ell}(Y)}{f^2_{b_n}(Y)}\widehat{g}^-_{1,j,n}(Y) + \frac{g_{1,j}(Y)}{f^2_{b_n}(Y)}\widehat{g}^-_{1,\ell,n}(Y)\Big]\Big\} = \frac{1}{\sqrt{n}}\sum_{i=1}^n\Big\{\frac{g_{1,\ell}(Y_i)g_{1,j}(Y_i)}{f^2_{b_n}(Y_i)} + \frac{g_{1,\ell}(Y_i)f(Y_i)}{2f^2_{b_n}(Y_i)}X_{ij}\,\mathbf{1}\{X_{ij}\ge 0\} + \frac{g_{1,j}(Y_i)f(Y_i)}{2f^2_{b_n}(Y_i)}X_{i\ell}\,\mathbf{1}\{X_{i\ell}\ge 0\} - E\Big[\frac{g_{1,\ell}(Y)g_{1,j}(Y)}{f^2_{b_n}(Y)} + \frac{g_{1,\ell}(Y)f(Y)}{2f^2_{b_n}(Y)}X_j\,\mathbf{1}\{X_j\ge 0\} + \frac{g_{1,}$... | https://arxiv.org/abs/2505.19359v1 |
dominated convergence theorem we get $\lim_{n\to\infty}\varphi_{\ell,n}(Y) = 0$. Moreover, using (5.17), we have $E\big(X_j^2\,\mathbf{1}\{X_j\ge 0\}\mid Y\big)\varphi^2_{\ell,n}(Y) \le E\big(X_j^2\,\mathbf{1}\{X_j\ge 0\}\mid Y\big)\Big(\int \frac{c}{c_0}\lvert t\rvert\,\lvert K(t)\rvert + 2\lvert R_{1,\ell}(Y)\rvert\,\lvert K(t)\rvert\,dt\Big)^2 \le \frac{2c^2}{c_0^2}\Big(\int\lvert t\rvert\,\lvert K(t)\rvert\,dt\Big)^2 E\big(X_j^2\mid Y\big) + 4\Big(\int\lvert K(t)\rvert\,dt\Big)^2 E\big(X_j^2\mid Y\big)R^2_{1,\ell}(Y)$. Since $E\Big(\frac{2c^2}{c_0^2}\big(\int\lvert t\rvert\,\lvert K(t)\rvert\,dt\big)^2 E\big(X_j^2\mid Y\big) + 4\big(\int\lvert K(t)\rvert\,dt\big)^2 E\big(X$... | https://arxiv.org/abs/2505.19359v1 |
$\widehat{g}_{r,\ell,n}(Y)/f^2_{b_n}(Y)\Big]\Big\} = \frac{1}{\sqrt{n}}\sum_{i=1}^n\Big\{\frac{g_{m,\ell}(Y_i)g_{r,j}(Y_i)}{f^2_{b_n}(Y_i)} + \frac{g_{m,\ell}(Y_i)f(Y_i)}{2f^2_{b_n}(Y_i)}X_{ij}\,\mathbf{1}\{X_{ij}\ge 0\} + \frac{g_{m,j}(Y_i)f(Y_i)}{2f^2_{b_n}(Y_i)}X_{i\ell}\,\mathbf{1}\{X_{i\ell}\ge 0\} - E\Big[\frac{g_{m,\ell}(Y)g_{r,j}(Y)}{f^2_{b_n}(Y)} + \frac{g_{m,\ell}(Y)f(Y)}{2f^2_{b_n}(Y)}X_j\,\mathbf{1}\{X_j\ge 0\} + \frac{g_{m,j}(Y)f(Y)}{2f^2_{b_n}(Y)}X_\ell\,\mathbf{1}\{X_\ell\ge 0\}\Big]\Big\} + o_p(1)$. Proof. We only give the proof for $m = r = 1$ since those for the other values are ob... | https://arxiv.org/abs/2505.19359v1 |
$\widehat{g}_{2,\ell,n}(Y_i)/f^2_{b_n}(Y_i) - E\Big[\frac{g_{2,\ell}(Y)}{f^2_{b_n}(Y)}\widehat{g}_{2,j,n}(Y) + \frac{g_{2,j}(Y)}{f^2_{b_n}(Y)}\widehat{g}_{2,\ell,n}(Y)\Big]\Big\}$. Then, Lemma 5.5 yields the result. (ii) Putting $Z_{\ell,j,n} = \frac{2}{\sqrt{n}}\sum_{i=1}^n\Big(R_{b_n,\ell}(Y_i)R_{b_n,j}(Y_i)\frac{f(Y_i)}{f_{b_n}(Y_i)} - E\Big(R_{b_n,\ell}(Y)R_{b_n,j}(Y)\frac{f(Y)}{f_{b_n}(Y)}\Big)\Big)$, we have: $E\Big(\frac{1}{\sqrt{n}}\sum_{i=1}^n\big\{I^{(3)}_{\ell j}(Y_i) - E\big[I^{(3)}_{\ell j}(Y)\big]\big\} - Z_{\ell,j,n}\Big)^2 = E\Big(\frac{2}{\sqrt{n}}\sum_{i=1}^n\big(R_{b_n,\ell}(Y_i)R_{b_n,j}(Y_i)\widehat{f}_n(Y_i)$... | https://arxiv.org/abs/2505.19359v1 |
$K(t)\,dt\,dy$, where $0 < \theta < 1$. Since $K$ is of order 3, it follows that $\beta_n\sqrt{n}\int\int_{\{f(y)\ge b_n + C\tau_n\}} R_{m,\ell}(y)R_{s,j}(y)\big(f\big(y + \frac{k_n\sqrt{\beta_n}}{nf(y)}t\big) - f(y)\big)K(t)\,dy\,dt = \beta_n\sqrt{n}\int_{\{f(y)\ge b_n + C\tau_n\}} R_{m,\ell}(y)R_{s,j}(y)\frac{k_n^3(\sqrt{\beta_n})^3}{6n^3f^3(y)}\int t^3\big(f^{(3)}\big(y + \theta\frac{k_n\sqrt{\beta_n}}{nf(y)}t\big) - f^{(3)}(y)\big)K(t)\,dt\,dy$. Thus $\beta_n\sqrt{n}\int\int_{\{f(y)\ge b_n + C\tau_n\}} R_{m,\ell}(y)R_{s,j}(y)\big(f\big(y + \frac{k_n\sqrt{\beta_n}}{nf(y)}t\big) - f(y)\big)K(t)\,dy\,dt \le \beta$... | https://arxiv.org/abs/2505.19359v1 |
${}_{n,\ell,j} - U^{(4)}_{n,\ell,j}$, where $I^{(1)}_{\ell j}(y) = \frac{g_\ell(y)g_j(y)}{f^2_{b_n}(y)} = R_{b_n,\ell}(y)R_{b_n,j}(y)$, $I^{(2)}_{\ell j}$ and $I^{(3)}_{\ell j}$ are defined in (5.22) and (5.23), and the $U^{(m)}_{n,\ell,j}$'s are given in (5.1), (5.2), (5.3) and (5.4). Then, from Lemma 5.1 we get $\sqrt{n}\,\widehat{\lambda}^{(n)}_{\ell j} = \frac{1}{\sqrt{n}}\sum_{i=1}^n\big\{I^{(1)}_{\ell j}(Y_i) + I^{(2)}_{\ell j}(Y_i) - I^{(3)}_{\ell j}(Y_i)\big\} + o_p(1)$, (5.35) and, putting $\nu_{\ell j} = E\big(I^{(1)}_{\ell}$... | https://arxiv.org/abs/2505.19359v1 |
and R.J. Carroll, An asymptotic theory for sliced inverse regression, Ann. Statist. 20 (1992), 1040–1061. [10] K.C. Li, Sliced inverse regression for dimension reduction, J. Amer. Statist. Assoc. 86 (1991), 316–342. [11] K.C. Li, Y. Aragon, K. Shedden, and C. Thomas-Agnan, Dimension reduction for multivariate response dat... | https://arxiv.org/abs/2505.19359v1 |
Foundations of Top-$k$ Decoding for Language Models. Georgy Noarov*, Soham Mallick*, Tao Wang*, Sunay Joshi, Yan Sun, Yangxinyu Xie, Mengxin Yu, Edgar Dobriban. University of Pennsylvania, May 27, 2025. Abstract: Top-$k$ decoding is a widely used method for sampling from LLMs: at each token, only the largest $k$ next-token probabilities are... | https://arxiv.org/abs/2505.19371v1 |
5.2 Results 10. 6 Related work 11. 7 Discussion 12. *Equal contribution. Correspondence to gnoarov@seas.upenn.edu, kcillam@wharton.upenn.edu, tawan@upenn.edu, dobriban@wharton.upenn.edu. ... | https://arxiv.org/abs/2505.19371v1 |
G Example: $\alpha$-Bregman decoding 32. G.1 Proof of Lemma 4.3 32. G.2 Proof of Proposition 4.2 33. G.3 Illustrating prima... | https://arxiv.org/abs/2505.19371v1 |
that modify each next-token-probability distribution to induce sparsity, i.e., to keep only a small number of tokens with nonzero probability. This includes the widely used top-$k$ [21] and top-$p$ [32] sampling methods, among others. These methods are motivated by the intuition that the noisy LLM probabilities need to... | https://arxiv.org/abs/2505.19371v1 |
of independent interest. As an example, we discuss $\alpha$-Bregman decoding strategies, generated by Tsallis $\alpha$-entropies $x \mapsto x^\alpha/[\alpha(\alpha-1)]$, for which we show that primal renormalization can be solved exactly in several cases of interest and converges to water-filling as $\alpha\to\infty$ (Section 4). Finally, we illustrate some of the decod... | https://arxiv.org/abs/2505.19371v1 |
and a collection of renormalization maps $T_k: \Delta_{\mathrm{sub},k}\to\Delta_k$, $k = 1,\dots,V$. We define an adaptive generalized top-$k$ decoding strategy $\mathrm{Dec}_T: \Delta_V\to\Delta_V$ via $p \mapsto T_{\hat{k}(p)}(p[1:\hat{k}(p)])$. Below, we will design specific renormalizers $T$ and ways to choose $k$. 2.2 Regularized sparse Bregman decoding. Decoding via sparse divergence minimization... | https://arxiv.org/abs/2505.19371v1 |
show that this optimization problem is well-defined. When there are multiple minimizers, we assume that one is selected in an arbitrary measurable way. In both the primal and the dual Bregman case, when $\lambda = 0$, the corresponding sparse decoding Problem 2 is solved at $\hat{p} = p$ (and uniquely so if $\phi$ is strictly convex), with th... | https://arxiv.org/abs/2505.19371v1 |
the second argument, which can interfere with the uniqueness of dual projections. To pave the road towards dual Bregman projections, we will therefore rely on additional structure in $\phi$ and $d_\phi$, expressed as the following dual validity condition. (Footnote 3: It is strictly increasing for $\nu \in [-f(x_i), 1-f(x_i)]$, but the required $\nu$ may lie ... | https://arxiv.org/abs/2505.19371v1 |
The dual case, owing among other things to the implicit form of the dual renormalization formulas (5), is correspondingly more complex to handle. Unlike in Theorem 3.2, our next result requires further conditions, which we state as a menu of two options. The relationship between the extra assumptions is intricate; Assumption (A2) is... | https://arxiv.org/abs/2505.19371v1 |
as the elasticity of the function $\phi'$. Dual $k$-convexity. The proof is in Appendix E. The above dualization strategy does not directly apply. Instead, we lower bound $\Delta^2\mathrm{cost}^*(k)$ by regrouping the loss contributions of the indices $i \in [k+1]$ and, via intricate term rearrangement and bounding, reduce to proving the local con... | https://arxiv.org/abs/2505.19371v1 |
the theoretical foundations of top-$k$ decoding, our aim in this section is simply to illustrate that the performance of our novel decoding schemes can be competitive with standard top-$k$ decoding. In particular, we do not aim to compare or compete with other popular and established decoding methods, which is beyond the s... | https://arxiv.org/abs/2505.19371v1 |
seen in Table 1, across all temperature settings, primal decoding with adaptive $k^*$ achieves accuracy comparable to top-$k$. At higher temperatures (such as 1.5), the performance of top-$k$ decoding degrades more rapidly than that of primal decoding. [Figure: perplexity difference vs. $k$, comparing Top-k, Primal-1.5 and Primal-2.0.] ... | https://arxiv.org/abs/2505.19371v1 |
motivated by intuition that the "unreliable tail" of low-probability tokens is mis-estimated [32]. In particular, [32] propose top-$p$ sampling, and [38] propose min-$p$ sampling. Other sampling methods were proposed in [4, 22, 31, 36]. [12] propose the decoding game, a two-player game between a generator/LLM and an advers... | https://arxiv.org/abs/2505.19371v1 |
2016. [7] Lucien Birgé and Pascal Massart. Gaussian model selection. Journal of the European Mathematical Society, 3(3):203–268, 2001. [8] Mathieu Blondel, André F.T. Martins, and Vlad Niculae. Learning with Fenchel-Young losses. Journal of Machine Learning Research, 21(35):1–69, 2020. URL http://jmlr.org/papers/v2... | https://arxiv.org/abs/2505.19371v1 |
al. The Llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024. [26] Sebastian Gruber and Florian Buettner. Uncertainty estimates of predictions via a general bias-variance decomposition. In International Conference on Artificial Intelligence and Statistics, pages 11331–11354. PMLR, 2023. [27] Peter D. Grünwald... | https://arxiv.org/abs/2505.19371v1 |
Linguistics ACL 2024, pages 9427–9440, 2024. [45] Daniel Reem, Simeon Reich, and Alvaro De Pierro. Re-examination of Bregman functions and new properties of their divergences. Optimization, 68(1):279–348, 2019. [46] Michael Eli Sander, Joan Puigcerver, Josip Djolonga, Gabriel Peyré, and Mathieu Blondel. Fast, differe... | https://arxiv.org/abs/2505.19371v1 |
Proof of the dual greedy property in Theorem 3.3; Proof of discrete convexity for primal Bregman projection; Proof of discrete convexity for dual Bregman projection; Algorithmic details; Example: α-Bregman decoding; Supplementary experimental details. A Existence and uniqueness of dual Bregman decoding Theo... | https://arxiv.org/abs/2505.19371v1
First, consider $x > 0$. Then, we have the following: 1. Since the map $y \mapsto d_\phi(x, y)$ is strictly convex for $y \in [x, 1]$, it follows that $\frac{\partial}{\partial y} d_\phi(x, y) = \Psi(x, y, 0)$ is strictly increasing for $y \in [x, 1]$, and so is $\Psi(x, y, \nu)$. 2. We have $\Psi(x, x, \nu) = -\nu \leq 0$. Further, $\Psi(x, 1, \nu) = \phi''(1)(1 - x) - \nu \geq 0$ whenever $\nu \leq \phi''(1)(1 - x)$. Hence, the map $y \mapsto \Psi($... | https://arxiv.org/abs/2505.19371v1
B.1 Decomposing the Bregman cost function on subsets. Lemma B.1. For any $Q = \{i_1, i_2, \ldots, i_k\} \subseteq [V]$ of size $k$, the loss function as defined in (8) simplifies to: $L(Q) = \sum_{j=1}^{k} \big[\phi([T_Q(p)]_j) - \phi'(p_{i_j})\,[T_Q(p)]_j\big] + S_{[V]} - |Q|\,\phi(0)$. (9) Proof. Observe that: $L(Q) = D_\phi\big((\hat p_Q, 0_{V-k}), (p_Q, p_{Q^c})\big) = \sum_{j=1}^{k} d([T_Q(p)]_j, p_{i_j}) + S_{Q^c} = \sum_{j=1}^{k} \big[\phi([T_Q$... | https://arxiv.org/abs/2505.19371v1
defined for all $x \in f([0,1])$. Then, we have for every $x \in f([0,1])$ the identity: $\phi(f^{-1}(x)) = x f^{-1}(x) - \phi^*(x)$. Moreover, $(\phi^*)' = f^{-1}$, and $\phi^*$ is strictly increasing. Proof. Since the map $p \mapsto R(p) := px - \phi(p)$ is continuous, it achieves a maximum on $[0,1]$. From the first-order condition of the defining equation for $\phi^*$, if the maximum is ach... | https://arxiv.org/abs/2505.19371v1
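The identity and the derivative claim can be sanity-checked numerically for a concrete generator; here we take $\phi(p) = p \log p$ (so $f = \phi' = 1 + \log p$ and $f^{-1}(x) = e^{x-1}$), a choice made purely for illustration:

```python
import math

phi = lambda p: p * math.log(p)         # generator phi on (0, 1]
f = lambda p: 1.0 + math.log(p)         # f = phi'
f_inv = lambda x: math.exp(x - 1.0)     # inverse of f
phi_star = lambda x: math.exp(x - 1.0)  # Legendre dual: sup_p (px - phi(p))

x = f(0.4)  # a point in f((0, 1])

# Identity from the lemma: phi(f^{-1}(x)) = x f^{-1}(x) - phi*(x)
lhs = phi(f_inv(x))
rhs = x * f_inv(x) - phi_star(x)

# (phi*)' = f^{-1}, checked with a central finite difference
h = 1e-6
deriv = (phi_star(x + h) - phi_star(x - h)) / (2.0 * h)
```

For this generator both sides of the identity reduce to $e^{x-1}(x-1)$, and the finite-difference derivative of $\phi^*$ recovers $f^{-1}(x) = 0.4$.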
$: [0,1] \times (0, \infty) \to (0, 1]$, such that $[T(p)]_i = \xi(p_i, \nu)$ for all $i$, and for optimal $\nu$. It follows from the proof of Lemma A.1 that the solution $\xi$ is well-defined. Define two auxiliary functions $\psi, h$ that will be used in the computation of the Bregman costs below, such that for all $(x, y, \nu) \in D$: $\psi(x, y) := \phi(y) - \phi'(y)(y - x)$, and $h(x, \nu)$ ... | https://arxiv.org/abs/2505.19371v1
so $N \geq 0$. Case 2: $A(\xi) < 0$. Since $\xi \geq x$, we have $N(x, \nu) \geq \phi'(\xi)\phi''(\xi) + \xi A(\xi) = \phi'(\xi)^2 u'(\xi)$, where $u(t) := t\,\phi''(t)/\phi'(t)$. Indeed, $u'(t)\,\phi'(t)^2 = \phi'(t)\big(\phi''(t) + t\,\phi'''(t)\big) - t\,\phi''(t)^2 = \phi'(t)\phi''(t) + t\big(\phi'(t)\phi'''(t) - \phi''(t)^2\big)$. By Assumption (A2), $u$ is non-decreasing, so $u'(\xi) \geq 0$; hence $N(x, \nu) \geq 0$ in this case as well. Because $v(x, \xi) >$... | https://arxiv.org/abs/2505.19371v1
strictly concave in $\nu$ for each $k$, which follows as the Legendre dual mapping $\phi^*$ is strictly convex since so is $\phi$. Then, observe that $\frac{\partial}{\partial \nu} W(k, \nu) = 1 - \sum_{j=1}^{k} (\phi^*)'(f(p_j) + \nu) = 1 - \sum_{j=1}^{k} f^{-1}(f(p_j) + \nu)$. (17) Thus, $\frac{\partial}{\partial \nu} W(k, \nu)\big|_{\nu = \nu_k} = 1 - \sum_{j=1}^{k} f^{-1}(f(p_j) + \nu_k) = 1 - \sum_{j=1}^{k} [T_k(p)]_j = 0$. As $W(k, \cdot)$ is strictly concave in $\nu$, $W(k, \cdot$... | https://arxiv.org/abs/2505.19371v1
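For the Shannon generator $\phi(p) = p \log p$ (again an illustrative choice, not the only case covered by the paper), $f^{-1}(f(p_j) + \nu) = p_j e^{\nu}$, so the stationarity condition $\sum_j f^{-1}(f(p_j) + \nu_k) = 1$ from (17) can be solved in closed form and verified directly:

```python
import math

p = [0.5, 0.3, 0.1]  # top-k mass of some longer distribution (hypothetical values)

# Shannon case: f(p) = 1 + log p, f^{-1}(x) = exp(x - 1), hence
# f^{-1}(f(p_j) + nu) = p_j * exp(nu); setting the sum to 1 gives:
nu_k = -math.log(sum(p))

# Renormalised top-k weights [T_k(p)]_j and the stationarity check:
T = [pj * math.exp(nu_k) for pj in p]
grad = 1.0 - sum(T)  # d/dnu W(k, nu) evaluated at nu = nu_k
```

In this special case the Bregman renormalisation reduces to the ordinary rescaling $p_j / \sum_j p_j$, which is why the derivative of $W(k, \cdot)$ vanishes exactly at $\nu_k = -\log \sum_j p_j$.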
$= \frac{\nu_k}{2}\big([T^*_k(p)]_k - [T^*_{k+1}(p)]_k - [T^*_{k+1}(p)]_{k+1}\big)$. Range 2: $i \in \{k, k+1\}$. For Range 2, we first note that the following three types of terms cancel out: $\phi(0)$, $\phi(p_k)$, $\phi(p_{k+1})$. Furthermore, terms involving $\phi'(0)$ vanish by assumption. The remaining terms in the Range 2 sum can then be written as: Range 2 Sum $\geq -2\big(-\phi([T^*_k(p)]$... | https://arxiv.org/abs/2505.19371v1
▷ Upper bound on feasible ν
Initialize νlow ← 0, νhigh ← M
while νhigh − νlow > ε do
  ν ← (νlow + νhigh)/2
  for i = 1 to k do
    xi ← x[i]
    y[i] ← SOLVEROOT(xi, ν, f″, ε)
  end for
  G ← Σᵢ₌₁ᵏ y[i]
  if G < 1 then
    νlow ← ν
  else
    νhigh ← ν
  end if
end while
return y
end function
function SOLVEROOT(xi, ν, f′... | https://arxiv.org/abs/2505.19371v1
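A minimal Python sketch of this bisection scheme, under the properties established earlier: $\Psi(x, y, \nu) = \phi''(y)(y - x) - \nu$ is increasing in $y$ on $[x, 1]$, and raising $\nu$ raises each coordinate's root, so the total mass $G$ is monotone in $\nu$. Function names are ours; the usage example picks the Shannon generator $\phi''(y) = 1/y$ purely for illustration:

```python
def solve_root(x, nu, phi_pp, eps=1e-12):
    """Inner solver: find y in [x, 1] with phi''(y) * (y - x) = nu
    by bisection (the left-hand side is increasing in y)."""
    lo, hi = x, 1.0
    while hi - lo > eps:
        y = 0.5 * (lo + hi)
        if phi_pp(y) * (y - x) < nu:
            lo = y
        else:
            hi = y
    return 0.5 * (lo + hi)

def bisect_multiplier(x, phi_pp, eps=1e-12):
    """Outer solver: bisect on the multiplier nu until the mapped
    coordinates y[i] sum to one. M = max_i phi''(1)(1 - x_i) bounds
    the feasible nu (at nu = M some coordinate already maps to 1)."""
    M = max(phi_pp(1.0) * (1.0 - xi) for xi in x)
    nu_low, nu_high = 0.0, M
    while nu_high - nu_low > eps:
        nu = 0.5 * (nu_low + nu_high)
        G = sum(solve_root(xi, nu, phi_pp, eps) for xi in x)
        if G < 1.0:
            nu_low = nu   # total mass too small: raise the multiplier
        else:
            nu_high = nu
    nu = 0.5 * (nu_low + nu_high)
    return [solve_root(xi, nu, phi_pp, eps) for xi in x]

# Shannon case phi''(y) = 1/y: the root is y = x / (1 - nu), so for
# x = [0.5, 0.3] the exact answer is nu = 0.2, y = [0.625, 0.375].
y = bisect_multiplier([0.5, 0.3], lambda t: 1.0 / t)
```

Nested bisection is a deliberately simple design: each loop only needs monotonicity, not differentiability, and the overall cost is $O(k \log^2(1/\varepsilon))$ root evaluations per decoding step.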
second derivative in $y$ of this expression is $(\alpha-1)y^{\alpha-2} - (\alpha-2)x\,y^{\alpha-3} = y^{\alpha-3}\big(y(\alpha-1) - x(\alpha-2)\big) = y^{\alpha-3}\big(y(\alpha-1) + x(2-\alpha)\big)$. Now, if $y \geq x$, then using $\alpha - 1 \geq 0$ we have that the above expression is $\geq y^{\alpha-3}\big(x(\alpha-1) + x(2-\alpha)\big) = y^{\alpha-3}x \geq 0$, confirming the convexity in $y$. Now for the condition that $x \mapsto u(x) := x\,\phi''(x)/\phi'(x)$ is non-decreasing from Assumptio... | https://arxiv.org/abs/2505.19371v1
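The algebra above is easy to spot-check numerically for the α-Bregman generator, where $\phi''(t) = t^{\alpha-2}$; the value of $\alpha$ and the test points below are arbitrary illustrations:

```python
# d/dy of d_phi(x, y) is g(y) = phi''(y)(y - x) = y**(alpha-2) * (y - x);
# its derivative should match the closed form derived above and be >= 0
# for y >= x when alpha >= 1.
alpha, x = 1.7, 0.3
g = lambda y: y ** (alpha - 2) * (y - x)

h = 1e-6
checks = []
for y in (0.3, 0.5, 0.9):  # points with y >= x
    numeric = (g(y + h) - g(y - h)) / (2.0 * h)  # central difference
    closed = (alpha - 1) * y ** (alpha - 2) - (alpha - 2) * x * y ** (alpha - 3)
    checks.append((numeric, closed))
```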
value (0.1, left) and of the smaller value (0.001, right). G.4 Illustrating general nonconvexity of dual renormalization. Figure 5 illustrates that the dual Bregman objective can in general be non-convex for large $\alpha$. Figure 5: Nonconvexity of the Bregman dual landscape on the square $(x, y) \in [0,1]^2$. G.5 Illustrating ... | https://arxiv.org/abs/2505.19371v1